# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="9zdr4voxArb0"
# # Gaussian processes
#
# Gaussian processes (GPs) are an example of non-parametric Bayesian models. Non-parametric does not mean "no parameters"; rather, the effective number of parameters grows with the data, so the model is not tied to a fixed functional form. To specify a Gaussian process, one requires two things:
# 1. Mean vector.
# 2. Covariance matrix/kernel: the relation of data points (samples) to one another.
#
# The kernel, or covariance matrix, represents the relation between sample points: the off-diagonal elements capture the dependence of one observation on another. This makes GPs different from many other commonly used function approximators, which typically assume *iid* samples. Like other kernel methods, GPs are *universal approximators*, provided an appropriate kernel is used for the data. A commonly used kernel is the Radial Basis Function (RBF) kernel,
# $$
# k(x, x') = \exp \left( - \frac{||x - x'||^2}{2\ell^2}\right)
# $$
#
# where $\ell$ represents the length scale (smoothing length).
#
# Gaussian processes differ from Bayesian linear regression in that the Gaussian is defined over the space of functions rather than over feature weights. GPs are great at quantifying uncertainty, for example in noisy time-series data, but are not easily interpretable since they do not use explicit features. Another issue is the $O(n^3)$ computational cost of inverting the covariance matrix, which has been addressed by more recent sparse and approximate approaches.
#
#
#
#
# References:
# 1. Self-contained discussion with python code: https://github.com/fonnesbeck/Bayes_Computing_Course/blob/master/notebooks/Section5_1-Gaussian_Processes.ipynb
# 2. scikit-learn: https://scikit-learn.org/stable/modules/gaussian_process.html#gp-kernels
# 3. Application to dynamical systems: https://www.youtube.com/watch?list=PLFfvLE9TGnegjHFetV-zjPztaM_1UQk9B&v=BepKfyGbOM4&feature=emb_logo&ab_channel=WilWard
# 4. Broad overview: https://www.youtube.com/watch?v=U85XFCt3Lak&ab_channel=MLSSAfrica
# 5. pyro: https://pyro.ai/examples/gp.html
# 6. GPy: https://github.com/SheffieldML/GPy
#
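# To make the RBF kernel above concrete, here is a minimal NumPy sketch that builds the kernel matrix and draws a few function samples from the GP prior (illustrating the "distribution over functions" idea). The helper name `rbf_kernel` and the toy inputs are illustrative, not from the notebook.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    """RBF (squared-exponential) kernel matrix; length_scale is ell."""
    # Pairwise squared Euclidean distances between rows of X1 and X2
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2 * X1 @ X2.T)
    return np.exp(-sq_dists / (2 * length_scale**2))

X = np.linspace(0, 5, 6)[:, None]
K = rbf_kernel(X, X)
# K is symmetric with ones on the diagonal, since k(x, x) = exp(0) = 1.
# Sampling from N(0, K) gives smooth random functions evaluated at X:
samples = np.random.multivariate_normal(np.zeros(len(X)),
                                        K + 1e-10 * np.eye(len(X)), size=3)
```

Each row of `samples` is one draw from the GP prior; a shorter length scale yields wigglier draws.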
# + [markdown] id="DPYNn1uQ4I9j"
# ## Gaussian processes using sklearn applied to candy dataset
# + id="XpBKbFw-4LGX"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
# + [markdown] id="OR-zwV384l_z"
# ## Load data
# + id="i9SLwx-h4nl2" colab={"base_uri": "https://localhost:8080/", "height": 198} outputId="32446d8e-2ada-4c42-c462-b763b1cd4b3e"
df = pd.read_csv('candy_production.csv')
df.columns = ['date', 'sales']
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="xWvj0P1k4zAd" outputId="96fc541a-bf0e-40bc-b814-b34d3197938e"
X = np.atleast_2d(np.linspace(1, df.shape[0], df.shape[0])).T
y = df['sales'].values
X.shape, y.shape
# + id="2R76yzGz4zDM"
gp = GaussianProcessRegressor(kernel=RBF())
gp.fit(X, y)
y_pred, sigma = gp.predict(X, return_std=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="Uj2BLAQ7844B" outputId="aa340a83-b5b6-4aa9-d200-87ad19c9036b"
plt.figure()
plt.plot(X[-200:], y[-200:], 'r.', markersize=10, label='Observations')
plt.plot(X[-200:], y_pred[-200:], 'b-', label='Prediction')
plt.fill(np.concatenate([X[-200:], X[:-201:-1]]),
np.concatenate([y_pred[-200:] - 1.9600 * sigma[-200:],
(y_pred[-200:] + 1.9600 * sigma[-200:])[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
# Source notebook: candydata/gaussianprocesses.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
import diogenes.read as read
import diogenes.display as display
import diogenes.modify as modify
import diogenes.utils as utils
import diogenes.grid_search as grid_search
import numpy as np
# ### Methods
#
# 1. Data obtained from the [Citizens Police Data Project](https://cpdb.co).
# 2. This data includes only the FOIA dataset from 2011 to present (i.e. the Bond and Moore datasets have been removed).
# 3. This was accomplished by entering FOIA in the search bar.
# 4. The resulting table was saved to GitHub as a .xlsx.
# 5. The Allegations, Complaining Witnesses, and Officer Profile tabs were then saved as allegations.csv, citizens.csv, and officers.csv respectively.
#
# ### Disclaimer
#
# The following disclaimer is included with the data by the [Invisible Institute](http://invisible.institute).
#
#
# This dataset is compiled from three lists of allegations against Chicago Police Department officers,
# spanning approximately 2002 - 2008 and 2010 - 2014, produced by the City of Chicago in response
# to litigation and to FOIA requests.
#
# The City of Chicago's production of this information is accompanied by a disclaimer that
# not all information contained in the City's database may be correct.
#
# No independent verification of the City's records has taken place and this dataset does not
# purport to be an accurate reflection of either the City's database or its veracity.
# +
#Record arrays
allegations = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Allegations.csv',parse_datetimes=['IncidentDate','StartDate','EndDate'])
citizens = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Citizens.csv')
officers = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Officers.csv')
# -
# ### What data do we have?
#
# We can see the column names for the three tables below.
#
# 1. The Allegations table includes data on each allegation, including an ID for the complaint witness, the officer, and the outcome of the allegation.
# 2. The Citizens table includes additional information for each complaint witness.
# 3. The Officers table includes additional information for each officer.
#I shouldn't have to nest function calls just to get a summary of my data. This needs to be a single call.
#Most of the data isn't numeric, so we should find a way to be more helpful than this.
#Also, what is the "None" printing at the end of this?
print display.pprint_sa(display.describe_cols(allegations))
print display.pprint_sa(display.describe_cols(citizens))
print display.pprint_sa(display.describe_cols(officers))
# For this analysis, we will be removing several columns for the following reasons:
#
# 1. To anonymize our data, names of officers and investigators have been removed.
# 2. Many of the columns in Allegations are redundant as they code for other columns. We will preserve only the human readable columns.
# 3. The Beat column has no data, so it will be removed.
# 4. We will only focus on final outcomes, so the "recommended" columns have been removed from Allegations.
# 5. We will be limiting our geographic analysis to Location, so the address information has been removed.
#
# We will also translate ApptDate, which specifies the number of days between the hire date and 1900-1-1, to the number of years working.
# +
import datetime
#TODO: there is a typo in the "OfficerFirst" column in allegations.
#Should pass this on to Kalven at Invisible Institute along with questions about data.
allegations = utils.remove_cols(allegations,['OfficeFirst','OfficerLast','Investigator','AllegationCode','RecommendedFinding','RecommendedOutcome','FinalFinding','FinalOutcome','Beat','Add1','Add2','City'])
officers = utils.remove_cols(officers,['OfficerFirst','OfficerLast','Star'])
#Convert appointment date days since 1900-1-1 to years prior to today
def tenure(vector):
today = datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%d')
started = np.add(np.datetime64('1900-01-01'),map(lambda x: np.timedelta64(int(x), 'D'),vector))
tenure = np.subtract(np.datetime64(today),started)
return np.divide(tenure,np.timedelta64(1,'D')) / 365
#Impute median date for missing values
officers['ApptDate'] = modify.replace_missing_vals(officers['ApptDate'], strategy='median')
tenure_days = modify.combine_cols(officers,tenure,['ApptDate'])
officers = utils.append_cols(officers,[tenure_days],['Tenure'])
# -
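# As an aside, the tenure conversion above can be fully vectorized in NumPy without the `map` call. This is a sketch (names like `tenure_years` are illustrative, and the `today` default is an arbitrary assumption for demonstration):

```python
import numpy as np

def tenure_years(days_since_1900, today=np.datetime64('2016-01-01')):
    """Vectorized sketch of the tenure conversion.

    days_since_1900: array of day counts relative to 1900-01-01.
    """
    days = np.asarray(days_since_1900).astype('timedelta64[D]')
    started = np.datetime64('1900-01-01') + days
    # Elapsed days divided by 365 gives approximate years of tenure
    return (today - started) / np.timedelta64(1, 'D') / 365.0
```

For example, `tenure_years([0], today=np.datetime64('1901-01-01'))` gives exactly `1.0` year.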
# For ease of use, let's join our tables.
# +
master = utils.join(allegations,citizens,'left',['CRID'],['CRID'])
#Rename Race and Gender, since citizens and officers have these columns
temp_col_names = list(master.dtype.names)
gender_index = temp_col_names.index("Gender")
race_index = temp_col_names.index("Race")
temp_col_names[gender_index] = "CitizenGender"
temp_col_names[race_index] = "CitizenRace"
master.dtype.names = tuple(temp_col_names)
master = utils.join(master,officers,'left',['OfficerID'],['OfficerID'])
temp_col_names = list(master.dtype.names)
gender_index = temp_col_names.index("Gender")
race_index = temp_col_names.index("Race")
temp_col_names[gender_index] = "OfficerGender"
temp_col_names[race_index] = "OfficerRace"
master.dtype.names = tuple(temp_col_names)
# -
# There are some allegations where no officer ID was provided. For this analysis, we will discard those allegations.
#This is a pretty awkward way to remove nan, is there a better way I missed?
master = modify.choose_rows_where(master,[{'func': modify.row_val_between, 'col_name': 'OfficerID', 'vals': [-np.inf,np.inf]}])
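# In answer to the comment above: with a plain NumPy structured array, missing float values can be masked out directly with `isnan`. A small self-contained sketch (it assumes OfficerID is stored as a float column, with NaN marking missing IDs):

```python
import numpy as np

# Toy structured array standing in for the joined table
records = np.array([(1.0, 10.0), (np.nan, 20.0), (3.0, 30.0)],
                   dtype=[('OfficerID', 'f8'), ('value', 'f8')])
# Keep only rows whose OfficerID is a real number
kept = records[~np.isnan(records['OfficerID'])]
```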
# Now, let's encode our data numerically
#Unit is interpreted as numeric, but we really want to analyze it categorically
#There should be an easier way to treat a numeric column as categorical data
master = utils.append_cols(master,master['Unit'].astype('|S10'),['UnitCat'])
master = utils.remove_cols(master,['Unit'])
master_data, master_classes = modify.label_encode(master)
# For convenience, we'll build every possible categorical directive
# +
#Directives
def cat_directives(array,classes):
cat_directives = {}
for column in classes:
cat_directives[column] = {v:[{'func': modify.row_val_eq, 'col_name': column, 'vals': i}] for i,v in enumerate(classes[column])}
return cat_directives
where = cat_directives(master_data,master_classes)
# -
# Now, we can build intuitive masks as combinations of our human-readable directives
# +
#Masks
#Gender
female_officers = modify.where_all_are_true(master_data,where['OfficerGender']['F'])
male_officers = modify.where_all_are_true(master_data,where['OfficerGender']['M'])
female_citizens = modify.where_all_are_true(master_data,where['CitizenGender']['F'])
male_citizens = modify.where_all_are_true(master_data,where['CitizenGender']['M'])
#Race
white_officers = modify.where_all_are_true(master_data,where['OfficerRace']['White'])
black_officers = modify.where_all_are_true(master_data,where['OfficerRace']['Black'])
hispanic_officers = modify.where_all_are_true(master_data,where['OfficerRace']['Hispanic'])
white_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['White'])
black_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['Black'])
hispanic_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['Hispanic'])
#Cross-sections
white_M_officers_black_F_citizens = modify.where_all_are_true(master_data,where['OfficerRace']['White']+
where['OfficerGender']['M']+
where['CitizenRace']['Black']+
where['CitizenGender']['F'])
# -
# Let's generate a potentially interesting new feature from our existing data, and pull out all non-numeric data
# +
duration = modify.combine_cols(master_data,np.subtract,['EndDate','StartDate'])
durationDays = duration / np.timedelta64(1, 'D')
duration_data = utils.append_cols(master_data,[durationDays],['InvestigationDuration'])
numeric_data = utils.remove_cols(master_data,['StartDate','EndDate','IncidentDate'])
# -
# We understand what data we have, and we have some tools to easily slice and dice. Let's dive in and learn something.
#Ex 1: What percentage of allegations have a black female citizen and a white male officer?
print np.sum(white_M_officers_black_F_citizens.astype(np.float))/np.size(white_M_officers_black_F_citizens.astype(np.float))
#Ex 2: What is the breakdown of officers with complaints by race?
#This seems a little clunky to me
#Would be nice if plot_simple_histogram could handle categorical labels for me
display.plot_simple_histogram(master_data['OfficerRace'],verbose=False)
display.plt.xticks(range(len(master_classes['OfficerRace'])), master_classes['OfficerRace'])
#Ex 3: What does the distribution of complaints look like?
complaint_counter = display.Counter(numeric_data['OfficerID'])
officer_list, complaint_counts = zip(*complaint_counter.items())
display.plot_simple_histogram(complaint_counts)
# +
#Ex 4: What can we learn from the 100 officers who receive the most complaints?
#FYI: Wikipedia says 12,244 officers total, so this is roughly the top 1% of all Chicago officers.
#Obviously, all officers do not have the same quantity and quality of interactions with citizens.
#Need to account for this fact for any real analysis.
#Median imputation makes histogram look unnatural
#Top 100 Officers
top_100 = complaint_counter.most_common(100)  # use the Counter built in Ex 3
top_100_officers = map(lambda x: x[0],top_100)
#We should add this to modify.py for categorical data
def row_val_in(M,col_name,boundary):
return [x in boundary for x in M[col_name]]
top_100_profile = modify.choose_rows_where(officers,[{'func': row_val_in, 'col_name': 'OfficerID', 'vals': top_100_officers}])
#Can't check this against CPDB, their allegation counts are for the whole time period
#Not just 2011 - present.
display.plot_simple_histogram(master_data['Tenure'],verbose=False)
display.plot_simple_histogram(top_100_profile['Tenure'],verbose=False)
# +
#Ex 5: What does the distribution of outcomes look like?
#Hastily written, possibly not useful. Just curious.
#Almost everything is unknown or no action taken
def sortedFrequencies(array,classes,col_name):
if col_name not in classes:
raise ValueError('col_name must be categorical')
counts = display.Counter(array[col_name])
total = float(sum(counts.values()))
for key in counts:
counts[key] /= total
count_dict = {}
for value in counts:
count_dict[classes[col_name][value]] = counts[value]
return sorted(count_dict.items(), key=lambda x: x[1],reverse=True)
print sortedFrequencies(numeric_data,master_classes,'Outcome')
# +
#Ex 6: How has the number of complaints changed over time?
#Looks seasonal (peaking in summer) and declining over time (could the decline just be a collection issue?)
def numpy_to_month(dt64):
ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
dt = datetime.datetime.utcfromtimestamp(ts)
d = datetime.date(dt.year, dt.month, 1) #round to month
return d
months, counts = zip(*display.Counter(map(numpy_to_month,duration_data['IncidentDate'])).items())
display.plt.plot_date(months,counts)
# +
#How does it look to split complaints by location?
#Very disproportionate. Locations 17,19,3,4 have almost all complaints.
display.plot_simple_histogram(numeric_data['Location'],verbose=False)
display.plt.xticks(range(len(master_classes['Location'])), master_classes['Location'])
# #Unit?
#Still uneven, but more even than location.
display.plot_simple_histogram(numeric_data['UnitCat'],verbose=False)
display.plt.xticks(range(len(master_classes['UnitCat'])), master_classes['UnitCat'])
# +
#Are there officers getting a lot of complaints not from the high yield locations?
#What does the social network of concomitant officers look like?
# -
# Source notebook: examples/CPDB/CPDB.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [<NAME>](http://www.sebastianraschka.com)
#
# [back](https://github.com/rasbt/matplotlib-gallery) to the `matplotlib-gallery` at [https://github.com/rasbt/matplotlib-gallery](https://github.com/rasbt/matplotlib-gallery)
# %load_ext watermark
# %watermark -u -v -d -p matplotlib,numpy
# <font size="1.5em">[More info](http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/ipython_magic/watermark.ipynb) about the `%watermark` extension</font>
# %matplotlib inline
# <br>
# <br>
# # Lineplots in matplotlib
# # Sections
# - [Simple line plot](#Simple-line-plot)
#
# - [Line plot with error bars](#Line-plot-with-error-bars)
#
# - [Line plot with x-axis labels and log-scale](#Line-plot-with-x-axis-labels-and-log-scale)
#
# - [Gaussian probability density functions](#Gaussian-probability-density-functions)
#
# - [Cumulative Plots](#Cumulative-Plots)
#
# - [Cumulative Sum](#Cumulative-Sum)
#
# - [Absolute Count](#Absolute-Count)
#
# - [Colormaps](#Colormaps)
#
# - [Marker styles](#Marker-styles)
#
# - [Line styles](#Line-styles)
# <br>
# <br>
# # Simple line plot
# [[back to top](#Sections)]
# +
import matplotlib.pyplot as plt
x = [1, 2, 3]
y_1 = [50, 60, 70]
y_2 = [20, 30, 40]
plt.plot(x, y_1, marker='x')
plt.plot(x, y_2, marker='^')
plt.xlim([0, len(x)+1])
plt.ylim([0, max(y_1+y_2) + 10])
plt.xlabel('x-axis label')
plt.ylabel('y-axis label')
plt.title('Simple line plot')
plt.legend(['sample 1', 'sample 2'], loc='upper left')
plt.show()
# -
# <br>
# <br>
# # Line plot with error bars
# [[back to top](#Sections)]
# +
import matplotlib.pyplot as plt
x = [1, 2, 3]
y_1 = [50, 60, 70]
y_2 = [20, 30, 40]
y_1_err = [4.3, 4.5, 2.0]
y_2_err = [2.3, 6.9, 2.1]
x_labels = ["x1", "x2", "x3"]
plt.errorbar(x, y_1, yerr=y_1_err, fmt='-x')
plt.errorbar(x, y_2, yerr=y_2_err, fmt='-^')
plt.xticks(x, x_labels)
plt.xlim([0, len(x)+1])
plt.ylim([0, max(y_1+y_2) + 10])
plt.xlabel('x-axis label')
plt.ylabel('y-axis label')
plt.title('Line plot with error bars')
plt.legend(['sample 1', 'sample 2'], loc='upper left')
plt.show()
# -
# <br>
# <br>
# # Line plot with x-axis labels and log-scale
# [[back to top](#Sections)]
# +
import matplotlib.pyplot as plt
x = [1, 2, 3]
y_1 = [0.5,7.0,60.0]
y_2 = [0.3,6.0,30.0]
x_labels = ["x1", "x2", "x3"]
plt.plot(x, y_1, marker='x')
plt.plot(x, y_2, marker='^')
plt.xticks(x, x_labels)
plt.xlim([0,4])
plt.xlabel('x-axis label')
plt.ylabel('y-axis label')
plt.yscale('log')
plt.title('Line plot with x-axis labels and log-scale')
plt.legend(['sample 1', 'sample 2'], loc='upper left')
plt.show()
# -
# <br>
# <br>
# # Gaussian probability density functions
# [[back to top](#Sections)]
# +
import numpy as np
from matplotlib import pyplot as plt
import math
def pdf(x, mu=0, sigma=1):
"""
Calculates the normal distribution's probability density
function (PDF).
"""
term1 = 1.0 / ( math.sqrt(2*np.pi) * sigma )
term2 = np.exp( -0.5 * ( (x-mu)/sigma )**2 )
return term1 * term2
x = np.arange(0, 100, 0.05)
pdf1 = pdf(x, mu=5, sigma=2.5**0.5)
pdf2 = pdf(x, mu=10, sigma=6**0.5)
plt.plot(x, pdf1)
plt.plot(x, pdf2)
plt.title('Probability Density Functions')
plt.ylabel('p(x)')
plt.xlabel('random variable x')
plt.legend(['pdf1 ~ N(5,2.5)', 'pdf2 ~ N(10,6)'], loc='upper right')
plt.ylim([0,0.5])
plt.xlim([0,20])
plt.show()
# -
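# A quick numerical sanity check of the hand-written Gaussian PDF above: a valid density should integrate to one, and its peak should equal $1/(\sqrt{2\pi}\,\sigma)$. This sketch re-states the same formula so it is self-contained:

```python
import numpy as np

def pdf(x, mu=0.0, sigma=1.0):
    # Same normal-density formula as in the cell above
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

# Riemann-sum approximation of the integral over a wide interval
x = np.arange(-50, 50, 0.01)
area = np.sum(pdf(x, mu=5, sigma=2.5 ** 0.5)) * 0.01
# area is approximately 1.0, as expected for a probability density
```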
# <br>
# <br>
# # Cumulative Plots
# [[back to top](#Sections)]
# <br>
# <br>
# ### Cumulative Sum
# [[back to top](#Sections)]
# +
import numpy as np
import matplotlib.pyplot as plt
A = np.arange(1, 11)
B = np.random.randn(10) # 10 rand. values from a std. norm. distr.
C = B.cumsum()
fig, (ax0, ax1) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10,5))
## A) via plt.step()
ax0.step(A, C, label='cumulative sum') # cumulative sum via numpy.cumsum()
ax0.scatter(A, B, label='actual values')
ax0.set_ylabel('Y value')
ax0.legend(loc='upper right')
## B) via plt.plot()
ax1.plot(A, C, label='cumulative sum') # cumulative sum via numpy.cumsum()
ax1.scatter(A, B, label='actual values')
ax1.legend(loc='upper right')
fig.text(0.5, 0.04, 'sample number', ha='center', va='center')
fig.text(0.5, 0.95, 'Cumulative sum of 10 samples from a random normal distribution', ha='center', va='center')
plt.show()
# -
# <br>
# <br>
# ### Absolute Count
# [[back to top](#Sections)]
# +
import numpy as np
import matplotlib.pyplot as plt
A = np.arange(1, 11)
B = np.random.randn(10) # 10 rand. values from a std. norm. distr.
plt.figure(figsize=(10,5))
plt.step(np.sort(B), A)
plt.ylabel('sample count')
plt.xlabel('x value')
plt.title('Number of samples at a certain threshold')
plt.show()
# -
# <br>
# <br>
# # Colormaps
# [[back to top](#Sections)]
# More color maps are available at [http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps](http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps)
# +
import numpy as np
import matplotlib.pyplot as plt
fig, (ax0, ax1) = plt.subplots(1,2, figsize=(14, 7))
samples = range(1,16)
# Default Color Cycle
for i in samples:
ax0.plot([0, 10], [0, i], label=i, lw=3)
# Colormap
colormap = plt.cm.Paired
ax1.set_prop_cycle(color=[colormap(i) for i in np.linspace(0, 0.9, len(samples))])  # set_color_cycle was removed in matplotlib 2.0
for i in samples:
ax1.plot([0, 10], [0, i], label=i, lw=3)
# Annotation
ax0.set_title('Default color cycle')
ax1.set_title('plt.cm.Paired colormap')
ax0.legend(loc='upper left')
ax1.legend(loc='upper left')
plt.show()
# -
# <br>
# <br>
# # Marker styles
# [[back to top](#Sections)]
# +
import numpy as np
import matplotlib.pyplot as plt
markers = [
'.', # point
',', # pixel
'o', # circle
'v', # triangle down
'^', # triangle up
'<', # triangle_left
'>', # triangle_right
'1', # tri_down
'2', # tri_up
'3', # tri_left
'4', # tri_right
'8', # octagon
's', # square
'p', # pentagon
'*', # star
'h', # hexagon1
'H', # hexagon2
'+', # plus
'x', # x
'D', # diamond
'd', # thin_diamond
'|', # vline
]
plt.figure(figsize=(13, 10))
samples = range(len(markers))
for i in samples:
plt.plot([i-1, i, i+1], [i, i, i], label=markers[i], marker=markers[i], markersize=10)
# Annotation
plt.title('Matplotlib Marker styles', fontsize=20)
plt.ylim([-1, len(markers)+1])
plt.legend(loc='lower right')
plt.show()
# -
# <br>
# <br>
# # Line styles
# [[back to top](#Sections)]
# +
import numpy as np
import matplotlib.pyplot as plt
linestyles = ['-.', '--', 'None', '-', ':']
plt.figure(figsize=(8, 5))
samples = range(len(linestyles))
for i in samples:
plt.plot([i-1, i, i+1], [i, i, i],
label='"%s"' %linestyles[i],
linestyle=linestyles[i],
lw=4
)
# Annotation
plt.title('Matplotlib line styles', fontsize=20)
plt.ylim([-1, len(linestyles)+1])
plt.legend(loc='lower right')
plt.show()
# -
# Source notebook: Unit02/extension_studies/Matplotlib_gallery/ipynb/lineplots.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import re
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.linear_model import LogisticRegression
from nltk.corpus import stopwords
STOPWORDS = set(stopwords.words('english'))
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23
import matplotlib.pyplot as plt
df_full = pd.read_csv('Consumer_Complaints.csv')
df = df_full.dropna(subset = ["Consumer complaint narrative"])[["Product", "Consumer complaint narrative"]]
df = df.reset_index(drop=True)
df.info()
df.Product.value_counts()
df.loc[df['Product'] == 'Credit reporting', 'Product'] = 'Credit reporting, credit repair services, or other personal consumer reports'
df.loc[df['Product'] == 'Credit card', 'Product'] = 'Credit card or prepaid card'
df.loc[df['Product'] == 'Payday loan', 'Product'] = 'Payday loan, title loan, or personal loan'
df.loc[df['Product'] == 'Virtual currency', 'Product'] = 'Money transfer, virtual currency, or money service'
df = df[df.Product != 'Other financial service']
df["Product"].value_counts().plot(x='Product', y='Number of complaints', kind='bar', figsize=(15,6),\
title='Count of complaints across products since 2011')
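# The per-label `.loc` assignments above can be expressed equivalently as a single `replace` with a mapping dict; a small self-contained sketch (the toy DataFrame here is illustrative):

```python
import pandas as pd

# Consolidate old product labels into their newer, broader names
product_map = {
    'Credit reporting': 'Credit reporting, credit repair services, or other personal consumer reports',
    'Credit card': 'Credit card or prepaid card',
    'Payday loan': 'Payday loan, title loan, or personal loan',
    'Virtual currency': 'Money transfer, virtual currency, or money service',
}
demo = pd.DataFrame({'Product': ['Credit card', 'Mortgage', 'Payday loan']})
demo['Product'] = demo['Product'].replace(product_map)
# Labels in the map are renamed; 'Mortgage' is left untouched
```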
# +
REPLACE_BY_SPACE_RE = re.compile('[/(){}\[\]\|@,;]')
BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]')
def clean_text(text):
text = text.lower() # lowercase text
text = REPLACE_BY_SPACE_RE.sub(' ', text) # replace REPLACE_BY_SPACE_RE symbols by space in text.
# substitute the matched string in REPLACE_BY_SPACE_RE with space.
text = BAD_SYMBOLS_RE.sub('', text) # remove symbols which are in BAD_SYMBOLS_RE from text.
# substitute the matched string in BAD_SYMBOLS_RE with nothing.
    text = text.replace('x', '')  # intended to strip the 'xxxx' redaction masks, but note this also removes 'x' from ordinary words
text = ' '.join(word for word in text.split() if word not in STOPWORDS) # remove stopwords from text
return text
df['Consumer complaint narrative'] = df['Consumer complaint narrative'].apply(clean_text)
df['Consumer complaint narrative'] = df['Consumer complaint narrative'].str.replace(r'\d+', '', regex=True)
# -
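# One subtlety in `clean_text` above: `text.replace('x', '')` removes every letter x, including those inside ordinary words like "taxed". A regex that only targets runs of two or more x's (the 'xxxx' redaction masks used in the narratives) is safer. A sketch, with an illustrative helper name:

```python
import re

# Match standalone runs of two or more x's (the redaction masks),
# leaving single x's inside ordinary words intact.
REDACTION_RE = re.compile(r'\bx{2,}\b')

def strip_redactions(text):
    return REDACTION_RE.sub('', text.lower())
```

For example, `strip_redactions('my card xxxx was taxed')` drops the mask but keeps "taxed".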
complaint=df['Consumer complaint narrative']
product=df['Product']
complaint_train, complaint_test, product_train, product_test = \
train_test_split(complaint, product, test_size=0.1, random_state=42)
print(complaint_train.shape, product_train.shape)
print(complaint_test.shape, product_test.shape)
# +
logistic_regression = Pipeline([
('bow', CountVectorizer(stop_words = STOPWORDS)),
('tfidf', TfidfTransformer()),
('classifier', LogisticRegression())])
start = time.time()
logistic_regression.fit(complaint_train, product_train)
end = time.time()
print ("Time in training: "+str(end - start))
start = time.time()
predictions=logistic_regression.predict(complaint_test)
end = time.time()
print ("Time in testing: "+str(end - start))
print (classification_report(product_test, predictions))
print (confusion_matrix(product_test, predictions))
# +
model_dir = 'models'
if not os.path.exists(model_dir):
os.mkdir(model_dir)
joblib.dump(logistic_regression, os.path.join(model_dir, 'logistic_regression.pkl'))
# -
joblib.dump(logistic_regression, './web app/logistic_regression.pkl')
# Source notebook: Machine Learning/Consumer Complaint Classification/Logistic Regression.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: BMI Estimation
# language: python
# name: bmiestimation
# ---
# ### Explore metadata IMDB
# +
from scipy.io import loadmat
path_imdb_metadata = 'dataset/imdb/imdb.mat'
imdb_metadata = loadmat(path_imdb_metadata)
# -
print(imdb_metadata)
len(imdb_metadata['imdb'][0][0][3][0])
for v in imdb_metadata['imdb'][0][0]:
print(v)
print()
# +
no_attributes = len(imdb_metadata['imdb'][0][0])
dob_idx = 0
photo_taken_idx = 1
full_path_idx = 2
gender_idx = 3
name_idx = 4
face_location_idx = 5
face_score_idx = 6
second_face_score_idx = 7
celeb_names_idx = 8
celeb_id_idx = 9
entry_idx = 842
for atr in range(0,no_attributes):
print(imdb_metadata['imdb'][0][0][atr][0][entry_idx])
# -
imdb_metadata['imdb'][0][0][celeb_names_idx][0][15156]
# +
import requests
import time
all_names_imdb = imdb_metadata['imdb'][0][0][celeb_names_idx][0]
def toHyphenName(name):
name = name.lower()
parts = name.split()
return '-'.join(parts)
def build_url_1(hyphen_name):
return 'http://heightandweights.com/' + hyphen_name + '/'
def build_url_2(hyphen_name):
return 'https://bodyheightweight.com/' + hyphen_name + '-body-measurements/'
list_results = []
for celeb_name in all_names_imdb:
hyphen_name = toHyphenName(celeb_name[0])
r1 = requests.head(build_url_1(hyphen_name))
r2 = requests.head(build_url_2(hyphen_name))
if r1.status_code == 200 or r2.status_code == 200 :
list_results.append(celeb_name)
print(celeb_name)
# +
r = requests.head(build_url_1('maria-sharapova'))
r
# -
list_results
# Source notebook: Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Test tf.keras with eager execution
#
# Can I debug line-by-line?
#
# https://www.tensorflow.org/guide/eager
import tensorflow as tf
tf.enable_eager_execution()
from cbrain.utils import limit_mem
from cbrain.imports import *
from cbrain.data_generator import *
limit_mem()
tf.__version__
tf.executing_eagerly()
# ## A very simple example
# ### Build a tf.keras model
model = tf.contrib.keras.models.Sequential([
tf.contrib.keras.layers.Dense(20, input_shape=(784,)), # must declare input shape
tf.contrib.keras.layers.Dense(10)
])
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.contrib.keras.datasets.mnist.load_data()
mnist_images.shape, mnist_labels.shape
x = mnist_images
model(mnist_images[0].flatten()[None,:])
l = model.layers
l[0](mnist_images[0].flatten()[None,:])
model.compile(tf.train.AdamOptimizer(0.01), 'sparse_categorical_crossentropy') # This is obviously not right
model.fit(mnist_images.reshape(-1, 28*28), mnist_labels)
# ## Test for status-quo CBRAIN workflow
DATADIR = '/local/S.Rasp/preprocessed_data/'
# !ls $DATADIR
train_gen = DataGenerator(
data_dir=DATADIR,
feature_fn='32_col_engy_ess_3d_train_shuffle_features.nc',
target_fn='32_col_engy_ess_3d_train_shuffle_targets.nc',
batch_size=512,
norm_fn='32_col_engy_ess_3d_train_norm.nc',
fsub='feature_means',
fdiv='feature_stds',
tmult='target_conv',
shuffle=True,
)
gen = train_gen.return_generator()
x, y = next(gen)
x.shape, y.shape
import pdb
class TestLayer(tf.contrib.keras.layers.Layer): # Important to not use keras.layers.Layer
def __init__(self, **kwargs):
super().__init__(**kwargs)
def build(self, input_shape):
super().build(input_shape) # Be sure to call this somewhere!
def call(self, x):
#pdb.set_trace()
x1 = x[:, :30]
x2 = x[:, 30:]
x1 *= 1000.
out = tf.concat([x1, x2], 1)
return out
def compute_output_shape(self, input_shape):
        return input_shape  # shape is unchanged; Keras expects a full shape, not a single dimension
inp = tf.contrib.keras.layers.Input(shape=(94,))
act = tf.contrib.keras.layers.Dense(256, activation='relu')(inp)
for i in range(4):
act = tf.contrib.keras.layers.Dense(256, activation='relu')(act)
act = tf.contrib.keras.layers.Dense(65, activation='relu')(act)
out = TestLayer()(act)
m = tf.contrib.keras.models.Model(inputs=inp, outputs=out)
m.layers
m.summary()
preds = m(x)
preds.numpy().shape
# It all seems to work as I hoped. Basically like using PyTorch with the simple Keras API! Awesome.
# ## Test a custom training loop
train_gen.n_batches
optimizer = tf.train.AdamOptimizer()
from tqdm import tqdm_notebook as tqdm
# +
loss_hist = []
for i in tqdm(range(train_gen.n_batches)):
x, y = next(gen)
with tf.GradientTape() as tape: # This is where tf gets ugly
preds = m(x)
loss = tf.losses.mean_squared_error(y, preds)
loss_hist.append(loss)
grads = tape.gradient(loss, m.variables)
optimizer.apply_gradients(zip(grads, m.variables))
# -
plt.plot(loss_hist[10:])
plt.yscale('log')
# ### Profiling
def loop(n_batches):
for i in tqdm(range(n_batches)):
x, y = next(gen)
with tf.GradientTape() as tape: # This is where tf gets ugly
preds = m(x)
loss = tf.losses.mean_squared_error(y, preds)
loss_hist.append(loss)
grads = tape.gradient(loss, m.variables)
optimizer.apply_gradients(zip(grads, m.variables))
# %load_ext line_profiler
# %lprun -f loop loop(500)
# Profiling also works great, which will really help later on. There are some ongoing performance issues that we will have to sort out.
# Source notebook: notebooks/stephans-devlog/3.0-tf.keras-test-with-eager-execution.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.12 64-bit (''cdcbirth'': conda)'
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path
import pathlib
import os
import seaborn as sns
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# -
root_dir = Path.cwd().parent.parent
print(root_dir)
folder_raw_data = root_dir / 'data/raw'
folder_processed_data = root_dir / 'data/processed'
folder_external_data = root_dir / 'data/external'
# load births_with_geo_apgar_consolidated.csv.gz as df
df = pd.read_csv(folder_processed_data / 'births_with_geo_apgar_consolidated.csv.gz', compression='gzip')
df.head()
df[df['mrcityfips']!='ZZZ']
| notebooks/scratch/check_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/X-ray-Imaging-Group/X-ray-Root-Cellar/blob/main/Poisson_ratio/Poisson_ratio_calculator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="-AEZyOTCTywS"
# 
# + colab={"base_uri": "https://localhost:8080/"} id="f73bEITjvWEd" cellView="form" outputId="3ee51e64-07ac-402b-9024-34966aca297c"
#@title <-- Click this button to run.
import numpy as np
import pandas as pd
def material_compliance(material, s):
data = {'material': ['Si', 'Ge', 'C'],
'lattice spacing (m)': [0.00000000054309402, 0.000000000565735, 0.000000000356679],
's11': [0.768, 0.964, 0.0949],
's12': [-0.214, -0.26, -0.00978],
's44': [1.26, 1.49, 0.17]}
data = pd.DataFrame(data)
data = data.set_index('material')
s = data.loc[material, s]
return s
def normalize_vector(v):
v_sum = np.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
return np.array([v[0] / v_sum, v[1] / v_sum, v[2] / v_sum])
def cross_vector(v1, v2):
v1 = np.array(v1)
v2 = np.array(v2)
return np.cross(v1, v2)
def calculator(material, reflection_hkl, bending_hkl, chi=0):
"""
    References: Wortman, J. J., & Evans, R. A. (1965). Young's modulus, shear modulus, and Poisson's ratio in silicon
    and germanium. Journal of Applied Physics, 36(1), 153–156. https://doi.org/10.1063/1.1713863
Parameters
----------
material: 'Si', 'Ge', 'C'
reflection_hkl: [h, k, l]
bending_hkl: [h, k, l]
chi: Degree. Asymmetric angle from reflection plane to surface normal (counterclockwise positive)
Returns
-------
Poisson's ratio on ZX direction.
"""
chi = np.radians(chi)
# XR, YR, ZR for Reflection related dimensions
ZR = np.array(reflection_hkl)
Y = np.array(bending_hkl)
XR = np.cross(Y, ZR)
# **n for NORMALIZED vector
XRn = normalize_vector(XR)
# Yn = normalize_vector(Y)
ZRn = normalize_vector(ZR)
# X, Y, Z for crystal orientation related dimensions
X = XRn * np.cos(chi) - ZRn * np.sin(chi)
Z = np.cross(X, Y)
# *n for NORMALIZED vector
Xn = normalize_vector(X)
Yn = normalize_vector(Y)
Zn = normalize_vector(Z)
# sc** for Compliance coefficient
sc11 = material_compliance(material, 's11')
sc12 = material_compliance(material, 's12')
sc44 = material_compliance(material, 's44')
sc = sc11 - sc12 - 0.5 * sc44
s13 = sc12 + sc * (Xn[0] ** 2 * Zn[0] ** 2 + Xn[1] ** 2 * Zn[1] ** 2 + Xn[2] ** 2 * Zn[2] ** 2)
s33 = sc11 + sc * (Zn[0] ** 4 + Zn[1] ** 4 + Zn[2] ** 4 - 1)
sp31 = s13 / s33
c_factor = 1.0
nu = -sp31 * c_factor # (\nu ZX)
return nu
#@markdown Material
material = 'Si' #@param ["Si", "Ge", "C"]
#@markdown Reflection
h_reflection = 1#@param {type:"number"}
k_reflection = 1 #@param {type:"number"}
l_reflection = -1#@param {type:"number"}
reflection_hkl=[h_reflection, k_reflection, l_reflection]
#@markdown Bending axis (perpendicular to Reflection)
h_bending = 1 #@param {type:"number"}
k_bending = -1#@param {type:"number"}
l_bending = 0 #@param {type:"number"}
bending_hkl=[h_bending, k_bending, l_bending]
#@markdown Asymmetry angle (degree)
chi = 5#@param {type:"number"}
nu = calculator(material, reflection_hkl, bending_hkl, chi=chi)
print("The Poisson's ratio is \n{:.4f}".format(nu))
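# As a standalone sanity check (a sketch, not part of the original form cell): for a cubic crystal the orientation-dependent Poisson's ratio computed above must lie within the elastic-stability bounds $-1 < \nu < 0.5$. The snippet below re-implements only the symmetric case ($\chi = 0$) with the Si compliances from the table; `nu_zx` is a hypothetical helper name introduced here.

```python
import numpy as np

# Si compliance constants from the table above (units of 1e-11 Pa^-1)
s11, s12, s44 = 0.768, -0.214, 1.26
sc = s11 - s12 - 0.5 * s44

def nu_zx(reflection_hkl, bending_hkl):
    # symmetric-case (chi = 0) Poisson's ratio nu_ZX for a cubic crystal
    Z = np.array(reflection_hkl, float)
    Y = np.array(bending_hkl, float)
    X = np.cross(Y, Z)
    Xn, Zn = X / np.linalg.norm(X), Z / np.linalg.norm(Z)
    s13 = s12 + sc * np.sum(Xn**2 * Zn**2)
    s33 = s11 + sc * (np.sum(Zn**4) - 1.0)
    return -s13 / s33

for refl, bend in [([1, 1, -1], [1, -1, 0]), ([0, 0, 1], [1, 0, 0]), ([1, 1, 0], [0, 0, 1])]:
    nu = nu_zx(refl, bend)
    assert -1.0 < nu < 0.5  # elastic stability bounds for any orientation
    print(refl, round(nu, 4))
```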
# + [markdown] id="ZTGhcSoqTywZ"
# ## Useful tools
# + id="FYyrPhozTywa" cellView="form" outputId="d055a1d3-9809-46a0-fdf9-e8f3c032242e"
#@title Calculate the Bending axis, if the Surface and Reflection are known.
import numpy as np
def cross_vector(v1, v2):
v1 = np.array(v1)
v2 = np.array(v2)
return np.cross(v1, v2)
#@markdown Surface
h_surface = 2 #@param {type:"number"}
k_surface = 2 #@param {type:"number"}
l_surface = 4 #@param {type:"number"}
surface_hkl=[h_surface, k_surface, l_surface]
#@markdown Reflection
h_reflection = 1 #@param {type:"number"}
k_reflection = 1 #@param {type:"number"}
l_reflection = -1 #@param {type:"number"}
reflection_hkl=[h_reflection, k_reflection, l_reflection]
bending = cross_vector(reflection_hkl,surface_hkl)
print("The bending axis is {}".format(bending))
# + [markdown] id="i9zzcqFbqudn"
# ## Relation between Poisson's ratio $\nu$ and asymmetry angle $\chi$.
# + colab={"base_uri": "https://localhost:8080/", "height": 369} cellView="form" id="TDuJLifkrlAw" outputId="8e4072b2-7d5e-4269-af29-92e155ca992c"
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn') # ['Solarize_Light2', '_classic_test_patch', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark', 'seaborn-dark-palette', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-pastel', 'seaborn-poster', 'seaborn-talk', 'seaborn-ticks', 'seaborn-white', 'seaborn-whitegrid', 'tableau-colorblind10']
def material_compliance(material, s):
data = {'material': ['Si', 'Ge', 'C'],
'lattice spacing (m)': [0.00000000054309402, 0.000000000565735, 0.000000000356679],
's11': [0.768, 0.964, 0.0949],
's12': [-0.214, -0.26, -0.00978],
's44': [1.26, 1.49, 0.17]}
data = pd.DataFrame(data)
data = data.set_index('material')
s = data.loc[material, s]
return s
def normalize_vector(v):
v_sum = np.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
return np.array([v[0] / v_sum, v[1] / v_sum, v[2] / v_sum])
def cross_vector(v1, v2):
v1 = np.array(v1)
v2 = np.array(v2)
return np.cross(v1, v2)
def calculator(material, reflection_hkl, bending_hkl, chi=0):
"""
    References: Wortman, J. J., & Evans, R. A. (1965). Young's modulus, shear modulus, and Poisson's ratio in silicon
    and germanium. Journal of Applied Physics, 36(1), 153–156. https://doi.org/10.1063/1.1713863
Parameters
----------
material: 'Si', 'Ge', 'C'
reflection_hkl: [h, k, l]
bending_hkl: [h, k, l]
chi: Degree. Asymmetric angle from reflection plane to surface normal (counterclockwise positive)
Returns
-------
Poisson's ratio on ZX direction.
"""
chi = np.radians(chi)
# XR, YR, ZR for Reflection related dimensions
ZR = np.array(reflection_hkl)
Y = np.array(bending_hkl)
XR = np.cross(Y, ZR)
# **n for NORMALIZED vector
XRn = normalize_vector(XR)
# Yn = normalize_vector(Y)
ZRn = normalize_vector(ZR)
# X, Y, Z for crystal orientation related dimensions
X = XRn * np.cos(chi) - ZRn * np.sin(chi)
Z = np.cross(X, Y)
# *n for NORMALIZED vector
Xn = normalize_vector(X)
Yn = normalize_vector(Y)
Zn = normalize_vector(Z)
# sc** for Compliance coefficient
sc11 = material_compliance(material, 's11')
sc12 = material_compliance(material, 's12')
sc44 = material_compliance(material, 's44')
sc = sc11 - sc12 - 0.5 * sc44
s13 = sc12 + sc * (Xn[0] ** 2 * Zn[0] ** 2 + Xn[1] ** 2 * Zn[1] ** 2 + Xn[2] ** 2 * Zn[2] ** 2)
s33 = sc11 + sc * (Zn[0] ** 4 + Zn[1] ** 4 + Zn[2] ** 4 - 1)
sp31 = s13 / s33
c_factor = 1.0
nu = -sp31 * c_factor # (\nu ZX)
return nu
#@markdown Material
material = 'Si' #@param ["Si", "Ge", "C"]
#@markdown Reflection
h_reflection = 1#@param {type:"number"}
k_reflection = 1 #@param {type:"number"}
l_reflection = -1#@param {type:"number"}
reflection_hkl=[h_reflection, k_reflection, l_reflection]
#@markdown Bending axis (perpendicular to Reflection)
h_bending = 1 #@param {type:"number"}
k_bending = -1#@param {type:"number"}
l_bending = 0 #@param {type:"number"}
bending_hkl=[h_bending, k_bending, l_bending]
#@markdown Asymmetry angle range
chi_1 = -10 #@param {type:"number"}
chi_2 = 10 #@param {type:"number"}
interval = 0.01 #@param {type:"number"}
chis= np.arange(chi_1, chi_2, interval)
nus = np.array([calculator(material, reflection_hkl, bending_hkl, chi=chi) for chi in chis])
plt.plot(chis, nus)
fs=15
plt.xlabel(r'Asymmetry angle, $\chi$ (degree)', fontsize=fs)
plt.ylabel(r"Poisson's ratio, $\nu$ ", fontsize=fs);
| Poisson_ratio/Poisson_ratio_calculator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/logodwengo.png" alt="Banner" width="150"/>
# <div>
# <font color=#690027 markdown="1">
# <h1>PARAMETERS OF A SLANTED LINE</h1>
# </font>
# </div>
# <div class="alert alert-box alert-success">
# The graph of a slanted line depends on the values of the parameters a and b in the function rule:
# $$ y = a x + b.$$
# In this notebook you can use sliders to adjust the values of a and b and investigate the effect of these adjustments on the graph.
# </div>
# ### Importing the required modules
# Run the following code cell (click inside it and choose 'Run' from the menu).
# +
# for interactivity
# %matplotlib widget
import ipywidgets as widgets # for widgets
import matplotlib.pyplot as plt
import numpy as np
# -
# <div style='color: #690027;' markdown="1">
# <h2>Influence of the parameters on the graph</h2>
# </div>
# Run the following code cell. Below the graph you can use the sliders with your mouse or with the arrow keys.
# +
def rechte(x, a, b):
"""Geef functiewaarde van x terug van schuine rechte."""
return a * x + b
# interactive graph of the slanted line
x = np.arange(-10, 10, 0.02)  # generate x values
# create an interactive plot window
fig, ax = plt.subplots(figsize=(4, 4))
plt.axis("equal") # zelfde ijk op x-as en y-as
ax.set_xlim([-10, 10])
ax.set_ylim([-10, 10])
ax.grid(True)
plt.title("Rechte $y = a \; x + b$")
# sliders voor a en b
# a varieert tussen -10 en 10, b tussen -8 en 8
@widgets.interact(b=(-8, 8, 0.5), a=(-10, 10, .1))
# functie definiëren om te werken met waarden gekozen via sliders
# startwaarde voor a is 1 en voor b is 0
def update(a=1, b=0):
"""Verwijder vorige grafiek en plot nieuwe."""
[l.remove() for l in ax.lines] # vorige grafiek verwijderen
plt.hlines(0, -10, 10)
plt.vlines(0, -10, 10)
plt.plot(x, x, color="green") # rechte met vergelijking y = x
ax.plot(x, rechte(x, a, b), color="C0") # nieuwe grafiek plotten
# -
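# The effect of the parameters can also be checked numerically: a is the slope (the change in y per unit of x) and b is the intercept (the value of y at x = 0). A small self-contained check, independent of the interactive plot above:

```python
import numpy as np

a, b = 2.0, -3.0                 # example parameter values
x = np.array([0.0, 1.0, 5.0])
y = a * x + b                    # y = a x + b

print(y[0])                               # the line passes through b at x = 0
print((y[1] - y[0]) / (x[1] - x[0]))      # the slope between two points equals a
```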
# <img src="images/cclic.png" alt="Banner" align="left" width="100"/><br><br>
# Python notebook in mathematics, see Computational Thinking - Programming in Python from <a href="http://www.aiopschool.be">AI Op School</a>, by F. wyffels & N. Gesquière, licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence</a>.
| Wiskunde/GrafiekenFuncties/0600_ParametersRechten.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
#hide
#all_ignoretest
from google.colab import drive
drive.mount('/content/drive')
#hide
# !pip install fastai -Uqq
# !pip install einops
#hide
# %cd '/content/drive/MyDrive/colab_notebooks/hmckd'
# !pip3 install -e . -q
#hide
from hmckd.utils import get_features, baseline_df, med2como, get_tabpandas_dls
from hmckd.utils_tab import prepare_df_nsetpoints, prepare_df_firstnpoints
from hmckd.saint import SAINT
from hmckd.utils_saint import embed_data_mask, data_prep, get_saint_model, get_saint_nsp_dls, get_saint_fnp_dls, training_saint, test_saint
#hide
from fastai.tabular.all import *
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score, precision_score
# # Results
# ## n_setpoints Simple TabularModel
# ### 2_setpoints
df = pd.read_csv('data/train_df_5f.csv')
features = get_features("data/dataScienceTask/")
df, cont_names = prepare_df_nsetpoints(features, df, 650, [200, 400])
procs = [Categorify, FillMissing, Normalize]
cat_names = ['race', 'gender']
y_names = 'Stage_Progress'
train_df = df[df['fold']!= 4].reset_index(drop=True)
test_df = df[df['fold']== 4].reset_index(drop=True)
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy',
fname=f'tab_2pt_{i}')])
learn.fit_one_cycle(10, 0.002)
# +
#hide_output
test_dl = dls.test_dl(test_df)
preds = []
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy')])
learn.load(f'tab_2pt_{i}')
pred, y_pred, y_true = learn.get_preds(dl=test_dl, with_decoded=True)
preds.append(pred)
y_true = test_dl.y.values
y_pred = np.array(torch.argmax(torch.stack(preds).mean(0),1))
# -
#hide
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ### 5_setpoints
df = pd.read_csv('data/train_df_5f.csv')
features = get_features("data/dataScienceTask/")
df, cont_names = prepare_df_nsetpoints(features, df, 650, [100, 200, 300, 400, 500])
procs = [Categorify, FillMissing, Normalize]
cat_names = ['race', 'gender']
y_names = 'Stage_Progress'
train_df = df[df['fold']!= 4].reset_index(drop=True)
test_df = df[df['fold']== 4].reset_index(drop=True)
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy',
fname=f'tab_5pt_{i}')])
learn.fit_one_cycle(10, 0.002)
# +
#hide_output
test_dl = dls.test_dl(test_df)
preds = []
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy')])
learn.load(f'tab_5pt_{i}')
pred, y_pred, y_true = learn.get_preds(dl=test_dl, with_decoded=True)
preds.append(pred)
y_true = test_dl.y.values
y_pred = np.array(torch.argmax(torch.stack(preds).mean(0),1))
# -
#hide
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ### 2_setpoints + meds
# +
# Encode, per patient, whether each drug was never prescribed (0), prescribed
# with no dose increase (1), or up-titrated at some point (2).
meds = pd.read_csv('data/dataScienceTask/T_meds.csv')
meds = meds[meds['end_day'] < 650]  # keep prescriptions ending before day 650
meddf = pd.DataFrame(columns=med2como.keys())
meddf['id'] = range(0, 300)
for i in range(300):
    pat = meds[meds['id'] == i]
    for drug in pat['drug'].unique():
        doses = pat[pat['drug'] == drug]['daily_dosage'].values
        inidose = doses[0]
        change = 1  # drug present, dose never increased
        for dose in doses[1:]:
            if dose > inidose:
                change = 2  # dose was increased relative to the initial dose
        meddf.loc[i, drug] = change
meddf = meddf.fillna(0)  # 0 = drug never prescribed
med_cat_cols = list(meddf.columns[:-1])
# -
df = pd.read_csv('data/train_df_5f.csv')
features = get_features("data/dataScienceTask/")
df, cont_names = prepare_df_nsetpoints(features, df, 650, [200, 400])
df = df.merge(meddf, on='id')
procs = [Categorify, FillMissing, Normalize]
cat_names = ['race', 'gender']+med_cat_cols
y_names = 'Stage_Progress'
# +
train_df = df[df['fold']!= 4].reset_index(drop=True)
test_df = df[df['fold']== 4].reset_index(drop=True)
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy',
fname=f'tab_2pt_meds_{i}')])
learn.fit_one_cycle(10, 0.002)
# +
categorical_dims = {o:len(i) for o,i in dls.train.categorify.classes.items()}
test_cats = dict(test_df[list(categorical_dims.keys())].nunique())
test_cats = {k:v+1 for k, v in test_cats.items()}
for k, v in categorical_dims.items():
if test_cats[k] > v:
print(f'{k}')
test_df = test_df[test_df['metoprolol'] != 2]
test_df = test_df[test_df['simvastatin'] != 2].reset_index(drop=True)
# +
test_dl = dls.test_dl(test_df)
preds = []
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy')])
learn.load(f'tab_2pt_meds_{i}')
pred, y_true = learn.get_preds(dl=test_dl)
preds.append(pred)
y_true = test_dl.y.values
y_pred = np.array(torch.argmax(torch.stack(preds).mean(0),1))
# -
#hide
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ## first_n_points Simple TabularModel
# ### first_3_points
#hide_input
df = pd.read_csv('data/train_df_5f.csv')
df, cont_names = prepare_df_firstnpoints(features, df, 3)
procs = [Categorify, FillMissing, Normalize]
y_names = 'Stage_Progress'
y_block = CategoryBlock()
cat_names = ['race', 'gender']
# +
#hide_output
train_df = df[df['fold']!= 4].reset_index(drop=True)
train_df = train_df[~train_df['id'].isin([281, 134, 7])].reset_index(drop=True)
test_df = df[df['fold']== 4].reset_index(drop=True)
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy',
fname=f'tab_f3pt_{i}')])
learn.fit_one_cycle(10, 0.002)
# +
#hide_output
test_df = test_df[~test_df['id'].isin([299,67,255])].reset_index(drop=True)
test_dl = dls.test_dl(test_df)
preds = []
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy')])
learn.load(f'tab_f3pt_{i}')
pred, y_pred = learn.get_preds(dl=test_dl)
preds.append(pred)
y_true = test_dl.y.values
y_pred = np.array(torch.argmax(torch.stack(preds).mean(0),1))
# -
#hide
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ### first_6_points
#hide_input
df = pd.read_csv('data/train_df_5f.csv')
df, cont_names = prepare_df_firstnpoints(features, df, 6)
procs = [Categorify, FillMissing, Normalize]
y_names = 'Stage_Progress'
y_block = CategoryBlock()
cat_names = ['race', 'gender']
# +
#hide_output
train_df = df[df['fold']!= 4].reset_index(drop=True)
train_df = train_df[~train_df['id'].isin([281, 134, 7])].reset_index(drop=True)
test_df = df[df['fold']== 4].reset_index(drop=True)
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy',
fname=f'tab_f6pt_{i}')])
learn.fit_one_cycle(10, 0.002)
# +
#hide_output
test_df = test_df[~test_df['id'].isin([299,67,255])].reset_index(drop=True)
test_dl = dls.test_dl(test_df)
preds = []
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy')])
learn.load(f'tab_f6pt_{i}')
pred, y_pred = learn.get_preds(dl=test_dl)
preds.append(pred)
y_true = test_dl.y.values
y_pred = np.array(torch.argmax(torch.stack(preds).mean(0),1))
# -
#hide
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ### first_3_points + meds
#hide_input
df = pd.read_csv('data/train_df_5f.csv')
df, cont_names = prepare_df_firstnpoints(features, df, 3)
df = df.merge(meddf, on='id')
procs = [Categorify, FillMissing, Normalize]
cat_names = ['race', 'gender']+med_cat_cols
y_names = 'Stage_Progress'
# +
#hide_output
train_df = df[df['fold']!= 4].reset_index(drop=True)
train_df = train_df[~train_df['id'].isin([281, 134, 7])].reset_index(drop=True)
test_df = df[df['fold']== 4].reset_index(drop=True)
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy',
fname=f'tab_f3pt_meds_{i}')])
learn.fit_one_cycle(10, 0.002)
# -
test_df = test_df[test_df['metoprolol'] != 2]
test_df = test_df[test_df['simvastatin'] != 2].reset_index(drop=True)
# +
#hide_output
test_df = test_df[~test_df['id'].isin([299,67,255])].reset_index(drop=True)
test_dl = dls.test_dl(test_df)
preds = []
for i in range(4):
dls, tabdf = get_tabpandas_dls(i, train_df, procs, cat_names, cont_names, y_names, 32)
emb_szs = get_emb_sz(tabdf)
learn = tabular_learner(dls, [100,50],
metrics=accuracy,
cbs=[SaveModelCallback(monitor='accuracy')])
learn.load(f'tab_f3pt_meds_{i}')
pred, y_pred = learn.get_preds(dl=test_dl)
preds.append(pred)
y_true = test_dl.y.values
y_pred = np.array(torch.argmax(torch.stack(preds).mean(0),1))
# -
#hide
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ## n_setpoints SAINT TabularModel
# ### 2_setpoints
# +
config = {'cont_embeddings':'MLP',
'embedding_size':32,
'transformer_depth':1,
'attention_heads':8,
'attention_dropout':0.1,
'ff_dropout':0.8,
'attentiontype':'colrow',
'lr':0.0001,
'pretrain_epochs':30,
'epochs':15,
'batchsize':256,
'pt_tasks':['contrastive','denoising'],
'pt_aug':['mixup', 'cutmix',],
'pt_aug_lam':0.1,
'mixup_lam':0.3,
'train_mask_prob':0,
'mask_prob':0,
'pt_projhead_style':'diff',
'nce_temp':0.7,
'lam0':0.5,
'lam1':10,
'lam2':1,
'lam3':10,
'final_mlp_style':'sep'}
mask_params = {'mask_prob':config['train_mask_prob'],
'avail_train_y': 0,
'test_mask':config['train_mask_prob']}
pt_mask_params = {'mask_prob':0,
'avail_train_y':0,
'test_mask':0}
# +
#hide_output
df = pd.read_csv('data/train_df_5f.csv')
train_df = df[df['fold'] != 4].reset_index(drop=True)
test_df = df[df['fold'] == 4].reset_index(drop=True)
device = 'cpu'
y_dim = 2
for i in range(4):
print(f'____________________Fold: {i}_______________________')
dls, test_dl, tabdf, cat_dims, num_continuous, continuous_mean_std, y_dim = get_saint_nsp_dls(i, train_df, test_df, 200, 400, 650, 32)
model = get_saint_model(config, cat_dims, num_continuous, continuous_mean_std, y_dim)
output_fn =f'models/saint_2pt_{i}.pth'
training_saint(dls, model, config, cat_dims, output_fn, 'nspt')
# -
#hide_output
y_preds = []
for i in range(4):
dls, test_dl, tabdf, cat_dims, num_continuous, continuous_mean_std, y_dim = get_saint_nsp_dls(i, train_df, test_df, 200, 400, 650, 32)
model = get_saint_model(config, cat_dims, num_continuous, continuous_mean_std, y_dim)
model.load_state_dict(torch.load(f'models/saint_2pt_{i}.pth'))
y_true, y_pred = test_saint(test_dl, model, 'nspt')
y_preds.append(y_pred)
#hide
y_pred = torch.argmax(torch.stack(y_preds).mean(0), 1)
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
# ### first_3_points
# +
#hide_output
df = pd.read_csv('data/train_df_5f.csv')
train_df = df[df['fold'] != 4].reset_index(drop=True)
test_df = df[df['fold'] == 4].reset_index(drop=True)
device = 'cpu'
y_dim = 2
for i in range(4):
print(f'____________________Fold: {i}_______________________')
dls, test_dl, tabdf, cat_dims, num_continuous, continuous_mean_std, y_dim = get_saint_nsp_dls(i, train_df, test_df, 200, 400, 650, 32)
model = get_saint_model(config, cat_dims, num_continuous, continuous_mean_std, y_dim)
output_fn =f'models/saint_f3pt_{i}.pth'
training_saint(dls, model, config, cat_dims, output_fn, 'fnpt')
# -
#hide_output
y_preds = []
for i in range(4):
dls, test_dl, tabdf, cat_dims, num_continuous, continuous_mean_std, y_dim = get_saint_nsp_dls(i, train_df, test_df, 200, 400, 650, 32)
model = get_saint_model(config, cat_dims, num_continuous, continuous_mean_std, y_dim)
model.load_state_dict(torch.load(f'models/saint_f3pt_{i}.pth'))
y_true, y_pred = test_saint(test_dl, model, 'fnpt')
y_preds.append(y_pred)
#hide
y_pred = torch.argmax(torch.stack(y_preds).mean(0), 1)
print(f'5-fold accuracy score: {accuracy_score(y_true, y_pred):.2f}')
print(f'5-fold recall score: {recall_score(y_true, y_pred):.2f}')
print(f'5-fold precision score: {precision_score(y_true, y_pred):.2f}')
#hide_input
sns.heatmap(confusion_matrix(y_true, y_pred), annot=True, cmap='Blues')
| nbs/results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:uncluster]
# language: python
# name: conda-env-uncluster-py
# ---
# ## Mass model of the Galaxy
# %matplotlib inline
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gammainc
# ### Observational constraints
#
# [Piffl et al. 2014](http://adsabs.harvard.edu/abs/2014A%26A...562A..91P) Mvir = (1.2-1.3)e12 Msun
#
# [Deason et al. 2012](http://adsabs.harvard.edu/abs/2012MNRAS.425.2840D) [at 150 kpc] M = (5-10)e11 Msun
#
# [Gnedin et al. 2010](http://adsabs.harvard.edu/abs/2010ApJ...720L.108G) [at 80 kpc] M = (6.9 +3.0-1.2)e11 Msun
#
# [Deason et al. 2012](http://adsabs.harvard.edu/abs/2012MNRAS.424L..44D) [at 50 kpc] M = (4.2 ± 0.4)e11 Msun
#
# [McMillan 2016](http://arxiv.org/abs/1608.00971) M_disk,stellar = 5.4e10, Mvir = 1.3e12 Msun
#
# [Bovy & Rix 2013](http://adsabs.harvard.edu/abs/2013ApJ...779..115B) [at 4-9 kpc] M_disk = 5.3e10, M_disk,stellar = 4.6e10
#
# [Nakanishi & Sofue](http://adsabs.harvard.edu/abs/2016PASJ...68....5N) M_gas = 8e9 Msun
#
# [Bland-Hawthorn & Gerhard 2016](http://arxiv.org/abs/1602.07702) M_NSD = (1.4 ± 0.6)e9 Msun, r_h,NSD = 90 pc,
# M_bulge = (1.4-1.7)e10 Msun, M_thin = (4 ± 1)e10, r_thin = 2.6 ± 0.5 kpc, M_thick = (8 ± 3)e9 Msun, r_thick = 2 ± 0.2 kpc, R_0 = 8.2 ± 0.1 kpc, V_0 = 238 ± 15 km/s, V_0/R_0 = 29.0 ± 1.8 km/s/kpc, M(8.2kpc) = 1.08e11 Msun
#
# [Launhardt et all. 2002](http://adsabs.harvard.edu/abs/2002A%26A...384..112L) [at 120 pc] NSD mass = (8 ± 2)e8 Msun, scale radius = 90 pc
#
# [Feldmeier et al. 2014](http://adsabs.harvard.edu/abs/2014A%26A...570A...2F) [at 10 pc] total mass = (3 ± 1)e7 Msun
#
# [Chatzopoulos et al. 2015](http://adsabs.harvard.edu/abs/2015MNRAS.447..948C) [at 1 and 4 pc] R_0 = 8.33 ± 0.11 kpc, M_BH = (4.23 ± 0.14)e6 Msun, M_NSC = (1.8 ± 0.3)e7 Msun, r_h,NSC = 4.2 ± 0.4 pc, M_NSC(1pc) = 0.89e6 Msun
robs = np.array([ 0.001, 0.004, 0.01, 0.12, 8.2, 50., 80., 150. ])
Mobs = np.array([ 5.1e6, 1.3e7, 2.6e7, 8.e8, 1.08e11, 4.2e11, 6.9e11, 9.0e11 ])
Mobs_l = np.array([ 4.6e6, 1.1e7, 1.6e7, 6.e8, 9.37e10, 3.8e11, 5.0e11, 5.0e11 ])
Mobs_u = np.array([ 5.6e6, 1.5e7, 3.6e7, 1.e9, 1.24e11, 4.6e11, 9.9e11, 1.1e12 ])
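# A quick consistency check on the compiled observational points (a sketch, not part of the original analysis): the radii should be strictly increasing, each lower/upper bound should bracket the central mass value, and the enclosed mass should grow with radius.

```python
import numpy as np

robs = np.array([0.001, 0.004, 0.01, 0.12, 8.2, 50., 80., 150.])
Mobs = np.array([5.1e6, 1.3e7, 2.6e7, 8.e8, 1.08e11, 4.2e11, 6.9e11, 9.0e11])
Mobs_l = np.array([4.6e6, 1.1e7, 1.6e7, 6.e8, 9.37e10, 3.8e11, 5.0e11, 5.0e11])
Mobs_u = np.array([5.6e6, 1.5e7, 3.6e7, 1.e9, 1.24e11, 4.6e11, 9.9e11, 1.1e12])

assert np.all(np.diff(robs) > 0)                     # radii strictly increasing
assert np.all((Mobs_l <= Mobs) & (Mobs <= Mobs_u))   # bounds bracket central values
assert np.all(np.diff(Mobs) > 0)                     # enclosed mass grows with radius
print('observational points are self-consistent')
```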
# Nuclear star cluster mass distribution from [Chatzopoulos et al. 2015](http://adsabs.harvard.edu/abs/2015MNRAS.447..948C)
# +
def M_dehnen( x, gam ):
return np.power(x/(1.+x), 3.-gam)
def Mass_NSC( r ):
mfrac1 = 1./106.45 # fraction of mass in first component
mfrac = [ mfrac1, 1.-mfrac1 ]
rh = 0.0042 # half-mass radius of the nuclear star cluster in kpc
gam = [ 0.51, 0.07 ] # inner logarithmic slope
ascale = [ 99., 2376. ] # scale length in arcsec
arcsec = 4.e-5 # 1 arcsec in kpc at the distance of the Galactic Center
asc = np.array(ascale)*arcsec
part = [ frac*M_dehnen(r/a, g) for (a,g,frac) in zip(asc,gam,mfrac) ]
parth = [ frac*M_dehnen(rh/a, g) for (a,g,frac) in zip(asc,gam,mfrac) ]
fracm = np.minimum( np.sum(part)/np.sum(parth)/2., 1. )
return Mnsc*fracm
# -
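# The cumulative mass fraction of the Dehnen profile used above, $M(x) = (x/(1+x))^{3-\gamma}$ with scaled radius $x = r/a$, rises monotonically from 0 at the centre to 1 at large radii; a quick standalone check:

```python
import numpy as np

def M_dehnen(x, gam):
    # cumulative mass fraction of a Dehnen profile, scaled radius x = r/a
    return np.power(x / (1. + x), 3. - gam)

x = np.logspace(-3, 4, 200)
for gam in (0.07, 0.51, 1.0):   # inner slopes, including the two used above
    m = M_dehnen(x, gam)
    assert np.all(np.diff(m) > 0)           # monotonically increasing
    assert m[0] < 1e-3 and m[-1] > 0.999    # tends to 0 at the centre, 1 far out
print('Dehnen mass fraction behaves as expected')
```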
# Galactic mass components: nuclear star cluster, bulge, disk, and dark matter halo
# +
def NSC():
M = 1.8e7 # mass of the nuclear star cluster in Msun
return M
def Bulge():
M = 1.4e10 # mass of stellar bulge/bar in Msun (in G05 was 1e10)
a = 0.4 # scale length of stellar bulge in kpc (in G05 was 0.6)
return M, a
def Disk():
M = 5.6e10 # mass of stellar and gaseous disk in Msun (in G05 was 4e10)
a = 2.6 # scale length of stellar disk in kpc (in G05 was 5)
b = 0.3 # scale height of stellar disk in kpc
return M, a, b
def Halo():
M = 1.2e12 # mass of dark matter halo
rs = 20. # halo scale radius, in kpc
xm = 2.2 # scaled radius of maximum circular velocity
return M, rs, xm
def SMBH():
M = 4.2e6 # mass of central black hole
return M
Mnsc = NSC()
Mbulge, abulge = Bulge()
Mdisk, adisk, bdisk = Disk()
Mhalo, rs, xm = Halo()
MBH = SMBH()
Mvir = Mhalo + Mdisk + Mbulge + Mnsc + MBH
kms2 = 4.30e-6 # conversion from GMsun/kpc to (km/s)^2
Rvir = 56.*np.power(Mvir/1.e10, 1./3.) # virial radius in kpc, for delta0=340
c = Rvir/rs # halo concentration parameter
Mh = Mhalo/(np.log(1.+c)-c/(1.+c))
print('M_vir = %.2e Msun R_vir = %.1f kpc c_vir = %.1f'%(Mvir, Rvir, c))
#print('M_NSC = %.2e Msun'%(Mass_NSC(0.01)))
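# The halo normalization above inverts the NFW enclosed-mass profile, $M(r) = M_h\,[\ln(1+r/r_s) - (r/r_s)/(1+r/r_s)]$, so that $M(R_{vir}) = M_{halo}$ by construction. A standalone check with representative numbers (the round $R_{vir}$ used here is an assumption, not the value printed above):

```python
import numpy as np

Mhalo, rs = 1.2e12, 20.   # halo mass (Msun) and scale radius (kpc), as above
Rvir = 280.               # representative virial radius in kpc (assumption)
c = Rvir / rs             # concentration parameter
Mh = Mhalo / (np.log(1. + c) - c / (1. + c))   # NFW normalization

def Mass_Halo(r):
    # NFW enclosed mass at radius r (kpc)
    return Mh * (np.log(1. + r / rs) - (r / rs) / (1. + r / rs))

assert abs(Mass_Halo(Rvir) / Mhalo - 1.) < 1e-9   # recovers Mhalo at Rvir
print('%.3e' % Mass_Halo(8.2))   # enclosed halo mass at the solar radius
```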
# +
# from galaxy_mass_model import galaxy_mass_model
# gm = galaxy_mass_model()
# MBH = gm.M_BH
# print(gm.M_BH)
# +
plt.figure(figsize=(8,6))
plt.xlim(-3.5, 2.5)
plt.ylim(6.4, 12.4)
rcParams['lines.linewidth'] = 1.5
rcParams['xtick.major.size'] = 6
rcParams['ytick.major.size'] = 6
rcParams['xtick.labelsize'] = 14
rcParams['ytick.labelsize'] = 14
plt.xlabel(r'$\log{\,r}\ (\mathrm{kpc})$', fontsize=18)
plt.ylabel(r'$\log{\,M}\ (M_\odot)$', fontsize=18)
lgr = np.arange(-3.5, 2.6, 0.05)
r = 10.**lgr
# best model
Mnsc_g = np.array([ Mass_NSC(rr) for rr in r ])
Mbulge_g = Mbulge*r**2/(r + abulge)**2
Mdisk_g = Mdisk*r**3/(r**2 + (adisk+np.sqrt(0.**2+bdisk**2))**2)**1.5
Mhalo_g = Mh*(np.log(1.+r/rs) - r/rs/(1.+r/rs))
Mtot = MBH + Mnsc_g + Mbulge_g + Mdisk_g + Mhalo_g
#plt.plot(lgr, np.log10(Mnsc_g), 'k--')
#plt.plot(lgr, np.log10(Mbulge_g), 'k:')
#plt.plot(lgr, np.log10(Mdisk_g), 'k-.')
plt.plot(lgr, np.log10(Mtot), 'k-')
#plt.text(1.2, 7.65, 'nuclear cluster', fontsize=12)
#plt.text(1.9, 9.85, 'bulge', fontsize=12)
#plt.text(1.9, 10.45, 'disk', fontsize=12)
#plt.text(1.9, 11.4, 'halo', fontsize=12)
# Sersic fit, used in Gnedin, Ostriker & Tremaine 2014
nser = 4. # Sersic index (in G14 was 2.2)
aser = 4. # effective radius, in kpc
bn = 2.*nser-1./3.+0.0098765/nser+0.0018/nser**2
Mser = 5.e10*gammainc(2*nser, bn*(r/aser)**(1./nser))
#plt.plot(lgr, np.log10(Mser + Mhalo_g + MBH), 'g-')
# Gnedin 2005 model
Mbulge_g5 = 1e10*r**2/(r + 0.6)**2
Mdisk_g5 = 4e10*r**3/(r**2 + (5.+0.3)**2)**1.5
Mhalo_g5 = Mh/1.2*(np.log(1.+r/rs) - r/rs/(1.+r/rs))
Mtot_g5 = MBH + Mbulge_g5 + Mdisk_g5 + Mhalo_g5
#plt.plot(lgr, np.log10(Mtot_g5), 'g-')
#plt.text(1., 7.7, 'Gnedin+05', color='g', fontsize=12)
# Kenyon 2008 model, updated in Brown et al. 2014
Mbulge_k = 3.76e9*r**2/(r + 0.1)**2
Mdisk_k = 6e10*r**3/(r**2 + (2.75+bdisk)**2)**1.5
Mtot_k = MBH + Mbulge_k + Mdisk_k + Mhalo_g/1.2
#plt.plot(lgr, np.log10(Mtot_k), 'b-')
#plt.text(1., 8.3, 'Kenyon+08', color='b', fontsize=12)
# observational points
plt.scatter(np.log10(robs), np.log10(Mobs), s=20, marker='s', color='k')
yerr1 = np.log10(Mobs) - np.log10(Mobs_l)
yerr2 = np.log10(Mobs_u) - np.log10(Mobs)
plt.errorbar(np.log10(robs), np.log10(Mobs), yerr=[yerr1,yerr2], ecolor='k', capthick=0, linestyle='None')
plt.show()
#plt.savefig('galactic_mass_compare.png')
# -
# Escape velocity curve
# +
# best model
pot = -Mbulge/(r+abulge) -Mnsc/r -MBH/r -Mdisk/np.sqrt(0**2+(adisk+np.sqrt(r**2+bdisk**2))**2) -Mh/r*np.log(1.+r/rs)
Vesc = np.sqrt(-2.*pot*kms2)
# Kenyon 2008 model
pot_k = -3.76e9/(r+0.1) -MBH/r -6e10/np.sqrt(0**2+(2.75+np.sqrt(r**2+bdisk**2))**2) -Mh/1.2/r*np.log(1.+r/rs)
Vesc_k = np.sqrt(-2.*pot_k*kms2)
plt.figure(figsize=(6,4))
plt.xlim(-3, 2.5)
plt.ylim(0, 1000)
rcParams['lines.linewidth'] = 1.5
rcParams['xtick.major.size'] = 6
rcParams['ytick.major.size'] = 6
rcParams['xtick.labelsize'] = 12
rcParams['ytick.labelsize'] = 12
plt.xlabel(r'$\log{\,r}\ (\mathrm{kpc})$', fontsize=18)
plt.ylabel(r'$V_{esc}\ (\mathrm{km\, s}^{-1})$', fontsize=18)
plt.plot(lgr, Vesc, 'k-')
plt.plot(lgr, Vesc_k, 'b-')
plt.show()
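As a quick sanity check on the unit convention (a sketch with an assumed point mass, not part of the model above): for a point mass $M$ the escape velocity is $\sqrt{2GM/r}$, which the `-2*pot*kms2` expression reproduces.

```python
import numpy as np

kms2 = 4.30e-6  # G in (km/s)^2 kpc / Msun, same constant as above
M = 1e11        # Msun (assumed test mass, for illustration only)
r = 10.0        # kpc
pot = -M / r                        # point-mass potential, in Msun/kpc
vesc = np.sqrt(-2.0 * pot * kms2)   # escape velocity in km/s
print(round(vesc, 1))               # about 293 km/s
```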
# +
plt.figure(figsize=(6,4))
plt.xlim(0, 100)
plt.ylim(200, 800)
plt.xlabel(r'$r\ (\mathrm{kpc})$', fontsize=18)
plt.ylabel(r'$V_{esc}\ (\mathrm{km\, s}^{-1})$', fontsize=18)
plt.plot(r, Vesc, 'k-')
plt.plot(r, Vesc_k, 'b-')
plt.show()
# -
for lev in [ -3, -2, -1, 0., 1., 2. ]:
    l = np.fabs(lgr-lev) < 0.001
    print(r[l], Vesc[l], Vesc_k[l])
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Scikit-Learn (sklearn)
# This notebook demonstrates some of the most useful functions of the beautiful Scikit-Learn library.
#
# What we're going to cover:
#
# 0. An end-to-end Scikit-Learn workflow
# 1. Getting the data ready
# 2. Choose the right estimator/algorithm/model for our problems
# 3. Fit the model and use it to make predictions on our data
# 4. Evaluating a model
# 5. Improve the model
# 6. Save and load a trained model
# 7. Putting it all together
# ## 0. An end-to-end Scikit-Learn workflow
# 1. Get the data ready
import pandas as pd
import numpy as np
heart_disease = pd.read_csv('../3. Matplotlib/heart-disease.csv')
heart_disease
# +
# Create X (features matrix)
X = heart_disease.drop('target', axis=1) # everything except target
# Create Y (labels)
y = heart_disease['target']
# +
# 2. Choose the right model and hyperparameters
# RandomForestClassifier is an inbuilt CLASSIFICATION ML model
from sklearn.ensemble import RandomForestClassifier
# instantiating the classifier object
clf = RandomForestClassifier()
# We'll keep the default hyperparameters
clf.get_params()
# +
# 3. Fit the model to the training data
from sklearn.model_selection import train_test_split
# Splitting the data (80:20 train:test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# -
# telling classifier model to find pattern in training data
clf.fit(X_train, y_train)
# +
# make a prediction on test data
y_preds = clf.predict(X_test)
y_preds
# -
# 4. Evaluate the model on training and test data
clf.score(X_train, y_train) # model was trained on X_train and y_train, hence a score of (or near) 1
clf.score(X_test, y_test)
# +
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
print(classification_report(y_test, y_preds))
# -
confusion_matrix(y_test, y_preds)
accuracy_score(y_test, y_preds)
# +
# 5. Improve the model
# Try different amount of n_estimators(hyperparameters, we can tune)
np.random.seed(32)
for i in range(10, 100, 10):
    print(f"Trying model with {i} estimators... ")
    clf = RandomForestClassifier(n_estimators=i).fit(X_train, y_train)
    print(f"Model accuracy on test set: {clf.score(X_test, y_test) * 100: .2f}%")
    print("")
# -
# 6. Save a model and load it
import pickle
pickle.dump(clf, open('random_forest_model_1.pkl', "wb"))
loaded_model = pickle.load(open('random_forest_model_1.pkl', 'rb'))
loaded_model.score(X_test, y_test)
# ## 1. Getting the Data ready to be used with ML
#
# Three main things we have to do:
# 1. Split the data into features (X) and labels (y)
# 2. Fill (aka impute) or disregard missing values
# 3. Convert non-numerical values to numerical values, aka feature encoding
heart_disease.head()
X = heart_disease.drop('target', axis=1)
X.head()
y = heart_disease['target']
y.head()
# 1. Split the data into train and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# ## Data Science Quick Tip: Clean, Transform, Reduce
#
# #### Clean Data >> Transform Data >> Reduce Data
# 1. Clean Data - remove missing data (or fill it), remove outliers, etc.
# 2. Transform Data - convert data to numbers and consistent units
# 3. Reduce Data - more data costs more to store and process, so if we can get the same result from less data, that's better
# ### 1.1 Make sure all data are numerical
car_sales = pd.read_csv('./car-sales-extended.csv')
car_sales
len(car_sales)
car_sales.dtypes
# +
# Split into X/y
X = car_sales.drop('Price', axis=1)
y = car_sales['Price']
# Split into training and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# +
# Build ML model (Regression model)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
# model.fit(X_train, y_train)
# model.score(X_test, y_test)
"""ML model can't deal with strings so we need to convert it into numbers
that's why we will get an error if you run above 2 lines. """
# -
# <img src='./one_hot_encoding.png' />
# +
# Turn the categories into numbers
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
# categorising data
categorical_features = ['Make', 'Colour', 'Doors']
one_hot = OneHotEncoder()
transformer = ColumnTransformer([('one_hot',
one_hot,
categorical_features)],
remainder='passthrough')
transformed_X = transformer.fit_transform(X)
transformed_X
# -
# The odometer column hasn't been changed
pd.DataFrame(transformed_X)
# #### 12 distinct values across those 3 columns, hence 12 one-hot columns (0 to 11) in transformed_X
# sum of all different values in Make, Colour and Doors column
sum([len(X['Make'].unique()), len(X['Colour'].unique()), len(X['Doors'].unique())])
# A cheap alternative to one-hot encoding:
# pd.get_dummies creates a 0/1 indicator column for each category
dummies = pd.get_dummies(car_sales[['Make', 'Colour', 'Doors']])
dummies
# +
# Let's refit the model
np.random.seed(7)
# y can be same since it's numerical already
X_train, X_test, y_train, y_test = train_test_split(transformed_X, y, test_size=0.2)
model.fit(X_train, y_train)
# -
# the features aren't very predictive of price, hence the low score
model.score(X_test, y_test)
# ### 1.2 What if there were missing values?
#
# #### Two Ways to handle missing data :
# 1. Imputation - fill all missing data with some value
# 2. Remove all the missing data.
# Import car sales missing data
car_sales_missing = pd.read_csv('./car-sales-extended-missing.csv')
car_sales_missing
# find all missing data in each column
car_sales_missing.isna().sum()
# Create X/y
X = car_sales_missing.drop('Price', axis=1)
y = car_sales_missing['Price']
# +
# Let's try to convert data into numbers
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
# categorising data
categorical_features = ['Make', 'Colour', 'Doors']
one_hot = OneHotEncoder()
transformer = ColumnTransformer([('one_hot',
one_hot,
categorical_features)],
remainder='passthrough')
# transformed_X = transformer.fit_transform(X)
# transformed_X
"""We need to do something with the missing data first, otherwise we
will get error in above line. """
# -
# #### Option 1: Fill missing values using Pandas
car_sales_missing['Make'].fillna('missing', inplace=True)
car_sales_missing['Colour'].fillna('missing', inplace=True)
car_sales_missing['Odometer (KM)'].fillna(car_sales_missing['Odometer (KM)'].mean(),
inplace=True)
car_sales_missing['Doors'].fillna(4, inplace=True)
# check our data again
car_sales_missing.isna().sum()
# Remove the rows with missing price value
car_sales_missing.dropna(inplace=True)
car_sales_missing.isna().sum()
len(car_sales_missing)
# Create X/y
X = car_sales_missing.drop('Price', axis=1)
y = car_sales_missing['Price']
# +
# Let's try to convert data into numbers
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
# categorising data
categorical_features = ['Make', 'Colour', 'Doors']
one_hot = OneHotEncoder()
transformer = ColumnTransformer([('one_hot',
one_hot,
categorical_features)],
remainder='passthrough')
transformed_X = transformer.fit_transform(car_sales_missing)
transformed_X
# -
pd.DataFrame(transformed_X)
# #### Option 2: Fill missing data using sklearn
#
# ##### ***Important: we should fill missing values in the train and test data separately, because the fill statistics (e.g. the mean) should come from the training data only, not the whole sample; otherwise information leaks from the test set into training. I didn't do this in either the pandas or the scikit-learn version, but I should in real problems.
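A minimal sketch of the leakage-safe pattern, using an assumed toy array (not the car-sales data): fit the imputer on the training split only, then reuse those learned statistics on the test split.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split

X = np.array([[1.0], [2.0], [np.nan], [4.0], [5.0], [np.nan]])
X_train, X_test = train_test_split(X, test_size=0.33, random_state=0)

imputer = SimpleImputer(strategy='mean')
X_train_filled = imputer.fit_transform(X_train)  # mean computed from train only
X_test_filled = imputer.transform(X_test)        # same train mean applied to test
print(np.isnan(X_test_filled).any())             # False: no missing values remain
```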
car_sales_missing = pd.read_csv('./car-sales-extended-missing.csv')
car_sales_missing
car_sales_missing.isna().sum()
# Drop rows with a missing Price value
car_sales_missing.dropna(subset=['Price'], inplace=True)
car_sales_missing.isna().sum()
# Split into X/y
X = car_sales_missing.drop("Price", axis=1)
y = car_sales_missing['Price']
# +
# fill nan with sklearn
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
# fill categorical column missing value with 'missing'
cat_imputer = SimpleImputer(strategy='constant', fill_value='missing')
# fill door missing value with 4
door_imputer = SimpleImputer(strategy='constant', fill_value=4)
# fill num column missing value with mean
num_imputer = SimpleImputer(strategy='mean')
# Define columns
cat_features = ['Make', 'Colour']
door_features = ['Doors']
num_features = ['Odometer (KM)']
# Create an imputer (that fills the missing data)
imputer = ColumnTransformer([
('cat_imputer', cat_imputer, cat_features),
('door_imputer', door_imputer, door_features),
('num_imputer', num_imputer, num_features)
])
# Transform the Data
filled_X = imputer.fit_transform(X)
filled_X
# -
# The output array has no column names, so we need to add them
car_sales_filled = pd.DataFrame(filled_X,
columns=['Make', 'Colour', 'Doors', 'Odometer (KM)'])
car_sales_filled
# check missing data
car_sales_filled.isna().sum()
# +
# Let's try to convert data into numbers
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
# categorising data
categorical_features = ['Make', 'Colour', 'Doors']
one_hot = OneHotEncoder()
transformer = ColumnTransformer([('one_hot',
one_hot,
categorical_features)],
remainder='passthrough')
transformed_X = transformer.fit_transform(car_sales_filled)
transformed_X
# +
# Now we've got our data as numbers with no missing value
# Let's train our data now
np.random.seed(8)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(transformed_X, y,
test_size=0.2)
model = RandomForestRegressor()
model.fit(X_train, y_train)
model.score(X_test, y_test)
# -
# we have fewer rows in car_sales_filled than in car_sales, hence the lower score
len(car_sales_filled), len(car_sales)
# ## 2. Choose the right estimator/algorithm for our problem
#
# ##### Scikit-learn uses estimator as another term for machine learning model or algorithm.
#
# - Types of problems we will be looking at:
# 1. Classification: predicting whether a sample is one thing or another (heart disease problem)
# 2. Regression: predicting a number (car sales problem)
#
# -- Both of these come under supervised learning.
#
# <a href='https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html'>Choosing the right model</a>
# ### 2.1 Picking a ML model for a regression problem
# <a href='https://scikit-learn.org/stable/datasets/index.html#boston-dataset'>more about boston houseprice dataset</a>
# Import Boston housing dataset (inbuilt in sklearn up to version 1.1; removed in 1.2)
from sklearn.datasets import load_boston
boston = load_boston()
boston
boston_df = pd.DataFrame(boston['data'], columns=boston['feature_names'])
boston_df['target'] = pd.Series(boston['target'])
boston_df
# How many samples
len(boston_df)
# +
# According to sklearn map, we need ridge regression for our problem
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
# setup random seed
np.random.seed(9)
# Create the data
X = boston_df.drop('target', axis=1)
y = boston_df['target']
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate Ridge model
ridge = Ridge()
ridge.fit(X_train, y_train)
# Check the ridge model on test data
ridge.score(X_test, y_test)
# -
# How do we IMPROVE the score ?
#
# What if Ridge WASN'T WORKING ??
# +
# If Ridge isn't working well, we can try an ensemble method (RandomForest)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
# Setup random seed
np.random.seed(10)
# Create the data
X = boston_df.drop('target', axis=1)
y = boston_df['target']
# Split into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate Random Forest Regressor model
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
# Check the Random Forest Regressor model on test data
rfr.score(X_test, y_test)
# -
# ##### Quick Tip: How ML Algorithms Work
# A Random Forest Regressor is an ensemble of decision trees. Basically, a decision tree builds a lot of `if/else` splits to figure out the output, and the forest averages many such trees.
# ### 2.2 Picking a ML model for a classification problem
# Getting data
heart_disease
len(heart_disease)
# +
# According to the sklearn map, we should try Linear SVC
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
# Setup random seed
np.random.seed(17)
# Make the data
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate LinearSVC
lsvc = LinearSVC(max_iter=10000)
lsvc.fit(X_train, y_train)
# Check the score
lsvc.score(X_test, y_test)
# -
# How do we IMPROVE the score ?
#
# What if Linear SVC WASN'T WORKING ??
# +
# According to the sklearn map, KNeighbors comes next, but we're skipping it
# and going straight to an ensemble classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Setup random seed
np.random.seed(34)
# Make the data
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
# Check the score
rfc.score(X_test, y_test)
# -
# ##### Quick Tip:
# 1. If you have structured data (e.g. a DataFrame), ensemble methods work best.
# 2. If you have unstructured data, use deep learning or transfer learning
# ## 3. Fit the model and make predictions on data
# ### 3.1 Fitting the model to the data
#
# Different names for:
# * `X`=features, feature variables, data
# * `y`=labels, targets, target variables
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Setup random seed
np.random.seed(34)
# Make the data
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train) ## THIS PART IS FITTING THE MODEL
# Check the score
rfc.score(X_test, y_test)
# -
# fitting is telling our model to find patterns in X... (continued in next cell)
X.head()
# so that we get a particular y.
y.head()
# ### 3.2 Make predictions using ML model
#
# 2 Main Ways to make predictions:
# 1. `predict()`
# 2. `predict_proba()`
# #### 1. `predict()`
# Use a trained model to make predictions
rfc.predict(X_test)
# check with y_test
np.array([y_test])
# Compare predictions to truth labels to evalute the model
y_preds = rfc.predict(X_test)
np.mean(y_preds == y_test)
rfc.score(X_test, y_test) ## same thing as above cell
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_preds) ## again same thing
# #### 2. `predict_proba()`
#
# predict_proba() returns the probability of each `classification` label;
# in simple words, it returns the probability of each of the different target values being true.
#
# `predict_proba()` is really good at telling us whether our model is working properly with the data we have.
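A quick check of the relationship (a sketch on assumed synthetic data via `make_classification`, not the heart-disease data): `predict()` simply picks the class whose `predict_proba()` probability is highest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

probs = clf.predict_proba(X[:10])   # shape (10, 2): P(class 0), P(class 1)
preds = clf.predict(X[:10])
print((preds == clf.classes_[np.argmax(probs, axis=1)]).all())  # True
```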
rfc.predict_proba(X_test[:5])
# The first row means an 11% chance that target = 0 and an 89% chance that target = 1,
# and so on...
# lets predict on same data
rfc.predict(X_test[:5])
# target is either 1 or 0
heart_disease['target'].unique()
# `predict()` can also be used for regression models.
# +
from sklearn.ensemble import RandomForestRegressor
np.random.seed(19)
# Create the data
X = boston_df.drop('target', axis=1)
y = boston_df['target']
# Split into train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate and fit the model
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
# Make predictions
y_preds = rfr.predict(X_test) # regressors do not have predict_proba()
# -
y_preds[:10]
np.array(y_test[:10])
# +
# Compare the predictions to the truth
from sklearn.metrics import mean_absolute_error
# mean of the absolute errors: basically, the average difference between
# y_preds and y_test
mean_absolute_error(y_test, y_preds)
# -
# ## 4. Evaluating a ML model
#
# Three ways to evaluate Scikit-Learn models/estimator :
# 1. Estimator `score` method
# 2. The `scoring` parameter
# 3. Problem-specific metric functions
#
# <a href='https://scikit-learn.org/stable/modules/model_evaluation.html' > Evaluate the model</a>
# ### 4.1 Evaluating model with `Score` method
# +
## for Classification Problem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Setup random seed
np.random.seed(110)
# Make the data
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
# -
# Check the score
rfc.score(X_train, y_train)
# Return the MEAN ACCURACY on the given test data and labels.
rfc.score(X_test, y_test)
# +
## for Regression model
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
np.random.seed(19)
# Create the data
X = boston_df.drop('target', axis=1)
y = boston_df['target']
# Split into train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate and fit the model
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
# -
# Return the COEFFICIENT OF DETERMINATION R^2 of the predictions
rfr.score(X_test, y_test)
# ### 4.2 Evaluating a model using the `Scoring` parameter
# <img src='./cross_validation.png' />
# +
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Setup random seed
np.random.seed(110)
# Make the data
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
# -
rfc.score(X_test, y_test)
# cv=5 (the default) splits the data into 5 different folds, so that across the folds the model is trained on and evaluated against all of the data.
# The reason for using cross-validation is that a single split might get lucky: the test set could happen to contain data very similar to the training data.
cross_val_score(rfc, X, y, cv=5)
# +
np.random.seed(47)
# Single training and test split score
rfc_single_score = rfc.score(X_test, y_test)
# Take the mean of 5-fold cross-validation score
rfc_cross_val_score = np.mean(cross_val_score(rfc, X, y))
# Compare the two
rfc_single_score, rfc_cross_val_score
# -
# The scoring parameter is set to None by default, so...
# the default 'scoring' of a classifier is MEAN ACCURACY
cross_val_score(rfc, X, y, scoring=None)
# ### 4.2.1 Classification model Evaluation metrics
#
# 1. Accuracy
# 2. Area under ROC curve
# 3. Confusion matrix
# 4. Classification Report
#
# **1. Accuracy**
# +
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
np.random.seed(24)
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
rfc = RandomForestClassifier()
cv_score = cross_val_score(rfc, X, y, cv=5)
cv_score
# -
np.mean(cv_score)
print(f"Heart Disease Classifier Cross-Validated Accuracy:{np.mean(cv_score) * 100: .2f}%")
# **2. Area Under ROC (Receiver Operating Characteristic) Curve**
#
# ROC curves compare a model's true positive rate (tpr) with its false positive rate (fpr).
#
# - True Positive = model predicts 1 when truth is 1
# - False positive = model predicts 1 when truth is 0
# - True Negative = model predicts 0 when truth is 0
# - False Negative = model predicts 0 when truth is 1
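The four outcomes above can be counted by hand. A minimal sketch with assumed toy labels (not the heart-disease predictions):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # predicts 1, truth 1
fp = np.sum((y_pred == 1) & (y_true == 0))  # predicts 1, truth 0
tn = np.sum((y_pred == 0) & (y_true == 0))  # predicts 0, truth 0
fn = np.sum((y_pred == 0) & (y_true == 1))  # predicts 0, truth 1

tpr = tp / (tp + fn)  # true positive rate
fpr = fp / (fp + tn)  # false positive rate
print(tpr, fpr)  # 0.75 0.25
```

`roc_curve` below computes these same rates at many probability thresholds.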
# Splitting data into train/test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# +
# import roc curve
from sklearn.metrics import roc_curve
# Fit the classifier
rfc.fit(X_train, y_train)
# Make predictions with probabilities
y_probs = rfc.predict_proba(X_test)
y_probs[:10]
# -
# we only need the positive-class probabilities (predicting 1) for the ROC curve
y_probs_positive = y_probs[:, 1]
y_probs_positive[:10]
# +
# Calculate fpr, tpr and thresholds
fpr, tpr, thresholds = roc_curve(y_test, y_probs_positive)
# Check the false positive rates
fpr
# +
# Create a function for plotting ROC curves
import matplotlib.pyplot as plt
def plot_roc_curve(fpr, tpr):
    """
    Plots a ROC curve given a false positive rate (fpr) and a true
    positive rate (tpr) of a model.
    """
    # Plot ROC curve
    plt.plot(fpr, tpr, color='orange', label='ROC')
    # Plot line with no predictive power (baseline)
    plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--', label='Guessing')
    # Customize the plot
    plt.xlabel('False positive rate (fpr)')
    plt.ylabel('True positive rate (tpr)')
    plt.title("Receiver Operating Characteristic (ROC) Curve")
    plt.legend()
    return plt.show()

plot_roc_curve(fpr, tpr)
# +
from sklearn.metrics import roc_auc_score
# this is the fraction of total area under the orange ROC curve (AUC)
roc_auc_score(y_test, y_probs_positive)
# -
# Plot perfect ROC curve and AUC score
fpr, tpr, thresholds = roc_curve(y_test, y_test)
plot_roc_curve(fpr, tpr)
# Perfect AUC score
roc_auc_score(y_test, y_test)
# **3. Confusion Matrix**
#
# A confusion matrix is a quick way to compare the labels a model predicts with the actual labels it was supposed to predict.
#
# In essence, giving an idea of where the model is getting confused.
# <img src='./confusion_matrix.png' />
# +
from sklearn.metrics import confusion_matrix
y_preds = rfc.predict(X_test)
confusion_matrix(y_test, y_preds)
# -
# Visualize confusion matrix with pd.crosstab()
pd.crosstab(y_test, y_preds,
rownames=['Actual Label'],
colnames=['Predicted Labels'])
29 + 6 + 1 + 25
len(X_test)
# <a href='https://seaborn.pydata.org/generated/seaborn.heatmap.html' >Seaborn Heatmap</a>
# +
# Make our confusion matrix more visual with Seaborn's heatmap()
import seaborn as sns
# set the font scale
sns.set(font_scale=1.5)
# Create a confusion matrix
conf_mat = confusion_matrix(y_test, y_preds)
# Plot it using seaborn
sns.heatmap(conf_mat);
# -
# **Quick Tip: Install package directly from jupyter notebook**
#
# `import sys
# # !conda install --yes --prefix {sys.prefix} <package name>`
# +
def plot_conf_mat(conf_mat):
    """
    Plots a confusion matrix using Seaborn's heatmap().
    """
    fig, ax = plt.subplots(figsize=(3, 3))
    ax = sns.heatmap(conf_mat,
                     annot=True,  # Annotate the boxes with conf_mat info
                     cbar=False)
    plt.xlabel('Predicted Label')
    plt.ylabel('True Label');

plot_conf_mat(conf_mat)
# +
from sklearn.metrics import plot_confusion_matrix  # removed in scikit-learn 1.2; newer versions use ConfusionMatrixDisplay.from_estimator
plot_confusion_matrix(rfc, X, y); # for total data
# -
132 + 6 + 1 + 164
len(heart_disease)
# **4. Classification Report**
# <img src='./classification_report.png' />
#
# - Class imbalance occurs when one target class (say 0) is far more frequent than the other (say 1).
#
# Ex: a dataset with 200 samples labelled 0 and only 10 labelled 1.
# +
from sklearn.metrics import classification_report
print(classification_report(y_test, y_preds))
# +
# Where precision and recall become valuable
disease_true = np.zeros(10000)
disease_true[0] = 1 # only 1 positive case
disease_preds = np.zeros(10000) # model predict every case as 0
pd.DataFrame(classification_report(disease_true,
disease_preds,
output_dict=True))
# -
# As we can see above, precision, recall and f1-score for class 1 are all 0,
# because the predicted array does not contain a single 1.
# Yet the accuracy is 0.9999, which looks great until we notice that
# the model only ever predicts one class (0).
# ### 4.2.2 Regression Model Evaluation Metrics
#
# 1. R^2 (r-squared) or coefficient of determination
# 2. Mean Absolute Error (MAE)
# 3. Mean Squared Error (MSE)
# **1. R^2 (r-squared) or coefficient of determination**
#
# Compares your model's predictions to the mean of the targets. Values can range from negative infinity (a very poor model) to 1. For example, if all your model does is predict the mean of the targets, its R^2 value would be 0. And if your model perfectly predicts each and every value, its R^2 value would be 1.
# +
from sklearn.ensemble import RandomForestRegressor
np.random.seed(57)
X = boston_df.drop('target', axis=1)
y = boston_df['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
rfr.score(X_test, y_test)
# +
from sklearn.metrics import r2_score
# Fill an array with y_test mean
y_test_mean = np.full(len(y_test), y_test.mean())
# -
y_test.mean()
y_test_mean[:10]
# predicting the mean for every sample gives R^2 = 0
r2_score(y_test, y_test_mean)
# a model that predicts every value perfectly gives R^2 = 1
r2_score(y_test, y_test)
# **2. Mean Absolute Error (MAE)**
#
# It is the average of absolute differences between predictions and actual values. It gives you an idea of how wrong your model's predictions are.
# +
# Mean absolute error
from sklearn.metrics import mean_absolute_error
y_preds = rfr.predict(X_test)
mae = mean_absolute_error(y_test, y_preds)
mae
# -
df = pd.DataFrame(data={'actual values': y_test,
'predicted values': y_preds})
df['differences'] = abs(df['predicted values'] - df['actual values'])
df
# same as MAE
df['differences'].mean()
# **3. Mean Squared Error (MSE)**
# +
from sklearn.metrics import mean_squared_error
y_preds = rfr.predict(X_test)
mse = mean_squared_error(y_test, y_preds)
mse
# -
# Calculate MSE by hand
df['squared differences'] = np.square(df['differences'])
df
# same as MSE
df['squared differences'].mean()
# <img src='./regression_metrics.png' />
#
# - We want to minimize MAE and MSE while maximizing R^2.
# ### 4.2.3 Finally using the `Scoring Parameter`
# +
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
np.random.seed(87)
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
rfc = RandomForestClassifier()
# -
np.random.seed(87)
cv_acc = cross_val_score(rfc, X, y, scoring=None)
cv_acc
# Cross-validated accuracy
print(f"The cross-validated accuracy is: {np.mean(cv_acc)*100:.2f}")
# when scoring is set to None, cross_val_score uses the estimator's default
# metric, which is accuracy for a classifier.
np.random.seed(87)
cv_acc = cross_val_score(rfc, X, y, scoring='accuracy')
print(f"The cross-validated accuracy is: {np.mean(cv_acc)*100:.2f}")
# precision
cv_pres = cross_val_score(rfc, X, y, scoring='precision')
np.mean(cv_pres)
# recall
cv_recall = cross_val_score(rfc, X, y, scoring='recall')
np.mean(cv_recall)
# f1-score
cv_f1 = cross_val_score(rfc, X, y, scoring='f1')
np.mean(cv_f1)
# How about our regression model ?
# +
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
np.random.seed(29)
X = boston_df.drop('target', axis=1)
y = boston_df['target']
rfr = RandomForestRegressor()
# -
np.random.seed(29)
cv_r2 = cross_val_score(rfr, X, y, scoring=None)
np.mean(cv_r2)
# when scoring is set to None, cross_val_score uses the estimator's default
# metric, which is R^2 for a regressor.
np.random.seed(29)
cv_r2 = cross_val_score(rfr, X, y, scoring='r2')
np.mean(cv_r2)
# All scorer objects follow the convention that **higher return values are better than lower return values**. Thus metrics which measure the distance between the model and the data, like `metrics.mean_squared_error`, are available as `neg_mean_squared_error` which return the **negated value of the metric**.
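The negation convention can be checked directly. A sketch on assumed synthetic data via `make_regression` (not the Boston data): the `neg_mean_absolute_error` scorer returns negated MAE values, so negating them again recovers the usual positive error.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, noise=10, random_state=0)
scores = cross_val_score(LinearRegression(), X, y,
                         scoring='neg_mean_absolute_error', cv=3)
print((scores <= 0).all())   # True: the scorer reports negated MAE values
mae = -scores.mean()         # negate to get the usual positive MAE
```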
# Mean Absolute Error
cv_mae = cross_val_score(rfr, X, y, scoring='neg_mean_absolute_error')
cv_mae
# Mean Squared Error
cv_mse = cross_val_score(rfr, X, y, scoring='neg_mean_squared_error')
cv_mse
# ### 4.3 Problem-specific metric functions
#
# **Classification Evaluation Functions**
# +
from sklearn.metrics import accuracy_score, precision_score, f1_score, recall_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
np.random.seed(67)
X = heart_disease.drop('target', axis=1)
y = heart_disease['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
# Make some predictions
y_preds = rfc.predict(X_test)
# Evaluate the classifier
print('Classification metrics on the test set:')
print(f"Accuracy: {accuracy_score(y_test, y_preds)*100:.2f}")
print(f"Precision: {precision_score(y_test, y_preds)}")
print(f"Recall: {recall_score(y_test, y_preds)}")
print(f"F1-score: {f1_score(y_test, y_preds)}")
# -
# **Regression Evaluation Functions**
# +
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
np.random.seed(44)
X = boston_df.drop('target', axis=1)
y = boston_df['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
# Make predictions using regression model
y_preds = rfr.predict(X_test)
# Evaluate the regression model
print('Regression metrics on the test set:')
print(f"R^2: {r2_score(y_test, y_preds)}")
print(f"MAE: {mean_absolute_error(y_test, y_preds)}")
print(f"MSE: {mean_squared_error(y_test, y_preds)}")
# -
# ## 5. Improving the model
#
# First predictions = baseline predictions,
# First model = baseline model
#
# From a data perspective:
# * Could we collect more data ? (generally, the more data, the better)
# * Could we improve our data ?
#
# From a model perspective:
# * Is there a better model we can use ?
# * Could we improve the current model ?
#
# Hyperparameters vs. parameters
# * Parameters = patterns the model learns from the data
# * Hyperparameters = settings on a model we can adjust to (potentially) improve its ability to find patterns
#
# *Hyperparameters are set before training; parameters are learned during training.*
#
# Three ways to adjust hyperparameters:
# 1. By hand
# 2. Randomly, with RandomizedSearchCV
# 3. Exhaustively, with GridSearchCV
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
# Returns all the hyperparameters with their default values
rfc.get_params()
# ### 5.1 Tuning hyperparameters by hand
# <img src='./by hand.png'/>
#
# We are going to try and adjust:
# - `max_depth`
# - `max_features`
# - `min_samples_leaf`
# - `min_samples_split`
# - `n_estimators`
def evaluate_preds(y_true, y_preds):
"""
Performs evaluation comparison on y_true labels vs. y_preds labels
on a classification model.
"""
accuracy = accuracy_score(y_true, y_preds)
precision = precision_score(y_true, y_preds)
recall = recall_score(y_true, y_preds)
f1 = f1_score(y_true, y_preds)
metric_dict = {
'accuracy': round(accuracy, 2),
'precision': round(precision, 2),
'recall': round(recall, 2),
'f1': round(f1, 2)
}
print(f"Accuracy: {accuracy*100:.2f}%")
print(f"Precision: {precision:.2f}")
print(f"Recall: {recall:.2f}")
print(f"F1-score: {f1:.2f}")
return metric_dict
# +
from sklearn.ensemble import RandomForestClassifier
np.random.seed(32)
# Shuffle the data
heart_disease_shuffle = heart_disease.sample(frac=1)
# Split into X/y
X = heart_disease_shuffle.drop('target', axis=1)
y = heart_disease_shuffle['target']
# Split into TRAIN, VALIDATION AND TEST SETS
train_split = round(0.7 * len(heart_disease_shuffle))
valid_split = round(train_split + 0.15 * len(heart_disease_shuffle))
X_train, y_train = X[:train_split], y[:train_split]
X_valid, y_valid = X[train_split: valid_split], y[train_split: valid_split]
X_test, y_test = X[valid_split:], y[valid_split:]
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
# Make baseline predictions
y_preds = rfc.predict(X_valid)
# Evaluate the classifier on validation set
baseline_metrics = evaluate_preds(y_valid, y_preds)
baseline_metrics
# +
np.random.seed(22)
# Create a second classifier with different hyperparameters
rfc2 = RandomForestClassifier(n_estimators=120)
rfc2.fit(X_train, y_train)
# Make predictions with different hyperparameters
y_preds2 = rfc2.predict(X_valid)
# Evaluate the 2nd classifier
rfc2_metrics = evaluate_preds(y_valid, y_preds2)
rfc2_metrics
'''Not as good as model 1.'''
# +
np.random.seed(221)
# Create a third classifier with different hyperparameters
rfc3 = RandomForestClassifier(n_estimators=100, max_depth=10)
rfc3.fit(X_train, y_train)
# Make predictions with different hyperparameters
y_preds3 = rfc3.predict(X_valid)
# Evaluate the 3rd classifier
rfc3_metrics = evaluate_preds(y_valid, y_preds3)
rfc3_metrics
'''Better than the 2nd model, but not as good as the first one.'''
# -
# ### 5.2 Hyperparameter tuning with RandomizedSearchCV
#
# Tuning all the hyperparameters by hand and finding the right model takes a lot
# of work and time.
# +
from sklearn.model_selection import RandomizedSearchCV
grid = {
'max_depth': [None, 5, 10, 20 ,30],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 4, 6],
'n_estimators': [10, 100, 200, 500, 1000, 1200]
}
np.random.seed(34)
# Split into X/y
X = heart_disease_shuffle.drop('target', axis=1)
y = heart_disease_shuffle['target']
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate Random Forest Classifier
rfc = RandomForestClassifier(n_jobs=1) # n_jobs = number of CPU cores to use (-1 means all)
# Setup randomizedSearchCV
rs_rfc = RandomizedSearchCV(estimator=rfc,
param_distributions=grid,
n_iter=10, # number of models
cv=5, # cross validation
verbose=2)
# Fit the randomizedSearchCv version of rfc
rs_rfc.fit(X_train, y_train)
# -
# Best values found for the hyperparameters
rs_rfc.best_params_
# +
# Make predictions with the best hyperparameters
rs_y_preds = rs_rfc.predict(X_test)
# Evaluate the predictions
rs_metrics = evaluate_preds(y_test, rs_y_preds)
'''As we can see here, you won't always find an improvement after doing
RandomizedSearchCV.'''
# -
# ### 5.3 Hyperparameter tuning with GridSearchCV
#
# - The key difference between RandomizedSearchCV and GridSearchCV is that in RandomizedSearchCV, n_iter (10 here) sets the number of models to try, while GridSearchCV goes through every possible combination of hyperparameters to find the best-fitting model. I.e., the total number of combinations for GridSearchCV here is
# `5 * 2 * 3 * 3 * 6 = 540`, and each one is cross-validated, so in total there are `540 * 5(cv) = 2700` fits.
grid
5 * 2 * 3 * 3 * 6 * 5
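The fit count can also be computed directly from the grid instead of by hand. A minimal sketch (the grid values are copied from the cell above):

```python
from math import prod

grid = {
    'max_depth': [None, 5, 10, 20, 30],
    'max_features': ['auto', 'sqrt'],
    'min_samples_leaf': [1, 2, 4],
    'min_samples_split': [2, 4, 6],
    'n_estimators': [10, 100, 200, 500, 1000, 1200]
}

# One candidate model per combination of hyperparameter values
n_combinations = prod(len(values) for values in grid.values())
# Each combination is fit once per cross-validation fold
n_fits = n_combinations * 5
print(n_combinations, n_fits)
```

scikit-learn exposes the same enumeration as `sklearn.model_selection.ParameterGrid`.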
# Reducing the number of different values for the hyperparameters, because
# GridSearchCV will fit a model for each combination.
grid_2 = {
'max_depth': [None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2],
'min_samples_split': [6],
'n_estimators': [100, 200, 500]
}
# total number of fits (12 combinations * 5 CV folds)
1 * 2 * 2 * 1 * 3 * 5
# +
from sklearn.model_selection import GridSearchCV, train_test_split
np.random.seed(39)
# Split into X/y
X = heart_disease_shuffle.drop('target', axis=1)
y = heart_disease_shuffle['target']
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate Random Forest Classifier
rfc = RandomForestClassifier(n_jobs=1) # n_jobs = number of CPU cores to use (-1 means all)
# Setup gridSearchCV
gs_rfc = GridSearchCV(estimator=rfc,
param_grid=grid_2,
cv=5, # cross validation
verbose=2)
# Fit the gridSearchCV version of rfc
gs_rfc.fit(X_train, y_train)
# -
gs_rfc.best_params_
# +
# Make predictions with the best hyperparameters
gs_y_preds = gs_rfc.predict(X_test)
# Evaluate the predictions
gs_metrics = evaluate_preds(y_test, gs_y_preds)
'''It's better than RandomizedSearchCV and the by-hand method, but not as good as the first
model with all the default hyperparameters (the baseline model).'''
# -
# **Let's Compare our different model metrics**
# +
compare_metrics = pd.DataFrame({
'baseline': baseline_metrics,
'by hand': rfc2_metrics,
'random search': rs_metrics,
'grid search': gs_metrics
})
compare_metrics.plot.bar(figsize=(10, 6));
"""There is a major error in this graph: these different kinds of
hyperparameter tuning were trained on different train/test splits, which
is wrong. WE SHOULD USE THE SAME TRAIN/TEST DATA to compare different hyperparameters."""
# -
# #### Quick tips:
#
# 1. Correlation analysis: say there are two columns in our dataframe that are related to each other, so that when the value in column A goes up, the value in column B also goes up. They are correlated, and hence we can remove one of them from our analysis.
#
#
# 2. Forward/backward attribute selection: this follows from correlation analysis. Backward attribute selection means we train the model with all the available columns, then remove columns one at a time to see how that affects model performance. Forward attribute selection is the opposite: we train the model with a few columns and keep adding columns to see how that affects model performance.
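As an illustration of the correlation-analysis tip, here is a stdlib-only sketch that keeps just one column from each highly correlated pair (the 0.9 threshold and the sample columns are arbitrary choices, not taken from the dataset):

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

def drop_correlated(columns, threshold=0.9):
    """Return indices of columns to keep, dropping one of each correlated pair."""
    keep = []
    for j in range(len(columns)):
        # Keep column j only if it is not highly correlated with an already-kept column
        if all(abs(pearson(columns[j], columns[k])) < threshold for k in keep):
            keep.append(j)
    return keep

cols = [[1, 2, 3, 4, 5],    # column 0
        [2, 4, 6, 8, 10],   # column 1: perfectly correlated with column 0
        [5, 1, 4, 2, 3]]    # column 2: weakly correlated with column 0
print(drop_correlated(cols))  # column 1 is dropped
```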
# ## 6. Saving and loading a trained model
#
# Two ways to do it:
# 1. with Python's `pickle` module
# 2. With the `joblib` module
#
# <a href='https://scikit-learn.org/stable/modules/model_persistence.html' >For More</a>
# **1. `Pickle`**
# +
import pickle
# Save an existing model to file
pickle.dump(gs_rfc, open("./trained model/grid_search_rfc_model.pkl", 'wb'))
# -
# Load a saved model
loaded_pickle_model = pickle.load(open("./trained model/grid_search_rfc_model.pkl", 'rb'))
# Make some predictions
pickle_y_preds = loaded_pickle_model.predict(X_test)
evaluate_preds(y_test, pickle_y_preds)
"""same as gs_metrics"""
# **2. `Joblib`**
# +
from joblib import dump, load
# Save model to file
dump(gs_rfc, filename="./trained model/grid_search_rfc_model.joblib")
# -
# Load a file
loaded_joblib_model = load(filename='./trained model/grid_search_rfc_model.joblib')
# Make predictions
joblib_y_preds = loaded_joblib_model.predict(X_test)
evaluate_preds(y_test, joblib_y_preds)
"""same as gs_metrics"""
# ## 7. Putting it all together!
#
# <img src='./to remember.png' />
#
# <a href='https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html'>
# Pipeline</a>
data = pd.read_csv('./car-sales-extended-missing.csv')
data
data.dtypes
data.isna().sum()
# **Steps we want to do (all in one cell):**
# 1. Fill missing data
# 2. Convert data to numbers
# 3. Build a model on the data
# +
# Getting data ready
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline # new one
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
# Modelling
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
# Setup random seed
import numpy as np
np.random.seed(42)
# Import data and drop rows with missing labels
data = pd.read_csv('./car-sales-extended-missing.csv')
data.dropna(subset=['Price'], inplace=True)
# Define different features and transformer pipeline
categorical_features = ['Make', 'Colour']
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
door_features = ['Doors']
door_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value=4))
])
num_features = ['Odometer (KM)']
num_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='mean'))
])
# Setup preprocessing steps (fill missing values, then convert to numbers)
preprocessor = ColumnTransformer(
transformers=[
('cat', categorical_transformer, categorical_features),
('door', door_transformer, door_features),
('num', num_transformer, num_features)
]
)
# Creating a preprocessing and modelling pipeline
model = Pipeline(steps=[('preprocessor', preprocessor),
('model', RandomForestRegressor())])
# Split data
X = data.drop("Price", axis=1)
y = data['Price']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Fit and score the model
model.fit(X_train, y_train)
model.score(X_test, y_test)
# -
# **It's also possible to use `GridSearchCV` or `RandomizedSearchCV` with our `Pipeline`**
# +
# Use GridSearchCV with our regression Pipeline
from sklearn.model_selection import GridSearchCV
pipe_grid = {
# preprocessor > num > num_transformer(imputer) > strategy
"preprocessor__num__imputer__strategy": ['mean', 'median'],
"model__n_estimators": [100, 1000],
"model__max_depth": [None, 5],
"model__max_features": ['auto'],
"model__min_samples_split": [2, 4]
}
gs_model = GridSearchCV(model, pipe_grid, cv=5, verbose=2)
gs_model.fit(X_train, y_train)
# -
# Better than the previous model
gs_model.score(X_test, y_test)
| Course/Complete Machine Learning and Data Science/Scikit-Learn/.ipynb_checkpoints/Introduction_to_scikit_learn-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="NR3tA_mpMLTq"
# <div class="alert alert-block alert-info"><b></b>
# <h1><center> <font color='black'> Homework 03 </font></center></h1>
# <h2><center> <font color='black'> Classification problems </font></center></h2>
# <h2><center> <font color='black'> Due date : 05 April 23:59 </font></center></h2>
#
# <h2><center> <font color='black'> BDA - University of Tartu - Spring 2020</font></center></h3>
# </div>
# + [markdown] colab_type="text" id="h-qwwDWhMLT-"
# # Homework instructions
# + [markdown] colab_type="text" id="AVDAIuJlMLUI"
# - Insert your team member names and student IDs in the field "Team mates" below. If you are not working in a team please insert only your name, surname and student ID
#
# - The accepted submission formats are Colab links or .ipynb files. If you are submitting Colab links, please make sure that the privacy settings for the file are set to public so we can access your code.
#
# - The submission will automatically close at 12:00 am, so please make sure you have enough time to submit the homework.
#
# - Only one of the teammates should submit the homework. We will grade and give points to both of you!
#
# - You do not necessarily need to work on Colab. Especially as the size and the complexity of datasets will increase through the course, you can install jupyter notebooks locally and work from there.
#
# - If you do not understand what a question is asking for, please ask in Moodle.
#
# + [markdown] colab_type="text" id="Ajm9qYjsMLUU"
# **<h2><font color='red'>Team mates:</font></h2>**
#
#
# <font color='red'>Name Surname: Enlik -</font>  <font color='red'>Student ID: YYYY</font>
#
#
# <font color='red'>Name Surname: XXXXX</font>  <font color='red'>Student ID: YYYY</font>
#
# + [markdown] colab_type="text" id="e3vqLGBCMLUh"
# # 1. Classification tasks and algorithms (8 points)
# + [markdown] colab_type="text" id="Sq8fuNbAMLUs"
# We are going to use the dataset from the file HR_Employee_Attrition.csv, which contains data about the employees of a company and whether they have left the company due to causes like retirement, resignation, elimination of a position, personal health, etc. It is important for companies to predict whether their employees are going to leave, because the hiring process is costly and requires planning. The data has the following columns:
#
#
# Age – self descriptive
#
# BusinessTravel – how frequent employee travels
#
# DailyRate – daily rate in terms of salary
#
# Department – self descriptive
#
# DistanceFromHome – distance between employee home and work
#
# Education – education level of employee
#
# EducationField – self descriptive
#
# EnvironmentSatisfaction – level of satisfaction with working environment
#
# Gender – self descriptive
#
# HourlyRate – self descriptive
#
# JobRole – self descriptive
#
# JobInvolvement – level of interest of the job
#
# JobSatisfaction – level of satisfaction with current job
#
# MaritalStatus – self descriptive
#
# MonthlyIncome – self descriptive
#
# MonthlyRate – self descriptive
#
# NumCompaniesWorked – self descriptive
#
# Over18 – whether the employee's age is more than 18
#
# OverTime – whether the employee works overtime or not
#
# PerformanceRating – performance level of employee
#
# RelationshipSatisfaction – level of satisfaction with working community
#
# StandardHours – standard amount of hours that employee works
#
# TotalWorkingYears – total number of years the employee has worked
#
# TrainingTimesLastYear – number of training sessions the employee attended last year
# + colab={} colab_type="code" id="Zjr3pwZCMLU6"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + colab={} colab_type="code" id="sUsbyScxMLVd"
hr_data = pd.read_csv('HR_Employee_Attrition.csv', header=0)
hr_data.head()
# + [markdown] colab_type="text" id="Is8x9Ju4MLWJ"
# ## 1.1 Dataset exploration (1.6 points)
# + [markdown] colab_type="text" id="dJiBjvKEMLWV"
# **1.1.0.
# Plot the correlation of the variables in the dataset with the Attrition variable. (0.4 points)**
# + colab={} colab_type="code" id="6S0dw0W7MLWd"
# + [markdown] colab_type="text" id="pdQNOhknMLWv"
# **1.1.1. Write three interesting observation that you notice. Were they as you expected ? Please elaborate your answer in 1 - 3 sentences. (0.4 points)**
# + [markdown] colab_type="text" id="yIubn6J7MLW6"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="RsdpgQg6MLXJ"
# **<font color='red'>Answer 2:</font>**
# + [markdown] colab_type="text" id="-_4b5-ZbMLXP"
# **<font color='red'>Answer 3:</font>**
# + [markdown] colab_type="text" id="O-Hxf9mdMLXZ"
# **1.1.2 Make a boxplot for total working years for each type of Attrition values. (0.4 points)**
# + colab={} colab_type="code" id="PDoUwZs4MLXk"
# + [markdown] colab_type="text" id="1jFZYAmkMLX3"
# **1.1.3. Plot the relative frequency of Attrition values (Yes/No) (0.4 points)**
# + colab={} colab_type="code" id="Aj7FAx2oMLYA"
# + [markdown] colab_type="text" id="CeHvmFAvMLYk"
# ## 1.2 Classification (6.4 points)
# + [markdown] colab_type="text" id="ZYU-BjCIMLYv"
# We are going to predict the variable Attrition by trying different classification algorithms and comparing them. But first, let's split the data into training and test sets. Hint: you can apply some preprocessing as well to get better results.
# + colab={} colab_type="code" id="7sjN8R1YMLY8"
X =
y =
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# + colab={} colab_type="code" id="hlGbLThIMLZS"
# + [markdown] colab_type="text" id="hFrtucYjMLZy"
# **1.2.1 Use the scikit-learn DecisionTreeClassifier with default parameters to predict the attrition value for the test set. Set the random seed to 0. Calculate the accuracy score and print it. (0.4 points)**
# + colab={} colab_type="code" id="l5ytUenLMLZ5"
from sklearn.tree import DecisionTreeClassifier as DT
from sklearn import metrics
# + [markdown] colab_type="text" id="KFj7-9m-MLaF"
# **1.2.2 Plot the confusion matrix for the predicted values. Based on this matrix or your general knowledge, why accuracy is not a good metric to use in this case ? (0.4 points)**
# + colab={} colab_type="code" id="UJOLZ7JJMLaN"
# + [markdown] colab_type="text" id="l_DZJ_vlMLap"
# **1.2.3 We want to use a dummy model (not a machine learning approach) to get 83.88% accuracy. Considering the label ratios, what would this model look like? (0.4 points)**
# + [markdown] colab_type="text" id="ov12wEZWMLat"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="aCogwmH-MLay"
# **1.2.4 It is possible to plot the decision tree by using different plotting libraries. We are using the https://pypi.org/project/graphviz/ and sklearn.tree. Install the package and complete the code below so you will get a visualisation of our decision tree. (0.4 points)**
# + colab={} colab_type="code" id="q0rwCCqDMLa7"
# # !pip install graphviz
from sklearn.tree import export_graphviz
import graphviz
dot_prod = export_graphviz(..., out_file=None, feature_names=....,
class_names=True, filled=True, rounded=True,
special_characters=False)
graph = graphviz.Source(...)
graph
# + [markdown] colab_type="text" id="nWX-1InxMLbU"
# **1.2.5 For the decision tree we modeled, what is the most important factor to decide if an employee is going to leave or not? (0.4 points)**
# + colab={} colab_type="code" id="9mwt0Sc4MLbc"
# + [markdown] colab_type="text" id="zEFpbEYHMLb2"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="Puw9lIlbMLb9"
# **1.2.6 Plot the classification report for the decision tree. In this case study which one out of precision and recall, would you consider more important ? Please elaborate your answer. (0.4 points)**
# + colab={} colab_type="code" id="TfQz4VSNMLcH"
# + [markdown] colab_type="text" id="z4QrRGORMLci"
# **<font color='red'>Answer 1:</font>**
#
#
#
# + [markdown] colab_type="text" id="3cyKqrkDMLcs"
# **1.2.7 Calculate the F1 score of the model in training data and compare it with the F1 score in test data. What is the effect happening ? (0.4 points)**
# + colab={} colab_type="code" id="Cnx2h53qMLc1"
# + [markdown] colab_type="text" id="I2WYaL4kMLdU"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="iRsrRj76CjjB"
# **1.2.8 We can use cross validation scores to ensure that our model is generalizing well and we can be more confident when we apply it in test data. We will now try different combinations of maximum depth parameters for the decision tree and choose the best while using cross validation. Please complete the code below and report the best maximum depth. (0.4 points)**
# + colab={} colab_type="code" id="d8jBr23zDzMn"
from sklearn.model_selection import cross_val_score

best_score = 0
best_depth = 0
for i in range(5, 20):
    clf = DT(max_depth=i, random_state=0)  # DecisionTreeClassifier takes random_state, not random_seed
# Perform 5-fold cross validation.
# The number of folds you want to use generally depends from the size of data
scores = cross_val_score(estimator= ..., scoring="f1", X=..., y=..., cv=5)
mean_score = ...
# TODO
print('Mean score', mean_score)
print('\n The best tree depth is: ', best_depth )
# + [markdown] colab_type="text" id="zFTtDD7GMLda"
# **1.2.9 Use SVM with default parameters to classify test data and report accuracy, recall, precision, f1-score and AUC. Set the random_state equal to 0. (0.4 points)**
# + colab={} colab_type="code" id="w23rLdVpMLd7"
# + [markdown] colab_type="text" id="Y3NxlOM0MLeL"
# **1.2.10 Use Logistic Regression with default parameters to classify test data and report accuracy, recall, precision, f1-score, AUC. Set the random_state equal to 0 (0.4 points)**
# + colab={} colab_type="code" id="tg2_nAivMLeT"
# + [markdown] colab_type="text" id="ouBtSaYTJBcL"
# **1.2.11 One of the parameters for the Logistic regression is tol which sets the tolerance for the stopping criteria. We are going to calculate the log loss metric for different values of tol. Please fill in the code below and plot the log loss values. Which one of tol values is better for our model based on log loss? (0.4 points)**
#
# + colab={} colab_type="code" id="CQpm6s2SJxJG"
from sklearn.linear_model import LogisticRegression as LR
log_loss = []
for tol in [0.9, 0.5, 0.1, 0.001, 0.0001, 0.000001, 0.000001]:
lr = LR(tol = tol, random_state = 0 )
## TODO
log_loss.append(...)
# + [markdown] colab_type="text" id="Os990pmPI-Tm"
# **<font color='red'>Answer 1:</font>**
#
# + [markdown] colab_type="text" id="E2z05iMlMLep"
# **1.2.12 Use Random Forest with default parameters to classify test data and report accuracy, recall, precision and f1-score and AUC. Set the random_state equal to 0. Please build as well a classification report separately which shows the metrics for each class. (0.4 points)**
# + colab={} colab_type="code" id="EWibHlLAMLew"
# + [markdown] colab_type="text" id="UJrqNiijMfjW"
# **1.2.13 Get the probabilities for each class from Random Forest model. Threshold the probabilities such that it will output the class No only if the model is 70% or higher confident. In all other cases it will predict the class Yes. (0.4 points)**
#
# + colab={} colab_type="code" id="9GpWHAWvNWZL"
# + [markdown] colab_type="text" id="zk80i4XKNX6W"
# **1.2.14 Build again the classification matrix. Do you think there were some improvements regarding the classification for class Yes ? Explain your answer briefly. (0.4 points)**
# + colab={} colab_type="code" id="4C48fb3jNW4d"
# + [markdown] colab_type="text" id="Am9hDnA5OMX6"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="Nge1WamEMLfJ"
# **1.2.15 Use XGBoost with default parameters to classify test data and report accuracy, recall, precision, f1-score and AUC. (0.4 points)**
# + colab={} colab_type="code" id="MO5l5l03MLfO"
# + [markdown] colab_type="text" id="W41zIM_iMLfo"
# **1.2.16 Based on your answer from 1.2.6 and other important evaluation metrics for unbalanced datasets, choose the best classifier and plot its feature importances in decreasing order. Were the 3 most important features as you expected ? Please explain why. (0.4 points)**
# + [markdown] colab_type="text" id="khLM_PzmMLfv"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="0GBmXXe8MLf2"
# # 2. Improving classification (2 points)
# + [markdown] colab_type="text" id="YyFysH0rMLf_"
# In this task we will try to improve the performance of the best classifier you selected on 1.2.12 by using several techniques.
# + [markdown] colab_type="text" id="jjqzAyR4MLgM"
# **2.1 Do you think it is better to try oversampling or undersampling in this case study, and why? (0.4 points)**
# + [markdown] colab_type="text" id="CDkfyTzhMLgR"
# **<font color='red'>Answer 1:</font>**
# + [markdown] colab_type="text" id="YaYBgnPLMLgY"
# **2.2 Apply oversampling to the data while keeping random_state equal to 0. (0.4 points)**
# + colab={} colab_type="code" id="Aj2QLJR8MLgn"
# + [markdown] colab_type="text" id="Kx-bFmqpMLhP"
# **2.3 Split the data into train/test set with a ratio 80/20. Keep a random_state equal to 0. Use the algorithm chosen in 1.2.12 and report accuracy, precision, recall, f1-score and AUC. (0.4 points)**
# + colab={} colab_type="code" id="zQVlRIJvMLhX"
# + [markdown] colab_type="text" id="cWxZ0WobMLhm"
# **2.4 Apply undersampling to the data while keeping random_state equal to 0. (0.4 points)**
# + colab={} colab_type="code" id="rPIbRaRIMLhp"
# + [markdown] colab_type="text" id="TsQzm2JmMLiC"
# **2.5 Split the data into train/test set with a ratio 80/20. Keep a random_state equal to 0. Use the algorithm chosen in 1.2.12 to classify the test data and report accuracy, precision, recall, f1-score and AUC. (0.4 points)**
# + colab={} colab_type="code" id="RKi4p2MVMLiK"
# + [markdown] colab_type="text" id="on8nnIjpMLjE"
# ## How long did it take you to solve the homework?
#
# * Please answer as precisely as you can. It does not affect your points or grade in any way. It is okay if it took 0.5 hours or 24 hours. The collected information will be used to improve future homeworks.
#
# <font color='red'> **Answer:**</font>
#
# **<font color='red'>(please change X in the next cell into your estimate)</font>**
#
# X hours
#
# ## What is the level of difficulty for this homework?
# you can put only number between $0:10$ ($0:$ easy, $10:$ difficult)
#
# <font color='red'> **Answer:**</font>
# + colab={} colab_type="code" id="jF-RRO5QMLjL"
| HW3/Homework_03_Original.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Coding the magic
# ==================================
#
#
# Collect wires, a 1k or so resistor and an LED for a simple test. We'll wire up your LED light to the pi.
#
# I try to use red for +5 and blue or green for ground. My green ground wire is grounded to pin 9 on the pi. My red wire goes to pin 11 where the voltage can be controlled by our code.
#
# An LED has to have the polarity right or it won't turn on. The long leg of the LED should go to the resistor and then to the red wire. The short leg should go to the green wire.
#
# 
#
# For a high-level look, see the diagram of the 40-pin interface. The diagram below was made by the people at raspberry-pi-geek.com.
#
# 
#
# We will refer in the code to the physical pin number, but it is always useful to be able to look at a diagram to tell what is available to use on each pin. Anything marked "GPIO" is a general-purpose input/output pin that we can assign to output a signal to drive the LED. When the signal is HIGH, it will allow current to light the LED.
#
# Once you have that, and as long as you are executing now inside ipython, you can click on the box below and hit shift-enter to run the script and watch it flash the LED.
# +
import RPi.GPIO as GPIO
import time
from IPython.display import clear_output
# blinking function
def blink(pin):
GPIO.output(pin, GPIO.HIGH)
time.sleep(0.5)
GPIO.output(pin, GPIO.LOW)
time.sleep(0.5)
return
print("using Raspberry Pi board pin numbers")
GPIO.setmode(GPIO.BOARD)
led_pin = 11
print("set up GPIO output channel")
GPIO.setwarnings(False)
GPIO.setup(led_pin, GPIO.OUT)
print("blinking the LED")
for i in range(0,6):
print("blink")
blink(led_pin)
GPIO.cleanup()
# -
# It really does it
# =================
# Did you see the output of the script appear under your code window? Did your LED flash its little heart out? We love being able to dig right in like this. It's spine-tingling fun the first time you see it.
#
# Now click in the code above and try changing it before you hit shift-enter again. You could make it blink more quickly for example by shortening the sleep timer duration to 0.25.
#
# What is the code doing
# ======================
# It may look like a jumbled mess when you first encounter someone's code. Let's dig in and sort it out.
#
# First, this is python code that uses libraries especially for pi. Python is a popular language for using the pi and it's also fairly easy to get started with.
#
# The first lines with "import" make it clear what other libraries and functions we will call on. We need GPIO functions for activating the LED as well as time functions for pausing between flashes.
#
# Next we create a function to blink the LED. It will set the pin to HIGH to illuminate it with current and LOW when we want it off. It will return after flashing the LED once.
#
# In two steps, we tell the GPIO library we will use board pin numbering and we set "led_pin" to 11. With board pin numbering, this refers to the pin 11 on the physical connector. If you count your way up from the bottom, you can see in the diagram above this pin has the name "GPIO17". The 17 isn't really important to us, but we had to choose a pin that was a GPIO so we could use it for general purpose input/output (output specifically).
#
# Pin 11 has to be designated for output next.
#
# Finally we use a range to cycle through a loop to blink the light. It blinks once for each run through the loop from 0 to 5.
#
# Go on to [Learner lesson 2](learner2.ipynb)
#
#
| learner1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import math
import cv2
class CountPainter:
    """Accumulate point counts on a 2-D grid and render them as a heatmap image."""
def __init__(self, xdata, ydata, delta, coldcolor = [255, 0, 0, 255], hotcolor = [0, 255, 0, 255], bgcolor = [255,255,255,255], piexlsize = 10, norfunc = lambda x,y:abs(x)/y):
self.xstart = xdata[0]
self.xend = xdata[1]
self.xdelta = delta
self.xnum = math.ceil((self.xend - self.xstart) / delta)
self.ystart = ydata[0]
self.yend = ydata[1]
self.ydelta = delta
self.ynum = math.ceil((self.yend - self.ystart) / delta)
self.coldcolor = np.array(coldcolor)
self.hotcolor = np.array(hotcolor)
self.piexlsize = piexlsize
        self.record = np.zeros((self.xnum, self.ynum), dtype=int)  # np.int is deprecated (removed in NumPy 1.24)
self.errors = 0
self.count = 0
self.norfunc = norfunc
self.bgcolor = np.array(bgcolor)
def add(self, x, y, v):
self.count += 1
if x < self.xstart or x >= self.xend or y < self.ystart or y >= self.yend:
self.errors += 1
return
self.record[math.floor((x-self.xstart)/self.xdelta)][math.floor((y-self.ystart)/self.ydelta)] += v
def report(self):
return (self.errors, self.count)
def paint(self):
img = np.zeros((self.ynum * self.piexlsize, self.xnum*self.piexlsize, 4), np.uint8)
maxval = 1
for i in range(len(self.record)):
if maxval < max(np.abs(self.record[i])):
maxval = max(np.abs(self.record[i]))
for i in range(self.xnum):
for j in range(self.ynum):
val = self.record[i][j]
if val != 0:
color = self.getColor(self.norfunc(val, maxval), val > 0)
else:
color = (0, 0, 0, 0)
cv2.rectangle(img, (i * self.piexlsize, j * self.piexlsize), ((i+1) * self.piexlsize, (j+1) * self.piexlsize), color, -1)
return img
def getColor(self, val, isHot):
if isHot:
return ((self.hotcolor - self.bgcolor)*val+self.bgcolor).astype(np.int32).tolist()
else:
return ((self.coldcolor - self.bgcolor)*val+self.bgcolor).astype(np.int32).tolist()
# -
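The core of `CountPainter.add` above is a simple binning of a coordinate into a grid cell via floor division. A standalone sketch (the longitude value is hypothetical):

```python
import math

xstart, xdelta = -0.6, 0.01  # grid origin and cell width, matching the ranges used below
longitude = -0.1275          # hypothetical point near central London

# Which grid column does this point fall into?
col = math.floor((longitude - xstart) / xdelta)
print(col)
```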
crime_status = pd.read_csv('2019-01-metropolitan-stop-and-search.csv')
#plt.hist(x=crime_status.Latitude, bins = 20)
print("latitude from {} to {}".format(min(crime_status.Latitude), max(crime_status.Latitude)))
#plt.hist(x=crime_status.Longitude, bins = 20)
print("longitude from {} to {}".format(min(crime_status.Longitude), max(crime_status.Longitude)))
print("before drop ", len(crime_status))
crime_status = crime_status.dropna(subset = ['Latitude', 'Longitude'])
print("after drop", len(crime_status))
squarefun = lambda x,y:(pow(y,2)-pow((x-y),2))/pow(y,2)
linefunc = lambda x,y: (1-0.2)/y*abs(x)+0.2
linefunc1 = lambda x,y: (1-0.1)/y*abs(x)+0.1
linezerofunc = lambda x,y: abs(x)/y
norfunc = linezerofunc
#lineigfunc = lambda x,y: (1-0.2)/y*abs(x)+0.2 if (1-0.2)/y*abs(x)+0.2 > 0.2 else 0
xrange = [-0.6, 0.3]
yrange = [51.25, 51.70]
gridsize = 0.01
pxsize = 10
summercolor = [0,0,255,255]
wintercolor = [255,0,0,255]
winterpainter = CountPainter(xrange, yrange, gridsize,piexlsize=pxsize, hotcolor=wintercolor, norfunc=norfunc)
painter = CountPainter(xrange, yrange, gridsize,piexlsize=pxsize, coldcolor=wintercolor, hotcolor=summercolor, norfunc=norfunc)
for index, row in crime_status.iterrows():
winterpainter.add(row["Longitude"], row["Latitude"], 1)
painter.add(row["Longitude"], row["Latitude"], -1)
winterimg = winterpainter.paint()
plt.imshow(winterimg)
crime_status = pd.read_csv('2019-07-metropolitan-stop-and-search.csv')
#plt.hist(x=crime_status.Latitude, bins = 20)
print("latitude from {} to {}".format(min(crime_status.Latitude), max(crime_status.Latitude)))
#plt.hist(x=crime_status.Longitude, bins = 20)
print("longitude from {} to {}".format(min(crime_status.Longitude), max(crime_status.Longitude)))
print("before drop ", len(crime_status))
crime_status = crime_status.dropna(subset = ['Latitude', 'Longitude'])
print("after drop", len(crime_status))
summerpainter = CountPainter(xrange, yrange, gridsize,piexlsize=pxsize, hotcolor=summercolor, norfunc=norfunc)
for index, row in crime_status.iterrows():
summerpainter.add(row["Longitude"], row["Latitude"], 1)
painter.add(row["Longitude"], row["Latitude"], 1)
summerimg = summerpainter.paint()
plt.imshow(summerimg)
diffimg = painter.paint()
plt.imshow(diffimg)
cv2.imwrite("london_stop_winter.png", winterimg)
cv2.imwrite("london_stop_summer.png", summerimg)
cv2.imwrite("london_stop_diff.png", diffimg)
| citydata/week4/.ipynb_checkpoints/london_stop_and_search-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from numpy import array
from numpy import argmax
from keras.utils import to_categorical
user_features=pd.read_csv("users.csv")
# Encode gender as 0 (M) / 1 (F); a vectorized mapping avoids the pandas
# chained-assignment warnings triggered by the original element-wise loop
user_features["gender"] = (user_features["gender"] != "M").astype(int)
print(user_features["gender"])
# Bin ages into decade-wide buckets 0-7; pd.cut replaces the deeply nested
# if/else chain and normalises the inconsistent `< 50` bound to `<= 50`
user_features["age"] = pd.cut(user_features["age"], bins=[0, 10, 20, 30, 40, 50, 60, 70, 200], labels=False, include_lowest=True)
print(user_features["age"])
user_features["occupation"]=user_features["occupation"].astype('category')
user_features["occupation"]=user_features["occupation"].cat.codes
user_features
# define example
age_encoded = user_features["age"]
age_encoded = array(age_encoded)
# one hot encode
age_encoded = to_categorical(age_encoded)
print(age_encoded)
print(age_encoded.shape)
# define example
gender_encoded = user_features["gender"]
gender_encoded = array(gender_encoded)
# one hot encode
gender_encoded = to_categorical(gender_encoded)
print(gender_encoded)
# define example
occupation_encoded = user_features["occupation"]
occupation_encoded = array(occupation_encoded)
# one hot encode
occupation_encoded = to_categorical(occupation_encoded)
print(occupation_encoded)
print(occupation_encoded.shape)
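# What `to_categorical` does can be sketched in plain NumPy (a hypothetical helper
# for illustration, not part of Keras):

```python
import numpy as np

def one_hot(codes, num_classes=None):
    """Pure-NumPy equivalent of keras.utils.to_categorical for 1-D integer codes."""
    codes = np.asarray(codes, dtype=int)
    if num_classes is None:
        num_classes = codes.max() + 1
    out = np.zeros((codes.size, num_classes))
    out[np.arange(codes.size), codes] = 1.0  # set a single 1 per row at the code's index
    return out

print(one_hot([0, 2, 1, 2]))
```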
movie_features=pd.read_csv("movies.csv")
movie_features.shape
movie_features=movie_features.drop(["movie_title","release_date","IMDb URL"],axis=1)
movie_features_2=movie_features.drop("movie_id",axis=1)
movie_features_2=array(movie_features_2)
movie_features_2
ratings=pd.read_csv("ratings_train.csv")
rating_matrix = ratings.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0)
rating_matrix
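# The pivot call turns the long (user_id, movie_id, rating) table into a
# user-by-movie matrix with zeros for unrated movies; a toy sketch with
# made-up ratings:

```python
import pandas as pd

toy = pd.DataFrame({"user_id": [1, 1, 2],
                    "movie_id": [10, 20, 10],
                    "rating": [4, 5, 3]})
# One row per user, one column per movie; missing ratings become 0
toy_matrix = toy.pivot(index="user_id", columns="movie_id", values="rating").fillna(0)
print(toy_matrix)
```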
import numpy as np
print(rating_matrix.columns)
rating_matrix[1][943]
# Average only the non-zero ratings per user (rows) and per movie (columns).
# Note: the original loops indexed rating_matrix[i][j] with i a user id, which
# selects a *column* (movie id), so the user averages used the wrong axis
# whenever user and movie ids overlapped numerically.
masked = rating_matrix.replace(0, np.nan)
user_avg = masked.mean(axis=1).tolist()
movie_avg = masked.mean(axis=0).tolist()
print(movie_avg)
ratings
array(ratings["movie_id"][ratings["user_id"]==895])
| movie recommender-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Isomap with RobustScaler
# This code template applies Isomap, a nonlinear dimensionality reduction technique, after rescaling the data with RobustScaler.
# ### Required Packages
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler
warnings.filterwarnings('ignore')
# ### Initialization
#
# Filepath of CSV file
#filepath
file_path= ""
# List of features which are required for model training .
#x_values
features=[]
# Target feature for prediction.
#y_value
target=''
# ### Data Fetching
#
# Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
#
# We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
df=pd.read_csv(file_path)
df.head()
# ### Feature Selections
#
# It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
#
# We will assign all the required input features to X and target/outcome to Y.
X=df[features]
Y=df[target]
# ### Data Preprocessing
#
# Since the majority of the machine learning models in the Sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values, if any exist, and convert string class data in the dataset by encoding it as integer classes.
#
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
# Calling preprocessing functions on the feature and target set.
#
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
# #### Correlation Map
#
# In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
# ## Data Rescaling
#
# It scales features using statistics that are robust to outliers. This method removes the median and scales the data according to the interquartile range (IQR), i.e. the range between the 1st quartile (25th percentile) and the 3rd quartile (75th percentile).
#
# <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html">More about Robust Scaler</a>
X_Scaled=RobustScaler().fit_transform(X)
X_Scaled=pd.DataFrame(data = X_Scaled,columns = X.columns)
X_Scaled.head()
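# The arithmetic behind RobustScaler can be checked by hand in plain NumPy on a toy
# vector with an outlier (sklearn's default uses the same median/IQR formula):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # note the outlier
q1, median, q3 = np.percentile(x, [25, 50, 75])
scaled = (x - median) / (q3 - q1)           # centre on the median, scale by the IQR
# The outlier barely affects how the other points are scaled
print(scaled)
```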
# ### Model
#
# Isomap (isometric mapping) is a nonlinear dimensionality reduction method and one of several widely used low-dimensional embedding techniques. It computes a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points.
#
# Isomap creates an embedding of the dataset that tries to preserve its structure, looking for a lower-dimensional embedding that maintains the geodesic distances between all pairs of points.
#
#
# ### Tuning parameters
#
# > 1. **n_neighbors :** int, default=5
# number of neighbors to consider for each point.
#
# > 2. **n_components :** int, default=2
# number of coordinates for the manifold
#
# > 3. **eigen_solver :** {‘auto’, ‘arpack’, ‘dense’}, default=’auto’
# ‘auto’ : Attempt to choose the most efficient solver for the given problem.
# ‘arpack’ : Use Arnoldi decomposition to find the eigenvalues and eigenvectors.
# ‘dense’ : Use a direct solver (i.e. LAPACK) for the eigenvalue decomposition.
#
# > 4. **tol :** float, default=0
# Convergence tolerance passed to arpack or lobpcg. not used if eigen_solver == ‘dense’.
#
# For more information : <a href="https://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html">API</a>
# ### How Components are selected in Isomaps?
# Isomap is a non-linear dimensionality reduction method based on the spectral theory which tries to preserve the geodesic distances in the lower dimension. Isomap starts by creating a neighborhood network. After that, it uses graph distance to the approximate geodesic distance between all pairs of points. And then, through eigenvalue decomposition of the geodesic distance matrix, it finds the low dimensional embedding of the dataset.
# <br><br>
# >[Suggested Reading](https://blog.paperspace.com/dimension-reduction-with-isomap/)
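# The three steps just described (neighbourhood graph, geodesic distances,
# eigendecomposition) can be sketched from scratch. This is an illustrative toy in
# NumPy, not the sklearn implementation, run here on a helix (intrinsically 1-D data
# embedded in 3-D):

```python
import numpy as np

def isomap_sketch(X, n_neighbors=5, n_components=2):
    """Toy Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise Euclidean
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):                          # keep k nearest neighbours (symmetrised)
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]
    for k in range(n):                          # Floyd-Warshall geodesic distances
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    J = np.eye(n) - np.ones((n, n)) / n         # classical MDS on squared geodesics
    B = -0.5 * J @ (G ** 2) @ J
    B = (B + B.T) / 2                           # symmetrise against rounding error
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

t = np.linspace(0, 4 * np.pi, 40)
embedding = isomap_sketch(np.c_[np.cos(t), np.sin(t), t / 2])
print(embedding.shape)  # (40, 2)
```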
iso = Isomap(n_components=5)
isoX = pd.DataFrame(data = iso.fit_transform(X_Scaled))
# ### Output Dataframe
#
# This is the dataframe with reduced numbers of features.
#
finalDf = pd.concat([isoX, Y], axis = 1)
finalDf.head()
# ### Creator: <NAME>, Github: <a href="https://github.com/abhishek-252">Profile</a>
| Dimensionality Reduction/Isomaps/Isomap_RobustScaler.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Remove rivers from waterbody polygons <img align="right" src="../../../Supplementary_data/dea_logo.jpg">
#
# * **Compatibility:** Notebook currently compatible with the `NCI` environment only. You can make this notebook `Sandbox` compatible by pointing it to the DEA Waterbodies timeseries located in AWS.
# * **Products used:**
# None.
# * **Special requirements**
# * River line dataset for filtering out polygons comprised of river segments.
# * Variable name: `MajorRiversDataset`
# * Here we use the [Bureau of Meteorology's Geofabric v 3.0.5 Beta (Surface Hydrology Network)](ftp://ftp.bom.gov.au/anon/home/geofabric/), filtered to only keep features tagged as `major rivers`.
# * There are some identified issues with this data layer that make the filtering using this data inconsistent (see the discussion below)
# * We therefore turn this off during the production of the water bodies shapefile.
# * **Prerequisites:** This notebook explores the individual waterbody timeseries csvs contained within the DEA Waterbodies dataset. It has been designed with that very specific purpose in mind, and is not intended as a general analysis notebook.
# ## Description
# This notebook applies the `FilterRivers` filter from the [`TurnWaterObservationsIntoWaterbodyPolygons.ipynb`](../TurnWaterObservationsIntoWaterbodyPolygons.ipynb) notebook. This allows this filtering step to be applied to the final Waterbody polygon dataset, in order to produce a further refined version in which the polygon UIDs still match up with the all encompassing version.
#
# 1. Load in modules and set up some functions
# 2. Load in the river lines dataset
# 3. Load in the waterbodies dataset
# 4. Find where the two intersect and remove waterbodies that intersect with river lines
# 5. Write out the results to a new shapefile
#
# ***
# ## Getting started
#
# To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
#
# Note that file paths have been hardcoded below. To run this notebook, make sure that these are still correct.
# ### Load packages and functions
# Import Python packages that are used for the analysis.
import geopandas as gp
def Filter_shapefile_by_intersection(gpdData,
gpdFilter,
filtertype='intersects',
invertMask=True,
returnInverse=False):
'''
Filter out polygons that intersect with another polygon shapefile.
Parameters
----------
gpdData: geopandas dataframe
Polygon data that you wish to filter
gpdFilter: geopandas dataframe
Dataset you are using as a filter
Optional
--------
filtertype: default = 'intersects'
Options = ['intersects', 'contains', 'within']
invertMask: boolean
Default = 'True'. This determines whether you want areas that DO ( = 'False') or DON'T ( = 'True')
intersect with the filter shapefile.
returnInverse: boolean
Default = 'False'. If true, then return both parts of the intersection - those that intersect AND
those that don't as two dataframes.
Returns
-------
gpdDataFiltered: geopandas dataframe
Filtered polygon set, with polygons that intersect with gpdFilter removed.
IntersectIndex: list of indices of gpdData that intersect with gpdFilter
Optional
--------
if 'returnInverse = True'
gpdDataFiltered, gpdDataInverse: two geopandas dataframes
Filtered polygon set, with polygons that DON'T intersect with gpdFilter removed.
'''
# Check that the coordinate reference systems of both dataframes are the same
#assert gpdData.crs == gpdFilter.crs, 'Make sure the the coordinate reference systems of the two provided dataframes are the same'
Intersections = gp.sjoin(gpdFilter, gpdData, how="inner", op=filtertype)
# Find the index of all the polygons that intersect with the filter
IntersectIndex = sorted(set(Intersections['index_right']))
# Grab only the polygons NOT in the IntersectIndex
# i.e. that don't intersect with a river
if invertMask:
gpdDataFiltered = gpdData.loc[~gpdData.index.isin(IntersectIndex)]
else:
gpdDataFiltered = gpdData.loc[gpdData.index.isin(IntersectIndex)]
if returnInverse:
# We need to use the indices from IntersectIndex to find the inverse dataset, so we
# will just swap the '~'.
if invertMask:
gpdDataInverse = gpdData.loc[gpdData.index.isin(IntersectIndex)]
else:
gpdDataInverse = gpdData.loc[~gpdData.index.isin(IntersectIndex)]
return gpdDataFiltered, IntersectIndex, gpdDataInverse
else:
return gpdDataFiltered, IntersectIndex
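# The index masking at the heart of the function can be demonstrated without
# geopandas, using plain pandas with hypothetical indices standing in for the
# result of the spatial join:

```python
import pandas as pd

gpdData = pd.DataFrame({"name": ["a", "b", "c", "d"]})
IntersectIndex = [1, 3]  # pretend rows 1 and 3 intersected a river

kept = gpdData.loc[~gpdData.index.isin(IntersectIndex)]    # invertMask=True behaviour
removed = gpdData.loc[gpdData.index.isin(IntersectIndex)]  # invertMask=False behaviour
print(list(kept["name"]), list(removed["name"]))  # ['a', 'c'] ['b', 'd']
```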
# ## Load in the datasets
#
# We use the [Bureau of Meteorology's Geofabric v 3.0.5 Beta (Surface Hydrology Network)](ftp://ftp.bom.gov.au/anon/home/geofabric/) to filter out polygons that intersect with major rivers. This is done to remove river segments from the polygon dataset. We use the `SH_Network AHGFNetworkStream any` layer within the `SH_Network_GDB_V2_1_1.zip` geodatabase, and filter the dataset to only keep rivers tagged as `major`. It is this filtered dataset we use here.
#
# Note that we reproject this dataset to `epsg 3577`, Australian Albers coordinate reference system. If this is not correct for your analysis, you can change this in the cell below.
#
# ### Note when using the Geofabric to filter out rivers
#
# The option to filter out rivers was switched off for the production of our water bodies dataset. During testing, the Geofabric dataset was shown to lead to inconsistencies in what was removed, and what remained within the dataset.
#
# * The Geofabric continues the streamline through on-river dams, which means these polygons are filtered out. This may not be the desired result.
#
# 
#
# ### Read in the river lines dataset
# +
# Where is this file located?
MajorRiversDataset = '/g/data/r78/cek156/ShapeFiles/SH_Network_GDB_National_V3_0_5_Beta/SH_Network_GDB_National_V3_0_5_Beta_MajorFiltered.shp'
# Read in the major rivers dataset (if you are using it)
MajorRivers = gp.GeoDataFrame.from_file(MajorRiversDataset)
MajorRivers = MajorRivers.to_crs({'init':'epsg:3577'})
# -
# ### Read in the waterbodies dataset
WaterPolygons = gp.read_file('/g/data/r78/cek156/dea-notebooks/Scientific_workflows/DEAWaterbodies/AusAllTime01-005HybridWaterbodies/AusWaterBodiesFINAL.shp')
WaterPolygons = WaterPolygons.to_crs({'init':'epsg:3577'})
# ## Filter out polygons that intersect with a major river
WaterBodiesBigRiverFiltered, Index = Filter_shapefile_by_intersection(WaterPolygons,
MajorRivers)
# ## Write out the amended shapefile
WaterBodiesBigRiverFiltered.crs = {'init': 'epsg:3577'}
WaterBodiesBigRiverFiltered.to_file(
'/g/data/r78/cek156/dea-notebooks/Scientific_workflows/DEAWaterbodies/AusAllTime01-005HybridWaterbodies/AusWaterBodiesFINALRiverFiltered.shp',
driver='ESRI Shapefile')
# ***
#
# ## Additional information
#
# **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
# Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
#
# **Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
# If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).
#
# **Last modified:** January 2020
#
# **Compatible datacube version:** N/A
# ## Tags
# Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
# + raw_mimetype="text/restructuredtext" active=""
# **Tags**: :index:`DEA Waterbodies`
| Scientific_workflows/DEAWaterbodies/DEAWaterbodiesToolkit/RemoveRiversfromWaterBodyPolygons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_conda)
# language: python
# name: conda_conda
# ---
# +
import sys
sys.path.insert(0, '../')
import pycor
from pycor.res import CollocationResolver
training_data_dir = "../../data/NP"
data_dir = "../samples"
outputpath = "../../output/NP/"
model_path = outputpath + "model"
print(pycor.version())
resolver = CollocationResolver()
pycor.addresolver(resolver)
# +
pycor.train(training_data_dir, pattern="*.txt")
# -
pycor.savemodel(model_path)
# +
coldict = {}
wordmap = pycor.getmodel()
for colloc in wordmap.collocations.values():
if colloc.frequency > 30:
coldict[colloc.text] = colloc.texts
# -
for col in coldict.values():
end = len(col)-1
last = col[end]
endHead = wordmap.heads.get(col[end])
if endHead is None:
word = pycor.resolveword(last)
if word.bestpair:
endHead = word.bestpair.head
else:
endHead = word.text
print(endHead)
# +
def printHead(text):
word = pycor.resolveword(text)
if word.bestpair:
print(word.bestpair.head)
for pair in word.particles:
print(' ', pair.head, pair.head.score)
else:
for pair in word.particles:
print(pair.head)
printHead("없었다")
printHead("먹고있었다")
printHead("먹고있다")
printHead("갔었다")
printHead("먹었다")
printHead("적다")
# -
| notebooks/book.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
os.chdir('..')
import data_importer
from sklearn.feature_extraction.text import TfidfVectorizer
import time
from sklearn import svm
import pickle
# # SVM Trainer
# Using this notebook, SVMs can be trained with token vectorizers.
# ## Step 1. Importing a training set
# Being a supervised learning algorithm, an SVM needs a training set. By default, the sentiment-annotated news sentences provided by Levenberg, Pulman, Moilanen, Simpson, and Roberts (2014) are used. However, this can be changed to any other annotated training set.
data = data_importer.import_nonfarm_data()
data = (data[data['Confidence']>0.90])
# ## Step 2. Vectorizing the training set
# Next, we vectorize our data using the TF-IDF vectorizer from sklearn.
vectorizer = TfidfVectorizer(#min_df = 5,
max_df = 0.8,
sublinear_tf = True,
use_idf = True)
train_vectors = vectorizer.fit_transform(data['Sentence'])
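# A hand-rolled sketch of the TF-IDF weighting on toy documents. This uses a
# smoothed idf similar to sklearn's default, but the numbers won't match
# TfidfVectorizer exactly (which here also applies sublinear tf):

```python
import numpy as np

docs = [["cheap", "loans", "now"], ["loans", "rates", "rise"], ["rates", "rise", "again"]]
vocab = sorted({w for d in docs for w in d})
tf = np.array([[d.count(w) for w in vocab] for d in docs], dtype=float)
df = (tf > 0).sum(axis=0)                       # document frequency per term
idf = np.log((1 + len(docs)) / (1 + df)) + 1    # smoothed idf
tfidf = tf * idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)  # l2-normalise each document row
print(tfidf.shape)  # (3, 6)
```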
# ## Step 3. Training the SVM
# Now that we have our vectorized training set, we can use it to train the SVM. The kernel and other parameters of the SVM can be modified to obtain different results.
# Perform classification with SVM, kernel=linear
classifier_linear = svm.SVC(kernel='linear')
t0 = time.time()
classifier_linear.fit(train_vectors, data['Label'])
t1 = time.time()
training_time = t1-t0
print("SVM trained in", training_time, "seconds.")
# ## Step 4. Storing the vectorizer and SVM
# Finally, we store both the SVM and the vectorizer as pickles. This way, they can be used later by sentiment analysis models.
outfile = open('pickles/svm_classifier','wb')
pickle.dump(classifier_linear,outfile)
outfile.close()
outfile = open('pickles/vectorizer','wb')
pickle.dump(vectorizer,outfile)
outfile.close()
| notebooks/Train SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Let's Create Our Credit Card Dataset
# - There two main font variations used in credit cards
# +
import cv2
cc1 = cv2.imread('creditcard_digits1.jpg', 0)
cv2.imshow("Digits 1", cc1)
cv2.waitKey(0)
cc2 = cv2.imread('creditcard_digits2.jpg', 0)
cv2.imshow("Digits 2", cc2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
cc1 = cv2.imread('creditcard_digits2.jpg', 0)
_, th2 = cv2.threshold(cc1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imshow("Digits 2 Thresholded", th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
# ## Now let's generate an Augmented Dataset from these two samples
#
# +
#Create our dataset directories
import os
def makedir(directory):
"""Creates a new directory if it does not exist"""
if not os.path.exists(directory):
os.makedirs(directory)
return None, 0
for i in range(0,10):
directory_name = "./credit_card/train/"+str(i)
print(directory_name)
makedir(directory_name)
for i in range(0,10):
directory_name = "./credit_card/test/"+str(i)
print(directory_name)
makedir(directory_name)
# -
# ## Let's make our Data Augmentation Functions
# These are used to perform image manipulation and pre-processing tasks
# +
import cv2
import numpy as np
import random
import cv2
from scipy.ndimage import convolve
def DigitAugmentation(frame, dim = 32):
"""Randomly alters the image using the noise, pixelation and stretching functions"""
frame = cv2.resize(frame, None, fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
random_num = np.random.randint(0,9)
if (random_num % 2 == 0):
frame = add_noise(frame)
if(random_num % 3 == 0):
frame = pixelate(frame)
if(random_num % 2 == 0):
frame = stretch(frame)
frame = cv2.resize(frame, (dim, dim), interpolation = cv2.INTER_AREA)
return frame
def add_noise(image):
"""Adds salt-and-pepper noise to the image"""
prob = random.uniform(0.01, 0.05)
rnd = np.random.rand(image.shape[0], image.shape[1])
noisy = image.copy()
noisy[rnd < prob] = 0
noisy[rnd > 1 - prob] = 255  # salt pixels should be white (255) on a uint8 image; 1 is nearly black
return noisy
def pixelate(image):
"Pixelates an image by reducing the resolution then upscaling it"
dim = np.random.randint(8,12)
image = cv2.resize(image, (dim, dim), interpolation = cv2.INTER_AREA)
image = cv2.resize(image, (16, 16), interpolation = cv2.INTER_AREA)
return image
def stretch(image):
"Randomly applies different degrees of stretch to image"
ran = np.random.randint(0,3)*2
if np.random.randint(0,2) == 0:
frame = cv2.resize(image, (32, ran+32), interpolation = cv2.INTER_AREA)
return frame[int(ran/2):int(ran+32)-int(ran/2), 0:32]
else:
frame = cv2.resize(image, (ran+32, 32), interpolation = cv2.INTER_AREA)
return frame[0:32, int(ran/2):int(ran+32)-int(ran/2)]
def pre_process(image, inv = False):
"""Uses OTSU binarization on an image"""
try:
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
except:
gray_image = image
pass
if inv == False:
_, th2 = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
else:
_, th2 = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
resized = cv2.resize(th2, (32,32), interpolation = cv2.INTER_AREA)
return resized
# -
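# The salt-and-pepper noise logic can be sketched standalone in pure NumPy,
# independent of OpenCV, with pepper pixels set to black (0) and salt pixels
# to white (255) on a uint8 image:

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.full((4, 4), 128, dtype=np.uint8)  # flat grey test image
prob = 0.25                                   # per-pixel corruption probability

rnd = rng.random(image.shape)
noisy = image.copy()
noisy[rnd < prob] = 0        # "pepper": black pixels
noisy[rnd > 1 - prob] = 255  # "salt": white pixels
print(noisy)
```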
# ## Testing our augmentation functions
# +
cc1 = cv2.imread('creditcard_digits2.jpg', 0)
_, th2 = cv2.threshold(cc1, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow("cc1", th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# This is the coordinates of the region enclosing the first digit
# This is preset and was done manually based on this specific image
region = [(0, 0), (35, 48)]
# Assigns values to each region for ease of interpretation
top_left_y = region[0][1]
bottom_right_y = region[1][1]
top_left_x = region[0][0]
bottom_right_x = region[1][0]
for i in range(0,1): #We only look at the first digit in testing out augmentation functions
roi = cc1[top_left_y:bottom_right_y, top_left_x:bottom_right_x]
for j in range(0,10):
roi2 = DigitAugmentation(roi)
roi_otsu = pre_process(roi2, inv = False)
cv2.imshow("otsu", roi_otsu)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
# ## Creating our Training Data (2000 variations of each digit per font type)
# +
# Creating 2000 Images for each digit in creditcard_digits1 - TRAINING DATA
# Load our first image
cc1 = cv2.imread('creditcard_digits1.jpg', 0)
_, th2 = cv2.threshold(cc1, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow("cc1", th2)
cv2.imshow("creditcard_digits1", cc1)
cv2.waitKey(0)
cv2.destroyAllWindows()
region = [(2, 19), (50, 72)]
top_left_y = region[0][1]
bottom_right_y = region[1][1]
top_left_x = region[0][0]
bottom_right_x = region[1][0]
for i in range(0,10):
# We jump the next digit each time we loop
if i > 0:
top_left_x = top_left_x + 59
bottom_right_x = bottom_right_x + 59
roi = cc1[top_left_y:bottom_right_y, top_left_x:bottom_right_x]
print("Augmenting Digit - ", str(i))
# We create 2000 versions of each digit image for our dataset
for j in range(0,2000):
roi2 = DigitAugmentation(roi)
roi_otsu = pre_process(roi2, inv = True)
cv2.imwrite("./credit_card/train/"+str(i)+"./_1_"+str(j)+".jpg", roi_otsu)
cv2.destroyAllWindows()
# +
# Creating 2000 Images for each digit in creditcard_digits2 - TRAINING DATA
cc1 = cv2.imread('creditcard_digits2.jpg', 0)
_, th2 = cv2.threshold(cc1, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow("cc1", th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
region = [(0, 0), (35, 48)]
top_left_y = region[0][1]
bottom_right_y = region[1][1]
top_left_x = region[0][0]
bottom_right_x = region[1][0]
for i in range(0,10):
if i > 0:
# We jump the next digit each time we loop
top_left_x = top_left_x + 35
bottom_right_x = bottom_right_x + 35
roi = cc1[top_left_y:bottom_right_y, top_left_x:bottom_right_x]
print("Augmenting Digit - ", str(i))
# We create 2000 versions of each digit image for our dataset
for j in range(0,2000):
roi2 = DigitAugmentation(roi)
roi_otsu = pre_process(roi2, inv = False)
cv2.imwrite("./credit_card/train/"+str(i)+"./_2_"+str(j)+".jpg", roi_otsu)
cv2.imshow("otsu", roi_otsu)
print("-")
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# Creating 200 Images for each digit in creditcard_digits1 - TEST DATA
# Load our first image
cc1 = cv2.imread('creditcard_digits1.jpg', 0)
_, th2 = cv2.threshold(cc1, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow("cc1", th2)
cv2.imshow("creditcard_digits1", cc1)
cv2.waitKey(0)
cv2.destroyAllWindows()
region = [(2, 19), (50, 72)]
top_left_y = region[0][1]
bottom_right_y = region[1][1]
top_left_x = region[0][0]
bottom_right_x = region[1][0]
for i in range(0,10):
# We jump the next digit each time we loop
if i > 0:
top_left_x = top_left_x + 59
bottom_right_x = bottom_right_x + 59
roi = cc1[top_left_y:bottom_right_y, top_left_x:bottom_right_x]
print("Augmenting Digit - ", str(i))
# We create 200 versions of each image for our dataset
for j in range(0,200):
roi2 = DigitAugmentation(roi)
roi_otsu = pre_process(roi2, inv = True)
cv2.imwrite("./credit_card/test/"+str(i)+"./_1_"+str(j)+".jpg", roi_otsu)
cv2.destroyAllWindows()
# +
# Creating 200 Images for each digit in creditcard_digits2 - TEST DATA
cc1 = cv2.imread('creditcard_digits2.jpg', 0)
_, th2 = cv2.threshold(cc1, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow("cc1", th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
region = [(0, 0), (35, 48)]
top_left_y = region[0][1]
bottom_right_y = region[1][1]
top_left_x = region[0][0]
bottom_right_x = region[1][0]
for i in range(0,10):
if i > 0:
# We jump the next digit each time we loop
top_left_x = top_left_x + 35
bottom_right_x = bottom_right_x + 35
roi = cc1[top_left_y:bottom_right_y, top_left_x:bottom_right_x]
print("Augmenting Digit - ", str(i))
# We create 200 versions of each image for our dataset
for j in range(0,200):
roi2 = DigitAugmentation(roi)
roi_otsu = pre_process(roi2, inv = False)
cv2.imwrite("./credit_card/test/"+str(i)+"./_2_"+str(j)+".jpg", roi_otsu)
cv2.imshow("otsu", roi_otsu)
print("-")
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
# # 2. Creating our Classifier
# +
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D
from keras import optimizers
import keras
input_shape = (32, 32, 3)
img_width = 32
img_height = 32
num_classes = 10
nb_train_samples = 10000
nb_validation_samples = 2000
batch_size = 16
epochs = 1
train_data_dir = './credit_card/train'
validation_data_dir = './credit_card/test'
# Creating our data generator for our test data
validation_datagen = ImageDataGenerator(
# used to rescale the pixel values from [0, 255] to [0, 1] interval
rescale = 1./255)
# Creating our data generator for our training data
train_datagen = ImageDataGenerator(
rescale = 1./255, # normalize pixel values to [0,1]
rotation_range = 10, # randomly applies rotations
width_shift_range = 0.25, # randomly applies width shifting
height_shift_range = 0.25, # randomly applies height shifting
shear_range=0.5,
zoom_range=0.5,
horizontal_flip = False, # randonly flips the image
fill_mode = 'nearest') # uses the fill mode nearest to fill gaps created by the above
# Specify criteria about our training data, such as the directory, image size, batch size and type
# automagically retrieve images and their classes for train and validation sets
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'categorical')
validation_generator = validation_datagen.flow_from_directory(
validation_data_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'categorical',
shuffle = False)
# -
# ## Creating out Model based on the LeNet CNN Architecture
# +
# create model
model = Sequential()
# 2 sets of CRP (Convolution, RELU, Pooling)
model.add(Conv2D(20, (5, 5),
padding = "same",
input_shape = input_shape))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))
model.add(Conv2D(50, (5, 5),
padding = "same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))
# Fully connected layers (w/ RELU)
model.add(Flatten())
model.add(Dense(500))
model.add(Activation("relu"))
# Softmax (for classification)
model.add(Dense(num_classes))
model.add(Activation("softmax"))
model.compile(loss = 'categorical_crossentropy',
optimizer = keras.optimizers.Adadelta(),
metrics = ['accuracy'])
print(model.summary())
# -
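# The Flatten layer's input size can be checked by hand: each 5×5 convolution uses "same" padding and so preserves the 32×32 spatial size, while each 2×2 stride-2 pooling halves it, leaving 50 feature maps of 8×8. A quick sanity check in plain Python (no Keras needed; the layer sizes are copied from the model above):

```python
# Trace the spatial dimensions through the two Conv("same") + MaxPool(2x2, stride 2) stages.
size = 32
channels = 3
for filters in (20, 50):
    channels = filters   # "same" padding keeps the 2D size; filters set the depth
    size //= 2           # 2x2 pooling with stride 2 halves each dimension

flatten_units = channels * size * size
print(flatten_units)                       # 50 * 8 * 8 = 3200
dense_params = flatten_units * 500 + 500   # weights + biases of Dense(500)
print(dense_params)                        # 1600500
```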
# ## Training our Model
# +
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, EarlyStopping
checkpoint = ModelCheckpoint("/home/deeplearningcv/DeepLearningCV/Trained Models/creditcard.h5",
monitor="val_loss",
mode="min",
save_best_only = True,
verbose=1)
earlystop = EarlyStopping(monitor = 'val_loss',
min_delta = 0,
patience = 3,
verbose = 1,
restore_best_weights = True)
# we put our call backs into a callback list
callbacks = [earlystop, checkpoint]
# Note we use a very small learning rate
model.compile(loss = 'categorical_crossentropy',
optimizer = RMSprop(lr = 0.001),
metrics = ['accuracy'])
nb_train_samples = 20000
nb_validation_samples = 4000
epochs = 5
batch_size = 16
history = model.fit_generator(
train_generator,
steps_per_epoch = nb_train_samples // batch_size,
epochs = epochs,
callbacks = callbacks,
validation_data = validation_generator,
validation_steps = nb_validation_samples // batch_size)
model.save("/home/deeplearningcv/DeepLearningCV/Trained Models/creditcard.h5")
# -
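# The EarlyStopping callback used above halts training once val_loss has failed to improve for `patience` consecutive epochs. A minimal sketch of that logic in plain Python (the loss values below are made up for illustration):

```python
def early_stop_epoch(val_losses, patience=3, min_delta=0.0):
    """Return the epoch index at which training would stop, or None if it never stops."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:   # improvement: remember it and reset the counter
            best = loss
            wait = 0
        else:                         # no improvement: count towards patience
            wait += 1
            if wait >= patience:
                return epoch
    return None

# loss improves for three epochs, then drifts upward: training halts at epoch 5
print(early_stop_epoch([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]))
```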
# # 3. Extract a Credit Card from the background
# #### NOTE:
# You may need to install imutils
# run *pip install imutils* in your terminal and restart your kernel to install
# +
import cv2
import numpy as np
import imutils
from skimage.filters import threshold_local  # threshold_adaptive was renamed threshold_local in scikit-image >= 0.15
import os
def order_points(pts):
# initialize a list of coordinates that will be ordered
# such that the first entry in the list is the top-left,
# the second entry is the top-right, the third is the
# bottom-right, and the fourth is the bottom-left
rect = np.zeros((4, 2), dtype = "float32")
# the top-left point will have the smallest sum, whereas
# the bottom-right point will have the largest sum
s = pts.sum(axis = 1)
rect[0] = pts[np.argmin(s)]
rect[2] = pts[np.argmax(s)]
# now, compute the difference between the points, the
# top-right point will have the smallest difference,
# whereas the bottom-left will have the largest difference
diff = np.diff(pts, axis = 1)
rect[1] = pts[np.argmin(diff)]
rect[3] = pts[np.argmax(diff)]
# return the ordered coordinates
return rect
def four_point_transform(image, pts):
# obtain a consistent order of the points and unpack them
# individually
rect = order_points(pts)
(tl, tr, br, bl) = rect
# compute the width of the new image, which will be the
# maximum distance between bottom-right and bottom-left
# x-coordinates or the top-right and top-left x-coordinates
widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
maxWidth = max(int(widthA), int(widthB))
# compute the height of the new image, which will be the
# maximum distance between the top-right and bottom-right
# y-coordinates or the top-left and bottom-left y-coordinates
heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
maxHeight = max(int(heightA), int(heightB))
# now that we have the dimensions of the new image, construct
# the set of destination points to obtain a "birds eye view",
# (i.e. top-down view) of the image, again specifying points
# in the top-left, top-right, bottom-right, and bottom-left
# order
dst = np.array([
[0, 0],
[maxWidth - 1, 0],
[maxWidth - 1, maxHeight - 1],
[0, maxHeight - 1]], dtype = "float32")
# compute the perspective transform matrix and then apply it
M = cv2.getPerspectiveTransform(rect, dst)
warped = cv2.warpPerspective(image, M, (maxWidth, maxHeight))
# return the warped image
return warped
def doc_Scan(image):
orig_height, orig_width = image.shape[:2]
ratio = image.shape[0] / 500.0
orig = image.copy()
image = imutils.resize(image, height = 500)
orig_height, orig_width = image.shape[:2]
Original_Area = orig_height * orig_width
# convert the image to grayscale, blur it, and find edges
# in the image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(gray, 75, 200)
cv2.imshow("Image", image)
cv2.imshow("Edged", edged)
cv2.waitKey(0)
# show the original image and the edge detected image
# find the contours in the edged image, keeping only the
# largest ones, and initialize the screen contour
    # note: OpenCV 3.x returns (image, contours, hierarchy); in OpenCV 4.x
    # findContours returns only (contours, hierarchy)
    _, contours, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key = cv2.contourArea, reverse = True)[:5]
# loop over the contours
for c in contours:
# approximate the contour
area = cv2.contourArea(c)
if area < (Original_Area/3):
print("Error Image Invalid")
return("ERROR")
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
# if our approximated contour has four points, then we
# can assume that we have found our screen
if len(approx) == 4:
screenCnt = approx
break
# show the contour (outline) of the piece of paper
cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)
cv2.imshow("Outline", image)
warped = four_point_transform(orig, screenCnt.reshape(4, 2) * ratio)
# convert the warped image to grayscale, then threshold it
# to give it that 'black and white' paper effect
cv2.resize(warped, (640,403), interpolation = cv2.INTER_AREA)
cv2.imwrite("credit_card_color.jpg", warped)
    warped = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    # apply a local adaptive threshold for the black-and-white paper effect;
    # multiplying the raw uint8 image by 255 directly would overflow
    from skimage.filters import threshold_local  # local import in case the top-level import differs
    T = threshold_local(warped, 251, offset=10)
    warped = (warped > T).astype("uint8") * 255
cv2.imshow("Extracted Credit Card", warped)
cv2.waitKey(0)
cv2.destroyAllWindows()
return warped
# -
cv2.destroyAllWindows()
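# The sum/difference trick inside `order_points` is easiest to see on a concrete quadrilateral: the top-left corner minimizes x+y, the bottom-right maximizes it, and the difference y-x separates the top-right (smallest) from the bottom-left (largest). A standalone numpy check on hypothetical corner coordinates:

```python
import numpy as np

# corners of a tilted card, in scrambled order: (x, y)
pts = np.array([[310, 20], [300, 210], [15, 205], [20, 30]], dtype="float32")

s = pts.sum(axis=1)               # x + y
d = np.diff(pts, axis=1).ravel()  # y - x

top_left = pts[np.argmin(s)]      # smallest x + y
bottom_right = pts[np.argmax(s)]  # largest x + y
top_right = pts[np.argmin(d)]     # smallest y - x
bottom_left = pts[np.argmax(d)]   # largest y - x
print(top_left, top_right, bottom_right, bottom_left)
```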
# ## Extract our Credit Card and the Region of Interest (ROI)
# +
image = cv2.imread('test_card.jpg')
image = doc_Scan(image)
region = [(55, 210), (640, 290)]
top_left_y = region[0][1]
bottom_right_y = region[1][1]
top_left_x = region[0][0]
bottom_right_x = region[1][0]
# Extracting the area were the credit numbers are located
roi = image[top_left_y:bottom_right_y, top_left_x:bottom_right_x]
cv2.imshow("Region", roi)
cv2.imwrite("credit_card_extracted_digits.jpg", roi)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
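# Note the indexing convention used above: numpy/OpenCV images are indexed `[row, col]`, i.e. `[y, x]`, so `image[y1:y2, x1:x2]` yields a crop of shape `(y2 - y1, x2 - x1)`. A quick check with a dummy array of the size our card is resized to:

```python
import numpy as np

dummy = np.zeros((403, 640), dtype="uint8")   # same size the card is resized to
region = [(55, 210), (640, 290)]              # (x, y) of top-left / bottom-right
(x1, y1), (x2, y2) = region

roi = dummy[y1:y2, x1:x2]                     # rows are y, columns are x
print(roi.shape)                              # (80, 585): 290-210 rows, 640-55 cols
```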
# ## Loading our trained model
# +
from keras.models import load_model
import keras
classifier = load_model('/home/deeplearningcv/DeepLearningCV/Trained Models/creditcard.h5')
# -
# # Let's test on our extracted image
# +
def x_cord_contour(contours):
    # Returns the X coordinate of the contour centroid
    if cv2.contourArea(contours) > 10:
        M = cv2.moments(contours)
        return (int(M['m10']/M['m00']))
    else:
        # returning None here would make sorted() fail in Python 3,
        # so push tiny contours to the far left instead
        return 0
img = cv2.imread('credit_card_extracted_digits.jpg')
orig_img = cv2.imread('credit_card_color.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imshow("image", img)
cv2.waitKey(0)
# Blur image then find edges using Canny
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
#cv2.imshow("blurred", blurred)
#cv2.waitKey(0)
edged = cv2.Canny(blurred, 30, 150)
#cv2.imshow("edged", edged)
#cv2.waitKey(0)
# Find Contours
_, contours, _ = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Sort contours left to right by using their x coordinates
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:13] #Change this to 16 to get all digits
contours = sorted(contours, key = x_cord_contour, reverse = False)
# Create empty array to store entire number
full_number = []
# loop over the contours
for c in contours:
# compute the bounding box for the rectangle
(x, y, w, h) = cv2.boundingRect(c)
if w >= 5 and h >= 25 and cv2.contourArea(c) < 1000:
roi = blurred[y:y + h, x:x + w]
#ret, roi = cv2.threshold(roi, 20, 255,cv2.THRESH_BINARY_INV)
cv2.imshow("ROI1", roi)
roi_otsu = pre_process(roi, True)
cv2.imshow("ROI2", roi_otsu)
roi_otsu = cv2.cvtColor(roi_otsu, cv2.COLOR_GRAY2RGB)
roi_otsu = keras.preprocessing.image.img_to_array(roi_otsu)
roi_otsu = roi_otsu * 1./255
roi_otsu = np.expand_dims(roi_otsu, axis=0)
image = np.vstack([roi_otsu])
label = str(classifier.predict_classes(image, batch_size = 10))[1]
print(label)
(x, y, w, h) = (x+region[0][0], y+region[0][1], w, h)
cv2.rectangle(orig_img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.putText(orig_img, label, (x , y + 90), cv2.FONT_HERSHEY_COMPLEX, 2, (255, 0, 0), 2)
cv2.imshow("image", orig_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
| Final CC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Workshop - MELI Data Challenge 2021
import pandas as pd
import numpy as np
import json
from tqdm import tqdm
import csv
import pickle
import matplotlib.pyplot as plt
import multiprocessing as mp
from itertools import chain, islice
from datetime import timedelta
import jsonlines
import seaborn as sns
from pathlib import Path
import core.evaluators.metrics as metrics
import gzip
# ### 1. Fetching the data
# #### Load train and test datasets
# set up the directory where the challenge data is stored
data_dir = Path('../data')
data_train = pd.read_parquet(data_dir/'train_data.parquet')
data_test = pd.read_csv(data_dir/'test_data.csv')
data_train.head()
data_test.head()
# #### Load extra item data
### auxiliary function to read jsonlines files
def load_jsonlines(filename):
rv = []
for obj in tqdm(jsonlines.open(filename)):
rv.append(obj)
return rv
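# If the `jsonlines` package is not available, the same format (one JSON object per line) can be parsed with the standard library alone. A hedged stdlib equivalent, demonstrated on an in-memory buffer rather than the real metadata file:

```python
import io
import json

def load_jsonlines_stdlib(fp):
    """Parse a JSON-lines stream into a list of dicts using only the stdlib."""
    return [json.loads(line) for line in fp if line.strip()]

buffer = io.StringIO('{"sku": 1, "item_domain_id": "MLM-X"}\n{"sku": 2, "item_domain_id": "MLM-Y"}\n')
records = load_jsonlines_stdlib(buffer)
print(records[0]["sku"], len(records))  # 1 2
```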
item_metadata = load_jsonlines(data_dir/'items_static_metadata_full.jl')
# #### Convert to a df and use sku as the index
df_metadata = pd.DataFrame(item_metadata)
df_metadata.index = df_metadata.sku
df_metadata.drop(columns=['sku'],inplace=True)
df_metadata.head()
# #### Hydrate the initial datasets with the extra data
data_train = data_train.join(df_metadata, on='sku',how='left')
data_test = data_test.join(df_metadata, on='sku',how='left')
data_train.head(3)
data_test.head()
# ### 2. Exploration
# #### List all the columns
for col in data_train.columns:
print(col)
# #### Get some stats for each column
pd.set_option('display.float_format', lambda x: '%.3f' % x)
def describe_cols(cols,df):
for col in cols:
print('\t COLUMN: ', col)
print('\t type: ', df[col].dtype,'\n')
print(df[col].describe(),'\n')
columns_to_describe = ['date','listing_type','current_price']
describe_cols(columns_to_describe,data_train)
# ### Visualize the time series
# #### Visualize daily sales grouped by site
# First we summarize the info
summary_site = data_train.groupby(['site_id','date']).sold_quantity.sum().reset_index()
summary_site.head()
def plot_time_series(summary_data,time_var,series,level):
plt.figure(figsize=(15, 4))
plt.title(f'{series} time series grouped by {level}')
sns.lineplot(data=summary_data,
x=time_var,y=series,hue=level)
plt.xticks(rotation=45)
plt.show()
# Then we plot it
plot_time_series(summary_site, time_var='date',series='sold_quantity',level='site_id')
# #### Visualize weekly sales grouped by site
# Define a new variable based on the date column to extract the week number
data_train['week'] = pd.to_datetime(data_train.date).dt.isocalendar().week  # .dt.week was deprecated in pandas 1.1
# Summarize info
summary_site_w = data_train.groupby(['site_id','week']).sold_quantity.sum().reset_index()
# Then we plot it
plot_time_series(summary_site_w,time_var='week',series='sold_quantity',level='site_id')
# #### Get the top levels of categorical variable for a site
def get_top_categories(df, categorical_var, site_id, by, N=10):
grand_total = df[df.site_id == site_id][by].sum()
top_cat_df = (df[df.site_id == site_id]
.groupby(['site_id',categorical_var])[by]
.sum()
.sort_values(ascending=False)
.head(N))
top_cat_df = top_cat_df.reset_index()
top_cat_df[f'relative_{by}'] = top_cat_df[by]/grand_total
return(top_cat_df[[categorical_var,by,f'relative_{by}']])
top_domains_MLM = get_top_categories(data_train,
categorical_var= 'item_domain_id',
site_id='MLM',
by='sold_quantity',
N=10)
top_domains_MLM
# #### Assess overlap between train and test SKUs
# library
import matplotlib.pyplot as plt
from matplotlib_venn import venn2
def asses_overlap(df_train, df_test, key):
figure, axes = plt.subplots(1, len(df_train.site_id.unique()),figsize=(16, 6))
for i,site in enumerate(df_train.site_id.unique()):
unique_train = df_train[df_train.site_id == site][key].unique()
unique_test = df_test[df_test.site_id == site][key].unique()
v = venn2(subsets=[set(unique_train),set(unique_test)],
set_labels = (f"Train \n ({len(unique_train)})",
f"Test \n ({len(unique_test)}) "),
ax=axes[i],
set_colors=('purple', 'skyblue'), alpha = 0.6)
axes[i].set_title(site)
plt.show()
asses_overlap(data_train, data_test, key='sku')
# #### Plot distributions
# ##### Plot distribution for a continuous variable
site_id = 'MLM'
item_domain_id = 'MLM-CELLPHONE_COVERS'
#product_id = 'MLM15586828'
subset_data = data_train[(data_train.site_id == site_id)& (data_train.item_domain_id == item_domain_id)]
subset_data.current_price.hist(bins=100)
# ##### Plot distribution for categorical variable
subset_data.shipping_logistic_type.value_counts(normalize=True).plot.bar()
# #### Plot the relationship between two continuous variables
site_id = 'MLM'
item_domain_id = 'MLM-CELLPHONE_COVERS'
subset_data = data_train[(data_train.site_id == site_id)& (data_train.item_domain_id == item_domain_id)]
def plot_bivariate(data,level, x, y, agg_x, agg_y):
sns.scatterplot(data=data.groupby(level).agg(
{x: agg_x,y: agg_y}),
x=x,y=y)
plt.show()
plot_bivariate(subset_data,
x='current_price',
level='sku',
y='sold_quantity',
agg_x=np.mean,
agg_y=np.sum)
plot_bivariate(subset_data,
level='sku',
x='minutes_active',
y='sold_quantity',
agg_x=np.mean,
agg_y=np.sum)
# #### Distribution of target stock
figure, axes = plt.subplots(1, 2,figsize=(14, 6))
figure.suptitle('Distribution of target stock')
sns.histplot(x=data_test.target_stock,bins=5000, kde=False, ax=axes[0])
axes[0].set_xlim(0,80)
sns.boxplot(x=data_test.target_stock, ax=axes[1])
axes[1].set_xlim(0,80)
plt.show()
# ### 3. Building your validation set
data_train.date.min(), data_train.date.max()
# ##### Make a temporal split
split_date = (pd.to_datetime(data_train.date).max()-timedelta(days=30)).date()
print(split_date)
# +
# separate the last 30 days for validation
data_val = data_train.loc[(data_train.date > str(split_date))]
#use the rest as training
data_train = data_train.loc[(data_train.date <= str(split_date))]
# -
# ##### Now let's build the validation dataset by calculating target stock and inventory days.
# +
# disclaimer: this is not the code used to generate the test set;
# it was written from scratch for this workshop
def create_validation_set(dataset):
np.random.seed(42)
print('Sorting records...')
temp_pd = dataset.loc[:, ['sku','date','sold_quantity']].sort_values(['sku','date'])
print('Grouping quantity...')
temp_dict = temp_pd.groupby('sku').agg({'sold_quantity':lambda x: [i for i in x]})['sold_quantity'].to_dict()
result = []
for idx, list_quantity in tqdm(temp_dict.items(), desc='Making targets...'):
cumsum = np.array(list_quantity).cumsum()
stock_target = 0
if cumsum[-1] > 0 and len(cumsum)==30:
#choose a random target different from 0
while stock_target == 0:
stock_target = np.random.choice(cumsum)
#get the first day with this amount of sales
day_to_stockout = np.argwhere(cumsum==stock_target).min() + 1
#add to a list
result.append({'sku':idx, 'target_stock':stock_target, 'inventory_days':day_to_stockout})
return result
#generate target for the 30 days of validation
val_dataset = create_validation_set(data_val)
# -
val_dataset[:10]
y_true_val = [x['inventory_days'] for x in val_dataset]
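# The heart of `create_validation_set` is the cumulative-sum trick: after `cumsum`, the first day whose running total reaches the sampled target stock is the `inventory_days` label. A small numpy illustration with made-up daily sales:

```python
import numpy as np

daily_sales = [0, 2, 1, 0, 3, 1]           # hypothetical sold_quantity per day
cumsum = np.array(daily_sales).cumsum()    # [0, 2, 3, 3, 6, 7]
target_stock = 3

# first day (1-indexed) on which cumulative sales hit the target
inventory_days = np.argwhere(cumsum == target_stock).min() + 1
print(inventory_days)   # day 3
```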
# ### 4. Modeling
# #### Baseline #1: UNIFORM distribution
# We need a baseline to know our starting point; we will use it later to validate more complex models.
# Besides, we can iterate on a simple baseline to arrive at better models
days_to_predict = 30
y_pred_uniform = [(np.ones(days_to_predict)/days_to_predict).round(5).tolist()] * len(val_dataset)
# This is what a uniform-distribution baseline's output looks like
pd.DataFrame(y_pred_uniform, columns=range(1,days_to_predict+1)).head()
# ##### What the inventory_days probability distribution looks like for a random observation
# +
sku, stock, days = pd.DataFrame(val_dataset)[['sku','target_stock','inventory_days']].sample(1).to_dict(orient='records')[0].values()
plt.ylim([0,0.05])
plt.axvline(days, color='r')
plt.title(f'sku:{sku}, target_stock:{stock},target days: {days}')
plt.bar(range(1,31), np.ones(days_to_predict)/days_to_predict, color='green')
plt.xlabel('Days')
plt.ylabel('Probs')
plt.legend(['Target days', 'Uniform Dist.'])
plt.show()
# -
# ##### Now let's score this model's prediction
# ##### Scoring function:
# +
def ranked_probability_score(y_true, y_pred):
"""
Input
    y_true: np.array of shape (N, 30), one-hot encoded.
    y_pred: np.array of shape (N, 30).
"""
return ((y_true.cumsum(axis=1) - y_pred.cumsum(axis=1))**2).sum(axis=1).mean()
def scoring_function(y_true, y_pred):
"""
Input
    y_true: List of ints of shape Nx1. Contains the true inventory_days
    y_pred: List of floats of shape Nx30. Contains the prob for each day
"""
y_true = np.array(y_true)
y_pred = np.array(y_pred)
    y_true_one_hot = np.zeros_like(y_pred, dtype=float)  # np.float was removed in NumPy 1.24
y_true_one_hot[range(len(y_true)), y_true-1] = 1
return ranked_probability_score(y_true_one_hot, y_pred)
# -
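# To build intuition for the ranked probability score, here is the same computation worked by hand on a 3-day toy problem: the true stockout day is day 2 and the prediction is uniform, so the squared CDF differences give (1/3)^2 + (1/3)^2 + 0 = 2/9.

```python
import numpy as np

y_true_one_hot = np.array([[0., 1., 0.]])      # stockout on day 2
y_pred_uniform = np.array([[1/3, 1/3, 1/3]])   # uniform over 3 days

# same formula as ranked_probability_score above, on the toy arrays
rps = ((y_true_one_hot.cumsum(axis=1) - y_pred_uniform.cumsum(axis=1))**2).sum(axis=1).mean()
print(rps)   # 2/9, about 0.2222
```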
uniform_score = scoring_function(y_true_val, y_pred_uniform)
print('Uniform model got a validation RPS of: ',uniform_score)
# ***In the public leaderboard this approach got a score of 5.07***
# #### Baseline #2: Linear Model
# As the uniform distribution works so well, the idea is to slightly shift the distribution toward the target day.
# To do so we are going to use a very wide normal distribution.
# +
def generate_batch_predictions(model, x_test, batch_size=10000, processors=20):
    """Helper to parallelize inference across processes"""
pool = mp.Pool(processors)
batches = batchify(x_test,batch_size)
results = pool.imap(model.predict_batch,batches)
pool.close()
output = []
for r in tqdm(results, total=int(len(x_test)/batch_size), desc='generating preds...'):
output.extend(r)
preds_dict = {}
for sku,probs in tqdm(output):
preds_dict[sku] = probs
y_pred = []
for x in tqdm(x_test):
pred = preds_dict[x['sku']]
y_pred.append(pred)
return y_pred
def batchify(iterable, batch_size):
"""Convert an iterable in a batch-iterable"""
iterator = iter(iterable)
for first in iterator:
yield list(chain([first], islice(iterator, batch_size - 1)))
# -
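# `batchify` is a lazy chunker built on `islice`: it pulls the first element of each batch from the iterator, then slices off the next `batch_size - 1` items. A quick standalone check (the function body is repeated here so the example runs on its own):

```python
from itertools import chain, islice

def batchify(iterable, batch_size):
    """Yield successive lists of up to batch_size items (same as above)."""
    iterator = iter(iterable)
    for first in iterator:
        yield list(chain([first], islice(iterator, batch_size - 1)))

batches = list(batchify(range(7), 3))
print(batches)   # [[0, 1, 2], [3, 4, 5], [6]]
```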
from scipy.stats import norm
step=1
model_ = norm(15, 10)
# +
if step >= 1:
x_axis = np.arange(-10, 40, 0.001)
plt.plot(x_axis, model_.pdf(x_axis))
plt.legend(['Normal dist'])
if step >= 2:
plt.axvline(0, color='black')
plt.axvline(30, color='black')
if step >= 3:
for i in range(30):
plt.vlines(i,ymin=0,ymax=model_.pdf(i))
if step >= 4:
scale = model_.cdf(30) - model_.cdf(0)
x_axis = np.arange(0, 31, 1)
plt.plot(x_axis, model_.pdf(x_axis)/scale)
step = 0
step += 1
plt.show()
# -
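# The plot above shows why the normal density must be rescaled: part of its probability mass falls outside the valid 1-30 day window. The rescaling by cdf(30) - cdf(0) can be reproduced with nothing but `math.erf` (a sketch of the idea; the model class below bins the density slightly differently):

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function (no scipy required)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma = 15, 10
mass_inside = norm_cdf(30, mu, sigma) - norm_cdf(0, mu, sigma)
print(round(mass_inside, 3))   # ~0.866: roughly 13% of the mass lies outside [0, 30]

# one way to cut the density into day bins, renormalized so the probabilities sum to 1
probs = [(norm_cdf(i, mu, sigma) - norm_cdf(i - 1, mu, sigma)) / mass_inside for i in range(1, 31)]
print(round(sum(probs), 6))
```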
# ##### Model definition
# +
from scipy.stats import norm
from tqdm import tqdm
class LinearModel():
"""
Linear model based on sold_quantity
"""
def __init__(self,
last_n_days=None,
normalize=True):
self.normalize = normalize
self.last_n_days = last_n_days
self.border_cases = 0
self.normal_cases = 0
def fit(self, data):
""" Store mean and std-dev for each SKU """
if self.last_n_days != None:
min_training_date = str((pd.to_datetime(data.date.max())-timedelta(days=self.last_n_days)).date())
else:
min_training_date = str(data.date.min().date())
self.parameters = (data[data.date >= min_training_date]
.groupby('sku')
.agg({'sold_quantity':['mean', 'std']})
.sold_quantity
.to_dict())
self.general_mean = data.sold_quantity.mean()
self.general_std = data.sold_quantity.std()
return self
def calc_probs(self, norm_dist):
#cut probs in days
probs = []
for i in range(1, 31):
probs.append(norm_dist.cdf(i+1) - norm_dist.cdf(i))
#if prob is zero, replace with uniform
if np.sum(probs) == 0:
return np.ones(30) / 30
if self.normalize:
probs = probs / np.sum(probs)
return probs
def predict(self, idx, stock):
""" calculate mean and variance to stockout for a given SKU """
#retrieve the mean and variance for the SKU
if self.parameters['mean'].get(idx, 0.) != 0.:
mean = self.parameters['mean'][idx]
std = self.parameters['std'][idx]
self.normal_cases += 1
else:
#to catch border cases where there is no data in train or has all 0s.
mean = self.general_mean
std = self.general_std
self.border_cases += 1
if std == 0. or np.isnan(std):
std = self.general_std
#convert quantities into days
days_to_stockout = stock / mean
std_days = (std / mean) * days_to_stockout
return days_to_stockout, std_days
def predict_proba(self, idx, stock):
""" Calculates the 30 days probs given a SKU and a target_stock """
days_to_stockout, std_days = self.predict(idx, stock)
norm_dist = norm(days_to_stockout, std_days)
return self.calc_probs(norm_dist)
def predict_batch(self, X, proba=True):
"""
Predict probs for many SKUs
Input:
X: List of Dicts with keys sku and target_stock
"""
result = []
for x in X:
idx = x['sku']
stock = x['target_stock']
if proba:
result.append((idx, self.predict_proba(idx, stock)))
else:
result.append((idx, self.predict(idx, stock)))
return result
# -
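# The stock-to-days conversion in `predict` is a simple rate argument: if a SKU sells `mean` units per day on average, a stock of S units lasts S / mean days, and the relative spread of daily sales carries over to the days estimate. Worked through with hypothetical numbers:

```python
mean_daily_sales = 2.0   # hypothetical per-SKU mean of sold_quantity
std_daily_sales = 1.0    # hypothetical per-SKU std of sold_quantity
target_stock = 10

days_to_stockout = target_stock / mean_daily_sales                  # 5 days of stock
std_days = (std_daily_sales / mean_daily_sales) * days_to_stockout  # spread in days
print(days_to_stockout, std_days)   # 5.0 2.5
```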
# ##### Model Training
# +
# %%time
model = LinearModel(last_n_days=14, normalize=True)
#train the model with train data
model.fit(data_train)
# -
# ##### Inference
y_pred_normal = generate_batch_predictions(model, val_dataset, batch_size=10000, processors=20)
# ##### What the inventory_days probability distribution looks like for a random observation in this case
# +
from matplotlib.pyplot import figure
figure(figsize=(8, 6), dpi=80)
sku, stock, days = pd.DataFrame(val_dataset)[['sku','target_stock','inventory_days']].sample(1).to_dict(orient='records')[0].values()
probs = model.predict_proba(sku, stock)
mean_to_stockout, var_to_stockout = model.predict(sku, stock)
plt.bar(range(1,31), probs)
plt.axvline(days, color='r')
plt.title('sku:{}, target_stock:{}, mean: {}, std:{}'.format(int(sku),
stock,
round(mean_to_stockout),
round(var_to_stockout)))
plt.axhline(1/30, color='y')
plt.show()
# -
#calculate the score
normal_score = scoring_function(y_true_val, y_pred_normal)
print('Normal distribution model got a validation RPS of: ',normal_score)
# ### 5. Error analysis
val_dataset_pd = pd.DataFrame(val_dataset)
scores = []
for y_t, y_p in tqdm(zip(val_dataset_pd['inventory_days'].to_list(), y_pred_normal)):
scores.append(scoring_function(np.array([int(y_t)]), np.array([y_p])))
val_dataset_pd.loc[:, 'score'] = scores
plt.scatter(val_dataset_pd.iloc[:10000].inventory_days, val_dataset_pd.iloc[:10000].score)
plt.xlabel('Days')
plt.ylabel('Score')
plt.title('Score by days')
plt.show()
# Here we see ....
# ### 6. Train model to submit
# Now that we have validated that the approach works, we train the model with all the data in order to make a submission
all_data = pd.concat([data_train,data_val])
# +
model = LinearModel(last_n_days=14, normalize=True)
model.fit(all_data) # <---- HERE WE TRAIN THE MODEL WITH FULL DATA !!!!
# -
# ##### Generate predictions on test data
# +
x_test = data_test.reset_index()[['index','sku','target_stock']].to_dict(orient='records')
y_pred = generate_batch_predictions(model, x_test, batch_size=10000, processors=20)
# -
# ##### Finally we generate a submission file with the model predictions
# +
def array2text(y_pred):
    """Convert a list of numbers into a list of strings with 4 decimal places"""
result = []
for xs in tqdm(y_pred):
line = []
for x in xs:
line.append('{:.4f}'.format(x))
result.append(line)
return result
def make_submission_file(y_pred, file_name='submission_file', compress=True, single_row=True):
    """Convert a list of text rows into a submission file"""
result = array2text(y_pred)
if compress:
if single_row:
file_name = f'{file_name}.csv.gz'
with gzip.open(file_name, "wt") as f:
writer = csv.writer(f)
for row in tqdm(result, desc='making file...'):
writer.writerow(row)
else:
file_name = f'{file_name}.csv.gz'
with gzip.open(file_name, "wt") as f:
writer = csv.writer(f)
writer.writerows(result)
else:
if single_row:
file_name = f'{file_name}.csv'
with open(file_name, "w") as f:
writer = csv.writer(f)
for row in tqdm(result, desc='making file...'):
writer.writerow(row)
else:
file_name = f'{file_name}.csv'
with open(file_name, "w") as f:
writer = csv.writer(f)
writer.writerows(result)
return file_name
def read_submission_file(file_name, compress=False):
if compress:
with gzip.open(file_name, 'rt') as f:
submission = f.read()
    else:
        with open(file_name, 'r') as f:
            submission = f.read()
    return submission
# -
file_name = make_submission_file(y_pred, 'submission_file_linear_model', compress=True, single_row=True)
print(f'Submission file created at: {file_name}')
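# The compressed submission can be verified by reading it back: gzip in text mode composes directly with the csv module. A self-contained round trip on dummy predictions, written to a temporary file rather than the real submission:

```python
import csv
import gzip
import os
import tempfile

dummy_preds = [['0.0333'] * 30, ['0.0333'] * 30]   # two rows of 30 probabilities

path = os.path.join(tempfile.mkdtemp(), 'dummy_submission.csv.gz')
with gzip.open(path, 'wt', newline='') as f:
    csv.writer(f).writerows(dummy_preds)

with gzip.open(path, 'rt') as f:
    rows = list(csv.reader(f))
print(len(rows), len(rows[0]))   # 2 30
```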
| notebooks/workshop_datachallenge2021.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Constraining subglacial processes from surface velocity observations using surrogate-based Bayesian inference
# ## Part 1 - Training an ensemble of neural networks
#
# In this notebook, we will illustrate the process of using Bayesian Bootstrap Aggregation (BayesBag) to train an ensemble of neural networks. In this case, each ensemble member is one possible surrogate for the coupled hydrology-ice dynamics model described in the paper, mapping from a vector of 8 parameters to a velocity field. We begin by importing both the parameters and the associated velocity fields computed by the physics model, which will act as training data for the surrogate.
# +
import pickle
import numpy as np
import utilities
# Load velocity fields
F_lin = pickle.load(open('data/F_prior.p','rb'))
# Load model parameters
X = pickle.load(open('data/X_prior.p','rb'))
# -
# The velocity fields include some failed simulations, so we filter out runs in which the model did not reach 12 model years, as well as those whose maximum velocity exceeded 100 km/a.
# +
p = (X[:,1]<1e5)*(X[:,3]>=12)
F_lin = F_lin[p]
X = X[p,6:]
# -
# Finally, we log transform the velocity fields.
F = np.log10(F_lin)
# We will use pytorch to construct and train the neural networks. To this end, we will move the physical model's parameters and (log-)speed fields to pytorch, and use the GPU if it's available.
# +
import torch
device = torch.device('cuda' if torch.cuda.is_available() else "cpu")
m = X.shape[0]
X = torch.from_numpy(X)
F = torch.from_numpy(F)
X = X.to(torch.float32)
F = F.to(torch.float32)
X = X.to(device)
F = F.to(device)
X_hat = torch.log10(X)
# -
# Our objective function weights the misfit by element area, so we grab those areas from a .vtu file of an observed velocity field.
u_obs = utilities.VData('./data/u_observed.vtu')
point_area = torch.tensor(u_obs.get_point_area(),dtype=torch.float,device=device)
normed_area = point_area/point_area.sum()
# Next we need to define a few functions and classes. First, we will create a function that extracts eigenglaciers and constructs the matrix $\hat{V}$, corresponding to the Dimensionality Reduction section.
def get_eigenglaciers(omegas,F,cutoff=0.999):
F_mean = (F*omegas).sum(axis=0)
F_bar = F - F_mean # Eq. 28
S = F_bar.T @ torch.diag(omegas.squeeze()) @ F_bar # Eq. 27
    # torch.eig was removed in PyTorch >= 1.10; S is symmetric, so use eigh (Eq. 26)
    lamda, V = torch.linalg.eigh(S)
    # eigh returns eigenvalues in ascending order; sort descending before truncating
    lamda, order = torch.sort(lamda, descending=True)
    V = V[:, order]
cutoff_index = torch.sum(torch.cumsum(lamda/lamda.sum(),0)<cutoff)
lamda_truncated = lamda.detach()[:cutoff_index]
V = V.detach()[:,:cutoff_index]
V_hat = V @ torch.diag(torch.sqrt(lamda_truncated)) # A slight departure from the paper: Vhat is the
# eigenvectors scaled by the eigenvalue size. This
# has the effect of allowing the outputs of the neural
# network to be O(1). Otherwise, it doesn't make
# any difference.
return V_hat, F_bar, F_mean
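# The truncation rule in `get_eigenglaciers` keeps just enough eigenvectors to explain a `cutoff` fraction of the (weighted) variance. The same cumulative-eigenvalue logic in plain numpy, on a toy symmetric matrix whose spectrum is known in advance (a sketch of the idea, not the torch implementation above):

```python
import numpy as np

# toy "covariance" with eigenvalues 9, 0.9 and 0.1 (diagonal, so the spectrum is explicit)
S = np.diag([9.0, 0.9, 0.1])

lamda, V = np.linalg.eigh(S)      # eigh returns eigenvalues in ascending order
order = np.argsort(lamda)[::-1]   # sort descending, as the cumulative cutoff assumes
lamda, V = lamda[order], V[:, order]

cutoff = 0.999
n_keep = int(np.sum(np.cumsum(lamda / lamda.sum()) < cutoff))
V_hat = V[:, :n_keep] * np.sqrt(lamda[:n_keep])   # scale eigenvectors by sqrt(eigenvalue)
print(n_keep)   # 2: the eigenvalues 9 and 0.9 already explain 99% of the variance
```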
# Second, we define the architecture of the neural network to be used as a surrogate. This corresponds to the architecture defined in Fig. 3.
# +
import torch.nn as nn
class Emulator(nn.Module):
def __init__(self,n_parameters,n_eigenglaciers,n_hidden_1,n_hidden_2,n_hidden_3,n_hidden_4,V_hat,F_mean):
super().__init__()
# Inputs to hidden layer linear transformation
self.l_1 = nn.Linear(n_parameters, n_hidden_1)
self.norm_1 = nn.LayerNorm(n_hidden_1)
self.dropout_1 = nn.Dropout(p=0.0)
self.l_2 = nn.Linear(n_hidden_1, n_hidden_2)
self.norm_2 = nn.LayerNorm(n_hidden_2)
self.dropout_2 = nn.Dropout(p=0.5)
self.l_3 = nn.Linear(n_hidden_2, n_hidden_3)
self.norm_3 = nn.LayerNorm(n_hidden_3)
self.dropout_3 = nn.Dropout(p=0.5)
self.l_4 = nn.Linear(n_hidden_3, n_hidden_4)
        self.norm_4 = nn.LayerNorm(n_hidden_4)  # was n_hidden_3; the sizes are equal here, but this matches the intent
self.dropout_4 = nn.Dropout(p=0.5)
self.l_5 = nn.Linear(n_hidden_4, n_eigenglaciers)
self.V_hat = torch.nn.Parameter(V_hat,requires_grad=False)
self.F_mean = torch.nn.Parameter(F_mean,requires_grad=False)
def forward(self, x, add_mean=False):
# Pass the input tensor through each of our operations
a_1 = self.l_1(x)
a_1 = self.norm_1(a_1)
a_1 = self.dropout_1(a_1)
z_1 = torch.relu(a_1)
a_2 = self.l_2(z_1)
a_2 = self.norm_2(a_2)
a_2 = self.dropout_2(a_2)
z_2 = torch.relu(a_2) + z_1
a_3 = self.l_3(z_2)
a_3 = self.norm_3(a_3)
a_3 = self.dropout_3(a_3)
z_3 = torch.relu(a_3) + z_2
a_4 = self.l_4(z_3)
        a_4 = self.norm_4(a_4)
        a_4 = self.dropout_4(a_4)
z_4 = torch.relu(a_4) + z_3
z_5 = self.l_5(z_4)
if add_mean:
F_pred = z_5 @ self.V_hat.T + self.F_mean
else:
F_pred = z_5 @ self.V_hat.T
return F_pred
# -
# Third, we create an optimization procedure that trains a model for a given set of instance weights ($\omega_d$) and training data. Optimization is performed using mini-batch gradient descent.
# +
from torch.utils.data import TensorDataset
def criterion_ae(F_pred,F_obs,omegas,area):
instance_misfit = torch.sum(torch.abs((F_pred - F_obs))**2*area,axis=1)
return torch.sum(instance_misfit*omegas.squeeze())
def train_surrogate(e,X_train,F_train,omegas,area,batch_size=128,epochs=3000,eta_0=0.01,k=1000.):
omegas_0 = torch.ones_like(omegas)/len(omegas)
training_data = TensorDataset(X_train,F_train,omegas)
batch_size = 128
train_loader = torch.utils.data.DataLoader(dataset=training_data,
batch_size=batch_size,
shuffle=True)
optimizer = torch.optim.Adam(e.parameters(),lr=eta_0,weight_decay=0.0)
# Loop over the data
for epoch in range(epochs):
# Loop over each subset of data
for param_group in optimizer.param_groups:
param_group['lr'] = eta_0*(10**(-epoch/k))
for x,f,o in train_loader:
e.train()
# Zero out the optimizer's gradient buffer
optimizer.zero_grad()
f_pred = e(x)
# Compute the loss
loss = criterion_ae(f_pred,f,o,area)
# Use backpropagation to compute the derivative of the loss with respect to the parameters
loss.backward()
# Use the derivative information to update the parameters
optimizer.step()
e.eval()
F_train_pred = e(X_train)
# Make a prediction based on the model
loss_train = criterion_ae(F_train_pred,F_train,omegas,area)
# Make a prediction based on the model
loss_test = criterion_ae(F_train_pred,F_train,omegas_0,area)
# Print the epoch, the training loss, and the test set accuracy.
if epoch%10==0:
print(epoch,loss_train.item(),loss_test.item())
# -
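# The learning-rate schedule inside `train_surrogate` is exponential decay in log10: `eta_0 * 10**(-epoch/k)`, i.e. the rate drops by one order of magnitude every k epochs. Checking a few values with the defaults used above:

```python
eta_0, k = 0.01, 1000.

def lr_at(epoch):
    """Learning rate at a given epoch under the schedule used in train_surrogate."""
    return eta_0 * (10 ** (-epoch / k))

print(lr_at(0), lr_at(1000), lr_at(2000))   # 0.01, then 0.001, then 0.0001
```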
# Here we put it all together: loop over the desired number of models, drawing random Bayesian bootstrap weights for each, training the surrogate, and saving the resulting models.
# +
from scipy.stats import dirichlet
torch.manual_seed(0)
np.random.seed(0)
n_parameters = X_hat.shape[1]
n_hidden_1 = 128
n_hidden_2 = 128
n_hidden_3 = 128
n_hidden_4 = 128
n_models = 3 #To reproduce the paper, this should be 50
for model_index in range(n_models):
omegas = torch.tensor(dirichlet.rvs(np.ones(m)),dtype=torch.float,device=device).T
V_hat, F_bar, F_mean = get_eigenglaciers(omegas,F)
n_eigenglaciers = V_hat.shape[1]
e = Emulator(n_parameters,n_eigenglaciers,n_hidden_1,n_hidden_2,n_hidden_3,n_hidden_4,V_hat,F_mean)
e.to(device)
train_surrogate(e,X_hat,F_bar,omegas,normed_area,epochs=3000)
torch.save(e.state_dict(),'emulator_ensemble/emulator_{0:03d}.h5'.format(model_index))
# -
# ## Part 2 - MCMC over the ensemble
# Now that a number of neural network surrogates have been trained on random subsets of high-fidelity model runs, we will perform Markov Chain Monte Carlo sampling over each of these surrogates. The correct parameter distribution for the high-fidelity model will be approximated by concatenating the Markov Chains over all of the surrogates.
import pickle
import numpy as np
import utilities
# Read in the models trained above.
# +
models = []
n_models = 3 #To reproduce the paper, this should be 50
for i in range(n_models):
state_dict = torch.load('emulator_ensemble/emulator_{0:03d}.h5'.format(i))
e = Emulator(state_dict['l_1.weight'].shape[1],state_dict['V_hat'].shape[1],n_hidden_1,n_hidden_2,n_hidden_3,n_hidden_4,state_dict['V_hat'],state_dict['F_mean'])
e.load_state_dict(state_dict)
e.to(device)
e.eval()
models.append(e)
# -
# Read in some relevant training data and ancillary values. Convert observed velocities to speeds.
# +
u_obs = utilities.VData('./data/u_observed.vtu')
v_obs = utilities.VData('./data/v_observed.vtu')
H_obs = utilities.VData('./data/H_observed.vtu')
H = torch.tensor(H_obs.u)
H = H.to(torch.float32).to(device)
U_obs = torch.tensor(((np.sqrt(u_obs.u**2 + v_obs.u**2))))
U_obs = U_obs.to(torch.float32).to(device)
# -
# Define the likelihood model, which requires a parameterization of observational uncertainty.
# +
from scipy.spatial.distance import pdist, squareform
D = torch.tensor(squareform(pdist(u_obs.x)),dtype=torch.float32,device=device)
sigma2 = 10**2
sigma_flow2 = 10**2
alpha_cov = 1
l_model = 4*torch.sqrt(H.unsqueeze(1) @ H.unsqueeze(0))
Sigma_obs = sigma2*torch.eye(D.shape[0],device=device)
Sigma_flow = sigma_flow2*(1 + D**2/(2*alpha_cov*l_model**2))**-alpha_cov
Sigma = Sigma_obs + Sigma_flow
# -
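# The flow-model term above is a rational-quadratic kernel in the pairwise distances. A toy numpy version with illustrative values, using a constant length scale in place of the thickness-based `l_model`:

```python
import numpy as np

sigma2, sigma_flow2, alpha_cov = 10.0**2, 10.0**2, 1.0
x = np.linspace(0.0, 10.0, 5)[:, None]
D = np.abs(x - x.T)                      # pairwise distances
l_model = 4.0 * np.ones_like(D)          # constant stand-in length scale
Sigma_obs = sigma2 * np.eye(D.shape[0])  # white observational noise
Sigma_flow = sigma_flow2 * (1 + D**2 / (2 * alpha_cov * l_model**2)) ** -alpha_cov
Sigma = Sigma_obs + Sigma_flow           # symmetric positive definite
```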
# Construct the precision matrix (the inverse of equation 50)
rho = 1./(1e4**2)
K = torch.diag(point_area*rho)
Tau = K @ torch.inverse(Sigma) @ K
# Construct the Beta prior distribution.
# +
from scipy.stats import beta
alpha_b = 3.0
beta_b = 3.0
X_min = X_hat.cpu().numpy().min(axis=0)-1e-3
X_max = X_hat.cpu().numpy().max(axis=0)+1e-3
X_prior = beta.rvs(alpha_b,beta_b,size=(10000,8))*(X_max - X_min) + X_min
X_min = torch.tensor(X_min,dtype=torch.float32,device=device)
X_max = torch.tensor(X_max,dtype=torch.float32,device=device)
# -
# This function returns a value proportional to the negative log-posterior distribution (the summands of equation 53).
def V(X):
U_pred = 10**m(X,add_mean=True)
r = (U_pred - U_obs)
X_bar = (X - X_min)/(X_max - X_min)
L1 = -0.5*r @ Tau @ r
L2 = torch.sum((alpha_b-1)*torch.log(X_bar) + (beta_b-1)*torch.log(1-X_bar))
return -(L1 + L2)
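# The prior term `L2` in `V` is the unnormalized log-density of a Beta(3, 3) applied elementwise to the rescaled parameters. A quick numpy check against scipy (assuming scipy is available, as elsewhere in this notebook):

```python
import numpy as np
from scipy.stats import beta

alpha_b, beta_b = 3.0, 3.0
x_bar = np.array([0.2, 0.5, 0.9])        # rescaled parameters in (0, 1)
L2 = np.sum((alpha_b - 1) * np.log(x_bar) + (beta_b - 1) * np.log(1 - x_bar))
# scipy's log-pdf differs from L2 only by the constant log-normalizer,
# which is 3*log(Beta(3,3)) = -3*log(30) over the three points.
ref = beta.logpdf(x_bar, alpha_b, beta_b).sum()
```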
# We use the Metropolis-adjusted Langevin Algorithm to sample from the posterior distribution, which benefits from the availability of gradient and Hessian information. Here, we compute these quantities (and some helpful additional ones) using automatic differentiation in pytorch.
def get_log_like_gradient_and_hessian(V,X,eps=1e-2,compute_hessian=False):
log_pi = V(X)
if compute_hessian:
g = torch.autograd.grad(log_pi,X,retain_graph=True,create_graph=True)[0]
H = torch.stack([torch.autograd.grad(e,X,retain_graph=True)[0] for e in g])
lamda,Q = torch.eig(H,eigenvectors=True)
lamda_prime = torch.sqrt(lamda[:,0]**2 + eps)
lamda_prime_inv = 1./torch.sqrt(lamda[:,0]**2 + eps)
H = Q @ torch.diag(lamda_prime) @ Q.T
Hinv = Q @ torch.diag(lamda_prime_inv) @ Q.T
log_det_Hinv = torch.sum(torch.log(lamda_prime_inv))
return log_pi,g,H,Hinv,log_det_Hinv
else:
return log_pi
# We initialize the sampler by first finding the Maximum A Posteriori parameter value, or MAP point. We find the MAP point using gradient descent paired with a simple line search.
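# The MAP search is Newton's method with a log-spaced line search over step lengths. A toy 1-D version of the same loop, on a hypothetical quadratic objective with its minimum at 3:

```python
import numpy as np

def V_toy(x):                 # stand-in negative log-posterior
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

hess_inv = 0.5                # inverse of the constant Hessian (2)
x = 0.0
alphas = np.logspace(-4, 0, 11)   # same line-search grid as find_MAP
for _ in range(50):
    p = hess_inv * -grad(x)       # Newton direction
    alpha = alphas[np.argmin([V_toy(x + a * p) for a in alphas])]
    x = x + alpha * p
```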
def find_MAP(X,n_iters=50,print_interval=10):
print('***********************************************')
print('***********************************************')
print('Finding MAP point')
print('***********************************************')
print('***********************************************')
# Line search distances
alphas = np.logspace(-4,0,11)
# Find MAP point
for i in range(n_iters):
log_pi,g,H,Hinv,log_det_Hinv = get_log_like_gradient_and_hessian(V,X,compute_hessian=True)
p = Hinv @ -g
alpha_index = np.nanargmin([get_log_like_gradient_and_hessian(V,X + alpha*p,compute_hessian=False).detach().cpu().numpy() for alpha in alphas])
mu = X + alphas[alpha_index] * p
X.data = mu.data
if i%print_interval==0:
print('===============================================')
print('iter: {0:d}, ln(P): {1:6.1f}, curr. m: {2:4.4f},{3:4.2f},{4:4.2f},{5:4.2f},{6:4.2f},{7:4.2f},{8:4.2f},{9:4.2f}'.format(i,log_pi,*X.data.cpu().numpy()))
print('===============================================')
return X
# With a good initial guess for the sampler discovered, we now implement the MALA algorithm.
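# For reference, the same propose/accept structure on a toy 1-D target (a standard normal, fixed step size, no Hessian preconditioning) looks like this; it is a sketch, not the preconditioned sampler used below:

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_pi(x):            # V(x) for N(0, 1), up to a constant
    return 0.5 * x * x

def grad_V(x):
    return x

h = 0.5                       # step size
x = 3.0
samples = []
for _ in range(5000):
    mu = x - h * grad_V(x)                       # Langevin drift toward the mode
    x_prop = mu + np.sqrt(2 * h) * rng.standard_normal()
    mu_back = x_prop - h * grad_V(x_prop)
    logq_fwd = -((x_prop - mu) ** 2) / (4 * h)   # log q(x'|x); constants cancel
    logq_back = -((x - mu_back) ** 2) / (4 * h)  # log q(x|x')
    log_alpha = -neg_log_pi(x_prop) + neg_log_pi(x) + logq_back - logq_fwd
    if np.log(rng.uniform()) < log_alpha:        # Metropolis correction
        x = x_prop
    samples.append(x)
samples = np.array(samples)
```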
# +
def draw_sample(mu,cov,eps=1e-10):
L = torch.cholesky(cov + eps*torch.eye(cov.shape[0],device=device))
return mu + L @ torch.randn(L.shape[0],device=device)
def get_proposal_likelihood(Y,mu,inverse_cov,log_det_cov):
return -0.5*log_det_cov - 0.5*(Y - mu) @ inverse_cov @ (Y-mu)
def MALA_step(X,h,local_data=None):
if local_data is not None:
pass
else:
local_data = get_log_like_gradient_and_hessian(V,X,compute_hessian=True)
log_pi,g,H,Hinv,log_det_Hinv = local_data
X_ = draw_sample(X,2*h*Hinv).detach()
X_.requires_grad=True
log_pi_ = get_log_like_gradient_and_hessian(V,X_,compute_hessian=False)
logq = get_proposal_likelihood(X_,X,H/(2*h),log_det_Hinv)
logq_ = get_proposal_likelihood(X,X_,H/(2*h),log_det_Hinv)
log_alpha = (-log_pi_ + logq_ + log_pi - logq)
alpha = torch.exp(min(log_alpha,torch.tensor([0.],device=device)))
u = torch.rand(1,device=device)
if u <= alpha and log_alpha!=np.inf:
X.data = X_.data
local_data = get_log_like_gradient_and_hessian(V,X,compute_hessian=True)
s = 1
else:
s = 0
return X,local_data,s
# h_max caps the adaptive step size (it was used below but previously undefined)
def MALA(X,n_iters=10001,h=0.1,h_max=1.0,acc_target=0.25,k=0.01,beta=0.99,sample_path='./samples/',model_index=0,save_interval=1000,print_interval=50):
print('***********************************************')
print('***********************************************')
print('Running Metropolis-Adjusted Langevin Algorithm for model index {0}'.format(model_index))
print('***********************************************')
print('***********************************************')
local_data = None
vars = []
acc = acc_target
for i in range(n_iters):
X,local_data,s = MALA_step(X,h,local_data=local_data)
vars.append(X.detach())
acc = beta*acc + (1-beta)*s
h = min(h*(1+k*np.sign(acc - acc_target)),h_max)
if i%print_interval==0:
print('===============================================')
print('sample: {0:d}, acc. rate: {1:4.2f}, log(P): {2:6.1f}'.format(i,acc,local_data[0].item()))
print('curr. m: {0:4.4f},{1:4.2f},{2:4.2f},{3:4.2f},{4:4.2f},{5:4.2f},{6:4.2f},{7:4.2f}'.format(*X.data.cpu().numpy()))
print('===============================================')
if i%save_interval==0:
print('///////////////////////////////////////////////')
print('Saving samples for model {0:03d}'.format(model_index))
print('///////////////////////////////////////////////')
X_posterior = torch.stack(vars).cpu().numpy()
np.save(open(sample_path+'X_posterior_model_{0:03d}.npy'.format(model_index),'wb'),X_posterior)
X_posterior = torch.stack(vars).cpu().numpy()
return X_posterior
# -
# We now run the MAP/MALA procedure for each surrogate in the bootstrapped ensemble, and save the resulting posterior distributions.
torch.manual_seed(0)
np.random.seed(0)
for j,m in enumerate(models):
X = torch.tensor(X_prior[np.random.randint(X_prior.shape[0],size=5)].mean(axis=0),requires_grad=True,dtype=torch.float,device=device)
X = find_MAP(X)
# To reproduce the paper, n_iters should be 10^5
X_posterior = MALA(X,n_iters=10000,model_index=j,save_interval=1000,print_interval=100)
# TODO: Add plotting
| hydrology_surrogate_construction_and_mcmc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp distill.distillation_callback
# -
# # Knowledge Distillation
#
# > Train a network in a teacher-student fashion
# +
# all_slow
# +
#hide
from nbdev.showdoc import *
# %config InlineBackend.figure_format = 'retina'
# -
# Knowledge Distillation, sometimes called teacher-student training, is a compression method in which a small (the student) model is trained to mimic the behaviour of a larger (the teacher) model.
#
# The main goal is to reveal what is called the **Dark Knowledge** hidden in the teacher model.
#
# Consider the same [example](https://www.ttic.edu/dl/dark14.pdf) provided by <NAME> et al.
# The main problem with classification is that the output activation function (softmax) will, by design, make a single value very high and squash the others.
# $$
# p_{i}=\frac{\exp \left(z_{i}\right)}{\sum_{j} \exp \left(z_{j}\right)}
# $$
#
# With $p_i$ the probability of class $i$, computed from the logits $z$
# Here is an example to illustrate this phenomenon:
#
# Let's say that we have trained a model to discriminate between the following 5 classes: [cow, dog, plane, cat, car]
# And here is the output of the final layer (the logits) when the model is fed a new input image:
import torch
import torch.nn.functional as F
logits = torch.tensor([1.3, 3.1, 0.2, 1.9, -0.3])
# Judging by these logits, the model seems confident that the input is a dog, quite confident that it is neither a plane nor a car, with moderately high scores for cow and cat.
#
# So the model has not only learned to recognize a dog in the image, but also that a dog is very different from a car or a plane and shares similarities with cats and cows. This information is what is called **dark knowledge**!
# When passing those predictions through a softmax, we have:
predictions = F.softmax(logits, dim=-1); predictions
# This accentuates the differences we saw earlier, discarding some of the dark knowledge acquired during training. The way to keep this knowledge is to "soften" the softmax outputs by adding a **temperature** parameter. The higher the temperature, the softer the predictions.
soft_predictions = F.softmax(logits/3, dim=-1); soft_predictions
# > Note: if the Temperature is equal to 1, then we have regular softmax
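# In pure Python, temperature scaling looks like this (same logits as above):

```python
import math

def softmax_T(logits, T=1.0):
    # Divide logits by T before exponentiating; larger T flattens the output.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [1.3, 3.1, 0.2, 1.9, -0.3]
hard = softmax_T(logits, T=1.0)   # regular softmax
soft = softmax_T(logits, T=3.0)   # softened: ranking preserved, gaps shrink
```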
# When applying Knowledge Distillation, we want to keep the **Dark Knowledge** that the teacher model has acquired during its training but not rely entirely on it. So we combine two losses:
#
# - The Teacher loss between the softened predictions of the teacher and the softened predictions of the student
# - The Classification loss, which is the regular loss between hard labels and hard predictions
#
# The combination of those losses is weighted by an additional parameter α, as:
#
# $$
# L_{K D}=\alpha * \text { CrossEntropy }\left(p_{S}^{\tau}, p_{T}^{\tau}\right)+(1-\alpha) * \text { CrossEntropy }\left(p_{S}, y_{\text {true }}\right)
# $$
#
# With $p^{\tau}$ being the softened predictions of the student and teacher
# > Note: In practice, the distillation loss will be a [bit different](http://cs230.stanford.edu/files_winter_2018/projects/6940224.pdf) in the implementation
# 
# +
#export
from fastai.vision.all import *
import torch
import torch.nn as nn
import torch.nn.functional as F
# -
# This can be done with fastai, using the Callback system!
#export
class KnowledgeDistillation(Callback):
def __init__(self, teacher, loss):
store_attr()
def after_loss(self):
self.teacher.model.eval()
teacher_output = self.teacher.model(self.x)
new_loss = self.loss(self.pred, self.y, teacher_output, student=self.learn.model, teacher=self.teacher.model)
self.learn.loss_grad = new_loss
self.learn.loss = self.learn.loss_grad.clone()
# The loss function that is used may depend on the use case. For classification, we usually use the one presented above, named `SoftTarget` in fasterai. But for regression cases, we may want to perform regression on the logits directly.
# +
#export
def SoftTarget(y, labels, teacher_scores, T=20, α=0.7, **kwargs):
return nn.KLDivLoss(reduction='batchmean')(F.log_softmax(y/T, dim=-1), F.softmax(teacher_scores/T, dim=-1)) * (T*T * 2.0 * α) + F.cross_entropy(y, labels) * (1. - α)
def LogitsRegression(y, labels, teacher_scores, **kwargs):
return F.mse_loss(y, teacher_scores)
def WeightRegression(y, labels, teacher_scores, student, teacher, α=0.5, **kwargs):
loss = 0
for m1, m2 in zip(student.modules(), teacher.modules()):
if isinstance(m1, nn.Conv2d) or isinstance(m1, nn.Linear):
loss += F.mse_loss(m1.weight.to(device), m2.weight.to(device))
return α*loss + F.cross_entropy(y, labels) * (1. - α)
| nbs/05_knowledge_distillation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Timing
# ------
#
# Quickly time a single line.
import math
import ubelt as ub
timer = ub.Timer('Timer demo!', verbose=1)
with timer:
math.factorial(100000)
# Robust Timing and Benchmarking
# ------------------------------
#
# Easily do robust timings on existing blocks of code by simply indenting
# them. The quick and dirty way just requires one indent.
import math
import ubelt as ub
for _ in ub.Timerit(num=200, verbose=3):
math.factorial(10000)
# Loop Progress
# -------------
#
# ``ProgIter`` is a (mostly) drop-in alternative to
# `tqdm <https://pypi.python.org/pypi/tqdm>`__.
# *The advantage of ``ProgIter`` is that it does not use any python threading*,
# and therefore can be safer with code that makes heavy use of multiprocessing.
#
# Note: ProgIter is now a standalone module: ``pip install progiter``.
import ubelt as ub
import math
for n in ub.ProgIter(range(7500)):
math.factorial(n)
# +
import ubelt as ub
import math
for n in ub.ProgIter(range(7500), freq=2, adjust=False):
math.factorial(n)
# Note that forcing freq=2 all the time comes at a performance cost
# The default adjustment algorithm causes almost no overhead
# -
>>> import ubelt as ub
>>> def is_prime(n):
... return n >= 2 and not any(n % i == 0 for i in range(2, n))
>>> for n in ub.ProgIter(range(1000), verbose=2):
>>> # do some work
>>> is_prime(n)
# Caching
# -------
#
# Cache intermediate results in a script with minimal boilerplate.
import ubelt as ub
cfgstr = 'repr-of-params-that-uniquely-determine-the-process'
cacher = ub.Cacher('test_process', cfgstr)
data = cacher.tryload()
if data is None:
myvar1 = 'result of expensive process'
myvar2 = 'another result'
data = myvar1, myvar2
cacher.save(data)
myvar1, myvar2 = data
# Hashing
# -------
#
# The ``ub.hash_data`` constructs a hash corresponding to a (mostly)
# arbitrary ordered python object. A common use case for this function is
# to construct the ``cfgstr`` mentioned in the example for ``ub.Cacher``.
# Instead of returning a hex, string, ``ub.hash_data`` encodes the hash
# digest using the 26 lowercase letters in the roman alphabet. This makes
# the result easy to use as a filename suffix.
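# A sketch of the idea (not ubelt's actual implementation): hash the bytes, then re-encode the digest in base 26 with lowercase letters so it can serve as a filename suffix:

```python
import hashlib

def hash_data_b26(data: bytes, length=8):
    # Interpret the sha512 digest as a big integer and rewrite it in base 26.
    digest = int.from_bytes(hashlib.sha512(data).digest(), 'big')
    letters = 'abcdefghijklmnopqrstuvwxyz'
    out = []
    while digest:
        digest, r = divmod(digest, 26)
        out.append(letters[r])
    return ''.join(reversed(out))[:length]

suffix = hash_data_b26(b'example-config')  # deterministic, filename-safe
```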
import ubelt as ub
data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
ub.hash_data(data)
import ubelt as ub
data = [('arg1', 5), ('lr', .01), ('augmenters', ['flip', 'translate'])]
ub.hash_data(data, hasher='sha512', base='abc')
# Command Line Interaction
# ------------------------
#
# The builtin Python ``subprocess.Popen`` module is great, but it can be a
# bit clunky at times. The ``os.system`` command is easy to use, but it
# doesn't have much flexibility. The ``ub.cmd`` function aims to fix this.
# It is as simple to run as ``os.system``, but it returns a dictionary
# containing the return code, standard out, standard error, and the
# ``Popen`` object used under the hood.
import ubelt as ub
info = ub.cmd('cmake --version')
# Quickly inspect and parse the captured output
print(info['out'])
# The info dict contains other useful data
print(ub.repr2({k: v for k, v in info.items() if 'out' != k}))
# Also possible to simultaneously capture and display output in realtime
info = ub.cmd('cmake --version', tee=1)
# tee=True is equivalent to using verbose=1, but there is also verbose=2
info = ub.cmd('cmake --version', verbose=2)
# and verbose=3
info = ub.cmd('cmake --version', verbose=3)
# Cross-Platform Resource and Cache Directories
# ---------------------------------------------
#
# If you have an application which writes configuration or cache files,
# the standard place to put those files differs depending on whether you are on
# Windows, Linux, or Mac. UBelt offers unified functions for determining
# what these paths are.
#
# The ``ub.ensure_app_cache_dir`` and ``ub.ensure_app_resource_dir``
# functions find the correct platform-specific location for these files
# and ensure that the directories exist. (Note: replacing "ensure" with
# "get" will simply return the path, but not ensure that it exists)
#
# The resource root directory is ``~/AppData/Roaming`` on Windows,
# ``~/.config`` on Linux and ``~/Library/Application Support`` on Mac. The
# cache root directory is ``~/AppData/Local`` on Windows, ``~/.cache`` on
# Linux and ``~/Library/Caches`` on Mac.
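# A rough sketch of the dispatch these helpers perform (ubelt's real logic also honors overrides such as ``XDG_CACHE_HOME``):

```python
import os
import pathlib
import sys

def cache_root():
    # Pick the conventional per-platform cache root.
    if sys.platform.startswith('win'):
        return pathlib.Path(os.environ.get('LOCALAPPDATA', '~/AppData/Local')).expanduser()
    if sys.platform == 'darwin':
        return pathlib.Path('~/Library/Caches').expanduser()
    return pathlib.Path('~/.cache').expanduser()

def ensure_app_cache_dir(appname):
    # "ensure" variants create the directory; "get" variants only return it.
    dpath = cache_root() / appname
    dpath.mkdir(parents=True, exist_ok=True)
    return dpath
```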
#
import ubelt as ub
print(ub.shrinkuser(ub.ensure_app_cache_dir('my_app')))
# Downloading Files
# -----------------
#
# The function ``ub.download`` provides a simple interface to download a
# URL and save its data to a file.
#
# The function ``ub.grabdata`` works similarly to ``ub.download``, but
# whereas ``ub.download`` will always re-download the file,
# ``ub.grabdata`` will check if the file exists and only re-download it if
# it needs to.
#
# New in version 0.4.0: both functions now accepts the ``hash_prefix`` keyword
# argument, which if specified will check that the hash of the file matches the
# provided value. The ``hasher`` keyword argument can be used to change which
# hashing algorithm is used (it defaults to ``"sha512"``).
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.download(url, verbose=0)
>>> print(ub.shrinkuser(fpath))
>>> import ubelt as ub
>>> url = 'http://i.imgur.com/rqwaDag.png'
>>> fpath = ub.grabdata(url, verbose=0, hash_prefix='944389a39')
>>> print(ub.shrinkuser(fpath))
try:
ub.grabdata(url, verbose=0, hash_prefix='not-the-right-hash')
except Exception as ex:
print('type(ex) = {!r}'.format(type(ex)))
# # Dictionary Tools
import ubelt as ub
item_list = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'bannana']
groupid_list = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']
groups = ub.group_items(item_list, groupid_list)
print(ub.repr2(groups, nl=1))
import ubelt as ub
item_list = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]
ub.dict_hist(item_list)
import ubelt as ub
items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]
ub.find_duplicates(items, k=2)
import ubelt as ub
dict_ = {'K': 3, 'dcvs_clip_max': 0.2, 'p': 0.1}
subdict_ = ub.dict_subset(dict_, ['K', 'dcvs_clip_max'])
print(subdict_)
import ubelt as ub
dict_ = {1: 'a', 2: 'b', 3: 'c'}
print(list(ub.dict_take(dict_, [1, 2, 3, 4, 5], default=None)))
import ubelt as ub
dict_ = {'a': [1, 2, 3], 'b': []}
newdict = ub.map_vals(len, dict_)
print(newdict)
import ubelt as ub
mapping = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
ub.invert_dict(mapping)
import ubelt as ub
mapping = {'a': 0, 'A': 0, 'b': 1, 'c': 2, 'C': 2, 'd': 3}
ub.invert_dict(mapping, unique_vals=False)
# AutoDict - Autovivification
# ---------------------------
#
# While the ``collections.defaultdict`` is nice, it is sometimes more
# convenient to have an infinitely nested dictionary of dictionaries.
#
# (But be careful, you may start to write in Perl)
>>> import ubelt as ub
>>> auto = ub.AutoDict()
>>> print('auto = {!r}'.format(auto))
>>> auto[0][10][100] = None
>>> print('auto = {!r}'.format(auto))
>>> auto[0][1] = 'hello'
>>> print('auto = {!r}'.format(auto))
# String-based imports
# --------------------
#
# Ubelt contains functions to import modules dynamically without using the
# python ``import`` statement. While ``importlib`` exists, the ``ubelt``
# implementation is simpler to use and does not have the disadvantage of
# breaking ``pytest``.
#
# Note ``ubelt`` simply provides an interface to this functionality, the
# core implementation is in ``xdoctest``.
# +
>>> import ubelt as ub
>>> module = ub.import_module_from_path(ub.truepath('~/code/ubelt/ubelt'))
>>> print('module = {!r}'.format(module))
>>> module = ub.import_module_from_name('ubelt')
>>> print('module = {!r}'.format(module))
>>> modpath = ub.util_import.__file__
>>> print(ub.modpath_to_modname(modpath))
>>> modname = ub.util_import.__name__
>>> assert ub.truepath(ub.modname_to_modpath(modname)) == modpath
# -
# Horizontal String Concatenation
# -------------------------------
#
# Sometimes its just prettier to horizontally concatenate two blocks of
# text.
>>> import ubelt as ub
>>> B = ub.repr2([[1, 2], [3, 4]], nl=1, cbr=True, trailsep=False)
>>> C = ub.repr2([[5, 6], [7, 8]], nl=1, cbr=True, trailsep=False)
>>> print(ub.hzcat(['A = ', B, ' * ', C]))
| docs/notebooks/Ubelt Demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <table align="left" width="100%"> <tr>
# <td style="background-color:#ffffff;">
# <a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td>
# <td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
# prepared by <NAME> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
# </td>
# </tr></table>
# <table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
# $ \newcommand{\bra}[1]{\langle #1|} $
# $ \newcommand{\ket}[1]{|#1\rangle} $
# $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
# $ \newcommand{\dot}[2]{ #1 \cdot #2} $
# $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
# $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
# $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
# $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
# $ \newcommand{\mypar}[1]{\left( #1 \right)} $
# $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
# $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
# $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
# $ \newcommand{\onehalf}{\frac{1}{2}} $
# $ \newcommand{\donehalf}{\dfrac{1}{2}} $
# $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
# $ \newcommand{\vzero}{\myvector{1\\0}} $
# $ \newcommand{\vone}{\myvector{0\\1}} $
# $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
# $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
# $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
# $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
# $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
# $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
# $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
# $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
# $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
# $ \newcommand{\Y}{ \mymatrix{rr}{0 & -i \\ i & 0} } $ $ \newcommand{\S}{ \mymatrix{rr}{1 & 0 \\ 0 & i} } $
# $ \newcommand{\T}{ \mymatrix{rr}{1 & 0 \\ 0 & e^{i \frac{\pi}{4}}} } $
# $ \newcommand{\Sdg}{ \mymatrix{rr}{1 & 0 \\ 0 & -i} } $
# $ \newcommand{\Tdg}{ \mymatrix{rr}{1 & 0 \\ 0 & e^{-i \frac{\pi}{4}}} } $
# $ \newcommand{\qgate}[1]{ \mathop{\textit{#1} } }$
# <h1> <font color="blue"> Solutions for </font> Quantum gates with complex numbers </h1>
# <a id="task1"></a>
# <h3> Task 1 </h3>
#
# Remember that $\qgate{X}$-gate flips the value of a qubit.
#
# Design a quantum circuit with a single qubit. Set the value of qubit to $ \ket{1} $ by using x-gate. After that apply $\qgate{Y}$-gate and check the outcome on a statevector_simulator.
# <h3> Solution </h3>
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
# apply x-gate
mycircuit.x(qreg[0])
# apply y-gate
mycircuit.y(qreg[0])
job = execute(mycircuit,Aer.get_backend('statevector_simulator'))
current_quantum_state=job.result().get_statevector(mycircuit)
print(current_quantum_state)
# -
# <a id="task2"></a>
# <h3> Task 2 </h3>
#
# Find the relationship between the following operators:
# <ul>
# <li>$\qgate{S}$ and $\qgate{T}$;</li>
# <li>$\qgate{Z}$ and $\qgate{T}$.</li>
# </ul>
# <h3> Solution </h3>
# $\qgate{S} = \S = \mymatrix{cc}{1 & 0 \\ 0 & e^{i \frac{\pi}{4}} \cdot e^{i \frac{\pi}{4}}} = \T \T$, so $\qgate{S} = \qgate{T}^2$.
#
# $\qgate{Z} = \Z = \mymatrix{cc}{1 & 0 \\ 0 & i \cdot i} = \S \S$, so $\qgate{Z} = \qgate{S}^2 = \qgate{T}^4$.
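# These identities can be confirmed numerically with numpy:

```python
import numpy as np

T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
S = np.array([[1, 0], [0, 1j]])
Z = np.array([[1, 0], [0, -1]])
ok_S = np.allclose(T @ T, S)                         # S = T^2
ok_Z = np.allclose(np.linalg.matrix_power(T, 4), Z)  # Z = T^4
```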
# <a id="task3"></a>
# <h3> Task 3 </h3>
#
# For each one of the discussed 4 phase gates construct the following circuit:
#
# <ul>
# <li>Create a circuit with one qubit,</li>
# <li>apply Hadamard operator,</li>
# <li>apply the corresponding phase operator,</li>
# <li>make a measurement.</li>
# </ul>
#
# What is the measurement outcome in each case?
# <h3> Solution </h3>
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# s-gate
mycircuit.s(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# t-gate
mycircuit.t(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# sdg-gate
mycircuit.sdg(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# tdg-gate
mycircuit.tdg(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# -
# <a id="task4"></a>
# <h3> Task 4 </h3>
#
# For each one of the discussed 4 phase gates construct the following circuit:
#
# <ul>
# <li>Create a circuit with one qubit,</li>
# <li>apply Hadamard operator,</li>
# <li>apply the corresponding phase operator,</li>
# <li>apply Hadamard operator,</li>
# <li>make a measurement.</li>
# </ul>
#
# Guess the measurement outcome in each case before executing the code.
# <h3> Solution </h3>
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# s-gate
mycircuit.s(qreg[0])
mycircuit.h(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# t-gate
mycircuit.t(qreg[0])
mycircuit.h(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# sdg-gate
mycircuit.sdg(qreg[0])
mycircuit.h(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
mycircuit.h(qreg[0])
# tdg-gate
mycircuit.tdg(qreg[0])
mycircuit.h(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print(counts) # print the outcomes
mycircuit.draw(output='mpl')
# -
#
# <a id="task5"></a>
# <h3> Task 5 </h3>
#
# Repeat previous experiment for different number of $\qgate{H}$-gates and $\qgate{T}$-gates, but this time applying them in opposite order: first $\qgate{H}$ and then $\qgate{T}$.
# <h3> Solution </h3>
# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
for i in range(25):
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
for j in range(i):
mycircuit.h(qreg[0])
mycircuit.t(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
print('gates applied',i,'times, outcome:',counts) # print the outcomes
# -
# <a id="task6"></a>
# <h3> Task 6 (discussion) </h3>
#
# Why do the outcomes repeat after several iterations?
# <h3> Solution </h3>
# We apply $(\qgate{HTH})^n$ for some $n$. Writing this out gives $\qgate{H}\qgate{T}\,\qgate{HH}\,\qgate{T}\,\qgate{HH} \cdots \qgate{T}\qgate{H}$, and since $\qgate{HH} = \qgate{I}$, the inner Hadamard pairs cancel, leaving $(\qgate{HTH})^n = \qgate{H}\qgate{T}^n\qgate{H}$. Because $\qgate{T}^8 = \qgate{I}$, the outcomes repeat with period 8.
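# A quick numpy check that $\qgate{T}$ has period 8 (while $\qgate{T}^4 = \qgate{Z} \neq \qgate{I}$):

```python
import numpy as np

T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
period_8 = np.allclose(np.linalg.matrix_power(T, 8), np.eye(2))  # T^8 == I
period_4 = np.allclose(np.linalg.matrix_power(T, 4), np.eye(2))  # T^4 == Z != I
```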
| silver/C04_Quantum_Gates_With_Complex_Numbers_Solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import scipy.stats as st
plt.style.use('seaborn-darkgrid')
# +
# generate surrogate data
sd0 = 1
mu0 = 0
ymax0 = 5
q = np.exp(np.random.randn(500)*sd0+mu0)
y = (q*ymax0)/(q + ymax0)
_, ax = plt.subplots(1, 2, figsize=(15, 3))
ax[0].hist(np.log(y), 50)
ax[0].set_xlabel('log surrogate')
ax[1].hist(np.log(q), 50, density='pdf')
x = np.linspace(-5, 5, 1000)
pdf = st.norm.pdf(x, mu0, sd0)
ax[1].plot(x, pdf)
ax[1].set_xlabel('log latent');
# +
## setup model
yobsmax = np.amax(y)
with pm.Model() as ullnmodel:
ymax = pm.Pareto('ymax', alpha=1, m=yobsmax)
# ymax = pm.Uniform('ymax', 0, 10.)
mu = pm.Normal('mu', mu=0, sd=5)
sigma = pm.Lognormal('sigma', mu=0, sd=5)
qt = pm.math.log(y*ymax) - pm.math.log(ymax - y)
q2 = pm.Normal('q', mu=mu, sd = sigma, observed = qt)
startpoint = {'mu': np.mean(np.log(y)), 'sigma': np.std(np.log(y)), 'ymax': yobsmax*2.0}
map_estimate = pm.find_MAP(model=ullnmodel,start=startpoint)
map_estimate
# +
_, ax = plt.subplots(1,2,figsize=(15,3))
ax[0].hist(np.log(q), 50, density='pdf')
pdf = st.norm.pdf(x, mu0, sd0)
ax[0].plot(x, pdf)
ax[0].set_xlabel('log latent')
ymax_ = map_estimate['ymax']# ymax0
qt2 = np.log(y*ymax_) - np.log(ymax_-y)
ax[1].hist(qt2, 50, density='pdf')
pdf = st.norm.pdf(x, map_estimate['mu'], map_estimate['sigma'])
ax[1].plot(x, pdf)
ax[1].set_xlabel('log latent');
# +
from pymc3.distributions.transforms import ElemwiseTransform
import theano.tensor as tt
import theano
class logmax(ElemwiseTransform):
name = "logmax"
def __init__(self, ymax):
self.ymax = tt.as_tensor_variable(ymax)
def forward(self, x):
return (x * self.ymax)/(x + self.ymax)
def backward(self, y):
return tt.log(y * self.ymax) - tt.log(self.ymax - y)
# -
with pm.Model() as ullnmodel2:
ymax = pm.Pareto('ymax', alpha=1, m=yobsmax)
# ymax = pm.Uniform('ymax', 0, 10.)
mu = pm.Normal('mu', mu=0, sd=5)
sigma = pm.Lognormal('sigma', mu=0, sd=5)
q2 = pm.Normal('q', mu=mu, sd = sigma, transform=logmax(ymax))
with ullnmodel2:
like = pm.Potential('like', ullnmodel2.free_RVs[-1]
.distribution
.logp(theano.shared(y)))
startpoint = {'mu': np.mean(np.log(y)), 'sigma': np.std(np.log(y)), 'ymax': yobsmax*2.0}
map_estimate = pm.find_MAP(model=ullnmodel2, start=startpoint)
map_estimate
# +
_, ax = plt.subplots(1,2,figsize=(15,3))
ax[0].hist(np.log(q), 50, density='pdf')
pdf = st.norm.pdf(x, mu0, sd0)
ax[0].plot(x, pdf)
ax[0].set_xlabel('log latent')
ymax_ = map_estimate['ymax']# ymax0
qt2 = np.log(y*ymax_) - np.log(ymax_-y)
ax[1].hist(qt2, 50, density='pdf')
pdf = st.norm.pdf(x, map_estimate['mu'], map_estimate['sigma'])
ax[1].plot(x, pdf)
ax[1].set_xlabel('log latent');
# -
# $$f(y) = \log\left(\frac{y_{max}\, y}{y_{max}-y}\right)$$
# $$f(y) \sim \mathrm{Normal}(\mu, \sigma)$$
# $$\log|J| = \log\left(\frac{df}{dy}\right) = \log\left(\frac{y_{max}}{y\,(y_{max}-y)}\right)$$
#
# https://discourse.pymc.io/t/how-do-i-implement-an-upper-limit-log-normal-distribution/1337/8
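# A quick numerical sanity check of the Jacobian term above (a sketch; `ymax = 5.0` is an arbitrary test value): the analytic log-derivative should match a central-difference estimate of $df/dy$.

```python
import numpy as np

ymax = 5.0
f = lambda y: np.log(ymax * y) - np.log(ymax - y)

y = np.linspace(0.5, 4.5, 9)
# analytic: log(df/dy) = log(ymax / (y * (ymax - y)))
analytic = np.log(ymax / (y * (ymax - y)))

# numerical: log of a central-difference derivative
eps = 1e-6
numeric = np.log((f(y + eps) - f(y - eps)) / (2 * eps))

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```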
# +
with pm.Model() as ullnmodel3:
ymax = pm.Pareto('ymax',alpha=1,m=yobsmax)
mu = pm.Normal('mu', mu=0, sd=5)
sigma = pm.Lognormal('sigma', mu=0, sd=5)
qt = pm.math.log(y*ymax) - pm.math.log(ymax - y)
q2 = pm.Normal('q', mu=mu, sd = sigma, observed = qt)
pm.Potential('jacob_det', pm.math.log(ymax/(y*(ymax-y))))
startpoint = {'mu': np.mean(np.log(y)), 'sigma': np.std(np.log(y)), 'ymax': 10}
map_estimate = pm.find_MAP(model=ullnmodel3,start=startpoint)
map_estimate
# +
_, ax = plt.subplots(1,2,figsize=(15,3))
ax[0].hist(np.log(q), 50, density='pdf')
pdf = st.norm.pdf(x, mu0, sd0)
ax[0].plot(x, pdf)
ax[0].set_xlabel('log latent')
ymax_ = map_estimate['ymax']# ymax0
qt2 = np.log(y*ymax_) - np.log(ymax_-y)
ax[1].hist(qt2, 50, density='pdf')
pdf = st.norm.pdf(x, map_estimate['mu'], map_estimate['sigma'])
ax[1].plot(x, pdf)
ax[1].set_xlabel('log latent');
# -
| PyMC3QnA/discourse_1337.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Adjusting the size of plots displayed in a Jupyter notebook
# +
import matplotlib.pyplot as plt
# %matplotlib inline
def plot(dpi):
fig, ax = plt.subplots(dpi=dpi)
ax.plot([2, 4, 1, 5], label='Label')
ax.legend()
plot(100)
# -
fig, ax = plt.subplots(dpi=50)
ax.plot([[0, 0], [1, 1]], label='Label')
ax.legend()
| python/set_graph_size.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Q
# ### To predict whether the client will subscribe a term deposit or not
# #### Importing libraries
# +
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
# -
# #### Importing data
bank = pd.read_csv('bank-full.csv', sep= ';')
bank
# #### Checking for null value
bank.info()
bank.columns
# #### Checking frequencies
sns.countplot(x='y',data=bank)
# %matplotlib inline
pd.crosstab(bank.job,bank.y).plot(kind='bar')
plt.title('Subscribed Frequency for Job Title')
plt.xlabel('Job')
plt.ylabel('Frequency of Purchase')
# #### Different occupations show different subscription rates to the term deposit, so this is an important feature
table=pd.crosstab(bank.marital,bank.y)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=False)
plt.title('Bar Chart of Marital Status vs Subscribed')
plt.xlabel('Marital Status')
plt.ylabel('Proportion of Customers')
# #### People of all marital statuses subscribe at roughly the same rate, so this is not an important feature
table=pd.crosstab(bank.education,bank.y)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar')
plt.title('Bar Chart of Education vs Subscribed')
plt.xlabel('Education')
plt.ylabel('Proportion of Customers')
# #### People with tertiary education tend to subscribe to the term deposit slightly more often than people from other education backgrounds
pd.crosstab(bank.month,bank.y).plot(kind='bar')
plt.title('Purchase Frequency for Month')
plt.xlabel('Month')
plt.ylabel('Frequency of Subscribe')
# #### People tend to subscribe to the term deposit mostly in the month of May
# ### Checking for outliers
bank.boxplot(column='age', by='y')
pd.crosstab(bank.poutcome,bank.y).plot(kind='bar')
plt.title('Subscribe Frequency for Poutcome')
plt.xlabel('Poutcome')
plt.ylabel('Frequency of Subscribe')
bank.head()
# #### Dropping all the unimportant features.
bank.drop(["month","education","pdays","day","campaign","age",'loan',"housing",'marital'],axis=1,inplace=True)
bank.head()
# #### Creating dummy variable for categorical data
contactd=pd.get_dummies(bank['contact'],drop_first=True)
jobd=pd.get_dummies(bank['job'],drop_first=True)
poutcomed=pd.get_dummies(bank['poutcome'],drop_first=True)
defaultd = pd.get_dummies(bank['default'],drop_first=True)
bank = pd.concat([bank,defaultd,poutcomed,jobd,contactd],axis=1)
bank.head()
bank.drop(['default','poutcome','job',"contact"],axis=1,inplace = True)
bank.head()
bank_new= bank.copy()
bank_new['y'] = bank_new['y'].map({'no':0,'yes':1})
bank_new
# ### Independent and Target variables
X = bank.drop("y",axis=1)
Y = bank["y"]
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X,Y)
# ### Prediction
y_pred = classifier.predict(X)
y_pred_prob=classifier.predict_proba(X)
y_pred_prob
# ### Evaluation metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
cm = confusion_matrix(Y, y_pred)
accuracy = accuracy_score(Y, y_pred)
print(cm, accuracy)
# #### Accuracy of the model is 90%
from sklearn.metrics import classification_report
print(classification_report(Y,y_pred))
# #### ROC Curve
# +
from sklearn import preprocessing
label_encoder = preprocessing.LabelEncoder()
y_predict= label_encoder.fit_transform(y_pred)
y_actual= label_encoder.fit_transform(bank_new['y'])
# +
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
fpr, tpr, thresholds = roc_curve(y_actual, classifier.predict_proba(X)[:,1])
area_under_curve = roc_auc_score(y_actual, y_predict)
import matplotlib.pyplot as plt
plt.plot(fpr, tpr, color='red', label='logit model ( area = %0.2f)'%area_under_curve)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False Positive Rate or [1 - True Negative Rate]')
plt.ylabel('True Positive Rate')
# -
area_under_curve
# #### EDA was performed and a model was constructed using only the features that significantly contribute to whether the client subscribes to the term deposit.
#
# #### From the ROC curve, the area under the curve is 64.8%, i.e. the model ranks a random positive example above a random negative one about 65% of the time. It is biased because the dataset contains a large proportion of negative results.
#
# #### The precision, sensitivity (true positive rate), specificity (true negative rate) and F1-score are also biased because the dataset has a large proportion of negative results.
# ### precision - no (0.92) and yes ( 0.65 )
# ### recall - no (0.98) and yes ( 0.32 )
# ### f1-score - no (0.95) and yes ( 0.43 )
# ### support - no ( 39922) and yes ( 5289 )
#
# ### Thus, the created model gives 90% accurate prediction for the client to subscribe the term deposit
#
#
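# The class-imbalance bias noted above can be reproduced by hand. This is a sketch with hypothetical confusion-matrix counts (chosen to roughly match the reported metrics), not the actual model output:

```python
# Hypothetical confusion-matrix counts for an imbalanced dataset
# rows: actual ('no', 'yes'); columns: predicted ('no', 'yes')
tn, fp = 39000, 900    # actual 'no'
fn, tp = 3600, 1700    # actual 'yes'

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # of predicted 'yes', fraction correct
recall = tp / (tp + fn)      # of actual 'yes', fraction found
f1 = 2 * precision * recall / (precision + recall)

# accuracy stays high (~0.90) even though recall on 'yes' is only ~0.32
print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(f1, 2))
```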
| Logistic Regression/Logistic Regression (Bank).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import monai.networks
from monai.networks.blocks.evonorm import EvoNormLayer
from monai.networks.nets.regunet import RegUNet
import torch
evonormlayer = EvoNormLayer(128)
test_volume = torch.randn(128,128,128).unsqueeze(0).unsqueeze(1)
for i in range(9):
evonormlayer.mutate()
model = RegUNet(3, 1, 1, 2, evonorm=evonormlayer)
model(test_volume)
evonormlayer.get_forward_nodes_()
evonormlayer.nodes
evonormlayer.nodes[1](5).size()
evonormlayer
print(model)
el = EvoNormLayer(128)
el(test_volume)
| monai/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Want to predict the opponent's choice, given yours, and maximize your own choice's score given theirs
#
# Can look only at the new/relevant parts of the board and calculate their choices there. Then look to see if their score is higher there than in the areas of the board that don't change.
#
# Other option: do all the calculations for certain situations (letter next to a TW/TL etc) and calculate their effect on the "present value" of a play, then subtract from your score and compare.
| WWF_AI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from transformers import (
AutoTokenizer,
LEDForConditionalGeneration,
)
from datasets import load_dataset, load_metric
import torch
# First, we load the **Multi-news** dataset from huggingface dataset hub
dataset=load_dataset('multi_news')
# Then we load the fine-tuned PRIMERA model, please download [it](https://storage.googleapis.com/primer_summ/PRIMER_multinews.tar.gz) to your local computer.
PRIMER_path='../github/PRIMER/PRIMER_multinews_hf'
TOKENIZER = AutoTokenizer.from_pretrained(PRIMER_path)
MODEL = LEDForConditionalGeneration.from_pretrained(PRIMER_path)
MODEL.gradient_checkpointing_enable()
PAD_TOKEN_ID = TOKENIZER.pad_token_id
DOCSEP_TOKEN_ID = TOKENIZER.convert_tokens_to_ids("<doc-sep>")
# We then define the functions to pre-process the data, as well as the function to generate summaries.
# +
def process_document(documents):
input_ids_all=[]
for data in documents:
all_docs = data.split("|||||")[:-1]
for i, doc in enumerate(all_docs):
doc = doc.replace("\n", " ")
doc = " ".join(doc.split())
all_docs[i] = doc
#### concat with global attention on doc-sep
input_ids = []
for doc in all_docs:
input_ids.extend(
TOKENIZER.encode(
doc,
truncation=True,
max_length=4096 // len(all_docs),
)[1:-1]
)
input_ids.append(DOCSEP_TOKEN_ID)
input_ids = (
[TOKENIZER.bos_token_id]
+ input_ids
+ [TOKENIZER.eos_token_id]
)
input_ids_all.append(torch.tensor(input_ids))
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids_all, batch_first=True, padding_value=PAD_TOKEN_ID
)
return input_ids
def batch_process(batch):
input_ids=process_document(batch['document'])
# get the input ids and attention masks together
global_attention_mask = torch.zeros_like(input_ids).to(input_ids.device)
# put global attention on <s> token
global_attention_mask[:, 0] = 1
global_attention_mask[input_ids == DOCSEP_TOKEN_ID] = 1
generated_ids = MODEL.generate(
input_ids=input_ids,
global_attention_mask=global_attention_mask,
use_cache=True,
max_length=1024,
num_beams=5,
)
generated_str = TOKENIZER.batch_decode(
generated_ids.tolist(), skip_special_tokens=True
)
result={}
result['generated_summaries'] = generated_str
result['gt_summaries']=batch['summary']
return result
# -
# Next, we simply run the model on 10 data examples (or any number of examples you want)
import random
data_idx = random.choices(range(len(dataset['test'])),k=10)
dataset_small = dataset['test'].select(data_idx)
result_small = dataset_small.map(batch_process, batched=True, batch_size=2)
# After getting all the results, we load the evaluation metric.
#
#
# (Note: in the original code we didn't use the default aggregators; instead we simply took the
# average over all the scores. In this notebook we use 'mid'.)
rouge = load_metric("rouge")
result_small['generated_summaries']
score=rouge.compute(predictions=result_small["generated_summaries"], references=result_small["gt_summaries"])
print(score['rouge1'].mid)
print(score['rouge2'].mid)
print(score['rougeL'].mid)
import random
random.choices(range(5000),k=5)
# – Facebook removed a photo of two men kissing in protest of a London pub’s decision to eject a same-sex couple for kissing, reports the America Blog. “Shares that contain nudity, or any kind of graphic or sexually suggestive content, are not permitted on Facebook,” the administrators of the Dangerous Minds Facebook page said in an email. The decision to remove the photo has prompted scores of people to post their own pictures of same-sex couples kissing in protest— dozens in the last few hours alone.
# – Facebook has removed a photo from a protest page for a gay pub that booted a same-sex couple for kissing, USA Today reports. The Dangerous Minds Facebook page was trying to promote a “gay kiss-in” demonstration in London to protest the pub. The page used a photo of two men kissing to promote the event. But Facebook quickly removed the photo, saying in an email, “Shares that contain nudity, or any kind of graphic or sexually suggestive content, are not permitted on Facebook.” The decision to remove the photo has prompted scores of people to post their own pictures of same-sex couples kissing in protest— dozens in the last few hours alone.
| Evaluation_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import collections
import scipy
import numpy as np
import pandas as pd
import seaborn as sns
import requests
import io
urlData = requests.get(r'https://raw.githubusercontent.com/Yorko/mlcourse.ai/main/data/weights_heights.csv').content
data = pd.read_csv(io.StringIO(urlData.decode('utf-8')), index_col='Index')
data
data.plot(y="Height", kind="hist", color="red", title="Height (inch.) distribution");
data.plot(y='Weight', kind='hist', color='green', title='Weight (lbs.) distribution')
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / (height_inch / METER_TO_INCH) ** 2
data["BMI"] = data.apply(lambda row: make_bmi(row["Height"], row["Weight"]), axis=1)
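# Since Height and Weight are plain numeric columns, the same BMI can be computed without the row-wise `apply` — a sketch (using a small demo frame standing in for `data`); vectorized arithmetic is typically much faster on large frames:

```python
import pandas as pd

# Small demo frame standing in for `data` (heights in inches, weights in pounds)
demo = pd.DataFrame({"Height": [65.0, 70.0], "Weight": [130.0, 180.0]})

METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
# Vectorized equivalent of make_bmi applied row by row
demo["BMI"] = (demo["Weight"] / KILO_TO_POUND) / (demo["Height"] / METER_TO_INCH) ** 2
print(demo["BMI"].round(1).tolist())
```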
| lesson_6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
import pickle
import numpy as np
data_path = '/mnt/368AE7F88AE7B313/Files_Programming/Git/Project_Electricity/data/Raw/1002.csv'
data = pd.read_csv(data_path,sep=';',thousands=',')
data.head()
data.columns
data.index
data_messwert = data[['Unnamed: 0','Messwert_kWh']].copy()
data_messwert
type(data_messwert['Unnamed: 0'][0])
data_messwert['date_time'] = pd.to_datetime(data_messwert['Unnamed: 0'],format='%d.%m.%Y %H:%M')
data_messwert
data_messwert = data_messwert.set_index('date_time')
data_messwert.head()
data = data_messwert[['Messwert_kWh']]
data.head()
type(data['Messwert_kWh'][0])
Messwert_KWh_num = data['Messwert_kWh'].apply(lambda x: x.replace('.', '').replace(',','.'))
Messwert_KWh_num.head()
Messwert_KWh_num = Messwert_KWh_num.astype(np.float32)
Messwert_KWh_num
data['messwert_kwh'] = Messwert_KWh_num
data = data.drop('Messwert_kWh', axis="columns")
type(data['messwert_kwh'][0])
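# The two-step string replacement above handles the German number format ('.' as thousands separator, ',' as decimal separator). A sketch of an alternative: `read_csv` can parse this directly via its `decimal` and `thousands` parameters (note that `thousands=','` as passed earlier appears not to match this format):

```python
import io
import pandas as pd

# German number format: '.' as thousands separator, ',' as decimal separator
raw = "date;Messwert_kWh\n01.01.2020 00:00;1.234,5\n"
demo_df = pd.read_csv(io.StringIO(raw), sep=';', decimal=',', thousands='.')
print(demo_df['Messwert_kWh'].iloc[0])  # 1234.5
```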
path = '/mnt/368AE7F88AE7B313/Files_Programming/Git/Project_Electricity/data/Processed/'
data.to_pickle(path + '1002_processed_single_datetime.pkl')
type(data)
| notebooks/Structuring_Data_single-datetime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <p><font size="6"><b>06 - Pandas: "Group by" operations</b></font></p>
#
# > *DS Data manipulation, analysis and visualisation in Python*
# > *December, 2017*
#
# > *© 2016, <NAME> and <NAME> (<mailto:<EMAIL>>, <mailto:<EMAIL>>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
#
# ---
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "-"}
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# -
# # Some 'theory': the groupby operation (split-apply-combine)
# + run_control={"frozen": false, "read_only": false}
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
# -
# ### Recap: aggregating functions
# When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:
# + run_control={"frozen": false, "read_only": false}
df['data'].sum()
# -
# However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
#
# For example, in the above dataframe `df`, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:
# + run_control={"frozen": false, "read_only": false}
for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum())
# -
# This becomes very verbose when there are many groups. You could generalize the above by looping over `df['key'].unique()`, but it is still not very convenient to work with.
#
# What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this.
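# The split-apply-combine idea can be sketched in plain Python, without pandas: split the rows by key, apply `sum` to each group, and combine the results into a dict.

```python
keys = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
values = [0, 5, 10, 5, 10, 15, 10, 15, 20]

groups = {}
for k, v in zip(keys, values):              # split
    groups.setdefault(k, []).append(v)

result = {k: sum(vs) for k, vs in groups.items()}   # apply + combine
print(result)  # {'A': 15, 'B': 30, 'C': 45}
```

# This is exactly what `df.groupby('key').sum()` does for us in one call.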
# ### Groupby: applying functions per group
# + [markdown] slideshow={"slide_type": "subslide"}
# The "group by" concept: we want to **apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets**
#
# This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
#
# * **Splitting** the data into groups based on some criteria
# * **Applying** a function to each group independently
# * **Combining** the results into a data structure
#
# <img src="../img/splitApplyCombine.png">
#
# Similar to SQL `GROUP BY`
# -
# Instead of doing the manual filtering as above
#
#
# df[df['key'] == "A"].sum()
# df[df['key'] == "B"].sum()
# ...
#
# pandas provides the `groupby` method to do exactly this:
# + run_control={"frozen": false, "read_only": false}
df.groupby('key').sum()
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "subslide"}
df.groupby('key').aggregate(np.sum) # 'sum'
# -
# And many more methods are available.
# + run_control={"frozen": false, "read_only": false}
df.groupby('key')['data'].sum()
# + [markdown] slideshow={"slide_type": "subslide"}
# # Application of the groupby concept on the titanic data
# -
# We go back to the titanic passengers survival data:
# + run_control={"frozen": false, "read_only": false}
df = pd.read_csv("../data/titanic.csv")
# + run_control={"frozen": false, "read_only": false}
df.head()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Using groupby(), calculate the average age for each sex.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
df.groupby('Sex')['Age'].mean()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Calculate the average survival ratio for all passengers.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# df['Survived'].sum() / len(df['Survived'])
df['Survived'].mean()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Calculate this survival ratio for all passengers younger that 25 (remember: filtering/boolean indexing).</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What is the difference in the survival ratio between the sexes?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
df.groupby('Sex')['Survived'].mean()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Make a bar plot of the survival ratio for the different classes ('Pclass' column).</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
df.groupby('Pclass')['Survived'].mean().plot(kind='bar', color="C0") #and what if you would compare the total number of survivors?
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below.</li>
# </ul>
# </div>
# + clear_cell=false run_control={"frozen": false, "read_only": false}
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
# + clear_cell=true run_control={"frozen": false, "read_only": false}
df.groupby('AgeClass')['Fare'].mean().plot(kind='bar', rot=0, color="C0")
# -
# If you are ready, more groupby exercises can be found below.
# + [markdown] slideshow={"slide_type": "subslide"}
# # Some more theory
# -
# ## Specifying the grouper
# + [markdown] slideshow={"slide_type": "subslide"}
# In the previous example and exercises, we always grouped by a single column by passing its name. But, a column name is not the only value you can pass as the grouper in `df.groupby(grouper)`. Other possibilities for `grouper` are:
#
# - a list of strings (to group by multiple columns)
# - a Series (similar to a string indicating a column in df) or array
# - function (to be applied on the index)
# - levels=[], names of levels in a MultiIndex
# + run_control={"frozen": false, "read_only": false}
df.groupby(df['Age'] < 18)['Survived'].mean()
# + run_control={"frozen": false, "read_only": false}
df.groupby(['Pclass', 'Sex'])['Survived'].mean()
# -
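# As a small sketch of the *function* grouper mentioned above: the function is called on each index label, and rows whose labels map to the same value form a group.

```python
import pandas as pd

s = pd.Series(range(10))  # index labels 0..9, values 0..9

# group index labels by parity: even labels vs odd labels
parity_sums = s.groupby(lambda idx: idx % 2).sum()
print(parity_sums[0], parity_sums[1])  # 20 25
```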
# ## The size of groups - value counts
# Oftentimes you want to know how many elements there are in a certain group (or in other words: the number of occurrences of the different values from a column).
#
# To get the size of the groups, we can use `size`:
# + run_control={"frozen": false, "read_only": false}
df.groupby('Pclass').size()
# + run_control={"frozen": false, "read_only": false}
df.groupby('Embarked').size()
# -
# Another way to obtain such counts, is to use the Series `value_counts` method:
# + run_control={"frozen": false, "read_only": false}
df['Embarked'].value_counts()
# + [markdown] slideshow={"slide_type": "subslide"}
# # [OPTIONAL] Additional exercises using the movie data
# -
# These exercises are based on the [PyCon tutorial of <NAME>](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so credit to him!) and the datasets he prepared for that. You can download these data from here: [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8) and put them in the `/data` folder.
# `cast` dataset: different roles played by actors/actresses in films
#
# - title: title of the movie
# - year: year it was released
# - name: name of the actor/actress
# - type: actor/actress
# - n: the order of the role (n=1: leading role)
# + run_control={"frozen": false, "read_only": false}
cast = pd.read_csv('../data/cast.csv')
cast.head()
# -
# `titles` dataset:
#
# * title: title of the movie
# * year: year of release
# + run_control={"frozen": false, "read_only": false}
titles = pd.read_csv('../data/titles.csv')
titles.head()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Using `groupby()`, plot the number of films that have been released each decade in the history of cinema.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
titles['decade'] = titles['year'] // 10 * 10
# + clear_cell=true run_control={"frozen": false, "read_only": false}
titles.groupby('decade').size().plot(kind='bar', color='green')
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Use `groupby()` to plot the number of 'Hamlet' movies made each decade.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
titles['decade'] = titles['year'] // 10 * 10
hamlet = titles[titles['title'] == 'Hamlet']
hamlet.groupby('decade').size().plot(kind='bar', color="orange")
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>For each decade, plot all movies of which the title contains "Hamlet".</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
titles['decade'] = titles['year'] // 10 * 10
hamlet = titles[titles['title'].str.contains('Hamlet')]
hamlet.groupby('decade').size().plot(kind='bar', color="lightblue")
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>List the 10 actors/actresses that have the most leading roles (n=1) since the 1990's.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast1990 = cast[cast['year'] >= 1990]
cast1990 = cast1990[cast1990['n'] == 1]
cast1990.groupby('name').size().nlargest(10)
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast1990['name'].value_counts().head(10)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>In a previous exercise, the number of 'Hamlet' films released each decade was checked. Not all titles are exactly called 'Hamlet'. Give an overview of the titles that contain 'Hamlet' and of the titles that start with 'Hamlet', each time providing the number of occurrences in the data set for each movie.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
hamlets = titles[titles['title'].str.contains('Hamlet')]
hamlets['title'].value_counts()
# + clear_cell=true run_control={"frozen": false, "read_only": false}
hamlets = titles[titles['title'].str.startswith('Hamlet')]
hamlets['title'].value_counts()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>List the 10 movie titles with the longest name.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
title_longest = titles['title'].str.len().nlargest(10)
title_longest
# + clear_cell=true run_control={"frozen": false, "read_only": false}
pd.options.display.max_colwidth = 210
titles.loc[title_longest.index]
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast1950 = cast[cast['year'] // 10 == 195]
cast1950 = cast1950[cast1950['n'] == 1]
cast1950.groupby(['year', 'type']).size()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What are the 11 most common character names in movie history?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast.character.value_counts().head(11)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Plot how many roles <NAME> has played in each year of his career.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast[cast.name == '<NAME>'].year.value_counts().sort_index().plot()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>What are the 10 most occurring movie titles that start with the words 'The Life'?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
titles[titles['title'].str.startswith('The Life')]['title'].value_counts().head(10)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Which actors or actresses were most active in the year 2010 (i.e. appeared in the most movies)?</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast[cast.year == 2010].name.value_counts().head(10)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Determine how many roles are listed for each of 'The Pink Panther' movies.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
pink = cast[cast['title'] == 'The Pink Panther']
pink.groupby(['year'])[['n']].max()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li> List, in order by year, each of the movies in which '<NAME>' has played more than 1 role.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
oz = cast[cast['name'] == '<NAME>']
oz_roles = oz.groupby(['year', 'title']).size()
oz_roles[oz_roles > 1]
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li> List each of the characters that <NAME> has portrayed at least twice.</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
oz = cast[cast['name'] == '<NAME>']
oz_roles = oz.groupby(['character']).size()
oz_roles[oz_roles > 1].sort_values()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li> Add a new column to the `cast` DataFrame that indicates the number of roles for each movie. [Hint](http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation)</li>
# </ul>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast['n_total'] = cast.groupby('title')['n'].transform('max') # transform will return an element for each row, so the max value is given to the whole group
cast.head()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li> Calculate the ratio of leading actor and actress roles to the total number of leading roles per decade. </li>
# </ul><br>
#
# **Tip**: you can do a groupby twice in two steps, first calculating the numbers, and secondly, the ratios.
# </div>
#
# + clear_cell=true run_control={"frozen": false, "read_only": false}
leading = cast[cast['n'] == 1]
sums_decade = leading.groupby([cast['year'] // 10 * 10, 'type']).size()
sums_decade
# + clear_cell=true run_control={"frozen": false, "read_only": false}
#sums_decade.groupby(level='year').transform(lambda x: x / x.sum())
ratios_decade = sums_decade / sums_decade.groupby(level='year').transform('sum')
ratios_decade
# + clear_cell=true run_control={"frozen": false, "read_only": false}
ratios_decade[:, 'actor'].plot()
ratios_decade[:, 'actress'].plot()
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li> In which years were the most films released?</li>
# </ul><br>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
t = titles
t.year.value_counts().head(3)
# -
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s? And in the 2000s?</li>
# </ul><br>
# </div>
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast1950 = cast[cast['year'] // 10 == 195]
cast1950 = cast1950[cast1950['n'] == 1]
cast1950['type'].value_counts()
# + clear_cell=true run_control={"frozen": false, "read_only": false}
cast2000 = cast[cast['year'] // 10 == 200]
cast2000 = cast2000[cast2000['n'] == 1]
cast2000['type'].value_counts()
| _solved/pandas_06_groupby_operations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
# -
# # Splitting train test dataset
# +
imdb_data = pd.read_pickle('pickles/cleaned_data2.pkl')
X = imdb_data.review
y = imdb_data.sentiment
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# -
# # Training Models
# +
# Store all models
models = {}
# -
# ### Logistic Regression Model
# +
from sklearn.linear_model import LogisticRegression
lr_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(penalty='l2',max_iter=500,C=1,random_state=42)),
])
# Fitting lr model
lr_clf.fit(X_train, y_train)
models['lr_clf'] = lr_clf
print('Logistic Regression classifier added!')
# -
# ### Naive Bayes Classifier
# +
from sklearn.naive_bayes import MultinomialNB
nb_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
# Fitting nb model
nb_clf.fit(X_train, y_train)
models['nb_clf'] = nb_clf
print('Naive Bayes Classifier added!')
# -
# ### Support Vector Classifier
# +
from sklearn.svm import LinearSVC
svc_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LinearSVC()),
])
# Fitting svm classifier model
svc_clf.fit(X_train, y_train)
models['svc_clf'] = svc_clf
print('Support Vector Classifier added!')
# -
# ### Regularized linear models with stochastic gradient descent (SGD) learning
# +
from sklearn.linear_model import SGDClassifier
pipeline = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SGDClassifier()),
])
parameters = {
'vect__max_df': (0.5, 0.75, 1.0),
# 'vect__max_features': (None, 5000, 10000, 50000),
'vect__ngram_range': ((1, 1), (1, 2)), # unigrams or bigrams
# 'tfidf__use_idf': (True, False),
# 'tfidf__norm': ('l1', 'l2'),
'clf__max_iter': (20,),
'clf__alpha': (0.00001, 0.000001),
'clf__penalty': ('l2', 'elasticnet'),
# 'clf__max_iter': (10, 50, 80),
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1)
grid_search.fit(X_train, y_train)
models['sgd_clf'] = grid_search
print('SGD added!')
# -
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
# # Compare Accuracy
# ### Confusion Matrix
# +
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
# %matplotlib inline
f, axes = plt.subplots(1, 4, figsize=(24, 6), sharey='row')
for i, (name, classifier) in enumerate(models.items()):
y_pred = classifier.predict(X_test)
cf_matrix = confusion_matrix(y_test, y_pred)
disp = ConfusionMatrixDisplay(
confusion_matrix=cf_matrix,
display_labels=['positive', 'negative']
)
disp.plot(ax=axes[i], xticks_rotation=45)
accuracy = accuracy_score(y_pred, y_test)
disp.ax_.set_title(f'Model: {name} \nAccuracy: {accuracy}')
disp.im_.colorbar.remove()
disp.ax_.set_xlabel('')
if i!=0:
disp.ax_.set_ylabel('')
f.text(0.4, 0.1, 'Predicted label', ha='left')
plt.subplots_adjust(wspace=0.40, hspace=0.1)
f.colorbar(disp.im_, ax=axes)
plt.show()
# -
# ### Classification Report
# +
from sklearn.metrics import classification_report
for name, clf in models.items():
y_pred = clf.predict(X_test)
print('\n')
print('-'*20,' '*2,'Classifier: ', name, ' '*2, '-'*20)
print(classification_report(y_test, y_pred))
# -
| 2-approaching-ml-technique.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import json
Settings = json.load(open('../settings.txt'))
import matplotlib.pyplot as plt
import numpy as np
from os.path import join
from cselect import color as cs
import sys
sys.path.insert(0,'../')
sys.path.insert(0,'../samples')
sys.path.insert(0,'../debugging')
from mvpose.data import shelf
from time import time
root = join(Settings['data_root'], 'pak')
root = '/media/tanke/Data1/datasets/pak/'
tmp = Settings['tmp']
import mvpose.data.kth_football2 as kth
from mvpose import pose
from mvpose.settings import get_settings
from paf_loader import Loader
from mvpose.evaluation import pcp
from mvpose.plot.limbs import draw_mscoco_human, draw_mscoco_human2d
# from openpose import OpenPose
from mvpose.data.openpose import OpenPoseKeypoints, MultiOpenPoseKeypoints
FRAME = 530
#FRAME = 416
# pe = OpenPose(tmp=tmp)
Im, Y, Calib = shelf.get(root, FRAME)
# predictions = pe.predict(Im, 'cvpr_shelf', FRAME)
kp_loc = '/media/tanke/Data1/outputs/Shelf'
pe_cam1 = OpenPoseKeypoints('img_%06d', join(kp_loc, 'cam0'))
pe_cam2 = OpenPoseKeypoints('img_%06d', join(kp_loc, 'cam1'))
pe_cam3 = OpenPoseKeypoints('img_%06d', join(kp_loc, 'cam2'))
pe_cam4 = OpenPoseKeypoints('img_%06d', join(kp_loc, 'cam3'))
pe_cam5 = OpenPoseKeypoints('img_%06d', join(kp_loc, 'cam4'))
pe = MultiOpenPoseKeypoints([
pe_cam1, pe_cam2, pe_cam3, pe_cam4, pe_cam5])
predictions = pe.predict(FRAME)
fig = plt.figure(figsize=(16, 8))
for idx, (im, pred) in enumerate(zip(Im, predictions)):
ax = fig.add_subplot(2, 3, idx+1); ax.axis('off')
ax.imshow(im)
for human in pred:
draw_mscoco_human2d(ax, human[:, 0:2], color='red')
plt.show()
# +
# from mvpose.baseline.baseline import estimate
# _start = time()
# H = estimate(Calib, predictions, scale_to_mm=1000)
# _end = time()
# print('elapsed', _end - _start)
# fig = plt.figure(figsize=(16,12))
# colors = ['red', 'blue', 'green', 'teal']
# for cid, cam in enumerate(Calib):
# ax = fig.add_subplot(2, 3, 1+cid)
# ax.axis('off')
# im = Im[cid]
# h,w,_ = im.shape
# ax.set_xlim([0, w])
# ax.set_ylim([h, 0])
# ax.imshow(im, alpha=0.6)
# for pid, hyp in enumerate(H):
# draw_mscoco_human(ax, hyp, cam, alpha=0.5,
# color=colors[pid], linewidth=3)
# plt.tight_layout()
# plt.show()
# +
# =====================================================
def proper_pcp_calc(Y, Humans):
alpha = 0.5
L_Arms = []
U_Arms = []
L_Legs = []
U_Legs = []
GTIDs = []
for gtid, gt in enumerate(Y):
if gt is None:
continue
larms = 0
uarms = 0
llegs = 0
ulegs = 0
avg = 0
for d in Humans:
r = pcp.evaluate(gt, d, alpha)
larms_ = r.lower_arms
uarms_ = r.upper_arms
ulegs_ = r.upper_legs
llegs_ = r.lower_legs
avg_ = (larms_ + uarms_ + ulegs_ + llegs_) / 4
if avg_ > avg:
avg = avg_
larms = larms_
uarms = uarms_
llegs = llegs_
ulegs = ulegs_
L_Arms.append(larms)
U_Arms.append(uarms)
L_Legs.append(llegs)
U_Legs.append(ulegs)
GTIDs.append(gtid)
return L_Arms, U_Arms, L_Legs, U_Legs, GTIDs
# def generate_pcp_score(frame):
# Im, Y, Calib = shelf.get(root, frame)
# predictions = pe.predict(Im, 'cvpr_shelf', frame)
# # -- reduce #cams
# # Calib = Calib[0:3]
# # Y = Y[0:3]
# # predictions = predictions[0:3]
# # --------------
# detections = estimate(Calib, predictions,
# epi_threshold=80)
# Humans = kth.transform3d_from_mscoco(detections)
# return proper_pcp_calc(Y, Humans)
# =====================================================
valid_frames = list(range(300, 600))
# valid_frames = list(range(300, 310))
Calib = []
poses_per_frame = []
Pos3d = {}
_start = time()
for frame in valid_frames:
Im, Y, calib = shelf.get(root, frame)
Calib.append(calib)
Pos3d[frame] = Y
# predictions = pe.predict(Im, 'cvpr_shelf', frame)
predictions = pe.predict(frame)
poses_per_frame.append(predictions)
_end = time()
print('elapsed', _end - _start)
# PER_GTID = {}
# for frame in valid_frames:
# _start = time()
# L_Arms, U_Arms, L_Legs, U_Legs, GTIDs = generate_pcp_score(frame)
# if len(L_Arms) > 0:
# for gtid, larms, uarms, llegs, ulegs in zip(
# GTIDs, L_Arms, U_Arms, L_Legs, U_Legs
# ):
# if not gtid in PER_GTID:
# PER_GTID[gtid] = {
# 'larms': [],
# 'uarms': [],
# 'llegs': [],
# 'ulegs': [],
# 'frame': []
# }
# PER_GTID[gtid]['larms'].append(larms)
# PER_GTID[gtid]['uarms'].append(uarms)
# PER_GTID[gtid]['llegs'].append(llegs)
# PER_GTID[gtid]['ulegs'].append(ulegs)
# PER_GTID[gtid]['frame'].append(frame)
# _end = time()
# print('frame ' + str(frame) + ', elapsed:', _end - _start)
# +
from mvpose.baseline.tracking import tracking, Track
_start = time()
tracks = tracking(Calib, poses_per_frame,
epi_threshold=80,
scale_to_mm=1000,
max_distance_between_tracks=200,
actual_frames=valid_frames,
min_track_length=10,
merge_distance=80,
last_seen_delay=15)
_end = time()
print('elapsed', _end - _start)
print("#tracks", len(tracks))
for track in tracks:
print(len(track))
# -
# sigma = 1.7
_start = time()
tracks_ = []
for track in tracks:
track = Track.smoothing(track,
sigma=2,
interpolation_range=50)
tracks_.append(track)
tracks = tracks_
_end = time()
print("elapsed", _end - _start)
Im.shape
# +
colors = [
'red', 'blue', 'green', 'yellow', 'teal',
'cyan', 'lawngreen', 'maroon', 'magenta', 'cornflowerblue',
'olive', 'salmon', 'darkorange', 'cadetblue', 'darkgreen',
'turquoise'
]
assert len(colors) >= len(tracks)
# +
from mvpose.plot.limbs import draw_mscoco_human
from os.path import isdir, join
A = tracks[0]
output_dir = '/home/tanke/Videos/mvpose/shelf'
assert isdir(output_dir)
for frame_idx, frame in enumerate(range(300, 600)):
print("handle " + str(frame))
fig = plt.figure(figsize=(16, 8))
Im, Y, Calib = shelf.get(root, frame)
for idx, (im, cam) in enumerate(zip(Im, Calib)):
ax = fig.add_subplot(2, 3, idx+1); ax.axis('off')
ax.set_xlim([0, 1032])
ax.set_ylim([776, 0])
ax.imshow(im)
for tid, track in enumerate(tracks):
person = track.get_by_frame(frame)
if person is None:
continue
draw_mscoco_human(ax, person, cam, color=colors[tid])
fname = join(output_dir, 'frame_%06d.png' % frame_idx)
plt.savefig(fname)
del fig
# fig = plt.figure(figsize=(16, 8))
# for idx, (im, pred) in enumerate(zip(Im, predictions)):
# ax = fig.add_subplot(2, 3, idx+1); ax.axis('off')
# ax.imshow(im)
# for human in pred:
# draw_mscoco_human2d(ax, human[:, 0:2], color='red')
# plt.show()
# -
PER_GTID = {}
for frame in valid_frames:
Humans = []
for track in tracks:
pose = track.get_by_frame(frame)
if pose is not None:
Humans.append(pose)
Humans = kth.transform3d_from_mscoco(Humans)
if frame % 12 == 0:
print("Humans " + str(frame), len(Humans))
Y = Pos3d[frame]
L_Arms, U_Arms, L_Legs, U_Legs, GTIDs = proper_pcp_calc(Y, Humans)
if len(L_Arms) > 0:
for gtid, larms, uarms, llegs, ulegs in zip(
GTIDs, L_Arms, U_Arms, L_Legs, U_Legs
):
if not gtid in PER_GTID:
PER_GTID[gtid] = {
'larms': [],
'uarms': [],
'llegs': [],
'ulegs': [],
'frame': []
}
PER_GTID[gtid]['larms'].append(larms)
PER_GTID[gtid]['uarms'].append(uarms)
PER_GTID[gtid]['llegs'].append(llegs)
PER_GTID[gtid]['ulegs'].append(ulegs)
PER_GTID[gtid]['frame'].append(frame)
total_avg = []
for key, values in PER_GTID.items():
print('actor ', key)
print('\tuarms:', np.mean(values['uarms']))
print('\tlarms:', np.mean(values['larms']))
print('\tulegs:', np.mean(values['ulegs']))
print('\tllegs:', np.mean(values['llegs']))
avg = np.mean([
np.mean(values['uarms']),
np.mean(values['larms']),
np.mean(values['ulegs']),
np.mean(values['llegs'])
])
total_avg.append(avg)
print('\tavg: ', avg)
print('\navg*: ', np.mean(total_avg))
# +
# actor 0
# uarms: 0.996415770609319
# larms: 0.982078853046595
# ulegs: 1.0
# llegs: 1.0
# avg: 0.9946236559139785
# actor 2
# uarms: 0.8881987577639752
# larms: 0.8695652173913043
# ulegs: 0.9937888198757764
# llegs: 0.9937888198757764
# avg: 0.9363354037267081
# actor 1
# uarms: 0.8648648648648649
# larms: 0.6486486486486487
# ulegs: 1.0
# llegs: 1.0
# avg: 0.8783783783783784
# avg*: 0.9364458126730217
#---------
# actor 0
# uarms: 0.9874551971326165
# larms: 0.9551971326164874
# ulegs: 1.0
# llegs: 1.0
# avg: 0.985663082437276
# actor 2
# uarms: 0.8664596273291926
# larms: 0.8540372670807453
# ulegs: 0.9968944099378882
# llegs: 0.9968944099378882
# avg: 0.9285714285714286
# actor 1
# uarms: 0.8783783783783784
# larms: 0.4864864864864865
# ulegs: 1.0
# llegs: 0.9864864864864865
# avg: 0.8378378378378378
# avg*: 0.9173574496155141
# ~~~~~~~~~~~~~~~~~~~~~~~~~~`
# actor 0
# uarms: 0.9910394265232975
# larms: 0.9587813620071685
# ulegs: 1.0
# llegs: 1.0
# avg: 0.9874551971326164
# actor 2
# uarms: 0.8695652173913043
# larms: 0.8571428571428571
# ulegs: 1.0
# llegs: 1.0
# avg: 0.9316770186335404
# actor 1
# uarms: 0.8783783783783784
# larms: 0.4864864864864865
# ulegs: 1.0
# llegs: 0.9864864864864865
# avg: 0.8378378378378378
# avg*: 0.9189900178679982
# +
FRAME = 300
Im, Y, calib = shelf.get(root, FRAME)
pos3d = Pos3d[FRAME]
predictions = poses_per_frame[FRAME-300]
Humans = predictions
fig = plt.figure(figsize=(16,6))
colors = ['blue', 'red', 'green', 'orange', 'teal']
fig = plt.figure(figsize=(16, 8))
for idx, (im, pred) in enumerate(zip(Im, predictions)):
ax = fig.add_subplot(2, 3, idx+1); ax.axis('off')
ax.imshow(im, alpha=0.6)
for pid, human in enumerate(pred):
draw_mscoco_human2d(ax, human[:, 0:2], color=colors[pid], linewidth=3)
plt.show()
# +
H = []
for track in tracks:
h = track.get_by_frame(FRAME)
if h is not None:
H.append(h)
fig = plt.figure(figsize=(16,12))
for cid, cam in enumerate(calib):
ax = fig.add_subplot(2, 3, 1+cid)
ax.axis('off')
im = Im[cid]
h,w,_ = im.shape
ax.set_xlim([0, w])
ax.set_ylim([h, 0])
ax.imshow(im, alpha=0.6)
for gt in pos3d:
if gt is None:
continue
gt2d = cam.projectPoints(gt)
ax.scatter(gt2d[:, 0], gt2d[:, 1], color='white')
for pid, hyp in enumerate(H):
draw_mscoco_human(ax, hyp, cam, alpha=0.5,
color=colors[pid], linewidth=3)
plt.tight_layout()
plt.show()
# -
for pid in PER_GTID.keys():
frames = PER_GTID[pid]['frame']
try:
index = frames.index(FRAME)
print('pid', pid)
print('\tuarms', PER_GTID[pid]['uarms'][index])
print('\tlarms', PER_GTID[pid]['larms'][index])
print('\tulegs', PER_GTID[pid]['ulegs'][index])
print('\tllegs', PER_GTID[pid]['llegs'][index])
print('')
except ValueError:
print("frame " + str(FRAME) + " does not contain pid " + str(pid))
# # 3DPCK
# +
from mvpose.evaluation import pck3d
import numpy.linalg as la
def pck3devaluate(gt, d, scale_to_mm, maxdistance=150):
"""
percentage of correctly estimated parts.
This score only works on single-human estimations
and the 3d data must be transformed to fit the
KTH football2 data format (see {transform3d_from_mscoco})
:param gt: ground truth human
:param d: detection human
:param scale_to_mm: factor to convert coordinate units to millimetres
:param maxdistance: maximum joint distance in mm to count a joint as correct
:return:
"""
assert len(gt) == 14
result = np.zeros((14,), np.int64)
if d is not None:
assert len(d) == 14
for jid in range(14):
pt_gt = gt[jid]
pt_pr = d[jid]
if pt_pr is not None:
distance = la.norm(pt_gt - pt_pr) * scale_to_mm
if distance <= maxdistance:
result[jid] = 1
return np.mean(result)
def get_3DPCK_for_frame(pos3d, H):
total = []
must_find = 0
for gt in pos3d:
if gt is None:
continue
must_find += 1
H_pred = kth.transform3d_from_mscoco(H)
acc = [0]
for pred in H_pred:
acc.append(pck3devaluate(gt, pred, scale_to_mm=1000))
total.append(max(acc))
if len(total) > 0:
assert len(total) <= must_find
factor = len(total) / must_find
return np.mean(total) * factor  # penalize ground-truth persons that were not found
else:
if must_find > 0:
return 0
return -1 # nothing to find
# get_3DPCK_for_frame(pos3d, H)
TOTAL_3DPCK = []
for frame in valid_frames:
Humans = []
for track in tracks:
pose = track.get_by_frame(frame)
if pose is not None:
Humans.append(pose)
if frame % 12 == 0:
print("Humans " + str(frame), len(Humans))
Y = Pos3d[frame]
score = get_3DPCK_for_frame(Y, Humans)
TOTAL_3DPCK.append(score)
TOTAL_3DPCK_surv = [v for v in TOTAL_3DPCK if v > -0.1]
print("#frames with no gt:", len(TOTAL_3DPCK) - len(TOTAL_3DPCK_surv))
print(np.mean(TOTAL_3DPCK_surv))
# -
Pos3d[300]
# +
import motmetrics as mm
from mvpose.baseline.baseline import distance_between_poses
acc = mm.MOTAccumulator(auto_id=True)
scale_to_mm = 1000
MOTA_distance = 200 # [mm]
def cleanup(Humans):
Result = []
for human in Humans:
remove = True
for pt in human:
if pt is not None:
remove = False
break
if not remove:
Result.append(human)
return Result
possible_ids_lookup = {}
n_possible_ids = 0
for frame in valid_frames:
Y = Pos3d[frame]
Humans = []
gtids = []
trids = []
for tid, track in enumerate(tracks):
pose = track.get_by_frame(frame)
if pose is not None:
Humans.append(pose)
trids.append(tid)
Humans = cleanup(Humans)
for pid, _ in Y:
gtids.append(pid)
n = len(gtids)
m = len(trids)
distances = np.empty((n, m))
for i, gt in enumerate(Y):
for j, pr in enumerate(Humans):
try:
d = distance_between_poses(gt[1][:,0:3], pr, z_axis=1)
except Exception:
print("GT", gt)
print("Pr", pr)
raise
d = d * scale_to_mm
if d <= MOTA_distance:
distances[i, j] = d
# track possible identity switches
if (frame, i) in possible_ids_lookup:
possible_ids_lookup[frame, i] += 1
else:
possible_ids_lookup[frame, i] = 1
if possible_ids_lookup[frame, i] == 2:
n_possible_ids += 1
else:
distances[i, j] = np.nan
acc.update(gtids, trids, distances)
mh = mm.metrics.create()
summary = mh.compute(acc,
metrics=['num_frames', 'mota', 'motp'],
name='acc')
print(summary)
| evaluation/Baseline_Shelf_openpose.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Number representation in computers
# In this chapter we are going to give elements of understanding of how numbers are represented in today's computers. We won't go too much into the details (which are perhaps not so relevant here); however, it is extremely important for scientists to understand the pitfalls of the so-called "floating-point arithmetic". At the end of this lecture, you should understand what these pitfalls are and when they occur, since **sooner or later one of these pitfalls will hit you** (believe me on that one).
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Number-representation-in-computers" data-toc-modified-id="Number-representation-in-computers-12"><span class="toc-item-num">12 </span>Number representation in computers</a></span><ul class="toc-item"><li><span><a href="#Binary-representation-of-integers" data-toc-modified-id="Binary-representation-of-integers-12.1"><span class="toc-item-num">12.1 </span>Binary representation of integers</a></span><ul class="toc-item"><li><span><a href="#Basics" data-toc-modified-id="Basics-12.1.1"><span class="toc-item-num">12.1.1 </span>Basics</a></span></li><li><span><a href="#Overflow" data-toc-modified-id="Overflow-12.1.2"><span class="toc-item-num">12.1.2 </span>Overflow</a></span></li><li><span><a href="#Extending-the-binary-representation-to-non-integers:-fixed-point-notation" data-toc-modified-id="Extending-the-binary-representation-to-non-integers:-fixed-point-notation-12.1.3"><span class="toc-item-num">12.1.3 </span>Extending the binary representation to non-integers: fixed-point notation</a></span></li></ul></li><li><span><a href="#Floating-point-numbers" data-toc-modified-id="Floating-point-numbers-12.2"><span class="toc-item-num">12.2 </span>Floating point numbers</a></span><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-12.2.1"><span class="toc-item-num">12.2.1 </span>Introduction</a></span></li><li><span><a href="#Scientific-(exponential)-notation-in-base-b" data-toc-modified-id="Scientific-(exponential)-notation-in-base-b-12.2.2"><span class="toc-item-num">12.2.2 </span>Scientific (exponential) notation in base b</a></span></li><li><span><a href="#Floating-point-representation-(simplified)" data-toc-modified-id="Floating-point-representation-(simplified)-12.2.3"><span class="toc-item-num">12.2.3 </span>Floating point representation (simplified)</a></span></li><li><span><a href="#IEEE-754" data-toc-modified-id="IEEE-754-12.2.4"><span class="toc-item-num">12.2.4 
</span>IEEE 754</a></span></li><li><span><a href="#Consequences" data-toc-modified-id="Consequences-12.2.5"><span class="toc-item-num">12.2.5 </span>Consequences</a></span></li><li><span><a href="#Overflow" data-toc-modified-id="Overflow-12.2.6"><span class="toc-item-num">12.2.6 </span>Overflow</a></span></li></ul></li><li><span><a href="#What-to-do-from-here?" data-toc-modified-id="What-to-do-from-here?-12.3"><span class="toc-item-num">12.3 </span>What to do from here?</a></span><ul class="toc-item"><li><span><a href="#Alternative-data-types" data-toc-modified-id="Alternative-data-types-12.3.1"><span class="toc-item-num">12.3.1 </span>Alternative data types</a></span></li><li><span><a href="#Symbolic-computations" data-toc-modified-id="Symbolic-computations-12.3.2"><span class="toc-item-num">12.3.2 </span>Symbolic computations</a></span></li><li><span><a href="#Deal-with-it" data-toc-modified-id="Deal-with-it-12.3.3"><span class="toc-item-num">12.3.3 </span>Deal with it</a></span><ul class="toc-item"><li><span><a href="#Awareness" data-toc-modified-id="Awareness-12.3.3.1"><span class="toc-item-num">12.3.3.1 </span>Awareness</a></span></li><li><span><a href="#Error-propagation" data-toc-modified-id="Error-propagation-12.3.3.2"><span class="toc-item-num">12.3.3.2 </span>Error propagation</a></span></li><li><span><a href="#Safer-tests" data-toc-modified-id="Safer-tests-12.3.3.3"><span class="toc-item-num">12.3.3.3 </span>Safer tests</a></span></li></ul></li></ul></li><li><span><a href="#Take-home-points" data-toc-modified-id="Take-home-points-12.4"><span class="toc-item-num">12.4 </span>Take home points</a></span></li><li><span><a href="#Further-reading" data-toc-modified-id="Further-reading-12.5"><span class="toc-item-num">12.5 </span>Further reading</a></span></li><li><span><a href="#What's-next?" 
data-toc-modified-id="What's-next?-12.6"><span class="toc-item-num">12.6 </span>What's next?</a></span></li><li><span><a href="#License" data-toc-modified-id="License-12.7"><span class="toc-item-num">12.7 </span>License</a></span></li></ul></li></ul></div>
# -
# ## Binary representation of integers
# Traditional arithmetic and mathematics rely on the **decimal** system. Every number can be decomposed into a sum of products such as: $908.2 = 9\times10^2 + 0\times10^1 + 8\times10^{0} + 2\times10^{-1}$. This system is called decimal because the factors can take the values $[0 - 9]$ while the **fixed base** (the number which is raised to the power of an exponent) is 10.
# Nearly all modern processors however represent numbers in binary form, with each digit being represented by a two-valued physical quantity such as a "low" or "high" voltage: 0s and 1s. These binary digits are called **bits**, and are the basis of the **binary representation of numbers**.
# ### Basics
# With only 0s and 1s available, the most efficient way to represent numbers is as a sum of powers of 2, $2^i$, with i going from 0 to N-1. It is best shown with an example, here with N=4 digits:
#
# 0101
#
# To convert our number to a decimal representation we compute $0\times2^3 + 1\times2^2 + 0\times2^1 + 1\times2^0 = 5$. In this convention, the leftmost digit is called the **most significant bit** and the rightmost the **least significant bit**. If all the elements from the left are zeros, the convention is to omit them when writing. This is for example what the built-in [bin](https://docs.python.org/3/library/functions.html#bin) function chooses to do:
bin(5)
# which converts an integer number to a binary string prefixed with `0b`. Now it appears quite clear that the number of different integers a computer can represent with this system depends on the size N of the binary sequence. A *positive* integer represented with a **byte** (a group of 8 bits) can thus be as large as:
sum([2**i for i in range(8)]) # do you understand what this line does?
# but not larger. Unless specified otherwise, the first bit is often used to give the *sign* of the integer it represents. Therefore, the actual range of numbers that a byte can represent is $[-2^7; 2^7-1]= [-128; 127]$ (the reason for this asymmetry is a matter of definition, as we will see later). If you are sure that you only want to do arithmetic with positive numbers, you can spare this one bit and specify your binary number as being of **unsigned integer type**.
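These ranges can be checked directly with NumPy's `iinfo` helper (a quick sketch, assuming NumPy is installed):

```python
import numpy as np

# Signed 8-bit integer: one sign bit + 7 value bits
print(np.iinfo(np.int8).min, np.iinfo(np.int8).max)    # -128 127
# Unsigned 8-bit integer: all 8 bits available for the value
print(np.iinfo(np.uint8).min, np.iinfo(np.uint8).max)  # 0 255
```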
# So how many bits does our computer use to represent integers? Well, it depends on the platform and programming language you are using. Many older languages (including old python versions) made the distinction between short (32 bits) and long (64 bits) integers.
#
# <img src="../img/logo_ex.png" align="left" style="width:1em; height:1em;"> **Exercise**: what is the largest number that a 64-bit integer can represent? The smallest? And a 32-bit unsigned integer?
# Now, what is the default length of binary integers in python 3? Let's try to find out:
from sys import getsizeof
a = 10
getsizeof(a)
# So 28 bytes? That's a lot of memory for the number 10. This is because the [getsizeof](https://docs.python.org/3/library/sys.html#sys.getsizeof) function returns the **memory consumption** of the *object* `int(10)`. What does that mean? Well, in python numbers are not only numbers, they are also "things" (**objects**). And these things come with services, like for example the ``bit_length`` method:
a.bit_length()
# <img src="../img/logo_ex.png" align="left" style="width:1em; height:1em;"> **Exercise**: what does `bit_length()` do? What is the bit length of 5? of 127?
# These services have a memory cost (an "overhead"), and are required no matter how big our number is:
size_int = getsizeof(int())
size_int # size of an empty integer object
def get_int_bitsize(integer):
"""Returns the actual memory consumption of an integer (in bits) without the object overhead."""
return (getsizeof(integer) - getsizeof(int())) * 8
get_int_bitsize(2)
# Ha! This looks more reasonable. So python uses 32 bits to store integers? But then, is the largest number it can manipulate rather limited? Let's see if we can create a number larger than `2**32-1`:
12**68
get_int_bitsize(2**18), get_int_bitsize(2**68), get_int_bitsize(2**100000)
# As shown above, it turns out that modern python versions have **no limitations** on the size of integers (other than the total memory available on your computer). The memory slot used to store your number simply depends on how large it is.
get_int_bitsize(2**100000) / 8 / 1024
# So the ``2**100000`` number requires about 13 KB (**Kilobytes**) of memory.
#
# <img src="../img/logo_ex.png" align="left" style="width:1em; height:1em;"> **Exercise**: print ``2**100000`` on screen, "just for fun".
# ### Overflow
# This dynamic resizing of integers in python means that they cannot **overflow**. **Overflow** is a common pitfall of many numerical operations, and to illustrate what it is we can either use floats (unlike python integers, python floats can overflow), or numpy, which uses integer types of fixed length:
import numpy as np
a = np.int8(127)
a.dtype
a
a + np.int8(1)
# What happened here? To understand this we need to understand how binary numbers are added together first. Please read the [addition chapter](https://en.wikipedia.org/wiki/Binary_number#Addition) of the Wikipedia article on binary numbers before going on.
# Basically, we added 1 (binary `00000001`) to 127 (binary `01111111`) which gives us the binary number `10000000`, i.e. -128 in [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement) representation, which is the most common method of representing signed integers on computers. At least, python warned us that we are doing something wrong here. But be aware that this is not always the case:
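The two's complement interpretation can be sketched by hand in a few lines (a toy helper for illustration, not how the hardware actually works):

```python
def int8_from_bits(pattern):
    """Interpret an 8-bit string as a two's complement signed integer."""
    value = int(pattern, 2)
    # If the most significant bit is set, the pattern encodes a negative number
    return value - 256 if value >= 128 else value

print(int8_from_bits('01111111'))  # 127
print(int8_from_bits('10000000'))  # -128: adding 1 to 127 wraps around
```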
np.int32(2147483648)
a = np.array([18])
a**a
# These are examples of **silent** overflows. Silent means that they do not warn you about the probable mistake: the overflow can happen in your code without you noticing.
# ### Extending the binary representation to non-integers: fixed-point notation
# A more general definition of the binary representation for integers is to use negative exponents as well. With negative exponents, any rational number a can be approximated by:
#
# $$a = \pm \sum_{i=-j}^{k} z_i b^i$$ with $b > 1$ (base) and $0 \le z_i \le b-1$, $j$ and $k$ all positive integers. The precision depends on the size of $j$ while the maximum range depends on the size of $k$.
# In the notation, a fixed point separates digits of positive powers of the base, from those of negative powers. In **base 2** (binary), the number $10000.1_2$ is equal to $16.5_{10}$ in base 10.
#
# Indeed:
#
# $$1\times2^4 + 0\times2^3 + 0\times2^2 + 0\times2^1 + 0\times2^0 + 1\times2^{-1} = 16.5_{10}$$
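The conversion rule can be made explicit with a small helper (a sketch for illustration only):

```python
def fixed_point_to_decimal(s):
    """Convert a base-2 fixed-point string such as '10000.1' to a float."""
    integer_part, _, fractional_part = s.partition('.')
    value = int(integer_part, 2) if integer_part else 0
    # Digits after the point carry negative powers of 2
    for i, digit in enumerate(fractional_part, start=1):
        value += int(digit) * 2 ** -i
    return value

print(fixed_point_to_decimal('10000.1'))  # 16.5
```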
# Although this representation is convenient, the representable value range is heavily limited by the number of bits available to represent the number (see this week's assignments). Therefore, most computers today are relying on the **floating point notation** to represent real numbers.
# ## Floating point numbers
# ### Introduction
# Because computers have a *finite* number of storage units available, they can only represent a *finite* number of distinguishable values. In fact, a memory slot with $N$ available bits cannot represent more than $2^N$ distinguishable values. The range of real (or complex) numbers is of course infinite, and therefore it becomes clear that in the computer representation of numbers **there will always be a trade-off between the range of numbers one would like to represent and their relative accuracy** (i.e. the absolute difference between two consecutive representable numbers).
#
# Taking the **decimal representation** of the number 1/3 as an example: it can be written as ``0.33``, ``0.333``, ``0.3333``, etc. Depending on the numbers of digits available, the precision of the number will increase but never reach the exact value, at least in the decimal representation.
#
# This fundamental limitation is the explanation for unexpected results of certain arithmetic operations. For example:
0.1 + 0.1 # so far so good
0.1 + 0.2 # wtf?
# This is a typical **rounding error**, happening because most computers do not represent numbers as decimal fractions, but in binary. Without going too much into the details (which can be tricky), this chapter will give you some elements of understanding in order to prepare you for the most common pitfalls of floating-point arithmetic.
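A direct consequence: floating-point numbers should not be compared with `==`. The standard library's `math.isclose` performs a tolerance-based comparison instead:

```python
import math

print(0.1 + 0.2 == 0.3)              # False, because of the rounding error above
print(math.isclose(0.1 + 0.2, 0.3))  # True: equal within a relative tolerance
```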
# ### Scientific (exponential) notation in base b
# In the exponential notation (used by [floating point](https://en.wikipedia.org/wiki/Floating-point_arithmetic) numbers), a number is approximated with a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:
#
# $\mathrm{number} = \mathrm{significand} \times \mathrm{base} ^{\mathrm{exponent}}$
# For example, in base 10:
#
# $$1.234 = 1234 \times 10^{-3}$$
# The number ``1.234`` can easily be represented exactly in base 10, but the number 1/3 cannot. However, in base 3 (which is just used here as an example) 1/3 can be represented exactly by $1 \times 3^{-1}$.
#
# To approximate a number in base 10, the rule of "the more bits you have, the more precise you are" continues to hold true: $33 \times 10^{-2}$ and $33333333333333 \times 10^{-14}$ are two ways to approximate the number 1/3, the latter being more expensive in terms of storage requirements but more accurate.
# The exponential notation is the most common way to represent real numbers in computers and is the basis of the floating-point number representation.
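# Python's standard library lets you inspect this decomposition directly: `math.frexp` returns the significand and base-2 exponent of any float, and `math.ldexp` is its exact inverse. A small sketch:

```python
import math

# frexp decomposes a float into significand * 2**exponent,
# with the significand normalized into [0.5, 1)
m, e = math.frexp(1.234)
print(m, e)   # significand in [0.5, 1), integer exponent

# ldexp is the inverse operation and reconstructs the number exactly
assert math.ldexp(m, e) == 1.234
```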
# ### Floating point representation (simplified)
# A floating point number in any base will store three numbers:
# - the sign (one bit)
# - the significand ($N_s$ bits)
# - the exponent ($N_e$ bits)
#
# The numbers $N_s$ and $N_e$ are usually fixed beforehand, in the format specification. The base also needs to be specified of course: computers usually work in base 2, but other bases have been experimented with as well (e.g. base 16, hexadecimal). Now remember:
# - **the significand determines the precision of the representation** (significant digits)
# - **the exponent determines the magnitude of the represented number**
#
# Let's work through an example to illustrate this concept. We will work in base 10 for simplicity, and assume that we have 8 "memory slots" (the equivalent of bits) available, each memory slot capable of storing a number from 0 to 9 or the sign (+ or -). We attribute $N_s=6$ slots to the significand (including its sign) and $N_e=2$ slots to the exponent (including its sign).
#
# Now, what is the smallest positive number we can represent? And the biggest? Let's try it:
# - smallest: $+00001 \times 10^{-9}$
# - second smallest: $+00002 \times 10^{-9}$
# - biggest: $+99999 \times 10^{+9}$
# - second biggest: $+99998 \times 10^{+9}$
# From these examples it becomes apparent that the precision (the distance between two consecutive numbers) is better for small numbers than for large numbers. Although our example is simplified, the principle is exactly the same for "real" floating point numbers in most computers, which follow the IEEE754 convention.
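# The toy format above can be written down in a couple of lines, which makes the precision trade-off explicit (a sketch of the simplified base-10 format, not of a real machine format):

```python
# Toy decimal format: 5 significand digits, one-digit exponent in [-9, +9]
smallest = 1 * 10.0 ** -9        # +00001 x 10^-9
biggest = 99999 * 10.0 ** 9      # +99999 x 10^+9

# spacing between two consecutive representable numbers at each end
spacing_small = 1 * 10.0 ** -9   # distance from smallest to second smallest
spacing_big = 1 * 10.0 ** 9      # distance from second biggest to biggest
print(spacing_big / spacing_small)  # the gap is ~10^18 times larger
```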
# ### IEEE 754
# From the example above, we can see that with a fixed number of memory slots, we have a trade-off between the maximum precision of a number and its size.
#
# This precision/size trade-off raises many challenges: memory and electronics must be optimized so that computations are fast while still allowing programs to operate on a *wide* range of numbers within the same program. For example, atmospheric models work with specific humidity values ($\mathrm{kg}\,\mathrm{kg}^{-1}$) of typical magnitude 10$^{-5}$ and with geopotential values several orders of magnitude larger. Using different standards for each variable would be impractical. This led to the development of the [IEEE Standard for Floating-Point Arithmetic (IEEE 754)](https://en.wikipedia.org/wiki/IEEE_754).
#
# The standard defines five basic formats that are named for their numeric base and the number of bits used in their encoding, as listed in [this table](https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats). For example, ``binary64`` is the famous "double precision" format, called ``float64`` in numpy and simply ``float`` in the python standard library. In this format, one bit stores the sign, $N_e=11$ bits store the exponent, and the remaining 52 bits store the significand (an implicit leading bit brings the precision to $N_s=53$ significant bits).
# It is possible to compute the approximate precision of IEEE754 floating point numbers according to their value (see also the exercises):
# 
# Source: [wikipedia](https://en.wikipedia.org/wiki/IEEE_754)
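# The growth of the gap between consecutive floats can be inspected directly with ``numpy.spacing``, which returns the distance from a value to the next representable ``float64``:

```python
import numpy as np

# spacing(1.0) is the machine epsilon of float64
print(np.spacing(1.0))    # about 2.2e-16
# at 1e15 the gap between consecutive floats is already 0.125
print(np.spacing(1e15))
```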
# ### Consequences
# With the floating point format, **small numbers have a larger absolute precision than large numbers**. See this example:
.99 == .98 # so far so good
999999999999999.99 == 999999999999999.98 # wtf?
999999999999999.99 # wtf?
# A further direct consequence is that **when summing two numbers, precision is lost to match the size of the outcome**.
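# A float64 carries roughly 16 significant decimal digits, so adding a small number to a much larger one can silently discard it entirely:

```python
# at 1e16 the gap between consecutive float64 values is 2.0,
# so adding 1 falls below the representable precision
print(1e16 + 1 == 1e16)   # True
# at 1e15 the gap is only 0.125, so the addition is still visible
print(1e15 + 1 == 1e15)   # False
```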
# ### Overflow
# Like numpy integers, floating point numbers can overflow:
np.float16(65510) + np.float16(20)
# Fortunately, the IEEE 754 standard defines two new numbers (-inf and +inf) which are more informative (and less dangerous) than the negative reset of overflowing integer numbers. IEEE 754 also defines "Not A Number" (abbreviated NaN), which propagates through computations:
np.nan * 10
# ## What to do from here?
# As we've learned, errors in floating-point numbers are unavoidable. Even if these errors are very small, simple calculations on approximated numbers can contain pitfalls that increase the error in the result way beyond just having the individual errors "add up". Here we discuss some possible ways to deal with the pitfalls of floating point arithmetic.
# ### Alternative data types
# In certain cases where perfect decimal accuracy is needed (for example when dealing with currencies and money), it is possible to use a decimal floating point representation instead of the default binary one:
1/10*3 # not precise
from decimal import Decimal
Decimal(1) / Decimal(10) * 3 # precise
# With limited-precision decimals there are no unexpected rounding errors. In practice, however, such alternative datatypes are used rarely because the precision gain comes with a performance overhead: computers work best with 0s and 1s, not with numbers humans can read.
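# A classic illustration with currency-like values (a small sketch):

```python
from decimal import Decimal

# binary floats cannot represent most decimal fractions exactly...
print(0.10 + 0.20 == 0.30)   # False
# ...whereas Decimal stores decimal digits exactly
print(Decimal('0.10') + Decimal('0.20') == Decimal('0.30'))   # True
```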
# ### Symbolic computations
# Symbolic computations are realized *literally* (like in mathematics), not approximately. [SymPy](http://www.sympy.org) is a popular python library for symbolic mathematics:
import sympy
a = sympy.sympify('1 / 3')
a + a
# Seems like the perfect solution, right? It probably is if you are a mathematician, but for actual numerical computations SymPy will be way too slow to use. Symbolic mathematics can only be used for problems where analytical solutions are known. Unfortunately, this is not always the case (take numerical models of the atmosphere for example).
# ### Deal with it
# There are no simple answers to numerical rounding errors. Therefore: **be aware that they occur and try to mitigate their effect**.
# #### Awareness
# *Awareness* is mostly hindered by the string representation of floating point numbers. In practice:
0.1
format(0.1, '.16g') # give 16 significant digits
format(0.1, '.30g') # give 30 significant digits
# The default `0.1` print is therefore a "lie", but it is a useful one: in most cases you don't want to know about these insignificant digits at the end. The [numpy.finfo](https://docs.scipy.org/doc/numpy/reference/generated/numpy.finfo.html) function is useful to inform you about the machine limits for floating point types:
info = np.finfo(np.float64)
info.bits, info.precision, info.max
# #### Error propagation
# Preventing rounding errors from happening is not possible, but there are a few general rules:
# - Multiplication and division are "safer" operations
# - Addition and subtraction are dangerous, because when numbers of different magnitudes are involved, digits of the smaller-magnitude number are lost.
# - This loss of digits can be inevitable and benign (when the lost digits are also insignificant for the final result) or catastrophic (when the loss is magnified and distorts the result strongly).
# - The more calculations are done (especially when they form an iterative algorithm) the more important it is to consider this kind of problem.
# - A method of calculation can be stable (meaning that it tends to reduce rounding errors) or unstable (meaning that rounding errors are magnified). Very often, there are both stable and unstable solutions for a problem.
#
# (list taken from [the floating point guide](http://floating-point-gui.de/errors/propagation/))
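# A well-known example of a *stable* method is **Kahan (compensated) summation**, which is not part of this lecture but illustrates the last point above: a small correction term recovers the low-order digits lost by each addition. A minimal sketch:

```python
def kahan_sum(values):
    """Compensated summation: tracks the low-order bits lost at each step."""
    total = 0.0
    compensation = 0.0   # running estimate of the accumulated rounding error
    for x in values:
        y = x - compensation
        t = total + y                     # low-order digits of y are lost here...
        compensation = (t - total) - y    # ...and recovered here
        total = t
    return total

values = [0.1] * 10
print(sum(values))        # naive sum drifts: 0.9999999999999999
print(kahan_sum(values))  # compensated sum gives exactly 1.0
```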
# As illustration for the difference between addition and multiplication, see the following example:
a = 10 * 0.1
b = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
a == b
a, b
# Realizing **safer computations** therefore involves asking yourself at which stage the most precision is going to be lost: this is most often the case when adding numbers of very different magnitudes. When building numerical models, this should always be something you consider: if a formula leads to dangerous additions, reformulate it and/or use other units for your variables (e.g. $\mathrm{g}\,\mathrm{kg}^{-1}$ instead of $\mathrm{kg}\,\mathrm{kg}^{-1}$ for specific humidity). Consider the following example:
a = 1
b = 1e-16
c = 10
c1 = c * (a + b)
c2 = c * a + c * b
c1 == c2
# ``c * (a + b)`` and ``c * a + c * b`` are mathematically equivalent, and the first is computationally less expensive (two operations instead of three). However, the second is less prone to rounding errors!
# #### Safer tests
# Fortunately, rounding errors often remain unnoticed, meaning that your computations are probably OK! In our field in particular, we often do not care if the post-processed temperature forecast is numerically precise at 0.001° since the forecast accuracy is much lower anyway. However, this can still lead to surprises when comparing arrays for equality (e.g. when testing that a temperature is equal to zero, or for matching coordinates like longitude or latitude). In these cases, prefer to use numpy's specialized functions:
np.isclose(c1, c2) # OK if you don't care about small numerical errors
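# ``np.isclose(a, b)`` checks ``|a - b| <= atol + rtol * |b|``; the default tolerances can be tightened when your application needs it:

```python
import numpy as np

print(np.isclose(1.0, 1.0 + 1e-9))   # True with the default tolerances
print(np.isclose(1.0, 1.1))          # False
# stricter tolerances reject the same small difference
print(np.isclose(1.0, 1.0 + 1e-9, rtol=1e-12, atol=0.0))   # False
```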
# ## Take home points
# - computers can only represent a finite number of distinguishable values.
# - the range of representable numbers depends on the size of memory allocated to store it. There is practically no limit to the size of integers in python, but there is a limit for floats. Numpy implements several types of variables named after their size in bits (e.g. ``np.float32``, ``np.float64``, ``np.float128``).
# - there are many different ways to represent numbers on computers, all with different strengths and weaknesses. The vast majority of systems use the [IEEE 754](https://en.wikipedia.org/wiki/IEEE_754) standard for floating points, which is a good compromise between range and accuracy. Most systems are binary (base 2) per default, but there are other bases available: base 10 (decimal) and base 16 (hexadecimal) are frequent as well.
# - rounding errors happen because of these limitations. They always happen, even for the simplest arithmetics, and you **shall not ignore them**.
# - the fact that a number is printed "correctly" on screen does not mean that its internal binary representation is perfect. In fact, it is statistically much more probable (*infinitely* more probable) that a number is not represented exactly by the floating-point format.
# - however, there are ways to mitigate the impact of these rounding errors. This includes the use of more precise datatypes (e.g. float64 instead of float32), alternative representations (decimal instead of binary), and the use of more conservative operations (e.g. multiplication before addition when dealing with numbers of different magnitude).
# - floating point errors have dramatic consequences in chaotic systems. A scary example is given in [this paper](https://journals.ametsoc.org/doi/pdf/10.1175/MWR-D-12-00352.1) about the influence of floating point computations on numerical weather forecasts.
# ## Further reading
# Because of the importance of floating point computations you will find many resources online. I **highly recommend** to go through the short [floating point guide](http://floating-point-gui.de/) website, which explains the problem to non specialists. It will give you another point of view on the topic.
#
# Other resources:
# - [Python's documentation](https://docs.python.org/3/tutorial/floatingpoint.html) on floating point arithmetic
# - [What Every Computer Scientist Should Know About Floating-Point Arithmetic](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html): kind of heavy, but a reference text
# ## What's next?
# Back to the [table of contents](00-Introduction.ipynb#ctoc), or [jump to this week's assignment](13-Assignment-04.ipynb).
# ## License
# <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank">
# <img align="left" src="https://mirrors.creativecommons.org/presskit/buttons/88x31/svg/by.svg"/>
# </a>
| notebooks/12-Numbers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Starbucks Capstone Challenge
#
# ### Introduction
#
# This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
#
# Not all users receive the same offer, and that is the challenge to solve with this data set.
#
# Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
#
# Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
#
# You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
#
# Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer.
#
# ### Example
#
# To give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.
#
# However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer.
#
# ### Cleaning
#
# This makes data cleaning especially important and tricky.
#
# You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers.
#
# ### Final Advice
#
# Because this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A).
# # Data Sets
#
# The data is contained in three files:
#
# * portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
# * profile.json - demographic data for each customer
# * transcript.json - records for transactions, offers received, offers viewed, and offers completed
#
# Here is the schema and explanation of each variable in the files:
#
# **portfolio.json**
# * id (string) - offer id
# * offer_type (string) - type of offer ie BOGO, discount, informational
# * difficulty (int) - minimum required spend to complete an offer
# * reward (int) - reward given for completing an offer
# * duration (int) - time for offer to be open, in days
# * channels (list of strings)
#
# **profile.json**
# * age (int) - age of the customer
# * became_member_on (int) - date when customer created an app account
# * gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
# * id (str) - customer id
# * income (float) - customer's income
#
# **transcript.json**
# * event (str) - record description (ie transaction, offer received, offer viewed, etc.)
# * person (str) - customer id
# * time (int) - time in hours since start of test. The data begins at time t=0
# * value - (dict of strings) - either an offer id or transaction amount depending on the record
#
# **Note:** If you are using the workspace, you will need to go to the terminal and run the command `conda update pandas` before reading in the files. This is because the version of pandas in the workspace cannot read in the transcript.json file correctly, but the newest version of pandas can. You can access the terminal from the orange icon in the top left of this notebook.
#
# You can see how to access the terminal and how the install works using the two images below. First you need to access the terminal:
#
# <img src="pic1.png"/>
#
# Then you will want to run the above command:
#
# <img src="pic2.png"/>
#
# Finally, when you enter back into the notebook (use the jupyter icon again), you should be able to run the below cell without any errors.
# +
import pandas as pd
import numpy as np
import math
import json
import matplotlib.pyplot as plt
import datetime
import seaborn as sns
# %matplotlib inline
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
# -
# # Part 1: Data Analysis and Cleansing
# ### Part 1.1 Portfolio Dataset Feature Engineering
portfolio.head()
# + active=""
# portfolio.describe()
# portfolio.dtypes
# -
portfolio['channels'].value_counts()
for value in ['email', 'web', 'mobile', 'social']:
portfolio[value] = portfolio['channels'].apply(lambda x: 1 if value in x else 0)
portfolio.drop('channels', axis=1, inplace=True)
portfolio.head()
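# The same one-hot encoding can also be written without an explicit loop, e.g. by exploding the lists of channels and pivoting back (a sketch on a toy frame, not the actual portfolio data):

```python
import pandas as pd

# toy frame mimicking the 'channels' column of lists
toy = pd.DataFrame({'channels': [['email', 'web'], ['email', 'mobile', 'social']]})
# explode the lists to one row per channel, one-hot encode, collapse back per row
dummies = pd.get_dummies(toy['channels'].explode()).groupby(level=0).max()
print(dummies)
```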
portfolio['offer_type'].unique()
portfolio['offer_type'].hist()
# ### Part 1.2 Profile Dataset Feature Engineering
profile.head()
profile['year'] = profile.became_member_on.apply(lambda x: int(str(x)[:4]))
profile['month'] = profile.became_member_on.apply(lambda x: int(str(x)[4:6]))
profile['day'] = profile.became_member_on.apply(lambda x: int(str(x)[6:]))
profile['date'] = profile.became_member_on.apply(lambda x: datetime.datetime.strptime(str(x), '%Y%m%d'))
profile.drop('became_member_on', axis = 1, inplace = True)
profile.shape
profile['date'].hist()
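# The four per-row `apply` calls above also have a vectorized equivalent: a single `pd.to_datetime` parse, with year/month/day available through the `.dt` accessor (a sketch on toy values, since `became_member_on` was already dropped above):

```python
import pandas as pd

toy_dates = pd.Series([20170512, 20180326])           # toy became_member_on values
parsed = pd.to_datetime(toy_dates, format='%Y%m%d')   # one vectorized parse
print(parsed.dt.year.tolist())    # [2017, 2018]
print(parsed.dt.month.tolist())   # [5, 3]
print(parsed.dt.day.tolist())     # [12, 26]
```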
# ### Part 1.3 Transcript Dataset Feature Engineering
transcript.head()
transcript['type'] = transcript['value'].apply(lambda x: list(x.keys())[0])
transcript['value'] = transcript['value'].apply(lambda x: list(x.values())[0])
transcript.head()
transcript.type.value_counts()
# ### Part 1.4 Exploring the Data
#
profile.date.value_counts().plot(kind = 'line', figsize = (10,10))
plt.xlabel('Date', fontsize = 12)
plt.ylabel('Number of Sign Ups', fontsize = 12)
plt.title('Number of Sign Ups Each Day');
membership_subs = profile[profile['year'] >= 2014].groupby(['year','month'], as_index=False).agg({'id':'count'})
plt.figure(figsize=(15,8))
sns.pointplot(x="month", y="id", hue="year", data = membership_subs)
plt.ylabel('Customer Subsciptions', fontsize = 12)
plt.xlabel('Month', fontsize = 12)
plt.title('Customer Subsciptions by Month and Year');
| Starbucks_Capstone_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A common interface for handling tabular data
#
# As we've seen in the FITS tutorial, the [astropy.io.fits](http://docs.astropy.org/en/stable/io/fits/index.html) sub-package can be used to access FITS tables. In addition, as we will see in the next tutorial, there is functionality in [astropy.io.votable](http://docs.astropy.org/en/stable/io/votable/index.html) and [astropy.io.ascii](http://docs.astropy.org/en/stable/io/ascii/index.html) to read in VO and ASCII tables. However, while these sub-packages have user interfaces that are specific to each kind of file, it can be difficult to remember all of them. Therefore, astropy includes a higher level interface in [astropy.table](http://docs.astropy.org/en/stable/table/index.html) which can be used to access tables in many different formats in a similar way.
#
# <section class="objectives panel panel-warning">
# <div class="panel-heading">
# <h2><span class="fa fa-certificate"></span> Objectives</h2>
# </div>
#
#
# <div class="panel-body">
#
# <ul>
# <li>Create tables</li>
# <li>Access data in tables</li>
# <li>Combining tables</li>
# <li>Using high-level objects as columns</li>
# <li>Aggregation</li>
# <li>Masking</li>
# <li>Reading/writing</li>
# </ul>
#
# </div>
#
# </section>
#
# ## Documentation
#
# This notebook only shows a subset of the functionality in astropy.table. For more information about the features presented below as well as other available features, you can read the
# [astropy.table documentation](https://docs.astropy.org/en/stable/table/).
# %matplotlib inline
import matplotlib.pyplot as plt
plt.rc('image', origin='lower')
plt.rc('figure', figsize=(10, 6))
# ## Creating tables
#
# The main class we will use here is called ``Table``:
# Before we look at how to read and write tables, let's first see how to create a table from scratch:
# We can look at the table with:
# We can add columns:
# Access the values in a column:
# Convert the column to a Numpy array:
# Access individual cells:
# And access rows:
# ## Units in tables
#
# Table columns can include units:
# Some unitful operations will then work:
# However, you may run into unexpected behavior, so if you are planning on using table columns as Quantities, we recommend that you use the ``QTable`` class:
#
# <section class="challenge panel panel-success">
# <div class="panel-heading">
# <h2><span class="fa fa-pencil"></span> Challenge</h2>
# </div>
#
#
# <div class="panel-body">
#
# <ol>
# <li>Make a table that contains three columns: <code>spectral type</code>, <code>temperature</code>, and <code>radius</code>, and incude 5 rows with fake data (or real data if you like, for example from <a href="http://www.atlasoftheuniverse.com/startype.html">here</a>). Try including units on the columns that can have them.</li>
# <li>Find the mean temperature and the maximum radius</li>
# <li>Try and find out how to add and remove rows</li>
# <li>Add a new column which gives the luminosity (using $L=4\pi R^2 \sigma T^4$)</li>
# </ol>
#
# </div>
#
# </section>
#
# ## Iterating over tables
# It is possible to iterate over rows or over columns. To iterate over rows, iterate over the table itself:
# Rows can act like dictionaries, so you can access specific columns from a row:
# Iterating over columns is also easy:
# Accessing specific rows from a column object can be done with the item notation:
# ## Joining tables
#
# The astropy.table sub-package provides a few useful functions for stacking/combining tables. For example, we can do a 'join':
t2 = Table()
t2['name'] = ['source 1', 'source 3']
t2['flux2'] = [1, 9]
# ## Masked tables
#
# It is possible to mask individual cells in tables:
# ## Using high-level objects as columns
#
# A few specific astropy high-level objects can be used as columns in table - this includes SkyCoord and Time:
# Note however that you may not necessarily be able to write this table to a file and get it back intact, since being able to store this kind of information is not possible in all file formats.
# ## Slicing
#
# Tables can be sliced like Numpy arrays:
obs = Table(rows=[('M31' , '2012-01-02', 17.0, 17.5),
('M31' , '2012-01-02', 17.1, 17.4),
('M101', '2012-01-02', 15.1, 13.5),
('M82' , '2012-02-14', 16.2, 14.5),
('M31' , '2012-02-14', 16.9, 17.3),
('M82' , '2012-02-14', 15.2, 15.5),
('M101', '2012-02-14', 15.0, 13.6),
('M82' , '2012-03-26', 15.7, 16.5),
('M101', '2012-03-26', 15.1, 13.5),
('M101', '2012-03-26', 14.8, 14.3)],
names=['name', 'obs_date', 'mag_b', 'mag_v'])
#
# <section class="challenge panel panel-success">
# <div class="panel-heading">
# <h2><span class="fa fa-pencil"></span> Challenge</h2>
# </div>
#
#
# <div class="panel-body">
#
# <p>Starting from the <code>obs</code> table:</p>
# <ol>
# <li>Make a new table that shows every other row, starting with the second row (that is, the second, fourth, sixth, etc. rows).</li>
# <li>Make a new table that only contains rows where <code>name</code> is <code>M31</code></li>
# </ol>
#
# </div>
#
# </section>
#
# ## Grouping and Aggregation
# It is possible to aggregate rows of a table together - for example, to group the rows by source name in the ``obs`` table, you can do:
# This is not just sorting the values but actually making it possible to access each group of rows:
# We can then aggregate the rows together in each group using a function:
# ## Writing data
#
# To write out the data, we can use the ``write`` method:
# In some cases the format will be inferred from the extension, but only in unambiguous cases - otherwise the format has to be specified explicitly:
# You can find the [list of supported formats](https://docs.astropy.org/en/stable/io/unified.html#built-in-table-readers-writers) in the documentation.
# ## Reading data
#
# You can also easily read in tables using the ``read`` method:
#
# <section class="challenge panel panel-success">
# <div class="panel-heading">
# <h2><span class="fa fa-pencil"></span> Challenge</h2>
# </div>
#
#
# <div class="panel-body">
#
# <p>Using the <code>t6</code> table above:</p>
# <ol>
# <li>
# <p>Make a plot that shows <code>j_m</code>-<code>h_m</code> on the x-axis, and <code>h_m</code>-<code>k_m</code> on the y-axis</p>
# </li>
# <li>
# <p>Make a new table that contains the subset of rows where the <code>j_snr</code>, <code>h_snr</code>, and <code>k_snr</code> columns, which give the signal-to-noise-ratio in the J, H, and K band, are greater than 10, and try and show these points in red in the plot you just made.</p>
# </li>
# <li>
# <p>Make a new table (based on the full table) that contains only the RA, Dec, and the <code>j_m</code>, <code>h_m</code> and <code>k_m</code> columns, then try and write out this catalog into a format that you can read into another software package. For example, try and write out the catalog into CSV format, then read it into a spreadsheet software package (e.g. Excel, Google Docs, Numbers, OpenOffice). You may run into an issue at this point - if so, take a look at https://github.com/astropy/astropy/issues/7357 to see how to fix it.</p>
# </li>
# </ol>
#
# </div>
#
# </section>
#
# <center><i>This notebook was written by <a href="https://aperiosoftware.com/">Aperio Software Ltd.</a> © 2019, and is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License (CC BY 4.0)</a></i></center>
#
# 
| 05-tables.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.0
# language: julia
# name: julia-1.6
# ---
using LinearAlgebra
using Jacobi
using Test
# # Prime Basis
# The prime basis is defined in the reference element $\hat{E}=[-1,1]^2$ as given in eq (2.15) of the Wheeler and Yotov 2006 SIAM paper.
# $$
# \hat{\boldsymbol{V}}(\hat{E}) = P_1(\hat{E})^2 + r \, \text{curl} (\hat{x}^2\hat{y}) + s \, \text{curl} (\hat{x}\hat{y}^2)
# $$
# +
function PrimeBasis(xhat, yhat)
"""
Input:
xhat, yhat: are defined on the reference element Ehat=[-1,1]^2
Return:
Prime basis: array of size (2,m,8) evaluated at xhat,yhat
Note the first entry "2" indexes the basis components in the directions x,y
"m" is the length of xhat,yhat
"8" is the dimension of the prime basis
"""
m = length(xhat)
P = zeros(2,m,8)
P[1,:,1] = ones(m)
P[1,:,2] = xhat
P[1,:,3] = yhat
P[2,:,4] = ones(m)
P[2,:,5] = xhat
P[2,:,6] = yhat
# supplement (curl term)
P[1,:,7] = (xhat.^2)
P[2,:,7] = (-2*xhat .* yhat)
P[1,:,8] = (2*xhat .* yhat)
P[2,:,8] = (-yhat.^2)
return P
end
function VondermondeMat()
"""
Input:
------
Note
3---4
| |
1---2
Output:
------
VM: the 8x8 Vandermonde matrix
"""
# normals
nl = [-1.0;0.0]
nr = [1.0;0.0]
nb = [0.0;-1.0]
nt = [0.0;1.0]
# nodes
node1 = [-1.;-1.]
node2 = [1.;-1.]
node3 = [-1.;1.]
node4 = [1.;1.]
nodes = [node1 node2 node2 node4 node3 node4 node1 node3]
normals = [nb nb nr nr nt nt nl nl]
# vandermonde matrix, V_ij = phi_j(x_i).n_i
VM = zeros(8,8)
for i=1:8
for j=1:8
P = PrimeBasis([nodes[1,i]], [nodes[2,i]])
VM[i,j] = P[1,1,j]*normals[1,i] + P[2,1,j]*normals[2,i]
end
end
return VM
end
# -
# # Nodal Basis in libCEED
# The following gives the nodal basis for the quad element that we use in libCEED. Note that we have 8 dofs; since they are vector-valued, we separate their $x,y$ components. Note that we have not applied the `Piola` map on the basis; we should apply it in the Qfunction.
VM = VondermondeMat();
invVM = inv(VM);
V = Array{String}(undef, 2,8)
for i = 1:8
b = zeros(8)
b[i] = 1
xx = invVM * b
vx = Array{String}(undef, 5)
vy = Array{String}(undef, 5)
vx[1] = ""
vx[2] = "*xhat"
vx[3] = "*yhat"
vx[4] = "*xhat*xhat"
# This is 2xy, we multiplied 2 in the coefficient vector V1
vx[5] = "*xhat*yhat"
vy[1] = ""
vy[2] = "*xhat"
vy[3] = "*yhat"
# This is -2xy, we multiplied -2 in the coefficient vector V2
vy[4] = "*xhat*yhat"
# This is -y^2, we multiplied -1 in the coefficient vector V2
vy[5] = "*yhat*yhat"
V1 = [xx[1] xx[2] xx[3] xx[7] 2*xx[8]]
V2 = [xx[4] xx[5] xx[6] -2*xx[7] -xx[8]]
VX = join(("$a"vx[i] for (i,a) in enumerate(V1) if a ≠ 0), " + ")
VY = join(("$a"vy[i] for (i,a) in enumerate(V2) if a ≠ 0), " + ")
#println("============= Nodal Basis of dof$i =============\n")
println("Bx[:,$(i)] = @. ",VX, ";")
println("By[:,$(i)] = @. ",VY, ";")
#println("\n")
V[1,i] = VX
V[2,i] = VY
end
# this is divergence of the nodal basis that we use it in libCEED
divV = [0. 1. 0. 0. 0. 1. 0. 0.]
Dhat = divV * invVM
# # CeedBasisView
function GetBasis(xhat, yhat)
"""
We create basis as a matrix of size (2*num_qpts,8)
The first 1:num_qpts is dof in x-direction
and num_qpts+1:2*num_qpts is dof in y-direction
And
Div operator as a matrix (num_qpts,8)
"""
m = length(xhat)
Bx = zeros(m,8)
By = zeros(m,8)
Bx[:,1] = @. -0.125 + 0.125*xhat*xhat;
By[:,1] = @. -0.25 + 0.25*xhat + 0.25*yhat + -0.25*xhat*yhat;
Bx[:,2] = @. 0.125 + -0.125*xhat*xhat;
By[:,2] = @. -0.25 + -0.25*xhat + 0.25*yhat + 0.25*xhat*yhat;
Bx[:,3] = @. 0.25 + 0.25*xhat + -0.25*yhat + -0.25*xhat*yhat;
By[:,3] = @. -0.125 + 0.125*yhat*yhat;
Bx[:,4] = @. 0.25 + 0.25*xhat + 0.25*yhat + 0.25*xhat*yhat;
By[:,4] = @. 0.125 + -0.125*yhat*yhat;
Bx[:,5] = @. -0.125 + 0.125*xhat*xhat;
By[:,5] = @. 0.25 + -0.25*xhat + 0.25*yhat + -0.25*xhat*yhat;
Bx[:,6] = @. 0.125 + -0.125*xhat*xhat;
By[:,6] = @. 0.25 + 0.25*xhat + 0.25*yhat + 0.25*xhat*yhat;
Bx[:,7] = @. -0.25 + 0.25*xhat + 0.25*yhat + -0.25*xhat*yhat;
By[:,7] = @. -0.125 + 0.125*yhat*yhat;
Bx[:,8] = @. -0.25 + 0.25*xhat + -0.25*yhat + 0.25*xhat*yhat;
By[:,8] = @. 0.125 + -0.125*yhat*yhat
B = zeros(2m,8)
B[1:m,:] = Bx[1:m,:]
B[m+1:2*m,:] = By[1:m,:]
Dhat = zeros(1,8)
Dhat[1,:] .= 0.25;
Div = repeat(Dhat, inner=(m,1))
return B, Div
end
function GetQuadrature2D(Q, quad_mode)
"""
Input:
    Q: number of quadrature points in 1D over [-1,1]
    quad_mode: "GAUSS" or "LOBATTO"
Return: quadrature data over [-1,1]^2
    w: weights of the quadrature points
    qx: quadrature points in x
    qy: quadrature points in y
"""
# 1D Gauss rule; zgj/wgj (and zglj/wglj) are provided by the Jacobi.jl package
if quad_mode == "GAUSS"
q = zgj(Q, 0.0, 0.0)
w1 = wgj(q, 0.0, 0.0)
elseif quad_mode == "LOBATTO"
q = zglj(Q, 0.0, 0.0)
w1 = wglj(q, 0.0, 0.0)
end
w = zeros(Q*Q)
qx = zeros(Q*Q)
qy = zeros(Q*Q)
for i=1:Q
for j=1:Q
k = (i-1)*Q +j
qx[k] = q[j]
qy[k] = q[i]
w[k] = w1[j]*w1[i]
end
end
return w, qx, qy
end
Q = 3
num_qpts = Q*Q
w2, qx, qy = GetQuadrature2D(Q, "GAUSS")
B, Div = GetBasis(qx, qy);
B
Div
# To check the values against libCEED, we copied the result of t330-basis.c for Q=3 and CEED_GAUSS
B_CEED = [
-0.05000000 0.05000000 0.10000000 0.01270167 -0.05000000 0.05000000 -0.78729833 -0.10000000;
-0.12500000 0.12500000 0.44364917 0.05635083 -0.12500000 0.12500000 -0.44364917 -0.05635083;
-0.05000000 0.05000000 0.78729833 0.10000000 -0.05000000 0.05000000 -0.10000000 -0.01270167;
-0.05000000 0.05000000 0.05635083 0.05635083 -0.05000000 0.05000000 -0.44364917 -0.44364917;
-0.12500000 0.12500000 0.25000000 0.25000000 -0.12500000 0.12500000 -0.25000000 -0.25000000;
-0.05000000 0.05000000 0.44364917 0.44364917 -0.05000000 0.05000000 -0.05635083 -0.05635083;
-0.05000000 0.05000000 0.01270167 0.10000000 -0.05000000 0.05000000 -0.10000000 -0.78729833;
-0.12500000 0.12500000 0.05635083 0.44364917 -0.12500000 0.12500000 -0.05635083 -0.44364917;
-0.05000000 0.05000000 0.10000000 0.78729833 -0.05000000 0.05000000 -0.01270167 -0.10000000;
-0.78729833 -0.10000000 -0.05000000 0.05000000 0.10000000 0.01270167 -0.05000000 0.05000000;
-0.44364917 -0.44364917 -0.05000000 0.05000000 0.05635083 0.05635083 -0.05000000 0.05000000;
-0.10000000 -0.78729833 -0.05000000 0.05000000 0.01270167 0.10000000 -0.05000000 0.05000000;
-0.44364917 -0.05635083 -0.12500000 0.12500000 0.44364917 0.05635083 -0.12500000 0.12500000;
-0.25000000 -0.25000000 -0.12500000 0.12500000 0.25000000 0.25000000 -0.12500000 0.12500000;
-0.05635083 -0.44364917 -0.12500000 0.12500000 0.05635083 0.44364917 -0.12500000 0.12500000;
-0.10000000 -0.01270167 -0.05000000 0.05000000 0.78729833 0.10000000 -0.05000000 0.05000000;
-0.05635083 -0.05635083 -0.05000000 0.05000000 0.44364917 0.44364917 -0.05000000 0.05000000;
-0.01270167 -0.10000000 -0.05000000 0.05000000 0.10000000 0.78729833 -0.05000000 0.05000000
];
using Test
@test B_CEED ≈ B
w2, qx, qy = GetQuadrature2D(1, "GAUSS")
| t330-basis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ilf0GGh_Itts" colab_type="text"
# <h2 align="center">Logistic Regression: A Sentiment Analysis Case Study</h2>
# + [markdown] id="9F9JwIKQItty" colab_type="text"
# ### Introduction
# ___
# + [markdown] id="dK0h38bjIttz" colab_type="text"
# - IMDB movie reviews dataset
# - http://ai.stanford.edu/~amaas/data/sentiment
# - Contains 25000 positive and 25000 negative reviews
# <img src="https://i.imgur.com/lQNnqgi.png" align="center">
# - Contains a limited number of reviews per movie
# - At least 7 stars out of 10 $\rightarrow$ positive (label = 1)
# - At most 4 stars out of 10 $\rightarrow$ negative (label = 0)
# - 50/50 train/test split
# - Evaluation metric: accuracy
# + [markdown] id="2jLRaxLVItt1" colab_type="text"
# <b>Features: bag of 1-grams with TF-IDF values</b>:
# - Extremely sparse feature matrix - close to 97% are zeros
# + [markdown] id="AZWP7Q3UItt2" colab_type="text"
# <b>Model: Logistic regression</b>
# - $p(y = 1|x) = \sigma(w^{T}x)$
# - Linear classification model
# - Can handle sparse data
# - Fast to train
# - Weights can be interpreted
# <img src="https://i.imgur.com/VieM41f.png" align="center" width=500 height=500>
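# The model bullets above can be sketched in a few lines. The weights and features below are toy stand-ins, not the fitted IMDB model:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps a real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# toy weight vector w and feature vector x (hypothetical values for illustration)
w = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.0, 0.3])
p = sigmoid(w @ x)   # p(y = 1 | x)
print(p)
```

Because the score $w^{T}x$ is linear and the feature vector is sparse, only the nonzero entries of $x$ contribute, which is why logistic regression handles sparse TF-IDF matrices efficiently.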
# + [markdown] id="cvqzihQrItt4" colab_type="text"
# ### Task 1: Loading the dataset
# ---
# + id="8PKuNrWeItt5" colab_type="code" colab={}
import pandas as pd
# + id="I2N84cw0ItuA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="023154d8-7e6e-4f60-8204-3786f6e35b9b"
data = pd.read_csv("movie_data.csv")
data.head()
# + id="mVMAG0INJ_g4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="ec603e94-bf39-4a61-afa1-5f37d8069091"
data['review'][0]
# + [markdown] id="p8-2I03nItuG" colab_type="text"
# <h2 align="center">Bag of words / Bag of N-grams model</h2>
# + [markdown] id="7PJDiAH9ItuH" colab_type="text"
# ### Task 2: Transforming documents into feature vectors
# + [markdown] id="AA0yka2NItuJ" colab_type="text"
# Below, we will call the fit_transform method on CountVectorizer. This will construct the vocabulary of the bag-of-words model and transform the following three sentences into sparse feature vectors:
# 1. The sun is shining
# 2. The weather is sweet
# 3. The sun is shining, the weather is sweet, and one and one is two
#
# + id="N1sbL0n0ItuK" colab_type="code" colab={}
import numpy as np
# transformer
from sklearn.feature_extraction.text import CountVectorizer
phrases = np.array(['The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
count = CountVectorizer()
bag = count.fit_transform(phrases)
# + id="7l3Cbn4NItuS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} outputId="10736cbc-66d1-42f3-9947-84ff16f7802d"
# mapping from words to column indices
count.vocabulary_
# + id="_6Mh2hy1R4fj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="812aeb2a-1b55-41ff-c4d6-b5ff9b26786b"
# term frequencies; columns follow the vocabulary indices above
bag.toarray()
# + [markdown] id="vOk8PwW5Itug" colab_type="text"
# Raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*
# + [markdown] id="q6H98fqoItui" colab_type="text"
# ### Task 3: Word relevancy using term frequency-inverse document frequency
# + [markdown] id="wdwjnsOdItuj" colab_type="text"
# $$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$
# + [markdown] id="mgS1v5DHItuk" colab_type="text"
# $$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$
# + [markdown] id="7CyduPmMItul" colab_type="text"
# where $n_d$ is the total number of documents, and df(d, t) is the number of documents d that contain the term t.
# + id="iFBrGBG_Itun" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="64913f14-8b5d-47e6-ab03-51f7868ee559"
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,norm='l2')
# decimal precision = 2
np.set_printoptions(precision=2)
print(tfidf.fit_transform(count.fit_transform(phrases)).toarray())
# + [markdown] id="VK0oxoO1Itux" colab_type="text"
# The equations for the idf and tf-idf that are implemented in scikit-learn are:
#
# $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
# and the tf-idf equation is:
#
# $$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
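# As a sketch, the scikit-learn formulas above can be reproduced by hand on the same three example sentences and compared against `TfidfTransformer` (which, with its defaults, also applies an l2 norm per document):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = ['The sun is shining',
        'The weather is sweet',
        'The sun is shining, the weather is sweet, and one and one is two']
tf = CountVectorizer().fit_transform(docs).toarray().astype(float)

n_d = tf.shape[0]                    # number of documents
df = (tf > 0).sum(axis=0)            # document frequency of each term
idf = np.log((1 + n_d) / (1 + df))   # smoothed idf, as in the formula above
tfidf_manual = tf * (idf + 1)        # tf-idf(t,d) = tf(t,d) * (idf(t,d) + 1)
tfidf_manual /= np.linalg.norm(tfidf_manual, axis=1, keepdims=True)  # l2 norm

tfidf_sklearn = TfidfTransformer(use_idf=True, norm='l2').fit_transform(
    CountVectorizer().fit_transform(docs)).toarray()
print(np.allclose(tfidf_manual, tfidf_sklearn))
```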
# + [markdown] id="Sfb1FqQ1Ituy" colab_type="text"
# ### Task 4: Data Preparation
# + id="HcOQs9PcItuy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6e7dba98-9624-4b48-cf6b-56d8f1dae5dc"
data.loc[0,'review'][-50:]
# + id="foSMYfTsItu5" colab_type="code" colab={}
import re
def preprocessor(text):
    # strip HTML tags such as <br />
    text = re.sub(r'<[^>]*>', '', text)
    # collect emoticons so they can be re-appended after cleaning
    emoticons = re.findall(r'(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
    # lowercase, replace non-word characters with spaces, then append emoticons
    text = re.sub(r'[\W]+', ' ', text.lower()) +\
        ' '.join(emoticons).replace('-', '')
    return text
# + id="ff1RPCIXItu-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9515f1fb-1dfc-41af-9774-d424ba9ea4f9"
preprocessor('is seven.<br /><br />Title (Brazil): Not Available')
# + id="CEQalF8LItvC" colab_type="code" colab={}
data['review'] = data['review'].apply(preprocessor)
# + [markdown] id="5oA6GIwOItvN" colab_type="text"
# ### Task 5: Tokenization of documents
# + id="tRmBjvlsItvO" colab_type="code" colab={}
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
# + id="mTotPi_4ItvR" colab_type="code" colab={}
def stemming(text):
    return [porter.stem(word) for word in text.split()]
# + id="s1u_95ICItvV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="616e5f37-9d37-4457-f3f1-b090099741f1"
stemming('The Runners like running and thus they run')
# + id="L8fDbg6lZHoJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="9835a97c-4680-4445-80fc-7e0b19c37f7a"
import nltk
nltk.download('stopwords')
# + id="7BaDFINPItvY" colab_type="code" colab={}
from nltk.corpus import stopwords
stop = stopwords.words('english')
# + id="OJHp0Hr7Itvb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="99a4df86-753a-4f99-9589-09a1f2c4fb2c"
[word for word in stemming('The Runners like running and thus they run') if word not in stop]
# + [markdown] id="B8VTwXyVItve" colab_type="text"
#
# + [markdown] id="9C4_JLGWItve" colab_type="text"
#
# + [markdown] id="OfJ2Lo4mItvf" colab_type="text"
# ### Task 6: Transform Text Data into TF-IDF Vectors
# + id="juQsavx5Itvf" colab_type="code" colab={}
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(tokenizer=stemming, norm='l2',use_idf=True)
y = data.sentiment.values
X = tfidf.fit_transform(data.review)
# + [markdown] id="y6x5yfj-Itvl" colab_type="text"
# ### Task 7: Document Classification using Logistic Regression
# + id="PEfD8CPeItvm" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=101, test_size=0.2)
# + id="rJQGvnJ5Itvr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="ff533e81-1acd-433f-d91c-d1f7ac978662"
# save model
import pickle
from sklearn.linear_model import LogisticRegressionCV
lrc = LogisticRegressionCV(cv=5, random_state=101).fit(X_train,y_train)
# + id="mP73YM56eip4" colab_type="code" colab={}
with open("saved_model.sav","wb") as saved_model:
    pickle.dump(lrc, saved_model)
# + [markdown] id="Z-JqYTK1Itvv" colab_type="text"
# ### Task 8: Model Evaluation
# + id="QAay5YQIItvv" colab_type="code" colab={}
saved_lrc = pickle.load(open('saved_model.sav','rb'))
# + id="VQinxIzrItv0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f902048e-cb7e-4eb5-83b4-2f84d79d13c3"
saved_lrc.score(X_test, y_test)
| Sentiment Analysis - Step 2/Sentiment_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # An Astronomical Application of Machine Learning:
# ## Separating Stars and Galaxies from SDSS
#
# ##### Version 0.3
#
# ***
# By <NAME> 2017 Jan 22
#
# AA Miller 2022 Mar 06 (v0.03)
# The problems in the following notebook develop an end-to-end machine learning model using actual astronomical data to separate stars and galaxies. There are 5 steps in this machine learning workflow:
#
# 1. Data Preparation
# 2. Model Building
# 3. Model Evaluation
# 4. Model Optimization
# 5. Model Predictions
#
# The data come from the [Sloan Digital Sky Survey](http://www.sdss.org) (SDSS), an imaging survey that has several similarities to LSST (though the telescope was significantly smaller and the survey did not cover as large an area).
# *Science background*: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring the structure of the Milky Way without knowing which sources are galaxies and which are stars.
#
# During this exercise, we will utilize supervised machine learning methods to separate extended sources (galaxies) and point sources (stars) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Problem 1) Examine the Training Data
#
# For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the [454 properties measured for each source in SDSS](https://skyserver.sdss.org/dr12/en/help/browser/browser.aspx#&&history=description+PhotoObjAll+U)).
# If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database:
#
# SELECT TOP 20000
# p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
# p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
# s.class
# FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
# WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
# ORDER BY p.objid ASC
# First [download the training set](https://arch.library.northwestern.edu/downloads/p2676w013?locale=en) and the [blind test set](https://arch.library.northwestern.edu/downloads/xd07gt05g?locale=en) for this problem.
# **Problem 1a**
#
# Visualize the training set data. The data have 8 features ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r', 'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r'], and a 9th column ['class'] corresponding to the labels ('STAR' or 'GALAXY' in this case).
#
# *Hint* - just execute the cell below.
sdss_df = pd.read_hdf("sdss_training_set.h5")
sns.pairplot(sdss_df, hue = 'class', diag_kind = 'hist')
# **Problem 1b**
#
# Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
# *write your answer here - do not change it after later completing the problem*
# The final data preparation step is to create an independent test set to evaluate the generalization error of the final tuned model. Independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
# [`sklearn.model_selection`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection) has a useful helper function [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
#
# **Problem 1c** Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: `train_X, train_y, test_X, test_y`, respectively. Use `rs` for the `random_state` in `train_test_split`.
#
# *Hint - recall that `sklearn` utilizes X, a 2D `np.array()`, and y as the features and labels arrays, respectively.*
# +
from sklearn.model_selection import train_test_split
rs = 1851
feats = list(sdss_df.columns)
feats.remove('class')
X = np.array(sdss_df[feats])
y = np.array(sdss_df['class'])
train_X, test_X, train_y, test_y = train_test_split( X, y, test_size = 0.3, random_state = rs)
# -
# We will now ignore everything in the test set until we have fully optimized the machine learning model.
# ## Problem 2) Model Building
#
# After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
#
# Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
# **Problem 2a**
#
# Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
#
# *Hint* - the [`KNeighborsClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier) object in the [`sklearn.neighbors`](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier) module may be useful for this task.
# +
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=25)
knn_clf.fit(train_X, train_y)
# -
# **Problem 2b**
#
# Train a Random Forest (RF) model [(Breiman 2001)](http://link.springer.com/article/10.1023/A:1010933404324) on the training set. Include 50 trees in the forest using the `n_estimators` parameter. Again, set `random_state` = rs.
#
# *Hint* - use the [`RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) object from the [`sklearn.ensemble`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.ensemble) module. Also - be sure to set `n_jobs = -1` in every call of `RandomForestClassifier`.
# +
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=50, random_state=rs, n_jobs=-1)
rf_clf.fit(train_X, train_y)
# -
# A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
#
# RF feature importance can be measured by randomly shuffling the values of a particular feature and measuring the decrease in the model's overall accuracy (permutation importance). The related impurity-based importances computed during training can be accessed using the `.feature_importances_` attribute associated with the `RandomForestClassifier()` object. The higher the value, the more important the feature.
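# As a sketch of the shuffling-based importance described above (on synthetic data, not this notebook's SDSS arrays), scikit-learn exposes it via `sklearn.inspection.permutation_importance`:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# synthetic stand-in for a training set with 8 features
X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=50, random_state=0, n_jobs=-1).fit(X, y)

# shuffle each feature n_repeats times and record the drop in accuracy
perm = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print(np.argsort(perm.importances_mean)[::-1])   # features, most important first
```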
# **Problem 2c**
#
# Calculate the relative importance of each feature.
#
# Which feature is most important? Does this match your answer from **1b**?
# +
feat_str = ',\n'.join(['{}'.format(feat) for feat in np.array(feats)[np.argsort(rf_clf.feature_importances_)[::-1]]])
print('From most to least important: \n{}'.format(feat_str))
# -
# *write your answer here*
# ## Problem 3) Model Evaluation
#
# To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
#
# If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
# The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
# The SDSS photometric classifier uses a [single hard cut](http://www.sdss.org/dr12/algorithms/classify/#photo_class) to separate stars and galaxies in imaging data:
#
# $$\mathtt{psfMag_r} - \mathtt{cModelMag_r} > 0.145.$$
#
# Sources that satisfy this criterion are considered galaxies.
# **Problem 3a**
#
# Determine the baseline figure of merit by measuring the accuracy of the SDSS photometric classifier on the training set.
#
# *Hint - the [`accuracy_score`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) function in the [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics) module may be useful.*
# +
from sklearn.metrics import accuracy_score
phot_y = np.empty_like(train_y)
phot_gal = np.logical_not(train_X[:,0] - train_X[:,-1] < 0.145)
phot_y[phot_gal] = 'GALAXY'
phot_y[~phot_gal] = 'STAR'
print("The baseline FoM = {:.4f}".format(accuracy_score(train_y, phot_y)))
# -
# **Problem 3b**
#
# Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
#
# *Hint* - the [`cross_val_score`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function from the `sklearn.model_selection` module performs the necessary calculations.
# +
from sklearn.model_selection import cross_val_score
knn_cv = cross_val_score(knn_clf, train_X, train_y, cv=10)
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format(np.mean(knn_cv), np.std(knn_cv, ddof=1)))
# -
# **Problem 3c**
#
# Use 10-fold cross validation to estimate the FoM for the random forest model.
# +
rf_cv = cross_val_score(rf_clf, train_X, train_y, cv=10)
print('The RF model FoM = {:.4f} +/- {:.4f}'.format(np.mean(rf_cv), np.std(rf_cv, ddof=1)))
# -
# **Problem 3d**
#
# Do the machine-learning models outperform the SDSS photometric classifier?
# *write your answer here*
# ## Problem 4) Model Optimization
#
# While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve its performance by adjusting the model tuning parameters, a process known as model optimization.
# All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model should be smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
#
# 1. $N_\mathrm{tree}$ - the number of trees in the forest `n_estimators` (default: 10) in `sklearn`
# 2. $m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node `max_features` (default: `sqrt(n_features)`) in `sklearn`
# 3. Pruning criteria - defined stopping criteria for ending continued growth of the tree, there are [many choices](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) for this in `sklearn` (My preference is `min_samples_leaf` (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)
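# A quick sketch (on synthetic data, not the star-galaxy set) of how the third flavor, `min_samples_leaf`, controls smoothness: larger leaves give coarser, more regularized trees.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for a training set with 8 features
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
for leaf in (1, 10, 100):
    cv = cross_val_score(RandomForestClassifier(n_estimators=30,
                                                min_samples_leaf=leaf,
                                                random_state=0, n_jobs=-1),
                         X, y, cv=3)
    print('min_samples_leaf = {:3d}: FoM = {:.4f}'.format(leaf, np.mean(cv)))
```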
# Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
# Before globally optimizing the model, let's develop some intuition for how the tuning parameters affect the final model predictions.
# **Problem 4a**
#
# Determine the 10-fold cross validation accuracy for $k$NN models with $k$ = 1, 10, 100.
#
# How do you expect changing the number of neighbors to affect the results?
for k in [1,10,100]:
    knn_cv = cross_val_score(KNeighborsClassifier(n_neighbors=k), train_X, train_y, cv=10)
    print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format(k, np.mean(knn_cv), np.std(knn_cv, ddof=1)))
# *write your answer here*
# **Problem 4b**
#
# Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
#
# How do you expect changing the number of trees to affect the results?
for ntree in [1,10,30,100,300]:
    rf_cv = cross_val_score(RandomForestClassifier(n_estimators=ntree), train_X, train_y, cv=10)
    print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format(ntree, np.mean(rf_cv), np.std(rf_cv, ddof=1)))
# *write your answer here*
# Now you are ready for the moment of truth!
# ## Problem 5) Model Predictions
# **Problem 5a**
#
# Calculate the FoM for the SDSS photometric model on the test set.
# +
phot_y = np.empty_like(test_y)
phot_gal = np.logical_not(test_X[:,0] - test_X[:,-1] < 0.145)
phot_y[phot_gal] = 'GALAXY'
phot_y[~phot_gal] = 'STAR'
print("The baseline FoM = {:.4f}".format(accuracy_score(test_y, phot_y)))
# -
# **Problem 5b**
#
# Using the optimal number of trees from **4b** calculate the FoM for the random forest model.
#
# *Hint* - remember that the model should be trained on the training set, but the predictions are for the test set.
# +
rf_clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
rf_clf.fit(train_X, train_y)
test_preds = rf_clf.predict(test_X)
print("The RF model has FoM = {:.4f}".format(accuracy_score(test_y, test_preds)))
# -
# **Problem 5c**
#
# Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
#
# *Hint* - the [`confusion_matrix`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html) function in `sklearn.metrics` will help.
# +
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, test_preds))
# -
# *write your answer here*
# **Problem 5d**
#
# Calculate (and plot the region of interest) the [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) assuming that stars are the positive class.
#
# *Hint 1* - you will need to calculate probabilistic classifications for the test set using the [`predict_proba()`](http://scikit-learn.org/dev/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.predict_proba) method.
#
# *Hint 2* - the [`roc_curve`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html) function in the `sklearn.metrics` module will be useful.
# +
from sklearn.metrics import roc_curve
test_y_int = np.ones_like(test_y, dtype=int)
test_y_int[np.where(test_y == 'GALAXY')] = 0
test_preds_proba = rf_clf.predict_proba(test_X)
fpr, tpr, thresh = roc_curve(test_y_int, test_preds_proba[:,1])
fig, ax = plt.subplots()
ax.plot(fpr, tpr)
ax.set_xlabel('FPR')
ax.set_ylabel('TPR')
ax.set_xlim(2e-3,.2)
ax.set_ylim(0.3,1)
# -
# **Problem 5e**
#
# Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
#
# What classification threshold should be adopted for this model?
#
# What fraction of galaxies does this model misclassify?
# +
tpr_99_thresh = thresh[np.argmin(np.abs(0.99 - tpr))]
print('This model requires a classification threshold of {:.4f}'.format(tpr_99_thresh))
fpr_at_tpr_99 = fpr[np.argmin(np.abs(0.99 - tpr))]
print('This model misclassifies {:.2f}% of galaxies'.format(fpr_at_tpr_99*100))
# -
# ## Problem 6) Classify New Data
#
# Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
new_data_df = pd.read_hdf("blind_test_set.h5")
# **Problem 6a**
#
# Create a feature and label array for the new data.
#
# *Hint* - copy the code you developed above in Problem 2.
new_X = np.array(new_data_df[feats])
new_y = np.array(new_data_df['class'])
# **Problem 6b**
#
# Calculate the accuracy of the model predictions on the new data.
new_preds = rf_clf.predict(new_X)
print("The model has an accuracy of {:.4f}".format(accuracy_score(new_y, new_preds)))
# **Problem 6c**
#
# Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
#
# If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
# *write your answer here*
# ## Challenge Problem) Full RF Optimization
#
# Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
#
# We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
#
# It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit.
# Use [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to perform a **3-fold** CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
#
# What are the optimal tuning parameters for the model?
#
# *Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.*
#
# *Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid*
# +
from sklearn.model_selection import GridSearchCV
grid_results = GridSearchCV(RandomForestClassifier(n_jobs=-1),
{'n_estimators': [30, 100, 300],
'max_features': [1, 3, 7],
'min_samples_leaf': [1,10,30]},
cv = 3)
grid_results.fit(train_X, train_y)
# -
print('The best model has {}'.format(grid_results.best_params_))
| Sessions/Session14/Day1/SeparatingStarsAndGalaxiesSolutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# +
import boto3
import dask.dataframe as dd
from sagemaker import get_execution_role
import pandas as pd
from fastparquet import ParquetFile
role = get_execution_role()
bucket='tally-ai-dspt3'
folder = 'yelp-kaggle-raw-data'
pd.set_option('display.max_columns', None)
print(f"S3 Bucket is {bucket}, and Folder is {folder}")
# +
# Run the following commands in a terminal, verbatim; the kernel restart is
# done from the Jupyter menu:
# source activate python3
# conda install dask -y
# conda install s3fs -c conda-forge -y
# restart kernel
# Note: to export compact Parquet files from dask to S3, also install fastparquet:
# conda install -c conda-forge fastparquet
# or all together: conda install s3fs fastparquet -c conda-forge -y
# -
# # Loading Yelp Business `attributes`
data_key = 'yelp_academic_dataset_business.json'
data_location = 's3://{}/{}/{}'.format(bucket, folder, data_key)
business = dd.read_json(data_location, lines=True)
business.head(5)
# In order to use dask
#
# 1. Activate Conda Python 3 `source activate python3`
# 2. Install dask and s3fs `conda install dask s3fs -c conda-forge -y`
# # Loading Yelp Business `reviews` (~6 GB)
# Loading the full reviews file is expensive; consider creating a subset on RDS/Postgres and loading only what's necessary
data_key = 'yelp_academic_dataset_review.json'
data_location = 's3://{}/{}/{}'.format(bucket, folder, data_key)
reviews = dd.read_json(data_location, blocksize=32e6)
reviews.head()
# +
# need to join the three datasets together; dask's merge works just like pandas'
# once joined, filter to restaurants
# once filtered, the frame only contains restaurants; export as .csv
# then load the .csv file to the S3 bucket
# -
# Loading the full users file is expensive; consider creating a subset on RDS/Postgres and loading only what's necessary
data_key = 'yelp_academic_dataset_user.json'
data_location = 's3://{}/{}/{}'.format(bucket, folder, data_key)
users = dd.read_json(data_location, blocksize=32e6)
users.head()
# +
#business_reviews = business.merge(reviews, on = 'business_id', how = 'inner')
# +
#business_reviews.head()
# +
#business_reviews_users = business_reviews.merge(users, on = 'user_id', how = 'inner')
# +
#business_reviews_users.head()
# +
#method to pull unique values out of the 'categories' column in business
# def unique_col(col):
# return ','.join(set(col.split(',')))
# x = business.categories.apply(unique_col)
# -
#drop na values
business2 = business.dropna(subset=['categories'])
restaurants = business2[business2['categories'].str.contains('Restaurants')]
restaurants.head()
#'stars' appears as a column in the first two datasets; renaming to identify each source
restaurants['stars_business'] = restaurants['stars']
restaurants2 = restaurants.drop(['stars'], axis=1)
#doing same for reviews:
reviews['stars_reviews'] = reviews['stars']
reviews2 = reviews.drop(['stars'], axis=1)
restaurants2_reviews2 = restaurants2.merge(reviews2, on = 'business_id', how = 'inner')
restaurants2.columns
reviews.columns
restaurants2_reviews2.columns
restaurants2_reviews2.head()
users.columns
restaurants2_reviews2.columns
restaurants2_reviews2_users = restaurants2_reviews2.merge(users, on = 'user_id', how = 'inner')
restaurants2_reviews2_users.columns
#doing same for reviews:
restaurants2_reviews2['name_reviews'] = restaurants2_reviews2['name']
restaurants2_reviews2['review_count_reviews'] = restaurants2_reviews2['review_count']
restaurants2_reviews2['cool_reviews'] = restaurants2_reviews2['cool']
restaurants2_reviews2['funny_reviews'] = restaurants2_reviews2['funny']
restaurants2_reviews2['useful_reviews'] = restaurants2_reviews2['useful']
restaurants3_reviews3 = restaurants2_reviews2.drop(['name', 'review_count', 'cool', 'funny', 'useful'], axis=1)
#now do this again to users:
#doing same for reviews:
users['name_users'] = users['name']
users['review_count_users'] = users['review_count']
users['cool_users'] = users['cool']
users['funny_users'] = users['funny']
users['useful_users'] = users['useful']
users2 = users.drop(['name', 'review_count', 'cool', 'funny', 'useful'], axis=1)
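# As an aside: instead of renaming overlapping columns by hand before each merge, pandas (and dask, whose `merge` mirrors the pandas API) accepts a `suffixes` argument that disambiguates colliding names automatically. A minimal pandas sketch with toy frames (the frames below are illustrative, not the real Yelp data):

```python
import pandas as pd

left = pd.DataFrame({'business_id': ['b1', 'b2'], 'stars': [4.0, 3.5]})
right = pd.DataFrame({'business_id': ['b1', 'b2'], 'stars': [5, 1]})

# overlapping 'stars' columns get distinguishing suffixes instead of manual renames
merged = left.merge(right, on='business_id', how='inner',
                    suffixes=('_business', '_reviews'))
print(merged.columns.tolist())
```

# The same `suffixes` argument should work on the dask dataframes used in this notebook.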
# +
#make sure to include: address, hours, postal code, and date
# +
#restaurants2_reviews_users.head()
# -
restaurants3_reviews3_users2 = restaurants3_reviews3.merge(users2, on = 'user_id', how = 'inner')
#check that everything worked:
restaurants3_reviews3_users2.columns
final_combined = restaurants3_reviews3_users2.drop(['attributes', 'hours'], axis=1)
# +
#reducing file size further by dropping additional columns before the parquet export, as RAM pressure causes the kernel to restart
#restaurants_reviews_users2 = restaurants_reviews_users.drop(['address', 'attributes', 'hours', 'is_open', 'postal_code', 'date',
# 'friends', 'yelping_since','compliment_cool', 'compliment_cute', 'compliment_hot',
# 'compliment_list', 'compliment_more', 'compliment_note', 'compliment_photos',
# 'compliment_plain', 'compliment_profile'], axis=1)
# -
final_combined.head()
# +
# 'compliment_cool', 'compliment_cute', 'compliment_hot', 'compliment_list', 'compliment_more', 'compliment_note', 'compliment_photos', 'compliment_plain', 'compliment_profile', 'compliment_write',
# +
#add the following code to run parquet, in terminal:
#conda install -c conda-forge fastparquet
# -
final_combined.to_parquet('s3://tally-ai-dspt3/yelp-kaggle-raw-data/final_combined.parquet.gzip', compression='gzip')
# +
#dd.to_csv(restaurants_reviews_users,'s3://tally-ai-dspt3/yelp-kaggle-raw-data/restaurants_reviews_users.csv')
| notebooks/Ofer-dask-joint-tables-and-filter-by-restaurants.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib as mpl
mpl.rcParams['figure.figsize']=(12,7)
sns.set(style="ticks", context="talk")
plt.style.use("dark_background")
#
# -
df = pd.read_csv('00-2-shipman-times-x.csv')
df.head()
ax = df.plot(x='Hour');
ax.set_ylabel('Percentage of Deaths');
# +
fig, ax = plt.subplots()
df.plot(x='Hour', ax=ax);
ax.set_ylabel('Percentage of Deaths');
fig.savefig('00-2-shipman-times.png');
# -
| 00-2-shipman-times/00-2-shipman-times.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cryptocurrency Market Analysis
# *Based off my stock market analysis of tech giants, located [here](https://github.com/melvfernandez/data_science_portfolio/blob/master/Stock%20Market%20Analysis%20for%20Tech%20Stocks.ipynb).*
#
# ***
#
# In this project, we will analyze data from Yahoo Finance of three popular cryptocurrencies to date.
#
# We will use Pandas to extract and analyze the information, visualize it, and analyze risks based on its performance history.
#
# Here are questions we will try to answer:
# - What was the change of price over time?
# - What was the daily return on average of a stock?
# - What was the moving average of various stocks?
# - What is the correlation between daily returns of different stocks?
# - How much value do we put at risk by investing in a stock?
# - How can we attempt to predict future stock behavior?
# +
#python data analysis imports
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
#visualization imports
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
# %matplotlib inline
#grab data imports
import pandas_datareader.data as pdr
from datetime import datetime
# -
#We are going to analyze the top cryptocurrencies.
cc_list = ['BTC-USD','ETH-USD','LTC-USD']
# +
#Setting the end date to today
end = datetime.today()
#Start date set to one year back
start = datetime(end.year-1,end.month,end.day)
# -
#using yahoo finance to grab cryptocurrency data
BTC = pdr.DataReader('BTC-USD','yahoo',start,end)
ETH = pdr.DataReader('ETH-USD','yahoo',start,end)
LTC = pdr.DataReader('LTC-USD','yahoo',start,end)
#STATISTICS FOR BTC'S STOCK
BTC.describe()
BTC.head()
#INFORMATION ABOUT BTC DATAFRAME
BTC.info()
# ***
# ## What is the change in stock's price over time?
#Using pandas we can plot the stock's adjusted closing price
BTC['Adj Close'].plot(legend = True, figsize=(12,5))
# Within the year, we can see the value of BTC almost reach 20K.
#Using pandas once more to plot the total volume being traded over time
BTC['Volume'].plot(legend=True,figsize=(12,5))
# ***
# ## What was the moving average of the stocks?
# +
#using pandas we will create moving averages for 10, 20 and 50 day periods of time
ma_days = [10,20,50]
for ma in ma_days:
column_name = "MA %s days" %(str(ma))
BTC[column_name] = BTC['Adj Close'].rolling(window=ma,center=False).mean()
# -
BTC.tail()
BTC[['Adj Close','MA 10 days','MA 20 days','MA 50 days']].plot(legend=True,figsize=(12,5))
# Generally some occasional dips in the past month, but overall an upward trend.
# ***
# ## What was the daily return average of a stock?
# +
#In order to calculate the daily return we can use the percentage change of the adjusted closing price
BTC['Daily Return'] = BTC['Adj Close'].pct_change()
BTC['Daily Return'].tail()
# -
#Let us now plot the daily return
BTC['Daily Return'].plot(marker='.',legend=True,figsize=(12,5))
# Positive daily returns seem to be more frequent than negative returns.
# ## What was the correlation between daily returns of different stocks?
# +
#Let's read the 'Adj Close' column from all the cryptocurrencies giants
close_df = pdr.DataReader(cc_list,'yahoo',start,end)['Adj Close']
# -
close_df.tail()
#Let's explore the returns again using the percentage change from the adj close.
returns_df = close_df.pct_change()
returns_df.plot(marker='.',legend=True,figsize=(12,5))
# This plot is difficult to understand, let's use a jointplot instead.
returns_df.tail()
#We can now try to find the correlation between Bitcoin and Ethereum
sns.jointplot('BTC-USD','ETH-USD',returns_df,kind='scatter')
# There seems to be a minor positive correlation between the two, the pearsonr correlation coefficient value of 0.4 agrees with that statement.
# Let's use a pairplot to visualize all the tech giants in one view.
sns.pairplot(returns_df.dropna(),diag_kind='kde')
# Quick and easy way to view correlations but let's use a correlation plot to see the actual numbers.
# +
corr = returns_df.dropna().corr()
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
annot=True,
cmap='Blues')
#the darker the shade the higher the correlation
# -
rets = returns_df.dropna()
# +
plt.figure(figsize=(8,5))
plt.scatter(rets.mean(),rets.std(),s=25)
plt.xlabel('Expected Return')
plt.ylabel('Risk')
#For adding annotations in the scatterplot
for label,x,y in zip(rets.columns,rets.mean(),rets.std()):
plt.annotate(
label,
xy=(x,y),xytext=(-120,20),
textcoords = 'offset points',
ha = 'right',
va = 'bottom',
arrowprops = dict(arrowstyle='->'))
# -
# As of January 12, the current trend for cryptocurrencies seems to yield a negative return. We want a crypto with high return and low risk.
rets.head()
qt = rets['BTC-USD'].quantile(0.05)
qt_pct = abs(rets['BTC-USD'].quantile(0.05))*100
print(qt_pct)
print("The 0.05 empirical quantile of daily returns is at {0:.2f}. This means that with 95% confidence, the worst daily loss will not exceed {1:.2f}% (of the investment).".format(qt,qt_pct))
# ## How can we predict future behavior?
days = 365
dt = 1/365
mu = rets.mean()['BTC-USD']
sigma = rets.std()['BTC-USD']
#Function takes in stock price, number of days to run, mean and standard deviation values
def stock_monte_carlo(start_price,days,mu,sigma):
price = np.zeros(days)
price[0] = start_price
shock = np.zeros(days)
drift = np.zeros(days)
for x in range(1,days):
#Shock and drift formulas taken from the Monte Carlo formula
shock[x] = np.random.normal(loc=mu*dt,scale=sigma*np.sqrt(dt))
drift[x] = mu * dt
#New price = Old price + Old price*(shock+drift)
price[x] = price[x-1] + (price[x-1] * (drift[x]+shock[x]))
return price
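# The step-by-step loop above can also be written as a vectorized sketch using NumPy's `cumprod`, mirroring the same update rule price[x] = price[x-1] * (1 + drift + shock) without an explicit Python loop (function and argument values below are illustrative):

```python
import numpy as np

def stock_monte_carlo_vec(start_price, days, mu, sigma, dt):
    # mirrors the loop above: every step multiplies the price by (1 + drift + shock)
    shock = np.random.normal(loc=mu * dt, scale=sigma * np.sqrt(dt), size=days - 1)
    drift = mu * dt
    growth = 1.0 + drift + shock
    # cumulative product gives the whole path at once; prepend 1 for the start price
    return start_price * np.concatenate(([1.0], np.cumprod(growth)))

path = stock_monte_carlo_vec(13841.19, 365, 0.001, 0.02, 1 / 365)
```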
BTC.tail()
# +
start_price = 13841.190 #Taken from above
for run in range(100):
plt.plot(stock_monte_carlo(start_price,days,mu,sigma))
plt.xlabel('Days')
plt.ylabel('Price')
plt.title('Monte Carlo Analysis for BTC')
# +
runs = 10000
simulations = np.zeros(runs)
for run in range(runs):
simulations[run] = stock_monte_carlo(start_price,days,mu,sigma)[days-1]
# +
q = np.percentile(simulations,1)
plt.hist(simulations,bins=200)
plt.figtext(0.6,0.8,s="Start price: $%.2f" %start_price)
plt.figtext(0.6,0.7,"Mean final price: $%.2f" % simulations.mean())
plt.figtext(0.6,0.6,"VaR(0.99): $%.2f" % (start_price -q,))
plt.figtext(0.15,0.6, "q(0.99): $%.2f" % q)
plt.axvline(x=q, linewidth=4, color='r')
plt.title(u"Final price distribution for BTC after %s days" %days, weight='bold')
# -
# Seems like BTC's overall price is going down. After 10,000 runs the starting price is at $13,840 and it goes down to $13,680.
| Cryptocurrency%20Market%20Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pafnucy
# language: python
# name: pafnucy
# ---
# +
import numpy as np
import pandas as pd
import h5py
import pybel
from tfbio.data import Featurizer
import warnings
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# -
# path to the extracted PDBbind dataset
path = '../PDBbind'
print(path)
# # Parse and clean affinity data
# + magic_args="-s $path --out missing" language="bash"
#
# path=$1
#
# # Save binding affinities to csv file
#
# echo 'pdbid,-logKd/Ki' > affinity_data.csv
# cat $path/plain-text-index/index/INDEX_general_PL_data.2019 | while read l1 l2 l3 l4 l5; do
# if [[ ! $l1 =~ "#" ]]; then
# echo $l1,$l4
# fi
# done >> affinity_data.csv
#
#
# # Find affinities without structural data (i.e. with missing directories)
#
# cut -f 1 -d ',' affinity_data.csv | tail -n +2| while read l;
# do if [ ! -e $path/general-set-except-refined/$l ] && [ ! -e $path/refined-set/$l ]; then
# echo $l;
# fi
# done
# -
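# For reference, the same affinity extraction can be sketched in pure Python/pandas, assuming the index file is whitespace-separated with `#` comment lines and `-logKd/Ki` in the fourth field, as in the bash loop above (the sample text below is illustrative, not real PDBbind content):

```python
import pandas as pd

# illustrative stand-in for INDEX_general_PL_data.2019 (not real PDBbind content)
index_text = """# PDB code, resolution, release year, -logKd/Ki, Kd/Ki
2tpi  2.10  1982  0.30  Kd=500mM
4tmn  1.70  1987  10.19  Ki=0.0643nM
"""

rows = []
for line in index_text.splitlines():
    if not line.strip() or line.startswith('#'):
        continue
    fields = line.split()
    rows.append((fields[0], float(fields[3])))  # pdbid and -logKd/Ki, like $l1 and $l4 above

affinity = pd.DataFrame(rows, columns=['pdbid', '-logKd/Ki'])
```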
missing
missing = set(missing.split())
len(missing)
affinity_data = pd.read_csv('affinity_data.csv', comment='#')
affinity_data = affinity_data[~np.in1d(affinity_data['pdbid'], list(missing))]
affinity_data.shape
# +
# Check for NaNs
affinity_data['-logKd/Ki'].isnull().any()
# +
# Separate core, refined, and general sets
core_set = ! grep -v '#' $path/plain-text-index-2013/pdbbind_v2015_docs/INDEX_core_data.2013 | cut -f 1 -d ' '
core_set = set(core_set)
refined_set = ! grep -v '#' $path/plain-text-index/index/INDEX_refined_data.2019 | cut -f 1 -d ' '
refined_set = set(refined_set)
general_set = set(affinity_data['pdbid'])
# assert core_set & refined_set == core_set
#assert refined_set & general_set == refined_set
len(general_set), len(refined_set), len(core_set)
# +
# Exclude v 2013 core set - it will be used as another test set
core2013 = ! cat core_CASF.ids
core2013 = set(core2013)
affinity_data['include'] = True
#affinity_data.loc[np.in1d(affinity_data['pdbid'], list(core2013 & (general_set - core_set))), 'include'] = False
# +
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(general_set)), 'set'] = 'general'
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(refined_set)), 'set'] = 'refined'
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(core2013)), 'set'] = 'core'
affinity_data.head()
# -
affinity_data[affinity_data['include']].groupby('set').apply(len)
affinity_data.to_csv('affinity_data_cleaned.csv')
# Check if 81 exist in DataFrame
if '3zf' in affinity_data.values:
print('Element exists in Dataframe')
else:
print('not found')
# +
# Check affinity distributions
grid = sns.FacetGrid(affinity_data[affinity_data['include']], row='set', row_order=['general', 'refined', 'core'],
size=3, aspect=3)
grid.map(sns.distplot, '-logKd/Ki');
# -
affinity_data[['pdbid']].to_csv('pdb.ids', header=False, index=False)
affinity_data[['pdbid', '-logKd/Ki', 'set']].to_csv('affinity_data_cleaned.csv', index=False)
# ---
# # Parse molecules
#dataset_path = {'general': 'general-set-except-refined', 'refined': 'refined-set', 'core': 'refined-set'}
dataset_path = {'refined': 'test-set'}
# + magic_args="-s $path" language="bash"
#
# # Prepare pockets with UCSF Chimera - pybel sometimes fails to calculate the charges.
# # Even if Chimera fails to calculate several charges (mostly for non-standard residues),
# # it returns charges for other residues.
#
# path=$1
#
# #for dataset in general-set-except-refined refined-set; do
# for dataset in test-set; do
# echo $dataset
# for pdbfile in $path/$dataset/*/*_pocket.pdb; do
# mol2file=${pdbfile%pdb}mol2
#
# if [[ ! -e $mol2file ]]; then
# echo -e "open $pdbfile \n addh \n addcharge \n write format mol2 0 tmp.mol2 \n stop" | chimera --nogui
# # Do not use TIP3P atom types, pybel cannot read them
# sed 's/H\.t3p/H /' tmp.mol2 | sed 's/O\.t3p/O\.3 /' > $mol2file
# fi
# done
# done > chimera_rw.log
# +
featurizer = Featurizer()
charge_idx = featurizer.FEATURE_NAMES.index('partialcharge')
with h5py.File('%s/core2013.hdf' % path, 'w') as g:
j = 0
for dataset_name, data in affinity_data.groupby('set'):
print(dataset_name, 'set')
i = 0
ds_path = dataset_path[dataset_name]
with h5py.File('%s/%s.hdf' % (path, dataset_name), 'w') as f:
for _, row in data.iterrows():
name = row['pdbid']
affinity = row['-logKd/Ki']
ligand = next(pybel.readfile('mol2', '%s/%s/%s/%s_ligand.mol2' % (path, ds_path, name, name)))
# do not add the hydrogens! they are in the structure and it would reset the charges
try:
pocket = next(pybel.readfile('mol2', '%s/%s/%s/%s_pocket.mol2' % (path, ds_path, name, name)))
# do not add the hydrogens! they were already added in chimera and it would reset the charges
except:
warnings.warn('no pocket for %s (%s set)' % (name, dataset_name))
continue
ligand_coords, ligand_features = featurizer.get_features(ligand, molcode=1)
assert (ligand_features[:, charge_idx] != 0).any()
pocket_coords, pocket_features = featurizer.get_features(pocket, molcode=-1)
assert (pocket_features[:, charge_idx] != 0).any()
centroid = ligand_coords.mean(axis=0)
ligand_coords -= centroid
pocket_coords -= centroid
data = np.concatenate((np.concatenate((ligand_coords, pocket_coords)),
np.concatenate((ligand_features, pocket_features))), axis=1)
if row['include']:
dataset = f.create_dataset(name, data=data, shape=data.shape, dtype='float32', compression='lzf')
dataset.attrs['affinity'] = affinity
i += 1
else:
dataset = g.create_dataset(name, data=data, shape=data.shape, dtype='float32', compression='lzf')
dataset.attrs['affinity'] = affinity
j += 1
print('prepared', i, 'complexes')
print('excluded', j, 'complexes')
# -
with h5py.File('%s/core.hdf' % path, 'r') as f, \
h5py.File('%s/core2013.hdf' % path, 'r+') as g:
for name in f:
if name in core2013:
dataset = g.create_dataset(name, data=f[name])
dataset.attrs['affinity'] = f[name].attrs['affinity']
# # Protein data
# +
protein_data = pd.read_csv('../PDBbind/plain-text-index/index/INDEX_general_PL_name.2019',
comment='#', sep=' ', engine='python', na_values='------',
header=None, names=['pdbid', 'year', 'uniprotid', 'name'])
protein_data.head()
# +
# we assume that PDB IDs are unique
assert ~protein_data['pdbid'].duplicated().any()
protein_data = protein_data[np.in1d(protein_data['pdbid'], affinity_data['pdbid'])]
# check for missing values
protein_data.isnull().any()
# -
protein_data[protein_data['uniprotid'].isnull()]
# +
# fix rows with wrong separators between protein ID and name
for idx, row in protein_data[protein_data['name'].isnull()].iterrows():
uniprotid = row['uniprotid'][:6]
name = row['uniprotid'][7:]
protein_data.loc[idx, ['uniprotid', 'name']] = [uniprotid, name]
protein_data.isnull().any()
# -
protein_data.to_csv('protein_data.csv', index=False)
| database_parsing/pdbbind_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="sYSUpqL-9WKY"
# # Data Scientist Training
#
# Pandas
#
# Prof. <NAME>
# + id="hbrk7fkw9Yzj"
import pandas as pd
# + id="XbpyywLT9Y26"
#load the file into a Pandas dataframe
dados = pd.read_csv("Credit.csv")
#shape
dados.shape
# + id="xIILAe8iAwaG"
#statistical summary of the numeric columns
dados.describe()
# + id="l1jeBCwxA8PW"
#first records
dados.head()
# + id="YwBTbeE_B23c"
#last records, with a parameter
dados.tail(2)
# + id="p97-dKeb9Y5i"
#filter by column name
dados[["duration"]]
# + id="V1X4VldnCXdV"
#filter rows by index
dados.loc[1:3]
# + id="HQwYtnPrDC_Y"
#rows 1 and 3
dados.loc[[1,3]]
# + id="U7PnlC849Y8b"
#filter
dados.loc[dados['purpose'] == "radio/tv"]
# + id="_FxSXUQW4eEq"
#another condition
dados.loc[dados['credit_amount'] > 18000]
# + id="AuPDpjl34pvs"
#assign the result to a variable, creating another df
credito2 = dados.loc[dados['credit_amount'] > 18000]
print(credito2)
# + id="8A6mGAhM9Y_b"
#select only some columns
credito3 = dados[['checking_status','duration']].loc[dados['credit_amount'] > 18000]
print(credito3)
# + id="5j6ddo_29ZE8"
#series: a single column
# can be created from lists, numpy arrays, or a data frame column
s1 = pd.Series([2,5,3,34,54,23,1,16])
print(s1)
# + id="BTa8lcsh59Zy"
#series from a numpy array
import numpy as np
array1 = np.array([2,5,3,34,54,23,1,16])
s2 = pd.Series(array1)
print(s2)
# + id="BZBumFfl9zG1"
#series from a dataframe
s3 = dados['purpose']
print(s3)
type(s3)
# + id="LWTSgc8O6dUU"
#note the difference: here we have a data frame
d4= dados[['purpose']]
type(d4)
# + id="jfOrPZYM9zJz"
#rename columns
dados.rename(columns={"duration":"duração","purpose":"propósito"})
# + id="aWOAK3Vc7Ud0"
#but the change is not persisted
dados.head(1)
# + id="zYosuOaX7C1t"
#to persist it
dados.rename(columns={"duration":"duração","purpose":"propósito"},inplace=True)
# + id="bxx_vnJV77FG"
dados.head(1)
# + id="UU6BISHF9zMl"
#drop a column
dados.drop('checking_status',axis=1,inplace=True)
print(dados)
# + id="4yPH--D08Ors"
dados.head(1)
# + id="u404kOYD9zPU"
#check for null values
dados.isnull()
# + id="gZ3ZXl818bu9"
#count null values per column
dados.isnull().sum()
# + id="cRf4E7VH8nGM"
#drop rows with NaN values
dados.dropna()
# + id="tGnKKpLS9zSj"
#fill missing values
dados['duração'].fillna(0,inplace = True)
# + id="N1QVSODR9ZHz"
#iloc
dados.iloc[0:3,0:5]
# + id="PysAnk2_9Y5H"
dados.iloc[[0,1,2,3,7],0:5]
# -
| 1.Pratica em Python/scripts/4.4.pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gazebo proxy
#
# The Gazebo proxy is an implementation of interfaces for all services provided by `gazebo_ros_pkgs`. It allows easy use of, and interaction with, the simulation from Python.
#
# It can be configured for different `ROS_MASTER_URI` and `GAZEBO_MASTER_URI` environment variables to access instances of Gazebo running in other hosts/ports.
#
# The tutorial below will make use of the simulation manager to start instances of Gazebo.
# Importing the Gazebo proxy
from pcg_gazebo.task_manager import GazeboProxy
# The Gazebo proxy may also work with an instance of Gazebo that has been started external to the scope of this package, for example by running
#
# ```
# roslaunch gazebo_ros empty_world.launch
# ```
#
# The running instance will be found by using the input hostname and the ports on which it is running.
# Here we will use the simulation manager.
# If there is a Gazebo instance running, you can spawn the box into the simulation
from pcg_gazebo.task_manager import Server
# First create a simulation server
server = Server()
# Create a simulation manager named default
server.create_simulation('default')
simulation = server.get_simulation('default')
# Run an instance of the empty.world scenario
# This is equivalent to run
# roslaunch gazebo_ros empty_world.launch
# with all default parameters
if not simulation.create_gazebo_empty_world_task():
raise RuntimeError('Task for gazebo empty world could not be created')
# A task named 'gazebo' is then added to the task list
print(simulation.get_task_list())
# But it is still not running
print('Is Gazebo running: {}'.format(simulation.is_task_running('gazebo')))
# Run Gazebo
simulation.run_all_tasks()
# Adding some models to the simulation to demonstrate the Gazebo proxy methods.
#
# + slideshow={"slide_type": "slide"}
# Now create the Gazebo proxy with the default parameters.
# If these input arguments are not provided, they will be used per default.
gazebo_proxy = simulation.get_gazebo_proxy()
# The timeout argument will be used to raise an exception in case Gazebo
# fails to start
# +
from pcg_gazebo.simulation import create_object
from pcg_gazebo.generators import WorldGenerator
generator = WorldGenerator(gazebo_proxy)
box = create_object('box')
box.add_inertial(mass=20)
print(box.to_sdf('model'))
# +
generator.spawn_model(
model=box,
robot_namespace='box_1',
pos=[-2, -2, 3])
generator.spawn_model(
model=box,
robot_namespace='box_2',
pos=[2, 2, 3])
# -
# ## Pausing/unpausing the simulation
#
from time import time, sleep
pause_timeout = 10 # seconds
start_time = time()
# Pausing simulation
gazebo_proxy.pause()
print('Simulation time before pause={}'.format(gazebo_proxy.sim_time))
while time() - start_time < pause_timeout:
print('Gazebo paused, simulation time={}'.format(gazebo_proxy.sim_time))
sleep(1)
print('Unpausing simulation!')
gazebo_proxy.unpause()
sleep(2)
print('Simulation time after pause={}'.format(gazebo_proxy.sim_time))
# ## Get world properties
#
# The world properties return
#
# * Simulation time (`sim_time`)
# * List of names of models (`model_names`)
# * Is rendering enabled flag (`rendering_enabled`)
#
# The return of this function is simply the service object [`GetWorldProperties`](https://github.com/ros-simulation/gazebo_ros_pkgs/blob/kinetic-devel/gazebo_msgs/srv/GetWorldProperties.srv).
# The world properties return the following
gazebo_proxy.get_world_properties()
# ## Model properties
# Get list of models
gazebo_proxy.get_model_names()
# Get model properties
for model in gazebo_proxy.get_model_names():
print(model)
print(gazebo_proxy.get_model_properties(model))
print('-----------------')
# Get model state
for model in gazebo_proxy.get_model_names():
print(model)
print(gazebo_proxy.get_model_state(model_name=model, reference_frame='world'))
print('-----------------')
# Check if model exists
print('Does ground_plane exist? {}'.format(gazebo_proxy.model_exists('ground_plane')))
print('Does my_model exist? {}'.format(gazebo_proxy.model_exists('my_model')))
# Get list of link names for a model
for model in gazebo_proxy.get_model_names():
print(model)
print(gazebo_proxy.get_link_names(model))
print('-----------------')
# Test if model has a link
print('Does ground_plane have a link named link? {}'.format(gazebo_proxy.has_link(model_name='ground_plane', link_name='link')))
# Get link properties
for model in gazebo_proxy.get_model_names():
print(model)
for link in gazebo_proxy.get_link_names(model_name=model):
print(' - ' + link)
print(gazebo_proxy.get_link_properties(model_name=model, link_name=link))
print('-----------------')
print('==================')
# Get link state
for model in gazebo_proxy.get_model_names():
print(model)
for link in gazebo_proxy.get_link_names(model_name=model):
print(' - ' + link)
print(gazebo_proxy.get_link_state(model_name=model, link_name=link))
print('-----------------')
print('==================')
# ## Get physics properties
#
# The physics properties returns the [GetPhysicsProperties](https://github.com/ros-simulation/gazebo_ros_pkgs/blob/kinetic-devel/gazebo_msgs/srv/GetPhysicsProperties.srv) response with the current parameters for the physics engine. Currently only the parameters for the ODE engine can be retrieved.
print(gazebo_proxy.get_physics_properties())
# ## Apply wrench
# +
# Applying wrench to a link in the simulation
# The input arguments are
# - model_name
# - link_name
# - force: force vector [x, y, z]
# - torque: torque vector [x, y, z]
# - start_time: in seconds, if it is a value lower than simulation time, the wrench will be applied as soon as possible
# - duration: in seconds
# if duration < 0, apply wrench continuously without end
# if duration = 0, do nothing
# if duration < step size, apply wrench for one step size
# - reference_point: [x, y, z] coordinate point where wrench will be applied wrt the reference frame
# - reference_frame: reference frame for the reference point, if None it will be set as the provided model_name::link_name
gazebo_proxy.apply_body_wrench(
model_name='box_1',
link_name='box',
force=[100, 0, 0],
torque=[0, 0, 100],
start_time=0,
duration=5,
reference_point=[0, 0, 0],
reference_frame=None)
gazebo_proxy.apply_body_wrench(
model_name='box_2',
link_name='box',
force=[10, 0, 200],
torque=[0, 0, 150],
start_time=0,
duration=4,
reference_point=[0, 0, 0],
reference_frame=None)
start_time = time()
while time() - start_time < 10:
sleep(1)
# -
# ## Move models in the simulation
# +
gazebo_proxy.move_model(
model_name='box_1',
pos=[2, 2, 15],
rot=[0, 0, 0],
reference_frame='world')
gazebo_proxy.move_model(
model_name='box_2',
pos=[-2, -1, 4],
rot=[0, 0, 0],
reference_frame='world')
# -
# End the simulation by killing the Gazebo task
simulation.kill_all_tasks()
| pcg_notebooks/task_manager/gazebo_proxy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Learning, part3: Other important examples
#
# - Generative models: autoencoders and GANS
# - Working with tabular data, data integration
# - Recurrent NN and attention mechanisms
# - Reinforcement learning
# + active=""
# conda create -n biopy37 python=3.7
# conda install jupyterlab matplotlib tensorflow
# conda install pandas scikit-learn
# -
# ### Generative models: autoencoders, VAEs and GANS
#
# - **Generative models.** These are NNs models used for dimensionality reduction or dataset transformations.
# - A popular use for NNs is to take their fitted weights and use them on other datasets. This is called **transfer learning**.
# - NNs need to verify information against a set of prior information in order to learn. In that sense, all NNs are supervised learning methods.
# - It is possible however to perform unsupervised learning with NNs, and the most popular method is auto-encoders. More precisely though, they are **self-supervised** because they generate their own labels from the training data.
#
#
# **Autoencoders**
#
# - A dimensionality reduction (or compression) NN algorithm in which the input of a model is the same as the output.
# - They compress the input into a lower-dimensional code and then reconstruct the output from this representation.
# - 3 components: encoder, code and decoder. The encoder compresses the input and produces the code, the decoder then reconstructs the input only using this code.
#
from IPython.display import Image
Image(url= "../img/AE.png", width=400, height=400)
# Autoencoders properties and usage:
# - Data-specific: Only able to meaningfully compress data similar to what they have been trained on. Autoencoders trained on handwritten digits won't compress landscape photos.
# - Lossy: The output of the autoencoder will not be exactly the same as the input, it will be a close but degraded representation.
# - Data denoising: By learning the relevant features they are able to denoise/normalize a dataset.
# - Clustering: Clustering algorithms struggle with large dimensional data, so AE are an important preprocessing step.
# - Generative models: Variational Autoencoders (VAE) learn the parameters of the probability distribution modeling the input data. By sampling points from this distribution we can also use the VAE as a generative model.
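# The sampling step of a VAE can be sketched with plain NumPy via the reparameterization trick: draw z = mu + sigma * eps with eps ~ N(0, I), so the sampling stays differentiable with respect to mu and log-variance (the shapes and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps, with sigma = exp(log_var / 2) and eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_var / 2) * eps

# pretend the encoder produced these for a batch of 4 samples, latent dim 16
mu = np.zeros((4, 16))
log_var = np.zeros((4, 16))  # i.e. sigma = 1
z = reparameterize(mu, log_var, rng)

# the VAE training loss adds a KL term pulling q(z|x) towards N(0, I);
# for mu = 0 and log_var = 0 it is exactly zero
kl = -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
```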
#
#
# **Tabular data**
# So far we have only used NNs on image and text (by converting them to numbers). Let's see an example of working directly with tabular data, which is more commonly used in 'omics research.
# +
import pathlib
from pathlib import Path
import pandas as pd
data_loc = r'D:\windata\work\biopycourse\data\cll_data'
df = pd.read_csv(pathlib.Path(data_loc) / "cll_mrna.txt", index_col=0, sep ="\t")
df = df.dropna(axis='columns')
print(df.shape)
df.head()
# -
X_train = df.T
# **Goal**: reduce the dimensionality of this dataset from 5000 to 16, in order to efficiently cluster these samples.
#
# New learnings:
# - parametrization
# - batch normalization layers
# - naming layers
# +
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import BatchNormalization, Concatenate, Dense, Input, Lambda,Dropout
# Hyperparameters
input_size = X_train.shape[1]
# elu, https://keras.io/activations/, maybe deals better with vanishing gradient
#act = "elu"
act = "relu"
# the intermediate dense layers size
ds = 128
# latent space dimension size
ls = 16
# dropout rate [0 1]
dropout = 0.2
# ensure reproducibility
np.random.seed(42)
tf.random.set_seed(42)
# +
# Define the encoder
inputs_layer = Input(shape=(input_size,), name='input')
x = Dense(ds, activation=act)(inputs_layer)
x = BatchNormalization()(x)
coded_layer = Dense(ls, name='coded_layer')(x)
encoder = Model(inputs_layer, coded_layer, name='encoder')
encoder.summary()
# +
# Define the decoder
decoder_inputs_layer = Input(shape=(ls,), name='latent_inputs')
x = decoder_inputs_layer
x = Dense(ds, activation=act)(x)
x = BatchNormalization()(x)
x = Dropout(dropout)(x)
output_layer = Dense(input_size)(x)
decoder = Model(decoder_inputs_layer, output_layer, name='decoder')
decoder.summary()
# -
# Define the autoencoder
outputs = decoder(encoder(inputs_layer))
autoencoder = Model(inputs_layer, outputs, name='autoencoder')
autoencoder.summary()
# +
# compile and run
from tensorflow.keras.optimizers import Adam
# lr, decay and epsilon=None are deprecated or invalid in tf.keras; use learning_rate instead
adam = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
autoencoder.compile(loss='mse', optimizer=adam)  # accuracy is not meaningful for a reconstruction loss
#history = autoencoder.fit(X_train, X_train, epochs=5, batch_size=32, shuffle=True, validation_data=(X_test, X_test))
history = autoencoder.fit(X_train, X_train, epochs=200, batch_size=64, shuffle=True)
autoencoder.save('cnn.h5')
# -
encoded_X_train = encoder.predict(X_train)
encoded_X_train.shape
# Task:
# - Run KMeans both before and after dimensionality reduction, and plot their silhouette scores. Is there an improvement?
# - (advanced) Expand the AE above into a VAE, and repeat the clustering assessment.
# - By tuning the above hyperparameters, try to improve the model fit and re-assess clustering performance.
# - (really advanced) Search for a VAE-GAN implementation and re-run.
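# The first task above can be sketched as follows. This is a minimal, hypothetical example: synthetic blobs stand in for `X_train`, and a column slice stands in for `encoded_X_train` (the real comparison would use the encoder output from the cells above).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the expression matrix: 4 clusters in 50 dimensions
raw, _ = make_blobs(n_samples=200, n_features=50, centers=4, random_state=42)
reduced = raw[:, :16]  # stand-in for encoder.predict(X_train)

for name, X in [('raw', raw), ('reduced', reduced)]:
    labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)
    print(name, silhouette_score(X, labels))
```

# A silhouette score closer to 1 indicates better-separated clusters; comparing the two printed values answers the "is there an improvement?" question.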
# ## What are GANs?
#
# - “the most interesting idea in the last 10 years in Machine Learning” (<NAME>)
# - Generator model: the goal of the generator is to fool the discriminator, so the generative neural network is trained to maximise the final classification error (between true and generated data)
# - Discriminator model: the goal of the discriminator is to detect fake generated data, so the discriminative neural network is trained to minimise the final classification error
#
# Example for MNIST:
# - https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/
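# The adversarial game can be illustrated without any deep-learning framework. The sketch below is a toy illustration (not the MNIST example from the link): a two-parameter generator against a logistic discriminator on 1-D data, with gradients derived by hand and the common non-saturating generator objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator g(z) = a*z + b tries to mimic real data drawn from N(3, 1);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, n = 0.05, 128

for step in range(2000):
    real = rng.normal(3.0, 1.0, n)
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b

    # Discriminator step: minimise -log D(real) - log(1 - D(fake))
    dr = sigmoid(w * real + c)
    df = sigmoid(w * fake + c)
    gw = np.mean((dr - 1) * real) + np.mean(df * fake)
    gc = np.mean(dr - 1) + np.mean(df)
    w, c = w - lr * gw, c - lr * gc

    # Generator step (non-saturating): minimise -log D(fake)
    df = sigmoid(w * fake + c)
    ga = np.mean((df - 1) * w * z)
    gb = np.mean((df - 1) * w)
    a, b = a - lr * ga, b - lr * gb

print("generated mean ~ %.2f, generated std ~ %.2f" % (b, abs(a)))
```

# After training, the generated mean should drift towards 3 — the two players pushing the classification error in opposite directions is exactly the dynamic described in the bullet points above.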
# ### Recurrent Neural Networks
#
# These networks loop information through their nodes across time steps. They are mainly used to classify sequential input and, like feed-forward networks, rely on backpropagation of error — here in a variant called backpropagation through time. When the information passes through only once, the network is called feed-forward. Recurrent networks, on the other hand, take as their input not just the current example they see, but also what they have perceived previously in time. Thus an RNN uses the concepts of time and memory.
#
# One could, for example, define the hidden-state update in this manner:
# `output_t = relu(dot(W, input_t) + dot(U, output_t-1))`
#
# A traditional deep neural network uses different parameters at each layer, while an RNN shares the same parameters across all time steps. The output of each time step does not necessarily need to be kept: while doing sentiment analysis, for example, we do not care about the output after every word, only the final one.
#
# Features:
# - they can be bi-directional
# - they can be deep (multiple layers per time step)
# - RNNs can be combined with CNNs to solve complex problems, from speech or image recognition to machine translation.
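# The recurrence above can be made concrete in a few lines of NumPy. This is a toy forward pass only (random weights, arbitrary sizes) — Keras handles all of this inside its recurrent layers:

```python
import numpy as np

rng = np.random.default_rng(0)
T, input_dim, hidden_dim = 6, 4, 3   # sequence length and layer sizes (arbitrary)

W = 0.1 * rng.standard_normal((hidden_dim, input_dim))   # input-to-hidden weights
U = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
x = rng.standard_normal((T, input_dim))                  # one input sequence

h = np.zeros(hidden_dim)  # initial hidden state
states = []
for t in range(T):
    # the recurrence from the text: the state at t depends on input_t AND state_{t-1}
    h = np.maximum(0.0, W @ x[t] + U @ h)  # relu(dot(W, input_t) + dot(U, output_t-1))
    states.append(h)

states = np.array(states)
print(states.shape)
```

# Note that the same `W` and `U` are reused at every time step — this is the parameter sharing mentioned above.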
# +
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb
max_features = 20000
maxlen = 80 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
# -
# !conda install "numpy>=1.19.1"
import numpy
print(numpy.__version__)
| day3/.ipynb_checkpoints/DL3_other-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import cv2
from matplotlib import pyplot as plt
# %pylab inline
# draw only keypoints location,not size and orientation
# Initiate ORB detector
orb = cv2.ORB_create()
# -
img = cv2.imread('images/simple.jpg', cv2.IMREAD_GRAYSCALE) # queryImage (COLOR_BGR2GRAY is a cvtColor code, not an imread flag)
plt.imshow(img),plt.show()
# +
# draw only keypoints location,not size and orientation
# Initiate ORB detector
#orb = cv2.ORB_create()
# find the keypoints and descriptors with SIFT
kp, des = orb.detectAndCompute(img,None)
print (len(img))
print(len(kp))
print(len(des[0]))
print(des[0])
imgOut = cv2.drawKeypoints(img,kp,None, color=(255,0,0), flags=0)
plt.imshow(imgOut),plt.show()
# -
print (des)
# +
img1 = cv2.imread('images/NotreDame0.jpg',0) # queryImage
img2 = cv2.imread('images/NotreDame1.jpg',0) # trainImage
# Initiate ORB detector
orb = cv2.ORB_create()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
print (len(kp1))
print (len(kp2))
print (kp1[0].pt)
print (des1[0:2])
print (des2[0:2])
# +
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# -
def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0
    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structure (matches)
    that contains which keypoints matched in which images.
    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.
    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.
    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
    detection algorithms
    matches - A list of matches of corresponding keypoints through any
    OpenCV keypoint matching algorithm
    """
    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]
    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')
    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])
    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])
    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:
        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx
        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt
        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)
        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)
    return out
# +
# Draw first 10 matches.
print (len(matches))
img3 = drawMatches(img1,kp1,img2,kp2,matches[:10])
plt.imshow(img3),plt.show()
# +
# Draw first 10 matches.
print (len(matches))
img3 = drawMatches(img1,kp1,img2,kp2,matches[:50])
plt.imshow(img3),plt.show()
# +
img1 = cv2.imread('images/cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('images/cameraman_rot55.png', 0) # Rotated image - ensure grayscale
# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB_create(1000, 1.2)
# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)
# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)
# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Do matching
matches = bf.match(des1,des2)
# Sort the matches based on distance. Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)
print (len(kp1))
print (len(kp2))
print (len(matches))
# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])
plt.imshow(out),plt.show()
# -
| teams/team_lynx/OpenCV/Face_Detection/GO/.ipynb_checkpoints/CV_FeatureDetection-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--
# # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# #
# # Licensed under the Apache License, Version 2.0 (the "License").
# # You may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# #
# # http://www.apache.org/licenses/LICENSE-2.0
# #
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.
# -->
#
# # Data Discovery using Athena
#
# * Functions: https://docs.aws.amazon.com/redshift/latest/dg/c_SQL_functions.html
# * UDF: https://docs.aws.amazon.com/redshift/latest/dg/user-defined-functions.html
# * Store Procedure: https://docs.aws.amazon.com/redshift/latest/dg/stored-procedure-overview.html
#
# Using CMS Data at: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Inpatient2016.html
# ## Contents
# 1. [Reference Links](#Reference-Links)
# 2. [Setup](#Setup)
# 1. [Import Libraries](#Import-Libraries)
# 2. [Initialize Functions](#Initialize-Functions)
# 3. [Define Athena Parameters](#Define-Athena-Parameters)
# 4. [Establish Athena Connection](#Establish-Athena-Connection)
# 5. [Use SQL Query to Grab Sample Database Data](#Use-SQL-Query-to-Grab-Sample-Database-Data)
# 3. [Data Analysis](#Data-Analysis)
# 1. [Select all Elements from the Database Sample File](#Select-all-Elements-from-the-Database-Sample-File)
# 2. [Provide an Input Dataset](#Provide-an-Input-Dataset)
# 3. [Error with missing column](#Error-with-missing-column)
# 4. [Visualize Data](#Visualize-Data)
# 5. [Populate Data](#Populate-Data)
# 4. [Create New Table with Analysis](#Create-New-Table-with-Analysis)
# 1. [Run Analysis](#Run-Analysis)
# 2. [Display Analysis](#Display-Analysis)
# 3. [Test Code](#Test-Code)
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# %reload_ext sql
# -
# ## Setup
# #### Import Athena Libraries
from aws_orbit_sdk.database import get_athena
from aws_orbit_sdk.common import get_workspace,get_scratch_database
import aws_orbit_sdk.glue_catalog as datamaker_catalog_api
import matplotlib.pyplot as plt
# #### Initialize athena,workspace and scratch database functions
athena = get_athena()
# %config SqlMagic.autocommit=False # for engines that do not support autocommit
workspace = get_workspace()
scratch_glue_db = get_scratch_database()
team_space = workspace['team_space']
env_name = workspace['env_name']
#DO NOT RUN THIS NOTEBOOK IN LAKE CREATOR TEAM SPACE
#assert team_space == 'lake-user'
workspace
# #### Define Athena parameters
# + tags=["parameters"]
glue_db = f"cms_raw_db_{env_name}".replace('-', '_')
target_db = "users"
# -
# #### Establish Athena Connection
# %connect_to_athena -database $glue_db
# #### Use SQL Query to Grab Sample Database Data
# + [markdown] language="sql"
# #
# # SELECT 1 as "Test"
# -
# %catalog -database $glue_db
# ## Now let's start the Data Analysis
# +
# Now we can show how you can bind a variable to use within the SQL
ben_id = "F72554149E321FF9"
# %sql select * from {glue_db}.beneficiary_summary where desynpuf_id = :ben_id
# -
# ***Maybe we want to write multi-line SQL directly and capture the output into a variable:***
# #### Run the Dataset SQL Query to Select all Elements from the Database Sample File
# + magic_args="dataset << " language="sql"
#
# SELECT *
# FROM {glue_db}.beneficiary_summary
# limit 1
# -
# #### Provide an Input Dataset
dataset
# #### Showing what the error looks like with the missing column below
# + magic_args="population_by_age_rs <<" language="sql"
#
#
# select age, count(desynpuf_id) as pop_size
# from
# (select least(year(current_date),year(bene_death_dt)) - year(bene_birth_dt) as age
# from {glue_db}.beneficiary_summary)
# group by age
# order by age
# -
# #### With a bit of Python, we can also visualize data
# + magic_args="population_by_age_rs <<" language="sql"
# select age,count(desynpuf_id) as pop_size
# from (
# select desynpuf_id, least(year(current_date),year(bene_death_dt)) - year(bene_birth_dt) as age
# from {glue_db}.beneficiary_summary
#
# ) A
# group by age
# order by age
#
# -
# #### Populate the Data into a Chart with Age and Population Size Columns
# +
# Lets see what we got into our variable
population_by_age = population_by_age_rs.DataFrame()
population_by_age.head()
# -
# #### Visualize the Dataset Using a Scatter Plot
# +
# Play with visualization:
ax1 = population_by_age.plot.scatter(x='age',
y='pop_size',
c='DarkBlue')
# -
# ## Let's create a new table with our analysis
# + pycharm={"name": "#%%\n"}
env_name_replaced = env_name.replace('-', '_')
population_by_age_tbl_name = f"users.{env_name_replaced}_population_by_age"
drop_users_population_by_age = f"DROP TABLE IF EXISTS {population_by_age_tbl_name} "
drop_users_population_by_age
# -
# %sql $drop_users_population_by_age
# #### The Following SQL Query Creates a New Table
# + pycharm={"name": "#%%\n"}
ctas_population_by_age = f"""
CREATE TABLE users.{env_name_replaced}_population_by_age
WITH (format = 'PARQUET')
AS
select age, count(desynpuf_id) as pop_size
from (
select desynpuf_id, least(year(current_date), year(bene_death_dt))-year(bene_birth_dt) as age
from {glue_db}.beneficiary_summary
) A
group by age
order by age
"""
print(ctas_population_by_age)
# -
# %sql $ctas_population_by_age
# #### Run an Analysis SQL Query on the New Table
# + magic_args="analysis << " language="sql"
#
# select * from {population_by_age_tbl_name}
# -
# #### Display the Analysis as Input on a Grid
analysis.DataFrame()
# #### Let's test our code
assert population_by_age.at[0,'age'] > 20.
| samples/notebooks/B-DataAnalyst/Example-1-SQL-Analysis-Athena.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create and train MitoSplit-Net model
# + [markdown] id="0mIcpL6pLPvL"
# ## Import required Python libraries
#
#
# + id="jaV4mHBXVj2x"
import util
import plotting
import training
import numpy as np
import matplotlib.pyplot as plt
plt.rc('xtick', labelsize=18)
plt.rc('ytick', labelsize=18)
plt.rc('axes', labelsize=20)
plt.rc('legend', fontsize=18)
from tqdm import tqdm
import tensorflow as tf
# -
#Define GPU device where the code will run on
gpu = tf.config.list_physical_devices('GPU')[0]
print(gpu)
tf.config.experimental.set_memory_growth(gpu, True)
gpu = tf.device('/GPU:0')
# ## Data and models directories
# +
#base_dir = '//lebsrv2.epfl.ch/LEB_SHARED/SHARED/_Scientific projects/MitoSplit-Net/'
base_dir = 'H:/Santi/'
print('base_dir:', base_dir)
data_path = base_dir+'Data/'
print('data_path:', data_path)
model_path = base_dir+'Models/'
print('model_path:', model_path)
# + [markdown] id="Rkaaw6D6LjLU"
# ## Create model, split dataset and train
# -
# #### No preprocessing, different batch sizes
# + colab={"base_uri": "https://localhost:8080/", "height": 740} id="lxMLelekV8xh" outputId="d24c25ea-08f4-4a95-b467-f3be7088cb50"
#Inputs
input_data = util.load_h5(data_path, 'Mito')
print('Inputs'+':', input_data.shape)
#Outputs
output_data = util.load_h5(data_path, 'Proc')
print('Outputs:', output_data.shape)
# -
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = [8, 16, 32, 256]
    model, history, frames_test = {}, {}, {}
    for b in batch_size:
        model_name = 'ref_f%i_c%i_b%i'%(nb_filters, firstConvSize, b)
        print('Model:', model_name)
        model[model_name] = training.create_model(nb_filters, firstConvSize)
        history[model_name], frames_test[model_name] = training.train_model(model[model_name], input_data, output_data, batch_size=b)
# +
folder_name = list(model.keys())
util.save_model(model, model_path, ['model']*len(model), folder_name)
util.save_pkl(history, model_path, ['history']*len(model), folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*len(model), folder_name)
# -
# #### MitoProc, different batch sizes
# +
#Inputs
input_data = util.load_h5(data_path, 'MitoProc')
print('Inputs'+':', input_data.shape)
#Outputs
output_data = util.load_h5(data_path, 'Proc')
print('Outputs:', output_data.shape)
# -
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = [8, 16, 32, 256]
    model, history, frames_test = {}, {}, {}
    for b in batch_size:
        model_name = 'mp_f%i_c%i_b%i'%(nb_filters, firstConvSize, b)
        print('Model:', model_name)
        model[model_name] = training.create_model(nb_filters, firstConvSize)
        history[model_name], frames_test[model_name] = training.train_model(model[model_name], input_data, output_data, batch_size=b)
# + tags=[]
folder_name = list(model.keys())
util.save_model(model, model_path, ['model']*len(model), folder_name)
util.save_pkl(history, model_path, ['history']*len(model), folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*len(model), folder_name)
# -
# #### Mito & WatProc
#Inputs
input_data = util.load_h5(data_path, 'Mito')
print('Inputs'+':', input_data.shape)
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = 16
    optimal_sigma = util.load_pkl(data_path, 'optimal_sigma')
    threshold = util.load_pkl(data_path, 'mean_intensity_threshold')
    model, history, frames_test = {}, {}, {}
    for s, t in zip(optimal_sigma, threshold):
        model_name = 'mwp_f%i_c%i_b%i_s%.1f_t%.1f'%(nb_filters, firstConvSize, batch_size, s, t)
        print('Model:', model_name)
        #Outputs
        output_data = util.load_h5(data_path, 'mWatProc_s%.1f_t%.1f'%(s, t))
        print('Outputs:', output_data.shape)
        model[model_name] = training.create_model(nb_filters, firstConvSize)
        history[model_name], frames_test[model_name] = training.train_model(model[model_name], input_data, output_data, batch_size=batch_size)
# +
folder_name = list(model.keys())
util.save_model(model, model_path, ['model']*len(model), folder_name)
util.save_pkl(history, model_path, ['history']*len(model), folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*len(model), folder_name)
# -
del input_data, output_data
# #### MitoProc & WatProc
#Inputs
input_data = util.load_h5(data_path, 'MitoProc')
print('Inputs'+':', input_data.shape)
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = 16
    optimal_sigma = util.load_pkl(data_path, 'optimal_sigma')
    threshold = util.load_pkl(data_path, 'mean_intensity_threshold')
    model, history, frames_test = {}, {}, {}
    for s, t in zip(optimal_sigma, threshold):
        model_name = 'wp_mp_f%i_c%i_b%i_s%.1f_t%.1f'%(nb_filters, firstConvSize, batch_size, s, t)
        print('Model:', model_name)
        #Outputs
        output_data = util.load_h5(data_path, 'WatProc_s%.1f_t%.1f'%(s, t))
        print('Outputs:', output_data.shape)
        model[model_name] = training.create_model(nb_filters, firstConvSize)
        history[model_name], frames_test[model_name] = training.train_model(model[model_name], input_data, output_data, batch_size=batch_size)
# +
folder_name = list(model.keys())
util.save_model(model, model_path, ['model']*len(model), folder_name)
util.save_pkl(history, model_path, ['history']*len(model), folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*len(model), folder_name)
# + [markdown] id="Rkaaw6D6LjLU"
# ## Augmented model
# -
# #### Mito & WatProc with ElasticTransform
# + tags=[]
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = 16
    optimal_sigma = util.load_pkl(data_path, 'optimal_sigma')
    threshold = util.load_pkl(data_path, 'mean_intensity_threshold')
    folder_name = util.get_filename(model_path, 'wp')
    folder_name = [model_name for model_name in folder_name if 'mp' not in model_name]
    nb_models = len(folder_name)
    model = util.load_model(model_path, ['model']*nb_models, folder_name, as_type=dict)
    history = {}
    for model_name, s, t in zip(model, optimal_sigma, threshold):
        print('\nModel:', model_name)
        #Inputs
        aug_input_data = util.load_h5(data_path, 'aug_Mito_s%.1f_t%.1f'%(s, t))
        print('Inputs'+':', aug_input_data.shape)
        #Outputs
        aug_output_data = util.load_h5(data_path, 'aug_WatProc_s%.1f_t%.1f'%(s, t))
        print('Outputs:', aug_output_data.shape)
        history['aug_'+model_name] = training.train_model(model[model_name], aug_input_data, aug_output_data, batch_size=batch_size)[0]
    frames_test = util.load_pkl(model_path, ['frames_test']*nb_models, folder_name)
# +
model_path = base_dir + 'Models/'
folder_name = list(history.keys())
util.save_model(model, model_path, ['model']*len(model), folder_name)
util.save_pkl(history, model_path, ['history']*len(model), folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*len(model), folder_name)
# -
del aug_input_data, aug_output_data
# #### Mito & WatProc without ElasticTransform
# + tags=[]
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = 16
    optimal_sigma = util.load_pkl(data_path, 'optimal_sigma')
    threshold = util.load_pkl(data_path, 'mean_intensity_threshold')
    folder_name = util.get_filename(model_path, 'wp')
    folder_name = np.array([model_name for model_name in folder_name if 'mp' not in model_name and 'aug' not in model_name])
    nb_models = len(folder_name)
    model = util.load_model(model_path, ['model']*nb_models, folder_name, as_type=dict)
    history = {}
    for model_name, s, t in zip(model, optimal_sigma, threshold):
        print('\nModel:', model_name)
        #Inputs
        aug_input_data = util.load_h5(data_path, 'aug2_Mito_s%.1f_t%.1f'%(s, t))
        print('Inputs'+':', aug_input_data.shape)
        #Outputs
        aug_output_data = util.load_h5(data_path, 'aug2_WatProc_s%.1f_t%.1f'%(s, t))
        print('Outputs:', aug_output_data.shape)
        history['aug2_'+model_name] = training.train_model(model[model_name], aug_input_data, aug_output_data, batch_size=batch_size)[0]
        del aug_input_data, aug_output_data
    frames_test = util.load_pkl(model_path, ['frames_test']*nb_models, folder_name)
# +
folder_name = list(history.keys())
util.save_model(model, model_path, ['model']*nb_models, folder_name)
util.save_pkl(history, model_path, ['history']*nb_models, folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*nb_models, folder_name)
# + [markdown] id="Rkaaw6D6LjLU"
# ## Temporal median filter
# -
# #### Mito & WatProc
# +
temp_filter = util.load_pkl(data_path, 'temporal_filter')
#Inputs
input_data = util.load_h5(data_path, 'Mito')
print('Inputs'+':', input_data.shape)
# -
with gpu:
    nb_filters = 8
    firstConvSize = 9
    batch_size = 16
    optimal_sigma = util.load_pkl(data_path, 'optimal_sigma')
    threshold = util.load_pkl(data_path, 'mean_intensity_threshold')
    model, history, frames_test = {}, {}, {}
    for s, t in zip(optimal_sigma, threshold):
        model_name = 'temp_wp_f%i_c%i_b%i_s%.1f_t%.1f'%(nb_filters, firstConvSize, batch_size, s, t)
        print('Model:', model_name)
        #Outputs
        output_data = util.load_h5(data_path, 'WatProc_s%.1f_t%.1f'%(s, t))*temp_filter[:, None, None]
        print('Outputs:', output_data.shape)
        model[model_name] = training.create_model(nb_filters, firstConvSize)
        history[model_name], frames_test[model_name] = training.train_model(model[model_name], input_data, output_data, batch_size=batch_size)
        del output_data
# +
folder_name = list(model.keys())
util.save_model(model, model_path, ['model']*len(model), folder_name)
util.save_pkl(history, model_path, ['history']*len(model), folder_name)
util.save_pkl(frames_test, model_path, ['frames_test']*len(model), folder_name)
# -
del input_data
| mitosplit-net/.ipynb_checkpoints/training-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## __Goal 1 : Making a NeuralCoref function__
# +
import spacy
import neuralcoref
def NeuralCoref(text, visualize=False, debug=False):
    nlp = spacy.load('en')
    neuralcoref.add_to_pipe(nlp)
    doc = nlp(text)
    if visualize: print("=======================> INPUT <=======================\n\n%s" % text)
    for i in range(len(doc._.coref_clusters)):
        a = doc._.coref_clusters[i].mentions[-1]
        b = doc._.coref_clusters[i].mentions[-1]._.coref_cluster.main
        text = text.replace(str(a), str(b))
        if debug: print("|- ", text)
    if visualize: print("\n\n=======================> OUTPUT <=======================\n\n%s" % text)
    return text
# -
# #### __String Options__
# +
str_1 = 'My sister has a dog. She loves him.'
str_2 = 'John have dinner today and he enjoyed it.'
str_3 = 'Angela lives in Boston. She is quite happy in that city.'
aFile = open('test/0.txt', 'r')
str_4 = aFile.read()
aFile.close()
# -
# Replace NeuralCoref parameter with desired string
a = NeuralCoref(str_4, visualize=True)
#
# -----------------------------------
#
# ## __Goal 2 : Use OpenIE + NeuralCoref__
#
#
# ### 2.1 : Create OpenIE Function
# +
from openie import StanfordOpenIE
def OpenIE(text, visualize=False, debug=False):
    with StanfordOpenIE() as client:
        i = 1
        if visualize: print("=======================> INPUT <=======================\n\n%s" % text)
        if visualize: print("\n\n=======================> OUTPUT <=======================")
        for triple in client.annotate(text):
            if i: i = 0; print()
            print("==> ", triple)
# -
# Replace OpenIE parameter with desired string
b = OpenIE(str_4, visualize=True)
# ### 2.2 : Using NeuralCoref & OpenIE together
# +
# Replace NeuralCoref parameter with desired string
# Step 1.) Run co-reference resolution
# on string using NeuralCoref.
nc = NeuralCoref(str_4)
# Step 2.) Run the output through OpenIE
c = OpenIE(nc, visualize=True)
# -
# ### __2.3 : Example from Stanford's CoreNLP Website__
#
# Github.io : [StanfordNLP : Overview of CoreNLP](https://stanfordnlp.github.io/CoreNLP/)
# + language="bash"
# echo -e "=======================> INPUT <=======================\n"; cat ex.txt
#
# # java -cp "stanford-corenlp-4.2.0/*" -Xmx5g edu.stanford.nlp.pipeline.StanfordCoreNLP -file ex.txt;
#
# echo -e "\n\n=======================> OUTPUT <======================="; cat ex.txt.out
| Co-reference_Resolution/Week_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hkrsmk/beepboop-shopee-code-league-2020/blob/hkrsmk/pdt_detection_github.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="pK9_6sEFRFj1" colab_type="text"
# # import from google cloud storage
# + id="Y0nHI7AHGtrU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fba8cfd9-c42a-4bf6-90fc-14b441e9bdd2"
import tensorflow as tf
import pandas as pd
import numpy as np
import os
# ! python --version
# + [markdown] id="DTbP5kLnCZRY" colab_type="text"
# # Drive mounting
# + id="ybii-ANLYcxl" colab_type="code" colab={}
# allow colab to use gcloud storage
from google.colab import auth
auth.authenticate_user()
# !mkdir gcs
# + id="8cTo-UmHHB9o" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 336} outputId="2e0b0c3f-7134-43c8-874d-f2d21b723a18"
# install gcs FUSE to mount files
# !echo "deb http://packages.cloud.google.com/apt gcsfuse-bionic main" > /etc/apt/sources.list.d/gcsfuse.list
# !curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# !apt -qq update
# !apt -qq install gcsfuse
# + id="ZjHcsKfIPPeV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="072a4f02-be2d-42c8-c4e9-e2c598e004d5"
# !gcsfuse --implicit-dirs [redacted] gcs
# + id="9Zny0MpDYJQ3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="efaa72fa-b11b-4775-99ca-3622baed7c13"
# %cd gcs/cloud_mirror
# + id="Xrbje4x7RD8p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1ea335ec-ca31-4ff6-9d2a-a552321c89b1"
# test if it works
test = pd.read_csv("test.csv")
print(test)
from IPython.display import Image
Image(filename='train/train/train/00/00b32bd5ba9cdd7c2f11e3975b3e54fa.jpg')
# + [markdown] id="v2kiXxwNT6TN" colab_type="text"
# # Training
#
# https://www.tensorflow.org/tutorials/load_data/images
#
# https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/classification.ipynb#scrollTo=DLdCchMdCaWQ
# + id="tofGDIEZXWVc" colab_type="code" colab={}
IMG_HEIGHT = 871
IMG_WIDTH = 871
CLASS_NAMES = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12', '13', '14',
'15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29',
'30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41']
LABEL_COLUMN = 'category'
AUTOTUNE = tf.data.experimental.AUTOTUNE
BATCH_SIZE = 32
image_count = 2683 # for 00 only
STEPS_PER_EPOCH = np.ceil(image_count/BATCH_SIZE)
# + id="kxtkgq57T8NX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="26cd8abe-9436-4d0a-f6d9-87f8d2e3d0df"
list_ds = tf.data.Dataset.list_files(str ('train/train/train/00/*.jpg'))
for f in list_ds.take(5):
    print(f.numpy())
# + id="yBdJ5xCn-q-A" colab_type="code" colab={}
def get_label(file_path):
    # convert the path to a list of path components
    parts = tf.strings.split(file_path, os.path.sep)
    # The second to last is the class-directory
    return parts[-2] == CLASS_NAMES
# + id="8E1E63Jy-tEC" colab_type="code" colab={}
def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor
    img = tf.image.decode_jpeg(img, channels=3)
    # Use `convert_image_dtype` to convert to floats in the [0,1] range.
    img = tf.image.convert_image_dtype(img, tf.float32)
    # resize the image to the desired size.
    return tf.image.resize(img, [IMG_HEIGHT, IMG_WIDTH])
# + id="e76Hi5Eh-9Al" colab_type="code" colab={}
def process_path(file_path):
    label = get_label(file_path)
    # load the raw data from the file as a string
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    return img, label
# + id="XbKscxSW_A0c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="947a8376-94a2-4293-af03-697836a58fa3"
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in labeled_ds.take(1):
    print("Image shape: ", image.numpy().shape)
    print("Label: ", label.numpy())
# + id="E1kSA7XqINB2" colab_type="code" colab={}
# + [markdown] id="2hnj79SaLHEl" colab_type="text"
# # Attempt to store as TFRecord?
#
# https://www.tensorflow.org/tutorials/load_data/tfrecord#walkthrough_reading_and_writing_image_data
# + id="o2UqqVs3IBKi" colab_type="code" colab={}
def tf_serialize_example(f0, f1):
    # `serialize_example` must be defined first (see the TFRecord
    # walkthrough linked above); it returns a serialized tf.train.Example.
    tf_string = tf.py_function(
        serialize_example,
        (f0, f1),  # pass these args to the above function.
        tf.string)  # the return type is `tf.string`.
    return tf.reshape(tf_string, ())  # The result is a scalar
tf_serialize_example(f0, f1)
# + id="dKgsAFOxFDF2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="b2096f40-ff78-45b0-c1f1-085212af0ae2"
filename = '00.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(labeled_ds)
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
| 2_ProductDetection/pdt_detection_github.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# index 5 depth
# index 8 feature
# index 11 auc
# index 14 validation error
# index 17 training error
data = []
file = "ddd.txt"
i = 0
for line in open(file):
datum = line.split()
data.append(datum)
# -
rankAUC = sorted(data, key=lambda l: -float(l[11]))
print rankAUC[0]
rankTrainValDiff = sorted(data, key = lambda l: abs(float(l[17]) - float(l[14])))
print rankTrainValDiff[0]
# init
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import mltools as ml
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import RandomForestRegressor
# +
# load data
X = np.genfromtxt('data/X_train.txt', delimiter=None)
Y = np.genfromtxt('data/Y_train.txt', delimiter=None)
Xte = np.genfromtxt('data/X_test.txt', delimiter=None)
np.random.seed(0)
X,Y = ml.shuffleData(X,Y)
# -
# Setting Xtr, Ytr, Xva, Yva
Xtr = X[:150000, :]
Ytr = Y[:150000]
Xva = X[150000:, :]
Yva = Y[150000:]
def calcError(prediction, real):
err_count = 0.
for i in range(len(prediction)):
if prediction[i] != real[i]:
err_count+=1
return err_count / len(prediction)
def convert(regress_list):
result = []
for i in regress_list:
if i < 0.5:
result.append(0)
else:
result.append(1)
return result
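# For reference, the two loop-based helpers above collapse into vectorized
# numpy one-liners (a sketch; `calc_error_np` and `convert_np` are
# illustrative names, not part of this notebook):

```python
import numpy as np

def calc_error_np(prediction, real):
    # fraction of mismatched labels
    return np.mean(np.asarray(prediction) != np.asarray(real))

def convert_np(regress_list, threshold=0.5):
    # threshold regression outputs into 0/1 labels
    return (np.asarray(regress_list) >= threshold).astype(int)

err = calc_error_np([0, 1, 1, 0], [0, 1, 0, 0])  # one mismatch in four
labels = convert_np([0.2, 0.5, 0.9])
```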
# +
learner = RandomForestRegressor(max_depth=18,
random_state=0,
n_estimators = 500,
max_features = 1)
learner.fit(Xtr, Ytr)
Yva_hat = convert(learner.predict(Xva))
validation_auc = roc_auc_score(Yva_hat, Yva)
print "Validation AUC", validation_auc
print "Validation Error", calcError(Yva_hat, Yva)
Ytr_hat = convert(learner.predict(Xtr))
print "Train Error", calcError(Ytr_hat, Ytr), "\n"
# -
Yte_hat = learner.predict(Xte)
Yte = np.vstack((np.arange(Xte.shape[0]), Yte_hat)).T
np.savetxt('Y_submit_random_forestForEnsemble.txt',Yte,'%d, %.2f',header='ID,Prob1',comments='',delimiter=',')
| src/Random Forest Config.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ```
# sudo apt install python3-odf
# sudo python3 -m pip install pyexcel-ods
# ```
from odf.opendocument import OpenDocumentSpreadsheet, load
doc = load('data/frame-6.ods')
for e in doc.spreadsheet.childNodes:
print(str(e))
from pyexcel_ods import get_data
data = get_data("data/frame-6.ods")
data
import json
print(json.dumps(data))
data['members']
# +
# convert ints to floats
# -
data['nodes']
[type(x) for x in data['nodes'][1]]
[ float(x) if type(x) is int else x for x in data['nodes'][1]]
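# Applying the same int-to-float conversion to every row (string header
# cells are left untouched) might look like this sketch; the sample node
# table below is illustrative:

```python
def floatify(rows):
    # convert int cells to float, leave strings and existing floats alone
    return [[float(x) if type(x) is int else x for x in row]
            for row in rows]

# illustrative node table: a header row followed by coordinate rows
nodes = [['NODEID', 'X', 'Y'], ['A', 0, 0], ['B', 6, 4]]
converted = floatify(nodes)
```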
| matrix-methods/frame2d/Frame2D/00-Test-ODS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + nbsphinx="hidden"
# this is a hidden cell. It will not show on the documentation HTML.
import os
from vespa.deployment import VespaDocker
from vespa.gallery import QuestionAnswering
app_package = QuestionAnswering()
vespa_docker = VespaDocker()
app = vespa_docker.deploy(application_package=app_package)
# -
# # Exchange data with applications
#
# > Feed, get, update and delete operations
# We will use the [question answering (QA) app](https://pyvespa.readthedocs.io/en/latest/use_cases/qa/semantic-retrieval-for-question-answering-applications.html) to demonstrate ways to feed data to an application. We start by downloading sample data.
# +
import json, requests
sentence_data = json.loads(
requests.get("https://data.vespa.oath.cloud/blog/qa/sample_sentence_data_100.json").text
)
list(sentence_data[0].keys())
# -
# We assume that `app` holds a [Vespa](reference-api.rst#vespa.application.Vespa) connection instance to the desired Vespa application.
# ## Feed data
# We can either feed a batch of data for convenience or feed individual data points for increased control.
# ### Batch
# We need to prepare the data as a list of dicts having the `id` key holding a unique id of the data point and the `fields` key holding a dict with the data fields.
batch_feed = [
{
"id": idx,
"fields": sentence
}
for idx, sentence in enumerate(sentence_data)
]
# We then feed the batch to the desired schema using the
# [feed_batch](reference-api.rst#vespa.application.Vespa.feed_batch) method.
response = app.feed_batch(schema="sentence", batch=batch_feed)
# ### Individual data points
# #### Synchronous
# Synchronously feeding individual data points is similar to batch feeding, except that you have more control when looping through your dataset.
response = []
for idx, sentence in enumerate(sentence_data):
response.append(
app.feed_data_point(schema="sentence", data_id=idx, fields=sentence)
)
# #### Asynchronous
# `app.asyncio()` returns a `VespaAsync` instance that contains async operations such as `feed_data_point`. Using the `async with` context manager ensures that we open and close the appropriate connections required for async feeding.
async with app.asyncio() as async_app:
response = await async_app.feed_data_point(
schema="sentence",
data_id=idx,
fields=sentence,
)
# We can then use asyncio constructs like `create_task` and `wait` to create different types of asynchronous flows like the one below.
# +
from asyncio import create_task, wait, ALL_COMPLETED
async with app.asyncio() as async_app:
feed = []
for idx, sentence in enumerate(sentence_data):
feed.append(
create_task(
async_app.feed_data_point(
schema="sentence",
data_id=idx,
fields=sentence,
)
)
)
await wait(feed, return_when=ALL_COMPLETED)
response = [x.result() for x in feed]
# -
# <div class="alert alert-info">
#
# **Note**: The code above runs from a Jupyter Notebook because it already has its async event loop running in the background. You must create your event loop when running this code on an environment without one, just like any asyncio code requires.
# </div>
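# In a plain script without a running event loop, the same feeding pattern can be driven with `asyncio.run`. The sketch below uses a stand-in coroutine in place of the Vespa client (`feed_one` and `feed_all` are made-up names for illustration):

```python
import asyncio

async def feed_one(data_id, fields):
    # stand-in for `async_app.feed_data_point`; yields instead of feeding
    await asyncio.sleep(0)
    return {"id": data_id, "fields": fields}

async def feed_all(dataset):
    tasks = [asyncio.create_task(feed_one(i, s)) for i, s in enumerate(dataset)]
    await asyncio.wait(tasks, return_when=asyncio.ALL_COMPLETED)
    return [t.result() for t in tasks]

responses = asyncio.run(feed_all([{"text": "a"}, {"text": "b"}]))
```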
# ## Get data
# Similarly to the examples about feeding, we can get a batch of data for convenience or get individual data points for increased control.
# ### Batch
# We need to prepare the data as a list of dicts having the `id` key holding a unique id of the data point.
# We then get the batch from the desired schema using the
# [get_batch](reference-api.rst#vespa.application.Vespa.get_batch) method.
batch = [{"id": idx} for idx, sentence in enumerate(sentence_data)]
response = app.get_batch(schema="sentence", batch=batch)
# ### Individual data points
# We can get individual data points synchronously or asynchronously.
# #### Synchronous
response = app.get_data(schema="sentence", data_id=0)
# #### Asynchronous
async with app.asyncio() as async_app:
response = await async_app.get_data(schema="sentence",data_id=0)
# <div class="alert alert-info">
#
# **Note**: The code above runs from a Jupyter Notebook because it already has its async event loop running in the background. You must create your event loop when running this code on an environment without one, just like any asyncio code requires.
# </div>
# ## Update data
# Similarly to the examples about feeding, we can update a batch of data for convenience or update individual data points for increased control.
# ### Batch
# We need to prepare the data as a list of dicts having the `id` key holding a unique id of the data point, the `fields` key holding a dict with the fields to be updated and an optional `create` key with a boolean value to indicate if a data point should be created in case it does not exist (default to `False`).
batch_update = [
{
"id": idx, # data_id
"fields": sentence, # fields to be updated
"create": True # Optional. Create data point if not exist, default to False.
}
for idx, sentence in enumerate(sentence_data)
]
# We then update the batch on the desired schema using the
# [update_batch](reference-api.rst#vespa.application.Vespa.update_batch) method.
response = app.update_batch(schema="sentence", batch=batch_update)
# ### Individual data points
# We can update individual data points synchronously or asynchronously.
# #### Synchronous
response = app.update_data(schema="sentence", data_id=0, fields=sentence_data[0], create=True)
# #### Asynchronous
async with app.asyncio() as async_app:
response = await async_app.update_data(schema="sentence",data_id=0, fields=sentence_data[0], create=True)
# <div class="alert alert-info">
#
# **Note**: The code above runs from a Jupyter Notebook because it already has its async event loop running in the background. You must create your event loop when running this code on an environment without one, just like any asyncio code requires.
# </div>
# ## Delete data
# Similarly to the examples about feeding, we can delete a batch of data for convenience or delete individual data points for increased control.
# ### Batch
# We need to prepare the data as a list of dicts having the `id` key holding a unique id of the data point.
# We then delete the batch from the desired schema using the
# [delete_batch](reference-api.rst#vespa.application.Vespa.delete_batch) method.
batch = [{"id": idx} for idx, sentence in enumerate(sentence_data)]
response = app.delete_batch(schema="sentence", batch=batch)
# ### Individual data points
# We can delete individual data points synchronously or asynchronously.
# #### Synchronous
response = app.delete_data(schema="sentence", data_id=0)
# #### Asynchronous
async with app.asyncio() as async_app:
response = await async_app.delete_data(schema="sentence",data_id=0)
# <div class="alert alert-info">
#
# **Note**: The code above runs from a Jupyter Notebook because it already has its async event loop running in the background. You must create your event loop when running this code on an environment without one, just like any asyncio code requires.
# </div>
# + nbsphinx="hidden"
# this is a hidden cell. It will not show on the documentation HTML.
vespa_docker.container.stop()
vespa_docker.container.remove()
| docs/sphinx/source/exchange-data-with-app.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import pandas as pd
import numpy as np
# -
data_path = '/home/zigan/Documents/wangyouan/github_projects/wangyouan.github.io/data/useful_data'
df = pd.read_csv(os.path.join(data_path, 'compustat_sample.csv'), dtype={'cusip': str, 'fyear': str},
usecols=[
'fyear', 'cusip', 'conm', 'at', 'ch', 'emp', 'lt', 'ni', 'xopr', 'city', 'state'])
df = df[df['fyear'].notnull()]
df.loc[:, 'year'] = df['fyear'].apply(int)
df = df.drop(['fyear'], axis=1).rename(index=str, columns={
'conm': 'firm_name', 'at': 'total_assets', 'ch': 'cash', 'emp': 'emp_num', 'lt': 'total_debt',
'ni': 'net_income', 'xdp': 'depreciation_expense', 'xopr': 'operation_cost',
})
df.to_pickle(os.path.join(data_path, 'compustat_sample.pkl'))
df
df = df.drop(['fyear', 'costat'], axis=1, errors='ignore').rename(index=str, columns={
'conm': 'firm_name', 'at': 'total_assets', 'ch': 'cash', 'emp': 'emp_num', 'lt': 'total_debt',
'ni': 'net_income', 'xdp': 'depreciation_expense', 'xopr': 'operation_cost',
})
df
| notebooks/20180423_sort_sample_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Setup some basic stuff
import logging
logging.getLogger().setLevel(logging.DEBUG)
import folium
import folium.features as fof
import folium.utilities as ful
import branca.element as bre
import json
import geojson as gj
import arrow
def lonlat_swap(lon_lat):
return list(reversed(lon_lat))
def get_row_count(n_maps, cols):
    # ceiling integer division, so add_subplot receives an int
    rows = n_maps // cols
    if (n_maps % cols != 0):
        rows = rows + 1
    return rows
def get_marker(loc, disp_color):
if loc["geometry"]["type"] == "Point":
curr_latlng = lonlat_swap(loc["geometry"]["coordinates"])
return folium.Marker(curr_latlng, icon=folium.Icon(color=disp_color),
popup="%s" % loc["properties"]["name"])
elif loc["geometry"]["type"] == "Polygon":
assert len(loc["geometry"]["coordinates"]) == 1,\
"Only simple polygons supported!"
curr_latlng = [lonlat_swap(c) for c in loc["geometry"]["coordinates"][0]]
# print("Returning polygon for %s" % curr_latlng)
return folium.PolyLine(curr_latlng, color=disp_color, fill=disp_color,
popup="%s" % loc["properties"]["name"])
# ### Read the data
spec_to_validate = json.load(open("train_bus_ebike_mtv_ucb.filled.json"))
sensing_configs = json.load(open("sensing_regimes.all.specs.json"))
# ### Validating the time range
print("Experiment runs from %s -> %s" % (arrow.get(spec_to_validate["start_ts"]), arrow.get(spec_to_validate["end_ts"])))
start_fmt_time_to_validate = arrow.get(spec_to_validate["start_ts"]).format("YYYY-MM-DD")
end_fmt_time_to_validate = arrow.get(spec_to_validate["end_ts"]).format("YYYY-MM-DD")
if (start_fmt_time_to_validate != spec_to_validate["start_fmt_date"]):
print("VALIDATION FAILED, got start %s, expected %s" % (start_fmt_time_to_validate, spec_to_validate["start_fmt_date"]))
if (end_fmt_time_to_validate != spec_to_validate["end_fmt_date"]):
print("VALIDATION FAILED, got end %s, expected %s" % (end_fmt_time_to_validate, spec_to_validate["end_fmt_date"]))
# ### Validating calibration trips
def get_map_for_calibration_test(trip):
curr_map = folium.Map()
if trip["start_loc"] is None or trip["end_loc"] is None:
return curr_map
curr_start = lonlat_swap(trip["start_loc"]["coordinates"])
curr_end = lonlat_swap(trip["end_loc"]["coordinates"])
folium.Marker(curr_start, icon=folium.Icon(color="green"),
popup="Start: %s" % trip["start_loc"]["name"]).add_to(curr_map)
folium.Marker(curr_end, icon=folium.Icon(color="red"),
popup="End: %s" % trip["end_loc"]["name"]).add_to(curr_map)
folium.PolyLine([curr_start, curr_end], popup=trip["id"]).add_to(curr_map)
curr_map.fit_bounds([curr_start, curr_end])
return curr_map
calibration_tests = spec_to_validate["calibration_tests"]
rows = get_row_count(len(calibration_tests), 4)
calibration_maps = bre.Figure(ratio="{}%".format((rows/4) * 100))
for i, t in enumerate(calibration_tests):
if t["config"]["sensing_config"] != sensing_configs[t["config"]["id"]]["sensing_config"]:
        print("Mismatch in config for test %s" % t)
curr_map = get_map_for_calibration_test(t)
calibration_maps.add_subplot(rows, 4, i+1).add_child(curr_map)
calibration_maps
# ### Validating evaluation trips
def get_map_for_travel_leg(trip):
curr_map = folium.Map()
get_marker(trip["start_loc"], "green").add_to(curr_map)
get_marker(trip["end_loc"], "red").add_to(curr_map)
# trips from relations won't have waypoints
if "waypoint_coords" in trip:
for i, wpc in enumerate(trip["waypoint_coords"]["geometry"]["coordinates"]):
folium.map.Marker(
lonlat_swap(wpc), popup="%d" % i,
icon=fof.DivIcon(class_name='leaflet-div-icon')).add_to(curr_map)
print("Found %d coordinates for the route" % (len(trip["route_coords"]["geometry"]["coordinates"])))
latlng_route_coords = [lonlat_swap(rc) for rc in trip["route_coords"]["geometry"]["coordinates"]]
folium.PolyLine(latlng_route_coords,
popup="%s: %s" % (trip["mode"], trip["name"])).add_to(curr_map)
for i, c in enumerate(latlng_route_coords):
folium.CircleMarker(c, radius=5, popup="%d: %s" % (i, c)).add_to(curr_map)
curr_map.fit_bounds(ful.get_bounds(trip["route_coords"]["geometry"]["coordinates"], lonlat=True))
return curr_map
def get_map_for_shim_leg(trip):
curr_map = folium.Map()
mkr = get_marker(trip["loc"], "purple")
mkr.add_to(curr_map)
curr_map.fit_bounds(mkr.get_bounds())
return curr_map
# +
evaluation_trips = spec_to_validate["evaluation_trips"]
map_list = []
for t in evaluation_trips:
for l in t["legs"]:
if l["type"] == "TRAVEL":
curr_map = get_map_for_travel_leg(l)
map_list.append(curr_map)
else:
curr_map = get_map_for_shim_leg(l)
map_list.append(curr_map)
rows = get_row_count(len(map_list), 2)
evaluation_maps = bre.Figure(ratio="{}%".format((rows/2) * 100))
for i, curr_map in enumerate(map_list):
evaluation_maps.add_subplot(rows, 2, i+1).add_child(curr_map)
evaluation_maps
# -
# ### Validating sensing settings
for ss in spec_to_validate["sensing_settings"]:
for phoneOS, compare_map in ss.items():
compare_list = compare_map["compare"]
for i, ssc in enumerate(compare_map["sensing_configs"]):
if ssc["id"] != compare_list[i]:
print("Mismatch in sensing configurations for %s" % ss)
| spec_creation/Validate_spec_before_upload.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
# %matplotlib notebook
# Import MAVE-NN
import mavenn
# Import logomaker
import logomaker
# Import helper functions
from helper_functions import my_rsquared, save_fig_with_date_stamp, set_xticks
# Set random seed
np.random.seed(0)
# Set figure name
fig_name = 'fig6'
# +
style_file_name = f'{fig_name}.style'
s = """
axes.linewidth: 0.5 # edge linewidth
font.size: 7.0
axes.labelsize: 7.0 # fontsize of the x and y labels
xtick.labelsize: 7.0 # fontsize of the tick labels
ytick.labelsize: 7.0 # fontsize of the tick labels
legend.fontsize: 7.0
legend.borderpad: 0.2 # border whitespace
legend.labelspacing: 0.2 # the vertical space between the legend entries
legend.borderaxespad: 0.2 # the border between the axes and legend edge
legend.framealpha: 1.0
"""
with open(style_file_name, 'w') as f:
f.write(s)
plt.style.use(style_file_name)
# + code_folding=[0]
## Define OtwinowskiGPMapLayer
# Standard TensorFlow imports
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.initializers import Constant
# Import base class
from mavenn.src.layers.gpmap import GPMapLayer
# Define custom G-P map layer
class OtwinowskiGPMapLayer(GPMapLayer):
"""
A G-P map representing the thermodynamic model described by
Otwinowski (2018).
"""
def __init__(self, *args, **kwargs):
"""Construct layer instance."""
# Call superclass constructor
# Sets self.L, self.C, and self.regularizer
super().__init__(*args, **kwargs)
# Initialize constant parameter for folding energy
self.theta_f_0 = self.add_weight(name='theta_f_0',
shape=(1,),
trainable=True,
regularizer=self.regularizer)
# Initialize constant parameter for binding energy
self.theta_b_0 = self.add_weight(name='theta_b_0',
shape=(1,),
trainable=True,
regularizer=self.regularizer)
# Initialize additive parameter for folding energy
self.theta_f_lc = self.add_weight(name='theta_f_lc',
shape=(1, self.L, self.C),
trainable=True,
regularizer=self.regularizer)
# Initialize additive parameter for binding energy
self.theta_b_lc = self.add_weight(name='theta_b_lc',
shape=(1, self.L, self.C),
trainable=True,
regularizer=self.regularizer)
def call(self, x_lc):
"""Compute phi given x."""
# 1kT = 0.582 kcal/mol at room temperature
kT = 0.582
# Reshape input to samples x length x characters
x_lc = tf.reshape(x_lc, [-1, self.L, self.C])
# Compute Delta G for binding
Delta_G_b = self.theta_b_0 + \
tf.reshape(K.sum(self.theta_b_lc * x_lc, axis=[1, 2]),
shape=[-1, 1])
# Compute Delta G for folding
Delta_G_f = self.theta_f_0 + \
tf.reshape(K.sum(self.theta_f_lc * x_lc, axis=[1, 2]),
shape=[-1, 1])
# Compute and return fraction folded and bound
Z = 1+K.exp(-Delta_G_f/kT)+K.exp(-(Delta_G_f+Delta_G_b)/kT)
p_bf = (K.exp(-(Delta_G_f+Delta_G_b)/kT))/Z
phi = p_bf #K.log(p_bf)/np.log(2)
return phi
# -
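# The Boltzmann-weighted fraction folded-and-bound computed in `call` above can be sanity-checked with plain numpy (the Delta G values here are illustrative, not fitted parameters):

```python
import numpy as np

kT = 0.582  # kcal/mol at room temperature

def fraction_folded_and_bound(dG_f, dG_b):
    # three-state partition function: unfolded, folded, folded + bound
    Z = 1 + np.exp(-dG_f / kT) + np.exp(-(dG_f + dG_b) / kT)
    return np.exp(-(dG_f + dG_b) / kT) / Z

# very favorable energies push phi toward 1; unfavorable toward 0
phi_hi = fraction_folded_and_bound(-5.0, -5.0)
phi_lo = fraction_folded_and_bound(5.0, 5.0)
```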
# Load GB1 model
model_gb1 = mavenn.load('../models/gb1_thermodynamic_model_2021.12.30.21h.07m')
# +
# Load GB1 data
data_df = pd.read_csv('../datasets/gb1_data.csv.gz')
# Split into trainval and test
trainval_df, test_df = mavenn.split_dataset(data_df)
# Compute variational and predictive information
x_test = test_df['x']
y_test = test_df['y']
I_var, dI_var = model_gb1.I_variational(x=x_test, y=y_test)
I_pred, dI_pred = model_gb1.I_predictive(x=x_test, y=y_test, num_subsamples=1000)
# Compute R^2
phi_test = model_gb1.x_to_phi(x=x_test)
yhat_test = model_gb1.phi_to_yhat(phi=phi_test)
r2, dr2 = my_rsquared(yhat_test, y_test)
# Report metrics
print(f'I_var = {I_var:.3f} +- {dI_var:.3f} bits')
print(f'I_pred = {I_pred:.3f} +- {dI_pred:.3f} bits')
print(f'Rsq = {r2:.3f} +- {dr2:.3f}')
# + code_folding=[]
gb1_seq = model_gb1.x_stats['consensus_seq']
alphabet = model_gb1.alphabet
L = model_gb1.L
C = model_gb1.C
# Load each energy matrix and fix in the wild-type gauge
theta_dict = model_gb1.layer_gpmap.get_params()
theta_f_lc = theta_dict['theta_f_lc']
theta_b_lc = theta_dict['theta_b_lc']
# Get one-hot encoding of wt sequence
wt_ohe = mavenn.src.utils.x_to_ohe(x=gb1_seq, alphabet=alphabet).reshape([L,C])
# Fix each energy matrix in the wild-type gauge
theta_f_lc = theta_f_lc - (theta_f_lc*wt_ohe).sum(axis=1)[:,np.newaxis]
theta_b_lc = theta_b_lc - (theta_b_lc*wt_ohe).sum(axis=1)[:,np.newaxis]
# Test gauge
assert np.all(np.isclose(theta_b_lc*wt_ohe, 0))
assert np.all(np.isclose(theta_f_lc*wt_ohe, 0))
# + code_folding=[]
# Get indices for aa order used in Olson et al.
ordered_aa = np.array(list('EDRKHQNSTPGCAVILMFYW'))
ix = ordered_aa.argsort()
sorted_aa = ordered_aa[ix]
ixx = ix.argsort()
sorted_aa[ixx]
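# The double argsort above builds the inverse permutation: indexing the
# alphabetically sorted array with `ixx` recovers the original ordering.
# A minimal illustration of the trick:

```python
import numpy as np

letters = np.array(list('CAB'))
ix = letters.argsort()   # indices that sort the array: ['A', 'B', 'C']
ixx = ix.argsort()       # inverse permutation
restored = letters[ix][ixx]
```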
# + code_folding=[]
## heatmaps
# Create two panels
fig, axs = plt.subplots(1, 2, figsize=[4.8,2.0])
fontsize=8.0
# Left panel: draw heatmap illustrating 1pt mutation effects
ax = axs[0]
ax, cb = mavenn.heatmap(-theta_b_lc[:,ixx],
alphabet=alphabet[ixx],
seq=gb1_seq,
clim=[-3,3],
cbar=False,
seq_kwargs={'s':1,'c':'black'},
cmap='PiYG',
ccenter=0,
ax=ax)
ax.tick_params(labelsize=5, rotation=0, axis='y', length=0, pad=5)
ax.tick_params(rotation=0, axis='x')
set_xticks(ax=ax, L=L, pos_start=2, pos_spacing=5)
#ax.set_xlabel('position',fontsize=fontsize)
ax.set_ylabel('amino acid',fontsize=fontsize,labelpad=3)
#cb.set_label('-$\Delta\Delta G$', rotation=-90, va="bottom",fontsize=fontsize)
ax.set_title('binding energy ($\Delta G_\mathrm{B}$)',fontsize=fontsize)
ax = axs[1]
# Left panel: draw heatmap illustrating 1pt mutation effects
ax, cb1 = mavenn.heatmap(-theta_f_lc[:,ixx],
alphabet=alphabet[ixx],
seq=gb1_seq,
clim=[-3,3],
cmap='PiYG',
seq_kwargs={'s':1,'c':'black'},
ccenter=0,
ax=ax)
#ax.set_xlabel('position',fontsize=fontsize)
#ax.set_ylabel('amino acid ($c$)',fontsize=fontsize)
cb1.set_label('$\Delta\Delta G$ (kcal/mol)', rotation=90, fontsize=fontsize, labelpad=0)
cb1.set_ticks([-3, -2, -1, 0, 1, 2, 3])
cb1.outline.set_visible(False)
cb1.ax.tick_params(direction='in', size=10, color='white')
ax.set_title(' folding energy ($\Delta G_\mathrm{F}$)',fontsize=fontsize)
ax.tick_params(labelsize=5, rotation=0, axis='y')
ax.set_yticks([])
ax.tick_params(rotation=0, axis='x')
set_xticks(ax=ax, L=L, pos_start=2, pos_spacing=5)
#fig.subplots_adjust(left=3, bottom=1, right=5, top=2, wspace=5, hspace=None)
# Fix up plot
fig.tight_layout(h_pad=1,w_pad=2)
# Save figure
save_fig_with_date_stamp(fig, 'fig6_panel_b', bbox_inches='tight')
# -
# Function to draw logos
def draw_logo(ax, df, ylim=[-1,1], highlight_color='#9981B3', highlight_alpha=0.1):
xmin = int(df.index.min())
xmax = int(df.index.max())
xlim = [xmin-.5, xmax+.5]
logo = logomaker.Logo(-0.62*df, ax=ax, center_values=True, font_name='Arial Rounded MT Bold')
ax.set_ylim(ylim)
ax.set_yticks([ylim[0], 0, ylim[1]])
logo.style_xticks(anchor=0, spacing=10)
logo.style_spines(visible=False)
logo.highlight_position_range(xmin, xmax, alpha=highlight_alpha, color=highlight_color)
ax.set_ylabel('$-\Delta \Delta G$ (kcal/mol)', labelpad=-1)
return logo
# + code_folding=[0]
# Define sortseqGPMapLayer
from mavenn.src.layers.gpmap import GPMapLayer
# Tensorflow imports
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.initializers import Constant
class sortseqGPMapLayer(GPMapLayer):
"""
Represents a four-stage thermodynamic model
containing the states:
1. free DNA
2. CPR-DNA binding
3. RNAP-DNA binding
4. CPR and RNAP both bounded to DNA and interact
"""
def __init__(self,
tf_start,
tf_end,
rnap_start,
rnap_end,
*args, **kwargs):
"""Construct layer instance."""
# Call superclass
super().__init__(*args, **kwargs)
# set attributes
self.tf_start = tf_start # transcription factor starting position
self.tf_end = tf_end # transcription factor ending position
self.L_tf = tf_end - tf_start # length of transcription factor
self.rnap_start = rnap_start # RNAP starting position
self.rnap_end = rnap_end # RNAP ending position
self.L_rnap = rnap_end - rnap_start # length of RNAP
# define bias/chemical potential weight for TF/CRP energy
self.theta_tf_0 = self.add_weight(name='theta_tf_0',
shape=(1,),
initializer=Constant(1.),
trainable=True,
regularizer=self.regularizer)
# define bias/chemical potential weight for rnap energy
self.theta_rnap_0 = self.add_weight(name='theta_rnap_0',
shape=(1,),
initializer=Constant(1.),
trainable=True,
regularizer=self.regularizer)
# initialize the theta_tf
theta_tf_shape = (1, self.L_tf, self.C)
theta_tf_init = np.random.randn(*theta_tf_shape)/np.sqrt(self.L_tf)
# define the weights of the layer corresponds to theta_tf
self.theta_tf = self.add_weight(name='theta_tf',
shape=theta_tf_shape,
initializer=Constant(theta_tf_init),
trainable=True,
regularizer=self.regularizer)
# define theta_rnap parameters
theta_rnap_shape = (1, self.L_rnap, self.C)
theta_rnap_init = np.random.randn(*theta_rnap_shape)/np.sqrt(self.L_rnap)
# define the weights of the layer corresponds to theta_rnap
self.theta_rnap = self.add_weight(name='theta_rnap',
shape=theta_rnap_shape,
initializer=Constant(theta_rnap_init),
trainable=True,
regularizer=self.regularizer)
# define trainable real number G_I, representing interaction Gibbs energy
self.theta_dG_I = self.add_weight(name='theta_dG_I',
shape=(1,),
initializer=Constant(-4),
trainable=True,
regularizer=self.regularizer)
def call(self, x):
"""Process layer input and return output.
x: (tensor)
Input tensor that represents one-hot encoded
sequence values.
"""
# 1kT = 0.616 kcal/mol at body temperature
kT = 0.616
# extract locations of binding sites from entire lac-promoter sequence.
# for transcription factor and rnap
x_tf = x[:, self.C * self.tf_start:self.C * self.tf_end]
x_rnap = x[:, self.C * self.rnap_start: self.C * self.rnap_end]
# reshape according to tf and rnap lengths.
x_tf = tf.reshape(x_tf, [-1, self.L_tf, self.C])
x_rnap = tf.reshape(x_rnap, [-1, self.L_rnap, self.C])
# compute delta G for crp binding
G_C = self.theta_tf_0 + \
tf.reshape(K.sum(self.theta_tf * x_tf, axis=[1, 2]),
shape=[-1, 1])
# compute delta G for rnap binding
G_R = self.theta_rnap_0 + \
tf.reshape(K.sum(self.theta_rnap * x_rnap, axis=[1, 2]),
shape=[-1, 1])
G_I = self.theta_dG_I
# compute phi
numerator_of_rate = K.exp(-G_R/kT) + K.exp(-(G_C+G_R+G_I)/kT)
denom_of_rate = 1.0 + K.exp(-G_C/kT) + K.exp(-G_R/kT) + K.exp(-(G_C+G_R+G_I)/kT)
phi = numerator_of_rate/denom_of_rate
return phi
# + code_folding=[]
# Load energy matrices
model_lac = mavenn.load('../models/sortseq_thermodynamic_mpa_2021.12.30.21h.07m')
theta_dict = model_lac.layer_gpmap.get_params()
# Create dataframe for CRP emat
lac_crp_df = pd.DataFrame(columns=model_lac.alphabet,
data=theta_dict['theta_tf'])
lac_crp_df.index = lac_crp_df.index - 74
# Create dataframe for RNAP emat
lac_rnap_df = pd.DataFrame(columns=model_lac.alphabet,
data=theta_dict['theta_rnap'])
lac_rnap_df.index = lac_rnap_df.index - 41
# +
###
### Panel d
###
fig, ax = plt.subplots(figsize=(2.5,2.8))
# Define helper variables
phi_lim = [-6, 4]
phi_grid = np.linspace(phi_lim[0], phi_lim[1], 1000)
# Create array of allowable y values
Y = model_lac.Y
y_lim = [-.5, Y-.5]
y_all = list(range(Y))
# Load measurement process
#measurement_process_ss = np.load('../old/lac_emats/measurement_process_ss.npy')
p_of_y_given_phi = model_lac.p_of_y_given_phi(phi=phi_grid, y=y_all, paired=False)
# Plot measurement process
tick_spacing=10
fontsize = 8
im = ax.imshow(p_of_y_given_phi,
cmap='Greens',
extent=phi_lim+y_lim,
vmin=0,
vmax=1.0,
origin='lower',
interpolation='nearest',)
ax.set_yticks(y_all)
ax.set_ylabel('measurement ($y$)',fontsize=fontsize)
ax.set_xlabel('latent phenotype ($\phi$)',fontsize=fontsize)
ax.set_xticks([-6, -4, -2, 0, 2, 4])
cb = plt.colorbar(im, **{'format': '%0.1f','ticks': [0, 1],}, ax=ax)
cb.ax.tick_params(labelsize=7.5)
ax.set_aspect('auto')
cb.set_label('$p(y|\phi)$', rotation=-90, va="bottom", fontsize=fontsize, labelpad=-15)
cb.ax.tick_params(labelsize=fontsize)
ax.tick_params(labelsize=fontsize)
# Tighten layout
fig.tight_layout(h_pad=1)
# Save figure
save_fig_with_date_stamp(fig, 'fig6_panel_d', bbox_inches='tight')
# +
###
### Panel e
###
fig, axs = plt.subplots(2,1,figsize=[2.25,2.25])
# CRP logo
ax = axs[0]
lac_crp_logo = draw_logo(ax=ax,
df=-lac_crp_df,
ylim=[-1,1],
highlight_color='#9981B3',
highlight_alpha=0.1)
ax.set_title("CRP-DNA ($\Delta G_\mathrm{C}$)", fontsize=8)
# RNAP logo
ax = axs[1]
lac_rnap_logo = draw_logo(ax=ax,
df=-lac_rnap_df,
ylim=[-1,1],
highlight_color='lightcyan',
highlight_alpha=0.7)
ax.set_title("RNAP-DNA ($\Delta G_\mathrm{R}$)", fontsize=8)
# Save and show figure
# fig.tight_layout(h_pad=1)
# fig.savefig('png/fig6_panel_e.png',dpi=300, facecolor='white')
# plt.show()
# Tighten layout
fig.tight_layout(h_pad=1)
# Save figure
save_fig_with_date_stamp(fig, 'fig6_panel_e', bbox_inches='tight')
# +
# Compute parameter uncertainty
models = []
for model_num in range(20):
file_name = f'../models/sortseq_thermodynamic_mpa_model_{model_num}_2021.12.30.21h.07m'
model = mavenn.load(file_name, verbose=False)
models.append(model)
# Once Mahdi retrains models using the fixed custom layer, refer to parameter by name
dGs = np.array([model.layer_gpmap.get_params()['theta_dG_I'] for model in models])
print(f'dG_I = {dGs.mean():.3f} +- {dGs.std():.3f} kcal/mol')
# -
| paper/figure_scripts/fig6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''myenv'': conda)'
# name: python3
# ---
# # The Pumpkin Market
# #### Question
# Predict the price of a pumpkin for a sale during a given month
import pandas as pd
pumpkins = pd.read_csv('../data/US-pumpkins.csv')
pumpkins.head()
# the last 5 rows of data
pumpkins.tail()
# check missing data in the current dataframe
pumpkins.isnull().sum()
# the packaging does not have a consistent measurement,
# so we filter the dataset down to packages measured in bushels
pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
print(pumpkins)
# we might want to drop some data to make it easier to work with
new_columns = ['Package', 'Month', 'Low Price', 'High Price', 'Date'] # these are the columns that we want to keep
pumpkins = pumpkins.drop([c for c in pumpkins.columns if c not in new_columns], axis=1)
price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
month = pd.DatetimeIndex(pumpkins['Date']).month
print(month)
new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
new_pumpkins.head()
print(new_pumpkins)
# standardize the price per bushel
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9') , 'Price'] = price/ ( 1 + 1/9)
new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2') , 'Price'] = price/ ( 1/2)
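# The per-bushel conversion above can be sanity-checked on a tiny frame (the package names and prices below are illustrative, not from the real dataset):

```python
import pandas as pd

# hypothetical prices for two package sizes
df = pd.DataFrame({'Package': ['1 1/9 bushel cartons', '1/2 bushel cartons'],
                   'Price': [15.0, 15.0]})
# divide by the number of bushels in the package to get a per-bushel price
df.loc[df['Package'].str.contains('1 1/9'), 'Price'] /= (1 + 1/9)
df.loc[df['Package'].str.contains('1/2'), 'Price'] /= (1/2)
print(df['Price'].tolist())  # the 1 1/9-bushel price shrinks to 13.5, the 1/2-bushel price doubles to 30
```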
print(new_pumpkins)
# create the visualization
import matplotlib.pyplot as plt
price = new_pumpkins.Price
month = new_pumpkins.Month
plt.scatter(price, month)
plt.show()
new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
plt.ylabel("Pumpkin Price")
| 2-Regression/2-Data/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Wq_r4j0gvQgJ"
# # Colab-taming-transformers
#
# Original repo is located in [CompVis/taming-transformers](https://github.com/CompVis/taming-transformers) and [original colab can be found here](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb)
#
# My fork: [styler00dollar/Colab-taming-transformers](https://github.com/styler00dollar/Colab-taming-transformers)
#
# This is just a more compressed/smaller version of the original notebook.
# + id="79RpJwq1s_tB"
# !nvidia-smi
# + id="Wwj8j_l201aF" cellView="form"
#@title install
# !git clone https://github.com/CompVis/taming-transformers
# %cd taming-transformers
# !mkdir -p logs/2020-11-09T13-31-51_sflckr/checkpoints
# !wget 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' -O 'logs/2020-11-09T13-31-51_sflckr/checkpoints/last.ckpt'
# !mkdir logs/2020-11-09T13-31-51_sflckr/configs
# !wget 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' -O 'logs/2020-11-09T13-31-51_sflckr/configs/2020-11-09T13-31-51-project.yaml'
# %pip install omegaconf==2.0.0 pytorch-lightning==1.0.8
import sys
sys.path.append(".")
from omegaconf import OmegaConf
config_path = "logs/2020-11-09T13-31-51_sflckr/configs/2020-11-09T13-31-51-project.yaml"
config = OmegaConf.load(config_path)
import yaml
print(yaml.dump(OmegaConf.to_container(config)))
from taming.models.cond_transformer import Net2NetTransformer
model = Net2NetTransformer(**config.model.params)
import torch
ckpt_path = "logs/2020-11-09T13-31-51_sflckr/checkpoints/last.ckpt"
sd = torch.load(ckpt_path, map_location="cpu")["state_dict"]
missing, unexpected = model.load_state_dict(sd, strict=False)
model.cuda().eval()
torch.set_grad_enabled(False)
# + id="8LiAOU6C-vTP" cellView="form"
#@title load data
from PIL import Image
import numpy as np
segmentation_path = "data/sflckr_segmentations/norway/25735082181_999927fe5a_b.png" #@param {type:"string"}
segmentation = Image.open(segmentation_path)
segmentation = np.array(segmentation)
segmentation = np.eye(182)[segmentation]
segmentation = torch.tensor(segmentation.transpose(2,0,1)[None]).to(dtype=torch.float32, device=model.device)
def show_segmentation(s):
s = s.detach().cpu().numpy().transpose(0,2,3,1)[0,:,:,None,:]
colorize = np.random.RandomState(1).randn(1,1,s.shape[-1],3)
colorize = colorize / colorize.sum(axis=2, keepdims=True)
s = s@colorize
s = s[...,0,:]
s = ((s+1.0)*127.5).clip(0,255).astype(np.uint8)
s = Image.fromarray(s)
display(s)
show_segmentation(segmentation)
c_code, c_indices = model.encode_to_c(segmentation)
print("c_code", c_code.shape, c_code.dtype)
print("c_indices", c_indices.shape, c_indices.dtype)
assert c_code.shape[2]*c_code.shape[3] == c_indices.shape[1]
segmentation_rec = model.cond_stage_model.decode(c_code)
show_segmentation(torch.softmax(segmentation_rec, dim=1))
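# The `np.eye(182)[segmentation]` line above one-hot encodes integer class labels by fancy indexing into an identity matrix; a minimal sketch with 3 classes (array values are illustrative):

```python
import numpy as np

labels = np.array([[0, 2], [1, 0]])  # a tiny 2x2 "segmentation map" with 3 classes
one_hot = np.eye(3)[labels]          # shape (2, 2, 3): one one-hot vector per pixel
print(one_hot.shape)                 # (2, 2, 3)
print(one_hot[0, 1])                 # pixel with class 2 -> [0. 0. 1.]
```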
# + id="VTfao3jJSCfW" cellView="form"
#@title display random
def show_image(s):
s = s.detach().cpu().numpy().transpose(0,2,3,1)[0]
s = ((s+1.0)*127.5).clip(0,255).astype(np.uint8)
s = Image.fromarray(s)
display(s)
codebook_size = config.model.params.first_stage_config.params.embed_dim
z_indices_shape = c_indices.shape
z_code_shape = c_code.shape
z_indices = torch.randint(codebook_size, z_indices_shape, device=model.device)
x_sample = model.decode_to_img(z_indices, z_code_shape)
show_image(x_sample)
# + id="5rVRrUOwbEH0" cellView="form"
#@title generate image
from IPython.display import clear_output
import time
idx = z_indices
idx = idx.reshape(z_code_shape[0],z_code_shape[2],z_code_shape[3])
cidx = c_indices
cidx = cidx.reshape(c_code.shape[0],c_code.shape[2],c_code.shape[3])
temperature = 1.0
top_k = 100
update_every = 50
start_t = time.time()
for i in range(z_code_shape[2]):
if i <= 8:
local_i = i
elif z_code_shape[2]-i < 8:
local_i = 16-(z_code_shape[2]-i)
else:
local_i = 8
    for j in range(z_code_shape[3]):
if j <= 8:
local_j = j
elif z_code_shape[3]-j < 8:
local_j = 16-(z_code_shape[3]-j)
else:
local_j = 8
i_start = i-local_i
i_end = i_start+16
j_start = j-local_j
j_end = j_start+16
patch = idx[:,i_start:i_end,j_start:j_end]
patch = patch.reshape(patch.shape[0],-1)
cpatch = cidx[:, i_start:i_end, j_start:j_end]
cpatch = cpatch.reshape(cpatch.shape[0], -1)
patch = torch.cat((cpatch, patch), dim=1)
logits,_ = model.transformer(patch[:,:-1])
logits = logits[:, -256:, :]
logits = logits.reshape(z_code_shape[0],16,16,-1)
logits = logits[:,local_i,local_j,:]
logits = logits/temperature
if top_k is not None:
logits = model.top_k_logits(logits, top_k)
probs = torch.nn.functional.softmax(logits, dim=-1)
idx[:,i,j] = torch.multinomial(probs, num_samples=1)
step = i*z_code_shape[3]+j
if step%update_every==0 or step==z_code_shape[2]*z_code_shape[3]-1:
x_sample = model.decode_to_img(idx, z_code_shape)
clear_output()
print(f"Time: {time.time() - start_t} seconds")
print(f"Step: ({i},{j}) | Local: ({local_i},{local_j}) | Crop: ({i_start}:{i_end},{j_start}:{j_end})")
show_image(x_sample)
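# The sampling step above (temperature scaling, then `model.top_k_logits`, then `torch.multinomial`) can be sketched framework-free in NumPy; `top_k_sample` is a hypothetical helper for illustration, not part of the repo:

```python
import numpy as np

def top_k_sample(logits, k, temperature=1.0, rng=None):
    """Keep the k largest logits, softmax-normalize them, and sample an index."""
    rng = rng or np.random.default_rng(0)
    scaled = logits / temperature
    cutoff = np.sort(scaled)[-k]                    # k-th largest value
    scaled = np.where(scaled < cutoff, -np.inf, scaled)  # mask everything below it
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([2.0, 1.0, 0.1, -1.0])
idx = top_k_sample(logits, k=2)
print(idx)  # only indices 0 or 1 can ever be drawn
```

# Lower temperatures sharpen the distribution toward the argmax; top-k zeroes out the tail entirely.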
| Colab-taming-transformers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="UyjADbwO-kj7"
# Install pyspark
# ! pip install --ignore-installed pyspark
# Install Spark NLP
# ! pip install --ignore-installed spark-nlp
# + colab={"base_uri": "https://localhost:8080/"} id="mxJniPtV_gqj" outputId="7b657a7f-2074-450b-968a-c88f2d97f5c5"
import sparknlp
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.sql import SparkSession
print("Spark NLP version", sparknlp.version())
# + [markdown] id="qceFcfEhr9r5"
# To use the Merge Entities parameter, we need to set the allowSparkContext parameter to true
# + colab={"base_uri": "https://localhost:8080/", "height": 216} id="zzy3PziR_654" outputId="75eab5a2-6a01-48bc-86b6-3808be18d603"
spark = SparkSession.builder \
.appName("SparkNLP") \
.master("local[*]") \
.config("spark.driver.memory", "12G") \
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
.config("spark.kryoserializer.buffer.max", "2000M") \
.config("spark.driver.maxResultSize", "0") \
.config("spark.jars", "jars/sparknlp.jar") \
.config("spark.executor.allowSparkContext", "true") \
.getOrCreate()
spark
# + colab={"base_uri": "https://localhost:8080/"} id="luNlbsk1AJqP" outputId="8c153330-e2e8-4faf-c016-9d0d4a9f64ce"
from pyspark.sql.types import StringType
text = ['<NAME> is a nice lad and lives in New York']
data_set = spark.createDataFrame(text, StringType()).toDF("text")
data_set.show(truncate=False)
# + [markdown] id="HSvNig972xXC"
# # Graph Extraction
# + [markdown] id="QkW7uQ4_cqAQ"
# Graph Extraction will use pretrained POS, Dependency Parser and Typed Dependency Parser annotators when the pipeline does not have those defined
# + colab={"base_uri": "https://localhost:8080/"} id="VVFs6NDBlWsN" outputId="3ffc8639-5661-4d85-9bca-8fe43ebf6d18"
document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained() \
.setInputCols(["document", "token"]) \
.setOutputCol("embeddings")
ner_tagger = NerDLModel.pretrained() \
.setInputCols(["document", "token", "embeddings"]) \
.setOutputCol("ner")
# + [markdown] id="mY1IKzQuuMO_"
# When setting ExplodeEntities to true, Graph Extraction will find paths between all possible pairs of entities
# -
# Since this sentence only has two entities, it will display the paths between PER and LOC. Each pair of entities will have a left path and a right path. By default the paths start from the root of the dependency tree, which in this case is the token *lad*:
# * Left path: lad-PER, will output the path between lad and <NAME>
# * Right path: lad-LOC, will output the path between lad and New York
# + id="XxqysCFDg1aP"
graph_extraction = GraphExtraction() \
.setInputCols(["document", "token", "ner"]) \
.setOutputCol("graph") \
.setMergeEntities(True) \
.setExplodeEntities(True)
# + id="LRpKY22pAqlL"
graph_pipeline = Pipeline().setStages([document_assembler, tokenizer,
word_embeddings, ner_tagger,
graph_extraction])
# + [markdown] id="lJV6x-Nqw442"
# The result dataset has a *graph* column with the paths between PER and LOC
# + colab={"base_uri": "https://localhost:8080/"} id="Kh78KBe-63Dn" outputId="7bdc8eb8-a9e3-44d8-96b7-b3580a6f38ae"
graph_data_set = graph_pipeline.fit(data_set).transform(data_set)
graph_data_set.select("graph").show(truncate=False)
| jupyter/annotation/english/graph-extraction/graph_extraction_explode_entities.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="wdpNhIslRhYb" colab_type="code" colab={}
# ! pip install -q kaggle
# ! mkdir ~/.kaggle
# ! cp kaggle.json ~/.kaggle/
# ! chmod 600 ~/.kaggle/kaggle.json
# ! kaggle datasets download -d chopinforest1986/regular-deepdrid
# + id="btLE5A88RqZJ" colab_type="code" colab={}
# ! unzip -q '/content/regular-deepdrid.zip'
# + id="FosM9N7fR9pv" colab_type="code" colab={}
import os
import cv2
import glob
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + id="hn4F77PXXOSN" colab_type="code" colab={}
import tensorflow as tf
from tensorflow.keras.utils import Sequence
# + id="yGacAYZwH34d" colab_type="code" colab={}
from albumentations import Compose, OneOf, CLAHE, Flip, Transpose, Rotate, RGBShift, RandomBrightnessContrast, RandomGamma
# + id="NOj0g7aPL4LY" colab_type="code" colab={}
from sklearn.metrics import classification_report, cohen_kappa_score
# + id="mW5uEdcFSdld" colab_type="code" colab={}
train_df = pd.read_csv('/content/DR_label/DR_label/regular-fundus-training.csv')
valid_df = pd.read_csv('/content/DR_label/DR_label/regular-fundus-validation.csv')
# + id="kQYJoz4-1Y6f" colab_type="code" colab={}
train_df = train_df.fillna(0)
train_df['diagnosis'] = train_df['left_eye_DR_Level'] + train_df['right_eye_DR_Level']
train_df = train_df[['image_id', 'diagnosis']]
valid_df = valid_df.fillna(0)
valid_df['diagnosis'] = valid_df['left_eye_DR_Level'] + valid_df['right_eye_DR_Level']
valid_df = valid_df[['image_id', 'diagnosis']]
# + id="ozisjQ5NZFFN" colab_type="code" colab={}
IMG_SIZE = 512
# + id="8VBkCYZRYh1Z" colab_type="code" colab={}
train_images = []
train_labels = []
for img_id, diagnosis in zip(train_df['image_id'], train_df['diagnosis']):
path = f'Regular_DeepDRiD/regular_train/{img_id}.jpg'
img = cv2.imread(path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (IMG_SIZE,IMG_SIZE))
train_images.append(img)
train_labels.append(np.eye(5)[int(diagnosis)])
train_images = np.array(train_images)
train_labels = np.array(train_labels)
# + id="EyKHi9zdYPAq" colab_type="code" colab={}
valid_images = []
valid_labels = []
for img_id, diagnosis in zip(valid_df['image_id'], valid_df['diagnosis']):
path = f'Regular_DeepDRiD/regular_valid/{img_id}.jpg'
img = cv2.imread(path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (IMG_SIZE,IMG_SIZE))
valid_images.append(img)
valid_labels.append(np.eye(5)[int(diagnosis)])
valid_images = np.array(valid_images)
valid_labels = np.array(valid_labels)
# + id="fnMAIKP_vbo-" colab_type="code" colab={}
for i in range(10):
seed = 100*i
np.random.seed(seed)
np.random.shuffle(train_images)
np.random.seed(seed)
np.random.shuffle(train_labels)
np.random.seed(seed)
np.random.shuffle(valid_images)
np.random.seed(seed)
np.random.shuffle(valid_labels)
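# Reseeding before each shuffle, as above, makes the image and label arrays receive the same permutation, so the pairing survives; a minimal check (toy arrays):

```python
import numpy as np

x = np.arange(10)
y = np.arange(10) * 10        # y[i] always corresponds to x[i]
seed = 100
np.random.seed(seed)
np.random.shuffle(x)
np.random.seed(seed)          # same seed -> same permutation
np.random.shuffle(y)
print(np.array_equal(y, x * 10))  # True: the pairing is preserved
```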
# + id="X6TrSKNPQMm9" colab_type="code" colab={}
class Generator(Sequence):
def __init__(self, x_set, y_set, batch_size=1, augment=True):
self.x = x_set
self.y = y_set
self.batch_size = batch_size
self.augment = augment
self.preprocessing = CLAHE(always_apply=True, p=1.0)
self.augmentations = Compose(
[
OneOf([
Flip(),
Transpose(),
Rotate(border_mode=0)
], p=0.9),
OneOf([
RandomBrightnessContrast(),
RandomGamma(),
RGBShift(),
], p=0.3)
])
def __len__(self):
return math.ceil(len(self.x) / self.batch_size)
def __getitem__(self, idx):
batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_x = np.array([self.preprocessing(image=i)['image'] for i in batch_x])
if self.augment is True:
batch_x = np.array([self.augmentations(image=i)['image'] for i in batch_x])
return batch_x/255, batch_y
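# The `__len__`/`__getitem__` slicing above yields ceil(n / batch_size) batches, with a smaller final batch when the data size is not a multiple of the batch size; a framework-free sketch of that arithmetic:

```python
import math

def batches(xs, batch_size):
    # same slicing scheme as the Sequence above, on a plain list
    n_batches = math.ceil(len(xs) / batch_size)
    return [xs[i * batch_size:(i + 1) * batch_size] for i in range(n_batches)]

out = batches(list(range(10)), 4)
print([len(b) for b in out])  # [4, 4, 2] -- the last batch is partial
```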
# + id="T_LD1nK6T3O5" colab_type="code" colab={}
batch_size = 32
def train_generator_func():
generator = Generator(train_images, train_labels, batch_size, True)
return generator
def valid_generator_func():
generator = Generator(valid_images, valid_labels, batch_size, False)
return generator
train_generator = tf.data.Dataset.from_generator(
train_generator_func,
output_types=(tf.float32, tf.float32),
output_shapes=((None, IMG_SIZE, IMG_SIZE, 3), (None , 5))
).repeat().prefetch(tf.data.experimental.AUTOTUNE)
valid_generator = tf.data.Dataset.from_generator(
valid_generator_func,
output_types=(tf.float32, tf.float32),
output_shapes=((None, IMG_SIZE, IMG_SIZE, 3), (None, 5))
).prefetch(tf.data.experimental.AUTOTUNE)
# + id="t8Gef7gjSCd2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 466} outputId="ccceba0a-d27e-4e83-b9fd-21ef770f9b44"
for i, j in train_generator:
break
fig, axes = plt.subplots(2, 5, figsize=(12, 7))
fig.suptitle('Train Images', fontsize=15)
axes = axes.flatten()
for img, lbl, ax in zip(i[:10], j[:10], axes):
ax.imshow(img)
ax.title.set_text(str(int(np.argmax(lbl))))
ax.axis('off')
plt.tight_layout()
plt.show()
# + id="YWFCp9mYRzto" colab_type="code" colab={}
def kappa(y_true, y_pred):
y_true = np.argmax(y_true, axis=-1)
y_pred = np.argmax(y_pred, axis=-1)
kappa_score = cohen_kappa_score(y_true, y_pred, weights='quadratic')
return kappa_score
def tf_kappa(y_true, y_pred):
kappa_score = tf.py_function(func=kappa, inp=[y_true, y_pred], Tout=tf.float32)
return kappa_score
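# A quick sanity check of the quadratic-weighted kappa used above, on toy labels: perfect agreement scores 1, systematic off-by-one disagreement scores less.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

y_true = np.array([0, 1, 2, 3, 4, 0, 1, 2])
print(cohen_kappa_score(y_true, y_true, weights='quadratic'))  # 1.0 for perfect agreement
off_by_one = np.clip(y_true + 1, 0, 4)  # shift every grade up one level
print(cohen_kappa_score(y_true, off_by_one, weights='quadratic') < 1.0)  # True
```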
# + id="l7OCspz2B0O-" colab_type="code" colab={}
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout
# + id="H_VHrmhBPsN7" colab_type="code" colab={}
base_model = tf.keras.applications.DenseNet201(input_shape=(IMG_SIZE, IMG_SIZE, 3),
include_top=False,
weights='imagenet')
model = tf.keras.Sequential([
base_model,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(5),
])
# + id="1qQ95Iu8cgQZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 260} outputId="3fd180b5-c451-4023-a617-686aeab1a4e0"
model.summary()
# + id="dU5DgzWVWppe" colab_type="code" colab={}
base_model.trainable = False
# + id="h1C7j0KiBM2Q" colab_type="code" colab={}
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy', tf_kappa])
# + id="XWy5F6JqBOup" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 592} outputId="67f4cf99-bc1c-4cf8-8e06-68fb5566ed83"
steps_per_epoch = (3*len(train_images))//batch_size
validation_steps = len(valid_images)//batch_size
epochs = 20
initial_epoch = 0
history = model.fit(
x=train_generator,
steps_per_epoch=steps_per_epoch,
initial_epoch=initial_epoch,
epochs=epochs,
validation_data=valid_generator,
validation_steps=validation_steps,
verbose=1,
)
# + id="26DnuWDUfcmc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="10ca6698-bb14-41dd-cad1-0b9ee947d6fe"
len(base_model.layers)
# + id="lD4VB2nVfTsv" colab_type="code" colab={}
fine_tune_at = 100
for layer in base_model.layers[fine_tune_at:]:
layer.trainable = True
# + id="0YMLwsR8fz4X" colab_type="code" colab={}
base_learning_rate = 0.001
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy', tf_kappa])
# + id="JXxn7p1yf2Xw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 384} outputId="3988ae8e-572b-43c4-d33a-b817c2098757"
steps_per_epoch = (3*len(train_images))//batch_size
validation_steps = len(valid_images)//batch_size
epochs = 50
initial_epoch = 40
history = model.fit(
x=train_generator,
steps_per_epoch=steps_per_epoch,
initial_epoch=initial_epoch,
epochs=epochs,
validation_data=valid_generator,
validation_steps=validation_steps,
verbose=1,
)
# + id="0CXxljl1OHgg" colab_type="code" colab={}
test_generator = Generator(valid_images, valid_labels, len(valid_images), False)
# + id="sVU0K7WyONe1" colab_type="code" colab={}
for x_test, y_test in test_generator:
break
# + id="ezdYw-V8B2M-" colab_type="code" colab={}
y_true = np.argmax(y_test, axis=-1)
y_pred = np.argmax(model.predict(x_test), axis=-1)
# + id="wbrryLGML1cM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="68d83f99-5db9-4e68-c9cc-63cc19173a0c"
print(classification_report(y_true, y_pred))
# + id="hymjJ0i9OlVv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="424c96e0-8f53-4dfe-b0f6-d7b753eff752"
print(cohen_kappa_score(y_true, y_pred, weights='quadratic'))
# + id="dwa_W9xVPncn" colab_type="code" colab={}
| Notebooks/DeepDrid_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.2 (''.venv39'': venv)'
# language: python
# name: python3
# ---
# ## Read data using pandas
# +
import pandas as pd
import tensorflow as tf
from tensorflow import keras
SHUFFLE_BUFFER = 500
BATCH_SIZE = 2
csv_file = keras.utils.get_file(
'heart.csv', 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
# -
df: pd.DataFrame = pd.read_csv(csv_file)
df.head()
df.dtypes
target = df.pop('target')
numeric_feature_names = ['age', 'thalach', 'trestbps', 'chol', 'oldpeak']
numeric_features = df[numeric_feature_names]
numeric_features.head()
tf.convert_to_tensor(numeric_features, dtype=tf.float32)
normalizer = keras.layers.Normalization(axis=-1)
normalizer.adapt(numeric_features)
normalizer(numeric_features.loc[:3])
# +
from keras.models import Sequential
from keras.layers import Dense
def get_basic_model():
model = Sequential([
normalizer,
Dense(10, activation='relu'),
Dense(10, activation='relu'),
Dense(1),
])
model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy'])
return model
# -
model = get_basic_model()
history = model.fit(numeric_features, target, epochs=15, batch_size=BATCH_SIZE)
# ## With tf.data
# +
from tensorflow import data
numeric_dataset = data.Dataset.from_tensor_slices((numeric_features, target))
# -
for row in numeric_dataset.take(3):
print(row)
# +
numeric_batches = numeric_dataset.shuffle(1000).batch(BATCH_SIZE).prefetch(1)
model = get_basic_model()
model.fit(numeric_batches, epochs=15)
# -
# ## DataFrame with heterogeneous data
# use dataframe as dict
numeric_dict_ds = tf.data.Dataset.from_tensor_slices((dict(numeric_features), target))
for row in numeric_dict_ds.take(3):
print(row)
# # Dictionaries with Keras
#
# Two ways to make a Keras model that accepts a dictionary as input:
# 1. subclass tf.keras.Model
# 2. Keras functional style
def stack_dict(inputs, fun=tf.stack):
values = []
for key in sorted(inputs.keys()):
values.append(tf.cast(inputs[key], tf.float32))
return fun(values, axis=-1)
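# `stack_dict` concatenates the dict's values in sorted-key order, so the column layout is deterministic regardless of insertion order; the same idea in plain NumPy (no TensorFlow required, illustrative values):

```python
import numpy as np

def stack_dict_np(inputs):
    # sorted(...) fixes the column order regardless of dict insertion order
    return np.stack([np.asarray(inputs[k], dtype=np.float32) for k in sorted(inputs)],
                    axis=-1)

features = {'thalach': [150.0, 160.0], 'age': [63.0, 37.0]}
print(stack_dict_np(features))  # 'age' column first, then 'thalach'
```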
inputs = {}
for name, column in numeric_features.items():
inputs[name] = keras.Input(shape=(1,), name=name, dtype=tf.float32)
inputs
# +
x = stack_dict(inputs, fun=tf.concat)
normalizer = keras.layers.Normalization(axis=-1)
normalizer.adapt(stack_dict(dict(numeric_features)))
x = normalizer(x)
x = tf.keras.layers.Dense(10, activation='relu')(x)
x = tf.keras.layers.Dense(10, activation='relu')(x)
x = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, x)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'],
run_eagerly=True)
# -
tf.keras.utils.plot_model(model, rankdir='LR', show_shapes=True)
model.fit(dict(numeric_features), target, batch_size=BATCH_SIZE, epochs=5)
numeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE).prefetch(2)
model.fit(numeric_dict_batches, epochs=5)
# # Full Example
# +
binary_feature_names = ['sex', 'fbs', 'exang']
categorical_feature_names = ['cp', 'restecg', 'slope', 'thal', 'ca']
inputs = {}
for name, column in df.items():
if type(column[0]) == str:
dtype = tf.string
elif (name in categorical_feature_names or name in binary_feature_names):
dtype = tf.int64
else:
dtype = tf.float32
inputs[name] = tf.keras.Input(shape=(), name=name, dtype=dtype)
inputs
# -
# make preprocessor
# +
preprocessed = []
for name in binary_feature_names:
inp = inputs[name]
inp = inp[:, tf.newaxis]
float_value = tf.cast(inp, tf.float32)
preprocessed.append(float_value)
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(stack_dict(dict(numeric_features)))
numeric_inputs = {}
for name in numeric_feature_names:
numeric_inputs[name] = inputs[name]
numeric_inputs = stack_dict(numeric_inputs)
numeric_normalized = normalizer(numeric_inputs)
preprocessed.append(numeric_normalized)
for name in categorical_feature_names:
vocab = sorted(set(df[name]))
if type(vocab[0]) is str:
lookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')
else:
lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')
x = inputs[name][:, tf.newaxis]
x = lookup(x)
preprocessed.append(x)
# -
preprocessed_result = tf.concat(preprocessed, axis=-1)
preprocessor = tf.keras.Model(inputs, preprocessed_result)
tf.keras.utils.plot_model(preprocessor, rankdir='LR', show_shapes=True)
# create and train a model
body = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1),
])
x = preprocessor(inputs)
result = body(x)
model = tf.keras.Model(inputs, result)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(
dict(df), target, epochs=5, batch_size=BATCH_SIZE
)
ds = tf.data.Dataset.from_tensor_slices((
dict(df),
target
))
ds = ds.batch(BATCH_SIZE)
history = model.fit(ds, epochs=5)
| tftutorials/loadPandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
print(os.listdir())
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
dataset = pd.read_csv('Social_Network_Ads.csv')
dataset.head(10)
dataset.info()
X = dataset.iloc[:, [2,3]].values
y = dataset.iloc[:, 4].values
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25)
y_test
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# fit a Gaussian Naive Bayes classifier to the training data
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train,y_train)
y_predictor = classifier.predict(X_test)
y_predictor
# evaluate predictions using a confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_predictor)
cm
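# The confusion matrix entries can be turned into summary metrics directly; for example accuracy is the diagonal mass over the total (toy labels, not the Social_Network_Ads data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()  # correct predictions sit on the diagonal
print(cm)        # [[2 1]
                 #  [1 2]]
print(accuracy)  # 4 of 6 correct
```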
# +
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# -
| naiveBayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="aBz_keVvhruc" outputId="a31d2630-b9a8-45fb-86fb-0ae8231f27aa"
import torch
import torchvision
import importlib
# !pip install --upgrade torchvision
# !pip install cython
# !pip install -U "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
# -
torch.__version__
from dataset import *
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# +
### HYPERPARAM
batchsize = 1
in_dim = (300,300)
normalization_data = torch.load('mean-std.pt')
num_classes = 5
# -
import wandb
wandb.init(project="FSDL-Skin")
# + id="e3POxBawhruo"
# # !cd Desktop/FSDL-FinalProject/dataverse_files/HAM10000_images_part_1/
# # !ls
# from PIL import Image
# Image.open('ISIC_0024306.jpg')
# + id="8TZ2p2D3jBrx"
# precision = 'fp32'
# pretrained=False
# ssd_model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_ssd',
# model_math=precision, pretrained=False)
# if precision == 'fp16':
# checkpoint_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp16/versions/1/files/nvidia_ssdpyt_fp16_20190225.pt'
# elif precision == 'fp32':
# checkpoint_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt'
# checkpoint = torch.hub.load_state_dict_from_url('https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt', map_location="cpu")
# state_dict = {key.replace("module.", ""): value for key, value in checkpoint["state_dict"].items()}
# ssd_model.load_state_dict(state_dict)
# -
import torchvision.transforms as transforms
# %load_ext autoreload
# %autoreload 2
import dataset
dataset = dataset.SkinData('/', 'final.csv', transform=transforms.Compose([Resizing(in_dim), ToTensor, Normalizer(normalization_data)]))
train_data, test_data = torch.utils.data.random_split(dataset,[int(0.8 * len(dataset)), int(0.2 * len(dataset))], generator=torch.Generator().manual_seed(42))
data_loader_train = torch.utils.data.DataLoader(train_data, batch_size=batchsize)
data_loader_test = torch.utils.data.DataLoader(test_data, batch_size=batchsize)
# + id="ZsDYbkjnhrvq"
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
def get_instance_segmentation_model(num_classes):
# load an instance segmentation model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
# get the number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# now get the number of input features for the mask classifier
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
hidden_layer = 256
# and replace the mask predictor with a new one
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
hidden_layer,
num_classes)
return model
# -
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
target['boxes'].shape
target
model(im, [target])
# + colab={"base_uri": "https://localhost:8080/", "height": 84, "referenced_widgets": ["3f4be7853a354d67bedf185023bd4ae7", "f73dd5f402114a2c85185652a7a5a28b", "34f1406f877244c1be53f4813132f619", "0378b2ef6900484b958d2b8f342af17b", "6469c80bdbfc4a7ea9d057d7a8c60d26", "5c8e15c6e7f344a9956ef3040e51e6f0", "3eea8a3c52bc450a8a408e167fcd2268", "3e4b2420a0074c09819018ef3b0d230b"]} id="KXpUihL-hrvr" outputId="ee019280-3040-4650-c3c6-9bc232a4f1a9"
# Set device type
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load pre-trained model with new head/tail
model = get_instance_segmentation_model(num_classes)
model.to(device)
# Build optimizer
params = [p for p in model.parameters() if p.requires_grad]
# optimizer = torch.optim.SGD(params, lr=0.005,
# momentum=0.9, weight_decay=0.0005)
optimizer = torch.optim.AdamW(params, lr=0.005, weight_decay=0.0005)
# Learning rate scheduler (LR decay)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
# -
image, target = dataset[0]
image.shape
model([image], [target])  # torchvision detection models expect lists of images and targets
im = image.unsqueeze(0)
im.shape
with torch.no_grad():
model.eval()
out = model(im)
out
# + id="o2SRRnG0ihpd"
from engine import train_one_epoch, evaluate
# Training Loop
num_epochs = 10
for epoch in range(num_epochs):
train_one_epoch(model, optimizer, data_loader_train, device, epoch, print_freq=10)
lr_scheduler.step()
evaluate(model, data_loader_test, device=device)
# -
| Archive/Training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_pytorch_p36)
# language: python
# name: conda_pytorch_p36
# ---
import boto3
import numpy as np
import pandas as pd
from skimage.color import rgb2lab, lab2rgb
import os
from io import BytesIO
from tqdm import tqdm
from PIL import Image
import pickle
from sklearn.cluster import KMeans
import itertools
# # for all images in miro s3 bucket
# +
sts = boto3.client("sts")
assumed_role_object = sts.assume_role(
RoleArn="arn:aws:iam::760097843905:role/calm-assumable_read_role",
RoleSessionName="AssumeRoleSession1",
)
credentials = assumed_role_object["Credentials"]
s3_platform = boto3.client(
"s3",
aws_access_key_id=credentials["AccessKeyId"],
aws_secret_access_key=credentials["SecretAccessKey"],
aws_session_token=credentials["SessionToken"],
)
# -
s3_data_science = boto3.client("s3")
def get_s3_keys_as_generator(bucket):
"""Generate all the keys in an S3 bucket."""
kwargs = {"Bucket": bucket}
while True:
resp = s3_platform.list_objects_v2(**kwargs)
for obj in resp["Contents"]:
yield obj["Key"]
try:
kwargs["ContinuationToken"] = resp["NextContinuationToken"]
except KeyError:
break
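# The function above hand-rolls S3 pagination via `ContinuationToken`. One subtlety:
# an empty page has no `Contents` key, so the bare `resp["Contents"]` lookup would raise
# a KeyError on an empty bucket. A minimal offline sketch of the same protocol, using a
# hypothetical stub client in place of boto3, with `.get` defaults to cover that edge case:

```python
def iter_keys(list_objects, bucket):
    """Yield every key from a paginated list_objects_v2-style API."""
    kwargs = {"Bucket": bucket}
    while True:
        resp = list_objects(**kwargs)
        # .get with a default avoids a KeyError on an empty page
        for obj in resp.get("Contents", []):
            yield obj["Key"]
        token = resp.get("NextContinuationToken")
        if token is None:
            break
        kwargs["ContinuationToken"] = token

# A stub client returning two pages, to exercise the loop offline.
pages = [
    {"Contents": [{"Key": "a"}, {"Key": "b"}], "NextContinuationToken": "t1"},
    {"Contents": [{"Key": "c"}]},
]
def fake_list_objects(**kwargs):
    return pages[1] if "ContinuationToken" in kwargs else pages[0]

print(list(iter_keys(fake_list_objects, "demo")))  # ['a', 'b', 'c']
```

# Note that boto3 also ships `client.get_paginator("list_objects_v2")`, which does this
# token bookkeeping for you.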
bucket_name = "wellcomecollection-miro-images-public"
all_keys = list(get_s3_keys_as_generator(bucket_name))
len(all_keys)
# # get the ids that have already been processed
n_items_in_bucket = 164
palette_dicts = []
for i in tqdm(range(n_items_in_bucket + 1)):
try:
binary_data = s3_data_science.get_object(
Bucket="model-core-data",
Key="palette_similarity/palette_dict_{}.pkl".format(i),
)["Body"].read()
palette_dict = pickle.load(BytesIO(binary_data))
palette_dicts.append(palette_dict)
    except Exception:
        # chunk missing from the bucket; skip it
        pass
# +
palette_dict = {}
for d in palette_dicts:
palette_dict.update(d)
len(palette_dict)
# -
already_processed_ids = set(palette_dict.keys())
def id_from_object_key(object_key):
image_id, _ = os.path.splitext(os.path.basename(object_key))
return image_id
not_yet_processed_keys = [
object_key
for object_key in all_keys
if id_from_object_key(object_key) not in already_processed_ids
]
len(not_yet_processed_keys)
# # get their palettes
# +
def get_image(object_key):
image_object = s3_platform.get_object(Bucket=bucket_name, Key=object_key)
image = Image.open(BytesIO(image_object["Body"].read()))
if image.mode != "RGB":
image = image.convert("RGB")
image = image.resize((75, 75), resample=Image.BILINEAR)
return image
def get_palette(image, palette_size=5):
lab_image = rgb2lab(np.array(image)).reshape(-1, 3)
clusters = KMeans(n_clusters=palette_size).fit(lab_image)
return clusters.cluster_centers_
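# `get_palette` clusters pixels in Lab space; the same idea can be checked offline on a
# synthetic two-colour image, clustering directly in RGB for brevity. With two pure colour
# blocks, KMeans recovers exactly those colours as cluster centres:

```python
import numpy as np
from sklearn.cluster import KMeans

# A tiny synthetic "image": 100 pure-red pixels and 100 pure-blue pixels.
pixels = np.vstack([
    np.tile([255, 0, 0], (100, 1)),
    np.tile([0, 0, 255], (100, 1)),
]).astype(float)

palette = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels).cluster_centers_
centers = {tuple(c) for c in palette.round().astype(int)}
print(centers)  # {(255, 0, 0), (0, 0, 255)}
```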
# +
chunk_size, palette_dict = 1000, {}
for i, object_key in enumerate(tqdm(not_yet_processed_keys)):
try:
image = get_image(object_key)
image_id = id_from_object_key(object_key)
palette_dict[image_id] = get_palette(image)
    except Exception:
        # unreadable or missing image; skip it
        pass
if (i % chunk_size == 0) and (i != 0):
s3_data_science = boto3.client("s3")
s3_data_science.put_object(
Bucket="model-core-data",
Key="palette_similarity/palette_dict_{}.pkl".format(
(i // chunk_size) + n_items_in_bucket
),
Body=pickle.dumps(palette_dict),
)
palette_dict = {}
# -
# # save the data
| notebooks/palette/notebooks/08 - get palettes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Imports
import numpy as np
import pandas as pd
# ## Load data
data = pd.read_csv('Pokemon.csv')
# Checking the head and some basic info
data.head()
data.shape # Checking the shape
data.info()
# Checking null values
data.isnull().sum()/len(data)*100 # Type 2 is the only column with null values, more than 20%!!
# Let's handle the null values: these Pokemon simply don't have a second type
# I fill with a placeholder string instead of dropping rows, so no info is lost
data = data.fillna('NaN')
data.head()
data.isnull().sum()/len(data)*100
# The Mega evolution rows don't add much information, so let's filter them out
data = data.loc[~data['Name'].str.contains('Mega')]
# Checking that it worked
data.head()
data.shape # The new shape
# ## Now let's answer some questions
# ### Question: which are the top 5 most powerful Pokemon?
data.sort_values('Total',ascending=False).head() # All of them are Legendary :O
# ### Question: which Pokemon has the highest Attack?
data.sort_values('Attack',ascending=False).head(1) # Groudon, with Attack equal to 180
# ### Question: which Pokemon has the highest Defense?
data.sort_values('Defense',ascending=False).head(1) # Shuckle, with Defense equal to 230
# ### Question: which are the top 5 weakest Pokemon?
data.sort_values('Total').head()
# ### Question: which non-legendary Pokemon are the strongest?
non_legendary = data[(data['Total']>600) & (data['Legendary']==False)]
non_legendary.head() # Slaking is the most powerful non-legendary Pokemon. What? More than Dragonite?
# ### Question: which Pokemon has the most HP?
data.sort_values('HP',ascending=False).head(1) # Blissey, with HP equal to 255
# ### Question: which is the fastest Pokemon?
data.sort_values('Speed',ascending=False).head(1) # Deoxys with Speed equal to 180
# ## Let's move to some visualization
# More imports
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
plt.title('Legendary Pokemon')
sns.countplot(x='Legendary',data=data,palette='winter_r') # We have more non legendary Pokemon
sns.lmplot(x='Defense', y='Attack', data=data, fit_reg=False, hue='Legendary',palette='winter_r')
# Legendary Pokemon have more Attack and Defense
# ### One Pokemon has one of the highest Attack values but low Defense. Which Pokemon?
high_atk_low_df = data[(data['Attack']>175) & (data['Defense']<50)]
high_atk_low_df # That Pokemon is Deoxys in its Attack form
plt.figure(figsize=(13,4))
plt.title('Number of Each Type of Pokemon')
order = sorted(data['Type 1'].unique())
sns.countplot(x='Type 1',data=data,order=order,palette='winter_r') # The water type is the most common
plt.figure(figsize=(13,4))
plt.title('Count per Generation')
order = sorted(data['Generation'].unique())
sns.countplot(x='Generation',data=data,order=order,palette='winter_r') # Generation 5 has the most Pokemon in this df
# Generation 6 contained the Mega forms we dropped above, which is why it shows fewer Pokemon
# ### Which Pokemon type is the strongest?
plt.figure(figsize=(13,4))
plt.title('Best Pokemon Type')
order = sorted(data['Type 1'].unique())
sns.barplot(x='Type 1',y='Total',data=data,order=order,palette='winter_r')
# Dragon is the strongest Pokemon type
# ### Which Pokemon type don't have legendaries?
plt.figure(figsize=(13,4))
plt.title('Non Legendary Pokemon')
order = sorted(data['Type 1'].unique())
sns.barplot(x='Type 1',y='Total',hue='Legendary',data=data,order=order,palette='winter_r')
plt.legend(bbox_to_anchor=(1.01,1),loc=2,borderaxespad=0)
# Bug, fighting and poison don't have any legendary
# Baseline for our model (proportion of legendary Pokemon)
59/len(data)
# ## Modeling and predicting if a Pokemon is legendary or not
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
# Dropping # and Name columns
data.drop(['#','Name'],axis=1,inplace=True)
data.head()
# ### Transform categorical features
# Categorical features
columns = (data.dtypes == 'object')
object_cols = list(columns[columns].index)
print('Categorical variables:')
print(object_cols)
for col in object_cols:
data[col] = label_encoder.fit_transform(data[col])
data['Legendary'] = data['Legendary'].map({True:1, False:0})
# Checking if it works
data.head()
data['Legendary'].value_counts() # Unbalanced data
# ### Splitting the data
X = data.drop('Legendary',axis=1)
y = data['Legendary']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train.shape, X_test.shape # The shape of data after the split
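# Since `Legendary` is a heavily unbalanced target, a plain random split can leave the test
# set with very few positives. One option (not used above) is a stratified split, which
# preserves the class ratio in both halves; sketched here on synthetic labels with a
# similar 90/10 imbalance:

```python
import numpy as np
from sklearn.model_selection import train_test_split

y_demo = np.array([0] * 90 + [1] * 10)   # 10% positives, like the Legendary flag
X_demo = np.arange(100).reshape(-1, 1)

_, _, _, y_te = train_test_split(X_demo, y_demo, test_size=0.5,
                                 random_state=0, stratify=y_demo)
print(y_te.sum(), len(y_te))  # 5 50 -- the 10% ratio is preserved in the test half
```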
# ### Creating the model
# Let's use a robust algorithm
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(random_state=42)
# ### Training the model
model.fit(X_train,y_train)
# ### Predictions
predictions = model.predict(X_test)
# ### Evaluating the model
from sklearn.metrics import classification_report,confusion_matrix,roc_auc_score
print(classification_report(y_test,predictions))
print(roc_auc_score(y_test,predictions))  # roc_auc_score expects (y_true, y_score)
# ### Tuning the parameters
model = RandomForestClassifier(n_estimators=200,max_features=5,max_leaf_nodes=20,random_state=42)
model.fit(X_train,y_train)
tune_predictions = model.predict(X_test)
print(classification_report(y_test,tune_predictions))
print(roc_auc_score(y_test,tune_predictions))  # roc_auc_score expects (y_true, y_score)
# +
# Even adjusting some hyperparameters, we were unable to improve the model.
# -
# ### Conclusion
# +
# Our model is good at predicting non-legendary Pokemon, and is better than the baseline at predicting legendary ones
# -
# ### Saving the model
import joblib
filename = 'model.joblib'
joblib.dump(model,filename)
| Pokemon_Analysis/Pokemon_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
# -
digits = load_digits()
X, y = digits.data, digits.target
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(SVC(), X, y, param_name="gamma", param_range=param_range,
cv=10, scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
# %matplotlib inline
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
| evaluate-model-validation-curve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + tags=[]
import dask.array as da
a = da.from_array([[[0], [1]]])
da.full(a.shape, da.mean(a)).compute()
# -
da.full(a.shape, da.mean(a), dtype=float).compute()
import dask
dask.__version__
type(da.mean(a))
| notebooks/8461.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import pandas as pd
# for data processing
from mmctools.dataloaders import read_files
from mmctools.helper_functions import calc_wind, theta, covariance
# site-specific correction
from TTURawToMMC import reg_coefs, tilts
from mmctools.measurements.metmast import tilt_correction
# for plotting
from mmctools.plotting import plot_timehistory_at_height, plot_profile
# # Process and tilt-correct sonic data from TTU tower
# written by [<NAME>](mailto:<EMAIL>)
#
# Process data at SWiFT on 2013-11-08 and 09
# DAP data
dataset = 'mmc/tower.z01.00'
datapath = 'data' # where the data will get saved locally
startdate = pd.to_datetime('2013-11-08')
enddate = pd.to_datetime('2013-11-10')
# outputs
output_1Hz = 'data/TTU_tilt_corrected_20131108-09.csv'
output_10min = 'data/TTU_tilt_corrected_20131108-09_10min.csv'
# ## 1. Data processing
filelist = []
overwrite_files = False # force download of files
# ### 1a. try to get data from DAP on the fly
try:
import A2e
except ImportError:
print('dap-py module not available; need to manually set `filelist`')
else:
a2e = A2e.A2e()
a2e.setup_cert_auth()
datafiles = a2e.search({
'Dataset': dataset,
'date_time': {
'between': [startdate.strftime('%Y%m%d%H%M%S'), enddate.strftime('%Y%m%d%H%M%S')]
}
})
print(len(datafiles),'data files selected')
filelist = a2e.download_files(datafiles, path=datapath, force=overwrite_files)
if filelist is None:
print('No files were downloaded; need to manually download and set `filelist`')
# Use previously downloaded files (comment these lines out to rely on the DAP download above)
import glob
filelist = glob.glob('data/mmc.tower.z01.00/*.dat')
filelist.sort()
filelist
# ### 1b. process downloaded data
# +
variables = ['unorth','vwest','w','ustream','vcross','wdir','tsonic','t','p','rh']
#heights_ft = [3,8,13,33,55,155,245,382,519,656]
heights = np.array([0.9, 2.4, 4.0, 10.1, 16.8, 47.3, 74.7, 116.5, 158.2, 200.0])
sampling_freq = 50. # Hz
resample_interval = '1s'
# -
# *wide* format: variables change the fastest, then heights
# e.g., unorth_3ft, vwest_3ft, ..., rh_3ft, unorth_8ft, vwest_8ft, ...
columns = pd.MultiIndex.from_product([heights,variables],names=['height',None])
def read_wide_csv(fname,**kwargs):
"""Convert table in wide format into stacked/long format with multi-index"""
df = pd.read_csv(fname,
skiprows=5, header=None,
index_col=0, parse_dates=True,
**kwargs)
df.index.name = 'datetime'
df.columns = columns
df = df.resample(resample_interval).first()
return df.stack(level=0)
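# The wide-to-long conversion in `read_wide_csv` is easiest to see on a toy frame:
# `stack(level=0)` moves the `height` level of the columns into the index, leaving one
# column per variable:

```python
import pandas as pd

# Two heights x two variables, in wide order (variables change fastest).
cols = pd.MultiIndex.from_product([[10.0, 20.0], ["u", "v"]],
                                  names=["height", None])
wide = pd.DataFrame([[1, 2, 3, 4]],
                    index=pd.to_datetime(["2013-11-08"]),
                    columns=cols)
wide.index.name = "datetime"

long_df = wide.stack(level=0)  # index becomes (datetime, height); columns are u, v
print(long_df)
```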
# %time rawdata = read_files(filelist, reader=read_wide_csv)
rawdata.head()
# CPU times: user 2min 16s, sys: 10.3 s, total: 2min 26s
# Wall time: 2min 27s
# ### 1c. data standardization
# <font color='red'>These data conversions are specific to TTU met tower (tower.z01.00)</font>
rawdata['u'] = rawdata['vwest']
rawdata['v'] = -rawdata['unorth']
# convert from deg F to K
rawdata['t'] = 5./9. * (rawdata['t']-32) + 273.15
rawdata['tsonic'] = 5./9. * (rawdata['tsonic']-32) + 273.15
# convert kPa to mbar
rawdata['p'] *= 10.
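# The unit conversions above are easy to sanity-check against known reference points
# (freezing and boiling points of water, one standard atmosphere):

```python
def f_to_k(t_f):
    """Degrees Fahrenheit to Kelvin, as applied to t and tsonic above."""
    return 5.0 / 9.0 * (t_f - 32.0) + 273.15

def kpa_to_mbar(p_kpa):
    """kPa to mbar (1 kPa = 10 mbar), as applied to p above."""
    return p_kpa * 10.0

assert abs(f_to_k(32.0) - 273.15) < 1e-9    # freezing point of water
assert abs(f_to_k(212.0) - 373.15) < 1e-9   # boiling point of water
assert kpa_to_mbar(101.325) == 1013.25      # one standard atmosphere
```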
# rename vars
rawdata = rawdata.rename(columns={
't': 'T',
'tsonic': 'Ts',
'rh': 'RH',
})
# ## 2. Calculations
# ### 2a. Perform tilt correction
corrected = rawdata[['u','v','w','Ts','T','RH','p']].unstack() # make an unstacked copy
ucorr,vcorr,wcorr = tilt_correction(corrected['u'],corrected['v'],corrected['w'],
reg_coefs=reg_coefs,
tilts=tilts)
corrected.loc[:,'u'] = ucorr
corrected.loc[:,'v'] = vcorr
corrected.loc[:,'w'] = wcorr
corrected = corrected.stack(dropna=False)
# <font color='green'>At this point, we save the high-frequency data for spectral analysis</font>
# %time corrected.to_csv(output_1Hz)
# CPU times: user 32.7 s, sys: 1.33 s, total: 34 s
# Wall time: 34.1 s
# ### 2b. Calculate additional quantities of interest
# Note: per `TTU_tower_heatflux_stability.ipynb`, `theta` (calculated from air temperature) is acceptable for analysis in the mean; for correlations, `thetas` (calculated from sonic temperature) should be used for calculating fluctuations and correlations.
def calc_QOIs(df):
df['theta'] = theta(df['T'], df['p'])
df['thetas'] = theta(df['Ts'], df['p'])
calc_QOIs(rawdata)
rawdata.head()
calc_QOIs(corrected)
corrected.head()
# ### 2c. Calculate rolling statistics
# See `TTU_tower_heatflux_stability.ipynb` for more details regarding $T_v$ vs $T_s$
def calc_rolling_stats(df,offset='10min'):
"""Calculate quantities of interest, with statistics based on the specified pandas offset string
Note: we could also have used `resample` instead of `rolling`. To be consistent with previous analyses,
however, I've used rolling statistics for the plots in the next section.
"""
# calculate statistical quantities on unstacked
unstacked = df.unstack()
stats = unstacked.rolling(offset).mean().stack()
# - recalculate wind based on averaged components
stats['wspd'], stats['wdir'] = calc_wind(stats)
# stats['theta'] = theta(stats['T'], stats['p'])
# stats['thetas'] = theta(stats['Ts'], stats['p'])
# - calculate variances
stats['uu'] = unstacked['u'].rolling(offset).var().stack()
stats['vv'] = unstacked['v'].rolling(offset).var().stack()
stats['ww'] = unstacked['w'].rolling(offset).var().stack()
# - calculate covariances
stats['uv'] = covariance(unstacked['u'], unstacked['v'], offset).stack()
stats['vw'] = covariance(unstacked['v'], unstacked['w'], offset).stack()
stats['uw'] = covariance(unstacked['u'], unstacked['w'], offset).stack()
stats['Tw'] = covariance(unstacked['Ts'], unstacked['w'], offset).stack()
stats['thetaw'] = covariance(unstacked['thetas'], unstacked['w'], offset).stack()
# - calculate derived quantities
# TODO: implement in helper_functions.py
stats['u*'] = (stats['uw']**2 + stats['vw']**2)**0.25
stats['TKE'] = 0.5*(stats['uu'] + stats['vv'] + stats['ww'])
ang = np.radians(270. - stats['wdir'])
ang[ang<0] += 2*np.pi
stats['TI'] = stats['uu']*np.cos(ang)**2 + 2*stats['uv']*np.sin(ang)*np.cos(ang) + stats['vv']*np.sin(ang)**2
stats['TI'] = np.sqrt(stats['TI']) / stats['wspd']
return stats
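# The `TI` computation above projects the horizontal variance tensor onto the mean-wind
# (streamwise) direction. A quick check of that rotation in isolation: for isotropic
# horizontal turbulence (`uu == vv`, `uv == 0`) the result must not depend on wind direction:

```python
import numpy as np

def streamwise_ti(uu, vv, uv, wdir, wspd):
    """Turbulence intensity along the mean-wind direction (same rotation as calc_rolling_stats)."""
    ang = np.radians(270.0 - wdir)
    var_s = uu * np.cos(ang)**2 + 2 * uv * np.sin(ang) * np.cos(ang) + vv * np.sin(ang)**2
    return np.sqrt(var_s) / wspd

# Isotropic case: sigma = 0.5 m/s, wspd = 5 m/s -> TI = 0.1 for any direction.
for wdir in (0.0, 45.0, 270.0):
    assert abs(streamwise_ti(0.25, 0.25, 0.0, wdir, 5.0) - 0.1) < 1e-12
```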
# %time uncorrected_10min = calc_rolling_stats(rawdata)
# CPU times: user 24.2 s, sys: 2.03 s, total: 26.2 s
# Wall time: 24.9 s
# rolling 10-min stats
# %time corrected_10min = calc_rolling_stats(corrected)
# CPU times: user 22.3 s, sys: 1.48 s, total: 23.8 s
# Wall time: 23 s
# <font color='green'>We can now save more general (and manageable) validation data for other analyses</font>
# save resampled 10-min stats (note: we've already calculated the rolling statistics)
corrected_10min.unstack().resample('10min').first().stack().to_csv(output_10min)
# ## 3. Plots
times = corrected_10min.index.levels[0]
times
zhub = 74.7
fig,ax = plot_timehistory_at_height(
{'uncorrected':uncorrected_10min, 'tilt corrected':corrected_10min},
fields=['wspd','wdir','w'],
heights=zhub,
)
selected_times = [
'2013-11-08 12:00:00', # very stable
'2013-11-08 15:00:00', # ~neutral
'2013-11-08 18:00:00', # slightly unstable
'2013-11-08 21:00:00', # ~neutral
'2013-11-09 00:00:00', # ~neutral
'2013-11-09 03:00:00', # slightly stable
]
fig,ax = plot_profile(corrected_10min, fields=['Ts','theta'], times=selected_times, cmap='viridis')
ax[0].set_xlabel(r'$T_s$ [K]')
ax[1].set_xlabel(r'$\theta$ [K]')
# Jeremy's plot from `TTU_tiltCorrection.ipynb`
w = corrected.xs(zhub, level='height')['w']
day1 = (w.index >= '2013-11-08 00:00') & (w.index <= '2013-11-09 00:00')
w = w.loc[day1]
wmean = w.rolling('10min').mean()
fig,ax = plt.subplots(figsize=(18,2))
w.plot(style='b.')
wmean.plot(style='r-')
ax.set_ylabel(r'$w$ $[\mathrm{ms^{-1}}]$', fontsize=14)
ax.set_xlabel(r'$time$ $[\mathrm{HH:MM}]$ $Z$', fontsize=14)
| datasets/SWiFT/process_TTU_tower.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''general'': venv)'
# name: python37664bitgeneralvenvfbd0a23e74cf4e778460f5ffc6761f39
# ---
# +
import datetime, time
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, cross_validate
from sklearn import tree
from sklearn.metrics import classification_report, confusion_matrix, f1_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier
from stree import Stree
from odte import Odte
random_state = 1
# -
from sklearn.datasets import load_wine
X, y = load_wine(return_X_y=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.2, random_state=random_state)
n_estimators = 20
clf = {}
clf["stree"] = Stree(random_state=random_state, max_depth=5)
clf["stree"].set_params(**dict(splitter="best", kernel="linear", max_features="auto"))
clf["odte"] = Odte(n_jobs=-1, base_estimator=clf["stree"], random_state=random_state, n_estimators=n_estimators, max_features=.8)
clf["adaboost"] = AdaBoostClassifier(base_estimator=clf["stree"], n_estimators=n_estimators, random_state=random_state, algorithm="SAMME")
clf["bagging"] = BaggingClassifier(base_estimator=clf["stree"], n_estimators=n_estimators)
# + tags=[]
print("*"*30,"Results for wine", "*"*30)
for clf_type, item in clf.items():
print(f"Training {clf_type}...")
now = time.time()
item.fit(Xtrain, ytrain)
print(f"Score: {item.score(Xtest, ytest) * 100:.3f} in {time.time()-now:.2f} seconds")
# -
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.2, random_state=random_state)
n_estimators = 10
clf = {}
clf["stree"] = Stree(random_state=random_state, max_depth=3)
clf["odte"] = Odte(n_jobs=-1, random_state=random_state, n_estimators=n_estimators, max_features=1.0)
clf["adaboost"] = AdaBoostClassifier(base_estimator=clf["stree"], n_estimators=n_estimators, random_state=random_state, algorithm="SAMME")
clf["bagging"] = BaggingClassifier(base_estimator=clf["stree"], n_estimators=n_estimators)
# + tags=[]
print("*"*30,"Results for iris", "*"*30)
for clf_type, item in clf.items():
print(f"Training {clf_type}...")
now = time.time()
item.fit(Xtrain, ytrain)
print(f"Score: {item.score(Xtest, ytest) * 100:.3f} in {time.time()-now:.2f} seconds")
# + tags=[]
cross = cross_validate(estimator=clf["odte"], X=X, y=y, n_jobs=-1, return_train_score=True)
print(cross)
print(f"{np.mean(cross['test_score'])*100:.3f} +- {np.std(cross['test_score']):.3f}")
# + tags=[]
cross = cross_validate(estimator=clf["adaboost"], X=X, y=y, n_jobs=-1, return_train_score=True)
print(cross)
print(f"{np.mean(cross['test_score'])*100:.3f} +- {np.std(cross['test_score']):.3f}")
# + tags=[]
from sklearn.utils.estimator_checks import check_estimator
# Make checks one by one
c = 0
checks = check_estimator(Odte(), generate_only=True)
for check in checks:
c += 1
print(c, check[1])
check[1](check[0])
| notebooks/wine_iris.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Application GUI
import tkinter as tk
from PIL import Image
from PIL import ImageTk
import cv2
import threading
import queue
# I have taken a modular approach so that the UI is easy to change, update and extend. The UI has no knowledge of how data is fetched or processed; it is just a UI.
# ## Left Screen Views
class LeftView(tk.Frame):
def __init__(self, root):
#call super class (Frame) constructor
tk.Frame.__init__(self, root)
#save root layour for later references
self.root = root
#load all UI
self.setup_ui()
def setup_ui(self):
#create a output label
self.output_label = tk.Label(self, text="Webcam Output", bg="black", fg="white")
self.output_label.pack(side="top", fill="both", expand="yes", padx=10)
#create label to hold image
self.image_label = tk.Label(self)
#put the image label inside left screen
self.image_label.pack(side="left", fill="both", expand="yes", padx=10, pady=10)
def update_image(self, image):
#configure image_label with new image
self.image_label.configure(image=image)
#this is to avoid garbage collection, so we hold an explicit reference
self.image = image
# ## Right Screen Views
class RightView(tk.Frame):
def __init__(self, root):
#call super class (Frame) constructor
tk.Frame.__init__(self, root)
#save root layour for later references
self.root = root
#load all UI
self.setup_ui()
def setup_ui(self):
#create a webcam output label
self.output_label = tk.Label(self, text="Face detection Output", bg="black", fg="white")
self.output_label.pack(side="top", fill="both", expand="yes", padx=10)
#create label to hold image
self.image_label = tk.Label(self)
#put the image label inside left screen
self.image_label.pack(side="left", fill="both", expand="yes", padx=10, pady=10)
def update_image(self, image):
#configure image_label with new image
self.image_label.configure(image=image)
#this is to avoid garbage collection, so we hold an explicit reference
self.image = image
# ## All App GUI Combined
class AppGui:
def __init__(self):
#initialize the gui toolkit
self.root = tk.Tk()
#set the geometry of the window
#self.root.geometry("550x300+300+150")
#set title of window
self.root.title("Face Detection")
#create left screen view
self.left_view = LeftView(self.root)
self.left_view.pack(side='left')
#create right screen view
self.right_view = RightView(self.root)
self.right_view.pack(side='right')
#define image width/height that we will use
#while showing an image in webcam/neural network
#output window
self.image_width=200
self.image_height=200
        #define the center of the circle based on image dimensions
        #this is the circle we will use for user focus
        self.circle_center = (int(self.image_width/2),int(self.image_height/4))
#define circle radius
self.circle_radius = 15
#define circle color == red
self.circle_color = (255, 0, 0)
self.is_ready = True
def launch(self):
#start the gui loop to listen for events
self.root.mainloop()
def process_image(self, image):
#resize image to desired width and height
#image = image.resize((self.image_width, self.image_height),Image.ANTIALIAS)
image = cv2.resize(image, (self.image_width, self.image_height))
#if image is RGB (3 channels, which means webcam image) then draw a circle on it
#for user to focus on that circle to align face
#if(len(image.shape) == 3):
# cv2.circle(image, self.circle_center, self.circle_radius, self.circle_color, 2)
#convert image to PIL library format which is required for Tk toolkit
image = Image.fromarray(image)
#convert image to Tk toolkit format
image = ImageTk.PhotoImage(image)
return image
def update_webcam_output(self, image):
#pre-process image to desired format, height etc.
image = self.process_image(image)
#pass the image to left_view to update itself
self.left_view.update_image(image)
def update_neural_network_output(self, image):
#pre-process image to desired format, height etc.
image = self.process_image(image)
#pass the image to right_view to update itself
self.right_view.update_image(image)
def update_chat_view(self, question, answer_type):
self.left_view.update_chat_view(question, answer_type)
def update_emotion_state(self, emotion_state):
self.right_view.update_emotion_state(emotion_state)
# ## Class to Access Webcam
# +
import cv2
class VideoCamera:
def __init__(self):
#passing 0 to VideoCapture means fetch video from webcam
self.video_capture = cv2.VideoCapture(0)
#release resources like webcam
def __del__(self):
self.video_capture.release()
def read_image(self):
#get a single frame of video
ret, frame = self.video_capture.read()
#return the frame to user
return ret, frame
#method to release webcam manually
def release(self):
self.video_capture.release()
#function to detect face using OpenCV
def detect_face(img):
#load OpenCV face detector, I am using LBP which is fast
#there is also a more accurate but slow Haar classifier
face_cascade = cv2.CascadeClassifier('data/lbpcascade_frontalface.xml')
#img_copy = np.copy(colored_img)
#convert the test image to gray image as opencv face detector expects gray images
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#let's detect multiscale (some images may be closer to camera than others) images
#result is a list of faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
#if no faces are detected then return original img
if (len(faces) == 0):
return img
#under the assumption that there will be only one face,
#extract the face area
    (x, y, w, h) = faces[0]
    #return only the face part of the image (rows are y, columns are x)
    return img[y:y+h, x:x+w]
# -
# ## Thread Class for Webcam Feed
# +
class WebcamThread(threading.Thread):
def __init__(self, app_gui, callback_queue):
#call super class (Thread) constructor
threading.Thread.__init__(self)
#save reference to callback_queue
self.callback_queue = callback_queue
#save left_view reference so that we can update it
self.app_gui = app_gui
#set a flag to see if this thread should stop
self.should_stop = False
#set a flag to return current running/stop status of thread
self.is_stopped = False
#create a Video camera instance
self.camera = VideoCamera()
#define thread's run method
def run(self):
#start the webcam video feed
while (True):
#check if this thread should stop
#if yes then break this loop
if (self.should_stop):
self.is_stopped = True
break
#read a video frame
ret, self.current_frame = self.camera.read_image()
if(ret == False):
print('Video capture failed')
exit(-1)
#opencv reads image in BGR color space, let's convert it
#to RGB space
self.current_frame = cv2.cvtColor(self.current_frame, cv2.COLOR_BGR2RGB)
#key = cv2.waitKey(10)
if self.callback_queue.full() == False:
#put the update UI callback to queue so that main thread can execute it
self.callback_queue.put((lambda: self.update_on_main_thread(self.current_frame, self.app_gui)))
#fetching complete, let's release camera
#self.camera.release()
#this method will be used as callback and executed by main thread
def update_on_main_thread(self, current_frame, app_gui):
app_gui.update_webcam_output(current_frame)
face = detect_face(current_frame)
app_gui.update_neural_network_output(face)
def __del__(self):
self.camera.release()
def release_resources(self):
self.camera.release()
def stop(self):
self.should_stop = True
# -
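# The thread/queue pattern above (the worker enqueues zero-argument callbacks; the Tk main
# loop drains and executes them) can be exercised without any GUI. Note the `f=frame`
# default-argument binding in this sketch: the original lambda closes over
# `self.current_frame`, which the worker keeps overwriting, so a callback may see a later
# frame than the one that was current when it was enqueued; binding at enqueue time
# captures the intended value.

```python
import queue
import threading

# Worker threads never touch the UI; they enqueue callbacks that the main thread runs.
callback_queue = queue.Queue(maxsize=10)
results = []

def worker():
    for frame in ("frame-1", "frame-2"):
        if not callback_queue.full():
            # f=frame binds the current frame now, not when the callback executes
            callback_queue.put(lambda f=frame: results.append(f))

t = threading.Thread(target=worker)
t.start()
t.join()

# The "main thread" drains the queue and executes each callback in FIFO order.
while not callback_queue.empty():
    callback_queue.get_nowait()()

print(results)  # ['frame-1', 'frame-2']
```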
# ## A GUI Wrapper (Interface) to Connect it with Data
class Wrapper:
def __init__(self):
self.app_gui = AppGui()
#create a Video camera instance
#self.camera = VideoCamera()
#intialize variable to hold current webcam video frame
self.current_frame = None
#create a queue to fetch and execute callbacks passed
#from background thread
self.callback_queue = queue.Queue()
#create a thread to fetch webcam feed video
self.webcam_thread = WebcamThread(self.app_gui, self.callback_queue)
#save attempts made to fetch webcam video in case of failure
self.webcam_attempts = 0
#register callback for being called when GUI window is closed
self.app_gui.root.protocol("WM_DELETE_WINDOW", self.on_gui_closing)
#start webcam
self.start_video()
#start fetching video
self.fetch_webcam_video()
def on_gui_closing(self):
self.webcam_attempts = 51
self.webcam_thread.stop()
self.webcam_thread.join()
self.webcam_thread.release_resources()
self.app_gui.root.destroy()
def start_video(self):
self.webcam_thread.start()
def fetch_webcam_video(self):
try:
#while True:
#try to get a callback put by webcam_thread
#if there is no callback and call_queue is empty
#then this function will throw a Queue.Empty exception
callback = self.callback_queue.get_nowait()
callback()
self.webcam_attempts = 0
#self.app_gui.root.update_idletasks()
self.app_gui.root.after(70, self.fetch_webcam_video)
except queue.Empty:
if (self.webcam_attempts <= 50):
self.webcam_attempts = self.webcam_attempts + 1
self.app_gui.root.after(100, self.fetch_webcam_video)
def test_gui(self):
#test images update
#read the images using OpenCV, later this will be replaced
#by live video feed
image, gray = self.read_images()
self.app_gui.update_webcam_output(image)
self.app_gui.update_neural_network_output(gray)
#test chat view update
self.app_gui.update_chat_view("4 + 4 = ? ", "number")
#test emotion state update
self.app_gui.update_emotion_state("neutral")
def read_images(self):
image = cv2.imread('data/test1.jpg')
#conver to RGB space and to gray scale
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
return image, gray
def launch(self):
self.app_gui.launch()
def __del__(self):
self.webcam_thread.stop()
# ## The Launcher Code For GUI
# if __name__ == "__main__":
wrapper = Wrapper()
wrapper.launch()
| GUI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# This Jupyter Notebook has code and examples of how to pull data out of https://perf.skia.org.
#
# For further information see:
#
# * Pandas: http://pandas.pydata.org/pandas-docs/stable/
# * NumPy: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
# * matplotlib: http://matplotlib.org/2.0.0/index.html
#
# +
# This is where the two functions, perf_calc and perf_query are defined.
#
# See the cells below this one for example of how to use them.
import httplib2
import json
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas
# perf_calc evaluates the formula against the last 50 commits
# and returns a pandas.DataFrame with the results of the calculations.
#
# Example: perf_calc('count(filter(""))')
#
def perf_calc(formula):
body = {
'formulas': [formula],
'tz': 'America/New_York',
}
return perf_impl(body)
# perf_query evaluates the query against the last 50 commits
# and returns a pandas.DataFrame with the results of the query.
#
# Example: perf_query('source_type=skp&sub_result=min_ms')
#
def perf_query(query):
body = {
'queries': [query],
'tz': 'America/New_York',
}
return perf_impl(body)
# utility function.
def noe(x):
if x == 1e32:
return np.nan
else:
return x
def paramset():
h = httplib2.Http()
url = 'https://perf.skia.org/_/initpage/?tz=America/New_York'
resp, content = h.request(url)
if resp.status != 200:
raise Exception("Failed to get initial bounds.")
init = json.loads(content)
return init['dataframe']['paramset']
# Shared implementation for perf_calc and perf_query.
def perf_impl(body):
h = httplib2.Http()
url = 'https://perf.skia.org/_/initpage/?tz=America/New_York'
resp, content = h.request(url)
if resp.status != 200:
raise Exception("Failed to get initial bounds.")
init = json.loads(content)
body['begin'] = init['dataframe']['header'][0]['timestamp']
body['end'] = init['dataframe']['header'][-1]['timestamp']+1
(resp, content) = h.request("https://perf.skia.org/_/frame/start", "POST",
body=json.dumps(body),
headers={'content-type': 'application/json'})
if resp.status != 200:
raise Exception("Failed to start query: " + content)
id = json.loads(content)['id']
state = {'state': 'Starting'}
url = 'https://perf.skia.org/_/frame/status/' + id
i = 0
while state['state'] != 'Success':
print '\r', '|/-\\'[i%4],
i+=1
time.sleep(0.5)
resp, content = h.request(url)
if resp.status != 200:
raise Exception("Failed during query: " + content)
state = json.loads(content)
url = 'https://perf.skia.org/_/frame/results/' + id
resp, content = h.request(url)
if resp.status != 200:
raise Exception("Failed to load results: " + content)
df = json.loads(content)
clean = {}
for key, value in df['dataframe']['traceset'].iteritems():
clean[key] = [noe(x) for x in value]
print '\r ',
return pandas.DataFrame(data=clean)
# +
# The following line makes the plots interactive.
# %matplotlib notebook
# Perform a calculation over Perf data.
df = perf_calc('count(filter(""))')
# pandas.DataFrame's can plot themselves.
df.plot()
# +
# The following line makes the plots interactive.
# %matplotlib notebook
df = perf_query('sub_result=min_ms&test=AndroidCodec_01_original.jpg_SampleSize2_640_480')
# You can also use matplotlib to do the plotting.
plt.plot(df, linestyle='-', marker='o')
# +
# %matplotlib notebook
# DataFrames allow operating on traces in bulk. For example, to
# normalize each trace to a mean of 0.0 and a std deviation of 1.0:
normed = (df - df.mean())/df.std()
plt.plot(normed)
# -
# %matplotlib notebook
df = perf_query('source_type=skp&sub_result=min_ms')
df.mean(axis=1)
# Find the noisiest models, from lowest to highest.
#
# Takes a while to run.
params = paramset()
results = pandas.DataFrame()
for model in params['model']:
df = perf_calc('ave(trace_cov(fill(filter("source_type=svg&sub_result=min_ms&model=%s"))))' % model)
if df.size > 0:
df = df.rename_axis({df.columns[0]: model}, axis="columns")
results[model] = pandas.Series([df.mean()[0]])
results.sort_values(by=0,axis=1).transpose()
# Find the differences between CPU and GPU for Chorizo.
#
# Takes a while to run.
params = paramset()
results = pandas.DataFrame()
i = 0
for model in params['name']:
if model.endswith(".skp") and (model.startswith("top") or model.startswith("key") or model.startswith("desk")):
print model
df = perf_calc("""trace_ave(ratio(
ave(filter("cpu_or_gpu=GPU&model=Chorizo&sub_result=min_ms&name=%s")),
ave(filter("cpu_or_gpu=CPU&model=Chorizo&sub_result=min_ms&name=%s"))
))""" % (model, model))
if df.size > 0:
i+=1
print model
if i > 50:
break
df = df.rename_axis({df.columns[0]: model}, axis="columns")
results[model] = pandas.Series([df.mean()[0]])
results.sort_values(by=0,axis=1).transpose()
# Find the GPU/CPU time ratio for GM tests on Chorizo, from lowest to highest.
#
# Takes a while to run.
params = paramset()
results = pandas.DataFrame()
for model in params['test']:
if model.startswith("GM_"):
df = perf_calc("""trace_ave(ratio(
ave(filter("cpu_or_gpu=GPU&model=Chorizo&sub_result=min_ms&test=%s")),
ave(filter("cpu_or_gpu=CPU&model=Chorizo&sub_result=min_ms&test=%s"))
))""" % (model, model))
if df.size > 0:
df = df.rename_axis({df.columns[0]: model}, axis="columns")
results[model] = pandas.Series([df.mean()[0]])
results.sort_values(by=0,axis=1).transpose()
| perf/jupyter/Perf+Query.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import gym
import shutil
import uuid
from gym import error, spaces, utils
from gym.utils import seeding
import sys
import os
import numpy as np
from xml.etree.ElementTree import parse
sys.path.append('/home/pi/traffic-simulator/tools/libsalt')
import libsalt
import pandas as pd
import matplotlib.pyplot as plt
# +
tree = parse(os.getcwd() + '/data/envs/salt/doan/doan(without dan).tss.xml')
root = tree.getroot()
trafficSignal = root.findall("trafficSignal")
target_tl_obj = {}
phase_numbers = []
i=0
for x in trafficSignal:
if x.attrib['signalGroup'] in ['SA 101', '101', 'SA101', 'SA 107', '107', 'SA107', 'SA 111', '111', 'SA111', 'SA 104']:
target_tl_obj[x.attrib['nodeID']] = {}
target_tl_obj[x.attrib['nodeID']]['crossName'] = x.attrib['crossName']
target_tl_obj[x.attrib['nodeID']]['signalGroup'] = x.attrib['signalGroup']
target_tl_obj[x.attrib['nodeID']]['offset'] = int(x.find('schedule').attrib['offset'])
target_tl_obj[x.attrib['nodeID']]['minDur'] = [int(y.attrib['minDur']) if 'minDur' in y.attrib else int(y.attrib['duration']) for
y in x.findall("schedule/phase")]
target_tl_obj[x.attrib['nodeID']]['maxDur'] = [int(y.attrib['maxDur']) if 'maxDur' in y.attrib else int(y.attrib['duration']) for
y in x.findall("schedule/phase")]
target_tl_obj[x.attrib['nodeID']]['cycle'] = np.sum([int(y.attrib['duration']) for y in x.findall("schedule/phase")])
target_tl_obj[x.attrib['nodeID']]['duration'] = [int(y.attrib['duration']) for y in x.findall("schedule/phase")]
tmp_duration_list = np.array([int(y.attrib['duration']) for y in x.findall("schedule/phase")])
target_tl_obj[x.attrib['nodeID']]['green_idx'] = np.where(tmp_duration_list > 5)
target_tl_obj[x.attrib['nodeID']]['main_green_idx'] = np.where(tmp_duration_list==np.max(tmp_duration_list))
target_tl_obj[x.attrib['nodeID']]['sub_green_idx'] = list(set(np.where(tmp_duration_list > 5)[0]) - set(np.where(tmp_duration_list==np.max(tmp_duration_list))[0]))
target_tl_obj[x.attrib['nodeID']]['tl_idx'] = i
target_tl_obj[x.attrib['nodeID']]['remain'] = target_tl_obj[x.attrib['nodeID']]['cycle'] - np.sum(target_tl_obj[x.attrib['nodeID']]['minDur'])
target_tl_obj[x.attrib['nodeID']]['action_space'] = (len(target_tl_obj[x.attrib['nodeID']]['green_idx'][0])-1)*2
phase_numbers.append(len(target_tl_obj[x.attrib['nodeID']]['green_idx'][0]))
i+=1
max_phase_length = int(np.max(phase_numbers))
target_tl_id_list = list(target_tl_obj.keys())
agent_num = len(target_tl_id_list)
# +
salt_scenario = 'data/envs/salt/doan/doan_2021_test.scenario.json'
tree = parse(os.getcwd() + '/data/envs/salt/doan/doan_20210401.edg.xml')
root = tree.getroot()
edge = root.findall("edge")
near_tl_obj = {}
for i in target_tl_id_list:
near_tl_obj[i] = {}
near_tl_obj[i]['in_edge_list'] = []
near_tl_obj[i]['in_edge_list_0'] = []
near_tl_obj[i]['in_edge_list_1'] = []
# near_tl_obj[i]['near_length_list'] = []
for x in edge:
if x.attrib['to'] in target_tl_id_list:
near_tl_obj[x.attrib['to']]['in_edge_list'].append(x.attrib['id'])
near_tl_obj[x.attrib['to']]['in_edge_list_0'].append(x.attrib['id'])
_edge_len = []
for n in near_tl_obj:
_tmp_in_edge_list = near_tl_obj[n]['in_edge_list']
_tmp_near_junction_list = []
for x in edge:
if x.attrib['id'] in _tmp_in_edge_list:
_tmp_near_junction_list.append(x.attrib['from'])
for x in edge:
if x.attrib['to'] in _tmp_near_junction_list:
near_tl_obj[n]['in_edge_list'].append(x.attrib['id'])
near_tl_obj[n]['in_edge_list_1'].append(x.attrib['id'])
target_tl_obj[n]['in_edge_list'] = near_tl_obj[n]['in_edge_list']
target_tl_obj[n]['in_edge_list_0'] = near_tl_obj[n]['in_edge_list_0']
target_tl_obj[n]['in_edge_list_1'] = near_tl_obj[n]['in_edge_list_1']
_edge_len.append(len(near_tl_obj[n]['in_edge_list']))
max_edge_length = int(np.max(_edge_len))
print(target_tl_obj)
print(max_edge_length)
startStep = 0
endStep = 3600
done = False
libsalt.start(salt_scenario)
libsalt.setCurrentStep(startStep)
print("init", [libsalt.trafficsignal.getTLSConnectedLinkID(x) for x in target_tl_id_list])
print("init", [libsalt.trafficsignal.getCurrentTLSPhaseIndexByNodeID(x) for x in target_tl_id_list])
print("init", [libsalt.trafficsignal.getLastTLSPhaseSwitchingTimeByNodeID(x) for x in target_tl_id_list])
print("init", [len(libsalt.trafficsignal.getCurrentTLSScheduleByNodeID(x).myPhaseVector) for x in target_tl_id_list])
print("init", [libsalt.trafficsignal.getCurrentTLSScheduleByNodeID(x).myPhaseVector[0][1] for x in target_tl_id_list])
_lane_len = []
for target in target_tl_obj:
_lane_list = []
_lane_list_0 = []
for edge in target_tl_obj[target]['in_edge_list_0']:
for lane in range(libsalt.link.getNumLane(edge)):
_lane_id = "{}_{}".format(edge, lane)
_lane_list.append(_lane_id)
_lane_list_0.append((_lane_id))
# print(_lane_id, libsalt.lane.getLength(_lane_id))
target_tl_obj[target]['in_lane_list_0'] = _lane_list_0
_lane_list_1 = []
for edge in target_tl_obj[target]['in_edge_list_1']:
for lane in range(libsalt.link.getNumLane(edge)):
_lane_id = "{}_{}".format(edge, lane)
_lane_list.append(_lane_id)
_lane_list_1.append((_lane_id))
# print(_lane_id, libsalt.lane.getLength(_lane_id))
target_tl_obj[target]['in_lane_list_1'] = _lane_list_1
target_tl_obj[target]['in_lane_list'] = _lane_list
target_tl_obj[target]['state_space'] = len(_lane_list)
_lane_len.append(len(_lane_list))
max_lane_length = np.max(_lane_len)
print(target_tl_obj)
print(np.max(_lane_len))
libsalt.close()
simulationSteps = 0
# -
target_tl_obj
| z.mine/nets/salt/test/genTLObjs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
x = 1
def f():
global x
x = 10
f()
print(x)
var = [1, 2]
def f():
var = 20
f()
var
var = [1, 2]
def f():
var = [1, 3]
f()
var
var = [1, 2]
def f():
var.append(3)
f()
var
var = [1, 2]
def f():
global var
var = 1
f()
var
var = [1, 2]
def f():
global var
var.append(10)
f()
var
# +
a = 10
def f():
global a
a = a + 10
b = 20
def g():
nonlocal b
b = b + 20
g()
return b
print(f() + a)
# -
L = [1, 2, 'a']
def f(a, b, c):
print(a, b, c)
f(*L)
def f(*args):
    print(args)
L = [1, 2, 'a']
f(*L)
perc_1 = int(input('First percentage: '))
perc_2 = int(input('Second percentage: '))
limit_ration = int(input("Which is the limit ratio? "))
# +
import numpy
from matplotlib import pyplot
# %matplotlib inline
pyplot.rc('font', family='serif', size='18')
x = []
y = []
inverso =[]
liste =[]
# Linear relation y = m*x + b fixed by the two percentages at the ratio
# limits: y(limit_ration) = perc_1 and y(1/limit_ration) = perc_2.
m = (perc_2 - perc_1) / ((1 / limit_ration) - limit_ration)
b = perc_2 - m / limit_ration
for i in range(1, 75):
    tmp = i / 2
    x.append(tmp)
    y.append(m * tmp + b)
pyplot.figure(figsize=(10, 5))
pyplot.plot(x, y, color='#2929a3', linestyle='-', linewidth=2,
label='data',alpha=0.7)
#pyplot.plot(year, reg, 'k--', linewidth=2, label='Linear regression')
pyplot.xlabel('ratio')
pyplot.ylabel('percentage')
pyplot.legend(loc='best', fontsize=15);
# +
s, w = 3000, 1300
n, o = w/s, s/w
d_s, d_w = m*n+b, m*o + b
t_s, t_w = d_w*w/100, d_s*s/100
d_s, d_w, t_s, t_w, t_s + t_w
# -
| calculo salario.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.4 64-bit (''qc-3.9'': conda)'
# name: python3
# ---
# Re-running the graph witness notebook with refactored code
# +
import qcdenoise as qcd
import matplotlib.pyplot as plt
import qiskit as qk
import numpy as np
import networkx as nx
import os
# -
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
# ### 1. Building Quantum Circuit from Graph States Database
#
# #### Load the default graph state database
#
# This is the corpus from which subgraphs are drawn and combined to form random graphs
graph_db = qcd.GraphDB(directed=True)
graph_db.plot_graph(graph_number=[35])
graph_db['35']
graph_db.plot_graph(graph_number=[17])
graph_db['17']
graph_db.plot_graph(graph_number=[3])
graph_db['3']
# #### Initialize a quantum-circuit builder from graph states
#
# Build a random graph on 4 qubits with undirected edges
n_qubits=4
graph_state = qcd.GraphState(graph_db=graph_db, n_qubits=n_qubits)
g = graph_state.sample()
nx.draw(g)
circ_builder = qcd.CXGateCircuit(n_qubits=n_qubits,stochastic=False)
circ_builder.stochastic
circ_builder.build(g)["circuit"].draw(output='mpl')
base_circuit = circ_builder.circuit
# Once the circuit builder is initialized and the base circuit is constructed, the generators and stabilizer operators can be built
#
# Calling `TothStabilizer` or `JungStabilizer` will build the sub-circuits needed to measure each stabilizer (Pauli string)
#
# **Note** the generators and stabilizers will depend on the neighborhoods of each vertex, and what is defined as a neighbor is dependent on the `NetworkX` graph structure. If the edges are directed (pass `directed=True` inside `GraphConstructor`) then only vertices connected by a directed arc that terminates at vertex (i) are neighbors of (i). On the other hand, if the graph is undirected (the default is `directed=False`) then any vertex connected by an edge to vertex (i) is a neighbor.
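The neighborhood distinction in the note above can be illustrated with a small `NetworkX` sketch (hypothetical vertices, unrelated to the graph `g` sampled earlier in this notebook):

```python
import networkx as nx

# Hypothetical 4-vertex graph (not one from the graph-state database).
demo = nx.DiGraph([(0, 1), (2, 1), (1, 3)])

# Directed case: only vertices with an arc terminating at vertex 1 count.
directed_neighbors = sorted(demo.predecessors(1))

# Undirected case: any vertex sharing an edge with vertex 1 is a neighbor.
undirected_neighbors = sorted(demo.to_undirected().neighbors(1))

print(directed_neighbors, undirected_neighbors)
```

With the directed convention, vertex 3 is not a neighbor of vertex 1 (the arc points away from 1), while the undirected convention includes it.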
stabilizer = qcd.TothStabilizer(g, n_qubits=n_qubits)
stab_ops = stabilizer.find_stabilizers()
stabilizer_circuit_dict = stabilizer.build()
stabilizer_circuit_dict
# The keys of the `circuit_dict` are the Pauli strings to measure; comparing the keys to the graph `g` above, they are correctly defined
stabilizer_circuit_dict['ZXIZ'].draw(output='mpl')
stabilizer_circuit_dict['IIXZ'].draw(output='mpl')
stabilizer_circuit_dict['IIII'].draw(output='mpl')
stabilizer_circuit_dict['IZZX'].draw(output='mpl')
stabilizer_circuit_dict['XZII'].draw(output='mpl')
# _There is something wrong with the stabilizer construction: the initial Pauli operator is being dropped_
# + active=""
# stabilizer.find_stabilizers()
# + active=""
# qcd.stabilizers.get_unique_operators(stabilizer.find_stabilizers())
# -
# **Current Workaround** When the stabilizer strings are built using `find_stabilizers` followed by `get_unique_operators`, the leading sign coefficient is dropped; with `drop_coef=True` this then results in a Pauli operator getting dropped. The current workaround sets the default value to `drop_coef=False`.
#
# Stabilizer stubs are correctly built with this workaround
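The collision behind this workaround can be sketched in plain Python (hypothetical Pauli strings; this illustrates why dropping the sign loses an operator, it is not qcdenoise's actual deduplication code):

```python
# Hypothetical stabilizer strings; "-XZII" and "XZII" are distinct operators.
ops = ["XZII", "-XZII", "IZZX"]

# drop_coef=True behaviour: stripping the leading sign makes the first two
# strings identical, so one operator silently disappears on deduplication.
unique_dropped = {s.lstrip("+-") for s in ops}

# drop_coef=False workaround: the sign is kept, so all three survive.
unique_kept = set(ops)

print(len(unique_dropped), len(unique_kept))  # 2 3
```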
stabilizer_circuit_dict
# ##### Build and measure stabilizer circuits
#
# Pass the dictionary of stabilizer sub-circuits and the base circuit
from qiskit.test.mock import FakeValencia, FakeTokyo, FakeMontreal
from qiskit.providers.aer.noise import NoiseModel
sampler = qcd.StabilizerSampler(n_shots=1024, backend=FakeMontreal())
ideal_counts = sampler.sample(stabilizer_circuits=stabilizer_circuit_dict,
graph_circuit=base_circuit)
ideal_counts
witness = qcd.GenuineWitness(n_qubits=n_qubits,
stabilizer_circuits=stabilizer_circuit_dict,
stabilizer_counts=ideal_counts)
witness.stabilizer_measurements
witness.estimate(graph=g,noise_robust= 0)
sampler.backend
sampler.noise_model
# backend = AerSimulator.from_backend(FakeValencia())
# sampler.backend=backend
noise_model=NoiseModel.from_backend(FakeTokyo())
tokyo_counts = sampler.sample(stabilizer_circuits=stabilizer_circuit_dict,
graph_circuit=base_circuit,noise_model=noise_model)
sampler.noise_model
# +
witness = qcd.BiSeparableWitness(n_qubits=n_qubits,
stabilizer_circuits=stabilizer_circuit_dict,
stabilizer_counts=tokyo_counts)
witness.estimate(graph=g)
# -
witness.estimate(graph=g)
# #### Extract the expectation values from the counts
# Use the built-in Qiskit functions of `ignis`. When `Witness` is constructed, the diagonals and expectation values are evaluated and stored. Calling `evaluate()` constructs the witness value from these stored values
witness.stabilizer_circuits
witness.stabilizer_counts
witness.diagonals
witness.stabilizer_measurements
witness = qcd.GenuineWitness(n_qubits=n_qubits,
stabilizer_circuits=stabilizer_circuit_dict,
stabilizer_counts=tokyo_counts)
witness.stabilizer_measurements
# #### Construct the witness value by hand using the stored values
non_iden_keys = [x for x in witness.stabilizer_measurements if x!='IIII']
genuine_wit = (n_qubits-1)*witness.stabilizer_measurements['IIII'][0]-\
np.sum([witness.stabilizer_measurements[kdx][0] for kdx in non_iden_keys])
genuine_wit
# #### Construct the witness value using `.estimate()`
witness.estimate(graph=g,noise_robust= 0)
| examples/graph_witness_refractored.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic Regression
# Logistic Regression is actually a classification algorithm. In its simplest form, it performs binary classification (i.e. assigns a label of 1 or 0), but it can be extended to multi-class classification as well.
# To study this we will use the following
import pandas as pd
import numpy as np
# +
df = pd.read_csv(r'C:\Users\pmercati\Documents\intro_to_machine_learning\intro_to_machine_learning\breast-cancer-wisconsin.csv')
cols = df.columns
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
df.head()
# +
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
# +
p = figure(title="Breast Cancer Data", toolbar_location=None)
X = np.zeros(np.shape(df['class']))
for name in df.columns.drop(['ID', 'class']):  # sum features only, not the label
    X = X + np.array(df[name])
# +
p.scatter(X, df['class'])
show(p)
# -
np.shape(df['class'])
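The binary decision rule described at the top of this notebook can be sketched directly in NumPy (plain gradient descent on the logistic log-loss over synthetic, linearly separable data; this is a minimal illustration, not a fit of the breast-cancer CSV loaded above):

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps any real score into a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    # Batch gradient descent on the log-loss; returns weights and bias.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # separable binary labels

w, b = fit_logistic(X, y)
accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(accuracy)
```

On separable data like this, the fitted boundary recovers the generating rule and the training accuracy is close to 1.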
| 2_logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="J2a8KT0jNMrt"
# #### Copyright 2020 Google LLC.
# + colab={} colab_type="code" id="5i5kL8_wNNvo"
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="4b6a60p_OCiW"
# # Introduction to Pandas
#
# [Pandas](https://pandas.pydata.org/) is an open-source library for data analysis and manipulation. It is a go-to toolkit for data scientists and is used extensively in this course.
#
# Pandas integrates seamlessly with other Python libraries such as [NumPy](http://www.numpy.org) and [Matplotlib](http://www.matplotlib.org) for numeric processing and visualizations.
#
# When using Pandas, we will primarily interact with [DataFrames](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) and [Series](https://pandas.pydata.org/pandas-docs/stable/reference/series.html), which we will introduce in this lab.
# + [markdown] colab_type="text" id="vkjMnw7c7WTc"
# ## Importing Pandas
#
# In order to use Pandas, you must import it. This is as simple as:
#
# ```python
# import pandas
# ```
#
# However, you'll rarely see Pandas imported this way. By convention, programmers alias Pandas as `pd`. This isn't a requirement, but it is a pattern that you'll see repeated often.
#
# To import Pandas in the conventional manner run the code block below.
# + colab={} colab_type="code" id="t_i-rT9N8Iou"
import pandas as pd
pd.__version__
# + [markdown] colab_type="text" id="C07_w8VL8l_J"
# After importing Pandas as `pd` we can use pandas by calling methods provided by `pd`. In the code block above we printed the Pandas version.
#
# Pandas reached version 1.0.0 on January 29, 2020. The interface should stay relatively stable until a 2.0.0 release is declared sometime in the future. If you ever have a problem where a Pandas function isn't acting the way you think it should, be sure to check which version you are using and find the documentation for that specific version.
# + [markdown] colab_type="text" id="6Lde64-x9QX5"
# ## Pandas Series
# + [markdown] colab_type="text" id="3z9fwxzVHDBp"
# A [Series](https://pandas.pydata.org/pandas-docs/stable/reference/series.html) represents a sequential list of data. It is a foundational building block of the powerful [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) that we'll cover later in this lab.
# + [markdown] colab_type="text" id="-oAlhrAG9lc6"
# ### Creating a Series
#
# We create a new `Series` object as we would any Python object:
#
# ```python
# s = pd.Series()
# ```
#
# This creates a new, empty `Series` object, which isn't very interesting. You can create a series object with data by passing it a list or tuple:
# + colab={} colab_type="code" id="7vXrfIIJ9ocv"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
series = pd.Series(temperatures)
print(type(series))
print(series)
# + [markdown] colab_type="text" id="grcARA89-4BI"
# Here we created a new `pandas.core.series.Series` object with ten values presumably representing some temperature measurement.
# + [markdown] colab_type="text" id="N13kNb0m_T3t"
# ### Analyzing a Series
#
# You can ask the series to compute information about itself. The `describe()` method provides statistics about the series.
# + colab={} colab_type="code" id="b3sVqlhj_M61"
series.describe()
# + [markdown] colab_type="text" id="v3POByAk_1U2"
# You can also find other information about a `Series` such as if its values are all unique:
# + colab={} colab_type="code" id="bYppQOdnADYw"
series.is_unique
# + [markdown] colab_type="text" id="dnf7A_IyAJ-o"
# Or if it is monotonically increasing or decreasing:
# + colab={} colab_type="code" id="nwnBtvZvAOk8"
print(series.is_monotonic)
# + [markdown] colab_type="text" id="kyc0HP_eAZ9q"
# #### Exercise 1: Standard Deviation
#
# Create a series using the list of values provided below. Then, using a function in the [Series](https://pandas.pydata.org/pandas-docs/stable/reference/series.html) class, find the standard deviation of the values in that series and store it in the variable `std_dev`.
# + [markdown] colab_type="text" id="s_8aPljkA_Om"
# **Student Solution**
# + colab={} colab_type="code" id="HQ430Um3BBXE"
import pandas as pd
weights = (120, 143, 98, 280, 175, 205, 210, 115, 122, 175, 201)
series = None # Create a series and assign it here.
std_dev = None # Find the standard deviation of the series and assign it here.
print(std_dev)
# + [markdown] colab_type="text" id="bJELcyz-7kXH"
# ---
# + [markdown] colab_type="text" id="4icP6JbDB8Iy"
# ### Accessing Values
#
# Let's take another look at the first series that we created in this lab:
# + colab={} colab_type="code" id="EIsjHq0LCTBi"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
series = pd.Series(temperatures)
print(type(series))
print(series)
# + [markdown] colab_type="text" id="-Az7xR9eCYgD"
# We can see the values printed down the right-side column. But what are those numbers along the left?
#
# They are **indices**.
#
# You are probably thinking that `Series` objects feel a whole lot like lists, tuples, and `NumPy` arrays. If so, you are correct.
#
# They are very similar to these other sequential data structures, and individual items in a series can be accessed by index as expected.
# + colab={} colab_type="code" id="5fIJthCqCyhg"
series[4]
# + [markdown] colab_type="text" id="uQ25ozJdEfIf"
# You can also loop over the values in a `Series`.
# + colab={} colab_type="code" id="cbvHuOQrEaSo"
for temp in series:
print(temp)
# + [markdown] colab_type="text" id="1LzL_Fd3GE44"
# ### Modifying Values
# + [markdown] colab_type="text" id="GOY4ZG8pCx1J"
# Series are mutable, so you can modify individual values.
# + colab={} colab_type="code" id="VChNSnoiE7Ou"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
series = pd.Series(temperatures)
print(series[1])
series[1] = 65
print(series[1])
# + [markdown] colab_type="text" id="NsJ3RQgZSrmV"
# You can also modify all of the elements in a series using standard Python expressions. For instance, if we want to add `1` to every item in a series, we can just do:
# + colab={} colab_type="code" id="e3ebIBLBS2qM"
series + 1
# + [markdown] colab_type="text" id="PYL0lCxLS53k"
# Note that this doesn't actually change the `Series` though. To do that we need to assign the computation back to our original series.
#
# More operations than addition can be applied. You can add, subtract, multiply, divide, and more with a simple Python expression.
# + colab={} colab_type="code" id="1AAsYHEiTBfH"
series = series + 1
# + [markdown] colab_type="text" id="O_CCr4BBGo3e"
# You can remove values from the series by index using `pop`:
#
# + colab={} colab_type="code" id="Lib564--GsbO"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
series = pd.Series(temperatures)
print(series)
series.pop(4)
print(series)
# + [markdown] colab_type="text" id="_wAoxGO2Id-t"
# Notice that when we print the series out a second time, the index with value `4` is missing. After we pop the value out, the index is no longer valid to access!
# + colab={} colab_type="code" id="UG967A4PIXLi"
try:
print(series[4])
except:
print('Unable to print the value at index 4')
# + [markdown] colab_type="text" id="rDGbTJZ6R4HK"
# In order to get the indices back into a smooth sequential order, we can call the `reset_index` function. We pass the argument `drop=True` to tell Pandas *not* to save the old index as a new column. We pass the argument `inplace=True` to tell Pandas to modify the series directly instead of making a copy.
# + colab={} colab_type="code" id="3pNFx47CQ9QT"
series.reset_index(drop=True, inplace=True)
series
# + [markdown] colab_type="text" id="nBdi6PqrizOF"
# This is very different from what we would expect from a normal Python list! While it is possible to use `pop` on a list, the indices will automatically reset.
# + colab={} colab_type="code" id="wljBRcPCikxQ"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
print(temperatures)
temperatures.pop(4)
print(temperatures[4])
# + [markdown] colab_type="text" id="xPXWRLqIHNk9"
# You can also add values to a `Series` by appending another `Series` to it. We pass the argument `ignore_index=True` to tell Pandas to append the values with new indices, rather than copying over the old indices of the appended values. In this case, that means the new values (`66` and `74`) get the indices `10` and `11`, rather than `0` and `1`:
# + colab={} colab_type="code" id="s_VBjALUHWlr"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
series = pd.Series(temperatures)
print(series)
new_series = pd.Series([66, 74])
series = series.append(new_series, ignore_index=True)
print(series)
# + [markdown] colab_type="text" id="nnnkyznaET1H"
# #### Exercise 2: Sorting a Series
#
# Find the correct method in the [Series documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html) to sort the values in `series` in ascending order. Be sure the indices are also sorted and that the new sorted series is stored in the `series` variable.
# + [markdown] colab_type="text" id="llEvrONtFSg5"
# **Student Solution**
# + colab={} colab_type="code" id="0Iy_JGihEeQU"
temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]
series = pd.Series(temperatures)
# Your code goes here.
print(series.sort_values())
# + [markdown] colab_type="text" id="z1CZZGSYGG_B"
# ---
# + [markdown] colab_type="text" id="DluyH3N3tyPE"
# ## Pandas DataFrame
# + [markdown] colab_type="text" id="DN1Efp21HO6u"
# Now that we have a basic understanding of `Series`, let's dive into the [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html). If you picture `Series` as a *list* of data, you can think of `DataFrame` as a *table* of data.
#
# A `DataFrame` consists of one or more `Series` presented in a tabular format. Each `Series` in the `DataFrame` is a column.
# + [markdown] colab_type="text" id="16datPOKIC9p"
# ### Creating a DataFrame
# + [markdown] colab_type="text" id="pYTSLBdEAuau"
# We can create an empty `DataFrame` using the `DataFrame` class in Pandas:
#
# ```python
# df = pd.DataFrame()
# ```
#
# But an empty `DataFrame` isn't particularly exciting. Instead, let's create a `DataFrame` using a few series.
#
# In the code block below you'll see that we have three series:
#
# 1. Cities
# 1. Populations of those cities
# 1. Number of airports in those cities
#
# + colab={} colab_type="code" id="hErWNo6RS0-N"
city_names = pd.Series([
'Atlanta',
'Austin',
'Kansas City',
'New York City',
'Portland',
'San Francisco',
'Seattle',
])
population = pd.Series([
498044,
964254,
491918,
8398748,
653115,
883305,
744955,
])
num_airports = pd.Series([
2,
2,
8,
3,
1,
3,
2,
])
print(city_names, population, num_airports)
# + [markdown] colab_type="text" id="_weonc5sL2eB"
# We can now combine these series into a `DataFrame`, using a dictionary with keys as the column names and values as the series:
# + colab={} colab_type="code" id="HOTQnmJjL_qI"
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
print(df)
# + [markdown] colab_type="text" id="G6XEeFU7MN-J"
# The data is now displayed in a tabular format. We can see that there are three columns: `City Name`, `Population`, and `Airports`. There are seven rows, each row representing the data for a single city.
# + [markdown] colab_type="text" id="jMCslIbiMfDB"
# In the block above we used the `print` function to display the `DataFrame`, which printed out the data in a plain text form. Colab and other notebook environments can "pretty print" DataFrames if you make it the last part of a code block and don't wrap the variable in a `print` statement. Run the code block below to see this in action.
# + colab={} colab_type="code" id="C0VbODk7M3NL"
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
df
# + [markdown] colab_type="text" id="ppMjIT9gM6L1"
# That's much easier on the eyes! The rows are colored in an alternating background color scheme, which makes long rows of data easier to view.
# + [markdown] colab_type="text" id="A397Fja_NOdX"
# ### Analyzing a DataFrame
#
# Similar to a `Series`, you can ask the `DataFrame` to compute information about itself. The `describe()` method provides statistics about the `DataFrame`.
# + colab={} colab_type="code" id="YLIM4mGkAsjT"
df.describe()
# + [markdown] colab_type="text" id="VhWWKfKlOQoT"
# These are the same statistics that we got when we called `describe` on a `Series` above. As you work with Pandas, you'll find that many of the methods that operate on `Series` also work with `DataFrame` objects.
#
# *Did you notice something missing in the output from `describe` though?*
#
# We have three columns in our `DataFrame`, but only two columns have statistics printed for them. This is because `describe` only works with numeric `Series` by default, and the 'City Name' column is a string.
#
# To show all columns add an `include='all'` argument to describe:
# + colab={} colab_type="code" id="AmWICoAVVqYi"
df.describe(include='all')
# + [markdown] colab_type="text" id="LYMlR8S1Vy7v"
# We now get a few more metrics specific to string columns: `unique`, `top`, and `freq`. We can now also see the 'City Name' column.
# + [markdown] colab_type="text" id="LyvcomOpD_bw"
# If we want to look at the data we could print the entire `DataFrame`, but that doesn't scale well for really large `DataFrames`. The `head` method is a way to just look at the first few rows of a `DataFrame`.
# + colab={} colab_type="code" id="xKh5SCetDXER"
df.head()
# + [markdown] colab_type="text" id="UFlAcJryWYFx"
# Conversely, the `tail` method returns the last few rows of a data frame.
# + colab={} colab_type="code" id="LZAD5VbtDYqH"
df.tail()
# + [markdown] colab_type="text" id="nYziu5ScrsIn"
# You can also choose the number of rows you want to print as part of `head` and `tail`.
# + colab={} colab_type="code" id="xRuRx_Zkrwxx"
df.head(3)
# + [markdown] colab_type="text" id="UkDcNsq2Wi2l"
# These are useful ways of taking a look at actual data, but they can have some inherent bias in them. If the data is sorted by any column values, `head` or `tail` might show a skewed view of the data.
#
# One way to combat this is to always look at both the head and tail of your data. Another way is to randomly sample your data and look at the sample. This will reduce the chance that you are seeing a lopsided view of your data.
# + [markdown] colab_type="text" id="SjTWrMmfETIi"
# We can also visualize the data in a `DataFrame`. The `hist` command will make a histogram of each of the numerical columns. As you will see, some of these histograms are more informative than others.
# + colab={} colab_type="code" id="iOiWtVR2DZyR"
_ = df.hist()
# + [markdown] colab_type="text" id="35ne0g79WoYM"
# **What Information Might We Gain From These Histograms?**
#
# In the airports histogram, we can see that there is one outlier (Kansas City), while every other city has between one and three airports.
#
# In the population histogram, we can see that there is also one outlier (New York City), whose population is an order of magnitude larger, such that all other populations look very close to zero in comparison. We also see here how the axes can get very messy.
# + [markdown] colab_type="text" id="dXr-NQYcXF23"
# #### Exercise 3: Sampling Data
#
# Find a method in the [DataFrame documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) that returns a random sample of your `DataFrame`. Call that method and make it return five rows of data.
# + [markdown] colab_type="text" id="8qXQUroOXYcd"
# **Student Solution**
# + colab={} colab_type="code" id="ag_gH8H7XaO8"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
# Your Code Goes Here
# + [markdown] colab_type="text" id="eZhR8SFsX9mm"
# ---
# + [markdown] colab_type="text" id="rk03BOzuYNhM"
# ### Accessing Values
#
# We saw that individual values in a `Series` can be accessed using indexing similar to that seen in standard Python lists and tuples. Accessing values in `DataFrame` objects is a little more involved.
# + [markdown] colab_type="text" id="qWtjPJr_Ygz0"
# #### Accessing Columns
#
# To access an entire column of data you can index the `DataFrame` by column name. For instance, to return the entire `City Name` column as a `Series` you can run the code below:
# + colab={} colab_type="code" id="8WAQRgoYbJxX"
df['City Name']
# + [markdown] colab_type="text" id="o66D4v6GgwJj"
# But what if you want a `DataFrame` instead of a `Series`?
#
# In this case, you index the `DataFrame` using a list, where the list contains the name of the column that you want returned as a `DataFrame`:
# + colab={} colab_type="code" id="FPSyeueTgtqH"
df[['City Name']]
# + [markdown] colab_type="text" id="0eVwq3W5hB4K"
# Similarly, you can return more than one column in the resultant `DataFrame`:
# + colab={} colab_type="code" id="UvKD5hIPhISw"
df[['City Name', 'Population']]
# + [markdown] colab_type="text" id="ZHr4L2LghLvx"
# Sometimes you might also see columns of data referenced using the dot notation:
# + colab={} colab_type="code" id="XsE5Phm8hMqp"
df.Population
# + [markdown] colab_type="text" id="CPpoBPTRhS2B"
# This is a neat trick, but it is problematic for a couple of reasons:
#
# 1. You can only get a `Series` back.
# 1. It is impossible to reference columns with spaces in the names with this notation (ex. 'City Name').
# 1. It is confusing if a column has the same name as an inbuilt method of a `DataFrame`, such as `size`.
#
# We mention this notation because you'll likely see it. However, we don't advise using it.
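As a quick aside (using a hypothetical two-column `DataFrame`, not the city data above), bracket indexing handles any column name, while dot notation is limited to names that are valid Python identifiers:

```python
import pandas as pd

# Sketch: bracket indexing works for any column name, while dot notation
# only works for names that are valid Python identifiers (no spaces).
demo = pd.DataFrame({'City Name': ['Atlanta', 'Austin'],
                     'Population': [498044, 964254]})

by_bracket = demo['City Name']   # always works
by_dot = demo.Population         # works only for simple names
# demo.City Name  <- cannot even be written: the space is a syntax error
```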
# + [markdown] colab_type="text" id="nnTuc0y0hrQu"
# #### Accessing Rows
#
# In order to access rows of data, you can't use standard indexing. It would seem natural to index using a numeric row value, but as you can see in the example below, this yields a `KeyError`.
# + colab={} colab_type="code" id="KcD_78_thoNL"
try:
df[1]
except KeyError:
print('Got KeyError')
# + [markdown] colab_type="text" id="vrm4fuNTiVbz"
# This is because the default indexing is to look for column names, and numbers are valid column names. If you had a column named `1` in a `DataFrame` with at least two rows, Pandas wouldn't know if you wanted row `1` or column `1`.
#
# In order to index by row, you must use the `iloc` feature of the `DataFrame` object.
# + colab={} colab_type="code" id="vzHQh72JiezU"
df.iloc[1]
# + [markdown] colab_type="text" id="MXfytch_imDy"
# The code above returns the second row of data in the `DataFrame` as a `Series`.
#
# You can also return multiple rows using slices:
# + colab={} colab_type="code" id="UKDTY0aEihFN"
df.iloc[1:3]
# + [markdown] colab_type="text" id="qtmJzbuRkhJ9"
# As an aside, if you do use a range, then `iloc` is optional since columns can't be referenced in a range, and the default selector can disambiguate what you are doing. This can be a little confusing, though, so try to avoid it.
# + colab={} colab_type="code" id="Dv1kBnrAkuG_"
df[1:3]
# + [markdown] colab_type="text" id="fF0IS7r_jFcG"
# If you want sparse rows that don't fall into an easily defined range, you can pass `iloc` a list of rows that you would like returned:
# + colab={} colab_type="code" id="jgGFU_fGi3YI"
df.iloc[[1, 3]]
# + [markdown] colab_type="text" id="KDt0iotCjNtQ"
# ##### Exercise 4: Single Row as a `DataFrame`
#
# Given the methods of accessing rows in a `DataFrame` that we have learned so far, how would you access the third row in the `df` `DataFrame` defined below as a `DataFrame` itself (as opposed to as a `Series`)?
# + [markdown] colab_type="text" id="qMDjTzIQjlUJ"
# **Student Solution**
# + colab={} colab_type="code" id="Me-p2uNXjnFJ"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
# Your Code Goes Here
# + [markdown] colab_type="text" id="uHSFpxGkj7ZQ"
# ---
# + [markdown] colab_type="text" id="Lyx0bH8xuCg0"
# ##### Accessing Row/Column Intersections
#
# We've learned how to access columns by direct indexing on the `DataFrame`. We've learned how to access rows by using `iloc`. You can combine these two access methods using the `loc` functionality of the `DataFrame` object.
#
# Simply call `loc` and pass it two arguments:
#
# 1. The row(s) you want to access
# 1. The column(s) you want to access
#
# In the example below we access the 'City Name' in the third row of the `DataFrame`:
# + colab={} colab_type="code" id="2SEzG3jXuYah"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
df.loc[2, 'City Name']
# + [markdown] colab_type="text" id="K3lc7le8ChF1"
# In the example below we access the 'City Name' and 'Airports' columns in the third and fourth rows of the `DataFrame`:
# + colab={} colab_type="code" id="OKCMEAhhuyPl"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
df.loc[[2,3], ['City Name', 'Airports']]
# + [markdown] colab_type="text" id="xtV_FAiJUMD4"
# We will learn more about `loc` in the next section. Specifically, we will come to understand how using `loc` enables us to access a `DataFrame` directly in order to modify it.
# + [markdown] colab_type="text" id="4_aCEXgSrfxa"
# #### Modifying Values
#
# There are many ways to modify values in a `DataFrame`. We'll look at a few of the more straightforward ways in this section.
# + [markdown] colab_type="text" id="tuL-wDyIWeZD"
# ##### Modifying Individual Values
#
# The easiest way to modify a single value in a `DataFrame` is to directly index it on the left-hand side of an assignment.
#
# Let's say the Seattle area got a new commercial airport called Paine Field. If we want to increment the number of airports for Seattle, we could access the Seattle airport count directly and modify it:
# + colab={} colab_type="code" id="T2uZqM9ZsXsR"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
df.loc[6, 'Airports'] = 3
df
# + [markdown] colab_type="text" id="FD1SlNIzUtmQ"
# ##### Modifying an Entire Column
#
# Modifying a single value is a great skill to have, especially when working with small numbers of **outliers**. However, you'll often want to work with larger swaths of data.
#
# When would you want to do this?
#
# Consider the 'Population' column that we have been working with in this lab. It is integer-valued; however, in some cases it might be better to work with the population in thousands. For this we can make a column-level modification.
#
# In the example below we simply divide the population by 1,000.
# + colab={} colab_type="code" id="ncU6tftLUzFz"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
df['Population'] /= 1000
df
# + [markdown] colab_type="text" id="tgzG63uPbJ5T"
# Instead of overwriting the existing column, you may instead want to create a new column. This can be done by assigning to a new column name:
# + colab={} colab_type="code" id="OhCE-9pPcFeL"
city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City',
'Portland', 'San Francisco', 'Seattle'])
population = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])
num_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])
df = pd.DataFrame({
'City Name': city_names,
'Population': population,
'Airports': num_airports,
})
df['Population_K'] = df['Population'] / 1000
df
# + [markdown] colab_type="text" id="5wIgLlZlW1h_"
# ### Fetching Data
# + [markdown] colab_type="text" id="7fCcGyteW6jx"
#
# So far we have created the data that we have worked with from scratch. In reality, we'll load our data from a file system, the internet, a database, or one of many other sources.
#
# Throughout this course, we'll load data in many ways. Let's start by loading the data from the internet directly.
#
# For this, we'll use the Pandas method `read_csv`. This method can read comma-separated data from a URL. See an example below:
# + colab={} colab_type="code" id="yAcjA1sEXzNQ"
url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv"
california_housing_dataframe = pd.read_csv(url)
california_housing_dataframe
# + [markdown] colab_type="text" id="8QEbef-5X26x"
# We now have a `DataFrame` full of data about housing prices in California. This is a classic dataset that we'll look at more closely in future labs. For now, we'll load it in and try to get an understanding of the data.
# + [markdown] colab_type="text" id="fMlB8JM-Eggv"
# ## Exercise 5: Exploring Data
#
# In this exercise we will write code to explore the California housing dataset mentioned earlier in this lab. As seen previously, we can load the data using the following code:
# + colab={} colab_type="code" id="cs5Zny9dE2S5"
url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv"
california_housing_df = pd.read_csv(url)
california_housing_df
# + [markdown] colab_type="text" id="FU2N1NDEdXb_"
# ### Question 1: Histograms
#
# This question will have two parts: one coding and one data analysis.
#
# #### Question 1.1: Display Histograms
#
# Write the code to display histograms for all numeric columns in the `california_housing_df` object.
#
# **Student Solution**
# + colab={} colab_type="code" id="VESvrqqEd845"
# Your Code Goes Here
# + [markdown] colab_type="text" id="JvUQhwWsDx2S"
# ---
# + [markdown] colab_type="text" id="PHDDgUJteHY2"
# #### Question 1.2: Histogram Analysis
#
# Two of the histograms have two strong peaks rather than one. Which columns are these? What do you think this tells us about the data?
# + [markdown] colab_type="text" id="H5SCdTyaFW_q"
# **Student Solution**
# + [markdown] colab_type="text" id="EwzTPaISebrx"
# What are the names of the two columns with two strong peaks each?
# 1. *Write the first column name here*
# 1. *Write the second column name here*
#
# What insights do you gather from the columns with dual peaks?:
# * *Write your answer here*
# + [markdown] colab_type="text" id="MoAz4DxUEy_-"
# ---
# + [markdown] colab_type="text" id="mCBkUlYvfj0X"
# ### Question 2: Ordering
#
# Does there seem to be any obvious ordering to the data? If so, what is the ordering? Show the code that you used to determine your answer.
#
# + [markdown] colab_type="text" id="sdgWRcYYfsGg"
# **Student Solution**
# + [markdown] colab_type="text" id="rgZpIOT7fufN"
# Is there any ordering?
# * *(Yes/No)*
#
# If there was ordering, what columns were sorted and in what order (ascending/descending)?:
# * *Write your answer here*
#
#
# + [markdown] colab_type="text" id="6JbbqruzgHWJ"
# What code did you use to determine the answer?
# + colab={} colab_type="code" id="gABo8DFVfttD"
# Your code goes here
# + [markdown] colab_type="text" id="EOU9lI8yhDC-"
# ---
# + [markdown] colab_type="text" id="KhYviOYIhGNy"
# ## Exercise 6: Creating a New Column
#
# Create a new column in `california_housing_df` called `persons_per_bedroom` that is the ratio of `population` to `total_bedrooms`.
# + [markdown] colab_type="text" id="ymqzKjJDh9zg"
# **Student Solution**
# + colab={} colab_type="code" id="qSjv3cwZh55t"
url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv"
california_housing_df = pd.read_csv(url)
# Your Code Goes Here
# + [markdown] colab_type="text" id="tfT14I7kGxJT"
# ---
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: uc19
# language: python
# name: uc19
# ---
# +
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import datetime
from scipy.signal import argrelextrema
import scipy.interpolate as interpolate
from tensorflow.keras.callbacks import Callback
from matplotlib import rc
rc('text', usetex=True)
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.edgecolor'] = '#121212'
plt.rcParams['axes.linewidth'] = 1
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['grid.linestyle'] = ':'
plt.rcParams['text.latex.preamble'] = r"\usepackage{amsmath}"
rc('font',**{'family':'serif','serif':['New Century Schoolbook']})
# For the rolling average
window_size = 7
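As a quick sanity check of this smoothing (a sketch, not part of the original pipeline): `rolling(7).mean()` yields `NaN` for the first six points, which the `dropna()` used below removes.

```python
import pandas as pd

# A 7-point rolling mean over ten daily values: the first six results
# are NaN, so only four smoothed points remain after dropna().
s = pd.Series(range(1, 11))              # ten daily values 1..10
smoothed = s.rolling(7).mean().dropna()  # first value is mean(1..7) = 4.0
```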
# +
def makeuniversal(df_c, country, startdate, datet_max, markers, plt, fact=1./2.):
df_p = pd.DataFrame()
    if len(country) != len(startdate):
        raise ValueError('country and startdate must have the same length')
for i in range(len(country)):
df = ((df_c[df_c['Country/Region']==country[i]].iloc[-1:]).iloc[0][4:]).rolling(window_size).mean().dropna()
df.index = pd.to_datetime(df.index)
df = df[startdate[i]:]
Nmax = df[datet_max[i]]
t1_2 = (df.iloc[(df-Nmax*fact).abs().argsort()[:1]].index[0]-datetime.datetime.strptime(startdate[i], '%Y-%m-%d')).days
x = np.linspace(1,len(df[:datet_max[i]]),len(df[:datet_max[i]]))/t1_2
df_r = df[:datet_max[i]]/Nmax
df_temp = pd.DataFrame()
df_temp['X'] = x
df_temp['Y'] = df_r.values
df_temp['country'] = country[i]
df_temp['Nmax'] = Nmax
df_temp['t1/2'] = t1_2
df_p = pd.concat([df_p, df_temp])
plt.scatter(df_temp.X, df_temp.Y, label=country[i]+r' ($t_{1/2}:$ ' + str(t1_2) + ')', marker=markers[i], s=20)
return plt, df_p
class TerminateOnBaseline(Callback):
""" Callback that terminates training when monitored value reaches a specified baseline
"""
def __init__(self, monitor='val_loss', patience=50):
super(TerminateOnBaseline, self).__init__()
self.monitor = monitor
self.baseline = np.Inf
self.patience = patience
self.wait = 0
self.stopped_epoch = 0
self.best = np.Inf
self.best_weights = None
self.best_epoch = 0
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
value = logs.get(self.monitor)
if epoch == 0:
self.baseline = value/1000.
if np.less(value, self.best):
self.best = value
self.wait = 0
self.best_weights = self.model.get_weights()
self.best_epoch = epoch
else:
self.wait += 1
if value is not None:
if value <= self.baseline and self.wait >= self.patience:
self.stopped_epoch = epoch
print('\nepoch %d: Reached baseline, terminating training and lost patience' % epoch)
self.model.stop_training = True
print('Restoring model weights from the end of the best epoch: ' + str(self.best_epoch))
self.model.set_weights(self.best_weights)
elif self.wait >= self.patience:
self.baseline *= 2.5
self.wait = self.patience/2
def runML(df, epochs=2000):
X = ((df['X'].values)[np.newaxis]).T
Y = df['Y'].values
regressor = tf.keras.Sequential([
tf.keras.layers.Dense(16, activation='sigmoid', input_shape=(1,)),
tf.keras.layers.Dense(16, activation='sigmoid'),
tf.keras.layers.Dense(16, activation='sigmoid'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
regressor.summary()
regressor.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.01),
loss='mse', metrics=['mse', 'mae'])
patience = 25
history = regressor.fit(
X, Y,
epochs=epochs,
verbose=0,
validation_split = 0.2,
callbacks=[TerminateOnBaseline(monitor='val_loss', patience=patience)])
x_pred = (np.linspace(np.min(X), 2.5, 100)[np.newaxis]).T
y_pred = regressor.predict(x_pred)
spline = interpolate.splrep(x_pred, y_pred)
return x_pred, y_pred, regressor, spline
def analyze(df_c, country, startdate, datet_max, markers, plt, fact=1./2., model='gauss'):
plt.subplot(1,2,1)
plt, df_p = makeuniversal(df_c, country, startdate, datet_max, markers, plt, fact)
x_new, y_new, regressor, spline = runML(df_p)
plt.plot(x_new, y_new, label='Fit', color='#434343', linewidth=3)
plt.legend(fontsize=12, loc='lower right')
plt.xlim((0,2.5))
plt.tick_params(axis="x", labelsize=16)
plt.tick_params(axis="y", labelsize=16)
plt.xlabel(r'$t/t_{1/2}$',fontsize=16)
plt.ylabel(r'$r(t)$',fontsize=16)
plt.grid()
plt.subplot(1,2,2)
df_g = pd.DataFrame()
for i in range(len(country)):
df_pp = pd.DataFrame()
df_pp = df_p[df_p['country']==country[i]][['X']]
df_pp['diff'] = df_p[df_p['country']==country[i]][['X']].diff()
df_pp['Y'] = df_p[df_p['country']==country[i]][['Y']].diff()#.rolling(window_size).mean()
df_pp.dropna(inplace=True)
df_pp['Y'] = df_pp['Y']/df_pp['diff']
df_g = pd.concat([df_g, df_pp[['X', 'Y']]])
plt.scatter(df_pp['X'], df_pp['Y'], label=country[i], marker=markers[i], s=20)
plt.xlim((0, 2.5))
plt.xlabel(r'$t/t_{1/2}$', fontsize=16)
plt.ylabel(r'$dr(\tau)/d\tau$', fontsize=16)
ax = plt.gca()
x_new = np.linspace(np.min(df_g['X'].values), 2.5, 10000)
y_new = interpolate.splev(x_new, spline, der=1)
plt.plot(x_new, y_new, label='Fit', color='#434343', linewidth=3)
plt.legend(fontsize=12)
plt.grid()
plt.tick_params(axis="x", labelsize=16)
plt.tick_params(axis="y", labelsize=16)
    return plt, regressor, spline
# -
# Pull in the data processed in DataScout-UniversalityClasses.ipynb
countries = pd.read_csv('data/countries.csv')
# +
country = ['Japan', 'New Zealand', 'Ireland', 'Australia', 'Slovakia']
startdate = ['2020-3-1', '2020-3-10', '2020-3-10', '2020-3-1', '2020-3-1']
datet_max = ['2020-6-10', '2020-5-15', '2020-7-1', '2020-5-15', '2020-6-15']
markers = ['v', 'o', 'x', 's', 'd']
plt.figure(figsize=(16,5))
plt, regressor_t1, spline_t1 = analyze(countries, country, startdate, datet_max, markers, plt, model='PBC')
x_p = np.linspace(0,2.5,100000)
y_p = interpolate.splev(x_p, spline_t1, der=0)
y_d = interpolate.splev(x_p, spline_t1, der=1)
data = np.vstack((x_p, y_p, y_d))
np.savetxt('data/type1-DNN.txt', data)
t_fact_1 = x_p[argrelextrema(y_d, np.greater)[0]][0]
plt.suptitle('Type I transmission dynamics', fontsize=16, x=0.54)
plt.tight_layout()
plt.savefig('../plots/universal_1_DNN.pdf', facecolor='white', dpi=300)
plt.show()
# +
country = ['UK', 'Germany', 'Italy', 'South Korea', 'Qatar', 'New York', 'Bayern']
startdate = ['2020-3-1','2020-3-1', '2020-2-20', '2020-2-18', '2020-3-20', '2020-3-5', '2020-3-1']
datet_max = ['2020-8-1', '2020-7-15', '2020-7-15', '2020-5-1', '2020-11-1', '2020-8-1', '2020-7-15']
markers = ['o', 's', 'x', '^', '*', 'v', 'd']
plt.figure(figsize=(16,5))
plt, regressor_t2, spline_t2 = analyze(countries, country, startdate, datet_max, markers, plt, model='PBC')
x_p = np.linspace(0,2.5,100000)
y_p = interpolate.splev(x_p, spline_t2, der=0)
y_d = interpolate.splev(x_p, spline_t2, der=1)
data = np.vstack((x_p, y_p, y_d))
np.savetxt('data/type2-DNN.txt', data)
t_fact_2 = x_p[argrelextrema(y_d, np.greater)[0]][0]
plt.suptitle('Type II transmission dynamics', fontsize=16, x=0.54)
plt.tight_layout()
plt.savefig('../plots/universal_2_DNN.pdf', facecolor='white', dpi=300)
plt.show()
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#export
from fastai2.basics import *
from nbdev.showdoc import *
# +
#default_exp callback.hook
# -
# # Model hooks
#
# > Callback and helper function to add hooks in models
from fastai2.test_utils import *
# ## What are hooks?
# Hooks are functions you can attach to a particular layer in your model and that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). Here we begin with an introduction to hooks, but you should jump to `HookCallback` if you want to quickly implement one (and read the following example `ActivationStats`).
#
# Forward hooks are functions that take three arguments: the layer it's applied to, the input of that layer and the output of that layer.
# +
tst_model = nn.Linear(5,3)
def example_forward_hook(m,i,o): print(m,i,o)
x = torch.randn(4,5)
hook = tst_model.register_forward_hook(example_forward_hook)
y = tst_model(x)
hook.remove()
# -
# Backward hooks are functions that take three arguments: the layer it's applied to, the gradients of the loss with respect to the input, and the gradients with respect to the output.
# +
def example_backward_hook(m,gi,go): print(m,gi,go)
hook = tst_model.register_backward_hook(example_backward_hook)
x = torch.randn(4,5)
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
hook.remove()
# -
# Hooks can change the input/output of a layer, or the gradients, print values or shapes. If you want to store something related to these inputs/outputs, it's best to have your hook associated with a class so that it can put it in the state of an instance of that class.
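For instance (a minimal sketch, not fastai code): returning a tensor from a forward hook replaces the layer's output, which is how a hook can change what downstream layers see.

```python
import torch
from torch import nn

torch.manual_seed(0)
layer = nn.Linear(5, 3)

def double_output(module, inputs, output):
    # A value returned from a forward hook overrides the layer's output.
    return output * 2

x = torch.randn(4, 5)
plain = layer(x)                 # un-hooked output
handle = layer.register_forward_hook(double_output)
hooked = layer(x)                # now exactly twice the plain output
handle.remove()
```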
# ## Hook -
#export
@docs
class Hook():
"Create a hook on `m` with `hook_func`."
def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False, gather=False):
store_attr(self,'hook_func,detach,cpu,gather')
f = m.register_forward_hook if is_forward else m.register_backward_hook
self.hook = f(self.hook_fn)
self.stored,self.removed = None,False
def hook_fn(self, module, input, output):
"Applies `hook_func` to `module`, `input`, `output`."
if self.detach:
input,output = to_detach(input, cpu=self.cpu, gather=self.gather),to_detach(output, cpu=self.cpu, gather=self.gather)
self.stored = self.hook_func(module, input, output)
def remove(self):
"Remove the hook from the model."
if not self.removed:
self.hook.remove()
self.removed=True
def __enter__(self, *args): return self
def __exit__(self, *args): self.remove()
_docs = dict(__enter__="Register the hook",
__exit__="Remove the hook")
# This will be called during the forward pass if `is_forward=True`, the backward pass otherwise, and will optionally `detach`, `gather` and put on the `cpu` the (gradient of the) input/output of the model before passing them to `hook_func`. The result of `hook_func` will be stored in the `stored` attribute of the `Hook`.
tst_model = nn.Linear(5,3)
hook = Hook(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hook.stored, y)
show_doc(Hook.hook_fn)
show_doc(Hook.remove)
# > Note: It's important to properly remove your hooks from your model when you're done to avoid them being called again next time your model is applied to some inputs, and to free the memory that goes with their state.
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
hook = Hook(tst_model, example_forward_hook)
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
hook.remove()
test_stdout(lambda: tst_model(x), "")
# ### Context Manager
# Since it's very important to remove your `Hook` even if your code is interrupted by some bug, `Hook` can be used as context managers.
show_doc(Hook.__enter__)
show_doc(Hook.__exit__)
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
with Hook(tst_model, example_forward_hook) as h:
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
test_stdout(lambda: tst_model(x), "")
# +
#export
def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)
def hook_output(module, detach=True, cpu=False, grad=False):
"Return a `Hook` that stores activations of `module` in `self.stored`"
return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
# -
# The activations stored are the gradients if `grad=True`, otherwise the output of `module`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
# +
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
with hook_output(tst_model) as h:
y = tst_model(x)
test_eq(y, h.stored)
assert not h.stored.requires_grad
with hook_output(tst_model, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
test_close(2*y / y.numel(), h.stored[0])
# -
#cuda
with hook_output(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
test_eq(h.stored.device, torch.device('cpu'))
# ## Hooks -
#export
@docs
class Hooks():
"Create several hooks on the modules in `ms` with `hook_func`."
def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):
self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]
def __getitem__(self,i): return self.hooks[i]
def __len__(self): return len(self.hooks)
def __iter__(self): return iter(self.hooks)
@property
def stored(self): return L(o.stored for o in self)
def remove(self):
"Remove the hooks from the model."
for h in self.hooks: h.remove()
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
_docs = dict(stored = "The states saved in each hook.",
__enter__="Register the hooks",
__exit__="Remove the hooks")
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
hooks = Hooks(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hooks.stored[0], layers[0](x))
test_eq(hooks.stored[1], F.relu(layers[0](x)))
test_eq(hooks.stored[2], y)
hooks.remove()
show_doc(Hooks.stored, name='Hooks.stored')
show_doc(Hooks.remove)
# ### Context Manager
# Like `Hook` , you can use `Hooks` as context managers.
show_doc(Hooks.__enter__)
show_doc(Hooks.__exit__)
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
with Hooks(layers, lambda m,i,o: o) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
#export
def hook_outputs(modules, detach=True, cpu=False, grad=False):
"Return `Hooks` that store activations of all `modules` in `self.stored`"
return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
# The activations stored are the gradients if `grad=True`, otherwise the output of `modules`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
# +
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
x = torch.randn(4,5)
with hook_outputs(layers) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
for s in h.stored: assert not s.requires_grad
with hook_outputs(layers, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
g = 2*y / y.numel()
test_close(g, h.stored[2][0])
g = g @ layers[2].weight.data
test_close(g, h.stored[1][0])
g = g * (layers[0](x) > 0).float()
test_close(g, h.stored[0][0])
# -
#cuda
with hook_outputs(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
for s in h.stored: test_eq(s.device, torch.device('cpu'))
#export
def dummy_eval(m, size=(64,64)):
"Evaluate `m` on a dummy input of a certain `size`"
ch_in = in_channels(m)
x = one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1.,1.)
with torch.no_grad(): return m.eval()(x)
#export
def model_sizes(m, size=(64,64)):
"Pass a dummy input through the model `m` to get the various sizes of activations."
with hook_outputs(m) as hooks:
_ = dummy_eval(m, size=size)
return [o.stored.shape for o in hooks]
m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))
test_eq(model_sizes(m), [[1, 16, 64, 64], [1, 32, 32, 32], [1, 32, 32, 32]])
#export
def num_features_model(m):
"Return the number of output features for `m`."
sz,ch_in = 32,in_channels(m)
while True:
#Trying for a few sizes in case the model requires a big input size.
try:
return model_sizes(m, (sz,sz))[-1][1]
except Exception as e:
sz *= 2
if sz > 2048: raise e
m = nn.Sequential(nn.Conv2d(5,4,3), nn.Conv2d(4,3,3))
test_eq(num_features_model(m), 3)
m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))
test_eq(num_features_model(m), 32)
# ## HookCallback -
# To make hooks easy to use, we wrapped a version in a Callback where you just have to implement a `hook` function (plus any element you might need).
#export
def has_params(m):
"Check if `m` has at least one parameter"
return len(list(m.parameters())) > 0
assert has_params(nn.Linear(3,4))
assert has_params(nn.LSTM(4,5,2))
assert not has_params(nn.ReLU())
#export
@funcs_kwargs
class HookCallback(Callback):
"`Callback` that can be used to register hooks on `modules`"
_methods = ["hook"]
hook = noops
def __init__(self, modules=None, every=None, remove_end=True, is_forward=True, detach=True, cpu=True, **kwargs):
store_attr(self, 'modules,every,remove_end,is_forward,detach,cpu')
assert not kwargs
def begin_fit(self):
"Register the `Hooks` on `self.modules`."
if self.modules is None: self.modules = [m for m in flatten_model(self.model) if has_params(m)]
if self.every is None: self._register()
def begin_batch(self):
if self.every is None: return
if self.training and self.train_iter%self.every==0: self._register()
def after_batch(self):
if self.every is None: return
if self.training and self.train_iter%self.every==0: self._remove()
def after_fit(self):
"Remove the `Hooks`."
if self.remove_end: self._remove()
def _register(self): self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)
def _remove(self):
if getattr(self, 'hooks', None): self.hooks.remove()
def __del__(self): self._remove()
# You can either subclass and implement a `hook` function (along with any event you want) or pass that a `hook` function when initializing. Such a function needs to take three arguments: a layer, its input and its output (for a backward hook, the input is the gradient with respect to the inputs and the output is the gradient with respect to the outputs) and can either modify them or update the state according to them.
#
# If not provided, `modules` will default to the layers of `self.model` that have a `weight` attribute. Depending on `remove_end`, the hooks will be properly removed at the end of training (or in case of error). `is_forward`, `detach` and `cpu` are passed to `Hooks`.
#
# The function called at each forward (or backward) pass is `self.hook` and must be implemented when subclassing this callback.
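# The `hook` signature mirrors the plain PyTorch hook protocol that this callback wraps. A minimal raw-PyTorch sketch (the layer, the `stored` dict and the shapes below are illustrative assumptions, not fastai API):

```python
import torch
import torch.nn as nn

lin = nn.Linear(3, 2)
stored = {}

def fwd_hook(m, inp, out):
    # forward hook: `inp` is a tuple of input tensors, `out` the output tensor
    stored['out'] = out.shape

def bwd_hook(m, grad_inp, grad_out):
    # backward hook: gradients w.r.t. the inputs and w.r.t. the output
    stored['grad_out'] = grad_out[0].shape

h1 = lin.register_forward_hook(fwd_hook)
h2 = lin.register_full_backward_hook(bwd_hook)
lin(torch.randn(4, 3, requires_grad=True)).sum().backward()
h1.remove(); h2.remove()
print(stored['out'], stored['grad_out'])  # torch.Size([4, 2]) torch.Size([4, 2])
```

# Removing the handles afterwards is what `HookCallback` automates via `remove_end` and its `_remove` method.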
# +
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def after_batch(self): test_eq(self.hooks.stored[0], self.pred)
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
# +
class TstCallback(HookCallback):
def __init__(self, modules=None, remove_end=True, detach=True, cpu=False):
super().__init__(modules, None, remove_end, False, detach, cpu)
def hook(self, m, i, o): return o
def after_batch(self):
if self.training:
test_eq(self.hooks.stored[0][0], 2*(self.pred-self.y)/self.pred.shape[0])
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
# -
show_doc(HookCallback.begin_fit)
show_doc(HookCallback.after_fit)
# ## Model summary
#export
def total_params(m):
    "Give the number of parameters of a module and whether it's trainable or not"
params = sum([p.numel() for p in m.parameters()])
trains = [p.requires_grad for p in m.parameters()]
return params, (False if len(trains)==0 else trains[0])
test_eq(total_params(nn.Linear(10,32)), (32*10+32,True))
test_eq(total_params(nn.Linear(10,32, bias=False)), (32*10,True))
test_eq(total_params(nn.BatchNorm2d(20)), (20*2, True))
test_eq(total_params(nn.BatchNorm2d(20, affine=False)), (0,False))
test_eq(total_params(nn.Conv2d(16, 32, 3)), (16*32*3*3 + 32, True))
test_eq(total_params(nn.Conv2d(16, 32, 3, bias=False)), (16*32*3*3, True))
#First ih layer 20--10, all else 10--10. *4 for the four gates
test_eq(total_params(nn.LSTM(20, 10, 2)), (4 * (20*10 + 10) + 3 * 4 * (10*10 + 10), True))
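# The arithmetic behind that LSTM test can be written out directly. A sketch (`lstm_param_count` is a hypothetical helper, not part of the library):

```python
def lstm_param_count(n_in, n_hidden, n_layers):
    # per layer: weight_ih, weight_hh and two bias vectors, each holding 4 gates
    total = 0
    for layer in range(n_layers):
        inp = n_in if layer == 0 else n_hidden
        total += 4 * n_hidden * inp + 4 * n_hidden * n_hidden + 2 * 4 * n_hidden
    return total

print(lstm_param_count(20, 10, 2))  # 2160, matching total_params(nn.LSTM(20, 10, 2))
```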
#export
def layer_info(learn):
def _track(m, i, o):
return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)
layers = [m for m in flatten_model(learn.model)]
xb = learn.dls.train.one_batch()[:learn.dls.train.n_inp]
with Hooks(layers, _track) as h:
_ = learn.model.eval()(*apply(lambda o:o[:1], xb))
return xb,h.stored
m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
learn = synth_learner()
learn.model=m
test_eq(layer_info(learn)[1], [
('Linear', 100, True, [1, 50]),
('ReLU', 0, False, [1, 50]),
('BatchNorm1d', 100, True, [1, 50]),
('Linear', 51, True, [1, 1])
])
# +
# Test for n_inp
def _ninp_synth_dls(n=160, bs=16, n_inp=1, cuda=False):
def get_data(n, n_inp):
xs = [torch.randn(n, 1) for _ in range(n_inp)]
y = torch.cat(xs, dim=-1).sum(-1)
return torch.utils.data.TensorDataset(*xs, y)
n_train = int(n*0.8)
train_ds = get_data(n_train, n_inp)
valid_ds = get_data(n-n_train, n_inp)
device = default_device() if cuda else None
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True, num_workers=0)
valid_dl = TfmdDL(valid_ds, bs=bs, num_workers=0)
return DataLoaders(train_dl, valid_dl, device=device)
class _NInpModel(Module):
def __init__(self, n_inp=1):
super().__init__()
self.n_inp = n_inp
self.seq = nn.Sequential(nn.Linear(1*n_inp,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
def forward(self, *inps):
outputs = torch.cat(inps, dim=-1)
return self.seq(outputs)
n_inp = 3
dls = _ninp_synth_dls(n_inp=n_inp)
m = _NInpModel(n_inp)
learn = Learner(dls, m, lr=1e-3, loss_func=MSELossFlat(), opt_func=partial(SGD, mom=0.9))
test_eq(layer_info(learn)[1], [
('Linear', 200, True, [1, 50]),
('ReLU', 0, False, [1, 50]),
('BatchNorm1d', 100, True, [1, 50]),
('Linear', 51, True, [1, 1])
])
# -
#export
def _print_shapes(o, bs):
if isinstance(o, torch.Size): return ' x '.join([str(bs)] + [str(t) for t in o[1:]])
else: return str([_print_shapes(x, bs) for x in o])
# +
#hide
#Individual parameters wrapped in ParameterModule aren't called through the hooks in `layer_info`, thus are not counted inside the summary
#TODO: find a way to have them counted in param number somehow
# -
#export
@patch
def summary(self:Learner):
"Print a summary of the model, optimizer and loss function."
xb,infos = layer_info(self)
n,bs = 64,find_bs(xb)
inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)
res = f"{self.model.__class__.__name__} (Input shape: {inp_sz})\n"
res += "=" * n + "\n"
res += f"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\n"
res += "=" * n + "\n"
ps,trn_ps = 0,0
infos = [o for o in infos if o is not None] #see comment in previous cell
for typ,np,trn,sz in infos:
if sz is None: continue
ps += np
if trn: trn_ps += np
res += f"{typ:<20} {_print_shapes(sz, bs)[:19]:<20} {np:<10,} {str(trn):<10}\n"
res += "_" * n + "\n"
res += f"\nTotal params: {ps:,}\n"
res += f"Total trainable params: {trn_ps:,}\n"
res += f"Total non-trainable params: {ps - trn_ps:,}\n\n"
res += f"Optimizer used: {self.opt_func}\nLoss function: {self.loss_func}\n\n"
if self.opt is not None:
res += f"Model " + ("unfrozen\n\n" if self.opt.frozen_idx==0 else f"frozen up to parameter group number {self.opt.frozen_idx}\n\n")
res += "Callbacks:\n" + '\n'.join(f" - {cb}" for cb in sort_by_run(self.cbs))
return PrettyString(res)
m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
for p in m[0].parameters(): p.requires_grad_(False)
learn = synth_learner()
learn.create_opt()
learn.model=m
learn.summary()
# Test for multiple output
class _NOutModel(nn.Module):
def forward(self, x1):
seq_len, bs, hid_size = 50, 16, 256
num_layer = 1
return torch.randn((seq_len, bs, hid_size)), torch.randn((num_layer, bs, hid_size))
m = _NOutModel()
learn = synth_learner()
learn.model = m
learn.summary() # Output Shape should be (50, 16, 256), (1, 16, 256)
# ## Activation graphs
# This is an example of a `HookCallback`, that stores the means, stds and histograms of activations that go through the network.
#exports
@delegates()
class ActivationStats(HookCallback):
    "Callback that records the mean and std of activations."
run_before=TrainEvalCallback
def __init__(self, with_hist=False, **kwargs):
super().__init__(**kwargs)
self.with_hist = with_hist
def begin_fit(self):
"Initialize stats."
super().begin_fit()
self.stats = L()
def hook(self, m, i, o):
o = o.float()
res = {'mean': o.mean().item(), 'std': o.std().item(),
'near_zero': (o<=0.05).long().sum().item()/o.numel()}
if self.with_hist: res['hist'] = o.histc(40,0,10)
return res
def after_batch(self):
        "Take the stored results and put them in `self.stats`"
if self.training and (self.every is None or self.train_iter%self.every != 0):
self.stats.append(self.hooks.stored)
super().after_batch()
def layer_stats(self, idx):
lstats = self.stats.itemgot(idx)
return L(lstats.itemgot(o) for o in ('mean','std','near_zero'))
def hist(self, idx):
res = self.stats.itemgot(idx).itemgot('hist')
return torch.stack(tuple(res)).t().float().log1p()
def plot_hist(self, idx, figsize=(10,5), ax=None):
res = self.hist(idx)
if ax is None: ax = subplots(figsize=figsize)[1][0]
ax.imshow(res, origin='lower')
ax.axis('off')
def plot_layer_stats(self, idx):
_,axs = subplots(1, 3, figsize=(12,3))
for o,ax,title in zip(self.layer_stats(idx),axs,('mean','std','% near zero')):
ax.plot(o)
ax.set_title(title)
learn = synth_learner(n_trn=5, cbs = ActivationStats(every=4))
learn.fit(1)
learn.activation_stats.stats
# The first line contains the means of the outputs of the model for each batch in the training set, the second line their standard deviations.
# +
#hide
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def begin_fit(self):
super().begin_fit()
self.means,self.stds = [],[]
def after_batch(self):
if self.training:
self.means.append(self.hooks.stored[0].mean().item())
self.stds.append (self.hooks.stored[0].std() .item())
learn = synth_learner(n_trn=5, cbs = [TstCallback(), ActivationStats()])
learn.fit(1)
test_eq(learn.activation_stats.stats.itemgot(0).itemgot("mean"), learn.tst.means)
test_eq(learn.activation_stats.stats.itemgot(0).itemgot("std"), learn.tst.stds)
# -
# ## Export -
#hide
from nbdev.export import notebook2script
notebook2script()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:em_track]
# language: python
# name: conda-env-em_track-py
# ---
# +
# import package
# installed via pip
from emtracks.particle import * # main solver object
from emtracks.conversions import one_gev_c2_to_kg # conversion for q factor (transverse momentum estimate)
from emtracks.tools import *  # InitConds: initial conditions namedtuple
from emtracks.mapinterp import get_df_interp_func # factory function for creating Mu2e DS interpolation function
from emtracks.Bdist import get_B_df_distorted
from emtracks.interpolations import *
import matplotlib.animation as animation
import numpy as np
from scipy.constants import c, elementary_charge
import pandas as pd
import pickle as pkl
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import math
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['figure.figsize'] = [24,16] # bigger figures
from matplotlib import style
style.use('fivethirtyeight')
import os
from joblib import Parallel, delayed
import multiprocessing
from tqdm.notebook import tqdm
rad13plotdir = '/home/shared_data/mao10,mao13_analysis/plots/mao13(0.90,1.10TS)rad/'
reg13plotdir = '/home/shared_data/mao10,mao13_analysis/plots/mao13(0.90,1.10TS)/'
mao13datadir = '/home/shared_data/mao10,mao13_analysis/data/mao13contourplots4/'
# -
files = sorted(os.listdir(mao13datadir)) #all your files
# +
#check initconds match with title theta/phi
asdf = []
for file in files:
e_solvernom = trajectory_solver.from_pickle(mao13datadir+file)
theta = float(file.split('_')[1])
phi = float(file.split('_')[2])
thetainitcond = round(e_solvernom.init_conds.theta0, 3)
phiinitcond = round(e_solvernom.init_conds.phi0, 3)
asdf.append([(theta-thetainitcond), (phi-phiinitcond)])
asdf = np.array(asdf)
asdf
asdf.mean(), asdf.std()
# -
e_solvernom = trajectory_solver.from_pickle(mao13datadir+files[500])
e_solvernom.dataframe
e_solvernom.init_conds.theta0
files[0].split('_')
# +
bounce = True
files_new = []
for file in files:
if file[0:5] != '1.000':
files_new.append(file)
files = files_new
# +
info = []
deleted = []
for file in files:
e_solvernom = trajectory_solver.from_pickle(mao13datadir+file)
field = file.split('_')[0]
phi = e_solvernom.init_conds.phi0
theta = e_solvernom.init_conds.theta0
if e_solvernom.dataframe.z.max() < 7.00:
bounce = 0
else:
bounce = 1
info.append([field, theta, phi, bounce])
df = pd.DataFrame(info, columns = ['field', 'theta', 'phi', 'bounce'])
# -
df['field'].unique()
dfnew9 = df[df['field']=='0.90']
dfnew1 = df[df['field']=='1.00'] #want this bounce
dfnew11 = df[df['field']=='1.10']# want this not bounce
# +
mask1 = (dfnew1.bounce == 1).values
mask2 = (dfnew11.bounce == 0).values
(mask1 & mask2).sum()
dfnow = dfnew1[mask1 & mask2]
# -
dfnew1[mask1 & mask2]
# +
def getDSfield(file):
    # despite the name, this extracts the theta value from the filename
    return file.split('_')[1].split('x')[0]
def getPSfield(file):
    # despite the name, this extracts the phi value from the filename
    return file.split('_')[2].split('x')[0]
def getfiles(files, field, thetas, phis):
fieldrounded = round(field, 3)
thetasrounded = [round(num, 3) for num in thetas]
phisrounded = [round(num, 3) for num in phis]
filedata = []
for file in files:
if np.isclose(float(file.split('_')[0]), field, 1e-5):
if float(getDSfield(file)) in thetasrounded:
if float(getPSfield(file)) in phisrounded:
filedata.append(file)
return filedata
filedata = getfiles(files, 1.00, dfnow['theta'], dfnow['phi'])
filedata2 = getfiles(files, 1.10, dfnow['theta'], dfnow['phi'])
# -
tempfiles = filedata[0:3]
tempfiles2 = filedata2[0:3]
tempfiles
# +
e_solvernom = trajectory_solver.from_pickle(mao13datadir+tempfiles[2])
e_solvernom2 = trajectory_solver.from_pickle(mao13datadir+tempfiles2[2])
e_solvernom.dataframe = e_solvernom.dataframe[::2]
e_solvernom2.dataframe = e_solvernom2.dataframe
fig, ax = e_solvernom.plot3d(cmap = 'Spectral')
fig, ax = e_solvernom2.plot3d(fig = fig, ax = ax)
# -
e_solvernom.dataframe.z.max(), e_solvernom2.dataframe.z.max()
zees = {}
for field in df['field'].unique():
df2 = df[df['field']==field]
dfbounce = df2[(df2['bounce']==1) & (df2['field']==field)]
bounce = []
for i in range(0, len(dfbounce['theta'].values), 1):
bounce.append([dfbounce['theta'].values[i], dfbounce['phi'].values[i]]) #all pairs of [theta, phi] that bounce
thetas = np.array(df2['theta'].unique())
phis = np.array(df2['phi'].unique())
z = np.zeros((len(phis), len(thetas)))
for phi in range(0, len(phis), 1):
for theta in range(0, len(thetas), 1):
if [thetas[theta], phis[phi]] in bounce:
z[phi][theta] = 1
zees.update({f'{field}':z})
zees
# +
import matplotlib.patches as mpatches
from matplotlib.lines import Line2D
fig = plt.figure()
fields_grid = ['0.90', '0.91', '0.92', '0.93', '0.94', '0.95', '0.96', '0.97',
               '0.98', '0.99', '1.00', '1.01', '1.02', '1.05', '1.08', '1.10']
for n, field in enumerate(fields_grid):
    ax = plt.subplot2grid((4,4), (n//4, n%4), rowspan=1, colspan=1)
    ax.contourf(thetas, phis, zees[field], cmap = 'inferno')
    ax.set_title(field)
    ax.set_xlabel('theta (rad)')
    ax.set_ylabel('phi (rad)')
    ax.contour(thetas, phis, zees['1.00'], cmap = 'viridis')
cmap = plt.cm.get_cmap('inferno')
rgba = cmap(0.0)
rgba2 = cmap(1.0)
bounces = mpatches.Patch(color=rgba, label = 'scaled not bounce')
notbounces = mpatches.Patch(color=rgba2, label = 'scaled bounce')
nomcmap = plt.cm.get_cmap('viridis')
rgba3 = nomcmap(1.0)
rgba4 = nomcmap(0.0)
overlay = Line2D([0], [0], color='lawngreen', lw = 2, label = 'nominal bounce border')
overlay2 = Line2D([0], [0], color='blue', lw = 2, label = 'nominal not bounce border')
fig.legend(handles = [notbounces, bounces, overlay, overlay2], ncol = 2)
fig.tight_layout(pad = 4.0)
fig.suptitle('Particles that Bounce in Different Distorted TS Field Scenarios', fontsize = '25')
# -
zeees = {}
for field in df['field'].unique():
thetadif = (thetas[-1] - thetas[0])/(len(thetas))
phidif = (phis[-1] - phis[0])/(len(phis))
scaledthetas = []
scaledphis = []
for theta in thetas:
scaledthetas.append(theta-thetadif)
scaledthetas.append(thetas[-1] + thetadif)
for phi in phis:
scaledphis.append(phi-phidif)
scaledphis.append(phis[-1] + phidif)
zeees.update({f'{field}': [scaledthetas, scaledphis]})
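# `pcolormesh` wants N+1 bin edges rather than N centers. A common way to derive edges from evenly spaced centers is midpoints plus extrapolated endpoints; this is a sketch of that approach (a hypothetical helper, not the scaling used above):

```python
import numpy as np

def centers_to_edges(centers):
    # midpoints between consecutive centers, extrapolated at both ends
    c = np.asarray(centers, dtype=float)
    mid = (c[:-1] + c[1:]) / 2
    first = c[0] - (mid[0] - c[0])
    last = c[-1] + (c[-1] - mid[-1])
    return np.concatenate([[first], mid, [last]])

print(centers_to_edges([0.0, 1.0, 2.0]))  # [-0.5  0.5  1.5  2.5]
```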
# +
fig = plt.figure()
fields_grid = ['0.90', '0.91', '0.92', '0.93', '0.94', '0.95', '0.96', '0.97',
               '0.98', '0.99', '1.00', '1.01', '1.02', '1.05', '1.08', '1.10']
for n, field in enumerate(fields_grid):
    ax = plt.subplot2grid((4,4), (n//4, n%4), rowspan=1, colspan=1)
    ax.pcolormesh(zeees[field][0], zeees[field][1], zees[field], cmap = 'inferno')
    ax.set_title(field)
    ax.set_xlabel('theta (rad)')
    ax.set_ylabel('phi (rad)')
cmap = plt.cm.get_cmap('inferno')
rgba = cmap(0.0)
rgba2 = cmap(1.0)
bounces = mpatches.Patch(color=rgba, label = 'not bounce')
notbounces = mpatches.Patch(color=rgba2, label = ' bounce')
fig.legend(handles = [notbounces, bounces])
fig.tight_layout(pad = 5.0)
fig.suptitle('Particles that Bounce in Different Distorted TS Field Scenarios', fontsize = '25')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:python3]
# language: python
# name: conda-env-python3-py
# ---
# # EIS metadata validation script
# Used to validate Planon output with spreadsheet input
# ## 1. Data import
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# Read data. There are two datasets: Planon and Master. The latter is the EIS data nomenclature that was created. Master is made up of two subsets: loggers and meters. Loggers are sometimes called controllers, and meters are sometimes called sensors. In rare cases meters or sensors are also called channels.
planon=pd.read_excel('EIS Assets.xlsx',index_col = 'Code')
master_loggerscontrollers = pd.read_csv('LoggersControllers.csv', index_col = 'Asset Code')
master_meterssensors = pd.read_csv('MetersSensors.csv', encoding = 'macroman', index_col = 'Asset Code')
planon['Code']=planon.index
master_loggerscontrollers['Code']=master_loggerscontrollers.index
master_meterssensors['Code']=master_meterssensors.index
# Unify the index: uppercase everything and strip trailing spaces.
planon.index=[str(i).upper().strip() for i in planon.index]
master_loggerscontrollers.index=[str(i).upper().strip() for i in master_loggerscontrollers.index]
master_meterssensors.index=[str(i).upper().strip() for i in master_meterssensors.index]
# Drop duplicates (shouldn't be any)
planon.drop_duplicates(inplace=True)
master_loggerscontrollers.drop_duplicates(inplace=True)
master_meterssensors.drop_duplicates(inplace=True)
# Split Planon import into loggers and meters
# Drop duplicates (shouldn't be any)
# Split the Planon file into 2, one for loggers & controllers, and one for meters & sensors.
planon_loggerscontrollers = planon.loc[(planon['Classification Group'] == 'EN.EN4 BMS Controller') | (planon['Classification Group'] == 'EN.EN1 Data Logger')]
planon_meterssensors = planon.loc[(planon['Classification Group'] == 'EN.EN2 Energy Meter') | (planon['Classification Group'] == 'EN.EN3 Energy Sensor')]
planon_loggerscontrollers.drop_duplicates(inplace=True)
planon_meterssensors.drop_duplicates(inplace=True)
# Index unique? show number of duplicates in index
len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
# Meters are not unique. This is because of the spaces served. This is OK for now; we will deal with duplicates at the comparison stage. The same is true for loggers, in the unlikely event that there are duplicates in the future.
planon_meterssensors.head(3)
# ## 2. Validation
# Create list of all buildings present in Planon export. These are buildings to check the data against from Master.
buildings=set(planon_meterssensors['BuildingNo.'])
buildings
len(buildings)
# ### 2.1. Meters
# Create a dataframe slice for validation from `master_meterssensors` where only the buildings listed in `buildings` are contained. Save this new slice into `master_meterssensors_for_validation`. This is done by creating sub-slices of the dataframe for each building, then concatenating them all together.
master_meterssensors_for_validation = \
pd.concat([master_meterssensors.loc[master_meterssensors['Building Code'] == building] \
for building in buildings])
master_meterssensors_for_validation.head(2)
master_meterssensors_for_validation.loc['MC202-B15/F15']
#alternative method
master_meterssensors_for_validation2 = \
master_meterssensors[master_meterssensors['Building Code'].isin(buildings)]
master_meterssensors_for_validation2.head(2)
# Planon sensors are not unique because of the spaces served convention in the two data architectures. The Planon architecture devotes a new line to each space served - hence the non-unique index. The Master architecture lists all the spaces only once, as a list, therefore it has a unique index. We will need to take this into account and create a matching dataframe out of Planon for comparison, with a unique index.
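# The two conventions can be bridged with a groupby-aggregate that collapses the per-space rows into a list, yielding a unique index. A sketch on synthetic data (the frame and column names below are illustrative, not the real Planon export):

```python
import pandas as pd

# one row per space served, Planon-style, so the index repeats
planon_style = pd.DataFrame({
    'Code': ['M1', 'M1', 'M2'],
    'Space': ['A01', 'A02', 'B07'],
    'Description': ['Main meter', 'Main meter', 'Sub meter'],
}).set_index('Code')

# collapse to a unique index by aggregating spaces into a list, Master-style
master_style = planon_style.groupby(level=0).agg({'Space': list, 'Description': 'first'})
print(master_style.index.is_unique)  # True
```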
len(master_meterssensors_for_validation)
len(planon_meterssensors)-len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
# Sort datasets after index for easier comparison.
master_meterssensors_for_validation.sort_index(inplace=True)
planon_meterssensors.sort_index(inplace=True)
# #### 2.1.1 Slicing of meters to only certain columns of comparison
planon_meterssensors.T
master_meterssensors_for_validation.T
# Create dictionary that maps Planon column names onto Master.
#
# From Nicola:
# - Code (Asset Code)
# - Description
# - EIS ID (Channel)
# - Utility Type
# - Fiscal Meter
# - Tenant Meter
#
# `Building code` and `Building name` are implicitly included. `Logger Serial Number`, `IP` or `MAC` would be essential to include, as well as `Make` and `Model`. `Additional Location Info` is not essential but would be useful to have. Locations (`Locations.Space.Space number` and `Space Name`) are included in the Planon export - but this is their only viable data source, therefore they are not validated against.
#Planon:Master
meters_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Channel",
"Tenant Meter.Name":"Tenant meter",
"Fiscal Meter.Name":"Fiscal meter",
"Code":"Code"
}
# Filter both dataframes based on these new columns. Then remove duplicates. Currently, this leads to loss of information of spaces served, but also a unique index for the Planon dataframe, therefore bringing the dataframes closer to each other. When including spaces explicitly in the comparison (if we want to - or just trust the Planon space mapping), this needs to be modified.
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation[list(meters_match_dict.values())]
planon_meterssensors_filtered=planon_meterssensors[list(meters_match_dict.keys())]
pd.DataFrame(master_meterssensors_for_validation_filtered.loc['MC202-B15/F15'])
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
# Unify headers, drop duplicates (bear in mind the spaces argument; this is where it would need to be brought back in in the future!).
planon_meterssensors_filtered.columns=[meters_match_dict[i] for i in planon_meterssensors_filtered]
planon_meterssensors_filtered.drop_duplicates(inplace=True)
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
planon_meterssensors_filtered.head(2)
# Fiscal/Tenant meter name needs fixing from Yes/No and 1/0.
planon_meterssensors_filtered['Fiscal meter']=planon_meterssensors_filtered['Fiscal meter'].isin(['Yes'])
planon_meterssensors_filtered['Tenant meter']=planon_meterssensors_filtered['Tenant meter'].isin(['Yes'])
master_meterssensors_for_validation_filtered['Fiscal meter']=master_meterssensors_for_validation_filtered['Fiscal meter'].isin([1])
master_meterssensors_for_validation_filtered['Tenant meter']=master_meterssensors_for_validation_filtered['Tenant meter'].isin([1])
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
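# `isin` maps anything outside the given list, including missing values, to False, which is why it is a safe way to normalise 'Yes'/'No' and 1/0 columns to booleans. A small illustration on a synthetic series:

```python
import pandas as pd

s = pd.Series(['Yes', 'No', None, 1])
print(s.isin(['Yes']).tolist())  # [True, False, False, False]
```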
# Cross-check missing meters
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
meterssensors_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(meterssensors_not_in_planon),'/',len(b),':',
round(len(meterssensors_not_in_planon)/len(b)*100,3),'%')
#without MC210
len(set([i for i in meterssensors_not_in_planon if i[:5]!='MC210']))
set([i for i in meterssensors_not_in_planon if i[:5]!='MC210'])
(set([i[:5] for i in meterssensors_not_in_planon]))
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
meterssensors_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(meterssensors_not_in_master),'/',len(a),':',
round(len(meterssensors_not_in_master)/len(a)*100,3),'%')
len(set([i for i in meterssensors_not_in_master]))
set([i[:9] for i in meterssensors_not_in_master])
set([i[:5] for i in meterssensors_not_in_master])
# Check for duplicates in index, but not duplicates over the entire row
print(len(planon_meterssensors_filtered.index))
print(len(set(planon_meterssensors_filtered.index)))
print(len(master_meterssensors_for_validation_filtered.index))
print(len(set(master_meterssensors_for_validation_filtered.index)))
master_meterssensors_for_validation_filtered[master_meterssensors_for_validation_filtered.index.duplicated()]
# The duplicates are the `nan`s. Remove these for now. Could revisit later to do an index-less comparison, only over row contents.
good_index=[i for i in master_meterssensors_for_validation_filtered.index if str(i).lower().strip()!='nan']
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation_filtered.loc[good_index]
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
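# The index-less comparison mentioned above (matching purely on row contents) could be sketched with `pandas.merge` and its `indicator` flag. The frames and column names below are illustrative toy data, not the real meter tables:

```python
import pandas as pd

def rows_only_in_one(df_a, df_b):
    """Compare two frames purely on row contents, ignoring the index.

    Returns the rows present in exactly one frame, with an `_merge`
    column saying which side each row came from.
    """
    merged = df_a.merge(df_b, how='outer', indicator=True)
    return merged[merged['_merge'] != 'both']

# Tiny illustrative example (not the real meter data)
a = pd.DataFrame({'meter': ['M1', 'M2'], 'desc': ['gas', 'heat']})
b = pd.DataFrame({'meter': ['M1', 'M3'], 'desc': ['gas', 'water']})
diff = rows_only_in_one(a, b)
```

# This sidesteps the duplicate-index problem entirely, at the cost of losing the meter code as the comparison key.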
len(planon_meterssensors_filtered)
len(master_meterssensors_for_validation_filtered)
# Do comparison only on common indices. Need to revisit and identify the cause of the missing meters in both directions (5 Planon->Meters and 30 Meters->Planon in this example).
comon_index=list(set(master_meterssensors_for_validation_filtered.index).intersection(set(planon_meterssensors_filtered.index)))
len(comon_index)
master_meterssensors_for_validation_intersected=master_meterssensors_for_validation_filtered.loc[comon_index].sort_index()
planon_meterssensors_intersected=planon_meterssensors_filtered.loc[comon_index].sort_index()
len(master_meterssensors_for_validation_intersected)
len(planon_meterssensors_intersected)
# Still have duplicate indices. For now we just drop and keep the first.
master_meterssensors_for_validation_intersected = master_meterssensors_for_validation_intersected[~master_meterssensors_for_validation_intersected.index.duplicated(keep='first')]
master_meterssensors_for_validation_intersected.head(2)
planon_meterssensors_intersected.head(2)
# #### 2.1.2. Primitive comparison
planon_meterssensors_intersected==master_meterssensors_for_validation_intersected
np.all(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected)
# #### 2.1.3. Horizontal comparison
# Number of cells matching
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()
# Percentage matching
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
((planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100).plot(kind='bar')
# #### 2.1.4. Vertical comparison
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum())
df
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum()/\
len(planon_meterssensors_intersected.T)*100)
df[df[0]<100]
# #### 2.1.5. Smart(er) comparison
# Not all of the dataframe matches. Let us do some basic string formatting, maybe that helps.
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
planon_meterssensors_intersected['Description']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_meterssensors_intersected['Description'].values]
master_meterssensors_for_validation_intersected['Description']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_meterssensors_for_validation_intersected['Description'].values]
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
# Some errors fixed, some left. Let's see which ones. These are either:
# - Wrong duplicate dropped
# - Human input errors in the description.
# - Actual errors somewhere in the indexing.
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Description'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Description'])
# Let us repeat the exercise for `Logger Channel`. Cross-validate, flag as highly likely error where both mismatch.
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
planon_meterssensors_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_meterssensors_intersected['Logger Channel'].values]
master_meterssensors_for_validation_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_meterssensors_for_validation_intersected['Logger Channel'].values]
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
# All errors fixed on logger channels.
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Logger Channel'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Logger Channel'])
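# The cross-validation idea above — flag a meter as a highly likely error when *both* fields mismatch — could be sketched as below. The helper and the toy frames are illustrative; only the column names follow the notebook:

```python
import pandas as pd

def flag_double_mismatch(planon, master, cols=('Description', 'Logger Channel')):
    """Return the index entries where every column in `cols` disagrees."""
    mismatch = pd.DataFrame({c: planon[c] != master[c] for c in cols})
    return mismatch.index[mismatch.all(axis=1)].tolist()

# Illustrative example: M2 disagrees on both fields, M1 on neither
p = pd.DataFrame({'Description': ['gas', 'heat'], 'Logger Channel': ['1', '2']},
                 index=['M1', 'M2'])
m = pd.DataFrame({'Description': ['gas', 'steam'], 'Logger Channel': ['1', '9']},
                 index=['M1', 'M2'])
likely_errors = flag_double_mismatch(p, m)
```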
# New error percentage:
(planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
# ### 2.2. Loggers
buildings=set(planon_loggerscontrollers['BuildingNo.'])
buildings
master_loggerscontrollers_for_validation = \
pd.concat([master_loggerscontrollers.loc[master_loggerscontrollers['Building Code'] == building] \
for building in buildings])
master_loggerscontrollers_for_validation.head(2)
master_loggerscontrollers[master_loggerscontrollers['Building Code']=='MC060']
planon_loggerscontrollers[planon_loggerscontrollers['BuildingNo.']=='MC060']
master_loggerscontrollers.loc['MC060-L01']
len(master_loggerscontrollers_for_validation)
len(planon_loggerscontrollers)-len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
master_loggerscontrollers_for_validation.sort_index(inplace=True)
planon_loggerscontrollers.sort_index(inplace=True)
master_loggerscontrollers_for_validation.loc['MC060-L01']
planon_loggerscontrollers.T
master_loggerscontrollers_for_validation.T
# Create dictionary that maps Planon column names onto Master.
#
# From Nicola:
# - EIS ID (Serial Number)
# - Make
# - Model
# - Description
# - Code (Asset Code)
# - Building Code
#
# `Building code` and `Building name` are implicitly included. `Logger IP` or `MAC` would be essential to include, as would `Make` and `Model`. `Additional Location Info` is not essential but would be useful to have. Locations (`Locations.Space.Space number` and `Space Name`) are included in the Planon export - but as this is their only viable data source, they are not validated against.
#Planon:Master
loggers_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Serial Number",
"Make":"Make",
"Model":"Model"
}
master_loggerscontrollers_for_validation_filtered=master_loggerscontrollers_for_validation[list(loggers_match_dict.values())]
planon_loggerscontrollers_filtered=planon_loggerscontrollers[list(loggers_match_dict.keys())]
master_loggerscontrollers_for_validation_filtered.head(2)
planon_loggerscontrollers_filtered.head(2)
planon_loggerscontrollers_filtered.columns=[loggers_match_dict[i] for i in planon_loggerscontrollers_filtered]
planon_loggerscontrollers_filtered.drop_duplicates(inplace=True)
master_loggerscontrollers_for_validation_filtered.drop_duplicates(inplace=True)
planon_loggerscontrollers_filtered.head(2)
master_loggerscontrollers_for_validation_filtered.head(2)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
loggerscontrollers_not_in_planon.append(i)
print('\n\nLoggers in Master, but not in Planon:',
len(loggerscontrollers_not_in_planon),'/',len(b),':',
round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%')
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
loggerscontrollers_not_in_master.append(i)
print('\n\nLoggers in Planon, not in Master:',
len(loggerscontrollers_not_in_master),'/',len(a),':',
round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%')
print(len(planon_loggerscontrollers_filtered.index))
print(len(set(planon_loggerscontrollers_filtered.index)))
print(len(master_loggerscontrollers_for_validation_filtered.index))
print(len(set(master_loggerscontrollers_for_validation_filtered.index)))
master_loggerscontrollers_for_validation_filtered[master_loggerscontrollers_for_validation_filtered.index.duplicated()]
comon_index=list(set(master_loggerscontrollers_for_validation_filtered.index).intersection(set(planon_loggerscontrollers_filtered.index)))
master_loggerscontrollers_for_validation_intersected=master_loggerscontrollers_for_validation_filtered.loc[comon_index].sort_index()
planon_loggerscontrollers_intersected=planon_loggerscontrollers_filtered.loc[comon_index].sort_index()
master_loggerscontrollers_for_validation_intersected.head(2)
planon_loggerscontrollers_intersected.head(2)
planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected
# Loggers matching
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()
# Percentage matching
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
((planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100).plot(kind='bar')
# Loggers not matching on `Building Name`.
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
planon_loggerscontrollers_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_loggerscontrollers_intersected['Building Name'].values]
master_loggerscontrollers_for_validation_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_loggerscontrollers_for_validation_intersected['Building Name'].values]
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
# That didn't help.
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Building Name'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name'])
# Follow up with lexical distance comparison. That would flag this as a match.
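# That lexical-distance follow-up could use the standard library's `difflib`. The 0.8 similarity threshold below is an arbitrary illustrative choice, not a validated cut-off:

```python
from difflib import SequenceMatcher

def is_lexical_match(a, b, threshold=0.8):
    """Treat two strings as matching when their similarity ratio
    reaches the (arbitrarily chosen) threshold."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Minor spelling variants score high; unrelated names score low
print(is_lexical_match('sports centre', 'sports center'))
print(is_lexical_match('sports centre', 'chemistry lab'))
```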
# Loggers not matching on `Serial Number`.
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in planon_loggerscontrollers_intersected['Logger Serial Number'].values]
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in master_loggerscontrollers_for_validation_intersected['Logger Serial Number'].values]
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
# Technically the same, but there is a number format error. Compare based on float value; if the values match, replace one of them. This needs to be amended, as it will throw a `cannot convert to float` exception if strings are left in from the previous step.
z1=[]
z2=[]
for i in planon_loggerscontrollers_intersected.index:
if planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']:
if float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])==\
float(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']):
z1.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
z2.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=z1
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=z2
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
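# The amendment mentioned above — guarding against non-numeric serials — could be a small helper that falls back to the raw string instead of raising. This is a sketch, not the notebook's canonical logic:

```python
def normalise_serial(value):
    """Return a canonical string for numeric serials like '30125.0',
    leaving non-numeric strings untouched instead of raising."""
    try:
        number = float(value)
    except (TypeError, ValueError):
        return value  # e.g. an alphanumeric serial: pass through unchanged
    if number == int(number):
        return str(int(number))
    return str(number)

print(normalise_serial('30125.0'))   # '30125'
print(normalise_serial('em-0042'))   # unchanged
```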
# New error percentage:
(planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
# (Bearing in mind the above, this is technically 0.)
| test/eis-metadata-validation/.ipynb_checkpoints/Planon metadata validation4-Copy1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ahilan-invoke/Folder-Structure-Conventions/blob/master/Data_Processing(CVPARSER).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="9Bxpxx9SvPV1"
# !pip install pymupdf
# !apt-get install poppler-utils
# !pip install flashtext
# !pip install pdf2image
# !pip install -U pip setuptools wheel
# !pip install -U spacy[transformers]
# + id="EWKTlsS2kl_u"
import sys
sys.path.append('/content/drive/MyDrive/CVPARSER V4')
import spacy
import pandas as pd
import numpy as np
from flashtext import KeywordProcessor
from google.colab.patches import cv2_imshow
from data_extraction import ResumeDataExtraction
# + id="BgEiX1wjlEI0"
filepath1 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/CV_Junaidi_2019-v2.pdf"
filepath2 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/100948_2020929_231124.pdf"
filepath3 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/Ahilan_Ashwin.T CV.pdf"
filepath4 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/Mohd_Nor_Shohir_CV.pdf"
filepath5 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/228928_202117_164227.pdf"
filepath6 = "/content/drive/MyDrive/CVPARSER(V2)/A_pdf_only/100229_2020313_233715.pdf"
filepath7 = "/content/drive/MyDrive/CVPARSER(V2)/A_pdf_only/228910_202117_154625.pdf"
filepath8 = "/content/drive/MyDrive/CVPARSER(V2)/A_pdf_only/226252_202118_85929.pdf"
filepath9 = "/content/drive/MyDrive/CVPARSER(V2)/A_pdf_only/227714_2021110_13946.pdf"
filepath10 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/228866_202117_122752.pdf"
filepath11 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/225967_202117_13647.pdf"
filepath12 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/100280_2020127_235752.pdf"
filepath13 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs/CV Aeriq Aqmal.pdf"
filepath14 = "/content/drive/MyDrive/CV_sample2/new_resume_001.pdf"
# + id="4cXlS7Gtvb2g"
class NER:
def __init__(self):
self.nlp = spacy.load('/content/drive/MyDrive/SpaCy V1/output/model-best')
def create_doc(self, text):
self.doc = self.nlp(text)
def display_ents(self, display=True):
if display:
spacy.displacy.render(self.doc, style='ent', jupyter=True)
def show_ents(self):
if self.doc.ents:
for ent in self.doc.ents:
print(ent.text+' - ' +str(ent.start_char) +' - '+ str(ent.end_char)
+' - '+ent.label_+ ' - '+str(spacy.explain(ent.label_)))
else:
print('No named entities found.')
# Load models
ml = NER()
# + id="XJN4bznh_E96"
def dataframe_with_merged_spans(dataframe):
def join_spans(row):
page_num = pd.Series.min(row['page_num'])
line_num = pd.Series.min(row['line_num'])
text = " ".join(row['text'])
size = pd.Series.mean(row['size'])
flags = tuple(row['flags'])
font = tuple(row['font'])
color = tuple(row['color'])
boxes = row['bbox'].tolist()
x0, y0, x1, y1 = list(zip(*boxes))
new_box = (min(x0), min(y0), max(x1), max(y1))
return pd.Series([page_num, line_num, text, size, flags, font, color, new_box])
df = dataframe.copy()
new_df = df.groupby(['page_num', 'line_num']).apply(join_spans).reset_index(drop=True)
new_df.columns = ['page_num', 'line_num', 'text', 'size', 'flags', 'font', 'color', 'bbox']
return new_df
# + id="nplV4GI5I45c"
def group_by_style(dataframe):
df = dataframe.copy()
df['index'] = df.index
df['style'] = df.apply(
lambda row : (round(row['size']), row['flags'], row['font'], row['color'], row['text'].isupper(), row['text'].istitle()),
axis = 1)
style_df = df.groupby('style').apply(lambda x: [list(x['text']), list(x['index'])]).apply(pd.Series).reset_index()
style_df.columns = ['style', 'text', 'index']
return style_df
# + id="EPAv9OyK9pr_"
def add_keyword_ratio(text_list, processor):
text = " ".join(text_list)
results = processor.extract_keywords(text)
if results:
number_of_hits = sum(len(result.split()) for result in results)
number_of_words = len(text.split())
return number_of_hits/number_of_words
return 0
def get_headers(dataframe):
df = dataframe.copy()
with open("/content/drive/MyDrive/CVPARSER V4/Keyword2.txt") as f:
content = f.readlines()
keywordprocessor = KeywordProcessor()
keywordprocessor.add_keywords_from_list([x.strip() for x in content])
df['header_ratio'] = df.apply(lambda row: add_keyword_ratio(row['text'], keywordprocessor), axis=1)
possible_headers = df[df['header_ratio']>0.3]
if possible_headers.empty: possible_headers = df[df['header_ratio']==df['header_ratio'].max()]
possible_headers = possible_headers.set_index(['style', 'header_ratio']).apply(pd.Series.explode).reset_index()
possible_headers = possible_headers.sort_values(by='index', ignore_index=True)[['style', 'index', 'text', 'header_ratio']]
display(possible_headers)
indices = possible_headers['index'].tolist()
if 0 in indices: indices.remove(0)
return indices
# + id="CGx5-_ZeXreB"
def split_dataframe(dataframe, indexes):
return np.split(dataframe, indexes, axis=0)
# + [markdown] id="_jt2u_j_dxbY"
# # Testing
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="kUGshsAJdwE_" outputId="41c6fc7a-75df-4562-9feb-51bd3ef21187"
cv_data = ResumeDataExtraction(filepath=filepath1)
df = cv_data.extract_data()
features_df = cv_data.find_structuring_elements()
images = cv_data.convert_pdf_to_image(350, 1000)
if images:
for image in images:
cv2_imshow(image)
if cv_data.images_with_lines:
for image in cv_data.images_with_lines:
cv2_imshow(image)
if df is not None:
new_df = dataframe_with_merged_spans(df)
text = "\n".join(new_df['text'].tolist())
styles_df = group_by_style(new_df)
indexs = get_headers(styles_df)
sections = split_dataframe(new_df, indexs)
# display(df)
ml.create_doc(text)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="1ak5OhptDAWt" outputId="0eabed2f-0651-43fd-ee82-f7591cb2e680"
ml.display_ents()
# + [markdown] id="KSWkLUvhoDOB"
# # Testing
# + id="87w6Plcn0HVF"
def find_file(exp):
with open("/content/drive/MyDrive/CVPARSER V4/data_seen.txt") as f:
content = f.readlines()
content = [x.strip() for x in content]
found = ''
for line in content:
file_exp = exp
if file_exp in line:
found = line
break
if found:
print(found)
candidate = ResumeDataExtraction(filepath=found)
dataframe = candidate.extract_data()
images = candidate.convert_pdf_to_image(350, 1000)
if images:
for image in images:
cv2_imshow(image)
# + id="Z9W6QeG4AXkc" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1e77c353-9e93-43fa-ba81-a46b7774590d"
find_file("12759_201935_15854")
# + id="c1t029CtoEiI"
from os import listdir
from os.path import join
test_dir = "/content/drive/MyDrive/CVParserV1"
test_dir = "/content/drive/MyDrive/CV_sample2"
test_dir = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs"
test_dir = "/content/drive/MyDrive/CVPARSER(V2)/A_pdf_only"
test_dir = "/content/drive/MyDrive/CV_sample/CV_sample(original)"
# test_dir = '/content/drive/MyDrive/ahilan'
i = 0
for file in listdir(test_dir):
if i ==200:
break
if file[-3:] == "pdf":
filepath = join(test_dir,file)
print(filepath)
candidate = ResumeDataExtraction(filepath=filepath)
if candidate.images_with_lines is not None:
for img in candidate.images_with_lines:
cv2_imshow(img)
display(candidate.features)
print("\n")
i+=1
# + id="t6yDnR-TAx25"
import pathlib
def create_dataset(start, end, folder_index):
test_dir1 = "/content/drive/MyDrive/CVParserV1"
test_dir2 = "/content/drive/MyDrive/CV_sample2"
test_dir3 = "/content/drive/MyDrive/CVPARSER(V2)/Test_CVs"
test_dir4 = "/content/drive/MyDrive/CVPARSER(V2)/A_pdf_only"
test_dir5 = "/content/drive/MyDrive/CV_sample/CV_sample(original)"
test_dir6 = "/content/drive/MyDrive/ahilan"
test_dirs = [test_dir1, test_dir2, test_dir3, test_dir4, test_dir5, test_dir6]
save_path1 = pathlib.Path(f'/content/drive/MyDrive/NER_DATA_{folder_index}')
save_path2 = pathlib.Path(f'/content/drive/MyDrive/Classification_Data_{folder_index}')
save_path1.mkdir(parents=True, exist_ok=True)
save_path2.mkdir(parents=True, exist_ok=True)
i=0
for dir in test_dirs:
test_dir = pathlib.Path(dir)
for file_ in test_dir.glob('*.pdf'):
if i == end:
break
txt_path = save_path1.joinpath(file_.name[:-3]+'txt')
csv_path = save_path2.joinpath(file_.name[:-3]+'csv')
print(file_)
candidate = ResumeDataExtraction(filepath = file_)
dataframe = candidate.extract_data()
if dataframe is not None:
if i < start:
i+=1
continue
# print(file_)
new_df = dataframe_with_merged_spans(dataframe)
text = "\n".join(new_df['text'].tolist())
new_df.to_csv(csv_path, index=False)
f = open(txt_path, "w")
f.write(text)
f.close()
# + id="MowL3-T9COYK"
create_dataset(0, 1000, 1)
| Data_Processing(CVPARSER).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="2CBBv9cTXyZA" colab_type="code" outputId="1d81c39f-b537-40f5-f7fa-338d05200451" executionInfo={"status": "ok", "timestamp": 1576936927160, "user_tz": -60, "elapsed": 8825, "user": {"displayName": "<NAME>\u0142ota", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 390}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pickle
import os
import random
from sklearn.metrics import balanced_accuracy_score, precision_score, recall_score, accuracy_score
# !pip install -U mlxtend
from mlxtend.evaluate import confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
# + id="uJW2ofNqYQPt" colab_type="code" colab={}
def load(path):
from google.colab import drive
drive.mount('/gdrive')
data_dir = '/gdrive/My Drive/Studia/INZYNIERKA/StockPricePrediction/data/'
with open(data_dir + path, 'rb') as handle:
data = pickle.load(handle)
return data
# + id="wdj8TW7VYxqo" colab_type="code" colab={}
def save(path, data):
from google.colab import drive
drive.mount('/gdrive')
data_dir = '/gdrive/My Drive/Studia/INZYNIERKA/StockPricePrediction/data/'
with open(data_dir + path, 'wb') as handle:
pickle.dump(data, handle)
# + id="CyvjYtdY7JJE" colab_type="code" outputId="41be6a6d-1763-47f0-835a-a7d27299b184" executionInfo={"status": "ok", "timestamp": 1576936949130, "user_tz": -60, "elapsed": 30725, "user": {"displayName": "<NAME>\u0142ota", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 125}
train_x, train_y, val_x, val_y, test_x, test_y = load('input/data_lookback_1_notscaled.pickle')
# + id="tOi5xdE8r3eH" colab_type="code" outputId="789a2ef1-c8a8-4a89-e223-362957d8da40" executionInfo={"status": "ok", "timestamp": 1576936949135, "user_tz": -60, "elapsed": 30715, "user": {"displayName": "<NAME>\u0142ota", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 70}
print('train', train_y[train_y == 1].shape, ' 0: ', train_y[train_y == 0].shape)
print('val', val_y[val_y == 1].shape, ' 0: ', val_y[val_y == 0].shape)
print('test', test_y[test_y == 1].shape, ' 0: ', test_y[test_y == 0].shape)
# + id="bCxffW0j0VKi" colab_type="code" outputId="8c0a182c-69d5-47d1-86f8-a27ee3bdb0bb" executionInfo={"status": "ok", "timestamp": 1576936949141, "user_tz": -60, "elapsed": 30711, "user": {"displayName": "<NAME>\u0142ota", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 70}
count_1 = 0
count_0 = 0
for data_y in [train_y, val_y, test_y]:
count_1 += data_y[data_y == 1].shape[0]
count_0 += data_y[data_y == 0].shape[0]
print('1 -> ', count_1)
print('0 -> ', count_0)
all_counts = count_0 + count_1
weight_1 = count_1 / all_counts
weight_0 = count_0 / all_counts
weight_1
# + id="RfSMWAGditp-" colab_type="code" outputId="100561f7-1b50-4486-b110-c578ccba8e45" executionInfo={"status": "ok", "timestamp": 1576936950404, "user_tz": -60, "elapsed": 31960, "user": {"displayName": "<NAME>\u0142ota", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 70}
# PSEUDO-RANDOM MODEL
predictions_list = []
set_lens, acc_all, prec_all, rec_all, conf_all = [], [], [], [], []
for x in range(1000):
predictions = random.choices([0, 1], weights=[weight_0, weight_1], k=test_y.shape[0])
predictions_list.append(predictions)
prec = precision_score(test_y, predictions)
rec = recall_score(test_y, predictions)
acc = accuracy_score(test_y, predictions)
# print(f"Accuracy: {round(acc * 100,2)}%")
# print(f"precision: {round(prec * 100,2)}%")
# print(f"recall: {round(rec * 100,2)}%")
acc_all.append(acc)
prec_all.append(prec)
rec_all.append(rec)
print(f"Accuracy: {round(np.mean(acc_all) * 100,2)}%")
print(f"precision: {round(np.mean(prec_all) * 100,2)}%")
print(f"recall: {round(np.mean(rec_all) * 100,2)}%")
# + id="YiVndUMP7sE3" colab_type="code" outputId="d471248d-a88a-43b5-bade-cdc3b95c4369" executionInfo={"status": "ok", "timestamp": 1576936950411, "user_tz": -60, "elapsed": 31952, "user": {"displayName": "<NAME>\u0142ota", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 428}
plt.rcParams.update({'font.size': 14})
cm = confusion_matrix(y_target=test_y,
y_predicted=predictions)
fig, ax = plot_confusion_matrix(conf_mat=cm,
class_names=['Spadek', 'Wzrost'],
figsize=(6,6))
_= plt.ylabel('Prawdziwa etykieta')
_ = plt.xlabel('Przewidziana etykieta')
# + id="TLO1QGcSnthc" colab_type="code" outputId="80ee3810-71c5-4525-a5e4-b44746be9a86" executionInfo={"status": "ok", "timestamp": 1576936951977, "user_tz": -60, "elapsed": 33497, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 232}
def plot_predictions(length=60):
fig = plt.figure(figsize=(22,3))
idx = np.random.randint(test_y.shape[0], size=length)
plt.plot(test_y[idx])
plt.plot(np.array(predictions)[:length])
plt.legend(['Rzeczywista', 'Predykcja'], loc=1)
plt.xlabel('Krok czasowy')
plt.ylabel('Klasa')
plt.yticks([0, 1], ['Spadek', 'Wzrost'])
plot_predictions()
# + id="ODUDsSPKxxn5" colab_type="code" outputId="fd0a9d2e-9474-4495-e836-fbdbaa30e81c" executionInfo={"status": "ok", "timestamp": 1576936951989, "user_tz": -60, "elapsed": 33496, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00826307941781326452"}} colab={"base_uri": "https://localhost:8080/", "height": 52}
print(len(predictions))
save('predictions/pseudo_random.pickle', predictions_list)
| src/models/00_PSEUDO_RANDOM_BASELINE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Structure Parsing
#
# For this project, you will use Python to parse a structure downloaded from the website [PubChem](https://pubchem.ncbi.nlm.nih.gov/). The files from PubChem are in a format called SDF. We will write a script to extract the coordinates from the file and save them in a format called an xyz file so they can be used for our simulations later.
#
# We've included an sdf file for nitrobenzene in the sdf folder, but after you complete this exercise, you can download and try this with other sdf files. The file you should open is in the sdf folder and named 'nitrobenzene.sdf'
# +
# Write code here to build the file path. We want to open a file called 'nitrobenzene' in the sdf folder.
import os
filehandle =
# +
# Write code here to open and read the file. Save the file contents
# in a variable called data
data =
# -
# This cell will print the file with line numbers
for line_num, line in enumerate(data):
print(line_num, line)
# For these files, the first number on row 3 (remember counting starts at 0!) is always the number of atoms.
#
# For our example, the line looks like this:
#
# `3 14 14 0 0 0 0 0 0 0999 V2000`
#
# Remember that the first number (3) is the line number, so the line from the file is
#
# `14 14 0 0 0 0 0 0 0999 V2000`
# +
# To get the number of atoms, we will want to get information from this line. First, slice the data variable to
# get the fourth line
fourth_line =
print(fourth_line)
# -
# Next, we'll want to split our line so we can get the number of atoms. Write code to split the line here, and
# save in a variable called words
words =
# +
# After you have split the line, get the first value in that list to get the number of atoms.
num_atoms =
# We want num_atoms to be a number (it's a string because we read it from a file), so we save it as an integer
num_atoms = int(num_atoms)
print(F"There are {num_atoms} atoms")
# -
# After the fourth line, the next lines give the coordinates of the atoms. There will be one line for each atom.
#
# How can you slice data so that you get the lines with atom information?
# +
starting_line = 4
# What is the ending line of the atom information if it starts on line 4 and there are
# num_atoms lines?
ending_line =
# Slice data to get the lines that have coordinates. Save in a variable called atom_lines
atom_lines =
for line in atom_lines:
print(line)
# -
# In these lines, the first column is the x coordinate, the second is the y coordinate, the third is the z coordinate, and the fourth is the atom element. We'll first see how to get this information out of one line, then move it into a loop to do for all of the lines.
# +
# Get the first line
first_line =
# Split the first line on white space
words =
# Get the atom name and coordinates
atom_name =
x =
y =
z =
print(atom_name, x, y, z)
# -
# We want to do this for all the lines and write a file called an xyz file.
#
# An xyz file always has the number of atoms on the first line. The second line is always a comment; it doesn't really matter what it says. Then, the coordinates are listed for each atom
#
# `element x_coordinate y_coordinate z_coordinate`
#
# Loop over the lines in your `atom_lines` variable and print the atom names and coordinates. Save the x coordinate in a list called `x_coord`, the y coordinates in a list called `y_coord`, the z coordinates in a list called `z_coords` and the atom names in a list called `atom_names`,
# +
# Loop over atom_lines and save atom_names, x_coord, y_coord, z_coord
# -
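# A sketch of one possible answer for the loop above. The sample `atom_lines` here is made up so the cell runs on its own; in the notebook it comes from your earlier slice.

```python
# Hypothetical coordinate lines: x, y, z, element.
atom_lines = [
    "  1.0   2.0   3.0   O",
    "  4.0   5.0   6.0   H",
]

atom_names = []
x_coord = []
y_coord = []
z_coord = []

for line in atom_lines:
    words = line.split()
    # Columns are: x, y, z, element.
    x_coord.append(words[0])
    y_coord.append(words[1])
    z_coord.append(words[2])
    atom_names.append(words[3])
    print(words[3], words[0], words[1], words[2])
```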
# # Saving the information to a file
# We might want to save the information to a file for later use. The code below opens a file called `output.xyz` and writes the list you created in a file.
# +
num_atoms = len(atom_names)
# Open an output file called 'output.xyz' for writing.
output_file = open('output.xyz', 'w+')
# First - write the number of atoms
output_file.write(f'{num_atoms}\n')
# Second line - a comment, we will leave empty
output_file.write("\n")
for i in range(num_atoms):
output_string = atom_names[i] + ' ' + x_coord[i] + ' ' + y_coord[i] + ' ' + z_coord[i] + '\n'
output_file.write(output_string)
output_file.close()
# -
# After you have written the file, navigate back to your file browser. You should see a new file called 'output.xyz'. You can click on this file to view the file you wrote.
| structure_parsing-student.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PySAM Workshop
#
# Oct 14, 2020
#
# dguittet
#
# https://sam.nrel.gov/software-development-kit-sdk/pysam.html
# ## How to Get Started
#
# 64-bit Python 3.5-3.8 for Linux, Mac and Windows
#
# PyPi:
# ```
# pip install nrel-pysam
# ```
#
# Anaconda:
# ```
# conda install -c nrel nrel-pysam
# ```
# ## Model Initialization
#
# Each technology and financial configuration is composed of unit modules. Module documentation is in [Modules](https://nrel-pysam.readthedocs.io/en/master/Models.html) or refer to [Module Index](https://nrel-pysam.readthedocs.io/en/master/py-modindex.html).
#
# Each module can be imported by:
#
# ```
# import PySAM.<Module name>
# ```
#
# There are four ways to initialize a model:
#
# 1. new
# 2. default
# 3. from_existing
# 4. wrap
#
import PySAM.Pvwattsv7 as pv
import PySAM.Utilityrate5 as ur
# ### New
#
# Creates an instance with empty attributes.
new_model = pv.new()
print(type(new_model))
new_model.export()
# ### Default
#
# There is a lot of data needed to run a model. Entering that data with assignment statements in PySAM can be tedious. One way to get a full set of data is to load the default setup, which is the same as in SAM. A module's default values are unique for each SAM configuration type.
#
# Default config names are __case insensitive__. The list of options can be found on the module's PySAM doc page or as below:
# list configuration options
help(pv.default)
default_model = pv.default("FuelCellCommercial")
default_model.export()
# This is good if your situation is close to the default.
#
# Often, however, this is not the case. When your setup is significantly different than the default,
# you can enter the data using SAM, and save that data to a JSON file, which can then be loaded by PySAM.
# You can then modify and use the data in PySAM as needed. This is shown in [To import a case from the SAM GUI](https://nrel-pysam.readthedocs.io/en/master/Import.html) and was a subject of last year's webinar.
#
# ### from_existing
#
# When running more than one unit module in a sequence, data needs to be passed from one model to the next. For example, a technology module's `analysis_period` and generation profile `gen` are inputs to the utility rate calculator, which uses them to calculate `annual_energy_value`, the energy value in each year of the analysis period due to electricity bill savings.
#
# `from_existing` is used to create a new model that shares the underlying data with an existing model. This new model can be created with default values if a default configuration name is provided, similar to ``default``.
shared_model = ur.from_existing(default_model, "FuelCellCommercial")
print(default_model.Lifetime.analysis_period)
print(shared_model.Lifetime.analysis_period)
# change in analysis period reflected in both models
default_model.Lifetime.analysis_period = 20
print(default_model.Lifetime.analysis_period)
print(shared_model.Lifetime.analysis_period)
# All variables and their values, including inputs and outputs (after model execution), are shared between models linked in this way. When the PV model below is executed, its outputs will automatically be available to the Utility rate model. __Order of execution matters.__
#
# Simulation data is passed between unit modules when the name of a unit module's output is the same as another unit module's input, such as `gen`. __The group names can be different.__
# gen doesn't exist yet because simulation hasn't been executed
default_model.Outputs.gen
# can't calculate utility rate value without a generation profile
shared_model.execute(0)
# +
# execute PV then utility rate calculations
default_model.SolarResource.solar_resource_file = "/Users/dguittet/SAM Downloaded Weather Files/weather.csv"
default_model.execute(0)
print('gen\n', default_model.Outputs.gen[0:10])
shared_model.execute(0)
print('\nannual energy value\n', shared_model.Outputs.annual_energy_value)
# -
# ### wrap
#
# Creates a model from a PySSC data structure. This allows compatibility with PySSC.
#
# This is used primarily during data import from SAM via JSON. This import feature was covered in the [2019 PySAM Webinar](https://sam.nrel.gov/software-development-kit-sdk/pysam.html) and is also shown in [To import a case from the SAM GUI](https://nrel-pysam.readthedocs.io/en/master/Import.html).
# +
import PySAM.PySSC as pyssc
ssc = pyssc.PySSC()
pv_dat = ssc.data_create()
ssc.data_set_number(pv_dat, b'analysis_period', 10)
wrap_model = pv.wrap(pv_dat)
wrap_model.export()
# -
# ## Detailed PV-Battery - Commercial Owner Example
# +
import PySAM.Pvsamv1 as pvsam
import PySAM.Grid as grid
import PySAM.Utilityrate5 as ur
import PySAM.Cashloan as cl
import PySAM
print(PySAM.__version__)
# +
# get all years of weather data
import glob
import random
import PySAM.ResourceTools as tools
from PySAM.BatteryTools import battery_model_sizing
weather_folder = "/Users/dguittet/SAM Downloaded Weather Files/lexington_or/"
weather_files = glob.glob(weather_folder + "/*.csv")
# load data from file into dictionaries
weather_data = [tools.SAM_CSV_to_solar_data(f) for f in weather_files]
steps_per_year = len(weather_data[0]['year'])
print(steps_per_year)
# initialize all models
pvbatt_model = pvsam.default("PVBatteryCommercial")
grid_model = grid.from_existing(pvbatt_model, "PVBatteryCommercial")
ur_model = ur.from_existing(pvbatt_model, "PVBatteryCommercial")
cl_model = cl.from_existing(pvbatt_model, "PVBatteryCommercial")
# change simulation settings
pvbatt_model.Lifetime.analysis_period = 1
pvbatt_model.value("batt_room_temperature_celsius", (25,) * steps_per_year)
pvbatt_model.BatteryDispatch.batt_dispatch_choice = 0 # peak shaving
# +
def installed_cost(pv_kw, battery_kw, battery_kwh):
return pv_kw * 700 + battery_kw * 600 + battery_kwh * 300
print("batt_kw\tbatt_kwh\tavg_npv")
for battery_kw in range(10, 100, 10):
battery_kwh = 4 * battery_kw # four hour battery
battery_model_sizing(pvbatt_model, battery_kw, battery_kwh, 500)
npvs = []
for solar_resource_data in weather_data:
pvbatt_model.SolarResource.solar_resource_data = solar_resource_data
pvbatt_model.execute(0)
grid_model.execute(0)
ur_model.execute(0)
cl_model.total_installed_cost = installed_cost(pvbatt_model.SystemDesign.system_capacity, battery_kw, battery_kwh)
cl_model.execute(0)
npvs.append(cl_model.Outputs.npv)
avg_npv = sum(npvs) / len(npvs)
print("{}\t{}\t{}".format(battery_kw, battery_kwh, avg_npv))
| Examples/PySAMWorkshop.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DateMatcher multi-language
#
# #### This annotator allows you to specify a source language that will be used to identify temporal keywords and extract dates.
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "943a272c-0686-4e02-a8d9-b2849721c829", "showTitle": false, "title": ""}
# Import Spark NLP
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.pretrained import PretrainedPipeline
import sparknlp
# Start Spark Session with Spark NLP
# start() functions has two parameters: gpu and spark23
# sparknlp.start(gpu=True) will start the session with GPU support
# sparknlp.start(spark23=True) is when you have Apache Spark 2.3.x installed
spark = sparknlp.start()
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "b200e2aa-6280-4f51-9eb4-e30f660e2ba4", "showTitle": false, "title": ""}
spark
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "c0b759a0-346f-4d9f-9f01-383124c0aa05", "showTitle": false, "title": ""}
sparknlp.version()
# -
# # German examples
# ### Let's look at some sentences from news articles where relative dates are present.
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "a91c2626-5ef8-4e01-9563-120daf4f63f3", "showTitle": false, "title": ""}
de_articles = [
("Am Sonntag, 11. Juli 2021, benutzte Chiellini das Wort Kiricocho, als Saka sich dem Ball zum Elfmeter näherte.",),
("Die nächste WM findet im November 2022 statt.",),
]
# -
# ### Let's fill a DataFrame with the text column
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "cfe3f9e0-4a96-44bb-b056-0b4a5407c6dc", "showTitle": false, "title": ""}
articles_cols = ["text"]
df = spark.createDataFrame(data=de_articles, schema=articles_cols)
df.printSchema()
df.show()
# -
# ### Now, let's create a simple pipeline to apply the DateMatcher, specifying the source language
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "f4baf2a1-3e75-479e-9e9b-2b071624ee3d", "showTitle": false, "title": ""}
document_assembler = DocumentAssembler() \
.setInputCol("text") \
.setOutputCol("document")
date_matcher = DateMatcher() \
.setInputCols(['document']) \
.setOutputCol("date") \
.setFormat("MM/dd/yyyy") \
.setSourceLanguage("de")
# ### Let's transform the Data
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "10380fbb-43c1-42c3-b6d0-f02e55d75a24", "showTitle": false, "title": ""}
assembled = document_assembler.transform(df)
date_matcher.transform(assembled).select('date').show(10, False)
# -
| jupyter/annotation/german/date_matcher_multi_language_de.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
import pickle
from collections import defaultdict
from os.path import join, exists, splitext, basename, isdir
from os import listdir, symlink, makedirs
from glob import glob
from praatio import tgio
from termcolor import colored
from tqdm import tqdm
import pandas as pd
import numpy as np
from librosa import get_duration
import scipy.io.wavfile as wav
from cac.utils.pandas import apply_antifilters
# +
# directory where the data resides
data_root = '/data/freesound-kaggle/'
# src and destination directories
load_dir = join(data_root, 'raw')
save_root = join(data_root, 'processed')
makedirs(save_root, exist_ok=True)
load_audio_dir = join(load_dir, 'audio')
save_audio_dir = join(save_root, 'audio')
makedirs(save_audio_dir, exist_ok=True)
# -
files = glob(join(load_audio_dir, '*.wav'))
len(files)
# +
invalid_files = []
for file in tqdm(files, desc='Checking valid files'):
    try:
        fs, signal = wav.read(file)
    except Exception:
        invalid_files.append(file)
# -
len(invalid_files)
# +
# -------- Creating `processed/audio` -------- #
# +
files = []
for file in tqdm(glob(join(load_audio_dir, '*.wav')), desc='Creating symlinks processed/ <- raw/'):
# print(file)
save_filename = basename(file)
save_path = join(save_audio_dir, save_filename)
# ignore .wav
files.append(splitext(save_filename)[0])
if not exists(save_path):
symlink(file, save_path)
# +
# -------- Creating `processed/annotation.csv` -------- #
# -
train_annotations = pd.read_csv(join(load_dir, 'train_post_competition.csv'))
test_annotations = pd.read_csv(join(load_dir, 'test_post_competition_scoring_clips.csv'))
# +
# making both the DFs have the same columns
# -
train_annotations['usage'] = 'Public'
train_annotations.head()
test_annotations['manually_verified'] = 1
test_annotations.head()
len(train_annotations), len(test_annotations)
attributes = pd.concat([train_annotations, test_annotations])
# +
# removing rows for which audio file was not extracted properly or does not exist
# -
len(invalid_files)
attributes = apply_antifilters(attributes, {'fname': [basename(x) for x in invalid_files]})
attributes.shape
attributes['label'] = attributes['label'].apply(lambda x: x.lower())
files = [splitext(f)[0] for f in attributes['fname']]
classification_targets = [[label] for label in attributes['label']]
len(files), len(classification_targets)
starts = [0.0 for _ in files]
ends = [get_duration(filename=join(load_dir, 'audio', x + '.wav')) for x in tqdm(files)]
# create dataframe storing the data
final_df = pd.DataFrame(
{'file': files, 'classification': classification_targets, 'manually_verified': attributes['manually_verified'], 'start': starts, 'end': ends}
)
final_df.head()
# save the dataframe
annotation_save_path = join(save_root, 'annotation.csv')
final_df.to_csv(annotation_save_path, index=False)
# +
# -------- Creating `processed/attributes.csv` -------- #
# -
attributes.head()
# save the dataframe
attribute_save_path = join(save_root, 'attributes.csv')
attributes.to_csv(attribute_save_path, index=False)
# +
# -------- Creating `processed/description.txt` -------- #
# -
description = '\
Annotation columns: \n \
`classification`: valid labels = ["Acoustic_guitar", "Applause", "Bark", "Bass_drum", "Burping_or_eructation", "Bus", \n \
"Cello", "Chime", "Clarinet", "Computer_keyboard", "Cough", "Cowbell", "Double_bass", "Drawer_open_or_close", \n \
"Electric_piano", "Fart", "Finger_snapping", "Fireworks", "Flute", "Glockenspiel", "Gong", "Gunshot_or_gunfire", \n \
"Harmonica", "Hi-hat", "Keys_jangling", "Knock", "Laughter", "Meow", "Microwave_oven", "Oboe", "Saxophone", "Scissors", \n \
"Shatter", "Snare_drum", "Squeak", "Tambourine", "Tearing", "Telephone", "Trumpet", "Violin_or_fiddle", "Writing"\n \
\n \
Attributes: \n \
`names`: ["fname", "label", "manually_verified", "freesound_id", "license", "usage"]'
with open(join(save_root, 'description.txt'), 'w') as f:
f.write(description)
| datasets/cleaning/freesound-kaggle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: chemprop_pgg
# language: python
# name: chemprop_pgg
# ---
# # Split QM9 data
import pandas as pd
import os
import numpy as np
np.random.seed(42)
df = pd.read_csv('qm9.csv')
df.sample(5)
df_subsample = df.sample(50_000, random_state=42)
len(df_subsample)
_train_size = 0.8
_val_size = 0.1
_train_val_size = _train_size + _val_size
train, validate, test = np.split(df_subsample, [int( _train_size * len(df_subsample)), int( _train_val_size * len(df_subsample))])
print('Train: {0} | Val: {1} | Test: {2}'.format(train.shape, validate.shape, test.shape))
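# The index arithmetic above can be sanity-checked on a plain array: with 100 elements and the same 80/10/10 boundaries, `np.split` should return pieces of length 80, 10, and 10.

```python
import numpy as np

# 100 dummy rows; same boundary arithmetic as the DataFrame split above.
toy = np.arange(100)
n = len(toy)
train, validate, test = np.split(toy, [int(0.8 * n), int(0.9 * n)])
print(len(train), len(validate), len(test))
```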
train.to_csv('qm9_train.csv', columns=train.columns, index=None)
validate.to_csv('qm9_val.csv', columns=validate.columns, index=None)
test.to_csv('qm9_test.csv', columns=test.columns, index=None)
| qm9_truncated/make_train_test_val.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Big Market Sales Prediction
# # Importing all essentials
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
sns.set_style('whitegrid')
warnings.filterwarnings('ignore')
# # Data Analysis and Collection
df = pd.read_csv('bigmarket.csv')
df.head()
df.shape
df.info()
# Categorical Features:
# - Item_Identifier
# - Item_Fat_Content
# - Item_Type
# - Outlet_Identifier
# - Outlet_Size
# - Outlet_Location_Type
# - Outlet_Type
# checking for missing values
df.isnull().sum()
# Handling Missing Values
#
# - MEAN -> AVERAGE
# - MODE -> MORE REPEATED VALUE
df['Item_Weight'].mean()
# filling missing values in "Item-Weight" with MEAN VALUE
df['Item_Weight'].fillna(df['Item_Weight'].mean(), inplace=True)
# Mode of "Outlet_Size" column
df['Outlet_Size'].mode()
# filling the missing values in "Outlet_Size" column with Mode
mode_of_Outlet_size = df.pivot_table(values='Outlet_Size', columns='Outlet_Type', aggfunc=(lambda x: x.mode()[0]))
print(mode_of_Outlet_size)
miss_values = df['Outlet_Size'].isnull()
print(miss_values)
df.loc[miss_values, 'Outlet_Size'] = df.loc[miss_values, 'Outlet_Type'].apply(lambda x:mode_of_Outlet_size[x])
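# The fill pattern above — mode of `Outlet_Size` per `Outlet_Type`, then a `loc`-assignment over the missing rows — can be seen end to end on a tiny made-up frame. The data below is hypothetical; only the column names come from this dataset.

```python
import pandas as pd

# Hypothetical data; only the column names come from this dataset.
toy = pd.DataFrame({
    'Outlet_Type': ['Grocery', 'Grocery', 'Grocery', 'Market', 'Market'],
    'Outlet_Size': ['Small', 'Small', None, 'Medium', None],
})

# Most frequent Outlet_Size for each Outlet_Type, as in the cell above.
mode_by_type = toy.pivot_table(values='Outlet_Size', columns='Outlet_Type',
                               aggfunc=lambda x: x.mode()[0])

# Fill each missing size with the mode of its outlet type.
missing = toy['Outlet_Size'].isnull()
toy.loc[missing, 'Outlet_Size'] = toy.loc[missing, 'Outlet_Type'].apply(
    lambda t: mode_by_type[t].iloc[0])
print(toy)
```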
#checking for missing values
df.isnull().sum()
# # Data Analysis
df.describe()
# Item_Weight distribution
plt.figure(figsize=(6,6))
sns.distplot(df['Item_Weight'])
plt.show()
# Item Visibility distribution
plt.figure(figsize=(6,6))
sns.distplot(df['Item_Visibility'])
plt.show()
# Item MRP distribution
plt.figure(figsize=(6,6))
sns.distplot(df['Item_MRP'])
plt.show()
# Item_Outlet_Sales distribution
plt.figure(figsize=(6,6))
sns.distplot(df['Item_Outlet_Sales'])
plt.show()
# Outlet_Establishment_Year column
plt.figure(figsize=(6,6))
sns.countplot(x='Outlet_Establishment_Year', data=df)
plt.show()
# Item_Fat_Content column
plt.figure(figsize=(6,6))
sns.countplot(x='Item_Fat_Content', data=df)
plt.show()
# Item_Type column
plt.figure(figsize=(30,6))
sns.countplot(x='Item_Type', data=df)
plt.show()
# Outlet_Size column
plt.figure(figsize=(6,6))
sns.countplot(x='Outlet_Size', data=df)
plt.show()
# # Data Pre-Processing
df.head()
df['Item_Fat_Content'].value_counts()
df.replace({'Item_Fat_Content':{'low fat':'Low Fat', 'LF':'Low Fat', 'reg':'Regular'}},inplace=True)
df['Item_Fat_Content'].value_counts()
# ### Label Encoding
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
# +
df['Item_Identifier'] = encoder.fit_transform(df['Item_Identifier'])
df['Item_Fat_Content'] = encoder.fit_transform(df['Item_Fat_Content'])
df['Item_Type'] = encoder.fit_transform(df['Item_Type'])
df['Outlet_Identifier'] = encoder.fit_transform(df['Outlet_Identifier'])
df['Outlet_Size'] = encoder.fit_transform(df['Outlet_Size'])
df['Outlet_Location_Type'] = encoder.fit_transform(df['Outlet_Location_Type'])
df['Outlet_Type'] = encoder.fit_transform(df['Outlet_Type'])
# -
df.head()
# # Splitting features and Target
X = df.iloc[:,:-1].values
y = df.iloc[:,-1].values
print(X)
print(y)
# # Splitting into Train Test Split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state=0)
print(X.shape, X_train.shape, X_test.shape)
# # Training Algorithm
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# +
model = Sequential()
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1)) # output layer
model.compile(optimizer='rmsprop', loss='mse')
# -
model.fit(x = X_train, y = y_train, epochs=50)
model.summary()
loss_df = pd.DataFrame(model.history.history)
loss_df
loss_df.plot()
# From the above graph we can see that the loss of our model decreases significantly.
# # Model Evaluation
# Model evaluation on test data
test_eval = model.evaluate(X_test, y_test, verbose=0)
print(test_eval)
# model evaluation on train set
train_eval = model.evaluate(X_train, y_train, verbose=0)
print(train_eval)
# Checking difference between train_eval and test_eval
model_diff = train_eval - test_eval
print(model_diff)
# Prediction on the train data
train_prediction = model.predict(X_train)
# Prediction on the test data
test_prediction = model.predict(X_test)
print(train_prediction)
print(test_prediction)
# R Squared: R-squared measures the strength of the relationship between your model and the dependent variable on a convenient 0 – 100% scale.
from sklearn.metrics import r2_score
r2_train = r2_score(y_train, train_prediction)
print('R squared value of train data:', r2_train)
r2_test = r2_score(y_test, test_prediction)
print('R squared value of test data:',r2_test)
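# R-squared can also be computed directly from its definition, 1 − SS_res / SS_tot, which makes a handy sanity check on `r2_score`. The numbers below are synthetic.

```python
import numpy as np
from sklearn.metrics import r2_score

# Synthetic true values and predictions.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
manual_r2 = 1 - ss_res / ss_tot

print(manual_r2, r2_score(y_true, y_pred))
```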
| Big Market Sales Prediction/big-market-sales-prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lists vs. Arrays
import numpy as np
L = [1,2,3] # creating python list
A = np.array([1,2,3]) # creating numpy array
for e in L:
    print(e)
for e in A:
    print(e)
L.append(4) # adding element to list
L = L + [5] # another way to add element to list
L
A + A # adding two arrays element wise addition in numpy (vector addition)
A * 2 # multiplying vector times a scalar
A**2 # squaring a vector
# Something to keep in mind about numpy is that most functions act element wise. This just means that the function is applied to each element of the vector or matrix.
np.sqrt(A) # taking square root of all elements in vector
np.log(A) # element wise log
np.exp(A) # element wise exponential
# With numpy you can treat lists like a vector, a mathematical object.
# ---
# <br></br>
# # Dot Products
#
# Recall that there are two definitions of the dot product, and they are each equivalent.
#
# 1: The first is the summation of the element wise multiplication of the two vectors:
#
# $$a \cdot b = a^Tb = \sum_{d=1}^Da_db_d$$
#
# Here $d$ is being used to index each component. Notice that the convention $a^Tb$ implies that the vectors are column vectors, which means that the result is a (1 x 1), aka a scalar.
#
# 2: The second is the magnitude of $a$, times the magnitude of $b$, times the cosine of the angle between $a$ and $b$:
#
# $$a \cdot b = |a||b|\cos\theta_{ab}$$
#
# This method is not very convenient unless we know each of the things on the right hand side to begin with. It would generally be used to find the angle itself.
#
# ### Definition 1
# Let's look at this in code.
a = np.array([1,2])
a
b = np.array([2,1])
b
# If we wanted to use the direct definition of the dot product, we would want to loop through both arrays simultaneously, multiply each corresponding element together, and add it to the final sum.
dot = 0
for e, f in zip(a,b):
dot += e*f
dot # result is 4 as expected
# Another interesting operation that you can do with numpy arrays is multiply two arrays together. We have already seen how to multiply a vector by a scalar.
a * b # element wise multiplication of a and b
# However, the above method could not be done with two arrays of different sizes. Now, if we summed the result of `a * b` we would end up with the dot product.
np.sum(a * b) # this is the element wise multiplication of a and b, summed
# An interesting thing about numpy is that the sum function is an instance method of the numpy array itself. So we could also write the above as:
(a * b).sum()
# Now, while both of the above methods yield the correct answer, there is a more convenient way to calculate the dot product. Numpy comes packaged with a dot product function.
np.dot(a,b)
# Like the `sum` function, the `dot` function is also an instance method of the numpy array, so we can call it on the object itself.
a.dot(b)
# This is also equivalent to:
b.dot(a)
# ### Definition 2
# Let's now look at the alternative definition of the dot product, to calculate the angle between $a$ and $b$. For this we need to figure out how to calculate the length of a vector. We can do this by taking the square root of the sum of each element squared. In other words, use the Pythagorean theorem.
amag = np.sqrt( (a * a).sum())
amag
# Numpy actually has a function to do all of this work for us, since it is such a common operation. It is part of the linalg module in numpy, which also contains many other linear algebra functions.
amag = np.linalg.norm(a)
amag
# Now with this in hand, we are ready to calculate the angle. For clarity, the angle is defined as:
#
# $$\cos\theta_{ab} = \frac{a \cdot b}{|a||b|}$$
cosangle = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
cosangle
# So the cosine of the angle is 0.8, and the actual angle is the arc cosine of 0.8:
angle = np.arccos(cosangle)
angle
# By default this is in radians.
# ---
# <br></br>
# # Vectors and Matrices
# A numpy array has already been shown to be like a vector: we can add them, multiply them by a scalar, and perform element wise operations like `log` or `sqrt`. So what is a matrix then? Think of it as a two dimensional array.
M = np.array([ [1,2], [3,4] ]) # creating a matrix. 1st index is row, 2nd index is col
M
M[0][0] # one way of accessing values from matrix
M[0,0] # another shorthand way of accessing value in matrix
# There is an actual data type in numpy called matrix as well.
M2 = np.matrix([ [1,2], [3,4] ])
M2
# This works somewhat similarly to a numpy array, but it is not exactly the same. Most of the time we just use numpy arrays, and in fact the official documentation actually recommends not using numpy matrix. If you see a matrix, it is a good idea to convert it into an array:
M3 = np.array(M2)
M3
# Note that even though this is now an array, we still have convenient matrix operations. For example if we wanted to find the transpose of M:
M
M.T
# To summarize, we have shown that a matrix is really just a 2-dimensional numpy array, and a vector is a 1-dimensional numpy array. So a matrix is really like a 2 dimensional vector. The more general way to think about this is that a matrix is a 2-dimensional mathematical object that contains numbers, and a vector is a 1-dimensional mathematical object that contains numbers.
#
# Sometimes you may see vectors represented as a 2-d object. For example, in a math textbook a column vector may be described as (3 x 1), and a row vector (1 x 3). Sometimes we may represent them like this in numpy.
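# That 2-d representation can be produced with `reshape` — a quick sketch of row and column vector shapes:

```python
import numpy as np

v = np.array([1, 2, 3])    # 1-d vector, shape (3,)
col = v.reshape(3, 1)      # column vector, shape (3, 1)
row = v.reshape(1, 3)      # row vector, shape (1, 3)

print(v.shape, col.shape, row.shape)

# A (1 x 3) row vector times a (3 x 1) column vector gives a (1 x 1) result.
print(row.dot(col))
```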
# ---
# <br></br>
# # Generating Matrices to Work With
# Sometimes we just need arrays to try stuff on, like in this course. One way to do this is to use `np.array` and pass in a list:
np.array([1,2,3])
# However, this is inconvenient since each element needs to be typed in manually. What if we wanted arrays of different sizes?
#
# Lets start by creating a vector of zeros.
Z = np.zeros(10)
Z
# We can also create a 10 x 10 matrix of all zeros.
Z = np.zeros((10, 10))
Z
# Notice that the function still takes in 1 input, a tuple containing each dimension.
#
# There is an equivalent function that creates an array of all ones.
O = np.ones((10, 10))
O
# What if we wanted random numbers? We could use `np.random.random`.
R = np.random.random((10,10))
R
# One thing that we can quickly see is that all of these values are greater than 0 and less than 1. Whenever we talk about random numbers, you should be interested in the probability distribution that the random numbers came from. This particular random function gives us uniformly distributed numbers between 0 and 1. What if we wanted Gaussian distributed numbers? Numpy has a function for that as well.
# +
# G = np.random.randn((10, 10))  # this will not work, since randn does not take a tuple
G = np.random.randn(10,10)
G
# -
# Numpy arrays also have convenient ways for us to calculate statistics of matrices.
G.mean() # gives us the mean
G.var() # gives us the variance
# ---
# <br></br>
# # Matrix Products
# When you learn about matrix products in linear algebra, you generally learn about matrix multiplication. Matrix multiplication has a special requirement, and that is that the inner dimensions of the matrices you are multiplying must match.
#
# For example say we have matrix `A` that is **(2, 3)** and a matrix `B` that is **(3, 3)**, we can multiply A * B, since the inner dimension is 3, however we cannot multiply B * A, since the inner dimensions are 3 and 2, hence they do not match.
#
# Why do we have this requirement when we multiply matrices? Well lets look at the definition of matrix multiplication:
#
# $$C(i,j) = \sum_{k=1}^KA(i,k)B(k,j)$$
#
# So the (i,j)th entry of $C$ is the sum of the multiplication of all the corresponding elements of the ith row of A and the jth column of B. In other words, C(i,j) is the dot product of the ith row of A and the jth column of B. Because of this, we actually use the `dot` function in numpy! That does what we recognize as matrix multiplication!
#
# A very natural thing to want to do, both in math and in computing, is element by element multiplication!
#
# $$C(i,j) = A(i,j) * B(i,j)$$
#
# For vectors, we already saw that an asterisk `*` operation does this. As you may have guessed, for 2-d arrays, the asterisk also does element wise multiplication. That means that when you use the `*` on multidimensional arrays, both of them have to be the exact same size. This may seem odd, since in other languages, the asterisk does mean real matrix multiplication. So we just need to remember that in numpy, the asterisk `*` does mean element by element multiplication, and the `dot` means matrix multiplication.
#
# Another thing that is odd is that when we are writing down mathematical equations, there isn't even a well defined symbol for element wise multiplication. Sometimes researchers use a circle with a dot inside of it, sometimes they use a circle with an x inside of it. But there does not seem to be a standard way to do that in math.
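# The distinction is easy to see in a small sketch. Note that Python 3.5+ also provides the `@` operator, which is equivalent to `dot` for 2-d arrays:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)      # element wise: [[5, 12], [21, 32]]
print(A.dot(B))   # matrix multiplication: [[19, 22], [43, 50]]
print(A @ B)      # same as A.dot(B)
```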
# ---
# <br></br>
# # More Matrix Operations
#
# The dot product is often referred to as the **inner product**. But we can also look at the **outer product**. An outer product is going to be a **column vector** times a **row vector**. An inner product is going to be a **row vector** times a **column vector**. For more information on this check out my linear algebra walk through in the math appendix.
a = np.array([1,2])
a
b = np.array([3,4])
b
# Lets first look at the dot product:
np.dot(a,b)
# Now the inner product:
np.inner(a,b)
# We see that it is the same as the dot product. Now let's look at the outer product:
np.outer(a,b)
# We can get the same result if we ensure that our `a` and `b` are proper matrices, and then use the dot product. Note, here we see the equivalence to the `inner` and `outer` methods above.
a = np.array([[1,2]])
a
b = np.array([[3,4]])
b
b = b.T
b
np.dot(a,b)
np.dot(b,a)
# # Sum along certain axis
# If we want to sum along all of the rows in a matrix we can use the following:
A = np.array([[1,1,1], [2,2,2], [3,3,3]])
A.sum(axis=1) # sum along each row, axis = 1
A.sum(axis=0) # sum along each column, axis = 0
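# Restated self-contained, the sums above come out as follows — `axis=1` collapses the columns to give row sums, `axis=0` collapses the rows to give column sums, and `sum()` with no axis gives the grand total:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [2, 2, 2],
              [3, 3, 3]])

print(A.sum(axis=1))  # row sums: [3, 6, 9]
print(A.sum(axis=0))  # column sums: [6, 6, 6]
print(A.sum())        # sum of every element: 18
```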
| notebooks/Programming Appendix/Numpy Overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# ### Analysis
# * As expected, the weather becomes significantly warmer as one approaches the equator (0 Deg. Latitude). More interestingly, however, is the fact that the southern hemisphere tends to be warmer this time of year than the northern hemisphere. This may be due to the tilt of the earth, or more obviously - it is currently summer in the southern hemisphere.
# * There is no strong relationship between latitude and cloudiness. However, it is interesting to see that a strong band of cities sits at 0, 80, and 100% cloudiness.
# * There is no strong relationship between latitude and wind speed. However, in northern hemispheres there is a flurry of cities with over 20 mph of wind.
#
# ---
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import json
from pprint import pprint
# Import API key
import api_keys
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
# output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# Replace spaces with %20, otherwise url may be incorrect
city = city.replace(" ", "%20")
# If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# -
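As an aside, the manual `%20` replacement above can be avoided by letting the standard library build the query string. A sketch (the parameter names here mirror the API call used below):

```python
from urllib.parse import urlencode, quote

params = {"units": "Imperial", "q": "san francisco"}
# quote_via=quote encodes spaces as %20 (the default quote_plus would use '+')
query = urlencode(params, quote_via=quote)
print(query)  # units=Imperial&q=san%20francisco
```

Note that `requests.get(url, params=params)` can also take the dict directly and handle the URL encoding itself.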
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
# +
#Open API key
api_key = api_keys.api_key
# URL for Weather Map API call
url = 'http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID=' + api_key
# +
# Create empty lists to append API data to
city_name = []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
# Start the call counter
record = 1
# Log file print statement
print(f"Beginning Data Retrieval")
print(f"-------------------------------")
#Loop through the cities in the city list
for city in cities:
# Try statement to append calls where value is found
# Not all calls will be successful as not all cities generated will have records
try:
response = requests.get(f"{url}&q={city}").json()
city_name.append(response["name"])
cloudiness.append(response["clouds"]["all"])
country.append(response["sys"]["country"])
date.append(response["dt"])
humidity.append(response["main"]["humidity"])
max_temp.append(response["main"]["temp_max"])
lat.append(response["coord"]["lat"])
lng.append(response["coord"]["lon"])
wind_speed.append(response["wind"]["speed"])
city_record = response["name"]
print(f"Processing Record {record} | {city_record}")
print(f"{url}&q={city}")
# Increase counter by one
record += 1
    # Wait a second inside the loop so we do not exceed the API rate limit
time.sleep(1.01)
# If no record found "skip" to next call
except:
print("City not found. Skipping...")
continue
# -
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
# Create dictionary to store lists
weather_dict = {
"City": city_name,
"Country": country,
"Date": date,
"Humidity": humidity,
"Cloudiness": cloudiness,
"Wind Speed": wind_speed,
"Max Temp": max_temp,
"Latitude": lat,
"Longitude": lng
}
# Create dataframe
weather_data = pd.DataFrame(weather_dict)
weather_data.count()
# Export dataframe to CSV
weather_data.to_csv('Output_CSV/weather_data.csv')
# ### Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# Show dataframe
weather_data.head()
# #### Latitude vs. Temperature Plot
# +
# Build scatterplot
plt.scatter(weather_data["Latitude"], weather_data["Max Temp"], marker="o", s=30, edgecolor="black", alpha=0.75)
# Set title, labels, grid
plt.title("City Latitude vs. Max Temperature (01/27/19)")
plt.xlabel("Latitude")
plt.ylabel("Max. Temperature (F)")
plt.grid(True)
# Export plot
plt.savefig("Output_Plots/MaxTemp_vs_Latitude.png")
# Show plot
plt.show()
# -
# #### Latitude vs. Humidity Plot
# +
# Build scatterplot
plt.scatter(weather_data["Latitude"], weather_data["Humidity"], marker="o", s=30, edgecolor="black", alpha=0.75)
# Set title, labels, grid
plt.title("City Latitude vs. Humidity (01/27/19)")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.grid(True)
# Export plot
plt.savefig("Output_Plots/Humidity_vs_Latitude.png")
# Show plot
plt.show()
# -
# #### Latitude vs. Cloudiness Plot
# +
# Build scatterplot
plt.scatter(weather_data["Latitude"], weather_data["Cloudiness"], marker="o", s=30, edgecolor="black", alpha=0.75)
#Set title, labels, grid
plt.title("City Latitude vs. Cloudiness (01/27/19)")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.grid(True)
# Export plot
plt.savefig("Output_Plots/Cloudiness_vs_Latitude.png")
# Show plot
plt.show()
# -
# #### Latitude vs. Wind Speed Plot
# +
# Build scatterplot
plt.scatter(weather_data["Latitude"], weather_data["Wind Speed"], marker="o", s=30, edgecolor="black", alpha=0.75)
# Set title, labels, grid
plt.title("City Latitude vs. Wind Speed (01/27/19)")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.grid(True)
# Export
plt.savefig("Output_Plots/WindSpeed_vs_Latitude.png")
# Show plot
plt.show()
# -
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Lab 3
# > **Draft Report**
#
# > ***W 203 Section 4***
#
# > *<NAME>*
#
# > *<NAME>*
#
# > *<NAME>*
# # Research Question
# ***What are the determinants of crime in the state of North Carolina?***
# # Introduction
# The purpose of this report is to explore the causes of crime in several counties in North Carolina.
#
# The main dependent variable explored throughout this report is **crime rate**. Every other variable can be considered an independent variable and will be considered as to whether it is a factor in any increase or decrease in crime rate.
#
# We explore these variables with linear regression, specifically using the ordinary least squares method. We use hypothesis testing techniques to understand if any of our findings have statistical significance. We then look at whether there is any practical significance in our results.
#
# Our goal is to use the findings in this report to inform policy recommendations for this political campaign.
# # Exploratory Data Analysis
# We first perform our exploratory data analysis to analyze the dataset of crime statistics for a selection of counties in North Carolina.
#
# We then get the summary of our independent variables using R functions accompanied with graphs for seeing what the data can tell us before we perform our hypothesis testing.
# ### Data cleaning
# We first import our dataset and other helpful R libraries for looking at the statistics of both dependent and independent variables.
# +
######### Supress warnings #########
options(warn=-1)
######### Import Data (dataset of crime statistics for a selection of counties in North Carolina) #########
data_set = read.csv("crime_v2.csv")
######### Import Libraries #########
# Data Visualisations
# https://cran.r-project.org/web/packages/ggplot2/index.html
library(ggplot2)
# "Grid" Graphics
# https://cran.r-project.org/web/packages/gridExtra/index.html
library(gridExtra)
# Tidy Messy Data
# https://cran.r-project.org/web/packages/tidyr/index.html
library(tidyr)
# Reshape Data
# https://cran.r-project.org/web/packages/reshape2/index.html
library(reshape2)
# Regression and Summary Statistics Tables
# https://CRAN.R-project.org/package=stargazer
library(stargazer)
# Applied Regression
# https://cran.r-project.org/web/packages/car/index.html
library(car)
# -
# First we will explore the structures of all of the variables in this study.
cat("\nCompact display of the structure of our Data Set\n\n")
str(data_set)
# | Variable | Representation | Type | Analysis |
# |------------|----------------------------------|------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
# | $county$ | county identifier | Categorical | We cannot do any data analysis on this variable. But will use this variable in identifying results if applicable. |
# | $year$ | 1987 | Categorical | Our data set is a cross-sectional set that has only data for the year 1987. So we could ignore this variable for our analysis, as it does not give any meaningful relationship to other variables in this context. |
# | $crmrte$ | crimes committed per person | Dependent Variable Number [0-1] | This is our dependent Variable. We will compare and measure the relationship between this variable with other independent variables |
# | $prbarr$ | ‘probability’ of arrest | Covariate Number [0-1] | Ratio: Arrests/Offenses Used to determine the "certainty of punishment" (Do criminals expect to get caught and face punishment) along with $prbconv$ |
# | $prbconv$ | ‘probability’ of conviction | Covariate Factor variable Number [0-1] | Ratio: Convictions/Arrests Used to determine the "certainty of punishment" (Do criminals expect to get caught and face punishment) along with $prbarr$ Factor variable - We believe that this is an error, as probabilities are continuous and uncountably infinite and "Factor" variables are intended to represent a finite set of data. We elect to convert prbconv from a Factor to a numeric data type |
# | $prbpris$ | ‘probability’ of prison sentence | Covariate Number [0-1] | Ratio: Convictions resulting in Sentence/Total Convictions Used to determine the "severity of punishment" (how long prison sentences) |
# | $avgsen$ | avg. sentence, days | Number | |
# | $polpc$ | police per capita | Number | |
# | $density$ | people per sq. mile | Number | |
# | $taxpc$ | tax revenue per capita | Number | |
# | $west$ | =1 if in western N.C. | Categorical | We cannot do any data analysis on this variable. But will use this variable in identifying results if applicable. |
# | $central$ | =1 if in central N.C. | Categorical | We cannot do any data analysis on this variable. But will use this variable in identifying results if applicable. |
# | $urban$ | =1 if in SMSA | Categorical | We cannot do any data analysis on this variable. But will use this variable in identifying results if applicable. |
# | $pctmin80$ | perc. minority, 1980 | Number [0-100] | |
# | $wcon$ | weekly wage, construction | Number | |
# | $wtuc$ | wkly wge, trns, util, commun | Number | |
# | $wtrd$ | wkly wge, whlesle, retail trade | Number | |
# | $wfir$ | wkly wge, fin, ins, real est | Number | |
# | $wser$ | wkly wge, service industry | Number | |
# | $wmfg$ | wkly wge, manufacturing | Number | |
# | $wfed$ | wkly wge, fed employees | Number | |
# | $wsta$ | wkly wge, state employees | Number | |
# | $wloc$ | wkly wge, local gov emps | Number | |
# | $mix$ | offense mix: face-to-face/other | Number [0-1] | |
# | $pctymle$ | percent young male | Number [0-1] | Proportion of the population that is male and between the ages of 15 and 24 |
data_set$prbconv <- as.numeric(levels(data_set$prbconv))[data_set$prbconv]
# We now review the variables with the new prbconv.
str(data_set)
# We next look at the interesting facets of each variable (such as the mean and the median).
summary(data_set)
# +
#Remove rows with NA
#Method 1
data<-na.omit(data_set)
#method 2
data2<-data_set[!is.na(data_set),]
#Year is also not a useful data column as this dataset comes entirely from 1987 (Is there a need to remove or just leave be?)
summary(data)
# +
#Histogram of each of the variables, There is probably a better way to do this :)
#I know there is a way to use tidyr to make it use less lines (Next cell), but it seems to use the same Y axis scale
options(repr.plot.width=20, repr.plot.height=10)
p1<-ggplot(data=data, aes(data$county)) + geom_histogram(binwidth=0.5) + ggtitle("Plot of County ID") +
xlab("ID") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p2<-ggplot(data=data, aes(data$year)) + geom_histogram(binwidth=1) + ggtitle("Plot of Survey Year") +
xlab("Year") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p3<-ggplot(data=data, aes(data$crmrte)) + geom_histogram(binwidth=0.005) + ggtitle("Crimes commited per person") +
xlab("Crime Committed") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p4<-ggplot(data=data, aes(data$prbarr)) + geom_histogram(binwidth=0.05) + ggtitle("Probability of Arrest") +
xlab("Probability") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p5<-ggplot(data=data, aes(data$prbconv)) + geom_histogram(binwidth=0.05) + ggtitle("Probability of Conviction") +
xlab("Probability") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p6<-ggplot(data=data, aes(data$prbpris)) + geom_histogram(binwidth=0.05) + ggtitle("Probability of Prison") +
xlab("Probability") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p7<-ggplot(data=data, aes(data$avgsen)) + geom_histogram(binwidth=0.5) + ggtitle("Average Sentence") +
xlab("Sentence (Days?)") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p8<-ggplot(data=data, aes(data$polpc)) + geom_histogram(binwidth=0.0005) + ggtitle("Police per capita") +
xlab("Police") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p9<-ggplot(data=data, aes(data$density)) + geom_histogram(binwidth=0.5) + ggtitle("People per square mile") +
xlab("People") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p10<-ggplot(data=data, aes(data$taxpc)) + geom_histogram(binwidth=5) + ggtitle("Tax revenue per capita") +
xlab("Revenue") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p11<-ggplot(data=data, aes(data$west)) + geom_histogram(binwidth=0.5) + ggtitle("Is Western N.C") +
xlab("Yes/No") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p12<-ggplot(data=data, aes(data$central)) + geom_histogram(binwidth=0.5) + ggtitle("Is Central N.C") +
xlab("Yes/No") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p13<-ggplot(data=data, aes(data$urban)) + geom_histogram(binwidth=0.5) + ggtitle("SMSA") +
xlab("Yes/No") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p14<-ggplot(data=data, aes(data$pctmin80)) + geom_histogram(binwidth=5) + ggtitle("Percent minority (1980)") +
xlab("Percent") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p15<-ggplot(data=data, aes(data$wcon)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Construction)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p16<-ggplot(data=data, aes(data$wtuc)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Trns/Util/Commun)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p17<-ggplot(data=data, aes(data$wtrd)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Wlesle/trade)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p18<-ggplot(data=data, aes(data$wfir)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Fin/Ins/RealEst)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p19<-ggplot(data=data, aes(data$wser)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Service)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p20<-ggplot(data=data, aes(data$wmfg)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Manufacturing)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p21<-ggplot(data=data, aes(data$wfed)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Federal Employees)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12), axis.title=element_text(size=14,face="bold"))
p22<-ggplot(data=data, aes(data$wsta)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (State Employees)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p23<-ggplot(data=data, aes(data$wloc)) + geom_histogram(binwidth=50) + ggtitle("Weekly Wage (Local Gov. Employ)") +
xlab("Wage") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p24<-ggplot(data=data, aes(data$mix)) + geom_histogram(binwidth=0.005) + ggtitle("Offense Mix: f2f/other") +
xlab("??") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
p25<-ggplot(data=data, aes(data$pctymle)) + geom_histogram(binwidth=0.005) + ggtitle("Percent young male") +
xlab("Percent") + ylab("Count") +
theme(plot.title = element_text(hjust = 0.5,size=14,face="bold"),
axis.text=element_text(size=12),
axis.title=element_text(size=14,face="bold"))
grid.arrange(p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,
ncol = 5)
# +
ggplot(gather(data), aes(value)) +
geom_histogram(bins = 10) +
facet_wrap(~key, scales = 'free_x')
# +
res<-cor(data)
res<-round(res,2)
melted_res <- melt(res)
head(res)
# -
options(repr.plot.width=20, repr.plot.height=20)
ggplot(data = melted_res, aes(x=Var1, y=Var2, fill=value)) +
geom_tile()+scale_fill_gradient2(low="navy", mid="white", high="red",
midpoint=0, limits=range(melted_res$value))
# It should be noted that there are 100 counties in North Carolina; therefore this dataset contains data for 91% of them. It is not possible to
# tell whether the excluded counties are randomly excluded or share specific features that may bias this data set.
# TODO: Eliminate all missing data based on county
#Eliminate data by reassigning index to county.
#I am assuming you mean replace rowname with appropriate county number.
#193 is a duplicated data row
data$county[duplicated(data$county)]
data[data$county==193,]
#We see that all the data is the same
all(data[88,]==data[89,])
#Therefore we remove this duplicated entry
data<-data[-c(89),]
#Remove column "County" and apply it to row.names instead
data.indexed <- data.frame(data[,-1], row.names=data[,1])
head(data.indexed)
# MAYBE DELETE?
#
# We note that two of the variables that are intended to represent probabilities (prbarr and prbconv) have maximum values greater than 1. We feel comfortable assuming that the entries greater than 1 must be in error, as probabilities must lie between 0 and 1.
#
# To be safe, we remove all rows where the value of any of our probability variables (prbpris, prbarr, and prbconv) is greater than 1.
# +
g1 <- ggplot(data = data, mapping = aes(x = county, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g2 <- ggplot(data = data, mapping = aes(x = density, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g3 <- ggplot(data = data, mapping = aes(x = prbarr, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g4 <- ggplot(data = data, mapping = aes(x = prbconv, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g5 <- ggplot(data = data, mapping = aes(x = prbpris, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g6 <- ggplot(data = data, mapping = aes(x = avgsen, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g7 <- ggplot(data = data, mapping = aes(x = polpc, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g8 <- ggplot(data = data, mapping = aes(x = taxpc, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g9 <- ggplot(data = data, mapping = aes(x = west, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g10 <- ggplot(data = data, mapping = aes(x = central, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g11 <- ggplot(data = data, mapping = aes(x = urban, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g12 <- ggplot(data = data, mapping = aes(x = pctmin80, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g13 <- ggplot(data = data, mapping = aes(x = wcon, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g14 <- ggplot(data = data, mapping = aes(x = wtuc, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g15 <- ggplot(data = data, mapping = aes(x = wtrd, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g16 <- ggplot(data = data, mapping = aes(x = wfir, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g17 <- ggplot(data = data, mapping = aes(x = wser, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g18 <- ggplot(data = data, mapping = aes(x = wmfg, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g19 <- ggplot(data = data, mapping = aes(x = wfed, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g20 <- ggplot(data = data, mapping = aes(x = wsta, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g21 <- ggplot(data = data, mapping = aes(x = wloc, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g22 <- ggplot(data = data, mapping = aes(x = mix, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
g23 <- ggplot(data = data, mapping = aes(x = pctymle, y = crmrte)) +
geom_jitter(alpha = 0.5, color = "blue")
grid.arrange(g1, g2, g3, g4, g5, g6, g7, g8, g9, g10, g11, g12, g13, g14, g15, g16, g17, g18, g19, g20, g21, g22, g23, nrow = 7)
# +
par(mfrow=c(2,2))
# create the scatterplot
plot(jitter(data$crmrte),
jitter(data$county),
xlab = "crmrte",
ylab = "county",
main = "Plot of Crime Rate-County ID")
# fit the linear model
(model1 = lm(crmrte ~ county, data = data))
# Add regression line to scatterplot
abline(model1)
# plot model 1 - linear model
plot(model1,
which = 5,
main = "Linear Model crmrte ~ county")
county_with_error = data$county
county_with_error[5] = 80
model1_with_error = lm(data$crmrte ~ county_with_error)
# visualize the data with the error and the new ols line
plot(jitter(county_with_error),
jitter(data$crmrte),
xlab = "County",
ylab = "Crime Rate",
main = "Crime Rate versus County including Error")
# Add regression line to scatterplot
abline(model1_with_error)
plot(model1_with_error,
which=5,
main = "Crime Rate Data with Error Introduced")
# We first examine the year variable.
cat("Summary of Year")
summary(data$year)
hist(data$year, breaks = 20, main = "Year", xlab = NULL)
#scatterplotMatrix(data[,c("crmrte", "county", "year")], diagonal = "histogram")
# fit the linear model
(model2 = lm(crmrte ~ county + year, data = data))
model2$coefficients
# compare the R-squares for our two models
summary(model1)$r.square
summary(model2)$r.square
## Presenting Regression Output
stargazer(model1, model2, type = "latex",
report = "vc", # Don't report errors, since we haven't covered them
title = "Linear Models Predicting Crime Rate",
keep.stat = c("rsq", "n"),
omit.table.layout = "n") # Omit more output related to errors
# For an assessment of model fit that penalizes extra variables,
# we can use the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC)
AIC(model1)
AIC(model2)
# -
# ### From Lab3 suggestions:
# Here are some things to keep in mind during your model building process:
# 1. What do you want to measure? Make sure you identify variables that will be relevant to the concerns of the political campaign.
# It should be something that can be addressed in the political campaign, something that can feasibly be changed
#
# SMSA = Standard Metropolitan Statistical Area?
#
# variable | label | Changeable?
# ----------|-----------------------------------|--------------------------
# 1 county| county identifier| No
# 2 year| 1987| No
# 3 crmrte| crimes committed per person| (Target?)
# 4 prbarr| 'probability' of arrest| (Target?)
# 5 prbconv| 'probability' of conviction| (Target?)
# 6 prbpris| 'probability' of prison sentence| (Target?)
# 7 avgsen| avg. sentence, days| Yes
# 8 polpc| police per capita| Yes
# 9 density| people per sq. mile| No
# 10 taxpc| tax revenue per capita| Yes
# 11 west| =1 if in western N.C.| No
# 12 central| =1 if in central N.C.| No
# 13 urban| =1 if in SMSA| Possibly yes?
# 14 pctmin80| perc. minority, 1980| No
# 15 wcon| weekly wage, construction| Yes
# 16 wtuc| wkly wge, trns, util, commun| Yes
# 17 wtrd| wkly wge, whlesle, retail trade| Yes
# 18 wfir| wkly wge, fin, ins, real est| Yes
# 19 wser| wkly wge, service industry| Yes
# 20 wmfg| wkly wge, manufacturing| Yes
# 21 wfed| wkly wge, fed employees| Yes
# 22 wsta| wkly wge, state employees| Yes
# 23 wloc| wkly wge, local gov emps| Yes
# 24 mix| offense mix: face-to-face/other| (Not entirely sure what this is)
# 25 pctymle| percent young male| No
#
# I am assuming that we cannot simply change the population of people living in each county, so I have marked `density`|`pctymle`|`pctmin80` as not really changeable
#
# Similarly, physical location isn't really something we can change, so `west`|`central`| are marked No.
#
# I marked `urban` as possibly yes because SMSA classification might be changeable based on "high degree of social and economic integration with the core as measured by commuting ties" ([Wikipedia](https://en.wikipedia.org/wiki/List_of_metropolitan_statistical_areas)). I think that this would make for a good model.
# 2. What covariates help you identify a causal effect? What covariates are problematic, either due to multicollinearity, or because they will absorb some of a causal effect you want to measure?
# * People per square mile (density) and police per capita (polpc) will influence the number of arrests made in each county. Depending on how probability of arrest is determined, this may affect that value.
# *
# 3. What transformations should you apply to each variable? This is very important because transformations can reveal linearities in the data, make our results relevant, or help us meet model assumptions.
#
# 4. Are your choices supported by EDA? You will likely start with some general EDA to detect anomalies (missing values, top-coded variables, etc.). From then on, your EDA should be interspersed with your model building. Use visual tools to guide your decisions.
#
# # Model Specification #1
# DELETE THIS CELL LATER
#
# This model will be this one:
# - One model with only the explanatory variables of key interest (possibly transformed, as
# determined by your EDA), and no other covariates.
#
# Here are some things to keep in mind during your model building process:
# 1. What do you want to measure? Make sure you identify variables that will be relevant to the
# concerns of the political campaign.
# 2. What covariates help you identify a causal effect? What covariates are problematic, either
# due to multicollinearity, or because they will absorb some of a causal effect you want to
# measure?
# 3. What transformations should you apply to each variable? This is very important because
# transformations can reveal linearities in the data, make our results relevant, or help us meet
# model assumptions.
# 4. Are your choices supported by EDA? You will likely start with some general EDA to detect
# anomalies (missing values, top-coded variables, etc.). From then on, your EDA should be
# interspersed with your model building. Use visual tools to guide your decisions.
#
# $Y = aX_o +b$
#
# Crimes committed per person = a(urban)+b?
# + active=""
# # Model Specification #2
# -
# DELETE THIS CELL LATER
#
# This model will be this one:
# • One model that includes key explanatory variables and only covariates that you believe
# increase the accuracy of your results without introducing substantial bias (for example, you
# should not include outcome variables that will absorb some of the causal effect you are
# interested in). This model should strike a balance between accuracy and parsimony and
# reflect your best understanding of the determinants of crime.
#
# Here are some things to keep in mind during your model building process:
# 1. What do you want to measure? Make sure you identify variables that will be relevant to the
# concerns of the political campaign.
# 2. What covariates help you identify a causal effect? What covariates are problematic, either
# due to multicollinearity, or because they will absorb some of a causal effect you want to
# measure?
# 3. What transformations should you apply to each variable? This is very important because
# transformations can reveal linearities in the data, make our results relevant, or help us meet
# model assumptions.
# 4. Are your choices supported by EDA? You will likely start with some general EDA to detect
# anomalies (missing values, top-coded variables, etc.). From then on, your EDA should be
# interspersed with your model building. Use visual tools to guide your decisions.
#
# $Y = aX_o +b+Z_0+...+Z_n$
# # Model Specification #3
# DELETE THIS CELL LATER
#
# This model will be this one:
# • One model that includes the previous covariates, and most, if not all, other covariates. A key
# purpose of this model is to demonstrate the robustness of your results to model specification.
#
#
# # Regression Table
# DELETE THIS CELL LATER
#
# Your report will include a model building process, culminating in a well-formatted regression table that displays a minimum of three model specifications.
#
# You should display all of your model specifications in a regression table, using a package like stargazer to format your output. It should be easy for the reader to find the coefficients that represent key effects near the top of the regression table, and scan horizontally to see how they change from specification to specification. Since we won’t cover inference for linear regression until unit 12, you should not display any standard errors at this point. You should also avoid conducting statistical tests for now (but please do point out what tests you think would be valuable).
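# A minimal sketch of such a table, using `statsmodels`' `summary_col` in place of stargazer and synthetic placeholder data (not the crime dataset). Note that `summary_col` prints standard errors in parentheses by default, which would need to be trimmed to satisfy the no-standard-errors instruction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.iolib.summary2 import summary_col

rng = np.random.default_rng(42)
df = pd.DataFrame({"police": rng.normal(size=200), "density": rng.normal(size=200)})
df["crime"] = 0.5 * df["police"] - 0.3 * df["density"] + rng.normal(size=200)

# Three nested specifications; the key coefficient (police) appears in every model
m1 = smf.ols("crime ~ police", data=df).fit()
m2 = smf.ols("crime ~ police + density", data=df).fit()
m3 = smf.ols("crime ~ police + density + I(density**2)", data=df).fit()

# regressor_order pins the key effect to the top row for horizontal scanning
table = summary_col([m1, m2, m3], model_names=["(1)", "(2)", "(3)"],
                    regressor_order=["police"])
print(table)
```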
# # Residual Analysis
# # Omitted variables discussion
# DELETE THIS CELL LATER
#
# The data is provided in a file, crime_v2.csv. While we are only providing you with a single cross-section of
# data, the original study was based on a multi-year panel. The authors used panel data methods
# and instrumental variables to control for some types of omitted variables.
#
# Since you are restricted to ordinary least squares regression, omitted variables will be a major obstacle to your estimates.
#
# You should aim for causal estimates, while clearly explaining how you think omitted variables
# may affect your conclusions.
#
# After your model building process, you should include a substantial discussion of omitted
# variables. Identify what you think are the 5-10 most important omitted variables that bias results
# you care about. For each variable, you should estimate what direction the bias is in. If you can
# argue whether the bias is large or small, that is even better. State whether you have any variables
# available that may proxy (even imperfectly) for the omitted variable. Pay particular attention to
# whether each omitted variable bias is towards zero or away from zero. You will use this information to judge whether the effects you find are likely to be real, or whether they might be entirely
# an artifact of omitted variable bias.
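# The direction-of-bias reasoning can be made concrete with a small simulation (purely illustrative, not drawn from the crime data): omitting a confounder that is positively correlated with both the regressor and the outcome biases the OLS coefficient away from zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + rng.normal(size=n)             # regressor correlated with z
y = 1.0 * x + 1.0 * z + rng.normal(size=n)   # true effect of x is 1.0

# OLS with and without the confounder (intercept included)
X_full = np.column_stack([np.ones(n), x, z])
X_short = np.column_stack([np.ones(n), x])
b_full = np.linalg.lstsq(X_full, y, rcond=None)[0]
b_short = np.linalg.lstsq(X_short, y, rcond=None)[0]
print(f"with z: {b_full[1]:.2f}, omitting z: {b_short[1]:.2f}")
```

# Here the short regression overstates the effect (roughly 1 + 0.8/1.64 ≈ 1.49 in expectation), so the bias is away from zero; flipping the sign of either correlation flips the bias direction.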
# # Conclusion
# ### Ordinary least squares regression
#
#
#
# #### Omitted variables
#
# ---- what are the omitted variables
#
#
# #### Causal estimates
# ------------------ talk about how omitted variables will be a major obstacle to our estimates
#
| Wall_Seenivasagam_Hee_lab3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Part 5 - Generating Reports
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Generating-Reports" data-toc-modified-id="Generating-Reports-1"><span class="toc-item-num">1 </span>Generating Reports</a></span><ul class="toc-item"><li><span><a href="#Available-Reports" data-toc-modified-id="Available-Reports-1.1"><span class="toc-item-num">1.1 </span>Available Reports</a></span><ul class="toc-item"><li><span><a href="#Reports-Available-for-U.S." data-toc-modified-id="Reports-Available-for-U.S.-1.1.1"><span class="toc-item-num">1.1.1 </span>Reports Available for U.S.</a></span></li><li><span><a href="#Reports-Available-for-Another-Country" data-toc-modified-id="Reports-Available-for-Another-Country-1.1.2"><span class="toc-item-num">1.1.2 </span>Reports Available for Another Country</a></span></li></ul></li><li><span><a href="#Creating-Reports" data-toc-modified-id="Creating-Reports-1.2"><span class="toc-item-num">1.2 </span>Creating Reports</a></span><ul class="toc-item"><li><span><a href="#Report-for-Single-Line-Address" data-toc-modified-id="Report-for-Single-Line-Address-1.2.1"><span class="toc-item-num">1.2.1 </span>Report for Single Line Address</a></span></li><li><span><a href="#Report-for-Buffered-Locations" data-toc-modified-id="Report-for-Buffered-Locations-1.2.2"><span class="toc-item-num">1.2.2 </span>Report for Buffered Locations</a></span><ul class="toc-item"><li><span><a href="#Using-options" data-toc-modified-id="Using-options-1.2.2.1"><span class="toc-item-num">1.2.2.1 </span>Using <code>options</code></a></span></li></ul></li><li><span><a href="#Report-for-a-Point-Feature" data-toc-modified-id="Report-for-a-Point-Feature-1.2.3"><span class="toc-item-num">1.2.3 </span>Report for a Point Feature</a></span></li><li><span><a href="#Report-for-a-Polygon-Study-Area" data-toc-modified-id="Report-for-a-Polygon-Study-Area-1.2.4"><span class="toc-item-num">1.2.4 </span>Report for a Polygon Study Area</a></span></li></ul></li><li><span><a href="#Customizing-Reports" 
data-toc-modified-id="Customizing-Reports-1.3"><span class="toc-item-num">1.3 </span>Customizing Reports</a></span><ul class="toc-item"><li><span><a href="#Using-report_fields" data-toc-modified-id="Using-report_fields-1.3.1"><span class="toc-item-num">1.3.1 </span>Using <code>report_fields</code></a></span></li><li><span><a href="#Using-use_data" data-toc-modified-id="Using-use_data-1.3.2"><span class="toc-item-num">1.3.2 </span>Using <code>use_data</code></a></span></li></ul></li></ul></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-2"><span class="toc-item-num">2 </span>Conclusion</a></span></li></ul></div>
# -
# Import Libraries
from arcgis.gis import GIS
from arcgis.geoenrichment import Country, create_report, BufferStudyArea
# Create a GIS Connection
gis = GIS(profile='your_online_profile')
# ## Generating Reports
# + [markdown] slideshow={"slide_type": "subslide"}
# GeoEnrichment also enables you to create many types of high quality reports for a variety of use cases describing the input area. These reports can be generated in PDF or Excel formats containing relevant information on demographics, consumer spending, tapestry market, etc. for the area. You can find a sample PDF report [here](http://help.arcgis.com/en/geoenrichment/rest-report-samples/ex2.pdf).
#
# Find details about a wide variety of existing reports [here](https://doc.arcgis.com/en/esri-demographics/reference/sample-reports.htm).
# -
# ### Available Reports
# + [markdown] slideshow={"slide_type": "skip"}
# The `reports` property of a `Country` object lists its available reports as a Pandas DataFrame. Let's look at the available reports for some countries. The report `id` you see below is used as an input in the `create_report()` method to create reports.
# -
# #### Reports Available for U.S.
# Get Country
usa = Country.get('USA')
# + slideshow={"slide_type": "fragment"}
# print a sample of the reports available for USA
usa.reports.head(10)
# -
# total number of reports available
usa.reports.shape
# We can see that there are 53 reports available for the United States, along with their titles, categories, and available formats.
# #### Reports Available for Another Country
#
# Let's look at the reports available for Australia.
# Get Country
aus = Country.get('Australia')
# + slideshow={"slide_type": "fragment"}
# print a sample of the reports available for Australia
aus.reports.head(20)
# -
# total number of reports available
aus.reports.shape
# Here we see that Australia has 15 reports available, along with their titles, categories, and available formats. Details about different categories of reports can be found [here](https://doc.arcgis.com/en/esri-demographics/reference/sample-reports.htm).
# ### Creating Reports
# + [markdown] slideshow={"slide_type": "subslide"}
# The `create_report` method allows you to create many types of high quality reports for a variety of use cases describing the input area. Let's look at some examples of creating reports for some study areas. To learn more about Study Areas, refer to [Enriching Study Areas](part2_enrich_study_areas.ipynb) guide.
#
# __Note:__ The report `id` must be used as an input in the `create_report()` method to create reports.
# -
# #### Report for Single Line Address
# Let's create a report for an address using the Business Summary `business_summary` report id.
# + slideshow={"slide_type": "-"}
address_report = create_report(study_areas=["380 New York Street, Redlands, CA"],
report="business_summary",
export_format="PDF",
out_folder=r"reports",
out_name="esri_add_business_summary.pdf")
address_report
# -
# You can view the PDF report in the `reports` folder on your disk. Here is a snapshot of part of the report in PDF.
#
# <img src="../../static/img/geoenrich_pdf_sample.jpg">
# #### Report for Buffered Locations
#
# Reports can also be created for one or more buffered rings, or drive time service areas around the points of interest. Let's create a report for a street address with non-overlapping disks of radii 1, 3 and 5 Miles respectively using the Business Summary report template `report="business_summary"`.
#
# We will export this report as a Microsoft Excel file using the `XLSX` export format.
# + slideshow={"slide_type": "-"}
buffered = BufferStudyArea(area='380 New York St Redlands CA 92373',
radii=[1,3,5], units='Miles', overlap=False)
# + slideshow={"slide_type": "-"}
buffered_report = create_report(study_areas=[buffered],
report="business_summary",
export_format="XLSX",
out_folder=r"reports",
out_name="esri_buffered_business_summary.xlsx")
buffered_report
# -
# Here is a snapshot of part of the report in Excel.
#
# <img src="../../static/img/geoenrich_xls_sample.jpg">
# ##### Using `options`
#
# The `options` parameter can be specified to configure the study area buffer. Let's look at an example.
#
# __Note:__ By default, a 1-mile-radius buffer is applied to points and address locations to define a study area.
# Define options
area_options={"areaType":"RingBuffer","bufferUnits":"esriMiles","bufferRadii":[1,3,5]}
# Create report
areaOptions_report = create_report(study_areas=["380 New York Street, Redlands, CA"],
report="business_summary",
export_format="XLSX",
options= area_options,
out_folder=r"reports",
out_name="studyAreaOptions_business_summary.xlsx")
areaOptions_report
# #### Report for a Point Feature
# When a point is used as a study area, the service will automatically create a `1 mile` ring buffer around the point to collect and append enrichment data. We will use the Business Location `business_loc` report id to create a report for our point feature.
# Create point geometry
from arcgis.geometry import Point
pt = Point({"x" : -118.15, "y" : 33.80, "spatialReference" : {"wkid" : 4326}})
# Create report
pt_report = create_report(study_areas=[pt],
report="business_loc",
export_format="PDF",
out_folder=r"reports",
out_name="esri_pt_business_profile.pdf")
pt_report
# #### Report for a Polygon Study Area
# Let's create a polygon and use the Business Location `business_loc` report id to create a report for our polygon study area.
# Create polygon geometry
from arcgis.geometry import Polygon
poly = Polygon({"rings":[[[-117.26,32.81],[-117.40,32.92],[-117.12,32.80],[-117.26,32.81]]],
"spatialReference":{"wkid":4326}})
# Create report
poly_report = create_report(study_areas=[poly],
report="business_loc",
export_format="PDF",
out_folder=r"reports",
out_name="esri_poly_business_profile.pdf")
poly_report
# ### Customizing Reports
# Reports can be customized by specifying optional parameters. The report header can be customized to include a title, subtitle, logo, etc. Parameters can also be specified to explicitly call the country or dataset to query. Let's look at some examples of customizing reports.
# #### Using `report_fields`
#
# Optional parameter `report_fields` specifies additional choices to customize reports. For example, the title, subtitle, logo, etc., which appear in the header of a report can be customized with this parameter. Let's look at an example by customizing report title, subtitle and logo.
# Define report fields
custom_fields={"title": "My Report",
"subtitle": "Produced by My Company"}
# Create report
customFields_report = create_report(study_areas=["380 New York Street, Redlands, CA"],
report="business_summary",
export_format="PDF",
report_fields= custom_fields,
out_folder=r"reports",
out_name="customFields_business_summary.pdf")
customFields_report
# Here is a snapshot of how custom fields (highlighted in red boxes) look in a report.
# <img src="../../static/img/geoenrich_custom_sample.jpg">
# #### Using `use_data`
#
# In order to explicitly call the country or dataset to query, the `use_data` parameter can be specified. This parameter can be specified to provide an additional "performance hint" to the service.
#
# By default, the service will automatically determine the country or dataset that is associated with each location or area submitted in the `study_areas` parameter. Narrowing down the query to a specific dataset or country through this parameter will potentially improve response time.
#
# Let's look at an example of how `use_data` is used to indicate to the service that all input features in the `study_areas` parameter describe locations or areas only discoverable in the U.S. `USA_ESRI_2019` dataset.
# Define data
useData={"sourceCountry":"US","dataset":"USA_ESRI_2019"}
# Create report
useData_report = create_report(study_areas=["380 New York Street, Redlands, CA"],
report="business_summary",
export_format="XLSX",
use_data= useData,
out_folder=r"reports",
out_name="useData_business_summary.xlsx")
useData_report
# ## Conclusion
# In this part of the `arcgis.geoenrichment` module guide series, you saw how GeoEnrichment enables you to create different types of high-quality reports. You explored the `reports` property of a `Country` object to list available reports. You saw, in detail, how the `create_report` method allows you to create rich reports for various study areas. Finally, you saw how these reports can be customized.
#
# In the next and final page, you will learn about Standard Geography Queries.
| guide/12-enrich-data-with-thematic-information/part5_generate_reports.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Convolutional Neural Networks
# #### Cats and Dogs
import tensorflow as tf
import numpy as np
import pandas as pd
# +
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
# -
# ## Part 1 - Data Preprocessing
# ### Image Data Generator Class
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# ### Training Set
# +
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
training_set = train_datagen.flow_from_directory(
'../../../Deep_Learning/CNN/dataset/training_set',
target_size=(128, 128),
batch_size=32,
class_mode='binary')
# -
# ### Test Set
# +
test_datagen = ImageDataGenerator(rescale=1./255)
test_set = test_datagen.flow_from_directory(
'../../../Deep_Learning/CNN/dataset/test_set',
target_size=(128, 128),
batch_size=32,
class_mode='binary')
# -
# ## Part 2 - Model Building
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Conv2D, MaxPool2D, Flatten
# +
tf.random.set_seed(42)
cnn = Sequential()
cnn.add(Conv2D(32, kernel_size=(3, 3), input_shape=(128, 128, 3), activation='relu'))
cnn.add(MaxPool2D(pool_size=(2, 2)))
cnn.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
cnn.add(MaxPool2D(pool_size=(2, 2)))
cnn.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
cnn.add(MaxPool2D(pool_size=(2, 2)))
cnn.add(Flatten())
cnn.add(Dense(64, activation='relu'))
cnn.add(Dropout(0.5))
cnn.add(Dense(1, activation='sigmoid'))
cnn.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
# -
cnn.summary()
# ## Part 3 - Model Training
# +
epochs = 100
# fit_generator is deprecated in recent TensorFlow; fit() accepts generators directly
cnn.fit(training_set, validation_data=test_set, epochs=epochs)
# -
# ## Part 4 - Model Evaluation
metrics = pd.DataFrame(cnn.history.history)
metrics[['loss', 'val_loss']].plot()
metrics[['accuracy', 'val_accuracy']].plot()
# +
from tensorflow.keras.preprocessing import image
from matplotlib.image import imread
import matplotlib.pyplot as plt
PATH = '../../../Deep_Learning/CNN/dataset/single_prediction/cat_or_dog_1.jpg'
test_image = image.load_img(PATH, target_size=(128, 128))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image / 255, axis=0)
# predict_classes was removed in newer TensorFlow; threshold the sigmoid output instead
result = (cnn.predict(test_image) > 0.5).astype(int)
label = list(training_set.class_indices.keys())[result[0][0]]
plt.title("Prediction: " + label)
plt.imshow(imread(PATH))
# -
cnn.save("cat_dog.h5")
# +
from tensorflow.keras.preprocessing import image
from matplotlib.image import imread
import matplotlib.pyplot as plt
PATH = '../../../Deep_Learning/CNN/dataset/single_prediction/cat_or_dog_2.jpg'
test_image = image.load_img(PATH, target_size=(128, 128))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image / 255, axis=0)
# predict_classes was removed in newer TensorFlow; threshold the sigmoid output instead
result = (cnn.predict(test_image) > 0.5).astype(int)
label = list(training_set.class_indices.keys())[result[0][0]]
plt.title("Prediction: " + label)
plt.imshow(imread(PATH))
| Machine Learning Projects/Useful_Code_Examples/Deep_Learning/CNN-CatsAndDogs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="0MRC0e0KhQ0S"
# # Support Vector Machine (SVM)
# + [markdown] colab_type="text" id="LWd1UlMnhT2s"
# ## Importing the libraries
# + colab={} colab_type="code" id="YvGPUQaHhXfL"
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# + [markdown] colab_type="text" id="K1VMqkGvhc3-"
# ## Importing the dataset
# + colab={} colab_type="code" id="M52QDmyzhh9s"
dataset = pd.read_csv('Social_Network_Ads.csv')
dataset.head()
# -
X = dataset.iloc[:, :-1].values
X[:5]
y = dataset.iloc[:, -1].values
y[:5]
# +
#y = y.reshape(len(y),1)
#y.shape
# + [markdown] colab_type="text" id="YvxIPVyMhmKp"
# ## Splitting the dataset into the Training set and Test set
# + colab={} colab_type="code" id="AVzJWAXIhxoC"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1137, "status": "ok", "timestamp": 1588267335709, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="P3nS3-6r1i2B" outputId="c9d82a73-9c13-4cac-e5f2-a7c7803f1819"
X_train[:5]
# + colab={"base_uri": "https://localhost:8080/", "height": 171} colab_type="code" executionInfo={"elapsed": 1133, "status": "ok", "timestamp": 1588267335710, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="8dpDLojm1mVG" outputId="a3d03ccc-37c0-40b8-92c7-232abd3240a7"
X_test[:5]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1128, "status": "ok", "timestamp": 1588267335710, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="qbb7i0DH1qui" outputId="ae89dad9-0dfb-4612-f88a-828fb9f95836"
y_train[:5]
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 1591, "status": "ok", "timestamp": 1588267336179, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="kj1hnFAR1s5w" outputId="948c3b43-2282-400f-9f0e-e9f397b65047"
y_test[:5]
# -
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# + [markdown] colab_type="text" id="kW3c7UYih0hT"
# ## Feature Scaling
# + colab={} colab_type="code" id="9fQlDPKCh8sc"
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1585, "status": "ok", "timestamp": 1588267336180, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="syrnD1Op2BSR" outputId="cd5ad357-7763-4894-d894-76fbe781fcd8"
X_train[:5]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1579, "status": "ok", "timestamp": 1588267336180, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="JUd6iBRp2C3L" outputId="6661e6f4-9c33-42af-d9c7-ca552603de1e"
X_test[:5]
# + [markdown] colab_type="text" id="bb6jCOCQiAmP"
# ## Training the SVM model on the Training set
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" executionInfo={"elapsed": 1578, "status": "ok", "timestamp": 1588267336181, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="e0pFVAmciHQs" outputId="2456d6a2-0437-42b3-fbe1-e75a23b26148"
from sklearn.svm import SVC
classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)
# + [markdown] colab_type="text" id="yyxW5b395mR2"
# ## Predicting a new result
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1573, "status": "ok", "timestamp": 1588267336181, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="f8YOXsQy58rP" outputId="46dd75b3-1359-4f2a-8978-5ea65c8a52e9"
classifier.predict(sc.transform([[30,87000]]))
# + [markdown] colab_type="text" id="vKYVQH-l5NpE"
# ## Predicting the Test set results
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1569, "status": "ok", "timestamp": 1588267336182, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="p6VMTb2O4hwM" outputId="3621a714-16d0-4c4a-dfc1-ae223f3cfc1d"
y_pred = classifier.predict(X_test)
y_pred[:5]
# -
y_pred_test = np.concatenate((y_pred.reshape(len(y_pred),1),y_test.reshape(len(y_test),1)),1)
y_pred_test[:5]
# + [markdown] colab_type="text" id="h4Hwj34ziWQW"
# ## Making the Confusion Matrix
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 1563, "status": "ok", "timestamp": 1588267336182, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="D6bpZwUiiXic" outputId="f72110a8-b97b-43e8-9adf-14673886ccab"
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, accuracy_score
cm = confusion_matrix(y_test, y_pred)
ConfusionMatrixDisplay(cm).plot()
accuracy_score(y_test,y_pred)
# + [markdown] colab_type="text" id="6OMC_P0diaoD"
# ## Visualising the Training set results
# + colab={"base_uri": "https://localhost:8080/", "height": 349} colab_type="code" executionInfo={"elapsed": 155558, "status": "ok", "timestamp": 1588267490181, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="_NOjKvZRid5l" outputId="ac9cc7c4-d0db-4fb1-bca7-779ff68cbfd4"
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5
y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(8, 8), facecolor='white')
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, edgecolors='k', cmap=plt.cm.Paired)
#plt.figure(facecolor='red')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
# + [markdown] colab_type="text" id="SZ-j28aPihZx"
# ## Visualising the Test set results
# + colab={"base_uri": "https://localhost:8080/", "height": 349} colab_type="code" executionInfo={"elapsed": 307655, "status": "ok", "timestamp": 1588267642283, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEuXdT7eQweUmRPW8_laJuPggSK6hfvpl5a6WBaA=s64", "userId": "15047218817161520419"}, "user_tz": -240} id="qeTjz2vDilAC" outputId="08413d38-f94b-4100-bfc3-19c1b5d5efe4"
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_test[:, 0].min() - .5, X_test[:, 0].max() + .5
y_min, y_max = X_test[:, 1].min() - .5, X_test[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(8, 8), facecolor='white')
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the test points
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, edgecolors='k', cmap=plt.cm.Paired)
#plt.figure(facecolor='red')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
| Machine-Learning/Part 3 - Classification/3- Support Vector Machine (SVM)/support_vector_machine_solved.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shubhigupta991/Reddit-Flair-Detection/blob/main/scripts/Modelling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LO7mMSXXCORM"
# ## Showing Data
# + id="SyfNb57OVBHl"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="tCY-RcAMCJQX" outputId="a52c111d-9ec6-4b55-dfd4-cf9479f7aa64"
data = pd.read_csv("drive/MyDrive/data.csv")
data.head()
# + id="21s_lj57VBHn"
data.fillna("None",inplace = True)
# + colab={"base_uri": "https://localhost:8080/"} id="PQ4r08DzVBHn" outputId="e795c440-7ea8-43f1-879a-9e7000848907"
data.isna().sum()
# + [markdown] id="Sq7PnlhKVBHn"
# ## Data Preprocessing and Model Fitting
# + id="WOEGj1RwVBHo"
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
# + id="Bjwi9kevVBHo"
def fit_and_score(models, X_train, y_train, X_test, y_test):
'''
Fit and evaluate the given machine learning models.
Parameters:
models = Dictionary of models to fit and evaluate.
X_train = Training data without labels.
y_train = Training labels.
X_test = Test data without labels.
y_test = Test labels.
'''
np.random.seed(21)
model_scores = {}
for model_name, model in models.items() :
model = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
(model_name, model)])
print(f'Fitting {model_name} .....')
model.fit(X_train,y_train)
print(f'Evaluating {model_name} .....')
model_scores[model_name] = model.score(X_test,y_test)
return model_scores
# + id="nieKGJs3cQPO"
models = {'LogisticRegression' : LogisticRegression(),
'Linear_svm' : SGDClassifier(),
'KNN': KNeighborsClassifier(),
'RandomForestClassifier' : RandomForestClassifier(),
'XGBClassifier' : XGBClassifier()}
# + id="WBj6GzwAaWVb"
features = {'combined_features' : data['combined_features'],
'comments' : data['comments'],
'title' : data['title'],
'body' : data['body'],
'url' : data['url']}
cat = data['flair']
# + colab={"base_uri": "https://localhost:8080/"} id="dpUsKqMZcVeW" outputId="39fb9e2e-7295-4ebb-c8e9-c5516ee47545"
scores = {}
for feature in features:
X_train,X_test,y_train,y_test = train_test_split(features[feature],cat,test_size=0.2, random_state = 21)
print(f'Flair Detection using {feature} as Feature')
model_scores = fit_and_score(models,X_train,y_train,X_test,y_test)
scores[feature] = model_scores
# + colab={"base_uri": "https://localhost:8080/"} id="sjcHOBi0gaIU" outputId="a0e03f21-3607-4bcf-b3ff-0fb2ac5883a8"
scores
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="GCUunMFYj3ee" outputId="7b8d0266-32ea-40ae-d2e9-44107253aec2"
for feature,score in scores.items():
model_compare = pd.DataFrame(score,index = ['accuracy'])
model_compare.T.plot.bar(title=feature.upper())
# + [markdown] id="P9YLI05eVUkc"
# ## Best Model
# By observing the above results, I concluded that the model that gives the best result is `XGBClassifier` and the best feature for predicting flair is `combined_features`.
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="TqJKtTZKk7yi" outputId="2174af21-18aa-45c9-ead9-042e98888e39"
X = data['combined_features']
y = data['flair']
np.random.seed(21)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2)
model = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', XGBClassifier())])
model.fit(X_train,y_train)
model_score = model.score(X_test,y_test)
print(f'Model score : {model_score}')
# + [markdown] id="5BCXC2cjYl2r"
# ## Improving a model
# First prediction = baseline prediction. First model = baseline model.
#
# From a data perspective:
#
# * Could we collect more data? (Generally, the more data, the better.)
# * Can we improve the data? (e.g., by adding more reliable features)
#
# From a model perspective:
#
# * Is there a better model we can use? (Refer - sklearn ml map)
# * Could we improve our model? (Tuning hyperparameters)
#
# Parameters vs. hyperparameters:
#
# * Parameters :- patterns the model finds on its own
# * Hyperparameters :- settings on the model we can adjust (potentially) to improve its ability to find patterns
#
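# One systematic way to adjust hyperparameters is a grid search over the pipeline; this is a sketch on placeholder text data with `LogisticRegression`, not the Reddit data or `XGBClassifier`:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

texts = ["good movie", "bad movie", "great film", "terrible film",
         "good plot", "bad plot", "great acting", "terrible acting"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

pipe = Pipeline([("vect", CountVectorizer()),
                 ("tfidf", TfidfTransformer()),
                 ("clf", LogisticRegression())])

# "step__param" names route each setting to the right pipeline stage
grid = GridSearchCV(pipe,
                    param_grid={"vect__ngram_range": [(1, 1), (1, 2)],
                                "clf__C": [0.1, 1.0, 10.0]},
                    cv=2)
grid.fit(texts, labels)
print(grid.best_params_, grid.best_score_)
```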
# + [markdown] id="lBRWsrb6gU67"
# After going through the model documentation https://xgboost.readthedocs.io/en/latest/parameter.html and by experimenting,
# I found that the best parameters are:
# `n_estimators=1000, verbosity=1, seed=2, colsample_bytree=0.6, subsample=0.7, objective='multi:softmax'`
# + id="Bs_EN9WBQSBQ"
X = data['combined_features']
y = data['flair']
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state = 21)
model = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', XGBClassifier(random_state=42,n_estimators=1000,verbosity=1, seed=2,
colsample_bytree=0.6, subsample=0.7,objective='multi:softmax'))])
model.fit(X_train,y_train);
# + colab={"base_uri": "https://localhost:8080/"} id="Zy7e-5J0-S8x" outputId="043cd13c-940f-4ee6-9d74-6535882e6039"
model.score(X_test,y_test)
# + [markdown] id="dWnwWmqh-hoZ"
# ## Evaluating our tuned machine learning classifier, beyond accuracy
# * ROC curve and AUC score
# * Confusion matrix
# * Classification report
#   * Precision
#   * Recall
#   * F1-score
#
# ... and it would be great if cross-validation were used where possible.
#
#
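As a quick sketch of the cross-validation point: `cross_val_score` refits the whole pipeline on each fold, giving a spread of accuracies instead of a single train/test split. The snippet is self-contained — a toy corpus and `LogisticRegression` stand in for the notebook's data and `XGBClassifier`.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-ins for data['combined_features'] and data['flair']
X_demo = ["stock market rises today", "cricket team wins series", "budget session announced",
          "football final tonight", "tax reforms bill passed", "tennis open begins"] * 5
y_demo = ["Business", "Sports", "Business", "Sports", "Business", "Sports"] * 5

pipe = Pipeline([('vect', TfidfVectorizer()),
                 ('clf', LogisticRegression(max_iter=1000))])

# 5-fold cross-validated accuracy: one score per held-out fold
cv_scores = cross_val_score(pipe, X_demo, y_demo, cv=5, scoring='accuracy')
print(f'CV accuracy: {np.mean(cv_scores):.3f} +/- {np.std(cv_scores):.3f}')
```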
# + id="Zhb2U0Va_BEp"
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
# + [markdown] id="nykFWhnR-0Aw"
# To make comparisons and evaluate our trained model, first we need to make predictions.
# + id="idNOZtec_g7q"
y_preds = model.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="resEuccP_5AF" outputId="d1e07e5d-b9f7-4aad-cd08-b819ee76b6d3"
confusion_matrix(y_test,y_preds)
# + colab={"base_uri": "https://localhost:8080/", "height": 352} id="6kCst-fo_76K" outputId="7eea8fe9-36cf-4ebe-f6ca-688b97fd2200"
sns.set(font_scale=1.5)
def plot_conf_mat(conf_mat):
    '''
    Plot a confusion matrix as a labelled heatmap.
    '''
    fig, ax = plt.subplots(figsize=(5, 5))
    ax = sns.heatmap(conf_mat, annot=True, cbar=False)
    # sklearn's confusion_matrix puts true labels in rows and predictions in columns
    plt.xlabel('Predicted label')
    plt.ylabel('True label')
    plt.show()
conf_mat = confusion_matrix(y_test, y_preds)
plot_conf_mat(conf_mat)
# + id="FhNOeMDdchPZ" outputId="42c19cd3-35e6-4a51-e54c-e808639d3d91" colab={"base_uri": "https://localhost:8080/"}
print(classification_report(y_test, y_preds))
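The checklist above also mentions the ROC curve and AUC score, which the notebook never computes; for a multi-class problem it needs class probabilities (here `model.predict_proba(X_test)` would supply them) together with a one-vs-rest averaging strategy. A self-contained sketch with made-up probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up stand-ins: 3 classes, one probability row per sample
# (in the notebook these would be y_test and model.predict_proba(X_test))
y_true = np.array([0, 1, 2, 0, 1, 2, 0, 1])
y_proba = np.array([[0.8, 0.1, 0.1],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.2, 0.7],
                    [0.6, 0.3, 0.1],
                    [0.3, 0.6, 0.1],
                    [0.2, 0.2, 0.6],
                    [0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])

# One-vs-rest: average the AUC of each class against all the others
auc = roc_auc_score(y_true, y_proba, multi_class='ovr')
print(f'One-vs-rest ROC AUC: {auc:.3f}')  # → 1.000 on this toy data
```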
# + [markdown] id="OtgGPRQ8BraW"
# ## Saving a Machine learning model
#
# + id="qPkFAou2AMtU"
import pickle
pickle.dump(model, open('final_model.pkl', 'wb'))
# + [markdown] id="yWueauuggCqd"
# ## Before deploying the model to the website, let's first check its predictions
# + [markdown] id="tZCPIT98jobQ"
# Loading the model and all the required files
#
# + id="Y6VVS1_eXqlp"
final_model = pickle.load(open('final_model.pkl', 'rb'))
# + id="TsmP7pxjkTFf" outputId="ba23f1d9-e9c2-4f3a-84fe-c61d4ba3fb1c" colab={"base_uri": "https://localhost:8080/"}
# !pip install praw
# + id="0N7cQBTYkVjm" outputId="6401e5dd-33bc-45ec-c247-2ffa77f4aade" colab={"base_uri": "https://localhost:8080/"}
import praw
import numpy as np
import pandas as pd
import re
import nltk
from nltk.corpus import stopwords
import datetime as dt
nltk.download('stopwords')  # only the stopword list is needed here
from bs4 import BeautifulSoup
# + id="dII-2LoClnd0"
model = final_model
# + id="bfVOBagol6Y2"
reddit = praw.Reddit(client_id='KqCyLYQgMNwp4w', client_secret='cA9UCAPiVadgs4FTsnZ3RqJUR0hROw', user_agent='Flair-Detector',
username='shubhigupta09', password='<PASSWORD>')
# + id="Rm-FvfQ-mJB7"
replace_by_space = re.compile('[/(){}\[\]\|@,;]')
bad_symbols = re.compile('[^0-9a-z #+_]')
stopWords = set(stopwords.words('english'))
def text_cleaning(text):
    text = BeautifulSoup(text, "lxml").text
    text = text.lower()
    text = replace_by_space.sub(' ', text)
    text = bad_symbols.sub('', text)
    text = ' '.join(word for word in text.split() if word not in stopWords)
    return text

def string(value):
    return str(value)
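For a quick, dependency-free check of what the cleaning steps do, here is the same regex logic with a tiny hard-coded stopword set in place of NLTK's list (and without the BeautifulSoup HTML-stripping step):

```python
import re

replace_by_space = re.compile(r'[/(){}\[\]\|@,;]')
bad_symbols = re.compile('[^0-9a-z #+_]')
stop_words = {'the', 'from', 'a', 'an', 'of'}  # tiny stand-in for NLTK's English list

def clean(text):
    text = text.lower()
    text = replace_by_space.sub(' ', text)   # punctuation-like symbols become spaces
    text = bad_symbols.sub('', text)         # anything else non-alphanumeric is dropped
    return ' '.join(w for w in text.split() if w not in stop_words)

print(clean("Iran removes anti-India banners from the Pak consulate!"))
# → 'iran removes antiindia banners pak consulate'
```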
# + id="4lMzs_TKmUJG"
def prediction(url):
    submission = reddit.submission(url=url)
    data = {}
    data["title"] = str(submission.title)
    data["url"] = str(submission.url)
    data["body"] = str(submission.selftext)
    # Concatenate up to the first 10 top-level comments
    submission.comments.replace_more(limit=None)
    comment = ''
    count = 0
    for top_level_comment in submission.comments:
        comment = comment + ' ' + top_level_comment.body
        count += 1
        if count > 10:
            break
    data["comment"] = str(comment)
    data['title'] = text_cleaning(data['title'])
    data['body'] = text_cleaning(data['body'])
    data['comment'] = text_cleaning(data['comment'])
    combined_features = data["title"] + data["comment"] + data["body"] + data["url"]
    # model.predict returns an array like ['Politics']; strip the brackets and quotes
    return str(model.predict([combined_features]))[2:-2]
# + id="NljkK01rsm4a" outputId="ae5964d6-669c-49d8-a0c9-bf2e39c99d06" colab={"base_uri": "https://localhost:8080/", "height": 35}
prediction("https://www.reddit.com/r/india/comments/d1m9ld/iran_removes_antiindia_banners_from_pak_consulate/")
# + id="UMaEieE2sn_-"
| Reddit-Scraping-And-Flair-Detection/Modelling.ipynb |