# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/CReis93/2022_ML_Earth_Env_Sci/blob/main/Copie_de_S4_2_Clustering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="KO7kiUVK5SGi"
# <img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EYCb6vvFoSlMoAfr_YXSg8UBoMRAF1cpIUTeUFFhBVYsZw?download=1'>
# <center> Photo Credits: <a href='https://unsplash.com/photos/PizD8punZsw'>Three Assorted-Color Garbage Cans</a> by <a href='https://unsplash.com/@julytheseventifirst'><NAME></a> licensed under the <a href='https://unsplash.com/license'>Unsplash License</a>
# </center>
#
# *Sorting takes a lot of effort - is there a way that we can get computers to do it automatically for us when we don't even know where to begin?*
# + [markdown] id="hsXnPPkh5SDp"
# This notebook will be used in the lab session for week 4 of the course, covers Chapter 9 of Géron, and builds on the [notebooks made available on _Github_](https://github.com/ageron/handson-ml2).
#
# Need a reminder of last week's labs? Click [_here_](https://colab.research.google.com/github/tbeucler/2022_ML_Earth_Env_Sci/blob/main/Lab_Notebooks/Week_3_Decision_Trees_Random_Forests_SVMs.ipynb) to go to the notebook for week 3 of the course.
# + [markdown] id="LQmG3MgLh6fC"
# ##**Chapter 9 – Clustering**
# + [markdown] id="l6_AqYXXCY0c"
# # Setup
# + [markdown] id="SjZzFMhZCYCj"
# First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
# + id="FOWGsRI22jOT"
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Is this notebook running on Colab or Kaggle?
IS_COLAB = "google.colab" in sys.modules
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
rnd_seed = 2022
rnd_gen = np.random.default_rng(rnd_seed)
# To plot pretty figures
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "clustering"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
# + [markdown] id="AVIjgpeDDrJC"
# ## Data Setup
# + [markdown] id="vJfxDFQBCeVO"
# We need to load the MNIST dataset from OpenML - we won't be loading it as a Pandas dataframe, but will instead use the dictionary / ndarray representation.
# + id="96ed3Hj8ChTd"
#Load the mnist dataset
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X = mnist['data']
y = mnist['target'].astype(np.uint8)
# + [markdown] id="tW68lz26CoI6"
# We're going to subsample the digits in the dataset, choosing a random set of digit classes - we won't even know the number of different digits we will choose; it will be somewhere between 4 and 8!
# + id="TxjW4hltD0nL"
# Generating the random set of digit labels we will use to extract samples from
# the MNIST handwritten digit dataset.
digits = rnd_gen.choice(np.arange(10), # Digit Possibilities
int( np.round( rnd_gen.uniform(3.5, 8.5) ) ), # Number of digits to use
replace = False) # Can't repeat digits
# + [markdown] id="n4PrwneRD1me"
# We will learn on a total of 8000 digits, evenly distributed amongst the randomly selected digits in the dataset. Let's store the number of samples to be taken from each class.
# + id="TQrrl9j2Iv7t"
# Let's find a round number of digits to extract for each digit
num_samples = np.round(8000/len(digits) + 1 ).astype(int)
# + [markdown] id="zbaBrvOIKjyN"
# With that out of the way, let's generate the dataset!
# + id="Wt_XQ9FUCnV7"
# Placeholder Vars
sub_X = None
sub_y = None
# Looping through digit types
for digit in digits:
    # find indices where target is digit of interest
    y_idxs = y==digit
    # rnd_gen.choice chooses num_samples indices from the samples available for
    # this digit. The corresponding truth is an array with the same number of
    # rows as the subset, full of the current digit
    X_subset = X[y_idxs][rnd_gen.choice(np.arange(y_idxs.sum()),(num_samples,))]
    y_subset = np.full(X_subset.shape[0],digit)
    if sub_X is None:
        sub_X = X_subset
        sub_y = y_subset
    else:
        sub_X = np.vstack([sub_X, X_subset])
        sub_y = np.hstack((sub_y,y_subset))
# Shuffling the dataset, also limiting the number of digits to 8000 so we can't
# cheat and tell how many digits there are by looking at the length of the array
shuffler = rnd_gen.permutation(len(sub_X))
sub_X = sub_X[shuffler][:8000]
sub_y = sub_y[shuffler][:8000]
# + [markdown] id="R70HPKpVCgaA"
# We now have a set of 8000 random samples that we know belong to somewhere between 4 and 8 clusters 😃 <br> Can we divide them into these groups without knowing the labels beforehand?
#
# **Warning**: Don't expect near perfect results this time.
# + [markdown] id="yEwKJDqKmQwW"
# # Clustering with KMeans
# + [markdown] id="eeR0w4WSmV_K"
# The first thing we need to do is to import the KMeans model from scikit learn. Let's go ahead and do so.
#
# ###**Q1) Import the KMeans model from scikit learn.**
#
# *Hint: [Here is the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) for the Kmeans implementation in sklearn.*
# + id="FVRiu373mWYI"
#Write your code here
from sklearn.cluster import KMeans
# + [markdown] id="5wuMA-ir7AGG"
# We don't know how many clusters we should use to split the data. Our first instinct would be to use as many clusters as the number of digits we have, but even that is not necessarily optimal. Why don't we try all values of $k$ between 2 and 20?
#
# To do so, we'll need to begin by training a Kmeans algorithm for each value of K we're interested in.
#
# + [markdown] id="KX2Op57zAZ-P"
# How long does it take to train a single KMeans model with 10 initial centroid settings? This will give us an idea as to whether it may be a good idea to apply a dimensionality reduction algorithm before fitting our models.
#
# ###**Q2) Import python's time library and measure how long it takes to train a single KMeans model with 3 clusters on the raw data subset.**
#
# *Hint 1: [Here is the documentation](https://docs.python.org/3/library/time.html#time.time) for the function used to get timestamps*
# + id="GPqc5D1cAY5y" colab={"base_uri": "https://localhost:8080/"} outputId="c1c13e91-3fd1-4157-fff8-577455a747d2"
import time
t0 = time.time()
kmeans_test = KMeans(n_clusters=3, # Number of clusters to split into
random_state = rnd_seed) # Random seed
kmeans_test.fit(sub_X) # Fitting to data subset
t1 = time.time()
print(f"Training took {(t1 - t0):.2f}s")
# + [markdown] id="7clPTnByCnuD"
# That's a bit longer than we'd like (3.42 seconds during testing of the notebook). Doing this many times means we could be sitting around doing *nothing*, possibly for *several minutes*. ***Who has time for this?***
#
# Let's reduce the dataset using PCA, capturing 99% of the variability in the data.
#
# ###**Q3) Import PCA from scikit and reduce the dimensionality of our input data. 99% of the variance in the data should be captured.**
#
# *Hint 1: [Here is the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) for PCA.*
#
# *Hint 2: `.fit_transform()` will be very useful*
# + id="l7gGFSUiDxGF"
from sklearn.decomposition import PCA # Importing PCA
# + id="tce6_jIMBWMb"
pca = PCA(0.99) # Instantiate PCA, keeping enough components to explain 99% of the variance
# + id="75flJdqhBWXE"
reduced_X = pca.fit_transform(sub_X) # Transform the data subset
# + [markdown] id="OBUe-CCvGNVN"
# Let's try training a KMeans model on the reduced dataset and see if our training time improved...
#
# ###**Q5) Repeat Q2 using the reduced dataset**
# *Hint 1: We're still splitting the data into 3 clusters*
# + id="NIAWAWLwGhdh" colab={"base_uri": "https://localhost:8080/"} outputId="3965b122-8bae-48c6-d7cc-449b257666a7"
#Complete the code
t0 = time.time()
kmeans_test = KMeans(n_clusters=3, # Number of clusters to split into
random_state = rnd_seed) # Random seed
kmeans_test.fit(reduced_X) # Fitting to reduced data subset
t1 = time.time()
print(f"Training took {(t1 - t0):.2f}s")
# + [markdown] id="GnSq_d_PGtek"
# That's somewhat better (1.42 seconds during my testing). Let's try training on this reduced dataset!
# + [markdown] id="0H8NeEuXHBNq"
# ###**Q6) Train a KMeans model for $\; 2 \le k \le 20$**
#
# *Hint 1: Set up a range using python's [`range`](https://docs.python.org/3/library/functions.html#func-range) function or numpy's [`arange`](https://numpy.org/doc/stable/reference/generated/numpy.arange.html)*
#
# *Hint 2: You can store each trained model by appending it to a list as you iterate*
# + id="L10E0hhgBroL"
#Complete the code
k_list = list(range(2,21,1)) # Create a list of k values to test
# + id="2Otzq8hHBrap"
kmeans_models = list() # Create a variable in which to store the different models
# + id="iKajWdb760IG" colab={"base_uri": "https://localhost:8080/"} outputId="1c8d03e4-52b3-4be2-da58-dad216f5dd7e"
t0 = time.time() # Get a timestamp to keep track of time
for k in k_list:
    # print out a statement stating which k value you are working on
    t1 = time.time() # Get a current timestamp
    print(f"\r Currently working on k={k}, elapsed time: {(t1 - t0):.2f}s", end="")
    kmeans = KMeans(n_clusters=k, # Set the number of clusters
                    random_state=rnd_seed) # Set the random state
    kmeans.fit(reduced_X) # Fit the model to the reduced data subset
    kmeans_models.append(kmeans) # store the model trained to predict k clusters
print(f"\r Finished training the models! It took: {(time.time() - t0):.2f}s")
# + [markdown] id="7yTraW93IF5z"
# You should hopefully remember the silhouette score and inertia metrics from the reading. Let's go ahead and make a $k$ vs. silhouette score plot and a $k$ vs. inertia plot.
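# As a quick refresher (stated here as a sketch, following the usual textbook definitions rather than anything specific to this notebook): inertia is the sum of squared distances from each sample to its closest centroid, and the silhouette coefficient of a sample compares its mean intra-cluster distance $a$ with its mean distance $b$ to the nearest other cluster.

```latex
% Inertia of a clustering with centroids \mu_1, ..., \mu_K over m samples
J = \sum_{i=1}^{m} \min_{j} \left\lVert \mathbf{x}^{(i)} - \boldsymbol{\mu}_j \right\rVert^2

% Silhouette coefficient of one sample (the score reported by sklearn is the
% mean of s over all samples)
s = \frac{b - a}{\max(a, b)}, \qquad -1 \le s \le 1
```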
# + [markdown] id="I5f4hywgIyFN"
# ###**Q7) Import the silhouette score metric and generate a silhouette score value for each model we trained**
#
# *Hint 1: [Here is the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for the Silhouette score implementation in scikit learn*
#
# *Hint 2: The silhouette score needs the model input data and the model labels as arguments*
#
# *Hint 3: The model labels are stored as an attribute in each model. Check the list of attributes in the [sklearn KMeans documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html).*
# + colab={"base_uri": "https://localhost:8080/"} id="IfyGTo6cwps0" outputId="ce9c12ca-f56f-40a3-9b60-924b2b6925fb"
kmeans_models
# + id="8qmBfe8oIkfF"
# Complete the code
from sklearn.metrics import silhouette_score
# + id="PPKxDVspByPA"
silhouette_scores = [silhouette_score(reduced_X, model.labels_)
for model in kmeans_models]
# + [markdown] id="pDt_a4PqWQf7"
# ###**Q8) Plot comparing $k$ vs Silhouette Score. Highlight the maximum score**
#
# *Hint 1: You'll need to find the position of the best score in the silhouette score list. [Here is the documentation ](https://numpy.org/doc/stable/reference/generated/numpy.argmax.html)to a numpy function that would be very useful for this.*
#
# *Hint 2: matplotlib's pyplot has been imported as `plt`. [Here is the documentation](https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.pyplot.subplots.html) to the `subplots()` method.*
#
# *Hint 3: [Here is the documentation](https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.pyplot.plot.html) to the `plot()` method in matplotlib's pyplot. Note that it is also implemented as a method in the `axes` objects created with `plt.subplots()`*
#
# + id="OkjyAv1jWQme" colab={"base_uri": "https://localhost:8080/"} outputId="8e39463e-a7d6-43a4-ea02-dd0ceb1dc392"
# Complete the code
best_index = np.argmax(silhouette_scores)# Find the index of the model with the highest score
best_index
# + id="QwYQF4nSB1sY"
best_k = k_list[best_index] # Get the best K value, per the silhouette score
best_score = silhouette_scores[best_index]
# + id="HXjxgaNGB13n" colab={"base_uri": "https://localhost:8080/", "height": 414} outputId="670e7aa3-a1c4-4fc8-da05-859f9849bbae"
# Make a figure with size (18,6)
fig, ax = plt.subplots(figsize=(18,6))
# Make the plot with the K values in the horizontal axis and the silhouette
# score in the vertical axis
ax.plot(k_list, silhouette_scores, "bo-")
ax.set_xlabel("$k$", fontsize=14)
ax.set_ylabel("Silhouette score", fontsize=14)
ax.plot(best_k, best_score, "rs")
# + [markdown] id="ACdrjMZlz-z-"
# Now we'll plot the value of $k$ vs inertia
#
# ###**Q9) Plot comparing $k$ vs Inertia. Highlight the inertia for the model with the highest silhouette score**
#
# *Hint 1: If you followed the previous step as it was written, you have the index for the best model stored in `best_index`.*
#
# *Hint 2: [The KMeans documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans) details the attribute in which the model's inertia is stored.*
#
# + id="gEC3lr000RDW"
#Complete the Code
inertias = [model.inertia_ for model in kmeans_models] # Iterate through list of kmeans models
# + id="eZU_mkmvB-MM"
best_inertia = inertias[best_index] # Get the inertia for the model with the highest silhouette score
# + id="LAy0DMLy5KG-" colab={"base_uri": "https://localhost:8080/", "height": 426} outputId="b1df049f-9548-4191-c981-b5c362fb25ac"
# Make a figure with size (18,6)
fig, ax = plt.subplots(figsize=(18,6))
# Make the plot with the K values in the horizontal axis and the inertia in the
# vertical axis
ax.plot(k_list, inertias, "bo-")
ax.set_xlabel("$k$", fontsize=14)
ax.set_ylabel("Inertia", fontsize=14)
ax.plot(best_k, best_inertia, "rs")
# + [markdown] id="p2aDRxJj7Pen"
# If you ran the notebook with the default random seed, you may be surprised to see that the best performing KMeans model is the one that breaks it off into two clusters! The next best two will be those associated with $k=4$ and $k=5$. Let's get some plots to try to make sense of the results, since we have the actual labels.
#
# There aren't any more questions to answer from here on out, but you *will* have to change the code if you started out from a different random seed! I'll try to be good about pointing out what code you'll need to change.
#
# Let's begin by using [scikit's TSNE implementation](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) to reduce the dimensionality of the dataset for plotting. This will take a bit of computation time!
# + id="Kr24Ye_T8sDZ" colab={"base_uri": "https://localhost:8080/"} outputId="36a11f8a-b44d-443f-aaca-1c885669e619"
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, # We'll project onto 2D plane
random_state=rnd_seed, # We need a random seed
learning_rate='auto') #Let the algorithm handle the learning rate
# And now get the input data in 2-component reduced form
X_plot = tsne.fit_transform(sub_X)
# + [markdown] id="c7BlPG2YdQ1-"
# Let's continue by making a list of the three best models.
# + id="wjo_EZs4-kw6"
best_models = []
# We get the best model ~automagically~
best_models.append(kmeans_models[best_index])
# But the 2nd and 3rd best are retrieved manually.
best_models.append(kmeans_models[2])
best_models.append(kmeans_models[3])
# + [markdown] id="l04hAbNHeF7w"
# And now, let's get a set of predictions for each model and store it in a list!
# + id="7CMme4lpeM3x"
pred_labels = []
for model in best_models:
    pred_labels.append(model.predict(reduced_X))
# + [markdown] id="PzRQD2xVeVMC"
# Pandas will make producing a nice plot a lot simpler. Let's import it and make a dataframe with the reduced input components, the truth labels, and the predicted cluster labels.
#
# Note that the predicted labels don't correspond to the digit labels since this is an unsupervised model!
# + id="6vHSkMKSebwF"
import pandas as pd
plot_data = np.stack([X_plot[:,0], X_plot[:,1], sub_y, *pred_labels],axis=1)
df = pd.DataFrame(plot_data, columns=['X1','X2','truth','pred_1','pred_2','pred_3'])
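# Not part of the lab questions, but since the predicted cluster ids don't match the digit labels, here is a minimal sketch of how one *could* relate them: assign each cluster the majority true label of its members. The helper name `map_clusters_to_labels` and the toy arrays below are made up for illustration.

```python
import numpy as np
import pandas as pd

def map_clusters_to_labels(y_true, y_pred):
    # Cross-tabulate cluster ids (rows) against true labels (columns),
    # then pick the most frequent true label per cluster
    ct = pd.crosstab(y_pred, y_true)
    mapping = ct.idxmax(axis=1).to_dict()
    return np.array([mapping[c] for c in y_pred])

# Toy example: cluster 0 is mostly 7s, cluster 1 is mostly 3s
y_true = np.array([7, 7, 7, 3, 3, 3, 7])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1])
mapped = map_clusters_to_labels(y_true, y_pred)
print(mapped)  # [7 7 7 3 3 3 3]
```

# With labels remapped this way, a supervised-style accuracy can be computed, but only for qualitative comparison - the clustering itself never saw the labels.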
# + [markdown] id="FW9nDgbxe4ze"
# And now we'll make a nice, big 2x2 plot that allows us to see the true answers and how our algorithm clustered our data!
# + id="P5gFTg7XfCQM" colab={"base_uri": "https://localhost:8080/", "height": 936} outputId="045f4975-33f2-476b-fab9-58455549b0e5"
fig, axes = plt.subplots(2, 2, figsize=(16,16))
groups = df.groupby('truth')
for label, group in groups:
    axes[0,0].plot(group.X1, group.X2, marker='o', linestyle='', markersize=4, label=int(label))
axes[0,0].legend(fontsize=12)
axes[0,0].set_title('Truth', fontsize=16)
axes[0,0].axis('off')
groups = df.groupby('pred_1')
for label, group in groups:
    axes[0,1].plot(group.X1, group.X2, marker='o', linestyle='', markersize=4)
axes[0,1].set_title('"Best" Clustering', fontsize=16)
axes[0,1].axis('off')
groups = df.groupby('pred_2')
for label, group in groups:
    axes[1,0].plot(group.X1, group.X2, marker='o', linestyle='', markersize=4)
axes[1,0].set_title('$2^{nd}$ Best Clustering', fontsize=16)
axes[1,0].axis('off')
groups = df.groupby('pred_3')
for label, group in groups:
    axes[1,1].plot(group.X1, group.X2, marker='o', linestyle='', markersize=4)
axes[1,1].set_title('$3^{rd}$ Best Clustering', fontsize=16)
axes[1,1].axis('off')
# + [markdown] id="kzx17wP7f6i6"
# Assuming you started out from the intended random seed, you'll be able to see that the "Best" model is trying to separate the 0 digits from the non-zero digits. The $2^{nd}$ best model is able to separate the digits pretty well, but lumps 4s and 9s into a single cluster!
#
# The $3^{rd}$ best model begins to group digits into clusters that may not have much significance to us at first glance, but whose metrics seem to indicate worse performance and whose results would need further analysis to try to understand.
# File: Copie_de_S4_2_Clustering.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
from numba import jit
from sympy import integrate, oo, var
from sympy.physics.hydrogen import R_nl
from numerov.cy.core import radial_wf as radial_wf_cy
from numerov.core import radial_wf as radial_wf_py
# numba.jit can provide significant speed improvements (faster than cython for `radial_wf` and comparable for `radial_integral`).
radial_wf_jit = jit(radial_wf_py)
step = 0.0001
n = 10
l = 5
# +
offset = 0.002
fig, ax = plt.subplots()
# python
r_py, y_py = radial_wf_py(n, l, step=step)
ax.plot(r_py, y_py + 3*offset, label="py")
# jit
r_jit, y_jit = radial_wf_jit(n, l, step=step)
ax.plot(r_jit, y_jit + 2*offset, label="jit")
# cython
r_cy, y_cy = radial_wf_cy(n, l, step=step)
ax.plot(r_cy, y_cy + offset, label="cy")
# sympy
y_sympy = [R_nl(n, l, r).evalf() for r in r_cy]
ax.plot(r_cy, y_sympy, label="sympy")
ax.legend(loc=0)
plt.show()
# -
# %timeit radial_wf_py(n, l, step=step)
# %timeit radial_wf_jit(n, l, step=step)
# %timeit radial_wf_cy(n, l, step=step)
from numerov.cy.core import radial_integral as radial_integral_cy
from numerov.core import radial_integral as radial_integral_py
radial_integral_jit = jit(radial_integral_py)
n1, l1 = 14, 1
n2, l2 = 13, 2
# python
radial_integral_py(n1, l1, n2 ,l2, step=step)
# %timeit radial_integral_py(n1, l1, n2 ,l2, step=step)
# numba.jit
radial_integral_jit(n1, l1, n2 ,l2, step=step)
# %timeit radial_integral_jit(n1, l1, n2 ,l2, step=step)
# cython
radial_integral_cy(n1, l1, n2 ,l2, step=step)
# %timeit radial_integral_cy(n1, l1, n2 ,l2, step=step)
# sympy
var("r")
integrate(R_nl(n1, l1, r) * r**3 * R_nl(n2, l2, r), (r, 0, oo)).evalf()
# %timeit integrate(R_nl(n1, l1, r) * r**3 * R_nl(n2, l2, r), (r, 0, oo)).evalf()
# File: notebooks/method_comparision.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import re
import xgboost
import math
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, explained_variance_score, roc_curve, auc
from sklearn.metrics import precision_recall_curve, log_loss, average_precision_score
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.naive_bayes import GaussianNB
import pickle
from sklearn.datasets import load_boston
import xgboost as xgb
import googlemaps
gmaps = googlemaps.Client(key='<KEY>')
from datetime import datetime
# +
def xgb_feat_imp(xgb_model, feature_names, top_n=10, print_imp=False, plot=False):
    '''
    Important features in XGBoost
    '''
    if top_n > len(feature_names):
        top_n = len(feature_names)
    # get_booster() is the current accessor (the old .booster() method was removed)
    imp_df = pd.DataFrame(pd.Series(xgb_model.get_booster().get_score(), name='imp'))
    imp_df['feat'] = imp_df.index
    imp_df['feat'] = imp_df['feat'].apply(lambda x: feature_names[int(x[1:])])
    imp_df.reset_index(drop=True, inplace=True)
    imp_df_top = imp_df.sort_values(by='imp', ascending=False).iloc[:top_n, :]
    imp_df_top['imp'] = np.round(imp_df_top['imp'] / imp_df['imp'].sum(), 3)
    imp_df_top = imp_df_top[['feat', 'imp']]
    print('XGBoost model top {} feature importance:'.format(top_n))
    if print_imp:
        print(imp_df_top)
    if plot:
        # bar graph to show feature importance
        pos = np.arange(imp_df_top.shape[0]) + 0.5
        plt.figure(figsize=(6, 5))
        plt.barh(pos, imp_df_top.imp.values[::-1]*100, align='center')
        plt.yticks(pos, imp_df_top.feat.values[::-1])
        plt.xlabel("Importance")
        plt.title("Feature Importance in XGBoost")
        plt.show()
    return imp_df_top
def logicreg_feat_imp(logicreg_model, feature_names, top_n=10, print_imp=False, plot=False):
    '''
    Important features in Logistic Regression
    '''
    if top_n > len(feature_names):
        top_n = len(feature_names)
    imp_df = pd.DataFrame({"feat": feature_names, "imp": np.round(logicreg_model.coef_.ravel(), 3)})
    imp_df_top = imp_df.sort_values(by=["imp"], ascending=False).iloc[:top_n, :]
    print("LogicReg model top {} feature importance:".format(top_n))
    if print_imp:
        print(imp_df_top)
    if plot:
        # bar graph to show feature importance
        pos = np.arange(imp_df_top.shape[0]) + 0.5
        plt.figure(figsize=(6, 5))
        plt.barh(pos, imp_df_top.imp.values[::-1]*100, align='center')
        plt.yticks(pos, imp_df_top.feat.values[::-1])
        plt.xlabel("Importance")
        plt.title("Feature Importance in Logistic Regression")
        plt.show()
    return imp_df_top
def sklean_model_feat_imp(model, feature_names, model_name='', top_n=10, print_imp=False, plot=False):
    '''
    Model feature importance
    '''
    if top_n > len(feature_names):
        top_n = len(feature_names)
    imp_df = pd.DataFrame({"feat": feature_names, "imp": np.round(model.feature_importances_, 3)})
    imp_df_top = imp_df.sort_values(by=["imp"], ascending=False).iloc[:top_n, :]
    print(model_name + ' model top {} feature importance:'.format(top_n))
    if print_imp:
        print(imp_df_top)
    if plot:
        # bar graph to show feature importance
        pos = np.arange(imp_df_top.shape[0]) + 0.5
        plt.figure(figsize=(6, 5))
        plt.barh(pos, imp_df_top.imp.values[::-1]*100, align='center')
        plt.yticks(pos, imp_df_top.feat.values[::-1])
        plt.xlabel("Importance")
        plt.title(model_name + " feature importance")
        plt.show()
    return imp_df_top
# +
def walking_distance(address1, address2, v_type='value'):
    '''
    Use Google Maps API to calculate walking distance from address1 to address2.
    @address1: starting address
    @address2: ending address
    @v_type: distance value type: 'value' (m) or 'text' (more human readable)
    '''
    directions_result = gmaps.directions(address1, address2, mode="walking", departure_time=datetime.now())
    if v_type == 'value':
        return directions_result[0]['legs'][0]['distance']['value']
    else:
        return directions_result[0]['legs'][0]['distance']['text']

def walking_time(address1, address2, v_type='value'):
    '''
    Use Google Maps API to calculate walking time from address1 to address2.
    @address1: starting address
    @address2: ending address
    @v_type: return time value type: 'value' (s) or 'text' (more human readable)
    '''
    directions_result = gmaps.directions(address1, address2, mode="walking", departure_time=datetime.now())
    if v_type == 'value':
        return directions_result[0]['legs'][0]['duration']['value']
    else:
        return directions_result[0]['legs'][0]['duration']['text']
# -
# ## 1. Prepare Data
df = pd.read_excel(r'C:\Users\WEIL\Documents\GitHub\yonge_eglinton_housing\YE_5yr_V1.xlsx', sheet_name='NW')
# +
df['lot_width'] = df.lot_size.apply(lambda x: int(re.findall(r"[\w']+", x)[0]))
df['lot_length'] = df.lot_size.apply(lambda x: int(re.findall(r"[\w']+", x)[1]))
df['tran_year'] = df.trasaction_date.apply(lambda x: int(re.findall(r"[\w']+", str(x))[0]))
df['tran_month'] = df.trasaction_date.apply(lambda x: int(re.findall(r"[\w']+", str(x))[1]))
df['bed_main'] = df.bed_room.apply(lambda x: int(re.findall(r"[\w']+", str(x))[0]))
df['bed_bsmt'] = df.bed_room.apply(lambda x: int(re.findall(r"[\w']+", str(x))[1]) if len(re.findall(r"[\w']+", str(x))) > 1 else 0)
df['tran_date'] = df['tran_year'] + df['tran_month'] / 12.
# use Google Maps API to calculate walking distance to Eglinton Station
df['walking_distance'] = df.Address.apply(lambda x: walking_distance(
x + ", Toronto, ON, Canada", "Eglinton Station, Toronto, ON, Canada"))
# +
# Check any number of columns with NaN
print(df.isnull().any().sum(), ' / ', len(df.columns))
# Check any number of data points with NaN
print(df.isnull().any(axis=1).sum(), ' / ', len(df))
# fill missing values
df.condition.fillna(value=round(df.condition.mean()), inplace=True)
# -
# scatter plot of the price
plt.scatter(df.tran_date, df.price)
plt.show()
# scatter plot of price group by condition
groups = df.groupby('condition')
fig, ax = plt.subplots()
ax.margins(0.05)
for name, group in groups:
    ax.plot(group.tran_date, group.price, marker='o', linestyle='', ms=6, label=name)
ax.legend(title='condition')
plt.show()
# ### Feature correlation
target = 'price'
numeric_features = df._get_numeric_data().columns.tolist()
numeric_features.remove(target)
print("Correlation between numeric feature and price:")
correlations = {}
for f in numeric_features:
    data_temp = df[[f, target]]
    x1 = data_temp[f].values
    x2 = data_temp[target].values
    key = f + ' vs ' + target
    correlations[key] = pearsonr(x1, x2)[0]
data_correlations = pd.DataFrame(correlations, index=['Value']).T
data_correlations.loc[data_correlations['Value'].abs().sort_values(ascending=False).index]
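# As a cross-check (a sketch on an invented toy frame, not the real `df`): pandas' built-in `corrwith` computes the same Pearson correlations as the loop above in a single call.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical stand-in for the housing frame
toy = pd.DataFrame({"lot_width": [30, 33, 40, 25],
                    "bed_main":  [3, 4, 5, 2],
                    "price":     [150, 180, 240, 120]})

# Series of Pearson correlations, indexed by feature column
corr = toy.drop(columns="price").corrwith(toy["price"])

# Agrees with scipy's pearsonr column by column
for col in ["lot_width", "bed_main"]:
    assert abs(corr[col] - pearsonr(toy[col], toy["price"])[0]) < 1e-12
```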
# ### Save data to csv
if True:
    df.to_csv(r'C:\Users\WEIL\Documents\GitHub\yonge_eglinton_housing\YE_5yr_V2.csv', sep=',', header=True)
# ## 2. Modeling
# ### Choose features
print('column names: \n',df.columns.tolist())
features = ['wash_room', 'condition', 'lot_width', 'lot_length', 'tran_year',
'tran_month', 'bed_main', 'bed_bsmt', 'walking_distance'
]
target = 'price'
# ### Split train / test
X = df[features].values
y = df[target].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=123)
# ### OLS
# +
# Linear Regression
lr = LinearRegression()
lr.fit(X_train, y_train)
# -
print("Linear Regression score: {0:.2f}".format(lr.score(X_test,y_test)))
print("RMSE: {0:.2f}".format(math.sqrt(np.mean((lr.predict(X_test) - y_test) ** 2))))
lr_feature_importance = logicreg_feat_imp(lr, features, top_n=10, print_imp=False, plot=True)
# ### XGBoost
# +
# regression with XGBoost
xgb = xgboost.XGBRegressor(n_estimators=1000, learning_rate=0.005, gamma=0, subsample=0.7,
colsample_bytree=0.7, max_depth=7)
xgb.fit(X_train, y_train, eval_set=[(X_test, y_test)], early_stopping_rounds=10, verbose=False)
print('XGB variance score: ', explained_variance_score(xgb.predict(X_test),y_test))
train_score = mean_squared_error(y_train, xgb.predict(X_train, ntree_limit=xgb.best_iteration))
test_score = mean_squared_error(y_test, xgb.predict(X_test, ntree_limit=xgb.best_iteration))
print("MSE on Train: {}, on Test: {}".format(train_score, test_score))
xgb_feature_importance = xgb_feat_imp(xgb, features, top_n=10, print_imp=False, plot=True)
# -
plt.figure()
plt.scatter(xgb.predict(X_train), y_train, label='train')
plt.scatter(xgb.predict(X_test), y_test, label='test')
plt.legend()
plt.xlabel('Prediction')
plt.ylabel('True')
plt.show()
# ### GBM
# +
gbm = GradientBoostingRegressor(loss = "huber", learning_rate= 0.005, n_estimators= 500,
max_depth=7, min_samples_split= 5, min_samples_leaf= 5,
subsample= 0.7, max_features= 'auto', verbose= 0)
gbm.fit(X_train, y_train)
train_score = mean_squared_error(y_train, gbm.predict(X_train))
test_score = mean_squared_error(y_test, gbm.predict(X_test))
print("MSE on Train: {}, on Test: {}".format(train_score, test_score))
gbm_feature_importance = sklean_model_feat_imp(gbm, features, model_name='', top_n=10, print_imp=False, plot=True)
# -
# ### RF
# +
rf = RandomForestRegressor(n_estimators=1000, criterion='mse', max_features="auto", max_depth=None
, min_samples_split= 2, min_samples_leaf= 1, oob_score=True)
rf.fit(X_train, y_train)
train_score = mean_squared_error(y_train, rf.predict(X_train))
test_score = mean_squared_error(y_test, rf.predict(X_test))
print("MSE on Train: {}, on Test: {}".format(train_score, test_score))
rf_feature_importance = sklean_model_feat_imp(rf, features, model_name='', top_n=10, print_imp=False, plot=True)
# -
# ### ETR
# +
etr = ExtraTreesRegressor(n_estimators=1000, min_samples_split=10, criterion='mse', random_state=1234,
n_jobs=-1, verbose=0)
etr.fit(X_train, y_train)
train_score = mean_squared_error(y_train, etr.predict(X_train))
test_score = mean_squared_error(y_test, etr.predict(X_test))
print("MSE on Train: {}, on Test: {}".format(train_score, test_score))
rf_feature_importance = sklean_model_feat_imp(etr, features, model_name='', top_n=10, print_imp=False, plot=True)
# -
# ## 3. Predict
# Sample data to predict
sample_info = {
"address": "571 Oriole Pkwy, Toronto, ON, Canada",
"wash_room": 3,
"condition": 7,
"lot_width": 33,
"lot_length": 99.33,
"tran_year": 2018,
"tran_month": 9,
"bed_main": 4,
"bed_bsmt": 0,
'walking_distance': 0 # leave it as zero
}
sample = pd.DataFrame.from_dict(sample_info, orient='index').T
sample['walking_distance'] = walking_distance(sample['address'].values[0], "Eglinton Station, Toronto, ON, Canada")
sample
first_record = False
if first_record:
    samples_to_predict = sample.copy()
else:
    samples_to_predict = pd.concat([samples_to_predict, sample])
samples_to_predict
samples_to_predict['PP_xgb'] = xgb.predict(samples_to_predict[features].values)
samples_to_predict['PP_gbm'] = gbm.predict(samples_to_predict[features].values)
# samples_to_predict['PP_lr'] = lr.predict(samples_to_predict[features].values)
samples_to_predict['PP_rf'] = rf.predict(samples_to_predict[features].values)
samples_to_predict['PP_etr'] = etr.predict(samples_to_predict[features].values)
samples_to_predict['Pred_Price_x10K'] = (samples_to_predict['PP_xgb'] + samples_to_predict['PP_gbm'] +
samples_to_predict['PP_rf'] + samples_to_predict['PP_etr']) /4.
print('#####################################')
print(' Predicted price for samples')
print('#####################################')
samples_to_predict.drop_duplicates(keep='last', inplace=True)
samples_to_predict
# Save predictions to csv
if True:
samples_to_predict.to_csv(r"C:\Users\WEIL\Documents\GitHub\yonge_eglinton_housing\house_price_predictions\predictions_20180913.csv",
header=True, sep=",")
| .ipynb_checkpoints/yonge_eglinton_housing-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 2: Comparison of the exponential and running mean for random walk model
# ## Team №6:
# 1. <NAME>
# 2. <NAME>
# 3. <NAME>
# 4. <NAME>
#
# 03.10.2019, Skoltech
# ## Working progress
# +
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objects as go
from IPython.display import Image
import os
if not os.path.exists("images"):
    os.mkdir("images")
# -
# **Supplementary function for graphics display**
generate_report = False
def plot(trace_num, x_data, y_data, xlable = 'xlable', ylable = 'ylable',
legend = 'legend', title = 'title', mode='lines'):
plot.counter += 1
fig_name = 'images/' + str(plot.counter) + '.jpg'
fig = go.Figure()
for i in range(trace_num):
fig.add_trace(go.Scatter(x=x_data[i], y=y_data[i], mode=mode, name=legend[i], showlegend = True))
fig.update_layout(
title=go.layout.Title(
text=title,
),
xaxis=go.layout.XAxis(
title=go.layout.xaxis.Title(
text=xlable
)
),
yaxis=go.layout.YAxis(
title=go.layout.yaxis.Title(
text=ylable
)
)
)
if generate_report is True:
fig.write_image(fig_name)
display(Image(fig_name))
else:
fig.show()
plot.counter = 0
# **Supplementary function for exponential mean**
def exp_mean(data, alpha, init=0):
out = np.empty(data.size)
out[0] = init
for i in range(1, data.size):
out[i] = out[i-1] + alpha*(data[i]-out[i-1])
return out
# **Supplementary function for running mean**
def run_mean(data, M, first_M, last_M):
n = round((M-1)/2)
out = np.empty(data.size)
out[:n] = np.ones(n)*first_M
out[-n:] = np.ones(n)*last_M
for i in range(n, data.size - n):
out[i] = 1/M*np.sum(data[i-n:i+n+1])
return out
# # First part
# ## Trajectories with 3000 points
num = 3000
sigma_w_given = 13**0.5
sigma_n_given = 8**0.5
# **Generating a true trajectory using the random walk model**
# +
X = np.empty(num)
X[0] = 10
# normally distributed random noise with zero mathematical
# expectation and variance sigma^2 = 13
w = np.random.normal(loc=0, scale=sigma_w_given, size=num)
# random walk model
for i in range(1, num):
X[i] = X[i-1] + w[i]
num_points = np.linspace(1, num, num=num)
# -
plot(1, [num_points], [X], mode='lines', title='{} points random trajectory'.format(num),
xlable = 'point', ylable = 'value', legend = ['true trajectory'])
# **Generating measurements 𝑧𝑖 of the process 𝑋𝑖**
# +
# normally distributed random noise with zero mathematical expectation
# and variance sigma^2 = 8
n = np.random.normal(loc=0, scale=sigma_n_given, size=num)
# measurements generation
z = X + n
# -
# **Variance calculation out of formulas from slides**
# +
# residuals calculation
v = np.empty(num)
p = np.empty(num)
v[0] = z[0]
for i in range(1, num):
v[i] = z[i] - z[i-1]
p[0] = z[0]
p[1] = z[1]
for i in range(2, num):
p[i] = z[i] - z[i-2]
# math expectation calculation
E_v = 1/(num-1) * np.sum(v[2:]**2)
E_p = 1/(num-2) * np.sum(p[3:]**2)
# variance calculation for random parameters
sigma_w = E_p - E_v
sigma_n = (2*E_v - E_p)/2
print('sigma_w = {}, sigma_n = {}'.format(sigma_w, sigma_n))
print('Reference values are 13 and 8, respectively.')
# -
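# **Sanity check (illustrative):** the estimators above rest on the moment identities $E[v^2]=\sigma_w^2+2\sigma_n^2$ and $E[p^2]=2\sigma_w^2+2\sigma_n^2$. On a large synthetic sample with known variances the recovered values should land close to the truth; the seed and sample size below are illustrative:

```python
import numpy as np

def estimate_variances(z):
    """Estimate (sigma_w^2, sigma_n^2) from measurements z of a random walk,
    using E[v^2] = sw2 + 2*sn2 and E[p^2] = 2*sw2 + 2*sn2."""
    v = z[1:] - z[:-1]          # first differences
    p = z[2:] - z[:-2]          # second differences
    E_v = np.mean(v**2)
    E_p = np.mean(p**2)
    return E_p - E_v, (2*E_v - E_p) / 2

# Synthetic data with sigma_w^2 = 13 and sigma_n^2 = 8
_rng = np.random.default_rng(1)
_w = _rng.normal(0, 13**0.5, size=200_000)
_X = 10 + np.concatenate(([0.0], np.cumsum(_w[1:])))
_z = _X + _rng.normal(0, 8**0.5, size=_X.size)
sw2_hat, sn2_hat = estimate_variances(_z)
```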
# **We can see that for the 3000-point trajectory the estimated variances are close to the given ones.**
# **Now we calculate the optimal smoothing coefficient for exponential smoothing.**
ksi = sigma_w / sigma_n
alpha = (-ksi + (ksi**2 + 4*ksi)**0.5)/2
print('Exponential smoothing parameter = {}'.format(alpha))
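# The closed form above is the positive root of $\alpha^2 + \xi\alpha - \xi = 0$, i.e. it satisfies $\alpha^2/(1-\alpha) = \xi$. A quick check over a few illustrative values of $\xi$:

```python
def optimal_alpha(ksi):
    # Positive root of a^2 + ksi*a - ksi = 0, where ksi = sigma_w^2 / sigma_n^2
    return (-ksi + (ksi**2 + 4*ksi)**0.5) / 2

for _ksi in (0.1, 1.0, 13/8):
    _a = optimal_alpha(_ksi)
    assert 0 < _a < 1                              # a valid smoothing coefficient
    assert abs(_a**2 / (1 - _a) - _ksi) < 1e-9     # satisfies a^2/(1-a) = ksi
```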
X_smooth = exp_mean(z, alpha, z[0])
plot(3, [num_points, num_points, num_points], [X, z, X_smooth],
legend=['True trajectory', 'Measurements', 'Exponential smoothing'],
title='Exponential smoothing method visualization',
xlable = 'point', ylable = 'value')
# ## Now repeat the same for a 300-point trajectory
num = 300
sigma_w_given = 13**0.5
sigma_n_given = 8**0.5
# **Generating a true trajectory using the random walk model**
# +
X = np.empty(num)
X[0] = 10
# normally distributed random noise with zero mathematical
# expectation and variance sigma^2 = 13
w = np.random.normal(loc=0, scale=sigma_w_given, size=num)
# random walk model
for i in range(1, num):
X[i] = X[i-1] + w[i]
num_points = np.linspace(1, num, num=num)
# -
plot(1, [num_points], [X], mode='lines', title='{} points random trajectory'.format(num),
xlable = 'point', ylable = 'value', legend = ['true trajectory'])
# **Generating measurements 𝑧𝑖 of the process 𝑋𝑖**
# +
# normally distributed random noise with zero mathematical expectation
# and variance sigma^2 = 8
n = np.random.normal(loc=0, scale=sigma_n_given, size=num)
# measurements generation
z = X + n
# -
# **Variance calculation out of formulas from slides**
# +
# residuals calculation
v = np.empty(num)
p = np.empty(num)
v[0] = z[0]
for i in range(1, num):
v[i] = z[i] - z[i-1]
p[0] = z[0]
p[1] = z[1]
for i in range(2, num):
p[i] = z[i] - z[i-2]
# math expectation calculation
E_v = 1/(num-1) * np.sum(v[2:]**2)
E_p = 1/(num-2) * np.sum(p[3:]**2)
# variance calculation for random parameters
sigma_w = E_p - E_v
sigma_n = (2*E_v - E_p)/2
print('sigma_w = {}, sigma_n = {}'.format(sigma_w, sigma_n))
print('Reference values are 13 and 8, respectively.')
# -
# **We can see that for the 300-point trajectory the estimated variances are not as close to the reference values as for the 3000-point trajectory. We conclude that the variance estimation, and hence the tuned exponential smoothing, works better for larger datasets, other parameters being equal.**
# **Now we calculate the optimal smoothing coefficient for exponential smoothing.**
ksi = sigma_w / sigma_n
alpha = (-ksi + (ksi**2 + 4*ksi)**0.5)/2
print('Exponential smoothing parameter = {}'.format(alpha))
X_smooth = exp_mean(z, alpha, z[0])
plot(3, [num_points, num_points, num_points], [X, z, X_smooth],
legend=['True trajectory', 'Measurements', 'Exponential smoothing'],
title='Exponential smoothing method visualization',
xlable = 'point', ylable = 'value')
# ## Second part
# **Comparison of methodical errors of exponential and running mean.**
# **Trajectory parameters**
num = 300
sigma_w = 28
sigma_n = 97
num_points = np.linspace(1, num, num=num)
# **Generating a true trajectory 𝑋𝑖 and its measurements z using the random walk model**
# +
X = np.empty(num)
X[0] = 10
# normally distributed random noise with zero mathematical
# expectation and standard deviation sigma_w = 28
w = np.random.normal(loc=0, scale=sigma_w, size=num)
# random walk model
for i in range(1, num):
X[i] = X[i-1] + w[i]
# measurements generation
n = np.random.normal(loc=0, scale=sigma_n, size=num)
z = X + n
# optimal smoothing coefficient 𝛼 determination
ksi = sigma_w**2/sigma_n**2
alpha = (-ksi + (ksi**2 + 4*ksi)**0.5)/2
print('Exponential smoothing parameter alpha = {}'.format(alpha))
# -
# **Calculating the window size M such that the theoretical variance of the running mean method equals that of exponential smoothing**
M = round((2-alpha)/alpha)
print('Running mean window size M = {}'.format(M))
# **Theoretical variances of the running mean and exponential smoothing methods**
sigma2_rm = sigma_n**2/M
sigma2_es = sigma_n**2*alpha/(2-alpha)
print('RM variance = {}, ES variance = {}'.format(sigma2_rm, sigma2_es))
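# The window size $M=(2-\alpha)/\alpha$ is chosen precisely so that the output-noise variances of the two smoothers coincide; rounding $M$ to an integer is what makes the match above only approximate. A quick check with an illustrative $\alpha$:

```python
def equivalent_window(alpha):
    """Window size that equates running-mean and exponential-smoothing variance."""
    return (2 - alpha) / alpha

_alpha = 0.4                     # illustrative smoothing coefficient
_sn2 = 97**2                     # measurement-noise variance, as above
var_rm = _sn2 / equivalent_window(_alpha)     # sigma_n^2 / M
var_es = _sn2 * _alpha / (2 - _alpha)         # sigma_n^2 * a / (2 - a)
assert abs(var_rm - var_es) < 1e-9
```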
# **We can see that the theoretical variances are about the same.**
# **Performing the exponential smoothing method**
X_ES = exp_mean(z, alpha, z[0])
# **Performing the running mean method**
X_RM = run_mean(z, M, np.mean(z[:round((M-1)/2)]), np.mean(z[-round((M-1)/2):]))
plot(4, [num_points, num_points, num_points, num_points], [X, z, X_ES, X_RM],
legend=['True trajectory', 'Measurements', 'Exponential smoothing', 'Running mean'],
title='Visual comparison of results', xlable='points', ylable='values')
# **Finally, let's determine the variance of the deviation of the smoothed data from the true trajectory.**
# +
RM_err = X_RM - X
ES_err = X_ES - X
RM_var = np.var(RM_err, ddof = 1)
ES_var = np.var(ES_err, ddof = 1)
print("Running mean variance", float("{:.2f}".format(RM_var)))
print("Exp smoothing variance", float("{:.2f}".format(ES_var)))
# -
# ## Conclusion
# Based on visual analysis, the exponential smoothing method tends to shift (lag) relative to the signal, while the running mean method has a much smaller shift error. Computing the variances of both methods confirms this: the running mean performs more accurately here.
# From the first part we also noticed an important point: the variance estimation behind exponential smoothing works better for larger datasets, other parameters being equal.
# In this lab we studied two data-processing methods: the running mean and exponential smoothing. Exponential smoothing has the advantage that the averaging takes all previous points into account with exponentially decaying weights, whereas the running mean only uses the measurements inside its window, so the same window size will suit different signals to different degrees. However, a disadvantage of exponential smoothing is the shift relative to the actual trajectory, a methodical error that can delay or advance the signal; the running mean has almost no such error. By changing the smoothing factor, such a filter can be adapted to different input signals, whether smooth or rapidly changing.
| Assignment2/Assignment2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import pymongo
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# +
conn = 'mongodb://localhost:27017'
client = pymongo.MongoClient(conn)
db = client.australia_fire_db
historicalFires = db.historicalFires.find()
temp_rainfall = db.temp_rainfall.find()
# +
temp_rain_data = []
for data in temp_rainfall:
temp_rain_data.append(data)
temp_rain_data
# +
docs = pd.DataFrame(columns=[])
for num, doc in enumerate( temp_rain_data ):
# convert ObjectId() to str
doc["_id"] = str(doc["_id"])
# get document _id from dict
doc_id = doc["_id"]
# create a Series obj from the MongoDB dict
series_obj = pd.Series( doc, name=doc_id )
# append the MongoDB Series obj to the DataFrame obj
docs = docs.append( series_obj )
temp_rain_df = docs.copy()
temp_rain_df
# -
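# **Aside:** `DataFrame.append` is deprecated in recent pandas; the same frame can be built in one call from the list of documents. A minimal sketch, where the documents below are illustrative rather than real collection contents:

```python
import pandas as pd

def docs_to_frame(records):
    """Build a DataFrame from MongoDB-style documents, indexed by str(_id)."""
    records = [{**doc, "_id": str(doc["_id"])} for doc in records]
    return pd.DataFrame(records).set_index("_id", drop=False)

# Illustrative documents standing in for the MongoDB cursor contents
fake_docs = [{"_id": 1, "Year": 2018, "Avg Annual Rainfall": 450.2},
             {"_id": 2, "Year": 2019, "Avg Annual Rainfall": 401.7}]
df = docs_to_frame(fake_docs)
```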
temp_rain_df = temp_rain_df.astype({"Year": "int"})
temp_rain_df.dtypes
# +
historical_data = []
for data in historicalFires:
historical_data.append(data)
historical_data
# +
docs = pd.DataFrame(columns=[])
for num, doc in enumerate( historical_data ):
# convert ObjectId() to str
doc["_id"] = str(doc["_id"])
# get document _id from dict
doc_id = doc["_id"]
# create a Series obj from the MongoDB dict
series_obj = pd.Series( doc, name=doc_id )
# append the MongoDB Series obj to the DataFrame obj
docs = docs.append( series_obj )
historical_df = docs.copy()
historical_df
# -
historical_df = historical_df.astype({"Year": "int"})
historical_df.dtypes
hist_temp_df = temp_rain_df.merge(historical_df, on ="Year", how="inner")
hist_temp_df
# +
# remove outlier
for areaBurned in hist_temp_df["AreaBurned(ha)"]:
if areaBurned > 10000000:
hist_temp_df = hist_temp_df[hist_temp_df["AreaBurned(ha)"]!=areaBurned]
print(f"removed data point with area burned = {areaBurned}")
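# **Aside:** the same outlier removal can be done in one step with a boolean mask, instead of mutating the frame inside a loop. A minimal sketch on illustrative data:

```python
import pandas as pd

def drop_large_fires(df, threshold=10_000_000):
    """Keep only rows whose burned area does not exceed the threshold."""
    return df[df["AreaBurned(ha)"] <= threshold]

# Illustrative data standing in for the merged dataframe
demo = pd.DataFrame({"AreaBurned(ha)": [5_000, 12_000_000, 300_000]})
filtered = drop_large_fires(demo)
```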
# +
hist_temp_df = hist_temp_df.sort_values("AreaBurned(ha)")
rainfall = hist_temp_df["Avg Annual Rainfall"]
temp = hist_temp_df["Avg Annual Temp"]
x = hist_temp_df["AreaBurned(ha)"]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.xlabel("Area Burned (hectares)")
ax.scatter(x, rainfall, c='b', marker='o')
plt.ylabel("Average Annual Rainfall")
def func(x, a, b, c):
return a * np.log(b + x) + c
popt, pcov = curve_fit(func, x, rainfall)
plt.plot(x, func(x, *popt), 'r-', label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
# scipy.optimize.curve_fit(lambda t,a,b: a+b*numpy.log(t), x, rainfall)
# p = np.polyfit(x, np.log(rainfall), 1)
# p = np.poly1d(np.polyfit(x, rainfall, 2))
# ax2=ax.twinx()
# ax2.scatter(x, temp, c='r', marker='^')
# plt.ylabel("Average Annual Temperature")
# p2 = p = np.poly1d(np.polyfit(x, temp, 3))
# xp = np.linspace(np.min(x), np.max(x), 1000)
# _ = plt.plot(x, rainfall, '.', xp, p(xp), '-')
plt.tight_layout()
plt.legend()
plt.show()
# +
rainfall = hist_temp_df["Avg Annual Rainfall"]
temp = hist_temp_df["Avg Annual Temp"]
x = hist_temp_df["AreaBurned(ha)"]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.xlabel("Area Burned (hectares)")
rain = ax.scatter(x, rainfall, c='b', marker='o', label="Rainfall")
plt.ylabel("Average Annual Rainfall")
ax2=ax.twinx()
temp = ax2.scatter(x, temp, c='r', marker='^', label="Temperature")
plt.ylabel("Average Annual Temperature")
plt.tight_layout()
plt.legend([rain, temp], ("Rainfall", "Temperature"))
plt.savefig("images/rainTempAreaBurned.png")
plt.show()
# -
| aggRainandAreaperYear.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
print(pd.__version__)
datos = pd.read_excel('corruption-data.xls')
datos.head()
# # Instructions
#
# 1. Open the file corruption_dataset.xlsx
# 2. Read the "METHODS" section of the Correa & Jaffe (2015) article, available on the following [web page](https://arxiv.org/pdf/1604.00283.pdf)
# 3. Open a new cell below this one and answer the following questions:
# -3.1. Based on your reading of the "METHODS" section of Correa & Jaffe (2015), which data sources were used to build the database?
#
# -3.2. If you wanted to create a single data table with all the variables from all the data sources, argue which variable you would use and why. Also mention what problems you would encounter along the way when assembling all of that data into a single table.
#
# -3.3. Create a bivariate plot to visualize the relationships between variables, and write an analysis that helps the reader interpret the plot. Select the variables carefully, considering the context of the Correa & Jaffe (2015) article.
#
# -3.4. Create a plot showing the statistical behaviour of one of the analyzed variables.
#
# -3.5. Based on the conclusions of Correa & Jaffe (2015), write your personal opinion on the relevance of understanding the reality of corruption in different countries for management and entrepreneurship purposes (write an answer of at least 200 words).
#
| Taller 9A.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
# +
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Split into train/test sets
x_train, x_test, y_train, y_test = train_test_split(diabetes.data, diabetes.target, test_size=0.2, random_state=4)
# Create a linear regression model
regr = linear_model.LinearRegression()
# Fit the model on the training data
regr.fit(x_train, y_train)
# Run the test data through the model to get predictions
y_pred = regr.predict(x_test)
# -
print(regr.coef_)
# Evaluate the gap between predictions and actual values using MSE
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
# ### LASSO
# +
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Split into train/test sets
x_train, x_test, y_train, y_test = train_test_split(diabetes.data, diabetes.target, test_size=0.2, random_state=4)
# Create a Lasso regression model
lasso = linear_model.Lasso(alpha=1.0)
# Fit the model on the training data
lasso.fit(x_train, y_train)
# Run the test data through the model to get predictions
y_pred = lasso.predict(x_test)
# -
# Print the coefficient of each feature; many coefficients become 0, confirming that Lasso regression can indeed perform feature selection
lasso.coef_
# Evaluate the gap between predictions and actual values using MSE
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
# ### Ridge
# +
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Split into train/test sets
x_train, x_test, y_train, y_test = train_test_split(diabetes.data, diabetes.target, test_size=0.2, random_state=4)
# Create a Ridge regression model
ridge = linear_model.Ridge(alpha=1.0)
# Fit the model on the training data
ridge.fit(x_train, y_train)
# Run the test data through the model to get predictions
y_pred = ridge.predict(x_test)
# -
# Print the Ridge coefficients; compared with plain linear regression, their magnitudes are clearly much smaller
print(ridge.coef_)
# Evaluate the gap between predictions and actual values using MSE
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
# We can see that LASSO and Ridge do not score better than the plain linear regression here.
# This is because the objective function gains a regularization term that keeps the model from becoming too complex, which limits its capacity to fit the data. So if no over-fitting is observed, there is no need to apply strong regularization from the start.
# ## Practice time
# Try other datasets (boston, wine) and adjust different alpha values to observe how model training behaves.
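# A minimal sketch of the exercise (the `boston` dataset was removed from recent scikit-learn, so a synthetic regression problem stands in here; the alpha grid is illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic data standing in for boston/wine
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)

# Sweep alpha and record the test MSE for each fit
results = {}
for alpha in (0.01, 0.1, 1.0, 10.0):
    model = Ridge(alpha=alpha)
    model.fit(x_train, y_train)
    results[alpha] = mean_squared_error(y_test, model.predict(x_test))

for alpha, mse in results.items():
    print(f"alpha={alpha}: MSE={mse:.2f}")
```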
| Day_040_lasso_ridge_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/UCREL/welsh-summarisation-dataset/blob/main/dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="eoVpoaeMGjrJ" outputId="c29b572c-4211-430c-fe0c-7acd3fa7fb21"
# !git clone https://github.com/UCREL/welsh-summarisation-dataset.git
# + id="iTi-PjdIH0y2"
import os
import pickle as pkl
os.chdir('/content/welsh-summarisation-dataset')
# + id="1zhLL7LEIs4s"
with open('./data/dataset.pkl', "rb") as dataset_file:
dataset = pkl.load(dataset_file)
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="b44gJE3UKn-N" outputId="84355d8c-0c99-43c1-f6e3-a4dc9fac6829"
dataset.head()
| dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 1
# ### Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data
# ### Step 2. Download the dataset to your computer and unzip it.
# ### Step 3. Use the tsv file and assign it to a dataframe called food
# +
import pandas as pd
import numpy as np
filepath = r'C:\Users\matthew.zupan\Downloads\world-food-facts\en.openfoodfacts.org.products.tsv'
url_data = pd.read_table(filepath,sep='\t')
url_data
# -
# ### Step 4. See the first 5 entries
url_data.head()
# ### Step 5. What is the number of observations in the dataset?
url_data.shape[0]
# ### Step 6. What is the number of columns in the dataset?
len(url_data.columns.values)
# ### Step 7. Print the name of all the columns.
for col in url_data.columns:
    print(col)
# ### Step 8. What is the name of 105th column?
url_data.columns.values[105]
# ### Step 9. What is the type of the observations of the 105th column?
url_data[url_data.columns[105]].dtype
# ### Step 10. How is the dataset indexed?
url_data.index
# ### Step 11. What is the product name of the 19th observation?
url_data.iloc[19]['product_name']
| 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.5 64-bit (''ds'': venv)'
# name: python37564bitdsvenv7f769053fa4943f085b819bad569cbc2
# ---
# Import 3rd party libraries
import requests
import json
import pandas as pd
from datetime import datetime
# +
# Query variables
url = "https://covid-19-coronavirus-statistics.p.rapidapi.com/v1/stats"
headers = {
'x-rapidapi-host': "covid-19-coronavirus-statistics.p.rapidapi.com",
'x-rapidapi-key': "<KEY>"
}
# + tags=["outputPrepend"]
# Send request
response = requests.request("GET", url, headers = headers)
print(response)
# -
response.json()  # parse the body as JSON; the Response object itself is not subscriptable
# +
import requests
url = "https://covid-19-coronavirus-statistics.p.rapidapi.com/v1/stats"
querystring = {"country":"Canada"}
headers = {
'x-rapidapi-host': "covid-19-coronavirus-statistics.p.rapidapi.com",
'x-rapidapi-key': "<KEY>"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
# -
response.json()["lastChecked"]  # .text is a plain string; parse the JSON before indexing
| Covid-19/Covid-19-Datasets/Rapid-Api/Rapid-Api.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
valor1 =tf.constant(2)
valor2 = tf.constant(3)
type(valor1)
print(valor1)
soma=valor1 + valor2
type(soma)
print(soma)
with tf.Session() as sess:
    s = sess.run(soma)
    print(s)
texto1=tf.constant('Texto 1 ')
texto2=tf.constant('Texto 2')
type(texto1)
print(texto1)
with tf.Session() as sess:
con=sess.run(texto1 + texto2)
print(con)
| Jupyter/CTensorFlow/basico/constantes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotting and typesetting
# The following are functions for plotting and typesetting.
# %pylab inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
font = {'size': 12}
plt.rc('font', **font)
# Function for creating a surface plot.
def plot3d(f,lim=(-5,5),title='Surface plot',detail=0.05,
xlabel='X',ylabel='Y',zlabel='Z',angle=0,line=[]):
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, projection='3d')
if angle != 0:
ax.view_init(30, angle)
xs = ys = np.arange(lim[0],lim[1], detail)
X, Y = np.meshgrid(xs, ys)
zs = np.array([f(x, y) for x, y in zip(np.ravel(X), np.ravel(Y))])
Z = zs.reshape(X.shape)
surf = ax.plot_surface(X, Y, Z, cmap=cm.RdYlGn)
fig.colorbar(surf, shrink=0.5, aspect=5)
ax.set_xlabel(xlabel);ax.set_ylabel(ylabel);ax.set_zlabel(zlabel)
if len(line) > 0:
plt.plot(line[0], line[1], [f(x,y) for x, y in zip(line[0], line[1])],
c='r', lw='2', zorder=3)
plt.title(title)
plt.show()
# Function to create a contour plot.
def plot_contour(f,lim=(-5,5),title='Contour plot',detail=0.05,
xlabel='X',ylabel='Y'):
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
xs = ys = np.arange(lim[0],lim[1], detail)
X, Y = np.meshgrid(xs, ys)
zs = np.array([f(x, y) for x, y in zip(np.ravel(X), np.ravel(Y))])
Z = zs.reshape(X.shape)
CS = ax.contourf(X,Y,Z,16, cmap=cm.RdYlGn)
#plt.clabel(CS, fontsize=9, inline=1)
ax.set_xlabel(xlabel);ax.set_ylabel(ylabel)
return ax
# # Problem
# In this problem we want to find an optimum between the amount of operators and the time that they operate. The operators have a certain productivity at each point in time, which is low at the beginning, then reaches a maximum, and then diminishes.
#
# Each operator has an hourly cost of $\$30$, and produces $3$ units at his maximum efficiency. The sales price per unit is $\$25$.
#
# The operators have a productivity curve that is described with the following function:
#
# $$ p(t) = \dfrac{1}{1.2 + \frac{1}{2}\left(t-2\right)^2} $$
p = lambda t: 1 / (1.2 + 0.5 * (t - 2)**2)
x = np.linspace(0, 10, 128)
y = p(x)
plot(x, y, lw=2, c='r')
grid()
title('Productivity over time')
ylabel('Productivity $p(t)$')
xlabel('Time $t$');
# ## Problem statement
#
# In order to maximize the profit, how many hours should each operator work, and how many operators do we need?
# # Model
# Let an operator be denoted with $w$, the hourly cost with $c$, the produced units at maximum efficiency $u$, and the sales price per unit $s$.
# ## Cumulative productivity
# The total realized productivity for a person at time $t$, is the area under the curve of $p(t)$ from $0$ to $t$. To make this easier, we will create a cumulative function $P(t)$. This is achieved by integrating $p(t)$, which gives:
#
# $$ P(t) = \int_0^t \dfrac{d\tau}{1.2 + 0.5 (\tau - 2)^2} \approx -1.29 \tan^{-1}(1.29 - 0.64t) + 1.175.$$
P = lambda t: -1.29 * np.arctan(1.29 - 0.64*t) + 1.175
x = np.linspace(0, 10, 128)
y = P(x)
plot(x, y, lw=2, c='r')
grid()
title('Cumulative Productivity')
ylabel('Cumulative Productiviy $P(t)$')
xlabel('Time $t$');
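# **Check (illustrative):** the closed form rounds its constants, so we can confirm numerically that $P(t)-P(0)$ matches $\int_0^t p(\tau)\,d\tau$ to within that rounding:

```python
import numpy as np
from scipy.integrate import quad

# p(t) and its rounded closed-form running integral P(t), as defined above
p = lambda t: 1 / (1.2 + 0.5 * (t - 2)**2)
P = lambda t: -1.29 * np.arctan(1.29 - 0.64*t) + 1.175

for t_end in (1.0, 3.0, 5.0, 8.0):
    numeric, _ = quad(p, 0, t_end)                 # high-accuracy quadrature
    assert abs(numeric - (P(t_end) - P(0))) < 0.05  # agrees up to rounding
```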
# ## Total revenue
# The total revenue $R$ consists of how many operators have been working, how long they have been working, and the total units produced at peak efficiency, all while taking the productivity curve into account. To calculate the total revenue, we get:
#
# $$ R(w,t) = usw\cdot P(t)$$
# We know that $s=25$ and $u=3$; substituting gives us $R(w,t) = 75w \cdot P(t)$, which means that one operator generates $\$75$ of revenue per unit of time $t$ at $100\%$ efficiency. Multiplying by the number of operators $w$ gives the total revenue generated by $w$ operators working for time $t$.
# ## Total cost
# The total cost is the number of operators multiplied by their cost per unit of time, and the time they have been working. This gives us:
#
# $$ C(w,t) = cwt $$
#
# We know that $c=30$; substituting gives us $C(w,t)=30wt$.
# ## Profit function
# The profit $T$ is found with $T=R-C$. This means that:
#
# $$ T(w,t) = R(w,t) - C(w,t)$$
#
# If we now substitute in all of the functions we will get the profit function:
#
# $$ T(w,t) = 75w \cdot (-1.29 \tan^{-1}(1.29 - 0.64t) + 1.175) - 30wt.$$
T = lambda w,t: 75 * w * (-1.29*arctan(1.29 - 0.64 * t) + 1.175) - 30*w*t
# # Analysis
# Excellent, we now have a function of two variables. If we create a surface plot of $T$, we can see how it relates $w$ and $t$.
plot3d(T, lim=(0, 20),
title='Total Profit $T(w,t)$',
xlabel='# of Operators',
ylabel='Time $t$',
zlabel='Total Profit',
angle=60)
# If we look at the contour plot, it becomes clear that an operator yields a maximum profit at a certain point $t$. As more operators are added, the profit changes linearly.
plot_contour(T, lim=(0,20), xlabel='# of Operators', ylabel='Time $t$',
title='Contour plot of $T(w,t)$');
plot((1,1), (1,17.5), c='r', lw=3, ls='dashed')
# We want to cut the function parallel to the time axis, as shown in the contour map above (red line). To do this, we will evaluate the function $T(1,t)$, which gives us:
#
# $$ T(1,t) = T(t) = 75(-1.29\tan^{-1}(1.29-0.64t)+1.175)-30t $$
#
# The result is a function of one variable, which is easy to optimize:
T1 = lambda t: 75*(-1.29*arctan(1.29 - 0.64*t)+1.175) - 30*t
x = np.linspace(0, 8, 128)
y = T1(x)
plot(x,y, lw=2, c='r')
grid()
title('Total Profit working for $t$ hours')
ylabel('Total Profit')
xlabel('Time $t$');
# To build intuition, this curve is a cross-section of the 3D plot, which can be visualized:
plot3d(T, lim=(0, 20),
title='Total Profit $T(w,t)$',
xlabel='# of Operators',
ylabel='Time $t$',
zlabel='Total Profit',
angle=60,
line=[ [4] * 24, np.linspace(0,8,24)] )
# To find the optimal value of $t$, we find $\dfrac{d 𝑇(1,𝑡)}{dt} = 0$. Differentiating $T(t)$ gives $\dfrac{61.92}{(1.29-0.64t)^2+1}-30$ which means that $t \approx 3.62745$.
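# **Check (illustrative):** the calculus result can be confirmed numerically by minimizing $-T(1,t)$ over a bracketing interval:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Per-operator profit, as derived above
T1 = lambda t: 75*(-1.29*np.arctan(1.29 - 0.64*t) + 1.175) - 30*t

# Maximize T1 by minimizing its negative on a bracketing interval
res = minimize_scalar(lambda t: -T1(t), bounds=(1, 8), method='bounded')
t_opt = res.x  # ≈ 3.627
```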
# # Conclusion
# The profit is maximized when the operators work for $3.62$ hours. When more operators are added, the profit increases linearly.
plot3d(T, lim=(0, 20),
title='Total Profit $T(w,t)$',
xlabel='# of Operators',
ylabel='Time $t$',
zlabel='Total Profit',
angle=10,
line=[ np.linspace(0,20,24), [3.62]*24] )
| Notebooks/Manufacturing optimization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import DartsApp
from dartsapp.game import x01
from dartsapp.player import Player
from dartsapp.throw import Throw
# ## Create Player
bobbz = Player(name="Bobbz")
# ## Initialize Game
game = x01()
# ### Register Player
game.register_player(bobbz)
# ### Set Options
game.set_options(finish_type=2)
# ## Start Game
game.start()
# ### Throw Dart
game.throw_dart(Throw(20,3))
# ### Print Score
game.print_score()
# ### Throw two more darts
game.throw_dart(Throw(20,1))
game.throw_dart(Throw(1,3))
# ### etc.
| docs/notebooks/how_to_play_x01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from collections import Counter
import math, random
# data splitting
def split_data(data, prob):
# split data into fractions [prob, 1 - prob]
results = [], []
for row in data:
results[0 if random.random() < prob else 1].append(row)
return results
def train_test_split(x, y, test_pct):
data = list(zip(x, y)) # pair corresponding values
train, test = split_data(data, 1 - test_pct) # split the dataset of pairs
x_train, y_train = list(zip(*train)) # magical un-zip trick
x_test, y_test = list(zip(*test))
return x_train, x_test, y_train, y_test
# +
# model = SomeKindOfModel()
# x_train, x_test, y_train, y_test = train_test_split(xs, ys, 0.33)
# model.train(x_train, y_train)
# perfomance = model.test(x_test, y_test)
# +
def accuracy(tp, fp, fn, tn):
correct = tp + tn
total = tp + fp + fn + tn
return correct / total
accuracy(70, 4930, 13930, 981070)
# +
def precision(tp, fp, fn, tn):
return tp / (tp + fp)
precision(70, 4930, 13930, 981070)
# +
def recall(tp, fp, fn, tn):
return tp / (tp + fn)
recall(70, 4930, 13930, 981070)
# +
def f1_score(tp, fp, fn, tn):
p = precision(tp, fp, fn, tn)
r = recall(tp, fp, fn, tn)
return 2 * p * r / (p + r)
f1_score(70, 4930, 13930, 981070)
# -
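# **Check (illustrative):** $F_1$ is the harmonic mean of precision and recall. The metrics are redefined below so the cell runs standalone; with the counts used above, the numbers show how misleading raw accuracy can be on imbalanced data:

```python
# Confusion-matrix metrics, redefined here so this cell is self-contained
def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp, fn, tn):
    return tp / (tp + fp)

def recall(tp, fp, fn, tn):
    return tp / (tp + fn)

def f1_score(tp, fp, fn, tn):
    p, r = precision(tp, fp, fn, tn), recall(tp, fp, fn, tn)
    return 2 * p * r / (p + r)   # harmonic mean of precision and recall

counts = (70, 4930, 13930, 981070)
acc = accuracy(*counts)          # dominated by the many true negatives
p, r = precision(*counts), recall(*counts)
f1 = f1_score(*counts)
```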
| ipynb/11_machine_learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# # Copyright 2020 (c) Cognizant Digital Business, Evolutionary AI. All rights reserved. Issued under the Apache 2.0 License.
# +
import pandas as pd
import numpy as np
from scenario_generator import get_raw_data, generate_scenario, NPI_COLUMNS
# -
# # Scenario generator
# ## Latest data
DATA_FILE = "tests/fixtures/OxCGRT_latest.csv"
latest_df = get_raw_data(DATA_FILE, latest=True)
# # Scenario: historical IP until 2020-09-30
# Latest historical data, truncated to the specified end date
start_date_str = None
end_date_str = "2020-09-30"
countries = None
output_file = "data/2020-09-30_historical_ip.csv"
scenario_df = generate_scenario(start_date_str, end_date_str, latest_df, countries, scenario="Historical")
scenario_df[scenario_df.CountryName == "France"].Date.max()
truncation_date = pd.to_datetime(end_date_str, format='%Y-%m-%d')
scenario_df = scenario_df[scenario_df.Date <= truncation_date]
scenario_df.tail()
scenario_df.to_csv(output_file, index=False)
# # Scenario: frozen NPIs
# Latest historical data + frozen NPIs between the last known date and the end of January 2021 for India and Mexico
start_date_str = "2021-01-01"
end_date_str = "2021-01-31"
countries = ["India", "Mexico"]
scenario_df = generate_scenario(start_date_str, end_date_str, latest_df, countries, scenario="Freeze")
len(scenario_df)
scenario_df.CountryName.unique()
scenario_df.tail()
# ## Save
hist_file_name = "data/future_ip.csv"
hist_file_name
scenario_df.to_csv(hist_file_name, index=False)
| validation/scenario_generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# First let's import the pandas and numpy libraries to handle spreadsheets and dataframes, matplotlib to display basic plots, and Prophet for predictions.
import numpy as np
import pandas as pd
import matplotlib as plt
import matplotlib.pyplot  # loaded so the plotting cells below can use plt.pyplot
import matplotlib.ticker  # loaded so the plotting cells below can use plt.ticker
from fbprophet import Prophet
# Read in the main spreadsheet that we are going to use. This contains ~20 years' worth of real estate data for many US states. We will only be using a small sample of this data for this tutorial.
# +
state_time_series = pd.read_csv(r'dataset/State_time_series.csv')
# Make sure the date is interpreted correctly.
state_time_series.Date = pd.to_datetime(state_time_series.Date)
# Add another column just for year
state_time_series['year']= state_time_series.Date.dt.year
# Display the final few rows
# to get an idea of what our dataset looks like
state_time_series.tail()
# -
# Let's make sure that we are operating on valid data by getting all the states whose Zillow Home Value Index (ZHVI) and median sold price columns are not null.
#
# Then prepare a new spreadsheet with just these valid states in them.
# +
states = set(state_time_series[~state_time_series['ZHVI_AllHomes'].isnull() & ~state_time_series['MedianSoldPrice_AllHomes'].isnull()]['RegionName'].values)
state_time_series_valid = state_time_series[state_time_series['RegionName'].isin(states)].copy()
state_time_series_valid.tail()
# -
# Get the five costliest and five cheapest states
costliest_states = state_time_series_valid[['RegionName', 'ZHVI_AllHomes']].groupby('RegionName').max().sort_values(by=['ZHVI_AllHomes'], ascending=False)[:5].index.values.tolist()
print(costliest_states)
cheapest_states = state_time_series_valid[['RegionName', 'ZHVI_AllHomes']].groupby('RegionName').max().sort_values(by=['ZHVI_AllHomes'], ascending=True)[:5].index.values.tolist()
print(cheapest_states)
costliest_time_series = state_time_series_valid[state_time_series_valid.RegionName.isin(costliest_states)]
costliest_time_series.tail()
cheapest_time_series = state_time_series_valid[state_time_series_valid.RegionName.isin(cheapest_states)]
cheapest_time_series.tail()
costliest_mean_sale_price = costliest_time_series.groupby([costliest_time_series.year, costliest_time_series.RegionName])['ZHVI_AllHomes'].mean().dropna().reset_index(name='MedianSoldPrice_AllHomes')
costliest_mean_sale_price
cheapest_mean_sale_price = cheapest_time_series.groupby([cheapest_time_series.year, cheapest_time_series.RegionName])['ZHVI_AllHomes'].mean().dropna().reset_index(name='MedianSoldPrice_AllHomes')
cheapest_mean_sale_price
# +
costliest_mean_prices_pivot = costliest_mean_sale_price.pivot(index='year', columns='RegionName', values='MedianSoldPrice_AllHomes')
costliest_mean_prices_pivot
# -
fte_graph = costliest_mean_prices_pivot.plot(figsize=(20,10))
plt.pyplot.gca().xaxis.set_major_locator(plt.ticker.MaxNLocator(integer=True))
plt.pyplot.ylabel('Average SoldPrice')
plt.pyplot.xlabel('Year')
# +
cheapest_mean_prices_pivot = cheapest_mean_sale_price.pivot(index='year', columns='RegionName', values='MedianSoldPrice_AllHomes')
cheapest_mean_prices_pivot
# -
fte_graph = cheapest_mean_prices_pivot.plot(figsize=(20,10))
plt.pyplot.gca().xaxis.set_major_locator(plt.ticker.MaxNLocator(integer=True))
plt.pyplot.ylabel('Average SoldPrice')
plt.pyplot.xlabel('Year')
# Let's prepare a dataframe to predict California prices using Facebook Prophet.
cal_df = state_time_series[state_time_series.RegionName.str.contains('California')]
cal_df_median_prices = cal_df[['Date','RegionName', 'MedianSoldPrice_AllHomes']].dropna()
cal_df_for_prophet = cal_df_median_prices[['Date','MedianSoldPrice_AllHomes']]
cal_df_for_prophet
# Rename our columns per Prophet's requirements (`ds` for dates, `y` for values)
cal_df_for_prophet = cal_df_for_prophet.rename(columns={"Date":"ds", "MedianSoldPrice_AllHomes":"y"})
cal_df_for_prophet
m = Prophet()
m.fit(cal_df_for_prophet)
future = m.make_future_dataframe(periods=50, freq='M')
future.tail()
forecast = m.predict(future)
forecast.tail()
m.plot(forecast)
m.plot_components(forecast)
| real_estate_predictions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multivariate Kernel Density Estimator
#
# This example compares the statsmodels multivariate KDE with a ROC GPU version
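Both implementations compute the same quantity: a product-Gaussian kernel density estimate. As a reference for what is being benchmarked, here is a minimal NumPy version (using Scott's-rule bandwidths as an illustration, not the package's `approx_bandwidth`):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))                   # samples, shape (n, d)
h = x.std(axis=0) * len(x) ** (-1.0 / (2 + 4))  # Scott's rule for d = 2

def kde_pdf(query, samples, bw):
    # Mean of product Gaussian kernels over all samples.
    u = (query[:, None, :] - samples[None, :, :]) / bw   # (m, n, d)
    k = np.exp(-0.5 * u**2) / (bw * np.sqrt(2 * np.pi))  # per-dimension kernels
    return k.prod(axis=2).mean(axis=1)                   # (m,)

grid = np.stack(np.meshgrid(np.linspace(-8, 8, 81),
                            np.linspace(-8, 8, 81)), axis=-1).reshape(-1, 2)
pdf = kde_pdf(grid, x, h)
```

The Riemann sum of `pdf` over the grid should be close to 1, which is a quick sanity check on any KDE implementation.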
# %cd -q ../..
import numpy as np
from timeit import default_timer as timer
# Import for statsmodels KDE
import statsmodels.api as sm
# Import Bokeh for plotting
from bokeh.plotting import figure, show, output_notebook
from bokeh.models.layouts import Column
from bokeh import palettes
# Import our custom ROC KDE implementation
from numba_roc_examples.kerneldensity.roc_imp import approx_bandwidth, build_support_nd, roc_multi_kde, calc_rms
from numba_roc_examples.kerneldensity.plotting import RGBColorMapper
output_notebook()
# ## Prepare sample data
# +
size = 200 # Configurable for different sample size
samples = np.squeeze(np.dstack([np.random.normal(size=size),
np.random.normal(size=size)]))
bwlist = [approx_bandwidth(samples[:, k])
for k in range(samples.shape[1])]
bandwidths = np.array(bwlist)
support = build_support_nd(samples, bandwidths)
print("Samples byte size:", samples.nbytes)
print("Support byte size:", support.nbytes)
# -
# ## Run Statsmodels KDE
kde = sm.nonparametric.KDEMultivariate(samples, var_type='cc', bw=bwlist)
pdf_sm = kde.pdf(support)
# ### Plot
#
# Plot the density
# +
N = int(np.sqrt(pdf_sm.size))
normed = pdf_sm/np.ptp(pdf_sm)
cm = RGBColorMapper(0, 1, palettes.Spectral11)
img = cm.color_rgba(normed).view(dtype=np.uint32).reshape(N, N)
x0 = support[0, 0]
y0 = support[0, 1]
x1 = support[-1, 0]
y1 = support[-1, 1]
dw = x1 - x0
dh = y1 - y0
fig_sm = figure(title='KDE-statsmodel', x_range=[x0, x1], y_range=[y0, y1])
fig_sm.image_rgba(image=[img], x=[x0], y=[y0], dw=[dw], dh=[dh])
show(fig_sm)
# -
# ## Run ROC GPU KDE
pdf_roc = np.zeros(support.shape[0], dtype=np.float64)
roc_multi_kde(support, samples, bandwidths, pdf_roc)
# ### Plot
# Plot the density
# +
N = int(np.sqrt(pdf_roc.size))
normed = pdf_roc/np.ptp(pdf_roc)
cm = RGBColorMapper(0, 1, palettes.Spectral11)
img = cm.color_rgba(normed).view(dtype=np.uint32).reshape(N, N)
x0 = support[0, 0]
y0 = support[0, 1]
x1 = support[-1, 0]
y1 = support[-1, 1]
dw = x1 - x0
dh = y1 - y0
fig_roc = figure(title='KDE-numba-roc', x_range=[x0, x1], y_range=[y0, y1])
fig_roc.image_rgba(image=[img], x=[x0], y=[y0], dw=[dw], dh=[dh])
show(fig_roc)
# -
# ## Benchmark
#
# Test the two versions for different sample sizes.
#
# The `driver()` function will run both versions of KDE over the same random data and support. The results are compared
# and the execution time of each version is returned.
def driver(size):
# Prepare samples
samples = np.squeeze(np.dstack([np.random.normal(size=size),
np.random.normal(size=size)]))
bwlist = [approx_bandwidth(samples[:, k])
for k in range(samples.shape[1])]
bandwidths = np.array(bwlist)
support = build_support_nd(samples, bandwidths)
# Statsmodel
kde = sm.nonparametric.KDEMultivariate(samples, var_type='cc', bw=bwlist)
ts = timer()
pdf_sm = kde.pdf(support)
time_sm = timer() - ts
# ROC GPU
pdf_roc = np.zeros(support.shape[0], dtype=np.float64)
ts = timer()
roc_multi_kde(support, samples, bandwidths, pdf_roc)
time_roc = timer() - ts
# Check error
assert calc_rms(pdf_sm, pdf_roc, norm=True) < 1e-4
return time_sm, time_roc
# We will run `driver()` over 5 different sizes to measure how each implementation scales.
sample_sizes = [100, 200, 300, 400, 500]
sm_timings = []
roc_timings = []
for sz in sample_sizes:
time_sm, time_roc = driver(sz)
sm_timings.append(time_sm)
roc_timings.append(time_roc)
# The following plots the measured data.
# +
fig = figure(title='Execution Time')
fig.xaxis.axis_label = 'sample size'
fig.yaxis.axis_label = 'execution time (seconds)'
fig.line(sample_sizes, sm_timings, color='blue', legend='statsmodels')
fig.line(sample_sizes, roc_timings, color='red', legend='ROC-GPU')
fig2 = figure(title='Speedup')
fig2.xaxis.axis_label = 'sample size'
fig2.yaxis.axis_label = 'speedup (ROC over Statsmodels)'
fig2.line(sample_sizes, np.array(sm_timings)/np.array(roc_timings))
show(Column(fig, fig2))
# -
| numba_roc_examples/kerneldensity/multi_variate_kde_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (ICML2020)
# language: python
# name: icml2020
# ---
# +
from __future__ import print_function
import collections
import time
import import_ipynb
from config import *
# Import Python wrapper for or-tools CP-SAT solver.
from ortools.sat.python import cp_model
from IPython.core.debugger import set_trace
# +
def placement_ortools_multiobjective(config, NB_JOBS, NB_MACHINES, MACHINES, DURATION):
"""Minimal jobshop problem."""
# Create the model.
model = cp_model.CpModel()
jobs_data = [[(xx,yy) for xx,yy in zip(x,y)] for x, y in zip(MACHINES, DURATION)]
duration_machine = [0 for _ in range(config.num_machines)]
for job, tasks in enumerate(MACHINES):
for i, task in enumerate(tasks):
duration_machine[task] += DURATION[job][i]
"""
jobs_data = [ # task = (machine_id, processing_time).
[(0, 3), (1, 2), (2, 2)], # Job0
[(0, 2), (2, 1), (1, 4)], # Job1
[(1, 4), (2, 3)] # Job2
]
"""
machines_count = 1 + max(task[0] for job in jobs_data for task in job)
all_machines = range(machines_count)
min_start = []
max_end = []
span = []
sum_start = []
sum_end = []
gaps = []
for job_id, job in enumerate(jobs_data):
min_start.append(model.NewIntVar(0, 1000, "min_start[%i]" % (job_id)))
max_end.append(model.NewIntVar(0, 1000, "max_end[%i]" % (job_id)))
span.append(model.NewIntVar(0, 1000, "span[%i]" % (job_id)))
gaps.append(model.NewIntVar(0, 1000, "gaps[%i]" % (job_id)))
# Computes horizon dynamically as the sum of all durations.
horizon = sum(task[1] for job in jobs_data for task in job)
# Named tuple to store information about created variables.
task_type = collections.namedtuple('task_type', 'start end interval')
# Named tuple to manipulate solution information.
assigned_task_type = collections.namedtuple('assigned_task_type',
'start job index duration')
# Creates job intervals and add to the corresponding machine lists.
all_tasks = {}
machine_to_intervals = collections.defaultdict(list)
machine_jobs = collections.defaultdict(list)
for job_id, job in enumerate(jobs_data):
for task_id, task in enumerate(job):
machine = task[0]
duration = task[1]
suffix = '_%i_%i' % (job_id, task_id)
start_var = model.NewIntVar(0, horizon, 'start' + suffix)
end_var = model.NewIntVar(0, horizon, 'end' + suffix)
interval_var = model.NewIntervalVar(start_var, duration, end_var,
'interval' + suffix)
all_tasks[job_id, task_id] = task_type(
start=start_var, end=end_var, interval=interval_var)
machine_to_intervals[machine].append(interval_var)
machine_jobs[machine].append(all_tasks[job_id, task_id])
# Create and add disjunctive constraints.
for machine in all_machines:
model.AddNoOverlap(machine_to_intervals[machine])
# Precedences inside a job.
for job_id, job in enumerate(jobs_data):
for task_id in range(len(job) - 1):
model.Add(all_tasks[job_id, task_id +
1].start >= all_tasks[job_id, task_id].end)
for job_id, job in enumerate(jobs_data):
model.AddMinEquality(min_start[job_id], [machine_jobs[job_id][task_id].start for task_id in range(len(job))])
model.AddMaxEquality(max_end[job_id], [machine_jobs[job_id][task_id].end for task_id in range(len(job))])
model.Add( span[job_id] == max_end[job_id] - min_start[job_id])
model.Add( gaps[job_id] == span[job_id] - duration_machine[job_id])
# Makespan objective.
obj_var = model.NewIntVar(0, horizon, 'makespan')
model.AddMaxEquality(obj_var, [
all_tasks[job_id, len(job) - 1].end
for job_id, job in enumerate(jobs_data)
])
#model.Minimize(obj_var + tot_gaps)
model.Minimize(obj_var + sum( gaps[job_id] for job_id, job in enumerate(jobs_data) ))
# Solve model.
solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = config.timeout
status = solver.Solve(model)
placements = [[] for m in range(NB_MACHINES)]
if status == cp_model.OPTIMAL:
# Create one list of assigned tasks per machine.
assigned_jobs = collections.defaultdict(list)
for job_id, job in enumerate(jobs_data):
for task_id, task in enumerate(job):
machine = task[0]
assigned_jobs[machine].append(
assigned_task_type(
start=solver.Value(all_tasks[job_id, task_id].start),
job=job_id,
index=task_id,
duration=task[1]))
# Create per machine output lines.
for machine in all_machines:
# Sort by starting time.
assigned_jobs[machine].sort()
for assigned_task in assigned_jobs[machine]:
placements[machine].append([assigned_task.job, assigned_task.index, assigned_task.start, assigned_task.duration])
# Finally print the solution found.
#print('Optimal Schedule Length: %i' % solver.ObjectiveValue())
return placements, solver.ObjectiveValue()
# -
if __name__ == "__main__":
config = Config()
config.machine_profile = "xsmall_default"
config.job_profile = "xsmall_default"
config.reconfigure()
# Read the input data file.
# Available files are jobshop_ft06, jobshop_ft10 and jobshop_ft20
# First line contains the number of jobs, and the number of machines.
# The rest of the file consists of one line per job.
# Each line contains list of operations, each one given by 2 numbers: machine and duration
filename = "datasets/inference/dataset_xsmall.data"
with open(filename, "r") as file:
NB_JOBS, NB_MACHINES = [int(v) for v in file.readline().split()]
JOBS = [[int(v) for v in file.readline().split()] for i in range(NB_JOBS)]
#-----------------------------------------------------------------------------
# Prepare the data for modeling
#-----------------------------------------------------------------------------
# Build list of machines. MACHINES[j][s] = id of the machine for the operation s of the job j
MACHINES = [[JOBS[j][2 * s] for s in range(NB_MACHINES)] for j in range(NB_JOBS)]
# Build list of durations. DURATION[j][s] = duration of the operation s of the job j
DURATION = [[JOBS[j][2 * s + 1] for s in range(NB_MACHINES)] for j in range(NB_JOBS)]
start = time.time()
placements, makespan = placement_ortools_multiobjective(config, NB_JOBS, NB_MACHINES, MACHINES, DURATION)
end = time.time()
print("time: ", end - start)
print("makespan: ", makespan)
# +
#print(placements)
# -
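The data file format described in the comments above (a header line with the job and machine counts, then one line per job of machine–duration pairs) can be generated and parsed in a few lines. A sketch using the toy instance from the docstring, padded to a rectangular shape since the parser assumes `NB_MACHINES` operations per job:

```python
# Toy rectangular instance: 2 jobs x 3 machines, (machine, duration) pairs
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)]]

lines = ["%d %d" % (len(jobs), len(jobs[0]))]
for job in jobs:
    lines.append(" ".join("%d %d" % (m, d) for m, d in job))
text = "\n".join(lines)

# Parse it back the same way the notebook does
rows = text.splitlines()
nb_jobs, nb_machines = [int(v) for v in rows[0].split()]
JOBS = [[int(v) for v in rows[1 + i].split()] for i in range(nb_jobs)]
MACHINES = [[JOBS[j][2 * s] for s in range(nb_machines)] for j in range(nb_jobs)]
DURATION = [[JOBS[j][2 * s + 1] for s in range(nb_machines)] for j in range(nb_jobs)]
```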
| placement_ortools_multiobjective.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Minimum Eigen Optimizer
#
# ## Introduction
# An interesting class of optimization problems to be addressed by quantum computing are Quadratic Unconstrained Binary Optimization (QUBO) problems.
# QUBO is equivalent to finding the ground state of a corresponding Hamiltonian, which is an important problem not only in optimization, but also in quantum chemistry and physics. For this translation, the binary variables taking values in $\{0, 1\}$ are replaced by spin variables taking values in $\{-1, +1\}$, which allows the resulting spin variables to be replaced by Pauli Z matrices, and thus, a Hamiltonian. For more details on this mapping we refer to [1].
#
# Qiskit provides automatic conversion from a suitable `QuadraticProgram` to an Ising Hamiltonian, which then allows leveraging all the `MinimumEigensolver` implementations available in Qiskit Aqua, such as
# - `VQE`,
# - `QAOA`, or
# - `NumpyMinimumEigensolver` (classical exact method).
#
# Qiskit wraps the translation to an Ising Hamiltonian (in Qiskit Aqua also called `Operator`), the call to a `MinimumEigensolver`, as well as the translation of the results back to an `OptimizationResult`, in the `MinimumEigenOptimizer`.
#
# In the following we first illustrate the conversion from a `QuadraticProgram` to an `Operator` and then show how to use the `MinimumEigenOptimizer` with different `MinimumEigensolver` to solve a given `QuadraticProgram`.
# The algorithms in Qiskit automatically try to convert a given problem to the supported problem class if possible; for instance, the `MinimumEigenOptimizer` will automatically translate integer variables to binary variables or add linear equality constraints as quadratic penalty terms to the objective. It should be mentioned that Aqua will throw a `QiskitOptimizationError` if the conversion of a quadratic program with integer variables is not possible.
#
# The circuit depth of `QAOA` potentially has to be increased with the problem size, which might be prohibitive for near-term quantum devices.
# A possible workaround is Recursive QAOA, as introduced in [2].
# Qiskit generalizes this concept to the `RecursiveMinimumEigenOptimizer`, which is introduced at the end of this tutorial.
#
# ### References
# [1] [<NAME>, *Ising formulations of many NP problems,* Front. Phys., 12 (2014).](https://arxiv.org/abs/1302.5843)
#
# [2] [<NAME>, <NAME>, <NAME>, <NAME>, *Obstacles to State Preparation and Variational Optimization from Symmetry Protection,* arXiv preprint arXiv:1910.08980 (2019).](https://arxiv.org/abs/1910.08980)
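The binary-to-spin substitution described above is $x = (1 - s)/2$, so that $x = 0 \mapsto s = +1$ and $x = 1 \mapsto s = -1$. A quick numerical sanity check of this identity for a linear and a quadratic QUBO term (illustrative only, not Qiskit's converter):

```python
# For f(x) = a*x, substituting x = (1 - s)/2 gives f = a/2 - (a/2)*s.
a = 3.0
for x in (0, 1):
    s = 1 - 2 * x  # x=0 -> s=+1, x=1 -> s=-1
    assert a * x == a / 2 - (a / 2) * s

# For a quadratic term b*x1*x2, the rewrite is (b/4)*(1 - s1)*(1 - s2).
b = -2.0
for x1 in (0, 1):
    for x2 in (0, 1):
        s1, s2 = 1 - 2 * x1, 1 - 2 * x2
        assert b * x1 * x2 == (b / 4) * (1 - s1) * (1 - s2)
```

Expanding these spin polynomials and collecting terms is exactly what produces the Ising coefficients and the constant offset returned by the converter below.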
# ## Converting a QUBO to an Operator
from qiskit import BasicAer
from qiskit.aqua.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit.optimization.algorithms import MinimumEigenOptimizer, RecursiveMinimumEigenOptimizer
from qiskit.optimization import QuadraticProgram
from qiskit.optimization.converters import QuadraticProgramToIsing
# create a QUBO
qubo = QuadraticProgram()
qubo.binary_var('x')
qubo.binary_var('y')
qubo.binary_var('z')
qubo.minimize(linear=[1,-2,3], quadratic={('x', 'y'): 1, ('x', 'z'): -1, ('y', 'z'): 2})
print(qubo.export_as_lp_string())
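Before handing this problem to a quantum solver, note that with only 3 binary variables we can brute-force it in $2^3 = 8$ evaluations, which gives the optimum the solvers below should reproduce:

```python
from itertools import product

def qubo_value(x, y, z):
    # Same objective as above: linear=[1, -2, 3], quadratic={xy: 1, xz: -1, yz: 2}
    return 1 * x - 2 * y + 3 * z + 1 * x * y - 1 * x * z + 2 * y * z

best = min(product([0, 1], repeat=3), key=lambda v: qubo_value(*v))
print(best, qubo_value(*best))  # (0, 1, 0) with objective value -2
```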
# Next we translate this QUBO into an Ising operator. This results not only in an `Operator` but also in a constant offset to be taken into account to shift the resulting value.
qp2op = QuadraticProgramToIsing()
op, offset = qp2op.encode(qubo)
print('offset: {}'.format(offset))
print('operator:')
print(op.print_details())
# Sometimes a `QuadraticProgram` might also be given directly in the form of an `Operator`. For such cases, Qiskit also provides a converter from an `Operator` back to a `QuadraticProgram`, which we illustrate in the following.
from qiskit.optimization.converters import IsingToQuadraticProgram
op2qp = IsingToQuadraticProgram()
print(op2qp.encode(op, offset).export_as_lp_string())
# This converter allows, for instance, translating an `Operator` to a `QuadraticProgram` and then solving the problem with other algorithms that are not based on the Ising Hamiltonian representation, such as the `GroverOptimizer`.
# ## Solving a QUBO with the `MinimumEigenOptimizer`
# We start by initializing the `MinimumEigensolver` we want to use.
qaoa_mes = QAOA(quantum_instance=BasicAer.get_backend('statevector_simulator'))
exact_mes = NumPyMinimumEigensolver()
# Then, we use the `MinimumEigensolver` to create `MinimumEigenOptimizer`.
qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA
exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver
# We first use the `MinimumEigenOptimizer` based on the classical exact `NumPyMinimumEigensolver` to get the optimal benchmark solution for this small example.
exact_result = exact.solve(qubo)
print(exact_result)
# Next we apply the `MinimumEigenOptimizer` based on `QAOA` to the same problem.
qaoa_result = qaoa.solve(qubo)
print(qaoa_result)
# ## `RecursiveMinimumEigenOptimizer`
# The `RecursiveMinimumEigenOptimizer` takes a `MinimumEigenOptimizer` as input and applies the recursive optimization scheme to reduce the size of the problem one variable at a time.
# Once the size of the generated intermediate problem is below a given threshold (`min...`), the `RecursiveMinimumEigenOptimizer` uses another solver (`...`), e.g., an exact classical solver such as CPLEX or the `MinimumEigenOptimizer` based on the `NumPyMinimumEigensolver`.
#
# In the following, we show how to use the `RecursiveMinimumEigenOptimizer` using the two `MinimumEigenOptimizer` introduced before.
# First, we construct the `RecursiveMinimumEigenOptimizer` such that it reduces the problem size from 3 variables to 1 variable and then uses the exact solver for the last variable. Then we call `solve` to optimize the considered problem.
rqaoa = RecursiveMinimumEigenOptimizer(min_eigen_optimizer=qaoa, min_num_vars=1, min_num_vars_optimizer=exact)
rqaoa_result = rqaoa.solve(qubo)
print(rqaoa_result)
import qiskit.tools.jupyter
# %qiskit_version_table
# %qiskit_copyright
| tutorials/optimization/3_minimum_eigen_optimizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# language: python
# name: python38364bitbasecondadbbbc0d6687a4c64a122e31974242c3a
# ---
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import random
random.seed(10)
np.random.seed(0)
V = 4039
T = 1000*V
init_node_1 = np.random.randint(0,V)
init_node_2 = np.random.randint(0,V)
nodes = list(range(V)) # Get a list of only the node names
edges = np.loadtxt('facebook_combined.txt',dtype=int)
G = nx.Graph()
G.add_nodes_from(nodes)
G.add_edges_from(edges)
nx.set_node_attributes(G, 0,'visits')
print(nx.info(G))
z = list(G.degree([n for n in G]))
y = [y[1] for y in z]
b = np.linspace(0,max(y))
plt.hist(y,bins=b)
plt.show()
H1 = G.copy()
H2 = G.copy()
Hv1 = G.copy()
Hv2 = G.copy()
pi = np.array([x[1] for x in list(G.degree())])
pi = pi/np.sum(pi)
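`pi` above is the stationary distribution of a simple random walk on a connected undirected graph, which is proportional to node degree ($\pi_i = \deg(i)/2|E|$) — that is why it serves as the ground truth for the visit frequencies below. A quick check of this fact on a toy graph:

```python
import numpy as np

# Path graph 0-1-2: degrees (1, 2, 1)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
P = A / deg[:, None]      # row-stochastic transition matrix of the walk
pi_deg = deg / deg.sum()  # degree-proportional distribution

# Stationarity: pi P = pi
assert np.allclose(pi_deg @ P, pi_deg)
```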
# +
t = 0
v1 = init_node_1
v2 = init_node_2
fv1 = np.empty(V)
fv2 = np.empty(V)
f1 = np.empty(V)
f2 = np.empty(V)
err_v1 = []
err_v2 = []
err_vm = []
err_1 = []
err_2 = []
err_m = []
explore = np.arange(1,T,T//(100*np.log(T))+1)
tracker = 0
print("Init Nodes - ",init_node_1,init_node_2)
while t < T:
if t in explore:
v_n1 = random.choice(list(H1.adj[v2]))
v_n2 = random.choice(list(H2.adj[v1]))
tracker = (tracker + 1)%2
else:
v_n1 = random.choice(list(H1.adj[v1]))
v_n2 = random.choice(list(H2.adj[v2]))
if tracker == 0:
v_nv1 = v_n1
v_nv2 = v_n2
else:
v_nv1 = v_n2
v_nv2 = v_n1
Hv1.nodes[v_nv1]['visits'] += 1
Hv2.nodes[v_nv2]['visits'] += 1
H1.nodes[v_n1]['visits'] += 1
H2.nodes[v_n2]['visits'] += 1
v1 = v_n1
v2 = v_n2
t += 1
if t%(V//10) == 0:
for i in range(V):
fv1[i] = Hv1.nodes[i]['visits']
fv2[i] = Hv2.nodes[i]['visits']
f1[i] = H1.nodes[i]['visits']
f2[i] = H2.nodes[i]['visits']
pi_v1 = fv1/np.sum(fv1)
pi_v2 = fv2/np.sum(fv2)
pi_1 = f1/np.sum(f1)
pi_2 = f2/np.sum(f2)
err_v1.append(np.mean(abs(pi-pi_v1)))
err_v2.append(np.mean(abs(pi-pi_v2)))
err_vm.append(np.mean(abs(pi-(pi_v1+pi_v2)/2)))
err_1.append(np.mean(abs(pi-pi_1)))
err_2.append(np.mean(abs(pi-pi_2)))
err_m.append(np.mean(abs(pi-(pi_1+pi_2)/2)))
# +
plt.figure(1,figsize=(8,5))
#plt.plot(np.array(list(range(len(err_v1))))*(V//10),np.log(err_1),color='blue')
#plt.plot(np.array(list(range(len(err_v1))))*(V//10),np.log(err_2),color='red')
plt.plot(np.array(list(range(len(err_v1))))*(V//10),np.log(err_m),color='green')
#plt.plot(np.array(list(range(len(err_v1))))*(V//10),np.log(err_v1),color='black')
#plt.plot(np.array(list(range(len(err_v1))))*(V//10),np.log(err_v2),color='black')
#plt.plot(np.array(list(range(len(err_v1))))*(V//10),np.log(err_vm),color='black')
plt.grid()
# +
n_threads = len(explore)
G = nx.Graph()
G.add_nodes_from(nodes)
G.add_edges_from(edges)
nx.set_node_attributes(G, [0]*n_threads,'visits')
H = G.copy()
# +
t = 0
init_nodes = np
fv1 = np.empty(V)
fv2 = np.empty(V)
f1 = np.empty(V)
f2 = np.empty(V)
err_v1 = []
err_v2 = []
err_vm = []
err_1 = []
err_2 = []
err_m = []
explore = np.arange(1,T,T//(100*np.log(T))+1)
tracker = 0
print("Init Nodes - ",init_node_1,init_node_2)
while t < T:
if t in explore:
v_n1 = random.choice(list(H1.adj[v2]))
v_n2 = random.choice(list(H2.adj[v1]))
tracker = (tracker + 1)%2
else:
v_n1 = random.choice(list(H1.adj[v1]))
v_n2 = random.choice(list(H2.adj[v2]))
if tracker == 0:
v_nv1 = v_n1
v_nv2 = v_n2
else:
v_nv1 = v_n2
v_nv2 = v_n1
Hv1.nodes[v_nv1]['visits'] += 1
Hv2.nodes[v_nv2]['visits'] += 1
H1.nodes[v_n1]['visits'] += 1
H2.nodes[v_n2]['visits'] += 1
v1 = v_n1
v2 = v_n2
t += 1
if t%(V//10) == 0:
for i in range(V):
fv1[i] = Hv1.nodes[i]['visits']
fv2[i] = Hv2.nodes[i]['visits']
f1[i] = H1.nodes[i]['visits']
f2[i] = H2.nodes[i]['visits']
pi_v1 = fv1/np.sum(fv1)
pi_v2 = fv2/np.sum(fv2)
pi_1 = f1/np.sum(f1)
pi_2 = f2/np.sum(f2)
err_v1.append(np.mean(abs(pi-pi_v1)))
err_v2.append(np.mean(abs(pi-pi_v2)))
err_vm.append(np.mean(abs(pi-(pi_v1+pi_v2)/2)))
err_1.append(np.mean(abs(pi-pi_1)))
err_2.append(np.mean(abs(pi-pi_2)))
err_m.append(np.mean(abs(pi-(pi_1+pi_2)/2)))
# -
| parallel-particle-jump.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''arc_env'': conda)'
# language: python
# name: python37664bitarcenvconda83c4abf9215d4a698ce68e2a44e6e6bc
# ---
# # A Demo of using RDKitMol as intermediate to generate TS by TS-GCN
#
# A demo showing how RDKitMol can connect RMG and TS-GCN to help predict TS geometry. TS-GCN requires the same atom ordering for the reactant and the product, which is seldom available in practice. RDKitMol + RMG provide a way to match reactant and product atom indices according to the RMG reaction family. <br>
#
# Some code is adapted from https://github.com/ReactionMechanismGenerator/TS-GCN
#
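Once an atom map between reactant and product is known, applying it is just an index permutation of the coordinate array. An illustrative NumPy sketch (the mapping itself is what RMG's reaction-family matching provides; the coordinates and map below are made up):

```python
import numpy as np

# Hypothetical product coordinates, one row per atom
coords = np.array([[0.0, 0.0, 0.0],   # atom 0
                   [1.0, 0.0, 0.0],   # atom 1
                   [0.0, 1.0, 0.0]])  # atom 2

# atom_map[i] = product-atom index that corresponds to reactant atom i
atom_map = [2, 0, 1]
reordered = coords[atom_map]  # rows now follow the reactant's atom ordering
```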
# +
import os
import sys
import subprocess
# To add RDMC to your PYTHONPATH in case you haven't done so
sys.path.append(os.path.dirname(os.path.abspath('')))
from rdmc.mol import RDKitMol
from rdmc.view import grid_viewer, mol_viewer
# Import RMG dependencies
try:
from rdmc.external.rmg import (from_rdkit_mol,
load_rmg_database,
generate_product_complex,)
except (ImportError, ModuleNotFoundError):
print('You need to install RMG-Py first and run this IPYNB in rmg_env!')
def parse_xyz_or_smiles(identifier, **kwargs):
"A helper function to allow both xyz and smiles as input to generate molecule."
try:
return RDKitMol.FromXYZ(identifier, **kwargs)
except:
mol = RDKitMol.FromSmiles(identifier,)
mol.EmbedConformer()
return mol
# %load_ext autoreload
# %autoreload 2
# -
# Load RMG database
database = load_rmg_database()
# [INPUT] Set TS-GCN package
TS_GCN_PYTHON = '~/Apps/anaconda3/envs/ts_gen_v2/bin/python3.7'
TS_GCN_DIR = '~/Apps/ts_gen_v2'
# ### 1. Input molecule information
# Perceive xyz and generate RMG molecule.
# - **Please always define the single-species end of the reaction as the reactant.**
# - **Preferably put the heavier product first in the list.**
#
# Here, some examples are provided
# #### 1.1: Intra H migration (A = B)
# +
reactant = """C -1.528265 0.117903 -0.48245
C -0.214051 0.632333 0.11045
C 0.185971 2.010727 -0.392941
O 0.428964 2.005838 -1.836634
O 1.53499 1.354342 -2.136876
H -1.470265 0.057863 -1.571456
H -1.761158 -0.879955 -0.103809
H -2.364396 0.775879 -0.226557
H -0.285989 0.690961 1.202293
H 0.605557 -0.056315 -0.113934
H -0.613001 2.746243 -0.275209
H 1.100271 2.372681 0.080302"""
products = ["""C 1.765475 -0.57351 -0.068971
H 1.474015 -1.391926 -0.715328
H 2.791718 -0.529486 0.272883
C 0.741534 0.368416 0.460793
C -0.510358 0.471107 -0.412585
O -1.168692 -0.776861 -0.612765
O -1.768685 -1.15259 0.660846
H 1.164505 1.37408 0.583524
H 0.417329 0.069625 1.470788
H -1.221189 1.194071 0.001131
H -0.254525 0.771835 -1.433299
H -1.297409 -1.977953 0.837367"""]
# -
# #### 1.2: Intra_R_Add_Endocyclic (A = B)
# +
reactant = """C -1.280629 1.685312 0.071717
C -0.442676 0.4472 -0.138756
C 0.649852 0.459775 -0.911627
C 1.664686 -0.612881 -1.217378
O 1.590475 -1.810904 -0.470776
C -0.908344 -0.766035 0.616935
O -0.479496 -0.70883 2.04303
O 0.804383 -0.936239 2.193929
H -1.330008 1.940487 1.13602
H -0.87426 2.544611 -0.46389
H -2.311393 1.527834 -0.265852
H 0.884957 1.398914 -1.412655
H 2.661334 -0.151824 -1.125202
H 1.56564 -0.901818 -2.270488
H 1.630132 -1.574551 0.469563
H -0.531309 -1.699031 0.2105
H -1.994785 -0.790993 0.711395"""
products = ["""C -1.515438 1.173583 -0.148858
C -0.776842 -0.102045 0.027824
C 0.680366 -0.300896 -0.240616
O 1.080339 -1.344575 0.660508
O -0.122211 -2.188293 0.768145
C -1.192654 -1.233281 0.917593
C -1.377606 -0.848982 2.395301
O -0.302953 -0.072705 2.896143
H -2.596401 1.013314 -0.200053
H -1.327563 1.859316 0.692798
H -1.211486 1.693094 -1.062486
H 0.888934 -0.598866 -1.280033
H 1.294351 0.57113 0.013413
H -2.08787 -1.759118 0.559676
H -1.514675 -1.774461 2.97179
H -2.282313 -0.243469 2.505554
H 0.511127 -0.541653 2.673033"""]
# -
# #### 1.3: ketoenol (A = B)
# +
reactant = """O 0.898799 1.722422 0.70012
C 0.293754 -0.475947 -0.083092
C -1.182804 -0.101736 -0.000207
C 1.238805 0.627529 0.330521
H 0.527921 -1.348663 0.542462
H 0.58037 -0.777872 -1.100185
H -1.45745 0.17725 1.018899
H -1.813437 -0.937615 -0.310796
H -1.404454 0.753989 -0.640868
H 2.318497 0.360641 0.272256"""
products = ["""O 2.136128 0.058786 -0.999372
C -1.347448 0.039725 0.510465
C 0.116046 -0.220125 0.294405
C 0.810093 0.253091 -0.73937
H -1.530204 0.552623 1.461378
H -1.761309 0.662825 -0.286624
H -1.923334 -0.892154 0.536088
H 0.627132 -0.833978 1.035748
H 0.359144 0.869454 -1.510183
H 2.513751 -0.490247 -0.302535"""]
# -
# #### 1.4: Retroene (A = B + C)
# +
reactant = """C -6.006673 2.090429 -0.326601
C -4.967524 1.669781 0.388617
C -3.589427 2.26746 0.357355
C -2.508902 1.272686 -0.104697
H -3.327271 2.622795 1.363524
H -3.58379 3.147152 -0.296003
C -1.100521 1.87264 -0.092522
H -2.756221 0.924031 -1.113232
H -2.5361 0.386526 0.540381
H -1.035149 2.742418 -0.753598
H -0.355581 1.145552 -0.426718
H -0.818137 2.200037 0.913007
H -6.976055 1.60886 -0.26225
H -5.925943 2.93703 -1.002368
H -5.097445 0.815022 1.052006"""
products = [
"""C -1.134399 -0.013643 -0.104812
C 0.269995 0.453024 0.142565
C 1.359378 -0.302236 0.042286
H -1.605932 0.564934 -0.907015
H -1.757078 0.122064 0.786524
H -1.163175 -1.069646 -0.383531
H 0.381197 1.49848 0.425883
H 1.301395 -1.350314 -0.236967
H 2.348619 0.097335 0.235065""",
"""C 0.659713 0.003927 0.070539
C -0.659713 -0.003926 -0.070539
H 1.253364 0.882833 -0.158319
H 1.20186 -0.86822 0.420842
H -1.20186 0.86822 -0.420842
H -1.253364 -0.882833 0.158319""",
]
# -
# #### 1.5 HO2 Addition (A = B + C)
# +
reactant = """C -1.890664 -0.709255 -0.271996
C -0.601182 0.078056 -0.018811
C 0.586457 -0.545096 -0.777924
C -0.292203 0.188974 1.451901
H -0.683164 -0.56844 2.124827
C 0.477032 1.332664 2.012529
O -0.367239 2.493656 2.288335
O -0.679966 1.393013 -0.618968
O -1.811606 2.119506 -0.074789
H -1.819659 -1.711353 0.159844
H -2.063907 -0.801665 -1.346104
H -2.739557 -0.190076 0.171835
H 0.374452 -0.548385 -1.849706
H 1.501209 0.026135 -0.608139
H 0.747239 -1.572318 -0.444379
H 1.209047 1.707778 1.296557
H 0.998836 1.047896 2.931789
H -0.994076 2.235514 2.974109
H -1.392774 2.537261 0.704151"""
products = [
"""C -1.395681 1.528483 -0.00216
C -0.402668 0.411601 -0.210813
C -0.997629 -0.972081 -0.127641
C 0.890607 0.678979 -0.433435
C 2.015631 -0.28316 -0.676721
O 2.741986 0.043989 -1.867415
H -0.923699 2.509933 -0.072949
H -2.200649 1.479183 -0.744922
H -1.873843 1.44886 0.981238
H -1.839799 -1.068706 -0.822233
H -0.283424 -1.765173 -0.346167
H -1.400492 -1.154354 0.875459
H 1.201336 1.7219 -0.466637
H 2.754241 -0.212398 0.127575
H 1.667906 -1.32225 -0.7073
H 2.101868 0.079395 -2.5857""",
"""O -0.168488 0.443026 0.0
O 1.006323 -0.176508 0.0
H -0.837834 -0.266518 0.0""",
]
# -
# #### 1.6 cycloaddition (A = B + C)
# +
reactant = """O -0.854577 1.055663 -0.58206
O 0.549424 1.357531 -0.196886
C -0.727718 -0.273028 -0.011573
C 0.76774 -0.043476 0.113736
H -1.066903 -1.044054 -0.706048
H -1.263435 -0.349651 0.939354
H 1.374762 -0.530738 -0.655177
H 1.220707 -0.172248 1.098653"""
products = [
"""O 0.0 0.0 0.682161
C 0.0 0.0 -0.517771
H 0.0 0.938619 -1.110195
H 0.0 -0.938619 -1.110195""",
"""O 0.0 0.0 0.682161
C 0.0 0.0 -0.517771
H 0.0 0.938619 -1.110195
H 0.0 -0.938619 -1.110195""",
]
# -
# ### [OTHERWISE] we can create molecules in RDKit from SMILES, or mix SMILES and XYZ inputs
# +
reactant = """CCCC=C"""
products = [
"""CC=C""",
"""C=C""",
]
# -
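Internally, a helper like `parse_xyz_or_smiles` has to decide which format it was handed. The following is only a plausible heuristic for that decision (an assumption — the real function's detection logic is not shown here): treat the string as XYZ when every non-empty line looks like `element x y z`, otherwise treat it as SMILES.

```python
def looks_like_xyz(text):
    """Guess whether a molecule string is an XYZ block or a SMILES string.

    Heuristic sketch only: an XYZ line has exactly four fields, an
    alphabetic element symbol followed by three floats.
    """
    lines = [ln.split() for ln in text.strip().splitlines() if ln.strip()]
    if not lines:
        return False
    for fields in lines:
        if len(fields) != 4 or not fields[0].isalpha():
            return False
        try:
            [float(v) for v in fields[1:]]
        except ValueError:
            return False
    return True
```

For example, `looks_like_xyz("CCCC=C")` is False while a coordinate block returns True.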
# ### [TEST] Playground for your input
# +
reactant = """ C -1.46906640 0.60005140 -0.33926754
H -2.26116914 1.22007728 -0.72460795
C -1.38956818 -0.73623936 -0.32193923
H -2.10238195 -1.45466333 -0.69227790
C -0.15313704 -1.26906745 0.28524353
H 0.57488016 -1.74435829 -0.35925707
H -0.22157142 -1.70137139 1.27323768
C -0.24485657 1.20842304 0.28454163
H 0.34631301 1.85087508 -0.38575410
O 0.80687615 0.16816893 0.60967766
O 2.26152266 0.49372545 0.20724445
H 2.73555750 0.25285146 1.00597112
H -0.43624274 1.72491768 1.24065942
"""
products = [
"""C 1.041174 0.363020 0.022251
C -0.039248 1.105625 -0.024168
C -1.205067 0.166738 -0.031137
O -0.717353 -1.089725 -0.295617
C 0.629086 -1.044078 0.046303
H 2.048685 0.750420 0.038636
H -0.091538 2.187600 -0.052664
H -1.819394 0.468851 -0.927329
H -1.771201 0.222565 0.899440
H 1.171410 -1.629308 -0.724483
H 0.753445 -1.501707 1.048767
""",
"""O 0.487604 0.0 0.0
H -0.487604 0.0 0.0""",
]
# -
# ### Generate molecules based on inputs
# +
r_mol = parse_xyz_or_smiles(reactant, backend='openbabel', header=False, correctCO=True)
p_mols = [parse_xyz_or_smiles(product, backend='openbabel', header=False, correctCO=True) \
for product in products]
reactant_mols = [from_rdkit_mol(r_mol.ToRWMol())]
product_mols = [from_rdkit_mol(p_mol.ToRWMol()) for p_mol in p_mols]
# -
# ### 2. Check if this reaction matches RMG templates and generate product complex
# If the reaction matches at least one RMG family, the result will be shown. Otherwise,
# this script will not be helpful
# +
product_match = generate_product_complex(database,
reactant_mols,
product_mols,
verbose=True)
# A product complex with the same atom indexing as the reactant is generated
# p_rmg is its RDKitMol form and product_match is its RMG molecule form
p_rmg = RDKitMol.FromRMGMol(product_match)
# +
# Get the coordinates of the reactant
conf = r_mol.GetConformer(); coords = conf.GetPositions()
# Set reactant's coordinates to the product complexes
p_rmg.EmbedConformer(); conf = p_rmg.GetConformer(); conf.SetPositions(coords)
# -
# obtained product complex
viewer = mol_viewer(p_rmg.ToMolBlock(), 'sdf')
viewer.show()
# ### 3. Find structure match between product complex and input product molecules
# #### 3.1 Combine products if necessary
# +
# [INPUT for [A = B + C ONLY]]
# When locating the product, consider only heavy atoms (True) or also consider Hs (False).
# It is not yet clear which works better.
heavy = False
# When combining two products into a complex, the offset to be used can be a 3D vector
# (tuple or list) or a float interpreted as a proportion of the distance vector.
# It is not yet clear how sensitive TS_gen is to the alignment distance.
offset = 0.1
# -
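How the two allowed `offset` forms might be interpreted can be sketched as follows. This is an assumption about `CombineMol`'s behaviour, based only on the comment above, not its actual implementation: a 3-vector is used verbatim, while a bare float scales the centroid-to-centroid distance vector.

```python
def resolve_offset(offset, centroid_a, centroid_b):
    """Interpret the offset input (hypothetical sketch).

    A tuple/list is taken as an explicit 3D translation; a float is taken
    as a fraction of the vector between the two fragment centroids.
    """
    if isinstance(offset, (tuple, list)):
        return tuple(offset)
    return tuple(offset * (b - a) for a, b in zip(centroid_a, centroid_b))
```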
if len(products) == 1:
p_combine = p_mols[0] # No need to combine if A = B
else:
print('Combine two products...')
matches = [p_rmg.GetSubstructMatches(p_mol, uniquify=False) # uniquify=False in case both products are the same molecule
for p_mol in p_mols]
if heavy:
heavy_indexes = [[atom.GetIdx() for atom in p_mol.GetAtoms() \
if atom.GetAtomicNum() > 1] \
for p_mol in p_mols]
# Find the heavy atom match for the first product
matches[0] = tuple(matches[0][0][i] for i in heavy_indexes[0])
# Find the heavy atom match for the second product
for match in matches[1]:
match_tmp = tuple(match[i] for i in heavy_indexes[1])
# Check if any common element
if not(set(matches[0]) & set(match_tmp)):
matches[1] = match_tmp
break
else:
# Otherwise, just use the first match for each product
matches[0] = matches[0][0]
for match in matches[1]:
if not(set(matches[0]) & set(match)):
matches[1] = match
break
# Align and combine the two products into one complex
p_aligns = []
for p_idx, p_mol in enumerate(p_mols):
# Make a copy of p_mol to preserve its original information
p_align = p_mol.Copy()
atom_map = [(prb, ref) for prb, ref in enumerate(matches[p_idx])]
# Align product to the product complex
rmsd, reflect = p_align.GetBestAlign(refMol=p_rmg,
atomMap=atom_map)
p_aligns.append(p_align)
print(f'Product{p_idx + 1}, Reflect Conformation: {reflect}, RMSD: {rmsd}')
p_combine = p_aligns[0].CombineMol(p_aligns[1], offset=offset)
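The match-selection logic above, reduced to plain tuples: fix the first product's match, then scan the second product's matches for one that shares no atom indices with it. A minimal sketch of that selection (not the RDKit calls themselves):

```python
def pick_disjoint_matches(matches_a, matches_b):
    """Return one match per product such that the two are atom-disjoint.

    matches_a / matches_b are sequences of index tuples, as returned by
    GetSubstructMatches for each product against the product complex.
    """
    first = matches_a[0]          # always take the first match for product A
    used = set(first)
    for candidate in matches_b:   # find a match for product B with no overlap
        if not used & set(candidate):
            return first, candidate
    raise ValueError("no disjoint pair of matches found")
```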
# #### 3.2 Find all possible atom mappings between the reactant and the product.
matches = p_rmg.GetSubstructMatches(p_combine, uniquify=False)
# Find the best atom mapping by RMSD. <br>
# Note: this can perform relatively poorly if the reactant and product differ in stereochemistry (cis/trans) or if most rotors are oriented significantly differently. However, the previous step (matching according to the RMG reaction) ensures that all heavy atoms and reacting H atoms are consistent, so only the more trivial H atoms are affected by this.
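The quantity being minimised over candidate atom maps is the RMSD between two coordinate sets. A plain-Python version of just that metric (for illustration only — `GetBestAlign` additionally optimises the rigid-body alignment and optional reflection):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of (x, y, z) points."""
    assert len(coords_a) == len(coords_b)
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```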
# +
rmsds = []
# Make a copy of p_combine to preserve its original information
p_align = p_combine.Copy()
# Align the combined complex to the rmg generated complex
# According to different mapping and find the best one.
for i, match in enumerate(matches):
atom_map = [(ref, prb) for ref, prb in enumerate(match)]
rmsd, reflect = p_align.GetBestAlign(refMol=p_rmg,
atomMap=atom_map,
keepBestConformer=False)
rmsds.append((i, reflect, rmsd))
best = sorted(rmsds, key=lambda x: x[2])[0]
print('Match index: {0}, Reflect Conformation: {1}, RMSD: {2}'.format(*best))
# Realign and reorder atom indexes according to the best match
best_match = matches[best[0]]
p_align.AlignMol(refMol=p_rmg,
atomMap=[(ref, prb) for ref, prb in enumerate(best_match)],
reflect=best[1])
new_atom_indexes = [best_match.index(i) for i in range(len(best_match))]
p_align = p_align.RenumberAtoms(new_atom_indexes)
# -
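The renumbering step above inverts a permutation: if `best_match[ref] == prb`, the new atom order maps `prb` back to `ref`. The list comprehension with `.index()` does this in O(n²); an equivalent O(n) version is:

```python
def invert_permutation(perm):
    """Invert an atom-index permutation: inverse[prb] = ref where perm[ref] = prb."""
    inverse = [0] * len(perm)
    for ref, prb in enumerate(perm):
        inverse[prb] = ref
    return inverse
```

For small molecules the difference is negligible, but this form also makes the intent of the renumbering explicit.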
# ### 4. View Molecules
# +
entry = 3 if len(products) == 1 else 4
viewer = grid_viewer(viewer_grid=(1, entry),
viewer_size=(240 * entry, 300),)
mol_viewer(r_mol.ToMolBlock(), 'sdf', viewer=viewer, viewer_loc=(0, 0))
mol_viewer(p_align.ToMolBlock(), 'sdf', viewer=viewer, viewer_loc=(0, 1))
for i in range(2, entry):
mol_viewer(p_mols[i-2].ToMolBlock(), 'sdf', viewer=viewer, viewer_loc=(0, i))
print('reactant matched product original product')
viewer.show()
# -
# ### 5. Launch TS-GCN to generate TS guess
# +
r_mol.ToSDFFile('reactant.sdf')
p_align.ToSDFFile('product.sdf')
try:
subprocess.run(f'export PYTHONPATH=$PYTHONPATH:{TS_GCN_DIR};'
f'{TS_GCN_PYTHON} {TS_GCN_DIR}/inference.py '
f'--r_sdf_path reactant.sdf '
f'--p_sdf_path product.sdf '
f'--ts_xyz_path TS.xyz',
check=True,
shell=True)
except subprocess.CalledProcessError as e:
print(e)
else:
with open('TS.xyz', 'r') as f:
ts_xyz=f.read()
ts = RDKitMol.FromXYZ(ts_xyz)
# -
# ### 6. Visualize TS
# +
# Align the TS to make visualization more convenient
atom_map = [(i, i) for i in range(r_mol.GetNumAtoms())]
ts.GetBestAlign(refMol=r_mol,
atomMap=atom_map,
keepBestConformer=True)
viewer = grid_viewer(viewer_grid=(1, 3),
viewer_size=(240 * entry, 300),)
mol_viewer(r_mol.ToMolBlock(), 'sdf', viewer=viewer, viewer_loc=(0, 0))
mol_viewer(ts.ToMolBlock(), 'sdf', viewer=viewer, viewer_loc=(0, 1))
mol_viewer(p_align.ToMolBlock(), 'sdf', viewer=viewer, viewer_loc=(0, 2))
print('reactant TS product')
viewer.show()
# -
# Get TS xyz
print(ts.ToXYZ())
| ipython/TS-GCN+RDMC_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MetAnalyst_Example Notebook
#
# # Warning - Outdated notebook
#
# test_scaling.py does a better job than this notebook of verifying that the pre-treatments in the scaling.py module reproduce the pre-treatments performed by the MetaboAnalyst 4.0 software.
#
# This notebook compares the data transformations (missing value imputation, normalization, transformation and scaling) performed by the online software MetaboAnalyst with the methods in the scaling module of this repository (only missing value imputation by half of the minimum value in the dataset, normalization by a reference feature, glog transformation and Pareto scaling). Instead of comparing the datasets directly as test_scaling.py does, it observes the similarity between the resulting linkage matrices (used for dendrogram construction).
#
# The example data used is provided by MetaboAnalyst and is available in the statistical analysis section of the software as the test data labelled "MS Peak List". This data was chosen for being one of the closest to the data used in other BinSim analyses; the major difference is the "m/z" column, which here also contains the retention time (the data comes from LC-MS) and is in the format "mass/retention time".
#
# Note: the files used here were obtained from MetaboAnalyst: start the statistical analysis and pick this file in MetaboAnalyst, apply the data pre-treatment indicated for each file as it is used in the notebook, and download the .csv file from the download tab. Some of the treatments may have changed in the meantime; for example, missing value imputation by half of the minimum value in the dataset is no longer available in MetaboAnalyst 4.0, so this exact analysis can't be replicated.
#
# ## Organization of the Notebook
#
# - Read all the different files (original and after different pre-treatments) and construct linkage matrices.
# - Apply all the different pre-treatments using the scaling module and construct linkage matrices.
# - Observe the correlation between all different pairs of linkage matrices.
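The third step reduces each dendrogram to a vector (cophenetic distances, or merge ranks for the Baker coefficients) and correlates those vectors pairwise. The cophenetic comparison ultimately rests on a Pearson correlation, which in plain Python is:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

The notebook itself uses `scipy.stats.pearsonr` (plus `kendalltau` and `spearmanr`); this sketch only shows what those coefficients measure.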
#
# ### Needed Imports
#from metabolinks import read_data_csv, read_data_from_xcel
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
import pandas as pd
import scipy.spatial.distance as dist
import scipy.cluster.hierarchy as hier
import scipy.stats as stats
import scaling as sca
import multianalysis as ma
# %matplotlib inline
# ### File from MetaboAnalyst reading
#
# The first file is the original data file (data_original.csv on the download tab); the scaling methods will be applied to it. This is where the index format matters, since the normalization procedure requires an m/z index column (a column of floats/integers). To transform this index we do the following:
def read_aligned_files(filename):
"""Short function to read the aligned files fast while putting the MultiIndex in the correct order for the CDL accessor."""
df = pd.read_csv(filename, header = None, index_col = [0])
df.index.name = 'm/z'
mi = pd.concat([df.iloc[1, :],df.iloc[0, :]], axis = 'columns')
mi = pd.MultiIndex.from_frame(mi)
final_file = pd.read_csv(filename, header = [0,1], index_col = [0])
final_file.columns = mi
return final_file
MetAna_O = read_aligned_files('MetAnalyst/MetAna_Original.csv') #data_original.csv (no processing required)
# Now, we get and read the files from the other methods. First with missing value imputation (no features removed/features that have more than 100% of missing values removed) by replacing by half of the minimum value. After this, files with data transformed from all combinations of these 3 methods: normalization by a reference sample (random), glog transformation and Pareto Scaling (if multiple used, they are processed in this order).
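For reference, minimal plain-Python sketches of three of the pre-treatments named above. These are simplified assumptions, not the scaling module's code: the glog shown uses one common parameterisation, log2((x + sqrt(x² + λ²))/2), and does not reproduce MetaboAnalyst's λ optimisation.

```python
import math

def impute_half_min(values):
    """Replace missing entries (None) by half of the smallest observed value."""
    half_min = min(v for v in values if v is not None) / 2.0
    return [half_min if v is None else v for v in values]

def glog(x, lamb=1.0):
    """Generalised log transform (one common parameterisation)."""
    return math.log2((x + math.sqrt(x * x + lamb * lamb)) / 2.0)

def pareto_scale(values):
    """Mean-centre, then divide by the square root of the standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / math.sqrt(sd) for v in values]
```

Pareto scaling keeps the data closer to its original scale than unit-variance scaling, since it divides by √sd rather than sd.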
MetAna_I = read_aligned_files('MetAnalyst/MetAna_Imputed.csv') #data_processed.csv (after missing value imputation).
# From now on, the files extracted have 2 extra columns (separated m/z and retention time) that aren't required and need to be removed. For that, we apply the following function to the rest of the files:
def reading_MetAna_files(filename):
file = pd.read_table(filename, header=[0], sep=',')
file = file.set_index(file.columns[0])
file.index.name = 'm/z'
file = file[["ko15","ko16","ko18","ko19","ko21","ko22","wt15","wt16","wt18","wt19","wt21","wt22"]]
MetAna_file = file.cdf.add_labels(["KO","KO","KO","KO","KO","KO","WT","WT","WT","WT","WT","WT"])
return MetAna_file
# All of these are obtained from peak_normalized_rt_mz.csv after respective processing. They all have the same missing value
# imputation as above.
# No data filter was performed.
MetAna_P = reading_MetAna_files('MetAnalyst/MetAna_Pareto.csv') # Pareto Scaling only
MetAna_N = reading_MetAna_files('MetAnalyst/MetAna_Norm.csv') # Normalization by a reference feature only - 301/2791.68 (random choice)
MetAna_G = reading_MetAna_files('MetAnalyst/MetAna_Glog.csv') # glog transformation
MetAna_NP = reading_MetAna_files('MetAnalyst/MetAna_np.csv') # Normalization by reference feature + Pareto Scaling
MetAna_GP = reading_MetAna_files('MetAnalyst/MetAna_gp.csv') # glog transformation + Pareto Scaling
MetAna_NG = reading_MetAna_files('MetAnalyst/MetAna_ng.csv') # Normalization by reference feature + glog transformation
MetAna_NGP = reading_MetAna_files('MetAnalyst/MetAna_ngp.csv') # Normalization by reference feature + glog transformation + Pareto Scaling
# Measure distances and linkage matrix of hierarchical clustering for each of the 8 files.
dist_MetAna_I = dist.pdist(MetAna_I.T, metric = 'euclidean')
Z_MetAna_I = hier.linkage(dist_MetAna_I, method='average')
dist_MetAna_P = dist.pdist(MetAna_P.T, metric = 'euclidean')
Z_MetAna_P = hier.linkage(dist_MetAna_P, method='average')
dist_MetAna_N = dist.pdist(MetAna_N.T, metric = 'euclidean')
Z_MetAna_N = hier.linkage(dist_MetAna_N, method='average')
dist_MetAna_G = dist.pdist(MetAna_G.T, metric = 'euclidean')
Z_MetAna_G = hier.linkage(dist_MetAna_G, method='average')
dist_MetAna_NP = dist.pdist(MetAna_NP.T, metric = 'euclidean')
Z_MetAna_NP = hier.linkage(dist_MetAna_NP, method='average')
dist_MetAna_GP = dist.pdist(MetAna_GP.T, metric = 'euclidean')
Z_MetAna_GP = hier.linkage(dist_MetAna_GP, method='average')
dist_MetAna_NG = dist.pdist(MetAna_NG.T, metric = 'euclidean')
Z_MetAna_NG = hier.linkage(dist_MetAna_NG, method='average')
dist_MetAna_NGP = dist.pdist(MetAna_NGP.T, metric = 'euclidean')
Z_MetAna_NGP = hier.linkage(dist_MetAna_NGP, method='average')
# Example of a dendrogram from this data (Pareto Scaling only)
fig = plt.figure(figsize=(16,7))
dn = hier.dendrogram(Z_MetAna_P, labels=MetAna_P.cdl.samples,
leaf_font_size=15,
above_threshold_color='b')
# ### Applying Scaling module methods to the original data - MetAna_O
# +
# Applying the different methods
I_O = sca.NaN_Imputation(MetAna_O, 0) # Missing Value Imputation (serves as a base to other methods). No features removed.
P_O = sca.ParetoScal(I_O) # Pareto Scaling only
N_O = sca.Norm_Feat(I_O, '301/2791.68') # Normalization by a reference feature only - 301/2791.68 (random choice)
G_O = sca.glog(I_O) # glog transformation
NP_O = sca.ParetoScal(N_O) # Normalization by reference feature + Pareto Scaling
GP_O = sca.ParetoScal(G_O) # glog transformation + Pareto Scaling
NG_O = sca.glog(N_O) # Normalization by reference feature + glog transformation
NGP_O = sca.ParetoScal(NG_O) # Normalization by reference feature + glog transformation + Pareto Scaling
# -
# Measure distances and linkage matrix of hierarchical clustering for each of the 8 combinations of methods.
dist_I_O = dist.pdist(I_O.T, metric = 'euclidean')
Z_I_O = hier.linkage(dist_I_O, method='average')
dist_P_O = dist.pdist(P_O.T, metric = 'euclidean')
Z_P_O = hier.linkage(dist_P_O, method='average')
dist_N_O = dist.pdist(N_O.T, metric = 'euclidean')
Z_N_O = hier.linkage(dist_N_O, method='average')
dist_G_O = dist.pdist(G_O.T, metric = 'euclidean')
Z_G_O = hier.linkage(dist_G_O, method='average')
dist_NP_O = dist.pdist(NP_O.T, metric = 'euclidean')
Z_NP_O = hier.linkage(dist_NP_O, method='average')
dist_GP_O = dist.pdist(GP_O.T, metric = 'euclidean')
Z_GP_O = hier.linkage(dist_GP_O, method='average')
dist_NG_O = dist.pdist(NG_O.T, metric = 'euclidean')
Z_NG_O = hier.linkage(dist_NG_O, method='average')
dist_NGP_O = dist.pdist(NGP_O.T, metric = 'euclidean')
Z_NGP_O = hier.linkage(dist_NGP_O, method='average')
# Example of a dendrogram from this transformed data - same as previous - only Pareto Scaling
fig = plt.figure(figsize=(16,7))
dn = hier.dendrogram(Z_P_O, labels=P_O.cdl.samples,
leaf_font_size=15,
above_threshold_color='b')
# ### Calculating correlation between every combination of data processing from MetaboAnalyst and from scaling
# +
MetAna = (Z_MetAna_I, Z_MetAna_P, Z_MetAna_N, Z_MetAna_G, Z_MetAna_NP, Z_MetAna_GP, Z_MetAna_NG, Z_MetAna_NGP,
Z_I_O, Z_P_O, Z_N_O, Z_G_O, Z_NP_O, Z_GP_O, Z_NG_O, Z_NGP_O)
dist_MetAna = (dist_MetAna_I, dist_MetAna_P, dist_MetAna_N, dist_MetAna_G, dist_MetAna_NP, dist_MetAna_GP, dist_MetAna_NG,
dist_MetAna_NGP, dist_I_O, dist_P_O, dist_N_O, dist_G_O, dist_NP_O, dist_GP_O, dist_NG_O, dist_NGP_O)
K_MetAna = []
S_MetAna = []
Coph_MetAna = []
for i in range(len(MetAna)):
K_MetAna.append(ma.mergerank(MetAna[i])) # Mergerank
S_MetAna.append(K_MetAna[i][K_MetAna[i]!=0]) # Both reshape to a 1D array (needed for spearman correlation) and take out 0s
Coph_MetAna.append(hier.cophenet(MetAna[i], dist_MetAna[i])) # Matrix of Cophenetic distances
# -
# Column names and row names for the dataframes
colnames = ['MetAna_I', 'MetAna_P', 'MetAna_N', 'MetAna_G', 'MetAna_NP', 'MetAna_GP', 'MetAna_NG', 'MetAna_NGP',
'I_O', 'P_O', 'N_O', 'G_O', 'NP_O', 'GP_O', 'NG_O', 'NGP_O']
df_K_MetAna = pd.DataFrame(np.zeros((len(S_MetAna),len(S_MetAna))), columns = colnames, index = colnames) # K - Kendall (Baker)
df_S_MetAna = pd.DataFrame(np.zeros((len(S_MetAna),len(S_MetAna))), columns = colnames, index = colnames) # S - Spearman (Baker)
df_C_MetAna = pd.DataFrame(np.zeros((len(S_MetAna),len(S_MetAna))), columns = colnames, index = colnames) # C - Cophenetic Correlation
df_K_p_MetAna = pd.DataFrame(np.zeros((len(S_MetAna),len(S_MetAna))), columns = colnames, index = colnames) # p-values of K method
df_S_p_MetAna = pd.DataFrame(np.zeros((len(S_MetAna),len(S_MetAna))), columns = colnames, index = colnames) # p-values of S method
df_C_p_MetAna = pd.DataFrame(np.zeros((len(S_MetAna),len(S_MetAna))), columns = colnames, index = colnames) # p-values of C method
# Calculation of correlation coefficient for each method
for i in range(len(S_MetAna)):
for j in range(len(S_MetAna)):
df_K_MetAna.iloc[i,j] = stats.kendalltau(S_MetAna[i],S_MetAna[j])[0] # Correlation coefficient
df_S_MetAna.iloc[i,j] = stats.spearmanr(S_MetAna[i],S_MetAna[j])[0] # Correlation coefficient
df_C_MetAna.iloc[i,j] = stats.pearsonr(Coph_MetAna[i][1],Coph_MetAna[j][1])[0] # Correlation coefficient
df_K_p_MetAna.iloc[i,j] = stats.kendalltau(S_MetAna[i],S_MetAna[j])[1] # p-value
df_S_p_MetAna.iloc[i,j] = stats.spearmanr(S_MetAna[i],S_MetAna[j])[1] # p-value
df_C_p_MetAna.iloc[i,j] = stats.pearsonr(Coph_MetAna[i][1],Coph_MetAna[j][1])[1] # p-value
# And finally we check the results with Heatmaps.
#
# ### Heatmaps
# +
f, ax = plt.subplots(figsize=(20, 15))
print('Baker (Kendall) Correlation Coefficient Heatmap')
hm = sns.heatmap(df_K_MetAna, annot=True, ax=ax)
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
plt.show()
# +
f, ax = plt.subplots(figsize=(20, 15))
print('Baker (Spearman) Correlation Coefficient Heatmap (between dendrograms made with different distance metrics)')
hm = sns.heatmap(df_S_MetAna, annot=True, ax=ax)
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
plt.show()
# +
f, ax = plt.subplots(figsize=(20, 15))
print('Cophenetic Correlation Coefficient Heatmap (between dendrograms made with different distance metrics)')
hm = sns.heatmap(df_C_MetAna, annot=True, ax=ax)
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
plt.show()
# -
# ## Results Summary
#
# - The main takeaway is that the lower diagonal (starting at I_O and MetAna_I and ending at NGP_O and MetAna_NGP) is 1 all across the board. As you can see below, the values aren't all exactly 1 but are extremely close to it (>0.9999999). This means that the analyses of MetaboAnalyst and the scaling module are virtually identical.
# - However, there are some other things to consider: the optimization to calculate lambda in the glog transformation still hasn't been added to scaling to imitate the MetaboAnalyst analysis.
# - Although normalization seems to have a lower correlation with the other combinations, we can't conclude that this method transforms the data more, or performs worse, since we normalized by a randomly chosen feature rather than an actual reference feature (as in real datasets).
print('Cophenetic Correlation Example')
print('Imputation (I) Comparison: \t', df_C_MetAna.iloc[0,8])
print('Pareto Scaling (P) Comparison: \t', df_C_MetAna.iloc[1,9])
print('Normalization (N) Comparison: \t', df_C_MetAna.iloc[2,10])
print('Glog transformation (G) Comparison: \t', df_C_MetAna.iloc[3,11])
print('Normalization + Pareto Scaling (NP) Comparison: \t', df_C_MetAna.iloc[4,12])
print('Glog transformation + Pareto Scaling (GP) Comparison: \t', df_C_MetAna.iloc[5,13])
print('Normalization + Glog transformation Comparison (NG): \t', df_C_MetAna.iloc[6,14])
print('Normalization + Glog transformation + Pareto Scaling (NGP) Comparison : \t', df_C_MetAna.iloc[7,15])
| MetAnalyst_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.2 (''venv'': venv)'
# language: python
# name: python3
# ---
# # Page View Time Series Visualizer
# External imports
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# +
# Get data from the fcc-forum-pageviews.csv file
df = pd.read_csv("fcc-forum-pageviews.csv").set_index("date")
df.index = pd.DatetimeIndex(df.index)
df.head()
# +
# Filter and plot the time series
per_025 = df.value.quantile(0.025)
per_975 = df.value.quantile(0.975)
df = df.query("(value > @per_025) & (value < @per_975)")
g = df.plot(legend=None, figsize=(15,5),color="#B1442C")
plt.title("Daily freeCodeCamp Forum Page Views 5/2016-12/2019");
plt.xlabel("Date");plt.ylabel("Page Views");
# -
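The quantile filter above keeps only the middle 95% of the page-view values. Stripped of pandas, the same idea looks like this (using nearest-rank percentiles for simplicity, which differ slightly from pandas' interpolated `quantile` default):

```python
def filter_middle_95(values, lo=0.025, hi=0.975):
    """Drop values at or beyond the 2.5th / 97.5th percentiles (nearest-rank)."""
    s = sorted(values)

    def q(p):
        # nearest-rank percentile: index into the sorted data
        return s[min(len(s) - 1, int(p * len(s)))]

    low, high = q(lo), q(hi)
    return [v for v in values if low < v < high]
```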
# Create the appropriate DataFrame for the second analysis
pd.set_option('mode.chained_assignment', None) # Disable the warnings
df["month"] = df.index.month; df["year"] = df.index.year;
df.head()
# Plot the information of the DataFrame just created
df_bar = df.pivot_table("value", index="year", columns="month").plot(kind="bar", figsize=(8,6))
months = ["January","February","March","April","May","June","July","August","September","October","November","December"]
plt.xlabel("Years");plt.ylabel("Average Page Views"); plt.legend(months, title="Months");
# +
# Plot the boxplots
fig, ax = plt.subplots(1, 2, figsize=(17, 6))
g1 = sns.boxplot(ax=ax[0], x="year", y="value", data=df, dodge=False)
g2 = sns.boxplot(ax=ax[1], x="month", y="value", data=df, dodge=False)
ax[0].set_title("Year-wise Box Plot (Trend)"); ax[0].set(xlabel="Year", ylabel="Page Views");
month_short = [x[:3] for x in months]
ax[1].set_title("Month-wise Box Plot (Seasonality)"); ax[1].set(xlabel="Month", ylabel="Page Views");
def format_func(value, tick_number):
return month_short[value]
ax[1].xaxis.set_major_formatter(plt.FuncFormatter(format_func))
| data-analysis-python/boilerplate-page-view-time-series-visualizer/Page View Time Series Visualizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="6iCMtFzscuoh" outputId="a9769ff0-bf1d-4309-c12f-83d40c8f3e00"
from google.colab import drive
drive.mount('/content/gdrive')
# + colab={"base_uri": "https://localhost:8080/"} id="kamLJFzWc25u" outputId="3b041a5f-7d3d-4c64-ca3d-9ff9dc938c44"
import os
import string
import pandas as pd
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
from gensim.parsing.preprocessing import remove_stopwords
from nltk.tokenize import word_tokenize
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
import torch.utils.data
import numpy as np
from keras.preprocessing.text import one_hot
import warnings
from gensim.models import Word2Vec
import torch.nn.functional as F
warnings.filterwarnings("ignore")
import pickle
# + id="BWGhXQIGc_Lv"
os.chdir('/content/gdrive/MyDrive/CS772')
# + id="mfzLwLyfdHCD"
embeddings_dict = {}
with open("glove.6B.300d.txt", 'r') as f:
for line in f:
values = line.split()
word = values[0]
vector = np.asarray(values[1:], "float32")
embeddings_dict[word] = vector
# + colab={"base_uri": "https://localhost:8080/"} id="FG5s7CesdKTH" outputId="c8f8d757-e326-4609-a1d3-3fecee315784"
embeddings_dict['the'].shape[0]
# + id="H4PFKs0_dMNB"
'''
About the task:
You are provided with a codeflow- which consists of functions to be implemented(MANDATORY).
You need to implement each of the functions mentioned below, you may add your own function parameters if needed(not to main).
Execute your code using the provided auto.py script(NO EDITS PERMITTED) as your code will be evaluated using an auto-grader.
'''
num_words = 300
oov_token = '<UNK>'
pad_type = 'post'
trunc_type = 'post'
vocabulary = []
# Function to average all word vectors in a paragraph
def featureVecMethod(words, model, num_features):
# Pre-initialising empty numpy array for speed
featureVec = np.zeros(num_features,dtype="float32")
nwords = 0
#Converting Index2Word which is a list to a set for better speed in the execution.
index2word_set = set(model.wv.index2word)
for word in words:
if word in index2word_set:
nwords = nwords + 1
featureVec = np.add(featureVec,model[word])
# Dividing the result by number of words to get average
featureVec = np.divide(featureVec, nwords)
return featureVec
# Function for calculating the average feature vector
def getAvgFeatureVecs(reviews, model, num_features):
counter = 0
reviewFeatureVecs = np.zeros((len(reviews),num_features),dtype="float32")
for review in reviews:
# Printing a status message every 1000th review
reviewFeatureVecs[counter] = featureVecMethod(review, model, num_features)
counter = counter+1
return reviewFeatureVecs
def encode_data(text,train=True):
# This function will be used to encode the reviews using a dictionary(created using corpus vocabulary)
# Example of encoding :"The food was fabulous but pricey" has a vocabulary of 4 words, each one has to be mapped to an integer like:
# {'The':1,'food':2,'was':3 'fabulous':4 'but':5 'pricey':6} this vocabulary has to be created for the entire corpus and then be used to
# encode the words into integers
# full = []
# for lines in text['reviews']:
# for word in lines:
# full.append(word)
# unique_words = set(full)
# #vocab_length = len(corpus_unique_words)
# vocab_length = len(unique_words)
# print(vocab_length)
# for i in range(0,len(text['reviews'])):
# line = (" ").join(text['reviews'].iloc[i])
# text['reviews'].iloc[i] = one_hot(line,vocab_length)
new_list=[]
for i in text['reviews']:
new_list.append(i)
    vocab = set(embeddings_dict)  # set membership is O(1); a list here would make this loop very slow
    text['reviews'] = [np.sum([embeddings_dict[word] for word in post if word in vocab], axis=0) for post in new_list]
return text
def convert_to_lower(text):
# return the reviews after convering then to lowercase
return text.apply(lambda row :row.lower())
def remove_punctuation(text):
# return the reviews after removing
for punctuation in string.punctuation:
text = text.replace(punctuation, '')
return text
def remove_stopwords(text):
# return the reviews after removing the stopwords
# print('before stopword removal',text)
stop = stopwords.words('english')
stop = []
return text.apply(lambda x: (" ").join([item for item in x.split() if item not in stop]))
def perform_tokenization(text):
# return the reviews after performing tokenization
return text.apply(lambda row : word_tokenize(row))
def perform_padding(data):
# return the reviews after padding the reviews to maximum length
# pd.options.mode.chained_assignment = None # default='warn'
# #review_max_length = data['reviews'].str.len().max()
# max_length = 29
# for i in range(0,len(data['reviews'])):
# data['reviews'].iloc[i] += ['0']*(max_length - len(data['reviews'].iloc[i]))
# data['reviews'].iloc[i] = np.array(data['reviews'].iloc[i]).astype('int64')
    padded_posts = []
    for post in data:  # iterate over the encoded reviews passed in
        # Pad short posts by repeating their pointwise average value
        if len(post) < MAX_LENGTH:
            pointwise_avg = np.mean(post)
            padding = [pointwise_avg]
            # ceil((MAX_LENGTH - len(post)) / 2): the original misplaced a parenthesis
            post += padding * int(np.ceil((MAX_LENGTH - len(post)) / 2.0))
        # Shorten long posts (or odd-length posts we over-padded)
        if len(post) > MAX_LENGTH:
            post = post[:MAX_LENGTH]
        # Add the post to our new list of padded posts
        padded_posts.append(post)
return padded_posts
def preprocess_data(data,train = True):
# make all the following function calls on your data
# EXAMPLE:->
review = data["reviews"]
review = convert_to_lower(review)
review = review.apply(remove_punctuation)
data["reviews"] = review
#review = remove_stopwords(review)
# data = remove_stopwords(data)
review = perform_tokenization(review)
data["reviews"] = review
data = encode_data(data,train)
review = data["reviews"]
data = data.loc[:, ~data.columns.str.contains('^Unnamed')]
print(data)
# print(review)
# data = perform_padding(data)
return data
# + id="zw_agnUFgKKN"
# + id="Cso3o0t9iiST"
# + colab={"base_uri": "https://localhost:8080/"} id="3uP867N6dOhU" outputId="7057ed10-a2cc-4035-8063-08a023c747ef"
np.random.seed(0)
import torch.nn as nn
import nltk
nltk.download('punkt')
from sklearn.metrics import classification_report
from math import log2
from imblearn.over_sampling import SMOTE, RandomOverSampler, SMOTENC, ADASYN
from imblearn.under_sampling import RandomUnderSampler
from imblearn.under_sampling import NearMiss
from imblearn.pipeline import Pipeline
from collections import Counter
validation = 0
main_model = ''
def softmax_activation(x):
# write your own implementation from scratch and return softmax values(using predefined softmax is prohibited)
# print((torch.exp(x) - torch.max(torch.exp(x)))/ (torch.sum(torch.exp(x),axis=0) - torch.max(torch.exp(x))))
x=x-torch.max(x)
return torch.exp(x)/(torch.sum(torch.exp(x),axis=0))
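The max-subtraction above is the standard numerical-stability trick: softmax is invariant to subtracting a constant from every logit, but subtracting the maximum keeps `exp()` from overflowing on large inputs. The same idea in plain Python, independent of the torch version above:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a 1-D list of logits."""
    m = max(xs)                               # shift so the largest logit is 0
    exps = [math.exp(x - m) for x in xs]      # exp() can no longer overflow
    total = sum(exps)
    return [e / total for e in exps]
```

Note that the torch version sums over `axis=0`; for a batched 2-D input the sum would need to run over the class axis instead.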
class NeuralNet(nn.Module):
def __init__(self,train_data_loader,val_data_loader):
super(NeuralNet, self).__init__()
self.rnn = nn.RNN(input_size=num_words,hidden_size=256,num_layers=2, batch_first=True, nonlinearity='relu')
# self.layer1 = nn.Linear(300,512)
# self.dropout1 = nn.Dropout(0.5)
# self.normal1 = nn.BatchNorm1d(512)
# self.layer2 = nn.Linear(512,128)
# self.dropout2 = nn.Dropout(0.2)
# self.normal2 = nn.BatchNorm1d(128)
# self.layer3 = nn.Linear(128,5)
self.train_data_loader = train_data_loader
self.val_data_loader = val_data_loader
# input_dim = 300
# hidden_dim = 100
# layer_dim = 1
# output_dim = 5
# super(NeuralNet, self).__init__()
# # Hidden dimensions
# self.hidden_dim = hidden_dim
# # Number of hidden layers
# self.layer_dim = layer_dim
# # Building your RNN
# # batch_first=True causes input/output tensors to be of shape
# # (batch_dim, seq_dim, input_dim)
# # batch_dim = number of samples per batch
# self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim, batch_first=True, nonlinearity='relu')
# # Readout layer
self.fc = nn.Linear(256, 5)
def build_nn(self,dt):
# #add the input and output layer here; you can use either tensorflow or pytorch
# x1 = self.layer1(dt)
# x1 = self.dropout1(x1)
# x1 = self.normal1(x1)
# x1 = F.relu(x1)
dt = dt.view(dt.shape[0],1,dt.shape[1])
rnn_out, hidden = self.rnn(dt)
rnn_out = rnn_out.contiguous().view(-1, 256)
x1 = F.relu(rnn_out)
# x1 = self.layer2(x1)
# x1 = self.dropout2(x1)
# x1 = self.normal2(x1)
# x1 = F.relu(x1)
# x1 = self.layer3(x1)
x1 = self.fc(x1)
return x1
# # Initialize hidden state with zeros
# # (layer_dim, batch_size, hidden_dim)
# h0 = torch.zeros(self.layer_dim, dt.size(0), self.hidden_dim).requires_grad_()
# # We need to detach the hidden state to prevent exploding/vanishing gradients
# # This is part of truncated backpropagation through time (BPTT)
# out, hn = self.rnn(dt, h0.detach())
# # Index hidden state of last time step
# # out.size() --> 100, 28, 10
# # out[:, -1, :] --> 100, 10 --> just want last time step hidden states!
##x1 = self.fc(x1[:, -1, :])
# # out.size() --> 100, 10
##return x1
# calculate cross entropy
def cross_entropy(p, q):
return -sum([p[i]*log2(q[i]) for i in range(len(p))])
def train_nn(self,batch_size,epochs,optimizer,model):
global validation,main_model
# write the training loop here; you can use either tensorflow or pytorch
loss_cr = nn.CrossEntropyLoss()
train_ls = []
val_ls = []
train_acc = []
val_acc = []
for epoch in range(epochs):
#for param in model.parameters():
# print(param.data)
# For training
tot_loss = 0
#keeping track of loss
correct = 0
total = 0
            model.train()  # re-enable training mode; model.eval() below switches it off each epoch
for (batch,label) in self.train_data_loader:
data = batch
labels = label
#initializing all the gradients to be zero
optimizer.zero_grad()
output = model.build_nn(data)
loss = loss_cr(output,labels)
                #accumulating the loss across batches
                tot_loss += loss.item()
#For backpropagation
loss.backward()
optimizer.step()
total += labels.size(0)
_,pred = torch.max(output.data,1)
correct += (pred == labels).sum().item()
train_acc1 = 100 * correct / total
train_acc.append(train_acc1)
# print validation accuracy
            val_loss=0
            #switch to evaluation mode so validation does not behave like training
            # The parameters learned so far are used to make predictions on the validation set,
            # which we use to tune hyperparameters and select the best model
            model.eval()
            correct = 0
            total = 0
            #Now, similar to above, iterate over the validation data in batches, without tracking gradients
            with torch.no_grad():
                for batch in self.val_data_loader:
                    data = batch[0]
                    labels = batch[1]
                    output = model.build_nn(data)
                    loss = loss_cr(output,labels)
                    val_loss += loss.item()
                    total += labels.size(0)
                    _,pred = torch.max(output.data,1)
                    correct += (pred == labels).sum().item()
valid_acc = 100 * correct / total
val_acc.append(valid_acc)
train_ls.append(tot_loss)
val_ls.append(val_loss)
if validation < val_acc[-1]:
validation = val_acc[-1]
main_model = model
print('updating model')
pickle.dump(model, open('Rnn_Glove_model_withoutsmote.pkl','wb'))
print(f'Epoch {epoch} --> Train loss: {tot_loss} Train accuracy: {train_acc[-1]} Val loss: {val_loss} Val accuracy: {val_acc[-1]}')
return 0
def predict(self, data,model1):
output = model1.build_nn(data)
pred,index = torch.max(output,1)
return index + 1,output
# return a list containing all the ratings predicted by the trained model
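# The `-1` applied when building the labels and the `+1` in `predict` map the 1–5 star ratings onto the 0-indexed class ids that `CrossEntropyLoss` expects, and back again. A quick sanity check of that round trip:

```python
ratings = [1, 3, 5, 2]
class_ids = [r - 1 for r in ratings]    # 0..4, as required by CrossEntropyLoss
recovered = [c + 1 for c in class_ids]  # back to 1..5 star ratings
print(class_ids, recovered)  # → [0, 2, 4, 1] [1, 3, 5, 2]
```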
# DO NOT MODIFY MAIN FUNCTION'S PARAMETERS
def main(train_file, test_file):
global train_dataset
train_data = pd.read_csv(train_file)
test_data = pd.read_csv(test_file)
valid_data = pd.read_csv("gold_test.csv")
batch_size = 1024
epochs = 25
#preprocessing the data
train_dataset=preprocess_data(train_data)
train_dataset = train_dataset.loc[:,~train_dataset.columns.str.match("Unnamed")]
validation_dataset = preprocess_data(valid_data,False)
validation_dataset = validation_dataset.loc[:,~validation_dataset.columns.str.match("Unnamed")]
test_dataset=preprocess_data(test_data,False)
test_dataset = test_dataset.loc[:,~test_dataset.columns.str.match("Unnamed")]
# print(train_dataset)
#converting data into a tensor
X = torch.tensor(train_dataset['reviews'], dtype=torch.float)
#Normalizing the training data to keep it over a smaller range
X = (X - torch.mean(X))/torch.std(X)
y = torch.tensor(train_dataset['ratings'], dtype=torch.float) -1
# y = y - torch.tensor(1).expand_as(y)
y = y.to(torch.int64)
#validation_data
X_val = torch.tensor(validation_dataset['reviews'], dtype=torch.float)
    #Normalizing the validation data to keep it over a smaller range
X_val = (X_val - torch.mean(X_val))/torch.std(X_val)
y_val = torch.tensor(validation_dataset['ratings'], dtype=torch.float) -1
# y = y - torch.tensor(1).expand_as(y)
y_val = y_val.to(torch.int64)
# X_train,X_val,y_train, y_val = train_test_split(X,y,test_size=0.2,shuffle=True)
# sm = SMOTE()
# X1,y1 = sm.fit_resample(X,y)
#over = SMOTE()
#under = RandomUnderSampler()
#steps = [('o', over), ('u', under)]
#pipeline = Pipeline(steps=steps)
#smotenc = SMOTENC([1],random_state = 101)
#adasyn = ADASYN(random_state = 101)
#X1,y1 = adasyn.fit_resample(X,y)
#print('Original dataset shape:', Counter(y))
#print('Resample dataset shape:', Counter(y1))
    X1 = X.clone().detach()  # X is already a float tensor; clone instead of re-wrapping with torch.tensor
    y1 = y.clone().detach()  # y is already int64
train_dataset = torch.utils.data.TensorDataset(X1,y1)
    #concatenating the split datasets back together
# train_dataset = torch.utils.data.TensorDataset(X_train,y_train)
validation_dataset = torch.utils.data.TensorDataset(X_val,y_val)
# print(train_dataset)
    #loading the training and validation datasets in batches
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
val_loader = torch.utils.data.DataLoader(dataset=validation_dataset,
batch_size=batch_size,
shuffle=False)
#converting test data into tensor then normalizing it and loading it in batches
test_dataset = torch.tensor(test_dataset['reviews'], dtype=torch.float)
test_dataset = (test_dataset - torch.mean(test_dataset))/torch.std(test_dataset)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
    #calling the Neural Network model
model=NeuralNet(train_loader,val_loader)
# model.build_nn()
lr = 0.001
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
model.train_nn(batch_size,epochs,optimizer,model)
main_model = pickle.load(open('Rnn_Glove_model_withoutsmote.pkl','rb'))
output,_ = main_model.predict(test_dataset,main_model)
# output = model.predict(test_dataset,model)
output = output.cpu().detach().numpy()
print('output:',output)
return output
val = []
if __name__ == "__main__":
val=main("train.csv","test.csv")
original_data = pd.read_csv("gold_test.csv")['ratings']
print(val)
print(classification_report(original_data, val))
# + colab={"base_uri": "https://localhost:8080/"} id="2nmuvwJ7fDw6" outputId="b1adfd1c-6b41-4e5d-f3d8-d049c8164b87"
from sklearn import metrics
original_data = pd.read_csv("gold_test.csv")['ratings']
print("accuracy:",metrics.accuracy_score(original_data,val))
print("recall:",metrics.recall_score(original_data,val,average = 'weighted'))
print("precision:",metrics.precision_score(original_data,val,average = 'weighted'))
print("f1:",metrics.f1_score(original_data,val,average = 'weighted'))
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="sINVcvcBqdKi" outputId="1a79aa12-e4c7-4b84-917d-6a10b85fa824"
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
import itertools
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
cnf_matrix = confusion_matrix(original_data, val,labels=[1,2,3,4,5])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=[1,2,3,4,5],
title='Confusion matrix')
# + colab={"base_uri": "https://localhost:8080/"} id="olw8VJqABkCh" outputId="8620b581-0615-46e5-95b6-a4b90ae0f867"
# !pip install gradio
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="Eu3aNOhc9gRc" outputId="a0e77a50-24dc-42fa-a142-8a5e6c3340c0"
import gradio as gr
def generate_text(inp):
line = []
line.append(inp)
df = pd.DataFrame()
df['reviews'] = pd.Series(line)
df=preprocess_data(df,False)
df = df.loc[:,~df.columns.str.match("Unnamed")]
df = torch.tensor(df['reviews'], dtype=torch.float)
df = (df - torch.mean(df))/torch.std(df)
Model = pickle.load(open('Rnn_Glove_model_withoutsmote.pkl','rb'))
index,out = Model.predict(df,Model)
out = out.cpu().detach().numpy()
index = index.cpu().detach().numpy()
x = out
e_x = np.exp(x - np.max(x))
arr = e_x / e_x.sum()
dictionary = dict(zip(['1','2','3','4','5'], map(float, arr[0])))
# "probabilities: {},sentiment :{}".format( str(arr[0]),str(index[0]))
return dictionary
#gr.outputs.Textbox()
gr.Interface(generate_text,
"textbox",
gr.outputs.Label(num_top_classes=5)).launch(share=True) #, debug=True Use in Colab
# + id="rdAubPDoBfu5"
| RNN_pretrained_(1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # First Blog
# ---------
#
# This notebook demonstrates how to extract data from US SEC EDGAR filing reports. After scraping, text analysis was performed to derive sentiment opinion, sentiment scores, readability, passive-word counts, personal pronouns, etc.
# # Metrics
# ------
# - **Positive Score**: This score is calculated by assigning the value of +1 for each word if found in the Positive Dictionary and then adding up all the values.
# - **Negative Score**: This score is calculated by assigning the value of -1 for each word if found in the Negative Dictionary and then adding up all the values. We multiply the score with -1 so that the score is a positive number.
# - **Polarity Score**: $\frac{(\text{Positive Score} - \text{Negative Score})}{((\text{Positive Score} + \text{Negative Score}) + 0.000001)}$
# - **Subjective Score**: $\frac{(\text{Positive Score} + \text{Negative Score})}{((\text{Total Words after cleaning}) + 0.000001)}$
# - **Average Sentence Length**: $\frac{\text{No. of Words}}{\text{No. of Sentences}}$
# - **% Complex Words**: $\frac{\text{No. of complex words}}{\text{No. of words}}$
# - **Fog Index**: $0.4 * \text{(Average Sentence Length + % Complex Words)}$
# - **Personal Pronouns**: counted from the text; special care is taken so that the country name "US" is not counted as the pronoun "us".
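# A worked example of the score formulas above, with toy counts (illustrative numbers, not taken from any filing):

```python
# Hypothetical counts for one document
positive_score = 30
negative_score = 10
total_words = 400
complex_words = 80
total_sentences = 20
EPS = 0.000001  # guards against division by zero

polarity = (positive_score - negative_score) / (positive_score + negative_score + EPS)
subjectivity = (positive_score + negative_score) / (total_words + EPS)
avg_sentence_length = total_words / total_sentences
pct_complex = complex_words / total_words * 100
fog_index = 0.4 * (avg_sentence_length + pct_complex)

print(round(polarity, 3), round(subjectivity, 3), round(fog_index, 2))  # → 0.5 0.1 16.0
```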
# ## Libraries
import sys, os, glob
import re
import numpy as np
import pandas as pd
import bs4 as bs
import requests
import spacy # NLP
from spacy_syllables import SpacySyllables # Syllables
from pathlib import Path
from string import punctuation
# ### Custom NLP Model
# +
nlp = spacy.load('en_core_web_sm') # Custom Model
nlp.add_pipe('syllables', after='tagger') # Model for Syllable identification
# -
# ### Constants
# +
TEXT_LIMIT = 1_000_000 # Default SpaCy limit
URLArchive = "https://www.sec.gov/Archives/"
CONSTANT = 0.000_001
# -
# ## Constant Factory
#
# - Create List of the following Constants:
# - Positive Words
# - Negative Words
# - Stop Words (_Auditors_, _Currencies_, _Dates and Numbers_, _Generic_, _Generic Long_, _Geographic_, _Names_)
# - Constraining Words
# - Uncertainty Words
# ## StopWords
#
# - Scraped stopwords from the [official source](https://sraf.nd.edu/textual-analysis/resources/)
# - Saved the words in the folder _SentimentWordList_
# - Collected __positive__ and __negative__ words for creating a sentiment-word corpus.
# - Used a similar approach to create a corpus of __constraining__ and __uncertainty__ words.
# +
path = sorted(Path('.').glob(f"**/*SentimentWordList*")) # Stop Words file Path
sentiment_file = Path.cwd()/Path(path[0])
negs = pd.read_excel(io=sentiment_file, sheet_name='Negative',header=None).iloc[:,0].values
pos = pd.read_excel(io=sentiment_file, sheet_name='Positive',header=None).iloc[:,0].values
neg_words = [word.lower() for word in negs]
pos_words = [word.lower() for word in pos]
# -
def createStopWords():
"""Function to fetch Stopwords from various stop-words text files as provided in reference.
"""
stop_words = []
pathStopWords = sorted(Path('.').glob(f"**/stopwords*.txt")) # Stop Words file Path
if pathStopWords:
for filePath in pathStopWords:
fullPath = Path.cwd()/Path(filePath)
with open(fullPath, mode='r', encoding='utf8', errors='ignore') as file:
TEXT = file.read()
                file_stop_words = [line.split('|')[0] for line in TEXT.split('\n')]
                stop_words.extend(file_stop_words)
else:
raise FileNotFoundError(f'StopWords related files are not in the {Path.cwd()} and its subdirectories')
return(stop_words)
stopWordList = createStopWords() # Collect Stop words from all files
stopWordList = list(map(lambda x: x.lower(), stopWordList))
nlp.Defaults.stop_words -= nlp.Defaults.stop_words # Remove Default Stop Words
nlp.Defaults.stop_words = {stopword for stopword in stopWordList if stopword !=""} # Add custom stop words
# Create Constraint list
path_constraint = sorted(Path('.').glob(f"**/*constraining*")) # Constraining words file path
constraining_file = Path.cwd()/Path(path_constraint[0])
constraints = pd.read_excel(io=constraining_file, sheet_name=0,header=0).iloc[:,0].values
constraints = [w.lower() for w in constraints]
# Create Uncertainty List
path_uncertainty = sorted(Path('.').glob(f"**/*uncertainty*")) # Uncertainty words file path
uncertainty_file = Path.cwd()/Path(path_uncertainty[0])
uncertainty = pd.read_excel(io=uncertainty_file, sheet_name=0,header=0).iloc[:,0].values
uncertainty = [w.lower() for w in uncertainty]
# ## Helper Functions
#
# - Performs data cleaning, formatting and NLP tasks.
def replaceHTMLTags(text):
"Function that uses Regex to create word tokens."
text = re.sub("<[^>]*>", "", text) # Remove HTML Tags
text = re.sub(r'[^\w\s]',"", text) # Remove Punctuations
return(text)
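# A quick check of the cleaning function on an illustrative string (note that removing punctuation also collapses numbers like 4.5 into 45):

```python
import re

def strip_tags_and_punct(text):
    text = re.sub(r"<[^>]*>", "", text)  # drop HTML tags
    text = re.sub(r"[^\w\s]", "", text)  # drop punctuation
    return text

sample = "<p>Net income rose 4.5%, beating estimates!</p>"
print(strip_tags_and_punct(sample))  # → Net income rose 45 beating estimates
```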
# +
# For Data Scraping
def save_text(url):
"""Function to save the results of the scraped text in 'raw' directory."""
file_name = Path.cwd()/Path("raw/"+url.split('/')[-1]) # Save .txt file in raw folder
try:
data = requests.get(url, timeout = 10) # Standard Timeout according to SEC.gov
        response = data.status_code
        if response != 200:  # treat anything other than HTTP 200 as a failure
            return(url)
        else:
            data = data.content.decode('utf-8')
            with open(file_name, 'w') as f:
                f.write(data)
    except requests.exceptions.RequestException:
        pass
        # raise ProxyError('Unable to connect!')
# -
def read_from_txt(secfname):
"""
Function to read from the text file related to SECFNAME.
Params:
------
secfname: str, SECFNAME column value.
"""
text_file = secfname.split('/')[-1]
file_path = sorted(Path('.').glob(f"**/{text_file}"))
with open(file_path[0]) as f:
TEXT = f.read()
return(TEXT)
def get_sentiments(TEXT):
"""Function to get various count in a text.
Params:
------
TEXT: str, Input text
    Returns:
    -------
    List[count_pos_sents, count_neg_sents, total_complex_words,
         total_words, total_sents, total_const, total_uncertain]:
        count_pos_sents: int, No. of positive words in the SEC filing
        count_neg_sents: int, No. of negative words in the SEC filing
        total_complex_words: int, No. of complex words (more than two syllables) in the SEC filing
        total_words: int, Total words post cleanup in the SEC filing
        total_sents: int, Total sentences in the SEC filing
        total_const: int, Total no. of constraining words
        total_uncertain: int, Total no. of uncertainty words
    """
print("startint sentiment analysis...")
if not TEXT:
return([CONSTANT]*7) # To avoid ZeroDivisionError
else:
if len(TEXT)< TEXT_LIMIT:
doc = nlp(TEXT)
else:
nlp.max_length = len(TEXT) +1
doc = nlp(TEXT, disable = ['ner'])
print("document loaded...")
count_pos_sents = 0
count_neg_sents = 0
total_complex_words = 0
total_words = 0
total_const = 0
total_uncertain = 0
total_sents = 0
for token in doc:
# Positive Word Count
if (token.lower_ in pos_words):
count_pos_sents += 1
# Negative Word Count
if (token.lower_ in neg_words):
count_neg_sents +=1
# Complex Word Count
if (token._.syllables_count is not None and token._.syllables_count >2):
total_complex_words +=1
# Total Words
if (token.lower_ not in nlp.Defaults.stop_words):
total_words +=1
# Count Constraints
if (token.lower_ in constraints):
total_const +=1
# Count uncertainty
if (token.lower_ in uncertainty):
total_uncertain +=1
# Total Sentences
total_sents = sum(1 for sent in doc.sents)
return([count_pos_sents,count_neg_sents, total_complex_words,
total_words, total_sents, total_const, total_uncertain])
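# The per-token tallies above reduce to set-membership tests. A minimal spaCy-free sketch of the same counting logic (toy word lists, not the Loughran-McDonald ones):

```python
pos_words = {"gain", "growth"}
neg_words = {"loss", "decline"}
tokens = "revenue growth offset a small loss this quarter".lower().split()

count_pos = sum(t in pos_words for t in tokens)
count_neg = sum(t in neg_words for t in tokens)
print(count_pos, count_neg)  # → 1 1
```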
# ## Data Loading
data = pd.read_excel('cik_list.xlsx', sheet_name='cik_list_ajay', header=0)
# ## Text Mining Pipeline
# ---------
# - Reads the TEXT file into the SpaCy's vectorized format.
# - Removes possible HTML related tags and punctuations.
# - Performs Sentiment analysis: calculates, positive, negative and polarity score
# - Calculates Complex Word count
# - Calculates Total Word count
# - Calculates Uncertainty count
# - Calculates Constraining count.
# - Returns results as pandas columns
text_mining_pipeline = lambda secfname: get_sentiments(replaceHTMLTags(read_from_txt(secfname)))
# Calculated positive score, negative score, complex word count, word count, sentence count
# (stored in the sentence_length column), uncertainty score and constraining score
data[['positive_score','negative_score','complex_word_count','word_count','sentence_length',\
'uncertainty_score', 'constraining_score']] = data.apply(lambda x: text_mining_pipeline(x['SECFNAME']), axis=1).apply(pd.Series)
# Calculated Polarity Score
data['polarity_score'] = data.apply(lambda row: (row['positive_score'] - row['negative_score'])/(row['positive_score'] + row['negative_score']+ CONSTANT) , axis=1)
# Calculated Average Sentence Length
data['average_sentence_length'] = data.apply(lambda row: row['word_count']/row['sentence_length'], axis=1)
# Calculated % Complex Words
data['percentage_of_complex_words'] = data.apply\
(lambda row: (row['complex_word_count']/row['word_count'])*100, axis=1)
# Calculated Fog Index
data['fog_index'] = 0.4*(data['average_sentence_length'] + data['percentage_of_complex_words'])
# Calculated Positive Word Proportion
data['positive_word_proportion'] = data.apply(lambda r: r['positive_score']/r['word_count'], axis=1)
# Calculated Negative Word Proportion
data['negative_word_proportion'] = data.apply(lambda r: r['negative_score']/r['word_count'], axis=1)
# Calculated Uncertainty Word Proportion
data['uncertainty_word_proportion'] = data.apply(lambda r: r['uncertainty_score']/ r['word_count'] ,axis=1)
# Calculated Constraining Word Proportion
data['constraining_word_proportion'] = data.apply(lambda r: r['constraining_score']/ r['word_count'] ,axis=1)
# Calculated constraining word count across the whole report
data['constraining_words_whole_report'] = np.sum(data.constraining_score) # Broadcasting the total constraining score of all the docs.
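# The row-wise `apply` calls above are easy to read but slow on large frames; the same scores can be computed with whole-column arithmetic. A sketch with toy data (assuming the same column names as above):

```python
import pandas as pd

df = pd.DataFrame({"positive_score": [30, 5], "negative_score": [10, 15],
                   "word_count": [400, 200], "sentence_length": [20, 10]})
EPS = 0.000001

# Whole-column arithmetic: identical results to the per-row lambdas, much faster.
df["polarity_score"] = (df["positive_score"] - df["negative_score"]) / (
    df["positive_score"] + df["negative_score"] + EPS)
df["average_sentence_length"] = df["word_count"] / df["sentence_length"]
print(df["polarity_score"].round(3).tolist())  # → [0.5, -0.5]
```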
# ### Data Saving
data.to_csv(path_or_buf=Path.cwd()/Path('result.csv'),
columns=['CIK', 'CONAME', 'FYRMO', 'FDATE', 'FORM', 'SECFNAME', 'positive_score', \
'negative_score', 'polarity_score', 'average_sentence_length', 'percentage_of_complex_words', 'fog_index',\
'complex_word_count', 'word_count', 'uncertainty_score', 'constraining_score', 'positive_word_proportion',\
'negative_word_proportion', 'uncertainty_word_proportion', 'constraining_word_proportion', 'constraining_words_whole_report'],
index=False)
| _notebooks/2020-06-13-seniment-analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
import pickle
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from pymo.parsers import BVHParser
from pymo.preprocessing import *
from pymo.viz_tools import *
# +
p = BVHParser()
# data_all = [p.parse('../../MocapFiles/8walk/14_NVLA_8_walk_meredith1.bvh')]
data_all = [p.parse('./data/AV_8Walk_Meredith_HVHA_Rep1.bvh')]
# -
print_skel(data_all[0])
data_all[0].values.head(10)
dr_pipe = Pipeline([
('param', MocapParameterizer('position')),
])
data_all[0].values.shape
xx = dr_pipe.fit_transform(data_all)
df=xx[0].values
draw_stickfigure(xx[0], 600, xx[0].values)
frame = 100
fig = plt.figure(figsize=(8,8))
#ax = fig.add_subplot(111, projection='3d')
ax = fig.add_subplot(111)
for joint in ['LeftArm', 'RightArm', 'Head', 'LeftFoot','RightFoot','Hips','Spine', 'Head_Nub','RightUpLeg', 'LeftUpLeg']:
ax.scatter(x=df['%s_Yposition'%joint][frame],
y=-df['%s_Zposition'%joint][frame],
# zs=df['%s_Yposition'%joint][frame],
alpha=0.3, c='b', marker='o')
ax.annotate(joint,
(df['%s_Yposition'%joint][frame] + 0.5,
-df['%s_Zposition'%joint][frame] + 0.5))
xx[0].values.head(10)
xx[0].values.plot(x='Hips_Xposition', y='Hips_Zposition',figsize=(15,8))
xx[0].values.plot.scatter(x='Hips_Xposition', y='Hips_Zposition', figsize=(15,8))
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=df.Hips_Xposition, ys=df.Hips_Zposition, zs=df.Hips_Yposition, alpha=0.3, c='b', marker='o')
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=df.LeftArm_Xposition, ys=df.LeftArm_Zposition, zs=df.LeftArm_Yposition, alpha=0.3, c='b', marker='o')
| demos/Position.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 with widget support (Andy)
# language: python
# name: ajc_widgets_p37
# ---
# +
import os
import numpy as np
import pandas as pd
import pickle
import bqplot as bq
from ipywidgets import Layout, Box
# +
from bqplot import LinearScale, Axis, Lines, Scatter, Hist, Figure
from bqplot.interacts import (
FastIntervalSelector, IndexSelector, BrushIntervalSelector,
BrushSelector, MultiSelector, LassoSelector, PanZoom, HandDraw
)
from traitlets import link
from ipywidgets import ToggleButtons, VBox, HTML
# -
ddir = '/epyc/projects/sso-lc/notebooks/aug_29_2019'
filename = 't.csv'
d = pd.read_csv(os.path.join(ddir, filename))
d[0:3]
both = d.query('period > 0')
# +
x_sc = LinearScale(min=0, max=60)
y_sc = LinearScale(min=0, max=60)
x_data = both.fit_period.values
y_data = both.period.values
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],
interactions={'click': 'select'},
selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},
unselected_style={'opacity': 0.5})
ax_x = Axis(scale=x_sc, tick_format='0.0f', label='Fit period')
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.0f', label='ALCDEF period')
Figure(marks=[scatter_chart], axes=[ax_x, ax_y])
# -
cols = ['ztfname_x', 'Nobs', 'Nights', 'mag_med', 'sig_med', 'fit_period', 'period', 'fit_amp', 'AmpMax', 'U']
both.iloc[scatter_chart.selected][cols]
def plot_scatter(x=[], y=[], zid=[], color='red', filt=''):
'''Create and return Scatter plot'''
#TODO: tooltip format with all data
tooltip = bq.Tooltip(fields=['x', 'y', 'zid'], formats=['.2f', '.2f', ''],
labels=['fit_period', 'lcd_period', 'ztfname'])
sc_x = bq.LinearScale(min=0, max=50)
sc_y = bq.LinearScale(min=0, max=50)
scatt = bq.Scatter(
scales={'x': sc_x, 'y': sc_y},
tooltip=tooltip,
tooltip_style={'opacity': 0.5},
interactions={'hover': 'tooltip'},
unhovered_style={'opacity': 0.5},
selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},
unselected_style={'opacity': 0.5},
display_legend=False)
scatt.colors = [color]
scatt.label = filt
    if y is not None and len(y) > 0:
        scatt.x = x
        scatt.y = y
#scatt.on_element_click(display_info)
return scatt
both['fit_period'].values[0:3], both['period'].values[0:3], both['ztfname_x'].values[0:3]
scat = plot_scatter(both['fit_period'].values, both['period'].values,
both['ztfname_x'].values, color='dodgerblue')
# +
sc_x = bq.LinearScale()
sc_y = bq.LinearScale()
xax = bq.Axis(label='Fit Period', scale=sc_x,
grid_lines='solid',
label_location="middle")
xax.tick_style={'stroke': 'black', 'font-size': 12}
yax = bq.Axis(label='ALCDEF Period', scale=sc_y,
orientation='vertical', tick_format='0.1f',
grid_lines='solid', label_location="middle")
yax.tick_style={'stroke': 'black', 'font-size': 12}
ax_x = Axis(scale=sc_x, tick_format='0.0f', label='Fit period')
ax_y = Axis(scale=sc_y, orientation='vertical', tick_format='0.0f', label='ALCDEF period')
#panzoom = bq.PanZoom(scales={'x': [sc_x], 'y': [sc_y]})
bq.Figure(#axes=[ax_x, ax_y],
marks=[scat],
layout=Layout(width='500px', height='500px'),
fig_margin = {'top': 0, 'bottom': 40, 'left': 50, 'right': 0},
#legend_location='top-right',
)
# -
help(sc_x)
| TrojanScatterPlot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Link prediction with GraphSAGE
# + [markdown] nbsphinx="hidden" tags=["CloudRunner"]
# <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/link-prediction/graphsage-link-prediction.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/link-prediction/graphsage-link-prediction.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
# -
# In this example, we use our implementation of the [GraphSAGE](http://snap.stanford.edu/graphsage/) algorithm to build a model that predicts citation links in the Cora dataset (see below). The problem is treated as a supervised link prediction problem on a homogeneous citation network with nodes representing papers (with attributes such as binary keyword indicators and categorical subject) and links corresponding to paper-paper citations.
#
# To address this problem, we build a model with the following architecture. First we build a two-layer GraphSAGE model that takes labeled node pairs (`citing-paper` -> `cited-paper`) corresponding to possible citation links, and outputs a pair of node embeddings for the `citing-paper` and `cited-paper` nodes of the pair. These embeddings are then fed into a link classification layer, which first applies a binary operator to those node embeddings (e.g., concatenating them) to construct the embedding of the potential link. Thus obtained link embeddings are passed through the dense link classification layer to obtain link predictions - probability for these candidate links to actually exist in the network. The entire model is trained end-to-end by minimizing the loss function of choice (e.g., binary cross-entropy between predicted link probabilities and true link labels, with true/false citation links having labels 1/0) using stochastic gradient descent (SGD) updates of the model parameters, with minibatches of 'training' links fed into the model.
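# The "binary operator" step can be sketched in a few lines. Given a pair of node embeddings (made-up numbers below), the common operators produce a link embedding or score like so:

```python
import numpy as np

# Hypothetical node embeddings for the citing and cited papers
src = np.array([0.1, 0.4, -0.2])
dst = np.array([0.3, -0.1, 0.5])

concat = np.concatenate([src, dst])  # length-2d vector, fed into the dense classification layer
hadamard = src * dst                 # elementwise product, length d
inner = float(np.dot(src, dst))      # scalar score (the "ip" operator used below)

print(concat.shape, hadamard.shape, round(inner, 2))  # → (6,) (3,) -0.11
```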
# + nbsphinx="hidden" tags=["CloudRunner"]
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
# %pip install -q stellargraph[demos]==1.2.1
# + nbsphinx="hidden" tags=["VersionCheck"]
# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
try:
sg.utils.validate_notebook_version("1.2.1")
except AttributeError:
raise ValueError(
f"This notebook requires StellarGraph version 1.2.1, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
) from None
# +
import stellargraph as sg
from stellargraph.data import EdgeSplitter
from stellargraph.mapper import GraphSAGELinkGenerator
from stellargraph.layer import GraphSAGE, HinSAGE, link_classification
from tensorflow import keras
from sklearn import preprocessing, feature_extraction, model_selection
from stellargraph import globalvar
from stellargraph import datasets
from IPython.display import display, HTML
# %matplotlib inline
# -
# ## Loading the CORA network data
# + [markdown] tags=["DataLoadingLinks"]
# (See [the "Loading from Pandas" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.)
# + tags=["DataLoading"]
dataset = datasets.Cora()
display(HTML(dataset.description))
G, _ = dataset.load(subject_as_feature=True)
# -
print(G.info())
# We aim to train a link prediction model, hence we need to prepare the train and test sets of links and the corresponding graphs with those links removed.
#
# We are going to split our input graph into a train and test graphs using the EdgeSplitter class in `stellargraph.data`. We will use the train graph for training the model (a binary classifier that, given two nodes, predicts whether a link between these two nodes should exist or not) and the test graph for evaluating the model's performance on hold out data.
# Each of these graphs will have the same number of nodes as the input graph, but the number of links will differ (be reduced) as some of the links will be removed during each split and used as the positive samples for training/testing the link prediction classifier.
# From the original graph G, extract a randomly sampled subset of test edges (true and false citation links) and the reduced graph G_test with the positive test edges removed:
# +
# Define an edge splitter on the original graph G:
edge_splitter_test = EdgeSplitter(G)
# Randomly sample a fraction p=0.1 of all positive links, and same number of negative links, from G, and obtain the
# reduced graph G_test with the sampled links removed:
G_test, edge_ids_test, edge_labels_test = edge_splitter_test.train_test_split(
p=0.1, method="global", keep_connected=True
)
# -
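# Conceptually, the splitter holds out a fraction of real edges as positives and pairs them with an equal number of sampled non-edges as negatives. A toy sketch of that idea — not the StellarGraph implementation, and without its `keep_connected` guarantee:

```python
import random

random.seed(0)
nodes = list(range(5))
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (3, 4)]
edge_set = set(edges)

k = int(0.5 * len(edges))        # hold out half the real edges as positives
positives = random.sample(edges, k)

negatives = []                   # sample an equal number of non-edges
while len(negatives) < k:
    u, v = random.sample(nodes, 2)
    if (u, v) not in edge_set and (v, u) not in edge_set and (u, v) not in negatives:
        negatives.append((u, v))

edge_ids = positives + negatives
edge_labels = [1] * k + [0] * k  # 1 = real edge, 0 = sampled non-edge
print(len(edge_ids), edge_labels)
```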
# The reduced graph G_test, together with the test ground truth set of links (edge_ids_test, edge_labels_test), will be used for testing the model.
#
# Now repeat this procedure to obtain the training data for the model. From the reduced graph G_test, extract a randomly sampled subset of train edges (true and false citation links) and the reduced graph G_train with the positive train edges removed:
# +
# Define an edge splitter on the reduced graph G_test:
edge_splitter_train = EdgeSplitter(G_test)
# Randomly sample a fraction p=0.1 of all positive links, and same number of negative links, from G_test, and obtain the
# reduced graph G_train with the sampled links removed:
G_train, edge_ids_train, edge_labels_train = edge_splitter_train.train_test_split(
p=0.1, method="global", keep_connected=True
)
# -
# G_train, together with the train ground truth set of links (edge_ids_train, edge_labels_train), will be used for training the model.
# Summary of G_train and G_test - note that they have the same set of nodes, only differing in their edge sets:
print(G_train.info())
print(G_test.info())
# Next, we create the link generators for sampling and streaming train and test link examples to the model. The link generators essentially "map" pairs of nodes (`citing-paper`, `cited-paper`) to the input of GraphSAGE: they take minibatches of node pairs, sample 2-hop subgraphs with (`citing-paper`, `cited-paper`) head nodes extracted from those pairs, and feed them, together with the corresponding binary labels indicating whether those pairs represent true or false citation links, to the input layer of the GraphSAGE model, for SGD updates of the model parameters.
#
# Specify the minibatch size (number of node pairs per minibatch) and the number of epochs for training the model:
# + tags=["parameters"]
batch_size = 20
epochs = 20
# -
# Specify the sizes of 1- and 2-hop neighbour samples for GraphSAGE. Note that the length of the `num_samples` list defines the number of layers/iterations in the GraphSAGE model. In this example, we are defining a 2-layer GraphSAGE model:
num_samples = [20, 10]
# For training we create a generator on the `G_train` graph, and make an iterator over the training links using the generator's `flow()` method. The `shuffle=True` argument is given to the `flow` method to improve training.
train_gen = GraphSAGELinkGenerator(G_train, batch_size, num_samples)
train_flow = train_gen.flow(edge_ids_train, edge_labels_train, shuffle=True)
# At test time we use the `G_test` graph and don't specify the `shuffle` argument (it defaults to `False`).
test_gen = GraphSAGELinkGenerator(G_test, batch_size, num_samples)
test_flow = test_gen.flow(edge_ids_test, edge_labels_test)
# Build the model: a 2-layer GraphSAGE model acting as node representation learner, with a link classification layer on concatenated (`citing-paper`, `cited-paper`) node embeddings.
#
# GraphSAGE part of the model, with hidden layer sizes of 20 for both GraphSAGE layers, a bias term, and a dropout rate of 0.3. (Dropout can be tuned by specifying any rate 0 < dropout < 1, or switched off by setting it to 0.)
# Note that the length of the `layer_sizes` list must be equal to the length of `num_samples`, as `len(num_samples)` defines the number of hops (layers) in the GraphSAGE model.
layer_sizes = [20, 20]
graphsage = GraphSAGE(
layer_sizes=layer_sizes, generator=train_gen, bias=True, dropout=0.3
)
# Build the model and expose input and output sockets of graphsage model
# for link prediction
x_inp, x_out = graphsage.in_out_tensors()
# Final link classification layer that takes a pair of node embeddings produced by GraphSAGE, applies a binary operator to them to produce the corresponding link embedding (`ip` for inner product; other options for the binary operator can be seen by running a cell with `?link_classification` in it), and passes it through a dense layer:
prediction = link_classification(
output_dim=1, output_act="relu", edge_embedding_method="ip"
)(x_out)
# Stack the GraphSAGE and prediction layers into a Keras model, and specify the loss
# +
model = keras.Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.binary_crossentropy,
metrics=["acc"],
)
# -
# Evaluate the initial (untrained) model on the train and test set:
# +
init_train_metrics = model.evaluate(train_flow)
init_test_metrics = model.evaluate(test_flow)
print("\nTrain Set Metrics of the initial (untrained) model:")
for name, val in zip(model.metrics_names, init_train_metrics):
print("\t{}: {:0.4f}".format(name, val))
print("\nTest Set Metrics of the initial (untrained) model:")
for name, val in zip(model.metrics_names, init_test_metrics):
print("\t{}: {:0.4f}".format(name, val))
# -
# Train the model:
history = model.fit(train_flow, epochs=epochs, validation_data=test_flow, verbose=2)
# Plot the training history:
sg.utils.plot_history(history)
# Evaluate the trained model on test citation links:
# +
train_metrics = model.evaluate(train_flow)
test_metrics = model.evaluate(test_flow)
print("\nTrain Set Metrics of the trained model:")
for name, val in zip(model.metrics_names, train_metrics):
print("\t{}: {:0.4f}".format(name, val))
print("\nTest Set Metrics of the trained model:")
for name, val in zip(model.metrics_names, test_metrics):
print("\t{}: {:0.4f}".format(name, val))
# + [markdown] nbsphinx="hidden" tags=["CloudRunner"]
# <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/link-prediction/graphsage-link-prediction.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/link-prediction/graphsage-link-prediction.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
| demos/link-prediction/graphsage-link-prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# ---
# # Section 2.6: Propagation of Roundoff Errors in Gaussian Elimination
# ---
# The $n \times n$ Hilbert matrix $H$ is defined by
#
# $$h_{ij} = \frac{1}{i+j-1}, \quad i,j = 1,\ldots,n.$$
#
# This matrix is known to be very ill-conditioned.
#
# Catastrophic cancellation during Gaussian elimination can occur on such badly conditioned matrices.
# + jupyter={"outputs_hidden": false}
n = 7
H = Rational{BigInt}[1//(i+j-1) for i=1:n, j=1:n]
# -
# The inverse of $H$ has integer entries.
# + jupyter={"outputs_hidden": false}
inv(H)
# + jupyter={"outputs_hidden": false}
invH = round.(Int, inv(H))
# + jupyter={"outputs_hidden": false}
invH*H
# + jupyter={"outputs_hidden": false}
round.(Int, invH*H)
# -
# We will first compute the $LU$-decomposition of $H$ using exact rational arithmetic to find the _exact_ values of $U$.
using LinearAlgebra
# + jupyter={"outputs_hidden": false}
L, U, p = lu(H);
U
# -
# Now let's store the Hilbert matrix $H$ in double floating-point format.
# + jupyter={"outputs_hidden": false}
Hd = map(Float64, H)
# -
# The following $LU$-decomposition is computed using floating-point arithmetic.
# + jupyter={"outputs_hidden": false}
Ld, Ud, pd = lu(Hd);
Ud
# -
# Comparing the true $U$ with the $U_d$ that was computed using floating-point arithmetic, we see that there is a complete loss of precision.
# + jupyter={"outputs_hidden": false}
map(Float64, U)
# + jupyter={"outputs_hidden": false}
norm(Ud - map(Float64,U))/norm(map(Float64, U))
# -
# However, the error in the $LU$-decomposition of $H$ is small.
# + jupyter={"outputs_hidden": false}
E = Ld*Ud - Hd[pd,:]
# + jupyter={"outputs_hidden": false}
norm(E)/norm(Hd)
# -
# # Solving $Hx = b$
# + jupyter={"outputs_hidden": false}
n = 16
H = Rational{BigInt}[1//(i+j-1) for i=1:n, j=1:n]
x = ones(Rational{BigInt}, n)
b = H*x
xhat = H\b
# + jupyter={"outputs_hidden": false}
n = 16
H = Float64[1/(i+j-1) for i=1:n, j=1:n]
x = ones(n)
b = H*x
xhat = H\b
# -
# The computed solution $\hat{x}$ is very far from the true solution $x$.
# + jupyter={"outputs_hidden": false}
norm(xhat - x)/norm(x)
# -
# The Hilbert matrix is very ill-conditioned, so solving $Hx=b$ is very sensitive to roundoff errors. Here we compute the condition number $\kappa_2(H)$.
# + jupyter={"outputs_hidden": false}
cond(H)
# -
# However, the residual $\hat{r} = b - H\hat{x}$ is very small, so $\hat{x}$ exactly satisfies the slightly perturbed system
#
# $$H\hat{x} = b + \delta b,$$
#
# where $\delta b = -\hat{r}$.
# + jupyter={"outputs_hidden": false}
rhat = b - H*xhat;
norm(rhat)/norm(b)
# -
# Therefore, the computation of $\hat{x}$ is **backward stable**. The large error in $\hat{x}$ is completely due to the fact that $H$ is very ill-conditioned.
cond(H)*norm(rhat)/norm(b)
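# The same experiment can be reproduced outside Julia. Below is a NumPy sketch (not part of this Julia notebook; the Hilbert matrix is built by hand) illustrating the bound $\|\hat{x}-x\|/\|x\| \le \kappa_2(H)\,\|\hat{r}\|/\|b\|$: the residual is tiny even though the forward error is large.

```python
import numpy as np

n = 12
# Hilbert matrix H[i, j] = 1/(i + j + 1) with 0-based indices
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

x = np.ones(n)
b = H @ x
xhat = np.linalg.solve(H, b)

forward_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
rel_residual = np.linalg.norm(b - H @ xhat) / np.linalg.norm(b)

# The residual is tiny (backward stability) while the forward error
# can be as large as cond(H) times the relative residual
print(forward_err, np.linalg.cond(H) * rel_residual)
```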
# ---
| Section 2.6 - Propagation of Roundoff Errors in Gaussian Elimination.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
# %load_ext autoreload
# %autoreload
from simba import transfer_function_to_graph, tf2rss, adiabatically_eliminate
from sympy import symbols, simplify, Matrix, sqrt, conjugate, lambdify
# -
from simba.utils import construct_permutation_matrix
construct_permutation_matrix(6)
# Tuned cavity
s = symbols('s')
gamma_f = symbols('gamma_f', real=True, positive=True)
tf = (s + gamma_f) / (s - gamma_f)
split_network = tf2rss(tf).to_slh().split()
gamma, = split_network.aux_coupling_constants
split_network.state_vector
split_network.dynamical_matrix.eqns
tf = split_network.tfm.open_loop('ain', 'aout')
tf
adiabatically_eliminate(tf, gamma).simplify()
tf = split_network.tfm.open_loop('a', 'aout').simplify()
tf
adiabatically_eliminate(tf, gamma).simplify()
tf = split_network.tfm.open_loop('ain', 'a').simplify()
tf
adiabatically_eliminate(tf, gamma).simplify()
split_network.interaction_hamiltonian.h
# First, we look at the passive realisation of the coupled-cavity setup with coupling constant $g = 0$.
# + pycharm={"name": "#%%\n"}
s = symbols('s')
gamma_f, omega_s = symbols('gamma_f omega_s', real=True, positive=True)
tf = (s**2 + s * gamma_f + omega_s**2) / (s**2 - s * gamma_f + omega_s**2)
transfer_function_to_graph(tf, 'passive_coupled_cavity.png', layout='dot')
# -
# 
split_network = tf2rss(tf).to_slh().split()
# + pycharm={"name": "#%%\n"}
h_int = split_network.interaction_hamiltonian
h_int.expr.simplify()
# -
split_network.interaction_hamiltonian.h
h_int.states
simplify(h_int.dynamical_matrix)
# Looking at adiabatic elimination of $a_1'$
#
# $\dot{a}_1' = -\gamma_1 a_1' - \sqrt{\gamma_1 \gamma_f} a_1 + \sqrt{2 \gamma_1} a_\text{in}$
#
# adiabatic elimination: $\dot{a}_1' = 0$
#
# $a_1' = \sqrt{\frac{\gamma_f}{\gamma_1}} a_1 - \sqrt{\frac{2}{\gamma_1}} a_\text{in}$
#
# $H_\text{int} = i \sqrt{2\gamma_f}(a_\text{in}^\dagger a_1 - a_\text{in} a_1^\dagger)$
split_network.dynamical_matrix.states.states
split_network.dynamical_matrix.eqns
# +
# Calculating the input-output transfer function
tfm = split_network.tfm
tf = tfm.open_loop('ain_1', 'aout_1').simplify()
gamma_1, _ = split_network.aux_coupling_constants
adiabatically_eliminate(tf, gamma_1)
# -
tf = tfm.open_loop('a_1', 'aout_1').simplify()
gamma_1, _ = split_network.aux_coupling_constants
adiabatically_eliminate(tf, gamma_1)
# Now looking at the active realisation ($g \neq 0$)
# + pycharm={"name": "#%%\n"}
# parameterise with lambda = g**2 - omega_s**2 > 0
lmbda = symbols('lambda', real=True, positive=True)
tf = (s**2 + s * gamma_f - lmbda) / (s**2 - s * gamma_f - lmbda)
transfer_function_to_graph(tf, 'active_coupled_cavity.pdf', layout='dot')
# -
# 
split_network = tf2rss(tf).to_slh().split()
h_int = split_network.interaction_hamiltonian
h_int.expr.simplify()
simplify(h_int.dynamical_matrix)
split_network.frequency_domain_eqns
# +
# Calculating the input-output transfer function
tfm = split_network.tfm
tf = tfm.open_loop('ain_1', 'aout_1').simplify()
gamma_1, _ = split_network.aux_coupling_constants
adiabatically_eliminate(tf, gamma_1)
# -
(s**2 + s * gamma_f - lmbda) / (s**2 - s * gamma_f - lmbda)
# This differs by a phase shift of $\pi$.
# Now let's look at the transfer function from $a_1$ to $aout_1$, which we expect to be frequency independent.
tf = tfm.open_loop('a_1', 'aout_1').simplify()
gamma_1, _ = split_network.aux_coupling_constants
adiabatically_eliminate(tf, gamma_1)
| notebooks/coupled-cavity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
# -
# ## Modal analysis of the lunar lander
#
# The dynamic matrix $A$ that represents the Lunar Lander dynamics is (see example [Lunar lander lateral position dynamics](SS-11-Lunar_lander_lateral_position_dynamics) for more details):
#
# $$
# A=\begin{bmatrix}0&1&0&0 \\ 0&0&F/m&0 \\ 0&0&0&1 \\ 0&0&0&0\end{bmatrix},
# $$
#
# where $F$ is the thrust force and $m$ the mass of the lander. The state of the system is $x=[z,\dot{z},\theta,\dot{\theta}]^T$, where $z$ is the lateral position, $\dot{z}$ the time variation of the lateral position, $\theta$ the orientation angle of the lander with respect to the vertical and $\dot{\theta}$ its variation in time.
#
# The dynamic matrix in this form has four eigenvalues, all equal to 0. Zero eigenvalues are often called integrators (recall the Laplace transform of the integral of a signal: what is the root of the denominator of its expression?), so this system is said to have 4 integrators. With $F\neq0$ (and $m\neq0$) the system presents a structure that is similar to a $4\times4$ Jordan block, so the eigenvalue 0, in this case, has a geometric multiplicity equal to 1. With $F=0$ the eigenvalue keeps the same algebraic multiplicity but its geometric multiplicity becomes 2.
#
# Presented below is an example with $F\neq0$.
#
# ### How to use this notebook?
#
# - Try to set $F=0$ and explain what this case physically implies for the lander, especially for the $z$ and $\theta$ dynamics and their relationship.
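# The geometric-multiplicity claim above can be checked numerically: for the eigenvalue 0, the geometric multiplicity equals $4 - \operatorname{rank}(A)$. A minimal NumPy sketch (the helper name is ours, not part of this notebook):

```python
import numpy as np

def geometric_multiplicity_of_zero(F, m):
    # Dynamic matrix A of the lunar lander (see the text above)
    A = np.array([[0., 1., 0.,  0.],
                  [0., 0., F/m, 0.],
                  [0., 0., 0.,  1.],
                  [0., 0., 0.,  0.]])
    # Geometric multiplicity of eigenvalue 0 = dim ker(A) = n - rank(A)
    return A.shape[0] - np.linalg.matrix_rank(A)

print(geometric_multiplicity_of_zero(1500., 1000.))  # 1: a single 4x4 Jordan block
print(geometric_multiplicity_of_zero(0., 1000.))     # 2: z and theta dynamics decouple
```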
# +
#Preparatory Cell
import control
import numpy
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
# %matplotlib inline
#matrixWidget is a matrix-like widget built with a VBox of HBox(es) that returns a NumPy array as its value
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('<EMAIL>', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overload class for state space systems that DOES NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
# -
#define the sliders for m and F
m = widgets.FloatSlider(
value=1000,
min=400,
max=2000,
step=1,
description='$m$ [kg]:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
F = widgets.FloatSlider(
value=1500,
min=0,
max=5000,
step=10,
description='$F$ [N]:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
# +
#function that makes all the computations
def main_callback(m, F):
eig1 = 0
eig2 = 0
eig3 = 0
eig4 = 0
if numpy.real([eig1,eig2,eig3,eig4])[0] == 0 and numpy.real([eig1,eig2,eig3,eig4])[1] == 0:
T = numpy.linspace(0,20,1000)
else:
if min(numpy.abs(numpy.real([eig1,eig2,eig3,eig4]))) != 0:
T = numpy.linspace(0,7*1/min(numpy.abs(numpy.real([eig1,eig2,eig3,eig4]))),1000)
else:
T = numpy.linspace(0,7*1/max(numpy.abs(numpy.real([eig1,eig2,eig3,eig4]))),1000)
if F==0:
mode1 = numpy.exp(eig1*T)
mode2 = T*mode1
mode3 = mode1
mode4 = mode2
    else:
        mode1 = numpy.exp(eig1*T)
        mode2 = T*mode1
        mode3 = T**2/2*mode1  # t^2/2, matching the modes listed in the text below
        mode4 = T**3/6*mode1  # t^3/6
fig = plt.figure(figsize=[16, 10])
fig.set_label('Modes')
g1 = fig.add_subplot(221)
g2 = fig.add_subplot(222)
g3 = fig.add_subplot(223)
g4 = fig.add_subplot(224)
g1.plot(T,mode1)
g1.grid()
g1.set_xlabel('Time [s]')
g1.set_ylabel('First mode')
g2.plot(T,mode2)
g2.grid()
g2.set_xlabel('Time [s]')
g2.set_ylabel('Second mode')
g3.plot(T,mode3)
g3.grid()
g3.set_xlabel('Time [s]')
g3.set_ylabel('Third mode')
g4.plot(T,mode4)
g4.grid()
g4.set_xlabel('Time [s]')
g4.set_ylabel('Fourth mode')
modesString = r'The eigenvalue is equal to 0 with algebraic multiplicity equal to 4. '
if F==0:
modesString = modesString + r'The corresponding modes are $k$ and $t$.'
else:
modesString = modesString + r'The corresponding modes are $k$, $t$, $\frac{t^2}{2}$ and $\frac{t^3}{6}$.'
display(Markdown(modesString))
out = widgets.interactive_output(main_callback,{'m':m,'F':F})
sliders = widgets.HBox([m,F])
display(out,sliders)
# -
| ICCT_en/examples/04/.ipynb_checkpoints/SS-12-Modal_analysis_of_the_lunar_lander-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
plt.rcParams['font.size'] = 13
plt.rcParams['axes.spines.right'] = False
plt.rcParams['ytick.right'] = False
plt.rcParams['axes.spines.top'] = False
plt.rcParams['xtick.top'] = False
# ## Information as reduced uncertainty
# The foundation for information theory differs slightly from many other concepts in physics, as it is not derived from empirical observations. Rather, Shannon (1948) started from an intuition of what properties an information measure should possess, and then showed that there exists only one measure with those properties. In short, he imagined a situation where the probabilities ($p_1, \ldots, p_N$) for $N$ outcomes/answers to an event/question are known beforehand, and sought to quantify the information obtained once the outcome/answer was learned.
#
# For example, imagine a professor who wants to know how many students $x$ attended a specific lecture. The professor is assumed to know the distribution $p(x)$ over all possible numbers of attendants from previous experience, but the real number of attendants is unknown. The distribution $p(x)$ thus reflects the current uncertainty, and once the real number of attendees is learned, this uncertainty is decreased to zero. The basic idea is, therefore, to quantify the information learned by measuring how much the uncertainty has decreased.
# +
# Illustration of the uncertainty before and after
N = 16 # number of possible outcomes
mu = N/2. # mean
sigma = N/4. # standard deviation
x = np.arange(N) # possible outcomes
p = np.exp(-(x-mu)**2/sigma**2) # p(x)
p /= p.sum() # Normalize
# One sample from p(x)
p_cum = np.cumsum(p)
outcome = np.argmax(np.random.rand() < p_cum)
y = np.zeros(N)
y[outcome] = 1.
# Plotting
plt.figure(figsize=(15, 3))
ax = plt.subplot(1, 2, 1)
ax.bar(x-0.4, p)
ax.set_xlabel('Number of attendants')
ax.set_ylabel('P(x)')
ax.set_title('Before')
ax = plt.subplot(1, 2, 2)
ax.bar(x, y)
ax.set_xlabel('Number of attendants');
ax.set_title('After');
# -
# Based on the idea above, Shannon (1948) proposed that a measure $H(p_1,\ldots,p_N)$ of uncertainty should possess the following three properties:
# 1. $H$ should be continuous in the $p_i$.
# 2. If all the $p_i$ are equal, $p_i=1/N$, then $H$ should be a monotonically increasing function of $N$.
# 3. If a choice can be broken down into two successive choices, the original $H$ should be a weighted sum of the individual values of $H$. For example: $H(\frac{1}{2}, \frac{1}{3}, \frac{1}{6}) = H(\frac{1}{2}, \frac{1}{2}) + \frac{1}{2}H(\frac{2}{3}, \frac{1}{3})$.
#
# ***
# ```
# -----|----- -----|-----
# | | | | |
# 1/2 2/6 1/6 1/2 1/2
# | ---|---
# | | |
# | 2/3 1/3
# | | |
# 1/2 2/6 1/6
# ```
# ***
# Shannon then moved on to show that the only uncertainty measure that satisfies the above three properties is of the form:
#
# $$
# \begin{equation}
# H=-\sum_i p_i \log(p_i),
# \end{equation}
# $$
#
# where the base of the logarithm determines the information unit (usually base two which corresponds to bits). See Shannon (1948) or Bialek (2012) for the proof.
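# Property 3 from the list above can be checked numerically with this formula (a small sketch; the helper `H` is ours):

```python
import numpy as np

def H(*p):
    """Shannon entropy in bits of a set of probabilities."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p))

# H(1/2, 1/3, 1/6) = H(1/2, 1/2) + 1/2 * H(2/3, 1/3)
lhs = H(1/2, 1/3, 1/6)
rhs = H(1/2, 1/2) + 1/2 * H(2/3, 1/3)
print(lhs, rhs)  # the two sides agree
```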
# +
# Uncertainties before and after
H_before = -np.sum(p*np.log2(p))
H_after = -np.sum(y[y>0]*np.log2(y[y>0]))
# Plotting
plt.figure(figsize=(15, 3))
ax = plt.subplot(1, 2, 1)
ax.bar(x, p)
ax.set_ylabel('P(x)')
ax.set_title('$H_\mathrm{before} = %2.1f$ bits' % H_before)
ax.set_xlabel('Number of attendants')
ax = plt.subplot(1, 2, 2)
ax.bar(x, y)
ax.set_title('$H_\mathrm{after} = %2.1f$ bits' % H_after)
ax.set_xlabel('Number of attendants');
# -
# ## Entropy as a measure of uncertainty
# Shannon (1948) chose to denote the uncertainty measure by $H$, and he referred to it as entropy due to its connection with statistical mechanics.
# > Quantities of the form $H=-\sum_i p_i \log(p_i)$ play a central role in information theory as measures of **information, choice, and uncertainty**. The form of $H$ will be recognized as that of entropy as defined in certain formulations of statistical mechanics where $p_i$ is the probability of a system being in cell $i$ of its phase space. $H$ is then, for example, the $H$ in Boltzman's famous $H$ theorem. We shall call $H=-\sum_i p_i \log(p_i)$ the entropy of the set of probabilities $p_1,\ldots,p_n$.
#
# Although fascinating, this connection might not be enough to provide an intuitive picture of which factors lead to high or low entropies. In short, we can note that 1) the entropy is always non-negative, 2) it increases with the number of possible outcomes, and 3) for any fixed number of outcomes it obtains its maximum value when all outcomes are equally likely.
# +
# Entropies for various example distributions
N = 32
mu = N/2.
sigma = N/6.
x = np.arange(N)
# Distributions
p_equal = 1./N*np.ones(N)
p_normal = np.exp(-(x-mu)**2/sigma**2)
p_normal /= p_normal.sum()
p_random = np.random.rand(N)
p_random /= p_random.sum()
ps = [p_equal, p_normal, p_random]
p_max = np.hstack(ps).max()
# Plotting
plt.figure(figsize=(15, 3))
for idx, p in enumerate(ps, start=1):
H = -np.sum(p*np.log2(p))
ax = plt.subplot(1, len(ps), idx)
ax.bar(x, p)
ax.set_title('$H = %2.1f$ bits' % H)
ax.set_ylim([0, p_max])
if idx == 1:
ax.set_ylabel('P(x)')
elif idx == 2:
ax.set_xlabel('Possible outcomes')
# -
# The entropy of a distribution, as presented above, can also be derived by searching for a minimum length code for denoting each outcome. That is, the entropy also represents a lower limit on how many bits one needs on average to encode each outcome. For example, imagine that $N=4$ and that the probabilities are: $p_1=0.5,\: p_2=0.25,\: p_3=0.125,\: p_4=0.125$. In this case, the minimum length codes would be:
#
# | Outcome | Code |
# |---------|:----:|
# | 1 | 0 |
# | 2 | 10 |
# | 3 | 110 |
# | 4 | 111 |
#
# and the entropy (or average code length) $-0.5\log(0.5)-0.25\log(0.25)-2*0.125\log(0.125)=1.75$ bits. Bialek (2012) commented on this fact by writing:
# >It is quite remarkable that the only way of quantifying how much we learn is to measure how much space is required to write it down.
# Similarly, Bialek (2012) also provided the following link between entropy as a minimum length code and the amount of heat needed to heat up a room:
# >Entropy is a very old idea. It arises in thermodynamics first as a way of keeping track of heat flows, so that a small amount of heat $dQ$ transferred at absolute temperature $T$ generates a change in entropy $dS=\frac{dQ}{T}$. Although there is no function $Q$ that measures the heat content of a system, there is a function $S$ that characterizes the (macroscopic) state of a system independent of the path to that state. Now we know that the entropy of a probability distribution also measures the amount of space needed to write down a description of the (microscopic) states drawn out of that distribution.
#
# >Let us imagine, then, a thought experiment in which we measure (with some finite resolution) the positions and velocities of all gas molecules in a small room and type these numbers into a file on a computer. There are relatively efficient programs (gzip, or "compress" on a UNIX machine) that compress such files to nearly their shortest possible length. If these programs really work as well as they can, then the length of the file tells us the entropy of the distribution out of which the numbers in the file are being drawn, but this is the entropy of the gas. Thus, if we heat up the room by 10 degrees and repeat the process, we will find that the resulting data file is longer. More profoundly, if we measure the increase in the length of the file, we know the entropy change of the gas and hence the amount of heat that must be added to the room to increase the temperature. This connection between a rather abstract quantity (the length in bits of a computer file) and a very tangible physical quantity (the amount of heat added to a room) has long struck me as one of the more dramatic, if elementary, examples of the power of mathematics to unify descriptions of very disparate phenomena.
#
# [Maxwell–Boltzmann distribution](https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution)
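# The code table above can be verified numerically: each codeword length equals $-\log_2 p_i$, so the average code length coincides with the entropy:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
code_lengths = np.array([1, 2, 3, 3])  # lengths of the codewords 0, 10, 110, 111

entropy = -np.sum(p * np.log2(p))
avg_length = np.sum(p * code_lengths)
print(entropy, avg_length)  # both equal 1.75 bits
```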
# ## Mutual information
# Most situations are not as easy as the example with the professor, where the uncertainty was removed entirely once the answer was obtained. In practice, we often face situations where the uncertainty is only partially decreased. For example, imagine a situation where a bright spot is flashed at one out of 8 equally likely horizontally placed locations {$x \in [0, 1,\ldots, 7]$}, and where our information about which location was lit up comes from a light detector placed at one of the locations. The detector has three states {$y \in [0, 1, 2]$}: it responds with state 2 if the spot is flashed on the location where it is placed, state 1 if the spot is flashed at either of the two neighboring locations, and state 0 otherwise. Assuming that the detector is placed at location 3, its response to a flash at any of the eight locations is as depicted below.
#
#
# +
N = 8; # Eight locations
placement = 3 # The detector's location
responses = np.zeros(N) # Detector reponses at each location
responses[placement] = 2
responses[placement-1] = 1
responses[placement+1] = 1
# Plotting
plt.figure(figsize=(7.5, 3))
plt.bar(np.arange(N), responses)
plt.xlabel('Spot location')
plt.ylabel('Detector response');
# -
# If we now expand on the initial idea to define information as the entropy difference between before and after knowing the output of the detector, then we get:
#
# $$
# \begin{equation}
# I(X;Y) = \sum_{i=0}^7 -p(x_i)\log p(x_i) - \sum_{j=0}^2 p(y_j) \sum_{i=0}^7 -p(x_i|y_j) \log p(x_i|y_j).
# \end{equation}
# $$
#
# That is, from the initial uncertainty in flash spot location $\sum_{i=0}^7 -p(x_i)\log p(x_i)$, we subtract off the uncertainty that remains for each possible state of the detector $\sum_{i=0}^7 -p(x_i|y_j) \log p(x_i|y_j)$ weighted by its probability of occurrence $p(y_j)$. For the case described above, the relevant probability distributions and entropies are:
# +
# Probability distributions
px = 1./N * np.ones(N)
px_y0 = np.zeros(N) + np.float64((responses == 0)) / (responses == 0).sum()
px_y1 = np.zeros(N) + np.float64((responses == 1)) / (responses == 1).sum()
px_y2 = np.zeros(N) + np.float64((responses == 2)) / (responses == 2).sum()
py = 1./N * np.array([(responses==r).sum() for r in np.unique(responses)])
ps = [px, px_y0, px_y1, px_y2, py]
titles = ['$P(x)$', '$P(x|y=0)$', '$P(x|y=1)$', '$P(x|y=2)$', '$P(y)$']
# Plotting
Hs = []
plt.figure(figsize=(15, 3))
for idx, p in enumerate(ps, start=1):
H = -np.sum(p[p>0]*np.log2(p[p>0]))
Hs.append(H)
ax = plt.subplot(1, len(ps), idx)
ax.bar(np.arange(len(p)), p)
ax.set_ylim([0, 1])
ax.set_title(titles[idx-1] + ', $%2.1f$ bits' % H)
if idx < len(ps):
ax.set_xlabel('x')
else:
ax.set_xlabel('y')
if idx > 1:
ax.set_yticklabels([])
else:
ax.set_ylabel('Probability')
# Calculate and write out the mutual information
mi = Hs[0] - py[0]*Hs[1] - py[1]*Hs[2] - py[2]*Hs[3]
print('I=%3.2f - %3.2f*%3.2f - %3.2f*%3.2f - %3.2f*%3.2f=%3.2f' % (Hs[0], py[0], Hs[1], py[1], Hs[2], py[2], Hs[3], mi))
# -
# By further replacing the summation limits with $x\in X$ and $y\in Y$, respectively, we obtain the more general expression:
#
# $$
# \begin{equation}
# I(X;Y) = \sum_{x\in X} -p(x)\log p(x) - \sum_{y\in Y} p(y) \sum_{x\in X} -p(x|y) \log p(x|y) = H(X) - H(X|Y),
# \end{equation}
# $$
#
# where $H(X|Y)$ is the conditional entropy (i.e., the average uncertainty that remains once $y$ is known) and $I$ the mutual information between $X$ and $Y$. Mutual information is thus a generalization of the initial idea that we can quantify what we learn as the difference in uncertainty before and after.
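# The general expression can be implemented directly from a joint distribution $p(x,y)$. A minimal sketch (the function name is ours) computing $I(X;Y) = H(X) - H(X|Y)$; applied to the detector example above it reproduces the same cell-by-cell computation:

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) = H(X) - H(X|Y) in bits, for a joint distribution pxy[x, y]."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    Hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    Hx_given_y = 0.0
    for j, pyj in enumerate(py):
        if pyj > 0:
            px_y = pxy[:, j] / pyj  # p(x | y = j)
            Hx_given_y -= pyj * np.sum(px_y[px_y > 0] * np.log2(px_y[px_y > 0]))
    return Hx - Hx_given_y

# Independent variables share no information; fully dependent ones share H(X)
print(mutual_information(np.full((2, 2), 0.25)))               # 0.0
print(mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])))  # 1.0
```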
# ## Entropy, uncertainty or information
# Shannon (1948) actually emphasized a different interpretation than the one presented above. As he was interested in the case where a source sends information over a noisy channel to a receiver, he interpreted the entropy $H(X)$ in $I(X;Y) = H(X) - H(X|Y)$ as the information produced by the source instead of an uncertainty. This interpretation can be understood by noting that the entropy can both be seen as an initial uncertainty or as an upper bound on the information learned when $H(X|Y)$ is zero (a duality that sometimes leads to confusion, especially if mutual information is abbreviated to information only). And in a source and receiver scenario, the upper limit obviously denotes the amount of information sent (produced) by the source. These different interpretations might seem unnecessary at first, but they help in interpreting the symmetry of the mutual information measure. Starting from the expression of mutual information as given above, one can reformulate it as:
#
# $$
# \begin{align}
# I(X;Y) &= \sum_{x\in X} -p(x)\log p(x) - \sum_{y\in Y} p(y) \sum_{x\in X} -p(x|y) \log p(x|y) = H(X) - H(X|Y), \quad\quad (1) \\
# &=-\sum_{x\in X}\sum_{y\in Y} p(x, y)\log p(x) + \sum_{y\in Y} \sum_{x\in X} p(x,y) \log p(x|y), \\
# &= \sum_{y\in Y} \sum_{x\in X} p(x,y) \log \frac{p(x|y)}{p(x)}, \\
# &= \sum_{y\in Y} \sum_{x\in X} p(x,y) \log \frac{p(x,y)}{p(x)p(y)} = \dots = H(X) + H(Y) - H(X,Y), \\
# &= \quad \vdots \\
# I(Y;X) &= \sum_{y\in Y} -p(y)\log p(y) - \sum_{x\in X} p(x) \sum_{y\in Y} -p(y|x) \log p(y|x) = H(Y) - H(Y|X), \quad\quad (2)
# \end{align}
# $$
#
# Shannon interpreted these two descriptions as: (1) The information that was sent less the uncertainty of what was sent. (2) The amount of information received less the part which is due to noise. Observe that expression (2) makes little sense for the detector example above if $H(Y)$ is interpreted as uncertainty, whereas it becomes clearer with the interpretation that Shannon emphasized. From that point of view, expression (2) tells us that the mutual information is the information contained in the detector's response $H(Y)$ less the part that is due to noise $H(Y|X)$. However, as the detector is deterministic (no noise), we arrive at the conclusion that the mutual information should equal $H(Y)$ in our particular example, which it also does.
#
# Additionally, we note that the mutual information has the following properties:
# 1. It is non-negative and equal to zero only when $x$ and $y$ are statistically independent, that is, when $p(x,y)=p(x)p(y)$.
# 2. It is bounded from above by either $H(X)$ or $H(Y)$, whichever is smaller.
#
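# These two properties can be verified numerically. Below is a minimal sketch using a hypothetical 2x2 joint distribution (chosen only for illustration, not taken from the examples above):

```python
import numpy as np

# Hypothetical joint distribution p(x, y)
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)  # marginal p(x)
p_y = p_xy.sum(axis=0)  # marginal p(y)

H_x = -np.sum(p_x * np.log2(p_x))
H_y = -np.sum(p_y * np.log2(p_y))
H_xy = -np.sum(p_xy * np.log2(p_xy))
mi = H_x + H_y - H_xy  # I(X;Y) = H(X) + H(Y) - H(X,Y)

# Property 1: a product distribution p(x)p(y) yields zero mutual information
p_prod = np.outer(p_x, p_y)
mi_indep = H_x + H_y + np.sum(p_prod * np.log2(p_prod))
```

# For this joint distribution, `mi` is positive and does not exceed min(H_x, H_y), while `mi_indep` vanishes up to floating-point error.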
# ## Mutual information as a general measure of correlation
# As the mutual information is a measure of dependence between two random variables, it can also be understood in more familiar terms of correlations. To visualize this, imagine a joint distribution of two random variables ($X_1$ and $X_2$). Equations (1) and (2) above tell us that the mutual information can be obtained as either $H(X) - H(X|Y)$ or $H(Y) - H(Y|X)$. That is, the entropy of either marginal distribution less the conditional entropy. In more practical terms, this means that we subtract off the average uncertainty that remains once either variable is known. And in even more practical terms, it corresponds to looking at individual rows or columns in the joint distribution, as these reflect the uncertainty that remains once either variable is known. This is illustrated below where two 2D multivariate Gaussian distributions are plotted together with the mutual information between the two variables.
# +
# Generating one independent and one correlated gaussian distribution
N = 16
mu = (N-1) / 2.*np.ones([2, 1])
var = 9.
cov = 8.
cov_ind = np.array([[var, 0.], [0., var]])
cov_cor = np.array([[var, cov], [cov, var]])
[x1, x2,] = np.meshgrid(range(N), range(N))
p_ind = np.zeros([N, N])
p_cor = np.zeros([N, N])
for i in range(N**2):
x_tmp = np.array([x1.ravel()[i]-mu[0], x2.ravel()[i]-mu[1]])
p_ind.ravel()[i] = np.exp(-1/2 * np.dot(x_tmp.T, np.dot(np.linalg.inv(cov_ind), x_tmp)))
p_cor.ravel()[i] = np.exp(-1/2 * np.dot(x_tmp.T, np.dot(np.linalg.inv(cov_cor), x_tmp)))
p_ind /= p_ind.sum()
p_cor /= p_cor.sum()
# Calculate I(X1;X2)
p1_ind = p_ind.sum(axis=1)
p2_ind = p_ind.sum(axis=0)
mi_ind = -np.sum(p1_ind*np.log2(p1_ind)) - np.sum(p2_ind*np.log2(p2_ind)) + np.sum(p_ind*np.log2(p_ind))
p1_cor = p_cor.sum(axis=1)
p2_cor = p_cor.sum(axis=0)
mi_cor = -np.sum(p1_cor*np.log2(p1_cor)) - np.sum(p2_cor*np.log2(p2_cor)) + np.sum(p_cor[p_cor>0]*np.log2(p_cor[p_cor>0]))
# Plotting
titles = ['Independent', 'Correlated']
p = [p_ind, p_cor]
mi = [mi_ind, mi_cor]
x_ticks = [0, 5, 10, 15]
fig = plt.figure(figsize=(15, 7.5))
for idx, p_tmp in enumerate(p):
ax = fig.add_axes([0.1 + idx*0.5, 0.1, 0.25, 0.5])
ax.imshow(p_tmp.reshape(N, N))
ax.set_xticks(x_ticks)
ax.set_xticklabels([])
ax.set_xlabel('$x_1$')
ax.set_yticks(x_ticks)
ax.set_yticklabels([])
ax.set_ylabel('$x_2$')
ax.invert_yaxis()
plt.draw()
pos = ax.get_position()
ax = fig.add_axes([pos.x0, 0.65, pos.x1-pos.x0, 0.1])
ax.plot(range(N), p_tmp.sum(axis=1), 'o-')
ax.set_xticks(x_ticks)
ax.get_yaxis().set_visible(False)
ax.spines['left'].set_visible(False)
ax.set_title(titles[idx] + ', $I(X_1;X_2) = %3.2f$ bits' % mi[idx])
ax = fig.add_axes([pos.x1 + 0.03, 0.1, 0.1/2, 0.5])
ax.plot(p_tmp.sum(axis=0), range(N), 'o-')
ax.set_yticks(x_ticks)
ax.get_xaxis().set_visible(False)
ax.spines['bottom'].set_visible(False)
print('H(X1): %3.2f bits' % -np.sum(p1_cor*np.log2(p1_cor)))
print('H(X2): %3.2f bits' % -np.sum(p2_cor*np.log2(p2_cor)))
print('H(X1,X2)_ind: %3.2f bits' % -np.sum(p_ind*np.log2(p_ind)))
print('H(X1,X2)_cor: %3.2f bits' % -np.sum(p_cor[p_cor>0]*np.log2(p_cor[p_cor>0])))
# -
# Another way of understanding why mutual information measures correlation is to look at the expression $I(X;Y) = H(X) + H(Y) - H(X,Y)$, from which we observe that the joint entropy $H(X,Y)$ is subtracted from the sum of the individual entropies. As entropy increases with uncertainty (or possible outcomes), we can infer that a less spread out joint distribution will cause a smaller subtraction. Importantly, however, the shape of the joint distribution does not matter, only how concentrated the probability mass is in a small number of outcomes. This is an important distinction that makes mutual information a general measure of correlation, in contrast to the commonly used correlation coefficient (Pearson's $r$), which only captures linear correlations. The example below highlights this by calculating the mutual information and the correlation coefficient for both a linear and a quadratic relationship between $x$ and $y$.
# +
# Generate y responses as y = f(x) for 16 x values with f(x) being either f(x)=x or f(x) = -x^2
x = np.arange(-3.75, 4, 0.5)
y = [x, -x**2]
# Entropies, mutual information, correlation coefficients
Hx = [np.log2(x.size), np.log2(x.size)] # Assume each x-value is equally likely
Hy = [np.log2(np.unique(y_tmp).size) for y_tmp in y]
mi = Hy # H(Y|X) = 0 as there is no noise, thus I = H(Y)
r = [pearsonr(x, y_tmp)[0] for y_tmp in y]
# Plotting
fig = plt.figure(figsize=(15, 3))
for i in range(len(y)):
ax = plt.subplot(1, len(y), i+1)
ax.plot(x, y[i], 'o')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
info = '$r: %2.1f$\n$H(X): %2.1f$ bits\n$H(Y): %2.1f$ bits\n$I(X;Y): %2.1f$ bits' % (r[i], Hx[i], Hy[i], mi[i])
ax.text(x[2]-i*x[2], y[i].max()-i*5, info, va='top', ha='center')
# -
# The mutual information retains its maximum value in both cases (remember that it is bounded from above by $\min[H(X), H(Y)]$), whereas the correlation coefficient indicates maximal correlation for the linear $f$ and no correlation for the quadratic $f$. Additionally, the quadratic example provides a nice description of how the mutual information can be interpreted: if we learn 3 bits of information by observing $y$, then our remaining uncertainty about $x$ is one bit ($H(X) - I(X;Y) = 4 - 3 = 1$). This, in turn, corresponds to a choice between two equally likely alternatives, a condition that simply reflects that there are two different $x$-values mapping onto the same $y$-value.
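# The arithmetic behind this interpretation is quick to reproduce; here is a minimal sketch of the quadratic case:

```python
import numpy as np

x = np.arange(-3.75, 4, 0.5)       # 16 equally likely x-values
y = -x**2                          # deterministic quadratic mapping

H_x = np.log2(x.size)              # 4 bits of initial uncertainty about x
H_y = np.log2(np.unique(y).size)   # 3 bits: only 8 distinct y-values
mi = H_y                           # no noise, so H(Y|X) = 0 and I(X;Y) = H(Y)
residual = H_x - mi                # 1 bit: two equally likely x-values per y
```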
| InformationTheory/Part 1, entropy, uncertainty and information.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
from gensim.models import Word2Vec, KeyedVectors
import numpy as np
from itertools import product
from tqdm import tqdm
import sys
sys.path.append('../src')
from models import open_pickle, filter_terms_not_in_wemodel
we_model_load = KeyedVectors.load('../data/interim/glove_840_norm', mmap='r')
'''
we_model_name = "sg_dim300_min100_win5"
we_vector_size = 300
we_model_dir = '../data/external/wiki-english/wiki-english-20171001/%s' % we_model_name
we_model = Word2Vec.load(we_model_dir+'/model.gensim')
print ('loading done!')
'''
# +
RESULTS_FILEPATH = '../data/interim/glove_840B_association_metric_exps.pickle'
EXPERIMENT_DEFINITION_FILEPATH = '../data/interim/glove_840B_experiment_definitions.pickle'
IMAGE_SAVE_FILEPATH = '../reports/figures/glove_840B_exp_results.png'
NONRELATIVE_IMAGE_SAVE_FILEPATH = '../reports/figures/glove_840B_nonrelative_exp_results.png'
exp_def_dict = open_pickle(EXPERIMENT_DEFINITION_FILEPATH)
results_dict = open_pickle(RESULTS_FILEPATH)
'''RESULTS_FILEPATH = '../data/interim/association_metric_exps.pickle'
EXPERIMENT_DEFINITION_FILEPATH = '../data/interim/experiment_definitions.pickle'
IMAGE_SAVE_FILEPATH = '../reports/figures/exp_results.png'
NONRELATIVE_IMAGE_SAVE_FILEPATH = '../reports/figures/nonrelative_exp_results.png'
exp_def_dict = open_pickle(EXPERIMENT_DEFINITION_FILEPATH)
results_dict = open_pickle(RESULTS_FILEPATH)
'''
# -
results_dict[2]['second']
def add_axes_obj_labels(ax, exp_num, target_label, A_label, B_label, n_samples):
TITLE_FONT_SIZE = 12
[target_label, A_label, B_label] = [s.upper() for s in [target_label, A_label, B_label]]
ax.set_title(f'#{exp_num}: {target_label} terms: {B_label} (left) vs. {A_label} (right)',
fontsize=TITLE_FONT_SIZE)
ax.set_xlabel(f'Bias Regions: CI with {n_samples} samples')
ax.set_ylabel(f'Word')
ax.yaxis.set_ticklabels([])
def annotate_points(ax, terms, x_array, y):
POINT_FONT_SIZE = 9
for i, txt in enumerate(terms):
ax.annotate(txt, (x_array[i], y[i]), fontsize=POINT_FONT_SIZE)
def add_scatters_and_lines(ax, arr_second, threshold_second,
                           mean_second, pct_5_second, pct_95_second, lower_bound, upper_bound,
                           ST1_80CI, ST1_90CI, ST1_95CI, y):
S = 20 # Marker size
ZERO_LINE_COLOR = 'lime'
FIRST_ORDER_COLOR = 'black'
SECOND_ORDER_COLOR = 'red'
SECOND_ORDER_PERCENTILES_COLOR = 'blue'
SHADE_DARKNESS = 0.2
SHADE_DARKNESS_80CI = 0.1
SHADE_DARKNESS_90CI = 0.15
SHADE_DARKNESS_95CI = 0.25
CI_COLOR = 'black'
XAXIS_LIMIT = 0.6
y = [i for i in range(1,len(arr_second)+1)]
ax.scatter(arr_second, y, c=SECOND_ORDER_COLOR, s=S)
ax.xaxis.grid()
#ax.axvline(threshold_second, color=SECOND_ORDER_COLOR, linestyle='-.', label='second-order threshold')
#ax.axvline(-threshold_second, color=SECOND_ORDER_COLOR, linestyle='-.')
#ax.axvline(mean_second, c=SECOND_ORDER_COLOR, label='second-order mean')
#ax.axvspan(lower_bound, upper_bound, alpha=SHADE_DARKNESS, color=SECOND_ORDER_PERCENTILES_COLOR)
#ax.axvspan(ST1_80CI[0], ST1_80CI[1], alpha=SHADE_DARKNESS_80CI, color=CI_COLOR)
#ax.axvspan(ST1_90CI[0], ST1_90CI[1], alpha=SHADE_DARKNESS_90CI, color=CI_COLOR)
#ax.axvspan(ST1_95CI[0], ST1_95CI[1], alpha=SHADE_DARKNESS_95CI, color=CI_COLOR)
#ax.axvspan(pct_5_second, pct_95_second, alpha=SHADE_DARKNESS, color=SECOND_ORDER_PERCENTILES_COLOR)
ax.set_xlim(-XAXIS_LIMIT, XAXIS_LIMIT)
# +
fig, axs = plt.subplots(10,2, figsize=(15,50))
LEGEND_SIZE = 10
exps = range(1,11)
target_letters = ['X','Y']
for exp_num, target_letter in tqdm(product(exps, target_letters), total=20):
col = 0 if target_letter =='X' else 1
ax = axs[exp_num-1, col]
arr_second = results_dict[exp_num]['second'][f'{target_letter}_array']
threshold_second = results_dict[exp_num]['second']['threshold']
mean_second = results_dict[exp_num]['second'][f'{target_letter}_mean']
pct_5_second = None # results_dict[exp_num]['second']['pct_5']
pct_95_second = None # results_dict[exp_num]['second']['pct_95']
lower_bound = None #results_dict[exp_num]['second']['lower_bound']
upper_bound = None #results_dict[exp_num]['second']['upper_bound']
ST1_80CI = None #results_dict[exp_num]['second']['ST1_80CI']
ST1_90CI = None #results_dict[exp_num]['second']['ST1_90CI']
ST1_95CI = None #results_dict[exp_num]['second']['ST1_95CI']
n_samples = len(results_dict[exp_num]['second']['sigtest_dist_1'])
y = [i for i in range(1,len(arr_second)+1)]
terms = exp_def_dict[exp_num][f'{target_letter}_terms']
target_label = exp_def_dict[exp_num][f'{target_letter}_label']
A_label = exp_def_dict[exp_num]['A_label']
B_label = exp_def_dict[exp_num]['B_label']
add_scatters_and_lines(ax, arr_second, threshold_second,
mean_second, pct_5_second, pct_95_second, lower_bound, upper_bound,
ST1_80CI, ST1_90CI, ST1_95CI, y)
annotate_points(ax, terms, arr_second, y)
add_axes_obj_labels(ax, exp_num, target_label, A_label, B_label, n_samples)
axs[0,0].legend(loc=2, prop={'size': LEGEND_SIZE})
fig.tight_layout(pad=2)
print('Rendering...')
plt.savefig(IMAGE_SAVE_FILEPATH)
plt.show()
# -
open_pickle(EXPERIMENT_DEFINITION_FILEPATH)
| notebooks/SingleWordViz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# # %%capture
# python libraties
import os, itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook as tqdm
from glob import glob
from PIL import Image
import cv2
import time
import copy
# pytorch libraries
import torch
import torch.nn.functional as F
from torch import optim,nn
from torch.optim import lr_scheduler
from torch.autograd import Variable
from torch.utils.data import DataLoader,Dataset, TensorDataset
from torchvision import models, transforms, utils
from torchsummary import summary
from torchsampler import ImbalancedDatasetSampler
# from imblearn.over_sampling import SMOTE
# sklearn libraries
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import balanced_accuracy_score
import math
# import gc
# data directory
data_dir = 'data'
# +
def set_parameter_requires_grad(model, child_no = 8, finetuning = False):
    '''
    Freezes parameters of the model.
    Parameters:
        model: model whose parameters are (partially) frozen before training
        child_no: index of the child module that is frozen when finetuning is True
        finetuning: if False, every parameter of the model is frozen;
            if True, only the child at index child_no is frozen
    Returns:
        Nothing
    '''
if not finetuning:
print(finetuning)
for param in model.parameters():
param.requires_grad = False
else:
print(finetuning)
child_counter = 0
for child in model.children():
if child_counter == child_no:
print("child ",child_counter," was frozen")
for param in child.parameters():
param.requires_grad = False
else:
print("child ", child_counter," is not frozen")
for param in child.parameters():
param.requires_grad = True
child_counter += 1
# +
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
class ClassifierNew(nn.Module):
def __init__(self, inp = 2208, h1=1024, out = 7, d=0.35):
super().__init__()
self.ap = nn.AdaptiveAvgPool2d((1,1))
self.mp = nn.AdaptiveMaxPool2d((1,1))
self.fla = Flatten()
self.bn0 = nn.BatchNorm1d(inp*2,eps=1e-05, momentum=0.1, affine=True)
self.dropout0 = nn.Dropout(d)
self.fc1 = nn.Linear(inp*2, h1)
self.bn1 = nn.BatchNorm1d(h1,eps=1e-05, momentum=0.1, affine=True)
self.dropout1 = nn.Dropout(d)
self.fc2 = nn.Linear(h1, out)
def forward(self, x):
ap = self.ap(x)
mp = self.mp(x)
x = torch.cat((ap,mp),dim=1)
x = self.fla(x)
x = self.bn0(x)
x = self.dropout0(x)
x = F.relu(self.fc1(x))
x = self.bn1(x)
x = self.dropout1(x)
x = self.fc2(x)
return x
# -
def initialize_models(model_name, num_classes = 7, fine_tuning = False):
    '''
    This function initializes the pretrained model.
    '''
model_fe = None
input_size = 0
if(model_name == 'resnet'):
model_fe = models.resnet101(pretrained = True)
        set_parameter_requires_grad(model_fe, finetuning = fine_tuning)
num_ftrs = model_fe.fc.in_features
model_fe = torch.nn.Sequential(*(list(model_fe.children())[:-2]))
model_fe.fc = ClassifierNew(inp = num_ftrs, out = num_classes, d = 0.1)
# model_fe.fc = nn.Linear(num_ftrs, num_classes)
input_size = (224, 224)
elif(model_name == 'densenet'):
model_fe = models.densenet201(pretrained = True)
        set_parameter_requires_grad(model_fe, finetuning = fine_tuning)
num_ftrs = model_fe.classifier.in_features
model_fe.classifier = ClassifierNew(inp = num_ftrs, out = num_classes)
# model_fe.classifier = nn.Linear(num_ftrs, num_classes)
input_size = (224, 224)
elif(model_name == 'inception'):
model_fe = models.inception_v3(pretrained = True)
model_fe.aux_logits = False
        set_parameter_requires_grad(model_fe, finetuning = fine_tuning)
num_ftrs = model_fe.fc.in_features
model_fe.fc = ClassifierNew(inp = num_ftrs, out = num_classes)
input_size = (299, 299)
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]
return model_fe, input_size, norm_mean, norm_std
# ## Training Script
# +
def train_model(model, criterion, optimizer, dataloader, scheduler = None, num_epochs=25):
since = time.time()
val_acc, val_loss = [],[]
train_acc, train_loss = [],[]
lr_rate = []
# best_model_wts = copy.deepcopy(model.state_dict())
# best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
# print('phase train')
# for param_group in optimizer.param_groups:
# print('Learning Rate {}'.format(param_group['lr']))
# lr_rate.append(param_group['lr'])
model.train() # Set model to training mode
else:
# print('phase val')
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in tqdm(dataloader[phase]):
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train' and scheduler is not None:
scheduler.step()
lr_rate.append(scheduler.get_lr()[0])
del inputs
del labels
torch.cuda.empty_cache()
epoch_loss = running_loss / len(dataloader[phase].dataset)
epoch_acc = running_corrects.double() / len(dataloader[phase].dataset)
            if phase == 'train':
                train_loss.append(epoch_loss)
                train_acc.append(epoch_acc)
            else:
                val_loss.append(epoch_loss)
                val_acc.append(epoch_acc)
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# # deep copy the model
# if phase == 'val' and epoch_acc > best_acc:
# best_acc = epoch_acc
# print('Best accuracy {}'.format(best_acc))
# best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
# print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
# model.load_state_dict(best_model_wts)
return model, val_loss, val_acc, train_loss, train_acc, lr_rate
# -
# ## class weights for the sampler
# +
import pandas as pd
from collections import Counter
from collections import OrderedDict
def get_class_weights(y, forsampler = True):
counter = Counter(y)
dic = {}
if not forsampler:
majority = max(counter.values())
dic = {cls: float(majority)/count for cls, count in counter.items()}
else:
dic = {cls: 1 / count for cls , count in counter.items()}
dic = OrderedDict(sorted(dic.items()))
weights = list(dic.values())
sample_weights = [weights[t] for t in y]
sample_weights = torch.FloatTensor(sample_weights)
return sample_weights
# -
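# A quick sanity check of the sampler weights. The helper below restates `get_class_weights(..., forsampler=True)` without the `torch.FloatTensor` wrapper so the cell runs on its own; the toy labels are made up:

```python
from collections import Counter, OrderedDict

def sampler_weights(y):
    # one weight per class, inversely proportional to class frequency
    counter = Counter(y)
    dic = OrderedDict(sorted({cls: 1 / count for cls, count in counter.items()}.items()))
    weights = list(dic.values())
    return [weights[t] for t in y]

y = [0, 0, 0, 1]           # imbalanced toy labels
w = sampler_weights(y)     # [1/3, 1/3, 1/3, 1.0]
```

# The minority-class sample gets weight 1.0 and each majority-class sample gets 1/3, so a `WeightedRandomSampler` draws the two classes roughly equally often.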
# ## data transforms
# transfromations for train images
def get_transforms(input_size, norm_mean, norm_std):
train_transform = transforms.Compose([transforms.Resize(input_size),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize(norm_mean, norm_std)])
# define the transformation of the test images.
test_transform = transforms.Compose([transforms.Resize(input_size),
transforms.ToTensor(),
transforms.Normalize(norm_mean, norm_std)])
return train_transform, test_transform
# del model
torch.cuda.empty_cache()
# ## Custom Dataset
class HamDataset(Dataset):
def __init__(self, csvpath, transform = None):
self.df = pd.read_csv(csvpath)
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, index):
X = Image.open(self.df['path'][index])
y = torch.tensor(int(self.df['cell_type_idx'][index]))
if self.transform:
X = self.transform(X)
return X, y
# ## Dataloader Function
def get_data_loader(csvpathh, transform, batch_size = 7):
csvpath = os.path.join(data_dir, csvpathh)
df = pd.read_csv(csvpath)
dataset = HamDataset(csvpath, transform)
sampler = None
if(csvpathh == 'train.csv'):
print(csvpath)
class_weights = get_class_weights(df.cell_type_idx.values, forsampler = True)
sampler = torch.utils.data.sampler.WeightedRandomSampler(class_weights, len(class_weights))
dataloader = DataLoader(dataset, batch_size=batch_size, sampler = sampler, num_workers=0)
else:
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle = True, num_workers=0)
return dataloader
def imshow(inp, mean, std, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
# mean = np.array([0.485, 0.456, 0.406])
# std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
plt.axis('off')
plt.figure(figsize=(200,100))
# if title is not None:
# plt.title(title)
# plt.pause(0.001) # pause a bit so that plots are updated
plt.savefig('graphs/processedimages.png')
# +
# from torchvision.models.densenet import DenseNet
# class myDenseNet(DenseNet):
# def __init__(self, inp = 1920, out = 7):
# super(myDenseNet, self).__init__(32, (6, 12, 48, 32), 64)
# self.classifier = ClassifierNew(inp = inp, out = out)
# def forward(self, x):
# features = self.features(x)
# x = F.relu(features, inplace=True)
# x = F.relu(features, inplace=True)
# x1 = F.adaptive_avg_pool2d(x, (1, 1))
# y = F.adaptive_max_pool2d(x, (1,1))
# self.classifier(x)
# return x
# model = myDenseNet()
# # if you need pretrained weights
# p = models.densenet201(pretrained=True).state_dict()
# +
# model.load_state_dict(p)
# model = model.to(device)
# +
# model.classifier = ClassifierNew(inp = 1920, out = 7)
# +
# model
# +
# summary(model, (3, 224, 224))
# -
# ## model and data declaration
# +
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model, input_size, norm_mean, norm_std = initialize_models('resnet', num_classes = 7)
model = model.to(device)
# input_size = (224, 224)
# norm_mean = [0.485, 0.456, 0.406]
# norm_std = [0.229, 0.224, 0.225]
train_transform, test_transform = get_transforms(input_size, norm_mean, norm_std)
dataloader = {'train': get_data_loader('train.csv', transform = train_transform, batch_size = 32),
'val': get_data_loader('validation.csv', transform = train_transform, batch_size = 32),
'test': get_data_loader('test.csv', transform = train_transform, batch_size = 32)}
# -
summary(model, input_size = (3, 224, 224))
# model
# +
inputs, classes = next(iter(dataloader['train']))
# Make a grid from batch
inputs = inputs[:7]
classes = classes[:7]
out = utils.make_grid(inputs)
imshow(out, norm_mean, norm_std, title=classes)
classes
dataiter = iter(dataloader['train'])
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
# -
# ## Lr finder
def lr_finder(model, dataloader, criterion, optimizer, lr_find_epochs = 5, start_lr = 1e-7, end_lr = 0.1):
lr_lambda = lambda x: math.exp(x * math.log(end_lr / start_lr) / (lr_find_epochs * len( dataloader["train"])))
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
lr_find_loss = []
lr_find_lr = []
iter = 0
smoothing = 0.05
for i in range(lr_find_epochs):
print("epoch {}".format(i))
for inputs, labels in tqdm(dataloader["train"]):
# Send to device
inputs = inputs.to(device)
labels = labels.to(device)
# Training mode and zero gradients
model.train()
optimizer.zero_grad()
# Get outputs to calc loss
outputs = model(inputs)
loss = criterion(outputs, labels)
# Backward pass
loss.backward()
optimizer.step()
# Update LR
scheduler.step()
lr_step = optimizer.state_dict()["param_groups"][0]["lr"]
lr_find_lr.append(lr_step)
# smooth the loss
            if iter == 0:
                lr_find_loss.append(loss.item())
            else:
                smoothed = smoothing * loss.item() + (1 - smoothing) * lr_find_loss[-1]
                lr_find_loss.append(smoothed)
iter += 1
max_lr = lr_find_lr[lr_find_loss.index(min(lr_find_loss))] / 10
base_lr = max_lr / 6
return lr_find_loss, lr_find_lr, max_lr, base_lr
start_lr = 1e-7
criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(model.parameters(), lr=start_lr)
loss, lr, max_lr, base_lr = lr_finder(model, dataloader, criterion, optimizer, start_lr = start_lr)
print(max_lr)
print(base_lr)
min(loss)
# len(lr)
plt.ylabel("loss")
plt.xlabel("learning rate")
plt.xscale("log")
plt.plot(lr, loss)
plt.savefig('graphs/resnet101-adam-2/lr-loss.png')
plt.show()
plt.ylabel("lr")
plt.xlabel("step")
plt.plot(range(len(lr)), lr)
# plt.savefig('graphs/resnet101-adam-2/lr-growth.png')
plt.show()
# ## Training
criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(model.parameters(), lr = 8.33e-5)
# scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=base_lr, max_lr=max_lr, mode = 'triangular2')
model, val_loss, val_acc, train_loss, train_acc, lr_rate = train_model(model, criterion, optimizer, dataloader, num_epochs = 30)
dirr = r"D:\Github\SkinCancerCapstone\models"
torch.save(model, os.path.join(dirr, 'resnet101-adam-2.pth'))
# +
# plt.ylabel('Learning rate')
# plt.xlabel('Steps')
# plt.title('Learning Rate over Steps')
# plt.plot(lr_rate, label = 'learning rate')
# plt.grid()
# plt.savefig('graphs/resnet101-adam/lr.png')
# plt.legend()
# plt.show()
# val_loss
# -
lr_rate
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.title('Accuracy over Epochs')
plt.plot(train_acc, label = 'training accuracy')
plt.plot(val_acc, label = 'validation accuracy')
plt.grid()
plt.savefig('graphs/resnet101-adam-2/accuracy.png')
plt.legend()
plt.show()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss over Epochs')
plt.plot(train_loss, label = 'training loss')
plt.plot(val_loss, label = 'validation loss')
plt.grid()
plt.savefig('graphs/resnet101-adam-2/loss.png')
plt.legend()
plt.show()
def plot_confusion_matrix(matrix, labels,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
matrix = matrix.astype('float') / matrix.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
#print(matrix)
plt.imshow(matrix, interpolation='nearest', cmap=cmap)
# plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, labels) #, rotation=45)
plt.yticks(tick_marks, labels)
fmt = '.2f' if normalize else 'd'
thresh = matrix.max() / 2.
for i, j in itertools.product(range(matrix.shape[0]), range(matrix.shape[1])):
plt.text(j, i, format(matrix[i, j], fmt),
horizontalalignment="center",
color="white" if matrix[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.savefig('graphs/resnet101-adam/conf_mat-tta.png')
plt.show()
# +
model = torch.load("D:/Github/SkinCancerCapstone/models/resnet101-adam.pth")
model.eval()
y_label = []
y_predict = []
with torch.no_grad():
for images, labels in tqdm(dataloader['test']):
N = images.size(0)
images = Variable(images).to(device)
outputs = model(images)
prediction = outputs.max(1, keepdim=True)[1]
y_label.extend(labels.cpu().numpy())
y_predict.extend(np.squeeze(prediction.cpu().numpy().T))
conf_matrix = confusion_matrix(y_true=y_label, y_pred=y_predict)
plot_labels = ['akiec', 'bcc', 'bkl', 'df', 'nv', 'mel','vasc']
# -
plot_confusion_matrix(conf_matrix, labels=plot_labels, normalize=True)
print(balanced_accuracy_score(y_label, y_predict))
# Generate a classification report
report = classification_report(y_label, y_predict, target_names=plot_labels)
print(report)
# ## Predict Function
# +
from PIL import Image
def predict(path, modelpath = "D:/Github/SkinCancerCapstone/models/resnet101-classifier.pth"):
model = torch.load(modelpath)
model.eval()
img = Image.open(path)
img = test_transform(img).float()
img = Variable(img, requires_grad=False)
img = img.unsqueeze(0).cuda()
output = model(img)
print(output)
m = nn.Softmax(dim = 1)
op = m(output).tolist()
print(op)
prediction = output.max(1, keepdim=True)[1].tolist()
prediction = prediction[0][0]
print(prediction, op[0][prediction])
# -
predict("D:/Github/SkinCancerCapstone/report/img/melanoma.jpg")
| Model Creation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scale-Free Networks
#
# Code examples from [Think Complexity, 2nd edition](https://thinkcomplex.com).
#
# Copyright 2016 <NAME>, [MIT License](http://opensource.org/licenses/MIT)
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
from empiricaldist import Pmf, Cdf
from utils import decorate
from warnings import simplefilter
from matplotlib.cbook import mplDeprecation
simplefilter('ignore', mplDeprecation)
# -
# ## Graphs
# To represent social networks, we'll use `nx.Graph`, the graph representation provided by NetworkX.
#
# Each person is represented by a node. Each friendship is represented by an edge between two nodes.
#
# Here's a simple example with 4 people:
G = nx.Graph()
G.add_edge(1, 0)
G.add_edge(2, 0)
G.add_edge(3, 0)
nx.draw(G)
# The number of friends a person has is the number of edges that connect to their node, which is the "degree" of the node.
for node in G.nodes():
print(node, G.degree(node))
# We are often interested in the "degree distribution" of a graph, which is the number of people who have 0 friends, the number who have 1 friend, and so on.
#
# The following function extracts a list of degrees, one for each node in a graph.
def degrees(G):
"""List of degrees for nodes in `G`.
G: Graph object
returns: list of int
"""
return [G.degree(node) for node in G]
# Here's the result for the small example.
degrees(G)
# I'll use `Pmf` from `empiricaldist` to make a probability mass function.
pmf = Pmf.from_seq(degrees(G))
pmf
# And `bar` to display it as a bar plot.
pmf.bar()
decorate(xlabel='Degree', ylabel='PMF', xlim=[0.4, 3.6])
# **Exercise:** Add another node or nodes to the graph above, and add a few edges. Plot the new degree distribution.
# +
# Solution goes here
# +
# Solution goes here
# -
# ## Facebook data
# The following function reads a file with one edge per line, specified by two integer node IDs.
def read_graph(filename):
"""Read a graph from a file.
filename: string
return: Graph
"""
G = nx.Graph()
array = np.loadtxt(filename, dtype=int)
G.add_edges_from(array)
return G
# We'll read the Facebook data downloaded from [SNAP](https://snap.stanford.edu/data/egonets-Facebook.html).
# +
# https://snap.stanford.edu/data/facebook_combined.txt.gz
fb = read_graph('facebook_combined.txt.gz')
n = len(fb)
m = len(fb.edges())
n, m
# -
# To see how popular "you" are, on average, we'll draw a random sample of 1000 people.
sample = np.random.choice(fb.nodes(), 1000, replace=True)
# For each "you" in the sample, we'll look up the number of friends.
sample_friends = [fb.degree(node) for node in sample]
# To plot the degree distribution, I'll use `Pmf.from_seq` from `empiricaldist`, which computes the empirical probability mass function of the sample.
Pmf.from_seq(sample_friends).plot()
decorate(xlabel='Number of friends',
ylabel='PMF',
title='Degree PMF, friends')
Cdf.from_seq(sample_friends).plot(color='C0')
decorate(xlabel='Number of friends',
ylabel='CDF',
title='Degree CDF, friends')
# Now what if, instead of "you", we choose one of your friends, and look up the number of friends your friend has.
sample_fof = []
for node in sample:
friends = list(fb.neighbors(node))
friend = np.random.choice(friends)
sample_fof.append(fb.degree(friend))
# Here's the degree distribution for your friend's friends:
Pmf.from_seq(sample_fof).plot()
decorate(xlabel='Number of friends',
ylabel='PMF',
title='Degree PMF, friends of friends')
Cdf.from_seq(sample_fof).plot(color='C1')
decorate(xlabel='Number of friends',
ylabel='CDF',
title='Degree CDF, friends of friends')
Cdf.from_seq(sample_friends).plot(color='C0', label='friends')
Cdf.from_seq(sample_fof).plot(color='C1', label='friends of friends')
decorate(xlabel='Number of friends',
ylabel='CDF',
title='Degree distributions')
# The bulk of the distribution is wider, and the tail is thicker. This difference is reflected in the means:
np.mean(sample_friends), np.mean(sample_fof)
# And we can estimate the probability that your friend has more friends than you.
np.mean([friend > you for you, friend in zip(sample_friends, sample_fof)])
# ## Power law distributions
# As we'll see below, the degree distribution in the Facebook data looks, in some ways, like a power law distribution. To see what that means, we'll look at the Zipf distribution, which has a power law tail.
#
# Here's a sample from a Zipf distribution.
zipf_sample = np.random.zipf(a=2, size=10000)
# Here's what the PMF looks like.
pmf = Pmf.from_seq(zipf_sample)
pmf.plot()
decorate(xlabel='Zipf sample', ylabel='PMF')
# Here it is on a log-x scale.
pmf.plot()
decorate(xlabel='Zipf sample (log)', ylabel='PMF', xscale='log')
# And on a log-log scale.
pmf.plot()
decorate(xlabel='Zipf sample (log)', ylabel='PMF (log)',
xscale='log', yscale='log')
# On a log-log scale, the PMF of the Zipf distribution looks like a straight line (until you get to the extreme tail, which is discrete and noisy).
# For comparison, let's look at the Poisson distribution, which does not have a power law tail. I'll choose the Poisson distribution with the same mean as the sample from the Zipf distribution.
mu, sigma = zipf_sample.mean(), zipf_sample.std()
mu, sigma
poisson_sample = np.random.poisson(lam=mu, size=10000)
poisson_sample.mean(), poisson_sample.std()
# Here's the PMF on a log-log scale. It is definitely not a straight line.
poisson_pmf = Pmf.from_seq(poisson_sample)
poisson_pmf.plot()
decorate(xlabel='Poisson sample (log)', ylabel='PMF (log)',
xscale='log', yscale='log')
# So this gives us a simple way to test for power laws: if you plot the PMF on a log-log scale and the result is a straight line, that is evidence of power law behavior.
#
# This test is not entirely reliable; there are better options. But it's good enough for an initial exploration.
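# As a rough version of this test, we can fit a straight line to the log-log PMF and read off the exponent. Here is a sketch using `scipy.stats.linregress`; it generates its own standalone sample rather than reusing `zipf_sample`.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(17)
zipf_sample = rng.zipf(a=2, size=10000)

# Empirical PMF: unique values and their relative frequencies
values, counts = np.unique(zipf_sample, return_counts=True)
pmf = counts / counts.sum()

# Fit a line to log(PMF) vs log(value), dropping the sparse, noisy tail
mask = counts >= 10
fit = linregress(np.log(values[mask]), np.log(pmf[mask]))
print(fit.slope)   # close to -2 for a Zipf sample with a=2
```

# The fitted slope is (minus) the exponent of the power law; better estimators exist, but this matches the visual test above.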
# ## Barabási and Albert
# Let's see what the degree distribution for the Facebook data looks like on a log-log scale.
pmf_fb = Pmf.from_seq(degrees(fb))
pmf_fb.plot(label='Facebook')
decorate(xscale='log', yscale='log', loc='upper right',
xlabel='Degree (log)', ylabel='PMF (log)')
# For degrees greater than 10, it resembles the Zipf sample (and doesn't look much like the Poisson sample).
#
# We can estimate the parameter of the Zipf distribution by eyeballing the slope of the tail.
# +
plt.plot([10, 1000], [5e-2, 2e-4], color='gray', linestyle='dashed')
pmf_fb.plot(label='Facebook')
decorate(xscale='log', yscale='log', loc='upper right',
xlabel='Degree (log)', ylabel='PMF (log)')
# -
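# Rather than eyeballing it, the slope of that dashed guide line follows directly from its endpoints:

```python
import numpy as np

# Endpoints of the dashed guide line on the log-log plot
x = np.array([10, 1000])
y = np.array([5e-2, 2e-4])

# Slope on log-log axes: ratio of the log-differences
slope = np.diff(np.log10(y))[0] / np.diff(np.log10(x))[0]
print(slope)   # about -1.2
```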
# Here's a simplified version of the NetworkX function that generates BA graphs.
# +
# modified version of the NetworkX implementation from
# https://github.com/networkx/networkx/blob/master/networkx/generators/random_graphs.py
import random
def barabasi_albert_graph(n, k, seed=None):
"""Constructs a BA graph.
n: number of nodes
k: number of edges for each new node
    seed: random seed
"""
if seed is not None:
random.seed(seed)
G = nx.empty_graph(k)
targets = set(range(k))
repeated_nodes = []
for source in range(k, n):
G.add_edges_from(zip([source]*k, targets))
repeated_nodes.extend(targets)
repeated_nodes.extend([source] * k)
targets = _random_subset(repeated_nodes, k)
return G
# -
# And here's the function that generates a random subset without repetition.
def _random_subset(repeated_nodes, k):
"""Select a random subset of nodes without repeating.
repeated_nodes: list of nodes
k: size of set
returns: set of nodes
"""
targets = set()
while len(targets) < k:
x = random.choice(repeated_nodes)
targets.add(x)
return targets
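# Because each node appears in `repeated_nodes` once per edge, `_random_subset` picks nodes with probability proportional to their degree; this is the preferential attachment at the heart of the BA model. A toy check, with a hypothetical list:

```python
import random
from collections import Counter

random.seed(17)
# Toy list: node 0 appears 3 times (degree 3), node 1 once (degree 1)
repeated_nodes = [0, 0, 0, 1]

# Sample many times and compare how often each node is picked
picks = Counter(random.choice(repeated_nodes) for _ in range(10000))
print(picks[0] / picks[1])   # roughly 3, matching the 3:1 degree ratio
```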
# I'll generate a BA graph with the same number of nodes and edges as the Facebook data:
n = len(fb)
m = len(fb.edges())
k = int(round(m/n))
n, m, k
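# Why `k = round(m/n)`? Each of the roughly `n` new nodes contributes `k` edges, so `m ≈ n*k`, and since every edge touches two nodes the mean degree is about `2m/n ≈ 2k`. A quick check with the node and edge counts commonly reported for this dataset (an assumption here; the cell above prints the actual values):

```python
n, m = 4039, 88234        # node/edge counts reported for the SNAP Facebook graph (assumed)
k = round(m / n)
mean_degree = 2 * m / n   # each edge contributes to the degree of two nodes
print(k, mean_degree)     # 22 and about 43.7, so the mean degree is roughly 2*k
```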
# Providing a random seed means we'll get the same graph every time.
ba = barabasi_albert_graph(n, k, seed=15)
# The number of edges is pretty close to what we asked for.
len(ba), len(ba.edges()), len(ba.edges())/len(ba)
# So the mean degree is about right.
np.mean(degrees(fb)), np.mean(degrees(ba))
# The standard deviation of degree is pretty close; maybe a little low.
np.std(degrees(fb)), np.std(degrees(ba))
# Let's take a look at the degree distribution.
pmf_ba = Pmf.from_seq(degrees(ba))
# Looking at the PMFs on a linear scale, we see one difference, which is that the BA model has no nodes with degree less than `k`, which is 22.
# +
plt.figure(figsize=(12,6))
plt.subplot(1, 2, 1)
pmf_fb.plot(label='Facebook')
decorate(xlabel='Degree', ylabel='PMF')
plt.subplot(1, 2, 2)
pmf_ba.plot(label='BA model')
decorate(xlabel='Degree', ylabel='PMF')
# -
# If we look at the PMF on a log-log scale, the BA model looks pretty good for values bigger than about 20. And it seems to follow a power law.
# +
plt.figure(figsize=(12,6))
plt.subplot(1, 2, 1)
pmf_fb.plot(label='Facebook')
decorate(xlabel='Degree', ylabel='PMF',
xscale='log', yscale='log')
plt.subplot(1, 2, 2)
pmf_ba.plot(label='BA model')
decorate(xlabel='Degree', ylabel='PMF',
xlim=[1, 1e4],
xscale='log', yscale='log')
# -
# ## Cumulative distributions
# Here are the degree CDFs for the Facebook data and the BA model.
cdf_fb = Cdf.from_seq(degrees(fb))
cdf_ba = Cdf.from_seq(degrees(ba))
# If we plot them on a log-x scale, we get a sense of how well the model fits the central part of the distribution.
#
# The BA model is ok for values above the median, but not very good for smaller values.
cdf_fb.plot(label='Facebook')
cdf_ba.plot(color='gray', label='BA model')
decorate(xlabel='Degree', xscale='log',
ylabel='CDF')
# If we plot the complementary CDF on a log-log scale, we see that the BA model fits the tail of the distribution reasonably well.
complementary_cdf_fb = 1-cdf_fb
complementary_cdf_ba = 1-cdf_ba
complementary_cdf_fb.plot(label='Facebook')
complementary_cdf_ba.plot(color='gray', label='BA model')
decorate(xlabel='Degree', xscale='log',
ylabel='Complementary CDF', yscale='log')
# But there is certainly room for a model that does a better job of fitting the whole distribution.
| ComplexityScience/03_workshop.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "a8900f347a4a017a196a554be4d0a117", "grade": false, "grade_id": "cell-90d5844897aa79ef", "locked": true, "schema_version": 3, "solution": false, "task": false}
# <img src="https://www.epfl.ch/about/overview/wp-content/uploads/2020/07/logo-epfl-1024x576.png" style="padding-right:10px;width:140px;float:left"></td>
# <h2 style="white-space: nowrap">Image Processing Laboratory Notebooks</h2>
# <hr style="clear:both">
# <p style="font-size:0.85em; margin:2px; text-align:justify">
# This Jupyter notebook is part of a series of computer laboratories which are designed
# to teach image-processing programming; they are running on the EPFL's Noto server. They are the practical complement of the theoretical lectures of the EPFL's Master course <b>Image Processing II</b>
# (<a href="https://moodle.epfl.ch/course/view.php?id=463">MICRO-512</a>) taught by Dr. <NAME>, Dr. <NAME>, Prof. <NAME> and Prof. <NAME>.
# </p>
# <p style="font-size:0.85em; margin:2px; text-align:justify">
# The project is funded by the Center for Digital Education and the School of Engineering. It is owned by the <a href="http://bigwww.epfl.ch/">Biomedical Imaging Group</a>.
# The distribution or the reproduction of the notebook is strictly prohibited without the written consent of the authors. © EPFL 2021.
# </p>
# <p style="font-size:0.85em; margin:0px"><b>Authors</b>:
# <a href="mailto:<EMAIL>"><NAME></a>,
# <a href="mailto:<EMAIL>"><NAME></a>,
# <a href="mailto:<EMAIL>"><NAME></a>,
# <a href="mailto:<EMAIL>"><NAME></a>, and
# <a href="mailto:<EMAIL>"><NAME></a>.
#
# </p>
# <hr style="clear:both">
# <h1>Lab 6.2: Wavelet processing</h1>
# <div style="background-color:#F0F0F0;padding:4px">
# <p style="margin:4px;"><b>Released</b>: Thursday April 28, 2022</p>
# <p style="margin:4px;"><b>Submission</b>: <span style="color:red">Friday May 6, 2022</span> (before 11:59PM) on <a href="https://moodle.epfl.ch/course/view.php?id=463">Moodle</a></p>
# <p style="margin:4px;"><b>Grade weight</b> (Lab 6, 17 points): 7.5 % of the overall grade</p>
# <p style="margin:4px;"><b>Remote help</b>: Monday May 2, 2022 on Zoom (12h-13h, see Moodle for link) and Thursday May 5, on campus</p>
# <p style="margin:4px;"><b>Related lectures</b>: Chapter 8</p>
# </div>
# + [markdown] kernel="SoS"
# ### Student Name: <NAME>
# ### SCIPER: 334988
#
# Double-click on this cell and fill your name and SCIPER number. Then, run the cell below to verify your identity in Noto and set the seed for random results.
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "d52796925bd877083419082bd81afd19", "grade": true, "grade_id": "cell-a5cd438011c0014e", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false}
import getpass
# This line recovers your camipro number to mark the images with your ID
uid = int(getpass.getuser().split('-')[2]) if len(getpass.getuser().split('-')) > 2 else ord(getpass.getuser()[0])
print(f'SCIPER: {uid}')
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "a66aa72d897af6addae8d408903d70d9", "grade": false, "grade_id": "cell-3b60588aab6df011", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## <a name="imports_"></a> Imports
#
# Just as in Part 1, in the next two cells we will import the libraries and images that we will use throughout the notebook. Moreover, we will load an extra library (`lab6`) with the functions we declared on Part 1 that we will now reuse. Run these cells to get your environment ready.
#
# <div class='alert alert-success'>
#
# <b>Note:</b> As mentioned in <a href="./1_Wavelet_transform.ipynb">Part 1</a> of the lab, every exercise of the lab is designed to work and be tested independently of any other exercises. This is why in [<code>lab6.py</code>](lab6.py) we have included only the PyWavelets functions and not the ones you implemented. Moreover, the function <code>norm_std_map</code> is left incomplete. If you think you implemented it correctly, simply copy paste it there by opening the file <code>lab6.py</code> using the left pane's file explorer. Then you will be able to use it just like in <a href="./1_Wavelet_transform.ipynb">Part 1</a> by changing which line you comment in the cell below the imports. If you make any changes to <code>lab6.py</code>, make sure to save them and restart the kernel in this notebook to import it again.
# </div>
#
# <div class='alert alert-danger'>
#
# <b>Note</b>: We will not ask you to submit <code>lab6.py</code>. Therefore, do not make any changes there that are required for your lab to work. If, for example, you want to use your filterbank implementation of the wavelet transform, simply copy it in the solution cell after the imports cell.
# </div>
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "2731179b3bd91720fa65e0233b888231", "grade": false, "grade_id": "cell-912ca8608a4cce92", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Configure plotting as dynamic
# %matplotlib widget
# Import standard required packages for this exercise
import matplotlib.pyplot as plt
import ipywidgets as widgets
import numpy as np
import cv2 as cv
import scipy.ndimage as ndi
import pywt
# Standard general python libraries
from scipy import stats
from skimage import data
import math
import sys
# ImageViewer & functions from first part
from interactive_kit import imviewer as viewer
import lab6
# Load images to be used in this exercise
doisneau = cv.imread('images/doisneau.tif', cv.IMREAD_UNCHANGED).astype('float64')
doisneau_noise = cv.imread('images/doisneau-noise.tif', cv.IMREAD_UNCHANGED).astype('float64')
mit_coef = cv.imread('images/mit-coef.tif', cv.IMREAD_UNCHANGED).astype('float64')
lowlight = cv.imread('images/lowlight.tif', cv.IMREAD_UNCHANGED).astype('float64')
mer_de_glace = cv.imread('images/mer-de-glace.tif', cv.IMREAD_UNCHANGED).astype('float64')
# +
# Choose colormap to use throughout the lab
# Here, you can choose to use the norm_std_map you implemented in Part 1, if you copy it to lab6.py
color_map = lab6.non_uniform_map
# color_map = lab6.norm_std_map
# If you wanna reuse your functions, copy them here instead of in lab6.py
# + [markdown] deletable=false editable=false kernel="JavaScript" nbgrader={"cell_type": "markdown", "checksum": "dbcd463851198bdda8be2cd01307ef50", "grade": false, "grade_id": "cell-3bfd756ac33ae4d5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Wavelet processing (8 points)
#
# In this laboratory we propose to study some applications of the wavelet transform, namely denoising and compression.
#
# ## <a id="ToC_2_WT"></a> Table of contents
# 1. [Processing the wavelet coefficients](#1.-Processing-the-wavelet-coefficients-(3-points))
# 1. [Keeping the low frequency band](#1.A.-Keeping-the-low-frequency-band-(1-point)) (**1 point**)
# 2. [Keeping the high frequency bands](#1.B.-Keeping-the-high-frequency-bands-(2-points)) (**2 points**)
# 2. [Denoising](#2.-Denoising-(3-points))
# 1. [Soft thresholding](#2.A.-Soft-thresholding-(1-point)) (**1 point**)
# 2. [Hard thresholding](#2.B.-Hard-thresholding-(1-point)) (**1 point**)
# 3. [Optimal threshold](#2.C.-Optimal-threshold-(1-point)) (**1 point**)
# 3. [Compression](#3.-Compression-(2-points)) (**2 points**)
#
#
# ## 1. Processing the wavelet coefficients (3 points)
# [Back to table of contents](#ToC_2_WT)
#
# In this section we will propose two very simple operations:
# * **Keeping the low frequency band:** This operation will set to zero all the high frequency components, regardless of their direction,
# * **Keeping the high frequency bands:** This operation will keep only some of the high frequency components.
#
# <div class = "alert alert-success">
#
# <b>Note:</b> We will give you some freedom to choose how you want to implement these functions. You can take advantage of <code>lab6.pywt_analysis(img, n, wavelet)</code>, use <code>pywt.dwt2</code>, or reuse your own functions (filterbanks or polyphase implementation). What we will require is to take advantage of vectorization in NumPy. In other words, <span style="color:red"> we <b>DO NOT</b> accept loops iterating through NumPy arrays, which will be considered an incorrect solution in <b>ALL</b> the exercises in this lab.</span> Remember that this is because Python is a vectorized, high-level language, and iterating NumPy Arrays is very slow.
# </div>
#
# <div class = "alert alert-danger">
#
# <b>Note:</b> Remember to make sure that your implementations work for both square and rectangular images.
# </div>
#
# ### 1.A. Keeping the low frequency band (1 point)
# [Back to table of contents](#ToC_2_WT)
#
# This operation is intended to totally remove the high frequency coefficients (vertical, horizontal and diagonal) at every scale. **For 1 point**, complete the function `lowpass` below, where the parameters are
# * `img`: the image,
# * `filterbank` (a tuple of length 4): the 4 wavelet filters, as `(analysis_lp, analysis_hp, synthesis_lp, synthesis_hp)`. If you are not going to use the filterbank implementation, you can simply ignore this parameter, which will take its default value and never be used,
# * `n`: the number of iterations of the wavelet transform,
# * `wavelet` (a string): the wavelet family to be used by PyWavelets (see the [options](https://pywavelets.readthedocs.io/en/latest/ref/wavelets.html#built-in-wavelets-wavelist)). If you are not going to use PyWavelets, you can ignore this parameter, which will take its default value and never be used,
#
# and returns
# * `output`: an image of the same size as `img`, that results from applying the inverse wavelet transform after keeping only the LL coefficients (setting the HL, LH, and HH coefficients to $0$),
# * `ll_transform`: an image of the same size as `img`, containing the wavelet transform, but where everything except the LL coefficient is set to zero. The purpose of this image is for you to visually test that your function is doing the right thing.
#
# <div class = "alert alert-info">
#
# <b>Note:</b> <ul><li>These exercises are a combination of content we have practiced in <a href="./1_Wavelet_transform.ipynb">Part 1</a> and pixel-wise operations we studied in <a href="https://moodle.epfl.ch/course/view.php?id=522">IP 1</a>. If you have any doubt, we recommend you look back at <a href="./Introductory.ipynb">Lab 0: Introductory</a> and <a href="./Pixel_Fourier.ipynb">Lab 1: Pixel-wise operations and the Fourier transform</a>.</li><li>Note that while there is only one LL, its size depends on the number of iterations of the wavelet transform. </li></ul>
# </div>
#
# <div class = 'alert alert-success'>
#
# <b>Note:</b> Make sure to declare a proper filterbank in the cell below if you intend to use one (we recommend to start with the Haar filterbank, but we did already give you the filters to implement DB2 in Part 1). Otherwise, just ignore the corresponding variables, but make sure that everything runs properly.
# </div>
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "33723b5dc55180c1fdda6b945c6181f3", "grade": false, "grade_id": "cell-64161a9c60e84317", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Here, you can copy your filterbank implementation, if you want to use it at your own risk
# Declare global variables to be used later. You can modify these at anytime, or just ignore them
analysis_lp = np.array([1/np.sqrt(2), 1/np.sqrt(2)])
analysis_hp = np.array([-1/np.sqrt(2), 1/np.sqrt(2)])
synthesis_lp = np.array([1/np.sqrt(2), 1/np.sqrt(2)])
synthesis_hp = np.array([1/np.sqrt(2), -1/np.sqrt(2)])
filterbank = (analysis_lp, analysis_hp, synthesis_lp, synthesis_hp)
wavelet = 'haar'
def lowpass(img, filterbank = (np.array([0]), np.array([0]), np.array([0]), np.array([0])), n = 1, wavelet = 'haar'):
    # Collect filters from filterbank (only used by the filterbank implementation)
    analysis_lp, analysis_hp, synthesis_lp, synthesis_hp = filterbank
    # Get the wavelet transform with the requested wavelet family
    transform = lab6.pywt_analysis(img, n, wavelet = wavelet)
    # Size of the sub-image containing the n-th iteration of the transform
    ny, nx = np.array(img.shape) // 2**(n - 1)
    # Set everything except the LL quadrant to 0
    ll_transform = np.copy(transform)
    ll_transform[:ny//2, nx//2:] = 0   # HL of order n and all lower-order bands to the right
    ll_transform[ny//2:, :] = 0        # LH and HH of order n and all lower-order bands below
    # Inverse wavelet transform of the LL-only coefficients
    output = lab6.pywt_synthesis(np.copy(ll_transform), n, wavelet = wavelet)
    return output, ll_transform
# + [markdown] deletable=false editable=false kernel="Python3" nbgrader={"cell_type": "markdown", "checksum": "a02b3f8d61e7a4fe9ba38e3c68a05e24", "grade": false, "grade_id": "cell-512021bd4501aff3", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now we are going to visualize the result. Run the two next cells to apply your function to the image `doisneau_noise` with $n = 1$ and `mer_de_glace` with $n = 3$. We will then plot the original image, the wavelet transform and the reconstruction from only the LL coefficients.
#
# <div class = "alert alert-success">
#
# <b>Note:</b> Look at the details in different regions of the image by zooming in and then changing image (the zoomed area will remain). Look at the different effects on regions of low and high variation. Ask yourself: what changes do you see? Would you consider this method a good denoising technique? Why do we need the high-frequency coefficients?
# </div>
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "461d63e7e460029e6e97e1ff30c7cf41", "grade": true, "grade_id": "cell-9671ee1b92702d6c", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
# Test lowpass n = 1
output_1, ll_transform_1 = lowpass(doisneau_noise, filterbank = filterbank, n = 1, wavelet = 'haar')
image_list = [doisneau_noise, output_1, ll_transform_1]
title_list = ['Original', 'Reconstruction from only LL (n = 1)', 'Wavelet transform (n = 1, keeping only LL)']
plt.close("all")
lowpass_viewer = viewer(image_list, title = title_list, widgets = True)
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "a69daa5c899b68f09d99e904ad5c415b", "grade": true, "grade_id": "cell-5dc0c0c9e45f8517", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
# Test lowpass n = 3
output_3, ll_transform_3 = lowpass(mer_de_glace, filterbank, n = 3)
image_list = [mer_de_glace, output_3, ll_transform_3]
title_list = ['Original', 'Reconstruction from only LL (n = 3)', 'Wavelet transform (n = 3, keeping only LL)']
plt.close("all")
lowpass_viewer = viewer(image_list, title = title_list, widgets = True)
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "97a70937121358359644ab5102ef3ca5", "grade": false, "grade_id": "cell-0f997dd19c5a1c08", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 1.B. Keeping the high frequency bands (2 points)
# [Back to table of contents](#ToC_2_WT)
#
# In this subsection, we are going to do the conceptual opposite to the last exercise: we are going to keep only the high-frequency coefficients ($HL_n$, $LH_n$ and $HH_n$) at a specific scale. We want you to understand and exploit the concept of multiresolution, by selecting a specific *order* or scale of the high-frequency coefficients. **The $n^{th}$ order corresponds to the high-frequency coefficients (vertical, horizontal and diagonal) that are generated when applying the $n^{th}$ iteration of the wavelet transform**.
#
# Run the next cell to visualize an example of what we mean. We will apply the wavelet transform with $n = 4$, and **highlight in green the $2^{nd}$ order high-frequency coefficients**.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "3bc59e3079fbcfe433154d86c342529b", "grade": false, "grade_id": "cell-6d9f33a00cd6f6b6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Example of the coefficients we expect when we ask you to keep the 2nd order high-frequency coefficients.
# First we get the transform
wt = lab6.pywt_analysis(doisneau, n = 4, wavelet = 'haar')
# Apply selected colormap
wt = lab6.map_color(wt, n = 4, color_map = color_map)/255
# Green overlay
# Grayscale in RGB
rgb_wt = np.stack((wt, wt, wt), axis=2)
# Set alpha
rgb_wt[128:256,0:256,0] = 0.45; rgb_wt[128:256,0:256,2] = 0.45
rgb_wt[0:128,128:256,0] = 0.45; rgb_wt[0:128,128:256,2] = 0.45
plt.close('all')
order_example_viewer = viewer(rgb_wt, title = ['2nd order high-frequency coefficients (green overlay)'])
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "4ad0d1a6be205a71a3c9e49aa386342b", "grade": false, "grade_id": "cell-823fea0354bc2d0b", "locked": true, "schema_version": 3, "solution": false, "task": false}
# **For 1 point**, complete the function `highpass`, where the parameters are
# * `img`: the original image,
# * `n`: the number of iterations of the wavelet transform,
# * `order`: the scale from which the high-frequency coefficients should be extracted,
# * `filterbank` (a tuple of length 4): the 4 wavelet filters, as `(analysis_lp, analysis_hp, synthesis_lp, synthesis_hp)`. Just like for the function `lowpass`, these are only relevant when using the filterbank implementation,
# * `wavelet` (a string): the wavelet family to be used by PyWavelets. Just like for the function `lowpass`, this is only relevant when using the PyWavelets implementation,
#
# and the function returns
# * `output`: an array of the same size as `img`, that results from applying the inverse wavelet transform after keeping only high-frequency coefficients of order `order` (see the explanations above).
# * `h_transform`: an image of the same size as `img`, containing the wavelet transform, but where **everything except the high-frequency coefficients of order `order` is set to $0$**. The purpose of this image is for you to visually test that your function is doing the right thing.
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "3c695e0a721389dc60a91d6e1fe06865", "grade": false, "grade_id": "cell-db805f9fabc4db32", "locked": false, "schema_version": 3, "solution": true, "task": false}
def highpass(img, n=1, order=1, filterbank=(np.array([0]),np.array([0]),np.array([0]),np.array([0])), wavelet='haar'):
    # Collect filters from filterbank (only used by the filterbank implementation)
    analysis_lp, analysis_hp, synthesis_lp, synthesis_hp = filterbank
    # Ensure that order exists in transform
    if order > n:
        raise Exception(f'The wavelet transform of order {n} has no high-frequency coefficients of order {order}.')
    # Get the wavelet transform with the requested wavelet family
    transform = lab6.pywt_analysis(img, n, wavelet = wavelet)
    # Size of the sub-image containing the order-th iteration of the transform
    ny_order, nx_order = np.array(img.shape) // 2**(order - 1)
    # Keep only the HL, LH and HH bands of the requested order
    h_transform = np.copy(transform)
    h_transform[ny_order:, :] = 0                 # lower-order bands below the sub-image
    h_transform[:ny_order, nx_order:] = 0         # lower-order bands to the right
    h_transform[:ny_order//2, :nx_order//2] = 0   # LL quadrant (higher orders and the final LL)
    # Reconstruct from the selected bands only
    output = lab6.pywt_synthesis(np.copy(h_transform), n, wavelet = wavelet)
    return output, h_transform
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "f04bedf1eebcf5331caee193d93b279c", "grade": false, "grade_id": "cell-c68515ef8656e015", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Run the next cell to run your function `highpass` with $\mathrm{n} = 1$ and $\mathrm{order} = 1$ on the image `lowlight` and visualize the results. We will show the reconstruction, the wavelet transform `h_transform`, and the original image. If you have a hard time visualizing the images, make sure to use the *Brightness & Contrast* slider to find a better range of visualization, or try a different colormap.
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "148b1a29afc79a87c99cc47e274ce76a", "grade": true, "grade_id": "cell-4f9b9ce6ed30a530", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
# Test highpass (n = 1)
output, wt = highpass(lowlight, n = 1, order = 1, filterbank = filterbank, wavelet = wavelet)
wt = lab6.map_color(wt, n = 1, color_map = color_map)
image_list = [lowlight, output, wt]
title_list = ['Original', "Reconstruction", 'Wavelet transform high-frequency bands (order 1)']
highpass_viewer = viewer(image_list, title = title_list, widgets=True)
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "01f53cc3b7346a1a0f4709cf69123cc3", "grade": false, "grade_id": "cell-5c160057d20ce7f5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Great! Now we want to really dig into the meaning of multiresolution. In the next cell, you will have the option to plot in a *viewer* the original image, the selected band of the wavelet transform, and the reconstruction from this band. You will see the following widgets
#
# * *Wavelet order slider*: to select the order of the wavelet transform,
# * *Band slider*: to select the order of the band you want to keep. If it is higher than the wavelet order, that one will get updated too,
# * *Mode dropdown menu*: to select whether you want to plot the reconstruction or the selected high-frequency component in the wavelet transform,
# * *Wavelet transform dropdown menu*: to select which wavelet family to use. This will only work if you used PyWavelets in your functions,
# * *Colormap dropdown menu*: to select which colormap to apply to both the wavelet transform and the reconstruction. Note that the `'Normalize std'` option will only appear if you have implemented it in `lab6.py`.
#
# Run the next cell to initialize this viewer. Remember to click on `Apply highpass` to show the results of the options you selected. We will use the image `doisneau`. Try to apply a high order of the wavelet transform, and see how the reconstruction looks when using different bands.
#
# <div class = "alert alert-info">
#
# <b>Hint</b>: Try to zoom to a region in the original image, and see if you can find a corresponding pattern in the reconstruction.
#
# <b>Note</b>: As you probably have noticed, in general the wavelet transform <i>does not</i> have the same range of values as the image, but the reconstruction does. Since we are removing the LL coefficients, however, the reconstruction will not have the original range of values anymore. This is why we also use the colormap to display it.
#
# <b>Note</b>: If you want to recover the original, just click on the button <code>Reset</code>.
# </div>
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "8a89b88cc492bffe2a81af04a245563c", "grade": true, "grade_id": "cell-79561eaa109e078c", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
## Declare extra widgets
# Sliders
wt_slider = widgets.IntSlider(value = 1, min = 1, max = 5, step = 1, description = 'Wavelet order')
band_slider = widgets.IntSlider(value = 1, min = 1, max = 5, step = 1, description = 'H bands order')
## Menus
iwt_menu = widgets.Dropdown(options = ['Reconstruction', 'Wavelet transform'], value = 'Reconstruction')
# Check if you use PyWavelets
if np.allclose( highpass(doisneau, n=1, order=1, filterbank=filterbank, wavelet='haar'),
highpass(doisneau, n=1, order=1, filterbank=filterbank, wavelet='db2')):
wt_menu = widgets.Dropdown(options = ['haar'], value = 'haar')
else:
wt_menu = widgets.Dropdown(options = ['haar', 'db2', 'db10', 'bior1.3', 'bior6.8', 'rbio1.3', 'dmey'],
value = 'haar')
# Check if you have defined Normalize std in lab6.py
rand_array = np.random.randn(10,10)
if np.allclose(lab6.norm_std_map(rand_array), rand_array):
colormap_menu = widgets.Dropdown(options = ['Non-uniform map', 'None'], value = 'None')
else:
colormap_menu = widgets.Dropdown(options = ['Normalize std', 'Non-uniform map', 'None'], value = 'None')
# Buttons
button = widgets.Button(description = 'Apply highpass')
# Widget array
new_widgets = [wt_slider, band_slider, iwt_menu, colormap_menu, wt_menu, button]
# Callback function
def order_callback(img):
# Extract orders
n = wt_slider.value
order = band_slider.value
# If n is not high enough, fix it
if order > n:
wt_slider.value = band_slider.value
n = order
# Extract wavelet family
wavelet = wt_menu.value
# Compute
rec, wt = highpass(img, n = n, order = order, filterbank = filterbank, wavelet = wavelet)
# Apply
if colormap_menu.value == 'Normalize std':
wt = lab6.map_color(wt, n = 0, color_map = lab6.norm_std_map)
rec = lab6.norm_std_map(rec)
elif colormap_menu.value == 'Non-uniform map':
wt = lab6.map_color(wt, n = 0, color_map = lab6.non_uniform_map)
rec = lab6.non_uniform_map(rec)
if iwt_menu.value == 'Wavelet transform':
return wt
else:
return rec
plt.close('all')
highpass_viewer = viewer(doisneau, widgets = True, new_widgets = new_widgets, callbacks = [order_callback])
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "8a91f2ca768c2526c5e7265bbaa65238", "grade": false, "grade_id": "cell-0357b9752c6b6103", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Multiple Choice Questions
#
# What were your observations? Did you recover the original image? What did you recover? To finish this section, answer the next MCQ **worth 1 point**.
#
# * Q1: What would you consider the most direct application of selecting only one or more high-frequency coefficients for reconstruction?
#
# 1. Edge detection,
# 2. denoising,
# 3. compression,
# 4. or enhancement.
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "ee5fe729da36ffed2947fea668aadff3", "grade": false, "grade_id": "cell-90194e835e36a048", "locked": false, "schema_version": 3, "solution": true, "task": false}
### Modify these variables
answer = 3
# YOUR CODE HERE
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "634af6d3e99f01288ea16707cc9766ec", "grade": false, "grade_id": "cell-1c57ff8a0e5fa547", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Run the next cell to verify that your answer is valid.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "4c48798b357d9e2b7db836712695b023", "grade": true, "grade_id": "cell-b678416997c3437c", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Sanity check
assert answer in [1, 2, 3, 4], 'Choose one of 1, 2, 3 or 4.'
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "b59dbbfa5f2dbb0e4f40c51874d7d20e", "grade": false, "grade_id": "cell-eb2e56703fc1b783", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Even though we asked for two specific *band* operations, there are a lot of interesting ones that you can try. What would you see if you keep only the diagonal high-frequency coefficients (HH)? Or mix coefficients from different orders and LL?
#
# If you are curious, use the next empty cell to experiment or to confirm your hypothesis. Recycle any code you want!
# + kernel="SoS"
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "777e5ca3dea389263050472574258949", "grade": false, "grade_id": "cell-ba1113422736a6f5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## 2. Denoising (3 points)
# [Back to table of contents](#ToC_2_WT)
#
# As you probably noticed, while the functions `lowpass` and `highpass` can have different applications, neither of them is really suited to denoising. In this section, we will look at the two most commonly used wavelet-denoising methods, i.e.,
# * soft thresholding, and
# * hard thresholding.
#
# Moreover, we will see a method to determine the optimal threshold in the wavelet domain to denoise an image.
#
# For tests, we will compare your implementations directly against the implementation from PyWavelets. In particular, we will compare to the function [`pywt.threshold()`](https://pywavelets.readthedocs.io/en/latest/ref/thresholding-functions.html), which takes as parameters
# * `data`: an array to threshold. In this context, the wavelet transform,
# * `value`: a threshold value,
# * `mode` (a string): the threshold type. Defaults to `'soft'`, which is only one of the possibilities; look at the documentation or keep working through the notebook for further options,
# * `substitute`: the value assigned to coefficients whose absolute value falls below the threshold. Defaults to $0$,
#
# and returns
# * `output`: an array of the same size as `data`, where the specified thresholding method has been applied.
#
# <div class = 'alert alert-danger'>
#
# <b>Note</b> Naturally, for the graded exercises, it is <b>strictly forbidden</b> to use the function <code>pywt.threshold</code>; answers that use it <b>will not</b> be counted as correct.
# </div>
#
# ### 2.A. Soft thresholding (1 point)
# [Back to table of contents](#ToC_2_WT)
#
# Soft thresholding is a technique that, while removing elements with absolute value smaller than a certain $T$, tries to ensure that smoothness is preserved. It is defined element-wise for $x\in\mathbb{R}$ as
#
# $$\mathrm{t}(x) = \mathrm{sign}(x) \, \mathrm{max}\lbrace 0, |x|-T\rbrace\,,$$
#
# where $T$ is the threshold value.
#
# Run the next cell to view an example of how it looks using PyWavelets. In it, we will prepare an axis array and apply the function `pywt.threshold` to it. This will build the typical shape of the soft thresholding function. For visualization purposes, we also keep the line $y=x$ as a reference.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "14d86cdf2e67b6ef5b446d2479838ea9", "grade": false, "grade_id": "cell-c52e1a6525f60fa6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Set x axis
test = np.arange(-10, 10, 0.01)
# Handle matplotlib objects
plt.close("all"); fig = plt.figure(); ax = plt.gca()
# Plot reference
ax.plot(test, test, label = r'$y=x$')
# Plot soft thresholding
soft_thr_test = pywt.threshold(test, value = 1, mode='soft')
ax.plot(test, soft_thr_test, label = 'Soft thresholding for T=1 (PyWavelets)')
# Set grid and legend
ax.grid(); ax.legend(loc='upper left'); ax.set_xlabel("Input"); ax.set_ylabel("Output"); plt.show()
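Before coding the exercise below, you can also check the formula by hand on a few values. The following minimal sketch (the sample values are arbitrary, chosen only for illustration) applies the definition $\mathrm{sign}(x)\,\max\lbrace 0, |x|-T\rbrace$ directly with NumPy:

```python
import numpy as np

# A hand-checkable set of sample values (arbitrary, for illustration only)
x = np.array([-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0])
T = 1.0

# Direct application of the definition: sign(x) * max(0, |x| - T)
soft = np.sign(x) * np.maximum(0.0, np.abs(x) - T)
print(soft)  # coefficients shrink toward zero by T; |x| <= T becomes 0
```

For $T = 1$, every coefficient shrinks toward zero by $1$ and anything with $|x| \le 1$ vanishes, matching the curve plotted above.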
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "df5857a33f186e98c5594f680eaaa2d1", "grade": false, "grade_id": "cell-c67430582541d9f5", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now, for **1 point**, code the function `threshold_soft(img, T)`, where the parameters are
# * `img`: an image to be thresholded,
# * `T`: the value at which to threshold,
#
# and returns the variable `output`, a NumPy Array of the same size as `img`, where soft thresholding has been applied.
#
# <div class='alert alert-info'>
#
# <b>Hint:</b> Try the function <a href='https://numpy.org/doc/stable/reference/generated/numpy.maximum.html'><code>np.maximum</code></a>.
# </div>
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "7333a64184ecde52bbb63b8691e7f3d6", "grade": false, "grade_id": "cell-b701ff3a0ecf167f", "locked": false, "schema_version": 3, "solution": true, "task": false}
def threshold_soft(img, T):
output = np.copy(img)
# YOUR CODE HERE
output = np.sign(img) * np.maximum(0, np.abs(img)-T)
return output
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "ec16fa718c838bc41dd38327a8e137b5", "grade": false, "grade_id": "cell-58e9748ff408b4be", "locked": true, "schema_version": 3, "solution": false, "task": false}
# As a first test on your function, we will try to replicate the previous plot using your function. Run the next cell to see if you get the same results.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "d589922f457f5b88564dc6e671fb2e69", "grade": false, "grade_id": "cell-f12b921afbf07d4e", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Set x axis
test = np.arange(-10, 10, 0.01)
# Handle matplotlib objects
plt.close("all"); fig = plt.figure(); ax = plt.gca()
# Plot reference
ax.plot(test, test, label = r'$y=x$')
# Plot soft thresholding
soft_thr = threshold_soft(test, T = 1)
ax.plot(test, soft_thr, label = 'Soft thresholding for T=1 (yours)')
# Set grid and legend
ax.grid(); ax.legend(loc='upper left'); ax.set_xlabel("Input"); ax.set_ylabel("Output"); plt.show()
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "21ca8797422e23a1d99730796404280e", "grade": false, "grade_id": "cell-ed0d3358b1cbc011", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now run the next cell to look at the results of your thresholding and of PyWavelets thresholding. Look at the wavelet transform and its inverse. Is soft thresholding a better denoising method than just keeping the LL coefficients?
#
# For you to explore these issues, we will again use the image `doisneau_noise` with $n = 2$ and an arbitrary threshold of $50$. We will plot
# * the noisless image, `doisneau`, as ground truth,
# * the noisy image `doisneau_noise`,
# * the reconstruction using your soft thresholding,
# * the reconstruction using PyWavelet's soft thresholding,
# * the wavelet transform,
# * your thresholding of the wavelet transform,
# * PyWT's thresholding of the wavelet transform.
#
# <div class = 'alert alert-warning'>
#
# <b>Notes:</b>
# <ul>
# <li> We are <b>not</b> suggesting that the value of $50$ is ideal. Rather, we want the effect to be very noticeable.
# <a href="#2.C.-Optimal-threshold-(1-point)">Section 2.C.</a> addresses the question of the optimal threshold.</li>
#
# <li> In this particular viewer, we want the effect to be clearly visible in the wavelet transform, so we apply a colormap. This implies that the statistics, including the histogram, are not representative of the thresholded transforms.</li>
# </ul>
# </div>
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "3999060428b824971d49dbadadf2976a", "grade": true, "grade_id": "cell-a03a4cfc34da8e7d", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# First we get the WT
transform = lab6.pywt_analysis(doisneau_noise, n = 2, wavelet = 'haar')
# Now we will apply both your and PyWavelet thresholding functions to the transform with an arbitrary threshold
student_thr = threshold_soft(transform, T = 50)
pywt_thr = pywt.threshold(transform, value = 50, mode='soft')
# Get the respective inverse transforms
student_iwt = lab6.pywt_synthesis(student_thr, n = 2, wavelet = 'haar')
pywt_iwt = lab6.pywt_synthesis(pywt_thr, n = 2, wavelet = 'haar')
# Enhance visualization
transform = lab6.map_color(transform, n = 2, color_map = color_map)
student_thr = lab6.map_color(student_thr, n = 2, color_map = color_map)
pywt_thr = lab6.map_color(pywt_thr, n = 2, color_map = color_map)
# Plot
plt.close('all')
image_list = [doisneau, doisneau_noise, student_iwt, pywt_iwt, transform, student_thr, pywt_thr ]
title_list = ['Noiseless image', 'Noisy image', 'Your reconstruction', "PyWavelets' reconstruction", 'Wavelet transform', 'Your thresholded transform', "PyWavelets' thresholded transform"]
softthr_viewer = viewer(image_list, title = title_list, widgets = True)
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "7a27903e24bb54780af08b1791db6b7c", "grade": false, "grade_id": "cell-462586ffbba28106", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now run the next cell for the comparison with PyWavelets. If your answer is correct, it should not raise an error.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "48f9bada1e2ffa087fea82299ddd867c", "grade": false, "grade_id": "cell-f6dd2e9ee6378837", "locked": true, "schema_version": 3, "solution": false, "task": false}
np.testing.assert_array_almost_equal(pywt_thr, student_thr, decimal = 4, err_msg = "Your results and PyWavelet's are not the same.")
print('Congratulations! You are getting really good at wavelets.')
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "119aed11ae63b4db39a8a322d93ceb4d", "grade": false, "grade_id": "cell-755e9d850c586d64", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.B. Hard thresholding (1 point)
# [Back to table of contents](#ToC_2_WT)
#
# Hard thresholding is another technique to attenuate the smallest coefficients of the wavelet transform and thus get rid of the noise. Unlike soft thresholding, hard thresholding **does not** try to preserve smoothness in the values of the transform. It simply sets to zero all coefficients with an absolute value smaller than a certain $T$, and leaves those with an absolute value greater than or equal to $T$ untouched, as shown by the following formula:
#
# $$t_{h}(x) =
# \begin{cases}
# 0 & \mbox{if } |x| < T, \\
# x & \mbox{otherwise}
# \end{cases}
# $$
#
# Run the next cell to see how this looks when we use PyWavelets `pywt.threshold` function.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "165959ccf05a63bdf8928b699f826c9d", "grade": false, "grade_id": "cell-742ca80842a41966", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Set x axis
test = np.arange(-10, 10, 0.01)
# Handle matplotlib objects
plt.close("all"); fig = plt.figure(); ax = plt.gca()
# Plot reference
ax.plot(test, test, label = r'$y=x$')
# Plot hard thresholding
hard_thr_test = pywt.threshold(test, value = 1, mode='hard')
ax.plot(test, hard_thr_test, label = 'Hard thresholding for T=1 (PyWavelets)')
# Set grid and legend
ax.grid(); ax.legend(loc='upper left'); ax.set_xlabel("Input"); ax.set_ylabel("Output"); plt.show()
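To see the exclusive convention concretely, here is a minimal standalone sketch (the sample values are arbitrary, chosen so that some coefficients hit the threshold exactly):

```python
import numpy as np

T = 1.0
# Arbitrary sample values; +/-1 sit exactly on the threshold
x = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])

# Exclusive hard thresholding: zero only where |x| is strictly below T,
# so the coefficients with |x| == T survive untouched
hard = np.where(np.abs(x) < T, 0.0, x)
print(hard)
```

With $T = 1$, the entries $\pm 1$ survive untouched while $\pm 0.5$ are zeroed, matching the step shape of the curve above.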
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "67ab06e8c2847fbbbf4dd1c2708fe345", "grade": false, "grade_id": "cell-ea8492c900598dc6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# For **1 point**, code the function `threshold_hard(img, T)`, where the parameters are
# * `img`: an image to be thresholded,
# * `T`: the value at which to threshold,
#
# and the function returns
# * `output`: an image of the same size as `img`, where hard thresholding has been applied.
#
# <div class = "alert alert-danger">
#
# <b>Note</b>: Code the hard threshold in an <i>exclusive</i> way. That is, if a coefficient has absolute value <b>exactly equal to T, it should not be put to zero</b>.
# </div>
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "f960249dd0fdc186ad05b6eefb64a8b3", "grade": false, "grade_id": "cell-d39e99f8d6e2f768", "locked": false, "schema_version": 3, "solution": true, "task": false}
def threshold_hard(img, T):
output = np.copy(img)
# YOUR CODE HERE
output[np.less(np.abs(img), T)] = 0
return output
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "161fc60cf3638e5a6bc296bdcba885e4", "grade": false, "grade_id": "cell-9599309772716c12", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now run the next cell, and see if you can recover the lineshape that we showed in the previous plot.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "9310788a921b3c9573c0fea21f535097", "grade": false, "grade_id": "cell-e068d70089e606c1", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Set x axis
test = np.arange(-10, 10, 0.01)
# Handle matplotlib objects
plt.close("all"); fig = plt.figure(); ax = plt.gca()
# Plot reference
ax.plot(test, test, label = r'$y=x$')
# Plot hard thresholding
hard_thr_test = threshold_hard(test, T = 1)
ax.plot(test, hard_thr_test, label = 'Hard thresholding for T=1 (yours)')
# Set grid and legend
ax.grid(); ax.legend(loc='upper left'); ax.set_xlabel("Input"); ax.set_ylabel("Output"); plt.show()
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "95c64fd2df5cdef94ace4db18e1bd459", "grade": false, "grade_id": "cell-22fe8e0418a4a2ce", "locked": true, "schema_version": 3, "solution": false, "task": false}
# We are going to test your function in a similar way as we did for the soft threshold. Run the next cell to apply the hard threshold on the wavelet transform of `doisneau_noise`, with the same parameters as we used for soft thresholding. As with [soft thresholding](#2.A.-Soft-thresholding-(1-point)), we will apply a colormap, and thus, the statistics, including the histogram, are not representative of the thresholded transforms.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "53f1fc6f3cdc02716585d81fae275a55", "grade": true, "grade_id": "cell-515699f212853cf1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# First we get the WT
transform = lab6.pywt_analysis(doisneau_noise, n = 2, wavelet = 'haar')
# Now we will apply both your and PyWavelet thresholding functions to the transform with an arbitrary threshold
student_thr = threshold_hard(transform, T = 50)
pywt_thr = pywt.threshold(transform, value = 50, mode='hard')
# Get the respective inverse transforms
student_iwt = lab6.pywt_synthesis(student_thr, n = 2, wavelet = 'haar')
pywt_iwt = lab6.pywt_synthesis(pywt_thr, n = 2, wavelet = 'haar')
# Enhance visualization
transform = lab6.map_color(transform, n = 2, color_map = color_map)
student_thr = lab6.map_color(student_thr, n = 2, color_map = color_map)
pywt_thr = lab6.map_color(pywt_thr, n = 2, color_map = color_map)
# Plot
plt.close('all')
image_list = [doisneau, doisneau_noise, student_iwt, pywt_iwt, transform, student_thr, pywt_thr ]
title_list = ['Noiseless image', 'Noisy image', 'Your reconstruction', "PyWavelets' reconstruction", 'Wavelet transform', 'Your thresholded transform', "PyWavelets' thresholded transform"]
softthr_viewer = viewer(image_list, title = title_list, widgets = True)
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "11e9614095ac33bb9b01f08cb29ab468", "grade": false, "grade_id": "cell-512252765568c3e1", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Finally, we are going to compare your function against PyWavelets' on the wavelet transform of `doisneau_noise`. Run the next cell and if it does not throw any error, your implementation is likely correct.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "d85c0fa527e2eaa52953b57787e420f8", "grade": false, "grade_id": "cell-bd288df50901f05b", "locked": true, "schema_version": 3, "solution": false, "task": false}
np.testing.assert_array_almost_equal(pywt_thr, student_thr, decimal = 4,
err_msg = 'Your results and PyWT\'s are not the same. Look for the differences in the viewer above!')
print('Congratulations! You are getting even better at wavelets.')
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "7d0bb609aeb4dcdde34a8ab52489017d", "grade": false, "grade_id": "cell-f63b5e9501bbdc49", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### 2.C. Optimal threshold (1 point)
# [Back to table of contents](#ToC_2_WT)
#
# As you have probably seen, wavelet denoising can be really powerful. But how would you choose the optimal threshold value $T$? Let's look at how the quality of an image evolves as we increase the threshold $T$. For that, we will leave the real images for a moment and use a toy example, the [Shepp-Logan phantom](https://en.wikipedia.org/wiki/Shepp%E2%80%93Logan_phantom), a standard test for image reconstruction algorithms coming from the field of computerized tomography (CT). We will load this image from skimage's [data](https://scikit-image.org/docs/dev/api/skimage.data.html) module. It has a range of $[0, 1]$, and we will add zero-mean Gaussian noise with a standard deviation of $0.2$ (comparatively, quite a lot of noise). Then we will denoise with a series of thresholds and plot the evolution of the SNR with the threshold.
#
# Run the next cell to see this test. Browse the images in the viewer to see how powerful wavelet denoising is!
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "2e1410dee8f280f4a5736114ac09bf5d", "grade": false, "grade_id": "cell-353e60aee5dbeb01", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Load phantom and add noise
phantom = data.shepp_logan_phantom()
noisy_phantom = phantom + np.random.normal(loc=0, scale=0.2, size=phantom.shape)
# Declare denoising function
def denoise(img, T, mode):
output = np.copy(img)
# Get WT with arbitrary n=2
output = lab6.pywt_analysis(output, 2, 'haar')
# Denoise with given threshold
output = pywt.threshold(output, value = T, mode=mode)
# Get iWT
return lab6.pywt_synthesis(output, 2, 'haar')
# Declare viewer parameters
image_list = [phantom, noisy_phantom]
snr = lab6.snr_db(phantom, noisy_phantom)
title_list = ['Original', f'Noisy (SNR [dB] = {np.round(lab6.snr_db(phantom, noisy_phantom), 2)})']
# Get lists with SNRs and thresholds
snrs = [snr]
thresholds = [0.05, 0.15, 0.175, 0.2, 0.225, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 1, 1.5, 2]
# Apply denoising
for T in thresholds:
denoised = denoise(noisy_phantom, T, 'soft')
image_list.append(denoised)
snr = lab6.snr_db(phantom, denoised)
snrs.append(snr)
title_list.append(f'Denoised with T={T} (SNR [dB] = {np.round(snr, 2)})')
# Visualize images
plt.close('all')
viewer(image_list, title = title_list, widgets=True)
# Plot evolution of SNR with T
plt.figure(figsize = [6, 5])
plt.plot(np.concatenate(([0], thresholds)), snrs, 'r-o', label = 'SNR [dB]')
plt.xlabel(r'Threshold $T$'); plt.ylabel('SNR'); plt.grid()
plt.title('SNR [dB] vs threshold')
plt.legend(); plt.show()
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "ac75b94c3134235f43afd59e712744bf", "grade": false, "grade_id": "cell-41c8c43fb8b8e47a", "locked": true, "schema_version": 3, "solution": false, "task": false}
# In terms of SNR there is clearly an optimal $T$. A rule of thumb is to choose $T = \sigma_{\mathrm{HH}_1}$ as the denoising threshold, where $\sigma_{\mathrm{HH}_1}$ is the sample standard deviation of the **largest diagonal highpass band HH**, i.e., the diagonal highpass band of the first iteration of the transform. Since this band is dominated by noise, $\sigma_{\mathrm{HH}_1}$ is a good estimator of the true standard deviation $\sigma$ of the noise.
#
# <div class = 'alert alert-success'>
#
# <b>Note:</b> Did you look closely at the plot above? You can see that the threshold that maximizes the SNR and the standard deviation of the noise of the image <code>noisy_phantom</code> are in the same ballpark.
# </div>
#
# For **1 point**, implement the function `h_std(wt)`, which takes as parameter a wavelet transform in the form we have used in the rest of the lab and returns the rule-of-thumb threshold $T = \sigma_{\mathrm{HH}_1}$. A good test that you can build for yourself in the empty cell below is to take the wavelet transform of the image `noisy_phantom`, apply `h_std` on it, and see if you recover something similar to the standard deviation of the noise $\sigma$.
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "0e308f0019f155dbf6104f3b46130154", "grade": false, "grade_id": "cell-7bd3f7118ab7e93b", "locked": false, "schema_version": 3, "solution": true, "task": false}
def h_std(wt):
# Preallocate output variable
T = 0
# YOUR CODE HERE
T = np.std(wt[wt.shape[0]//2:, wt.shape[1]//2:])
return T
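A quick way to convince yourself of the rule of thumb is to apply it to pure synthetic noise, where the true $\sigma$ is known. The following self-contained sketch (the size, seed, and $\sigma$ are arbitrary choices, and the HH band is computed by hand rather than with PyWavelets) shows that the sample standard deviation of the finest diagonal band of an orthonormal Haar transform recovers $\sigma$:

```python
import numpy as np

# Synthetic pure-noise "image" (size, seed, and sigma are arbitrary choices)
rng = np.random.default_rng(0)
sigma = 0.2
noise = rng.normal(0.0, sigma, size=(256, 256))

# One level of the orthonormal Haar transform, diagonal (HH) band only:
# HH[i, j] = (a - b - c + d) / 2 over each 2x2 block
a = noise[0::2, 0::2]; b = noise[0::2, 1::2]
c = noise[1::2, 0::2]; d = noise[1::2, 1::2]
hh1 = (a - b - c + d) / 2.0

# Because the transform is orthonormal, std(HH1) estimates sigma
print(np.std(hh1))  # close to 0.2
```

On a real noisy image the HH$_1$ band also contains some edge energy, so the estimate is slightly biased upward, but as the plot above showed, it still lands in the right ballpark.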
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "43e7f6a550d73c9090d9256f60b935c8", "grade": false, "grade_id": "cell-d4ba24f5ef576e17", "locked": true, "schema_version": 3, "solution": false, "task": false}
# We will quickly test your function on the image `mit_coef` (which is already a wavelet transform), where we know that the rule-of-thumb threshold is $T = 7.1784177$.
#
# <div class = 'alert alert-info'>
#
# <b>Note:</b> If you want to further test your function, you can use the previous cell to apply it on the wavelet transform of different images. Compare the value you get against the value from <i>ImageViewer</i> (remember that if you zoom into a region, the statistics textbox will automatically update the standard deviation).
# </div>
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "46da2ed5dceae7440e2e721878f36ac2", "grade": true, "grade_id": "cell-0d7155058f065049", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
if not np.abs(h_std(mit_coef) - 7.1784177) < 1e-5:
    print(f"WARNING!!\nh_std doesn't return the correct value ({h_std(mit_coef)}).")
else:
    print('Nice, your h_std function passed the sanity check.')
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "b27eadcb27c3fa1f3c82247c9d1a7088", "grade": false, "grade_id": "cell-0ea74540409b610d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# So far, we have tested arbitrary thresholds and only in particular cases. In the next cell you will find an ImageViewer with
#
# * a slider to control the value of $T$,
# * a checkbox that, when selected, sets $T$ to its rule-of-thumb value (uncheck it for the aforementioned slider to take effect),
# * a slider to control the number of iterations of the transform $n$,
# * a dropdown menu to choose a colormap. Note that the `'Normalize std'` option will only appear if you have implemented it in `lab6.py`,
# * a dropdown menu to choose the different thresholding operations,
# * the button `Apply denoising`, to plot the images resulting from the options you selected.
#
# Remember to go to the menu `Extra Widgets` to see these options. In the viewer, you will see both the original image and the reconstruction.
#
# <div class = "alert alert-info">
#
# <b>Note:</b> In order to preserve the visual effect of thresholding, we use colormaps. In order to see it clearly, alternate between $T = 0$ and a given value. If you want to see the effect of thresholding in the histogram too, make sure to set the colormap to <code>None</code>.
# </div>
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "3e12f7134b28e0f3dd3d33f923501569", "grade": false, "grade_id": "cell-255638447351a8b7", "locked": true, "schema_version": 3, "solution": false, "task": false}
## Sliders
T_slider = widgets.FloatSlider(value = 0, min = 0, max = 200, step = 0.1, description = 'T')
T_checkbox = widgets.Checkbox(description='Optimal T')
n_slider = widgets.IntSlider(value = 1, min = 0, max = 5, step = 1, description = 'n')
## Menus
# Check if you have defined Normalize std in lab6.py, define color map menu accordingly
rand_array = np.random.randn(10,10)
if np.allclose(lab6.norm_std_map(rand_array), rand_array):
cmapping_dropdown = widgets.Dropdown(description='Colormap', options = ['Non-uniform map', 'None'], value = 'None')
else:
cmapping_dropdown = widgets.Dropdown(description='Colormap', options = ['Normalize std', 'Non-uniform map', 'None'], value = 'None')
thresh_dropdown = widgets.Dropdown(description='Threshold mode', options = ['Soft', 'Hard'], value = 'Soft')
# Button
button = widgets.Button(description = 'Apply denoising')
def callback_wt(img):
# Set slider and T with optimal checkbox, or extract slider value
if T_checkbox.value:
T = h_std(img)
T_slider.value = np.round_(T, decimals=1)
else:
T = T_slider.value
# Set n
n = n_slider.value
# Compute transform
transform = lab6.pywt_analysis(img, wavelet = 'haar', n = n)
# Threshold
if thresh_dropdown.value == 'Soft':
transform = pywt.threshold(transform, value = T, mode='soft')
elif thresh_dropdown.value == 'Hard':
transform = pywt.threshold(transform, value = T, mode='hard')
# Return reconstruction
return lab6.pywt_synthesis(transform, wavelet = 'haar', n = n)
def callback_iwt(img):
# Set slider and T with optimal checkbox, or extract slider value
if T_checkbox.value:
T = h_std(img)
T_slider.value = np.round_(T, decimals=1)
else:
T = T_slider.value
# Set n
n = n_slider.value
# Compute transform
transform = lab6.pywt_analysis(img, wavelet = 'haar', n = n)
# Threshold
if thresh_dropdown.value == 'Soft':
transform = pywt.threshold(transform, value = T, mode='soft')
elif thresh_dropdown.value == 'Hard':
transform = pywt.threshold(transform, value = T, mode='hard')
# Apply colormap
if cmapping_dropdown.value == 'Normalize std':
transform = lab6.map_color(transform, n = n, color_map = lab6.norm_std_map)
elif cmapping_dropdown.value == 'Non-uniform map':
transform = lab6.map_color(transform, n = n, color_map = lab6.non_uniform_map)
return transform
new_widgets = [T_slider, T_checkbox, n_slider, cmapping_dropdown, thresh_dropdown, button]
plt.close('all')
soft_thr_viewer = viewer([doisneau_noise, doisneau_noise], title=['Denoised image','Wavelet transform'], new_widgets=new_widgets,
callbacks=[callback_wt, callback_iwt], hist = True, subplots = [2, 1])
T_checkbox.value = True
button.click()
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "ccf7fc8de5e49de10d982d0b45f7e764", "grade": false, "grade_id": "cell-74e8c786392d0282", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## 3. Compression (2 points)
# [Back to table of contents](#ToC_2_WT)
#
# As you saw towards the end of [Lab 1](./1_Wavelet_transform.ipynb), just a few coefficients of the wavelet transform can rebuild an image with a decent SNR. This makes the wavelet transform a great tool for image compression. Simple data compression is achieved by applying a hard threshold to the coefficients of an image transform, as used in JPEG2000 with the wavelet transform or in JPEG with the discrete cosine transform (DCT). Note that this is only a rudimentary form of compression. A true encoder would further quantize the wavelet coefficients, which induces additional errors, and the resulting coefficient map would also need to be encoded efficiently using, for example, the EZW algorithm (embedded zero-tree wavelet coding). You can find more information about the EZW algorithm in your [course notes](https://moodle.epfl.ch/course/view.php?id=463), or if you are really interested in the topic, read [the original article](https://ieeexplore.ieee.org/abstract/document/258085).
#
# For **1 point**, code the function `compress`, that **retains a specific percentage of the wavelet coefficients** (the ones with largest **absolute values**), and sets all the other ones to $0$. The parameters are
# * `wt`: the wavelet transform of an image,
# * `per`: the percentage of coefficients to be retained (in integer percentages, e.g., $10$ instead of $0.1$ to specify $10\%$),
#
# and returns
# * `output`: an array of the same size as `wt`, containing wavelet coefficients where **hard thresholding has been applied** (you are allowed to use `pywt.threshold` if you so wish),
# * `T`: the threshold value,
# * `r`: the **ratio** of non-zero pixels in `output`.
#
# <div class = "alert alert-info">
#
# <b>Note</b>: You might find the function [<code>np.percentile</code>](https://numpy.org/doc/stable/reference/generated/numpy.percentile.html) useful.
# </div>
#
# <div class = "alert alert-danger">
#
# <b>Note</b>: Use the function [<code>np.count_nonzero</code>](https://numpy.org/doc/stable/reference/generated/numpy.count_nonzero.html) to calculate the ratio of non-zero coefficients.
# </div>
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "08c66d0e0d0be452c4487e20ba9c8353", "grade": false, "grade_id": "cell-808af027a8477c4b", "locked": false, "schema_version": 3, "solution": true, "task": false}
def compress(wt, per):
output = np.copy(wt)
T = None
r = None
# YOUR CODE HERE
    T = np.percentile(np.abs(wt), 100 - per)
    output = pywt.threshold(wt, value=T, mode='hard', substitute=0)
    r = np.count_nonzero(output) / output.size
return output, T, r
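The idea behind the exercise can also be sketched in isolation. In the following standalone example (random Gaussian coefficients stand in for a real wavelet transform), thresholding at the $(100 - \mathrm{per})$-th percentile of the magnitudes keeps roughly $\mathrm{per}\%$ of the coefficients:

```python
import numpy as np

# Random coefficients stand in for a real wavelet transform (an assumption)
rng = np.random.default_rng(1)
coeffs = rng.normal(size=(64, 64))
per = 5  # keep 5% of the coefficients

# Threshold at the (100 - per)-th percentile of the magnitudes, then
# apply an exclusive hard threshold (zero only |x| strictly below T)
T = np.percentile(np.abs(coeffs), 100 - per)
kept = np.where(np.abs(coeffs) < T, 0.0, coeffs)

# Ratio of surviving coefficients, close to per / 100
r = np.count_nonzero(kept) / kept.size
print(T, r)
```

The printed ratio `r` lands very close to `per / 100`, which is exactly the kind of consistency the next test cell checks on evenly spaced values.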
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "be47b415d7f49c99c76acf5c9f8e62fc", "grade": false, "grade_id": "cell-e0d1e9860d0a7480", "locked": true, "schema_version": 3, "solution": false, "task": false}
# For a quick test on your function, we will not use an image, but an axis in the range $[-10, 10]$, like the ones we used to demonstrate the thresholding functions.
#
# Run the next cell to test your function, which will show the curves for different percentages of kept values. Note that since we have evenly spread values from $-10$ to $10$, it is easy to verify the correctness of your function.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "d4d4e1f0ee3f56d7069ceb93a4bf8acc", "grade": true, "grade_id": "cell-3a19f2b606d72b7b", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Set x axis
test = np.arange(-10, 10, 0.01)
# Handle matplotlib objects
plt.close("all"); fig = plt.figure(); ax = plt.gca()
# Plot reference
ax.plot(test, test, label = r'$y=x$')
# Plot hard thresholding for different percentiles
for i, per in enumerate([10,20,30,40,50]):
hard_thr_test, T, r = compress(test, per = per)
ax.plot(test, hard_thr_test, label = f'Hard thresh. (kept {per}% of coeff.s)')
print(f"Kept {per}% of coeff.s with threshold {T:.2f} and compression ratio {r}")
if not np.isclose(T, 9-i):
if np.isclose(T, i+1):
            print(f"\n###\nBe careful with how you are calculating your threshold!\nYou need to KEEP {per}\
% of coefficients, as opposed to discarding {per}% of them\n###\n")
else:
print(f"###\nBe careful with how are you calculating your threshold!\n###")
# Set grid and legend
ax.grid(); ax.legend(loc='upper left'); ax.set_xlabel("Input"); ax.set_ylabel("Output"); plt.show()
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "4fed1dd6ab97b2ab3db5e72a859217a1", "grade": false, "grade_id": "cell-06fc30388ef035a0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Verify that both the threshold and the number of remaining pixels make sense. Now, let's take a look at your function applied to the image `mer_de_glace`, where we will keep only $5\%$ of the coefficients.
#
# Run the next cell to see the image `mer_de_glace`, its wavelet transform, its reconstruction after wavelet compression, and the thresholded wavelet transform. To enhance visualization, we will also use the colormap you selected on the wavelet transforms. Observe that while the reconstructed image looks quite good at first, zooming in reveals many compression artifacts (after all, $5\%$ of the coefficients is not much). To better compare the images, feel free to use `Options` $\rightarrow$ `Enable Joint Zoom` in the viewer.
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "4ab6d9854d1532313652ea171714beae", "grade": false, "grade_id": "cell-494511a170fd5560", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Get the wavelet transform
wt = lab6.pywt_analysis(mer_de_glace, wavelet = 'haar', n = 4)
# Apply compression (keeping 5% of the coefficients)
compressed_wt, T, r = compress(wt, per = 5)
# Reconstruct each and calculate SNR
compressed_rec = lab6.pywt_synthesis(compressed_wt, wavelet = 'haar', n = 4)
snr_comp = np.round_(lab6.snr_db(compressed_rec, mer_de_glace), decimals = 2)
# Apply colormap for better visualization
wt = lab6.map_color(wt, n = 4, color_map = color_map)
compressed_wt = lab6.map_color(compressed_wt, n = 4, color_map = color_map)
image_list = [mer_de_glace, wt, compressed_rec, compressed_wt]
title_list = ['Original', 'Wavelet transform (n=4)',
f'Compressed rec. (SNR [dB] = {snr_comp})',
f'Compressed wavelet tr. (T = {np.round_(T, decimals = 2)})']
plt.close('all')
compression_viewer = viewer(image_list, title = title_list, subplots=(3,2))
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "70403be7fc8a4bc226aecd82131910f9", "grade": false, "grade_id": "cell-eff4d05a52a8238d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Multiple Choice Question
# [Back to table of contents](#ToC_2_WT)
#
# At this point you may be asking yourself: why do we put compression and denoising in separate categories if we use thresholding anyway? For **1 point**, answer the following.
#
# * Q1. Assume we calculate, with respect to `doisneau`, the **SNR** of `doisneau_noise`, of `doisneau_noise` after optimal denoising, and of `doisneau_noise` after compression **keeping only $2\%$ of the coefficients**. What will be the order of the SNRs, from low to high?
#
# 1. `doisneau_noise` < `doisneau_noise` after optimal denoising < `doisneau_noise` after compression
# 2. `doisneau_noise` < `doisneau_noise` after compression < `doisneau_noise` after optimal denoising
# 3. `doisneau_noise` after optimal denoising < `doisneau_noise` < `doisneau_noise` after compression
# 4. `doisneau_noise` after optimal denoising < `doisneau_noise` after compression < `doisneau_noise`
# 5. `doisneau_noise` after compression < `doisneau_noise` < `doisneau_noise` after optimal denoising
# 6. `doisneau_noise` after compression < `doisneau_noise` after optimal denoising < `doisneau_noise`
#
# Modify the variable answer in the following cell to reflect your choice.
# <div class = 'alert alert-info'>
#
# <b>Note</b>: If you want to verify your answer, you can use the empty cell below to experiment. Recycle any code you need!
# </div>
# + deletable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "518549302e2877730bc4598d0128e9e2", "grade": false, "grade_id": "cell-a147bc08f86f1a34", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Assign your answer to this variable
answer = 5
# YOUR CODE HERE
# + deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "code", "checksum": "2d181b6d3d5b70fd1b5650519ce4fe23", "grade": true, "grade_id": "cell-404e34aabd91cb5a", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Sanity check
if not answer in [1, 2, 3, 4, 5, 6]:
print('WARNING!!\nValid answers are 1, 2, 3, 4, 5 or 6.')
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "789ee312ea64ab44232d574342072d35", "grade": false, "grade_id": "cell-75a5bcf3be2220f9", "locked": true, "schema_version": 3, "solution": false, "task": false}
# As you can see, it is quite remarkable how sparse the wavelet transform can be before we start to lose information. This makes compression and denoising two major applications, and makes the wavelet transform an important tool in image processing. If you want to finish exploring the potential of the methods we have seen, and in particular, understand the limits of denoising and compression, play around with the following cell! We have included a widget where you can:
#
# * Create a noisy image,
# * Denoise/compress keeping a specific percentage of coefficients.
#
# Run the next cell and go to the menu `Extra Widgets` to explore the applications of the wavelet transform! You can choose which image you want to use by changing the first line of the cell.
# +
# Choose image
image = mer_de_glace
## Declare sliders
noise_std_slider = widgets.FloatSlider(value = 20, min = 0, max = 40, step = 0.1, description = r'Noise $\sigma$')
noise_mean_slider = widgets.FloatSlider(value = 0, min = -100, max = 100, step = 0.1, description = 'Noise mean')
per_slider = widgets.FloatSlider(value = 100, min = 0, max = 100, step = 0.1, description = '% of coeff.s')
n_slider = widgets.IntSlider(value = 1, min = 0, max = 5, step = 1, description = 'n')
# Button % Optimal threshold checkbox
T_checkbox = widgets.Checkbox(description='Optimal T')
button = widgets.Button(description = 'Apply')
# Declare callbacks
def callback_noise(img):
# Get noise
mean = noise_mean_slider.value
std = noise_std_slider.value
# Return noisy image
return img + np.random.normal(mean, std, img.shape)
def callback_iwt(img):
# Build noisy image
mean = noise_mean_slider.value
std = noise_std_slider.value
noisy = img + np.random.normal(mean, std, img.shape)
# Set n
n = n_slider.value
# Set percentage
per = per_slider.value
# Compute transform
transform = lab6.pywt_analysis(noisy, wavelet = 'haar', n = n)
# Threshold
if T_checkbox.value:
T = h_std(transform)
transform = pywt.threshold(transform, value = T, mode = 'soft')
else:
transform, T, _ = compress(transform, per)
# Return iWT
return lab6.pywt_synthesis(transform, wavelet = 'haar', n = n)
# Viewer Parameters
new_widgets = [noise_std_slider, noise_mean_slider, n_slider, per_slider, T_checkbox, button]
plt.close('all')
compression_viewer = viewer([image, image], title=['Noisy Image', 'Reconstruction'], new_widgets=new_widgets,
callbacks=[callback_noise, callback_iwt], subplots = [2, 1], widgets = True)
button.click()
# + [markdown] deletable=false editable=false kernel="SoS" nbgrader={"cell_type": "markdown", "checksum": "844e58d8dac1c22320874fd8839fb2ee", "grade": false, "grade_id": "cell-8f2c99c6329bf1a1", "locked": true, "schema_version": 3, "solution": false, "task": false}
# <div class="alert alert-success">
#
# <p><b>You have reached the end of the second part of the Wavelets lab!</b></p>
# <p>
# Make sure to save your notebook (you might want to keep a copy on your personal computer) and upload it to <a href="https://moodle.epfl.ch/mod/assign/view.php?id=1148687">Moodle</a>, in a zip file with other notebooks of this lab.
# </p>
# </div>
#
# * Keep the name of the notebook as: *1_Wavelet_transform.ipynb*,
# * Name the zip file: *Wavelets_lab.zip*.
#
# <div class="alert alert-danger">
# <h4>Feedback</h4>
# <p style="margin:4px;">
# This is the first edition of the image-processing laboratories using Jupyter Notebooks running on Noto. Do not leave before giving us your <a href="https://moodle.epfl.ch/mod/feedback/view.php?id=1148686">feedback here!</a></p>
# </div>
# -
| Wavelet Transform/2_Wavelet_processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit (conda)
# metadata:
# interpreter:
# hash: ab2c6effcff14de217321fbbb614bff3794ce5785c4964fdaa7bfe00a73c95ca
# name: python3
# ---
# +
import numpy as np
import statsmodels.api as sm
nobs = 100
X = np.random.random((nobs, 2))
X = sm.add_constant(X)
beta = [1, .1, .5]
e = np.random.random(nobs)
y = np.dot(X, beta) + e
results = sm.OLS(y, X).fit()
print(results.summary())
# -
import statsmodels
print(dir(statsmodels.api))
print(dir(statsmodels.formula.api))
print(len(dir(statsmodels.api)))
print(len(dir(statsmodels.formula.api)))
| chapter3/section3.4_StatsModels_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Foundation inference Bootstrap Permutation
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from plotnine import *
from plotnine.data import *
from sklearn.utils import shuffle
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
import scipy.stats as stats
# %matplotlib inline
sns.set() #Making seaborn the default styling
data_path = '/Users/User/Desktop/Data/Datasets/Learning'
#data_path = '/Users/User/Desktop/Data/DataCamp-master/Intro_to_data'
os.chdir(data_path)
os.listdir()
NHANES = pd.read_csv("NHANES.csv", index_col = 0)
NHANES.head()
# ## Initial EDA using ggplot2
# Create bar plot for Home Ownership by Gender
ggplot(NHANES, aes(x = "Gender", fill = "HomeOwn")) + geom_bar(position = "fill" ) + ylab("Relative frequencies")
# Density for SleepHrsNight colored by SleepTrouble, faceted by HealthGen
ggplot(NHANES, aes(x = "SleepHrsNight", col = "SleepTrouble")) + geom_density(adjust = 2) + facet_wrap("~ HealthGen")
# ## Randomly allocating samples
# Randomly permute the observations and calculate a difference in proportions that could arise from a null distribution.
#
# Using the NHANES dataset, let's investigate the relationship between gender and home ownership. Type ?NHANES in the console to get a description of its variables
# Subset the NHANES dataset to consider only individuals whose home ownership status is either "Own" or "Rent". Save the result to homes
homes = NHANES[(NHANES["HomeOwn"]== "Own") | (NHANES["HomeOwn"]=="Rent")][["Gender","HomeOwn"]]
homes['count'] = 1
homes.head()
# Perform a single permutation to evaluate whether home ownership status (i.e. HomeOwn) differs between the "female" and "male" groups:
# In your call, shuffle home ownership status. Call this new variable HomeOwn_perm, a permuted version of HomeOwn.
homes.groupby(by = ["Gender","HomeOwn"]).count().unstack()
homes_s1 = homes.copy()
homes_s1["HomeOwn_perm"] = np.random.permutation(homes_s1["HomeOwn"].values)
homes_s1.head()
homes_s1.groupby(by = ["Gender","HomeOwn_perm"]).count().unstack()
# ## Testing different shuffle approaches
raw_data = {'Coast': ['East', 'West', 'West', 'West', 'East'],
'Cola': ['Coke', 'Coke', np.NaN, np.NaN, 'Coke']}
df = pd.DataFrame(raw_data, columns = ['Coast', 'Cola'])
df['count']=1
display(df)
df.groupby(["Coast","Cola"]).count()
# ### Permutations approach 1
p1 = df.reindex(np.random.permutation(df.index))
display(p1)
p1.groupby(["Coast","Cola"]).count()
p2 = df.sample(frac=1, axis=0).reset_index(drop=True)
display(p2)
p2.groupby(["Coast","Cola"]).count()
p3 = df.apply(np.random.permutation, axis=0)
display(p3)
p3.groupby(["Coast","Cola"]).count()
p4 = shuffle(df)
display(p4)
p4.groupby(["Coast","Cola"]).count()
np.random.shuffle(df.values)
display(df)
# # Verizon Example
# Repair times for two different customers groups:
# - ILEC = Incumbent Local Exchange Carrier i.e. Verizon
# - CLEC = Competing Local Exchange Carrier i.e. others
# Verizon is subject to substantial fines if the repair times for CLEC are substantially worse than for ILEC
# ### Descriptive statistics
verizon = pd.read_csv("verizon.csv")
display(verizon.head())
display(verizon.groupby("Group").describe())
ILEC = verizon[verizon["Group"]=="ILEC"].Time
CLEC = verizon[verizon["Group"]=="CLEC"].Time
# ### Histogram
# Create histogram
ggplot(verizon, aes(x = "Time" )) + geom_histogram() + ylab("Relative frequencies") + facet_wrap("~ Group") + coord_cartesian(xlim = (0, 100)) + ggtitle("Repair times histograms")
# ### Density plot
# Create histogram
ggplot(verizon, aes(x = "Time" , fill = "Group")) + geom_density(alpha = .3) \
+ ggtitle("Repair times distribution")
# ### Box plot
ggplot(verizon, aes(x = "Group" , y = "Time")) + geom_boxplot() \
+ ggtitle("Repair times box plots")
# ### QQ plots to check normality
# For all data points
import scipy.stats as stats
stats.probplot(verizon.Time, dist = "norm", plot = plt)
plt.show()
# For the two groups separately
import statsmodels.api as sm
stats.probplot(verizon[verizon["Group"]=="ILEC"].Time, dist = "norm", plot = plt)
plt.show()
stats.probplot(verizon[verizon["Group"]=="CLEC"].Time, dist = "norm", plot = plt)
plt.show()
# Normalizing the data first and using a different library
Z_ILEC = stats.mstats.zscore(verizon[verizon["Group"]=="ILEC"].Time)
Z_CLEC = stats.mstats.zscore(verizon[verizon["Group"]=="CLEC"].Time)
sm.qqplot(Z_ILEC, line = '45')
sm.qqplot(Z_CLEC, line = '45')
plt.show()
# ## Procedure for Bootstrapping
# 1) **Resample**. Create hundreds of new samples, called bootstrap samples or resamples, by sampling *with replacement* from the original random sample. Each resample is the same size as the original random sample.
#
# - **Sampling with replacement** means that after we randomly draw an observation from the original sample, we put it back before drawing the next observation. This is like drawing a number from a hat, then putting it back before drawing again. As a result, any number can be drawn once, more than once, or not at all. If we sampled without replacement, we’d get the same set of numbers we started with, though in a different order. Figure 18.2 illustrates the bootstrap resampling process on a small scale. In practice, we would start with the entire original sample, not just six observations, and draw hundreds of resamples, not just three.
#
#
# 2) **Calculate the bootstrap distribution**. Calculate the statistic for each resample. The distribution of these resample statistics is called a bootstrap distribution. In Case 18.1, we want to estimate the population mean repair time $\mu$, so the statistic is the sample mean $\bar{x}$.
#
# 3) **Use the bootstrap distribution**. The bootstrap distribution gives information about the shape, center, and spread of the sampling distribution of the statistic.
#
# ### Defining the utility function
def bootstrap_statistic(data,func, B = 1000):
'''Generate B bootstrap samples with replacement (for numpy array only) and calculate the test statistic for each.
Return a vector containing the test statistics'''
statistics_vector = np.array([])
for i in range(B):
bootstrap_sample = np.random.choice(data, len(data), replace = True)
statistics_vector = np.append(statistics_vector, func(bootstrap_sample))
return statistics_vector
# #### Bootstrapping the mean of each group
# +
#Generating bootstraps for 1k and 10k resamples
bootstrap_ILEC_1k = bootstrap_statistic(ILEC,np.mean,1000)
bootstrap_CLEC_1k = bootstrap_statistic(CLEC,np.mean,1000)
bootstrap_ILEC_10k = bootstrap_statistic(ILEC,np.mean,10000)
bootstrap_CLEC_10k = bootstrap_statistic(CLEC,np.mean,10000)
#Combining into dataframes
bootstrap_df_1k = pd.DataFrame({"ILEC_1k":bootstrap_ILEC_1k, "CLEC_1k":bootstrap_CLEC_1k}).melt(var_name = "Group", value_name = "Time")
bootstrap_df_10k = pd.DataFrame({"ILEC_10k":bootstrap_ILEC_10k, "CLEC_10k":bootstrap_CLEC_10k}).melt(var_name = "Group", value_name = "Time")
display(bootstrap_df_1k.groupby("Group").describe())
display(bootstrap_df_10k.groupby("Group").describe())
#Stacking the dataframes for the plot
bootstrap_df = pd.concat([bootstrap_df_1k,bootstrap_df_10k], keys = ['1k','10k']).reset_index(level = 0, )
bootstrap_df = bootstrap_df.rename(index=str, columns={"level_0": "Size"})
bootstrap_df.head()
# -
# #### Computing the bias of the ILEC set
# - Observed mean is 8.41
# - Bootstrap mean is 8.41 or 8.40
#
#
# #### Bootstrap standard error is the standard deviation of the bootstrap distribution of the statistic
# - ILEC std = 0.36
# - CLEC std = 3.98
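# The bias and standard error quoted above come straight from the bootstrap distribution: bias is the bootstrap mean minus the observed mean, and the standard error is the standard deviation of the bootstrap means. A minimal, self-contained sketch (a synthetic skewed sample stands in for the `ILEC` series, so the exact numbers will differ):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the ILEC repair times (skewed, mean ~8.4)
sample = rng.exponential(scale=8.4, size=1664)

# Bootstrap the sample mean: resample with replacement, same size as the original
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(1000)])

bias = boot_means.mean() - sample.mean()  # bootstrap estimate of bias (should be ~0)
std_error = boot_means.std(ddof=1)        # bootstrap standard error of the mean
print(f"bias = {bias:.4f}, standard error = {std_error:.4f}")
```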
# #### Plotting the result
# Clearly, increasing the number of samples decreases the standard deviation
# Create histogram
ggplot(bootstrap_df, aes(x = "Time" , fill = "Group")) \
+ geom_density(alpha = .3) + ggtitle("Bootstrap Repair times distribution")
# ### Bootstrap the difference between means
#
# Given independent random samples of size n and m from two populations
# 1. Draw a resample of size n with replacement from the first sample and a separate sample of size m from the second sample
# 2. Compute a statistic that compares the two groups
# 3. Repeat the resampling process
# 4. Construct a bootstrap distribution of the statistic
# +
def bootstrap_two_populations(data1, data2,func, B = 1000):
    '''Generate B bootstrap samples with replacement (for numpy arrays only) and calculate the test statistic for each.
    Return a vector containing the test statistics'''
statistics_vector = np.array([])
for i in range(B):
bootstrap_sample1 = np.random.choice(data1, len(data1), replace = True)
bootstrap_sample2 = np.random.choice(data2, len(data2), replace = True)
statistics_vector = np.append(statistics_vector, func(bootstrap_sample1,bootstrap_sample2))
return statistics_vector
def diff_means(data1, data2):
return np.mean(data1) - np.mean(data2)
# -
bootstrap_diff_1k = pd.DataFrame({'Diff_1k':bootstrap_two_populations(ILEC,CLEC, diff_means,1000)})
bootstrap_diff_10k = pd.DataFrame({'Diff_10k':bootstrap_two_populations(ILEC,CLEC, diff_means,10000)})
bootstrap_diff = pd.concat([bootstrap_diff_1k,bootstrap_diff_10k], keys = ['1k','10k'], axis = 0).melt().dropna()
#bootstrap_diff.rename(index=str, columns={"level_0": "Size"})
bootstrap_diff.groupby('variable').describe()
# ### Difference between the means
# - Mean difference: - 8.09
# - Std deviation of the difference: 4
# - Shape: Not normal, left tail heavy, right tail light
#
# Since the distribution of the difference between means is not normal, we cannot safely use the t-distribution to construct tests or confidence intervals
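# The "heavy left tail" claim above can be quantified with `scipy.stats.skew` and a normality test. A small sketch on synthetic left-skewed data (standing in for the bootstrap differences, which are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Left-skewed stand-in for the bootstrap difference-of-means distribution
diffs = -rng.gamma(shape=2.0, scale=2.0, size=10_000) - 4.0

print("skewness:", stats.skew(diffs))                        # negative => heavy left tail
print("normality p-value:", stats.normaltest(diffs).pvalue)  # tiny => reject normality
```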
# Create histogram
ggplot(bootstrap_diff, aes(x = "value", fill = 'variable')) + geom_density(alpha = .3) + ggtitle("Bootstrap difference between means distribution")
stats.probplot(bootstrap_diff[bootstrap_diff["variable"]=="Diff_10k"].value, dist = "norm", plot = plt)
plt.show()
# ## Bootstrap linear regression
from sklearn import datasets
boston = pd.read_csv("boston_housing_train.csv")
boston.head()
# ### Finding the highest correlated variable
# Target variable is medv and we will use rm = rooms
f, ax = plt.subplots(figsize=(12, 12))
corr = boston.corr()
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=bool), cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
# ### Computing bootstrap confidence intervals directly using Numpy
# +
def linear_fit(x_data,y_data):
a,b = np.polyfit(x = x_data, y = y_data, deg =1)
#Make theoretical line to plot
x_line = np.array([np.min(x_data),np.max(x_data)])
y_line = a*x_line + b
#Add regression plot
plt.plot(x_line,y_line, alpha = 0.01, color = 'red', linewidth = .5)
def bootstrap_linear_fit(x_data, y_data, B = 1000):
#Plot the original data
plt.plot(x_data, y_data, marker='.', linestyle='none')
#Setup array of indices to sample from
inds = np.arange(len(x_data))
for i in range(B):
bs_inds = np.random.choice(inds, len(inds), replace = True)
bs_x, bs_y = x_data[bs_inds], y_data[bs_inds]
linear_fit(bs_x, bs_y)
# -
plt.figure(figsize = (8,8))
bootstrap_linear_fit(boston.rm, boston.medv)
# ### Scatter and bootstrap confidence interval done directly by Seaborn (95%)
fig, (ax1, ax2) = plt.subplots(1,2, figsize = (16,6))
ax1 = sns.regplot(x = "rm", y = "medv", data = boston, ax = ax1)
ax2 = sns.regplot(x = "rm", y = "medv", data = boston, ax = ax2)
# ## Bootstrapping the correlation coefficient
# ### MLB example
MLB = pd.read_csv("MLB.csv")
display(MLB.head())
print("The correlation coefficient is = ", np.corrcoef(MLB.Average, MLB.Salary)[0,1] )
ggplot(MLB, aes(x = "Average", y="Salary")) + geom_point()+ geom_smooth(method="lm")
def bootstrap_corr_coef(x_data, y_data, B = 2000):
#Initialize empty array
coef_array = np.array([])
#Setup array of indices to sample from
inds = np.arange(len(x_data))
#Loop B times to generate B bootstrap statistics
for i in range(B):
bs_inds = np.random.choice(inds, len(inds), replace = True)
bs_x, bs_y = x_data[bs_inds], y_data[bs_inds]
coef_array = np.append(coef_array, np.corrcoef(bs_x, bs_y)[0,1])
return coef_array
# +
#Running the bootstrap on the correlation coefficient
bs_corr_coef = bootstrap_corr_coef(MLB.Average, MLB.Salary)
# Summary statistics
display(stats.describe(bs_corr_coef))
#Normalizing the bootstrap distribution
norm_bs_corr_coef = stats.mstats.zscore(bs_corr_coef)
#Displaying the distribution and QQ plot
fig, (ax1, ax2) = plt.subplots(1,2, figsize = (16,6))
ax1 = sns.distplot(bs_corr_coef, ax = ax1)
ax2 = stats.probplot(bs_corr_coef, dist = "norm", plot = plt)
# -
# ## Testing for departure from normality
# Recall that all of the following hypothesis tests work such that the **null hypothesis is the assumption of normality**
# +
#Testing the normality of the resulting bootstrap distribution
print("Sample size = ", norm_bs_corr_coef.size)
#Shapiro-Wilk
print("Shapiro Wilk test: p_value = ", stats.shapiro(norm_bs_corr_coef)[1])
#Kolmogorov-Smirnov
print("Kolmogorov-Smirnov test: p_value = ", stats.kstest(norm_bs_corr_coef, cdf = 'norm')[1])
#Anderson-Darling (note: stats.anderson returns a statistic and critical values, not a p-value)
print("Anderson-Darling test: 5% critical value = ", stats.anderson(norm_bs_corr_coef)[1][2])
#D’Agostino and Pearson
print("D’Agostino and Pearson test: p_value = ", stats.normaltest(norm_bs_corr_coef)[1])
# -
# ### An example of NOT normally distributed data
# +
data = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/wind_speed_laurel_nebraska.csv')
wind = data['10 Min Sampled Avg']
sns.distplot(wind)
#Testing the normality of the resulting bootstrap distribution
print("Sample size = ", wind.size)
#Shapiro-Wilk
print("Shapiro Wilk test: p_value = ", stats.shapiro(wind)[1])
#Kolmogorov-Smirnov
print("Kolmogorov-Smirnov test: p_value = ", stats.kstest(wind, cdf = 'norm')[1])
#Anderson-Darling (note: stats.anderson returns a statistic and critical values, not a p-value)
print("Anderson-Darling test: 5% critical value = ", stats.anderson(wind)[1][2])
#D’Agostino and Pearson
print("D’Agostino and Pearson test: p_value = ", stats.normaltest(wind)[1])
# -
# ### Computing the bootstrap t interval
#
# Checking assumptions:
# - The bootstrap distribution has normal shape
# - Mean = 0.102
# - standard error = 0.129
# +
#Bootstrap standard error
print("Standard error = ", np.std(bs_corr_coef, ddof=1))
#Obtaining the t value for 50 - 1 degrees of freedom and 97.5th percentile:
print("t values interval =",stats.t.interval(0.95, df = 49))
#Calculating the t interval using the bootstrap standard error
bs_t_interval = np.mean(bs_corr_coef) + (np.array(stats.t.interval(0.95, df = 49)) * np.std(bs_corr_coef, ddof=1))
print("bootstrap t interval using standard error =",bs_t_interval)
#Calculating the t interval using the percentile interval
bs_perc_interval = np.percentile(bs_corr_coef, 2.5), np.percentile(bs_corr_coef, 97.5)
print("bootstrap percentile interval =",bs_perc_interval )
f, ax = plt.subplots(figsize=(10, 6))
ax = sns.distplot(bs_corr_coef)
#Vertical lines
plt.axvline(bs_perc_interval[0], color='r')
plt.axvline(bs_perc_interval[1], color='r')
plt.axvline(bs_t_interval[0], color='c')
plt.axvline(bs_t_interval[1], color='c')
# -
# # Significance testing using permutation tests
# ### Verizon data set:
# Penalties are assessed if a significance test concludes at the 1% significance level that CLEC customers are receiving inferior service. A one sided test is used.
#
# Because the distributions are strongly skewed and the sample sizes are very different, we cannot use two-sample t tests.
#
# - ILEC: size = 1664
# - CLEC: size = 23
# - Mean ILEC = 8.411
# - Mean CLEC = 16.509
# - Mean difference = - 8.097519
# +
def permutation_sample(data1,data2):
'''Generate a permutation sample from two data sets'''
# Concatenate the data
data = np.concatenate((data1,data2))
permutated_data = np.random.permutation(data)
#Select new samples without replacements
perm_sample1 = permutated_data[:len(data1)]
perm_sample2 = permutated_data[len(data1):]
return perm_sample1, perm_sample2
def draw_perm_reps(data_1, data_2, func, n=100):
'''Generate multiple permutation replicates.
Here func is a function that takes two arrays as arguments'''
perm_array = np.array([])
for i in range(n):
perm_sample_1, perm_sample_2 = permutation_sample(data_1, data_2)
perm_array = np.append(perm_array, func(perm_sample_1,perm_sample_2))
return perm_array
# -
perm_mean = draw_perm_reps(ILEC,CLEC,diff_means,100000)
# +
stats.describe(perm_mean)
T = np.mean(ILEC) - np.mean(CLEC)
P = (perm_mean < T).sum() / perm_mean.size
print("the P value is: ",P)
# Plotting the distribution and p value
f, ax = plt.subplots(figsize=(10, 6))
ax = sns.distplot(perm_mean)
plt.annotate(
# Label and coordinate
'T = -8.09', xy=(T, .01), xytext=(-10, 0.03),
# Custom arrow
arrowprops=dict(facecolor='black')
)
plt.axvline(T, color='c')
# -
# ### Calculating the p value corresponding to a difference of -8.0975
np.percentile(perm_mean,1.864)
more_extreme_vals = perm_mean < T
more_extreme_vals.sum()
# +
raw_data1 = {'A': ['a1', 'a1', 'a1', 'a1', 'a1'],
'B': ['b1', 'b1', 'b1', 'b1', 'b1']}
df1 = pd.DataFrame(raw_data1, columns = ['A', 'B'])
df1['count']=1
display(df1)
raw_data2 = {'A': ['a2', 'a2'],
'B': ['b2', 'b2']}
df2 = pd.DataFrame(raw_data2, columns = ['A', 'B'])
df2['count']=1
display(df2)
# -
c = pd.concat([df1,df2], keys = ['one','two'], names = ['Xavier'])
c
c.index.set_names(level = 0, names = 'size')
d = pd.concat([df1,df2], keys = ['1k','10k']).reset_index(level = 0, )
d.rename(index=str, columns={"level_0": "Size"})
| Foundation Inference Bootstrap Permutation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import nltk
print(np.__version__)
print(nltk.__version__)
def get_reviews(path, positive=True):
label = 1 if positive else 0
with open(path, 'r') as f:
review_text = f.readlines()
reviews = []
for text in review_text:
#Returns a tuple of review and label with its positivity and negativity
reviews.append((text, label))
return reviews
def extract_reviews():
positive_reviews = get_reviews("rt-polarity.pos", positive = True)
negative_reviews = get_reviews("rt-polarity.neg", positive = False)
return positive_reviews, negative_reviews
positive_reviews, negative_reviews = extract_reviews()
len(positive_reviews)
len(negative_reviews)
positive_reviews[:2]
# +
TRAIN_DATA = 5000
TOTAL_DATA = len(positive_reviews)
train_reviews = positive_reviews[:TRAIN_DATA] + negative_reviews[:TRAIN_DATA]
test_positive_reviews = positive_reviews[TRAIN_DATA:TOTAL_DATA]
test_negative_reviews = negative_reviews[TRAIN_DATA:TOTAL_DATA]
# +
def get_vocabulary(train_reviews):
words_set = set()
for reviews in train_reviews:
words_set.update(reviews[0].split())
return list(words_set)
vocabulary = get_vocabulary(train_reviews)
# -
len(vocabulary)
vocabulary[:5]
def extract_features(review_text):
#split review into words and create set of words
review_words = set(review_text.split())
features = {}
for word in vocabulary:
features[word] = (word in review_words)
return features
train_features = nltk.classify.apply_features(extract_features, train_reviews)
trained_classifier = nltk.NaiveBayesClassifier.train(train_features)
def sentiment_calculator(review_text):
features = extract_features(review_text)
return trained_classifier.classify(features)
sentiment_calculator("What an amazing movie!")
sentiment_calculator("What a terrible movie!")
def classify_test_reviews(test_positive_reviews, test_negative_reviews, sentiment_calculator):
positive_results = [sentiment_calculator(review[0]) for review in test_positive_reviews]
negative_results = [sentiment_calculator(review[0]) for review in test_negative_reviews]
true_positives = sum(x > 0 for x in positive_results)
true_negatives = sum(x == 0 for x in negative_results)
percent_true_positive = float(true_positives) / len(positive_results)
percent_true_negative = float(true_negatives) / len(negative_results)
total_accuracy = true_positives + true_negatives
total = len(positive_results) + len(negative_results)
print("Accuracy on positive reviews: " + "%.2f" % (percent_true_positive * 100) + "%")
print("Accuracy on negative reviews: " + "%.2f" % (percent_true_negative * 100) + "%")
print("Overall Accuracy: " + "%.2f" % (total_accuracy * 100 / total) + "%")
classify_test_reviews(test_positive_reviews, test_negative_reviews, sentiment_calculator)
| Sentiment Rotten Tomatoes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object-oriented programming practice quiz
#
# This quiz test your basic knowledge of object-oriented programming in Python.
#
# (The solutions do not appear if you view this file from github, so download the notebook to your laptop and view it in Jupyter. Clicking on the ▶`Solution` button opens up the solution for each question.)
# ## Terminology and semantics
# ### Question
#
# What is a class?
# <details>
# <summary>Solution</summary>
# A class is a user-defined type and is a template describing the fields and functions for objects of that type.
# </details>
# ### Question
#
# What's the difference between a class and an instance?
# <details>
# <summary>Solution</summary>
# An instance is an object of a particular type/class. A class is the template or blueprint for that collection of objects.
# </details>
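# A tiny illustration of the class/instance distinction (the names here are made up for the example):

```python
class Dog:              # the class: a template for Dog objects
    def __init__(self, name):
        self.name = name

a = Dog("Rex")          # two distinct instances
b = Dog("Fido")         # built from the same template
print(a.name, b.name)
```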
# ### Question
#
# What is the syntax to access a field of an object?
# <details>
# <summary>Solution</summary>
# Use the dot operator: if `p` refers to an object with a field `x`, then `p.x` reads (or assigns) that field.
# </details>
# ### Question
#
# How does a method differ from a function?
# <details>
# <summary>Solution</summary>
# A method is a function defined inside a class. It is invoked on an instance using dot notation, such as `obj.f()`, and the instance is passed automatically as the first parameter, conventionally named `self`.
# </details>
# ### Question
#
# What does the `__init__` method do?
# <details>
# <summary>Solution</summary>
# `__init__` is the constructor: Python calls it automatically when a new instance is created, so it is the place to initialize the object's fields.
# </details>
# ### Question
#
# Given classes Employee and Manager, which would you say is the superclass and which is the subclass?
# <details>
# <summary>Solution</summary>
# A manager is a kind of employee so it makes the most sense that Employee would be the superclass and Manager would be the subclass. The Manager class inherits fields and functionality from Employee.
# </details>
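# A minimal sketch of that relationship (the field and method names are illustrative):

```python
class Employee:
    def __init__(self, name):
        self.name = name
    def role(self):
        return "employee"

class Manager(Employee):   # Manager inherits from Employee
    def role(self):        # and overrides role()
        return "manager"

m = Manager("Ann")
print(m.name, m.role())    # the name field is inherited from Employee
```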
# ### Question
#
# Can a method in a subclass call a method defined in the superclass?
# <details>
# <summary>Solution</summary>
# Yes, if f() exists in the superclass, then a method in the subclass can simply call self.f() to invoke that inherited method. As a special case, if we are defining function f() in the subclass and want to call the superclass f() from that method, we must use super().f(), not self.f(), to explicitly indicate which function to call.
#
# The following example prints "woof"
#
# <pre>
# class Dog:
# def speak(self):
# print("woof")
#
# class Akita(Dog):
# def whos_a_good_boy(self):
# self.speak()
#
# d = Akita()
# d.whos_a_good_boy()
# </pre>
# </details>
# ### Question
#
# How do you override a method inherited from your superclass?
# <details>
# <summary>Solution</summary>
# Just redefine the method in the subclass and it will override the previous functionality. This program:
#
# <pre>
# class Dog:
# def speak(self):
# print("woof")
#
# class DogCat(Dog):
# def speak(self):
# print("Meowoof!")
#
# Dog().speak()
# DogCat().speak()
# </pre>
#
# prints:
#
# <pre>
# woof
# Meowoof!
# </pre>
#
# </details>
# ## Understanding and generating OO code
# ### Question
#
# Define a new class Foo whose constructor takes no arguments but initializes field x to 0.
#
# <details>
# <summary>Solution</summary>
# <pre>
# class Foo:
# def __init__(self): # don't forget the "self"!!
# self.x = 0 # don't forget the "self."!!
# </pre>
# </details>
# ### Question
#
# Create a class called Point whose constructor accepts x and y coordinates; the constructor should set fields using those parameters. This code
#
# ```
# p = Point(9, 2)
# print(p.x, p.y)
# ```
#
# should print "9 2".
#
# <details>
# <summary>Solution</summary>
# <pre>
# class Point:
# def __init__(self, x, y):
# self.x = x
# self.y = y
# </pre>
# </details>
# ### Question
#
# For class Point, define the `__str__` method so that points are printed out as "(*x*,*y*)". E.g., `print( Point(3,9) )` should print "(3,9)".
#
# <details>
# <summary>Solution</summary>
# <pre>
# class Point:
# def __init__(self, x, y):
# self.x = x
# self.y = y
# def __str__(self):
# return f"({self.x},{self.y})"
# </pre>
# </details>
# ### Question
#
# Define method `distance(q)` that takes a Point and returns the Euclidean distance from self to q. Recall that $dist(u,v) = \sqrt{(u_1-v_1)^2 + (u_2-v_2)^2}$. You can test your code with:
#
# ```
# p = Point(3,4)
# q = Point(5,6)
# print(p.distance(q)) # prints 2.8284271247461903
# ```
#
# <details>
# <summary>Solution</summary>
# <pre>
# import numpy as np
#
# class Point:
# def __init__(self, x, y):
# self.x = x
# self.y = y
#
# def distance(self, other):
# return np.sqrt( (self.x - other.x)**2 + (self.y - other.y)**2 )
# </pre>
# </details>
# ### Question
#
# Define a `Point3D` that inherits from `Point`.
#
# Define a constructor that takes x, y, z values and sets the fields. Call `super().__init__(x,y)` to invoke the superclass constructor.
#
# Define / override `distance(q)` so it works with 3D field values to return distance.
#
# Test with
#
# ```
# p = Point3D(3,4,9)
# q = Point3D(5,6,10)
# print(p.distance(q))
# ```
#
# Add method `__str__` so that `print(q)` prints something nice like `(3,4,9)`. Recall: $dist(u,v) = \sqrt{(u_1-v_1)^2 + (u_2-v_2)^2 + (u_3-v_3)^2}$
#
# <details>
# <summary>Solution</summary>
# <pre>
# import numpy as np
#
# class Point3D(Point):
# def __init__(self, x, y, z):
# # reuse/refine super class constructor
# super().__init__(x,y)
# self.z = z
#
# def distance(self, other):
# return np.sqrt( (self.x - other.x)**2 +
# (self.y - other.y)**2 +
# (self.z - other.z)**2 )
#
# def __str__(self):
# return f"({self.x},{self.y},{self.z})"
# </pre>
# </details>
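# Putting the `Point` and `Point3D` solutions together, here is a quick end-to-end check (a sketch that simply restates the solution code from the questions above):

```python
import numpy as np

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __str__(self):
        return f"({self.x},{self.y})"
    def distance(self, other):
        return np.sqrt((self.x - other.x)**2 + (self.y - other.y)**2)

class Point3D(Point):
    def __init__(self, x, y, z):
        super().__init__(x, y)  # reuse the superclass constructor
        self.z = z
    def distance(self, other):  # override for 3D
        return np.sqrt((self.x - other.x)**2 +
                       (self.y - other.y)**2 +
                       (self.z - other.z)**2)
    def __str__(self):
        return f"({self.x},{self.y},{self.z})"

p, q = Point3D(3, 4, 9), Point3D(5, 6, 10)
print(p)              # (3,4,9)
print(p.distance(q))  # sqrt(4 + 4 + 1) = 3.0
```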
# Source file: labs/OO.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# !pip3 install scipy --user
# -
# !pip3 install -U tsfresh --user
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# !pip3 freeze
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# !python -m pip install tsfresh
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# !pip3 freeze
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616288509395}
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import tsfresh as tsf
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616288602434}
# azureml-core of version 1.0.72 or higher is required
# azureml-dataprep[pandas] of version 1.1.34 or higher is required
from azureml.core import Workspace, Dataset
subscription_id = '5ad40245-5352-4569-a7e8-5c8ed97f9da3'
resource_group = 'montlhly-professional-rg'
workspace_name = 'fy-ml-wp'
workspace = Workspace(subscription_id, resource_group, workspace_name)
data_train1 = Dataset.get_by_name(workspace, name='HeartbeatClassification')
data_train = data_train1.to_pandas_dataframe()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616288615142}
data_train.head()
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616249303410}
# Read the data
# data_train = pd.read_csv("HeartbeatClassification")
# data_test_A = pd.read_csv("testA")
# print(data_train.shape)
# print(data_test_A.shape)
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616288648848}
# Pivot the heartbeat-signal column into rows, adding a time-step feature for each signal
train_heartbeat_df = data_train["heartbeat_signals"].str.split(",", expand=True).stack()
train_heartbeat_df = train_heartbeat_df.reset_index()
train_heartbeat_df = train_heartbeat_df.set_index("level_0")
train_heartbeat_df.index.name = None
train_heartbeat_df.rename(columns={"level_1":"time", 0:"heartbeat_signals"}, inplace=True)
train_heartbeat_df["heartbeat_signals"] = train_heartbeat_df["heartbeat_signals"].astype(float)
train_heartbeat_df
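# The wide-to-long reshaping above can be sketched on a toy frame (the column names mirror the real dataset; the values are made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "id": [0, 1],
    "heartbeat_signals": ["0.1,0.2,0.3", "0.4,0.5,0.6"],
})

# Split each comma-separated signal into columns, then stack into long form
long_df = toy["heartbeat_signals"].str.split(",", expand=True).stack()
long_df = long_df.reset_index()
long_df = long_df.set_index("level_0")
long_df.index.name = None
long_df.rename(columns={"level_1": "time", 0: "heartbeat_signals"}, inplace=True)
long_df["heartbeat_signals"] = long_df["heartbeat_signals"].astype(float)

print(long_df.shape)  # each 3-sample signal becomes 3 rows: (6, 2)
```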
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616288654840}
# Join the processed heartbeat features back into the training data, and store the label column separately
data_train_label = data_train["label"]
data_train = data_train.drop("label", axis=1)
data_train = data_train.drop("heartbeat_signals", axis=1)
data_train = data_train.join(train_heartbeat_df)
data_train
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616288660568}
data_train[data_train["id"]==1]
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616292620943}
from tsfresh import extract_features
# Feature extraction
train_features = extract_features(data_train, column_id='id', column_sort='time')
train_features
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616292764606}
from tsfresh.utilities.dataframe_functions import impute
# Remove NaN values from the extracted features
impute(train_features)
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616292899876}
from tsfresh import select_features
# Select features according to their correlation with the label
train_features_filtered = select_features(train_features, data_train_label)
train_features_filtered
# Source file: T3 - Features Engineering.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Scale pool area and $\Phi_{Tr}$
# ### Pool scaling
#
# Tracer flux into the pool from canyon upwelling:
# $\Phi_{Tr}\propto \Phi \Delta C \propto U\mathcal{F}W_mZ^2\delta_zC$,
#
# where $\Delta C=Z\delta_zC$ is the "extra" concentration coming up onto the shelf compared to a vertically homogeneous tracer.
#
# If there were no canyon upwelling and the pool were made up only of shelf water, generated by Ekman transport through the bottom boundary layer (BBL) on the shelf, that transport would shut down as buoyancy forces come into balance with the Coriolis force driving it (MacCready and Rhines 1993, Slippery Bottom Boundary Layers). That BBL would have a cross-shelf length given by
#
# $\mathcal{L}=\frac{fU}{(N\theta)^2}$ (this can be derived from the thermal wind equations; MacCready and Rhines, 1993).
#
# A corresponding vertical scale, which is in fact the thickness of the BBL (though that cannot be assured for the upwelling case), is given by
#
# $\mathcal{H}=\mathcal{L}\sin{\theta} \approx \mathcal{L}\theta$, since $\theta \ll 1$.
#
# So, a volume for a pool made up of shelf water (background water) can be constructed as
#
# $V_{bg}= A_{pool}\mathcal{H}$.
#
# Assuming shutdown has occurred, a timescale would be
#
# $\tau_0\approx \frac{f}{(N\theta)^2}$ (MacCready and Rhines, 1993). There is a better, more complicated approximation for this, but I don't think it is necessary.
#
# Then, the flux of tracer associated with the background pool would be
#
# $\Phi_{bg}\approx \frac{A_{pool}\mathcal{H}}{\tau_0} (H_s-H_h)\delta_zC_{bg}$,
#
# where $(H_s-H_h)\delta_zC_{bg}$ is analogous to $\Delta C$ and represents the background concentration on the shelf within the shelf pool.
#
# Assuming both tracer fluxes, $\Phi_{Tr}$ and $\Phi_{bg}$, are of the same order, the area of the pool is proportional to
#
# $A_{pool}\propto \frac{U\mathcal{F}W_mZ^2\delta_zC\tau_0}{\mathcal{H}(H_s-H_h)\delta_zC_{bg}}$
#
# Substituting the expresions for $\mathcal{H}$ and $\tau_0$
#
# $A_{pool}\propto \frac{\mathcal{F}W_mZ^2\delta_zC}{\theta(H_s-H_h)\delta_zC_{bg}}$.
#
# Further, we know that the slope $s=(H_s-H_h)/L$ and the angle $\theta$ are related as $\theta\sim s$. Substituting the value of $s$,
#
# $A_{pool}\propto \frac{W_mL\mathcal{F}Z^2\delta_zC}{(H_s-H_h)^2\delta_zC_{bg}}$.
#
# Approximating the canyon area as a triangle of base $W_m$ and height $L$, its area would be
#
# $A_{can}=\frac{W_mL}{2}$.
#
# Substituting $A_{can}$ in the expression for $A_{pool}$ we get
#
# $A_{pool}\propto \frac{2 A_{can}\mathcal{F}Z^2\delta_zC}{(H_s-H_h)^2\delta_zC_{bg}}$.
#
# This is nice because the pool area is a function of the canyon area and a non-dimensional number that represents the competition between the tracer that comes up onto the shelf through the canyon (set by the tracer gradient below the shelf) and the background tracer that would be in the pool.
#
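# The boundary-layer scales above can be checked numerically. A minimal sketch with illustrative parameter values (chosen for illustration, not taken from the model runs):

```python
import numpy as np

f = 1.0e-4      # Coriolis parameter [1/s]
N = 5.5e-3      # buoyancy frequency [1/s]
U = 0.3         # incoming velocity [m/s]
theta = 2.3e-3  # shelf slope angle [rad], small-angle regime

# BBL shutdown scales (MacCready and Rhines, 1993)
L_bbl = f * U / (N * theta)**2  # cross-shelf BBL length, fU/(N theta)^2
H_bbl = L_bbl * np.sin(theta)   # vertical scale, ~ L*theta for small theta
tau0 = f / (N * theta)**2       # shutdown timescale, f/(N theta)^2

print(f"L = {L_bbl:.3g} m, H = {H_bbl:.3g} m, tau0 = {tau0/86400:.1f} days")
```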
#import gsw as sw # Gibbs seawater package
import cmocean as cmo
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import matplotlib.gridspec as gspec
from matplotlib.lines import Line2D
# %matplotlib inline
from netCDF4 import Dataset
import numpy as np
import pandas as pd
import scipy.stats
import seaborn as sns
import sys
import xarray as xr
import canyon_tools.readout_tools as rout
import canyon_tools.metrics_tools as mpt
# +
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# +
def calc_rho(RhoRef,T,S,alpha=2.0E-4, beta=7.4E-4):
"""-----------------------------------------------------------------------------
calc_rho calculates the density profile using a linear equation of state.
INPUT:
state: xarray dataframe
RhoRef : reference density at the same z as T and S slices. Can be a scalar or a
vector, depending on the size of T and S.
T, S : should be 1D arrays size nz
alpha = 2.0E-4 # 1/degC, thermal expansion coefficient
beta = 7.4E-4, haline expansion coefficient
OUTPUT:
rho - Density [nz]
-----------------------------------------------------------------------------"""
#Linear eq. of state
rho = RhoRef*(np.ones(np.shape(T[:])) - alpha*(T[:]) + beta*(S[:]))
return rho
def call_rho(t,state,zslice,xind,yind):
T = state.Temp.isel(T=t,Z=zslice,X=xind,Y=yind)
S = state.S.isel(T=t,Z=zslice,X=xind,Y=yind)
rho = calc_rho(RhoRef,T,S,alpha=2.0E-4, beta=7.4E-4)
return(rho)
# +
sb_Ast = 29 # shelf break z-index Astoria
sb_Bar = 39 # shelf break z-index Barkley
RhoRef = 999.79998779 # It is constant in all my runs, can't run rdmds
grid_fileB = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/01_Bar03/gridGlob.nc'
grid_fileA = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/gridGlob.nc'
ptr_fileB = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/01_Bar03/ptracersGlob.nc'
ptr_fileA = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/ptracersGlob.nc'
state_fileA = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/stateGlob.nc'
stateA = xr.open_dataset(state_fileA)
state_fileB = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/01_Bar03/stateGlob.nc'
stateB = xr.open_dataset(state_fileB)
with Dataset(ptr_fileA, 'r') as nbl:
time = nbl.variables['T'][:]
with Dataset(grid_fileA, 'r') as nbl:
drC_A = nbl.variables['drC'][:]
with Dataset(grid_fileB, 'r') as nbl:
drC_B = nbl.variables['drC'][:]
tracers = ['Tr01','Tr02','Tr03','Tr04','Tr05','Tr06','Tr07','Tr08','Tr09','Tr10']
labels = ['Linear 01','Salinity 02','Oxygen 03','Nitrate 04','DS 05','Phosphate 06','Nitrous Oxide 07','Methane 08',
          'DIC 09', 'Alk 10']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677','#882255','#AA4499', 'dimgray', 'tan']
sb_conc_A = np.empty(len(labels))
sb_conc_B = np.empty(len(labels))
for ii, trac in zip(range(len(tracers)),tracers):
for pfile,sb_array,\
sb_ind,drc,state in zip([ptr_fileA, ptr_fileB],
[sb_conc_A, sb_conc_B],
[sb_Ast, sb_Bar],
[drC_A, drC_B],
[stateA, stateB]):
with Dataset(pfile, 'r') as nbl:
if (trac == 'Tr07' or trac == 'Tr08'):
tr_profile = (1E-3*nbl.variables[trac][0,:,10,180])/1E-3 # nM to mu mol/m^3
elif (trac == 'Tr03' or (trac == 'Tr09' or trac == 'Tr10')):
profile = nbl.variables[trac][0,:,10,180]
density = call_rho(0,state,slice(0,104),180,20)
tr_profile = (density.data*profile.data/1000)/1E-3 # mumol/kg mu mol/m^3
else:
                tr_profile = (nbl.variables[trac][0,:,10,180])/1E-3 # muM to mu mol/m^3
tr_grad = (tr_profile[2:]-tr_profile[:-2])/(drc[3:]+drc[1:-2])
sb_array[ii] = tr_profile[sb_ind]
# +
sb_Ast = 29 # shelf break z-index Astoria
sb_Bar = 39 # shelf break z-index Barkley
head_zind = [19,19,34,34]
Z_zind = [30,20,14,21]
g = 9.81
RhoRef = 999.79998779 # It is constant in all my runs, can't run rdmds
grid_fileB = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/01_Bar03/gridGlob.nc'
grid_fileA = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/gridGlob.nc'
ptr_fileBar = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/01_Bar03/ptracersGlob.nc'
ptr_fileAst = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/ptracersGlob.nc'
ptr_filePath = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/03_Bar03_Path/ptracersGlob.nc'
ptr_fileArgo = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/03_Ast03_Argo/ptracersGlob.nc'
state_fileArgo = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/03_Ast03_Argo/stateGlob.nc'
stateArgo = xr.open_dataset(state_fileArgo)
state_filePath = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF4_BAR/03_Bar03_Path/stateGlob.nc'
statePath = xr.open_dataset(state_filePath)
tracers = ['Tr01','Tr02','Tr03','Tr04','Tr05','Tr06','Tr07','Tr08','Tr09','Tr10']
labels = ['Linear 01','Salinity 02','Oxygen 03','Nitrate 04','DS 05','Phosphate 06','Nitrous Oxide 07','Methane 08',
          'DIC 09', 'Alk 10']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677','#882255','#AA4499', 'dimgray', 'tan']
sb_gradZ_Ast = np.empty(len(labels))
sb_gradZ_Argo = np.empty(len(labels))
sb_gradZ_Bar = np.empty(len(labels))
sb_gradZ_Path = np.empty(len(labels))
sb_gradtop_Ast = np.empty(len(labels))
sb_gradtop_Argo = np.empty(len(labels))
sb_gradtop_Bar = np.empty(len(labels))
sb_gradtop_Path = np.empty(len(labels))
N_top_Ast = np.empty(len(labels))
N_top_Argo = np.empty(len(labels))
N_top_Bar = np.empty(len(labels))
N_top_Path = np.empty(len(labels))
for ii, trac in zip(range(len(tracers)),tracers):
for pfile,sb_ind,head_ind,Z_ind,grad_array,gradtop_array,drc,state,Ntop \
in zip([ptr_fileAst,ptr_fileArgo,ptr_fileBar,ptr_filePath],
[sb_Ast,sb_Ast,sb_Bar, sb_Bar],
head_zind,
Z_zind,
[sb_gradZ_Ast,sb_gradZ_Argo,sb_gradZ_Bar,sb_gradZ_Path],
[sb_gradtop_Ast,sb_gradtop_Argo,sb_gradtop_Bar,sb_gradtop_Path],
[drC_A, drC_A, drC_B, drC_B],
[stateA,stateArgo,stateB,statePath],
[N_top_Ast,N_top_Argo,N_top_Bar,N_top_Path],):
with Dataset(pfile, 'r') as nbl:
if (trac == 'Tr07' or trac == 'Tr08'):
tr_profile = (1E-3*nbl.variables[trac][0,:,10,180])/1E-3 # nM to mu mol/m^3
density = call_rho(0,state,slice(0,104),180,20)
elif (trac == 'Tr03' or (trac == 'Tr09' or trac == 'Tr10')):
profile = nbl.variables[trac][0,:,10,180]
density = call_rho(0,state,slice(0,104),180,20)
tr_profile = (density.data*profile.data/1000)/1E-3 # muM to mu mol/m^3
else:
tr_profile = nbl.variables[trac][0,:,10,180]/1E-3 # muM to mu mol/m^3
density = call_rho(0,state,slice(0,104),180,20)
Ntop[ii] = np.nanmean((-(g/RhoRef)* \
((density.data[head_ind-2:sb_ind-2]-
density.data[head_ind:sb_ind])/ \
(drc[head_ind-2:sb_ind-2] +
drc[head_ind:sb_ind])))**0.5)
tr_grad = (tr_profile[2:]-tr_profile[:-2])/(drc[3:]+drc[1:-2])
grad_array[ii] = np.nanmean(tr_grad[head_ind:head_ind+Z_ind])
gradtop_array[ii] = np.nanmean(tr_grad[sb_ind-10:sb_ind])
# -
print(N_top_Ast[0])
print(N_top_Argo[0])
print(N_top_Bar[0])
print(N_top_Path[0])
# ### Scaling pool and $\Phi_{Tr}$
# +
def Dh(f,L,N):
'''Vertical scale Dh'''
return((f*L)/(N))
def Ro(U,f,R):
'''Rossby number'''
return(U/(f*R))
def F(Ro):
'''Function that estimates the ability of the flow to follow isobaths'''
return(Ro/(0.9+Ro))
def Bu(N,f,W,Hs):
'''Burger number'''
return((N*Hs)/(f*W))
def RossbyRad(N,Hs,f):
'''1st Rossby radius of deformation'''
return((N*Hs)/f)
def SE(s,N,f,Fw,Rl):
'''Slope effect '''
return((s*N)/(f*(Fw/Rl)**0.5))
def Z(U,f,L,W,N,s):
'''depth of upwelling as in Howatt and Allen 2013'''
return(1.8*(F(Ro(U,f,W))*Ro(U,f,L))**(0.5) *(1-0.42*SE(s,N,f,F(Ro(U,f,W)),Ro(U,f,L)))+0.05)
g = 9.81 # accel. gravity
s = np.array([0.00230,0.00230,0.00454,0.00454]) # shelf slope
N = np.array([0.0055,0.0088,0.0055,0.0038]) # Initial at 152.5 m
f = np.array([1.0E-4,1.05E-4,1.0E-4,1.08E-4])
U = np.array([0.3,0.329,0.3,0.288])
Wiso = np.array([8900,8900,8300,8300])
Wsbs = np.array([15700,15700,13000,13000])
R = np.array([4500,4500,5000,5000])
L = np.array([21800,21800,6400,6400])
Hhs = [97.5,97.5,172.5,172.5]
Hss = [150,150,200,200]
# +
sns.set_style('white')
sns.set_context('notebook')
fig, ax0=plt.subplots(1,1,figsize=(5,5))
labels_exp = ['AST', 'ARGO','BAR', 'PATH']
labels_tra = ['Linear','Salinity','Oxygen','Nitrate','DS',
'Phosphate','Nitrous Oxide','Methane','DIC','Alkalinity']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677',
'#882255','#AA4499', 'dimgray', 'tan']
tracer_keys = ['phiTr01','phiTr02','phiTr03','phiTr04','phiTr05','phiTr06',
'phiTr07','phiTr08','phiTr09','phiTr10']
factors = [1,1,1,1,1,1,1E-3,1E-3,1,1]
markers=['o','^','s','d']
exp_files = ['../saved_calcs/pool_AST.nc',
'../saved_calcs/pool_ARGO.nc',
'../saved_calcs/pool_BAR.nc',
'../saved_calcs/pool_PATH.nc']
runs = ['UPW_10TR_BF2_AST_01','UPW_10TR_BF2_AST_03','UPW_10TR_BF4_BAR_01',
'UPW_10TR_BF4_BAR_03']
exps = ['UPW_10TR_BF2_AST','UPW_10TR_BF2_AST','UPW_10TR_BF4_BAR','UPW_10TR_BF4_BAR']
runs_phi = ['01_Ast03','03_Ast03_Argo','01_Bar03','03_Bar03_Path']
can_Area = [1.8E8, 1.8E8, 8.7E7, 8.7E7]
sb_conc = [sb_conc_A, sb_conc_A, sb_conc_B, sb_conc_B]
sb_grad = [sb_gradZ_Ast,sb_gradZ_Argo, sb_gradZ_Bar,sb_gradZ_Path]
sb_gradtop = [sb_gradtop_Ast,sb_gradtop_Argo, sb_gradtop_Bar,sb_gradtop_Path]
N_top = [N_top_Ast, N_top_Argo, N_top_Bar, N_top_Path]
area_array = np.zeros(40)
Pi_array = np.zeros(40)
kk = 0
for tr, tr_lab, factor, ii, col in zip(tracer_keys, labels_tra, factors, range(len(labels_tra)),colours):
for file, run,run_phi,lab_exp, can_area,exp, \
grad,gradtop,Ntop,conc,Hh,Hs,ff,nn,uu,ll,ww,wsb,ss, rr, mark in zip(exp_files,
runs,runs_phi,
labels_exp,
can_Area,exps,
sb_grad,
sb_gradtop,
N_top,
sb_conc,Hhs,Hss,
f,N,U,L,Wiso,
Wsbs,s,R,
markers):
ZZ = Z(uu,ff,ll,ww,nn,ss)*Dh(ff,ll,nn)
slope = (Hs-Hh)/ll
Cs = conc[ii]
Wsb = wsb
calF = F(Ro(uu,ff,ww))
theta = np.arctan(slope)
T = ff/((nn**2)*(theta**2))#
Hpool = (ff*uu)/((nn**2)*(theta))
PhidC = uu*(ZZ**2)*calF*Wsb*grad[ii]
Cbg = (Hs-Hh)*gradtop[ii]
Pi = (calF*(ZZ**2)*grad[ii])/(((Hs-Hh)**2)*gradtop[ii])
with Dataset(file, 'r') as nbl:
area = nbl.variables['area']
if can_area > 8.8E7:
if lab_exp=='AST':
ax0.plot(Pi*(Wsb*ll),#(PhidC*T)/(Cbg*Hpool),
np.nanmax(area[ii,:]),
'o', mfc = col, mec='0.3',mew=1,
label = tr_lab)
Pi_array[kk]=Pi*(Wsb*ll)#(PhidC*T)/(Cbg*Hpool)
area_array[kk]=np.nanmax(area[ii,:])
else:
ax0.plot(Pi*(Wsb*ll),
np.nanmax(area[ii,:]),
'^', mfc = col, mec='0.3',mew=1)
Pi_array[kk]=Pi*(Wsb*ll)
area_array[kk]=np.nanmax(area[ii,:])
else:
if lab_exp=='BAR':
ax0.plot(Pi*(Wsb*ll),
np.nanmax(area[ii,:]),
's', mfc = col, mec='0.3',mew=1)
Pi_array[kk]=Pi*(Wsb*ll)
area_array[kk]=np.nanmax(area[ii,:])
else:
ax0.plot(Pi*(Wsb*ll),
np.nanmax(area[ii,:]),
'd', mfc = col, mec='0.3',mew=1)
Pi_array[kk]=Pi*(Wsb*ll)
area_array[kk]=np.nanmax(area[ii,:])
kk=kk+1
ax0.yaxis.set_tick_params(pad=2)
ax0.xaxis.set_tick_params(pad=2)
ax0.legend(bbox_to_anchor=(1,1), handletextpad=0)
ax0.set_xlabel(r'$W_m L\mathcal{F}Z^2\delta_z C/(H_s-H_h)^2\delta_zC_{top}$', labelpad=0)
ax0.set_ylabel(r'max $A_{pool} $', labelpad=0)
slope0, intercept0, r_value0, p_value0, std_err0 = scipy.stats.linregress(Pi_array,area_array)
print('MAX POOL AREA: slope = %1.2e, intercept = %1.3f, r-value = %1.3f, std_err = %1.3e' \
%(slope0, intercept0, r_value0, std_err0))
# +
labels_exp = ['AST', 'ARGO','BAR', 'PATH']
labels_tra = ['Linear','Salinity','Oxygen','Nitrate','DS','Phosphate','Nitrous Oxide','Methane','DIC','Alkalinity']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677','#882255','#AA4499', 'dimgray', 'tan']
tracer_keys = ['phiTr01','phiTr02','phiTr03','phiTr04','phiTr05','phiTr06',
'phiTr07','phiTr08','phiTr09','phiTr10']
factors = [1,1,1,1,1,1,1E-3,1E-3,1,1]
markers=['o','^','s','d']
exps = ['UPW_10TR_BF2_AST','UPW_10TR_BF2_AST','UPW_10TR_BF4_BAR','UPW_10TR_BF4_BAR']
runs_phi = ['01_Ast03','03_Ast03_Argo','01_Bar03','03_Bar03_Path']
sb_conc = [sb_conc_A, sb_conc_A, sb_conc_B, sb_conc_B]
sb_grad = [sb_gradZ_Ast,sb_gradZ_Argo, sb_gradZ_Bar,sb_gradZ_Path]
sb_gradtop = [sb_gradtop_Ast,sb_gradtop_Argo, sb_gradtop_Bar,sb_gradtop_Path]
Hhs = [97.5,97.5,172.5,172.5]
Phi_array = np.zeros(40)
x_array = np.zeros(40)
kk = 0
for tr, ii in zip(tracer_keys, range(len(labels_tra))):
for run_phi,lab_exp,exp,grad,conc,ff,nn,uu,ll,ww,ss, mark in zip(runs_phi,
labels_exp,
exps,
sb_grad,
sb_conc,
f,N,U,L,Wiso,s,
markers):
ZZ = Z(uu,ff,ll,ww,nn,ss)*Dh(ff,ll,nn)
Cs=conc[ii]
file = ('/data/kramosmu/results/TracerExperiments/%s/phi_phiTr_transAlg_%s.csv' %(exp,run_phi))
df = pd.read_csv(file)
if (tr == 'phiTr07' or tr == 'phiTr08'):
TrMass = df[tr][:]# nMm^3 to muMm^3 and muMm^3 to mumol
HCW = df['Phi'][:]# m^3
else:
TrMass = 1E3*df[tr][:] # nMm^3 to muMm^3 and muMm^3 to mumol
HCW = df['Phi'][:]# m^3
PhiTr = np.mean(np.array(TrMass[8:18]))
Phi = np.mean(np.array(HCW[8:18]))
Phi_array[kk]=PhiTr/(Phi*Cs)
x_array[kk] = ZZ*grad[ii]/Cs
kk = kk+1
slope1, intercept1, r_value1, p_value1, std_err1 = scipy.stats.linregress(x_array,Phi_array)
print('PHI_TR NON-DIM: slope = %1.2f, intercept = %1.3f, r-value = %1.3f, std_err = %1.3f' \
%(slope1, intercept1, r_value1, std_err1))
# +
sns.set_style('white')
sns.set_context('paper')
fig,(ax0,ax1) =plt.subplots(1,2,figsize=(5,2))
labels_exp = ['AST', 'ARGO','BAR', 'PATH']
labels_tra = ['Linear','Salinity','Oxygen','Nitrate','DS','Phosphate','Nitrous Oxide','Methane','DIC','Alkalinity']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677','#882255','#AA4499', 'dimgray', 'tan']
tracer_keys = ['phiTr01','phiTr02','phiTr03','phiTr04','phiTr05','phiTr06',
'phiTr07','phiTr08','phiTr09','phiTr10']
exp_files = ['../saved_calcs/pool_AST.nc',
'../saved_calcs/pool_ARGO.nc',
'../saved_calcs/pool_BAR.nc',
'../saved_calcs/pool_PATH.nc']
runs = ['UPW_10TR_BF2_AST_01','UPW_10TR_BF2_AST_03','UPW_10TR_BF4_BAR_01','UPW_10TR_BF4_BAR_03']
markers=['o','^','s','d']
exps = ['UPW_10TR_BF2_AST','UPW_10TR_BF2_AST','UPW_10TR_BF4_BAR','UPW_10TR_BF4_BAR']
runs_phi = ['01_Ast03','03_Ast03_Argo','01_Bar03','03_Bar03_Path']
sb_conc = [sb_conc_A, sb_conc_A, sb_conc_B, sb_conc_B]
sb_grad = [sb_gradZ_Ast,sb_gradZ_Argo, sb_gradZ_Bar,sb_gradZ_Path]
sb_gradtop = [sb_gradtop_Ast,sb_gradtop_Argo, sb_gradtop_Bar,sb_gradtop_Path]
ax0.plot(np.linspace(0.7,2,20),np.linspace(0.7,2,20),'-',color='0.5')
ax1.plot(np.linspace(0,40,50),
np.linspace(0,40,50),'-',color='0.5')
for tr_lab,tr, ii, col in zip(labels_tra,tracer_keys, range(len(labels_tra)),colours):
for file,run,run_phi,lab_exp,can_area,exp,grad,gradtop,Ntop,conc,Hh,Hs,ff,nn,uu,ll,\
ww,wsb,ss,rr, mark in zip(exp_files,
runs,
runs_phi,
labels_exp,
can_Area,
exps,
sb_grad,
sb_gradtop,
N_top,
sb_conc,Hhs,Hss,
f,N,U,L,
Wiso,Wsbs,
s,R,
markers):
ZZ = Z(uu,ff,ll,ww,nn,ss)*Dh(ff,ll,nn)
slope = (Hs-Hh)/ll
Cs=conc[ii]
Wsb = wsb
calF = F(Ro(uu,ff,ww))
theta = np.arctan(slope)
T = ff/((nn**2)*(theta**2))
Hpool = (ff*uu)/((nn**2)*(theta))
PhidC = uu*(ZZ**2)*calF*Wsb*grad[ii]
Cbg = (Hs-Hh)*gradtop[ii]
Pi = (calF*(ZZ**2)*grad[ii])/(((Hs-Hh)**2)*gradtop[ii])
file2 = ('/data/kramosmu/results/TracerExperiments/%s/phi_phiTr_transAlg_%s.csv' %(exp,run_phi))
df = pd.read_csv(file2)
if (tr == 'phiTr07' or tr == 'phiTr08'):
TrMass = df[tr][:]# nMm^3 to muMm^3 and muMm^3 to mumol
HCW = df['Phi'][:]# m^3 to l
else:
TrMass = 1E3*df[tr][:] # nMm^3 to muMm^3 and muMm^3 to mumol
HCW = df['Phi'][:]# m^3 to l
PhiTr = np.mean(np.array(TrMass[8:18]))
Phi = np.mean(np.array(HCW[8:18]))
ax0.plot(slope1*(ZZ*grad[ii]/Cs)+intercept1,PhiTr/(Phi*Cs), marker=mark, markerfacecolor=col,
markeredgecolor='0.3', markeredgewidth=1)
# Plot area vs tau
with Dataset(file, 'r') as nbl:
area = nbl.variables['area']
if can_area > 8.8E7:
if lab_exp=='AST':
ax1.plot((slope0*(Pi))+intercept0/(Wsb*ll),
np.nanmax(area[ii,:])/(Wsb*ll), 'o', mfc = col, mec='0.3',
mew=1, label = tr_lab)
else:
ax1.plot((slope0*(Pi))+intercept0/(Wsb*ll),
np.nanmax(area[ii,:])/(Wsb*ll), '^', mfc = col, mec='0.3',
mew=1)
else:
if lab_exp=='BAR':
ax1.plot((slope0*(Pi))+intercept0/(Wsb*ll),
np.nanmax(area[ii,:])/(Wsb*ll), 's', mfc = col, mec='0.3',
mew=1)
else:
ax1.plot((slope0*(Pi))+intercept0/(Wsb*ll),
np.nanmax(area[ii,:])/(Wsb*ll), 'd', mfc = col, mec='0.3',
mew=1)
ax0.yaxis.set_tick_params(pad=2)
ax0.xaxis.set_tick_params(pad=2)
ax1.xaxis.set_tick_params(pad=2)
ax1.yaxis.set_tick_params(pad=2)
legend_runs = [Line2D([0], [0], marker='o',color='w', label='AST',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='^',color='w', label='ARGO',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='s',color='w', label='BAR',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='d',color='w', label='PATH',
markerfacecolor='k', mec='k',markersize=7),
]
ax0.legend(handles=legend_runs,bbox_to_anchor=(3.3,1.0), handletextpad=0)
legend_elements=[]
for ii in range(len(colours)):
legend_elements.append(Line2D([0], [0], marker='s',color='w', label=labels_tra[ii],
markerfacecolor=colours[ii], mec=colours[ii],markersize=8),)
ax1.legend(handles=legend_elements, bbox_to_anchor=(1,1), handletextpad=0)
ax1.set_xlabel(r'%1.2f$\Pi$%1.2f' %(slope0,intercept0/(Wsb*ll)), labelpad=0)
ax0.set_xlabel(r'%1.2f$(Z \partial_zC/C_{sb})$+%1.2f' %(slope1,intercept1), labelpad=0)
ax0.set_ylabel('$\Phi_{Tr}$ model/$\Phi C_{sb}$ model', labelpad=0)
ax1.set_ylabel('$A_{pool}$ model / $2A_{can}$', labelpad=0)
ax0.set_aspect(1)
ax1.set_aspect(1)
ax0.text(0.85,0.05,'(a)',fontsize=12, fontweight='bold', transform=ax0.transAxes)
ax1.text(0.85,0.05,'(b)',fontsize=12, fontweight='bold', transform=ax1.transAxes)
plt.savefig('scaling.eps',format='eps', bbox_inches='tight')
# +
sns.set_style('white')
sns.set_context('paper')
fig,(ax0,ax1) =plt.subplots(1,2,figsize=(5,2))
labels_exp = ['AST', 'ARGO','BAR', 'PATH']
labels_tra = ['Linear','Salinity','Oxygen','Nitrate','DS','Phosphate','Nitrous Oxide','Methane','DIC','Alkalinity']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677','#882255','#AA4499', 'dimgray', 'tan']
tracer_keys = ['phiTr01','phiTr02','phiTr03','phiTr04','phiTr05','phiTr06',
'phiTr07','phiTr08','phiTr09','phiTr10']
exp_files = ['../saved_calcs/pool_AST.nc',
'../saved_calcs/pool_ARGO.nc',
'../saved_calcs/pool_BAR.nc',
'../saved_calcs/pool_PATH.nc']
runs = ['UPW_10TR_BF2_AST_01','UPW_10TR_BF2_AST_03','UPW_10TR_BF4_BAR_01','UPW_10TR_BF4_BAR_03']
markers=['o','^','s','d']
exps = ['UPW_10TR_BF2_AST','UPW_10TR_BF2_AST','UPW_10TR_BF4_BAR','UPW_10TR_BF4_BAR']
runs_phi = ['01_Ast03','03_Ast03_Argo','01_Bar03','03_Bar03_Path']
sb_conc = [sb_conc_A, sb_conc_A, sb_conc_B, sb_conc_B]
sb_grad = [sb_gradZ_Ast,sb_gradZ_Argo, sb_gradZ_Bar,sb_gradZ_Path]
sb_gradtop = [sb_gradtop_Ast,sb_gradtop_Argo, sb_gradtop_Bar,sb_gradtop_Path]
ax0.plot(np.linspace(0.7,2,20),np.linspace(0.7,2,20),'-',color='0.5')
ax1.plot(np.linspace(0,10.1,50),
np.linspace(0,10.1,50),'-',color='0.5')
for tr_lab,tr, ii, col in zip(labels_tra,tracer_keys, range(len(labels_tra)),colours):
for file,run,run_phi,lab_exp,can_area,exp,grad,gradtop,Ntop,conc,Hh,Hs,ff,nn,uu,ll,\
ww,wsb,ss,rr, mark in zip(exp_files,
runs,
runs_phi,
labels_exp,
can_Area,
exps,
sb_grad,
sb_gradtop,
N_top,
sb_conc,Hhs,Hss,
f,N,U,L,
Wiso,Wsbs,
s,R,
markers):
ZZ = Z(uu,ff,ll,ww,nn,ss)*Dh(ff,ll,nn)
slope = (Hs-Hh)/ll
Cs=conc[ii]
Wsb = wsb
calF = F(Ro(uu,ff,ww))
theta = np.arctan(slope)
T = ff/((nn**2)*(theta**2))
Hpool = (ff*uu)/((nn**2)*(theta))
PhidC = uu*(ZZ**2)*calF*Wsb*grad[ii]
Cbg = (Hs-Hh)*gradtop[ii]
Pi = (calF*(ZZ**2)*grad[ii])/(((Hs-Hh)**2)*gradtop[ii])
file2 = ('/data/kramosmu/results/TracerExperiments/%s/phi_phiTr_transAlg_%s.csv' %(exp,run_phi))
df = pd.read_csv(file2)
if (tr == 'phiTr07' or tr == 'phiTr08'):
TrMass = df[tr][:]# nMm^3 to muMm^3 and muMm^3 to mumol
HCW = df['Phi'][:]# m^3 to l
else:
TrMass = 1E3*df[tr][:] # nMm^3 to muMm^3 and muMm^3 to mumol
HCW = df['Phi'][:]# m^3 to l
PhiTr = np.mean(np.array(TrMass[8:18]))
Phi = np.mean(np.array(HCW[8:18]))
ax0.plot(slope1*(ZZ*grad[ii]/Cs)+intercept1,PhiTr/(Phi*Cs), marker=mark, markerfacecolor=col,
markeredgecolor='0.3', markeredgewidth=1)
# Plot area vs tau
with Dataset(file, 'r') as nbl:
area = nbl.variables['area']
if can_area > 8.8E7:
if lab_exp=='AST':
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, 'o', mfc = col, mec='0.3',
mew=1, label = tr_lab)
else:
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, '^', mfc = col, mec='0.3',
mew=1)
else:
if lab_exp=='BAR':
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, 's', mfc = col, mec='0.3',
mew=1)
else:
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, 'd', mfc = col, mec='0.3',
mew=1)
ax0.yaxis.set_tick_params(pad=2)
ax0.xaxis.set_tick_params(pad=2)
ax1.xaxis.set_tick_params(pad=2)
ax1.yaxis.set_tick_params(pad=2)
legend_runs = [Line2D([0], [0], marker='o',color='w', label='AST',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='^',color='w', label='ARGO',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='s',color='w', label='BAR',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='d',color='w', label='PATH',
markerfacecolor='k', mec='k',markersize=7),
]
ax0.legend(handles=legend_runs,bbox_to_anchor=(3.3,1.0), handletextpad=0)
legend_elements=[]
for ii in range(len(colours)):
legend_elements.append(Line2D([0], [0], marker='s',color='w', label=labels_tra[ii],
markerfacecolor=colours[ii], mec=colours[ii],markersize=8),)
ax1.legend(handles=legend_elements, bbox_to_anchor=(1,1), handletextpad=0)
ax1.set_xlabel(r'%1.1f$\Pi 2A_{can}$%1.2f /10$^{9}$ m$^2$' %(slope0,intercept0/1E9), labelpad=0)
ax0.set_xlabel(r'%1.2f$(Z \partial_zC/C_{sb})$+%1.2f' %(slope1,intercept1), labelpad=0)
ax0.set_ylabel('$\Phi_{Tr}$ model/$\Phi C_{sb}$ model', labelpad=0)
ax1.set_ylabel('$A_{pool}$ model / 10$^{9}$ m$^2$', labelpad=0)
ax0.set_aspect(1)
ax1.set_aspect(1)
ax0.text(0.85,0.05,'(a)',fontsize=12, fontweight='bold', transform=ax0.transAxes)
ax1.text(0.85,0.05,'(b)',fontsize=12, fontweight='bold', transform=ax1.transAxes)
plt.savefig('scaling_dimPool.eps',format='eps', bbox_inches='tight')
# +
sns.set_style('white')
sns.set_context('talk')
fig,(ax0,ax1) =plt.subplots(1,2,figsize=(8,4))
labels_exp = ['AST', 'ARGO','BAR', 'PATH']
labels_tra = ['Linear','Salinity','Oxygen','Nitrate','DS','Phosphate','Nitrous Oxide','Methane','DIC','Alkalinity']
colours = ['#332288','#44AA99','#117733','#999933','#DDCC77','#CC6677','#882255','#AA4499', 'dimgray', 'tan']
tracer_keys = ['phiTr01','phiTr02','phiTr03','phiTr04','phiTr05','phiTr06',
'phiTr07','phiTr08','phiTr09','phiTr10']
exp_files = ['../saved_calcs/pool_AST.nc',
'../saved_calcs/pool_ARGO.nc',
'../saved_calcs/pool_BAR.nc',
'../saved_calcs/pool_PATH.nc']
runs = ['UPW_10TR_BF2_AST_01','UPW_10TR_BF2_AST_03','UPW_10TR_BF4_BAR_01','UPW_10TR_BF4_BAR_03']
markers=['o','^','s','d']
exps = ['UPW_10TR_BF2_AST','UPW_10TR_BF2_AST','UPW_10TR_BF4_BAR','UPW_10TR_BF4_BAR']
runs_phi = ['01_Ast03','03_Ast03_Argo','01_Bar03','03_Bar03_Path']
sb_conc = [sb_conc_A, sb_conc_A, sb_conc_B, sb_conc_B]
sb_grad = [sb_gradZ_Ast,sb_gradZ_Argo, sb_gradZ_Bar,sb_gradZ_Path]
sb_gradtop = [sb_gradtop_Ast,sb_gradtop_Argo, sb_gradtop_Bar,sb_gradtop_Path]
ax0.plot(np.linspace(0.7,2,20),np.linspace(0.7,2,20),'-',color='0.5')
ax1.plot(np.linspace(0,10.1,50),
np.linspace(0,10.1,50),'-',color='0.5')
for tr_lab,tr, ii, col in zip(labels_tra,tracer_keys, range(len(labels_tra)),colours):
for file,run,run_phi,lab_exp,can_area,exp,grad,gradtop,Ntop,conc,Hh,Hs,ff,nn,uu,ll,\
ww,wsb,ss,rr, mark in zip(exp_files,
runs,
runs_phi,
labels_exp,
can_Area,
exps,
sb_grad,
sb_gradtop,
N_top,
sb_conc,Hhs,Hss,
f,N,U,L,
Wiso,Wsbs,
s,R,
markers):
ZZ = Z(uu,ff,ll,ww,nn,ss)*Dh(ff,ll,nn)
slope = (Hs-Hh)/ll
Cs=conc[ii]
Wsb = wsb
calF = F(Ro(uu,ff,ww))
theta = np.arctan(slope)
T = ff/((nn**2)*(theta**2))
Hpool = (ff*uu)/((nn**2)*(theta))
PhidC = uu*(ZZ**2)*calF*Wsb*grad[ii]
Cbg = (Hs-Hh)*gradtop[ii]
Pi = (calF*(ZZ**2)*grad[ii])/(((Hs-Hh)**2)*gradtop[ii])
file2 = ('/data/kramosmu/results/TracerExperiments/%s/phi_phiTr_transAlg_%s.csv' %(exp,run_phi))
df = pd.read_csv(file2)
if (tr == 'phiTr07' or tr == 'phiTr08'):
TrMass = df[tr][:] # tracer in nM: nM m^3 = mumol, so no factor needed
HCW = df['Phi'][:] # m^3
else:
TrMass = 1E3*df[tr][:] # tracer in muM: muM m^3 to mumol is x1E3
HCW = df['Phi'][:] # m^3
PhiTr = np.mean(np.array(TrMass[8:18]))
Phi = np.mean(np.array(HCW[8:18]))
ax0.plot(slope1*(ZZ*grad[ii]/Cs)+intercept1,PhiTr/(Phi*Cs), marker=mark, markerfacecolor=col,
markeredgecolor='0.3', markeredgewidth=1)
# Plot area vs tau
with Dataset(file, 'r') as nbl:
area = nbl.variables['area']
if can_area > 8.8E7:
if lab_exp=='AST':
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, 'o', mfc = col, mec='0.3',
mew=1, label = tr_lab)
else:
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, '^', mfc = col, mec='0.3',
mew=1)
else:
if lab_exp=='BAR':
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, 's', mfc = col, mec='0.3',
mew=1)
else:
ax1.plot(((slope0*(Pi*Wsb*ll))+intercept0)/1E9,
np.nanmax(area[ii,:])/1E9, 'd', mfc = col, mec='0.3',
mew=1)
ax0.yaxis.set_tick_params(pad=2)
ax0.xaxis.set_tick_params(pad=2)
ax1.xaxis.set_tick_params(pad=2)
ax1.yaxis.set_tick_params(pad=2)
legend_runs = [Line2D([0], [0], marker='o',color='w', label='AST',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='^',color='w', label='ARGO',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='s',color='w', label='BAR',
markerfacecolor='k', mec='k',markersize=7),
Line2D([0], [0], marker='d',color='w', label='PATH',
markerfacecolor='k', mec='k',markersize=7),
]
ax0.legend(handles=legend_runs,bbox_to_anchor=(3.0,1.0), handletextpad=0)
legend_elements=[]
for ii in range(len(colours)):
legend_elements.append(Line2D([0], [0], marker='s',color='w', label=labels_tra[ii],
markerfacecolor=colours[ii], mec=colours[ii],markersize=8),)
ax1.legend(handles=legend_elements, bbox_to_anchor=(1,1), handletextpad=0)
ax1.set_xlabel(r'%1.1f$\Pi 2A_{can}$%1.2f /10$^{9}$ m$^2$' %(slope0,intercept0/1E9), labelpad=0)
ax0.set_xlabel(r'%1.2f$(Z \partial_zC/C_{sb})$+%1.2f' %(slope1,intercept1), labelpad=0)
ax0.set_ylabel('$\Phi_{Tr}$ model/$\Phi C_{sb}$ model', labelpad=0)
ax1.set_ylabel('$A_{pool}$ model / 10$^{9}$ m$^2$', labelpad=-1)
ax0.set_aspect(1)
ax1.set_aspect(1)
#ax0.text(0.85,0.05,'(a)',fontsize=12, fontweight='bold', transform=ax0.transAxes)
#ax1.text(0.85,0.05,'(b)',fontsize=12, fontweight='bold', transform=ax1.transAxes)
plt.savefig('scaling_tracers.eps',format='eps', bbox_inches='tight')
# -
| forPaper2/paperFigures/scaling_pool_phiTr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
"""
getKernels.ipynb
Given a list of pdbrefs and chainrefs,
Remove everything that isn't an alpha-C.
Write the whole whole to an xyz file.
Run glosim on the xyz file.
"""
import quippy
import ase
import json
from ase.atoms import Atoms as AseAtoms
# Get similarities for all test proteins
with open("testProteinsFullResidue.txt") as flines:
proteinPaths = ["testProteins/" +line.strip() for line in flines]
proteins = []
for proteinPath in proteinPaths:
proteins.append(quippy.Atoms(ase.io.read(proteinPath, format='proteindatabank')))
testFamily = quippy.AtomsList(proteins)
testFamily.write("testProteinsFullPocket.xyz")
# !python /usr/local/src/glosim/glosim.py --kernel rematch -n 12 -l 12 -c 10 -g 0.5 --gamma 0.01 --np 4 /root/PocketSVM/testProteinsFullPocket.xyz # Choose parameters carefully
quippy.descriptors
# !ls /usr/local/lib/python2.7/site-packages/quippy/
# !cat /opt/quip/arch/Makefile.linux_x86_64_gfortran_openmp
| containers/getKernels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Discovery test example
# +
import numpy as np
import matplotlib.pyplot as plt
from utils import pltdist, plotfitresult
import zfit
from zfit.loss import ExtendedUnbinnedNLL
from zfit.minimize import Minuit
from hepstats.hypotests.calculators import FrequentistCalculator
from hepstats.hypotests import Discovery
from hepstats.hypotests.parameters import POI
# -
plt.rcParams['figure.figsize'] = (8,6)
plt.rcParams['font.size'] = 16
# ### Fit of a Gaussian signal over an exponential background:
# +
bounds = (0.1, 3.0)
# Data and signal
np.random.seed(0)
tau = -2.0
beta = -1/tau
data = np.random.exponential(beta, 300)
peak = np.random.normal(1.2, 0.1, 25)
data = np.concatenate((data,peak))
data = data[(data > bounds[0]) & (data < bounds[1])]
# -
pltdist(data, bins=80, bounds=bounds)
obs = zfit.Space('x', limits=bounds)
lambda_ = zfit.Parameter("lambda",-2.0, -4.0, -1.0)
Nsig = zfit.Parameter("Nsig", 20., -20., len(data))
Nbkg = zfit.Parameter("Nbkg", len(data), 0., len(data)*1.1)
signal = zfit.pdf.Gauss(obs=obs, mu=1.2, sigma=0.1).create_extended(Nsig)
background = zfit.pdf.Exponential(obs=obs, lambda_=lambda_).create_extended(Nbkg)
tot_model = zfit.pdf.SumPDF([signal, background])
# Create the negative log likelihood
data_ = zfit.data.Data.from_numpy(obs=obs, array=data)
nll = ExtendedUnbinnedNLL(model=tot_model, data=data_)
# Instantiate a minuit minimizer
minimizer = Minuit()
# minimisation of the loss function
minimum = minimizer.minimize(loss=nll)
minimum.hesse()
print(minimum)
nbins = 80
pltdist(data, nbins, bounds)
plotfitresult(tot_model, bounds, nbins)
plt.xlabel("m [GeV/c$^2$]")
plt.ylabel("number of events")
# ### Discovery test
#
# In a discovery test the null hypothesis is the absence of signal, i.e. Nsig = 0.
# instantiation of the calculator
#calculator = FrequentistCalculator(nll, minimizer, ntoysnull=5000)
calculator = FrequentistCalculator.from_yaml("toys/discovery_freq_zfit_toys.yml", nll, minimizer, ntoysnull=5000)
calculator.bestfit = minimum # optional
# parameter of interest of the null hypothesis
poinull = POI(Nsig, 0)
# instantiation of the discovery test
discovery_test = Discovery(calculator, poinull)
pnull, significance = discovery_test.result()
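# The significance returned by `result()` is the one-sided Gaussian conversion
# of the p-value under the null; that conversion can be sketched with the
# standard library alone (independent of hepstats, shown here for reference):

```python
from statistics import NormalDist

def pvalue_to_z(pvalue):
    """Convert a one-sided p-value into a Gaussian significance Z."""
    return NormalDist().inv_cdf(1.0 - pvalue)

# The conventional 5-sigma discovery threshold corresponds to p ~ 2.87e-7
print(pvalue_to_z(2.866515719e-07))
```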
plt.hist(calculator.qnull(poinull, None, onesided=True, onesideddiscovery=True)[poinull], bins=20, label="qnull distribution", log=True)
plt.axvline(calculator.qobs(poinull, onesided=True, onesideddiscovery=True), color="red", label="qobs")
plt.legend(loc="best")
plt.xlabel("q")
calculator.to_yaml("toys/discovery_freq_zfit_toys.yml")
| notebooks/hypotests/discovery_freq_zfit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # IASI functions
import datetime as dt
import os
import numpy as np
import xarray as xr
def IASI_L3_version(component_nom, year, month):
""" Get version of L3 IASI dataset for each component nomenclature
Args:
component_nom (str): Component chemical nomenclature
year (str): Year of dataset
month (str): Month of dataset
Returns:
version (str): IASI dataset version
"""
# https://iasi.aeris-data.fr/CO_IASI_A_L3_data/
if component_nom == 'CO':
if year >= 2020:
version = 'V6.5.0'
elif (year == 2019 and month >= 6):
version = 'V6.4.0'
else:
version = 'V20151001.0'
# https://iasi.aeris-data.fr/O3_IASI_A_L3_data/
elif component_nom == 'O3':
if year >= 2020 or (year == 2019 and month <= 11):
version = 'V6.5.1'
else:
version = 'V20151001.0'
# https://iasi.aeris-data.fr/NH3_IASI_A_L3_data/
elif component_nom == 'NH3':
version = 'V3.0.0'
# https://iasi.aeris-data.fr/HCOOH_IASI_A_L3_data/
elif component_nom == 'HCOOH':
version = 'V1.0.0'
return version
def IASI_L3_download(component_nom, date, satellite):
""" Download L3 IASI dataset with curl
Args:
component_nom (str): Component chemical nomenclature
date (str): Query date
satellite (str): A, B and/or C referring to METOP series
"""
cnl = component_nom.lower()
sl = 'iasi' + satellite.lower() + 'l3'
year = date.split('-')[0]
month = date.split('-')[1]
version = IASI_L3_version(component_nom, int(year), int(month))
# Create directory for each satellite in case they do not exist
IASI_product_path = os.path.join(os.path.join('/', '/'.join(
os.getcwd().split('/')[1:3]), 'adc-toolbox',
os.path.relpath('data/iasi/' + component_nom + '/L3/' + year + '-' + month)))
os.makedirs(IASI_product_path, exist_ok = True)
if component_nom == 'NH3':
product_name = ''.join(['IASI_METOP' + satellite + '_L3_', component_nom, '_',
year, month, '_ULB-LATMOS_', version, '.nc'])
else:
product_name = ''.join(['IASI_METOP' + satellite + '_L3_', component_nom, '_COLUMN_',
year, month, '_ULB-LATMOS_', version, '.nc'])
file_name = IASI_product_path + '/' + product_name
# !curl -s --insecure https://cds-espri.ipsl.fr/$sl/iasi_$cnl/$version/$year/$product_name --output data/iasi/$component_nom/L3/$year-$month/$product_name
if os.stat(file_name).st_size <= 288:
print(product_name, 'is not available.')
os.remove(file_name)
else:
print(product_name, 'was downloaded.')
def IASI_L3_read(component_nom, sensor_column, dates, lat_res = 1, lon_res = 1):
""" Read L3 IASI dataset as xarray dataset object and assign time
Args:
component_nom (str): Component chemical nomenclature
sensor_column (str): Name of sensor column in downloaded dataset
dates (list): Available dates
lat_res (float): Spatial resolution for latitude
lon_res (float): Spatial resolution for longitude
Returns:
sensor_ds (xarray): IASI dataset in xarray format
"""
if lat_res < 1 or lon_res < 1:
print('To show the original data, the resolution must be equal to 1x1º.')
print('To show aggregated data, the resolution must be greater than 1x1º.')
raise KeyboardInterrupt()
sensor_ds_all = []
for date in dates:
year = date.split('-')[0]
month = date.split('-')[1]
sensor_ds_ABC = []
# Combine data from METOP-A, METOP-B and METOP-C
IASI_product_path = os.path.join('/', '/'.join(
os.getcwd().split('/')[1:3]), 'adc-toolbox',
os.path.relpath('data/iasi/' + component_nom + '/L3/' + year + '-' + month))
IASI_product_names = [file for file in os.listdir(IASI_product_path)]
for product_name in IASI_product_names:
sensor_ds_sat = xr.open_dataset(IASI_product_path + '/' + product_name)
unit = sensor_ds_sat[sensor_column].units
sensor_ds_ABC.append(sensor_ds_sat)
sensor_ds_ABC = xr.concat(sensor_ds_ABC, dim = 'latitude')
# Regrid onto a custom defined regular grid
sensor_ds_ABC_gridded = binning(sensor_ds_ABC, lat_res, lon_res)
# Add time
time_str = dt.datetime(int(year), int(month), 1)
sensor_ds_ABC_gridded = sensor_ds_ABC_gridded.assign_coords({'time': time_str}).expand_dims(dim = ['time'])
# Add units as attribute
sensor_ds_ABC_gridded.attrs['units'] = unit
sensor_ds_all.append(sensor_ds_ABC_gridded)
sensor_ds = xr.concat(sensor_ds_all, dim = 'time')
sensor_ds = sensor_ds.rename({sensor_column: 'sensor_column'})
return sensor_ds
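# `binning` above is a helper presumably defined elsewhere in the toolbox; the
# regridding idea it implements — averaging 1x1º cells into coarser
# lat_res x lon_res blocks — can be sketched with plain numpy (my own
# illustration, not the actual helper):

```python
import numpy as np

def block_average(grid, lat_res, lon_res):
    """Coarsen a (nlat, nlon) array by averaging lat_res x lon_res blocks."""
    nlat, nlon = grid.shape
    # Trim any edge rows/columns that do not fill a whole block
    trimmed = grid[:nlat - nlat % lat_res, :nlon - nlon % lon_res]
    blocks = trimmed.reshape(trimmed.shape[0] // lat_res, lat_res,
                             trimmed.shape[1] // lon_res, lon_res)
    return blocks.mean(axis=(1, 3))

print(block_average(np.arange(16.).reshape(4, 4), 2, 2))
```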
def IASI_L2_version(component_nom, year, month, day, satellite):
""" Get version of L2 IASI dataset for each component nomenclature
Args:
component_nom (str): Component chemical nomenclature
year (str): Year of dataset
month (str): Month of dataset
day (str): Day of dataset
satellite (str): A, B and/or C referring to METOP series
Returns:
version (str): IASI dataset version
product_name (str): IASI dataset product name
"""
# https://iasi.aeris-data.fr/o3_iasi_a_arch/
if component_nom == 'O3':
if int(year) == 2020:
version = 'V6.5.0'
product_name = ''.join(['IASI_METOP' + satellite + '_L2_', component_nom, '_COLUMN_',
year, month, day, '_ULB-LATMOS_', version, '.nc'])
elif int(year) <= 2019:
version = 'v20151001'
product_name = ''.join(['IASI_FORLI_' + component_nom + '_metop' + satellite.lower() + '_',
year, month, day, '_', version, '.nc'])
# https://iasi.aeris-data.fr/cos_iasi_a_arch/
elif component_nom == 'CO':
if int(year) >= 2020:
version = 'V6.5.0'
elif int(year) == 2019 and ((int(month) == 5 and int(day) >= 14) or int(month) >= 6):
version = 'V6.4.0'
else:
version = 'v20140922'
print('Data of CO total columns before May 13, 2019 is not available as .nc for download.')
product_name = ''.join(['IASI_METOP' + satellite + '_L2_', component_nom, '_',
year, month, day, '_ULB-LATMOS_', version, '.nc'])
# https://iasi.aeris-data.fr/so2_iasi_b_arch/
elif component_nom == 'SO2':
version = 'V2.1.0'
product_name = ''.join(['IASI_METOP' + satellite + '_L2_', component_nom, '_',
year, month, day, '_ULB-LATMOS_', version, '.nc'])
return version, product_name
def IASI_L2_download(component_nom, date, satellite):
""" Download L2 IASI dataset with curl
Args:
component_nom (str): Component chemical nomenclature
date (str): Query date
satellite (str): A, B and/or C referring to METOP series
"""
cnl = component_nom.lower()
sl = 'iasi' + satellite.lower() + 'l2'
year = date.split('-')[0]
month = date.split('-')[1]
day = date.split('-')[2]
version, product_name = IASI_L2_version(component_nom, year, month, day, satellite)
# Create directories in case they do not exist
path = os.path.join('/', '/'.join(
os.getcwd().split('/')[1:3]), 'adc-toolbox',
os.path.relpath('data/iasi/' + component_nom + '/L2/' + date))
os.makedirs(path, exist_ok = True)
# !curl -s --insecure https://cds-espri.ipsl.fr/$sl/iasi_$cnl/$version/$year/$month/$product_name --output data/iasi/$component_nom/L2/$date/$product_name
file_name = path + '/' + product_name
if (os.stat(file_name).st_size <= 288 or
int(year) < 2019 or (int(year) == 2019 and int(month) == 5 and int(day) <= 13)):
print(product_name, 'is not available.')
os.remove(file_name)
else:
print(product_name, 'was downloaded.')
def IASI_L2_read(component_nom, sensor_column, dates, lat_res = 1, lon_res = 1):
""" Read the L2 IASI dataset as xarray dataset object and assign time
Args:
component_nom (str): Component chemical nomenclature
sensor_column (str): Name of sensor column in downloaded dataset
dates (list): Available dates
lat_res (float): Spatial resolution for latitude
lon_res (float): Spatial resolution for longitude
Returns:
sensor_ds (xarray): IASI dataset in xarray format
sensor_type (str): Sensor type
"""
if lat_res < 1 or lon_res < 1:
print('To show the original data, the resolution must be equal to 1x1º.')
print('To show aggregated data, the resolution must be greater than 1x1º.')
raise KeyboardInterrupt()
# Choose an elevation
if component_nom == 'SO2':
height_options = [5, 7, 11, 13, 16, 19, 25]
height = input('Select height (in km) (5, 7, 11, 13, 16, 19, 25): ')
while int(height) not in height_options:
print('ERROR: Enter a valid height number. The options are 5, 7, 11, 13, 16, 19 or 25 km.')
height = input('Select height (in km): ')
sensor_ds_all = []
for date in dates:
year = date.split('-')[0]
month = date.split('-')[1]
day = date.split('-')[2]
sensor_ds_ABC = []
# Change sensor_column name (in 2020 it is O3_total_column and before ozone_total_column)
if component_nom == 'O3' and year == '2020':
sensor_column = 'O3_total_column'
elif component_nom == 'O3' and year != '2020':
sensor_column = 'ozone_total_column'
path = os.path.join('/', '/'.join(
os.getcwd().split('/')[1:3]), 'adc-toolbox',
os.path.relpath('data/iasi/' + component_nom + '/L2/' + date))
product_names = [file for file in os.listdir(path)]
for product_name in product_names:
sensor_ds_sat = xr.open_dataset('data/iasi/' + component_nom + '/L2/' + date + '/' + product_name)
unit = sensor_ds_sat[sensor_column].units
latitude = sensor_ds_sat['latitude'].data
longitude = sensor_ds_sat['longitude'].data
# Choose data for chosen elevation
if component_nom == 'SO2':
sensor_ds_sat = sensor_ds_sat.isel(nlevels = height_options.index(int(height)))
sensor_ds_sat = xr.DataArray(
sensor_ds_sat[sensor_column].data,
dims=('ground_pixel'),
coords={
'latitude': ('ground_pixel', latitude),
'longitude': ('ground_pixel', longitude)
},
name = component_nom
)
sensor_ds_ABC.append(sensor_ds_sat)
sensor_ds_ABC = xr.concat(sensor_ds_ABC, dim = 'ground_pixel')
y = sensor_ds_ABC.latitude.data
x = sensor_ds_ABC.longitude.data
z = sensor_ds_ABC.data
zi, yi, xi = np.histogram2d(y, x, bins = (180, 360), weights = z) # 'normed' kwarg removed in recent NumPy
counts, _, _ = np.histogram2d(y, x, bins = (180, 360))
zi = zi / counts
sensor_ds_ABC_gridded = xr.DataArray(
zi,
dims = ['latitude', 'longitude'],
coords = {
'latitude': (['latitude'], yi[:-1]),
'longitude': (['longitude'], xi[:-1])
},
name = 'sensor_column'
)
# Regrid onto a custom defined regular grid
sensor_ds_ABC_gridded = binning(sensor_ds_ABC_gridded, lat_res, lon_res)
# Add units as attribute
sensor_ds_ABC_gridded.attrs['units'] = unit
# Add time
time_str = dt.datetime(int(year), int(month), int(day))
sensor_ds_ABC_gridded = sensor_ds_ABC_gridded.assign_coords({'delta_time': time_str})
sensor_ds_ABC_gridded = sensor_ds_ABC_gridded.assign_coords({'time': time_str}).expand_dims(dim = ['time'])
sensor_ds_all.append(sensor_ds_ABC_gridded)
sensor_ds = xr.concat(sensor_ds_all, dim = 'time')
return sensor_ds
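# The binned mean computed with `np.histogram2d` in `IASI_L2_read` (weighted
# histogram divided by a count histogram) can be seen on a toy example with
# made-up samples:

```python
import numpy as np

# Three scattered samples z(y, x): two land in the lower cell, one in the upper
y = np.array([0.5, 0.5, 1.5])
x = np.array([0.5, 0.5, 0.5])
z = np.array([2.0, 4.0, 6.0])

rng = [[0.0, 2.0], [0.0, 1.0]]
sums, _, _ = np.histogram2d(y, x, bins=(2, 1), range=rng, weights=z)
counts, _, _ = np.histogram2d(y, x, bins=(2, 1), range=rng)
mean = sums / counts  # cell-wise average of the samples in each grid cell
print(mean)
```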
| functions/functions_iasi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyzing Airbnb Prices in New York
#
# #### <NAME>
# #### Final Project for Data Bootcamp, Fall 2016
#
#
# 
#
# ## What determines each Airbnb's listing price?
#
# #### Background
#
# Everything in New York is [expensive](https://smartasset.com/mortgage/what-is-the-cost-of-living-in-new-york-city). For first-time travelers, New York may seem even more expensive. At the same time, travelers have different wants and needs from their accommodation than a student or a working person would. So I wanted to analyze the trend in Airbnb listing prices through the eyes of a traveler.
#
# Travelers of different budgets and purposes would have different priorities, but most would definitely prefer good accessibility to the top tourist attractions they want to visit. Will this have an effect on the Airbnb rental price?
# #### Data Source
#
# For this data analysis, I used the Airbnb open data available [here](http://insideairbnb.com/get-the-data.html). I used the [listing.csv](http://data.insideairbnb.com/united-states/ny/new-york-city/2016-12-03/visualisations/listings.csv) file for New York.
#
# #### Libraries and API
# Since the csv file contained more than 20,000 entries, I decided to do some basic scrubbing first and then export to a different csv using the csv library. I then used the pandas library to manipulate and display selected data and used the matplotlib and seaborn libraries for visualization. To calculate the average distance from each listing to the top rated tourist attractions of New York, I used the Beautiful Soup library to parse the website and retrieve a list of attraction names. I then used the Google Places API to get each attraction spot's detailed latitude and longitude to calculate the great circle distance from each airbnb apartment.
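# The great-circle computation described above relies on `geopy.distance.great_circle`; for reference, a self-contained haversine sketch (spherical Earth, radius 6371 km) gives essentially the same distances:

```python
import math

def great_circle_km(p, q):
    """Haversine great-circle distance in km between (latitude, longitude) tuples."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# One degree of longitude along the equator is roughly 111 km
print(round(great_circle_km((0.0, 0.0), (0.0, 1.0)), 1))
```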
# +
import sys
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
import seaborn as sns
import statistics
import csv
from scipy import stats
from bs4 import BeautifulSoup as bs
import urllib.request
from googleplaces import GooglePlaces, types, lang
from geopy.distance import great_circle
import geocoder
# %matplotlib inline
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
# -
# #### Google Places API Configuration
apikey = '<KEY>'
gplaces = GooglePlaces(apikey)
# 1. Write a function to calculate the distance from each listing to top trip advisor attractions
# 2. Visualize data including where the closest ones are, the most expensive, the relative borderline
# 3. Write a new function that calculates the distance from the closest subway stations
# 4. somehow visualize the convenience and access from each subway station using Google maps API
# 5. decide where is the best value/distance
# 6. make a widget that allows you to copy and paste the link
#
# # Defining Functions
#
# ## 1. tripadvisor_attractions( url, how_many )
#
# This function takes 2 parameters, the url of the trip advisor link and the number of top attractions one wants to check. It then uses the beautiful soup library to find the div that contains the list of top rated tourist attractions in the city and returns them as a list.
def tripadvisor_attractions(url, how_many):
page = urllib.request.urlopen(url)
#using beautiful soup to select targeted div
soup = bs(page.read(), "lxml")
filtered = soup.find("div", {"id": "FILTERED_LIST"})
top_list = filtered.find_all("div", class_="property_title")
sites = []
#save the text within hyperlink into an empty list
for site in top_list:
site = (site.a).text
site = str(site)
if not any(char.isdigit() for char in site):
sites.append(site)
#splices the list by how many places user wants to include
sites = sites[:how_many]
return sites
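# The `isdigit` check above drops any title containing a digit — presumably to
# filter out ranked or numbered entries on the page. A toy check of that logic
# (hypothetical titles, not scraped data):

```python
titles = ['Central Park', '1. Central Park', 'Empire State Building']
# Keep only titles with no digits, mirroring the filter in the function above
sites = [t for t in titles if not any(ch.isdigit() for ch in t)]
print(sites)
```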
# ## 2. ta_detail(ta_list, city)
#
# This function takes the list returned by the tripadvisor_attractions() function as well as the city name in a string. I explicitly ask for the city name so that the Google Places API will find more accurate place details when it looks up each tourist attraction. It returns a dataframe of the tourist attraction, its Google place ID, longitude, and latitude.
#ta short for tourist attraction
def ta_detail(ta_list, city):
ta_df = pd.DataFrame( {'Tourist Attraction' : '',
'place_id' : '',
'longitude' : '',
'latitude' : '' },
index = range(len(ta_list)))
for i in range(len(ta_list)):
query_result = gplaces.nearby_search(
location = city,
keyword = ta_list[i],
radius=20000)
#get only the top first query
query = query_result.places[0]
ta_df.loc[i, 'Tourist Attraction'] = query.name
ta_df.loc[i, 'longitude'] = query.geo_location['lng']
ta_df.loc[i, 'latitude'] = query.geo_location['lat']
ta_df.loc[i, 'place_id'] = query.place_id
return ta_df
# ## 3. latlong_tuple(ta_df)
#
# This function takes the tourist attraction data frame created above then returns a list of (latitude, longitude) tuples for every one of them.
def latlong_tuple(ta_df):
tuple_list = []
for j, ta in ta_df.iterrows():
ta_geo = (float(ta['latitude']), float(ta['longitude']))
tuple_list.append(ta_geo)
return tuple_list
# ## 4. clean_csv(data_in, geo_tuples)
#
# This function is the main data-cleaning function. I first tried importing the csv as a dataframe and then cleaning each entry, but the pandas iterrows and itertuples took a very long time, so I decided to do the basic scrubbing while importing the csv. This function automatically saves a new copy of the cleaned csv with a _out.csv file name extension. The function itself doesn't return anything.
def clean_csv(data_in, geo_tuples):
#automatically generates a cleaned csv file with the same name with _out.csv extension
index = data_in.find('.csv')
data_out = data_in[:index] + '_out' + data_in[index:]
#some error checking when opening
try:
s = open(data_in, 'r')
except:
print('File not found or cannot be opened')
else:
t = open(data_out, 'w')
print('\n Output from an iterable object created from the csv file')
reader = csv.reader(s)
writer = csv.writer(t, delimiter=',')
#counter for number of rows removed during filtering
removed = 0
added = 0
header = True
for row in reader:
if header:
header = False
for i in range(len(row)):
#saving indices for specific columns
if row[i] == 'latitude':
lat = i
elif row[i] == 'longitude':
lng = i
row.append('avg_dist')
writer.writerow(row)
#only add the row if the number of reviews (last column) is greater than 7
elif(int(row[-1]) > 7):
#creating a geo tuple for easy calculation later on
tlat = row[lat]
tlng = row[lng]
ttuple = (tlat, tlng)
dist_calc = []
#calculate the distance from each listing to every top tourist attraction we saved
#if the distance is for some reason greater than 100 km, skip it as it would skew the result
for i in geo_tuples:
dist_from_spot = round(great_circle(i, ttuple).kilometers, 2)
if (dist_from_spot < 100):
dist_calc.append(dist_from_spot)
else:
print('A distance of ' + str(dist_from_spot) + ' km is too far; skipping.')
#calculates the average distance between the listing and all of the tourist attractions
avg_dist = round(statistics.mean(dist_calc), 3)
row.append(avg_dist)
writer.writerow(row)
added += 1
else:
removed += 1
s.close()
t.close()
print('Function Finished')
print(added, 'listings saved')
print(removed, 'listings removed')
# # Reading in the data: Time for fun!
#
# ## Reading in the trip advisor url for New York and saving the data
#
# In the cell below, we read in the Trip Advisor url for New York and save only the top 10 attractions in a list. When we print it, we can validate that these are the places New York is famous for.
url = "https://www.tripadvisor.com/Attractions-g60763-Activities-New_York_City_New_York.html"
top_10 = tripadvisor_attractions(url, 10)
print(top_10)
# +
ta_df = ta_detail(top_10, 'New York, NY')
geo_tuples = latlong_tuple(ta_df)
ta_df
# -
#
#
# The cell below reads in the original csv file, removes some unwanted listings, and adds a new column that has the average distance from the top 10 Trip Advisor approved(!!) tourist attractions.
#
#
clean_csv("data/listings.csv", geo_tuples)
# We then make a copy dataframe **listing** to play around with.
# +
df = pd.read_csv('data/listings_out.csv')
print('Dimensions:', df.shape)
df.head()
listing = df.copy()
# -
listing.head()
# # Visualizing the Data
# ## Neighbourhood
#
# First, I used the groupby function to group the data by neighbourhood groups. I then make 2 different data frames to plot the price and average distance.
area = listing.groupby('neighbourhood_group')
nbhood_price = area['price'].agg([np.sum, np.mean, np.std])
nbhood_dist = area['avg_dist'].agg([np.sum, np.mean, np.std])
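# Conceptually, the groupby/agg calls above compute per-group statistics; the
# same idea in a stdlib-only sketch on toy data (made-up rows, not the Airbnb
# file):

```python
from collections import defaultdict
from statistics import mean

rows = [('Manhattan', 200), ('Brooklyn', 120), ('Manhattan', 180), ('Queens', 90)]

# Group prices by neighbourhood group, then aggregate each group
by_group = defaultdict(list)
for group, price in rows:
    by_group[group].append(price)

avg_price = {group: mean(prices) for group, prices in by_group.items()}
print(avg_price)
```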
# +
fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True)
fig.suptitle('NY Neighbourhoods: Price vs Average Distance to Top Spots', fontsize=10, fontweight='bold')
nbhood_price['mean'].plot(kind='bar', ax=ax[0], color='mediumslateblue')
nbhood_dist['mean'].plot(kind='bar', ax=ax[1], color = 'orchid')
ax[0].set_ylabel('Price', fontsize=10)
ax[1].set_ylabel('Average Distance', fontsize=10)
# -
# Then I used the groupby function for neighbourhoods to see a price comparison between different New York neighbourhoods
area2 = listing.groupby('neighbourhood')
nb_price = area2['price'].agg([np.sum, np.mean, np.std]).sort_values(['mean'])
nb_dist = area2['avg_dist'].agg([np.sum, np.mean, np.std])
fig, ax = plt.subplots(figsize=(4, 35))
fig.suptitle('Most Expensive Neighbourhoods on Airbnb', fontsize=10, fontweight='bold')
nb_price['mean'].plot(kind='barh', ax=ax, color='salmon')
# ### The most expensive neighbourhood: Breezy Point
#
# Why is Breezy Point so expensive? The code below displays the Airbnb listings in Breezy Point: it turns out the "Tremendous stylish hotel" is the only listing there.
breezy = listing.loc[listing['neighbourhood'] == 'Breezy Point']
breezy
# ### The Second Most Expensive: Manhattan Beach
#
# The second most expensive neighbourhood is also not in Manhattan, in contrast to the first visualization we did, which showed Manhattan had the highest average Airbnb price. All apartments in Manhattan Beach turn out to be reasonably priced except "Manhattan Beach for summer rent", which costs 2,800 USD per night.
#
# It seems that outliers are skewing the data quite significantly.
beach = listing.loc[listing['neighbourhood'] == 'Manhattan Beach']
beach
# ## Room Type
#
# To account for the price difference between room types, I grouped the data by the room_type column and made some visualizations.
area = listing.groupby('room_type')
room_price = area['price'].agg([np.sum, np.mean, np.std])
room_dist = area['avg_dist'].agg([np.sum, np.mean, np.std])
room_price['mean'].plot(title="Average Price by Room Type")
apt = listing.loc[listing['room_type'] == 'Entire home/apt']
apt = apt.sort_values('price', ascending=False)
apt.drop(apt.head(20).index, inplace=True)
apt.head()
sns.jointplot(x='avg_dist', y="price", data=apt, kind='kde')
# Plotting the Entire home/apt listings without the top 20 most expensive ones shows two concentrated regions where average distance and price are correlated. The bimodal distribution in average distance might reflect the concentration of Airbnb listings in Manhattan and Brooklyn.
f, ax = plt.subplots(figsize=(11, 6))
sns.violinplot(x="neighbourhood_group", y="price", data=apt, palette="Set3")
# Plotting a violin diagram of the prices of all entire homes in different neighbourhood groups shows that Manhattan has a more widely distributed price range of apartments, albeit on the higher end, while Queens and the Bronx have a higher concentration of listings within a narrower, lower price range.
# # Dealing with Outliers
#
# To deal with some of the outliers at the top, I tried deleting the top 10 or 20 most expensive listings, but this method wasn't scalable across the dataset, nor was it an accurate depiction of the price variety. So I decided to first get an understanding of the most expensive listings in New York and then create a separate dataframe that removes entries with prices more than 3 standard deviations from the mean.
fancy = listing.sort_values('price', ascending=False).iloc[:50]
fancy.head(10)
fancy.describe()
# It is likely that some of the listings above are meant for events and photography rather than travelers' accommodation. It also appears that some hosts who didn't want to remove their listing from Airbnb but weren't available to host simply listed the price as 9,900 USD.
#
# Some of the listings that seemed "normal" but had a very high price were:
#
# * [Comfortable one bedroom in Harlem](https://www.airbnb.com/rooms/10770507)
# * [Lovely Room , 1 Block subway to NYC](https://www.airbnb.com/rooms/8704144)
# ## 99.7 percent of the listings
#
# Using simple statistics, I saved a new dataframe named **reviewed** that keeps listings with more than 1 review and a price within 3 standard deviations of the mean.
reviewed = listing.loc[listing['number_of_reviews'] > 1]
reviewed.describe()
reviewed = reviewed[((reviewed['price'] - reviewed['price'].mean()) / reviewed['price'].std()).abs() < 3]
reviewed.describe()
fig, axs = plt.subplots(1, 2, sharey=True)
fig.suptitle('Do Reviews and Price Matter?', fontsize=20, fontweight='bold')
reviewed.plot(kind='scatter', x='reviews_per_month', y='price', ax=axs[0], figsize=(16, 8))
reviewed.plot(kind='scatter', x='avg_dist', y='price', ax=axs[1])
# The two plots above look for a relationship between price and the number of reviews per month (a proxy for trust and approval), as well as the average distance from the top attractions. The reviews-per-month plot does not display any positive correlation between price and user approval, which makes sense: many factors besides user approval determine an apartment's rental price.
#
# The average distance plot shows an interesting negative correlation between average distance and price. The lower the average distance is, the higher the price seems to be.
#
# Both graphs show that many hosts like to set prices discretely, in increments of 5 or 10, as there is a heavy concentration of data along the horizontal grid lines.
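# The negative relationship described above can be quantified with a correlation coefficient. A minimal sketch on synthetic data (the column names `avg_dist` and `price` match the dataframe used in this notebook, but the values below are made up):

```python
import pandas as pd

# Synthetic listings: price tends to fall as average distance grows
toy = pd.DataFrame({
    'avg_dist': [2.0, 3.5, 5.0, 6.5, 8.0, 9.5],
    'price':    [220, 180, 120, 100,  70,  60],
})

# Pearson correlation between distance and price
r = toy['avg_dist'].corr(toy['price'])
print(f"correlation: {r:.2f}")  # strongly negative
```

# The same one-liner (`reviewed['avg_dist'].corr(reviewed['price'])`) would quantify the trend on the real data.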
f, ax = plt.subplots(figsize=(11, 5))
sns.boxplot(x="neighbourhood_group", y="price", hue="room_type", data=reviewed, palette="PRGn")
# The boxplot above shows how large the discrepancy in apartment prices in Manhattan is. The top 25% of apartments in Manhattan range in price from 400 USD to more than 700 USD, while those in the Bronx span a range of just 200 to 300 USD.
reviewed2 = reviewed[((reviewed['price'] - reviewed['price'].mean()) / reviewed['price'].std()).abs() < 2]
sns.jointplot(x='avg_dist', y="price", data=reviewed2, kind='kde')
# For a better visualization of the correlation between price and average distance, I plotted another graph with only 95% of the dataset (i.e., listings within 2 standard deviations of the mean). This joint plot shows two highly concentrated clusters: apartments around 5 km from the top tourist attractions on average at roughly 90-100 USD per night, and apartments around 8 km away at 50-60 USD per night.
# # Conclusion
#
# By looking at several visualizations of the Airbnb data in New York, I was able to find a negative correlation between the average distance from the most famous sights and the price per night. Data grouped by neighbourhood yielded the expected result: the highest average price per night in Manhattan and the lowest in the Bronx and Queens. The listings.csv data I used contained only a summary of the data, so it was not easy to analyze the detailed factors that determine price. Moving forward, however, I would love to analyze the detailed version of the open data to identify a more accurate correlation between price and apartment size, availability, reviews, average distance, and so on.
| UG_F16/Kim Final Project-airbnb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Playing with the Ising Model and the Replica-Exchange (Temperature-Exchange) Method
import matplotlib.pylab as plt
import numpy as np
# %run ferro_ising_exmcmc.py
# Temperature parameters (inverse temperatures)
model.betas
# ## Log data structure
#
# mclog is a dict; its keys 'Elog', 'Slog', and 'Exlog' hold the energy, state, and exchange data, respectively.
#
# * mclog['Elog'] is the energy trajectory: a 2-D array whose first axis is the trial index and whose second axis is the temperature index
# * mclog['Slog'] is the magnetization state: a 3-D array whose axes are the trial index, the temperature index, and the spin (site) index
# * mclog['Exlog'] is the exchange status: a 3-D array whose axes are the trial index, the temperature index, and the spin (site) index
#
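# As a sanity check on the layout described above, here is a sketch that builds a mock `mclog` with the same structure (5000 trials, 24 temperatures, and an assumed 64 spins — the real spin count depends on the lattice defined in `ferro_ising_exmcmc.py`) and verifies the array shapes:

```python
import numpy as np

n_trials, n_temps, n_spins = 5000, 24, 64  # 64 spins is an assumed lattice size

# Mock log with the documented layout (random placeholder values)
rng = np.random.default_rng(0)
mock_mclog = {
    'Elog':  rng.normal(size=(n_trials, n_temps)),                    # trial x temperature
    'Slog':  rng.choice([-1, 1], size=(n_trials, n_temps, n_spins)),  # trial x temp x spin
    'Exlog': rng.integers(0, 2, size=(n_trials, n_temps, n_spins)),   # trial x temp x spin
}

assert mock_mclog['Elog'].shape == (n_trials, n_temps)
assert mock_mclog['Slog'].shape == (n_trials, n_temps, n_spins)

# Per-trial mean magnetization at each temperature, as computed later in this notebook
m_mean = mock_mclog['Slog'].mean(axis=2)
assert m_mean.shape == (n_trials, n_temps)
```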
# +
# Plot energy (vertical axis) against inverse temperature (horizontal axis).
# A phase transition is expected around temperature 1; plot the first and the last trial.
plt.plot(model.betas, mclog['Elog'][0,:])
plt.plot(model.betas, mclog['Elog'][4999,:])
plt.grid()
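# The temperature-exchange step recorded in `Exlog` swaps configurations between adjacent temperatures. A sketch of the textbook Metropolis swap-acceptance rule for replica exchange (this is the standard rule, not necessarily the exact implementation in `ferro_ising_exmcmc.py`):

```python
import numpy as np

def swap_acceptance(beta_i, beta_j, E_i, E_j):
    """Acceptance probability for exchanging replicas i and j:
    min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    return min(1.0, np.exp((beta_i - beta_j) * (E_i - E_j)))

# Colder replica (larger beta) holds the higher energy: the swap is always accepted
p_accept = swap_acceptance(0.5, 1.0, -10.0, -5.0)
print(p_accept)  # 1.0

# The reverse situation is accepted only with probability exp(-2.5)
p_rare = swap_acceptance(0.5, 1.0, -5.0, -10.0)
```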
# +
# Plot the energy over the 5000 trials
for k in range(0, 24, 2):
plt.plot(mclog['Elog'][:,k], alpha=0.6)
plt.grid()
# +
# Plot the mean magnetization <m> as a histogram
sbins = np.linspace(-1, 1, 32)
plt.figure()
for k, c in zip((1, 16, 23), ('red', 'purple', 'blue')):
labelstr='Temp. id %d' % (k)
plt.hist(mclog['Slog'][:, k, :].mean(axis=1), color=c, bins=sbins, alpha=0.5, label=labelstr)
plt.grid()
plt.title('Ising Magnetization <m>')
plt.xlabel('magnetization')
plt.ylabel('freq')
plt.legend()
plt.show()
# -
mm = mclog['Slog'].mean(axis=2)
# +
# Scatter plot of mean magnetization against inverse temperature
for k in range(24):
plt.plot(np.repeat(model.betas[k], mm.shape[0]), mm[:, k],'o', alpha=0.2)
plt.grid()
plt.title('Magnetization m vs inverse temperature')
plt.xlabel('inverse of T')
plt.ylabel('Magnetization')
plt.show()
| IsingModel/IsingTest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_mxnet_p36
# language: python
# name: conda_mxnet_p36
# ---
# # Model Optimization with an Image Classification Example
# 1. [Introduction](#Introduction)
# 2. [Prerequisites and Preprocessing](#Prequisites-and-Preprocessing)
# 3. [Train the model](#Train-the-model)
# ## Introduction
#
# ***
#
# Welcome to our model optimization example for image classification. In this demo, we will use the Amazon SageMaker Image Classification algorithm to train on the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).
# ## Prequisites and Preprocessing
#
# ***
#
# ### Setup
#
# To get started, we need to define a few variables and obtain certain permissions that will be needed later in the example. These are:
# * A SageMaker session
# * IAM role to give learning, storage & hosting access to your data
# * An S3 bucket, a folder & sub folders that will be used to store data and artifacts
# * SageMaker's specific Image Classification training image which should not be changed
#
# We also need to upgrade the [SageMaker SDK for Python](https://sagemaker.readthedocs.io/en/stable/v2.html) to v2.33.0 or greater and restart the kernel.
# !~/anaconda3/envs/mxnet_p36/bin/pip install --upgrade "sagemaker>=2.33.0"
# +
import sagemaker
from sagemaker import session, get_execution_role
role = get_execution_role()
sagemaker_session = session.Session()
# -
# S3 bucket and folders for saving code and model artifacts.
# Change <uuid> to the name of your Greengrass Component bucket.
bucket = '<uuid>-gg-components'
folder = 'models/uncompiled'
model_with_custom_code_sub_folder = folder + '/model-with-custom-code'
validation_data_sub_folder = folder + '/validation-data'
training_data_sub_folder = folder + '/training-data'
training_output_sub_folder = folder + '/training-output'
# +
# S3 Location to save the model artifact after training
s3_training_output_location = 's3://{}/{}'.format(bucket, training_output_sub_folder)
# S3 Location to save your custom code in tar.gz format
s3_model_with_custom_code_location = 's3://{}/{}'.format(bucket, model_with_custom_code_sub_folder)
# -
from sagemaker.image_uris import retrieve
aws_region = sagemaker_session.boto_region_name
training_image = retrieve(framework='image-classification', region=aws_region, image_scope='training')
# ### Data preparation
#
# In this demo, we are using the [Caltech-256](http://www.vision.caltech.edu/Image_Datasets/Caltech256/) dataset, pre-converted into `RecordIO` format using MXNet's [im2rec](https://mxnet.apache.org/versions/1.7/api/faq/recordio) tool. The Caltech-256 dataset contains 30608 images of 256 object categories. The training/validation split follows this [MXNet example](https://github.com/apache/incubator-mxnet/blob/8ecdc49cf99ccec40b1e342db1ac6791aa97865d/example/image-classification/data/caltech256.sh): it randomly selects 60 images per class for training and uses the remaining data for validation. Converting the entire Caltech-256 dataset (~1.2GB) into `RecordIO` format takes around 50 seconds on a p2.xlarge instance. SageMaker's training algorithm takes `RecordIO` files as input. For this demo, we will download the `RecordIO` files and upload them to S3, and then store the class labels in a variable.
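# A quick arithmetic check on the split described above (257 classes including the clutter class, 60 training images per class, 30608 images in total):

```python
n_classes = 257        # 256 object categories + 1 clutter class
train_per_class = 60
total_images = 30608

num_training_samples = n_classes * train_per_class
num_validation_samples = total_images - num_training_samples

print(num_training_samples)    # 15420 — the value used as a hyperparameter below
print(num_validation_samples)  # the remainder goes to validation
```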
# +
import os
import urllib.request
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
# +
# Dowload caltech-256 data files from MXNet's website
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
# Upload the file to S3
s3_training_data_location = sagemaker_session.upload_data('caltech-256-60-train.rec', bucket, training_data_sub_folder)
s3_validation_data_location = sagemaker_session.upload_data('caltech-256-60-val.rec', bucket, validation_data_sub_folder)
# -
class_labels = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat',
'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101',
'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101',
'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire',
'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks',
'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor',
'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe',
'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell',
'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern',
'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight',
'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan',
'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose',
'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock',
'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus',
'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass',
'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris',
'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder',
'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning',
'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope',
'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels',
'necktie', 'octopus', 'ostrich', 'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder',
'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card',
'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator',
'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus',
'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music',
'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile',
'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass',
'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan',
'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee',
'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato',
'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops',
'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn',
'vcr', 'video-projector', 'washing-machine', 'watch-101', 'waterfall', 'watermelon', 'welding-mask',
'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101',
'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter']
# ## Train the model
#
# ***
#
# Now that we are done with all the setup that is needed, we are ready to train our object detector. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator is required to launch the training job.
#
# We specify the following parameters while creating the estimator:
#
# * ``image_uri``: This is set to the training_image uri we defined previously. Once set, this image will be used later while running the training job.
# * ``role``: This is the IAM role which we defined previously.
# * ``instance_count``: This is the number of instances on which to run the training. When the number of instances is greater than one, then the image classification algorithm will run in distributed settings.
# * ``instance_type``: This indicates the type of machine on which to run the training. For this example we will use `ml.p3.8xlarge`.
# * ``volume_size``: This is the size in GB of the EBS volume to use for storing input data during training. Must be large enough to store training data as File Mode is used.
# * ``max_run``: This is the timeout value in seconds for training. After this amount of time SageMaker terminates the job regardless of its current status.
# * ``input_mode``: This is set to `File` in this example. SageMaker copies the training dataset from the S3 location to a local directory.
# * ``output_path``: This is the S3 path in which the training output is stored. We are assigning it to `s3_training_output_location` defined previously.
#
ic_estimator = sagemaker.estimator.Estimator(image_uri=training_image,
role=role,
instance_count=1,
instance_type='ml.p3.8xlarge',
volume_size = 50,
max_run = 360000,
input_mode= 'File',
output_path=s3_training_output_location,
base_job_name='img-classification-training'
)
# Following are certain hyperparameters that are specific to the algorithm which are also set:
#
# * ``num_layers``: The number of layers (depth) for the network. We use 18 in this samples but other values such as 50, 152 can be used.
# * ``image_shape``: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.
# * ``num_classes``: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.
# * ``num_training_samples``: This is the total number of training samples. It is set to 15420 (60 images per class across 257 classes) for the caltech dataset with the current split.
# * ``mini_batch_size``: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
# * ``epochs``: Number of training epochs.
# * ``learning_rate``: Learning rate for training.
# * ``top_k``: Report the top-k accuracy during training.
# * ``precision_dtype``: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode.
ic_estimator.set_hyperparameters(num_layers=18,
image_shape = "3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=5,
learning_rate=0.01,
top_k=2,
use_pretrained_model=1,
precision_dtype='float32')
# Next we setup the input ``data_channels`` to be used later for training.
# +
train_data = sagemaker.inputs.TrainingInput(s3_training_data_location,
content_type='application/x-recordio',
s3_data_type='S3Prefix')
validation_data = sagemaker.inputs.TrainingInput(s3_validation_data_location,
content_type='application/x-recordio',
s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
# -
# After we've created the estimator object, we can train the model using ``fit()`` API
ic_estimator.fit(inputs=data_channels, logs=True)
# After the training job completes, your trained model will be stored in the bucket specified above, in the `models/uncompiled` folder of your Greengrass Components bucket. Check in S3 that you can see the output of the training job.
| examples/mlops-console-example/model-training/Image-classification-fulltraining-highlevel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pytumblr
import pandas as pd
from pathlib import Path
OUT_PATH = Path('/mnt/data/group07/johannes/ynacc_proc/clean_split/articles_fixed_1.csv')
from newspaper import Article
df = pd.read_csv(OUT_PATH)
df
['http://www.si.com/mmqb/2016/04/11/nfl-roger-goodell-court-ruling-confirms-discipline-power-exempt-list',
'http://www.thedrive.com/article/2866/chevrolet-has-big-news-for-camaro-driving-track-rats',
'http://www.si.com/mmqb/2016/04/29/mmqb-nfl-draft-denver-broncos-paxton-lynch'] # last one, no data
empty_text = list(df[df['text'].isnull()]['url'])
empty_text
manual_urls = ['http://www.si.com/mmqb/2016/04/11/nfl-roger-goodell-court-ruling-confirms-discipline-power-exempt-list',
'http://www.thedrive.com/article/2866/chevrolet-has-big-news-for-camaro-driving-track-rats'] + empty_text
dir = Path.cwd()
files = Path('/mnt/data/group07/johannes/ynacc_proc/clean_split/manual').glob('*.html')
xx = sorted(files, key=lambda x: int(x.stem))
xx
final_articles = []
for f, url in zip(xx, manual_urls):
with open(f) as of:
html = of.read()
a = Article(url)
a.download(input_html=html)
a.parse()
final_articles.append(a)
print(f)
for x in final_articles:
# if x.text == '':
print(x.text)
print(x.url)
ff = [{'url': f.url, 'text': f.text, 'publish_date': f.publish_date, 'title': f.title} for f in final_articles]
# +
df2 = pd.DataFrame(ff)
df.set_index('url', inplace=True)
df2.set_index('url', inplace=True)
df.update(df2)
df
# -
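# The `set_index`/`update` pattern above aligns the two frames on `url` and overwrites matching cells with non-NaN values from `df2`. A small self-contained illustration of that behavior (toy data, not the article frames):

```python
import numpy as np
import pandas as pd

base = pd.DataFrame({'url': ['a', 'b', 'c'],
                     'text': ['kept', np.nan, np.nan]}).set_index('url')
fixes = pd.DataFrame({'url': ['b', 'c'],
                      'text': ['filled-b', 'filled-c']}).set_index('url')

base.update(fixes)  # in place: only cells present (and non-NaN) in `fixes` change

print(base.loc['a', 'text'])  # 'kept'     — row not in `fixes`, untouched
print(base.loc['b', 'text'])  # 'filled-b' — NaN replaced from `fixes`
```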
df = df.reset_index()
df
list(df[df['text'].isnull()]['url'])
OUT2_PATH = Path('/mnt/data/group07/johannes/ynacc_proc/clean_split/articles_fixed_2.csv')
df.to_csv(OUT2_PATH)
| code/ynacc/00 data/02 fetch labeled articles/03_Fetch_Articles_Manual.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Neural Network - Cat Image Classifier
# This neural network is my first partial implementation of the Andrew Ng deep learning assignment from the Deep Learning specialization course. The network classifies images as cat images or non-cat images; the images come from the course as well. This document is not meant to be read from top to bottom: it's best to start at the [L-Layer-Model Section](#L-Layer-Model), follow the algorithm, and then read the [helper functions](#Helper-Functions) as you encounter them. For most parts of this algorithm, I will write up another notebook going in depth on the subject to provide a more comprehensive understanding and intuition.
# ## Import Packages and Set Defaults
# First, let's import some packages that the algorithm will need and set some constants.
# - [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
# - [time](https://docs.python.org/3/library/time.html) provides various time-related functions.
# - [h5py](https://www.h5py.org/) is a Pythonic interface to the [HDF5](https://www.hdfgroup.org/) binary data format. The format specification can be found [here](https://www.hdfgroup.org/solutions/hdf5/).
# - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
# - [scipy](https://www.scipy.org/) is a Python-based ecosystem of open-source software for mathematics, science, and engineering.
# - [PIL](https://pillow.readthedocs.io/en/stable/) is the Python Imaging Library.
# +
import numpy as np
import time
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
# -
# Set defaults and constants
# - `%matplotlib inline` A magic function in Python. This line sets [matplotlib backend](https://matplotlib.org/faq/usage_faq.html#what-is-a-backend) to inline. With this backend, the output of plotting commands is displayed inline within frontends like the Jupyter notebook, directly below the code cell that produced it.
# - `plt.rcParams` Used to set matplotlib default values<br>
# `plt.rcParams['figure.figsize'] = (5.0, 4.0)` sets the default size of the plots. (width, height) in inches.<br>
# `plt.rcParams['image.interpolation'] = 'nearest'` sets the image interpolation to nearest; during scaling we want the pixels to be rendered accurately.<br>
# `plt.rcParams['image.cmap'] = 'gray'` sets the colormap to gray. [colormap](https://matplotlib.org/gallery/color/colormap_reference.html)
# +
# Set the matplotlib backend
# %matplotlib inline
# Set default plot parameters
plt.rcParams['figure.figsize'] = (5.0, 4.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Ensure we get the same random numbers each time by using a constant seed value. (for debugging purposes only)
np.random.seed(1)
# Define the number of layers and the nodes of each layer
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
# -
# ## Helper Functions
# ### Load Data
# The `load_data()` method loads training and test datasets that contain images and labels that indicate whether each picture is a cat or non-cat. The file format used is an [HDF5 (.h5 extention)](https://portal.hdfgroup.org/display/HDF5/File+Format+Specification).
#
# - `train_dataset` is the training set
# - `train_set_x_orig` is the training images
# - `train_set_y_orig` is the training image labels
# - `test_dataset` is the test set
# - `test_set_x_orig` is the test images
# - `test_set_y_orig` is the test image labels
# - `classes` is the list of classes to represent cat or non-cat
def load_data():
# Read training set
train_dataset = h5py.File('D:/Datasets/train_catvnoncat.h5', "r")
# Get training set features
train_set_x_orig = np.array(train_dataset["train_set_x"][:])
# Get training set labels
train_set_y_orig = np.array(train_dataset["train_set_y"][:])
# Read test set
test_dataset = h5py.File('D:/Datasets/test_catvnoncat.h5', "r")
# Get test set features
test_set_x_orig = np.array(test_dataset["test_set_x"][:])
# Get test set labels
test_set_y_orig = np.array(test_dataset["test_set_y"][:])
# Get the list of classes
classes = np.array(test_dataset["list_classes"][:])
# Turns rank 1 arrays to rank 2 arrays, i.e. array with shape (n,) to (1,n)
train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
# ### Sigmoid Activation
#
# The `sigmoid()` function implements the sigmoid mathematical function, $h(z)=\dfrac{1}{1+e^{-z}}$.
#
# I've created another notebook called [Activation Functions](ActivationFunctions.ipynb) that details some common activation functions and when to use them.
# Implements the sigmoid activation in numpy
#
# Arguments:
# Z -- numpy array of any shape
#
# Returns:
# A -- output of sigmoid(z), same shape as Z
# cache -- returns Z as well, useful during backpropagation
def sigmoid(Z):
A = 1/(1+np.exp(-Z))
cache = Z
return A, cache
# ### Sigmoid Backwards Activation
# Implement the backward propagation for a single SIGMOID unit.
#
# Arguments:
# dA -- post-activation gradient, of any shape
# cache -- 'Z' where we store for computing backward propagation efficiently
#
# Returns:
# dZ -- Gradient of the cost with respect to Z
def sigmoid_backward(dA, cache):
Z = cache
s = 1/(1+np.exp(-Z))
dZ = dA * s * (1-s)
assert (dZ.shape == Z.shape)
return dZ
# ### ReLU Activation Function
#
# The `relu()` function implements the rectified linear unit, $h(z)=\max(0,z)$.
#
# I've created another notebook called [Activation Functions](ActivationFunctions.ipynb) that details some common activation functions and when to use them.
# Implement the RELU function.
#
# Arguments:
# Z -- Output of the linear layer, of any shape
#
# Returns:
# A -- Post-activation parameter, of the same shape as Z
# cache -- a python dictionary containing "A" ; stored for computing the backward pass efficiently
def relu(Z):
A = np.maximum(0,Z)
assert(A.shape == Z.shape)
cache = Z
return A, cache
# ### ReLU Backwards Activation
# Implement the backward propagation for a single RELU unit.
#
# Arguments:
# dA -- post-activation gradient, of any shape
# cache -- 'Z' where we store for computing backward propagation efficiently
#
# Returns:
# dZ -- Gradient of the cost with respect to Z
def relu_backward(dA, cache):
Z = cache
dZ = np.array(dA, copy=True) # just converting dz to a correct object.
# When z <= 0, you should set dz to 0 as well.
dZ[Z <= 0] = 0
assert (dZ.shape == Z.shape)
return dZ
# ### Predict
# This function is used to predict the results of a L-layer neural network.
#
# Arguments:
# X -- data set of examples you would like to label
# parameters -- parameters of the trained model
#
# Returns:
# p -- predictions for the given dataset X
def predict(X, y, parameters):
m = X.shape[1]
n = len(parameters) // 2 # number of layers in the neural network
p = np.zeros((1,m))
# Forward propagation
probas, caches = L_model_forward(X, parameters)
# convert probas to 0/1 predictions
for i in range(0, probas.shape[1]):
if probas[0,i] > 0.5:
p[0,i] = 1
else:
p[0,i] = 0
#print results
#print ("predictions: " + str(p))
#print ("true labels: " + str(y))
print("Accuracy: " + str(np.sum((p == y)/m)))
return p
# ### Print Mismatched Pictures
# Plots images where predictions and truth were different.
# X -- dataset
# y -- true labels
# p -- predictions
def print_mislabeled_images(classes, X, y, p):
a = p + y
mislabeled_indices = np.asarray(np.where(a == 1))
plt.rcParams['figure.figsize'] = (40.0, 40.0) # set default size of plots
num_images = len(mislabeled_indices[0])
for i in range(num_images):
index = mislabeled_indices[1][i]
plt.subplot(2, num_images, i + 1)
plt.imshow(X[:,index].reshape(64,64,3), interpolation='nearest')
plt.axis('off')
plt.title("Prediction: " + classes[int(p[0,index])].decode("utf-8") + " \n Class: " + classes[y[0,index]].decode("utf-8"))
# ## Initialize Parameters
# Arguments:
# layer_dims -- python array (list) containing the dimensions of each layer in our network
#
# Returns:
# parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
# Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
# bl -- bias vector of shape (layer_dims[l], 1)
def initialize_parameters_deep(layer_dims):
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
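# A quick shape check for the initializer, restated here in minimal form so the sketch is self-contained (`init_params` is a local helper with the same logic as `initialize_parameters_deep` above; the small layer sizes are made up):

```python
import numpy as np

def init_params(layer_dims, seed=3):
    # Minimal restatement of initialize_parameters_deep for a shape check
    rng = np.random.RandomState(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params['W' + str(l)] = rng.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        params['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return params

p = init_params([4, 3, 1])  # tiny 2-layer network: 4 inputs -> 3 hidden -> 1 output
print(p['W1'].shape, p['b1'].shape)  # (3, 4) (3, 1)
print(p['W2'].shape, p['b2'].shape)  # (1, 3) (1, 1)
```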
# ## Linear-Forward
# Implement the linear part of a layer's forward propagation.
#
# Arguments:
# A -- activations from previous layer (or input data): (size of previous layer, number of examples)
# W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
# b -- bias vector, numpy array of shape (size of the current layer, 1)
#
# Returns:
# Z -- the input of the activation function, also called pre-activation parameter
# cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
def linear_forward(A, W, b):
Z = np.dot(W, A) + b
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
# ## Linear-Activation-Forward
# Implement the forward propagation for the LINEAR->ACTIVATION layer
#
# Arguments:
# A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
# W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
# b -- bias vector, numpy array of shape (size of the current layer, 1)
# activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
#
# Returns:
# A -- the output of the activation function, also called the post-activation value
# cache -- a python dictionary containing "linear_cache" and "activation_cache";
# stored for computing the backward pass efficiently
def linear_activation_forward(A_prev, W, b, activation):
if activation == "sigmoid":
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
elif activation == "relu":
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
# ## L_Model_Forward
# Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
#
# Arguments:
# X -- data, numpy array of shape (input size, number of examples)
# parameters -- output of initialize_parameters_deep()
#
# Returns:
# AL -- last post-activation value
# caches -- list of caches containing:
# every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1)
def L_model_forward(X, parameters):
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
A, cache = linear_activation_forward(A_prev, parameters["W" + str(l)], parameters["b" + str(l)], activation = "relu")
caches.append(cache)
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
AL, cache = linear_activation_forward(A, parameters["W" + str(L)], parameters["b" + str(L)], activation = "sigmoid")
caches.append(cache)
assert(AL.shape == (1,X.shape[1]))
return AL, caches
# ## Cost Function
# Implement the cost function defined by equation (7).
#
# Arguments:
# AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
# Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
#
# Returns:
# cost -- cross-entropy cost
def compute_cost(AL, Y):
m = Y.shape[1]
cost = (-1/m) * np.sum(np.dot(Y, np.log(AL).T) + np.dot(1-Y, np.log(1-AL).T))
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
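# The "equation (7)" referenced above is the cross-entropy cost from the original assignment. Written out to match the implementation term by term:
#
# $$J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\left(a^{[L](i)}\right) + \left(1-y^{(i)}\right)\log\left(1-a^{[L](i)}\right)\right]$$
#
# where $a^{[L](i)}$ is the sigmoid output `AL` for example $i$, $y^{(i)}$ is the true label, and $m$ is the number of examples.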
# ## Linear-Backward
# Implement the linear portion of backward propagation for a single layer (layer l)
#
# Arguments:
# dZ -- Gradient of the cost with respect to the linear output (of current layer l)
# cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
#
# Returns:
# dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
# dW -- Gradient of the cost with respect to W (current layer l), same shape as W
# db -- Gradient of the cost with respect to b (current layer l), same shape as b
def linear_backward(dZ, cache):
A_prev, W, b = cache
m = A_prev.shape[1]
dW = (1/m)*np.dot(dZ, A_prev.T)
db = (1/m)*np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# ## Linear-Activation-Backward
# Implement the backward propagation for the LINEAR->ACTIVATION layer.
#
# Arguments:
# dA -- post-activation gradient for current layer l
# cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
# activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
#
# Returns:
# dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
# dW -- Gradient of the cost with respect to W (current layer l), same shape as W
# db -- Gradient of the cost with respect to b (current layer l), same shape as b
def linear_activation_backward(dA, cache, activation):
linear_cache, activation_cache = cache
if activation == "relu":
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
elif activation == "sigmoid":
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
return dA_prev, dW, db
# ## L-Model-Backward
# Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
#
# Arguments:
# AL -- probability vector, output of the forward propagation (L_model_forward())
# Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
# caches -- list of caches containing:
# every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
# the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
#
# Returns:
# grads -- A dictionary with the gradients
# grads["dA" + str(l)] = ...
# grads["dW" + str(l)] = ...
# grads["db" + str(l)] = ...
def L_model_backward(AL, Y, caches):
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
# Lth layer (SIGMOID -> LINEAR) gradients.
current_cache = caches[L-1]
grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid")
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, "relu")
grads["dA" + str(l)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
return grads
# ## Update Parameters
# Update parameters using gradient descent
#
# Arguments:
# parameters -- python dictionary containing your parameters
# grads -- python dictionary containing your gradients, output of L_model_backward
#
# Returns:
# parameters -- python dictionary containing your updated parameters
# parameters["W" + str(l)] = ...
# parameters["b" + str(l)] = ...
def update_parameters(parameters, grads, learning_rate):
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
return parameters
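# A minimal sketch of the update rule on toy values (the loop is restated here so the snippet runs standalone; the shapes and learning rate are illustrative, not the ones used below):

```python
import numpy as np

parameters = {"W1": np.array([[1.0, 2.0]]), "b1": np.array([[0.5]])}
grads = {"dW1": np.array([[0.1, 0.2]]), "db1": np.array([[0.05]])}
learning_rate = 0.1

# One gradient-descent step per parameter: theta := theta - learning_rate * d_theta
for l in range(1):
    parameters["W" + str(l + 1)] -= learning_rate * grads["dW" + str(l + 1)]
    parameters["b" + str(l + 1)] -= learning_rate * grads["db" + str(l + 1)]
# W1 is now [[0.99, 1.98]] and b1 is [[0.495]]
```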
# ## L-Layer-Model
# Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
#
# Arguments:
# X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
# Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
# layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
# learning_rate -- learning rate of the gradient descent update rule
# num_iterations -- number of iterations of the optimization loop
# print_cost -- if True, it prints the cost every 100 steps
#
# Returns:
# parameters -- parameters learnt by the model. They can then be used to predict.
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization. (≈ 1 line of code)
parameters = initialize_parameters_deep(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
AL, caches = L_model_forward(X, parameters)
# Compute cost.
cost = compute_cost(AL, Y)
# Backward propagation.
grads = L_model_backward(AL, Y, caches)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
            costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
# ### Load Training and Test Data
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
# ### Exploratory Data Analysis
# +
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
print ("classes shape: " + str(classes.shape))
# -
# ### Reshape Data
# +
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
# -
# ## Train the Model
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
# ## Accuracy on Training Data
pred_train = predict(train_x, train_y, parameters)
# ## Accuracy on Test Data
pred_test = predict(test_x, test_y, parameters)
# ## Mislabeled Pictures
print_mislabeled_images(classes, test_x, test_y, pred_test)
| .ipynb_checkpoints/NeuralNetwork-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Operating on Data in Pandas
# One of the essential pieces of NumPy is the ability to perform quick element-wise operations, both with basic arithmetic (addition, subtraction, multiplication, etc.) and with more sophisticated operations (trigonometric functions, exponential and logarithmic functions, etc.).
# **Pandas inherits much of this functionality from NumPy, and the ufuncs** that we introduced in [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) are key to this.
#
# Pandas includes a couple useful twists, however: for unary operations like negation and trigonometric functions, these ufuncs will *preserve index and column labels* in the output, and for binary operations such as **addition and multiplication, Pandas will automatically *align indices* when passing the objects to the ufunc.**
# This means that keeping the context of data and combining data from different sources–both potentially error-prone tasks with raw NumPy arrays–become essentially foolproof ones with Pandas.
# We will additionally see that there are well-defined operations between one-dimensional ``Series`` structures and two-dimensional ``DataFrame`` structures.
# ## Ufuncs: Index Preservation
#
# Because Pandas is designed to work with NumPy, any NumPy ufunc will work on Pandas ``Series`` and ``DataFrame`` objects.
# Let's start by defining a simple ``Series`` and ``DataFrame`` on which to demonstrate this:
import pandas as pd
import numpy as np
# +
np.random.seed(24)
ser = pd.Series(np.random.randint(0, 10, 4))
ser
# +
df = pd.DataFrame(np.random.randint(0, 10, (3, 4)),
columns=['A', 'B', 'C', 'D'])
df
# -
# If we apply a NumPy ufunc on either of these objects, the result will be another Pandas object *with the indices preserved:*
np.exp(ser)
type(np.exp(ser))
# Or, for a slightly more complex calculation:
np.sin(df*np.pi/4)
# Any of the ufuncs discussed in [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) can be used in a similar manner.
# ## UFuncs: Index Alignment
#
# For binary operations on two ``Series`` or ``DataFrame`` objects, Pandas will align indices in the process of performing the operation.
# This is very convenient when working with incomplete data, as we'll see in some of the examples that follow.
# ### Index alignment in Series
#
# As an example, suppose we are combining two different data sources, and find only the top three US states by *area* and the top three US states by *population*:
# +
area = pd.Series({'Alaska': 1723337,
'Texas': 695662,
'California': 423967},
name='area')
population = pd.Series({'California': 38332521,
'Texas': 26448193,
'New York': 19651127},
name='population')
# -
# Let's see what happens when we divide these to compute the population density:
population / area
# The resulting array contains the *union* of indices of the two input arrays, which could be determined using set arithmetic on these indices:
area.index | population.index  # set-style operator; deprecated for Index objects in recent pandas versions
area.index.union(population.index)  # preferred explicit method
# Any item for which one or the other does not have an entry is marked with ``NaN``, or "Not a Number," which is how Pandas marks missing data (see further discussion of missing data in [Handling Missing Data](03.04-Missing-Values.ipynb)).
# This index matching is implemented this way for any of Python's built-in arithmetic expressions; any missing values are filled in with NaN by default:
A = pd.Series([2, 4, 6], index=[0, 1, 2])
B = pd.Series([1, 3, 5], index=[1, 2, 3])
print(A)
print(B)
A + B
# If using NaN values is not the desired behavior, the fill value can be modified using appropriate object methods in place of the operators.
# For example, calling ``A.add(B)`` is equivalent to calling ``A + B``, but allows optional explicit specification of the fill value for any elements in ``A`` or ``B`` that might be missing:
A.add(B, fill_value=0)  # missing entries are filled with 0 before adding
# ### Index alignment in DataFrame
#
# A similar type of alignment takes place for *both* columns and indices when performing operations on ``DataFrame``s:
# +
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)),
columns=list('AB'))
A
# -
B = pd.DataFrame(np.random.randint(0, 10, (3, 3)),
columns=list('BAC'))
B
A + B
A.add(B, fill_value=0)
# Notice that indices are aligned correctly irrespective of their order in the two objects, and indices in the result are sorted.
# As was the case with ``Series``, we can use the associated object's arithmetic method and pass any desired ``fill_value`` to be used in place of missing entries.
# Here we'll fill with the mean of all values in ``A`` (computed by first stacking the rows of ``A``):
A.stack()
A
A.stack().mean()
fill = A.stack().mean()
A.add(B, fill_value=fill)
# The following table lists Python operators and their equivalent Pandas object methods:
#
# | Python Operator | Pandas Method(s) |
# |-----------------|---------------------------------------|
# | ``+`` | ``add()`` |
# | ``-`` | ``sub()``, ``subtract()`` |
# | ``*`` | ``mul()``, ``multiply()`` |
# | ``/`` | ``truediv()``, ``div()``, ``divide()``|
# | ``//`` | ``floordiv()`` |
# | ``%`` | ``mod()`` |
# | ``**`` | ``pow()`` |
#
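# For example, the operator and method forms are interchangeable when no fill value is needed, and the ``r``-prefixed variants (``radd()``, ``rsub()``, etc.) reverse the operand order:

```python
import pandas as pd

A = pd.Series([2, 4, 6], index=[0, 1, 2])

# A * 2 and A.mul(2) produce identical results
print(A.mul(2).equals(A * 2))   # True

# rsub reverses the operands: A.rsub(10) computes 10 - A
print(A.rsub(10).tolist())      # [8, 6, 4]
```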
# ## Ufuncs: Operations Between DataFrame and Series
#
# When performing operations between a ``DataFrame`` and a ``Series``, the index and column alignment is similarly maintained.
# Operations between a ``DataFrame`` and a ``Series`` are similar to operations between a two-dimensional and one-dimensional NumPy array.
# Consider one common operation, where we find the difference of a two-dimensional array and one of its rows:
# + jupyter={"outputs_hidden": false}
A = np.random.randint(10, size=(3, 4))
A
# + jupyter={"outputs_hidden": false}
A - A[0]
# -
# According to NumPy's broadcasting rules (see [Computation on Arrays: Broadcasting](02.05-Computation-on-arrays-broadcasting.ipynb)), subtraction between a two-dimensional array and one of its rows is applied row-wise.
#
# In Pandas, the convention similarly operates row-wise by default:
# + jupyter={"outputs_hidden": false}
df = pd.DataFrame(A, columns = list('QRST'))
df - df.iloc[0]
# -
# If you would instead like to operate column-wise, you can use the object methods mentioned earlier, while specifying the ``axis`` keyword:
# + tags=[]
df.subtract(df['R'], axis=0)
# -
# Note that these ``DataFrame``/``Series`` operations, like the operations discussed above, will automatically align indices between the two elements:
df
df.iloc[0, ::2]
# + jupyter={"outputs_hidden": false}
df - df.iloc[0, ::2]  # NaN appears wherever a column is missing from the subtrahend
# This preservation and alignment of indices and columns means that operations on data in Pandas will always maintain the data context, which prevents the types of silly errors that might come up when working with heterogeneous and/or misaligned data in raw NumPy arrays.
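# As a small illustration of that safety net (the toy values here are made up): raw NumPy arrays pair values purely by position, while Pandas pairs values by label regardless of storage order:

```python
import pandas as pd

population = pd.Series([100, 200, 300], index=['a', 'b', 'c'])
# same entities, stored in a different order
area = pd.Series([3.0, 1.0, 2.0], index=['c', 'a', 'b'])

density = population / area
# each label is matched correctly: 'a' -> 100/1.0, 'b' -> 200/2.0, 'c' -> 300/3.0
print(density)
```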
| 2-EDA/2-Pandas/Teoria/4 - Operations-in-Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import math
class Cone:
    def __init__(self, radius, height):
        self.radius = radius
        self.height = height

    def volume(self):
        # V = (1/3) * pi * r^2 * h
        return (1 / 3) * math.pi * self.radius**2 * self.height

    def surfaceArea(self):
        # A = pi * r * (r + slant height), slant height = sqrt(h^2 + r^2)
        return math.pi * self.radius * (self.radius + math.sqrt(self.height**2 + self.radius**2))


ase = Cone(4, 5)
ase.volume()
ase.surfaceArea()
| Assignment - 2 (Day 6).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:miniconda3-default]
# language: python
# name: conda-env-miniconda3-default-py
# ---
# +
import glob
from ecgtools import Builder
from ecgtools.parsers.cesm import parse_cesm_history, parse_cesm_timeseries
# -
# Let's see what is available for
# /glade/campaign/cesm/development/wawg/WACCM6-TSMLT-GEO/SAI1/b.e21.BW.f09_g17.SSP245-TSMLT-GAUSS-DEFAULT.005/atm/proc/tseries/:
# day_1 day_5 hour_1 hour_3 month_1
glob.glob('/glade/campaign/cesm/development/wawg/WACCM6-TSMLT-GEO/SAI1/*')
# Build a catalog of the 1 degree WACCM6 MA chemistry timeseries output for all these cases
# +
esm_dir = "/glade/campaign/cesm/development/wawg/WACCM6-TSMLT-GEO/SAI1/"
b = Builder(
# Directory with the output
esm_dir,
    # Depth of 4 to descend into the per-frequency timeseries directories
    depth=4,
# Exclude the other components, hist, and restart directories
# and pick out the proc timeseries for 1- and 5-day and monthly data
exclude_patterns=["*/cpl/*",
"*/esp/*",
"*/glc/*",
"*/ice/*",
"*/lnd/*",
"*/logs/*",
"*/ocn/*",
"*/rest/*",
"*/rof/*",
"*/wav/*",
"*/controller/*"],
# Number of jobs to execute - should be equal to # threads you are using
njobs=1
)
# + tags=[]
b = b.build(parsing_func=parse_cesm_timeseries)
# -
# check for invalid assets
# + tags=[]
b.invalid_assets.values
# -
# Save the catalog - creates a csv and json file
# +
catalog_dir = "/glade/work/marsh/intake-esm-catalogs/"
b.save(
# File path - could save as .csv (uncompressed csv) or .csv.gz (compressed csv)
catalog_dir+"WACCM6-TSMLT-GEO-SAI1.csv",
# Column name including filepath
path_column_name='path',
# Column name including variables
variable_column_name='variable',
# Data file format - could be netcdf or zarr (in this case, netcdf)
data_format="netcdf",
# Which attributes to groupby when reading in variables using intake-esm
groupby_attrs=["component", "stream", "case"],
# Aggregations which are fed into xarray when reading in data using intake
aggregations=[
{'type': 'union', 'attribute_name': 'variable'},
{
'type': 'join_existing',
'attribute_name': 'time_range',
'options': {'dim': 'time', 'coords': 'minimal', 'compat': 'override'},
},
],
)
# -
glob.glob(catalog_dir+'*')
# +
# b.filelist?
# -
import ecgtools
print(ecgtools.__version__)
# +
from ecgtools.parsers.cesm import parse_cesm_timeseries
path = "/glade/campaign/cesm/development/wawg/WACCM6-MA-1deg/b.e21.BWSSP245.f09_g17.release-cesm2.1.3.WACCM-MA-1deg.001/atm/proc/tseries/month_1/b.e21.BWSSP245.f09_g17.release-cesm2.1.3.WACCM-MA-1deg.001.cam.h0.ACTREL.206501-209912.nc"
parse_cesm_timeseries(path)
# -
| notebooks/create_intake_catalog-geo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
# Import Data:
df = pd.read_csv('data.csv')
df.head(5)
df.info()
# + jupyter={"source_hidden": true} tags=[]
def missing_values_table(df):
# Total missing values
mis_val = df.isnull().sum()
# Percentage of missing values
mis_val_percent = 100 * df.isnull().sum() / len(df)
# Make a table with the results
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
# Rename the columns
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
# Sort the table by percentage of missing descending
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0].sort_values(
'% of Total Values', ascending=False).round(1)
# Print some summary information
print ("Your selected dataframe has " + str(df.shape[1]) + " columns.\n"
"There are " + str(mis_val_table_ren_columns.shape[0]) +
" columns that have missing values.")
# Return the dataframe with missing information
return mis_val_table_ren_columns
# -
missing_values_table(df)
del df['Madein']
df = df.rename(columns=str.lower)
df.columns
df = df.rename(
columns={'product':'id',
'graphiccard':'gpu',
'core':'cpu',
'capacity':'memory'})
df.columns
df['id'] = np.arange(0,len(df))
df.head(5)
df['brand'].value_counts()
df['brand'] = df['brand'].str.upper()
df['brand'].value_counts()
# + tags=[]
print("Before clean 'cpu':")
print('*****')
print(df['cpu'].value_counts())
# + tags=[]
def clean_cpu(df):
a = []
df['cpu_freq'] = df['cpu'].str.extract(r'(\d+(?:\.\d+)?[ ]*GHz)')
df['cpu_freq'] = df['cpu_freq'].str.replace('GHz', '')
df['cpu_freq'] = df['cpu_freq'].str.replace(' ', '')
df.rename(columns={'cpu_freq': 'cpu_GHz'}, inplace=True)
df['cpu_GHz'] = df['cpu_GHz'].astype(float)
for i in df['cpu']:
if 'i5' in i:
a.append('Intel i5')
elif 'i3' in i:
a.append('Intel i3')
elif 'i7' in i:
a.append('Intel i7')
elif 'Ryzen 3' in i:
a.append('AMD Ryzen 3')
elif 'Ryzen 7' in i:
a.append('AMD Ryzen 7')
elif 'Ryzen 5' in i:
a.append('AMD Ryzen 5')
elif 'Pentium' in i:
a.append('Intel Pentium')
elif 'Celeron' in i:
a.append('Intel Celeron')
elif 'Ryzen 9' in i:
a.append('AMD Ryzen 9')
elif 'Apple' in i:
a.append('Apple M1')
else:
a.append(i)
df['cpu'] = a
return df
# -
df = clean_cpu(df)
df[['cpu','cpu_GHz']].sample(5)
df['cpu_brand'] = df['cpu'].str.extract(r'^(\w+)')
df['cpu_brand'].value_counts()
df.info()
# + tags=[]
df['gpu'].value_counts()
# + jupyter={"source_hidden": true} tags=[]
def clean_gpu(df):
a = []
for i in df['gpu']:
if 'RTX' in i:
a.append('NVIDIA GeForce RTX')
elif 'GTX' in i:
a.append('NVIDIA GeForce GTX')
elif 'Iris' in i:
a.append('Intel Iris')
elif 'UHD' in i:
a.append('Intel UHD')
elif 'Radeon' in i:
a.append('AMD Radeon')
elif 'MX' in i:
a.append('NVIDIA GeForce MX')
else:
a.append('M1 GPU')
df['gpu'] = a
return df
# -
clean_gpu(df)
df[['gpu']]
df['gpu'].value_counts()
df['gpu_brand'] = df['gpu'].str.extract(r'^(\w+)')
df['gpu_brand'].value_counts()
df.columns
column_names = ['id', 'brand', 'cpu', 'cpu_GHz', 'cpu_brand', 'ram', 'scrsize', 'gpu', 'gpu_brand',
'memory', 'drive_type', 'opersystem', 'weight', 'since', 'shop', 'price', 'url']
df = df.reindex(columns=column_names)
df
df.info()
df.index
df.to_csv('laptop_clean.csv', index=False)
| cleaning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
# # All my cheat sheets
#
# ## Python
# - [Python basics, comprehensions](python_cheat_sheet_1.html)
# - [Python object oriented and exception handling](python_cheat_sheet_2.html)
# - [Productive in Python - map reduce lambdas](python_cheat_sheet_3.html)
# - [regex in Python](regex-mindmap.png)
# ## Conda
# - [Conda basics](conda_cheat_sheet.html)
# ## Numpy
# - [Numpy basics](numpy_cheat_sheet_1.html)
# - [Numpy array slicing dicing searching](numpy_cheat_sheet_2.html)
#
# ## Pandas
# - [Pandas basics](pandas_cheat_sheet_1.html)
# - [Intermediate - multilevel index, missing data, aggregation, merge join concat](pandas_cheat_sheet_2.html)
# - [Productivity with Pandas](pandas_cheat_sheet_3.html)
#
# ### Data viz with Pandas
# - [data viz with pandas](pandas_data_viz_1.html)
# ## Matplotlib
# - [matplotlib basics](matplotlib_1.html)
# - [advanced matplotlib plotting](matplotlib_2.html)
# ## Seaborn
# - [Seaborn basics](seaborn_cheat_sheet_1.html) - distplot, jointplot, pairplot, rugplot
# - [Seaborn categorical plotting](seaborn_cheat_sheet_2.html) - bar, count, box, violin, strip, swarm plots
# - [Seaborn matrix and regression plots](seaborn_cheat_sheet_3.html) - heatmap, cluster, regression
# - [Seaborn grids and customization](seaborn_cheat_sheet_4.html) - pairgrid, facetgrid, customization
# ## Interactive plotting with Plotly
# - [Plotly introduction](plotly_cufflinks_cheat_sheet_1.html)
# - [Interactive plotting with Plotly](plotly_cufflinks_cheat_sheet_2.html)
#
# ### Geographical plotting with Plotly
# - [Choropleth maps of USA](plotly_geographical_plotting_1.html)
# ## R
# - [R basics](r_cheat_sheet_1.html)
# ## Javascript
# - [JS essentials](js_essentials.html)
# ## Latex in notebooks
# - [latex](latex-1.html)
# ## Open source geospatial
# - [A short tour of Open Geospatial Tools of the scientific Python ecosystem](../python-open-geospatial-world.html)
#
# ### Vector - GeoPandas, Fiona, Shapely, Pyepsg, Folium
# - [Introduction to GeoPandas](geopandas-1.html) Covers Data IO, projections, basic plotting.
# - [Geoprocessing with GeoPandas](geopandas-2.html) More data IO, interactive plotting, Geocoding
# - [Spatial overlays with GeoPandas](geopandas-3.html) Point in polygon, topology checks, spatial join, spatial intersection, geometry simplification, data aggregation
# - [Reclassification, data download](geopandas-4.html) Reclassify with Pysal, download data from OSM
#
# ### Raster - rasterio, rasterstats
# - [Introduction to working with rasters](open-geo-raster-1.html) Read raster into numpy, plotting, histograms
# - [Working with hyperspectral images](open-geo-raster-2.html) Read AVIRIS data, plot spectral signature, perform SAM classification
#
# ### Server - PostGIS stack
# - [PostGIS 1](postgis-1.html) - fundamentals of PostGIS
# - [PostGIS 2](postgis-2.html) - working with PostGIS using SQLAlchemy, GeoAlchemy, Pandas, GeoPandas
# ### Docker
# - [Docker 1](docker-1.html)
| python_crash_course/cheat_sheet_index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# COLLABORATIVE FILTERING CONSTANTS
# PATHS
#
PATH_TO_FULL_CF_FILE = "../../preprocessed-data/CF/data_cf.pkl"
PATH_TO_MOVIES_CF_FILE = "../../preprocessed-data/CF/movies_cf.pkl"
PATH_TO_RATINGS_CF_FILE = "../../preprocessed-data/CF/ratings_cf.pkl"
# DataFrame names
# data_cf = full file
# movies_cf = movies file
# ratings_cf = ratings file
# KNN
N_NEIGHBORS = 11
# +
# Importing the required libraries
import pandas as pd
pd.set_option("display.max_rows", 25)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import seaborn as sns
import sys
# Importing the garbage collector
import gc
# Importing libraries for the recommendation system
import scipy.sparse as sparse  # Sparse matrix (csr_matrix)
# Importing sklearn
import sklearn
from sklearn.neighbors import NearestNeighbors
# Importing regular expression operations
import re
# -
# # 1 - Data Preparation for Collaborative Filtering
# Defining a function that loads the files from the CF folder
def load_cf_files(full_file=True, movie_file=False, ratings_file=False):
    if(full_file):
        # Loading the complete preprocessed file
        data_cf = pd.read_pickle(PATH_TO_FULL_CF_FILE)
        data_cf = data_cf[["movieId", "title", "userId", "rating"]]  # Reordering the columns
        print("Full file: loaded successfully!")
    else:
        print("Full file: not loaded, check the parameters to confirm this was intended!")
    if(movie_file):
        # Loading the movies file
        movies_cf = pd.read_pickle(PATH_TO_MOVIES_CF_FILE)
        print("Movies file: loaded successfully!")
    else:
        print("Movies file: not loaded, check the parameters to confirm this was intended!")
    if(ratings_file):
        ratings_cf = pd.read_pickle(PATH_TO_RATINGS_CF_FILE)
        print("Ratings file: loaded successfully!")
    else:
        print("Ratings file: not loaded, check the parameters to confirm this was intended!")
    # Defining the return values
    if("data_cf" in locals()):
        if("movies_cf" in locals()):
            if("ratings_cf" in locals()):
                return data_cf, movies_cf, ratings_cf
            else:
                return data_cf, movies_cf
        else:
            return data_cf
    elif("movies_cf" in locals()):
        if("ratings_cf" in locals()):
            return movies_cf, ratings_cf
        else:
            return movies_cf
    elif("ratings_cf" in locals()):
        return ratings_cf
    else:
        return None
# Calling the file-loading function
data_cf, movies_cf = load_cf_files(full_file=True, movie_file=True, ratings_file=False)
# +
# CONTENT-BASED CONSTANTS
# PATHS
PATH_TO_FULL_CB_FILE = "../preprocessed-data/CB/data_cb.pkl"
PATH_TO_MOVIES_CB_FILE = "../preprocessed-data/CB/movies_cb.pkl"
PATH_TO_RATINGS_CB_FILE = "../preprocessed-data/CB/ratings_cb.pkl"
PATH_TO_RATINGS_INFOS_CB_FILE = "../preprocessed-data/CB/ratings_info_cb.pkl"
PATH_TO_TAG_RELEVANCE_GROUPED_CB_FILE = "../preprocessed-data/CB/tag_relevance_grouped_cb.pkl"
PATH_TO_TAG_RELEVANCE_CB_FILE = "../preprocessed-data/CB/tag_relevance_cb.pkl"
PATH_TO_TAGS_PROCESSED_CB_FILE = "../preprocessed-data/CB/tags_processed_cb.pkl"
# DataFrame names
# data_cb = full file
# movies_cb = movies file
# ratings_cb = ratings file
# ratings_infos_cb = ratings information file
# tag_relevance_grouped_cb = tag relevance after grouping
# tag_relevance_cb = original tag relevance
# tags_processed_cb = all tags combined into one column and processed with nltk
# -
def load_cb_files(full=True, movies=False, ratings=False, ratings_infos=False ,relevance_grouped=False, relevance=False, tags_processed=False):
data_cb = None
movies_cb = None
ratings_cb = None
ratings_infos_cb = None
tag_relevance_grouped_cb = None
tag_relevance_cb = None
tags_processed_cb = None
    # If the full file should be loaded
    if(full):
        data_cb = pd.read_pickle(PATH_TO_FULL_CB_FILE)
        print("Full file: loaded successfully!")
    else:
        print("Full file: not loaded, check that this was intended.")
    # If the movies file should be loaded
    if(movies):
        movies_cb = pd.read_pickle(PATH_TO_MOVIES_CB_FILE)
        print("Movies file: loaded successfully!")
    else:
        print("Movies file: not loaded, check that this was intended.")
    if(ratings):
        ratings_cb = pd.read_pickle(PATH_TO_RATINGS_CB_FILE)
        print("Ratings file: loaded successfully!")
    else:
        print("Ratings file: not loaded, check that this was intended.")
    if(ratings_infos):
        ratings_infos_cb = pd.read_pickle(PATH_TO_RATINGS_INFOS_CB_FILE)
        print("Ratings infos file: loaded successfully!")
    else:
        print("Ratings infos file: not loaded, check that this was intended.")
    if(relevance_grouped):
        tag_relevance_grouped_cb = pd.read_pickle(PATH_TO_TAG_RELEVANCE_GROUPED_CB_FILE)
        print("Relevance grouped file: loaded successfully!")
    else:
        print("Relevance grouped file: not loaded, check that this was intended.")
    if(relevance):
        tag_relevance_cb = pd.read_pickle(PATH_TO_TAG_RELEVANCE_CB_FILE)
        print("Relevance file: loaded successfully!")
    else:
        print("Relevance file: not loaded, check that this was intended.")
    if(tags_processed):
        tags_processed_cb = pd.read_pickle(PATH_TO_TAGS_PROCESSED_CB_FILE)
        print("Tags processed file: loaded successfully!")
    else:
        print("Tags processed file: not loaded, check that this was intended.")
return data_cb, movies_cb, ratings_cb, ratings_infos_cb, tag_relevance_grouped_cb, tag_relevance_cb, tags_processed_cb
data_cb, movies_cb, ratings_cb, ratings_infos_cb, tag_relevance_grouped_cb, tag_relevance_cb, tags_processed_cb = load_cb_files(full=False, movies=False, ratings=True, ratings_infos=True, tags_processed=False)
# ### 1.1 - Problems with Collaborative Filtering:
# <ul>
# <li>Sparsity</li>
# <li>Cold start</li>
# </ul>
#
# #### Possible techniques:
# <ul>
# <li><b>Non-probabilistic algorithms:</b></li>
# <li>User-based nearest neighbor</li>
# <li>Item-based nearest neighbor</li>
# <li>Reducing dimensionality</li>
# </ul>
#
# <ul>
# <li><b>Probabilistic algorithms:</b></li>
# <li>Bayesian-network model</li>
# <li>Expectation-maximization</li>
# </ul>
#
#
# See: https://pub.towardsai.net/recommendation-system-in-depth-tutorial-with-python-for-netflix-using-collaborative-filtering-533ff8a0e444
# ### 1.2 - Building a sparse matrix structure from the dataframe
# Using the dataframe to create a sparse matrix that contains all the movies
def create_sparse_matrix(df):
sparse_matrix = sparse.csr_matrix((df["rating"], (df["userId"], df["movieId"])))
return sparse_matrix
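# A small self-contained illustration of the ``csr_matrix((data, (row, col)))`` constructor used above (the toy ratings below are made up): the matrix shape is inferred from the largest user and movie ids, and absent (user, movie) pairs are implicit zeros.

```python
import pandas as pd
from scipy import sparse

toy = pd.DataFrame({"userId": [0, 0, 1], "movieId": [1, 3, 2], "rating": [4.0, 5.0, 3.0]})
m = sparse.csr_matrix((toy["rating"], (toy["userId"], toy["movieId"])))

print(m.shape)   # (2, 4), inferred from the max indices
print(m[0, 3])   # 5.0
print(m[1, 0])   # 0.0, unrated pairs are implicit zeros
```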
ratings_infos_cb
ratings_cb
# #### Grouping by user, so we know how many movies each user rated
# +
user_count_rating = pd.DataFrame(ratings_cb.groupby("userId").count()["rating"])
user_count_rating.rename(columns = {'rating':'rating_count'}, inplace = True)
user_count_rating
# -
# #### Showing Ratings Info CB
movie_count_rating = ratings_infos_cb.copy()
movie_count_rating.rename(columns = {'rating count':'rating_count'}, inplace = True)
movie_count_rating
# #### Visualizing the number of ratings per movie
# +
movie_count_rating_graph = movie_count_rating.copy()
movie_count_rating_graph.drop('weighted rating', axis=1, inplace=True)
movie_count_rating_graph.drop('average rating', axis=1, inplace=True)
movie_count_rating_graph.drop('movieId', axis=1, inplace=True)
ax = movie_count_rating_graph.sort_values('rating_count', ascending=False).reset_index(drop=True).plot(figsize=(16,8), title='Rating Frequency of All Movies', fontsize=12)
ax.set_xlabel('Movie ID')
ax.set_ylabel('Number of Ratings')
# -
# ### 1.3 - Creating a simple item-based collaborative-filtering model with KNN
# #### Removing "inactive" users
# +
# ACTIVE USERS = users with a minimum number of ratings
# Since the data is very sparse, we filter it down, keeping only the users with at least 50 ratings.
users_ratings_threshold = 50
active_users = list(set(user_count_rating.query("rating_count >= @users_ratings_threshold").index))
print('Shape of original ratings data: ', user_count_rating.shape)
user_count_rating_drop = user_count_rating[user_count_rating.index.isin(active_users)]
print('Shape of ratings data after dropping inactive users: ', user_count_rating_drop.shape)
user_count_rating_drop
# -
# #### Removing "inactive" movies
# +
# ACTIVE MOVIES = movies with a minimum number of ratings
# Since the data is very sparse, we filter it down, keeping only the movies with at least 20 ratings.
movies_ratings_threshold = 20
active_movies = list(set(movie_count_rating.query("rating_count >= @movies_ratings_threshold").movieId))
print('Shape of original ratings data: ', movie_count_rating.shape)
movie_count_rating_drop = movie_count_rating[movie_count_rating.movieId.isin(active_movies)]
print('Shape of ratings data after dropping inactive movies: ', movie_count_rating_drop.shape)
movie_count_rating_drop
# +
# Keeping only the rows of data_cf that correspond to active movies and active users
data_cf_active = data_cf[data_cf.movieId.isin(active_movies)]
data_cf_active = data_cf_active[data_cf_active.userId.isin(active_users)]
# Creating the movie_user_matrix (movies as rows, users as columns)
movie_user_matrix = create_sparse_matrix(data_cf_active).transpose()
movie_user_matrix = movie_user_matrix.tocsr()
# -
print("Number of ratings among the selected movies: ", data_cf_active.shape[0])
print("Total number of ratings: ", data_cf.shape[0])
# Creating the KNN model
knn_cf = NearestNeighbors(n_neighbors=N_NEIGHBORS, algorithm='auto', metric='euclidean')  # parameters may need tuning later
# Fitting it to the data
knn_cf.fit(movie_user_matrix)
print(movie_user_matrix)
# Function that generates recommendations based on a movie, using a KNN model
def get_recommendations_cf(movie_name, model, data, printable=True):  # movie name, model
    # Getting the Id of the movie with the given title
    movieId = data.loc[data["title"] == movie_name]["movieId"].values[0]
    distances, suggestions = model.kneighbors(movie_user_matrix.getrow(movieId).todense().tolist(), n_neighbors=N_NEIGHBORS)
    if(printable):
        for i in range(0, len(distances.flatten())):
            if(i == 0):
                print('Recommendations for {0} (ID: {1}): \n '.format(movie_name, movieId))
            else:
                # if fewer than N_NEIGHBORS recommendations were generated, show only those available
                if(np.size(data.loc[data["movieId"] == suggestions.flatten()[i]]["title"].values) > 0 and np.size(data.loc[data["movieId"] == suggestions.flatten()[i]]["movieId"].values[0]) > 0):
                    print('{0}: {1} (ID: {2}), at distance {3}: '.format(i, data.loc[data["movieId"] == suggestions.flatten()[i]]["title"].values[0], data.loc[data["movieId"] == suggestions.flatten()[i]]["movieId"].values[0], distances.flatten()[i]))
    return distances, suggestions
# Function to search for the exact movie title
def search_movies(search_word, data):
    return data[data.title.str.contains(search_word, flags=re.IGNORECASE)]
    #return movies_cf[movies_cf.movieId == 3561]
# Widening the column display so full movie titles are visible
pd.set_option('display.max_colwidth', 500)
# Searching movies
search_movies("Klaus", data=data_cf_active).tail(20)
# # TEMPORARY MOVIE-SEARCH HELPER
movie_count_rating_drop[movie_count_rating_drop["average rating"] >= 3.5].sort_values(by="weighted rating", ascending=True)
data_cf_active[data_cf_active.movieId == 4262]
movie_count_rating_drop[movie_count_rating_drop["rating_count"] <= 15000].sort_values(by="rating_count", ascending=True)
data_cf_active.nunique()
# # END OF TEMPORARY HELPER
# Restoring the column width to the default
pd.set_option('display.max_colwidth', 50)
movieName = "Scarface"
# Getting recommendations
a, b = get_recommendations_cf(movieName, knn_cf, data_cf_active)
# ## 1.4 - Using UMAP to display the groupings
# +
import matplotlib.pyplot as plt
import matplotlib.lines as mlines  # used below to build the custom legend
import seaborn as sns
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# %matplotlib inline
import umap
import umap.plot
# -
# Taking a sample
N = 2000  # number of MOVIES in the sample
sample_indexes = np.random.choice(np.arange(movie_user_matrix.shape[0]), N, replace=False)
# Getting the sample
sample = movie_user_matrix[sample_indexes]
# +
# in case you want to reset the variable
# #%reset_selective trans
# #%reset
# -
# %%time
trans = umap.UMAP(n_neighbors=N_NEIGHBORS, random_state=64, metric='euclidean').fit(movie_user_matrix)
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height: 80em; }</style>"))
# +
def plotKNN():
    # getting the list of recommendations
    recommendations_ids = b.flatten()
    averageRating = movie_count_rating_drop[movie_count_rating_drop['movieId'] == recommendations_ids[0]]['average rating']
    numRatings = movie_count_rating_drop[movie_count_rating_drop['movieId'] == recommendations_ids[0]]['rating_count']
    fig, ax = plt.subplots(figsize=(14, 10))
    # BASE PLOT AREA
    # plotting every movie
    ax.scatter(trans.embedding_[:, 0], trans.embedding_[:, 1], s=5, facecolors='black', cmap='Spectral', alpha=0.15, linewidths=1)
    # plotting the movie requested by the user
    ax.scatter(trans.embedding_[:, 0][recommendations_ids[0]], trans.embedding_[:, 1][recommendations_ids[0]], s=5, c='blue', cmap='Spectral', alpha=0.7)
    # plotting the recommended movies
    ax.scatter(trans.embedding_[:, 0][recommendations_ids[1:]], trans.embedding_[:, 1][recommendations_ids[1:]], s=5, c='red', cmap='Spectral', alpha=0.7)
    ax.set(title='KNN - Recommendations for ' + movieName + ' - Number of Ratings: ' + str(numRatings.values[0]) + ' - Average Rating: ' + str(averageRating.values[0]))
    # ax.set_xlim(2.1962, 12)
    # ax.set_ylim(2.1962, 12)
    ax.set_xlim(0, 20)
    ax.set_ylim(-10, 15)
    # if the embedded points are all NaN, these recommendations cannot be shown graphically
    if(np.isnan(trans.embedding_[:, 0][recommendations_ids]).all() and np.isnan(trans.embedding_[:, 1][recommendations_ids]).all()):
        print("Could not generate the plot for the recommendations of {0}; please try another movie.\n".format(movieName))
        return
    # building the legend
    colors_list = ['blue', 'red', 'green', 'yellow', 'magenta', 'brown', 'orange', 'black', 'indigo', 'chocolate', 'turquoise']
    legend_list = []
    # the searched movie
    filme_pesquisado = mlines.Line2D([], [], linestyle='None', color='blue', marker="*", markersize=15,
                                     label=list(data_cf_active[data_cf_active.movieId == recommendations_ids[0]].title)[0])
    legend_list.append(filme_pesquisado)
    for i in range(1, 11):
        filme_recomendado = mlines.Line2D([], [], linestyle='None', color=colors_list[i], marker=".", markersize=13,
                                          label=list(data_cf_active[data_cf_active.movieId == recommendations_ids[i]].title)[0])
        legend_list.append(filme_recomendado)
    ax.legend(handles=legend_list)
    #================================================================================================================
    # ZOOM AREA
    axins = zoomed_inset_axes(ax, 2, loc=3)  # zoom = 2
    #axins.set(title='Recommendations for ' + movieName)
    # plotting every movie
    axins.scatter(trans.embedding_[:, 0], trans.embedding_[:, 1], s=3, facecolors='grey', cmap='Spectral', alpha=0.6, linewidths=0.7)
    # plotting the movie requested by the user
    axins.scatter(trans.embedding_[:, 0][recommendations_ids[0]], trans.embedding_[:, 1][recommendations_ids[0]], s=20, c='blue', cmap='Spectral', alpha=1, marker="*")
    # plotting all recommended movies - the line below can be commented out
    axins.scatter(trans.embedding_[:, 0][recommendations_ids[1:]], trans.embedding_[:, 1][recommendations_ids[1:]], s=5, c='red', cmap='Spectral', alpha=1)
    # plotting movies 1 through 10 in their legend colors
    for i in range(1, 11):
        axins.scatter(trans.embedding_[:, 0][recommendations_ids[i]], trans.embedding_[:, 1][recommendations_ids[i]], s=5, c=colors_list[i], cmap='Spectral', alpha=1)
    # setting the zoom limits - min and max of each axis plus an offset
    offset = 0.2
    axins.set_xlim(np.nanmin(trans.embedding_[:, 0][recommendations_ids]) - offset, np.nanmax(trans.embedding_[:, 0][recommendations_ids]) + offset)
    axins.set_ylim(np.nanmin(trans.embedding_[:, 1][recommendations_ids]) - offset, np.nanmax(trans.embedding_[:, 1][recommendations_ids]) + offset)
    plt.xticks(visible=False)  # hide ticks
    plt.yticks(visible=False)
    #
    ## draw a bbox of the inset region in the parent axes and
    ## connecting lines between the bbox and the inset axes area
    mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.3")
    plt.draw()
    plt.show()
plotKNN()
# -
count1 = 0
for x, y in trans.embedding_:
if(x >= 2.1962 or y >= 2.1962):
count1 += 1
print(count1)
count2 = 0
for x, y in trans.embedding_:
if(x < 2.1962 or y < 2.1962 ):
count2 += 1
print(count2)
data_cf_active.nunique()
# +
print(trans.embedding_.shape)
trans.graph_
# -
movie_user_matrix
np.where(umap.utils.disconnected_vertices(trans) == True)
data_cf_active.shape
# +
print(b.flatten())
print(movie_user_matrix[593])
data_cf_active[data_cf_active.movieId == 593].sort_values(by="userId", ascending=True)
# -
# # TODO: compare plots made with data_cf versus data_cf_active
| Graphic Exibition/KNN - CF/KNN - CF - umap - Euclidean.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # M11. Binary Representation
# I've so far deferred any exposition of the binary system, mostly for reasons which come down to expedience. There is nothing remarkable about positional notation, much less its specific manifestation in base 2 (which is all binary really is).
#
# It's only on two counts that I'll qualify the treatment that was originally given in ECE 112.
#
# First, the binary representation of an integer is merely a *representation*, a mapping from the integers to the set of bit vectors. Even constraining these mappings to bijections, we are left with $n!$ distinct mappings between any two sets of cardinality $n$, this latter point being illustrated by a simple permutation argument. On an $n$-bit architecture with $2^n$ possible states, we could actually maintain $(2^n)!$ different schemes of representing any range of integers of size $2^n$. On a 2-bit system, for instance, I could map
# $$0 \mapsto 01$$
# $$1 \mapsto 11$$
# $$2 \mapsto 00$$
# $$3 \mapsto 10$$
#
# and these would constitute a perfectly valid representation of the integers $\{0,1,2,3\}$. For that matter, so too would
#
# $$0 \mapsto 10$$
# $$1 \mapsto 01$$
# $$2 \mapsto 00$$
# $$3 \mapsto 11$$
#
# and 22 other bijections. Of these, we prefer the one corresponding to what is called the *binary system*
#
# $$0 \mapsto 00$$
# $$1 \mapsto 01$$
# $$2 \mapsto 10$$
# $$3 \mapsto 11$$
#
# for the reason that it is amenable to extant systems of arithmetic for positional notation (as opposed to non-positional notation: for example, the Roman numerals that were used—probably to deleterious effect—by the Latinate peoples).
#
# To be absolutely clear, we *could* implement architectures with integer representations in bases other than two. The Soviets built a *ternary* computer, the Setun, in the late 1950s (with some notable success). Babbage's *difference engines*, designed in the early 19th century, were decimal machines. Here, too, we have a means of discriminating between different options.
#
# Those of you who took Professor <NAME>'s ECE 101 might remember him having mentioned (in passing) that the 'best' base is actually $e$. This choice is indeed optimal, subject to a metric known as the *radix economy*. The argument is easily sketched as follows:
#
# The radix economy of an integer $n$ with respect to a radix (base) $r$ is defined as the number of symbols required to express $n$ in base $r$, counterweighted by $r$ itself:
#
# $$E(n,r) = r\log_r n.$$
#
# Thus, the radix economy punishes bases that are too large ($r \to \infty$) or too small ($\log_r n \to \infty$). Of course, for a fixed $n$ we can solve
#
# $$\hat{r} = \operatorname*{argmin}_r E(n,r) = \operatorname*{argmin}_r r\log_r n = \operatorname*{argmin}_r \frac{r}{\ln r}.$$
#
# Now notice
#
# \begin{align}
# & \frac{d}{dr}\frac{r}{\ln r} = \frac{-1}{(\ln r)^2} + \frac{1}{\ln r} = \frac{\ln r -1}{(\ln r)^2} = 0\\
# &\Rightarrow \ln r - 1 = 0 \\
# &\Rightarrow \hat{r} = e
# \end{align}
#
# which is the desired result. Since the notion of a non-natural (in fact, transcendental) base is somewhat troublesome, we invoke one of the corollaries of the Fundamental Theorem of Engineering,
#
# $$\sqrt{g} = \pi = e = 3,$$
#
# and pronounce that 3 is a 'close-enough' optimum. In practice, however, we still opt for base 2 owing to other considerations I will not be discussing in this memo. The point I'd like to illustrate is that the binary base is a *choice*, as is the decision to use positional notation.
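A quick numerical check of the base-dependent factor $r/\ln r$ bears this out (a throwaway sketch, not part of the course material):

```python
import math

# r/ln(r) is the quantity the radix economy minimizes over r, for fixed n
for r in (2, math.e, 3, 10):
    print(f"r = {r:6.3f}:  r/ln(r) = {r / math.log(r):.4f}")
# e attains the minimum; 3 is a close second, and 2 is only slightly worse
```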
#
# My second point concerns arithmetic. Notwithstanding the limits of space and memory, there are some things that we fundamentally cannot do with the integers $\mathbb{Z}$. In particular, the ring $\langle \mathbb{Z}, +, \times \rangle$ is not a *field*. The object does not contain a multiplicative inverse, with the consequence that we cannot find general solutions to equations even of the linear form
#
# $$ax = b.$$
#
# Division, correspondingly, is not tractable in $\mathbb{Z}$:
# +
import numpy as np
print(np.int8(np.int8(5)/np.int8(3)))  # 5/3 has no solution in the integers; the quotient is truncated to 1
# -
# This is not the failing of any particular discipline of engineering; it is a limitation of an otherwise 'nice' mathematical object. The ring $\langle \mathbb{Z}, +, \times \rangle$ is not a field, but rather, an *integral domain*—within it, among other things, we can add, subtract, and multiply with familiar impunity.
#
# When we want to divide, we indulge in the rational numbers $\mathbb{Q}$ (which *are* a field). When we want to express the diagonal of a unit square ($\sqrt{2}$) we further indulge in the reals $\mathbb{R}$. In turn, when we want to find solutions to an equation of the form $x^2 = -1$, we refer to the complex numbers $\mathbb{C}$. Thus may the orthodox mathematical curriculum be traced, starting from the 'pre-school' numbers $\mathbb{N}$. Right now, we concern ourselves with the representation of the 'elementary-school' numbers $\mathbb{Z}$, otherwise known as the integers.
#
# Computers have finite states, so we immediately constrain ourselves to 'modeling' $\mathbb{Z}$ with a finite ring. The most natural way to do this is to construct the ring of the integers modulo $n$: $\langle \mathbb{Z}_n, +_n, \times_n \rangle$. This is also known as 'clock arithmetic'. Suppose $n = 12$:
#
# $$1 + 2 = 3 \pmod{12} = 3$$
# $$5+8 = 13 \pmod{12}= 1$$
# $$3 \times 5 = 15 \pmod{12}= 3$$
# $$7 \times 1 = 7 \pmod{12}= 7.$$
#
# This preserves most of the behavior of $\mathbb{Z}$—in fact, if $n=p$ for some prime $p$, we obtain a *finite field*, but we do not need to go this far (if you find this sort of thing interesting, you may want to look into taking MTH 236). On a binary computer, $n$ is some power of $2$. NumPy allows us to explicitly work with unsigned 8-bit integers ($n = 2^8 = 256$), among others:
#
myEightBitInteger = np.uint8(23)
print(myEightBitInteger)
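Arithmetic on these types is exactly the clock arithmetic described above, with $n = 256$: 'overflow' is simply reduction modulo $n$. (Arrays are used here because NumPy wraps array operations silently, whereas recent versions may warn on scalar overflow.)

```python
import numpy as np

a = np.array([250], dtype=np.uint8)
b = np.array([10], dtype=np.uint8)
print(a + b)  # [4],   since (250 + 10) mod 256 = 4
print(a * b)  # [196], since 2500 mod 256 = 196
```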
# Rather circuitously, we've arrived at the conclusion to my second point: there is a strong argument *against* stating that 'overflow', at least for unsigned integers, is an exception. It is only an exception if it was unintended by the programmer, which suggests that it was really an exception *in the behavior of that programmer*.
#
# The C standard seems to recognize that what is usually described as 'overflow' is actually the perfect result of modular arithmetic, even preceding any idiosyncrasies in adder design or operation. Of course, overflow *detection* at runtime is important, which is why any half-decent adder will make known the carry status of its most significant bit (we didn't do this when we constructed our 32-bit adder).
| M11_Binary_Representation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import math
# First, some parameters. We will play with them below; these are defaults.
# +
SOUND_PARAM_TOTAL = 80 # chance of not catching cheating prover < 1/2^80
SOUND_PARAM = SOUND_PARAM_TOTAL + 2 # individually, chance of not catching cheating prover for each of (1) MT
# shuffling, (2) AM shuffling, and (3) IKOS portion is < 1/2^82.
# Union bound of these three phases is below total soundness error bound.
HASH_BITS = 256 # output size of SHA-256 hash
COMM_RAND = 128 # size of input randomness to SHA-256 commitments = C = H(R||M)
# Num parties, and num revealed
NUM_PARTIES = 2
PARTIES_REVEALED = NUM_PARTIES-1 # TODO: For now, assuming open P-1. For a protocol robust against r<(P-1) parties, this changes
# Optionally, randomness may be generated from a seed.
RAND_FROM_SEED = True # if True, Prover used one seed per party to generate randomness
SEED_SIZE = 128
# Field size
FIELD_BITS = 128
FIELD_SIZE = 2 ** FIELD_BITS
# Shortcuts
sigma = SOUND_PARAM
H = HASH_BITS
C = COMM_RAND
P = NUM_PARTIES
F = FIELD_BITS
PR = PARTIES_REVEALED
PNR = P - PR # parties not revealed
# -
# Compute the number of IKOS repetitions required to achieve the desired soundness
def get_num_ikos_reps(sigma, P, PNR):
return math.ceil( sigma / math.log2(P/PNR) )
num_ikos_reps = get_num_ikos_reps(sigma, P, PNR)
R = num_ikos_reps # for 2 parties, should equal sigma
print(R)
# Determine the size required for decommitting to each test. Use this to determine the size required for decommitting to each ASWI gate.
#
# Multiplication Triple Test (MT): 80 B
#
# Additive-Multiple Tuple Test (AM): 160 B (includes the MT called within)
#
# ASWI gate: 288 B (includes both tests called within)
# +
# Each MT: 3 shares (input) per revealed party + 2 openings (+1CZ)
def get_per_MT_test_decom_size_norand(RAND_FROM_SEED, F, PR, PNR):
return ((2*PNR))*F if RAND_FROM_SEED else (((3*PR) + (2*PNR)) * F)
per_MT_test_decom_size_norand = get_per_MT_test_decom_size_norand(RAND_FROM_SEED, F, PR, PNR)
per_MT_test_ZC = 1 # 1 additional thing that should be 0
print("MT test bytes (no comm rand): ", per_MT_test_decom_size_norand/8)
# Each AM (includes the additionally-induced MT)
#per_AM_test_decom_size_incl = (2*F*PR) + (4 * (P-1) + 1) * AMS * F
# Each AM: 2 shares per opened party + 1 elem + 2 openings + 1 MT test
def get_per_AM_test_decom_size_incl_norand(RAND_FROM_SEED, F, PR, PNR):
return ((PR + (2*PNR))*F + get_per_MT_test_decom_size_norand(RAND_FROM_SEED, F, PR, PNR)) if RAND_FROM_SEED \
else (((2*PR) + PR + (2*PNR))*F + \
get_per_MT_test_decom_size_norand(RAND_FROM_SEED, F, PR, PNR))
per_AM_test_decom_size_incl_norand = get_per_AM_test_decom_size_incl_norand(RAND_FROM_SEED, F, PR, PNR)
per_AM_test_ZC_incl = per_MT_test_ZC
print("AM test bytes (no comm rand): ", per_AM_test_decom_size_incl_norand/8)
# +
# Main view additions for 1 ASWI:
# - 1 elem
# Test view additions for 1 ASWI:
# - 2 elem
# - 1 CZ
# - 1 AM test
# - 1 AM tuple
# - 1 elem
# - 2 openings
# - 1 MT test
# - 1 MT tuple
# - 2 openings
# - 1 CZ
# - 1 MT test
# - 1 MT tuple
# - 2 openings
# - 1 CZ
# Randomness for 1 ASWI:
# - 1x2 AM tuple
# - 2x3 MT tuple
# - 3x zero-check
# We have already paid for opening the PR shares of the tuples themselves,
# but we need to know how much we must do additionally to validate those tuples
per_switch_MT = 2 # 1 used in AM-test
per_switch_AM = 1
def get_per_switch_main_decom_size(F):
return F
per_switch_main_decom_size = get_per_switch_main_decom_size(F)
def get_per_switch_test_decom_size_incl(RAND_FROM_SEED, F, PR, PNR):
return 2*PR*F + get_per_AM_test_decom_size_incl_norand(RAND_FROM_SEED, F, PR, PNR) + \
get_per_MT_test_decom_size_norand(RAND_FROM_SEED, F, PR, PNR)
per_switch_test_decom_size_incl = get_per_switch_test_decom_size_incl(RAND_FROM_SEED, F, PR, PNR)
print(per_switch_test_decom_size_incl / 8)
per_switch_ZC_incl = 1 + per_AM_test_ZC_incl + per_MT_test_ZC
# +
# Pick some number of switches to test on; this will be varied later.
NUM_ASWI = 5
S = NUM_ASWI
# Choose number of variables in the polynomial
VARS = 1
# -
# Now we need to figure out how big the bucketing phase needs to be to achieve the desired soundness error, separately, for each of the AM tuples and the MT tuples.
#
# We first do the AM tuples. Unfortunately, for **each AM tuple test**, we need to consume a (good!) MT tuple. So the number of AM tuple tests gets added right on to the number of **desired** MT tuples.
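The bucket-size computation used below can be sketched in isolation: with $N$ tuples in buckets of size $B$ over a field of size $|F|$, the cheating probability is bounded by $N \cdot |F|^{-(B-1)}$, so we want the smallest $B$ with $N \cdot |F|^{-(B-1)} \le 2^{-\sigma}$. The numbers in this standalone sketch are illustrative (matching the defaults above):

```python
import math

def bucket_size(sigma, n_tuples, field_bits):
    # smallest B with n_tuples * 2**(-field_bits*(B-1)) <= 2**(-sigma)
    return math.ceil(1 + (sigma + math.log2(n_tuples)) / field_bits)

# e.g. sigma = 82, 410 AM tuples (5 switches x 82 repetitions), 128-bit field
print(bucket_size(82, 410, 128))  # 2: one extra tuple per bucket suffices
print(bucket_size(82, 410, 64))   # 3: a smaller field needs deeper buckets
```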
# +
# B for "bucket size"
# For a single test, soundness error = (1/F)^B.
# Let N = number of tuples needed for ALL repetitions
# Need: Chance of succeeding at *any* set of N sets of *all* (B-1) tests to be < (1/2)^sigma
def get_N_AM(R, NUM_ASWI):
return NUM_ASWI * per_switch_AM * R
N_AM = get_N_AM(R, NUM_ASWI)
def get_BUCKET_SIZE_AM(sigma, R, NUM_ASWI, FIELD_SIZE):
return math.ceil(1 + ( (sigma + math.log2(get_N_AM(R, NUM_ASWI))) / math.log2(FIELD_SIZE) ))
B_AM = get_BUCKET_SIZE_AM(sigma, R, NUM_ASWI, FIELD_SIZE)
# N_MT_incl includes both the MT needed for verifying the circuit/tests itself, plus the MT needed
# to run the bucketing verification for N_AM AM-tuples.
# N_MT_excl includes the MT needed for verifying the circuit/tests only, not including the MT needed
# to run the bucketing verification for N_AM AM-tuples.
def get_N_MT_excl(R, NUM_ASWI):
return NUM_ASWI * per_switch_MT * R
N_MT_excl = get_N_MT_excl(R, NUM_ASWI)
def get_N_MT_incl(sigma, R, NUM_ASWI, FIELD_SIZE):
return get_N_MT_excl(R, NUM_ASWI) + (get_BUCKET_SIZE_AM(sigma, R, NUM_ASWI, FIELD_SIZE) - 1)*get_N_AM(R, NUM_ASWI)
N_MT_incl = get_N_MT_incl(sigma, R, NUM_ASWI, FIELD_SIZE)
def get_BUCKET_SIZE_MT_incl(sigma, R, NUM_ASWI, FIELD_SIZE):
return math.ceil(1 + ( (sigma + math.log2(get_N_MT_incl(sigma, R, NUM_ASWI, FIELD_SIZE))) / math.log2(FIELD_SIZE) ))
B_MT_incl = get_BUCKET_SIZE_MT_incl(sigma, R, NUM_ASWI, FIELD_SIZE)
print("N_AM:",N_AM)
print("B_AM:",B_AM)
print("N_MT_excl:",N_MT_excl)
print("N_MT_incl:",N_MT_incl)
print("B_MT_incl:",B_MT_incl)
# -
# As part of the decommitment, we also need to write the commitment randomness. For BN, we can make one commitment per party per repetition.
#
# If RAND_FROM_SEED is used, we must also account for the cost of releasing the seed (since we did not include this in per_MT_test_decom_size_norand and per_AM_test_decom_size_norand)
# +
def get_decom_rand(COMM_RAND, R, PR):
return COMM_RAND * R * PR
decom_rand = get_decom_rand(COMM_RAND, R, PR)
def get_seed_cost(RAND_FROM_SEED, SEED_SIZE, R, PR):
return SEED_SIZE * R * PR if RAND_FROM_SEED else 0
seed_cost = get_seed_cost(RAND_FROM_SEED, SEED_SIZE, R, PR)
def get_rand_verif_decom_size(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI):
pnr = P - PR
r = get_num_ikos_reps(sigma, P, pnr)
n_mt_incl = get_N_MT_incl(sigma, r, NUM_ASWI, 2**F)
b_mt_incl = get_BUCKET_SIZE_MT_incl(sigma, r, NUM_ASWI, 2**F)
n_am = get_N_AM(r, NUM_ASWI)
    b_am = get_BUCKET_SIZE_AM(sigma, r, NUM_ASWI, 2**F)
# MT: cost of tests * number tests done to verify
contrib_from_MT_norand = get_per_MT_test_decom_size_norand(RAND_FROM_SEED, F, PR, pnr) * n_mt_incl * (b_mt_incl-1)
# AM: cost of tests * number tests done to verify
contrib_from_AM_norand = get_per_AM_test_decom_size_incl_norand(RAND_FROM_SEED, F, PR, pnr) * n_am * (b_am-1)
# Contribution from decommitment randomness + seeds
contrib_from_decom_rand = get_decom_rand(COMM_RAND, r, PR)
contrib_from_seed_cost = get_seed_cost(RAND_FROM_SEED, SEED_SIZE, r, PR)
#print(contrib_from_MT_norand)
#print(contrib_from_AM_norand)
#print(contrib_from_decom_rand)
#print(contrib_from_seed_cost)
#print(contrib_from_MT_norand + contrib_from_AM_norand + contrib_from_decom_rand + contrib_from_seed_cost)
return contrib_from_MT_norand + contrib_from_AM_norand + contrib_from_decom_rand + contrib_from_seed_cost
rand_verif_decom_size = get_rand_verif_decom_size(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI)
print(rand_verif_decom_size/ 8)
# -
# So now we can finally compute how large the proof is.
# +
# Round 1: {{C(view) || SEED }_p}_r
def get_round1(HASH_BITS, sigma, P, PR):
return HASH_BITS * P * get_num_ikos_reps(sigma, P, P-PR)
round1 = get_round1(HASH_BITS, sigma, P, PR)
round2 = HASH_BITS # Round 2: Epsilons for adjusting randomness
# Round 3: Send {{C(testviews)}_p}_r
def get_round3(HASH_BITS, sigma, P, PR):
return HASH_BITS * P * get_num_ikos_reps(sigma, P, P-PR)
round3 = get_round3(HASH_BITS, sigma, P, PR)
round4 = HASH_BITS # Round 4: Party choices, and nonzero LC coefficients
# Round 5: Send {{D(view)}_j}_r, {{D(testview)}_j}_r, checkzero
def get_round5(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI):
return get_num_ikos_reps(sigma, P, P-PR) * PR * (get_per_switch_main_decom_size(F) +
get_per_switch_test_decom_size_incl(RAND_FROM_SEED, F, PR, P-PR)) + \
get_rand_verif_decom_size(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI) + \
(VARS * PR * get_num_ikos_reps(sigma, P, P-PR) * F) + F
round5 = get_round5(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI)
#round5 = R * PR * (per_switch_main_decom_size + per_switch_test_decom_size_incl) + \
# rand_verif_decom_size + (VARS * PR * R * F) + F
#print((get_per_switch_main_decom_size(F) + get_per_switch_test_decom_size_incl(RAND_FROM_SEED, F, PR, P-PR) +
# get_rand_verif_decom_size(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI)))
#print()
def get_proof_size(HASH_BITS, COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI):
return get_round1(HASH_BITS, sigma, P, PR) + get_round3(HASH_BITS, sigma, P, PR) + \
get_round5(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI)
proof_size = get_proof_size(HASH_BITS, COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, P, PR, NUM_ASWI)
print("-----------")
print("R1:", round1 / 8)
print("R2:", round2 / 8)
print("R3:",round3 / 8)
print("R4:",round4 / 8)
print("R5:",round5 / 8)
print("Total (1+3+5): ", proof_size/8)
# +
# %matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
hbs = [256]
crs = [128]
sss = [80, 128]
rfs = [True, False]
fs = range(2,256,1)
soundnesses = [40, 60, 80, 128]
sigmas = [s + 2 for s in soundnesses]  # individual sigma is 2 + the total soundness parameter (union bound over the three phases)
ps = range(2,100,1)
prs = [1, 2, 4, 9, 14, 19, 99]
naswis = range(1,50,1)
# -
# In the following, we show proof size versus field size, for seed randomness on and off.
#
# The "jumps" below represent going from a higher bucket size B to a lower bucket size for randomness verification. The best field size for 80 bits of security is 93, which just barely gets us to B=2, but has the smallest element description.
#
# Using a per-party-per-iteration seed in place of fully random elements saves us quite a lot of space; the size of the seed is relatively unimportant.
# +
# PLOT PROOF SIZE VERSUS FIELD SIZE
B = 8
KiB = 1024*B
P=3
PR=2
x = np.array([fs])
yseedsmall = np.array([get_proof_size(HASH_BITS, COMM_RAND, True, sss[0],
f, sigma, P, PR, NUM_ASWI) / KiB for f in fs])
yseedbig = np.array([get_proof_size(HASH_BITS, COMM_RAND, True, sss[1],
f, sigma, P, PR, NUM_ASWI) / KiB for f in fs])
ynoseed = np.array([get_proof_size(HASH_BITS, COMM_RAND, False, 0,
f, sigma, P, PR, NUM_ASWI) / KiB for f in fs])
print("Best 80b seed proof size (KB): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, sss[0],
f, sigma, P, PR, NUM_ASWI) / KiB, f) for f in fs]))
print("Best 128b seed proof size (KB): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, sss[1],
f, sigma, P, PR, NUM_ASWI) / KiB, f) for f in fs]))
print("Best non-seed (full) proof size (KB): ", min([(get_proof_size(HASH_BITS, COMM_RAND, False, 0,
f, sigma, P, PR, NUM_ASWI) / KiB, f) for f in fs]))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x, yseedsmall, label="small seed rand")
ax.scatter(x, yseedbig, label="large seed rand")
ax.scatter(x, ynoseed, label="full rand")
plt.title("Proof size (KB) versus Field Size (b) (80b security), 3 party")
ax.legend()
plt.show()
# +
# PLOT PROOF SIZE VERSUS FIELD SIZE
B = 8
KiB = 1024*B
P=2
PR=1
x = np.array([fs])
yseedsmall = np.array([get_proof_size(HASH_BITS, COMM_RAND, True, sss[0],
f, sigma, P, PR, NUM_ASWI) / KiB for f in fs])
yseedbig = np.array([get_proof_size(HASH_BITS, COMM_RAND, True, sss[1],
f, sigma, P, PR, NUM_ASWI) / KiB for f in fs])
ynoseed = np.array([get_proof_size(HASH_BITS, COMM_RAND, False, 0,
f, sigma, P, PR, NUM_ASWI) / KiB for f in fs])
print("Best 80b seed proof size (KB): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, sss[0],
f, sigma, P, PR, NUM_ASWI) / KiB, f) for f in fs]))
print("Best 128b seed proof size (KB): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, sss[1],
f, sigma, P, PR, NUM_ASWI) / KiB, f) for f in fs]))
print("Best non-seed (full) proof size (KB): ", min([(get_proof_size(HASH_BITS, COMM_RAND, False, 0,
f, sigma, P, PR, NUM_ASWI) / KiB, f) for f in fs]))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x, yseedsmall, label="small seed rand")
ax.scatter(x, yseedbig, label="large seed rand")
ax.scatter(x, ynoseed, label="full rand")
plt.title("Proof size (KB) versus Field Size (b) (80b security), 2 party")
ax.legend()
plt.show()
# -
# Unsurprisingly, better soundness means bigger proof size.
# +
# PLOT PROOF SIZE VERSUS FIELD SIZE FOR DIFFERENT SIGMAS
B = 8
KiB = 1024*B
P=3
PR=2
x = np.array(fs)
ys = []
for s in sigmas:
ys.append(np.array([get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
f, s, P, PR, NUM_ASWI) / KiB for f in fs]))
print("Best proof size ((KB), F, sigma): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
f, s, P, PR, NUM_ASWI) / KiB, f, s-2) for f in fs]))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
for i in range(len(sigmas)):
ax.scatter(x, ys[i], label="sigma={}".format(sigmas[i]-2))
plt.title("Proof size (KB) versus Field Size (b)")
ax.legend()
plt.show()
# +
# Zoomed in version
B = 8
KiB = 1024*B
fs = fs[40:]
x = np.array(fs)
ys=[]
for s in sigmas:
ys.append(np.array([get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
f, s, P, PR, NUM_ASWI) / KiB for f in fs]))
print("Best proof size ((KB), F, sigma): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
f, s, P, PR, NUM_ASWI) / KiB, f, s-2) for f in fs]))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
for i in range(len(sigmas)):
ax.scatter(x, ys[i], label="sigma={}".format(sigmas[i]-2))
plt.title("Proof size (KB) versus Field Size (b)")
ax.legend()
plt.show()
# -
# The best number of parties (if the verifier reveals P-1 views) is 3; it barely beats 2.
#
# In retrospect, this makes sense; you need 52 reps for 3 parties to get to sigma=82 (80 soundness in the overall protocol), and 82 reps for 2 parties. And 3x52 is slightly less than 2x82.
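That back-of-the-envelope check can be reproduced directly: with $\sigma = 82$ and the verifier opening $P-1$ of $P$ views, a cheating prover survives each repetition with probability $1/P$.

```python
import math

def reps(sigma, parties):
    # repetitions needed when P-1 of P views are opened: per-rep escape probability 1/P
    return math.ceil(sigma / math.log2(parties))

print(reps(82, 2), reps(82, 3))          # 82 and 52 repetitions
print(2 * reps(82, 2), 3 * reps(82, 3))  # 164 vs 156 total views: 3 parties barely wins
```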
# +
# PROOF SIZE VERSUS PARTIES
B = 8
KiB = 1024*B
F = 92
sigma = 80+2
x = np.array(ps)
y = np.array([get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
F, sigma, p, p-1, NUM_ASWI) / KiB for p in ps])
print("Best proof size ((KB), parties, reveal p-1): ",min([(get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
F, sigma, p, p-1, NUM_ASWI) / KiB, p) for p in ps]))
for p in [2,3,4]:
print("p=",p,", total proof size: ", get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
F, sigma, p, p-1, NUM_ASWI) / KiB)
print(" p=",p,", IKOS reps:",get_num_ikos_reps(sigma,p,1))
print(" p=",p,", round 1:",get_round1(HASH_BITS, sigma, p, p-1)/B)
print(" p=",p,", round 3:",get_round3(HASH_BITS, sigma, p, p-1)/B)
print(" p=",p,", round 5:",get_round5(COMM_RAND, RAND_FROM_SEED, SEED_SIZE, F, sigma, p,p-1, NUM_ASWI)/B)
print("p=",p,",reveal 2, total proof size: ", get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
F, sigma, p, 2, NUM_ASWI) / KiB)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x, y)
plt.title("Proof size (KB) versus Parties (reveal p-1, F=92, soundness 80)")
#ax.legend()
plt.show()
# -
# Finally, proof size versus number of switches. There seem to be a few cutoff points, again likely related to bucketing. For our case, F=94 leads to 9 being a good number of switches.
# +
# PLOT PROOF SIZE VERSUS SWITCHES
B = 8
KiB = 1024*B
sigma = 80+2
P = 3
PR = 2
x = np.array(naswis)
bestF = 0
bestFSize = 10000000000
fs = [92, 93, 94, 95, 96, 128]
BEST_F = 94
ys = []
for f in fs:
ys.append(np.array([get_proof_size(HASH_BITS, COMM_RAND, True, SEED_SIZE,
f, sigma, P, PR, n) / KiB for n in naswis]))
for i in range(5):
print("f=",f,"| switches=",x[i], "| pf size (KB)=",ys[-1][i])
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
for i in range(len(fs)):
ax.scatter(x, ys[i], label="F={}".format(fs[i]))
plt.title("Proof size (KB) versus #Switches")
ax.legend()
plt.show()
# -
| proof_size_analysis_BN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ml-laboratory] *
# language: python
# name: conda-env-ml-laboratory-py
# ---
# +
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
img_width = X_train.shape[1]
img_height = X_train.shape[2]
# encode the output labels: one-hot encoding
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_train.shape[1]
labels = range(10)
# Normalize data since neural nets are not scale invariant
X_train = X_train.astype("float") / 255.
X_test = X_test.astype("float") / 255.
# prepare the model
model = Sequential()
# flatten the 28x28 image input into a 784x1 input vector
hidden_nodes = 30
model.add(Flatten(input_shape=(img_width, img_height)))
model.add(Dense(hidden_nodes, activation='relu'))
# Add Dropout layers between the dense layers to force the NN to learn
# different 'paths' to the solution, i.e., different
# ways to approximate the real function
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# -
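As a side note, `np_utils.to_categorical` used above simply builds one-hot row vectors from integer labels. A minimal numpy sketch of the same transformation:

```python
import numpy as np

def one_hot(labels, num_classes=None):
    """Map integer class labels to one-hot row vectors."""
    labels = np.asarray(labels)
    if num_classes is None:
        num_classes = labels.max() + 1
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

one_hot([0, 2, 1])  # rows [1,0,0], [0,0,1], [0,1,0]
```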
# Fit the model
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
| tensorflow/notebooks/keras/2_Multilayer-Perceptron-with-keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Eigendecompositions
# :label:`sec_eigendecompositions`
#
# Eigenvalues are often one of the most useful notions
# we will encounter when studying linear algebra;
# however, as a beginner, it is easy to overlook their importance.
# Below, we introduce eigendecomposition and
# try to convey some sense of just why it is so important.
#
# Suppose that we have a matrix $A$ with the following entries:
#
# $$
# \mathbf{A} = \begin{bmatrix}
# 2 & 0 \\
# 0 & -1
# \end{bmatrix}.
# $$
#
# If we apply $A$ to any vector $\mathbf{v} = [x, y]^\top$,
# we obtain a vector $\mathbf{A}\mathbf{v} = [2x, -y]^\top$.
# This has an intuitive interpretation:
# stretch the vector to be twice as wide in the $x$-direction,
# and then flip it in the $y$-direction.
#
# However, there are *some* vectors for which something remains unchanged.
# Namely $[1, 0]^\top$ gets sent to $[2, 0]^\top$
# and $[0, 1]^\top$ gets sent to $[0, -1]^\top$.
# These vectors are still in the same line,
# and the only modification is that the matrix stretches them
# by a factor of $2$ and $-1$ respectively.
# We call such vectors *eigenvectors*
# and the factor they are stretched by *eigenvalues*.
#
# In general, if we can find a number $\lambda$
# and a vector $\mathbf{v}$ such that
#
# $$
# \mathbf{A}\mathbf{v} = \lambda \mathbf{v},
# $$
#
# then we say that $\mathbf{v}$ is an eigenvector of $A$ and $\lambda$ is an eigenvalue.
#
# ## Finding Eigenvalues
# Let us figure out how to find them. By subtracting $\lambda \mathbf{v}$ from both sides,
# and then factoring out the vector,
# we see the above is equivalent to:
#
# $$(\mathbf{A} - \lambda \mathbf{I})\mathbf{v} = 0.$$
# :eqlabel:`eq_eigvalue_der`
#
# For :eqref:`eq_eigvalue_der` to happen, we see that $(\mathbf{A} - \lambda \mathbf{I})$
# must compress some direction down to zero,
# hence it is not invertible, and thus the determinant is zero.
# Thus, we can find the *eigenvalues*
# by finding the values of $\lambda$ for which $\det(\mathbf{A}-\lambda \mathbf{I}) = 0$.
# Once we find the eigenvalues, we can solve
# $\mathbf{A}\mathbf{v} = \lambda \mathbf{v}$
# to find the associated *eigenvector(s)*.
#
# ### An Example
# Let us see this with a more challenging matrix
#
# $$
# \mathbf{A} = \begin{bmatrix}
# 2 & 1\\
# 2 & 3
# \end{bmatrix}.
# $$
#
# If we consider $\det(\mathbf{A}-\lambda \mathbf{I}) = 0$,
# we see this is equivalent to the polynomial equation
# $0 = (2-\lambda)(3-\lambda)-2 = (4-\lambda)(1-\lambda)$.
# Thus, two eigenvalues are $4$ and $1$.
# To find the associated vectors, we then need to solve
#
# $$
# \begin{bmatrix}
# 2 & 1\\
# 2 & 3
# \end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}x \\ y\end{bmatrix} \; \text{and} \;
# \begin{bmatrix}
# 2 & 1\\
# 2 & 3
# \end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}4x \\ 4y\end{bmatrix} .
# $$
#
# We can solve this with the vectors $[1, -1]^\top$ and $[1, 2]^\top$ respectively.
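Before calling a library routine, we can verify the claimed eigenpairs by direct multiplication (a quick numpy sanity check):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])
v1 = np.array([1.0, -1.0])  # claimed eigenvector for eigenvalue 1
v2 = np.array([1.0, 2.0])   # claimed eigenvector for eigenvalue 4
print(A @ v1)  # [ 1. -1.] = 1 * v1
print(A @ v2)  # [4. 8.] = 4 * v2
```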
#
# We can check this in code using the built-in `torch.eig` routine.
#
# + origin_pos=2 tab=["pytorch"]
# %matplotlib inline
import torch
from IPython import display
from d2l import torch as d2l
torch.eig(torch.tensor([[2, 1], [2, 3]], dtype=torch.float64),
eigenvectors=True)
# + [markdown] origin_pos=4
# Note that `torch` normalizes the eigenvectors to be of length one,
# whereas we took ours to be of arbitrary length.
# Additionally, the choice of sign is arbitrary.
# However, the vectors computed are parallel
# to the ones we found by hand with the same eigenvalues.
#
# ## Decomposing Matrices
# Let us continue the previous example one step further. Let
#
# $$
# \mathbf{W} = \begin{bmatrix}
# 1 & 1 \\
# -1 & 2
# \end{bmatrix},
# $$
#
# be the matrix where the columns are the eigenvectors of the matrix $\mathbf{A}$. Let
#
# $$
# \boldsymbol{\Sigma} = \begin{bmatrix}
# 1 & 0 \\
# 0 & 4
# \end{bmatrix},
# $$
#
# be the matrix with the associated eigenvalues on the diagonal.
# Then the definition of eigenvalues and eigenvectors tells us that
#
# $$
# \mathbf{A}\mathbf{W} =\mathbf{W} \boldsymbol{\Sigma} .
# $$
#
# The matrix $W$ is invertible, so multiplying both sides by $W^{-1}$ on the right,
# we see that we may write
#
# $$\mathbf{A} = \mathbf{W} \boldsymbol{\Sigma} \mathbf{W}^{-1}.$$
# :eqlabel:`eq_eig_decomp`
#
# In the next section we will see some nice consequences of this,
# but for now we need only know that such a decomposition
# will exist as long as we can find a full collection
# of linearly independent eigenvectors (so that $W$ is invertible).
#
# ## Operations on Eigendecompositions
# One nice thing about eigendecompositions :eqref:`eq_eig_decomp` is that
# we can write many operations we usually encounter cleanly
# in terms of the eigendecomposition. As a first example, consider:
#
# $$
# \mathbf{A}^n = \overbrace{\mathbf{A}\cdots \mathbf{A}}^{\text{$n$ times}} = \overbrace{(\mathbf{W}\boldsymbol{\Sigma} \mathbf{W}^{-1})\cdots(\mathbf{W}\boldsymbol{\Sigma} \mathbf{W}^{-1})}^{\text{$n$ times}} = \mathbf{W}\overbrace{\boldsymbol{\Sigma}\cdots\boldsymbol{\Sigma}}^{\text{$n$ times}}\mathbf{W}^{-1} = \mathbf{W}\boldsymbol{\Sigma}^n \mathbf{W}^{-1}.
# $$
#
# This tells us that for any positive power of a matrix,
# the eigendecomposition is obtained by just raising the eigenvalues to the same power.
# The same can be shown for negative powers,
# so if we want to invert a matrix we need only consider
#
# $$
# \mathbf{A}^{-1} = \mathbf{W}\boldsymbol{\Sigma}^{-1} \mathbf{W}^{-1},
# $$
#
# or in other words, just invert each eigenvalue.
# This will work as long as each eigenvalue is non-zero,
# so we see that invertible is the same as having no zero eigenvalues.
#
# Indeed, additional work can show that if $\lambda_1, \ldots, \lambda_n$
# are the eigenvalues of a matrix, then the determinant of that matrix is
#
# $$
# \det(\mathbf{A}) = \lambda_1 \cdots \lambda_n,
# $$
#
# or the product of all the eigenvalues.
# This makes sense intuitively because whatever stretching $\mathbf{W}$ does,
# $W^{-1}$ undoes it, so in the end the only stretching that happens is
# by multiplication by the diagonal matrix $\boldsymbol{\Sigma}$,
# which stretches volumes by the product of the diagonal elements.
#
# Finally, recall that the rank was the maximum number
# of linearly independent columns of your matrix.
# By examining the eigendecomposition closely,
# we can see that the rank is the same
# as the number of non-zero eigenvalues of $\mathbf{A}$.
#
# The examples could continue, but hopefully the point is clear:
# eigendecomposition can simplify many linear-algebraic computations
# and is a fundamental operation underlying many numerical algorithms
# and much of the analysis that we do in linear algebra.
#
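These identities are easy to verify numerically; here is a short numpy sketch using the example matrix from earlier:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])
lam, W = np.linalg.eig(A)            # eigenvalues and column eigenvectors
W_inv = np.linalg.inv(W)

# A = W Sigma W^{-1}
assert np.allclose(A, W @ np.diag(lam) @ W_inv)
# A^3 = W Sigma^3 W^{-1}: raise the eigenvalues to the power instead
assert np.allclose(np.linalg.matrix_power(A, 3), W @ np.diag(lam**3) @ W_inv)
# det(A) is the product of the eigenvalues: 1 * 4 = 4
assert np.isclose(np.linalg.det(A), lam.prod())
```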
# ## Eigendecompositions of Symmetric Matrices
# It is not always possible to find enough linearly independent eigenvectors
# for the above process to work. For instance the matrix
#
# $$
# \mathbf{A} = \begin{bmatrix}
# 1 & 1 \\
# 0 & 1
# \end{bmatrix},
# $$
#
# has only a single eigenvector, namely $(1, 0)^\top$.
# To handle such matrices, we require more advanced techniques
# than we can cover (such as the Jordan Normal Form, or Singular Value Decomposition).
# We will often need to restrict our attention to those matrices
# where we can guarantee the existence of a full set of eigenvectors.
#
# The most commonly encountered family are the *symmetric matrices*,
# which are those matrices where $\mathbf{A} = \mathbf{A}^\top$.
# In this case, we may take $W$ to be an *orthogonal matrix*—a matrix whose columns are all length one vectors that are at right angles to one another, where
# $\mathbf{W}^\top = \mathbf{W}^{-1}$—and all the eigenvalues will be real.
# Thus, in this special case, we can write :eqref:`eq_eig_decomp` as
#
# $$
# \mathbf{A} = \mathbf{W}\boldsymbol{\Sigma}\mathbf{W}^\top .
# $$
#
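We can confirm both claims numerically with `numpy.linalg.eigh`, which is designed for symmetric (Hermitian) matrices; a short sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                        # symmetric by construction

lam, W = np.linalg.eigh(A)               # real eigenvalues, orthonormal columns
assert np.allclose(W.T @ W, np.eye(4))   # W is orthogonal: W^T = W^{-1}
assert np.allclose(A, W @ np.diag(lam) @ W.T)
```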
# ## Gershgorin Circle Theorem
# Eigenvalues are often difficult to reason with intuitively.
# If presented an arbitrary matrix, there is little that can be said
# about what the eigenvalues are without computing them.
# There is, however, one theorem that can make it easy to approximate well
# if the largest values are on the diagonal.
#
# Let $\mathbf{A} = (a_{ij})$ be any square matrix ($n\times n$).
# We will define $r_i = \sum_{j \neq i} |a_{ij}|$.
# Let $\mathcal{D}_i$ represent the disc in the complex plane
# with center $a_{ii}$ radius $r_i$.
# Then, every eigenvalue of $\mathbf{A}$ is contained in one of the $\mathcal{D}_i$.
#
# This can take a bit to unpack, so let us look at an example.
# Consider the matrix:
#
# $$
# \mathbf{A} = \begin{bmatrix}
# 1.0 & 0.1 & 0.1 & 0.1 \\
# 0.1 & 3.0 & 0.2 & 0.3 \\
# 0.1 & 0.2 & 5.0 & 0.5 \\
# 0.1 & 0.3 & 0.5 & 9.0
# \end{bmatrix}.
# $$
#
# We have $r_1 = 0.3$, $r_2 = 0.6$, $r_3 = 0.8$ and $r_4 = 0.9$.
# The matrix is symmetric, so all eigenvalues are real.
# This means that all of our eigenvalues will be in one of the ranges of
#
# $$[a_{11}-r_1, a_{11}+r_1] = [0.7, 1.3], $$
#
# $$[a_{22}-r_2, a_{22}+r_2] = [2.4, 3.6], $$
#
# $$[a_{33}-r_3, a_{33}+r_3] = [4.2, 5.8], $$
#
# $$[a_{44}-r_4, a_{44}+r_4] = [8.1, 9.9]. $$
#
#
# Performing the numerical computation shows
# that the eigenvalues are approximately $0.99$, $2.97$, $4.95$, $9.08$,
# all comfortably inside the ranges provided.
#
# + origin_pos=6 tab=["pytorch"]
A = torch.tensor([[1.0, 0.1, 0.1, 0.1],
[0.1, 3.0, 0.2, 0.3],
[0.1, 0.2, 5.0, 0.5],
[0.1, 0.3, 0.5, 9.0]])
v, _ = torch.eig(A)
v
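The same containment check can be written out explicitly. A small numpy sketch, independent of the torch call above, computes the radii $r_i$ and tests that every eigenvalue falls inside some disc:

```python
import numpy as np

A = np.array([[1.0, 0.1, 0.1, 0.1],
              [0.1, 3.0, 0.2, 0.3],
              [0.1, 0.2, 5.0, 0.5],
              [0.1, 0.3, 0.5, 9.0]])
radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))  # r_i = sum_{j != i} |a_ij|
eigvals = np.linalg.eigvalsh(A)                     # symmetric, so real eigenvalues
# every eigenvalue must lie in some interval [a_ii - r_i, a_ii + r_i]
for lam in eigvals:
    assert any(c - r <= lam <= c + r for c, r in zip(np.diag(A), radii))
```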
# + [markdown] origin_pos=8
# In this way, eigenvalues can be approximated,
# and the approximations will be fairly accurate
# in the case that the diagonal is
# significantly larger than all the other elements.
#
# It is a small thing, but with a complex
# and subtle topic like eigendecomposition,
# it is good to get any intuitive grasp we can.
#
# ## A Useful Application: The Growth of Iterated Maps
#
# Now that we understand what eigenvectors are in principle,
# let us see how they can be used to provide a deep understanding
# of a problem central to neural network behavior: proper weight initialization.
#
# ### Eigenvectors as Long Term Behavior
#
# The full mathematical investigation of the initialization
# of deep neural networks is beyond the scope of the text,
# but we can see a toy version here to understand
# how eigenvalues can help us see how these models work.
# As we know, neural networks operate by interspersing layers
# of linear transformations with non-linear operations.
# For simplicity here, we will assume that there is no non-linearity,
# and that the transformation is a single repeated matrix operation $A$,
# so that the output of our model is
#
# $$
# \mathbf{v}_{out} = \mathbf{A}\cdot \mathbf{A}\cdots \mathbf{A} \mathbf{v}_{in} = \mathbf{A}^N \mathbf{v}_{in}.
# $$
#
# When these models are initialized, $A$ is taken to be
# a random matrix with Gaussian entries, so let us make one of those.
# To be concrete, we start with a mean zero, variance one Gaussian distributed $5 \times 5$ matrix.
#
# + origin_pos=10 tab=["pytorch"]
torch.manual_seed(42)
k = 5
A = torch.randn(k, k, dtype=torch.float64)
A
# + [markdown] origin_pos=12
# ### Behavior on Random Data
# For simplicity in our toy model,
# we will assume that the data vector we feed in $\mathbf{v}_{in}$
# is a random five dimensional Gaussian vector.
# Let us think about what we want to have happen.
# For context, let's think of a generic ML problem,
# where we are trying to turn input data, like an image, into a prediction,
# like the probability the image is a picture of a cat.
# If repeated application of $\mathbf{A}$
# stretches a random vector out to be very long,
# then small changes in input will be amplified
# into large changes in output---tiny modifications of the input image
# would lead to vastly different predictions.
# This does not seem right!
#
# On the flip side, if $\mathbf{A}$ shrinks random vectors to be shorter,
# then after running through many layers, the vector will essentially shrink to nothing,
# and the output will not depend on the input. This is clearly not right either!
#
# We need to walk the narrow line between growth and decay
# to make sure that our output changes depending on our input, but not too much!
#
# Let us see what happens when we repeatedly multiply our matrix $\mathbf{A}$
# against a random input vector, and keep track of the norm.
#
# + origin_pos=14 tab=["pytorch"]
# Calculate the sequence of norms after repeatedly applying `A`
v_in = torch.randn(k, 1, dtype=torch.float64)
norm_list = [torch.norm(v_in).item()]
for i in range(1, 100):
v_in = A @ v_in
norm_list.append(torch.norm(v_in).item())
d2l.plot(torch.arange(0, 100), norm_list, 'Iteration', 'Value')
# + [markdown] origin_pos=16
# The norm is growing uncontrollably!
# Indeed if we take the list of quotients, we will see a pattern.
#
# + origin_pos=18 tab=["pytorch"]
# Compute the scaling factor of the norms
norm_ratio_list = []
for i in range(1, 100):
norm_ratio_list.append(norm_list[i]/norm_list[i - 1])
d2l.plot(torch.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
# + [markdown] origin_pos=20
# If we look at the last portion of the above computation,
# we see that the random vector is stretched by a factor of `1.974459321485[...]`,
# where the portion at the end shifts a little,
# but the stretching factor is stable.
#
# ### Relating Back to Eigenvectors
#
# We have seen that eigenvectors and eigenvalues correspond
# to the amount something is stretched,
# but that was for specific vectors, and specific stretches.
# Let us take a look at what they are for $\mathbf{A}$.
# A bit of a caveat here: it turns out that to see them all,
# we will need to go to complex numbers.
# You can think of these as stretches and rotations.
# By taking the norm of the complex number
# (square root of the sums of squares of real and imaginary parts)
# we can measure that stretching factor. Let us also sort them.
#
# + origin_pos=22 tab=["pytorch"]
# Compute the eigenvalues
eigs = torch.eig(A)[0][:,0].tolist()
norm_eigs = [torch.abs(torch.tensor(x)) for x in eigs]
norm_eigs.sort()
print(f'norms of eigenvalues: {norm_eigs}')
# + [markdown] origin_pos=24
# ### An Observation
#
# We see something a bit unexpected happening here:
# that number we identified before for the
# long term stretching of our matrix $\mathbf{A}$
# applied to a random vector is *exactly*
# (accurate to thirteen decimal places!)
# the largest eigenvalue of $\mathbf{A}$.
# This is clearly not a coincidence!
#
# But, if we now think about what is happening geometrically,
# this starts to make sense. Consider a random vector.
# This random vector points a little in every direction,
# so in particular, it points at least a little bit
# in the same direction as the eigenvector of $\mathbf{A}$
# associated with the largest eigenvalue.
# This is so important that it is called
# the *principal eigenvalue* and *principal eigenvector*.
# After applying $\mathbf{A}$, our random vector
# gets stretched in every possible direction,
# as is associated with every possible eigenvector,
# but it is stretched most of all in the direction
# associated with this principal eigenvector.
# What this means is that after applying $A$,
# our random vector is longer, and points in a direction
# closer to being aligned with the principal eigenvector.
# After applying the matrix many times,
# the alignment with the principal eigenvector becomes closer and closer until,
# for all practical purposes, our random vector has been transformed
# into the principal eigenvector!
# Indeed this algorithm is the basis
# for what is known as the *power iteration*
# for finding the largest eigenvalue and eigenvector of a matrix. For details see, for example, :cite:`Van-Loan.Golub.1983`.
#
# ### Fixing the Normalization
#
# Now, from the above discussion, we concluded
# that we do not want a random vector to be stretched or squished at all;
# we would like random vectors to stay about the same size throughout the entire process.
# To do so, we now rescale our matrix by this principal eigenvalue
# so that the largest eigenvalue is instead just one.
# Let us see what happens in this case.
#
# + origin_pos=26 tab=["pytorch"]
# Rescale the matrix `A`
A /= norm_eigs[-1]
# Do the same experiment again
v_in = torch.randn(k, 1, dtype=torch.float64)
norm_list = [torch.norm(v_in).item()]
for i in range(1, 100):
v_in = A @ v_in
norm_list.append(torch.norm(v_in).item())
d2l.plot(torch.arange(0, 100), norm_list, 'Iteration', 'Value')
# + [markdown] origin_pos=28
# We can also plot the ratio between consecutive norms as before and see that indeed it stabilizes.
#
# + origin_pos=30 tab=["pytorch"]
# Also plot the ratio
norm_ratio_list = []
for i in range(1, 100):
norm_ratio_list.append(norm_list[i]/norm_list[i-1])
d2l.plot(torch.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
# + [markdown] origin_pos=32
# ## Conclusions
#
# We now see exactly what we hoped for!
# After normalizing the matrix by the principal eigenvalue,
# we see that the random data does not explode as before,
# but rather eventually equilibrates to a specific value.
# It would be nice to be able to do these things from first principles,
# and it turns out that if we look deeply at the mathematics of it,
# we can see that the largest eigenvalue
# of a large random matrix with independent mean zero,
# variance one Gaussian entries is on average about $\sqrt{n}$,
# or in our case $\sqrt{5} \approx 2.2$,
# due to a fascinating fact known as the *circular law* :cite:`Ginibre.1965`.
# The relationship between the eigenvalues (and a related object called singular values) of random matrices has been shown to have deep connections to proper initialization of neural networks as was discussed in :cite:`Pennington.Schoenholz.Ganguli.2017` and subsequent works.
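We can probe the circular law numerically; the following sketch draws a larger Gaussian matrix and compares its spectral radius to $\sqrt{n}$ (the ratio should be close to one, up to finite-size fluctuations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
A = rng.standard_normal((n, n))        # mean zero, variance one Gaussian entries

spectral_radius = np.abs(np.linalg.eigvals(A)).max()
ratio = spectral_radius / np.sqrt(n)   # circular law predicts ~1 for large n
```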
#
# ## Summary
# * Eigenvectors are vectors which are stretched by a matrix without changing direction.
# * Eigenvalues are the amount that the eigenvectors are stretched by the application of the matrix.
# * The eigendecomposition of a matrix can allow for many operations to be reduced to operations on the eigenvalues.
# * The Gershgorin Circle Theorem can provide approximate values for the eigenvalues of a matrix.
# * The behavior of iterated matrix powers depends primarily on the size of the largest eigenvalue. This understanding has many applications in the theory of neural network initialization.
#
# ## Exercises
# 1. What are the eigenvalues and eigenvectors of
# $$
# \mathbf{A} = \begin{bmatrix}
# 2 & 1 \\
# 1 & 2
# \end{bmatrix}?
# $$
# 1. What are the eigenvalues and eigenvectors of the following matrix, and what is strange about this example compared to the previous one?
# $$
# \mathbf{A} = \begin{bmatrix}
# 2 & 1 \\
# 0 & 2
# \end{bmatrix}.
# $$
# 1. Without computing the eigenvalues, is it possible that the smallest eigenvalue of the following matrix is less than $0.5$? *Note*: this problem can be done in your head.
# $$
# \mathbf{A} = \begin{bmatrix}
# 3.0 & 0.1 & 0.3 & 1.0 \\
# 0.1 & 1.0 & 0.1 & 0.2 \\
# 0.3 & 0.1 & 5.0 & 0.0 \\
# 1.0 & 0.2 & 0.0 & 1.8
# \end{bmatrix}.
# $$
#
# + [markdown] origin_pos=34 tab=["pytorch"]
# [Discussions](https://discuss.d2l.ai/t/1086)
#
| d2l/pytorch/chapter_appendix-mathematics-for-deep-learning/eigendecomposition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome To Galapagos Tutorial
# The Galapagos Middleware presents an orchestration framework to create multi-FPGA and CPU networks and can be built on top of any device that has a Galapagos Hypervisor.
#
# ## Galapagos Hypervisor
#
# The following few cells provide an overview of the Galapagos Hypervisor and Middleware
# If you would like to go straight into building your own kernels, you can jump to the section <a href='http://127.0.0.1:9000/notebooks/01_setup_and_creating_ip_cores.ipynb'>Setup and Creating IP Cores</a>.
#
#
# ### Hypervisor Overview
#
# The Middleware places Galapagos streaming kernels onto multiple streaming devices.
# A Galapagos streaming device is any device that currently has a Galapagos Hypervisor built for it.
# The Galapagos hypervisor abstracts the device as a streaming device.
#
# The following are two shells of two different FPGA Boards we have:
#
#
# <img style="float: left;" src="fig/7v3_shell.png"/> <img style="float: left;" src="fig/mpsoc_shell.png"/>
# The turquoise part of these hypervisors is where the user application will sit. Note that even though the network and memory interfaces on these boards are different, through the hypervisor the user interacts with both of these devices through the same interfaces.
#
# This principle is also applied to a CPU interface in the following shell:
#
# <img style="float: left;" src="fig/cpu_shell.png"/>
# As you can see, all three shells have consistent interfaces. This gives us portability of kernels across different boards, as well as functional portability between hardware and software.
# ## Galapagos Middleware
#
# The middleware places kernels (software and hardware) onto multiple Galapagos devices described in the Galapagos Hypervisor section. A kernel can be visually represented as the following:
#
# <img style="float: left;" src="fig/kernel.png"/>
# Each kernel has a galapagos-stream in and a galapagos-stream out. This is a specific implementation of the AXI-Stream protocol. Each galapagos-stream has a dest and an id field: the dest specifies the target kernel and the id references the source of the packet. Any kernel within the cluster can reference any other kernel by specifying the kernel destination in the dest field. This destination is independent of the actual physical mapping of the kernel.
# This notion can be represented through a logical description and a mapping; with these descriptions we can transform a logical cluster into a physical cluster.
#
# The following is a pictorial example:
#
# <img style="float: left;" src="fig/logical_to_physical.png"/>
#
| middleware/python/00_galapagos_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Dependencies
#
# `pip install [modules]`
# * jupyter
# * pandas
# * numexpr (optional, speedup)
# * bottleneck (optional, speedup)
#
# *PostgreSQL support*
# * sqlalchemy
# * psycopg2
#
# *Machine learning*
# * scikit-learn
# * sklearn-pandas
# * imbalanced-learn
# * networkx
#
# *Only for plotting*
# * matplotlib
# * seaborn
# +
import sqlalchemy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
# %matplotlib inline
# -
db = sqlalchemy.create_engine('postgresql://localhost/crunchbase')
connection = db.connect()
# ## Companies
#
# At this section we select the companies we want to work on using the following criteria:
#
# * At least one funding round of any type
# * Primary role is *company*
# * Established in year 2001 or later — see `START_DATE`
# * Really early shut downs and exits are neglected (before 60 days) — see `MIN_DAYS`
#
# The query is somewhat large, we join tables
#
# * `crunchbase_organizations`
# * `crunchbase_acquisitions`
# * `crunchbase_ipos`
# * `crunchbase_funding_rounds`
#
# to get companies along the following fields: `uuid`, `company_name`, `first_funding_round_on`, `last_funding_round_on`, `closed_on`, `acquired_on`, `went_public_on`.
#
# Company status reference: ipo, operating, closed, acquired.
#
# Company primary role reference: investor, company, school, group.
# +
START_DATE = '2001-01-01' # greater or equal than
MIN_DAYS = 60
query = """
SELECT o.uuid, o.company_name, first_funding_round_on, last_funding_round_on, closed_on, acquired_on, went_public_on
FROM crunchbase_organizations AS o
-- if company has been acquired multiple times, just bring the earliest acquisition
LEFT OUTER JOIN (
SELECT DISTINCT ON (acquiree_uuid) acquiree_uuid, acquired_on
FROM crunchbase_acquisitions
ORDER BY acquiree_uuid, acquired_on
) AS a
ON o.uuid = a.acquiree_uuid
-- if company has multiple public offerings, just bring the earliest IPO
LEFT OUTER JOIN (
SELECT DISTINCT ON (uuid) uuid, went_public_on
FROM crunchbase_ipos
ORDER BY uuid, went_public_on
) AS i
ON o.uuid = i.uuid
-- keep companies that have at least one funding round
JOIN (
SELECT
company_uuid,
min(announced_on) AS first_funding_round_on,
max(announced_on) AS last_funding_round_on
FROM crunchbase_funding_rounds
GROUP BY company_uuid
) AS f
ON o.uuid = f.company_uuid
WHERE first_funding_round_on >= '%s' AND primary_role = 'company'
""" % START_DATE
orgs = pd.read_sql_query(query, connection,
parse_dates=['first_funding_round_on', 'last_funding_round_on', 'closed_on', 'acquired_on', 'went_public_on'], coerce_float=False)
orgs.uuid = orgs.uuid.astype('str') # Pandas merge does not work without this
orgs.set_index('uuid', inplace=True)
# -
# **About foundations**: `funded_on` and `first_funding_on` dates (from `crunchbase_organizations` table) are unreliable. We take `first_funding_round_on` (joined from `crunchbase_funding_rounds`) as a plausible foundation date.
#
# **About status**: The `status` attribute (from `crunchbase_organizations` table) is not neatly filled, so we cannot rely on it; `closed_on`, `acquired_on` (joined from `crunchbase_acquisitions`), `went_public_on` (joined from `crunchbase_ipos`) dates are better indicators of a company's fate.
#
# **Heads up**! There are status changes. Some acquired companies were later closed, others were acquired and went public later; on the other hand, some companies that went public in the first place were acquired or closed afterwards, not to mention companies that closed before having an exit. To disambiguate the status, `acquired_on` and `went_public_on` take precedence over `closed_on`; between the former two we choose the earlier date (favoring acquisition in case of a tie). Moreover, some companies were acquired and/or went public multiple times; we consider only the earliest of these events.
# +
acquired = orgs[(orgs.acquired_on <= orgs.went_public_on) | (orgs.acquired_on.notnull() & orgs.went_public_on.isnull() )]
ipoed = orgs[(orgs.acquired_on > orgs.went_public_on) | (orgs.acquired_on.isnull() & orgs.went_public_on.notnull())]
closed = orgs[orgs.closed_on.notnull() & orgs.acquired_on.isnull() & orgs.went_public_on.isnull()]
operating = orgs[orgs.closed_on.isnull() & orgs.acquired_on.isnull() & orgs.went_public_on.isnull()]
orgs.loc[acquired.index, 'status'] = 'acquired'
orgs.loc[ipoed.index, 'status'] = 'ipoed'
orgs.loc[closed.index, 'status'] = 'closed'
orgs.loc[operating.index, 'status'] = 'operating'
# significative dates in the data set
MOST_RECENT_DATA_SET_DATE = orgs.last_funding_round_on.max()
LEAST_RECENT_DATA_SET_DATE = orgs.last_funding_round_on.min()
orgs.loc[acquired.index, 'status_on'] = acquired.acquired_on
orgs.loc[ipoed.index, 'status_on'] = ipoed.went_public_on
orgs.loc[closed.index, 'status_on'] = closed.closed_on
orgs.loc[operating.index, 'status_on'] = MOST_RECENT_DATA_SET_DATE
orgs.drop(['closed_on', 'acquired_on', 'went_public_on'], axis=1, inplace=True) # not needed anymore
# -
# We compute a fundamental feature: days from foundation to `status_on` (`status_days`). We drop companies with negative days since foundation, which do not make sense.
# +
orgs['status_days'] = (orgs.status_on - orgs.first_funding_round_on).dt.days
orgs = orgs[0 <= orgs.status_days]
orgs.drop('status_on', axis=1, inplace=True) # accomplished its mission
# -
# Furthermore, we drop companies that exited or closed **within the first 60 days** of operation. This threshold is discretionary: it could be larger, or there could be none at all. Such records look like noisy data, because it takes on average 4 to 7 years to make an exit; on the other hand, closing right after starting suggests the company never really tried.
orgs = orgs[(orgs.status == 'operating') | (MIN_DAYS <= orgs.status_days)]
# ## Labels
# +
status_merger = lambda x: 'exited' if x == 'acquired' or x == 'ipoed' else x
orgs['merged_status'] = orgs.status.apply(status_merger)
c = orgs.merged_status.value_counts()
print('Operating %6s' % c.operating)
print('Exited %6s' % c.exited)
print('Closed %6s' % c.closed)
# -
# **It is known that the success rate of startups is roughly 10%, yet the data set shows a success rate of 80%: Crunchbase is highly biased toward successful companies.**
# ## Features
#
# **Companies** raise **investments** through **funding rounds**. Companies can invest in other companies and can also make acquisitions as part of their business: this is the scope of **acquisitions** and **investments** set of features.
#
# **Companies**
# * ~~categories~~ — planned
# * status days — its meaning depends on the status: days operating, days to exit or days to shut down
#
# **Acquisitions**
# * #acquisitions
# * total expended amount
#
# **Investments**
# * #investments
# * #venture investments
# * total invested amount
#
# **Funding rounds**
# * #funding rounds
# * #venture funding rounds
# * #investments
# * **raised amount** — comes in flavors; suppose **seed** funding 1.5M / series **A** 2M / series **B** 1M / series **C** 3M
# * total raised amount = 7.5M (seed + A + B + C)
# * last N raised amounts
# * **n0** = 3M (C)
# * **n1** = 1M (B)
# * **n2** = 2M (A)
# * last N aggregated raised amounts
# * **s0** = 7.5M (seed + A + B + C)
# * **s1** = 4.5M (seed + A + B)
# * **s2** = 3.5M (seed + A)
# * last N highest raised amounts
# * **d0** = 3M (C)
# * **d1** = 2M (A)
# * **d2** = 1.5M (seed)
#
# **Investors**
# * #investors
#
# *Aggregated by unique investors for a company* (see notes for an explanation)
# * #rounds
# * #venture rounds / #rounds
# * #companies / #rounds
# * #leads / #rounds
# * #returns / #rounds
# * #losses / #rounds
# * invested amount / #rounds
# * investor score
#
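# As a sanity check, the raised-amount flavors above can be reproduced with a small
# helper. This is an illustrative sketch (the function name and the seed/A/B/C
# amounts are the hypothetical ones from the example, not part of the pipeline):

```python
def raised_amount_features(amounts, n=3):
    """amounts: raised amounts in chronological order (oldest first)."""
    last = amounts[::-1][:n]                                   # n0, n1, n2: most recent first
    agg = [sum(amounts[:len(amounts) - i]) for i in range(n)]  # s0, s1, s2: totals dropping the latest rounds
    highest = sorted(amounts, reverse=True)[:n]                # d0, d1, d2: largest rounds
    return last, agg, highest

# seed 1.5M / series A 2M / series B 1M / series C 3M (in USD millions)
last, agg, highest = raised_amount_features([1.5, 2.0, 1.0, 3.0])
# last == [3.0, 1.0, 2.0], agg == [7.5, 4.5, 3.5], highest == [3.0, 2.0, 1.5]
```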
# ### Notes on investor features
#
# Except for **#investors**, which is the number of unique investors that invested in a particular company, **all other investor features are calculated independently of company features**, then aggregated and applied as company features.
#
# For example, as an investor feature, **#rounds** is the number of funding rounds an investor has participated in over his/her career. As a company feature, it is the sum of the number of rounds over all the investors of the considered company. If a company is backed by experienced investors, the overall **#rounds** will be higher than if they were newcomers.
#
# Most investor features are averaged by **#rounds**; otherwise, large venture capital firms would dwarf the average investor.
#
# **#returns** and **#losses** are counted from investments, not companies. If an investor made two investments in the same company and that company exited, then two returns were achieved.
#
# The **investor score** is the result of building a network of investors and applying the *Katz centrality* measure to quantify the relative influence of each investor in the network.
# ## Complementary data
#
# So far, the emphasis has been on carefully selecting the companies to work with. In this section we prepare complementary data from which to derive features for these companies.
def join(query, data_set):
df = pd.read_sql_query(query, connection, parse_dates=['on'])
df.company_uuid = df.company_uuid.astype('str') # Pandas merge does not work without this
# filter by companies we have prepared
df = pd.DataFrame.merge(data_set, df, left_index=True, right_on='company_uuid', how='inner')
df.amount = df.amount.fillna(0)
df['days'] = (df.on - df.first_funding_round_on).dt.days # days since the first funding round (our foundation proxy)
# remove pre-foundation events (faulty data) and post-status events (i.e. post-IPO funding rounds)
df = df[(0 <= df.days) & (df.days <= df.status_days)]
# drop columns that have accomplished their mission and are no longer needed
df.drop(['first_funding_round_on', 'last_funding_round_on', 'status_days'], axis=1, inplace=True)
return df
# #### Acquisitions (made by companies)
# +
companies_acquisitions_query = """
SELECT o.uuid AS company_uuid, a.acquired_on AS on, a.price_usd AS amount
FROM crunchbase_organizations AS o
JOIN crunchbase_acquisitions AS a
ON o.uuid = a.acquirer_uuid
"""
acquisitions = join(companies_acquisitions_query, orgs)
# -
# #### Investments (made by companies)
#
# Crunchbase does not tell the invested amount per investor in a funding round;
# it only provides the raised amount (the total) and the number of investors.
# The best we can do is assume that all investors in a funding round
# participate under the same conditions, providing capital in equal parts.
# +
companies_investments_query = """
SELECT o.uuid AS company_uuid, f.funding_round_type, f.funding_round_code,
f.announced_on AS on, f.raised_amount_usd / f.investor_count AS amount
FROM crunchbase_organizations AS o
JOIN crunchbase_investments AS i
ON o.uuid = i.investor_uuid
JOIN crunchbase_funding_rounds AS f
ON i.funding_round_uuid = f.funding_round_uuid
"""
investments = join(companies_investments_query, orgs)
# -
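# The equal-split assumption can be illustrated with a toy frame (hypothetical
# numbers, mirroring `f.raised_amount_usd / f.investor_count` in the query above):

```python
import pandas as pd

# Two hypothetical rounds: $6M raised by 3 investors, $1M raised by 4 investors.
toy_rounds = pd.DataFrame({
    "raised_amount_usd": [6_000_000.0, 1_000_000.0],
    "investor_count": [3, 4],
})
toy_rounds["amount"] = toy_rounds.raised_amount_usd / toy_rounds.investor_count
# toy_rounds.amount.tolist() == [2000000.0, 250000.0]
```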
# #### Funding rounds (investments received by companies)
# +
companies_rounds_query = """
SELECT company_uuid, funding_round_type, funding_round_code,
announced_on AS on, raised_amount_usd AS amount, investor_count
FROM crunchbase_funding_rounds
"""
rounds = join(companies_rounds_query, orgs).sort_values(by='days', ascending=False)
# -
# #### Investments (made by investors)
#
# Crunchbase does not tell the invested amount per investor in a funding round;
# it only provides the raised amount (the total) and the number of investors.
# The best we can do is assume that all investors in a funding round
# participate under the same conditions, providing capital in equal parts.
# +
investors_investments_query = """
SELECT f.company_uuid, i.investor_uuid, i.funding_round_uuid,
i.is_lead_investor, f.funding_round_type, f.funding_round_code,
f.announced_on AS on, f.raised_amount_usd / f.investor_count AS amount
FROM crunchbase_investments AS i
JOIN crunchbase_funding_rounds AS f
ON i.funding_round_uuid = f.funding_round_uuid
"""
investors_investments = join(investors_investments_query, orgs)
investors_investments.investor_uuid = investors_investments.investor_uuid.astype('str')
investors_investments.funding_round_uuid = investors_investments.funding_round_uuid.astype('str')
# return and loss fields aggregated will be the investors' successes and failures count (True counts as 1 and False as 0)
investors_investments['return'] = investors_investments.merged_status == 'exited'
investors_investments['loss'] = investors_investments.merged_status == 'closed'
# if a round reports more than one lead investor, treat the round as having no lead at all
ii = investors_investments.groupby('funding_round_uuid', sort=False).filter(lambda x: x.is_lead_investor.sum() > 1)
investors_investments.loc[ii.index, 'is_lead_investor'] = False
# -
# #### Investors
# +
investors_query = """
SELECT uuid AS investor_uuid, investor_name, investor_type, cb_url AS crunchbase_url
FROM crunchbase_investors
"""
investors = pd.read_sql_query(investors_query, connection)
investors.investor_uuid = investors.investor_uuid.astype('str')
investors.set_index('investor_uuid', inplace=True)
investor_type_map = {
None:'Unknown',
'venture_capital':'Venture capital',
'accelerator':'Accelerator',
'corporate_venture_capital':'Corporate venture capital',
'micro_vc':'Micro venture capital',
'individual':'Individual',
'investment_bank':'Investment bank',
'funding_platform':'Funding platform',
'angel_group':'Angel group',
'private_equity_firm':'Private equity firm',
'non_equity_program':'Non-equity program',
'incubator':'Incubator',
'venture_debt':'Venture debt',
'government_office':'Government office',
'hedge_fund':'Hedge fund',
'co_working_space':'Co-working space',
'family_investment_office':'Family investment office',
'university_program':'University program',
'fund_of_funds':'Fund of funds',
'secondary_purchaser':'Secondary purchaser',
'technology_transfer_office':'Technology transfer office',
'startup_competition':'Startup competition'
}
investors.investor_type = investors.investor_type.map(investor_type_map)
# -
# ## Investor features
#
# ### Investor network
#
# We are going to perform a social network analysis of the investor ecosystem in order to extract an **investor score**. For this we build a network of influence: a directed graph where nodes are investors and edges point from influenced investors (out-edges) to influential investors (in-edges). Self-loops are not allowed.
#
# A connection between investors is made —*or strengthened* if the connection already exists— whenever the investors invest in the same company; this is the *weight* property of the edges. The connection is directed because we are considering the causality of the events: we suppose that *investing in a company where others have invested is being influenced by these others*. Of course, this does not necessarily hold true, but the stronger the connection, the stronger the correlation.
#
# Summing up, we count **influence** this way:
# * An investor influences all investors from posterior funding rounds.
# * An investor is influenced by all investors from previous funding rounds.
# * Within the same funding round, the lead investor, if any, influences the rest of the participants.
# +
import networkx as nx
def investors_graph(investments):
G = nx.DiGraph()
for company_uuid, company_investments in investments.groupby('company_uuid', sort=False):
funding_rounds = [funding_round for _, funding_round in \
company_investments.sort_values(by='on').groupby('funding_round_uuid', sort=False)]
for anterior_funding_round in funding_rounds:
for posterior_funding_round in funding_rounds:
if anterior_funding_round.on.iloc[0] < posterior_funding_round.on.iloc[0]:
for anterior_investor in anterior_funding_round.investor_uuid:
for posterior_investor in posterior_funding_round.investor_uuid:
if anterior_investor == posterior_investor: continue # avoid self-loops
G.add_edge(posterior_investor, anterior_investor) # if the edge already exists it does not get replaced
edge = G[posterior_investor][anterior_investor] # add_edge does not return the edge, sadly
edge['weight'] = edge.get('weight', 0) + 1
# TODO: corner case: different funding rounds on the same day
elif anterior_funding_round.on.iloc[0] == posterior_funding_round.on.iloc[0]:
lead = anterior_funding_round[anterior_funding_round.is_lead_investor == True]
if len(lead) == 1:
lead_investor = lead.investor_uuid.iloc[0]
for other_investor in posterior_funding_round.investor_uuid:
if lead_investor == other_investor: continue # avoid self-loops
G.add_edge(other_investor, lead_investor) # if the edge already exists it does not get replaced
edge = G[other_investor][lead_investor] # add_edge does not return the edge, sadly
edge['weight'] = edge.get('weight', 0) + 1
return G
# -
# ### Investor score
#
# We apply the *Katz centrality* measure to quantify the relative influence of each investor in the network.
#
# Centrality indices are explicitly designed to produce a ranking which allows indication of the most important vertices; they are not designed to measure the influence of nodes in general.
#
# A ranking only orders vertices by importance, it does not quantify the difference in importance between different levels of the ranking. The features which (correctly) identify the most important vertices in a given network do not necessarily generalize to the remaining vertices; for the majority of other network nodes the rankings may be meaningless.
#
# See:
# * [Centrality](https://en.wikipedia.org/wiki/Centrality)
# * [Katz centrality](https://en.wikipedia.org/wiki/Katz_centrality)
# * [Node influence metric](https://en.wikipedia.org/wiki/Node_influence_metric)
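# A minimal sketch of the scoring step on a toy graph (edges point from influenced
# to influential investors, so influence accumulates on in-edges; the alpha value
# here is illustrative, matching the one used later in the feature computation):

```python
import networkx as nx

G = nx.DiGraph()
# "a" and "c" were influenced by "b"; "c" was also influenced by "a"
G.add_edge("a", "b", weight=2)
G.add_edge("c", "b", weight=1)
G.add_edge("c", "a", weight=1)

score = nx.katz_centrality(G, alpha=0.001, weight="weight")
# "b" receives the most weighted in-edges, so it ranks highest:
# score["b"] > score["a"] > score["c"]
```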
# ### Investor activity characteristic time
#
# *Half-life* is the time required for a quantity to reduce to half its initial value [https://en.wikipedia.org/wiki/Half-life].
#
# We define the **investor activity time** as the span from an investor's first investment to the last one, that is, until the investor becomes inactive. We want the median activity time: the length of time after which half of the investors are inactive.
# +
activity = investors_investments.groupby('investor_uuid').on.apply(lambda x: (x.max() - x.min()).days)
activity = activity[activity != 0] / 365 # drop one-time investors (zero span) and convert days to years
sn.distplot(activity, kde=False, norm_hist=True)
plt.xlabel('Years')
plt.ylabel('Density')
plt.savefig('investments_by_years.png')
# -
# Observing the plot, the activity half-life is about **3 years**. This means that *the average investor gathers and invests its entire capital within that time*.
#
# When calculating the investor influence, only the last 3 years of investors’ actions are considered. This way influence does not accumulate forever but increases or decreases in time.
#
# Time segments advance in quarters; in other words, we take 3-year snapshots of the investment data every 3 months.
# +
from datetime import datetime
# generate quarters
dates = [datetime(year, month, 1) for year in range(LEAST_RECENT_DATA_SET_DATE.year, MOST_RECENT_DATA_SET_DATE.year + 1) \
for month in (1,4,7,10) if year < MOST_RECENT_DATA_SET_DATE.year or month < MOST_RECENT_DATA_SET_DATE.month]
# generate 3-year segments (12-quarter segments) rolling on quarters
time_segments = zip(dates, dates[12:])
# -
# ### Computation of features
# +
investor_features = pd.DataFrame()
for date_from, date_to in time_segments:
time_segmented_investments = investors_investments[investors_investments.on.between(date_from, date_to, inclusive=False)]
# FEATURES
# aggregation by investor
features = time_segmented_investments.groupby('investor_uuid', sort=False).agg({
'company_uuid':'nunique',
'funding_round_type':'count',
'funding_round_code':'count',
'is_lead_investor':'sum',
'return':'sum',
'loss':'sum',
'amount':'sum'
})
# column renaming to more meaningful names
features.rename(columns={
'company_uuid':'investor_companies',
'funding_round_type':'investor_rounds',
'funding_round_code':'investor_venture_rounds',
'is_lead_investor':'investor_leads',
'return':'investor_returns',
'loss':'investor_losses',
'amount':'investor_amount'
}, inplace=True)
# SOCIAL NETWORK ANALYSIS
G = investors_graph(time_segmented_investments)
features['investor_score'] = pd.Series( nx.katz_centrality(G, weight='weight', alpha=0.001) )
features.fillna(0, inplace=True)
features['investor_rank'] = features.investor_score.rank(ascending=False)
features['from'] = date_from
features['to'] = date_to
investor_features = investor_features.append(features)
# -
# Persist investor features and the most recent investor network.
# +
investor_features.to_csv('investor_features.csv')
recent = investor_features[investor_features['to'] == date_to]
recent = pd.DataFrame.merge(investors, recent, left_index=True, right_index=True, how='inner') \
.sort_values(by='investor_rank').drop(['from', 'to'], axis=1)
recent.to_csv('investor_influence_graph_metadata_%s.csv' % date_to.strftime('%Y-%m-%d'))
nx.write_gpickle(G, 'investor_influence_graph_%s.pickle' % date_to.strftime('%Y-%m-%d'))
| Investor features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Number of Trips, Number of Customer Type Usage
# ## Import Packages
import numpy as np
import pandas as pd
import os
import glob
# ## Main Function
def N_trips_customer(datasets_address, output_address):
csv_files = glob.glob(os.path.join(datasets_address, "*.csv"))
col = ['Date', 'Number of Trips', 'Number_of_Members','Number_of_Rentals']
export_data = pd.DataFrame(columns= col)
j = 0
for i in csv_files:
data = pd.read_csv(i)
number_of_trips = len(data)
if 'usertype' in data.columns:
number_of_members = data['usertype'].value_counts()['Subscriber']
number_of_rentals = number_of_trips - number_of_members
date = data.loc[0,'starttime'][0:7]
else:
number_of_members = data['member_casual'].value_counts()['member']
number_of_rentals = number_of_trips - number_of_members
date = data.loc[0,'started_at'][0:7]
export_data.loc[j,:] = [date, number_of_trips,number_of_members,number_of_rentals]
j+=1
## Export the DataFrame
export_data.to_csv(output_address, index = False)
N_trips_customer("C:/Users/adria/OneDrive/Desktop", "C:/Users/adria/OneDrive/Desktop/exp.csv")
| Pouya-tasks/N_trips_customer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="c644IyT2b6fN" colab_type="text"
# # Tensor completion (example of minimizing a loss w.r.t. TT-tensor)
#
# [Open](https://colab.research.google.com/github/Bihaqo/t3f/blob/develop/docs/tutorials/tensor_completion.ipynb) **this page in an interactive mode via Google Colaboratory.**
#
# In this example we will see how we can do tensor completion with t3f, i.e. observe a fraction of values in a tensor and recover the rest by assuming that the original tensor has low TT-rank.
# Mathematically it means that we have a binary mask $P$ and a ground truth tensor $A$, but we observe only a noisy and sparsified version of $A$: $P \odot (\hat{A})$, where $\odot$ is the elementwise product (applying the binary mask) and $\hat{A} = A + \text{noise}$. In this case our task reduces to the following optimization problem:
#
# $$
# \begin{aligned}
# & \underset{X}{\text{minimize}}
# & & \|P \odot (X - \hat{A})\|_F^2 \\
# & \text{subject to}
# & & \text{tt_rank}(X) \leq r_0
# \end{aligned}
# $$
# + id="XKpnAlb2b6fP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="53ef39b2-9fc4-474a-c7fe-4706cea33e09"
import numpy as np
import matplotlib.pyplot as plt
# Import TF 2.
# %tensorflow_version 2.x
import tensorflow as tf
# Fix seed so that the results are reproducible.
tf.random.set_seed(0)
np.random.seed(0)
try:
import t3f
except ImportError:
# Install T3F if it's not already installed.
# !git clone https://github.com/Bihaqo/t3f.git
# !cd t3f; pip install .
import t3f
# + [markdown] id="jS_8PA1ub6fS" colab_type="text"
# **Generating problem instance**
#
# Let's generate a random matrix $A$, noise, and mask $P$.
# + id="kTe5tB3Kb6fT" colab_type="code" colab={}
shape = (3, 4, 4, 5, 7, 5)
# Generate ground truth tensor A. To make sure that it has low TT-rank,
# let's generate a random tt-rank 5 tensor and apply t3f.full to it to convert to actual tensor.
ground_truth = t3f.full(t3f.random_tensor(shape, tt_rank=5))
# Make a (non trainable) variable out of ground truth. Otherwise, it will be randomly regenerated on each sess.run.
ground_truth = tf.Variable(ground_truth, trainable=False)
noise = 1e-2 * tf.Variable(tf.random.normal(shape), trainable=False)
noisy_ground_truth = ground_truth + noise
# Observe 25% of the tensor values.
sparsity_mask = tf.cast(tf.random.uniform(shape) <= 0.25, tf.float32)
sparsity_mask = tf.Variable(sparsity_mask, trainable=False)
sparse_observation = noisy_ground_truth * sparsity_mask
# + [markdown] id="VIBpNWFzb6fX" colab_type="text"
# **Initialize the variable and compute the loss**
# + id="C4h03v5Kb6fY" colab_type="code" colab={}
observed_total = tf.reduce_sum(sparsity_mask)
total = np.prod(shape)
initialization = t3f.random_tensor(shape, tt_rank=5)
estimated = t3f.get_variable('estimated', initializer=initialization)
# + [markdown] id="8QWYA3a7b6fb" colab_type="text"
# SGD optimization
# -------------------------
# The simplest way to solve the optimization problem is Stochastic Gradient Descent: let TensorFlow differentiate the loss w.r.t. the factors (cores) of the TensorTrain decomposition of the estimated tensor and minimize the loss with your favourite SGD variation.
# + id="6JyoT6uVb6fc" colab_type="code" colab={}
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
def step():
with tf.GradientTape() as tape:
# Loss is MSE between the estimated and ground-truth tensor as computed in the observed cells.
loss = 1.0 / observed_total * tf.reduce_sum((sparsity_mask * t3f.full(estimated) - sparse_observation)**2)
gradients = tape.gradient(loss, estimated.tt_cores)
optimizer.apply_gradients(zip(gradients, estimated.tt_cores))
# Test loss is MSE between the estimated tensor and full (and not noisy) ground-truth tensor A.
test_loss = 1.0 / total * tf.reduce_sum((t3f.full(estimated) - ground_truth)**2)
return loss, test_loss
# + id="e6VWyInAb6ff" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="9ff1647e-2e1e-46be-d0c0-fcc70e25f632"
train_loss_hist = []
test_loss_hist = []
for i in range(5000):
tr_loss_v, test_loss_v = step()
tr_loss_v, test_loss_v = tr_loss_v.numpy(), test_loss_v.numpy()
train_loss_hist.append(tr_loss_v)
test_loss_hist.append(test_loss_v)
if i % 1000 == 0:
print(i, tr_loss_v, test_loss_v)
# + id="0lPHsSveb6fj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 316} outputId="60f23a2d-3961-477a-d7bf-6451f8a523af"
plt.loglog(train_loss_hist, label='train')
plt.loglog(test_loss_hist, label='test')
plt.xlabel('Iteration')
plt.ylabel('MSE Loss value')
plt.title('SGD completion')
plt.legend()
# + [markdown] id="Ouhl6kvBb6fn" colab_type="text"
# Speeding it up
# --------------------
# The simple solution we have so far assumes that the loss is computed by materializing the full estimated tensor and then zeroing out unobserved elements. If the tensors are really large and the fraction of observed values is small (e.g. less than 1%), it may be much more efficient to work directly with the observed elements only.
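# The indexing idea can be sketched in plain NumPy: look up the observed cells
# directly instead of materializing and masking the full tensor (t3f.gather_nd
# performs the analogous lookup without ever forming the dense tensor):

```python
import numpy as np

full = np.arange(2 * 3 * 4).reshape(2, 3, 4)        # stand-in for the dense tensor
observation_idx = np.array([[0, 1, 2], [1, 2, 3]])  # two observed cells
observed = full[observation_idx[:, 0], observation_idx[:, 1], observation_idx[:, 2]]
# observed.tolist() == [6, 23]
```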
# + id="88t0Auu3b6fo" colab_type="code" colab={}
shape = (10, 10, 10, 10, 10, 10, 10)
total_observed = np.prod(shape)
# Since now the tensor is too large to work with explicitly,
# we don't want to generate a binary mask,
# but would rather generate indices of observed cells.
ratio = 0.001
# Let us simply randomly pick some indices (duplicates may happen,
# but for simplicity let's not bother about them for now).
num_observed = int(ratio * total_observed)
observation_idx = np.random.randint(0, 10, size=(num_observed, len(shape)))
# and let us generate some values of the tensor to be approximated
observations = np.random.randn(num_observed)
# + id="5e3CGCQGb6fr" colab_type="code" colab={}
# Our strategy is to feed the observation_idx
# into the tensor in the Tensor Train format and compute MSE between
# the obtained values and the desired values
# + id="o_nTVpYpb6fu" colab_type="code" colab={}
initialization = t3f.random_tensor(shape, tt_rank=16)
estimated = t3f.get_variable('estimated', initializer=initialization)
# + id="y9vjUa3Xb6fx" colab_type="code" colab={}
# To collect the values of a TT tensor (without forming the full tensor)
# we use the function t3f.gather_nd.
# + id="U1-IJdByb6fz" colab_type="code" colab={}
def loss():
estimated_vals = t3f.gather_nd(estimated, observation_idx)
return tf.reduce_mean((estimated_vals - observations) ** 2)
# + id="Lg_-VK80b6f3" colab_type="code" colab={}
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
def step():
with tf.GradientTape() as tape:
loss_value = loss()
gradients = tape.gradient(loss_value, estimated.tt_cores)
optimizer.apply_gradients(zip(gradients, estimated.tt_cores))
return loss_value
# + [markdown] id="A7b8xOBSgDc7" colab_type="text"
# #### Compiling the function to additionally speed things up
#
# + id="33RRJ_pogCJ7" colab_type="code" colab={}
# In TF eager mode you're supposed to first implement and debug
# a function, and then compile it to make it faster.
faster_step = tf.function(step)
# + id="ua3MnN_Yb6f6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="c1634c70-7d26-45c1-85a7-7128dac42731"
loss_hist = []
for i in range(2000):
loss_v = faster_step().numpy()
loss_hist.append(loss_v)
if i % 100 == 0:
print(i, loss_v)
# + id="q5hrWqCTb6gA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 333} outputId="f0b4326c-9413-4b5b-cf71-c2fdae9325b5"
plt.loglog(loss_hist)
plt.xlabel('Iteration')
plt.ylabel('MSE Loss value')
plt.title('smarter SGD completion')
# + id="VWVwnhc7b6gD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="b9e46705-3e7e-40db-bc85-ac2087538ed1"
print(t3f.gather_nd(estimated, observation_idx))
# + id="5bvmcc-Jb6gI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="569577a1-3714-4ca8-ba2f-24e2ddcbd061"
print(observations)
| docs/tutorials/tensor_completion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Top 100 Non-Google Domains
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.dates as mdates
import glob
from datetime import datetime
# %matplotlib inline
plt.style.use('seaborn-whitegrid')
SMALL_SIZE = 14
MEDIUM_SIZE = 18
BIGGER_SIZE = 22
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
# -
# Domains with the term `google` appear often in the Alexa ranking top lists. In this notebook, we look at the other domains. We use this [script](https://github.com/sheikhomar/mako/blob/master/src/data/spark/top_non_google_domains.py) to filter the top 100 non-google domains in the crawled data set.
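# The core of that filtering step can be sketched with pandas string matching
# (a toy domain list for illustration; the linked script does this at scale in Spark):

```python
import pandas as pd

domains = pd.DataFrame({"domain": ["google.com", "youtube.com", "googleapis.com", "bbc.com"]})
# keep only domains that do not contain the substring "google"
non_google = domains[~domains["domain"].str.contains("google")]
# non_google["domain"].tolist() == ["youtube.com", "bbc.com"]
```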
# +
paths = glob.glob('../data/external/alexa1m-rankings-top-100-part-*')
column_names = ['domain', 'alexa_rank', 'rank', 'date']
df = pd.concat((pd.read_csv(path, names=column_names) for path in paths))
# Convert date column to the correct datetype
df['date'] = df['date'].apply(lambda val: datetime.strptime(str(val), '%Y%m%d')).astype('O')
# Show the first 5 rows
df.head()
# -
# How many rows do we have?
df.info()
# Define a function that can visualise selected domains' ranking over time.
def draw_plot(df, domains):
fig, ax = plt.subplots(figsize=(20, 10))
date_col = df['date']
for domain in domains:
filtered_df = df[(df['domain'] == domain)]
ax.plot(filtered_df['date'], filtered_df['alexa_rank'], label=domain)
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
weeks = mdates.WeekdayLocator() # every week
yearsFmt = mdates.DateFormatter('%Y')
monthsFmt = mdates.DateFormatter('%Y-%m')
# format the ticks
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthsFmt)
ax.xaxis.set_minor_locator(weeks)
datemin = datetime(date_col.min().year, 1, 1)
datemax = datetime(date_col.max().year + 1, 1, 1)
#ax.set_xlim(datemin, datemax)
ax.grid(True)
ax.legend(loc=0, frameon=True, framealpha=1, borderpad=1)
ax.set_ylabel('Alexa Rank')
ax.set_xlabel('Time')
ax.invert_yaxis()
# rotates and right aligns the x labels, and moves the bottom of the
# axes up to make room for them
fig.autofmt_xdate()
# First, let us plot the top 5 domains on the first day
earliest_date = df['date'].min()
top_domains_first_day = df[df['date'] == earliest_date]['domain'].unique()
draw_plot(df, top_domains_first_day[:5])
# Not really interesting. Their rankings remain stable throughout the period. Let us plot domains used by major news corporations.
domains = ['bbc.com', 'dailymail.co.uk', 'bbc.co.uk', 'nytimes.com', 'theguardian.com', 'indiatimes.com', 'chinadaily.com.cn']
draw_plot(df, domains)
# What about video sites?
domains = ['dailymotion.com', 'vimeo.com', 'youtube.com']
draw_plot(df, domains)
| notebooks/03-oas-top-non-google-domains.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Tuesday, February 9, 2021
# ### HackerRank - Reverse a doubly linked list (Python)
# ### Problem: https://www.hackerrank.com/challenges/reverse-a-doubly-linked-list/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=linked-lists
# ### Blog: https://somjang.tistory.com/entry/HackerRank-Reverse-a-doubly-linked-list-Python
# ### First Attempt
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
class DoublyLinkedListNode:
def __init__(self, node_data):
self.data = node_data
self.next = None
self.prev = None
class DoublyLinkedList:
def __init__(self):
self.head = None
self.tail = None
def insert_node(self, node_data):
node = DoublyLinkedListNode(node_data)
if not self.head:
self.head = node
else:
self.tail.next = node
node.prev = self.tail
self.tail = node
def print_doubly_linked_list(node, sep, fptr):
while node:
fptr.write(str(node.data))
node = node.next
if node:
fptr.write(sep)
# Complete the reverse function below.
#
# For your reference:
#
# DoublyLinkedListNode:
# int data
# DoublyLinkedListNode next
# DoublyLinkedListNode prev
#
#
def reverse(head):
prev = None
d_node = head
while d_node != None:
temp = d_node.next
d_node.next = prev
d_node.prev = temp # keep the prev pointer consistent as well (it's a doubly linked list)
prev = d_node
d_node = temp
head = prev
return head
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
t = int(input())
for t_itr in range(t):
llist_count = int(input())
llist = DoublyLinkedList()
for _ in range(llist_count):
llist_item = int(input())
llist.insert_node(llist_item)
llist1 = reverse(llist.head)
print_doubly_linked_list(llist1, ' ', fptr)
fptr.write('\n')
fptr.close()
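# A quick standalone check of the reversal logic. The node class is re-declared
# minimally here so the snippet runs on its own; the swap also keeps the `prev`
# pointers consistent, which a doubly linked list requires:

```python
class Node:
    def __init__(self, data):
        self.data, self.next, self.prev = data, None, None

def reverse(head):
    prev = None
    node = head
    while node is not None:
        nxt = node.next
        node.next = prev
        node.prev = nxt  # mirror the swap on the prev pointer
        prev = node
        node = nxt
    return prev

# Build 1 <-> 2 <-> 3 and reverse it.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.prev = b, a
b.next, c.prev = c, b

head = reverse(a)
values = []
while head is not None:
    values.append(head.data)
    head = head.next
# values == [3, 2, 1]
```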
| DAY 301 ~ 400/DAY316_[HackerRank] Reverse a doubly linked list (Python).ipynb |
# +
import _init_paths
from fast_rcnn.train import get_training_roidb, train_net
from fast_rcnn.config import cfg, cfg_from_file, cfg_from_list, get_output_dir
from datasets.factory import get_imdb
import datasets.imdb
import caffe
import argparse
import pprint
import numpy as np
import sys
import zl_config as C
from fast_rcnn.test import im_detect
import matplotlib.pyplot as plt
from fast_rcnn.nms_wrapper import nms
CLASSES = ('__background__',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat', 'chair',
'cow', 'diningtable', 'dog', 'horse',
'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor')
# +
def combined_roidb(imdb_names):
def get_roidb(imdb_name):
imdb = get_imdb(imdb_name)
print 'Loaded dataset `{:s}` for training'.format(imdb.name)
imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD)
print 'Set proposal method: {:s}'.format(cfg.TRAIN.PROPOSAL_METHOD)
roidb = get_training_roidb(imdb)
return roidb
roidbs = [get_roidb(s) for s in imdb_names.split('+')]
roidb = roidbs[0]
if len(roidbs) > 1:
for r in roidbs[1:]:
roidb.extend(r)
imdb = datasets.imdb.imdb(imdb_names)
else:
imdb = get_imdb(imdb_names)
return imdb, roidb
cfg_from_file('experiments/cfgs/rfcn_end2end.yml')
# -
imdb, roidb = combined_roidb('voc_0712_test')
# +
import cv2
ann = roidb[9]
im = cv2.imread(ann['image'])
idx = 0
for bb in ann['boxes']:
cv2.rectangle(im,(bb[0],bb[1]),(bb[2],bb[3]),(0,255,0),1)
cv2.imshow('im2',im)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
net =None
cfg.TEST.HAS_RPN=False
prototxt = 'models/pascal_voc/ResNet-50/rfcn_end2end/test_no_rpn.prototxt'
model = 'data/rfcn_models/resnet50_rfcn_iter_800.caffemodel'
caffe.set_mode_gpu()
caffe.set_device(0)
net = caffe.Net(prototxt, model, caffe.TEST)
# +
print ann['image']
#im = cv2.imread('data/demo/004545.jpg')
im = cv2.imread(ann['image'])
print ann['boxes']
scores, boxes = im_detect(net, im,boxes=ann['boxes'])
# -
print boxes
# +
cls_score = net.blobs['cls_score'].data.copy()
cls_score_reindexed_caffe= net.blobs['cls_score_reindexed'].data.copy()
vatt_caffe = net.blobs['vatt'].data.copy()
cls_score_tiled_caffe= net.blobs['cls_score_tiled'].data.copy()
cls_score_tiled_transposed_caffe = net.blobs['cls_score_tiled_transposed'].data.copy()
vatt_raw_caffe = net.blobs['vatt_raw'].data.copy()
attention_caffe = net.blobs['attention'].data.copy()
attention_tiled_caffe = net.blobs['attention_tiled'].data.copy()
cls_score_tiled_caffe = net.blobs['cls_score_tiled'].data.copy()
# helper: softmax is used below but not defined anywhere in this notebook
def softmax(x):
e = np.exp(x - x.max())
return e / e.sum()
cls_score_transposed = cls_score.transpose((1,0,2,3))
cls_score_reindexed = cls_score_transposed[15,...]
attention = softmax(cls_score_reindexed.squeeze())
rois = net.blobs['rois'].data
rois = rois/net.blobs['im_info'].data[0,2]
roi_scores = net.blobs['rois_score'].data
vatt = np.zeros((rois.shape[0],21,1,1),np.float32)
for i in range(vatt.shape[0]):
    vatt[i] += attention[i] * cls_score[i]
#vatt = vatt.sum(axis=0)
vatt_summed= vatt.sum(axis=0)
attention = net.blobs['attention'].data[:,0].squeeze()
ind = np.argsort(attention)[::-1]
attention = attention[ind]
rois_all = np.hstack((rois[:,1:],roi_scores))
rois_all = rois_all[ind]
for i in range(5):
    ascore = attention[i]
    roi = rois_all[i]
    cv2.rectangle(im, (int(roi[0]), int(roi[1])), (int(roi[2]), int(roi[3])), (255, 0, 0), 1)
cv2.imshow('im',im)
cv2.waitKey(0)
timer.toc()
# -
net = None
| tools/iccv_debug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Phantom-Ren/PR_TH/blob/master/FCM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Qh1LipO-Rfnr" colab_type="text"
# <center>
#
# # Pattern Recognition · Assignment 7 · Fuzzy Clustering (Fuzzy C-Means)
#
# #### 纪泽西 17375338
#
# #### Last Modified: 26 April 2020
#
# </center>
#
# <table align="center">
# <td align="center"><a target="_blank" href="https://colab.research.google.com/github/Phantom-Ren/PR_TH/blob/master/FCM.ipynb">
# <img src="http://introtodeeplearning.com/images/colab/colab.png?v2.0" style="padding-bottom:5px;" /><br>Run in Google Colab</a></td>
# </table>
#
# + [markdown] id="c2Uxa_o7h6Gu" colab_type="text"
# ## Part 1: Importing Libraries and the Dataset
#
# #### To run in another environment, change the path to the dataset.
# + id="efncgxIJihrR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 252} outputId="3920f763-54d9-43a5-8e82-02340242f05a"
# !pip install -U scikit-fuzzy
# + id="qoesIwOVReii" colab_type="code" outputId="00ac06ed-08bc-45c8-88e0-974a6686ce51" colab={"base_uri": "https://localhost:8080/", "height": 67}
# %tensorflow_version 2.x
import tensorflow as tf
import sklearn
from sklearn.metrics import confusion_matrix
from skfuzzy.cluster import cmeans
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
import glob
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from time import *
import os
import scipy.io as sio
# %cd /content/drive/My Drive/Pattern Recognition/Dataset/cell_dataset
# + id="OaiNnnMV5lxq" colab_type="code" colab={}
x_train = np.load("x_train.npy")
y_train = np.load("y_train.npy")
x_test = np.load("x_test.npy")
y_test = np.load("y_test.npy")
# + id="ymxT80_K69VK" colab_type="code" outputId="75d5842e-63df-495e-a356-cd777116f1f9" colab={"base_uri": "https://localhost:8080/", "height": 67}
print(x_train.shape,x_test.shape)
print(np.unique(y_test))
print(np.bincount(y_test.astype(int)))
# + [markdown] id="o3kA6PCpiW3t" colab_type="text"
# ## Part 2: Data Preprocessing
# + id="iICgLZqY_jfN" colab_type="code" outputId="6c60e256-38c9-4c49-91bc-df1d327f3408" colab={"base_uri": "https://localhost:8080/", "height": 34}
x_train = x_train.reshape(x_train.shape[0],-1)
x_test = x_test.reshape(x_test.shape[0],-1)
x_train = x_train/255.0
x_test = x_test/255.0
print(x_train.shape,x_test.shape)
# + [markdown] id="FuNXbVy7jZd-" colab_type="text"
# ## Part 3: Model Building
#
# + [markdown] id="3xdDqGmgj8zE" colab_type="text"
# The skfuzzy module notes that cmeans clustering may run into problems with high-dimensional feature data, so we use the AutoEncoder from [Assignment 5: Cell Clustering](https://colab.research.google.com/github/Phantom-Ren/PR_TH/blob/master/细胞聚类.ipynb) to reduce the feature dimensionality.
# + id="LBLIJ3HqfRa3" colab_type="code" colab={}
encoding_dim = 10
# + id="IBdr-xU5w2XK" colab_type="code" colab={}
encoder = tf.keras.models.Sequential([
tf.keras.layers.Dense(128,activation='relu') ,
tf.keras.layers.Dense(32,activation='relu') ,
tf.keras.layers.Dense(8,activation='relu') ,
tf.keras.layers.Dense(encoding_dim)
])
decoder = tf.keras.models.Sequential([
tf.keras.layers.Dense(8,activation='relu') ,
tf.keras.layers.Dense(32,activation='relu') ,
tf.keras.layers.Dense(128,activation='relu') ,
tf.keras.layers.Dense(2601,activation='sigmoid')
])
AE = tf.keras.models.Sequential([
encoder,
decoder
])
# + id="13MOK8nJxz2K" colab_type="code" colab={}
AE.compile(optimizer='adam',loss='binary_crossentropy')
# + id="3M-WJ_kIlxy4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 370} outputId="ebb91fe0-bfbe-4ae5-fa5d-578a9cf51096"
AE.fit(x_train,x_train,epochs=10,batch_size=256)
# + id="Du5dmrdeyj_E" colab_type="code" colab={}
x_encoded = encoder.predict(x_train)
x_encoded_test = encoder.predict(x_test)
# + id="7A11E9cwnnDB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="62a686a3-b24f-4332-e54e-71272e6835fa"
x_encoded_t = x_encoded.T
print(x_encoded_t.shape)
# + id="JL0md-VAjgx6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="b74f5dcf-d916-4b45-f0ac-5729903f4a0f"
st=time()
center, u, u0, d, jm, p, fpc = cmeans(x_encoded_t, m=2, c=8, error=0.0005, maxiter=1000)
et=time()
print('Time Usage:',et-st,'s')
print('Numbers of iterations used:',p)
# The membership matrix u has one row per cluster; the hard assignment is the argmax over clusters.
yhat = np.argmax(u, axis=0)
# + id="dGx0YhSTrGab" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 302} outputId="8a350381-ee1b-48dc-f0c6-3f581ce7d20d"
print(center)
print(center.shape)
# + id="HxpJYFs5nAss" colab_type="code" colab={}
from sklearn.metrics import fowlkes_mallows_score
def draw_confusionmatrix(ytest, yhat):
    plt.figure(figsize=(10, 7))
    cm = confusion_matrix(ytest, yhat)
    ax = sns.heatmap(cm, annot=True, fmt="d")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    acc = accuracy_score(ytest, yhat)
    score_f = fowlkes_mallows_score(ytest, yhat)
    print(f"Sum Axis-1 as Classification accuracy: {acc}")
    print('F-Score:', score_f)
# + id="lM9ebYgvo5Mn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 482} outputId="496961e7-fc71-4845-a3ed-1bb8cebcbfef"
draw_confusionmatrix(y_train,yhat)
# + colab_type="code" outputId="66c598a9-50dd-4c41-f307-8eb73626d54e" id="ND7M4V6u8Dkv" colab={"base_uri": "https://localhost:8080/", "height": 482}
temp = [2, 2, 2, 2, 1, 0, 1, 1]
# Map each of the 8 cluster ids to its class label in one vectorized step.
y_hat1 = np.asarray(temp)[yhat]
draw_confusionmatrix(y_train, y_hat1)
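#
# The cluster-to-class mapping `temp` above was chosen by hand. A mapping like it could also be derived automatically by majority vote over the training labels; a sketch (hypothetical helper, not part of the original assignment):

```python
import numpy as np

def majority_vote_mapping(cluster_ids, true_labels, n_clusters):
    # For each cluster, pick the most frequent true label among its members.
    mapping = np.zeros(n_clusters, dtype=int)
    for c in range(n_clusters):
        members = true_labels[cluster_ids == c]
        if len(members) > 0:
            mapping[c] = np.bincount(members).argmax()
    return mapping
```

# With this helper, `y_hat1` could be computed as `majority_vote_mapping(yhat, y_train.astype(int), 8)[yhat]`.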
# + [markdown] id="8Iqyw2dIqhZk" colab_type="text"
# Compared with K-means clustering, the result improves considerably (61% -> 67%). Relative to supervised learning methods, however, the result is still unsatisfactory.
| FCM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from IPython.display import Image
# ### The project follows from https://github.com/rjleveque/geoclaw_tides/tree/main/GraysHarbor
# In Dr. Randall LeVeque's example, the tides are forced inside the domain, and the boundary condition is implemented as zero-order extrapolation.
#
# The maximal tidal forcing is implemented from longitude -124.7 to -124.3 and a linear tapered forcing is implemented from longitude -124.3 to -124.19.
#
# We call this method ocean forcing.
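#
# The tapered forcing above can be sketched as a weight function of longitude (the longitudes come from the text; the exact taper used in the GeoClaw setup is an assumption):

```python
def forcing_weight(lon):
    # Full tidal forcing between longitude -124.7 and -124.3,
    # linearly tapered down to zero between -124.3 and -124.19.
    if -124.7 <= lon <= -124.3:
        return 1.0
    if -124.3 < lon < -124.19:
        return (-124.19 - lon) / (-124.19 - (-124.3))
    return 0.0
```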
# ### Simple and Riemann
#
# We explore two ways of implementing a tidal signal at the boundary; we call the two methods Simple and Riemann.
#
# Geoclaw breaks the domain into small rectangular cells; label them $(i,j)$ by location, and define
#
# $ s(t): $ tidal signal
#
# val$ (1, i, j):$ water height on $ (i,j)$ cell
#
# val$ (2, i, j):$ velocity in x direction on $ (i,j)$ cell
#
# val$ (3, i, j):$ velocity in y direction on $ (i,j)$ cell
#
# $\Delta t:$ next jump in time
#
# ### Simple
# (i, j) is the boundary ghost cell
#
# jump_h$= (s(t + \Delta t) - s(t))$.
#
# #### Code:
#
# val(1, i, j) = val(1, i + 1, j) + jump_h
#
# val(2, i, j) = val(2, i + 1, j)
#
# val(3, i, j) = 0
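#
# The Simple ghost-cell update above can be sketched in Python (the array layout mirrors the pseudocode; GeoClaw's actual boundary routine is Fortran, so the names and 0-based indexing here are illustrative):

```python
import numpy as np

def fill_ghost_simple(val, jump_h, j):
    # Ghost cell at i=0 copies the first interior cell at i=1:
    # water height gets the tidal jump added, x-velocity is copied,
    # and y-velocity is set to zero.
    val[0, 0, j] = val[0, 1, j] + jump_h
    val[1, 0, j] = val[1, 1, j]
    val[2, 0, j] = 0.0
```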
#
# ### Riemann
#
# Geoclaw uses a shallow water Riemann solver, so this method sets $h_m - h_r$ in the Riemann problem to the tidal signal.
#
# Ignoring the velocity in the y direction (since the tidal signal is not traveling in the y direction), the 1D shallow water equations are
#
#
# $\begin{aligned} h_{t}+(h u)_{x} &=0 \\(h u)_{t}+\left(h u^{2}+\frac{1}{2} g h^{2}\right)_{x} &=0 . \end{aligned}$
#
#
#
#
# The Riemann problem for the 1D shallow water equations involves being given $q_{l}$ and $q_{r}$ and solving for $q_{m}$:
#
#
# $q_{l}=\left[\begin{array}{c}h_{l} \\ u_{l}h_{l} \end{array}\right],\quad q_{m}=\left[\begin{array}{c}h_{m} \\ u_{m}h_{m}\end{array}\right], \quad q_{r}=\left[\begin{array}{c}h_{r} \\ u_{r}h_{r} \end{array}\right]$
#
# In our example, $q_{l}$ holds the values of the ghost cell that we want to set, and $q_{r}$ is the information given inside the physical domain next to the ghost cell.
#
# We assume $h_m - h_r = $ jump_h $= (s(t + \Delta t) - s(t))$.
#
# So in our example the question is: given $q_r$, set $q_l$ such that $h_m = h_r +$ jump_h; we also want the 1-wave not to travel to the right.
#
# The solution is not unique; the easiest way is to find the unique $u_m$, then set $q_l = q_m$.
#
# A website is attached below which is an interactive view of the Riemann problem, it lets you set $q_l$, $q_r$ and see $q_m$
#
# http://www.clawpack.org/riemann_book/phase_plane/shallow_water_small.html
#
#
# ##### Solve $u_m$
# Case 1: $h_{m}>h_{r}$
#
# By Lax entropy condition, we have a 2-shock if $h_{m}>h_{r}$
#
# For a shock wave moving at speed $s$, the Rankine-Hugoniot jump conditions give
#
# $$\begin{aligned} s\left(h_{r}-h_{m}\right) &=h_{r} u_{r}-h_{m} u_{m} \\ s\left(h_{r} u_{r}-h_{m} u_{m}\right) &=h_{r} u_{r}^{2}-h_{m} u_{m}^{2}+\frac{g}{2}\left(h_{r}^{2}-h_{m}^{2}\right) \end{aligned}$$
#
#
# Combining the two equations and letting $\alpha = h_m - h_r$,
#
# $$u_{m}=\frac{h_{r} u_{r}+\alpha\left[u_{r}-\sqrt{g h_{r}\left(1+\frac{\alpha}{h_{r}}\right)\left(1+\frac{\alpha}{2 h_{r}}\right)}\right]}{h_{m}}$$
#
#
# Case 2: $h_{m} < h_{r}$
#
# It's a 2-rarefaction. Since the variation within the rarefaction wave is at all points proportional to the corresponding eigenvector $r^{p}$, the solution can be found by solving $\tilde{q}^{\prime}(\xi)= r^{p}(\tilde{q}(\xi))$, where $\tilde{q}(\xi)$ is a parameterization of the solution.
#
# The eigenvectors are $r^{1}=\left[\begin{array}{c}1 \\ u-\sqrt{g h}\end{array}\right], \quad r^{2}=\left[\begin{array}{c}1 \\ u+\sqrt{g h}\end{array}\right]$
#
# Consider $r^{1}$, then $\tilde{q}^{\prime}(\xi)= r^{p}(\tilde{q}(\xi))$ is
#
#
#
# $$\begin{aligned} h^{\prime}(\xi) &=q_{1}^{\prime}(\xi)=1 \\(h u)^{\prime}(\xi) &=q_{2}^{\prime}(\xi)=u \pm \sqrt{g h}=\tilde{q}_{2} / \tilde{q}_{1}-\sqrt{g \tilde{q}_{1}} \end{aligned}$$
#
#
# Fixing $(u_r,u_r h_r)$,
#
# $$ u_{m}= \frac{h_{m} u_{r}-2 h_{r}\left(\sqrt{g h_{r}}-\sqrt{g h_{m}}\right)}{h_{m}}$$
#
#
# So the logic of setting ghost cells goes like this,
#
# #### Code:
#
# jump_h$= (s(t + \Delta t) - s(t))$.
#
# val(3, i, j) = 0
#
# val(1, i, j) = val(1, nxl + 1, j) + jump_h
#
# h_r = val(1, nxl + 1, j)
#
# h_m = val(1, i, j)
#
# u_r = val(2, nxl + 1, j)
#
# if (h_r < h_m)
#
# val(2, i, j) = (h_r*u_r + jump_h*(u_r - sqrt(9.81*h_r*(1+jump_h/h_r)*(1+jump_h/(2*h_r)))))/h_m
#
# else
#
# val(2, i, j) = (h_m*u_r - 2*h_r*(sqrt(9.81*h_r)-sqrt(9.81*h_m)))/h_m
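#
# The two-case logic above, transcribed to Python (a sketch that follows the document's formulas verbatim, with $g = 9.81$):

```python
import math

def riemann_um(h_r, u_r, jump_h, g=9.81):
    # Middle-state velocity for the ghost cell, given the interior
    # state (h_r, u_r) and the tidal height jump h_m - h_r.
    h_m = h_r + jump_h
    if h_r < h_m:
        # 2-shock branch (from the Rankine-Hugoniot conditions).
        return (h_r * u_r + jump_h * (u_r - math.sqrt(
            g * h_r * (1 + jump_h / h_r) * (1 + jump_h / (2 * h_r))))) / h_m
    # 2-rarefaction branch, as written in the pseudocode above.
    return (h_m * u_r - 2 * h_r * (math.sqrt(g * h_r) - math.sqrt(g * h_m))) / h_m
```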
| GraysHarborBC/3Methods.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: drlnd
# language: python
# name: drlnd
# ---
# # Continuous Control
#
# ---
#
# In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
#
# ### 1. Start the Environment
#
# We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
from unityagents import UnityEnvironment
import numpy as np
# Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
#
# - **Mac**: `"path/to/Reacher.app"`
# - **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`
# - **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`
# - **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`
# - **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`
# - **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`
# - **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`
#
# For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
# ```
# env = UnityEnvironment(file_name="Reacher.app")
# ```
env = UnityEnvironment(file_name='Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64')
# Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
print('Brain name is:', brain_name)
# ### 2. Examine the State and Action Spaces
#
# In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.
#
# The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`.
#
# Run the code cell below to print some information about the environment.
# +
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
# -
# ### 3. Take Random Actions in the Environment
#
# In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
#
# Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment.
#
# Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
# +
# env_info = env.reset(train_mode=False)[brain_name] # reset the environment
# states = env_info.vector_observations # get the current state (for each agent)
# scores = np.zeros(num_agents) # initialize the score (for each agent)
# while True:
# actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
# actions = np.clip(actions, -1, 1) # all actions between -1 and 1
# env_info = env.step(actions)[brain_name] # send all actions to the environment
# next_states = env_info.vector_observations # get next state (for each agent)
# rewards = env_info.rewards # get reward (for each agent)
# dones = env_info.local_done # see if episode finished
# scores += env_info.rewards # update the score (for each agent)
# states = next_states # roll over states to next time step
# if np.any(dones): # exit loop if episode finished
# break
# print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
# -
# When finished, you can close the environment.
# +
# env.close()
# -
# ### 4. It's Your Turn!
#
# Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
# ```python
# env_info = env.reset(train_mode=True)[brain_name]
# ```
# This section describes the design of the agent and the training process. Hyper-parameters not mentioned here are described in the comments of the code block containing the actual config dict.
#
# #### Agent and model structure
# We train a DDPG (deep deterministic policy gradients) agent (described in paper [Continuous control with deep reinforcement learning](https://arxiv.org/abs/1509.02971)) to perform the Reacher task. The formal description from this paper is as following:
#
# 
#
#
#
# The actor and critic are two disjoint feed-forward neural networks, each with 3 hidden layers. Each hidden layer uses ReLU as its activation, and there is no additional activation function on the output. In our final configuration the actor has (256, 128, 64) units in its three hidden layers, with about 67k parameters (according to `torchsummary`). The critic has (512, 256, 128) hidden units, yielding about 218k parameters.
#
# The weights of the neural nets are initialized as orthogonal vectors (with `nn.init.orthogonal_` method) and the parameters are trained with Adam optimizers.
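#
# The parameter counts quoted above can be checked by hand. With action repeat 3 the network input is 33 * 3 = 99 values; assuming the critic takes the state-action concatenation (99 + 4 = 103) at its input layer (an assumption, since the exact wiring lives in `src`), counting weights and biases of the fully-connected stacks gives:

```python
def mlp_params(layer_sizes):
    # Weights (n_in * n_out) plus biases (n_out) per fully-connected layer.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

actor_params = mlp_params([99, 256, 128, 64, 4])     # about 67k
critic_params = mlp_params([103, 512, 256, 128, 1])  # about 218k
```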
#
# #### Training strategy
#
# We apply action repeat strategy to train the agent. The input vector to both actor and critic is the concatenation of three consecutive observation vectors (99 dimensions in total), and each action is also repeated 3 times to interact with the environment. This is an important strategy to make the neural net achieve better performance after convergence.
#
# We also apply gradient clipping to make the training process more stable. Without gradient clipping and with relatively high learning rates, the agent starts to perform very badly after a certain point: the 100-episode average reward drops from the range 6~10 down to the range 0~1. But if a low learning rate is configured, the agent learns too slowly (it does not converge after 5000 episodes).
#
# The Ornstein-Uhlenbeck noise is added to the actor with a fixed discount rate of 0.9995 per step. After about 15000 steps the noise level is reduced to <0.001 and is negligible afterwards.
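#
# The decay schedule can be sanity-checked: 0.9995**15000 is about 5.5e-4, which matches the "<0.001 after about 15000 steps" claim. A sketch of OU noise with a per-step scale discount (the theta/sigma values are illustrative, not the ones used in `src`):

```python
import numpy as np

class DecayingOUNoise:
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, discount=0.9995, seed=0):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.discount = discount
        self.scale = 1.0
        self.state = np.full(size, mu)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # One Ornstein-Uhlenbeck step, then shrink the overall scale.
        dx = (self.theta * (self.mu - self.state)
              + self.sigma * self.rng.standard_normal(self.state.shape))
        self.state = self.state + dx
        self.scale *= self.discount
        return self.scale * self.state
```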
# %load_ext autoreload
# +
# %autoreload 2
from src import *
env_info = env.reset(train_mode=True)[brain_name]
wrapped_env = EnvWrapper(unity_env=env, brain_name=brain_name)
config = {
# Configs about the task.
# The dimension of the observation (before action repeat).
'state_size': 33,
# The dimension of action.
'action_size': 4,
# The bound of actions.
'out_low': -1.,
'out_high': 1.,
# Configs for the agent.
# The soft update rate for DDPG agent.
'tau': 1e-2,
# The coefficient in reward function.
'gamma': 0.98,
# The scaling factor of the initial weights.
'init_weight_scale': 0.3,
# The max norm of gradient for each optimization step.
'grad_clip': 20.,
# The structure of actor's hidden layers.
'actor_hidden': [256, 128, 64],
# The initial learning rate of the actor.
'actor_lr': 3e-4,
# The structure of critic's hidden layers.
'critic_hidden': [512, 256, 128],
# The initial learning rate of the critic.
'critic_lr': 1e-4,
# Repeat each action 3 times and concatenate 3 consecutive observations
# as actual input fed into the actor/critic networks.
'action_repeat': 3,
# Configs for the training process.
# The discount rate of OU noise.
'noise_discount': 0.9995,
# The seed for random initializations, including the model weights
# and the noise.
'seed': 1317317,
# The size of replay buffer.
'buffer_size': 1000 * 1000,
# The batch size for each update.
'batch_num': 32,
# The maximum number of episodes to train.
'max_episode_num': 2000,
# The maximum number of steps to train.
'max_step_num': 1e8,
# Only update the model every learn_interval. It turned out that
# this hyper-parameter is not helpful to make training more stable
# so just set it as 1 to learn at every new step.
'learn_interval': 1,
# Configs for logging.
# The window size to calculate average rewards.
'window_size': 100,
# The output directory of models.
'model_dir': './reacher_model',
# The file to log the reward trajectory.
'log_file': './reacher_log.pickle',
# The interval to print out the rewards.
'log_interval': 50,
}
print(brain_name)
agent = DDPGAgent(config)
TrainDDPG(wrapped_env, agent, config)
# -
# In the logged lines above, each line has 5 numbers: the episode number, the reward of this episode, the 100-episode average reward (in the parenthesis), the running time of this episode and the 100-episode average running time (in the parenthesis).
# +
# %matplotlib inline
import matplotlib.pyplot as plt
def plot_scores(raw_scores, smooth_scores, figsize=(8, 6)):
    """
    Plots the raw scores and smoothed scores in one single figure.
    """
    assert len(raw_scores) == len(smooth_scores)
    fig = plt.figure(figsize=figsize)
    ax = fig.add_subplot(111)
    plt.plot(np.arange(len(raw_scores)), raw_scores, label='Raw scores')
    plt.plot(np.arange(len(smooth_scores)), smooth_scores, label='Smooth scores (window=100)')
    plt.ylabel('Score')
    plt.xlabel('Episode #')
    plt.legend()
    plt.grid()
    plt.show()
import pickle
logs = pickle.load(open('./reacher_log.pickle', 'rb'))
raw, smoothed = logs.get_all_rewards()
plot_scores(raw, smoothed)
# -
# The figure above shows the rewards the agent receives and the average reward over 100 episodes (version 1). According to this figure, the performance of the agent converges after about 1000 episodes of training and remains around 38 afterwards. The agent is considered to have reliably solved this task because it consistently achieves 30+ rewards after convergence (with a small number of exceptions).
#
# To further improve the performance, the simplest way is to make the networks wider (adding more hidden units per layer). This will help the model better approximate the reward function and get better value estimates. I don't know whether making the networks deeper will work; in my experience, when there are more than 3 hidden layers the training process is slower and less stable (maybe I just didn't find the proper hyper-parameters).
#
# Another potential improvement is to add N-step estimation and prioritized experience replay to make the agent learn faster in the early stages.
| Report.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="h2VAgfQ86pmr"
#IMPORTING THE LIBRARIES
from dateutil.parser import parse
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import calendar
# + [markdown] id="3rMnhXH2-85Z"
# **IMPORTING THE DATA**
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="lIvBfJDP6qin" outputId="07ad4c88-c562-4b44-c6b2-0a0f6afc59ba"
#Importing the Data
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv')
df.head(10)
# + [markdown] id="eF1v4Gv__B4-"
# **VARIOUS PLOTS FOR TIME SERIES**
#
# + colab={"base_uri": "https://localhost:8080/", "height": 480} id="SAgArzkC6-Ju" outputId="08228a69-b59a-429d-b6e9-09cf7cc9c3ca"
def plot_df(df, x, y, title="", xlabel='Date', ylabel='Value', dpi=100):
    plt.figure(figsize=(16, 5), dpi=dpi)
    plt.plot(x, y, color='tab:red')
    plt.gca().set(title=title, xlabel=xlabel, ylabel=ylabel)
    plt.show()
plot_df(df, x=df.index, y=df.value, title='Monthly anti-diabetic drug sales in Australia from 1992 to 2008.')
# + colab={"base_uri": "https://localhost:8080/", "height": 814} id="RynuiV-f7I3u" outputId="397ef518-fb4d-4860-e725-a9cf8be7b901"
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv', parse_dates=['date'], index_col='date')
df.reset_index(inplace=True)
# Prepare data
df['year'] = [d.year for d in df.date]
df['month'] = [d.strftime('%b') for d in df.date]
years = df['year'].unique()
# Prep Colors
np.random.seed(100)
mycolors = np.random.choice(list(mpl.colors.XKCD_COLORS.keys()), len(years), replace=False)
# Draw Plot
plt.figure(figsize=(16,12), dpi= 80)
for i, y in enumerate(years):
    if i > 0:
        plt.plot('month', 'value', data=df.loc[df.year==y, :], color=mycolors[i], label=y)
        plt.text(df.loc[df.year==y, :].shape[0]-.9, df.loc[df.year==y, 'value'][-1:].values[0], y, fontsize=12, color=mycolors[i])
# Decoration
plt.gca().set(xlim=(-0.3, 11), ylim=(2, 30), ylabel='$Drug Sales$', xlabel='$Month$')
plt.yticks(fontsize=12, alpha=.7)
plt.title("Seasonal Plot of Drug Sales Time Series", fontsize=20)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="DiL20Hu17WA_" outputId="b735b2d5-9aa3-4a29-f0f2-fcc76e1e88d4"
#Importing the Data
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv')
df.head(10)
#DOT PLOT
df.plot(style='k.',title = "Dot Plot of Drug Sales Time Series")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="fkz9sOAD8DnZ" outputId="ac063fdf-3af0-4364-ed1a-ae2befa90181"
#HISTOGRAM
df.hist()
plt.title( "Histogram of Drug Sales Time Series")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="ITWCGnGi8a-e" outputId="9bab3686-8bcf-4b3e-8db6-f03594ed3d6f"
#DENSITY PLOT
df.plot(kind='kde')
plt.title( "Density Plot of Drug Sales Time Series")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 531} id="NWNh8VC88qCw" outputId="59c8b742-4d0c-4681-e8c5-c5cf7368c531"
# Import Data
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv', parse_dates=['date'], index_col='date')
df.reset_index(inplace=True)
# Prepare data
df['year'] = [d.year for d in df.date]
df['month'] = [d.strftime('%b') for d in df.date]
years = df['year'].unique()
# Draw Plot
fig, axes = plt.subplots(1, 2, figsize=(20,7), dpi= 80)
sns.boxplot(x='year', y='value', data=df, ax=axes[0])
sns.boxplot(x='month', y='value', data=df.loc[~df.year.isin([1991, 2008]), :], ax=axes[1])
# Set Title
axes[0].set_title('Year-wise Box Plot\n(The Trend)', fontsize=18);
axes[1].set_title('Month-wise Box Plot\n(The Seasonality)', fontsize=18)
plt.show()
# + id="x4sZk31N-PMX"
| notebooks/Time_Series_Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div style="width: 100%; overflow: hidden;">
# <div style="width: 150px; float: left;"> <img src="data/D4Sci_logo_ball.png" alt="Data For Science, Inc" align="left" border="0"> </div>
# <div style="float: left; margin-left: 10px;"> <h1>Causal Inference In Statistics - A Primer</h1>
# <h1>3.2 The Adjustment Formula</h1>
# <p><NAME><br/>
# <a href="http://www.data4sci.com/">www.data4sci.com</a><br/>
# @bgoncalves, @data4sci</p></div>
# <div style="float: right; margin-right:10px;"> <p><a href="https://amzn.to/3gsFlkO" target=_blank><img src='data/causality.jpeg' width='100px'>
# <!--Amazon Affiliate Link--></a></p></div>
# </div>
# +
from collections import Counter
from pprint import pprint
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from CausalModel import CausalModel
import watermark
# %load_ext watermark
# %matplotlib inline
# -
# We start by print out the versions of the libraries we're using for future reference
# %watermark -n -v -m -g -iv
# Load default figure style
plt.style.use('./d4sci.mplstyle')
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
# +
G = CausalModel()
G.add_causation('X', 'Z')
G.add_causation('Z', 'Y')
G.add_causation('X', 'Y')
G.pos = {'Z': (0, 1), 'X': (-1, 0), 'Y':(1, 0)}
# -
fig, ax = plt.subplots(1, figsize=(3, 2.5))
G.plot(ax=ax)
G.save_model('dags/Primer.Fig.3.5.dot')
G = CausalModel('dags/Primer.Fig.2.9.dot')
fig, ax = plt.subplots(1, figsize=(2.2, 2.2))
G.plot(ax=ax)
Gx = G.intervention_graph('X')
Gxz3 = Gx.intervention_graph('Z3')
Gx.dag.nodes
fig, ax = plt.subplots(1, figsize=(2.2, 2.2))
Gxz3.plot(ax=ax)
# <div style="width: 100%; overflow: hidden;">
# <img src="data/D4Sci_logo_full.png" alt="Data For Science, Inc" align="center" border="0" width=300px>
# </div>
| 3.2 - The Adjustment Formula.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: deep_learnt_controls
# language: python
# name: deep_learnt_controls
# ---
from rocket import Rocket
from stop import STOP
import pandas as pd
import pickle
from datetime import datetime
import numpy as np
import random
import os
import re
def gen_trajectory():
    r = Rocket(disperse=True, alpha=1)
    r.lunar_guidance_live(h_ratio=2)
    df_s = pd.DataFrame([r.x, r.z, r.vx, r.vz, r.m]).transpose()
    df_c = pd.DataFrame([r.u1_mag, r.u2_angle]).transpose()
    return r, df_s, df_c
list(np.arange(0, 50, step=10))
# +
path = 'data_apollo'
try:
    index = np.max([[int(s) for s in re.findall(r'\d+', d)] for d in os.listdir(path)])[0]
except:
    index = 0
t_start = datetime.now()
print(f'Time: {str(datetime.now())}')
while index < 5000:
    r, df_s, df_c = gen_trajectory()
    # check for u mag, x distance, and cone angle
    if max(r.u1_mag) < 1 and max([abs(x) for x in r.x]) < 600 and len(set([r.z[i] > abs(r.x[i]) for i in range(len(r.x)-5)])) == 1 and r.lunar_success:
        # downsample by 10
        n = len(df_s)
        samp = list(np.arange(0, n, step=10))
        df_s = df_s.iloc[samp]
        df_c = df_c.iloc[samp]
        df_s.to_csv(f'{path}/{index}_sim_s.csv', header=True, index=False)
        df_c.to_csv(f'{path}/{index}_sim_c.csv', header=True, index=False)
        index += 1
        if index % 25 == 0:
            t_compute = (datetime.now() - t_start).total_seconds()
            print(f' index: {index}, cumm_t_compute: {t_compute} seconds')
t_compute = (datetime.now() - t_start).total_seconds()
print(f't_compute: {t_compute} seconds')
# -
samp.sort()
samp
df_s.iloc[samp].sort_index()
l = [4,55,67]
l.insert(0, 0)
l
| Simple_Spacecraft/generate_training_data-apollo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shivammehta007/QuestionGenerator/blob/master/BaseModel-RNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BLtWBqUscOtx" colab_type="text"
# # Question Generation
# + [markdown] id="T808ilq_g2bm" colab_type="text"
# Additional Dependencies
# + id="3LU58nY0cTuk" colab_type="code" colab={}
# %%capture
# !pip install fairseq
# !pip install sacremoses subword_nmt
# !pip install -U tqdm
# + id="lbHwJtHvqrvR" colab_type="code" colab={}
import os
import json
import logging
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import random
# + id="WGJ7wyH392a7" colab_type="code" colab={}
# For results duplication
SEED=1234
random.seed(SEED)
# + id="71OqW0Y1BbN0" colab_type="code" colab={}
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
# + [markdown] id="ewFG2QxZqhdN" colab_type="text"
# ## DataSet
# + id="DCxOp7LEcGlf" colab_type="code" outputId="6e5e565a-544a-460e-9df9-a2a568a30f2d" colab={"base_uri": "https://localhost:8080/", "height": 34}
SQUAD_DIR = '/content/drive/My Drive/Colab Notebooks/SQuAD'
SQUAD_TRAIN = os.path.join(SQUAD_DIR, 'train_v2.json')
# SQUAD_DEV = os.path.join(SQUAD_DIR, 'dev.json')
SQUAD_TEST = os.path.join(SQUAD_DIR, 'test_v2.json')
print(SQUAD_TRAIN, SQUAD_TEST) # , SQUAD_DEV
# + id="iTVjKnG6q4qR" colab_type="code" colab={}
with open(SQUAD_TRAIN) as train_file:
train_data = json.load(train_file)
train_data = train_data['data']
with open(SQUAD_TEST) as test_file:
test_data = json.load(test_file)
test_data = test_data['data']
# + [markdown] id="xrg9adeKTgZk" colab_type="text"
# ### PreProcessing Function
# + id="DsZ0jvvqTfIA" colab_type="code" colab={}
def convert_to_file_without_answers(dataset, dataset_type='train', get_impossible=False):
"""
Takes an input json and generates dataset_type.paragraphs and dataset_type.questions
Input:
        dataset : list -> the parsed 'data' list from the SQuAD json
        dataset_type: string -> Type of dataset (train, test, valid)
        get_impossible: boolean -> Flag to also keep unanswerable questions
"""
para_output = open(dataset_type + '.paragraphs', 'w')
question_output = open(dataset_type + '.questions', 'w')
d = []
for paragraphs in tqdm(dataset):
paragraphs = paragraphs['paragraphs']
for i, paragraph in enumerate(paragraphs):
para = paragraph['context']
for questionanswers in paragraph['qas']:
if questionanswers['is_impossible']:
continue
question = questionanswers['question']
para = para.replace('\n', ' ')
para_output.write(para.strip() + '\n')
question_output.write(question.strip() + '\n')
d.append(i)
print(len(d))
para_output.close()
question_output.close()
# + id="XOVyuJg2Tfe-" colab_type="code" outputId="de869fa7-587b-4101-c46e-54d4a42dca23" colab={"base_uri": "https://localhost:8080/", "height": 149, "referenced_widgets": ["ce636476e2f24e65aec55a1b4484cc87", "7e3e9db0339c4e69a6f3d2b89b4d1f95", "320245b654414474bc01c70314e480d2", "1836b2e526414f1cb1b15ce2c9d6c214", "2e36ba0f1f1c4ed3b65979e8c97e1237", "151da38554244f4989c3cf056454df98", "427d9a74ae1c44499393e51b16b34e64", "<KEY>", "8e236021f9c543a9892d58c2a2454919", "<KEY>", "3e7d13611c0f43a9a447334a5574f11d", "389ee4e5879a453e8a36edf1b9f515bf", "825a409f0bbd4a5eb48678693b028d69", "ee7869d731a545c7b2ae4a68421f68c9", "00d0792fa81246178f032d246913c7c5", "268f457837ee4be09a395e6fdf366aec"]}
convert_to_file_without_answers(train_data, 'train')
convert_to_file_without_answers(test_data, 'test')
# + id="F3XnQgaS-Y_D" colab_type="code" colab={}
def split_train_valid(filename_paragraph='train.paragraphs', filename_questions='train.questions', split_ratio=0.8):
"""Splits the train set to a validation set"""
with open(filename_paragraph) as paragraphs_file, open(filename_questions) as questions_file:
data_paragraphs = paragraphs_file.readlines()
data_questions = questions_file.readlines()
# Output files
train_paragraphs_file = open('train.paragraphs', 'w')
valid_paragraphs_file = open('valid.paragraphs', 'w')
train_questions_file = open('train.questions', 'w')
valid_questions_file = open('valid.questions', 'w')
train_count, valid_count = 0, 0
for i in tqdm(range(len(data_paragraphs))):
if random.random() < split_ratio:
train_paragraphs_file.write(data_paragraphs[i].strip() + '\n')
train_questions_file.write(data_questions[i].strip() + '\n')
train_count += 1
else:
valid_paragraphs_file.write(data_paragraphs[i].strip() + '\n')
valid_questions_file.write(data_questions[i].strip() + '\n')
valid_count += 1
logger.info('Total Trainset: {} | Total ValidSet: {}'.format(train_count, valid_count))
# + id="oTlUUJoABrrd" colab_type="code" outputId="65bfffd7-1a8f-4a3c-84a8-1514007575ed" colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["f9a0157258be4f8f82b4c3abc39e9af2", "619c90d6622b4cfc8b79e1bccb20f8c0", "3b8c530a2be14034b1b8af9a7cb178af", "75802289649044dcb3e061ccccf31c0e", "b92b51e08dd844d7b9b555154b6a5a02", "87556d40e1e74119951912c18ac07c31", "d9aa6d891a514bf5b91be3539cc31e2d", "2db2446654b94869a588efec78bbe7b1"]}
split_train_valid()
# + [markdown] id="aHjI0mvZDa7g" colab_type="text"
# ### Generate Binary of Dataset for FairSeq to process
# + id="8iuBiIy39Aaf" colab_type="code" colab={}
# + id="ahWXypAOTfc2" colab_type="code" outputId="0c00f229-09a3-4b8c-a03e-42582a61c47d" colab={"base_uri": "https://localhost:8080/", "height": 275}
# !fairseq-preprocess --source-lang paragraphs --target-lang questions \
# --trainpref train --testpref test --validpref valid\
# --destdir preprocessed_data --seed 1234 --nwordssrc 45002 --nwordstgt 28002
# + [markdown] id="NFfiV_VuDgGA" colab_type="text"
# ### Training a default ConvSeq2Seq Model
# + id="NcJ8LG6-TfaF" colab_type="code" colab={}
# # !fairseq-generate data-bin/iwslt14.tokenized.de-en \
# # --path checkpoints/fconv/checkpoint20.pt \
# # --batch-size 128 --beam 5
# !CUDA_VISIBLE_DEVICES=0 fairseq-train preprocessed_data/ \
# --lr 0.01 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
# --arch fconv_iwslt_de_en --save-dir checkpoints/fconv
# + id="wWcZSBfETfWi" colab_type="code" colab={}
# !fairseq-generate preprocessed_data \
# --path checkpoints/fconv/checkpoint43.pt \
# --batch-size 128 | tee gen.out
# + id="wv7roXL5_tra" colab_type="code" outputId="7b055c12-f3a6-48f0-b6d2-737ae233cacb" colab={"base_uri": "https://localhost:8080/", "height": 51}
# !grep ^H gen.out | cut -f3- > gen.out.sys
# !grep ^T gen.out | cut -f2- > gen.out.ref
# !fairseq-score --sys gen.out.sys --ref gen.out.ref
# + id="9u_JkoWBAt7m" colab_type="code" colab={}
# # %cd fairseq/examples/translation/
# # !bash prepare-iwslt14.sh
# # !fairseq-preprocess --source-lang de --target-lang en \
# # --trainpref examples/translation/iwslt14.tokenized.de-en/train --validpref examples/translation/iwslt14.tokenized.de-en/valid --testpref examples/translation/iwslt14.tokenized.de-en/test \
# # --destdir data-bin/iwslt14.tokenized.de-en
# # !mkdir -p checkpoints/fconv
# # !fairseq-train data-bin/iwslt14.tokenized.de-en \
# # --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
# # --arch fconv_iwslt_de_en --save-dir checkpoints/fconv
# + id="5kDLHksIJ4Ko" colab_type="code" colab={}
# # !fairseq-generate data-bin/iwslt14.tokenized.de-en \
# # --path checkpoints/fconv/checkpoint20.pt \
# # --batch-size 128 --beam 5
# + [markdown] id="MuCn_Bdhx8X_" colab_type="text"
# ### Trying Baseline LSTM Model
# + id="qe_OwaIVx-sR" colab_type="code" outputId="26f89ee4-5faf-45cc-9eca-ce7bfd542144" colab={"base_uri": "https://localhost:8080/", "height": 357}
# !wget http://nlp.stanford.edu/data/glove.840B.300d.zip
# + id="YngekyqK47M0" colab_type="code" outputId="75e7d329-27f8-43b4-c10f-be2debc013ca" colab={"base_uri": "https://localhost:8080/", "height": 51}
# !unzip glove.840B.300d.zip
# + id="k0BnmaSfKNLy" colab_type="code" outputId="7f3b8285-ae1f-478b-f089-ef6ceb8f696a" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !CUDA_VISIBLE_DEVICES=0 fairseq-train preprocessed_data/ \
# --lr 0.001 --clip-norm 5 --dropout 0.2 --batch-size 64 \
# --arch lstm --max-epoch 15 --encoder-hidden-size 600 --encoder-layers 2 \
# --decoder-hidden-size 600 --decoder-layers 2 --optimizer adam --dropout 0.3 --encoder-embed-path glove.840B.300d.txt \
#  --encoder-bidirectional --encoder-embed-dim 300 --decoder-embed-dim 300 --no-epoch-checkpoints --decoder-embed-path glove.840B.300d.txt --lr-shrink
# + id="rLdpT2pXMpq8" colab_type="code" colab={}
# !fairseq-generate preprocessed_data \
# --path checkpoints/checkpoint_last.pt \
# --batch-size 64 | tee gen.out
# + id="o0epZNcSNtIM" colab_type="code" colab={}
# !grep ^H gen.out | cut -f3- > gen.out.sys
# !grep ^T gen.out | cut -f2- > gen.out.ref
# !fairseq-score --sys gen.out.sys --ref gen.out.ref
| Miscellanious/BaseModel-RNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Agent state and action definition
# * State variable: x = [w, n, M, g_lag, e, s, (H, r, m), O_lag]; action variable: a = [c, b, k, i, q]. Both are numpy arrays.
# +
# %pylab inline
from scipy.interpolate import interpn
from helpFunctions import surfacePlot
import numpy as np
from multiprocessing import Pool
from functools import partial
import warnings
import math
warnings.filterwarnings("ignore")
np.set_printoptions(precision=2)  # np.printoptions is a context manager; set_printoptions applies globally
# time line
T_min = 0
T_max = 70
T_R = 45
# discounting factor
beta = 1/(1+0.02)
# utility function parameter
gamma = 2
# relative importance of housing consumption and non durable consumption
alpha = 0.8
# parameter used to calculate the housing consumption
kappa = 0.3
# depreciation parameter
delta = 0.025
# housing parameter
chi = 0.3
# uB associated parameter
B = 2
# # minimum consumption
# c_bar = 3
# constant cost
c_h = 0.5
# All the money amount are denoted in thousand dollars
earningShock = [0.8,1.2]
# Define transition matrix of economical states
# GOOD -> GOOD 0.8, BAD -> BAD 0.6
Ps = np.array([[0.6, 0.4],[0.2, 0.8]])
# current risk free interest rate
r_b = np.array([0.01 ,0.03])
# stock return depends on current and future econ states
# r_k = np.array([[-0.2, 0.15],[-0.15, 0.2]])
r_k = np.array([[-0.15, 0.20],[-0.15, 0.20]])
# expected return on stock market
# r_bar = 0.0667
r_bar = 0.02
# probability of survival
Pa = np.load("prob.npy")
# deterministic income
detEarning = np.load("detEarning.npy")
# probability of employment transition Pe[s, s_next, e, e_next]
Pe = np.array([[[[0.3, 0.7], [0.1, 0.9]], [[0.25, 0.75], [0.05, 0.95]]],
[[[0.25, 0.75], [0.05, 0.95]], [[0.2, 0.8], [0.01, 0.99]]]])
# tax rate before and after retirement
tau_L = 0.2
tau_R = 0.1
# constant state variables: Purchase value 250k, down payment 50k, mortgage 200k, interest rate 3.6%,
# 55 payment period, 8.4k per period. One housing unit is roughly 1 square feet. Housing price 0.25k/sf
# some variables associate with 401k amount
Nt = [np.sum(Pa[t:]) for t in range(T_max-T_min)]
Dt = [np.ceil(((1+r_bar)**N - 1)/(r_bar*(1+r_bar)**N)) for N in Nt]
# mortgage rate
rh = 0.036
D = [np.ceil(((1+rh)**N - 1)/(rh*(1+rh)**N)) for N in range(T_max-T_min)]
# owning a house
O_lag = 1
# housing unit
H = 100
# housing price constant
pt = 250/1000
# mortgage payment
m = H*pt / D[-1]
# 30k rent 1000 sf
pr = 30/1000
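# The mortgage payment `m` above divides the principal `H*pt` by the standard annuity
# factor D(N) = ((1+r)^N - 1)/(r*(1+r)^N). As a quick standalone sanity check (illustrative
# numbers only; the notebook additionally applies np.ceil to D), paying principal/D(N) per
# period for N periods at rate r should amortize the balance to zero:

```python
def annuity_factor(r, N):
    # present value of an N-period annuity of 1 at per-period rate r
    return ((1 + r) ** N - 1) / (r * (1 + r) ** N)

def remaining_balance(principal, r, N):
    # simulate paying the annuity payment each period
    payment = principal / annuity_factor(r, N)
    balance = principal
    for _ in range(N):
        balance = balance * (1 + r) - payment
    return balance

print(remaining_balance(25.0, 0.036, 55))  # ~0 up to floating-point error
```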
# +
#Define the utility function
def u(c):
    # subtract 1 so that u(1) = 0; note utility is still negative for c < 1
return (np.float_power(c, 1-gamma) - 1)/(1 - gamma)
#Define the bequeath function, which is a function of wealth
def uB(tb):
return B*u(tb)
#Calculate home equity (HE)
def calHE(x):
# change input x as numpy array
# w, n, M, g_lag, e, s = x
HE = (H+(1-chi)*(1-delta)*x[:,3])*pt - x[:,2]
return HE
#Calculate TB
def calTB(x):
# change input x as numpy array
# w, n, M, g_lag, e, s = x
TB = x[:,0] + x[:,1] + calHE(x)
return TB
def R(x, a):
'''
Input:
state x: w, n, M, g_lag, e, s
action a: c, b, k, i, q = a which is a np array
Output:
reward value: the length of return should be equal to the length of a
'''
w, n, M, g_lag, e, s = x
# c, b, k, i, q = a
# if q == 1:
# h = H + (1-delta)*g_lag + i
# Vh = (1+kappa)*h
# else:
# h = H + (1-delta)*g_lag
# Vh = (1-kappa)*(h-(1-q)*H)
# The number of reward should be the number of actions taken
reward = np.zeros(a.shape[0])
i_index = (a[:,4]==1)
ni_index = (a[:,4]!=1)
i_h = H + (1-delta)*g_lag + a[i_index][:,3]
i_Vh = (1+kappa)*i_h
ni_h = H + (1-delta)*g_lag
ni_Vh = (1-kappa)*(ni_h-(1-a[ni_index][:,4])*H)
i_C = np.float_power(a[i_index][:,0], alpha) * np.float_power(i_Vh, 1-alpha)
ni_C = np.float_power(a[ni_index][:,0], alpha) * np.float_power(ni_Vh, 1-alpha)
reward[i_index] = u(i_C)
reward[ni_index] = u(ni_C)
return reward
#Define the earning function, which applies for both employment and unemployment, good econ state and bad econ state
def y(t, x):
w, n, M, g_lag, e, s = x
if t <= T_R:
welfare = 5
return detEarning[t] * earningShock[int(s)] * e + (1-e) * welfare
else:
return detEarning[t]
#Earning after tax and fixed by transaction in and out from 401k account
def yAT(t,x):
yt = y(t, x)
w, n, M, g_lag, e, s = x
if t <= T_R and e == 1:
# 5% of the income will be put into the 401k
i = 0.05
return (1-tau_L)*(yt * (1-i))
if t <= T_R and e == 0:
return yt
else:
        # t > T_R: the amount n/Dt[t] is withdrawn from the 401k each period
return (1-tau_R)*yt + n/Dt[t]
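# As a small standalone check of the CRRA utility defined above: with gamma = 2 it reduces
# to u(c) = 1 - 1/c, and the "-1" shift makes u(1) = 0 (illustrative re-definition, not the
# notebook's shared state):

```python
import numpy as np

gamma = 2
def u(c):
    # CRRA utility, shifted so that u(1) = 0
    return (np.float_power(c, 1 - gamma) - 1) / (1 - gamma)

print(u(2.0))  # 0.5, since for gamma = 2, u(c) = 1 - 1/c
print(u(1.0))  # 0.0
```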
# +
#Define the evolution of the amount in 401k account
def gn(t, n, x, s_next):
w, n, M, g_lag, e, s = x
if t <= T_R and e == 1:
# if the person is employed, then 5 percent of his income goes into 401k
i = 0.05
n_cur = n + y(t, x) * i
elif t <= T_R and e == 0:
        # if the person is unemployed, then n does not change
n_cur = n
else:
        # t > T_R: the amount n/Dt[t] is withdrawn from the 401k
n_cur = n - n/Dt[t]
return (1+r_k[int(s), s_next])*n_cur
def transition(x, a, t):
'''
Input: state and action and time
Output: possible future states and corresponding probability
'''
w, n, M, g_lag, e, s = x
# variables used to collect possible states and probabilities
x_next = []
prob_next = []
M_next = M*(1+rh) - m
for aa in a:
c,b,k,i,q = aa
if q == 1:
g = (1-delta)*g_lag + i
else:
g = (1-delta)*g_lag
for s_next in [0,1]:
w_next = b*(1+r_b[int(s)]) + k*(1+r_k[int(s), s_next])
n_next = gn(t, n, x, s_next)
if t >= T_R:
e_next = 0
                x_next.append([w_next, n_next, M_next, g, e_next, s_next])  # keep the state ordering [w, n, M, g, e, s]
prob_next.append(Ps[int(s),s_next])
else:
for e_next in [0,1]:
                    x_next.append([w_next, n_next, M_next, g, e_next, s_next])  # keep the state ordering [w, n, M, g, e, s]
prob_next.append(Ps[int(s),s_next] * Pe[int(s),s_next,int(e),e_next])
return np.array(x_next), np.array(prob_next)
# +
# used to calculate dot product
def dotProduct(p_next, uBTB, t):
if t >= 45:
return (p_next*uBTB).reshape((len(p_next)//2,2)).sum(axis = 1)
else:
return (p_next*uBTB).reshape((len(p_next)//4,4)).sum(axis = 1)
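# For intuition, dotProduct returns one expected value per candidate action: after
# retirement each action has 2 possible next states (4 before retirement), and the flat
# probability-weighted products are reshaped action-by-state and summed within each row.
# A tiny hand-checkable sketch of the post-retirement case (made-up numbers):

```python
import numpy as np

# Two candidate actions, each with two possible next states,
# laid out action-major, state-minor (the order transition() produces).
p_next = np.array([0.6, 0.4, 0.3, 0.7])
values = np.array([1.0, 2.0, 3.0, 4.0])

expected = (p_next * values).reshape((len(p_next) // 2, 2)).sum(axis=1)
print(expected)  # [0.6*1+0.4*2, 0.3*3+0.7*4] = [1.4, 3.7]
```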
# Value function is a function of state and time t < T
def V(x, t, NN):
w, n, M, g_lag, e, s = x
yat = yAT(t,x)
if t == T_max-1:
# The objective functions of terminal state
def obj(actions):
# Not renting out case
# a = [c, b, k, i, q]
x_next, p_next = transition(x, actions, t)
uBTB = uB(calTB(x_next)) # conditional on being dead in the future
return R(x, actions) + beta * dotProduct(uBTB, p_next, t)
else:
def obj(actions):
# Renting out case
# a = [c, b, k, i, q]
x_next, p_next = transition(x, actions, t)
V_tilda = NN.predict(x_next) # V_{t+1} conditional on being alive, approximation here
uBTB = uB(calTB(x_next)) # conditional on being dead in the future
return R(x, actions) + beta * (Pa[t] * dotProduct(V_tilda, p_next, t) + (1 - Pa[t]) * dotProduct(uBTB, p_next, t))
def obj_solver(obj):
        # Constraint: yat + w - m = c + b + k + (1+chi)*i*pt + I{i>0}*c_h
        # i_portion takes a fraction of the budget
        # c_portion takes a fraction of the remaining budget
        # b_portion takes a fraction of the remaining budget
# k is the remainder
actions = []
for ip in np.linspace(0,0.99,20):
budget1 = yat + w - m
if ip*budget1 > c_h:
i = (budget1*ip - c_h)/((1+chi)*pt)
budget2 = budget1 * (1-ip)
else:
i = 0
budget2 = budget1
for cp in np.linspace(0,1,11):
c = budget2*cp
budget3 = budget2 * (1-cp)
for bp in np.linspace(0,1,11):
b = budget3* bp
k = budget3 * (1-bp)
# q = 1 not renting in this case
actions.append([c,b,k,i,1])
        # Constraint: yat + w - m + (1-q)*H*pr = c + b + k
        # q takes value [0:0.05:0.95]
        # c_portion takes a fraction of the remaining budget
        # b_portion takes a fraction of the remaining budget
# k is the remainder
for q in np.linspace(0,0.99,20):
budget1 = yat + w - m + (1-q)*H*pr
for cp in np.linspace(0,1,11):
c = budget1*cp
budget2 = budget1 * (1-cp)
for bp in np.linspace(0,1,11):
b = budget2* bp
k = budget2 * (1-bp)
# i = 0, no housing improvement when renting out
actions.append([c,b,k,0,q])
actions = np.array(actions)
values = obj(actions)
fun = np.max(values)
ma = actions[np.argmax(values)]
return fun, ma
fun, action = obj_solver(obj)
return np.array([fun, action])
# +
# wealth discretization
# w_grid_size = 15
# w_lower = 10
# w_upper = 10000
# 401k amount discretization
# n_grid_size = 5
# n_lower = 10
# n_upper = 6000
# power = 2
# wealth discretization
ws = np.array([10,25,50,75,100,125,150,175,200,250,500,750,1000,1500,3000])
w_grid_size = len(ws)
# 401k amount discretization
ns = np.array([1, 5, 10, 15, 25, 40, 65, 100, 150, 300, 400,1000])
n_grid_size = len(ns)
# Mortgage amount, * 0.25 is the housing price per unit
Ms = np.array([0.2*H, 0.4*H, 0.6*H, 0.8*H]) * pt
M_grid_size = len(Ms)
# Improvement amount
gs = np.array([0,25,50,75,100])
g_grid_size = len(gs)
xgrid = np.array([[w, n, M, g_lag, e, s]
for w in ws
for n in ns
for M in Ms
for g_lag in gs
for e in [0,1]
for s in [0,1]
]).reshape((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2,6))
Vgrid = np.zeros((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2, T_max))
cgrid = np.zeros((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2, T_max))
bgrid = np.zeros((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2, T_max))
kgrid = np.zeros((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2, T_max))
igrid = np.zeros((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2, T_max))
qgrid = np.zeros((w_grid_size, n_grid_size,M_grid_size,g_grid_size,2,2, T_max))
# -
# ### SLSQP with KNN approximation and multidimensional interpolation
class iApproxy(object):
def __init__(self, points, Vgrid):
self.V = Vgrid
self.p = points
def predict(self, xx):
pvalues = np.zeros(xx.shape[0])
index00 = (xx[:,4] == 0) & (xx[:,5] == 0)
index01 = (xx[:,4] == 0) & (xx[:,5] == 1)
index10 = (xx[:,4] == 1) & (xx[:,5] == 0)
index11 = (xx[:,4] == 1) & (xx[:,5] == 1)
pvalues[index00]=interpn(self.p, self.V[:,:,:,:,0,0], xx[index00][:,:4], bounds_error = False, fill_value = None)
pvalues[index01]=interpn(self.p, self.V[:,:,:,:,0,1], xx[index01][:,:4], bounds_error = False, fill_value = None)
pvalues[index10]=interpn(self.p, self.V[:,:,:,:,1,0], xx[index10][:,:4], bounds_error = False, fill_value = None)
pvalues[index11]=interpn(self.p, self.V[:,:,:,:,1,1], xx[index11][:,:4], bounds_error = False, fill_value = None)
return pvalues
# ### Value iteration with interpolation approximation
# +
# %%time
# value iteration part
xs = xgrid.reshape((w_grid_size*n_grid_size*M_grid_size*g_grid_size*2*2,6))
pool = Pool()
points = (ws,ns,Ms,gs)
for t in range(T_max-1,T_min-1, -1):
print(t)
if t == T_max - 1:
f = partial(V, t = t, NN = None)
results = np.array(pool.map(f, xs))
else:
approx = iApproxy(points,Vgrid[:,:,:,:,:,:,t+1])
f = partial(V, t = t, NN = approx)
results = np.array(pool.map(f, xs))
Vgrid[:,:,:,:,:,:,t] = results[:,0].reshape((w_grid_size,n_grid_size,M_grid_size,g_grid_size,2,2))
cgrid[:,:,:,:,:,:,t] = np.array([r[0] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,M_grid_size,g_grid_size,2,2))
bgrid[:,:,:,:,:,:,t] = np.array([r[1] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,M_grid_size,g_grid_size,2,2))
kgrid[:,:,:,:,:,:,t] = np.array([r[2] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,M_grid_size,g_grid_size,2,2))
igrid[:,:,:,:,:,:,t] = np.array([r[3] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,M_grid_size,g_grid_size,2,2))
qgrid[:,:,:,:,:,:,t] = np.array([r[4] for r in results[:,1]]).reshape((w_grid_size,n_grid_size,M_grid_size,g_grid_size,2,2))
pool.close()
np.save("Vgrid_i", Vgrid)
np.save("cgrid_i", cgrid)
np.save("bgrid_i", bgrid)
np.save("kgrid_i", kgrid)
np.save("igrid_i", igrid)
np.save("qgrid_i", qgrid)
| 20200821_test/Experiment1/.ipynb_checkpoints/discreteOptimization-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !apt-get update
# !apt-get install ant mercurial openjdk-8-jdk -y
# !hg clone http://hg-iesl.cs.umass.edu/hg/mallet
import os
os.chdir('/sharedfolder/mallet')
# !ant
import os
os.chdir('/sharedfolder/mallet')
# !ls bin
# Location of MALLET binary:
#
# /sharedfolder/mallet/bin/mallet
| week/13/Installing MALLET.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# creating own module
from types import ModuleType
test = ModuleType('test', 'This is test module')
# -
test
test.__doc__
test.__dict__
test.hello = lambda: 'Hello'
test.__dict__
test.hello()
globals()
# +
def func():
a=2
print(locals())
pass
func()
# -
import sys
print(sys.modules)
# +
import math
print(id(math))
print(id(sys.modules['math']))
import math
print(id(math))
# a module is created once and cached in sys.modules; every later import checks the cache and returns the same reference instead of re-creating it
# -
sys.modules['test2'] = lambda: "Tricking python cache modules"
sys.modules['test2']()
type(sys.modules['test2'])
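# Since import consults sys.modules first, any object registered there under a name is what
# a later import of that name returns. A small sketch (the module name `fake_mod` is made up)
# building a module purely in memory, with no file on disk:

```python
import sys
from types import ModuleType

# Build a module entirely in memory and register it in the import cache.
fake = ModuleType('fake_mod', 'An in-memory module')
fake.answer = 42
sys.modules['fake_mod'] = fake

import fake_mod  # served straight from sys.modules, no file needed
print(fake_mod.answer)  # 42
```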
sys.path
import importlib
sys.meta_path
importlib.util.find_spec('importlib')
with open('module.py', 'w') as code_file:
code_file.write("print('Running our test module') \n")
code_file.write("a=200 \n")
importlib.util.find_spec('module')
'module' in globals()
import module
'module' in globals()
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Objects and Classes
# - A student, a desk, or a circle are all objects
# - An object is an instance of a class; you can create many objects, and creating an instance of a class is called instantiation
# - In Python an object is an instance, and an instance is an object
# ## Defining a class
# class ClassName:
#
#     do something  # class body
#
# - The class keyword introduces a class, just like def introduces a function
# - Class names should preferably use CamelCase
# - In Python 2 a class had to inherit from the base class object; in Python 3 this is the default and may be omitted
# - Think of plain code as skin, functions as underwear, and classes as the overcoat
#A class must be initialized; __init__ uses self to initialize the instance. Any name may appear in the parentheses.
#The first variable of a method is not a regular parameter but the instance marker; further parameters may follow it.
#A method must be called through an initialized instance, otherwise it cannot be used.
class Jokar:
def __init__(self):
print('我初始化了')
def Print_(self):
print('Jokar hahaha')
Jokar()
class Jokar:
def __init__(self):
print('我初始化了')
def SUM(self,num1,num2):
return num1+num2
def cheng(self,num1,num2):
return num1*num2
# If a parameter is needed by several methods of a class, store it on self as a shared attribute
class Jokar:
def __init__(self,num1,num2):
print('我初始化了')
        self.num1 = num1  # shared attribute
self.num2 = num2
def SUM(self,name):
return self.num1+self.num2
def cheng(self):
return self.num1*self.num2
yj = Jokar(num1 = 1,num2 = 2)
yj.SUM(name='jjj')
yj.cheng()
# # Exercise: the first method checks odd/even, the second checks leap vs. common year
class Panduan:
def __init__(self):
print('初始化')
    def ji(self,numb1):
        if numb1%2==0:
            print('偶数')
        else:
            print('奇数')
    def nian(self,numb2):
        if ((numb2%4==0) and (numb2%100!=0)) or (numb2%400==0):
            print('闰年')
        else:
            print('平年')
class Panduan:
def __init__(self):
print('初始化')
def jiou(self,numb1):
if numb1%2==0:
print('偶数')
else:
print('奇数')
def nian(self,numb2):
if ((numb2%4==0) and (numb2%100!=0)) or (numb2%400==0):
print('闰年')
else:
print('平年')
a = Panduan()
a.jiou(numb1=3)
a.nian(numb2=2012)
#
#
#
#
# ## Defining a simple class without an __init__ initializer
# class ClassName:
#
#     joker = "Home"
#
#     def func():
#         print('Worker')
#
# - Use this form sparingly
#
#
# ## Defining a standard class
# - __init__ performs the initialization and can run any setup actions
# - The class is then instantiated with (), which can be read as "start initializing"
# - Attributes set in the initializer are shared by the class's other methods
# 
# - The first difference between Circle and className_ is the __init__ function
# - The second difference: every function in the class takes this "parameter" self
# ## What is self?
# - self is the parameter that points to the object itself
# - self is only a naming convention; it could be changed, but self is customary and easier to understand
# - Through self you can access the members defined in the class
# <img src="../Photo/86.png"></img>
# ## Using the class Circle
# ## Passing parameters to a class
# - class ClassName:
#
# def __init__(self, para1,para2...):
#
# self.para1 = para1
#
# self.para2 = para2
# ## EP:
# - A: define a class with two features:
# - 1. generate 3 random numbers and return the maximum
# - 2. generate 3 random numbers and return the minimum
# - B: define a class (nested use of methods within a class)
# - 1. the first method reads in a number
# - 2. the second method squares the number obtained by the first
# - 3. the third method computes the squared value minus the original number and prints the result
import random
class ZDzj:
def __init__(sell):
print('csh')
def Zuida(sell):
a = random.randint(0,9)
b = random.randint(0,9)
c = random.randint(0,9)
print(a,b,c)
return max(a,b,c)
def Zuixiao(sell):
a1 = random.randint(0,9)
b1 = random.randint(0,9)
c1 = random.randint(0,9)
print(a1,b1,c1)
return min(a1,b1,c1)
a = ZDzj()
a.Zuida()
a.Zuixiao()
class Diaoyong():
def __init__(self):
print('csh')
def numb(self):
self.num = eval(input('>>'))
print('输入数字是:',self.num)
def ping(self):
self.b = self.num**2
print('平方是:',self.b)
def jia(self):
self.cha = self.b-self.num
print('两者差:',self.cha)
d=Diaoyong()
d.numb()
d.ping()
d.jia()
class Diaoyong():
def __init__(self):
print('csh')
def numb(self):
num = eval(input('>>'))
print('输入数字是:',num)
return num
def ping(self):
num = self.numb()
b = num**2
print('平方是:',b)
return b,num
def jia(self):
b,num = self.ping()
cha = b - num
print('两者差:',cha)
c = Diaoyong()
c.numb()
c.ping()
c.jia()
# # Advanced: account and password login verification
import random
class Denglu():
def __init__(self):
self.account = '123456'
self.password = '<PASSWORD>'
def zhanghao(self):
self.zh = eval(input('输入账号''>>'))
def mima(self):
        self.mi = eval(input('输入密码''>>'))  # input() was missing
def Check(self):
if self.zh == self.account and self.mi == self.password:
print('成功登陆')
else:
self.yanzheng()
def yanzheng(self):
        yanz = random.randint(1000, 9999)  # 4-digit code; random.randint takes (low, high)
print('验证码:',yanz)
while 1:
use_yan = eval(input('输入验证码''>>'))
if use_yan == yanz:
print('成功')
break
def Start(self):
self.zhanghao()
self.mima()
self.Check()
a = Denglu()
a.zhanghao()
a.mima()
# ## Class inheritance
# - single inheritance
# - multiple inheritance
# - inheritance syntax
# > class SonClass(FatherClass):
#
# def __init__(self):
#
# FatherClass.__init__(self)
class A:
def __init__(self):
        self.a = 'a'  # shared attribute
def a_(self):
print('class A')
# multiple inheritance: earlier bases take precedence over later ones; resolution runs left to right
class C:
def __init__(self):
        self.__c = 'c'  # the '__' prefix makes it private: not inherited, not callable from outside, usable internally
    def __c_(self):  # private method
        print('class C')
class B(A,C):  # lookup is stack-like; A's members are found first
def __init__(self):
        #tell A that B is about to inherit from it
A.__init__(self)
# print(self.a)
# self.a_()
def b_(self):
print(self.a)
self.a_()
p = B()  # after running this, A's members are available inside B
p.b_()
n = C()
n.c_()  # raises AttributeError: __c_ is private (name-mangled to _C__c_)
class A:
def __init__(self):
self.a = 'a'
    # static call
def a_(self):
print('class A')
# ## Private data fields (private variables or private functions)
# - In Python a variable or function name with a double-underscore prefix is private: \__Joker, def \__Joker():
#
# - Private data fields are not inherited
# - Private data fields can be force-inspected via \__dir__()
# 
# ## EP:
# 
# 
# 
#
# ## Other class topics
# - Encapsulation
#     - simply groups related functionality together for easier future maintenance
# - Inheritance (covered above)
# - Polymorphism
# - Including decorators: these will be taught later with advanced classes
#     - benefit of decorators: when many class methods need the same functionality, a decorator saves a lot of work
#     - decorators follow a fixed pattern
#     - they include plain decorators and decorators with arguments
# # Homework
# ## The UML class diagram need not be drawn
# ## UML is essentially a mind map
# - 1
# 
class Jisuan():
    # define a class
    def __init__(self):
        # initialize the class; after initialization, self is the instance marker that each method must declare
        self.width = 1
        self.height = 2
        # shared attributes
    def getArea(self):
        # define a method, with the self marker
        area = self.width*self.height
        # compute the area
        print('面积是:',area)
        return area
        # return the value
    def getPerimeter(self):
        perimeter = (self.width+self.height)*2
        # compute the perimeter
        print('周长是:',perimeter)
        return perimeter
a = Jisuan()
a.getArea()
a.getPerimeter()
# - 2
# 
class Account():
    def __init__(self):
        self.__id = input('>>')
        # account id (default in the exercise: 0)
        self.__balance = input('>>')
        # initial balance (default in the exercise: 100)
        self.__annual = input('>>')
        # annual interest rate (default in the exercise: 0)
    def getMonthlyInterestRate(self):
        self.monthly = float(self.__annual)/100/12
        # compute the monthly interest rate
    def getMonthlyInterest(self):
        self.MonthlyInterest = float(self.__balance) * self.monthly
        # compute the monthly interest amount
    def withdraw(self):
        self.qu = eval(input('>>'))
        self.quqianshu = float(self.__balance) - self.qu
        # withdraw the given amount
        print('用户',self.__id,'金额',self.quqianshu,'月利率',self.monthly,'利息额',self.MonthlyInterest)
    def deposit(self):
        self.cun = eval(input('>>'))
        self.cunqian = float(self.__balance) + self.cun
        # deposit the given amount
        print('用户',self.__id,'金额',self.cunqian,'月利率',self.monthly,'利息额',self.MonthlyInterest)
qq = Account()
qq.getMonthlyInterestRate()
qq.getMonthlyInterest()
qq.withdraw()
qq.deposit()
# - 3
# 
class Fan():
    # define the class
    def __init__ (self):
        # initialize the class
        self.__speed = 'slow'
        # fan speed
        self.__on = '关闭'
        # on/off state
        self.__radius = 5
        # radius
        self.__color = 'blue'
        # color
        self.slow = 1
        # speed levels
        self.medium = 2
        self.fast = 3
        # defaults set above; the double-underscore names are private data fields
    def fan1(self):
        # define a method
        self.__speed = self.fast
        self.__radius = 10
        self.__color = 'yellow'
        self.__on = '打开'
        print('速度为:',self.__speed,'半径为:',self.__radius,'颜色为:',self.__color,'状态为:',self.__on)
    def fan2(self):
        self.__speed = self.medium
        self.__radius = 5
        self.__color = 'blue'
        self.__on = '关闭'
        print('速度为:',self.__speed,'半径为:',self.__radius,'颜色为:',self.__color,'状态为:',self.__on)
a = Fan()
a.fan1()
a.fan2()
# - 4
# 
# 
import math
class RegularPolygon():
    # define the class
    def __init__(self):
        self.__n = eval(input('>>'))
        self.__side = input('>>')
        self.__x = input('>>')
        self.__y = input('>>')
        # read the values; the double-underscore names are private data fields
    def getPerimeter(self):
        perimeter = self.__n * float(self.__side)
        # float() converts the string input to a number
        print('周长为:',perimeter)
    def getArea(self):
        area1 = self.__n * (float(self.__side))**2
        area2 = 4*math.tan(math.pi/self.__n)
        # math.tan and math.pi require import math first
        area = area1/area2
        print('面积为:',area)
a = RegularPolygon()
a.getPerimeter()
a.getArea()
# - 5
# 
class LinearEquation():
    # define the class
    def __init__(self):
        # initialize the class
        self.__a = eval(input('>>'))
        self.__b = eval(input('>>'))
        self.__c = eval(input('>>'))
        self.__d = eval(input('>>'))
        self.__e = eval(input('>>'))
        self.__f = eval(input('>>'))
    def isSolvable(self):
        if self.__a * self.__d != self.__b * self.__c:
            print('true')
        else:
            print('这个方程无解')
    def getX(self):
        # define a method, with the self marker
        x = (self.__e * self.__d - self.__b * self.__f)/(self.__a * self.__d - self.__b * self.__c)
        # solve for x
        print(x)
    def getY(self):
        y = (self.__a * self.__f - self.__e * self.__c)/(self.__a * self.__d - self.__b * self.__c)
        # solve for y
        print(y)
ss = LinearEquation()
ss.isSolvable()
ss.getX()
ss.getY()
# - 6
# 
class JiaoDian():
    def __init__(self):
        self.x1 = eval(input('x1坐标>>'))
        self.y1 = eval(input('y1坐标>>'))
        self.x2 = eval(input('x2坐标>>'))
        self.y2 = eval(input('y2坐标>>'))
        self.x3 = eval(input('x3坐标>>'))
        self.y3 = eval(input('y3坐标>>'))
        self.x4 = eval(input('x4坐标>>'))
        self.y4 = eval(input('y4坐标>>'))
        # read the coordinates
    def xielv(self):
        self.k1 = (self.y2 - self.y1)/(self.x2 - self.x1)
        # slope k1 of the first line
        self.b1 = self.y1 - self.x1 * self.k1
        # intercept b1
        if (self.x4 - self.x3) == 0:
            # check whether the second slope exists; it does not for a vertical line
            self.k2 = []
            # mark the missing slope k2 with an empty list
            self.b2 = 0
        else:
            self.k2 = (self.y4 - self.y3) / (self.x4 - self.x3)
            # the slope exists
            self.b2 = self.y3 - self.x3 * self.k2
        if self.k2 == []:
            # vertical second line: the intersection has x = x3
            self.x = self.x3
        else:
            self.x = (self.b2 - self.b1)/(self.k1 - self.k2)
        self.y = self.k1 * self.x + self.b1
        return self.x,self.y
jd = JiaoDian()
jd.xielv()
# - 7
# 
| 7.23.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
digits = datasets.load_digits()
print(digits.DESCR)
x = digits.data
y = digits.target
from sklearn.model_selection import train_test_split
import matplotlib
x_train, x_test, y_train, y_test = train_test_split(x, y)
some_digits = x[3]
some_digits_image = some_digits.reshape(8, 8)
plt.imshow(some_digits_image, cmap=matplotlib.cm.binary)
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(3)
knn_clf.fit(x_train, y_train)
y_predict = knn_clf.predict(x_test)
y_predict
y_test
sum(y_predict == y_test) / len(y_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_predict)
knn_clf.score(x_test, y_test)
# ### Finding the best k
best_k = -1
best_score = 0.0
for k in range(1,11):
knn_clf = KNeighborsClassifier(n_neighbors=k)
knn_clf.fit(x_train, y_train)
score = knn_clf.score(x_test, y_test)
if score > best_score:
best_k = k
best_score = score
print("best_k:", best_k)
print("best_score:", score)
# ### Distance weighting
best_method = ""
best_score = 0.0
best_k = -1
for method in ["uniform", "distance"]:
for k in range(1,11):
knn_clf = KNeighborsClassifier(n_neighbors=k, weights=method)
knn_clf.fit(x_train, y_train)
score = knn_clf.score(x_test, y_test)
if score > best_score:
best_k = k
best_score = score
best_method = method
print("best_k:", best_k)
print("best_score:", score)
print("best_method:", best_method)
# ### Searching over the Minkowski distance parameter p
# +
# %%time
best_p = -1
best_score = 0.0
best_k = -1
for k in range(1,11):
for p in range(1,6):
knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance")
knn_clf.fit(x_train, y_train)
score = knn_clf.score(x_test, y_test)
if score > best_score:
best_k = k
best_score = score
best_p = p
print("best_p:", p)
print("best_score:", score)
print("best_k:", k)
# -
# ### Grid Search
param_grid = [
{
"weights": ["uniform"],
"n_neighbors": [i for i in range(1, 11)]
},
{
"weights": ["distance"],
"n_neighbors": [i for i in range(1, 11)],
"p": [i for i in range(1, 6)]
}
]
knn_clf = KNeighborsClassifier()
from sklearn.model_selection import GridSearchCV
grid_search = GridSearchCV(knn_clf, param_grid)
# %%time
grid_search.fit(x_train, y_train)
grid_search.best_estimator_
grid_search.best_score_
grid_search.best_params_
knn_clf = grid_search.best_estimator_
knn_clf.predict(x_test)
# %%time
grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=3, verbose=2)
grid_search.fit(x_train, y_train)
# Source: data/digits.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [LEGALST-123] Lab 12: Regression for Prediction and Data Splitting
# # Intro to scikit-learn
#
# <img src="https://www.cityofberkeley.info/uploadedImages/Public_Works/Level_3_-_Transportation/DSC_0637.JPG" style="width: 500px; height: 275px;" />
# ---
#
# **Regression** is useful for predicting a value that varies on a continuous scale from a set of features. This lab will introduce the regression methods available in scikit-learn, focusing on ordinary least squares linear regression, LASSO, and Ridge regression.
#
# *Estimated Time: 45 minutes*
#
# ---
#
#
# ### Table of Contents
#
#
# 1 - [The Test-Train-Validation Split](#section 1)<br>
#
# 2 - [Linear Regression](#section 2)<br>
#
# 3 - [Ridge Regression](#section 3)<br>
#
# 4 - [LASSO Regression](#section 4)<br>
#
# 5 - [Choosing a Model](#section 5)<br>
#
#
#
# **Dependencies:**
import numpy as np
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.model_selection import KFold
# ## The Data: Bike Sharing
# In your time at Cal, you've probably passed by one of the many bike sharing stations around campus. Bike sharing systems have become more and more popular as traffic and concerns about global warming rise. This lab's data describes one such bike sharing system in Washington D.C., from [UC Irvine's Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset).
# +
bike = pd.read_csv('data/Bike-Sharing-Dataset/day.csv')
# reformat the date column to integers representing the day of the year, 001-366
bike['dteday'] = pd.to_datetime(np.array(bike['dteday'])).strftime('%j')
# get rid of the index column ('instant' simply numbers the records)
bike = bike.drop('instant', axis=1)
bike.head(4)
# -
# Take a moment to get familiar with the data set. In data science, you'll often hear rows referred to as **records** and columns as **features**. Before you continue, make sure you can answer the following:
#
# - How many records are in this data set?
# - What does each record represent?
# - What are the different features?
# - How is each feature represented? What values does it take, and what are the data types of each value?
#
# Explore the dataset and answer these questions.
# +
# explore the data set here
# -
# ---
# ## 1. The Test-Train-Validation Split <a id='section 1'></a>
# When we train a model on a data set, we run the risk of [**over-fitting**](http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html). Over-fitting happens when a model becomes so complex that it makes very accurate predictions for the data it was trained on, but it can't generalize to make good predictions on new data.
#
# We can reduce the risk of overfitting by using a **test-train split**.
#
# 1. Randomly divide our data set into two smaller sets: one for training and one for testing
# 2. Train the data on the training set, changing our model along the way to increase accuracy
# 3. Test the data's predictions using the test set.
#
# Scikit-learn's [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function will help here. First, separate your data into two parts: a dataframe containing the features used to make our prediction, and an array of the true values. To start, let's predict the *total number of riders* (y) using *every feature that isn't a rider count* (X).
#
# Standardization is important for Ridge and LASSO because the penalty term is applied uniformly across the features. Having features on different scales unevenly penalizes the coefficients.
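# What "zero mean and unit variance" means in practice can be checked on a toy
# feature matrix (an illustration only, not the bike data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# two features on very different scales
X_toy = np.array([[1.0, 100.0],
                  [2.0, 200.0],
                  [3.0, 300.0]])
X_std = StandardScaler().fit_transform(X_toy)
print(X_std.mean(axis=0))  # approximately [0, 0]
print(X_std.std(axis=0))   # approximately [1, 1]
```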
# +
# the features used to predict riders
features_only = ...
X = ...
# standardize the features so that they have zero mean and unit variance
scaler = StandardScaler()
X = pd.DataFrame(scaler.fit_transform(X.values), columns=X.columns, index=X.index)
# the number of riders
y = ...
# -
# Next, set the random seed using `np.random.seed(...)`. This will affect the way numpy pseudo-randomly generates the numbers it uses to decide how to split the data into training and test sets. Any seed number is fine; the important thing is to document the number you used in case we need to recreate this pseudorandom split in the future.
#
# Then, call `train_test_split` on your X and y. Also set the parameters `train_size=` and `test_size=` to set aside 80% of the data for training and 20% for testing.
# +
# set the random seed
np.random.seed(10)
# split the data
# train_test_split returns 4 values: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = ...
# -
# ### The Validation Set
#
# Our test data should only be used once: after our model has been selected, trained, and tweaked. Unfortunately, it's possible that in the process of tweaking our model, we could still overfit it to the training data and only find out when we return a poor test data score. What then?
#
# A **validation set** can help here. By trying your trained models on a validation set, you can (hopefully) weed out models that don't generalize well.
#
# Call `train_test_split` again, this time on your X_train and y_train. We want to set aside 25% of the data to go to our validation set, and keep the remaining 75% for our training set.
#
# Note: This means that out of the original data, 20% is for testing, 20% is for validation, and 60% is for training.
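# The arithmetic of the two splits can be verified on a toy array (the
# `random_state` value here is an arbitrary assumption, used only for
# reproducibility of the sketch):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_toy = np.arange(100).reshape(100, 1)
y_toy = np.arange(100)

# first split: 80% train+validation, 20% test
X_tr, X_te, y_tr, y_te = train_test_split(
    X_toy, y_toy, train_size=0.8, test_size=0.2, random_state=0)
# second split: 25% of the remaining 80% -> 20% of the original data
X_tr, X_val, y_tr, y_val = train_test_split(
    X_tr, y_tr, train_size=0.75, test_size=0.25, random_state=0)
print(len(X_tr), len(X_val), len(X_te))  # 60 20 20
```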
# +
# split the data
# Returns 4 values: X_train, X_validate, y_train, y_validate
X_train, X_validate, y_train, y_validate = ...
# -
# ## 2. Linear Regression (Ordinary Least Squares) <a id='section 2'></a>
# Now, we're ready to start training models and making predictions. We'll start with a **linear regression** model.
#
# [Scikit-learn's linear regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score) is built around scipy's ordinary least squares, which you used in the last lab. The syntax for each scikit-learn model is very similar:
# 1. Create a model by calling its constructor function. For example, `LinearRegression()` makes a linear regression model.
# 2. Train the model on your training data by calling `.fit(train_X, train_y)` on the model
#
# Create a linear regression model in the cell below.
# create a model
...
# fit the model
...
# With the model fit, you can look at the best-fit slope for each feature using `.coef_`, and you can get the intercept of the regression line with `.intercept_`.
# examine the coefficients and intercept
# Now, let's get a sense of how good our model is. We can do this by looking at the difference between the predicted values and the actual values, also called the error.
#
# We can see this graphically using a scatter plot.
#
# - Call `.predict(X)` on your linear regression model, using your training X and training y, to return a list of predicted number of riders per hour. Save it to a variable `lin_pred`.
# - Using a scatter plot (`plt.scatter(...)`), plot the predicted values against the actual values (`y_train`)
# +
# predict the number of riders
# plot the residuals on a scatter plot
# -
# Question: what should our scatter plot look like if our model was 100% accurate?
# **ANSWER:**
# We can also get a sense of how well our model is doing by calculating the **root mean squared error**. The root mean squared error (RMSE) represents the average difference between the predicted and the actual values.
#
# To get the RMSE:
# - subtract each predicted value from its corresponding actual value (the errors)
# - square each error (this prevents negative errors from cancelling positive errors)
# - average the squared errors
# - take the square root of the average (this gets the error back in the original units)
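# These steps can be checked on toy numbers. Below is one possible reference
# sketch (try writing your own `rmse` first before comparing):

```python
import numpy as np

def rmse_sketch(pred, actual):
    # root of the mean of the squared errors
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(actual)) ** 2))

# errors are [0, 2] -> squared [0, 4] -> mean 2 -> sqrt(2) ~= 1.414
print(rmse_sketch([1.0, 2.0], [1.0, 4.0]))
```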
#
# Write a function `rmse` that calculates the root mean squared error of a predicted set of values.
def rmse(pred, actual):
# your code here
return ...
# Now calculate the root mean squared error for your linear model.
# calculate the rmse
# ## 3. Ridge Regression <a id='section 3'></a>
# Now that you've gone through the process for OLS linear regression, it's easy to do the same for [**Ridge Regression**](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html). In this case, the constructor function that makes the model is `Ridge()`.
# +
# make and fit a Ridge regression model
# +
# use the model to make predictions
# plot the predictions
# -
# calculate the rmse for the Ridge model
# Note: the documentation for Ridge regression shows it has lots of **hyperparameters**: values we can choose when the model is made. Now that we've tried it using the defaults, look at the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) and try changing some parameters to see if you can get a lower RMSE (`alpha` might be a good one to try).
# ## 4. LASSO Regression <a id='section 4'></a>
# Finally, we'll try using [LASSO regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html). The constructor function to make the model is `Lasso()`.
#
# You may get a warning message saying the objective did not converge. The model will still work, but to get convergence try increasing the number of iterations (`max_iter=`) when you construct the model.
#
# +
# create and fit the model
# +
# use the model to make predictions
# plot the predictions
# -
# calculate the rmse for the LASSO model
# Note: LASSO regression also has many tweakable hyperparameters. See how changing them affects the accuracy!
#
# Question: How do these three models compare on performance? What sorts of things could we do to improve performance?
# **ANSWER:**
#
# ---
# ## 5. Choosing a model <a id='section 5'></a>
# ### Validation
# Once you've tweaked your models' hyperparameters to get the best possible accuracy on your training sets, we can compare your models on your validation set. Make predictions on `X_validate` with each one of your models, then calculate the RMSE for each set of predictions.
# make predictions for each model
# calculate RMSE for each set of validation predictions
# How do the RMSEs for the validation data compare to those for the training data? Why?
#
# Did the model that performed best on the training set also do best on the validation set?
# **YOUR ANSWER:**
# ### Predicting the Test Set
# Finally, select one final model to make predictions for your test set. This is often the model that performed best on the validation data.
# +
# make predictions for the test set using one model of your choice
# calculate the rmse for the final predictions
# -
# Coming up this semester: how to select your models, model parameters, and features to get the best performance.
# ---
# Notebook developed by: <NAME>
#
# Data Science Modules: http://data.berkeley.edu/education/modules
#
# Source: labs/12_Regression For Prediction and Data Splitting/12_Regression.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 3 - Basic plotting
# In [Tutorial 2](./Tutorial%202%20-%20Compare%20models.ipynb), we made use of PyBaMM's automatic plotting function when comparing models. This gave a good quick overview of many of the key variables in the model. However, by passing in just a few arguments it is easy to plot any of the many other variables that may be of interest to you. We start by building and solving a model as before:
# %pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
model = pybamm.lithium_ion.DFN()
sim = pybamm.Simulation(model)
sim.solve([0, 3600])
# We now want to plot a selection of the model variables. To see a full list of the available variables just type:
model.variable_names()
# There are a _lot_ of variables. You can also search the list of variables for a particular string (e.g. "electrolyte")
model.variables.search("electrolyte")
# We have tried to make variable names fairly self-explanatory. However, there are two variables for most quantities. This is because PyBaMM utilises both dimensionless and dimensional variables for these quantities. As a rule, the dimensionless variables have no units in their name and the dimensional variables have units in their name. If in doubt, we recommend using the dimensional variable with units.
# As a first example, we choose to plot the terminal voltage. We add this to a list and then pass this list to the `plot` method of our simulation:
output_variables = ["Terminal voltage [V]"]
sim.plot(output_variables=output_variables)
# Alternatively, we may be interested in plotting both the electrolyte concentration and the terminal voltage. In which case, we would do:
output_variables = ["Electrolyte concentration [mol.m-3]", "Terminal voltage [V]"]
sim.plot(output_variables=output_variables)
# You can also plot multiple variables on the same plot by nesting lists
sim.plot([["Electrode current density", "Electrolyte current density"], "Terminal voltage [V]"])
sim.plot()
# In this tutorial we have seen how to use the plotting functionality in PyBaMM.
#
# In [Tutorial 4](./Tutorial%204%20-%20Setting%20parameter%20values.ipynb) we show how to change parameter values.
# Source: examples/notebooks/Getting Started/Tutorial 3 - Basic plotting.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import matplotlib as mp
# %matplotlib inline
import tensorflow.contrib.slim as slim
import os
import sys
sys.path.append('..')
import tools as tools
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import scipy
from scipy import ndimage, misc
from scipy.misc import imshow
import skimage
GPU='0'
tf.reset_default_graph()
def load_real_rgbs(test_mv=5):
    obj_rgbs_folder = './Data_sample/amazon_real_rgbs/airfilter/'
    rgbs = []
    rgbs_views = sorted(os.listdir(obj_rgbs_folder))
    for v in rgbs_views:
        if not v.endswith('png'): continue
        rgbs.append(tools.Data.load_single_X_rgb_r2n2(obj_rgbs_folder + v, train=False))
    rgbs = np.asarray(rgbs)
    x_sample = rgbs[0:test_mv, :, :, :].reshape(1, test_mv, 127, 127, 3)
    return x_sample, None
def load_shapenet_rgbs(test_mv=8):
    obj_rgbs_folder = './Data_sample/ShapeNetRendering/03001627/1a6f615e8b1b5ae4dbbc9440457e303e/rendering/'
    obj_gt_vox_path = './Data_sample/ShapeNetVox32/03001627/1a6f615e8b1b5ae4dbbc9440457e303e/model.binvox'
    rgbs = []
    rgbs_views = sorted(os.listdir(obj_rgbs_folder))
    for v in rgbs_views:
        if not v.endswith('png'): continue
        rgbs.append(tools.Data.load_single_X_rgb_r2n2(obj_rgbs_folder + v, train=False))
    rgbs = np.asarray(rgbs)
    x_sample = rgbs[0:test_mv, :, :, :].reshape(1, test_mv, 127, 127, 3)
    y_true = tools.Data.load_single_Y_vox(obj_gt_vox_path)
    return x_sample, y_true
# +
def ttest_demo():
    # model_path = './Model_released/'
    model_path = '/home/ajith/3d-reconstruction/attsets/Model_released/'
    if not os.path.isfile(model_path + 'model.cptk.data-00000-of-00001'):
        print('please download our released model first!')
        return
    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.visible_device_list = GPU
    with tf.Session(config=config) as sess:
        saver = tf.train.import_meta_graph(model_path + 'model.cptk.meta', clear_devices=True)
        saver.restore(sess, model_path + 'model.cptk')
        print('model restored!')

        # graph = tf.get_default_graph()
        # print(graph.get_operations())
        X = tf.get_default_graph().get_tensor_by_name("Placeholder:0")
        Y_pred = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_9:0")
        plot_data_8 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_8:0")
        plot_data_7 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_7:0")  # shape (1, 1024)
        plot_data_6 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_6:0")  # shape (1, 1024)
        plot_data_5 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_5:0")
        plot_data_4 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_4:0")
        plot_data_3 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_3:0")
        plot_data_2 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_2:0")
        plot_data_1 = tf.get_default_graph().get_tensor_by_name("r2n/Reshape_1:0")

        # X:      Tensor("Placeholder:0", shape=(?, ?, 127, 127, 3), dtype=float32)
        # Y_pred: Tensor("r2n/Reshape_9:0", shape=(?, 32, 32, 32), dtype=float32)
        # x_sample, gt_vox = load_shapenet_rgbs()
        x_sample, gt_vox = load_real_rgbs()

        plot_buf_1 = tf.reshape(plot_data_1, [-1, 32, 32, 1])
        plot_buf_2 = tf.reshape(plot_data_2, [-1, 32, 32, 1])
        plot_buf_3 = tf.reshape(plot_data_3, [-1, 32, 32, 1])
        plot_buf_4 = tf.reshape(plot_data_4, [-1, 32, 32, 1])
        plot_buf_5 = tf.reshape(plot_data_5, [-1, 32, 32, 1])
        plot_buf_6 = tf.reshape(plot_data_6, [-1, 32, 32, 1])
        plot_buf_7 = tf.reshape(plot_data_7, [-1, 32, 32, 1])
        plot_buf_8 = tf.reshape(plot_data_8, [-1, 32, 32, 1])
        tf.summary.image("RESHAPE_1", plot_buf_1)
        tf.summary.image("RESHAPE_2", plot_buf_2)
        tf.summary.image("RESHAPE_3", plot_buf_3)
        tf.summary.image("RESHAPE_4", plot_buf_4)
        tf.summary.image("RESHAPE_5", plot_buf_5)
        tf.summary.image("RESHAPE_6", plot_buf_6)
        tf.summary.image("RESHAPE_7", plot_buf_7)
        tf.summary.image("RESHAPE_8", plot_buf_8)
        summary_op = tf.summary.merge_all()

        # run the prediction and collect the image summaries
        y_pred, c_summary = sess.run([Y_pred, summary_op], feed_dict={X: x_sample})

        # write the summaries for TensorBoard
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(c_summary)
        writer.close()

    # visualize: binarize the predicted occupancy grid at a fixed threshold
    th = 0.25
    y_pred[y_pred >= th] = 1
    y_pred[y_pred < th] = 0
    tools.Data.plotFromVoxels(np.reshape(y_pred, [32, 32, 32]), title='y_pred')
    if gt_vox is not None:
        tools.Data.plotFromVoxels(np.reshape(gt_vox, [32, 32, 32]), title='y_true')
    from matplotlib.pyplot import show
    show()
# +
# if __name__ == '__main__':
print('entered')
ttest_demo()
# -
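# The thresholding step in `ttest_demo` turns continuous occupancy
# probabilities into a binary voxel grid. A minimal numpy sketch of the same
# idea (the probabilities below are made up for illustration):

```python
import numpy as np

# stand-in for the network's occupancy outputs in [0, 1]
probs = np.array([[0.1, 0.3],
                  [0.25, 0.9]])
th_demo = 0.25
# same effect as the two in-place assignments in ttest_demo
vox = np.where(probs >= th_demo, 1.0, 0.0)
print(vox)  # [[0. 1.] [1. 1.]]
```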
# !tensorboard --logdir=logs/
# Source: attsets_demo_python3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data wrangling
# This notebook is adapted from Joris Van den Bossche's tutorial:
#
# * https://github.com/paris-saclay-cds/python-workshop/blob/master/Day_1_Scientific_Python/02-pandas_introduction.ipynb
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8
# -
# ## 1. Pandas: data analysis in python
#
# For data-intensive work in Python the [Pandas](http://pandas.pydata.org) library has become essential.
#
# **What is `pandas`?**
#
# * Pandas can be thought of as *NumPy arrays with labels* for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.
# * Pandas can also be thought of as `R`'s `data.frame` in Python.
# * Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ...
#
# Its documentation: http://pandas.pydata.org/pandas-docs/stable/
#
#
# **When do you need pandas?**
#
# When working with **tabular or structured data** (like R dataframe, SQL table, Excel spreadsheet, ...):
#
# - Import data
# - Clean up messy data
# - Explore data, gain insight into data
# - Process and prepare your data for analysis
# - Analyse your data (together with scikit-learn, statsmodels, ...)
#
# <div class="alert alert-warning">
# <b>ATTENTION!</b>: <br><br>
#
# Pandas is great for working with heterogeneous and tabular 1D/2D data, but not all types of data fit in such structures!
# <ul>
# <li>When working with array data (e.g. images, numerical algorithms): just stick with numpy</li>
# <li>When working with multidimensional labeled data (e.g. climate data): have a look at [xarray](http://xarray.pydata.org/en/stable/)</li>
# </ul>
# </div>
# ## 2. The pandas data structures: `DataFrame` and `Series`
#
# ### 2.1 The 2D table: pandas `DataFrame`
#
# A `DataFrame` is a **tabular data structure** (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index.
#
#
# <img align="left" width=50% src="./schema-dataframe.svg">
# We can create a pandas Dataframe and specify the index and columns to use.
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries
# We can check that we are manipulating a Pandas DataFrame
type(df_countries)
# As previously mentioned, the dataframe stores information regarding the column and index information.
df_countries.columns
df_countries.index
# You can get an overview of the information of a dataframe using the `info()` method:
df_countries.info()
# An information which is quite useful is related to the data type.
#
# It is important to know that machine learning algorithms are based on mathematics and algebra. Thus, these algorithms expect numerical data.
#
# Pandas allows you to read, manipulate, explore, and transform heterogeneous data into numerical data.
df_countries.dtypes
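# One common way to turn a text column into numerical features is one-hot
# encoding. A small sketch with `pd.get_dummies` (toy data, not part of the
# exercise below):

```python
import pandas as pd

df_demo = pd.DataFrame({'capital': ['Brussels', 'Paris', 'Berlin']})
# each distinct string value becomes its own 0/1 indicator column
encoded = pd.get_dummies(df_demo['capital'])
print(encoded.columns.tolist())  # ['Berlin', 'Brussels', 'Paris']
```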
# #### Exercise
# We will define a set of 1D NumPy arrays containing the data that we will work with.
country_name = ['Austria', 'Iran, Islamic Rep.', 'France']
country_code = ['AUT', 'IRN', 'FRA']
gdp_2015 = [1349034029453.37, 385874474398.59, 2438207896251.84]
gdp_2017 = [1532397555.55556, 439513511620.591,2582501307216.42]
# * Create a Python dictionary where the keys will be the name of the columns and the values will be the corresponding Python list.
# # %load solutions/02_solutions.py
df = pd.DataFrame({'Country Name': country_name,
'Country Code': country_code,
2015: gdp_2015,
2017: gdp_2017})
df
# * Use the same procedure (Python dictionary) but specify that the country code should be used as the index. Therefore, check the parameter `index_col` or the method `DataFrame.set_index()`
# # %load solutions/03_solutions.py
df = pd.DataFrame({'Country Name': country_name,
'Country Code': country_code,
2015: gdp_2015,
2017: gdp_2017})
df = df.set_index('Country Code')
df
# # %load solutions/04_solutions.py
df = pd.DataFrame({'Country Name': country_name,
2015: gdp_2015,
2017: gdp_2017},
index=country_code)
df
# + [markdown] slideshow={"slide_type": "subslide"}
# ### 2.2 One-dimensional data: `Series` (a column of a DataFrame)
#
# A Series is a basic holder for **one-dimensional labeled data**.
# -
df_countries
df_countries.loc[:, 'population']
population = df_countries.loc[:, 'population']
# We can check that we manipulate a Pandas Series
type(population)
# ### 2.3 Data import and export
# + [markdown] slideshow={"slide_type": "subslide"}
# A wide range of input/output formats are natively supported by pandas:
#
# * CSV, text
# * SQL database
# * Excel
# * HDF5
# * json
# * html
# * pickle
# * sas, stata
# * (parquet)
# * ...
# +
# pd.read_xxxx
# +
# df.to_xxxx
# -
# Very powerful csv reader:
# +
# pd.read_csv?
# -
# Luckily, if we have a well formed csv file, we don't need many of those arguments:
import os
df = pd.read_csv(os.path.join("data", "titanic.csv"))
df.head()
df.info()
# <div class="alert alert-success">
#
# <b>EXERCISE</b>: Read the `data/20000101_20161231-NO2.csv` file into a DataFrame `no2`
# <br><br>
# Some aspects about the file:
# <ul>
# <li>Which separator is used in the file?</li>
# <li>The second row includes unit information and should be skipped (check `skiprows` keyword)</li>
# <li>For missing values, it uses the `'n/d'` notation (check `na_values` keyword)</li>
# <li>We want to parse the 'timestamp' column as datetimes (check the `parse_dates` keyword)</li>
# </ul>
# </div>
# + clear_cell=true
# # %load solutions/22_solutions.py
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'],
index_col=0, parse_dates=True)
no2.head()
# -
no2.info()
# ## 3. Selecting and filtering data
# One of pandas' basic features is the labeling of rows and columns, but this makes indexing a bit complex. We now have to distinguish between:
#
# * selection by **label**
# * selection by **position**
#
# ### 3.1 Indexing by label using `.loc`
# We will first select data from the dataframe selecting by **label**.
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data).set_index('country')
df_countries
# The syntax to select by label is `.loc['row_name', 'col_name']`. Therefore, we can get a row of the dataframe by indicating the name of the index to select.
df_countries.loc['France', :]
# Similarly, we can get a column of the dataframe by indicating the name of the column.
df_countries.loc[:, 'area']
# Specifying both index and column name, we will get the intersection of the row and the column.
df_countries.loc['France', 'area']
# We can get several columns by passing a list of the columns to be selected.
x = df_countries.loc['France', ['area', 'population']]
# This is the exact same behavior with the index for the rows.
df_countries.loc[['France', 'Belgium'], ['area', 'population']]
# You can go further and slice a portion of the dataframe.
df_countries.loc['France':'Netherlands', :]
# Note that with label-based slicing, both the start and the stop label are included in the selection.
# ### 3.2 Indexing by position using `.iloc`
# Sometimes, it is handy to select a portion of the data by row and column position. We call this indexing by **position**.
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data).set_index('country')
df_countries
# The syntax is similar to `.loc`. It will be `.iloc[row_id, col_id]`. We can get the first row.
df_countries.iloc[0, :]
# Or the last column.
df_countries.iloc[:, -1]
# And make the intersections.
df_countries.iloc[0, -1]
# Passing a list of indices is also working.
df_countries.iloc[[0, 1], [-2, -1]]
# And we can use slicing as well.
df_countries.iloc[1:3, 0:2]
# However, be aware that with `.iloc` the stop index of the slice is excluded, just like regular Python slicing.
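# The difference between the two slicing rules is easy to check on a toy
# dataframe:

```python
import pandas as pd

s = pd.DataFrame({'v': [10, 20, 30, 40]}, index=['a', 'b', 'c', 'd'])

# .loc slices are label-based and INCLUDE both endpoints
print(len(s.loc['a':'c']))  # 3 rows: a, b, c
# .iloc slices are position-based and EXCLUDE the stop index
print(len(s.iloc[0:3]))     # 3 rows: positions 0, 1, 2
print(len(s.iloc[0:2]))     # 2 rows: positions 0, 1
```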
# ### 3.3 Use the pandas shortcut
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data).set_index('country')
df_countries
# Pandas provides a shortcut to select some part of the data.
df_countries['population']
df_countries[['area', 'capital']]
df_countries[2:5]
df_countries['Germany':'United Kingdom']
# You don't need to use `loc` and `iloc`. The selection rules are:
#
# * Passing a single label or list of labels will select a column or several columns;
# * Passing a slice (label or indices) will select the corresponding rows.
#
# You can always use the systematic indexing to avoid confusion. Use the shortcut at your own risk.
# ### 3.4 Boolean indexing (filtering)
# Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy.
#
# The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
# + run_control={"frozen": false, "read_only": false}
df_countries['population'] > 60
# -
mask_pop_above_60 = df_countries['population'] > 60
# We can then use this mask to index a Series or a DataFrame.
population = df_countries['population']
population.loc[mask_pop_above_60]
population[mask_pop_above_60]
df_countries.loc[mask_pop_above_60]
df_countries[~mask_pop_above_60]
# ### 3.5 Exercise
df = pd.read_csv(os.path.join("data","titanic.csv"))
df.head()
# Select the sub-dataframe of male passengers older than 60. Using the `shape` attribute, find how many individuals match this criterion.
# # %load solutions/05_solutions.py
df[(df['Sex'] == 'male') & (df['Age'] > 60)].shape[0]
# ## 4. Statistical analysis
# Pandas provides an easy and fast way to explore data. Let's explore the `titanic` data set.
df = df.set_index('Name')
df.head()
# We will select the `Age` column and compute couple of statistic.
age = df['Age']
age
age.mean()
age.max()
age.min()
age.describe()
age.value_counts()
age.hist(bins=100)
# ### Exercise
# * What is the maximum Fare that was paid? And the median?
# # %load solutions/06_solutions.py
df['Fare'].max()
# # %load solutions/07_solutions.py
df['Fare'].median()
# * Calculate the average survival ratio for passengers (note: the 'Survived' column indicates whether someone survived (1) or not (0)).
# # %load solutions/08_solutions.py
df['Survived'].sum() / df.shape[0]
# # %load solutions/09_solutions.py
df['Survived'].mean()
# * Select the sub-dataframe for which the men are older than 60 years old.
# # %load solutions/10_solutions.py
((df['Age'] > 60) & (df['Sex'] == 'male')).value_counts()[True]
# * Based on the titanic data set, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.
# # %load solutions/11_solutions.py
df[df['Sex'] == 'male']['Age'].mean()
# # %load solutions/12_solutions.py
df[df['Sex'] == 'female']['Age'].mean()
# * Plot the Fare distribution.
# # %load solutions/13_solutions.py
df['Fare'].hist()
# ## 5. The group-by operation
# ### Some 'theory': the groupby operation (split-apply-combine)
# + run_control={"frozen": false, "read_only": false}
df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'],
                   'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
# -
# ### 5.1 Recap: aggregating functions
# When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:
# + run_control={"frozen": false, "read_only": false}
df['data'].sum()
# -
# However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
#
# For example, in the above dataframe `df`, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:
# + run_control={"frozen": false, "read_only": false}
for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum())
# -
# This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.
#
# What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this.
# ### 5.2 Groupby: applying functions per group
# + [markdown] slideshow={"slide_type": "subslide"}
# The "group by" concept: we want to **apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets**
#
# This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
#
# * **Splitting** the data into groups based on some criteria
# * **Applying** a function to each group independently
# * **Combining** the results into a data structure
#
# <img src="./splitApplyCombine.png">
#
# Similar to SQL `GROUP BY`
# -
# Instead of doing the manual filtering as above
#
#
# df[df['key'] == "A"].sum()
# df[df['key'] == "B"].sum()
# ...
#
# pandas provides the `groupby` method to do exactly this:
# + run_control={"frozen": false, "read_only": false}
df.groupby('key').sum()
# + run_control={"frozen": false, "read_only": false} slideshow={"slide_type": "subslide"}
df.groupby('key').aggregate(['sum', 'median'])
# -
# And many more methods are available.
# + run_control={"frozen": false, "read_only": false}
df.groupby('key')['data'].sum()
# -
for group_name, group_df in df.groupby('key'):
print(group_name)
print(group_df)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### 5.3 Exercise: Application of the groupby concept on the titanic data
# -
# We go back to the titanic passengers survival data:
# + run_control={"frozen": false, "read_only": false}
df = pd.read_csv("data/titanic.csv")
df = df.set_index('Name')
# + run_control={"frozen": false, "read_only": false}
df.head()
# -
# * Using `groupby()`, calculate the average age for each sex.
#
# + clear_cell=true run_control={"frozen": false, "read_only": false}
# # %load solutions/14_solutions.py
df.groupby("Sex")["Age"].mean()
# -
# * Using the `groupby()` function, plot the age distribution for each sex.
# # %load solutions/15_solutions.py
_ = df.groupby("Sex")["Age"].hist(alpha=0.5, legend=True)
fig, axs = plt.subplots(ncols=2)
for plt_idx, (group_name, group_df) in enumerate(df.groupby('Sex')):
axs[plt_idx].hist(group_df["Age"], alpha=0.5, label=group_name)
axs[plt_idx].set_title(group_name)
# _ = plt.legend()
# * Plot the fare distribution based on the class.
# # %load solutions/16_solutions.py
_ = df.groupby("Pclass")["Fare"].hist(alpha=0.3, legend=True)
# * Plot the survival rate by class with a bar plot.
# # %load solutions/17_solutions.py
_ = df.groupby("Pclass")["Survived"].mean().plot(kind="bar")
# * Compute the survival rate grouping by class and sex. (Hint: you can pass a list to the `groupby` function)
# # %load solutions/18_solutions.py
df.groupby(["Sex", "Pclass"])["Survived"].mean().to_frame()
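# A two-level result like this is often easier to read as a 2D table: `unstack()` moves the inner index level into the columns. A minimal sketch on a small synthetic frame (the column names mirror the titanic data, the values are made up):

```python
import pandas as pd

# Tiny synthetic stand-in for the titanic columns used above
demo = pd.DataFrame({
    'Sex': ['male', 'male', 'female', 'female'],
    'Pclass': [1, 2, 1, 2],
    'Survived': [0, 1, 1, 1],
})

# Same two-level groupby as above, reshaped so Pclass becomes the columns
rates = demo.groupby(['Sex', 'Pclass'])['Survived'].mean().unstack()
print(rates)
```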
# ## 7. Merging different sources of information
# ### 7.1 Simple concatenation
# +
# series
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
# dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
# -
# Assume we have some similar data as in countries, but for a set of different countries:
data = {'country': ['Nigeria', 'Rwanda', 'Egypt', 'Morocco', ],
'population': [182.2, 11.3, 94.3, 34.4],
'area': [923768, 26338 , 1010408, 710850],
'capital': ['Abuja', 'Kigali', 'Cairo', 'Rabat']}
countries_africa = pd.DataFrame(data)
countries_africa
# We now want to combine the rows of both datasets:
pd.concat([countries, countries_africa])
# If we don't want the index to be preserved:
pd.concat([countries, countries_africa], ignore_index=True)
# When the two dataframes don't have the same set of columns, by default missing values get introduced:
pd.concat([countries_africa[['country', 'capital']], countries], ignore_index=True)
# ### 7.2 Combining columns instead of rows
# Assume we have another DataFrame for the same countries, but with some additional statistics:
data = {'country': ['Belgium', 'France', 'Netherlands'],
'GDP': [496477, 2650823, 820726],
'area': [8.0, 9.9, 5.7]}
country_economics = pd.DataFrame(data).set_index('country')
country_economics
pd.concat([countries, country_economics], axis=1)
# `pd.concat` matches the different objects based on the index:
countries2 = countries.set_index('country')
countries2
pd.concat([countries2, country_economics], axis=1)
# ### 7.3 Dataframe merging
# Using `pd.concat` above, we combined datasets that had the same columns or the same index values. Another typical case is where you want to add information from a second dataframe to a first one based on one of the columns. That can be done with `pd.merge`.
# Let's look again at the titanic passenger data, but taking a small subset of it to make the example easier to grasp:
df = pd.read_csv("./data/titanic.csv")
df = df.loc[:9, ['Survived', 'Pclass', 'Sex', 'Age', 'Fare', 'Embarked']]
df
# Assume we have another dataframe with more information about the 'Embarked' locations:
locations = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'N'],
'City': ['Southampton', 'Cherbourg', 'Queenstown', 'New York City'],
                          'Country': ['United Kingdom', 'France', 'Ireland', 'United States']})
locations
# We now want to add those columns to the titanic dataframe, for which we can use `pd.merge`, specifying the column on which we want to merge the two datasets:
pd.merge(df, locations, on="Embarked", how="left")
# In this case we use `how='left'` (a "left join") because we wanted to keep the original rows of df and only add matching values from locations to it. Other options are 'inner', 'outer' and 'right' (see the docs for more on this).
pd.merge(df, locations, left_on="Embarked", right_on="Embarked")
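# The different `how` options determine which keys survive the merge. A small sketch with two toy frames (the column names are made up for illustration):

```python
import pandas as pd

left = pd.DataFrame({'key': ['A', 'B', 'C'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'key': ['B', 'C', 'D'], 'rval': [4, 5, 6]})

inner = pd.merge(left, right, on='key', how='inner')     # only keys present in both: B, C
outer = pd.merge(left, right, on='key', how='outer')     # union of all keys: A, B, C, D
left_join = pd.merge(left, right, on='key', how='left')  # all keys of `left`, NaN where no match
```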
# ## 8. Working with time series data
# ### 8.1 Time series preamble
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';',
skiprows=[1], na_values=['n/d'],
index_col=0, parse_dates=True)
# + [markdown] slideshow={"slide_type": "fragment"}
# When we ensure the DataFrame has a `DatetimeIndex`, time-series related functionality becomes available:
# -
no2.head()
no2.index
# + [markdown] slideshow={"slide_type": "subslide"}
# Indexing a time series works with strings:
# -
no2["2010-01-01 09:00":"2010-01-01 12:00"]
# + [markdown] slideshow={"slide_type": "subslide"}
# A nice feature is "partial string" indexing, so you don't need to provide the full datetime string.
# + [markdown] slideshow={"slide_type": "-"}
# E.g. all data from January through March 2012 (the end month is included):
# -
no2['2012-01':'2012-03']
# + [markdown] slideshow={"slide_type": "subslide"}
# Time and date components can be accessed from the index:
# -
no2.index.hour
no2.index.year
no2.index.month.unique()
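# These components combine nicely with `groupby`: for example, the mean value per hour of the day (a diurnal cycle). A sketch on a synthetic hourly series, so it runs without the `no2` file:

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2012-01-01', periods=48, freq='h')  # two full days, hourly
ts = pd.Series(np.arange(48) % 24, index=idx)            # value equals the hour of day

diurnal = ts.groupby(ts.index.hour).mean()  # one mean per hour 0..23
```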
# + [markdown] slideshow={"slide_type": "subslide"}
# ### 8.2 The power of pandas: `resample`
# -
# A very powerful method is **`resample`: converting the frequency of the time series** (e.g. from hourly to daily data).
#
# Remember the air quality data:
no2.plot()
# The time series has a frequency of 1 hour. Let's change this to a coarser frequency, e.g. yearly averages:
no2.head()
no2.resample('Y').mean().plot()
# + [markdown] slideshow={"slide_type": "subslide"}
# Above I take the mean, but as with `groupby` I can also specify other methods:
# -
no2.resample('D').max().head()
# + [markdown] slideshow={"slide_type": "skip"}
# The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
# These strings can also be combined with numbers, eg `'10D'`.
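# For example, `'10D'` bins the data into 10-day periods. A sketch on a synthetic daily series (the real `no2` data is not needed here):

```python
import numpy as np
import pandas as pd

daily = pd.Series(np.ones(30),
                  index=pd.date_range('2012-01-01', periods=30, freq='D'))

# 30 daily values summed into three 10-day bins
ten_day = daily.resample('10D').sum()
```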
# + [markdown] slideshow={"slide_type": "subslide"}
# Further exploring the data:
# -
no2.resample('M').mean().plot() # 'A'
# + clear_cell=true slideshow={"slide_type": "subslide"}
no2.loc['2009':, 'VERS'].resample('M').aggregate(['mean', 'median']).plot()
# + [markdown] slideshow={"slide_type": "subslide"}
# ### 8.3 Exercise
# -
# Plot the evolution of the yearly averages for the different stations, together with the overall mean of all stations:
#
# * Use `resample` and `plot` to plot the yearly averages for the different stations.
# * The overall mean of all stations can be calculated by taking the mean of the different columns (`.mean(axis=1)`).
#
no2.resample("Y").mean().mean(axis=1).plot()
#
# ## Further reading
#
# * Pandas documentation: http://pandas.pydata.org/pandas-docs/stable/
#
# * Books
#
# * "Python for Data Analysis" by Wes McKinney
# * "Python Data Science Handbook" by Jake VanderPlas
#
# * Tutorials (many good online tutorials!)
#
# * https://github.com/jorisvandenbossche/pandas-tutorial
# * https://github.com/brandon-rhodes/pycon-pandas-tutorial
#
# * Tom Augspurger's blog
#
# * https://tomaugspurger.github.io/modern-1.html
| 01_pandas/notebook_corrected.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bar plot as an activity diagram
# +
import itertools
import locale
from IPython.display import display, Markdown, HTML
locale.setlocale(locale.LC_ALL, "de_DE.utf8")
def create_bars(specs, max_actions=10000, scaling=lambda x: x, descaling=lambda x: x,
grey="#ddd", height_per_bar=100, text_height=25, bins=4):
def create_hline(y, color, progress=1):
return """<line y1="%s" x1="5" y2="%s" x2="%s" stroke="%s"
stroke-linecap="round"
stroke-width="10"/>""" % (y, y, progress*1000+5, color)
result = []
height = height_per_bar * len(specs) + text_height
result.append('<svg style="width: 100%%" viewBox="0 0 1050 %s">' % height)
for i in range(bins):
x = 1 + 1000/bins * (i+1)
number = int(descaling((i+1)/bins * scaling(max_actions)))
text = "{:,}".format(number)
result.append("""
<text x="%s" y="%s" text-anchor="middle"
style="font-size: 16px;">%s</text>
""" % (x, text_height-10, text))
result.append("""
<line y1="%s" x1="%s" y2="%s" x2="%s" stroke="#111"
stroke-width="2" stroke-dasharray="10"/>
""" % (text_height, x, height, x))
for spec, y in zip(specs, itertools.count(height_per_bar/2 + text_height, height_per_bar)):
progress = min(1, scaling(spec["actions"]) / scaling(max_actions))
result.append(create_hline(y, grey))
result.append(create_hline(y, spec["color"], progress=progress))
label = "%s %s" % (spec["actions"], spec["kind"])
result.append("""
<text x="0" y="%s" style="font-weight: bold; font-size: 18px;">%s</text>
""" % (y+40, label))
result.append('</svg>')
return "\n".join(result)
specs = [
{"color": "#1E65A7", "actions": 8078, "kind": "Bearbeitungen"},
{"color": "#F1CD04", "actions": 567, "kind": "Kommentare"},
{"color": "#00743F", "actions": 2498, "kind": "Reviewing"},
{"color": "#D87A00", "actions": 123, "kind": "Themenbaum"},
]
display(Markdown("## Linear scale"))
display(HTML(create_bars(specs)))
# +
import math
display(Markdown("## Exponential scale"))
display(HTML(create_bars(specs, scaling=math.log10, descaling=lambda x: 10**x)))
# +
import math
display(Markdown("## Power scale"))
display(HTML(create_bars(specs, scaling=math.sqrt, descaling=lambda x: x**2)))
# +
def polarToCartesian(centerX, centerY, radius, angleInDegrees):
angleInRadians = angleInDegrees * math.pi / 180.0
return ( centerX + (radius * math.cos(angleInRadians)),
centerY + (radius * math.sin(angleInRadians)))
def circle(x, y, radius, color, progress=1, width=1, startAngle=275):
endAngle = startAngle + 360*progress - 1
start = polarToCartesian(x, y, radius, endAngle)
end = polarToCartesian(x, y, radius, startAngle)
largeArcFlag = "0" if endAngle - startAngle <= 180 else "1"
d = " ".join(map(str, [
"M", start[0], start[1],
"A", radius, radius, 0, largeArcFlag, 0, end[0], end[1]
]))
return """
<path d="%s" fill="none" stroke-linecap="round"
stroke="%s" stroke-width="%s" />
""" % (d, color, width)
def create_level_circle(actions, color="#1E65A7", max_actions=10000, scaling=lambda x: x/100):
progress = scaling(min(max_actions, actions))
level = math.floor(progress)
progress = progress - level
result = []
result.append('<svg viewBox="0 0 100 100" style="width: 100%;">')
result.append(circle(50, 50, 40, color, progress=progress, width=5, startAngle=45))
cx, cy = polarToCartesian(50, 50, 40, 45)
result.append("""
<circle cx="%s" cy="%s" r="10" fill="white" />
""" % (cx, cy))
result.append("""
<circle cx="%s" cy="%s" r="8" fill="%s" stroke="#e1bf00"
stroke-width="1"/>
""" % (cx, cy, "#f1cd04"))
result.append("""
<text x="%s" y="%s" text-anchor="middle"
style="font-size: 8px;">%s</text>
""" % (cx, cy+3, level))
result.append('</svg>')
return "\n".join(result)
display(HTML(create_level_circle(1290)))
# +
def create_level_diagram(specs, scaling=lambda x: x/100):
result = []
result.append('<div style="width: 100%; display: flex; flex-flow: row wrap; justify-content: space-around;">')
for spec in specs:
result.append('<div style="width: 25%; min-width: 200px">')
result.append(create_level_circle(spec["actions"], spec["color"], scaling=scaling))
result.append('<p><b>%s %s</b></p>' % (spec["actions"], spec["kind"]))
result.append('</div>')
result.append("</div>")
return "\n".join(result)
display(Markdown("## Linear scale"))
display(HTML(create_level_diagram(specs)))
# -
display(Markdown("## Square-root scale"))
display(HTML(create_level_diagram(specs, scaling=math.sqrt)))
display(Markdown("## Exponential scale"))
display(HTML(create_level_diagram(specs, scaling=math.log10 )))
display(Markdown("## Exponential scale (with 100 levels)"))
display(HTML(create_level_diagram(specs, scaling=lambda x: math.log(x) / math.log(10000**0.01) )))
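# The level logic used by `create_level_circle` can be isolated: the level is the integer part of the scaled action count and the ring shows the fractional remainder. A minimal sketch (the function name is mine, not from the notebook):

```python
import math

def level_and_progress(actions, scaling=lambda x: x / 100):
    """Split a scaled action count into a level and the progress towards the next level."""
    scaled = scaling(actions)
    level = math.floor(scaled)
    return level, scaled - level

level, progress = level_and_progress(1290)                   # linear: 100 actions per level
log_level, _ = level_and_progress(100, scaling=math.log10)   # logarithmic levels
```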
| 2021/2021-02-16 Verschiedene Formen von Aktivitätsdiagramme.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''py38tdml'': conda)'
# name: python3
# ---
# +
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# +
# f(X_0, X_1) = -2.1 * X_0 + 0.4 * X_1 - 0.5
X = np.random.random((10000, 2))
y_orig = -2.1 * X[:, 0] + 0.4 * X[:, 1] - 0.5
y_noise = np.random.randn(10000) * 0.1
y = y_orig + y_noise
# -
X_train = X[0:9000]
y_train = y[0:9000]
X_test = X[9000:]
y_test = y[9000:]
fig = plt.figure(figsize=(6, 6), facecolor='white')
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(X_train[0:1000, 0], X_train[0:1000, 1], y_train[0:1000])
plt.show()
model = Sequential()
model.add(Dense(1, input_dim=2))
model.compile(optimizer='rmsprop', loss='mse')
hist = model.fit(X_train, y_train, epochs=10, verbose=1)
scores = model.evaluate(X_test, y_test)
print(scores)
weights = model.get_weights()
weights
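# The single `Dense(1)` layer is just a linear model, so the learned weights should approximate the ordinary least-squares solution. A numpy-only sketch of that closed form (independent of Keras, on freshly generated data of the same shape):

```python
import numpy as np

rng = np.random.default_rng(0)
X_ls = rng.random((1000, 2))
y_ls = -2.1 * X_ls[:, 0] + 0.4 * X_ls[:, 1] - 0.5 + rng.normal(scale=0.1, size=1000)

# Append a bias column and solve the least-squares problem directly
X_aug = np.hstack([X_ls, np.ones((1000, 1))])
coef, *_ = np.linalg.lstsq(X_aug, y_ls, rcond=None)  # [w0, w1, bias]
```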
X_0_mesh = np.linspace(0, 1.0, 11)
X_1_mesh = np.linspace(0, 1.0, 11)
X_0_mesh, X_1_mesh = np.meshgrid(X_0_mesh, X_1_mesh)
y_pred = X_0_mesh * weights[0][0, 0] + X_1_mesh * weights[0][1, 0] + weights[1][0]
plt.clf()
fig = plt.figure(figsize=(6, 6), facecolor='white')
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(X_test[:, 0], X_test[:, 1], y_test[:])
ax.plot_wireframe(X_0_mesh, X_1_mesh, y_pred)
plt.show()
# +
# f(X_0, X_1) = 10 * X_0^3 + 10 * X_1^3 - 0.5
X = np.random.random((10000, 2))
y_orig = 10 * np.power(X[:, 0], 3) + 10 * np.power(X[:, 1], 3) - 0.5
y_noise = np.random.randn(10000) * 0.1
y = y_orig + y_noise
# -
X_train = X[0:9000]
y_train = y[0:9000]
X_test = X[9000:]
y_test = y[9000:]
fig = plt.figure(figsize=(6, 6), facecolor='white')
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(X_train[0:1000, 0], X_train[0:1000, 1], y_train[0:1000])
plt.show()
model = Sequential()
model.add(Dense(1, input_dim=2))
model.compile(optimizer='rmsprop', loss='mse')
hist = model.fit(X_train, y_train, epochs=10, verbose=1)
scores = model.evaluate(X_test, y_test)
print(scores)
weights = model.get_weights()
weights
y_pred = X_0_mesh * weights[0][0, 0] + X_1_mesh * weights[0][1, 0] + weights[1][0]
plt.clf()
fig = plt.figure(figsize=(6, 6), facecolor='white')
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(X_test[:, 0], X_test[:, 1], y_test[:])
ax.plot_wireframe(X_0_mesh, X_1_mesh, y_pred)
plt.show()
| notebook/regression/linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [TF_gpu]
# language: python
# name: Python [TF_gpu]
# ---
# + hideCode=false hidePrompt=false
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
import sys
sys.path.append('../')
from mrcnn.utils import trim_zeros_graph
import numpy as np
import tensorflow as tf
np.set_printoptions(linewidth=100, precision=4)
sess = tf.InteractiveSession()
batch_size = 3
num_rois = 32
num_classes = 4
num_detections = 100
num_rois_per_image = 32
rois_per_image = 32
# + [markdown] hideCode=false hidePrompt=false
# ### Stack non-zero bbox information from pred_array into stacked_array
# + hideCode=false hidePrompt=false
# ps = pred_tensor[~tf.all(pred_tensor[:,2:6] == 0, axis=1)]
# print(' pred tensor shape is : ', pred_tensor.shape)
pt2 = pred_tensor[:,:,:,1:]
print(' pt2 shape is : ', pt2.shape)
# pt2_exp = tf.expand_dims(pt2,axis = 1)
# print(' pt2_exp shape is : ', pt2_exp.shape)
# print(pt2[0,0].eval())
# print(pt2[0,1].eval())
# print(pt2[0,2].eval())
pt2_reshape = tf.reshape( pred_tensor[:,:,:,1:] , [batch_size, num_classes * rois_per_image ,6])
print(' pt2_reshape shape is : ', pt2_reshape.get_shape())
# print(pt2_reshape[0].eval())
# print(pt2_reshape[1].eval())
# print(pt2_reshape[2].eval())
pt2_sum = tf.reduce_sum(tf.abs(pt2), axis=-1)
print(' pt2_sum shape ',pt2_sum.shape)
# print(pt2_sum[0].eval())
pt2_mask = tf.greater(pt2_sum , 0)
# print(' pt2_mask shape ', pt2_mask.get_shape())
# print(pt2_mask.eval())
pt2_ind = tf.where(pt2_mask)
print(' pt2_ind shape ', pt2_ind.get_shape())
# print(pt2_ind.eval())
# pt2_ind_float = tf.to_float(pt2_ind[:,0:1])
dense1 = tf.gather_nd( pt2, pt2_ind)
print(' dense1 shape ',dense1.get_shape())
# print(dense1.eval())
dense1 = tf.concat([tf.to_float(pt2_ind[:,0:1]), dense1],axis=1)
# print(' dense1 shape ',dense1.get_shape())
# print(dense1.eval())
# print(dense[1].eval())
# print(dense[2].eval())
# print(dense[3].eval())
stacked_list = tf.dynamic_partition(dense1, tf.to_int32(pt2_ind[:,0]),num_partitions = batch_size )
# print(len(dyn_part))
for img in range(len(stacked_list) ):
rois_in_image, cols = tf.shape(stacked_list[img]).eval()
print('\n ===> list item #', img, ' stacked_list[img] shape: ',rois_in_image, cols)
# print(stacked_list[img].eval())
stacked_list[img] = tf.pad(stacked_list[img],[[0,32-rois_in_image],[0,0]])
print('tensor_list item pos padding :', tf.shape(stacked_list[img]).eval())
# print(stacked_list[img].eval())
stacked_tensor = tf.stack(stacked_list)
print(tf.shape(stacked_tensor).eval())
# print(stacked_list[img].eval())
# experiment with scatter.....
# pred_scatt = tf.scatter_nd(scatter_classes, pt2, [batch_size, num_classes, num_rois,7])
# print('pred_scatt', pred_scatt.get_sh)
# -
# ### Stack non-zero bbox information from pred_array into stacked_array
# ### Method 2 -- slower
# + hideCode=false hidePrompt=false
# ps = pred_tensor[~tf.all(pred_tensor[:,2:6] == 0, axis=1)]
print(' pred tensor shape is : ', pred_tensor[:,:,:,1:].shape)
pt2_reshape = tf.reshape( pred_tensor[:,:,:,1:] , [batch_size, num_classes * rois_per_image ,6])
print(' pt2_reshape shape is : ', pt2_reshape.get_shape())
# print(pt2_reshape[0].eval())
# print(pt2_reshape[1].eval())
# print(pt2_reshape[2].eval())
pt2_sum = tf.reduce_sum(tf.abs(pt2_reshape), axis=-1)
print(' pt2_sum shape ',pt2_sum.shape)
# print(pt2_sum[0].eval())
pt2_mask = tf.greater(pt2_sum , 0)
print(' pt2_mask shape ', pt2_mask.get_shape())
# print(pt2_mask.eval())
pt2_ind = tf.where(pt2_mask)
print(' pt2_ind shape ', pt2_ind.get_shape())
# print(pt2_ind.eval())
dense1 = tf.gather_nd( pt2_reshape, pt2_ind)
# print(' dense1 shape ',dense1.get_shape())
# print(dense1.eval())
dense1 = tf.concat([tf.to_float(pt2_ind[:,0:1]), dense1],axis=1)
# print(' dense1 shape ',dense1.get_shape())
# print(dense1.eval())
# print(dense[1].eval())
# print(dense[2].eval())
# print(dense[3].eval())
stacked_list = tf.dynamic_partition(dense1, tf.to_int32(pt2_ind[:,0]),num_partitions = batch_size )
# print(len(dyn_part))
for img in range(len(stacked_list) ):
rois_in_image, cols = tf.shape(stacked_list[img]).eval()
print('\n===> list item #', img, ' stacked_list[img] shape: ',rois_in_image, cols)
# print(stacked_list[img].eval())
stacked_list[img] = tf.pad(stacked_list[img],[[0,32-rois_in_image],[0,0]])
print('tensor_list item pos padding :', tf.shape(stacked_list[img]).eval())
# print(stacked_list[img].eval())
stacked_tensor = tf.stack(stacked_list)
print(tf.shape(stacked_tensor).eval())
# + [markdown] hideCode=false hideOutput=false hidePrompt=false
# ### Build coordinate meshgrid pos_grid
#
# Slightly faster method - build meshgrid in the following dimensions to allow for tensorflow Multivariate Normal Dist
#
# width x height x batch_size x rois per image x 2
# + hideCode=false hidePrompt=false
rois_per_image = 32
img_h = 128
img_w = 128
X = tf.range(img_w, dtype=tf.int32)
Y = tf.range(img_h, dtype=tf.int32)
X, Y = tf.meshgrid(X, Y)
print( X.get_shape(), Y.get_shape())
# print( ' X : \n',X.eval())
# print( ' Y : \n',Y.eval())
# ## hear we repeat X and Y batch_size x rois_per_image times
ones = tf.ones([batch_size, rois_per_image,1, 1], dtype = tf.int32)
print(' ones: ',ones.shape)
# # ones = tf.expand_dims(ones,-1)
# print(' ones with exp dims ',ones.shape)
rep_X = ones * X
rep_Y = ones * Y
# print(' ones_exp * X', ones.shape, '*', X.shape, '= ',rep_X.shape)
# print(' ones_exp * Y', ones.shape, '*', Y.shape, '= ',rep_Y.shape)
# # stack the X and Y grids
bef_pos = tf.to_float(tf.stack([rep_X,rep_Y], axis = -1))
# print(' before transpse ', bef_pos.get_shape())
pos_grid_1 = tf.transpose(bef_pos,[2,3,0,1,4])
print(' after transpose ', pos_grid_1.get_shape())
# + hideCode=false hidePrompt=false
# ## Check equality of the two methods
# eq = tf.equal(pos_grid_1, pos_grid_2)
# print( ' grid and pos equal --> ', tf.reduce_all(eq).eval())
# -
# ### Calculate mean and covariance matrices for gaussian distributions
# + hideCode=false hidePrompt=false
# for ps in all_boxes:
ps = stacked_tensor
# for img, ps_init in enumerate(stacked_tensor):
# print('\n===> list memeber #', img, ' shape: ', ps_init.get_shape(), ' ',tf.shape(ps_init).eval())
# rois_in_image, cols = tf.shape(ps_init).eval()
# print(' ps \t rows :', rois_in_image, '\t cols: ', cols)
# ps = tf.pad(ps_init,[[0,32-rois_in_image],[0,0]])
# print('ps.shape is ', tf.shape(ps_init).eval())
# print(ps_init.eval())
print('ps.shape is ', tf.shape(ps).eval())
# print(ps.eval())
width = ps[:,:,5] - ps[:,:,3]
height = ps[:,:,4] - ps[:,:,2]
cx = ps[:,:,3] + ( width / 2.0)
cy = ps[:,:,2] + ( height / 2.0)
means = tf.stack((cx,cy),axis = -1)
covar = tf.stack((width * 0.5 , height * 0.5), axis = -1)
# print(means.eval())
# print(covar.eval())
# print('width shape ',width.get_shape())
mns = means
cov = covar
# print(mns.eval())
tfd = tf.contrib.distributions
mvn = tfd.MultivariateNormalDiag(
loc = mns,
scale_diag = cov)
print(' means shape ',means.get_shape(), ' ', tf.shape(means).eval())
print(' covar shape ',covar.get_shape(), ' ', tf.shape(covar).eval())
print(' from MVN : \t mns shape :', tf.shape(mns).eval(), ' \t cov shape : ', tf.shape(cov).eval())
print(' from MVN : \t mean shape :', tf.shape(mvn.mean()).eval(), '\t stddev shape', tf.shape(mvn.stddev()).eval())
print(' from MVN : \t mvn.batch_shape:', mvn.batch_shape , '\t mvn.event_shape ', mvn.event_shape)
# print(mvn.loc.eval())
print(' Linear OP shape', mvn.scale.shape, ' Linear Op batch shape ',mvn.scale.batch_shape)
print(' Linear op Range Dim ', mvn.scale.range_dimension)
print(' Linear op Domain Dim ', mvn.scale.domain_dimension)
inp = pos_grid_1
print(' >> input to MVN.PROB: pos_grid (meshgrid) shape: ', inp.get_shape())
# print(inp.eval())
# one_layer = tf.to_float(pos[0])
# print(one_layer[0].get_shape)
prob_grid = mvn.prob(inp)
print(' << output probabilities shape:' , prob_grid.get_shape())
# print(prob.eval())
# eq = tf.equal(grid, pos)
# print( ' pos and grid probabalitiy matricies equal -->', tf.reduce_all(eq).eval())
trans_grid = tf.transpose(prob_grid,[2,3,0,1])
# gauss_tensor.append(trans_grid)
print(' trans_grid shape: ', trans_grid.shape)
gauss_grid = tf.where(tf.is_nan(trans_grid), tf.zeros_like(trans_grid), trans_grid)
# -
# ### Build indicies to scatter probabiltiy distributions by class
# + hideCode=false hidePrompt=false
class_inds = tf.to_int32(stacked_tensor[:,:,6])
batch_grid, roi_grid = tf.meshgrid( tf.range(batch_size, dtype=tf.int32), tf.range(num_rois, dtype=tf.int32), indexing = 'ij' )
print('class shape: ', class_inds.shape)
print(class_inds.eval())
print('roi_grid shape',tf.shape(roi_grid).eval())
print(roi_grid.eval())
print('batch_grid shape',tf.shape(batch_grid).eval())
print(batch_grid.eval())
scatter_classes = tf.stack([batch_grid, class_inds, roi_grid ],axis = -1)
print(scatter_classes.shape)
print(scatter_classes.eval())
gauss_scatt = tf.scatter_nd(scatter_classes, gauss_grid, [batch_size, num_classes, num_rois, 128, 128])
print(' gaussian_grid :', gauss_grid.shape)
print(' gaussian scattered :', gauss_scatt.shape)
gauss_sum = tf.reduce_sum(gauss_scatt, axis=2)
print(' gaussian sum :', gauss_sum.shape)
# + hideOutput=true
gauss_scatt[0,0,0].eval()
srt = 10
end = 17
img = 0
item = 1
clss = tf.to_int32(pred_classes[img, item]).eval()
print(clss)
eq = tf.equal(gauss_scatt[img,clss,item],gauss_grid[img,item])
print( 'equal ', tf.reduce_all(eq).eval())
print(' gaussian_grid :', gauss_grid.shape)
print(gauss_grid[img,item].eval())
print()
print(' gaussian scattered :', gauss_scatt.shape)
print(gauss_scatt[img,clss,item].eval())
print()
print(' gaussian sum :', gauss_sum.shape)
print(gauss_sum[img,item].eval())
# + hideCode=false hidePrompt=false
# for img in range(3):
# for idx in range(32):
# compare = tf.equal(trans_grid[img,idx], trans_pos[img,idx])
# print('compare shape' , compare.get_shape())
# eq = tf.reduce_all(compare).eval()
# print( img , '/' ,idx, ' TRANSPOSED pos and grid probabalitiy matricies equal -->', eq)
# if not eq :
# print(trans_grid[img,idx].eval())
# print(trans_pos[img,idx].eval())
# comapre = tf.equal(trans_grid[3], trans_pos[3])
# eq = tf.reduce_all(compare).eval()
# print(eq)
# + hideCode=false hideOutput=true
for img in range(batch_size):
for idx in range(rois_per_image):
clss = tf.to_int32(pred_classes[img, idx]).eval()
compare = tf.equal(gauss_scatt[img,clss,idx],gauss_grid[img,idx])
        eq = tf.reduce_all(compare)
print('img:', img, 'idx' , idx, 'class ', clss , 'equal ', eq.eval())
if not eq.eval():
print(gauss_scatt[0,1,idx].eval())
print(gauss_grid[0,idx].eval())
# -
for i in range(32):
print(gauss_scatt[0,0,i].shape)
print(gauss_scatt[0,0,i].eval())
# prob_sum[0,2].eval()
# + [markdown] hideCode=false hidePrompt=false
# # Generate 3D Pred Array Tensors
#
# ### Create class score tensor
# 1- Class_Score tensor: batch_size x num_dectections x num_classes
#
# 2- Output_ROIs : batch_size x num_detections x 4
# 3- mrcnn_bboxes: batch_size x num_detections x num_classes x 4
#
# ### Generate pred_array: tensor [ seq_id, max_score, bbox coordinates, class_id]
# 1 - Get max prediction score and corresponding class for each ROI
# 2 - Concat sequence id, max_score, bbox coordinates and class id
#
# ### Generate meshgrid of image_id (batch grid) and bbox_ids (roi_grid)
# Generate scatter indices to scatter bbox information by image_id and class_id
# Take the highest score for each detection (the highest score among the classes) and the class corresponding to that score
#
# ### Split Box/ Class info for each Image dim(0) by class and place into new tensor using tf.scatter_nd
#
# Output is converted to NUM_BATCHES x NUM_CLASSES x NUM_ROIS x columns(7)
#
# ### Sort pred_scatter by score
# Sort each slice based on column 1 (score) using top_k
# Generate class_grid, batch_grid and roi_grid using meshgrid
# Stack the results to generate gather indices
# Generate the final result - pred_tensor - using tf.gather_nd()
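# The scatter-then-sort pipeline is easier to see in a tiny numpy sketch: place each ROI row into a per-class slot by its class id, then sort each class slice by score (the shapes and values here are made up for illustration):

```python
import numpy as np

num_classes, num_rois = 3, 4
rois = np.array([[0.9, 0],    # columns: [score, class_id]
                 [0.2, 2],
                 [0.7, 0],
                 [0.5, 1]])

# "scatter_nd": one zero-padded slot per (class, roi) position
scattered = np.zeros((num_classes, num_rois, 2))
for i, row in enumerate(rois):
    scattered[int(row[1]), i] = row

# "top_k": sort every class slice by score, descending
order = np.argsort(-scattered[:, :, 0], axis=1)
sorted_rois = np.take_along_axis(scattered, order[:, :, None], axis=1)
```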
# + hideCode=false hidePrompt=false
mrcnn_class_init_0 = tf.Variable(tf.random_uniform([num_rois-10, num_classes], minval=0, maxval=1 , dtype=tf.float32, seed=1234), name = 'var')
mrcnn_class_init_1 = tf.Variable(tf.random_uniform([num_rois-4 , num_classes], minval=0, maxval=1 , dtype=tf.float32, seed=1234), name = 'var')
mrcnn_class_init_2 = tf.Variable(tf.random_uniform([num_rois-2 , num_classes], minval=0, maxval=1 , dtype=tf.float32, seed=1234), name = 'var')
output_rois_init_0 = tf.Variable(tf.random_uniform([num_rois-10, 4], minval=0, maxval=128, dtype=tf.float32, seed=1234), name = 'var')
output_rois_init_1 = tf.Variable(tf.random_uniform([num_rois-4 , 4], minval=0, maxval=128, dtype=tf.float32, seed=1234), name = 'var')
output_rois_init_2 = tf.Variable(tf.random_uniform([num_rois-2 , 4], minval=0, maxval=128, dtype=tf.float32, seed=1234), name = 'var')
mrcnn_bboxes = tf.Variable(tf.random_uniform([batch_size, num_rois, num_classes, 4], minval=0, maxval=128, dtype=tf.float32, seed=1234), name = 'var')
init = tf.global_variables_initializer().run()
mrcnn_class_filler_0 = tf.zeros([10, num_classes], dtype = tf.float32)
mrcnn_class_filler_1 = tf.zeros([4, num_classes], dtype = tf.float32)
mrcnn_class_filler_2 = tf.zeros([2, num_classes], dtype = tf.float32)
mrcnn_class_0 = tf.concat([mrcnn_class_init_0, mrcnn_class_filler_0],axis = 0)
mrcnn_class_1 = tf.concat([mrcnn_class_init_1, mrcnn_class_filler_1],axis = 0)
mrcnn_class_2 = tf.concat([mrcnn_class_init_2, mrcnn_class_filler_2],axis = 0)
print(' mrcnn_class 0 shape: ', mrcnn_class_0.shape)
print(' mrcnn_class 1 shape: ', mrcnn_class_1.shape)
print(' mrcnn_class 2 shape: ', mrcnn_class_2.shape)
mrcnn_class = tf.stack([mrcnn_class_0, mrcnn_class_1, mrcnn_class_2], axis = 0 )
print(' mrcnn_class shape: ', mrcnn_class.shape)
# print(mrcnn_class.eval())
output_rois_filler_0 = tf.zeros([10, 4], dtype = tf.float32)
output_rois_filler_1 = tf.zeros([4 , 4], dtype = tf.float32)
output_rois_filler_2 = tf.zeros([2 , 4], dtype = tf.float32)
# output_rois_filler = tf.zeros([batch_size, 2, 4], dtype = tf.float32)
output_rois_0 = tf.concat([output_rois_init_0, output_rois_filler_0],axis = 0)
output_rois_1 = tf.concat([output_rois_init_1, output_rois_filler_1],axis = 0)
output_rois_2 = tf.concat([output_rois_init_2, output_rois_filler_2],axis = 0)
print(' output_rois 0 shape: ', output_rois_0.shape)
print(' output_rois 1 shape: ', output_rois_1.shape)
print(' output_rois 2 shape: ', output_rois_2.shape)
output_rois = tf.stack([output_rois_0, output_rois_1, output_rois_2], axis = 0 )
print(' output_rois shape: ', output_rois.shape)
print(output_rois.eval())
one = tf.ones([num_rois, 4])
test_bboxes = tf.stack([one, one+1, one+2, one+3], axis = 0)
pred_classes = tf.argmax(mrcnn_class,axis=-1, output_type = tf.int32)
pred_classes_exp = tf.to_float(tf.expand_dims(pred_classes,axis=-1))
print(' pred_classes (classes with highest scores):', pred_classes.get_shape())
print(pred_classes.eval())
print(' predclasses expanded shape:', pred_classes_exp.get_shape())
pred_scores = tf.reduce_max(mrcnn_class,axis=-1, keepdims=True)
print(' pred_scores shape', pred_scores.shape)
# print(pred_scores.eval())
batch_grid, roi_grid = tf.meshgrid( tf.range(batch_size, dtype=tf.int32), tf.range(num_rois, dtype=tf.int32), indexing = 'ij' )
print('roi_grid shape',tf.shape(roi_grid).eval())
print(roi_grid.eval())
print('batch_grid shape',tf.shape(batch_grid).eval())
print(batch_grid.eval())
bbox_idx = tf.to_float(tf.expand_dims(roi_grid, axis = -1))
pred_array = tf.concat([ bbox_idx, pred_scores , output_rois, pred_classes_exp], axis=-1)
print(' pred_array shape', pred_array.shape)
print(pred_array[0].eval())
print('------')
print(pred_array[1].eval())
print('------')
print(pred_array[2].eval())
scatter_classes = tf.stack([batch_grid , pred_classes, roi_grid],axis = -1)
print('-- scatter to classes ----')
print('scatter_classes shape', scatter_classes.get_shape())
print(scatter_classes.eval())
pred_scatt = tf.scatter_nd(scatter_classes, pred_array, [batch_size, num_classes, num_rois,7])
print('pred_scatter shape is ', pred_scatt.get_shape())
np.set_printoptions(linewidth=100, precision=4)
print('------------')
print(pred_scatt[0].eval())
print('------------')
np.set_printoptions(linewidth=100, precision=4)
print(pred_scatt[1].eval())
print('------------')
print(pred_scatt[2].eval())
# equality = tf.equal(pred_scatt[0,0,0], pred_scatt2[0,0,0])
# print(tf.reduce_all(equality).eval())
sort_vals, sort_inds = tf.nn.top_k(pred_scatt[:,:,:,1], k=pred_scatt.shape[2])
# print(' Sort vals: ',tf.shape(sort_vals).eval())
# print(sort_vals.eval())
print()
print(' Sort inds: ',tf.shape(sort_inds).eval())
print(sort_inds.eval())
class_grid, batch_grid, dtct_grid = tf.meshgrid(tf.range(num_classes),tf.range(batch_size), tf.range(num_rois))
gather_inds = tf.stack([batch_grid , class_grid, sort_inds],axis = -1)
# print('class_grid', type(class_grid), 'shape',tf.shape(class_grid).eval())
# print(class_grid.eval())
# print('batch_grid', type(batch_grid), 'shape',tf.shape(batch_grid).eval())
# print(batch_grid.eval())
# print('class_grid', type(dtct_grid), 'shape',tf.shape(dtct_grid).eval())
# print(dtct_grid.eval())
print('-- stack results ----')
print('pred_scatt ', type(pred_scatt), 'shape',tf.shape(pred_scatt).eval())
print('gather_inds', type(gather_inds), 'shape',tf.shape(gather_inds).eval())
print(gather_inds.eval())
pred_tensor = tf.gather_nd(pred_scatt, gather_inds)
print()
print('-- gather_nd results (A-boxes sorted by score) ----')
print(' srtd_boxes shape : ', tf.shape(pred_tensor).eval())
print(pred_tensor.eval())
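The `top_k` / `gather_nd` pair above is the core sort-by-score pattern. A minimal NumPy sketch of the same idea on toy data (all names and shapes here are illustrative, not the notebook's tensors):

```python
import numpy as np

# toy (batch, class, roi) scores and matching per-roi feature rows
scores = np.array([[[0.1, 0.9, 0.5],
                    [0.3, 0.2, 0.8]]])            # (1, 2, 3)
feats = np.arange(12).reshape(1, 2, 3, 2)         # (1, 2, 3, 2)

# argsort descending along the roi axis ~ the indices from tf.nn.top_k
sort_inds = np.argsort(-scores, axis=-1)          # (1, 2, 3)

# gather feature rows in sorted order ~ tf.gather_nd with meshgrid indices
b, c = np.meshgrid(np.arange(1), np.arange(2), indexing='ij')
sorted_feats = feats[b[..., None], c[..., None], sort_inds]
print(sorted_feats.shape)                         # (1, 2, 3, 2)
```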
# + [markdown] hideCode=false hidePrompt=false
# ### Display for visual check
# + hideCode=false hidePrompt=false
np.set_printoptions(linewidth=100, precision=4)
print('scatter shape is ', pred_scatt.get_shape())
print('pred_tensor shape is ', pred_tensor.get_shape() )
img = 0
# print(pred_scatt[img,0].eval())
print(pred_tensor[img,0].eval())
print('------------')
# print(pred_scatt[img, 1].eval())
print(pred_tensor[img,1].eval())
print('------------')
# print(pred_scatt[img,2].eval())
print(pred_tensor[img, 2].eval())
print('------------')
# print(pred_scatt[img,3].eval())
print(pred_tensor[img, 3].eval())
# + [markdown] hideCode=true hidePrompt=true
# ### Build indices to gather bounding boxes from bboxes_4d corresponding to the predicted class
# #### Only used if we want to use mrcnn_bboxes (batch_size, num_rois, num_classes, 4)
#
# batch_size x num_detections x 4
# + hideCode=false hideOutput=true hidePrompt=true
### Build indices to gather bounding boxes from bboxes_4d corresponding to the predicted class
### Only used if we want to use mrcnn_bboxes (batch_size, num_rois, num_classes, 4)
# gather_boxes = tf.stack([batch_grid, roi_grid, pred_classes, ], axis = -1)
# print('-- gather_boxes ----')
# print('gather_boxes inds', type(gather_boxes), 'shape',tf.shape(gather_boxes).eval())
# print(gather_boxes.eval())
# mrcnn_bboxes_selected = tf.gather_nd(mrcnn_bboxes, gather_boxes)
# print(' padding required for output_rois : ', mrcnn_bboxes_selected.get_shape())
# print(mrcnn_bboxes_selected[0].eval())
# print(' output_rois shape ', output_rois.get_shape())
# print(pred_classes[0].eval())
# print(output_rois[0].eval())
# + hideCode=false hidePrompt=false
# tf.count_nonzero(pred_tensor[0,0]).eval()
# + hideCode=false hidePrompt=false
a00 = pred_tensor
print(a00[:,:,:,0].get_shape())
a00_cnt = tf.count_nonzero(a00[:,:,:,0],axis = -1)
print(a00_cnt.get_shape())
a00_cnt.eval()
# + [markdown] hideCode=false hidePrompt=true
# ### Sort each slice based on class id (obsolete)
#
# #### Now we split based on class id first then sort based on score - no need to sort on class beforehand
# + hideCode=false hidePrompt=true
# sort_vals, sort_inds = tf.nn.top_k(-pred_array[:,:,6], k=pred_array.shape[1])
# print(tf.shape(sort_vals).eval())
# print(sort_vals.eval())
# print()
# print(tf.shape(sort_inds).eval())
# print(sort_inds.eval())
# + [markdown] hideCode=false hidePrompt=true
# ### Scatter to class layers - Loop / Dynamic Partition method (Obsolete)
#
# We are scattering using the scatter_nd_update method
# + hideCode=false hidePrompt=true
print('shape of srtd_boxes ', srtd_boxes.shape)
partition_inds = tf.to_int32(srtd_boxes[...,6])
print(partition_inds.eval())
batch_stack = []
for i in range(batch_size):
splitted = tf.dynamic_partition(srtd_boxes[i], partition_inds[i], num_classes)
for idx, split in enumerate(splitted):
# print(' item ', idx,' shape ',split.shape, 'shape:', tf.shape(split).eval(), 'rank: ', tf.rank(split).eval())
padding = num_rois - tf.shape(split)[0]
split = tf.pad(split,[[0,padding],[0,0]])
# print(' num of padding rows to add ', padding.eval())
# print(' shape after padding : ', tf.shape(split).eval())
# print(split.eval())
splitted[idx] = split
stacked = tf.stack(splitted, axis = 0)
batch_stack.append(stacked)
batch_tensor = tf.stack(batch_stack, axis = 0)
print(' batch _tensor shape is ', tf.shape(batch_tensor).eval())
print(batch_tensor[0].eval())
# + [markdown] hideCode=false hidePrompt=true
# ## Experimental Code
# + [markdown] hideCode=false hidePrompt=true
# ### Experiment for sort ordering
# + hideCode=false hidePrompt=true
indices = tf.constant([
[[0,2],[0,1],[0,0]],
[[1,2],[1,1],[1,0]],
[[2,0],[2,1],[2,1]],
])
params = tf.constant( [
[['0-00', '0-01', '0-02', '0-03'],
['0-10', '0-11', '0-12', '0-13'],
['0-20', '0-21', '0-22', '0-23'],
['0-30', '0-31', '0-32', '0-33']],
[['1-00', '1-01', '1-02', '1-03'],
['1-10', '1-11', '1-12', '1-13'],
['1-20', '1-21', '1-22', '1-23'],
['1-30', '1-31', '1-32', '1-33']],
[['2-00', '2-01', '2-02', '2-03'],
['2-10', '2-11', '2-12', '2-13'],
['2-20', '2-21', '2-22', '2-23'],
['2-30', '2-31', '2-32', '2-33']]
])
print(params.shape, ' ', indices.shape)
res = tf.gather_nd(params, indices)
res.eval()
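As a reference point for the experiment above: `tf.gather_nd` treats the last axis of `indices` as coordinate tuples into `params`, which in NumPy is plain integer-array indexing (toy data assumed):

```python
import numpy as np

params = np.arange(12).reshape(3, 4)            # (3, 4)
indices = np.array([[[0, 2], [0, 1]],
                    [[2, 3], [1, 0]]])          # (2, 2, 2): last axis = (row, col)

# NumPy equivalent of tf.gather_nd(params, indices)
res = params[indices[..., 0], indices[..., 1]]  # result shape = indices.shape[:-1] -> (2, 2)
print(res)                                      # [[ 2  1]
                                                #  [11  4]]
```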
# + [markdown] hideCode=false hidePrompt=false
# ### Experiment with GATHER() for splitting into classes
# + hideCode=false hidePrompt=false
# indices = tf.constant([
# [0,2,2],[0,1,1],[0,0,0],
# [1,2,2],[1,1,1],[1,0,0],
# [2,0,0],[2,1,1],[2,1,1],
# ])
# indices = tf.constant([ [0,2],[0,1],[0,0],[1,2],[1,1],[1,0],[1,2],[1,3], [2,3]] ) # 9 x 2
indices = tf.constant([
[
[[0,2],[0,1],[0,0],[1,2],[2,3]],
[[1,1],[1,0],[1,2],[1,3],[1,1]]
]
])
params = tf.constant( [
[['0-00', '0-01', '0-02', '0-03', '0-04', '0-05', '0-06'],
['0-10', '0-11', '0-12', '0-13', '0-14', '0-15', '0-16'],
['0-20', '0-21', '0-22', '0-23', '0-24', '0-25', '0-26'],
['0-30', '0-31', '0-32', '0-33', '0-34', '0-35', '0-36'],
['0-40', '0-41', '0-42', '0-43', '0-44', '0-45', '0-46']],
[['1-00', '1-01', '1-02', '1-03', '1-04', '1-05', '1-06'],
['1-10', '1-11', '1-12', '1-13', '1-14', '1-15', '1-16'],
['1-20', '1-21', '1-22', '1-23', '1-24', '1-25', '1-26'],
['1-30', '1-31', '1-32', '1-33', '1-34', '1-35', '1-36'],
['1-40', '1-41', '1-42', '1-43', '1-44', '1-45', '1-46']],
[['2-00', '2-01', '2-02', '2-03', '2-04', '2-05', '2-06'],
['2-10', '2-11', '2-12', '2-13', '2-14', '2-15', '2-16'],
['2-20', '2-21', '2-22', '2-23', '2-24', '2-25', '2-26'],
['2-30', '2-31', '2-32', '2-33', '2-34', '2-35', '2-36'],
['2-40', '2-41', '2-42', '2-43', '2-44', '2-45', '2-46']]
])
print(' params shape: ', params.shape, ' params.shape[:axis]', params.shape[:0], params.shape[:1], params.shape[:2], params.shape[:3])
print(' params shape: ', params.shape, ' params.shape[axis:]', params.shape[0:], params.shape[1:], params.shape[2:], params.shape[3:])
print(' indices.shape ', indices.shape)
res = tf.gather_nd(params, indices)
print('result shape ', res.get_shape())
res.eval()
# + [markdown] hideCode=true hidePrompt=false
# ### Experiment with SCATTER() for splitting into classes
# + hideCode=false hideOutput=false hidePrompt=false
# del indices, updates, shape
indices = tf.constant([
#|
#|
[[0,0,0], [0,3,0], [0,2,0], [0,1,0],[0,1,1],[0,1,2],[0,2,1],[0,3,2] ],
[[1,0,0], [1,3,0], [1,2,0], [1,1,0],[1,1,1],[1,1,2],[1,2,1],[1,3,2] ],
[[2,0,0], [2,3,0], [2,2,0], [2,1,0],[2,1,1],[2,1,2],[2,2,1],[2,3,2] ]
])
#<--- each one corresponds to an entry in the updates array [row , position in resulting array]
# [[1,1], [3,1], [2,1], [1,1],[1,1],[1,2],[2,1],[3,2] ],
# [[0,0], [3,0], [2,0], [1,0],[1,1],[1,2],[2,1],[3,2] ]
###----------------------------------------------------
### Our bounding box array
###---------------------------------------------------
updates = tf.constant( [
[[1000, 1001, 1002, 1003, 1004, 1005, 1006],
[1010, 1011, 1012, 1013, 1014, 1015, 1016],
[1020, 1021, 1022, 1023, 1024, 1025, 1026],
[1030, 1031, 1032, 1033, 1034, 1035, 1036],
[1040, 1041, 1042, 1043, 1044, 1045, 1046],
[1050, 1051, 1052, 1053, 1054, 1055, 1056],
[1060, 1061, 1062, 1063, 1064, 1065, 1066],
[1070, 1071, 1072, 1073, 1074, 1075, 1076] ],
[[2000, 2001, 2002, 2003, 2004, 2005, 2006],
[2010, 2011, 2012, 2013, 2014, 2015, 2016],
[2020, 2021, 2022, 2023, 2024, 2025, 2026],
[2030, 2031, 2032, 2033, 2034, 2035, 2036],
[2040, 2041, 2042, 2043, 2044, 2045, 2046],
[2050, 2051, 2052, 2053, 2054, 2055, 2056],
[2060, 2061, 2062, 2063, 2064, 2065, 2066],
[2070, 2071, 2072, 2073, 2074, 2075, 2076]] ,
[[3000, 3001, 3002, 3003, 3004, 3005, 3006],
[3010, 3011, 3012, 3013, 3014, 3015, 3016],
[3020, 3021, 3022, 3023, 3024, 3025, 3026],
[3030, 3031, 3032, 3033, 3034, 3035, 3036],
[3040, 3041, 3042, 3043, 3044, 3045, 3046],
[3050, 3051, 3052, 3053, 3054, 3055, 3056],
[3060, 3061, 3062, 3063, 3064, 3065, 3066],
[3070, 3071, 3072, 3073, 3074, 3075, 3076] ]
])
zeros = tf.zeros([3,4,8,7], dtype=tf.int32)
ref = tf.Variable(zeros)
scatt1= tf.Variable(zeros)
shape = tf.constant([3,4, 8, 7])
# init = tf.global_variables_initializer().run()
# print('indices shape ', indices.shape, 'indices_shape [-1]', indices.shape[-1], 'indices_shape [:-1]', indices.shape[:-1])
# print('updates.shape ', updates.shape)
# print('ref shape ', ref.get_shape(), ref)
# print('scatt1 shape ', scatt1.get_shape(), scatt)
scatt1 = tf.scatter_nd(indices, updates, shape)
# print('scatt1 shape ', scatt1.get_shape(), scatt)
# print(scatt1[0].eval())
# print('------------')
# print(scatt1[1].eval())
# print('------------')
# print(scatt1[2].eval())
# scatt = tf.scatter_nd_update(ref, indices, updates)
# print('scatter shape is ', scatt.get_shape(), scatt)
# print(scatt.eval())
print()
# + hideCode=false hideOutput=false hidePrompt=false
indices = tf.constant([
[[0,0], [3,0], [2,0], [1,0]]
# ,[[1,3], [1,3], [1,2], [1,0]]
])
updates = tf.constant([
[[5, 5, 5, 5, 0],
[6, 6, 6, 6, 3],
[7, 7, 7, 7, 2],
[8, 8, 8, 8, 1]]
# , [[5, 5, 5, 3],
# [6, 6, 6, 3],
# [7, 7, 7, 2],
# [8, 8, 8, 0]]
])
shape = tf.constant([4, 4, 5])
print(indices.shape, 'inputs shape', updates.shape, shape.shape)
scatter = tf.scatter_nd(indices, updates, shape)
print('scatter shape is ', scatter.get_shape())
print(scatter.eval())
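`tf.scatter_nd` is the inverse pattern: each `updates` row is written into a zeros tensor at the coordinates given by the matching `indices` row. A NumPy sketch of the small case above (same toy shapes):

```python
import numpy as np

indices = np.array([[0, 0], [3, 0], [2, 0], [1, 0]])   # (row, col) targets
updates = np.array([[5, 5, 5, 5, 0],
                    [6, 6, 6, 6, 3],
                    [7, 7, 7, 7, 2],
                    [8, 8, 8, 8, 1]])
scatter = np.zeros((4, 4, 5), dtype=int)
scatter[indices[:, 0], indices[:, 1]] = updates        # ~ tf.scatter_nd(indices, updates, shape)
print(scatter[:, 0])                                   # each updates row lands at (row, 0)
```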
# + hideCode=false hidePrompt=false
num_detections = 8
# batch_size is 3
gt_class_ids = tf.constant([[1,2,3,0,0,0,0,0],[3,3,3,0,0,0,0,0],[1,2,2,0,0,2,0,0]])
gt_bboxes = tf.random_uniform([batch_size,3,4], maxval = 127, dtype=tf.int32)
gt_classes_exp = tf.to_float(tf.expand_dims(gt_class_ids ,axis=-1))
print(' gt_classes_exp: ' ,gt_classes_exp.get_shape())
print(gt_classes_exp.eval())
zeros = tf.zeros([batch_size,5,4], dtype = tf.int32)
gt_bboxes = tf.concat([gt_bboxes, zeros], axis = 1)
gt_bboxes = tf.to_float(gt_bboxes)
print('\n gt_bboxes: ' ,gt_bboxes.get_shape())
print(gt_bboxes.eval())
mask = tf.greater(gt_class_ids,0)
print(mask.eval())
gt_scores = tf.where(mask, tf.ones_like(gt_class_ids), tf.zeros_like(gt_class_ids))
gt_scores_exp = tf.to_float(tf.expand_dims(gt_scores, axis=-1))
print('\n gt_scores ', gt_scores_exp.get_shape())
print(gt_scores_exp.eval())
batch_grid, bbox_grid = tf.meshgrid( tf.range(batch_size , dtype=tf.int32),
tf.range(num_detections , dtype=tf.int32), indexing = 'ij' )
print('\n bbox_grid ', type(bbox_grid) , 'shape', bbox_grid.get_shape())
print(bbox_grid.eval())
print('\n batch_grid ', type(batch_grid), 'shape', batch_grid.get_shape())
print(batch_grid.eval())
bbox_idx_zeros = tf.zeros_like(bbox_grid)
bbox_idx = tf.where(mask, bbox_grid , bbox_idx_zeros)
bbox_idx = tf.to_float(tf.expand_dims(bbox_idx, axis = -1))
print(' bbox_idx', type(bbox_idx), 'shape', bbox_idx.get_shape())
print(bbox_idx.eval())
# + hideCode=false hideOutput=false hidePrompt=false
gt_array = tf.concat([bbox_idx, gt_scores_exp , gt_bboxes, gt_classes_exp], axis=2)
print(' gt_array ', type(gt_array), gt_array.shape)
print(gt_array.eval())
# + hideCode=false hidePrompt=false
scatter_classes = tf.stack([batch_grid , gt_class_ids, bbox_grid],axis = -1)
print('\n -- stack results ----')
print('\n scatter_classes', type(scatter_classes), 'shape',tf.shape(scatter_classes).eval())
print(scatter_classes.eval())
# + hideCode=false hidePrompt=false
gt_scatter = tf.scatter_nd(scatter_classes, gt_array, [batch_size, num_classes, num_detections,7])
print(' gt_tensor shape is ', gt_scatter.get_shape(), gt_scatter)
gt_scatter.eval()
# + hideCode=false hidePrompt=false
sort_vals, sort_inds = tf.nn.top_k(gt_scatter[:,:,:,0], k=gt_scatter.shape[2])
print(' sort vals shape : ', sort_vals.get_shape())
print(sort_vals.eval())
print(' sort inds shape : ', sort_inds.get_shape())
print(sort_inds.eval())
# + hideCode=false hidePrompt=false
# build gathering indexes to use in sorting
class_grid, batch_grid, bbox_grid = tf.meshgrid(tf.range(num_classes),tf.range(batch_size), tf.range(num_detections))
print(' class_grid ', type(class_grid) , 'shape', class_grid.get_shape())
print(class_grid.eval())
print(' batch_grid ', type(batch_grid) , 'shape', batch_grid.get_shape())
print(class_grid.eval())
print(' bbox_grid ', type(bbox_grid) , 'shape', bbox_grid.get_shape())
print(bbox_grid.eval())
# + hideCode=false hidePrompt=false
gather_inds = tf.stack([batch_grid , class_grid, sort_inds],axis = -1)
print(' -- gather_inds (boxes sorted by score) ----')
print(' gather_inds ', gather_inds.get_shape())
print(gather_inds.eval())
gt_tensor = tf.gather_nd(gt_scatter, gather_inds)
print(' -- gt_tensor results (boxes sorted by score) ----')
print(' gt_tensor ', gt_tensor.get_shape())
print(gt_tensor.eval())
# + [markdown] hideCode=false hideOutput=false hidePrompt=false
# ### Split Box/ Class info for each Image dim(0) by class and place into new tensor using tf.scatter_nd_update
# ### obsolete -- use tf.scatter_nd instead
# Output is converted to NUM_BATCHES x NUM_CLASSES x num_rois x columns
# + hideCode=false hidePrompt=false
# scatt1 = tf.scatter_nd(scatter_inds, pred_array, [batch_size, num_classes, num_rois])
# print('scatt1 shape ', scatt1.get_shape(), scatt)
# zeros = tf.zeros([batch_size,num_classes, num_rois,7], dtype=tf.float32)
# ref = tf.Variable(zeros)
# init = tf.global_variables_initializer().run()
# pred_scatt_old = tf.scatter_nd_update(ref, scatter_classes, pred_array)
# print('scatter shape is ', pred_scatt.get_shape(), pred_scatt)
# print(pred_scatt[0].eval())
# print('------------')
# print(pred_scatt[1].eval())
# print('------------')
# print(pred_scatt[2].eval())
# + [markdown] hideCode=true hideOutput=false hidePrompt=true
# ### Stack non-zero bbox information from pred_array into stacked_array --- method 2 (obsolete)
# + hideCode=false hidePrompt=false
# sep = tf.unstack(pred_tensor)
# print(len(sep))
# all_boxes = []
# for sub_sep in sep:
# p_reduced = tf.reduce_sum(tf.abs(sub_sep), axis=-1)
#     print(' p_reduced shape ',p_reduced.shape)
# # print(p_reduced.eval())
# non_zeros = tf.cast(p_reduced, tf.bool)
# print(' mask shape' ,non_zeros.shape)
# # print(non_zeros.eval())
# boxes = tf.boolean_mask(sub_sep, non_zeros,axis = 0)
# print(' boxes shape ',boxes.get_shape())
# print(boxes.eval())
# all_boxes.append(boxes)
# print(len(all_boxes))
# stacked_tensor = tf.stack(all_boxes)
# print('stacked shape',stacked_tensor.get_shape())
# p_reduced = tf.reduce_sum(tf.abs(pred_tensor), axis=-1)
# print(' p_reduced shape ',p_reduced.shape)
# print(p_reduced.eval())
# non_zeros = tf.cast(p_reduced, tf.bool)
# print(' mask shape' ,non_zeros.shape)
# print(non_zeros.eval())
# boxes = tf.boolean_mask(pred_tensor, non_zeros,axis = 0)
# print(boxes.shape)
# print(boxes.eval())
# print(keepers[1].eval())
# -
# #### Another (slower) method for creating a meshgrid with the following shape
# This method is slightly slower - build the meshgrid in the following dimensions to allow for the TensorFlow multivariate normal distribution
#
# image width x image height x batch_size x num_rois_per_image x 2 (8 x 16 x 3 x 32 x 2)
# + hideCode=false hidePrompt=false
# rois_per_image = 32
# img_h = 8
# img_w = 16
# X = tf.range(img_w, dtype=tf.int32)
# Y = tf.range(img_h, dtype=tf.int32)
# X, Y = tf.meshgrid(X, Y)
# print( X.get_shape(), Y.get_shape())
# # print( ' X : \n',X.eval())
# # print( ' Y : \n',Y.eval())
# twos_X = tf.tile(tf.expand_dims(X, axis = -1),[1,1,3])
# twos_Y = tf.tile(tf.expand_dims(Y, axis = -1),[1,1,3])
# # print(' twos.shape X',twos_X.shape)
# # print(' twos_shape Y',twos_Y.shape)
# thre_X = tf.tile(tf.expand_dims(twos_X, axis = -1),[1,1,1,32])
# thre_Y = tf.tile(tf.expand_dims(twos_Y, axis = -1),[1,1,1,32])
# # print(' twos_X shape ', twos_X.shape, ' thre_X shape ',thre_X.shape)
# # print(' twos_Y shape ', twos_Y.shape, ' thre_Y shape ',thre_Y.shape)
# # stack the X and Y grids
# pos_grid_2 = tf.to_float(tf.stack([thre_X,thre_Y], axis = -1))
# print(pos_grid_2.get_shape())
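The commented tile-based construction can also be written with NumPy broadcasting; a sketch producing the same 8 x 16 x 3 x 32 x 2 grid (toy sizes taken from the comment above):

```python
import numpy as np

img_h, img_w, batch_size, rois_per_image = 8, 16, 3, 32
X, Y = np.meshgrid(np.arange(img_w), np.arange(img_h))   # both (8, 16)

# broadcast each grid out to (h, w, batch, rois), then stack the x/y pairs
X4 = np.broadcast_to(X[:, :, None, None], (img_h, img_w, batch_size, rois_per_image))
Y4 = np.broadcast_to(Y[:, :, None, None], (img_h, img_w, batch_size, rois_per_image))
pos_grid = np.stack([X4, Y4], axis=-1).astype(np.float32)
print(pos_grid.shape)                                    # (8, 16, 3, 32, 2)
```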
| notebooks/Notebook Archive/tf_gaussian 12-04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="jKu0oLhRTIv2" outputId="9711b003-333a-4176-ade3-916f986c248b"
from google.colab import drive
drive.mount('/content/drive')
drive.mount("/content/drive", force_remount=True)
# + colab={"base_uri": "https://localhost:8080/"} id="3XzU7G3-WLyQ" outputId="601a727d-c197-404e-e37a-547a9983bf29"
# !pip install keras-layer-normalization
# + id="JQw5O5yWUdAS"
class Config:
DATASET_PATH ="/content/drive/My Drive/datasets/UCSD_Anomaly_Dataset/UCSD_Anomaly_Dataset.v1p2/UCSDped2/Train/"
DATASET_PATH1="/content/drive/My Drive/datasets/UCSD_Anomaly_Dataset/UCSD_Anomaly_Dataset.v1p2/UCSDped2/Train/Train002/"
SINGLE_TEST_PATH = "/content/drive/My Drive/datasets/UCSD_Anomaly_Dataset/UCSD_Anomaly_Dataset.v1p2/UCSDped2/Test/Test001"
BATCH_SIZE = 4
EPOCHS = 5
MODEL_PATH = "/content/drive/My Drive/datasets/notebooks/lstmautoencoder/model_2.hdf5"
# + id="mFngNm8QSIRC"
from os import listdir
from os.path import isfile, join, isdir
from PIL import Image
import numpy as np
import shelve
def get_clips_by_stride(stride, frames_list, sequence_size):
""" For data augmenting purposes.
Parameters
----------
stride : int
The desired distance between two consecutive frames
frames_list : list
A list of sorted frames of shape 256 X 256
sequence_size: int
The size of the desired LSTM sequence
Returns
-------
list
A list of clips , 10 frames each
"""
clips = []
sz = len(frames_list)
clip = np.zeros(shape=(sequence_size, 256, 256, 1))
cnt = 0
for start in range(0, stride):
for i in range(start, sz, stride):
clip[cnt, :, :, 0] = frames_list[i]
cnt = cnt + 1
if cnt == sequence_size:
clips.append(np.copy(clip))
cnt = 0
return clips
def get_training_set():
"""
Returns
-------
list
A list of training sequences of shape (NUMBER_OF_SEQUENCES,SINGLE_SEQUENCE_SIZE,FRAME_WIDTH,FRAME_HEIGHT,1)
"""
#####################################
# cache = shelve.open(Config.CACHE_PATH)
# return cache["datasetLSTM"]
#####################################
clips = []
# loop over the training folders (Train000,Train001,..)
for f in sorted(listdir(Config.DATASET_PATH)):
if isdir(join(Config.DATASET_PATH, f)):
all_frames = []
# loop over all the images in the folder (0.tif,1.tif,..,199.tif)
for c in sorted(listdir(join(Config.DATASET_PATH, f))):
if str(join(join(Config.DATASET_PATH, f), c))[-3:] == "tif":
img = Image.open(join(join(Config.DATASET_PATH, f), c)).resize((256, 256))
img = np.array(img, dtype=np.float32) / 256.0
all_frames.append(img)
# get the 10-frames sequences from the list of images after applying data augmentation
for stride in range(1, 3):
clips.extend(get_clips_by_stride(stride=stride, frames_list=all_frames, sequence_size=10))
return clips
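A quick sanity check of the stride logic in `get_clips_by_stride`, using scalar "frames" instead of 256x256 images (toy values): stride 2 over six frames with sequence_size 3 yields two interleaved clips.

```python
import numpy as np

def clips_by_stride(stride, frames, seq_size):
    # same traversal as get_clips_by_stride, on scalar frames
    clips, clip, cnt = [], np.zeros(seq_size, dtype=int), 0
    for start in range(stride):
        for i in range(start, len(frames), stride):
            clip[cnt] = frames[i]
            cnt += 1
            if cnt == seq_size:
                clips.append(clip.copy())
                cnt = 0
    return clips

out = clips_by_stride(2, [0, 1, 2, 3, 4, 5], 3)
print([c.tolist() for c in out])   # [[0, 2, 4], [1, 3, 5]]
```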
# + id="0AH_enZ4TB_T"
import keras
import tensorflow as tf
from keras.layers import Conv2DTranspose, ConvLSTM2D, BatchNormalization, TimeDistributed, Conv2D
from keras.models import Sequential, load_model
from keras.layers import LayerNormalization
from keras.layers import Input, Dense
def get_model(reload_model=True):
"""
Parameters
----------
reload_model : bool
Load saved model or retrain it
"""
if not reload_model:
return load_model(Config.MODEL_PATH,custom_objects={'LayerNormalization': LayerNormalization})
training_set = get_training_set()
training_set = np.array(training_set)
training_set = training_set.reshape(-1,10,256,256,1)
seq = Sequential()
seq.add(TimeDistributed(Conv2D(128, (11, 11), strides=4, padding="same"), batch_input_shape=(None, 10, 256, 256, 1)))
seq.add(LayerNormalization())
seq.add(TimeDistributed(Conv2D(64, (5, 5), strides=2, padding="same")))
seq.add(LayerNormalization())
# # # # #
seq.add(ConvLSTM2D(64, (3, 3), padding="same", return_sequences=True))
seq.add(LayerNormalization())
seq.add(ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True))
seq.add(LayerNormalization())
seq.add(ConvLSTM2D(64, (3, 3), padding="same", return_sequences=True))
seq.add(LayerNormalization())
# # # # #
seq.add(TimeDistributed(Conv2DTranspose(64, (5, 5), strides=2, padding="same")))
seq.add(LayerNormalization())
seq.add(TimeDistributed(Conv2DTranspose(128, (11, 11), strides=4, padding="same")))
seq.add(LayerNormalization())
seq.add(TimeDistributed(Conv2D(1, (11, 11), activation="sigmoid", padding="same")))
print(seq.summary())
seq.compile(loss='mse', optimizer=keras.optimizers.Adam(learning_rate=1e-4, decay=1e-5, epsilon=1e-6), metrics=["accuracy"])
seq.fit(training_set, training_set,
batch_size=Config.BATCH_SIZE, epochs=Config.EPOCHS, shuffle=False)
seq.save(Config.MODEL_PATH)
return seq
# + id="4ll2744fTZf3"
def get_single_test():
sz = 200
test = np.zeros(shape=(sz, 256, 256, 1))
cnt = 0
for f in sorted(listdir(Config.SINGLE_TEST_PATH)):
if str(join(Config.SINGLE_TEST_PATH, f))[-3:] == "tif":
img = Image.open(join(Config.SINGLE_TEST_PATH, f)).resize((256, 256))
img = np.array(img, dtype=np.float32) / 256.0
test[cnt, :, :, 0] = img
cnt = cnt + 1
return test
# + id="v9fuICTY_ltw"
def get_normal_vid(path):
sz = 200
test = np.zeros(shape=(sz, 256, 256, 1))
cnt = 0
for f in sorted(listdir(path)):
if str(join(path, f))[-3:] == "tif":
img = Image.open(join(path, f)).resize((256, 256))
img = np.array(img, dtype=np.float32) / 256.0
test[cnt, :, :, 0] = img
cnt = cnt + 1
return test
# + id="TIEK88-2Tcw_"
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import confusion_matrix, precision_recall_curve
from sklearn.metrics import recall_score, classification_report, auc, roc_curve
from sklearn.metrics import precision_recall_fscore_support, f1_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from keras.callbacks import History
history = History()
def evaluate():
model = get_model(False)
print(model.summary())
print("got model")
# Plot of the model training loss
print(model.history.history.keys())
test = get_single_test()
print(test.shape)
sz = test.shape[0] - 10 + 1
sequences = np.zeros((sz, 10, 256, 256, 1))
# apply the sliding window technique to get the sequences
for i in range(0, sz):
clip = np.zeros((10, 256, 256, 1))
for j in range(0, 10):
clip[j] = test[i + j, :, :, :]
sequences[i] = clip
print("got data")
# get the reconstruction cost of all the sequences
reconstructed_sequences = model.predict(sequences,batch_size=4)
sequences_reconstruction_cost = np.array([np.linalg.norm(np.subtract(sequences[i],reconstructed_sequences[i])) for i in range(0,sz)])
mse2=np.mean([np.power(np.linalg.norm(np.subtract(sequences[i],reconstructed_sequences[i])),2)for i in range(0,sz)])
mse = np.array([np.power(np.linalg.norm(np.subtract(sequences[i],reconstructed_sequences[i])),2)for i in range(0,sz)])
sa = (sequences_reconstruction_cost - np.min(sequences_reconstruction_cost)) / np.max(sequences_reconstruction_cost)
sr = 1.0 - sa
mean_error=mse2
threshold=165
ylabel=[]
for i in range(len(mse)):
if i<146:
ylabel.append(0)
#elif i>=8 and i<threshold:
#ylabel.append(1)
#elif i>=threshold and i<113:
#ylabel.append(1)
else:
ylabel.append(1)
auc_score = roc_auc_score(y_true=ylabel, y_score=mse)
fpr = dict()
tpr = dict()
print(len(ylabel))
print(len(mse))
fpr, tpr,thresholds = roc_curve(ylabel, mse)
roc_auc=auc(fpr,tpr)
# calculating the mean squared error reconstruction loss per row in the numpy array
#fpr, tpr, thresholds = metrics.roc_curve(sr, mse)
#plt.plot(mse)
#plt.ylabel('MSE Loss')
#plt.xlabel('frame t')
#plt.show()
plt.plot(model.history.history['accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
print(mse.shape)
#mse=mse.flatten()
print(mse.shape)
plt.plot(mse)
plt.title('Plot of Frames Reconstruction Error')
plt.ylabel('Reconstruction Error - MSE')
plt.xlabel('frame t')
plt.show()
#plot ground truth
plt.plot(ylabel)
plt.title('Plot of Video Frames Ground Truth')
plt.ylabel('Anomaly Presence')
plt.xlabel('frame t')
plt.show()
# plot the regularity scores
plt.plot(sr)
plt.title('Plot of Regularity Score per Frame')
plt.ylabel('regularity score Sr(t)')
plt.xlabel('frame t')
plt.show()
#plotting the ROC Curve
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic For CONVLSTM Model')
plt.legend(loc="lower right")
plt.show()
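The regularity score in `evaluate()` normalizes the reconstruction cost and inverts it. A minimal sketch with made-up costs (note the code above divides by the max, not by max - min):

```python
import numpy as np

cost = np.array([10.0, 25.0, 40.0])      # hypothetical per-window reconstruction errors
sa = (cost - cost.min()) / cost.max()    # anomaly score, as computed in evaluate()
sr = 1.0 - sa                            # regularity score Sr(t)
print(sr)                                # [1.    0.625 0.25 ]
```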
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="a88z8P3YHwEh" outputId="cb52fb88-6cad-4c13-ea5f-baa2166da3d3"
evaluate()
# + colab={"base_uri": "https://localhost:8080/"} id="A1scP0xuVr3e" outputId="0f0c0df4-1b87-4509-9f41-e3a61aae6eef"
#ped2 001
evaluate()
# + colab={"background_save": true} id="GEOit-RKXxo_" outputId="ebb5de68-2a82-41e1-8c03-366b28c031d0"
#training ped2
evaluate()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="JBppz41AOiMf" outputId="07e500e3-1ffe-4303-8c9a-a24a964ff35c"
#028
evaluate()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="8VauszGEj_JK" outputId="30474427-0f6c-4f67-c640-c51471875bbc"
#029
evaluate()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="0x2tl5OWCFib" outputId="d32b6463-28cc-4e9c-a752-a96e46350fde"
#031
evaluate()
# + id="vO5Z6RiHCEvS"
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="0Q9qhb2VhNdE" outputId="de9420b9-3a6a-4a6e-cd5d-cb42d9aba1ff"
#006
evaluate()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="DxUHkpk0dFnn" outputId="30aee01e-a90e-4dfc-d42f-52cbaefbf96e"
evaluate()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="oAVD2ACbOPAL" outputId="404efc6c-0ed2-447d-8916-b5f67a088192"
evaluate()
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="KUxwhBbaTmGQ" outputId="bad9e6f5-74c4-4cf7-ce3d-6f1870d76cf5"
evaluate()
# + [markdown] id="VU6qmiPO-Qei"
# Introduction of One Class SVM to detect anomalies.
# + id="sMfA99Wg-Kqt"
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import metrics,svm
from sklearn.metrics import confusion_matrix, precision_recall_curve
from sklearn.metrics import recall_score, classification_report, auc, roc_curve
from sklearn.metrics import precision_recall_fscore_support, f1_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import roc_auc_score
from numpy import quantile, where, random
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
def svm_training_features():
# loop over the training folders (Train000,Train001,..)
model = get_model(False)
print("got model")
print(model.summary())
all_recon_cost=[]
#for f in sorted(listdir(Config.DATASET_PATH)):
#if isdir(join(Config.DATASET_PATH, f)):
#all_frames = []
test= get_normal_vid(Config.DATASET_PATH1)
#test=get_single_test()
print(test.shape)
sz = test.shape[0] - 10 + 1
sequences = np.zeros((sz, 10, 256, 256, 1))
# apply the sliding window technique to get the sequences
for i in range(0, sz):
clip = np.zeros((10, 256, 256, 1))
for j in range(0, 10):
clip[j] = test[i + j, :, :, :]
sequences[i] = clip
print("got data")
# get the reconstruction cost of all the sequences
reconstructed_sequences = model.predict(sequences,batch_size=4)
sequences_reconstruction_cost = np.array([np.linalg.norm(np.subtract(sequences[i],reconstructed_sequences[i])) for i in range(0,sz)])
mse = np.array([np.power(np.linalg.norm(np.subtract(sequences[i],reconstructed_sequences[i])),2)for i in range(0,sz)])
sa = (sequences_reconstruction_cost - np.min(sequences_reconstruction_cost)) / np.max(sequences_reconstruction_cost)
sr = 1.0 - sa
# plot the regularity scores
plt.plot(sr)
plt.title('Plot of Regularity Score per Frame')
plt.ylabel('regularity score Sr(t)')
plt.xlabel('frame t')
plt.show()
#All regularity scores of normal videos
#print(all_recon_cost.shape())
#One Class SVM
train_feature=pd.DataFrame(mse)
print('Oneclass SVM training')
oneclass=svm.OneClassSVM(kernel='linear', gamma=0.0001, nu=0.99)
oneclass.fit(train_feature)
y_pred=oneclass.predict(train_feature)
regscore=pd.DataFrame(sr)
print(regscore.head())
print('reached here')
print(y_pred)
y_pred = pd.DataFrame(y_pred)
print(y_pred.head())
#Marking Anomalies in the Regularity score
anomaly_index=[]
normal_index=[]
for j in range(len(y_pred)):
if y_pred[0][j] == 1:
normal_index.append(j)
else:
anomaly_index.append(j)
print(anomaly_index)
print(normal_index)
anomaly_values = regscore.iloc[anomaly_index]
normal_values = regscore.iloc[normal_index]
#print (agg_sr.head())
plt.plot(sr)
plt.scatter(anomaly_index, anomaly_values[0], color='r')
plt.scatter(normal_index, normal_values[0], color='b')
plt.title('Plot of Anomaly Score per Frame')
plt.ylabel('Anomaly/Reg Score Sr(t)')
plt.xlabel('frame t')
plt.show()
return y_pred
# + id="hhKmlCAlFvdn"
def svm_detector_training():
from sklearn import svm
import pandas as pd
import numpy as np
train_feature=svm_training_features()
train_feature=pd.DataFrame(train_feature)
print('Oneclass SVM training')
oneclass=svm.OneClassSVM(kernel='linear', gamma=0.001, nu=0.95)
oneclass.fit(train_feature)
feature_pred=oneclass.predict(train_feature)
return feature_pred
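The One-Class SVM calls above follow the standard scikit-learn pattern: fit on (mostly normal) features, then `predict` returns +1 for inliers and -1 for anomalies. A self-contained sketch on synthetic reconstruction-error-like features (all values here are made up):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# toy 1-D feature: mostly small errors plus a few large outliers
rng = np.random.RandomState(0)
feats = np.concatenate([rng.normal(1.0, 0.1, 90),
                        rng.normal(5.0, 0.1, 10)]).reshape(-1, 1)

clf = OneClassSVM(kernel='rbf', gamma='scale', nu=0.1).fit(feats)
pred = clf.predict(feats)          # +1 = inlier, -1 = anomaly
print(np.unique(pred))
```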
# + id="lIp7LSfwGvXP"
def svm_testing():
    import numpy as np
    import pandas as pd
    model = get_model(False)
    print("Got Model")
    # test shape
    test = get_single_test()
    print(test.shape)
    sz = test.shape[0] - 10 + 1
    sequences = np.zeros((sz, 10, 256, 256, 1))
    # apply the sliding window technique to get the sequences
    for i in range(0, sz):
        clip = np.zeros((10, 256, 256, 1))
        for j in range(0, 10):
            clip[j] = test[i + j, :, :, :]
        sequences[i] = clip
    print("got data")
    # get the reconstruction cost of all the sequences
    reconstructed_sequences = model.predict(sequences,batch_size=4)
    sequences_reconstruction_cost = np.array([np.linalg.norm(np.subtract(sequences[i],reconstructed_sequences[i])) for i in range(0,sz)])
    sa = (sequences_reconstruction_cost - np.min(sequences_reconstruction_cost)) / np.max(sequences_reconstruction_cost)
    sr = 1.0 - sa
    test_feature=sequences_reconstruction_cost
    print ('Got Testing set')
    X_test=pd.DataFrame(test_feature)
    oneclass=svm_detector_training()
    # check outliers predicted on single test case
    fraud_pred = oneclass.predict(X_test)
    return fraud_pred
# + id="zX4osZLrKXKD"
def svm_evaluation():
    import pandas as pd
    y_pred = svm_training_features()
    y_pred = pd.DataFrame(y_pred)
    y_pred = y_pred.rename(columns={0: 'prediction'})
    # compare prediction with ground truth
    # Populating ground truth
    threshold=100
    ylabel=[]
    for i in range(len(y_pred)):
        if i<threshold:
            ylabel.append(1)
        elif i>threshold and i<=110:
            ylabel.append(0)
        else:
            ylabel.append(1)
    Y_test = pd.DataFrame(ylabel)
    Y_test = Y_test.rename(columns={0:'Category'})
    # Calculation of TP/FN/FP/TN
    TP = FN = FP = TN = 0
    for j in range(len(Y_test)):
        if Y_test['Category'][j] == 0 and y_pred['prediction'][j] == 1:
            TP = TP+1
        elif Y_test['Category'][j] == 0 and y_pred['prediction'][j] == -1:
            FN = FN+1
        elif Y_test['Category'][j] == 1 and y_pred['prediction'][j] == 1:
            FP = FP+1
        else:
            TN = TN +1
    print (TP, FN, FP, TN)
    # Performance metrics
    accuracy = (TP+TN)/(TP+FN+FP+TN)
    print (accuracy)
    sensitivity = TP/(TP+FN)
    print (sensitivity)
    specificity = TN/(TN+FP)
    print (specificity)
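# As a cross-check, the hand-rolled TP/FN/FP/TN bookkeeping above can be delegated to scikit-learn. The arrays below are illustrative stand-ins, not the notebook's real predictions; the one-class SVM's {1, -1} output is mapped onto {1, 0} before tabulating.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1])        # toy ground truth
y_pred_svm = np.array([1, -1, 1, -1, 1])  # toy one-class SVM output
y_pred01 = (y_pred_svm == 1).astype(int)  # map {1, -1} -> {1, 0}

# with labels=[0, 1], ravel() yields tn, fp, fn, tp in that order
tn, fp, fn, tp = confusion_matrix(y_true, y_pred01, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(tn, fp, fn, tp, accuracy)
```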
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="vQfUSVDUEaYa" outputId="111b9977-24aa-45e9-84b8-08f8611d14a7"
svm_evaluation()
| STAE_autoencoder_Ped2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # cadCAD Template: Robot and the Marbles
#
# +
# import libraries
import pandas as pd
import numpy as np
import matplotlib
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
import config
from cadCAD import configs
import matplotlib.pyplot as plt
# %matplotlib inline
exec_mode = ExecutionMode()
# +
# Run cadCAD
first_config = configs # only contains config1
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
df = pd.DataFrame(raw_result)
df.set_index(['run', 'timestep', 'substep'])
# -
df.plot('timestep', ['box_A', 'box_B'], grid=True,
colormap = 'RdYlGn',
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
# +
import config2
first_config = configs # now contains config1 and config2
single_proc_ctx = ExecutionContext(context=exec_mode.single_proc)
run = Executor(exec_context=single_proc_ctx, configs=first_config)
raw_result, tensor_field = run.execute()
df2 = pd.DataFrame(raw_result)
df2.set_index(['run', 'timestep', 'substep'])
# -
df2.plot('timestep', ['box_A', 'box_B'], grid=True,
colormap = 'RdYlGn',
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
| tutorials/videos/robot-marbles-part-1/robot-marbles-part-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# +
from scipy.stats import norm
var = norm()
n_samples = 100
tmp = np.arange(-3,3,0.25)
print(tmp)
cdf = [var.cdf(j) for j in tmp]
plt.plot(tmp,cdf)
plt.show()
xs = []
us = []
for i in range(n_samples):
    u = np.random.rand()
    x = var.ppf(u)
    us.append(u)
    xs.append(x)
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(tmp,cdf)
plt.scatter(-3.5*np.ones(len(us)),us,color='r')
plt.scatter(xs,us,marker='x')
plt.scatter(xs,np.zeros(len(xs)))
plt.plot([-3.5,x,x],[u,u,0],':k')
plt.xlim([-4,4])
plt.subplot(1,2,2)
plt.hist(xs)
plt.xlim([-4,4])
# plt.hold(True)
plt.show()
# -
# # Seaborn
# +
import seaborn as sns
x = np.random.randn(1000)
# Univariate histogram
sns.distplot(x)
# +
x = np.random.randn(200)
y = np.random.randn(200)*2.0
# Bivariate histogram
sns.jointplot(x,y,xlim=[-10,10],ylim=[-10,10])
# -
# # Sources of Randomness
# +
# Running this code multiple times
# will produce the same numbers unless
# the seed is changed
np.random.seed(1)
print(np.random.randint(0,100,10))
# -
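# The legacy `np.random.seed` API above has a modern counterpart: an explicit `Generator` object. A small sketch showing that two generators built from the same seed produce the same stream:

```python
import numpy as np

rng1 = np.random.default_rng(1)
rng2 = np.random.default_rng(1)
a = rng1.integers(0, 100, 10)
b = rng2.integers(0, 100, 10)
print(np.array_equal(a, b))  # identical seeds give identical draws
```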
# # Inverse CDF Sampling
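# A minimal self-contained sketch of inverse CDF (inverse transform) sampling, using the exponential distribution because its CDF inverts in closed form: $F(x) = 1 - e^{-\lambda x}$, so $X = -\ln(1-U)/\lambda$ for $U \sim \mathrm{Uniform}(0,1)$.

```python
import numpy as np

def sample_exponential(rate, n, rng):
    u = rng.random(n)               # uniform draws on [0, 1)
    return -np.log(1.0 - u) / rate  # push them through the inverse CDF

rng = np.random.default_rng(0)
xs = sample_exponential(2.0, 100_000, rng)
print(xs.mean())  # should land near 1/rate = 0.5
```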
| demos/.ipynb_checkpoints/lecture10-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: europy
# language: python
# name: europy
# ---
# # BERT Transformer Classifier
# ### with HuggingFace and Tensorflow 2
# silent install
# !pip install -q -r requirements.txt
# +
# all imports
import os, time
from datetime import datetime
from tqdm.notebook import trange, tqdm
import requests
import europy
from europy.notebook import load_global_params
from europy.decorator import using_params
from europy.decorator import model_details
from europy.lifecycle import reporting
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Flatten
print(f'TensorFlow Version: {tf.__version__}')
print(f'GPU Devices: {tf.config.list_physical_devices("GPU")}')
from sklearn.model_selection import train_test_split
from transformers import BertTokenizer
from transformers import TFBertModel
from transformers import create_optimizer
from model import BertClassifier
# -
# ## Load Data
#download kaggle data-set
# !kaggle competitions download -c jigsaw-toxic-comment-classification-challenge
#unzip kaggle dataset and remove zip files
# !mkdir data
# !unzip -o jigsaw-toxic-comment-classification-challenge.zip -d data/
# !rm jigsaw-toxic-comment-classification-challenge.zip
# !unzip -o data/sample_submission.csv.zip -d data/
# !rm data/sample_submission.csv.zip
# !unzip -o data/train.csv.zip -d data/
# !rm data/train.csv.zip
# !unzip -o data/test.csv.zip -d data/
# !rm data/test.csv.zip
# !unzip -o data/test_labels.csv.zip -d data/
# !rm data/test_labels.csv.zip
train_path = 'data/train.csv'
test_path = 'data/test.csv'
test_labels_path = 'data/test_labels.csv'
subm_path = 'data/sample_submission'
# +
df_train = pd.read_csv(train_path)
df_test = pd.read_csv(test_path)
df_test_labels = pd.read_csv(test_labels_path).set_index('id')
df_train.head()
# -
# ## Tokenization
# define DistilBERT Tokenizer
tokenizer = BertTokenizer.from_pretrained(
params['pre_trained_model'],
do_lower_case=True
)
@using_params('params.yml')
def tokenize_sentences(sentences, tokenizer, max_seq_len=128):
    tokenized_sentences = []
    for sentence in tqdm(sentences):
        tokenized_sentence = tokenizer.encode(
            sentence,
            add_special_tokens=True,
            max_length=max_seq_len
        )
        tokenized_sentences.append(tokenized_sentence)
    return tokenized_sentences
def create_attention_masks(tokenized_add_padded_sentences):
    attention_masks = []
    for sentence in tqdm(tokenized_add_padded_sentences):
        att_mask = [int(token_id > 0) for token_id in sentence]
        attention_masks.append(att_mask)
    return np.asarray(attention_masks)
# create tokenized sentences (will take a few minutes)
input_ids = tokenize_sentences(
df_train['comment_text'],
tokenizer
)
input_ids = pad_sequences(
input_ids,
maxlen=params['max_seq_len'],
dtype='long',
value=0,
truncating='post',
padding='post'
)
attention_masks = create_attention_masks(input_ids)
# ## Training
# +
# Split data
labels = df_train[params['label_cols']].values
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(
input_ids,
labels,
random_state=0,
test_size=params['test_size']
)
train_masks, validation_masks, _, _ = train_test_split(
attention_masks,
labels,
random_state=0,
test_size=params['test_size']
)
train_size = len(train_inputs)
validation_size = len(validation_inputs)
# -
@using_params('params.yml')
def create_dataset(data_tuple, num_epochs=1, batch_size=32, buffer_size=10000, train=True):
    dataset = tf.data.Dataset.from_tensor_slices(data_tuple)
    if train:
        dataset = dataset.shuffle(buffer_size=buffer_size)
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)
    if train:
        dataset = dataset.prefetch(1)  # prefetch returns a new dataset; assign it
    return dataset
# train & validation datasets
train_dataset = create_dataset(
(train_inputs, train_masks, train_labels)
)
validation_dataset = create_dataset(
(validation_inputs, validation_masks, validation_labels)
)
# define the model
class BertClassifier(tf.keras.Model):
    def __init__(self, bert: TFBertModel, num_classes: int):
        super().__init__()
        self.bert = bert
        self.classifier = Dense(num_classes, activation='sigmoid')

    @tf.function
    def call(self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask
        )
        cls_output = outputs[1]
        cls_output = self.classifier(cls_output)
        return cls_output
# init the model
model = BertClassifier(TFBertModel.from_pretrained(params['pre_trained_model']), len(params['label_cols']))
# +
# setup optimizer
steps_per_epoch = train_size // params['batch_size']
validation_steps = validation_size // params['batch_size']
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=False)
train_loss = tf.keras.metrics.Mean(name='train_loss')
validation_loss = tf.keras.metrics.Mean(name='test_loss')
warmup_steps = steps_per_epoch // 3
total_steps = steps_per_epoch * params['num_epochs'] - warmup_steps
optimizer, lr_scheduler = create_optimizer(
init_lr=params['learning_rate'],
num_train_steps=total_steps,
num_warmup_steps=warmup_steps
)
train_auc_metrics = [tf.keras.metrics.AUC() for i in range(len(params['label_cols']))]
validation_auc_metrics = [tf.keras.metrics.AUC() for i in range(len(params['label_cols']))]
# +
# training loops
@tf.function
def train_step(model, token_ids, masks, labels):
    labels = tf.dtypes.cast(labels, tf.float32)
    with tf.GradientTape() as tape:
        predictions = model(token_ids, attention_mask=masks)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    for i, auc in enumerate(train_auc_metrics):
        auc.update_state(labels[:,i], predictions[:,i])
@tf.function
def validation_step(model, token_ids, masks, labels):
    labels = tf.dtypes.cast(labels, tf.float32)
    predictions = model(token_ids, attention_mask=masks, training=False)
    v_loss = loss_object(labels, predictions)
    validation_loss(v_loss)
    for i, auc in enumerate(validation_auc_metrics):
        auc.update_state(labels[:,i], predictions[:,i])
@model_details('model_details.yml')
@using_params('params.yml')
def train(model, train_dataset, val_dataset, train_steps_per_epoch, val_steps_per_epoch, num_epochs=1, label_cols=[]):
    for epoch in range(num_epochs):
        start = time.time()
        for i, (token_ids, masks, labels) in enumerate(tqdm(train_dataset, total=train_steps_per_epoch)):
            train_step(model, token_ids, masks, labels)
            if i % 1000 == 0:
                print(f'\nTrain Step: {i}, Loss: {train_loss.result()}')
                for i, label_name in enumerate(label_cols):
                    print(f'{label_name} roc_auc {train_auc_metrics[i].result()}')
                    train_auc_metrics[i].reset_states()
        for i, (token_ids, masks, labels) in enumerate(tqdm(val_dataset, total=val_steps_per_epoch)):
            validation_step(model, token_ids, masks, labels)
        print(f'\nEpoch {epoch+1}, Validation Loss: {validation_loss.result()}, Time: {time.time()-start}\n')
        for i, label_name in enumerate(label_cols):
            print(f'{label_name} roc_auc {validation_auc_metrics[i].result()}')
            validation_auc_metrics[i].reset_states()
        print('\n')
# -
# run the training loop (GPU Required)
# - ~45 minutes per epoch on GTX 1080 8 GB GDDR5
train(
model,
train_dataset,
validation_dataset,
train_steps_per_epoch=steps_per_epoch,
val_steps_per_epoch=validation_steps
)
# ## Testing Model Inference
# create testing tokens (will take a few minutes)
test_input_ids = tokenize_sentences(df_test['comment_text'], tokenizer)
test_input_ids = pad_sequences(test_input_ids, maxlen=params['max_seq_len'], dtype='long', value=0, truncating='post', padding='post')
test_attention_masks = create_attention_masks(test_input_ids)
# +
# run inference
# - ~15 minutes on GTX 1080
test_step = len(df_test) // params['batch_size']
print(test_step)
test_dataset = create_dataset(
(test_input_ids, test_attention_masks),
batch_size=params['batch_size'],
train=False,
num_epochs=1
)
# -
test_dataset
# +
df_submission = pd.read_csv('data/sample_submission.csv', index_col='id')
for i, (token_ids, masks) in enumerate(tqdm(test_dataset, total=test_step)):
sample_ids = df_test.iloc[i*params['batch_size']:(i+1)*params['batch_size']]['id']
predictions = model(token_ids, attention_mask=masks).numpy()
df_submission.loc[sample_ids, params['label_cols']] = predictions
# -
# dump results
df_submission.to_csv('data/submission.csv')
# ## Save Model
# !mkdir models
now = datetime.now()
model.save_weights(f'models/{now:%Y-%m-%d %H:%M}_bert.h5')
# ## Model Testing
# load model
# build the model's variables with a dummy forward pass before loading weights
model(tf.zeros((1, params['max_seq_len']), dtype=tf.int32))
model.load_weights('models/2020-11-23 08:42_bert.h5')
model
| toxic_comment_classification/toxic_comment_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="h182RDVhWBQ2" outputId="ca6b43b4-fff9-461a-fbe8-0a730a6bfed7"
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(100_000)
plt.hist(x, bins=20, facecolor="blue", alpha=0.5)
plt.show()
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Demo: repeating a subcycle a fixed number of times
# The basic steps to set up an OpenCLSim simulation are:
# * Import libraries
# * Initialise simpy environment
# * Define object classes
# * Create objects
# * Create sites
# * Create vessels
# * Create activities
# * Register processes and run simpy
#
# ----
#
# This notebook provides an example of a simulation that takes a number of sub processes, grouped in a sequential activity, that is executed **while a stop condition is not yet met**. But the overall cycle is made up of subcycles that are **executed a fixed number of times**.
#
# For this example we work with the following sub processes:
# * pre cycle activity
# * sailing empty
# * repeat loading 5 times
# * pre loading activity
# * loading
# * post loading activity
# * sailing full
# * repeat unloading 5 times
# * pre unloading activity
# * unloading
# * post unloading activity
# * post cycle activity
# #### 0. Import libraries
# +
import datetime, time
import simpy
import shapely.geometry
import pandas as pd
import openclsim.core as core
import openclsim.model as model
import openclsim.plot as plot
# -
# #### 1. Initialise simpy environment
# setup environment
simulation_start = 0
my_env = simpy.Environment(initial_time=simulation_start)
# #### 2. Define object classes
# +
# create a Site object based on desired mixin classes
Site = type(
"Site",
(
core.Identifiable,
core.Log,
core.Locatable,
core.HasContainer,
core.HasResource,
),
{},
)
# create a TransportProcessingResource object based on desired mixin classes
TransportProcessingResource = type(
"TransportProcessingResource",
(
core.Identifiable,
core.Log,
core.ContainerDependentMovable,
core.Processor,
core.HasResource,
core.LoadingFunction,
core.UnloadingFunction,
),
{},
)
# -
# #### 3. Create objects
# ##### 3.1. Create site object(s)
# +
# prepare input data for from_site
location_from_site = shapely.geometry.Point(4.18055556, 52.18664444)
data_from_site = {"env": my_env,
"name": "from_site",
"geometry": location_from_site,
"capacity": 100,
"level": 50
}
# instantiate from_site
from_site = Site(**data_from_site)
# prepare input data for to_site
location_to_site = shapely.geometry.Point(4.25222222, 52.11428333)
data_to_site = {"env": my_env,
"name": "to_site",
"geometry": location_to_site,
"capacity": 50,
"level": 0
}
# instantiate to_site
to_site = Site(**data_to_site)
# -
# ##### 3.2. Create vessel object(s)
# prepare input data for vessel_01
data_vessel01 = {"env": my_env,
"name": "vessel01",
"geometry": location_from_site,
"loading_rate": 0.00001,
"unloading_rate": 0.00001,
"capacity": 5,
"compute_v": lambda x: 10
}
# instantiate vessel_01
vessel01 = TransportProcessingResource(**data_vessel01)
# ##### 3.3 Create activity/activities
# initialise registry
registry = {}
# +
# create a list of the sub processes: loading
sub_processes =[
model.BasicActivity(
env=my_env,
name="pre loading activity",
registry=registry,
duration=100,
additional_logs=[vessel01],
),
model.ShiftAmountActivity(
env=my_env,
name="loading",
registry=registry,
processor=vessel01,
origin=from_site,
destination=vessel01,
amount=1,
duration=1000,
),
model.BasicActivity(
env=my_env,
name="post loading activity",
registry=registry,
duration=100,
additional_logs=[vessel01],
),
]
# create a 'sequential activity' that is made up of the 'sub_processes'
sequential_activity = model.SequentialActivity(
env=my_env,
name="sequential_activity_loading_subcycle",
registry=registry,
sub_processes=sub_processes,
)
# create a repeat activity that executes the 'sequential activity' a fixed number of times
repeat_loading_activity = model.RepeatActivity(
env=my_env,
name="repeat_sequential_activity_loading_subcycle",
registry=registry,
sub_processes=[sequential_activity],
repetitions=5
)
# +
# create a list of the sub processes: unloading
sub_processes =[
model.BasicActivity(
env=my_env,
name="pre unloading activity",
registry=registry,
duration=100,
additional_logs=[vessel01],
),
model.ShiftAmountActivity(
env=my_env,
name="unloading",
registry=registry,
processor=vessel01,
origin=vessel01,
destination=to_site,
amount=1,
duration=1000,
),
model.BasicActivity(
env=my_env,
name="post unloading activity",
registry=registry,
duration=100,
additional_logs=[vessel01],
),
]
# create a 'sequential activity' that is made up of the 'sub_processes'
sequential_activity = model.SequentialActivity(
env=my_env,
name="sequential_activity_unloading_subcycle",
registry=registry,
sub_processes=sub_processes,
)
# create a repeat activity that executes the 'sequential activity' a fixed number of times
repeat_unloading_activity = model.RepeatActivity(
env=my_env,
name="repeat_sequential_activity_unloading_subcycle",
registry=registry,
sub_processes=[sequential_activity],
repetitions=5
)
# +
# create a list of the sub processes: cycle
sub_processes = [
model.BasicActivity(
env=my_env,
name="pre cycle activity",
registry=registry,
duration=100,
additional_logs=[vessel01],
),
model.MoveActivity(
env=my_env,
name="sailing empty",
registry=registry,
mover=vessel01,
destination=from_site,
duration=500,
),
repeat_loading_activity,
model.MoveActivity(
env=my_env,
name="sailing full",
registry=registry,
mover=vessel01,
duration=500,
destination=to_site,
),
repeat_unloading_activity,
model.BasicActivity(
env=my_env,
name="post cycle activity",
registry=registry,
duration=100,
additional_logs=[vessel01],
),
]
# create a 'sequential activity' that is made up of the 'sub_processes'
sequential_activity = model.SequentialActivity(
env=my_env,
name="sequential_activity_subcycle",
registry=registry,
sub_processes=sub_processes,
)
# create a while activity that executes the 'sequential activity' while the stop condition is not triggered
while_activity = model.WhileActivity(
env=my_env,
name="while_sequential_activity_subcycle",
registry=registry,
sub_processes=[sequential_activity],
condition_event=[{"type": "container", "concept": to_site, "state": "full"}],
)
# -
# #### 4. Register processes and run simpy
model.register_processes([while_activity])
my_env.run()
# #### 5. Inspect results
# ##### 5.1 Inspect logs
plot.get_log_dataframe(vessel01, [while_activity, *sub_processes, sequential_activity])
# ##### 5.2 Visualise gantt charts
plot.get_gantt_chart([while_activity, sequential_activity, *sub_processes])
plot.get_gantt_chart([vessel01, from_site, to_site])
# ##### 5.3 Visualise container volume developments
fig = plot.get_step_chart([vessel01, from_site, to_site])
| notebooks/15_Subcycle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
# ## Develop summary table of all Sensors in LoRa Database
# +
from tempfile import TemporaryDirectory
import pandas as pd
from datetime import datetime, timedelta
from pytz import timezone
pd.options.display.max_rows = 200
# -
gtw_file = '~/gateways.tsv'
df = pd.read_csv(gtw_file, sep='\t', parse_dates=['ts'])
df.head()
tz_ak = timezone('US/Alaska')
ts_start = (datetime.now(tz_ak) - timedelta(days=1)).replace(tzinfo=None)
ts_start
df1d = df.query('ts >= @ts_start')
dfs = df1d.groupby('dev_id').agg(
{
'data_rate': 'last',
'counter': ['first', 'last']
}
)
dfs.columns = ['data_rate', 'counter_first', 'counter_last']
dfs['total_rdg'] = dfs.counter_last - dfs.counter_first + 1
dfs.head()
df_rcvd = df1d[['dev_id', 'counter']].drop_duplicates()
df_rcvd = df_rcvd.groupby('dev_id').count()
df_rcvd.columns = ['rcvd']
df_rcvd.head()
df_final = dfs.join(df_rcvd)
df_final['success_pct'] = df_final.rcvd / df_final.total_rdg
df_display = df_final[['data_rate', 'success_pct']].copy()
df_display['data_rate'] = df_display.data_rate.str.replace('.0', '', regex=False)
df_display.columns = ['Data Rate', 'Success %']
df_display.index.name = 'Dev ID'
s2 = df_display.style.applymap(lambda v: 'color:red;' if v < 0.9 else None, subset=['Success %'])
s2
| summary_table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
from typing import Callable
# -
# ### First order differential equation
#
# $$
# \frac{dy}{dt} = f(t, y)
# $$
#
# with initial value
#
# $$
# y(t_0) = y_0
# $$
# ### Euler Method
#
# $$
# h = t_{n+1} - t_{n}
# $$
#
# **Step sizes do not need to be uniform!**
#
# $$
# f_n = f(t_n, y_n)
# $$
#
# **Approximation of the slope**
#
# $$
# y_{n+1} = y_n + f_n\cdot h
# $$
# +
def f1(x):
    return x**2

def f2(x, y):
    return x**2 + y**2

def f3(x, y, z):
    return x**2 + y**2 + z**2
# +
# Closure: a higher-order function that wraps f
def g(f):
    def out(*arg):
        return f(*arg) + 2.0
    return out
# -
# Forward Euler method
def Euler(f: Callable[..., np.float32], init: float, T: float, dt: float):
    t = np.arange(0, T + dt, dt)
    nt = t.shape[0]
    y = np.zeros(nt)
    # initial condition
    y[0] = init
    for n in range(nt - 1):
        y[n + 1] = y[n] + f(t[n], y[n]) * dt
    return t, y
def RK3(f, init, T, dt):
    t = np.arange(0, T + dt, dt)
    nt = t.shape[0]
    y = np.zeros(nt)
    y[0] = init
    for n in range(nt - 1):
        k1 = f(t[n], y[n])
        k2 = f(t[n] + 0.5 * dt, y[n] + 0.5 * dt * k1)
        k3 = f(t[n + 1], y[n] - dt * k1 + 2 * dt * k2)
        y[n + 1] = y[n] + (k1 + 4 * k2 + k3) * dt / 6
    return t, y
# +
# Sketch: a generic explicit Runge-Kutta solver parameterised by a Butcher tableau
class RungeKutta:
    def __init__(self, A, b, c):
        # Butcher tableau: stage matrix A, weights b, nodes c
        self.A = np.asarray(A)
        self.b = np.asarray(b)
        self.c = np.asarray(c)
        self.r = len(self.b)   # number of stages

    def step(self, f, t, y, dt):
        k = np.zeros(self.r)
        for i in range(self.r):
            # explicit method: stage i only uses the stages already computed
            k[i] = f(t + self.c[i] * dt, y + dt * np.dot(self.A[i, :i], k[:i]))
        return y + dt * np.dot(self.b, k)

    def solve(self, f, init, T, dt):
        t = np.arange(0, T + dt, dt)
        nt = t.shape[0]
        y = np.zeros(nt)
        y[0] = init
        for n in range(nt - 1):
            y[n + 1] = self.step(f, t[n], y[n], dt)
        return t, y

# e.g. the midpoint method (RK2): A = [[0, 0], [1/2, 0]], b = [0, 1], c = [0, 1/2]
RK2 = RungeKutta([[0, 0], [0.5, 0]], [0, 1], [0, 0.5])
# t2, y2 = RK2.solve(f1, 1.0, 5.0, 0.1)
# -
# object-oriented
# RK
#
# RK2, RK3, Kutta, Classical RK, ......
# ## Example 1
#
# Consider the problem
#
# $$
# \frac{dy}{dt} = 3 - 2t - 0.5y
# $$
#
# with
#
# $$
# y(0) = 1
# $$
#
# Use **Euler method** with step size $h=0.2$ to find solution at $t=0.2, 0.4, 0.6, 0.8$
# Exact solution:
#
# $$
# y = 14 - 4t - 13e^{-t/2}
# $$
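# As a quick sanity check on the exact solution above, the first four Euler steps with $h = 0.2$ can be worked by hand (this sketch is self-contained and independent of the `Euler()` helper defined earlier):

```python
# First four Euler steps for dy/dt = 3 - 2t - 0.5y, y(0) = 1, with h = 0.2
def f(t, y):
    return 3 - 2 * t - 0.5 * y

h, t, y = 0.2, 0.0, 1.0
for _ in range(4):       # advance to t = 0.2, 0.4, 0.6, 0.8
    y = y + f(t, y) * h
    t = t + h
    print(round(t, 1), round(y, 4))  # approx: 1.5, 1.87, 2.123, 2.2707
```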
# +
t = np.linspace(0, 5.0, 200)
plt.figure(figsize=(8, 8))
plt.plot(t, 14 - 4 * t - 13 * np.exp(-t / 2))
# -
# **Euler method**:
#
# $$
# y_0 = 1
# $$
#
# and
#
# $$
# y_{n+1} = y_n + (3 - 2t_n - 0.5y_n)h
# $$
# +
def f1(t, y):
    return 3 - 2 * t - 0.5 * y

h = 0.1
# -
# %time t_num, y_num = Euler(f1, 1.0, 5.0, h)
# %time t_rk, y_rk = RK3(f1, 1.0, 5.0, h)
plt.figure(figsize=(8, 8))
plt.plot(t_rk, y_rk, '-*', t_num, y_num, "-o", t, 14 - 4 * t - 13 * np.exp(-t / 2))
# ### Convergence
# +
t = 5.0
14 - 4 * t - 13 * np.exp(-t / 2)
# +
list_h = [0.5, 0.2, 0.1, 0.05, 0.01, 0.001]
errors1 = []
for h in list_h:
    _, y_num = Euler(f1, 1.0, 5.0, h)
    err = np.abs(y_num[-1] - (14 - 4 * 5.0 - 13 * np.exp(-5.0 / 2)))
    errors1.append(err)
# -
plt.loglog(list_h, errors1, '-o', list_h, np.asarray(list_h), '--')  # Euler error against an O(h) reference slope
# ### Stability
#
# $$ 0 > \lambda h > -2, \quad \lambda < 0 $$
#
# $$ h < \frac{-2}{\lambda} $$
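# A small numerical illustration of this bound (a sketch, assuming the linear test problem dy/dt = lambda*y with lambda = -0.5 as in Example 1, so the limit is h < 4):

```python
# Forward Euler on dy/dt = lam*y is stable iff |1 + lam*h| <= 1, i.e. h <= -2/lam.
lam = -0.5   # stability limit: h < -2/lam = 4

def euler_decay(h, T=40.0):
    y = 1.0
    for _ in range(int(T / h)):
        y = y + lam * y * h   # one forward Euler step
    return y

print(abs(euler_decay(1.0)))  # h below the limit: the numerical solution decays
print(abs(euler_decay(5.0)))  # h above the limit: the solution oscillates and grows
```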
t_num, y_num = Euler(f1, 1.0, 5.0, 5)
# ## Example 2
#
# Consider the problem
#
# $$
# \frac{dy}{dt} = 4 - t + 2y
# $$
#
# with
#
# $$
# y(0) = 1
# $$
#
# Use **Euler method** to solve it.
# Exact solution:
#
# $$
# y = -\frac{7}{4} + \frac{1}{2}t + \frac{11}{4}e^{2t}
# $$
# +
def f2(t, y):
    return 4 - t + 2 * y

h = 0.1
t_num, y_num = RK3(f2, 1.0, 5.0, h)
t = np.linspace(0, 5.0, 200)
plt.figure(figsize=(8, 8))
plt.plot(t_num, y_num, "-o", t, -7 / 4 + 0.5 * t + 11 / 4 * np.exp(2 * t))
# +
list_h = [0.5, 0.2, 0.1, 0.05, 0.01, 0.001]
t = 5.0
ref = -7 / 4 + 0.5 * t + 11 / 4 * np.exp(2 * t)
errors = []
for h in list_h:
    _, y_num = RK3(f2, 1.0, 5.0, h)
    err = np.abs(y_num[-1] - ref)
    errors.append(err)
# -
plt.loglog(list_h, errors)
# # Example 3
#
# $$
# \frac{dy}{dx} = x^3 - \frac{y}{x}
# $$
#
# Exact solution
#
# $$
# y = \frac{1}{5} x^4 + \frac{1}{5x}
# $$
# +
t = np.linspace(1.0, 5.0, 200)
plt.figure(figsize=(8, 8))
plt.plot(t, 1 / 5 * t**4 + 1 / (5 * t))
# +
def f3(t, y):
    return t**3 - y / t

h = 0.1
# -
# note: f3 is singular at t = 0, so Euler(), which starts at t = 0, produces inf/nan; starting at t = 1 with y(1) = 0.4 would match the exact curve
# %time t_num, y_num = Euler(f3, 0.4, 5.0, h)
# # %time t_rk, y_rk = RK3(f3, 0.4, 5.0, h)
plt.figure(figsize=(8, 8))
plt.plot(t_num, y_num, "-o", t, 1 / 5 * t**4 + 1 / (5 * t))
| others/ode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.6 (''venv_covid_parana'': venv)'
# name: pythonjvsc74a57bd0a2ff6c97610b99418674babc914468fd6bc815ee1c498d6d0774bf47546c0333
# ---
# +
import pandas as pd
import numpy as np
import matplotlib
import seaborn as sns
import pystan
import fbprophet
import plotly
#built-in
import datetime
print('pandas ==',pd.__version__)
print('numpy ==',np.__version__)
print('matplotlib ==',matplotlib.__version__)
print('seaborn ==',sns.__version__)
print('plotly ==', plotly.__version__)
print('pystan ==',pystan.__version__)
print('fbprophet ==',fbprophet.__version__)
# -
# # Functions
# +
# =======================================================================================
## FUNCTIONS TO IMPORT THE DATA AND CONVERT DATE COLUMNS
def converte_variaveis_em_datas(dataframe):
    for variavel in dataframe.columns:
        if 'DATA' in variavel:
            try:
                dataframe[variavel] = pd.to_datetime(dataframe[variavel], format='%d/%m/%Y')
            except:
                print(f'Variable "{variavel}" has a conversion/formatting error')
    return dataframe

def tira_variaveis_IBGE(dataframe):
    dataframe = dataframe.drop(['IBGE_RES_PR','IBGE_ATEND_PR'], axis='columns')
    return dataframe

def cria_variavel_caso_confirmado_no_dia(dataframe):
    dataframe['CASO_CONFIRMADO_NO_DIA'] = 1
    return dataframe

def replace_nas_variaveis_obito_status(dataframe):
    dataframe['OBITO'] = dataframe['OBITO'].replace('SIM','Sim')
    dataframe['OBITO'] = dataframe['OBITO'].replace('Não','Nao')
    dataframe['OBITO'] = dataframe['OBITO'].replace('Sim',1)
    dataframe['OBITO'] = dataframe['OBITO'].replace('Nao',0)
    dataframe['STATUS'] = dataframe['STATUS'].replace('Recuperado','recuperado')
    dataframe['STATUS'] = dataframe['STATUS'].replace('recuperado', 1)
    dataframe['STATUS'] = dataframe['STATUS'].replace('nan', 0)
    return dataframe

def pre_processamento(dataframe):
    dataframe = converte_variaveis_em_datas(dataframe)
    dataframe = tira_variaveis_IBGE(dataframe)
    dataframe = cria_variavel_caso_confirmado_no_dia(dataframe)
    dataframe = replace_nas_variaveis_obito_status(dataframe)
    dataframe = dataframe.set_index('DATA_CONFIRMACAO_DIVULGACAO')
    dataframe = dataframe.sort_index()
    return dataframe
def baixa_base_de_dados_casos_gerais(ano,mes,dia):
    """
    Downloads the database available for the given year, month and day.
    Returns a pandas dataframe with the available data.
    """
    data = datetime.date(ano, mes, dia)
    ano = str(data.year)
    if mes < 10:
        mes = '0'+str(data.month)
    else:
        mes = str(data.month)
    if dia < 10:  # zero-pad all single-digit days (the original list skipped day 5)
        dia = '0'+str(data.day)
    else:
        dia = str(data.day)
    ano_mes = ano+'-'+mes
    if ano == '2020':
        arquivo = f'INFORME_EPIDEMIOLOGICO_{dia}_{mes}_GERAL.csv'
    elif ano == '2021':
        arquivo = f'informe_epidemiologico_{dia}_{mes}_{ano}_geral.csv'
    # The state website may use any of these variants:
    # informe_epidemiologico_{dia}_{mes}_{ano}_geral.csv
    # informe_epidemiologico_{dia}_{mes}_geral.csv
    # INFORME_EPIDEMIOLOGICO_{dia}_{mes}_GERAL.csv
    dominio = 'https://www.saude.pr.gov.br'
    caminho = f'/sites/default/arquivos_restritos/files/documento/{ano_mes}/'
    try:
        url = dominio+caminho+arquivo
        base_de_dados = pd.read_csv(url, sep=';')
        base_de_dados = pre_processamento(base_de_dados)
    except:
        raise Exception('No data available for this day.')
    return base_de_dados
# =======================================================================================
## SELECTION FUNCTIONS
def seleciona_media_movel(dataframe):
    if 'MUN_ATENDIMENTO' in dataframe.columns and 'DATA_CONFIRMACAO_DIVULGACAO' in dataframe.columns:
        dataframe = dataframe.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])[['CASO_CONFIRMADO_NO_DIA','OBITO']].sum().rolling(15).mean()[15:]
        return dataframe
    else:
        raise Exception('Dataframe is missing the MUN_ATENDIMENTO / DATA_CONFIRMACAO_DIVULGACAO columns')
# -
# # Importing the Data
hoje = datetime.date.today()
informe_covid = baixa_base_de_dados_casos_gerais(hoje.year, hoje.month, hoje.day - 1)
informe_covid
# +
def seleciona_cidade(cidade, dataframe):
    """
    Returns every record whose care took place
    in the city passed as a parameter.
    Returns a dataframe.
    """
    dados_cidade = dataframe.query(f'MUN_ATENDIMENTO == "{cidade.upper()}"')
    return dados_cidade.reset_index()
# -
move_average_15_por_cidade = informe_covid.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])[['CASO_CONFIRMADO_NO_DIA','OBITO']].sum().rolling(14).mean()[14:].round()
casos_por_cidade = informe_covid.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])[['CASO_CONFIRMADO_NO_DIA','OBITO']].sum().round()
move_average_15_por_cidade
casos_por_cidade
# +
def atualiza_dados_diarios():
    hoje = datetime.date.today()
    informe_covid = baixa_base_de_dados_casos_gerais(hoje.year, hoje.month, hoje.day - 1)
    move_average_15_por_cidade = informe_covid.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])[['CASO_CONFIRMADO_NO_DIA','OBITO']].sum().rolling(14).mean()[14:].round()
    move_average_15_por_cidade.to_csv('../dados_recentes/cidades_mm15.csv', sep=';')
    casos_por_cidade = informe_covid.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])[['CASO_CONFIRMADO_NO_DIA','OBITO']].sum().round()
    casos_por_cidade.to_csv('../dados_recentes/casos_por_cidade.csv', sep=';')
# -
# # Hospitalization Data
uri_internacoes = 'https://raw.githubusercontent.com/ConradBitt/covid_parana/main/dados_recentes/INTERNACOES_TRATAMENTO%20DE%20INFEC%C3%87%C3%83O%20PELO_CORONAVIRUS_COVID%2019.csv'
internacoes = pd.read_csv(uri_internacoes,sep=';',
skiprows = 4,
skipfooter=14,
thousands=',',
engine='python', encoding='iso-8859-1')
internacoes
display(internacoes.columns[1:-1])
# +
def pre_processamento_dados_internacoes(dataframe):
    from unidecode import unidecode
    municipios = []
    for municipio in dataframe['Município']:
        municipios.append(unidecode(municipio[7:]).upper())
    dataframe['Município'] = municipios
    dataframe = dataframe.drop('Total', axis='columns')
    #dataframe = dataframe.T.iloc[:-1]
    columns = [
        'Município', '2020-03', '2020-04',
        '2020-05', '2020-06', '2020-07',
        '2020-08', '2020-09', '2020-10',
        '2020-11', '2020-12', '2021-01',
        '2021-02','2021-03',
    ]
    dataframe.columns = columns
    dataframe = dataframe.set_index('Município')
    dataframe.columns = pd.to_datetime(dataframe.columns)
    dataframe = dataframe.replace('-',0).astype(int)
    return dataframe.T
# -
internacoes = pre_processamento_dados_internacoes(internacoes)
internacoes.T.index
# +
def pre_processamento_dados_internacoes(dataframe):
from unidecode import unidecode
municipios = []
for municipio in dataframe['Município']:
municipios.append(unidecode(municipio[7:]).upper())
dataframe['Município'] = municipios
    # 'Total' ends up as the last row after the transpose and is dropped by iloc[:-1]
    dataframe = dataframe.T.iloc[:-1]
index = [
'MUN_ATENDIMENTO', '2020-03', '2020-04',
'2020-05', '2020-06', '2020-07',
'2020-08', '2020-09', '2020-10',
'2020-11', '2020-12', '2021-01'
]
dataframe.index = index
dataframe = dataframe.T
dataframe = dataframe.set_index('MUN_ATENDIMENTO')
dataframe = dataframe.replace('-',0).astype(int)
dataframe.columns = pd.to_datetime(dataframe.columns)
return dataframe.T
def carrega_internacoes():
uri_internacoes = 'https://raw.githubusercontent.com/ConradBitt/covid_parana/main/dados_recentes/INTERNACOES_TRATAMENTO%20DE%20INFEC%C3%87%C3%83O%20PELO_CORONAVIRUS_COVID%2019.csv'
internacoes = pd.read_csv(uri_internacoes,sep=';',skiprows = 4,skipfooter=14,thousands=',', engine='python', encoding='iso-8859-1')
internacoes = pre_processamento_dados_internacoes(internacoes)
return internacoes
# -
internacoes = pre_processamento_dados_internacoes(internacoes)
internacoes[['PONTA GROSSA']]
def exibe_internacoes_cidade(dataframe, cidade):
    import plotly.express as px  # px.bar below needs plotly express
    cidade = cidade.upper()
internacoes = dataframe[cidade]
internacoes.rename_axis('Data', inplace=True)
fig = px.bar(dataframe[cidade], x=internacoes.index, y = f'{cidade}',
color=f'{cidade}', height=400, labels={f'{cidade}': 'Internados'})
fig.layout.title.text = f'Internações por COVID-19 - {cidade}'
fig.layout.xaxis.title.text = 'Mês de internação'
fig.layout.yaxis.title.text = ''
fig.update_coloraxes(
colorbar=dict(title='Internações'),
colorbar_title_font_size=22,
colorbar_title_side='top')
return fig
exibe_internacoes_cidade(internacoes, 'Ponta Grossa').show()
internacoes['PONTA GROSSA']
# +
import datetime
import pandas as pd
class InformeCovid():
"""
Esta classe tem como objetivo carregar os informes
da covid do estado do paraná. Ao utilizar a função
carrega_informe() o resultado será um dataframe
com o informe do dia, mes e ano.
Use o método carrega_informe(): para carregar todo o
informe do paraná.
OU
Use o método carrega_informe_mm(): para carregar as
médias móveis da cidade do paraná. É possível modificar
o parâmetro move_average para mudar a janela.
"""
def __init__(self):
pass
def carrega_informe(self, data):
        # the report lags a few days behind; timedelta stays valid across month boundaries
        self._data_informe = data - datetime.timedelta(days=3)
self.dia, self.mes, self.ano = self.__ajusta_data(self._data_informe)
self.__uri = self.__ajusta_uri(self.dia, self.mes, self.ano)
try:
dados_informe_covid = pd.read_csv(self.__uri, sep=';')
dados_informe_covid = self.__pre_processamento(dados_informe_covid)
return dados_informe_covid
        except Exception:
            raise Exception(f'No data available for this day.\n{self.__uri}')
def carrega_informe_mm(self, data, janela = 14):
        dados = self.carrega_informe(data)  # already pre-processed inside carrega_informe
dados_agrupados = dados.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])
self.informe_mm = dados_agrupados[
['CASO_CONFIRMADO',
'OBITO']
].sum().rolling(janela).mean()[janela:]
return self.informe_mm
    def __ajusta_uri(self, dia, mes, ano):
        # zero-pad single-digit days so the URI matches the published file names
        dia = str(dia).zfill(2)
        mes = str(mes)
        ano = str(ano)
if ano == '2020':
arquivo = f'INFORME_EPIDEMIOLOGICO_{dia}_{mes}_GERAL.csv'
elif ano == '2021':
arquivo = f'informe_epidemiologico_{dia}_{mes}_{ano}_geral.csv'
ano_mes = ano + '-' + mes
dominio = 'https://www.saude.pr.gov.br'
caminho = f'/sites/default/arquivos_restritos/files/documento/{ano_mes}/'
uri = dominio + caminho + arquivo
return uri
    def __ajusta_data(self, data_informe):
        # zero-pad day and month so the URI matches the published file names
        ano = str(data_informe.year)
        mes = str(data_informe.month).zfill(2)
        dia = str(data_informe.day).zfill(2)
        return dia, mes, ano
def __pre_processamento(self, dataframe):
"""
Esta função faz o pré processamento dos dados
- converte_variaveis_em_datas
- tira_variaveis_IBGE
- cria_variavel_caso_confirmado_no_dia(dataframe)
- replace_nas_variaveis_obito_status(dataframe)
- usa 'DATA_CONFIRMACAO_DIVULGACAO' como indice
- ordena pelo índice
retorna o dataframe.
"""
dataframe = self.__converte_variaveis_em_datas(dataframe)
dataframe = self.__tira_variaveis_IBGE(dataframe)
dataframe = self.__cria_variavel_caso_confirmado(dataframe)
dataframe = self.__replace_nas_variaveis_obito_status(dataframe)
dataframe = dataframe.set_index('DATA_CONFIRMACAO_DIVULGACAO')
dataframe = dataframe.sort_index()
return dataframe
    def __converte_variaveis_em_datas(self, dataframe):
        for variavel in dataframe.columns:
            if 'DATA' in variavel:
                try:
                    dataframe[variavel] = pd.to_datetime(dataframe[variavel], format='%d/%m/%Y')
                except (ValueError, TypeError):
                    print(f'Column "{variavel}" has a conversion/formatting error')
        return dataframe
def __tira_variaveis_IBGE(self, dataframe):
dataframe = dataframe.drop(['IBGE_RES_PR','IBGE_ATEND_PR'], axis='columns')
return dataframe
def __cria_variavel_caso_confirmado(self, dataframe):
dataframe['CASO_CONFIRMADO'] = 1
return dataframe
    def __replace_nas_variaveis_obito_status(self, dataframe):
        if 'OBITO' in dataframe.columns:
            # normalise the spelling variants, then map Sim/Nao to 1/0
            dataframe['OBITO'] = dataframe['OBITO'].replace(
                {'SIM': 1, 'Sim': 1, 'Não': 0, 'Nao': 0})
        if 'STATUS' in dataframe.columns:
            dataframe['STATUS'] = dataframe['STATUS'].replace(
                {'Recuperado': 1, 'recuperado': 1})
            dataframe['STATUS'] = dataframe['STATUS'].fillna(0)
        return dataframe
    def atualiza_dados_diarios(self, hoje):
        informe_covid = self.carrega_informe(hoje)
        dados_agrupados = informe_covid.groupby(['MUN_ATENDIMENTO','DATA_CONFIRMACAO_DIVULGACAO'])
        move_average_15_por_cidade = dados_agrupados[['CASO_CONFIRMADO','OBITO']].sum().rolling(14).mean()[14:].round()
        move_average_15_por_cidade.to_csv('../dados_recentes/cidades_mm15.csv', sep=';')
        casos_por_cidade = dados_agrupados[['CASO_CONFIRMADO','OBITO']].sum().round()
        casos_por_cidade.to_csv('../dados_recentes/casos_por_cidade.csv', sep=';')
# -
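# The manual zero-padding in `__ajusta_uri` and `__ajusta_data` can also be delegated to `strftime`, which pads day and month for free. The file-name pattern below is the 2021 branch of `__ajusta_uri`:

```python
import datetime

d = datetime.date(2021, 3, 5)
# %d and %m are always two digits, so no manual '0' + dia bookkeeping is needed
arquivo = d.strftime('informe_epidemiologico_%d_%m_%Y_geral.csv')
print(arquivo)  # informe_epidemiologico_05_03_2021_geral.csv
```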
informe = InformeCovid()
hoje = datetime.date.today()
dados = informe.carrega_informe(hoje)
dados.head()
PG = dados.query('MUN_ATENDIMENTO == "PONTA GROSSA"')
PG.groupby(['DATA_CONFIRMACAO_DIVULGACAO'], as_index=True).sum().reset_index()
# # Number of Inhabitants
populacao_pr = """
Posição Município População
1 Curitiba 1 948 626
2 Londrina 575 377
3 Maringá 430 157
4 Ponta Grossa 355 336
5 Cascavel 332 333
6 São José dos Pinhais 329 058
7 Foz do Iguaçu 258 248
8 Colombo 246 540
9 Guarapuava 182 644
10 Paranaguá 156 174
11 Araucária 146 214
12 Toledo 142 645
13 Apucarana 136 234
14 Campo Largo 133 865
15 Pinhais 133 490
16 Arapongas 124 810
17 <NAME> 120 041
18 Piraquara 114 970
19 Umuarama 112 500
20 Cambé 107 341
21 Fazenda Rio Grande 102 004
22 Sarandi 97 803
23 Campo Mourão 95 488
24 <NAME> 92 216
25 Paranavaí 88 922
26 Pato Branco 83 843
27 Cianorte 83 816
28 <NAME> 79 792
29 Castro 71 809
30 Rolândia 67 383
31 Irati 61 088
32 União da Vitória 57 913
33 Ibiporã 55 131
34 Marech<NAME> 53 495
35 Prudentópolis 52 513
36 Palmas 51 755
37 Lapa 48 410
38 <NAME> 47 842
39 São Mateus do Sul 46 705
40 Medianeira 46 574
41 Santo Antônio da Platina 46 251
42 Campina Grande do Sul 43 685
43 Paiçandu 41 773
44 Dois Vizinhos 41 038
45 Jacarezinho 39 322
46 Guaratuba 37 527
47 Marialva 35 804
48 Matinhos 35 219
49 Jaguariaíva 35 027
50 Mandaguari 34 515
51 <NAME> 34 411
52 Quedas do Iguaçu 34 409
53 Palmeira 33 994
54 <NAME> 33 340
55 Guaíra 33 310
56 Imbituva 32 940
57 Pinhão 32 559
58 Rio Branco do Sul 32 517
59 Laranjeiras do Sul 32 139
60 Palotina 32 121
61 Ivaiporã 31 935
62 Ibaiti 31 644
63 Bandeirantes 31 211
64 Pitanga 29 994
65 Campo Magro 29 740
66 Itaperuçu 29 070
67 Goioerê 28 808
68 Arapoti 28 300
69 Nova Esperança 27 984
70 Pontal do Paraná 27 915
71 São Miguel do Iguaçu 27 576
72 Mandirituba 27 315
73 Reserva 26 825
74 Santa Helena 26 767
75 Astorga 26 209
76 Piraí do Sul 25 617
77 Cambará 25 466
78 Colorado 24 145
79 Quatro Barras 23 911
80 Carambeí 23 825
81 Santa Terezinha de Itaipu 23 699
82 Loanda 23 242
83 Mandaguaçu 23 100
84 Altônia 22 176
85 Ortigueira 21 960
86 Siqueira Campos 21 249
87 Jandaia do Sul 21 230
88 Cruzeiro do Oeste 20 947
89 Ubiratã 20 909
90 Tibagi 20 607
91 <NAME> 20 580
92 Santo Antônio do Sudoeste 20 261
93 Andirá 19 926
94 Wenceslau Braz 19 386
95 Sengés 19 385
96 Ampére 19 311
97 Quitandinha 19 221
98 Chopinzinho 19 167
99 Capanema 19 148
100 Antonina 18 949
101 Contenda 18 837
102 <NAME> 18 741
103 Cafelândia 18 456
104 Matelândia 18 107
105 <NAME> 17 833
106 <NAME> 17 522
107 Faxinal 17 316
108 <NAME> 17 200
109 Corbélia 17 117
110 Tijucas do Sul 17 084
111 Realeza 16 950
112 <NAME> 16 924
113 Mangueirinha 16 642
114 Clevelândia 16 450
115 Morretes 16 446
116 Sertanópolis 16 413
117 Bituruna 16 400
118 Tapejara 16 345
119 Candói 16 053
120 <NAME> 15 834
121 Bela Vista do Paraíso 15 399
122 <NAME> 15 336
123 Ipiranga 15 251
124 São João do Triunfo 15 241
125 Curiúva 15 196
126 Tamarana 15 040
127 Assaí 14 954
128 Rebouças 14 946
129 Salto do Lontra 14 872
130 Alto Paraná 14 859
131 Când<NAME> 14 809
132 <NAME> Oeste 14 794
133 Marmeleiro 14 387
134 Carlópolis 14 356
135 Camp<NAME> Lagoa 14 043
136 Paraíso do Norte 14 023
137 Peabiru 14 007
138 Araruna 14 000
139 <NAME> 13 981
140 Ivaí 13 965
141 Iporã 13 782
142 Jaguapitã 13 742
143 <NAME> 13 685
144 Mallet 13 663
145 <NAME> 13 510
146 Planalto 13 431
147 Cantagalo 13 329
148 Imbaú 13 282
149 <NAME> 13 255
150 Nova Londrina 13 200
151 Bocaiuva do Sul 13 129
152 Turvo 13 095
153 <NAME> 13 092
154 Mamborê 13 014
155 Palmital 12 960
156 <NAME> 12 948
157 Piên 12 882
158 Cidade Gaúcha 12 797
159 Porecatu 12 748
160 Jataizinho 12 638
161 Teixeira Soares 12 567
162 Querência do Norte 12 232
163 Guaraniaçu 12 217
164 Santa Fé 12 186
165 Itapejara d'Oeste 12 094
166 Ventania 12 088
167 Moreira Sales 12 042
168 Três Barras do Paraná 12 038
169 <NAME> 12 009
170 <NAME> 11 819
171 Santa Mariana 11 622
172 Paranacity 11 580
173 Nova Laranjeiras 11 507
174 Alvorada do Sul 11 503
175 <NAME> 11 426
176 Itaipulândia 11 385
177 Pérola 11 321
178 Uraí 11 273
179 <NAME> 11 196
180 Primeiro de Maio 11 130
181 São Jerônimo da Serra 11 128
182 <NAME> 11 121
183 São Pedro do Ivaí 11 046
184 Mauá da Serra 10 800
185 Centenário do Sul 10 764
186 Missal 10 704
187 <NAME> 10 645
188 Nova Prata do Iguaçu 10 544
189 Florestópolis 10 453
190 Mariluz 10 336
191 Barracão 10 312
192 Nova Aurora 10 299
193 São João 10 181
194 Catanduvas 10 167
195 Iretama 10 098
196 Santa Tereza do Oeste 10 096
197 São João do Ivaí 10 056
198 <NAME> 9 778
199 Roncador 9 645
200 Rondon 9 622
201 Japurá 9 500
202 Agudos do Sul 9 470
203 Santa Maria do Oeste 9 410
204 São Jorge d'Oeste 9 028
205 Tunas do Paraná 9 022
206 Douradina 8 869
207 São Sebastião da Amoreira 8 859
208 Congonhinhas 8 857
209 Marilândia do Sul 8 814
210 Guamiranga 8 811
211 Califórnia 8 606
212 Tuneiras do Oeste 8 533
213 Santa Isabel do Ivaí 8 523
214 Vera Cruz do Oeste 8 454
215 Jesuítas 8 330
216 Nova Santa Rosa 8 266
217 Ivaté 8 240
218 Nova Fátima 8 136
219 Tupãssi 8 109
220 Reserva do Iguaçu 8 069
221 Campo do Tenente 8 045
222 Cambira 7 917
223 Tomazina 7 807
224 Icaraíma 7 786
225 Santa Cruz de Monte Castelo 7 751
226 Figueira 7 696
227 Guaraqueçaba 7 594
228 Boa Vista da Aparecida 7 540
229 <NAME> 7 518
230 Quatiguá 7 477
231 <NAME> 7 427
232 Abatiá 7 408
233 <NAME> 7 387
234 Juranda 7 292
235 Luiziana 7 240
236 Verê 7 174
237 Marilena 7 084
238 Bom Sucesso 7 068
239 Goioxim 7 053
240 Jussara 7 041
241 São Carlos <NAME>í 6 920
242 Sabáudia 6 891
243 Vitorino 6 859
244 Floresta 6 851
245 Renascença 6 787
246 Sapopema 6 722
247 Mariópolis 6 632
248 Guairaçá 6 609
249 Itambaracá 6 549
250 Formosa do Oeste 6 460
251 Borrazópolis 6 439
252 Ibema 6 370
253 Boa Ventura de São Roque 6 365
254 Amaporã 6 332
255 Pinhalão 6 324
256 Pérola d'Oeste 6 288
257 Perobal 6 160
258 São José da Boa Vista 6 160
259 Itambé 6 109
260 Ouro Verde do Oeste 6 016
261 <NAME> 5 993
262 <NAME> 5 983
263 <NAME> 5 933
264 <NAME> 5 908
265 Adrianópolis 5 857
266 São João do Caiuá 5 837
267 Nova Olímpia 5 826
268 São Pedro do Iguaçu 5 820
269 Laranjal 5 784
270 São Tomé 5 750
271 <NAME> 5 684
272 <NAME> 5 634
273 Xambrê 5 630
274 <NAME> 5 602
275 São Jorge do Patrocínio 5 586
276 Maripá 5 582
277 Mercedes 5 577
278 <NAME> 5 552
279 São Jorge do Ivaí 5 543
280 Saudade do Iguaçu 5 539
281 Guaraci 5 530
282 Grandes Rios 5 497
283 Tapira 5 495
284 Nova Tebas 5 448
285 Santo Inácio 5 416
286 Braganey 5 382
287 President<NAME> 5 351
288 Jaboti 5 303
289 Diamante d'Oeste 5 266
290 Sertaneja 5 216
291 Tamboara 5 158
292 <NAME> 5 119
293 Janiópolis 5 095
294 Pranchita 5 095
295 Nova Cantu 5 061
296 Diamante do Norte 5 030
297 Nova Esperança do Sudoeste 5 030
298 Santana do Itararé 4 954
299 Lupionópolis 4 945
300 Japira 4 930
301 Floraí 4 906
302 Salto do Itararé 4 898
303 Porto Amazonas 4 874
304 Lobato 4 819
305 Fênix 4 748
306 Lunardelli 4 744
307 <NAME> Ivaí 4 689
308 Marumbi 4 677
309 Flor da Serra do Sul 4 603
310 Entre Rios do Oeste 4 596
311 Lindoeste 4 592
312 Foz do Jordão 4 556
313 Quinta do Sol 4 508
314 Serranópolis do Iguaçu 4 477
315 Ramilândia 4 476
316 Indianópolis 4 465
317 Quarto Centenário 4 465
318 Cruzeiro do Sul 4 449
319 Itaguajé 4 446
320 Iguaraçu 4 440
321 Marquinho 4 340
322 Nova Santa Bárbara 4 277
323 Planaltina do Paraná 4 272
324 Cruzeiro do Iguaçu 4 240
325 Rio Branco do Ivaí 4 109
326 Porto Vitória 4 061
327 Espigão Alto do Iguaçu 4 048
328 Boa Esperança 4 047
329 Kaloré 4 047
330 Quatro Pontes 4 029
331 Virmond 4 022
332 Santa Mônica 4 017
333 Cafezal do Sul 4 009
334 <NAME> 4 009
335 Nossa Senhora das Graças 4 008
336 Leópolis 3 925
337 Atalaia 3 881
338 <NAME> 3 876
339 <NAME> 3 859
340 Santa Lúcia 3 793
341 Guapirama 3 784
342 <NAME> 3 784
343 <NAME> 3 780
344 Campo Bonito 3 763
345 São José das Palmeiras 3 627
346 Bom Jesus do Sul 3 506
347 <NAME> 3 483
348 Bela Vista da Caroba 3 457
349 Nova América da Colina 3 434
350 Ourizona 3 425
351 Diamante do Sul 3 424
352 Santa Cecília do Pavão 3 293
353 Ivatuba 3 279
354 Jundiaí do Sul 3 269
355 Santa Amélia 3 266
356 Pitangueiras 3 262
357 Bom Sucesso do Sul 3 254
358 Paranapoema 3 241
359 Lidianópolis 3 231
360 <NAME> 3 206
361 Porto Barreiro 3 184
362 <NAME> 3 182
363 Corumbataí do Sul 3 127
364 Inajá 3 116
365 Farol 3 041
366 Arapuã 3 009
367 Cafeara 2 954
368 Ângulo 2 930
369 Sulina 2 930
370 Cruzmaltina 2 921
371 <NAME> 2 898
372 Novo Itacolomi 2 840
373 Anahy 2 788
374 Barra do Jacaré 2 781
375 Itaúna do Sul 2 781
376 <NAME> Bento 2 737
377 Flórida 2 699
378 Alto Paraíso 2 685
379 Rancho Alegre d'Oeste 2 628
380 Santo Antônio do Caiuá 2 626
381 Uniflor 2 614
382 Brasilândia do Sul 2 585
383 Porto Rico 2 556
384 Manfrinópolis 2 506
385 Boa Esperança do Iguaçu 2 470
386 São Pedro do Paraná 2 289
387 Iguatu 2 253
388 Iracema do Oeste 2 251
389 Guaporema 2 241
390 Mirador 2 196
391 São Manoel do Paraná 2 163
392 Santo Antônio do Paraíso 2 068
393 <NAME> Ivaí 2 066
394 Miraselva 1 796
395 Altamira do Paraná 1 682
396 Esperança Nova 1 665
397 Santa Inês 1 594
398 Nova Aliança do Ivaí 1 551
399 <NAME> 1 320
"""
from io import StringIO
populacao_pr = StringIO(populacao_pr)
populacao_pr = pd.read_csv(populacao_pr, sep='\t')
populacao_pr.columns
from unidecode import unidecode
# +
colunas = []
for coluna in populacao_pr.columns:
colunas.append(unidecode(coluna.upper()))
populacao_pr.columns = colunas
# -
populacao_pr.iloc[25,:] = [26, 'Pato Branco', '83 843']
populacao_pr.iloc[25,:]
# +
pop_pr = []
for pop in populacao_pr['POPULACAO']:
pop_pr.append(int(pop.replace(' ','')))
populacao_pr['POPULACAO'] = pop_pr
# -
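# The loop above can also be written as one vectorised step (or avoided entirely by passing `thousands=' '` to `read_csv` when the file is loaded):

```python
import pandas as pd

pop = pd.Series(['1 948 626', '575 377'])
# Strip the space thousands separator, then cast to int
pop_int = pop.str.replace(' ', '', regex=False).astype(int)
print(pop_int.tolist())  # [1948626, 575377]
```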
populacao_pr.info()
| notebook/Analise_COVID19_PR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from comet_ml import API
import comet_ml
import io
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from collections import defaultdict
import time
import torch
from sklearn.linear_model import HuberRegressor, LinearRegression
# -
sns.set(context='paper', style="whitegrid", font_scale=3, font = 'serif')
colors = [
'skyblue', 'orange', 'steelblue', 'gold', '#f58231', 'red'
]
# %matplotlib inline
linewidth = 3
datafile='rand_preprocessed.pt'
device = torch.device("cpu")
# +
showers = torch.load(datafile)  # list of preprocessed shower graphs used below
# -
len(showers)
showers[0]
# +
lens=[]
for shower in showers:
lens.append(len(np.unique(shower.y)))
plt.figure(figsize=(12, 8))
plt.title("Distribution of number of showers in brick \n")
plt.hist(lens, bins=15);
plt.savefig('showers_per_brick_distr.pdf', bbox_inches='tight')
plt.show()
# -
showers_data = []
for shower in showers:
showers_data.append(shower.shower_data)
showers_data = [item for sublist in showers_data for item in sublist]
len(showers_data)
# +
numtracks = []
E_true = []
for shower in showers_data:
numtracks.append(shower[-2].numpy())
E_true.append(shower[0].numpy())
# -
max(np.array(numtracks))
max(np.array(E_true))
unique, index = np.unique(E_true, return_index=True)
plt.hist(unique, bins=100);
len(unique)
ntrck_unique = np.array(numtracks)[index]
E_true_sorted = unique.copy()
E_true_sorted.sort()
indexes_0_5_2 = [list(unique).index(E_true_sorted[i]) for i in range(1291)]
#1284
indexes_2_3 = [list(unique).index(E_true_sorted[i]) for i in range(1291, 2330)]
#1033
indexes_3_4 = [list(unique).index(E_true_sorted[i]) for i in range(2330, 3227)]
#888
indexes_4_5_5 = [list(unique).index(E_true_sorted[i]) for i in range(3227, 4340)]
#1110
indexes_5_5_7_5 = [list(unique).index(E_true_sorted[i]) for i in range(4340, 5374)]
#1032
indexes_7_5_11 = [list(unique).index(E_true_sorted[i]) for i in range(5374, 6476)]
#1102
indexes_11_22 = [list(unique).index(E_true_sorted[i]) for i in range(6476, 7565)]
#1088
indx = [indexes_0_5_2, indexes_2_3, indexes_3_4, indexes_4_5_5,
indexes_5_5_7_5, indexes_7_5_11, indexes_11_22]
ind_name = ['from 0.5 to 2 Gev', 'from 2 to 3 Gev', 'from 3 to 4 Gev', 'from 4 to 5.5 Gev',
'from 5.5 to 7.5 Gev', 'from 7.5 to 11 Gev', 'from 11 to 22 Gev', ]
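# The hand-built index lists above partition the sorted energies into the ranges named in `ind_name`; with the stdlib `bisect` module the bin of a single energy can be found directly (edge handling here is a sketch, not the exact boundary convention used above):

```python
import bisect

edges = [2, 3, 4, 5.5, 7.5, 11, 22]  # upper edges of the energy ranges, in GeV

def energy_bin(e):
    # 0 -> 0.5-2 GeV, 1 -> 2-3 GeV, ..., 6 -> 11-22 GeV
    return bisect.bisect_left(edges, e)

print(energy_bin(2.5))  # 1
```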
# +
plt.figure(figsize=(12, 8), dpi=100)
plt.title("Number of tracks distribution vs True Energy range \n")
for ind, name in zip(indx, ind_name):
sns.distplot(ntrck_unique[ind], hist=True, rug=False, label=name)
plt.legend()
plt.savefig("E_true_vs_numtracks_distr.pdf", bbox_inches='tight')
plt.show()
# -
plt.figure(figsize=(12, 8), dpi=100)
plt.title("True Energy of EM shower vs the shower number of tracks \n")
plt.scatter((np.array(numtracks)[index]),unique)
plt.ylabel("True Energy")
plt.xlabel("Number of tracks")
# plt.ylim
plt.savefig("E_true_vs_numtracks.pdf", bbox_inches='tight')
plt.show()
# # Energy resolution on random graph
def E_pred(E_raw, E_true, nshowers):
#E_raw, E_true -- .npy files
#name - string
E_raw = np.load(E_raw)
E_true = np.load(E_true)
assert(len(E_raw)==len(E_true))
r = HuberRegressor()
r.fit(X=E_raw.reshape((-1, 1)), y=E_true)
E_pred = r.predict(E_raw.reshape((-1, 1)))
plt.figure(figsize=(10, 5), dpi=100)
plt.title('Energy distribution comparison for '+ nshowers + ' per brick \n')
sns.distplot(E_true, bins = 100, hist = True, label='True Energy')
sns.distplot(E_pred, bins = 100, hist = True, label='Predicted Energy')
plt.legend(loc='upper right')
plt.savefig(nshowers + ".pdf", bbox_inches='tight')
plt.show()
return E_pred, E_true
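# `HuberRegressor` above fits a robust linear map from the raw network output to the true energy; for intuition, a minimal (non-robust) ordinary-least-squares version of the same 1-D calibration:

```python
def fit_line(x, y):
    # Least-squares slope/intercept for y ~ a*x + b
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

a, b = fit_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
print(a, b)  # 2.0 1.0
```

Unlike the least-squares version, the Huber loss down-weights outliers, which matters when a few showers have badly mis-estimated raw energies.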
E_pred_rand, E_true_rand = E_pred('E_pred_rand.npy', 'E_true_rand.npy', 'random number of showers')
plt.figure(figsize=(10, 5), dpi=100)
plt.title('E_true - E_pred for 50 showers \n')
# NOTE: E_pred_50/E_true_50 and E_true_all come from cells in the
# "Energy resolution dependence" section below, so run those first
plt.hist(E_pred_50 - E_true_50, bins=100);
# fraction of showers dropped in the random case relative to the ideal case
1 - len(E_true_rand)/len(E_true_all)
ER = np.load('ER.npy')
n_showers = np.load('n_showers.npy')
# +
n = []
for i in range(len(n_showers)-1):
if n_showers[i+1] != n_showers[i]:
n.append(n_showers[i])
n.append(n_showers[-1])
# +
plt.figure(figsize=(10, 5), dpi=100)
plt.title('Energy Resolution vs number of showers in a brick \n')
sns.scatterplot(n, ER)
plt.savefig('ER_per_brick.pdf', bbox_inches='tight')
plt.show()
# -
# # Energy resolution dependence on number of showers in a brick
E_pred_200, E_true_200 = E_pred('E_pred.npy', 'E_true.npy', '200showers')
E_pred_120, E_true_120 = E_pred('E_pred_120shower.npy', 'E_pred_true_120shower.npy', '120showers')
E_pred_1, E_true_1 = E_pred('E_pred_1shower.npy', 'E_pred_true_1shower.npy', '1shower')
E_pred_50, E_true_50 = E_pred('E_pred_50shower.npy', 'E_pred_true_50shower.npy', '50showers')
E_all, E_true_all = E_pred('E.npy', 'E_true_all.npy', 'ideal case random number of showers')
nshowers_dict = {'1shower': [], '50showers': [], '120showers': [], '200showers': [],
'ideal_case': [], 'random_case': []}
ER_dict = {'1shower': [], '50showers': [], '120showers': [], '200showers': [],
'baseline': [0.35, 0.32, 0.25, 0.22, 0.18, 0.16, 0.12]}
index_list_1 = [1284, 2317, 3205, 4315, 5347, 6449, 7537]
index_list_50 = [1063, 1907, 2669, 3638, 4537, 5512, 6497]
index_list_120 = [775, 1491, 2092, 2893, 3664, 4496, 5366]
index_list_200 = [1010, 1945, 2762, 3792, 4740, 5741, 6750]
index_list_ideal = [1292, 2331, 3228, 4341, 5375, 6478, 7567]
index_list_random = [831, 1743, 2529, 3542, 4493, 5500, 6502]
indices_list = [index_list_1, index_list_50, index_list_120, index_list_200,
index_list_ideal, index_list_random]
E_pred_list = [E_pred_1, E_pred_50, E_pred_120, E_pred_200, E_all, E_pred_rand]
E_true_list = [E_true_1, E_true_50, E_true_120, E_true_200, E_true_all, E_true_rand]
n = ['1shower', '50showers', '120showers', '200showers', 'ideal_case', 'random_case', 'baseline']
def Energy_boxes(E_pred, E_true, nshowers, index_list):
#nshowers - string
#Dividing Energy into 'boxes'
E_true_sorted = E_true.copy()
E_true_sorted.sort()
E_true_sorted = E_true_sorted.tolist()
E_true = E_true.tolist()
## return indices for every 'box' of Energy (from 0 to 2 GeV, from 2 to 3 GeV...)
nshowers_dict[nshowers].append([E_true.index(E_true_sorted[i]) for i in range(index_list[0])])
for i in range(len(index_list)-1):
nshowers_dict[nshowers].append([E_true.index(E_true_sorted[i]) for i in range(index_list[i],
index_list[i+1])])
for i in range(len(indices_list)):
Energy_boxes(E_pred_list[i], E_true_list[i], n[i], indices_list[i])
def ER(ind, E_pred, E_true):
return np.std((np.array(E_true)[ind] - np.array(E_pred)[ind])
/np.array(E_true)[ind])
for i in range(len(E_pred_list)):
nshowers = n[i]
for ind in nshowers_dict[nshowers]:
ER_dict[nshowers].append(ER(ind, E_pred_list[i], E_true_list[i]))
def plot_ER(nshowers):
E_boxes = [2, 3, 4, 5.5, 7.5, 11, 22]
plt.figure(figsize=(16, 12), dpi=100)
    plt.title('Energy resolution comparison for different numbers of showers in a brick (Linear Regression) \n')
for n in nshowers:
plt.plot(E_boxes, ER_dict[n], marker = '*', label=n)
plt.xlabel("E (GeV)");
plt.ylabel("ER");
plt.legend()
plt.savefig("ER_comparison_LR.pdf", bbox_inches='tight')
plt.show()
plot_ER(n)
# # Ideal Case
# +
ER_ideal = []
for ind in nshowers_dict['ideal_case']:
ER_ideal.append(ER(ind, E_pred_list[-2], E_true_list[-2]))
# +
E_boxes = [2, 3, 4, 5.5, 7.5, 11, 22]
plt.figure(figsize=(16, 12), dpi=100)
plt.title('Energy resolution for ideal case \n')
plt.plot(E_boxes, ER_ideal, marker = '*')
plt.xlabel("E (GeV)");
plt.ylabel("ER");
plt.savefig("ER_ideal.pdf", bbox_inches='tight')
plt.show()
# -
# # Random case
# +
ER_random = []
for ind in nshowers_dict['random_case']:
ER_random.append(ER(ind, E_pred_list[-1], E_true_list[-1]))
# +
E_boxes = [2, 3, 4, 5.5, 7.5, 11, 22]
plt.figure(figsize=(16, 12), dpi=100)
plt.title('Energy resolution: Ideal vs Random case \n')
for E, name in zip([ER_ideal, ER_random], ['Ideal case', "Random case"]):
plt.plot(E_boxes, E, marker = '*', label=name)
plt.xlabel("E (GeV)");
plt.ylabel("ER");
plt.legend()
plt.savefig("ER_comparison_Ideal_Random.pdf", bbox_inches='tight')
plt.show()
# -
| ER_results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sticking to the Target
#
# ### 1. Import Libraries
# +
from unityagents import UnityEnvironment
import numpy as np
import random
import torch
import time
from collections import deque
import matplotlib.pyplot as plt
from workspace_utils import active_session
# %matplotlib inline
# Custom RL Agent
from agent import Agent
# -
# ### 2. Create the Unity Environment
#
# Using the Reacher environment, a Unity agent 'brain' is created, which is responsible for deciding the agents' actions.
# +
# Set environment and display information
env = UnityEnvironment(file_name='Reacher.app')
# Set the default ReacherBrain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# -
# ### 3. Train the Agent
#
# #### Training Implementation
# A `ddpg_training` function implements the training loop for the agents and stores the best model weights once an average score of 30 or more has been achieved across all 20 agents.
#
# The algorithm is a Deep Deterministic Policy Gradient (DDPG) that uses Ornstein-Uhlenbeck process noise for exploration and an experience replay buffer.
#
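# The Ornstein-Uhlenbeck noise mentioned above is mean-reverting: each step pulls the state toward μ and adds Gaussian jitter. A one-step Euler sketch (the θ, μ, σ defaults here are illustrative, not the agent's actual settings):

```python
import random

def ou_step(x, theta=0.15, mu=0.0, sigma=0.2):
    # dx = theta * (mu - x) + sigma * N(0, 1): drift toward mu plus Gaussian noise
    return x + theta * (mu - x) + sigma * random.gauss(0.0, 1.0)

print(ou_step(1.0, sigma=0.0))  # 0.85 (pure drift when the noise term is off)
```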
# #### DDPG Architecture
# - An _actor_ consisting of a fully-connected network (4 layers):
# - An input layer with 33 inputs, representing the state size
# - 2 hidden layers, both with 128 hidden nodes that are passed through relu activation functions
# - An output layer with 4 outputs, for the number of actions, passed through a tanh activation function
#
#
# - A _critic_ consisting of a fully-connected network (4 layers):
# - An input layer with 33 inputs, representing the state size
# - 2 hidden layers, one with 132 hidden nodes (128 + actions) and the other with 128 hidden nodes that are passed through relu activation functions
# - An output layer with 1 output, specifying the Q-value
#
# #### Hyperparameters Used
# - `BUFFER_SIZE = int(1e5)`: replay buffer size
# - `BATCH_SIZE = 256`: minibatch size
# - `GAMMA = 0.99`: discount factor
# - `TAU = 1e-3`: used for soft update of target parameters
# - `LR_ACTOR = 1e-3`: learning rate of the actor
# - `LR_CRITIC = 1e-3`: learning rate of the critic
# - `EPS_START = 1`: epsilon start value
# - `EPS_END = 0.05`: epsilon ending value
# - `EPS_DECAY = 1e-6`: epsilon decay rate
# - `UPDATE_EVERY = 20`: num timesteps before each update
# - `NUM_UPDATE = 10`: num of updates after set num of timesteps
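# The `TAU` soft update blends the local network into the target network on each update: θ_target ← τ·θ_local + (1−τ)·θ_target. A parameter-wise sketch, with plain floats standing in for the actual tensors:

```python
def soft_update(local_params, target_params, tau=1e-3):
    # Move each target parameter a small step toward its local counterpart
    return [tau * l + (1.0 - tau) * t for l, t in zip(local_params, target_params)]

print(soft_update([1.0], [0.0], tau=0.1))  # [0.1]
```

With `TAU = 1e-3` the target networks trail the local ones very slowly, which is what keeps the bootstrapped critic targets stable.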
def ddpg_training(brain_name, n_agents, n_episodes=1000,
max_t=1000, print_every=100):
"""
Perform DDPG training on each agent.
Parameters:
- brain_name (string): name of agent brain to use
- n_agents (int): number of agents
- n_episodes (int): maximum number of training episodes
- max_t (int): maximum number of timesteps per episode
- print_every (int): number of scores to average
"""
scores_list = [] # list containing scores from each episode
scores_window = deque(maxlen=print_every) # last set of scores
# Iterate over each episode
for i_episode in range(1, n_episodes+1):
# Reset environment and agents, set initial states
# and reward scores every episode
env_info = env.reset(train_mode=True)[brain_name]
agent.reset()
states = env_info.vector_observations
scores = np.zeros(n_agents)
# Iterate over each timestep
for t in range(max_t):
# Perform an action for each agent in the environment
actions = agent.act(states)
env_info = env.step(actions)[brain_name]
# Set new experiences and interact with the environment
next_states = env_info.vector_observations
rewards = env_info.rewards
dones = env_info.local_done
agent.step(states, actions, rewards, next_states, dones, t)
# Update states and scores
states = next_states
scores += rewards
# Break loop if an agent finishes the episode
if any(dones):
break
# Save most recent scores
scores_window.append(np.mean(scores))
scores_list.append(np.mean(scores))
# Output episode information
print(f'\rEpisode {i_episode}\tAverage Score: {np.mean(scores_window):.2f}', end="")
if i_episode % print_every == 0:
print(f'\rEpisode {i_episode}\tAverage Score: {np.mean(scores_window):.2f}')
# Save environment if goal achieved
if np.mean(scores_window) >= 30.0:
print(f'\nEnvironment solved in {i_episode} episodes!\tAverage Score: {np.mean(scores_window):.2f}')
torch.save(agent.actor_local.state_dict(), 'actor_checkpoint.pth')
torch.save(agent.critic_local.state_dict(), 'critic_checkpoint.pth')
break
# Return reward scores
return scores_list
# +
# Initialize the environment
env_info = env.reset(train_mode=True)[brain_name]
# Set number of actions
action_size = brain.vector_action_space_size # 4
# Set states for each agent
states = env_info.vector_observations
state_size = states.shape[1] # 33
# Set number of agents
num_agents = len(env_info.agents) # 20
print(f'Number of agents: {num_agents}')
# Create and view the agent networks
agent = Agent(state_size, action_size, seed=0, n_agents=num_agents)
print(agent.actor_local)
print(agent.critic_local)
print()
# -
with active_session():
# Start training time
start_time = time.time()
# Train all 20 agents
train_scores = ddpg_training(brain_name, num_agents)
# Calculate time taken to train
train_time = (time.time() - start_time) / 60
print(f"\nTotal Training Time: {train_time:.2f} mins")
# ### 4. Analyse the Training Results
#
# Reviewing the graph, we can see the score slowly increase over the episodes; the highest average score across all 20 agents is ~39.0, and the environment was solved in 118 episodes.
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(train_scores)), train_scores)
plt.title('Reward Per Episode')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.grid(True)
plt.show()
# ### 5. Test the Trained Agents
# Now that the agents have been trained, the environment can be reset and put into test mode using the `train_mode=False` flag. Using the best weights for the agents, we can run the Unity environment to test them.
# +
# Initialize environment
env_info = env.reset(train_mode=False)[brain_name]
# Set number of actions and agents
action_size = brain.vector_action_space_size # 4
num_agents = len(env_info.agents) # 20
# Set states for each agent
states = env_info.vector_observations
state_size = states.shape[1] # 33
# Create the agent
agent = Agent(state_size, action_size, seed=0, n_agents=num_agents)
test_scores = np.zeros(num_agents)
# Set best device available
if torch.cuda.is_available():
map_location=lambda storage, loc: storage.cuda()
else:
map_location='cpu'
# Load best agent weights
agent.actor_local.load_state_dict(torch.load('actor_checkpoint.pth', map_location=map_location))
agent.critic_local.load_state_dict(torch.load('critic_checkpoint.pth', map_location=map_location))
# -
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = agent.act(states) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
# ### 6. Close the Environment
# When finished, we close the environment.
env.close()
| p2_reacher/Continuous_Control.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import json
# +
import tensorflow as tf
import keras.backend.tensorflow_backend as KTF
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
session = tf.Session(config=config)
KTF.set_session(session)
# -
from utils import *
from preprecoess import *
from generator import *
from models import *
from fl_training import *
root_data_path = None # MIND-Dataset Path
embedding_path = None # Word Embedding Path
# Read News
news,news_index,category_dict,subcategory_dict,word_dict = read_news(root_data_path,['train','val'])
news_title,news_vert,news_subvert=get_doc_input(news,news_index,category_dict,subcategory_dict,word_dict)
title_word_embedding_matrix, have_word = load_matrix(embedding_path,word_dict)
#Parse User
train_session, train_uid_click, train_uid_table = read_clickhistory(root_data_path,'train')
test_session, test_uid_click,test_uid_table = read_clickhistory(root_data_path,'val')
train_user = parse_user(train_session,news_index)
test_user = parse_user(test_session,news_index)
train_sess, train_user_id, train_label, train_user_id_sample = get_train_input(train_session,train_uid_click,news_index)
test_impressions, test_userids = get_test_input(test_session,news_index)
get_user_data = GetUserDataFunc(news_title,train_user_id_sample,train_user,train_sess,train_label,train_user_id)
# +
lr = 0.3
delta = 0.05
lambd = 0
ratio = 0.02
num = 6
model, doc_encoder, user_encoder, news_encoder = get_model(lr,delta,title_word_embedding_matrix)
Res = []
Loss = []
count = 0
while True:
loss = fed_single_update(model,doc_encoder,user_encoder,num,lr,get_user_data,train_uid_table)
Loss.append(loss)
if count % 25 == 0:
news_scoring = news_encoder.predict(news_title,verbose=0)
user_generator = get_hir_user_generator(news_scoring,test_user['click'],64)
        user_scoring = user_encoder.predict_generator(user_generator, verbose=0)
g = evaluate(user_scoring,news_scoring,test_impressions)
Res.append(g)
print(g)
with open('FedRec-woLDP-1.json','a') as f:
s = json.dumps(g) + '\n'
f.write(s)
count += 1
# -
| code/Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dealing with class imbalance
# When the classes do not contain the same number of data points, the classifier can produce biased results, so it must be adjusted to account for this class imbalance
import sys
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from utilities import visualize_classifier
# +
# Load the data
input_file = 'data_imbalance.txt'
data = np.loadtxt(input_file, delimiter=',')
X,y = data[:,:-1], data[:, -1]
# Separate the classes
class_0 = np.array(X[y==0])
class_1 = np.array(X[y==1])
# -
# Visualize the input data
plt.figure()
plt.scatter(class_0[:,0], class_0[:,1],s=75, facecolors='black',edgecolors='black',linewidth=1,marker='x')
plt.scatter(class_1[:,0], class_1[:,1],s=75, facecolors='white',edgecolors='black',linewidth=1,marker='o')
plt.title('Input data')
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25,random_state=5)
# +
# Extremely randomized trees classifier
def forest_classifier(balance=''):
params={'n_estimators':100, 'max_depth':4,'random_state':0}
if balance=='balance':
params={'n_estimators':100, 'max_depth':4,'random_state':0, 'class_weight':'balanced'}
classifier = ExtraTreesClassifier(**params)
classifier.fit(X_train, y_train)
    print("\nClassified training data\n")
visualize_classifier(classifier, X_train, y_train)
y_test_pred = classifier.predict(X_test)
    print("\nClassified test data\n")
visualize_classifier(classifier, X_test, y_test)
    print("\nX_test data with predicted y_test_pred labels\n")
visualize_classifier(classifier, X_test, y_test_pred)
# Evaluate classifier performance
class_names = ['Class-0', 'Class-1']
print("\n" + "#"*40)
print("\nClassifier performance on training dataset\n")
print(classification_report(y_train, classifier.predict(X_train),target_names=class_names))
print("#"*40 + "\n")
print("#"*40)
print("\nClassifier performance on test dataset\n")
print(classification_report(y_test, y_test_pred, target_names=class_names))
print("#"*40 + "\n")
# -
forest_classifier()
# The warning above comes from a division by zero when scoring class_0, which is caused by the imbalance between the classes
forest_classifier('balance')
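# The `class_weight='balanced'` option used above reweights each class inversely to its frequency. A minimal sketch of the formula scikit-learn documents for this option, `n_samples / (n_classes * class_count)`:

```python
import numpy as np

def balanced_class_weights(y):
    """Per-class weights: n_samples / (n_classes * class_count),
    the formula used by class_weight='balanced' in scikit-learn."""
    classes, counts = np.unique(y, return_counts=True)
    return dict(zip(classes, len(y) / (len(classes) * counts)))
```

# With a 3:1 imbalance, e.g. y = [0, 0, 0, 1], the minority class gets weight 2.0 and the majority class 2/3, so each class contributes equally to the loss overall.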
| supervised/class_imbalance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome to AI for Science Bootcamp
#
# The objective of this bootcamp is to give an introduction to the application of Artificial Intelligence (AI) algorithms in Science (High-Performance Computing (HPC) simulations). This bootcamp will introduce participants to the fundamentals of AI and how they can be applied to different HPC simulation domains.
#
# The following contents will be covered during the Bootcamp :
#
# - [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)
# - [Tropical Cyclone Intensity Estimation using Deep Convolution Neural Networks.](Tropical_Cyclone_Intensity_Estimation/The_Problem_Statement.ipynb)
# ## [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)
#
# In this notebook, participants will be introduced to Convolutional Neural Networks and how to implement one using the Keras API. For an absolute beginner to CNNs and Keras, this notebook serves as a good starting point.
#
# **By the end of this notebook you will:**
#
# - Understand the Machine Learning pipeline
# - Understand how a Convolutional Neural Network works
# - Write your own Deep Learning classifier and train it.
#
# For an in-depth understanding of Deep Learning concepts, visit the [NVIDIA Deep Learning Institute](https://www.nvidia.com/en-us/deep-learning-ai/education/)
# ## [Tropical Cyclone Intensity Estimation using Deep Convolutional Neural Networks](Tropical_Cyclone_Intensity_Estimation/The_Problem_Statement.ipynb)
#
# In this notebook, participants will be introduced to how Convolutional Neural Networks (CNNs) can be applied in the field of Climate analysis.
#
# **Contents of this Notebook:**
#
# - Understanding the problem statement
# - Building a Deep Learning pipeline
# - Understand input data
# - Annotating the data
# - Build the model
# - Understanding different accuracy metrics and what they mean
# - Improving the model
#
# **By the end of this notebook you will:**
#
# - Understand the process of applying Deep Learning to solve a problem in the field of Climate Analysis
# - Understand various challenges with data pre-processing
# - Understand how hyper-parameters play an essential role in improving the accuracy of the model
| hpc_ai/ai_science_climate/English/python/jupyter_notebook/Start_Here.ipynb |