# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 (seaborn-dev)
# language: python
# name: seaborn-dev
# ---
# + active=""
# .. _distribution_tutorial:
#
# .. currentmodule:: seaborn
# + active=""
# Visualizing the distribution of a dataset
# =========================================
#
# .. raw:: html
#
# <div class=col-md-9>
#
# + active=""
# When dealing with a set of data, often the first thing you'll want to do is get a sense for how the variables are distributed. This chapter of the tutorial will give a brief introduction to some of the tools in seaborn for examining univariate and bivariate distributions. You may also want to look at the :ref:`categorical plots <categorical_tutorial>` chapter for examples of functions that make it easy to compare the distribution of a variable across levels of other variables.
# -
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
sns.set(color_codes=True)
# + tags=["hide"]
# %matplotlib inline
np.random.seed(sum(map(ord, "distributions")))
# + active=""
# Plotting univariate distributions
# ---------------------------------
#
# The most convenient way to take a quick look at a univariate distribution in seaborn is the :func:`distplot` function. By default, this will draw a `histogram <https://en.wikipedia.org/wiki/Histogram>`_ and fit a `kernel density estimate <https://en.wikipedia.org/wiki/Kernel_density_estimation>`_ (KDE).
# -
x = np.random.normal(size=100)
sns.distplot(x);
# + active=""
# Histograms
# ^^^^^^^^^^
#
# Histograms are likely familiar, and a ``hist`` function already exists in matplotlib. A histogram represents the distribution of data by forming bins along the range of the data and then drawing bars to show the number of observations that fall in each bin.
#
# To illustrate this, let's remove the density curve and add a rug plot, which draws a small vertical tick at each observation. You can make the rug plot itself with the :func:`rugplot` function, but it is also available in :func:`distplot`:
# -
sns.distplot(x, kde=False, rug=True);
# + active=""
# When drawing histograms, the main choice you have is the number of bins to use and where to place them. :func:`distplot` uses a simple rule to make a good guess for what the right number is by default, but trying more or fewer bins might reveal other features in the data:
# -
sns.distplot(x, bins=20, kde=False, rug=True);
# + active=""
# Kernel density estimation
# ^^^^^^^^^^^^^^^^^^^^^^^^^
#
# The kernel density estimate may be less familiar, but it can be a useful tool for plotting the shape of a distribution. Like the histogram, the KDE plot encodes the density of observations on one axis with height along the other axis:
# -
sns.distplot(x, hist=False, rug=True);
# + active=""
# Drawing a KDE is more computationally involved than drawing a histogram. What happens is that each observation is first replaced with a normal (Gaussian) curve centered at that value:
# +
x = np.random.normal(0, 1, size=30)
bandwidth = 1.06 * x.std() * x.size ** (-1 / 5.)
support = np.linspace(-4, 4, 200)
kernels = []
for x_i in x:
    kernel = stats.norm(x_i, bandwidth).pdf(support)
    kernels.append(kernel)
    plt.plot(support, kernel, color="r")
sns.rugplot(x, color=".2", linewidth=3);
# + active=""
# Next, these curves are summed to compute the value of the density at each point in the support grid. The resulting curve is then normalized so that the area under it is equal to 1:
# -
from scipy.integrate import trapz
density = np.sum(kernels, axis=0)
density /= trapz(density, support)
plt.plot(support, density);
# + active=""
# We can see that if we use the :func:`kdeplot` function in seaborn, we get the same curve. This function is used by :func:`distplot`, but it provides a more direct interface with easier access to other options when you just want the density estimate:
# -
sns.kdeplot(x, shade=True);
# + active=""
# The bandwidth (``bw``) parameter of the KDE controls how tightly the estimation is fit to the data, much like the bin size in a histogram. It corresponds to the width of the kernels we plotted above. The default behavior tries to guess a good value using a common reference rule, but it may be helpful to try larger or smaller values:
# -
sns.kdeplot(x)
sns.kdeplot(x, bw=.2, label="bw: 0.2")
sns.kdeplot(x, bw=2, label="bw: 2")
plt.legend();
# + active=""
# As you can see above, the nature of the Gaussian KDE process means that estimation extends past the largest and smallest values in the dataset. It's possible to control how far past the extreme values the curve is drawn with the ``cut`` parameter; however, this only influences how the curve is drawn and not how it is fit:
# -
sns.kdeplot(x, shade=True, cut=0)
sns.rugplot(x);
# + active=""
# Fitting parametric distributions
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# You can also use :func:`distplot` to fit a parametric distribution to a dataset and visually evaluate how closely it corresponds to the observed data:
# -
x = np.random.gamma(6, size=200)
sns.distplot(x, kde=False, fit=stats.gamma);
# + active=""
# Plotting bivariate distributions
# --------------------------------
#
# It can also be useful to visualize a bivariate distribution of two variables. The easiest way to do this in seaborn is to just use the :func:`jointplot` function, which creates a multi-panel figure that shows both the bivariate (or joint) relationship between two variables along with the univariate (or marginal) distribution of each on separate axes.
# -
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
# + active=""
# Scatterplots
# ^^^^^^^^^^^^
#
# The most familiar way to visualize a bivariate distribution is a scatterplot, where each observation is shown with a point at its *x* and *y* values. This is analogous to a rug plot in two dimensions. You can draw a scatterplot with the matplotlib ``plt.scatter`` function, and it is also the default kind of plot shown by the :func:`jointplot` function:
# -
sns.jointplot(x="x", y="y", data=df);
# + active=""
# Hexbin plots
# ^^^^^^^^^^^^
#
# The bivariate analogue of a histogram is known as a "hexbin" plot, because it shows the counts of observations that fall within hexagonal bins. This plot works best with relatively large datasets. It's available through the matplotlib ``plt.hexbin`` function and as a style in :func:`jointplot`. It looks best with a white background:
# -
x, y = np.random.multivariate_normal(mean, cov, 1000).T
with sns.axes_style("white"):
    sns.jointplot(x=x, y=y, kind="hex", color="k");
# + active=""
# Kernel density estimation
# ^^^^^^^^^^^^^^^^^^^^^^^^^
#
# It is also possible to use the kernel density estimation procedure described above to visualize a bivariate distribution. In seaborn, this kind of plot is shown with a contour plot and is available as a style in :func:`jointplot`:
# -
sns.jointplot(x="x", y="y", data=df, kind="kde");
# + active=""
# You can also draw a two-dimensional kernel density plot with the :func:`kdeplot` function. This allows you to draw this kind of plot onto a specific (and possibly already existing) matplotlib axes, whereas the :func:`jointplot` function manages its own figure:
# -
f, ax = plt.subplots(figsize=(6, 6))
sns.kdeplot(df.x, df.y, ax=ax)
sns.rugplot(df.x, color="g", ax=ax)
sns.rugplot(df.y, vertical=True, ax=ax);
# + active=""
# If you wish to show the bivariate density more continuously, you can simply increase the number of contour levels:
# -
f, ax = plt.subplots(figsize=(6, 6))
cmap = sns.cubehelix_palette(as_cmap=True, dark=0, light=1, reverse=True)
sns.kdeplot(df.x, df.y, cmap=cmap, n_levels=60, shade=True);
# + active=""
# The :func:`jointplot` function uses a :class:`JointGrid` to manage the figure. For more flexibility, you may want to draw your figure by using :class:`JointGrid` directly. :func:`jointplot` returns the :class:`JointGrid` object after plotting, which you can use to add more layers or to tweak other aspects of the visualization:
# -
g = sns.jointplot(x="x", y="y", data=df, kind="kde", color="m")
g.plot_joint(plt.scatter, c="w", s=30, linewidth=1, marker="+")
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels("$X$", "$Y$");
# + active=""
# Visualizing pairwise relationships in a dataset
# -----------------------------------------------
#
# To plot multiple pairwise bivariate distributions in a dataset, you can use the :func:`pairplot` function. This creates a matrix of axes and shows the relationship for each pair of columns in a DataFrame. By default, it also draws the univariate distribution of each variable on the diagonal Axes:
# -
iris = sns.load_dataset("iris")
sns.pairplot(iris);
# + active=""
# Specifying the ``hue`` parameter automatically changes the histograms to KDE plots to facilitate comparisons between multiple distributions.
# -
sns.pairplot(iris, hue="species");
# + active=""
# Much like the relationship between :func:`jointplot` and :class:`JointGrid`, the :func:`pairplot` function is built on top of a :class:`PairGrid` object, which can be used directly for more flexibility:
# -
g = sns.PairGrid(iris)
g.map_diag(sns.kdeplot)
g.map_offdiag(sns.kdeplot, n_levels=6);
# + active=""
# .. raw:: html
#
# </div>
# doc/tutorial/distributions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Dictionary Data Type
## A dictionary can be created using curly braces
dict_sq = {}
## An instance of the dict class can also be created by calling dict()
dict_cb = dict()
# + tags=[]
## The methods available on a dictionary object, as listed by dir(), are
print(dir(dict_sq))
# -
# Dictionary has a key value pair structure
dict_sq = {1:1,2:4,3:9,4:16,5:25}
dict_cb = {1:1,2:8,3:27,4:64,5:125}
lst_sq = [1,4,9,16,25,36]
lst_sq[2]
dict_sq[3]
# + tags=[]
## Retrieve values using keys
print(f'dict_sq[4] gives \n {dict_sq[4]}')
print(f'dict_cb[4] gives \n {dict_cb[4]}')
# -
## A dictionary can have values of any datatype
dict_num = {1:'num',2:'two',3:'three',4:'four',5:'five'}
dict_num[6] = 'six'
dict_num
# ## Dictionary Object Functions
# ### Clear
# + tags=[]
print(help(dict_num.clear))
# + tags=[]
dict_num.clear()
print(dict_num)
# -
# ### Copy
# + tags=[]
print(help(dict_cb.copy))
# + tags=[]
# Since copy() makes a shallow copy, modifying the copy at the first level of references will not affect the source dictionary object
dict_num = dict_cb.copy()
dict_num[4]='four'
print(f'dict_num:{dict_num}\n dict_cb:{dict_cb}')
# -
# ### Keys
dict_num['six'] = 36
dict_num
dict_num.keys(),dict_num.values(),dict_num.items()
# + tags=[]
print(help(dict_cb.keys))
# + tags=[]
## Get the keys of the dictionary object
## The keys() method returns a view of the dictionary's keys
print(f'dict_cb.keys() -->{ dict_cb.keys()}')
# -
# ### Values
# + tags=[]
print(help(dict_cb.values))
# -
## The values() method returns a view of the values belonging to the dictionary object
dict_sq.values()
# ### Items
# + tags=[]
print(help(dict_cb.items))
# -
## The items() method returns a view of the (key, value) tuples belonging to the dictionary object
dict_cb.items()
# ### Update
# + tags=[]
print(help(dict_num.update))
# -
# Using the update() method, the values of existing keys can be updated by referring to another dictionary having the same keys
# If the same keys are not present, then a new key-value pair is created
# dict_num had keys 1,2,3,4,5 which matched the keys of dict_cb, hence they were updated
dict_num.update(dict_cb)
dict_num
# ### Get
# + tags=[]
print(help(dict.get))
# -
dict_num
dict_num.get(3,0),dict_num.get(7,0)
# + tags=[]
## The get() method checks for the key: if it is present in the dictionary it returns the value, otherwise it returns a default value
# dict_num does not have 6 as a key, hence get returns 0, the second argument
print(f'dict_num.get(6,0) returns \n {dict_num.get(6,0)}')
# dict_num has 5 as its key hence it returns the value of 5
print(f'dict_num.get(5,0) returns \n {dict_num.get(5,0)}')
# -
# ### Pop
# + tags=[]
## Items (key-value pairs) can be removed (popped) from the dictionary using the pop() method with the key specified
## If the key is not present, a KeyError is raised
print(dict_num.pop(5))
print(dict_num)
# -
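# As a side note (an illustrative addition, not part of the original run): pop() also accepts a second argument, a default value that is returned instead of raising KeyError when the key is absent:

```python
d = {1: 1, 2: 8, 3: 27}
print(d.pop(3, 0))  # key present: removes the item and returns 27
print(d.pop(9, 0))  # key absent: returns the default 0 instead of raising KeyError
print(d)            # {1: 1, 2: 8}
```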
# ### Fromkeys
# + tags=[]
print(help(dict_cb.fromkeys))
# + tags=[]
# If a new dictionary object is to be created from the keys of another dictionary, we can use the fromkeys() method with a default value
dict_num = dict.fromkeys(dict_cb.keys(),' ')
print(f'dict_num: {dict_num}')
# -
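# One caveat worth illustrating (this example is added here and was not in the original notebook): fromkeys() stores the *same* default object for every key, so a mutable default such as a list ends up shared across all keys:

```python
shared = dict.fromkeys([1, 2, 3], [])
shared[1].append('x')
print(shared)  # {1: ['x'], 2: ['x'], 3: ['x']} -- every key holds the same list

# A dict comprehension creates a fresh list per key instead
separate = {k: [] for k in [1, 2, 3]}
separate[1].append('x')
print(separate)  # {1: ['x'], 2: [], 3: []}
```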
lst_1 = ['hello',1,2,3,'Python']
lst_1
dict_1 = dict(enumerate(lst_1))
dict_1[3]
# ## Exercises
# <html>
# <ol>
# <li> Write a program to create a database like structure for employees.</li>
# <ol>
# <li> Create a dictionary with fields 'Name','Job','Place of Work','Hobbies'</li>
# <li> Repeat step a and create 4 such dictionaries </li>
# <li> Include all these dictionaries into one more dictionary with key being the name and value being the dictionary where their information is stored</li>
# <li> Search for a name to give the details of a particular employee.</li>
# </ol>
# </ol>
# </html>
dict_person1 = {'Name':'Chetan','Job':'Programmer','Place Of Work':'TCS','Hobbies':['Cricket','Chess']}
dict_person2 = {'Name':'Harry','Job':'Wizard','Place Of Work':'Harvard','Hobbies':'Cricket'}
dict_person3 = {'Name':'Ron','Job':'Chess Player','Place Of Work':'Wellington','Hobbies':'Cricket'}
dict_person4 = {'Name':'Harmoine','Job':'Witch','Place Of Work':'Hogwarts','Hobbies':'Cricket'}
dict_person2
dict_people = dict()
dict_people= {'Chetan':dict_person1,'Harry':dict_person2,'Ron':dict_person3,'Harmoine':dict_person4}
dict_people['Harmoine']['Place Of Work']
lst_people = [dict_person1,dict_person2,dict_person3,dict_person4]
lst_people
sentence = 'AMC Engineering College located at Bangalore is one of the top Engineering College'
lst_words = sentence.split()
dict_count = dict()
for word in lst_words:
    dict_count[word] = dict_count.get(word, 0) + 1
dict_count
# Day_4/Dictionary_Datatype.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # __Square Matrix Multiply__
#
#
# ## How it works:
#
# It is simply an implementation of the mathematical formula:
#
# $$
# c_{ij} = \sum_{k=1}^n (a_{ik} * b_{kj})
# $$
#
# Two square matrices of the same size can always be multiplied because the numbers of rows and columns of both matrices are equal (for rectangular matrices you need to check whether the number of columns in the first matrix equals the number of rows in the second matrix; if it does, you can multiply those matrices).
#
# The algorithm walks along a row of the first matrix, multiplying each value with the corresponding value in a column of the second matrix. These products are summed up, and the sum is placed in the new matrix at the intersection of that row of the first matrix and that column of the second matrix (for rectangular matrices, the resulting matrix has as many rows as the first matrix and as many columns as the second matrix).
#
# \begin{pmatrix}
# a_{11} & a_{12} & a_{13} & ... & a_{1n} \\
# a_{21} & a_{22} & a_{23} & ... & a_{2n} \\
# ... \\
# a_{m1} & . & . & . & a_{mn}
# \end{pmatrix}
#
# If m = n, you have a square matrix.
#
# ## E.g.
#
# $$
# A =
# \begin{pmatrix}
# 1 & 2 \\
# 4 & 5 \\
# \end{pmatrix}
# B =
# \begin{pmatrix}
# 9 & 8 \\
# 6 & 5 \\
# \end{pmatrix}
# A*B = ?
# $$
#
# First, create a matrix C with dimensions n x n (n = 2) filled with zeros:
#
# $$
# C =
# \begin{pmatrix}
# 0 & 0 \\
# 0 & 0 \\
# \end{pmatrix}
# $$
#
# n = 2
#
# i = 0; j = 0; k = 0
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 0 + 1 * 9 = 9
# $$
#
# i = 0; j = 0; k = 1
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 9 + 2 * 6 = 21
# $$
#
# i = 0; j = 1; k = 0
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 0 + 1 * 8 = 8
# $$
#
# i = 0; j = 1; k = 1
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 8 + 2 * 5 = 18
# $$
#
# i = 1; j = 0; k = 0
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 0 + 4 * 9 = 36
# $$
#
# i = 1; j = 0; k = 1
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 36 + 5 * 6 = 66
# $$
#
# i = 1; j = 1; k = 0
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 0 + 4 * 8 = 32
# $$
#
# i = 1; j = 1; k = 1
# $$
# c_{ij} = c_{ij} + a_{ik} * b_{kj} = 32 + 5 * 5 = 57
# $$
#
# $$
# C =
# \begin{pmatrix}
# 21 & 18 \\
# 66 & 57 \\
# \end{pmatrix}
# $$
#
# ## Pseudocode:
#
#     SQUARE-MATRIX-MULTIPLY(A, B):
#         n = A.rows
#         let C be a new n x n matrix
#         for (i = 1 to n)
#             for (j = 1 to n)
#                 c(ij) = 0
#                 for (k = 1 to n)
#                     c(ij) = c(ij) + a(ik) * b(kj)
#
# ## Worst case running time T(n):
#
# $$
# T(n) = \Theta(n^3)
# $$
#
# ## Use cases:
#
# For a smaller number of elements, and when we need a simple algorithm and a fast implementation.
def squareMatrixMultiply(A, B):
    n = len(A)
    C = []
    for i in range(n):
        tempRow = [0] * n  # one output row, sized for any n (was hard-coded to 3 elements)
        for j in range(n):
            for k in range(n):
                tempRow[j] += A[i][k] * B[k][j]
        C.append(tempRow)
    return C
# +
matrix1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
matrix2 = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
resultMatrix = squareMatrixMultiply(matrix1, matrix2)
print(resultMatrix)
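# As a sanity check against the worked 2x2 example above, the same triple-loop algorithm can be re-implemented as a standalone sketch (the function name here is chosen for this snippet) and run on matrices A and B:

```python
def square_matrix_multiply(A, B):
    # same triple loop as above: C[i][j] = sum over k of A[i][k] * B[k][j]
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [4, 5]]
B = [[9, 8], [6, 5]]
print(square_matrix_multiply(A, B))  # [[21, 18], [66, 57]], matching the worked example
```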
# SquareMatrixMultiply/SquareMatrixMultiply.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="cEAicR6nbiGd" outputId="9be6369d-dc3c-4236-e05f-78db15f50398"
from google.colab import drive
drive.mount('/content/drive')
# + id="3-oxDioFb0S_"
import os
import numpy as np
import cv2
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
# from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import accuracy_score
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="WFMJzgPUb84x"
directory = '/content/drive/MyDrive/static-words'
# + id="xB-gX7nxfQQn"
dict_labels = {
'Bus':0,
'CalmDown':1,
'Car':2,
'Church':3,
'Family':4,
'Father':5,
'Fine':6,
'Hungry':7,
'IHateYou':8,
'Key':9,
'Love':10,
'Mother':11,
'Pray':12,
'okay':13
}
# + id="QsHlyjyJfGC9"
images = []
labels = []
for label in os.listdir(directory):
    label_path = os.path.join(directory, label)
    for img in os.listdir(label_path):
        image_path = os.path.join(label_path, img)
        img = load_img(image_path, target_size=(96, 96))
        img = img_to_array(img)
        img = preprocess_input(img)
        images.append(img)
        labels.append(dict_labels[label])
# + id="-cd1NKdQhC9Z"
images = np.array(images)
labels = np.array(labels)
# + colab={"base_uri": "https://localhost:8080/"} id="GM_FP7YqhzJR" outputId="a884e2f5-9530-464a-f5d8-ffa786fac89e"
print("Shape of data: {}".format(images.shape))
# + id="bSzfZ4sNhzLh"
labels = to_categorical(labels, num_classes = 14)
# + id="Ro0GH1qVhzOU"
images, labels = shuffle(images, labels)
# + id="lKOWPB6bhzTh"
# images = images / 255.0
# + id="YURC4eivhzV-"
# split the data into train and test data
# train-size: 80%
# test/val-size: 20%
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.2)
# + colab={"base_uri": "https://localhost:8080/"} id="cDnyj7KjhzYO" outputId="2995a544-8417-4dac-d16a-dd28480799bc"
print("Training images shape: {} || Training labels shape: {}".format(x_train.shape, y_train.shape))
print("Test images shape: {} || Test labels shape: {}".format(x_test.shape, y_test.shape))
# + id="UgbUf1QMhzag"
model = Sequential()
model.add(Conv2D(16, (3,3), activation='relu', input_shape=(96,96,3)))  # input shape must match the 96x96 images loaded above
model.add(Conv2D(16, (3,3), activation='relu'))
model.add(Conv2D(16, (3,3), activation='relu'))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(MaxPooling2D(2,2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(14, activation='softmax'))
# + colab={"base_uri": "https://localhost:8080/"} id="Oro8VF8xhzdE" outputId="a114c168-76c4-470d-81bb-4287fe9fe4c8"
model.summary()
# + id="k0lQlBoKhzft"
# compile the model
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
# + id="N_73k98ChziO"
# set callbacks
callback = EarlyStopping(
monitor='loss',
patience=3
)
# + id="PujNuqeBiU2p"
datagen = ImageDataGenerator(horizontal_flip=True)
# + colab={"base_uri": "https://localhost:8080/"} id="KEEdXMO5iZSF" outputId="dad7a9a9-d082-4ece-acc8-f7a5d6c762c1"
# fit the train data into the model
history = model.fit(
datagen.flow(x_train, y_train, batch_size=32),
epochs=20,
callbacks=[callback],
validation_data=(x_test, y_test)
)
# + colab={"base_uri": "https://localhost:8080/"} id="CYXLiF3ficTT" outputId="d47e7360-07cb-42b5-e08e-6a70e4985cbc"
len(history.history['loss'])
# + colab={"base_uri": "https://localhost:8080/"} id="ft-TcsMFicVy" outputId="2553c77e-b983-4272-b1ea-466faf839d71"
y_pred_custom = model.predict(x_test)
acc = accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_pred_custom, axis=1))
acc*100
# + id="a89ufgnaicZW"
# save the model
model.save('wordsNet-01.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="ELQFcLYFicbH" outputId="6b48631a-6b58-4786-a355-0889480776e2"
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title("Accuracy/Loss")
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(['train_accuracy', 'validation_accuracy', 'train_loss', 'validation_loss'])
plt.show()
# + [markdown] id="UK3QzDSXqNEI"
# # mobileNet
# + id="NXKqafF4jzvB"
mobnet = MobileNetV2(input_shape=(96,96,3), include_top=False, weights='imagenet', pooling='avg')
mobnet.trainable=False
# + id="C6ZUCBHLq0s7"
mobnet_inputs = mobnet.inputs
dense_layer = tf.keras.layers.Dense(256, activation='relu')(mobnet.output)
output_layer = tf.keras.layers.Dense(14, activation='softmax')(dense_layer)
mobnet_model01 = tf.keras.Model(inputs=mobnet_inputs, outputs=output_layer)
# + id="G4NaLamEsE1V"
# compile the model
mobnet_model01.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
# + colab={"base_uri": "https://localhost:8080/"} id="IVZgcAcMsNrA" outputId="32e5222b-7456-4552-ad0f-97a4917e7607"
# fit the train data into the model
history_mobnet = mobnet_model01.fit(
datagen.flow(x_train, y_train, batch_size=32),
epochs=20,
callbacks=[callback],
validation_data=(x_test, y_test)
)
# + id="cvRkoepMsWEq"
mobnet_model01.save('mobnet_model01.h5')
# notebooks/static-notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np
import pandas as pd
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
eye_train = pd.read_csv('D:/blindness-detection/train.csv')
df_train = eye_train.copy()
df_train.head()
# -
eye_test = pd.read_csv('D:/blindness-detection/test.csv')
df_test = eye_test.copy()
df_test.head()
df_train.shape, df_test.shape
df_train.isna().sum()
frequencies = df_train.diagnosis.value_counts()
frequencies
# +
import json
import math
import os
import cv2
from PIL import Image
import numpy as np
from keras import layers
from keras.applications import DenseNet121
from keras.callbacks import Callback, ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.optimizers import Adam
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score
import scipy
import tensorflow as tf
from tqdm import tqdm
# -
image = cv2.imread('D:/blindness-detection/train_images/cd54d022e37d.png')
plt.imshow(image)
def preprocess_image(image_path, desired_size=224):
    im = Image.open(image_path)
    im = im.resize((desired_size, ) * 2, resample=Image.LANCZOS)
    return im
# +
N = df_train.shape[0]
x_train = np.empty((N, 224, 224, 3), dtype=np.uint8)
for i, image_id in enumerate(tqdm(df_train['id_code'])):
    x_train[i, :, :, :] = preprocess_image(
        f'D:/blindness-detection/train_images/{image_id}.png'
    )
# -
plt.imshow(x_train[0])
# +
N = df_test.shape[0]
x_test = np.empty((N, 224, 224, 3), dtype=np.uint8)
for i, image_id in enumerate(tqdm(df_test['id_code'])):
    x_test[i, :, :, :] = preprocess_image(
        f'D:/blindness-detection/test_images/{image_id}.png'
    )
# -
y_train = pd.get_dummies(df_train['diagnosis']).values
x_train.shape, y_train.shape
x_train, x_val, y_train, y_val = train_test_split(
x_train, y_train,
test_size=0.15,
random_state=1111
)
x_train.shape, y_train.shape, x_val.shape, y_val.shape
# +
BATCH_SIZE = 32
def create_datagen():
    return ImageDataGenerator(
        zoom_range=0.15,  # set range for random zoom
        # set mode for filling points outside the input boundaries
        fill_mode='constant',
        cval=0.,  # value used for fill_mode = "constant"
        horizontal_flip=True,  # randomly flip images
        vertical_flip=True,  # randomly flip images
    )
# Using original generator
data_generator = create_datagen().flow(x_train, y_train, batch_size=BATCH_SIZE, seed=1111)
# -
class Metrics(Callback):
    def on_train_begin(self, logs={}):
        self.val_kappas = []

    def on_epoch_end(self, epoch, logs={}):
        X_val, y_val = self.validation_data[:2]
        y_val = y_val.sum(axis=1) - 1
        y_pred = self.model.predict(X_val) > 0.5
        y_pred = y_pred.astype(int).sum(axis=1) - 1
        _val_kappa = cohen_kappa_score(
            y_val,
            y_pred,
            weights='quadratic'
        )
        self.val_kappas.append(_val_kappa)
        print(f"val_kappa: {_val_kappa:.4f}")
        if _val_kappa == max(self.val_kappas):
            print("Validation Kappa has improved. Saving model.")
            self.model.save('model.h5')
        return
densenet = DenseNet121(
weights='imagenet',
include_top=False,
input_shape=(224,224,3)
)
def build_model():
    model = Sequential()
    model.add(densenet)
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(5, activation='sigmoid'))
    model.compile(
        loss='binary_crossentropy',
        optimizer=Adam(lr=0.00005),
        metrics=['accuracy']
    )
    return model
model = build_model()
model.summary()
# +
kappa_metrics = Metrics()
history = model.fit_generator(
data_generator,
steps_per_epoch=x_train.shape[0] / BATCH_SIZE,
epochs=15,
validation_data=(x_val, y_val),
callbacks=[kappa_metrics]
)
# DR_Detection_model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from torch_bert_classify import tmv_torch_bert_classify
# +
import pandas as pd
number_data_set = 4
csv_dump = True
epochs = 1
dependent_var = r'Definition-Score'
task_word = r'Definition'
number_class = 3
top_to = 200
bertd = tmv_torch_bert_classify(r'../data/', r'../train/pytorch_advanced/nlp_sentiment_bert/')
bertd.restore_model(r'TORCH_BERT-Def-PRE-All')
bertd.load_data('Head4-Serialized-Def-ELVA.PILOT.POST-TEST.csv', dependent_var, [0, 1], task_word)
bertd.perform_prediction(bertd.df_ac_modeling_values, number_class)
bertd.evaluate_prediction(r'TORCH_BERT-Def-PRE-All')
# -
# test/Test_Run6.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Generate some validation videos at random, download them from the server, and then use them to visualize the results.
# +
import random
import os
import numpy as np
from work.dataset.activitynet import ActivityNetDataset
dataset = ActivityNetDataset(
videos_path='../dataset/videos.json',
labels_path='../dataset/labels.txt'
)
videos = dataset.get_subset_videos('validation')
videos = random.sample(videos, 8)
examples = []
for v in videos:
    file_dir = os.path.join('../downloads/features/', v.features_file_name)
    if not os.path.isfile(file_dir):
        os.system('scp imatge:~/work/datasets/ActivityNet/v1.3/features/{} ../downloads/features/'.format(v.features_file_name))
    features = np.load(file_dir)
    examples.append((v, features))
# -
# Load the trained model with its weights
# +
from keras.layers import Input, BatchNormalization, LSTM, TimeDistributed, Dense, merge
from keras.models import Model
input_features_a = Input(batch_shape=(1, 1, 4096,), name='features')
input_normalized_a = BatchNormalization(mode=1)(input_features_a)
lstm1_a = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(input_normalized_a)
lstm2_a = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1_a)
output_a = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2_a)
model_def = Model(input=input_features_a, output=output_a)
model_def.load_weights('../work/scripts/training/lstm_activity_classification/model_snapshot/lstm_activity_classification_02_e100.hdf5')
model_def.summary()
model_def.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# +
input_features = Input(batch_shape=(1, 1, 4096,), name='features')
input_normalized = BatchNormalization()(input_features)
previous_output = Input(batch_shape=(1, 1, 202,), name='prev_output')
merging = merge([input_normalized, previous_output], mode='concat', concat_axis=-1)
lstm1 = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(merging)
lstm2 = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1)
output = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2)
model_feed = Model(input=[input_features, previous_output], output=output)
model_feed.load_weights('../work/scripts/training/lstm_activity_classification_feedback/model_snapshot/lstm_activity_classification_feedback_02_e100.hdf5')
model_feed.summary()
model_feed.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# -
# Extract the predictions for each video and print the scoring
predictions_def = []
for v, features in examples:
    nb_instances = features.shape[0]
    X = features.reshape((nb_instances, 1, 4096))
    model_def.reset_states()
    prediction = model_def.predict(X, batch_size=1)
    prediction = prediction.reshape(nb_instances, 201)
    class_prediction = np.argmax(prediction, axis=1)
    predictions_def.append((v, prediction, class_prediction))
predictions_feed = []
for v, features in examples:
    nb_instances = features.shape[0]
    X = features.reshape((nb_instances, 1, 4096))
    prediction = np.zeros((nb_instances, 201))
    X_prev_output = np.zeros((1, 202))
    X_prev_output[0, 201] = 1
    model_feed.reset_states()
    for i in range(nb_instances):
        X_features = X[i, :, :].reshape(1, 1, 4096)
        X_prev_output = X_prev_output.reshape(1, 1, 202)
        next_output = model_feed.predict_on_batch(
            {'features': X_features,
             'prev_output': X_prev_output}
        )
        prediction[i, :] = next_output[0, :]
        X_prev_output = np.zeros((1, 202))
        X_prev_output[0, :201] = next_output[0, :]
    class_prediction = np.argmax(prediction, axis=1)
    predictions_feed.append((v, prediction, class_prediction))
# Print the global classification results
# +
from IPython.display import YouTubeVideo, display
for prediction_def, prediction_feed in zip(predictions_def, predictions_feed):
v, prediction_d, class_prediction_d = prediction_def
_, prediction_f, class_prediction_f = prediction_feed
print('Video ID: {}\t\tMain Activity: {}'.format(v.video_id, v.get_activity()))
labels = ('Default Model', 'Model with Feedback')
for prediction, label in zip((prediction_d, prediction_f), labels):
print(label)
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
vid = YouTubeVideo(v.video_id)
display(vid)
print('\n')
# -
# Now show the temporal prediction for the activity happening in the video.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
import matplotlib
normalize = matplotlib.colors.Normalize(vmin=0, vmax=201)
for prediction_d, prediction_f in zip(predictions_def, predictions_feed):
v, _, class_prediction_d = prediction_d
_, _, class_prediction_f = prediction_f
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Ground Truth')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(class_prediction_d, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction Default Model')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(class_prediction_f, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction Model with Feedback')
plt.show()
print('\n')
# +
normalize = matplotlib.colors.Normalize(vmin=0, vmax=1)
for prediction_def, prediction_feed in zip(predictions_def, predictions_feed):
v, prediction_d, class_prediction_d = prediction_def
_, prediction_f, class_prediction_f = prediction_feed
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
output_index = dataset.get_output_index(v.label)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
labels = ('Default Model', 'Model with Feedback')
for prediction, label in zip((prediction_d, prediction_f), labels):
print(label)
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth/output_index, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Ground Truth')
plt.show()
# print only the positions that predicted the global ground truth category
temp_d = np.zeros((nb_instances))
temp_d[class_prediction_d==output_index] = 1
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(temp_d, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction of the ground truth class (Default model)')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(prediction_d[:,output_index], (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Probability for ground truth (Default model)')
plt.show()
# print only the positions that predicted the global ground truth category
temp_f = np.zeros((nb_instances))
temp_f[class_prediction_f==output_index] = 1
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(temp_f, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction of the ground truth class (Feedback model)')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(prediction_f[:,output_index], (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Probability for ground truth (Feedback model)')
plt.show()
print('\n')
# -
| notebooks/18 Visualization of Results Comparison.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:econml-dowhy-py38]
# language: python
# name: conda-env-econml-dowhy-py38-py
# ---
# +
import os
os.environ['CASTLE_BACKEND'] = 'pytorch'
from tqdm import tqdm
from castle.common import GraphDAG
from castle.metrics import MetricsDAG
from castle.datasets import IIDSimulation, DAG
from castle.algorithms import PC, RL, CORL, DirectLiNGAM, ICALiNGAM, Notears, GOLEM, NotearsNonlinear
# -
# Check if GPU is available
import torch
torch.cuda.get_device_name(0)
# ?IIDSimulation
# +
# Data simulation, simulate true causal dag and train_data.
weighted_random_dag = DAG.erdos_renyi(n_nodes=10, n_edges=10,
weight_range=(-5., 5.), seed=1)
dataset = IIDSimulation(W=weighted_random_dag, n=2000, method='linear',
sem_type='gauss')
true_causal_matrix, X = dataset.B, dataset.X
# -
GraphDAG(true_causal_matrix)
# +
# Structure learning
methods = {
'pc': PC(),
'direct_lingam': DirectLiNGAM(),
'ica_lingam': ICALiNGAM(),
'notears': Notears(),
'golem': GOLEM(device_type='gpu'),
'notears_non_lin': NotearsNonlinear(device_type='gpu'),
'corl': CORL(
encoder_name='transformer',
decoder_name='lstm',
reward_mode='episodic',
reward_regression_type='LR',
batch_size=64,
input_dim=64,
embed_dim=64,
iteration=2000,
device_type='gpu')
}
for method in methods:
print(f'\nLearning {method}...\n')
methods[method].learn(X)
# -
for method in methods:
print(f'\nResults for {method}\n')
GraphDAG(methods[method].causal_matrix, true_causal_matrix, 'result')
mt = MetricsDAG(methods[method].causal_matrix, true_causal_matrix)
print(mt.metrics)
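# The metrics printed above compare each learned adjacency matrix against the true one. As a minimal, self-contained illustration of one such metric, here is a pure-NumPy sketch of the structural Hamming distance (SHD) between two binary adjacency matrices; the helper name `shd` is ours, not part of gCastle, and the tiny example graphs are purely illustrative.

```python
import numpy as np

def shd(est, true):
    """Structural Hamming distance: the number of edge additions,
    deletions, and reversals needed to turn `est` into `true`."""
    diff = np.abs(est - true)
    # A reversed edge differs in both (i, j) and (j, i); merging the
    # difference with its transpose lets us count it only once.
    diff = np.clip(diff + diff.T, 0, 1)
    return int(np.triu(diff).sum())

# True DAG: 0 -> 1 -> 2
true_dag = np.array([[0, 1, 0],
                     [0, 0, 1],
                     [0, 0, 0]])
# Estimate: edge 0 -> 1 learned reversed, edge 1 -> 2 missed
est_dag = np.array([[0, 0, 0],
                    [1, 0, 0],
                    [0, 0, 0]])

print(shd(est_dag, true_dag))  # 2: one reversal + one missing edge
```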
| 04 - Causal discovery with gCastle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Prepared by: <NAME>
# # Learning Goals
#
# By the end of this class students will:
#
# 1. know what logistic regression is.
#
# 2. know why logistic regression is still so popular.
#
# 3. understand the use cases/applications of logistic regression.
#
# 4. be able to implement logistic regression in Python from scratch, as well as with scikit-learn and Statsmodels.
#
# 5. know its limitations.
# ## Keywords
#
# Supervised learning
#
# Classification
#
# Sigmoid function
#
# Gradient Descent
#
# Loss function/Cost function
# # Sections:
#
# This notebook is divided into 5 sections which cover the main ideas behind logistic regression.
#
# 1. What is logistic regression?
#
# 2. Why is logistic regression still so popular?
#
# 3. Applications of logistic regression
#
# 4. Implement logistic regression in Python from scratch
#
# 5. Limitations of logistic regression
# ## 1. What is logistic regression?
#
# Logistic regression is a supervised learning method to model the relationship between a categorical outcome (the dependent variable) and one or more independent variables. The independent (explanatory) variables can be discrete and/or continuous.
#
# The logistic function was invented in the 19th century to describe the growth of human populations and the course of autocatalytic chemical reactions.
#
#
# Logistic regression is not a typical linear regression model. It belongs to the family of generalized linear models (GLMs). Some things to notice about logistic regression:
#
# 1. Logistic regression does not assume a linear relationship between the dependent and independent variables. It's linear in parameters.
#
# 2. The dependent variable must be categorical.
#
# 3. The independent variables need not be normally distributed, nor linearly related, nor of equal variance within each group. Lastly, the categories (groups) must be mutually exclusive and exhaustive.
#
# 4. The logistic regression has the power to accommodate both categorical and continuous independent variables.
#
# 5. That said, the power of the analysis is increased if the independent variables are normally distributed and do have a linear relationship with the dependent variable.
#
# ## 2. Why is logistic regression still so popular?
#
#
# * Logistic regression is easy to implement and interpret, and very efficient to train.
#
# * It makes no assumptions about distributions of classes in feature space.
#
# * It extends easily to multiple classes (multinomial regression) and offers a natural probabilistic view of class predictions.
#
#
# * It not only provides a measure of how relevant a predictor is (the coefficient size), but also its direction of association (positive or negative).
#
# * It is very fast at classifying unknown records.
#
# * Good accuracy for many simple data sets and it performs well when the dataset is linearly separable.
#
# * It can interpret model coefficients as indicators of feature importance.
#
# * Logistic regression is less inclined to over-fitting, but it can overfit in high-dimensional datasets. One may consider regularization (L1 and L2) techniques to avoid over-fitting in these scenarios.
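# The coefficient-interpretation point above can be made concrete: exponentiating a fitted coefficient gives the odds ratio, i.e. the multiplicative change in the odds of the positive class for a one-unit increase in that feature. A small sketch (the coefficient values and feature names here are purely illustrative, not from a real fitted model):

```python
import numpy as np

# Illustrative fitted coefficients for two hypothetical features:
# hours_studied and classes_missed.
coef = np.array([0.9, -0.4])

# exp(coef) gives the odds ratio for each feature.
odds_ratios = np.exp(coef)
print(odds_ratios)  # ~[2.46, 0.67]

# One extra hour studied multiplies the odds of passing by ~2.46;
# one extra missed class multiplies them by ~0.67, i.e. lowers them.
```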
# ## 3. Applications of logistic regression
#
# Logistic regression has many applications in a wide variety of fields. The types of problems it deals with are of the following nature:
#
# Binary (Pass/Fail)
#
# Multi (Cats, Dogs, Sheep)
#
# Ordinal (Low, Medium, High)
#
#
# Some of the use cases in different business settings are:
#
#
# **Fraud detection**: Detection of credit card frauds or banking fraud is the objective of this use case.
#
#
# **Email spam or ham**: Classifying the email as spam or ham and putting it in either Inbox or Spam folder is the objective of this use case.
#
#
# **Sentiment Analysis**: Analyzing the sentiment using the review or tweets is the objective of this use case. Most of the brands and companies use this to increase customer experience.
#
#
# **Image segmentation, recognition and classification**: The objective of all these use cases is to identify the object in the image and classify it.
#
#
# **Object detection**: This use case is to detect objects and classify them, not in an image but in a video.
#
#
# **Handwriting recognition**: Recognizing the letters written is the objective of this use case.
#
#
# **Disease (diabetes, cancer etc.) prediction**: Predicting whether the patient has disease or not is the objective of this use case.
# ## Quiz
#
# **1. True-False: Is Logistic regression a supervised machine learning algorithm?**
#
# A. TRUE
#
# B. FALSE
#
#
# **2. True-False: Is Logistic regression mainly used for Regression?**
#
# A. TRUE
#
# B. FALSE
#
# **3. Logistic regression assumes a:**
#
#
# A. Linear relationship between continuous predictor variables and the outcome variable.
#
# B. Linear relationship between continuous predictor variables and the logit of the outcome variable.
#
# C. Linear relationship between continuous predictor variables.
#
# D. Linear relationship between observations.
#
#
# **4. True-False: Is it possible to apply a logistic regression algorithm on a 3-class Classification problem?**
#
# A. TRUE
#
# B. FALSE
#
#
# **5. Logistic regression is used when you want to:**
#
#
# A. Predict a dichotomous variable from continuous or dichotomous variables.
#
# B. Predict a continuous variable from dichotomous variables.
#
# C. Predict any categorical variable from several other categorical variables.
#
# D. Predict a continuous variable from dichotomous or continuous variables.
#
#
# **6. In binary logistic regression:**
#
# A. The dependent variable is continuous.
#
# B. The dependent variable is divided into two equal subcategories.
#
# C. The dependent variable consists of two categories.
#
# D. There is no dependent variable.
#
# # 4. Implement logistic regression in Python from scratch
#
# Here, we will be implementing a basic logistic regression model to improve our understanding of how it works.
#
#
# To accomplish that, we will be using a famous dataset called 'iris'. The description and the data can be found here: https://archive.ics.uci.edu/ml/datasets/iris. Basically, the dataset contains information (features) about three different types of flowers. We will be focusing on accurately classifying the correct flowers.
#
# We will take only the first two features into account, and two classes that are not linearly separable from each other are merged into one, leaving a binary classification problem.
# ## Loading the data and importing the libraries
# Let's import the libraries we will need
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn import datasets
# Although we will load a preprocessed iris dataset directly from sklearn, note that the data can also be loaded like this to have a first look at it
url = "https://gist.githubusercontent.com/curran/a08a1080b88344b0c8a7/raw/0e7a9b0a5d22642a06d3d5b9bcbad9890c8ee534/iris.csv"
df = pd.read_csv(url)
df.head()
# Let's load the data from sklearn and encode the target variable 0 and 1
# +
from sklearn.datasets import load_iris
data = load_iris()
# Load digits dataset
iris = sklearn.datasets.load_iris()
# Create feature matrix
X = iris.data[:, :2]
# Create target vector
y = (iris.target != 0) * 1 # so that 2 converts into 1
y
# y1 = iris.target #this will have category 2 as well that's why we use the above method
# y1
# -
print(iris.feature_names) #column names
print(iris.target_names) # target varieties
# We will plot the 0 and 1 classes to see how it looks
plt.figure(figsize=(10, 6))
plt.scatter(X[y == 0][:, 0], X[y == 0][:, 1], color='b', label='0')
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='r', label='1')
plt.legend();
# ## Algorithm
#
# To reiterate: given a set of inputs X, our goal is to correctly identify the class (0 or 1). We will use a logistic regression model to predict the probability that each flower belongs to each category.
# ### Hypothesis
#
# A hypothesis function takes inputs and returns the corresponding outputs. Logistic regression uses a function that generates probabilities, giving outputs between 0 and 1 for all values of X. The function we will use to achieve that is called the sigmoid function.
#
# +
# Import matplotlib, numpy and math
import matplotlib.pyplot as plt
import numpy as np
import math
plt.figure(figsize=(10, 6))
x = np.linspace(-10, 10, 100)
z = 1/(1 + np.exp(-x))
plt.plot(x, z)
plt.xlabel("x")
plt.ylabel("Sigmoid(X)")
plt.show()
# -
# ### The Sigmoid function:
#
#
# \begin{equation}
# \begin{array}{l}
# h_{\theta}(x)=g\left(\theta^{T} x\right) \\
# z=\theta^{T} x \\
# g(z)=\frac{1}{1+e^{-z}}
# \end{array}
# \end{equation}
#
# ### Explanation
#
# The binary dependent variable has the values of 0 and 1 and the predicted value (probability) must be bounded to fall within the same range. To define a relationship bounded by 0 and 1, the logistic regression uses the logistic curve to represent the relationship between the independent and dependent variable. At very low levels of the independent variable, the probability approaches 0, but never reaches 0. Likewise, if the independent variable increases, the predicted values increase up the curve and approach 1 but never equal to 1.
#
#
# The logistic transformation ensures that estimated values do not fall outside the range of 0 and 1. This is achieved in two steps, firstly the probability is re-stated as odds which is defined as the ratio of the probability of the event occurring to the probability of it not occurring. For example, if a horse has a probability of 0.8 of winning a race, the odds of it winning are 0.8/(1 − 0.8) = 4:1. To constrain the predicted values to within 0 and 1, the odds value can be converted back into a probability; thus,
#
# $$
# \text { Probability (event) }=\frac{\text { odds(event) }}{1+\text { odds(event) }}
# $$
#
#
# It can therefore be shown that the corresponding probability is 4/(1 + 4) = 0.8. Also, to keep the odds values from going below 0, which is the lower limit (there is no upper limit), the logit value, calculated by taking the logarithm of the odds, must be computed. Odds less than 1 have a negative logit value, odds greater than 1.0 have positive logit values, and odds of 1.0 (corresponding to a probability of 0.5) have a logit value of 0.
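# The horse-racing numbers above can be checked directly; a quick sketch of the probability/odds/logit conversions (these helper names are ours, for illustration):

```python
import numpy as np

def to_odds(p):
    # odds = probability of event / probability of no event
    return p / (1 - p)

def to_prob(odds):
    # invert the odds back into a probability
    return odds / (1 + odds)

def logit(p):
    # the logit is the log of the odds
    return np.log(to_odds(p))

print(to_odds(0.8))   # ~4.0, i.e. odds of 4:1
print(to_prob(4.0))   # back to 0.8
print(logit(0.5))     # 0.0 -- odds of 1.0 give a logit of 0
print(logit(0.2) < 0, logit(0.8) > 0)  # negative below p=0.5, positive above
```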
# Let's create a function in Python:
# +
# sigmoid function
def sigmoid(z):
return 1 / (1 + np.exp(-z))
theta = np.zeros(X.shape[1])
z = np.dot(X, theta)
h = sigmoid(z)
# -
# ### Loss Function
#
# Functions (remember linear regression) have weights, also called parameters, which we represent by theta in our case. The goal is to find the best values for them, the ones that make the most accurate predictions.
#
# We start by picking random values. Now we need a way to measure how well the algorithm performs using the weights we chose. We compute this with the following loss function:
#
#
# $$
# \begin{array}{l}
# h=g(X \theta) \\
# J(\theta)=\frac{1}{m} \cdot\left(-y^{T} \log (h)-(1-y)^{T} \log (1-h)\right)
# \end{array}
# $$
# +
# Let's code our loss function
def loss(h, y):
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
# -
# ### Gradient Descent
#
# Now that we have our loss function, our goal is to minimize it by adjusting the weights, i.e. fitting them. One obvious question is: how do we know which weights should be increased or decreased, and by how much?
#
# To achieve this, we calculate the partial derivatives of the loss function with respect to each weight. It will tell us how loss changes if we modify the parameters.
#
# $$
# \frac{\delta J(\theta)}{\delta \theta_{j}}=\frac{1}{m} X^{T}(g(X \theta)-y)
# $$
gradient = np.dot(X.T, (h - y)) / y.shape[0]
# To update the weights, we subtract from them the derivative times the learning rate, and we repeat this several times until we reach the optimal solution.
lr = 0.01
theta -= lr * gradient
# ### Predictions
#
# To make the predictions, let's set our threshold (which can change depending on the business problem) to 0.5. If the predicted probability is >= 0.5 then the instance is assigned class 1; otherwise it belongs to class 0.
# +
def predict_probs(X, theta):
return sigmoid(np.dot(X, theta))
def predict(X, theta, threshold=0.5):
return predict_probs(X, theta) >= threshold
# -
# # Putting the code together
#
# Now that we have already learned Logistic regression step by step, it makes sense to put it all together in one single code block. It is important for reproducibility reasons.
#
# Here, we are creating a class because all the helper functions belong together; it's a neater way to consolidate your code.
# create a class
class LogisticRegression:
def __init__(self, lr=0.01, num_iter=100000, fit_intercept=True, verbose=False):
self.lr = lr
self.num_iter = num_iter
self.fit_intercept = fit_intercept
self.verbose = verbose
# add intercept or the bias term
def __add_intercept(self, X):
intercept = np.ones((X.shape[0], 1))
return np.concatenate((intercept, X), axis=1)
# the sigmoid function that predicts outputs in terms of probabilities
def __sigmoid(self, z):
return 1 / (1 + np.exp(-z))
#the loss function
def __loss(self, h, y):
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
# fitting the model
def fit(self, X, y):
if self.fit_intercept:
X = self.__add_intercept(X)
# weights initialization
self.theta = np.zeros(X.shape[1])
for i in range(self.num_iter):
z = np.dot(X, self.theta)
h = self.__sigmoid(z)
gradient = np.dot(X.T, (h - y)) / y.size
self.theta -= self.lr * gradient
z = np.dot(X, self.theta)
h = self.__sigmoid(z)
loss = self.__loss(h, y)
if(self.verbose ==True and i % 10000 == 0):
print(f'loss: {loss} \t')
# predicting the probabilities for instances
def predict_prob(self, X):
if self.fit_intercept:
X = self.__add_intercept(X)
return self.__sigmoid(np.dot(X, self.theta))
# output the predicted classes (rounded probabilities)
def predict(self, X):
return self.predict_prob(X).round()
# ### Evaluation
model = LogisticRegression(lr=0.1, num_iter=300000)
# %time model.fit(X, y)
# +
preds = model.predict(X)
# accuracy
(preds == y).mean()
# -
# ### Resulting Weights
model.theta
plt.figure(figsize=(10, 6))
plt.scatter(X[y == 0][:, 0], X[y == 0][:, 1], color='b', label='0')
plt.scatter(X[y == 1][:, 0], X[y == 1][:, 1], color='r', label='1')
plt.legend()
x1_min, x1_max = X[:,0].min(), X[:,0].max(),
x2_min, x2_max = X[:,1].min(), X[:,1].max(),
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
grid = np.c_[xx1.ravel(), xx2.ravel()]
probs = model.predict_prob(grid).reshape(xx1.shape)
plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors='black');
# ## Logistic Regression Using SKLearn
# ## Import the relevant library
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
model = LogisticRegression(C=1e20)
from sklearn.datasets import load_iris
data = load_iris()
iris = sklearn.datasets.load_iris()
X = iris.data[:, :2]
y = (iris.target != 0) * 1
model.fit(X, y)
model.score(X,y)
preds = model.predict(X)
# (preds == y).mean()
print(f'Train Accuracy: {accuracy_score(y, preds)}')
model.intercept_, model.coef_
# # Logistic Regression using Statsmodels
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
# +
# This is like using np.ones to add a vector of ones
X = sm.add_constant(X)
# Robust standard errors essentially correct for heteroskedasticity in our data;
# cov_type="HC3" computes robust standard errors
model = sm.Logit(y, X).fit()
predictions = model.predict(X)
print_model = model.summary()
print(print_model)
# -
# # Your Turn
# 1. Load the data from this url
#
# 2. Take three numerical features to predict whether a passenger survived or not by implementing logistic regression in sklearn and Statsmodels.
#
# 3. Choose X and y.
#
# 4. Run the model and make predictions.
#
# 5. Explain your results
#
# 6. Give at least two examples of real world problems where you can use logistic regression.
# ### part 1: Load the data
# +
#Train and test datasets
url_train = "https://raw.githubusercontent.com/agconti/kaggle-titanic/master/data/train.csv"
url_test = "https://raw.githubusercontent.com/agconti/kaggle-titanic/master/data/test.csv"
# +
# Be mindful of missing values, you might want to drop them
# +
# check the first few rows of the dataframes
# -
# ### part 2: Take three numerical features
# +
# choose three numerical variables to train the model
# -
# ### part 3. Choose X and y
# +
# X matrix and y
# -
# ### part 4. Run the model and make predictions
# +
# run the model
# +
# make the predictions
# -
# ### part 5. Interpret your results
# How would you interpret the coefficients?
# +
# which features are negatively related to Y?
# +
# which features are positively related to Y?
# -
# ### 6. Give at least two examples of real world problems where you can use logistic regression.
# +
# write down the examples where you would use logistic regression to solve the problem.
# -
#
#
# ## 5. Limitations of logistic regression
#
# * Overfitting: If the number of observations is less than the number of features, logistic regression should not be used; otherwise, it may lead to overfitting.
#
# * The major limitation of logistic regression is the assumption of linearity between the log-odds of the dependent variable and the independent variables.
#
# * It can only be used to predict discrete functions. Hence, the dependent variable of Logistic Regression is bound to the discrete number set.
#
# * Non-linear problems can’t be solved with logistic regression because it has a linear decision surface. Linearly separable data is rarely found in real-world scenarios.
#
# * Logistic regression requires little or no multicollinearity between independent variables.
#
# * It is tough to obtain complex relationships using logistic regression. More powerful and compact algorithms such as Neural Networks can easily outperform this algorithm.
#
# * In linear regression, independent and dependent variables are related linearly. Logistic regression, by contrast, requires that the independent variables be linearly related to the log odds, log(p/(1-p)).
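# The non-linearity limitation is easy to demonstrate. In this self-contained sketch (pure NumPy, illustrative data), plain logistic regression cannot learn the XOR pattern because no linear decision boundary separates it, while adding an interaction feature x1*x2 makes the classes separable:

```python
import numpy as np

def logreg_predict(X, y, lr=0.5, iters=20000):
    """Fit a minimal batch-gradient-descent logistic regression
    (with bias) and return its predictions on the training data."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend a bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        h = 1 / (1 + np.exp(-Xb @ theta))   # sigmoid
        theta -= lr * Xb.T @ (h - y) / len(y)
    return (1 / (1 + np.exp(-Xb @ theta)) >= 0.5).astype(int)

# XOR: not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

acc_linear = (logreg_predict(X, y) == y).mean()

# Adding the interaction feature x1*x2 makes the classes separable
X_aug = np.c_[X, X[:, 0] * X[:, 1]]
acc_aug = (logreg_predict(X_aug, y) == y).mean()

print(acc_linear, acc_aug)  # the linear model is stuck at 0.5; the augmented one reaches 1.0
```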
# # Some questions to think about!
#
#
# * What is overfitting? How can we solve this problem?
#
# * What does the intercept/constant term represent?
#
# * How would you interpret the coefficients?
#
# * What if the classes are imbalanced (the binary classes are not equally represented)?
#
# * What accuracy metrics do we use in classification problems?
#
# * How can we use logistic regression for multiclass problems?
#
# ## References
#
# https://en.wikipedia.org/wiki/Sigmoid_function
#
# https://www.scirp.org/journal/paperinformation.aspx?paperid=95655
#
# https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.htm
#
# https://www.quora.com/What-are-applications-of-linear-and-logistic-regression
#
# https://papers.tinbergen.nl/02119.pdf
#
# https://online.stat.psu.edu/stat504/node/149/.
#
# http://www.stat.cmu.edu/~ryantibs/advmethods/notes/glm.pdf
#
# https://www.geeksforgeeks.org/advantages-and-disadvantages-of-logistic-regression/
# ############################################ The End ########################################################
| Logistic_Regression_Study_Lesson.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 3. TrainPredict_Feature
#
# ## Result:
# - Kaggle score:
#
# ## Tensorboard
# - Input at command: tensorboard --logdir=./log
# - Input at browser: http://127.0.0.1:6006
#
# ## Reference
# - https://www.kaggle.com/codename007/a-very-extensive-landmark-exploratory-analysis
# ## Run name
# +
import time
import os
import pandas as pd
project_name = 'Google_LandMark_Rec'
step_name = 'TrainPredict_Feature'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
t0 = time.time()
# -
# ## Important params
batch_size = 128
image_size = 299
# ## Project folder
# +
cwd = os.getcwd()
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
feature_folder = os.path.join(cwd, 'feature')
post_pca_feature_folder = os.path.join(cwd, 'post_pca_feature')
log_folder = os.path.join(cwd, 'log')
print('input_folder: \t\t\t', input_folder)
print('output_folder: \t\t\t', output_folder)
print('model_folder: \t\t\t', model_folder)
print('feature_folder: \t\t', feature_folder)
print('post_pca_feature_folder: \t', post_pca_feature_folder)
print('log_folder: \t\t\t', log_folder)
org_train_folder = os.path.join(input_folder, 'org_train')
org_test_folder = os.path.join(input_folder, 'org_test')
train_folder = os.path.join(input_folder, 'data_train')
val_folder = os.path.join(input_folder, 'data_val')
test_folder = os.path.join(input_folder, 'data_test')
test_sub_folder = os.path.join(test_folder, 'test')
print('\norg_train_folder: \t\t', org_train_folder)
print('org_test_folder: \t\t', org_test_folder)
print('train_folder: \t\t\t', train_folder)
print('val_folder: \t\t\t', val_folder)
print('test_folder: \t\t\t', test_folder)
print('test_sub_folder: \t\t', test_sub_folder)
if not os.path.exists(post_pca_feature_folder):
os.mkdir(post_pca_feature_folder)
print('Create folder: %s' % post_pca_feature_folder)
# -
train_csv_file = os.path.join(input_folder, 'train.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
sample_submission_folder = os.path.join(input_folder, 'sample_submission.csv')
# ## Get the set of landmark_id values
# +
train_csv = pd.read_csv(train_csv_file)
print('train_csv.shape is {0}.'.format(train_csv.shape))
display(train_csv.head(10))
test_csv = pd.read_csv(test_csv_file)
print('test_csv.shape is {0}.'.format(test_csv.shape))
display(test_csv.head(10))
# -
train_id = train_csv['id']
train_landmark_id = train_csv['landmark_id']
unique_landmark_ids = list(set(train_landmark_id))
len_unique_landmark_ids = len(unique_landmark_ids)
print(unique_landmark_ids[:10]) # confirm that landmark_id starts from 0
print('len(unique_landmark_ids)=%d' % len_unique_landmark_ids)
# ## Preview sample_submission.csv
sample_submission_csv = pd.read_csv(sample_submission_folder)
print('sample_submission_csv.shape is {0}.'.format(sample_submission_csv.shape))
display(sample_submission_csv.head(2))
# ## Load features
# +
# %%time
import h5py
import numpy as np
from sklearn.utils import shuffle
np.random.seed(2018)
x_train = []
y_train = {}
x_val = []
y_val = {}
x_test = []
time_str = '300_20180421-054322'
# feature_cgg16 = os.path.join(cwd, 'feature', 'feature_VGG16_{}.h5'.format(20180219))
# feature_cgg19 = os.path.join(cwd, 'feature', 'feature_VGG19_{}.h5'.format(20180219))
# feature_resnet50 = os.path.join(cwd, 'feature', 'feature_ResNet50_{}.h5'.format(20180220))
# feature_xception = os.path.join(cwd, 'feature', 'feature_Xception_{}.h5'.format(20180221))
# feature_inceptionV3 = os.path.join(cwd, 'feature', 'feature_InceptionV3_%s.h5' % time_str)
feature_inceptionResNetV2 = os.path.join(feature_folder, 'feature_InceptionResNetV2_%s.h5' % time_str)
# for filename in [feature_cgg16, feature_cgg19, feature_resnet50, feature_xception, feature_inception, feature_inceptionResNetV2]:
for file_name in [feature_inceptionResNetV2]:
print('Load file: %s' % file_name)
with h5py.File(file_name, 'r') as h:
x_train.append(np.array(h['train']))
y_train = np.array(h['train_labels'])
x_val.append(np.array(h['val']))
y_val = np.array(h['val_labels'])
x_test.append(np.array(h['test']))
# -
print(x_train[0].shape)
print(len(y_train))
print(x_val[0].shape)
print(len(y_val))
print(x_test[0].shape)
# %%time
x_train = np.concatenate(x_train, axis=-1)
x_val = np.concatenate(x_val, axis=-1)
x_test = np.concatenate(x_test, axis=-1)
print(x_train.shape)
print(x_val.shape)
print(x_test.shape)
# %%time
from sklearn.utils import shuffle
(x_train, y_train) = shuffle(x_train, y_train)
# +
from sklearn.model_selection import train_test_split
# x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.0025, random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_val.shape)
print(y_val.shape)
# +
from keras.utils.np_utils import to_categorical
def data_generator(x_train, y_train, batch_size, num_classes):
len_x_train = len(x_train)
start_index = 0
end_index = 0
while(1):
end_index = start_index + batch_size
if end_index < len_x_train:
# print(start_index, end_index, end=' ')
x_batch = x_train[start_index: end_index, :]
y_batch = y_train[start_index: end_index]
y_batch_cat = to_categorical(y_batch, num_classes)
start_index = start_index + batch_size
# print(x_batch.shape, y_batch_cat.shape)
yield x_batch, y_batch_cat
else:
end_index = end_index-len_x_train
# print(start_index, end_index, end=' ')
x_batch = np.vstack((x_train[start_index:, :], x_train[:end_index, :]))
y_batch = np.concatenate([y_train[start_index:], y_train[:end_index]])
y_batch_cat = to_categorical(y_batch, num_classes)
start_index = end_index
# print(x_batch.shape, y_batch_cat.shape)
yield x_batch, y_batch_cat
# +
len_train = len(y_train)
steps_per_epoch=int(len_train/batch_size)
print(len_train, batch_size, steps_per_epoch)
train_gen = data_generator(x_train, y_train, batch_size, len_unique_landmark_ids)
batch_data = next(train_gen)
print(batch_data[0].shape, batch_data[1].shape)
# -
y_val_cat = to_categorical(y_val, len_unique_landmark_ids)
# ## Build NN
# +
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
# +
def get_lr(x):
lr = round(3e-4 * 0.98 ** x, 6)
if lr < 1e-5:
lr = 1e-5
print(lr, end=' ')
return lr
# annealer = LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
annealer = LearningRateScheduler(get_lr)
log_dir = os.path.join(log_folder, run_name)
print('log_dir:' + log_dir)
tensorBoard = TensorBoard(log_dir=log_dir)
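`get_lr` implements exponential decay with a floor. A small self-contained sketch of the same rule (parameter names are mine) makes the resulting schedule easy to inspect without running the scheduler:

```python
def decayed_lr(epoch, base=3e-4, decay=0.98, floor=1e-5):
    # Same rule as get_lr: exponential decay, rounded, clipped at a minimum
    return max(round(base * decay ** epoch, 6), floor)

# Sample the schedule every 50 epochs
schedule = [decayed_lr(e) for e in range(0, 300, 50)]
```

The rate starts at `3e-4`, decays by 2% per epoch, and bottoms out at `1e-5` after roughly 170 epochs.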
# +
model = Sequential()
model.add(Dense(4096, input_shape=x_train.shape[1:]))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(4096, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(len_unique_landmark_ids, activation='softmax'))
model.compile(optimizer=Adam(lr=1e-4),
loss='categorical_crossentropy',
metrics=['accuracy'])
# -
model.summary()
# %%time
hist = model.fit_generator(train_gen,
steps_per_epoch=steps_per_epoch,
epochs=10, #Increase this when not on Kaggle kernel
verbose=1, #1 for ETA, 0 for silent
callbacks=[annealer],
max_queue_size=8,
workers=1,
use_multiprocessing=False,
validation_data=(x_val, y_val_cat))
final_loss, final_acc = model.evaluate(x_val, y_val_cat, verbose=1)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
run_name_acc = run_name + '_' + str(int(final_acc*10000)).zfill(4)
histories = pd.DataFrame(hist.history)
histories['epoch'] = hist.epoch
print(histories.columns)
histories_file = os.path.join(model_folder, run_name_acc + '.csv')
histories.to_csv(histories_file, index=False)
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
# -
def saveModel(model, run_name):
cwd = os.getcwd()
modelPath = os.path.join(cwd, 'model')
if not os.path.isdir(modelPath):
os.mkdir(modelPath)
    weightsFile = os.path.join(modelPath, run_name + '.h5')
    model.save(weightsFile)
saveModel(model, run_name_acc)
# ## Predict
# This shows that the image-name list returned by os.listdir() is not in the right order
files = os.listdir(os.path.join(cwd, 'input', 'data_test', 'test'))
print(files[:10])
# +
# This shows that the image-name list from ImageDataGenerator() is the correct one
val_gen = ImageDataGenerator()
test_gen = ImageDataGenerator()
# image_size = 299
# batch_size = 128
val_generator = val_gen.flow_from_directory(
val_folder,
(image_size, image_size),
shuffle=False,
batch_size=batch_size
)
test_generator = test_gen.flow_from_directory(
test_folder,
(image_size, image_size),
shuffle=False,
batch_size=batch_size
)
print('val_generator')
print(len(val_generator.filenames))
print(val_generator.filenames[:10])
print('test_generator')
print(len(test_generator.filenames))
print(test_generator.filenames[:10])
# -
y_val_proba = model.predict(x_val, batch_size=128)
print(y_val_proba.shape)
y_test_proba = model.predict(x_test, batch_size=128)
print(y_test_proba.shape)
# +
def save_proba(y_val_proba, y_val, y_test_proba, test_filenames, file_name):
test_filenames = [n.encode('utf8') for n in test_filenames]
print(test_filenames[:10])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: \t%s' % file_name)
    with h5py.File(file_name, 'w') as h:
h.create_dataset('y_val_proba', data=y_val_proba)
h.create_dataset('y_val', data=y_val)
h.create_dataset('y_test_proba', data=y_test_proba)
h.create_dataset('test_filenames', data=test_filenames)
print('File saved: \t%s' % file_name)
def load_proba(file_name):
with h5py.File(file_name, 'r') as h:
y_val_proba = np.array(h['y_val_proba'])
y_val = np.array(h['y_val'])
y_test_proba = np.array(h['y_test_proba'])
test_filenames = np.array(h['test_filenames'])
print('File loaded: \t%s' % file_name)
test_filenames = [n.decode('utf8') for n in test_filenames]
print(test_filenames[:10])
return y_val_proba, y_val, y_test_proba, test_filenames
y_proba_file = os.path.join(model_folder, 'proba_%s.p' % run_name_acc)
save_proba(y_val_proba, val_generator.classes, y_test_proba, test_generator.filenames, y_proba_file)
y_val_proba, y_val, y_test_proba, test_filenames = load_proba(y_proba_file)
print(y_val_proba.shape)
print(y_val.shape)
print(y_test_proba.shape)
print(len(test_filenames))
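The explicit `encode('utf8')` in `save_proba` and `decode('utf8')` in `load_proba` exist because HDF5 stores these names as bytes; the roundtrip itself is plain Python and easy to verify in isolation:

```python
filenames = ['test/abc123.jpg', 'test/ünïcode.jpg']

# What save_proba does before writing to HDF5 ...
encoded = [n.encode('utf8') for n in filenames]
# ... and what load_proba does after reading back
decoded = [n.decode('utf8') for n in encoded]
```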
# +
# %%time
max_indexes = np.argmax(y_test_proba, -1)
print(max_indexes.shape)
test_dict = {}
for i, paire in enumerate(zip(test_generator.filenames, max_indexes)):
image_name, indx = paire[0], paire[1]
image_id = image_name[5:-4]
    # test_dict[image_id] = '%d %.4f' % (indx, y_test_proba[i, indx])
test_dict[image_id] = '%d %.4f' % (indx, 1)
# Verify that the image ids match the ones from ImageDataGenerator()
for key in list(test_dict.keys())[:10]:
print('%s %s' % (key, test_dict[key]))
# -
display(sample_submission_csv.head(2))
# %%time
len_sample_submission_csv = len(sample_submission_csv)
print('len_sample_submission_csv=%d' % len_sample_submission_csv)
count = 0
for i in range(len_sample_submission_csv):
image_id = sample_submission_csv.iloc[i, 0]
# landmarks = sample_submission_csv.iloc[i, 1]
if image_id in test_dict:
pred_landmarks = test_dict[image_id]
# print('%s %s' % (image_id, pred_landmarks))
sample_submission_csv.iloc[i, 1] = pred_landmarks
else:
# print(image_id)
        sample_submission_csv.iloc[i, 1] = '9633 1.0' # class 9633 is the most frequent, so predicting it everywhere likely scores better than leaving these rows empty
        # sample_submission_csv.iloc[i, 1] = '' # leave the prediction empty instead
count += 1
if count % 10000 == 0:
print(int(count/10000), end=' ')
display(sample_submission_csv.head(2))
pred_file = os.path.join(output_folder, 'pred_' + run_name_acc + '.csv')
sample_submission_csv.to_csv(pred_file, index=None)
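The row-by-row `.iloc` loop above performs one pandas indexing call per submission row, which is slow at this scale. A hedged sketch of an equivalent vectorized version, assuming (as the loop does) that column 0 holds the image id and column 1 the prediction string:

```python
import pandas as pd

def fill_submission(sub, test_dict, default='9633 1.0'):
    # Map each image id to its prediction string; ids without a
    # prediction fall back to the most frequent class, as in the loop
    out = sub.copy()
    out.iloc[:, 1] = out.iloc[:, 0].map(test_dict).fillna(default)
    return out

sub = pd.DataFrame({'id': ['a', 'b', 'c'], 'landmarks': ['', '', '']})
filled = fill_submission(sub, {'a': '12 0.9000', 'c': '7 0.5000'})
```

`Series.map` plus `fillna` does the whole table in one pass instead of one `.iloc` lookup per row.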
# +
print('Time cost: %.2f' % (time.time() - t0))
print(run_name_acc)
print('Done!')
# -
| landmark-recognition-challenge/3. TrainPredict_Feature.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This script loads the current model and performs an evaluation of it
# ### Initialize
# First, initialize the model with all parameters
#
# +
from data_source import DataSource
from visualize import Visualize
from sphere import Sphere
from model import Model
from loss import TripletLoss, ImprovedTripletLoss
from training_set import TrainingSet
from average_meter import AverageMeter
from data_splitter import DataSplitter
from mission_indices import MissionIndices
from database_parser import DatabaseParser
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.autograd import Variable
from torch.utils.tensorboard import SummaryWriter
from torchsummary import summary
import pyshtools
from pyshtools import spectralanalysis
from pyshtools import shio
from pyshtools import expand
import sys
import time
import math
import operator
import numpy as np
import pandas as pd
import open3d as o3d
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from tqdm.auto import tqdm
import scipy.stats as st
from scipy import spatial
# %reload_ext autoreload
# %autoreload 2
# -
torch.cuda.set_device(0)
torch.backends.cudnn.benchmark = True
n_features = 3
bandwidth = 100
net = Model(n_features, bandwidth).cuda()
restore = False
optimizer = torch.optim.SGD(net.parameters(), lr=5e-3, momentum=0.9)
batch_size = 12
num_workers = 12
descriptor_size = 256
net_input_size = 2*bandwidth
cache = 50
criterion = ImprovedTripletLoss(margin=2, alpha=0.5, margin2=0.2)
writer = SummaryWriter()
stored_model = './net_params_arche_high_res_big.pkl'
net.load_state_dict(torch.load(stored_model))
#summary(net, input_size=[(2, 200, 200), (2, 200, 200), (2, 200, 200)])
indices = np.arange(0,10,2)
print(indices)
# Initialize the data source
# +
dataset_path = "/media/scratch/berlukas/spherical/koze_high_res/"
#dataset_path = "/home/berlukas/data/arche_low_res2/"
db_parser = DatabaseParser(dataset_path)
n_test_data = 2500
n_test_cache = n_test_data
idx = np.arange(0,n_test_data, 10)
ds_test = DataSource(dataset_path, n_test_cache, -1, False)
ds_test.load(n_test_data, idx, filter_clusters=True)
n_test_data = len(ds_test.anchors)
# -
test_set = TrainingSet(restore, bandwidth)
test_set.generateAll(ds_test)
# +
# hack for removing the images
#test_set.anchor_features = test_set.anchor_features[:,0:2,:,:]
#test_set.positive_features = test_set.positive_features[:,0:2,:,:]
#test_set.negative_features = test_set.negative_features[:,0:2,:,:]
n_test_set = len(test_set)
print("Total size: ", n_test_set)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=10, shuffle=False, num_workers=1, pin_memory=True, drop_last=False)
# -
# # Generate the descriptors for the positive samples
# +
net.eval()
n_iter = 0
anchor_embeddings = np.empty(1)
positive_embeddings = np.empty(1)
with torch.no_grad():
for batch_idx, (data1, data2) in enumerate(test_loader):
embedded_a, embedded_p, embedded_n = net(data1.cuda().float(), data2.cuda().float(), data2.cuda().float())
dist_to_pos, dist_to_neg, loss, loss_total = criterion(embedded_a, embedded_p, embedded_n)
anchor_embeddings = np.append(anchor_embeddings, embedded_a.cpu().data.numpy().reshape([1,-1]))
positive_embeddings = np.append(positive_embeddings, embedded_p.cpu().data.numpy().reshape([1,-1]))
n_iter = n_iter + 1
desc_anchors = anchor_embeddings[1:].reshape([n_test_set, descriptor_size])
desc_positives = positive_embeddings[1:].reshape([n_test_set, descriptor_size])
# -
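The `np.empty(1)` seed plus the `[1:]` slice in the cell above work because `np.append` always flattens its inputs into a 1-D buffer, so the dummy first element must be dropped before reshaping. A toy version of the pattern, together with the more idiomatic list-plus-concatenate alternative:

```python
import numpy as np

buf = np.empty(1)  # dummy seed, dropped later
for _ in range(3):
    batch = np.ones((2, 4))                       # pretend batch: 2 descriptors of size 4
    buf = np.append(buf, batch.reshape([1, -1]))  # np.append flattens to 1-D

desc = buf[1:].reshape([6, 4])  # drop the seed, restore (n, dim)

# More idiomatic (and avoids repeated reallocation): collect in a
# list and concatenate once at the end
chunks = [np.ones((2, 4)) for _ in range(3)]
desc2 = np.concatenate(chunks, axis=0)
```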
# ## New testing pipeline (location based)
# +
print(f'Running test pipeline for a map size of {len(desc_anchors)} descriptors.')
sys.setrecursionlimit(50000)
tree = spatial.KDTree(desc_anchors)
p_norm = 2
max_pos_dist = 5.0
anchor_poses = ds_test.anchor_poses
positive_poses = ds_test.positive_poses
for n_nearest_neighbors in tqdm(range(1,21)):
loc_count = 0
match_count = 0
for idx in range(n_test_set):
cur_positive_pos = positive_poses[idx,5:8]
diff = np.subtract(anchor_poses[:,5:8], cur_positive_pos)
distances = np.linalg.norm(diff, axis=1)
if (np.count_nonzero(distances <= max_pos_dist) <= 2):
continue
match_count = match_count + 1
nn_dists, nn_indices = tree.query(desc_positives[idx,:], p = p_norm, k = n_nearest_neighbors)
nn_indices = [nn_indices] if n_nearest_neighbors == 1 else nn_indices
for nn_i in nn_indices:
if (nn_i >= n_test_set):
break;
dist = spatial.distance.euclidean(anchor_poses[nn_i,5:8], cur_positive_pos)
if (dist <= max_pos_dist):
loc_count = loc_count + 1;
break
loc_precision = (loc_count*1.0) / match_count
print(f'recall {loc_precision} for {n_nearest_neighbors} neighbors')
#print(f'{loc_precision}')
#writer.add_scalar('Ext_Test/Precision/Location', loc_precision, n_nearest_neighbors)
# +
print(f'Running test pipeline for a map size of {len(desc_positives)} descriptors.')
sys.setrecursionlimit(50000)
tree = spatial.KDTree(desc_positives)
p_norm = 2
max_pos_dist = 5.0
max_anchor_dist = 1
anchor_poses = ds_test.anchor_poses
positive_poses = ds_test.positive_poses
assert len(anchor_poses) == len(positive_poses)
for n_nearest_neighbors in tqdm(range(1,21)):
loc_count = 0
for idx in range(n_test_set):
nn_dists, nn_indices = tree.query(desc_anchors[idx,:], p = p_norm, k = n_nearest_neighbors)
nn_indices = [nn_indices] if n_nearest_neighbors == 1 else nn_indices
for nn_i in nn_indices:
if (nn_i >= n_test_set):
break;
dist = spatial.distance.euclidean(positive_poses[nn_i,5:8], anchor_poses[idx,5:8])
if (dist <= max_pos_dist):
loc_count = loc_count + 1;
break
loc_precision = (loc_count*1.0) / n_test_set
#print(f'recall {loc_precision} for {n_nearest_neighbors} neighbors')
print(f'{loc_precision}')
#writer.add_scalar('Ext_Test/Precision/Location', loc_precision, n_nearest_neighbors)
# -
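The two cells above are recall@k evaluations: a query counts as localized if any of its k nearest neighbors in descriptor space lies within `max_pos_dist` of the query pose. A self-contained sketch on synthetic 2-D data (here the descriptor simply equals the pose plus noise, so retrieval should always succeed):

```python
import numpy as np
from scipy import spatial

def recall_at_k(map_desc, map_poses, query_desc, query_poses, k, max_dist=5.0):
    # Fraction of queries whose k nearest neighbours (in descriptor
    # space) contain at least one map entry within max_dist in pose space
    tree = spatial.KDTree(map_desc)
    hits = 0
    for q_desc, q_pose in zip(query_desc, query_poses):
        _, nn = tree.query(q_desc, k=k)
        nn = np.atleast_1d(nn)  # k=1 returns a scalar index
        if any(np.linalg.norm(map_poses[i] - q_pose) <= max_dist for i in nn):
            hits += 1
    return hits / len(query_desc)

rng = np.random.default_rng(0)
poses = rng.uniform(0, 100, size=(50, 2))
queries = poses + rng.normal(0, 0.1, poses.shape)
recall = recall_at_k(poses, poses, queries, poses, k=1)
```

Note the `np.atleast_1d` guard, which plays the same role as the `nn_indices = [nn_indices] if n_nearest_neighbors == 1 else nn_indices` lines above.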
# ## Place Voting using Global Spectral Analysis
#
# +
print(f'Running test pipeline for a map size of {len(desc_positives)} descriptors.')
sys.setrecursionlimit(50000)
start = time.time()
tree = spatial.KDTree(desc_anchors)
end = time.time()
print(f'Duration for building the kd-tree {(end - start)}s')
p_norm = 2
max_pos_dist = 5.0
anchor_poses = ds_test.anchor_poses
anchor_clouds = ds_test.anchors
anchor_features = test_set.anchor_features
positive_poses = ds_test.positive_poses
positive_clouds = ds_test.positives
positive_features = test_set.positive_features
n_bands = 15
tapers, eigenvalues, taper_order = spectralanalysis.SHReturnTapers(2.01, 15)
for n_nearest_neighbors in tqdm(range(10,21)):
#n_nearest_neighbors = 16
n_matches = 0
loc_count = 0
almost_loc_count = 0
hard_loc_count = 0
no_loc_count = 0
fused_loc_count = 0
fused_almost_loc_count = 0
fused_hard_loc_count = 0
fused_no_loc_count = 0
final_count = 0
dur_neighbor_processing_s = 0
dur_s2_s = 0
dur_spectrum_s = 0
for idx in range(0, n_test_set):
#for idx in range(0, 200):
start = time.time()
nn_dists, nn_indices = tree.query(desc_positives[idx,:], p = p_norm, k = n_nearest_neighbors)
end = time.time()
dur_neighbor_processing_s = dur_neighbor_processing_s + (end - start)
nn_indices = [nn_indices] if n_nearest_neighbors == 1 else nn_indices
z_scores_fused = [0] * n_nearest_neighbors
z_scores_range = [0] * n_nearest_neighbors
z_scores_intensity = [0] * n_nearest_neighbors
z_scores_image = [0] * n_nearest_neighbors
n_true_matches = 0
contains_match = False
for i in range(0, n_nearest_neighbors):
nn_i = nn_indices[i]
if (nn_i >= n_test_set):
print(f'ERROR: index {nn_i} is outside of {n_test_set}')
break;
dist = spatial.distance.euclidean(anchor_poses[nn_i,5:8], positive_poses[idx,5:8])
if (dist <= max_pos_dist):
contains_match = True
n_true_matches = n_true_matches + 1
a_range = anchor_features[idx][0,:,:]
p_range = positive_features[nn_i][0,:,:]
a_intensity = anchor_features[idx][1,:,:]
p_intensity = positive_features[nn_i][1,:,:]
a_img = anchor_features[idx][2,:,:]
p_img = positive_features[nn_i][2,:,:]
start_s2 = time.time()
a_range_coeffs = pyshtools.expand.SHExpandDH(a_range, sampling=1)
p_range_coeffs = pyshtools.expand.SHExpandDH(p_range, sampling=1)
a_intensity_coeffs = pyshtools.expand.SHExpandDH(a_intensity, sampling=1)
p_intensity_coeffs = pyshtools.expand.SHExpandDH(p_intensity, sampling=1)
a_img_coeffs = pyshtools.expand.SHExpandDH(a_img, sampling=1)
p_img_coeffs = pyshtools.expand.SHExpandDH(p_img, sampling=1)
end_s2 = time.time()
dur_s2_s = dur_s2_s + (end_s2 - start_s2)
start_spectrum = time.time()
saa_range = spectralanalysis.spectrum(a_range_coeffs)
saa_intensity = spectralanalysis.spectrum(a_intensity_coeffs)
saa_img = spectralanalysis.spectrum(a_img_coeffs)
saa = np.empty([n_features, saa_range.shape[0]])
saa[0,:] = saa_range
saa[1,:] = saa_intensity
saa[2,:] = saa_img
#saa = np.mean(saa, axis=0)
saa = np.amax(saa, axis=0)
spp_range = spectralanalysis.spectrum(p_range_coeffs)
spp_intensity = spectralanalysis.spectrum(p_intensity_coeffs)
spp_img = spectralanalysis.spectrum(p_img_coeffs)
spp = np.empty([n_features, spp_range.shape[0]])
spp[0,:] = spp_range
spp[1,:] = spp_intensity
spp[2,:] = spp_img
#spp = np.mean(spp, axis=0)
spp = np.amax(spp, axis=0)
sap_range = spectralanalysis.cross_spectrum(a_range_coeffs, p_range_coeffs)
sap_intensity = spectralanalysis.cross_spectrum(a_intensity_coeffs, p_intensity_coeffs)
sap_img = spectralanalysis.cross_spectrum(a_img_coeffs, p_img_coeffs)
sap = np.empty([n_features, sap_range.shape[0]])
sap[0,:] = sap_range
sap[1,:] = sap_intensity
sap[2,:] = sap_img
#sap = np.mean(sap, axis=0)
sap = np.amax(sap, axis=0)
#saa = spectralanalysis.spectrum(a_coeffs)
#spp = spectralanalysis.spectrum(p_coeffs)
#sap = spectralanalysis.cross_spectrum(a_coeffs, p_coeffs)
#admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_img, saa_img, spp_img, tapers)
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap, saa, spp, tapers)
end_spectrum = time.time()
dur_spectrum_s = dur_spectrum_s + (end_spectrum - start_spectrum)
for l in range(0, n_bands):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_fused[i] = z_scores_fused[i] + score
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_range, saa_range, spp_range, tapers)
for l in range(0, n_bands):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_range[i] = z_scores_range[i] + score
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_intensity, saa_intensity, spp_intensity, tapers)
for l in range(0, n_bands):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_intensity[i] = z_scores_intensity[i] + score
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_img, saa_img, spp_img, tapers)
for l in range(0, n_bands):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_image[i] = z_scores_image[i] + score
#print(z_scores_range)
#print(z_scores_intensity)
#print(f'z_score > 2 = {np.sum(np.array(z_scores_range) > 3.8)} range, {np.sum(np.array(z_scores_intensity) > 20)} intensity')
#print(f'true matches: {n_true_matches}')
# normalize values
z_scores_fused = np.array(z_scores_fused) / (n_bands)
z_scores_range = np.array(z_scores_range) / (n_bands)
z_scores_intensity = np.array(z_scores_intensity) / (n_bands)
z_scores_image = np.array(z_scores_image) / (n_bands)
n_matches = n_matches + 1
max_index_fused, max_z_score_fused = max(enumerate(z_scores_fused), key=operator.itemgetter(1))
max_index_range, max_z_score_range = max(enumerate(z_scores_range), key=operator.itemgetter(1))
max_index_intensity, max_z_score_intensity = max(enumerate(z_scores_intensity), key=operator.itemgetter(1))
max_index_image, max_z_score_image = max(enumerate(z_scores_image), key=operator.itemgetter(1))
max_index = max_index_range if max_z_score_range > max_z_score_intensity else max_index_intensity
max_score = max_z_score_range if max_z_score_range > max_z_score_intensity else max_z_score_intensity
max_index = max_index if max_score > max_z_score_image else max_index_image
matching_index = nn_indices[max_index]
dist = spatial.distance.euclidean(anchor_poses[matching_index,5:8], positive_poses[idx,5:8])
if (dist <= 5):
loc_count = loc_count + 1;
elif (dist <= 8):
almost_loc_count = almost_loc_count + 1
elif (dist <= 11):
hard_loc_count = hard_loc_count + 1
else:
no_loc_count = no_loc_count + 1
matching_index = nn_indices[max_index_fused]
dist = spatial.distance.euclidean(anchor_poses[matching_index,5:8], positive_poses[idx,5:8])
if (dist <= 5):
fused_loc_count = fused_loc_count + 1;
elif (dist <= 8):
fused_almost_loc_count = fused_almost_loc_count + 1
elif (dist <= 11):
fused_hard_loc_count = fused_hard_loc_count + 1
else:
fused_no_loc_count = fused_no_loc_count + 1
loc_precision = (loc_count*1.0) / n_matches
almost_loc_precision = (almost_loc_count*1.0) / n_matches
hard_loc_precision = (hard_loc_count*1.0) / n_matches
no_loc_precision = (no_loc_count*1.0) / n_matches
fused_loc_precision = (fused_loc_count*1.0) / n_matches
fused_almost_loc_precision = (fused_almost_loc_count*1.0) / n_matches
fused_hard_loc_precision = (fused_hard_loc_count*1.0) / n_matches
fused_no_loc_precision = (fused_no_loc_count*1.0) / n_matches
print(f'Recall loc: {loc_precision} for {n_nearest_neighbors} neighbors with {n_matches}/{n_test_set} correct matches.')
print(f'Remaining recall: almost: {almost_loc_precision}, hard: {hard_loc_precision}, no {no_loc_precision}')
print(f'[FUSED] Recall loc: {fused_loc_precision}, almost: {fused_almost_loc_precision}, hard: {fused_hard_loc_precision}, no {fused_no_loc_precision}')
print('-----------------------------------------------------------------------------------------------------------------')
#print(f'{loc_precision}')
#writer.add_scalar('Ext_Test/Precision/WindowedVoting', loc_precision, n_nearest_neighbors)
#print(f'Duration: {dur_neighbor_processing_s/n_test_set}s')
#print(f'Duration S^2 Transform: {dur_s2_s/n_test_set}s')
#print(f'Duration Spectrum: {dur_spectrum_s/n_test_set}s')
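The voting step above turns each band's correlation confidence into a two-sided z-score via `st.norm.ppf(1 - (1 - prob) / 2)`, capped at 4.0 once the probability exceeds 0.99, then sums over bands and normalizes. That scoring rule in isolation:

```python
import scipy.stats as st

def band_z_score(prob, cap=4.0):
    # Two-sided z-score of a confidence level, capped as prob -> 1
    # (ppf would otherwise blow up near 1.0)
    return st.norm.ppf(1 - (1 - prob) / 2) if prob < 0.99 else cap

def candidate_score(band_probs):
    # Per-candidate score: sum of per-band z-scores, normalized by band count
    return sum(band_z_score(p) for p in band_probs) / len(band_probs)

strong = candidate_score([0.999] * 15)  # every band confident: capped at 4.0
weak = candidate_score([0.5] * 15)      # 50% confidence per band
```

A candidate whose spectra correlate strongly with the query across all bands therefore saturates at the cap, while uncorrelated candidates stay well below it.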
# +
print(f'Running test pipeline for a map size of {len(desc_positives)} descriptors.')
sys.setrecursionlimit(50000)
start = time.time()
tree = spatial.KDTree(desc_positives)
end = time.time()
print(f'Duration for building the kd-tree {(end - start)}s')
p_norm = 2
max_pos_dist = 5.0
anchor_poses = ds_test.anchor_poses
anchor_clouds = ds_test.anchors
anchor_features = test_set.anchor_features
positive_poses = ds_test.positive_poses
positive_clouds = ds_test.positives
positive_features = test_set.positive_features
tapers, eigenvalues, taper_order = spectralanalysis.SHReturnTapers(2.01, 1)
#for n_nearest_neighbors in tqdm(range(19,20)):
n_nearest_neighbors = 16
n_matches = 0
loc_count = 0
final_count = 0
dur_neighbor_processing_s = 0
dur_s2_s = 0
dur_spectrum_s = 0
for idx in range(0, n_test_set):
#for idx in range(0, 200):
start = time.time()
nn_dists, nn_indices = tree.query(desc_anchors[idx,:], p = p_norm, k = n_nearest_neighbors)
end = time.time()
dur_neighbor_processing_s = dur_neighbor_processing_s + (end - start)
nn_indices = [nn_indices] if n_nearest_neighbors == 1 else nn_indices
z_scores_range = [0] * n_nearest_neighbors
z_scores_intensity = [0] * n_nearest_neighbors
z_scores_image = [0] * n_nearest_neighbors
n_true_matches = 0
contains_match = False
for i in range(0, n_nearest_neighbors):
nn_i = nn_indices[i]
if (nn_i >= n_test_set):
            print(f'ERROR: index {nn_i} is outside of {n_test_set}')
break;
dist = spatial.distance.euclidean(positive_poses[nn_i,5:8], anchor_poses[idx,5:8])
if (dist <= max_pos_dist):
contains_match = True
n_true_matches = n_true_matches + 1
a_range = anchor_features[idx][0,:,:]
p_range = positive_features[nn_i][0,:,:]
a_intensity = anchor_features[idx][1,:,:]
p_intensity = positive_features[nn_i][1,:,:]
a_img = anchor_features[idx][2,:,:]
p_img = positive_features[nn_i][2,:,:]
start_s2 = time.time()
a_range_coeffs = pyshtools.expand.SHExpandDH(a_range, sampling=1)
p_range_coeffs = pyshtools.expand.SHExpandDH(p_range, sampling=1)
a_intensity_coeffs = pyshtools.expand.SHExpandDH(a_intensity, sampling=1)
p_intensity_coeffs = pyshtools.expand.SHExpandDH(p_intensity, sampling=1)
a_img_coeffs = pyshtools.expand.SHExpandDH(a_img, sampling=1)
p_img_coeffs = pyshtools.expand.SHExpandDH(p_img, sampling=1)
end_s2 = time.time()
dur_s2_s = dur_s2_s + (end_s2 - start_s2)
start_spectrum = time.time()
saa_range = spectralanalysis.spectrum(a_range_coeffs)
saa_intensity = spectralanalysis.spectrum(a_intensity_coeffs)
saa_img = spectralanalysis.spectrum(a_img_coeffs)
saa = np.empty([n_features, saa_range.shape[0]])
saa[0,:] = saa_range
saa[1,:] = saa_intensity
saa[2,:] = saa_img
#saa = np.mean(saa, axis=0)
saa = np.amax(saa, axis=0)
spp_range = spectralanalysis.spectrum(p_range_coeffs)
spp_intensity = spectralanalysis.spectrum(p_intensity_coeffs)
spp_img = spectralanalysis.spectrum(p_img_coeffs)
spp = np.empty([n_features, spp_range.shape[0]])
spp[0,:] = spp_range
spp[1,:] = spp_intensity
spp[2,:] = spp_img
#spp = np.mean(spp, axis=0)
spp = np.amax(spp, axis=0)
sap_range = spectralanalysis.cross_spectrum(a_range_coeffs, p_range_coeffs)
sap_intensity = spectralanalysis.cross_spectrum(a_intensity_coeffs, p_intensity_coeffs)
sap_img = spectralanalysis.cross_spectrum(a_img_coeffs, p_img_coeffs)
sap = np.empty([n_features, sap_range.shape[0]])
sap[0,:] = sap_range
sap[1,:] = sap_intensity
sap[2,:] = sap_img
#sap = np.mean(sap, axis=0)
sap = np.amax(sap, axis=0)
#saa = spectralanalysis.spectrum(a_coeffs)
#spp = spectralanalysis.spectrum(p_coeffs)
#sap = spectralanalysis.cross_spectrum(a_coeffs, p_coeffs)
#admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_img, saa_img, spp_img, tapers)
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap, saa, spp, tapers)
end_spectrum = time.time()
dur_spectrum_s = dur_spectrum_s + (end_spectrum - start_spectrum)
for l in range(0, 10):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_intensity[i] = z_scores_intensity[i] + score
'''
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_range, saa_range, spp_range, tapers)
for l in range(0, 10):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_range[i] = z_scores_range[i] + score
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_intensity, saa_intensity, spp_intensity, tapers)
for l in range(0, 10):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_intensity[i] = z_scores_intensity[i] + score
admit, corr = spectralanalysis.SHBiasAdmitCorr(sap_img, saa_img, spp_img, tapers)
for l in range(0, 10):
prob = spectralanalysis.SHConfidence(l, corr[l])
score = st.norm.ppf(1-(1-prob)/2) if prob < 0.99 else 4.0
z_scores_image[i] = z_scores_image[i] + score
'''
    if not contains_match:
continue
#print(z_scores_range)
#print(z_scores_intensity)
#print(f'z_score > 2 = {np.sum(np.array(z_scores_range) > 3.8)} range, {np.sum(np.array(z_scores_intensity) > 20)} intensity')
#print(f'true matches: {n_true_matches}')
n_matches = n_matches + 1
max_index_range, max_z_score_range = max(enumerate(z_scores_range), key=operator.itemgetter(1))
max_index_intensity, max_z_score_intensity = max(enumerate(z_scores_intensity), key=operator.itemgetter(1))
max_index_image, max_z_score_image = max(enumerate(z_scores_image), key=operator.itemgetter(1))
#print(f'max range: {max_z_score_range}, max intensity: {max_z_score_intensity}')
max_index = max_index_range if max_z_score_range > max_z_score_intensity else max_index_intensity
#max_index = max_index_intensity
max_score = max_z_score_range if max_z_score_range > max_z_score_intensity else max_z_score_intensity
max_index = max_index if max_score > max_z_score_image else max_index_image
matching_index = nn_indices[max_index]
dist = spatial.distance.euclidean(positive_poses[matching_index,5:8], anchor_poses[idx,5:8])
if (dist <= max_pos_dist):
loc_count = loc_count + 1;
#print('successful')
#else:
#print(f'Place invalid: distance anchor <-> positive: {dist} with score {max_score}.')
#matching_index = nn_indices[true_match_idx]
#dist = spatial.distance.euclidean(positive_poses[matching_index,5:8], positive_poses[true_match_idx,5:8])
#print(f'Distance positive <-> true_match: {dist}, true_match score: {z_scores[true_match_idx]}')
loc_precision = (loc_count*1.0) / n_matches
#print(f'Recall {loc_precision} for {n_nearest_neighbors} neighbors with {n_matches}/{n_data} correct matches.')
print(f'{loc_precision}')
#writer.add_scalar('Ext_Test/Precision/WindowedVoting', loc_precision, n_nearest_neighbors)
#print(f'Duration: {dur_neighbor_processing_s/n_test_set}s')
print(f'Duration S^2 Transform: {dur_s2_s/n_test_set}s')
print(f'Duration Spectrum: {dur_spectrum_s/n_test_set}s')
| script/playground/ext_evaluation_cross_modality.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import random
import time
from IPython.display import Image, display, clear_output
from ipywidgets import widgets
x = ['lamb.jpg','lion.jpg','tiger.jpg','dog.jpg','ant.jpg','spider.jpg']
def redraw():
choices = random.sample(x, 4)
correct = random.choice(choices)
display(Image(correct))
time.sleep(3)
button1 = widgets.Button(description = choices[0])
button2 = widgets.Button(description = choices[1])
button3 = widgets.Button(description = choices[2])
button4 = widgets.Button(description = choices[3])
container = widgets.HBox(children=[button1,button2,button3,button4])
display(container)
def on_button1_clicked(b):
# [insert code to record choice]
container.close()
clear_output()
redraw()
def on_button2_clicked(b):
# [insert code to record choice]
container.close()
clear_output()
redraw()
def on_button3_clicked(b):
# [insert code to record choice]
container.close()
clear_output()
redraw()
def on_button4_clicked(b):
# [insert code to record choice]
container.close()
clear_output()
redraw()
button1.on_click(on_button1_clicked)
button2.on_click(on_button2_clicked)
button3.on_click(on_button3_clicked)
button4.on_click(on_button4_clicked)
redraw() # initializes the first question
# +
import random
import time
from IPython.display import Image, display, clear_output
from ipywidgets import widgets
nchoices = 4
x = ['lion.jpg','tiger.jpg','lamb.jpg','dog.jpg','ant.jpg','spider.jpg']
def redraw():
choices = random.sample(x, nchoices)
correct = random.choice(choices)
display(Image(correct))
time.sleep(0.5)
buttons = [widgets.Button(description = choice) for choice in choices]
container = widgets.HBox(children=buttons)
display(container)
def on_button_clicked(b):
choice = b.description
        # recent ipywidgets versions set colors via the style attribute
        b.style.button_color = 'green' if choice == correct else 'red'
time.sleep(2)
container.close()
clear_output()
redraw()
for button in buttons:
button.on_click(on_button_clicked)
redraw()
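The widget callbacks are hard to exercise in an automated way, but the grading rule inside `on_button_clicked` (compare the button description against the correct answer, color accordingly) can be checked headlessly with a small helper of my own naming:

```python
import random

def grade(choice, correct):
    # Mirrors the on_button_clicked rule: green for a hit, red otherwise
    return 'green' if choice == correct else 'red'

random.seed(0)
options = random.sample(['lion', 'tiger', 'lamb', 'dog', 'ant', 'spider'], 4)
answer = random.choice(options)
```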
# +
import random
import time
from IPython.display import Image, display, clear_output
from ipywidgets import widgets
from gtts import gTTS
import os
nchoices = 4
x = ['23','35','45','55','66','77']
def redraw():
choices = random.sample(x, nchoices)
correct = random.choice(choices)
#display(Image(correct))
display(correct)
time.sleep(0.5)
buttons = [widgets.Button(description = choice) for choice in choices]
container = widgets.HBox(children=buttons)
display(container)
def on_button_clicked(b):
choice = b.description
        # recent ipywidgets versions set colors via the style attribute
        b.style.button_color = 'green' if choice == correct else 'red'
if choice == correct:
tts = gTTS(text='Roger that. That is correct', lang='en')
tts.save("answer.mp3")
os.system("afplay answer.mp3")
if choice != correct:
tts = gTTS(text='Wrong answer Roger', lang='en')
tts.save("answer.mp3")
os.system("afplay answer.mp3")
time.sleep(2)
container.close()
clear_output()
redraw()
for button in buttons:
button.on_click(on_button_clicked)
redraw()
# +
import random
import time
from IPython.display import Image, display, clear_output
from ipywidgets import widgets
text=widgets.Text("Enter your name:")
name = str()
display(text)
def handle_submit(sender):
    # Read the value only after the user submits it; reading text.value
    # right after display() would capture the pre-filled prompt instead
    global name
    name = text.value.split(':')[1]
    print("Thank you")
text.on_submit(handle_submit)
# -
| ver_0.4.1/quiz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import cv2
from matplotlib import pyplot as plt
from keras.models import Sequential,load_model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.datasets import fashion_mnist, mnist
import matplotlib.pyplot as plt
reconstructed_model = load_model("mnist_models.h5")
# load fashion mnist dataset
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
for i in range(9):
# define subplot
plt.subplot(330 + 1 + i)
# plot raw pixel data
plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))
# show the figure
plt.show()
# +
X_train = X_train.reshape(X_train.shape[0],28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
# -
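The preprocessing cell above performs the standard MNIST steps: add a channel axis, scale pixels to [0, 1], and one-hot encode the labels. Their effect is easy to verify on a dummy array; `np.eye` indexing is the plain-NumPy equivalent of `np_utils.to_categorical`:

```python
import numpy as np

x = np.random.randint(0, 256, size=(5, 28, 28)).astype('float32')
x = x.reshape(x.shape[0], 28, 28, 1) / 255.0  # add channel axis, scale to [0, 1]

y = np.array([3, 1, 4, 1, 5])
y_cat = np.eye(10)[y]                         # one-hot, like np_utils.to_categorical
```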
probs = reconstructed_model.predict(X_test[np.newaxis, 1])
prediction = probs.argmax(axis=1)
# Evaluate the Fashion-MNIST test set with a model trained on the MNIST handwritten digits dataset
acc = reconstructed_model.evaluate(X_test, Y_test, verbose=0)
print("Test loss and accuracy: ", acc)
reconstructed_model.summary()
# Iterate through the layers and mark them as trainable
for layer in reconstructed_model.layers:
    layer.trainable = True
| Q2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from collections import deque
from env.tsp_env import TspEnv
from utils import tsp_plots
import numpy as np
import random
import time
import torch
import torch.nn as nn
import torch.optim as optim
# Set number of cities to visit
NUMBER_OF_CITIES = 10
MAXIMUM_RUNS = 20000
MAXIMUM_TIME_MINS = 180
NO_IMPROVEMENT_RUNS = 1000
NO_IMPROVEMENT_TIME = 180
# Discount rate of future rewards
GAMMA = 0.99
# Learning rate for neural network
LEARNING_RATE = 0.001
# Maximum number of game steps (state, action, reward, next state) to keep
MEMORY_SIZE = 100000
# Batch size for neural net training
BATCH_SIZE = 5
# Number of game steps to play before starting training
REPLAY_START_SIZE = 10000
# Exploration rate (epsilon) is the probability of choosing a random action
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.001
# Reduction in epsilon with each game step
EXPLORATION_DECAY = 0.995
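# With a multiplicative decay applied once per training step, the number of
# steps until epsilon falls from `EXPLORATION_MAX` to `EXPLORATION_MIN` has a
# closed form. A small illustrative check (not part of the original notebook;
# the helper name is made up):
```python
import math

def steps_to_reach(eps_max, eps_min, decay):
    """Smallest n with eps_max * decay**n <= eps_min."""
    return math.ceil(math.log(eps_min / eps_max) / math.log(decay))

# With the settings above (1.0 -> 0.001 at 0.995 per step),
# epsilon reaches its floor after 1379 training steps.
n_steps = steps_to_reach(1.0, 0.001, 0.995)
```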
class DQN(nn.Module):
"""
Deep Q Network solver. Includes control variables, memory of
state/action/reward/end, neural net, and methods to act,
remember, and update neural net by sampling from memory.
"""
def __init__(self, observation_space, n_actions):
"""Constructor method. Set up memory and neural nets."""
self.n_actions = n_actions
# Set starting exploration rate
self.exploration_rate = EXPLORATION_MAX
# Set up memory for state/action/reward/next_state/done
# Deque will drop old data when full
self.memory = deque(maxlen=MEMORY_SIZE)
# Set up neural net
super(DQN, self).__init__()
self.net = nn.Sequential(
nn.Linear(observation_space, NUMBER_OF_CITIES * 5),
nn.ReLU(),
nn.Linear(NUMBER_OF_CITIES * 5, NUMBER_OF_CITIES * 5),
nn.ReLU(),
nn.Linear(NUMBER_OF_CITIES * 5, NUMBER_OF_CITIES * 5),
nn.ReLU(),
nn.Linear(NUMBER_OF_CITIES * 5, n_actions))
# Set loss function and optimizer
self.objective = nn.MSELoss()
self.optimizer = optim.Adam(
params=self.net.parameters(), lr=LEARNING_RATE)
def act(self, state):
"""Act either randomly or by redicting action that gives max Q"""
# Act randomly if random number < exploration rate
if np.random.rand() < self.exploration_rate:
action = random.randrange(self.n_actions)
else:
# Otherwise get predicted Q values of actions
q_values = self.net(torch.FloatTensor(state))
# Get index of action with best Q
action = np.argmax(q_values.detach().numpy()[0])
return action
def experience_replay(self, runs):
"""Update model by sampling (s/a/r/s') from memory. As the memory
accumulates knowledge of rewards, this will lead to a better ability to
estimate Q (which depends on future possible rewards."""
# Do not try to train model if memory is less than reqired batch size
if len(self.memory) < BATCH_SIZE:
return
# Reduce exploration rate down to minimum
self.exploration_rate *= EXPLORATION_DECAY
self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate)
# Sample a random batch from memory
batch = random.sample(self.memory, BATCH_SIZE)
for state, action, reward, state_next, terminal in batch:
if terminal:
q_update = reward
else:
# Get best possible Q for next action
action_q = self.net(torch.FloatTensor(state_next))
action_q = action_q.detach().numpy().flatten()
best_next_q = np.amax(action_q)
# Calculate current Q using Bellman equation
q_update = (reward + GAMMA * best_next_q)
# Get predicted Q values for current state (detached copy, so the
# training target does not carry gradients)
q_values = self.net(torch.FloatTensor(state)).detach()
# Update predicted Q for current state/action
q_values[0][action] = q_update
# Update neural net to better predict the updated Q value
# Reset net gradients
self.optimizer.zero_grad()
# calculate loss
loss_v = nn.MSELoss()(self.net(torch.FloatTensor(state)), q_values)
# Backpropagate loss
loss_v.backward()
# Update network gradients
self.optimizer.step()
def forward(self, x):
"""Feed forward function for neural net"""
return self.net(x)
def remember(self, state, action, reward, next_state, done):
"""state/action/reward/next_state/done"""
self.memory.append((state, action, reward, next_state, done))
def main():
"""Main program loop"""
# Set up environment
time_start = time.time()
env = TspEnv(number_of_cities = NUMBER_OF_CITIES)
# Get number of observations returned for state
observation_space = env.observation_space.shape[0] * 2
# Get number of actions possible
n_actions = len(env.action_space)
# Set up neural net
dqn = DQN(observation_space, n_actions)
# Set up list for results
results_run = []
results_exploration = []
total_rewards = []
best_reward = -999999
best_route = None
# Set run and time of last best route
run_last_best = 0
time_last_best = time.time()
# Set up run counter and learning loop
step = 0
run = 0
continue_learning = True
# Continue repeating games (episodes) until target complete
while continue_learning:
# Increment run (episode) counter
run += 1
total_reward = 0
# Start run and get first state observations
state, reward, terminal, info = env.reset()
total_reward += reward
# Reshape state into 2D array with state observations as first 'row'
state = np.reshape(state, [1, observation_space])
# Reset route
route = []
# Continue loop until episode complete
while True:
# Increment step counter
step += 1
# Get action to take
action = dqn.act(state)
route.append(action)
# Act
state_next, reward, terminal, info = env.step(action)
total_reward += reward
# Get observations for new state (s')
state_next = np.reshape(state_next, [1, observation_space])
# Record state, action, reward, new state & terminal
dqn.remember(state, action, reward, state_next, terminal)
# Update state
state = state_next
# Update neural net
if len(dqn.memory) >= REPLAY_START_SIZE:
dqn.experience_replay(run)
# Actions to take if end of game episode
if terminal:
# Clear print row content
clear_row = '\r' + ' '*100 + '\r'
print (clear_row, end ='')
print (f'Run: {run: 5.0f}, ', end='')
print (f'exploration: {dqn.exploration_rate: 4.3f}, ', end='')
print (f'reward: {total_reward: 6.0f}', end='')
# Add to results lists
results_run.append(run)
results_exploration.append(dqn.exploration_rate)
total_rewards.append(total_reward)
# Check for best route so far
if total_reward > best_reward:
best_reward = total_reward
best_route = route
run_last_best = run
time_last_best = time.time()
time_elapsed = (time.time() - time_start) / 60
print(f'\nNew best run. Run : {run: 5.0f}, ' \
f'Time {time_elapsed: 4.0f} ' \
f'Reward {total_reward: 6.0f}')
# Check stopping conditions
stop = False
if step > REPLAY_START_SIZE:
if run == MAXIMUM_RUNS:
stop = True
elif time.time() - time_start > MAXIMUM_TIME_MINS * 60:
stop = True
elif time.time() - time_last_best > NO_IMPROVEMENT_TIME*60:
stop = True
elif run - run_last_best == NO_IMPROVEMENT_RUNS:
stop = True
if stop:
# End training
continue_learning = False
# End episode
break
############################# Plot results #################################
# Plot result progress
tsp_plots.plot_result_progress(total_rewards)
# Plot best route
tsp_plots.plot_route(env, best_route)
###################### Show route and distances ############################
print ('Route')
print (best_route)
print ()
print ('Best route distance')
print (f'{env.state.calculate_distance(best_route):.0f}')
main()
| tsp_rl/.ipynb_checkpoints/tsp_dqn-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PyRosetta.notebooks] *
# language: python
# name: conda-env-PyRosetta.notebooks-py
# ---
# Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
#
# Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
NAME = ""
COLLABORATORS = ""
# ---
# <!--NOTEBOOK_HEADER-->
# *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
# content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
# <!--NAVIGATION-->
# < [Global Ligand Docking using `XMLObjects` Using the `ref2015.wts` Scorefunction](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/08.01-Ligand-Docking-XMLObjects.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Working With Symmetry](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/10.00-Working-With-Symmetry.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/08.02-Ligand-Docking-pyrosetta.distributed.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# # `GALigandDock` Protocol with `pyrosetta.distributed` Using the `beta_cart.wts` Scorefunction
# *Warning*: This notebook uses `pyrosetta.distributed.viewer` code, which runs in `jupyter notebook` and might not run if you're using `jupyterlab`.
# *Note:* This Jupyter notebook requires the PyRosetta distributed layer. Please make sure to activate the `PyRosetta.notebooks` conda environment before running this notebook. The kernel is set to use this environment.
# +
import logging
logging.basicConfig(level=logging.INFO)
import matplotlib
# %matplotlib inline
import os
import pandas as pd
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.io as io
import pyrosetta.distributed.viewer as viewer
import pyrosetta.distributed.packed_pose as packed_pose
import pyrosetta.distributed.tasks.rosetta_scripts as rosetta_scripts
import seaborn
seaborn.set()
import sys
# Notebook setup
if 'google.colab' in sys.modules:
# !pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
# -
# Load the `TPA.am1-bcc.gp.params` file when using the `-beta_cart` flag, which has `gen_potential` atom typing and AM1-BCC partial charges:
pdb_filename = "inputs/test_lig.pdb"
ligand_params = "inputs/TPA.am1-bcc.gp.params"
flags = f"""
-ignore_unrecognized_res 1
-extra_res_fa {ligand_params}
-beta_cart
-out:level 200
"""
pyrosetta.distributed.init(flags)
pose_obj = io.pose_from_file(filename=pdb_filename)
# Now we change the scorefunction in our RosettaScripts script to `beta_cart.wts`, the weights of which were optimized on protein-ligand complexes using ligands with AM1-BCC partial charges generated with Amber's `antechamber`.
# `GALigandDock` within RosettaScripts normally outputs multiple `.pdb` files to disk if run by the command line. However, when using the `MultioutputRosettaScriptsTask` function in `pyrosetta.distributed`, the outputs will be captured in memory within this Jupyter session!
xml = f"""
<ROSETTASCRIPTS>
<SCOREFXNS>
<ScoreFunction name="fa_standard" weights="beta_cart.wts"/>
</SCOREFXNS>
<MOVERS>
<GALigandDock name="dock"
scorefxn="fa_standard"
scorefxn_relax="fa_standard"
grid_step="0.25"
padding="5.0"
hashsize="8.0"
subhash="3"
nativepdb="{pdb_filename}"
final_exact_minimize="sc"
random_oversample="10"
rotprob="0.9"
rotEcut="100"
sidechains="auto"
initial_pool="{pdb_filename}">
<Stage repeats="10" npool="50" pmut="0.2" smoothing="0.375" rmsdthreshold="2.5" maxiter="50" pack_cycles="100" ramp_schedule="0.1,1.0"/>
<Stage repeats="10" npool="50" pmut="0.2" smoothing="0.375" rmsdthreshold="1.5" maxiter="50" pack_cycles="100" ramp_schedule="0.1,1.0"/>
</GALigandDock>
</MOVERS>
<PROTOCOLS>
<Add mover="dock"/>
</PROTOCOLS>
</ROSETTASCRIPTS>
"""
xml_obj = rosetta_scripts.MultioutputRosettaScriptsTask(xml)
xml_obj.setup()
# `MultioutputRosettaScriptsTask` is a Python generator object. Therefore, we need to call `list()` or `set()` on it to run it.
#
# *Warning*, the following cell runs for ~45 minutes CPU time.
if not os.getenv("DEBUG"):
# %time results = list(xml_obj(pose_obj))
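# The note above about needing `list()` applies to any Python generator: no
# work happens until the generator is consumed. A generic sketch (plain
# Python, not PyRosetta-specific; the names are illustrative):
```python
def lazy_tasks():
    # Each value is produced only when the consumer asks for it
    for i in range(3):
        yield i * i

gen = lazy_tasks()        # nothing has executed yet
results_demo = list(gen)  # consuming the generator runs it
```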
# ### Inspect the scores for the `GALigandDock` trajectories:
if not os.getenv("DEBUG"):
df = pd.DataFrame.from_records(packed_pose.to_dict(results))
df
# ### Now that we have performed `GALigandDock`, we can plot the ligand binding energy landscape:
if not os.getenv("DEBUG"):
matplotlib.rcParams["figure.figsize"] = [12.0, 8.0]
seaborn.scatterplot(x="lig_rms", y="total_score", data=df)
# Let's look at the ligand dock with the lowest `total_score` score!
if not os.getenv("DEBUG"):
ppose_lowest_total_score = results[df.sort_values(by="total_score").index[0]]
view = viewer.init(ppose_lowest_total_score)
view.add(viewer.setStyle())
view.add(viewer.setStyle(command=({"hetflag": True}, {"stick": {"colorscheme": "brownCarbon", "radius": 0.2}})))
view.add(viewer.setSurface(residue_selector=pyrosetta.rosetta.core.select.residue_selector.ChainSelector("E"), opacity=0.7, color='white'))
view.add(viewer.setHydrogenBonds())
view()
# <!--NAVIGATION-->
# < [Global Ligand Docking using `XMLObjects` Using the `ref2015.wts` Scorefunction](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/08.01-Ligand-Docking-XMLObjects.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Working With Symmetry](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/10.00-Working-With-Symmetry.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/08.02-Ligand-Docking-pyrosetta.distributed.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
| student-notebooks/08.02-Ligand-Docking-pyrosetta.distributed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={}
# # Display Exercise 1
# + [markdown] nbgrader={}
# ## Imports
# + [markdown] nbgrader={}
# Put any imports needed to display rich output in the following cell:
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
# YOUR CODE HERE
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg)
from IPython.display import Image
from IPython.display import HTML
# + deletable=false nbgrader={"checksum": "fed914a9fcfb3c6780f71b2a57ca435f", "grade": true, "grade_id": "displayex01a", "points": 2}
assert True # leave this to grade the import statements
# + [markdown] nbgrader={}
# # Basic rich display
# + [markdown] nbgrader={}
# Find a Physics related image on the internet and display it in this notebook using the `Image` object.
#
# * Load it using the `url` argument to `Image` (don't upload the image to this server).
# * Make sure to set the `embed` flag so the image is embedded in the notebook data.
# * Set the width and height to `600px`.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
# YOUR CODE HERE
Image(url='http://upload.wikimedia.org/wikipedia/commons/8/88/Astronaut-EVA.jpg', embed=True, width=600, height=600)
# + deletable=false nbgrader={"checksum": "00df159bd7cb62dbf196367fd7395e7f", "grade": true, "grade_id": "displayex01b", "points": 4}
assert True # leave this to grade the image display
# + [markdown] nbgrader={}
# Use the `HTML` object to display HTML in the notebook that reproduces the table of Quarks on [this page](http://en.wikipedia.org/wiki/List_of_particles). This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true} language="html"
#
# <table>
# <tr>
# <th>Name</th>
# <th>Symbol</th>
# <th>Antiparticle</th>
# <th>Charge</th>
# <th>Mass</th>
# </tr>
# <tr>
# <td>up</td>
# <td>u</td>
# <td> $\bar{u}$ </td>
# <td> +$\frac{2}{3}$</td>
# <td> 1.5-3.3</td>
# </tr>
# <tr>
# <td>down</td>
# <td>d</td>
# <td>$\bar{d}$</td>
# <td> -$\frac{1}{3}$</td>
# <td>3.5-6.0</td>
# </tr>
# <tr>
# <td>charm</td>
# <td>c</td>
# <td>$\bar{c}$</td>
# <td>+$\frac{2}{3}$</td>
# <td>1,160-1,340</td>
# </tr>
# <tr>
# <td>strange</td>
# <td>s</td>
# <td>$\bar{s}$</td>
# <td>-$\frac{1}{3}$</td>
# <td>70-130</td>
# </tr>
# <tr>
# <td>top</td>
# <td>t</td>
# <td>$\bar{t}$</td>
# <td>+$\frac{2}{3}$</td>
# <td>169,100-173,300</td>
# </tr>
# <tr>
# <td>bottom</td>
# <td>b</td>
# <td>$\bar{b}$</td>
# <td>-$\frac{1}{3}$</td>
# <td>4,130-4,370</td>
# </tr>
# </table>
#
# -
# + deletable=false nbgrader={"checksum": "cbfeafa3f168e0c4f55bc93ea685d014", "grade": true, "grade_id": "displayex01c", "points": 4}
assert True # leave this here to grade the quark table
| assignments/assignment06/DisplayEx01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # 1806554 <NAME> Lab 7
x <- as.integer(readline())
i = 1
while(i<=x){
cat(i*i," ")
i = i + 1
}
library(stringi)
x <- readline()
y <- stri_reverse(x)
if(x == y){
cat("palindrome\n")
}else{
cat("not Pal")
}
hl <- function(x,y){
if(x > y) {
smaller = y
} else {
smaller = x
}
for(i in 1:smaller) {
if((x %% i == 0) && (y %% i == 0)) {
hcf = i
}
}
lcm = x*y/hcf
return(list(lcm,hcf))
}
x <- as.integer(readline())
y <- as.integer(readline())
hl(x,y)
mat = matrix(c(3:14),nrow = 4, byrow = FALSE)
mat
for(row in 1:nrow(mat)) {
    for(col in 1:ncol(mat)) {
        if(mat[row, col] < 5){
            mat[row, col] <- 0
        }
    }
}
mat
substr("abcdef", 2, 4)
substr(rep("abcdef", 4), 1:4, 4:5)
install.packages("stringr")
x <- c("whxyz")
library(stringr)
pos <- str_replace(x, "xyz", "???")
cat(pos)
| DA Lab/7_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building a Logistic Regression
# Create a logistic regression based on the bank data provided.
#
# The data is based on the marketing campaign efforts of a Portuguese banking institution. The classification goal is to predict if the client will subscribe a term deposit (variable y).
#
# Note that the first column of the dataset is the index.
#
# Source: [Moro et al., 2014] <NAME>, <NAME> and <NAME>. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
# ## Import the relevant libraries
# +
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# this part may not be needed after the latest updates of the library
from scipy import stats
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)
# -
# ## Load the data
# Load the ‘Example_bank_data.csv’ dataset.
raw_data = pd.read_csv('Example-bank-data.csv')
raw_data
# We want to know whether the bank marketing strategy was successful, so we need to transform the outcome variable into 0s and 1s in order to perform a logistic regression.
# +
# We make sure to create a copy of the data before we start altering it. Note that we don't change the original data we loaded.
data = raw_data.copy()
# Removes the index column that came with the data
data = data.drop(['Unnamed: 0'], axis = 1)
# We use the map function to change any 'yes' values to 1 and 'no' values to 0.
data['y'] = data['y'].map({'yes':1, 'no':0})
data
# -
# Check the descriptive statistics
data.describe()
# ### Declare the dependent and independent variables
y = data['y']
x1 = data['duration']
# ### Simple Logistic Regression
# Run the regression and visualize it on a scatter plot (no need to plot the line).
# +
x = sm.add_constant(x1)
reg_log = sm.Logit(y,x)
results_log = reg_log.fit()
# Get the regression summary
results_log.summary()
# +
# Create a scatter plot of x1 (Duration, no constant) and y (Subscribed)
plt.scatter(x1,y,color = 'C0')
# Don't forget to label your axes!
plt.xlabel('Duration', fontsize = 20)
plt.ylabel('Subscription', fontsize = 20)
plt.show()
| 7_LogisticRegresssion_S36_L239/Building a Logistic Regression - Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Geography Analysis
# ## Instructions
# 1. Fill in the dataset in section 1.1
#
# 2. Run all of the cells
#
# 3. Look at the summary pdf generated AND/OR explore each metric below.
# - Under each Metric there will be a portion of "Setup" and then "Analyses". Ignore the "Setup" unless customization is needed, and in "Analyses" results are shown to be interacted with. The number that comes after the M in the title refers to the measurement number when collecting the metrics.
#
# ## Table of Contents
# 1. [Initial Setup](#setup) <br/>
# 1.1 [Dataset](#dataset)
# 2. (M5) Metric: [Country counts](#metric5)<br/>
# 2.1 [Setup](#metric5_setup)<br/>
# 2.2 [Analyses](#metric5_analyses)
# 3. (M6) Metric: [Image tags](#metric6)<br/>
# 3.1 [Setup](#metric6_setup)<br/>
# 3.2 [Analyses](#metric6_analyses)
# 4. (M10) Metric: [Supercategories w/wo people](#metric10) <br/>
# 4.1 [Setup](#metric10_setup)<br/>
# 4.2 [Analyses](#metric10_analyses)
# 5. [Setting up summary pdf](#summarypdf)
# # Initial Setup
# <a id="setup"></a>
from __future__ import print_function
import argparse
import datasets
import pickle
import itertools
import torchvision.transforms as transforms
import torch.utils.data as data
import os
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.manifold import TSNE
import seaborn as sns
import numpy as np
from scipy import stats
import PIL.Image
from scipy.cluster.hierarchy import dendrogram, linkage
from math import sqrt
import cv2
import matplotlib.patches as patches
from scipy.spatial.distance import squareform
import pycountry
from geonamescache import GeonamesCache
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
from mpl_toolkits.basemap import Basemap
from sklearn import svm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from countryinfo import CountryInfo
import re
import copy
import textwrap
import matplotlib.patches as mpatches
import operator
from matplotlib.font_manager import FontProperties
import imageio
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, Layout
import ipywidgets as widgets
from IPython.core.display import HTML
from IPython.display import display, HTML, Image
import time
import warnings
import random
# %matplotlib inline
# +
COLORS = sns.color_palette('Set2', 2)
SAME_EXTENT = (-0.5, 6.5, -0.5, 6.5)
np.seterr(divide='ignore', invalid='ignore')
warnings.filterwarnings("ignore")
if not os.path.exists("dataloader_files"):
os.mkdir("dataloader_files")
if not os.path.exists("results"):
os.mkdir("results")
if not os.path.exists("checkpoints"):
os.mkdir("checkpoints")
# -
# https://stackoverflow.com/questions/31517194/how-to-hide-one-specific-cell-input-or-output-in-ipython-notebook
def hide_toggle(for_next=False, toggle_text='Toggle show/hide'):
this_cell = """$('div.cell.code_cell.rendered.selected')"""
next_cell = this_cell + '.next()'
target_cell = this_cell # target cell to control with toggle
js_hide_current = '' # bit of JS to permanently hide code in current cell (only when toggling next cell)
if for_next:
target_cell = next_cell
js_hide_current = this_cell + '.find("div.input").hide();'
js_f_name = 'code_toggle_{}'.format(str(random.randint(1,2**64)))
html = """
<script>
function {f_name}() {{
{cell_selector}.find('div.input').toggle();
}}
{js_hide_current}
</script>
<a href="javascript:{f_name}()">{toggle_text}</a>
""".format(
f_name=js_f_name,
cell_selector=target_cell,
js_hide_current=js_hide_current,
toggle_text=toggle_text
)
return HTML(html)
hide_toggle(for_next=True, toggle_text='Show/hide helper functions')
# +
def folder(num, folder):
if not os.path.exists("results/{0}/{1}".format(folder, num)):
os.mkdir("results/{0}/{1}".format(folder, num))
file = open("results/{0}/{1}/results.txt".format(folder, num), "w")
return file
# Projecting a set of features into a lower-dimensional subspace with PCA
def project(features, dim):
standardized = StandardScaler().fit_transform(features)
pca = PCA(n_components=dim)
principalComponents = pca.fit_transform(X=standardized)
return principalComponents
# Calculating the binomial proportion confidence interval
def wilson(p, n, z = 1.96):
denominator = 1 + z**2/n
centre_adjusted_probability = p + z*z / (2*n)
adjusted_standard_deviation = sqrt((p*(1 - p) + z*z / (4*n)) / n)
lower_bound = (centre_adjusted_probability - z*adjusted_standard_deviation) / denominator
upper_bound = (centre_adjusted_probability + z*adjusted_standard_deviation) / denominator
return (lower_bound, upper_bound)
def country_to_iso3(country):
missing = {'South+Korea': 'KOR',
'North+Korea': 'PRK',
'Laos': 'LAO',
'Caribbean+Netherlands': 'BES',
'St.+Lucia': 'LCA',
'East+Timor': 'TLS',
'Democratic+Republic+of+Congo': 'COD',
'Swaziland': 'SWZ',
'Cape+Verde': 'CPV',
'C%C3%B4te+d%C2%B4Ivoire': 'CIV',
'Ivory+Coast': 'CIV',
'Channel+Islands': 'GBR'
}
try:
iso3 = pycountry.countries.search_fuzzy(country.replace('+', ' '))[0].alpha_3
except LookupError:
try:
iso3 = missing[country]
except KeyError:
iso3 = None
return iso3
def display_filepaths(filepaths, width=100, height=100):
sidebyside = widgets.HBox([widgets.Image(value=open(filepath, 'rb').read(), format='png', width=width, height=height) for filepath in filepaths], layout=Layout(height='{}px'.format(height)))
display(sidebyside)
# -
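# The `wilson()` helper above gives a binomial proportion confidence interval.
# A standalone sanity check of that formula (mirroring the helper so it can run
# on its own): the interval should contain the observed proportion and tighten
# as the sample size grows.
```python
from math import sqrt

def wilson_interval(p, n, z=1.96):
    # Same formula as the wilson() helper above
    denominator = 1 + z**2 / n
    centre = p + z*z / (2*n)
    spread = sqrt((p*(1 - p) + z*z / (4*n)) / n)
    return ((centre - z*spread) / denominator,
            (centre + z*spread) / denominator)

lo_50, hi_50 = wilson_interval(0.3, 50)    # wide interval at n=50
lo_5k, hi_5k = wilson_interval(0.3, 5000)  # much tighter at n=5000
```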
# ## Dataset
# Fill in below with dataset and file path names
# <a id="dataset"></a>
transform_train = transforms.Compose([
transforms.ToTensor()
])
dataset = datasets.YfccPlacesDataset(transform_train)
folder_name = 'yfcc_supp'
save_loc = '1_pager_geo'
os.system("rm -r results/{0}/{1}".format(folder_name, save_loc))
file = folder(save_loc, folder_name)
first_pass = True
to_write = {}
if not os.path.exists("checkpoints/{}".format(folder_name)):
os.mkdir("checkpoints/{}".format(folder_name))
# # (M5) Metric: Country Counts
# <a id="metric5"></a>
# ## Setup
# <a id="metric5_setup"></a>
hide_toggle(for_next=True, toggle_text='Show/hide M5 code')
# +
counts = pickle.load(open("results/{}/5.pkl".format(folder_name), "rb"))
iso3_to_subregion = pickle.load(open('util_files/iso3_to_subregion_mappings.pkl', 'rb'))
gc = GeonamesCache()
iso3_codes = list(gc.get_dataset_by_key(gc.get_countries(), 'iso3').keys())
# https://ramiro.org/notebook/basemap-choropleth/
cm = plt.get_cmap('Blues')
bins = np.logspace(np.log2(min(list(counts.values()))), np.log2(max(list(counts.values())) + 1), base=2.0)
num_colors = len(bins)
scheme = [cm(i / num_colors) for i in range(num_colors)]
subregion_counts = {}
iso3_to_bin = {}
total = sum(counts.values())
country_count_phrases = []
iso3_to_scaledpop = {}
for country in ['England', 'Scotland', 'Wales', 'Northern+Ireland']:
if country in counts.keys():
counts['United+Kingdom'] += counts[country]
counts.pop(country, None)
for country, count in sorted(counts.items(), key=lambda x: x[1], reverse=True):
country_count_phrases.append("{0}: {1} {2}%".format(country, count, round(100.*count/total)))
iso3 = country_to_iso3(country)
if iso3 is not None:
iso3_to_bin[iso3] = np.digitize(count, bins)
try:
iso3_to_scaledpop[iso3] = count / CountryInfo(country.replace('+', ' ')).population()
except KeyError:
pass
# print("{} not found in CountryInfo".format(country))
try:
subregion = iso3_to_subregion[iso3]
if subregion in subregion_counts.keys():
subregion_counts[subregion] += count
else:
subregion_counts[subregion] = count
except KeyError:
print("This country's subregion not found: {}".format(country))
scale = min(iso3_to_scaledpop.values())
for key in iso3_to_scaledpop.keys():
    iso3_to_scaledpop[key] /= scale
def country_counts_num(topn):
print("Total images: {}\n".format(total))
print("Country Counts\n")
print("Top:\n")
for i in range(topn):
print(country_count_phrases[i])
print("\nBottom:\n")
for i in range(topn):
print(country_count_phrases[-1-i])
def subregion_counts_num():
print("Subregion Counts\n")
total_subregion = sum(subregion_counts.values())
for subregion, count in sorted(subregion_counts.items(), key=lambda x: x[1], reverse=True):
print("{0}: {1} {2}%".format(subregion, count, round(100.*count/total_subregion)))
def country_map():
fig = plt.figure(figsize=(16, 7))
fontsize = 20
ax = fig.add_subplot(111, facecolor='w', frame_on=False)
fig.suptitle('Dataset representation by number of images', fontsize=fontsize, y=.95)
m = Basemap(lon_0=0, projection='robin')
m.drawmapboundary(color='w')
shapefile = 'util_files/ne_10m_admin_0_countries_lakes'
m.readshapefile(shapefile, 'units', color='#444444', linewidth=.2)
for info, shape in zip(m.units_info, m.units):
iso3 = info['ADM0_A3']
if iso3 not in iso3_to_bin.keys():
color = '#dddddd'
else:
try:
color = scheme[iso3_to_bin[iso3]]
except IndexError:
print(iso3)
print("this index: {0} when length is {1}".format(iso3_to_bin[iso3], len(scheme)))
patches = [Polygon(np.array(shape), True)]
pc = PatchCollection(patches)
pc.set_facecolor(color)
ax.add_collection(pc)
# Cover up Antarctica so legend can be placed over it.
ax.axhspan(0, 1000 * 1800, facecolor='w', edgecolor='w', zorder=2)
# Draw color legend.
ax_legend = fig.add_axes([0.35, 0.14, 0.3, 0.03], zorder=3)
cmap = mpl.colors.ListedColormap(scheme)
cb = mpl.colorbar.ColorbarBase(ax_legend, cmap=cmap, ticks=bins, boundaries=bins, orientation='horizontal')
#cb = mpl.colorbar.ColorbarBase(ax_legend, cmap=cmap, ticks=bins, boundaries=bins, orientation='vertical')
spots = len(bins) // 4
spots = [0, spots, spots*2, spots*3, len(bins)- 1]
cb.ax.set_xticklabels([str(round(int(i), -3)) if j in spots else '' for j, i in enumerate(bins)])
cb.ax.tick_params(labelsize=fontsize)
plt.show()
print("Total countries: {}".format(len(iso3_to_bin)))
def country_map_population():
fig = plt.figure(figsize=(16, 7))
fontsize = 20
ax = fig.add_subplot(111, facecolor='w', frame_on=False)
m = Basemap(lon_0=0, projection='robin')
m.drawmapboundary(color='w')
shapefile = 'util_files/ne_10m_admin_0_countries_lakes'
cm = plt.get_cmap('Blues')
bins = np.logspace(np.log2(min(list(iso3_to_scaledpop.values()))), np.log2(max(list(iso3_to_scaledpop.values())) + 1.), base=2.0)
num_colors = len(bins)
scheme = [cm(i / num_colors) for i in range(num_colors)]
m.readshapefile(shapefile, 'units', color='#444444', linewidth=.2)
for info, shape in zip(m.units_info, m.units):
iso3 = info['ADM0_A3']
if iso3 not in iso3_to_scaledpop.keys():
color = '#dddddd'
else:
try:
color = scheme[np.digitize(iso3_to_scaledpop[iso3], bins)]
except IndexError:
print(iso3)
print("this index: {0} when length is {1}".format(iso3_to_bin[iso3], len(scheme)))
patches = [Polygon(np.array(shape), True)]
pc = PatchCollection(patches)
pc.set_facecolor(color)
ax.add_collection(pc)
ax.axhspan(0, 1000 * 1800, facecolor='w', edgecolor='w', zorder=2)
to_write[0] = ['(M5) Geographic representation of dataset scaled by country population, colored on a logarithmic scale.']
fig.suptitle('Dataset representation scaled by country population, logarithmic scale', fontsize=fontsize, y=.95)  # set the title before saving so it appears in the PNG
plt.savefig("results/{0}/{1}/0.png".format(folder_name, save_loc), bbox_inches='tight', pad_inches=.2)
plt.show()
# -
# ## Analyses
# <a id="metric5_analyses"></a>
# Counts by country
interact(country_counts_num, topn=widgets.IntSlider(min=1, max=30, step=1, value=10));
# Counts by subregion
subregion_counts_num()
# Visualization of representation by country
country_map()
# Visualization of representation by country, scaled by population. Logarithmic scale. Some countries may be grayed out because the population could not be found for that country.
country_map_population()
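# Both maps above color countries by digitizing counts into log-spaced bins; a small self-contained sketch of that step (synthetic counts, num=10 bins for brevity, clipped so the index always stays inside the color list):
import numpy as np
toy_counts = {'USA': 120000, 'FRA': 9000, 'KEN': 150}
toy_bins = np.logspace(np.log2(min(toy_counts.values())), np.log2(max(toy_counts.values()) + 1), num=10, base=2.0)
# np.digitize counts from 1 for values above the first edge, hence the -1;
# clipping guards against the IndexError handled in the maps above.
toy_color_index = {k: int(np.clip(np.digitize(v, toy_bins) - 1, 0, len(toy_bins) - 1)) for k, v in toy_counts.items()}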
# # (M6) Metric: Image Tags
# <a id="metric6"></a>
# ## Setup
# <a id="metric6_setup"></a>
hide_toggle(for_next=True, toggle_text='Show/hide M6 code')
# +
if not os.path.exists("results/{0}/6".format(folder_name)):
os.mkdir("results/{0}/6".format(folder_name))
info_stats = pickle.load(open("results/{}/6.pkl".format(folder_name), "rb")) #20GB
country_tags = info_stats['country_tags']
tag_to_subregion_features = info_stats['tag_to_subregion_features']
iso3_to_subregion = pickle.load(open('util_files/iso3_to_subregion_mappings.pkl', 'rb'))
categories = dataset.categories
total_counts = np.zeros(len(categories))
subregion_tags = {}
for country, counts in country_tags.items():
total_counts = np.add(total_counts, counts)
subregion = iso3_to_subregion[country_to_iso3(country)]
if subregion not in subregion_tags.keys():
subregion_tags[subregion] = np.zeros(len(categories))
subregion_tags[subregion] = np.add(subregion_tags[subregion], counts)
total_counts = total_counts.astype(int)
sum_total_counts = int(np.sum(total_counts))
if not os.path.exists('checkpoints/{}/6_a.pkl'.format(folder_name)):
pvalues_over = {} # pvalue : '[country]: [tag] (country num and total num info for now)'
pvalues_under = {}
for country, counts in country_tags.items():
tags_for_country = int(np.sum(counts))
if tags_for_country < 50: # threshold for country to have at least 50 tags so there are enough samples for analysis
continue
for i, count in enumerate(counts):
this_counts = np.zeros(tags_for_country)
this_counts[:int(count)] = 1
that_counts = np.zeros(sum_total_counts - tags_for_country)
that_counts[:total_counts[i] - int(count)] = 1
p = stats.ttest_ind(this_counts, that_counts)[1]
tag_info = '{0}-{1} ({2}/{3} vs {4}/{5})'.format(country, categories[i], int(count), tags_for_country, int(total_counts[i] - count), sum_total_counts - tags_for_country)
if np.mean(this_counts) > np.mean(that_counts):
pvalues_over[p] = tag_info
else:
pvalues_under[p] = tag_info
else:
pvalues_under, pvalues_over = pickle.load(open('checkpoints/{}/6_a.pkl'.format(folder_name), 'rb'))
subregion_pvalues_over = {}
subregion_pvalues_under = {}
for subregion, counts in subregion_tags.items():
tags_for_subregion = int(np.sum(counts))
for i, count in enumerate(counts):
this_counts = np.zeros(tags_for_subregion)
this_counts[:int(count)] = 1
that_counts = np.zeros(sum_total_counts - tags_for_subregion)
that_counts[:total_counts[i] - int(count)] = 1
p = stats.ttest_ind(this_counts, that_counts)[1]
tag_info = '{0}-{1} ({2}/{3} vs {4}/{5})'.format(subregion, categories[i], int(count), tags_for_subregion, int(total_counts[i] - count), sum_total_counts - tags_for_subregion)
if np.mean(this_counts) > np.mean(that_counts):
subregion_pvalues_over[p] = tag_info
else:
subregion_pvalues_under[p] = tag_info
def tag_rep_by_country(topn):
if first_pass:
to_write[1] = ["(M6) Overrepresentations of tags by country (tag in country vs tag in rest of the countries):"]
for p, content in sorted(pvalues_over.items(), key=lambda x: x[0])[:4]:
to_write[1].append(('{0}: {1}'.format(round(p, 4), content)))
to_write[1].append("")
to_write[1].append("Underrepresentations of tags by country (tag in country vs tag in rest of the countries):")
for p, content in sorted(pvalues_under.items(), key=lambda x: x[0])[:4]:
to_write[1].append(('{0}: {1}'.format(round(p, 4), content)))
print("By Country\n")
print('Over represented\n')
for p, content in sorted(pvalues_over.items(), key=lambda x: x[0])[:topn]:
print('{0}: {1}'.format(round(p, 4), content))
print('\nUnder represented\n')
for p, content in sorted(pvalues_under.items(), key=lambda x: x[0])[:topn]:
print('{0}: {1}'.format(round(p, 4), content))
def tag_rep_by_subregion(topn):
print("By Subregion\n")
print('Over represented\n')
for p, content in sorted(subregion_pvalues_over.items(), key=lambda x: x[0])[:topn]:
print('{0}: {1}'.format(round(p, 4), content))
print('\nUnder represented\n')
for p, content in sorted(subregion_pvalues_under.items(), key=lambda x: x[0])[:topn]:
print('{0}: {1}'.format(round(p, 4), content))
if not os.path.exists('checkpoints/{}/6_b.pkl'.format(folder_name)):
phrase_to_value = {}
# Look at appearance differences in how a tag is represented across subregions
for tag in tag_to_subregion_features.keys():
subregion_features = tag_to_subregion_features[tag]
all_subregions = list(subregion_features.keys())
all_features = []
all_filepaths = []
start = 0
for subregion in all_subregions:
this_features = [features[0] for features in subregion_features[subregion]]
this_filepaths = [features[1] for features in subregion_features[subregion]]
if len(this_features) > 0:
all_features.append(np.array(this_features)[:, 0, :])
all_filepaths.append(this_filepaths)
if len(all_features) == 0:
continue
all_features = np.concatenate(all_features, axis=0)
all_filepaths = np.concatenate(all_filepaths, axis=0)
labels = np.zeros(len(all_features))
for j, subregion in enumerate(all_subregions):
labels[start:len(subregion_features[subregion])+start] = j
start += len(subregion_features[subregion])
num_features = int(np.sqrt(len(all_features)))
all_features = project(all_features, num_features)
clf = svm.SVC(kernel='linear', probability=True, decision_function_shape='ovr', class_weight='balanced')
clf_random = svm.SVC(kernel='linear', probability=True, decision_function_shape='ovr', class_weight='balanced')
if len(np.unique(labels)) <= 1:
continue
clf.fit(all_features, labels)
acc = clf.score(all_features, labels)
probs = clf.decision_function(all_features)
labels = labels.astype(int)  # np.integer is abstract and deprecated as an astype target
plot_kwds = {'alpha' : .8, 's' : 30, 'linewidths':0}
colorz = sns.color_palette('hls', int(np.amax(labels)) + 1)
projection_instances = TSNE().fit_transform(all_features)
plt.scatter(*projection_instances.T, **plot_kwds, c=[colorz[labels[i]] for i in range(len(all_features))])
handles = []
for lab in np.unique(labels):
patch = mpatches.Patch(color=colorz[lab], label=all_subregions[lab])
handles.append(patch)
fontP = FontProperties()
fontP.set_size('small')
lgd = plt.legend(handles=handles, bbox_to_anchor=(1.04,1), loc="upper left", prop=fontP)
plt.savefig('results/{0}/{1}/{2}_tsne.png'.format(folder_name, 6, dataset.categories[tag]), bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.close()
class_preds = clf.predict(all_features)
class_probs = clf.predict_proba(all_features)
j_to_acc = {}
for j, subregion in enumerate(all_subregions):
if j in labels:
# to get acc in subregion vs out
this_labels = np.copy(labels)
this_labels[np.where(labels!=j)[0]] = -1
this_preds = np.copy(class_preds)
this_preds[np.where(class_preds!=j)[0]] = -1
this_acc = np.mean(this_preds == this_labels)
j_to_acc[j] = this_acc
fig = plt.figure(figsize=(16, 12))
plt.subplots_adjust(hspace=.48)
fontsize = 24
diff_subregion = max(j_to_acc.items(), key=operator.itemgetter(1))[0]
subregion_index = list(clf.classes_).index(diff_subregion)
class_probs = class_probs[:, subregion_index]
in_sub = np.where(labels == diff_subregion)[0]
out_sub = np.where(labels != diff_subregion)[0]
in_probs = class_probs[in_sub]
out_probs = class_probs[out_sub]
in_indices = np.argsort(in_probs)
out_indices = np.argsort(out_probs)
original_labels = np.copy(labels)
np.random.shuffle(labels)
clf_random.fit(all_features, labels)
random_preds = clf_random.predict(all_features)
random_preds[np.where(random_preds!=diff_subregion)[0]] = -1
labels[np.where(labels!=diff_subregion)[0]] = -1
acc_random = np.mean(labels == random_preds)
value = j_to_acc[diff_subregion] / acc_random
if value <= 1.2 and acc <= .7: # can tune as desired
continue
phrase = dataset.labels_to_names[dataset.categories[tag]]
phrase_to_value[phrase] = [value, all_subregions[diff_subregion], acc, acc_random, num_features, j_to_acc]
pickle.dump([original_labels, class_probs, class_preds, diff_subregion, all_filepaths], open('results/{0}/{1}/{2}_info.pkl'.format(folder_name, 6, dataset.labels_to_names[dataset.categories[tag]]), 'wb'))
pickle.dump(phrase_to_value, open('checkpoints/{}/6_b.pkl'.format(folder_name), 'wb'))
else:
phrase_to_value = pickle.load(open('checkpoints/{}/6_b.pkl'.format(folder_name), 'rb'))
svm_options = []
best_tag = None
best_tag_value = 1.2
for phrase, value in sorted(phrase_to_value.items(), key=lambda x: x[1][0], reverse=True):
value, region, acc, acc_random, num_features, j_to_acc = phrase_to_value[phrase]
if acc > .75 and value > best_tag_value:
best_tag_value = value
best_tag = phrase
svm_options.append(('{0} in {1}: {2}% and {3}x'.format(phrase, region, round(100.*acc, 3), round(value, 3)), phrase))
def show_svm_tag(tag, num):
if tag is None:
return
this_info = pickle.load(open('results/{0}/{1}/{2}_info.pkl'.format(folder_name, 6, tag), 'rb'))
labels, class_probs, class_preds, diff_subregion, all_filepaths = this_info
value, region, acc, acc_random, num_features, j_to_acc = phrase_to_value[tag]
if num is not None:
print("{0} in {1} has acc: {2}, random acc: {3} so {4}x and {5} features".format(tag, region, round(acc, 3), round(acc_random, 3), round(value, 3), num_features))
print()
in_sub = np.where(labels == diff_subregion)[0]
out_sub = np.where(labels != diff_subregion)[0]
in_probs = class_probs[in_sub]
out_probs = class_probs[out_sub]
in_indices = np.argsort(in_probs)
out_indices = np.argsort(out_probs)
to_save = False
if num is None:
to_write[2] = ['(M6) To discern if there is an appearance difference in how certain subregions represent a tag, we extract scene-level features from each image, and fit a linear SVM to distinguish between the tag in the subregion and out of the subregion.\nAn example of the most linearly separable tag: {0} has an accuracy of {1} when classifying in {2} vs outside {2}.\n'.format(tag, round(acc, 3), region)]
to_save = True
num = 5
def display_chunk(in_subregion_label=True, in_subregion_pred=True, to_save=False, name=None):
subregion_filepaths = []
if in_subregion_label == in_subregion_pred:
counter = 0
else:
counter = -1
while len(subregion_filepaths) < num:
try:
if in_subregion_label:
index_a = in_sub[in_indices[counter]]
else:
index_a = out_sub[out_indices[counter]]
except IndexError:  # counter ran past the available indices
break
file_path_a = all_filepaths[index_a]
if (in_subregion_pred and class_preds[index_a] == diff_subregion) or ((not in_subregion_pred) and class_preds[index_a] != diff_subregion):
subregion_filepaths.append(file_path_a)
if in_subregion_label == in_subregion_pred:
counter += 1
else:
counter -= 1  # walk backwards from the end of the sorted indices
if to_save and first_pass:
this_loc = "results/{0}/{1}/1_{2}.png".format(folder_name, save_loc, name)
if len(subregion_filepaths) > 0:
fig = plt.figure(figsize=(16, 8))
for i in range(num):
ax = fig.add_subplot(1, num, i+1)
ax.axis("off")
if i >= len(subregion_filepaths):
image = np.ones((3, 3, 3))
else:
image, _ = dataset.from_path(subregion_filepaths[i])
image = image.data.cpu().numpy().transpose(1, 2, 0)
im = ax.imshow(image, extent=SAME_EXTENT)
plt.tight_layout()
plt.savefig(this_loc, bbox_inches='tight')
plt.close()
else:
os.system("cp util_files/no_images.png {0}".format(this_loc))
elif len(subregion_filepaths) > 0:
display_filepaths(subregion_filepaths, width = 800//len(subregion_filepaths), height=800//len(subregion_filepaths))
else:
print("No images in this category")
if not to_save:
print("In: Correct")
else:
to_write[2].append("In: Correct")
display_chunk(True, True, to_save, 'a')
if not to_save:
print("In: Incorrect")
else:
to_write[2].append("In: Incorrect")
display_chunk(True, False, to_save, 'b')
if not to_save:
print("Out: Incorrect")
else:
to_write[2].append("Out: Incorrect")
display_chunk(False, True, to_save, 'c')
if not to_save:
print("Out: Correct")
else:
to_write[2].append("Out: Correct")
display_chunk(False, False, to_save, 'd')
# -
# ## Analyses
# <a id="metric6_analyses"></a>
# Over- and under-representations of tags by country. The first fraction is this tag's share of the country's tags; the second is its share of all other countries' tags combined.
interact(tag_rep_by_country, topn=widgets.IntSlider(min=1, max=30, step=1, value=10));
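# The p-values above compare proportions by expanding counts into 0/1 indicator vectors and running a two-sample t-test; a toy version of that construction with made-up counts:
import numpy as np
from scipy import stats
toy_in, toy_n_in = 30, 100    # tag is 30 of this country's 100 tags
toy_out, toy_n_out = 50, 900  # and 50 of the other countries' 900 tags
toy_this = np.zeros(toy_n_in)
toy_this[:toy_in] = 1
toy_that = np.zeros(toy_n_out)
toy_that[:toy_out] = 1
toy_p = stats.ttest_ind(toy_this, toy_that)[1]
toy_over = toy_this.mean() > toy_that.mean()  # overrepresented here (0.3 vs ~0.056)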
# Over- and under-representations of tags by subregion; the fractions have the same meaning as above.
interact(tag_rep_by_subregion, topn=widgets.IntSlider(min=1, max=30, step=1, value=10));
# How linearly separable images with a particular tag are in one subregion compared to the rest.
# The percentage in the dropdown menu indicates the accuracy of the classifier at distinguishing this subregion from the others, and the ratio is that accuracy over the accuracy on randomly shuffled labels (a check against overfitting).
# +
num_widget = widgets.IntSlider(min=1, max=20, step=1, value=5)
tag_widget = widgets.Dropdown(options=svm_options, layout=Layout(width='400px'))
all_things = [widgets.Label('Tag, acc, acc/acc_random',layout=Layout(padding='0px 0px 0px 5px', width='170px')), tag_widget, widgets.Label('Num',layout=Layout(padding='0px 5px 0px 40px', width='80px')), num_widget]
if first_pass:
show_svm_tag(best_tag, None)
ui = HBox(all_things)
out = widgets.interactive_output(show_svm_tag, {'tag': tag_widget, 'num': num_widget})
display(ui, out)
# -
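# The acc/acc_random check used above can be sketched in isolation: fit a linear SVM on the real labels, refit on shuffled labels, and compare training accuracies. A ratio well above 1 suggests real structure rather than memorization. Data here is synthetic (two separable clusters):
import numpy as np
from sklearn import svm
toy_rng = np.random.RandomState(0)
toy_X = np.vstack([toy_rng.randn(40, 5) + 2, toy_rng.randn(40, 5) - 2])
toy_y = np.array([0] * 40 + [1] * 40)
toy_acc = svm.SVC(kernel='linear', class_weight='balanced').fit(toy_X, toy_y).score(toy_X, toy_y)
toy_y_shuf = toy_rng.permutation(toy_y)
toy_acc_random = svm.SVC(kernel='linear').fit(toy_X, toy_y_shuf).score(toy_X, toy_y_shuf)
toy_ratio = toy_acc / toy_acc_random  # well-separated clusters give a ratio above 1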
# # (M10) Metric: Languages for tourist vs local
# <a id="metric10"></a>
# ## Setup
# <a id="metric10_setup"></a>
hide_toggle(for_next=True, toggle_text='Show/hide M10 code')
# +
iso3_to_subregion = pickle.load(open('util_files/iso3_to_subregion_mappings.pkl', 'rb'))
mappings = pickle.load(open('country_lang_mappings.pkl', 'rb'))
iso3_to_lang = mappings['iso3_to_lang']
lang_to_iso3 = mappings['lang_to_iso3']
lang_info = pickle.load(open('results/{}/10.pkl'.format(folder_name), 'rb'))
counts = lang_info['lang_counts']
country_with_langs = lang_info['country_with_langs']
country_with_imgs = lang_info['country_with_imgs']
gc = GeonamesCache()
iso3_codes = list(gc.get_dataset_by_key(gc.get_countries(), 'iso3').keys())
cm = plt.get_cmap('Blues')
iso3_to_counts = {}
iso3_to_bin = {}
total = sum(counts.values())
langcount_phrases = []
for lang, count in sorted(counts.items(), key=lambda x: x[1], reverse=True):
lang_name = pycountry.languages.get(alpha_2=lang)
if lang_name is not None:
lang_name = lang_name.name
else:
lang_name = lang
langcount_phrases.append("{0}: {1} {2}%".format(lang_name, count, round(count*100./total, 4)))
try:
for iso3 in lang_to_iso3[lang]:
if iso3 not in iso3_to_counts.keys():
iso3_to_counts[iso3] = count
else:
iso3_to_counts[iso3] += count
except KeyError:
pass
bins = np.logspace(np.log2(min(iso3_to_counts.values())), np.log2(max(iso3_to_counts.values()) + 1), base=2.0)  # log2 of both endpoints so the bins span the data range
num_colors = len(bins)
scheme = [cm(i / num_colors) for i in range(num_colors)]
for key in iso3_to_counts.keys():
iso3_to_bin[key] = np.digitize(iso3_to_counts[key], bins) - 1
def language_representation_map():
fig = plt.figure(figsize=(12, 7))
fontsize = 14
ax = fig.add_subplot(111, facecolor='w', frame_on=False)
fig.suptitle('Dataset representation by tag language for images', fontsize=fontsize, y=.95)
m = Basemap(lon_0=0, projection='robin')
m.drawmapboundary(color='w')
shapefile = 'util_files/ne_10m_admin_0_countries_lakes'
m.readshapefile(shapefile, 'units', color='#444444', linewidth=.2)
for info, shape in zip(m.units_info, m.units):
iso3 = info['ADM0_A3']
if iso3 not in iso3_to_bin.keys():
color = '#dddddd'
else:
try:
color = scheme[iso3_to_bin[iso3]]
except IndexError:
pass
patches = [Polygon(np.array(shape), True)]
pc = PatchCollection(patches)
pc.set_facecolor(color)
ax.add_collection(pc)
# Cover up Antarctica so legend can be placed over it.
ax.axhspan(0, 1000 * 1800, facecolor='w', edgecolor='w', zorder=2)
# Draw color legend.
ax_legend = fig.add_axes([0.35, 0.14, 0.3, 0.03], zorder=3)
cmap = mpl.colors.ListedColormap(scheme)
cb = mpl.colorbar.ColorbarBase(ax_legend, cmap=cmap, ticks=bins, boundaries=bins, orientation='horizontal')
step = len(bins) // 4
spots = [0, step, step * 2, step * 3, len(bins) - 1]
cb.ax.set_xticklabels([str(int(i)) if j in spots else '' for j, i in enumerate(bins)])
cb.ax.tick_params(labelsize=fontsize)
plt.show()
def language_counts(topn):
if first_pass:
to_write[3] = ['(M10) Most popular languages:']
for i in range(3):
to_write[3].append(langcount_phrases[i])
print("Most popular languages\n")
for i in range(topn):
print(langcount_phrases[i])
print("\nLeast popular languages\n")
for i in range(topn):
print(langcount_phrases[-1-i])
to_write_lower = {}
to_write_upper = {}
iso3_to_percent = {}
subregion_to_percents = {}
subregion_to_filepaths = {} # 0 is tourist, 1 is local
subregion_to_embeddings = {} # 0 is tourist, 1 is local
for country in country_with_langs.keys():
iso3 = country_to_iso3(country)
langs_in = 0
langs_out = {}
for lang in country_with_langs[country]:
try:
if lang in iso3_to_lang[iso3]:
langs_in += 1
else:
if lang in langs_out.keys():
langs_out[lang] += 1
else:
langs_out[lang] = 1
except KeyError:
print("This iso3 can't be found in iso3_to_lang: {}".format(iso3))
this_total = len(country_with_langs[country])
others = ''
for lang in langs_out.keys():
if len(lang) == 2:
lang_name = pycountry.languages.get(alpha_2=lang)
elif len(lang) == 3:
lang_name = pycountry.languages.get(alpha_3=lang)
else:
lang_name = None  # avoid a NameError below when the code is neither 2 nor 3 letters
print("{} is not 2 or 3 letters?".format(lang))
if lang_name is not None:
lang_name = lang_name.name
else:
lang_name = lang
others += lang_name + ": " + str(round(langs_out[lang]/this_total, 4)) + ", "
if iso3 is not None:
subregion = iso3_to_subregion[iso3]
if subregion in subregion_to_percents.keys():
subregion_to_percents[subregion][0] += langs_in
subregion_to_percents[subregion][1] += this_total
subregion_to_filepaths[subregion][0].extend([chunk[1] for chunk in country_with_imgs[country][0]])
subregion_to_filepaths[subregion][1].extend([chunk[1] for chunk in country_with_imgs[country][1]])
subregion_to_embeddings[subregion][0].extend([chunk[0] for chunk in country_with_imgs[country][0]])
subregion_to_embeddings[subregion][1].extend([chunk[0] for chunk in country_with_imgs[country][1]])
else:
subregion_to_percents[subregion] = [langs_in, this_total]
subregion_to_filepaths[subregion] = [[chunk[1] for chunk in country_with_imgs[country][0]], [chunk[1] for chunk in country_with_imgs[country][1]]]
subregion_to_embeddings[subregion] = [[chunk[0] for chunk in country_with_imgs[country][0]], [chunk[0] for chunk in country_with_imgs[country][1]]]
tourist_percent = 1.0 - (langs_in / this_total)
lp_under, lp_over = wilson(tourist_percent, this_total)
phrase = '{0} has {1}% non-local tags, and the extra tags are:\n\n{2}'.format(country, round(100.*tourist_percent, 4), others)
to_write_lower[country] = [phrase, tourist_percent]
iso3_to_percent[iso3] = lp_under
def lang_dist_by_country(country):
print(to_write_lower[country][0][:-2])
subregion_to_accuracy = {}
subregion_to_percents_phrase = {}
for key in subregion_to_percents.keys():
if not os.path.exists('results/{0}/{1}/{2}_info.pkl'.format(folder_name, 10, key.replace(' ', '_'))):
low_bound, high_bound = wilson(1 - subregion_to_percents[key][0] / subregion_to_percents[key][1], subregion_to_percents[key][1])
clf = svm.SVC(kernel='linear', probability=False, decision_function_shape='ovr', class_weight='balanced')
clf_random = svm.SVC(kernel='linear', probability=False, decision_function_shape='ovr', class_weight='balanced')
tourist_features = subregion_to_embeddings[key][0]
local_features = subregion_to_embeddings[key][1]
if len(tourist_features) == 0 or len(local_features) == 0:
continue
tourist_features, local_features = np.array(tourist_features)[:, 0, :], np.array(local_features)[:, 0, :]
all_features = np.concatenate([tourist_features, local_features], axis=0)
num_features = int(np.sqrt(len(all_features)))
all_features = project(all_features, num_features)
labels = np.zeros(len(all_features))
labels[len(tourist_features):] = 1
clf.fit(all_features, labels)
acc = clf.score(all_features, labels)
probs = clf.decision_function(all_features)
np.random.shuffle(all_features)  # decouple features from labels to form the random baseline
clf_random.fit(all_features, labels)
acc_random = clf_random.score(all_features, labels)
value = acc / acc_random
subregion_to_percents_phrase[key] = [subregion_to_percents[key][0] / subregion_to_percents[key][1], '[{0} - {1}] for {2}'.format(round(low_bound, 4), round(high_bound, 4), subregion_to_percents[key][1])]
subregion_to_accuracy[key] = [acc, value, len(tourist_features), len(all_features), num_features]
tourist_probs = []
local_probs = []
for j in range(len(all_features)):
if j < len(tourist_features):
tourist_probs.append(-probs[j])
else:
local_probs.append(probs[j])
pickle.dump([labels, tourist_probs, local_probs, subregion_to_filepaths[key]], open('results/{0}/{1}/{2}_info.pkl'.format(folder_name, 10, key.replace(' ', '_')), 'wb'))
subregion_local_svm_loc = 'results/{0}/{1}/subregion_svm.pkl'.format(folder_name, 10)
if not os.path.exists(subregion_local_svm_loc):
pickle.dump([subregion_to_accuracy, subregion_to_percents_phrase], open(subregion_local_svm_loc, 'wb'))
def subregion_language_analysis(key, num):
to_save = False
acc, value, num_tourists, num_total, num_features = pickle.load(open(subregion_local_svm_loc, 'rb'))[0][key]
print_statement = "Accuracy: {0}%, {1}x with {2} features. {3} out of {4} are tourist".format(round(acc*100., 3), round(value, 3), num_features, num_tourists, num_total)
if num is None:
to_save = True
num = 5
to_write[4] = ["(M10) Subregion that is most linearly separable between locals and tourists."]
to_write[4].append(print_statement)
else:
print(print_statement)
labels, tourist_probs, local_probs, the_filepaths = pickle.load(open('results/{0}/{1}/{2}_info.pkl'.format(folder_name, 10, key.replace(' ', '_')), 'rb'))
tourist_indices = np.argsort(np.array(tourist_probs))
local_indices = np.argsort(np.array(local_probs))
the_indices = [tourist_indices, local_indices]
the_probs = [tourist_probs, local_probs]
def display_chunk(local=0, correct=True, to_save=False, name=None):
this_filepaths = the_filepaths[local]
this_indices = the_indices[local]
this_probs = the_probs[local]
collected_filepaths = []
if correct:
counter = 0
else:
counter = -1
while len(collected_filepaths) < num:
try:
index_a = this_indices[counter]
except IndexError:  # counter ran past the available indices
break
file_path_a = this_filepaths[index_a]
if (this_probs[index_a] > 0 and correct) or (this_probs[index_a] < 0 and not correct):
collected_filepaths.append(file_path_a)
if correct:
counter += 1
else:
counter -= 1  # walk backwards from the end of the sorted indices
if to_save and first_pass:
this_loc = "results/{0}/{1}/2_{2}.png".format(folder_name, save_loc, name)
if len(collected_filepaths) > 0:
fig = plt.figure(figsize=(16, 8))
for i in range(num):
ax = fig.add_subplot(1, num, i+1)
ax.axis("off")
if i >= len(collected_filepaths):
image = np.ones((3, 3, 3))
else:
image, _ = dataset.from_path(collected_filepaths[i])
image = image.data.cpu().numpy().transpose(1, 2, 0)
im = ax.imshow(image, extent=SAME_EXTENT)
plt.tight_layout()
plt.savefig(this_loc, bbox_inches = 'tight')
plt.close()
else:
os.system("cp util_files/no_images.png {}".format(this_loc))
elif len(collected_filepaths) > 0:
display_filepaths(collected_filepaths, width = 800//len(collected_filepaths), height=800//len(collected_filepaths))
else:
print("No images in this category")
if not to_save:
print("Tourist: Correct")
else:
to_write[4].append("Tourist: Correct")
display_chunk(0, True, to_save, 'a')
if not to_save:
print("Tourist: Incorrect")
else:
to_write[4].append("Tourist: Incorrect")
display_chunk(0, False, to_save, 'b')
if not to_save:
print("Local: Incorrect")
else:
to_write[4].append("Local: Incorrect")
display_chunk(1, False, to_save, 'c')
if not to_save:
print("Local: Correct")
else:
to_write[4].append("Local: Correct")
display_chunk(1, True, to_save, 'd')
subregion_to_accuracy, subregion_to_percents_phrase = pickle.load(open(subregion_local_svm_loc, 'rb'))
subregion_svm_options = []
most_different_subregion_value = 1.2
most_different_subregion = None
for subregion, value in sorted(subregion_to_accuracy.items(), key=lambda x: x[1][1], reverse=True):
acc, value, num_tourists, num_total, num_features = subregion_to_accuracy[subregion]
if acc > .75 and value > most_different_subregion_value:
most_different_subregion_value = value
most_different_subregion = subregion
subregion_svm_options.append(('{0}: {1}% and {2}x'.format(subregion, round(100.*acc, 3), round(value, 3)), subregion))
def non_local_lang_map():
iso3_to_bin = {}
num_colors = 20
cm = plt.get_cmap('Blues')
bins = np.linspace(0., 1., num_colors)
scheme = [cm(i / num_colors) for i in range(num_colors)]
for key in iso3_to_percent.keys():
iso3_to_bin[key] = np.digitize(iso3_to_percent[key], bins) - 1  # -1: np.digitize counts from 1 for values above the first edge
fig = plt.figure(figsize=(15, 7))
fontsize = 20
ax = fig.add_subplot(111, facecolor='w', frame_on=False)
fig.suptitle('Percentage of tags in non-local language', fontsize=fontsize, y=.95)
m = Basemap(lon_0=0, projection='robin')
m.drawmapboundary(color='w')
shapefile = 'util_files/ne_10m_admin_0_countries_lakes'
m.readshapefile(shapefile, 'units', color='#444444', linewidth=.2)
for info, shape in zip(m.units_info, m.units):
iso3 = info['ADM0_A3']
if iso3 not in iso3_to_bin.keys():
color = '#dddddd'
else:
try:
color = scheme[iso3_to_bin[iso3]]
except IndexError:
print(iso3)
print("this index: {0} when length is {1}".format(iso3_to_bin[iso3], len(scheme)))
patches = [Polygon(np.array(shape), True)]
pc = PatchCollection(patches)
pc.set_facecolor(color)
ax.add_collection(pc)
# Cover up Antarctica so legend can be placed over it.
ax.axhspan(0, 1000 * 1800, facecolor='w', edgecolor='w', zorder=2)
# Draw color legend.
ax_legend = fig.add_axes([0.35, 0.14, 0.3, 0.03], zorder=3)
cmap = mpl.colors.ListedColormap(scheme)
cb = mpl.colorbar.ColorbarBase(ax_legend, cmap=cmap, ticks=bins, boundaries=bins, orientation='horizontal')
step = len(bins) // 3
spots = [0, step, step * 2, len(bins) - 1]
cb.ax.set_xticklabels([str(round(i, 2)) if j in spots else '' for j, i in enumerate(bins)])
cb.ax.tick_params(labelsize=fontsize)
if first_pass:
plt.savefig("results/{0}/{1}/3.png".format(folder_name, save_loc))
to_write[5] = ["(M10) Map representing the fraction of tags in a country that are not labeled in a local language."]
plt.show()
# -
# ## Analyses
# <a id="metric10_analyses"></a>
# For each language, its count contributes to every country that has that language as a national language; the map shows the resulting totals.
language_representation_map()
# Languages represented in the dataset, as detected by FastText. A raw three-letter code means the language name for that code could not be found automatically.
interact(language_counts, topn=widgets.IntSlider(min=1, max=30, step=1, value=10));
# Lets you select a country (shown with the fraction of its images tagged in a non-local language) to see which languages those tags are in.
pairs = [('{0}: {1}'.format(country, round(value[1], 3)), country) for country, value in sorted(to_write_lower.items(), key=lambda x: x[1][1], reverse=True)]
interact(lang_dist_by_country, country=widgets.Dropdown(options=pairs));
# Shows the subregion and how accurately a linear model can separate images taken by locals vs tourists. Ratio is accuracy over that of randomly shuffled labels, as mentioned before.
# +
num_sub_widget = widgets.IntSlider(min=1, max=20, step=1, value=5)
key_widget = widgets.Dropdown(options=subregion_svm_options, layout=Layout(width='400px'))
all_things = [widgets.Label('Subregion, acc, acc/acc_random',layout=Layout(padding='0px 0px 0px 5px', width='170px')), key_widget, widgets.Label('Num',layout=Layout(padding='0px 5px 0px 40px', width='80px')), num_sub_widget]
if first_pass and most_different_subregion is not None:
subregion_language_analysis(most_different_subregion, None)
ui = HBox(all_things)
out = widgets.interactive_output(subregion_language_analysis, {'key': key_widget, 'num': num_sub_widget})
display(ui, out)
# -
# Shows for each country, the percentage of tags in a non-local language.
non_local_lang_map()
# Confidence bounds on the fraction of each subregion's languages that are non-local, and the number of images from that subregion.
print("Bounds on fraction of each subregion's languages that are non-local")
for subregion, percent in sorted(subregion_to_percents_phrase.items(), key=lambda x: x[1][0], reverse=True):
print("{0}: {1}".format(subregion, percent[1]))
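# For reference, the `wilson` helper used throughout this metric (defined earlier in the notebook) is assumed to compute the standard Wilson score interval; a sketch at 95% confidence (z = 1.96 is an assumption, and the notebook's own helper may differ):
import math
def wilson_sketch(p, n, z=1.96):
    """Wilson score interval for an observed proportion p out of n samples."""
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half
toy_lo, toy_hi = wilson_sketch(0.3, 100)  # bounds bracket 0.3 and tighten as n grows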
# ## Setting up summary pdf
# <a id="summarypdf"></a>
first_pass = False
def write_pdf(numbers):
for i in numbers:
if i in to_write.keys():
if i not in [2, 4]:
for sentence in to_write[i]:
pdf.write(5, sentence)
pdf.ln()
if i == 0:
pdf.image('results/{0}/{1}/0.png'.format(folder_name, save_loc), h=80)
pdf.ln()
elif i == 2:
pdf.write(5, to_write[i][0])
pdf.ln()
pdf.write(5, to_write[i][1])
pdf.ln()
pdf.image('results/{0}/{1}/1_a.png'.format(folder_name, save_loc), w=160)
pdf.ln()
pdf.write(5, to_write[i][2])
pdf.ln()
pdf.image('results/{0}/{1}/1_b.png'.format(folder_name, save_loc), w=160)
pdf.ln()
pdf.write(5, to_write[i][3])
pdf.ln()
pdf.image('results/{0}/{1}/1_c.png'.format(folder_name, save_loc), w=160)
pdf.ln()
pdf.write(5, to_write[i][4])
pdf.ln()
pdf.image('results/{0}/{1}/1_d.png'.format(folder_name, save_loc), w=160)
pdf.ln()
elif i == 4:
pdf.write(5, to_write[i][0])
pdf.ln()
pdf.write(5, to_write[i][1])
pdf.ln()
pdf.write(5, to_write[i][2])
pdf.ln()
pdf.image('results/{0}/{1}/2_a.png'.format(folder_name, save_loc), w=160)
pdf.ln()
pdf.write(5, to_write[i][3])
pdf.ln()
pdf.image('results/{0}/{1}/2_b.png'.format(folder_name, save_loc), w=160)
pdf.ln()
pdf.write(5, to_write[i][4])
pdf.ln()
pdf.image('results/{0}/{1}/2_c.png'.format(folder_name, save_loc), w=160)
pdf.ln()
pdf.write(5, to_write[i][5])
pdf.ln()
pdf.image('results/{0}/{1}/2_d.png'.format(folder_name, save_loc), w=160)
pdf.ln()
elif i == 5:
pdf.image('results/{0}/{1}/3.png'.format(folder_name, save_loc), h=80)
pdf.ln()
pdf.ln(h=3)
pdf.dashed_line(10, pdf.get_y(), 200, pdf.get_y())
pdf.ln(h=3)
# +
from fpdf import FPDF
pdf = FPDF()
pdf.add_page()
pdf.set_font('Arial', 'B', 16)
pdf.write(5, "Geography-Based Summary")
pdf.ln()
pdf.ln()
# Overview Statistics
pdf.set_font('Arial', 'B', 12)
pdf.write(5, "Overview Statistics")
pdf.ln()
pdf.ln(h=3)
pdf.line(10, pdf.get_y(), 200, pdf.get_y())
pdf.ln(h=3)
pdf.set_font('Arial', '', 12)
write_pdf([0, 3, 5])
# Interesting findings
pdf.set_font('Arial', 'B', 12)
pdf.write(5, "Sample Interesting Findings")
pdf.ln()
pdf.ln(h=3)
pdf.line(10, pdf.get_y(), 200, pdf.get_y())
pdf.ln(h=3)
pdf.set_font('Arial', '', 12)
write_pdf([1, 2, 4])
# Other metrics in the notebook
pdf.set_font('Arial', 'B', 12)
pdf.write(5, "Some of the other metrics in the notebook")
pdf.ln()
pdf.ln(h=3)
pdf.line(10, pdf.get_y(), 200, pdf.get_y())
pdf.ln(h=3)
pdf.set_font('Arial', '', 12)
pdf.write(5, "- (M5) Image breakdown by country and subregion")
pdf.ln()
pdf.write(5, "- (M5) Dataset representation map")
pdf.ln()
pdf.write(5, "- (M6) Over/under representations of tags by subregion")
pdf.ln()
pdf.write(5, "- (M10) Visual representation of what languages are represented")
pdf.ln()
pdf.write(5, "- (M10) What languages each country's tags are in")
pdf.ln()
pdf.output('results/{0}/{1}/summary.pdf'.format(folder_name, save_loc), "F")
# -
| Geography Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
print(torch.__version__, torch.cuda.is_available())
import torchvision
print(torchvision.__version__)
# ## Import Libraries
# +
# Some basic setup:
# Setup detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
# +
from detectron2.structures import BoxMode
# Standard Category
from enum import Enum
class FoodClass(Enum):
    banana = 0
    apple = 1
    carrot = 2
    tomato = 3
    milk = 4
## Capture images in data set
## Works only for bounding box annotation
def get_imageset_dicts(img_dir, annotation_json):
    json_file = os.path.join(img_dir, annotation_json)
    with open(json_file) as f:
        imgs_anns = json.load(f)
    dataset_dicts = []
    for idx, v in enumerate(imgs_anns.values()):
        record = {}
        # print("Processing file: %s" % v["filename"])
        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]
        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width
        annos = v["regions"]
        objs = []
        ## For bounding polygon annotation
        # for _, anno in annos.items():
        #     assert anno["shape_attributes"]
        #     assert anno["region_attributes"]
        #     shape_attr = anno["shape_attributes"]
        #     px = shape_attr["all_points_x"]
        #     py = shape_attr["all_points_y"]
        #     poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
        #     poly = [p for x in poly for p in x]
        #     reg_attr = anno["region_attributes"]
        #     f_class = FoodClass[reg_attr["food-class"]]
        #     obj = {
        #         "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
        #         "bbox_mode": BoxMode.XYXY_ABS,
        #         "segmentation": [poly],
        #         "category_id": f_class.value,
        #     }
        #     objs.append(obj)
        ## Works only for bounding box annotation
        for anno in annos:
            assert anno["shape_attributes"]
            assert anno["region_attributes"]
            shape_attr = anno["shape_attributes"]
            px = shape_attr["x"]
            py = shape_attr["y"]
            pw = shape_attr["width"]
            ph = shape_attr["height"]
            poly = [(px, py), (px + pw, py), (px + pw, py + ph), (px, py + ph)]
            poly = [p for xy in poly for p in xy]  # flatten corner tuples for detectron2
            reg_attr = anno["region_attributes"]
            f_class = FoodClass[reg_attr["food-class"]]
            obj = {
                "bbox": [px, py, px + pw, py + ph],  # x, y are the top-left corner
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": f_class.value,
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts
# +
## Define the paths and register the dataset
dataset = "ann_test_1"
dataset_ref = "/users/pgrad/hegdeb/IoT/fidarProj/fidar/data/processed"
via_json = "via_regions.json"
proj_path = "/users/pgrad/hegdeb/IoT/fidarProj/fidar/"
# DatasetCatalog.remove(dataset)
DatasetCatalog.register(dataset, lambda:get_imageset_dicts(os.path.join(dataset_ref, dataset), via_json))
MetadataCatalog.get(dataset).set(thing_classes=[cat.name for cat in FoodClass])
# -
from detectron2.data.datasets import register_coco_instances
register_coco_instances(dataset + "_coco", {}, "coco_regions.json", dataset_ref)
dataset_metadata = MetadataCatalog.get(dataset)
dataset_metadata_coco = MetadataCatalog.get(dataset + "_coco")
# ## Visualize the data
from matplotlib import pyplot as plt
dataset_dicts = get_imageset_dicts(os.path.join(dataset_ref, dataset), via_json)
for d in random.sample(dataset_dicts, 3):
    img = cv2.imread(d["file_name"])
    visualizer = Visualizer(img[:, :, ::-1], metadata=dataset_metadata, scale=0.5)
    out = visualizer.draw_dataset_dict(d)
    plt.imshow(out.get_image()[:, :, ::-1])
    plt.show()  # show each sample in its own figure
# ## Fine Tune the model
#
# +
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = (dataset+"_coco",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(FoodClass)
cfg.OUTPUT_DIR = os.path.join("/users/pgrad/hegdeb/IoT/fidarProj/fidar/", "models", dataset+"_coco")
# -
print("Results will be in %s"%cfg.OUTPUT_DIR)
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
# + tags=["outputPrepend"]
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
# +
cfg.DATASETS.TRAIN = (dataset,)
cfg.OUTPUT_DIR = os.path.join("/users/pgrad/hegdeb/IoT/fidarProj/fidar/", "models", dataset)
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
# -
# Look at training curves in tensorboard:
# %load_ext tensorboard
# %tensorboard --logdir output
# cfg already contains everything we've set previously. Now we changed it a little bit for inference:
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set a custom testing threshold
predictor = DefaultPredictor(cfg)
# +
from detectron2.utils.visualizer import ColorMode
# im = cv2.imread("/users/pgrad/hegdeb/IoT/fidarProj/fidar/data/raw/sample_img/test1.png")
im = cv2.imread("/users/pgrad/hegdeb/IoT/fidarProj/fidar/data/raw/ann_test_1/fi13.jpg")
outputs = predictor(im) # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
v = Visualizer(im[:, :, ::-1],
               metadata=dataset_metadata,
               scale=0.5
               # , instance_mode=ColorMode.IMAGE_BW  # remove the colors of unsegmented pixels. This option is only available for segmentation models
               )
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
plt.imshow(out.get_image()[:, :, ::-1])
# -
print(outputs)
| notebooks/MultiClassTest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import nbinom, geom
theta = np.linspace(0.0001, 0.1, num=100)
likelihood_gm = geom.pmf(52, theta)
likelihood_gm /= np.max(likelihood_gm)
likelihood_nb = nbinom.pmf(552 - 5, 5, theta)
likelihood_nb /= np.max(likelihood_nb)
plt.plot(theta, likelihood_gm)
plt.plot(theta, likelihood_nb, '--')
plt.title('Prevalence probability')
plt.xlabel(r'$\theta$')
plt.ylabel('Likelihood');
| python/chapter-2/EX2-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Copy Items From one Organization to Another
# ### Run this sample to connect to the source & destination GIS and get started:
# #### You will be prompted to input the url, username, and password for the source agol organization account.
#
#
# +
from arcgis.gis import GIS
from IPython.display import display
uriS = input("What is the source org/portal url: ")
unS = input("What is the source org Username: ")
def authSource():
    global sourceOrg
    sourceOrg = GIS(uriS, unS)
    print("Logged in as: " + sourceOrg.properties.user.username)
authSource()
# -
# #### After running the next cell, a browser window will launch requesting an authentication string via SAML as Enterprise Logins are enforced in government.maps.arcgis.com. Copy the string and paste it directly into the notebook.
#
# 
# +
destOrg = GIS("https://government.maps.arcgis.com", client_id = "YaMKYbXuSkd02G9i")
print("Logged in as: " + destOrg.properties.user.username)
orig_userid = unS
new_userid = unS
#capture/print information about sourceOrg User
sourceUser = sourceOrg.users.get(orig_userid)
newUser = destOrg.users.get(new_userid)
usergroups = sourceUser['groups']
for group in usergroups:
    grp = sourceOrg.groups.get(group['id'])
# -
# #### You will be prompted for the item id which can be obtained from the source agol organization.
# +
a = input("What is the Item ID: ")
def item(a):
    global itemId
    itemId = sourceOrg.content.get(a)
    print(itemId)
item(a)
#define a function to instantiate the list, as the ContentManager class clone_items method requires an input list param.
def createList():
    global li
    li = []
    li.append(itemId)
createList()
#function that clones the items in the list to the destination org
try:
    def copyItem():
        copy = destOrg.content
        copy.clone_items(li, copy_data=True)
    copyItem()
    print("Copy Items was successful!")
except:
    print("There was an error with the copy process")
finally:
    print("The script is finished...")
# -
| samples/Old_Samples/Clone_Items_Input.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
from tensorflow import keras
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
tf.__version__
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
print(X_train_full.shape)
print(X_train_full.dtype)
# Differences between loading the dataset with Keras instead of Scikit-Learn:
# 1. Keras provides samples as 28x28 arrays instead of flat 784-value vectors
# 2. the pixel intensities are _int_ and not _float_
X_valid, X_train = X_train_full[:5000] / 255.0, X_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.0
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print(class_names[y_train[0]])
model = keras.models.Sequential() # simplest Keras model. Sequence of layers connected
model.add(keras.layers.Flatten(input_shape=[28, 28])) # flats the entries from 28x28 to 1D array
model.add(keras.layers.Dense(300, activation="relu")) # manages its own weight matrix and a vector of bias terms
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax")) # softmax because the classes are exclusive
# Alternatively you can do so
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
tf.keras.utils.plot_model(model, show_shapes=True)
# Lots of parameter in one layer give flexibility but likewise the risk of overfitting
print(model.layers)
print(model.layers[1].name)
print(model.get_layer('dense_3').name)
weights, biases = model.layers[1].get_weights()
print(weights)
print(weights.shape)
print(biases.shape)
# If I know the input shape when creating the model, it is best to specify it.
#
# After the model is created, I must call its _compile()_ method to specify the loss function and the optimizer to use
model.compile(loss="sparse_categorical_crossentropy", # because we have sparse labels -> for each instance there's just a target class index and exclusive classes
             optimizer="sgd", # better to use keras.optimizers.SGD(lr=??) to set the learning rate
metrics=["accuracy"])
# keras.utils.to_categorical() transforms sparse labels into one-hot vectors.
#
# np.argmax() with axis=1 goes the other way round
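# As a quick sketch of that round trip, using a plain numpy identity-matrix lookup (which produces the same one-hot layout as keras.utils.to_categorical):

```python
import numpy as np

sparse = np.array([3, 0, 2])        # sparse class indices
one_hot = np.eye(4)[sparse]          # one-hot, one row per sample
back = np.argmax(one_hot, axis=1)    # recover the sparse labels
```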
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid))
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1) # set the vertical range to [0-1]
plt.show()
model.evaluate(X_test, y_test)
# #### Some hyperparameter to tune
# 1. learning rate
# 2. the optimizer
# 3. number of layers
# 4. number of neurons per layer
# 5. activation functions to use for each hidden layer
# 6. the batch size
#
# <b>ALERT: DO NOT TWEAK THE HYPERPARAMETERS ON THE TEST SET</b>
# ### Using the Model to Make Predictions
X_new = X_test[:3]
y_proba = model.predict(X_new)
y_proba.round(2)
y_pred = model.predict_classes(X_new)
print(y_pred)
print(np.array(class_names)[y_pred])
y_new = y_test[:3]
y_new
# ## Building a Regression MLP Using the Sequential API
# +
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
# -
# The main differences are the fact that the output layer has a single neuron (since we only want to
# predict a single value) and uses no activation function, and the loss function is the
# mean squared error.
#
# Since the dataset is quite noisy, we just use a single hidden layer with fewer neurons than before, to avoid overfitting.
# +
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer="sgd")
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
X_new = X_test[:3] # pretend these are new instances
y_pred = model.predict(X_new)
# -
# ## Building Complex Models Using the Functional API
# This architecture makes it possible for the neural network to learn both deep patterns (using the deep path) and simple rules (through the short path).
#
# A regular MLP forces all the data to flow through the full stack of layers, thus
# simple patterns in the data may end up being distorted by this sequence of transformations.
inputt = keras.layers.Input(shape=X_train.shape[1:]) # specification of the kind of input model will get
hidden1 = keras.layers.Dense(30, activation="relu")(inputt) # call it like a function, passing it the input.
# This is why this is called the Functional API
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.Concatenate()([inputt, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[inputt], outputs=[output])
# Once you have built the Keras model, everything is exactly like earlier, so no need to
# repeat it here: you must compile the model, train it, evaluate it and use it to make
# predictions.
input_A = keras.layers.Input(shape=[5])
input_B = keras.layers.Input(shape=[6])
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_A, input_B], outputs=[output])
# When we call the fit() method, instead of passing a single input matrix X_train, we must pass a
# pair of matrices (X_train_A, X_train_B): one per input
# +
model.compile(loss="mse", optimizer="sgd")
X_train_A, X_train_B = X_train[:, :5], X_train[:, 2:]
X_valid_A, X_valid_B = X_valid[:, :5], X_valid[:, 2:]
X_test_A, X_test_B = X_test[:, :5], X_test[:, 2:]
X_new_A, X_new_B = X_test_A[:3], X_test_B[:3]
history = model.fit((X_train_A, X_train_B), y_train, epochs=20,
validation_data=((X_valid_A, X_valid_B), y_valid))
mse_test = model.evaluate((X_test_A, X_test_B), y_test)
y_pred = model.predict((X_new_A, X_new_B))
# -
# Use cases:
# - the task demands it. For instance, locating and classifying the main object in a picture --> both a classification and a regression task
# - you may have multiple independent tasks to perform based on the same data
# - as a regularization technique. For example, you may want to add some auxiliary outputs in a neural network
# architecture to ensure that the underlying part of the network learns something useful on its own, without relying on the rest of the network.
# [...] Same as above, up to the main output layer
output = keras.layers.Dense(1)(concat)
aux_output = keras.layers.Dense(1)(hidden2)
model = keras.models.Model(inputs=[input_A, input_B],
outputs=[output, aux_output])
# Each output will need its own loss function.
#
# Keras will compute all these losses and simply add them up to get the final loss used for training.
#
# It is possible to set all the loss weights when compiling the model:
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1], optimizer="sgd")
# We need to provide some labels for each output.
history = model.fit([X_train_A, X_train_B], [y_train, y_train], epochs=20,
validation_data=([X_valid_A, X_valid_B], [y_valid, y_valid]))
total_loss, main_loss, aux_loss = model.evaluate([X_test_A, X_test_B], [y_test, y_test])
y_pred_main, y_pred_aux = model.predict([X_new_A, X_new_B])
# ## Building Dynamic Models Using the Subclassing API
# +
class WideAndDeepModel(keras.models.Model):
    def __init__(self, units=30, activation="relu", **kwargs):
        super().__init__(**kwargs)  # handles standard args (e.g., name)
        self.hidden1 = keras.layers.Dense(units, activation=activation)
        self.hidden2 = keras.layers.Dense(units, activation=activation)
        self.main_output = keras.layers.Dense(1)
        self.aux_output = keras.layers.Dense(1)

    def call(self, inputs):
        input_A, input_B = inputs
        hidden1 = self.hidden1(input_B)
        hidden2 = self.hidden2(hidden1)
        concat = keras.layers.concatenate([input_A, hidden2])
        main_output = self.main_output(concat)
        aux_output = self.aux_output(hidden2)
        return main_output, aux_output

model = WideAndDeepModel()
# -
# This extra flexibility comes at a cost:
# - your model’s architecture is hidden within the call() method, so Keras cannot easily inspect it,
# - Keras cannot save or clone it,
# - when you call the summary() method, you only get a list of layers, without any information on how they are connected to each other
# - Keras cannot check types and shapes ahead of time, and it is easier to make mistakes
#
#
# TO SUMMARIZE: unless you really need that extra flexibility, you should probably stick to the Sequential API or the Functional API
# ### Saving and Restoring a Model
# - Sequential API or Functional API
# - Saving
# ```python
# model = keras.models.Sequential([...])
# model.compile([...])
# model.fit([...])
# model.save("my_keras_model.h5")
# ```
# - Restoring
# ```python
# model = keras.models.load_model("my_keras_model.h5")
# ```
#
# - Dynamic Models:
# You can just save the model's parameters with _save_weights()_ and _load_weights()_, but anything else you must save yourself
# ## Using Callbacks
# The fit() method accepts a callbacks argument that lets you specify a list of objects
# that Keras will call during training at the start and end of training, at the start and end
# of each epoch and even before and after processing each batch.
#
# If you use a validation set during training, you can set ```save_best_only=True ``` when creating the ModelCheckpoint. In this case, it will only save your model when its performance on the validation set is the best so far.
checkpoint_cb = keras.callbacks.ModelCheckpoint("my_keras_model.h5", save_best_only=True)
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid), callbacks=[checkpoint_cb])
model = keras.models.load_model("my_keras_model.h5") # rollback to best model
# Another way to implement early stopping is with the EarlyStopping callback.
#
# You can combine both callbacks to both save checkpoints of your model, and actually interrupt training early when there is no more progress
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[checkpoint_cb, early_stopping_cb])
# If you need extra control, you can easily write your own custom callbacks
# ```python
# class PrintValTrainRatioCallback(keras.callbacks.Callback):
# def on_epoch_end(self, epoch, logs):
# print("\nval/train: {:.2f}".format(logs["val_loss"] / logs["loss"]))
# ```
#
# ## Visualization Using TensorBoard
#
# TensorBoard is a great interactive visualization tool.
#
# To use it, you must modify your program so that it outputs the data you want to visualize to special binary log files called event files.
#
# Each binary data record is called a summary.
#
# In general, you want to point the TensorBoard server to a root log directory, and configure your program so that it writes to a different subdirectory every time it runs.
# +
import os
root_logdir = os.path.join(os.curdir, "my_logs")
def get_run_logdir():
    import time
    run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S")
    return os.path.join(root_logdir, run_id)
run_logdir = get_run_logdir() # e.g., './my_logs/run_2019_01_16-11_28_43'
# [...] Build and compile your model
model.compile(loss="mse", optimizer="sgd")
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
history = model.fit(X_train, y_train, epochs=30,
validation_data=(X_valid, y_valid), callbacks=[tensorboard_cb])
# -
# To start the TensorBoard server type
# ```BASH
# python -m tensorboard.main --logdir=./my_logs --port=6006
# ```
# Alternatively type directly in Jupyter:
# ```Jupyter
# # %load_ext tensorboard
# # %tensorboard --logdir=./my_logs --port=6006
# ```
# +
tensor_logdir = get_run_logdir()
writer = tf.summary.create_file_writer(tensor_logdir)
with writer.as_default():
    for step in range(1, 1000 + 1):
        tf.summary.scalar("my_scalar", np.sin(step / 10), step=step)
        data = (np.random.randn(100) + 2) * step / 100  # some random data
        tf.summary.histogram("my_hist", data, buckets=50, step=step)
        images = np.random.randn(2, 32, 32, 3)  # random 32x32 RGB images
        tf.summary.image("my_images", images * step / 1000, step=step)
        # ...
# -
# # Fine-Tuning NN Hyperparameters
# One option is to simply try many combinations of hyperparameters and see which one works best on the validation set. For this, we need to wrap our Keras models in objects that mimic regular Scikit-Learn regressors
def build_model(n_hidden=1, n_neurons=30, learning_rate=3e-3, input_shape=[8]):
    model = keras.models.Sequential()
    model.add(keras.layers.InputLayer(input_shape=input_shape))
    for layer in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation="relu"))
    model.add(keras.layers.Dense(1))
    optimizer = keras.optimizers.SGD(lr=learning_rate)
    model.compile(loss="mse", optimizer=optimizer)
    return model
keras_reg = keras.wrappers.scikit_learn.KerasRegressor(build_model)
print(type(keras_reg))
# The _KerasRegressor_ object is a thin wrapper around the Keras model built using build_model().
#
# Now we can use this object like a regular Scikit-Learn regressor: we can train it using its fit() method, then evaluate it using its score() method, and use it to make predictions using its predict() method
keras_reg.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[keras.callbacks.EarlyStopping(patience=10)])
mse_test = keras_reg.score(X_test, y_test)
y_pred = keras_reg.predict(X_new)
# +
from scipy.stats import reciprocal
from sklearn.model_selection import RandomizedSearchCV
import numpy as np
param_distribs = {
"n_hidden": [0, 1, 2, 3],
"n_neurons": np.arange(1, 100),
"learning_rate": reciprocal(3e-4, 3e-2),
}
rnd_search_cv = RandomizedSearchCV(keras_reg, param_distribs, n_iter=10, cv=3)
rnd_search_cv.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[keras.callbacks.EarlyStopping(patience=10)])
# -
rnd_search_cv.best_params_
rnd_search_cv.best_score_
# There are many techniques to explore a search space much more efficiently than randomly.
#
# This takes care of the “zooming” process for you and leads to much better solutions in much less time. Here are a few Python libraries you can use to optimize hyperparameters.
#
# - Hyperopt -> for optimizing over all sorts of complex search spaces
# - Hyperas, kopt or Talos -> optimizing hyperparameters for Keras model (first two based on Hyperopt)
# - Scikit-Optimize -> a general-purpose optimization library
# - Spearmint -> Bayesian optimization library
# - Hyperband -> based on the recent Hyperband paper
# - Sklearn-Deap -> hyperparameter optimization library based on evolutionary algorithms, also with a GridSearchCV-like interface
# - Many more...
#
#
# Many companies offer services for such optimizations:
# * Google Cloud AI Platform
# * Arimo
# * SigOpt
# * CallDesk's Oscar
# ## Number of Hidden Layers
# transfer learning -> the network will only have to learn the higher-level structures, not all the layers.
#
# For many problems you can start with just one or two hidden layers and it will work just fine.
#
# For more complex problems, you can gradually ramp up the number of hidden layers, until you start overfitting the training set. Very complex tasks, such as large image classification or speech recognition, typically require networks with dozens of layers
#
# ## Number of Neurons per Hidden Layer
# The number of neurons in the input and output layers is determined by the type of input and output your task requires. For example, the MNIST task requires 28 x 28 = 784 input neurons and 10 output neurons.
#
# A simpler and more efficient approach is to pick a model with more layers and neurons than you actually need, then use early stopping to prevent it from overfitting and other regularization techniques, such as dropout --> <u>stretch pants</u> approach
# With this approach, you avoid bottleneck layers that could ruin your model.
#
# GENERAL RULE: I'll get more bang for my buck by increasing the number of layers instead of the number of neurons per layer
#
# ## Learning Rate, Batch Size and Other Hyperparameters
# 1. <u>Learning rate</u>: the optimal is about half of the maximum learning rate (i.e. the learning rate above which the training algorithm diverges).
# Train the model for a few hundred iterations, starting with a very low learning rate (1e-5) and gradually increasing it up to a very large value (10) --> multiply it by a constant factor at each iteration (exp(log(1e6)/500) to go from 1e-5 to 10 in 500 iterations).
# The optimal l.r. will be a bit lower than the point at which the loss starts to climb (typically 10 times lower)
#
# 2. <u>Optimizer</u>: Choosing a better optimizer than plain old Mini-batch Gradient Descent (see next chapter)
#
# 3. <u>Batch size</u>: one strategy is to try to use large batch size (for instance up to 8192) using learning rate warmup (small then ramping it up) but if training is unstable or performances are disappointing, then try using a small batch size instead
# 4. <u>Activation function</u>:
# - hidden layers -> ReLU is a good default
# - output layer: depends on my task
# 5. <u>Number of iterations</u>: use early stopping
#
# GENERAL RULE: if you modify any hyperparameter, make sure to update the learning rate as well
#
# <a href="https://homl.info/1cycle">More info regarding tuning NN hyperparameters - 2018 paper by <NAME></a>
#
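# The constant warm-up factor from point 1 above can be computed directly; a small sketch of the arithmetic (the range-test cell later in this notebook uses a gentler factor of 1.005):

```python
import math

# To go from 1e-5 to 10 in 500 iterations, multiply by a constant factor each step
factor = (10 / 1e-5) ** (1 / 500)  # equivalently math.exp(math.log(1e6) / 500)
```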
# # Exercises
#
# 2. Draw an ANN using the original artificial neurons (like the ones in Figure 10-3) that computes A ⊕ B (where ⊕ represents the XOR operation). Hint: A ⊕ B = (A ∧ ¬ B) ∨ (¬ A ∧ B)
# <center>
# <img src="ex2_solution.jpg" alt="ex2 solution">
# </center>
#
# 3. Why is it generally preferable to use a Logistic Regression classifier rather than a classical Perceptron (i.e., a single layer of threshold logic units trained using the Perceptron training algorithm)? How can you tweak a Perceptron to make it equivalent to a Logistic Regression classifier?
# <span style="color:gold">Because Perceptrons do not output a class probability but make predictions based on a hard threshold, and they are incapable of solving some trivial problems.</span>
#
# 4. Why was the logistic activation function a key ingredient in training the first MLPs?
# <span style="color:gold">Because once the backpropagation algorithm was discovered, it was found to work properly with that function instead of the step function. The logistic function has a well-defined nonzero derivative everywhere.</span>
#
# 5. Name three popular activation functions. Can you draw them?
# - <span style="color:gold">Sigmoid</span>
# - <span style="color:gold">Tanh</span>
# - <span style="color:gold">ReLU</span>
# - <span style="color:gold">Leaky ReLU</span>
#
# 6. Suppose you have an MLP composed of one input layer with 10 passthrough neurons, followed by one hidden layer with 50 artificial neurons, and finally one output layer with 3 artificial neurons. All artificial neurons use the ReLU activation function.
# * What is the shape of the input matrix X?
# <span style="color:gold">m x 10 with m = training_batch_size</span>
# * What about the shape of the hidden layer’s weight vector Wh, and the shape of its bias vector bh?
# <span style="color:gold">Wh = 10 x 50; bh.length = 50</span>
# * What is the shape of the output layer’s weight vector Wo, and its bias vector bo?
# <span style="color:gold">Wo = 50 x 3; bo.length = 3</span>
# * What is the shape of the network’s output matrix Y?
# <span style="color:gold">m x 3</span>
# * Write the equation that computes the network’s output matrix Y as a function of X, Wh, bh, Wo and bo.
# <span style="color:gold">Y* = ReLU(ReLU(XWh + bh)Wo + bo)</span>
#
#
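# The shape bookkeeping of exercise 6 can be checked numerically; a minimal numpy sketch (m = 4 here is an arbitrary batch size):

```python
import numpy as np

m = 4                          # arbitrary batch size
X = np.random.randn(m, 10)     # 10 input features
Wh = np.random.randn(10, 50)   # hidden layer: 50 neurons
bh = np.zeros(50)
Wo = np.random.randn(50, 3)    # output layer: 3 neurons
bo = np.zeros(3)
relu = lambda z: np.maximum(z, 0)
Y = relu(relu(X @ Wh + bh) @ Wo + bo)
```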
# 7. How many neurons do you need in the output layer if you want to classify email into spam or ham? What activation function should you use in the output layer? If instead you want to tackle MNIST, how many neurons do you need in the output layer, using what activation function? Answer the same questions for getting your network to predict housing prices as in Chapter 2.
#
# <span style="color:gold">I need 1 neuron. I should use the logistic activation function. </span>
#
# <span style="color:gold">If instead I want to tackle MNIST I should use 10 neurons and the softmax activation function.</span>
#
# <span style="color:gold">Housing problem: 1 neuron and ReLU or softplus, since I'd expect a positive output.</span>
#
# 8. What is backpropagation and how does it work? What is the difference between backpropagation and reverse-mode autodiff?
# <span style="color:gold">It's an algorithm to compute the gradients of the loss at every iteration. It handles one mini-batch at a time. After computing the prediction (forward pass) and the output error, the algorithm uses the derivatives (especially the chain rule) to go backward and measure how much every step contributed to the final loss value. In the end, it performs a Gradient Descent step to tweak all the connection weights in the network, using the error gradients it just computed.</span>
#
# <span style="color:gold">Difference backpropagation-reverse autodiff: backpropagation refers to the whole process of training an ANN using multiple backpropagation steps. Reverse-mode autodiff is just a technique to compute gradients efficiently, and it happens to be used by backpropagation</span>
#
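# A tiny numeric illustration of the chain rule at the heart of backpropagation, for a single linear neuron (all values here are made up for the example):

```python
# One linear neuron, squared-error loss L = (w*x - y)^2 / 2
w, x, y = 2.0, 3.0, 9.0
pred = w * x                 # forward pass
grad_w = (pred - y) * x      # chain rule: dL/dw = dL/dpred * dpred/dw
w -= 0.01 * grad_w           # one Gradient Descent step
```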
# 9. Can you list all the hyperparameters you can tweak in an MLP? If the MLP overfits the training data, how could you tweak these hyperparameters to try to solve the problem?
#
# <span style="color:gold">Learning rate, mini-batch size, number of hidden layers, number of neurons per layer, optimizer, number of iterations and the activation function.
# I can lower down the number of neurons per layer and hidden layers, regularize the learning rate gradually, decrease the mini-batch size and use the early stopping technique to stop the training when it has reached the optimal amount of iterations.</span>
#
# 10. Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Try adding all the bells and whistles (i.e., save checkpoints, use early stopping, plot learning curves using TensorBoard, and so on).
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

(X_full_train, y_full_train), (X_test, y_test) = keras.datasets.mnist.load_data(path="mnist.npz")
# X_full_train.shape = (60000, 28, 28)
# X_test.shape = (10000, 28, 28)
X_valid, X_train = X_full_train[:5000] / 255., X_full_train[5000:] / 255.
y_valid, y_train = y_full_train[:5000], y_full_train[5000:]
X_test = X_test / 255.
# +
K = keras.backend
K.clear_session()
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        super().__init__()
        self.factor = factor
        self.rates = []
        self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
# -
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
expon_lr = ExponentialLearningRate(factor=1.005)
history = model.fit(X_train, y_train, epochs=1,
validation_data=(X_valid, y_valid),
callbacks=[expon_lr])
plt.plot(expon_lr.rates, expon_lr.losses)
plt.gca().set_xscale('log')
plt.hlines(min(expon_lr.losses), min(expon_lr.rates), max(expon_lr.rates)) # Plot horizontal lines at each y from xmin to xmax.
plt.axis([min(expon_lr.rates), max(expon_lr.rates), 0, expon_lr.losses[0]])
plt.grid()
plt.xlabel("Learning rate")
plt.ylabel("Loss")
# The loss starts shooting back up violently when the learning rate goes over 6e-1, so let's try using half of that, at 3e-1:
# +
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# rebuild the model from scratch before training with the chosen learning rate
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(lr=3e-1),
              metrics=["accuracy"])
# +
import os
run_index = 3 # increment this at every run
run_logdir = os.path.join(os.curdir, "my_mnist_logs", "run_{:03d}".format(run_index))
run_logdir
# +
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
checkpoint_cb = keras.callbacks.ModelCheckpoint("my_mnist_model.h5", save_best_only=True)
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[checkpoint_cb, early_stopping_cb, tensorboard_cb])
# -
model = keras.models.load_model("my_mnist_model.h5") # rollback to best model
model.evaluate(X_test, y_test)
| Introduction to ANN with Keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Calculate (RESP) partial charges with Psi4
# import the following modules
import os
import psi4
import resp
import openbabel as ob
from rdkit import Chem
from rdkit.Chem import AllChem
# ### some helper functions
# +
def neutralize_atoms(mol):
pattern = Chem.MolFromSmarts("[+1!h0!$([*]~[-1,-2,-3,-4]),-1!$([*]~[+1,+2,+3,+4])]")
at_matches = mol.GetSubstructMatches(pattern)
at_matches_list = [y[0] for y in at_matches]
if len(at_matches_list) > 0:
for at_idx in at_matches_list:
atom = mol.GetAtomWithIdx(at_idx)
chg = atom.GetFormalCharge()
hcount = atom.GetTotalNumHs()
atom.SetFormalCharge(0)
atom.SetNumExplicitHs(hcount - chg)
atom.UpdatePropertyCache()
return mol
def cleanUp(psi4out_xyz):
deleteTheseFiles = ['1_default_grid.dat','1_default_grid_esp.dat','grid.dat','timer.dat']
deleteTheseFiles.append(psi4out_xyz)
for fileName in deleteTheseFiles:
if os.path.exists(fileName):
os.remove(fileName)
def get_xyz_coords(mol):
    if mol is not None:
num_atoms = mol.GetNumAtoms()
xyz_string=""
for counter in range(num_atoms):
pos=mol.GetConformer().GetAtomPosition(counter)
xyz_string = xyz_string + ("%s %12.6f %12.6f %12.6f\n" % (mol.GetAtomWithIdx(counter).GetSymbol(), pos.x, pos.y, pos.z) )
return xyz_string
def calcRESPCharges(mol, basisSet, method, gridPsi4 = 1):
options = {'BASIS_ESP': basisSet,
'METHOD_ESP': method,
'RESP_A': 0.0005,
'RESP_B': 0.1,
'VDW_SCALE_FACTORS':[1.4, 1.6, 1.8, 2.0],
'VDW_POINT_DENSITY':int(gridPsi4)
}
resp_charges = resp.resp([mol], [options])[0][1]
return resp_charges
# -
# ### Set some variables and stuff
method = 'b3lyp'
basisSet = '3-21g' #'6-31G**'
neutralize = True
psi4.set_memory('10 GB')
obConversion = ob.OBConversion()
obConversion.SetInAndOutFormats("xyz", "mol2")
singlePoint = True
path = "./data"
# ### Read sdf file (3D) into a list
inputFile = "./data/twoCpds.sdf"
molList = Chem.SDMolSupplier(inputFile, removeHs=False)
# ### ...or read a SMILES files into a list
# +
SMILESasInput = False
if SMILESasInput:
molList = []
inputFile = "./data/twoCpds.smi"
suppl = Chem.SmilesMolSupplier(inputFile, titleLine = False)
for mol in suppl:
mol = Chem.AddHs(mol)
AllChem.EmbedMolecule(mol)
try:
AllChem.MMFFOptimizeMolecule(mol)
except:
AllChem.UFFOptimizeMolecule(mol)
molList.append(mol)
# -
# ### Loop over compounds in list and calculate partial charges
for mol in molList:
    if mol is not None:
molId = mol.GetProp("_Name")
print('Trying:', molId)
if neutralize:
mol = neutralize_atoms(mol)
mol = Chem.AddHs(mol)
xyz_string = get_xyz_coords(mol)
psi_mol = psi4.geometry(xyz_string)
### single point calculation
outfile_mol2 = inputFile[:-4]+".mol2"
if singlePoint:
print('Running singlepoint...')
resp_charges = calcRESPCharges(psi_mol, basisSet, method, gridPsi4 = 1)
else:
print('Running geometry optimization...')
methodNbasisSet = method+"/"+basisSet
psi4.optimize(methodNbasisSet, molecule=psi_mol)
resp_charges = calcRESPCharges(psi_mol, basisSet, method, gridPsi4 = 1)
### save coords to xyz file
psi4out_xyz = molId + '.xyz'
psi_mol.save_xyz_file(psi4out_xyz,1)
### read xyz file and write as mol2
ob_mol = ob.OBMol()
obConversion.ReadFile(ob_mol, psi4out_xyz)
### set new partial charges
count = 0
for atom in ob.OBMolAtomIter(ob_mol):
newChg = resp_charges[count]
atom.SetPartialCharge(newChg)
count += 1
### write as mol2
outfile_mol2 = path+"/"+molId+"_partialChgs.mol2"
print("Finished. Saved compound with partial charges as mol2 file: %s" % outfile_mol2)
obConversion.WriteFile(ob_mol, outfile_mol2)
### clean up
cleanUp(psi4out_xyz)
| CalculatePartialChargesWithPsi4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import os, sys, h5py, glob, time
import seaborn as sns
from matplotlib import pyplot as plt
from PIL import Image
import numpy as np
# +
masks_path = '../bldgs_masks/*_mask.png'
masks_800 = []
masks_1200 = []
hdf5_mask_path = '../masks.hdf5'
files = glob.glob(masks_path)
width, height = 0, 0
beg = time.time()
for mask in files:
image = Image.open(mask)
# image.load()
width, height = image.size
if width == 800:
masks_800.append(mask)
if width == 1200:
masks_1200.append(mask)
# print(width, height)
end = time.time()
coster = end - beg
print(coster)
# -
print(len(masks_800), len(masks_1200))
# +
hdf5_mask_path_1200 = '../masks_1200.hdf5'
# -
# cv2 loads images as (height, width, channels); the masks are 968 px tall
mask_800_shape = (len(masks_800), 968, 800, 3)
mask_1200_shape = (len(masks_1200), 968, 1200, 3)
print(mask_800_shape, mask_1200_shape)
# +
hdf5_mask_path_800 = '../masks_800.hdf5'
hdf5_mask_file_800 = h5py.File(hdf5_mask_path_800, mode='w')
hdf5_mask_file_800.create_dataset("bldgs_masks_800", mask_800_shape, np.int8)
hdf5_mask_file_800.close()
# +
hdf5_mask_file_1200 = h5py.File(hdf5_mask_path_1200, mode='w')
hdf5_mask_file_1200.create_dataset("bldgs_masks_1200", mask_1200_shape, np.int8)
hdf5_mask_file_1200.close()
# -
import cv2
hdf5_mask_file_800.close()
# +
#17941
hdf5_mask_path_800 = '../masks_800.hdf5'
mask_800_shape = (len(masks_800), 968, 800, 3)
hdf5_mask_file_800 = h5py.File(hdf5_mask_path_800, mode='w')
hdf5_mask_file_800.create_dataset("bldgs_masks_800", mask_800_shape, np.int8)
# a numpy array to save the mean of the images
#mean = np.zeros(masks_800[1:], np.float32)
# loop over train addresses
for i in range(len(masks_800)):
# print how many images are saved every 1000 images
if i % 1000 == 0 and i > 1:
print ('Train data: {}/{}'.format(i, len(masks_800)))
# read an image and resize to (224, 224)
# cv2 load images as BGR, convert it to RGB
# addr = train_addrs[i]
image = masks_800[i]
img = cv2.imread(image)
# img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# add any image pre-processing here
# if the data order is Theano, axis orders should change
# save the image and calculate the mean so far
hdf5_mask_file_800["bldgs_masks_800"][i, ...] = img[None]
#mean += img / float(len(train_labels))
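# As a quick sanity check (a sketch, not part of the original pipeline): h5py datasets support lazy NumPy-style slicing, so a single mask can be read back without loading the whole file. The tiny stand-in file below is hypothetical; with the real data you would open '../masks_800.hdf5' instead.

```python
# Read masks back from an HDF5 file lazily: slicing an h5py dataset only
# pulls the requested entries from disk.
import os
import tempfile

import h5py
import numpy as np

# Stand-in file for illustration; the real path would be '../masks_800.hdf5'.
path = os.path.join(tempfile.mkdtemp(), "masks_800.hdf5")
with h5py.File(path, "w") as f:
    f.create_dataset("bldgs_masks_800", data=np.zeros((4, 8, 8, 3), np.int8))

with h5py.File(path, "r") as f:
    ds = f["bldgs_masks_800"]
    full_shape = ds.shape  # metadata only, no pixel data read yet
    first = ds[0]          # only the first mask is read from disk
print(full_shape, first.dtype)
```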
# +
import platform
a = platform.python_implementation()
# -
a
# +
def getFile(list_of_files):
    # Generator: yield the files one at a time.
    for f in list_of_files:
        yield f

asd = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
for a in getFile(asd):
    print(a)
# -
# +
# a numpy array to save the mean of the images
mean = np.zeros(train_shape[1:], np.float32)
# loop over train addresses
for i in range(len(train_addrs)):
# print how many images are saved every 1000 images
if i % 1000 == 0 and i > 1:
        print('Train data: {}/{}'.format(i, len(train_addrs)))
# read an image and resize to (224, 224)
# cv2 load images as BGR, convert it to RGB
addr = train_addrs[i]
img = cv2.imread(addr)
img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# add any image pre-processing here
# if the data order is Theano, axis orders should change
if data_order == 'th':
img = np.rollaxis(img, 2)
# save the image and calculate the mean so far
hdf5_file["train_img"][i, ...] = img[None]
mean += img / float(len(train_labels))
# loop over validation addresses
for i in range(len(val_addrs)):
# print how many images are saved every 1000 images
if i % 1000 == 0 and i > 1:
        print('Validation data: {}/{}'.format(i, len(val_addrs)))
# read an image and resize to (224, 224)
# cv2 load images as BGR, convert it to RGB
addr = val_addrs[i]
img = cv2.imread(addr)
img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# add any image pre-processing here
# if the data order is Theano, axis orders should change
if data_order == 'th':
img = np.rollaxis(img, 2)
# save the image
hdf5_file["val_img"][i, ...] = img[None]
# loop over test addresses
for i in range(len(test_addrs)):
# print how many images are saved every 1000 images
if i % 1000 == 0 and i > 1:
        print('Test data: {}/{}'.format(i, len(test_addrs)))
# read an image and resize to (224, 224)
# cv2 load images as BGR, convert it to RGB
addr = test_addrs[i]
img = cv2.imread(addr)
img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# add any image pre-processing here
# if the data order is Theano, axis orders should change
if data_order == 'th':
img = np.rollaxis(img, 2)
# save the image
hdf5_file["test_img"][i, ...] = img[None]
# save the mean and close the hdf5 file
hdf5_file["train_mean"][...] = mean
hdf5_file.close()
# -
| books_201806/Building PreProcessor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# https://www.geeksforgeeks.org/capitalize-first-letter-of-a-column-in-pandas-dataframe/?ref=rp
# ### Capitalize first letter of a column
# + active=""
# Capitalize first letter of a column:
#
# 1. Capitalize
#
# 2. Using lambda with capitalize() method
# -
# importing pandas as pd
import pandas as pd
# +
# creating a dataframe
df = pd.DataFrame({'A': ['john', 'bODAY', 'minA', 'Peter', 'nicky'],
'B': ['masters', 'graduate', 'graduate',
'Masters', 'Graduate'],
'C': [27, 23, 21, 23, 24]})
df
# -
# <h3 style="color:green" align="left"> Capitalize first letter of a column </h3>
# <h3 style="color:blue" align="left"> 1. Capitalize </h3>
df['A'] = df['A'].str.capitalize()
df
# <h3 style="color:blue" align="left"> 2. Using lambda with capitalize() method </h3>
df['A'].apply(lambda x: x.capitalize())
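# Note that `apply` returns a new Series rather than modifying the DataFrame in place. A small sketch (using a hypothetical `df2` so the tutorial's `df` is left untouched) that assigns the result back:

```python
import pandas as pd

# apply() returns a new Series; assign it back to keep the change.
df2 = pd.DataFrame({'A': ['john', 'bODAY', 'minA']})
df2['A'] = df2['A'].apply(lambda x: x.capitalize())
print(df2['A'].tolist())  # ['John', 'Boday', 'Mina']
```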
| 4_Rows and Columns/9_Capitalize_first_letter_of_a_column.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ---
# ## Spectra in `FITS` files.
#
#
# <NAME> (<EMAIL>)
#
# ---
# + [markdown] id="-pzkgKdcB_O1"
# ### Summary
#
# In this notebook the `astropy` library is used to read a .fits file and visualize an astronomical spectrum.
#
#
# ---
# + [markdown] id="u-DxgKmeB_O1"
# ## 1. The .FITS data
#
# We will now download a galaxy spectrum in .fits format from the SDSS Database. To do so, go to
#
# https://dr16.sdss.org/
#
# and click on the **Optical Spectra** section. Select the spectrum on PlateID 271. From the table of spectra, download the file corresponding to FiberID 5, with specobj_id: 305120280735410176.
#
# The full identification of the spectrum is therefore
#
# Plate: 271\
# MJD: 51883\
# Fiber: 5
#
# This spectrum can also be found using the above information on the page
#
# https://dr16.sdss.org/optical/spectrum/search
#
#
# The downloaded .fits file corresponds to the spectrum of the galaxy SDSS J102008.09-004750.7. The complete information on this object can be seen by clicking the CAS link in the table of spectra, which leads to the page
#
# http://skyserver.sdss.org/dr16/en/tools/explore/summary.aspx?plate=271&mjd=51883&fiber=5
#
#
#
# ---
#
# ### 1.1. Opening the .fits file
#
# We will again use the `astropy.io.fits.open()` function to access the information in the file, but this time we include the option `memmap=True` to avoid issues with RAM storage.
# + id="ZP8a5XLAB_O1"
import numpy as np
from matplotlib import pyplot as plt
from astropy.io import fits
from astropy.table import Table
hdul = fits.open('spec-0271-51883-0005.fits', memmap=True)
hdul.info()
# + [markdown] id="f5yjE1MCB_O2"
# Note that this file contains 3 HDU objects. The PRIMARY one holds the general information about the file
# + id="lauyXf2cB_O3"
hdul[0].header
# + [markdown] id="5E6AmxunB_O3"
# The header of `hdul[1]` contains the spectrum information,
# + id="TNg2cIPLB_O3"
hdul[1].header
# + [markdown] id="mc0Uio6bB_O3"
# ### Extracting the spectrum from the FITS file
#
# We will now extract the spectrum information from the HDU object. A detailed description of the process can be found at
#
# http://learn.astropy.org/rst-tutorials/FITS-tables.html?highlight=filtertutorials
#
# The `hdul[1]` object contains a table with the column names. These data are accessed with the `.columns` attribute
# + id="LSLcRC_nB_O4"
hdul[1].columns
# + [markdown] id="B9yADA5SB_O4"
# Detailed information about this spectrum can be found through the spec_data model link,
#
# https://data.sdss.org/datamodel/files/BOSS_SPECTRO_REDUX/RUN2D/spectra/PLATE4/spec.html
#
# For example, there one can see that the ['flux'] column gives the flux in units of $10^{-17}$ ergs/s/cm2/Å and the ['loglam'] column gives the $\log_{10}$ of the wavelength in Å.
#
# However, assigning the data contained in the hdul[1] object to a Python variable does not result in a familiar object type,
# + id="6HgUb6dwB_O4"
spectrum_data = hdul[1].data
type(spectrum_data)
# -
spectrum_data
# + [markdown] id="D-IE4OKtB_O4"
# Actually, in order to load the spectrum one must use the function [astropy.table.Table( )](https://docs.astropy.org/en/stable/api/astropy.table.Table.html#astropy.table.Table)
# + id="OCSI19PpB_O4"
from astropy.table import Table
spectrum_data = Table(hdul[1].data)
# + [markdown] id="BWcBM2TnB_O5"
# This way, the information is stored in an object of type Table,
# + id="mhZMu7ETB_O5"
type(spectrum_data)
# + [markdown] id="bM0VvaCJB_O5"
# and now the information can be accessed easily,
# + id="ozHxXUt-B_O6"
spectrum_data
# + [markdown] id="Da1NddZ3B_O6"
# ### Flux histogram
#
# We can create a histogram of the 'flux' column in this table,
# + id="xOzoKwX5B_O7"
plt.hist(spectrum_data['flux'], bins='auto')
plt.xlabel('Flux')
plt.ylabel('Counts')
plt.show()
# + [markdown] id="1vUlf1piB_O7"
# ### Visualizing the spectrum
#
# To visualize the spectrum of this galaxy, we plot the ['flux'] column vs. the ['loglam'] column.
# + id="rsdwCGl-B_O7"
plt.plot(spectrum_data['loglam'],spectrum_data['flux'],'k')
plt.xlabel(r'$\log_{10} \lambda {(\rm \AA)}$')
plt.ylabel('Flux')
plt.show()
# + [markdown] id="qv-0x02tB_O7"
# If we want to use the wavelength itself (rather than its logarithm), we define
# + id="kvoTqOdcB_O8"
wavelength = np.power(10,spectrum_data['loglam'])
# + id="DOuqxVOKB_O8"
plt.plot(wavelength,spectrum_data['flux'],'k')
plt.xlabel(r'$\lambda {(\rm \AA)}$')
plt.ylabel('Flux')
plt.show()
# + id="YOit4BkCB_O8"
| 15. Astrophysics Data/06.FITSSpectrum01/FITSSpectrum01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 2.4 Grover's Algorithm
#
# * [Q# exercise: Grover's algorithm](./2-Quantum_Algorithms/4-Grover_s_Algorithm.ipynb#qex)
#
# Grover's algorithm is used to search for an item in an unordered list more efficiently than a classical
# computer. It finds with high probability the unique input to a black box function that produces a particular
# output value.
#
# We have a black box that takes 3 bits as input and returns one bit as output. The black box is built
# in such a way that it returns 1 only for one combination of inputs and returns 0 for all the other
# combinations. Below is an example of one such black box that returns 1 only when the input bits are 110:
#
# <img src="img/4-g001.png" style="width: 300px" />
#
# <table style="width: 200px">
# <tr>
# <th>$x$ </th>
# <th>$y = f(x)$ </th>
# </tr>
# <tr>
# <td> $000$ </td>
# <td> $0$ </td>
# </tr>
# <tr>
# <td> $001$ </td>
# <td> $0$ </td>
# </tr>
# <tr>
# <td> $010$ </td>
# <td> $0$ </td>
# </tr>
# <tr>
# <td> $011$ </td>
# <td> $0$ </td>
# </tr>
# <tr>
# <td> $100$ </td>
# <td> $0$ </td>
# </tr>
# <tr>
# <td> $101$ </td>
# <td> $0$ </td>
# </tr>
# <tr>
# <td> $110$ </td>
# <td> $1$ </td>
# </tr>
# <tr>
# <td> $111$ </td>
# <td> $0$ </td>
# </tr>
# </table>
#
# For a 3-bit system, we can create eight different black boxes, each returning 1 for a different
# combination of bits. If you were given one such a black box without being informed which one, how many times do you need to run that black box (with different combinations of inputs) to find out which one was given? On average, you will need to run four times. In the worst case, it will be seven. A classical black box takes $n$-bits as input and returns 1 only for one of the possible$2^n$ inputs and returns 0 for all the remaining inputs. We don't know for which combination it returns 1. In the worst case we might have to execute the black box $2^n$ times. If we randomly choose the combination, and if we are lucky we may find it in the first iteration itself; and if we are unlucky we may have to run it for all possible combinations ($2^n$). On average it might take about $\frac{2^n}{2}$ times. As the input size increases, the number of iterations increases by $\mathcal{O}(2^n)$.
# Using Grover's algorithm, however, we can decrease this complexity to $\mathcal{O}(\sqrt {2^n})$.
#
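# The classical query counts above can be checked with a short simulation (plain Python, not part of the original text): a linear scan over the inputs counts how many black-box calls are needed for each possible secret, stopping early because if the first $2^n - 1$ candidates all return 0, the last one must be the secret.

```python
def classical_queries(n_bits, secret):
    """Black-box calls a linear scan needs to identify the marked input.

    If the first 2**n - 1 candidates all return 0, the last candidate
    must be the secret, so no extra query is needed for it.
    """
    n = 2 ** n_bits
    for count, candidate in enumerate(range(n - 1), start=1):
        if candidate == secret:
            return count
    return n - 1  # deduced without querying the last candidate

# Worst case for 3 bits: seven queries.
print(max(classical_queries(3, s) for s in range(8)))        # 7
# Average over all eight possible secrets: about four queries.
print(sum(classical_queries(3, s) for s in range(8)) / 8)    # 4.375
```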
# Before proceeding with the actual algorithm, we need to establish black boxes that can be used
# in a quantum computer. Since those black boxes need to be reversible, we use the logic similar to the
# black boxes used in the Deutsch-Jozsa algorithm.
#
# <img src="img/4-g002.png" style="width: 300px" />
#
# As shown in Figure 2.3.1, this circuit implements the quantum black box for the function in Table 2.3.1. If we initialize
# y to 0, it will remain 0 at the end of the circuit for all combinations of the inputs except 110. When
# the input is 110, the first X gate changes the inputs to 111 and the Controlled X gate (which flips the
# last qubit only if the first three qubits are all 1s) flips y from 0 to 1. Afterwards we apply
# the X gate again to restore the last input to its original state, 110.
#
# Similarly, we can implement the _Black Box 010_ as follows:
#
# <img src="img/4-g003.png" style="width: 300px" />
#
# The output will be 1 only when input is 010.
#
# Similarly, the _Black Box 111_ :
#
# <img src="img/4-g004.png" style="width: 300px" />
# As one can see, the pattern is to put X gates before and after the Controlled X on each qubit
# whose target value is 0.
#
# Now that we know how to create the black boxes, let's proceed with the actual algorithm. We will
# do a walkthrough of the algorithm with three qubits using a black box for 101.
#
# **Step 1:**
#
# Take a 3-qubit system in ground state:
#
# $$|000 \rangle ,$$
#
# in vector form:
#
# $$\begin{bmatrix}1 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}$$
#
# which has $2^3$ elements. If we try to represent it graphically, we can plot:
#
# <img src="img/4-g005.png" style="width: 300px" />
#
# **Step 2:**
#
# Generate equal probability for all eight basis states by applying H gate on all the three qubits:
#
# $$(H \otimes H \otimes H )\,(|0 \rangle \otimes |0 \rangle \otimes |0 \rangle )$$
#
# $$= \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right)$$
#
# $$= \frac{1}{\sqrt 8} \, \left( \, |000 \rangle + |001 \rangle + |010 \rangle
# + |011 \rangle + |100 \rangle + |101 \rangle + |110 \rangle + |111 \rangle \, \right)$$
#
# $$= \begin{bmatrix} \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8}\\ \frac{1}{\sqrt 8}\\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix}$$
#
# Graphically it can be plotted as (all the bars with equal height of $\frac{1}{\sqrt 8}$ ):
#
# <img src="img/4-g006.png" style="width: 300px" />
#
# **Step 3:**
#
# This step is to show you the effect of a black box. If we know the black box is 101, we highlight the state
# that is represented by the black box (101 in the current example) by making its amplitude negative.
#
# $$\frac{1}{\sqrt 8} \, |000 \rangle + \frac{1}{\sqrt 8} \, |001 \rangle +
# \frac{1}{\sqrt 8} \, |010 \rangle + \frac{1}{\sqrt 8} \, |011 \rangle +
# \frac{1}{\sqrt 8} \, |100 \rangle - \frac{1}{\sqrt 8} \, |101 \rangle +
# \frac{1}{\sqrt 8} \, |110 \rangle + \frac{1}{\sqrt 8} \, |111 \rangle$$
#
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8}\\
# \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix}$$
#
#
# Graphically:
#
# <img src="img/4-g007.png" style="width: 300px" />
#
# We haven't seen the circuit that does this negation operation yet; we will explain it in the next section.
#
# **Step 4:**
#
# We calculate the mean of all the amplitudes:
#
# $$Mean = \left( \,
# \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} +
# \frac{1}{\sqrt 8} - \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} \, \right) / \, 8 = 0.26$$
#
# denoting it as a blue line in the plot:
#
# <img src="img/4-g008.png" style="width: 300px" />
#
# Now, if we flip each amplitude over to the other side of the mean value, the plot visually
# changes to:
#
# <img src="img/4-g009.png" style="width: 300px" />
#
# How to achieve this mathematically?
#
# <img src="img/4-g0010.png" style="width: 300px" />
#
#
# Given any number $m$ (which can be the mean value) and a value $x_1$, one can find a value $x_2$. Visually it looks like "flipping" $x_1$ over the mean. In mathematical terms, it is known as the reflection of $x_1$ over $m$
# onto $x_2$, via
#
# $$x_2 = 2 \times m - x_1$$
#
# We need to perform this reflection over mean for all the amplitudes:
#
# $$= \begin{bmatrix}
# 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\ 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\
# 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\ 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\
# 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\ 2 \ast 0.26 \, + \frac{1}{\sqrt 8} \\
# 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\ 2 \ast 0.26 \, - \frac{1}{\sqrt 8} \\ \end{bmatrix}
# = \begin{bmatrix} 0.18 \\ 0.18 \\ 0.18 \\ 0.18 \\ 0.18 \\ 0.88 \\ 0.18 \\ 0.18 \end{bmatrix}$$
#
# Here again, we didn't show the circuit that does this reflection over mean. We will cover that in the next
# sections.
#
# This is the first iteration of Grover's algorithm. The amplitude of the state represented by our
# black box (101) is 0.88 (getting closer to 1) and the amplitudes of the other states are 0.18 (getting closer
# to 0).
#
# Now, let's repeat Step 3 and negate the amplitude of the state represented by the black box once more:
#
# $$0.18\, |000 \rangle + 0.18\, |001 \rangle + 0.18\, |010 \rangle +
# 0.18\, |011 \rangle + 0.18\, |100 \rangle - 0.88\, |101 \rangle +
# 0.18\, |110 \rangle + 0.18\, |111 \rangle$$
#
# Graphically:
#
# <img src="img/4-g0011.png" style="width: 300px" />
#
# Repeat Step 4. First, we need to calculate the new mean:
#
# $$Mean = \left(\, 0.18 + 0.18 + 0.18 + 0.18 + 0.18 - 0.88 + 0.18 + 0.18\, \right) / \, 8 = 0.04$$
#
# Drawing the new mean line:
#
# <img src="img/4-g0012.png" style="width: 300px" />
#
# Now, performing reflection over mean of all the amplitudes:
#
# $$= \begin{bmatrix}
# 2 \ast 0.04 \, - 0.18 \\ 2 \ast 0.04 \, - 0.18 \\
# 2 \ast 0.04 \, - 0.18 \\ 2 \ast 0.04 \, - 0.18 \\
# 2 \ast 0.04 \, - 0.18 \\ 2 \ast 0.04 \, + 0.88 \\
# 2 \ast 0.04 \, - 0.18 \\ 2 \ast 0.04 \, - 0.18 \\ \end{bmatrix}
# = \begin{bmatrix}
# - 0.09 \\ - 0.09 \\ - 0.09 \\ - 0.09 \\
# - 0.09 \\ 0.97 \\ - 0.09 \\ - 0.09 \end{bmatrix}$$
#
# Graphically:
#
# <img src="img/4-g0013.png" style="width: 300px" />
#
# The amplitude of the state represented by the black box is very close to 1 and all the other amplitudes are very close to 0. We can stop iterating, measure the three qubits, and obtain 101 with very high probability. We were able to find which black box was given by making just two
# iterations, rather than the roughly four iterations a classical search would have needed on average. In later sections we will also discuss how to find the exact number of iterations needed.
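# The whole walkthrough can be reproduced numerically in a few lines of NumPy (a sketch of the amplitude bookkeeping, not a quantum circuit): negate the marked amplitude, then reflect every amplitude about the mean, twice.

```python
import numpy as np

# Amplitude-level simulation of two Grover iterations for the 101 black box.
n_states, marked = 8, 0b101
amps = np.full(n_states, 1 / np.sqrt(n_states))  # uniform superposition

for _ in range(2):                   # two Grover iterations
    amps[marked] *= -1               # Step 3: oracle negates the marked state
    amps = 2 * amps.mean() - amps    # Step 4: reflection about the mean

print(np.round(amps, 2))             # [-0.09 ... 0.97 ... -0.09]
print(round(amps[marked] ** 2, 3))   # probability of measuring 101: 0.945
```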
# ```
# Math insert – Grover's algorithm circuit construction --------------------------------------------
# ```
# The walkthrough above gives an intuition of what Grover's algorithm does. Grover's
# algorithm is represented by the following circuit. Let's build it step by step and gain insight into how the steps above are constructed mathematically.
#
# <img src="img/4-g0014.png" style="width: 300px" />
#
# Let's first build the circuit that implements Step 3 from earlier. Here we need to negate the amplitude of the state represented by the black box.
#
# Consider the following circuit:
#
# <img src="img/4-g0015.png" style="width: 300px" />
#
# Let's compute the state for each phase of the circuit. Here we take four qubits with an initial state:
#
# $$| 0000 \rangle$$,
#
# which can be rewritten as:
#
# $$| 000 \rangle\,|0 \rangle$$.
#
# Applying the X gate on the last qubit:
#
# $$| 000 \rangle\,|1 \rangle$$.
#
# Applying the H gate on all four qubits:
#
# $$\left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
#
# which can be written as (doing tensor product on only the first three qubits):
#
# $$\frac{1}{\sqrt 8}\,\left(\,
# |000 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |001 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |010 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |011 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |100 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |101 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |110 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |111 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# \, \right)$$.
#
# Now we apply the black box 101. The black box acts on all eight terms, but it flips the last qubit
# only when the state of the first three qubits is 101.
#
# So, the state becomes:
#
# $$\frac{1}{\sqrt 8}\,\left(\,
# |000 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |001 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |010 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |011 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |100 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |101 \rangle \otimes \left( - \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right) +
# |110 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |111 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# \, \right)$$.
#
# $$= \frac{1}{\sqrt 8}\,\left(\,
# |000 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |001 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |010 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |011 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |100 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) -
# |101 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |110 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right) +
# |111 \rangle \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# \, \right)$$.
#
# $$= \left(\,
# \frac{1}{\sqrt 8} \, |000 \rangle + \frac{1}{\sqrt 8} \, |001 \rangle +
# \frac{1}{\sqrt 8} \, |010 \rangle + \frac{1}{\sqrt 8} \, |011 \rangle +
# \frac{1}{\sqrt 8} \, |100 \rangle - \frac{1}{\sqrt 8} \, |101 \rangle +
# \frac{1}{\sqrt 8} \, |110 \rangle + \frac{1}{\sqrt 8} \, |111 \rangle
# \, \right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
#
#
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8}\\
# \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix}
# \otimes \begin{bmatrix} \frac{1}{\sqrt 2} \\ - \frac{1}{\sqrt 2} \end{bmatrix}
# $$
#
# This technique can be used to negate the amplitude of the state represented by the black box.
# The highlights keep track of the terms that correspond to $|101 \rangle$.
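# We can reproduce this phase-kickback effect numerically with NumPy (a minimal sketch, independent of the Q# code used later; the first qubit is taken as the most significant bit of the index):

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def kron_all(*ms):
    out = ms[0]
    for m in ms[1:]:
        out = np.kron(out, m)
    return out

# |0000>, then X on the last qubit, then H on all four qubits
state = np.zeros(16)
state[0] = 1.0
state = kron_all(np.eye(8), X) @ state
state = kron_all(H, H, H, H) @ state

# Black box 101: flip the last qubit iff the first three qubits are 101,
# i.e. swap the amplitudes of |1010> (index 10) and |1011> (index 11)
oracle = np.eye(16)
oracle[[10, 11]] = oracle[[11, 10]]
state = oracle @ state

# Project the last qubit onto |-> to read off the sign of each 3-qubit term
minus = np.array([1, -1]) / np.sqrt(2)
amps = state.reshape(8, 2) @ minus
print(np.round(amps * np.sqrt(8), 3))  # -1 only at index 5, i.e. |101>
```

# Every amplitude comes out as $+\frac{1}{\sqrt 8}$ except the one for $|101\rangle$, which is negated, exactly as in the derivation above.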
#
# Now, we need to build a circuit that does the reflection over the mean value (of all the
# amplitudes of all the states). Consider the following circuit:
#
# <img src="img/4-g0016.png" style="width: 300px" />
#
# Here the symbol $| - \rangle$ is used to stand for $\begin{bmatrix} \frac{1}{\sqrt 2} \\ - \frac{1}{\sqrt 2}\end{bmatrix}$ as we did in Phase 1.
#
# The unitary matrix created by the three H gates at the beginning of this circuit is
#
# $$H \otimes H \otimes H$$
#
# $$= \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ \frac{1}{\sqrt 2} & - \frac{1}{\sqrt 2}\end{bmatrix} \otimes
# \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ \frac{1}{\sqrt 2} & - \frac{1}{\sqrt 2}\end{bmatrix} \otimes
# \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ \frac{1}{\sqrt 2} & - \frac{1}{\sqrt 2}\end{bmatrix}$$
#
# The result is a unitary matrix with the first column:
#
#
#
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}$$
#
# and the first row:
#
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \ldots & \ldots & \ldots & \ldots &
# \ldots & \ldots & \ldots & \ldots \end{bmatrix}$$
#
# (Readers are encouraged to derive the full matrix. They can also find the result in the appendix below.)
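# Instead of deriving the full matrix by hand, we can build $H \otimes H \otimes H$ with NumPy and confirm the claims about its first column and first row (a quick check, not part of the Q# solution):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)  # H tensor H tensor H, an 8x8 matrix

# The first column and the first row are uniformly 1/sqrt(8) ...
print(np.allclose(H3[:, 0], 1 / np.sqrt(8)))  # True
print(np.allclose(H3[0, :], 1 / np.sqrt(8)))  # True
# ... and the matrix is its own inverse, mirroring H * H = I
print(np.allclose(H3 @ H3, np.eye(8)))        # True
```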
#
# Now, consider the part of the circuit enclosed in the box: what kind of unitary matrix does it represent? The input to this part is a generic superposition of three qubits and $| - \rangle$:
#
# $$\left(\,a_0\,|000 \rangle + a_1\,|001 \rangle + a_2\,|010 \rangle +
# a_3\,|011 \rangle + a_4\,|100 \rangle + a_5\,|101 \rangle +
# a_6\,|110 \rangle + a_7\,|111 \rangle \,\right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$.
#
#
# Apply the X gate to the last qubit as shown in the box:
#
# $$\left(\,a_0\,|000 \rangle + a_1\,|001 \rangle + a_2\,|010 \rangle +
# a_3\,|011 \rangle + a_4\,|100 \rangle + a_5\,|101 \rangle +
# a_6\,|110 \rangle + a_7\,|111 \rangle \,\right) \otimes \left(\frac{-|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right)$$.
#
#
# $$= - a_0\,|000 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_1\,|001 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_2\,|010 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_3\,|011 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_4\,|100 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_5\,|101 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_6\,|110 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_7\,|111 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$.
#
# The controlled-X and X gates in the remaining part of the boxed circuit flip the last qubit only when the first three qubits are all 0:
#
# $$= - a_0\,|000 \rangle \otimes
# \left( - \frac{|0 \rangle}{\sqrt 2} + \frac{|1 \rangle}{\sqrt 2} \right)
# - a_1\,|001 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_2\,|010 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_3\,|011 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_4\,|100 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_5\,|101 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_6\,|110 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_7\,|111 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
#
# $$= a_0\,|000 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_1\,|001 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_2\,|010 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_3\,|011 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_4\,|100 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_5\,|101 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_6\,|110 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)
# - a_7\,|111 \rangle \otimes
# \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$
#
# $$= \left(\,a_0\,|000 \rangle - a_1\,|001 \rangle - a_2\,|010 \rangle -
# a_3\,|011 \rangle - a_4\,|100 \rangle - a_5\,|101 \rangle -
# a_6\,|110 \rangle - a_7\,|111 \rangle \,\right) \otimes \left(\frac{|0 \rangle}{\sqrt 2} - \frac{|1 \rangle}{\sqrt 2} \right)$$.
#
# Basically, what the circuit in the box did is negate all the amplitudes except that of $|000\rangle$.
# Note this fact, because we will use it later.
# Now, what is the unitary matrix that makes
#
#
# $$\begin{bmatrix}
# a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}
# \textrm{change to}
# \begin{bmatrix} a_0 \\ -a_1 \\ -a_2 \\ -a_3 \\ -a_4 \\ -a_5 \\ -a_6 \\ -a_7 \end{bmatrix}\,?
# $$
#
# The answer is
#
#
# $$\left[ \begin{array}{rrrrrrrr}
# 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\
# \end{array}\right]$$
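# This diagonal matrix can be written compactly as $2\,|000\rangle\langle 000| - I$. A short NumPy check of its action on generic (illustrative, unnormalized) amplitudes:

```python
import numpy as np

N = 8
e0 = np.eye(N)[0]                      # the basis vector |000>
R0 = 2 * np.outer(e0, e0) - np.eye(N)  # diag(1, -1, -1, ..., -1)

# Illustrative amplitudes (not normalized; we only check the linear action)
a = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
print(R0 @ a)  # a_0 kept, every other amplitude negated
```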
#
# Combining all the three sections of this circuit:
#
#
# $$\begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \; \ast \;
# \left[\begin{array}{rrrrrrrr}
# 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\
# \end{array}\right]
# \; \ast \;
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}$$
# Rewriting:
#
# $$\begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \; \ast \; \left(
# \left[\begin{array}{rrrrrrrr}
# 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \end{array}\right]
# \, - \,
# \left[\begin{array}{rrrrrrrr}
# 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
# \end{array}\right]
# \, \right) \; \ast \;
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}$$
#
#
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \; \ast \; \left(
# \left[\begin{array}{rrrrrrrr}
# 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \end{array}\right]
# \, - \,
# I
# \, \right) \; \ast \;
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}$$
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \; \ast \; \left(
# \left[\begin{array}{rrrrrrrr}
# 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \end{array}\right]
# \,
# \ast
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \,
# -
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \, \right)$$
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \; \ast \;
# \left[\begin{array}{rrrrrrrr}
# 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \end{array}\right]
# \,
# \ast
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \,
# -
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \,
# \ast
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# $$
# $$= \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \; \ast \;
# \left[\begin{array}{rrrrrrrr}
# 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \end{array}\right]
# \,
# \ast
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \,
# -
# \,
# I$$
#
# (because $H \ast H = I$, and hence $(H \otimes H \otimes H) \ast (H \otimes H \otimes H) = I$).
# Now, multiplying the first two matrices (it must be obvious now why only the first column of the first matrix matters):
#
# $$\left[\begin{array}{rrrrrrrr}
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \frac{2}{\sqrt 8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# \end{array}\right]
# \,
# \ast
# \,
# \begin{bmatrix}
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \\
# \frac{1}{\sqrt 8} & \ldots \\ \frac{1}{\sqrt 8} & \ldots \end{bmatrix}
# \,
# -
# \,
# I.$$
#
# Since all the entries in the first row and first column of $(H \otimes H \otimes H)$ are $\frac{1}{\sqrt 8}$ (shown in the previous sections), the result is:
# $$\left[\begin{array}{rrrrrrrr}
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8}
# \end{array}\right]
# \,
# -
# \,
# I =
# \left[\begin{array}{cccccccc}
# \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1
# \end{array}\right]$$
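# The whole sandwich $(H \otimes H \otimes H)\,(2\,|000\rangle\langle 000| - I)\,(H \otimes H \otimes H)$ can be verified numerically to equal this matrix, i.e. $\frac{2}{8}J - I$ with $J$ the all-ones matrix (a NumPy sketch):

```python
import numpy as np

N = 8
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)
R0 = 2 * np.outer(np.eye(N)[0], np.eye(N)[0]) - np.eye(N)

# H3 * (2|000><000| - I) * H3  ==  (2/N) J - I  (J = all-ones matrix)
D = H3 @ R0 @ H3
print(np.allclose(D, 2 / N * np.ones((N, N)) - np.eye(N)))  # True
```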
# Now, what happens if we multiply this with some generic state:
#
# $$\left[\begin{array}{cccccccc}
# \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1 & \frac{2}{8} \\
# \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} & \frac{2}{8} - 1
# \end{array}\right] \, \ast \,
# \begin{bmatrix}
# a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}\;?$$
#
# The first entry in the resultant vector is:
# $$a_0 \ast \left( \frac{2}{8} - 1 \right) +
# a_1 \ast \frac{2}{8} + a_2 \ast \frac{2}{8} + a_3 \ast \frac{2}{8} +
# a_4 \ast \frac{2}{8} + a_5 \ast \frac{2}{8} + a_6 \ast \frac{2}{8} +
# a_7 \ast \frac{2}{8}$$
#
# $$ = a_0 \ast \left( \frac{2}{8} \right) - a_0 +
# a_1 \ast \frac{2}{8} + a_2 \ast \frac{2}{8} + a_3 \ast \frac{2}{8} +
# a_4 \ast \frac{2}{8} + a_5 \ast \frac{2}{8} + a_6 \ast \frac{2}{8} +
# a_7 \ast \frac{2}{8}$$
#
# $$ = \left( \frac{2}{8} \right)
# \left( a_0 + a_1 + a_2 + a_3 + a_4 + a_5 + a_6 + a_7 \right) - a_0$$
#
#
# $$ = 2 \ast
# \left( a_0 + a_1 + a_2 + a_3 + a_4 + a_5 + a_6 + a_7 \right)\,/\,8 - a_0$$
#
# $$ = 2 \ast mean - a_0$$
#
# Similarly, the second entry will become:
#
# $$2 \ast mean - a_1$$
#
# Hence, the final result will be:
#
# $$\begin{bmatrix}
# 2 \ast mean - a_0 \\
# 2 \ast mean - a_1 \\
# 2 \ast mean - a_2 \\
# 2 \ast mean - a_3 \\
# 2 \ast mean - a_4 \\
# 2 \ast mean - a_5 \\
# 2 \ast mean - a_6 \\
# 2 \ast mean - a_7
# \end{bmatrix}$$
# Hence, we achieved the desired result of reflection over the mean value.
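# A quick NumPy check that this matrix really maps each amplitude $a_i$ to $2 \ast mean - a_i$ (the amplitudes below are illustrative):

```python
import numpy as np

N = 8
D = 2 / N * np.ones((N, N)) - np.eye(N)  # the matrix derived above

# Illustrative amplitudes; D sends each a_i to 2 * mean - a_i
a = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
print(np.allclose(D @ a, 2 * a.mean() - a))  # True
```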
# Finally, we can combine both circuits and write the final circuit as follows (any black box can be used instead of Black Box 101).
#
#
# <img src="img/4-g0017.png" style="width: 600px" />
#
#
# After the initialization, the circuit in the bigger box (including the black box and the reflection) is executed multiple times, based on the number of qubits. The exact number of iterations needed is covered in the appendix below. This iteration count is on the order of $\mathcal{O}(\sqrt {2^n})$, instead of the classical $\mathcal{O}({2^n})$. Measuring the first 3 qubits will give the bit combination associated with the black box.
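# Putting the pieces together, the full algorithm can be simulated in a few lines of NumPy (a sketch mirroring the circuit above, not the Q# implementation; `target = 0b101` plays the role of Black Box 101):

```python
import numpy as np

n = 3
N = 2 ** n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)

target = 0b101
oracle = np.eye(N)
oracle[target, target] = -1                         # negate the marked amplitude
diffusion = 2 / N * np.ones((N, N)) - np.eye(N)     # reflection over the mean

state = H3 @ np.eye(N)[0]                           # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # 2 iterations for N = 8
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

probs = state ** 2
print(np.argmax(probs), round(float(probs[target]), 4))  # 5 0.9453
```

# After two Grover iterations, measuring the three qubits yields 101 with probability about 94.5%.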
# ### Q# exercise: Grover's algorithm
# <a id='qex'></a>
#
# 1. Go to QuantumComputingViaQSharpSolution introduced in session 1.1.
# 2. Open 23_Demo Grover's Algorithm Operation.qs in Visual Studio (Code).
# 3. The black boxes are defined in lines 23-48, just as in the circuit diagrams we constructed.
#
# <img src="img/4-g0018.png" style="width: 300px" />
#
# <img src="img/4-g0019.png" />
#
# 4. Reflection over the mean is implemented in lines 52-69.
#
# <img src="img/4-g0020.png" />
#
# 5. The rest of the script puts the circuit together. `MultiM()` measures multiple qubit values.
#
# <img src="img/4-g0021.png" />
#
# `Pattern` is the number to find. The default is set to `"10100"` in driver.cs.
#
# <img src="img/4-g0022_1.png" />
# <img src="img/4-g0022_2.png" />
#
# 6. By running `Operation.qs` via `dotnet run`, we will get the output:
#
# `"State of the system at the end of 3 iterations:
# 10100"`
#
# 7. Change the pattern to a different string in driver.cs and then rerun the simulation.
#
# 8. More Grover's algorithm exercises can be found in the Quantum Katas.
#
# #### Appendix
#
# **Iteration Count**
#
# Before going into the calculations of how to find the number of iterations, let's see how to graphically visualize the state vector.
#
# Let's say we have a 3-qubit system. There are eight basis states:
#
# $$|000\rangle, |001\rangle, |010\rangle, |011\rangle, |100\rangle, |101\rangle, |110\rangle, |111\rangle$$
#
# The state vector will have eight elements, and the basis states in vector form are:
#
# $$
# \begin{bmatrix}1 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 1 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 0 \\ 1\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 0 \\ 0\\ 1\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 1\\ 0\\ 0\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 0\\ 1\\ 0\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 1\\ 0\end{bmatrix}\, , \,
# \begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 1\end{bmatrix}
# $$
#
# A generic state might look as follows:
#
# $$a_0 |000\rangle + a_1 |001\rangle + a_2 |010\rangle + a_3 |011\rangle +
# a_4 |100\rangle + a_5 |101\rangle + a_6 |110\rangle + a_7 |111\rangle$$
#
# In vector form, it looks like:
#
# $$
# \begin{bmatrix}a_0 \\ a_1 \\ a_2\\ a_3\\ a_4\\ a_5\\ a_6\\ a_7\end{bmatrix}\, = \,
# a_0\begin{bmatrix}1 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, + \,
# a_1\begin{bmatrix}0 \\ 1 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, + \,
# a_2\begin{bmatrix}0 \\ 0 \\ 1\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, + \,
# a_3\begin{bmatrix}0 \\ 0 \\ 0\\ 1\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\, + \,
# a_4\begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 1\\ 0\\ 0\\ 0\end{bmatrix}\, + \,
# a_5\begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 0\\ 1\\ 0\\ 0\end{bmatrix}\, + \,
# a_6\begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 1\\ 0\end{bmatrix}\, + \,
# a_7\begin{bmatrix}0 \\ 0 \\ 0\\ 0\\ 0\\ 0\\ 0\\ 1\end{bmatrix}
# $$.
#
#
#
#
# Here, we can see that each basis state is orthogonal to every other basis state (their dot product is zero).
# The length of a generic state vector is always 1, and each basis state is a unit vector of length 1.
# We also know that any unitary operation changes this generic state vector into a different state vector but always preserves its length of 1. So, we can imagine a hyperspace of 8 dimensions where each axis is perpendicular to every other axis and each basis state is a unit vector along one of the axes. In this scheme, our generic state vector has length 1 at all times and moves around in this hyperspace whenever a unitary operation is performed on it.
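# A small NumPy check of this picture: a random length-1 state vector keeps its length under a unitary operation such as $H \otimes H \otimes H$ (a sketch; the random vector is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random state vector of length 1 in the 8-dimensional hyperspace
a = rng.normal(size=8)
a /= np.linalg.norm(a)

# Any unitary operation (here H tensor H tensor H) keeps the length at 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(np.kron(H, H), H)
print(round(float(np.linalg.norm(U @ a)), 6))  # 1.0
```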
#
# Let's start with a single qubit system.
#
# <img src="img/4-g0030.png" style="width: 300px" />
#
# As shown above, since a single-qubit system has only two basis states, we can represent them with two
# orthogonal axes. Now, if we consider a single qubit in the state $|0 \rangle$, the state of this qubit will be aligned with
# the horizontal axis towards the right (white arrow):
#
# <img src="img/4-g0031.png" style="width: 300px" />
#
# Applying an X gate on this state will make it $|1 \rangle$:
#
# <img src="img/4-g0032.png" style="width: 300px" />
#
# What will the state $H(|0\rangle)$ look like? Its vector form is $\begin{bmatrix} \frac{1}{\sqrt 2} \\ \frac{1}{\sqrt 2}\end{bmatrix}$. Graphically, it is a 2D vector with both components equal to $\frac{1}{\sqrt 2}$:
#
# <img src="img/4-g0033.png" style="width: 300px" />
#
# As shown, the head of this white arrow moves on a circle of radius 1 in this 2D space. One thing to
# note here is that the 2D plane in which this state vector rotates is not real but complex.
#
# In a 3-qubit system, the basis states can be visualized as follows:
#
# <img src="img/4-g0034.png" style="width: 300px" />
#
# In reality, a 3-qubit system has 8 dimensions. Since, as humans, we can visualize only 3 dimensions, this picture shows only the first three orthogonal basis states and leaves the other 5 to the reader's imagination. Here again we are cheating a little, because in reality these three dimensions
# are complex, but we are drawing them as real axes. Nevertheless, this will still help us gain some really useful intuition about the state of a quantum system.
#
# In the Q# exercise earlier, the number of iterations is calculated in lines 8-19 via an angle between the basis vector and the axis of the multi-dimensional space of the qubit system.
#
# <img src="img/4-g0035.png" />
# **Initialization**
#
# In this scheme, the state $|000\rangle$ looks like (white arrow):
#
# <img src="img/4-g0036.png" style="width: 300px" />
#
# What will the state $(H \otimes H \otimes H)(|000\rangle)$ look like? Apply the H gate on all three qubits, which are in the ground state:
#
#
# $$
# \left( \frac{|0\rangle}{\sqrt 2} + \frac{|1\rangle}{\sqrt 2}\right) \otimes \left( \frac{|0\rangle}{\sqrt 2} + \frac{|1\rangle}{\sqrt 2}\right) \otimes \left( \frac{|0\rangle}{\sqrt 2} + \frac{|1\rangle}{\sqrt 2}\right)\, = \, \begin{bmatrix}\frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix}
# $$
#
# Showing only the first three dimensions of this 8-dimensional vector above:
#
# <img src="img/4-g0037.png" style="width: 500px" />
#
# This is a very important state in our discussion, so we will mark it permanently in the space using a yellow
# arrow:
#
# <img src="img/4-g0038.png" style="width: 300px" />
#
# Take a generic state,
#
# $$\begin{bmatrix}a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}$$,
#
# graphically:
#
# <img src="img/4-g0039.png" style="width: 500px" />
#
# Do the reflection of this state over the $|000\rangle$ axis:
#
# <img src="img/4-g0040.png" style="width: 500px" />
# Taking the reflection over $|000\rangle$ is the same as keeping $a_0$ intact while negating all the other components, thus flipping the vector direction.
#
# Now apply the unitary operation $(H \otimes H \otimes H)$:
#
# $$\begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ \frac{1}{\sqrt 2} & - \frac{1}{\sqrt 2}\end{bmatrix} \otimes
# \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ \frac{1}{\sqrt 2} & - \frac{1}{\sqrt 2}\end{bmatrix} \otimes
# \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ \frac{1}{\sqrt 2} & - \frac{1}{\sqrt 2}\end{bmatrix}$$
#
# $$= \,
# \left[\begin{array}{rrrrrrrr}
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8}
# \end{array}\right]
# $$
# Multiplying this with the generic state vector:
#
# $$
# \left[\begin{array}{rrrrrrrr}
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8}
# \end{array}\right]
# \, \ast \, \begin{bmatrix}a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \\ a_7 \end{bmatrix}$$,
#
# $$ = \,
# \left[\begin{array}{rrrrrrrr}
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} \\
# \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} &
# - \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & \frac{1}{\sqrt 8} & - \frac{1}{\sqrt 8}
# \end{array}\right]
# \, \ast \,
# \left(\,
# a_0 \,\begin{bmatrix}1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, +
# a_1 \,\begin{bmatrix}0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, +
# a_2 \,\begin{bmatrix}0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, +
# a_3 \,\begin{bmatrix}0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, +
# a_4 \,\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, +
# a_5 \,\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \, +
# a_6 \,\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \, +
# a_7 \,\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
# \,\right)
# $$
# $$= \,
# a_0 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_1 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_2 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_3 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_4 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_5 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_6 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \end{bmatrix} \, +
# a_7 \,\begin{bmatrix} \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ \frac{1}{\sqrt 8} \\ - \frac{1}{\sqrt 8} \end{bmatrix}
# $$
#
# Applying the unitary matrix $\left( H \otimes H \otimes H \right)$ to a generic state vector can also be thought of as a simple change
# of basis, because each column of the $\left( H \otimes H \otimes H \right)$ matrix is a unit vector and all the columns are
# orthogonal to each other. (Try taking the dot product of any two distinct columns; you will get 0 as the result.) This
# holds for any unitary matrix: its application is equivalent to a change of basis, and each new basis
# vector is one of the columns of the unitary matrix. The first vector in this new basis is the
# same as $\left( H \otimes H \otimes H \right)|000\rangle$ and is represented by the yellow vector we saw earlier in the hyperspace.
# Likewise, the second vector in this new basis is the same as $\left( H \otimes H \otimes H \right)|001\rangle$, and so on.
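# The orthonormality of the columns can be checked numerically. A minimal NumPy sketch (not part of the original notebook) builds $H \otimes H \otimes H$ and verifies that its columns form an orthonormal set:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
H3 = np.kron(np.kron(H, H), H)                 # H ⊗ H ⊗ H, an 8x8 matrix

# every entry has magnitude 1/sqrt(8), matching the matrix written out above
assert np.allclose(np.abs(H3), 1 / np.sqrt(8))

# columns are unit vectors and mutually orthogonal: H3^T H3 = I
assert np.allclose(H3.T @ H3, np.eye(8))
```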
#
# **Reflection over mean**
#
# In the _Reflection over mean_ part of the circuit, the first step is to apply an H gate to all the qubits.
# This is basically a change of basis. The first new basis vector is $\left( H \otimes H \otimes H \right)|000\rangle$, which is represented by the yellow arrow in the diagram below. The gates after that negate all the amplitudes except that of the first basis state, which is the same as reflecting the state about the first basis vector. Applying $\left( H \otimes H \otimes H \right)$
# again undoes the basis change.
#
# We have shown above that reflection over the mean is the same as reflecting the state over the
# $\left( H \otimes H \otimes H \right)|000\rangle$ state (the yellow vector). We changed into a new basis whose first basis vector is the yellow vector itself, used the circuit to negate all the amplitudes except the first one, effectively reflecting the state vector over the yellow vector, and finally undid the basis change.
#
#
# <img src="img/4-g0041.png" style="width: 500px" />
#
# In the above image, the white vector shows the state before reflection and the black vector shows the state after the reflection over the yellow vector.
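# The reflection described above can also be written as the operator $2|s\rangle\langle s| - I$, where $|s\rangle = \left( H \otimes H \otimes H \right)|000\rangle$ is the yellow vector. A small NumPy sketch (illustrative, not from the original circuit) shows that applying it maps every amplitude $a_i$ to $2 \cdot \text{mean} - a_i$, i.e. reflection over the mean:

```python
import numpy as np

n = 3
N = 2 ** n
s = np.full(N, 1 / np.sqrt(N))        # (H⊗H⊗H)|000>, the yellow vector
D = 2 * np.outer(s, s) - np.eye(N)    # reflection about |s>

a = np.array([0.1, 0.5, 0.2, 0.3, 0.4, 0.1, 0.6, 0.2])  # arbitrary amplitudes
reflected = D @ a

# each amplitude ends up at 2*mean - amplitude: reflection over the mean
assert np.allclose(reflected, 2 * a.mean() - a)
```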
#
# **Black box – amplitude negation**
#
# For the sake of simplicity, let's assume now that the black box marks $|001\rangle$ (the state used in the images that follow). The black box simply negates the amplitude of this state and leaves all the others unchanged. This can be represented graphically as:
#
# <img src="img/4-g0042.png" style="width: 500px" />
#
# Effectively, the algorithm does the following:
#
# 1. Take the initial state $|000\rangle$.
# 2. Apply an H gate to all the qubits, aligning the state with the yellow vector.
# 3. Change the sign of the state that we are looking for.
# 4. Reflect the state over the yellow vector.
# 5. Repeat steps 3 and 4 until the state vector reaches the desired state, $|001\rangle$ in the example shown in these images.
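# The five steps can be simulated directly with plain linear algebra. This sketch (illustrative; the target index 1 corresponds to $|001\rangle$) runs the two iterations used in this example:

```python
import numpy as np

n = 3
N = 2 ** n
target = 1                              # |001>, the state marked by the black box

state = np.zeros(N)
state[0] = 1.0                          # step 1: start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)
state = H3 @ state                      # step 2: align with the yellow vector

s = H3[:, 0]                            # the yellow vector (H⊗H⊗H)|000>
D = 2 * np.outer(s, s) - np.eye(N)      # reflection over the yellow vector
for _ in range(2):                      # nIter = 2 for n = 3 qubits
    state[target] *= -1                 # step 3: black box negates the target
    state = D @ state                   # step 4: reflect over the yellow vector

print(state[target] ** 2)               # probability of measuring |001>, ≈ 0.945
```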
#
# In the beginning, the state vector is at $|000\rangle$, i.e. aligned with the right-pointing arrow:
#
# <img src="img/4-g0043.png" style="width: 500px" />
#
# Applying H gates to all the qubits aligns the state with the yellow vector:
#
# <img src="img/4-g0044.png" style="width: 500px" />
#
# Now, this state vector makes an angle $\theta$ with the red and blue plane, and $\frac{\pi}{2} - \theta$ with $|001\rangle$ (the state marked by the black box that we are considering now). After the application of the black box, the state vector will go $\theta$ below the red and blue plane, because the black box negates only the amplitude of $|001\rangle$ and leaves the others unchanged:
#
# <img src="img/4-g0045.png" style="width: 500px" />
#
# The angle now between the yellow and white vectors is $2\theta$. The next step is to reflect the white vector over the yellow vector. This brings the white vector closer to the green vector by $2\theta$:
#
# <img src="img/4-g0046.png" style="width: 500px" />
#
#
# Here the angle between the yellow vector and the white vector is $2\theta$. We should repeat this $nIter$ times so that, at the end, the white vector coincides with $|001\rangle$. In the beginning the angle between the white vector and the $|001\rangle$ vector was $\frac{\pi}{2} - \theta$, and every iteration decreases this angle by $2\theta$. Therefore, assuming we find the solution after $nIter$ iterations:
#
# $$nIter \ast 2\theta = \frac{\pi}{2} - \theta$$
#
# Solving this, we get $nIter = \frac{\pi}{4\theta} - \frac{1}{2}$.
#
# Also, we need to know $\theta$. In the beginning, $\sin(\theta) = \text{opposite side} \, / \, \text{hypotenuse}$ for the yellow vector. The opposite side is the component of the yellow vector along the $|001\rangle$ direction, which is nothing but $\frac{1}{\sqrt 8}$.
#
# And we know that the hypotenuse of the yellow vector has length one, because state vectors are unit vectors. So, generically:
#
# $$\sin(\theta) = \frac{1}{\sqrt{2^n}}$$
#
# $$\theta = \arcsin\left(\frac{1}{\sqrt{2^n}}\right)$$
#
#
# Substituting this $\theta$ into the previous equation gives the exact number of iterations needed. In this example, with 3 qubits, $\theta \approx 0.36$ radians.
#
# Hence, $nIter = 2$ (after rounding). After two iterations the state looks as follows:
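# These numbers can be reproduced in a couple of lines (a quick check, not part of the original notebook):

```python
import numpy as np

n = 3
theta = np.arcsin(1 / np.sqrt(2 ** n))   # ≈ 0.3614 radians for 3 qubits
n_iter = np.pi / (4 * theta) - 0.5       # ≈ 1.67
print(round(n_iter))                     # → 2
```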
#
# <img src="img/4-g0047.png" style="width: 500px" />
#
# Now, measuring this will yield the result $|001\rangle$ with very high probability.
#
# To see how the number of iterations scales, note that as the number of qubits increases, the number of states increases and the value of $\frac{1}{\sqrt{2^n}}$ becomes smaller. For small angles, $\sin(\theta) \approx \theta$ (a standard small-angle approximation that follows directly from the definition of a radian), so $\theta \approx \frac{1}{\sqrt{2^n}}$.
#
#
# $$nIter \approx \frac{\pi}{4 \cdot \frac{1}{\sqrt{2^n}}} - \frac{1}{2} = \frac{\pi}{4}\sqrt{2^n} - \frac{1}{2}$$
#
#
# By simplifying this and dropping the constant terms and constant multipliers, we are left with $\mathcal{O}\left(\sqrt{2^n}\right)$.
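# Under the small-angle approximation, the iteration count grows like $\frac{\pi}{4}\sqrt{2^n}$. A quick numerical check (illustrative) compares the exact count against this estimate:

```python
import numpy as np

for n in [4, 8, 12, 16]:
    theta = np.arcsin(1 / np.sqrt(2 ** n))
    exact = np.pi / (4 * theta) - 0.5        # exact iteration count
    approx = (np.pi / 4) * np.sqrt(2 ** n)   # small-angle estimate
    print(n, round(exact), round(approx))    # the two counts track each other
```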
| 2-Quantum_Algorithms/4-Grover_s_Algorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Convolutional Neural Networks
# Based on: https://www.aprendemachinelearning.com/clasificacion-de-imagenes-en-python/
# # Import Libraries
import numpy as np
import os
import re
import matplotlib.pyplot as plt
# %matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import keras
from keras.utils import to_categorical
from keras.models import Sequential,Input,Model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU
# # Load the Image Set
# +
dirname = os.path.join(os.getcwd(), 'sportimages')
imgpath = dirname + os.sep
images = []
directories = []
dircount = []
prevRoot=''
cant=0
print("leyendo imagenes de ",imgpath)
for root, dirnames, filenames in os.walk(imgpath):
for filename in filenames:
        if re.search(r"\.(jpg|jpeg|png|bmp|tiff)$", filename):
cant=cant+1
filepath = os.path.join(root, filename)
image = plt.imread(filepath)
images.append(image)
b = "Leyendo..." + str(cant)
print (b, end="\r")
if prevRoot !=root:
print(root, cant)
prevRoot=root
directories.append(root)
dircount.append(cant)
cant=0
dircount.append(cant)
dircount = dircount[1:]
dircount[0]=dircount[0]+1
print('Directorios leidos:',len(directories))
print("Imagenes en cada directorio", dircount)
print('suma Total de imagenes en subdirs:',sum(dircount))
# -
# # Create the Labels
labels=[]
indice=0
for cantidad in dircount:
for i in range(cantidad):
labels.append(indice)
indice=indice+1
print("Cantidad etiquetas creadas: ",len(labels))
deportes=[]
indice=0
for directorio in directories:
name = directorio.split(os.sep)
print(indice , name[len(name)-1])
deportes.append(name[len(name)-1])
indice=indice+1
# +
y = np.array(labels)
X = np.array(images, dtype=np.uint8)  # convert from list to numpy array
# Find the unique numbers from the train labels
classes = np.unique(y)
nClasses = len(classes)
print('Total number of outputs : ', nClasses)
print('Output classes : ', classes)
# -
# # Creamos Sets de Entrenamiento y Test
train_X,test_X,train_Y,test_Y = train_test_split(X,y,test_size=0.2)
print('Training data shape : ', train_X.shape, train_Y.shape)
print('Testing data shape : ', test_X.shape, test_Y.shape)
# +
plt.figure(figsize=[5,5])
# Display the first image in training data
plt.subplot(121)
plt.imshow(train_X[0,:,:], cmap='gray')
plt.title("Ground Truth : {}".format(train_Y[0]))
# Display the first image in testing data
plt.subplot(122)
plt.imshow(test_X[0,:,:], cmap='gray')
plt.title("Ground Truth : {}".format(test_Y[0]))
# -
# # Preprocess the Images
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255.
test_X = test_X / 255.
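# Dividing by 255 maps the uint8 pixel range [0, 255] into [0, 1], which keeps the inputs on a scale the network trains well on. In miniature (an illustrative sketch):

```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.uint8)
scaled = pixels.astype('float32') / 255.
print(scaled.min(), scaled.max())   # → 0.0 1.0
```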
# ## One-hot Encode the Labels for the Network
# +
# Change the labels from categorical to one-hot encoding
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
# Display the change for category label using one-hot encoding
print('Original label:', train_Y[0])
print('After conversion to one-hot:', train_Y_one_hot[0])
# -
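# What `to_categorical` produces can be sketched with plain NumPy (illustrative, assuming integer labels 0..k-1):

```python
import numpy as np

labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]   # row i is the basis vector for label i
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```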
# # Create the Training and Validation Sets
# Shuffle everything and create the training and validation groups
train_X,valid_X,train_label,valid_label = train_test_split(train_X, train_Y_one_hot, test_size=0.2, random_state=13)
print(train_X.shape,valid_X.shape,train_label.shape,valid_label.shape)
# # Create the CNN Model
# declare variables with the network configuration parameters
INIT_LR = 1e-3 # initial learning rate; 1e-3 corresponds to 0.001
epochs = 6 # number of complete passes over the training image set
batch_size = 64 # number of images held in memory at a time
# +
sport_model = Sequential()
sport_model.add(Conv2D(32, kernel_size=(3, 3),activation='linear',padding='same',input_shape=(21,28,3)))
sport_model.add(LeakyReLU(alpha=0.1))
sport_model.add(MaxPooling2D((2, 2),padding='same'))
sport_model.add(Dropout(0.5))
sport_model.add(Flatten())
sport_model.add(Dense(32, activation='linear'))
sport_model.add(LeakyReLU(alpha=0.1))
sport_model.add(Dropout(0.5))
sport_model.add(Dense(nClasses, activation='softmax'))
# -
sport_model.summary()
sport_model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adagrad(lr=INIT_LR, decay=INIT_LR / 100),metrics=['accuracy'])
# # Train the Model: Learning to Classify Images
# this step can take several minutes, depending on your computer, CPU, and free RAM
# as an example, on my MacBook Pro it takes 4 minutes
sport_train = sport_model.fit(train_X, train_label, batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(valid_X, valid_label))
# save the network so it can be reused later without retraining
sport_model.save("sports_mnist.h5py")
# # Evaluate the Network
test_eval = sport_model.evaluate(test_X, test_Y_one_hot, verbose=1)
print('Test loss:', test_eval[0])
print('Test accuracy:', test_eval[1])
accuracy = sport_train.history['acc']
val_accuracy = sport_train.history['val_acc']
loss = sport_train.history['loss']
val_loss = sport_train.history['val_loss']
epochs = range(len(accuracy))
plt.plot(epochs, accuracy, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
predicted_classes2 = sport_model.predict(test_X)
predicted_classes=[]
for predicted_sport in predicted_classes2:
predicted_classes.append(predicted_sport.tolist().index(max(predicted_sport)))
predicted_classes=np.array(predicted_classes)
predicted_classes.shape, test_Y.shape
# # Learning from Mistakes: What to Improve
correct = np.where(predicted_classes==test_Y)[0]
print("Found %d correct labels" % len(correct))
for i, correct in enumerate(correct[0:9]):
plt.subplot(3,3,i+1)
plt.imshow(test_X[correct].reshape(21,28,3), cmap='gray', interpolation='none')
plt.title("{}, {}".format(deportes[predicted_classes[correct]],
deportes[test_Y[correct]]))
plt.tight_layout()
incorrect = np.where(predicted_classes!=test_Y)[0]
print("Found %d incorrect labels" % len(incorrect))
for i, incorrect in enumerate(incorrect[0:9]):
plt.subplot(3,3,i+1)
plt.imshow(test_X[incorrect].reshape(21,28,3), cmap='gray', interpolation='none')
plt.title("{}, {}".format(deportes[predicted_classes[incorrect]],
deportes[test_Y[incorrect]]))
plt.tight_layout()
target_names = ["Class {}".format(i) for i in range(nClasses)]
print(classification_report(test_Y, predicted_classes, target_names=target_names))
| 12_ Deep-Learning/CNN_10deportes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DKOM 2019
# We show three areas of innovation. The purpose is to demonstrate how data scientists can explore and perform machine learning tasks on large data sets that are stored in HANA. We show the power of pushing processing closer to where the data lives. The benefits of using the power of HANA are:
# <li>Performance: We see orders-of-magnitude performance gains, and the advantage only grows as data sets get larger.</li>
# <li>Security: Since the data stays in HANA and processing happens there, all of the security measures are enforced alongside the data.</li>
# <br>
# <br>
# From a data scientist's point of view, they work with Python and Python-like APIs that they are already comfortable with.
# <br>
# <br>
# We will cover the following:
# <li><b>Dataframes:</b> A reference to a relation in HANA. No need for deep SQL knowledge.</li>
# <li><b>HANA ML API:</b> Exploit HANA's ML capabilities using a scikit-learn style Python interface.</li>
# <li><b>Exploratory Data Analysis and Visualization:</b> Analyze large data sets without the performance penalty of moving data or the risk of running out of resources on the client.</li>
# # Dataframes
# The SAP HANA Python Client API for machine learning algorithms (Python Client API for ML) provides a set of client-side Python functions for accessing and querying SAP HANA data, and a set of functions for developing machine learning models.
#
# The Python Client API for ML consists of two main parts:
#
# <li>A set of machine learning APIs for different algorithms.</li>
# <li>The SAP HANA dataframe, which provides a set of methods for analyzing data in SAP HANA without bringing that data to the client.</li>
#
# This library uses the SAP HANA Python driver (hdbcli) to connect to and access SAP HANA.
# <br>
# <br>
# <img src="images/highlevel_overview2_new.png" title="Python API Overview" style="float:left;" width="300" height="50" />
# <br>
# A dataframe represents a table (or any SQL statement). Most operations on a dataframe are designed to not bring data back from the database unless explicitly asked for.
from hana_ml import dataframe
import logging
# ## Setup connection and data sets
# Let us load some data into HANA tables. The data is loaded into 4 tables - the full set, test set, training set, and validation set: DBM2_RFULL_TBL, DBM2_RTEST_TBL, DBM2_RTRAINING_TBL, DBM2_RVALIDATION_TBL.
#
# The data is related with direct marketing campaigns of a Portuguese banking institution. More information regarding the data set is at https://archive.ics.uci.edu/ml/datasets/bank+marketing#.
#
# To do that, a connection is created and passed to the loader. There is a config file, <b>config/e2edata.ini</b>, that controls the connection parameters. Please edit it to point to your HANA instance.
from data_load_utils import DataSets, Settings
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, training_tbl, validation_tbl, test_tbl = DataSets.load_bank_data(connection_context, force=False, batch_size=50000)
# ### Simple DataFrame
# A dataframe is a reference to a relation. This can be a table, a view, or any relation produced by a SQL statement.
# <table align="left"><tr><td>
# </td><td><img src="images/Dataframes_1.png" style="float:left;" width="600" height="400" /></td></tr></table>
# <br>
# <b>Let's take a look at a dataframe created using our training table.</b>
# <br>
dataset = connection_context.table(training_tbl)
print(dataset.select_statement)
print(type(dataset))
# ### Bring data to client
# #### Fetch 5 rows into client as a <b>Pandas Dataframe</b>
dataset.head(5).collect()
# ## SQL Operations
# We now show simple SQL operations. No extensive SQL knowledge is needed.
# ### Projection
# <img src="images/Projection.png" style="float:left;" width="150" height="750" />
dsp = dataset.select("ID", "AGE", "JOB", ('"AGE"*2', "TWICE_AGE"))
dsp.head(5).collect()  # collect() brings data to the client
# ### Filtering Data
# <img src="images/Filter.png" style="float:left;" width="200" height="100" />
dataset.filter('AGE > 60').head(10).collect()
# ### Sorting
# <img src="images/Sort.png" style="float:left;" width="200" height="100" />
dataset.filter('AGE>60').sort(['AGE']).head(2).collect()
# ### Grouping Data
# <img src="images/Grouping.png" style="float:left;" width="300" height="200" />
dataset.agg([('count', 'AGE', 'COUNT_OF_AGE')], group_by='AGE').head(4).collect()
# ### Simple Joins
# <img src="images/Join.png" style="float:left;" width="300" height="200" />
ds1 = dataset.select(["ID", "AGE"])
ds2 = dataset.select(["ID", "JOB"])
condition = '{}."ID"={}."ID"'.format(ds1.quoted_name, ds2.quoted_name)
dsj = ds1.join(ds2, condition)
dsj.select_statement
# ### Describing a dataframe
# <img src="images/Describe.png" style="float:left;" width="300" height="200" />
dataset.describe().collect()
dataset.describe().select_statement
# # ML API Wrapping Predictive Analytics Library
# ## Classification - Logistic Regression Example
# ### Bank dataset to determine if a customer would buy a CD
# The data is related with direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. A number of features such as age, kind of job, marital status, education level, credit default, existence of housing loan, etc. were considered. The classification goal is to predict whether the client will subscribe (yes/no) to a term deposit.
#
# More information regarding the data set is at https://archive.ics.uci.edu/ml/datasets/bank+marketing#.
#
# <font color='blue'>__The objective is to demonstrate the use of logistic regression and to tune the hyperparameters enet_lambda and enet_alpha.__</font>
#
# ### Attribute Information:
#
# #### Input variables:
# ##### Bank client data:
# 1. age (numeric)
# 2. job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')
# 3. marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)
# 4. education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')
# 5. default: has credit in default? (categorical: 'no','yes','unknown')
# 6. housing: has housing loan? (categorical: 'no','yes','unknown')
# 7. loan: has personal loan? (categorical: 'no','yes','unknown')
#
# ##### Related with the last contact of the current campaign:
# 8. contact: contact communication type (categorical: 'cellular','telephone')
# 9. month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')
# 10. day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')
# 11. duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
#
# ##### Other attributes:
# 12. campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
# 13. pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
# 14. previous: number of contacts performed before this campaign and for this client (numeric)
# 15. poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')
#
# ##### Social and economic context attributes:
# 16. emp.var.rate: employment variation rate - quarterly indicator (numeric)
# 17. cons.price.idx: consumer price index - monthly indicator (numeric)
# 18. cons.conf.idx: consumer confidence index - monthly indicator (numeric)
# 19. euribor3m: euribor 3 month rate - daily indicator (numeric)
# 20. nr.employed: number of employees - quarterly indicator (numeric)
#
# #### Output variable (desired target):
# 21. y - has the client subscribed a term deposit? (binary: 'yes','no')
#
# ### Load the data set and create data frames
from hana_ml import dataframe
from hana_ml.algorithms.pal import linear_model
from hana_ml.algorithms.pal import clustering
from hana_ml.algorithms.pal import trees
import numpy as np
import matplotlib.pyplot as plt
import logging
from IPython.core.display import Image, display
from data_load_utils import DataSets, Settings
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, training_tbl, validation_tbl, test_tbl = DataSets.load_bank_data(connection_context, force=False, batch_size=50000)
full_set = connection_context.table(full_tbl)
training_set = connection_context.table(training_tbl)
validation_set = connection_context.table(validation_tbl)
test_set = connection_context.table(test_tbl)
# ### Let us look at some rows
training_set.head(5).collect()
# # Create Model and Tune Hyperparameters
# Try different hyperparameters and see which combination is best.
# The results are stored in a list called res, which can then be used to visualize them.
#
# _The variable "quick" limits the search to only a few values, to avoid running the code below for a long time._
#
features = ['AGE','JOB','MARITAL','EDUCATION','DBM_DEFAULT', 'HOUSING','LOAN','CONTACT','DBM_MONTH','DAY_OF_WEEK','DURATION','CAMPAIGN','PDAYS','PREVIOUS','POUTCOME','EMP_VAR_RATE','CONS_PRICE_IDX','CONS_CONF_IDX','EURIBOR3M','NREMPLOYED']
label = "LABEL"
quick = True
enet_lambdas = np.linspace(0.01,0.02, endpoint=False, num=1) if quick else np.append(np.linspace(0.01,0.02, endpoint=False, num=4), np.linspace(0.02,0.02, num=5))
enet_alphas = np.linspace(0, 1, num=4) if quick else np.linspace(0, 1, num=40)
res = []
for enet_alpha in enet_alphas:
for enet_lambda in enet_lambdas:
lr = linear_model.LogisticRegression(connection_context, solver='Cyclical', tol=0.000001, max_iter=10000,
stat_inf=True,pmml_export='multi-row', lamb=enet_lambda, alpha=enet_alpha,
class_map0='no', class_map1='yes')
lr.fit(training_set, features=features, label=label)
accuracy_val = lr.score(validation_set, 'ID', features, label)
res.append((enet_alpha, enet_lambda, accuracy_val, lr.coef_))
# ## Graph the results
# Plot the accuracy on the validation set against the hyperparameters.
#
# This is only done if all the combinations are tried.
# %matplotlib inline
if not quick:
arry = np.asarray(res)
fig = plt.figure(figsize=(10,10))
plt.title("Validation accuracy for training set with different lambdas")
ax = fig.add_subplot(111)
most_accurate_lambda = arry[np.argmax(arry[:,2]),1]
best_accuracy_arg = np.argmax(arry[:,2])
for lamda in enet_lambdas:
if lamda == most_accurate_lambda:
ax.plot(arry[arry[:,1]==lamda][:,0], arry[arry[:,1]==lamda][:,2], label="%.3f" % round(lamda,3), linewidth=5, c='r')
else:
ax.plot(arry[arry[:,1]==lamda][:,0], arry[arry[:,1]==lamda][:,2], label="%.3f" % round(lamda,3))
plt.legend(loc=1, title="Legend (Lambda)", fancybox=True, fontsize=12)
ax.set_xlabel('Alpha', fontsize=12)
ax.set_ylabel('Accuracy', fontsize=12)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.grid()
plt.show()
print("Best accuracy: %.4f" % (arry[best_accuracy_arg][2]))
print("Value of alpha for maximum accuracy: %.3f\nValue of lambda for maximum accuracy: %.3f\n" % (arry[best_accuracy_arg][0], arry[best_accuracy_arg][1]))
else:
display(Image('images/bank-data-hyperparameter-tuning.png', width=800, unconfined=True))
print("Best accuracy: 0.9148")
print("Value of alpha for maximum accuracy: 0.769")
print("Value of lambda for maximum accuracy: 0.010")
# # Predictions on test set
# Let us do the predictions on the test set using these values of alpha and lambda
alpha = 0.769
lamda = 0.01
lr = linear_model.LogisticRegression(connection_context, solver='Cyclical', tol=0.000001, max_iter=10000,
stat_inf=True,pmml_export='multi-row', lamb=lamda, alpha=alpha,
class_map0='no', class_map1='yes')
lr.fit(training_set, features=features, label=label)
# ## Look at the predictions
result_df = lr.predict(test_set, 'ID')
result_df.filter('"CLASS"=\'no\'').head(5).collect()
# ## What about the final score?
lr.score(test_set, 'ID')
# # KMeans Clustering Example
# A data set that identifies different types of irises is used to demonstrate KMeans in SAP HANA.
# ## Iris Data Set
# The data set used is from the University of California, Irvine (https://archive.ics.uci.edu/ml/datasets/iris). It contains attributes of iris plants. There are three species of iris in the data.
# <table>
# <tr><td>Iris Setosa</td><td><img src="images/Iris_setosa.jpg" title="Iris Setosa" style="float:left;" width="300" height="50" /></td>
# <td>Iris Versicolor</td><td><img src="images/Iris_versicolor.jpg" title="Iris Versicolor" style="float:left;" width="300" height="50" /></td>
# <td>Iris Virginica</td><td><img src="images/Iris_virginica.jpg" title="Iris Virginica" style="float:left;" width="300" height="50" /></td></tr>
# </table>
#
# The data contains the following attributes for various flowers:
# <table align="left"><tr><td>
# <li align="top">sepal length in cm</li>
# <li align="left">sepal width in cm</li>
# <li align="left">petal length in cm</li>
# <li align="left">petal width in cm</li>
# </td><td><img src="images/sepal_petal.jpg" style="float:left;" width="200" height="40" /></td></tr></table>
#
# Although the flower is identified in the data set, we will cluster the data set into 3 clusters, since we know there are three different flowers. The hope is that each cluster will correspond to one of the flowers.
#
# A different notebook will use a classification algorithm to predict the type of flower based on the sepal and petal dimensions.
# ### Load the data set and create data frames
from hana_ml import dataframe
from hana_ml.algorithms.pal import clustering
import numpy as np
import pandas as pd
import logging
import itertools
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d, Axes3D
from data_load_utils import DataSets, Settings
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, training_tbl, validation_tbl, test_tbl = DataSets.load_iris_data(connection_context, force=False, batch_size=50000)
full_set = connection_context.table(full_tbl)
# ### Let's check how many SPECIES are in the data set.
full_set.distinct("SPECIES").collect()
# # Create Model
# The lines below show the ease with which clustering can be done.
# Set up the features and labels for the model and create the model
features = ['SEPALLENGTHCM','SEPALWIDTHCM','PETALLENGTHCM','PETALWIDTHCM']
label = ['SPECIES']
kmeans = clustering.KMeans(connection_context, thread_ratio=0.2, n_clusters=3, distance_level='euclidean',
max_iter=100, tol=1.0E-6, category_weights=0.5, normalization='min_max')
predictions = kmeans.fit_predict(full_set, 'ID', features).collect()
predictions.head(5)
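The `normalization='min_max'` option rescales each feature to [0, 1] before distances are computed, so sepal and petal measurements on different scales contribute comparably. Conceptually it works like this local numpy sketch (a simplification, not the PAL implementation):

```python
import numpy as np

# Toy feature column in cm; min-max scaling maps the minimum to 0
# and the maximum to 1, with everything else linearly in between.
x = np.array([4.3, 5.8, 7.9])  # e.g. sepal lengths
x_scaled = (x - x.min()) / (x.max() - x.min())
print(x_scaled)  # [0.     0.4167 1.    ] approximately
```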
# # Plot the data
def plot_kmeans_results(data_set, features, predictions):
# use this to estimate which flower each CLUSTER_ID represents
# ideally each species would map onto a single cluster (50-50-50), so any deviation indicates mis-clusterings
class_colors = {0: 'r', 1: 'b', 2: 'k'}
predictions_colors = [class_colors[p] for p in predictions['CLUSTER_ID'].values]
red = plt.Line2D(range(1), range(1), c='w', marker='o', markerfacecolor='r', label='Iris-virginica', markersize=10, alpha=0.9)
blue = plt.Line2D(range(1), range(1), c='w', marker='o', markerfacecolor='b', label='Iris-versicolor', markersize=10, alpha=0.9)
black = plt.Line2D(range(1), range(1), c='w', marker='o', markerfacecolor='k', label='Iris-setosa', markersize=10, alpha=0.9)
for x, y in itertools.combinations(features, 2):
plt.figure(figsize=(10,5))
plt.scatter(data_set[[x]].collect(), data_set[[y]].collect(), c=predictions_colors, alpha=0.6, s=70)
plt.grid()
plt.xlabel(x, fontsize=15)
plt.ylabel(y, fontsize=15)
plt.tick_params(labelsize=15)
plt.legend(handles=[red, blue, black])
plt.show()
# %matplotlib inline
# note: %matplotlib inline renders static figures; use %matplotlib notebook for an interactive 3d plot
sizes=10
for x, y, z in itertools.combinations(features, 3):
fig = plt.figure(figsize=(8,5))
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(data_set[[x]].collect(), data_set[[y]].collect(), data_set[[z]].collect(), c=predictions_colors, s=70)
plt.grid()
ax.set_xlabel(x, labelpad=sizes, fontsize=sizes)
ax.set_ylabel(y, labelpad=sizes, fontsize=sizes)
ax.set_zlabel(z, labelpad=sizes, fontsize=sizes)
ax.tick_params(labelsize=sizes)
plt.legend(handles=[red, blue, black])
plt.show()
print(pd.concat([predictions, full_set[['SPECIES']].collect()], axis=1).groupby(['SPECIES','CLUSTER_ID']).size())
# %matplotlib inline
plot_kmeans_results(full_set, features, predictions)
# # Exploratory Data Analysis and Visualization
# ## Titanic Data Set (~1K rows)
# This dataset is from https://github.com/awesomedata/awesome-public-datasets/tree/master/Datasets
from hana_ml import dataframe
from hana_ml.algorithms.pal import trees
import pandas as pd
import matplotlib.pyplot as plt
import time
from hana_ml.visualizers.eda import EDAVisualizer
from data_load_utils import DataSets, Settings
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, training_tbl, validation_tbl, test_tbl = DataSets.load_titanic_data(connection_context, force=True, batch_size=50000)
# Create the HANA dataframe and point it to the full table.
data = connection_context.table(full_tbl)
data = data.fillna(25, ['AGE'])
data.head(5).collect()
data.dtypes()
# ### Histogram plot for AGE distribution
bins=15
f = plt.figure(figsize=(bins*1.5, bins*0.5))
ax1 = f.add_subplot(121)
ax2 = f.add_subplot(122)
start = time.time()
eda = EDAVisualizer(ax1)
ax1, dist_data1 = eda.distribution_plot(data, column="AGE", bins=bins, title="Distribution of AGE (All)")
eda = EDAVisualizer(ax2)
ax2, dist_data2 = eda.distribution_plot(data.filter('SURVIVED=1'), column="AGE", bins=bins, title="Distribution of AGE (Survived)")
end = time.time()
plt.show()
print("Time: {}s. Time taken to do this by getting the data from the server was 0.86s".format(round(end-start, 2)))
# ### Pie plot for PCLASS (passenger class) distribution
f = plt.figure(figsize=(20,10))
ax1 = f.add_subplot(121)
ax2 = f.add_subplot(122)
start = time.time()
eda = EDAVisualizer(ax1)
ax1, pie_data = eda.pie_plot(data, column="PCLASS", title="Proportion of passengers in each class")
eda = EDAVisualizer(ax2)
ax2, pie_data = eda.pie_plot(data.filter('SURVIVED=1'), column="PCLASS", title="Proportion of passengers in each class who survived")
end = time.time()
plt.show()
print("Time: {}s. Time taken to do this by getting the data from the server was 0.88s".format(round(end-start, 2)))
# ### Correlation plot - Look at all numeric columns
f = plt.figure(figsize=(10,10))
ax1 = f.add_subplot(111)
start = time.time()
eda = EDAVisualizer(ax1)
ax1, corr = eda.correlation_plot(data)
end = time.time()
plt.show()
print("Time: {}s. Time taken to do this by getting the data from the server was 2s".format(round(end-start, 2)))
# ### Performance Comparison
# +
# Box plot time for the large data set is 1600s!
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
col_names = ['chart', 'dataSize', 'noDataframe', 'withDataframe']
comparison_df = pd.DataFrame(columns = col_names)
comparison_df.loc[len(comparison_df)] = ['Distribution', '1K', 0.86, 0.14]
comparison_df.loc[len(comparison_df)] = ['Pie', '1K', 0.88, 0.09]
comparison_df.loc[len(comparison_df)] = ['Correlation', '1K', 2.0, 2.78]
comparison_df.loc[len(comparison_df)] = ['Distribution', '8K', 7.5, 0.18]
comparison_df.loc[len(comparison_df)] = ['Pie', '8K', 7.6, 0.26]
comparison_df.loc[len(comparison_df)] = ['Correlation', '8K', 9.2, 2.1]
comparison_df.loc[len(comparison_df)] = ['Distribution', '500K', 360, 0.29]
comparison_df.loc[len(comparison_df)] = ['Pie', '500K', 450, 0.23]
comparison_df.loc[len(comparison_df)] = ['Correlation', '500K', 400, 4.3]
comparison_df.loc[len(comparison_df)] = ['Distribution', '1M', 950, 0.33]
comparison_df.loc[len(comparison_df)] = ['Pie', '1M', 940, 0.22]
comparison_df.loc[len(comparison_df)] = ['Correlation', '1M', 950, 6.28]
comparison_df['noDataframe'] = np.log10(comparison_df['noDataframe']*1000)
comparison_df['withDataframe'] = np.log10(comparison_df['withDataframe']*1000)
#comparison_df[comparison_df['chart'] == 'Distribution']['noDataframe']
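The two conversions above put the timings on a log scale: seconds are first converted to milliseconds, then base-10 logs are taken, so for example 0.86 s becomes log10(860) ≈ 2.93. A quick check of the same arithmetic on a few of the values from the table:

```python
import numpy as np

# Timings in seconds -> log10 of milliseconds, as done for the bar chart.
seconds = np.array([0.86, 7.5, 950.0])
log_ms = np.log10(seconds * 1000)
print(log_ms.round(2))  # roughly [2.93 3.88 5.98]
```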
# +
f = plt.figure(figsize=(15,10))
ax = f.add_subplot(111)
N = 4
width = 0.10
ind = np.arange(N)
ax.bar(ind, comparison_df[comparison_df['chart'] == 'Distribution']['noDataframe'], width, label='Distribution (No DF)')
ax.bar(ind + width, comparison_df[comparison_df['chart'] == 'Distribution']['withDataframe'], width, label='Distribution (DF)')
gap = 0.05
ax.bar(ind + gap + 2*width, comparison_df[comparison_df['chart'] == 'Pie']['noDataframe'], width, label='Pie (No DF)')
ax.bar(ind + gap + 3*width, comparison_df[comparison_df['chart'] == 'Pie']['withDataframe'], width, label='Pie (DF)')
ax.bar(ind + 2*gap + 4*width, comparison_df[comparison_df['chart'] == 'Correlation']['noDataframe'], width, label='Correlation (No DF)')
ax.bar(ind + 2*gap + 5*width, comparison_df[comparison_df['chart'] == 'Correlation']['withDataframe'], width, label='Correlation (DF)')
#plt.ylabel('Scores')
#plt.title('Scores by group and gender')
#ax.xticks(ind + width*2, comparison_df['dataSize'].unique())
ax.set_xticks(ind + width*2)
ax.set_xticklabels(comparison_df['dataSize'].unique(), fontsize=20)
ax.set_xlabel("Data Size (# Rows)", fontsize=20)
ax.set_yticks(range(7))  # tick i corresponds to log10(ms) = i
ax.set_yticklabels([0.001, .01, .1, 1, 10, 100, 1000], fontsize=20)
ax.set_ylabel("Time in seconds (Log Scale)", fontsize=20)
ax.legend(loc='best', fontsize=20)
plt.show()
# -
# # SUMMARY
# ## What we covered
# <li><b>Dataframes:</b> A reference to a relation in HANA. No need for deep SQL knowledge.</li>
# <li><b>HANA ML API:</b> Exploit HANA's ML capabilities through a scikit-learn-style Python interface.</li>
# <li><b>Exploratory Data Analysis and Visualization:</b> Analyze large data sets without the performance penalty of transferring data or running out of resources on the client.</li>
# <br>
# ## Main benefits
# <li>Ease of Use: For the data scientists.</li>
# <li>Performance: Orders of magnitude performance gains.</li>
# <li>Security: Centralized security.</li>
# <br>
# <br>
| Python-API/pal/notebooks/DKOM2019.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="LwGu_JmwKarP"
#hide
import pandas as pd
import numpy as np
projetos = pd.read_csv("http://dados.ufrn.br/dataset/e48162fa-0668-4098-869a-8aacfd177f9f/resource/3f12a9a4-7084-43e7-a4ac-091a8ae14020/download/projetos-de-pesquisa.csv", sep=';')
bolsistas = pd.read_csv("http://dados.ufrn.br/dataset/81608a4d-c76b-4758-a8d8-54be32209833/resource/d21c94fe-22ba-4cf3-89db-54d8e739c567/download/bolsas-iniciacao-cientifica.csv", sep=';')
# + colab={"base_uri": "https://localhost:8080/"} id="FsziYsqaPjH6" outputId="313eb684-1af9-4043-cbc9-ca9298f2a9c0"
#hide
projetos.columns
# + colab={"base_uri": "https://localhost:8080/"} id="ck_h0zq0PqWZ" outputId="da2e3067-f5ce-42c5-9e53-6832c6d87143"
#hide
bolsistas.columns
# + [markdown] id="CszVOMz0V7X3"
# Size of the data sets:
# + colab={"base_uri": "https://localhost:8080/"} id="9Pr9zEPhPu7X" outputId="ed2330a3-c768-4edc-ffe0-728f59f8cdf3"
#hide_input
print("Tabela de Projetos:", projetos.shape)
print("Tabela de Bolsistas", bolsistas.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="1w8Uuik4QNXH" outputId="34a9ad5b-12ef-47b3-bfbb-54ad2b048fd7"
#hide
projetos.dtypes
# + colab={"base_uri": "https://localhost:8080/"} id="YhKfnVSmP89X" outputId="6b25c103-295d-45da-f322-4aa8a93492e6"
#hide
bolsistas.dtypes
# + colab={"base_uri": "https://localhost:8080/"} id="m0TbS_5EVOk6" outputId="aa1c48d6-e94f-4074-cb99-92eeb4ca0a33"
#hide
print("Tabela de projetos de pesquisa\n", projetos.isna().sum(), "\n")
print("Tabela de bolsistas\n",bolsistas.isna().sum())
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="tLZq_k15S6Bu" outputId="9bcb37e1-54bb-41fa-959e-272ae604aede"
#hide
bolsistas_categoria = bolsistas.groupby('categoria', as_index=False).agg({"discente": "count"})
bolsistas_categoria
# + colab={"base_uri": "https://localhost:8080/", "height": 917} id="bwQ0wh7EUaRr" outputId="d9942b38-29ec-4615-db28-ec7857b5ab78"
#hide
bolsistas_tipo = bolsistas.groupby('tipo_de_bolsa', as_index=False).agg({"discente" : "count"})
bolsistas_tipo
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="2Apfm8GhUqYl" outputId="b00223d7-9231-4c08-a97d-9b99ec275429"
#hide
bolsistas_status = bolsistas.groupby('status', as_index=False).agg({"codigo_projeto": "count"})
bolsistas_status
# + colab={"base_uri": "https://localhost:8080/", "height": 320} id="l4MQDSFRVAq0" outputId="238ccd77-005f-4f27-a110-72ab7e3de40d"
#hide
bolsistas.groupby('ano').discente.count().plot(kind='bar')
# + colab={"base_uri": "https://localhost:8080/", "height": 638} id="bIKkoQ4aafIS" outputId="d0ccca10-c4a6-4fe4-cb18-453193ff7466"
#hide
bolsistas_ano = bolsistas.groupby('ano', as_index=False).agg({"discente": "count"})
bolsistas_ano
# + colab={"base_uri": "https://localhost:8080/"} id="qyNXYyALBK9Z" outputId="7c2d4a40-79cf-425e-cc9e-ab991c78e640"
#hide
grupo_projetos = bolsistas.groupby("id_projeto_pesquisa")
grupo_projetos
# + colab={"base_uri": "https://localhost:8080/"} id="H7qj94b0_M9v" outputId="f4d1fb37-4bcd-4cd7-897f-b6a00339d3cc"
#hide
df_projetos = projetos.drop(columns= ['id_coordenador', 'coordenador', 'edital', 'objetivos_desenvolvimento_sustentavel'])
df_projetos.columns
# + id="j28e-L1xcGPu"
#hide
projetos_situacao = projetos.groupby('situacao', as_index=False).agg({"codigo_projeto":"count"})
projetos_categoria = projetos.groupby('categoria_projeto', as_index=False).agg({"codigo_projeto":"count"})
projetos_unidades = projetos.groupby('unidade', as_index=False).agg({"codigo_projeto":"count"})
# + [markdown] id="YplLpI6hUiwv"
# # Data Mining - Overview of Research Projects at UFRN
# + id="0RbbRj_9UlJV"
#hide
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# + id="_PsO_dwcWJIG"
#hide
import cufflinks as cf
# + id="iOrXrRomMWJs"
#hide
import plotly.express as px
import plotly.offline as py
import plotly.graph_objs as go
from plotly.subplots import make_subplots
# + id="Qb6P_8nN_NeU"
#hide
# value_counts() orders by frequency, so sort by year to keep x and y aligned
anos_projetos_counts = projetos['ano'].value_counts().sort_index()
anos_projetos = anos_projetos_counts.index.to_list()
qtd_projetos = anos_projetos_counts.to_list()
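Note that `unique()` returns years in order of first appearance while `value_counts()` orders by frequency, so pairing them directly can mis-label the bars. Taking both the index and the values from a single sorted `value_counts()` keeps them aligned (toy data for illustration):

```python
import pandas as pd

# Toy year column: 2019 appears 3x, 2018 2x, 2017 once.
anos = pd.Series([2019, 2018, 2019, 2017, 2019, 2018])

# sort_index() orders by year, so position i of the index matches
# position i of the counts.
counts = anos.value_counts().sort_index()
print(list(counts.index))   # [2017, 2018, 2019]
print(list(counts.values))  # [1, 2, 3]
```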
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="I7SgzNiu7Ioh" outputId="4fc18c1a-9db2-4ad3-8759-0356086cacc5"
#hide_input
trace = go.Bar( x = anos_projetos,
y = qtd_projetos,
marker = dict(color = 'rgba(50, 75, 150, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
data = [trace]
layout = go.Layout(title='Quantidade de Projetos por Ano',
yaxis={'title':'Quantidade de Projetos'},
xaxis={'title': 'Ano'})
fig = go.Figure(data = data, layout = layout)
py.iplot(fig)
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="59N1FXLdd-KX" outputId="1e1fc99b-6da2-4cf0-ec17-a5595febb513"
#hide_input
trace = go.Bar( x = bolsistas_ano.ano,
y = bolsistas_ano.discente,
marker = dict(color = 'rgba(50, 100, 50, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)))
data = [trace]
layout = go.Layout(title='Quantidade de Bolsistas por Ano',
yaxis={'title':'Quantidade de Bolsistas'},
xaxis={'title': 'Ano'})
fig = go.Figure(data = data, layout = layout)
py.iplot(fig)
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="MV9fDEWsewaW" outputId="83088d3e-4fe9-446d-c71f-c1bb3ad0ba34"
#hide
fig = px.pie(bolsistas_categoria, values='discente', names='categoria', title='Quantidade de Bolsistas por Categorias')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="7fEEvB4ylZNx" outputId="fd5d1eb2-0b8b-498e-b01f-e8a8c2bd7d5c"
#hide
labels = bolsistas_categoria['categoria'].to_list()
values = bolsistas_categoria['discente'].to_list()
fig = go.Figure(data=[go.Pie(labels=labels, values=values, pull=[0, 0.2])])
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="GTp09NxcguQb" outputId="8fa003f8-590d-44bf-9893-0468166c4ccc"
#hide_input
colors = ['dodgerblue', 'azure']
fig = px.pie(bolsistas_categoria, values='discente', names='categoria', title='Quantidade de Bolsistas por Categoria')
fig.update_traces(textfont_size=13, marker=dict(colors=colors, line=dict(color='#000000', width=1.1)))
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="tX3v0H2hffVc" outputId="5c2f440f-a288-4a39-e311-f37e33d47f9e"
#hide
fig = px.pie(bolsistas_tipo, values='discente', names='tipo_de_bolsa', title='Quantidade de Bolsistas por Tipo de Bolsa')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="Mj_E_7wEgUlO" outputId="933822da-c0cf-4218-ab9f-4a97732d5c45"
#hide_input
fig = px.pie(bolsistas_tipo, values='discente', names='tipo_de_bolsa',
title='Quantidade de Bolsistas por Tipo de Bolsa')
fig.update_traces(textposition='inside')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 111} id="TL_C7SN7ljN-" outputId="189b9cd5-9740-4c01-9a56-1f6b91179467"
#hide
bolsistas_status
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="q5eqGl85lpXp" outputId="cdc183c5-5f3b-4cc6-dce1-7cbc99ae93d8"
#hide_input
labels = bolsistas_status['status'].to_list()
values = bolsistas_status['codigo_projeto'].to_list()
blue_colors = ['rgb(33, 75, 99)', 'rgb(79, 129, 102)', 'rgb(151, 179, 100)',
'rgb(175, 49, 35)', 'rgb(36, 73, 147)']
fig = go.Figure(data=[go.Pie(labels=labels, values=values, pull=[0, 0.2], marker_colors=blue_colors)])
fig.update(layout_title_text='Quantidade de Projetos por Status')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="IOmtVRcOn2Yx" outputId="c5fd3d3d-41f1-4ac3-eefc-9c3d2bf00638"
#hide
projetos_situacao
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="tgQWnEtmn5hK" outputId="a649edaf-ad41-4252-98e4-b50f3f494c4a"
#hide
projetos_unidades
# + id="fnJLsw1UBFHY"
#hide
projetos_unidades.sort_values(by=['codigo_projeto'], inplace=True, ascending=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 328} id="vILXEGl8qQcY" outputId="d113888d-466a-45ba-ddf4-755ef9a39802"
#hide
principais_unidades = projetos_unidades[(projetos_unidades['codigo_projeto'] > 500)]
principais_unidades
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="432icQFTrwGu" outputId="389d5ace-4154-4bcd-9c5d-4c88b79d20ed"
#hide
fig = px.scatter(principais_unidades, x="unidade", y="codigo_projeto",
color="unidade",
size='codigo_projeto')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="cUjZNu37sm4E" outputId="d5c1bb37-624a-4f51-9422-f523d2e0cbd4"
#hide_input
plot = go.Figure(data=[go.Scatter(
x = principais_unidades['unidade'].to_list(),
y = principais_unidades['codigo_projeto'].to_list(),
mode = 'markers',
marker=dict(
color = [1600, 1500, 1400, 1300, 1200, 1100, 1000, 800, 600],
size = [90, 80, 70, 60 ,50 ,40 ,30 ,20, 10],
showscale=True
)
)])
plot.show()
# + id="lAR_2Rs6rvoS"
#hide
#df = px.data.gapminder()
#fig = px.scatter_3d(principais_unidades, x='year', y='continent', z='pop', size='gdpPercap', color='lifeExp',
# hover_data=['country'])
#fig.update_layout(scene_zaxis_type="log")
#fig.show()
| _notebooks/2021-03-05-mineracao.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import tensorflow as tf
saved_model_dir = "../../data/osfstorage/deep-learning-qc/saved-models/report-model"
model = tf.keras.models.load_model(saved_model_dir)
def rename(model, layer, new_name):
def _get_node_suffix(name):
for old_name in old_nodes:
if old_name.startswith(name):
return old_name[len(name):]
old_name = layer.name
old_nodes = list(model._network_nodes)
new_nodes = []
for l in model.layers:
if l.name == old_name:
l._name = new_name
# vars(l).__setitem__('_name', new) # bypasses .__setattr__
new_nodes.append(new_name + _get_node_suffix(old_name))
else:
new_nodes.append(l.name + _get_node_suffix(l.name))
model._network_nodes = set(new_nodes)
# +
output_file = "../../../../paper/figures/deep-learning-qc/model.pdf"
# cnn_layer = model.get_layer("model")
# cnn_layer._name="cnn"
rename(model, model.get_layer("model"), "cnn")
tf.keras.utils.plot_model(
model,
to_file=output_file,
rankdir="TB",
show_shapes=True,
show_layer_names=True,
# expand_nested=True,
# show_layer_activations=True,
)
# +
image_model = model.get_layer("cnn")
output_file = "../../../../paper/figures/deep-learning-qc/image_model.pdf"
tf.keras.utils.plot_model(
image_model,
to_file=output_file,
rankdir="TB",
show_shapes=True,
show_layer_names=True,
# expand_nested=True,
# show_layer_activations=True,
)
# -
model.summary()
| notebooks/visualize_dl_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# +
import numpy as np
# we have an array a and we want to repeat it 5 times along the 3rd dimension
a = np.ones((20, 10))
# Broadcast: broadcast an array to a new shape
bc = np.broadcast_to(a.reshape(20, 10, 1), (20, 10, 5)) # repeat 5 times
# Repeat: repeat elements of an array
rp = a.reshape(20, 10, 1).repeat(5, axis=2)
#>>> x = np.array([[1,2],[3,4]])
#>>> np.repeat(x, 2)
#array([1, 1, 2, 2, 3, 3, 4, 4])
# Tile: construct array by repeating A the number of times given by reps
tl = np.tile(a.reshape(20, 10, 1), (1, 1, 5))
a.reshape(20, 10, 1)
# which is equivalent to
# a[:, :, np.newaxis]
# and
# np.expand_dims(a, axis=2)
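The three constructions give identical values; the one behavioural difference is that `broadcast_to` returns a read-only view (stride 0 on the new axis) rather than a copy. A self-contained check:

```python
import numpy as np

a = np.ones((20, 10))
bc = np.broadcast_to(a.reshape(20, 10, 1), (20, 10, 5))
rp = a.reshape(20, 10, 1).repeat(5, axis=2)
tl = np.tile(a.reshape(20, 10, 1), (1, 1, 5))

# All three hold the same values; only broadcast_to avoids a copy.
print(np.array_equal(bc, rp) and np.array_equal(bc, tl))  # True
print(bc.flags.writeable)  # False: broadcast_to views are read-only
```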
# +
import numpy as np
x=np.arange(5).reshape(1, 5).repeat(4, axis=0)
rows = np.array([0,3])
columns = np.array([0, 4])
x[np.ix_(rows, columns)]
x[...]
# y = x[1:8:2, : : -1]
# obj = (slice(1, 8, 2), slice(None, None, -1))
# z = x[obj]
# -
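`np.ix_` converts the two 1-D index arrays into an open mesh, so the result is the cross-product of the selected rows and columns (a 2x2 sub-array here) rather than two individual elements:

```python
import numpy as np

x = np.arange(5).reshape(1, 5).repeat(4, axis=0)  # every row is 0..4
rows = np.array([0, 3])
columns = np.array([0, 4])

# Open mesh: rows [0, 3] crossed with columns [0, 4].
sub = x[np.ix_(rows, columns)]
print(sub)  # [[0 4]
            #  [0 4]]
```

By contrast, `x[rows, columns]` would pick only the two elements `x[0, 0]` and `x[3, 4]`.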
file = open('test.txt', 'w')
file.close()  # close the handle; a `with` block would do this automatically
a = [1, 2., (2, 3, 4)]
import numpy as np
a = np.ones((10, 10, 10))
| numpy_calculation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/umerhasan17/mlds/blob/master/help/operations.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="EOkkLSC8fSMa" colab_type="code" colab={}
# Mount Google Drive
from google.colab import drive
from os.path import join
ROOT = '/content/drive/' # default for the drive
PROJ = 'My Drive/nlp/' # path to your project on Drive
drive.mount(ROOT) # we mount the drive at /content/drive
# + id="x3m0_rzW4plb" colab_type="code" outputId="c7ebe04e-6127-45dc-f81a-35ba8831fa4e" colab={"base_uri": "https://localhost:8080/", "height": 34}
# cd drive/My\ Drive/nlp
# + id="PuShLAaJIxj8" colab_type="code" outputId="f38c0f9d-068e-4610-9bc9-babc8143287f" colab={"base_uri": "https://localhost:8080/", "height": 153}
GIT_USERNAME = "umerhasan17" # replace with yours
GIT_REPOSITORY = "NLPzoo" # ...nah
PROJECT_PATH = join(ROOT, PROJ)
# # !mkdir "{PROJECT_PATH}" # in case we haven't created it already
# GIT_PATH = "https://{GIT_TOKEN}@github.com/{GIT_USERNAME}/{GIT_REPOSITORY}.git"
# # !git clone "{GIT_PATH}"
# # !mkdir ./temp
# # !git clone "{GIT_PATH}"
# # !mv ./temp/* "{PROJECT_PATH}"
# # !rm -rf ./temp
# # !rsync -aP --exclude=data/ "{PROJECT_PATH}"/* ./
# !git clone "https://github.com/umerhasan17/NLPzoo.git"
# !ls
# + id="2snZNRJNKMX4" colab_type="code" colab={}
| help/operations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="zJpHYEZ0o5qB"
# ## AlphabetSoupCharity_Optimizer 2
#
# - Added hidden layers
# - Increased the number of neurons per layer
# - Used the sigmoid activation in more layers because the result is binary
# - Used ModelCheckpoint and batches
#
#
# + [markdown] id="-Vjaj_kh_0Jb"
# ## Preprocessing
# + colab={"base_uri": "https://localhost:8080/"} id="5p97Y95Q8cIK" outputId="03199bc5-efac-4a1b-f3fe-066fac492c9e"
# Mount Google Drive, grant permissions, and copy the project path
from google.colab import drive
drive.mount('/content/drive')
# + id="vhqjqXZjo5qE"
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# to create check points
from tensorflow.keras.callbacks import ModelCheckpoint
# Plotting
import matplotlib.pyplot as plt
# + id="RkMYZ6qvYZ6q"
# Create a checkpoint path
checkpoint_path = "/content/drive/MyDrive/Colab Notebooks/Deep-Learning/checkpoints/weights.{epoch:02d}.hdf5"
# + colab={"base_uri": "https://localhost:8080/", "height": 496} id="a7VDO9rYo5qF" outputId="0f5bd150-697b-43e7-d9a2-c74d8040fe19"
# Import and read the charity_data.csv.
application_df = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Deep-Learning/Resources/charity_data.csv")
application_df.head()
# + id="RTU7Smy-o5qG" outputId="e517dc5c-2811-4b4f-8274-148c36a47cd8" colab={"base_uri": "https://localhost:8080/"}
application_df.shape
# + id="wUBE6sW4o5qH" outputId="20ce55f6-5a99-42da-94a6-636d376cdb25" colab={"base_uri": "https://localhost:8080/"}
application_df.info()
# + [markdown] id="v1v_NI5qo5qH"
# ### Data Preparation
# + id="-9F4v1coo5qI"
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df.drop(['EIN','NAME'], axis=1, inplace= True)
# + id="mJynnDlxo5qI" outputId="6a6ed364-e51d-4223-eb1d-737a5bfd3870" colab={"base_uri": "https://localhost:8080/", "height": 206}
application_df.head()
# + id="AsLnvJbqo5qJ" outputId="493c65c2-88df-427e-8530-5970ce66ce19" colab={"base_uri": "https://localhost:8080/"}
# Determine the number of unique values in each column.
application_df.nunique()
# + id="KhChMpYro5qJ" outputId="cadb9f82-d74a-4212-fc15-1009b32dd0b0" colab={"base_uri": "https://localhost:8080/"}
# Look at APPLICATION_TYPE value counts for binning
application_value = application_df['APPLICATION_TYPE'].value_counts()
application_value
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="haH7XWcVsEpT" outputId="f074723a-4b22-497d-c47b-9137f9163d01"
# Visualize the value counts of APPLICATION_TYPE
application_value.plot.density()
# + id="CYexIX6Ko5qK" outputId="286d5f61-f9d1-470d-c1c0-9d74f1d3141f" colab={"base_uri": "https://localhost:8080/"}
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = list(application_value[application_value < 500].index)
application_types_to_replace
# + id="IHkcXrOoo5qL" outputId="22853e3e-4044-411b-d410-72e7b3ef8aff" colab={"base_uri": "https://localhost:8080/"}
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# + id="Ja_mW075o5qL" outputId="b9654603-84e5-4f49-a9bb-1d4e11d59fd3" colab={"base_uri": "https://localhost:8080/"}
# Look at CLASSIFICATION value counts for binning
classification_count_binning = application_df['CLASSIFICATION'].value_counts()
classification_count_binning
# + id="tvEDuHVpo5qM" outputId="e56f6f0f-f8ab-45a6-de29-e4544581abd6" colab={"base_uri": "https://localhost:8080/"}
# Keep CLASSIFICATION values with more than a handful of occurrences (a cutoff of 10 is used here)
counts_classification = classification_count_binning[classification_count_binning > 10]
counts_classification.shape
# + id="Jrajfqgz4BoY" outputId="b7ed5562-bd38-492f-f513-9f2fd28e6f54" colab={"base_uri": "https://localhost:8080/"}
counts_classification
# + id="DTd7g_wxryAY" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="df2fe46c-9a8a-4b73-89b5-2172b7818a27"
# Plot the density of the CLASSIFICATION value counts to help pick a binning cutoff (here, 1000)
counts_classification.plot(kind='density')
# + id="w8Ggy68Po5qM" outputId="56b36494-2755-44ca-e0ee-64ff0fa8e86e" colab={"base_uri": "https://localhost:8080/"}
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
classifications_to_replace = list(classification_count_binning[classification_count_binning < 1000].index)
classifications_to_replace
# + colab={"base_uri": "https://localhost:8080/"} id="vpgzaA3WxnSo" outputId="810e36bd-67cf-42c5-e16a-0b2ca0560d42"
len(classifications_to_replace)
# + id="INPTLyM4o5qN" outputId="80c33c3e-4a6e-4b66-d048-b170f2bfd30a" colab={"base_uri": "https://localhost:8080/"}
# Replace in dataframe
# taking all those classifications <1000 and put in category as "Other"
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
# + id="RyXkWnIJo5qN" outputId="b624472e-42a6-4539-f1c9-5f98c307651e" colab={"base_uri": "https://localhost:8080/", "height": 288}
# Convert categorical data to numeric with `pd.get_dummies`
# One hot encoding
application_df = pd.get_dummies(application_df,dtype=float)
application_df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="_ue_LwHLc18V" outputId="4bd79161-0277-4e6d-be5a-8c060020d1ee"
application_df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="jZ_5V022lyh5" outputId="81a7dd32-f360-4fce-ea55-a32c232736a7"
application_df.columns
# + id="n05tHaNfo5qN" outputId="a7ebaa13-4a7b-48b5-b35d-85a591f85cd0" colab={"base_uri": "https://localhost:8080/"}
# Split our preprocessed data into our features and target arrays
y = application_df['IS_SUCCESSFUL'].values
y
# + id="WGXaxl_Vo5qN" colab={"base_uri": "https://localhost:8080/"} outputId="efc54a37-6eeb-40d4-f342-8eb748b5c183"
# Drop the 'IS_SUCCESSFUL' from the features
X = application_df.drop('IS_SUCCESSFUL', axis=1).values
X
# + id="4tHp9zwoo5qO"
# Split the preprocessed data into a training and testing dataset
# (train_test_split defaults to a 75% train / 25% test split)
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state = 42)
# + id="ZRXGVYtDo5qO" outputId="55748ad0-298b-4386-dda6-9ac436cb4b95" colab={"base_uri": "https://localhost:8080/"}
X_train.shape, y_train.shape
# + id="X87XGVqno5qO"
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# + [markdown] id="sEBAiX9Qo5qO"
# ## Compile, Train and Evaluate the Model
# + id="v8_Gv3V2o5qO" outputId="e23d93f5-2d26-4332-e324-935414df4fca" colab={"base_uri": "https://localhost:8080/"}
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
# Increased to three hidden layers
# Use sigmoid in more layers because this is binary classification; the result should be 0 or 1
number_input_features = len( X_train_scaled[0])
hidden_nodes_layer1=180
hidden_nodes_layer2=50
hidden_nodes_layer3=10
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation='relu'))
# Second hidden layer. sigmoid
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation='sigmoid'))
# Third hidden layer. sigmoid
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation='sigmoid'))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 533} id="EFb7ay5S3BTX" outputId="118ba5ec-5460-43ce-c350-39b7e71ee7e8"
from tensorflow.keras.utils import plot_model
# summarize the model
images_dir = '/content/drive/MyDrive/Colab Notebooks/Deep-Learning/Images'
plot_model(nn, f'{images_dir}/AlphabetSoupModelOptimized2.png', show_shapes=True)
# + id="PZh9sC7go5qO"
# Compile the model
nn.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics=['accuracy', tf.keras.metrics.Recall()])
# + id="3EgUdMMBf-Cs"
# Callback to save checkpoints during training (used for this optimized model)
# The save_freq below works out to roughly every 5 epochs
batch_size=100
cp_callback = ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq= int(batch_size*13))
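`save_freq` counts batches, not epochs. Assuming roughly 25,700 training rows (the charity data set has about 34,000 records and the default split keeps 75% for training; the exact count may differ) and `batch_size=100`, one epoch is about 257 batches, so `batch_size * 13 = 1300` batches is close to one checkpoint every 5 epochs:

```python
# Rough arithmetic behind save_freq (the row count is an assumption).
rows = 25_700
batch_size = 100
batches_per_epoch = -(-rows // batch_size)  # ceiling division -> 257
save_freq = batch_size * 13                 # 1300 batches between saves
print(round(save_freq / batches_per_epoch, 1))  # ~5.1 epochs per checkpoint
```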
# + id="Psjz4UT0pvCi" colab={"base_uri": "https://localhost:8080/"} outputId="090f7ec6-ad26-4a63-d9e1-ec9dd50b7cfc"
# Train the model.
fit_model = nn.fit(X_train_scaled,y_train, epochs=100, batch_size=batch_size, callbacks=[cp_callback])
#fit_model = nn.fit(X_train_scaled,y_train, epochs=150)
# + id="Y5SQPcgoo5qP" outputId="94500047-8059-4160-b9f4-28879a8cf166" colab={"base_uri": "https://localhost:8080/"}
# Evaluate the model using the test data
model_loss, model_accuracy, model_Recall = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}, Recall: {model_Recall}")
# + [markdown] id="9xVosr8w7LyM"
# ### History and Plots
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="hbzcH4sYesy_" outputId="375ac248-d7c6-445a-dfea-04eb11b5601d"
# list all data in history
print(fit_model.history.keys())
# + id="MyAvBvAU7VJS" colab={"base_uri": "https://localhost:8080/", "height": 322} outputId="df10c471-efbf-439c-f4e5-0dec0822152f"
# Create a DataFrame containing training history
history_df = pd.DataFrame(fit_model.history, index=range(1,len(fit_model.history["loss"])+1))
# Plot the accuracy
# Visualize Loss/Accuracy
from google.colab import files
images_dir = '/content/drive/MyDrive/Colab Notebooks/Deep-Learning/Images'
fig, ax = plt.subplots()
plt.title('Optimized Model 2: Accuracy and Loss\n', fontsize=20)
loss = ax.plot(history_df["loss"], color="red", label="Loss")
ax.set_xlabel("Epoch")
ax.set_ylabel("Loss")
ax2 = ax.twinx()
acc = ax2.plot(history_df["accuracy"], color="blue", label="Accuracy")
ax2.set_ylabel("Accuracy")
curves = loss + acc
labs = [l.get_label() for l in curves]
ax.legend(curves, labs, loc="center right")
plt.savefig(f"{images_dir}/AlphabetSoupCharity_Optimizer2_Acc_Loss.png")
files.download(f"{images_dir}/AlphabetSoupCharity_Optimizer2_Acc_Loss.png")
plt.show()
# + id="GhZWahmVo5qP" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="6695e62a-2f73-4ebc-9bb1-fc7dd8889e96"
# Export the model to an HDF5 file. The file stores the full network structure and weights, so the model can be reloaded later and applied to other datasets.
nn.save('/content/drive/MyDrive/Colab Notebooks/Deep-Learning/Models/AlphabetSoupCharity_Optimizer2.h5')
files.download('/content/drive/MyDrive/Colab Notebooks/Deep-Learning/Models/AlphabetSoupCharity_Optimizer2.h5')
# + id="oweZf_2tHJkk"
# get the best fit model
| Deep-Learning/AlphabetSoupCharity_Optimizer2.ipynb |
# + echo=false message=false tags=["remove_output"] warning=false
library(reticulate)
library(exams)
# + echo=false name="data generation" tags=["remove_output"] language="python"
# import random
# import numpy as np
# import array
# r = random.sample([-.97, 0, .5, .97], 1)[0]
# if np.random.uniform(size=1) < 1/3:
# mx=0
# my=0
# sx=1
# sy=1
# else:
# mx= random.sample(list(10*np.arange(-5,5)),1)[0]
# my= random.sample(list(10*np.arange(0,5)),1)[0]
# sx= random.sample([1, 10, 20],1)[0]
# sy= random.sample([1,10,20],1)[0]
#
# b= r* (sy/sx)
# a= my - b*mx
# x= np.random.normal(mx,sx,size=200)
# y= b*x + np.random.normal(a,sy*np.sqrt(1-r**2),200)
# questions = ['','','','']
# explanations= ['','','','']
# solutions= [True,True,True,True]
#
# if (np.random.uniform(size=1)[0] < .5):
# questions[0] = 'The scatterplot is standardized.'
# solutions[0] = mx==0 and my==0 and sx==1 and sy==1
# else:
# questions[0]= "The slope of the regression line is about 1."
# solutions[0]= abs(b-1) < .1
#   explanations[0]= "The slope of the regression line is " + str(b)
#
# if (questions[0] == 'The scatterplot is standardized.'):
#   if (solutions[0]):
#     explanations[0]= "X and Y have both mean 0 and variance 1."
#   else:
#     explanations[0]= "The scatterplot is not standardized, because X and Y do not both have mean 0 and variance 1."
#
#
# if(np.random.uniform(size=1)[0] < .5):
#   questions[1] = "The absolute value of the correlation coefficient is at least .8."
# solutions[1] = abs(r) >= .8
# else:
# questions[1]="The absolute value of the correlation coefficient is at most .8."
# solutions[1]= abs(r) <= .8
#
# if(abs(r) >= .9):
# explanations[1]="A strong association between the variables is given in the scatterplot, so the absolute correlation coefficient is larger than .8."
# elif(abs(r)==0):
# explanations[1]="No association between the variables is observed in the scatterplot. This implies a correlation coefficient close to 0."
# else:
#   explanations[1]="Only a slight positive association between the variables is observable in the scatterplot."
#
# if(np.random.uniform(size=1) <.5):
#   questions[2] = "The standard deviation of X is at least 6."
#   solutions[2] = sx >= 6
#   explanations[2] = "The standard deviation of X is equal to " + str(sx)
# else:
# questions[2]= "The standard deviation of Y is at least 6."
# solutions[2]= sy>=6
#   explanations[2]= "The standard deviation of Y is equal to " + str(sy)
#
# if(np.random.uniform(size=1) < .5):
# questions[3]= "The mean of X is at most 5."
# solutions[3]= mx<=5
#   explanations[3]="The mean of X is about equal to " + str(mx)
# else:
#   questions[3]="The mean of Y is at least 30."
#   solutions[3]= my>= 30
#   explanations[3]= "The mean of Y is equal to " + str(my)
#
# o= random.sample(list(np.arange(0,4)),4)
# questions=[questions[i] for i in o]
# solutions=[solutions[i] for i in o]
# explanations= [explanations[i] for i in o]
#
#
#
# -
# Question
# ========
#
# The following figure shows a scatterplot. Which of the following statements are correct?
# + echo=false
plot(py$x,py$y,xlab="x",ylab="y")
# + name="questionlist" results="asis" tags=["remove_input"]
answerlist(py$questions, markup = "markdown")
# -
# Solution
# ========
# + name="solutionlist" results="asis" tags=["remove_input"]
answerlist(ifelse(py$solutions, "True", "False"), py$explanations, markup = "markdown")
# -
# Meta-information
# ================
# extype: mchoice
# exsolution: `r mchoice2string(py$solutions)`
# exname: Scatterplot
| Examples_JN/withPlotFromJup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.signal import find_peaks
pd.plotting.register_matplotlib_converters()
# %matplotlib inline
plt.rcParams['figure.dpi'] = 100
plt.rcParams['axes.grid'] = True
plt.style.use('seaborn')
# -
data_1 = pd.read_csv('dataset1.csv')
L = data_1['L']
W = data_1['W']
data_1['NA'] = W / np.sqrt(4*L**2 + W**2)
data_1['θ'] = np.degrees(np.arcsin(data_1['NA']))
data_1['2θ'] = 2*data_1['θ']
data_1.index += 1
# +
data_2 = pd.read_csv('dataset2.csv')
L = data_2['L']
W = data_2['W']
data_2['NA'] = W / np.sqrt(4*L**2 + W**2)
data_2['θ'] = np.degrees(np.arcsin(data_2['NA']))
data_2['2θ'] = 2*data_2['θ']
data_2.index += 1
# -
data_1
data_2
np.mean(data_1["2θ"]),np.mean(data_2["2θ"])
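The same numerical-aperture and acceptance-angle computation, checked on made-up L and W values (the formula follows the cells above; the numbers here are illustrative, not measured data):

```python
import numpy as np

# Hypothetical fibre measurements: screen distance L and spot width W.
L = np.array([2.0, 4.0])
W = np.array([1.0, 2.0])

NA = W / np.sqrt(4 * L**2 + W**2)     # numerical aperture, sin(theta)
theta = np.degrees(np.arcsin(NA))     # half acceptance angle in degrees
full_angle = 2 * theta                # full acceptance angle, 2*theta

print(NA, full_angle)
```

Both rows have the same W/L ratio, so they yield the same NA, which is a quick sanity check that the computation is scale-invariant.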
| S2/Numerical Aperture/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/khajaowais/ColabPrimerSS2021/blob/main/TensorFlow_Tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LIDAXtDyrrs6"
# Sources:
# * https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/Basics/tutorial2-tensorbasics.py
# * https://www.youtube.com/watch?v=tPYj3fFJGjk
# * https://www.tensorflow.org/guide/tensor
#
# + [markdown] id="A5yKYOuNnAD3"
# #TensorFlow 2.0 Introduction
# In this notebook you will be given an interactive introduction to TensorFlow 2.0. We will walk through the following topics within the TensorFlow module:
#
# - TensorFlow Install and Setup
# - Representing Tensors
# - Tensor Shape and Rank
# - Types of Tensors
# - Tensor Operations
#
#
# If you'd like to follow along without installing TensorFlow on your machine you can use **Google Colaboratory**. Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.
#
# + [markdown] id="kutjqBFmpX6R"
# # What is TensorFlow?
# Tensors are the heart of TensorFlow. Tensors can be thought of as multidimensional arrays.
#
# TensorFlow makes it convenient to define operations on these tensors, such as matrix-vector multiplication, and execute them efficiently, regardless of the underlying hardware.
#
# TensorFlow also has an automatic differentiation engine. This allows us to compute the gradients of differentiable functions, assuming we can express them using TensorFlow primitives.
#
# These two aspects together have made TensorFlow popular for training neural networks:
# * We can define neural network parameters as a list of tensors.
# * We can define a loss using TensorFlow primitives and tensors.
# * We can compute the gradients of a loss with respect to the neural network parameters, using the automatic differentiation engine.
# * We can update the parameters using an optimization scheme (such as SGD).
#
# TensorFlow makes it easy to do all of this!
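The four steps above are ordinary gradient descent. As a plain-numpy sketch (no TensorFlow), fitting a single parameter w of the model y = w·x by minimizing mean squared error:

```python
import numpy as np

# Toy data generated from y = 3x; the "network" is the single parameter w.
x = np.array([1.0, 2.0, 3.0])
y = 3.0 * x

w = 0.0      # parameter (in TF this would be a tf.Variable)
lr = 0.05    # learning rate for the SGD update
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # d/dw of the mean squared error
    w -= lr * grad                       # SGD parameter update
print(round(w, 3))                       # converges toward 3.0
```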
# + [markdown] id="vnIhl39Cpq48"
# ##Installing TensorFlow
# To install TensorFlow on your local machine you can use pip.
# ```console
# pip install tensorflow
# ```
#
# If you have a CUDA enabled GPU you can install the GPU version of TensorFlow. You will also need to install some other software which can be found here: https://www.tensorflow.org/install/gpu
# ```console
# pip install tensorflow-gpu
# ```
# + [markdown] id="duDj86TfWFof"
# ##Tensors
# "A tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes." (https://www.tensorflow.org/guide/tensor)
#
# It shouldn't surprise you that tensors are a fundamental aspect of TensorFlow. They are the main objects that are passed around and manipulated throughout the program.
#
# <!-- Each tensor represents a partially defined computation that will eventually produce a value. TensorFlow programs work by building a graph of Tensor objects that details how tensors are related. Running different parts of the graph allow results to be generated. -->
#
# Each tensor has a data type and a shape.
#
# **Data Types**: float32, int32, string, and so on.
#
# **Shape**: Represents the dimension of data.
#
# Just like vectors and matrices, tensors can have operations applied to them: addition, subtraction, dot product, cross product and so on.
#
# In the next sections we will discuss some different properties of tensors. This is to make you more familiar with how TensorFlow represents data and how you can manipulate this data.
#
# + [markdown] id="TAk6QhGUwQRt"
# ###Creating Tensors
# Below is an example of how to create some different tensors.
#
# You simply define the value of the tensor and the datatype and you are good to go! It's worth mentioning that we usually deal with tensors of numeric data; string tensors are quite rare.
#
# For a full list of datatypes please refer to the following guide.
#
# https://www.tensorflow.org/api_docs/python/tf/dtypes/DType?version=stable
# + id="OnReKOeSdidt"
# %tensorflow_version 2.x
import tensorflow as tf
# + id="epGskXdjZHzu" colab={"base_uri": "https://localhost:8080/"} outputId="45f9100f-2cf7-487c-f655-96ea07aaf95a"
string = tf.Variable("this is a string", tf.string)
number = tf.Variable(324, tf.int16)
floating = tf.Variable(3.567, tf.float64)
print(string)
print(number)
print(floating)
# + [markdown] id="D0_H71HMaE-5"
# ###Rank/Degree of Tensors
# Another word for rank is degree, these terms simply mean the number of dimensions involved in the tensor. What we created above is a *tensor of rank 0*, also known as a scalar.
#
# Now we'll create some tensors of higher degrees/ranks.
# + id="hX_Cc5IfjQ6-"
rank1_tensor = tf.Variable(["Test"], tf.string)
rank2_tensor = tf.Variable([["test", "ok"], ["test", "yes"]], tf.string)
# + [markdown] id="55zuGMc7nHjC"
# **To determine the rank** of a tensor we can call the following method.
# + id="Zrj0rAWLnMNv" colab={"base_uri": "https://localhost:8080/"} outputId="303d18a1-909c-4e0c-aa28-efbbd9cc9dc7"
tf.rank(rank2_tensor)
# + [markdown] id="hTv4Gz67pQbx"
# The rank of a tensor is directly related to the deepest level of nested lists. You can see in the first example ```["Test"]``` is a rank 1 tensor as the deepest level of nesting is 1.
# Where in the second example ```[["test", "ok"], ["test", "yes"]]``` is a rank 2 tensor as the deepest level of nesting is 2.
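The "deepest level of nesting" rule can be mimicked with a tiny recursive helper on plain Python lists — a sketch independent of TensorFlow:

```python
def nesting_depth(obj):
    """Return the rank a value would have as a tensor: 0 for scalars,
    1 + depth of the first element for (non-empty) nested lists."""
    if not isinstance(obj, list):
        return 0
    return 1 + (nesting_depth(obj[0]) if obj else 0)

print(nesting_depth("Test"))                             # 0 (scalar)
print(nesting_depth(["Test"]))                           # 1
print(nesting_depth([["test", "ok"], ["test", "yes"]]))  # 2
```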
# + [markdown] id="RaVrANK8q21q"
# ###Shape of Tensors
# Now that we've talked about the rank of tensors it's time to talk about the shape. The shape of a tensor is simply the number of elements that exist in each dimension. TensorFlow will try to determine the shape of a tensor but sometimes it may be unknown.
#
# To **get the shape** of a tensor we use the shape attribute.
#
# + id="L_NRXsFOraYa" colab={"base_uri": "https://localhost:8080/"} outputId="51083196-4bb1-49fa-fb98-1f05b9393d10"
rank2_tensor.shape
# + [markdown] id="wVDmLJeFs086"
# ###Changing Shape
# The number of elements of a tensor is the product of the sizes of all its dimensions. There are often many shapes that have the same number of elements, making it convenient to be able to change the shape of a tensor.
#
# The example below shows how to change the shape of a tensor.
# + id="dZ8Rbs2xtNqj" colab={"base_uri": "https://localhost:8080/"} outputId="c234aa7b-84e5-4dce-fe2d-0a2fb03370c5"
tensor1 = tf.ones([1,2,3]) # tf.ones() creates a shape [1,2,3] tensor full of ones
tensor2 = tf.reshape(tensor1, [2,3,1]) # reshape existing data to shape [2,3,1]
tensor3 = tf.reshape(tensor2, [3, -1]) # -1 tells the tensor to calculate the size of the dimension in that place
# this will reshape the tensor to [3,3]
# The number of elements in the reshaped tensor MUST match the number in the original
print(tensor1)
print(tensor2)
print(tensor3)
# + [markdown] id="q88pJucBolsp"
# ###Slicing Tensors
# You may be familiar with the term "slice" in python and its use on lists, tuples etc. Well the slice operator can be used on tensors to select specific axes or elements.
#
# When we slice or select elements from a tensor, we can use comma separated values inside the set of square brackets. Each subsequent value references a different dimension of the tensor.
#
# Ex: ```tensor[dim1, dim2, dim3]```
#
# I've included a few examples that will hopefully help illustrate how we can manipulate tensors with the slice operator.
# + id="b0YrD-hRqD-W" colab={"base_uri": "https://localhost:8080/"} outputId="34da4765-e311-4ea1-a801-8d71ef3d2bc9"
# Creating a 2D tensor
matrix = [[1,2,3,4,5],
[6,7,8,9,10],
[11,12,13,14,15],
[16,17,18,19,20]]
tensor = tf.Variable(matrix, dtype=tf.int32)
print(tf.rank(tensor))
print(tensor.shape)
# + id="Wd85uGI7qyfC" colab={"base_uri": "https://localhost:8080/"} outputId="0a98a581-19d0-4a00-e455-3f27d73341ad"
# Now lets select some different rows and columns from our tensor
three = tensor[0,2] # selects the 3rd element from the 1st row
print(three) # -> 3
row1 = tensor[0] # selects the first row
print(row1)
column1 = tensor[:, 0] # selects the first column
print(column1)
row_2_and_4 = tensor[1::2] # selects second and fourth row
print(row_2_and_4)
column_1_in_row_2_and_3 = tensor[1:3, 0]
print(column_1_in_row_2_and_3)
# + [markdown] id="UU4MMhB_rxvz"
# ###Types of Tensors
# These are the different types of tensors.
# - Variable
# - Constant
# - Placeholder
# - SparseTensor
#
# With the exception of ```Variable``` all these tensors are immutable, meaning their value may not change during execution.
#
# For now, it is enough to understand that we use the Variable tensor when we want to potentially change the value of our tensor, such as to represent the trainable weights of a model.
#
#
# + [markdown] id="6c3FXxNy0iVT"
# #Tensor Operations
#
# Below, we describe some additional operations that can be performed with tensors.
#
# + id="1PcjxWPyr6Bf" colab={"base_uri": "https://localhost:8080/"} outputId="ca5098e2-0a47-49aa-c8c1-0165808313e2"
x = tf.constant(4, shape=(1, 1), dtype=tf.float32)
print(x)
x = tf.constant([[1, 2, 3], [4, 5, 6]], shape=(2, 3))
print(x)
# + id="FAVpNBxxsRzy" colab={"base_uri": "https://localhost:8080/"} outputId="2f445546-4736-4e78-e773-0e81770cff37"
# Identity Matrix
x = tf.eye(3)
print(x)
# Matrix of ones
x = tf.ones((4, 3))
print(x)
# Matrix of zeros
x = tf.zeros((3, 2, 5))
print(x)
# + id="UcbLUUZWtOn-" colab={"base_uri": "https://localhost:8080/"} outputId="d8c99a92-c55b-408b-a0e1-3f1a23a3a429"
# Sampling from a uniform distribution (https://en.wikipedia.org/wiki/Continuous_uniform_distribution)
x = tf.random.uniform((2, 2), minval=0, maxval=1)
print(x)
# Sampling from a normal distribution (https://en.wikipedia.org/wiki/Normal_distribution)
x = tf.random.normal((3, 3), mean=0, stddev=1)
print(x)
# Constructing a tensor with range limits and step values
x = tf.range(9)
print(x)
x = tf.range(start=0, limit=10, delta=2)
print(x)
# + id="cY_h-xbvtVfd" colab={"base_uri": "https://localhost:8080/"} outputId="b69c010c-0962-4e3f-8346-ad4f46e2bf76"
# Changing the data type of a tensor
# Supported dtypes : tf.float (16,32,64), tf.int (8, 16, 32, 64), tf.bool
print(tf.cast(x, dtype=tf.float64))
# + id="z68fBnlFuU8C" colab={"base_uri": "https://localhost:8080/"} outputId="6fb7773d-f615-4c35-bc23-f077a781c8bb"
# Mathematical Operations
x = tf.constant([1, 2, 3])
y = tf.constant([9, 8, 7])
# Two ways of addition
z = tf.add(x, y)
z = x + y
print(f"Adding {x.numpy()} and {y.numpy()} gives us {z.numpy()}")
# Two ways of subtraction
z = tf.subtract(x, y)
z = x - y
print(f"Subtracting {x.numpy()} and {y.numpy()} gives us {z.numpy()}")
# Two ways of element-wise division
z = tf.divide(x, y)
z = x / y
print(f"Dividing {x.numpy()} and {y.numpy()} gives us {z.numpy()}")
# Two ways of element-wise multiplication
z = tf.multiply(x, y)
z = x * y
print(f"Multiplying {x.numpy()} and {y.numpy()} gives us {z.numpy()}")
# Dot-product
z = tf.tensordot(x, y, axes=1)
print(f"The dot product of {x.numpy()} and {y.numpy()} gives us {z.numpy()}")
# Exponentiation
z = x ** 5
print(f"{x.numpy()} raised to the power of 5 gives us {z.numpy()}")
# Matrix multiplication
x = tf.random.normal((2, 3))
y = tf.random.normal((3, 2))
# Two ways of matrix multiplication
z = tf.matmul(x, y)
z = x @ y
print(f"Matrix multiplication of {x.numpy()} and {y.numpy()} gives us {z.numpy()}")
# + id="se7R8ZolwX2c" colab={"base_uri": "https://localhost:8080/"} outputId="9223687d-0391-474b-a91b-3b9de25d4913"
# Indexing in a vector
x = tf.constant([0, 1, 1, 2, 3, 1, 2, 3])
print(x[:])
print(x[1:])
print(x[1:3])
print(x[::2])
print(x[::-1])
# Get values at specific indices in a tensor
indices = tf.constant([0, 3])
x_indices = tf.gather(x, indices)
# Indexing in a matrix
x = tf.constant([[1, 2], [3, 4], [5, 6]])
print(x[0, :])
print(x[0:2, :])
# Reshaping
x = tf.range(9)
x = tf.reshape(x, (3, 3))
x = tf.transpose(x, perm=[1, 0])
# + [markdown] id="eKIi66zFCns8"
# ## Automatic Differentiation
#
# The easiest way to use the automatic differentiation engine is with `tf.GradientTape`.
#
# Suppose we want to find the gradient of a function `func` when given input `x`.
# * Call `y = func(x)` in a `with tf.GradientTape() as tape:` block.
# * Compute the gradients with `tape.gradient(y, x)`.
#
# For example:
#
# + id="KszVbauhDL7E"
# This is the function that we want to differentiate.
def func(x):
return 3*x*x + 2*x
# This is the gradient function computed by-hand.
def manual_grad_func(x):
return 6*x + 2
# This is the gradient function computed by TF.
def auto_grad_func(x):
x = tf.Variable(x, dtype=tf.float32)
with tf.GradientTape() as tape:
y = func(x)
return tape.gradient(y, x)
# + id="0BclEFwEEG4z" outputId="6c0685a0-4f1b-4f53-9eb6-6387f60a9260"
x = 1
print(f'x = {x}')
print(f'Expected f\'(x): {manual_grad_func(x)}')
print(f'Computed f\'(x): {auto_grad_func(x)}')
| TensorFlow_Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="IkSguVy8Xv83"
# # **StarDist (2D)**
# ---
#
# <font size = 4>**StarDist 2D** is a deep-learning method that can be used to segment cell nuclei from bioimages and was first published by [Schmidt *et al.* in 2018, on arXiv](https://arxiv.org/abs/1806.03535). It uses a shape representation based on star-convex polygons for nuclei in an image to predict the presence and the shape of these nuclei. This StarDist 2D network is based on an adapted U-Net network architecture.
#
# <font size = 4> **This particular notebook enables nuclei segmentation of 2D dataset. If you are interested in 3D dataset, you should use the StarDist 3D notebook instead.**
#
# ---
# <font size = 4>*Disclaimer*:
#
# <font size = 4>This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.
#
# <font size = 4>This notebook is largely based on the paper:
#
# <font size = 4>**Cell Detection with Star-convex Polygons** from Schmidt *et al.*, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, September 2018. (https://arxiv.org/abs/1806.03535)
#
# <font size = 4>and the 3D extension of the approach:
#
# <font size = 4>**Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy** from Weigert *et al.* published on arXiv in 2019 (https://arxiv.org/abs/1908.03636)
#
# <font size = 4>**The Original code** is freely available in GitHub:
# https://github.com/mpicbg-csbd/stardist
#
# <font size = 4>**Please also cite this original paper when using or developing this notebook.**
#
# + [markdown] id="jWAz2i7RdxUV"
# # **How to use this notebook?**
#
# ---
#
# <font size = 4>Video describing how to use our notebooks are available on youtube:
# - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook
# - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook
#
#
# ---
# ###**Structure of a notebook**
#
# <font size = 4>The notebook contains two types of cell:
#
# <font size = 4>**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.
#
# <font size = 4>**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor over the `[ ]`-mark on the left side of the cell (a play button appears) and click it. Once execution finishes, the play-button animation stops. You can create a new code cell by clicking `+ Code`.
#
# ---
# ###**Table of contents, Code snippets** and **Files**
#
# <font size = 4>On the top left side of the notebook you find three tabs which contain from top to bottom:
#
# <font size = 4>*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.
#
# <font size = 4>*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.
#
# <font size = 4>*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here.
#
# <font size = 4>**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.
#
# <font size = 4>**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!
#
# ---
# ###**Making changes to the notebook**
#
# <font size = 4>**You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.
#
# <font size = 4>To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).
# You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment.
# + [markdown] id="gKDLkLWUd-YX"
# #**0. Before getting started**
# ---
# <font size = 4> For StarDist to train, **it needs to have access to a paired training dataset made of images of nuclei and their corresponding masks**. Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki
#
# <font size = 4>**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model**. The quality control assessment can be done directly in this notebook.
#
# <font size = 4>The data structure is important. It is necessary that all the input data are in the same folder and that all the output data is in a separate folder. The provided training dataset is already split in two folders called "Training - Images" (Training_source) and "Training - Masks" (Training_target).
#
# <font size = 4>Additionally, the corresponding Training_source and Training_target files need to have **the same name**.
#
# <font size = 4>Please note that you currently can **only use .tif files!**
#
# <font size = 4>You can also provide a folder that contains the data that you wish to analyse with the trained network once all training has been performed. This can include Test dataset for which you have the equivalent output and can compare to what the network provides.
#
# <font size = 4>Here's a common data structure that can work:
# * Experiment A
# - **Training dataset**
# - Images of nuclei (Training_source)
# - img_1.tif, img_2.tif, ...
# - Masks (Training_target)
# - img_1.tif, img_2.tif, ...
# - **Quality control dataset**
# - Images of nuclei
# - img_1.tif, img_2.tif
# - Masks
# - img_1.tif, img_2.tif
# - **Data to be predicted**
# - **Results**
#
# ---
# <font size = 4>**Important note**
#
# <font size = 4>- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.
#
# <font size = 4>- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.
#
# <font size = 4>- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.
# ---
# + [markdown] id="n4yWFoJNnoin"
# # **1. Initialise the Colab session**
#
#
#
#
# ---
#
#
#
#
#
# + [markdown] id="DMNHVZfHmbKb"
#
# ## **1.1. Check for GPU access**
# ---
#
# By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:
#
# <font size = 4>Go to **Runtime -> Change the Runtime type**
#
# <font size = 4>**Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)*
#
# <font size = 4>**Accelerator: GPU** *(Graphics processing unit)*
#
# + cellView="form" id="zCvebubeSaGY"
#@markdown ##Run this cell to check if you have GPU access
# %tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
# !nvidia-smi
# + [markdown] id="sNIVx8_CLolt"
# ## **1.2. Mount your Google Drive**
# ---
# <font size = 4> To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.
#
# <font size = 4> Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive.
#
# <font size = 4> Once this is done, your data are available in the **Files** tab on the top left of notebook.
# + cellView="form" id="01Djr8v-5pPk"
#@markdown ##Play the cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
# mount user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
# + [markdown] id="AdN8B91xZO0x"
# # **2. Install StarDist and dependencies**
# ---
#
# + cellView="form" id="fq21zJVFNASx"
Notebook_version = ['1.12.2']
#@markdown ##Install StarDist and dependencies
# %tensorflow_version 1.x
import sys
before = [str(m) for m in sys.modules]
import tensorflow
print(tensorflow.__version__)
print("Tensorflow enabled.")
# Install packages which are not included in Google Colab
# !pip install tifffile # contains tools to operate tiff-files
# !pip install csbdeep # contains tools for restoration of fluorescence microscopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
# !pip install stardist # contains tools to operate STARDIST.
# !pip install gputools # improves STARDIST performances
# !pip install edt # improves STARDIST performances
# !pip install wget
# !pip install fpdf
# !pip install PTable # Nice tables
# !pip install zarr
# !pip install imagecodecs
import imagecodecs
# ------- Variable specific to Stardist -------
from stardist import fill_label_holes, random_label_cmap, calculate_extents, gputools_available, relabel_image_stardist, _draw_polygons, export_imagej_rois
from stardist.models import Config2D, StarDist2D, StarDistData2D # import objects
from stardist.matching import matching_dataset
from __future__ import print_function, unicode_literals, absolute_import, division
from csbdeep.utils import Path, normalize, download_and_extract_zip_file, plot_history # for loss plot
from csbdeep.io import save_tiff_imagej_compatible
import numpy as np
np.random.seed(42)
lbl_cmap = random_label_cmap()
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
from PIL import Image
import zarr
from zipfile import ZIP_DEFLATED
from csbdeep.data import Normalizer, normalize_mi_ma
import imagecodecs
class MyNormalizer(Normalizer):
def __init__(self, mi, ma):
self.mi, self.ma = mi, ma
def before(self, x, axes):
return normalize_mi_ma(x, self.mi, self.ma, dtype=np.float32)
def after(*args, **kwargs):
assert False
@property
def do_after(self):
return False
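`MyNormalizer` above applies a fixed min-max scaling; the `mi`/`ma` values are typically taken as intensity percentiles of the image. A plain-numpy sketch of that normalization (the percentile choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64)).astype(np.float32)

# Robust min/max from low and high intensity percentiles of the image.
mi, ma = np.percentile(img, 1), np.percentile(img, 99.8)
norm = (img - mi) / (ma - mi)  # the min-max scaling normalize_mi_ma performs (no clipping)

# A few tail pixels land just outside [0, 1] because mi/ma are percentiles, not extremes.
print(norm.min() < 0, norm.max() > 1)
```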
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
import wget
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32, img_as_ubyte, img_as_float
from skimage.util import img_as_ubyte
from tqdm import tqdm
import cv2
from fpdf import FPDF, HTMLMixin
from datetime import datetime
from pip._internal.operations.freeze import freeze
import subprocess
# For sliders and dropdown menu and progress bar
from ipywidgets import interact
import ipywidgets as widgets
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
print('------------------------------------------')
print("Libraries installed")
# Check if this is the latest version of the notebook
Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
print('Notebook version: '+Notebook_version[0])
strlist = Notebook_version[0].split('.')
Notebook_version_main = strlist[0]+'.'+strlist[1]
if Notebook_version_main == Latest_notebook_version.columns[0]:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
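Comparing version strings lexicographically can misorder releases (e.g. '1.9' sorts after '1.10'); comparing numeric tuples avoids this. A minimal sketch of the check above (the version values are placeholders):

```python
def version_tuple(v):
    """Turn '1.12.2' into (1, 12, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

notebook_version = "1.12.2"  # placeholder for Notebook_version[0]
latest_version = "1.13.0"    # placeholder for the value fetched from the CSV

# Compare only major.minor, mirroring the notebook's check.
if version_tuple(notebook_version)[:2] >= version_tuple(latest_version)[:2]:
    print("This notebook is up-to-date.")
else:
    print("A new version of this notebook has been released.")
```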
# PDF export
def pdf_export(trained=False, augmentation = False, pretrained_model = False):
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'StarDist 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hour)+ "hour(s) "+str(mins)+"min(s) "+str(round(sec))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and method:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras','csbdeep']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape
dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(dataset_size)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+conf.train_dist_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
#text = 'The '+Network+' model ('+model_name+') was trained using '+str(dataset_size)+' paired images (image dimensions: '+str(shape)+') using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The GPU used was a '+gpu_name+'.'
if pretrained_model:
text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(dataset_size)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+conf.train_dist_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was retrained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(190, 5, txt = text, align='L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(1)
pdf.cell(28, 5, txt='Augmentation: ', ln=0)
pdf.set_font('')
if augmentation:
aug_text = 'The dataset was augmented by a factor of '+str(Multiply_dataset_by)
else:
aug_text = 'No augmentation was used for training.'
pdf.multi_cell(190, 5, txt=aug_text, align='L')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if Use_Default_Advanced_Parameters:
pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=40% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>n_rays</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>grid_parameter</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{7}</td>
</tr>
</table>
""".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),batch_size,number_of_steps,percentage_validation,n_rays,grid_parameter,initial_learning_rate)
pdf.write_html(html)
#pdf.multi_cell(190, 5, txt = text_2, align='L')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(30, 5, txt= 'Training_source:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_source, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'Training_target:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_target, align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training pair', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_StarDist2D.png').shape
pdf.image('/content/TrainingDataExample_StarDist2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: <NAME>, Lucas & <NAME>, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- StarDist 2D: Schmidt, Uwe, et al. "Cell detection with star-convex polygons." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
if augmentation:
ref_4 = '- Augmentor: Bloice, <NAME>., <NAME>, and <NAME>. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
pdf.multi_cell(190, 5, txt = ref_4, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+"_training_report.pdf")
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'Stardist 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')
pdf.ln(1)
  if os.path.exists(full_QC_model_path+'/Quality Control/lossCurvePlots.png'):
    exp_size = io.imread(full_QC_model_path+'/Quality Control/lossCurvePlots.png').shape
    pdf.image(full_QC_model_path+'/Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.multi_cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'/Quality Control/QC_example_data.png').shape
pdf.image(full_QC_model_path+'/Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=100% style="margin-left:0px;">"""
with open(full_QC_model_path+'/Quality Control/Quality_Control for '+QC_model_name+'.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
#image = header[0]
#PvGT_IoU = header[1]
fp = header[2]
tp = header[3]
fn = header[4]
precision = header[5]
recall = header[6]
acc = header[7]
f1 = header[8]
n_true = header[9]
n_pred = header[10]
mean_true = header[11]
mean_matched = header[12]
panoptic = header[13]
header = """
<tr>
<th width = 5% align="center">{0}</th>
<th width = 12% align="center">{1}</th>
<th width = 6% align="center">{2}</th>
<th width = 6% align="center">{3}</th>
<th width = 6% align="center">{4}</th>
<th width = 5% align="center">{5}</th>
<th width = 5% align="center">{6}</th>
<th width = 5% align="center">{7}</th>
<th width = 5% align="center">{8}</th>
<th width = 5% align="center">{9}</th>
<th width = 5% align="center">{10}</th>
<th width = 10% align="center">{11}</th>
<th width = 11% align="center">{12}</th>
<th width = 11% align="center">{13}</th>
</tr>""".format("image #","Prediction v. GT IoU",'false pos.','true pos.','false neg.',precision,recall,acc,f1,n_true,n_pred,mean_true,mean_matched,panoptic)
html = html+header
i=0
for row in metrics:
i+=1
#image = row[0]
PvGT_IoU = row[1]
fp = row[2]
tp = row[3]
fn = row[4]
precision = row[5]
recall = row[6]
acc = row[7]
f1 = row[8]
n_true = row[9]
n_pred = row[10]
mean_true = row[11]
mean_matched = row[12]
panoptic = row[13]
cells = """
<tr>
<td width = 5% align="center">{0}</td>
<td width = 12% align="center">{1}</td>
<td width = 6% align="center">{2}</td>
<td width = 6% align="center">{3}</td>
<td width = 6% align="center">{4}</td>
<td width = 5% align="center">{5}</td>
<td width = 5% align="center">{6}</td>
<td width = 5% align="center">{7}</td>
<td width = 5% align="center">{8}</td>
<td width = 5% align="center">{9}</td>
<td width = 5% align="center">{10}</td>
<td width = 10% align="center">{11}</td>
<td width = 11% align="center">{12}</td>
<td width = 11% align="center">{13}</td>
</tr>""".format(str(i),str(round(float(PvGT_IoU),3)),fp,tp,fn,str(round(float(precision),3)),str(round(float(recall),3)),str(round(float(acc),3)),str(round(float(f1),3)),n_true,n_pred,str(round(float(mean_true),3)),str(round(float(mean_matched),3)),str(round(float(panoptic),3)))
html = html+cells
  html = html+"""</table></font></body>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: <NAME>, Lucas & <NAME>, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- StarDist 2D: Schmidt, Uwe, et al. "Cell detection with star-convex polygons." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(full_QC_model_path+'/Quality Control/'+QC_model_name+'_QC_report.pdf')
# Exporting requirements.txt for local run
# !pip freeze > requirements.txt
after = [str(m) for m in sys.modules]
# Get minimum requirements file
#Add the following lines before all imports:
# import sys
# before = [str(m) for m in sys.modules]
#Add the following line after the imports:
# after = [str(m) for m in sys.modules]
from builtins import any as b_any
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
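# As a standalone illustration of how filter_files matches requirement lines
# (the matching is substring-based, so e.g. 'numpy' would also match 'numpy-stl'),
# here is a minimal run with made-up requirement strings; filter_files is
# re-declared so the snippet runs on its own:

```python
from builtins import any as b_any

def filter_files(file_list, filter_list):
    # Keep requirement lines whose package name occurs in any used-module name
    filtered_list = []
    for fname in file_list:
        if b_any(fname.split('==')[0] in s for s in filter_list):
            filtered_list.append(fname)
    return filtered_list

requirements = ['numpy==1.19.5', 'scikit-image==0.18.1', 'flask==2.0.1']
modules_used = ['numpy', 'scikit-image']
print(filter_files(requirements, modules_used))  # ['numpy==1.19.5', 'scikit-image==0.18.1']
```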
# Read requirements.txt line by line (pandas read_csv with "\n" as a delimiter is unreliable)
with open('requirements.txt') as f:
  req_list = [line.strip() for line in f if line.strip()]
mod_list = [m.split('.')[0] for m in after if not m in before]
# Replace module names with their pip package names (sklearn -> scikit-learn, skimage -> scikit-image)
mod_name_map = {'sklearn': 'scikit-learn', 'skimage': 'scikit-image'}
mod_replace_list = [mod_name_map.get(s, s) for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
with open('StarDist_2D_requirements_simple.txt', 'w') as file:
  for item in filtered_list:
    file.write(item + '\n')
# + [markdown] id="HLYcZR9gMv42"
# # **3. Select your parameters and paths**
# ---
# + [markdown] id="FQ_QxtSWQ7CL"
# ## **3.1. Setting main training parameters**
# ---
# <font size = 4>
# + [markdown] id="AuESFimvMv43"
# <font size = 5> **Paths for training, predictions and results**
#
#
# <font size = 4>**`Training_source`, `Training_target`:** These are the paths to the folders containing your Training_source (images of nuclei) and Training_target (masks) training data, respectively. To find the path of a folder, go to the Files panel on the left of the notebook, navigate to the folder, right-click it, choose **Copy path**, and paste it into the corresponding box below.
#
# <font size = 4>**`model_name`:** Use underscores rather than hyphens or spaces (e.g. my_model, not my-model). Avoid reusing the name of an existing model saved in the same folder, as it would be overwritten.
#
# <font size = 4>**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).
#
#
# <font size = 5>**Training parameters**
#
# <font size = 4>**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained for. Preliminary results can already be observed after 50-100 epochs, but a full training should run for up to 400 epochs. Evaluate the performance after training (see section 5). **Default value: 100**
#
# <font size = 5>**Advanced Parameters - experienced users only**
#
# <font size =4>**`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 2**
#
# <font size = 4>**`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each image / patch is seen at least once per epoch. **Default value: number of patches / batch_size**
#
# <font size = 4>**`patch_size`:** Input the size of the patches used to train StarDist 2D (length of a side). The value should be smaller than or equal to the dimensions of the image. Make the patch size as large as possible and divisible by 16. **Default value: dimension of the training images**
#
# <font size = 4>**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during the training. **Default value: 10**
#
# <font size = 4>**`n_rays`:** Set the number of rays (corners) used by StarDist (for instance, a square has 4 corners). **Default value: 32**
#
# <font size = 4>**`grid_parameter`:** Increase this number if the cells/nuclei are very large, or decrease it if they are very small. **Default value: 2**
#
# <font size = 4>**`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0003**
#
# <font size = 4>**If you get an Out of memory (OOM) error during training, manually decrease the patch_size value until the OOM error disappears.**
#
#
#
#
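# To make the default `number_of_steps` concrete, here is a sketch of the
# computation performed later in section 4.1 (function and variable names here
# are illustrative, not part of the notebook):

```python
def default_number_of_steps(image_x, image_y, patch_size, n_images, batch_size):
    # Patches needed to tile one image, times the batches needed
    # to see every image at least once per epoch
    patches_per_image = (image_x * image_y) // (patch_size * patch_size)
    batches_per_epoch = n_images // batch_size + 1
    return max(1, patches_per_image * batches_per_epoch)

# 1024x1024 images, 512-pixel patches, 40 training images, batch size 2
print(default_number_of_steps(1024, 1024, 512, 40, 2))  # 4 patches * 21 batches = 84
```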
# + cellView="form" id="ewpNJ_I0Mv47"
#@markdown ###Path to training images:
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
#trained_model = model_path
#@markdown ### Other parameters for training:
number_of_epochs = 100#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
#GPU_limit = 90 #@param {type:"number"}
batch_size = 4 #@param {type:"number"}
number_of_steps = 0#@param {type:"number"}
patch_size = 1024 #@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
n_rays = 32 #@param {type:"number"}
grid_parameter = 2#@param [1, 2, 4, 8, 16, 32] {type:"raw"}
initial_learning_rate = 0.0003 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 2
n_rays = 32
percentage_validation = 10
grid_parameter = 2
initial_learning_rate = 0.0003
percentage = percentage_validation/100
#Here we check that no model with the same name already exists; if so, print a warning
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: "+model_name+" already exists and will be deleted !!")
print(bcolors.WARNING +"To continue training "+model_name+", choose a new model_name here, and load "+model_name+" in section 3.3"+W)
# Here we open a randomly chosen input image and its corresponding target
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
# Here we check the image dimensions
Image_Y = x.shape[0]
Image_X = x.shape[1]
print('Loaded images (height, width) =', x.shape)
# If default parameters, patch size is the same as image size
if (Use_Default_Advanced_Parameters):
patch_size = min(Image_Y, Image_X)
#Hyperparameters failsafes
# Here we check that patch_size is smaller than the smallest xy dimension of the image
if patch_size > min(Image_Y, Image_X):
patch_size = min(Image_Y, Image_X)
print(bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size)
if patch_size > 2048:
patch_size = 2048
print(bcolors.WARNING + " Your image dimension is large; therefore the patch_size chosen is now:",patch_size)
# Here we check that the patch_size is divisible by 16
if not patch_size % 16 == 0:
  patch_size = (patch_size // 16) * 16
  print(bcolors.WARNING + " Your chosen patch_size is not divisible by 16; therefore the patch_size chosen is:",patch_size)
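# The patch-size failsafes above can be condensed into one helper. This is an
# illustrative sketch: it clamps to the smallest image dimension and the 2048
# upper bound, then rounds down to the nearest multiple of 16 (a slight
# simplification of the notebook's arithmetic); the function name is hypothetical.

```python
def sanitize_patch_size(patch_size, image_y, image_x, max_size=2048, multiple=16):
    # Clamp to the smallest image dimension and to the upper bound,
    # then round down to the nearest multiple the network requires
    patch_size = min(patch_size, image_y, image_x, max_size)
    return (patch_size // multiple) * multiple

print(sanitize_patch_size(1000, 1536, 2048))  # 992
print(sanitize_patch_size(4096, 4096, 4096))  # 2048
```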
# Here we disable the pre-trained model by default (in case the next cell is not run)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not run)
Use_Data_augmentation = False
print("Parameters initiated.")
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
#Here we use a simple normalisation strategy to visualise the image
norm = simple_norm(x, percent = 99)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, interpolation='nearest', norm=norm, cmap='magma')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, interpolation='nearest', cmap=lbl_cmap)
plt.title('Training target')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_StarDist2D.png',bbox_inches='tight',pad_inches=0)
# + [markdown] id="xyQZKby8yFME"
# ## **3.2. Data augmentation**
# ---
# <font size = 4>
# + [markdown] id="w_jCy7xOx2g3"
# <font size = 4>Data augmentation can improve training by increasing the effective size and variability of the dataset. This is especially useful when the available dataset is small, since a network can otherwise quickly memorize every example (overfitting). Augmentation is not required for training, and if your training dataset is large you should disable it.
#
# <font size = 4>Data augmentation is performed here via random rotations, flips, and intensity changes.
#
#
# <font size = 4> **However, data augmentation is not a magic solution and may also introduce issues. Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performance.**
# + cellView="form" id="DMqWq5-AxnFU"
#Data augmentation
Use_Data_augmentation = False #@param {type:"boolean"}
#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 4 #@param {type:"slider", min:1, max:10, step:1}
def random_fliprot(img, mask):
assert img.ndim >= mask.ndim
axes = tuple(range(mask.ndim))
perm = tuple(np.random.permutation(axes))
img = img.transpose(perm + tuple(range(mask.ndim, img.ndim)))
mask = mask.transpose(perm)
for ax in axes:
if np.random.rand() > 0.5:
img = np.flip(img, axis=ax)
mask = np.flip(mask, axis=ax)
return img, mask
def random_intensity_change(img):
img = img*np.random.uniform(0.6,2) + np.random.uniform(-0.2,0.2)
return img
def augmenter(x, y):
"""Augmentation of a single input/label image pair.
x is an input image
y is the corresponding ground-truth label image
"""
x, y = random_fliprot(x, y)
x = random_intensity_change(x)
# add some gaussian noise
sig = 0.02*np.random.uniform(0,1)
x = x + sig*np.random.normal(0,1,x.shape)
return x, y
if Use_Data_augmentation:
augmenter = augmenter
print("Data augmentation enabled")
if not Use_Data_augmentation:
augmenter = None
print(bcolors.WARNING+"Data augmentation disabled")
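# A quick sanity check of the geometric augmentation: the same random axis
# permutation and flips are applied to both image and mask, so shapes and label
# values are preserved. random_fliprot is re-declared here so the snippet runs
# standalone; the toy arrays are illustrative.

```python
import numpy as np

def random_fliprot(img, mask):
    # Apply one random axis permutation plus random flips, identically to img and mask
    assert img.ndim >= mask.ndim
    axes = tuple(range(mask.ndim))
    perm = tuple(np.random.permutation(axes))
    img = img.transpose(perm + tuple(range(mask.ndim, img.ndim)))
    mask = mask.transpose(perm)
    for ax in axes:
        if np.random.rand() > 0.5:
            img = np.flip(img, axis=ax)
            mask = np.flip(mask, axis=ax)
    return img, mask

np.random.seed(3)
x = np.random.rand(64, 64).astype(np.float32)
y = (x > 0.5).astype(np.uint16)  # toy label mask
x_aug, y_aug = random_fliprot(x, y)
print(x_aug.shape, y_aug.shape)                    # (64, 64) (64, 64)
print(set(np.unique(y_aug)) == set(np.unique(y)))  # True
```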
# + [markdown] id="3L9zSGtORKYI"
#
# ## **3.3. Using weights from a pre-trained model as initial weights**
# ---
# <font size = 4> Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a StarDist model**.
#
# <font size = 4> This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.
#
# <font size = 4> In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
# + cellView="form" id="9vC2n-HeLdiJ"
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "2D_versatile_fluo_from_Stardist_Fiji" #@param ["Model_from_file", "2D_versatile_fluo_from_Stardist_Fiji", "2D_Demo_Model_from_Stardist_Github", "Versatile_H&E_nuclei"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
  # --------------------- Load the model from the chosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download the Demo 2D model provided in the Stardist 2D github ------------------------
if pretrained_model_choice == "2D_Demo_Model_from_Stardist_Github":
pretrained_model_name = "2D_Demo"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_Github")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("https://github.com/mpicbg-csbd/stardist/raw/master/models/examples/2D_demo/config.json", pretrained_model_path)
wget.download("https://github.com/mpicbg-csbd/stardist/raw/master/models/examples/2D_demo/thresholds.json", pretrained_model_path)
wget.download("https://github.com/mpicbg-csbd/stardist/blob/master/models/examples/2D_demo/weights_best.h5?raw=true", pretrained_model_path)
wget.download("https://github.com/mpicbg-csbd/stardist/blob/master/models/examples/2D_demo/weights_last.h5?raw=true", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download the Demo 2D_versatile_fluo_from_Stardist_Fiji ------------------------
if pretrained_model_choice == "2D_versatile_fluo_from_Stardist_Fiji":
print("Downloading the 2D_versatile_fluo_from_Stardist_Fiji")
pretrained_model_name = "2D_versatile_fluo"
pretrained_model_path = "/content/"+pretrained_model_name
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("https://cloud.mpi-cbg.de/index.php/s/1k5Zcy7PpFWRb0Q/download?path=/versatile&files=2D_versatile_fluo.zip", pretrained_model_path)
with zipfile.ZipFile(pretrained_model_path+"/2D_versatile_fluo.zip", 'r') as zip_ref:
zip_ref.extractall(pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_best.h5")
# --------------------- Download the Versatile (H&E nuclei)_fluo_from_Stardist_Fiji ------------------------
if pretrained_model_choice == "Versatile_H&E_nuclei":
print("Downloading the Versatile_H&E_nuclei from_Stardist_Fiji")
pretrained_model_name = "2D_versatile_he"
pretrained_model_path = "/content/"+pretrained_model_name
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("https://cloud.mpi-cbg.de/index.php/s/1k5Zcy7PpFWRb0Q/download?path=/versatile&files=2D_versatile_he.zip", pretrained_model_path)
with zipfile.ZipFile(pretrained_model_path+"/2D_versatile_he.zip", 'r') as zip_ref:
zip_ref.extractall(pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_best.h5")
# --------------------- Add additional pre-trained models here ------------------------
  # --------------------- Check that the model exists ------------------------
  # If the chosen model path does not contain a pretrained model, Use_pretrained_model is disabled
  if not os.path.exists(h5_file_path):
    print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist' + W)
Use_pretrained_model = False
  # If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
        if "learning rate" in csvRead.columns: #Here we check that the learning rate column exists (for compatibility with models trained in ZeroCostDL4Mic below v1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead' + W)
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+ W)
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(bcolors.WARNING+'No pretrained network will be used.')
# + [markdown] id="MCGklf1vZf2M"
# #**4. Train the network**
# ---
# + [markdown] id="1KYOuygETJkT"
# ## **4.1. Prepare the training data and model for training**
# ---
# <font size = 4>Here, we use the information from section 3 to build the model and convert the training data into a format suitable for training.
# + cellView="form" id="lIUAOJ_LMv5E"
#@markdown ##Create the model and dataset objects
# --------------------- Here we delete the model folder if it already exists ------------------------
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: Model folder already exists and has been removed !!" + W)
shutil.rmtree(model_path+'/'+model_name)
# --------------------- Here we load the augmented data or the raw data ------------------------
Training_source_dir = Training_source
Training_target_dir = Training_target
# --------------------- ------------------------------------------------
training_images_tiff=Training_source_dir+"/*.tif"
mask_images_tiff=Training_target_dir+"/*.tif"
# This imports the training images and masks and sorts them so they pair up correctly for the network
X = sorted(glob(training_images_tiff))
Y = sorted(glob(mask_images_tiff))
# The assert checks that every image in X has a mask in Y with the same filename. If not, this cell raises an error
assert all(Path(x).name==Path(y).name for x,y in zip(X,Y))
# Here we map the training dataset (images and masks).
X = list(map(imread,X))
Y = list(map(imread,Y))
n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1]
if not Use_Data_augmentation:
augmenter = None
#Normalize images and fill small label holes.
if n_channel == 1:
axis_norm = (0,1) # normalize channels independently
print("Normalizing image channels independently")
if n_channel > 1:
axis_norm = (0,1,2) # normalize channels jointly
print("Normalizing image channels jointly")
sys.stdout.flush()
X = [normalize(x,1,99.8,axis=axis_norm) for x in tqdm(X)]
Y = [fill_label_holes(y) for y in tqdm(Y)]
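# csbdeep's normalize(x, 1, 99.8, axis=axis_norm) rescales intensities so that
# the 1st percentile maps to roughly 0 and the 99.8th percentile to roughly 1.
# Below is a minimal NumPy re-implementation of that mapping for illustration
# only; it is not the library function, and the helper name is hypothetical.

```python
import numpy as np

def percentile_normalize(x, pmin=1, pmax=99.8, axis=None, eps=1e-20):
    # Map the pmin percentile to 0 and the pmax percentile to 1
    lo = np.percentile(x, pmin, axis=axis, keepdims=True)
    hi = np.percentile(x, pmax, axis=axis, keepdims=True)
    return (x - lo) / (hi - lo + eps)

img = np.linspace(0, 1000, 10000).reshape(100, 100)
out = percentile_normalize(img)
# Values slightly below 0 and slightly above 1 are expected, since the
# extremes of the data sit outside the chosen percentile range
print(round(float(out.min()), 3), round(float(out.max()), 3))
```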
#Here we split your training dataset into training images (90 %) and validation images (10 %).
#Using about 10 % of the training dataset for validation gives a trustworthy validation error. With too few validation images, the validation set may happen to contain only unusually easy or unusually challenging images.
# split training data (images and masks) into training images and validation images.
assert len(X) > 1, "not enough training data"
rng = np.random.RandomState(42)
ind = rng.permutation(len(X))
n_val = max(1, int(round(percentage * len(ind))))
ind_train, ind_val = ind[:-n_val], ind[-n_val:]
X_val, Y_val = [X[i] for i in ind_val] , [Y[i] for i in ind_val]
X_trn, Y_trn = [X[i] for i in ind_train], [Y[i] for i in ind_train]
print('number of images: %3d' % len(X))
print('- training: %3d' % len(X_trn))
print('- validation: %3d' % len(X_val))
# Use OpenCL-based computations for data generator during training (requires 'gputools')
# Currently always false for stability
use_gpu = False and gputools_available()
#Here we ensure that the network is trained for a minimum number of steps
if (Use_Default_Advanced_Parameters) or (number_of_steps == 0):
# number_of_steps= (int(len(X)/batch_size)+1)
number_of_steps = Image_X*Image_Y/(patch_size*patch_size)*(int(len(X)/batch_size)+1)
if (Use_Data_augmentation):
augmentation_factor = Multiply_dataset_by
number_of_steps = number_of_steps * augmentation_factor
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
conf = Config2D (
n_rays = n_rays,
use_gpu = use_gpu,
train_batch_size = batch_size,
n_channel_in = n_channel,
train_patch_size = (patch_size, patch_size),
grid = (grid_parameter, grid_parameter),
train_learning_rate = initial_learning_rate,
)
# Here we create a model according to section 5.3.
model = StarDist2D(conf, name=model_name, basedir=model_path)
# --------------------- Using pretrained model ------------------------
# Load the pretrained weights
if Use_pretrained_model:
model.load_weights(h5_file_path)
# --------------------- ---------------------- ------------------------
#Here we check the FOV of the network.
median_size = calculate_extents(list(Y), np.median)
fov = np.array(model._axes_tile_overlap('YX'))
if any(median_size > fov):
print(bcolors.WARNING+"WARNING: median object size larger than field of view of the neural network.")
print(conf)
pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
# + [markdown] id="0Dfn8ZsEMv5d"
# ## **4.2. Start Training**
# ---
#
# <font size = 4>When playing the cell below you should see updates after each epoch (round). Network training can take some time.
#
# <font size = 4>* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches. Another way to circumvent this is to save the parameters of the model after training and start training again from this point.
#
# <font size = 4>**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the Stardist Fiji plugin. You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of tensorflow. You can check at: Edit-- Options-- Tensorflow. Choose version 1.4 (CPU or GPU depending on your system).
#
# <font size = 4>Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder as all data can be erased at the next training if using the same folder.
# + cellView="form" id="iwNmp1PUzRDQ"
start = time.time()
#@markdown ##Start training
history = model.train(X_trn, Y_trn, validation_data=(X_val,Y_val), augmenter=augmenter,
epochs=number_of_epochs, steps_per_epoch=number_of_steps)
None;
print("Training done")
print("Network optimization in progress")
#Here we optimize the network.
model.optimize_thresholds(X_val, Y_val)
print("Done")
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
if os.path.exists(model_path+"/"+model_name+"/Quality Control"):
shutil.rmtree(model_path+"/"+model_name+"/Quality Control")
os.makedirs(model_path+"/"+model_name+"/Quality Control")
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
model.export_TF()
print("Your model has been successfully exported and can now also be used in the Stardist Fiji plugin")
pdf_export(trained=True, augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
#Create a pdf document with training summary
# + [markdown] id="_0Hynw3-xHp1"
# # **5. Evaluate your model**
# ---
#
# <font size = 4>This section allows the user to perform important quality checks on the validity and generalisability of the trained model.
#
#
# <font size = 4>**We highly recommend to perform quality control on all newly trained models.**
#
#
#
# + cellView="form" id="eAJzMwPA6tlH"
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!')
print('Please make sure you provide a valid model path and model name before proceeding further.')
# + [markdown] id="dhJROwlAMv5o"
# ## **5.1. Inspection of the loss function**
# ---
#
# <font size = 4>First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*
#
# <font size = 4>**Training loss** describes the error between the model's predictions and the ground-truth targets, computed on the training data after each epoch.
#
# <font size = 4>**Validation loss** describes the same error value, but computed between the model's predictions on the validation images and their targets.
#
# <font size = 4>During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.
#
# <font size = 4>Decreasing **Training loss** and **Validation loss** indicates that training is still in progress, and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side just because of the y-axis scaling. The network has reached convergence once the curves flatten out; after this point no further training is required. If the **Validation loss** suddenly increases again while the **Training loss** simultaneously goes towards zero, the network is overfitting to the training data: it is memorizing the exact patterns of the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.
#
#
#
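# <font size = 4>As a minimal sketch (not part of the notebook), the same reading can be done numerically from loss values like those saved to `training_evaluation.csv`: the epoch with the lowest validation loss marks a sensible checkpoint, and a rising validation loss combined with a still-falling training loss is the overfitting signature described above.

```python
# Hedged sketch with made-up loss values, shaped like the columns of
# training_evaluation.csv (one entry per epoch).
train_loss = [1.00, 0.60, 0.40, 0.30, 0.25, 0.22]
val_loss   = [1.10, 0.70, 0.50, 0.45, 0.55, 0.70]

# Epoch with the lowest validation loss: a sensible stopping point
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
print("best epoch:", best_epoch)  # -> best epoch: 3

# Overfitting signature: validation loss rises again while training loss keeps falling
overfitting = val_loss[-1] > val_loss[best_epoch] and train_loss[-1] < train_loss[best_epoch]
print("signs of overfitting:", overfitting)  # -> signs of overfitting: True
```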
# + cellView="form" id="vMzSP50kMv5p"
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
lossDataFromCSV = []
vallossDataFromCSV = []
with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png',bbox_inches='tight',pad_inches=0)
plt.show()
# + [markdown] id="X5_92nL2xdP6"
# ## **5.2. Error mapping and quality metrics estimation**
# ---
# <font size = 4>This section will calculate the Intersection over Union score for all the images provided in the Source_QC_folder and Target_QC_folder! The result for one of the images will also be displayed.
#
# <font size = 4>The **Intersection over Union** (IoU) metric quantifies the percent overlap between the target mask and your prediction output. **Therefore, the closer to 1, the better the performance.** This metric can be used to assess how accurately your model predicts nuclei.
#
# <font size = 4>Here, the IoU is calculated both over the whole image and on a per-object basis. The value displayed below is the IoU value calculated over the entire image. The IoU value calculated on a per-object basis is used to calculate the other metrics displayed.
#
# <font size = 4>“n_true” refers to the number of objects present in the ground truth image. “n_pred” refers to the number of objects present in the predicted image.
#
# <font size = 4>When a segmented object has an IoU value above 0.5 (compared to the corresponding ground truth), it is considered a true positive. The number of “**true positives**” is available in the table below. The number of “**false positives**” is then defined as “n_pred” - “true positives”, and the number of “**false negatives**” as “n_true” - “true positives”.
#
# <font size = 4>The mean_matched_score is the mean IoU of the matched true positives. The mean_true_score is the same mean IoU, but normalized by the total number of ground truth objects. The panoptic_quality is calculated as described by [Kirillov et al. 2019](https://arxiv.org/abs/1801.00868).
#
# <font size = 4>For more information about the other metrics displayed, please consult the SI of the paper describing ZeroCostDL4Mic.
#
# <font size = 4> The results can be found in the "*Quality Control*" folder which is located inside your "model_folder".
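# <font size = 4>As an illustrative sketch (the helper name is an assumption, not part of the notebook), the image-level IoU described above can be computed from two masks with plain NumPy:

```python
import numpy as np

def image_iou(ground_truth, prediction):
    """Image-level IoU, treating any pixel value > 0 as foreground."""
    gt = ground_truth > 0
    pred = prediction > 0
    union = np.logical_or(gt, pred).sum()
    if union == 0:
        return 0.0
    return np.logical_and(gt, pred).sum() / union

gt = np.array([[0, 1, 1],
               [0, 1, 0],
               [0, 0, 0]])
pred = np.array([[0, 1, 0],
                 [0, 1, 1],
                 [0, 0, 0]])
print(image_iou(gt, pred))  # 2 shared foreground pixels / 4 in the union -> 0.5
```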
# + cellView="form" id="w90MdriMxhjD"
#@markdown ##Choose the folders that contain your Quality Control dataset
from stardist.matching import matching
from stardist.plot import render_label, render_label_pred
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
#Create a quality control Folder and check if the folder already exist
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control") == False:
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control")
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
# Generate predictions from the Source_QC_folder and save them in the QC folder
Source_QC_folder_tif = Source_QC_folder+"/*.tif"
np.random.seed(16)
lbl_cmap = random_label_cmap()
Z = sorted(glob(Source_QC_folder_tif))
Z = list(map(imread,Z))
n_channel = 1 if Z[0].ndim == 2 else Z[0].shape[-1]
print('Number of test images found in the folder: '+str(len(Z)))
#Normalize images.
if n_channel == 1:
axis_norm = (0,1) # normalize channels independently
print("Normalizing image channels independently")
if n_channel > 1:
axis_norm = (0,1,2) # normalize channels jointly
print("Normalizing image channels jointly")
model = StarDist2D(None, name=QC_model_name, basedir=QC_model_path)
names = [os.path.basename(f) for f in sorted(glob(Source_QC_folder_tif))]
# Predict each QC image and save the predicted mask under the original file name
for i in range(len(Z)):
    img = normalize(Z[i], 1, 99.8, axis=axis_norm)
    labels, polygons = model.predict_instances(img)
    os.chdir(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
    imsave(names[i], labels)
# Here we start testing the differences between GT and predicted masks
with open(QC_model_path+"/"+QC_model_name+"/Quality Control/Quality_Control for "+QC_model_name+".csv", "w", newline='') as file:
writer = csv.writer(file, delimiter=",")
writer.writerow(["image","Prediction v. GT Intersection over Union", "false positive", "true positive", "false negative", "precision", "recall", "accuracy", "f1 score", "n_true", "n_pred", "mean_true_score", "mean_matched_score", "panoptic_quality"])
# define the images
for n in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder,n)):
print('Running QC on: '+n)
test_input = io.imread(os.path.join(Source_QC_folder,n))
test_prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction",n))
test_ground_truth_image = io.imread(os.path.join(Target_QC_folder, n))
# Calculate the matching (with IoU threshold `thresh`) and all metrics
stats = matching(test_prediction, test_ground_truth_image, thresh=0.5)
            #Convert pixel values to 0 or 255 (work on copies so the loaded images are not modified in place)
            test_prediction_0_to_255 = test_prediction.copy()
            test_prediction_0_to_255[test_prediction_0_to_255 > 0] = 255
            test_ground_truth_0_to_255 = test_ground_truth_image.copy()
            test_ground_truth_0_to_255[test_ground_truth_0_to_255 > 0] = 255
# Intersection over Union metric
intersection = np.logical_and(test_ground_truth_0_to_255, test_prediction_0_to_255)
union = np.logical_or(test_ground_truth_0_to_255, test_prediction_0_to_255)
iou_score = np.sum(intersection) / np.sum(union)
writer.writerow([n, str(iou_score), str(stats.fp), str(stats.tp), str(stats.fn), str(stats.precision), str(stats.recall), str(stats.accuracy), str(stats.f1), str(stats.n_true), str(stats.n_pred), str(stats.mean_true_score), str(stats.mean_matched_score), str(stats.panoptic_quality)])
from tabulate import tabulate
df = pd.read_csv (QC_model_path+"/"+QC_model_name+"/Quality Control/Quality_Control for "+QC_model_name+".csv")
print(tabulate(df, headers='keys', tablefmt='psql'))
from astropy.visualization import simple_norm
# ------------- For display ------------
print('--------------------------------------------------------------')
@interact
def show_QC_results(file = os.listdir(Source_QC_folder)):
plt.figure(figsize=(25,5))
if n_channel > 1:
source_image = io.imread(os.path.join(Source_QC_folder, file))
if n_channel == 1:
source_image = io.imread(os.path.join(Source_QC_folder, file), as_gray = True)
target_image = io.imread(os.path.join(Target_QC_folder, file), as_gray = True)
prediction = io.imread(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction/"+file, as_gray = True)
stats = matching(prediction, target_image, thresh=0.5)
target_image_mask = np.empty_like(target_image)
target_image_mask[target_image > 0] = 255
target_image_mask[target_image == 0] = 0
prediction_mask = np.empty_like(prediction)
prediction_mask[prediction > 0] = 255
prediction_mask[prediction == 0] = 0
intersection = np.logical_and(target_image_mask, prediction_mask)
union = np.logical_or(target_image_mask, prediction_mask)
iou_score = np.sum(intersection) / np.sum(union)
norm = simple_norm(source_image, percent = 99)
#Input
plt.subplot(1,4,1)
plt.axis('off')
if n_channel > 1:
plt.imshow(source_image)
if n_channel == 1:
plt.imshow(source_image, aspect='equal', norm=norm, cmap='magma', interpolation='nearest')
plt.title('Input')
#Ground-truth
plt.subplot(1,4,2)
plt.axis('off')
plt.imshow(target_image_mask, aspect='equal', cmap='Greens')
plt.title('Ground Truth')
#Prediction
plt.subplot(1,4,3)
plt.axis('off')
plt.imshow(prediction_mask, aspect='equal', cmap='Purples')
plt.title('Prediction')
#Overlay
plt.subplot(1,4,4)
plt.axis('off')
plt.imshow(target_image_mask, cmap='Greens')
plt.imshow(prediction_mask, alpha=0.5, cmap='Purples')
plt.title('Ground Truth and Prediction, Intersection over Union:'+str(round(iou_score,3 )));
plt.savefig(full_QC_model_path+'/Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)
qc_pdf_export()
# + [markdown] id="-tJeeJjLnRkP"
# # **6. Using the trained model**
#
# ---
#
# <font size = 4>In this section the unseen data is processed using the model trained in section 4. First, your unseen images are loaded and prepared for prediction; the trained model then generates predictions, which are finally saved to your Google Drive.
# + [markdown] id="d8wuQGjoq6eN"
#
#
# ## **6.1 Generate prediction(s) from unseen dataset**
# ---
#
#
# <font size = 4>The current trained model (from section 4.3) can now be used to process images. If an older model needs to be used, please untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Prediction_folder** folder as restored image stacks (ImageJ-compatible TIFF images).
#
# <font size = 4>**`Data_folder`:** This folder should contain the images that you want to predict using the network that you trained.
#
# <font size = 4>**`Result_folder`:** This folder will contain the predicted output ROI.
#
# <font size = 4>**`Data_type`:** Please indicate if the images you want to predict are single images or stacks
#
#
# <font size = 4>In stardist the following results can be exported:
# - Region of interest (ROI) files that can be opened in ImageJ / Fiji. The ROI are saved inside a .zip file in your chosen result folder. To open the ROI in Fiji, just drag and drop the zip file!
# - The predicted mask images
# - A tracking file that can easily be imported into Trackmate to track the nuclei (Stacks only).
# - A CSV file that contains the number of nuclei detected per image (single image only).
# - A CSV file that contains the coordinates of the centre of each detected nucleus (single images only).
#
#
# + cellView="form" id="y2TD5p7MZrEb"
Single_Images = 1
Stacks = 2
#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved (Result folder), then play the cell to predict output on your unseen images.
Data_folder = "" #@param {type:"string"}
Results_folder = "" #@param {type:"string"}
#@markdown ###Are your data single images or stacks?
Data_type = Single_Images #@param ["Single_Images", "Stacks"] {type:"raw"}
#@markdown ###What outputs would you like to generate?
Region_of_interests = True #@param {type:"boolean"}
Mask_images = True #@param {type:"boolean"}
Tracking_file = False #@param {type:"boolean"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = False #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#single images
if Data_type == 1:
    Data_folder = Data_folder+"/*.tif"
    print("Single images are now being predicted")
np.random.seed(16)
lbl_cmap = random_label_cmap()
X = sorted(glob(Data_folder))
X = list(map(imread,X))
n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1]
# axis_norm = (0,1,2) # normalize channels jointly
if n_channel == 1:
axis_norm = (0,1) # normalize channels independently
print("Normalizing image channels independently")
if n_channel > 1:
axis_norm = (0,1,2) # normalize channels jointly
print("Normalizing image channels jointly")
sys.stdout.flush()
model = StarDist2D(None, name = Prediction_model_name, basedir = Prediction_model_path)
names = [os.path.basename(f) for f in sorted(glob(Data_folder))]
Nuclei_number = []
# modify the names to suitable form: path_images/image_numberX.tif
FILEnames = []
for m in names:
m = Results_folder+'/'+m
FILEnames.append(m)
# Create a list of name with no extension
name_no_extension=[]
for n in names:
name_no_extension.append(os.path.splitext(n)[0])
# Save all ROIs and masks into results folder
for i in range(len(X)):
img = normalize(X[i], 1,99.8, axis = axis_norm)
labels, polygons = model.predict_instances(img)
os.chdir(Results_folder)
if Mask_images:
        imsave(FILEnames[i], labels)
if Region_of_interests:
export_imagej_rois(name_no_extension[i], polygons['coord'])
if Tracking_file:
print(bcolors.WARNING+"Tracking files are only generated when stacks are predicted"+W)
Nuclei_centre_coordinate = polygons['points']
my_df2 = pd.DataFrame(Nuclei_centre_coordinate)
my_df2.columns =['Y', 'X']
my_df2.to_csv(Results_folder+'/'+name_no_extension[i]+'_Nuclei_centre.csv', index=False, header=True)
Nuclei_array = polygons['coord']
Nuclei_array2 = [names[i], Nuclei_array.shape[0]]
Nuclei_number.append(Nuclei_array2)
my_df = pd.DataFrame(Nuclei_number)
my_df.to_csv(Results_folder+'/Nuclei_count.csv', index=False, header=False)
# One example is displayed
    print("One example image is displayed below:")
plt.figure(figsize=(10,10))
plt.imshow(img if img.ndim==2 else img[...,:3], clim=(0,1), cmap='gray')
plt.imshow(labels, cmap=lbl_cmap, alpha=0.5)
plt.axis('off');
plt.savefig(name_no_extension[i]+"_overlay.tif")
if Data_type == 2:
    print("Stacks are now being predicted")
np.random.seed(42)
lbl_cmap = random_label_cmap()
# normalize channels independently
axis_norm = (0,1)
model = StarDist2D(None, name = Prediction_model_name, basedir = Prediction_model_path)
for image in os.listdir(Data_folder):
print("Performing prediction on: "+image)
timelapse = imread(Data_folder+"/"+image)
short_name = os.path.splitext(image)
timelapse = normalize(timelapse, 1,99.8, axis=(0,)+tuple(1+np.array(axis_norm)))
if Region_of_interests:
polygons = [model.predict_instances(frame)[1]['coord'] for frame in tqdm(timelapse)]
export_imagej_rois(Results_folder+"/"+str(short_name[0]), polygons, compression=ZIP_DEFLATED)
n_timepoint = timelapse.shape[0]
prediction_stack = np.zeros((n_timepoint, timelapse.shape[1], timelapse.shape[2]))
Tracking_stack = np.zeros((n_timepoint, timelapse.shape[2], timelapse.shape[1]))
# Save the masks in the result folder
if Mask_images or Tracking_file:
for t in range(n_timepoint):
img_t = timelapse[t]
labels, polygons = model.predict_instances(img_t)
prediction_stack[t] = labels
# Create a tracking file for trackmate
for point in polygons['points']:
cv2.circle(Tracking_stack[t],tuple(point),0,(1), -1)
prediction_stack_32 = img_as_float32(prediction_stack, force_copy=False)
Tracking_stack_32 = img_as_float32(Tracking_stack, force_copy=False)
Tracking_stack_8 = img_as_ubyte(Tracking_stack_32, force_copy=True)
Tracking_stack_8_rot = np.rot90(Tracking_stack_8, axes=(1,2))
Tracking_stack_8_rot_flip = np.fliplr(Tracking_stack_8_rot)
os.chdir(Results_folder)
if Mask_images:
imsave(str(short_name[0])+".tif", prediction_stack_32, compress=ZIP_DEFLATED)
if Tracking_file:
imsave(str(short_name[0])+"_tracking_file.tif", Tracking_stack_8_rot_flip, compress=ZIP_DEFLATED)
print("Predictions completed")
# + [markdown] id="KjGHBGmxlk9B"
# ## **6.2. Generate prediction(s) from unseen dataset (Big data)**
# ---
#
# <font size = 4>You can use this section of the notebook to generate predictions on very large images. Compatible file formats include .tif and .svs files.
#
# <font size = 4>**`Data_folder`:** This folder should contain the images that you want to predict using the network that you trained.
#
# <font size = 4>**`Result_folder`:** This folder will contain the predicted output ROI.
#
#
# <font size = 4>In stardist the following results can be exported:
# - Region of interest (ROI) files that can be opened in ImageJ / Fiji. The ROI are saved inside a .zip file in your chosen result folder. To open the ROI in Fiji, just drag and drop the zip file! IMPORTANT: ROI files cannot be exported for extremely large images.
# - The predicted mask images
#
#
#
#
#
#
#
# + cellView="form" id="jxjHeOFFleSV"
#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved (Result folder), then play the cell to predict output on your unseen images.
start = time.time()
Data_folder = "" #@param {type:"string"}
Results_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = False #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
#@markdown #####To analyse very large images, they need to be divided into blocks. Each block will then be processed independently and the results re-assembled to generate the final image.
#@markdown #####Here you can choose the dimension of the block.
block_size_Y = 1024#@param {type:"number"}
block_size_X = 1024#@param {type:"number"}
#@markdown #####Here you can choose the amount of overlap between blocks.
min_overlap = 50#@param {type:"number"}
#@markdown #####To analyse large blocks, your blocks need to be divided into tiles. Each tile will then be processed independently and re-assembled to generate the final block.
n_tiles_Y = 1#@param {type:"number"}
n_tiles_X = 1#@param {type:"number"}
#@markdown ###What outputs would you like to generate?
Mask_images = True #@param {type:"boolean"}
Region_of_interests = False #@param {type:"boolean"}
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Create a temp folder to save Zarr files
Temp_folder = "/content/Temp_folder"
if os.path.exists(Temp_folder):
shutil.rmtree(Temp_folder)
os.makedirs(Temp_folder)
# mi, ma = np.percentile(img[::8], [1,99.8]) # compute percentiles from low-resolution image
# mi, ma = np.percentile(img[13000:16000,13000:16000], [1,99.8]) # compute percentiles from smaller crop
mi, ma = 0, 255 # use min and max dtype values (suitable here)
normalizer = MyNormalizer(mi, ma)
np.random.seed(16)
lbl_cmap = random_label_cmap()
#Load the StarDist model
model = StarDist2D(None, name=Prediction_model_name, basedir=Prediction_model_path)
for image in os.listdir(Data_folder):
print("Performing prediction on: "+image)
X = imread(Data_folder+"/"+image)
print("Image dimension "+str(X.shape))
short_name = os.path.splitext(image)
n_channel = 1 if X.ndim == 2 else X.shape[-1]
# axis_norm = (0,1,2) # normalize channels jointly
if n_channel == 1:
axis_norm = (0,1) # normalize channels independently
print("Normalizing image channels independently")
block_size = (block_size_Y, block_size_X)
min_overlap = (min_overlap, min_overlap)
n_tiles = (n_tiles_Y, n_tiles_X)
axes="YX"
if n_channel > 1:
axis_norm = (0,1,2) # normalize channels jointly
print("Normalizing image channels jointly")
axes="YXC"
block_size = (block_size_Y, block_size_X, 3)
n_tiles = (n_tiles_Y, n_tiles_X, 1)
min_overlap = (min_overlap, min_overlap, 0)
sys.stdout.flush()
zarr.save_array(str(Temp_folder+"/image.zarr"), X)
del X
img = zarr.open(str(Temp_folder+"/image.zarr"), mode='r')
labels = zarr.open(str(Temp_folder+"/labels.zarr"), mode='w', shape=img.shape[:3], chunks=img.chunks[:3], dtype=np.int32)
labels, polygons = model.predict_instances_big(img, axes=axes, block_size=block_size, min_overlap=min_overlap, context=None,
normalizer=normalizer, show_progress=True, n_tiles=n_tiles)
# Save the predicted mask in the result folder
os.chdir(Results_folder)
if Mask_images:
imsave(str(short_name[0])+".tif", labels, compress=ZIP_DEFLATED)
if Region_of_interests:
export_imagej_rois(str(short_name[0])+'labels_roi.zip', polygons['coord'], compression=ZIP_DEFLATED)
del img
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
# One example image
fig, (a,b) = plt.subplots(1,2, figsize=(20,20))
a.imshow(labels[::8,::8], cmap='tab20b')
b.imshow(labels[::8,::8], cmap=lbl_cmap)
a.axis('off'); b.axis('off');
None;
# + [markdown] id="hvkd66PldsXB"
# ## **6.3. Download your predictions**
# ---
#
# <font size = 4>**Store your data** and ALL its results elsewhere by downloading it from Google Drive and after that clean the original folder tree (datasets, results, trained model etc.) if you plan to train or use new networks. Please note that the notebook will otherwise **OVERWRITE** all files which have the same name.
# + [markdown] id="UvSlTaH14s3t"
#
# #**Thank you for using StarDist 2D!**
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# default_exp callback.PredictionDynamics
# -
# # PredictionDynamics
#
# > Callback used to visualize model predictions during training.
# This is an implementation created by <NAME> (<EMAIL>) based on a [blog post](http://localhost:8888/?token=<PASSWORD>) by <NAME> that I read some time ago and really liked. One of the things he mentioned was this:
# >"**visualize prediction dynamics**. I like to visualize model predictions on a fixed test batch during the course of training. The “dynamics” of how these predictions move will give you incredibly good intuition for how the training progresses. Many times it is possible to feel the network “struggle” to fit your data if it wiggles too much in some way, revealing instabilities. Very low or very high learning rates are also easily noticeable in the amount of jitter." <NAME>
#
#export
from fastai.callback.all import *
from tsai.imports import *
# export
class PredictionDynamics(Callback):
order, run_valid = 65, True
def __init__(self, show_perc=1., figsize=(10,6), alpha=.3, size=30, color='lime', cmap='gist_rainbow', normalize=False,
sensitivity=None, specificity=None):
        """
        Args:
            show_perc:   percent of samples from the valid set that will be displayed. Default: 1 (all).
                         You can reduce it if the number is too high and the chart is too busy.
            figsize:     size of the chart. You may want to expand it if there are too many classes.
            alpha:       level of transparency. Default: .3. 1 means no transparency.
            size:        size of each sample in the chart. Default: 30. You may need to decrease it a bit if there are too many classes/samples.
            color:       color used in regression plots.
            cmap:        color map used in classification plots.
            normalize:   flag to normalize histograms displayed in binary classification.
            sensitivity: (aka recall or True Positive Rate) if you pass a float between 0. and 1., the sensitivity threshold will be plotted in the chart.
                         Only used in binary classification.
            specificity: (aka True Negative Rate) if you pass a float between 0. and 1., it will be plotted in the chart.
                         Only used in binary classification.

        The red line in classification tasks indicates the average probability of the true class.
        """
store_attr()
def before_fit(self):
self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds")
if not self.run:
return
self.cat = True if (hasattr(self.dls, "c") and self.dls.c > 1) else False
if self.cat:
self.binary = self.dls.c == 2
if self.show_perc != 1:
valid_size = len(self.dls.valid.dataset)
self.show_idxs = np.random.choice(valid_size, int(round(self.show_perc * valid_size)), replace=False)
# Prepare ground truth container
self.y_true = []
def before_epoch(self):
# Prepare empty pred container in every epoch
self.y_pred = []
def after_pred(self):
if self.training:
return
# Get y_true in epoch 0
if self.epoch == 0:
self.y_true.extend(self.y.cpu().flatten().numpy())
# Gather y_pred for every batch
if self.cat:
if self.binary:
y_pred = F.softmax(self.pred, -1)[:, 1].reshape(-1, 1).cpu()
else:
y_pred = torch.gather(F.softmax(self.pred, -1), -1, self.y.reshape(-1, 1).long()).cpu()
else:
y_pred = self.pred.cpu()
self.y_pred.extend(y_pred.flatten().numpy())
def after_epoch(self):
# Ground truth
if self.epoch == 0:
self.y_true = np.array(self.y_true)
if self.show_perc != 1:
self.y_true = self.y_true[self.show_idxs]
self.y_bounds = (np.min(self.y_true), np.max(self.y_true))
self.min_x_bounds, self.max_x_bounds = np.min(self.y_true), np.max(self.y_true)
self.y_pred = np.array(self.y_pred)
if self.show_perc != 1:
self.y_pred = self.y_pred[self.show_idxs]
if self.cat:
neg_thr = None
pos_thr = None
if self.specificity is not None:
inp0 = self.y_pred[self.y_true == 0]
neg_thr = np.sort(inp0)[-int(len(inp0) * (1 - self.specificity))]
if self.sensitivity is not None:
inp1 = self.y_pred[self.y_true == 1]
pos_thr = np.sort(inp1)[-int(len(inp1) * self.sensitivity)]
self.update_graph(self.y_pred, self.y_true, neg_thr=neg_thr, pos_thr=pos_thr)
else:
# Adjust bounds during validation
self.min_x_bounds = min(self.min_x_bounds, np.min(self.y_pred))
self.max_x_bounds = max(self.max_x_bounds, np.max(self.y_pred))
x_bounds = (self.min_x_bounds, self.max_x_bounds)
self.update_graph(self.y_pred, self.y_true, x_bounds=x_bounds, y_bounds=self.y_bounds)
def update_graph(self, y_pred, y_true, x_bounds=None, y_bounds=None, neg_thr=None, pos_thr=None):
if not hasattr(self, 'graph_fig'):
self.df_out = display("", display_id=True)
if self.cat:
self._cl_names = self.dls.vocab
self._classes = L(self.dls.vocab.o2i.values())
self._n_classes = len(self._classes)
if self.binary:
self.bins = np.linspace(0, 1, 101)
else:
_cm = plt.get_cmap(self.cmap)
self._color = [_cm(1. * c/self._n_classes) for c in range(1, self._n_classes + 1)][::-1]
self._h_vals = np.linspace(-.5, self._n_classes - .5, self._n_classes + 1)[::-1]
self._rand = []
for i, c in enumerate(self._classes):
self._rand.append(.5 * (np.random.rand(np.sum(y_true == c)) - .5))
self.graph_fig, self.graph_ax = plt.subplots(1, figsize=self.figsize)
self.graph_out = display("", display_id=True)
self.graph_ax.clear()
if self.cat:
if self.binary:
self.graph_ax.hist(y_pred[y_true == 0], bins=self.bins, density=self.normalize, color='red', label=self._cl_names[0],
edgecolor='black', alpha=self.alpha)
self.graph_ax.hist(y_pred[y_true == 1], bins=self.bins, density=self.normalize, color='blue', label=self._cl_names[1],
edgecolor='black', alpha=self.alpha)
self.graph_ax.axvline(.5, lw=1, ls='--', color='gray')
if neg_thr is not None:
self.graph_ax.axvline(neg_thr, lw=2, ls='--', color='red', label=f'specificity={(self.specificity):.3f}')
if pos_thr is not None:
self.graph_ax.axvline(pos_thr, lw=2, ls='--', color='blue', label=f'sensitivity={self.sensitivity:.3f}')
self.graph_ax.set_xlabel(f'probability of class {self._cl_names[1]}', fontsize=12)
self.graph_ax.legend()
else:
for i, c in enumerate(self._classes):
self.graph_ax.scatter(y_pred[y_true == c], y_true[y_true == c] + self._rand[i], color=self._color[i],
edgecolor='black', alpha=self.alpha, lw=.5, s=self.size)
self.graph_ax.vlines(np.mean(y_pred[y_true == c]), i - .5, i + .5, color='r')
self.graph_ax.vlines(.5, min(self._h_vals), max(self._h_vals), lw=.5)
self.graph_ax.hlines(self._h_vals, 0, 1, lw=.5)
self.graph_ax.set_ylim(min(self._h_vals), max(self._h_vals))
self.graph_ax.set_yticks(self._classes)
self.graph_ax.set_yticklabels(self._cl_names)
self.graph_ax.set_ylabel('true class', fontsize=12)
self.graph_ax.set_xlabel('probability of true class', fontsize=12)
self.graph_ax.set_xlim(0, 1)
self.graph_ax.set_xticks(np.linspace(0, 1, 11))
self.graph_ax.grid(axis='x', color='gainsboro', lw=.2)
else:
self.graph_ax.scatter(y_pred, y_true, color=self.color, edgecolor='black', alpha=self.alpha, lw=.5, s=self.size)
self.graph_ax.set_xlim(*x_bounds)
self.graph_ax.set_ylim(*y_bounds)
self.graph_ax.plot([*x_bounds], [*x_bounds], color='gainsboro')
self.graph_ax.set_xlabel('y_pred', fontsize=12)
self.graph_ax.set_ylabel('y_true', fontsize=12)
self.graph_ax.grid(color='gainsboro', lw=.2)
self.graph_ax.set_title(f'Prediction Dynamics \nepoch: {self.epoch + 1}/{self.n_epoch}')
self.df_out.update(pd.DataFrame(np.stack(self.learn.recorder.values)[-1].reshape(1,-1),
columns=self.learn.recorder.metric_names[1:-1], index=[self.epoch]))
self.graph_out.update(self.graph_ax.figure)
if self.epoch == self.n_epoch - 1:
plt.close(self.graph_ax.figure)
from tsai.basics import *
from tsai.models.InceptionTime import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
check_data(X, y, splits, False)
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, InceptionTime, metrics=accuracy, cbs=PredictionDynamics())
learn.fit_one_cycle(2, 3e-3)
#hide
from tsai.imports import *
from tsai.export import *
nb_name = get_nb_name()
# nb_name = "064_callback.PredictionDynamics.ipynb"
create_scripts(nb_name);
| nbs/064_callback.PredictionDynamics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing the word2vec embeddings
#
# In this example, we'll train a word2vec model using Gensim and then visualize the embedding vectors using the `sklearn` implementation of [t-SNE](https://lvdmaaten.github.io/tsne/). t-SNE is a dimensionality reduction technique that will help us visualize the multi-dimensional embedding vectors on a 2D plot.
# Let's start with the imports:
# +
import logging
import pprint # beautify prints
import gensim.downloader as gensim_downloader
import matplotlib.pyplot as plt
import numpy as np
from gensim.models.word2vec import Word2Vec
from sklearn.manifold import TSNE
logging.basicConfig(level=logging.INFO)
# -
# Next, let's instantiate and train Gensim's word2vec model using the `text8` dataset. `text8` is available through Gensim's downloader and contains a tokenized version of the first 100 MB of Wikipedia:
model = Word2Vec(
sentences=gensim_downloader.load('text8'), # download and load the text8 dataset
sg=0, size=100, window=5, negative=5, min_count=5, iter=5)
# Let's see the results of the training by showing the most similar words to the combination of `woman` and `king`:
pprint.pprint(model.wv.most_similar(positive=['woman', 'king'], negative=['man']))
# We can see that the model has been trained properly, because the most similar words are quite relevant to the input query.
# Next, we'll continue with the t-SNE visualization. To do this, we'll collect the words (and vectors) most similar to the set of seed words below. Each seed word will serve as the core of a group of its most similar words, stored in `word_groups` and `embedding_groups`:
# +
target_words = ['mother', 'car', 'tree', 'science', 'building', 'elephant', 'green']
word_groups, embedding_groups = list(), list()
for word in target_words:
words = [w for w, _ in model.wv.most_similar(word, topn=5)]
word_groups.append(words)
embedding_groups.append([model.wv[w] for w in words])
# -
# Next, we'll use the `embedding_groups` to train the t-SNE model for 5000 iterations:
embedding_groups = np.array(embedding_groups)
m, n, vector_size = embedding_groups.shape
tsne_model_en_2d = TSNE(perplexity=8, n_components=2, init='pca', n_iter=5000)
# Next, we'll use the model to generate embeddings reduced to only 2 dimensions:
embeddings_2d = tsne_model_en_2d.fit_transform(embedding_groups.reshape(m * n, vector_size))
embeddings_2d = np.array(embeddings_2d).reshape(m, n, 2)
# Finally, we'll visualize the generated embeddings:
# +
# Plot the results
plt.figure(figsize=(16, 9))
# Different color and marker for each group of similar words
color_map = plt.get_cmap('Dark2')(np.linspace(0, 1, len(target_words)))
markers = ['o', 'v', 's', 'x', 'D', '*', '+']
# Iterate over all groups
for label, similar_words, emb, color, marker in \
zip(target_words, word_groups, embeddings_2d, color_map, markers):
x, y = emb[:, 0], emb[:, 1]
# Plot the points of each word group
plt.scatter(x=x, y=y, c=[color], label=label, marker=marker)
    # Annotate each vector with its corresponding word
for word, w_x, w_y in zip(similar_words, x, y):
plt.annotate(word, xy=(w_x, w_y), xytext=(0, 15),
textcoords='offset points', ha='center', va='top', size=10)
plt.legend()
plt.grid(True)
plt.show()
# + active=""
# We can see how the words of each embedding group are clustered in the same area of the plot. This suggests that the embedding vectors capture the semantic similarity between words.
| Chapter06/word2vec_visualize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling and Simulation in Python
#
# Chapter 12
#
# Copyright 2017 <NAME>
#
# License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
#
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
# -
# ### Code
#
# Here's the code from the previous notebook that we'll need.
def make_system(beta, gamma):
"""Make a system object for the SIR model.
    beta: contact rate (per day)
    gamma: recovery rate (per day)
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
# ### Metrics
# Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
# Here's an example.
# +
beta = 0.333
gamma = 0.25
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, gamma, calc_total_infected(results))
# -
# **Exercise:** Write functions that take a `TimeFrame` object as a parameter and compute the other metrics mentioned in the book:
#
# 1. The fraction of students who are sick at the peak of the outbreak.
#
# 2. The day the outbreak peaks.
#
# 3. The fraction of students who are sick at the end of the semester.
#
# Note: Not all of these functions require the `System` object, but when you write a set of related functions, it is often convenient if they all take the same parameters.
#
# Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this:
#
# I.max()
#
# And the index of the largest value like this:
#
# I.idxmax()
#
# You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
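# Following those hints, here is one possible sketch of the three metric functions (an illustration, not the book's solution), assuming `results` behaves like a pandas `DataFrame` with an `I` column, as a `TimeFrame` does:

```python
import pandas as pd

def peak_infected(results):
    # 1. Largest fraction of students sick at any time step
    return results.I.max()

def peak_day(results):
    # 2. Day (index) on which the infected fraction peaks
    return results.I.idxmax()

def final_infected(results):
    # 3. Fraction of students sick at the end of the semester
    return results.I.iloc[-1]

# Tiny stand-in for simulation results
results = pd.DataFrame({'S': [0.9, 0.7, 0.6],
                        'I': [0.1, 0.25, 0.2],
                        'R': [0.0, 0.05, 0.2]})
print(peak_infected(results), peak_day(results), final_infected(results))  # 0.25 1 0.2
```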
# +
# Solution goes here
# +
# Solution goes here
# -
# ### What if?
# We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
# Let's start again with the system we used in the previous sections.
# +
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
# -
# And run the model without immunization.
results = run_simulation(system, update_func)
calc_total_infected(results)
# Now with 10% immunization.
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
calc_total_infected(results2)
# 10% immunization leads to a drop in infections of 16 percentage points.
#
# Here's what the time series looks like for S, with and without immunization.
# +
plot(results.S, '-', label='No immunization')
plot(results2.S, '--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('figs/chap05-fig02.pdf')
# -
# Now we can sweep through a range of values for the fraction of the population who are immunized.
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
print(fraction, calc_total_infected(results))
# This function does the same thing and stores the results in a `Sweep` object.
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
results = run_simulation(system, update_func)
sweep[fraction] = calc_total_infected(results)
return sweep
# Here's how we run it.
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
# And here's what the results look like.
# +
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('figs/chap05-fig03.pdf')
# -
# If 40% of the population is immunized, less than 4% of the population gets sick.
# ### Logistic function
# To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function) (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
    """Computes the generalized logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
# The following array represents the range of possible spending.
spending = linspace(0, 1200, 21)
# `compute_factor` computes the reduction in `beta` for a given level of campaign spending.
#
# `M` is chosen so the transition happens around \$500.
#
# `K` is the maximum reduction in `beta`, 20%.
#
# `B` is chosen by trial and error to yield a curve that seems feasible.
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
# Here's what it looks like.
# +
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
savefig('figs/chap05-fig04.pdf')
# -
# **Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
# ### Hand washing
# Now we can model the effect of a hand-washing campaign by modifying `beta`.
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
# Let's start with the same values of `beta` and `gamma` we've been using.
# +
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
# -
# Now we can sweep different levels of campaign spending.
# +
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
print(spending, system.beta, calc_total_infected(results))
# -
# Here's a function that sweeps a range of spending and stores the results in a `SweepSeries`.
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results)
return sweep
# Here's how we run it.
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
# And here's what it looks like.
# +
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('figs/chap05-fig05.pdf')
# -
# Now let's put it all together to make some public health spending decisions.
# ### Optimization
# Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses, endpoint=True)
max_doses
# We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations.
#
# For each scenario, we compute the fraction of students who get sick.
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
    results = run_simulation(system, update_func)
print(doses, system.init.S, system.beta, calc_total_infected(results))
# The following function wraps that loop and stores the results in a `Sweep` object.
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results)
return sweep
# Now we can compute the number of infected students for each possible allocation of the budget.
infected_sweep = sweep_doses(dose_array)
# And plot the results.
# +
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('figs/chap05-fig06.pdf')
# + active=""
# ### Exercises
#
# **Exercise:** Suppose the price of the vaccine drops to \$50 per dose. How does that affect the optimal allocation of the spending?
# -
# **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.
#
# How might you incorporate the effect of quarantine in the SIR model?
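# One way to sketch an answer (an assumption, not the book's solution): quarantine shortens the time an infected student mixes with others, which we can model by increasing `gamma`. A minimal, self-contained version, using a plain namespace in place of a `System` object and a hypothetical `tr_quarantine` parameter:

```python
from types import SimpleNamespace

def add_quarantine(system, fraction, tr_quarantine=1):
    """Model quarantining a fraction of infected students.

    Assumption: a quarantined student is only infectious for
    `tr_quarantine` days, so the effective recovery time is a
    weighted average of the two infectious periods.
    """
    tr = 1 / system.gamma                                    # current recovery time in days
    tr_eff = fraction * tr_quarantine + (1 - fraction) * tr
    system.gamma = 1 / tr_eff                                # shorter infectious period -> larger gamma

# Stand-in for a System with tc=3, tr=4 as in the chapter
system = SimpleNamespace(beta=1/3, gamma=1/4)
add_quarantine(system, fraction=0.5)
print(round(1 / system.gamma, 2))  # effective recovery time drops from 4 to 2.5 days
```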
# +
# Solution goes here
| code/chap12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Look at the Vo data - households and contact network; add some visualisation of testing over time
# -
# %matplotlib inline
import numpy as np
import scipy.stats as st
from scipy import sparse
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import sklearn as sk
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import dendrogram, linkage
df = pd.read_csv('./vo_data.csv')
df
for x in df.columns[104:123]:
print(x)
posi = ((df['first_sampling'].values == 'Positive') | (df['second_sampling'].values == 'Positive'))
# indices taken from the vo_legend file
symptom_indices = range(3,13)
contact_indices = range(13,104)
testday_indices = range(104,123)
hcol = df.household_id.values
hhids = pd.unique(df.household_id)
len(hhids)
hh_tests = []
ages = []
for hid in hhids:
dfh = df[df.household_id == hid]
tests = dfh.iloc[:,testday_indices].values
aa = dfh.iloc[:,2].values
tests[tests=='Neg'] = 0
tests[tests=='Pos'] = 1
hh_tests.append(tests)
ages.append(aa)
age_gs = pd.unique(df.age_group)
age_gs.sort()
age_gs
nsamp = np.zeros(len(age_gs))
npos = np.zeros(len(age_gs))
for i, ag in enumerate(age_gs):
dfa = df[df.age_group == ag]
nsamp[i] = len(dfa)
dfp = df[posi]
dfa = dfp[dfp.age_group == ag]
npos[i] = len(dfa)
plt.bar(np.arange(0,len(nsamp)),nsamp)
plt.bar(np.arange(0,len(npos)),npos)
plt.bar(np.arange(0,len(npos)),npos/nsamp)
# Dictionary that puts ages in categories
as2rg = {
'00-10' : 1,
'11-20' : 1,
'21-30' : 0, # 0 is reference class
'31-40' : 0,
'41-50' : 0,
'51-60' : 2,
'61-70' : 3,
'71-80' : 4,
'81-90' : 5,
'91+' : 5,
}
nages = max(as2rg.values())
Y = [] # To store outcomes
XX = [] # To store design matrices
for i in range(0,len(hhids)):
mya = [as2rg[a] for a in ages[i]]
m = len(mya)
myx = np.zeros((m,nages))
myy = np.zeros(m)
for j, a in enumerate(mya):
if (a>0):
myx[j,a-1] = 1
if (np.any(hh_tests[i][j,:]==1)):
myy[j] = 1
Y.append(myy)
XX.append(myx)
k=0
plt.figure(figsize=(18,20))
for j in range(0,len(hh_tests)):
tests = hh_tests[j]
nn, tt = tests.shape
if ((tests[tests==1]).size > 0):
#print(tests)
k+=1
plt.subplot(10,6,k)
for y in range(0,nn):
for t in range(0,tt):
if (tests[y,t] == 1):
plt.plot(t,y,marker='+',c='r')
elif (tests[y,t] == 0):
plt.plot(t,y,marker='o',c='b')
else:
plt.plot(t,y,marker='.',c='k')
plt.ylim([-0.5, nn-0.5])
plt.yticks(np.arange(0,nn),ages[j])
plt.tight_layout()
itxt = df.id.values
jtxt = df.columns.values[contact_indices]
ii = range(1,len(itxt)+1)
t2i = dict(zip(itxt,ii)) # this will be a dictionary storing lookup from code to node
k = len(t2i) # This will count the number of nodes
IJ = [] # This is for import into networkx
for jt in jtxt:
if jt in t2i.keys():
j = t2i[jt]
else:
k += 1
t2i[jt] = k
j = k
traced = df[jt].values
for i in np.where(~np.isnan(traced))[0]:
IJ.append((j, i))
I = range(1,k+1)
DG = nx.DiGraph()
DG.add_nodes_from(I)
DG.add_edges_from(IJ)
c = []
for i in range(0,k):
if (i < len(itxt)):
if (not posi[i]):
c.append('b')
else:
c.append('r')
else:
c.append('r')
# TO DO: Check if these indicate shared contacts by
plt.figure(figsize=(10,10))
pos=nx.spring_layout(DG)
nx.draw_networkx(DG,pos,with_labels=False,node_size=10,node_color=c)
plt.savefig('./vonet.pdf',format='pdf')
hsize = []
hpos = []
for hh in hhids:
jj = np.argwhere(hcol == hh)
hsize.append(len(jj[:,0]))
hpos.append(sum(posi[jj[:,0]]))
hsize = np.array(hsize)
hpos = np.array(hpos)
plt.hist(hsize[hsize<10.0],np.arange(0.5,10.5)) # Plot the household-size distribution; sizes 6+ are grouped together below
q = np.sum(hpos)/np.sum(hsize)
q
# +
plt.figure(figsize=(6,8))
for i in range(1,6):
plt.subplot(3,2,i)
yy = hpos[hsize==i]
xx = np.arange(0,i+1)
zz = st.binom.pmf(xx,i,q)
plt.scatter(xx,zz,zorder=2,c='w',s=60)
plt.scatter(xx,zz,zorder=3,c='b',s=40)
plt.hist(yy,bins=np.arange(-0.5,i+1.0,1),color='r',density=True,zorder=1)
plt.xticks(range(0,i+1))
plt.ylim([0,0.15])
plt.xlabel('Swab Positives')
plt.ylabel('Proportion')
plt.title('Household Size ' + str(i))
plt.subplot(3,2,6)
yy = hpos[hsize>=6]
xx = np.arange(0,7)
zz = st.binom.pmf(xx,i,q)
plt.scatter(xx,zz,zorder=2,c='w',s=60)
plt.scatter(xx,zz,zorder=3,c='b',s=40)
plt.hist(yy,bins=np.arange(-0.5,7.0,1),color='r',density=True,zorder=1)
plt.xticks(range(0,7))
plt.ylim([0,0.15])
plt.xlabel('Swab Positives')
plt.ylabel('Proportion')
plt.title('Household Size 6+')
plt.tight_layout()
#plt.savefig('./vohh.pdf',format='pdf')
# -
df_pos_symp = df[posi].iloc[:,symptom_indices]
symps = ['fever','cough','sore_throat','malaise','diarrhea','conjunctivitis','other_symptoms']
symp_names = ['Fever','Cough','Sore Throat','Malaise','Diarrhoea','Conjunctivitis','Rhinitis']
print(pd.unique(df_pos_symp.other_symptoms))
Xtxt = df_pos_symp[symps].values
X = Xtxt.copy()
X[pd.isna(X)] = 0.0
X[X=='rhinitis'] = 1.0
X = X.astype(float)  # np.float is deprecated in newer NumPy; the builtin float works
# This is linkage by symptoms on cases not linkage of symptoms
W = linkage(X.T, 'ward')
plt.figure(figsize=(6,8))
dendrogram(W,labels=np.array(symp_names), leaf_rotation=90)
plt.tight_layout()
#plt.savefig('./vo_symptoms.pdf')
myica = FastICA(n_components=2, random_state=33)
myica.fit(X)
n, p = X.shape
vv = myica.components_
Z = X@(vv.T)
U = np.eye(p)@(vv.T)
plt.figure(figsize=(6,6))
plt.scatter(Z[:,0],Z[:,1], c='b', s=80, alpha=0.3) # Does look a bit like no symptoms and three clusters
for i in range(0,p):
#plt.plot(np.array([0.0, U[i,0]]),np.array([0.0, U[i,1]]))
plt.annotate(symp_names[i],np.zeros(2),xytext=U[i,:],arrowprops={'arrowstyle':'->'})
plt.xlabel('IC 1')
plt.ylabel('IC 2')
plt.tight_layout()
#plt.savefig('./vo_ica.pdf')
| examples/vo/.ipynb_checkpoints/vo_plots-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cuml4
# language: python
# name: python3
# ---
# # Principal Component Analysis (PCA)
#
# PCA is a dimensionality reduction algorithm that works especially well for datasets with correlated columns. It projects the features of X onto linear combinations (principal components) chosen so that the leading components capture the most variance in the data.
# The model can take array-like objects, either on the host (as NumPy arrays) or on the device (as Numba or `__cuda_array_interface__`-compliant arrays), as well as cuDF DataFrames as the input.
#
# For more information about cuDF, refer to the cuDF documentation: https://rapidsai.github.io/projects/cudf/en/latest/
#
# For more information about cuML's PCA implementation: https://rapidsai.github.io/projects/cuml/en/latest/api.html#principal-component-analysis
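# As a quick refresher on what PCA computes (a NumPy sketch, independent of cuML and scikit-learn): center the data, take the SVD, and the squared singular values give the variance explained by each component.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two strongly correlated columns: most variance lies along a single direction
x = rng.normal(size=200)
X = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=200)])

Xc = X - X.mean(axis=0)                      # center each column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_variance = S**2 / (len(X) - 1)
ratio = explained_variance / explained_variance.sum()
print(ratio[0] > 0.99)                       # first component dominates
```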
# +
import os
import numpy as np
import pandas as pd
import cudf as gd
from cuml.datasets import make_blobs
from sklearn.decomposition import PCA as skPCA
from cuml.decomposition import PCA as cumlPCA
# -
# ## Define Parameters
# +
n_samples = 2**15
n_features = 400
n_components = 2
whiten = False
random_state = 42
svd_solver = "full"
# -
# ## Generate Data
#
# ### GPU
# +
# %%time
device_data, _ = make_blobs(n_samples=n_samples,
n_features=n_features,
centers=5,
random_state=random_state)
device_data = gd.DataFrame.from_gpu_matrix(device_data)
# -
# ### Host
host_data = device_data.to_pandas()
# ## Scikit-learn Model
# +
# %%time
pca_sk = skPCA(n_components=n_components,
svd_solver=svd_solver,
whiten=whiten,
random_state=random_state)
result_sk = pca_sk.fit_transform(host_data)
# -
# ## cuML Model
# +
# %%time
pca_cuml = cumlPCA(n_components=n_components,
svd_solver=svd_solver,
whiten=whiten,
random_state=random_state)
result_cuml = pca_cuml.fit_transform(device_data)
# -
# ## Evaluate Results
#
# ### Singular Values
passed = np.allclose(pca_sk.singular_values_,
pca_cuml.singular_values_,
atol=0.01)
print('compare pca: cuml vs sklearn singular_values_ {}'.format('equal' if passed else 'NOT equal'))
# ### Explained Variance
passed = np.allclose(pca_sk.explained_variance_,
pca_cuml.explained_variance_,
atol=1e-8)
print('compare pca: cuml vs sklearn explained_variance_ {}'.format('equal' if passed else 'NOT equal'))
# ### Explained Variance Ratio
passed = np.allclose(pca_sk.explained_variance_ratio_,
pca_cuml.explained_variance_ratio_,
atol=1e-8)
print('compare pca: cuml vs sklearn explained_variance_ratio_ {}'.format('equal' if passed else 'NOT equal'))
# ### Components
passed = np.allclose(pca_sk.components_,
np.asarray(pca_cuml.components_.as_gpu_matrix()),
atol=1e-6)
print('compare pca: cuml vs sklearn components_ {}'.format('equal' if passed else 'NOT equal'))
# ### Transform
passed = np.allclose(result_sk, np.asarray(result_cuml.as_gpu_matrix()), atol=1e-1)
print('compare pca: cuml vs sklearn transformed results {}'.format('equal' if passed else 'NOT equal'))
| cuml/pca_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv
# language: python
# name: venv
# ---
# !which jupyter
# +
#importing dependencies
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.utils.visualizer import Visualizer
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2 import model_zoo
import random
import cv2
import json
import os
import numpy as np
import torch
import torchvision
from IPython.display import Image, display
import PIL
# +
# util functions
def showImage(imgData):
display(PIL.Image.fromarray(imgData))
def showImageFromFile(imgPath):
Image(filename=imgPath)
def getImage(imgPath):
return cv2.imread(imgPath)
def getTestImage():
return getImage('../tests/2.jpg')
# -
im = getTestImage()
showImage(im[:, :, ::-1])
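# OpenCV's `cv2.imread` returns pixels in B, G, R channel order, while PIL and matplotlib expect R, G, B; the slice `im[:, :, ::-1]` simply reverses the channel axis. A tiny sketch of the same reversal on a single pixel:

```python
# A single BGR pixel; [::-1] reverses it into RGB order, which is
# exactly what im[:, :, ::-1] does along the last axis of the image array.
bgr_pixel = [20, 40, 60]       # blue, green, red
rgb_pixel = bgr_pixel[::-1]    # red, green, blue
assert rgb_pixel == [60, 40, 20]
```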
# # Panoptic segmentation model
# Get configuration
cfg = get_cfg()
# Set configuration variables
cfg.MODEL.DEVICE = 'cpu'
cfg.merge_from_file(model_zoo.get_config_file(
"COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
cfg
predictor = DefaultPredictor(cfg)
# Create a simple end-to-end predictor with the given config
# that runs on single device for a single input image.
predictor
panoptic_seg, segments_info = predictor(im)["panoptic_seg"]
panoptic_seg
segments_info
| notebooks/panoptic_segmentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/anishmahapatra/Data-Science-Questions/blob/master/DP_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LFYWBSrtHwf9"
# # PQ-2: Dynamic Programming
# Author: AM, PQ
#
# <b>Aim:</b> <br/>The purpose of this notebook is to understand and implement dynamic programming techniques
#
# + [markdown] id="ehtgAaMFH2Te"
# <a name="0"></a>
# ## Table of Contents
#
# 1 [Min Steps to 1](#1) <br/>
# 2 [Min Squares to represent N](#2) <br/>
# 2.1 [Min Squares to represent N - Recursion](#2.1) <br/>
# 2.2 [Min Squares to represent N - Memoization](#2.2) <br/>
# 2.3 [Min Squares to represent N - Iterative](#2.3) <br/>
# 3 [Longest Increasing Subsequence](#3) <br/>
# 3.1 [LIS - Recursion](#3.1) <br/>
# 3.2 [LIS - Memoization](#3.2) <br/>
# 3.3 [LIS - Iterative](#3.3) <br/>
# + [markdown] id="4n0f1wNMoT7L"
# <a name="1"></a>
# ## 1 Min Steps to 1
# Back to [Table of Contents](#0)
#
# ---
#
# + id="goe8s-QYojUa"
# + colab={"base_uri": "https://localhost:8080/"} id="L92jt8KtsVah" outputId="6c8bee86-a11b-4a81-f93d-605bbe9dfabf"
# + [markdown] id="K0lZO-KloYbK"
# <a name="2"></a>
# ## 2 Min Squares to represent N
# Back to [Table of Contents](#0)
#
# ---
#
# + [markdown] id="mmFxiL28sax5"
# <a name="2.1"></a>
# ### 2.1 Min Squares to represent N - Recursion
# Back to [Table of Contents](#0)
#
# + colab={"base_uri": "https://localhost:8080/"} id="yrxBy6JLsiVD" outputId="2cfe4090-e599-4899-cfba-697470350a75"
# Python3 Program to implement
# the above approach
def find_sequence(n):
# Base Case
if n == 1:
return 1, -1
# Recursive Call for n-1
ans = (find_sequence(n - 1)[0] + 1, n - 1)
# Check if n is divisible by 2
if n % 2 == 0:
div_by_2 = find_sequence(n // 2)
if div_by_2[0] < ans[0]:
ans = (div_by_2[0] + 1, n // 2)
# Check if n is divisible by 3
if n % 3 == 0:
div_by_3 = find_sequence(n // 3)
if div_by_3[0] < ans[0]:
ans = (div_by_3[0] + 1, n // 3)
# Returns a tuple (a, b), where
# a: Minimum steps to obtain x from 1
# b: Previous number
return ans
# Function that find the optimal
# solution
def find_solution(n):
a, b = find_sequence(n)
# Print the length
print(a)
sequence = []
sequence.append(n)
# Exit condition for loop = -1
# when n has reached 1
while b != -1:
sequence.append(b)
_, b = find_sequence(b)
# Return the sequence
# in reverse order
return sequence[::-1]
# Driver Code
# Given N
n = 5
# Function Call
print(*find_solution(n))
# + [markdown] id="T7e4bSPssi1I"
# <a name="2.2"></a>
# ### 2.2 Min Squares to represent N - Memoization
# Back to [Table of Contents](#0)
#
# + colab={"base_uri": "https://localhost:8080/"} id="wk8wONNwojn8" outputId="cddc77e8-32b7-4fa6-a0c2-bf9648ba922a"
# Python3 program to implement
# the above approach
# Function to find the sequence
# with given operations
def find_sequence(n, map):
# Base Case
if n == 1:
return 1, -1
# Check if the subproblem
# is already computed or not
if n in map:
return map[n]
# Recursive Call for n-1
ans = (find_sequence(n - 1, map)[0]\
+ 1, n - 1)
# Check if n is divisible by 2
if n % 2 == 0:
div_by_2 = find_sequence(n // 2, map)
if div_by_2[0] < ans[0]:
ans = (div_by_2[0] + 1, n // 2)
# Check if n is divisible by 3
if n % 3 == 0:
div_by_3 = find_sequence(n // 3, map)
if div_by_3[0] < ans[0]:
ans = (div_by_3[0] + 1, n // 3)
# Memoize
map[n] = ans
# Returns a tuple (a, b), where
# a: Minimum steps to obtain x from 1
# b: Previous state
return ans
# Function to check if a sequence can
# be obtained with given operations
def find_solution(n):
# Stores the computed
# subproblems
map = {}
a, b = find_sequence(n, map)
# Return a sequence in
# reverse order
print(a)
sequence = []
sequence.append(n)
# If n has reached 1
while b != -1:
sequence.append(b)
_, b = find_sequence(b, map)
# Return sequence in reverse order
return sequence[::-1]
# Driver Code
# Given N
n = 5
# Function Call
print(*find_solution(n))
# + [markdown] id="FFglKdpksj0a"
# <a name="2.3"></a>
# ### 2.3 Min Squares to represent N - Iterative
# Back to [Table of Contents](#0)
#
# + colab={"base_uri": "https://localhost:8080/"} id="iWjid3EDrfIS" outputId="9bdfdbff-19cb-430b-fb47-eb313140fff2"
# Python3 program to implement
# the above approach
# Function to generate
# the minimum sequence
def find_sequence(n):
# Stores the values for the
# minimum length of sequences
dp = [0 for _ in range(n + 1)]
# Base Case
dp[1] = 1
# Loop to build up the dp[]
# array from 1 to n
for i in range(1, n + 1):
if dp[i] != 0:
# If i + 1 is within limits
if i + 1 < n + 1 and (dp[i + 1] == 0 \
or dp[i + 1] > dp[i] + 1):
# Update the state of i + 1
# in dp[] array to minimum
dp[i + 1] = dp[i] + 1
# If i * 2 is within limits
if i * 2 < n + 1 and (dp[i * 2] == 0 \
or dp[i * 2] > dp[i] + 1):
# Update the state of i * 2
# in dp[] array to minimum
dp[i * 2] = dp[i] + 1
# If i * 3 is within limits
if i * 3 < n + 1 and (dp[i * 3] == 0 \
or dp[i * 3] > dp[i] + 1):
# Update the state of i * 3
# in dp[] array to minimum
dp[i * 3] = dp[i] + 1
# Generate the sequence by
# traversing the array
sequence = []
while n >= 1:
# Append n to the sequence
sequence.append(n)
# If the value of n in dp
# is obtained from n - 1:
if dp[n - 1] == dp[n] - 1:
n = n - 1
# If the value of n in dp[]
# is obtained from n / 2:
elif n % 2 == 0 and dp[n // 2] == dp[n] - 1:
n = n // 2
# If the value of n in dp[]
# is obtained from n / 3:
elif n % 3 == 0 and dp[n // 3] == dp[n] - 1:
n = n // 3
# Return the sequence
# in reverse order
return sequence[::-1]
# Driver Code
# Given Number N
n = 5
# Function Call
sequence = find_sequence(n)
# Print the length of
# the minimal sequence
print(len(sequence))
# Print the sequence
print(*sequence)
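# The same recurrence can also be memoized very compactly with `functools.lru_cache`. This sketch is not part of the original notebook: it returns only the count (not the sequence), and because it recurses on `n - 1` it is limited to modest `n` by Python's recursion depth.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_steps(n):
    # Length (number of values) of the shortest 1 -> n chain using the
    # operations +1, *2, *3; equivalently, reduce n to 1 via -1, /2, /3.
    if n == 1:
        return 1
    best = min_steps(n - 1) + 1
    if n % 2 == 0:
        best = min(best, min_steps(n // 2) + 1)
    if n % 3 == 0:
        best = min(best, min_steps(n // 3) + 1)
    return best

print(min_steps(5))   # 4, matching the iterative version above
```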
# + [markdown] id="LiFcpPnbod6e"
# <a name="3"></a>
# ## 3 Longest increasing Subsequence (LIS)
# Back to [Table of Contents](#0)
#
# ---
#
# + id="-cya0gDMHpqE"
# + [markdown] id="wTqGQyy1OtFj"
# <a name="3.1"></a>
# ### 3.1 LIS - Recursion
# Back to [Table of Contents](#0)
#
# + colab={"base_uri": "https://localhost:8080/"} id="Omj67lz9Ovus" outputId="82d69be2-9f06-4ab9-b072-f20ae9139190"
# A naive Python implementation of LIS problem
""" To make use of recursive calls, this function must return
two things:
1) Length of LIS ending with element arr[n-1]. We use
max_ending_here for this purpose
2) Overall maximum as the LIS may end with an element
before arr[n-1] max_ref is used this purpose.
The value of LIS of full array of size n is stored in
*max_ref which is our final result """
# global variable to store the maximum
global maximum
def _lis(arr, n):
# to allow the access of global variable
global maximum
# Base Case
if n == 1:
return 1
# maxEndingHere is the length of LIS ending with arr[n-1]
maxEndingHere = 1
    """Recursively get all LIS ending with arr[0], arr[1]..arr[n-2].
    If arr[i-1] is smaller than arr[n-1], and the max LIS ending with
    arr[n-1] needs to be updated, then update it"""
for i in range(1, n):
res = _lis(arr, i)
if arr[i-1] < arr[n-1] and res+1 > maxEndingHere:
maxEndingHere = res + 1
# Compare maxEndingHere with overall maximum. And
# update the overall maximum if needed
maximum = max(maximum, maxEndingHere)
return maxEndingHere
def lis(arr):
# to allow the access of global variable
global maximum
    # length of arr
n = len(arr)
# maximum variable holds the result
maximum = 1
# The function _lis() stores its result in maximum
_lis(arr, n)
return maximum
# Driver program to test the above function
arr = [10, 22, 9, 33, 21, 50, 41, 60]
n = len(arr)
print ("Length of lis is ", lis(arr))
# + [markdown] id="9f80Kh8dOwQ0"
# <a name="3.2"></a>
# ### 3.2 LIS - Memoization - DP
# Back to [Table of Contents](#0)
#
# + colab={"base_uri": "https://localhost:8080/"} id="i6sYpQmwOw9e" outputId="51564da4-5a30-4aff-857c-b853234acbed"
# Dynamic programming Python implementation
# of LIS problem
# lis returns length of the longest
# increasing subsequence in arr of size n
def lis(arr):
n = len(arr)
# Declare the list (array) for LIS and
# initialize LIS values for all indexes
lis = [1]*n
# Compute optimized LIS values in bottom up manner
for i in range(1, n):
for j in range(0, i):
if arr[i] > arr[j] and lis[i] < lis[j] + 1:
lis[i] = lis[j]+1
# Initialize maximum to 0 to get
# the maximum of all LIS
maximum = 0
# Pick maximum of all LIS values
for i in range(n):
maximum = max(maximum, lis[i])
return maximum
# end of lis function
# Driver program to test above function
arr = [10, 22, 9, 33, 21, 50, 41, 60]
print ("Length of lis is", lis(arr))
# + [markdown] id="k_xNaHTjOxY6"
# <a name="3.3"></a>
# ### 3.3 LIS - Iterative
# Back to [Table of Contents](#0)
#
# + colab={"base_uri": "https://localhost:8080/"} id="bTRU5R4nOxs0" outputId="446f45e4-6281-4c0d-ec31-d71621c237e9"
# Dynamic Programming Approach of Finding LIS by reducing the problem to longest common Subsequence
def lis(a):
n=len(a)
# Creating the sorted list
b=sorted(list(set(a)))
m=len(b)
# Creating dp table for storing the answers of sub problems
dp=[[-1 for i in range(m+1)] for j in range(n+1)]
# Finding Longest common Subsequence of the two arrays
for i in range(n+1):
for j in range(m+1):
if i==0 or j==0:
dp[i][j]=0
elif a[i-1]==b[j-1]:
dp[i][j]=1+dp[i-1][j-1]
else:
dp[i][j]=max(dp[i-1][j],dp[i][j-1])
return dp[-1][-1]
# Driver program to test above function
arr = [10, 22, 9, 33, 21, 50, 41, 60]
print("Length of lis is ", lis(arr))
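# Beyond the O(n^2) approaches above, the length of the LIS can be found in O(n log n) with binary search over a "tails" array (the patience-sorting technique). This variant is a sketch added for comparison and is not part of the original notebook:

```python
from bisect import bisect_left

def lis_nlogn(arr):
    # tails[k] holds the smallest possible tail value of an
    # increasing subsequence of length k + 1 seen so far.
    tails = []
    for x in arr:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)    # x extends the longest subsequence found
        else:
            tails[i] = x       # x gives a smaller tail for length i + 1
    return len(tails)

arr = [10, 22, 9, 33, 21, 50, 41, 60]
print("Length of lis is", lis_nlogn(arr))   # 5, same as the DP versions
```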
| DP_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Defining a `FilePattern`
#
# Welcome to the Pangeo Forge introduction tutorial! This is the 1st part in a sequence, the flow of which is described {doc}`here </introduction_tutorial/index>`.
# ## Part 1 Outline
#
# The main goal of the first two parts of this tutorial is to create and run a **recipe**, the object that defines our data transformation.
#
# In part 1 we create a `FilePattern` object. A `FilePattern` contains everything Pangeo Forge needs to know about where the input data are coming from and how the individual files should be organized into the output. They are a huge step toward creating a recipe.
#
# The steps to creating a `FilePattern` are:
#
# 1. Understand the URL Pattern for Coastwatch SLA & Create a Template String
# 1. Define the **Combine Dimension** object
# 1. Create a Format Function
# 1. Define a `FilePattern`
#
# We will talk about each of these one at a time.
# ### Where should I write this code?
# Eventually, all of the code defining the recipe will go in a file called `recipe.py`, so one option for development is to develop your recipe directly in that file. Alternatively, if you prefer the interactivity of a notebook you can work on your recipe code in a Jupyter Notebook and then copy the final code to a single `.py` file later. The choice between the two is personal preference.
# ## Understand the URL Pattern for Coastwatch SLA & Create a Template String
#
#
# ### Explore the structure
#
# In order to create our Recipe, we have to understand how the data are organized on the server.
# Like many datasets, Coastwatch SLA is available over the internet via the HTTP protocol.
# We can browse the files at this URL:
#
# <https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt/>
#
# By clicking the link, we can explore the organization of the dataset, which we need to do in order to build our Recipe.
# The link above shows folders grouped by year. Within each year's folder there is one file per day. We could represent the file structure like this:
# 
# The important takeaways from this structure exploration are:
# - 1 file = 1 day
# - Folders separate years
# ### A single URL
#
# By putting together the full URL for a single file we can see that the Coastwatch SLA data for January 1st, 2012 would be accessed using the URL:
#
# [https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt/2012/rads_global_dt_sla_20120101_001.nc](https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt/2012/rads_global_dt_sla_20120101_001.nc)
#
# Copying and pasting that url into a web browser will download that single file to your computer.
#
# If we just have a few files, we can just manually type out the URLs for each of them.
# But that isn't practical when we have thousands of files.
# We need to understand the _pattern_.
# ### Create a URL Template String
# We can generalize the URL to say that the sea level anomaly files are accessed using a URL of the format:
#
# `https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt/{year}/rads_global_dt_sla_{year_month_day}_001.nc`
#
# where `{year}` and `{year_month_day}` change for each file. Of the three dimensions of this dataset - latitude, longitude and time - the individual files are split up by time.
# Our goal is to combine, or _concatenate_, these files along the time dimension into a single Zarr dataset.
# 
# ### Why does this matter so much?
#
# A Pangeo Forge {class}`FilePattern <pangeo_forge_recipes.patterns.FilePattern>` is built on the premise that
#
# 1. We want to combine many individual small files into a larger dataset along one or more dimensions using either "concatenate" or "merge" style operations.
# 1. The individual files are accessible by URL and organized in a predictable way.
# 1. There is some kind of correspondence, or mapping, between the dimensions of the combination process and the actual URLs.
#
# Knowing the generalized structure of the Coastwatch SLA URL leads us to start building the pieces of a `FilePattern`.
# ## About the `FilePattern` object
#
# ```{note}
# `FilePattern`s are probably the most abstract part of Pangeo Forge.
# It may take some time and experience to become comfortable with the `FilePattern` concept.
# ```
#
# The goal of the `FilePattern` is to describe how the files in the generalized URL template string should be organized when they get combined together into a single zarr datastore.
#
# In order to define a `FilePattern` we need to:
# 1. Know the dimension of data that will be used to combine the files. In the case of Coastwatch SLA the dimension is time.
# 2. Define the values of the dimension that correspond to each file, called the `key`s
# 3. Create a function that converts the `key`s to the specific URL for each file. We call this the Format Function.
#
# The first two pieces together are called the **Combine Dimension**. Let's start by defining that.
#
# ## Define the **Combine Dimension**
#
# The {class}`Combine Dimension <pangeo_forge_recipes.patterns.ConcatDim>` describes the relationship between files. In this dataset we only have one combine dimension: time. There is one file per day, and we want to concatenate the files in time. We will use the Pangeo Forge object `ConcatDim()`.
#
# We also want to define the values of time that correspond to each file. These are called the `key`s. For Coastwatch SLA this means creating a list of every day covered by the dataset. The easiest way to do this is with the Pandas `date_range` function.
# +
import pandas as pd
dates = pd.date_range('2012-01-01', '2019-11-01', freq='D')
# print the first 4 dates
dates[:4]
# -
# These will be the `key`s for our **Combine Dimension**.
# We now define a {class}`ConcatDim <pangeo_forge_recipes.patterns.ConcatDim>` object as follows:
# +
from pangeo_forge_recipes.patterns import ConcatDim
time_concat_dim = ConcatDim("time", dates, nitems_per_file=1)
time_concat_dim
# -
# The `nitems_per_file=1` option is a hint we can give to Pangeo Forge. It means, "we know there is only one timestep in each file".
# Providing this hint is not necessary, but it makes some things more efficient down the line.
# + [markdown] tags=[]
# ## Define a Format Function
#
# Next we need to write a function that takes a single key (here representing one day) and translates it into a URL to a data file.
# This is just a standard Python function.
#
# ```{caution}
# If you're not comfortable with writing Python functions, this may be a good time to review
# the [official Python tutorial](https://docs.python.org/3/tutorial/controlflow.html#defining-functions)
# on this topic.
# ```
#
# So we need to write a function that takes a date as its argument and returns the correct URL for the Coastwatch SLA file with that date.
# -
# 
# A very useful helper for this is the [strftime](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) function,
# which is a method on each item in the `dates` array.
# For example
dates[0].strftime('%Y')
# Armed with this, we can now write our function
def make_url(time):
yyyy = time.strftime('%Y')
yyyymmdd = time.strftime('%Y%m%d')
return (
'https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt'
f'/{yyyy}/rads_global_dt_sla_{yyyymmdd}_001.nc'
)
# Let's test it out:
make_url(dates[0])
# It looks good! 🤩
#
# Before we move on, there are a couple of important things to note about this function:
#
# - It must have the _same number of arguments as the number of Combine Dimensions_. In our case, this is just one.
# - The name of the argument must match the `name` of the Combine Dimension. In our case, this is `time`.
#
# These are ideas that will become increasingly relevant as you approach more complex datasets. For now, keep them in mind and we can move on to make our `FilePattern`.
# ## Define the `FilePattern`
# We now have the two ingredients we need for our {class}`FilePattern <pangeo_forge_recipes.patterns.FilePattern>`.
# 1. the Format Function
# 2. the **Combine Dimension** (`ConcatDim`, in this case)
#
# At this point, it's pretty quick:
# +
from pangeo_forge_recipes.patterns import FilePattern
pattern = FilePattern(make_url, time_concat_dim)
pattern
# -
# ```{note}
# You'll notice that we are using a function as an argument to another function here. If that pattern is new to you that's alright. It is a very powerful technique, so it is used semi-frequently in Pangeo Forge.
# ```
# The `FilePattern` object contains everything Pangeo Forge needs to know about where the data are coming from and how the individual files should be combined. This is huge progress toward making a recipe!
# To summarize our process, we made a `ConcatDim` object, our **combine dimension**, which specifies `"time"` as the axis of concatenation and lists the dates. The Format function converts the dates to URLs and the `FilePattern` object keeps track of the URLs and how they relate to each other.
# + [markdown] tags=[]
# ### Iterating through a `FilePattern`
#
# While not necessary for the recipe, if you want to interact with the `FilePattern` object a bit more (for example, for debugging) you can iterate through it using `.items()`.
# To keep the output concise, we use an if statement to stop the iteration after a few filepaths.
# -
for index, url in pattern.items():
print(index)
print(url)
    # Stop after the 3rd filepath (January 3rd, 2012)
if '20120103' in url:
break
# The `index` is an object used internally by Pangeo Forge. The url corresponds to the actual file we want to download.
# ## End of Part 1
# And there you have it - your first `FilePattern` object! That object describes 1) all of the URLs to the files that we are planning to convert as well as 2) how we want each of the files to be organized in the output object. Pretty compact!
#
# In part 2 of the tutorial, we will move on to creating a recipe object, and then use it to convert some data locally.
#
# ### Code Summary
# The code written in part 1 could all be written together as:
# +
import pandas as pd
from pangeo_forge_recipes.patterns import ConcatDim, FilePattern
from pangeo_forge_recipes.recipes import XarrayZarrRecipe
dates = pd.date_range('2012-01-01', '2019-11-01', freq='D')
def make_url(time):
yyyy = time.strftime('%Y')
yyyymmdd = time.strftime('%Y%m%d')
return (
'https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt'
f'/{yyyy}/rads_global_dt_sla_{yyyymmdd}_001.nc'
)
time_concat_dim = ConcatDim("time", dates, nitems_per_file=1)
pattern = FilePattern(make_url, time_concat_dim)
# -
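# As a dependency-free sketch of the index-to-URL mapping that a `FilePattern` maintains, here is the same idea written with only the standard library (so the real `pangeo_forge_recipes` classes are not required; the `index_to_url` dict is just an illustration, not the actual internal data structure):

```python
from datetime import date, timedelta

start = date(2012, 1, 1)
dates = [start + timedelta(days=i) for i in range(4)]

def make_url(day):
    # Same Format Function as above, written against datetime.date
    return (
        'https://coastwatch.noaa.gov/pub/socd/lsa/rads/sla/daily/dt'
        f'/{day:%Y}/rads_global_dt_sla_{day:%Y%m%d}_001.nc'
    )

# Conceptually, the FilePattern maps each position along the
# concat dimension ("time") to one file URL.
index_to_url = {i: make_url(d) for i, d in enumerate(dates)}
print(index_to_url[0])
```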
| docs/introduction_tutorial_ocean_sciences/intro_tutorial_part1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basics [Cython]
# ---
# - Author: <NAME>
# - GitHub: [github.com/diegoinacio](https://github.com/diegoinacio)
# - Notebook: [basics_Cython.ipynb](https://github.com/diegoinacio/machine-learning-notebooks/blob/master/Tips-and-Tricks/basics_Cython.ipynb)
# ---
# Basic functions and operations using [Cython](https://cython.org/) and *Python*.
# # 0. Installation
# ---
#
# [Installation](http://docs.cython.org/en/latest/src/quickstart/install.html) command for *anaconda* and *pip*:
#
# ```
# $ conda install -c anaconda cython
# ```
#
# or
#
# ```
# $ pip install Cython
# ```
import cython
# # 1. Compilation
# ---
# A *Cython* source file has the name of the module followed by the extension `.pyx`. For example, given the source file `examples_cy.pyx` with a simple function which returns a string.
#
# ```python
# def hello_cython():
# return 'Hello, Cython!'
# ```
#
# The next step consists of creating the `setup.py`, which will be responsible for the compilation process.
#
# ```python
# from setuptools import setup
# from Cython.Build import cythonize
#
# setup(
# name="Examples Cython",
# ext_modules=cythonize("examples_cy.pyx")
# )
# ```
#
# Given that, the compilation step is done by running the command:
#
# ```
# $ python setup.py build_ext --inplace
# ```
from examples_cy import hello_cython
print(hello_cython())
# # 2. Performance
# ---
# In the following example, we will approximate the value of $\large\pi$ using the identity $\tan^{-1} 1 = \frac{\pi}{4}$ and the power series of *arctan*, defined by:
#
# $$\large
# 4 \sum_{n=0}^{N}\frac{(-1)^n}{2n+1} \approx \pi
# $$
#
# where $N$ tends to infinity.
# +
def pi_py(N):
pi = 0
for n in range(N):
pi += (-1)**n/(2*n + 1)
return 4*pi
print(pi_py(1000000))
# -
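# Before optimizing, it is worth noting how slowly this Leibniz series converges: the error after $N$ terms is on the order of $1/N$, so each extra digit of accuracy costs roughly ten times more iterations. A quick check against `math.pi`:

```python
import math

def pi_py(N):
    pi = 0
    for n in range(N):
        pi += (-1)**n/(2*n + 1)
    return 4*pi

# Error after N terms is roughly 1/N: each extra digit of
# accuracy costs about 10x more iterations.
assert abs(pi_py(1_000) - math.pi) < 1e-2
assert abs(pi_py(100_000) - math.pi) < 1e-4
```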
# In the same *Cython* source file `examples_cy.pyx`, let's include the function and adapt it to be compiled.
#
# ```python
# # cpdef (rather than cdef) keeps the function importable from Python
# cpdef double pi_cy(int N):
# cdef double pi = 0
# cdef int n
# for n in range(N):
# pi += (-1)**n/(2*n + 1)
# return 4*pi
# ```
#
# *p.s.: compile it again running the command:*
#
# ```
# $ python setup.py build_ext --inplace
# ```
from examples_cy import pi_cy
# Time measurement over the situations
print('[Python] pi_py |', end=' ')
# %timeit -n 5 -r 5 pi_py(1000000)
print('[Cython] pi_cy |', end=' ')
# %timeit -n 5 -r 5 pi_cy(1000000)
# # 3. Cython and Jupyter Notebook
# ---
# To enable support for *Cython* compilation in *Jupyter Notebooks*, we have to run firstly the command:
# %load_ext Cython
# It will allow the *C functions* declaration inside cells, using the magic function `%%cython` for multiple lines.
#
# *p.s.: the function call must be within the same cell*
# + language="cython"
#
# cdef int factorial(int x):
# if x <= 1:
# return 1
# return x*factorial(x - 1)
#
# print(factorial(10))
| Tips-and-Tricks/basics_Cython.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AoC 2020 Day 1: Report Repair
#
# ## Part 1
# Task:
#
# Before you leave, the Elves in accounting just need you to fix your expense report (your puzzle input); apparently, something isn't quite adding up.
#
# Specifically, they need you to find the two entries that sum to 2020 and then multiply those two numbers together.
#
# Potential Solution:
#
# One way to solve this puzzle is to start with the first element and add each subsequent element to it to see if the pair sums to 2020, then move to the second element and do the same, and so on. If a pair sums to 2020 we can retrieve that pair and multiply its two numbers.
# Libraries
import numpy as np
# +
a = [1721, 979, 366, 299, 675, 1456]
for i in range(len(a)):
    # Start at i + 1 so an element is never paired with itself
    for w in range(i + 1, len(a)):
        c = a[i] + a[w]
        if c == 2020:
            d = a[i]*a[w]
            print(d)
        else:
            continue
# -
# Loading in data
aoc_day1_data = np.loadtxt("AoC 2020 Data.txt")
# +
for i in range(len(aoc_day1_data)):
    # Start at i + 1 so an element is never paired with itself
    for w in range(i + 1, len(aoc_day1_data)):
        c = aoc_day1_data[i] + aoc_day1_data[w]
        if c == 2020:
            d = aoc_day1_data[i]*aoc_day1_data[w]
            print(d)
        else:
            continue
# -
# ## Part 2
#
# Task:
# The Elves in accounting are thankful for your help; one of them even offers you a starfish coin they had left over from a past vacation. They offer you a second one if you can find three numbers in your expense report that meet the same criteria.
#
# That is we need three elements that produce 2020
#
# Potential Solution: Same as the last one but with one more loop added to the mix.
#
# +
# Practice Run
a = [1721, 979, 366, 299, 675, 1456]
# Developed reusable function to solve this puzzle
def sum_three(T):
    for i in range(len(T)):
        # Offset inner ranges so no element is reused within a triple
        for w in range(i + 1, len(T)):
            for f in range(w + 1, len(T)):
                c = T[i] + T[w] + T[f]
                if c == 2020:
                    d = T[i]*T[w]*T[f]
                    print(d)
                    return
                else:
                    continue
sum_three(a)
# -
sum_three(aoc_day1_data)
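# For larger inputs the nested loops grow quickly (O(n^2) for pairs, O(n^3) for triples). Part 1 can be solved in a single pass with a hash set; this is a sketch added for comparison, not part of the original solution:

```python
def two_sum_product(entries, target=2020):
    # One pass: remember every value seen so far and look up the
    # complement (target - x) in O(1) with a set.
    seen = set()
    for x in entries:
        if target - x in seen:
            return x * (target - x)
        seen.add(x)
    return None

# On the practice data the matching pair is 1721 + 299 = 2020.
print(two_sum_product([1721, 979, 366, 299, 675, 1456]))   # 514579
```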
| AoC 2020 Day 1/Advent of Code 2020 Day 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graph DB experiments
# +
# #!pip install git+https://github.com/cjdrake/ipython-magic
# -
import grapheekdb
# grapheekdb.__version__
from grapheekdb.backends.data.sqlite import SqliteGraph
g=SqliteGraph('test.db')
book1 = g.add_node(kind='book', name='python tutorial', author='<NAME>', thema='programming')
book2 = g.add_node(kind='book', name='cooking for dummies', author='<NAME>', thema='cooking')
book3 = g.add_node(kind='book', name='grapheekdb', author='<NAME>', thema='programming')
book4 = g.add_node(kind='book', name='python secrets', author='<NAME>', thema='programming')
book5 = g.add_node(kind='book', name='cooking a python', author='<NAME>', thema='cooking')
book6 = g.add_node(kind='book', name='rst the hard way', author='<NAME>', thema='documentation')
person1 = g.add_node(kind='person', name='sam xxxx')
person2 = g.add_node(kind='person', name='sam xxxx')
person3 = g.add_node(kind='person', name='sam xxxx')
person4 = g.add_node(kind='person', name='sam xxxx')
for node in g.V():
print(node.kind, node.name)
g.add_edge(person1, book1, action='bought')
g.add_edge(person1, book3, action='bought')
g.add_edge(person1, book4, action='bought')
g.add_edge(person2, book2, action='bought')
g.add_edge(person2, book5, action='bought')
g.add_edge(person3, book1, action='bought')
g.add_edge(person3, book3, action='bought')
g.add_edge(person3, book5, action='bought')
g.add_edge(person3, book6, action='bought')
# %dotobj g.V().dot('name', 'action')
g.V().data()
g.V(kind='person').data()
| notebooks/GraphDB.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LegacyHalos SersicFitting
#
# This notebook demonstrates how we fit the 1D surface brightness profiles using various parametric (e.g., Sersic) models, using one galaxy as a toy example.
# ### Imports, paths, and other preliminaries.
import os
import numpy as np
import matplotlib.pyplot as plt
from legacyhalos import io
from legacyhalos.util import ellipse_sbprofile
from legacyhalos.qa import display_ellipse_sbprofile
plt.style.use('seaborn-talk')
# %matplotlib inline
pixscale = 0.262
band = ('g', 'r', 'z')
refband = 'r'
# ### Read the sample
sample = io.read_sample(first=0, last=0)
objid, objdir = io.get_objid(sample)
ellipsefit = io.read_ellipsefit(objid, objdir)
redshift = sample.z
# ### Read the measured surface brightness profile
from speclite import filters
filt = filters.load_filters('decam2014-g', 'decam2014-r', 'decam2014-z')
filt.effective_wavelengths.value
sbprofile = ellipse_sbprofile(ellipsefit, band=band, refband=refband,
redshift=redshift, pixscale=pixscale)
print(sbprofile.keys())
display_ellipse_sbprofile(ellipsefit, band=band, refband=refband,
redshift=redshift, pixscale=pixscale,
sersicfit=None)
# ### Fit a Sersic model
def sb2flux(sb):
"""Convert surface brightness to linear flux."""
return np.array([10**(-0.4 * _sb) for _sb in np.atleast_1d(sb)])
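# As a quick sanity check on the magnitude-to-flux conversion above, the inverse transform is $-2.5\log_{10}$ of the flux. A scalar sketch (the `flux2sb_scalar` helper is hypothetical, added only for illustration, and is not part of the original module):

```python
import math

def sb2flux_scalar(sb):
    # mag/arcsec^2 -> linear flux, same relation as sb2flux above
    return 10 ** (-0.4 * sb)

def flux2sb_scalar(flux):
    # linear flux -> mag/arcsec^2 (hypothetical inverse helper)
    return -2.5 * math.log10(flux)

# The round trip recovers the input magnitude.
for mu in (18.0, 22.5, 26.0):
    assert math.isclose(flux2sb_scalar(sb2flux_scalar(mu)), mu)
```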
def fit_sersic_oneband(sbprofile, init_reff=10.0, init_n=2.0):
"""Fit a simple Sersic model to the galaxy surface brightness
profile in each bandpass independently.
"""
from scipy.optimize import least_squares
from astropy.modeling import models, fitting
fixed = {'n': True}
bounds = {}
fit = fitting.LevMarLSQFitter()
radius = sbprofile['sma'] # see sbprofile['smaunit'] but typically kpc
sersicfit = dict()
for filt in band:
mu = sb2flux(sbprofile['mu_{}'.format(filt)])
mu_err = sb2flux(sbprofile['mu_{}_err'.format(filt)])
        init = models.Sersic1D(amplitude=mu.max(),  # mu is already in flux units
                               r_eff=init_reff, n=init_n,
                               fixed=fixed, bounds=bounds)
sersicfit[filt] = fit(init, radius, mu, weights=1/mu_err)
print(sersicfit[filt])
return sersicfit
from astropy.modeling.models import Sersic1D
help(Sersic1D)
from astropy.modeling.core import FittableModel
help(FittableModel)
def sersic_allbands_model(sbwave, radius, params):
    """Evaluate a model in which the Sersic index and
    half-light radius vary as a power-law function of wavelength.
    """
    from astropy.modeling.models import Sersic1D
    refwave, n_ref, re_ref, alpha, beta, amplitude = params
    sbmodel = np.zeros_like(radius)
    for uwave in set(sbwave):
        these = np.where(sbwave == uwave)[0]
        # n and r_e scale as power laws of wavelength relative to refwave.
        n = n_ref * (uwave / refwave)**alpha
        re = re_ref * (uwave / refwave)**beta
        sbmodel[these] = Sersic1D(amplitude=amplitude, r_eff=re, n=n)(radius[these])
    return sbmodel
# #### Merge the multiband surface brightness profiles
sbwave, sbdata = [], []
for filt in band:
mu = sb2flux(sbprofile['mu_{}'.format(filt)])
sbdata.append(mu)
sbwave.append(np.repeat(sbprofile['{}_wave_eff'.format(filt)], len(mu)))
stop
def fit_sersic(sbprofile, init_reff=10.0, init_n=2.0):
"""Fit a single Sersic model to all the bands simultaneously by allowing
the half-light radius and Sersic n parameter to vary as a power-law
function of wavelength, while allowing the surface brightness at r_e
in each band to be free.
"""
from scipy.optimize import least_squares
from astropy.modeling import models, fitting
    fixed = {
        'refwave': True,
        'n_ref': False,
        're_ref': False,
        'alpha': True,  # n = n_ref * (wave/refwave)**alpha
        'beta': True    # r_e = re_ref * (wave/refwave)**beta
    }
    bounds = {
        'refwave': (5500, 5500),
        'n_ref': (0.1, 8),
        're_ref': (0.1, 100),
        'alpha': (-1, 1),
        'beta': (-1, 1)
    }
    for filt in band:
        # surface brightness at re_ref is free in each band
        fixed['sbe_{}'.format(filt)] = False
        bounds['sbe_{}'.format(filt)] = (10, 35)
fit = fitting.LevMarLSQFitter()
radius = sbprofile['sma'] # see sbprofile['smaunit'] but typically kpc
sersicfit = dict()
for filt in band:
mu = sb2flux(sbprofile['mu_{}'.format(filt)])
mu_err = sb2flux(sbprofile['mu_{}_err'.format(filt)])
init = models.Sersic1D(amplitude=sb2flux(mu.min()),
r_eff=init_reff, n=init_n,
fixed=fixed, bounds=bounds)
sersicfit[filt] = fit(init, radius, mu, weights=1/mu_err)
print(sersicfit[filt])
return sersicfit
def lnprobfn(theta, residuals=False):
    """For now, just compute a vector of chi values, for use
    with non-linear least-squares algorithms.
    """
    from astropy.modeling import models
    if residuals:
        init = models.Sersic1D(amplitude=sb2flux(mu.min()),
                               r_eff=init_reff, n=init_n,
                               fixed=fixed, bounds=bounds)
def chivecfn(theta):
    """Return the residuals instead of the posterior probability or negative
    chisq, for use with least-squares optimization methods.
    """
    return lnprobfn(theta, residuals=True)
def minimizer_ball(guess, nmin=5, seed=None):
    """Draw initial values from the (1d, separable, independent) priors for
    each parameter. Requires that priors have the `sample` method available.
    If priors are old-style, draw randomly between min and max.
    """
    rand = np.random.RandomState(seed)
    npars = len(guess)
    ballguess = np.zeros((nmin, npars))
    for ii in range(npars):
        bounds = guess[ii]['bounds']
        ballguess[:, ii] = rand.uniform(bounds[0], bounds[1], nmin)
    return ballguess
def initialize_guess():
    """Initialize the parameters with starting values."""
    I0 = dict(name='I0', init=sb2flux(18), units='maggies',
              bounds=sb2flux((14, 26)), fixed=False)
    reff = dict(name='reff', init=10.0, units='kpc',
                bounds=(5.0, 50.0), fixed=False)
    n = dict(name='n', init=2.0, units='', bounds=(1, 6), fixed=False)
    return list((I0, reff, n))
guess = initialize_guess()
print(guess)
sersicfit = fit_sersic(sbprofile)
display_ellipse_sbprofile(ellipsefit, band=band, refband=refband,
redshift=redshift, pixscale=pixscale,
sersicfit=sersicfit)
# #### Build a "ball" of initial guesses.
ballguess = minimizer_ball(guess, nmin=10)
print(ballguess)
guesses = []
for i, pinit in enumerate(ballguess):  # starting points drawn by minimizer_ball above
    res = least_squares(chivecfn, pinit, method='lm', x_scale='jac',
                        xtol=1e-18, ftol=1e-18)
    guesses.append(res)
chisq = [np.sum(r.fun**2) for r in guesses]
best = np.argmin(chisq)
initial_center = fitting.reinitialize(guesses[best].x, model,
edge_trunc=rp.get('edge_trunc', 0.1))
initial_prob = None
pdur = time.time() - ts
if rp['verbose']:
    print('done L-M in {0}s'.format(pdur))
    print('best L-M guess:{0}'.format(initial_center))
sersicfit['r'].fit_info
display_ellipse_sbprofile(ellipsefit, band=band, refband=refband,
redshift=redshift, pixscale=pixscale,
sersicfit=None)
# ### Playing around below here
stop
stop
# +
from matplotlib.ticker import FormatStrFormatter, ScalarFormatter
smascale = 1
filt = 'r'
good = (ellipsefit[filt].stop_code < 4)
bad = ~good
fig, ax1 = plt.subplots()
ax1.fill_between(ellipsefit[filt].sma[good] * smascale,
ellipsefit[filt].eps[good]-ellipsefit[filt].ellip_err[good],
ellipsefit[filt].eps[good]+ellipsefit[filt].ellip_err[good],
edgecolor='k', lw=2)
#ax1.errorbar(ellipsefit[filt].sma[good] * smascale,
# ellipsefit[filt].eps[good],
# ellipsefit[filt].ellip_err[good], marker='s', linestyle='none',
# capsize=10, capthick=2,
# markersize=10)#, color=color[filt])
ax1.scatter(ellipsefit[filt].sma[bad] * smascale,
ellipsefit[filt].eps[bad], marker='s', s=40, edgecolor='k', lw=2, alpha=0.75)
ax1.set_xscale('log')
ax1.xaxis.set_major_formatter(ScalarFormatter())
| doc/nb/legacyhalos-sersic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Algorithmic Complexity
#
# Notes by <NAME>
# + slideshow={"slide_type": "skip"}
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] slideshow={"slide_type": "slide"}
# ## How long will my code take to run?
# + [markdown] slideshow={"slide_type": "fragment"}
# Today, we will be concerned *solely* with **time complexity**.
#
# Formally, we want to know $T(d)$, where $d$ is any given dataset and $T(d)$ gives the *total run time*.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Let's begin with a simple problem.
#
# How many instructions does the following bit of code take? Go ahead and assume that you can ignore the machinery in the `if` and `for` statements.
# + slideshow={"slide_type": "fragment"}
def mini(x):
    n = len(x)
    mini = x[0]
    for i in range(n):
        if x[i] < mini:
            mini = x[i]
    return mini
# + [markdown] slideshow={"slide_type": "fragment"}
# **Go ahead and work it out with your neighbors.**
# + slideshow={"slide_type": "slide"}
x = np.random.rand(1000)
# + slideshow={"slide_type": "slide"}
print(mini(x))
print(x.min())
# + [markdown] slideshow={"slide_type": "slide"}
# ## The answer
#
# Each time the function is called, we have
#
# ```python
# n = len(x)
# mini = x[0]
# ```
#
# that's two instructions (again, ignoring how much goes into the `len` call).
#
# Then, the for loop body requires either one or two instructions. You always have the comparison `x[i] < mini`, but you may or may not have the assignment `mini = x[i]`.
#
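One way to make this counting concrete is to instrument the function. This is a sketch of my own, not code from the slides: it charges the two setup lines plus, per iteration, one comparison and one extra instruction whenever a new minimum is assigned. The exact totals depend on which operations you decide to charge (counting the loop machinery, for instance, raises every total).

```python
def mini_count(x):
    """Instrumented version of mini(): returns (minimum, instruction count),
    charging 2 setup instructions plus, per iteration, 1 comparison and
    1 extra instruction whenever a new minimum is assigned."""
    count = 2                # n = len(x); mini = x[0]
    m = x[0]
    for i in range(len(x)):
        count += 1           # the comparison x[i] < mini
        if x[i] < m:
            m = x[i]
            count += 1       # the assignment mini = x[i]
    return m, count

print(mini_count([4, 3, 2, 1]))  # → (1, 9): every comparison after the first finds a new minimum
print(mini_count([1, 3, 2, 4]))  # → (1, 6): only the comparisons are charged
```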
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise
# Compute the number of instructions for this input data
#
# ```python
# x = [4, 3, 2, 1]
# ```
# and
#
# ```python
# y = [1, 3, 2, 4]
# ```
# + [markdown] slideshow={"slide_type": "fragment"}
# $N_{inst}(x) = 9$
#
# $N_{inst}(y) = 7$
# + [markdown] slideshow={"slide_type": "slide"}
# ## As usual, pessimism is the most useful view
# The answer to "how long does this take" is...it depends on the input.
#
# Since we would like to know how long an algorithm will take *before* we run it, let's examine the **worst case scenario**.
#
# This allows us to go from $T(d)$ to $f(n)$, where $n \equiv \mathrm{size}(d)$ is the *size* of the dataset.
# + [markdown] slideshow={"slide_type": "slide"}
# For our simple problem,
#
# $$f(n) = 2 + 4n$$
# + [markdown] slideshow={"slide_type": "slide"}
# ## Asymptotics
#
# Let's look at a pair of cubic polynomials,
#
# $$ f(n) = f_0 + f_1 n + f_2 n^2 + f_3 n^3 $$
# $$ g(n) = g_0 + g_1 n + g_2 n^2 + g_3 n^3 $$
# + slideshow={"slide_type": "subslide"}
n = np.linspace(0,1000,10000)
f0 = 2; f1 = 1; f2 = 10; f3 = 2
g0 = 0; g1 = 10; g2 = 1; g3 = 1
f = f0 + f1*n + f2*n**2 + f3*n**3
g = g0 + g1*n + g2*n**2 + g3*n**3
# + slideshow={"slide_type": "slide"}
plt.figure()
plt.plot(n, f, label='f')
plt.plot(n, g, label='g')
plt.xlim(0,2)
plt.ylim(0,20)
plt.legend()
# + [markdown] slideshow={"slide_type": "subslide"}
# Clearly, we can drop the lower order terms *and* the coefficients $f_3$ and $g_3$.
#
# **We call this**
#
# $$O(n^3),$$
#
# and we say our algorithm is "$n^3$", meaning **no worse than $n^3$**.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# Of course this is exactly the same notation and meaning as when we do a series expansion in any other calculation,
#
# $$ e^{x} = 1 + x + \frac{x^2}{2} + O(x^3), \quad x\to 0. $$
# + [markdown] slideshow={"slide_type": "slide"}
# ## An example
#
# Let's take the force calculation for an N-body simulation. We recall we can write this as
#
# $$\ddot{\mathbf{r}}_i = -G\sum_{i \ne j} \frac{m_j \mathbf{r}_{ij}}{r_{ij}^3},$$
#
# for each particle $i$. This is fairly easy to analyze.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Calculate the complexity with your neighbors**
# + [markdown] slideshow={"slide_type": "slide"}
# ## Some Code
#
# This is a very simple implementation of a force calculator that only calculates the $x$ component (for unit masses!).
# + slideshow={"slide_type": "fragment"}
def f_x(particles):
    G = 1
    a_x = np.zeros(len(particles))  # must start at zero: we accumulate with -=
    for i, p in enumerate(particles):
        for j, q in enumerate(particles):  # distinct name so q does not shadow p
            if j != i:
                a_x[i] -= G*q.x/(q.x**2 + q.y**2 + q.z**2)**1.5
    return a_x
# + slideshow={"slide_type": "skip"}
class Particle:
    def __init__(self, r, v):
        self.r = r
        self.v = v

    @property
    def x(self):
        return self.r[0]

    @property
    def y(self):
        return self.r[1]

    @property
    def z(self):
        return self.r[2]
# + [markdown] slideshow={"slide_type": "slide"}
# ## Test it!
#
# Theory is all well and good, but let's do a numerical experiment.
# + slideshow={"slide_type": "fragment"}
nn = np.array([10, 100, 300, 1000])
nnn = np.linspace(nn[0],nn.max(),10000)
p1 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[0])]
p2 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[1])]
p3 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[2])]
p4 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[3])]
# + slideshow={"slide_type": "fragment"}
# t1 = %timeit -o f_x(p1)
# t2 = %timeit -o f_x(p2)
# t3 = %timeit -o f_x(p3)
# t4 = %timeit -o f_x(p4)
times = np.array([t1.average, t2.average, t3.average, t4.average])
# + [markdown] slideshow={"slide_type": "skip"}
# ## Plot the results...
# + slideshow={"slide_type": "slide"}
plt.figure()
plt.loglog(nn,times,'x', label='data')
plt.loglog(nnn,times[0]*(nnn/nnn[0])**2, label=r'$O(n^2)$')
plt.legend();plt.xlabel('data size');plt.ylabel('run time (s)')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Several Common Asymptotics
# + slideshow={"slide_type": "fragment"}
plt.figure()
plt.loglog(nn,times,'x', label='data')
plt.loglog(nnn,t1.average*(nnn/nnn[0])**3, label=r'$O(n^3)$')
plt.loglog(nnn,times[0]*(nnn/nnn[0])**2, label=r'$O(n^2)$')
plt.loglog(nnn,times[0]*(nnn/nnn[0]), label=r'$O(n)$')
plt.loglog(nnn,t1.average*(nnn/nnn[0])*np.log(nnn/nnn[0]), label=r'$O(n \log(n))$')
plt.legend()
plt.xlabel('data size')
plt.ylabel('run time (s)')
# -
# # But...
# + [markdown] slideshow={"slide_type": "slide"}
# ## Consider the *problem*
#
# Let's talk about solving PDES:
#
# $$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} = -\frac{\nabla p}{\rho} + \nu \nabla^2 \mathbf{u} $$
#
# Let's focus on *two* ways of calculating gradients.
#
# **Finite Difference**
# $$\frac{\partial u}{\partial x} \simeq \frac{u_{i+1} - u_{i-1}}{2\Delta x}$$
#
# **Spectral**
# $$\frac{\partial u}{\partial x} \simeq \sum_{j = 0}^{N} i k_j f_j \exp{(i k_j x)}$$
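As an illustrative numerical check (a sketch of my own, not code from the slides), the snippet below differentiates sin(x) on a periodic grid both ways. The FFT pair costs O(n log n) per gradient while the stencil is O(n), but the spectral result converges far faster with resolution.

```python
import numpy as np

n = 256
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
u = np.sin(x)

# centered finite difference, (u[i+1] - u[i-1]) / (2*dx), with periodic wrap
dudx_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

# spectral derivative: multiply each Fourier mode by i*k, then invert
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
dudx_sp = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

fd_err = np.max(np.abs(dudx_fd - np.cos(x)))   # O(dx^2) truncation error
sp_err = np.max(np.abs(dudx_sp - np.cos(x)))   # near machine precision
print(fd_err, sp_err)
```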
# + [markdown] slideshow={"slide_type": "slide"}
# ## Scaling
# + slideshow={"slide_type": "fragment"}
plt.figure()
plt.loglog(nnn,times[0]*(nnn/nnn[0]), label=r'$O(n)$ finite difference')
plt.loglog(nnn,t1.average*(nnn/nnn[0])*np.log(nnn/nnn[0]), label=r'$O(n \log(n))$ spectral')
plt.legend()
plt.xlabel('data size')
plt.ylabel('run time (s)')
# + [markdown] slideshow={"slide_type": "slide"}
# <img src="Screenshot 2019-03-22 at 20.34.57.png">
| Sessions/Session08/Day4/Algorithmic Complexity Notes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Hikyuu Interactive Tool Example
# ==============
# 1. Importing the interactive tool
# -----------------
# Import from hikyuu.interactive.interactive rather than directly from the hikyuu library (hikyuu is a library that can be used to build other tools, while hikyuu.interactive.interactive is an interactive exploration tool built on top of the hikyuu library)
# %matplotlib inline
# %time from hikyuu.interactive.interactive import *
#use_draw_engine('echarts') #use_draw_engine('matplotlib') # 'matplotlib' is the default drawing engine
# 2. Create and run a trading system
# --------------------
# +
# create a simulated trading account for backtesting, with 300,000 initial cash
my_tm = crtTM(initCash = 300000)
# create the signal indicator (fast line: 5-day EMA; slow line: 10-day EMA of the fast line; buy when the fast line crosses above the slow line, sell on the opposite cross)
my_sg = SG_Flex(EMA(n=5), slow_n=10)
# buy a fixed 1000 shares each time
my_mm = MM_FixedCount(1000)
# create the trading system and run it
sys = SYS_Simple(tm = my_tm, sg = my_sg, mm = my_mm)
sys.run(sm['sz000001'], Query(-150))
# -
# 3. Plot the curves for inspection
# ---------------
# +
# plot the system signals
sys.plot()
k = sm['sz000001'].getKData(Query(-150))
c = CLOSE(k)
fast = EMA(c, 5)
slow = EMA(fast, 10)
# plot the two indicators used by the signal indicator
fast.plot(new=False)
slow.plot(new=False)
# -
# 4. Plot the profit curve
# ---------------------
# plot the funds profit curve
x = my_tm.getProfitCurve(k.getDatetimeList(), KQuery.DAY)
x = PRICELIST(x)
x.plot()
# 5. Backtest statistics report
# ----------------------
# +
# backtest statistics
from datetime import datetime
per = Performance()
print(per.report(my_tm, Datetime(datetime.today())))
# -
# 6. On performance
# ---------------
#
# People often ask about performance. The code below reuses the earlier system example: it iterates over all stocks in a given block (sector) and computes their "profitable trade ratio %" (i.e. the win rate).
# +
def test_func(stock, query):
    """Compute the win rate of the system strategy for the given stock; the strategy is the earlier simple dual-EMA crossover system (buying a fixed 100 shares each time)
    """
    # create a simulated trading account for backtesting, with 1,000,000 initial cash
    my_tm = crtTM(initCash = 1000000)
    # create the signal indicator (fast line: 5-day EMA; slow line: 10-day EMA of the fast line; buy when the fast line crosses above the slow line, sell on the opposite cross)
    my_sg = SG_Flex(EMA(n=5), slow_n=10)
    # buy a fixed 100 shares each time
    my_mm = MM_FixedCount(100)
    # create the trading system and run it
    sys = SYS_Simple(tm = my_tm, sg = my_sg, mm = my_mm)
    sys.run(stock, query)
    per = Performance()
    per.statistics(my_tm, Datetime(datetime.today()))
    return per.get("赢利交易比例%".encode('gb2312'))

def total_func(blk, query):
    """Iterate over all stocks in the given block and compute the system win rate"""
    result = {}
    for s in blk:
        if s.valid and s.type != constant.STOCKTYPE_INDEX:
            result[s.name] = test_func(s, query)
    return result
# -
# This iterates over all currently valid securities that are not indices. Below is the result on my machine: computing 4151 securities over the most recent 500 trading days took 2.89 seconds in total. Machine: Intel i7-4700HQ 2.G.
# %time a = total_func(sm, Query(-500))
len(a)
| hikyuu/examples/notebook/001-overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # N-tuples (Tuples)
#
# * objects of the `tuple` type
# * an **IMMUTABLE** sequence of objects
# * ways to create tuples:
# 1. with the type's own constructor → `tuple(iterable)`
# 1. as a literal value → `(obj1 , obj2 , obj3, ... )`
# 1. via methods that return tuples → `enumerate("aeiou")`
# * the `len()` function → number of elements
# + [markdown] slideshow={"slide_type": "slide"}
# ### The empty tuple
#
# + slideshow={"slide_type": "-"}
# an empty tuple
t = tuple()
print(t,len(t))
# + slideshow={"slide_type": "-"}
# an empty tuple literal
t = ()
print(t,len(t))
# + [markdown] slideshow={"slide_type": "slide"}
# ### A tuple with a single element
# Parentheses have other uses besides tuples:
# * (obj) → a single object (in parentheses)
# * (obj,) → a **tuple** containing a single object
# + slideshow={"slide_type": "-"}
a = ("epa")
print(type(a),"-->",a)
# + slideshow={"slide_type": "-"}
t = ("epa",)
print(type(t),"-->",t)
# + [markdown] slideshow={"slide_type": "slide"}
# ### `tuple(iterable)`
# Traverses the iterable and returns its contents as a tuple
# + slideshow={"slide_type": "fragment"}
t = tuple(range(10))
print(t)
# + slideshow={"slide_type": "fragment"}
t = tuple("aeiou")
print(t)
# + slideshow={"slide_type": "fragment"}
f = open("MyText.txt")
t = tuple(f)
print(t)
f.close()
# + [markdown] slideshow={"slide_type": "slide"}
# ### `(obj1 , obj2 , obj3, ... )`
# Returns a tuple made up of `obj1`, `obj2`, `obj3`, ...
# + slideshow={"slide_type": "fragment"}
(0,1,2,3,4,5,6,7,8,9) == tuple(range(10))
# + slideshow={"slide_type": "fragment"}
("a","e","i","o","u") == tuple("aeiou")
# + slideshow={"slide_type": "fragment"}
# Tuple elements can be of any type
( "kaixo" , 1 , 2.0 , True , None , [1,2] , (3,4,[5,6]) )
# + [markdown] slideshow={"slide_type": "slide"}
# ## Properties of tuples
#
# * Indexable → `t[5]`
# * Negative indices, *slice* notation...
# * Iterable → `for i in t :`
# * **Immutable**
# + slideshow={"slide_type": "slide"}
t = tuple(range(10))
print(t)
print(t[-1])
print(t[0::2])
# + [markdown] slideshow={"slide_type": "slide"}
# ## Tuple operators
# * `a==b`, `a!=b`, `a>b`, `a>=b`, `a<b`, `a<=b` → `bool` : element-by-element comparison
# * `a + b` → `tuple` : concatenation
# * `a * 4` , `4 * a` → `tuple` : self-repetition
# * `a in b`, `a not in b` → `bool` : whether the element `a` is inside `b`
# * `a is b` → `bool` : whether `a` is the same object as `b`
# + slideshow={"slide_type": "slide"}
# element-by-element comparison
a = ( 1 , 2 , 3 , 4 )
b = ( 1 , 2 , 0 , 4 , 5)
print(a == b)
print(a > b)
# + slideshow={"slide_type": "fragment"}
# concatenation
t = tuple(range(4)) + tuple("aeiou")
print(t)
# + slideshow={"slide_type": "fragment"}
# self-repetition
print(3 * tuple("aeiou"))
# + slideshow={"slide_type": "fragment"}
# the in operator
print(3 in (1,2,3))
print((1,2) not in (1,2,3))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Tuple methods
#
# * To name these methods: `the_tuple.method_name`
# * They have only two methods.
# + [markdown] slideshow={"slide_type": "slide"}
# ### tuple.index
# ```
# index(...) method of builtins.tuple instance
# T.index(value, [start, [stop]]) -> integer -- return first index of value.
# Raises ValueError if the value is not present.
# ```
# + slideshow={"slide_type": "fragment"}
t = tuple("aeiou")
print(t.index("e"))
# ValueError: 'ei' is not in tuple
#print(t.index("ei"))
# + [markdown] slideshow={"slide_type": "slide"}
# ### tuple.count
# ```
# count(...) method of builtins.tuple instance
# T.count(value) -> integer -- return number of occurrences of value
# ```
# + slideshow={"slide_type": "fragment"}
t = tuple(range(10)) * 123
print(t.count(6))
print(t.count(10))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Multiple assignment
# * The left-hand side of an assignment may contain more than one variable.
# * Each variable is assigned the corresponding value from the sequence on the right
# + slideshow={"slide_type": "-"}
a,b,c,d,e,f,g = range(7)
print(a,b,c,d,e,f,g)
# + slideshow={"slide_type": "-"}
a,b,c = (1,2,4)
print(a,b,c)
# + [markdown] slideshow={"slide_type": "slide"}
# Sometimes parentheses are not needed to express a tuple
# * Assignments
# + slideshow={"slide_type": "-"}
a,b,c = 1,2,4
print(a,b,c)
# + [markdown] slideshow={"slide_type": "slide"}
#
# * `return` statements → functions that return multiple values
# + slideshow={"slide_type": "-"}
def zatidura_hondarra(a,b):
    z = 0
    while a >= b :
        a = a - b
        z = z + 1
    return z,a

z,h = zatidura_hondarra(19,4)
print(z,h)
# + [markdown] slideshow={"slide_type": "slide"}
# Ideal for swapping values:
# + slideshow={"slide_type": "-"}
a,b = 1,2
print(a,b)
x = a
a = b
b = x
print(a,b)
a,b = 1,2
print(a,b)
a,b = b,a
print(a,b)
a,b,c,d,e = 1,2,3,4,5
print(a,b,c,d,e)
a,b,c,d,e = b,c,d,e,a
print(a,b,c,d,e)
# + [markdown] slideshow={"slide_type": "slide"}
# Structured (nested) multiple assignments are possible.
# + slideshow={"slide_type": "-"}
a = 1,[2,(3,4)],5
print(a)
a,b,c = 1,[2,(3,4)],5
print(a,b,c)
a,(b,c),d = 1,[2,(3,4)],5
print(a,b,c,d)
a,(b,(c,d)),e = 1,[2,(3,4)],5
print(a,b,c,d,e)
a,(b,c,d),e = 1,range(3),3
print(a,b,c,d,e)
# + slideshow={"slide_type": "slide"}
list(enumerate("aeiou"))
# + slideshow={"slide_type": "fragment"}
for x in enumerate("aeiou"):
    print(x)
print(list(enumerate("aeiou")))
print(tuple(enumerate("aeiou")))
for x in enumerate("aeiou"):
    print(x[0],"-->",x[1])
for i,b in enumerate("aeiou"):
    print(i,"-->",b)
print("-"*30)
for a,(b,c) in enumerate([("bat","bi"),("hiru","lau")]):
    print(a,"-->(",b,",",c,")")
# + slideshow={"slide_type": "slide"}
z = [23,235,235,235,23,52,35,234,6,34,23,4123,5,346,134,52,354]
print(list(enumerate(z)))
z2 = []
for i,b in enumerate(z) :
    z2.append((b,i))
print(z2)
z2.sort()
print(z2)
print("-"*30)
b,i = z2[0]
print("minimoa:",b,"posizioa:",i)
print("minimoa:",z2[0][0],"posizioa:",z2[0][1])
b,i = z2[-1]
print("maximoa:",b,"posizioa:",i)
b,i = z2[len(z)//2]
print("erdikoa:",b,"posizioa:",i)
# + [markdown] slideshow={"slide_type": "-"}
# <table border="0" width="100%" style="margin: 0px;">
# <tr>
# <td style="text-align:left"><a href="ZerrendenZerrendak.ipynb">< < Matrices: lists of lists < <</a></td>
# <td style="text-align:right"><a href="Hiztegiak.ipynb">> > Dictionaries > ></a></td>
# </tr>
# </table>
| Gardenkiak/Programazioa/NKoteak.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/timgluz/colab_notebooks/blob/master/SpacyCourse_chapter2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rHYL8XeK6qFA" colab_type="text"
# # Data structures: Vocab, Lexemes, and StringStore
#
# source: https://course.spacy.io/en/chapter2
#
# In this lesson, we'll take a look at the shared vocabulary and how spaCy deals with strings.
#
# spaCy stores all shared data in a vocabulary, the **Vocab**.
#
# This includes words, but also the labels schemes for tags and entities.
#
# Strings are only stored once in the `StringStore` via `nlp.vocab.strings`
# + id="J9Y065J4A1V4" colab_type="code" colab={}
# install spacy
# %pip install spacy
# + id="Mty76M3y5ia0" colab_type="code" colab={}
# download medium size model package
# %%python -m spacy download en_core_web_md
# + id="3cxI_6kWA_44" colab_type="code" colab={}
# import the english language class
from spacy.lang.en import English
# create the nlp object
# contains the processing pipeline,
# includes language-specific rules for tokenization
nlp = English()
# + id="pwWZCE226T1k" colab_type="code" colab={}
# Look up the string and hash in nlp.vocab.string
doc = nlp("I love coffee")
print("hash value:", nlp.vocab.strings["coffee"])
print("string value:", nlp.vocab.strings[3197928453018144401])
# + id="U3_YqId0BSOU" colab_type="code" colab={}
# A Doc object also exposes its vocab and strings.
doc = nlp("I love coffee")
print("hash value:", doc.vocab.strings["coffee"])
# + [markdown] id="-Wrc3YpiBlXU" colab_type="text"
# ### Lexemes
#
# Lexemes are context-independent entries in the vocabulary.
#
# You can get a lexeme by looking up a string or a hash ID in the vocab.
#
# Lexemes expose attributes, just like tokens.
#
# They hold context-independent information about a word, like the text, or whether the word consists of alphabetic characters.
#
# Lexemes don't have part-of-speech tags, dependencies or entity labels. Those depend on the context.
#
# + id="TpStyoJ9Bv3M" colab_type="code" colab={}
doc = nlp("I love coffee")
lexeme = nlp.vocab["coffee"]
# Print the lexical attributes
print(lexeme.text, lexeme.orth, lexeme.is_alpha)
# + [markdown] id="oyprcboIHePo" colab_type="text"
# ### The Doc object
#
# The `Doc` is one of the central data structures in spaCy. It's created automatically when you process a text with the `nlp` object. But you can also instantiate the class manually.
#
# After creating the `nlp` object, we can import the `Doc` class from `spacy.tokens`.
#
#
# The spaces are a list of boolean values indicating whether the word is followed by a space. Every token includes that information – even the last one!
#
# The Doc class takes three arguments: the shared vocab, the words and the spaces.
#
# + id="wvgfbX7SIAFQ" colab_type="code" colab={}
# Create an nlp object
from spacy.lang.en import English
nlp = English()
# Import the Doc class
from spacy.tokens import Doc
# The words and spaces to create the doc from
words = ["Hello", "world", "!"]
spaces = [True, False, False]
# Create a doc manually
doc = Doc(nlp.vocab, words=words, spaces=spaces)
print(doc[0])
# + [markdown] id="ls1QYvj0LhWc" colab_type="text"
# ### The Span object
#
# A Span is a slice of a doc consisting of one or more tokens. The Span takes at least three arguments: the doc it refers to, and the start and end index of the span. Remember that the end index is exclusive!
# + id="-Sx-lDNeLpW8" colab_type="code" colab={}
# Import the Doc and Span classes
from spacy.tokens import Doc, Span
# The words and spaces to create the doc from
words = ["Hello", "world", "!"]
spaces = [True, False, False]
# Create a doc manually
doc = Doc(nlp.vocab, words=words, spaces=spaces)
# Create a span manually
span = Span(doc, 0, 2)
# Create a span with a label
span_with_label = Span(doc, 0, 2, label="GREETING")
# Add span to the doc.ents
doc.ents = [span_with_label]
print(span_with_label)
# + [markdown] id="ZdehX4EANrJM" colab_type="text"
# # Word vectors and semantic similarity
#
# In this lesson, you'll learn how to use spaCy to predict how similar documents, spans or tokens are to each other.
#
# The `Doc`, `Token` and `Span` objects have a `.similarity` method that takes another object and returns a floating point number between 0 and 1, indicating how similar they are.
#
#
# **NB!** : In order to use similarity, you need a larger spaCy model that has word vectors included.
#
#
# + id="cuViTnC44vr8" colab_type="code" colab={}
import spacy
nlp = spacy.load("en_core_web_md")
# Compare two documents
doc1 = nlp("I like fast food")
doc2 = nlp("I like pizza")
print(doc1.similarity(doc2))
# Compare a document with a token
doc = nlp("I like pizza")
token = nlp("soap")[0]
print(doc.similarity(token))
# Compare a span with a document
span = nlp("I like pizza and pasta")[2:5]
doc = nlp("McDonalds sells burgers")
print(span.similarity(doc))
# + id="Vk52BqVy6tkM" colab_type="code" colab={}
doc = nlp("I have a banana")
# Access the vector via the token.vector attribute
print(doc[3].vector)
# + [markdown] id="ZYfSfRZs7xus" colab_type="text"
# # Combining models and rules
#
# Combining statistical models with rule-based systems is one of the most powerful tricks you should have in your NLP toolbox.
#
# 
# + [markdown] id="FHSA80hs9IFQ" colab_type="text"
#
# + id="dtNIN5kJ9If0" colab_type="code" colab={}
import json
from spacy.lang.en import English
with open("exercises/en/countries.json") as f:
    COUNTRIES = json.loads(f.read())
nlp = English()
doc = nlp("Czech Republic may help Slovakia protect its airspace")
# Import the PhraseMatcher and initialize it
from spacy.matcher import PhraseMatcher
matcher = PhraseMatcher(nlp.vocab)
# Create pattern Doc objects and add them to the matcher
# This is the faster version of: [nlp(country) for country in COUNTRIES]
patterns = list(nlp.pipe(COUNTRIES))
matcher.add("COUNTRY", None, *patterns)
# Call the matcher on the test document and print the result
matches = matcher(doc)
print([doc[start:end] for match_id, start, end in matches])
| SpacyCourse_chapter2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Unit 01: Informal introduction (Solutions)
# *Note: these exercises are optional, intended to be done at the end of the unit, and are meant to support your learning*.
# **1) Identify the data type (int, float, string or list) of the following literal values**
# ```python
# "<NAME>" # string
# [1, 10, 100] # list
# -25 # int
# 1.167 # float
# ["Hola", "Mundo"] # list
# ' ' # string
# ```
# **2) Work out mentally (without programming) the result that will be printed on screen, given the following variables:**
# ```python
# a = 10
# b = -5
# c = "Hola "
# d = [1, 2, 3]
# ```
#
# ```python
# print(a * 5) # 50
# print(a - b) # 15
# print(c + "Mundo") # Hola Mundo
# print(c * 2) # Hola Hola
# print(c[-1]) # " " (the trailing space of c)
# print(c[1:]) # ola
# print(d + d) # [1, 2, 3, 1, 2, 3]
# ```
# **3) The following code is supposed to compute the average of 3 numbers, but it does not work correctly. Can you identify the problem and fix it?**
# +
numero_1 = 9
numero_2 = 3
numero_3 = 6
media = (numero_1 + numero_2 + numero_3) / 3 # the three numbers must be added together first, before dividing
print("The average grade is", media)
# -
# **4) Building on the previous exercise, suppose now that each number is a grade and what we want is the final grade. The catch is that each grade carries a percentage weight: **
#
# * The first grade is worth 15% of the total
# * The second grade is worth 35% of the total
# * The third grade is worth 50% of the total
#
# **Write a program that computes the final grade correctly.**
# +
nota_1 = 10
nota_2 = 7
nota_3 = 4
# Complete the exercise here
media = nota_1 * 0.15 + nota_2 * 0.35 + nota_3 * 0.50 # multiply each grade by its weight (as a fraction of 1) and add them up
print("The final grade is", media)
# -
# **5) The following matrix (a list of nested lists) must satisfy one condition: in each row, the fourth element must always be the sum of the first three. Can you fix the incorrect sums using the slicing technique?**
#
# *Hint: the function sum(lista) returns the sum of all the elements of the list — try it!*
# +
matriz = [
[1, 1, 1, 3],
[2, 2, 2, 7],
[3, 3, 3, 9],
[4, 4, 4, 13]
]
# Complete the exercise here
matriz[1][-1] = sum(matriz[1][:-1])
matriz[3][-1] = sum(matriz[3][:-1])
print(matriz)
# -
# **6) A query against a registry returned a corrupted text string written backwards. It apparently contains a student's name and an exam grade. How could we format the string to obtain a structure like the following?:**
#
# * ***Name*** ***Surname*** scored a ***Grade*** on the exam.
#
# *Hint: to quickly reverse a string using slicing we can use a third index of -1: **cadena[::-1]** *
# +
cadena = "<NAME>1"
# Complete the exercise here
cadena_volteada = cadena[::-1]
print(cadena_volteada[3:], "scored a", cadena_volteada[:2], "on the exam.")
| Fase 1 - Fundamentos de programacion/Tema 01 - Introduccion informal/Ejercicios/Soluciones.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1 style="text-align:center; color:#008080;">Preprocessing with Local Binary Patterns<h1>
# <hr style="height:1px;">
# <h2 style="text-align:center; color:green;">Initialisation Steps</h2>
# <hr>
# <h3>Loading Dependencies</h3>
import PIL
from skimage.feature import local_binary_pattern
import numpy as np
import pickle
import matplotlib.pyplot as plt
import pandas as pd
import os
# <h3>Path Constants</h3>
# Train Parent Path
train_path='../asl_alphabet_train/'
# Train Labels
train_labels=os.listdir(train_path)
# Sample Image
sample_img=train_path+'A/A1.jpg'
sample_render=PIL.Image.open(sample_img)
sample_grey=sample_render.convert('L')
sample_arr=np.asarray(sample_grey)
# <h3>Sample Image</h3>
sample_render
sample_grey
# <h2 style="text-align:center;color:green;">Local Binary Pattern Transformation</h2>
# <hr>
sample_lbp=local_binary_pattern(sample_arr,8,1,'uniform')
sample_lbp=np.uint8((sample_lbp/sample_lbp.max())*255)
sample_conversion=PIL.Image.fromarray(sample_lbp)
sample_conversion
# calculating histogram
sample_hist,_=np.histogram(sample_lbp,8)
sample_hist=np.array(sample_hist,dtype=float)
# calculating energy and entropy from PDF
sample_prob=np.divide(sample_hist,np.sum(sample_hist))
sample_energy=np.sum(sample_prob**2)
sample_entropy=-np.sum(np.multiply(sample_prob,np.log2(sample_prob)))
print('Entropy ',sample_entropy)
print('Energy', sample_energy)
# <h2 style="text-align:center; color:green">Replicating for entire Training set</h2>
# <hr>
def compute_lbp(image):
    lbp = local_binary_pattern(image, 8, 1, 'uniform')
    lbp = np.uint8((lbp/lbp.max())*255)
    hist, _ = np.histogram(lbp, 8)
    hist = np.array(hist, dtype=float)
    prob = np.divide(hist, np.sum(hist))
    energy = np.sum(prob**2)
    entropy = -np.sum(np.multiply(prob, np.log2(prob)))
    return energy, entropy
def load_img(path):
    render = PIL.Image.open(path)
    render = render.convert('L')
    return np.asarray(render)
lbp_dataframe = pd.DataFrame(columns=['Energy', 'Entropy', 'Label'])
for label in os.listdir(train_path):
    for image_name in os.listdir(train_path+'/'+label+'/'):
        image_path = train_path+'/'+label+'/'+image_name
        image_arr = load_img(image_path)
        energy, entropy = compute_lbp(image_arr)  # compute_lbp returns an (energy, entropy) tuple
        lbp_dataframe.loc[len(lbp_dataframe)] = [energy, entropy, label]
    print('-------------------------------------------------------------------')
    print('Dataframe Write Completed for ', label)
    print('-------------------------------------------------------------------')
lbp_dataframe.to_csv('lbp_df.csv', sep=',', index=False)
| preprocessing_scripts/Local Binary Patterns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# cd ..
from src.stats import z_val, confidence_interval
# ## Test z_val()
z_val(0.01)
# ## Test confidence_interval()
confidence_interval()
| code/test_stats.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MC - Covid-19 Data preparation
# Task description: https://ds-spaces.technik.fhnw.ch/app/uploads/sites/82/2020/09/minichallenge_covid19.pdf
#
# Author: <NAME>
#
# Data Pipeline Concept: https://en.wikipedia.org/wiki/Pipeline_(computing) (state changes, configurability!)
# ## Procedure
# 1. [x] Create function for daily data pull
# 2. [x] Drop unnecessary data
# 3. [x] Clean data (tidy data principle)
# 4. [X] Prepare data for visualization (aggregation maybe)
# 5. [x] Return global and local dataframe
# 6. [x] Create visualization (plotly) for global and local data (bar plots incl. moving average, e.g. srf.ch, and line charts)
# 7. [x] Document process and code
#
# Steps 1 to 5 should function as a data preparation pipeline
# + active=""
# Figures to be delivered daily per country:
#
# [x] total confirmed cases per country since the start of the epidemic
# [x] new confirmed cases per country since the start of the epidemic
# [x] total confirmed deaths per country since the start of the epidemic
# [x] new confirmed deaths per country since the start of the epidemic
#
# Figures to be delivered daily per canton:
#
# [x] total confirmed cases per canton since 1 June 2020 (start of the 2nd wave)
# [x] new confirmed cases per canton since 1 June 2020
# [x] total confirmed deaths per canton since 1 June 2020
# [x] new confirmed deaths per canton since 1 June 2020
# +
# TODO: Use a different dataset (daily reports)
# Datasource local = https://github.com/daenuprobst/covid19-cases-switzerland
# Datasource global = https://github.com/CSSEGISandData/COVID-19
# Imports
import pandas as pd
from datetime import date, timedelta
# -
# ## get_data()
# This function pulls four dataframes from two GitHub repositories: two from Johns Hopkins University containing the daily total cases and total fatalities of Covid-19 for every country, and two from the covid19-cases-switzerland repository by <NAME> containing the daily total cases and total fatalities of Covid-19 for every canton in Switzerland. The function uses `pandas.read_csv()` and pulls the data directly via a URL to the raw files. It returns a list of DataFrames.
def get_data():
"""
Pulls latest data from sources
    :return df_global: DataFrame containing current Covid-19 data from Johns Hopkins University
:return df_CH_cases: Daily new cases in Switzerland
:return df_CH_fatal: Daily new fatalities in Switzerland
"""
    # Get most recent data from Johns Hopkins
df_global_daily_cases = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv')
df_global_daily_fatal = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')
# Get most recent data from covid19-cases-switzerland
df_CH_cases = pd.read_csv('https://raw.githubusercontent.com/daenuprobst/covid19-cases-switzerland/master/covid19_cases_switzerland_openzh-phase2.csv')
df_CH_fatal = pd.read_csv('https://raw.githubusercontent.com/daenuprobst/covid19-cases-switzerland/master/covid19_fatalities_switzerland_openzh-phase2.csv')
return [df_global_daily_cases, df_global_daily_fatal, df_CH_cases, df_CH_fatal]
# ## drop_columns()
# Multiple columns are not necessary for the data visualization and get dropped with `pandas.DataFrame.drop()`.
def drop_columns(dfs):
"""
Drops columns in list of dataframes if column name is in columns
    :param dfs: List of dataframes
:return dfs: List of dataframes
"""
DROP_COLUMNS = ['FIPS','Admin2','Province_State','Recovered','Combined_Key',
'Incidence_Rate','Case-Fatality_Ratio', 'Province/State', 'Last_Update', 'Long','Lat']
for df in dfs:
columns = df.columns.intersection(DROP_COLUMNS) # get list of columns to drop
df.drop(columns=columns, inplace=True)
return dfs
# ## rename()
# To keep the data uniform and simplify later processing, columns that carry the same information are renamed to a common name.
def rename(dfs):
"""
Rename Columns with same information to get uniform columns across dataframes (easier down the line)
:param dfs: List of dataframes
:return dfs: List of dataframes with equal column names
"""
for df in dfs:
if 'Country_Region' in df.columns or 'Country/Region' in df.columns:
df.rename({'Country_Region':'country',
'Country/Region':'country'}, inplace=True, axis=1)
if 'Date' in df.columns:
df.rename({'Date':'date'}, inplace=True, axis=1)
return dfs
# ## drop_nan()
# Drops empty rows
def drop_nan(dfs):
"""
Drops empty rows
    :param dfs: List of dataframes
:return dfs: List of dataframes without NAN rows
"""
for df in dfs:
df.dropna(how='all', inplace=True)
return dfs
# ## groupby_country()
# The two DataFrames from Johns Hopkins University contain countries that occur several times because they are split into provinces/states. For example, the USA appears multiple times because its numbers are reported per state. Summing these rows yields a single line per country.
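# The idea can be checked on a toy frame (hypothetical numbers, not the real JHU schema):

```python
import pandas as pd

# Two US rows (one per state) should collapse into a single country row
df = pd.DataFrame({
    'country': ['US', 'US', 'Switzerland'],
    '1/22/20': [1, 2, 5],
    '1/23/20': [3, 4, 6],
})
grouped = df.groupby('country', as_index=False).sum()
print(grouped)  # US totals become 3 and 7; Switzerland is unchanged
```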
def groupby_country(dfs):
"""
    Group data by country or canton
:param dfs: List of dataframes
:return dfs: List of dataframes all grouped by country or Canton
"""
for i,df in enumerate(dfs):
# World data
if 'country' in df.columns:
df_copy = df.copy()
df_copy = df_copy[['country']]
            # get all columns which should be summed up, i.e. all columns with case numbers
columns = [e for e in df.columns if e not in ['country']]
# group by country and sum up the case numbers
df = df.groupby(by=['country'])[columns].agg('sum').reset_index()
# merge on original list
df = df.merge(df_copy, left_on='country', right_on='country')
# drop duplicated countries
dfs[i] = df.drop_duplicates(subset='country', keep='first')
return dfs
# ## melt_data()
# This function is the heart of the pipeline. It melts the Dataframes and calculates the new columns.
# To avoid problems with the calculation of the new daily cases (it can happen that data from one country is combined with data from another country), the DataFrames are divided per country/canton, the new data is calculated and finally merged into one DataFrame.
#
# `pandas.DataFrame.melt()` led to some problems with the Johns Hopkins data (the calculation of "new_cases/fatal" produced negative numbers, which should not be possible). Another approach was used as a workaround.
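# The melt step itself, together with a per-group `diff()` that never mixes one canton's numbers with another's, can be sketched on toy data (hypothetical numbers):

```python
import pandas as pd

wide = pd.DataFrame({
    'date': ['2020-06-01', '2020-06-02'],
    'ZH': [10, 15],
    'BE': [4, 9],
})
# wide -> long: one row per (date, canton) pair
long_df = wide.melt(id_vars='date', var_name='canton', value_name='total_cases')
# diff within each canton, so ZH's first value is never subtracted from BE's last
long_df['new_cases'] = long_df.groupby('canton')['total_cases'].diff()
print(long_df)
```

# Using `groupby(...).diff()` is an alternative to an explicit per-canton loop; both avoid cross-country/cross-canton differences.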
def melt_data(dfs):
    # TODO: Possibly use groupby/apply for countries
"""
Melts DataFrames into format "country"("canton"),"date","total_cases","new_cases","total_fatal","new_fatal"
:param dfs: List of Dataframes
:return dfs: List of Dataframes in new format
"""
label = ['cases','fatal','cases','fatal']
for i, df in enumerate(dfs):
# global data
if 'country' in df.columns:
df_new = None
for c in df.country.unique():
df_c = df[df['country']==c].copy()
# transpose DataFrame
df_c = df_c.T.reset_index()
# clean dataset and rename columns
df_c.columns = ['date',f'total_{label[i]}']
df_c = df_c.drop(0)
# calculate new columns
df_c[f'new_{label[i]}'] = df_c[f'total_{label[i]}'].diff() # new_cases/fatal column
df_c['country'] = c
if df_new is None:
df_new = df_c
else:
df_new = df_new.append(other=df_c,ignore_index=True)
# add new_data
dfs[i] = df_new
# local data
else:
df_new = None
# melt dataframe
df = df.melt(id_vars='date',var_name='canton',value_name=f'total_{label[i]}')
for c in df.canton.unique():
df_c = df[df['canton']==c].copy()
# calculate new columns
df_c[f'new_{label[i]}'] = df_c[f'total_{label[i]}'].diff() # new_cases/fatal column
if df_new is None:
df_new = df_c
else:
df_new = df_new.append(other=df_c,ignore_index=True)
# add new_data
dfs[i] = df_new
return dfs
# ## merge_data()
# This function merges the four DataFrames into df_global and df_local
def merge_data(dfs):
"""
Merges prepared Dataframes into two dataframes (df_global and df_local)
:param dfs: List of DataFrames
    :return df_global: DataFrame containing Covid data for every country
    :return df_local: DataFrame containing Covid data for every canton in Switzerland
"""
df_global, df_local = None, None
for df in dfs:
if 'country' in df.columns:
if df_global is None:
df_global = df
else:
df_global = df_global.merge(right=df, on=['country','date'],how='left')
elif 'canton' in df.columns:
if df_local is None:
df_local = df
else:
df_local = df_local.merge(right=df, on=['canton','date'],how='left')
return df_global, df_local
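# The merge on the shared keys can be verified on two toy frames (hypothetical numbers):

```python
import pandas as pd

cases = pd.DataFrame({'country': ['Switzerland', 'Germany'],
                      'date': ['1/22/20', '1/22/20'],
                      'total_cases': [5, 7]})
fatal = pd.DataFrame({'country': ['Switzerland', 'Germany'],
                      'date': ['1/22/20', '1/22/20'],
                      'total_fatal': [0, 1]})
# left merge on (country, date) keeps one row per key with both measures
merged = cases.merge(fatal, on=['country', 'date'], how='left')
print(merged)
```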
# ## covid_pipe()
# Strings all the above mentioned functions together to form a pipeline
def covid_pipe():
"""
Covid-19 data pipeline
    :return df_global: DataFrame containing total_cases, new_cases, total_fatal (fatalities),
                       new_fatal for every day and country since tracking began
    :return df_local: DataFrame containing total_cases, new_cases, total_fatal (fatalities),
                      new_fatal for every day and canton in Switzerland since tracking began
"""
dfs = get_data()
dfs = drop_columns(dfs)
dfs = rename(dfs)
dfs = drop_nan(dfs)
dfs = groupby_country(dfs)
dfs = melt_data(dfs)
df_global, df_local = merge_data(dfs)
return df_global, df_local
df_global, df_local = covid_pipe()
# ## Output of pipeline
df_local.head()
| nb_covid_19_pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="qqVoyQCo-Uji" executionInfo={"status": "ok", "timestamp": 1630947028679, "user_tz": -330, "elapsed": 1026, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}}
__author__ = 'porky-chu'
import theano
import numpy as np
from theano import tensor as TT
class NodeVectorModel(object):
def __init__(self, n_from, n_to, de, seed=1692, init_params=None):
"""
n_from :: number of from embeddings in the vocabulary
n_to :: number of to embeddings in the vocabulary
de :: dimension of the word embeddings
"""
np.random.seed(seed)
# parameters of the model
        if init_params is not None:
            import pickle  # cPickle was Python 2 only; pickle is its Python 3 replacement
            with open('data/case_embeddings.pkl', 'rb') as f:
                temp = pickle.load(f)
self.Win = theano.shared(temp.Win.get_value().astype(theano.config.floatX))
self.Wout = theano.shared(temp.Wout.get_value().astype(theano.config.floatX))
else:
self.Win = theano.shared(0.2 * np.random.uniform(-1.0, 1.0, (n_from, de)).astype(theano.config.floatX))
self.Wout = theano.shared(0.2 * np.random.uniform(-1.0, 1.0, (n_to, de)).astype(theano.config.floatX))
# adagrad
self.cumulative_gradients_in = theano.shared(0.1 * np.ones((n_from, de)).astype(theano.config.floatX))
self.cumulative_gradients_out = theano.shared(0.1 * np.ones((n_to, de)).astype(theano.config.floatX))
idxs = TT.imatrix()
x_in = self.Win[idxs[:, 0], :]
x_out = self.Wout[idxs[:, 1], :]
norms_in= TT.sqrt(TT.sum(x_in ** 2, axis=1))
norms_out = TT.sqrt(TT.sum(x_out ** 2, axis=1))
norms = norms_in * norms_out
y = TT.vector('y') # label
y_predictions = TT.sum(x_in * x_out, axis=1) / norms
# cost and gradients and learning rate
loss = TT.mean(TT.sqr(y_predictions - y))
gradients = TT.grad(loss, [x_in, x_out])
updates = [
(self.cumulative_gradients_in, TT.inc_subtensor(self.cumulative_gradients_in[idxs[:, 0]], gradients[0] ** 2)),
(self.cumulative_gradients_out, TT.inc_subtensor(self.cumulative_gradients_out[idxs[:, 1]], gradients[1] ** 2)),
(self.Win, TT.inc_subtensor(self.Win[idxs[:, 0]], - (0.5 / TT.sqrt(self.cumulative_gradients_in[idxs[:, 0]])) * gradients[0])),
(self.Wout, TT.inc_subtensor(self.Wout[idxs[:, 1]], - (0.5 / TT.sqrt(self.cumulative_gradients_out[idxs[:, 1]])) * gradients[1])),
]
# theano functions
self.calculate_loss = theano.function(inputs=[idxs, y], outputs=loss)
self.classify = theano.function(inputs=[idxs], outputs=y_predictions)
self.train = theano.function(
inputs=[idxs, y],
outputs=loss,
updates=updates,
name='training_fn'
)
def __getstate__(self):
return self.Win, self.Wout
def __setstate__(self, state):
Win, Wout = state
self.Win = Win
self.Wout = Wout
def save_to_file(self, output_path):
with open(output_path, 'wb') as output_file:
#cPickle.dump(self, output_file, cPickle.HIGHEST_PROTOCOL)
print("Save")
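# The sparse Adagrad update built into the Theano graph above can be sketched in plain NumPy — an illustration of the update rule (learning rate 0.5, per-parameter accumulated squared gradients), not the Theano graph itself:

```python
import numpy as np

def adagrad_step(W, grad, cum_grad_sq, rows, lr=0.5):
    """One sparse Adagrad step on the given rows of W.

    cum_grad_sq accumulates squared gradients per parameter, so rows that
    receive frequent updates get a smaller effective learning rate."""
    cum_grad_sq[rows] += grad ** 2
    W[rows] -= (lr / np.sqrt(cum_grad_sq[rows])) * grad
    return W, cum_grad_sq

W = np.zeros((3, 2))
cum = 0.1 * np.ones((3, 2))   # same 0.1 initialisation as the model above
grad = np.ones((1, 2))        # pretend gradient for row 0
W, cum = adagrad_step(W, grad, cum, rows=np.array([0]))
print(W[0])   # each entry moved by -0.5 / sqrt(0.1 + 1)
```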
# + id="feymSZ3U-s4H" executionInfo={"status": "ok", "timestamp": 1630947204664, "user_tz": -330, "elapsed": 326, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}}
__author__ = 'allentran'
import json
import os
import multiprocessing
import numpy as np
def _update_min_dict(candidate_node, depth, min_set):
if candidate_node in min_set:
if min_set[candidate_node] <= depth:
return
else:
min_set[candidate_node] = depth
else:
min_set[candidate_node] = depth
def _get_connected_nodes(node_idx, adjacency_list, max_degree, current_depth=1):
    connected_dict = {}
    single_degree_nodes = [other_idx for other_idx in adjacency_list[node_idx] if adjacency_list[node_idx][other_idx] == 1]
    for other_idx in single_degree_nodes:
        _update_min_dict(other_idx, current_depth, connected_dict)
    if current_depth <= max_degree:
        for other_node_idx in single_degree_nodes:
            if other_node_idx in adjacency_list:
                new_connected_nodes = _get_connected_nodes(other_node_idx, adjacency_list, max_degree, current_depth + 1)
                if new_connected_nodes is not None:
                    for other_idx, depth in new_connected_nodes.items():
                        _update_min_dict(other_idx, depth, connected_dict)
    return connected_dict
class Graph(object):
def __init__(self, graph_path):
self.from_nodes_mapping = {}
self.to_nodes_mapping = {}
self.edge_dict = {}
self._load_graph(graph_path=graph_path)
self._create_mappings()
def save_mappings(self, output_dir):
print("wow")
return
with open(os.path.join(output_dir, 'from.map'), 'w') as from_map_file:
json.dump(self.from_nodes_mapping, from_map_file)
with open(os.path.join(output_dir, 'to.map'), 'w') as to_map_file:
json.dump(self.to_nodes_mapping, to_map_file)
def get_mappings(self):
return self.from_nodes_mapping, self.to_nodes_mapping
def _create_mappings(self):
for key in self.edge_dict:
self.from_nodes_mapping[key] = len(self.from_nodes_mapping)
for to_nodes in self.edge_dict.values():
for to_node in to_nodes:
if to_node not in self.to_nodes_mapping:
self.to_nodes_mapping[to_node] = len(self.to_nodes_mapping)
def _add_edge(self, from_idx, to_idx, degree=1):
if from_idx not in self.edge_dict:
self.edge_dict[from_idx] = dict()
if to_idx in self.edge_dict[from_idx]:
if degree >= self.edge_dict[from_idx][to_idx]:
return
self.edge_dict[from_idx][to_idx] = degree
def _load_graph(self, graph_path):
with open(graph_path, 'r') as graph_file:
for line in graph_file:
parsed_line = line.strip().split(' ')
if len(parsed_line) in [2, 3]:
from_idx = int(parsed_line[0])
to_idx = int(parsed_line[1])
if len(parsed_line) == 3:
degree = int(parsed_line[2])
self._add_edge(from_idx, to_idx, degree)
else:
self._add_edge(from_idx, to_idx)
def extend_graph(self, max_degree, penalty=2):
def _zip_args_for_parallel_fn():
for key in self.from_nodes_mapping.keys():
yield (key, self.edge_dict, max_degree)
from_to_idxs = []
degrees = []
        pool = multiprocessing.Pool(multiprocessing.cpu_count())
        # starmap unpacks each (node_idx, edge_dict, max_degree) tuple into arguments
        connected_nodes_list = pool.starmap(_get_connected_nodes, _zip_args_for_parallel_fn())
pool.close()
pool.join()
for node_idx, connected_nodes in zip(self.from_nodes_mapping.keys(), connected_nodes_list):
            for other_node, degree in connected_nodes.items():
from_to_idxs.append([self.from_nodes_mapping[node_idx], self.to_nodes_mapping[other_node]])
degrees.append(float(1)/(degree ** penalty))
return np.array(from_to_idxs).astype(np.int32), np.array(degrees).astype(np.float32)
# + id="ilREQ-F_-kYk" executionInfo={"status": "ok", "timestamp": 1630947029026, "user_tz": -330, "elapsed": 351, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}}
__author__ = 'porky-chu'
import random
import os
import logging
import numpy as np
#from node_vectors import NodeVectorModel
#import parser
class Graph2Vec(object):
def __init__(self, vector_dimensions, output_dir='data'):
self.output_dir = output_dir
self.model = None
self.from_nodes = None
self.to_nodes = None
self.dimensions = vector_dimensions
self.from_to_idxs = None
self.inverse_degrees = None
def parse_graph(self, graph_path, data_dir='data', load_edges=False, extend_paths=2):
graph = Graph(graph_path)
self.from_nodes, self.to_nodes = graph.get_mappings()
graph.save_mappings(self.output_dir)
if load_edges:
self.inverse_degrees = np.memmap(
os.path.join(data_dir, 'inverse_degrees.mat'),
mode='r',
dtype='float32'
)
self.from_to_idxs = np.memmap(
os.path.join(data_dir, 'from_to.mat'),
mode='r',
dtype='int32'
)
self.from_to_idxs = np.reshape(self.from_to_idxs, newshape=(self.inverse_degrees.shape[0], 2))
else:
from_to_idxs, inverse_degrees = graph.extend_graph(max_degree=extend_paths)
self.from_to_idxs = np.memmap(
os.path.join(data_dir, 'from_to.mat'),
mode='r+',
shape=from_to_idxs.shape,
dtype='int32'
)
self.from_to_idxs[:] = from_to_idxs[:]
self.inverse_degrees = np.memmap(
os.path.join(data_dir, 'inverse_degrees.mat'),
mode='r+',
shape=inverse_degrees.shape,
dtype='float32'
)
self.inverse_degrees[:] = inverse_degrees[:]
def fit(self, max_epochs=100, batch_size=1000, seed=1692, params=None):
self.model = NodeVectorModel(
n_from=len(self.from_nodes),
n_to=len(self.to_nodes),
de=self.dimensions,
init_params=params,
)
random.seed(seed)
shuffled_idxes = np.arange(self.from_to_idxs.shape[0])
        for epoch_idx in range(max_epochs):
random.shuffle(shuffled_idxes)
cost = []
            for obs_idx in range(0, len(self.inverse_degrees), batch_size):
cost.append(self.model.train(self.from_to_idxs[shuffled_idxes[obs_idx:obs_idx + batch_size]],
self.inverse_degrees[shuffled_idxes[obs_idx:obs_idx + batch_size]]))
cost = np.mean(cost)
logging.info('After %s epochs, cost=%s' % (epoch_idx, cost ** 0.5))
# + colab={"base_uri": "https://localhost:8080/"} id="uhy0F2Of8RDv" executionInfo={"status": "ok", "timestamp": 1630947032814, "user_tz": -330, "elapsed": 3792, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}} outputId="a18e40b8-72a3-4b2d-8abe-10666432d6c4"
# !pip install graph2vec
# (pickle ships with the Python standard library, so it does not need a pip install)
# + colab={"base_uri": "https://localhost:8080/"} id="94ScQEqx9M1X" executionInfo={"status": "ok", "timestamp": 1630947032815, "user_tz": -330, "elapsed": 25, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}} outputId="d240419a-0108-4bbf-cbd5-058b3f8d63cd"
import graph2vec
help(graph2vec)
# + colab={"base_uri": "https://localhost:8080/"} id="xetDcuY6BEVp" executionInfo={"status": "ok", "timestamp": 1630947032816, "user_tz": -330, "elapsed": 20, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}} outputId="5fb08861-3791-46d8-b427-0c82e1b5b002"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/", "height": 612} id="z1BsPP2g7V85" executionInfo={"status": "error", "timestamp": 1630947210881, "user_tz": -330, "elapsed": 631, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ghk3NvZ4zb-ERM3KiNdF2C-Tp6MsT2QphdB5L7FMQ=s64", "userId": "09282768075823282303"}} outputId="9d91d7e3-530a-4476-9ca5-73bb4c0c1199"
graph2vec = Graph2Vec(vector_dimensions=128)
graph2vec.parse_graph('/content/drive/MyDrive/Research/Sinhala NLP/Graph2Vec/data/edge.data', extend_paths=2)
graph2vec.fit(batch_size=1000, max_epochs=1000)
graph2vec.model.save_to_file("/content/drive/MyDrive/Research/Sinhala NLP/Graph2Vec/data/case_embeddings.pkl")
| Untitled0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="ffFq6iYN0q4r"
import os
def restart_runtime():
os.kill(os.getpid(), 9)
restart_runtime()
# + colab={"base_uri": "https://localhost:8080/"} id="_8moVMnv0nN3" outputId="30b95cbe-d40d-47ba-db15-1631f6f73c0c"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="SYXuvp_Vf4Sx" outputId="f616b96d-7f9b-4ca6-a691-8a29b9fc5092"
# %cd '/content/drive/MyDrive/Projects'
# + id="vPIpF39_0sSu"
# #!pip install pennylane
# #!pip install pennylane-qiskit
# #!pip install qiskit
# #!pip install pennylane-qchem
# !pip install pyscf
# #!pip install pylatexenc
# #!pip install --update openfermion
from IPython.display import clear_output
clear_output()
# + id="bSgFMud10BNo"
# This cell is added by sphinx-gallery
# It can be customized to whatever you like
# %matplotlib inline
# + id="5sakpoDn3kpp" colab={"base_uri": "https://localhost:8080/"} outputId="65a2ad0c-da7e-4941-b1e3-912733514ec6"
from qiskit import IBMQ
from qiskit import *
from qiskit.providers.aer.noise import NoiseModel
# + id="hbE71WOZ3lrv"
import pennylane as qml
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] id="08RHi9RoaxcD"
# # FCI
# + id="iOtsySIPfls3"
from pyscf import gto, scf, ao2mo, fci
import numpy as np
from matplotlib import pyplot as plt
# + id="LzvrSlY6flOz"
def FCI(L, e_list):
mol = gto.M(atom='H 0 0 ' + str(L) + '; H 0 0 0', basis='sto-3g')
mf = scf.RHF(mol).run()
h1 = mf.mo_coeff.T.dot(mf.get_hcore()).dot(mf.mo_coeff)
eri = ao2mo.kernel(mol, mf.mo_coeff)
cisolver = fci.direct_spin1.FCI(mol)
e, ci = cisolver.kernel(h1, eri, h1.shape[1], mol.nelec, ecore=mol.energy_nuc())
e_list += [e]
return e_list
#print(e)
# + colab={"base_uri": "https://localhost:8080/"} id="VjcIRjyjiXS0" outputId="93920952-b9fd-4be9-e04f-2aefbc0fcdaa"
FCI(0.7414, [])
# + id="X_4MfFusgbsC"
e_fci = []
bond_length_fci = np.linspace(0.1, 3.0, 30)
for i in range(len(bond_length_fci)):
e_fci = FCI(bond_length_fci[i], e_fci)
from IPython.display import clear_output
clear_output()
# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="ep4kGaLYixHC" outputId="ae8f9b46-299b-4d92-e286-2f478245a132"
plt.plot(bond_length_fci, e_fci)
# + colab={"base_uri": "https://localhost:8080/"} id="o2cHHWNykISS" outputId="cc923147-a655-4cb1-b9fb-d68ddd3d10f2"
bond_length_fci[np.argmin(e_fci)]
# + id="aRH_RB0Gml8U"
np.savetxt('./FCI_energy_wide_30.txt', e_fci)
#np.savetxt('./FCI_length_crammed.txt', bond_length_fci)
# + id="gbETwYeY0BNy"
import pennylane as qml
from pennylane import qchem
from pennylane import numpy as np
# + [markdown] id="nzVShOjW71Cy"
# # VQE
# + [markdown] id="pyQxSp5c8BgP"
# ## Molecule Geometry
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="Q7bWZdDJuiyx" outputId="d873cf8b-6d4e-48e1-d5e2-b3fc281997de"
f = open("./h2.xyz")
f.read()
# + id="1w9C2L0MzZHE"
def change_geometry(L_new, L_prev):
#read input file
fin = open("./h2.xyz", "rt")
#read file contents to string
data = fin.read()
#replace all occurrences of the required string
data = data.replace(str(L_prev), str(L_new))
#close the input file
fin.close()
#open the input file in write mode
fin = open("./h2.xyz", "wt")
    #overwrite the input file with the resulting data
fin.write(data)
#close the file
fin.close()
return L_new
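# The replace-based update can be illustrated without touching the file. Note that `str.replace` swaps every occurrence, so this only works while the old bond length string appears nowhere else in the file (hypothetical xyz content below):

```python
# A minimal two-atom xyz fragment with the H-H distance on the first line
data = "H 0.0 0.0 0.90\nH 0.0 0.0 0.0\n"
new_data = data.replace(str(0.90), str(0.10))  # str(0.90) == '0.9'
print(new_data)
```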
# + colab={"base_uri": "https://localhost:8080/"} id="vMNHWgYW2Abl" outputId="cca865f8-10d7-4e96-ad07-9ac4455f5de2"
change_geometry(0.10, 0.90)
# + [markdown] id="GxzqHr2R8Z61"
# ## Jordan-Wigner Mapping
# + id="sgms4tpi0BN7"
def jordan_wigner_map():
geometry = './h2.xyz'
charge = 0
multiplicity = 1
basis_set = 'sto-3g'
name = 'h2'
h, qubits = qchem.molecular_hamiltonian(
name,
geometry,
charge=charge,
mult=multiplicity,
basis=basis_set,
active_electrons=2,
active_orbitals=2,
mapping='jordan_wigner'
)
#print('Number of qubits = ', qubits)
#print('Hamiltonian is ', h)
return h
# + colab={"base_uri": "https://localhost:8080/"} id="uNgyCmlH2qtj" outputId="c1c5c03e-10cb-485d-9831-aad0cf5ae200"
print(jordan_wigner_map())
# + [markdown] id="vTunO6ej8eal"
# ## Devices
# + colab={"base_uri": "https://localhost:8080/"} id="a3Ft3f5yzj6A" outputId="01fb15bc-d318-4ffa-cf9d-e837be139cd0"
provider = IBMQ.load_account()
backend = provider.get_backend('ibmq_vigo')
noise_model = NoiseModel.from_backend(backend)
# + id="Ul7H5pSc3_i5"
# Note: this cell relies on `qubits`, `circuit`, and `h`, which are defined in cells further down
dev_noisy = qml.device("qiskit.aer", wires=qubits, shots=2048, noise_model = noise_model, analytic=False, backend='qasm_simulator')
cost_noisy = qml.ExpvalCost(circuit, h, dev_noisy)
# + [markdown] id="pptkYqOk4PRb"
# ## Ground State Energy as function of bond length
# + colab={"base_uri": "https://localhost:8080/"} id="XEjoygwG4qke" outputId="273d6e23-9f68-41bb-87e8-5eac57c55012"
max_iterations = 1000
conv_tol = 1e-08
qubits = 4
energy_list = []
bond_length_vqe = np.linspace(0.73, 0.75, 20)
L_before = 0.80
for k in range(len(bond_length_vqe)):
L = bond_length_vqe[k]
L_before = change_geometry(L, L_before)
dev = qml.device('default.qubit', wires=qubits)
def circuit(params, wires):
#qml.BasisState(np.array([1, 1, 0, 0]), wires=wires)
for i in wires:
qml.Rot(*params[i], wires=i)
qml.CNOT(wires=[2, 3])
qml.CNOT(wires=[2, 0])
qml.CNOT(wires=[3, 1])
h = jordan_wigner_map()
cost_fn = qml.ExpvalCost(circuit, h, dev)
opt = qml.AdamOptimizer(stepsize=0.01)
np.random.seed(0)
params = np.random.normal(0, np.pi, (qubits, 3))
prev_energy = cost_fn(params)
for n in range(max_iterations):
params = opt.step(cost_fn, params)
energy = cost_fn(params)
conv = np.abs(energy - prev_energy)
if conv <= conv_tol:
break
prev_energy = energy
energy_list += [energy]
print()
print('Bond Length = {:.4f} Angstrom'.format(L))
print('Final convergence parameter = {:.8f} Ha'.format(conv))
print('Final value of the ground-state energy = {:.8f} Ha'.format(energy))
print()
# + id="BwQxtfmNgnXq"
np.savetxt('./VQE_energy_vs_L_crammed.txt', energy_list)
np.savetxt('./VQE_length_crammed.txt', bond_length_vqe)
# + [markdown] id="VVwI85aKbk8w"
# ## Ground State Energy as function of number of iterations (fixed bond length)
# + colab={"base_uri": "https://localhost:8080/"} id="YqaF1moScv3B" outputId="cfa573f3-8a21-49ec-841c-f4d1b7276d9a"
change_geometry(0.7414, 0.7348650)
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="WbO5UlIjcsZT" outputId="3599f30f-7bda-45ba-c8bc-e398a1a1a65a"
f = open("./h2.xyz")
f.read()
# + colab={"base_uri": "https://localhost:8080/"} id="gYD-u90ldOFy" outputId="49dd39a8-2697-46af-d0f8-d71e8ad0f5cd"
h = jordan_wigner_map()
print(h)
# + id="UBFYf8-a0BN_"
qubits = 4
dev = qml.device('default.qubit', wires=qubits)
def circuit(params, wires):
#qml.BasisState(np.array([1, 1, 0, 0]), wires=wires)
for i in wires:
qml.Rot(*params[i], wires=i)
qml.CNOT(wires=[2, 3])
qml.CNOT(wires=[2, 0])
qml.CNOT(wires=[3, 1])
cost_fn = qml.ExpvalCost(circuit, h, dev)
opt = qml.AdamOptimizer(stepsize=0.01)
np.random.seed(0)
params = np.random.normal(0, np.pi, (qubits, 3))
# + id="mbMmztsW0BOB" colab={"base_uri": "https://localhost:8080/"} outputId="f6e4c74b-39f3-4057-c68c-aa1dd8b3437e"
max_iterations = 1000
conv_tol = 1e-08
energy_list = []
prev_energy = cost_fn(params)
energy_list += [prev_energy]
for n in range(max_iterations):
params = opt.step(cost_fn, params)
energy = cost_fn(params)
conv = np.abs(energy - prev_energy)
if (n+1) % 20 == 0:
print('Iteration = {:}, Energy = {:.8f} Ha'.format(n+1, energy))
if conv <= conv_tol:
break
prev_energy = energy
energy_list += [prev_energy]
print()
print('Final convergence parameter = {:.8f} Ha'.format(conv))
print('Final value of the ground-state energy = {:.8f} Ha'.format(energy))
print('Accuracy with respect to the FCI energy: {:.8f} Ha ({:.8f} kcal/mol)'.format(
np.abs(energy - (-1.136189454088)), np.abs(energy - (-1.136189454088))*627.503
)
)
print()
print('Final circuit parameters = \n', params)
# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="WWMScmradtrv" outputId="8e6c5b56-6aac-4051-8d52-4632ee72e715"
plt.plot(energy_list)
# + id="RTWi4UA2jitO"
bond_length_vqe = np.linspace(0, len(energy_list)-1, len(energy_list), dtype=int)
# + id="47xgdAA3jc_t"
np.savetxt('./VQE_energy_vs_iteration.txt', energy_list)
np.savetxt('./VQE_length_iteration.txt', bond_length_vqe)
# + [markdown] id="9YCvZmJb8_zI"
# ## Ground state wave function estimation using the final (optimal) circuit parameters
# + colab={"base_uri": "https://localhost:8080/"} id="ogSTpVs-oCfa" outputId="66e42a93-6269-447d-c4ab-3b5170e739f1"
params
# + id="860WZxkcpP4Y"
dev = qml.device('default.qubit', wires=4, shots=10000)
@qml.qnode(dev)
def circuit_optimal(params):
#qml.BasisState(np.array([1, 1, 0, 0]), wires=wires)
for i in range(4):
qml.Rot(*params[i], wires=i)
qml.CNOT(wires=[2, 3])
qml.CNOT(wires=[2, 0])
qml.CNOT(wires=[3, 1])
return qml.probs([0, 1, 2, 3])
# + id="llG0BI3soSVH"
prob_state = circuit_optimal(params)
# + colab={"base_uri": "https://localhost:8080/"} id="ukAsW2lCuN64" outputId="bec6ec7f-c072-4298-f8a9-525e5ace3685"
print(prob_state[3] + prob_state[12])
print(prob_state[3], np.sqrt(prob_state[3]))
print(prob_state[12], np.sqrt(prob_state[12]))
# + colab={"base_uri": "https://localhost:8080/"} id="fO6yLW_ktbRB" outputId="48482862-d082-4596-fa61-16566c3e0d8a"
prob_state
# + id="dWIUJbAooIpL"
np.savetxt('./VQE_optimal_param_0.7414.txt', params)
| VQE_H2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Iris
# ### Introduction:
#
# This exercise may seem a little bit strange, but keep doing it.
#
# ### Step 1. Import the necessary libraries
import pandas as pd
import numpy as np
# ### Step 2. Import the dataset from this [address](https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data).
# ### Step 3. Assign it to a variable called iris
# +
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
iris = pd.read_csv(url, header=None)  # the file has no header row; header=None keeps the first record from being consumed as column names
iris.head()
# -
# ### Step 4. Create columns for the dataset
# +
# 1. sepal_length (in cm)
# 2. sepal_width (in cm)
# 3. petal_length (in cm)
# 4. petal_width (in cm)
# 5. class
iris.columns = ['sepal_length','sepal_width', 'petal_length', 'petal_width', 'class']
iris.head()
# -
# ### Step 5. Is there any missing value in the dataframe?
pd.isnull(iris).sum()
# Nice, no missing values.
# ### Step 6. Lets set the values of the rows 10 to 29 of the column 'petal_length' to NaN
iris.iloc[10:30,2:3] = np.nan
iris.head(20)
# ### Step 7. Good, now lets substitute the NaN values to 1.0
iris.petal_length.fillna(1, inplace = True)
iris
# ### Step 8. Now let's delete the column class
del iris['class']
iris.head()
# ### Step 9. Set the first 3 rows as NaN
iris.iloc[0:3 ,:] = np.nan
iris.head()
# ### Step 10. Delete the rows that have NaN
iris = iris.dropna(how='any')
iris.head()
# ### Step 11. Reset the index so it begins with 0 again
iris = iris.reset_index(drop = True)
iris.head()
# ### BONUS: Create your own question and answer it.
| 10_Deleting/Iris/Exercises_with_solutions_and_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Default popup
# +
from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
# +
from cartoframes.viz import Layer, color_bins_style, default_popup_element
Layer(
'eng_wales_pop',
color_bins_style('pop_sq_km'),
popup_hover=[
default_popup_element(title='Population per square km', format=',.3r')
]
)
| docs/examples/data_visualization/popups/default_popup.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # NLP
# ## Tidy
#
texto <- c('Minha terra tem palmeiras',
'onde canta o sabiá;',
'As aves que aqui gorjeiam,',
'não gorjeiam como lá.',
'Nosso céu tem mais estrelas,',
'Nossas várzeas têm mais flores,',
'Nossos bosques têm mais vida,',
'Nossa vida mais amores.',
'Em cismar – sozinho – à noite – ',
'Mais prazer encontro eu lá;',
'Minha terra tem palmeiras,',
'Onde canta o Sabiá.',
'Minha terra tem primores,',
'Que tais não encontro eu cá;',
'Em cismar – sozinho – à noite –',
'Mais prazer encontro eu lá;',
'Minha terra tem palmeiras,',
'Onde canta o Sabiá.',
'Não permita Deus que eu morra,',
'Sem que eu volte para lá;',
'Sem que eu desfrute os primores',
'Que não encontro por cá;',
'Sem que ainda aviste as palmeiras,',
'Onde canta o Sabiá.'
)
texto
# Let's turn this into a data frame with the dplyr package:
library(dplyr)
df_texto <- data_frame(line = 1:length(texto), text = texto)
df_texto
# +
#install.packages("tidytext",repos='http://cran.us.r-project.org')
# -
library(tidytext)
df_texto %>% unnest_tokens(word,text)
# Note that there are many unimportant words, such as articles, prepositions, etc. These are called "stop words" and only get in the way of the analysis.
library(readr)
stopwords <- read_csv('portuguese-stopwords.txt', col_names = 'word')
filtrado <- df_texto %>%
unnest_tokens(word,text) %>%
anti_join(stopwords,by='word')
filtrado
# +
words <- filtrado %>%
group_by(word) %>%
summarise(freq = n()) %>%
arrange(desc(freq))
words <- as.data.frame(words)
rownames(words) <- words$word
head(words)
# -
#install.packages('wordcloud',repos='http://cran.us.r-project.org')
library(wordcloud)
options(warn=-1)
wordcloud(words$word,words$freq,scale=c(5,.01),min.freq = 1,max.words=Inf,colors=brewer.pal(8,"Dark2"))
options(warn=0)
| book-R/tidy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css')
# +
os.chdir(path)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# %matplotlib inline
# %load_ext watermark
# %load_ext autoreload
# %autoreload 2
import sys
from math import ceil
from tqdm import trange
from subprocess import call
from itertools import islice
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import normalize
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix, dok_matrix
# %watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,scipy,sklearn,tqdm
# -
# If you are new to the field of recommender systems, please make sure you understand the basics of matrix factorization in this [other documentation](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/recsys/1_ALSWR.ipynb).
#
# # Bayesian Personalized Ranking
#
# Recall that when doing matrix factorization for implicit feedback data (users' clicks, view times), we start with a user-item matrix $R$ whose nonzero elements are the users' interactions with the items. Matrix factorization decomposes this large matrix into a product of matrices, namely $R = U \times V$.
#
# 
#
# Matrix factorization assumes that:
#
# - Each user can be described by $d$ features. For example, feature 1 might refer to how much each user likes Disney movies.
# - Each item, a movie in this case, can be described by an analogous set of $d$ features. To correspond to the above example, feature 1 for the movie might be a number that says how close the movie is to a Disney movie.
#
# With that notion in mind, we can denote our $d$-feature user by letting user $u$ take the form of a $1 \times d$-dimensional vector $\textbf{x}_{u}$. Similarly, an item $i$ can be represented by a $1 \times d$-dimensional vector $\textbf{y}_{i}$. We then predict the interaction that user $u$ will have with item $i$ by taking the dot product of the two vectors
#
# \begin{align}
# \hat r_{ui} &= \textbf{x}_{u} \textbf{y}_{i}^{T} = \sum\limits_{d} x_{ud}y_{di}
# \end{align}
#
# Where $\hat r_{ui}$ represents our prediction for the true interaction $r_{ui}$. Next, we choose an objective function that minimizes the squared difference between all interactions in our dataset ($S$) and our predictions. This produces an objective function of the form:
#
# \begin{align}
# L &= \sum\limits_{u,i \in S}( r_{ui} - \textbf{x}_{u} \textbf{y}_{i}^{T} )^{2}
# \end{align}
#
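# To make the notation concrete, here is a minimal NumPy sketch of the dot-product prediction and the squared-error objective above (toy factors and observations, purely illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
n_users, n_items, d = 4, 5, 3
X = rng.normal(size=(n_users, d))  # user factors, one row per x_u
Y = rng.normal(size=(n_items, d))  # item factors, one row per y_i

# toy observed (user, item, rating) triplets standing in for S
S = [(0, 1, 4.0), (2, 3, 1.0), (3, 0, 5.0)]

def predict(u, i):
    """r_hat_ui = x_u . y_i"""
    return X[u] @ Y[i]

# L = sum over observed pairs of (r_ui - x_u y_i^T)^2
loss = sum((r - predict(u, i)) ** 2 for u, i, r in S)
```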
# This is all well and good, but a lot of the time what we wish to optimize is not the difference between the true and the predicted interaction, but rather the ranking of the items: given a user, which top-N items is the user most likely to prefer? This is what **Bayesian Personalized Ranking (BPR)** tries to accomplish. The idea is centered around sampling positive items (items the user has interacted with) and negative items (items the user hasn't interacted with) and running pairwise comparisons.
# ## Formulation
#
# Suppose $U$ is the set of all users and $I$ is the set of all items; our goal is to provide user $u$ with a personalized ranking, denoted by $>_u$. As mentioned in the last section, the usual approach for item recommenders is to predict a personalized score $\hat r_{ui}$ for an item that reflects the user's preference for that item. The items are then ranked by sorting them according to that score, and the top-N are recommended to the user.
#
# Here we'll take a different approach by using item pairs as training data and optimizing for correctly ranking item pairs. From the whole dataset $S$, we try to reconstruct parts of $>_u$ for each user. If the user has interacted with item $i$, i.e. $(u,i) \in S$, then we assume that the user prefers this item over all other non-observed items. E.g. in the figure below user $u_1$ has interacted with item $i_2$ but not item $i_1$, so we assume that this user prefers item $i_2$ over $i_1$: $i_2 >_u i_1$. We will denote this generally as $i >_u j$, where $i$ stands for the positive item and $j$ for the negative item. For two items that the user has both interacted with, we cannot infer any preference. The same is true for two items that a user has not interacted with yet (e.g. items $i_1$ and $i_4$ for user $u_1$).
#
# 
#
# Given these information, we can now get to the Bayesian part of this method. Let $\Theta$ be the parameter of the model that determines the personalized ranking. BPR's goal is to maximize the posterior probability:
#
# \begin{align}
# p(\Theta | i >_u j ) \propto p( i >_u j |\Theta) p(\Theta)
# \end{align}
#
# $p( i >_u j |\Theta)$ is the likelihood function, it captures the individual probability that a user really prefers item $i$ over item $j$. We compute this probability with the form:
#
# \begin{align}
# p( i >_u j |\Theta) = \sigma \big(\hat r_{uij}(\Theta) \big)
# \end{align}
#
# Where: $\sigma$ is the good old logistic sigmoid:
#
# \begin{align}
# \sigma(x) = \frac{1}{1 + e^{-x}}
# \end{align}
#
# And $\hat r_{uij}(\Theta)$ captures the relationship between user $u$, item $i$ and item $j$, which can be further decomposed into:
#
# \begin{align}
# \hat r_{uij} = \hat r_{ui} - \hat r_{uj}
# \end{align}
#
# For convenience we skipped the argument $\Theta$ from $\hat r_{uij}$. The formula above is simply the difference between the predicted interactions with the positive item $i$ and the negative item $j$. Thanks to this generic framework, we can apply any standard collaborative filtering technique (such as matrix factorization) that can predict the interaction between user and item. Keep in mind that although it may seem like we're using the same models as in other work, here we're optimizing against a different criterion: we do not try to predict a single score $\hat r_{ui}$, but instead try to classify the difference of two predictions $\hat r_{ui} - \hat r_{uj}$. For those interested in diving deeper, there's a section in the original paper showing that the BPR optimization criterion actually optimizes AUC (Area Under the ROC Curve).
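# The likelihood above can be sketched in a few lines of NumPy (toy factors; illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(1)
d = 3
x_u = rng.normal(size=d)  # user factors
y_i = rng.normal(size=d)  # positive item factors
y_j = rng.normal(size=d)  # negative item factors

# r_uij = r_ui - r_uj, then p(i >_u j | theta) = sigmoid(r_uij)
r_uij = x_u @ y_i - x_u @ y_j
p_i_over_j = sigmoid(r_uij)
```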
# So far, we have only discussed the likelihood function. To complete the Bayesian modeling approach of the personalized ranking task, we introduce a general prior density $p(\Theta)$, a normal distribution with zero mean and variance-covariance matrix $\Sigma(\Theta)$. To reduce the number of unknown hyperparameters, we set $\Sigma(\Theta) = \lambda_{\Theta} I$.
#
# To sum it all up, the full form of the maximum posterior probability optimization (called BPR-Opt in the paper) can be specified as:
#
# \begin{align}
# BPR-Opt &\implies \prod_{u, i, j} p( i >_u j |\Theta) p(\Theta) \\
# &\implies ln \big( \prod_{u, i, j} p( i >_u j |\Theta) p(\Theta) \big) \\
# &\implies \sum_{u, i, j} ln \sigma \big(\hat r_{ui} - \hat r_{uj} \big) + ln p(\Theta) \\
# &\implies \sum_{u, i, j} ln \sigma \big(\hat r_{ui} - \hat r_{uj} \big)
# - \lambda_{\Theta} \left\Vert \Theta \right\Vert ^{2} \\
# &\implies \sum_{u, i, j} ln \sigma \big(\hat r_{ui} - \hat r_{uj} \big)
# - \frac{\lambda_{\Theta}}{2} \left\Vert x_u \right\Vert ^{2}
# - \frac{\lambda_{\Theta}}{2} \left\Vert y_i \right\Vert ^{2}
# - \frac{\lambda_{\Theta}}{2} \left\Vert y_j \right\Vert ^{2} \\
# &\implies \sum_{u, i, j} ln \sigma \big( x_u y_i^T - x_u y_j^T \big)
# - \frac{\lambda_{\Theta}}{2} \left\Vert x_u \right\Vert ^{2}
# - \frac{\lambda_{\Theta}}{2} \left\Vert y_i \right\Vert ^{2}
# - \frac{\lambda_{\Theta}}{2} \left\Vert y_j \right\Vert ^{2} \\
# &\implies \sum_{u, i, j} ln \frac{1}{1 + e^{-(x_u y_i^T - x_u y_j^T) }}
# - \frac{\lambda_{\Theta}}{2} \left\Vert x_u \right\Vert ^{2}
# - \frac{\lambda_{\Theta}}{2} \left\Vert y_i \right\Vert ^{2}
# - \frac{\lambda_{\Theta}}{2} \left\Vert y_j \right\Vert ^{2}
# \end{align}
#
#
# Where:
#
# - We first take the natural log (a monotonic transformation, so it does not affect the optimization), turning the product into a sum, which is much easier to work with
# - As for the $p(\Theta)$ part, recall that because for each parameter we assume that it's a normal distribution with mean zero ($\mu = 0$), and unit variance ($\Sigma = I$, we ignore the $\lambda_{\Theta}$ for now), the formula for it is:
#
# \begin{align}
# N(x \mid \mu, \Sigma)
# &\implies \frac{1}{(2\pi)^{d/2}\sqrt{|\Sigma|}} exp(-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)) \\
# &\implies \frac{1}{(2\pi)^{d/2}} exp(-\frac{1}{2}\Theta^{T}\Theta)
# \end{align}
#
# In the formula above, the only thing that depends on $\Theta$ is the $exp(-\frac{1}{2}\Theta^{T}\Theta)$ part on the right, the rest is just a multiplicative constant that we don't need to worry about, thus if we take the natural log of that formula, then the exponential goes away and our $p(\Theta)$ can be written as $- \frac{1}{2} \left\Vert \Theta \right\Vert ^{2}$, and we simply multiply the $\lambda_{\Theta}$ back, which can be seen as the model specific regularization parameter.
#
# Last but not least, in machine learning it's probably more common to try and minimize things, thus we simply flip all the signs of the maximization formula above, leaving us with:
#
# \begin{align}
# argmin_{x_u, y_i, y_j} \sum_{u, i, j} -ln \frac{1}{1 + e^{-(x_u y_i^T - x_u y_j^T) }}
# + \frac{\lambda_{\Theta}}{2} \left\Vert x_u \right\Vert ^{2}
# + \frac{\lambda_{\Theta}}{2} \left\Vert y_i \right\Vert ^{2}
# + \frac{\lambda_{\Theta}}{2} \left\Vert y_j \right\Vert ^{2}
# \end{align}
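# A direct transcription of this minimization objective for a single $(u, i, j)$ triplet might look like the following sketch (toy values; the batch implementation later in this notebook is the optimized version):

```python
import numpy as np

def bpr_loss(x_u, y_i, y_j, reg=0.01):
    """negative log-likelihood of one triplet plus the L2 penalties"""
    r_uij = x_u @ y_i - x_u @ y_j
    nll = -np.log(1.0 / (1.0 + np.exp(-r_uij)))
    penalty = 0.5 * reg * (x_u @ x_u + y_i @ y_i + y_j @ y_j)
    return nll + penalty

rng = np.random.RandomState(2)
x_u, y_i, y_j = rng.normal(size=(3, 4))
loss = bpr_loss(x_u, y_i, y_j)
```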
# ## Optimization
#
# In the last section we derived an optimization criterion for personalized ranking. As the criterion is differentiable, gradient-based algorithms are an obvious choice for optimizing it. But standard gradient descent is probably not the right choice for our problem given the size of the set of all possible triplets $(u, i, j)$. To solve this issue, BPR uses a stochastic gradient descent algorithm that chooses the triplets randomly (uniformly distributed, bootstrap sampled).
#
# To solve for the function using gradient descent, we derive the gradient for the three parameters $x_u$, $y_i$, $y_j$ separately. Just a minor hint when deriving the gradient, remember that the first part of the formula requires the chain rule:
#
# \begin{align}
# \dfrac{\partial}{\partial x} ln \sigma(x)
# &\implies \dfrac{1}{\sigma(x)} \dfrac{\partial}{\partial x} \sigma(x) \\
# &\implies \left( 1 + e^{-x} \right) \dfrac{\partial}{\partial x} \sigma(x) \\
# &\implies \left( 1 + e^{-x} \right) \dfrac{\partial}{\partial x} \left[ \dfrac{1}{1 + e^{-x}} \right] \\
# &\implies \left( 1 + e^{-x} \right) \dfrac{\partial}{\partial x} \left( 1 + \mathrm{e}^{-x} \right)^{-1} \\
# &\implies \left( 1 + e^{-x} \right) \cdot -(1 + e^{-x})^{-2}(-e^{-x}) \\
# &\implies \left( 1 + e^{-x} \right) \dfrac{e^{-x}}{\left(1 + e^{-x}\right)^2} \\
# &\implies \dfrac{e^{-x}}{1 + e^{-x}}
# \end{align}
#
# ---
#
# \begin{align}
# \dfrac{\partial}{\partial x_u}
# &\implies \dfrac{e^{-(x_u y_i^T - x_u y_j^T)}}{1 + e^{-(x_u y_i^T - x_u y_j^T)}} \cdot (y_j - y_i) + \lambda x_u
# \end{align}
#
# \begin{align}
# \dfrac{\partial}{\partial y_i}
# &\implies \dfrac{e^{-(x_u y_i^T - x_u y_j^T)}}{1 + e^{-(x_u y_i^T - x_u y_j^T)}} \cdot -x_u + \lambda y_i
# \end{align}
#
# \begin{align}
# \dfrac{\partial}{\partial y_j}
# &\implies \dfrac{e^{-(x_u y_i^T - x_u y_j^T)}}{1 + e^{-(x_u y_i^T - x_u y_j^T)}} \cdot x_u + \lambda y_j
# \end{align}
#
# After deriving the gradient the update for each parameter using gradient descent is simply:
#
# \begin{align}
# \Theta & \Leftarrow \Theta - \alpha \dfrac{\partial}{\partial \Theta}
# \end{align}
#
# Where $\alpha$ is the learning rate.
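# Putting the three gradients together, a single stochastic update for one sampled triplet can be sketched as follows (toy factors; repeated steps should drive the loss down):

```python
import numpy as np

def bpr_loss(x_u, y_i, y_j, reg):
    r_uij = x_u @ y_i - x_u @ y_j
    return -np.log(1.0 / (1.0 + np.exp(-r_uij))) + \
        0.5 * reg * (x_u @ x_u + y_i @ y_i + y_j @ y_j)

def sgd_step(x_u, y_i, y_j, lr=0.01, reg=0.01):
    r_uij = x_u @ y_i - x_u @ y_j
    sig = np.exp(-r_uij) / (1.0 + np.exp(-r_uij))  # derivative of -ln sigmoid(r)
    grad_u = sig * (y_j - y_i) + reg * x_u
    grad_i = sig * -x_u + reg * y_i
    grad_j = sig * x_u + reg * y_j
    return x_u - lr * grad_u, y_i - lr * grad_i, y_j - lr * grad_j

rng = np.random.RandomState(3)
x_u, y_i, y_j = rng.normal(size=(3, 4))
loss_before = bpr_loss(x_u, y_i, y_j, reg=0.01)
for _ in range(100):
    x_u, y_i, y_j = sgd_step(x_u, y_i, y_j)
loss_after = bpr_loss(x_u, y_i, y_j, reg=0.01)
```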
# # Implementation
#
# We will again use the MovieLens data as an example.
# +
file_dir = 'ml-100k'
file_path = os.path.join(file_dir, 'u.data')
if not os.path.isdir(file_dir):
call(['curl', '-O', 'http://files.grouplens.org/datasets/movielens/' + file_dir + '.zip'])
call(['unzip', file_dir + '.zip'])
# we will not be using the timestamp column
names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv(file_path, sep = '\t', names = names)
print('data dimension: \n', df.shape)
df.head()
# -
# Because BPR assumes binary implicit feedback (meaning there are only positive and negative items), here we'll treat an item as positive only if the user gave it a rating of 3 or above, matching the threshold of 3 used below (feel free to experiment and change the threshold). The next few code chunks create the sparse interaction matrix and split it into train and test sets.
def create_matrix(data, users_col, items_col, ratings_col, threshold = None):
"""
creates the sparse user-item interaction matrix,
if the data is not in the format where the interaction only
contains the positive items (indicated by 1), then use the
threshold parameter to determine which items are considered positive
Parameters
----------
data : DataFrame
implicit rating data
users_col : str
user column name
items_col : str
item column name
ratings_col : str
implicit rating column name
threshold : int, default None
threshold to determine whether the user-item pair is
a positive feedback
Returns
-------
ratings : scipy sparse csr_matrix [n_users, n_items]
user/item ratings matrix
data : DataFrame
the implict rating data that retains only the positive feedback
(if specified to do so)
"""
if threshold is not None:
data = data[data[ratings_col] >= threshold].copy()  # copy to avoid SettingWithCopyWarning
data[ratings_col] = 1
for col in (items_col, users_col, ratings_col):
data[col] = data[col].astype('category')
ratings = csr_matrix(( data[ratings_col],
(data[users_col].cat.codes, data[items_col].cat.codes) ))
ratings.eliminate_zeros()
return ratings, data
items_col = 'item_id'
users_col = 'user_id'
ratings_col = 'rating'
threshold = 3
X, df = create_matrix(df, users_col, items_col, ratings_col, threshold)
X
def create_train_test(ratings, test_size = 0.2, seed = 1234):
"""
split the user-item interactions matrix into train and test set
by removing some of the interactions from every user and pretend
that we never seen them
Parameters
----------
ratings : scipy sparse csr_matrix [n_users, n_items]
The user-item interactions matrix
test_size : float between 0.0 and 1.0, default 0.2
Proportion of the user-item interactions for each user
in the dataset to move to the test set; e.g. if set to 0.2
and a user has 10 interactions, then 2 will be moved to the
test set
seed : int, default 1234
Seed for reproducible random splitting the
data into train/test set
Returns
-------
train : scipy sparse csr_matrix [n_users, n_items]
Training set
test : scipy sparse csr_matrix [n_users, n_items]
Test set
"""
assert test_size < 1.0 and test_size > 0.0
# Dictionary Of Keys based sparse matrix is more efficient
# for constructing sparse matrices incrementally compared with csr_matrix
train = ratings.copy().todok()
test = dok_matrix(train.shape)
# for all the users assign randomly chosen interactions
# to the test and assign those interactions to zero in the training;
# when computing the interactions to go into the test set,
# remember to round up the numbers (e.g. a user has 4 ratings, if the
# test_size is 0.2, then 0.8 ratings will go to test, thus we need to
# round up to ensure the test set gets at least 1 rating)
rstate = np.random.RandomState(seed)
for u in range(ratings.shape[0]):
split_index = ratings[u].indices
n_splits = ceil(test_size * split_index.shape[0])
test_index = rstate.choice(split_index, size = n_splits, replace = False)
test[u, test_index] = ratings[u, test_index]
train[u, test_index] = 0
train, test = train.tocsr(), test.tocsr()
return train, test
X_train, X_test = create_train_test(X, test_size = 0.2, seed = 1234)
X_train
# The following section provides an implementation of the algorithm from scratch.
class BPR:
"""
Bayesian Personalized Ranking (BPR) for implicit feedback data
Parameters
----------
learning_rate : float, default 0.01
learning rate for gradient descent
n_factors : int, default 20
Number/dimension of user and item latent factors
n_iters : int, default 15
Number of iterations to train the algorithm
batch_size : int, default 1000
batch size for batch gradient descent, the original paper
uses stochastic gradient descent (i.e., batch size of 1),
but this can make the training unstable (very sensitive to
learning rate)
reg : int, default 0.01
Regularization term for the user and item latent factors
seed : int, default 1234
Seed for the randomly initialized user, item latent factors
verbose : bool, default True
Whether to print progress bar while training
Attributes
----------
user_factors : 2d nd.array [n_users, n_factors]
User latent factors learnt
item_factors : 2d nd.array [n_items, n_factors]
Item latent factors learnt
References
----------
<NAME>, <NAME>, <NAME>, <NAME>
Bayesian Personalized Ranking from Implicit Feedback
- https://arxiv.org/abs/1205.2618
"""
def __init__(self, learning_rate = 0.01, n_factors = 15, n_iters = 10,
batch_size = 1000, reg = 0.01, seed = 1234, verbose = True):
self.reg = reg
self.seed = seed
self.verbose = verbose
self.n_iters = n_iters
self.n_factors = n_factors
self.batch_size = batch_size
self.learning_rate = learning_rate
# to avoid re-computation at predict
self._prediction = None
def fit(self, ratings):
"""
Parameters
----------
ratings : scipy sparse csr_matrix [n_users, n_items]
sparse matrix of user-item interactions
"""
indptr = ratings.indptr
indices = ratings.indices
n_users, n_items = ratings.shape
# ensure batch size makes sense, since the algorithm involves
# for each step randomly sample a user, thus the batch size
# should be smaller than the total number of users or else
# we would be sampling the user with replacement
batch_size = self.batch_size
if n_users < batch_size:
batch_size = n_users
sys.stderr.write('WARNING: Batch size is greater than number of users,'
'switching to a batch size of {}\n'.format(n_users))
batch_iters = n_users // batch_size
# initialize random weights
rstate = np.random.RandomState(self.seed)
self.user_factors = rstate.normal(size = (n_users, self.n_factors))
self.item_factors = rstate.normal(size = (n_items, self.n_factors))
# progress bar for training iteration if verbose is turned on
loop = range(self.n_iters)
if self.verbose:
loop = trange(self.n_iters, desc = self.__class__.__name__)
for _ in loop:
for _ in range(batch_iters):
sampled = self._sample(n_users, n_items, indices, indptr)
sampled_users, sampled_pos_items, sampled_neg_items = sampled
self._update(sampled_users, sampled_pos_items, sampled_neg_items)
return self
def _sample(self, n_users, n_items, indices, indptr):
"""sample batches of random triplets u, i, j"""
sampled_pos_items = np.zeros(self.batch_size, dtype = int)
sampled_neg_items = np.zeros(self.batch_size, dtype = int)
sampled_users = np.random.choice(
n_users, size = self.batch_size, replace = False)
for idx, user in enumerate(sampled_users):
pos_items = indices[indptr[user]:indptr[user + 1]]
pos_item = np.random.choice(pos_items)
neg_item = np.random.choice(n_items)
while neg_item in pos_items:
neg_item = np.random.choice(n_items)
sampled_pos_items[idx] = pos_item
sampled_neg_items[idx] = neg_item
return sampled_users, sampled_pos_items, sampled_neg_items
def _update(self, u, i, j):
"""
update according to the bootstrapped user u,
positive item i and negative item j
"""
user_u = self.user_factors[u]
item_i = self.item_factors[i]
item_j = self.item_factors[j]
# decompose the estimator, compute the difference between
# the score of the positive items and negative items; a
# naive implementation might look like the following:
# r_ui = np.diag(user_u.dot(item_i.T))
# r_uj = np.diag(user_u.dot(item_j.T))
# r_uij = r_ui - r_uj
# however, we can do better, so
# for batch dot product, instead of doing the dot product
# then only extract the diagonal element (which is the value
# of that current batch), we perform a hadamard product,
# i.e. matrix element-wise product then do a sum along the column will
# be more efficient since it's less operations
# http://people.revoledu.com/kardi/tutorial/LinearAlgebra/HadamardProduct.html
# r_ui = np.sum(user_u * item_i, axis = 1)
#
# then we can achieve another speedup by doing the difference
# on the positive and negative item up front instead of computing
# r_ui and r_uj separately, these two idea will speed up the operations
# from 1:14 down to 0.36
r_uij = np.sum(user_u * (item_i - item_j), axis = 1)
sigmoid = np.exp(-r_uij) / (1.0 + np.exp(-r_uij))
# repeat the 1 dimension sigmoid n_factors times so
# the dimension will match when doing the update
sigmoid_tiled = np.tile(sigmoid, (self.n_factors, 1)).T
# update using gradient descent
grad_u = sigmoid_tiled * (item_j - item_i) + self.reg * user_u
grad_i = sigmoid_tiled * -user_u + self.reg * item_i
grad_j = sigmoid_tiled * user_u + self.reg * item_j
self.user_factors[u] -= self.learning_rate * grad_u
self.item_factors[i] -= self.learning_rate * grad_i
self.item_factors[j] -= self.learning_rate * grad_j
return self
def predict(self):
"""
Obtain the predicted ratings for every users and items
by doing a dot product of the learnt user and item vectors.
The result will be cached to avoid re-computing it every time
we call predict, thus there will only be an overhead the first
time we call it. Note, ideally you probably don't need to compute
this as it returns a dense matrix and may take up huge amounts of
memory for large datasets
"""
if self._prediction is None:
self._prediction = self.user_factors.dot(self.item_factors.T)
return self._prediction
def _predict_user(self, user):
"""
returns the predicted ratings for the specified user,
this is mainly used in computing evaluation metric
"""
user_pred = self.user_factors[user].dot(self.item_factors.T)
return user_pred
def recommend(self, ratings, N = 5):
"""
Returns the top N ranked items for given user id,
excluding the ones that the user already liked
Parameters
----------
ratings : scipy sparse csr_matrix [n_users, n_items]
sparse matrix of user-item interactions
N : int, default 5
top-N similar items' N
Returns
-------
recommendation : 2d nd.array [number of users, N]
each row is the top-N ranked item for each query user
"""
n_users = ratings.shape[0]
recommendation = np.zeros((n_users, N), dtype = np.uint32)
for user in range(n_users):
top_n = self._recommend_user(ratings, user, N)
recommendation[user] = top_n
return recommendation
def _recommend_user(self, ratings, user, N):
"""the top-N ranked items for a given user"""
scores = self._predict_user(user)
# compute the top N items, removing the items that the user already liked
# from the result and ensure that we don't get out of bounds error when
# we ask for more recommendations than that are available
liked = set(ratings[user].indices)
count = N + len(liked)
if count < scores.shape[0]:
# when trying to obtain the top-N indices from the score,
# using argpartition to retrieve the top-N indices in
# unsorted order and then sort them will be faster than doing
# straight up argsort on the entire score
# http://stackoverflow.com/questions/42184499/cannot-understand-numpy-argpartition-output
ids = np.argpartition(scores, -count)[-count:]
best_ids = np.argsort(scores[ids])[::-1]
best = ids[best_ids]
else:
best = np.argsort(scores)[::-1]
top_n = list( islice((rec for rec in best if rec not in liked), N) )
return top_n
def get_similar_items(self, N = 5, item_ids = None):
"""
return the top N similar items for itemid, where
cosine distance is used as the distance metric
Parameters
----------
N : int, default 5
top-N similar items' N
item_ids : 1d iterator, e.g. list or numpy array, default None
the item ids that we wish to find the similar items
of, the default None will compute the similar items
for all the items
Returns
-------
similar_items : 2d nd.array [number of query item_ids, N]
each row is the top-N most similar item id for each
query item id
"""
# cosine distance is proportional to normalized euclidean distance,
# thus we normalize the item vectors and use euclidean metric so
# we can use the more efficient kd-tree for nearest neighbor search;
# also an item will always be nearest to itself, so we add 1 to
# get an additional nearest item and remove itself at the end
normed_factors = normalize(self.item_factors)
knn = NearestNeighbors(n_neighbors = N + 1, metric = 'euclidean')
knn.fit(normed_factors)
# returns a distance, index tuple,
# we don't actually need the distance
if item_ids is not None:
normed_factors = normed_factors[item_ids]
_, items = knn.kneighbors(normed_factors)
similar_items = items[:, 1:].astype(np.uint32)
return similar_items
# +
# parameters were randomly chosen
bpr_params = {'reg': 0.01,
'learning_rate': 0.1,
'n_iters': 160,
'n_factors': 15,
'batch_size': 100}
bpr = BPR(**bpr_params)
bpr.fit(X_train)
# -
# ## Evaluation
#
# In recommender systems, we are often interested in how well the method can rank a given set of items. To measure that we'll use AUC (Area Under the ROC Curve) as our evaluation metric. The best possible value the AUC metric can take is 1, and any non-random ranking that makes sense should have an AUC > 0.5. An intuitive explanation of AUC is that it is the probability that when we draw two examples at random, their predicted pairwise ranking is correct. The following [documentation](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/model_selection/auc/auc.ipynb) has a more detailed discussion of AUC in case you're not familiar with it.
def auc_score(model, ratings):
"""
computes area under the ROC curve (AUC).
The full name should probably be mean
auc score as it is computing the auc
for every user's prediction and actual
interaction and taking the average for
all users
Parameters
----------
model : BPR instance
the trained BPR model
ratings : scipy sparse csr_matrix [n_users, n_items]
sparse matrix of user-item interactions
Returns
-------
auc : float 0.0 ~ 1.0
"""
auc = 0.0
n_users, n_items = ratings.shape
for user, row in enumerate(ratings):
y_pred = model._predict_user(user)
y_true = np.zeros(n_items)
y_true[row.indices] = 1
auc += roc_auc_score(y_true, y_pred)
auc /= n_users
return auc
print(auc_score(bpr, X_train))
print(auc_score(bpr, X_test))
# ## Item Recommendations
#
# Now that we have trained the model, we can get the most similar items using the `get_similar_items` method, specifying the number of similar items with the `N` argument. This can be seen as the "people who like/buy this also like/buy that" functionality, since it recommends similar items for a given item.
bpr.get_similar_items(N = 5)
# On the other hand, we can also generate a top-N recommended item for each given user, by passing the sparse rating data and `N` to the `recommend` method.
bpr.recommend(X_train, N = 5)
# For these two methods, we can go one step further and look-up the actual item for these indices to see if they make intuitive sense. If we wish to do this, the movielens dataset has a `u.item` file that contains metadata about the movie.
# # Reference
#
# - [Wiki: Area Under the ROC Curve](http://www.recsyswiki.com/wiki/Area_Under_the_ROC_Curve)
# - [StackExchange: Derivative of sigmoid function](http://math.stackexchange.com/questions/78575/derivative-of-sigmoid-function-sigma-x-frac11e-x)
# - [Blog: What you wanted to know about AUC](http://fastml.com/what-you-wanted-to-know-about-auc/)
# - [Blog: Learning to Rank Sketchfab Models with LightFM](http://blog.ethanrosenthal.com/2016/11/07/implicit-mf-part-2/)
# - [Blog (Chinese Mandarin): BPR [Bayesian Personalized Ranking]](http://liuzhiqiangruc.iteye.com/blog/2073526)
# - [Github: An implementation of Bayesian Personalised Ranking in Theano](https://github.com/bbc/theano-bpr)
# - [Paper: <NAME>, <NAME>, <NAME>, <NAME>-Thieme Bayesian Personalized Ranking from Implicit Feedback](https://arxiv.org/abs/1205.2618)
| recsys/4_bpr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 (astro)
# language: python
# name: astro
# ---
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_moons, load_digits
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture
# -
# # GMM practical activity
#
# ## Part 1: Synthetic data (50%)
#
# - Train a GMM model on the following two-dimensional synthetic `two-moons` dataset
# - Find the best number of clusters using the BIC criterion for the spherical, diagonal and full covariance cases, and plot BIC versus the number of clusters
# - Visualize the responsibilities and the centroids for the best case associated with each covariance type
# - Compare and discuss based on your results and observations
# +
X, y = make_moons(1000, noise=0.11)
fig, ax = plt.subplots(figsize=(5, 3), tight_layout=True)
ax.scatter(X[:, 0], X[:, 1], c='k', s=5);
# -
# ## Part 2: Real data
#
# - Train a variational GMM model on the real `digits` dataset of 8x8 handwritten-digit images
# - Project the data with PCA to reduce the dimensionality (64)
# - Use two components
# - Find the number of components that preserves 95% of the variance
# - Train a variational GMM using full covariance
# - (2 dimensions) Visualize the dataset labels and your GMM predictions. How many clusters does it find? Do they match the true classes? Compare and discuss
# - (X dimensions) Generate and visualize 20 digits from your trained GMM and the inverse PCA transform. Do they match the true classes? Compare and discuss
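# One way to answer the 95%-variance question: fit a PCA with all 64 components
# and read off the cumulative explained-variance ratio. (Passing a float, as in
# `PCA(n_components=0.95)`, performs the same selection internally.)

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data                           # shape (1797, 64)
pca = PCA().fit(X)                               # keep all components
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_95 = int(np.searchsorted(cum_var, 0.95)) + 1   # first count reaching 95%
print(f"{n_95} components preserve 95% of the variance")
```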
# +
data = load_digits()
X = data['data']
y = data['target']
n_pix = 8
fig, ax = plt.subplots(1, 10, tight_layout=True, figsize=(8, 1))
for ax_, img in zip(ax, X):
ax_.matshow(img.reshape(n_pix, n_pix), cmap=plt.cm.Greys)
ax_.axis('off')
# -
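# A hedged sketch of the generation step: fit a variational GMM in the PCA
# space and map its samples back to pixel space with the inverse transform.
# The component count (20), iteration cap, and seed are illustrative choices,
# not values prescribed by the activity.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.mixture import BayesianGaussianMixture

X = load_digits().data
pca = PCA(n_components=0.95).fit(X)          # keep 95% of the variance
Z = pca.transform(X)

bgmm = BayesianGaussianMixture(n_components=20, covariance_type="full",
                               max_iter=300, random_state=0).fit(Z)
samples, _ = bgmm.sample(20)
new_digits = pca.inverse_transform(samples)  # back to 64-D pixel space

fig, ax = plt.subplots(2, 10, figsize=(8, 2), tight_layout=True)
for ax_, img in zip(ax.ravel(), new_digits):
    ax_.matshow(img.reshape(8, 8), cmap=plt.cm.Greys)
    ax_.axis('off')
```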
| lectures/9_mixture_models/practical.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:alpacaenv]
# language: python
# name: conda-env-alpacaenv-py
# ---
# Run this notebook inside the `alpacaenv` conda environment (see the kernelspec above);
# activate it in the shell before launching Jupyter, not as a Python statement.
# +
# Import Libraries
import os
import requests
import pandas as pd
from dotenv import load_dotenv
import json
# %matplotlib inline
# -
# Load the .env environment variables
load_dotenv()
# Pull in API Key
api_key = os.getenv("glassnode_api")
type(api_key)
# Define crypto currencies to pull
crypto_list = ["BTC", "BCH", "ETH", "LTC", "USDT", "DOGE"]
# Define URLs
price_url = 'https://api.glassnode.com/v1/metrics/market/price_usd'
volume_url = 'https://api.glassnode.com/v1/metrics/transactions/transfers_volume_sum'
mkt_cap_url = 'https://api.glassnode.com/v1/metrics/market/marketcap_usd'
mining_url = 'https://api.glassnode.com/v1/metrics/mining/volume_mined_sum'
exchange_fee_url = 'https://api.glassnode.com/v1/metrics/fees/exchanges_sum'
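# The per-currency cells below repeat the same request-then-parse pattern for
# every metric. A small helper can centralize the parsing. This sketch assumes
# every endpoint returns the `[{"t": ..., "v": ...}]` JSON body the notebook
# already handles with `pd.read_json`; the helper name `metric_frame` is ours.

```python
import io

import pandas as pd


def metric_frame(response_text, value_name):
    """Parse a glassnode-style JSON body into a Date-indexed DataFrame."""
    df = pd.read_json(io.StringIO(response_text), convert_dates=['t'])
    df.columns = ['Date', value_name]
    return df.set_index('Date')


# Demo with an inline response body instead of a live API call:
sample = '[{"t": 1609459200, "v": 124.3}, {"t": 1609545600, "v": 130.1}]'
print(metric_frame(sample, 'Price'))
```

# With the helper, each pull reduces to something like
# `metric_frame(requests.get(price_url, params=...).text, 'Price')`.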
# ## LTC API Data Pull
# +
# Price API Request
ltc_price_res = requests.get(price_url,
params={'a': 'LTC',
'i': '24h',
'api_key': api_key})
# Convert price to Pandas Dataframe, set index to time and clean up file
ltc_price_df = pd.read_json(ltc_price_res.text, convert_dates=['t'])
ltc_price_df.columns = ['Date', 'Price']
ltc_price_df.set_index('Date', inplace=True)
# Volume API Request
ltc_volume_res = requests.get(volume_url,
params={'a': 'LTC',
'i': '24h',
'api_key': api_key})
# Convert volume to Pandas Dataframe, set index to time and clean up file
ltc_volume_df = pd.read_json(ltc_volume_res.text, convert_dates=['t'])
ltc_volume_df.columns = ['Date', 'Volume']
ltc_volume_df.set_index('Date', inplace=True)
# Market Cap API Request
ltc_mkt_cap_res = requests.get(mkt_cap_url,
params={'a': 'LTC',
'i': '24h',
'api_key': api_key})
# Convert Market Cap to Pandas Dataframe, set index to time and clean up file
ltc_mkt_cap_df = pd.read_json(ltc_mkt_cap_res.text, convert_dates=['t'])
ltc_mkt_cap_df.columns = ['Date', 'Market Cap']
ltc_mkt_cap_df.set_index('Date', inplace=True)
# Mining API Request
ltc_mining_res = requests.get(mining_url,
params={'a': 'LTC',
'i': '24h',
'api_key': api_key})
# Convert Mining to Pandas Dataframe, set index to time and clean up file
ltc_mining_df = pd.read_json(ltc_mining_res.text, convert_dates=['t'])
ltc_mining_df.columns = ['Date', 'Blocks Mined']
ltc_mining_df.set_index('Date', inplace=True)
# -
ltc_price_df.head(5)
ltc_volume_df.head(5)
ltc_mkt_cap_df.head(5)
ltc_mining_df.head(5)
# ## LTC Data Aggregating & Cleaning
# +
# Define all the different data frames into a list
ltc_frames = [ltc_price_df, ltc_volume_df, ltc_mkt_cap_df, ltc_mining_df]
# Concatenate all the dataframes into one
ltc_data = pd.concat(ltc_frames, axis=1, join="outer", ignore_index=False)
ltc_data
# -
| EP Code/Get_Glen_a_condo_LTC - EP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# Licensed under the MIT License.
#
# Copyright (c) 2021-2031. All rights reserved.
#
# # FLAML Specified Search Space
# +
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score
from flaml import AutoML
from flaml.data import get_output_from_log
from flaml.model import LGBMEstimator, BaseEstimator
from flaml import tune
import flaml
print(flaml.__version__)
# +
df30 = pd.read_csv('../../crystal_ball/data_collector/structured_data/leaf.csv')
print(df30.shape)
df30.head()
# +
# train, test split for df30
y30 = df30['species']
X30 = df30.drop('species', axis=1)
X_train30, X_test30, y_train30, y_test30 = train_test_split(X30, y30, test_size=0.2,
random_state=10, shuffle=True, stratify=y30)
X_train30.reset_index(inplace=True, drop=True)
X_test30.reset_index(inplace=True, drop=True)
y_train30.reset_index(inplace=True, drop=True)
y_test30.reset_index(inplace=True, drop=True)
print(X_train30.shape, X_test30.shape, y_train30.shape, y_test30.shape)
print(y_train30.nunique(), y_test30.nunique())
# -
def plot_learning_curve(training_time_list, best_error_list):
plt.title('Learning Curve')
plt.xlabel('Training Time (s)')
plt.ylabel('Best Validation Loss')
plt.scatter(training_time_list, best_error_list)
plt.step(training_time_list, best_error_list, where='post')
plt.show()
# ## Customize LGBM for Leaves30
#
# * LGBM is included in FLAML's estimator list
# * Find all the estimators in `flaml.model`:https://github.com/microsoft/FLAML/blob/main/flaml/model.py
# * LGBM's default search space in FLAML: https://github.com/microsoft/FLAML/blob/main/flaml/model.py#L199
# * LGBM params: https://lightgbm.readthedocs.io/en/latest/Parameters.html
# * Tips for LGBM param tuning: https://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html
# * `tune` API search space functions: https://docs.ray.io/en/master/tune/api_docs/search_space.html
# * You can inherit from either `BaseEstimator` or `LGBMEstimator`
# * If your customized estimator inherits from `BaseEstimator`, it can be time-consuming to resolve errors caused by FLAML-specific parameters. If you instead inherit from a built-in FLAML estimator such as `LGBMEstimator`, you inherit unwanted estimator params; the workaround is to keep them in `search_space()` and set them to None in `__init__()`. This is awkward and less efficient, but it is a simple solution.
# * Better to have `super().__init__(**params)` in `__init__()` inhert params from the super class
# * If you want to add/remove params for the customized estimator, you can specify it in `self.params` after using `super().__init__(**params)`
# * You can also directly add params in `search_space()` without specifying them in `self.params` under `__init__()`
class my_lgbm(LGBMEstimator):
def __init__(self, **params):
super().__init__(**params)
if "min_child_samples" in self.params:
self.params["min_child_samples"] = None # set as None to remove the param from the estimator
@classmethod
def search_space(cls, data_size, **params):
upper = min(32768, int(data_size))
return {
'n_estimators': {'domain': tune.lograndint(lower=4, upper=upper), 'init_value': 4, 'low_cost_init_value': 4},
'num_leaves': {'domain': tune.lograndint(lower=4, upper=upper), 'init_value': 4, 'low_cost_init_value': 4},
'min_child_samples': {'domain': tune.lograndint(lower=2, upper=2**7), 'init_value': 20}, # ignored param but has to appear here
'learning_rate': {'domain': tune.loguniform(lower=1 / 1024, upper=1.0), 'init_value': 0.1},
'subsample': {'domain': tune.uniform(lower=0.1, upper=1.0), 'init_value': 1.0},
'log_max_bin': {'domain': tune.lograndint(lower=3, upper=10), 'init_value': 8},
'colsample_bytree': {'domain': tune.uniform(lower=0.01, upper=1.0), 'init_value': 1.0},
'reg_alpha': {'domain': tune.loguniform(lower=1 / 1024, upper=1024), 'init_value': 1 / 1024},
'reg_lambda': {'domain': tune.loguniform(lower=1 / 1024, upper=1024), 'init_value': 1.0},
'max_depth': {'domain': tune.lograndint(lower=2, upper=upper), 'init_value': 7, 'low_cost_init_value': 7},
'min_data_in_leaf': {'domain': tune.lograndint(lower=2, upper=upper), 'init_value': 50, 'low_cost_init_value': 50},
'extra_trees': {'domain': True, 'init_value': True, 'low_cost_init_value': True},
}
# +
np.random.seed(10)  # so repeated runs of the same code give the same FLAML output
automl30 = AutoML()
settings = {
"time_budget": 300, # total running time in seconds
"metric": 'accuracy',
"task": 'multi', # multiclass classification
"n_splits": 5,
"estimator_list": ['my_lgbm'],
"split_type": 'stratified',
"verbose": 0,
"log_file_name": 'logs/automl_specified_space_leaf30.log', # flaml log file
}
automl30.add_learner(learner_name='my_lgbm', learner_class=my_lgbm)
automl30.fit(X_train=X_train30, y_train=y_train30, **settings)
# +
print('Selected Estimator:', automl30.model.estimator)
print('Best loss on validation data: {0:.4g}'.format(automl30.best_loss))
print('Training duration of best run: {0:.4g} s'.format(automl30.best_config_train_time))
print()
y_pred30 = automl30.predict(X_test30)
balanced_accuracy = balanced_accuracy_score(y_test30, y_pred30)
print(f'The balanced accuracy on testing data from optimized model is {balanced_accuracy}')
# save the optimized automl object
with open('trained_models/automl_specified_space_leaf30.pkl', 'wb') as f:
pickle.dump(automl30, f, pickle.HIGHEST_PROTOCOL)
# plot FLAML learning curve
training_time_list, best_error_list, error_list, config_list, logged_metric_list = \
get_output_from_log(filename=settings['log_file_name'], time_budget=60)
plot_learning_curve(training_time_list, best_error_list)
# -
# ## CFO vs BlendSearch in Larger Search Space for Leaves30
#
# * Larger search space
# * Added `path_smooth`, `max_bin`, and `bagging_freq`; increased the range of `min_data_in_leaf`
# * These changes were intended to reduce overfitting and increase accuracy
# * Validation loss was reduced from 0.005614 to 0.002817, but the testing results dropped from 0.84 to 0.7
# * CFO is still faster than BlendSearch without sacrificing the testing performance in this case
#
# * BS doesn't recognize all the legal LGBM params, such as `extra_trees`
class my_lgbm(LGBMEstimator):
def __init__(self, **params):
super().__init__(**params)
if "min_child_samples" in self.params:
self.params["min_child_samples"] = None
if "subsample_freq" in self.params:
self.params["subsample_freq"] = None
@classmethod
def search_space(cls, data_size, **params):
upper = min(32768, int(data_size))
return {
'n_estimators': {'domain': tune.lograndint(lower=4, upper=upper), 'init_value': 4, 'low_cost_init_value': 4},
'num_leaves': {'domain': tune.lograndint(lower=4, upper=upper), 'init_value': 4, 'low_cost_init_value': 4},
'min_child_samples': {'domain': tune.lograndint(lower=2, upper=2**7), 'init_value': 20}, # ignored param (because of `min_data_in_leaf`) but has to appear here
'learning_rate': {'domain': tune.loguniform(lower=1 / 1024, upper=1.0), 'init_value': 0.1},
'subsample': {'domain': tune.uniform(lower=0.1, upper=1.0), 'init_value': 1.0}, # same as `bagging_fraction`
'log_max_bin': {'domain': tune.lograndint(lower=3, upper=10), 'init_value': 8},
'colsample_bytree': {'domain': tune.uniform(lower=0.01, upper=1.0), 'init_value': 1.0}, # same as `feature_fraction`
'reg_alpha': {'domain': tune.loguniform(lower=1 / 1024, upper=1024), 'init_value': 1 / 1024},
'reg_lambda': {'domain': tune.loguniform(lower=1 / 1024, upper=1024), 'init_value': 1.0},
'max_depth': {'domain': tune.lograndint(lower=2, upper=upper), 'init_value': 7, 'low_cost_init_value': 7},
'min_data_in_leaf': {'domain': tune.randint(lower=2, upper=upper), 'init_value': 50, 'low_cost_init_value': 50},
'path_smooth': {'domain': tune.randint(lower=0, upper=10), 'init_value': 0, 'low_cost_init_value': 0},
'max_bin': {'domain': tune.randint(lower=240, upper=500), 'init_value': 255, 'low_cost_init_value': 255},
'bagging_freq': {'domain': tune.randint(lower=0, upper=5), 'init_value': 0},
'subsample_freq': {'domain': tune.randint(lower=0, upper=5), 'init_value': 0} # ignored param (because of `bagging_freq`) but has to appear here
}
# ### CFO
# +
np.random.seed(10)  # so repeated runs of the same code give the same FLAML output
automl30 = AutoML()
settings = {
"time_budget": 300, # total running time in seconds
"metric": 'accuracy',
"task": 'multi', # multiclass classification
"n_splits": 5,
"estimator_list": ['my_lgbm'],
"split_type": 'stratified',
"verbose": 0,
"hpo_method": 'cfo',
"log_file_name": 'logs/automl_specified_space_cfo_leaf30.log', # flaml log file
}
automl30.add_learner(learner_name='my_lgbm', learner_class=my_lgbm)
automl30.fit(X_train=X_train30, y_train=y_train30, **settings)
print('Selected Estimator:', automl30.model.estimator)
print('Best loss on validation data: {0:.4g}'.format(automl30.best_loss))
print('Training duration of best run: {0:.4g} s'.format(automl30.best_config_train_time))
print()
y_pred30 = automl30.predict(X_test30)
balanced_accuracy = balanced_accuracy_score(y_test30, y_pred30)
print(f'The balanced accuracy on testing data from optimized model is {balanced_accuracy}')
# save the optimized automl object
with open('trained_models/automl_specified_space_cfo_leaf30.pkl', 'wb') as f:
pickle.dump(automl30, f, pickle.HIGHEST_PROTOCOL)
# plot FLAML learning curve
training_time_list, best_error_list, error_list, config_list, logged_metric_list = \
get_output_from_log(filename=settings['log_file_name'], time_budget=60)
plot_learning_curve(training_time_list, best_error_list)
# -
# ### Blend Search
# +
np.random.seed(10)  # so repeated runs of the same code give the same FLAML output
automl30 = AutoML()
settings = {
"time_budget": 300, # total running time in seconds
"metric": 'accuracy',
"task": 'multi', # multiclass classification
"n_splits": 5,
"estimator_list": ['my_lgbm'],
"split_type": 'stratified',
"verbose": 0,
"hpo_method": 'bs',
"log_file_name": 'logs/automl_specified_space_bs_leaf30.log', # flaml log file
}
automl30.add_learner(learner_name='my_lgbm', learner_class=my_lgbm)
automl30.fit(X_train=X_train30, y_train=y_train30, **settings)
print('Selected Estimator:', automl30.model.estimator)
print('Best loss on validation data: {0:.4g}'.format(automl30.best_loss))
print('Training duration of best run: {0:.4g} s'.format(automl30.best_config_train_time))
print()
y_pred30 = automl30.predict(X_test30)
balanced_accuracy = balanced_accuracy_score(y_test30, y_pred30)
print(f'The balanced accuracy on testing data from optimized model is {balanced_accuracy}')
# save the optimized automl object
with open('trained_models/automl_specified_space_bs_leaf30.pkl', 'wb') as f:
pickle.dump(automl30, f, pickle.HIGHEST_PROTOCOL)
# plot FLAML learning curve
training_time_list, best_error_list, error_list, config_list, logged_metric_list = \
get_output_from_log(filename=settings['log_file_name'], time_budget=60)
plot_learning_curve(training_time_list, best_error_list)
| code/queen_lotus/flaml_experiments/flaml_specified_search_space.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from mlpractice.regression import LinearRegression
from mlpractice.datasets import load_iris
# %matplotlib inline
iris = load_iris().iris
lr = LinearRegression(features=['sepal_length', 'petal_length', 'sepal_width'],
label='petal_width',
fit_intercept=True)
lr_model = lr.fit(dataset=iris)
lr_model.coef
iris_pred = lr_model.predict(dataset=iris, inplace=False)
iris_pred.plot.scatter(x='petal_width', y='prediction')
lr.params.set_params(prediction='pred')
lr_model2 = lr.fit(dataset=iris)
iris_pred2 = lr_model2.predict(dataset=iris, inplace=False)
iris_pred2
# +
from sklearn import linear_model
model = linear_model.LinearRegression()
# -
model.fit(X=iris.loc[:, ['sepal_length', 'petal_width', 'petal_length']], y=iris.loc[:, 'sepal_width'])
model.coef_
model.intercept_
| examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/SLCFLAB/Data-Science-Python/blob/main/Day%203/3_2.%20Data%20exploration%20with%20Iris%20dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ### Reference
# https://github.com/nhkim55/bigdata_fintech_python (Big Data Fintech course)
# ## Iris dataset
# A dataset built into sklearn: sepal and petal lengths and widths for three species of iris
# Target
# * 0: iris-setosa
# * 1: iris-versicolor
# * 2: iris-virginica
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_iris
#Load Dataset
iris = load_iris()
df = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
df.head()
# -
df.shape
df.describe()
df['target'].value_counts()
# ## Scatter plot
df.plot(kind="scatter", x="sepal length (cm)", y="sepal width (cm)")
plt.show()
sns.set_style("whitegrid");
sns.FacetGrid(df, hue="target", height=4) \
.map(plt.scatter,"sepal length (cm)","sepal width (cm)") \
.add_legend()
plt.show()
# ### Rename column name
df.rename(columns={df.columns[0] : 'SL',
df.columns[1] : 'SW',
df.columns[2] : 'PL',
df.columns[3] : 'PW',
                   df.columns[4] : 'Y'}, inplace = True)  # inplace=True actually renames the columns; without it nothing changes
df.head()
# ### Mean by column
st = df.groupby(df.Y).mean()  # group by Y and take each group's mean
st.columns.name = "변수"  # set the name of the columns index to "변수" (variable)
st
# ### barplot
st.T.plot.bar(rot=0)  # rot: rotation of the x-axis tick labels
plt.title("mean by column")
plt.xlabel("column name")
plt.ylabel("mean")
plt.ylim(0,8)
plt.show()
# + [markdown] id="RK5gp6xGKv9p"
# ## Handling imbalanced classes
# + colab={"base_uri": "https://localhost:8080/"} id="Ua1dZmx8Kv9p" outputId="d6a678f8-21f0-4057-8cf4-56058b0bc26c"
# Import the libraries.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
# Load the iris data.
iris = load_iris()
# Create the feature matrix.
features = iris.data
# Create the target vector.
target = iris.target
# Remove the first 40 samples.
features = features[40:,:]
target = target[40:]
# Create a binary target vector where class 0 is the negative class.
target = np.where((target == 0), 0, 1)
# Look at the imbalanced target vector.
target
# -
iris_df = pd.DataFrame(data= np.c_[features, target],
columns= iris['feature_names'] + ['target'])
iris_df['target'].value_counts()
iris_df.shape
# + [markdown] id="WSR6LMEAK-TE"
# * Many scikit-learn algorithms accept a parameter that weights classes during training to reduce the impact of imbalance
# * RandomForestClassifier is a popular classification algorithm with a class_weight parameter
# * You can pass a dict that assigns the desired weight to each class
# + id="9I77-Ev6Kv9p" outputId="3b637c56-8bca-4038-ba6d-76413975d2de"
# Create the class weights.
weights = {0: .9, 1: 0.1}
# Create a random forest classifier with the weights.
RandomForestClassifier(class_weight=weights)
# + [markdown] id="hGXgUfnnLUZ6"
# * Alternatively, pass "balanced" to create weights automatically, inversely proportional to class frequencies
# + id="yKiimMm0Kv9p" outputId="4af48360-faa0-4473-e26e-9b85244cf0fd"
# Train a random forest model with balanced class weights.
RandomForestClassifier(class_weight="balanced")
# + [markdown] id="ORGvKXxQLcc3"
# * You can also downsample the majority class or upsample the minority class
# * In downsampling, you randomly sample from the majority class (the one with more samples) without replacement to build a subset the same size as the minority class
# * For example, if the minority class has 10 samples, randomly select 10 samples from the majority class and use the resulting 20 samples as the data
# + id="iWeLUpLZKv9q" outputId="8ea41f72-5c9a-440f-9a38-4bd1dc5e0fe9"
# Get the sample indices of each class.
i_class0 = np.where(target == 0)[0]
i_class1 = np.where(target == 1)[0]
# Number of samples in each class
n_class0 = len(i_class0)
n_class1 = len(i_class1)
# Randomly sample from class 1, without replacement,
# as many samples as class 0 has.
i_class1_downsampled = np.random.choice(i_class1, size=n_class0, replace=False)
# Join class 0's target vector with the downsampled class 1 target vector.
np.hstack((target[i_class0], target[i_class1_downsampled]))
# + id="TSP2Z2tVzhA0"
# Get the sample indices of each class.
i_class0 = np.where(target == 0)[0]
i_class1 = np.where(target == 1)[0]
# Number of samples in each class
n_class0 = len(i_class0)
n_class1 = len(i_class1)
# + colab={"base_uri": "https://localhost:8080/"} id="vSYNHs2RznH4" outputId="86f5c2c4-840e-47fb-f199-8ff1288e9ad0"
n_class1
# + id="ha9VHR6bKv9q" outputId="f3a7815f-6491-4c89-f9a6-65e8868acdba"
np.vstack((features[i_class0,:], features[i_class1_downsampled,:]))[0:5]
# + [markdown] id="OXbQTY_-L6Fj"
# * Another approach is to upsample the minority class
# * In upsampling, you randomly sample from the minority class with replacement until it matches the number of samples in the majority class
# * As a result, the majority and minority classes end up with the same number of samples
# * Upsampling is implemented much like downsampling, only in reverse
# + id="0ZKLqq4ZKv9q" outputId="c5fd006d-a90b-4fbe-dd49-198c6bee0bd4"
# Randomly sample from class 0, with replacement, as many samples as class 1 has.
i_class0_upsampled = np.random.choice(i_class0, size=n_class1, replace=True)
# Join the upsampled class 0 target vector with class 1's target vector.
np.concatenate((target[i_class0_upsampled], target[i_class1]))
# + id="_x1JxEnqKv9q" outputId="7818425e-c128-4b78-e8c4-22c30ce43935"
# Join the upsampled class 0 feature matrix with class 1's feature matrix.
np.vstack((features[i_class0_upsampled,:], features[i_class1,:]))[0:5]
# -
| Day 3/3_2. Data exploration with Iris dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Euler Problem 13
# ================
#
# Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
#
# 37107287533902102798797998220837590246510135740250
# 46376937677490009712648124896970078050417018260538
# 74324986199524741059474233309513058123726617309629
# 91942213363574161572522430563301811072406154908250
# 23067588207539346171171980310421047513778063246676
# 89261670696623633820136378418383684178734361726757
# 28112879812849979408065481931592621691275889832738
# 44274228917432520321923589422876796487670272189318
# 47451445736001306439091167216856844588711603153276
# 70386486105843025439939619828917593665686757934951
# 62176457141856560629502157223196586755079324193331
# 64906352462741904929101432445813822663347944758178
# 92575867718337217661963751590579239728245598838407
# 58203565325359399008402633568948830189458628227828
# 80181199384826282014278194139940567587151170094390
# 35398664372827112653829987240784473053190104293586
# 86515506006295864861532075273371959191420517255829
# 71693888707715466499115593487603532921714970056938
# 54370070576826684624621495650076471787294438377604
# 53282654108756828443191190634694037855217779295145
# 36123272525000296071075082563815656710885258350721
# 45876576172410976447339110607218265236877223636045
# 17423706905851860660448207621209813287860733969412
# 81142660418086830619328460811191061556940512689692
# 51934325451728388641918047049293215058642563049483
# 62467221648435076201727918039944693004732956340691
# 15732444386908125794514089057706229429197107928209
# 55037687525678773091862540744969844508330393682126
# 18336384825330154686196124348767681297534375946515
# 80386287592878490201521685554828717201219257766954
# 78182833757993103614740356856449095527097864797581
# 16726320100436897842553539920931837441497806860984
# 48403098129077791799088218795327364475675590848030
# 87086987551392711854517078544161852424320693150332
# 59959406895756536782107074926966537676326235447210
# 69793950679652694742597709739166693763042633987085
# 41052684708299085211399427365734116182760315001271
# 65378607361501080857009149939512557028198746004375
# 35829035317434717326932123578154982629742552737307
# 94953759765105305946966067683156574377167401875275
# 88902802571733229619176668713819931811048770190271
# 25267680276078003013678680992525463401061632866526
# 36270218540497705585629946580636237993140746255962
# 24074486908231174977792365466257246923322810917141
# 91430288197103288597806669760892938638285025333403
# 34413065578016127815921815005561868836468420090470
# 23053081172816430487623791969842487255036638784583
# 11487696932154902810424020138335124462181441773470
# 63783299490636259666498587618221225225512486764533
# 67720186971698544312419572409913959008952310058822
# 95548255300263520781532296796249481641953868218774
# 76085327132285723110424803456124867697064507995236
# 37774242535411291684276865538926205024910326572967
# 23701913275725675285653248258265463092207058596522
# 29798860272258331913126375147341994889534765745501
# 18495701454879288984856827726077713721403798879715
# 38298203783031473527721580348144513491373226651381
# 34829543829199918180278916522431027392251122869539
# 40957953066405232632538044100059654939159879593635
# 29746152185502371307642255121183693803580388584903
# 41698116222072977186158236678424689157993532961922
# 62467957194401269043877107275048102390895523597457
# 23189706772547915061505504953922979530901129967519
# 86188088225875314529584099251203829009407770775672
# 11306739708304724483816533873502340845647058077308
# 82959174767140363198008187129011875491310547126581
# 97623331044818386269515456334926366572897563400500
# 42846280183517070527831839425882145521227251250327
# 55121603546981200581762165212827652751691296897789
# 32238195734329339946437501907836945765883352399886
# 75506164965184775180738168837861091527357929701337
# 62177842752192623401942399639168044983993173312731
# 32924185707147349566916674687634660915035914677504
# 99518671430235219628894890102423325116913619626622
# 73267460800591547471830798392868535206946944540724
# 76841822524674417161514036427982273348055556214818
# 97142617910342598647204516893989422179826088076852
# 87783646182799346313767754307809363333018982642090
# 10848802521674670883215120185883543223812876952786
# 71329612474782464538636993009049310363619763878039
# 62184073572399794223406235393808339651327408011116
# 66627891981488087797941876876144230030984490851411
# 60661826293682836764744779239180335110989069790714
# 85786944089552990653640447425576083659976645795096
# 66024396409905389607120198219976047599490197230297
# 64913982680032973156037120041377903785566085089252
# 16730939319872750275468906903707539413042652315011
# 94809377245048795150954100921645863754710598436791
# 78639167021187492431995700641917969777599028300699
# 15368713711936614952811305876380278410754449733078
# 40789923115535562561142322423255033685442488917353
# 44889911501440648020369068063960672322193204149535
# 41503128880339536053299340368006977710650566631954
# 81234880673210146739058568557934581403627822703280
# 82616570773948327592232845941706525094512325230608
# 22918802058777319719839450180888072429661980811197
# 77158542502016545090413245809786882778948721859617
# 72107838435069186155435662884062257473692284509516
# 20849603980134001723930671666823555245252804609722
# 53503534226472524250874054075591789781264330331690
#
# +
numbers = """
37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690
"""
print(str(sum(map(int, numbers.split())))[:10])
# -
| Euler 013 - Large Sum.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import gzip
import json
import nltk
import pandas as pd
from gensim import corpora
from gensim.parsing import preprocessing
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm
tqdm.pandas()
nltk.download("stopwords")
# -
stopwords = {
"spanish": stopwords.words("spanish"),
"portuguese": stopwords.words("portuguese")
}
# +
data = []
for language in tqdm(["spanish", "portuguese"]):
for split in tqdm(["train", "test", "validation"]):
df = pd.read_json(f"../data/meli-challenge-2019/{language}.{split}.jsonl.gz", lines=True)
data.append(df)
data = pd.concat(data, ignore_index=True)
data.head()
# +
def clean_titles(row):
title = preprocessing.strip_tags(row["title"].lower())
title = preprocessing.strip_punctuation(title)
title = preprocessing.strip_numeric(title)
title = word_tokenize(title, language=row["language"])
title = [word for word in title if word not in stopwords[row["language"]]]
title = [word for word in title if len(word) >= 3]
return title
data["tokenized_title"] = data.progress_apply(clean_titles, axis=1)
# +
for language, lang_df in data.groupby("language"):  # scalar key; grouping by a list yields 1-tuples in newer pandas
dictionary = corpora.Dictionary(lang_df["tokenized_title"].tolist())
dictionary.filter_extremes(no_below=2, no_above=1, keep_n=50000)
dictionary.compactify()
dictionary.patch_with_special_tokens({
"[PAD]": 0,
"[UNK]": 1
})
data.loc[lang_df.index, "data"] = lang_df["tokenized_title"].progress_map(
lambda t: dictionary.doc2idx(
document=t,
unknown_word_index=1
)
)
label_to_target = {label: index for index, label in enumerate(lang_df["category"].unique())}
data.loc[lang_df.index, "target"] = lang_df["category"].progress_map(lambda l: label_to_target[l])
with gzip.open(f"../data/meli-challenge-2019/{language}_token_to_index.json.gz", "wt") as fh:
json.dump(dictionary.token2id, fh)
data.head()
# -
n_labels = data.groupby("language")["target"].max().to_dict()  # scalar keys, so the per-row lookup below works
n_labels
split_size = data.groupby(["language", "split"]).size().to_dict()
split_size
data["n_labels"] = data.apply(lambda r: n_labels[r["language"]] + 1, axis=1)
data["size"] = data.apply(lambda r: split_size[(r["language"], r["split"])], axis=1)
data.head()
for (language, split), sub_df in data.groupby(["language", "split"]):
sub_df.to_json(
f"../data/meli-challenge-2019/{language}.{split}.jsonl.gz",
lines=True,
orient="records"
)
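The index mapping that `doc2idx` performs above (with `[PAD]` patched to 0 and `[UNK]` to 1) can be sketched with a plain dict — `vocab` here is a hypothetical toy vocabulary, not the one built from the MeLi titles:

```python
# Toy vocabulary: special tokens first, then ordinary words.
vocab = {"[PAD]": 0, "[UNK]": 1, "celular": 2, "samsung": 3, "usado": 4}

def doc2idx(tokens, vocab, unknown_word_index=1):
    """Map each token to its vocabulary index, falling back to [UNK]."""
    return [vocab.get(token, unknown_word_index) for token in tokens]

print(doc2idx(["celular", "samsung", "nuevo"], vocab))  # "nuevo" is out of vocabulary
```

gensim's real `doc2idx` behaves the same way for known and unknown tokens, with the special indices supplied by `patch_with_special_tokens`.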
| docs/experiments/rnn_experiment/preprocess_meli_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''opensap_python_intro-SgMpohZV'': pipenv)'
# name: python3
# ---
# # Why are functions necessary?
# ## Motivation
#
# One of the basic concepts of mathematics is that of functions. A function $ f $ assigns to each element $ x $ of
# a set $ D $ exactly one element $ y $ of a set $ T $. The following examples show some well known functions.
#
# Examples:
#
# * $ f (x) = x ^ 2 \text{ for } x \in \mathbb{N} $
# * $ s (x, y) = x + y $ for $ x, y \in \mathbb{R} $
#
# A function is therefore clearly defined by its parameters and the associated mapping rule(s).
#
#
# ## Functions in programming
# A function in programming is structured similarly to a function in mathematics. A function in programming
# usually consists of a *name*, a series of *parameters* and a set of *instructions*. Furthermore,
# functions usually yield some result.
#
# The main goals of functions
# in programming are:
#
# - to structure programs
# - to make programs more readable
# - to enable reuse.
#
# As an example consider the following small program:
# +
names = ["Christian", "Stephan", "Lukas"]
for name in names:
    print("Hello", name)
# -
# In this program the `print()` function is used to create some output. Using the
# `print()` function has several advantages:
#
# 1. The `print` function is reusable. We already used it in several of our programs
# 1. The `print` function makes the program more readable
# - as it gives a name to some functionality (*print* in this case)
# - because the details of the implementation of the functionality are hidden
#
# The content of this week focuses on writing functions in Python and using them to structure programs.
# # Functions in Python
# As already mentioned in the previous section we have already used some functions from the Python standard library. For
# example, we already used the function [`print()`](https://docs.python.org/3/library/functions.html#print) to create
# output or the function [`int()`](https://docs.python.org/3/library/functions.html#int) to convert other datatypes into
# an integer.
#
# Of course, as in other programming languages, it is also possible to define own functions in Python. The following cell
# shows the definition of a simple function `double()`. This function doubles the value of a passed parameter and returns
# the result.
def double(x):
"""
Doubles the value x
"""
return x * 2
# Once the function is defined it can be called as shown in the following cell.
# +
d = double(21)
print(d)
print(double("Hello "))
# -
# Functions in Python are defined using the following syntax:
def function_name(parameter_list):
"""
docstring
"""
statements
return output_value(s)
# A function in Python consists of the following components:
#
# - The keyword `def` followed by a *function name*. The function name can be used to call the function
# - An optional *parameter list*. The parameter list can, therefore, be empty or contain multiple parameters. Several
# parameters are separated by commas
# - An optional *docstring (between the `"""`)*. It can be used to add documentation for the function
# - The function body. The function body is a code block consisting of statements and optional return values
# - As with all code blocks in Python the functional body is indented
# - The function body must contain at least one statement
# - The return values of the function follow the keyword `return`. Return values are optional
#
# The individual components a function comprises are explained in detail in the following sections. First, however, we
# learn how a function is called.
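Putting the pieces together before moving on, here is a small sketch showing every component at once — the name, parameter list, docstring, body, and return value (`add` is an illustrative function, not part of the course exercises):

```python
def add(x, y):
    """
    Return the sum of x and y.

    Mirrors the mathematical function s(x, y) = x + y from the motivation section.
    """
    result = x + y  # function body: at least one statement
    return result   # optional return value


print(add(2, 3))
print(add("Py", "thon"))
```

Note that, like `double()` above, the function works for any types that support `+`, so it also concatenates strings.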
# ## Function calls
#
# The cell below contains a Python program consisting of several parts. First, a function `greet()` is defined. Next, this
# function is called two times with different parameters.
# +
def greet(name):
return "May the force be with you, " + name
n = "Luke"
greeting = greet(n)
print(greeting)
print(greet("Christian"))
# -
# If this Python program is executed, the function `greet()` is defined first. This definition has no output. Then the
# function `greet()` is called twice with different parameters. The result of the function calls is displayed using a call
# to the function `print()`.
#
# The execution of the program is shown graphically in the figure below.
#
# 
# First, the variable n is set to the value `"Luke"`. Then the function `greet()` is called and the variable `n` is passed
# as a parameter. By calling the function, the value of the variable `n` is assigned to the parameter `name`. Inside the
# function the parameter `name` has now the value `"Luke"`.
# The result of the function is constructed by concatenating value `"May the force be with you, "` and the value of the
# parameter `name`. The outcome is `"May the force be with you, Luke"`.
# It is returned from the function. In the example the returned value is assigned to the variable `greeting`. Finally, the
# value of the variable greeting is displayed by calling the function `print()`.
#
# In the next step the function `greet()` is called again. This time the string `"Christian"` is passed as a parameter.
# The return value of the function is, therefore `"May the force be with you, Christian"`. This result is passed to the
# `print()` function directly without assigning it to any variable first.
| week_5/week_5_unit_1_whyfunctions_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training a single linear neuron with fixed input size
# Todo:
#
# 1. Generate and visualize some training data (for the network)
# 2. Train a network and collect the weights
# 3. Save the data to file for later use
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import seaborn as sns
import pandas as pd
# %matplotlib inline
sns.set_style(style='whitegrid')
plt.rcParams["patch.force_edgecolor"] = True
# ## Create a single training set
m = 2 # slope
c = 3 # intercept
x = np.random.rand(256)
noise = np.random.randn(256) / 4
y = x * m + c + noise
df = pd.DataFrame()
df['x'] = x
df['y'] = y
sns.lmplot(x='x', y='y', data=df)
# ## Implement Linear Regression
import torch
import torch.nn as nn
from torch.autograd import Variable
x_train = x.reshape(-1, 1).astype('float32')
y_train = y.reshape(-1, 1).astype('float32')
class LinearRegressionModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearRegressionModel, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = self.linear(x)
return out
input_dim = x_train.shape[1]
output_dim = y_train.shape[1]
input_dim, output_dim
model = LinearRegressionModel(input_dim, output_dim)
criterion = nn.MSELoss()
[w, b] = model.parameters()
def get_param_values():
return w.data[0][0], b.data[0]
def plot_current_fit(title=""):
plt.figure(figsize=(12,4))
plt.title(title)
plt.scatter(x, y, s=8)
w1 = w.data[0][0]
b1 = b.data[0]
x1 = np.array([0., 1.])
y1 = x1 * w1 + b1
plt.plot(x1, y1, 'r', label='Current Fit ({:.3f}, {:.3f})'.format(w1, b1))
plt.xlabel('x (input)')
plt.ylabel('y (target)')
plt.legend()
plt.show()
plot_current_fit('Before training')
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# +
def run_epoch(epoch):
# Convert from numpy array to torch tensors
inputs = Variable(torch.from_numpy(x_train))
labels = Variable(torch.from_numpy(y_train))
# Clear the gradients w.r.t. parameters
optimizer.zero_grad()
# Forward to get the outputs
outputs = model(inputs)
    # Calculate loss
loss = criterion(outputs, labels)
# Getting gradients from parameters
loss.backward()
# Updating parameters
optimizer.step()
# print('epoch {}, loss {}'.format(epoch, loss.data[0]))
return loss
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
fig, (ax1) = plt.subplots(1, figsize=(12, 6))
ax1.scatter(x, y, s=8)
w1, b1 = get_param_values()
x1 = np.array([0., 1.])
y1 = x1 * w1 + b1
fit, = ax1.plot(x1, y1, 'r', label='Predicted')
ax1.plot(x1, x1 * m + c, 'g', label='Real')
ax1.legend()
ax1.set_title('Linear Regression')
def init():
ax1.set_ylim(0, 6)
return fit,
def animate(i):
    loss = run_epoch(i)
    w1, b1 = get_param_values()
    y1 = x1 * w1 + b1
    fit.set_ydata(y1)
    return fit,  # with blit=True, FuncAnimation needs the updated artists back
epochs = np.arange(1, 250)
ani = FuncAnimation(fig, animate, epochs, init_func=init, interval=100, blit=True, repeat=False)
plt.show()
# -
get_param_values()
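As a sanity check on the values returned above, the slope and intercept SGD converges toward can be compared with the closed-form least-squares solution — a NumPy sketch on freshly generated, noise-free data from the same model (independent of the training loop):

```python
import numpy as np

# Noise-free samples of y = 2x + 3, the same slope m and intercept c as above
rng = np.random.default_rng(0)
x = rng.random(256)
y = 2 * x + 3

# Solve min ||A w - y||^2 with design matrix A = [x, 1]
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(slope, intercept)  # ~2.0 and ~3.0
```

With noisy training data the SGD estimates will scatter around these values rather than match them exactly.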
| nb/lineaer-regression-animation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Stack
class Stack:
"""
Stack data structure using list
"""
def __init__(self,value=None,name=None,verbose=False):
"""
Class initializer: Produces a stack with a single value or a list of values
        value: The initial value. Can be a single value or a list of values.
name: Optional name for the stack.
verbose: Boolean. Determines if each stack operation prints the item being pushed or popped.
"""
self._items = []
self._height=0
self._verbose=verbose
if value!=None:
if type(value)==list:
for v in value:
self._items.append(v)
self._height = len(value)
else:
self._items.append(value)
self._height = 1
else:
print("Empty stack created.")
if name!=None:
self._name=str(name)
else:
self._name=''
def pop(self):
if self._height == 0:
print ("Stack is empty. Nothing to pop.")
return None
else:
self._height -=1
item=self._items.pop()
if self._verbose:
print("{} popped".format(item))
return item
def push(self,value=None):
"""
        Pushes a single value or a list of values onto the stack.
"""
if value!=None:
if type(value)==list:
for v in value:
self._items.append(v)
if self._verbose:
print("{} pushed".format(v))
self._height += len(value)
else:
self._height +=1
self._items.append(value)
if self._verbose:
print("{} pushed".format(value))
else:
if self._verbose:
print("No value supplied, nothing was pushed.")
def isEmpty(self):
"""
Returns (Boolean) if the stack is empty
"""
return self._height==0
def draw(self):
"""
Prints the stack structure as a vertical representation
"""
        if self.isEmpty():
            return None
else:
print("="*15)
n=self._height
print('['+str(self._items[n-1])+']')
for i in range(n-2,-1,-1):
print(" | ")
print('['+str(self._items[i])+']')
print("="*15)
def height(self):
"""
Returns stack height
"""
return (self._height)
def __str__(self):
"""
Returns stack name to print method
"""
return (self._name)
def set_verbosity(self,boolean):
"""
Sets the verbosity of the stack operation
"""
self._verbose=boolean
s=Stack('start',name='TestStack')
for i in range(2,12,2):
s.push(i)
s.draw()
s.pop()
s.pop()
s.draw()
s=Stack('start',name='TestStack')
s.draw()
for i in range(2,12,2):
s.push(i)
s.draw()
for i in range(3):
s.pop()
s.draw()
s.push(100)
s.draw()
s.pop()
s.pop()
s.draw()
s.push([3,5,7])
s.draw()
s.draw()
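A classic application of the push/pop discipline exercised above is bracket matching — a sketch using a plain Python list as the stack (the same `append`/`pop` operations the `Stack` class wraps):

```python
def is_balanced(text):
    """Return True if every bracket in text is properly closed."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)  # push an opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False  # mismatch, or pop from an empty stack
    return not stack  # balanced iff the stack ends empty

print(is_balanced("(a[b]{c})"))  # True
print(is_balanced("(a[b)]"))     # False
```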
# ## Queue
class Queue:
"""
Queue data structure using list
"""
def __init__(self,value=None,name=None,verbose=False):
"""
Class initializer: Produces a queue with a single value or a list of values
        value: The initial value. Can be a single value or a list of values.
name: Optional name for the queue.
verbose: Boolean. Determines if each queue operation prints the item being pushed or popped.
"""
self._items = []
self._length=0
self._verbose=verbose
if value!=None:
if type(value)==list:
for v in value:
self._items.append(v)
self._length = len(value)
else:
self._items.append(value)
self._length = 1
else:
print("Empty queue created.")
if name!=None:
self._name=str(name)
else:
self._name=''
def dequeue(self):
if self._length == 0:
            print("Queue is empty. Nothing to dequeue.")
return None
else:
self._length -=1
item=self._items.pop(0)
if self._verbose:
print("{} dequeued".format(item))
return item
def enqueue(self,value=None):
"""
        Enqueues a single value or a list of values onto the queue.
"""
if value!=None:
if type(value)==list:
for v in value:
self._items.append(v)
if self._verbose:
print("{} enqueued".format(v))
self._length += len(value)
else:
self._length +=1
self._items.append(value)
if self._verbose:
print("{} enqueued".format(value))
else:
if self._verbose:
print("No value supplied, nothing was enqueued.")
def isEmpty(self):
"""
Returns (Boolean) if the queue is empty
"""
return self._length==0
def draw(self):
        if self.isEmpty():
            return None
else:
n=self._length
for i in range(n-1):
print('['+str(self._items[i])+']-',end='')
print('['+str(self._items[n-1])+']')
def length(self):
"""
        Returns queue length
"""
return (self._length)
def __str__(self):
"""
Returns queue name to print method
"""
return (self._name)
def set_verbosity(self,boolean):
"""
Sets the verbosity of the queue operation
"""
self._verbose=boolean
q=Queue([2,3,6],verbose=True)
q
q.draw()
q.enqueue(10)
q.draw()
q.dequeue()
q.enqueue()
for i in range(100,104):
q.enqueue(i)
q.draw()
for i in range(100,103):
q.dequeue()
q.draw()
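One performance caveat on the list-based implementation above: `pop(0)` shifts every remaining element, so `dequeue` is O(n). The standard library's `collections.deque` offers O(1) operations at both ends — a minimal sketch of the same enqueue/dequeue behaviour:

```python
from collections import deque

q = deque([2, 3, 6])
q.append(10)         # enqueue on the right
first = q.popleft()  # dequeue from the left in O(1)
print(first, list(q))
```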
# ## Binary Tree
class Node(object):
def __init__(self,data=None):
self.left = None
self.right = None
self.data = data
def printNode(self):
print(self.data)
def insert(self, data):
if self.data:
if data < self.data:
if self.left is None:
self.left = Node(data)
else:
self.left.insert(data)
elif data > self.data:
if self.right is None:
self.right = Node(data)
else:
self.right.insert(data)
else:
self.data = data
def PrintTree(self):
if self.left:
self.left.PrintTree()
        print(self.data)
if self.right:
self.right.PrintTree()
bst=Node(10)
bst
bst.printNode()
bst.insert(6)
bst.left.data
bst.insert(2)
bst.PrintTree()
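The left-root-right pattern used by `PrintTree` can also collect values into a list, which for a binary search tree comes out sorted — a self-contained sketch (`TreeNode` and `inorder` are illustrative names, separate from the `Node` class above):

```python
class TreeNode:
    def __init__(self, data):
        self.left = self.right = None
        self.data = data

    def insert(self, data):
        if data < self.data:
            if self.left is None:
                self.left = TreeNode(data)
            else:
                self.left.insert(data)
        elif data > self.data:
            if self.right is None:
                self.right = TreeNode(data)
            else:
                self.right.insert(data)

def inorder(node):
    """Collect values left-root-right; sorted for a BST."""
    if node is None:
        return []
    return inorder(node.left) + [node.data] + inorder(node.right)

root = TreeNode(10)
for v in [6, 2, 14, 8]:
    root.insert(v)
print(inorder(root))  # [2, 6, 8, 10, 14]
```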
| Misc Data Structures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 111} id="UtAefGBdLJ0z" outputId="8aaf4830-24db-4361-d8c7-f660aa998680"
from google.colab import files
files.upload(
)
# + id="4sHuO00YLLD-"
import numpy as np
import matplotlib.pyplot as plt
import sklearn
# + id="mgOQsSoBLLHJ"
import pandas as pd
# + [markdown] id="Wby9rdRFO1pm"
# ***Table of Contents***
#
#
# * Import Libraries
# * Loading Data
# * Data Preprocessing
# * Data Analysis
# * Model Building
# * Conclusion
#
# + [markdown] id="pYMTtIQuPq84"
# ***Importing Libraries***
# + id="nqc_wmQnO1SA"
#Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import precision_score, recall_score, confusion_matrix, classification_report, accuracy_score, f1_score
from sklearn import metrics
from sklearn.metrics import roc_curve, auc, roc_auc_score
np.random.seed(0)
# + [markdown] id="CqJAqBBORJgq"
# ***Loading Data***
# + colab={"base_uri": "https://localhost:8080/", "height": 240} id="L6ztkQpcLLM7" outputId="321326aa-f104-4b61-972a-7971eae250b4"
data = pd.read_csv("fetal_health.csv")
data.head()
# + colab={"base_uri": "https://localhost:8080/"} id="mwoHOXRdLLPw" outputId="d1ad392f-0bdb-417c-94e5-1e4f9fdcb8d5"
data.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="oLuCmqjSLLS-" outputId="6fe045a1-ffbe-4e2b-a41c-a446b9afcf14"
data.describe()
# + [markdown] id="2itGRcK_SEZB"
# ***Data Analysis***
#
# *The analysis*
#
#
# * Count Plot
# * Correlation Heat Map
# * Implot
# * Swarm and Boxen Plot
#
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="DLAKit7KLLWg" outputId="d6b72b31-24e2-4668-a5f6-b520984b2271"
# Let's evaluate the target and find out whether our data is imbalanced or not
colors = ['#f7b2b0', '#8f7198', '#003f5c']
sns.countplot(data=data, x='fetal_health', palette=colors)
# + [markdown] id="Icro01XzTVHI"
# The count plot of the targets indicates an imbalance in the data. This is a case that tends to produce misleading classification accuracy.
# The performance measures that would provide better insight:
#
#
# * Confusion Matrix
# * Precision
# * Recall
# * F1 Score
#
#
#
#
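For intuition about the measures listed above, precision, recall, and F1 for one class can be computed by hand — a sketch on hypothetical toy labels, independent of the fetal-health data:

```python
# Toy binary labels: 1 = positive class
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # of predicted positives, how many are real
recall = tp / (tp + fn)     # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```

`sklearn.metrics` computes the same quantities per class (and the confusion matrix) directly from `y_true` and `y_pred`.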
# + [markdown] id="GsTDmhYgUGel"
# Let's evaluate the correlation matrix
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="AygNbjRNLLZr" outputId="f501b03b-32bd-498e-c6eb-12922b4d6ac6"
#correlation matrix
corrmat = data.corr()
plt.figure(figsize=(15, 15))
cmap = sns.diverging_palette(250, 10, s=80, l=55, n=9, as_cmap=True)
sns.heatmap(corrmat, annot=True, cmap=cmap, center=0)
# + [markdown] id="N_9TbvXeVYou"
# **Accelerations Vs Fetal Movement by Fetal Health**
# + colab={"base_uri": "https://localhost:8080/", "height": 383} id="vsogtWY2LLdX" outputId="f1330d90-e428-4791-ba87-0d2080c71f02"
sns.lmplot(data=data, x="accelerations", y="fetal_movement", palette=colors, hue="fetal_health", legend_out=False)
plt.show()
# + [markdown] id="quoM3AzNWKk2"
# **Prolongued Decelerations vs Fetal Movement by Fetal Health**
# + colab={"base_uri": "https://localhost:8080/", "height": 384} id="OOpP_VOPLLhQ" outputId="08fdabd4-a8dd-4b5d-a9b4-ef7e36cb885b"
sns.lmplot(data=data, x="prolongued_decelerations", y="fetal_movement", palette=colors, hue="fetal_health",
legend_out=False)
plt.show()
# + [markdown] id="TrzBqTpdXGan"
# **Abnormal Short Term Variability vs Fetal Movement by Fetal Health**
# + colab={"base_uri": "https://localhost:8080/", "height": 384} id="BE53MM-zLLkU" outputId="ef1bfc53-4313-4f60-c02b-523a04349a57"
sns.lmplot(data=data, x="abnormal_short_term_variability", y="fetal_movement", palette=colors,
hue="fetal_health", legend_out=False)
plt.show()
# + [markdown] id="1NmpWrPGXzds"
# **Mean Value of Long Term Variability Vs Fetal Movement by Fetal Health**
# + colab={"base_uri": "https://localhost:8080/", "height": 384} id="Yh0ujYgdLLm_" outputId="b9fa89a4-e29d-42a5-c924-602881860c7f"
sns.lmplot(data=data, x="mean_value_of_long_term_variability", y="fetal_movement", palette=colors, hue="fetal_health",
legend_out=False)
plt.show()
# + id="ZG0aK6bXLLp5"
cols = ['baseline_value', 'accelerations', 'fetal_movement',
        'uterine_contractions', 'light_decelerations', 'severe_decelerations',
        'prolongued_decelerations', 'abnormal_short_term_variability',
        'mean_value_of_short_term_variability', 'percentage_of_time_abnormal_long_term_variability',
        'mean_value_of_long_term_variability']
# + colab={"base_uri": "https://localhost:8080/", "height": 519} id="T98XoH10LL2h" outputId="1633a508-915f-43a7-b801-0fc1bec0f65e"
shades =["#f7b2b0","#c98ea6","#8f7198","#50587f", "#003f5c"]
plt.figure(figsize=(20,10))
sns.boxenplot(data = data,palette = shades)
plt.xticks(rotation=90)
plt.show()
# + [markdown] id="2VYwaZ0cbWl2"
# ***Model Selection And Building***
#
# In this section we will
#
# * Set up the features (X) and target (y)
# * Scale the features
# * Split into training and test sets
# * Model selection
# * Hyperparameter tuning
#
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="9Czr8C7dcEGm" outputId="8f73c3e8-181a-476b-c112-7ac46f307c3b"
#assigning values to features as X and target as y
X = data.drop(["fetal_health"], axis=1)
y = data["fetal_health"]
#setting up a standard scaler for the feature
col_names = list(X.columns)
s_scaler = preprocessing.StandardScaler()
X_df = s_scaler.fit_transform(X)
X_df = pd.DataFrame(X_df, columns=col_names)
X_df.describe()
# + [markdown] id="KzY-MSLudCeY"
# *Looking at the scaled features*
# + colab={"base_uri": "https://localhost:8080/", "height": 521} id="QYfZjZIVcEDi" outputId="90eab054-0d38-49bc-9470-89904e45d6c1"
plt.figure(figsize=(20, 10))
sns.boxenplot(data=X_df, palette=shades)
plt.xticks(rotation=90)
plt.show()
# + [markdown] id="nUEhbZcjd1uz"
# **Splitting into training and test sets**
# + id="30lmQUgscEBG"
X_train, X_test, y_train, y_test = train_test_split(X_df, y, test_size=0.3, random_state=42)
# + [markdown] id="3Q30exiGd8VB"
# **A quick model for selection process**
# + colab={"base_uri": "https://localhost:8080/"} id="BKt4tmmXcD62" outputId="4898436d-4fd4-4367-df6b-53112a087791"
# Pipelines of models (a short way to fit and predict)
pipeline_lr=Pipeline([('lr_classifier',LogisticRegression(random_state=42))])
pipeline_dt=Pipeline([ ('dt_classifier',DecisionTreeClassifier(random_state=42))])
pipeline_rf=Pipeline([('rf_classifier',RandomForestClassifier())])
pipeline_svc=Pipeline([('sv_classifier',SVC())])
# List of all the pipelines
pipelines = [pipeline_lr, pipeline_dt, pipeline_rf, pipeline_svc]
# Dictionary of pipelines and classifier types for ease of reference
pipe_dict = {0: 'Logistic Regression', 1: 'Decision Tree', 2: 'RandomForest', 3: "SVC"}
# Fit the pipelines
for pipe in pipelines:
pipe.fit(X_train, y_train)
#cross validation on accuracy
cv_results_accuracy = []
for i, model in enumerate(pipelines):
cv_score = cross_val_score(model, X_train,y_train, cv=10 )
cv_results_accuracy.append(cv_score)
print("%s: %f " % (pipe_dict[i], cv_score.mean()))
# + [markdown] id="yyroiKYmhuH3"
# **Taking a look at the test set**
# + colab={"base_uri": "https://localhost:8080/"} id="mL1NiHdxcD3j" outputId="766666f8-3bea-46f6-e3a7-03f10686eb67"
# Taking a look at the test set
pred_rfc = pipeline_rf.predict(X_test)
accuracy = accuracy_score(y_test, pred_rfc)
print(accuracy)
# + [markdown] id="-IESt9G2iG4C"
# **Building a dictionary of candidate values for GridSearchCV to analyze**
# + id="s_tWSbNvcDwo"
# Building a dictionary of candidate values to be analyzed by GridSearchCV
parameters = {
'n_estimators': [100,150, 200,500,700,900],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,6,8,12,14,16],
'criterion' :['gini', 'entropy'],
'n_jobs':[-1,1,None]
}
# Fitting the training set to find the parameters with the best accuracy
CV_rfc = GridSearchCV(estimator=RandomForestClassifier(), param_grid=parameters, cv= 5)
CV_rfc.fit(X_train, y_train)
#Getting the outcome of gridsearch
CV_rfc.best_params_
# + id="S5aCuSL7cDtp"
RF_model = RandomForestClassifier(**CV_rfc.best_params_)
RF_model.fit(X_train, y_train)
#Testing the model
predictions=RF_model.predict(X_test)
accuracy=accuracy_score(y_test, predictions)
accuracy
| healthcare_fetal_health_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Dictionary/Associative Array:
# --
#
# > Python provides a dictionary structure to deal with an unordered collection of key-value pairs.
#
# > A key together with its value is called an item.
#
# > The values in a dictionary are mutable, i.e. they can be changed.
#
# > The keys in a dictionary are immutable, i.e. they cannot be changed.
#
# > It is also called an associative array because the keys work as indexes and are chosen by the user.
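The immutability requirement on keys can be seen directly: a tuple (immutable) works as a key, while a list (mutable, hence unhashable) raises a `TypeError` — a small sketch:

```python
d = {(1, 2): "tuple keys are fine"}
print(d[(1, 2)])

try:
    d[[1, 2]] = "list keys are not"
except TypeError as err:
    print("TypeError:", err)
```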
d = {1: "A"}
print(d)
print(type(d))
# type your code here
d = { 1: "COMP", 2:"IT", 99:"EXTC"}
print(d)
print(d[99])
d1 = {"COMP" : 3, "IT": 1, "EXTC":2}
# type your code here
print(d1["IT"])
print(d1["EXTC"])
d1 = {"COMP" : 3, "IT": 1, "EXTC":2, "COMP":999}
# type your code here
print(d1["IT"])
print(d1["COMP"])
print(d1)
d = {1:"COMP", 2: "IT", 3:"EXTC"}
# type your code here
del d[1]
print(d)
d.clear()
print(d)
d = {1:"COMP", 2: "IT", 3:"EXTC"}
print(3 in d)
# type your code here
print(str(d))
print(len(d))
print(d.keys())
print(d.values())
d1 = {1:"COMP", 2: "IT", 3:"EXTC"}
d2 = {100:"CHEM"}
print(d1)
print(d2)
# type your code here
d1.update(d2)
print(d1)
d1 = {"COMP" : 3, "IT": 1, "EXTC":2}
# type your code here
print(d1['IT'])
print(d1.get("IT"))
# Dictionary can contain lists
# type your code here
l = {1:['A',"B"], 2: ['C','D']}
print(l)
print(type(l))
# Dictionary can contain tuples
# type your code here
l = {1: ('1',2), 2: ('1', '2'), 3: ['2', '3'], 4: ('Darshan',)}
print(l)
print(type(l))
print(l[4])
print(type(l[4]))
# Dictionary can contain tuples
# type your code here
l = {1: ('1',2), 2: ('1', '2'), 3: ('2', '3'), 4: ('3')}
print(l)
print(type(l))
print(type(l[4]))
# Dictionary can contain dictionaries
l = {1: ('1',2), 2: ('1', '2'), 100:{444: ('1',2), 445: ('1', '2'), 446: ('2', '3')},3: ('2', '3'), 4: ('3')}
print(l)
print(type(l))
print("l[1]= ",l[1])
print("l[100]= ",l[100])
# List can contain tuples
# type your code here
l=[1,2,(999,888),3,4]
print(l)
print(l[2])
print(type(l))
print(type(l[2]))
# Tuple can contain lists
# type your code here
l=(1,2,[999,888],3,4)
print(l)
print(l[2])
print(type(l))
print(type(l[2]))
| Day6/6_Dictionary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # Description
# This notebook develops a daily flight-count model for Hartsfield-Jackson Atlanta International Airport (ATL) in Atlanta, GA. It focuses on modeling non-pandemic flight patterns; the resulting model is then scored on the pandemic time period to estimate the flights that would have been expected had the airport been operating without the burden of the pandemic.
#
# Special credit goes to the developers of the Forecasting Energy Demand example in the Azure Machine Learning documentation:
# https://github.com/Azure/MachineLearningNotebooks
#
# https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/
#
#
# + [markdown] jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# # Set up workspace
# + gather={"logged": 1617623309639}
print("Hello World")
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Import packages etc
# + gather={"logged": 1617623309768}
import pandas as pd
import numpy as np
import re
import datetime
from azureml.core import Workspace, Dataset, workspace
subscription_id = '<KEY>'
resource_group = 'hackathon'
workspace_name = 'Iteration1_ML'
workspace = Workspace(subscription_id, resource_group, workspace_name)
# -
# ### Set up experiment
# + gather={"logged": 1617623309932} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
import logging
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import warnings
import os
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
import azureml.core
from azureml.core import Experiment, Workspace, Dataset
from azureml.train.automl import AutoMLConfig
from datetime import datetime
# -
# ### Set up workspace
# + gather={"logged": 1617623310477}
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'non_COVID_model_JP'
# # project folder
# project_folder = './sample_projects/automl-forecasting-energy-demand'
experiment = Experiment(ws, experiment_name)
output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
# -
# ### Create AmlCompute
# + gather={"logged": 1617623311362}
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your cluster.
amlcompute_cluster_name = "flights-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
    print('Found existing cluster; using it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=6)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# -
# # Start Data Prep for Modeling
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Set up designated variables
# + gather={"logged": 1617623311469} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
target_column_name = 'counts_x'
time_column_name = 'Date'
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Read in data as dataset tabular object
# + gather={"logged": 1617623311820} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
dataset = Dataset.get_by_name(workspace, name='ATL_training_autoML').with_timestamp_columns(fine_grain_timestamp=time_column_name)
#preview first 5 rows
dataset.take(5).to_pandas_dataframe().reset_index(drop=True)
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Split the data into train and test sets
# + gather={"logged": 1617626150028} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# split into train based on time, non covid times
train0 = dataset.time_before(datetime(2020, 2, 1, 0), include_boundary=False)
train0.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)
# + gather={"logged": 1617626152067} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# split into test and forecasting period
test0 = dataset.time_between(datetime(2020, 2, 1, 0), datetime(2020, 12, 31, 0))
test0.to_pandas_dataframe().reset_index(drop=True).head(5)
# -
# ### There are too many input columns. Do first round of feature selection
# + gather={"logged": 1617626164292}
varsToKeep=[
'Date',
'counts_x',
'DOW_x',
'day_x',
'month_x',
'mint',
'maxt',
'temp',
'dew',
'humidity',
'wspd',
'precip',
'precipcover',
'snowdepth',
'visibility',
'cloudcover',
'sealevelpressure',
'weathertype',
'conditions']
# =============================================================================
# removed these for being covid related (in non-covid model)
#'positiveIncrease',
#'deathIncrease',
#'hospitalizedIncrease'
# removed these for being cumulative
#'positive',
#'probableCases',
#'death',
# removed these for being all 0
#'hospitalizedCurrently',
#'inIcuCurrently',
#'onVentilatorCurrently',
#removed these for being missing
#'wgust',
#'windchill',
#'heatindex',
# =============================================================================
# + gather={"logged": 1617626167591} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
train=train0.keep_columns(varsToKeep)
test=test0.keep_columns(varsToKeep)
# + gather={"logged": 1617623313025} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
#print for viewing
train.to_pandas_dataframe().reset_index(drop=True).tail(5)
# + gather={"logged": 1617623313159} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
#print for viewing
test.to_pandas_dataframe().reset_index(drop=True).head(5)
# -
# # Modeling
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Set forecast horizon
# + gather={"logged": 1617623313264} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
forecast_horizon = 30
# -
# ## Train the model
# + gather={"logged": 1617623313351}
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=forecast_horizon,
freq='D'
)
automl_config = AutoMLConfig(task='forecasting',
primary_metric='normalized_root_mean_squared_error',
blocked_models = ['ExtremeRandomTrees', 'Prophet','VotingEnsemble'],
experiment_timeout_hours=0.3,
training_data=train,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
forecasting_parameters=forecasting_parameters,
enable_voting_ensemble=False)
# + gather={"logged": 1617624804300}
remote_run = experiment.submit(automl_config, show_output=True)
remote_run.wait_for_completion()
# -
# # Results
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Retrieve the Best Model
# + gather={"logged": 1617624804919} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
#fitted_model
# + [markdown] nteract={"transient": {"deleting": false}}
# ### View featurization
# + gather={"logged": 1617624805034} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
# + gather={"logged": 1617624805125} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Forecast against the test sample and the pandemic time period
# + gather={"logged": 1617626179866} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
X_test = test.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values
#also partition out a covid period prediction for forecasting
covid0= dataset.time_between(datetime(2020, 3, 1, 0), datetime(2020, 12, 31, 0))
covid=covid0.keep_columns(varsToKeep)
X_covid = covid.to_pandas_dataframe().reset_index(drop=True)
y_covid = X_covid.pop(target_column_name).values
#X_test
#y_test
X_covid
#y_covid
# + gather={"logged": 1617626188077} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
#y_predictions_covid,X_trans_covid = fitted_model.forecast(X_covid)
# + [markdown] nteract={"transient": {"deleting": false}}
# ###### method from:
# https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
# + gather={"logged": 1617626193293} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# from:
# https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
import pandas as pd
import numpy as np
from pandas.tseries.frequencies import to_offset
def align_outputs(y_predicted, X_trans, X_test, y_test, target_column_name,
predicted_column_name='predicted',
horizon_colname='horizon_origin'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods
"""
if (horizon_colname in X_trans):
df_fcst = pd.DataFrame({predicted_column_name: y_predicted,
horizon_colname: X_trans[horizon_colname]})
else:
df_fcst = pd.DataFrame({predicted_column_name: y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name,
predicted_column_name]].notnull().all(axis=1)]
return(clean)
# -
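# The alignment-and-filter pattern used by `align_outputs` can be seen on a toy
# frame (made-up numbers; a sketch of the pattern, not the AutoML contract):

```python
import numpy as np
import pandas as pd

# forecast output, aligned by index to the original test rows
df_fcst_demo = pd.DataFrame({'predicted': [10.0, 12.0, np.nan]})
actuals_demo = pd.DataFrame({'Date': pd.date_range('2020-02-01', periods=3),
                             'counts_x': [11.0, np.nan, 14.0]})
together_demo = df_fcst_demo.join(actuals_demo)  # align on the shared index
# keep only rows where both the prediction and the actual are present
clean_demo = together_demo[together_demo[['counts_x', 'predicted']]
                           .notnull().all(axis=1)]
print(len(clean_demo))  # prints 1: only the first row survives
```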
# #### Align Datasets
# + gather={"logged": 1617626200184} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
#df_all_covid = align_outputs(y_predictions_covid, X_trans_covid, X_covid, y_covid, target_column_name)
# -
# ### Determine Test Period fit statistics
# + gather={"logged": 1617626454344} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
df_test=df_all[(df_all.Date<=pd.Timestamp(2020,3,1))]
# use automl metrics module
scores = scoring.score_regression(
y_test=df_test[target_column_name],
y_pred=df_test['predicted'],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET))
print("[Test data scores]\n")
for key, value in scores.items():
print('{}: {:.3f}'.format(key, value))
# Plot outputs for testing period
# %matplotlib inline
test_pred = plt.scatter(df_test[target_column_name], df_test['predicted'], color='b')
test_test = plt.scatter(df_test[target_column_name], df_test[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
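# The scalar regression metrics reported above follow standard formulas; a
# minimal sketch with made-up numbers (plain numpy, not the AutoML scoring module):

```python
import numpy as np

y_true_demo = np.array([100.0, 110.0, 120.0])
y_pred_demo = np.array([98.0, 113.0, 121.0])

# mean absolute error and root mean squared error
mae = np.mean(np.abs(y_true_demo - y_pred_demo))
rmse = np.sqrt(np.mean((y_true_demo - y_pred_demo) ** 2))
print(round(mae, 3), round(rmse, 3))  # prints 2.0 2.16
```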
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Plot the results, Test Partition and Pandemic Time Period
# + gather={"logged": 1617626604976} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
import datetime
# %matplotlib inline
plt.figure(figsize=(20,10))
new_df_all=df_all[(df_all.Date>=datetime.datetime(2020, 3, 1, 0, 0, 0))]
plt.title("Comparison of Daily Flights During Pandemic vs. Expected Daily Flights Assuming No Pandemic")
test_pred = plt.plot(new_df_all['Date'], new_df_all['predicted'], color='b', label="Projection for Non-Covid Daily Flights", marker="o")
test_test = plt.plot(new_df_all['Date'], new_df_all[target_column_name], color='g', label="Actual Daily Flights", marker="o")
plt.legend(loc='upper left')
plt.show()
# + [markdown] nteract={"transient": {"deleting": false}}
# ### Determine number of flights grounded
# + gather={"logged": 1617626257826} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
total_nonpandemic_flights=new_df_all['predicted'].sum()
total_pandemic_flights=new_df_all[target_column_name].sum()
diff=total_nonpandemic_flights-total_pandemic_flights
print('Expected (non-pandemic) flights: '+'{:,}'.format(int(total_nonpandemic_flights)))
print('Actual pandemic flights: '+'{:,}'.format(int(total_pandemic_flights)))
print('Flights grounded due to pandemic: '+'{:,}'.format(int(diff)))
total_percent_reduction = diff/total_nonpandemic_flights*100
print('Percent reduction in flights: '+'{:,}'.format(int(total_percent_reduction))+"%")
# -
# ### Confirm the run is completed
# + gather={"logged": 1617626262120} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
automl_run, fitted_model = remote_run.get_output()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <!--
# Author: <NAME> <https://thekrishna.in/>
# GitHub: https://github.com/bearlike
# Title: CBSE Class 12 - Working with Python and MySQL
# -->
#
# # MySQL Connectivity in Python
# Intended for CBSE Class 12 Students
# <center><img src="data:image/png;base64,…" alt="mysql-python" width="400"/></center>
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Increase readability in the notebook 🙈
# -
# <mark><b>I recommend you hide this cell since it's beyond the scope of your syllabus and can be confusing.</b></mark>
from IPython.display import HTML, display
# This function is specific to Jupyter notebooks and is beyond the scope of your syllabus
def table(data: list):
""" Displays a list as an HTML table in Jupyter notebooks
Arguments:
- data (list): A list that should be displayed as a table
"""
display(HTML(
'<table><tr>{}</tr></table>'.format(
'</tr><tr>'.join(
'<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row)) for row in data)
)
))
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# <h2 style="color:blue">String Formatting ✏️ (Prerequisite)</h2>
# -
# Here is the basic syntax for the `str.format()` method:
#
# `"template string {}".format(arguments)`
#
# Inside the template string, we can use `{}` which act as placeholders for the arguments passed to `format()`.
name = "Krishna"
print("Hello, {}. You are {}".format(name, "awesome"))
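# Placeholders can also be numbered or named; a couple of extra examples:

```python
# positional indices let you reuse or reorder arguments
numbered = "{0} {1} {0}".format("tick", "tock")
# keyword arguments give placeholders names
named = "{name} scored {marks} marks".format(name="Krishna", marks=95)
print(numbered)  # tick tock tick
print(named)     # Krishna scored 95 marks
```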
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Importing the necessary packages 📦
# -
# The leading `!` runs the line below as a shell command from the notebook; it installs the MySQL connector package.
# !pip install -q mysql-connector-python
import mysql.connector
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Establishing the connection 🔌
# -
# The hostname of our database is `mysql-server` using the default port `3306`. We'll be using `root` user whose password is `<PASSWORD>` (not a secure password but still). We will be using the [`sakila` database](https://dev.mysql.com/doc/sakila/en/).
conn = mysql.connector.connect(
user='root', password='<PASSWORD>', host='mysql-server', database='sakila'
)
# + [markdown] tags=[]
# ## Creating a cursor object using the `cursor()` method
# The `MySQLCursor` of the `mysql-connector-python` package/library is used to communicate and execute queries on the MySQL database
# -
cursor = conn.cursor()
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Execute SQL query using `cursor.execute()`
# -
# - Executes MySQL queries and stores the retrieved records in the cursor object.
# - **Syntax**: `<cursor_object>.execute(SQL_QUERY)`
# MySQL query is passed as a `string` argument to the below function
cursor.execute("SELECT * FROM film_text;")
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Understanding `cursor.fetchall()`
# -
# - The `cursor.fetchall()` method retrieves all the rows from the executed result set of a query and returns them as a list of tuples. (If we execute this after retrieving a few rows, it returns the remaining ones.)
# - MySQL commands (such as SHOW and SELECT) that fetch data from a database can use this function.
rows = cursor.fetchall()
# Printing the first four elements in a list
print(rows[:4])
# The above table can be better visualized below.
table(rows[:4])
# + [markdown] tags=[]
# ## Understanding `cursor.fetchone()`
# -
# - The `cursor.fetchone()` method retrieves the next row of a query result set and returns a single sequence, or `None` if no more rows are available.
# - **TLDR**: Used to retrieve one result set at a time while `cursor.fetchall()` retrieves everything from a query result set.
# +
# This query lists all Databases from a MySQL database server.
cursor.execute("SHOW DATABASES;")
row = cursor.fetchone()
# This loop runs until cursor.fetchone() is None
# which means no more rows are left to be returned.
while row is not None:
# row is a tuple, row[0] is an element in a tuple
print(row[0])
row = cursor.fetchone()
# -
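# The `execute()`/`fetchone()`/`fetchall()` pattern above is part of the Python
# DB-API, so it can be practised without a MySQL server using the built-in
# `sqlite3` module (the table and rows below are made up for illustration):

```python
import sqlite3

demo_conn = sqlite3.connect(":memory:")  # throwaway in-memory database
demo_cur = demo_conn.cursor()
demo_cur.execute("CREATE TABLE staff (first_name TEXT, email TEXT)")
demo_cur.execute("INSERT INTO staff VALUES ('Mike', 'mike@example.com')")
demo_cur.execute("INSERT INTO staff VALUES ('Jon', 'jon@example.com')")

demo_cur.execute("SELECT first_name FROM staff")
first = demo_cur.fetchone()   # one row as a tuple
rest = demo_cur.fetchall()    # the remaining rows
print(first, rest)            # ('Mike',) [('Jon',)]
```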
# # Examples 📚
# ### Display `email` of `STAFF` whose `first_name`(s) are given as a list
# +
# List of names for which email(s) has to be returned
names = ["Mike", "Jon"]
conn = mysql.connector.connect(user='root', password='<PASSWORD>', host='mysql-server', database='sakila')
cursor = conn.cursor()
for name in names:
cursor.execute("SELECT email FROM staff WHERE first_name=%s", (name,))  # parameterized query avoids SQL injection
row = cursor.fetchone()
while row is not None:
print(row)
row = cursor.fetchone()
# -
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// <h1 style="text-align: center; font-size: 40px">The Queue Interface and the Stack Class</h1>
import java.util.Random;
import java.lang.*;
//
// The objectives of this worksheet are as follows:
// * The Stack class and its operations
// * The Queue interface and the following of its implementations:
// * ArrayDeque
// * LinkedList
// * Common applications of stacks and queues
//
//
//
//
// #### Using Jupyter
// A few things to remind you with regard to using this Jupyter environment:
// 1. If the platform crashes don't worry. All of this is in the browser so just refresh and all of your changes up to your last save should still be there. For this reason we encourage you to **save often**.
// 2. Be sure to run cells from top to bottom.
// 3. If you're getting weird errors, go to Kernel->Restart & Clear Output. From there, run cells from top to bottom.
// # Stacks
//
// * Callback to 483
//
// Stacks are commonly referred to as "last in, first out" or LIFO data structures. This means that the last thing that was added to the stack will be the first thing to be removed. As such, stacks have the following two operations for adding and removal:
// * `push` which adds an object to the top of the stack
// * `pop` which removes the object at the top of the stack
//
// Further documentation on this class, these methods, and their functionality can be found, as usual, in the [Java docs](https://docs.oracle.com/javase/7/docs/api/java/util/Stack.html).
public void printIntegerStack(Stack<Integer> stack){
Stack<Integer> stackCopy = new Stack<>();
stackCopy.addAll(stack);
System.out.println("| |");
while(!stackCopy.isEmpty()){
System.out.printf("| %d |\n", stackCopy.pop());
}
System.out.println(" ----- ");
}
// The following set of cells creates a stack and modifies it. There are some ascii visualizations produced after each modification to help you keep track of what the internal state of the stack is after each operation.
Stack<Integer> nums = new Stack<>();
printIntegerStack(nums);
nums.push(1);
printIntegerStack(nums);
nums.push(2);
printIntegerStack(nums);
nums.pop();
printIntegerStack(nums);
nums.push(3);
printIntegerStack(nums);
// <img alt="Activity - In-Class" src="https://img.shields.io/badge/Activity-In--Class-E84A27" align="left" style="margin-right: 5px"/>
// <br>
// <br>
// Play around with the stack by pushing and popping integers and then printing the results here:
// +
// Step 1) Define a stack
// Step 2) Push and pop some integers of your choosing
// -
/* Run this cell to print the contents */
printIntegerStack(/* replace with the name of your stack */);
// ### Stack Problem - Reverse Polish Notation
//
// <img alt="Activity - In-Class" src="https://img.shields.io/badge/Activity-In--Class-E84A27" align="left" style="margin-right: 5px"/>
// <br>
// <br>
// There are a variety of problems one can solve with a stack; however, we'll start with the "Hello, world" of stack-based problem solving: a reverse Polish notation (RPN) calculator. For those happy few who have never encountered this method of forming arithmetic expressions, let's begin with a basic addition example: 1 1 +. As you might have guessed, this is how 1 + 1 is expressed in RPN. The basic set of rules, given a list of numbers and operations, is:
//
// * Look at each element in the list and, at each stage:
// * If an element is an operation (i.e., + or -):
// * Pop two numbers from the stack
// * Perform the operation
// * Push the result onto the stack
// * Otherwise, it must be a string representation of a number so:
// * Convert it to an `Integer`
// * Push it to the stack
//
// For our problem I make the following guarantees for the sake of simplicity:
// * The only operations our calculator performs are addition and subtraction
// * The only type of number our calculator operates on is integers
// * You will only be given valid RPN expressions, so you don't have to worry about edge cases
//
// For some more examples of this here are some helpful links:
// * [Reverse Polish Notation Online Calculator](https://www.dcode.fr/reverse-polish-notation)
// * [RPN on Wikipedia](https://en.wikipedia.org/wiki/Reverse_Polish_notation)
public Integer rpnEval(List<String> elems){
Stack<Integer> s = new Stack<Integer>();
for(/* Iterate through the elements */){
// Perform the operations associated with the element you
// are currently visiting.
}
return s.pop();
}
/* Construct our test case */
List<String> elems = new ArrayList<String>();
elems.add("1");
elems.add("1");
elems.add("2");
elems.add("+");
elems.add("+");
Integer result = rpnEval(elems);
System.out.println(result);
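// One possible way to fill in `rpnEval`, shown here as a sketch under the
// hypothetical name `rpnEvalSketch` so it doesn't clash with your own attempt
// (it is not the only valid solution):

```java
import java.util.List;
import java.util.Stack;

public Integer rpnEvalSketch(List<String> elems){
    Stack<Integer> s = new Stack<>();
    for(String e : elems){
        if(e.equals("+") || e.equals("-")){
            // pop two numbers, perform the operation, push the result
            int b = s.pop();
            int a = s.pop();
            s.push(e.equals("+") ? a + b : a - b);
        } else {
            // otherwise it is a string representation of a number
            s.push(Integer.parseInt(e));
        }
    }
    return s.pop();
}

System.out.println(rpnEvalSketch(List.of("1", "1", "2", "+", "+"))); // prints 4
```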
// # Queues
//
// Queues are "first in, first out" or FIFO data structures where the first element added is the first one out. Unlike `Stack` in Java, `Queue` is an interface with a variety of implementations. The one that we will be focusing on is `ArrayDeque`, given that it is the most straightforward implementation of a traditional FIFO queue that Java provides.
//
//
// The operations defined for examining and manipulating queues in Java are as follows:
//
// |  | Throws exception | Returns a value |
// | --- | --- | --- |
// | Insert | add(e) | offer(e) |
// | Remove | remove() | poll() |
// | Examine | element() | peek() |
//
// Similar to our experimentation with stacks, I've defined a `printIntegerQueue` method which, as the name suggests, prints all of the values in a queue. We'll create and examine a queue just as we did the stack by instantiating an `ArrayDeque` containing `Integer` objects. Two methods we will be exploring are:
// * `offer` which adds an element to the **back** of a queue. The process of adding to the end of a queue is commonly referred to as "enqueueing".
// * `poll` which removes an element from the **front** of the queue. The process of removing from a queue is commonly referred to as "dequeueing".
//
// Further information can be found in the [Java docs](https://docs.oracle.com/javase/7/docs/api/java/util/Queue.html).
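The exception-vs-value distinction in the table above is easiest to see on an empty queue: `poll` returns `null`, while `remove` throws. A small sketch:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;
import java.util.Queue;

public class QueueOpsDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();
        System.out.println(q.poll());      // null: "returns a value" on an empty queue
        try {
            q.remove();                    // "throws exception" on an empty queue
        } catch (NoSuchElementException e) {
            System.out.println("remove() threw NoSuchElementException");
        }
        q.offer(1);                        // enqueue at the back
        System.out.println(q.peek());      // 1: examine the front without removing
        System.out.println(q.poll());      // 1: dequeue from the front
    }
}
```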
public void printIntegerQueue(Queue<Integer> q){
Stack<Integer> qCopy = new Stack<>();
qCopy.addAll(q);
System.out.println("| | <- end");
while(!qCopy.isEmpty()){
System.out.printf("| %d |\n", qCopy.pop());
}
System.out.println("| | <- front");
}
// Run the following cells to see how enqueueing and dequeueing impact the internal state of the queue.
Queue<Integer> nums = new ArrayDeque<>();
printIntegerQueue(nums);
nums.offer(1);
printIntegerQueue(nums);
nums.offer(2);
printIntegerQueue(nums);
nums.poll();
printIntegerQueue(nums);
nums.offer(3);
printIntegerQueue(nums);
// <img alt="Activity - In-Class" src="https://img.shields.io/badge/Activity-In--Class-E84A27" align="left" style="margin-right: 5px"/>
// <br>
// <br>
// Play around with the queue by offering and polling integers and then printing the results here:
// +
// Step 1) Define a queue
// Step 2) Offer and poll some integers of your choosing
// -
/* Run this cell to print the contents */
printIntegerQueue(/* replace with your queue variable */);
// ### Queue Problem - Round-Robin Process Scheduler
//
// Round-Robin scheduling is a CPU scheduling algorithm where all processes in need of CPU time are placed on a queue. The CPU retrieves a process from the front of the queue, processes it for a specified period of time (the quantum), places it back on the end of the queue, and takes another off the front, in an operation known as a context switch. For this activity you will be:
// * Making a `Proc` class which stores information on a hypothetical process
// * Creating a method which takes a queue of these processes and simulates a simplified round-robin algorithm.
//
// You will be making a modified round robin scheduler where:
// * All processes arrive at the beginning
// * Each process can operate for a unique quantum
//
// Information on CPU scheduling algorithms:
// * [Round-Robin Sched](https://en.wikipedia.org/wiki/Round-robin_scheduling)
// #### Implement the `Proc` Class
//
// The `Proc` class stores and allows us to manipulate the following information on a process:
// * `name` -> the name of our process
// * `remainingTime` -> the amount of time left to complete the process
// * `ctxSwitches` -> the number of times our process was placed back on the queue without completing
// * `quantum` -> the maximum amount of time our process is allowed to run before it is switched out of context (placed back on the queue).
//
// Your class should have:
// * Getters for all of the class attributes.
// * A method that takes a single parameter that allows `remainingTime` to be decremented.
// * A method that allows the `ctxSwitches` variable to be incremented.
// +
class Proc{
private int remainingTime;
private int quantum;
private String name;
private int ctxSwitches;
Proc(/* params */){
this.ctxSwitches = 0;
//...
}
/* getters for name and quanta */
/* getters/setters for remaining time */
/* getter/setter for ctx switch count */
}
// -
// #### Implement `performRoundRobinSchedule()`
//
// The round robin scheduler is simply a loop that continues while there are processes still in the queue. At each iteration you should:
// * Dequeue a process
// * Check if the time remaining to serve that process is greater than its allowed processing time (quantum):
//   * If it is:
//     * reduce the proc's remaining time by the quantum
//     * increment the total processing time by the quantum
//     * increment that proc's context switch count
//     * enqueue the process
//     * print a message indicating the name of the process and its quantum
//   * Otherwise:
//     * increment the total processing time by the time remaining for that process
//     * print a message indicating the process name, the total time the process spent in the queue, and the number of times it was switched out of context.
// +
public void performRoundRobinSchedule(ArrayDeque<Proc> procs){
int totalProcessingTime = 0;
while(/* there is stuff in the queue */){
if(/* */){
} else{
}
}
}
// -
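A sketch of one possible implementation, together with a minimal `Proc`. The constructor order `(remainingTime, quantum, name)` is an assumption inferred from the test cell below, and the getter/setter names are hypothetical — yours may differ:

```java
import java.util.ArrayDeque;

class Proc {
    private int remainingTime;
    private final int quantum;
    private final String name;
    private int ctxSwitches = 0;

    // Assumed parameter order, matching the test cell: (remainingTime, quantum, name)
    Proc(int remainingTime, int quantum, String name) {
        this.remainingTime = remainingTime;
        this.quantum = quantum;
        this.name = name;
    }

    String getName() { return name; }
    int getQuantum() { return quantum; }
    int getRemainingTime() { return remainingTime; }
    void decrementRemainingTime(int amount) { remainingTime -= amount; }
    int getCtxSwitches() { return ctxSwitches; }
    void incrementCtxSwitches() { ctxSwitches++; }
}

public class RoundRobinDemo {
    public static void performRoundRobinSchedule(ArrayDeque<Proc> procs) {
        int totalProcessingTime = 0;
        while (!procs.isEmpty()) {
            Proc p = procs.poll();
            if (p.getRemainingTime() > p.getQuantum()) {
                // Not done yet: run for one quantum and context-switch it out.
                p.decrementRemainingTime(p.getQuantum());
                totalProcessingTime += p.getQuantum();
                p.incrementCtxSwitches();
                procs.offer(p);
                System.out.printf("Processed %s for %d secs.\n", p.getName(), p.getQuantum());
            } else {
                // Done: it only needs its remaining time, then it leaves the queue.
                totalProcessingTime += p.getRemainingTime();
                System.out.printf("Finished event %s in %d sec(s) with %d ctx switches.\n",
                        p.getName(), totalProcessingTime, p.getCtxSwitches());
            }
        }
    }

    public static void main(String[] args) {
        ArrayDeque<Proc> q = new ArrayDeque<>();
        q.add(new Proc(10, 3, "event1"));
        q.add(new Proc(12, 3, "event2"));
        q.add(new Proc(10, 5, "event3"));
        q.add(new Proc(30, 7, "event4"));
        performRoundRobinSchedule(q); // reproduces the expected output shown below
    }
}
```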
// Now that we have our `Proc` class and our scheduling method, let's create a queue of processes and test the class/method.
// +
//Step 1) Create an ArrayDeque of events
ArrayDeque<Proc> procQueue = new ArrayDeque<>();
procQueue.add(new Proc(10, 3, "event1"));
procQueue.add(new Proc(12, 3, "event2"));
procQueue.add(new Proc(10, 5, "event3"));
procQueue.add(new Proc(30, 7, "event4"));
//Step 2) Add a few events of your choosing
performRoundRobinSchedule(procQueue);
// -
// When you run with the default case provided you should get the following output:
// ```
// Processed event1 for 3 secs.
// Processed event2 for 3 secs.
// Processed event3 for 5 secs.
// Processed event4 for 7 secs.
// Processed event1 for 3 secs.
// Processed event2 for 3 secs.
// Finished event event3 in 29 sec(s) with 1 ctx switches.
// Processed event4 for 7 secs.
// Processed event1 for 3 secs.
// Processed event2 for 3 secs.
// Processed event4 for 7 secs.
// Finished event event1 in 50 sec(s) with 3 ctx switches.
// Finished event event2 in 53 sec(s) with 3 ctx switches.
// Processed event4 for 7 secs.
// Finished event event4 in 62 sec(s) with 4 ctx switches.
// ```
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/DarekGit/FACES_DNN/blob/master/notebooks/06_01_FDDB_TEST.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Rk0kyukIenrN" colab_type="text"
# ---
#
# [Table of Contents](https://github.com/DarekGit/FACES_DNN/blob/master/notebooks/Praca_Dyplomowa.ipynb) | [1. Introduction](01_00_Wstep.ipynb) | [2. Detection evaluation metrics](02_00_Miary.ipynb) | [3. Datasets](03_00_Datasety.ipynb) | [4. Overview of detection methods](04_00_Modele.ipynb) | [5. Face detection with selected DNN architectures](05_00_Modyfikacje.ipynb) | [6. Model comparison](06_00_Porownanie.ipynb) | [7. Model export](07_00_Eksport_modelu.ipynb) | [8. Summary and conclusions](08_00_Podsumowanie.ipynb) | [Bibliography](Bibliografia.ipynb)
#
#
# ---
# + [markdown] id="ZOBx5ZMnLVAu" colab_type="text"
# < [6. Model comparison](06_00_Porownanie.ipynb) | [6.1. Results on the FDDB dataset](06_01_FDDB_TEST.ipynb) | [6.2. Results on the WIDER FACE dataset](06_02_WIDERFACE_TEST.ipynb) >
# + [markdown] id="-mr1ouoKcbFB" colab_type="text"
# # 6.1. Results on the FDDB dataset
# + [markdown] id="mckrcH-VcBSP" colab_type="text"
# Installing Detectron2 and downloading the dataset and tools
# + id="L7UjdR40dnFE" colab_type="code" colab={}
# install detectron2:
# !pip install cython pyyaml==5.1
# !pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
# !git clone https://github.com/facebookresearch/detectron2 detectron2_repo
# !pip install -q -e detectron2_repo
# + id="crDMRtTe5GDd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="1f9a62ca-82cb-4ee1-9cbc-846ccd9ac478"
# !wget http://tamaraberg.com/faceDataset/originalPics.tar.gz
# !wget vis-www.cs.umass.edu/fddb/FDDB-folds.tgz
# !mkdir FDDB
# !tar -C FDDB -zxf originalPics.tar.gz > /dev/null
# !tar -C FDDB -zxf FDDB-folds.tgz > /dev/null
# + id="ZRhrUDmc9hKZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 165} outputId="cfc89424-c90f-48f4-8574-62831f318395"
# mobilenet
# !gdown https://drive.google.com/uc?id=1U0SVkSaSio4TBiXvF1QfTZI65WYpXpZ9
# !unzip -qo mobilenet.zip
# !rm -f mobilenet.zip
# Tools
# !gdown https://drive.google.com/uc?id=1_9ydMZlTNFXBOMl16xsU8FSBmK2PW4lN -O FDDB/tools.py
# !wget -q -O FDDB/mAP.py https://drive.google.com/uc?id=1PtEsobTFah3eiCDbSsYblOGbe2fmkjGR
# + [markdown] id="qzx4JAMasOcs" colab_type="text"
# <font color=yellow> Restart runtime to continue... <b>Ctrl+M.</b> </font>
# + id="GnPqoafKJweW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="1bf13709-4dad-453c-c755-faf2d66839f4"
# !nvidia-smi
from psutil import virtual_memory
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(virtual_memory().total / 1e9))
# + id="uyWKRP9IL-_y" colab_type="code" colab={}
# !pip install facenet-pytorch
from facenet_pytorch import MTCNN
# + id="Ucz8h0sBtzkJ" colab_type="code" colab={}
import time
from tqdm.notebook import tqdm
import torch, torchvision
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
import gdown
from google.colab import drive
import os
import cv2
import random
import itertools
import shutil
import glob
import json
import numpy as np
import pandas as pd
from PIL import ImageDraw, Image
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from matplotlib import collections as mc
from google.colab.patches import cv2_imshow
from detectron2 import model_zoo
import detectron2.utils.comm as comm
from detectron2.engine import DefaultPredictor, DefaultTrainer, HookBase
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_train_loader
from detectron2.structures import BoxMode
from detectron2.data import build_detection_test_loader
from detectron2.data.datasets import register_coco_instances
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from mobilenet.utils import add_mobilenet_config, build_mobilenetv2_fpn_backbone
from FDDB.mAP import mAP, plot_mAP
from FDDB.tools import annotations,output_Files
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# + id="cNmn7Bh1SHZJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="26cc1526-90ac-4dc6-8c53-bfd3f92c30fc"
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Running on device: {}'.format(device))
# + [markdown] colab_type="text" id="mvN6PU1S8zEs"
# Saving data to Google Drive
#
#
# + colab_type="code" id="dX2t_t7a8zEw" colab={}
OUTPUT_DIR_NAME = "FDDB_OUTPUT_DIR"
drive.mount('/content/drive')
OUTPUT_DIR_PATH = os.path.join("./drive/My Drive", OUTPUT_DIR_NAME)
if not os.path.exists(OUTPUT_DIR_PATH):
os.makedirs(OUTPUT_DIR_PATH)
print("\ncfg.OUTPUT_DIR =",OUTPUT_DIR_PATH)
else:
print("\ncfg.OUTPUT_DIR =",OUTPUT_DIR_PATH)
# + id="7MtCLn7Jk62k" colab_type="code" colab={}
IMAGES_PATH='FDDB/'
# + [markdown] id="dvUi9652MNo2" colab_type="text"
# ## Function converting FDDB elliptical annotations to rectangular boxes
# + id="CIlS8o1JML1d" colab_type="code" colab={}
def fddb_rectangular_annotation(IMAGES_PATH=IMAGES_PATH):
    '''Returns a list of boxes computed from the *ellipseList.txt files,
    together with the relative path to each image file, e.g.
    [{'boxes': array([[184.1439, 38.1979, 355.2429, 285.3645]]),
      'path': '2002/08/11/big/img_591'}]
    '''
fddb_annotations = []
ellipseList = glob.glob(IMAGES_PATH+"FDDB-folds/*ellipseList.txt")
ellipseList.sort()
for item in ellipseList:
with open(item, "r") as file_:
rows = file_.readlines()
idx = 0
while (idx < len(rows)):
tmp = []
image_name = rows[idx].replace("\n", "")
number_of_faces = int(rows[idx+1])
boxes = []
for i in range(1, number_of_faces+1):
box = rows[idx+1+i]
box = [float(item) for item in box.split(' ')[0:5]]
xmin = float(box[3]- box[1])
ymin = float(box[4]- box[0])
xmax = float(xmin + box[1]*2)
ymax = float(ymin + box[0]*2)
boxes.append([xmin, ymin, xmax, ymax])
boxes=np.array(boxes)
fddb_annotations.append({'path':image_name,'boxes':boxes})
idx += (number_of_faces+2)
return fddb_annotations
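Each `*ellipseList.txt` entry has the form `major_axis_radius minor_axis_radius angle center_x center_y 1`; the conversion above takes the axis-aligned box of the ellipse and ignores the rotation angle. Checking the arithmetic on an entry reconstructed from the docstring example (the radii and center are derived from that box; the angle value is a placeholder and does not affect the result):

```python
# Reconstructed ellipse entry for 2002/08/11/big/img_591 (angle is a placeholder)
row = "123.583300 85.549500 1.265839 269.693400 161.781200 1"
box = [float(v) for v in row.split(' ')[0:5]]
xmin = box[3] - box[1]      # center_x - minor_axis_radius
ymin = box[4] - box[0]      # center_y - major_axis_radius
xmax = xmin + box[1] * 2    # add the full minor axis (box width)
ymax = ymin + box[0] * 2    # add the full major axis (box height)
print([round(v, 4) for v in (xmin, ymin, xmax, ymax)])  # [184.1439, 38.1979, 355.2429, 285.3645]
```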
# + id="h065UtyDRCSV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="7f9f83e2-6283-4c4b-9c74-d6fd8a80b971"
fddb_annotations = fddb_rectangular_annotation(IMAGES_PATH)
fddb_annotations[0]
# + [markdown] id="O905ooh29RmK" colab_type="text"
# ## Function converting annotations to the format required by Detectron2
# + id="Yx0qlpm9ZbFw" colab_type="code" colab={}
def get_fddb_dict(annotations):
    '''Converts the annotations from fddb_rectangular_annotation()
    into the dict format required by detectron2.data.DatasetCatalog, e.g.
{'annotations': [{'bbox': [59, 71, 269, 353],
'bbox_mode': <BoxMode.XYXY_ABS: 0>,
'category_id': 0,
'iscrowd': 0,
'segmentation': [[59, 71, 269, 71, 269, 353, 59, 353]]}],
'file_name': 'FDDB/2002/07/29/big/img_136.jpg',
'height': 450,
'image_id': 18,
'width': 319}
'''
dataset_dicts = []
for idx, item in enumerate(annotations):
record = {}
file_path = IMAGES_PATH+item["path"]+".jpg"
record["file_name"] = file_path
record["image_id"] = idx
record["height"], record["width"] = cv2.imread(file_path).shape[:2]
objs = []
for box in item["boxes"]:
box = [int(i) for i in box] # int vs float..
xmin, ymin, xmax, ymax = box
poly = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
poly = list(itertools.chain.from_iterable(poly))
obj = {
"bbox": [xmin, ymin, xmax, ymax],
"bbox_mode": BoxMode.XYXY_ABS,
"segmentation": [poly],
"category_id": 0,
"iscrowd": 0
}
objs.append(obj)
record["annotations"] = objs
dataset_dicts.append(record)
return dataset_dicts
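The `segmentation` polygon above is just the box's four corners flattened into a single coordinate list with `itertools.chain` — for the bbox from the docstring example:

```python
import itertools

xmin, ymin, xmax, ymax = 59, 71, 269, 353  # the bbox from the docstring example
poly = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]  # corner points
poly = list(itertools.chain.from_iterable(poly))                 # flatten to [x1, y1, x2, y2, ...]
print(poly)  # [59, 71, 269, 71, 269, 353, 59, 353]
```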
# + [markdown] id="0s3piptHzLPf" colab_type="text"
# Registering the dataset for Detectron2
# + id="06Eyd9gOySQT" colab_type="code" colab={}
classes = ['face']
for d in ["val"]:
DatasetCatalog.register("faces_" + d, lambda d=d: get_fddb_dict(fddb_annotations if d == "val" else "null"))
MetadataCatalog.get("faces_" + d).set(thing_classes = classes)
faces_metadata = MetadataCatalog.get("faces_val")
# + id="6qyeh1uC7B7H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="4f8f52e3-70bc-4352-a887-fc3d9f7d9651"
faces_metadata
# + [markdown] id="mArzyGBgy-gL" colab_type="text"
# Visualizing annotations on images from the validation set
# + id="ZlXV1ceStVuN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="fd8dbb11-1d62-418a-f961-c15f5fa2dcca"
dataset_dicts = get_fddb_dict(fddb_annotations)
for d in random.sample(dataset_dicts, 1):
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=faces_metadata, scale=0.9)
vis = visualizer.draw_dataset_dict(d)
cv2_imshow(vis.get_image()[:, :, ::-1])
# + [markdown] id="ykrRb9Q_8SYA" colab_type="text"
# ## Models under test
# + [markdown] id="a4luE_CnSh9L" colab_type="text"
# MobileNetV2
# + id="nbeTfmnV6JG0" colab_type="code" colab={}
models_mnv2 = {
    # '800k' - default parameters - used as the reference point
'800k': {
'config': 'FrozenBN',
'pth': 'https://drive.google.com/uc?id=1-tqNKwZIFmkAbfJ1L2sibIO4PT_MwnVA',
'weights_name': 'model_0799999.pth'},
    # 'BN_800k' - BN added - shows the impact of BN with a small batch size (2)
'BN_800k': {
'config': 'BN',
'pth': 'https://drive.google.com/uc?id=1-kRDjqkMk9qXbNfzVx09VE35YzgkP1AO',
'weights_name': 'model_0799999.pth'},
    # 'BN_Mish_V2_250+F_2_50k' - best result: 250k iterations of training with Mish and BN + 50k with FrozenBN
'BN_Mish_V2_250+F_2_50k': {
'config': 'BN_Mish_V2F',
'pth': 'https://drive.google.com/uc?id=1-I6YSAs9NrORI4cFISfK_-4Yhm1twfDi',
'weights_name': 'model_0049999.pth'},
    # 'BN_Mish_V3_80+30k' - fastest training, 110k iterations
'BN_Mish_V3_80+_30k': {
'config': 'BN_Mish_V3',
'pth': 'https://drive.google.com/uc?id=1--bP5VPyqIrfrBxVZ5eLSsupIvBlQ8ef',
'weights_name': 'model_0029999.pth'},
}
# + [markdown] id="mjWyDLLmSspQ" colab_type="text"
# Faster R-CNN
# + id="84Fym-4-F7Sf" colab_type="code" colab={}
faster_rcnn_model = {
'faster_rcnn_R_50_FPN_3x' : {
'config': 'https://drive.google.com/uc?id=173XKeFyCXK909QA652WpEhQ3imlyDFKX',
'config_file' : "faster_rcnn_R_50_FPN_3x.yaml",
'pth': 'https://drive.google.com/uc?id=1-XGz6HOqJQWmDCJiJDbBl-FnK7ZxZu5z',
'weights_name': 'model_0269999.pth'
},
'scratch_faster_rcnn_R_50_FPN_gn' : {
'config': 'https://drive.google.com/uc?id=1-IX6Dshhs-rin7m1lGtTl1xNaUQXK3kN',
'config_file' : "scratch_faster_rcnn_R_50_FPN_gn.yaml",
'pth': 'https://drive.google.com/uc?id=1-dqOloknew4wN0cD5xcFhbRlrA8rSHGt',
'weights_name': 'model_0539999.pth'
}
}
# + [markdown] id="HdSxPDQbS9dM" colab_type="text"
# MTCNN
# + id="bZqwYI5iS_O-" colab_type="code" colab={}
mtcnn = MTCNN(image_size=224, margin=0, keep_all=True, device=device)
# + [markdown] id="exBWIXto8Wfd" colab_type="text"
# ## Base configuration sets for the MobileNetV2-based models
#
# + id="bzArg2Fi6JJ4" colab_type="code" colab={}
cfg_set = {
'FrozenBN':'https://drive.google.com/uc?id=1rZFzJaR_g7uYuTguTdbUuCQYD4eXLeqw',
'BN': 'https://drive.google.com/uc?id=1-doXtwe5iZHoqPzKGc2ZZbxj6Ebhxsn4',
'BN_V2':'https://drive.google.com/uc?id=1wywB8UAaOO5KZx3IS35kV-rLsvJMIse6',
'BN_Mish':'https://drive.google.com/uc?id=1-axV3KKg8-YiZZ7uDBh_2v181JoC9Nj3',
'BN_Mish_V2':'https://drive.google.com/uc?id=1WoESx5RYvpapNicpSrmMoNJeE2GVm3zK',
'BN_Mish_V3':'https://drive.google.com/uc?id=1-Kgd_2AS4EsD_ZPqP7SxkscyDjP-Qhnr',
'BN_Mish_V2F':'https://drive.google.com/uc?id=1pCwyYCjIoduro2vIKMZi5HhlpaypH0_x',
}
# + [markdown] id="1X_tPprVVIHM" colab_type="text"
# ## DefaultPredictor for the MobileNetV2-based models
# + id="Bqe6JWtK8cMf" colab_type="code" colab={}
def cfg_write(cfg,cfg_all):
for key in cfg_all.keys():
if isinstance(cfg_all[key],dict):
cfg_write(cfg[key],cfg_all[key])
else: cfg[key]=cfg_all[key]
return cfg
def set_predictor(model_name=None, models_dict=models_mnv2, cfg_dict=cfg_set ,device='cuda', cfg_DATASETS_TEST=None):
url = models_dict[model_name]['pth']
out = models_dict[model_name]['weights_name']
gdown.download(url, out, True)
# model configuration
url_cfg = cfg_dict[models_dict[model_name]['config']]
gdown.download(url_cfg, "temporary.json", True)
with open('temporary.json','r') as f:
cfg_all=json.load(f)
cfg = get_cfg()
add_mobilenet_config(cfg)
cfg = cfg_write(cfg,cfg_all)
cfg.MODEL.WEIGHTS = models_dict[model_name]['weights_name']
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set the testing threshold for this model
cfg.MODEL.DEVICE=device
if cfg_DATASETS_TEST is not None:
cfg.DATASETS.TEST = (cfg_DATASETS_TEST, )
return DefaultPredictor(cfg), cfg
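`cfg_write` above merges a nested plain dict (loaded from the downloaded JSON) into the config object, recursing into sub-dicts and overwriting leaf values while leaving untouched keys alone. Its behavior is easiest to see on plain dicts:

```python
def cfg_write(cfg, cfg_all):
    # Recursively copy leaf values from cfg_all into cfg.
    for key in cfg_all.keys():
        if isinstance(cfg_all[key], dict):
            cfg_write(cfg[key], cfg_all[key])
        else:
            cfg[key] = cfg_all[key]
    return cfg

base = {'MODEL': {'DEVICE': 'cpu', 'WEIGHTS': ''}, 'SEED': 0}
update = {'MODEL': {'DEVICE': 'cuda'}}
print(cfg_write(base, update))  # {'MODEL': {'DEVICE': 'cuda', 'WEIGHTS': ''}, 'SEED': 0}
```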
# + [markdown] id="W3BYCQm6VV3Y" colab_type="text"
# ## DefaultPredictor for Faster R-CNN
# + id="lsj7xLHRVbp5" colab_type="code" colab={}
def faster_predictor(model_name=None, models_dict=faster_rcnn_model, cfg_DATASETS_TEST=None):
url = models_dict[model_name]['pth']
out = models_dict[model_name]['weights_name']
gdown.download(url, out, True)
config = models_dict[model_name]['config']
config_file = models_dict[model_name]['config_file']
gdown.download(config, config_file, True)
cfg = get_cfg()
cfg.merge_from_file(config_file)
cfg.MODEL.WEIGHTS = out
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set the testing threshold for this model
cfg.MODEL.DEVICE='cuda'
if cfg_DATASETS_TEST is not None:
cfg.DATASETS.TEST = (cfg_DATASETS_TEST, )
return DefaultPredictor(cfg), cfg
# + [markdown] id="jhyDElRv8jeW" colab_type="text"
# ## Function measuring model evaluation time
# + id="d-hx0lW18foZ" colab_type="code" colab={}
def time_measurement(predictor, dataset):
time_ = []
for item in tqdm(dataset):
start_ = time.time()
img = cv2.imread(item["file_name"])
        if isinstance(predictor, MTCNN):  # MTCNN uses detect(); str comparison would never match
outputs = mtcnn.detect(img)
else :
outputs = predictor(img)
diff = time.time() - start_
time_.append(diff)
# Total time
total_time = np.array(time_).sum()
# Mean time
mean_diff = np.array(time_).mean()
# Frames per second
fps = 1 / mean_diff
print("Total time(sec): {:.2f}, Average(sec):{:.2f}, fps:{:.2f}\n".format(total_time, mean_diff, fps))
return {"total_time": total_time, "mean_diff": mean_diff, "fps": fps}
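The aggregation at the end of `time_measurement` is plain arithmetic over the per-image timings: total, mean, and frames per second as the reciprocal of the mean. On hypothetical timings:

```python
import numpy as np

time_ = [0.05, 0.07, 0.06]          # hypothetical per-image inference times in seconds
total_time = np.array(time_).sum()  # ~0.18
mean_diff = np.array(time_).mean()  # ~0.06
fps = 1 / mean_diff                 # ~16.67 images per second
print("Total time(sec): {:.2f}, Average(sec):{:.2f}, fps:{:.2f}".format(total_time, mean_diff, fps))
```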
# + [markdown] id="z8p6XUGs8pc5" colab_type="text"
# ## Function evaluating a model on the FDDB validation set
# + id="CjtuwMRMvcHm" colab_type="code" colab={}
def predict_on_val(dataset, predictor, model_name):
gbxs=[]
dbxs=[]
dset=dataset
if 'annotations' in dataset[0].keys():
dset=[]
for r in dataset:
dset.append({'path' : r['file_name'],'marks' : [b['bbox'] for b in r['annotations']],
'persons': ['' for b in r['annotations']]})
for item in tqdm(dset):
im = cv2.imread(item["path"])
outputs = predictor(im)
pbxs = outputs['instances'].pred_boxes.tensor.tolist()
pconfs = outputs['instances'].scores.tolist()
dbx=[[*(np.array(bx)+0.5).astype('int'),pconfs[i]] for i,bx in enumerate(pbxs)]
dbxs.append(dbx)
gbxs.append(item['marks'])
return {'gbxs':gbxs,'dbxs':dbxs,'metric':dset}
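`predict_on_val` converts each predicted float box to integer pixel coordinates by adding 0.5 and truncating (round-to-nearest for the non-negative coordinates here), then appends the detection confidence. On hypothetical predictor outputs:

```python
import numpy as np

pbxs = [[59.4, 70.8, 268.6, 352.9]]  # hypothetical float boxes from the predictor
pconfs = [0.98]                      # matching confidence scores
dbx = [[*(np.array(bx) + 0.5).astype('int'), pconfs[i]] for i, bx in enumerate(pbxs)]
# Each entry is [xmin, ymin, xmax, ymax, confidence] with rounded integer coordinates.
print([int(v) for v in dbx[0][:4]], dbx[0][4])  # [59, 71, 269, 353] 0.98
```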
# + [markdown] id="ywWdmuOXH8CF" colab_type="text"
# ## Function evaluating MTCNN on the FDDB validation set
# + id="wBrTuX-JH8m0" colab_type="code" colab={}
def mtcnn_detect(dataset, model, model_name = "MTCNN"):
gbxs=[]
dbxs=[]
dset=dataset
if 'annotations' in dataset[0].keys():
dset=[]
for r in dataset:
dset.append({'path' : r['file_name'],'marks' : [b['bbox'] for b in r['annotations']],
'persons': ['' for b in r['annotations']]})
for item in tqdm(dset):
im = cv2.imread(item["path"])
pbxs, pconfs = model.detect(im)
if type(pbxs)!=np.ndarray:
pbxs=[]
pconfs=[]
dbx=[[*(np.array(bx)+0.5).astype('int'),pconfs[i]] for i,bx in enumerate(pbxs)]
dbxs.append(dbx)
gbxs.append(item['marks'])
return {'gbxs':gbxs,'dbxs':dbxs,'metric':dset}
# + id="9wPeINyq81e1" colab_type="code" colab={}
def dict_to_file(dic, path):
f = open(path,'w')
f.write(str(dic))
f.close()
def dict_from_file(filename):
f = open(filename,'r')
data=f.read()
f.close()
return eval(data)
# + [markdown] id="nvQYb2H6857e" colab_type="text"
# # Measuring model inference time on the FDDB validation set
#
# + [markdown] id="VhpKdSSSE3Sw" colab_type="text"
# > The model configurations have the parameter
# cfg.MODEL.DEVICE = 'cuda' set by default
# + id="iUn6FCRM81hu" colab_type="code" colab={}
dataset = get_fddb_dict(fddb_annotations)
# + id="O-jR2vsi892f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 440, "referenced_widgets": ["4588cc1982fd45fd902419e6638cfc12", "58053a08d0994e8399679eca2adeaab2", "6a2dea304109495bb4b66ec29b1c79d7", "196c3e2fbf1b4d0b889ae226c1de45fb", "447a2d64621f4d67b78046f310c062a7", "2240df5eeb704f4c96484faa6818db9a", "ab60e7faf585469f87da0414d69cd528", "9d650523fd6540beb65e3d240bf80fbb", "b5c4e4c4c2e84ae49bc3a16375580b45", "<KEY>", "472c8659091f4417b58348e8dd0be738", "5dde40e463c54c499cce5ebf0a09b407", "62cec9d7001a46ab88a53b6e46aaf572", "<KEY>", "a11902d4374d4432afb28342a8df51a0", "2435ee0ca9f649d6b8ef63c74d1f470c", "<KEY>", "1ed626588e5a44d5b6cebcca579b239b", "<KEY>", "537694f814e74f5abd6f043488df7280", "0d81c5ab1d734c568065a864b338ca62", "78e18c9cbe4e4812b69a6e9108a69d91", "4f3e0f615b994eb48b076d827a1d9069", "<KEY>", "dfd62f2eb7354a87aa576ba14d34efab", "51b15b9a035f4ff8ba6bc754838d5dba", "<KEY>", "0b67bd8071e8470088132ab291014049", "11fb1fa985a245c58ed9bfa2ee3df274", "080cc3be54e6452faad9ec3d071f0c24", "<KEY>", "01572ccc02ab4624a94509b4b1808846"]} outputId="ac95422a-eb7e-44d7-e094-4e9a19e553df"
time_measurement_results = {}
for model in models_mnv2:
print(model)
predictor, _ = set_predictor(model)
time_measurement_results[model] = time_measurement(predictor, dataset=dataset)
# + id="vX8jDvcCdlm2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 229, "referenced_widgets": ["30799fe8c5e9440a856667eab67e1bc4", "27cb9a8ff9a5489b89cc048242362fa2", "a688cf4d613d4dc6afc60a0dc8118d10", "bb4e70ec55ec4167a925af1e5b477f06", "155f034ec54e4613a6133df348b88fb1", "<KEY>", "<KEY>", "00cf82822bea43eba41f3e64c2028af5", "<KEY>", "<KEY>", "<KEY>", "22ea628e3f3847bc83c284a059e28a05", "<KEY>", "31399b6ed37b4250bed34e5e43054c2d", "fd8e43e01e1f405b8ae92d423fd3aeae", "263ce970860a42d9b376b000815efcaa"]} outputId="4d0d1537-6447-4eaa-ba7d-01d81c65444d"
for model in faster_rcnn_model:
print(model)
predictor, _ = faster_predictor(model)
time_measurement_results[model] = time_measurement(predictor, dataset=dataset)
# + id="fbAOhF_mdmmV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104, "referenced_widgets": ["e444470d331a4ac88f70d8f0e0fe045e", "bb81208891f041cc9a3fdc7fbea70f19", "54861d06d7a14997a4fa6af544a7e950", "4b355e74d19b42ea97efd0e11c9eaa41", "e3d7195c46d5442e8fb879daf4d26727", "ec2c0ba481ca4ee9851b76797cc7c89b", "180c937e39804335ac78e57272d7dd82", "3f64d4196567434f890de0c2a75fb403"]} outputId="caf87832-26be-49a1-f1e3-0211ab6a750b"
time_measurement_results["MTCNN"] = time_measurement(mtcnn, dataset=dataset)
# + id="ZESL-xo3do7A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 405} outputId="7df4e1f1-afda-479d-aaea-d952d6094a34"
time_measurement_results
# + [markdown] id="zVUETo7N9Buz" colab_type="text"
# Comparing the results
# + id="WuOTU5c8895M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="ca402f17-2494-47d7-dc47-c4119d28315a"
df_time_measurement_results = pd.DataFrame.from_dict(time_measurement_results, orient='index').sort_values(by=['total_time'])
df_time_measurement_results
# + [markdown] id="lhk7z6mha8oU" colab_type="text"
# Saving the results to Google Drive
# + id="Vmt1YEgPb1K1" colab_type="code" colab={}
time_measurement_results = df_time_measurement_results.to_json(orient="table")
output_file_name = "time_measurement_results.json"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
with open(output_file, 'w') as f:
json.dump(time_measurement_results, f, indent=4)
# + id="8ARk4HBDhU_f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="c2146273-f8c4-459f-bcf8-d2fe2978c15c"
with open(output_file) as data_file:
data_loaded = json.load(data_file)
pd.read_json(data_loaded, orient="table" )
# + [markdown] id="dA9MiLqG9Kd1" colab_type="text"
# # Evaluating the models on the FDDB validation set, with mAP computed by Darek's function
# + id="m_sXKQ-n897i" colab_type="code" colab={}
FDDB_DATASET = get_fddb_dict(fddb_annotations)
# + id="O9VJ2hMH8991" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 293, "referenced_widgets": ["ed9edfc0dc9b4440a2cef1745ca97b49", "4c0a7092e4cf49b7a72ae88821feb634", "42592a19575541799f32fe14783dd3c2", "<KEY>", "35b58c6ad43744d69b16ef9596ec6f01", "<KEY>", "<KEY>", "39d56e35ae66400da36e73c50441d20c", "f16d85079e3043b1aec2822f93ef61f0", "a3b4b8de8da649aeaf41a06e8fe92550", "30d1f762b5e74773b37b573326f4f4a3", "21a7d2a32e724f8eab0a9bbde3020347", "d02f78007dfe4424b209da4615c7f2ea", "f64551b98cc848c087fae8da294c75a5", "62b055a0309e4f8b9a351c90ae35b17a", "af7ad74d2318480386eaa530befbe9e6", "<KEY>", "97780358b21449f49b20cefeece1005a", "a49caa47f9be46afae643121814fe104", "c6acdec140ee4baa8c4e1b896a18783d", "5de6e8f72e2f43d699edecf75b7fa036", "<KEY>", "<KEY>", "6095d13e993e4d5fb23e5ab42c973803", "05686e7e861e491c80b7218a580d4464", "1e104c1e1db9492b9a1be2b25edfbd33", "0785ce71f0e3478d8d3c85e1b203be8a", "<KEY>", "40cdd804e31c4c83a80ac32b00e54c87", "4f0fae52f7bc4327a1f273f1db794034", "<KEY>", "5c20d431fa1a41ccaddea990fa9ac785"]} outputId="beb49062-4b74-41b7-bc98-87600ea5fbb3"
pred = {}
for model in models_mnv2:
print(model)
predictor, cfg = set_predictor(model)
pred[model] = predict_on_val(FDDB_DATASET ,predictor, model)
# save to Google drive
output_file_name = f"unified_mAP_{model}"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
dict_to_file(pred[model], output_file)
# + id="-rk_phYWkZ4V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 155, "referenced_widgets": ["c2fdf410fa184b0693d8f5b5af588e5a", "<KEY>", "07dd406b3a164896af4aeed18e912d72", "<KEY>", "2ce6474497e04e2bb0d744b9045e014c", "<KEY>", "ccea2b43b737494fb7d54cef8f088127", "fbac761da54d44a6b13e8fd0d35bebaa", "<KEY>", "243b567dca0d49028b27638e2c979840", "<KEY>", "31ad78f804db456a8dc1ffcd3dec8034", "cfed0dfcdb544a29b9a88f4aa3a34061", "49ede886aa984feeb1f5e2797a661ff4", "45e7228856074835b6528ee73a4b3abb", "04910fd1b5af4e4383fa1618ac3e263a"]} outputId="b2375e88-c99d-4cc4-cb47-257d989bf5a6"
for model in faster_rcnn_model:
print(model)
predictor, cfg = faster_predictor(model)
pred[model] = predict_on_val(FDDB_DATASET ,predictor, model)
# save to Google drive
output_file_name = f"unified_mAP_{model}"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
dict_to_file(pred[model], output_file)
# + id="nKf1aP2gknuL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67, "referenced_widgets": ["72a0559984de493f805348eb5d5e457f", "e9d45c5d47644fe48274523517208228", "afb3dfbfabe04c149d69edac99ae1bff", "9d11ce03187f4bfb94e3bf60dca71e20", "ac3e031c73a04330885eac52cc5a0417", "05f0c758e1f248e6b4ee9557b5508118", "021a34400d90406ca9b9a1a62d21e1dc", "0ebb4af53d354fea9878c50cd2c2167f"]} outputId="aca048da-e4a3-4e8d-f4be-da39c0187b4f"
model = "MTCNN"
pred[model] = mtcnn_detect(FDDB_DATASET , mtcnn , model)
output_file_name = f"unified_mAP_{model}"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
dict_to_file(pred[model], output_file)
# + [markdown] id="SZ7qO1oWAKaD" colab_type="text"
# # mAP plots for the tested models
# + id="hV57OHAf89_5" colab_type="code" colab={}
def model_mAP(pdict, model_name, title=""):
m, d = mAP(gbxs=pdict[model_name]['gbxs'], dbxs=pdict[model_name]['dbxs'], data=True)
output_file_name = f"unified_mAP_{title}_{model_name}"
plot_mAP(m,d,['All 0.50','mAP','large','medium','small'],1,title+model_name,file=f"unified_mAP_{title}_{model_name}")
return m,d
# + id="xU2Th145ALDo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="908956a8-5379-4798-cdc1-4c5666d8ead6"
all_models_mAP = {}
pred.keys()
# + [markdown] id="t47S4k3eIQEj" colab_type="text"
# ## Model: 800k
# + id="xJReNeeGALGQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="49118da4-e4f8-4d37-f11d-c0c5b434ce0f"
modelshow = list(pred.keys())[0]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + [markdown] id="fz9HqHR4IPEo" colab_type="text"
# ## Model: BN_800k
#
# + id="L-Q2om6zAS1z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="90d86f4f-0901-4a77-b39a-ca0b7d0a4170"
modelshow = list(pred.keys())[1]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + [markdown] id="unzPlqEAIWh7" colab_type="text"
# ## Model: BN_Mish_V2_250+F_2_50k
# + id="eNy2t4iYAS4a" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="c1d766ce-614d-4991-b633-7640e71d0187"
modelshow = list(pred.keys())[2]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + [markdown] id="UrR9Z73HIbdg" colab_type="text"
# ## Model: BN_Mish_V3_80+_30k
# + id="7Z4TylkeAS7C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="894a1cb9-c954-49a5-fffc-d91688d8113d"
modelshow = list(pred.keys())[3]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + [markdown] id="BHv0hSchIddE" colab_type="text"
# ## Model: faster_rcnn_R_50_FPN_3x
# + id="1VbIWdJoAS_c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="fdc0ba85-7e6e-42cd-b5f6-18e590883b2d"
modelshow = list(pred.keys())[4]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + [markdown] id="5uEQDSNrIhcn" colab_type="text"
# ## Model: scratch_faster_rcnn_R_50_FPN_gn
# + id="Qhnb-Xe3uHVR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="037dceb9-05a2-4552-e35c-f4785070b272"
modelshow = list(pred.keys())[5]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + [markdown] id="_SywYJ2CsnZo" colab_type="text"
# ## Model: MTCNN
# + id="VRt3nu0NsrR6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 732} outputId="cea5436e-c470-44c0-d72b-7a8628ceae43"
modelshow = list(pred.keys())[6]
print("Model:", modelshow)
all_models_mAP[modelshow] = model_mAP(pred, modelshow, title="FDDB_mAP_model_")
# + id="CsKFTlKxjXAa" colab_type="code" colab={}
# !mv unified_mAP*.png ./drive/'My Drive'/FDDB_OUTPUT_DIR/
# + [markdown] id="3Uyq104ZAY06" colab_type="text"
# Saving results to Google Drive
# + id="cI1-jMMrATBw" colab_type="code" colab={}
output_file_name = "unified_mAP_all_models"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
dict_to_file(all_models_mAP, output_file)
# + id="alMzjZ-ZIy12" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="8cfdff88-de08-4a0b-93fa-c6198394ebed"
all_models_mAP.keys()
# + id="PW6V8fEvAS9W" colab_type="code" colab={}
DD_mAP = {}
for item in all_models_mAP.keys():
    modeldict = {}  # create a fresh dict per model; a single shared dict would make every DD_mAP entry point to the same object
    for i in all_models_mAP[item][0].keys():
        modeldict[i] = {
            "AP": all_models_mAP[item][0][i][0],
            "Recall": all_models_mAP[item][0][i][1],
            "IoU": all_models_mAP[item][0][i][2],
        }
    DD_mAP[item] = modeldict
# + id="joXrZmRpALIl" colab_type="code" colab={}
output_file_name = "DD_mAP_all_models"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
dict_to_file(DD_mAP, output_file)
# + id="XqjY6Ztm7ttc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="f6bf247b-3a0b-45d7-dafe-4d12c4408394"
from collections import OrderedDict
DD_mAP2 = OrderedDict()
for item in all_models_mAP.keys():
DD_mAP2[item] = ({i:all_models_mAP[item][0][i][0] for i in ['mAP', 'All 0.50','large','medium','small']})
df_unified_mAP = pd.DataFrame.from_dict(DD_mAP2, orient='index').sort_values(by=['All 0.50'], ascending=False)
df_unified_mAP
# + id="Pv1ygY4D1XAC" colab_type="code" colab={}
df_unified_mAP_results = df_unified_mAP.to_json(orient="table")
output_file_name = "unified_mAP_dataframe.json"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
with open(output_file, 'w') as f:
json.dump(df_unified_mAP_results, f, indent=4)
# + id="EwmSRdeX8d5b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="9811661d-c39a-4c67-e778-4078dee1ccff"
with open(output_file) as data_file:
data_loaded = json.load(data_file)
pd.read_json(data_loaded, orient="table" )
# + [markdown] id="XzwU9RGNAdf_" colab_type="text"
# # Evaluation on FDDB using COCOEvaluator and inference_on_dataset
#
# + id="kgYvfw83Acy7" colab_type="code" colab={}
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
# + id="5Ep12r_GAc1t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="d6f8be85-6bd6-4689-edf5-8cdf6eaddd8a"
AP_VAL = {}
for model in models_mnv2:
print(model)
destination = os.path.join(OUTPUT_DIR_PATH, model)
if not os.path.exists(destination):
os.makedirs(destination)
predictor, cfg = set_predictor(model, cfg_DATASETS_TEST="faces_val")
print(destination)
evaluator = COCOEvaluator(cfg.DATASETS.TEST[0], cfg, True, output_dir=destination)
val_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
inference_on_dataset(predictor.model, val_loader, evaluator)
AP_VAL[model] = evaluator.evaluate()
# + id="h4HxhwbWS5Ww" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3b6b65d2-b03c-4c0c-a815-bbf2dba5ccfd"
for model in faster_rcnn_model:
print(model)
destination = os.path.join(OUTPUT_DIR_PATH, model)
if not os.path.exists(destination):
os.makedirs(destination)
predictor, cfg = faster_predictor(model, cfg_DATASETS_TEST="faces_val")
print(destination)
evaluator = COCOEvaluator(cfg.DATASETS.TEST[0], cfg, True, output_dir=destination)
val_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
inference_on_dataset(predictor.model, val_loader, evaluator)
AP_VAL[model] = evaluator.evaluate()
# + id="xgA-cLhyAc3_" colab_type="code" colab={}
output_file_name = "AP_VAL_COCOEvaluator"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
dict_to_file(AP_VAL, output_file)
# + [markdown] id="NWyVAlUQBJ7A" colab_type="text"
# Comparing the results
# + id="Z1xHInuFAc6E" colab_type="code" colab={}
AP_FDDB = {}
for item in AP_VAL.keys():
AP_FDDB[item] = AP_VAL[item]['bbox']
df_AP_FDDB = pd.DataFrame.from_dict(AP_FDDB, orient='index').sort_values(by=['AP50'], ascending=False)
# + id="wPl4jL2DBKuR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 237} outputId="1f661001-6ffc-42e4-d3cb-579a4a8e7b77"
df_AP_FDDB
# + [markdown] colab_type="text" id="GIIsIzkJ2YvK"
# Saving results to Google Drive
# + colab_type="code" id="AIU8NR792YvO" colab={}
df_AP_FDDB_results = df_AP_FDDB.to_json(orient="table")
output_file_name = "df_AP_FDDB.json"
output_file = os.path.join(OUTPUT_DIR_PATH, output_file_name)
with open(output_file, 'w') as f:
json.dump(df_AP_FDDB_results, f, indent=4)
# + id="ucHAtI6A-hLP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 237} outputId="bf9fe1b5-0b40-4aa3-e5ad-ef1475db6dc3"
with open(output_file) as data_file:
data_loaded = json.load(data_file)
pd.read_json(data_loaded, orient="table" )
# + [markdown] colab_type="text" id="boLRW0elLrSX"
# <br>
#
# < [6. Model comparison](06_00_Porownanie.ipynb) | [6.1. Results on the FDDB dataset](06_01_FDDB_TEST.ipynb) | [6.2. Results on the WIDER FACE dataset](06_02_WIDERFACE_TEST.ipynb) >
# + [markdown] colab_type="text" id="M_LMmPHLxodb"
#
# ---
#
# [Table of contents](https://github.com/DarekGit/FACES_DNN/blob/master/notebooks/Praca_Dyplomowa.ipynb) | [1. Introduction](01_00_Wstep.ipynb) | [2. Detection evaluation metrics](02_00_Miary.ipynb) | [3. Datasets](03_00_Datasety.ipynb) | [4. Overview of detection methods](04_00_Modele.ipynb) | [5. Face detection with selected deep neural network architectures](05_00_Modyfikacje.ipynb) | [6. Model comparison](06_00_Porownanie.ipynb) | [7. Model export](07_00_Eksport_modelu.ipynb) | [8. Summary and conclusions](08_00_Podsumowanie.ipynb) | [Bibliography](Bibliografia.ipynb)
#
#
# ---
| notebooks/06_01_FDDB_TEST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # IFRS17 CSM Waterfall Chart Notebook
# To run this notebook and get all the outputs below, Go to the **Cell** menu above, and then click **Run All**.
#
# ## About this notebook
#
# This notebook demonstrates the usage of the [ifrs17sim] project in [lifelib], by building and running a model and drawing a waterfall graph of CSM amortization for a single model point.
#
#
# [ifrs17sim]: https://lifelib.io/projects/ifrs17sim.html
# [lifelib]: https://lifelib.io
#
# <div class="alert alert-warning">
#
# **Warning:**
#
# The primary purpose of this model is to showcase the capability of [lifelib] and its base system [modelx], and less attention has been paid to the accuracy of the model or the compliance with the accounting standards.
# At the very least, the following items are identified as over-simplifications or missing implementation.
#
# <ul>
# <li>The timing of cashflows is either the beginning or end of each step.</li>
# <li>All expenses are included in insurance cashflows.</li>
# <li>Loss component logic is not yet incorporated, so `CSM` can be negative.</li>
# <li>Coverage unit is set to sum assured.</li>
# <li>Amortization of acquisition cash flows is not yet implemented.</li>
# <li>All insurance cashflows are considered non-market sensitive, i.e. no TVOG is considered.</li>
# <li>Risk adjustment is not yet modeled.</li>
# </ul>
#
# </div>
#
# [modelx]: http://docs.modelx.io
# ## How to use Jupyter Notebook
# Jupyter Notebook enables you to run a Python script piece by piece. You can run each piece of code (called a "cell") by putting the cursor in the cell and pressing **Shift + Enter**, and get the output right below the input code of the cell.
#
# If you want to learn more about Jupyter Notebook, [this tutorial] will help you. There are also plenty of other resources on the internet as Jupyter Notebook is quite popular.
#
# [this tutorial]: https://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Running%20Code.ipynb
# ## The entire script
# Below is the entire script of this example. The entire script is broken down into several parts in different cells, and each part is explained below. The pieces of code in the cells below are executable one after another from the top.
#
# ```python
# # %matplotlib notebook
# import collections
# import pandas as pd
# from draw_charts import draw_waterfall, get_waterfalldata
# import ifrs17sim
#
# model = ifrs17sim.build()
# proj = model.OuterProj[1]
#
# df = get_waterfalldata(
# proj,
# items=['CSM',
# 'IntAccrCSM',
# 'AdjCSM_FlufCF',
# 'TransServices'],
# length=15,
# reverseitems=['TransServices'])
#
# draw_waterfall(df)
# ```
# ## Initial set-up
#
# The first line `%matplotlib notebook`, is for specifying drawing mode.
#
# The next few lines are import statements, by which functions defined in other modules become available in this script.
#
# The `ifrs17sim` and `draw_charts` modules are in the project directory of this project. To see what files are in the project directory, select **Open** from the **File** menu in the toolbar above.
# %matplotlib notebook
import collections
import pandas as pd
import ifrs17sim
from draw_charts import draw_waterfall, get_waterfalldata
# ## Building the model
#
# The next line is to create a model from `build` function defined in `ifrs17sim` module which has just been imported.
#
# By supplying `True` to `load_saved` parameter of the `build` function, the input data is read from `ifrs17sim.mx`, the 'pickled' file to save loading time. To read input from `input.xlsm`, call `build` with `load_saved=False` or without any parameter because `False` is the default value of `load_saved`.
#
# If you run this code multiple times, the previous model is renamed to `ifrs17sim_BAK*`, and a new model is created and returned as `model`.
model = ifrs17sim.build(load_saved=True)
# To see what space is inside `model`, execute `model.spaces` in an empty cell.
# ```python
# model.spaces
# ```
# ## Calculating CSM
#
# In `model` there is a space called `OuterProj`, among other spaces. `OuterProj` is parametrized by Policy ID, i.e. each of the spaces with parameters corresponds to a projection of one policy. For example, `model.OuterProj[1]` returns the projection of policy ID 1, and `model.OuterProj[171]` returns the projection of policy ID 171.
#
# The first line below sets `proj` as a shorthand for the projection of Policy ID 1. To see what cells are in `proj`, execute `proj.cells` in an empty cell.
# ```python
# proj.cells
# ```
#
# You can change the sample policy to output by supplying some other ID.
proj = model.OuterProj[1]
# ## Exporting values into DataFrame
#
# The code below is to construct a DataFrame object for drawing the waterfall chart, from the cells that make up bars in the waterfall chart.
#
# `TransServices` is passed to the `reverseitems` parameter to reverse the sign of its values, as we want to draw it as a reduction that pushes down the CSM balance.
df = get_waterfalldata(
proj,
items=['CSM',
'IntAccrCSM',
'AdjCSM_FlufCF',
'TransServices'],
length=15,
reverseitems=['TransServices'])
# The table below shows the DataFrame values.
df
# ## Draw waterfall chart
#
# The last line draws the waterfall graph. The drawing function is defined in the separate module `draw_charts` in this project directory and was imported in the first part of this script.
draw_waterfall(df)
| lifelib/projects/ifrs17sim/ifrs17sim_csm_waterfall.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python
# () -> Tuples<br>
# [] -> Lists<br>
# {} -> Dictionaries<br>
# {} -> Sets<br>
# In Python, elements are accessed by index, e.g. `array[5]`, as in other array-like types.<br>
# From the front, indices start at 0, 1, 2, 3, 4.<br>
# From the back, indices start at -1, -2, -3, -4, -5.
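# The indexing rules above can be sketched with a short list example (the same
# indices also work for tuples and strings):

```python
nums = [10, 20, 30, 40, 50]
print(nums[0])    # front index 0  -> 10
print(nums[4])    # front index 4  -> 50
print(nums[-1])   # back index -1  -> 50
print(nums[-5])   # back index -5  -> 10
print(nums[1:3])  # slice of indices 1..2 -> [20, 30]
```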
# # Arithmetic Operator
import numpy as np
num1 = 10
num2 = 20
print("Num1 + Num2 = ", num1 + num2)
print("Num1 - Num2 = ", num1 - num2)
print("Num1 * Num2 = ", num1 * num2)
print("Num1 / Num2 = ", num1 / num2)
print("5^3 = ", 5**3)
print("20 % 3 = ", 20%3) # remainder
print("22//7 = ", 22//7)
print("3.8//2 = ", 3.8//2) # floor division (quotient)
# # Assignment Operator
num3 = num1 + num2
print(num3)
num3 += num2 # num3 = num3 + num2 it can be **=, %=, //=, /=
print(num3)
# # Comparison Operator
print("num3 & num2",num3, num2)
print("is num3 > num2",num3 > num2)
print("is num3 == num2",num3 == num2)
print("is num3 != num2",num3 != num2)
# # Logical Operator
# +
x = True # 1
y = False # 0
print("X and Y", x and y)
print("X or Y", x or y)
print("not of Y", not y)
# -
# # Bitwise Operator
num4 = 6 # 110
num5 = 2 # 010
print("num4 or num5 = ", num4 | num5)
print("num4 & num5 = ", num4 & num5)
print("num4 x-or num5 = ", num4 ^ num5)
print("num4 and num5", num4 and num5) # logical 'and' (not bitwise): returns the second operand because num4 is truthy
print("num4 >> 3 time", num4 >> 3)
print("num5 << 2 time", num5 << 2)
# # Identity Operator
# # Membership Operator
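# Identity operators (`is`, `is not`) compare object identity, while membership
# operators (`in`, `not in`) test containment; a minimal sketch of both:

```python
a = [1, 2, 3]
b = a          # b refers to the same list object as a
c = [1, 2, 3]  # c is an equal but distinct object
print(a is b)      # True  - identity: same object
print(a is c)      # False - identity: different objects
print(a == c)      # True  - equality: same values
print(2 in a)      # True  - membership
print(5 not in a)  # True  - negated membership
```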
| 1_Operator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Model: Gender Only
# https://www.kaggle.com/c/titanic/overview
# This is a logistic regression model using the simplified v4 Transformed Datasets.
# Features included in this model are:
#
# Numerical:
#
# * age
# * sibsp
# * parch
# * fare
#
# Categorical:
#
# * pclass
# * sex
# * ticket
# * embarked
# The numerical features are scaled.
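# The notebook imports `sklearn.preprocessing.scale` for this; a minimal NumPy
# sketch of the same zero-mean, unit-variance transformation, using made-up
# sample values:

```python
import numpy as np

ages = np.array([22.0, 38.0, 26.0, 35.0, 54.0])  # hypothetical values
scaled = (ages - ages.mean()) / ages.std()       # what scale() computes per column
print(scaled.mean(), scaled.std())               # approximately 0 and 1
```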
# # Initialization
# %run init.ipynb
# +
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import scale
import great_expectations as ge
from progressbar import ProgressBar
RANDOM_STATE = 42
# -
# ## Extract Clean Data
# **Separate data into X (features) and y (label)**
from data.data import (transform_X_numerical,
transform_X_categorical,
transform_X)
Xy = pd.read_csv('../data/processed/train_v4.csv', index_col='passengerid', dtype={'pclass':str})
Xy
# ## Train Test Split Data
all_columns = ['age', 'fare', 'family_size', 'is_child', 'is_traveling_alone',
'sex_male', 'embarked_Q', 'embarked_S', 'title_Miss', 'title_Mr',
'title_Mrs', 'age_bin_baby', 'age_bin_child', 'age_bin_senior',
'age_bin_student', 'age_bin_teen', 'age_bin_young_adult', 'fare_bin_q2',
'fare_bin_q3', 'fare_bin_q4', 'pclass_2', 'pclass_3']
# +
important_features =['title_Mr',
'title_Mrs',
'pclass_3',
'age_bin_baby',
'family_size',
# 'is_child',
'fare_bin_q2',
# 'fare_bin_q4',
# 'embarked_S',
# 'age_bin_young_adult'
]
#important_features = all_columns
X_all = transform_X(Xy.drop(['name'], axis=1))
X = X_all[important_features]
y = Xy['survived']
X.shape
# -
# ### Split data into train and test.
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_STATE)
y_test = y_test.to_frame()
print(f'Number of sample in training data = {len(X_train)}')
print(f'Number of sample in test data = {len(X_test)}')
# -
# ### Logistic Regression with Age
# +
X.columns
model = LogisticRegression(random_state=RANDOM_STATE, max_iter=500, fit_intercept=True,
                           penalty='l2')  # l1_ratio applies only with penalty='elasticnet', so it is dropped here
model.fit(X_train, y_train)
y_pred, predicted_accuracy_score, cv_scores = pm.calc_model_rst_table_metrics(model, X_train, y_train, X_test, y_test)
# -
# # Feature Selection
from yellowbrick.model_selection import FeatureImportances
viz = FeatureImportances(model, stack=False, relative=False, absolute=True, size=(1080, 720))
viz.fit(X, y);
viz.show();
# +
fi = (pd.DataFrame(data = viz.feature_importances_,
index = viz.features_,
columns=['Feature Importance']
)
.sort_values(by='Feature Importance', ascending=False)
)
fi.iloc[0:10].index.tolist()
with pd.option_context('display.max_rows', 20):
fi.reset_index()
# -
# # Prepare Submission
X_holdout = pd.read_csv('../data/processed/holdout.csv', index_col='passengerid', dtype={'pclass':str})
X_holdout
X_test_kaggle_public = transform_X(X_holdout).reindex(X_test.columns, axis=1).fillna(0)
X_test_kaggle_public
# +
model
X_test_kaggle_public.columns
# +
y_pred = (pd.Series(model.predict(X_test_kaggle_public),
                    index=X_test_kaggle_public.index, name='Survived').to_frame().sort_index()
          )
y_pred.index.names = ['PassengerId']
y_pred
# -
y_submission = (pd.read_csv('../data/raw/gender_submission.csv')
.set_index('PassengerId')
)
y_submission
# +
filename = 'logreg_model_4.csv'
y_pred.to_csv(filename)
y_pred_file = (pd.read_csv(filename)
.set_index('PassengerId')
)
assert (y_pred_file.index == y_submission.index).all()
assert y_pred_file.index.names == y_submission.index.names
assert (y_pred_file.columns == y_submission.columns).all()
# -
assert (y_pred.index == y_submission.index).all()
assert y_pred.index.names == y_submission.index.names
assert (y_pred.columns == y_submission.columns).all()
# # Simplify Model
# +
from yellowbrick.features import Rank1D
# Instantiate the 1D visualizer with the Shapiro ranking algorithm
visualizer = Rank1D(algorithm='shapiro', size=(1000,1000))
visualizer.fit(X, y, ) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show(); # Finalize and render the figure
# -
# https://www.districtdatalabs.com/visualize-data-science-pipeline-with-yellowbrick
from cycler import cycler
import matplotlib as mpl
from yellowbrick.model_selection import FeatureImportances
# +
# Create a new figure
mpl.rcParams['axes.prop_cycle'] = cycler('color', ['red'])
fig = plt.gcf()
fig.set_size_inches(20,20)
ax = plt.subplot(311)
labels = X.columns
viz = FeatureImportances(model, ax=ax, labels=labels, relative=False, absolute=True)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.grid(False)
# Fit and display
viz.fit(X, y)
viz.poof()
# -
| notebooks/model__logreg_model_5__2019-11-1.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Q#
# language: qsharp
# name: iqsharp
# ---
# # Truth Tables Kata
#
# This kata provides an introduction into representing Boolean functions in terms of integers, in which each bit represents a truth value for some input assignment.
#
# * Boolean function manipulation based on integers is discussed in the book Hacker's Delight by <NAME>.
#
# Each task is wrapped in one operation preceded by the description of the task.
# Your goal is to fill in the blank (marked with `// ...` comments)
# with some Q# code that solves the task. To verify your answer, run the cell using Ctrl+Enter (⌘+Enter on macOS).
# This tutorial teaches you how to represent Boolean functions as
# integers. We use the bits in the binary integer representation
# as truth values in the truth table of the Boolean function.
#
# Formally, a Boolean function is a function $f(x) : \{0,1\}^n \rightarrow \{0,1\}$
# that takes an $n$-bit input, called input assignment, and produces
# a 1-bit output, called function value or truth value.
#
# We can think of an $n$-variable Boolean function as an integer with at
# least $2^n$ binary digits. Each digit represents the truth value for
# each of the $2^n$ input assignments. The least-significant bit represents
# the assignment 00...00, the next one - 00...01, and so on, and
# the most-significant bit represents 11...11.
#
# In Q# we can use the `0b` prefix to specify integers in binary notation,
# which is useful when describing the truth table of a Boolean function.
# For example, the truth table of the 2-input function ($x_1 \wedge x_2$) can be
# represented by the integer `0b1000 = 8`.
#
# Here is how you would get this representation:
#
# <table style="border:1px solid">
# <col width=50>
# <col width=50>
# <col width=100>
# <col width=150>
# <tr>
# <th style="text-align:center; border:1px solid">$x_2$</th>
# <th style="text-align:center; border:1px solid">$x_1$</th>
# <th style="text-align:center; border:1px solid">$f(x_1, x_2)$</th>
# <th style="text-align:center; border:1px solid">Bit of the truth table</th>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid">Least significant</td>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid">1</td>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid"></td>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">1</td>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid">0</td>
# <td style="text-align:center; border:1px solid"></td>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">1</td>
# <td style="text-align:center; border:1px solid">1</td>
# <td style="text-align:center; border:1px solid">1</td>
# <td style="text-align:center; border:1px solid">Most significant</td>
# </tr>
# </table>
#
# Since the number of bits in a Q# integer is always the same, we need to
# specify the number of variables explicitly. Therefore, it makes sense
# to introduce a user defined type for truth tables.
# Here is its definition:
#
# newtype TruthTable = (bits : Int, numVars : Int);
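# The encoding described above is plain bit manipulation and can be sketched in
# a few lines of ordinary Python (for illustration of the concept only; the kata
# tasks themselves are solved in Q#):

```python
def truth_table(f, num_vars):
    """Pack the truth values of f into an int; bit k holds f at assignment k."""
    bits = 0
    for k in range(2 ** num_vars):
        x = [(k >> i) & 1 for i in range(num_vars)]  # x[0] is x1 (least significant)
        if f(x):
            bits |= 1 << k
    return bits

print(bin(truth_table(lambda x: x[0] & x[1], 2)))  # -> 0b1000, as in the table above
```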
# ### Task 1. Projective functions (elementary variables)
# **Goal:**
# Describe the three projective functions $x_1$, $x_2$, $x_3$ as 3-input
# functions, represented by integers. Note that we follow the
# convention that $x_1$ is the least-significant input.
#
# > **Example:**
# The function $x_1$ (least-significant input) is given as an
# example. The function is true for assignments 001, 011, 101,
# and 111.
# +
%kata T1_ProjectiveTruthTables
open Quantum.Kata.TruthTables;
function ProjectiveTruthTables () : (TruthTable, TruthTable, TruthTable) {
let x1 = TruthTable(0b10101010, 3);
let x2 = TruthTable(0, 0); // Update the value of $x_2$ ...
let x3 = TruthTable(0, 0); // Update the value of $x_3$ ...
return (x1, x2, x3);
}
# -
# ### Task 2. Compute AND of two truth tables
# **Goal:**
# Compute a truth table that computes the conjunction (AND)
# of two truth tables. Find a way to perform the computation
# directly on the integer representations of the truth tables,
# i.e., without accessing the bits individually.
#
# <br/>
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# You can use bit-wise operations in Q# for this task. See
# <a href="https://docs.microsoft.com/quantum/language/expressions#numeric-expressions">Q# Numeric Expressions</a>.
# </details>
# +
%kata T2_TTAnd
open Quantum.Kata.TruthTables;
function TTAnd (tt1 : TruthTable, tt2 : TruthTable) : TruthTable {
// ...
}
# -
# ### Task 3. Compute OR of two truth tables
# **Goal:**
# Compute a truth table that computes the disjunction (OR)
# of two truth tables.
# +
%kata T3_TTOr
open Quantum.Kata.TruthTables;
function TTOr (tt1 : TruthTable, tt2 : TruthTable) : TruthTable {
// ...
}
# -
# ### Task 4. Compute XOR of two truth tables
# **Goal:**
# Compute a truth table that computes the exclusive-OR (XOR)
# of two truth tables.
# +
%kata T4_TTXor
open Quantum.Kata.TruthTables;
function TTXor (tt1 : TruthTable, tt2 : TruthTable) : TruthTable {
// ...
}
# -
# ### Task 5. Compute NOT of a truth table
# **Goal:**
# Compute a truth table that computes negation of a truth
# table.
#
# <br/>
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Be careful not to set bits in the integer that are out-of-range
# in the truth table.
# </details>
#
# +
%kata T5_TTNot
open Quantum.Kata.TruthTables;
function TTNot (tt : TruthTable) : TruthTable {
// ...
}
# -
# ### Task 6. Build if-then-else truth table
# **Goal:**
# Compute the truth table of the if-then-else function `ttCond ? ttThen | ttElse`
# (`if ttCond then ttThen else ttElse`) by making use of the truth table operations
# defined in the previous four tasks.
# +
%kata T6_IfThenElseTruthTable
open Quantum.Kata.TruthTables;
function TTIfThenElse (ttCond : TruthTable, ttThen : TruthTable, ttElse : TruthTable) : TruthTable {
// ...
}
# -
# ### Task 7. Find all true input assignments in a truth table
# **Goal:**
# Return an array that contains all input assignments in a truth table
# that have a true truth value. These input assignments are called minterms.
# The order of assignments in the return does not matter.
#
# > You could make use of Q# library functions to implement this operation in a
# single return statement without implementing any helper operations. Useful
# Q# library functions to complete this task are `Mapped`, `Filtered`, `Compose`,
# `Enumerated`, `IntAsBoolArray`, `EqualB`, `Fst`, and `Snd`.
#
# > **Example:**
# The truth table of 2-input OR is `0b1110`, i.e., its minterms are `[1, 2, 3]`.
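# In Python terms (for illustration of the concept only; the task itself is
# solved in Q#), the minterms are simply the positions of the set bits in the
# integer representation:

```python
def minterms(bits, num_vars):
    """Return the input assignments whose truth value is 1."""
    return [k for k in range(2 ** num_vars) if (bits >> k) & 1]

print(minterms(0b1110, 2))      # -> [1, 2, 3], the minterms of 2-input OR
print(minterms(0b10101010, 3))  # -> [1, 3, 5, 7], the minterms of x1
```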
# +
%kata T7_AllMinterms
open Quantum.Kata.TruthTables;
function AllMinterms (tt : TruthTable) : Int[] {
// ...
}
# -
# ### Task 8. Apply truth table as a quantum operation
# **Goal:**
# Apply the X operation on the target qubit, if and only if
# the classical state of the controls is a minterm of the truth table.
# +
%kata T8_ApplyFunction
open Quantum.Kata.TruthTables;
operation ApplyXControlledOnFunction (tt : TruthTable, controls : Qubit[], target : Qubit) : Unit is Adj {
// ...
}
| TruthTables/TruthTables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import xml.etree.ElementTree as et
xtree = et.parse("ecml_pkdd_learning_dataset.xml")
xroot = xtree.getroot()
# +
import re
import numpy as np
from numpy import array
import csv
import random
import pandas as pd
docs=[]
labels={'Valid':0, 'XSS':1, 'SqlInjection':2, 'LdapInjection':3,
'XPathInjection':4, 'PathTransversal':5, 'OsCommanding':6, 'SSI':7}
max_length=0
for node in xroot:
sample_id = node.attrib.get("id")
request = node.find('{http://www.example.org/ECMLPKDD}request')
clazz = node.find('{http://www.example.org/ECMLPKDD}class')
type_ = clazz.find('{http://www.example.org/ECMLPKDD}type')
method = request.find('{http://www.example.org/ECMLPKDD}method')
protocol = request.find('{http://www.example.org/ECMLPKDD}protocol')
uri = request.find('{http://www.example.org/ECMLPKDD}uri')
query = request.find('{http://www.example.org/ECMLPKDD}query')
headers = request.find('{http://www.example.org/ECMLPKDD}headers')
if query is not None:
query_text = "?"+query.text
else:
query_text = ""
headers = request.find('{http://www.example.org/ECMLPKDD}headers')
body = request.find('{http://www.example.org/ECMLPKDD}body')
if body is not None:
body_text= body.text
else:
body_text = ""
request_text = method.text+" "+uri.text+query_text+" "+protocol.text+"\n"+headers.text+"\n"
if len(body_text)>1:
request_text = request_text+ "\n"+body_text+"\n"
label = labels[type_.text]
docs.append( (request_text,str(label)) )
if len(request_text)>max_length:
max_length=len(request_text)
df = pd.DataFrame.from_records(docs,columns=['request','label'])
df['label'] = df['label'].apply(str)  # assign the result; apply() does not modify in place
train, validate, test = np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))])
#print(len(df.loc[df['label'] != '0']))
print("Training data length: "+str(len(train['request'])))
print("Validate data length: "+str(len(validate['request'])))
print("Test data length: "+str(len(test['request'])))
train.to_csv('../_train.csv',header=False,index=False,quoting=csv.QUOTE_ALL,columns=['label','request'])
validate.to_csv('../_validate.csv',header=False,index=False,quoting=csv.QUOTE_ALL,columns=['label','request'])
test.to_csv('../_test.csv',header=False,index=False,quoting=csv.QUOTE_ALL,columns=['label','request'])
print("MAX input length: "+str(max_length))
#print number of samples per class in training data
train['label'].value_counts()
| ecml_pkdd/tranform_ECMLPKDD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from matplotlib import animation
import matplotlib.pyplot as plt
from IPython.display import HTML
import seaborn as sns
import pandas as pd
from regularization import *
from itertools import combinations_with_replacement
sns.set()
df = pd.read_csv('TempLinkoping2016.csv')
df.head()
X = df.iloc[:, 0:1].values
Y = df.iloc[:, 1].values
def train_gradient_mean_square(epoch, X, Y, learning_rate, degree, regularization, alpha = 0.0005, b = 0):
n_features = X.shape[1]
combs = [combinations_with_replacement(range(n_features), i) for i in range(0, degree + 1)]
flat_combs = [item for sublist in combs for item in sublist]
X_new = np.empty((X.shape[0], len(flat_combs)))
for i, index_combs in enumerate(flat_combs):
X_new[:, i] = np.prod(X[:, index_combs], axis=1)
n_output_features = len(flat_combs)
X = X_new
m = np.zeros(X.shape[1])
for i in range(epoch):
y_hat = X.dot(m) + b
if regularization:
m_gradient = -(Y - y_hat).dot(X) + regularization(alpha, m,grad=True)
b_gradient = np.sum(-(Y - y_hat)) + regularization(alpha, b,grad=True)
else:
m_gradient = -(Y - y_hat).dot(X)
b_gradient = -np.sum((Y - y_hat))
m -= learning_rate * m_gradient
b -= learning_rate * b_gradient
return X_new, m, b
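# For intuition, the feature-expansion step inside the function above turns a
# single column x into the polynomial columns [1, x, x^2, ...]. A small
# stand-alone check with made-up values (demo names chosen so they do not clash
# with the notebook's X):

```python
from itertools import combinations_with_replacement
import numpy as np

X_demo = np.array([[2.0], [3.0]])
degree_demo = 3
combs_demo = [combinations_with_replacement(range(X_demo.shape[1]), i)
              for i in range(degree_demo + 1)]
flat_demo = [c for sub in combs_demo for c in sub]
# np.prod over zero selected columns is 1, which yields the constant term
X_poly = np.column_stack([np.prod(X_demo[:, list(c)], axis=1) for c in flat_demo])
print(X_poly)  # rows are [1, x, x**2, x**3]
```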
X_new, m, b= train_gradient_mean_square(1000, X, Y, 0.001, 100,None)
plt.scatter(X[:,0],Y)
plt.plot(X,X_new.dot(m) + b, c='red')
plt.show()
# +
fig = plt.figure(figsize=(10,5))
ax = plt.axes()
ax.scatter(X[:,0],Y, c='b')
ax.set_xlabel('k: %d'%(0))
line, = ax.plot([],[], lw=2, c='r')
def iteration_k(k):
    # the varying parameter is the polynomial degree k, not the number of epochs
    X_new, m, b = train_gradient_mean_square(1000, X, Y, 0.001, k, None)
    line.set_data(X, X_new.dot(m) + b)
    ax.set_xlabel('k: %d' % (k))
    return line, ax
anim = animation.FuncAnimation(fig, iteration_k, frames=100, interval=200)
anim.save('animation-poly-k-regression.gif', writer='imagemagick', fps=10)
# -
| regression/polynomial-regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.053302, "end_time": "2021-06-06T09:21:34.762385", "exception": false, "start_time": "2021-06-06T09:21:34.709083", "status": "completed"} tags=[]
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + papermill={"duration": 0.902332, "end_time": "2021-06-06T09:21:35.691456", "exception": false, "start_time": "2021-06-06T09:21:34.789124", "status": "completed"} tags=[]
# Importing Libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from pandas import read_csv, set_option
from pandas.plotting import scatter_matrix
# + papermill={"duration": 4.467659, "end_time": "2021-06-06T09:21:40.185422", "exception": false, "start_time": "2021-06-06T09:21:35.717763", "status": "completed"} tags=[]
# Loading Data
fraud_data = pd.read_csv('/kaggle/input/creditcardfraud/creditcard.csv')
# + papermill={"duration": 0.075438, "end_time": "2021-06-06T09:21:40.287477", "exception": false, "start_time": "2021-06-06T09:21:40.212039", "status": "completed"} tags=[]
# Viewing Raw Data
set_option('display.width', 100)
fraud_data.head()
# + papermill={"duration": 0.035282, "end_time": "2021-06-06T09:21:40.349909", "exception": false, "start_time": "2021-06-06T09:21:40.314627", "status": "completed"} tags=[]
# Dimension of data
fraud_data.shape
# + papermill={"duration": 0.072794, "end_time": "2021-06-06T09:21:40.450085", "exception": false, "start_time": "2021-06-06T09:21:40.377291", "status": "completed"} tags=[]
# Data Type
fraud_data.info()
# + [markdown] papermill={"duration": 0.027252, "end_time": "2021-06-06T09:21:40.504776", "exception": false, "start_time": "2021-06-06T09:21:40.477524", "status": "completed"} tags=[]
# > Our observations are as follows:
#
# > There are no NaN values in the dataset: the non-null count matches the number of rows for every column.
#
# > There are 30 input variables and 1 output variable (Class).
#
# > The data type of all the input variables is float64, whereas the data type of the output variable (Class) is int64.
# + papermill={"duration": 0.057114, "end_time": "2021-06-06T09:21:40.589239", "exception": false, "start_time": "2021-06-06T09:21:40.532125", "status": "completed"} tags=[]
# Checking null values
fraud_data.isnull().sum()
# + papermill={"duration": 0.542068, "end_time": "2021-06-06T09:21:41.159196", "exception": false, "start_time": "2021-06-06T09:21:40.617128", "status": "completed"} tags=[]
# Summarizing data
set_option('display.precision', 5)
fraud_data.describe()
# + papermill={"duration": 0.045506, "end_time": "2021-06-06T09:21:41.234170", "exception": false, "start_time": "2021-06-06T09:21:41.188664", "status": "completed"} tags=[]
# Response Variable Analysis
class_names = {0:'Not Fraud', 1:'Fraud'}
rvs = fraud_data.Class.value_counts().rename(index = class_names)
print(rvs)
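The raw counts above are more telling as proportions, which make the class imbalance explicit. A minimal sketch on hypothetical stand-in labels (the counts here are illustrative, not the dataset's actual numbers):

```python
import pandas as pd

# Hypothetical stand-in for fraud_data.Class: 998 legitimate rows, 2 fraudulent
labels = pd.Series([0] * 998 + [1] * 2, name="Class")

# normalize=True converts raw counts into fractions of the total
rates = labels.value_counts(normalize=True)
print(rates)
```

In the real dataset, fraud makes up well under 1% of rows, which is why accuracy alone will later prove misleading.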
# + [markdown] papermill={"duration": 0.028487, "end_time": "2021-06-06T09:21:41.291729", "exception": false, "start_time": "2021-06-06T09:21:41.263242", "status": "completed"} tags=[]
# ***Splitting data into training and testing data***
# + papermill={"duration": 0.360226, "end_time": "2021-06-06T09:21:41.680889", "exception": false, "start_time": "2021-06-06T09:21:41.320663", "status": "completed"} tags=[]
from sklearn.model_selection import train_test_split
y= fraud_data["Class"]
X = fraud_data.loc[:, fraud_data.columns != 'Class']
X_train,X_test,y_train,y_test = train_test_split(X,y, test_size=1/6, random_state=42)
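With so few positive cases, an unstratified split can leave the test fold with a distorted fraud rate. Passing `stratify=y` preserves the class proportions in both folds; a sketch on synthetic labels (the notebook's split does not stratify, so this is an optional variation, not the author's method):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic imbalanced labels: 10% positives
X_syn = np.random.randn(600, 3)
y_syn = np.array([0] * 540 + [1] * 60)

# stratify=y_syn keeps the 10% positive rate in both train and test folds
X_tr, X_te, y_tr, y_te = train_test_split(
    X_syn, y_syn, test_size=1/6, random_state=42, stratify=y_syn)

print(y_tr.mean(), y_te.mean())
```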
# + [markdown] papermill={"duration": 0.02889, "end_time": "2021-06-06T09:21:41.739179", "exception": false, "start_time": "2021-06-06T09:21:41.710289", "status": "completed"} tags=[]
# # Data Modelling
# + [markdown] papermill={"duration": 0.02875, "end_time": "2021-06-06T09:21:41.797539", "exception": false, "start_time": "2021-06-06T09:21:41.768789", "status": "completed"} tags=[]
# ***Logistic Regression***
# + papermill={"duration": 5.601232, "end_time": "2021-06-06T09:21:47.427624", "exception": false, "start_time": "2021-06-06T09:21:41.826392", "status": "completed"} tags=[]
#Import Library for Accuracy Score
from sklearn.metrics import accuracy_score
#Import Library for Logistic Regression
from sklearn.linear_model import LogisticRegression
#Initialize the Logistic Regression Classifier
logisreg = LogisticRegression()
#Train the model using Training Dataset
logisreg.fit(X_train, y_train)
# Prediction using test data
y_pred = logisreg.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_logisreg = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of Logistic Regression model : ', acc_logisreg )
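One caveat worth flagging: on unscaled features such as `Time` and `Amount`, the default `lbfgs` solver can hit its iteration cap before converging. A common remedy (not used in this notebook) is to standardize the features inside a pipeline; a sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for (X_train, y_train)
X_demo, y_demo = make_classification(n_samples=300, n_features=10, random_state=0)

# StandardScaler centers and scales each feature before the classifier sees it;
# inside a pipeline it is fit only on the training portion during cross-validation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_demo, y_demo)
print(clf.score(X_demo, y_demo))
```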
# + [markdown] papermill={"duration": 0.051711, "end_time": "2021-06-06T09:21:47.531351", "exception": false, "start_time": "2021-06-06T09:21:47.479640", "status": "completed"} tags=[]
# ***Linear Discriminant Analysis***
# + papermill={"duration": 1.436909, "end_time": "2021-06-06T09:21:48.999223", "exception": false, "start_time": "2021-06-06T09:21:47.562314", "status": "completed"} tags=[]
#Import Library for Linear Discriminant Analysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
#Initialize the Linear Discriminant Analysis Classifier
model = LinearDiscriminantAnalysis()
#Train the model using Training Dataset
model.fit(X_train, y_train)
# Prediction using test data
y_pred = model.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_lda = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of Linear Discriminant Analysis Classifier: ', acc_lda )
# + [markdown] papermill={"duration": 0.035783, "end_time": "2021-06-06T09:21:49.121715", "exception": false, "start_time": "2021-06-06T09:21:49.085932", "status": "completed"} tags=[]
# ***Gaussian Naive Bayes***
# + papermill={"duration": 0.224044, "end_time": "2021-06-06T09:21:49.375902", "exception": false, "start_time": "2021-06-06T09:21:49.151858", "status": "completed"} tags=[]
#Import Library for Gaussian Naive Bayes
from sklearn.naive_bayes import GaussianNB
#Initialize the Gaussian Naive Bayes Classifier
model = GaussianNB()
#Train the model using Training Dataset
model.fit(X_train, y_train)
# Prediction using test data
y_pred = model.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_ganb = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of Gaussian Naive Bayes : ', acc_ganb )
# + [markdown] papermill={"duration": 0.029502, "end_time": "2021-06-06T09:21:49.436412", "exception": false, "start_time": "2021-06-06T09:21:49.406910", "status": "completed"} tags=[]
# ***Decision Tree***
# + papermill={"duration": 26.738145, "end_time": "2021-06-06T09:22:16.204459", "exception": false, "start_time": "2021-06-06T09:21:49.466314", "status": "completed"} tags=[]
#Import Library for Decision Tree Classifier
from sklearn.tree import DecisionTreeClassifier
#Initialize the Decision Tree Classifier
model = DecisionTreeClassifier()
#Train the model using Training Dataset
model.fit(X_train, y_train)
# Prediction using test data
y_pred = model.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_dtree = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of Decision Tree Classifier : ', acc_dtree )
# + [markdown] papermill={"duration": 0.031575, "end_time": "2021-06-06T09:22:16.268014", "exception": false, "start_time": "2021-06-06T09:22:16.236439", "status": "completed"} tags=[]
# ***Random Forest***
# + papermill={"duration": 293.514529, "end_time": "2021-06-06T09:27:09.814682", "exception": false, "start_time": "2021-06-06T09:22:16.300153", "status": "completed"} tags=[]
#Import Library for Random Forest
from sklearn.ensemble import RandomForestClassifier
#Initialize the Random Forest
model = RandomForestClassifier()
#Train the model using Training Dataset
model.fit(X_train, y_train)
# Prediction using test data
y_pred = model.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_rf = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of Random Forest : ', acc_rf )
# + [markdown] papermill={"duration": 0.03071, "end_time": "2021-06-06T09:27:09.876869", "exception": false, "start_time": "2021-06-06T09:27:09.846159", "status": "completed"} tags=[]
# ***Support Vector Machine***
# + papermill={"duration": 21.592322, "end_time": "2021-06-06T09:27:31.500338", "exception": false, "start_time": "2021-06-06T09:27:09.908016", "status": "completed"} tags=[]
#Import Library for Support Vector Machine
from sklearn import svm
#Initialize the Support Vector Classifier
model = svm.SVC()
#Train the model using Training Dataset
model.fit(X_train, y_train)
# Prediction using test data
y_pred = model.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_svc = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of Support Vector Classifier: ', acc_svc )
# + [markdown] papermill={"duration": 0.030923, "end_time": "2021-06-06T09:27:31.563850", "exception": false, "start_time": "2021-06-06T09:27:31.532927", "status": "completed"} tags=[]
# ***KNN***
# + papermill={"duration": 245.780187, "end_time": "2021-06-06T09:31:37.375654", "exception": false, "start_time": "2021-06-06T09:27:31.595467", "status": "completed"} tags=[]
#Import Library for K Nearest Neighbour Model
from sklearn.neighbors import KNeighborsClassifier
#Initialize the K Nearest Neighbour Model with Default Value of K=5
model = KNeighborsClassifier()
#Train the model using Training Dataset
model.fit(X_train, y_train)
# Prediction using test data
y_pred = model.predict(X_test)
# Calculate Model accuracy by comparing y_test and y_pred
acc_knn = round( accuracy_score(y_test, y_pred) * 100, 2 )
print( 'Accuracy of KNN Classifier: ', acc_knn )
# + [markdown] papermill={"duration": 0.031035, "end_time": "2021-06-06T09:31:37.437935", "exception": false, "start_time": "2021-06-06T09:31:37.406900", "status": "completed"} tags=[]
# # Model Selection
# + papermill={"duration": 0.050867, "end_time": "2021-06-06T09:31:37.520084", "exception": false, "start_time": "2021-06-06T09:31:37.469217", "status": "completed"} tags=[]
models = pd.DataFrame({
'Model': ['Logistic Regression', 'Linear Discriminant Analysis','Naive Bayes', 'Decision Tree', 'Random Forest', 'Support Vector Machines',
'K - Nearest Neighbors'],
'Score': [acc_logisreg, acc_lda, acc_ganb, acc_dtree, acc_rf, acc_svc, acc_knn]})
models.sort_values(by='Score', ascending=False)
# + [markdown] papermill={"duration": 0.032534, "end_time": "2021-06-06T09:31:37.585937", "exception": false, "start_time": "2021-06-06T09:31:37.553403", "status": "completed"} tags=[]
# > The best model for prediction is the **Random Forest Model**, with **99.96% accuracy**.
# + [markdown] papermill={"duration": 0.032262, "end_time": "2021-06-06T09:31:37.652832", "exception": false, "start_time": "2021-06-06T09:31:37.620570", "status": "completed"} tags=[]
# # Confusion Matrix
# This is a binary classification problem (Fraud or No-Fraud). Some of the commonly used terms are:
# * True positives (TP): predicted positive and actually positive.
# * False positives (FP): predicted positive but actually negative.
# * True negatives (TN): predicted negative and actually negative.
# * False negatives (FN): predicted negative but actually positive.
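For a 2×2 problem, scikit-learn lays these counts out with rows as the true class and columns as the predicted class, so all four can be unpacked directly from `confusion_matrix`; a small sketch:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0]

# ravel() flattens the 2x2 matrix in row order: [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 2 1 1 2
```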
# + [markdown] papermill={"duration": 0.031957, "end_time": "2021-06-06T09:31:37.717265", "exception": false, "start_time": "2021-06-06T09:31:37.685308", "status": "completed"} tags=[]
# 
# + papermill={"duration": 0.347028, "end_time": "2021-06-06T09:31:38.096906", "exception": false, "start_time": "2021-06-06T09:31:37.749878", "status": "completed"} tags=[]
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
sns.heatmap(cm, annot = True);
# + papermill={"duration": 261.405114, "end_time": "2021-06-06T09:35:59.535878", "exception": false, "start_time": "2021-06-06T09:31:38.130764", "status": "completed"} tags=[]
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(model, X_test, y_test);
# + [markdown] papermill={"duration": 0.033387, "end_time": "2021-06-06T09:35:59.603196", "exception": false, "start_time": "2021-06-06T09:35:59.569809", "status": "completed"} tags=[]
# * In this case, overall accuracy is strong, but the confusion matrix tells a different story.
# * Despite the high accuracy, 36 out of 164 instances of fraud are missed and incorrectly predicted as non-fraud.
# * The false-negative rate is therefore substantial.
# * The goal of a fraud detection model is to minimize exactly these false negatives.
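Recall, defined as TP / (TP + FN), is the metric that directly penalizes missed fraud, so it is a better yardstick here than accuracy. A sketch of how sharply the two can diverge on imbalanced labels:

```python
from sklearn.metrics import accuracy_score, recall_score

# 10 true frauds, of which the model catches only 4; 90 true negatives
y_true = [1] * 10 + [0] * 90
y_pred = [1] * 4 + [0] * 6 + [0] * 90

print(accuracy_score(y_true, y_pred))  # 0.94 -- looks strong
print(recall_score(y_true, y_pred))    # 0.4  -- 6 of 10 frauds missed
```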
# + [markdown] papermill={"duration": 0.033316, "end_time": "2021-06-06T09:35:59.670289", "exception": false, "start_time": "2021-06-06T09:35:59.636973", "status": "completed"} tags=[]
# **Don't forget to upvote!**
| Case Studies/Credit Card Fraud Detection/credit-card-fraud-funky-ml.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week 1: Mean/Covariance of a data set and effect of a linear transformation
#
# In this week, we are going to investigate how the mean and (co)variance of a dataset change
# when we apply an affine transformation to the dataset.
# ## Learning objectives
# 1. Get familiar with basic programming using Python and Numpy/Scipy.
# 2. Learn to appreciate implementing
#    functions to compute statistics of a dataset in a vectorized way.
# 3. Understand the effects of affine transformations on a dataset.
# 4. Understand the importance of testing in programming for machine learning.
# First, let's import the packages that we will use for the week
# + nbgrader={"grade": false, "grade_id": "cell-ba51e43914fcac0f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# PACKAGE: DO NOT EDIT
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('fivethirtyeight')
from sklearn.datasets import fetch_olivetti_faces
import time
import timeit
# %matplotlib inline
# -
# Next, we are going to retrieve Olivetti faces dataset.
#
# When working with some datasets, before digging into further analysis, it is almost always
# useful to do a few things to understand your dataset. First of all, answer the following
# set of questions:
#
# 1. What is the size of your dataset?
# 2. What is the dimensionality of your data?
#
# Datasets are usually stored as 2D matrices, so it is important to know which
# dimension represents the data points and which represents the features of the
# dataset.
#
# __When you implement the functions for your assignment, make sure you read
# the docstring to see which dimension of your inputs represents the data points
# and which represents the features of the dataset!__
# +
image_shape = (64, 64)
# Load faces data
dataset = fetch_olivetti_faces(data_home='./')
faces = dataset.data
print('Shape of the faces dataset: {}'.format(faces.shape))
print('{} data points'.format(faces.shape[0]))
# -
# When your dataset consists of images, it's a really good idea to see what they look like.
#
# One very
# convenient tool in Jupyter is the `interact` widget, which we use to visualize the images (faces). For more information on how to use interact, have a look at the documentation [here](http://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html).
# + nbgrader={"grade": false, "grade_id": "cell-5d4286bace914619", "locked": true, "schema_version": 3, "solution": false, "task": false}
from ipywidgets import interact
# -
def show_face(face):
plt.figure()
plt.imshow(face.reshape((64, 64)), cmap='gray')
plt.show()
@interact(n=(0, len(faces)-1))
def display_faces(n=0):
plt.figure()
plt.imshow(faces[n].reshape((64, 64)), cmap='gray')
plt.show()
# ## 1. Mean and Covariance of a Dataset
# + nbgrader={"grade": false, "grade_id": "cell-2e726e77148b84dc", "locked": false, "schema_version": 3, "solution": true, "task": false}
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def mean_naive(X):
"""Compute the sample mean for a dataset by iterating over the dataset.
Args:
X: `ndarray` of shape (N, D) representing the dataset. N
is the size of the dataset and D is the dimensionality of the dataset.
Returns:
mean: `ndarray` of shape (D, ), the sample mean of the dataset `X`.
"""
# Iterate over the dataset and compute the mean vector.
N, D = X.shape
mean = np.zeros(D)
for n in range(N): # Iterate through every row
mean += X[n, :]
mean = mean/N
return mean
def cov_naive(X):
"""Compute the sample covariance for a dataset by iterating over the dataset.
Args:
X: `ndarray` of shape (N, D) representing the dataset. N
is the size of the dataset and D is the dimensionality of the dataset.
Returns:
ndarray: ndarray with shape (D, D), the sample covariance of the dataset `X`.
"""
N, D = X.shape # (row, columns)
cov = np.zeros((D, D))
meanVal = mean_naive(X)
    for n in range(N):  # Iterate through all rows
        difference = X[n, :] - meanVal
        cov += np.outer(difference, difference)  # rank-one (D, D) update from this row's deviation
    covariance = cov/N
return covariance
def mean(X):
"""Compute the sample mean for a dataset.
Args:
X: `ndarray` of shape (N, D) representing the dataset. N
is the size of the dataset and D is the dimensionality of the dataset.
Returns:
ndarray: ndarray with shape (D,), the sample mean of the dataset `X`.
"""
m = np.zeros((X.shape[1]))
m = np.mean(X, axis=0) # Along columns
return m
def cov(X):
"""Compute the sample covariance for a dataset.
Args:
X: `ndarray` of shape (N, D) representing the dataset. N
is the size of the dataset and D is the dimensionality of the dataset.
Returns:
ndarray: ndarray with shape (D, D), the sample covariance of the dataset `X`.
"""
# YOUR CODE HERE
# It is possible to vectorize our code for computing the covariance with matrix multiplications,
# i.e., we do not need to explicitly
# iterate over the entire dataset as looping in Python tends to be slow
# We challenge you to give a vectorized implementation without using np.cov, but if you choose to use np.cov,
# be sure to pass in bias=True.
N, D = X.shape
covariance_matrix = np.zeros((D, D))
covariance_matrix = np.cov(X, rowvar=False, bias=True) # Don't switch row vs col.
return covariance_matrix
# + nbgrader={"grade": true, "grade_id": "cell-5e92c4f560e0a5b2", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
from numpy.testing import assert_allclose
# Test case 1
X = np.array([[0., 1., 1.],
[1., 2., 1.]])
expected_mean = np.array([0.5, 1.5, 1.])
assert_allclose(mean(X), expected_mean, rtol=1e-5)
# Test case 2
X = np.array([[0., 1., 0.],
[2., 3., 1.]])
expected_mean = np.array([1., 2., 0.5])
assert_allclose(mean(X), expected_mean, rtol=1e-5)
# Test covariance is zero
X = np.array([[0., 1.],
[0., 1.]])
expected_mean = np.array([0., 1.])
assert_allclose(mean(X), expected_mean, rtol=1e-5)
### Some hidden tests below
### ...
# -
cov(np.array([[0., 1.],
[1., 2.],
[0., 1.],
[1., 2.]
]))
# + nbgrader={"grade": true, "grade_id": "cell-b8863e42cc6ca615", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
from numpy.testing import assert_allclose
# Test case 1
X = np.array([[0., 1.],
[1., 2.],
[0., 1.],
[1., 2.]])
expected_cov = np.array(
[[0.25, 0.25],
[0.25, 0.25]])
assert_allclose(cov(X), expected_cov, rtol=1e-5)
# Test case 2
X = np.array([[0., 1.],
[2., 3.]])
expected_cov = np.array(
[[1., 1.],
[1., 1.]])
assert_allclose(cov(X), expected_cov, rtol=1e-5)
# Test covariance is zero
X = np.array([[0., 1.],
[0., 1.],
[0., 1.]])
expected_cov = np.zeros((2, 2))
assert_allclose(cov(X), expected_cov, rtol=1e-5)
### Some hidden tests below
### ...
# -
# With the `mean` function implemented, let's take a look at the _mean_ face of our dataset!
# +
def mean_face(faces):
return faces.mean(axis=0).reshape((64, 64))
plt.imshow(mean_face(faces), cmap='gray');
# -
# One of the advantages of writing vectorized code is the speedup gained when working on larger datasets. Loops in Python
# are slow, and most of the time you want to utilise the fast native code provided by Numpy without explicitly using
# for loops. To put things into perspective, we can benchmark the two different implementations with the `%time` magic
# in the following way:
# We have some HUUUGE data matrix whose mean we want to compute
X = np.random.randn(1000, 20)
# Benchmarking time for computing mean
# %time mean_naive(X)
# %time mean(X)
pass
# Benchmarking time for computing covariance
# %time cov_naive(X)
# %time cov(X)
pass
# ## 2. Affine Transformation of Dataset
# This week we are also going to verify a few properties of the mean and
# covariance of affine transformations of random variables.
#
# Consider a data matrix $X$ of size (N, D). We would like to know what
# happens to the mean and covariance of the new dataset when we apply the
# affine transformation $Ax_i + b$ to each datapoint $x_i$ in $X$.
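Before implementing, it helps to state what we expect. Writing $m$ and $S$ for the sample mean and covariance of $X$, linearity of the sum and the definition of the sample covariance give:

```latex
m' = \frac{1}{N}\sum_{i=1}^{N}\left(A x_i + b\right) = A m + b,
\qquad
S' = \frac{1}{N}\sum_{i=1}^{N}\left(A x_i + b - m'\right)\left(A x_i + b - m'\right)^{\top}
   = A S A^{\top}.
```

The second identity follows because $A x_i + b - m' = A(x_i - m)$, so the shift $b$ drops out of the covariance entirely.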
# + nbgrader={"grade": false, "grade_id": "cell-7d7b94efbb31d292", "locked": false, "schema_version": 3, "solution": true, "task": false}
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def affine_mean(mean, A, b):
"""Compute the mean after affine transformation
Args:
mean: `ndarray` of shape (D,), the sample mean vector for some dataset.
A, b: `ndarray` of shape (D, D) and (D,), affine transformation applied to x
Returns:
sample mean vector of shape (D,) after affine transformation.
"""
affine_m = np.zeros(mean.shape) # affine_m has shape (D,)
affine_m = A @ mean + b
return affine_m
# + nbgrader={"grade": false, "grade_id": "cell-dca2c9932c499a71", "locked": false, "schema_version": 3, "solution": true, "task": false}
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def affine_covariance(S, A, b):
"""Compute the covariance matrix after affine transformation
Args:
        S: `ndarray` of shape (D, D), the sample covariance matrix for some dataset.
A, b: `ndarray` of shape (D, D) and (D,), affine transformation applied to x
Returns:
sample covariance matrix of shape (D, D) after the transformation
"""
affine_cov = np.zeros(S.shape) # affine_cov has shape (D, D)
affine_cov = A @ S @ A.T
return affine_cov
# + nbgrader={"grade": true, "grade_id": "cell-16cbecd7814fc682", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
from numpy.testing import assert_allclose
A = np.array([[0, 1], [2, 3]])
b = np.ones(2)
m = np.full((2,), 2)
S = np.eye(2)*2
expected_affine_mean = np.array([ 3., 11.])
expected_affine_cov = np.array(
[[ 2., 6.],
[ 6., 26.]])
assert_allclose(affine_mean(m, A, b), expected_affine_mean, rtol=1e-4)
### Some hidden tests below
### ...
# + nbgrader={"grade": true, "grade_id": "cell-7cea45ab7c99c90a", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
from numpy.testing import assert_allclose
A = np.array([[0, 1], [2, 3]])
b = np.ones(2)
m = np.full((2,), 2)
S = np.eye(2)*2
expected_affine_cov = np.array(
[[ 2., 6.],
[ 6., 26.]])
assert_allclose(affine_covariance(S, A, b),
expected_affine_cov, rtol=1e-4)
### Some hidden tests below
### ...
# -
# Once the two functions above are implemented, we can verify the correctness of our implementation. Assume that we have some $A$ and $b$.
random = np.random.RandomState(42)
A = random.randn(4,4)
b = random.randn(4)
# Next we can generate some random dataset $X$
X = random.randn(100, 4)
# Assuming that for some dataset $X$, the mean and covariance are $m$, $S$, and for the new dataset after affine transformation $X'$, the mean and covariance are $m'$ and $S'$, then we would have the following identity:
#
# $$m' = \text{affine_mean}(m, A, b)$$
#
# $$S' = \text{affine_covariance}(S, A, b)$$
X1 = ((A @ (X.T)).T + b) # applying affine transformation once
X2 = ((A @ (X1.T)).T + b) # twice
# One very useful way to compare whether arrays are equal/similar is to use the helper functions
# in `numpy.testing`.
#
# Check the Numpy [documentation](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.testing.html)
# for details.
#
# If you are interested in learning more about floating point arithmetic, here is a good [paper](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.6768).
np.testing.assert_allclose(mean(X1), affine_mean(mean(X), A, b))
np.testing.assert_allclose(cov(X1), affine_covariance(cov(X), A, b))
np.testing.assert_allclose(mean(X2), affine_mean(mean(X1), A, b))
np.testing.assert_allclose(cov(X2), affine_covariance(cov(X1), A, b))
| PCA/week1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pandas
import math
from math import floor as floor
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import pandas as pd
import scipy
from tabulate import tabulate
import seaborn as sns
import collections
# ### Make Columns like Dictionaries
df = pd.DataFrame({'col_1':[0,1,2], 'col_2':[0,1,2], 'col_3':[1,2,3]})
df.head()
# ## Append Columns
# +
df2 = pd.DataFrame({'amount spent':[0,1,2], 'date':['11/11/1111', '22/22/2222', '33/33/3333'], 'amount bought':['1/8th', '1 g', '1 g']})
new_value = [0,1,2]
for index, row in df2.iterrows():
df2.at[index, 'new_column'] = new_value[index]
df2
# +
df2['another_new_column'] = ['hey', 'yo', 'what\'s up'] # the list length has to match the number of rows
new_row = {'amount spent':69, 'date':'44/44/4444', 'amount bought':'2 g', 'new_column':3, 'another_new_column':'not much'} # a new row has to match the number of columns
df2 = df2.append(new_row, ignore_index=True)
df2
# -
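Note that `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0; the modern equivalent wraps the new row in a one-row frame and uses `pd.concat`. A sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})
new_row = {'a': 3, 'b': 'z'}

# pd.concat replaces the removed DataFrame.append; ignore_index renumbers rows
df = pd.concat([df, pd.DataFrame([new_row])], ignore_index=True)
print(df)
```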
# ## Sort By Value
# +
newer_row = {'amount spent':math.nan, 'date':'', 'amount bought':'', 'new_column':math.nan, 'another_new_column':''}
df2 = df2.append(newer_row, ignore_index=True)
df2.sort_values(by='amount spent', na_position='first')
# -
# ## Query dataframe
df2.query('new_column>0')
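`query` can also reference local Python variables with `@` and columns whose names contain spaces with backticks; a short sketch:

```python
import pandas as pd

df = pd.DataFrame({'amount spent': [0, 5, 12], 'new_column': [0, 1, 2]})
threshold = 4

# @threshold pulls in the local variable; backticks quote the spaced column name
result = df.query('`amount spent` > @threshold and new_column > 0')
print(result)
```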
# ## Display DataFrame with Tabulate
# +
cool_fmts = ['psql', 'html', 'fancy_grid', 'latex_raw'] # fancy grid is the best by far
for i in range(len(cool_fmts)):
print('Format: ' + cool_fmts[i])
print(tabulate(df, tablefmt = cool_fmts[i])) # the important line
print('\n')
# -
# ## Generate Fake Data and Plot
plot = pd.DataFrame({'X':np.linspace(0,360,361), 'Y':[np.sin(2*np.pi*(i) / 360) for i in range(361)]})
sns.lmplot('X', 'Y', data=plot, fit_reg=False)
sns.kdeplot(plot.Y)
sns.kdeplot(plot.Y, plot.X)
sns.distplot(plot.Y)
plt.hist(plot.Y, alpha=.3)
sns.rugplot(plot.Y);
sns.boxplot([plot.Y, plot.X])
sns.heatmap([plot.Y])
# # Play with an OpenBCI File
# +
# load data
rnd_file = "OpenBCI-RAW-2021-08-07_00-58-55.txt"
alpha_wave_file = 'OpenBCI-RAW-Alpha-Waves.txt'
clean_gauranteed_alphas_file = 'OpenBCI-RAW-Clean-Guaranteed-Alpha-Waves-.txt'
file = clean_gauranteed_alphas_file
f = open(file)
meta_data = [f.readline() for i in range(4)]
sample_rate = int(meta_data[2][15:18])
print('Sample Rate: ' + str(sample_rate))
egg_df = pd.read_csv(file, skiprows=[0,1,2,3])
egg_df = egg_df.drop(columns=['Sample Index'])
start_crop = 1 # seconds; default: 0
to_end_crop = 10 # seconds; default: 'end'
use_crop = ''
while 'y' != use_crop and 'n' != use_crop:
use_crop = input('Crop Data (y/n) : ')
if use_crop == 'y':
if start_crop != 0:
egg_df = egg_df.drop(range(start_crop*sample_rate))
if to_end_crop != 'end':
egg_df = egg_df.drop(range((to_end_crop*sample_rate) + 1, egg_df.index[-1] + 1))
egg_df
# -
# egg_df[' EXG Channel 0']
if 'Time' not in egg_df.keys():
if 'index' not in egg_df.keys():
egg_df.reset_index(inplace=True) # use this to make a new column for index
if type(egg_df['index'].divide(sample_rate).iloc[-1]) == np.float64:
egg_df['index'] = egg_df['index'].divide(sample_rate)
egg_df = egg_df.rename(columns={'index':'Time'})
egg_df
# +
# plot = pd.DataFrame({'X':np.linspace(0,360,361), 'Y':[np.sin(2*np.pi*(i) / 360) for i in range(361)]})
# sns.lmplot('X', 'Y', data=plot, fit_reg=False)
sns.lineplot('Time', " EXG Channel 0", data=egg_df) #fix the fact that the EXG data is stored as a string type variable
# +
import plotly.express as px
import plotly.graph_objects as go
px.line(data_frame=egg_df, x='Time', y=' EXG Channel 0')
# +
# Attempt to low-pass and high-pass filter data
import scipy.signal as signal
import matplotlib.pyplot as plt
low_cut_fq = 1 # Hz
high_cut_fq = 50 # Hz
n_chns = 16
X = egg_df['Time']
Y = egg_df[' EXG Channel 0']
b, a = signal.butter(2, low_cut_fq, 'high', fs=sample_rate)
Y = signal.filtfilt(b, a, Y)
filt_df = pd.DataFrame()
filt_df['Time'] = egg_df['Time']
for i in range(n_chns):
b, a = signal.butter(2, low_cut_fq, 'high', fs=sample_rate)
ip = signal.filtfilt(b, a, egg_df[' EXG Channel '+str(i)])
b, a = signal.butter(2, high_cut_fq, 'low', fs=sample_rate)
filt_df['EXG_Channel_'+str(i)] = signal.filtfilt(b, a, ip)
# plt.plot(X, Y)
# plt.xlim([5, 30])
# plt.ylim([-200, 200])
px.line(data_frame=filt_df, x='Time', y='EXG_Channel_0', render_mode='webgl')
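The cascaded high-pass/low-pass pair above can equivalently be designed as a single band-pass Butterworth filter. A sketch on a synthetic signal (the 250 Hz sample rate here is an assumption for the demo, not read from the file):

```python
import numpy as np
from scipy import signal

fs = 250                      # Hz, assumed sample rate for this sketch
t = np.arange(0, 2, 1 / fs)
# 10 Hz component inside the pass band plus a 0.2 Hz drift outside it
x = np.sin(2 * np.pi * 10 * t) + 5 * np.sin(2 * np.pi * 0.2 * t)

# One 2nd-order band-pass between 1 and 50 Hz instead of two cascaded filters
b, a = signal.butter(2, [1, 50], btype='bandpass', fs=fs)
filtered = signal.filtfilt(b, a, x)  # zero-phase filtering, as in the cell above

# The slow drift is attenuated while the 10 Hz oscillation survives
print(np.mean(filtered), np.abs(filtered).max())
```

Cascading two filters and designing one band-pass are not numerically identical, but for EEG-style pre-processing they serve the same purpose.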
# +
# get a vector of fft features for one channel
data = np.array(filt_df['EXG_Channel_6'])
fft = np.abs(scipy.fft.rfft(data))  # magnitude spectrum; rfft keeps only non-negative frequencies
print(len(fft))
timestep = 1/sample_rate
x_fq = np.fft.rfftfreq(len(data), d=timestep)  # rfftfreq(len(data)) matches the rfft output length
print(len(x_fq))
# +
# rfft already excludes the negative-frequency half of the spectrum,
# so the two vectors line up without any further slicing
print(len(fft))
print(len(x_fq))
# +
plot_df = {'fft':fft, 'x_fq':x_fq}
px.line(data_frame=plot_df, x='x_fq', y='fft')
# +
data = np.array(filt_df['EXG_Channel_6'])
plt.psd(data, NFFT=int(len(data)/10), Fs=sample_rate)
plt.xlim([0, 20])
| Pandas_Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D4_Optimization/student/W1D4_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D4_Optimization/student/W1D4_Tutorial1.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a>
# -
# # Tutorial 1: Optimization techniques
# **Week 1, Day 4: Optimization**
#
# **By Neuromatch Academy**
#
# __Content creators:__ <NAME>, <NAME>
#
# __Content reviewers:__ <NAME>, <NAME>, <NAME>, <NAME>
#
# __Content editors:__ <NAME>, <NAME>, <NAME>
#
# __Production editors:__ <NAME>, <NAME>
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# ---
# # Tutorial Objectives
#
# Objectives:
# * Necessity and importance of optimization
# * Introduction to commonly used optimization techniques
# * Optimization in non-convex loss landscapes
# * 'Adaptive' hyperparameter tuning
# * Ethical concerns
#
#
# + cellView="form"
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial
# @markdown If you want to locally download the slides, click [here](https://osf.io/ft2sz/download)
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/ft2sz/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# -
# ---
# # Setup
# + cellView="form"
# @title Install dependencies
# !pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# generate airtable form
atform = AirtableForm('appn7VdPRseSoMXEG','W1D4_T1','https://portal.neuromatchacademy.org/api/redirect/to/9548a279-c9f9-4586-b89c-f0ceceba5c14')
# +
# Imports
import time
import copy
import torch
import torchvision
import numpy as np
import ipywidgets as widgets
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.datasets as datasets
from tqdm.auto import tqdm
# + cellView="form"
# @title Figure settings
import ipywidgets as widgets # interactive display
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
plt.rc('axes', unicode_minus=False)
# + cellView="form"
# @title Helper functions
def print_params(model):
for name, param in model.named_parameters():
if param.requires_grad:
print(name, param.data)
# + cellView="form"
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` you are setting the seed
# For DL it's critical to set the random seed so that students have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
# + cellView="form"
# @title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules are used.
# Inform the user whether the notebook uses a GPU or the CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
# -
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
# ---
# # Section 1. Introduction
#
# *Time estimate: ~15 mins*
# + cellView="form"
# @title Video 1: Introduction
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1VB4y1K7Vr", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"zm9oekdkJbQ", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 1: Introduction')
display(out)
# -
# ## Discuss: Unexpected consequences
#
# Can you think of examples from your own experience/life where poorly chosen incentives or objectives have led to unexpected consequences?
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_b8bbba6f.py)
#
#
# -
# ---
# # Section 2: Case study: successfully training an MLP for image classification
#
# *Time estimate: ~40 mins*
# Many of the core ideas (and tricks) in modern optimization for deep learning can be illustrated in the simple setting of training an MLP to solve an image classification task. In this tutorial we will guide you through the key challenges that arise when optimizing high-dimensional, non-convex$^\dagger$ problems. We will use these challenges to motivate and explain some commonly used solutions.
#
# **Disclaimer:** Some of the functions you will code in this tutorial are already implemented in Pytorch and many other libraries. For pedagogical reasons, we decided to bring these simple coding tasks into the spotlight and place a relatively higher emphasis on your understanding of the algorithms rather than on the use of a specific library.
#
# In 'day-to-day' research projects you will likely rely on community-vetted, optimized libraries rather than the 'manual implementations' you will write today. In Section 8 you will have a chance to 'put it all together' and use the full power of Pytorch to tune the parameters of an MLP to classify handwritten digits.
# $^\dagger$: A **convex** function has a single global minimum - a nice property, as an optimization algorithm won't get stuck in a local minimum that isn't a global one (e.g., $f(x)=x^2 + 2x + 1$). A **non-convex** function is wavy - it has some 'valleys' (local minima) that aren't as deep as the overall deepest 'valley' (global minimum). Thus, optimization algorithms can get stuck in a local minimum, and it can be hard to tell when this happens (e.g., $f(x) = x^4 + x^3 - 2x^2 - 2x$). See also **Section 5** for more details.
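# To see this concretely, here is a small standalone sketch (plain NumPy-free Python, not part of the tutorial code) that runs gradient descent on the non-convex example function above from two different starting points: depending on the initialization, the iterates settle in different 'valleys'.

```python
def f(x):
    # The non-convex example from the footnote: two valleys of different depth
    return x**4 + x**3 - 2 * x**2 - 2 * x

def grad_f(x):
    return 4 * x**3 + 3 * x**2 - 4 * x - 2

def gradient_descent(x, lr=0.01, steps=500):
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

x_left = gradient_descent(-2.0)   # settles in the shallower, local valley (x ≈ -1.23)
x_right = gradient_descent(2.0)   # settles in the deeper, global valley (x ≈ 0.92)
print(x_left, f(x_left))
print(x_right, f(x_right))        # lower loss than the left run
```

# The same algorithm, with the same learning rate, ends up at two different minima purely because of where it started - exactly the difficulty a convex problem does not have.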
# + cellView="form"
# @title Video 2: Case Study - MLP Classification
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1GB4y1K7Ha", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"pJc2ENhYbqA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 2: Case Study - MLP Classification')
display(out)
# -
# ## Section 2.1: Data
#
# We will use the MNIST dataset of handwritten digits. We load the data via the Pytorch `datasets` module, as you learned in W1D1.
#
# **Note:** Although we can download the MNIST dataset directly from `datasets` using the optional argument `download=True`, we are going to download it from the NMA directory on OSF to ensure network reliability.
#
# + cellView="form"
# @title Download MNIST dataset
import tarfile, requests, os
fname = 'MNIST.tar.gz'
name = 'MNIST'
url = 'https://osf.io/y2fj6/download'
if not os.path.exists(name):
print('\nDownloading MNIST dataset...')
r = requests.get(url, allow_redirects=True)
with open(fname, 'wb') as fh:
fh.write(r.content)
print('\nDownloading MNIST completed.')
if not os.path.exists(name):
with tarfile.open(fname) as tar:
tar.extractall()
os.remove(fname)
else:
print('MNIST dataset has been downloaded.')
# +
def load_mnist_data(change_tensors=False, download=False):
"""Load training and test examples for the MNIST digits dataset
Returns:
train_data (tensor): training input tensor of size (train_size x 784)
train_target (tensor): training 0-9 integer label tensor of size (train_size)
test_data (tensor): test input tensor of size (70k-train_size x 784)
test_target (tensor): test 0-9 integer label tensor of size (70k-train_size)
"""
# Load train and test sets
train_set = datasets.MNIST(root='.', train=True, download=download,
transform=torchvision.transforms.ToTensor())
test_set = datasets.MNIST(root='.', train=False, download=download,
transform=torchvision.transforms.ToTensor())
# Original data is in range [0, 255]. We normalize the data wrt its mean and std_dev.
## Note that we only used *training set* information to compute mean and std
mean = train_set.data.float().mean()
std = train_set.data.float().std()
if change_tensors:
# Apply normalization directly to the tensors containing the dataset
train_set.data = (train_set.data.float() - mean) / std
test_set.data = (test_set.data.float() - mean) / std
else:
tform = torchvision.transforms.Compose([torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[mean / 255.], std=[std / 255.])
])
train_set = datasets.MNIST(root='.', train=True, download=download,
transform=tform)
test_set = datasets.MNIST(root='.', train=False, download=download,
transform=tform)
return train_set, test_set
train_set, test_set = load_mnist_data(change_tensors=True)
# -
# As we are just getting started, we will concentrate on a small subset of only 500 examples out of the 60,000 data points contained in the whole training set.
#
#
# +
# Sample a random subset of 500 indices
subset_index = np.random.choice(len(train_set.data), 500)
# We will use these symbols to represent the training data and labels, to stay
# as close to the mathematical expressions as possible.
X, y = train_set.data[subset_index, :], train_set.targets[subset_index]
# -
# Run the following cell to visualize the content of three examples in our training set. Note how the pre-processing we applied to the data changes the range of pixel values after normalization.
#
# + cellView="form"
# @title Run me!
num_figures = 3
fig, axs = plt.subplots(1, num_figures, figsize=(5 * num_figures, 5))
for sample_id, ax in enumerate(axs):
# Plot the pixel values for each image
ax.matshow(X[sample_id, :], cmap='gray_r')
# 'Write' the pixel value in the corresponding location
for (i, j), z in np.ndenumerate(X[sample_id, :]):
text = '{:.1f}'.format(z)
ax.text(j, i, text, ha='center',
va='center', fontsize=6, c='steelblue')
ax.set_title('Label: ' + str(y[sample_id].item()))
ax.axis('off')
plt.show()
# -
# ## Section 2.2: Model
#
# As you will see next week, there are specific model architectures that are better suited to image-like data, such as Convolutional Neural Networks (CNNs). For simplicity, in this tutorial we will focus exclusively on Multi-Layer Perceptron (MLP) models as they allow us to highlight many important optimization challenges shared with more advanced neural network designs.
class MLP(nn.Module):
""" This class implements MLPs in Pytorch of an arbitrary number of hidden
layers of potentially different sizes. Since we concentrate on classification
tasks in this tutorial, we have a log_softmax layer at prediction time.
"""
def __init__(self, in_dim=784, out_dim=10, hidden_dims=[], use_bias=True):
"""Constructs a MultiLayerPerceptron
Args:
in_dim (int): dimensionality of input data
out_dim (int): number of classes
hidden_dims (list): contains the dimensions of the hidden layers, an empty
list corresponds to a linear model (in_dim, out_dim)
"""
super(MLP, self).__init__()
self.in_dim = in_dim
self.out_dim = out_dim
# If we have no hidden layer, just initialize a linear model (e.g. in logistic regression)
if len(hidden_dims) == 0:
layers = [nn.Linear(in_dim, out_dim, bias=use_bias)]
else:
# 'Actual' MLP with dimensions in_dim - num_hidden_layers*[hidden_dim] - out_dim
layers = [nn.Linear(in_dim, hidden_dims[0], bias=use_bias), nn.ReLU()]
# Loop until before the last layer
for i, hidden_dim in enumerate(hidden_dims[:-1]):
layers += [nn.Linear(hidden_dim, hidden_dims[i + 1], bias=use_bias),
nn.ReLU()]
# Add final layer to the number of classes
layers += [nn.Linear(hidden_dims[-1], out_dim, bias=use_bias)]
self.main = nn.Sequential(*layers)
def forward(self, x):
# Flatten the images into 'vectors'
transformed_x = x.view(-1, self.in_dim)
hidden_output = self.main(transformed_x)
output = F.log_softmax(hidden_output, dim=1)
return output
# Linear models constitute a special kind of MLP: they are equivalent to an MLP with *zero* hidden layers. This is simply an affine transformation, in other words a 'linear' map $W x$ with an 'offset' $b$, followed by a softmax function.
#
# $$f(x) = \text{softmax}(W x + b)$$
#
# Here $x \in \mathbb{R}^{784}$, $W \in \mathbb{R}^{10 \times 784}$ and $b \in \mathbb{R}^{10}$. Notice that the dimensions of the weight matrix are $10 \times 784$ as the input tensors are flattened images, i.e., $28 \times 28 = 784$-dimensional tensors and the output layer consists of $10$ nodes.
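# As a quick shape check, here is a standalone NumPy sketch of this map (the weights and input are random, made-up values): a flattened 784-dimensional image is mapped to 10 class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(10, 784))  # one row of weights per class
b = rng.normal(size=10)         # one bias per class
x = rng.normal(size=784)        # a flattened 28 x 28 'image'

logits = W @ x + b              # shape (10,)
# Numerically stable softmax: subtract the max before exponentiating
z = np.exp(logits - logits.max())
probs = z / z.sum()

print(probs.shape)  # (10,)
print(probs.sum())  # 1.0 (up to float error)
```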
# +
# Empty hidden_dims means we take a model with zero hidden layers.
model = MLP(in_dim=784, out_dim=10, hidden_dims=[])
# We print the model structure with 784 inputs and 10 outputs
print(model)
# -
# ## Section 2.3: Loss
#
# While we care about the accuracy of the model, the 'discrete' nature of the 0-1 loss makes it challenging to optimize. In order to learn good parameters for this model, we will use the cross-entropy loss (negative log-likelihood), which you saw in the last lecture, as a surrogate objective to be minimized.
#
# This particular choice of model and optimization objective leads to a *convex* optimization problem with respect to the parameters $W$ and $b$.
loss_fn = F.nll_loss
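# For a single example with logits $z$ and true class $k$, this loss reduces to $-z_k + \log \sum_j e^{z_j}$. The standalone NumPy sketch below (independent of the tutorial code; the logits and label are made up) checks that identity against the explicit log-softmax + negative log-likelihood computation, which is what `F.log_softmax` followed by `F.nll_loss` performs.

```python
import numpy as np

def log_softmax(z):
    # Stable log-softmax: shift by the max, then subtract log-sum-exp
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

rng = np.random.default_rng(1)
z = rng.normal(size=10)  # made-up logits for 10 classes
k = 3                    # made-up true class index

nll = -log_softmax(z)[k]                                   # NLL of the log-softmax output
ce = -z[k] + z.max() + np.log(np.exp(z - z.max()).sum())   # direct cross-entropy formula
print(abs(nll - ce))  # ~0: the two computations agree
```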
# ## Section 2.4: Interpretability
# In the last lecture, you saw that inspecting the weights of a model can provide insights into what 'concepts' the model has learned. Here we show the weights of a partially trained model. The weights corresponding to each class 'learn' to _fire_ when an input of that class is detected.
#
# + cellView="form"
#@markdown Run _this cell_ to train the model. If you are curious about how the training
#@markdown takes place, double-click this cell to find out. At the end of this tutorial
#@markdown you will have the opportunity to train a more complex model on your own.
cell_verbose = False
partial_trained_model = MLP(in_dim=784, out_dim=10, hidden_dims=[])
if cell_verbose:
print('Init loss', loss_fn(partial_trained_model(X), y).item())  # Should be around np.log(10), i.e., log(number of classes)
optimizer = optim.Adam(partial_trained_model.parameters(), lr=7e-4)
for _ in range(200):
loss = loss_fn(partial_trained_model(X), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if cell_verbose:
print('End loss', loss_fn(partial_trained_model(X), y).item()) # This should be less than 1e-2
# +
# Show class filters of a trained model
W = partial_trained_model.main[0].weight.data.numpy()
fig, axs = plt.subplots(1, 10, figsize=(15, 4))
for class_id in range(10):
axs[class_id].imshow(W[class_id, :].reshape(28, 28), cmap='gray_r')
axs[class_id].axis('off')
axs[class_id].set_title('Class ' + str(class_id) )
plt.show()
# -
# ---
# # Section 3: High dimensional search
#
# *Time estimate: ~25 mins*
# We now have a model with its corresponding trainable parameters as well as an objective to optimize. Where do we go to next? How do we find a 'good' configuration of parameters?
#
# One idea is to choose a random direction and move only if the objective is reduced. However, this is inefficient in high dimensions and you will see how gradient descent (with a suitable step-size) can guarantee consistent improvement in terms of the objective function.
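# The inefficiency of random search can be sketched with a toy quadratic loss (a standalone NumPy example; the dimensionality and step sizes are made-up illustration values). A gradient step of a given length reliably decreases the loss, while random steps of the same length usually do not.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                # high-dimensional parameter space
w = np.ones(d)                          # current parameters
loss = lambda w: 0.5 * np.dot(w, w)     # toy quadratic objective

g = w                                   # gradient of 0.5 * ||w||^2
lr = 0.1
step_len = lr * np.linalg.norm(g)       # give random search the same step length

gd_delta = loss(w - lr * g) - loss(w)   # change in loss from one gradient step

rand_deltas = []
for _ in range(100):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)              # random unit direction
    rand_deltas.append(loss(w - step_len * u) - loss(w))

print(gd_delta)           # large, guaranteed decrease
print(min(rand_deltas))   # best of 100 random tries: barely an improvement, if any
```

# Most random directions of that length actually *increase* the loss: the chance of a useful direction shrinks as the dimension grows, while the gradient points downhill by construction.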
# + cellView="form"
# @title Video 3: Optimization of an Objective Function
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1aL411H7Ce", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"aSJTRdjRvvw", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 3: Optimization of an Objective Function')
display(out)
# -
# ## Coding Exercise 3: Implement gradient descent
#
# In this exercise you will use PyTorch's automatic differentiation capabilities to compute the gradient of the loss with respect to the parameters of the model. You will then use these gradients to implement the update performed by the gradient descent method.
# +
def zero_grad(params):
"""Clear up gradients as Pytorch automatically accumulates gradients from
successive backward calls
"""
for par in params:
if not(par.grad is None):
par.grad.data.zero_()
def random_update(model, noise_scale=0.1, normalized=False):
""" Performs a random update on the parameters of the model
"""
for par in model.parameters():
noise = torch.randn_like(par)
if normalized:
noise /= torch.norm(noise)
par.data += noise_scale * noise
# +
def gradient_update(loss, params, lr=1e-3):
"""Perform a gradient descent update on a given loss over a collection of parameters
Args:
loss (tensor): A scalar tensor containing the loss whose gradient will be computed
params (iterable): Collection of parameters with respect to which we compute gradients
lr (float): Scalar specifying the learning rate or step-size for the update
"""
# Clear up gradients as Pytorch automatically accumulates gradients from
# successive backward calls
zero_grad(params)
# Compute gradients on given objective
loss.backward()
with torch.no_grad():
for par in params:
#################################################
## TODO for students: update the value of the parameter ##
raise NotImplementedError("Student exercise: implement gradient update")
#################################################
# Here we work with the 'data' attribute of the parameter rather than the
# parameter itself.
par.data -= ...
# add event to airtable
atform.add_event('Coding Exercise 3: Implement gradient descent')
set_seed(seed=SEED)
model1 = MLP(in_dim=784, out_dim=10, hidden_dims=[])
print('\n The model1 parameters before the update are: \n')
print_params(model1)
loss = loss_fn(model1(X), y)
## Uncomment below to test your function
# gradient_update(loss, list(model1.parameters()), lr=1e-1)
# print('\n The model1 parameters after the update are: \n')
# print_params(model1)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_1c2b3d1a.py)
#
#
# -
# ```
# The model1 parameters after the update are:
#
# main.0.weight tensor([[-0.0263, 0.0010, 0.0174, ..., 0.0298, 0.0278, -0.0220],
# [-0.0047, -0.0302, -0.0093, ..., -0.0077, 0.0248, -0.0240],
# [ 0.0234, -0.0237, 0.0335, ..., 0.0117, 0.0263, -0.0187],
# ...,
# [-0.0006, 0.0156, 0.0110, ..., 0.0143, -0.0302, -0.0145],
# [ 0.0164, 0.0286, 0.0238, ..., -0.0127, -0.0191, 0.0188],
# [ 0.0206, -0.0354, -0.0184, ..., -0.0272, 0.0098, 0.0002]])
# main.0.bias tensor([-0.0292, -0.0018, 0.0115, -0.0370, 0.0054, 0.0155, 0.0317, 0.0246,
# 0.0198, -0.0061])
# ```
# ## Comparing updates
#
# These plots compare the effectiveness of updates in random directions for the problem of optimizing the parameters of a high-dimensional linear model. We contrast the behavior at initialization and during an intermediate stage of training by showing histograms of the change in loss over 100 different random directions vs. the change in loss induced by the gradient descent update.
#
# **Remember:** since we are trying to minimize, here negative is better!
#
# + cellView="form"
# @markdown _Run this cell_ to visualize the results
fig, axs = plt.subplots(1, 2, figsize=(10, 4))
for id, (model_name, my_model) in enumerate([('Initialization', model),
('Partially trained', partial_trained_model)]):
# Compute the loss we will be comparing to
base_loss = loss_fn(my_model(X), y)
# Compute the improvement via gradient descent
dummy_model = copy.deepcopy(my_model)
loss1 = loss_fn(dummy_model(X), y)
gradient_update(loss1, list(dummy_model.parameters()), lr=1e-2)
gd_delta = loss_fn(dummy_model(X), y) - base_loss
deltas = []
for trial_id in range(100):
# Compute the improvement obtained with a random direction
dummy_model = copy.deepcopy(my_model)
random_update(dummy_model, noise_scale=1e-2)
deltas.append((loss_fn(dummy_model(X), y) - base_loss).item())
# Plot histogram for random direction and vertical line for gradient descent
axs[id].hist(deltas, label='Random Directions', bins=20)
axs[id].set_title(model_name)
axs[id].set_xlabel('Change in loss')
axs[id].set_ylabel('% samples')
axs[id].axvline(0, c='green', alpha=0.5)
axs[id].axvline(gd_delta.item(), linestyle='--', c='red', alpha=1,
label='Gradient Descent')
handles, labels = axs[id].get_legend_handles_labels()
fig.legend(handles, labels, loc='upper center',
bbox_to_anchor=(0.5, 1.05),
fancybox=False, shadow=False, ncol=2)
plt.show()
# -
# ## Think! 3: Gradient descent vs. random search
#
# Compare the behavior of gradient descent and random search based on the histograms above. Is either of the two methods more reliable? How can you explain the difference in the behavior of the methods at initialization vs. during training?
# + cellView="form"
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and click on `Submit!`',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q1' , text.value)
print("Submission successful!")
button.on_click(on_button_clicked)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_2de57667.py)
#
#
# -
# ---
# # Section 4: Poor conditioning
#
# *Time estimate: ~30 mins*
# Already in this 'simple' logistic regression problem, the issue of poor conditioning is haunting us. Not all parameters are created equal: the sensitivity of the network to changes in the parameters will have a big impact on the dynamics of the optimization.
#
# + cellView="form"
# @title Video 4: Momentum
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1NL411H71t", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"3ES5O58Y_2M", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 4: Momentum')
display(out)
# -
# We illustrate this issue in a 2-dimensional setting. We freeze all but two parameters of the network: one of them is an element of the weight matrix (filter) for class 0, while the other is the bias for class 7. This results in an optimization with two decision variables.
#
# How much difference is there in the behavior of these two parameters under gradient descent? What is the effect of momentum in bridging that gap?
#
# + cellView="form"
# @markdown _Run this cell_ to setup some helper functions.
def loss_2d(model, u, v, mask_idx=(0, 378), bias_id=7):
"""Defines a 2-dim function by freezing all but two parameters of a linear
model.
Args:
model (torch module): a pytorch 0-hidden layer (linear) model
u (scalar): first free parameter
v (scalar): second free parameter
mask_idx (tuple): selects parameter in weight matrix replaced by u
bias_id (int): selects parameter in bias vector replaced by v
Returns:
scalar: loss of the 'new' model over inputs X, y (defined externally)
"""
# We zero out the element of the weight tensor that will be
# replaced by u
mask = torch.ones_like(model.main[0].weight)
mask[mask_idx[0], mask_idx[1]] = 0.
masked_weights = model.main[0].weight * mask
# u is replacing an element of the weight matrix
masked_weights[mask_idx[0], mask_idx[1]] = u
res = X.reshape(-1, 784) @ masked_weights.T + model.main[0].bias
# v is replacing a bias for class 7
res[:, 7] += v - model.main[0].bias[7]
res = F.log_softmax(res, dim=1)
return loss_fn(res, y)
def plot_surface(U, V, Z, fig):
""" Plot a 3D loss surface given meshed inputs U, V and values Z
"""
ax = fig.add_subplot(1, 2, 2, projection='3d')
ax.view_init(45, -130)
surf = ax.plot_surface(U, V, Z, cmap=plt.cm.coolwarm,
linewidth=0, antialiased=True, alpha=0.5)
# Select certain level contours to plot
# levels = Z.min() * np.array([1.005, 1.1, 1.3, 1.5, 2.])
# plt.contour(U, V, Z)# levels=levels, alpha=0.5)
ax.set_xlabel('Weight')
ax.set_ylabel('Bias')
ax.set_zlabel('Loss', rotation=90)
return ax
def plot_param_distance(best_u, best_v, trajs, fig, styles, labels,
use_log=False, y_min_v=-12.0, y_max_v=1.5):
""" Plot the distance to each of the two parameters for a collection of 'trajectories'
"""
ax = fig.add_subplot(1, 1, 1)
for traj, style, label in zip(trajs, styles, labels):
d0 = np.array([np.abs(_[0] - best_u) for _ in traj])
d1 = np.array([np.abs(_[1] - best_v) for _ in traj])
if use_log:
d0 = np.log(1e-16 + d0)
d1 = np.log(1e-16 + d1)
ax.plot(range(len(traj)), d0, style, label='weight - ' + label)
ax.plot(range(len(traj)), d1, style, label='bias - ' + label)
ax.set_xlabel('Iteration')
if use_log:
ax.set_ylabel('Log distance to optimum (per dimension)')
ax.set_ylim(y_min_v, y_max_v)
else:
ax.set_ylabel('Abs distance to optimum (per dimension)')
ax.legend(loc='right', bbox_to_anchor=(1.5, 0.5),
fancybox=False, shadow=False, ncol=1)
return ax
def run_optimizer(inits, eval_fn, update_fn, max_steps=500,
optim_kwargs={'lr':1e-2}, log_traj=True):
"""Runs an optimizer on a given objective and logs parameter trajectory
Args:
inits list(scalar): initialization of parameters
eval_fn (callable): function computing the objective to be minimized
update_fn (callable): function executing parameter update
max_steps (int): number of iterations to run
optim_kwargs (dict): customize optimizer hyperparameters
Returns:
list[list]: trajectory information [*params, loss] for each optimization step
"""
# Initialize parameters and optimizer
params = [nn.Parameter(torch.tensor(_)) for _ in inits]
# Methods like momentum and rmsprop keep an auxiliary vector of parameters
aux_tensors = [torch.zeros_like(_) for _ in params]
if log_traj:
traj = np.zeros((max_steps, len(params)+1))
for _ in range(max_steps):
# Evaluate loss
loss = eval_fn(*params)
# Store 'trajectory' information
if log_traj:
traj[_, :] = [_.item() for _ in params] + [loss.item()]
# Perform update
if update_fn == gradient_update:
gradient_update(loss, params, **optim_kwargs)
else:
update_fn(loss, params, aux_tensors, **optim_kwargs)
if log_traj:
return traj
L = 4.
xs = np.linspace(-L, L, 30)
ys = np.linspace(-L, L, 30)
U, V = np.meshgrid(xs, ys)
# -
# ## Coding Exercise 4: Implement momentum
#
# In this exercise you will implement the momentum update given by:
#
# \begin{equation}
# w_{t+1} = w_t - \eta \nabla J(w_t) + \beta (w_t - w_{t-1})
# \end{equation}
#
# It is convenient to re-express this update rule in terms of a recursion. For that, we define 'velocity' as the quantity:
# \begin{equation}
# v_{t-1} := w_{t} - w_{t-1}
# \end{equation}
#
# which leads to the two-step update rule:
#
# \begin{equation}
# v_t = - \eta \nabla J(w_t) + \beta (\underbrace{w_t - w_{t-1}}_{v_{t-1}})
# \end{equation}
#
# \begin{equation}
# w_{t+1} \leftarrow w_t + v_{t}
# \end{equation}
#
# Pay attention to the positive sign of the update in the last equation, given the definition of $v_t$, above.
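# Here is a minimal standalone sketch of these two update rules on a poorly conditioned 2-D quadratic (plain NumPy; the curvatures, learning rate, $\beta$, and step count are made-up illustration values, not the tutorial's settings). Momentum approaches the optimum at $(0, 0)$ faster than plain gradient descent.

```python
import numpy as np

curv = np.array([1.0, 25.0])       # very different curvatures per dimension
grad = lambda w: curv * w          # gradient of 0.5 * (w1^2 + 25 * w2^2)

lr, beta, steps = 0.04, 0.9, 400
w_gd = np.array([1.0, 1.0])
w_mom = np.array([1.0, 1.0])
v = np.zeros(2)                    # initial 'velocity'

for _ in range(steps):
    # Plain gradient descent
    w_gd = w_gd - lr * grad(w_gd)
    # Momentum, exactly as in the two-step rule above:
    # v_t = -lr * grad(w_t) + beta * v_{t-1};  w_{t+1} = w_t + v_t
    v = -lr * grad(w_mom) + beta * v
    w_mom = w_mom + v

print(np.linalg.norm(w_gd))    # GD is held back by the slow, low-curvature direction
print(np.linalg.norm(w_mom))   # momentum ends up much closer to the optimum
```

# The learning rate of gradient descent is capped by the stiff (high-curvature) direction, so the flat direction crawls; the velocity term lets momentum accumulate speed along that flat direction.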
# +
def momentum_update(loss, params, grad_vel, lr=1e-3, beta=0.8):
"""Perform a momentum update over a collection of parameters given a loss and 'velocities'
Args:
loss (tensor): A scalar tensor containing the loss whose gradient will be computed
params (iterable): Collection of parameters with respect to which we compute gradients
grad_vel (iterable): Collection containing the 'velocity' v_t for each parameter
lr (float): Scalar specifying the learning rate or step-size for the update
beta (float): Scalar 'momentum' parameter
"""
# Clear up gradients as Pytorch automatically accumulates gradients from
# successive backward calls
zero_grad(params)
# Compute gradients on given objective
loss.backward()
with torch.no_grad():
for (par, vel) in zip(params, grad_vel):
#################################################
## TODO for students: update the value of the parameter ##
raise NotImplementedError("Student exercise: implement momentum update")
#################################################
# Update 'velocity'
vel.data = ...
# Update parameters
par.data += ...
# add event to airtable
atform.add_event('Coding Exercise 4: Implement momentum')
set_seed(seed=SEED)
model2 = MLP(in_dim=784, out_dim=10, hidden_dims=[])
print('\n The model2 parameters before the update are: \n')
print_params(model2)
loss = loss_fn(model2(X), y)
initial_vel = [torch.randn_like(p) for p in model2.parameters()]
## Uncomment below to test your function
# momentum_update(loss, list(model2.parameters()), grad_vel=initial_vel, lr=1e-1, beta=0.9)
# print('\n The model2 parameters after the update are: \n')
# print_params(model2)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_ba72f88a.py)
#
#
# -
# ```
# The model2 parameters after the update are:
#
# main.0.weight tensor([[ 1.5898, 0.0116, -2.0239, ..., -1.0871, 0.4030, -0.9577],
# [ 0.4653, 0.6022, -0.7363, ..., 0.5485, -0.2747, -0.6539],
# [-1.4117, -1.1045, 0.6492, ..., -1.0201, 0.6503, 0.1310],
# ...,
# [-0.5098, 0.5075, -0.0718, ..., 1.1192, 0.2900, -0.9657],
# [-0.4405, -0.1174, 0.7542, ..., 0.0792, -0.1857, 0.3537],
# [-1.0824, 1.0080, -0.4254, ..., -0.3760, -1.7491, 0.6025]])
# main.0.bias tensor([ 0.4147, -1.0440, 0.8720, -1.6201, -0.9632, 0.9430, -0.5180, 1.3417,
# 0.6574, 0.3677])
# ```
# ## Interactive Demo 4: Momentum vs. GD
#
# The plots below show the distance to the optimum for both variables across the two methods, as well as the parameter trajectory over the loss surface.
# + cellView="form"
# @markdown Run this cell to enable the widget!
from matplotlib.lines import Line2D
# Find the optimum of this 2D problem using Newton's method
def run_newton(func, init_list=[0., 0.], max_iter=200):
par_tensor = torch.tensor(init_list, requires_grad=True)
t_g = lambda par_tensor: func(par_tensor[0], par_tensor[1])
for _ in tqdm(range(max_iter)):
eval_loss = t_g(par_tensor)
eval_grad = torch.autograd.grad(eval_loss, [par_tensor])[0]
eval_hess = torch.autograd.functional.hessian(t_g, par_tensor)
# Newton's update is: - inverse(Hessian) x gradient
par_tensor.data -= torch.inverse(eval_hess) @ eval_grad
return par_tensor.data.numpy()
set_seed(2021)
model = MLP(in_dim=784, out_dim=10, hidden_dims=[])
# Define 2d loss objectives and surface values
g = lambda u, v: loss_2d(copy.deepcopy(model), u, v)
Z = np.fromiter(map(g, U.ravel(), V.ravel()), U.dtype).reshape(V.shape)
best_u, best_v = run_newton(func=g)
# Initialization of the variables
INITS = [2.5, 3.7]
# Used for plotting
LABELS = ['GD', 'Momentum']
COLORS = ['black', 'red']
LSTYLES = ['-', '--']
@widgets.interact_manual
def momentum_experiment(max_steps=widgets.IntSlider(300, 50, 500, 5),
lr=widgets.FloatLogSlider(value=1e-1, min=-3, max=0.7, step=0.1),
beta=widgets.FloatSlider(value=9e-1, min=0, max=1., step=0.01)
):
# Execute both optimizers
sgd_traj = run_optimizer(INITS, eval_fn=g, update_fn=gradient_update,
max_steps=max_steps, optim_kwargs={'lr': lr})
mom_traj = run_optimizer(INITS, eval_fn=g, update_fn=momentum_update,
max_steps=max_steps, optim_kwargs={'lr': lr, 'beta':beta})
TRAJS = [sgd_traj, mom_traj]
# Plot distances
fig = plt.figure(figsize=(9,4))
plot_param_distance(best_u, best_v, TRAJS, fig,
LSTYLES, LABELS, use_log=True, y_min_v=-12.0, y_max_v=1.5)
# # Plot trajectories
fig = plt.figure(figsize=(12, 5))
ax = plot_surface(U, V, Z, fig)
for traj, c, label in zip(TRAJS, COLORS, LABELS):
ax.plot3D(*traj.T, c, linewidth=0.3, label=label)
ax.scatter3D(*traj.T, '.-', s=1, c=c)
# Plot optimum point
ax.scatter(best_u, best_v, Z.min(), marker='*', s=80, c='lime', label='Opt.');
lines = [Line2D([0], [0],
color=c,
linewidth=3,
linestyle='--') for c in COLORS]
lines.append(Line2D([0], [0], color='lime', linewidth=0, marker='*'))
ax.legend(lines, LABELS + ['Optimum'], loc='right',
bbox_to_anchor=(.8, -0.1), ncol=len(LABELS) + 1)
# -
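# As a quick sanity check (our addition, using only NumPy), Newton's method converges in a single step on any quadratic, regardless of conditioning. This is precisely the property that makes it attractive for poorly conditioned problems:

```python
import numpy as np

# Quadratic f(w) = 0.5 * w^T A w - b^T w with a badly conditioned Hessian A
A = np.array([[100.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0])

w = np.array([5.0, -3.0])        # arbitrary starting point
grad = A @ w - b                 # gradient of the quadratic
w = w - np.linalg.inv(A) @ grad  # Newton step: -inverse(Hessian) @ gradient

w_star = np.linalg.solve(A, b)   # the optimum solves A w* = b
print(np.allclose(w, w_star))    # True: one Newton step reaches the optimum
```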
# ## Think! 4: Momentum and oscillations
#
# - How does this specific example illustrate the issue of poor conditioning in optimization? How does momentum help resolve these difficulties?
#
# - Do you see oscillations for any of these methods? Why does this happen?
#
# - Finally, tune the learning rate and momentum parameters to achieve a loss below $10^{-6}$ (for both dimensions) within 100 iterations.
# + cellView="form"
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and click on `Submit!`',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q2' , text.value)
print("Submission successful!")
button.on_click(on_button_clicked)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_5eaa9306.py)
#
#
# -
# ---
# # Section 5: Non-convexity
#
# *Time estimate: ~30 mins*
# The introduction of even a single hidden layer in the neural network transforms the previous convex optimization problem into a non-convex one. And with great non-convexity comes great responsibility... (Sorry, we couldn't help it!)
#
# **Note:** From this section onwards, we will be dealing with non-convex optimization problems for the remainder of the tutorial.
# + cellView="form"
# @title Video 5: Overparametrization
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV16h41167Jr", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"7vUpUEKKl5o", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event("Video 5: Overparametrization")
display(out)
# -
# Take a couple of minutes to play with a more complex 3D visualization of the loss landscape of a neural network on a non-convex problem. Visit https://losslandscape.com/explorer.
#
# 1. Explore the features on the bottom left corner. You can see an explanation for each icon by clicking on the [i] button located on the top right corner.
# 2. Use the 'gradient descent' feature to perform a thought experiment:
# - Choose an initialization
# - Choose the learning rate
# - Mentally formulate your hypothesis about what kind of trajectory you expect to observe
# 3. Run the experiment and contrast your intuition with the observed behavior.
# 4. Repeat this experiment a handful of times for several initialization/learning rate configurations
#
# ## Interactive Demo 5: Overparametrization to the rescue!
#
# As you may have seen, the non-convex nature of the surface can lead the optimization process to get stuck in undesirable local optima. There is ample empirical evidence supporting the claim that 'overparameterized' models are easier to train.
#
# We will explore this assertion in the context of our MLP training. For this, we initialize a fixed model and construct several models by applying small random perturbations to the original initialized weights. We then train each of these perturbed models and see how the loss evolves. If we were in the convex setting, we would expect to reach very similar objective values upon convergence, since all these models were very close at the beginning of training, and in convex problems every local optimum is also a global optimum.
#
# Use the interactive plot below to visualize the loss progression for these perturbed models:
#
# 1. Select different settings from the `hidden_dims` drop-down menu.
# 2. Explore the effect of the number of steps and learning rate.
# + cellView="form"
# @markdown Execute this cell to enable the widget!
@widgets.interact_manual
def overparam(max_steps=widgets.IntSlider(150, 50, 500, 5),
hidden_dims=widgets.Dropdown(options=["10", "20, 20", "100, 100"],
value="10"),
lr=widgets.FloatLogSlider(value=5e-2, min=-3, max=0, step=0.1),
num_inits=widgets.IntSlider(7, 5, 10, 1)):
X, y = train_set.data[subset_index, :], train_set.targets[subset_index]
hdims = [int(s) for s in hidden_dims.split(',')]
base_model = MLP(in_dim=784, out_dim=10, hidden_dims=hdims)
fig, axs = plt.subplots(1, 1, figsize=(5, 4))
for _ in tqdm(range(num_inits)):
model = copy.deepcopy(base_model)
random_update(model, noise_scale=2e-1)
loss_hist = np.zeros((max_steps, 2))
for step in range(max_steps):
loss = loss_fn(model(X), y)
gradient_update(loss, list(model.parameters()), lr=lr)
loss_hist[step] = np.array([step, loss.item()])
plt.plot(loss_hist[:, 0], loss_hist[:, 1])
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.ylim(0, 3)
plt.show()
num_params = sum([np.prod(_.shape) for _ in model.parameters()])
  print('Number of parameters in model: ' + str(num_params))
# -
# ### Think! 5.1: Width and depth of the network
#
# - We see that as we increase the width/depth of the network, training becomes faster and more consistent across different initializations. What might be the reasons for this behavior?
#
# - What are some potential downsides of this approach to dealing with non-convexity?
#
# + cellView="form"
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and click on `Submit!`',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q3' , text.value)
print("Submission successful!")
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_d69ca8d7.py)
#
#
# -
# ---
# # Section 6: Full gradients are expensive
#
# *Time estimate: ~25 mins*
# So far we have used only a small (fixed) subset of 500 training examples to perform the updates on the model parameters in our quest to minimize the loss. But what if we decided to use the full training set? Does our current approach scale to datasets with tens of thousands, or millions, of data points?
#
# In this section we explore an efficient alternative to avoid having to perform computations on all the training examples before performing a parameter update.
# + cellView="form"
# @title Video 6: Mini-batches
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1ty4y1T7Uh", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"hbqUxpNBUGk", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event("Video 6: Mini-batches")
display(out)
# -
# ## Interactive Demo 6.1: Cost of computation
#
# Evaluating a neural network is a relatively fast process. However, when repeated millions of times, the computational cost of performing forward and backward passes through the network starts to become significant.
#
# In the visualization below, we show the time (averaged over 5 runs) of computing a forward and backward pass with a changing number of input examples. Choose from the different options in the drop-down box and note how the vertical scale changes depending on the size of the network.
#
# **Remarks:** Note that the computational cost of a forward pass shows a clear linear relationship with the number of input examples, and the cost of the corresponding backward pass exhibits a similar computational complexity.
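# The linear scaling follows directly from the arithmetic: a linear layer mapping `fan_in` inputs to `fan_out` outputs costs roughly `2 * batch * fan_in * fan_out` floating-point operations, so doubling the number of examples doubles the work. A small sketch (our addition; the layer sizes are illustrative):

```python
def forward_flops(batch_size, layer_dims):
    """Approximate FLOPs of an MLP forward pass: each linear layer
    costs about 2 * batch_size * fan_in * fan_out operations."""
    flops = 0
    for fan_in, fan_out in zip(layer_dims[:-1], layer_dims[1:]):
        flops += 2 * batch_size * fan_in * fan_out
    return flops

# MLP with the tutorial's shape: 784 -> 100 -> 10
print(forward_flops(100, [784, 100, 10]))  # 15880000
print(forward_flops(200, [784, 100, 10]))  # exactly twice as much
```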
# + cellView="form"
# @markdown Execute this cell to enable the widget!
def measure_update_time(model, num_points):
X, y = train_set.data[:num_points], train_set.targets[:num_points]
start_time = time.time()
loss = loss_fn(model(X), y)
loss_time = time.time()
gradient_update(loss, list(model.parameters()), lr=0)
gradient_time = time.time()
return loss_time - start_time, gradient_time - loss_time
@widgets.interact
def computation_time(hidden_dims=widgets.Dropdown(options=["1", "100", "50, 50"],
value="100")):
hdims = [int(s) for s in hidden_dims.split(',')]
model = MLP(in_dim=784, out_dim=10, hidden_dims=hdims)
NUM_POINTS = [1, 5, 10, 100, 200, 500, 1000, 5000, 10000, 20000, 30000, 50000]
times_list = []
for _ in range(5):
times_list.append(np.array([measure_update_time(model, _) for _ in NUM_POINTS]))
times = np.array(times_list).mean(axis=0)
fig, axs = plt.subplots(1, 1, figsize=(5,4))
plt.plot(NUM_POINTS, times[:, 0], label='Forward')
plt.plot(NUM_POINTS, times[:, 1], label='Backward')
plt.xlabel('Number of data points')
plt.ylabel('Seconds')
plt.legend()
# -
# ## Coding Exercise 6: Implement minibatch sampling
#
# Complete the code in `sample_minibatch` so as to produce IID subsets of the training set of the desired size. (This is _not_ a trick question.)
# +
def sample_minibatch(input_data, target_data, num_points=100):
"""Sample a minibatch of size num_point from the provided input-target data
Args:
input_data (tensor): Multi-dimensional tensor containing the input data
target_data (tensor): 1D tensor containing the class labels
num_points (int): Number of elements to be included in minibatch
Returns:
batch_inputs (tensor): Minibatch inputs
batch_targets (tensor): Minibatch targets
"""
#################################################
## TODO for students: sample minibatch of data ##
  raise NotImplementedError("Student exercise: implement minibatch sampling")
#################################################
# Sample a collection of IID indices from the existing data
batch_indices = ...
# Use batch_indices to extract entries from the input and target data tensors
batch_inputs = input_data[...]
batch_targets = target_data[...]
return batch_inputs, batch_targets
# add event to airtable
atform.add_event('Coding Exercise 6: Implement minibatch sampling')
## Uncomment to test your function
# x_batch, y_batch = sample_minibatch(X, y, num_points=100)
# print(f"The input shape is {x_batch.shape} and the target shape is: {y_batch.shape}")
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_6bf245d5.py)
#
#
# -
# ```
# The input shape is torch.Size([100, 28, 28]) and the target shape is: torch.Size([100])
# ```
# ## Interactive Demo 6.2: *Compare* different minibatch sizes
#
# What are the trade-offs induced by the choice of minibatch size? The interactive plot below shows the training evolution of a 2-hidden layer MLP with 100 hidden units in each hidden layer. Different plots correspond to a different choice of minibatch size. We have a fixed time budget for all the cases, reflected in the horizontal axes of these plots.
# + cellView="form"
# @markdown Execute this cell to enable the widget!
@widgets.interact_manual
def minibatch_experiment(batch_sizes='20, 250, 1000',
lrs='5e-3, 5e-3, 5e-3',
time_budget=widgets.Dropdown(options=["2.5", "5", "10"],
value="2.5")):
batch_sizes = [int(s) for s in batch_sizes.split(',')]
lrs = [float(s) for s in lrs.split(',')]
LOSS_HIST = {_:[] for _ in batch_sizes}
X, y = train_set.data, train_set.targets
base_model = MLP(in_dim=784, out_dim=10, hidden_dims=[100, 100])
for id, batch_size in enumerate(tqdm(batch_sizes)):
start_time = time.time()
# Create a new copy of the model for each batch size
model = copy.deepcopy(base_model)
params = list(model.parameters())
lr = lrs[id]
# Fixed budget per choice of batch size
while (time.time() - start_time) < float(time_budget):
data, labels = sample_minibatch(X, y, batch_size)
loss = loss_fn(model(data), labels)
gradient_update(loss, params, lr=lr)
LOSS_HIST[batch_size].append([time.time() - start_time,
loss.item()])
fig, axs = plt.subplots(1, len(batch_sizes), figsize=(10, 3))
for ax, batch_size in zip(axs, batch_sizes):
plot_data = np.array(LOSS_HIST[batch_size])
ax.plot(plot_data[:, 0], plot_data[:, 1], label=batch_size,
alpha=0.8)
ax.set_title('Batch size: ' + str(batch_size))
ax.set_xlabel('Seconds')
ax.set_ylabel('Loss')
plt.show()
# -
# **Remarks:** SGD works! We have an algorithm that can be applied (with due precaution) to learn from datasets of arbitrary size.
#
# However, **note the difference in the vertical scale** across the plots above. When using a larger minibatch, we can perform fewer parameter updates, as the forward and backward passes are more expensive.
#
# This highlights the interplay between the minibatch size and the learning rate: when our minibatch is larger, we have a more confident estimator of the direction to move, and thus can afford a larger learning rate. On the other hand, extremely small minibatches are very fast computationally but are not representative of the data distribution and yield estimations of the gradient with high variance.
#
# We encourage you to tune the value of the learning rate for each of the minibatch sizes in the previous demo, to achieve a training loss steadily below 0.5 within 5 seconds.
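# The variance argument can be made concrete with a toy estimator (our addition). Averaging `n` IID per-example "gradients" (modeled here as plain Gaussian noise) shrinks the variance of the estimate roughly as `1/n`, which is why larger minibatches tolerate larger learning rates:

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_estimates(batch_size, num_trials=2000):
    """Average batch_size noisy per-example 'gradients' (standard normal
    samples) and return num_trials such minibatch estimates."""
    samples = rng.normal(size=(num_trials, batch_size))
    return samples.mean(axis=1)

var_small = minibatch_estimates(10).var()    # roughly 1/10
var_large = minibatch_estimates(1000).var()  # roughly 1/1000
print(var_small, var_large)
```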
# ---
# # Section 7: Adaptive methods
#
# *Time estimate: ~25 mins*
# As of now, you should be aware that there are many knobs to turn when working on a machine learning problem. Some of these relate to the optimization algorithm, to the choice of model, or to the objective to minimize. Here are some prototypical examples:
#
# - Problem: loss function, regularization coefficients (Day 5)
# - Model: architecture, activation functions
# - Optimizer: learning rate, batch size, momentum coefficient
#
# We concentrate on the choices that are directly related to optimization. In particular, we will explore some _automatic_ methods for setting the learning rate that address the poor-conditioning problem and are robust across different problems.
# + cellView="form"
# @title Video 7: Adaptive Methods
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1eq4y1W7JG", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Zr6r2kfmQUM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 7: Adaptive Methods')
display(out)
# -
# ## Coding Exercise 7: Implement RMSprop
#
# In this exercise you will implement the update of the RMSprop optimizer:
#
# \begin{align}
# v_{t} &= \alpha v_{t-1} + (1 - \alpha) \nabla J(w_t)^2 \\ \\
# w_{t+1} &= w_t - \eta \frac{\nabla J(w_t)}{\sqrt{v_t + \epsilon}}
# \end{align}
#
# where the non-standard operations (division of two vectors, squaring a vector, etc.) are to be interpreted element-wise, i.e., the operation is applied independently to each entry (or pair of corresponding entries) of the vector(s), treated as real numbers.
#
# Here, the $\epsilon$ hyperparameter provides numerical stability to the algorithm by preventing the effective step size from becoming too large when $v_t$ is small. Typically, we set $\epsilon$ to a small default value, such as $10^{-8}$.
# +
def rmsprop_update(loss, params, grad_sq, lr=1e-3, alpha=0.8, epsilon=1e-8):
"""Perform an RMSprop update on a collection of parameters
Args:
loss (tensor): A scalar tensor containing the loss whose gradient will be computed
params (iterable): Collection of parameters with respect to which we compute gradients
grad_sq (iterable): Moving average of squared gradients
lr (float): Scalar specifying the learning rate or step-size for the update
alpha (float): Moving average parameter
    epsilon (float): for numerical stability
"""
# Clear up gradients as Pytorch automatically accumulates gradients from
# successive backward calls
zero_grad(params)
# Compute gradients on given objective
loss.backward()
with torch.no_grad():
for (par, gsq) in zip(params, grad_sq):
#################################################
## TODO for students: update the value of the parameter ##
# Use gsq.data and par.grad
raise NotImplementedError("Student exercise: implement gradient update")
#################################################
# Update estimate of gradient variance
gsq.data = ...
# Update parameters
par.data -= ...
# add event to airtable
atform.add_event('Coding Exercise 7: Implement RMSprop')
set_seed(seed=SEED)
model3 = MLP(in_dim=784, out_dim=10, hidden_dims=[])
print('\n The model3 parameters before the update are: \n')
print_params(model3)
loss = loss_fn(model3(X), y)
# Intialize the moving average of squared gradients
grad_sq = [1e-6*i for i in list(model3.parameters())]
## Uncomment below to test your function
# rmsprop_update(loss, list(model3.parameters()), grad_sq=grad_sq, lr=1e-3)
# print('\n The model3 parameters after the update are: \n')
# print_params(model3)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_b4a7e579.py)
#
#
# -
# ```
# The model3 parameters after the update are:
#
# main.0.weight tensor([[-0.0240, 0.0031, 0.0193, ..., 0.0316, 0.0297, -0.0198],
# [-0.0063, -0.0318, -0.0109, ..., -0.0093, 0.0232, -0.0255],
# [ 0.0218, -0.0253, 0.0320, ..., 0.0102, 0.0248, -0.0203],
# ...,
# [-0.0027, 0.0136, 0.0089, ..., 0.0123, -0.0324, -0.0166],
# [ 0.0159, 0.0281, 0.0233, ..., -0.0133, -0.0197, 0.0182],
# [ 0.0186, -0.0376, -0.0205, ..., -0.0293, 0.0077, -0.0019]])
# main.0.bias tensor([-0.0313, -0.0011, 0.0122, -0.0342, 0.0045, 0.0199, 0.0329, 0.0265,
# 0.0182, -0.0041])
# ```
# ## Interactive Demo 7: Compare optimizers
#
# Below, we compare your implementations of SGD, momentum and RMSprop. If you have successfully coded all the exercises so far: congrats! You are now *in the know* of some of the most commonly used and powerful tools of optimization for deep learning.
# + cellView="form"
# @markdown Execute this cell to enable the widget!
X, y = train_set.data, train_set.targets
@widgets.interact_manual
def compare_optimizers(
batch_size=(25, 250, 5),
lr=widgets.FloatLogSlider(value=2e-3, min=-5, max=0),
max_steps=(50, 500, 5)):
SGD_DICT = [gradient_update, 'SGD', 'black', '-', {'lr': lr}]
MOM_DICT = [momentum_update, 'Momentum', 'red', '--', {'lr': lr, 'beta': 0.9}]
RMS_DICT = [rmsprop_update, 'RMSprop', 'fuchsia', '-', {'lr': lr, 'alpha': 0.8}]
ALL_DICTS = [SGD_DICT, MOM_DICT, RMS_DICT]
base_model = MLP(in_dim=784, out_dim=10, hidden_dims=[100, 100])
LOSS_HIST = {}
for opt_dict in tqdm(ALL_DICTS):
update_fn, opt_name, color, lstyle, kwargs = opt_dict
LOSS_HIST[opt_name] = []
model = copy.deepcopy(base_model)
params = list(model.parameters())
if opt_name != 'SGD':
aux_tensors = [torch.zeros_like(_) for _ in params]
for step in range(max_steps):
data, labels = sample_minibatch(X, y, batch_size)
loss = loss_fn(model(data), labels)
if opt_name == 'SGD':
update_fn(loss, params, **kwargs)
else:
update_fn(loss, params, aux_tensors, **kwargs)
LOSS_HIST[opt_name].append(loss.item())
fig, axs = plt.subplots(1, len(ALL_DICTS), figsize=(9, 3))
for ax, optim_dict in zip(axs, ALL_DICTS):
opt_name = optim_dict[1]
ax.plot(range(max_steps), LOSS_HIST[opt_name], alpha=0.8)
ax.set_title(opt_name)
ax.set_xlabel('Iteration')
ax.set_ylabel('Loss')
ax.set_ylim(0, 2.5)
plt.show()
# -
# ### **Discussion**
#
# Tune the 3 methods above in order to make each individually excel and discuss your findings. How do the methods compare in terms of robustness to small changes of the hyperparameters? How easy was it to find a good hyperparameter configuration?
#
# **Remarks:** Note that RMSprop allows us to use a 'per-dimension' learning rate _without having to tune one learning rate for each dimension **ourselves**_. The method uses information collected about the variance of the gradients throughout training to **adapt** the step size for each parameter automatically. The savings in tuning effort of RMSprop over SGD or 'plain' momentum are undisputed on this task.
#
# Moreover, adaptive optimization methods are currently a highly active research area, with many related algorithms such as Adam, AMSgrad, and Adagrad being used in practical applications and investigated theoretically.
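# Adam, for instance, combines the two ideas seen above: a momentum-style moving average of the gradient and an RMSprop-style moving average of its square, plus a bias correction for the early steps. A minimal sketch (our addition, not the tutorial's code):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the running first and second moments."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)  # bias corrections for zero-initialized moments
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w
w = np.array([5.0, -3.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 201):
    w, m, v = adam_step(w, w, m, v, t)
print(np.linalg.norm(w))  # much smaller than the initial norm of ~5.8
```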
# ### Locality of Gradients
# As we've seen throughout this tutorial, poor conditioning can be a significant burden on convergence to an optimum when using gradient-based optimization. Of the methods we've seen to deal with this issue, notice how both momentum and adaptive learning rates incorporate past gradient values into their update schemes. Why do we use past values of our loss function's gradient while updating our current MLP weights?
#
# Recall from W1D2 that the gradient of a function, $\nabla f(w_t)$, is a **local** property and computes the direction of maximum change of $f(w_t)$ at the point $w_t$. However, when we train our MLP model we are hoping to find the __global__ optimum of our training loss. By incorporating past values of our function's gradient into our optimization schemes, we use more information about the overall shape of our function than a single gradient alone can provide.
# ## Think! 7: Loss function and optimization
#
# Can you think of other ways we can incorporate more information about our loss function into our optimization schemes?
# + cellView="form"
# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and click on `Submit!`',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q4' , text.value)
print("Submission successful!")
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_a3f4354b.py)
#
#
# -
# ---
# # Section 8: Ethical concerns
#
# *Time estimate: ~15mins*
# + cellView="form"
# @title Video 8: Ethical concerns
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1TU4y1G7Je", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0EthSI0cknI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 8: Ethical concerns')
display(out)
# -
# ---
# # Summary
#
# * Optimization is necessary for deep learning models to converge to good solutions
# * Stochastic gradient descent and momentum are two commonly used optimization techniques
# * RMSprop is an adaptive method that automatically tunes a per-dimension learning rate
# * Poor choice of optimization objectives can lead to unforeseen, undesirable consequences
#
# If you have time left, you can read the Bonus material, where we put it all together and compare our model with a benchmark model.
# + cellView="form"
# @title Airtable Submission Link
from IPython import display as IPydisplay
IPydisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link end of day Survey" style="width:410px"></a>
</div>""" )
# -
# ---
# # Bonus: Putting it all together
#
# *Time estimate: ~40 mins*
# We have progressively built a sophisticated optimization algorithm that can deal with a non-convex, poorly conditioned problem involving tens of thousands of training examples. Now we present _you_ with a small challenge: beat us! :P
#
# Your mission is to train an MLP model that can compete with a benchmark model which we have pre-trained for you. In this section you will be able to use the full power of PyTorch: loading the data, defining the model, sampling minibatches, as well as PyTorch's **optimizer implementations**.
#
# There is a big engineering component behind the design of optimizers, and their implementation can sometimes become tricky. So unless you are doing research directly in optimization, it is recommended to use an implementation provided by a widely reviewed open-source library.
# + cellView="form"
# @title Video 9: Putting it all together
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1MK4y1u7u2", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"DP9c13vLiOM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
# add event to airtable
atform.add_event('Video 9: Putting it all together')
display(out)
# + cellView="form"
# @title Download parameters of the benchmark model
import requests
fname = 'benchmark_model.pt'
url = "https://osf.io/sj4e8/download"
r = requests.get(url, allow_redirects=True)
with open(fname, 'wb') as fh:
fh.write(r.content)
# Load the benchmark model's parameters
DEVICE = set_device()
if DEVICE == "cuda":
benchmark_state_dict = torch.load(fname)
else:
benchmark_state_dict = torch.load(fname, map_location=torch.device('cpu'))
# +
# Create MLP object and update weights with those of saved model
benchmark_model = MLP(in_dim=784, out_dim=10,
hidden_dims=[200, 100, 50]).to(DEVICE)
benchmark_model.load_state_dict(benchmark_state_dict)
# Define helper function to evaluate models
def eval_model(model, data_loader, num_batches=np.inf, device='cpu'):
loss_log, acc_log = [], []
model.to(device=device)
# We are just evaluating the model, no need to compute gradients
with torch.no_grad():
for batch_id, batch in enumerate(data_loader):
# If we only evaluate a number of batches, stop after we reach that number
if batch_id > num_batches:
break
# Extract minibatch data
data, labels = batch[0].to(device), batch[1].to(device)
# Evaluate model and loss on minibatch
preds = model(data)
loss_log.append(loss_fn(preds, labels).item())
acc_log.append(torch.mean(1. * (preds.argmax(dim=1) == labels)).item())
return np.mean(loss_log), np.mean(acc_log)
# -
# We define an optimizer in the following steps:
#
# 1. Load the corresponding class that implements the parameter updates and other internal management activities, including:
# - create auxiliary variables,
# - update moving averages,
# - adjust learning rate.
# 2. Pass the parameters of the Pytorch model that the optimizer has control over. Note that different parameter groups can potentially be controlled by different optimizers.
# 3. Specify hyperparameters, including learning rate, momentum, moving average factors, etc.
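# Step 2 above, passing different parameter groups with different settings, looks like this in PyTorch (a sketch we added; the tiny `nn.Sequential` stands in for the tutorial's MLP):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))

# Two parameter groups: a smaller learning rate for the first layer
optimizer = torch.optim.SGD(
    [{'params': model[0].parameters(), 'lr': 1e-3},
     {'params': model[2].parameters()}],  # falls back to the default lr
    lr=1e-2, momentum=0.9)

for group in optimizer.param_groups:
    print(group['lr'])  # 0.001, then 0.01
```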
#
#
# ## Exercise Bonus: Train your own model
#
# Now, train the model with your preferred optimizer and find a good combination of hyperparameter settings.
# +
#################################################
## TODO for students: adjust training settings ##
# The three parameters below are in your full control
MAX_EPOCHS = 2 # select number of epochs to train
LR = 1e-5 # choose the step size
BATCH_SIZE = 64 # number of examples per minibatch
# Define the model and associated optimizer -- you may change its architecture!
model = MLP(in_dim=784, out_dim=10, hidden_dims=[200, 100, 50]).to(DEVICE)
# You can take your pick from many different optimizers
# Check the optimizer documentation and hyperparameter meaning before using!
# More details on Pytorch optimizers: https://pytorch.org/docs/stable/optim.html
# optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=LR, alpha=0.99)
# optimizer = torch.optim.Adagrad(model.parameters(), lr=LR)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
#################################################
# +
set_seed(seed=SEED)
# Print training stats every LOG_FREQ minibatches
LOG_FREQ = 200
# Frequency for evaluating the validation metrics
VAL_FREQ = 200
# Load data using a Pytorch Dataset
train_set_orig, test_set_orig = load_mnist_data(change_tensors=False)
# We separate 10,000 training samples to create a validation set
train_set_orig, val_set_orig = torch.utils.data.random_split(train_set_orig, [50000, 10000])
# Create the corresponding DataLoaders for training and test
g_seed = torch.Generator()
g_seed.manual_seed(SEED)
train_loader = torch.utils.data.DataLoader(train_set_orig,
shuffle=True,
batch_size=BATCH_SIZE,
num_workers=2,
worker_init_fn=seed_worker,
generator=g_seed)
val_loader = torch.utils.data.DataLoader(val_set_orig,
shuffle=True,
batch_size=256,
num_workers=2,
worker_init_fn=seed_worker,
generator=g_seed)
test_loader = torch.utils.data.DataLoader(test_set_orig,
batch_size=256,
num_workers=2,
worker_init_fn=seed_worker,
generator=g_seed)
# Run training
metrics = {'train_loss':[],
'train_acc':[],
'val_loss':[],
'val_acc':[],
'val_idx':[]}
step_idx = 0
for epoch in tqdm(range(MAX_EPOCHS)):
running_loss, running_acc = 0., 0.
for batch_id, batch in enumerate(train_loader):
step_idx += 1
# Extract minibatch data and labels
data, labels = batch[0].to(DEVICE), batch[1].to(DEVICE)
# Just like before, refresh gradient accumulators.
# Note that this is now a method of the optimizer.
optimizer.zero_grad()
# Evaluate model and loss on minibatch
preds = model(data)
loss = loss_fn(preds, labels)
acc = torch.mean(1.0 * (preds.argmax(dim=1) == labels))
# Compute gradients
loss.backward()
# Update parameters
# Note how all the magic in the update of the parameters is encapsulated by
# the optimizer class.
optimizer.step()
# Log metrics for plotting
metrics['train_loss'].append(loss.cpu().item())
metrics['train_acc'].append(acc.cpu().item())
if batch_id % VAL_FREQ == (VAL_FREQ - 1):
# Get an estimate of the validation accuracy with 100 batches
val_loss, val_acc = eval_model(model, val_loader,
num_batches=100,
device=DEVICE)
metrics['val_idx'].append(step_idx)
metrics['val_loss'].append(val_loss)
metrics['val_acc'].append(val_acc)
print(f"[VALID] Epoch {epoch + 1} - Batch {batch_id + 1} - "
f"Loss: {val_loss:.3f} - Acc: {100*val_acc:.3f}%")
# print statistics
running_loss += loss.cpu().item()
running_acc += acc.cpu().item()
# Print every LOG_FREQ minibatches
if batch_id % LOG_FREQ == (LOG_FREQ-1):
print(f"[TRAIN] Epoch {epoch + 1} - Batch {batch_id + 1} - "
f"Loss: {running_loss / LOG_FREQ:.3f} - "
f"Acc: {100 * running_acc / LOG_FREQ:.3f}%")
running_loss, running_acc = 0., 0.
# +
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
ax[0].plot(range(len(metrics['train_loss'])), metrics['train_loss'],
alpha=0.8, label='Train')
ax[0].plot(metrics['val_idx'], metrics['val_loss'], label='Valid')
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Loss')
ax[0].legend()
ax[1].plot(range(len(metrics['train_acc'])), metrics['train_acc'],
alpha=0.8, label='Train')
ax[1].plot(metrics['val_idx'], metrics['val_acc'], label='Valid')
ax[1].set_xlabel('Iteration')
ax[1].set_ylabel('Accuracy')
ax[1].legend()
plt.tight_layout()
plt.show()
# -
# ## Think! Bonus: Metrics
#
# Which metric did you optimize when searching for the right configuration? The training set loss? Accuracy? Validation/test set metrics? Why? Discuss!
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D4_Optimization/solutions/W1D4_Tutorial1_Solution_093a66ad.py)
#
#
# -
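One common answer is that model selection should rest on validation metrics rather than training metrics, since training metrics reward overfitting. A toy sketch with made-up numbers (hypothetical values, not produced by the training loop above):

```python
# Hypothetical metric values for two training runs (assumed for illustration)
runs = {
    "run_a": {"train_loss": 0.01, "val_loss": 0.25},
    "run_b": {"train_loss": 0.05, "val_loss": 0.18},
}

# Selecting on training loss alone would pick run_a, which may simply be
# overfitting; selecting on validation loss picks run_b instead.
best_by_train = min(runs, key=lambda r: runs[r]["train_loss"])
best_by_val = min(runs, key=lambda r: runs[r]["val_loss"])
print(best_by_train, best_by_val)  # run_a run_b
```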
# ### Evaluation
#
# We _finally_ can evaluate and compare the performance of the models on previously unseen examples.
#
# Which model would you keep? (\*drum roll*)
# +
print('Your model...')
train_loss, train_accuracy = eval_model(my_model, train_loader, device=DEVICE)
test_loss, test_accuracy = eval_model(my_model, test_loader, device=DEVICE)
print(f'Train Loss {train_loss:.3f} / Test Loss {test_loss:.3f}')
print(f'Train Accuracy {100*train_accuracy:.3f}% / Test Accuracy {100*test_accuracy:.3f}%')
print('\nBenchmark model')
train_loss, train_accuracy = eval_model(benchmark_model, train_loader, device=DEVICE)
test_loss, test_accuracy = eval_model(benchmark_model, test_loader, device=DEVICE)
print(f'Train Loss {train_loss:.3f} / Test Loss {test_loss:.3f}')
print(f'Train Accuracy {100*train_accuracy:.3f}% / Test Accuracy {100*test_accuracy:.3f}%')
| tutorials/W1D4_Optimization/student/W1D4_Tutorial1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Developing a High-Tech Product
# ## Problem definition
# BIOSENS and TECHSENS, two competing firms, are developing a new state-of-the-art biomedical sensor, whose total sales will amount to 1,000 million dollars.
# Both must opt for using the most advanced "e1" or "e2" technology. Should both firms decide on the same technology, BIOSENS would obtain 60% of sales, since it knows how to sell its products better than its competitor, which sells similar
# products. However, if both opt for different technologies, then TECHSENS would obtain a 60% market share because it exploits differentiation better.
#
# **a)** Determine which strategy each firm should adopt
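One way to attack part (a) is to enumerate the pure-strategy Nash equilibria of the 2×2 game; a minimal sketch (payoffs taken directly from the problem statement):

```python
import numpy as np

# Payoff matrix for BIOSENS (millions of dollars out of 1,000 total):
# rows = BIOSENS choice (e1, e2), columns = TECHSENS choice (e1, e2).
A = np.array([[600, 400],
              [400, 600]])
B = 1000 - A  # constant-sum game: TECHSENS gets the rest

# (i, j) is a pure-strategy Nash equilibrium when neither firm can gain
# by unilaterally switching technology.
equilibria = [(i, j)
              for i in range(2) for j in range(2)
              if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]

print(equilibria)  # [] -> no pure-strategy equilibrium
```

With no pure equilibrium, the game has a matching-pennies structure: in the mixed equilibrium each firm chooses each technology with probability 1/2, and each expects 500 million in sales.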
| docs/source/Game Theory/Exercises/Developing a High-Tech Product.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
from collections import defaultdict
from math import ceil
import anndata
import faiss
import numpy as np
import pandas as pd
import plotly.io as pio
import scanpy as sc
from matplotlib import rcParams
import scglue
# +
scglue.plot.set_publication_params()
rcParams["figure.figsize"] = (8, 8)
PATH = "s06_sankey"
os.makedirs(PATH, exist_ok=True)
# -
# # Read data
rna = anndata.read_h5ad("s04_glue_final/full/rna.h5ad", backed="r")
atac = anndata.read_h5ad("s04_glue_final/full/atac.h5ad", backed="r")
atac.obs["NNLS"] = atac.obs["cell_type"]
# # Transfer labels
rna_latent = rna.obsm["X_glue"]
atac_latent = atac.obsm["X_glue"]
rna_latent = rna_latent / np.linalg.norm(rna_latent, axis=1, keepdims=True)
atac_latent = atac_latent / np.linalg.norm(atac_latent, axis=1, keepdims=True)
# +
np.random.seed(0)
quantizer = faiss.IndexFlatIP(rna_latent.shape[1])
n_voronoi = round(np.sqrt(rna_latent.shape[0]))
index = faiss.IndexIVFFlat(quantizer, rna_latent.shape[1], n_voronoi, faiss.METRIC_INNER_PRODUCT)
index.train(rna_latent[np.random.choice(rna_latent.shape[0], 50 * n_voronoi, replace=False)])
index.add(rna_latent)
# index = faiss.IndexFlatIP(rna_latent.shape[1])
# index.add(rna_latent)
# -
nnd, nni = index.search(atac_latent, 50)
hits = rna.obs["cell_type"].to_numpy()[nni]
pred = pd.crosstab(
np.repeat(atac.obs_names, nni.shape[1]), hits.ravel()
).idxmax(axis=1).loc[atac.obs_names]
pred = pd.Categorical(pred, categories=rna.obs["cell_type"].cat.categories)
atac.obs["GLUE"] = pred
atac.write(f"{PATH}/atac_transferred.h5ad", compression="gzip")
# atac = anndata.read_h5ad(f"{PATH}/atac_transferred.h5ad")
# # Sankey
COLOR_MAP = {
k: v for k, v in
zip(atac.obs["cell_type"].cat.categories, atac.uns["cell_type_colors"])
}
link_cutoff = ceil(atac.shape[0] * 0.001)
link_color_map = defaultdict(lambda: "#CCCCCC")
link_color_map.update({
("Astrocytes", "Excitatory neurons"): COLOR_MAP["Excitatory neurons"],
("Astrocytes/Oligodendrocytes", "Astrocytes"): COLOR_MAP["Astrocytes"],
("Astrocytes/Oligodendrocytes", "Oligodendrocytes"): COLOR_MAP["Oligodendrocytes"]
})
fig = scglue.plot.sankey(
atac.obs["NNLS"],
atac.obs["GLUE"],
title="NNLS vs GLUE transferred labels",
left_color=lambda x: COLOR_MAP[x],
right_color=lambda x: COLOR_MAP[x],
link_color=lambda x: "rgba(0.9,0.9,0.9,0.2)" if x["value"] <= link_cutoff \
else link_color_map[(x["left"], x["right"])],
width=700, height=1400, font_size=14
)
pio.write_image(fig, f"{PATH}/sankey.png", scale=10)
# # Accuracy
# ## Exact match
match_set = {(item, item) for item in atac.obs["NNLS"].cat.categories}
match = np.array([(i, j) in match_set for i, j in zip(atac.obs["NNLS"], atac.obs["GLUE"])])
np.sum(match) / atac.shape[0]
# ## Relaxed match
# + tags=[]
for item in atac.obs["NNLS"].cat.categories:
if "?" in item:
match_set.add((item, item.replace("?", "")))
if "/" in item:
for split in item.split("/"):
match_set.add((item, split))
match_set = match_set.union({
("Syncytiotrophoblast and villous cytotrophoblasts?", "Syncytiotrophoblasts and villous cytotrophoblasts"),
# ("Thymocytes", "Lympoid cells"),
# ("Myeloid cells", "Microglia"),
# ("Astrocytes", "Excitatory neurons")
})
# -
match = np.array([(i, j) in match_set for i, j in zip(atac.obs["NNLS"], atac.obs["GLUE"])])
mask = ~atac.obs["NNLS"].str.contains("unknown", case=False)
np.sum(np.logical_and(match, mask)) / mask.sum()
NNLS_size = atac.obs["NNLS"].value_counts().to_dict()
GLUE_size = atac.obs["GLUE"].value_counts().to_dict()
unmatch = atac.obs.loc[
np.logical_and(~match, mask), ["NNLS", "GLUE"]
].value_counts()
unmatch.name = "count"
unmatch = unmatch.reset_index()
unmatch["NNLS_size"] = unmatch["NNLS"].map(NNLS_size)
unmatch["GLUE_size"] = unmatch["GLUE"].map(GLUE_size)
unmatch["frac_all"] = unmatch["count"] / atac.shape[0]
unmatch["frac_NNLS"] = unmatch["count"] / unmatch["NNLS_size"]
unmatch["frac_GLUE"] = unmatch["count"] / unmatch["GLUE_size"]
unmatch.head(n=10)
unmatch.to_csv(f"{PATH}/unmatch.csv", index=False)
| experiments/Atlas/s06_sankey.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
path = os.path.join('/home/santiago/Documents/dev/reservoirpy')
import sys
sys.path.insert(0,path)
from reservoirpy.simulationpy import sim as gd
import numpy as np
import pyvista as pv
import vtk
from shapely.geometry import Point
import math
import matplotlib.pyplot as plt
ct=gd.grid(
grid_type='cartesian',
nx = 3,
ny = 3,
nz = 3,
dx = 100,
dy = 100,
dz= 50,
origin = Point(100,100,-5000),
petrophysics = {'PORO':np.arange(27),'PERMX':200, 'PERMY':300},
azimuth=0,
dip=0
)
ct.petrophysics
p=ct.cartesian_vertices_coord
c=ct.cartesian_center_point_coord
c
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.scatter(p[:,0],p[:,1],p[:,2])
ax.scatter(c[:,0],c[:,1],c[:,2])
ct.get_vertices_z(2,0,2)
ct.get_vertices_coords(0,0,0)
ct.get_vertices_face_z(0,0,2,face='Z+')
ct.get_center_coord(0,0,2)
ct_grid=ct.get_vtk()
#ct_grid.plot(show_edges=True)
ct_grid.plot(scalars='PORO',show_edges=True)
cord = np.array([
0,0,0,0,0,10,
3,0,0,3,0,10,
6,0,0,6,0,10,
9,0,0,9,0,10,
0,3,0,0,3,10,
3,3,0,3,3,10,
6,3,0,6,3,10,
9,3,0,9,3,10,
0,6,0,0,6,10,
3,6,0,3,6,10,
6,6,0,6,6,10,
9,6,0,9,6,10,
0,9,0,0,9,10,
3,9,0,3,9,10,
6,9,0,6,9,10,
9,9,0,9,9,10
])
zcorn = np.array([
[0]*36 +
[3]*72 +
[6]*72 +
[9]*36
]).flatten(order='F')
cp =gd.grid(
grid_type='corner_point',
nx = 3,
ny = 3,
nz = 3,
coord = cord,
zcorn = zcorn,
petrophysics = {'PORO':[0.12]*27,'PERMX':200, 'PERMY':300}
)
cp.get_cell_id(2,2,2)
cp.get_cell_ijk(10)
cp.get_pillar(4)
cp.get_cell_pillars(1,1)
cp.get_vertices_id(0,0,0,order='VTK')
cp.get_vertices_z(2,1,2)
cp.get_vertices_coords(0,0,0,order='VTK')
cp.get_vertices_face_z(0,0,2,face='Z+')
cp.get_center_coord(2,2,2)
cp.get_vertices_face_coords(0,0,0,face='Y+')
grid = cp.get_vtk()
grid
grid.plot(show_edges=True, notebook=False)
coord=np.loadtxt('coord.txt').flatten()
zcorn = np.loadtxt('zcord.txt').flatten()
ts =gd.grid(
grid_type='corner_point',
nx = 20,
ny = 20,
nz = 5,
coord = coord,
zcorn = zcorn,
)
gr = ts.get_vtk()
gr.plot(show_edges=True, notebook=False)
| examples/simulation/grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 1
#
# In this assignment you will work with Linear Regression and Gradient Descent. The dataset that you will use is the so-called *Boston house pricing dataset*.
# ## Preparation
#
# First we'll load some libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas import read_csv
# ## Boston House Pricing
#
# ### Pre-processing
#
# In this part of the assignment you will try to predict the prices of houses in Boston. Let's load the data and see what's in it:
colnames = [label.strip() for label in open("columns.names").readline().rstrip().split(',')]
bdata = read_csv("housing.data", sep=r"\s+", header=None, names=colnames)
bdata.head() #take a look at the data
# It looks like we have some data! There are 13 different features in the dataset (from CRIM to LSTAT) and one value that we will try to predict based on the features (MEDV - median house price).
# What kind of data exactly?
bdata.dtypes
# Mostly floats and some ints, now how many data points?
bdata.shape
# It's also good to check if we have any missing data or NaN's (not-a-number) in the dataset:
print(bdata.isnull().sum())
print(bdata.isna().sum())
# No and no - luckily no need to remove observations.
#
# Now it's time to look more closely at the data and see what it looks like. First, let's use the pandas `describe` method:
pd.set_option('precision', 1)
bdata.describe()
# As you can see, lots of basic statistics are printed for each column. However, since we are dealing with a regression problem, it's far more interesting to see whether there are any correlations between the features.
# We can do it in text mode:
pd.set_option('precision', 2)
bdata.corr(method='pearson')
# If we take a look at the last column we can see the correlations between the various features and the median house prices. Usually, correlations above (absolute value of) 0.5 are 'promising' when it comes to building regression models.
# Here we will lower this limit to 0.4 and drop all the columns except: INDUS, NOX, RM, TAX, PTRATIO, LSTAT.
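As an aside, this threshold-based selection can be automated from the correlation matrix; a minimal sketch on a toy frame (hypothetical values, not the Boston data):

```python
import pandas as pd

# Toy frame standing in for the dataset (made-up values)
df = pd.DataFrame({
    "A": [1, 2, 3, 4, 5],          # strongly correlated with MEDV
    "B": [5, 4, 3, 2, 1],          # strongly anti-correlated with MEDV
    "C": [1, 5, 2, 4, 3],          # roughly uncorrelated with MEDV
    "MEDV": [1.1, 2.0, 2.9, 4.2, 5.0],
})

# Keep columns whose absolute correlation with the target clears the cutoff
corr = df.corr(method="pearson")["MEDV"].abs()
keep = corr[corr >= 0.4].index.tolist()
print(keep)  # ['A', 'B', 'MEDV']
```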
# +
# TODO: remove all the columns except INDUS, NOX, RM, TAX, PTRATIO, LSTAT, MEDV
# make sure to name your new dataframe: clean_data
# Score: 1 point
# clean_data =
clean_data = bdata[['INDUS', 'NOX', 'RM', 'TAX', 'PTRATIO', 'LSTAT', 'MEDV']].copy() # double brackets pass a list of column labels, so a DataFrame (not a Series) is returned
clean_data.describe()
# If everything went well your dataset should now contain only the columns: INDUS, NOX, RM, TAX, PTRATIO, LSTAT, MEDV, check it
# -
# Let's split the data into the features and a vector of values:
features = clean_data.drop('MEDV', axis = 1)
prices = clean_data['MEDV']
# features.describe()
# prices.describe()
# ### Linear Regression and Learning Curves
#
# If you look at the data above you will notice that the features have different scales. To use the regression models you'll need to build a pipeline that scales the features:
# +
# TODO: Build a scikit Pipeline that contains the scaling and linear regression
# make sure to name this pipeline: lin_regressor
# Score: 1 point
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
steps = [('scaler', StandardScaler()), ('LR', LinearRegression())]
# each step is a (name, transformer) tuple; the last step must be an estimator
lin_regressor = Pipeline(steps)
# -
# Having the pipeline build, now it's time to run linear regression:
#
# 1. Split the dataset into the training and validation sets
# 2. Train the model and see what the RMSE on the training and on the validation data is
# 3. (Additionally) Wrap your code into a loop that will plot the learning curves by training your model on subsets of the data of different sizes.
#
# +
# TODO: Split the data into the training and validation data, train the model, plot the learning curves
# make sure that you call your split data sets as below and that you name the predicted values of the training set: y_train_predict
# Points 1 point for each item (3 points total)
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# def plot_learning_curves(model, X, y):
# X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
# train_errors, val_errors = [], []
# for m in range(1, len(X_train)):
# model.fit(X_train[:m], y_train[:m])
# y_train_predict = model.predict(X_train[:m])
# y_val_predict = model.predict(X_val)
# train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
# val_errors.append(mean_squared_error(y_val, y_val_predict))
# plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
# plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
# X_train, X_val, y_train, y_val = train_test_split(features, prices, test_size=0.2, random_state=0)
# # `features` are the predictor columns and `prices` the target vector defined above
# regressor = LinearRegression()
# regressor.fit(X_train,y_train) #traning the algorithm
lin_reg = lin_regressor#LinearRegression()
X_train, X_val, y_train, y_val = train_test_split(features, prices, test_size=0.2, random_state=75)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
lin_reg.fit(X_train[:m], y_train[:m])
y_train_predict = lin_reg.predict(X_train[:m])
y_val_predict = lin_reg.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=1, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=1, label="val")
# #To retrieve the intercept:
# print(regressor.intercept_)
# #For retrieving the slope:
# print(regressor.coef_)  # one coefficient per (scaled) feature
# y_train_predict = regressor.predict(X_val)
# print('RMSE',np.sqrt(mean_squared_error(y_val,y_train_predict)))
# plt.plot(X_val, y_train_predict, 'b', linewidth=1)
# plt.show()
# -
# This doesn't look that impressive - the RMSE is around 5, which given that most values you are trying to predict are in the 20-30 range gives a prediction error of almost 25%!
# We can also plot the errors of our predictions (those are called *residuals*):
# Checking residuals
y_train_predict = lin_regressor.predict(X_train)
plt.scatter(y_train_predict, y_train-y_train_predict)
plt.xlabel("Predicted")
plt.ylabel("Residuals")
plt.show()
# This plot gives us some hope - most of the errors fall in the +/-5 range except a few outliers - perhaps if we could somehow manipulate and clean the input data the results could be better. What's also curious is the shape of the residuals, which looks a bit like a quadratic function - perhaps we have some polynomial dependency?
#
# ## Data preprocessing
#
# ### Normalization
#
# Let's look what our data actually looks like - this can be done by plotting histograms (or the density functions) of all the features in the dataset.
#
# We can either use the Pandas dataframe functionality or rely on the seaborn library:
# +
bdata.hist(bins=20,figsize=(12,10),grid=False);
# using seaborn
import seaborn as sns
import matplotlib.pyplot as plt
fig, axs = plt.subplots(ncols=4, nrows=4, figsize=(12, 10))
index = 0
axs = axs.flatten()
for k, v in bdata.items():
sns.distplot(v, ax=axs[index])
index += 1
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=5.0)
# -
# Essentially, those are the same plots, the seaborn-generated ones are a bit nicer but that's the selling point of seaborn.
# What you will notice is that our input data looks just awful. Only RM has a nice normal distribution, the rest not so much. We see exponential distributions (e.g. NOX, DIS), bimodal distributions (e.g. RAD, INDUS), weird peaks in the data (e.g. ZN) and so on. This is all bad for Linear Regression, which we are trying to use here - Linear Regression works best with normally distributed data. Let's see if we can fix it somehow. We'll start by transforming features that are exponentially distributed - those are CRIM, NOX, DIS and LSTAT. To get rid of the exponential shape you take a logarithm - e.g. for LSTAT the result looks like this:
sns.distplot(np.log(bdata['LSTAT']))
# This is much better than the original!
# Instead of doing transformations one feature at a time we are going to create a new dataframe with the exponential columns transformed. For this we will use the [`assign`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html) function:
# +
#notice that we create a new data frame here by replacing columns in bdata
#normally it's better to create a data transformer (by using ColumnTransformer and FunctionTransformer in this case.)
new_data = bdata.assign(CRIM=lambda x: np.log(x.CRIM),
NOX = lambda x: np.log(x.NOX),
DIS = lambda x: np.log(x.DIS),
LSTAT = lambda x: np.log(x.LSTAT))
#plot the resulting distributions
fig, axs = plt.subplots(ncols=4, nrows=4, figsize=(12, 10))
index = 0
axs = axs.flatten()
for k, v in new_data.items():
sns.distplot(v, ax=axs[index])
index += 1
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=5.0)
# -
# You can see that the distributions for CRIM, NOX, DIS and LSTAT look less skewed now.
#
# ### Outliers
#
# Now we are going to try to remove outliers - those are the observations that are far away from other observations. The easy way to check for outliers in our feature set is by using boxplots:
fig, axs = plt.subplots(ncols=7, nrows=2, figsize=(20, 10))
index = 0
axs = axs.flatten()
for k, v in new_data.items():
sns.boxplot(y=k, data=new_data, ax=axs[index])
index += 1
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=5.0)
# What you will notice when looking at the plots is that the features ZN, RM and B have many outliers; what's even worse, the value we are trying to predict, MEDV, also has some! We are not going to remove the outliers from the features, but we definitely need to get rid of those in MEDV. If you look at the distribution you'll notice a couple of values that are exactly 50 - those were most likely injected into the set when no real price was available or when it was higher than 50 - let's remove those observations:
fig, axs = plt.subplots(ncols=2, nrows=1, figsize=(12, 5))
axs.flatten()
sns.distplot(new_data['MEDV'], ax = axs[0])
new_data = new_data[(new_data['MEDV'] != 50)]
sns.distplot(new_data['MEDV'], ax = axs[1])
# ### Collinearity
#
# The last thing we are going to do is to look at the collinearity of the features - this is checking whether some features are strongly correlated. Such features shouldn't be used together in Linear Regression. We are going to look again at the correlations but this time using the [`heatmap`](https://seaborn.pydata.org/generated/seaborn.heatmap.html) function of seaborn:
corr = new_data.corr().abs()
plt.figure(figsize=(16, 12))
sns.heatmap(corr, square=True, annot=True)
plt.tight_layout()
# If we take a value of 0.8 as a threshold, the following features are highly collinear:
# CRIM: TAX, RAD, NOX
# NOX: TAX, DIS, CRIM
# DIS: NOX
# RAD: TAX, CRIM
# TAX: RAD, CRIM
#
# On the other hand, taking 0.5 as the limit, MEDV seems to be correlated with LSTAT (0.82), PTRATIO (0.52), TAX (0.57), RM (0.69), NOX (0.53), INDUS (0.6), CRIM (0.57).
#
# In the final feature selection for Linear Regression we will use LSTAT, PTRATIO, RM, INDUS and TAX. Neither CRIM nor NOX make it because of the collinearity with TAX.
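One common way to quantify the collinearity discussed above is the variance inflation factor (VIF); a self-contained sketch on synthetic data (VIF is not used in the assignment itself):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()       # R^2 of column j on the rest
        out.append(1.0 / (1.0 - r2))         # VIF_j = 1 / (1 - R^2_j)
    return out

# Demo on synthetic data: x2 is nearly a copy of x1, x3 is independent
np.random.seed(0)
x1, x3 = np.random.randn(200), np.random.randn(200)
x2 = x1 + 0.1 * np.random.randn(200)
print(vif(np.column_stack([x1, x2, x3])))  # first two VIFs large, third near 1
```

A VIF above roughly 5-10 is the usual warning sign that a feature is largely explained by the others.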
prices = new_data['MEDV']
features = new_data[['LSTAT', 'PTRATIO', 'RM', 'INDUS', 'TAX']]
# With the features and the values to predict cleaned up and selected, your task is as follows:
#
# 1. Build a processing pipeline that includes: addition of polynomial features, a feature scaler and a regularized regressor (plain Linear won't do for poly features) \[1 point\]
#
# 2. Split the dataset (new_data) into the training and validation sets and plot the learning curves \[1 point\]
#
# 3. Build at least two additional pipelines:
#
# a) one that includes polynomial features and `LinearRegression`
#
# b) one that includes polynomial features and another kind of regularized Regressor
#
# c) compare the performance of those three approaches by comparing cross-validation scores using the k-fold strategy \[3 points\]
# +
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import FunctionTransformer
from sklearn.compose import ColumnTransformer
def kcross(regressor, X, y):
kf = KFold(n_splits=5)
scores = cross_val_score(regressor, X, y, cv=kf,scoring='neg_mean_squared_error')
return scores.mean(), scores.std()
# TODO: Build the first pipeline (1 point)
my_pipe = [('PF',PolynomialFeatures(degree=3)),('scaler', StandardScaler()), ('Ridge', Ridge())]
lin_regressor = Pipeline(my_pipe)
# TODO: Split the dataset and plot the learning curves (1 point)
X_train, X_val, y_train, y_val = train_test_split(features, prices, test_size=0.2)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
lin_regressor.fit(X_train[:m], y_train[:m])
y_train_predict = lin_regressor.predict(X_train[:m])
y_val_predict = lin_regressor.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=1, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=1, label="val")
mean, std = kcross(lin_regressor, features, prices)
print(f"MSE: mean={-mean:.2f} std={std:.2f}")
# +
from sklearn.linear_model import Lasso
def k_cross(model, x, y):
k_fold = KFold(n_splits=5)
scores = cross_val_score(model, x, y, cv=k_fold, scoring='neg_mean_squared_error')
print(f"MSE: mean={-scores.mean():.2f} std={scores.std():.2f}")
log_transformer = ColumnTransformer(
transformers=[ ('log',
FunctionTransformer(
func=lambda x: np.log(1+x),
inverse_func=lambda x: np.exp(x) - 1,
validate=True),
['LSTAT', 'PTRATIO', 'RM', 'INDUS', 'TAX'])
])
# TODO: Build the additional pipelines, plot learning curves and use cross-validation to compare the regressors (3 points)
my_pipe = [("log_transformer",log_transformer),('PF',PolynomialFeatures(degree=3)),('scaler',StandardScaler()),('LR',LinearRegression())]
lin_regressor = Pipeline(my_pipe)
lasso_regressor = Pipeline( [("log_transformer",log_transformer),('PF',PolynomialFeatures(degree=3)),('std_scaler',StandardScaler()),('Lasso_reg',Lasso())] )
# TODO: Split the dataset and plot the learning curves (1 point)
ridge_regressor = Pipeline( [("log_transformer",log_transformer),('PF',PolynomialFeatures(degree=3)),('std_scaler',StandardScaler()),('Ridge',Ridge())] )
k_cross(lin_regressor, features, prices)
k_cross(lasso_regressor, features, prices)
k_cross(ridge_regressor, features, prices)
# X_train, X_val, y_train, y_val = train_test_split(features, prices, test_size=0.2)
# train_errors, val_errors = [], []
# for m in range(1, len(X_train)):
# lin_regressor.fit(X_train[:m], y_train[:m])
# y_train_predict = lin_regressor.predict(X_train[:m])
# y_val_predict = lin_regressor.predict(X_val)
# train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
# val_errors.append(mean_squared_error(y_val, y_val_predict))
# plt.ylim(0,17.5)
# plt.plot(np.sqrt(train_errors), "r-+", linewidth=1, label="train")
# plt.plot(np.sqrt(val_errors), "b-", linewidth=1, label="val")
#not sure, also try https://towardsdatascience.com/polynomial-regression-bbe8b9d97491
# mean, std = kcross(lin_regressor, features, prices)
# print(f"MSE: mean={-mean:.2f} std={std:.2f}")
# -
| Assignment 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LambdaTheda/DS-Unit-2-Linear-Models/blob/master/3am_SUN_Mar_8_build_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="sugh27O6FWTw" colab_type="code" outputId="79234dba-84a9-441c-e1af-bedc01e577c6" colab={"base_uri": "https://localhost:8080/"}
# read in dataset 1: pokemon Go character sightings
import pandas as pd
#predictemall.read_csv
df = pd.read_csv('300k.csv')
pd.set_option("display.max_rows", 91835)
pd.set_option("display.max_columns", 300)
df
# + id="LUYfSx_CWOJt" colab_type="code" outputId="64f825fd-61f1-4d10-fac6-cea074091723" colab={"base_uri": "https://localhost:8080/", "height": 34}
df.shape
# + id="fAcTWsznWQ4l" colab_type="code" outputId="fdf063dd-83b0-4e13-e0c6-d743e2495cf7" colab={"base_uri": "https://localhost:8080/", "height": 287}
df.head()
# + id="UPpL0A0-O37g" colab_type="code" colab={}
# read in dataset 2: encoded names and corresponding id#s
df_Names = pd.read_csv('pokemonNumbers.csv', header= None)
pd.set_option("display.max_rows", 91835)
df_Names.columns=['pokemonId', 'pokemonName'] # change df_Names column names to align with the proper columns for merge.
df_Names
# + id="l6cdXakqWo-1" colab_type="code" colab={}
df_Names.head(20)
# + id="UKgzEw27Wb3c" colab_type="code" outputId="468e2f66-1dd5-4d5c-ee92-f2b9e191b15b" colab={"base_uri": "https://localhost:8080/", "height": 34}
df_Names.shape
# + id="5D5tFb2ePIVQ" colab_type="code" outputId="854e1d01-3038-4761-caac-1437b3e64f7c" colab={"base_uri": "https://localhost:8080/", "height": 54}
#EXPLORATORY DATA ANALYSIS/DATA WRANGLING
'''
join(df2) does a left join by default (keeps all rows of df1), but
merge does an inner join by default (returns only the matching rows of df1 and df2).
--------
"Merging" two datasets is the process of bringing two datasets together into one, and
aligning the rows from each based on common attributes or columns.
https://www.shanelynn.ie/merge-join-dataframes-python-pandas-index-1/
'''
'''
# try df.join(other.set_index('key'), on='key')
df3 = df.join(df_Names.set_index('pokemonId'), on='pokemonId')
df3
'''# merge df and df_Names to make df3
df3 = pd.merge(df_Names, df, on = 'pokemonId')
pd.set_option("display.max_rows", 91835)
df3
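The join/merge distinction quoted above can be illustrated with two tiny toy frames (hypothetical data, not the Pokémon set):

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2, 3], "a": ["x", "y", "z"]})
right = pd.DataFrame({"key": [2, 3, 4], "b": ["p", "q", "r"]})

# merge(): inner join by default -> only keys present in both frames
inner = pd.merge(left, right, on="key")
print(inner["key"].tolist())  # [2, 3]

# join(): left join by default -> every row of `left`, NaN where unmatched
left_join = left.join(right.set_index("key"), on="key")
print(left_join["key"].tolist())       # [1, 2, 3]
print(left_join["b"].isna().tolist())  # [True, False, False]
```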
# + id="_Q1ap1Dgnn15" colab_type="code" colab={}
# merge df and df_Names to make df3
'''
df3 = pd.merge(df_Names, df, on = 'pokemonId')
pd.set_option("display.max_rows", 91835)
df3
'''
# + id="K97Vz9FDH4DE" colab_type="code" colab={}
#df3.head()
# + [markdown] id="z3h9pCFJCryg" colab_type="text"
# # **check if merge preserved pokemonId:pokemonName**
# + id="JqcQo5K98nWC" colab_type="code" colab={}
# ATTEMPT 1: locate row where latitude = 20.525745, longitude = -97.460829 to check if merge preserved pokemonId:pokemonName
#df.iloc[lambda x: df3['latitude'] == 20.525745] - GET: NotImplementedError: iLocation based boolean indexing on an integer type is not available
# + id="stIqMZlu-8nu" colab_type="code" colab={}
# ATTEMPT 2: locate row of before and after merge..
#df.query('latitude = 20.525745 and longitude = -97.460829')
# + id="tVCKZUZmClaT" colab_type="code" colab={}
# ATTEMPT 3: df.loc[df['column_name'] == some_value]
#df.loc[df['latitude'] == 20.525745] and df.loc[df['longitude'] == -97.460829]
'''
To select rows whose column value equals a scalar, some_value, use ==:
Combine multiple conditions with &:
df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)]
'''
#USE LOCATED ROW TO COMPARE w/col vals in df3
df.loc[(df['latitude'] == 20.525745) & (df['longitude'] == -97.460829) & (df['pokemonId'] == 19)]
# + id="5hAspQJ6E7qi" colab_type="code" colab={}
# try to locate and compare with analogous row in df3- found NO ROWS
#df3.loc[(df3['latitude'] == 20.525745) & (df3['longitude'] == -97.460829) & (df3['pokemonName'] == 'Raticate')] - FOUND NO ROWS
#RETURNS A ROW BUT ROW INDEX OFF BY 1; "Raticate" is id# 20, but queried pokemonId was 19 in df_Names - MAY FIX LATER
df3.loc[(df3['latitude'] == 20.525745) & (df3['longitude'] == -97.460829) & (df3['pokemonId'] == 19)]
# + id="0huqvInRDhKp" colab_type="code" colab={}
# ATTEMPT 4:
df.loc[df['latitude'] == 20.525745]
# + id="txb6Es1HNxr3" colab_type="code" colab={}
# MERGED DATAFRAME df3... ROW INDEX OFF BY 1; "Raticate" is id# 20 in df_Names - fix by dropping pokemonId as 300k.csv dataset creator advised
#df.drop(['B', 'C'], axis=1)- SAMPLE
# NOTE: DataFrame.drop returns a new frame; the earlier attempts "didn't work"
# because the result was never assigned back (and inplace=True wasn't used)
df = df.drop(columns=['pokemonId'])
df_Names = df_Names.drop(columns=['pokemonId'])
#df_Names.dtypes  # dtypes is a property, not a method
#df_Names.head(5)
#df
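`DataFrame.drop` returns a new frame by default, so without reassignment (or `inplace=True`) the original is untouched - a common source of "drop doesn't work" confusion. A minimal illustration with a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({"pokemonId": [1, 2], "name": ["a", "b"]})

dropped = toy.drop(columns=["pokemonId"])  # returns a copy; `toy` unchanged
print("pokemonId" in toy.columns)          # True

toy = toy.drop(columns=["pokemonId"])      # rebind the result to keep the drop
print("pokemonId" in toy.columns)          # False
```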
# + id="OKggO5h4tg-X" colab_type="code" colab={}
#try with (from https://colab.research.google.com/github/LambdaTheda/CheatSheets/blob/master/Data_Cleaning_and_Exploring_Cheat_Sheet.ipynb#scrollTo=yYUg0b4F255o )
'''
combine two data frames
df3 = df1.append(df2)
'''
# + id="O9q2C237R_d8" colab_type="code" colab={}
# STILL CAN'T DROP 'pokemonID' from ANY df!!
'''
df3.drop(columns=['pokemonId'])
df3.head(5)
'''
# + id="OzJ8I2dKtSmd" colab_type="code" colab={}
# TRY with (from https://colab.research.google.com/github/LambdaTheda/CheatSheets/blob/master/Data_Cleaning_and_Exploring_Cheat_Sheet.ipynb#scrollTo=bcpPRe87F8OQ)
'''
drop a column or row
df.drop('column1', axis='columns')
'''
# + id="J8uB_gw8Y2Vc" colab_type="code" outputId="53ba3d0a-d476-4642-97e4-935be814460a" colab={"base_uri": "https://localhost:8080/", "height": 34}
#Choosing'appearedDayOfWeek' as target
df['appearedDayOfWeek'].nunique()
# + id="SwZam-oxbs9_" colab_type="code" outputId="96640c8e-cb2d-423e-f220-f7197983cff7" colab={"base_uri": "https://localhost:8080/", "height": 153}
df['appearedDayOfWeek'].value_counts()
# + id="xZAcL6WOZQkj" colab_type="code" colab={}
#df3['appearedDayOfWeek'].nunique()
# + [markdown] id="GJXS0knVOS0B" colab_type="text"
# # If df['appearedDayOfWeek'].nunique() changes from 7 the next time I reload the file(s), I will CHOOSE the NEW TARGET TO PREDICT to be 'continent' (nunique=11)!
# + id="y_gaAZ1yZPYb" colab_type="code" colab={}
# check df Nulls- returns 0 for all columns
df.isnull().sum()
# + id="jslz4d9GWLYn" colab_type="code" outputId="5eae2237-6f73-4569-cb2a-60b19f29e5e2" colab={"base_uri": "https://localhost:8080/", "height": 68}
# check df_Names Nulls- returns 0 for all columns
df_Names.isnull().sum()
# + id="aTfEczWLVW_I" colab_type="code" colab={}
# check df3 Nulls- returns 0 for all columns
df3.isnull().sum()
# + id="bIMOJ8PCW10H" colab_type="code" colab={}
# THIS TIME AROUND GETTING NO NULLS...
#drop df cols with nulls
#df.dropna() - returned a copy, so nothing seemed to change
'''
df = df.dropna(axis='columns')  # reassign (or pass inplace=True) to keep the result
df
'''
# + id="EzmogW5KScRD" colab_type="code" colab={}
#df.columns.to_list()
# + id="3nP5_4MCoD7x" colab_type="code" colab={"base_uri": "https://localhost:8080/"} outputId="6f70dbc9-fdf9-44b6-d4d0-2bb004b6bb05"
df['appearedDayOfWeek']
# + id="9GWbnyvBqlgW" colab_type="code" colab={}
#For Correlation matrix: setting categories to type 'category' for faster operations
#df['column1'] = df['column1'].astype('category')
# + id="f59jULiMcEcm" colab_type="code" colab={}
#View correlation matrix- at the moment only interested in features positively correlated with 'appearedDayOfWeek'
df.corr()
# + id="RBlXNXBZooLe" colab_type="code" colab={}
#Correlation heatmap
corrmat = df.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
# + id="0qr2JGASdJym" colab_type="code" outputId="d3aa1134-982f-44ee-be58-3846de423eeb" colab={"base_uri": "https://localhost:8080/", "height": 54}
# THIS TIME AROUND GETTING NO NULLS...
#trim df to reduce noise including nulls
'''
df_trimmed = df.drop(columns=['gymIn100m',
'gymIn250m',
'gymIn500m',
'gymIn1000m',
'gymIn2500m',
'gymIn5000m',
'cooc_114',
'cooc_115',
'cooc_116',
'cooc_117',
'cooc_118',
'cooc_119',
'cooc_120',
'cooc_121',
'cooc_122',
'cooc_123',
'cooc_124',
'cooc_125',
'cooc_126',
'cooc_127',
'cooc_128',
'cooc_129',
'cooc_130',
'cooc_131',
'cooc_132',
'cooc_133',
'cooc_134',
'cooc_135',
'cooc_136',
'cooc_137',
'cooc_138',
'cooc_139',
'cooc_140',
'cooc_141',
'cooc_142',
'cooc_143',
'cooc_144',
'cooc_145',
'cooc_146',
'cooc_147',
'cooc_148',
'cooc_149',
'cooc_150',
'cooc_151',
])
'''
# + id="jMxKVn2res1e" colab_type="code" colab={}
#trim df to reduce noise:
''' 1st do
1) CORRELATION MATRIX &
2) HEATMAP to try to determine which columns we may drop that may not or
not significantly affect predicting TARGET = 'continent'
'''
# + id="lBy5BFX-XEZu" colab_type="code" outputId="7575a1b4-e942-4421-e857-18c3f2d5521c" colab={"base_uri": "https://localhost:8080/", "height": 34}
df3['continent'].nunique()
# + id="dekbUc4NXQc-" colab_type="code" colab={}
df3['continent'].value_counts()
# + id="M2RNz_HnXnCS" colab_type="code" outputId="2a32639c-dff1-42cd-acfc-9e03fbd6ff94" colab={"base_uri": "https://localhost:8080/", "height": 34}
df['continent'].nunique()
# + id="NJC8tT3-cRqQ" colab_type="code" colab={}
#df_trimmed.sort_values(by='pokemonId', ascending=True)  # DataFrame has no sort_by method
# + id="wPUDkS06pHZV" colab_type="code" colab={}
#check data distribution to see if a linear model is appropriate- IT IS NOT; THIS IS A MULTI-CLASS CLASSIFICATION PROBLEM
#df_trimmed.plot.hist()
# + id="d8hHCaODZQRd" colab_type="code" colab={}
#Get BASELINE- the simplest first model: predict the majority class for every row
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 300000)
'''
For regression, the baseline is typically the mean of the target;
for classification, the BASELINE IS PREDICTING THE MODE CLASS
(the class with the largest number of observations) for every row.
'''
df['appearedDayOfWeek'].value_counts()
# our BASELINE IS SATURDAY, WITH 72201 appearances on that day
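The majority-class baseline accuracy follows directly from the value counts. The figures below are the counts printed by `value_counts()` in this notebook; treat them as a snapshot:

```python
# class counts as printed by value_counts() above (snapshot figures)
counts = {
    'Saturday': 72201, 'Friday': 68037, 'Wednesday': 60115,
    'Thursday': 54257, 'Sunday': 32056, 'dummy_day': 9001, 'Tuesday': 354,
}
total = sum(counts.values())
majority_class = max(counts, key=counts.get)
baseline_accuracy = counts[majority_class] / total  # ~0.244 for Saturday
```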
# + id="Il-kyMlVjUvj" colab_type="code" colab={}
df['appearedDayOfWeek'].nunique()
# + id="vzKtGQnzT6Yr" colab_type="code" colab={}
# Fit a Classification model
# Try a shallow decision tree as a fast, first model
#pip install --upgrade category_encoders
# from sklearn.preprocessing import CategoricalEncoder
#pip install category_encoders
# !pip install category_encoders==2.*
import numpy as np
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
#encoder = ce.OrdinalEncoder(cols=[...])
target = 'appearedDayOfWeek'
features = ['appearedMonth', 'appearedYear', 'appearedDay', 'pokestopDistanceKm', 'weather', 'latitude', 'longitude']
#train = df
X_train, X_test, y_train, y_test = train_test_split(df[features], df[target])
train, test = train_test_split(df)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier()
)
# + [markdown] id="8r12pYVKsfCD" colab_type="text"
# # **Approaches to convert appearedDayOfWeek string classes into floats to calculate model Accuracy**
# + id="trD2nGSHnbCn" colab_type="code" colab={}
# function to Convert 'appearedDayOfWeek' string values into floats (abandoned draft; Series.replace takes a mapping, not positional strings)
'''
def convertDayToFloat():
week = list(df['appearedDayOfWeek'].replace('1.0', '2.0', '3.0','4.0','5.0','6.0','7.0'))
print(week)
convertDayToFloat
'''
# + id="rlx1UjBH0CWj" colab_type="code" colab={}
#dictionary to Convert 'appearedDayOfWeek' string values into floats
#(named day_to_float rather than `dict`, which would shadow the built-in)
day_to_float = {'Sunday':'1.0', 'dummy_day':'2.0', 'Tuesday':'3.0','Wednesday':'4.0','Thursday':'5.0', 'Friday':'6.0', 'Saturday':'7.0' }
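On the real DataFrame this dictionary would be applied with `df['appearedDayOfWeek'].map(...)`. A dependency-free sketch of the lookup; note the mapping targets are float *strings*, so `float(...)` is needed for numeric values:

```python
# same mapping as the dictionary above
day_to_num = {'Sunday': '1.0', 'dummy_day': '2.0', 'Tuesday': '3.0',
              'Wednesday': '4.0', 'Thursday': '5.0', 'Friday': '6.0',
              'Saturday': '7.0'}

days = ['Saturday', 'Tuesday', 'Sunday']
encoded = [float(day_to_num[d]) for d in days]  # string codes -> floats
```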
# + id="jpLl2Osmszpk" colab_type="code" colab={"base_uri": "https://localhost:8080/"} outputId="e729c75d-aef7-4a26-f558-d6ef062e23fb"
#Label encoding: using https://towardsdatascience.com/categorical-encoding-using-label-encoding-and-one-hot-encoder-911ef77fb5bd
'''
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
# creating initial dataframe
bridge_types = ('Arch','Beam','Truss','Cantilever','Tied Arch','Suspension','Cable')
bridge_df = pd.DataFrame(bridge_types, columns=['Bridge_Types'])
# creating instance of labelencoder
labelencoder = LabelEncoder()
# Assigning numerical values and storing in another column
bridge_df['Bridge_Types_Cat'] = labelencoder.fit_transform(bridge_df['Bridge_Types'])
bridge_df
----
Saturday 72201
Friday 68037
Wednesday 60115
Thursday 54257
Sunday 32056
dummy_day 9001
Tuesday 354
'''
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
# df = initial dataframe
# creating initial dataframe
daysOfWeek = ('Saturday','Friday','Wednesday','Thursday','Sunday','dummy_day','Tuesday')
strToNum_df = pd.DataFrame(daysOfWeek, columns=['daysOfWeek'])
# creating instance of label_encoder
labelencoder = LabelEncoder()
# Assigning numerical values and storing in another column
df['labelEncDays2Nums'] = labelencoder.fit_transform(df['appearedDayOfWeek'])
df['labelEncDays2Nums']  # the bare name alone would raise NameError
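`LabelEncoder.fit_transform` simply assigns each class its index in the sorted list of unique labels. A dependency-free sketch of the same idea:

```python
def label_encode(values):
    """Map each label to its index in the sorted unique labels,
    mirroring sklearn's LabelEncoder.fit_transform."""
    classes = sorted(set(values))
    lookup = {label: idx for idx, label in enumerate(classes)}
    return [lookup[v] for v in values]

days = ['Saturday', 'Friday', 'Saturday', 'Tuesday']
codes = label_encode(days)  # alphabetical: Friday=0, Saturday=1, Tuesday=2
```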
# + id="_D1_nCBDvhvW" colab_type="code" colab={}
#label encoding using https://colab.research.google.com/github/LambdaTheda/CheatSheets/blob/master/Data_Cleaning_and_Exploring_Cheat_Sheet.ipynb#scrollTo=-qgpc344EMzv
#df['column1'] = df['column1'].cat.codes  # template line from the cheat sheet; df has no 'column1' column
# + id="ETwfRWzQnNdh" colab_type="code" colab={}
#fit model 1: Random Forest Classifier and score accuracy
# + id="VzB3nmXFnOzC" colab_type="code" colab={}
pipeline.fit(X_train, y_train)
acc_score = pipeline.score(test[features], test[target])
#ra_score = roc_auc_score(test[target], pipeline.predict(test[features]))
print(f'Test Accuracy: {pipeline.score(X_test, y_test)}')
# roc_auc_score on a multiclass target needs class probabilities and multi_class='ovr':
#print(f"Test ROC AUC: {roc_auc_score(y_test, pipeline.predict_proba(X_test), multi_class='ovr')}\n")
'''
print(f'Val Accuracy: {acc_score}')
print(f'Val ROC AUC: {ra_score}')
'''
| 3am_SUN_Mar_8_build_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: base
# language: python
# name: base
# ---
#Dependencies
from bs4 import BeautifulSoup
from splinter import Browser
import pandas as pd
import requests
import os
import time
#Site Navigation
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
# ### NASA Mars News
news_url = "http://mars.nasa.gov/news/"
browser.visit(news_url)
time.sleep(5)
html = browser.html
soup = BeautifulSoup(html, "html.parser")
news_title = soup.find("div", class_='content_title').text
news_p = soup.find("div", class_='article_teaser_body').text
print(f"News Title: {news_title}")
print(f"News Paragraph: {news_p}")
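The title/teaser extraction above can be exercised offline against a static snippet. The class names mirror the ones scraped from the NASA page, but the HTML content here is purely illustrative:

```python
from bs4 import BeautifulSoup

# static snippet with the same class names as the live page (illustrative content)
html = """
<div class="content_title">Mars Rover Update</div>
<div class="article_teaser_body">The rover continues its traverse.</div>
"""
soup = BeautifulSoup(html, "html.parser")
news_title = soup.find("div", class_="content_title").text
news_p = soup.find("div", class_="article_teaser_body").text
```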
# ### JPL Mars Space Images - Featured Image
# +
#Assigned URL
main_url = "https://www.jpl.nasa.gov"
image_url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(image_url)
time.sleep(5)
html = browser.html
soup = BeautifulSoup(html, "html.parser")
# -
image_results = soup.find("ul", class_="articles")
image_href = image_results.find("a", class_='fancybox')['data-fancybox-href']
featured_image_url = main_url + image_href
featured_image_url
# ### Mars Weather
# +
twitter_url = "https://twitter.com/marswxreport?lang=en"
browser.visit(twitter_url)
time.sleep(3)
# -
html = browser.html
soup = BeautifulSoup(html, "html.parser")
spans=soup.find_all('span')
for span in spans:
if 'InSight sol ' in span.text:
print(span.text)
mars_weather = span.text
break
mars_weather
# ### Mars Facts
table_facts = pd.read_html("https://space-facts.com/mars/")
mars_df = table_facts[0]
mars_df.columns = ["Category", "Measurements"]
mars_facts = mars_df.set_index("Category")
mars_facts
# ### Mars Hemispheres
import time
hemisphere_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
browser.visit(hemisphere_url)
time.sleep(3)
hemispheres_main_url = 'https://astrogeology.usgs.gov'
# +
html = browser.html
soup = BeautifulSoup(html, "html.parser")
div_items = soup.find_all('div', class_='item')
hemisphere_image_urls = []
hemispheres_main_url = 'https://astrogeology.usgs.gov'
for item in div_items:
title = item.find('h3').text
end_points = item.find('a', class_='itemLink product-item')['href']
browser.visit(hemispheres_main_url + end_points)
image_html = browser.html
soup = BeautifulSoup( image_html, 'html.parser')
image_url = hemispheres_main_url + soup.find('img', class_='wide-image')['src']
hemisphere_image_urls.append({"Title": title, "Image_URL": image_url})
hemisphere_image_urls
# +
titles = []
hemispheres = soup.find_all('div', class_="item")
for hemisphere in hemispheres:
title = hemisphere.find("h3").text
titles.append(title)
titles
# +
links = soup.find_all('a', class_="itemLink product-item")
links_href= []
for link in links:
links_href.append(link["href"])
links_href= list(set(links_href))
links_href
# -
| .ipynb_checkpoints/mission_to_mars-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Prerequisites For Running this Kernel ----- a stable internet connection
# 1. Log in to your Instagram handle
# 1. Submit with the sample username and password
# #### Please enter your username in place of 'SAMPLE USERNAME' and your password in place of 'SAMPLE PASSWORD' before running this kernel
#importing the webdriver and the classes used for explicit waits
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait # for explicit waits
from selenium.webdriver.support import expected_conditions as EC # for waiting on a certain element
from selenium.webdriver.common.by import By # for choosing how elements are located
# executing a new chrome driver to open Instagram
driver=webdriver.Chrome(executable_path="/Users/karna/Desktop/chromedriver")
driver.maximize_window()
#open the Instagram website
driver.get("https://www.instagram.com/")
# explicit wait of up to 10 seconds, reused wherever waiting is required
wait=WebDriverWait(driver,10)
#locating the area where we can enter username
email1=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"_2hvTZ")))
#entering the username
email1.send_keys("SAMPLE USERNAME")
#locating the password field and sending the password
driver.find_element_by_name("password").send_keys("SAMPLE PASSWORD")
#now submitting the username and password
driver.find_element_by_class_name("sqdOP").submit()
#locating the popup-window
clickable=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"cmbtv")))
clickable.click()
#locating the pop-up window again
clickable=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"HoLwm")))
clickable.click()
# # ------------------------------------------------------------------------------------------------------------
# 2. Type for “food” in search bar and print all the names of the Instagram Handles that are displayed in list after typing “food”
# 1. Note : Make sure to avoid printing hashtags
#clicking on search bar of instagram
driver.find_element_by_class_name("pbgfb").click()
#typing "food" in search bar
driver.find_element_by_class_name("XTCLo").send_keys("food")
#locating all usernames that appeared after typing 'food'
food_list=driver.find_elements_by_class_name("Ap253")
for i in food_list:
    #getting the handle or hashtag text as a str object
    a=i.text
    if('#' in a):#skip hashtags; the task asks to print handles only
        continue
    print(a)
# # ------------------------------------------------------------------------------------------------------------
# 3. Searching and Opening a profile using
# 1. Open profile of “So Delhi”
# +
driver.get("https://www.instagram.com/")
#locating searchbar
driver.find_element_by_class_name("pbgfb").click()
driver.find_element_by_class_name("XTCLo").send_keys("So Delhi")
SoDelhi=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
SoDelhi.click()
# -
# # ------------------------------------------------------------------------------------------------------------
# 4. Follow/Unfollow given handle -
# 4.1 Open the Instagram Handle of “So Delhi”
# +
driver.get("https://www.instagram.com/")
#locating searchbar
driver.find_element_by_class_name("pbgfb").click()
driver.find_element_by_class_name("XTCLo").send_keys("So Delhi")
SoDelhi=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
SoDelhi.click()
# -
# 4. Follow/Unfollow given handle -
# 4.2 Start following it. Print a message if you are already following
# +
try:
#locating follow button
follow=driver.find_element_by_class_name("jIbKX")
follow.click()
    print("Now you are Following")
except:
print("You are already Following")
# -
# 4. Follow/Unfollow given handle -
# 3. After following, unfollow the instagram handle. Print a message if you have already unfollowed.
try:
#locating follow button
driver.find_element_by_class_name("-fzfL").click()
unfollow=driver.find_element_by_class_name("aOOlW")
unfollow.click()
    print("Now you are Unfollowing")
except:
    print("You are already Unfollowing")
# # ------------------------------------------------------------------------------------------------------------
# 5. Like/Unlike posts
# 5.1 Liking the top 30 posts of the ‘dilsefoodie'. Print message if you have already liked it.
# +
driver.get("https://www.instagram.com")
driver.find_element_by_class_name("pbgfb").click()
driver.find_element_by_class_name("XTCLo").send_keys("dilsefoodie")
import time  # needed for time.sleep below (only imported in later cells otherwise)
from bs4 import BeautifulSoup
dilsefoodie=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
dilsefoodie.click()
time.sleep(3)
post=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"eLAPa")))
post.click()
heart=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"fr66n")))
fill=str(heart.get_attribute("outerHTML"))
if(not "#ed4956" in fill):
heart.click()
for i in range(29):
driver.find_element_by_class_name("coreSpriteRightPaginationArrow").click()
heart=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"fr66n")))
fill=str(heart.get_attribute("outerHTML"))
if("#ed4956" in fill):
pass
else:
heart.click()
print("You have Liked first 30 posts of dilsefoodie")
driver.get("https://www.instagram.com/")
# -
# 5. Like/Unlike posts
# 5.2 Unliking the top 30 posts of the ‘dilsefoodie’. Print message if you have already unliked it.
# +
import time
driver.get("https://www.instagram.com/")
driver.find_element_by_class_name("pbgfb").click()
driver.find_element_by_class_name("XTCLo").send_keys("dilsefoodie")
from bs4 import BeautifulSoup
dilsefoodie=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
dilsefoodie.click()
time.sleep(3)
post=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"eLAPa")))
post.click()
heart=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"fr66n")))
fill=str(heart.get_attribute("outerHTML"))
if("#ed4956" in fill):
heart.click()
for i in range(29):
driver.find_element_by_class_name("coreSpriteRightPaginationArrow").click()
heart=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"fr66n")))
fill=str(heart.get_attribute("outerHTML"))
if("#ed4956" in fill):
heart.click()
print("You have Unliked first 30 posts of dilsefoodie")
driver.get("https://www.instagram.com/")
# -
# # ------------------------------------------------------------------------------------------------------------
# 6. Extract list of followers
# 1. Extract the usernames of the first 500 followers of ‘foodtalkindia’ and ‘sodelhi’.
# +
#6.1 Extracting the username of the first 500 followers of "foodtalkindia"
driver.get("https://www.instagram.com/")
driver.find_element_by_class_name("pbgfb").click()# finding search bar
driver.find_element_by_class_name("XTCLo").send_keys("foodtalkindia")# searching foodtalkindia in the search bar
foodtalkindia=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
foodtalkindia.click()# clicking on the search result for foodtalkindia
follower=wait.until(EC.presence_of_element_located((By.XPATH,"//ul[@class='k9GMp ']//a")))
follower.click()# clicking on the followers button
scr1=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"isgrP")))# locating the pop-up window in which follower names appear
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)# scrolling the pop-up page
username_foodtalkindia=driver.find_elements_by_class_name("FPmhX")# collecting the username elements shown in the followers pop-up
while(len(username_foodtalkindia)<=500):# keep scrolling until at least 500 username elements are loaded
username_foodtalkindia=driver.find_elements_by_class_name("FPmhX")
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)
print("Followers of foodtalkindia : ")
print()
for i in range(500):
    username_foodtalkindia[i]=str(username_foodtalkindia[i].get_attribute("title"))# extracting the username string from the selenium element
print(username_foodtalkindia[i])
driver.get("https://instagram.com/")
# +
# 6.1 Extracting the username of the first 500 followers of "sodelhi"
driver.get("https://www.instagram.com/")
driver.find_element_by_class_name("pbgfb").click()# finding search bar
driver.find_element_by_class_name("XTCLo").send_keys("sodelhi")# searching sodelhi in the search bar
sodelhi=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
sodelhi.click()# clicking on the search result for sodelhi
follower=wait.until(EC.presence_of_element_located((By.XPATH,"//ul[@class='k9GMp ']//a")))
follower.click()# clicking on the followers button
scr1=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"isgrP")))# locating the pop-up window in which follower names appear
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)# scrolling the pop-up page
username_sodelhi=driver.find_elements_by_class_name("FPmhX")# collecting the username elements shown in the followers pop-up
while(len(username_sodelhi)<=500):# keep scrolling until at least 500 username elements are loaded
username_sodelhi=driver.find_elements_by_class_name("FPmhX")
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)
print("Followers of sodelhi : ")
print()
for i in range(500):
    username_sodelhi[i]=str(username_sodelhi[i].get_attribute("title"))# extracting the username string from the selenium element
print(username_sodelhi[i])
driver.get("https://instagram.com/")
# -
# 6. Extract list of followers
# 6.2 Now print all the followers of “foodtalkindia” that you are following but those who don’t follow you.
#
#
# ##### To solve this, I first collect my own followers (own_followers) and the accounts I follow (own_following),
# ##### then take foodtalkindia's followers (username_foodtalkindia) and compute the set expression
# ##### (own_following & username_foodtalkindia) - own_followers
# +
#6.2 getting the username of my account followers, ie whose username and password is used for running this kernal
driver.get("https://www.instagram.com/")
driver.find_element_by_class_name("SKguc").click()# clicking on my own username shown on Instagram
follower=wait.until(EC.presence_of_element_located((By.XPATH,"//ul[@class='k9GMp ']//a")))# locating the follower button
follower.click()# clicking on follower button
follower_number=int(follower.text.split()[0])# finding the number of own followers
scr1=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"isgrP")))# locating the pop-up window in which followers usernames appeared
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)# scrolling the pop-up-window
own_followers=driver.find_elements_by_class_name("FPmhX")# creating the list of selenium object of all usernames
while(len(own_followers)!=follower_number):# keep scrolling until every follower element is loaded
own_followers=driver.find_elements_by_class_name("FPmhX")
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)
for j in range(len(own_followers)):
    own_followers[j]=str(own_followers[j].get_attribute("title"))# extracting the username string from the selenium element
print(own_followers[j])
driver.get("https://www.instagram.com/")
# +
#getting the username whom account holder is following
driver.get("https://www.instagram.com/")
driver.find_element_by_class_name("SKguc").click()# clicking on username of account holder
following=wait.until(EC.presence_of_all_elements_located((By.XPATH,"//ul[@class='k9GMp ']//a")))# locating the following button
following_number=int(following[1].text.split()[0])# finding the number of following
following[1].click() # clicking on following button
scr1=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"isgrP")))# locating the pop-up window on which following username appears
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)# scrolling the pop-up window to load every username in it
own_following=driver.find_elements_by_class_name("FPmhX")# creating the selenium object of all username
while(len(own_following)!=following_number):# keep scrolling until every following element is loaded
own_following=driver.find_elements_by_class_name("FPmhX")
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight", scr1)
for j in range(len(own_following)):
    own_following[j]=str(own_following[j].get_attribute("title"))# extracting the username string from the selenium element
print(own_following[j])
driver.get("https://www.instagram.com/")
# -
username_foodtalkindia=set(username_foodtalkindia)
own_followers=set(own_followers)# converting own_followeres list to set
own_following=set(own_following)# converging own_following list to set
print((own_following & username_foodtalkindia)-own_followers)# set intersection (&) and difference (-) give the desired result
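The set algebra can be sanity-checked on toy data (hypothetical usernames):

```python
# hypothetical usernames, just to exercise the set expression
foodtalkindia_followers = {'alice', 'bob', 'carol', 'dave'}
own_following = {'bob', 'carol', 'erin'}
own_followers = {'carol'}

# followers of foodtalkindia whom I follow but who do not follow me back
result = (own_following & foodtalkindia_followers) - own_followers
```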
# # ------------------------------------------------------------------------------------------------------------
# 7. Check the story of ‘coding.ninjas’. Consider the following Scenarios and print error messages accordingly -
# 1. If You have already seen the story.
# 2. Or The user has no story.
# 3. Or View the story if not yet seen.
# +
import time
driver.get("https://www.instagram.com/")
driver.find_element_by_class_name("pbgfb").click()# locating search bar
driver.find_element_by_class_name("XTCLo").send_keys("coding.ninjas")# searching coding.ninjas in search bar
coding=wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
coding.click()# clicking on searched coding.ninjas
time.sleep(3)
try:
    status=driver.find_element_by_class_name("h5uC0")#locating the status ring; if there is no story this raises and falls through to the except branch
view=driver.find_element_by_class_name("CfWVH")# locating status button
    if(view.get_attribute("height")=='131'):# an already-seen story renders its ring at height 131
print("You have already seen the Story of this account")
else:
print("Story is being seen")
view.click()
except:
print("This account has no Story")
driver.get("https://www.instagram.com/")
# -
# # ------------------------------------------------------------------------------------------------------------
# # ------------------------------------------------------------------------------------------------------------
| Instagram Bot/Project_InstaBot Part 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
import matplotlib.pyplot as plt
import imutils
import glob
import math
import progressbar
from scipy.spatial.distance import directed_hausdorff
def ShowResizedIm(img,windowname,scale):
cv2.namedWindow(windowname, cv2.WINDOW_NORMAL) # Create window with freedom of dimensions
height, width = img.shape[:2] #get image dimension
cv2.resizeWindow(windowname,int(width/scale) ,int(height/scale)) # Resize image
cv2.imshow(windowname, img) # Show image
# +
#=========USER START================
#folder path
path = 'raw image/*.jpg'
#=========USER END================
# +
image_list = []
for filename in glob.glob(path):
image_list.append(filename)
name_list = []
# -
img = cv2.imread(image_list[4])
ShowResizedIm(np.hstack([img,]),"mark",2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
#color image
b,g,r = cv2.split(img)
ShowResizedIm(np.hstack((b,g,r)),"img1",3)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
#highboost filtering
#https://theailearner.com/2019/05/14/unsharp-masking-and-highboost-filtering/
# Blur the image
kernel = np.ones((21,21),np.float32)/441
dst = cv2.filter2D(g,-1,kernel)
# Apply Unsharp masking
k = 5
unsharp_image = cv2.addWeighted(g, k+1, dst, -k, 0)
ShowResizedIm(np.hstack((g,unsharp_image)),"img1",3)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
imgcopy = unsharp_image.copy()
height, width = imgcopy.shape[:2] #get image dimension
scaler = 1
imgcopy = cv2.resize(imgcopy,(int(width*scaler),int(height*scaler)))
#canny edge
imgcopy = cv2.GaussianBlur(imgcopy,(15,15),0)
edges = cv2.Canny(imgcopy,80,100)
cv2.imwrite("img.jpg",edges)
ShowResizedIm(np.hstack((imgcopy,edges)),"img1",2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
canvas = np.zeros((100,100), dtype=np.uint8)
canv_H,canv_W = canvas.shape[:2]
cv2.line(canvas,(0,0),(100,100),(255,255,255),2)
cv2.line(canvas,(0,15),(85,100),(255,255,255),2)
cv2.line(canvas,(50,0),(50,100),(255,255,255),2)
window_size = 19
half_window_size = int(window_size/2)
alpha = 5
M = int(360/alpha) #72
# +
with progressbar.ProgressBar(max_value=(canv_H-half_window_size)-half_window_size) as bar:
progress = 0
for u in range(half_window_size,canv_H-half_window_size):
for v in range(half_window_size,canv_W-half_window_size):
if canvas[u][v] == 0:
imCrop = canvas[int(v-half_window_size):int(v+half_window_size+1),
                                    int(u-half_window_size):int(u+half_window_size+1)]  # note: row/col order here is swapped relative to the canvas[u][v] test above
#construct set Sk , k = 1,2,3,...,M
Sk = {}
for k in range(1,M+1):
Di = []
for theta in range((k-1)*alpha,k*alpha):
#create mask
mask = np.zeros((window_size,window_size), dtype=np.uint8)
if theta < 90:
mask_line_u = int(half_window_size-(math.tan(math.radians(theta))*half_window_size))
mask_line_v = window_size-1
elif theta == 90:
mask_line_u = 0
mask_line_v = half_window_size
elif theta < 270:
mask_line_u = int(half_window_size-(math.tan(math.radians(180-theta))*half_window_size))
mask_line_v = 0
elif theta == 270:
mask_line_u = window_size-1
mask_line_v = half_window_size
else:
mask_line_u = int(half_window_size-(math.tan(math.radians(theta))*half_window_size))
mask_line_v = window_size-1
cv2.line(mask,(half_window_size,half_window_size),(mask_line_v,mask_line_u),(255,255,255),1)
#do and operation to mask and imcrop
bit_and = cv2.bitwise_and(imCrop,mask)
if np.any(bit_and):
Di.append(find_nearest_distance(bit_and,(half_window_size,half_window_size)))
ShowResizedIm(np.hstack((imCrop,mask,bit_and)),"img1",0.1)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sk[k] = Di
                #construct set Hd (only when both opposite sectors produced distances)
                for j in range(1,int(M/2)+1):
                    if bool(Sk[j]) and bool(Sk[j+int(M/2)]):
                        print(directed_hausdorff([Sk[j]], [Sk[j+int(M/2)]])[0])  # [0] is the distance; requires equal-length lists
bar.update(progress)
progress = progress+1
# -
def find_nearest_distance(img, target):
    # distance from `target` to the nearest nonzero pixel in `img`
    # (run this cell before the main loop above, which calls it)
    nonzero = cv2.findNonZero(img)
    distances = np.sqrt((nonzero[:,:,0] - target[0]) ** 2 + (nonzero[:,:,1] - target[1]) ** 2)
    return float(distances.min())  # nearest = minimum, not the first entry
# +
#construct set Hd
for j in range(1,int(M/2)+2):
print(j)
    if len(Sk[j]) == len(Sk[j+int(M/2)-1]):  # use ==, not `is`, for value comparison
        if bool(Sk[j]) and bool(Sk[j+int(M/2)-1]):
            print(directed_hausdorff([Sk[j]], [Sk[j+int(M/2)-1]])[0])  # [0] is the distance component of the returned tuple
# -
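SciPy's `directed_hausdorff` expects 2-D arrays of coordinates, which makes it awkward for the 1-D distance lists stored in `Sk`. For such lists the directed Hausdorff distance can be computed directly (a sketch, not the notebook's original code):

```python
def directed_hausdorff_1d(a_set, b_set):
    """Directed Hausdorff distance between two 1-D point sets:
    max over a of (min over b of |a - b|)."""
    return max(min(abs(a - b) for b in b_set) for a in a_set)

d = directed_hausdorff_1d([1.0, 2.0, 3.0], [2.0, 4.0])  # -> 1.0
```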
Sk
| retinal vessel segmentation(Binary Hausdorff Symmetry) failed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Display HTML
from IPython.display import Image
from IPython.core.display import HTML
# # Complete Machine Learning Workflow
# + active="" endofcell="--"
# 1. Recipe for Project
#
# -
# -
# -
# --
# + active="" endofcell="--"
# 2. Input Data
#
# -
# -
# -
# --
# + active=""
# 3. EDA
#
# - First Inspection
#
# - Automated
#
# - Manual
#
# + active="" endofcell="--"
# 4. Verifying Assumptions / Technical Conditions
#
# - Linear regression
# - L: Linear model
# - I: Independent Observations
# - N: Normally distributed points around the line
# - E: Equal variability around the line for all values of the explanatory variable
#
# -
# -
# --
# + active=""
# 5. Preprocessing
#
# - Missing
# - Transforms
# - Outliers
# - Scaling
# - Class Imbalance
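As a minimal illustration of the "Missing" and "Scaling" steps above (a dependency-free sketch; a real project would use pandas/scikit-learn):

```python
def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    """Scale values linearly onto the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

filled = impute_mean([1.0, None, 3.0])  # -> [1.0, 2.0, 3.0]
scaled = min_max_scale(filled)          # -> [0.0, 0.5, 1.0]
```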
# + jupyter={"source_hidden": true}
Image(url= "https://assets.datacamp.com/production/repositories/4983/datasets/238dde66d8af1b7ebd8ffe82de9df60ad6a68d22/preprocessing3.png")
# + active="" endofcell="--"
# 6. Feature Selection and/or Extraction
#
# -
# -
# -
# --
# + active=""
# 7. Model Building
#
# - Baseline model
#
# - Fit
# - Predict
# - Evaluate
# - Hyperparameter Tuning
# - Validate
# + jupyter={"source_hidden": true}
Image(url= "https://assets.datacamp.com/production/repositories/4983/datasets/d969eb14e3e86ac140949a4c83204a1dc45302a5/pipeline-2.png")
# + active="" endofcell="--"
# 8. Model Deployment
#
# -
# -
# -
# --
# + active="" endofcell="--"
# 9. Model Maintenance
#
# - Dataset shift detection
# - Temporal
# - Domain
#
# -
# --
| notebooks/machine_learning_algorithms/0-Complete-ML-Workflow.ipynb |