# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] toc="true"
# # Table of Contents
# <p><div class="lev1"><a href="#How-to-create-and-populate-a-histogram"><span class="toc-item-num">1 </span>How to create and populate a histogram</a></div><div class="lev1"><a href="#What-does-a-hist()-fucntion-returns?"><span class="toc-item-num">2 </span>What does a hist() fucntion returns?</a></div><div class="lev1"><a href="#Manipulate-The-Histogram-Aesthetics"><span class="toc-item-num">3 </span>Manipulate The Histogram Aesthetics</a></div><div class="lev2"><a href="#Number-of-bins"><span class="toc-item-num">3.1 </span>Number of bins</a></div><div class="lev2"><a href="#Range-of-histogram"><span class="toc-item-num">3.2 </span>Range of histogram</a></div><div class="lev2"><a href="#Normalizing-your-histogram"><span class="toc-item-num">3.3 </span>Normalizing your histogram</a></div><div class="lev3"><a href="#Special-Normalize"><span class="toc-item-num">3.3.1 </span>Special Normalize</a></div><div class="lev2"><a href="#Weights-of-your-input"><span class="toc-item-num">3.4 </span>Weights of your input</a></div><div class="lev2"><a href="#Cumulative-histogram"><span class="toc-item-num">3.5 </span>Cumulative histogram</a></div><div class="lev2"><a href="#Raise-your-histogram-using-bottom"><span class="toc-item-num">3.6 </span>Raise your histogram using bottom</a></div><div class="lev2"><a href="#Different-draw-types"><span class="toc-item-num">3.7 </span>Different draw types</a></div><div class="lev2"><a href="#Align-of-the-histogram"><span class="toc-item-num">3.8 </span>Align of the histogram</a></div><div class="lev2"><a href="#Orientation-of-the-bins"><span class="toc-item-num">3.9 </span>Orientation of the bins</a></div><div class="lev2"><a href="#Relative-width-of-the-bars"><span class="toc-item-num">3.10 </span>Relative width of the bars</a></div><div class="lev2"><a href="#Logarithmic-Scale"><span class="toc-item-num">3.11 </span>Logarithmic Scale</a></div><div class="lev2"><a href="#Color-of-your-histogram"><span class="toc-item-num">3.12 
</span>Color of your histogram</a></div><div class="lev2"><a href="#Label-your-histogram"><span class="toc-item-num">3.13 </span>Label your histogram</a></div><div class="lev2"><a href="#Stack-multiple-histograms"><span class="toc-item-num">3.14 </span>Stack multiple histograms</a></div><div class="lev2"><a href="#Add-Info-about-the-data-on-the-canvas"><span class="toc-item-num">3.15 </span>Add Info about the data on the canvas</a></div><div class="lev1"><a href="#How-to-fit-a-histogram"><span class="toc-item-num">4 </span>How to fit a histogram</a></div><div class="lev2"><a href="#Fit-using-Kernel-Density-Estimation"><span class="toc-item-num">4.1 </span>Fit using Kernel Density Estimation</a></div><div class="lev2"><a href="#Fit-using-Scipy's-Optimize-submodule"><span class="toc-item-num">4.2 </span>Fit using Scipy's Optimize submodule</a></div><div class="lev3"><a href="#Example-of-curve-fit-:"><span class="toc-item-num">4.2.1 </span>Example of curve fit :</a></div><div class="lev3"><a href="#Curve-fit-on-histogram"><span class="toc-item-num">4.2.2 </span>Curve fit on histogram</a></div><div class="lev2"><a href="#What-about-the-fit-errors?"><span class="toc-item-num">4.3 </span>What about the fit errors?</a></div><div class="lev3"><a href="#How-can-I-be-sure-about-my-fit-errors?"><span class="toc-item-num">4.3.1 </span>How can I be sure about my fit errors?</a></div>
# -
# # How to create and populate a histogram
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# Let's generate a data array
#
#
data = np.random.rand(500)*1200;
# Make a histogram of the data
fig = plt.figure();
plt.hist(data)
plt.show()
# ---
# # What does the hist() function return?
#
# We can unpack the return value of the hist() function into three variables to get back some information about what the histogram does.
#
# The whole output is :
n, my_bins, my_patches = plt.hist(data, bins=10);
# In this case:
#
# - **n** : an array, or a list of arrays, holding the **values** of the histogram bins. (Be careful in case the `weights` and/or `normed` options are used.)
#
n
len(n)
# - **my_bins** : an *array* holding the edges of the bins. The length of **my_bins** is **nbins+1** (that is, nbins left edges plus the right edge of the last bin). This is always a **single array**, even if more than one dataset is passed in.
my_bins
# - **my_patches** : a silent list of the individual patches used to create the histogram, or a list of such lists if multiple datasets are plotted.
my_patches
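# If you only need the numbers and not the drawing, `np.histogram` returns the same counts and bin edges without plotting (a sketch with synthetic data):

```python
import numpy as np

data = np.random.rand(500) * 1200
n, bin_edges = np.histogram(data, bins=10)

# n holds the counts; bin_edges has one extra entry (nbins + 1 edges)
print(len(n), len(bin_edges))  # 10 11
```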
# ----
# # Manipulate The Histogram Aesthetics
# ## Number of bins
#
# Use the `bins=` option.
plt.hist(data, bins=100);
# ----
#
#
# ## Range of histogram
#
# Use the `range=(tuple)` option
#
plt.hist(data, bins=100, range=(0,1000));
# ----
#
#
# ## Normalizing your histogram
#
# To normalize a histogram use the `normed=True` option.
#
plt.hist(data, normed=True);
# This assures that the integral of the distribution is equal to unity.
#
# **If `stacked` is also True, the sum of the histograms is normalized to 1.**
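# We can verify the unit-integral property numerically with `np.histogram` (whose `density=True` plays the same role as matplotlib's `normed=True`):

```python
import numpy as np

data = np.random.rand(500) * 1200
# density=True is the NumPy analogue of matplotlib's normed=True
n, edges = np.histogram(data, bins=10, density=True)

# Bar height times bin width, summed over bins, gives the integral
integral = (n * np.diff(edges)).sum()
print(abs(integral - 1.0) < 1e-9)  # True
```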
#
#
# ----
#
#
# ### Special Normalize
# However, sometimes it is useful to have the heights of the bins sum to unity.
# For this we generate weights for the histogram: each data point gets the weight **1/(number of data points)**
#
# N.B. : Using this technique you **MUST NOT USE** the `normed=True` option.
#
# This way adding up the bars will give you 1.
weights = np.ones_like(data)/len(data);
plt.hist(data, weights=weights); ## We have NOT used the normed = True option
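# A quick numerical check that the bar heights themselves now sum to one:

```python
import numpy as np

data = np.random.rand(500) * 1200
weights = np.ones_like(data) / len(data)
n, _ = np.histogram(data, bins=10, weights=weights)

# With 1/N weights, the bin heights add up to unity
print(abs(n.sum() - 1.0) < 1e-9)  # True
```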
# ----
#
#
# ## Weights of your input
#
# To weight your data use the `weights=(array)` option.
#
# The weights array **must have the same shape as the data provided**.
# Each data point in the data array then contributes its associated weight towards the bin count (instead of 1).
#
# If you also use **normed=True** the weights will be normalized so that the integral of the density over the range is unity.
#
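# To see that each point contributes its weight rather than 1, give every point a weight of 2 and compare with the plain counts:

```python
import numpy as np

data = np.random.rand(500) * 1200
n_plain, edges = np.histogram(data, bins=10)
# Every point now counts twice, so every bin count doubles
n_weighted, _ = np.histogram(data, bins=10, weights=2 * np.ones_like(data))

print(np.allclose(n_weighted, 2 * n_plain))  # True
```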
# ----
#
#
# ## Cumulative histogram
#
# This creates the cumulative histogram. Use `cumulative=True`, so that each bin contains its own counts plus all the counts of the previous bins.
plt.hist(data, weights=weights, cumulative=True);
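# Equivalently, the cumulative histogram is just the running sum of the plain bin counts, so its last bin equals the total number of entries:

```python
import numpy as np

data = np.random.rand(500) * 1200
n, edges = np.histogram(data, bins=10)

# Running sum of the plain counts = cumulative histogram heights
cumulative = np.cumsum(n)
print(cumulative[-1] == len(data))  # True
```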
# ----
#
#
# ## Raise your histogram using bottom
#
#
# You can raise your histogram by adding either a fixed scalar amount on the y-axis, or a per-bin array of offsets.
# To do this use the `bottom=(array, scalar, or None)` option
plt.hist(data, weights=weights, bottom=5);
nbins = 10
bot = 5*np.random.rand(nbins)
plt.hist(data, bins=nbins, bottom=bot);
# ----
#
#
# ## Different draw types
#
# Use the `histtype=` option for other draw options of your histogram. Basics are:
#
# - bar -> Traditional bar type histogram
plt.hist(data, bins=nbins,histtype='bar');
# - barstacked -> bar-type where multiple data are stacked on-top of the other
plt.hist(data, bins=nbins,histtype='barstacked');
# - step -> To create the line plot only
plt.hist(data, bins=nbins,histtype='step');
# - stepfilled -> to create the step but also fill it (similar to bar but without the vertical lines)
plt.hist(data, bins=nbins,histtype='stepfilled');
# ----
#
#
# ## Align of the histogram
#
# One can use the `align='left'|'mid'|'right'` option
#
# - 'left' -> bars are centered on the left bin edges
plt.hist(data, align='left');
# - 'mid' -> bars centered between bin edges
plt.hist(data, align='mid');
# - 'right' -> bars are centered on the right bin edges
plt.hist(data, align='right');
# ----
#
#
# ## Orientation of the bins
#
# You can orient the histogram vertical or horizontal using the `orientation` option.
plt.hist(data, orientation="horizontal");
plt.hist(data, orientation="vertical");
# ----
#
#
# ## Relative width of the bars
#
# The option `rwidth=(scalar or None)` defines the relative width of the bars as a fraction of the bin width. If None (default), the width is computed automatically.
#
#
plt.hist(data);
plt.hist(data, rwidth=0.2);
plt.hist(data, rwidth=0.8);
# ----
#
#
# ## Logarithmic Scale
#
# To enable the logarithmic scale use the `log=True` option. The histogram axis will be set to log scale. For logarithmic histograms, empty bins are filtered out.
#
plt.hist(data, log=True);
# ----
#
#
# ## Color of your histogram
#
# You can use the presets or array_like of colors.
plt.hist(data, color='red');
plt.hist(data, color=[0.2, 0.3, 0.8, 0.3]); # RGBA
# ----
#
#
# ## Label your histogram
#
# Use the `label=string` option. This takes a string or a sequence of strings.
#
plt.hist(data, label="Histogram");
# ---
#
#
# ## Stack multiple histograms
#
#
# To stack more than one histogram, pass the datasets together in a single call and use the `stacked=True` option.
#
# If True, multiple data are stacked on top of each other; if False, multiple data are arranged side by side (for bar-type histograms) or on top of each other (for step-type histograms).
data2 = np.random.rand(500)*1300;
plt.hist([data, data2], stacked=True);
# ## Add Info about the data on the canvas
#
# First of all we can get the number of entries, the mean and the standard deviation of the plotted data and add them on the canvas
entries = len(data);
mean = data.mean();
stdev = data.std();
# Then create the string and add these values
textstr = 'Entries=$%i$\nMean=$%.2f$\n$\sigma$=$%.2f$'%(entries, mean, stdev)
plt.hist(data, label=textstr);
plt.ylim(0,100);
plt.legend(loc='best',markerscale=0.01);
# Or using a textbox...
#
plt.hist(data);
plt.ylim(0,100);
#plt.text(800,80,textstr);
plt.annotate(textstr, xy=(0.7, 0.8), xycoords='axes fraction') # annotate places the text at a fraction of the axes
# ----
#
#
# # How to fit a histogram
#
# Let's generate a normal distribution
fit_data = np.random.randn(500)*200;
plt.hist(fit_data);
# Looking at these data, assume we think that a Gaussian distribution will fit the given dataset best.
#
# We load the gaussian (normal) distribution from scipy.stats:
from scipy.stats import norm
# Now, looking at this function `norm` we see it has the `loc` option and the `scale` option.
#
# ** `loc` is the mean value and `scale` the standard deviation**
#
# To fit a gaussian on top of the histogram, we need the **normed** histogram and also to get the mean and std of the gaussian that fits the data. Therefore we have
plt.hist(fit_data, normed=True);
mean, std = norm.fit(fit_data);
mean
std
# Then we create the curve for that using the norm.pdf in the range of fit_data.min() and fit_data.max()
x = np.linspace(fit_data.min(), fit_data.max(), 1000);
fit_gaus_func = norm.pdf(x, mean, std);
plt.hist(fit_data, normed=True);
plt.plot(x,fit_gaus_func, lw=4);
# ## Fit using Kernel Density Estimation
#
# Instead of specifying a distribution, we can fit the best probability density function directly. This can be achieved thanks to the non-parametric technique of **kernel density estimation**.
#
# KDE is a non-parametric way to estimate the probability density function of a random variable.
#
# **How does it work?**
# Suppose $(x_1, x_2, ..., x_n)$ are i.i.d. samples with unknown density $f$. We want to estimate the shape of this function $f$. Its kernel density estimator is
#
# $ \hat{f}_{h}(x) = \frac{1}{n} \sum_{i=1}^{n}K_h(x-x_i) = \frac{1}{nh}\sum_{i=1}^{n}K\left(\frac{x-x_i}{h}\right)$
#
# where $K()$ is the kernel: a non-negative function that integrates to one and has mean zero. Here $h>0$ is a smoothing parameter called the **bandwidth**.
# A kernel with a subscript $h$ is called a **scaled kernel** and is defined as $K_h(x)=\frac{1}{h}K(\frac{x}{h})$.
# Usually one wants a small $h$, but there is always a trade-off between the bias of the estimator and its variance.
#
# Kernel functions commonly used:
# - uniform
# - triangular
# - biweight
# - triweight
# - Epanechnikov
# - normal
# More under https://en.wikipedia.org/wiki/Kernel_(statistics)#Kernel_functions_in_common_use
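# As a sketch of the formula above, the estimator can be implemented directly. Here we assume a Gaussian kernel and a Scott's-rule bandwidth (the 1-D default used by `scipy.stats.gaussian_kde`); the data are hypothetical:

```python
import numpy as np

def kde_gaussian(x_eval, samples, h):
    # Sum of scaled Gaussian kernels K_h(x - x_i), averaged over samples
    u = (x_eval[:, None] - samples[None, :]) / h
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 200.0, 500)

# Scott's rule bandwidth: std * n^(-1/5) in one dimension
h = samples.std(ddof=1) * len(samples) ** (-1.0 / 5.0)

x = np.linspace(samples.min() - 4 * h, samples.max() + 4 * h, 2000)
density = kde_gaussian(x, samples, h)

# The estimate is a proper density: non-negative, integrating to ~1
integral = (density * (x[1] - x[0])).sum()
print(abs(integral - 1.0) < 0.01)  # True
```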
#
#
# In Python this is done using the **scipy.stats** module.
# For Gaussian kernel density estimation we use `gaussian_kde`
from scipy.stats import gaussian_kde
pdf_gaus = gaussian_kde(fit_data);
pdf_gaus
pdf_gaus = pdf_gaus.evaluate(x); # get the "y" values from the pdf for the "x" axis, this is an array
pdf_gaus
plt.hist(fit_data, normed=1);
plt.plot(x, pdf_gaus, 'k', lw=3)
plt.plot(x,fit_gaus_func, lw=4, label="fit");
plt.plot(x, pdf_gaus, 'k', lw=3, label="KDE");
plt.legend();
# **N.B.:** Notice the difference between the two fit curves! It comes from the fact that the Gaussian KDE is a mixture of normal distributions; such a mixture may be skewed, heavy- or light-tailed, or multimodal. Thus it does not assume any particular form for the original distribution.
#
# ## Fit using Scipy's Optimize submodule
#
#
# Scipy comes with an [optimize submodule](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) that provides several commonly used optimization algorithms.
#
# One of the easiest is [curve_fit](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.curve_fit.html), which uses non-linear least squares to fit a function $f$ to the data. It assumes that :
#
# $ y_{data} = f(x_{data}, *params) + eps$
#
# The declaration of the function is :
#
# scipy.optimize.curve_fit(f, xdata, ydata, p0=None, sigma=None, absolute_sigma=False, check_finite=True, bounds=(-inf, inf), method=None, jac=None, **kwargs)
#
# where
# - f : the model function (callable) f(x, ...); the independent variable must be its first argument
# - xdata : sequence or array (an array of shape (k, M) when there are k predictors)
# - ydata : sequence of dependent data
# - p0 : initial guess for the parameters
# - sigma : if not set to None, this provides the uncertainties in the ydata array. These are used as weights in the least-squares problem, i.e. minimizing $\sum{\left(\frac{f(xdata, popt)-ydata}{sigma}\right)^2}$. If set to None, uncertainties are assumed to be 1.
# - absolute_sigma : (bool) When False, sigma denotes relative weights of the data points. The returned covariance matrix is based on **estimated** errors in the data and is **not** affected by the overall magnitude of the values in *sigma*; only their relative magnitudes matter. If True, *sigma* describes one-standard-deviation errors of the input data points, and the estimated covariance in pcov is based on these values.
# - method : 'lm', 'trf', 'dogbox' (**N.B.:** 'lm' does not work when the number of observations is less than the number of variables)
#
#
#
# The function returns
# - popt : array of optimal values for the parameters so that the sum of the squared error of $f(xdata, popt)-ydata$ is minimized
#
# - pcov : the covariance matrix of popt. To compute one-standard-deviation errors on the parameters use:
# $$ perr = np.sqrt(np.diag(pcov)) $$
#
#
# Errors raised by the module:
# - ValueError : if there are any NaN's or incompatible options used
# - RuntimeError : if least squares minimization fails
# - OptimizeWarning : if covariance of the parameters cannot be estimated.
#
#
#
# ### Example of curve fit :
# +
from scipy.optimize import curve_fit
## define the model function:
def func(x, a, b, c):
    return a*np.exp(-b*x)+c
## the x-points
xdata = np.linspace(0,4,50);
## get some data from the model function...
y = func(xdata, 2.5, 1.3, 0.5)
##and then add some gaussian errors to generate the "data"
ydata = y + 0.2*np.random.normal(size=len(xdata))
# -
## now run the curve_fit()
popt, pcov = curve_fit(func, xdata, ydata)
popt
pcov
### To constrain the optimization to the region of 0<a<3.0 , 0<b<2 and 0<c<1
popt, pcov = curve_fit(func, xdata, ydata, bounds=(0, [3., 2., 1.]))
popt
pcov
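# Putting the pieces together, the one-standard-deviation errors follow from the diagonal of `pcov` (a sketch with the same model function and hypothetical synthetic data):

```python
import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(1)
xdata = np.linspace(0, 4, 50)
ydata = func(xdata, 2.5, 1.3, 0.5) + 0.2 * rng.normal(size=len(xdata))

popt, pcov = curve_fit(func, xdata, ydata)

# One-standard-deviation errors on the fitted parameters
perr = np.sqrt(np.diag(pcov))
for p, e in zip(popt, perr):
    print('%.3f +/- %.3f' % (p, e))
```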
#
#
# ### Curve fit on histogram
#
# To use curve_fit on a histogram we need to model the histogram as a set of data points: pair each bin center with its height as an (x, y) data point.
#
#
### Start with the histogram, but now get the values and binEdges
n,bins,patches = plt.hist(fit_data, bins=10);
n
bins
### Calculate the bin centers as (bins[1:]+bins[:-1])/2
binCenters = 0.5 * (bins[1:] + bins[:-1]); # we throw away the first (in the first) and last (in the second) edge
binCenters
# +
## function to model is a gaussian:
def func_g(x, a, b, c):
    return a*np.exp(-(x-b)**2/(2*c**2))
## xdata are the centers, ydata are the values
xdata = binCenters
ydata = n
popt, pcov = curve_fit(func_g, xdata, ydata, p0=[ydata.max(), fit_data.mean(), fit_data.std()]) # initial guesses (peak height, center, width of the data) to "help" the minimizer
# -
popt
pcov
# +
plt.plot(xdata,ydata, "ro--", lw=2, label="Data Bin Center");
plt.hist(fit_data, bins=10, label="Data Hist");
plt.plot(np.linspace(xdata.min(),xdata.max(),100),
func_g(np.linspace(xdata.min(),xdata.max(),100),popt[0],popt[1],popt[2]),
"g-", lw=3, label="Fit"); # I increased the x axis points to have smoother curve
plt.legend();
# -
# ## What about the fit errors?
#
# To get the standard deviation of the parameters simply get the square root of the sum of the diagonal elements of the covariance matrix.
#
#
errors = []
for i in range(len(popt)):
    try:
        errors.append(np.absolute(pcov[i][i])**0.5)
    except:
        errors.append(0.00)
for i in range(len(popt)):
    print popt[i], "+/-", errors[i]
# Note, however, that this applies when using curve_fit.
#
# The `optimize.leastsq` method returns the fractional covariance matrix. We have to multiply the elements of this matrix by the residual variance (the reduced chi-squared) and then take the square root of the diagonal elements to get an estimate of the standard deviation of the fit parameters.
#
# ### How can I be sure about my fit errors?
#
# Getting a proper estimate of the standard error in the fitted parameters is a complicated statistical problem. The covariance matrix returned by optimize.leastsq and optimize.curve_fit is based on assumptions about the probability distribution of the errors and about interactions between parameters; such interactions may exist depending on the specific fit function $f(x)$. A good way to deal with a complicated $f(x)$ is to use the [bootstrap method](http://phe.rockefeller.edu/LogletLab/whitepaper/node17.html).
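# A minimal bootstrap sketch for the exponential fit above: refit on resampled (with replacement) versions of the data and take the spread of the refitted parameters as the error estimate. The model, synthetic data, and number of resamples here are all hypothetical choices:

```python
import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(2)
xdata = np.linspace(0, 4, 50)
ydata = func(xdata, 2.5, 1.3, 0.5) + 0.2 * rng.normal(size=len(xdata))

# Refit on bootstrap resamples of the (x, y) pairs
boot_params = []
for _ in range(200):
    idx = rng.integers(0, len(xdata), len(xdata))
    try:
        p, _ = curve_fit(func, xdata[idx], ydata[idx], p0=[2.5, 1.3, 0.5])
        boot_params.append(p)
    except RuntimeError:
        pass  # skip resamples where the fit fails to converge

boot_params = np.array(boot_params)
# The spread of the refitted parameters estimates the standard errors
print(boot_params.std(axis=0))
```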
# Source notebook: Histogram_Manipulation_with_Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="hO7cyA2nH9Lc"
# !pip install tensorflow==2.0.0-rc1
# + colab={"base_uri": "https://localhost:8080/", "height": 36} colab_type="code" id="YOtxWRoqIJWS" outputId="9efc8866-89af-4f84-ac07-50023321ebb1"
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
from tensorflow.keras.optimizers import Adam
import numpy as np
import re
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from google.colab import files
tf.__version__
# + [markdown] colab_type="text" id="DtdUJjxW17SU"
# # Sequence generation with TF2.0
#
# In this notebook we train a simple model on song lyrics by Rosalía.
# Next, we generate new song lyrics.
#
# The dataset used in this toy example is small (only 5 songs) and the model is simple.
#
# Feel free to use the model with a larger dataset and add some complexity! :)
# + colab={"base_uri": "https://localhost:8080/", "height": 74, "resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlI<KEY> "headers": [["content-type", "application/javascript"]], "ok": true, "status": 200, "status_text": "OK"}}} colab_type="code" id="ubpi0hpnIbB9" outputId="4e1e5e7a-300b-48a2-82a0-04c04cb023e5"
# Upload the data
uploaded = files.upload()
# + colab={} colab_type="code" id="FiHdDH2jIxA0"
# Load the data
with open('rosalia_lyrics.txt', 'r') as infile:
    lyrics = infile.read()
# + colab={"base_uri": "https://localhost:8080/", "height": 56} colab_type="code" id="ZDmZ0t6QJAEb" outputId="a754b05a-2da1-4b36-fc42-9905f7f0c0c8"
# Sanity check
lyrics
# + colab={} colab_type="code" id="SdglI8isJU1Q"
# Clean the data - remove verse, chorus, etc. tags
lyrics = re.sub('\[.+\]', '', lyrics)
lyrics = re.sub('\(.+\)', '', lyrics)
# + colab={} colab_type="code" id="Es6v_-DKJBPW"
# Instantiate the tokenizer
tokenizer = Tokenizer()
# + colab={} colab_type="code" id="Ezksaby_JPMU"
# Get the corpus
corpus = lyrics.lower().split('\n')
# + colab={} colab_type="code" id="xCx8H4oGj9sW"
# Remove empty lines
corpus = [line for line in corpus if len(line) != 0]
# + colab={} colab_type="code" id="Mv9bSQiyLIpO"
# Fit the tokenizer
tokenizer.fit_on_texts(corpus)
# + colab={} colab_type="code" id="xFyk32VVLLRI"
# Adding one to consider out-of-vocab words
n_words = len(tokenizer.word_index) + 1
# + colab={} colab_type="code" id="Yg62EgsPjVPP"
# turn the corpus into the training data
input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_seq = token_list[:i + 1]
        input_sequences.append(n_gram_seq)
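# A minimal, TF-free sketch of the same n-gram expansion, using a hypothetical toy word index in place of `tokenizer.word_index`:

```python
# Toy word index standing in for tokenizer.word_index (hypothetical values)
word_index = {'te': 1, 'quiero': 2, 'mucho': 3}

line = 'te quiero mucho'
token_list = [word_index[w] for w in line.split()]

# Every prefix of length >= 2 becomes one training sequence
input_sequences = [token_list[:i + 1] for i in range(1, len(token_list))]
print(input_sequences)  # [[1, 2], [1, 2, 3]]
```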
# + colab={} colab_type="code" id="PwB0N0A3jWfr"
# Find the length of the longest seq in the corpus
max_seq_len = max([len(x) for x in input_sequences])
# + colab={"base_uri": "https://localhost:8080/", "height": 36} colab_type="code" id="Dvg70r3rk4-g" outputId="eb14d22d-8d8c-46d1-f0a3-8b283a4fdd4c"
max_seq_len
# + colab={} colab_type="code" id="4Vb3CmSjldtG"
# Pad sequences
in_seqs = pad_sequences(input_sequences,
maxlen = max_seq_len,
padding = 'pre')
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="lJE1Se03lwKY" outputId="8e19df8f-d00e-4144-cdc3-fc6291454f30"
# A quick sanity check
for i, seq in enumerate(in_seqs):
    if i < 10:
        print(seq)
# + colab={} colab_type="code" id="KtlkTYURl7Io"
# Data-label split
X = in_seqs[:, :-1]
y = in_seqs[:, -1]
# + colab={} colab_type="code" id="i3w46GoKsEIx"
# One-hot encode the labels
y = tf.keras.utils.to_categorical(y, num_classes = n_words)
# + colab={} colab_type="code" id="Lyz4-pDusy3N"
# Build a generative model
model = Sequential()
model.add(Embedding(n_words, 64,
                    input_length = max_seq_len - 1)) # We subtract 1 because we removed the last token - it is now our label
model.add(Bidirectional(LSTM(50)))
model.add(Dense(n_words, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = Adam(),
metrics = ['accuracy'])
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="v6izlWUJtmU7" outputId="43bce08f-99f6-456d-a204-f6523348253d"
hist = model.fit(X, y, epochs = 350, verbose = 1)
# + colab={} colab_type="code" id="nSvougIJvYQF"
def plot_metrics(history):
    plt.plot(history.history['loss'], label = 'Loss')
    plt.plot(history.history['accuracy'], label = 'Acc')
    plt.xlabel('Epochs')
    plt.legend()
    plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 304} colab_type="code" id="n0TK2HfKxMhb" outputId="d67823c3-0250-49be-9c87-b450276fff88"
plot_metrics(hist)
# + colab={} colab_type="code" id="dSml914UxQq1"
# Set params
seeds = ["Voy a", 'Tu', 'Demasiado', 'Te quiero', 'Nunca', 'No quiero mas'] # Some random Spanish phrases :)
next_words = 20
# + colab={"base_uri": "https://localhost:8080/", "height": 129} colab_type="code" id="OJVgv6ISz6QO" outputId="11d6258a-3935-4b23-d6b3-ec3e69ebf6de"
# Generate a new seq
for seed in seeds:
    for _ in range(next_words):
        token_list = tokenizer.texts_to_sequences([seed])[0]
        token_list = pad_sequences([token_list],
                                   maxlen = max_seq_len - 1,
                                   padding = 'pre')
        predicted = model.predict_classes(token_list,
                                          verbose = 0)
        output_word = ""
        for word, index in tokenizer.word_index.items():
            if index == predicted:
                output_word = word
                break
        seed += " " + output_word
    print(seed)
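# Scanning `word_index` once per prediction is O(vocabulary) per word. A cheaper pattern is to build the reverse map once up front (recent Keras Tokenizer versions also expose an `index_word` attribute directly, if available in your version). A sketch with a hypothetical toy index:

```python
# Hypothetical word index; in the notebook this is tokenizer.word_index
word_index = {'voy': 1, 'a': 2, 'cantar': 3}

# Build the reverse map once instead of scanning word_index per prediction
index_word = {index: word for word, index in word_index.items()}

predicted = 3
print(index_word.get(predicted, ''))  # cantar
```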
# + colab={} colab_type="code" id="NtgVuFdr0-ZK"
# Source notebook: TF2.0/06_TF2_Sequence_Generation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.1 64-bit (''test-services-InKJn0l_-py3.9'': poetry)'
# name: python3
# ---
# +
""" Load the Collection """
from tests import utils
from pyclinic import postman
PATH = utils.build_example_path('bookstore.postman_collection.json')
COLLECTION = postman.load_postman_collection_from_file(PATH)
print('Loaded collection from:', PATH)
# +
""" Map Response Bodies to their respective folders """
from pprint import pprint
folders = postman.map_response_bodies_to_folders(COLLECTION)
print("Folders found:", len(folders))
pprint(folders)
# +
""" OPTION 3: Create a *_model.py file for each Folder """
import shutil
shutil.rmtree('./models', ignore_errors=True)
postman.write_collection_models_to_files(folders)
# +
d = [{'key': 'Content-Type', 'value': 'application/json'}, {'key': 'Authorization', 'value': 'Bearer yo momma!'}]
D = {}
for dictionary in d:
    D[dictionary['key']] = dictionary['value']
D.pop('foo', None)  # remove the key if present; no KeyError when it is missing
print(D)
# Source notebook: __postman_example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <table class="ee-notebook-buttons" align="left">
# <td><a target="_blank" href="https://github.com/giswqs/geemap/tree/master/tutorials/FeatureCollection/us_census_data.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
# <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/tutorials/FeatureCollection/us_census_data.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
# <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/tutorials/FeatureCollection/us_census_data.ipynb"><img width=26px src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
# </table>
# # U.S. Census Data
#
# The United States Census Bureau Topologically Integrated Geographic Encoding and Referencing (TIGER) dataset contains the 2018 boundaries for the primary governmental divisions of the United States. In addition to the fifty states, the Census Bureau treats the District of Columbia, Puerto Rico, and each of the island areas (American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands) as the statistical equivalents of States for the purpose of data presentation. Each feature represents a state or state equivalent.
#
# For full technical details on all TIGER 2018 products, see the [TIGER technical documentation](https://www2.census.gov/geo/pdfs/maps-data/data/tiger/tgrshp2018/TGRSHP2018_TechDoc.pdf).
#
# * [TIGER: US Census States](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_States): `ee.FeatureCollection("TIGER/2018/States")`
# * [TIGER: US Census Counties](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_Counties): `ee.FeatureCollection("TIGER/2018/Counties")`
# * [TIGER: US Census Tracts](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Tracts_DP1): `ee.FeatureCollection("TIGER/2010/Tracts_DP1")`
# * [TIGER: US Census Blocks](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Blocks): `ee.FeatureCollection("TIGER/2010/Blocks")`
# * [TIGER: US Census Roads](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2016_Roads): `ee.FeatureCollection("TIGER/2016/Roads")`
# * [TIGER: US Census 5-digit ZIP Code](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_ZCTA5): `ee.FeatureCollection("TIGER/2010/ZCTA5")`
# ## Install Earth Engine API and geemap
# Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
# The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
#
# **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
# +
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception:
    ee.Authenticate()
    ee.Initialize()
# -
# ## TIGER: US Census States
#
# https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_States
#
# 
#
# ### Displaying data
# +
Map = emap.Map(center=[40, -100], zoom=4)
states = ee.FeatureCollection('TIGER/2018/States')
Map.centerObject(states, 4)
Map.addLayer(states, {}, 'US States')
Map.addLayerControl() #This line is not needed for ipyleaflet-based Map
Map
# -
# ### Displaying vector as raster
# +
Map = emap.Map(center=[40, -100], zoom=4)
states = ee.FeatureCollection('TIGER/2018/States')
image = ee.Image().paint(states, 0, 2)
Map.centerObject(states, 4)
Map.addLayer(image, {}, 'US States')
Map.addLayerControl()
Map
# -
# ### Select by attribute
#
# #### Select one single state
# +
Map = emap.Map(center=[40, -100], zoom=4)
tn = ee.FeatureCollection('TIGER/2018/States') \
.filter(ee.Filter.eq("NAME", 'Tennessee'))
Map.centerObject(tn, 6)
Map.addLayer(tn, {}, 'Tennessee')
Map.addLayerControl()
Map
# +
tn = ee.FeatureCollection('TIGER/2018/States') \
.filter(ee.Filter.eq("NAME", 'Tennessee')) \
.first()
props = tn.toDictionary().getInfo()
print(props)
# -
# #### Select multiple states
# +
Map = emap.Map(center=[40, -100], zoom=4)
selected = ee.FeatureCollection('TIGER/2018/States') \
.filter(ee.Filter.inList("NAME", ['Tennessee', 'Alabama', 'Georgia']))
Map.centerObject(selected, 6)
Map.addLayer(selected, {}, 'Selected states')
Map.addLayerControl()
Map
# -
# #### Printing all values of a column
states = ee.FeatureCollection('TIGER/2018/States').sort('ALAND', False)
names = states.aggregate_array("STUSPS").getInfo()
print(names)
areas = states.aggregate_array("ALAND").getInfo()
print(areas)
import matplotlib.pyplot as plt
# %matplotlib notebook
plt.bar(names, areas)
plt.show()
# #### Descriptive statistics of a column
#
# For example, we can calculate the total land area of all states:
states = ee.FeatureCollection('TIGER/2018/States')
area_m2 = states.aggregate_sum("ALAND").getInfo()
area_km2 = area_m2 / 1000000
print("Total land area: ", area_km2, " km2")
states = ee.FeatureCollection('TIGER/2018/States')
stats = states.aggregate_stats("ALAND").getInfo()
print(stats)
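# For reference, `aggregate_stats` computes its summary statistics server-side; a minimal offline sketch of the same quantities, using a hypothetical list of land areas in place of the `ALAND` column, looks like:

```python
import statistics

# Hypothetical land areas in m^2, standing in for the ALAND column
areas = [1.0e11, 2.5e11, 4.0e10, 1.7e11]

stats_local = {
    'min': min(areas),
    'max': max(areas),
    'mean': statistics.mean(areas),
    'sum': sum(areas),
    'sample_sd': statistics.stdev(areas),  # sample standard deviation
}
print(stats_local)
```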
# #### Add a new column to the attribute table
states = ee.FeatureCollection('TIGER/2018/States').sort('ALAND', False)
states = states.map(lambda x: x.set('AreaKm2', x.area().divide(1000000).toLong()))
first = states.first().toDictionary().getInfo()
print(first)
# #### Set symbology based on column values
# +
Map = emap.Map(center=[40, -100], zoom=4)
states = ee.FeatureCollection('TIGER/2018/States')
visParams = {
'palette': ['purple', 'blue', 'green', 'yellow', 'orange', 'red'],
'min': 500000000.0,
'max': 5e+11,
'opacity': 0.8,
}
image = ee.Image().float().paint(states, 'ALAND')
Map.addLayer(image, visParams, 'TIGER/2018/States')
Map.addLayerControl()
Map
# -
# #### Download attribute table as a CSV
states = ee.FeatureCollection('TIGER/2018/States')
url = states.getDownloadURL(filetype="csv", selectors=['NAME', 'ALAND', 'REGION', 'STATEFP', 'STUSPS'], filename="states")
print(url)
# #### Formatting the output
# +
first = states.first()
props = first.propertyNames().getInfo()
print(props)
props = states.first().toDictionary(props).getInfo()
print(props)
for key, value in props.items():
print("{}: {}".format(key, value))
# -
# #### Download data as shapefile to Google Drive
# +
# function for converting GeometryCollection to Polygon/MultiPolygon
def filter_polygons(ftr):
geometries = ftr.geometry().geometries()
geometries = geometries.map(lambda geo: ee.Feature( ee.Geometry(geo)).set('geoType', ee.Geometry(geo).type()))
polygons = ee.FeatureCollection(geometries).filter(ee.Filter.eq('geoType', 'Polygon')).geometry()
return ee.Feature(polygons).copyProperties(ftr)
states = ee.FeatureCollection('TIGER/2018/States')
new_states = states.map(filter_polygons)
col_names = states.first().propertyNames().getInfo()
print("Column names: ", col_names)
url = new_states.getDownloadURL("shp", col_names, 'states');
print(url)
desc = 'states'
# Set configuration parameters for output vector
task_config = {
'folder': 'gee-data', # output Google Drive folder
'fileFormat': 'SHP',
'selectors': col_names # a list of properties/attributes to be exported
}
print('Exporting {}'.format(desc))
task = ee.batch.Export.table.toDrive(new_states, desc, **task_config)
task.start()
# -
# ## TIGER: US Census Blocks
#
# https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Blocks
#
# 
# +
Map = emap.Map(center=[40, -100], zoom=4)
dataset = ee.FeatureCollection('TIGER/2010/Blocks') \
.filter(ee.Filter.eq('statefp10', '47'))
pop = dataset.aggregate_sum('pop10')
print("The number of census blocks: ", dataset.size().getInfo())
print("Total population: ", pop.getInfo())
Map.setCenter(-86.79, 35.87, 6)
Map.addLayer(dataset, {}, "Census Block", False)
visParams = {
'min': 0.0,
'max': 700.0,
'palette': ['black', 'brown', 'yellow', 'orange', 'red']
}
image = ee.Image().float().paint(dataset, 'pop10')
Map.setCenter(-73.99172, 40.74101, 13)
Map.addLayer(image, visParams, 'TIGER/2010/Blocks')
Map.addLayerControl()
Map
# -
# ## TIGER: US Census Counties 2018
#
# https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_Counties
#
# 
# +
Map = emap.Map(center=[40, -100], zoom=4)
Map.setCenter(-110, 40, 5)
states = ee.FeatureCollection('TIGER/2018/States')
# .filter(ee.Filter.eq('STUSPS', 'TN'))
# Turn the strings into numbers
states = states.map(lambda f: f.set('STATEFP', ee.Number.parse(f.get('STATEFP'))))
state_image = ee.Image().float().paint(states, 'STATEFP')
visParams = {
'palette': ['purple', 'blue', 'green', 'yellow', 'orange', 'red'],
'min': 0,
'max': 50,
'opacity': 0.8,
};
counties = ee.FeatureCollection('TIGER/2016/Counties')
# print(counties.first().propertyNames().getInfo())
image = ee.Image().paint(states, 0, 2)
# Map.setCenter(-99.844, 37.649, 4)
# Map.addLayer(image, {'palette': 'FF0000'}, 'TIGER/2018/States')
Map.addLayer(state_image, visParams, 'TIGER/2016/States');
Map.addLayer(ee.Image().paint(counties, 0, 1), {}, 'TIGER/2016/Counties')
Map.addLayerControl()
Map
# -
# ## TIGER: US Census Tracts
#
# https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Tracts_DP1
#
# http://magic.lib.uconn.edu/magic_2/vector/37800/demogprofilehousect_37800_0000_2010_s100_census_1_t.htm
#
# 
# +
Map = emap.Map(center=[40, -100], zoom=4)
dataset = ee.FeatureCollection('TIGER/2010/Tracts_DP1')
visParams = {
'min': 0,
'max': 4000,
'opacity': 0.8,
'palette': ['#ece7f2', '#d0d1e6', '#a6bddb', '#74a9cf', '#3690c0', '#0570b0', '#045a8d', '#023858']
}
# print(dataset.first().propertyNames().getInfo())
# Turn the strings into numbers
dataset = dataset.map(lambda f: f.set('dp0010001', ee.Number.parse(f.get('dp0010001'))))
# Map.setCenter(-103.882, 43.036, 8)
image = ee.Image().float().paint(dataset, 'dp0010001')
Map.addLayer(image, visParams, 'TIGER/2010/Tracts_DP1')
Map.addLayerControl()
Map
# -
# ## TIGER: US Census Roads
#
# https://developers.google.com/earth-engine/datasets/catalog/TIGER_2016_Roads
#
# 
# +
Map = emap.Map(center=[40, -100], zoom=4)
fc = ee.FeatureCollection('TIGER/2016/Roads')
Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(fc, {}, 'Census roads')
Map.addLayerControl()
Map
# notebook: tutorials/FeatureCollection/us_census_data.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Clustering in two dimensions
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib widget
from functions import gen_domains2D, plot_domains2D, confidence_ellipse
# -
# This is an introduction to some of the considerations needed for data clustering in higher dimensions and an example of using Gaussian mixture models to predict the best cluster for unknown data.
# ## Normal distribution in higher dimensions
# * In one dimension a normal distribution is described by $\exp(-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2})$, where $\mu$ is the mean, $\sigma$ is the standard deviation and $\sigma^2$ is the variance
# * In higher dimensions this becomes $\exp(-\frac{1}{2}(X-M)^T \Sigma^{-1} (X-M))$, where $X$ and $M$ are $n$-dimensional vectors and $\Sigma$ is the $n \times n$ covariance matrix
# * The diagonal elements of $\Sigma$ contain the variances of each dimension and the off-diagonal elements are the covariance between two dimensions
# * In 2D: $$\Sigma = \begin{pmatrix} \sigma_x^2 & \mathrm{cov}(x, y) \\ \mathrm{cov}(x, y) & \sigma_y^2 \end{pmatrix} = \begin{pmatrix} \sigma_x^2 & \rho\sigma_x\sigma_y \\ \rho\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix}$$ where $\rho$ is the Pearson correlation coefficient
# * This is demonstrated by the widget below
from covariance_matrix import covariance_matrix
fig, sliders = covariance_matrix()
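# The 2D covariance matrix can also be built directly from $\sigma_x$, $\sigma_y$ and $\rho$, and the correlation recovered from samples; a small sketch with arbitrarily chosen values:

```python
import numpy as np

sigma_x, sigma_y, rho = 2.0, 0.5, 0.7
cov = np.array([
    [sigma_x**2,              rho * sigma_x * sigma_y],
    [rho * sigma_x * sigma_y, sigma_y**2],
])

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0, 0], cov=cov, size=100_000)

# The Pearson coefficient estimated from the samples approaches rho
rho_hat = np.corrcoef(samples.T)[0, 1]
print(rho_hat)  # ≈ 0.7
```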
# ## The advantages of higher dimensions
# * The example below shows a sample of three materials with two measurements (which could be _e.g._ surface potential and tapping phase)
# * Comparing the scatter plot to the histograms shows the advantage of higher dimensions:
# * Clusters that are overlapping in 1D may be distinct in 2D or higher
# * Distances in 1D are given by $x-x_0$ but in 2D are given by $\sqrt{(x-x_0)^2 + (y-y_0)^2}$ (and so on for higher dimensions)
images, materials = gen_domains2D(
populations=(1, 3, 1),
means=((-50, 101), (0, 100), (50, 99)),
stds=((10, 1/5), (10, 1/5), (10, 1/5)),
pearsons=(0, 0.4, 0)
);
plot_domains2D(images, materials);
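# The distance formulas above can be checked numerically: two points that coincide in the first dimension can still be well separated once the second dimension is included:

```python
import numpy as np

p = np.array([0.0, 100.0])
q = np.array([0.0, 104.0])

d_1d = abs(p[0] - q[0])       # 1D distance, using x only
d_2d = np.linalg.norm(p - q)  # sqrt((x-x0)^2 + (y-y0)^2)
print(d_1d, d_2d)
```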
# ## Normalisation
# * Clustering looks at distances between points to judge their similarity
# * But often, different observations have different units and variances so it's not fair to equate them
# * As an example, if I plot the scatter data from above, but use an equal scaling, the $A$ parameter completely dominates all the point-to-point distances
#
fig, ax = plt.subplots(figsize=(12, 3))
ax.scatter(*images.reshape(2, -1), alpha=0.05, s=1)
ax.set(
xlabel='$A$',
ylabel='$B$',
aspect='equal'
)
for side in ('top', 'right'):
ax.spines[side].set_visible(False)
fig.tight_layout()
# * We don't want to assume that either parameter is more important so we use Mahalanobis normalisation
# * Each dimension is normalised to its mean, $\mu$, and standard deviation, $\sigma$, as $\frac{data - \mu(data)}{\sigma(data)}$
# * This means every observation has unit variance and zero mean
# * It also means the data is made dimensionless, which removes complications of different units
# * The scatter plot of Mahalanobis normalised data below shows that neither $A$ nor $B$ is given unfair precedence
# +
mahalanobis = (
(images - images.mean(axis=(1, 2))[:, np.newaxis, np.newaxis])
/ images.std(axis=(1, 2))[:, np.newaxis, np.newaxis]
)
fig, ax = plt.subplots(figsize=(12, 12))
ax.scatter(*mahalanobis.reshape(2, -1), s=1)
ax.set(
xlabel=r'$\frac{A-{\mu}(A)}{{\sigma}(A)}$',
ylabel=r'$\frac{B-{\mu}(B)}{{\sigma}(B)}$',
aspect='equal'
)
ax.grid()
for side in ('top', 'right'):
ax.spines[side].set_visible(False)
fig.tight_layout()
# -
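# Assuming scikit-learn is available, this per-dimension normalisation is equivalent to `sklearn.preprocessing.StandardScaler`; a sketch on random data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=3.0, size=(1000, 2))  # (n_samples, n_features)

# Manual normalisation: subtract the per-column mean, divide by the per-column std
manual = (data - data.mean(axis=0)) / data.std(axis=0)
scaled = StandardScaler().fit_transform(data)

print(np.allclose(manual, scaled))  # True
```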
# ## Using clustering to classify new data
# * One of the main advantages of data clustering is that it can be used to automatically identify new data as belonging to a particular cluster
# * To show this I'm splitting the image above into two halves, top and bottom
# * I'll train the clustering on the top, and test it on the bottom
images_top = images[:, :64]
materials_top = [mat[:64] for mat in materials]
images_bottom = images[:, 64:]
materials_bottom = [mat[64:] for mat in materials]
# ### Training phase
# * I start by normalising the data from the top half of the image
# * This is displayed below
norm_top = (
(images_top - images_top.mean(axis=(1, 2))[:, np.newaxis, np.newaxis])
/ images_top.std(axis=(1, 2))[:, np.newaxis, np.newaxis]
)
plot_domains2D(norm_top, materials_top);
# * It has three materials and so should be fit best by three clusters
# * I train a three-component Gaussian mixture model (GMM) on the data then calculate the probability each point belongs to a particular cluster
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=3).fit(norm_top.reshape(2, -1).T)
labels_top = gmm.predict(norm_top.reshape(2, -1).T)
probs_top = gmm.predict_proba(norm_top.reshape(2, -1).T)
# * The scatter plot below shows the three identified clusters (ellipses show $1\sigma$ away from the mean position)
# * The probability for each point being in the cluster is shown by the images
fig = plt.figure(figsize=(12, 9))
gs = plt.GridSpec(figure=fig, nrows=3, ncols=2, width_ratios=(2, 1))
scatter_ax = fig.add_subplot(gs[:, 0])
cluster_axes = [fig.add_subplot(gs[i, -1]) for i in range(3)]
for i in range(3):
c = f'C{i+1}'
scatter_ax.scatter(*norm_top.reshape(2, -1)[:, labels_top==i], c=c, s=2)
confidence_ellipse(scatter_ax, gmm.means_[i], gmm.covariances_[i], fc=c, ls='--', lw=2, alpha=0.2)
cluster_axes[i].imshow(probs_top[:, i].reshape(images_top[0].shape))
cluster_axes[i].set_title(f'Cluster {i+1}', c=c)
cluster_axes[i].set_axis_off()
for side in ('top', 'right'):
scatter_ax.spines[side].set_visible(False)
fig.tight_layout()
# ### New data
# * We now want to use the clusters found above from the top of the image, to assess the materials in the bottom of the image
# * To do this we need to apply the same normalisation as we did for the training data:
# * For the training data we used Mahalanobis normalisation $\frac{\mathrm{training} - \mu(\mathrm{training})}{\sigma(\mathrm{training})}$
# * For fairness we can't simply Mahalanobis normalise the testing data (it may have a different $\mu$ and $\sigma$)
# * We need to compute $\frac{\mathrm{new\ data} - \mu(\mathrm{training})}{\sigma(\mathrm{training})}$
# * The appropriately normalised new data from the bottom half of the image is shown here
norm_bottom = (
(images_bottom - images_top.mean(axis=(1, 2))[:, np.newaxis, np.newaxis])
/ images_top.std(axis=(1, 2))[:, np.newaxis, np.newaxis]
)
plot_domains2D(norm_bottom, materials_bottom);
# * We still have the means and covariances of our Gaussian clusters found from the training data
# * We can use these to predict the likelihood that a point from the new data set belongs to each cluster
probs_bottom = gmm.predict_proba(norm_bottom.reshape(2, -1).T)
fig = plt.figure(figsize=(12, 9))
gs = plt.GridSpec(figure=fig, nrows=3, ncols=2, width_ratios=(2, 1))
scatter_ax = fig.add_subplot(gs[:, 0])
cluster_axes = [fig.add_subplot(gs[i, -1]) for i in range(3)]
scatter_ax.scatter(*norm_bottom.reshape(2, -1), c='C0', s=2)
for i in range(3):
c = f'C{i+1}'
confidence_ellipse(scatter_ax, gmm.means_[i], gmm.covariances_[i], fc=c, ls='--', lw=2, alpha=0.2)
cluster_axes[i].imshow(probs_bottom[:, i].reshape(images_bottom[0].shape))
cluster_axes[i].set_title(f'Cluster {i+1}', c=c)
cluster_axes[i].set_axis_off()
for side in ('top', 'right'):
scatter_ax.spines[side].set_visible(False)
fig.tight_layout()
# notebook: matplotlib_demos/cluster_2D.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: medhas01
# language: python
# name: medhas01
# ---
# # BERT's Attention and Dependency Syntax
#
# This notebook contains code for comparing BERT's attention to dependency syntax annotations (see Sections 4.2 and 5 of [What Does BERT Look At? An Analysis of BERT's Attention](https://arxiv.org/abs/1906.04341))
import collections
import pickle
import numpy as np
#import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from matplotlib import pyplot as plt
# ### Loading the data
#
# Download the data used in this notebook from [here](https://drive.google.com/open?id=1DEIBQIl0Q0az5ZuLoy4_lYabIfLSKBg-). However, note that since Penn Treebank annotations are not public, this is dummy data where all labels are ROOT. See the README for extracting attention maps on your own data.
# +
def load_pickle(fname):
with open(fname, "rb") as f:
dev_data = pickle.load(f, encoding="latin1")  # encoding="latin1" is needed to load the (Python 2 pickled) downloaded data in Python 3
return (dev_data)
dev_data = load_pickle("./data/ud/ud_attention_data.pkl")
# The data consists of a list of examples (dicts)
# with the following keys/values
# {
# "words": list of words in the sentence
# "heads": index of each word"s syntactic head (0 for ROOT, 1 for the first
# word of the sentence, etc.)
# "relns": the relation between each word and its head
# "attns": [n_layers, n_heads, seq_len, seq_len] tensor of attention maps
# from BERT
#}
index=5
print("words:", dev_data[index]["words"])
print("heads:", dev_data[index]["heads"])
print("relns:", dev_data[index]["relns"])
# Attention maps are 9x9 because [CLS] and [SEP] are added
print("attns: a tensor with shape", dev_data[index]["attns"].shape)
# -
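# For reference, a minimal hand-built example in the same format (a hypothetical two-word sentence with a dummy uniform attention tensor; BERT-base has 12 layers and 12 heads) would be:

```python
import numpy as np

example = {
    "words": ["Dogs", "bark"],
    "heads": [2, 0],             # "Dogs" depends on word 2 ("bark"); "bark" on ROOT
    "relns": ["nsubj", "root"],
    # [CLS] + 2 words + [SEP] -> each attention map is 4x4
    "attns": np.full((12, 12, 4, 4), 0.25, dtype=np.float32),
}
print(example["attns"].shape)
```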
# Find the most common relations in our data
reln_counts = collections.Counter()
for example in dev_data:
for reln in example["relns"]:
reln_counts[reln] += 1
print(reln_counts.most_common(10))
# ### Evaluating individual heads at dependency syntax (Section 4.2)
# +
# Code for evaluating individual attention maps and baselines
def evaluate_predictor(prediction_fn):
"""Compute accuracies for each relation for the given predictor."""
n_correct, n_incorrect = collections.Counter(), collections.Counter()
for example in dev_data:
words = example["words"]
predictions = prediction_fn(example)
for i, (p, y, r) in enumerate(zip(predictions, example["heads"],
example["relns"])):
is_correct = (p == y)
if r == "poss" and p < len(words):
# Special case for poss (see discussion in Section 4.2)
if i + 1 < len(words) and (words[i + 1] == "'s" or words[i + 1] == "s'"):
is_correct = (predictions[i + 1] == y)
if is_correct:
n_correct[r] += 1
n_correct["all"] += 1
else:
n_incorrect[r] += 1
n_incorrect["all"] += 1
return {k: n_correct[k] / float(n_correct[k] + n_incorrect[k])
for k in n_incorrect.keys()}
def attn_head_predictor(layer, head, mode="normal"):
"""Assign each word the most-attended-to other word as its head."""
def predict(example):
attn = np.array(example["attns"][layer][head])
if mode == "transpose":
attn = attn.T
elif mode == "both":
attn += attn.T
else:
assert mode == "normal"
# ignore attention to self and [CLS]/[SEP] tokens
attn[range(attn.shape[0]), range(attn.shape[0])] = 0
attn = attn[1:-1, 1:-1]
return np.argmax(attn, axis=-1) + 1 # +1 because ROOT is at index 0
return predict
def offset_predictor(offset):
"""Simple baseline: assign each word the word a fixed offset from
it (e.g., the word to its right) as its head."""
def predict(example):
return [max(0, min(i + offset + 1, len(example["words"])))
for i in range(len(example["words"]))]
return predict
def get_scores(mode="normal"):
"""Get the accuracies of every attention head."""
scores = collections.defaultdict(dict)
for layer in range(12):
for head in range(12):
scores[layer][head] = evaluate_predictor(
attn_head_predictor(layer, head, mode))
return scores
# attn_head_scores[direction][layer][head][dep_relation] = accuracy
attn_head_scores = {
"dep->head": get_scores("normal"),
"head<-dep": get_scores("transpose")
}
# baseline_scores[offset][dep_relation] = accuracy
baseline_scores = {
i: evaluate_predictor(offset_predictor(i)) for i in range(-3, 3)
}
# -
def get_all_scores(reln):
"""Get all attention head scores for a particular relation."""
all_scores = []
for key, layer_head_scores in attn_head_scores.items():
for layer, head_scores in layer_head_scores.items():
for head, scores in head_scores.items():
all_scores.append((scores[reln], layer, head, key))
return sorted(all_scores, reverse=True)
# Compare the best attention head to baselines across the most common relations.
# This produces the scores in Table 1
for row, (reln, _) in enumerate([("all", 0)] + reln_counts.most_common()):
if reln == "root" or reln == "punct":
continue
if reln_counts[reln] < 100 and reln != "all":
break
uas, layer, head, direction = sorted(
s for s in get_all_scores(reln))[-1]
baseline_uas, baseline_offset = max(
(scores[reln], i) for i, scores in baseline_scores.items())
print("{:8s} | {:5d} | attn: {:.1f} | offset={:2d}: {:.1f} | {:}-{:} {:}".format(
reln[:8], reln_counts[reln], 100 * uas, baseline_offset, 100 * baseline_uas,
layer, head, direction))
# +
# Results are not as expected, check http://corpling.uis.georgetown.edu/gum/,
# and https://github.com/amir-zeldes/gum/tree/master/dep
# -
# ### Qualitative examples of attention (Figure 5)
def plot_attn(title, examples, layer, head, color_words,
color_from=True, width=3, example_sep=3,
word_height=1, pad=0.1, hide_sep=False):
"""Plot BERT's attention for a particular head/example."""
plt.figure(figsize=(4, 4))
for i, example in enumerate(examples):
yoffset = 0
if i == 0:
yoffset += (len(examples[0]["words"]) -
len(examples[1]["words"])) * word_height / 2
xoffset = i * width * example_sep
attn = example["attns"][layer][head]
attn = np.array(attn)
if hide_sep:
attn[:, 0] = 0
attn[:, -1] = 0
attn /= attn.sum(axis=-1, keepdims=True)
words = ["[CLS]"] + example["words"] + ["[SEP]"]
n_words = len(words)
for position, word in enumerate(words):
for x, from_word in [(xoffset, True), (xoffset + width, False)]:
color = "k"
if from_word == color_from and word in color_words:
color = "#cc0000"
plt.text(x, yoffset - (position * word_height), word,
ha="right" if from_word else "left", va="center",
color=color)
for i in range(n_words):
for j in range(n_words):
color = "b"
if words[i if color_from else j] in color_words:
color = "r"
plt.plot([xoffset + pad, xoffset + width - pad],
[yoffset - word_height * i, yoffset - word_height * j],
color=color, linewidth=1, alpha=attn[i, j])
plt.axis("off")
plt.title(title)
plt.show()
# +
# Examples similar to Figure 5 of the paper.
plot_attn("obj: dep->head", [dev_data[12], dev_data[8]], 5, 7,
["future", "deposition"], example_sep=4)
plot_attn("det: dep->head", [dev_data[20], dev_data[31]], 6, 4,
["a", "the"], example_sep=4)
plot_attn("amod: head->dep", [dev_data[294], dev_data[173]], 10, 6,
["emotional", "burning"], example_sep=4, color_from=False)
plot_attn("case: dep->head", [dev_data[1], dev_data[3]], 6, 4,
["by", "in"], example_sep=4)
plot_attn("xcomp: dep->head", [dev_data[852], dev_data[852]], 5, 7,
["know", "receive"], example_sep=4)
# -
# ### Probing classifiers (Section 5)
class WordEmbeddings(object):
"""Class for loading/using pretrained GloVe embeddings"""
def __init__(self):
self.pretrained_embeddings = load_pickle("./data/glove/embeddings.pkl")
self.vocab = load_pickle("./data/glove/vocab.pkl")
def tokid(self, w):
return self.vocab.get(w.lower(), 0)
N_DISTANCE_FEATURES = 8
def make_distance_features(seq_len):
"""Constructs distance features for a sentence."""
# how much ahead/behind the other word is
distances = np.zeros((seq_len, seq_len))
for i in range(seq_len):
for j in range(seq_len):
if i < j:
distances[i, j] = (j - i) / float(seq_len)
feature_matrices = [distances, distances.T]
# indicator features on if other word is up to 2 words ahead/behind
for k in range(3):
for direction in ([1] if k == 0 else [-1, 1]):
feature_matrices.append(np.eye(seq_len, k=k*direction))
features = np.stack(feature_matrices)
# additional indicator feature for ROOT
features = np.concatenate(
[np.zeros([N_DISTANCE_FEATURES - 1, seq_len, 1]),
features], -1)
root = np.zeros((1, seq_len, seq_len + 1))
root[:, :, 0] = 1
return np.concatenate([features, root], 0)
# +
def attn_linear_combo():
return Probe()
def attn_and_words():
return Probe(use_words=True)
def words_and_distances():
return Probe(use_distance_features=True, use_attns=False,
use_words=True, hidden_layer=True)
class Probe(object):
"""The probing classifier used in Section 5."""
def __init__(self, use_distance_features=False, use_words=False,
use_attns=True, include_transpose=True, hidden_layer=False):
self._embeddings = WordEmbeddings()
# We use a simple model with batch size 1
self._attns = tf.placeholder(
shape=[12, 12, None, None], dtype=tf.float32)
self._labels = tf.placeholder(
shape=[None], dtype=tf.int32)
self._features = tf.placeholder(
shape=[N_DISTANCE_FEATURES, None, None], dtype=tf.float32)
self._words = tf.placeholder(shape=[None], dtype=tf.int32)
if use_attns:
seq_len = tf.shape(self._attns)[-1]
if include_transpose:
# Include both directions of attention
attn_maps = tf.concat(
[self._attns,
tf.transpose(self._attns, [0, 1, 3, 2])], 0)
attn_maps = tf.reshape(attn_maps, [288, seq_len, seq_len])
else:
attn_maps = tf.reshape(self._attns, [144, seq_len, seq_len])
# Use attention to start/end tokens to get score for ROOT
root_features = (
(tf.get_variable("ROOT_start", shape=[]) * attn_maps[:, 1:-1, 0]) +
(tf.get_variable("ROOT_end", shape=[]) * attn_maps[:, 1:-1, -1])
)
attn_maps = tf.concat([tf.expand_dims(root_features, -1),
attn_maps[:, 1:-1, 1:-1]], -1)
else:
# Dummy attention map for models not using attention inputs
n_words = tf.shape(self._words)[0]
attn_maps = tf.zeros((1, n_words, n_words + 1))
if use_distance_features:
attn_maps = tf.concat([attn_maps, self._features], 0)
if use_words:
word_embedding_matrix = tf.get_variable(
"word_embedding_matrix",
initializer=self._embeddings.pretrained_embeddings,
trainable=False)
word_embeddings = tf.nn.embedding_lookup(word_embedding_matrix, self._words)
n_words = tf.shape(self._words)[0]
tiled_vertical = tf.tile(tf.expand_dims(word_embeddings, 0),
[n_words, 1, 1])
tiled_horizontal = tf.tile(tf.expand_dims(word_embeddings, 1),
[1, n_words, 1])
word_reprs = tf.concat([tiled_horizontal, tiled_vertical], -1)
word_reprs = tf.concat([word_reprs, tf.zeros((n_words, 1, 200))], 1) # dummy for ROOT
if not use_attns:
attn_maps = tf.concat([
attn_maps, tf.transpose(word_reprs, [2, 0, 1])], 0)
attn_maps = tf.transpose(attn_maps, [1, 2, 0])
if use_words and use_attns:
# attention-and-words probe
weights = tf.layers.dense(word_reprs, attn_maps.shape[-1])
self._logits = tf.reduce_sum(weights * attn_maps, axis=-1)
else:
if hidden_layer:
# 1-hidden-layer MLP for words-and-distances baseline
attn_maps = tf.layers.dense(attn_maps, 256,
activation=tf.nn.tanh)
self._logits = tf.squeeze(tf.layers.dense(attn_maps, 1), -1)
else:
# linear combination of attention heads
attn_map_weights = tf.get_variable("attn_map_weights",
shape=[attn_maps.shape[-1]])
self._logits = tf.reduce_sum(attn_map_weights * attn_maps, axis=-1)
loss = tf.reduce_sum(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=self._logits, labels=self._labels))
opt = tf.train.AdamOptimizer(learning_rate=0.002)
self._train_op = opt.minimize(loss)
def _create_feed_dict(self, example):
return {
self._attns: example["attns"],
self._labels: example["heads"],
self._features: make_distance_features(len(example["words"])),
self._words: [self._embeddings.tokid(w) for w in example["words"]]
}
def train(self, sess, example):
return sess.run(self._train_op, feed_dict=self._create_feed_dict(example))
def test(self, sess, example):
return sess.run(self._logits, feed_dict=self._create_feed_dict(example))
def run_training(probe, train_data, dev_data):
"""Trains and evaluates the given attention probe."""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(1):
print(40 * "=")
print("EPOCH", (epoch + 1))
print(40 * "=")
print("Training...")
for i, example in enumerate(train_data):
if i % 1000 == 0:
print("{:}/{:}".format(i, len(train_data)))
probe.train(sess, example)
print("Evaluating...")
correct, total = 0, 0
for i, example in enumerate(dev_data):
if i % 1000 == 0:
print("{:}/{:}".format(i, len(dev_data)))
logits = probe.test(sess, example)
for i, (head, prediction, reln) in enumerate(
zip(example["heads"], logits.argmax(-1), example["relns"])):
# it is standard to ignore punct for Stanford Dependency evaluation
if reln != "punct":
if head == prediction:
correct += 1
total += 1
print("UAS: {:.1f}".format(100 * correct / total))
# -
tf.reset_default_graph()
attention_data = load_pickle("./data/ud/ud_attention_data.pkl")
train_samples = round(0.8*len(attention_data))
train_data = attention_data[:train_samples]
dev_data = attention_data[train_samples:]
run_training(attn_and_words(), train_data, dev_data)
tf.reset_default_graph()
# notebook: Syntax_Analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from functions import *
from ipywidgets import HBox, HTML, Dropdown, Label, Layout,Image
import tkinter as tk
from IPython.display import display, Markdown, clear_output, IFrame,Javascript
from pyvis.network import Network
pathways_name=pd.read_csv("data/pathways.tsv", sep='\t')["pathway_name"]
last_selection=pathways_name
display(HTML("<style>.container { width:100% !important; }</style>"))
# +
display(HTML("<style>.left-spacing-class {margin-left: 20px; color:blac; margin-right:20px}</style>"))
text=Label(value="")
text.add_class("left-spacing-class")
menu = Dropdown(
options=list(pathways_name.values),
value=list(pathways_name.values)[25],
description='Pathway:')
sidebyside=HBox([menu, HTML('''<script> </script> <form action="javascript:IPython.notebook.execute_cells([2])"><input type="submit" id="toggleButton" value="Run"></form>'''),text])
display(sidebyside)
# -
skip_calcs=False
selection=menu.value
normal_edges=False
pathway_edges=read_pathway(selection)
if (len(pathway_edges)==0):
selection=last_selection
skip_calcs=True
st.error("Edges not found, try another pathway!")
else:
skip_calcs=False
last_selection=selection
pathway_edges=read_pathway(selection)
adj_matrix,nodes_renamed,inv_nodes_renamed=build_adj(pathway_edges)
G = nx.from_numpy_matrix(adj_matrix)
triad_cliques=get_triad(G)
weighted_edges,triad_cliques=calculate_weighted_edges(triad_cliques, adj_matrix,inv_nodes_renamed)
to_remove=[]
signify_values={}
essential_edges=[]
for x in weighted_edges.items():
zeros=0
ones=0
minus=0
for z in x[1]:
if (z[1]==0):
zeros+=1
elif (z[1]==1):
ones+=1
else:
minus+=1
if (ones==0):
if (minus==0):
to_remove.append(x[0])
else:
m=(zeros+minus)/2
if ((minus+zeros)/(zeros*minus+1)*zeros/(minus+1)>((minus+zeros)/(m*m+1))*zeros/(minus+1)):
to_remove.append(x[0])
else:
essential_edges.append(x[0])
if (ones==0):
signify_values[x[0]]=round((minus+zeros)/(zeros*minus+1)*(zeros)/(minus+1),3)
else:
signify_values[x[0]]=0
relabel={}
for e,node in enumerate( G.nodes()):
relabel[e]=str(inv_nodes_renamed[node])
net=Network(height="825px",notebook=True,directed=True,width="1800px", bgcolor='#222222', font_color='white')
triad_nodes=set()
_=[triad_nodes.add(str(inv_nodes_renamed[y])) for x in triad_cliques for y in x]
triad_nodes=list(triad_nodes)
for i,node in relabel.items():
if normal_edges:
net.add_node(str(node))
elif node in triad_nodes:
print(node)
net.add_node(str(node))
if normal_edges:
for edge in pathway_edges.values:
if(edge[2]==-1):
net.add_edge(str(edge[0]), str(edge[1]), color="yellow")
else:
net.add_edge(str(edge[0]), str(edge[1]))
for triad in triad_cliques:
for i,x in enumerate(triad):
for j,y in enumerate(triad):
value=""
isessential=""
tmp=pathway_edges[(pathway_edges[0]==inv_nodes_renamed[triad[i]]) & (pathway_edges[1]==inv_nodes_renamed[triad[j]])].values
# if (str(start_node)+","+str(to_node) not in signify_values):
# continue
if (len(tmp)>0):
start_node,to_node,weight=tmp[0]
else:
continue
if ((str(start_node)+","+str(to_node)) in to_remove):
color="red"
size=10
value+=", significativity: "+str(signify_values[str(start_node)+","+str(to_node)])
else:
color="green"
size=3
value+=", significativity: "+str(signify_values[str(start_node)+","+str(to_node)])
if ((str(start_node)+","+str(to_node)) in essential_edges):
isessential="Essential "
if (weight==1):
net.add_edge(str(start_node), str(to_node), color=color, width=size,title=isessential+"Expression edge"+value)
else:
net.add_edge(str(start_node), str(to_node), color=color, width=size,title=isessential+"Suppression edge"+value)
net.hrepulsion(node_distance=120, central_gravity=0.0, spring_length=100, spring_strength=0, damping=0.09)
net.show("data/graph.html")
weighted_edges['6494,5908']
len([c for c in nx.cycle_basis(G) if len(c)==3])
triad_nodes
weighted_edges={}
first_label=""
second_label=""
third_label=""
mod = """
y ~ x1 + x2
"""
new_triad_cliques=[]
for triad in range(len(triad_cliques)):
    triad_matrix=np.zeros((3,3))
    for i,x in enumerate(triad_cliques[triad]):
        for j,y in enumerate(triad_cliques[triad]):
            triad_matrix[i][j]=adj_matrix[x][y]
    # if list(triad_matrix[0])== [0, 1, 1]:
    zeros_count=np.array([len(np.where(x==0)[0]) for i,x in enumerate(triad_matrix)])
    if (sum(zeros_count)==6):
        new_triad_cliques.append(triad_cliques[triad])
    else:
        continue
    first_index=int(np.where(zeros_count==1)[0])
    second_index=int(np.where(zeros_count==2)[0])
    third_index=int(np.where(zeros_count==3)[0])
    first_label=str(inv_nodes_renamed[triad_cliques[triad][first_index]])
    second_label=str(inv_nodes_renamed[triad_cliques[triad][second_index]])
    third_label=str(inv_nodes_renamed[triad_cliques[triad][third_index]])
    first_gene=(list(esets.loc[first_label,:].values),0)
    print(first_label,second_label,third_label)
    second_gene=(list(esets.loc[second_label,:].values),1)
    third_gene=(list(esets.loc[third_label,:].values),2)
    y=third_gene
    x1=first_gene
    x2=second_gene
    to_df={"y":y[0],"x1":x1[0],"x2":x2[0]}
    data=pd.DataFrame(to_df).replace(np.inf, np.nan).replace(-np.inf, np.nan).dropna()
    m = Model(mod)
    r = m.fit(data)
    fac_sum=np.abs(r.x[0]+r.x[1])
    if (np.abs(r.x[0])<fac_sum*0.1):
        if (first_label+","+third_label in weighted_edges):
            weighted_edges[first_label+","+third_label].append((r.x[0],0))
        else:
            weighted_edges[first_label+","+third_label]=[(r.x[0],0)]
        if (second_label+","+third_label in weighted_edges):
            weighted_edges[second_label+","+third_label].append((r.x[1],1))
        else:
            weighted_edges[second_label+","+third_label]=[(r.x[1],1)]
        if (first_label+","+second_label in weighted_edges):
            weighted_edges[first_label+","+second_label].append((r.x[1],1))
        else:
            weighted_edges[first_label+","+second_label]=[(r.x[1],1)]
    elif (np.abs(r.x[1])<fac_sum*0.1):
        if (first_label+","+third_label in weighted_edges):
            weighted_edges[first_label+","+third_label].append((r.x[0],1))
        else:
            weighted_edges[first_label+","+third_label]=[(r.x[0],1)]
        if (second_label+","+third_label in weighted_edges):
            weighted_edges[second_label+","+third_label].append((r.x[1],0))
        else:
            weighted_edges[second_label+","+third_label]=[(r.x[1],0)]
        if (first_label+","+second_label in weighted_edges):
            weighted_edges[first_label+","+second_label].append((r.x[1],0))
        else:
            weighted_edges[first_label+","+second_label]=[(r.x[1],0)]
    else:
        if (first_label+","+third_label in weighted_edges):
            weighted_edges[first_label+","+third_label].append((r.x[0],-1))
        else:
            weighted_edges[first_label+","+third_label]=[(r.x[0],-1)]
        if (second_label+","+third_label in weighted_edges):
            weighted_edges[second_label+","+third_label].append((r.x[1],-1))
        else:
            weighted_edges[second_label+","+third_label]=[(r.x[1],-1)]
        if (first_label+","+second_label in weighted_edges):
            weighted_edges[first_label+","+second_label].append((r.x[1],-1))
        else:
            weighted_edges[first_label+","+second_label]=[(r.x[1],-1)]
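The 10% rule in the loop above (a factor is treated as weak when its coefficient is below 10% of `|r.x[0]+r.x[1]|`) can be sketched with a plain least-squares fit. Note that `np.linalg.lstsq` stands in for the `Model` wrapper here, which is an assumption about what `Model` fits.

```python
import numpy as np

def classify_triad(y, x1, x2, threshold=0.1):
    """Fit y ~ x1 + x2 by least squares and flag coefficients that are
    small relative to the summed magnitude, mirroring the loop above."""
    X = np.column_stack([x1, x2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    fac_sum = np.abs(coef[0] + coef[1])
    weak = [i for i, c in enumerate(coef) if np.abs(c) < fac_sum * threshold]
    return coef, weak

# y depends almost entirely on x1, so x2's coefficient should be flagged
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 2.0 * x1 + 0.01 * x2
coef, weak = classify_triad(y, x1, x2)
print(coef, weak)  # coef ≈ [2.0, 0.01]; weak == [1]
```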
print(new_triad_cliques[0])
triad_cliques
triad_matrix
[np.where(x[i]==0) for i,x in enumerate(triad_matrix) ]
[len(np.where(x==0)[0]) for i,x in enumerate(triad_matrix) ]
first=int(np.where(np.array([len(np.where(x==0)[0]) for i,x in enumerate(triad_matrix) ])==2)[0])
first
for x in triad_matrix:
    print(x)
adj_matrix[27][26]
nodes_renamed[4023]
pathway_edges_0=pathway_edges[0].unique()
pathway_edges_1=pathway_edges[1].unique()
nodes=list(np.hstack((pathway_edges_0,pathway_edges_1)))
nodes_renamed={}
inv_nodes_renamed={}
for e,x in enumerate(nodes):
    nodes_renamed[x]=e
    inv_nodes_renamed[e]=x
nodes=len(nodes)
adj_matrix=np.zeros((nodes,nodes))
for x in pathway_edges.values:
    print(x[0],x[1])
    adj_matrix[nodes_renamed[x[0]]][nodes_renamed[x[1]]]=x[2]
for x in triad_cliques:
    for y in x:
        triad_nodes.add(y)
[[ch for ch in word] for word in ("apple", "banana", "pear", "the", "hello")]
| Progetto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ***Data structure***
# ## list
x=[2,3,5,6,23,56,7,8]
# ### to add an element
x.append(9)
print(x)
# #### to append the elements of another list
d=[10,12,15,13]
x.extend(d)
print(x)
# #### to insert an element at a particular place
x.insert(3,20)
print(x)
# #### to remove an element
x.remove(20)
print(x)
# #### to remove last element
x.pop()
print(x)
# #### to delete all elements in a list
c=[1,2,3,4,5,6]
c.clear()
print(c)
x.index(15)
# #### to sort it
x.sort()
print(x)
x.sort(reverse=True)
print(x)
# #### to count occurrences of an element
# + tags=[]
x.append(9)
print(x)
# -
x.count(9)
# ### to copy the list
f=x.copy()
print(f)
# #### delete an element
del f[5]
print(f)
del f[2:4]
print(f)
del f[:]
print(f)
x.clear()
print(x)
# ## tuples
j=(1,2,3,4,8,9,5)
print(type(j))
# ### to get the value at an index
print( j[4])
# ### convert tuple to list
# + jupyter={"source_hidden": true} tags=[]
e=list(j)
print(type(e))
print(e)
# -
# #### convert list to tuple
h=tuple(e)
print(type(h))
print(h)
# ## set
t={1,2,3,6,4}
print( type(t))
a=set('mathes')
b=set('hari')
print(a)# print the unique letters
print(b)
# #### letters in a not in b
a-b
# #### letters in b not in a
b-a
# #### letters in a or b or both
a|b
# #### common letters in a & b
a&b
# #### letters in exactly one of a and b (symmetric difference)
a^b
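The last result can be double-checked against the union and intersection above: the symmetric difference is the union minus the intersection.

```python
a = set('mathes')
b = set('hari')

# symmetric difference == union minus intersection
print(a ^ b == (a | b) - (a & b))  # → True
print(sorted(a ^ b))               # → ['e', 'i', 'm', 'r', 's', 't']
```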
# ## dictionaries
u={"first":1}
print(type(u))
i={"subbu":95,"mathesh":45,"nithish":89,"nishanth":50}
i["subbu"]
i["nithish"]
i["nishanth"]
w={"subbu":[12,15,14,65],"mathesh":[15,17,19,65],"nithish":[13,15,15,64],}
w["subbu"]
w["nithish"]
w["mathesh"]
| python/da_stru.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Initialization
# %reload_ext autoreload
# %autoreload 2
# %reload_ext cython
# %reload_ext line_profiler
import os,sys
sys.path.insert(1, os.path.join(sys.path[0], '..', 'module'))
import wiki
import numpy as np
import pandas as pd
import networkx as nx
import scipy as sp
import seaborn as sns
import cufflinks as cf
import matplotlib.pyplot as plt
# topics = ['anatomy', 'biochemistry', 'cognitive science', 'evolutionary biology',
# 'genetics', 'immunology', 'molecular biology', 'chemistry', 'biophysics',
# 'energy', 'optics', 'earth science', 'geology', 'meteorology']
topics = ['earth science']
path_saved = '/Users/harangju/Developer/data/wiki/graphs/dated/'
networks = {}
for topic in topics:
    print(topic, end=' ')
    networks[topic] = wiki.Net()
    networks[topic].load_graph(path_saved + topic + '.pickle')
graph = networks[topic].graph
topic = topics[0]
graph = networks[topic].graph
tfidf = graph.graph['tfidf']
import pickle
dct = pickle.load(open('/Users/harangju/Developer/data/wiki/models/' + 'dict.model','rb'))
# ## Auxiliary methods
# + magic_args="-f" language="cython"
#
# import numpy as np
# cimport numpy as np
# from cython cimport floating,boundscheck,wraparound
# from cython.parallel import prange
#
# from libc.math cimport fabs
#
# np.import_array()
#
# @boundscheck(False) # Deactivate bounds checking
# @wraparound(False)
# def cython_manhattan(floating[::1] X_data, int[:] X_indices, int[:] X_indptr,
# floating[::1] Y_data, int[:] Y_indices, int[:] Y_indptr,
# double[:, ::1] D):
# """Pairwise L1 distances for CSR matrices.
# Usage:
# >>> D = np.zeros(X.shape[0], Y.shape[0])
# >>> cython_manhattan(X.data, X.indices, X.indptr,
# ... Y.data, Y.indices, Y.indptr,
# ... D)
# """
# cdef np.npy_intp px, py, i, j, ix, iy
# cdef double d = 0.0
#
# cdef int m = D.shape[0]
# cdef int n = D.shape[1]
#
# with nogil:
# for px in prange(m):
# for py in range(n):
# i = X_indptr[px]
# j = Y_indptr[py]
# d = 0.0
# while i < X_indptr[px+1] and j < Y_indptr[py+1]:
# if i < X_indptr[px+1]: ix = X_indices[i]
# if j < Y_indptr[py+1]: iy = Y_indices[j]
#
# if ix==iy:
# d = d+fabs(X_data[i]-Y_data[j])
# i = i+1
# j = j+1
#
# elif ix<iy:
# d = d+fabs(X_data[i])
# i = i+1
# else:
# d = d+fabs(Y_data[j])
# j = j+1
#
# if i== X_indptr[px+1]:
# while j < Y_indptr[py+1]:
# iy = Y_indices[j]
# d = d+fabs(Y_data[j])
# j = j+1
# else:
# while i < X_indptr[px+1]:
# ix = X_indices[i]
# d = d+fabs(X_data[i])
# i = i+1
#
# D[px,py] = d
# +
import sklearn.preprocessing as skp
import sklearn.metrics.pairwise as smp
import plotly.graph_objs as go  # used by plot_distribution below

def year_diffs(graph):
    return [graph.nodes[node]['year'] - graph.nodes[neighbor]['year']
            for node in graph.nodes
            for neighbor in list(graph.successors(node))]

def neighbor_similarity(graph, tfidf):
    nodes = list(graph.nodes)
    return [smp.cosine_similarity(tfidf[:,nodes.index(node)].transpose(),
                                  tfidf[:,nodes.index(neighbor)].transpose())[0,0]
            for node in nodes
            for neighbor in list(graph.successors(node))]

def sparse_manhattan(X, Y=None):
    X, Y = smp.check_pairwise_arrays(X, Y)
    X = sp.sparse.csr_matrix(X, copy=False)
    Y = sp.sparse.csr_matrix(Y, copy=False)
    res = np.empty(shape=(X.shape[0], Y.shape[0]))
    cython_manhattan(X.data, X.indices, X.indptr,
                     Y.data, Y.indices, Y.indptr,
                     res)
    return res

def word_diffs(graph, tfidf):
    dists = sparse_manhattan(X=skp.binarize(tfidf).transpose())
    nodes = list(graph.nodes)
    return [dists[nodes.index(node), nodes.index(neighbor)]
            for node in nodes
            for neighbor in list(graph.successors(node))]

def sum_abs_weight_differences(graph, tfidf):
    nodes = list(graph.nodes)
    diff = []
    for node in nodes:
        for neighbor in graph.successors(node):
            v1 = tfidf[:,nodes.index(node)]
            v2 = tfidf[:,nodes.index(neighbor)]
            idx = np.concatenate([v1.indices, v2.indices])
            diff.append(np.sum(np.absolute(v1[idx]-v2[idx])))
    return diff

def sum_weight_differences(graph, tfidf):
    nodes = list(graph.nodes)
    diff = []
    for node in nodes:
        for neighbor in graph.successors(node):
            v1 = tfidf[:,nodes.index(node)]
            v2 = tfidf[:,nodes.index(neighbor)]
            idx = np.concatenate([v1.indices, v2.indices])
            diff.append(np.sum(v1[idx]-v2[idx]))
    return diff

def bin_distribution(data, steps=30, scale='log'):
    if scale=='log':
        bins = np.logspace(np.log10(np.min(data)), np.log10(np.max(data)), steps)
    elif scale=='linear':
        bins = np.linspace(np.min(data), np.max(data), num=steps)
    hist, edges = np.histogram(data, bins=bins)
    return hist, edges, bins

def plot_distribution(data):
    hist, edges, bins = bin_distribution(data)
    # hist_norm = hist/(bins[1:] - bins[:-1])
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=bins[:-1],
                             y=hist/len(data),
                             mode='markers'))
    fig.update_layout(template='plotly_white',
                      xaxis={'type': 'log',
                             'title': 'x'},
                      yaxis={'type': 'log',
                             'title': 'P(x)'})
    fig.show()
    return fig
# -
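As a quick sanity check on the pairwise L1 logic above, a dense NumPy equivalent of `sparse_manhattan` (independent of the Cython kernel) can be verified entry by entry against a hand-computed distance.

```python
import numpy as np

def dense_manhattan(X, Y):
    """Pairwise L1 distances via broadcasting; a dense reference for sparse_manhattan."""
    return np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)

rng = np.random.default_rng(1)
X = rng.random((4, 6))
Y = rng.random((3, 6))
D = dense_manhattan(X, Y)

# cross-check one entry by hand
print(np.isclose(D[2, 1], np.sum(np.abs(X[2] - Y[1]))))  # → True
```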
# ## Priors
# ### Prior: power law distributions of weights
# +
import powerlaw
tfidf = graph.graph['tfidf']
# fit = powerlaw.Fit(tfidf.data)
# fit: xmin = 4.3e-2; alpha = 2.7
n_rows = 2
plt.figure(figsize=(16,n_rows*6))
# plt.subplot(n_rows,2,1)
# fit.plot_pdf()
# fit.power_law.plot_pdf();
# plt.title(f"xmin={fit.xmin:.1e}, α={fit.alpha:.1f}");
plt.subplot(n_rows,2,3)
sns.scatterplot(x='index', y='weight',
data=pd.DataFrame({'index': tfidf.indices,
'weight': tfidf.data}))
sns.scatterplot(x='index', y='weight',
data=pd.DataFrame({'index': tfidf.indices,
'weight': tfidf.data})\
.groupby('index').mean()\
.reset_index())
plt.legend(['weights', 'averaged'])
plt.ylim([-.2,1.2])
plt.subplot(n_rows,2,4)
plot_distribution(tfidf.data)
# -
# ### Prior: similarity / year between neighbors
# +
n_rows = 5
plt.figure(figsize=(16,n_rows*6))
plt.subplot(n_rows,2,1)
yd = year_diffs(graph)
sns.distplot(yd)
plt.xlabel('Δyear')
plt.ylabel('distribution')
plt.subplot(n_rows,2,2)
bin_size=25
years = [graph.nodes[node]['year'] for node in graph.nodes]
sns.distplot(years, bins=bin_size, rug=True, kde=False)
hist, bin_edges = np.histogram(years, bins=bin_size)
popt, pcov = sp.optimize.curve_fit(lambda x,a,b: a*pow(b,x), bin_edges[1:], hist)
x = np.linspace(min(years), max(years), 100)
sns.lineplot(x=x, y=popt[0]*pow(popt[1],x))
plt.legend([f"a*b^x; a={popt[0]:.1e}, b={popt[1]:.4f}"])
plt.xlabel('year');
wd = word_diffs(graph, tfidf)
mu, std = sp.stats.norm.fit(wd)
plt.subplot(n_rows,2,3)
sns.distplot(wd)
x = np.linspace(min(wd), max(wd), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.legend([f"m={mu:.2f}; s={std:.2f}"])
plt.xlabel('manhattan distance')
plt.ylabel('distribution');
slope, intercept, fit_r, p, stderr = sp.stats.linregress(np.abs(yd), wd)
plt.subplot(n_rows,2,4)
wd = word_diffs(graph, tfidf)
sns.scatterplot(x=np.abs(yd), y=wd)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"r={fit_r:.2f}; p={p:.1e}")
plt.legend([f"slope={slope:.2f}"])
plt.xlabel('Δyear')
plt.ylabel('manhattan distance');
parents = parent_similarity(graph, tfidf)
neighbors = neighbor_similarity(graph, tfidf)
non_neighbors = non_neighbor_similarity(graph, tfidf)
fit_mu, fit_std = sp.stats.norm.fit(neighbors)
plt.subplot(n_rows,2,5)
sns.distplot(neighbors, hist=True)
x = np.linspace(min(neighbors), max(neighbors), 100)
plt.plot(x, sp.stats.norm.pdf(x, fit_mu, fit_std))
sns.distplot(non_neighbors)
plt.legend([f"fit-neighbors (m={fit_mu:.2f}; s={fit_std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity');
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), neighbors)
plt.subplot(n_rows,2,6)
sns.scatterplot(x=np.abs(yd), y=neighbors)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"r={r:.2f}; p={p:.1f}")
plt.legend([f"slope={slope:.2f}"])
plt.xlabel('Δyear')
plt.ylabel('cosine similarity');
sum_weight_diffs = sum_weight_differences(graph, tfidf)
mu, std = sp.stats.norm.fit(sum_weight_diffs)
plt.subplot(n_rows,2,7)
sns.distplot(sum_weight_diffs)
x = np.linspace(min(sum_weight_diffs), max(sum_weight_diffs), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.legend([f"m={mu:.2f}; s={std:.2f}"])
plt.xlabel('Σ abs Δw_i')
plt.ylabel('distribution');
slope, intercept, fit_r_sum_weight, p, stderr = \
sp.stats.linregress(np.abs(yd), sum_weight_diffs)
plt.subplot(n_rows,2,8)
sns.scatterplot(x=np.abs(yd), y=sum_weight_diffs)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"r={fit_r_sum_weight:.2f}; p={p:.1e}")
plt.legend([f"slope={slope:.1e}"])
plt.xlabel('Δyear')
plt.ylabel('Σ abs Δw_i');
weight_diffs = weight_differences(graph, tfidf)
fit_mu_weight, std = sp.stats.norm.fit(weight_diffs)
plt.subplot(n_rows,2,9)
sns.distplot(weight_diffs)
x = np.linspace(min(weight_diffs), max(weight_diffs), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.legend([f"m={fit_mu_weight:.2f}; s={std:.2f}"])
plt.xlabel('abs Δw_i')
plt.ylabel('distribution');
# -
# Maybe this just shows that people nowadays have a lower threshold for dissimilarity between old and new knowledge.
# ### Prior: word weight vs title
from gensim.parsing.preprocessing import remove_stopwords
stoplist=set('for a of the and to in'.split())
nodes = []
words = []
for i in range(tfidf.shape[1]):
    node = list(graph.nodes)[i]
    if tfidf[:,i].data.size == 0:
        print(node, tfidf[:,i].data)
        continue
    top_words, idx = wiki.Model.find_top_words(tfidf[:,i], dct, top_n=5)
    # top_words = remove_stopwords(' '.join(top_words)).split(' ')
    nodes += [node]
    words += [top_words]
pd.DataFrame(data={'Node': nodes, 'Top words': words})
# ## Static methods
# ### Power law
# +
max_val = np.max(tfidf.data)
def f(n, max_val):
    x = fit.power_law.generate_random(n)
    while True:
        if np.any(x>max_val):
            y = x > max_val
            x[y] = fit.power_law.generate_random(np.sum(y))
        else:
            break
    return x
g = lambda n: np.random.choice(tfidf.data, size=n)
plt.figure(figsize=(16,6))
plt.subplot(121)
plot_distribution(tfidf.data)
plot_distribution(f(100000, max_val))
plot_distribution(g(10000))
plt.xlim([1e-4,1])
plt.ylim([1e-5,1])
# plt.yscale('linear')
# plt.xscale('linear')
plt.title('weights');
plt.subplot(122)
sns.distplot(tfidf.data[tfidf.data<fit.xmin])
plt.xlabel('weight')
plt.ylabel('count');
# -
# ### Mutate
# + code_folding=[]
x = tfidf[:,0].copy()
y = tfidf[:,0].copy()
T = 300
sim = np.zeros(T)
size = np.zeros(T)
mag = np.zeros(T)
for i in range(sim.size):
    sim[i] = smp.cosine_similarity(x.transpose(),y.transpose())[0,0]
    size[i] = y.size
    mag[i] = np.sum(y.data)
    y = wiki.Model.mutate(y, lambda n: fit.power_law.generate_random(n),
                          point=(1,.5), insert=(1,.3,None), delete=(1,.3))
plt.figure(figsize=(16,10))
ax = plt.subplot(221)
sns.lineplot(x=range(sim.size), y=sim)
plt.ylabel('similarity')
plt.xlabel('years')
plt.subplot(222)
sns.lineplot(x=range(sim.size), y=size)
plt.ylabel('size')
plt.xlabel('years')
plt.subplot(223)
plot_distribution(x.data)
plot_distribution(y.data)
plt.xlabel('tf-idf values')
plt.legend(['before mutation', 'after mutation'])
plt.xlabel('tf-idf values')
plt.subplot(224)
plot_distribution(x.data)
plot_distribution(y.data)
plt.xlabel('tf-idf values')
plt.yscale('linear')
plt.xscale('linear')
plt.ylim([0,.2])
plt.xlim([0,.1])
plt.legend(['before mutation','after mutation']);
# -
# ### Connect
model = wiki.Model(graph_parent=networks[topic].graph,
vectors_parent=networks[topic].graph.graph['tfidf'],
year_start=-500)
# +
test_graph = model.graph.copy()
test_vector = sp.sparse.hstack([tfidf[:,list(graph.nodes).index(n)] for n in test_graph.nodes])
seed = 'Meteorology'
seed_vector = tfidf[:,list(graph.nodes).index(seed)]
print('Nodes:', test_graph.nodes)
print('Edges:', test_graph.edges, '\n')
print(f"Seed: {seed}\n")
wiki.Model.connect(seed_vector, test_graph, test_vector, dct, match_n=3)
print('Nodes:', test_graph.nodes)
print('Edges:', test_graph.edges)
# -
# ## Evolve
# ### Model
first_n_nodes = 10
start_condition = lambda m: [n for n in m.graph_parent.nodes
if m.graph_parent.nodes[n]['year'] <=\
sorted(list(nx.get_node_attributes(m.graph_parent, 'year')\
.values()))[first_n_nodes]]
end_condition = lambda m: (len(m.graph.nodes) >= len(m.graph_parent.nodes)) or \
(m.year > 2200)
network = networks[topic]
tfidf = network.graph.graph['tfidf']
yd = year_diffs(network.graph)
md = word_diffs(network.graph, tfidf)
a_md, b_md, r_md, p_md, stderr = sp.stats.linregress(np.abs(yd), md)
swd = sum_abs_weight_differences(network.graph, tfidf)
a_swd, b_swd, r_swd, p_swd, stderr = sp.stats.linregress(np.abs(yd), swd)
rvs = lambda n: tfidf.data[np.random.choice(tfidf.data.size, size=n)]
mu_sawd = np.mean(np.sum(np.abs(rvs((1,100000))-rvs((1,100000))), axis=0))
nb = neighbor_similarity(network.graph, tfidf)
mu_nb, std_nb = sp.stats.norm.fit(nb)
p_point, p_insert, p_delete = a_swd/mu_sawd, a_md/2, a_md/2
model = wiki.Model(graph_parent=networks[topic].graph,
vectors_parent=tfidf,
year_start=sorted(list(
nx.get_node_attributes(networks[topic].graph, 'year')\
.values()))[first_n_nodes],
start_nodes=start_condition,
n_seeds=2,
dct=dct,
point=(1, p_point),
insert=(1, p_insert, list(set(tfidf.indices))),
delete=(1, p_delete),
rvs=lambda n: tfidf.data[
np.random.choice(tfidf.data.size, size=n)],
create=lambda n: np.random.normal(
loc=fit_mu, scale=fit_std, size=n))
# %lprun -f model.create_nodes model.evolve(until=end_condition)
sim = lambda a,b: smp.cosine_similarity(a.transpose(), b.transpose())[0,0]
nodes = list(model.graph.nodes)
model.record['Similarity (parent)'] = [sim(model.record.iloc[i]['Seed vectors'],
model.vectors[:,nodes.index(
model.record.iloc[i]['Parent'])])
for i in range(len(model.record.index))]
model.record
# #### Simulations
#
# | Model number | Parameters | Comments |
# | ------------ |:---------- |:-------- |
# | 7 | `match_n=4`, `year_start=1600`, `year_end=2020` | |
# | 8 | `match_n=5` | |
# | 9 | `year_start=-500`, `year_end=2020` | too many edges still |
# | 10 | `match_n=6` | not enough nodes |
# | 11 | `n_seeds=3` | too many edges |
# | 12 | `n_seeds=2`, `year_end=1600` | too many edges near mean |
# | 13 | `match_n=7` | too few edges; too many strong edges |
# | 14 | `match_n=6` | |
# + [markdown] heading_collapsed=true
# ### Interesting thought
# If it weren't for the Middle Ages, we would have had, by the 16th century, an amount of knowledge similar to what we have now. And if we run the model starting after the Dark Ages, it is accurate (?).
# + code_folding=[] hidden=true
s = lambda a,b: smp.cosine_similarity(a.transpose(), b.transpose())[0,0]
nodes = list(model.graph.nodes)
model.record['Similarity to parent'] = [s(model.record.iloc[i]['Seed vectors'],
model.vectors[:,nodes.index(model.record.iloc[i]['Parent'])])
for i in range(len(model.record.index))]
model.record['Parent seed'] = model.record['Parent'] + ' ' + model.record['Seed number'].map(str)
# + hidden=true
plt.figure(figsize=(16,10))
ax = sns.lineplot(x='Year', y='Similarity to parent', hue='Parent seed', legend=False,
data=model.record)
plt.ylim([0,1.1]);
# -
# ### Save/load graph
path_base = os.path.join('/','Users','harangju','Developer','data','wiki')
# +
# pickle.dump(model, open(f"../models/model14.p", 'wb'))
# -
model = pickle.load(open(os.path.join(path_base, 'simulations', 'tests', 'model14.pickle'), 'rb'))
model.record
# ## Posteriors
from ipywidgets import interact, widgets, Layout
import plotly.express as px
import plotly.graph_objs as go
import plotly.figure_factory as ff
# + [markdown] heading_collapsed=true
# ### Interactive plots
# [FigureWidgets](https://plot.ly/python/v3/figurewidget-app/)
# + hidden=true
df = cf.datagen.lines(5,1000).reset_index(drop=True)
df
# + hidden=true
def update(change):
    with fig.batch_update():
        fig.data[0].x = df.index[df.index > change.new]
        fig.data[0].y = df[df.columns[0]][df.index > change.new]
min_idx = min(df.index)
max_idx = max(df.index)
slider = widgets.IntSlider(value=0, min=min_idx, max=max_idx,
step=1, description='Year', continuous_update=True,
layout=Layout(width='auto'))
slider.observe(update, names='value')
display(slider)
fig = go.FigureWidget()
fig.add_trace(go.Scatter(x=df.index, y=df[df.columns[0]], name=df.columns[0], mode='lines'))
fig.update_layout(title='Title', xaxis_title='Index', yaxis_title='Y', template='plotly_white')
fig
# -
# ### Plots
# #### Degree distribution
#
# **Interpretation**
#
# There are too many connections. Similarity to the parent isn't actually changing that much.
fig = go.Figure()
fig.add_trace(go.Histogram(x=[d for _,d in graph.degree], nbinsx=30, name='empirical'))
fig.add_trace(go.Histogram(x=[d for _,d in model.graph.degree], nbinsx=30, name='model'))
fig.update_layout(title='Degree distribution', template='plotly_white',
xaxis_title='degree', yaxis_title='number of edges')
# #### Network growth
years = pd.DataFrame([model.graph.nodes[node]['year'] for node in model.graph.nodes],
columns=['Year'])\
.sort_values(by='Year')\
.reset_index(drop=True)
years['count'] = 1
years['Year (cumsum)'] = years['count'].cumsum()
years = years.drop(columns='count')
years
nodes = list(model.graph.nodes)
layout = nx.kamada_kawai_layout(model.graph, dim=2)
# layout = nx.spring_layout(model.graph, dim=3)
layout = np.vstack([layout[node] for node in nodes])
Xn = [layout[k][0] for k in range(len(nodes))]
Yn = [layout[k][1] for k in range(len(nodes))]
# Zn = [layout[k][2] for k in range(len(nodes))]
Xe = []
Ye = []
# Ze = []
for e in model.graph.edges:
    Xe += [layout[nodes.index(e[0])][0], layout[nodes.index(e[1])][0], None]
    Ye += [layout[nodes.index(e[0])][1], layout[nodes.index(e[1])][1], None]
    # Ze += [layout[nodes.index(e[0])][2], layout[nodes.index(e[1])][2], None]
def graph_layout(graph, nodes):
    subgraph = model.graph.subgraph(nodes)
    layout = nx.kamada_kawai_layout(graph, dim=2)
    Xn = [layout[n][0] for n in subgraph.nodes]
    Yn = [layout[n][1] for n in subgraph.nodes]
    Xe = []
    Ye = []
    for e in subgraph.edges:
        Xe += [layout[e[0]][0], layout[e[1]][0], None]
        Ye += [layout[e[0]][1], layout[e[1]][1], None]
    return (Xn, Yn), (Xe, Ye)
years_emp = np.array(sorted([graph.nodes[n]['year'] for n in graph.nodes]))
years_emp_dist = np.cumsum(np.ones(shape=len(years_emp)))
len(years_emp), len(years_emp_dist)
fig = go.Figure()
fig.add_trace(go.Scatter(x=years_emp,
y=years_emp_dist,
name='empirical'))
fig.add_trace(go.Scatter(x=years.Year,
y=years['Year (cumsum)'],
name='model',
mode='lines'))
fig.update_layout(title='Discoveries',
xaxis_title='Year',
yaxis_title='Number of discoveries',
template='plotly_white')
# +
# fig.write_image('fig1.svg');
# +
def update_network(change):
    with fig.batch_update():
        (Xn, Yn), (Xe, Ye) = graph_layout(model.graph,
                                          [n for n in model.graph.nodes
                                           if model.graph.nodes[n]['year']<=change.new])
        fig.data[0].x = Xe
        fig.data[0].y = Ye
        fig.data[1].x = Xn
        fig.data[1].y = Yn
        fig.layout.title = model.graph.name + ', year: ' + str(change.new)
    fig.update_xaxes(range=[-1.2,1.2])
    fig.update_yaxes(range=[-1.2,1.2])
nodes = list(model.graph.nodes)
min_year = min([model.graph.nodes[n]['year'] for n in nodes])
max_year = max([model.graph.nodes[n]['year'] for n in nodes])
slider_network = widgets.IntSlider(value=min_year, min=min_year, max=max_year,
step=1, description='Year', continuous_update=True,
layout=Layout(width='auto'))
slider_network.observe(update_network, names='value')
display(slider_network)
(Xn, Yn), (Xe, Ye) = graph_layout(model.graph,
[n for n in model.graph.nodes
if model.graph.nodes[n]['year']==min_year])
trace1 = go.Scatter(x=Xe, y=Ye,# z=Ze,
mode='lines', line=dict(color='gray', width=.5),
hoverinfo='none')
trace2 = go.Scatter(x=Xn, y=Yn,# z=Zn,
mode='markers',
marker=dict(symbol='circle', size=6,
# color=group,
colorscale='Viridis',
line=dict(color='rgb(50,50,50)', width=0.5)),
text=nodes, hoverinfo='text')
axis = dict(showbackground=False,
showline=False,
zeroline=False,
showgrid=False,
showticklabels=False,
title='')
fig = go.Figure(data=[trace1, trace2],
layout=go.Layout(title=topic + ', year: ' + str(min_year),
width=600,#1000,
height=600,
showlegend=False,
scene=dict(xaxis=dict(axis),
yaxis=dict(axis),
zaxis=dict(axis),),
hovermode='closest',
template='plotly_white'))
fig = go.FigureWidget(fig)
fig.update_xaxes(range=[-1.2,1.2])
fig.update_yaxes(range=[-1.2,1.2])
fig
# -
# **Comments**
#
# Too many connections in new nodes. So, try
# * restricting title words to uncommon words?
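A minimal sketch of that restriction: keep only title words whose document frequency is below a cutoff. The `doc_freq` counts here are hypothetical; in the notebook they could be derived from the gensim dictionary's `dct.dfs` map.

```python
def uncommon_title_words(title, doc_freq, max_df=100):
    """Drop title words that appear in many documents (doc_freq is a
    hypothetical word -> document-frequency map)."""
    return [w for w in title.lower().split()
            if doc_freq.get(w, 0) <= max_df]

# hypothetical document frequencies
doc_freq = {'theory': 5000, 'of': 90000, 'plate': 40, 'tectonics': 12}
print(uncommon_title_words('Theory of Plate Tectonics', doc_freq))
# → ['plate', 'tectonics']
```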
# #### Similarity
# ##### Cosine similarity
# +
import itertools as it
import sklearn.metrics.pairwise as smp
sim = lambda a,b: smp.cosine_similarity(a.transpose(), b.transpose())[0,0]
nodes = list(model.graph.nodes)
births = pd.DataFrame({'Node': nodes,
'Year': [model.graph.nodes[n]['year'] for n in nodes]})\
.sort_values(by=['Year'])\
.reset_index(drop=True)
births['Similarity (neighbor)'] = [[sim(model.vectors[:,nodes.index(births.iloc[i].Node)],
model.vectors[:,nodes.index(neighbor)])
for neighbor in it.chain(model.graph.successors(births.iloc[i].Node),
model.graph.predecessors(births.iloc[i].Node))
if model.graph.nodes[neighbor]['year'] <= births.iloc[i].Year]
for i in births.index]
births
# +
max_y = 180
def update_similarity(change):
    with fig.batch_update():
        fig.data[1].x = [j for i in births[births.Year<=change.new]['Similarity (neighbor)']
                         for j in i]
        fig.data[2].x = model.record['Similarity (parent)'][model.record.Year == change.new]
    fig.update_xaxes(range=[0,1])
    fig.update_yaxes(range=[0,max_y])
min_year = min(model.record.Year)
max_year = max(model.record.Year)
slider = widgets.IntSlider(value=min_year, min=min_year, max=max_year,
step=1, description='Year', continuous_update=True,
layout=Layout(width='auto'))
slider.observe(update_similarity, names='value')
display(slider)
fig = go.FigureWidget()
fig.add_trace(go.Histogram(x=neighbors,
name='empirical'))
fig.add_trace(go.Histogram(x=[j for i in births[births.Year<=min_year+50]['Similarity (neighbor)']
for j in i],
name='model (neighbor)'))
fig.add_trace(go.Histogram(x=model.record[model.record.Year==min_year]['Similarity (parent)'],
name='model (parent)'))
fig.update_layout(title='Cosine similarity', template='plotly_white',
xaxis_title='cosine similarity', yaxis_title='number of edges')
fig.update_xaxes(range=[0,1])
fig.update_yaxes(range=[0,max_y])
fig
# -
# ##### Manhattan distance
# + [markdown] heading_collapsed=true
# #### Something
# + hidden=true
plt.figure(figsize=(16,4))
plt.subplot(121)
sns.distplot(neighbors)
x = np.linspace(min(neighbors), max(neighbors), 100)
mu, std = sp.stats.norm.fit(neighbors)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
sns.distplot(non_neighbors)
plt.title(topic + ' (prior)')
plt.legend([f"fit-neighbors (m={mu:.2f}; s={std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity');
plt.xlim([-.2,1.2])
plt.subplot(122)
neighbors_model = neighbor_similarity(model.graph, model.vectors)
non_neighbors_model = non_neighbor_similarity(model.graph, model.vectors)
sns.distplot(neighbors_model)
x = np.linspace(min(neighbors_model), max(neighbors_model), 100)
mu, std = sp.stats.norm.fit(neighbors_model)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
sns.distplot(non_neighbors_model)
plt.title(topic + ' (model)')
plt.legend([f"fit-neighbors (m={mu:.2f}; s={std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity')
plt.xlim([-.2,1.2]);
# + hidden=true
plt.figure(figsize=(16,4))
plt.subplot(121)
bin_size=25
years = [graph.nodes[node]['year'] for node in graph.nodes]
sns.distplot(years, bins=bin_size, rug=True, kde=False)
hist, bin_edges = np.histogram(years, bins=bin_size)
popt, pcov = sp.optimize.curve_fit(lambda x,a,b: a*pow(b,x), bin_edges[1:], hist)
x = np.linspace(min(years), max(years), 100)
sns.lineplot(x=x, y=popt[0]*pow(popt[1],x))
plt.legend([f"a*b^x; a={popt[0]:.1e}, b={popt[1]:.4f}"])
plt.title('prior')
plt.ylabel('discoveries')
plt.xlabel('year');
plt.subplot(122)
years = [model.graph.nodes[node]['year'] for node in model.graph.nodes]
sns.distplot(years, bins=bin_size, rug=True, kde=False)
hist, bin_edges = np.histogram(years, bins=bin_size)
popt, pcov = sp.optimize.curve_fit(lambda x,a,b: a*pow(b,x), bin_edges[1:], hist)
x = np.linspace(min(years), max(years), 100)
sns.lineplot(x=x, y=popt[0]*pow(popt[1],x))
plt.legend([f"a*b^x; a={popt[0]:.1e}, b={popt[1]:.4f}"])
plt.title('model')
plt.ylabel('discoveries')
plt.xlabel('year');
plt.figure(figsize=(16,4))
bin_size=25
years = [graph.nodes[node]['year'] for node in graph.nodes]
sns.distplot(years, bins=bin_size, rug=True, kde=False, hist=False)
# hist, bin_edges = np.histogram(years, bins=bin_size)
# popt, pcov = sp.optimize.curve_fit(lambda x,a,b: a*pow(b,x), bin_edges[1:], hist)
# x = np.linspace(min(years), max(years), 100)
# sns.lineplot(x=x, y=popt[0]*pow(popt[1],x))
sns.lineplot(x=sorted(years),
y=np.sum(np.array([sorted(years)]).transpose() < np.array([sorted(years)]), axis=0))
years = [model.graph.nodes[node]['year'] for node in model.graph.nodes]
sns.distplot(years, bins=bin_size, rug=True, kde=False, hist=False)
hist, bin_edges = np.histogram(years, bins=bin_size)
# popt_model, pcov = sp.optimize.curve_fit(lambda x,a,b: a*pow(b,x), bin_edges[1:], hist)
# x = np.linspace(min(years), max(years), 100)
# sns.lineplot(x=x, y=popt_model[0]*pow(popt_model[1],x))
sns.lineplot(x=sorted(years),
y=np.sum(np.array([sorted(years)]).transpose() < np.array([sorted(years)]), axis=0))
plt.legend([#f"prior: a*b^x; a={popt[0]:.1e}, b={popt[1]:.4f}",
f"prior: count",
#f"model: a*b^x; a={popt_model[0]:.1e}, b={popt_model[1]:.4f}",
f"model: count"])
plt.ylabel('discoveries')
plt.xlabel('year');
# + hidden=true
plt.figure(figsize=(16,6))
plt.subplot(121)
fit.plot_pdf()
fit.power_law.plot_pdf()
plt.title(f"empirical xmin={fit.xmin:.1e}, α={fit.alpha:.1f}");
plt.subplot(122)
fit_model = powerlaw.Fit(model.vectors.data)
fit_model.plot_pdf()
fit_model.power_law.plot_pdf()
plt.title(f"model xmin={fit_model.xmin:.1e}, α={fit_model.alpha:.1f}");
# + hidden=true
sns.jointplot(x=np.abs(yd), y=wd, kind='reg',
marginal_kws=dict(bins=15, rug=True))
plt.xlabel('Δyear')
plt.ylabel('manhattan distance');
# + hidden=true
n_rows = 4
plt.figure(figsize=(16,n_rows*6))
# wd = word_diffs(graph, tfidf)
# yd = year_diffs(graph)
plt.subplot(n_rows,2,1)
sns.distplot(yd)
plt.title(topic + ' prior')
plt.xlabel('year difference')
plt.subplot(n_rows,2,2)
yd_model = year_diffs(model.graph)
sns.distplot(yd_model)
plt.title(topic + ' model')
plt.xlabel('year difference');
plt.subplot(n_rows,2,3)
sns.scatterplot(x=np.abs(yd), y=wd)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), wd)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (prior)")
plt.xlabel('year')
plt.ylabel('manhattan distance');
plt.subplot(n_rows,2,4)
sns.distplot(wd)
mu, std = sp.stats.norm.fit(wd)
x = np.linspace(min(wd), max(wd), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.xlabel('manhattan distance')
plt.ylabel('probability distribution');
plt.title(f"μ={mu:.2}, σ={std:.2} (prior)")
wd_model = word_diffs(model.graph, model.vectors)
yd_model = year_diffs(model.graph)
neighbors_model = neighbor_similarity(model.graph, model.vectors)
plt.subplot(n_rows,2,5)
sns.scatterplot(x=np.abs(yd_model), y=wd_model)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd_model), wd_model)
x = np.linspace(0, max(np.abs(yd_model)), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (model)")
plt.xlabel('year')
plt.ylabel('manhattan distance');
plt.subplot(n_rows,2,6)
sns.distplot(wd_model)
mu, std = sp.stats.norm.fit(wd_model)
x = np.linspace(min(wd_model), max(wd_model), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.xlabel('manhattan distance')
plt.ylabel('probability distribution');
plt.title(f"μ={mu:.2}, σ={std:.2} (model)");
plt.subplot(n_rows,2,7)
sns.scatterplot(x=np.abs(yd), y=neighbors)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), neighbors)
x = np.linspace(0, max(np.abs(yd)), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (prior)")
plt.xlabel('Δyear')
plt.ylabel('cosine similarity');
plt.subplot(n_rows,2,8)
sns.scatterplot(x=np.abs(yd_model), y=neighbors_model)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd_model), neighbors_model)
x = np.linspace(0, max(np.abs(yd_model)), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (model)")
plt.xlabel('Δyear')
plt.ylabel('cosine similarity');
# + hidden=true
plt.figure(figsize=(16,6))
plt.subplot(121)
sns.scatterplot(x='index', y='weight',
data=pd.DataFrame({'index': model.vectors.indices,
'weight': model.vectors.data}))
plt.ylim([-.1,1.1]);
plt.subplot(122)
plot_distribution(model.vectors.data)
# + hidden=true
plt.figure(figsize=(16,6))
plt.subplot(121)
nx.draw_networkx(graph, node_color=['r' if graph.nodes[n]['year']<-500 else 'b'
for n in graph.nodes])
plt.title('original graph')
plt.subplot(122)
nx.draw_networkx(model.graph, node_color=['r' if model.graph.nodes[n]['year']<-500 else 'b'
for n in model.graph.nodes])
plt.title('new graph');
# + hidden=true
plt.figure(figsize=(16,6))
sns.distplot([d for _,d in graph.degree], bins=30)
sns.distplot([d for _,d in model.graph.degree], bins=30)
plt.legend(['prior', 'model'])
plt.xlim([-10,110]);
# -
# ### Discussion
#
# The point of this model is that one can model knowledge discovery as incremental changes to existing knowledge.
#
# The mutation model does not monotonically decrease similarity with the parent.
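As a toy illustration (an addition, not part of the original analysis): mutating the same coordinate of a vector twice undoes itself, so similarity with the parent need not decrease monotonically under repeated mutation.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

parent = np.array([1.0, 1.0, 1.0, 1.0])
child = parent.copy()

sims = [cosine(parent, child)]
child[0] = -child[0]          # mutate coordinate 0
sims.append(cosine(parent, child))
child[0] = -child[0]          # mutate coordinate 0 again -> undoes the change
sims.append(cosine(parent, child))

print(sims)  # similarity drops, then returns to 1.0
```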
| tests/test-model-in-wiki-module.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: dev
# kernelspec:
# display_name: 'Python 3.7.9 64-bit (''PythonDataV2'': conda)'
# name: python3
# ---
# # a_tree_feature_selection
# ----
#
# Written in the Python 3.7.9 Environment with the following package versions
#
# * joblib 1.0.1
# * numpy 1.19.5
# * pandas 1.3.1
# * scikit-learn 0.24.2
# * tensorflow 2.5.0
#
# By <NAME>
#
# This Jupyter Notebook fits a decision tree model to exoplanet classification data from the Kepler exoplanet study in order to select the features of greatest importance.
#
# Column descriptions can be found at https://exoplanetarchive.ipac.caltech.edu/docs/API_kepcandidate_columns.html
#
# **Source Data**
#
# The source data was provided as part of the University of Arizona's Data Analytics homework assignment. It was derived from https://www.kaggle.com/nasa/kepler-exoplanet-search-results?select=cumulative.csv
#
# The full data set was released by NASA at
# https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=koi
# +
# Import Dependencies
# Plotting
# %matplotlib inline
import matplotlib.pyplot as plt
# Data manipulation
import numpy as np
import pandas as pd
from statistics import mean
from operator import itemgetter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from tensorflow.keras.utils import to_categorical
# Parameter Selection
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
# Model Development
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
# Model Metrics
from sklearn.metrics import classification_report
# Save/load files
from tensorflow.keras.models import load_model
import joblib
# # Ignore deprecation warnings
# import warnings
# warnings.simplefilter('ignore', FutureWarning)
# -
# Set the seed value for the notebook, so the results are reproducible
from numpy.random import seed
seed(1)
# # Read the CSV and Perform Basic Data Cleaning
# +
# Import data
df = pd.read_csv("../b_source_data/exoplanet_data.csv")
# print(df.info())
# Drop columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop rows containing null values
df = df.dropna()
# Display data info
print(df.info())
print(df.head())
print(df.koi_disposition.unique())
# -
# Replace the space in "FALSE POSITIVE" disposition values with an underscore
df.koi_disposition = df.koi_disposition.str.replace(' ','_')
print(df.koi_disposition.unique())
# # Pre-processing
#
# Use `koi_disposition` for the y values
# Split dataframe into X and y
X = df.drop("koi_disposition", axis=1)
y = df["koi_disposition"]
print(X.shape, y.shape)
# Split X and y into training and testing groups
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42)
# Display training data
X_train.head()
# Scale the data with MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# ## Determine Features of Influence - Decision Tree Classifier Method
# Create and score a decision tree classifier
# (decision trees are scale-invariant, so the unscaled features are used here)
model = tree.DecisionTreeClassifier()
model = model.fit(X_train, y_train)
print('Decision Tree Score:')
print(model.score(X_train, y_train))
# Sort the features by their importance
tree_feature_sort = sorted(zip(X.columns,model.feature_importances_),key=itemgetter(1), reverse=True)
# tree_feature_sort
# Plot Decision Tree Feature Importance
fig = plt.figure(figsize=[12,12])
plt.barh(*zip(* (tree_feature_sort)))
plt.xlabel('Feature Importance')
plt.ylabel('Feature Name')
plt.title('Decision Tree Assessment')
plt.show()
# # Choose Features of Importance
# From the Decision Tree Assessment plot, you can see a large gap in feature importance between koi_fpflag_ec and koi_model_snr. This suggests that features with importance exceeding 0.1 are good candidates for seeding predictive models.
# Select Tree Features of Interest
tree_features = [feature[0] for feature in tree_feature_sort if feature[1] > 0.1]
tree_features
# # Save the Model
# Save the model
joblib.dump(model, './a_tree_feature_selection_model.sav')
| c_feature_selection/a_tree_feature_selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Demo for forward mode
# +
import sys
import numpy as np
sys.path.append('../')
from autodiff.ad import AutoDiff
from autodiff.vector_forward import Vector_Forward
# -
# ### handle scalar case in forward mode
# +
val = 0 # Value to evaluate at
# Create an AD forward mode object with val
x = AutoDiff(val, name="x")
f = AutoDiff.sin(2 * x) # function to be evaluated, i.e. f(x) = sin(2x)
print(f.val) # Output the function value
print(f.der) # Output the function derivative
# -
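The forward-mode machinery above can be understood through dual numbers, which carry a value and its derivative together. A minimal sketch (the `Dual` class below is purely illustrative, not the package's `AutoDiff` API):

```python
import math

class Dual:
    """A minimal dual number: value plus derivative, propagated together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def sin(x):
    # chain rule: sin(u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

x = Dual(0.0, 1.0)       # seed derivative dx/dx = 1
f = sin(2 * x)           # f(x) = sin(2x)
print(f.val, f.der)      # 0.0 2.0
```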
# ### handle vector case for a single variable in forward mode
# +
# Create an AD forward mode object with vector
x = AutoDiff([-1.0, -3.0, -5.0, -7.0, 0.1], name="x")
f = AutoDiff.logistic(AutoDiff.tan(x) + (3 * x ** (-2)) + (2 * x) + 7) # function to be evaluated
print(f.val) # Output the function value
print(f.der) # Output the function derivative
# -
# ### handle vector case for various variables in forward mode
# +
# Create an AD forward mode object with vector for x
x = AutoDiff([16, 0], name="x")
# Create an AD forward mode object with vector for y
y = AutoDiff([8, -1], name="y")
f = x / y # function to be evaluated, i.e. f(x, y) = x / y
print(f.val) # Output the function value
print(f.der) # Output the function derivative
# -
# ### handle the case of multiple functions - a simple case where each variable's input is a scalar
# +
# Create an AD forward mode object with value for x
x = AutoDiff(3, name='x')
# Create an AD forward mode object with value for y
y = AutoDiff(5, name='y')
f1 = (2 * x ** 2) + (3 * y ** 4)
f2 = AutoDiff.cos(x + (4 * y ** 2))
v = Vector_Forward([f1, f2])
print(v.val())
print(v.jacobian()[0])
print(v.jacobian()[1])
# -
# ### handle the case of multiple functions - multiple variables, each with vector input, evaluated across multiple functions
x = AutoDiff([3, 1], name='x')
y = AutoDiff([5, 2], name='y')
f1 = (2 * x ** 2) + (3 * y ** 4)
f2 = AutoDiff.cos(x + (4 * y ** 2))
v = Vector_Forward([f1, f2])
print(v.val())
print(v.jacobian()[0])
print(v.jacobian()[1])
| demos/Demo_Forward.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Thinking, Fast and Slow: <NAME>
# This will contain the book review for thinking fast and slow.
# ## Excerpts and thoughts
#
# #### Introduction
# > However, we found that participants in our experiments _ignored the relevant statistical facts_ and relied exclusively on resemblance. We proposed that they used resemblance as a simplifying heuristic (roughly, a rule of thumb) to make a difficult judgment. The reliance on the heuristic caused **predictable biases** (systematic errors) in their predictions (pg 7).
#
# This is a theme that comes up again and again throughout the book: human cognition is prone to systematic errors in particular situations. The way to combat that, which Kahneman discusses at length, is to slow down, try to recognize the situation you are in, and consider what biases you may be succumbing to.
#
# > This is often the essence of intuitive heuristics, when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution (pg 12).
#
# > Fast thinking includes both variants of intuitive thought-the expert and the heuristic-as well as the entirely automatic mental activities of perception and memory, the operations that enable you to know there is a lamp on your desk or retrieve the name of the capital of Russia (pg 13).
#
# #### Chapter 1: The Characters of the Story
# > The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness (pg 24).
#
# > When system 1 runs into difficulty, it calls on system 2 to support more detailed and specific processing that may solve the problem of the moment (pg 24).
#
# > In summary, most of what you (your system 2) think and do originates in your system 1, but system 2 takes over when things get difficult, and it normally has the last word (pg 25).
#
# > System 1 has biases, however, systematic errors that it is prone to make in specified circumstances (pg 25).
#
# > In other words, system 2 is in charge of self control (pg 26).
#
# #### Chapter 3: The Lazy Controller
# > Too much concern about how well one is doing in a task sometimes disrupts performance by loading short-term memory with pointless anxious thoughts. The conclusion is straightforward: self-control requires attention and effort. Another way of saying this is that controlling thoughts and behaviors is one of the tasks that system 2 performs (pg 41).
#
# > The core of his argument is that _rationality_ should be distinguished from _intelligence_ (pg 49).
#
# #### Chapter 4: Priming
# LEAVING OFF
| Books/01-Psychology_and_Human_Cognition-Thinking_Fast_and_Slow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import os
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision.models.resnet import resnet50
from tqdm import tqdm
from typing import Dict
from l5kit.geometry import transform_points
from l5kit.evaluation import write_pred_csv
import matplotlib.pyplot as plt
from matplotlib import animation, rc
rc('animation', html='jshtml')
from IPython.display import HTML
#from IPython.display import display, clear_output
import PIL
import numpy as np
import pandas as pd
import scipy as sp
from l5kit.data import ChunkedDataset, LocalDataManager
from l5kit.dataset import EgoDataset, AgentDataset
from l5kit.rasterization import build_rasterizer
from l5kit.configs import load_config_data
from l5kit.visualization import draw_trajectory, TARGET_POINTS_COLOR
from l5kit.geometry import transform_points
from tqdm import tqdm
from collections import Counter
from l5kit.data import PERCEPTION_LABELS
from prettytable import PrettyTable
from IPython.display import display, clear_output
import os
from scipy import stats
from pathlib import Path
import zarr
import itertools as it
import sys
# +
os.environ["L5KIT_DATA_FOLDER"] = "data"
# cfg = load_config_data("examples/visualisation/visualisation_config.yaml")
cfg = {
'format_version': 4,
'model_params': {
'history_num_frames': 10,
'history_step_size': 1,
'history_delta_time': 0.1,
'future_num_frames': 50,
'future_step_size': 1,
'future_delta_time': 0.1
},
'raster_params': {
'raster_size': [224, 224],
'pixel_size': [0.5, 0.5],
'ego_center': [0.25, 0.5],
'map_type': 'py_semantic',
'semantic_map_key': 'semantic_map/semantic_map.pb',
'dataset_meta_key': 'meta.json',
'filter_agents_threshold': 0.5
},
'test_data_loader': {
'key': 'scenes/test.zarr',
'batch_size': 12,
'shuffle': False,
'num_workers': 0
}
}
# print(cfg)
dm = LocalDataManager()
dataset_path = dm.require('scenes/sample.zarr')
zarr_dataset = ChunkedDataset(dataset_path)
zarr_dataset.open()
frames = zarr_dataset.frames
agents = zarr_dataset.agents
scenes = zarr_dataset.scenes
tl_faces = zarr_dataset.tl_faces
display(str(frames))
display(str(agents))
display(str(scenes))
display(str(tl_faces))
# +
def build_model(cfg: Dict) -> torch.nn.Module:
# load pre-trained Conv2D model
model = resnet50(pretrained=False)
# change input channels number to match the rasterizer's output
num_history_channels = (cfg["model_params"]["history_num_frames"] + 1) * 2
num_in_channels = 3 + num_history_channels
model.conv1 = nn.Conv2d(
num_in_channels,
model.conv1.out_channels,
kernel_size=model.conv1.kernel_size,
stride=model.conv1.stride,
padding=model.conv1.padding,
bias=False,
)
# change output size to (X, Y) * number of future states
num_targets = 2 * cfg["model_params"]["future_num_frames"]
model.fc = nn.Linear(in_features=2048, out_features=num_targets)
return model
def forward(data, model, device):
inputs = data["image"].to(device)
target_availabilities = data["target_availabilities"].unsqueeze(-1).to(device)
targets = data["target_positions"].to(device)
# Forward pass
outputs = model(inputs).reshape(targets.shape)
return outputs
# +
test_cfg = cfg["test_data_loader"]
test_zarr = ChunkedDataset(dm.require(test_cfg["key"])).open()
test_mask = np.load("data/scenes/mask.npz")["arr_0"]
rasterizer = build_rasterizer(cfg, dm)
test_dataset = AgentDataset(cfg, test_zarr, rasterizer, agents_mask=test_mask)
test_dataloader = DataLoader(test_dataset,
shuffle=test_cfg["shuffle"],
batch_size=test_cfg["batch_size"],
num_workers=test_cfg["num_workers"])
print(test_dataset)
# +
# ==== INIT MODEL
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = build_model(cfg).to(device)
model.load_state_dict(torch.load("baseline_weights/baseline_weights.pth", map_location=device))
# ==== EVAL LOOP
model.eval()
torch.set_grad_enabled(False)
# store information for evaluation
future_coords_offsets_pd = []
timestamps = []
agent_ids = []
progress_bar = tqdm(test_dataloader)
for data in progress_bar:
outputs = forward(data, model, device).cpu().numpy().copy()
# convert into world coordinates offsets
coords_offset = []
for agent_coords, agent_yaw_rad in zip(outputs, data["yaw"]):
world_offset_from_agent = np.array(
[
[np.cos(agent_yaw_rad), -np.sin(agent_yaw_rad), 0],
[np.sin(agent_yaw_rad), np.cos(agent_yaw_rad), 0],
[0, 0, 1],
])
coords_offset.append(transform_points(agent_coords, world_offset_from_agent))
future_coords_offsets_pd.append(np.stack(coords_offset))
timestamps.append(data["timestamp"].numpy().copy())
agent_ids.append(data["track_id"].numpy().copy())
# -
write_pred_csv("submission/submission.csv",
timestamps=np.concatenate(timestamps),
track_ids=np.concatenate(agent_ids),
coords=np.concatenate(future_coords_offsets_pd),
)
| model_baseline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Deep Learning using Rectified Linear Units
# ===
#
# ## Overview
#
# In this notebook, we explore the performance of an autoencoder with varying activation functions on an image reconstruction task.
# We load our dependencies.
# +
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
from autoencoder import AE
# -
# We set up the GPU memory growth.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)
# We set the random seeds for reproducibility.
np.random.seed(42)
tf.random.set_seed(42)
# We set the batch size, the number of epochs, and the number of units per layer.
batch_size = 256
epochs = 100
neurons = 512
code_dim = 128
# ## Data Preparation
# We load the MNIST dataset.
(train_features, _), (test_features, _) = tf.keras.datasets.mnist.load_data()
# We scale the images.
train_features = train_features.astype('float32').reshape(-1, 784) / 255.
test_features = test_features.astype('float32').reshape(-1, 784) / 255.
# We create a `tf.data.Dataset` object for the training dataset.
dataset = tf.data.Dataset.from_tensor_slices((train_features, train_features))
dataset = dataset.batch(batch_size, True)
dataset = dataset.prefetch(batch_size * 4)
dataset = dataset.shuffle(train_features.shape[1])
# ## Model
# We use a vanilla Autoencoder.
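The `AE` class is imported from a local `autoencoder` module and is not shown here. As a rough illustration of the idea — a single-hidden-layer encoder/decoder whose reconstruction error is minimized — here is a tiny NumPy sketch. It is an assumption-laden stand-in, not the actual class used below:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Encoder/decoder weights: 784 -> 128 -> 784 (mirroring code_dim above)
W_enc = rng.normal(0, 0.01, size=(784, 128))
W_dec = rng.normal(0, 0.01, size=(128, 784))

def autoencode(x):
    code = sigmoid(x @ W_enc)          # compress to the code dimension
    return sigmoid(code @ W_dec)       # reconstruct the input from the code

x = rng.random((1, 784))               # stand-in for a flattened MNIST image
x_hat = autoencode(x)
mse = float(np.mean((x - x_hat) ** 2)) # the reconstruction loss being minimized
print(x_hat.shape, mse)
```

Training would adjust `W_enc` and `W_dec` by gradient descent on `mse`, which is what `model.fit` does below.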
# ### Logistic-based Model
#
# We define an autoencoder with a Logistic activation function.
model = AE(
units=neurons,
activation=tf.nn.sigmoid,
initializer='glorot_uniform',
code_dim=code_dim,
original_dim=train_features.shape[1]
)
# After defining our model, we shall now compile it for training.
model.compile(
loss=tf.losses.mean_squared_error,
optimizer=tf.optimizers.Adam(learning_rate=1e-2),
)
# Call the model once to build its weights.
model(train_features[:batch_size])
# Display model summary.
model.summary()
# Train the model.
logistic_performance = model.fit(
dataset, epochs=epochs, verbose=0
)
# #### Logistic-based AE Results
#
# We plot the original images, the reconstructed images, and the difference between them.
# Display the original images.
# +
number = 10
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
plt.imshow(test_features[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
# Display the reconstructed images.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
reconstructed = model(test_features[index].reshape(-1, 784)).numpy().reshape(28, 28)
plt.imshow(reconstructed)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Display the difference between the original images and their reconstruction.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1 + number)
difference = model(test_features[index].reshape(-1, 784)) - test_features[index]
difference = difference.numpy().reshape(28, 28)
plt.imshow(difference)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ### TanH-based Model
#
# We define an autoencoder with a Hyperbolic Tangent activation function.
model = AE(
units=neurons,
activation=tf.nn.tanh,
initializer='glorot_uniform',
code_dim=code_dim,
original_dim=train_features.shape[1]
)
# After defining our model, we shall now compile it for training.
model.compile(
loss=tf.losses.mean_squared_error,
optimizer=tf.optimizers.Adam(learning_rate=1e-2),
)
# Train the model.
tanh_performance = model.fit(
dataset, epochs=epochs, verbose=0
)
# #### TanH-based AE Results
#
# We plot the original images, the reconstructed images, and the difference between them.
# Display the original images.
# +
number = 10
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
plt.imshow(test_features[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
# Display the reconstructed images.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
reconstructed = model(test_features[index].reshape(-1, 784)).numpy().reshape(28, 28)
plt.imshow(reconstructed)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Display the difference between the original images and their reconstruction.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1 + number)
difference = model(test_features[index].reshape(-1, 784)) - test_features[index]
difference = difference.numpy().reshape(28, 28)
plt.imshow(difference)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ### ReLU-based Model
#
# We define an autoencoder with a ReLU activation function.
model = AE(
units=neurons,
activation=tf.nn.relu,
initializer='he_normal',
code_dim=code_dim,
original_dim=train_features.shape[1]
)
# After defining our model, we shall now compile it for training.
model.compile(
loss=tf.losses.mean_squared_error,
optimizer=tf.optimizers.Adam(learning_rate=1e-2),
)
# Train the model.
relu_performance = model.fit(
dataset, epochs=epochs, verbose=0
)
# #### ReLU-based AE Results
#
# We plot the original images, the reconstructed images, and the difference between them.
# Display the original images.
# +
number = 10
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
plt.imshow(test_features[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
# Display the reconstructed images.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
reconstructed = model(test_features[index].reshape(-1, 784)).numpy().reshape(28, 28)
plt.imshow(reconstructed)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Display the difference between the original images and their reconstruction.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1 + number)
difference = model(test_features[index].reshape(-1, 784)) - test_features[index]
difference = difference.numpy().reshape(28, 28)
plt.imshow(difference)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ### Leaky ReLU-based Model
#
# We define an autoencoder with a Leaky ReLU activation function.
model = AE(
units=neurons,
activation=tf.nn.leaky_relu,
initializer='he_normal',
code_dim=code_dim,
original_dim=train_features.shape[1]
)
# After defining our model, we shall now compile it for training.
model.compile(
loss=tf.losses.mean_squared_error,
optimizer=tf.optimizers.Adam(learning_rate=1e-2),
)
# Train the model.
lrelu_performance = model.fit(
dataset, epochs=epochs, verbose=0
)
# #### Leaky ReLU-based AE Results
#
# We plot the original images, the reconstructed images, and the difference between them.
# Display the original images.
# +
number = 10
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
plt.imshow(test_features[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
# Display the reconstructed images.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
reconstructed = model(test_features[index].reshape(-1, 784)).numpy().reshape(28, 28)
plt.imshow(reconstructed)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Display the difference between the original images and their reconstruction.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1 + number)
difference = model(test_features[index].reshape(-1, 784)) - test_features[index]
difference = difference.numpy().reshape(28, 28)
plt.imshow(difference)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ### Softplus-based Model
#
# We define an autoencoder with a Softplus activation function.
model = AE(
units=neurons,
activation=tf.nn.softplus,
initializer='he_normal',
code_dim=code_dim,
original_dim=train_features.shape[1]
)
# After defining our model, we shall now compile it for training.
model.compile(
loss=tf.losses.mean_squared_error,
optimizer=tf.optimizers.Adam(learning_rate=1e-2),
)
# Train the model.
softplus_performance = model.fit(
dataset, epochs=epochs, verbose=0
)
# #### Softplus-based AE Results
#
# We plot the original images, the reconstructed images, and the difference between them.
# Display the original images.
# +
number = 10
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
plt.imshow(test_features[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
# Display the reconstructed images.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
reconstructed = model(test_features[index].reshape(-1, 784)).numpy().reshape(28, 28)
plt.imshow(reconstructed)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Display the difference between the original images and their reconstruction.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1 + number)
difference = model(test_features[index].reshape(-1, 784)) - test_features[index]
difference = difference.numpy().reshape(28, 28)
plt.imshow(difference)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ### ELU-based Model
#
# We define an autoencoder with an ELU activation function.
model = AE(
units=neurons,
activation=tf.nn.elu,
initializer='he_normal',
code_dim=code_dim,
original_dim=train_features.shape[1]
)
# After defining our model, we shall now compile it for training.
model.compile(
loss=tf.losses.mean_squared_error,
optimizer=tf.optimizers.Adam(learning_rate=1e-2),
)
# Train the model.
elu_performance = model.fit(
dataset, epochs=epochs, verbose=0
)
# #### ELU-based AE Results
#
# We plot the original images, the reconstructed images, and the difference between them.
# Display the original images.
# +
number = 10
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
plt.imshow(test_features[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# -
# Display the reconstructed images.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1)
reconstructed = model(test_features[index].reshape(-1, 784)).numpy().reshape(28, 28)
plt.imshow(reconstructed)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# Display the difference between the original images and their reconstruction.
plt.figure(figsize=(20, 4))
for index in range(number):
ax = plt.subplot(2, number, index + 1 + number)
difference = model(test_features[index].reshape(-1, 784)) - test_features[index]
difference = difference.numpy().reshape(28, 28)
plt.imshow(difference)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# ## Training Performance
#
# We plot the training performance of each autoencoder.
# +
sns.set_style('dark', {'grid.linestyle': '--'})
plt.figure(figsize=(8, 8))
plt.rcParams.update({'font.size': 14})
plt.plot(
range(len(logistic_performance.history['loss'])),
logistic_performance.history['loss'],
label='logistic'
)
plt.plot(
range(len(tanh_performance.history['loss'])),
tanh_performance.history['loss'],
label='tanh'
)
plt.plot(
range(len(relu_performance.history['loss'])),
relu_performance.history['loss'],
label='relu'
)
plt.plot(
range(len(lrelu_performance.history['loss'])),
lrelu_performance.history['loss'],
label='leaky_relu'
)
plt.plot(
range(len(softplus_performance.history['loss'])),
softplus_performance.history['loss'],
label='softplus'
)
plt.plot(
range(len(elu_performance.history['loss'])),
elu_performance.history['loss'],
label='elu'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend(loc='upper right')
plt.grid()
plt.title('Autoencoder on MNIST')
plt.savefig('ae_mnist_experiments.png', dpi=300)
plt.show()
# -
plt.plot(
range(len(logistic_performance.history['loss'])),
logistic_performance.history['loss'],
label='logistic'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.grid()
plt.title('Logistic-based Autoencoder on MNIST')
plt.show()
plt.plot(
range(len(tanh_performance.history['loss'])),
tanh_performance.history['loss'],
label='tanh'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.grid()
plt.title('TanH-based Autoencoder on MNIST')
plt.show()
plt.plot(
range(len(relu_performance.history['loss'])),
relu_performance.history['loss'],
label='relu'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.grid()
plt.title('ReLU-based Autoencoder on MNIST')
plt.show()
plt.plot(
range(len(lrelu_performance.history['loss'])),
lrelu_performance.history['loss'],
label='leaky_relu'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.grid()
plt.title('Leaky ReLU-based Autoencoder on MNIST')
plt.show()
plt.plot(
range(len(softplus_performance.history['loss'])),
softplus_performance.history['loss'],
label='softplus'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.grid()
plt.title('Softplus-based Autoencoder on MNIST')
plt.show()
plt.plot(
range(len(elu_performance.history['loss'])),
elu_performance.history['loss'],
label='elu'
)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.grid()
plt.title('ELU-based Autoencoder on MNIST')
plt.show()
| notebooks/autoencoders/ae-mnist-experiments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MultiLabel Classification
# - Toy Apples and Basketball dataset (1 text feature column and 1 multilabel target column)
# ---
# ## Import modules
# +
# Standard
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# TFIDF
from sklearn.feature_extraction.text import TfidfVectorizer
# Preprocessing
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MultiLabelBinarizer
# ML
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
# Train-Test split
from sklearn.model_selection import train_test_split
# AST
import ast
# Spacy
import spacy
# Make column transformer
from sklearn.compose import make_column_transformer
# Pipeline
from sklearn.pipeline import make_pipeline
# Metrics
from sklearn.metrics import jaccard_score
# -
# ## Configuration
# Configures the rest of the notebook for any dataset that has a text column serving as the features for an NLP multilabel classification task
# +
# Excel Datafile (if csv then change to pd.read_csv)
datafile = 'apples_and_basketball.xlsx'
# Features column
TEXT_CONTENT_COLUMN = "Text"
# Target
TARGET = "Label"
# -
# ## Import Data
df = pd.read_excel(datafile)
df.shape
df
# ## Data Preparation
# #### 1. Convert label list imported as strings to lists
type(df[TARGET][0])
df['label_ast'] = df[TARGET].apply(lambda x: ast.literal_eval(x))
type(df['label_ast'][0])
df.head(2)
# #### 2. Label Counts
df['label_count'] = df['label_ast'].apply(len)
df.head(3)
# #### 3. Label Distribution
plt.hist(df['label_count'])
plt.title('Label Distribution across samples')
plt.xlabel('# of labels per sample')
plt.ylabel('# of samples having label(s)')
plt.show()
# #### 4. Spacy Stopwords
from spacy.cli.download import download
download(model="en_core_web_lg")
nlp_lg = spacy.load('en_core_web_lg')
nlp_lg
# +
# Spacy stopwords
sp_stopwords = spacy.lang.en.stop_words.STOP_WORDS
print('Type : ', type(sp_stopwords))
print('Length : ', len(sp_stopwords))
# -
# #### 5. Vectorize Features using TFIDF
# Will use the spacy stopwords later
tfidf = TfidfVectorizer(stop_words='english')
tfidf.get_params();
# +
# Vectorize the feature column using make_column_transformer
preprocessor = make_column_transformer((tfidf, TEXT_CONTENT_COLUMN), # "Text"
remainder='passthrough'
)
# -
# #### 6. Convert Target via Label Encoder
# +
# Get all unique labels from the target
all_labels_flattened_list = [inner_value for inner_list in list(df['label_ast']) for inner_value in inner_list]
all_unique_labels = list(set(all_labels_flattened_list))
all_unique_labels
# -
# ## Pipeline - 1
# - 1-step pipeline that vectorizes the feature column using TFIDF, producing a preprocessed df alongside the other, non-preprocessed columns
# - Just for viewing and analysis
pipe1 = make_pipeline(preprocessor)
pipe1.fit(df)
# +
# TFIDF Feature names
tfidf_feature_names = preprocessor.named_transformers_.tfidfvectorizer.get_feature_names()
tfidf_feature_names
# +
# TFIDF-processed array of df
tfidf_processed_csr = pipe1.transform(df)
# Creating a df of the csr (just for analysis)
# Since we used "passthrough" from the other columns we
# need to provide their names to the processed df along
# with the tfidf feature names
df_processed = pd.DataFrame(tfidf_processed_csr,
columns=tfidf_feature_names + list(df.columns[-3:]))
df_processed.shape
# -
df_processed.head(3)
# ## Train-Test split
# +
# Form X and y required for tts
X_processed = df_processed.loc[:, df_processed.columns.isin(tfidf_feature_names)]
y_processed = df_processed['label_ast']
X_processed.shape, y_processed.shape
# +
X_train, X_test, y_train_ast, y_test_ast = train_test_split(X_processed, y_processed, test_size=0.3)
print(X_train.shape)
print(y_train_ast.shape)
print(X_test.shape)
print(y_test_ast.shape)
# +
# Examining the split of data
y_train_ast
# -
y_test_ast
# +
# Instantiate multilabel binarizer
mlb = MultiLabelBinarizer(classes=all_unique_labels)
# -
y_train = mlb.fit_transform(y_train_ast)
y_train
y_test = mlb.transform(y_test_ast)
y_test
# ---
# ## TODO: Stratify multilabels - Figure out how
# ---
# ## Convert Target
# - `Possibly help for stratification ... but not sure ...`
# - ##### **Does not look like it helps :-(**
# +
le = LabelEncoder()
le.fit_transform(all_unique_labels)
# -
df_processed['label_ast_lblencoded'] = df_processed['label_ast'].apply(lambda x: le.transform(x))
df_processed.sample(4)
# ## ML Models (Supervised)
lr = LogisticRegression()
lr.get_params();
mo = MultiOutputClassifier(estimator=lr)
mo.get_params();
# ## Fit and Predict
#
# **`TO DO:`** Put the below in an overall pipeline of 2 steps:
# 1. Preprocessing
# 2. ML algorithms for classification
# +
# Starting off with just passing what we
# obtained so far into a 1 step pipeline
mo.fit(X_train, y_train);
# -
# If you do not convert y_train to the required format (binary array or sparse matrix) the following error occurs:
# - ERROR message:
# - `ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead - the MultiLabelBinarizer transformer can convert to this format.`
#
# - Multioutput classifier can take different text labels as input (they do not NEED to be binarized). However, the target passed when fitting the model must be an array, and an array requires a value in every row and column. So I cannot just pass ['apple'] for one sample and ['apple', 'basketball'] for another, since those have different shapes. Therefore, in the code above, I used the multilabel binarizer.
# - Note, I could have passed non-binary values (since the multioutput classifier can handle that), but those too need to be in an array with a fixed number of rows and columns. So I can pass np.array([['apple', 'nothing'], ['basketball', 'nothing']]), but that has the same overall effect as using the multilabel binarizer.
#
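# The shape constraint described above can be sketched minimally: `MultiLabelBinarizer` turns ragged per-sample label lists into a fixed-width binary matrix (the toy labels here are illustrative):

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Ragged label lists: rows have different lengths, so they cannot form
# a rectangular array directly
y_ast = [['apple'], ['apple', 'basketball'], ['basketball']]

mlb = MultiLabelBinarizer(classes=['apple', 'basketball'])
y_bin = mlb.fit_transform(y_ast)
print(y_bin)                         # one column per class, uniform shape
print(mlb.inverse_transform(y_bin))  # round-trips back to label tuples
```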
y_pred = mo.predict(X_test)
y_pred
# +
# Predicted label descriptions
mlb.inverse_transform(y_pred)
# Printing per line
for x in mlb.inverse_transform(y_pred):
print(x)
# +
# True Label descriptions
mlb.inverse_transform(y_test)
# Printing per line
for x in mlb.inverse_transform(y_test):
print(x)
# +
# Seeing the corresponding X_test
X_test
# +
# Seeing the corresponding full sentences
df.loc[X_test.index, :]
# -
# ---
# ## Model Evaluation
# ### 1. Jaccard Score
# +
# From sklearn library
jaccard_score(y_test, y_pred, average='samples')
# +
# Manual
def jaccard_score_manual(y_true, y_pred):
    # Per-sample Jaccard score: element-wise np.minimum gives the
    # intersection (1 only where both arrays have a 1) and np.maximum
    # gives the union (1 where either array has a 1).
    # Edge case: a sample with no labels in either array would give 0/0
    # here; with multilabel data it is fair to assume every sample being
    # compared has at least one label.
    jaccard = np.minimum(y_true, y_pred).sum(axis=1) / np.maximum(y_true, y_pred).sum(axis=1)
    print(jaccard)
    return jaccard.mean()
# -
# Call
jaccard_score_manual(y_test, y_pred)
# +
# Checking manual result with sklearn's jaccard score
assert np.isclose(jaccard_score(y_test, y_pred, average='samples'), jaccard_score_manual(y_test, y_pred)), "Jaccard manual and sklearn mismatch"
# -
y_test
y_pred
np.minimum(y_test, y_pred).sum(axis=1)
np.minimum(y_test, y_pred)
np.maximum(y_test, y_pred).sum(axis=1)
jaccard_score(np.array([[1,1]]), np.array([[1,1]]), average='samples')
# +
# Weighted Jaccard score
# +
# From sklearn library
jaccard_score(y_test, y_pred, average='weighted')
# -
def jaccard_score_manual_weighted(y_true, y_pred):
    # sklearn's average='weighted' computes Jaccard per label (column-wise)
    # and averages the per-label scores weighted by each label's support
    intersect = np.minimum(y_true, y_pred).sum(axis=0)
    union = np.maximum(y_true, y_pred).sum(axis=0)
    per_label = np.divide(intersect, union, out=np.zeros(len(union), dtype=float), where=union != 0)
    support = y_true.sum(axis=0)
    print(per_label)
    return (per_label * support).sum() / support.sum()
# Call: Weighted
jaccard_score_manual_weighted(y_test, y_pred)
# ### 2. Average Precision
# ---
# ## Model Coefficients
idx = 0
mo.estimators_[idx].coef_
df_coeffs = pd.DataFrame({'Features':tfidf_feature_names,
'Coefficients':mo.estimators_[idx].coef_[0]})
# # TO DO: Place idx also in above df so you have 1 df of coefficients across all models (idx=0 and idx=1)
# ---
| multilabel_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Quantum Approximate Optimization Algorithm
# +
import itertools
import qiskit
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from numpy import *
from qiskit import BasicAer, IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.compiler import transpile
from qiskit.tools.visualization import plot_histogram
from qiskit.quantum_info.analysis import *
from qiskit.tools.visualization import circuit_drawer, plot_bloch_multivector
pi = np.pi
# -
backend = BasicAer.get_backend('qasm_simulator')
shots = 99999
# <img style="float:center;" src="./maxcut.png" width = "20%">
#
# 
# <img style="float:center;" src="./maxcut2.png" width = "20%">
# $$|z_{i}\rangle = |1\rangle , |0\rangle$$
#
#
# $$|s\rangle = |z_{1}\rangle \otimes |z_{2}\rangle \otimes |z_{3}\rangle \otimes |z_{4}\rangle \otimes |z_{5}\rangle
# = |z_{1}z_{2}z_{3}z_{4}z_{5}\rangle$$
#
# Objective operator:
#
# $$C = \sum_{\langle ij \rangle}C_{\langle ij \rangle}$$
#
# $$ C_{\langle ij \rangle} |z_{i}z_{j} \rangle = 1 \ |z_{i}z_{j} \rangle , \ z_{i} \neq z_{j}
# \\ C_{\langle ij \rangle} |z_{i}z_{j} \rangle = 0 \ |z_{i}z_{j} \rangle , \ z_{i} = z_{j}$$
# $$ U(C,\alpha) = e^{-i \alpha C} = e^{-i \ \alpha \ \sum_{\langle ij \rangle}C_{\langle ij \rangle}} = \prod_{\langle ij \rangle} e^{- i \alpha C_{\langle ij \rangle} } $$
#
# $$ e^{- i \alpha C_{\langle ij \rangle} } \vert 00 \rangle = \vert 00 \rangle \\
# e^{- i \alpha C_{\langle ij \rangle} } \vert 11 \rangle = \vert 11 \rangle \\
# e^{- i \alpha C_{\langle ij \rangle} } \vert 10 \rangle = e^{- i \alpha} \vert 10 \rangle\\
# e^{- i \alpha C_{\langle ij \rangle} } \vert 01 \rangle = e^{- i \alpha} \vert 01 \rangle$$
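# The phase-separator action above can be checked numerically against the CX-RZ-CX decomposition used in `uz` below (a numpy sketch; the big-endian qubit ordering and stripping the global phase via the $|00\rangle$ amplitude are conventions of this example):

```python
import numpy as np

alpha = 0.7
# CX with the left qubit as control, in the basis |00>, |01>, |10>, |11>
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)
# RZ(theta) on the right qubit; theta = -alpha reproduces exp(-i*alpha*C_ij)
theta = -alpha
RZ = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
U = CX @ np.kron(np.eye(2), RZ) @ CX
U = U / U[0, 0]  # strip the global phase so |00> acquires no phase
expected = np.diag([1, np.exp(-1j * alpha), np.exp(-1j * alpha), 1])
print(np.allclose(U, expected))  # -> True
```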
# $$ B=\sum_{j=1}^{n} \sigma_{j}^{x} \\
# U(B,\beta) = e^{-i\beta B} = e^{-i\beta \sum_{j=1}^{n} \sigma_{j}^{x} }$$
# $$|\psi\rangle = \sum_{z\in\{0,1\}^n} a_{z} \vert z \rangle$$
#
# $$|\vec{\alpha},\vec{\beta}\rangle = U(B,\beta_{p})U(C,\alpha_{p})\cdots U(B,\beta_{2})U(C,\alpha_{2})U(B,\beta_{1})U(C,\alpha_{1})|\psi\rangle $$
#
# $$ \vec{\alpha} = (\alpha_{1},\alpha_{2},\cdots,\alpha_{p})$$
#
# $$ \vec{\beta} = (\beta_{1},\beta_{2},\cdots,\beta_{p})$$
# ### Initiation
# $$|\psi_{0}\rangle = |00000\rangle$$
#
# $$ H|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}$$
#
# $$H^{\otimes n}|\psi_{0}\rangle = (\frac{|0\rangle + |1\rangle}{\sqrt{2}})^{\otimes n} $$
graph = [[0,1],[1,4],[1,2],[2,3],[3,4],[4,0]]
node = QuantumRegister(5)
aux = QuantumRegister(1)
creq = ClassicalRegister(5)
circ = QuantumCircuit(node,aux,creq)
circ.h(node)
circ.draw(output = 'mpl')
circ.measure(node, creq)
results = execute(circ, backend = backend, shots = shots).result()
answer = results.get_counts()
plot_histogram(answer, figsize=(20,10))
# ### Implementation
# +
def uz(circ, node, aux, graph, a): # theta takes value in [0, pi]
step_len = 1/100
for[i,j] in graph:
circ.cx(node[i],node[j])
circ.rz(-4*step_len*pi*a,node[j])
circ.cx(node[i],node[j])
circ.rz(4*step_len*pi*a,aux)
circ.barrier()
def ux(circ,node,b): # theta takes value in [0, pi]
step_len = 1/100
circ.rx(4*step_len*pi*b, node)
circ.barrier()
def maxcut(circ, node, aux, creq,graph, ang_zip):
for a,b in ang_zip:
uz(circ, node, aux, graph, a)
ux(circ, node, b)
circ.measure(node, creq)
def mean(graph,answer):
sum2 = 0
for k,v in answer.items():
sum1 = 0
for [i,j] in graph:
if k[i] != k[j]:
sum1 += 1
sum2 += sum1*v
mean = sum2/shots
return(mean)
def outcome(n,graph,ang1,ang2):
ang_zip = zip(ang1,ang2)
node = QuantumRegister(n)
aux = QuantumRegister(1)
creq = ClassicalRegister(n)
circ = QuantumCircuit(node,aux,creq)
circ.h(node)
maxcut(circ, node, aux, creq,graph, ang_zip)
results = execute(circ, backend = backend, shots = shots).result()
answer = results.get_counts()
out = mean(graph,answer)
return(out)
# -
# $$p=1$$
# $$\vec{\alpha} = (39) \\
# \vec{\beta} = (45) $$
#
# $$|\vec{\alpha},\vec{\beta}\rangle = U(B,\beta_{1})U(C,\alpha_{1})|\psi\rangle $$
circ = QuantumCircuit(node,aux,creq)
circ.h(node)
a=[39]
b=[45]
ang_zip = zip(a,b)
maxcut(circ, node, aux, creq,graph, ang_zip)
circ.draw(output = 'mpl')
results = execute(circ, backend = backend, shots = shots).result()
answer = results.get_counts()
plot_histogram(answer, figsize=(20,10))
# $$ C_{mean} = \langle \vec{\alpha},\vec{\beta}| C|\vec{\alpha},\vec{\beta}\rangle$$
C_mean = outcome(5,graph,a,b)
C_mean
| QAOA Pre.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DataScience Course $\circ$ Pandas $\circ$ String
# By Dr <NAME>
# <img src="https://1.bp.blogspot.com/-J1gxIX6Cp9w/Wy-lUTT-LqI/AAAAAAADIss/i8oWgsmB9OY8nRMc21XH_BA7AUCGkuGFgCLcBGAs/s1600/logo2.jpg" width=600>
# <hr>
# ### Topics
# - Recall vectorized operations on numerical data with numpy
# - List comprehension for string
# - pandas `.str` to the rescue
# - `lower()`, `upper()`, `capitalize()`, `startswith()`, `split()`
# - `get()`, slicing syntax `[...]`
# - `extract()`, `findall()`
# +
import pandas as pd
import numpy as np
import seaborn as sns
# %matplotlib inline
ls_str = ["Analytical Thinking",
"Business Understanding",
"Mathematics",
"Presentation and Communication",
"Programming",
"Statistics",
"Visualization" ]
countries = pd.read_csv("../data/GlobalLandTemperaturesByCountry.csv.zip", usecols=['Country'])['Country'].drop_duplicates().reset_index(drop=True)
len(countries)
# -
# ## Exercises
# - Run provided codes prior to the exercise question
# - Add a **new cell** to try your code
# - Type in Python code in the box to get the results as displayed right below
#
# __Warning__: results will be replaced, so make sure you add a new cell to try your code
# **$\Rightarrow$
# Use list comprehension and string method `.upper()` to make all the strings in `ls_str` upper case
# **
# **$\Rightarrow$
# Use vectorized string method in `pandas` to make each string in `ls_str` lower case
# **
# **$\Rightarrow$
# Create a `pd.Series` object from `ls_str` and call it `skills`. Then, print it out to observe.
# **
# **$\Rightarrow$
# Use the `str` attribute to access the first letter in each string in `skills`
# **
# **$\Rightarrow$
# Use the `str` attribute and a vectorized string method to split each string by a blank space (i.e. " ")
# **
# - View the result
# - Count the number of words in each string
# **$\Rightarrow$
# List all the country names in `countries` that start with `T`, and end with a vowel (i.e. `[a, e, i, o, u]`)
# **
# **$\Rightarrow$
# What are the five most common starting letters of the country names?
# **
#optional
# +
# optional
# -
# **$\Rightarrow$
# What are the "shortest" and "longest" country names?
# **
# <hr>
# ## Finished!
| pynotes/e24_Pandas_String.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Modeling - Statistical Modeling
import statsmodels.api as sm
import numpy as np
predictors = np.random.random(1000).reshape(500,2)
target = predictors.dot(np.array([0.4, 0.6])) + np.random.random(500)
lmRegModel = sm.OLS(target,predictors)
result = lmRegModel.fit()
result
result.summary()
print(result.summary())
# ### Prediction (out of sample)
# %matplotlib inline
# +
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
# -
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1-5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
y
X
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
olsres.summary()
ypred = olsres.predict(X)
print(ypred)
x1n = np.linspace(20.5,25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n-5)**2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
#
# +
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x1, y, 'o', label="Data")
ax.plot(x1, y_true, 'b-', label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), 'r', label="OLS prediction")
ax.legend(loc="best");
# -
| data-modeling_statistical-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.5
# language: julia
# name: julia-0.4
# ---
;cat lin_interp.jl
include("lin_interp.jl")
grid = [0, 2, 4, 6, 8, 10]
vals = [1, 4, 5, 8, 9, 11]
ellenlovespola=MyLinInterp.LinearInterpolation(grid,vals)
ellenlovespola([1,7,9])
# +
using PyPlot
f(x)=log(x)
a=1
b=13
grid=linspace(a,b,5)
vals=f(grid)
g=MyLinInterp.LinearInterpolation(grid,vals)
grid2=linspace(a,b,100)
vals2=f(grid2)
plot(grid2,vals2, "b-", label="log(x)")
plot(grid2,g(grid2), "g-", label="kinji")
legend()
# -
Pkg.clone("https://github.com/ellenjunghyunkim/MyInterpolations.jl")
Pkg.checkout("MyInterpolations", "master")
Pkg.rm("MyInterpolations")
Pkg.clone("https://github.com/ellenjunghyunkim/MyInterpolations.jl")
using MyInterpolations
| demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Basics
# +
import pandas as pd
data = pd.read_csv('/data/restaurants.tsv', delimiter='\t')
data.head(50)
# -
data.describe()
data.info()
# +
unique_addresses = data.address.unique()
token_count = {}
for a in unique_addresses:
tokens = a.split(' ')
for t in tokens:
if t not in token_count:
token_count[t] = 1
else:
token_count[t] += 1
for key, value in sorted(token_count.items(), key=lambda item: item[1]):
print("%s: %s" % (key, value))
# -
def printuniques(series):
l = series.unique()
l.sort()
print("### " + series.name)
print(l)
printuniques(data.name)
printuniques(data.city)
#printuniques(data.phone)
#printuniques(data["type"])
# +
import textdistance as td
import numpy as np
unames = data.name.unique()
unames.sort()
unames
list_d = [(p1, p2, td.hamming(p1, p2)) for p1 in unames for p2 in unames]
list(list_d)
# -
newlist = filter(lambda x: x[2] > 0 and x[2] < 10, list_d)
newlist = list(newlist)
newlist.sort(key = lambda x: x[2])
newlist
# +
import textdistance as td
def get_close(series):
u = series.unique()
u.sort()
    lu = [(p1, p2, 666 if p1 == p2 else td.hamming(p1, p2)) for p1 in u for p2 in u]  # 666 flags self-pairs
nlu = list(filter(lambda x: x[2] > 0 and x[2] < 5, lu))
nlu.sort(key = lambda x: x[2])
return nlu
# +
locateddata = data.copy()
locateddata
import googlemaps
gmaps = googlemaps.Client(key="<KEY>")
def dogeocode(str):
#return gmaps.geocode(str)
return "lol " + str
addresses = locateddata.address.unique()
wow = {}
for a in addresses:
wow[a] = dogeocode(a)
#locateddata["geocode"] = locateddata.address.map(lambda x: wow[x])
#locateddata.to_csv(r'/data/restaurants_geocoded.csv')
# -
locdata = pd.read_csv('/data/restaurants_geocoded.csv')
locdata
locdata[locdata["geocode"] == "[]"]
# +
def calcnew(x):
if x.geocode == "[]":
return str(gmaps.geocode(x["address"] + ", " + x["city"]))
return x.geocode
locdata.geocode = locdata.apply(calcnew, axis=1)
#locateddata.to_csv(r'/data/restaurants_geocoded.csv')
# +
#pd.read_json(locdata.loc[0].geocode, orient="index")
import re
import json
def load_dirty_json(dirty_json):
regex_replace = [(r"([ \{,:\[])(u)?'([^']*)'", r'\1"\3"'), (r" False([, \}\]])", r' false\1'), (r" True([, \}\]])", r' true\1')]
for r, s in regex_replace:
dirty_json = re.sub(r, s, dirty_json)
    try:
        clean_json = json.loads(dirty_json)
    except:
        print(dirty_json)
        raise ValueError("Could not parse geocode string as JSON")
return clean_json
load_dirty_json(locdata.loc[0].geocode)[0]
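# Since the stored geocode strings are Python `repr` output rather than JSON, `ast.literal_eval` can usually parse them directly, without the regex rewriting above (a sketch; it assumes the strings contain only Python literals, and the sample string here is made up):

```python
import ast

dirty = "[{'formatted_address': u'New York, NY', 'partial_match': True}]"

# literal_eval safely evaluates Python literal syntax: single quotes,
# True/False, and u'' prefixes are all handled without regex surgery
parsed = ast.literal_eval(dirty)
print(parsed[0]['formatted_address'])  # New York, NY
```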
# +
locdata["lat"] = 0
locdata["lng"] = 0
def getloc(x, t):
if x == "[]":
return -1
return load_dirty_json(x)[0]["geometry"]["location"][t]
def getplaceid(x):
if x == "[]" or len(x) == 0:
return -1
return load_dirty_json(x)[0]["place_id"]
locdata["lat"] = locdata.geocode.map(lambda x: getloc(x, "lat"))
locdata["lng"] = locdata.geocode.map(lambda x: getloc(x, "lng"))
locdata["place_id"] = locdata.geocode.map(lambda x: getplaceid(x))
# -
#locdata.to_csv(r'/data/restaurants_geocoded2.csv')
locdata
locdata.groupby(["place_id"]).count()
# goal here is 752 rows
# find out what to do about the bar anise
locdata[locdata["phone"].str.startswith("212")]
# +
cleandata = data.copy()
cleandata = cleandata.drop(labels=['id'], axis=1)
cleandata.name = cleandata.name.str.replace('[^a-zA-Z0-9 ]', '', case=False)
cleandata.address = cleandata.address.str.replace('[^a-zA-Z0-9 ]', '', case=False)
cleandata.phone = cleandata.phone.str.replace('[^a-zA-Z0-9]', '', case=False)
cleandata.city = cleandata.city.str.replace('new york city', 'new york', case=False)
cleandata.type = cleandata.type.str.replace(' \(new\)', '', case=False)
#cleandata.city = cleandata.city.str.replace('studio city', 'los angeles', case=False)
cleandata = cleandata.drop_duplicates()
cleandata = cleandata.groupby(['name', 'address', 'city', 'phone'])["type"].apply(list).reset_index()
cleandata
# -
# Finding dirty data!
cleandata[cleandata.phone.str.match("^[0-9]{1,9}$")]
cleandata[cleandata.duplicated(['address'], keep=False)]
cleandata[cleandata["city"] == "w. hollywood"]
| notebooks/restaurant-playground.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:data_sharing_reuse] *
# language: python
# name: conda-env-data_sharing_reuse-py
# ---
# +
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.feature_extraction import stop_words
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc, f1_score, precision_score, recall_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn import tree
from imblearn.ensemble import EasyEnsembleClassifier
import re
from urlextract import URLExtract
from scipy.sparse import hstack
import numpy as np
import json
import pickle
import sys
import seaborn as sns
import pydotplus
from IPython.display import Image
sys.path.append('..')
from src.features.build_features import syns, sep_urls, check_paren, repo_label
from src.data.make_dataset import return_passages, test_suitability
# -
# # Sequencing Classifiers
#
# In this notebook, I'm adding a second-level classifier to see how things work out. Basically, we're using a classifier very similar to the high-recall one we used last time, but now we're bolting a secondary classifier on after that. This secondary classifier will use some hand-built features along with the predicted probability from the first classifier.
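# The two-level idea can also be expressed with scikit-learn's built-in stacking, where out-of-fold predicted probabilities from the first level feed the second automatically (a minimal sketch on synthetic data; `passthrough=True` stands in for the hand-built features, and nothing here uses this notebook's actual data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=6, random_state=42)

# Level 1: base classifier; its out-of-fold probabilities (computed via
# internal cross-validation) become features for the level-2 estimator
stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(random_state=42))],
    final_estimator=LogisticRegression(),
    passthrough=True,  # also hand the raw features to the final estimator
    cv=3,
)
stack.fit(X, y)
print(stack.predict(X[:5]))
```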
def code_kw(text):
passage_marked = 0
    reg_matches = re.compile(r"""(software)|(tool)|(code)|(package)|(\sR\s)|(python)|
                             (matlab)|(SPM8)|(implement.)""", re.VERBOSE)
m = re.search(reg_matches, text.lower())
if m:
return(1)
else:
return(0)
# ## Adam's labels
# Here, we're looking at the results of labeling the data. These are the labels Adam provided in the last round.
df_labeled = pd.read_csv('/data/riddleta/data_sharing_reuse/interim/high_recall_labelling - high_recall_labelling.csv')
df_labeled['recoded_labels'] = df_labeled.n2.replace({'c':0, 'n':0, '2':1, 'd':1, 'n2':0, 'nd':0})
df_labeled.recoded_labels.value_counts()
#159 instances of data statements last time
# How do those data statements look with respect to the presence/absence of keywords that would indicate that it is code that is being shared?
df_labeled['kw_code'] = df_labeled.text.apply(lambda x: code_kw(x))
pd.crosstab(df_labeled.kw_code, df_labeled.n2)
df_labeled.n2.value_counts()
# ## Rerunning the last classifier
# Now we rerun the classifier on the original training sample (before the new labels). This is just to familiarize us with what we were working with.
extract = URLExtract()
df = pd.read_csv('/data/riddleta/data_sharing_reuse/external/combined_labels_incomplete.csv')
df.text.fillna('', inplace=True)
df['has_url'] = df.text.apply(lambda x: extract.has_urls(x))
df['has_parenth'] = df.text.apply(lambda x: check_paren(x))
df['repo'] = df.text.apply(lambda x: repo_label(x))
df['text'] = df.text.apply(lambda x: sep_urls(x))
df['syn_text'] = df.text.apply(lambda x: syns(x))
df['all_text'] = df.text + ' ' + df.syn_text
# +
cv = CountVectorizer(stop_words=stop_words.ENGLISH_STOP_WORDS)
enc = OneHotEncoder(handle_unknown='ignore')
x_tr, x_tst, y_tr, y_tst = train_test_split(df.all_text, df.data_statement, test_size=.25, random_state=42, stratify=df.data_statement)
# +
x_train = cv.fit_transform(x_tr)
one_hots_train = enc.fit_transform(df[['section', 'Journal Title', 'Year', 'has_url', 'has_parenth', 'repo']].loc[x_tr.index])
y_train = df.data_statement[x_tr.index]
x_test = cv.transform(df.all_text[x_tst.index])
one_hots_test = enc.transform(df[['section', 'Journal Title', 'Year', 'has_url', 'has_parenth', 'repo']].loc[x_tst.index])
y_test = df.data_statement[x_tst.index]
x_train = hstack([x_train, one_hots_train])
x_test = hstack([x_test, one_hots_test])
#x_res, y_res = ros.fit_resample(x_train, y_train)
clf = EasyEnsembleClassifier()
y_score = clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
y_pred_proba = clf.predict_proba(x_test)
print(pd.crosstab(y_test, y_pred, rownames=['True'], colnames=['Predicted']))
print(classification_report(y_test, y_pred))
# -
# ## Evaluating first-level classifiers
#
# Now add in the new labels. The second level classifier is going to use predicted probabilities from the first, so we're going to switch to a 3-fold cross validation scheme, using the predicted probabilities for the held out fold as the input for the next level. I also tried a few different algorithms here to see if any of them yielded superior downstream results.
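# The out-of-fold probability scheme used below can be written compactly with `cross_val_predict` (a sketch on synthetic data, mirroring the stratified 3-fold loop that follows):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=300, random_state=0)

# Each sample's probability comes from the fold where it was held out,
# so no first-level prediction leaks into the second-level training data
oof_proba = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y,
    cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0),
    method='predict_proba',
)[:, 1]
print(oof_proba.shape)  # (300,)
```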
df_labeled['data_statement'] = df_labeled.n2.replace({'c':0, 'n':0, '2':1,
'd':1, 'n2':0, 'nd':0})
df = pd.read_csv('/data/riddleta/data_sharing_reuse/external/combined_labels_incomplete.csv')
df = pd.concat([df[['text', 'section', 'doi', 'Journal Title',
'pmcid', 'data_statement']],
df_labeled[['text', 'section', 'doi', 'Journal Title',
'pmcid', 'data_statement']]])
df.text.fillna('', inplace=True)
df.shape
df_nimh = pd.read_csv('/data/riddleta/data_sharing_reuse/external/nimh_papers.csv')
df_nimh['Year'] = df_nimh['journal_year']
df_nimh = df_nimh[['pmcid', 'Year']].drop_duplicates()
df = df.merge(df_nimh, how='left', on='pmcid')
df['has_url'] = df.text.apply(lambda x: extract.has_urls(x))
df['has_parenth'] = df.text.apply(lambda x: check_paren(x))
df['repo'] = df.text.apply(lambda x: repo_label(x))
df['text'] = df.text.apply(lambda x: sep_urls(x))
df['syn_text'] = df.text.apply(lambda x: syns(x))
df['all_text'] = df.text + ' ' + df.syn_text
# +
kfold = StratifiedKFold(n_splits=3, shuffle=True)
cv = CountVectorizer(stop_words=stop_words.ENGLISH_STOP_WORDS)
enc = OneHotEncoder(handle_unknown='ignore')
df['pred_prob1'] = 0
df['pred1'] = 0
df['pred_prob2'] = 0
df['pred2'] = 0
df['pred_prob3'] = 0
df['pred3'] = 0
df['pred_prob4'] = 0
df['pred4'] = 0
df['kw_code'] = df.text.apply(lambda x: code_kw(x))
for train_index, test_index in kfold.split(df.all_text, df.data_statement):
x_train1 = cv.fit_transform(df.all_text[train_index])
one_hots_train1 = enc.fit_transform(df[['section', 'Journal Title', 'Year', 'has_url', 'has_parenth', 'repo']].iloc[train_index])
y_train = df.data_statement[train_index]
x_test1 = cv.transform(df.all_text[test_index])
one_hots_test1 = enc.transform(df[['section', 'Journal Title', 'Year', 'has_url', 'has_parenth', 'repo']].iloc[test_index])
#y_test = df.data_statement[test_index]
one_hots_train2 = enc.fit_transform(df[['has_url', 'has_parenth', 'repo']].iloc[train_index])
one_hots_test2 = enc.transform(df[['has_url', 'has_parenth', 'repo']].iloc[test_index])
x_train1 = hstack([x_train1, one_hots_train1])
x_test1 = hstack([x_test1, one_hots_test1])
clf1 = EasyEnsembleClassifier()
clf2 = RandomForestClassifier()
clf3 = LogisticRegression()
clf4 = SVC(class_weight='balanced', probability=True)
y_score1 = clf1.fit(x_train1, y_train)
y_score2 = clf2.fit(one_hots_train2, y_train)
y_score3 = clf3.fit(one_hots_train2, y_train)
y_score4 = clf4.fit(x_train1, y_train)
    # Assign with df.loc[rows, col] rather than chained indexing, which can
    # silently write to a copy
    df.loc[test_index, 'pred_prob1'] = clf1.predict_proba(x_test1)[:, 1]
    df.loc[test_index, 'pred1'] = clf1.predict(x_test1)
    df.loc[test_index, 'pred_prob2'] = clf2.predict_proba(one_hots_test2)[:, 1]
    df.loc[test_index, 'pred2'] = clf2.predict(one_hots_test2)
    df.loc[test_index, 'pred_prob3'] = clf3.predict_proba(one_hots_test2)[:, 1]
    df.loc[test_index, 'pred3'] = clf3.predict(one_hots_test2)
    df.loc[test_index, 'pred_prob4'] = clf4.predict_proba(x_test1)[:, 1]
    df.loc[test_index, 'pred4'] = clf4.predict(x_test1)
print('******** Below are the results aggregated across the folds ***********')
print(pd.crosstab(df.data_statement, df.pred1, rownames=['True'], colnames=['Predicted_EZ_ensemble']))
print(classification_report(df.data_statement, df.pred1))
print(pd.crosstab(df.data_statement, df.pred2, rownames=['True'], colnames=['Predicted_RandomForest']))
print(classification_report(df.data_statement, df.pred2))
print(pd.crosstab(df.data_statement, df.pred3, rownames=['True'], colnames=['Predicted_LogisticRegression']))
print(classification_report(df.data_statement, df.pred3))
print(pd.crosstab(df.data_statement, df.pred4, rownames=['True'], colnames=['Predicted_SVM']))
print(classification_report(df.data_statement, df.pred4))
# -
plot_dat = df[['pred_prob1', 'pred_prob2', 'pred_prob3', 'pred_prob4', 'data_statement']]
sns.pairplot(plot_dat, hue='data_statement')
df[['pred_prob1', 'pred_prob2', 'pred_prob3', 'pred_prob4', 'data_statement', 'text']].to_csv('/home/riddleta/temp_file.csv', index=False)
# ## Second level classifier
#
# I looked at some of the predictions from the last batch to get a sense of what kinds of statements were being missed. It was a combination of mis-identifying software (again) and missing pretty obvious data statements. I thought I would just make a simple set of keywords that appeared to most closely align with those categories.
#
# To the keywords, I added some of the features we identified previously as likely to be associated with repositories (e.g. a keyword search of repos, whether there is a parenthetical, and whether there is a URL). After iteratively testing each of the predicted probabilities from the models above (plus all of them together), using the probabilities from the SVM alone resulted in the highest F1 score for the positive class. That's what I've implemented here. I also tried a couple of different classifiers for this, but settled on Logistic Regression as it seems to do the best. Obviously, I haven't done any tuning of hyperparameters here.
# +
keywords = ['''software tool code package R python matlab SPM8 implement data
available dataset provided obtained deposited repository database
download downloaded release released accession submit submitted
public publically''']
cv.fit_transform(keywords)
# +
df['kw_code'] = df.text.apply(lambda x: code_kw(x))
# NOTE: data_kw was never defined in this notebook; a minimal analogue of code_kw (assumption)
data_kw = lambda t: 1 if re.search(r"(data)|(available)|(deposited)|(repository)|(database)|(download)|(accession)", t.lower()) else 0
df['kw_data'] = df.text.apply(lambda x: data_kw(x))
x_tr, x_tst, y_tr, y_tst = train_test_split(df.kw_code, df.data_statement, test_size=.33, random_state=42, stratify=df.data_statement)
cv_tr = cv.transform(df.text.loc[x_tr.index])
cv_tst = cv.transform(df.text.loc[x_tst.index])
one_hots_train = enc.fit_transform(df[['repo', 'has_parenth', 'has_url']].iloc[x_tr.index])
one_hots_test = enc.transform(df[['repo', 'has_parenth', 'has_url']].iloc[x_tst.index])
pred_probs_tr = df.pred_prob4.loc[x_tr.index]
pred_probs_tst = df.pred_prob4.loc[x_tst.index]
#x_tr = df[['pred_prob1', 'pred_prob2', 'pred_prob3', 'pred_prob4']].loc[x_tr.index]
#x_tst = df[['pred_prob1', 'pred_prob2', 'pred_prob3', 'pred_prob4']].loc[x_tst.index]
x_tr = pd.concat([pd.DataFrame(cv_tr.todense()), pd.DataFrame(one_hots_train.todense()), pred_probs_tr.reset_index()], axis=1)
x_tst = pd.concat([pd.DataFrame(cv_tst.todense()), pd.DataFrame(one_hots_test.todense()), pred_probs_tst.reset_index()], axis=1)
#x_tr = pd.concat([pd.DataFrame(cv_tr.todense()), pd.DataFrame(one_hots_train.todense()), x_tr], axis=1)
#x_tst = pd.concat([pd.DataFrame(cv_tst.todense()), pd.DataFrame(one_hots_test.todense()), x_tst], axis=1)
#x_tr = hstack([x_tr, one_hots_train, cv_tr])
#x_tst = hstack([x_tst, one_hots_test, cv_tst])
clf_log = LogisticRegression()
clf_log.fit(x_tr, y_tr)
y_pred = clf_log.predict(x_tst)
print(pd.crosstab(y_tst, y_pred, rownames=['True'], colnames=['Predicted']))
print(classification_report(y_tst, y_pred))
# -
# ## Next steps
#
# Precision, recall, and F1 are all in the mid-to-upper .8 range. When I set out on this project, I envisioned roughly this as the upper limit of what we could do, with .9 as a near-absolute maximum. To be clear, this is not a clean test: I've iteratively trained and tested a series of classifiers on the same data without a dedicated hold-out set, so these numbers are likely somewhat inflated. Still, I find it encouraging, and we are close to having a usable system.
#
# I think the next thing to do is to have another round of labeling and see how this pipeline performs on those labels.
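# The K-fold loop below assigns each row a probability from the fold in which it was held out. On a fixed feature matrix, the same out-of-fold pattern can be written more compactly with `cross_val_predict`; a sketch on synthetic data (not the project's features):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=200, n_features=10, random_state=42)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Each row's probability comes from the fold in which it was held out
oof_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=skf, method='predict_proba')[:, 1]
print(oof_probs.shape)  # one out-of-fold probability per sample
```

# The manual loop is still needed here because the encoder and vectorizer are refit per fold, which `cross_val_predict` alone does not do (that would require a `Pipeline`).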
df['pred_prob_final'] = 0
for train_index, test_index in kfold.split(df.text, df.data_statement):
    x_train = cv.transform(df.text[train_index])
    one_hots_train = enc.fit_transform(df[['repo', 'has_parenth', 'has_url']].iloc[train_index])
    y_train = df.data_statement[train_index]
    x_test = cv.transform(df.text[test_index])
    one_hots_test = enc.transform(df[['repo', 'has_parenth', 'has_url']].iloc[test_index])
    y_test = df.data_statement[test_index]
    pred_probs_tr = df.pred_prob4.loc[train_index]
    pred_probs_tst = df.pred_prob4.loc[test_index]
    cv_tr_dense = pd.DataFrame(x_train.todense())
    one_hots_train_dense = pd.DataFrame(one_hots_train.todense())
    pred_probs_tr = pred_probs_tr.reset_index(drop=True)
    cv_tst_dense = pd.DataFrame(x_test.todense())
    one_hots_tst_dense = pd.DataFrame(one_hots_test.todense())
    pred_probs_tst = pred_probs_tst.reset_index(drop=True)
    x_tr = pd.concat([cv_tr_dense, one_hots_train_dense, pred_probs_tr], axis=1)
    x_tst = pd.concat([cv_tst_dense, one_hots_tst_dense, pred_probs_tst], axis=1)
    clf = LogisticRegression()
    clf.fit(x_tr, y_train)
    # assign via .loc on the frame directly to avoid chained-assignment issues
    df.loc[test_index, 'pred_prob_final'] = clf.predict_proba(x_tst)[:, 1]
df[['text', 'data_statement', 'pred_prob1', 'pred_prob_final']].to_csv('/home/riddleta/tempfile.csv', index=False)
# ## below here is appendix code
#
# After this, I use the high-recall classifier to apply labels to the population of papers. I didn't do much with this beyond inspecting the results.
nimh_papers = pd.read_csv('/data/riddleta/data_sharing_reuse/external/nimh_papers.csv')
#load file index
file_ix = pd.read_csv('/data/riddleta/data_sharing_reuse/external/file_index.csv')
file_ix['pmcid'] = file_ix.pmcid.astype('str')
nimh_papers['pmcid'] = nimh_papers.pmcid.astype('str')
target_papers = file_ix[file_ix.pmcid.isin(nimh_papers.pmcid)]
target_papers.shape
target_papers = target_papers.sort_values('file')
status_prints = range(0, len(target_papers.file.tolist()), 250)
len(status_prints)
data_collect = []
last_file = np.nan
for i, file in enumerate(target_papers.file.tolist()):
    if i in status_prints:
        print(i)
    if file == last_file:
        paper = dat[target_papers.paper_number.iloc[i]]
        out_dat = return_passages(paper)
        data_collect.extend(out_dat)
    else:
        with open(file) as infile:
            dat = json.load(infile)
        paper = dat[target_papers.paper_number.iloc[i]]
        out_dat = return_passages(paper)
        data_collect.extend(out_dat)
    last_file = file
df_pool = pd.DataFrame(data_collect)
df_pool.columns = ['context', 'paper_offset', 'pmcid', 'doi', 'section']
tk_file = open('/data/riddleta/data_sharing_reuse/external/tokenizer.pk', 'rb')
tokenizer = pickle.load(tk_file)
tk_file.close()
df_pool['context'] = df_pool.context.apply(lambda x: tokenizer.tokenize(x))
df_pool = df_pool.explode('context')
df_pool.shape  # all sentences: 18406892
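# The tokenize-then-explode step above turns one row per passage into one row per sentence, repeating the metadata columns for each sentence. A toy example of the pattern:

```python
import pandas as pd

toy = pd.DataFrame({'pmcid': ['1', '2'],
                    'context': [['First sentence.', 'Second sentence.'],
                                ['Only sentence.']]})
# Each list element becomes its own row; other columns are duplicated
sentences = toy.explode('context')
print(sentences.shape)  # -> (3, 2)
```
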
df_pool = df_pool[~df_pool.section.isin(['REF', 'TABLE', 'TITLE'])]
df_pmcids = pd.read_csv('/data/riddleta/data_sharing_reuse/external/PMC-ids.csv')
df_pmcids['pmcid'] = df_pmcids.PMCID.apply(lambda x: str(x)[3:])
df_pool = df_pool.merge(df_pmcids, how='left', on='pmcid')
df_pool['pmcid'] = df_pool.pmcid.astype('str')
df_pool['offset'] = df_pool.paper_offset.astype('str')
df_pool['pmcid-offset'] = df_pool.apply(lambda x: x['pmcid']+'-'+x['offset'], axis=1)
df_pool['context'] = df_pool.context.astype('str')
df_pool['text'] = df_pool.context.apply(lambda x: sep_urls(x))
df_pool['syn_text'] = df_pool.text.apply(lambda x: syns(x))
df_pool['all_text'] = df_pool.text + ' ' + df_pool.syn_text
df_pool.text.fillna('', inplace=True)
df_pool['has_url'] = df_pool.text.apply(lambda x: extract.has_urls(x))
df_pool['has_parenth'] = df_pool.text.apply(lambda x: check_paren(x))
df_pool['repo'] = df_pool.text.apply(lambda x: repo_label(x))
df_pool.all_text.fillna('', inplace=True)
# +
x_pool = cv.transform(df_pool.all_text)
one_hots_pool = enc.transform(df_pool[['section', 'Journal Title', 'Year', 'has_url', 'has_parenth', 'repo']])
x_pool = hstack([x_pool, one_hots_pool])
y_pool_pred_prob = clf.predict_proba(x_pool)
y_pool_pred = clf.predict(x_pool)
# -
df_pool['data_sharing_pred_prob'] = y_pool_pred_prob[:,1]
df_pool['data_sharing_pred'] = y_pool_pred
df_data_statements = df_pool[df_pool.data_sharing_pred==1]
statements_to_label = df_data_statements.sample(n=500, random_state=42)
statements_to_label['kw_code'] = statements_to_label.text.apply(lambda x: code_kw(x))
sns.distplot(df_pool.data_sharing_pred_prob)
sns.distplot(statements_to_label.data_sharing_pred_prob)
# notebooks/07.0-TAR-second_level_clf.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Weather Prediction Using Recurrent Neural Networks
#
# ## Adrian, Ben, and Sai
# ### Imports
# +
# Data processing and functions
import pandas as pd
import numpy as np
import scipy as sp
from pandas import read_csv, Series, DataFrame  # pandas.datetime is deprecated; use the datetime module instead
from matplotlib import pyplot
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
from math import sqrt
# Analytics and modeling
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_predict
from sklearn import linear_model
from sklearn.model_selection import cross_val_score
from sklearn.decomposition import PCA
from sklearn import manifold
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report
import statsmodels.api as sm
import statsmodels.sandbox.tools.tools_pca as sm_pca
from statsmodels.formula.api import ols as sm_ols
from statsmodels.stats.anova import anova_lm as sm_anova
from patsy.contrasts import Treatment
from patsy import dmatrices
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import f1_score
from datetime import timedelta
# pandas.core.datetools was removed in newer pandas; date offsets live in pandas.tseries.offsets
# Graphing and visualizing
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from matplotlib import cm
from pylab import savefig
# Miscellaneous
from functools import reduce
import datetime
import timeit
import random
import sys
import os
from collections import defaultdict
# Neural Networks
import tensorflow as tf
from tensorflow.contrib import rnn
# Setting graphing preferences
sns.set(style="darkgrid", color_codes=True)
# Printing
import locale
locale.setlocale(locale.LC_ALL, '')
# Show plots inline
# %matplotlib inline
# -
# # Preprocessing
# ### Read in the files
# +
# Filenames
city_file = 'city_attributes.csv'
temp_file = 'temperature.csv'
humid_file = 'humidity.csv'
press_file = 'pressure.csv'
desc_file = 'weather_description.csv'
wdir_file = 'wind_direction.csv'
wspeed_file = 'wind_speed.csv'
# Load the files
city_df = pd.read_csv(city_file)
city_df.rename(str.lower, axis = 'columns', inplace = True)
city_df.drop(['country'], axis = 1, inplace = True)
city_df.set_index(['city'], inplace = True)
temp_df = pd.read_csv(temp_file)
humid_df = pd.read_csv(humid_file)
press_df = pd.read_csv(press_file)
desc_df = pd.read_csv(desc_file)
wdir_df = pd.read_csv(wdir_file)
wspeed_df = pd.read_csv(wspeed_file)
# -
# These are the cities that universally have > 1% missing across all weather values
weather_dfs = [temp_df, humid_df, press_df, desc_df, wdir_df, wspeed_df]
drop_city = set.intersection(*[set(d.columns[d.isna().sum() > 500]) for d in weather_dfs])
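# The cities-to-drop logic on toy frames: a column is dropped only if it exceeds the missing-value threshold in every table (intersection, not union):

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({'dt': [1, 2, 3], 'CityA': [np.nan, np.nan, 1], 'CityB': [1, 2, 3]})
b = pd.DataFrame({'dt': [1, 2, 3], 'CityA': [np.nan, np.nan, np.nan], 'CityB': [np.nan, 2, 3]})

threshold = 1  # drop columns with more than 1 missing value in every table
drop = set.intersection(*[set(d.columns[d.isna().sum() > threshold]) for d in (a, b)])
print(drop)  # -> {'CityA'}
```

# CityB is kept because it only crosses the threshold in one table.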
# +
# Remove the undesired cities and melt the tables to be conducive for joining
alt_temp_df = pd.melt(temp_df.drop(drop_city, axis = 1), id_vars = ['datetime'], var_name = 'city', value_name = 'temperature')
alt_humid_df = pd.melt(humid_df.drop(drop_city, axis = 1), id_vars = ['datetime'], var_name = 'city', value_name = 'humidity')
alt_press_df = pd.melt(press_df.drop(drop_city, axis = 1), id_vars = ['datetime'], var_name = 'city', value_name = 'pressure')
alt_desc_df = pd.melt(desc_df.drop(drop_city, axis = 1), id_vars = ['datetime'], var_name = 'city', value_name = 'weather_description')
alt_wdir_df = pd.melt(wdir_df.drop(drop_city, axis = 1), id_vars = ['datetime'], var_name = 'city', value_name = 'wind_direction')
alt_wspeed_df = pd.melt(wspeed_df.drop(drop_city, axis = 1), id_vars = ['datetime'], var_name = 'city', value_name = 'wind_speed')
# Set proper indices
alt_temp_df = alt_temp_df.set_index(['city', 'datetime'])
alt_humid_df = alt_humid_df.set_index(['city', 'datetime'])
alt_press_df = alt_press_df.set_index(['city', 'datetime'])
alt_desc_df = alt_desc_df.set_index(['city', 'datetime'])
alt_wdir_df = alt_wdir_df.set_index(['city', 'datetime'])
alt_wspeed_df = alt_wspeed_df.set_index(['city', 'datetime'])
# -
# ### Join tables together
# Join tables on the city and datetime info
dfs = [city_df, alt_temp_df, alt_humid_df, alt_press_df, alt_wspeed_df, alt_wdir_df, alt_desc_df]
df_final = reduce(lambda left, right : pd.merge(left, right, left_index = True, right_index = True), dfs)
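# The `reduce` call above folds a list of frames into one by merging pairwise on the shared index. A toy version with single-row frames on a common `city` index:

```python
from functools import reduce
import pandas as pd

t = pd.DataFrame({'temperature': [20.0]}, index=pd.Index(['NYC'], name='city'))
h = pd.DataFrame({'humidity': [55.0]}, index=pd.Index(['NYC'], name='city'))
w = pd.DataFrame({'wind_speed': [3.0]}, index=pd.Index(['NYC'], name='city'))

# Merge left-to-right: ((t ⋈ h) ⋈ w)
merged = reduce(lambda left, right: pd.merge(left, right,
                                             left_index=True, right_index=True),
                [t, h, w])
print(list(merged.columns))  # -> ['temperature', 'humidity', 'wind_speed']
```
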
# ### Deal with Missing Values
# +
# Get number of nulls for Charlotte - SUPER CONVOLUTED, but it works
temp = df_final.reset_index()
temp = temp[temp.city == "Charlotte"]
temp.isnull().sum()
#city 0
#datetime 0
#latitude 0
#longitude 0
#temperature 3
#humidity 589
#pressure 3
#wind_speed 2
#wind_direction 1
#weather_description 1
#dtype: int64
# INTERPOLATION HAPPENS HERE -- Break up by city
df_final = df_final.groupby('city').apply(lambda group: group.interpolate(limit_direction = 'both'))
# Need to do something special for weather_description
arr, cat = df_final['weather_description'].factorize()
df_final['weather_description'] = pd.Series(arr).replace(-1, np.nan).interpolate(method = 'nearest', limit_direction = 'both').interpolate(limit_direction = 'both').astype('category').cat.rename_categories(cat).astype('str').values
# -
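# The factorize trick above fills missing categories by nearest-neighbour interpolation over the integer codes (missing values factorize to -1, which is swapped for NaN before interpolating). A toy example of the same chain (nearest interpolation requires scipy, as in the cell above):

```python
import numpy as np
import pandas as pd

# Toy series with missing categories
s = pd.Series(['rain', np.nan, 'rain', 'snow', np.nan, 'snow'])
codes, cats = s.factorize()  # missing values get code -1

filled = (pd.Series(codes, dtype=float)
            .replace(-1, np.nan)
            .interpolate(method='nearest', limit_direction='both')
            .astype(int))
result = cats[filled.to_numpy()].tolist()
print(result)  # -> ['rain', 'rain', 'rain', 'snow', 'snow', 'snow']
```
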
# The whole purpose here is to encode wind direction; it's categorical rather than continuous, so we don't want to scale it.
# The commented-out table below gives more granularity in wind direction if need be.
#dir_df = pd.DataFrame({'dir' : ['N', 'NNE', 'NE', 'ENE', 'E', 'ESE', 'SE', 'SSE', 'S', 'SSW', 'SW', 'WSW', 'W', 'WNW', 'NW', 'NNW', 'N'],
# 'lower' : [348.75, 11.25, 33.75, 56.25, 78.75, 101.25, 123.75, 146.25, 168.75, 191.25, 213.75, 236.25, 258.75, 281.25, 303.75, 326.25, 0],
# 'upper' : [360, 33.75, 56.25, 78.75, 101.25, 123.75, 146.25, 168.75, 191.25, 213.75, 236.25, 258.75, 281.25, 303.75, 326.25, 348.75, 11.25]})
dir_df = pd.DataFrame({'dir' : ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW', 'N'],
                       'lower' : [337.5, 22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 0],
                       'upper' : [360, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5, 22.5]})
# Make a copy to fool around in
fill_this = df_final['wind_direction'].copy()
# And overwrite the copy
for i in reversed(range(len(dir_df))):
    # print(str(dir_df.loc[i, 'lower']) + " and " + str(dir_df.loc[i, 'upper']))
    fill_this.loc[df_final['wind_direction'].between(dir_df.loc[i, 'lower'], dir_df.loc[i, 'upper'])] = i
# This is a bit ugly here; but it maintains any missing values nicely
df_final['wind_direction'] = dir_df.loc[fill_this, 'dir'].values
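# An alternative sketch of the same binning using `pd.cut` with duplicate labels (note that N spans both ends of the circle, which is why the label list repeats 'N'; this is an assumed equivalent, not the code used above):

```python
import pandas as pd

bins = [0, 22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5, 360]
labels = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW', 'N']  # N wraps around

degrees = pd.Series([10.0, 100.0, 350.0])
# ordered=False allows the duplicate 'N' label
sectors = pd.cut(degrees, bins=bins, labels=labels, ordered=False)
print(sectors.tolist())  # -> ['N', 'E', 'N']
```
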
# Go ahead and drop lat and long; we won't need them for now
df_final.drop(["latitude", "longitude"], inplace=True, axis=1)
# Convert the temperature from Kelvin to Fahrenheit
df_final["temperature"] = df_final["temperature"] * 9/5 - 459.67
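# Sanity check on the conversion: °F = K × 9/5 − 459.67, so the freezing point of water (273.15 K) should come out at 32 °F:

```python
def kelvin_to_fahrenheit(k):
    # °F = K * 9/5 - 459.67
    return k * 9 / 5 - 459.67

print(round(kelvin_to_fahrenheit(273.15), 6))  # -> 32.0
print(round(kelvin_to_fahrenheit(373.15), 6))  # -> 212.0
```
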
# ### Normalize data through min-max scaling
# Scaling happens here -- IMPUTATION MUST HAPPEN FIRST
scale_df = df_final[['temperature', 'humidity', 'pressure', 'wind_speed']].values
scaler = MinMaxScaler()
# We have access to min and max so we can transform back and forth
scale_df = scaler.fit_transform(scale_df)
df_final_scaled = df_final.copy()
df_final_scaled[['temperature', 'humidity', 'pressure', 'wind_speed']] = scale_df
df_final_scaled.head()
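# Because `MinMaxScaler` stores the fitted min and max, scaled values can be mapped back exactly with `inverse_transform`; a toy example of the round trip:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

raw = np.array([[0.515], [50.0], [99.95]])
scaler_demo = MinMaxScaler()
scaled = scaler_demo.fit_transform(raw)   # values in [0, 1]

restored = scaler_demo.inverse_transform(scaled)
print(restored.ravel())                   # back to the original temperatures
```
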
# Collapse a lot of these groupings
weather_dict = {'scattered clouds' : 'partly_cloudy', 'sky is clear' : 'clear',
'few clouds' : 'partly_cloudy', 'broken clouds' : 'partly_cloudy',
'overcast clouds' : 'cloudy', 'mist' : 'cloudy', 'haze' : 'cloudy',
'dust' : 'other', 'fog' : 'cloudy', 'moderate rain' : 'rain',
'light rain' : 'rain', 'heavy intensity rain' : 'rain', 'light intensity drizzle' : 'rain',
'heavy snow' : 'snow', 'snow' : 'snow', 'light snow' : 'snow', 'very heavy rain' : 'rain',
'thunderstorm' : 'tstorm', 'proximity thunderstorm' : 'tstorm', 'smoke' : 'other', 'freezing rain' : 'snow',
'thunderstorm with light rain' : 'tstorm', 'drizzle' : 'rain', 'sleet' : 'snow',
'thunderstorm with rain' : 'tstorm', 'thunderstorm with heavy rain' : 'tstorm',
'squalls' : 'rain', 'heavy intensity drizzle' : 'rain', 'light shower snow' : 'snow',
'light intensity shower rain' : 'rain', 'shower rain' : 'rain',
'heavy intensity shower rain' : 'rain', 'proximity shower rain' : 'rain',
'proximity sand/dust whirls' : 'other', 'proximity moderate rain' : 'rain', 'sand' : 'other',
'shower snow' : 'snow', 'proximity thunderstorm with rain' : 'tstorm',
'sand/dust whirls' : 'other', 'proximity thunderstorm with drizzle' : 'tstorm',
'thunderstorm with drizzle' : 'tstorm', 'thunderstorm with light drizzle' : 'tstorm',
'light rain and snow' : 'snow', 'thunderstorm with heavy drizzle' : 'tstorm',
'ragged thunderstorm' : 'tstorm', 'tornado' : 'other', 'volcanic ash' : 'other', 'shower drizzle' : 'rain',
'heavy shower snow' : 'snow', 'light intensity drizzle rain' : 'rain',
'light shower sleet' : 'snow', 'rain and snow' : 'snow'}
adj_weather = [weather_dict[val] for val in df_final_scaled['weather_description']]
df_final_scaled['adj_weather'] = adj_weather
df_final_scaled = df_final_scaled.drop('weather_description', axis = 1)
# ### Make weather and wind direction dummy variables
# And one-hot encode the wind_directions and weather
df_final_scaled = pd.get_dummies(df_final_scaled, prefix=['wind_dir', 'weather'],
columns=['wind_direction', 'adj_weather'])
# ### Write the results
# +
df_final_scaled = df_final_scaled.reset_index('city')
# Write for distribution
df_final_scaled.to_csv('df_weather_scaled_encoded.csv')
# -
# # Time Series and Baseline Analysis
# Read back in the data
data = pd.read_csv(r'df_weather_scaled_encoded.csv')
# Get a sense of the data
data[0:3]
# +
data.columns
# No nulls. checked
# -
# Make sure categoricals are numeric
data[['weather_clear', 'weather_cloudy', 'weather_other', 'weather_partly_cloudy', 'weather_rain', 'weather_snow',
'weather_tstorm']] = data[['weather_clear', 'weather_cloudy', 'weather_other', 'weather_partly_cloudy', 'weather_rain',
'weather_snow', 'weather_tstorm']].apply(pd.to_numeric, errors = 'coerce')
# See all cities
data['city'].unique()
# Keep only the observations from Charlotte, NC
data_charlotte = data[data['city'] == 'Charlotte'].copy()
data_charlotte.shape
# Split up the train test
data_charlotte_train = data_charlotte[data_charlotte['datetime'].astype(str).str[0:4].astype(int) <= 2016]
data_charlotte_test = data_charlotte[data_charlotte['datetime'].astype(str).str[0:4].astype(int) == 2017]
data_charlotte_train.columns
# Split into features and response
X_train = data_charlotte_train
X_test = data_charlotte_test
y_train = data_charlotte_train['temperature']
y_test = data_charlotte_test['temperature']
# +
# Fit a simple linear model
lr = linear_model.LinearRegression()
lr.fit(pd.DataFrame(X_train[X_train.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temperature'])]), y_train)
y_fit = lr.predict(pd.DataFrame(X_train[X_train.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temperature'])]))
y_pred = lr.predict(pd.DataFrame(X_test[X_test.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temperature'])]))
#Performance on the test set
mse = mean_squared_error(y_pred, y_test)
print("Mean Square Error: %0.2f" % (mse))
# +
# Residual ACF, PACF
resid = y_train - y_fit
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=160, ax=ax1)
plt.xlabel('Difference in time (lag)')
plt.ylabel('Autocorrelation')
# The below plots demonstrate that there is certainly a 24-hour cycle, to an extent
# +
# Residual ACF, PACF
resid = y_train - y_fit
fig = plt.figure(figsize=(12,8))
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=160, ax=ax2)
plt.xlabel('Difference in time (lag)')
plt.ylabel('Partial autocorrelation')
# -
# ### How does a plot of predicted vs actual look?
# Get a sense of how the predictions actually look
fit_train = pd.DataFrame({'Predicted': y_fit, 'Actual': y_train.values})
scatt = fit_train.plot(x='Predicted', y='Actual', kind='scatter')
plt.show()
# ### ARIMA models
# #### (ARIMA approach 1): ARIMA(2,1,0) on the lagged temperature with a yearly (8760-hour) seasonal adjustment
# Note: the 'lag' column used here is created in the approach 2 cell below
model_2hr = sm.tsa.ARIMA(endog = data_charlotte_train['lag'], order=[2,1,0])
results_2hr = model_2hr.fit()
print(results_2hr.summary())
fit_AR = model_2hr.fit(disp=-1)
TempDiff8760 = data_charlotte_train['temperature'].diff(8760)
data_charlotte_train['datetime'] = pd.to_datetime(data_charlotte_train['datetime'])
data_charlotte_train["NEW_DATE"] = data_charlotte_train["datetime"] + pd.offsets.DateOffset(years=2)
# Get the train forecast
forecast = data_charlotte_train.temperature.shift(8760).dropna(inplace = False) +\
fit_AR.fittedvalues\
+ TempDiff8760.dropna(inplace = False).mean()
# Get the forecasting indices
use_this = data_charlotte_train[['temperature', 'NEW_DATE']][-17544:-8784].copy()
use_this.index = use_this['NEW_DATE']
###### HERE ######
forecast = (use_this['temperature'] +\
fit_AR.predict(start= len(data_charlotte_train), end = len(data_charlotte_train) -1 + len(data_charlotte_test))\
+ TempDiff8760.dropna(inplace = False).mean()).dropna(inplace = False)
# +
fig = plt.figure(figsize = (14,11))
ts = fig.add_subplot(1, 1, 1)
#forecast_mod1 = model_2hr.fit(disp=-1)
#data_charlotte_train.temperature.plot(label = 'Temperature')
forecast.plot()
ts.legend(loc = 'best')
ts.set_title("WEATHER Training Predictions")
ts.set_ylabel("Normalized Temperature")
ts.set_xlabel("DateTime")
# -
# (results_2hr.predict(start=len(data_charlotte_train), end=len(data_charlotte_train) + len(data_charlotte_test)))[-2:-1]
forecast = pd.DataFrame(forecast)
forecast.columns = ['predicted_values']
forecast['ground_truth'] = data_charlotte_test['temperature'].values
forecast['predicted_denormalized'] = forecast['predicted_values']*(99.95 - 0.515) + 0.515
forecast['groundtruth_denormalized'] = forecast['ground_truth']*(99.95 - 0.515) + 0.515
rmse_2h = sqrt(mean_squared_error(forecast['predicted_denormalized'], forecast['groundtruth_denormalized']))
print(rmse_2h)
# 8.256708774009267
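# The repeated `* (99.95 - 0.515) + 0.515` expression is just the inverse of min-max scaling, x_denorm = x_scaled · (max − min) + min, with the Charlotte temperature extremes hard-coded. Factoring it into a helper makes the round trip explicit:

```python
def denormalize(x, lo=0.515, hi=99.95):
    """Invert min-max scaling given the original min (lo) and max (hi)."""
    return x * (hi - lo) + lo

def normalize(x, lo=0.515, hi=99.95):
    return (x - lo) / (hi - lo)

# Round-tripping a temperature recovers the original value
print(denormalize(normalize(42.0)))
```
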
# #### (ARIMA approach 2): just t-4, t-5 hour temperature data to predict t
# +
data_charlotte['lag'] = data_charlotte['temperature'].shift(4)
data_charlotte.dropna(inplace=True)
data_charlotte_train = data_charlotte[data_charlotte['datetime'].astype(str).str[0:4].astype(int) <= 2016]
data_charlotte_test = data_charlotte[data_charlotte['datetime'].astype(str).str[0:4].astype(int) == 2017]
X = data_charlotte_train['lag'].values
history = [x for x in data_charlotte_train['lag']]
test = data_charlotte_test['lag'].values
predictions = list()
for t in range(len(data_charlotte_test)):
    model = ARIMA(history, order=(2,1,0))
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    yhat = float(output[0])
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)
# -
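# The cell above is walk-forward validation: refit on all history, forecast one step, then append the true value before the next step. The same skeleton with a naive last-value model standing in for ARIMA (a sketch, not the model used above):

```python
from math import sqrt

series = [10.0, 12.0, 11.0, 13.0, 14.0, 13.5]
history, test = list(series[:4]), series[4:]

predictions = []
for obs in test:
    yhat = history[-1]    # naive 'model': predict the last observed value
    predictions.append(yhat)
    history.append(obs)   # reveal the true value before the next step

rmse = sqrt(sum((p - t) ** 2 for p, t in zip(predictions, test)) / len(test))
print(predictions, round(rmse, 3))  # -> [13.0, 14.0] 0.791
```
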
len(predictions)
# Use the min and max values to denormalize the data
predictions = pd.DataFrame(predictions)
predictions.columns = ['predicted_values']
predictions['ground_truth'] = data_charlotte_test['temperature'].values
predictions['predicted_denormalized'] = predictions['predicted_values']*(99.95 - 0.515) + 0.515
predictions['groundtruth_denormalized'] = predictions['ground_truth']*(99.95 - 0.515) + 0.515
ARIMA_2h = sqrt(mean_squared_error(predictions['predicted_denormalized'], predictions['groundtruth_denormalized']))
print(ARIMA_2h)
# Overall best model. RMSE: 6.065001387755496
print(model_fit.summary())
# #### (ARIMA approach 3): Predicting Temperature at time T + 4 by looking at temperatures between T and T - 24
# +
model_24hr = sm.tsa.ARIMA(endog = data_charlotte_train['lag'],order=[24,1,0])
results_24hr = model_24hr.fit()
print(results_24hr.summary())
# Computationally infeasible to fit, so we did not continue with this analysis
# -
# ### Few other standard modeling techniques: Regression, Random Forest and ANN
# #### A little bit of data prep from scratch again
data = pd.read_csv(r'df_weather_scaled_encoded.csv')
data_charlotte = data[data['city'] == 'Charlotte'].copy()
data_charlotte['temp_after_4hr'] = data_charlotte['temperature'].shift(-4)
data_charlotte['temp_last_hr'] = data_charlotte['temperature'].shift(1)
data_charlotte['temp_before_24hr'] = data_charlotte['temperature'].shift(20)
data_charlotte['temp_before_48hr'] = data_charlotte['temperature'].shift(44)
data_charlotte['temp_before_72hr'] = data_charlotte['temperature'].shift(68)
data_charlotte['temp_before_24hr_3day_avg'] = (data_charlotte['temperature'].shift(20) + data_charlotte['temperature'].shift(44)
+ data_charlotte['temperature'].shift(68))/3
data_charlotte = data_charlotte.dropna()
data_charlotte_train2 = data_charlotte[data_charlotte['datetime'].astype(str).str[0:4].astype(int) < 2017]
data_charlotte_test2 = data_charlotte[data_charlotte['datetime'].astype(str).str[0:4].astype(int) == 2017]
# #### parameter search before modeling
# +
# Linear Regression parameter search
X = pd.DataFrame(data_charlotte_train2[data_charlotte_train2.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temp_after_4hr'])])
y = data_charlotte_train2['temp_after_4hr']
# Perform a stepwise selection
def stepwise_selection(X, y,
                       initial_list=[],
                       threshold_in=0.01,
                       threshold_out=0.05,
                       verbose=True):
    included = list(initial_list)
    while True:
        changed = False
        # forward step
        excluded = list(set(X.columns) - set(included))
        new_pval = pd.Series(index=excluded, dtype=float)
        for new_column in excluded:
            model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included + [new_column]]))).fit()
            new_pval[new_column] = model.pvalues[new_column]
        best_pval = new_pval.min()
        if best_pval < threshold_in:
            best_feature = new_pval.idxmin()  # idxmin, not argmin: we want the column label
            included.append(best_feature)
            changed = True
            if verbose:
                print('Add {:30} with p-value {:.6}'.format(best_feature, best_pval))
        # backward step
        model = sm.OLS(y, sm.add_constant(pd.DataFrame(X[included]))).fit()
        # use all coefs except intercept
        pvalues = model.pvalues.iloc[1:]
        worst_pval = pvalues.max()  # null if pvalues is empty
        if worst_pval > threshold_out:
            changed = True
            worst_feature = pvalues.idxmax()
            included.remove(worst_feature)
            if verbose:
                print('Drop {:30} with p-value {:.6}'.format(worst_feature, worst_pval))
        if not changed:
            break
    return included
# Run the stepwise selection
result = stepwise_selection(X, y)
print('resulting features:')
print(result)
"""
Add temp_before_24hr_3day_avg with p-value 0.0
Add temp_before_48hr with p-value 0.0
Add temp_before_72hr with p-value 0.0
Add temperature with p-value 0.0
Add temp_last_hr with p-value 0.0
Add humidity with p-value 0.0
Add temp_before_24hr with p-value 0.0
Add weather_rain with p-value 5.42304e-167
Add wind_dir_NE with p-value 9.61627e-102
Add wind_dir_NW with p-value 5.15545e-85
Add wind_speed with p-value 7.11659e-49
Add wind_dir_SW with p-value 5.66768e-55
Add wind_dir_S with p-value 1.11484e-55
Add wind_dir_W with p-value 1.09867e-17
Add weather_clear with p-value 4.70843e-12
Add weather_tstorm with p-value 3.14212e-12
Add weather_snow with p-value 1.12328e-10
Add pressure with p-value 1.93877e-10
Add wind_dir_E with p-value 1.79258e-05
resulting features:
['temp_before_24hr_3day_avg', 'temp_before_48hr', 'temp_before_72hr', 'temperature', 'temp_last_hr', 'humidity', 'temp_before_24hr', 'weather_rain', 'wind_dir_NE', 'wind_dir_NW', 'wind_speed', 'wind_dir_SW', 'wind_dir_S', 'wind_dir_W', 'weather_clear', 'weather_tstorm', 'weather_snow', 'pressure', 'wind_dir_E']
"""
# +
# Random Forest parameter search
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in newer scikit-learn
rfc = RandomForestRegressor(n_jobs=-1,max_features= 'sqrt' ,n_estimators=50)
# Grid search some key parameters
param_grid = {
'n_estimators': [10, 20, 50],
'max_features': ['log2', 'sqrt'],
'max_depth': [10, 30, 100]
}
CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5)
k = pd.DataFrame(data_charlotte_train2[data_charlotte_train2.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temp_after_4hr'])])
p = np.ravel(data_charlotte_train2['temp_after_4hr'])
CV_rfc.fit(k,p)
print(CV_rfc.best_params_)
# {'max_depth': 100, 'max_features': 'sqrt', 'n_estimators': 50}
# -
# #### Modeling
# +
response = np.ravel(data_charlotte_train2['temp_after_4hr'])
train_rf = pd.DataFrame(data_charlotte_train2[data_charlotte_train2.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temp_after_4hr'])])
train_linear = data_charlotte_train2[['temp_before_24hr_3day_avg', 'temp_before_48hr', 'temp_before_72hr', 'temperature', 'temp_last_hr', 'humidity', 'temp_before_24hr', 'weather_rain', 'wind_dir_NE', 'wind_dir_NW', 'wind_speed', 'wind_dir_SW', 'wind_dir_S', 'wind_dir_W', 'weather_clear', 'weather_tstorm', 'weather_snow', 'pressure', 'wind_dir_E']]
test_linear = data_charlotte_test2[['temp_before_24hr_3day_avg', 'temp_before_48hr', 'temp_before_72hr', 'temperature', 'temp_last_hr', 'humidity', 'temp_before_24hr', 'weather_rain', 'wind_dir_NE', 'wind_dir_NW', 'wind_speed', 'wind_dir_SW', 'wind_dir_S', 'wind_dir_W', 'weather_clear', 'weather_tstorm', 'weather_snow', 'pressure', 'wind_dir_E']]
ground_truth = data_charlotte_test2['temp_after_4hr']
test = pd.DataFrame(data_charlotte_test2[data_charlotte_test2.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temp_after_4hr'])])
linear_model = LinearRegression().fit(train_linear, response)
predicted_linear = linear_model.predict(test_linear)
rf_model = RandomForestRegressor(n_estimators = 50, max_depth = 100, max_features = 'sqrt').fit(train_rf, response)
predicted_rf = rf_model.predict(test)
# Neural network part
#y_train = list(data_charlotte_train2['temp_after_4hr'])
#X_train = pd.DataFrame(data_charlotte_train2[data_charlotte_train2.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temp_after_4hr'])])
#y_test = list(data_charlotte_test2['temp_after_4hr'])
#X_test = pd.DataFrame(data_charlotte_test2[data_charlotte_test2.columns.difference(['datetime', 'city', 'latitude', 'longitude', 'temp_after_4hr'])])
#scaler = StandardScaler()
#scaler.fit(X_train)
#X_train = scaler.transform(X_train)
#X_test = scaler.transform(X_test)
#mlp = MLPRegressor(hidden_layer_sizes=(20,20,20),max_iter=500000)
#mlp.fit(X_train,y_train)
#predicted_neural = mlp.predict(X_test)
# -
# Get the linear prediction
predictions_linear = pd.DataFrame(predicted_linear)
predictions_linear.columns = ['predicted_values']
predictions_linear['ground_truth'] = list(data_charlotte_test2['temp_after_4hr'])
predictions_linear['predicted_denormalized'] = predictions_linear['predicted_values']*(99.95 - 0.515) + 0.515
predictions_linear['groundtruth_denormalized'] = predictions_linear['ground_truth']*(99.95 - 0.515) + 0.515
rmse_linear = sqrt(mean_squared_error(predictions_linear['predicted_denormalized'], predictions_linear['groundtruth_denormalized']))
print(rmse_linear)
# RMSE: 3.1710680190888194
train_linear.columns
# +
# Which variables are important in linear regression
linear_model.coef_
# -
data_charlotte_train2.columns
# +
# What happens if I don't include all the temperature variables as predictors?
response2 = np.ravel(data_charlotte_train2['temp_after_4hr'])
train_linear2 = data_charlotte_train2[['humidity', 'pressure', 'wind_speed', 'wind_dir_E', 'wind_dir_N', 'wind_dir_NE', 'wind_dir_NW',
'wind_dir_S', 'wind_dir_SE', 'wind_dir_SW', 'wind_dir_W', 'weather_clear', 'weather_cloudy',
'weather_other', 'weather_partly_cloudy', 'weather_rain', 'weather_snow', 'weather_tstorm']]
test_linear2 = data_charlotte_test2[['humidity', 'pressure', 'wind_speed', 'wind_dir_E', 'wind_dir_N', 'wind_dir_NE', 'wind_dir_NW',
'wind_dir_S', 'wind_dir_SE', 'wind_dir_SW', 'wind_dir_W', 'weather_clear', 'weather_cloudy',
'weather_other', 'weather_partly_cloudy', 'weather_rain', 'weather_snow', 'weather_tstorm']]
ground_truth2 = data_charlotte_test2['temp_after_4hr']
predicted_linear2 = LinearRegression().fit(train_linear2, response2).predict(test_linear2)
predictions_linear2 = pd.DataFrame(predicted_linear2)
predictions_linear2.columns = ['predicted_values']
predictions_linear2['ground_truth'] = list(data_charlotte_test2['temp_after_4hr'])
predictions_linear2['predicted_denormalized'] = predictions_linear2['predicted_values']*(99.95 - 0.515) + 0.515
predictions_linear2['groundtruth_denormalized'] = predictions_linear2['ground_truth']*(99.95 - 0.515) + 0.515
rmse_linear2 = sqrt(mean_squared_error(predictions_linear2['predicted_denormalized'], predictions_linear2['groundtruth_denormalized']))
print(rmse_linear2)
# RMSE: 10.267910905877935
# Clearly, past temperature values matter a lot
# -
# Get the accuracy for RF
predictions_rf = pd.DataFrame(predicted_rf)
predictions_rf.columns = ['predicted_values']
predictions_rf['ground_truth'] = list(data_charlotte_test2['temp_after_4hr'])
predictions_rf['predicted_denormalized'] = predictions_rf['predicted_values']*(99.95 - 0.515) + 0.515
predictions_rf['groundtruth_denormalized'] = predictions_rf['ground_truth']*(99.95 - 0.515) + 0.515
rmse_rf = sqrt(mean_squared_error(predictions_rf['predicted_denormalized'], predictions_rf['groundtruth_denormalized']))
print(rmse_rf)
# RMSE: 3.227148804096484
# +
# Get a sense of feature importance
importances = rf_model.feature_importances_
std = np.std([tree.feature_importances_ for tree in rf_model.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
# -
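# The ranking above relies on reversing an ascending argsort. A tiny
# sketch of that indexing trick on made-up importances:

```python
import numpy as np

# argsort gives ascending order; reversing it yields the indices of the
# most important features first, as used for the ranking printout above
importances = np.array([0.10, 0.60, 0.30])
indices = np.argsort(importances)[::-1]
assert indices.tolist() == [1, 2, 0]  # feature 1 is the most important
```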
# # RNNs
# Read in the data
full_df = pd.read_csv("df_weather_scaled_encoded.csv")
# +
# Filter by the city of interest
current_city = "Charlotte"
full_df = full_df[full_df["city"] == current_city]
# Store max and min temperatures for denormalization
min_dataset = 0.515
max_dataset = 99.95
# +
# Extract the year from each timestamp string
years = np.array([y[0:4] for y in full_df.datetime])
# Split into train, test, validation
train = full_df[years < '2016']
valid = full_df[years == '2016']
test = full_df[years > '2016']
if(train.shape[0] + valid.shape[0] + test.shape[0] != years.shape[0]):
raise Exception("Partition did not work")
# Drop the city and timestamp for all three (reassign instead of using
# inplace=True on a slice, which triggers a pandas SettingWithCopyWarning)
train = train.drop(["city", "datetime"], axis=1)
valid = valid.drop(["city", "datetime"], axis=1)
test = test.drop(["city", "datetime"], axis=1)
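# The chronological split above can be sketched on toy data (the toy_*
# names are illustrative only): comparing the 4-character year prefix of
# the timestamp keeps training rows strictly before the validation year
# and test rows strictly after it.

```python
import pandas as pd

# Toy chronological split on year-string prefixes
toy = pd.DataFrame({"datetime": ["2015-01-01 00:00", "2016-06-01 00:00", "2017-03-01 00:00"],
                    "temp": [0.1, 0.2, 0.3]})
toy_years = toy["datetime"].str[0:4]
toy_train = toy[toy_years < "2016"]
toy_valid = toy[toy_years == "2016"]
toy_test = toy[toy_years > "2016"]
assert len(toy_train) + len(toy_valid) + len(toy_test) == len(toy)
```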
# +
# Wrapper for data object
# Modified from <NAME>
class DataSet(object):
def __init__(self, x, y, shuffle=True):
self._num_examples = len(x)
self._x = x
self._y = y
self._epochs_done = 0
self._index_in_epoch = 0
if shuffle:
np.random.seed(123456)
# Shuffle the data
perm = np.arange(self._num_examples)
np.random.shuffle(perm)
self._x = [self._x[i] for i in perm]
self._y = [self._y[i] for i in perm]
random.seed(123456)
@property
def features(self):
return self._x
@property
def response(self):
return self._y
@property
def num_examples(self):
return self._num_examples
@property
def epochs_done(self):
return self._epochs_done
def reset_batch_index(self):
self._index_in_epoch = 0
def next_batch(self, batch_size):
"""Return the next `batch_size` examples from this data set."""
start = self._index_in_epoch
self._index_in_epoch += batch_size
done = False
if self._index_in_epoch > self._num_examples:
# After each epoch we update this
self._epochs_done += 1
# Shuffle the data
perm = np.arange(self._num_examples)
np.random.shuffle(perm)
self._x = [self._x[i] for i in perm]
self._y = [self._y[i] for i in perm]
start = 0
self._index_in_epoch = batch_size
done = True
assert batch_size <= self._num_examples
end = self._index_in_epoch
return self._x[start:end], self._y[start:end], done
# -
# ## Create observations using a sliding sequence window
# Wrapper function to perform the entire creation of observations given the subset
# data. Can specify sequence_size, lookahead, response (temp means 'temperature'),
# and whether you want a greedy baseline.
def create_observations(train, test, valid, seq_size = 24, lookahead = 1, temp = True, baseline=False):
seq_len = seq_size  # alias: the loop bodies below use seq_len for the window length
train_x = []
train_y = []
# If we are doing the temperature variable, extract that feature
if temp:
for i in range(train.shape[0] - seq_len - lookahead + 1):
# Slide over input, storing each "sequence size" window
train_x.append([x for x in train.iloc[i:i+seq_len, :].values])
train_y.append([y for y in train.iloc[i+lookahead:i+seq_len+lookahead, 0]])
# Otherwise, extract out the weather type
else:
for i in range(train.shape[0] - seq_len - lookahead + 1):
train_x.append([x for x in train.iloc[i:i+seq_len, :].values])
train_y.append([y for y in train.iloc[i+lookahead:i+seq_len+lookahead, -7:].values])
# Convert to a Dataset object
train_data = DataSet(train_x, train_y)
# Repeat the above process on the validation set
valid_x = []
valid_y = []
# If we are doing the temperature variable, extract that feature
if temp:
for i in range(valid.shape[0] - seq_len - lookahead + 1):
# Slide over input, storing each "sequence size" window
valid_x.append([x for x in valid.iloc[i:i+seq_len, :].values])
valid_y.append([y for y in valid.iloc[i+lookahead:i+seq_len+lookahead, 0]])
# Otherwise, extract out the weather type
else:
for i in range(valid.shape[0] - seq_len - lookahead + 1):
valid_x.append([x for x in valid.iloc[i:i+seq_len, :].values])
valid_y.append([y for y in valid.iloc[i+lookahead:i+seq_len+lookahead, -7:].values])
valid_data = DataSet(valid_x, valid_y)
# Repeat for test except also track the baseline prediction error
test_x = []
test_y = []
test_baseline_err = []
if temp:
for i in range(test.shape[0] - seq_len - lookahead + 1):
test_x.append([x for x in test.iloc[i:i+seq_len, :].values])
test_y.append([y for y in test.iloc[i+lookahead:i+seq_len+lookahead, 0]])
# Get the baseline prediction error: the squared difference between the
# (denormalized) mean temperature over the input window and the true
# temperature at the prediction time. This is the trivial predictor
# that ignores the model entirely
if baseline:
test_baseline_err.append((np.mean(test.iloc[i:i+seq_len, 0]*(max_dataset-min_dataset)+min_dataset) -
(test.iloc[i+seq_len + lookahead - 1, 0]*(max_dataset-min_dataset)+min_dataset)) ** 2)
if baseline:
print("Baseline error of: " + str(np.mean(test_baseline_err)))
# Test baseline for seq 1: 29.66645546285467
# Test baseline for seq 2: 34.86351968736361
# Test baseline error for seq 24: 34.01255035338878
# Test baseline error for seq 72: 42.73780841606058
else:
for i in range(test.shape[0] - seq_len - lookahead + 1):
test_x.append([x for x in test.iloc[i:i+seq_len, :].values])
test_y.append([y for y in test.iloc[i+lookahead:i+seq_len+lookahead, -7:].values])
if baseline:
# Compare current weather type with the most common weather type over a period
# Variable to hold most frequent weather type
x_obs = np.array([sum(x) for x in zip(*test.iloc[i:i+seq_len, -7:].values)])
# Append equality of current prediction and true value to error list
test_baseline_err.append(np.argmax(x_obs) == np.argmax(test.iloc[i + seq_len + lookahead - 1, -7:].values))
if baseline:
print("Baseline error of: " + str(np.mean(test_baseline_err)))
# Test baseline error of 0.595
test_data = DataSet(test_x, test_y, shuffle=False)
return train_data, valid_data, test_data
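# The observation construction above can be sketched on a toy series
# (a sketch, not the notebook's own data): each input is a length-seq
# window and each target is the same window shifted forward by
# `lookahead` steps.

```python
import numpy as np

# Sliding-window observations over a toy hourly series
series = np.arange(6.0)                   # [0, 1, 2, 3, 4, 5]
seq, lookahead = 3, 1
xs, ys = [], []
for i in range(len(series) - seq - lookahead + 1):
    xs.append(series[i:i + seq])
    ys.append(series[i + lookahead:i + seq + lookahead])
assert len(xs) == 3
assert xs[0].tolist() == [0.0, 1.0, 2.0] and ys[0].tolist() == [1.0, 2.0, 3.0]
```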
# ## Model 1: Temperature Prediction
# +
# Define the RNN model
# Modified from <NAME>
def build_and_save_d(modelDir,train,valid,cell,cellType,input_dim=1,hidden_dim=100,
seq_size = 12,max_itr=200,keep_prob=0.5, batch_size=32, num_epochs=10, log=500,
early_stopping=3, learning_rate=0.01):
tf.reset_default_graph()
graph = tf.Graph()
with graph.as_default():
# input place holders
# input Shape: [# training examples, sequence length, # features]
x = tf.placeholder(tf.float32,[None,seq_size,input_dim],name="x_in")
# label Shape: [# training examples, sequence length]
y = tf.placeholder(tf.float32,[None,seq_size],name="y_in")
dropout = tf.placeholder(tf.float32,name="dropout_in")
# Function to wrap each cell with dropout
def wrap_cell(cell, keep_prob):
drop = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)
return drop
cells = tf.nn.rnn_cell.MultiRNNCell(
[wrap_cell(cell,keep_prob) for cell in cell]
)
# cell = tf.nn.rnn_cell.DropoutWrapper(cell)
# RNN output Shape: [# training examples, sequence length, # hidden]
outputs, _ = tf.nn.dynamic_rnn(cells,x,dtype=tf.float32)
# weights for output dense layer (i.e., after RNN)
# W shape: [# hidden, 1]
W_out = tf.Variable(tf.random_normal([hidden_dim,1]),name="w_out")
# b shape: [1]
b_out = tf.Variable(tf.random_normal([1]),name="b_out")
# output dense layer:
num_examples = tf.shape(x)[0]
# convert W from [# hidden, 1] to [# training examples, # hidden, 1]
# step 1: add a new dimension at index 0 using tf.expand_dims
w_exp= tf.expand_dims(W_out,0)
# step 2: duplicate W for 'num_examples' times using tf.tile
W_repeated = tf.tile(w_exp,[num_examples,1,1])
# Dense Layer calculation:
# [# training examples, sequence length, # hidden] *
# [# training examples, # hidden, 1] = [# training examples, sequence length]
y_pred = tf.matmul(outputs,W_repeated)+b_out
# Actually, y_pred: [# training examples, sequence length, 1]
# Remove last dimension using tf.squeeze
y_pred = tf.squeeze(y_pred,name="y_pred")
# Cost & Training Step
cost = tf.reduce_mean(tf.square(y_pred-y))
train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
saver=tf.train.Saver()
# Run Session
with tf.Session(graph=graph) as sess:
# initialize variables
sess.run(tf.global_variables_initializer())
# Train until num_epochs complete or early stopping triggers
start=timeit.default_timer()
epoch_counter = 0 # Keep track of our epochs
i = 0 # Keep track of our iterations
min_validation_err = sys.float_info.max # Start min error at biggest number
min_validation_itr = 0 # Keep track of the smallest validation error we have seen so far
early_stopping_counter = 0 # Counter to see if we have achieved early stopping
min_denorm_err = None
print('Training %s ...'%cellType)
while True: # If we train more, would we overfit? Try 10000
i += 1 # Increment counter
trainX, trainY, done = train.next_batch(batch_size) # Get train batch
# See if we are done with our epochs
if done:
epoch_counter += 1
print("Done with epoch " + str(epoch_counter))
if epoch_counter >= num_epochs:
break
# Pass the data through the network
_, train_err = sess.run([train_op,cost],feed_dict={x:trainX,y:trainY,dropout:keep_prob})
if i==1:
print(' step, train err= %6d: %8.5f' % (1,train_err))
# Every 'log' steps, print out train error and validation error.
# Update early stopping at these points
elif (i+1) % log == 0:
print(' step, train err= %6d: %8.5f' % (i+1,train_err))
# Get validation error on the full validation set
valid_err, predicted_vals_rnn = sess.run([cost, y_pred],feed_dict={x:valid.features,y:valid.response,dropout:1})
# Compute denormalized MSE
# step 1: denormalize data
# If seq_len greater than 1, get only the last element
if seq_size > 1:
predicted_vals_rnn = predicted_vals_rnn[:,seq_size-1]
predicted_vals_dnorm_rnn=predicted_vals_rnn*(max_dataset-min_dataset)+min_dataset
# step 2: get ground-truth, also must be denormalized
actual_test= np.array([x[-1] for x in valid.response])*(max_dataset-min_dataset)+min_dataset
# step 3: compute MSE
mse_rnn= ((predicted_vals_dnorm_rnn - actual_test) ** 2).mean()
print(' step, validation err= %6d: %8.5f' % (i+1,valid_err))
print(' step, denorm validation err= %6d: %8.5f' % (i+1,mse_rnn))
# Check early stopping
early_stopping_counter += 1
# If we have smaller validation error, reset counter and
# assign new smallest validation error. Also store the
# current iteration as the iteration where the current min is
if valid_err < min_validation_err:
min_validation_err = valid_err
min_validation_itr = i + 1
early_stopping_counter = 0
min_denorm_err = mse_rnn
# Store the current best model
modelPath= saver.save(sess,"%s/model_%s"%(modelDir,cellType),global_step=i+1)
print("model saved:%s"%modelPath)
# Break if we achieve early stopping
if early_stopping_counter > early_stopping:
break
end=timeit.default_timer()
print("Training time : %10.5f"%(end-start))
# Log the results to a file
with open(modelDir + "/results.txt", 'a+') as file:
file.write(cellType + "\n")
file.write("Time taken: " + str(end - start) + "\n")
file.write("Itr stopped: " + str(min_validation_itr) + "\n")
file.write("Min validation error: " + str(min_validation_err) + "\n")
file.write("Denormalized validation error: " + str(min_denorm_err) + "\n\n")
return min_validation_itr, min_validation_err
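# The patience-based early stopping implemented above can be isolated
# into a small sketch (early_stop is a hypothetical helper, not part of
# the notebook): remember the best validation error seen so far and stop
# after `patience` consecutive validation checks without improvement.

```python
import sys

# Patience-style early stopping over a sequence of validation errors
def early_stop(val_errors, patience=3):
    best, since_best = sys.float_info.max, 0
    for step, err in enumerate(val_errors, 1):
        if err < best:
            best, since_best = err, 0   # improvement: reset the counter
        else:
            since_best += 1
        if since_best > patience:
            return step, best           # stopped early
    return len(val_errors), best        # ran out of checks

step, best = early_stop([0.9, 0.5, 0.6, 0.55, 0.7, 0.6, 0.65], patience=3)
assert (step, best) == (6, 0.5)
```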
# Load back in a model and create predicted values
def load_and_predict(test,modelDir,cellType,itr,seq_size):
# Restore the session
with tf.Session() as sess:
print ("Load model:%s-%s"%(modelDir,itr))
saver = tf.train.import_meta_graph("%s/model_%s-%s.meta"%(modelDir,cellType,itr))
saver.restore(sess,tf.train.latest_checkpoint("%s"%modelDir))
graph = tf.get_default_graph()
# print all nodes in saved graph
#print([n.name for n in tf.get_default_graph().as_graph_def().node])
# get tensors by name to use in prediction
x = graph.get_tensor_by_name("x_in:0")
dropout= graph.get_tensor_by_name("dropout_in:0")
y_pred = graph.get_tensor_by_name("y_pred:0")
# Feed entire test set to get predictions
predicted_vals_all= sess.run(y_pred, feed_dict={ x: test.features, dropout:1})
# Get last item in each predicted sequence:
predicted_vals = predicted_vals_all[:,seq_size-1]
return predicted_vals
# -
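# The expand_dims + tile pattern in the graph above is just a batched
# matmul: one [hidden, 1] weight matrix applied at every timestep of
# every example. A NumPy sketch (not part of the model code) showing that
# broadcasting gives the identical result without explicit tiling:

```python
import numpy as np

# Batched dense layer: [batch, seq, hidden] x [hidden, 1] -> [batch, seq, 1]
batch, seq, hidden = 4, 5, 3
rng = np.random.default_rng(0)
outputs = rng.random((batch, seq, hidden))
W = rng.random((hidden, 1))
b = 0.1
tiled = np.matmul(outputs, np.tile(W[None, :, :], [batch, 1, 1])) + b
direct = np.matmul(outputs, W) + b       # broadcasting handles the batch dim
assert np.allclose(tiled, direct)
assert np.squeeze(direct, axis=-1).shape == (batch, seq)
```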
# Function to predict and plot the test set
def predict(test_data, modelDir, cellType, end_itr, seq_len):
# Load and predict
predicted_vals_rnn=load_and_predict(test_data,modelDir,cellType,end_itr,seq_len)
# Compute MSE
# step 1: denormalize data
predicted_vals_dnorm_rnn=predicted_vals_rnn*(max_dataset-min_dataset)+min_dataset
# step 2: get ground-truth, also must be denormalized
actual_test= np.array([x[-1] for x in test_data.response])*(max_dataset-min_dataset)+min_dataset
# step 3: compute MSE
mse_rnn= ((predicted_vals_dnorm_rnn - actual_test) ** 2).mean()
print("RNN MSE = %10.5f"%mse_rnn)
pred_len=len(predicted_vals_dnorm_rnn)
train_len=len(test_data.features)
pred_avg = []
actual_avg = []
# Compute the moving average of each set for visual purposes
moving_length = 24
for i in range(len(actual_test) - moving_length):
pred_avg.append(np.mean(predicted_vals_dnorm_rnn[i:i+moving_length]))
actual_avg.append(np.mean(actual_test[i:i+moving_length]))
# Plot the results
plt.figure()
plt.plot(list(range(len(actual_test))), actual_test, label="Actual", color='r', alpha=1)
plt.plot(list(range(len(actual_test))), predicted_vals_dnorm_rnn, color='b', label=cellType, alpha=0.6)
# plt.plot(list(range(int(moving_length/2), len(actual_test)-int(moving_length/2))), pred_avg, color='y', label="{0} MA".format(cellType), alpha=0.7)
# plt.plot(list(range(int(moving_length/2), len(actual_test)-int(moving_length/2))), actual_avg, color='b', label="Actual MA", alpha=0.7)
plt.title("Cell Type: " + cellType + ", Sequence Length " + str(seq_len))
plt.legend()
plt.savefig("{0}{1}.png".format(cellType, seq_len))
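# The loop-based moving average in predict() above is equivalent to a
# "valid"-mode convolution with a uniform kernel; a quick sketch:

```python
import numpy as np

# Moving average via loop vs. uniform-kernel convolution
vals = np.arange(10.0)
window = 4
loop_avg = [np.mean(vals[i:i + window]) for i in range(len(vals) - window)]
conv_avg = np.convolve(vals, np.ones(window) / window, mode="valid")
assert np.allclose(loop_avg, conv_avg[:len(loop_avg)])
```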
# +
# Define size of sequence
seq_len = 2
# Create the data
train_data,valid_data,test_data = create_observations(train, valid, test, seq_size=seq_len, temp=True, lookahead=4, baseline=False)
# -
# ### Crude grid search results
# +
"""
# Perform a crude grid search
from itertools import product
# Define learning rates, dropout
params = [[0.1, 0.01, 0.001], [0.25, 0.5, 0.75]]
# Iterate over all combinations and test model
# with those parameters, storing the min
min_param_val = sys.float_info.max
min_param_elems = None
for elem in product(*params):
# Unpack the values
learning_rate, keep_prob = elem
RNNcell = [rnn.BasicLSTMCell(hidden_dim) for _ in range(n_layers)]
cellType = "LSTM"
# Build models and save model
end_itr, min_err = build_and_save_d(modelDir=modelDir,
train=train_data,
valid=valid_data,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log,
early_stopping=early_stopping,
learning_rate=learning_rate)
# See if we have a new low error
if min_err < min_param_val:
min_param_val = min_err
min_param_elems = elem
print("Min validation error " + str(min_err) + " for elems " + str(elem))
print("Global validation error " + str(min_param_val) + " for elems " + str(min_param_elems))
"""
# Grid search on learning rate and dropout
# RNN
#Min validation error 0.015986204 for elems (0.1, 0.25)
#Min validation error 0.015794938 for elems (0.1, 0.5)
#Min validation error 0.015503254 for elems (0.1, 0.75)
#Min validation error 0.012949656 for elems (0.01, 0.25)
#Min validation error 0.006430081 for elems (0.01, 0.5)
#Min validation error 0.0046402193 for elems (0.01, 0.75)
#Min validation error 0.029264465 for elems (0.001, 0.25)
#Min validation error 0.012221504 for elems (0.001, 0.5)
#Min validation error 0.008622245 for elems (0.001, 0.75)
#Global validation error 0.0046402193 for elems (0.01, 0.75)
# GRU
#Min validation error 0.0111637125 for elems (0.1, 0.25)
#Min validation error 0.012049832 for elems (0.1, 0.5)
#Min validation error 0.017291395 for elems (0.1, 0.75)
#Min validation error 0.0037756523 for elems (0.01, 0.25)
#Min validation error 0.002122913 for elems (0.01, 0.5)
#Min validation error 0.0032095483 for elems (0.01, 0.75)
#Min validation error 0.00797302 for elems (0.001, 0.25)
#Min validation error 0.008556419 for elems (0.001, 0.5)
#Min validation error 0.0030354045 for elems (0.001, 0.75)
#Global validation error 0.002122913 for elems (0.01, 0.5)
# LSTM
#Min validation error 0.0039516427 for elems (0.1, 0.25)
#Min validation error 0.016133798 for elems (0.1, 0.5)
#Min validation error 0.008657359 for elems (0.1, 0.75)
#Min validation error 0.0010539122 for elems (0.01, 0.25)
#Min validation error 0.0023624634 for elems (0.01, 0.5)
#Min validation error 0.002788953 for elems (0.01, 0.75)
#Min validation error 0.002642741 for elems (0.001, 0.25)
#Min validation error 0.0013699796 for elems (0.001, 0.5)
#Min validation error 0.0020976907 for elems (0.001, 0.75)
#Global validation error 0.0010539122 for elems (0.01, 0.25)
# Seems pretty close overall, choose 0.01 and dropout 0.5
# -
# ### Run the model
# +
input_dim=19 # dim > 1 for multivariate time series
hidden_dim=100 # number of hidden units h
keep_prob=0.5
modelDir='modelDir'
log=500 # How often we validate
batch_size=16
num_epochs=15 # MAXIMUM number of epochs (i.e if early stopping is never achieved)
early_stopping = 5 # Number of validation steps without improvement until we stop
learning_rate = 0.01
n_layers = 2
# NEED TO MAKE DIFFERENT COPIES OF THE CELL TO AVOID SELF-REFERENTIAL ERRORS
# RNNcell = [rnn.BasicRNNCell(hidden_dim) for _ in range(1)]
# cellType = "RNN"
RNNcell = [rnn.BasicRNNCell(hidden_dim) for _ in range(n_layers)]
cellType = "RNN"
# Build models and save model
end_itr, min_err = build_and_save_d(modelDir=modelDir,
train=train_data,
valid=valid_data,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log,
early_stopping=early_stopping,
learning_rate=learning_rate)
predict(test_data, modelDir, cellType, end_itr, seq_len)
RNNcell = [rnn.GRUCell(hidden_dim) for _ in range(n_layers)]
cellType = "GRU"
# Build models and save model
end_itr, min_err = build_and_save_d(modelDir=modelDir,
train=train_data,
valid=valid_data,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log,
early_stopping=early_stopping,
learning_rate=learning_rate)
predict(test_data, modelDir, cellType, end_itr, seq_len)
RNNcell = [rnn.BasicLSTMCell(hidden_dim) for _ in range(n_layers)]
cellType = "LSTM"
# Build models and save model
end_itr, min_err = build_and_save_d(modelDir=modelDir,
train=train_data,
valid=valid_data,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log,
early_stopping=early_stopping,
learning_rate=learning_rate)
predict(test_data, modelDir, cellType, end_itr, seq_len)
# -
# ### Results
# + active=""
# Seq Len: 2
#
# RNN
# Time taken: 25.785779288000413
# Itr stopped: 8500
# Min validation error: 0.0031018716
# Denormalized validation error: 31.769728957977332
#
# GRU
# Time taken: 41.09399954499895
# Itr stopped: 8500
# Min validation error: 0.0026672964
# Denormalized validation error: 26.344573167959652
#
# LSTM
# Time taken: 69.01221254200027
# Itr stopped: 17500
# Min validation error: 0.002471932
# Denormalized validation error: 21.351730504118457
#
# Seq Len: 24
#
# RNN
# Time taken: 246.51083700100025
# Itr stopped: 17500
# Min validation error: 0.001986726
# Denormalized validation error: 18.504949918661264
#
# GRU
# Time taken: 487.5709146240006
# Itr stopped: 18000
# Min validation error: 0.001339163
# Denormalized validation error: 10.45840469361726
#
# LSTM
# Time taken: 432.73047026600034
# Itr stopped: 18500
# Min validation error: 0.0013122336
# Denormalized validation error: 9.057215740626543
#
# Seq Len: 72
#
# RNN
# Time taken: 723.7870445440003
# Itr stopped: 19000
# Min validation error: 0.0012962609
# Denormalized validation error: 10.682260107741136
#
# GRU
# Time taken: 1246.9418031219993
# Itr stopped: 15500
# Min validation error: 0.0010465818
# Denormalized validation error: 8.61281120826994
#
# LSTM
# Time taken: 1108.5033689619995
# Itr stopped: 13500
# Min validation error: 0.0009968199
# Denormalized validation error: 7.759870622882436
#
# Seq Len: 96
#
# RNN
# Time taken: 907.123842844001
# Itr stopped: 15000
# Min validation error: 0.0010954492
# Denormalized validation error: 8.885591086046022
#
# GRU
# Time taken: 1511.5696144740032
# Itr stopped: 13000
# Min validation error: 0.001024862
# Denormalized validation error: 8.57339568785667
#
# LSTM
# Time taken: 1151.6507261309998
# Itr stopped: 11000
# Min validation error: 0.00092091894
# Denormalized validation error: 7.558516922265615
#
# -
# ## Model 2: Weather Type Prediction
# +
# Define the second kind of RNN
# Modified from <NAME>
def build_and_save_d2(modelDir,train,valid,cell,cellType,input_dim=1,hidden_dim=100,
seq_size = 12,max_itr=200,keep_prob=0.5, batch_size=32, num_epochs=10,log=500,early_stopping=10):
tf.reset_default_graph()
graph = tf.Graph()
with graph.as_default():
# input place holders, note the change in dimensions in y, which
# now has 7 dimensions
# input Shape: [# training examples, sequence length, # features]
x = tf.placeholder(tf.float32,[None,seq_size,input_dim],name="x_in")
# label Shape: [# training examples, sequence length, # classes]
y = tf.placeholder(tf.float32,[None,seq_size,7],name="y_in")
dropout = tf.placeholder(tf.float32,name="dropout_in")
# Function to wrap each cell with dropout. Pass the dropout placeholder
# (not the Python constant) so the keep probability can be fed as 1
# at validation/test time
def wrap_cell(cell, keep_prob):
drop = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)
return drop
cells = tf.nn.rnn_cell.MultiRNNCell(
[wrap_cell(c, dropout) for c in cell]
)
# RNN output Shape: [# training examples, sequence length, # hidden]
outputs, _ = tf.nn.dynamic_rnn(cells,x,dtype=tf.float32)
# weights for output dense layer (i.e., after RNN)
# W shape: [# hidden, 7]
W_out = tf.Variable(tf.random_normal([hidden_dim,7]),name="w_out")
# b shape: [7]
b_out = tf.Variable(tf.random_normal([7]),name="b_out")
# output dense layer:
num_examples = tf.shape(x)[0]
# convert W from [# hidden, 7] to [# training examples, # hidden, 7]
# step 1: add a new dimension at index 0 using tf.expand_dims
w_exp= tf.expand_dims(W_out,0)
# step 2: duplicate W for 'num_examples' times using tf.tile
W_repeated = tf.tile(w_exp,[num_examples,1,1])
# Dense Layer calculation:
# [# training examples, sequence length, # hidden] *
# [# training examples, # hidden, 7] = [# training examples, sequence length, 7]
y_pred = tf.matmul(outputs,W_repeated)
# Add the bias once, naming the tensor so it can be fetched after restore
y_pred = tf.add(y_pred, b_out, name="y_out")
# Cost & Training Step
# Minimize error with softmax cross entropy
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y_pred, labels=y))
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
saver=tf.train.Saver()
# Run Session
with tf.Session(graph=graph) as sess:
# initialize variables
sess.run(tf.global_variables_initializer())
# Train until num_epochs complete or early stopping triggers
start=timeit.default_timer()
epoch_counter = 0 # Keep track of our epochs
i = 0 # Keep track of our iterations
min_validation_err = sys.float_info.max # Start min error at biggest number
min_validation_itr = 0 # Keep track of the smallest validation error we have seen so far
early_stopping_counter = 0 # Counter to see if we have achieved early stopping
min_accuracy=None
print('Training %s ...'%cellType)
while True: # If we train more, would we overfit? Try 10000
i += 1 # Increment counter
trainX, trainY, done = train.next_batch(batch_size) # Get train batch
# See if we are done with our epochs
if done:
epoch_counter += 1
print("Done with epoch " + str(epoch_counter))
if epoch_counter >= num_epochs:
break
# Pass the data through the network
_, train_err = sess.run([train_op,cost],feed_dict={x:trainX,y:trainY,dropout:keep_prob})
if i==1:
print(' step, train err= %6d: %8.5f' % (1,train_err))
# Every 'log' steps, print out train error and validation error.
# Update early stopping at these points
elif (i+1) % log == 0:
print(' step, train err= %6d: %8.5f' % (i+1,train_err))
# Get validation error on the full validation set
valid_err, pred = sess.run([cost, y_pred],feed_dict={x:valid.features,y:valid.response,dropout:1})
print(' step, validation err= %6d: %8.5f' % (i+1,valid_err))
pred = pred[:,seq_size-1]
actual_valid = np.array([x[-1] for x in valid.response])
# Look at the distribution of the output
amax = np.argmax(pred, axis=1)
accuracy = sum(np.argmax(actual_valid, axis=1) == amax)/len(pred)
print("Accuracy of: " + str(accuracy))
# Check early stopping
early_stopping_counter += 1
# If we have smaller validation error, reset counter and
# assign new smallest validation error. Also store the
# current iteration as the iteration where the current min is
if valid_err < min_validation_err:
min_validation_err = valid_err
min_validation_itr = i + 1
early_stopping_counter = 0
min_accuracy = accuracy
# Store the current best model
modelPath= saver.save(sess,"%s/model_%s"%(modelDir,cellType),global_step=i+1)
print("model saved:%s"%modelPath)
# Break if we achieve early stopping
if early_stopping_counter > early_stopping:
break
end=timeit.default_timer()
print("Training time : %10.5f"%(end-start))
# Log the results to a file
with open(modelDir + "/results.txt", 'a+') as file:
file.write(cellType + "\n")
file.write("Time taken: " + str(end - start) + "\n")
file.write("Itr stopped: " + str(min_validation_itr) + "\n")
file.write("Min validation error: " + str(min_validation_err) + "\n")
file.write("Min validation accuracy: " + str(min_accuracy) + "\n\n")
# Return the min validation error and the iteration in which it occured
return min_validation_itr, min_validation_err
# Load back in the model
def load_and_predict2(test,modelDir,cellType,itr,seq_size):
with tf.Session() as sess:
print ("Load model:%s-%s"%(modelDir,itr))
saver = tf.train.import_meta_graph("%s/model_%s-%s.meta"%(modelDir,cellType,itr))
saver.restore(sess,tf.train.latest_checkpoint("%s"%modelDir))
graph = tf.get_default_graph()
x = graph.get_tensor_by_name("x_in:0")
dropout= graph.get_tensor_by_name("dropout_in:0")
y_pred = graph.get_tensor_by_name("y_out:0")
predicted_vals_all= sess.run(y_pred, feed_dict={ x: test.features, dropout:1})
# Get last item in each predicted sequence:
predicted_vals = predicted_vals_all[:,seq_size-1]
return predicted_vals
# -
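# The softmax cross-entropy minimized by the classification model above
# can be sketched in NumPy for a single one-hot label (illustrative
# values only): the loss is -sum(label * log(softmax(logits))).

```python
import numpy as np

# Softmax cross-entropy for one example
logits = np.array([2.0, 1.0, 0.1])
label = np.array([1.0, 0.0, 0.0])        # true class is index 0
probs = np.exp(logits - logits.max())    # shift logits for numerical stability
probs /= probs.sum()
loss = -np.sum(label * np.log(probs))
assert 0.41 < loss < 0.43                # low loss: the true class has the largest logit
```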
# Take in test set and print accuracy and F1-score
def predict_type(test,modelDir,cellType,itr,seq_size):
# Load and predict (use the parameters, not notebook-level globals)
predicted_vals_rnn=load_and_predict2(test,modelDir,cellType,itr,seq_size)
# Compute accuracy
# step 1: get ground-truth
actual_test= np.array([x[-1] for x in test.response])
# Get raw accuracy
accuracy = sum(np.argmax(actual_test, axis=1) == np.argmax(predicted_vals_rnn, axis=1))/len(actual_test)
print("Accuracy: " + str(accuracy))
# Calculate f1_score
# Convert the continuous valued predictions
# to one hot
preds = np.zeros([len(actual_test), 7])
for i in range(len(actual_test)):
preds[i, np.argmax(predicted_vals_rnn[i])] = 1
# Use the weighted average, which accounts for class imbalance in this multiclass setting
print("F1 score: " + str(f1_score(actual_test, preds, average="weighted")))
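# The weighted F1 call above averages per-class F1 scores weighted by
# each class's support. A self-contained sketch on toy one-hot labels
# (made-up data, 3 classes):

```python
import numpy as np
from sklearn.metrics import f1_score

# Per-class F1: class 0 -> 1.0 (support 1), class 1 -> 0.8 (support 2),
# class 2 -> 0.0 (support 1); weighted mean = (1.0 + 2*0.8 + 0.0)/4 = 0.65
actual = np.eye(3)[[0, 1, 2, 1]]          # one-hot ground truth
preds = np.eye(3)[[0, 1, 1, 1]]           # one-hot predictions
score = f1_score(actual, preds, average="weighted")
assert abs(score - 0.65) < 1e-9
```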
# ### Read in observations and run the model
# +
# Define size of sequence, 1 day for now
seq_len = 2
train_data2,valid_data2,test_data2 = create_observations(train, valid, test, seq_size=seq_len, temp=False, lookahead=4, baseline=True)
# +
input_dim=19 # dim > 1 for multivariate time series
hidden_dim=100 # number of hidden units h
keep_prob=0.75
modelDir='modelDir2' # Make sure to use a different model dir
log=500 # How often we validate
batch_size=16
num_epochs=15 # MAXIMUM number of epochs (i.e if early stopping is never achieved)
early_stopping=10 # Number of validation steps without improvement until we stop
num_layers = 2
# Different RNN Cell Types
# NEED TO MAKE DIFFERENT COPIES OF THE CELL TO AVOID SELF-REFERENTIAL ERRORS
RNNcell = [rnn.BasicRNNCell(hidden_dim) for _ in range(num_layers)]
cellType = "RNN"
# Build models and save model
end_itr, min_err = build_and_save_d2(modelDir=modelDir,
train=train_data2,
valid=valid_data2,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log)
predict_type(test_data2, modelDir, cellType, end_itr, seq_len)
RNNcell = [rnn.GRUCell(hidden_dim) for _ in range(num_layers)]
cellType = "GRU"
# Build models and save model
end_itr, min_err = build_and_save_d2(modelDir=modelDir,
train=train_data2,
valid=valid_data2,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log)
predict_type(test_data2, modelDir, cellType, end_itr, seq_len)
RNNcell = [rnn.BasicLSTMCell(hidden_dim) for _ in range(num_layers)]
cellType = "LSTM"
# Build models and save model
end_itr, min_err = build_and_save_d2(modelDir=modelDir,
train=train_data2,
valid=valid_data2,
cell=RNNcell,
cellType=cellType,
input_dim=input_dim,
hidden_dim=hidden_dim,
seq_size=seq_len,
keep_prob=keep_prob,
batch_size=batch_size,
num_epochs=num_epochs,
log=log)
predict_type(test_data2, modelDir, cellType, end_itr, seq_len)
# -
# ### Results
# + active=""
# Seq Len: 2
#
# RNN
#
# Training time : 26.06421
# Load model:modelDir2-3000
# INFO:tensorflow:Restoring parameters from modelDir2/model_RNN-3000
# Accuracy: 0.5818430345141816
# F1 score: 0.5751496500102374
#
# GRU
#
# Training time : 68.72410
# Load model:modelDir2-9500
# INFO:tensorflow:Restoring parameters from modelDir2/model_GRU-9500
# Accuracy: 0.5961954664540381
# F1 score: 0.5916327029106627
#
# LSTM
#
# Training time : 71.13695
# Load model:modelDir2-11500
# INFO:tensorflow:Restoring parameters from modelDir2/model_LSTM-11500
# Accuracy: 0.6037134069939629
# F1 score: 0.5971185305032227
#
# Seq Len: 24
#
#
# RNN
#
# Training time : 78.50333
# Load model:modelDir2-3000
# INFO:tensorflow:Restoring parameters from modelDir2/model_RNN-3000
# Accuracy: 0.5922119447299303
# F1 score: 0.5910033794373887
#
# GRU
#
# Training time : 88.32919
# Load model:modelDir2-1000
# INFO:tensorflow:Restoring parameters from modelDir2/model_GRU-1000
# Accuracy: 0.5928971108827223
# F1 score: 0.5792721833038622
#
# LSTM
#
# Training time : 90.14142
# Load model:modelDir2-1500
# INFO:tensorflow:Restoring parameters from modelDir2/model_LSTM-1500
# Accuracy: 0.6044307411213886
# F1 score: 0.5905284940134202
#
# Seq Len: 72
#
# RNN
#
# Training time : 314.71831
# Load model:modelDir2-3500
# INFO:tensorflow:Restoring parameters from modelDir2/model_RNN-3500
# Accuracy: 0.5838787461246986
# F1 score: 0.5687409812194684
#
# GRU
#
# Training time : 409.06710
# Load model:modelDir2-500
# INFO:tensorflow:Restoring parameters from modelDir2/model_GRU-500
# Accuracy: 0.5920312320587897
# F1 score: 0.5844762323065641
#
# LSTM
#
# Training time : 398.69022
# Load model:modelDir2-1000
# INFO:tensorflow:Restoring parameters from modelDir2/model_LSTM-1000
# Accuracy: 0.5940980594787002
# F1 score: 0.5796382185477683
# (source file: Weather - Final.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jcs-lambda/CS-Unit1-Build/blob/master/naive_bayes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rya6ruSKK-gB" colab_type="text"
# # Scikit-Learn GaussianNB
#
# Testing using [wine dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_wine.html#sklearn.datasets.load_wine)
# + id="F7k0ZVbJPdfF" colab_type="code" colab={}
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
# + id="H8m2F8NZPnDO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="f52d3c87-59cc-41b8-ab8e-3b12cd3ac5c2"
wine = load_wine()
print(wine.DESCR)
# + id="ljNt78umP0CS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="25f8c971-78ea-419b-9e2e-31007764fe37"
print(wine.feature_names)
print(wine.target_names)
# + id="DrPEV7leP8r8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 164} outputId="354f7abf-07b2-4bcb-eca8-7ed80170192c"
wine.target
# + id="yPdBxTodQA5L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 431} outputId="73c94267-2623-4489-a637-2f17d3ea8e79"
df = pd.DataFrame(wine.data, columns=wine.feature_names)
df['class'] = wine.target
df['class'] = df['class'].replace({
0: wine.target_names[0],
1: wine.target_names[1],
2: wine.target_names[2],
})
df
# + id="AFZF8VECQI44" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 263} outputId="767ec0be-9b83-46ed-fdcb-e785dd047b45"
df.dtypes
# + id="7rJDohzmQM5j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 263} outputId="438a238a-8339-4df7-e8f1-fc4dd886d6bb"
df.isna().sum()
# + id="Dzc24SnJRMAG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 32} outputId="7cac3b3c-8531-4b92-a0e1-ea1e70dd851b"
target='class'
features = df.columns.drop(target)
X_train, X_test, y_train, y_test = train_test_split(df[features], df[target], test_size=0.2, stratify=df[target], random_state=42)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# + id="X4lRhXrGRodh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 82} outputId="f508c028-d422-4ff3-da32-d604d5c22542"
y_train.value_counts(normalize=True)
# + id="E-ce75j2SIDx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 32} outputId="5700a4c2-a87b-45fb-8942-34c98a6a2382"
gnb = GaussianNB().fit(X_train, y_train)
y_pred_sk = gnb.predict(X_test)
acc_sk= accuracy_score(y_test, y_pred_sk)
print(f'Test accuracy: {acc_sk * 100:.02f}%')
# + id="Dr3dsIboW_vq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 131} outputId="c2ebc093-5a42-441d-d788-6c76a4ce56ac"
y_pred_sk
# + [markdown] id="nmIAgfhISdXr" colab_type="text"
# ## My estimator
# + id="cZLqIFdpSesG" colab_type="code" colab={}
import math
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils import check_X_y, check_array
from sklearn.utils.validation import check_is_fitted
# + id="qyIqDgapUTOX" colab_type="code" colab={}
class NaiveBayes(BaseEstimator, ClassifierMixin):
"""Gaussian Naive Bayes Classifier"""
def __init__(self):
"""No initialization parameters."""
pass
def _validate_input(self, X, y=None):
"""Returns validated input.
:param X: 2d array-like of numeric values with no NaNs or infinite values
:param y: 1d array-like of hashable values with no NaNs or infinite values
:return: validated data, converted to numpy arrays
"""
if y is not None:
# fitting the model, validate X and y
return check_X_y(X, y)
else:
# predicting, validate X
check_is_fitted(self, ['num_features_', 'feature_summaries_'])
X = check_array(X)
if X.shape[1] != self.num_features_:
                raise ValueError(f'unexpected input shape: (x, {X.shape[1]}); must be (x, {self.num_features_})')
return X
def fit(self, X, y):
"""Fit the model with training data. X and y must be of equal length.
:param X: 2d array-like of numeric values with no NaNs or infinite values
:param y: 1d array-like of hashable values with no NaNs or infinite values
:return: fitted instance
"""
X, y = self._validate_input(X, y)
self.num_features_ = X.shape[1]
# create dictionary containing input data separated by class label
data_by_class = {}
for i in range(len(X)):
features = X[i]
label = y[i]
if label not in data_by_class:
                # first occurrence of label, create empty list in dictionary
data_by_class[label] = []
data_by_class[label].append(features)
# summarize the distribution of features by label as list of
# (mean, standard deviation) tuples
# store in instance attribute for use in prediction
self.feature_summaries_ = {}
for label, features in data_by_class.items():
self.feature_summaries_[label] = [
(np.mean(column), np.std(column))
for column in zip(*features)
]
return self
def _liklihood(self, x, mean, stdev):
"""Calculate conditional probability of a Gaussian distribution.
:param x: float
:param mean: float, sample mean
:param stdev: float, sample standard deviation
:return: float
"""
exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent
def predict(self, X):
"""Returns class predictions for each row in X.
:param X: 2d array-like of numeric values with no NaNs or infinite values
whose .shape[1] == .shape[1] of fitted data
:return: np.array of class predictions
"""
X = self._validate_input(X)
# predicted class labels
predictions = []
# iterate input rows
for x in X:
            # get cumulative log probabilities for each class for this row
probabilities = {}
for label, features in self.feature_summaries_.items():
probabilities[label] = 0
for i in range(len(features)):
mean, stdev = features[i]
probabilities[label] += math.log2(
self._liklihood(x[i], mean, stdev)
)
# find class with highest probability
best_label, best_prob = None, -1
for label, probability in probabilities.items():
if best_label is None or probability > best_prob:
best_prob = probability
best_label = label
# prediction for this row
predictions.append(best_label)
return np.array(predictions)
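# A quick sanity check of the Gaussian likelihood formula used by the classifier above (a standalone re-implementation for illustration, not the class method itself): at the mean, the density of a standard normal should equal 1/sqrt(2*pi), and it should fall off away from the mean.

```python
import math

def gaussian_pdf(x, mean, stdev):
    # Same formula as the classifier's likelihood helper above.
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return exponent / (math.sqrt(2 * math.pi) * stdev)

# Peak value at the mean of a standard normal: 1/sqrt(2*pi) ≈ 0.3989.
assert math.isclose(gaussian_pdf(0.0, 0.0, 1.0), 1 / math.sqrt(2 * math.pi))
# Density decreases away from the mean.
assert gaussian_pdf(2.0, 0.0, 1.0) < gaussian_pdf(0.0, 0.0, 1.0)
```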
# + id="jbGeAvAVAukS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 32} outputId="4e858726-c6c7-4f35-c5cb-af71fe3<PASSWORD>"
nb = NaiveBayes().fit(X_train, y_train)
y_pred_mine = nb.predict(X_test)
acc_mine= accuracy_score(y_test, y_pred_mine)
print(f'Test accuracy: {acc_mine * 100:.02f}%')
# + id="V-q8uJLgA2s1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 131} outputId="66c45724-66d1-4461-a088-b1ef4179be0b"
y_pred_mine
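# To compare the scikit-learn and custom classifiers head-to-head, one could measure how often their predictions agree. A sketch with hypothetical stand-in arrays (the real comparison would use `y_pred_sk` and `y_pred_mine` from the cells above):

```python
import numpy as np

# Stand-ins for y_pred_sk and y_pred_mine.
pred_a = np.array(['class_0', 'class_1', 'class_2', 'class_1'])
pred_b = np.array(['class_0', 'class_1', 'class_1', 'class_1'])

# Fraction of test rows on which the two classifiers agree.
agreement = (pred_a == pred_b).mean()
print(f'Prediction agreement: {agreement:.2%}')  # → 75.00%
```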
# + id="o6AnK_QCH4Ft" colab_type="code" colab={}
| naive_bayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
# %matplotlib inline
sns.set(style="white")
sns.set(style="whitegrid", color_codes=True)
crime = pd.read_csv('Downloads/durham-police-crime-reports.csv')
crime.head()
print(crime.shape)
crime['chrgdesc'].unique()
crime['chrgdesc'].value_counts()
print(crime.describe())
import gmaps
import gmaps.datasets
gmaps.configure(api_key="<KEY>")
locations = crime[['Lati','Long']]
fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations))
fig
house = pd.read_excel('durham-nc-neighborhoods-Zillow-Home-Value-Index-TimeSeries (1).xls')
house
house.drop([0], axis=0, inplace=True)
house
print(house.describe())
| Exploring_FDS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Histograms
# +
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(19680801)
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
x2 = mu + sigma * np.random.randn(10000, 2)
plt.figure(figsize=(12,12))
plt.subplot(221)
n, bins, patches = plt.hist(x2, 50, density=True, alpha=0.75, histtype='barstacked')
plt.xlim(40,160)
plt.ylim(0, 0.03)
plt.title('bins=50, alpha=0.75, histtype =\'barstacked\'')
plt.grid(True)
plt.subplot(222)
n, bins, patches = plt.hist(x, 25, density=True, facecolor='r', alpha=0.5)
plt.xlim(40,160)
plt.ylim(0, 0.03)
plt.title('grid = False, bins=25, alpha=0.50, facecolor=\'r\'')
plt.grid(False)
plt.subplot(223)
n, bins, patches = plt.hist(x, 75, density=True, facecolor='b', alpha=0.25, orientation ='horizontal')
plt.ylim(40, 160)
plt.xlim(0, 0.03)
plt.title('bins=75, alpha=0.25, facecolor=\'b\', orientation =\'horizontal\'')
plt.grid(True)
plt.subplot(224)
n, bins, patches = plt.hist(x, 50, density=True, facecolor='y', alpha=0.75, cumulative=True)
plt.title('bins=50, alpha=0.75, facecolor=\'y\', cumulative=True')
plt.grid(True)
plt.show()
# -
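# The `n, bins, patches` tuple unpacked in the cells above is the return value of `plt.hist`: `n` holds the per-bin counts (or densities when `density=True`), `bins` holds the bin edges (one more than the number of bars), and `patches` holds the drawn rectangles. A small standalone check of these shapes:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the example runs anywhere
import matplotlib.pyplot as plt

data = np.random.default_rng(0).normal(size=1000)
n, bins, patches = plt.hist(data, bins=20)
assert len(n) == 20           # one count per bin
assert len(bins) == 21        # bin edges: one more than the number of bins
assert len(patches) == 20     # one rectangle patch per bin
assert n.sum() == len(data)   # every sample falls into some bin
```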
# ### 3D Plot
# +
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(projection='3d')
dx, dy = 0.01, 0.01
X, Y = np.mgrid[slice(1, 5+dx, dx), slice(1, 5+dy, dy)]
Z = np.cos(X)**2 + np.sin(Y)**2
surf = ax.plot_surface(X, Y, Z, cmap=plt.get_cmap('Spectral_r'), linewidth=1, antialiased=False)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
# -
| assets/code/code_script_matplotlib_wideo_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Names Of Gods
#
# I created the dataset in Excel and converted it into a .csv file.
#
# I put different names of three gods (Shiva, Vishnu & Devi) into a total of 150 rows with 5 columns.
# I'm trying to show the names of the gods with their respective meanings.
#
# I learned the basics in the [Data Analysis with Python: Zero to Pandas](zerotopandas.com) course, which enables me to apply them to the real world.
# ## Downloading the Dataset
#
# Here I used the opendatasets library.
# First I uploaded my .csv file to my [Github](https://github.com/ak0029/names) account, then fetched that file into my [kaggle](https://www.kaggle.com/avadhootk/names-df) account, and from Kaggle I downloaded it into the Jupyter notebook.
#
# +
# Install opendataset library
# !pip install opendatasets --upgrade
import opendatasets as od
# +
#Retrieve data
dataset_url = 'https://www.kaggle.com/avadhootk/names-df'
od.download('https://www.kaggle.com/avadhootk/names-df')
# Use below username & key
# username: avadhootk
# key: e432ba9ab838c63effeb16e660e18743
# +
# Import Pandas
import pandas as pd
# +
# Install the Jovian library and commit changes after each stage to save work
# !pip install jovian --upgrade --quiet
# -
import jovian
jovian.commit(project='ak-names')
# Import OS library
import os
# +
# For listing files under directories
os.listdir('names-df')
# +
# The download contains 2 files; we need the .csv file for our project
# Read the .csv file using the pandas library
raw_df = pd.read_csv('names-df/Names.csv')
# +
# Our dataframe which we are going to use for this project
raw_df
# -
# - Errors faced while creating the dataset
#
# I compared my .csv file, opened in Notepad++, with the covid_df file used in the course demo and fixed its errors; otherwise I would have faced lots of issues.
#
#
jovian.commit()
# ## Data Preparation and Cleaning
#
# **TODO** - We check the number of columns, rows, etc., to get an overview of the file
#
# +
# Showing columns names in df
raw_df.columns
# +
# Info about data frame
raw_df.info()
# +
# Note: .info without parentheses returns the bound method itself (its repr includes the dataframe)
raw_df.info
# +
# Overall description including count, frequency, etc.
raw_df.describe()
# +
# Showing No of Rows vs Columns
raw_df.shape
# +
# Random 10 samples of df
raw_df.sample(10)
# +
# Accessing data at particular row
raw_df.loc[111]
# -
# **<NAME>**
jovian.commit()
# ## Exploratory Analysis and Visualization
#
# Here we try out some mathematical methods as well as chart- and graph-related methods by trial and error
#
# Let's begin by importing `matplotlib.pyplot` and `seaborn`.
# +
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
# The magic command below ensures that graphs are shown inside the Jupyter notebook
# %matplotlib inline
sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['figure.figsize'] = (9, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
# -
# **TODO** - Explore one or more columns by plotting a graph below, and add some explanation about it
# +
# Showing description of column data belongs to
raw_df.belongs_to.describe()
# +
# Finds unique values in belongs_to columns
raw_df.belongs_to.unique()
# +
# Showing description of name column data
raw_df.name.describe()
# +
# Finds unique values in name columns
raw_df.name.unique()
# +
# Finds unique values in sanskrit column
raw_df.sanskrit.unique()
# +
# Finds unique values in meaning columns
raw_df.meaning.unique()
# -
jovian.commit()
# ## Asking and Answering Questions
#
# ### Q1: Make separate dataframes according to "belongs_to" column values
# +
# Make the Shiva names dataframe; here we use a simple boolean mask for filtering
shiva_df = raw_df[raw_df.belongs_to == 'Shiva']
# The following command could also be used, but it assigns boolean values "True" to Shiva & "False" to the rest,
# so we would have to perform more operations on it.
#shiva_df = raw_df.belongs_to == 'Shiva'
# -
shiva_df
shiva_df.describe()
# +
# Make the Devi names dataframe; here we use a simple boolean mask for filtering
devi_df = raw_df[raw_df.belongs_to == 'Devi']
# -
devi_df
devi_df.describe()
# +
# Now
# The command below assigns boolean values: "True" to 'Narayan' & "False" to the rest of the data in the "belongs_to" column
narayan_df = raw_df.belongs_to == 'Narayan'
# +
# True is assigned to Narayan & the others are assigned False
narayan_df
# -
''' The result shows:
the value False appears most often, 125 times; 150 - 125 = 25,
so there are 25 True values, i.e. 25 names belong to Narayan
'''
narayan_df.describe()
# +
# This approach is much simpler
narayan_df = raw_df[raw_df.belongs_to == 'Narayan']
narayan_df
# -
jovian.commit()
# ### Q2: Plot chart w.r.to '_belongs_to_' column
# +
# Plotting line chart with X-axis & Y-axis labels
plt.plot(raw_df.belongs_to)
plt.xlabel('No. of names')
plt.ylabel('Names')
# -
jovian.commit()
# ### Q3: TODO - replace values in belongs_to column with Shiva=1 , Devi = 2, Narayan= 3
# Data type of _**belongs_to**_ column is _object_ which is used for strings/categories in pandas.
# +
# Make copy of df
copy_df = raw_df.copy()
# -
copy_df.info()
# +
#replacing the categories with the above numbers with replace() function
#Dictionary which contains mapping numbers for each category in the belongs_to column:
replace_matrix = {'belongs_to': {'Shiva': 1, 'Devi': 2, 'Narayan': 3}}
# +
# Replacing
copy_df.replace(replace_matrix, inplace=True)
copy_df.head()
# +
# As we replaced values "belongs_to" datatype changed into int64
copy_df.info()
# -
copy_df.sample(20)
jovian.commit()
# ### Q4: Count of distinct categories within the belongs_to column & make a frequency distribution graph
# +
# Counting Diff values in belongs_to column
raw_df['belongs_to'].value_counts().count()
# +
'''Barplot of the frequency distribution of a category using the seaborn package, which shows the
frequency distribution of the belongs_to column.
'''
df = raw_df['belongs_to'].value_counts()
# Assigning title, x & Y axis labels
sns.barplot(x=df.index, y=df.values, alpha=0.9)
plt.title('Frequency Distribution of Names')
plt.ylabel('Number of Occurrences', fontsize=16)
plt.xlabel('Names', fontsize=16)
plt.show()
# -
jovian.commit()
# ### Q5: Show the percentage-wise distribution of gods'/goddesses' names in the total dataframe
# +
# We are plotting pie chart here
labels = raw_df['belongs_to'].astype('category').cat.categories.tolist()
counts = raw_df['belongs_to'].value_counts()
sizes = [counts[var_cat] for var_cat in labels]
fig1, ax1 = plt.subplots()
#autopct is to show the percentage(%) on pie chart
ax1.pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True)
plt.title('Percentage-wise Distribution of Names')
ax1.axis('equal')
plt.show()
# -
jovian.commit()
# ### Q6: Find sanskrit names of devi
# +
# Access previously made df
devi_df
# +
# extraction of required data
df1 = devi_df[['belongs_to', 'sanskrit']]
df1
# +
# Plotting line chart
plt.plot(df1.belongs_to)
plt.xlabel('Sanskrit names')
plt.ylabel('Names');
# -
jovian.commit()
# ### Q7: Random Sample's % Calculation
# +
# We are plotting pie chart here
# Random sample collection
df2 = raw_df.sample(3)
labels = df2['belongs_to'].astype('category').cat.categories.tolist()
counts = df2['belongs_to'].value_counts()
sizes = [counts[var_cat] for var_cat in labels]
fig1, ax1 = plt.subplots()
#autopct is to show the percentage(%) on pie chart
ax1.pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True)
plt.title("Random Sample's Percentage Calculation")
ax1.axis('equal')
plt.show();
# -
jovian.commit()
# ### Q8: TODO - Plot graph of 140 to 145 Name Vs meaning
#
df4 = raw_df.loc[140:145]
df4.info()
df4
# +
# Finds unique values in columns
name = df4.name.unique()
meaning = df4.meaning.unique()
# Plot bar chart
plt.bar(name, meaning);
# -
# ### Q9: In raw_df, find numerically which god/goddess has the most names
# #### Don't use charts
jovian.commit()
# +
# Check null (missing) values in the df
raw_df.isnull().values.sum()
# +
# Check column-wise null values
raw_df.isnull().sum()
# +
# Frequency distribution of 'belongs_to' category
raw_df['belongs_to'].value_counts()
# -
# Here we got our answer
jovian.commit()
# ## Inferences and Conclusion
#
# - Analyzed the dataframe and found that Shiva has the most names in it
# - Did percentage-wise calculations
# - Drew graphs for selected groups of data
# - Performed many different operations
# ## References and Future Work
#
# **TODO** - I will grow this dataset in the future, with proper meanings of the names from the Vedas/scriptures.
#
# **You can contribute here to grow the dataset:**
#
# - https://www.kaggle.com/avadhootk/names-df
#
# - https://github.com/ak0029/names
#
import jovian
jovian.commit()
| zerotopandas-project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# The training outlined in this particular [Node.js](https://nodejs.org/en) notebook is pulled from the following locations:
#
# - [Node.js For Beginners: Become A Node.js Developer](https://www.udemy.com/nodejs-for-beginners-become-a-nodejs-developer)
#
# This training was created for my personal use to generate clarity around what I've learned through the above.
# # What Is Node.js And Where Can You Find It?
#
# [Node.js](https://nodejs.org/en) is a JavaScript runtime built on [Chrome's V8 JavaScript engine](https://developers.google.com/v8) - which was built on C++. This allows us to do things we wouldn't normally be able to do with JavaScript.
#
# It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Its package ecosystem, npm, is the largest ecosystem of open source libraries in the world.
#
# Until Node.js was created, you could only run JavaScript in the browser. (Browsers were the only programs that had a JavaScript engine.) So Node.js was created with the V8 engine in order to run JavaScript anywhere, including on your computer.
#
# It's most popular for building servers. Most companies use it as a back-end server because of how it's built. It is likely the server of choice for web apps, mobile apps, etc. that need an API or server, and it is also good for building chat apps and real-time communication servers.
#
# The course creator (who uses a Mac) preferred the LTS release, but any version above 8.9.3 should be fine.
#
# Windows is provided in the download section as well.
#
# In short?
#
# It is a free, memory-efficient, open-source server environment that runs on various platforms and uses JavaScript on the server with asynchronous programming.
#
# ## What Can Node.js Do?
#
# Node.js is very versatile, as it can:
# - create dynamic page content
# - create, open, read, write, delete, and close files on a server
# - collect form data
# - add, delete, modify, data in your database
# # Installing Node.js
#
# If you would like the mac version, please see Section 1, Lecture 5 of the course mentioned at the top.
#
# The instructions found here will be for Windows users.
#
# 1. Go to the Node.js website.
# 2. Download the installer - the most stable version will be on the left & the one with latest features is on the right.
# 3. Follow the install process.
# 
# Be sure to read and accept the EULA.
#
# 
# Choose your destination installation folder - default is fine unless there is a specific place you wish to put it.
#
# 
# At the **custom setup** section, you will see the following:
# - *Node.js runtime*: this will install the core environment, including performance counters & event tracing
# - *npm package manager*: a program to download and add Node.js packages into our applications
# - *online documentation shortcuts*: adds entries to the start menu that link to online documentation for Node.js
# - *add to path*: adds the ability to call Node.js in your command prompt, npm, and the modules globally installed to the PATH environment
#
# # How Can I Tell If I Have It?
#
# In a command prompt, type:<br>
# `node -v`
#
# This should then print the version number for you on the console.
#
# 
#
# # Cool Miscellaneous Stuff
#
# Have you checked out Johnny-Five?
#
# Also, here are some great additional sites for more information:
# - [w3schools](https://www.w3schools.com/nodejs)
# - [NPM](https://www.npmjs.com) - website to download packages for Node.js
#
# ## What About Git Bash?
#
# [Git Bash](https://git-scm.com) is a handy tool to use because it resembles the linux system.
#
# You should have this in your arsenal.
# # Commands In Node.js
#
# `node`<br>
# This starts the Node.js REPL: the prompt changes, and you can now run JavaScript directly in the terminal!
#
# `console.log('hi')`<br>
# Prints "hi" to the command prompt.
#
# `4+5`<br>
# Returns the integer 9.
#
# `Boolean(3)`<br>
# Returns **true**.
#
# `global`<br>
# Provides the global objects that can be used; these are not browser-specific.
#
# `process`<br>
# What you're running in the terminal right now - what the computer is doing.
#
# `process.exit()`<br>
# Exit the process - now back in the terminal.
#
# ## Final Thoughts
#
# Other than browser-specific objects, we can do anything with Node.js!
| NodeJS/NodeJS For Beginners/0 - Introduction to NodeJS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="35lxkB5RmKeh"
# This algorithm usually has a huge impact on understanding and on the conversational policies/states. As we also deal with semantics in other pipelines, it is important to find the right steps and resources to complete it correctly, and here we are preparing the ground.
#
# To find the domain-specific semantics, a database is needed (this can also be books related to the domains that are used for query purposes, or site corpora, or any unstructured data related to the domain), together with the corpora we have available for the language model. The corpora do not need to be large: in our research, 100-200 sentences for each type of conversation (chit-chat/ reactions/ restaurant advice/ legal consultancy etc.) are enough at this point.
#
# We don't want to use pre-trained models that come with their own tokenizers and semantics from different domains.
#
# + id="ohoGr8BwmFG_"
# IN: input from pre-processing and Composed Words
# OUT: tuple (word/POS) + identified input for CVM
# + [markdown] id="LyR92TAYmZLC"
# Objectives:
#
# • Addressing words that are coming from the linguistic evaluation;
#
# • Retrieving information for CVM (verbs);
#
# • Addressing UNK words by finding their POS probabilities;
#
# • Addressing words that have more than one POS in lexicons/vocabularies and estimating their right POS;
#
# Language specificities: yes (first 2 objectives);
#
# Dependencies: Input Processing l/CVM/ Composed words;
#
# Database/ Vocabularies needed: Lexicon, verbs conjugations/forms, pronouns database, choose a database for POS training + Books database (that contains titles/first phrases for each chapter/sub-titles/ dialogues);
#
# + id="qR3OyPSzmiwv"
# To dos:
# 1. Retrieve information by evaluating each verb and send it to CVM (future/present/past, first/second/third person, affirmative/negative forms)
# 2. Bring words that are coming from linguistic evaluation to their common base/stem/lemma.
# 3. Train database for semantic (some nouns are marked in composed words for this task).
# 4. For every word make a tuple of the word and its POS.
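# The last to-do item above can be sketched with a toy example. The lexicon and tag names here are invented for illustration; the real pipeline would use the trained database and lexicons described earlier.

```python
# Illustrative only: a toy lexicon lookup producing (word, POS) tuples.
toy_lexicon = {'the': 'DET', 'cat': 'NOUN', 'runs': 'VERB'}

def tag(tokens, lexicon):
    # Unknown words get the 'UNK' tag, to be resolved later by POS-probability estimation.
    return [(t, lexicon.get(t.lower(), 'UNK')) for t in tokens]

print(tag(['The', 'cat', 'runs', 'fast'], toy_lexicon))
# → [('The', 'DET'), ('cat', 'NOUN'), ('runs', 'VERB'), ('fast', 'UNK')]
```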
# + [markdown] id="vBXpHP_Om9kx"
# Use existing codes
#
# Example codes (Do not use without adaptation):
#
# + id="1G2j2SKqnH-O"
# https://github.com/amanjeetsahu/Natural-Language-Processing-Specialization/blob/master/Natural%20Language%20Processing%20with%20Probabilistic%20Models/Week%203/C2_W3_Assignment_Solution.ipynb
# https://github.com/amanjeetsahu/Natural-Language-Processing-Specialization/blob/master/Natural%20Language%20Processing%20with%20Attention%20Models/Week%201/C4_W1_Assignment_Solution.ipynb
# https://github.com/amanjeetsahu/Natural-Language-Processing-Specialization/blob/master/Natural%20Language%20Processing%20with%20Classification%20and%20Vector%20Spaces/Week%202/C1_W2_Assignment_Solution.ipynb
# https://github.com/amanjeetsahu/Natural-Language-Processing-Specialization/blob/master/Natural%20Language%20Processing%20with%20Probabilistic%20Models/Week%203/C2_W3_Assignment_Solution.ipynb
| NIU-NLU dummy codes/6_Grammar_Semantics_dummy_code (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
# +
df_data = pd.read_csv('Data_Science_Topics_Survey.csv')
df_data.drop(['Timestamp'], axis = 1, inplace = True)
df_data.rename(columns={
    "What's your level of interest for the following areas of Data Science? [Data Visualization]": 'Data Visualization',
    "What's your level of interest for the following areas of Data Science? [Machine Learning]": 'Machine Learning',
    "What's your level of interest for the following areas of Data Science? [Data Analysis / Statistics]": 'Data Analysis / Statistics',
    "What's your level of interest for the following areas of Data Science? [Big Data (Spark / Hadoop)]": 'Big Data (Spark / Hadoop)',
    "What's your level of interest for the following areas of Data Science? [Data Journalism]": 'Data Journalism',
    "What's your level of interest for the following areas of Data Science? [Deep Learning]": 'Deep Learning',
}, inplace=True)
df_data.head()
# +
# loop returns all the column names
columnvalue =[]
for col in df_data:
columnvalue.append(col)
columnvalue
# -
dt_f = df_data.apply(pd.Series.value_counts).transpose()
dt_f
dt_f.sort_index(axis = 1, level = 'Very interested', inplace = True, ascending = False)
dt_f
dt_f.sort_values(by='Very interested', ascending=False, inplace = True)
dt_f
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
dt_f
# +
ax = dt_f.plot(kind='bar',
figsize = (20, 8),
width = 0.8,
color = ('#5cb85c', '#5bc0de', '#d9534f'),
linewidth=0,
edgecolor='white')
tt = "Percentage of Respondents' Interest in Data Science Areas"
ax.set_title(tt, fontsize=16)
ax.set_xticklabels(['Data Analysis / Statistics', 'Machine Learning', 'Data Visualization', 'Big Data (Spark / Hadoop)', 'Deep Learning', 'Data Journalism'], fontsize=11)
plt.legend(fontsize = 14)
plt.show()
# +
ax = dt_f.plot(kind='bar',
figsize = (20, 8),
width = 0.8,
color = ('#5cb85c', '#5bc0de', '#d9534f'),
linewidth=0,
edgecolor='white')
tt = "Percentage of Respondents' Interest in Data Science Areas"
ax.set_title(tt, fontsize=16)
ax.set_xticklabels(['Data Analysis / Statistics', 'Machine Learning', 'Data Visualization', 'Big Data (Spark / Hadoop)', 'Deep Learning', 'Data Journalism'], fontsize=11)
plt.legend(fontsize = 14)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.axes.get_yaxis().set_visible(False)
totals = []
for i in ax.patches:
totals.append(i.get_height())
tem = []
for i in range(0, 6):
tem.append(totals[i] + totals[i + 6] + totals[i + 12])
j = 0
for i in ax.patches:
if j == 6:
j = 0
t = round((i.get_height() / tem[j])*100, 2)
ax.text(i.get_x() + 0.03, i.get_height() + 5, str(t), fontsize = 14, color='black')
j += 1
# -
df_crime = pd.read_csv('Police_Department_Incidents_-_Previous_Year__2016_.csv')
df_crime.head()
df_crime.drop(['Category', 'Descript', 'DayOfWeek', 'Date', 'Time', 'Resolution', 'Address', 'X', 'Y', 'Location', 'PdId'], axis = 1, inplace = True)
df_crime.head()
df_crime.rename(columns={'PdDistrict':'Neighborhoods'}, inplace = True)
df_crime
df_crime.drop(['IncidntNum'], axis = 1, inplace = True)
df_crime
df_crime['Neighborhoods'].value_counts()
df_crime_added = df_crime['Neighborhoods'].value_counts()
df_crime_added
# +
columnvalue =[]
for col in df_crime_added:
columnvalue.append(col)
columnvalue
# -
df_test = pd.DataFrame({'Neighborhoods': ['SOUTHERN','NORTHERN','MISSION','CENTRAL','BAYVIEW','INGLESIDE','TARAVAL',
'TENDERLOIN','RICHMOND','PARK'],
'Count': [28445, 20100, 19503, 17666, 14303, 11594, 11325, 9942, 8922, 8699]})
df_test
df_test.sort_index(axis = 1, level = 'Neighborhoods', inplace = True, ascending = False)
df_test
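# As an aside, the hand-typed `df_test` above can be derived directly from a `value_counts()` result instead of copying numbers by hand. A sketch with a few hypothetical counts (the real code would start from `df_crime['Neighborhoods'].value_counts()`):

```python
import pandas as pd

# Hypothetical counts standing in for df_crime['Neighborhoods'].value_counts().
counts = pd.Series({'SOUTHERN': 28445, 'NORTHERN': 20100, 'MISSION': 19503})
# Name the index, then turn it into a two-column dataframe.
df_auto = counts.rename_axis('Neighborhoods').reset_index(name='Count')
print(df_auto)
```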
import seaborn as sns
import folium
# +
world_geo = r'san-francisco.geojson'
threshold_scale = np.linspace(df_test['Count'].min(), df_test['Count'].max(), 6, dtype=int)
# threshold_scale is computed above but not passed below, so we let Folium determine the scale.
world_map = folium.Map(location=[37.77,-122.42], zoom_start = 12)
world_map.choropleth(
geo_data = world_geo,
data = df_test,
columns=['Neighborhoods', 'Count'],
key_on='feature.properties.DISTRICT',
fill_color = 'YlOrRd',
fill_opacity=0.7,
line_opacity=0.2,
legend_name='Crime Rate in San Francisco',
reset=True
)
world_map
# -
| Coursera/Data Visualization with Python-IBM/Week-3/Assignment/Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: MTC
# language: python
# name: python3
# ---
# + id="G3aa2d_zuSjt"
import os
import urllib.request
import zipfile
import pandas as pd
import numpy as np
import scipy
import matplotlib.pyplot as plt
import scipy.sparse
from joblib import Parallel, delayed
from enum import Enum
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import FeatureHasher
plt.style.use('ggplot')
# + [markdown] id="OWC5yFQ-0BIB"
# ## Download Data
# + id="mxHgyoENubTF"
def download_dataset():
urllib.request.urlretrieve(
"https://s3-eu-west-1.amazonaws.com/attribution-dataset/criteo_attribution_dataset.zip",
"criteo_attribution_dataset.zip"
)
with zipfile.ZipFile("criteo_attribution_dataset.zip", "r") as zip_ref:
zip_ref.extractall("criteo_attribution_dataset")
# + id="L0DH4wUrvQQV"
dataset_path = 'criteo_attribution_dataset/criteo_attribution_dataset.tsv.gz'
if not os.path.exists(dataset_path):
download_dataset()
# + [markdown] id="ssG5zDpb0FqL"
# ## Preprocessing
# + id="HLvBtGE50MEV"
df = pd.read_csv(dataset_path, sep='\t', compression="gzip")
# + colab={"base_uri": "https://localhost:8080/", "height": 439} id="XfHUQxxy0zgI" outputId="83e53261-569d-4f8e-<PASSWORD>"
# On Google Colab, use a smaller sample of the dataset (e.g. debug_sample = 1e-2)
debug_sample = 1.0
uid_and_salt = df['uid'].astype(str) + 'hash_salt_for_sampling'
hashed_uid_and_salt = pd.util.hash_pandas_object(uid_and_salt, index=False)
random_column_based_on_uid = hashed_uid_and_salt / np.iinfo(np.uint64).max
debug_df = df[random_column_based_on_uid < debug_sample]
debug_df
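The cell above derives a deterministic pseudo-random value per `uid` by hashing the id concatenated with a salt, so the same rows are kept on every run. A minimal self-contained sketch of the same technique (the toy frame, salt, and sampling rate are illustrative):

```python
import numpy as np
import pandas as pd

# Hypothetical toy frame standing in for the full dataset.
toy = pd.DataFrame({'uid': [1, 2, 3, 4, 1]})

def deterministic_sample_mask(frame, rate, salt='hash_salt_for_sampling'):
    """Map each uid to a stable pseudo-uniform value in [0, 1] and keep rows below `rate`."""
    hashed = pd.util.hash_pandas_object(frame['uid'].astype(str) + salt, index=False)
    return (hashed / np.iinfo(np.uint64).max) < rate

mask_a = deterministic_sample_mask(toy, 0.5)
mask_b = deterministic_sample_mask(toy, 0.5)
# The mask is reproducible run-to-run, and a given uid always falls on the same side,
# so all events of a sampled user stay together.
assert mask_a.equals(mask_b)
assert bool(mask_a.iloc[0]) == bool(mask_a.iloc[4])  # same uid -> same decision
```

Hashing on `uid` (rather than sampling rows independently) matters here because the attribution logic needs every event of a user, not a random subset of rows.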
# + id="EkNQWmTCWnhg"
def get_conversion_in_time_window(df, time_window=60*60*24):
conversion_timestamps = df.groupby('uid').agg({'conversion_timestamp': lambda x: sorted(list(set(x)))})
conversion_timestamps.rename(columns={'conversion_timestamp': 'conversion_timestamps'}, inplace=True)
def get_next_conversion_timestamp(data):
next_conversion_index = np.searchsorted(data['conversion_timestamps'], data['timestamp'])
if next_conversion_index == len(data['conversion_timestamps']):
return -1
else:
return data['conversion_timestamps'][next_conversion_index]
df_with_ct = df.merge(conversion_timestamps, on='uid', how='outer', validate="many_to_one")
matched_displays_mask = df_with_ct['conversion_timestamp'] > 0
df_with_ct['next_conversion_timestamp'] = np.where(
matched_displays_mask,
df_with_ct['conversion_timestamp'],
df_with_ct.apply(get_next_conversion_timestamp, axis=1)
)
df_with_ct['conversion_in_time_window'] = np.where(
df_with_ct['next_conversion_timestamp'] != -1,
((df_with_ct['next_conversion_timestamp'] - df_with_ct['timestamp']) <= time_window).astype(int),
0
)
original_column_order = list(df.columns) + ['conversion_in_time_window']
return df_with_ct[original_column_order]
# + id="RCFp-bhtWnhg"
debug = True
if debug:
test_df = pd.DataFrame([
{'uid': 1, 'timestamp': 1, 'conversion_timestamp': -1},
{'uid': 1, 'timestamp': 2, 'conversion_timestamp': -1},
{'uid': 1, 'timestamp': 3, 'conversion_timestamp': 6},
{'uid': 1, 'timestamp': 4, 'conversion_timestamp': 6},
{'uid': 1, 'timestamp': 7, 'conversion_timestamp': -1},
{'uid': 1, 'timestamp': 8, 'conversion_timestamp': 10},
{'uid': 1, 'timestamp': 9, 'conversion_timestamp': 10},
{'uid': 1, 'timestamp': 11, 'conversion_timestamp': -1},
{'uid': 1, 'timestamp': 12, 'conversion_timestamp': -1},
{'uid': 2, 'timestamp': 1, 'conversion_timestamp': -1},
{'uid': 2, 'timestamp': 2, 'conversion_timestamp': -1},
# Edge case: sometimes (rarely) the conversion is not mapped to the next one
{'uid': 3, 'timestamp': 1, 'conversion_timestamp': -1},
{'uid': 3, 'timestamp': 2, 'conversion_timestamp': 7},
{'uid': 3, 'timestamp': 3, 'conversion_timestamp': 4},
{'uid': 3, 'timestamp': 5, 'conversion_timestamp': 7},
{'uid': 3, 'timestamp': 6, 'conversion_timestamp': 7},
])
split_test_df = pd.DataFrame([
{'uid': 1, 'timestamp': 1, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 1, 'timestamp': 2, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 1, 'timestamp': 3, 'conversion_timestamp': 6, 'conversion_in_time_window': 0},
{'uid': 1, 'timestamp': 4, 'conversion_timestamp': 6, 'conversion_in_time_window': 1},
{'uid': 1, 'timestamp': 7, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 1, 'timestamp': 8, 'conversion_timestamp': 10, 'conversion_in_time_window': 1},
{'uid': 1, 'timestamp': 9, 'conversion_timestamp': 10, 'conversion_in_time_window': 1},
{'uid': 1, 'timestamp': 11, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 1, 'timestamp': 12, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 2, 'timestamp': 1, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 2, 'timestamp': 2, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
# Edge case
{'uid': 3, 'timestamp': 1, 'conversion_timestamp': -1, 'conversion_in_time_window': 0},
{'uid': 3, 'timestamp': 2, 'conversion_timestamp': 7, 'conversion_in_time_window': 0},
{'uid': 3, 'timestamp': 3, 'conversion_timestamp': 4, 'conversion_in_time_window': 1},
{'uid': 3, 'timestamp': 5, 'conversion_timestamp': 7, 'conversion_in_time_window': 1},
{'uid': 3, 'timestamp': 6, 'conversion_timestamp': 7, 'conversion_in_time_window': 1},
])
assert split_test_df.equals(get_conversion_in_time_window(test_df, time_window=2))
# + id="3VBljWPSWnhh"
def get_nb_clicks(df, time_window=60*60*24):
click_timestamps = df[df['click'] == 1].groupby('uid').agg({'timestamp': lambda x: sorted(list(set(x)))})
click_timestamps.rename(columns={'timestamp': 'click_timestamps'}, inplace=True)
def get_nb_clicks(data):
if isinstance(data['click_timestamps'], list) and len(data['click_timestamps']) > 0:
return np.searchsorted(data['click_timestamps'], data['timestamp'])
else:
return 0
df_with_ct = df.merge(click_timestamps, on='uid', how='outer', validate="many_to_one")
df_with_ct['nb_clicks'] = df_with_ct.apply(get_nb_clicks, axis=1)
original_column_order = list(df.columns) + ['nb_clicks']
return df_with_ct[original_column_order]
# + id="zHerHJ1fWnhh"
debug = True
if debug:
test_df = pd.DataFrame([
{'uid': 1, 'timestamp': 1, 'click': 0},
{'uid': 1, 'timestamp': 2, 'click': 0},
{'uid': 1, 'timestamp': 3, 'click': 1},
{'uid': 1, 'timestamp': 4, 'click': 1},
{'uid': 1, 'timestamp': 7, 'click': 0},
{'uid': 1, 'timestamp': 8, 'click': 1},
{'uid': 1, 'timestamp': 9, 'click': 1},
{'uid': 1, 'timestamp': 11, 'click': 0},
{'uid': 1, 'timestamp': 12, 'click': 0},
{'uid': 2, 'timestamp': 1, 'click': 0},
{'uid': 2, 'timestamp': 2, 'click': 0},
])
nb_clicks_test_df = pd.DataFrame([
{'uid': 1, 'timestamp': 1, 'click': 0, "nb_clicks": 0},
{'uid': 1, 'timestamp': 2, 'click': 0, "nb_clicks": 0},
{'uid': 1, 'timestamp': 3, 'click': 1, "nb_clicks": 0},
{'uid': 1, 'timestamp': 4, 'click': 1, "nb_clicks": 1},
{'uid': 1, 'timestamp': 7, 'click': 0, "nb_clicks": 2},
{'uid': 1, 'timestamp': 8, 'click': 1, "nb_clicks": 2},
{'uid': 1, 'timestamp': 9, 'click': 1, "nb_clicks": 3},
{'uid': 1, 'timestamp': 11, 'click': 0, "nb_clicks": 4},
{'uid': 1, 'timestamp': 12, 'click': 0, "nb_clicks": 4},
{'uid': 2, 'timestamp': 1, 'click': 0, "nb_clicks": 0},
{'uid': 2, 'timestamp': 2, 'click': 0, "nb_clicks": 0},
])
assert nb_clicks_test_df.equals(get_nb_clicks(test_df))
# + id="2JRVAkKQiB5J"
def preprocess_dataframe(input_df, refresh=False):
df_identifier = '_'.join(map(str, input_df.shape))
cache_directory = 'cache_ifa_lr'
cache_path = os.path.join(cache_directory, df_identifier, 'preprocess.pkl')
if os.path.exists(cache_path) and not refresh:
print('Load from', cache_path)
df = pd.read_pickle(cache_path)
else:
df = input_df.copy()
df['uid'] = df['uid'].astype(str) + '_' + df['campaign'].astype(str)
df['day'] = np.floor(df['timestamp'] / 86400.).astype(int)
loground_bucketize = True
if loground_bucketize:
df['time_since_last_click_bucketized'] = np.where(
df['time_since_last_click'] > 0,
np.log(1 + df['time_since_last_click'] / 60).astype(int),
df['time_since_last_click'],
)
else:
packed_hours = 4
df['time_since_last_click_bucketized'] = (df['time_since_last_click'] / (packed_hours * 3600)).astype(int).values
df['time_since_last_click_bucketized'] *= packed_hours
df['gap_click_sale'] = -1
df.loc[df['conversion'] == 1, 'gap_click_sale'] = df['conversion_timestamp'] - df['timestamp']
df['last_click'] = df['attribution'] * (df['click_pos'] == df['click_nb'] - 1).astype(int)
df = get_conversion_in_time_window(df)
df = get_nb_clicks(df)
os.makedirs(os.path.dirname(cache_path), exist_ok=True)
df.to_pickle(cache_path)
return df
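`preprocess_dataframe` caches its output keyed on the input frame's shape. A standalone sketch of that cache pattern, with a caveat worth knowing: shape alone is a weak key, so two different transforms of a same-shaped frame collide (all names here are illustrative):

```python
import os
import tempfile
import pandas as pd

def cached_transform(frame, transform, cache_dir, refresh=False):
    # Cache key is the frame's shape only, mirroring preprocess_dataframe above.
    key = '_'.join(map(str, frame.shape))
    path = os.path.join(cache_dir, key, 'result.pkl')
    if os.path.exists(path) and not refresh:
        return pd.read_pickle(path)
    result = transform(frame)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    result.to_pickle(path)
    return result

cache_dir = tempfile.mkdtemp()
toy = pd.DataFrame({'x': [1, 2, 3]})
doubled = cached_transform(toy, lambda f: f * 2, cache_dir)  # computed, then written to cache
tripled = cached_transform(toy, lambda f: f * 3, cache_dir)  # same shape -> stale cache hit
# The second call silently returns the doubled frame: shape is a weak cache key.
assert doubled.equals(tripled)
```

This is why the notebook exposes a `refresh` flag: pass `refresh=True` whenever the preprocessing logic changes, since the cache key cannot detect that by itself.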
# + colab={"base_uri": "https://localhost:8080/", "height": 119} id="M4dSQLac7VEb" outputId="57ab1e3b-ace0-46db-d37b-281049414150"
# %%time
enriched_df = preprocess_dataframe(debug_df, refresh=False)
with pd.option_context('display.max_columns', 1000):
display(enriched_df.head())
# + id="dhixWvRI0Skg"
FEATURES = ['campaign', 'cat1', 'cat2', 'cat3', 'cat4', 'cat5', 'cat6', 'cat7', 'cat8', 'cat9', 'time_since_last_click_bucketized', 'nb_clicks']
INFOS = ['cost', 'cpo', 'time_since_last_click']
INFOS += ['last_click', 'first_click', 'uniform']
# + [markdown] id="7dZmKPdS7tQw"
# ## Learning
# + [markdown] id="zNKnb_HBi-Kd"
# ### Last click model
# + id="OQa6DDBFKni1"
class SplitBy(Enum):
UID = 1
DATE = 2
def split_train_test_mask(split_by, df, ratio):
"""We split the dataset into train and test parts.
We can either split it by day (learn on the past to predict the future)
or by uid (learn on a part of the population and test on the other part)
"""
if split_by == SplitBy.UID:
uid_and_salt = df['uid'].astype(str) + 'hash_salt_for_train_test_split'
hashed_uid_and_salt = pd.util.hash_pandas_object(uid_and_salt, index=False)
random_column_based_on_uid = hashed_uid_and_salt / np.iinfo(np.uint64).max
is_training = random_column_based_on_uid < ratio
if split_by == SplitBy.DATE:
split_day = max(1, ratio * df['day'].max())
is_training = df['day'] < split_day
return is_training
# + id="YojriMQ77rud" outputId="94a7e745-2cbc-4b3c-f001-b99078bfce30"
# %%time
cache_directory = 'cache_ifa_lr'
features_file = 'features.npz'
def features_to_list_of_strings(row):
return [f'{feature}_{row[feature]}' for feature in row.index]
def get_features(df, features_columns, hash_space=2**13):
df_identifier = '_'.join(map(str, enriched_df.shape))
label_features_identifier = f'{"_".join(features_columns)}'
cache_path = os.path.join(cache_directory, df_identifier, label_features_identifier, str(hash_space))
features_cache_path = os.path.join(cache_path, features_file)
print('features_cache_path',features_cache_path)
if os.path.exists(features_cache_path):
features = scipy.sparse.load_npz(features_cache_path)
else:
raw_features = df[features_columns]
features_as_list_of_strings = raw_features.apply(features_to_list_of_strings, axis=1)
hasher = FeatureHasher(n_features=hash_space, input_type='string', alternate_sign=False)
features = hasher.fit_transform(features_as_list_of_strings)
os.makedirs(cache_path, exist_ok=True)
scipy.sparse.save_npz(features_cache_path, features)
return features
features = get_features(enriched_df, FEATURES, hash_space=2**16)
is_training = split_train_test_mask(SplitBy.UID, enriched_df, 0.8)
# + id="pgifAQ47Wnhk"
class LastClickModel():
def __init__(self):
self.last_click_model = LogisticRegression(max_iter=1000)
def fit(self, enriched_df, features):
is_clicked = enriched_df['click'] == 1
last_click_labels = enriched_df['last_click']
last_click_given_click_labels = last_click_labels[is_clicked]
features_given_click = features[is_clicked]
self.last_click_model = LogisticRegression(max_iter=1000)
self.last_click_model.fit(features_given_click, last_click_given_click_labels)
def predict_proba(self, features):
return self.last_click_model.predict_proba(features)[:, 1]
# + id="nuscPWbAWnhk" outputId="46cbc6da-c9ca-454f-f96f-6a08890fa833"
# %%time
last_click_model = LastClickModel()
last_click_model.fit(enriched_df[is_training], features[is_training])
# + [markdown] id="2khKr6RnXq5E"
# ### Incrementality factor model
# + id="wjf3b0Hg-K38"
class IFAModel():
def __init__(self):
self.sales_given_click_model = LogisticRegression(max_iter=1000)
self.sales_given_no_click_model = LogisticRegression(max_iter=1000)
def fit(self, enriched_df, features):
is_clicked = enriched_df['click'] == 1
sales_labels = enriched_df['conversion_in_time_window']
labels_given_click, features_given_click = sales_labels[is_clicked], features[is_clicked]
labels_given_no_click, features_given_no_click = sales_labels[~is_clicked], features[~is_clicked]
self.sales_given_click_model.fit(features_given_click, labels_given_click)
self.sales_given_no_click_model.fit(features_given_no_click, labels_given_no_click)
def predict_ifa(self, features, epsilon=1e-2):
p_sales_given_click = self.sales_given_click_model.predict_proba(features)[:, 1]
p_sales_given_no_click = self.sales_given_no_click_model.predict_proba(features)[:, 1]
ifa = 1 - p_sales_given_no_click / p_sales_given_click
return np.maximum(np.minimum(ifa, 1 - epsilon), epsilon)
def predict_proba(self, features, epsilon=1e-2):
p_sales_given_click = self.sales_given_click_model.predict_proba(features)[:, 1]
return self.predict_ifa(features, epsilon=epsilon) * p_sales_given_click
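`predict_ifa` computes the incrementality factor `1 - P(S|no click) / P(S|click)` and clips it into `[epsilon, 1 - epsilon]`. A toy numeric check of that formula (the probabilities below are illustrative, not fitted values):

```python
import numpy as np

def ifa(p_sale_given_click, p_sale_given_no_click, epsilon=1e-2):
    # Same clipping as IFAModel.predict_ifa: keep the factor inside [epsilon, 1 - epsilon].
    raw = 1 - p_sale_given_no_click / p_sale_given_click
    return np.clip(raw, epsilon, 1 - epsilon)

# Illustrative probabilities: a click doubles the conversion rate -> factor 0.5.
print(ifa(np.array([0.04]), np.array([0.02])))  # -> [0.5]
# A click that adds nothing is clipped to epsilon rather than reaching 0.
print(ifa(np.array([0.02]), np.array([0.02])))  # -> [0.01]
```

The clipping keeps the factor strictly inside (0, 1), which protects the downstream product `ifa * P(S|C)` from degenerate zero or negative predictions.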
# + id="MztyIOcgWnhl" outputId="b05d854a-a3a9-4776-d01c-a84063b0e7aa"
# %%time
ifa_model = IFAModel()
ifa_model.fit(enriched_df[is_training], features[is_training])
# + id="lb_3saWDWnhl"
packed_hours = 6
hour_since_last_click = (enriched_df['time_since_last_click'][is_training] / (packed_hours * 3600)).astype(int).values
hour_since_last_click *= packed_hours
temp_df = pd.DataFrame()
temp_df['ifa'] = ifa_model.predict_ifa(features[is_training])
temp_df['hour_since_last_click'] = hour_since_last_click
temp_df = temp_df[temp_df['hour_since_last_click'] > 0]
tslc_mean_scores_df = temp_df.groupby('hour_since_last_click').mean()
# + id="KDINnuLCWnhl"
temp_df = pd.DataFrame()
temp_df['ifa'] = ifa_model.predict_ifa(features[is_training])
temp_df['nb_clicks'] = enriched_df['nb_clicks'][is_training].values
nb_clicks_mean_scores_df = temp_df.groupby('nb_clicks').mean()
# + id="HSbTINHqWnhm"
cache_ifa_results_dir = 'cache_ifa_results_dir'
os.makedirs(cache_ifa_results_dir, exist_ok=True)
tslc_mean_scores_file_path = os.path.join(cache_ifa_results_dir, f"tslc_mean_scores_file_{debug_sample}.csv")
nb_clicks_mean_scores_file_path = os.path.join(cache_ifa_results_dir, f"nb_clicks_mean_scores_file_{debug_sample}.csv")
# + id="fsgmTl8mWnhm"
tslc_mean_scores_df.to_csv(tslc_mean_scores_file_path)
nb_clicks_mean_scores_df.to_csv(nb_clicks_mean_scores_file_path)
# + id="nA7aNP1WWnhm" outputId="b97b06f1-c6dd-417a-8ce9-11a6f2358fdc"
tslc_mean_scores_df = pd.read_csv(tslc_mean_scores_file_path, index_col=0)
nb_clicks_mean_scores_df = pd.read_csv(nb_clicks_mean_scores_file_path, index_col=0)
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
ifa_label = r'$1 - \frac{\mathbb{P}(S|\bar{C})}{\mathbb{P}(S|C)}$'
ax_tslc = axes[0]
n_hours = 24000
keep_x_tslc = int(n_hours / packed_hours)
ax_tslc.plot(tslc_mean_scores_df.index[:keep_x_tslc], tslc_mean_scores_df['ifa'][:keep_x_tslc], label=ifa_label)
ax_tslc.set_xlabel('Hours since last click')
ax_tslc.legend(fontsize=14)
lbda_pcb = 6.25e-6
x_in_seconds = tslc_mean_scores_df.index[:keep_x_tslc].values * 3600
pcb_factor = 1 - np.exp(- lbda_pcb * x_in_seconds)
#ax_tslc.plot(tslc_mean_scores_df.index[:keep_x_tslc],pcb_factor)
ax_nclicks = axes[1]
keep_x_nclicks = 21
ax_nclicks.plot(nb_clicks_mean_scores_df.index[:keep_x_nclicks], nb_clicks_mean_scores_df['ifa'][:keep_x_nclicks], label=ifa_label)
ax_nclicks.set_xlabel('Number of clicks before display')
ax_nclicks.legend(fontsize=14)
plt.savefig('ifa_value_sanity_checks.pdf', bbox_inches='tight')
# + id="Gvc1yzeWWnhm" outputId="7785048a-bac3-47b4-c489-f70ab8f24305"
# %%time
click_model = LogisticRegression(max_iter=1000)
click_model.fit(features[is_training], enriched_df['click'][is_training])
# + id="XGkl_Q2MWnhm"
def log_likelihood(label, predictor):
return label * np.log(predictor) + (1 - label) * np.log(1 - predictor)
def revert_the_label_likelihood(click_model_or_p_click, enriched_df, features, evaluated_model):
click_labels = enriched_df['click']
sales_labels = enriched_df['conversion_in_time_window']
if hasattr(click_model_or_p_click, 'predict_proba'):
p_C = click_model_or_p_click.predict_proba(features)[:, 1]
else:
p_C = click_model_or_p_click
y_predictor = evaluated_model.predict_proba(features)
weighted_llh_c_sales = click_labels / p_C * (log_likelihood(sales_labels, y_predictor))
penalized_unclicked_sales = (1 - click_labels) * sales_labels / (1 - p_C) * (np.log((1 - y_predictor) / y_predictor))
return np.mean(weighted_llh_c_sales + penalized_unclicked_sales)
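`log_likelihood` above is the standard per-example Bernoulli log-likelihood, which `revert_the_label_likelihood` then reweights by the click propensity. A quick standalone sanity check of that per-example term (toy labels and predictions):

```python
import numpy as np

def log_likelihood(label, predictor):
    # Bernoulli log-likelihood: label * log(p) + (1 - label) * log(1 - p)
    return label * np.log(predictor) + (1 - label) * np.log(1 - predictor)

labels = np.array([1, 0])
good = np.array([0.99, 0.01])   # confident and correct
bad = np.array([0.01, 0.99])    # confident and wrong
print(log_likelihood(labels, good).mean())  # close to 0
print(log_likelihood(labels, bad).mean())   # strongly negative
```

Note that the function diverges for predictions of exactly 0 or 1, which is another reason the IFA predictions are clipped away from the boundaries.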
# + id="9yx2axWpWnhn"
cache_ifa_results_dir = 'cache_ifa_results_dir'
os.makedirs(cache_ifa_results_dir, exist_ok=True)
llh_results_file_path = os.path.join(cache_ifa_results_dir, f"llh_results_file_{debug_sample}.csv")
# + id="AA9zo8tCWnhn" outputId="4f58b30b-a9b0-4604-fa55-6148fa7343cc"
# %%time
def compute_likelihoods(train_pclick, test_pclick, hash_space):
features = get_features(enriched_df, FEATURES, hash_space=hash_space)
is_training = split_train_test_mask(SplitBy.UID, enriched_df, 0.8)
train_features, train_enriched_df = features[is_training], enriched_df[is_training]
test_features, test_enriched_df = features[~is_training], enriched_df[~is_training]
last_click_model = LastClickModel()
last_click_model.fit(train_enriched_df, train_features)
ifa_model = IFAModel()
ifa_model.fit(train_enriched_df, train_features)
return (
revert_the_label_likelihood(train_pclick, train_enriched_df, train_features, ifa_model),
revert_the_label_likelihood(test_pclick, test_enriched_df, test_features, ifa_model),
revert_the_label_likelihood(train_pclick, train_enriched_df, train_features, last_click_model),
revert_the_label_likelihood(test_pclick, test_enriched_df, test_features, last_click_model),
)
hash_spaces = [2**space for space in range(10, 17)]
n_jobs = min(10, len(hash_spaces))
train_pclick = click_model.predict_proba(features[is_training])[:, 1]
test_pclick = click_model.predict_proba(features[~is_training])[:, 1]
#parallel_result = []
#for hash_space in hash_spaces:
# parallel_result += [compute_likelihoods(train_pclick, test_pclick, hash_space)]
parallel_result = Parallel(n_jobs=n_jobs)(
delayed(compute_likelihoods)(train_pclick, test_pclick, hash_space) for hash_space in hash_spaces)
ifa_train_llh, ifa_test_llh, lc_train_llh, lc_test_llh = zip(*parallel_result)
llh_df = pd.DataFrame({'hash_spaces': hash_spaces, 'ifa_train_llh': ifa_train_llh, 'ifa_test_llh': ifa_test_llh, 'lc_train_llh': lc_train_llh, 'lc_test_llh': lc_test_llh})
llh_df.to_csv(llh_results_file_path)
# + id="hlFzK67hWnhn" outputId="f62d434b-3bfb-4bae-b485-c4c4db94505f"
llh_df = pd.read_csv(llh_results_file_path)
fig, axes = plt.subplots(1, 2, figsize=(8, 4), sharey=False)
axes[0].plot(llh_df['hash_spaces'], llh_df['ifa_train_llh'], label='incremental bidder', marker='o')
axes[0].plot(llh_df['hash_spaces'], llh_df['lc_train_llh'], label='greedy bidder', marker='x')
axes[0].set_title('Incremental likelihood on train set\n', fontsize=13)
axes[0].set_xlabel('size of features space')
axes[0].set_ylim([None, axes[0].get_ylim()[1] + (axes[0].get_ylim()[1] - axes[0].get_ylim()[0]) * 0.3])
axes[0].legend()
axes[1].plot(llh_df['hash_spaces'], llh_df['ifa_test_llh'], label='incremental bidder', marker='o')
axes[1].plot(llh_df['hash_spaces'], llh_df['lc_test_llh'], label='greedy bidder', marker='x')
axes[1].set_title('Incremental likelihood on test set\n', fontsize=13)
axes[1].set_xlabel('size of features space')
axes[1].legend()
axes[1].set_ylim([None, axes[1].get_ylim()[1] + (axes[1].get_ylim()[1] - axes[1].get_ylim()[0]) * 0.3])
fig.tight_layout()
plt.savefig('incremental_metrics.pdf')
| incrementality_factor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# As the name suggests, this concept is about inheriting properties from an existing entity, which increases code reusability. Single, multiple, and multi-level inheritance are a few of the many forms supported by Python.
class Person:
def __init__(self):
pass
# Single level inheritance
class Employee(Person):
def __init__(self):
pass
# In multiple inheritance, base classes are searched from left to right as listed inside the parentheses, following Python's Method Resolution Order (MRO) algorithm.
# Multi-level inheritance
class Manager(Employee):
def __init__(self):
pass
# Multiple Inheritance (bases ordered subclass-first; (Person, Employee) would raise a
# TypeError because the MRO cannot place Person before its own subclass Employee)
class Entrepreneur(Employee, Person):
def __init__(self):
pass
# -
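The left-to-right lookup mentioned above is what Python's C3 linearization produces. A small self-contained sketch showing the resulting MRO (the class and method names are illustrative):

```python
class Person:
    def greet(self):
        return "person"

class Employee(Person):
    def greet(self):
        return "employee"

# Bases are searched left to right, so Employee.greet shadows Person.greet.
class Entrepreneur(Employee, Person):
    pass

print(Entrepreneur().greet())                      # -> employee
print([c.__name__ for c in Entrepreneur.__mro__])  # Entrepreneur, Employee, Person, object
```

Reversing the bases to `(Person, Employee)` raises `TypeError: Cannot create a consistent method resolution order`, because C3 cannot place `Person` before its own subclass.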
| OOP_ Inheritance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="KiFAYj0Jhm8_"
# # Sample Notebook for exploring gnomAD in BigQuery
# This notebook contains sample queries to explore the gnomAD dataset which is hosted through the Google Cloud Public Datasets Program.
# + [markdown] id="pW09tTZBh9zq"
# ## Setup and Authentication
#
# If you just want to look at sample results, you can scroll down to see the output of the existing queries without having to run anything. If you would like to re-run the queries or make changes, you will need to authenticate as your user and set the Google Cloud project in which to run the analysis.
# + id="QG8bvNuTgRji"
# Import libraries
import numpy as np
import os
# Imports for using and authenticating BigQuery
from google.colab import auth
# + [markdown] id="54BmRpw3imuP"
# ### User Authentication
# Before running any queries in BigQuery, you first need to authenticate yourself by running the following cell. If you are running it for the first time, it will ask you to follow a link to log in with your Google account and accept the data access requests to your profile. Once this is done, it will generate a verification code, which you should paste into the cell below and press Enter. Use a Google account that you can log in to and that has access to run BigQuery jobs in the Google Cloud project specified in the next step.
# + id="VJGkLDCDgnuc"
auth.authenticate_user()
# + [markdown] id="FoYinYtci5Iv"
# ### Set Google Cloud Project
# To run queries in BigQuery, you need to specify the Google Cloud project that will be used. The queries below report the number of bytes billed by each query. The first 1 TB of query data processed in a project per month is free. For more details, see the [BigQuery Pricing](https://cloud.google.com/bigquery/pricing) page.
#
# To find your Project ID, go to the [Project Settings page](https://console.cloud.google.com/iam-admin/settings) in the Google [Cloud Console](https://console.cloud.google.com/). You can select the project you want using the drop-down menu at the top of the page.
# + id="nkUF1y9IhDla"
# Replace project_id with your Google Cloud Project ID.
os.environ["GOOGLE_CLOUD_PROJECT"]='project-id'
# + [markdown] id="sS2NHDSDmqDJ"
# # gnomAD Queries Type 1: Explore a particular genomic region
# This category includes queries that extract information from a region of the genome, for example a gene. Because gnomAD BigQuery tables use [integer range partitioning](https://cloud.google.com/bigquery/docs/creating-integer-range-partitions), they are optimized for this type of query.
#
# The main requirement to use this feature is to limit queries to a particular region by adding these conditions to the `WHERE` clause:
#
# `WHERE start_position >= X AND start_position <= Y`
#
# Where `[X, Y]` is the region of interest.
#
# You can find values of `X` and `Y` by referring to an external database. For example, the following table summarizes the start and end positions for four genes on chromosome 17, extracted from an external resource:
#
# | Gene | X | Y | Source |
# |:-: |- |- |- |
# | BRCA1 | 43044295 | 43125364 | [link](https://ghr.nlm.nih.gov/gene/BRCA1#location) |
# | COL1A1 | 50184096 | 50201649 | [link](https://ghr.nlm.nih.gov/gene/COL1A1#location) |
# | TP53 | 31094927 | 31377677 | [link](https://ghr.nlm.nih.gov/gene/TP53#location) |
# | NF1 | 56593699 | 56595611 | [link](https://ghr.nlm.nih.gov/gene/NF1#location) |
#
# Alternatively, you can use the following query, which extracts the same information directly from the gnomAD tables.
#
# The following examples use `BRCA1` on `chr17`. You can enter your own gene of interest and chromosome to modify all the following queries. If your query returns `NaN`, you may have specified the wrong chromosome, which would query the wrong table.
#
# Also you can choose which version of the gnomAD dataset you'd like to use for all the queries:
# * `v2_1_1_exomes`
# * `v2_1_1_genomes`
# * `v3_genomes`
#
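The region restriction described above can be sketched as a single formatted query string. The table name pattern and the BRCA1 coordinates come from the text above; the counting logic is just an illustration:

```python
# Restrict the scan to a region [X, Y] so BigQuery can prune integer-range partitions.
query_template = """
SELECT COUNT(1) AS num_rows
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}`
WHERE start_position >= {X} AND start_position <= {Y}
"""

# BRCA1 on chr17, using the coordinates listed in the table above.
query = query_template.format(GNOMAD_VER='v3_genomes', CHROM='chr17',
                              X=43044295, Y=43125364)
print(query)
```

Every Type 1 query below follows this shape: pick a versioned per-chromosome table, then bound `start_position` to the gene's `[X, Y]` interval.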
# + id="9kq6xHqSmqDJ" outputId="40da15b2-3381-4cfe-bace-45544e523143" colab={"base_uri": "https://localhost:8080/", "height": 126, "referenced_widgets": ["26b58b97065c4af79b678110ee555bae", "088f2f8177524786aff8fc38fdcee367", "9df30d679d52486f81c68f4397329e47", "51ad6267e2d94b57a28839887f35e17a", "4ad10dc62bfc4b1697479cdccf9b87cb", "<KEY>", "<KEY>", "27ed9d68da4641fd90b9718b3e8c3f53", "23db269e1ee74e2e998c3dba52105105"]}
import ipywidgets as widgets
print("Variables for Region (Type 1) Queries")
gnomad_version_widget_region = widgets.Dropdown(
options=['v2_1_1_exomes', 'v2_1_1_genomes', 'v3_genomes'],
value='v3_genomes',
description='gnomAD version:',
disabled=False,
style={'description_width': 'initial'}
)
display(gnomad_version_widget_region)
chromosome_widget_region = widgets.Dropdown(
options=['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8',
'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15',
'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22',
'chrX', 'chrY'],
value='chr17',
description='Chromosome:',
disabled=False,
style={'description_width': 'initial'}
)
display(chromosome_widget_region)
gene_symbol_widget_region= widgets.Text(
value='BRCA1',
placeholder='gene_symbol',
description='Gene Symbol:',
disabled=False,
style={'description_width': 'initial'}
)
display(gene_symbol_widget_region)
# + id="C7FEUTunI-tP" outputId="e4067f70-b46c-4dbd-ca61-fb4776f87fe8" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Set the variables for the rest of the Type 1 queries based on the values above.
gnomad_version_region=gnomad_version_widget_region.value
chromosome_region=chromosome_widget_region.value
gene_symbol_region=gene_symbol_widget_region.value
print('Running Region (Type 1) queries on gnomAD version: {}, chromosome: {}, gene symbol: {}'.format(
gnomad_version_region,
chromosome_region,
gene_symbol_region
))
if gnomad_version_region.startswith('v3'):
# Variant type (snv, indel, multi-snv, multi-indel, or mixed) is stored under different columns in V2 and V3
variant_type_col = 'variant_type'
extra_columns = ''
else:
variant_type_col = 'alternate_bases.allele_type'
# These vep columns only exist in V2
extra_columns = 'vep.STRAND AS STRAND, vep.Protein_position AS Protein_pos,'
# + id="IJkNyocBmqDQ"
from google.cloud import bigquery
client = bigquery.Client()
def run_query(query):
query_job = client.query(query)
result = query_job.to_dataframe(progress_bar_type='tqdm_notebook')
gb_processed = (query_job.total_bytes_billed / 1024 ** 3)
print('This query processed {} GB of data which is {}% of your 1 TB monthly free quota.'.format(gb_processed, round(gb_processed / 1024 * 100, 4)))
return result
# + id="iQOgAs_imqDS" outputId="0cc97110-064a-439e-e084-22f460853abc" colab={"base_uri": "https://localhost:8080/", "height": 117, "referenced_widgets": ["e71ca7a92d7c48cc854063dbfc53d139", "9d1d9481ff134f5f940fb8fd78d803ed", "d9aa3104a0cc45bca995407067d22985", "33a5b77b89464358a55e3f1cfe96290a", "cc851b862b454d5090360812e4bb0fbb", "d593dac7d2a04899a3d548960a7d0c82", "8574ae994d9940fda4a1b7ac94d3083a", "fed0ea3093454fe98fd2503340bc1c89"]}
query_template = """
SELECT MIN(start_position) AS X, MAX(end_position) AS Y
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table
WHERE EXISTS
(SELECT 1 FROM UNNEST(main_table.alternate_bases) AS alternate_bases
WHERE EXISTS (SELECT 1 from alternate_bases.vep WHERE SYMBOL = '{GENE}'))
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
GENE=gene_symbol_region)
limits = run_query(query)
print(limits)
x = limits.at[0, 'X']
y = limits.at[0, 'Y']
# + [markdown] id="SmC0cJLpmqDV"
# After you have found the `[X, Y]` range for your gene of interest, you can run *Type 1* queries efficiently. Here are a couple of examples:
#
# ### Query 1.1a - Variant Type (BigQuery)
# Find the number of INDELs and SNVs in the region of interest using BigQuery
# + id="lv2IrzzZmqDW" outputId="c3588e1c-927a-406d-ec91-c5a6bf4fbd9d" colab={"base_uri": "https://localhost:8080/", "height": 177, "referenced_widgets": ["56b5ce1740d34102bf15c4e837c3b325", "df1af7ad0ff5476caf8afc8444d0e8b5", "36fda5bc518147bc9ad3be3f64bd4dbf", "e3e2e4821ba7432dba80d0cb6d1a0290", "c22e79a78b7a4e969b08a326f497379c", "3e4fd1c8b5d44d1ba9586e3ba42ceb5e", "a0178c219a104901acdfdeef0dff686b", "4703d62f20574bf493f7594343b5f182"]}
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template = """
SELECT COUNT(1) AS num, variant_type
FROM (
SELECT DISTINCT
start_position,
reference_bases,
alternate_bases.alt,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
)
GROUP BY 2
ORDER BY 1 DESC
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
VAR_TYPE_COL=variant_type_col, X=x, Y=y)
summary = run_query(query)
summary.head()
# + [markdown] id="EEdQo6lHUKWK"
# ### Query 1.1b - Variant Type (Python)
# You can also find the number of INDELs and SNVs in the region of interest by doing the aggregation and count in Python using the dataframe.
# + id="4zQxbRJcUKWQ" outputId="225e9084-5581-4b6c-a8fc-cd24d780afe6" colab={"base_uri": "https://localhost:8080/", "height": 134, "referenced_widgets": ["c5cca218c3844c14831bf0f120572e4e", "e9c58e4ba3ae40c2bf09cbcd1491d4dd", "8ae17bf022484709aee18e2979003cdf", "099f53f0565d457484f4e4e1b48da99d", "90c8a207bfa84555a4a4279779a9a4f7", "9173aa645729438caf94ee8ab69ba813", "f6e33f2870c44b4198484e53731f148b", "ba9603a0904e4a0fa2a511a5c0e6b6e2"]}
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template = """
SELECT DISTINCT
start_position,
reference_bases,
alternate_bases.alt,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
ORDER BY 1,2
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
VAR_TYPE_COL=variant_type_col, X=x, Y=y)
summary_dataframe = run_query(query)
# Count the number of each variant type in Python instead of in BigQuery
print('Number of variants by type:')
for v in summary_dataframe.variant_type.unique():
print('{}: {}'.format(v,
np.count_nonzero(summary_dataframe['variant_type'] == v)))
# + [markdown] id="QR9XdsnfmqDY"
# Instead of aggregating the results in BigQuery to count the number of each variant type, we could return all rows and process them here. The following query adds a few more columns to the previous query.
#
# ### Query 1.2 - Allele Count by Sex
# A query to retrieve all variants in the region of interest along with `AN` and `AC` values split by sex.
#
# * `AN`: Total number of alleles in samples
# * `AC`: Alternate allele count for samples
# * `nhomalt`: The number of individuals that are called homozygous for the alternate allele.
# + id="4JYyaxegmqDZ" outputId="2d5eca11-c0f2-4fff-9f9e-2ed5ed6b212b" colab={"base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": ["6624f43903b245b8a9b721ab2a6c4ea2", "0310b7c1bb6c45828ce11b9431812eec", "a8aa53d84b9e4a3ebb0ca3854e90b4e6", "<KEY>", "5418f53240c74e5ba97a63c5848fd1d9", "8453673113fa483bbd31213bc31d017b", "<KEY>", "e2a5dc9cedc84ae7bcf7539818f7f2f3"]}
query_template = """
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
AN,
AN_male,
AN_female,
alternate_bases.AC AS AC,
alternate_bases.AC_male AS AC_male,
alternate_bases.AC_female AS AC_female,
alternate_bases.nhomalt AS nhomalt,
alternate_bases.nhomalt_male AS nhomalt_male,
alternate_bases.nhomalt_female AS nhomalt_female,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
ORDER BY 1,2
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region, X=x, Y=y)
stats_sex = run_query(query)
stats_sex.head()
# + [markdown] id="IcaAuthSWG0i"
# We can then perform further analysis on the dataframe, such as filtering out variants with a low allele count (`AC`).
# + id="eIMQgCF_WX67" outputId="8fa0f7a1-22b3-4e34-8906-a4a1a8dc7e57" colab={"base_uri": "https://localhost:8080/", "height": 204}
stats_sex_filtered_ac=stats_sex.loc[stats_sex['AC'] > 10]
stats_sex_filtered_ac.head()
# + [markdown] id="KSBLLmkGWpkS"
# Or we could filter for variants that are common in female samples but absent from all male samples.
# + id="oPsQ4xQqW0oe" outputId="105ea395-3572-4141-d007-731b60295ef8" colab={"base_uri": "https://localhost:8080/", "height": 359}
stats_sex_no_male=stats_sex.loc[stats_sex['AC_male'] == 0].sort_values(by=('AC_female'),
ascending = False)
stats_sex_no_male.head(10)
# + [markdown] id="crtWu-hNmqDc"
# Instead of splitting `AN` and `AC` values by sex we can analyze ancestry.
#
# ### Query 1.3 - Allele Count by Ancestry
# A query to retrieve all variants in the region of interest along with `AN` and `AC` values for the following ancestries:
# * `afr`: African-American/African ancestry
# * `amr`: Latino ancestry
# * `eas`: East Asian ancestry
# * `nfe`: Non-Finnish European ancestry
# + id="NV8yUwNcmqDc" outputId="d85946d7-2f62-49c8-ff19-d68bd9257131" colab={"base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": ["<KEY>", "731e74977b114357884ba53d1c701ced", "<KEY>", "<KEY>", "b00a1e70fc324d25828bb4229b8cb557", "22763183698a4729af0ec1b9a3d32814", "721e388866b04ea58d9cd9f9ad7291ed", "2b0eeac57d014ee29acd0ecac77c12c0"]}
# NOTE: For v2_1_1 the "variant_type" column must be replaced with "alternate_bases.allele_type AS variant_type"
query_template = """
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
AN_afr,
AN_amr,
AN_eas,
AN_nfe,
alternate_bases.AC_afr AS AC_afr,
alternate_bases.AC_amr AS AC_amr,
alternate_bases.AC_eas AS AC_eas,
alternate_bases.AC_nfe AS AC_nfe,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE start_position >= {X} AND start_position <= {Y}
ORDER BY 1,2
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region, X=x, Y=y)
stats_ancestry = run_query(query)
stats_ancestry.head()
# + [markdown] id="kzIoUN7V9m_P"
# For example, we can report the most common variants in one ancestry (here `amr`) that are absent from all of the others.
# + id="OrA0IVH39nT-" outputId="fa3ace13-8940-4729-8d52-fd2db85dc9a1" colab={"base_uri": "https://localhost:8080/", "height": 359}
stats_ancestry_amr=stats_ancestry.loc[
(stats_ancestry['AC_amr'] > 0) &
(stats_ancestry['AC_afr'] == 0) &
(stats_ancestry['AC_eas'] == 0) &
(stats_ancestry['AC_nfe'] == 0)].sort_values(by=('AC_amr'),
ascending = False)
stats_ancestry_amr.head(10)
# + [markdown] id="NxhS-7_fPhtx"
# ### Query 1.4 - gnomAD Columns
# gnomAD tables have many more columns; you can find the full list, along with descriptions, using the following query.
#
# + id="ecNDTKKwPlnG" outputId="e69f36b3-229e-47c1-98e7-5186837ead6b" colab={"base_uri": "https://localhost:8080/", "height": 349, "referenced_widgets": ["fcaf933853bb43d88ce5e48961e78a94", "9d11ef09131a4b4290414d5eeb1786b5", "bb8b8fdab53b4b03982ed60e9d3c4f06", "607e958e770f445388ce3b7c3d0a22f1", "9d8c3948165d4a899f9094a71149b8d3", "2d3cbde9cb7f48b7b33f6f4b3789f6a9", "3b3c82832cf54d5e818e9ab88153430d", "d2499a774f304189b7ffe805264684bf"]}
query_template = """
SELECT column_name, field_path, description
FROM `bigquery-public-data`.gnomAD.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE table_name = "{GNOMAD_VER}__{CHROM}"
AND column_name IN (
SELECT COLUMN_NAME
FROM `bigquery-public-data`.gnomAD.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = "{GNOMAD_VER}__{CHROM}")
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region)
column_info = run_query(query)
print('There are {} columns in `bigquery-public-data.gnomAD.{}__{}` table'.format(len(column_info.index),
gnomad_version_region,
chromosome_region))
column_info.head(7)
# + [markdown] id="yeOYzQ-KOosz"
# Using the `column_info` dataframe, you can find other available values for the ancestry slice:
# + id="h97Pwy6OOosz" outputId="ff697d7e-74ae-4eb5-f32f-b268f3354e7f" colab={"base_uri": "https://localhost:8080/", "height": 390}
AN_columns = column_info[column_info['column_name'].str.startswith('AN')] # Retain only rows that column_name starts with "AN"
AN_columns = AN_columns[['column_name', 'description']] # Drop extra column (field_path)
AN_columns = AN_columns.sort_values(by=['column_name']) # Sort by column_name
AN_columns.head(11)
# + [markdown] id="6MJ03MHfOos2"
# Note that the corresponding values for `AC` and `AF` (Alternate allele frequency) exist under the `alternate_bases` column.
# + id="ilYK8FdHOos3" outputId="cc0dfd07-4028-488e-a7d3-d2b6995aa5c7" colab={"base_uri": "https://localhost:8080/", "height": 390}
AC_columns = column_info[column_info['field_path'].str.startswith('alternate_bases.AC')] # Retain only rows that field_path starts with "alternate_bases.AC"
AC_columns = AC_columns[['field_path', 'description']] # Drop extra column (column_name)
AC_columns = AC_columns.sort_values(by=['field_path']) # Sort by field_path
AC_columns.head(11)
# + [markdown] id="CvjPXfN3Oos5"
# Please refer to gnomAD release announcements ([v2.1](https://gnomad.broadinstitute.org/blog/2018-10-gnomad-v2-1/) and [v3.0](https://gnomad.broadinstitute.org/blog/2019-10-gnomad-v3-0/)) for more details about demographics and annotation slices.
#
# The next query showcases how to use `AN` and `AC` values.
#
# ### Query 1.5 - Burden of Mutation
# Given a region of interest, compute the burden of mutation for the gene along with other summary statistics.
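The score computed below is simply the region length divided by the number of variants, rounded to three decimals. A minimal sketch of that arithmetic, using made-up region bounds and a made-up variant count (not real gnomAD values):

```python
# Hypothetical region bounds and variant count.
x, y = 41196311, 41277500
num_variants = 3000

# Mirrors the SQL: ROUND(({Y} - {X}) / num_variants, 3)
burden_of_mutation = round((y - x) / num_variants, 3)
print(burden_of_mutation)  # 27.063
```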
# + id="9aQNKh2wOos6" outputId="9022b209-b784-4ac4-99ad-bf34f21be163" colab={"base_uri": "https://localhost:8080/", "height": 163, "referenced_widgets": ["cc52c3395ffa42dfb14e5e8bd4489106", "1deadc320f094ac58e564cb934284b6e", "8c2191e8bc664e1487ec847fb6d3d33f", "6000ae1b37e9420bab6cf8311937795a", "d68e66fab6084cc28d3a09824e935f1b", "897378a97ae84b10b092a24e5da5be42", "48e02cc90b4c4de1ad0d8ef350550527", "2aa0143a9cbe4d578a4e9f1f7896637c"]}
query_template = """
WITH summary_stats AS (
SELECT
COUNT(1) AS num_variants,
SUM(ARRAY_LENGTH(alternate_bases)) AS num_alts, # This data appears to be bi-allelic.
SUM((SELECT alt.AC FROM UNNEST(alternate_bases) AS alt)) AS sum_AC,
APPROX_QUANTILES((SELECT alt.AC FROM UNNEST(alternate_bases) AS alt), 10) AS quantiles_AC,
SUM(AN) AS sum_AN,
APPROX_QUANTILES(AN, 10) AS quantiles_AN,
-- Also include some information from Variant Effect Predictor (VEP).
STRING_AGG(DISTINCT (SELECT annot.symbol FROM UNNEST(alternate_bases) AS alt,
UNNEST(vep) AS annot LIMIT 1), ', ') AS genes
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table
WHERE start_position >= {X} AND start_position <= {Y})
---
--- The resulting quantiles and burden_of_mutation score give a very rough idea of the mutation
--- rate within these particular regions of the genome. This query could be further refined to
--- compute over smaller windows within the regions of interest and/or over different groupings
--- of AC and AN by population.
---
SELECT
ROUND(({Y} - {X}) / num_variants, 3) AS burden_of_mutation,
*,
FROM summary_stats
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region, X=x, Y=y)
burden_of_mu = run_query(query)
burden_of_mu.head()
# + [markdown] id="rteMw6VYOos8"
# Another useful column is `alternate_bases.vep`, which contains the [VEP annotations](https://uswest.ensembl.org/info/docs/tools/vep/index.html) for each variant.
# + id="CHkpgp6kOos9" outputId="84dc2ac1-ff8d-4c92-c0dd-2b6b1d2fe51f" colab={"base_uri": "https://localhost:8080/", "height": 421}
vep_columns = column_info[column_info['field_path'].str.startswith('alternate_bases.vep')] # Retain only rows that field_path starts with "alternate_bases.vep"
vep_columns = vep_columns[['field_path', 'description']] # Drop extra column (column_name)
vep_columns.head(22)
# + [markdown] id="NIc8J5NEOos_"
# The next query showcases how to use some of the `vep` annotation values.
#
# ### Query 1.6 - VEP Annotations
# Given a region of interest, examine `vep` annotations to pull out [missense variants](https://en.wikipedia.org/wiki/Missense_mutation).
#
# + id="ZQsJc6MXOos_" outputId="84df7da3-5348-47a6-9a54-198305291005" colab={"base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": ["e47f70c5d839401d8c354891008501cb", "376019ce8a104c248a03bda9c708f055", "d57cc2815b204e33b728520f4a0b7214", "dc134b16d6de48148b7f76debce4686e", "234edd320613450c9dc00cd5c4bd42eb", "32997a3978e34d77932871befe9bee71", "31f5241372c24f4cbc4242c58ab6f0a7", "b96568addc7b436bab0603eecd6816fc"]}
query_template = """
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
vep.Consequence AS Consequence,
vep.IMPACT AS Impact,
vep.SYMBOL AS Symbol,
vep.Gene AS Gene,
vep.EXON AS EXON,
vep.INTRON AS INTRON,
{EXTRA_COLS}
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE start_position >= {X} AND start_position <= {Y} AND
REGEXP_CONTAINS(vep.Consequence, r"missense_variant")
ORDER BY start_position, reference_bases
"""
query = query_template.format(GNOMAD_VER=gnomad_version_region,
CHROM=chromosome_region,
EXTRA_COLS=extra_columns, X=x, Y=y)
neg_variants = run_query(query)
neg_variants.head()
# + [markdown] id="Agb4xFx2mqDi"
# # gnomAD Queries Type 2: Explore an entire chromosome
#
# This section queries across an entire chromosome.
#
# + id="M-eJUtlOEIgY" outputId="f60736b4-47f9-41d7-a527-c3a6ec2d8e7a" colab={"base_uri": "https://localhost:8080/", "height": 96, "referenced_widgets": ["04b04101f4d7402ca2f7b0ff550f883a", "6056f94e9ca14f2cbee0481cb40a7c4f", "62abe0d1f96d4744b1922d169bf337f3", "6763c27c30ab42abb88d23062c891bf7", "3d9d11ed08794c43a866b415672a1ea2", "3882ffc1042e4e2caeffc3c55a8bc2ea"]}
import ipywidgets as widgets
print("Variables for Chromosome (Type 2) queries")
gnomad_version_widget_chr = widgets.Dropdown(
options=['v2_1_1_exomes', 'v2_1_1_genomes', 'v3_genomes'],
value='v2_1_1_exomes',
description='gnomAD version:',
disabled=False,
style={'description_width': 'initial'}
)
display(gnomad_version_widget_chr)
chromosome_widget_chr = widgets.Dropdown(
options=['chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8',
'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15',
'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22',
'chrX', 'chrY'],
value='chr17',
description='Chromosome:',
disabled=False,
style={'description_width': 'initial'}
)
display(chromosome_widget_chr)
# + id="etWAza7lEpz6" outputId="f93eb191-13b8-4ed2-ff4f-a744b17fd74b" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Set the variables for the rest of the Chromosome (Type 2) queries based on the values above.
gnomad_version_chr=gnomad_version_widget_chr.value
chromosome_chr=chromosome_widget_chr.value
print('Running chromosome (Type 2) queries on gnomAD version: {}, chromosome: {}'.format(
gnomad_version_chr,
chromosome_chr
))
if gnomad_version_chr.startswith('v3'):
    # Variant type (snv, indel, multi-snv, multi-indel, or mixed) is stored under different columns in v2 and v3
variant_type_col = 'variant_type'
extra_columns = ''
else:
    variant_type_col = 'alternate_bases.allele_type'
# These vep columns only exist in V2
extra_columns = 'vep.STRAND AS STRAND, vep.Protein_position AS Protein_pos,'
# + [markdown] id="17GsksosA7hb"
# ## Query 2.1 - Find alleles that occur in at least 90% of samples
# Find all variants on the selected chromosome with an alternate allele frequency (`AF`) above 0.9. In other words, this query finds variants whose non-REF allele accounts for more than 90% of the observed alleles.
# + id="n69cLjhfmqDi" outputId="f2d5f94f-509a-4151-ac70-9161466efec6" colab={"base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": ["341c28def461400cbff409fa8e46528a", "55b9f8c1c5ab417391ba9db32b6049f6", "9e03d5c4a3f1434aafac076ccf45ddf6", "3db540ad2f184c5b90b6e28e8b3da3c1", "2cd8457577c549c39fae098845315cb3", "cb687c033e874ee0a8a915964701424e", "8d4a615cebd845f1bb7c84b8cc43adb1", "d0dd08b93b8d48f4bbb2f36764c53e59"]}
query_template = """
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
vep.SYMBOL AS Symbol,
vep.Gene AS Gene,
AN,
alternate_bases.AC AS AC,
alternate_bases.AF AS AF,
vep.EXON AS EXON,
vep.INTRON AS INTRON,
{EXTRA_COLS}
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE AN > 0 AND AF > 0.9
ORDER BY AN DESC
"""
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
EXTRA_COLS=extra_columns)
high_af = run_query(query)
high_af.head()
# + [markdown] id="GxTI7n_OmqDn"
# We can condense the result to list only the gene symbols and the number of variants found by the previous query:
# + id="BnoQ8MGKmqDo" outputId="2d0e5886-3c2d-468d-9511-09e79c494ec7" colab={"base_uri": "https://localhost:8080/", "height": 390}
high_af.groupby('Symbol').count()[['POS']].sort_values(by=['POS'],
ascending=False).head(10)
# + [markdown] id="w7A9jlrHmqDq"
# ## Query 2.2 - Top variants by ancestry difference
# Find the top 1,000 variants on the selected chromosome that show the largest differences between male samples of African-American ancestry and male samples of Finnish ancestry.
# + id="ggChrvDlmqDq" outputId="63873e41-b44c-44eb-80c7-05600c3d2861" colab={"base_uri": "https://localhost:8080/", "height": 290, "referenced_widgets": ["4a18a41841084092890f420bd7f4a869", "5023a1389ddd4be8b8c011affa62b126", "dfe398fb484b4747b3caeb402c6bfa52", "c166823727a94ee280bf8444f6ecfab7", "f4e8ac8169e94c8c88cca77ffe452e18", "d42b5c8b1c47443a8f9b9e5bccfb6f8f", "4c5daa97a62f4ba5a0a57cb97679159b", "6313ca4356834e5a982ce240112454cb"]}
query_template = """
SELECT reference_name AS CHROM,
start_position AS POS,
names AS ID,
reference_bases AS REF,
alternate_bases.alt AS ALT,
vep.SYMBOL AS Symbol,
vep.Gene AS Gene,
AN,
alternate_bases.AC_fin_male AS AC_fin_m,
alternate_bases.AC_afr_male AS AC_afr_m,
ROUND(ABS(alternate_bases.AC_fin_male - alternate_bases.AC_afr_male) / alternate_bases.AC_male, 3) AS fin_afr_diff,
vep.EXON AS EXON,
vep.INTRON AS INTRON,
{EXTRA_COLS}
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE vep.SYMBOL IS NOT NULL AND
alternate_bases.AC_male > 20 AND alternate_bases.AC_fin_male > 0 AND alternate_bases.AC_afr_male > 0
order by fin_afr_diff DESC
LIMIT 1000
"""
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
EXTRA_COLS=extra_columns)
stats_chr_ancestry = run_query(query)
stats_chr_ancestry.head()
# + [markdown] id="5xkN7ewQmqDw"
# ## Query 2.3 - Find genes with a high number of INDELs
# Find the top 1,000 genes with the highest number of INDELs on the selected chromosome.
# + id="4FvIPEhlmqDw" outputId="20241587-8b13-45ea-de82-106803df9af3" colab={"base_uri": "https://localhost:8080/", "height": 425, "referenced_widgets": ["6ce1c3fc9fc4417e92dc32d44711fab7", "32c06db11e794c9b812d573994616eec", "475d1fb2eb754e90ac243023d5e24b47", "5f6f6262cdc045f78129880809618f0c", "758fa652c43145628fc343bc30e1bee8", "49a2bc37ff6d49688dde2eca2d0a3442", "62810141dbb5435fa5dd9e7ed2daf352", "36250839218246b5a28819c8b2a93f8d"]}
query_template = """
SELECT Symbol, count(1) AS num_indels
FROM
(
SELECT DISTINCT
start_position AS str_pos,
alternate_bases.alt AS alt,
vep.SYMBOL AS Symbol,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases,
alternate_bases.vep AS vep
WHERE vep.SYMBOL IS NOT NULL AND variant_type IN ('ins', 'del', 'indel')
)
GROUP BY 1
ORDER BY 2 DESC
LIMIT 1000
"""
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
VAR_TYPE_COL=variant_type_col)
indel_stats = run_query(query)
indel_stats.head(10)
# + [markdown] id="DteicRVbBDfu"
# ## Query 2.4 - Find the distribution of SNVs across a chromosome
# Find the distribution of SNVs across the selected chromosome. To make the result easy to plot, we group base pairs into buckets of size 10,000.
#
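The bucketing in the query is plain integer division of the start position by the bucket size. The same idea in Python, with made-up positions:

```python
bucket_size = 10000

# Hypothetical start positions.
positions = [5, 9999, 10000, 25000, 43019999]

# Same grouping as CAST(FLOOR(DIV(start_position, bucket_size)) AS INT64) in the SQL.
buckets = [pos // bucket_size for pos in positions]
print(buckets)  # [0, 0, 1, 2, 4301]
```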
# + id="FojNXJY6DIw0" outputId="5942ecb9-7b61-45df-98ea-5e767af0c5fb" colab={"base_uri": "https://localhost:8080/", "height": 270, "referenced_widgets": ["b2406dd554ef42cb88464b7aed20845f", "f302bde312794b5286c9b04ae14f3495", "8280608dde5d4b6c9fdb32cdf8037278", "21fb707bd3bc4c19846b1195b5f3d578", "91812a525cc444589b38637fa8e73472", "12c68617a243438eac83cd31aea0ce87", "6a44ed1391ed4c04b4d36ee11fd7548d", "86971951838342ff8a44c2ebddc64510"]}
bucket_size = 10000
query_template = """
SELECT CAST(FLOOR(DIV(start_position, {BUCKET})) AS INT64) AS start_pos_bucket ,
count(1) AS num_snv
FROM
(
SELECT DISTINCT
start_position,
alternate_bases.alt AS alt,
{VAR_TYPE_COL} AS variant_type,
FROM `bigquery-public-data.gnomAD.{GNOMAD_VER}__{CHROM}` AS main_table,
main_table.alternate_bases AS alternate_bases
WHERE variant_type = 'snv'
)
GROUP BY 1
ORDER BY 1
"""
query = query_template.format(GNOMAD_VER=gnomad_version_chr,
CHROM=chromosome_chr,
VAR_TYPE_COL=variant_type_col,
BUCKET=bucket_size)
snv_dist = run_query(query)
snv_dist.head()
# + id="FxFpLITCec0J" outputId="db7b8347-0886-4bb8-deb2-0015f6dce7a4" colab={"base_uri": "https://localhost:8080/", "height": 595}
import matplotlib.pyplot as plt
plt.figure(dpi=150)
plt.bar(snv_dist.start_pos_bucket, snv_dist.num_snv)
plt.xlabel("Bucket number of start_pos")
plt.ylabel("Number of SNVs in each bucket")
plt.title("Distribution of SNVs on {} for buckets of {} base pairs".format(chromosome_chr, bucket_size))
plt.show()
| docs/sample_queries/gnomad/gnomad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ojm_6E9f9Kcf"
# # MLP ORF to GenCode
# Evaluate a previously saved model.
# Run notebook 113 first; it saves its best model under My Drive.
# This notebook loads the model trained in notebook 113 and tests it on held-out data.
# + colab={"base_uri": "https://localhost:8080/"} id="RmPF4h_YI_sT" outputId="5ca73207-0b94-4bd8-d62f-f3151e32e19a"
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
# + id="VQY7aTj29Kch"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
# + colab={"base_uri": "https://localhost:8080/"} id="xUxEB53HI_sk" outputId="3ea0c44b-71f8-49e5-d40b-0c4ddc7f81a4"
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
else:
    print("Not on Google CoLab. On a local machine, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
BESTMODELPATH=DATAPATH+"BestModel" # saved on cloud instance and lost after logout
LASTMODELPATH=DATAPATH+"LastModel" # saved on Google Drive but requires login
# + [markdown] id="8buAhZRfI_sp"
# ## Data Load
# + colab={"base_uri": "https://localhost:8080/"} id="h94xptH1tI82" outputId="8bdc915b-9271-4c78-d3c7-944eb76acf6f"
PC_TRAINS=8000
NC_TRAINS=8000
PC_TESTS=8000
NC_TESTS=8000
PC_LENS=(200,99000)
NC_LENS=(200,99000)
PC_FILENAME='gencode.v38.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v38.lncRNA_transcripts.fa.gz'
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
MAX_K = 3
show_time()
# + colab={"base_uri": "https://localhost:8080/"} id="VNnPagXjtI85" outputId="972a34f8-aaf4-4b94-862b-0e1f4a63c98d"
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(False)
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()
# + colab={"base_uri": "https://localhost:8080/"} id="ShtPw_fGtI9E" outputId="3ec8982e-08a4-4123-a0c8-2250dfe47dd3"
def dataframe_length_filter(df,low_high):
(low,high)=low_high
# The pandas query language is strange,
# but this is MUCH faster than loop & drop.
return df[ (df['seqlen']>=low) & (df['seqlen']<=high) ]
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(
dataframe_length_filter(pcdf,PC_LENS))
nc_all = dataframe_extract_sequence(
dataframe_length_filter(ncdf,NC_LENS))
show_time()
print("PC seqs pass filter:",len(pc_all))
print("NC seqs pass filter:",len(nc_all))
# Garbage collection to reduce RAM footprint
pcdf=None
ncdf=None
# + [markdown] id="CCNh_FZaI_sv"
# ## Data Prep
# + colab={"base_uri": "https://localhost:8080/"} id="V91rP2osI_s1" outputId="e47c7d5a-f68a-4f48-e009-8d4fdf6773fb"
# Train set not needed because we use a pre-trained model.
# pc_train=pc_all[:PC_TRAINS]
# nc_train=nc_all[:NC_TRAINS]
# print("PC train, NC train:",len(pc_train),len(nc_train))
pc_test=pc_all[PC_TRAINS:PC_TRAINS+PC_TESTS]
nc_test=nc_all[NC_TRAINS:NC_TRAINS+NC_TESTS]
print("PC test, NC test:",len(pc_test),len(nc_test))
# Garbage collection
pc_all=None
nc_all=None
# + colab={"base_uri": "https://localhost:8080/"} id="FfyPeInGI_s4" outputId="1bc79d45-4d1f-4b8c-ca8b-dbfa62c1fc29"
def prepare_x_and_y(seqs1,seqs0):
len1=len(seqs1)
len0=len(seqs0)
total=len1+len0
L1=np.ones(len1,dtype=np.int8)
L0=np.zeros(len0,dtype=np.int8)
S1 = np.asarray(seqs1)
S0 = np.asarray(seqs0)
all_labels = np.concatenate((L1,L0))
all_seqs = np.concatenate((S1,S0))
    return all_seqs,all_labels  # unshuffled; fine for evaluating a saved model
    # When training, shuffle instead (sklearn.utils.shuffle can be a RAM hog):
    # X,y = shuffle(all_seqs,all_labels)
    # return X,y
# Skip training, load a trained model.
# Xseq,y=prepare_x_and_y(pc_train,nc_train)
# print(Xseq[:3])
# print(y[:3])
show_time()
# + colab={"base_uri": "https://localhost:8080/"} id="LWLixZOfI_s7" outputId="8241be23-ac6a-4140-a39e-3208177c9c68"
def seqs_to_kmer_freqs(seqs,max_K):
tool = KmerTools() # from SimTools
collection = []
#i=0
for seq in seqs:
counts = tool.make_dict_upto_K(max_K)
# Last param should be True when using Harvester.
counts = tool.update_count_one_K(counts,max_K,seq,True)
# Given counts for K=3, Harvester fills in counts for K=1,2.
counts = tool.harvest_counts_from_K(counts,max_K)
fdict = tool.count_to_frequency(counts,max_K)
freqs = list(fdict.values())
collection.append(freqs)
#i = i+1
#print(i)
return np.asarray(collection)
# Skip training, load a trained model.
# Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
# Garbage collection
Xseq = None
show_time()
# + [markdown] id="dJ4XhrzGI_s-"
# ## Neural network
# + colab={"base_uri": "https://localhost:8080/"} id="o5NPW7zKI_tC" outputId="f76b68b0-b0b9-454c-c6b8-18d4e6c7de2a"
best_model=load_model(BESTMODELPATH)
print(best_model.summary())
# + colab={"base_uri": "https://localhost:8080/"} id="GVImN4_0I_tJ" outputId="af4beb69-5410-4254-ed36-c0973f772149"
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
show_time()
# + colab={"base_uri": "https://localhost:8080/"} id="V7EI2M5yodH4" outputId="7be42f9c-9db4-47c8-e175-db31bd3ffe56"
Xseq,y=prepare_x_and_y(pc_test,nc_test)
print(Xseq[0])
show_time()
# + colab={"base_uri": "https://localhost:8080/"} id="upx48_rhogZb" outputId="efefde13-38f1-4cfe-acd0-c8be522f36e1"
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
print(Xfrq[0])
print(y[:5])
show_time()
# + colab={"base_uri": "https://localhost:8080/"} id="bBxMn1HJoihn" outputId="bb43bd0a-d830-4f68-eced-89c9b7faa289"
X=Xfrq
scores = best_model.evaluate(X, y, verbose=0)
show_time()
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
# + colab={"base_uri": "https://localhost:8080/", "height": 417} id="sxC-6dXcC8jG" outputId="57d7a318-9123-4863-a829-6f45aea590fa"
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
print("predictions.shape",bm_probs.shape)
print("first predictions",bm_probs[:5])
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
# + id="tGf2PcxRC8jT"
| Notebooks/ORF_MLP_114.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="RjCo3jrpfJ0l"
# ### NumPy Practice Problems
#
# ### 문제 1.
#
# Create a zero vector of length 10 (all elements equal to 0).
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 1023, "status": "ok", "timestamp": 1592741657305, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="p-aeXdaguQ9K" outputId="9a8ed264-4445-4d81-e975-75af8d94e449"
import numpy as np
x = np.zeros(10)
x
# + [markdown] colab_type="text" id="jBqZrWqqiFk_"
# ### 문제 2.
#
# Create a vector of length 10 whose fifth element is 1 and all other elements are 0.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 1013, "status": "ok", "timestamp": 1592741657306, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="z0Kl7D6-uRfD" outputId="7bd7ff3d-e635-4296-bad9-c8e1e67ad76e"
x = np.zeros(10)
x[4] = 1
x
# + [markdown] colab_type="text" id="BoYSsYGhiIef"
# ### 문제 3.
#
# Create a vector containing the values from 10 to 49.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 1003, "status": "ok", "timestamp": 1592741657307, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="pSVM-KF3uR9y" outputId="1a98c504-e7c7-4c97-84ac-91e9791bcc4b"
x = np.arange(10,50)
x
# + [markdown] colab_type="text" id="xlZ0xpKMiNGN"
# ### 문제 4.
#
# Reverse the order of the vector above.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 994, "status": "ok", "timestamp": 1592741657308, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="-IA4dT_huSjg" outputId="aa124167-ac5d-42e7-ee89-dfcee91214ed"
x = np.arange(10, 50)[::-1]
x
# + [markdown] colab_type="text" id="zxiD5ItZiPta"
# ### 문제 5.
#
# Create a 3x3 matrix containing the values from 0 to 8.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 985, "status": "ok", "timestamp": 1592741657309, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="QByIoaCCuTIg" outputId="6d46669f-5d7d-4931-c87c-2434fd888a32"
x = np.reshape(np.arange(9), (3,3))
x
# + [markdown] colab_type="text" id="0RSXeMRviR75"
# ### 문제 6.
#
# From the vector [1,2,0,0,4,0], create a vector that keeps only the nonzero elements.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 975, "status": "ok", "timestamp": 1592741657309, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="y7ueLgpUuTie" outputId="82e8563e-46a9-4ad9-fb58-f302d1e0954c"
x = np.array([1,2,0,0,4,0])
x[x != 0]
# + [markdown] colab_type="text" id="FpjT4iZUiUkG"
# ### 문제 7.
#
# Create a 3x3 identity matrix (ones on the main diagonal, zeros elsewhere).
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 966, "status": "ok", "timestamp": 1592741657310, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="g-2aj0IZuUBo" outputId="906e843c-602a-44d2-94c0-c4ebed2eeb53"
np.eye(3,3)
# + [markdown] colab_type="text" id="TbnT-pA4iWkl"
# ### 문제 8.
#
# Create a 3x3 matrix with random elements.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 956, "status": "ok", "timestamp": 1592741657310, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="CbYAz43nuUkU" outputId="ddbd2486-2bd3-4720-dfb6-42b3b1fbfe17"
x = np.random.rand(3,3)
x
# + [markdown] colab_type="text" id="F0mr3gNHiYkY"
# ### 문제 9.
#
# Find the maximum and minimum elements of the random matrix created above.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 948, "status": "ok", "timestamp": 1592741657311, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="zAeM4e7juVNU" outputId="fd5f7c8a-1663-47bc-8574-5a73194930dc"
print(x.min(), x.max())
# + [markdown] colab_type="text" id="tbaqoKpniawg"
# ### 문제 10.
#
# Compute the row means and column means of the random matrix created above.
# + colab={"base_uri": "https://localhost:8080/", "height": 53} colab_type="code" executionInfo={"elapsed": 940, "status": "ok", "timestamp": 1592741657313, "user": {"displayName": "\uc774\uc120\ud654", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiGEcSuxiespWUAaMC6eXljdm2fXmv29ZXuZ14n=s64", "userId": "08084686575025891086"}, "user_tz": -540} id="4N6SKojSazoB" outputId="62a614e0-d479-44db-b7fb-e945f71b418f"
print(np.mean(x, axis = 1)) # row means
print(np.mean(x, axis = 0)) # column means
| 02NumPy/Numpy연습문제_해답.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Normal Distribution
# ***
# ## Definition
# >The normal distribution, also known as the Gaussian or standard normal distribution, is the [continuous] probability distribution that plots all of its values in a symmetrical fashion, and most of the results are situated around the probability's mean. Values are equally likely to plot either above or below the mean. Grouping takes place at values close to the mean and then tails off symmetrically away from the mean $ ^{[1]}$.
#
# ## Formula
# The probability density function of a normally distributed random variable is defined as:
# $$ f(x|\mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}$$
# where $\mu$ denotes the mean and $\sigma$ denotes the standard deviation.
# +
# IMPORTS
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import matplotlib.style as style
from IPython.core.display import HTML
# PLOTTING CONFIG
# %matplotlib inline
style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (14, 7)
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: center;
}
</style>
""")
plt.figure(dpi=100)
# PDF
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100)) / np.max(stats.norm.pdf(np.linspace(-3, 3, 100))),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100)) / np.max(stats.norm.pdf(np.linspace(-3, 3, 100))),
alpha=.15,
)
# CDF
plt.plot(np.linspace(-4, 4, 100),
stats.norm.cdf(np.linspace(-4, 4, 100)),
)
# LEGEND
plt.text(x=-1.5, y=.7, s="pdf (normed)", rotation=65, alpha=.75, weight="bold", color="#008fd5")
plt.text(x=-.4, y=.5, s="cdf", rotation=55, alpha=.75, weight="bold", color="#fc4f30")
# TICKS
plt.tick_params(axis = 'both', which = 'major', labelsize = 18)
plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)
# TITLE, SUBTITLE & FOOTER
plt.text(x = -5, y = 1.25, s = "Normal Distribution - Overview",
fontsize = 26, weight = 'bold', alpha = .75)
plt.text(x = -5, y = 1.1,
s = 'Depicted below are the normed probability density function (pdf) and the cumulative density\nfunction (cdf) of a normally distributed random variable $ y \sim \mathcal{N}(\mu,\sigma) $, given $ \mu = 0 $ and $ \sigma = 1$.',
fontsize = 19, alpha = .85)
plt.text(x = -5,y = -0.2,
s = 'Normal',
fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey');
# -
# ***
# ## Parameters
# +
# IMPORTS
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import matplotlib.style as style
from IPython.core.display import HTML
# PLOTTING CONFIG
# %matplotlib inline
style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (14, 7)
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: center;
}
</style>
""")
plt.figure(dpi=100)
# PDF MU = 0
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100)),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100)),
alpha=.15,
)
# PDF MU = 2
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), loc=2),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100),loc=2),
alpha=.15,
)
# PDF MU = -2
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), loc=-2),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100),loc=-2),
alpha=.15,
)
# LEGEND
plt.text(x=-1, y=.35, s="$ \mu = 0$", rotation=65, alpha=.75, weight="bold", color="#008fd5")
plt.text(x=1, y=.35, s="$ \mu = 2$", rotation=65, alpha=.75, weight="bold", color="#fc4f30")
plt.text(x=-3, y=.35, s="$ \mu = -2$", rotation=65, alpha=.75, weight="bold", color="#e5ae38")
# TICKS
plt.tick_params(axis = 'both', which = 'major', labelsize = 18)
plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)
# TITLE, SUBTITLE & FOOTER
plt.text(x = -5, y = 0.51, s = "Normal Distribution - $ \mu $",
fontsize = 26, weight = 'bold', alpha = .75)
plt.text(x = -5, y = 0.45,
s = 'Depicted below are three normally distributed random variables with varying $ \mu $. As one can easily\nsee the parameter $\mu$ shifts the distribution along the x-axis.',
fontsize = 19, alpha = .85)
plt.text(x = -5,y = -0.075,
s = 'Normal',
fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey');
# +
# IMPORTS
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import matplotlib.style as style
from IPython.core.display import HTML
# PLOTTING CONFIG
# %matplotlib inline
style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (14, 7)
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: center;
}
</style>
""")
plt.figure(dpi=100)
# PDF SIGMA = 1
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), scale=1),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), scale=1),
alpha=.15,
)
# PDF SIGMA = 2
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), scale=2),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), scale=2),
alpha=.15,
)
# PDF SIGMA = 0.5
plt.plot(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), scale=0.5),
)
plt.fill_between(np.linspace(-4, 4, 100),
stats.norm.pdf(np.linspace(-4, 4, 100), scale=0.5),
alpha=.15,
)
# LEGEND
plt.text(x=-1.25, y=.3, s="$ \sigma = 1$", rotation=51, alpha=.75, weight="bold", color="#008fd5")
plt.text(x=-2.5, y=.13, s="$ \sigma = 2$", rotation=11, alpha=.75, weight="bold", color="#fc4f30")
plt.text(x=-0.75, y=.55, s="$ \sigma = 0.5$", rotation=75, alpha=.75, weight="bold", color="#e5ae38")
# TICKS
plt.tick_params(axis = 'both', which = 'major', labelsize = 18)
plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)
# TITLE, SUBTITLE & FOOTER
plt.text(x = -5, y = 0.98, s = "Normal Distribution - $ \sigma $",
fontsize = 26, weight = 'bold', alpha = .75)
plt.text(x = -5, y = 0.87,
s = 'Depicted below are three normally distributed random variables with varying $\sigma $. As one can easily\nsee the parameter $\sigma$ "sharpens" the distribution (the smaller $ \sigma $ the sharper the function).',
fontsize = 19, alpha = .85)
plt.text(x = -5,y = -0.15,
s = 'Normal',
fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey');
# -
# ***
# ## Implementation in Python
# Multiple Python packages implement the normal distribution. One of those is the `stats.norm` module from the `scipy` package. The following methods are only an excerpt. For a full list of features the [official documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html) should be read.
# ### Random Variates
# In order to generate a random sample the function `rvs` should be used. By default the samples are drawn from a normal distribution with $ \mu = 0$ and $\sigma=1$:
# +
from scipy.stats import norm
# draw a single sample
print(norm.rvs(), end="\n\n")
# draw 10 samples
print(norm.rvs(size=10), end="\n\n")
# adjust mean ('loc') and standard deviation ('scale')
print(norm.rvs(loc=10, scale=0.1), end="\n\n")
# -
# ### Probability Density Function
# The probability density function can be accessed via the `pdf` function. Like the `rvs` method, the `pdf` allows for adjusting mean and standard deviation of the random variable:
# +
from scipy.stats import norm
# additional imports for plotting purpose
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams["figure.figsize"] = (14, 7)
# relative likelihood of x and y
x = -1
y = 2
print("pdf(x) = {}\npdf(y) = {}".format(norm.pdf(x), norm.pdf(y)))
# continuous pdf for the plot
x_s = np.linspace(-3, 3, 50)
y_s = norm.pdf(x_s)
plt.scatter(x_s, y_s);
# -
# ### Cumulative Distribution Function
# The cumulative distribution function is useful when an actual probability has to be calculated. It can be accessed via the `cdf` function:
# +
from scipy.stats import norm
# probability of X less than or equal to 0.3
print("P(X <= 0.3) = {}".format(norm.cdf(0.3)))
# probability of x in [-0.2, +0.2]
print("P(-0.2 < X < 0.2) = {}".format(norm.cdf(0.2) - norm.cdf(-0.2)))
# -
# ***
# ## Inferring $\mu$ and $\sigma$
# Given a sample of data points, it is often necessary to estimate the "true" parameters of the distribution. In the case of the normal distribution this estimation is quite simple: $\mu$ can be estimated by calculating the mean of the sample, and $\sigma$ by calculating the standard deviation of the sample.
# +
# IMPORTS
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import matplotlib.style as style
from IPython.core.display import HTML
# PLOTTING CONFIG
# %matplotlib inline
style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (14, 7)
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: center;
}
</style>
""")
plt.figure(dpi=100)
##### COMPUTATION #####
# DECLARING THE "TRUE" PARAMETERS UNDERLYING THE SAMPLE
mu_real = 10
sigma_real = 2
# DRAW A SAMPLE OF N=1000
np.random.seed(42)
sample = stats.norm.rvs(loc=mu_real, scale=sigma_real, size=1000)
# ESTIMATE MU AND SIGMA
mu_est = np.mean(sample)
sigma_est = np.std(sample)
print("Estimated MU: {}\nEstimated SIGMA: {}".format(mu_est, sigma_est))
##### PLOTTING #####
# SAMPLE DISTRIBUTION
plt.hist(sample, bins=50, density=True, alpha=.25)
# TRUE CURVE
plt.plot(np.linspace(2, 18, 1000), norm.pdf(np.linspace(2, 18, 1000),loc=mu_real, scale=sigma_real))
# ESTIMATED CURVE
plt.plot(np.linspace(2, 18, 1000), norm.pdf(np.linspace(2, 18, 1000),loc=np.mean(sample), scale=np.std(sample)))
# LEGEND
plt.text(x=9.5, y=.1, s="sample", alpha=.75, weight="bold", color="#008fd5")
plt.text(x=7, y=.2, s="true distribution", rotation=55, alpha=.75, weight="bold", color="#fc4f30")
plt.text(x=5, y=.12, s="estimated distribution", rotation=55, alpha=.75, weight="bold", color="#e5ae38")
# TICKS
plt.tick_params(axis = 'both', which = 'major', labelsize = 18)
plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)
# TITLE, SUBTITLE & FOOTER
plt.text(x = 0, y = 0.3, s = "Normal Distribution",
fontsize = 26, weight = 'bold', alpha = .75)
plt.text(x = 0, y = 0.265,
s = 'Depicted below is the distribution of a sample (blue) drawn from a normal distribution with $\mu = 10$\nand $\sigma = 2$ (red). Also the estimated distribution with $\mu \sim {:.3f} $ and $\sigma \sim {:.3f} $ is shown (yellow).'.format(np.mean(sample), np.std(sample)),
fontsize = 19, alpha = .85)
plt.text(x = 0,y = -0.025,
s = 'Normal',
fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey');
# -
# ## Inferring $\mu$ and $\sigma$ - MCMC
# In addition to a "direct" inference, $\mu$ and $\sigma$ can also be estimated using Markov chain Monte Carlo simulation - implemented in Python's [PyMC3](https://github.com/pymc-devs/pymc3).
# +
# IMPORTS
import pymc3 as pm
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
import matplotlib.style as style
from IPython.core.display import HTML
# PLOTTING CONFIG
# %matplotlib inline
style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (14, 7)
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: center;
}
</style>
""")
plt.figure(dpi=100)
##### SIMULATION #####
# MODEL BUILDING
with pm.Model() as model:
mu = pm.Uniform("mu", upper=20)
std = pm.Uniform("std", upper=5)
normal = pm.Normal("normal", mu=mu, sd=std, observed=sample)
# MODEL RUN
with model:
step = pm.Metropolis()
trace = pm.sample(50000, step=step)
burned_trace = trace[45000:]
# MU - 95% CONF INTERVAL
mus = burned_trace["mu"]
mu_est_95 = np.mean(mus) - 2*np.std(mus), np.mean(mus) + 2*np.std(mus)
print("95% of sampled mus are between {:0.3f} and {:0.3f}".format(*mu_est_95))
# STD - 95% CONF INTERVAL
stds = burned_trace["std"]
std_est_95 = np.mean(stds) - 2*np.std(stds), np.mean(stds) + 2*np.std(stds)
print("95% of sampled sigmas are between {:0.3f} and {:0.3f}".format(*std_est_95))
#### PLOTTING #####
# SAMPLE DISTRIBUTION
plt.hist(sample, bins=50, density=True, alpha=.25)
# TRUE CURVE
plt.plot(np.linspace(2, 18, 1000), norm.pdf(np.linspace(2, 18, 1000),loc=mu_real, scale=sigma_real))
# ESTIMATED CURVE MCMC
plt.plot(np.linspace(2, 18, 1000), norm.pdf(np.linspace(2, 18, 1000),loc=np.mean(mus), scale=np.mean(stds)))
# LEGEND
plt.text(x=9.5, y=.1, s="sample", alpha=.75, weight="bold", color="#008fd5")
plt.text(x=7, y=.2, s="true distribution", rotation=55, alpha=.75, weight="bold", color="#fc4f30")
plt.text(x=5, y=.12, s="estimated distribution", rotation=55, alpha=.75, weight="bold", color="#e5ae38")
# TICKS
plt.tick_params(axis = 'both', which = 'major', labelsize = 18)
plt.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)
# TITLE, SUBTITLE & FOOTER
plt.text(x = 0, y = 0.3, s = "Normal Distribution - Parameter Estimation (MCMC)",
fontsize = 26, weight = 'bold', alpha = .75)
plt.text(x = 0, y = 0.265,
s = 'Depicted below is the distribution of a sample (blue) drawn from a normal distribution with $\mu = 10$\nand $\sigma = 2$ (red). Also the estimated distribution with $\mu \sim {:.3f} $ and $\sigma \sim {:.3f} $ is shown (yellow).'.format(np.mean(mus), np.mean(stds)),
fontsize = 19, alpha = .85)
plt.text(x = 0,y = -0.025,
s = 'Normal MCMC',
fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey');
# -
# ***
# [1] - [Investopedia. Normal Distribution](https://www.investopedia.com/terms/n/normaldistribution.asp)
| Mathematics/Statistics/Statistics and Probability Python Notebooks/Important-Statistics-Distributions-py-notebooks/Normal Distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Programming onboard peripherals
# ## LEDs, switches and buttons
# This notebook can be run with the PYNQ-Z1 or PYNQ-Z2. Both boards have four green LEDs (LD0-3), two multi-colour LEDs (LD4-5), two slide-switches (SW0-1) and four push-buttons (BTN0-3) that are connected to the Zynq’s programmable logic.
# Note that there are additional push-buttons and LEDs on the board, but these are used for specific functions (power LED, PS reset button, etc.) and are not user accessible.
#
# The IO can be controlled directly from Python. To demonstrate this, we first load the base overlay; the LED, Switch and Button classes are then imported from the `pynq.lib` module:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
# ## Controlling an LED
# Now we can create an instance of each of these classes and use their methods to manipulate them. Let’s start by instantiating a single LED and turning it on and off.
from pynq.lib import LED, Switch, Button
led0 = LED(0)
led0.on()
# Check the board and confirm the LED is on.
led0.off()
# Let’s then toggle _led0_ using the sleep() function from the _time_ module to see the LED flashing.
import time
led0 = LED(0)
for i in range(20):
led0.toggle()
time.sleep(.1)
# ## Example: Controlling all the LEDs, switches and buttons
#
#
# The example below creates 3 separate lists, called _leds_, _switches_ and _buttons_.
# +
# Set the number of LEDs, switches and buttons
MAX_LEDS = 4
MAX_SWITCHES = 2
MAX_BUTTONS = 4
# Create lists for each of the IO component groups
leds = [LED(index) for index in range(MAX_LEDS)]
switches = [Switch(index) for index in range(MAX_SWITCHES)]
buttons = [Button(index) for index in range(MAX_BUTTONS)]
# -
# First, all LEDs are set to off. Then each switch is read, and if a switch is in the on position, the corresponding led is turned on. You can execute this cell a few times, changing the position of the switches on the board.
# +
# LEDs start in the off state
for i in range(MAX_LEDS):
leds[i].off()
# if a slide-switch is on, light the corresponding LED
for i in range(MAX_LEDS):
if switches[i%2].read():
leds[i].on()
else:
leds[i].off()
# -
# The last part toggles the corresponding led (on or off) if a pushbutton is pressed. You can execute this cell a few times pressing different pushbuttons each time.
# if a button is depressed, toggle the state of the corresponding LED
for i in range(MAX_LEDS):
if buttons[i].read():
leds[i].toggle()
# ## Next steps
#
# If you have time, write your own program to:
# 1. Turn on/off a single LED when a button is pressed
# 2. Shift the LED pattern when another button is pressed (the shift direction is determined by the value of the dip switch)
# 3. Toggle/Flash the LEDs for 5 seconds when another button is pressed
# 4. Change the delay between toggle when the last button is pressed.
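# A minimal sketch for step 1, assuming the same `LED`/`Button` API used above; the edge-detection helper, polling count and delay are illustrative choices, not part of the exercise statement:

```python
import time

def button_toggles_led(read_button, led, polls=50, delay=0.05):
    """Poll a button and toggle the LED once per press (rising edge)."""
    previous = 0
    for _ in range(polls):
        state = read_button()
        if state and not previous:  # button went from released to pressed
            led.toggle()
        previous = state
        time.sleep(delay)

# On the board this could be wired up as:
# button_toggles_led(Button(0).read, LED(0))
```

# Detecting the rising edge (rather than the level) makes one press produce exactly one toggle, even if the button is held down across several polls.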
| Session_1/4_Programming_onboard_peripherals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import sys
# !{sys.executable} -m pip install kfp==1.4.0 kfp-server-api==1.2.0 --user>/dev/null
import os,sys
import kfp
import random
import string
components_url = "/mnt/dkube/pipeline/components/"
dkube_training_op = kfp.components.load_component_from_file(components_url + "training/component.yaml")
dkube_serving_op = kfp.components.load_component_from_file(components_url + "serving/component.yaml")
token = os.getenv("DKUBE_USER_ACCESS_TOKEN")
client = kfp.Client(host=os.getenv("KF_PIPELINES_ENDPOINT"), existing_token=token, namespace=os.getenv("USERNAME"))
@kfp.dsl.pipeline(
name='dkube-attentive-pl',
description='attentive pipeline with dkube components'
)
def attentive_pipeline():
train = dkube_training_op(token,container = '{"image":"ocdr/d3-datascience-pytorch-cpu:v1.6"}',
framework="pytorch", version="1.6",
program="AttentiveChrome", run_script="python v2PyTorch/train.py",
datasets='["AttentiveData"]', outputs='["AttentiveModel"]',
input_dataset_mounts='["/v2PyTorch"]',
output_mounts='["/model"]',
envs='[{"EPOCHS": "30"}]')
serving = dkube_serving_op(token,model = train.outputs['artifact'], device='cpu',
serving_image='{"image":"ocdr/pytorchserver:1.6"}',
transformer_image='{"image":"ocdr/d3-datascience-pytorch-cpu:v1.6-1"}',
transformer_project="AttentiveChrome",
transformer_code='v2PyTorch/transformer.py').after(train)
client.create_run_from_pipeline_func(attentive_pipeline, arguments={})
# +
# compile
#import tempfile
#f = tempfile.NamedTemporaryFile(suffix=".zip", delete=False)
#kfp.compiler.Compiler().compile(mnist_pipeline,f.name)
# +
#pipeline_id = client.get_pipeline_id("mnist-pipeline")
#if(pipeline_id != None):
#pipeline_version = ''.join(random.choices(string.ascii_uppercase + string.digits, k = 7))
#new_pipeline = client.upload_pipeline_version(f.name,pipeline_version_name = pipeline_version, pipeline_id = pipeline_id)
#else:
# pipeline = client.upload_pipeline(f.name, pipeline_name="mnist-pipeline")
| v2PyTorch/.ipynb_checkpoints/pipeline-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py38
# language: python
# name: py38
# ---
# # The Wilson score interval and weighted histograms
#
# The Wilson score interval (WSI) won the LHCb internal challenge for the most suitable interval for efficiencies. The question of what to do when events are weighted was left open at the time. It is addressed here: the WSI can also be used with weighted histograms, if ordinary counts in unweighted histograms are replaced with effective counts in weighted histograms.
import numpy as np
import boost_histogram as bh
from matplotlib import pyplot as plt
# https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
#
# Wilson score interval:
# $$
# p_\text{min} = \frac{n_S + \frac 12 z^2}{n_S + n_F + z^2} - \frac{z}{n_S + n_F + z^2} \sqrt{\frac{n_S n_F}{n_S + n_F} + \frac{z^2}{4}}
# $$
# $$
# p_\text{max} = \frac{n_S + \frac 12 z^2}{n_S + n_F + z^2} + \frac{z}{n_S + n_F + z^2} \sqrt{\frac{n_S n_F}{n_S + n_F} + \frac{z^2}{4}}
# $$
# with $n_S$ counts of selected events, $n_F$ counts of rejected events, $z$ number of sigmas ($z = 1$ for standard intervals).
#
# In case of weighted events, the $n_S,n_F$ are the effective counts, computed as
# $$
# n_\text{effective} = \frac{\big(\sum_i w_i\big)^2}{\sum_i w^2_i}.
# $$
# In boost-histogram, this is computed as `n_eff = h.values() ** 2 / h.variances()` for a histogram `h` with `WeightedSum` storage.
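# A quick standalone sanity check of this formula (the helper name `effective_count` is just for illustration): with unit weights the effective count reduces to the plain event count, while unequal weights shrink it towards the few dominant events.

```python
import numpy as np

def effective_count(w):
    """Effective number of events of a weighted sample: (sum w)^2 / (sum w^2)."""
    w = np.asarray(w, dtype=float)
    s = w.sum()
    return s * s / np.sum(w * w) if s > 0 else 0.0

print(effective_count(np.ones(100)))      # unit weights: exactly 100.0
print(effective_count([1.0, 1.0, 10.0]))  # dominated by one heavy event: ~1.41
```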
def wilson_score_interval(n_s, n_f, z=1):
n = n_s + n_f
z2 = z * z
center = (n_s + 0.5 * z2) / (n + z2)
delta = z / (n + z2) * np.sqrt(n_s * n_f / n + 0.25 * z2)
return center - delta, center + delta
# +
p_truth = 0.2
rng = np.random.default_rng(1)
w_s = bh.accumulators.WeightedSum()
w_f = bh.accumulators.WeightedSum()
w = rng.exponential(10, size=100)
m = rng.uniform(size=len(w)) < p_truth
w_s.fill(w[m])
w_f.fill(w[~m])
# +
n_s = w_s.value ** 2 / w_s.variance if w_s.variance > 0 else 0
n_f = w_f.value ** 2 / w_f.variance if w_f.variance > 0 else 0
p = n_s / (n_s + n_f)
p_min, p_max = wilson_score_interval(n_s, n_f)
plt.errorbar([1], [p], [[p-p_min], [p_max-p]], fmt="ok")
print(f"efficiency = {p:.2f} - {p-p_min:.2f} + {p_max-p:.2f}")
# -
# Note that the error bar is asymmetric in general. This is especially noticeable when p is close to 0 or 1.
#
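# A self-contained numeric illustration of this asymmetry (the formula of `wilson_score_interval` is repeated here so the snippet runs on its own): with 1 success out of 100 trials, the interval reaches much further above the naive estimate $\hat p = 0.01$ than below it, and its lower edge stays positive.

```python
import numpy as np

def wilson(n_s, n_f, z=1.0):
    # same formula as wilson_score_interval, repeated for self-containment
    n = n_s + n_f
    z2 = z * z
    center = (n_s + 0.5 * z2) / (n + z2)
    delta = z / (n + z2) * np.sqrt(n_s * n_f / n + 0.25 * z2)
    return center - delta, center + delta

lo, hi = wilson(1, 99)
p_hat = 1 / 100
print(p_hat - lo, hi - p_hat)  # the upper arm is noticeably longer than the lower one
```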
# Let's check with a parametric bootstrap simulation whether this uncertainty is correct. It is parametric because we use our knowledge that the weights are exponentially distributed.
# +
rng = np.random.default_rng(1)
w_mean = np.mean(w)
p_boot = []
for iboot in range(1000):
wi = rng.exponential(w_mean, size=len(w))
mi = rng.uniform(size=len(w)) < p
t_s = bh.accumulators.WeightedSum()
t_f = bh.accumulators.WeightedSum()
t_s.fill(wi[mi])
t_f.fill(wi[~mi])
p_boot.append(t_s.value / (t_s.value + t_f.value))
center = np.mean(p_boot)
delta = np.std(p_boot)
p_min_boot = center - delta
p_max_boot = center + delta
plt.errorbar([1], [p], [[p-p_min], [p_max-p]], fmt="ok")
plt.errorbar([2], [p], [[p-p_min_boot], [p_max_boot-p]], fmt="sr")
plt.axhline(p_truth, ls="--", color="0.5")
print(f"efficiency = {p:.2f} - {p-p_min:.2f} + {p_max-p:.2f}")
print(f"efficiency(boot) = {p:.2f} - {p-p_min_boot:.2f} + {p_max_boot-p:.2f}")
# -
# Ok, check passed.
| Wilson Score Interval with Weighted Histograms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
#
# # Task
#
# There are four types of `Task` in total:
#
# - `BaseTask` is the base class for all tasks, and all operators are subclassed from it. Code in
# `BaseTask` always runs in the main thread and event loop. Due to this limitation, you should not
# put computation-heavy logic in it, otherwise the whole event loop will be blocked and the execution of other tasks hindered.
# - `Task` is for running Python functions. Execution of the `run_job()` method is scheduled and then carried out by the
# configured executor.
# - `ShellTask` is for executing bash scripts. Compared to `Task`, it supports the `conda` and `image` options. When these
# options are specified, additional bash commands handling environment creation and activation are run before the
# execution of the user-defined bash command.
#
#
#
# ### EnvTask
#
#
#
#
# ### Operators
#
# All operators can be used in three ways:
# - Via the operator's class name; a task object needs to be created before use. Usage:
# `Map()(channel)`
# - Via the operator's function name; every operator is wrapped in a function that accepts the same arguments as the
# original operator class. Usage: `merge(ch1, ch2, ch3)`
# - Via a `Channel` method; every operator is also added as a method of `Channel`.
# Usage: `ch1.map(fn)`
#
#
# All predefined operators are:
# - `Merge()`
# - `Flatten()`
# - `Mix()`
# - ....
#
#
# #### Adding a custom operator
# - Since operators are all `Task`s, you can define your own operator task and even add
# it as a `Channel` method.
#
#
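# As a schematic, library-independent sketch of this design (all names below are toy stand-ins, not the actual API), one way a single operator can be exposed in all three forms:

```python
class Channel(list):
    """Toy channel, modelled as a plain list of items."""
    def map(self, fn):
        # method-style access delegating to the operator class
        return Map(fn)(self)

class Map:
    """Toy operator: applies a function to every item of a channel."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, channel):
        return Channel(self.fn(x) for x in channel)

def map_fn(fn, channel):
    # function-style wrapper taking the same arguments as the operator class
    return Map(fn)(channel)

ch = Channel([1, 2, 3])
print(Map(lambda x: x * 2)(ch))     # operator-class style -> [2, 4, 6]
print(map_fn(lambda x: x * 2, ch))  # function style       -> [2, 4, 6]
print(ch.map(lambda x: x * 2))      # channel-method style -> [2, 4, 6]
```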
| docs/source/task.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="OGbKdJlYqkDB" colab_type="text"
# # **Imports**
# + id="Td8GtVvPqnzI" colab_type="code" colab={}
# !pip install mido
# !pip install keras-rl
# !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
# !unzip ngrok-stable-linux-amd64.zip
# + id="jZG_hVAFqoSX" colab_type="code" colab={}
from mido import MidiFile, MidiTrack, Message
from keras.layers import CuDNNLSTM, LSTM, Dense, Activation, Dropout
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.optimizers import RMSprop,Adam,SGD,Adagrad
from keras.callbacks import TensorBoard
from rl.callbacks import FileLogger, ModelIntervalCheckpoint
import numpy as np
import mido
import random
from google.colab import files
# + [markdown] id="1ZaVT-gVqqDw" colab_type="text"
# # **Dataset Import**
# + id="wYTDW0hiquyQ" colab_type="code" colab={}
uploaded = files.upload()
fileNames = []
for fn in uploaded.keys():
fileNames.append(fn)
# + [markdown] id="SL3XU3KQsAqd" colab_type="text"
# # **Data Preparation**
# + id="FjSTpRYvsDef" colab_type="code" colab={}
'''
Retrieve all notes from the songs and separate them into noteSets
'''
noteSets = []
noteDescriptors = 3
#Iterate through all songs
for song in fileNames:
time=float(0)
prev=float(0)
midi = MidiFile(song)
notes = []
#Iterate through messages in midi file
for msg in midi:
time += msg.time
if not msg.is_meta:
if msg.type == 'note_on':
'''
notes are in the form [type, note, velocity];
we want to convert to [note, velocity, time]
'''
note = msg.bytes()
note = note[1:3]
note.append(time-prev)
prev = time
notes.append(note)
#Add notes from song to noteSets
noteSets.append(notes)
# + id="csCMuVYKupF7" colab_type="code" colab={}
'''
Scale all notes to [0,1]
[note, velocity, time] -> [(note-24)/88, velocity/127, time/max_time]
'''
n = []
for notes in noteSets:
for note in notes:
note[0] = (note[0] - 24) / 88
note[1] = note[1] / 127
n.append(note[2])
max_time = max(n)
for notes in noteSets:
for note in notes:
note[2] = note[2] / max_time
# + id="FRJw2EUEvhDY" colab_type="code" colab={}
'''
Constructing sentences and labels for training
'''
sentences = []
labels = []
window = 50
for notes in noteSets:
for i in range(len(notes) - window):
sentence = notes[i : i + window]
label = notes[i + window]
sentences.append(sentence)
labels.append(label)
#Conversion to numpy array for feeding into model
sentences = np.array(sentences)
labels = np.array(labels)
# + [markdown] id="cjoO2x8jazh6" colab_type="text"
# # **TensorBoard**
# + id="J8kEowLHa1Xq" colab_type="code" colab={}
# Retrieved from https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/
LOG_DIR = './log'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 6006 &')
# ! curl -s http://localhost:4040/api/tunnels | python3 -c \
# "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
tbCallBack = TensorBoard(log_dir='./log',
write_graph=True,
write_grads=True,
write_images=True)
# + [markdown] id="0awtNx4awnGA" colab_type="text"
# # **Model Definition**
# + id="ks1snlhRwp0a" colab_type="code" colab={}
model=Sequential()
model.add(CuDNNLSTM(512,input_shape=(window, noteDescriptors),return_sequences=True))
model.add(Dropout(0.3))
model.add(CuDNNLSTM(512, return_sequences=True))
model.add(Dropout(0.3))
model.add(CuDNNLSTM(512))
model.add(Dense(1024))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Dropout(0.3))
model.add(Dense(noteDescriptors,activation="softmax"))
model.summary()
# + [markdown] id="zbwhczbQxIDC" colab_type="text"
# # **Model Training**
# + id="7OTXxkRrxJpr" colab_type="code" colab={}
model.compile(loss="categorical_crossentropy",
optimizer="RMSprop",
metrics=["accuracy"])
# + id="oAtXh204xpN-" colab_type="code" colab={}
weights_filename = 'model_weights_final.h5f'
callbacks = [tbCallBack]
model.fit(sentences,
labels,
epochs=500,
batch_size=200,
validation_split=0.1,
callbacks=callbacks)
model.save_weights(weights_filename, overwrite=True)
# + [markdown] id="u2cNz5wMyJ6B" colab_type="text"
# # **Music Generation**
# + id="TINMYrhryNNo" colab_type="code" colab={}
'''
Generate a song based on database
'''
songLength = 500
#Pick a random song to start generation
noteSetSeed = random.randint(0, len(noteSets)-1)
seed = noteSets[noteSetSeed][0 : window]
x = seed
x = np.expand_dims(x, axis = 0)
predict=[]
for i in range(songLength):
p=model.predict(x)
x=np.squeeze(x) #squeezed to concatenate
x=np.concatenate((x, p))
x=x[1:]
x=np.expand_dims(x, axis=0) #expanded to roll back
p=np.squeeze(p)
predict.append(p)
# + id="yw2hjfQIzdv1" colab_type="code" colab={}
'''
Reverse-scale all notes back
[note, velocity, time] -> [note * 88 + 24, velocity * 127, time * max_time]
'''
for a in predict:
a[0] = int(88*a[0] + 24)
a[1] = int(127*a[1])
a[2] *= max_time
# clamp values to the valid range (note[0]=24-102)(note[1]=0-127)(note[2]>=0)
if a[0] < 24:
a[0] = 24
elif a[0] > 102:
a[0] = 102
if a[1] < 0:
a[1] = 0
elif a[1] > 127:
a[1] = 127
if a[2] < 0:
a[2] = 0
# + id="89sUtz4Mz0Qk" colab_type="code" colab={}
'''
Save track from bytes data
'''
m=MidiFile()
track=MidiTrack()
m.tracks.append(track)
for note in predict:
#147 means note_on
note=np.insert(note, 0, 147)
bytes=note.astype(int)
msg = Message.from_bytes(bytes[0:3])
time = int(note[3]/0.001025) # to rescale to midi's delta ticks. arbitrary value
msg.time = time
track.append(msg)
m.save('Ai_song.mid')
| Prototype1_NoteBasedLSTM/NoteBasedLSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (scRNA)
# language: python
# name: scrna-environment
# ---
# # **Quality Control (QC) and filtering**
#
# This notebook serves for filtering of the second human testis sample. It is analogous to the filtering of the other sample, so feel free to go through it faster and just skim through the text.
#
# ---------------------
#
# **Motivation:**
#
# Quality control and filtering are the most important steps of single cell data analysis. Allowing low-quality cells into your analysis will compromise or mislead your conclusions by adding hundreds of meaningless data points to your workflow.
# The main sources of low quality cells are
# - broken cells for which some of their transcripts get lost
# - cells isolated together with too much ambient RNA
# - a missed cell during isolation (e.g. an empty droplet in a microfluidic machine)
# - multiple cells isolated together (multiplets, usually only two cells - doublets)
#
# ---------------------------
#
# **Learning objectives:**
#
# - Understand and discuss QC issues and measures from single cell data
# - Explore QC graphs and set filtering tools and thresholds
# - Analyze the results of QC filters and evaluate necessity for different filtering
# ----------------
# **Execution time: 40 minutes**
# ------------------------------------
# **Import the packages**
# + tags=[]
import scanpy as sc
import pandas as pd
import scvelo as scv
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
import ipywidgets as widgets
# + tags=[]
sample_3 = sc.read_h5ad('../../Data/notebooks_data/sample_3.h5ad')
# -
# We calculate the percentage of mitochondrial transcripts in each cell. A high percentage indicates that material from broken cells may have been captured during cell isolation and then sequenced. The mitochondrial percentage is not calculated by `scanpy` by default, because it requires an identifier for mitochondrial genes, and there is no universal standard. In our case, we look at genes that contain `MT-` in their ID, and calculate their transcript proportion in each cell. We save the result as an observation in `.obs['perc_mito']`
# + tags=[]
MT = ['MT-' in i for i in sample_3.var_names]  # mitochondrial gene IDs contain 'MT-'
perc_mito = np.sum( sample_3[:,MT].X, 1 ) / np.sum( sample_3.X, 1 )
sample_3.obs['perc_mito'] = perc_mito.copy()
# -
# ## Visualize and evaluate quality measures
#
# We can make some plots to look at quality measures combined together.
# **Counts vs Genes:** this is a typical plot, where you look at the total transcripts per cell (x axis) and detected genes per cell (y axis). Usually, those two measures grow together. Points with a lot of transcripts and genes might be multiplets (multiple cells sequenced together as one), while very few transcripts and genes denote the presence of only ambient RNA or very low quality sequencing of a cell. Below, the dots are coloured by the percentage of mitochondrial transcripts. Note how a high proportion is often found in cells with very few transcripts and genes (bottom left corner of the plot)
# + tags=[]
sc.pl.scatter(sample_3, x='total_counts', y='n_genes_by_counts', color='perc_mito',
              title='Nr of transcripts vs Nr detected genes, coloured by mitochondrial content',
size=50)
# -
# **Transcripts and genes distribution:** Here we simply look at the distributions of transcripts per cell and detected genes per cell. Note how each distribution is bimodal: this usually denotes a cluster of low-quality cells alongside the viable cells. Filtering out the data points in the left-most mode of those graphs can remove a lot of cells from a dataset, but this is quite normal and nothing to be worried about. The right side of each distribution shows a tail with a few cells having a lot of transcripts and genes. It is also good to filter out some of those extreme values - for technical reasons, this will also help in getting a better normalization of the data later on.
# + tags=[]
ax = sns.distplot(sample_3.obs['total_counts'], bins=50)
ax.set_title('Distribution of transcripts per cell')
# + tags=[]
ax = sns.distplot(sample_3.obs['n_genes_by_counts'], bins=50)
ax.set_title('Distribution of detected genes per cell')
# -
# **Mitochondrial content**: In this dataset there are few cells with a high percentage of mitochondrial content - precisely 245 if we set 0.1 (that is, 10%) as a threshold. A value between 10% and 20% is the usual standard when filtering single-cell datasets.
# + tags=[]
# subsetting to see how many cells have a mitochondrial percentage above 10%
sample_3[ sample_3.obs['perc_mito']>0.1, : ].shape
# + tags=[]
ax = sns.distplot(sample_3.obs['perc_mito'], bins=50)
ax.set_title('Distribution of mitochondrial content per cell')
# -
# ## Choosing thresholds
# Let's establish some filtering values by looking at the plots above, and then apply them.
# + tags=[]
MIN_COUNTS = 5000 #minimum number of transcripts per cell
MAX_COUNTS = 40000 #maximum number of transcripts per cell
MIN_GENES = 2000 #minimum number of genes per cell
MAX_GENES = 6000 #maximum number of genes per cell
MAX_MITO = .1 #mitochondrial percentage threshold
#plot cells filtered by max transcripts
a=sc.pl.scatter(sample_3[ sample_3.obs['total_counts']<MAX_COUNTS ],
x='total_counts', y='n_genes_by_counts', color='perc_mito', size=50,
                title=f'Nr of transcripts vs Nr detected genes, coloured by mitochondrial content\nsubsetting with threshold MAX_COUNTS={MAX_COUNTS}')
#plot cells filtered by min genes
b=sc.pl.scatter(sample_3[ sample_3.obs['n_genes_by_counts'] > MIN_GENES ],
x='total_counts', y='n_genes_by_counts', color='perc_mito', size=50,
                title=f'Nr of transcripts vs Nr detected genes, coloured by mitochondrial content\nsubsetting with threshold MIN_GENES={MIN_GENES}')
# -
#
# The following commands filter using the chosen thresholds.
# Again, scanpy does not do the mitochondrial QC filtering,
# so we do that on our own by subsetting the data.
# Note for the gene filtering: the parameter `min_cells`
# removes all genes detected in fewer than 10 cells -
# standard values for this parameter are usually between 3 and 10,
# and do not come from looking at the QC plots.
# + tags=[]
sc.preprocessing.filter_cells(sample_3, max_counts=MAX_COUNTS)
sc.preprocessing.filter_cells(sample_3, min_counts=MIN_COUNTS)
sc.preprocessing.filter_cells(sample_3, min_genes=MIN_GENES)
sc.preprocessing.filter_cells(sample_3, max_genes=MAX_GENES)
sc.preprocessing.filter_genes(sample_3, min_cells=10)
sample_3 = sample_3[sample_3.obs['perc_mito']<MAX_MITO].copy()
# -
# We have reduced the data quite a lot from the original >8000 cells. Often, even more aggressive filtering is applied: for example, one could have set the minimum number of detected genes to 3000, which would still fall in the area between the two modes of the QC plot.
# + tags=[]
print(f'Cells after filters: {sample_3.shape[0]}, Genes after filters: {sample_3.shape[1]}')
# -
# ## Doublet filtering
# Another important step is filtering out multiplets. In almost all cases these are doublets, because triplets and higher-order multiplets are extremely rare. Read [this more technical blog post](https://liorpachter.wordpress.com/2019/02/07/sub-poisson-loading-for-single-cell-rna-seq/) for more explanations about this.
# The external tool `scrublet` simulates doublets by putting together the transcripts of random pairs of cells from the dataset. Then it assigns a score to each cell in the data, based on the similarity with the simulated doublets. An `expected_doublet_rate` of 0.06 (6%) is quite a typical value for single cell data, but if you have a better estimate from laboratory work, microscope imaging or a specific protocol/sequencing machine, you can also tweak the value.
# `random_state` seeds the simulations: using a specific random state means that you will always simulate the same doublets whenever you run this code. This allows you to reproduce exactly the same results every time and is a great thing for reproducibility in your own research.
# + tags=[]
sc.external.pp.scrublet(sample_3,
expected_doublet_rate=0.06,
random_state=12345)
# -
# It seems that the doublet rate is likely to be lower than 6%, meaning that in this regard the data has been produced pretty well. We now plot the doublet scores assigned to each cell by the algorithm. We can see that most cells have a low score (the score is a value between 0 and 1). Datasets with many doublets show a more bimodal distribution, while here we just have a light tail beyond 0.1.
# + tags=[]
sns.distplot(sample_3.obs['doublet_score'])
# -
# We can choose 0.1 as the filtering threshold for the few detected doublets, or alternatively use the automatic selection of doublets by the algorithm. We will choose the latter option and use the automatically chosen doublets.
# + tags=[]
sample_3 = sample_3[np.invert(sample_3.obs['predicted_doublet'])].copy()
# -
# ## Evaluation of filtering
# A quite basic but easy way to look at the results of our filtering is to normalize and plot the dataset on some projections. Here we use a standard normalization technique that consists of:
# - **TPM normalization**: the transcripts of each cell are normalized so that they sum to the same total in each cell. This should make cells comparable independently of how many transcripts have been retained during cell isolation.
# - **Logarithmization**: the logarithm of the normalized transcripts is calculated. This reduces the variability of transcript values and highlights variations due to biological factors.
# - **Standardization**: Each gene is standardized across all cells. This is useful for example for projecting the data onto a PCA.
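#
# On a tiny toy matrix the three steps look like this (pure numpy with made-up values; the real pipeline below uses the scanpy functions instead):

```python
import numpy as np

X = np.array([[1., 3.],
              [4., 2.]])  # 2 cells (rows) x 2 genes (columns)

# 1) per-cell normalization: scale each row so all totals match the mean total
target = X.sum(axis=1).mean()
Xn = X / X.sum(axis=1, keepdims=True) * target

# 2) logarithmization: log(1 + x)
Xl = np.log1p(Xn)

# 3) standardization: zero mean and unit variance per gene (column)
Xs = (Xl - Xl.mean(axis=0)) / Xl.std(axis=0)
```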
# + tags=[]
# TPM normalization and storage of the matrix
sc.pp.normalize_per_cell(sample_3)
sample_3.layers['umi_tpm'] = sample_3.X.copy()
# Logarithmization and storage
sc.pp.log1p(sample_3)
sample_3.layers['umi_log'] = sample_3.X.copy()
# Select some of the most meaningful genes to calculate the PCA plot later
# This must be done on logarithmized values
sc.pp.highly_variable_genes(sample_3, n_top_genes=15000)
# save the dataset
sample_3.write('../../Data/notebooks_data/sample_3.filt.h5ad')
# standardization and matrix storage
sc.pp.scale(sample_3)
sample_3.layers['umi_gauss'] = sample_3.X.copy()
# -
# Now we calculate the PCA projection
# + tags=[]
sc.preprocessing.pca(sample_3, svd_solver='arpack', random_state=12345)
# -
# We can look at the PCA plot and colour it by some quality measures and gene expression. We can already see that the PCA has a clear structure with only a few dots scattered around. It seems the filtering has given a good result.
# + tags=[]
sc.pl.pca(sample_3, color=['total_counts','SYCP1'])
# -
# We plot the variance ratio to see how each component of the PCA changes in variability. Small changes in variability denote that the components are mostly modeling noise in the data. We can choose a threshold (for example 15 PCA components) to be used in all algorithms that use PCA to calculate any quantity.
# + tags=[]
sc.plotting.pca_variance_ratio(sample_3)
# -
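# One way to turn this visual choice into a rule of thumb is to keep the smallest number of components covering most of the plotted variance. A sketch on made-up variance ratios (an illustration, not what the notebook does):

```python
import numpy as np

# Stand-in for the PCA variance ratios (values are made up for illustration).
vr = np.array([0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01])

# Smallest number of components explaining >= 90% of the listed variance.
cum = np.cumsum(vr) / vr.sum()
n_pcs = int(np.searchsorted(cum, 0.90) + 1)
print(n_pcs)
```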
# We project the data using the UMAP algorithm. UMAP is very good at preserving the structure of a dataset in low dimensions, if any is present. We first calculate the neighbors of each cell (that is, its most similar cells); those are then used for the UMAP. The neighbors are calculated using the PCA matrix instead of the full data matrix, so we can choose the number of PCA components to use (parameter `n_pcs`). Many algorithms work on the PCA, so you will see this parameter used again in other places.
# + tags=[]
sc.pp.neighbors(sample_3, n_pcs=15, random_state=12345)
# + tags=[]
sc.tools.umap(sample_3, random_state=54321)
# -
# The UMAP plot gives a pretty well-structured output for this dataset. We will keep working further with this filtering.
# + tags=[]
sc.plotting.umap(sample_3, color=['total_counts','SYCP1'])
# -
# -------------------------------
# ## Wrapping up
# The second human testis dataset is now filtered and you can proceed to the normalization and integration part of the analysis (Notebook `Part03_normalize_and_integrate`).
# Source notebook: Notebooks/Python/Part02_filtering_sample3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Plot recommended Zip Codes for investment
# !pip install cufflinks
import pandas as pd
import numpy as np
# %matplotlib inline
# +
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
print(__version__) # requires version >= 1.9.0
# -
import cufflinks as cf
# For Notebooks
init_notebook_mode(connected=True)
# For offline use
cf.go_offline()
unpickled_df = pd.read_pickle("./data/rec1.pkl")
unpickled_df.head()
# ### Plot of forecasted values by MAE
# +
df = unpickled_df.head(20)
display = go.Scatter(
x = df['MAE'],
y = df['yhat'],
text = df['City'],
hoverinfo = 'text',
mode = 'markers+text',
name = 'df',
marker = dict(size = 11,
color = 'rgba(76, 224, 128, 0.79)'),
textposition='bottom right'
)
layout = go.Layout(
    title='2018 Investment Zip Codes',
xaxis={'title':'Mean Absolute Error'},
yaxis={'title':'Projected Value'},
)
data=[display]
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename='hover-chart-basic')
# -
# ### Plot of forecasted values by growth
# +
df = unpickled_df.head(20)
display = go.Scatter(
x = df['2018 Growth'],
y = df['yhat'],
text = df['City'],
hoverinfo = 'text',
mode = 'markers+text',
name = 'df',
marker = dict(size = 11,
color = 'rgba(76, 200, 128, 0.79)'),
textposition='bottom right',
textfont=dict(
family='sans serif',
size=12,
color='#1f77b4')
)
layout = go.Layout(
    title='2018 Investment Zip Codes',
xaxis={'title':'2018 % Growth'},
yaxis={'title':'Projected Value'},
)
data=[display]
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename='hover-chart-basic')
# -
unpickled_df2 = pd.read_pickle("./data/rec2.pkl")
# +
df = unpickled_df2.head(20)
display = go.Scatter(
x = df['MAE'],
y = df['yhat'],
text = df['City'],
hoverinfo = 'text',
mode = 'markers+text',
name = 'df',
marker = dict(size = 11,
color = 'rgba(76, 224, 128, 0.79)'),
textposition='bottom right'
)
layout = go.Layout(
    title='2018 Investment Zip Codes',
xaxis={'title':'Mean Absolute Error'},
yaxis={'title':'Projected Value'},
)
data=[display]
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename='hover-chart-basic')
# +
df = unpickled_df2.head(20)
display = go.Scatter(
x = df['2018 Growth'],
y = df['yhat'],
text = df['City'],
hoverinfo = 'text',
mode = 'markers+text',
name = 'df',
marker = dict(size = 11,
color = 'rgba(76, 200, 128, 0.79)'),
textposition='bottom right',
textfont=dict(
family='sans serif',
size=12,
color='#1f77b4')
)
layout = go.Layout(
    title='2018 Investment Zip Codes',
xaxis={'title':'2018 % Growth'},
yaxis={'title':'Projected Value'},
)
data=[display]
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename='hover-chart-basic')
# Source notebook: zillow_Plotly.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Tenntucky/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/module3-introduction-to-bayesian-inference/Kole_Goldsberry_LS_DS_133_Introduction_to_Bayesian_Inference_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="H7OLbevlbd_Z" colab_type="text"
# # Lambda School Data Science Module 133
#
# ## Introduction to Bayesian Inference
#
#
#
# + [markdown] id="P-DzzRk5bf0z" colab_type="text"
# ## Assignment - Code it up!
#
# Most of the above was pure math - now write Python code to reproduce the results! This is purposefully open ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal - refactor your code into helpful reusable functions!
#
# Specific goals/targets:
#
# 1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations
# 2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week
# 3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach
# 4. In your own words, summarize the difference between Bayesian and Frequentist statistics
#
# If you're unsure where to start, check out [this blog post of Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar!
#
# Stretch goals:
#
# - Apply a Bayesian technique to a problem you previously worked (in an assignment or project work) on from a frequentist (standard) perspective
# - Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples
# - Take PyMC3 further - see if you can build something with it!
# + id="xpVhZyUnbf7o" colab_type="code" colab={}
# TODO - code!
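# A minimal sketch of goal 1; the numeric rates below are assumed example values (the classic breathalyzer setup), not taken from the lecture:

```python
def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk):
    """Bayes' rule: P(drunk | +) = P(+ | drunk) * P(drunk) / P(+)."""
    return prob_positive_drunk * prob_drunk_prior / prob_positive

# Assumed example: 1/1000 prior, perfect sensitivity, 8% false-positive rate.
p_drunk = 1 / 1000
p_pos_given_drunk = 1.0
# Marginal P(+) via the law of total probability.
p_pos = p_pos_given_drunk * p_drunk + 0.08 * (1 - p_drunk)

posterior = prob_drunk_given_positive(p_drunk, p_pos, p_pos_given_drunk)
print(posterior)
```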
# + [markdown] id="uWgWjp3PQ3Sq" colab_type="text"
# ## Resources
# + [markdown] id="QRgHqmYIQ9qn" colab_type="text"
# - [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator)
# - [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139)
# Source notebook: module3-introduction-to-bayesian-inference/Kole_Goldsberry_LS_DS_133_Introduction_to_Bayesian_Inference_Assignment.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''recsys'': conda)'
# name: python388jvsc74a57bd0ae065c56e262e6694ecb174f95b527853807b9a02b7a5d470fad054d1e0ab874
# ---
# +
import sys
sys.path.append('../../..')
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn import preprocessing
from tensorflow.keras.models import save_model,load_model
from utils.preprocessing import *
from utils.dataset import Dataset
import pickle
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
from utils.evaluate import calculate_ctr, compute_rce, average_precision_score
import tensorflow.keras.backend as K
import core.config as conf
# -
# ## Load Data & Preprocessing
ds = Dataset(train=True, target_encoding=False)
data_path = conf.dataset_mini_path + 'train'
df = read_data(data_path)
df = ds.preprocess(df)
data_path = conf.dataset_mini_path + 'valid'
val_df = read_data(data_path)
val_df = ds.preprocess(val_df)
data_path = conf.dataset_mini_path + 'test'
test_df = read_data(data_path)
test_df = ds.preprocess(test_df)
df.columns
pkl_path = conf.dict_path + 'user_main_language.pkl'
with open(pkl_path, 'rb') as f:
user_to_main_language = pickle.load(f)
users = user_to_main_language.keys()
used_features = ['engager_follower_count',
'engager_following_count',
'engager_is_verified',
'engager_account_creation',
'creator_follower_count',
'creator_following_count',
'creator_is_verified',
'creator_account_creation',
'media',
'domains',
'language',
'dt_day',
'dt_dow',
'dt_hour',
'len_domains']
# +
X_train = df[used_features]
Y_train = df['like']
X_val = val_df[used_features]
Y_val = val_df['like']
X_test = test_df[used_features]
Y_test = test_df['like']
# -
df['engager_main_language'] = df['engager_id'].apply(lambda x: user_to_main_language[x])
df['creator_main_language'] = df['creator_id'].apply(lambda x: user_to_main_language[x])
df['is_same_main_language'] = df['engager_main_language'] == df['creator_main_language']
df.head()
# Source notebook: models/notebooks/DCN/deep_and_cross.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # <NAME>
# + [markdown] tags=[]
# ## Research Question
#
# Is there a correlation between cyanobacterial hepatotoxin concentrations and total phosphorus concentration in European lakes?
# -
import numpy as np
import pandas as pd
lakedata = pd.read_csv("../data/raw/EMLSdata_10Aug.csv")
# # Milestone 3
# ## Task 1
# ### EDA of Data
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
lakedata.info()
lakedata.head()
# #### Interesting to note: There are 5 object datatypes and 39 float64 datatypes.
rc=lakedata.shape
cn=lakedata.columns
print(f"Number of rows and columns {rc}")
print(f"Column Names: {cn}")
print("Number of unique values for each variable:")
print(f"{lakedata.nunique(axis=0)}")
lakedata.describe().apply(lambda s: s.apply('{0:.1f}'.format))
# Note: the mean of each column may not be meaningful, since measurements come from differing lake depths. Still good to look over though.
lakedata.describe(include='object').T
#Object datatype descriptions.
#A lot of the data is collected from Poland.
# view which column has how many rows of missing data
lakedata.isnull().sum()
# ## Task 2
# ### Set up an analysis pipeline
# #### 1. Load Data
#load unprocessed lakedata
ldu = pd.read_csv("../data/raw/EMLSdata_10Aug.csv")
# #### 2. Clean Data
#remove columns not being used
ldr=ldu.drop(labels=['MeanDepth_m','Lake_ID','MC_LY_ugL','MaximumDepth_m', 'TN_mgL','MC_LW_ugL','MC_LF_ugL','NOD_ugL','NO3NO2_mgL','PO4_ugL','NH3_mgL', 'Date','LabName','Latitude','ThermoclineDepth_m','SamplingDepth_m','Longitude','Altitude_m','SurfaceTemperature_C','EpilimneticTemperature_C','CYN_ugL','ATX_ugL','Lutein_ugL','Echinenone_ugL','Chlorophyllc2_ugL','Peridinin_ugL','Alloxanthin_ugL','Diadinoxanthin_ugL','Fucoxanthin_ugL','Zeaxanthin_ugL','Chlorophyllb_ugL','Diatoxanthin_ugL','Chlorophylla_ugL','SecchiDepth_m','Violaxanthin_ugL'], axis=1)
#remove rows (lakes) with missing data
ldr=ldr.dropna(axis=0)
# #### 3. Process Data
# +
#Find and replace operations
#(examples include replacing the string ‘Strongly Agree’ with the number 5).
ldr=ldr.rename(columns=
{"MeanDepth_m": "Mean Depth (m)",
"LakeName": "Lake Name",
"TP_mgL": "Concentration of total phosphorus (mgL)",
"MC_YR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin YR (ugL)",
"MC_dmRR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin dmRR (ugL)",
"MC_RR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin RR (ugL)",
"MC_dmLR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin dmLR (ugL)",
"MC_LR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin LR (ugL)",
"MC_LW_ugL": "Concentration of cyanobacterial hepatotoxin microcystin LW (ugL)",
"MC_LF_ugL": "Concentration of cyanobacterial hepatotoxin microcystin LF (ugL)",
"NOD_ugL": "Concentration of cyanobacterial hepatotoxin nodularin (ugL)"})
# -
# #### 4. Wrangle Data
#Restructure data format (columns and rows).
ldr= ldr.sort_values(by=['Concentration of total phosphorus (mgL)'], ascending=True)
# ## Task 3
# ### Method chaining and writing python programs
# +
def load_and_process(pathname):
# Method Chain 1 (Load data, remove unneeded columns and remove rows with missing data)
ldr1 = (
pd.read_csv("../data/raw/EMLSdata_10Aug.csv")
.drop(labels=['MeanDepth_m','Lake_ID','MC_LY_ugL','MC_LW_ugL','MC_LF_ugL','NOD_ugL','MaximumDepth_m', 'TN_mgL', 'NO3NO2_mgL', 'NH3_mgL', 'PO4_ugL','Date','LabName','Latitude','ThermoclineDepth_m','SamplingDepth_m','Longitude','Altitude_m','SurfaceTemperature_C','EpilimneticTemperature_C','CYN_ugL','ATX_ugL','Lutein_ugL','Echinenone_ugL','Chlorophyllc2_ugL','Peridinin_ugL','Alloxanthin_ugL','Diadinoxanthin_ugL','Fucoxanthin_ugL','Zeaxanthin_ugL','Chlorophyllb_ugL','Diatoxanthin_ugL','Chlorophylla_ugL','SecchiDepth_m','Violaxanthin_ugL'], axis=1)
.dropna(axis=0)
)
    # Method Chain 2 (rename columns, sort by ascending concentration of total phosphorus)
ldr2 = (
ldr1
.rename(columns=
{"LakeName": "Lake Name",
"TP_mgL": "Concentration of total phosphorus (mgL)",
"MC_YR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin YR (ugL)",
"MC_dmRR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin dmRR (ugL)",
"MC_RR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin RR (ugL)",
"MC_dmLR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin dmLR (ugL)",
"MC_LR_ugL": "Concentration of cyanobacterial hepatotoxin microcystin LR (ugL)",
"MC_LW_ugL": "Concentration of cyanobacterial hepatotoxin microcystin LW (ugL)",
"MC_LF_ugL": "Concentration of cyanobacterial hepatotoxin microcystin LF (ugL)",
"NOD_ugL": "Concentration of cyanobacterial hepatotoxin nodularin (ugL)"})
.sort_values(by=['Concentration of total phosphorus (mgL)'], ascending=True)
)
return ldr2
load_and_process('../data/raw/EMLSdata_10Aug.csv')
# -
ldr.to_csv('kkr_processed.csv')
import project_functions3
ldr = project_functions3.load_and_process('../data/raw/EMLSdata_10Aug.csv')
Fig1 = sns.countplot(y='Country', data=ldr)
Fig1.set_title("Number of Lakes Studied from Each Country")
Fig1.set_xlabel("Count")
plt.show()
# Figure 1. A count plot of how many lakes from each country were in the dataset.
# Figure 1 Analysis:
# This countplot shows that the data is skewed towards lakes from Poland and Germany. It is possible that this may affect the analysis of the data, as there could be factors in those countries, such as pollution, that affect lake cyanobacterial levels.
sns.regplot(x='Concentration of total phosphorus (mgL)', y='Concentration of cyanobacterial hepatotoxin microcystin YR (ugL)', data=ldr)
# Figure 2. A scatterplot of the concentration of cyanobacterial hepatotoxin microcystin YR in ugL versus the concentration of total phosphorus in mgL.
# Figure 2 Analysis:
# This scatterplot shows that there is a very slight positive correlation between the concentration of cyanobacterial hepatotoxin microcystin YR and the concentration of total phosphorus.
sns.regplot(x='Concentration of total phosphorus (mgL)', y='Concentration of cyanobacterial hepatotoxin microcystin dmRR (ugL)', data=ldr)
# Figure 3. A scatterplot of the concentration of cyanobacterial hepatotoxin microcystin dmRR in ugL versus the concentration of total phosphorus in mgL.
# Figure 3 Analysis:
# This scatterplot shows that there is a small positive correlation between the concentration of cyanobacterial hepatotoxin microcystin dmRR and the concentration of total phosphorus.
sns.regplot(x='Concentration of total phosphorus (mgL)', y='Concentration of cyanobacterial hepatotoxin microcystin dmLR (ugL)', data=ldr)
# Figure 4. A scatterplot of the concentration of cyanobacterial hepatotoxin microcystin dmLR in ugL versus the concentration of total phosphorus in mgL.
# Figure 4 Analysis:
# This scatterplot shows that there is a small positive correlation between the concentration of cyanobacterial hepatotoxin microcystin dmLR and the concentration of total phosphorus.
sns.regplot(x='Concentration of total phosphorus (mgL)', y='Concentration of cyanobacterial hepatotoxin microcystin RR (ugL)', data=ldr)
# Figure 5. A scatterplot of the concentration of cyanobacterial hepatotoxin microcystin RR in ugL versus the concentration of total phosphorus in mgL.
# Figure 5 Analysis:
# This scatterplot shows that there is no correlation between the concentration of cyanobacterial hepatotoxin microcystin RR and the concentration of total phosphorus.
sns.regplot(x='Concentration of total phosphorus (mgL)', y='Concentration of cyanobacterial hepatotoxin microcystin LR (ugL)', data=ldr)
# Figure 6. A scatterplot of the concentration of cyanobacterial hepatotoxin microcystin LR in ugL versus the concentration of total phosphorus in mgL.
# Figure 6 Analysis:
# This scatterplot shows that there is almost no correlation between the concentration of cyanobacterial hepatotoxin microcystin LR and the concentration of total phosphorus. There is a slight negative correlation, but it is barely visible.
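# These visual impressions could be backed by a correlation coefficient. A sketch on made-up numbers (the real column names in `ldr` are much longer; the values below are purely illustrative):

```python
import pandas as pd

# Toy stand-in for the processed lake table.
toy = pd.DataFrame({
    'total_phosphorus': [0.01, 0.02, 0.05, 0.08, 0.12],
    'microcystin_yr':   [0.00, 0.10, 0.10, 0.30, 0.20],
})
r = toy['total_phosphorus'].corr(toy['microcystin_yr'])  # Pearson's r
print(round(r, 3))
```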
# ### Conclusions
# There does not seem to be a consistent correlation between cyanobacterial hepatotoxin levels and phosphorus concentration. Some hepatotoxins show a slight positive correlation, one shows no correlation, and one shows almost none (with a barely visible negative trend).
# Source notebook: notebooks/analysis3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating Redshift Cluster Using The Quick Launcher
#
# - Follow the instructions in the lesson to create a Redshift cluster
# - Use the query editor to create a table and insert data
# - Delete the cluster
# Source notebook: Code/Redshift/example1/home/L3 Exercise 1 - Redshift Quick Launch.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="6PlKZ0IXD-4w" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} outputId="5b66cfcc-e7e1-4b84-c362-eae75be50a59" executionInfo={"status": "ok", "timestamp": 1526357591552, "user_tz": -120, "elapsed": 1336, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
import tensorflow as tf
tf.test.gpu_device_name()
# + id="9SHJEEbSC29b" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 101} outputId="6e81ad78-eb2c-4d85-a2c5-01c606f79adf" executionInfo={"status": "ok", "timestamp": 1526357594153, "user_tz": -120, "elapsed": 2562, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
# !pip install torch torchvision
# + id="LhUQbpQ9C0ZJ" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.autograd import Variable
# + [markdown] id="jRWKqEJfC0ZR" colab_type="text"
# ## Load data
# + id="7t7aaFcxC0ZT" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 168} outputId="73c32e40-3049-49e1-9f8a-d2961a36fdaf" executionInfo={"status": "ok", "timestamp": 1526357597989, "user_tz": -120, "elapsed": 2867, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
# root_folder = 'D:/dev/data/'
root_folder = '~/Download/'
train_dataset = datasets.MNIST(root=root_folder+'mnist/', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root=root_folder+'mnist/', train=False, transform=transforms.ToTensor(), download=True)
print('Length of train dataset: {}'.format(len(train_dataset)))
print('Length of test dataset: {}'.format(len(test_dataset)))
print('Shape of each image: {}'.format(train_dataset[0][0].size()))
# + [markdown] id="ntlcjYe0C0Ze" colab_type="text"
# ## Display MNIST
# + id="zFyk6VB7C0Zf" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# + id="Xe4hcwT2C0Zl" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 297} outputId="0274ad3b-d5a1-457e-b8e1-4d16bab78759" executionInfo={"status": "ok", "timestamp": 1526357599643, "user_tz": -120, "elapsed": 817, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
shape = list(train_dataset[0][0].size()[-2:])
show_img = train_dataset[0][0].numpy().reshape(shape)
plt.imshow(show_img, cmap='gray')
plt.title(str(train_dataset[0][1]))  # the label may be a plain int in recent torchvision
# + [markdown] id="LVRf6pqBC0Zt" colab_type="text"
# ## Make dataset iterable
# + id="IOawBr40C0Zv" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} outputId="e77058eb-fe3c-4cd8-b55e-57cbfbf1a8a9" executionInfo={"status": "ok", "timestamp": 1526357600359, "user_tz": -120, "elapsed": 601, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
batch_size = 100
n_iters = 6000
n_epochs = int(n_iters / (len(train_dataset)/batch_size))
print('Number of epochs: {}'.format(n_epochs))
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
# + [markdown] id="EomV4D1nC0Z2" colab_type="text"
# ### Check iterability
# + id="lPNdVOlgC0Z3" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} outputId="9b4a69ef-f730-4bc8-97a9-89a54f2a3c19" executionInfo={"status": "ok", "timestamp": 1526357600973, "user_tz": -120, "elapsed": 414, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
import collections
print(isinstance(train_loader, collections.Iterable))
print(isinstance(test_loader, collections.Iterable))
# + [markdown] id="ODvOEJD5C0Z9" colab_type="text"
# ## Build model
# + id="homvHfMMC0Z_" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1)
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
self.cnn3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1)
self.cnn4 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1)
self.fc1 = nn.Linear(in_features=128*7*7, out_features=500)
self.fc2 = nn.Linear(in_features=500, out_features=10)
self.relu = nn.ReLU()
self.maxpool = nn.MaxPool2d(kernel_size=2)
def forward(self, x):
out = self.cnn1(x) # 16x28x28
out = self.relu(out) # 16x28x28
out = self.cnn2(out) # 32x28x28
out = self.relu(out) # 32x28x28
out = self.maxpool(out) # 32x14x14
out = self.cnn3(out) # 64x14x14
out = self.relu(out) # 64x14x14
out = self.cnn4(out) # 128x14x14
out = self.relu(out) # 128x14x14
out = self.maxpool(out) # 128x7x7
out = out.view(out.size(0), -1) # 1x(128*7*7)
out = self.fc1(out)
out = self.fc2(out)
return out
def fit(self, train_loader, criterion, optimizer):
iteration = 0
for epoch in range(n_epochs):
for i, (x, y) in enumerate(train_loader):
if torch.cuda.is_available():
x = Variable(x.cuda())
y = Variable(y.cuda())
else:
x = Variable(x)
y = Variable(y)
optimizer.zero_grad()
outputs = self.forward(x)
                # print(y.size())  # debug: inspect the label batch shape
loss = criterion(outputs, y)
loss.backward()
optimizer.step()
iteration += 1
if iteration%500 == 0:
print('Epoch: {}, Iteration: {}, Loss: {}'.format(epoch, iteration, loss))
#accuracy = predict(test_loader)
def predict(self, test_loader):
        first = True
for x, y in test_loader:
if torch.cuda.is_available():
x = Variable(x.cuda())
y = Variable(y.cuda())
else:
x = Variable(x)
y = Variable(y)
outputs = self.forward(x)
            if first:
predicted = torch.argmax(outputs, dim=1)
first = False
else:
predicted = torch.cat((predicted, torch.argmax(outputs, dim=1)))
return predicted
    def score(self, target, predicted):
        correct = (predicted == target).sum().item()
        total = len(target)
        accuracy = np.round(100. * correct / total, 3)
        return accuracy
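# The `128*7*7` input size of `fc1` follows from the shape comments in `forward`: each 3x3 convolution with stride 1 and padding 1 preserves the spatial size, and each 2x2 max-pool halves it. A quick sanity check of that arithmetic:

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    # standard output-size formula for a 2D convolution
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2):
    # a non-overlapping max-pool divides the spatial size by the kernel size
    return size // kernel

s = 28                               # MNIST input is 28x28
s = pool_out(conv_out(conv_out(s)))  # cnn1, cnn2, maxpool -> 14
s = pool_out(conv_out(conv_out(s)))  # cnn3, cnn4, maxpool -> 7
fc_in = 128 * s * s                  # 128 channels * 7 * 7 = 6272
```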
# + [markdown] id="sLIBNWz9C0aH" colab_type="text"
# ## Instantiate model
# + id="dz8a9NSQC0aJ" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
ann = CNN()
if torch.cuda.is_available():
ann.cuda()
# + [markdown] id="ctqkEwMqC0aO" colab_type="text"
# ## Instantiate loss class
# + id="vz2UBZ_wC0aO" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
criterion = nn.CrossEntropyLoss()
# + [markdown] id="h_q499svC0aT" colab_type="text"
# ## model.parameters() explained
# + id="AX_F_Zx5C0aU" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 252} outputId="d2aeea17-1d7c-4a00-d67c-1d8ced8bf2e8" executionInfo={"status": "ok", "timestamp": 1526357603525, "user_tz": -120, "elapsed": 478, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
print(ann.parameters())
print(len(list(ann.parameters())))
for parameter in list(ann.parameters()):
print(parameter.size())
# + [markdown] id="dVRFxjF-C0ae" colab_type="text"
# ## Optimizer
# + id="B-BKJ6-2C0ah" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
learning_rate = 0.1
optimizer=torch.optim.SGD(ann.parameters(), lr=learning_rate)
# + [markdown] id="kmqFKmfSC0ar" colab_type="text"
# ## Training phase
# + id="5f4F-yffC0au" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 734} outputId="2ac8578e-b01e-4800-8d49-4b53f16796b2" executionInfo={"status": "error", "timestamp": 1526357612377, "user_tz": -120, "elapsed": 8189, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
ann.fit(train_loader, optimizer=optimizer, criterion=criterion)
# + [markdown] id="v3U1oFbIC0a0" colab_type="text"
# ## Prediction phase
# + id="9af8wfF6IxfP" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
predicted = ann.predict(test_loader)
# + id="1v8FFkMkJxs1" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 35} outputId="2875d180-327f-4e07-963e-878dd5cc4d2d" executionInfo={"status": "ok", "timestamp": 1525841243676, "user_tz": -120, "elapsed": 1009, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
print(predicted.size())
# + id="f_HbjoeLC0a4" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 281} outputId="320e7097-fde7-4572-f747-160c7ac9<PASSWORD>" executionInfo={"status": "ok", "timestamp": 1525841207298, "user_tz": -120, "elapsed": 456, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "100979246453766324139"}}
for images, labels in test_loader:
images = Variable(images.view(-1, 28*28))
labels = Variable(labels)
plt.imshow(images[0].view(28,28).numpy(), cmap='gray')
plt.title(labels[0].numpy())
break
# + id="LG4cTmbmC0a_" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
| archive/cnn/pytorch/cnn_pytorch_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # Class Attributes
# - Instance attributes are owned by the specific instances of a class.
# - This means for two different instances the instance attributes are usually different.
# - Now, let us define attributes at the class level.
# - Class attributes are attributes which are owned by the class itself.
# - They will be shared by all the instances of the class.
class Foo(object):
    val = 5
# ## **Creating instance `a`**
a = Foo()
a.val
# Changing the **`val`** for instance `a`
a.val = 10
a.val
# ## **Creating instance `b`**
b = Foo()
b.val
# Value of class attribute **`Foo.val`**
Foo.val
# Changing the value of class attribute to `100`
Foo.val = 100
Foo.val
# Changed class attribute of **`Foo.val`** is shared with **`b`**
b.val
# Changed class attribute of Foo.val is not updated with **`a.val`**
a.val
# Python's class attributes and object attributes are stored in separate dictionaries, as we can see here:
a.__dict__
b.__dict__
import pprint
pprint.pprint(Foo.__dict__)
pprint.pprint(a.__class__.__dict__)
pprint.pprint(b.__class__.__dict__)
# ## Demonstrating instance counting with class attributes.
class Foo(object):
    instance_count = 0
    def __init__(self):
        Foo.instance_count += 1
    def __del__(self):
        Foo.instance_count -= 1
a = Foo()
print('Instance count %s' % Foo.instance_count)
b = Foo()
print('Instance count %s' % Foo.instance_count)
del a
print('Instance count %s' % Foo.instance_count)
del b
print('Instance count %s' % Foo.instance_count)
# ## Demonstrating instance counting with class attributes using `type(self)`
class Foo(object):
    instance_count = 0
    def __init__(self):
        type(self).instance_count += 1
    def __del__(self):
        type(self).instance_count -= 1
a = Foo()
type(a)
print('Instance count %s' % Foo.instance_count)
b = Foo()
print('Instance count %s' % Foo.instance_count)
del a
print('Instance count %s' % Foo.instance_count)
del b
print('Instance count %s' % Foo.instance_count)
# ## Demonstrating the ERROR when counting instances with private class attributes.
class Foo(object):
    __instance_count = 0
    def __init__(self):
        type(self).__instance_count += 1
    def foo_instances(self):
        return Foo.__instance_count
a = Foo()
print('Instance count %s' % a.foo_instances())
b = Foo()
print('Instance count %s' % a.foo_instances())
Foo.foo_instances()
# ## The next idea, which still doesn't solve our problem, consists of omitting the `self` parameter and adding the `@staticmethod` decorator.
class Foo(object):
    __instance_count = 0
    def __init__(self):
        type(self).__instance_count += 1
    # @staticmethod
    def foo_instances():
        return Foo.__instance_count
a = Foo()
print('Instance count %s' % Foo.foo_instances())
print('Instance count %s' % a.foo_instances())
b = Foo()
print('Instance count %s' % Foo.foo_instances())
print('Instance count %s' % a.foo_instances())
print('Instance count %s' % b.foo_instances())
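# For completeness, a sketch of what happens once the `@staticmethod` decorator from the cell above is actually applied: the counter becomes reachable from both the class and its instances (the names mirror the cells above).

```python
class Foo(object):
    __instance_count = 0

    def __init__(self):
        type(self).__instance_count += 1

    @staticmethod
    def foo_instances():
        # inside the class body, name mangling resolves __instance_count
        return Foo.__instance_count

a = Foo()
b = Foo()
print(Foo.foo_instances())  # callable on the class...
print(a.foo_instances())    # ...and on an instance
```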
| 00-PythonLearning/01-Tutorials/python_examples/class_attributes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Importing all Required Libraries
import pandas as pd
import requests as r
from bs4 import BeautifulSoup
import re
# # Scraping all the Mobile links from priceoye Website
urls=[
"https://priceoye.pk/store?page=1","https://priceoye.pk/store?page=2","https://priceoye.pk/store?page=3",
"https://priceoye.pk/store?page=4","https://priceoye.pk/store?page=5","https://priceoye.pk/store?page=6",
"https://priceoye.pk/store?page=7","https://priceoye.pk/store?page=8"]
all_links=[]
for url in urls:
page=r.get(url)
soup=BeautifulSoup(page.content,'html.parser')
links=soup.find('div',{'class':"product-list"})
for link in links.find_all('a'):
all_links.append(link.get('href'))
all_links
# # Now Appending Each Mobile Specification in a CSV File
for link in all_links:
dv=[]
page=r.get(link)
soup=BeautifulSoup(page.content,'html.parser')
for s in soup.find_all('table',{'class':'p-spec-table card'}):
for data in s.find_all('tbody'):
for d in data.find_all('td'):
dv.append(d.text.replace('\n','').replace(',',''))
df = pd.DataFrame(data=dv).T
df.to_csv('data.csv',index=False,header=False,mode='a')
# # Scraping column names. NOTE: all mobiles share the same column names, so we scrape them from just one page.
heading=[]
page=r.get("https://priceoye.pk/mobiles/tecno/tecno-spark-7-pro")
soup=BeautifulSoup(page.content,'html.parser')
for s in soup.find_all('tbody'):
for headings in s.find_all('th'):
        heading.append(headings.text.replace(',', ''))
# # Loading Csv file
csv=pd.read_csv('data.csv',header=None)
csv.head(3)
# # Adding the Header(Column names)
csv.to_csv('data.csv',header=heading,index=False)
# # Scraping the Mobile names and their prices
names=[]
for url in urls:
page=r.get(url)
soup=BeautifulSoup(page.content,'html.parser')
for s in soup.find_all('div',{'class':"detail-box"}):
for name in s.find_all('h4',{'class':'p3'}):
n=re.sub(r"\s+", "", name.text, flags=re.UNICODE)
names.append(n)
#names.append(name.text.replace('','').replace(',','').split(name.text.replace(',','')))
#names.append(name.text)
names
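# The `re.sub(r"\s+", "", ...)` call above strips every run of whitespace from the scraped text; a small illustration with a made-up product name:

```python
import re

raw_name = "  Tecno \n Spark 7\tPro "   # hypothetical scraped text
cleaned = re.sub(r"\s+", "", raw_name, flags=re.UNICODE)
print(cleaned)  # -> "TecnoSpark7Pro"
```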
prices=[]
for url in urls:
page=r.get(url)
soup=BeautifulSoup(page.content,'html.parser')
for s in soup.find_all('div',{'class':"detail-box"}):
for price in s.find_all('div',{'class':'price-box'}):
p=re.sub(r"\s+", "", price.text, flags=re.UNICODE)
prices.append(p)
prices
# # Adding Names and Prices Columns in the CSV file
csv=pd.read_csv('data.csv')
csv.insert(0,column = "Name", value = names)
csv.insert(1,column = "Price", value = prices)
csv.to_csv('data.csv',index=False)
| priceoyewebscraping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import sugartensor as tf
from data import SpeechCorpus, voca_size
from model import *
tf.sg_verbosity(10)
batch_size = 16 # total batch size
# corpus input tensor
data = SpeechCorpus(batch_size=batch_size * tf.sg_gpus())
# -
# sentence label as inputs
inputs = tf.split(data.label.sg_float(), tf.sg_gpus(), axis=0)
# mfcc feature of audio as outputs
labels = tf.split(data.mfcc, tf.sg_gpus(), axis=0)
inputs, labels
# +
#tf.not_equal(inputs[0].sg_sum(axis=2), 0.).sg_int().sg_sum(axis=1).get_shape()
# +
#tf.not_equal(labels[0], 0).sg_int().sg_sum(axis=1).get_shape()
# -
# sequence length except zero-padding
seq_len = []
for input_ in inputs:
#seq_len.append(tf.not_equal(input_.sg_sum(axis=2), 0.).sg_int().sg_sum(axis=1))
seq_len.append(tf.not_equal(input_, 0).sg_int().sg_sum(axis=1))
seq_len
logit = get_logit(inputs[0].sg_expand_dims(), voca_size=voca_size)
logit.get_shape()
# ### already performed: `sg_squeeze().sg_transpose(perm=[1, 2, 0])`
logit.sg_mse(target=labels[0]).sg_sum(axis=2).sg_sum(axis=1).get_shape()
# ### __$\mu$-law?__
logit.sg_ctc(target=inputs[0], seq_len=seq_len).get_shape()
# parallel loss tower
@tf.sg_parallel
def get_loss(opt):
# encode audio feature
logit = get_logit(opt.input[opt.gpu_index], voca_size=voca_size)
# CTC loss
#return logit.sg_ctc(target=opt.target[opt.gpu_index], seq_len=opt.seq_len[opt.gpu_index])
# MSE loss
return logit.sg_mse(target=opt.target[opt.gpu_index]).sg_sum(axis=3).sg_sum(axis=2)
#
# train
#
#loss=get_loss(input=inputs, target=label0, seq_len=seq_len)
loss=get_loss(input=inputs, target=labels)
tf.sg_train(lr=0.0001, loss=loss, ep_size=data.num_batch, max_ep=50)
voca_size
var1 = tf.Variable([[[2., -1., 3.], [3., 1., -2.]], [[1., -1., 2.], [3., 1., -2.]]],name='var1')
var2 = tf.Variable([[2., 1.], [2., 3.]],name='var2')
var1.get_shape(), var2.get_shape()
var1.sg_ctc(target=var2)
| debug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Time Series Analysis
# Supervised and Unsupervised Models.
#
# ## Multiclass Classification
#
# In this document we will try to predict how long a given delivery will take, on a range of 0 to 20 days.
#
# To do this, we will perform a multiclass classification with different algorithms, to try to determine which one predicts the expected result most accurately.
# ## Libraries
# We load the libraries needed to perform the analysis.
# %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import FunctionTransformer
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from xgboost import XGBClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression
# ## Data
# To start, we load the shipment data from March 2019.
# +
cols = ['service',
'sender_zipcode',
'receiver_zipcode',
'sender_state',
'receiver_state',
'shipment_type',
'quantity',
'status',
'date_created',
'date_sent',
'date_visit',
'target']
df = pd.read_csv('./shipments_BR_201903.csv', usecols=cols)
# -
df.head(5)
# Since we will try to predict deliveries that take 0 to 20 days, larger targets are capped at 20.
df['target'] = np.where(df['target'] > 20, 20, df['target'])
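# A minimal illustration of how `np.where` caps the target at 20 days (made-up values):

```python
import numpy as np

target = np.array([3, 18, 25, 40])
capped = np.where(target > 20, 20, target)
print(capped)  # [ 3 18 20 20]
```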
# To include the shipment type among the features, we split it into 3 independent features.
# +
#df = pd.get_dummies(df, columns=['shipment_type'])
# -
# We define the columns to use as model features, and the target column.
#features = ['sender_zipcode', 'receiver_zipcode', 'service', 'shipment_type_express', 'shipment_type_standard', 'shipment_type_super']
features = ['sender_zipcode', 'receiver_zipcode', 'service']
target = 'target'
# Before applying any prediction model, we plot the distribution of the data across the different features.
# +
#sns.pairplot(
# data=df,
# vars=features,
# hue=target)
# -
# ## Logistic Regression
# As a baseline model, we use a sklearn pipeline that normalizes the data and then feeds it to a multiclass logistic regression.
# +
def drop_last_zip_digit(data):
    # coarsen zip codes by truncating (not rounding) the last digit
    for row in data:
        row[0] = row[0] // 10
        row[1] = row[1] // 10
    return data
zip_features = [0, 1]
function_transformer = FunctionTransformer(drop_last_zip_digit, validate=False)
model = Pipeline([
('zip_cutter', ColumnTransformer(transformers=[('log', function_transformer, zip_features)])),
('normalizer', MinMaxScaler()),
('reduce_dim', PCA()),
('classifier', LogisticRegression(solver='lbfgs',
multi_class='multinomial',
max_iter=500))
])
# -
# We define a cut-off date to split the training and test data. To make sure the original data is not modified, we work on a copy of the dataset.
# +
copy = df.copy()
cut_off = '2019-03-20'
df_train = copy.query(f'date_visit <= "{cut_off}"')
df_test = copy.query(f'date_created > "{cut_off}"')
X_train = df_train[features].values.astype(float)
y_train = df_train[target].values
X_test = df_test[features].values.astype(float)
y_test = df_test[target].values
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# -
y_train = df_train[target].values
y_test = df_test[target].values
y_train.shape, y_test.shape
# We train our model on the training and test data defined above.
# %%time
result = model.fit(X_train, y_train)
result
# We take the scores of the result to assess how accurate the model was.
# +
y_pred = model.predict(X_test)
metrics = {
'accuracy': accuracy_score(y_test, y_pred),
'precision': precision_score(y_test, y_pred, average='macro'),
'recall': recall_score(y_test, y_pred, average='macro'),
'f1_score': f1_score(y_test, y_pred, average='macro'),
}
metrics
# -
# The model does not perform adequately with the available data.
# ## Decision Trees
# We start with a copy of the original dataset
# +
copy = df.copy()
cut_off = '2019-03-20'
df_train = copy.query(f'date_visit <= "{cut_off}"')
df_test = copy.query(f'date_created > "{cut_off}"')
X_train = df_train[features].values.astype(float)
y_train = df_train[target].values
X_test = df_test[features].values.astype(float)
y_test = df_test[target].values
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# -
y_train = df_train[target].values
y_test = df_test[target].values
y_train.shape, y_test.shape
# We create a sklearn pipeline
model = Pipeline([
('zip_cutter', ColumnTransformer(transformers=[('log', function_transformer, zip_features)])),
('normalizer', MinMaxScaler()),
('reduce_dim', PCA()),
('classifier', XGBClassifier())
])
# %%time
result = model.fit(X_train, y_train)
result
# +
y_pred = model.predict(X_test)
metrics = {
'accuracy': accuracy_score(y_test, y_pred),
'precision': precision_score(y_test, y_pred, average='macro'),
'recall': recall_score(y_test, y_pred, average='macro'),
'f1_score': f1_score(y_test, y_pred, average='macro'),
}
metrics
# -
# ## KNN
# We start with a copy of the original dataset
# +
copy = df.copy()
cut_off = '2019-03-20'
df_train = copy.query(f'date_visit <= "{cut_off}"')
df_test = copy.query(f'date_created > "{cut_off}"')
X_train = df_train[features].values.astype(float)
y_train = df_train[target].values
X_test = df_test[features].values.astype(float)
y_test = df_test[target].values
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# -
y_train = df_train[target].values
y_test = df_test[target].values
y_train.shape, y_test.shape
# We create a pipeline to fit a k-nearest-neighbors classifier
model = Pipeline([
('zip_cutter', ColumnTransformer(transformers=[('log', function_transformer, zip_features)])),
('normalizer', MinMaxScaler()),
('reduce_dim', PCA()),
('classifier', KNeighborsClassifier())
])
# %%time
result = model.fit(X_train, y_train)
result
# +
y_pred = model.predict(X_test)
metrics = {
'accuracy': accuracy_score(y_test, y_pred),
'precision': precision_score(y_test, y_pred, average='macro'),
'recall': recall_score(y_test, y_pred, average='macro'),
'f1_score': f1_score(y_test, y_pred, average='macro'),
}
metrics
# -
# ## Regression
# We start with a copy of the original dataset
# +
copy = df.copy()
cut_off = '2019-03-20'
df_train = copy.query(f'date_visit <= "{cut_off}"')
df_test = copy.query(f'date_created > "{cut_off}"')
X_train = df_train[features].values.astype(float)
y_train = df_train[target].values
X_test = df_test[features].values.astype(float)
y_test = df_test[target].values
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# -
# We create a pipeline to fit a linear regression
model = Pipeline([
('zip_cutter', ColumnTransformer(transformers=[('log', function_transformer, zip_features)])),
('normalizer', MinMaxScaler()),
('reduce_dim', PCA()),
('classifier', LinearRegression())
])
# %%time
result = model.fit(X_train, y_train)
result
# +
y_pred = model.predict(X_test)
# the classification metrics below need discrete labels, so round the regression output into the 0-20 class range
y_pred = np.clip(np.round(y_pred), 0, 20).astype(int)
metrics = {
'accuracy': accuracy_score(y_test, y_pred),
'precision': precision_score(y_test, y_pred, average='macro'),
'recall': recall_score(y_test, y_pred, average='macro'),
'f1_score': f1_score(y_test, y_pred, average='macro'),
}
metrics
# -
# ## Prediction
# We start with a copy of the original dataset
# +
copy = df.copy()
cut_off = '2019-03-20'
df_train = copy.query(f'date_visit <= "{cut_off}"')
df_test = copy.query(f'date_created > "{cut_off}"')
X_train = df_train[features].values.astype(float)
y_train = df_train[target].values
X_test = df_test[features].values.astype(float)
y_test = df_test[target].values
X_train.shape, y_train.shape, X_test.shape, y_test.shape
| practico4_fer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="cyS1c-HwrKWA" outputId="ea992ce3-0e0b-4480-9658-b82d6d603ce0"
123*321
# + colab={"base_uri": "https://localhost:8080/"} id="K9Aql_vft-4n" outputId="b080b576-41bd-4647-d61e-76d9fe5bdcd3"
1001/10
# + colab={"base_uri": "https://localhost:8080/"} id="gF0LGEk6uDgI" outputId="97f0cd23-4e2c-472c-84cb-400135778794"
1000-200
# + id="yN92j7w7uteR"
x=10
# + colab={"base_uri": "https://localhost:8080/"} id="JsXvg44ruyGX" outputId="e64b8030-9528-444e-eb88-bd5167f47f64"
x
# + id="ruP6BwPTu375"
y=230
# + colab={"base_uri": "https://localhost:8080/"} id="gAioUA2dvX-g" outputId="73c2b54d-13e5-427b-a28d-8fa0edef9264"
y
# + id="FruG4cuSvYxJ"
x=100-1
# + colab={"base_uri": "https://localhost:8080/"} id="lKKekjC_wSv2" outputId="2abd723b-9f21-4853-9b17-b11e6c919149"
x
# + id="IScrePfuwTkw"
x=90
# + id="iZ4EQih0xQgW"
y=80
# + colab={"base_uri": "https://localhost:8080/"} id="_w80nbOnxSJg" outputId="a284f466-03d6-4c6a-a476-7e5aecd972b4"
x*y
# + id="tbNLZGEKxUI4"
x=34
# + id="lY9YdDjGyDbu"
y=90
# + id="w_aTCNLqyEhw"
z=78
# + colab={"base_uri": "https://localhost:8080/"} id="Cn15poA7yGaf" outputId="bfa40309-c3b4-41bf-82af-07c2098dec82"
x+y-z
# + colab={"base_uri": "https://localhost:8080/"} id="vIhdt3mcyJ83" outputId="8c64c3e9-f653-4a5f-89e9-ccde750b2fbe"
x+z
# + colab={"base_uri": "https://localhost:8080/"} id="p0GLyouJyMIp" outputId="533caca7-8785-4a1a-c85a-2e7b190608aa"
x-y-z
# + colab={"base_uri": "https://localhost:8080/"} id="4XIfEKNgywWA" outputId="56e46f02-fbf5-4dbe-afb2-2dbba3cf1819"
z+x*y
# + id="Mp7Vmwdey5Bs"
P=10000
# + id="qr12jYFOz_Wn"
R= 0.09
# + id="90_u1oGD0Hme"
N=60
# + id="782bjHk40Sx-"
I= P*R*N
# + colab={"base_uri": "https://localhost:8080/"} id="zrFAcWNw31aE" outputId="477a8b20-0811-4817-bded-053767e2d910"
I
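# The cells above compute simple interest step by step; the same calculation in one place:

```python
# simple interest: I = P * R * N
P = 10000   # principal
R = 0.09    # interest rate per period
N = 60      # number of periods
I = P * R * N
print(I)    # approximately 54000
```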
# + id="CBbrxQgn32pN"
X= "INDIA"
# + colab={"base_uri": "https://localhost:8080/", "height": 38} id="v4toMBJX-T-Z" outputId="28ee7cf2-7d25-4499-a0f7-70474358bb64"
X
# + id="m8T-83WS-UjS"
X=10
# + id="KXKkqT0q-x8L"
Y=10.5
# + id="Hr-gdUnv-2Hi"
Z="INDIA"
# + colab={"base_uri": "https://localhost:8080/"} id="dqzbp9V0-9xS" outputId="19878fbb-b0dd-4f6f-e3f7-74b6678d6c30"
type(X)
# + colab={"base_uri": "https://localhost:8080/"} id="BkPUXVD4_U6C" outputId="7e280457-b80d-4d78-f0fc-e71e2a544e5f"
type(Y)
# + colab={"base_uri": "https://localhost:8080/"} id="AXcTVJvZ__2S" outputId="db37b9ee-4918-4270-f203-59ea77b63873"
type(Z)
# + colab={"base_uri": "https://localhost:8080/"} id="rR-J4LtYAx6O" outputId="0c735a29-d827-45b2-b519-d83b9cac2292"
X=input("ENTER A NAME")
# + colab={"base_uri": "https://localhost:8080/"} id="Alt5ajFeA-tq" outputId="b927b4bd-8272-4aa6-caf1-5837bf6c0547"
print(X)
# + id="w6Olx3-tBXVZ"
| 01_Python_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Figure 4: Creation
#
# Here, we re-create a plot in Figure 4 (a) from
#
# - <NAME> *et al.* Stable and manipulable Bloch point. [*Scientific Reports* **9**, 7959](https://www.nature.com/articles/s41598-019-44462-2) (2019).
#
# ### (optional) Micromagnetic simulations
#
# This notebook uses results from `*.txt` files which are already in this repository. The script used to generate result files is `src/creation.py` and if you want to re-execute micromagnetic simulations, please run:
#
# $ make creation
#
# Micromagnetic simulations will be run inside a [Docker](https://www.docker.com/) container, which contains all the necessary software. Therefore, please make sure you have Docker installed on your machine - installation instructions can be found [here](https://docs.docker.com/install/).
#
# Details about Docker images, VTK, and H5 files can be found in `README.md` file of [`marijanbeg/2019-paper-bloch-point-stability`](https://github.com/marijanbeg/2019-paper-bloch-point-stability) GitHub repository.
#
# ### Plot
#
# We start by reading the results from `*.txt` files:
# +
import os
import numpy as np
# geometry parameters
d = 150 # disk diameter (nm)
hb = 20 # bottom-layer thickness (nm)
ht = 10 # top-layer thickness (nm)
# time array
T = 500e-12 # total simulation time (s)
dt = 1e-12 # time step (s)
t_array = np.arange(0, T+dt/2, dt)/1e-12
mz_top = [] # average mz in the top-layer
mz_bottom = [] # average mz in bottom-layer
S_top = [] # skyrmion number in the top-layer
S_bottom = [] # skyrmion number in the bottom-layer
for t in t_array:
basename = f'd{d}hb{hb}ht{ht}'
rdir = os.path.join('..', 'results', 'creation', basename)
with open(os.path.join(rdir, f't{int(t)}.txt'), 'r') as f:
data = eval(f.read())
mz_top.append(data['average_top'][2])
mz_bottom.append(data['average_bottom'][2])
S_top.append(data['S_top']/ht)
S_bottom.append(data['S_bottom']/hb)
# -
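# The `dt/2` added to the stop value of `np.arange` above is a common trick to make the endpoint `T` survive floating-point rounding; a small check of the resulting time grid:

```python
import numpy as np

T, dt = 500e-12, 1e-12
# without the dt/2 margin, the last sample (t = T) could be dropped
t = np.arange(0, T + dt/2, dt) / 1e-12
print(len(t))   # 501 samples: 0, 1, ..., 500 ps
```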
# Finally, we can make the plot:
# +
# %config InlineBackend.figure_formats = ['svg'] # output matplotlib plots as SVG
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
plt.style.use('customstyle.mplstyle')
plt.figure(figsize=(10, 6))
plt.plot(t_array, mz_top, '-', label=r'$\langle m_{z} \rangle_\mathrm{top}$')
plt.plot(t_array, mz_bottom, '--', label=r'$\langle m_{z} \rangle_\mathrm{bottom}$')
plt.plot(t_array, S_top, '-', label=r'$\tilde{Q}_\mathrm{top}$')
plt.plot(t_array, S_bottom, '--', label=r'$\tilde{Q}_\mathrm{bottom}$')
plt.xlabel(r'$t$ (ps)')
plt.ylabel(r'$\tilde{Q}$, $\langle m_{z} \rangle$')
plt.ylim([-1.1, 1.1])
plt.legend(loc='lower right')
plt.show()
| figures/creation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro to python
# ### Strings
# +
# 1. TASK: print "Hello World"
print( "Hello World" )
# 2. print "Hello Noelle!" with the name in a variable
name = "Noelle"
print( "Hello", name ) # with a comma
print( "Hello"+ name ) # with a +
# 3. print "Hello 42!" with the number in a variable
name = 42
print( "Hello", name ) # with a comma
##print( "Hello" + name ) # with a + -- this one should give us an error!
# 4. print "I love to eat sushi and pizza." with the foods in variables
fave_food1 = "sushi"
fave_food2 = "pizza"
print( "I love to eat {} and {}".format(fave_food1,fave_food2)) # with .format()
print(f"I love to eat {fave_food1} and {fave_food2}") # with an f string
# is sushi lowercase or uppercase?
fave_food1.isupper()
fave_food1.islower()
# upper case the string
fave_food1.upper()
fave_food1.split("i")
# count how many "s" characters are in the word sushi
fave_food1.count("s")
# -
| String.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Machine Learning with Scikit-Learn
#
# ## Chapter 01. Basics of Classification
# ### 5. Multiclass Classification
#
# A problem with more than two target classes is called multiclass classification.
# Multiclass classification is solved by converting it into binary classification problems.
#
# - OvO: pit the classes against each other two at a time
# - OvR: for each class, test whether a sample belongs to that class or not, once per class
# ### OvO (one vs one)
#
# `OneVsOneClassifier`
# - trained $\frac{k(k-1)}{2}$ times
# ### OvR (one vs rest)
#
# `OneVsRestClassifier`
# - trained $k$ times
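# The classifier counts above are easy to verify; for the 3-class iris problem OvO trains 3 pairwise models and OvR trains 3 one-vs-rest models:

```python
def ovo_classifiers(k):
    # one binary classifier per unordered pair of classes
    return k * (k - 1) // 2

def ovr_classifiers(k):
    # one binary classifier per class (that class vs. the rest)
    return k

print(ovo_classifiers(3), ovr_classifiers(3))    # 3 3
print(ovo_classifiers(10), ovr_classifiers(10))  # 45 10
```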
# - - -
import matplotlib.pylab as plt
import pandas as pd
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
# +
iris = load_iris()
# model1: predict the target classes from the iris data with logistic regression
model1 = LogisticRegression().fit(iris.data, iris.target)
# model2: OvO, predicting one pair of classes at a time
model2 = OneVsOneClassifier(LogisticRegression()).fit(iris.data, iris.target)
# model3: OvR
model3 = OneVsRestClassifier(LogisticRegression()).fit(iris.data, iris.target)
# -
import matplotlib as mpl
import matplotlib.font_manager as fm
font_location = "/Library/Fonts/AppleGothic.ttf"
font_name = fm.FontProperties(fname=font_location).get_name()
print(font_name)
mpl.rc('font', family=font_name)
# +
# model 1: logistic regression, being a conditional probability model, can be applied directly even when the target has 3 or more classes.
# discriminant function values for the target
ax1 = plt.subplot(211)
pd.DataFrame(model1.decision_function(iris.data)).plot(ax=ax1)
# predicted target classes
ax2 = plt.subplot(212)
pd.DataFrame(model1.predict(iris.data), columns=["prediction"]).plot(ax=ax2)
plt.show()
# +
# model 2: OvO, one pair of classes at a time
# discriminant function values for the target
ax1 = plt.subplot(211)
pd.DataFrame(model2.decision_function(iris.data)).plot(ax=ax1)
# predicted target classes
ax2 = plt.subplot(212)
pd.DataFrame(model2.predict(iris.data), columns=["prediction"]).plot(ax=ax2)
plt.show()
# +
# model 3, OvR
# discriminant function values for the target
ax1 = plt.subplot(211)
pd.DataFrame(model3.decision_function(iris.data)).plot(ax=ax1)
# predicted target classes
ax2 = plt.subplot(212)
pd.DataFrame(model3.predict(iris.data), columns=["prediction"]).plot(ax=ax2)
plt.show()
# In the middle of the top (211) plot all the values are negative; in that case, the highest one is chosen anyway.
# In OvR, a positive value means the sample matches the class in question ('one' is chosen); a negative value means it does not ('rest' is chosen).
# -
# LabelBinarizer is a command for one-hot-encoding the classes of the dependent variable y. Each resulting column then becomes the y values for one OvR subproblem.
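# A minimal sketch of that one-hot encoding, without sklearn:

```python
def one_hot(labels):
    # each column corresponds to one class; 1 marks membership
    classes = sorted(set(labels))
    return [[1 if y == c else 0 for c in classes] for y in labels]

print(one_hot([0, 1, 2, 1]))
# [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 0]]
```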
| 09_classification/00_basis_of_classification/05_multi_class_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:jcop]
# language: python
# name: conda-env-jcop-py
# ---
from luwiji.neural_network import illustration
# # Regression training loop
illustration.regression_loop
# # Neural network training loop
#
# a neural_network can also do regression, just with a different formula
illustration.neural_network_loop
# # Minibatch
illustration.minibatch
illustration.minibatch_training
# # Minibatch Gradient Descent
# estimate the gradient using only a subset of the data
#
# Advantages:
# - training on the entire dataset is heavy (requires a lot of memory)
# - using the true gradient can get stuck at a saddle point
# - using minibatches has a slight regularization effect
#
# Disadvantages:
# - if batch_size is too small, the gradient estimate can become unreliable
# - if batch_size = 1, it is called stochastic gradient descent (SGD)
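The trade-offs above can be sketched with a minimal NumPy loop (illustrative only; the learning rate and batch size are arbitrary choices of mine, not values from the course material):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr, batch_size = 0.1, 32
for epoch in range(100):
    idx = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]  # indices of one minibatch
        # gradient of mean squared error, estimated on the minibatch only
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad
print(w)
```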
from luwiji.minibatch import demo
demo.gradient_descent()
| 13 - Neural Network/Part 4 - Feedforward, Backpropagation, dan Minibatch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/OBB-2199/EscapeEarth/blob/main/Interns/Olivia/run_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="0iGbjJlKGzpw" outputId="9c5511c4-7011-4d42-bf39-33e2d4c10db6"
## mount google drive to access files
from google.colab import drive
drive.mount('/content/gdrive')
# + colab={"base_uri": "https://localhost:8080/"} id="YqOb2orMGqn8" outputId="6082976f-af23-4b59-e52f-13feb88dcf30"
import matplotlib.pyplot as plt
import numpy as np
# !pip install lightkurve==1.9.0
import lightkurve as lk
import pandas as pd
import sys
sys.path.append('/content/gdrive/My Drive/EscapeEarthData/')
import OpenAndPlot as op
# + id="5fHOcSM4FOeT"
#https://docs.google.com/presentation/d/1nJR1YiOolG41xITh9mAs61WhhRY_JuTn8uIx49LPm5k/edit#slide=id.p
def periods(N=1000):
    period = np.logspace(-0.523, 1.43, N, endpoint=True)
    return period
def duration_grid(N=10):
    duration = np.linspace(.01, 0.298, N)
    return duration
def flatten(lc):
    '''
    inputs: cleaned lightkurve data
    output: flattened time and flux data
    function: takes in lightkurve data and flattens the time and flux
    '''
    lc_flat = lc.flatten()
    flat_time = lc_flat.time - lc_flat.time[0]
    return lc_flat, flat_time
def BLS(periodgrid, lightcurve, flat_time, durationgrid):
    '''
    Purpose
    ------------------
    A Box Least Squares function to compute the stats of the periodogram.
    Calculates several statistics of a candidate transit.
    Parameters
    -------------------
    period grid - describes how often the transit is happening (array of different values)
    duration grid - describes the width of the transit (array of different values)
    lightcurve - lightkurve class object
    Returns
    ------------------
    list of stats in the following order: period, duration, transit-time, power, depth
    '''
    from astropy.timeseries import BoxLeastSquares
    # assigning parameters to variables
    period = periodgrid
    duration = durationgrid
    lc = lightcurve
    t = flat_time  # time
    y = lc.flux  # flux
    # dy is the uncertainty
    model = BoxLeastSquares(t, y, dy=lc.flux_err)
    periodogram = model.power(period, duration)
    max_power = np.argmax(periodogram.power)
    # calculates the max stats w/in the transit
    stats = [periodogram.period[max_power],
             periodogram.duration[max_power],
             periodogram.transit_time[max_power],
             periodogram.power[max_power],  # was the argmax index; store the peak power value
             periodogram.depth[max_power]]
    # stats is the one peak, periodogram is the array
    return stats
##############################
##############################
#change sector here
sector = 14
##############################
##############################
#opening target lists
path14 = '/content/gdrive/My Drive/EscapeEarthData/all_targets_S014_v1.csv'
path15 = '/content/gdrive/My Drive/EscapeEarthData/all_targets_S015_v1.csv'
Olivia_path14 = '~/Users/brownscholar/Desktop/Brown Scholars/Internship/all_targets_S014_v1.csv'
Olivia_path15 = '~/Users/brownscholar/Desktop/Brown Scholars/Internship/all_targets_S015_v1.csv'
target_list14 = pd.read_csv(path14,skiprows=5) #from sector 14
target_list15 = pd.read_csv(path15,skiprows=5) #from sector 15
testing_targets = [ 7582633, 7620704, 7618785 ] #7582594, 7584049 – don't exist
# + id="lKv2I9z2djRQ"
save_power = '/content/gdrive/My Drive/EscapeEarthData/TeamB/bls_powers-RunTeam.npy'
save_period = '/content/gdrive/My Drive/EscapeEarthData/TeamB/bls_period-RunTeam.npy'
save_duration = '/content/gdrive/My Drive/EscapeEarthData/TeamB/bls_duration-RunTeam.npy'
save_depth = '/content/gdrive/My Drive/EscapeEarthData/TeamB/bls_depth-RunTeam.npy'
save_transit_time = '/content/gdrive/My Drive/EscapeEarthData/TeamB/bls_transit_time-RunTeam.npy'
save_path = [save_depth,save_duration,save_transit_time,save_period,save_power]
final_save = '/content/gdrive/My Drive/EscapeEarthData/TeamB/bls_stats_df.csv'
# + colab={"base_uri": "https://localhost:8080/"} id="fOMY87KnHUAT" outputId="a368f844-cbec-4e33-809c-a9b697037fb5"
#period grid
pg = periods()
#duration grid
dg = duration_grid()
#create an empty list for each stat (period,depth,duration,power,transit_time)
period = []; depth = []; duration = []; power =[]; transit_time = []
#change targets here
targets = testing_targets
#have to run for each sector
for star_id in targets:
    data = [star_id, sector]
    # initialize class
    lc = op.OpenAndPlot(data)
    # open data
    open_data = lc.open_lc('clean')
    # flatten the data
    flat_lc, flat_time = flatten(open_data)
    # run the bls on flattened data
    bls_output = BLS(pg, flat_lc, flat_time, dg)
    # append each of the stats to a separate list
    period.append(bls_output[0]); duration.append(bls_output[1])
    transit_time.append(bls_output[2]); power.append(bls_output[3])
    depth.append(bls_output[4])
# save each list
my_arr = [depth, duration, transit_time, period, power]
for i in range(len(save_path)):
    np.save(save_path[i], my_arr[i])
print(len(period), len(duration), len(depth), len(power), len(transit_time))
print(period, duration, depth, power, transit_time)
#create a dictionary of all the stats
stats_dict = {'Period':period,'Duration':duration,'Transit Time':transit_time, "Power": power,'Depth':depth}
#use that dictionary to create a pandas dataframe
stats_df = pd.DataFrame(stats_dict)
#write that dataframe to a file
stats_df.to_csv(final_save)
# + id="B3cu-z1ZPBHo"
| Interns/Olivia/run_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Recreating figure 3.7 from Bishop's "Pattern Recognition and Machine Learning."
#
# This notebook provides scaffolding for your exploration of Bayesian Linear Gaussian Regression, as described in Lecture.
# In particular, through this notebook you will reproduce several variants of figure 3.7 from Bishop's book.
# ## Instructions:
#
# ### 5.1-3:
#
# Implement the functions in `problem` -- completed implementations of these functions are needed to generate the plots.
from support_code import *
from problem import *
# ## Instructions (continued):
#
# ### 5.4:
#
# If your implementations are correct, then the next few code blocks in this notebook will generate the required variants of Bishop's figure. These are the same figures that you would obtain if you ran `python problem.py` from the command line -- this notebook is just provided as additional support.
# +
# Generate our simulated dataset
# Note we are using sigma == 0.2
np.random.seed(46134)
actual_weights = np.matrix([[0.3], [0.5]])
data_size = 40
noise = {"mean":0, "var":0.2 ** 2}
likelihood_var = noise["var"]
xtrain, ytrain = generate_data(data_size,
noise,
actual_weights)
# -
# Next, we generate the plots using 3 different prior covariance matrices. In the main call to `problem.py`, this is done in a loop -- here we wrap the loop body in a short helper function.
def make_plot_given_sigma(sigma_squared):
    prior = {"mean": np.matrix([[0], [0]]),
             "var": matlib.eye(2) * sigma_squared}
    make_plots(actual_weights,
               xtrain,
               ytrain,
               likelihood_var,
               prior,
               likelihood_func,
               get_posterior_params,
               get_predictive_params)
sigmas = [1/2, 1/(2**5), 1/(2**10)]
# #### First covariance matrix:
# $$\Sigma_{0} = \frac{1}{2}I,\qquad{} I \in \mathbb{R}^{2 \times 2}$$
try:
    make_plot_given_sigma(sigmas[0])
except NameError:
    print('If not yet implemented, implement functions in problem.py.')
    print('If you have implemented, remove this try/except.')
# #### Second covariance matrix:
# $$\Sigma_{0} = \frac{1}{2^{5}}I,\qquad{} I \in \mathbb{R}^{2 \times 2}$$
try:
    make_plot_given_sigma(sigmas[1])
except NameError:
    print('If not yet implemented, implement functions in problem.py.')
    print('If you have implemented, remove this try/except.')
# #### Third covariance matrix:
# $$\Sigma_{0} = \frac{1}{2^{10}}I,\qquad{} I \in \mathbb{R}^{2 \times 2}$$
try:
    make_plot_given_sigma(sigmas[2])
except NameError:
    print('If not yet implemented, implement functions in problem.py.')
    print('If you have implemented, remove this try/except.')
# ## Instructions (continued):
#
# ### 5.5:
#
# For question (5) (Comment on your results ...) no code is required -- instead, please answer with a written description.
# ## Instructions (continued):
#
# ### 5.6:
#
# For question (6), find the MAP solution for the first prior covariance $\left(\frac{1}{2}I\right)$ by completing the implementation below. In addition, be sure to justify the value for the regularization coefficient (in `sklearn` named `alpha`) in your written work.
from sklearn.linear_model import Ridge
# +
alpha = 9999 # Change to the correct value
ridge = Ridge(alpha=alpha,
fit_intercept=False,
solver='cholesky')
ridge.fit(xtrain, ytrain)
# -
# If alpha is set correctly, ridge.coef_ will equal the prior mean/MAP estimate returned by the next two cells.
ridge.coef_
# +
prior = {"mean":np.matrix([[0], [0]]),
"var":matlib.eye(2) * sigmas[0]}
try:
    post = get_posterior_params(xtrain, ytrain, prior,
                                likelihood_var=0.2**2)
    post[0].ravel()
except NameError:
    print('If not yet implemented, implement functions in problem.py.')
    print('If you have implemented, remove this try/except.')
| assignments/hw5-probabilistic/bayesian-code/bayesian_regression_support.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda2]
# language: python
# name: conda-env-anaconda2-py
# ---
import pandas as pd
# ### 2013-2017 ACS 5-year PUMS
# copied from M:\Data\Census\corrlib
ba_puma = pd.read_csv('./PUMS Relocation Rates/Bay_puma_2010.csv')
# available online: https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?pid=ACS_pums_csv_2013_2017&prodType=document
# data dic: https://www2.census.gov/programs-surveys/acs/tech_docs/pums/data_dict/PUMS_Data_Dictionary_2013-2017.pdf?#
pums = pd.read_csv('./PUMS Relocation Rates/psam_h06.csv')
# subset PUMS data to bay area
pums_ba = pums[pums.PUMA.isin(ba_puma.PUMARC)]
# select relevant columns
pums_ba = pums_ba[['SERIALNO', 'DIVISION', 'PUMA', 'REGION', 'ST', 'TEN', 'ADJINC', 'HINCP', 'MV', 'WGTP']]
# 0) use ADJINC to adjust incomes to 2017 dollars, then apply the inflation factor to deflate to 2015 dollars
#Adjustment factor for income and earnings dollar amounts (6 implied decimal places)
#1061971 .2013 factor (1.007549 * 1.05401460)
#1045195 .2014 factor (1.008425 * 1.03646282)
#1035988 .2015 factor (1.001264 * 1.03468042)
#1029257 .2016 factor (1.007588 * 1.02150538)
#1011189 .2017 factor (1.011189 * 1.00000000)
pums_ba.loc[pums_ba.ADJINC==1061971, 'hh_inc'] = pums_ba.HINCP * (1061971.0/1000000.0)*.96239484
pums_ba.loc[pums_ba.ADJINC==1045195, 'hh_inc'] = pums_ba.HINCP * (1045195.0/1000000.0)*.96239484
pums_ba.loc[pums_ba.ADJINC==1035988, 'hh_inc'] = pums_ba.HINCP * (1035988.0/1000000.0)*.96239484
pums_ba.loc[pums_ba.ADJINC==1029257, 'hh_inc'] = pums_ba.HINCP * (1029257.0/1000000.0)*.96239484
pums_ba.loc[pums_ba.ADJINC==1011189, 'hh_inc'] = pums_ba.HINCP * (1011189.0/1000000.0)*.96239484
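The five row-wise assignments above could equivalently use a factor lookup with `Series.map` -- a sketch on two made-up rows (the factors come from the data-dictionary comments; this is not a change to the notebook's own code):

```python
import pandas as pd

# ADJINC code -> multiplicative factor (6 implied decimal places)
adjinc_factor = {1061971: 1.061971, 1045195: 1.045195, 1035988: 1.035988,
                 1029257: 1.029257, 1011189: 1.011189}
toy = pd.DataFrame({'ADJINC': [1061971, 1011189], 'HINCP': [50000.0, 80000.0]})
# adjust to 2017 dollars, then deflate to 2015 dollars
toy['hh_inc'] = toy.HINCP * toy.ADJINC.map(adjinc_factor) * 0.96239484
print(toy.hh_inc.tolist())
```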
# 1) add household income quartile
#Household income (past 12 months, use ADJINC to adjust HINCP to constant dollars)
#bbbbbbbb .N/A (GQ/vacant)
#00000000 .No household income
#-0059999 .Loss of $59,999 or more
#-0059998..-0000001 .Loss of $1 to $59,998
#00000001 .$1 or Break even
#00000002..99999999 .Total household income
pums_ba.loc[(pums_ba.hh_inc > -999999999) & (pums_ba.hh_inc <= 30000), 'hh_inc_quartile'] = 1
pums_ba.loc[(pums_ba.hh_inc > 30000) & (pums_ba.hh_inc <= 60000), 'hh_inc_quartile'] = 2
pums_ba.loc[(pums_ba.hh_inc > 60000) & (pums_ba.hh_inc <= 100000), 'hh_inc_quartile'] = 3
pums_ba.loc[(pums_ba.hh_inc > 100000) & (pums_ba.hh_inc <= 999999999), 'hh_inc_quartile'] = 4
# 2) add tenure
#Tenure
#b .N/A (GQ/vacant)
#1 .Owned with mortgage or loan (include home equity loans)
#2 .Owned free and clear
#3 .Rented
#4 .Occupied without payment of rent
pums_ba.loc[(pums_ba.TEN == 1.0) | (pums_ba.TEN == 2.0), 'tenure'] = 'own'
pums_ba.loc[(pums_ba.TEN == 3.0), 'tenure'] = 'rent'
# 3) add boolean for whether household moved in the last 5 years -- PUMS provides last 4 years
#When moved into this house or apartment
#b .N/A (GQ/vacant)
#1 .12 months or less
#2 .13 to 23 months
#3 .2 to 4 years
#4 .5 to 9 years
#5 .10 to 19 years
#6 .20 to 29 years
#7 .30 years or more
pums_ba.loc[(pums_ba.MV == 1.0) | (pums_ba.MV == 2.0) | (pums_ba.MV == 3.0), 'moved_last_4yrs'] = 1
pums_ba.loc[(pums_ba.MV == 4.0) | (pums_ba.MV == 5.0) | (pums_ba.MV == 6.0) | (pums_ba.MV == 7.0), 'moved_last_4yrs'] = 0
# subset into tenure X income
own_q1 = pums_ba[(pums_ba.tenure == 'own') & (pums_ba.hh_inc_quartile == 1.0)]
own_q1_mv = own_q1[own_q1.moved_last_4yrs==1]
own_q2 = pums_ba[(pums_ba.tenure == 'own') & (pums_ba.hh_inc_quartile == 2.0)]
own_q2_mv = own_q2[own_q2.moved_last_4yrs==1]
own_q3 = pums_ba[(pums_ba.tenure == 'own') & (pums_ba.hh_inc_quartile == 3.0)]
own_q3_mv = own_q3[own_q3.moved_last_4yrs==1]
own_q4 = pums_ba[(pums_ba.tenure == 'own') & (pums_ba.hh_inc_quartile == 4.0)]
own_q4_mv = own_q4[own_q4.moved_last_4yrs==1]
rent_q1 = pums_ba[(pums_ba.tenure == 'rent') & (pums_ba.hh_inc_quartile == 1.0)]
rent_q1_mv = rent_q1[rent_q1.moved_last_4yrs==1]
rent_q2 = pums_ba[(pums_ba.tenure == 'rent') & (pums_ba.hh_inc_quartile == 2.0)]
rent_q2_mv = rent_q2[rent_q2.moved_last_4yrs==1]
rent_q3 = pums_ba[(pums_ba.tenure == 'rent') & (pums_ba.hh_inc_quartile == 3.0)]
rent_q3_mv = rent_q3[rent_q3.moved_last_4yrs==1]
rent_q4 = pums_ba[(pums_ba.tenure == 'rent') & (pums_ba.hh_inc_quartile == 4.0)]
rent_q4_mv = rent_q4[rent_q4.moved_last_4yrs==1]
# get proportion of movers within those groups, weighted first
# and then normalize to 5-year probabilities
own_q1_move_prop = float(own_q1_mv.WGTP.sum())/float(own_q1.WGTP.sum())
own_q1_move_prop = own_q1_move_prop*(5.0/4.0)
print(own_q1_move_prop)
own_q2_move_prop = float(own_q2_mv.WGTP.sum())/float(own_q2.WGTP.sum())
own_q2_move_prop = own_q2_move_prop*(5.0/4.0)
print(own_q2_move_prop)
own_q3_move_prop = float(own_q3_mv.WGTP.sum())/float(own_q3.WGTP.sum())
own_q3_move_prop = own_q3_move_prop*(5.0/4.0)
print(own_q3_move_prop)
own_q4_move_prop = float(own_q4_mv.WGTP.sum())/float(own_q4.WGTP.sum())
own_q4_move_prop = own_q4_move_prop*(5.0/4.0)
print(own_q4_move_prop)
rent_q1_move_prop = float(rent_q1_mv.WGTP.sum())/float(rent_q1.WGTP.sum())
rent_q1_move_prop = rent_q1_move_prop*(5.0/4.0)
print(rent_q1_move_prop)
rent_q2_move_prop = float(rent_q2_mv.WGTP.sum())/float(rent_q2.WGTP.sum())
rent_q2_move_prop = rent_q2_move_prop*(5.0/4.0)
print(rent_q2_move_prop)
rent_q3_move_prop = float(rent_q3_mv.WGTP.sum())/float(rent_q3.WGTP.sum())
rent_q3_move_prop = rent_q3_move_prop*(5.0/4.0)
print(rent_q3_move_prop)
rent_q4_move_prop = float(rent_q4_mv.WGTP.sum())/float(rent_q4.WGTP.sum())
rent_q4_move_prop = rent_q4_move_prop*(5.0/4.0)
print(rent_q4_move_prop)
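The eight blocks above repeat one computation; for reference, the pattern can be factored into a helper (a sketch exercised on made-up weights, not a drop-in change to the notebook):

```python
import pandas as pd

def move_prop(group, weight_col='WGTP', moved_col='moved_last_4yrs', scale=5.0 / 4.0):
    """Weighted share of movers in a subgroup, rescaled from a 4-year to a 5-year window."""
    moved = group[group[moved_col] == 1]
    return moved[weight_col].sum() / group[weight_col].sum() * scale

demo = pd.DataFrame({'WGTP': [10, 20, 30], 'moved_last_4yrs': [1, 0, 1]})
print(move_prop(demo))  # (10 + 30) / 60, scaled by 5/4
```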
# ### Now compare against older PUMS: 2006-2008 ACS PUMS
# copied from M:\Data\Census\corrlib
ba_puma_00 = pd.read_csv('./PUMS Relocation Rates/BayArea_puma5_cens2000.csv')
# copied from: M:\Data\Census\PUMS\PUMS 2006-08
pums_07 = pd.read_csv('./PUMS Relocation Rates/ss06_08hca.csv')
# subset PUMS data to bay area
pums_07_ba = pums_07[pums_07.PUMA.isin(ba_puma_00.PUMA)]
# select relevant columns
pums_07_ba = pums_07_ba[['SERIALNO', 'DIVISION', 'PUMA', 'REGION', 'ST', 'TEN', 'ADJINC', 'HINCP', 'MV', 'WGTP']]
# 0) use ADJINC to adjust to 2008
#Adjustment factor for income and earnings dollar amounts (6 implied decimal places)
#1084622 .2006 factor (1.015675 * 1.06788247)
#1055856 .2007 factor (1.016787 * 1.03842365)
#1018389 .2008 factor (1.018389 * 1.00000000)
pums_07_ba.loc[pums_07_ba.ADJINC==1084622, 'hh_inc'] = pums_07_ba.HINCP * (1084622.0/1000000.0)
pums_07_ba.loc[pums_07_ba.ADJINC==1055856, 'hh_inc'] = pums_07_ba.HINCP * (1055856.0/1000000.0)
pums_07_ba.loc[pums_07_ba.ADJINC==1018389, 'hh_inc'] = pums_07_ba.HINCP * (1018389.0/1000000.0)
# 1) add household income quartile
#Household income (past 12 months, use ADJINC to adjust HINCP to constant dollars)
#bbbbbbbb .N/A (GQ/vacant)
#00000000 .No household income
#-0059999 .Loss of $59,999 or more
#-0059998..-0000001 .Loss of $1 to $59,998
#00000001 .$1 or Break even
#00000002..99999999 .Total household income
pums_07_ba.loc[(pums_07_ba.hh_inc > -999999999) & (pums_07_ba.hh_inc <= 30000), 'hh_inc_quartile'] = 1
pums_07_ba.loc[(pums_07_ba.hh_inc > 30000) & (pums_07_ba.hh_inc <= 60000), 'hh_inc_quartile'] = 2
pums_07_ba.loc[(pums_07_ba.hh_inc > 60000) & (pums_07_ba.hh_inc <= 100000), 'hh_inc_quartile'] = 3
pums_07_ba.loc[(pums_07_ba.hh_inc > 100000) & (pums_07_ba.hh_inc <= 999999999), 'hh_inc_quartile'] = 4
# 2) add tenure
#Tenure
#b .N/A (GQ/vacant)
#1 .Owned with mortgage or loan (include home equity loans)
#2 .Owned free and clear
#3 .Rented
#4 .Occupied without payment of rent
pums_07_ba.loc[(pums_07_ba.TEN == 1.0) | (pums_07_ba.TEN == 2.0), 'tenure'] = 'own'
pums_07_ba.loc[(pums_07_ba.TEN == 3.0), 'tenure'] = 'rent'
# 3) add boolean for whether household moved in the last 5 years -- PUMS provides last 4 years
#When moved into this house or apartment
#b .N/A (GQ/vacant)
#1 .12 months or less
#2 .13 to 23 months
#3 .2 to 4 years
#4 .5 to 9 years
#5 .10 to 19 years
#6 .20 to 29 years
#7 .30 years or more
pums_07_ba.loc[(pums_07_ba.MV == 1.0) | (pums_07_ba.MV == 2.0) | (pums_07_ba.MV == 3.0), 'moved_last_4yrs'] = 1
pums_07_ba.loc[(pums_07_ba.MV == 4.0) | (pums_07_ba.MV == 5.0) | (pums_07_ba.MV == 6.0) | (pums_07_ba.MV == 7.0), 'moved_last_4yrs'] = 0
# subset into tenure X income
own_q1_07 = pums_07_ba[(pums_07_ba.tenure == 'own') & (pums_07_ba.hh_inc_quartile == 1.0)]
own_q1_mv_07 = own_q1_07[own_q1_07.moved_last_4yrs==1]
own_q2_07 = pums_07_ba[(pums_07_ba.tenure == 'own') & (pums_07_ba.hh_inc_quartile == 2.0)]
own_q2_mv_07 = own_q2_07[own_q2_07.moved_last_4yrs==1]
own_q3_07 = pums_07_ba[(pums_07_ba.tenure == 'own') & (pums_07_ba.hh_inc_quartile == 3.0)]
own_q3_mv_07 = own_q3_07[own_q3_07.moved_last_4yrs==1]
own_q4_07 = pums_07_ba[(pums_07_ba.tenure == 'own') & (pums_07_ba.hh_inc_quartile == 4.0)]
own_q4_mv_07 = own_q4_07[own_q4_07.moved_last_4yrs==1]
rent_q1_07 = pums_07_ba[(pums_07_ba.tenure == 'rent') & (pums_07_ba.hh_inc_quartile == 1.0)]
rent_q1_mv_07 = rent_q1_07[rent_q1_07.moved_last_4yrs==1]
rent_q2_07 = pums_07_ba[(pums_07_ba.tenure == 'rent') & (pums_07_ba.hh_inc_quartile == 2.0)]
rent_q2_mv_07 = rent_q2_07[rent_q2_07.moved_last_4yrs==1]
rent_q3_07 = pums_07_ba[(pums_07_ba.tenure == 'rent') & (pums_07_ba.hh_inc_quartile == 3.0)]
rent_q3_mv_07 = rent_q3_07[rent_q3_07.moved_last_4yrs==1]
rent_q4_07 = pums_07_ba[(pums_07_ba.tenure == 'rent') & (pums_07_ba.hh_inc_quartile == 4.0)]
rent_q4_mv_07 = rent_q4_07[rent_q4_07.moved_last_4yrs==1]
# get proportion of movers within those groups, weighted first
own_q1_move_prop_07 = float(own_q1_mv_07.WGTP.sum())/float(own_q1_07.WGTP.sum())
print(own_q1_move_prop_07)
own_q2_move_prop_07 = float(own_q2_mv_07.WGTP.sum())/float(own_q2_07.WGTP.sum())
print(own_q2_move_prop_07)
own_q3_move_prop_07 = float(own_q3_mv_07.WGTP.sum())/float(own_q3_07.WGTP.sum())
print(own_q3_move_prop_07)
own_q4_move_prop_07 = float(own_q4_mv_07.WGTP.sum())/float(own_q4_07.WGTP.sum())
print(own_q4_move_prop_07)
rent_q1_move_prop_07 = float(rent_q1_mv_07.WGTP.sum())/float(rent_q1_07.WGTP.sum())
print(rent_q1_move_prop_07)
rent_q2_move_prop_07 = float(rent_q2_mv_07.WGTP.sum())/float(rent_q2_07.WGTP.sum())
print(rent_q2_move_prop_07)
rent_q3_move_prop_07 = float(rent_q3_mv_07.WGTP.sum())/float(rent_q3_07.WGTP.sum())
print(rent_q3_move_prop_07)
rent_q4_move_prop_07 = float(rent_q4_mv_07.WGTP.sum())/float(rent_q4_07.WGTP.sum())
print(rent_q4_move_prop_07)
# ### Now compare against another older PUMS: 2000 PUMS
# copied from: M:\Data\Census\PUMS\PUMS 2000
pums_00 = pd.read_csv('./PUMS Relocation Rates/hbayarea5_2000.csv')
# subset PUMS data to bay area
pums_00_ba = pums_00[pums_00.puma5.isin(ba_puma_00.PUMA)]
# select relevant columns
pums_00_ba = pums_00_ba[['puma5', 'tenure', 'hinc', 'yrmoved', 'hweight']]
pums_00_ba.rename(columns={'tenure':'ten'}, inplace=True)
# 1) add household income quartile
#T Household Total Income in 1999
#V –0059999 . Loss of $59,999 or more
#R –0000001..–0059998 . Loss of $1 to $59,998
#V 00000000 . Not in universe (vacant, GQ, no income)
#V 00000001 . $1 or break even
#R 00000002..99999998 . $2 to $99,999,998
#V 99999999 . $99,999,999 or more
pums_00_ba.loc[(pums_00_ba.hinc > -999999999) & (pums_00_ba.hinc <= 30000), 'hh_inc_quartile'] = 1
pums_00_ba.loc[(pums_00_ba.hinc > 30000) & (pums_00_ba.hinc <= 60000), 'hh_inc_quartile'] = 2
pums_00_ba.loc[(pums_00_ba.hinc > 60000) & (pums_00_ba.hinc <= 100000), 'hh_inc_quartile'] = 3
pums_00_ba.loc[(pums_00_ba.hinc > 100000) & (pums_00_ba.hinc <= 999999999), 'hh_inc_quartile'] = 4
# 2) add tenure
#T Home Ownership
#V 0 . Not in universe (vacant or GQ)
#V 1 . Owned by you or someone in this household with a mortgage or loan
#V 2 . Owned by you or someone in this household free and clear (without a mortgage or loan)
#V 3 . Rented for cash rent
#V 4 . Occupied without payment of cash rent
pums_00_ba.loc[(pums_00_ba.ten == 1.0) | (pums_00_ba.ten == 2.0), 'tenure'] = 'own'
pums_00_ba.loc[(pums_00_ba.ten == 3.0), 'tenure'] = 'rent'
# 3) add boolean for whether household moved in the last 5 years -- 2000 PUMS provides last 5 years
#T Year Moved In
#V blank . Not in universe (vacant or GQ)
#V 1 . 1999 or 2000
#V 2 . 1995 to 1998
#V 3 . 1990 to 1994
#V 4 . 1980 to 1989
#V 5 . 1970 to 1979
#V 6 . 1969 or earlier
pums_00_ba.loc[(pums_00_ba.yrmoved == 1.0) | (pums_00_ba.yrmoved == 2.0), 'moved_last_5yrs'] = 1
pums_00_ba.loc[(pums_00_ba.yrmoved == 3.0) | (pums_00_ba.yrmoved == 4.0) | (pums_00_ba.yrmoved == 5.0) | (pums_00_ba.yrmoved == 6.0), 'moved_last_5yrs'] = 0
# subset into tenure X income
own_q1_00 = pums_00_ba[(pums_00_ba.tenure == 'own') & (pums_00_ba.hh_inc_quartile == 1.0)]
own_q1_mv_00 = own_q1_00[own_q1_00.moved_last_5yrs==1]
own_q2_00 = pums_00_ba[(pums_00_ba.tenure == 'own') & (pums_00_ba.hh_inc_quartile == 2.0)]
own_q2_mv_00 = own_q2_00[own_q2_00.moved_last_5yrs==1]
own_q3_00 = pums_00_ba[(pums_00_ba.tenure == 'own') & (pums_00_ba.hh_inc_quartile == 3.0)]
own_q3_mv_00 = own_q3_00[own_q3_00.moved_last_5yrs==1]
own_q4_00 = pums_00_ba[(pums_00_ba.tenure == 'own') & (pums_00_ba.hh_inc_quartile == 4.0)]
own_q4_mv_00 = own_q4_00[own_q4_00.moved_last_5yrs==1]
rent_q1_00 = pums_00_ba[(pums_00_ba.tenure == 'rent') & (pums_00_ba.hh_inc_quartile == 1.0)]
rent_q1_mv_00 = rent_q1_00[rent_q1_00.moved_last_5yrs==1]
rent_q2_00 = pums_00_ba[(pums_00_ba.tenure == 'rent') & (pums_00_ba.hh_inc_quartile == 2.0)]
rent_q2_mv_00 = rent_q2_00[rent_q2_00.moved_last_5yrs==1]
rent_q3_00 = pums_00_ba[(pums_00_ba.tenure == 'rent') & (pums_00_ba.hh_inc_quartile == 3.0)]
rent_q3_mv_00 = rent_q3_00[rent_q3_00.moved_last_5yrs==1]
rent_q4_00 = pums_00_ba[(pums_00_ba.tenure == 'rent') & (pums_00_ba.hh_inc_quartile == 4.0)]
rent_q4_mv_00 = rent_q4_00[rent_q4_00.moved_last_5yrs==1]
# get proportion of movers within those groups, weighted first
own_q1_move_prop_00 = float(own_q1_mv_00.hweight.sum())/float(own_q1_00.hweight.sum())
print(own_q1_move_prop_00)
own_q2_move_prop_00 = float(own_q2_mv_00.hweight.sum())/float(own_q2_00.hweight.sum())
print(own_q2_move_prop_00)
own_q3_move_prop_00 = float(own_q3_mv_00.hweight.sum())/float(own_q3_00.hweight.sum())
print(own_q3_move_prop_00)
own_q4_move_prop_00 = float(own_q4_mv_00.hweight.sum())/float(own_q4_00.hweight.sum())
print(own_q4_move_prop_00)
rent_q1_move_prop_00 = float(rent_q1_mv_00.hweight.sum())/float(rent_q1_00.hweight.sum())
print(rent_q1_move_prop_00)
rent_q2_move_prop_00 = float(rent_q2_mv_00.hweight.sum())/float(rent_q2_00.hweight.sum())
print(rent_q2_move_prop_00)
rent_q3_move_prop_00 = float(rent_q3_mv_00.hweight.sum())/float(rent_q3_00.hweight.sum())
print(rent_q3_move_prop_00)
rent_q4_move_prop_00 = float(rent_q4_mv_00.hweight.sum())/float(rent_q4_00.hweight.sum())
print(rent_q4_move_prop_00)
# #### 1990? -- haven't been able to get a usable PUMS file format / iPUMS doesn't have the needed vars
# +
# https://www.census.gov/data/tables/2017/demo/geographic-mobility/cps-2017.html
# https://www.census.gov/data/tables/2018/demo/geographic-mobility/cps-2018.html
# 2016-2017: it does seem like for "persons" low inc moves more, but for "householders" higher inc moves more
# +
# https://www.theatlantic.com/business/archive/2017/10/geographic-mobility-and-housing/542439/
# Highly educated people still relocate for work, but exorbitant housing costs in the best-paying cities
# make it difficult for anyone else to do so.
# +
# https://www.npr.org/templates/story/story.php?storyId=235384213
# Staying Put: Why Income Inequality Is Up And Geographic Mobility Is Down
# Median income is now a little below what it was in the late 1990s.
# And you combine that with rising housing prices, then it becomes difficult for people
# to move to jobs because they can't afford to live where the new jobs are.
# +
# q1 has less ability to manage rent burden
# q1 could potentially be helped by rent control or deed-restricted units
# q4 has more means to move
# q4 could also be helped by rent control (which matters less...)
# -
| applications/travel_model_lu_inputs/2015/Deprecated/household_relocation_rates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import math
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# +
# Import the xlsx file and store each sheet in a dict of DataFrames
xl_file = pd.ExcelFile('./data.xlsx')
dfs = {sheet_name: xl_file.parse(sheet_name)
for sheet_name in xl_file.sheet_names}
# -
# Data from each sheet can be accessed via key
keyList = list(dfs.keys())
# Data cleansing
for key, df in dfs.items():
    dfs[key].loc[:, 'Confirmed'].fillna(value=0, inplace=True)
    dfs[key].loc[:, 'Deaths'].fillna(value=0, inplace=True)
    dfs[key].loc[:, 'Recovered'].fillna(value=0, inplace=True)
    dfs[key] = dfs[key].astype({'Confirmed': 'int64', 'Deaths': 'int64', 'Recovered': 'int64'})
    # Rename 'Mainland China' to 'China' for the coordinate search
    dfs[key] = dfs[key].replace({'Country/Region': 'Mainland China'}, 'China')
    dfs[key] = dfs[key].replace({'Province/State': 'Queensland'}, 'Brisbane')
    dfs[key] = dfs[key].replace({'Province/State': 'New South Wales'}, 'Sydney')
    dfs[key] = dfs[key].replace({'Province/State': 'Victoria'}, 'Melbourne')
    # Prepend a zero to the date so datetime.strptime can parse it as a zero-padded date
    dfs[key]['Last Update'] = '0' + dfs[key]['Last Update']
    # Convert time to Australian eastern daylight time
    dfs[key]['Date_last_updated_AEDT'] = [datetime.strptime(d, '%m/%d/%Y %H:%M') for d in dfs[key]['Last Update']]
    dfs[key]['Date_last_updated_AEDT'] = dfs[key]['Date_last_updated_AEDT'] + timedelta(hours=16)
# Check
dfs[keyList[1]].head()
# Import data with coordinates (coordinates were fetched separately in "Updated_coordinates")
dfs[keyList[0]]=pd.read_csv('{}_data.csv'.format(keyList[0]))
# Save numbers into variables to use in the app
confirmedCases=dfs[keyList[0]]['Confirmed'].sum()
deathsCases=dfs[keyList[0]]['Deaths'].sum()
recoveredCases=dfs[keyList[0]]['Recovered'].sum()
confirmedCases
# +
# Construct new dataframe for line plot
DateList = []
ChinaList =[]
OtherList = []
for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Confirmed'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code': dfTpm.index, 'Confirmed': dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Confirmed', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Confirmed'][0])
    OtherList.append(dfTpm['Confirmed'][1:].sum())
df_confirmed = pd.DataFrame({'Date':DateList,
'Mainland China':ChinaList,
'Other locations':OtherList})
# -
df_confirmed['date_day']=[d.date() for d in df_confirmed['Date']]
df_confirmed=df_confirmed.groupby(by=df_confirmed['date_day'], sort=False).transform(max).drop_duplicates(['Date'])
df_confirmed['Total']=df_confirmed['Mainland China']+df_confirmed['Other locations']
df_confirmed=df_confirmed.reset_index(drop=True)
df_confirmed
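The `transform`-then-`drop_duplicates` step above keeps one row per day carrying that day's maximum of each column; a toy frame shows the effect (illustrative data only):

```python
import pandas as pd

toy = pd.DataFrame({'day': ['d1', 'd1', 'd2'], 'cases': [5, 9, 12]})
# broadcast the per-day max back onto every row, then drop the duplicate rows
daily_max = toy.groupby(toy['day'], sort=False).transform('max').drop_duplicates()
print(daily_max)
```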
plusConfirmedNum = df_confirmed['Total'][0] - df_confirmed['Total'][1]
plusPercentNum1 = (df_confirmed['Total'][0] - df_confirmed['Total'][1])/df_confirmed['Total'][1]
plusPercentNum1
# +
# Construct new dataframe for line plot
DateList = []
ChinaList =[]
OtherList = []
for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Recovered'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code': dfTpm.index, 'Recovered': dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Recovered', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Recovered'][0])
    OtherList.append(dfTpm['Recovered'][1:].sum())
df_recovered = pd.DataFrame({'Date':DateList,
'Mainland China':ChinaList,
'Other locations':OtherList})
# -
df_recovered['date_day']=[d.date() for d in df_recovered['Date']]
df_recovered=df_recovered.groupby(by=df_recovered['date_day'], sort=False).transform(max).drop_duplicates(['Date'])
df_recovered['Total']=df_recovered['Mainland China']+df_recovered['Other locations']
df_recovered=df_recovered.reset_index(drop=True)
df_recovered
plusRecoveredNum = df_recovered['Total'][0] - df_recovered['Total'][1]
plusPercentNum2 = (df_recovered['Total'][0] - df_recovered['Total'][1])/df_recovered['Total'][1]
plusPercentNum2
# +
# Construct new dataframe for line plot
DateList = []
ChinaList =[]
OtherList = []
for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Deaths'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code': dfTpm.index, 'Deaths': dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Deaths', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Deaths'][0])
    OtherList.append(dfTpm['Deaths'][1:].sum())
df_deaths = pd.DataFrame({'Date':DateList,
'Mainland China':ChinaList,
'Other locations':OtherList})
# -
df_deaths['date_day']=[d.date() for d in df_deaths['Date']]
df_deaths=df_deaths.groupby(by='date_day', sort=False).transform(max).drop_duplicates(['Date'])
df_deaths['Total']=df_deaths['Mainland China']+df_deaths['Other locations']
df_deaths=df_deaths.reset_index(drop=True)
df_deaths
plusDeathNum = df_deaths['Total'][0] - df_deaths['Total'][1]
plusPercentNum3 = (df_deaths['Total'][0] - df_deaths['Total'][1])/df_deaths['Total'][1]
plusPercentNum3
# Save numbers into variables to use in the app
latestDate=datetime.strftime(df_confirmed['Date'][0], '%b %d %Y %H:%M AEDT')
secondLastDate=datetime.strftime(df_confirmed['Date'][1], '%b %d')
daysOutbreak=(df_confirmed['Date'][0] - datetime.strptime('12/31/2019', '%m/%d/%Y')).days
secondLastDate
# +
# Line plot for confirmed cases
# Set up tick scale based on confirmed case number
tickList = list(np.arange(0, df_confirmed['Mainland China'].max()+1000, 2000))
# Create empty figure canvas
fig_confirmed = go.Figure()
# Add trace to the figure
fig_confirmed.add_trace(go.Scatter(x=df_confirmed['Date'], y=df_confirmed['Mainland China'],
mode='lines+markers',
name='Mainland China',
line=dict(color='#921113', width=3),
marker=dict(size=8, color='#f4f4f2',
line=dict(width=1,color='#921113')),
text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_confirmed['Date']],
hovertext=['Mainland China confirmed<br>{:,d} cases<br>'.format(i) for i in df_confirmed['Mainland China']],
hovertemplate='<b>%{text}</b><br></br>'+
'%{hovertext}'+
'<extra></extra>'))
fig_confirmed.add_trace(go.Scatter(x=df_confirmed['Date'], y=df_confirmed['Other locations'],
mode='lines+markers',
name='Other locations',
line=dict(color='#eb5254', width=3),
marker=dict(size=8, color='#f4f4f2',
line=dict(width=1,color='#eb5254')),
text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_confirmed['Date']],
hovertext=['Other locations confirmed<br>{:,d} cases<br>'.format(i) for i in df_confirmed['Other locations']],
hovertemplate='<b>%{text}</b><br></br>'+
'%{hovertext}'+
'<extra></extra>'))
# Customise layout
fig_confirmed.update_layout(
#title=dict(
# text="<b>Confirmed Cases Timeline<b>",
# y=0.96, x=0.5, xanchor='center', yanchor='top',
# font=dict(size=20, color="#292929", family="Playfair Display")
#),
margin=go.layout.Margin(
l=10,
r=10,
b=10,
t=5,
pad=0
),
yaxis=dict(
showline=True, linecolor='#272e3e',
zeroline=False,
gridcolor='#cbd2d3',
gridwidth = .1,
tickmode='array',
# Set tick range based on the maximum number
tickvals=tickList,
# Set tick label accordingly
ticktext=["{:.0f}k".format(i/1000) for i in tickList]
),
# yaxis_title="Total Confirmed Case Number",
xaxis=dict(
showline=True, linecolor='#272e3e',
gridcolor='#cbd2d3',
gridwidth = .1,
zeroline=False
),
xaxis_tickformat='%b %d',
hovermode = 'x',
legend_orientation="h",
# legend=dict(x=.35, y=-.05),
plot_bgcolor='#f4f4f2',
paper_bgcolor='#cbd2d3',
font=dict(color='#292929')
)
fig_confirmed.show()
# +
# Line plot for recovered cases
# Set up tick scale based on recovered case number
tickList = list(np.arange(0, df_recovered['Mainland China'].max()+100, 100))
# Create empty figure canvas
fig_recovered = go.Figure()
# Add trace to the figure
fig_recovered.add_trace(go.Scatter(x=df_recovered['Date'], y=df_recovered['Mainland China'],
mode='lines+markers',
name='Mainland China',
line=dict(color='#168038', width=3),
marker=dict(size=8, color='#f4f4f2',
line=dict(width=1,color='#168038')),
text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_recovered['Date']],
hovertext=['Mainland China recovered<br>{:,d} cases<br>'.format(i) for i in df_recovered['Mainland China']],
hovertemplate='<b>%{text}</b><br></br>'+
'%{hovertext}'+
'<extra></extra>'))
fig_recovered.add_trace(go.Scatter(x=df_recovered['Date'], y=df_recovered['Other locations'],
mode='lines+markers',
name='Other locations',
line=dict(color='#25d75d', width=3),
marker=dict(size=8, color='#f4f4f2',
line=dict(width=1,color='#25d75d')),
text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_recovered['Date']],
hovertext=['Other locations recovered<br>{:,d} cases<br>'.format(i) for i in df_recovered['Other locations']],
hovertemplate='<b>%{text}</b><br></br>'+
'%{hovertext}'+
'<extra></extra>'))
# Customise layout
fig_recovered.update_layout(
#title=dict(
# text="<b>Recovered Cases Timeline<b>",
# y=0.96, x=0.5, xanchor='center', yanchor='top',
# font=dict(size=20, color="#292929", family="Playfair Display")
#),
margin=go.layout.Margin(
l=10,
r=10,
b=10,
t=5,
pad=0
),
yaxis=dict(
showline=True, linecolor='#272e3e',
zeroline=False,
gridcolor='#cbd2d3',
gridwidth = .1,
tickmode='array',
# Set tick range based on the maximum number
tickvals=tickList,
# Set tick label accordingly
ticktext=['{:.0f}'.format(i) for i in tickList]
),
# yaxis_title="Total Recovered Case Number",
xaxis=dict(
showline=True, linecolor='#272e3e',
gridcolor='#cbd2d3',
gridwidth = .1,
zeroline=False
),
xaxis_tickformat='%b %d',
hovermode = 'x',
legend_orientation="h",
# legend=dict(x=.35, y=-.05),
plot_bgcolor='#f4f4f2',
paper_bgcolor='#cbd2d3',
font=dict(color='#292929')
)
fig_recovered.show()
# +
# Line plot for death cases
# Set up tick scale based on death case number
tickList = list(np.arange(0, df_deaths['Mainland China'].max()+100, 100))
# Create empty figure canvas
fig_deaths = go.Figure()
# Add trace to the figure
fig_deaths.add_trace(go.Scatter(x=df_deaths['Date'], y=df_deaths['Mainland China'],
mode='lines+markers',
name='Mainland China',
line=dict(color='#626262', width=3),
marker=dict(size=8, color='#f4f4f2',
line=dict(width=1,color='#626262')),
text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_deaths['Date']],
hovertext=['Mainland China death<br>{:,d} cases<br>'.format(i) for i in df_deaths['Mainland China']],
hovertemplate='<b>%{text}</b><br></br>'+
'%{hovertext}'+
'<extra></extra>'))
fig_deaths.add_trace(go.Scatter(x=df_deaths['Date'], y=df_deaths['Other locations'],
mode='lines+markers',
name='Other locations',
line=dict(color='#a7a7a7', width=3),
marker=dict(size=8, color='#f4f4f2',
line=dict(width=1,color='#a7a7a7')),
text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_deaths['Date']],
hovertext=['Other locations death<br>{:,d} cases<br>'.format(i) for i in df_deaths['Other locations']],
hovertemplate='<b>%{text}</b><br></br>'+
'%{hovertext}'+
'<extra></extra>'))
# Customise layout
fig_deaths.update_layout(
# title=dict(
# text="<b>Death Cases Timeline<b>",
# y=0.96, x=0.5, xanchor='center', yanchor='top',
# font=dict(size=20, color="#292929", family="Playfair Display")
# ),
margin=go.layout.Margin(
l=10,
r=10,
b=10,
t=5,
pad=0
),
yaxis=dict(
showline=True, linecolor='#272e3e',
zeroline=False,
gridcolor='#cbd2d3',
gridwidth = .1,
tickmode='array',
# Set tick range based on the maximum number
tickvals=tickList,
# Set tick label accordingly
ticktext=['{:.0f}'.format(i) for i in tickList]
),
# yaxis_title="Total Death Case Number",
xaxis=dict(
showline=True, linecolor='#272e3e',
gridcolor='#cbd2d3',
gridwidth = .1,
zeroline=False
),
xaxis_tickformat='%b %d',
hovermode = 'x',
legend_orientation="h",
# legend=dict(x=.35, y=-.05),
plot_bgcolor='#f4f4f2',
paper_bgcolor='#cbd2d3',
font=dict(color='#292929')
)
fig_deaths.show()
# +
mapbox_access_token = "<KEY>"
# Generate a list for hover text display
textList=[]
for area, region in zip(dfs[keyList[0]]['Province/State'], dfs[keyList[0]]['Country/Region']):
if type(area) is str:
if region == "Hong Kong" or region == "Macau" or region == "Taiwan":
textList.append(area)
else:
textList.append(area+', '+region)
else:
textList.append(region)
fig2 = go.Figure(go.Scattermapbox(
lat=dfs[keyList[0]]['lat'],
lon=dfs[keyList[0]]['lon'],
mode='markers',
marker=go.scattermapbox.Marker(
color='#ca261d',
size=dfs[keyList[0]]['Confirmed'].tolist(),
sizemin=2,
sizemode='area',
sizeref=2.*max(dfs[keyList[0]]['Confirmed'].tolist())/(80.**2),
),
text=textList,
hovertext=['Confirmed: {}<br>Recovered: {}<br>Death: {}'.format(i, j, k) for i, j, k in zip(dfs[keyList[0]]['Confirmed'],
dfs[keyList[0]]['Recovered'],
dfs[keyList[0]]['Deaths'])],
hovertemplate = "<b>%{text}</b><br><br>" +
"%{hovertext}<br>" +
"<extra></extra>")
)
fig2.update_layout(
# title=dict(
# text="<b>Latest Coronavirus Outbreak Map<b>",
# y=0.96, x=0.5, xanchor='center', yanchor='top',
# font=dict(size=20, color="#292929", family="Playfair Display")
# ),
plot_bgcolor='#151920',
paper_bgcolor='#cbd2d3',
margin=go.layout.Margin(
l=10,
r=10,
b=10,
t=0,
pad=40
),
hovermode='closest',
mapbox=go.layout.Mapbox(
accesstoken=mapbox_access_token,
style="light",
bearing=0,
center=go.layout.mapbox.Center(
lat=29.538860,
lon=173.304781
),
pitch=0,
zoom=2
)
)
fig2.show()
# -
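# The marker sizes in the map above use Plotly's recommended area-mode scaling,
# sizeref = 2 * max(size) / (desired_maximum_marker_size ** 2); a minimal
# sketch of that formula (helper name is illustrative):

```python
def area_sizeref(sizes, max_marker_px=80.0):
    # Scale factor so the largest value maps to roughly max_marker_px pixels
    # when sizemode='area' is used on a Plotly marker.
    return 2.0 * max(sizes) / (max_marker_px ** 2)

print(area_sizeref([10, 500, 2000]))  # 0.625
```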
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
app = dash.Dash(__name__, assets_folder='./assets/',
meta_tags=[
{"name": "viewport", "content": "width=device-width, height=device-height, initial-scale=1.0"}
]
)
app.layout = html.Div(style={'backgroundColor':'#f4f4f2'},
children=[
html.Div(
id="header",
children=[
html.H4(children="Wuhan Coronavirus (2019-nCoV) Outbreak Monitor"),
html.P(
id="description",
children="On Dec 31, 2019, the World Health Organization (WHO) was informed of \
an outbreak of “pneumonia of unknown cause” detected in Wuhan City, Hubei Province, China – the \
seventh-largest city in China with 11 million residents. As of {}, there are over {:,d} cases \
of 2019-nCoV confirmed globally.\
This dashboard is developed to visualise and track the recently reported \
cases on a daily timescale.".format(latestDate, confirmedCases),
),
html.P(style={'fontWeight':'bold'},
children="Last updated on {}.".format(latestDate))
]
),
html.Div(
id="number-plate",
style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.5%'},
children=[
html.Div(
style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block',
'marginRight':'.8%','verticalAlign':'top'},
children=[
html.H3(style={'textAlign':'center',
'fontWeight':'bold','color':'#ffffbf'},
children=[
html.P(style={'fontSize':'2rem','color':'#cbd2d3','padding':'.5rem'},
children='x'),
'{}'.format(daysOutbreak),
]),
html.P(style={'textAlign':'center',
'fontWeight':'bold','color':'#ffffbf','padding':'.1rem'},
children="Days Since Outbreak")
]),
html.Div(
style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block',
'marginRight':'.8%','verticalAlign':'top'},
children=[
html.H3(style={'textAlign':'center',
'fontWeight':'bold','color':'#d7191c'},
children=[
html.P(style={'fontSize':'2rem','padding':'.5rem'},
children='+ {:,d} from yesterday ({:.1%})'.format(plusConfirmedNum, plusPercentNum1)),
'{:,d}'.format(confirmedCases)
]),
html.P(style={'textAlign':'center',
'fontWeight':'bold','color':'#d7191c','padding':'.1rem'},
children="Confirmed Cases")
]),
html.Div(
style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block',
'marginRight':'.8%','verticalAlign':'top'},
children=[
html.H3(style={'textAlign':'center',
'fontWeight':'bold','color':'#1a9622'},
children=[
html.P(style={'fontSize':'2rem','padding':'.5rem'},
children='+ {:,d} from yesterday ({:.1%})'.format(plusRecoveredNum, plusPercentNum2)),
'{:,d}'.format(recoveredCases),
]),
html.P(style={'textAlign':'center',
'fontWeight':'bold','color':'#1a9622','padding':'.1rem'},
children="Recovered Cases")
]),
html.Div(
style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block',
'verticalAlign':'top'},
children=[
html.H3(style={'textAlign':'center',
'fontWeight':'bold','color':'#6c6c6c'},
children=[
html.P(style={'fontSize':'2rem','padding':'.5rem'},
children='+ {:,d} from yesterday ({:.1%})'.format(plusDeathNum, plusPercentNum3)),
'{:,d}'.format(deathsCases)
]),
html.P(style={'textAlign':'center',
'fontWeight':'bold','color':'#6c6c6c','padding':'.1rem'},
children="Death Cases")
])
]),
html.Div(
id='dcc-plot',
style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.35%','marginTop':'.5%'},
children=[
html.Div(
style={'width':'32.79%','display':'inline-block','marginRight':'.8%','verticalAlign':'top'},
children=[
html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3',
'color':'#292929','padding':'1rem','marginBottom':'0'},
children='Confirmed Case Timeline'),
dcc.Graph(figure=fig_confirmed)]),
html.Div(
style={'width':'32.79%','display':'inline-block','marginRight':'.8%','verticalAlign':'top'},
children=[
html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3',
'color':'#292929','padding':'1rem','marginBottom':'0'},
children='Recovered Case Timeline'),
dcc.Graph(figure=fig_recovered)]),
html.Div(
style={'width':'32.79%','display':'inline-block','verticalAlign':'top'},
children=[
html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3',
'color':'#292929','padding':'1rem','marginBottom':'0'},
children='Death Case Timeline'),
dcc.Graph(figure=fig_deaths)])]),
html.Div(
id='dcc-map',
style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.5%'},
children=[
html.Div(style={'width':'100%','display':'inline-block','verticalAlign':'top'},
children=[
html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3',
'color':'#292929','padding':'1rem','marginBottom':'0'},
children='Latest Coronavirus Outbreak Map'),
dcc.Graph(figure=fig2)]),]),
html.Div(style={'marginLeft':'1.5%','marginRight':'1.5%'},
children=[
html.P(style={'textAlign':'center','margin':'auto'},
children=["Data source from ",
html.A('Dingxiangyuan, ', href='https://ncov.dxy.cn/ncovh5/view/pneumonia?sce\
ne=2&clicktime=1579582238&enterid=1579582238&from=singlemessage&isappinstalled=0'),
html.A('Tencent News, ', href='https://news.qq.com//zt2020/page/feiyan.htm#charts'),
'and ',
html.A('JHU CSSE', href='https://docs.google.com/spreadsheets/d/1yZv9w9z\
RKwrGTaR-YzmAqMefw4wMlaXocejdxZaTs6w/htmlview?usp=sharing&sle=true#'),
" | 🙏 Pray for China, Pray for the World 🙏 |",
" Developed by ",html.A('Jun', href='https://junye0798.com/')," with ❤️"])])
])
if __name__ == '__main__':
app.run_server(port=8882)
| dash-2019-coronavirus/dashboard-virus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Build a napari plugin for feature visualization
# Use the cookiecutter template: https://github.com/napari/cookiecutter-napari-plugin
# Magicgui for interface parts? https://napari.org/magicgui/
# Napari plugin instructions: https://napari.org/plugins/stable/for_plugin_developers.html
# Example project: https://github.com/jni/affinder/blob/393b0c666622fb65835ef056ed5233c2bd1034f2/affinder/affinder.py#L85-L97
# Forum discussion: https://forum.image.sc/t/visualizing-feature-measurements-in-napari-using-colormaps-as-luts/51567/6
# +
import numpy as np
import napari_feature_visualization
# %gui qt5
# Note that this Magics command needs to be run in a cell
# before any of the Napari objects are instantiated to
# ensure it has time to finish executing before they are
# called
import napari
# -
# Get the dummy label image from the ITK bug report and make a dummy dataframe to test this
shape = (1, 50, 50)
lbl_img_np = np.zeros(shape).astype('uint16')
lbl_img_np[0, 5:10, 5:10] = 1
lbl_img_np[0, 15:20, 5:10] = 2
lbl_img_np[0, 25:30, 5:10] = 3
lbl_img_np[0, 5:10, 15:20] = 4
lbl_img_np[0, 15:20, 15:20] = 5
lbl_img_np[0, 25:30, 15:20] = 6
import pandas as pd
# Dummy df for this test
d = {'test': [-100, 200, 300, 500, 900, 300], 'label': [1, 2, 3, 4, 5, 6], 'feature1': [100, 200, 300, 500, 900, 1001], 'feature2': [2200, 2100, 2000, 1500, 1300, 1001]}
df = pd.DataFrame(data=d)
df.to_csv('test_df_2.csv', index=False)
# +
viewer = napari.Viewer()
viewer.add_image(lbl_img_np)
viewer.add_labels(lbl_img_np, name='labels')
viewer.window.add_plugin_dock_widget('napari-feature-visualization')
viewer.window.activate()
# Open napari with my plugin
#napari -w myplugin
# -
viewer.add_image(lbl_img_np)
feature='feature1'
properties_array = np.zeros(df['label'].max() + 1)
properties_array[df['label']] = df[feature]
label_properties = {feature: np.round(properties_array, decimals=2)}
viewer.add_labels(
lbl_img_np,
name=feature,
properties=label_properties,
opacity=1,
)
label_properties
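# The cell above builds a lookup array that maps each label id to its feature
# value via NumPy fancy indexing; a self-contained sketch of that pattern:

```python
import numpy as np

labels = np.array([1, 2, 3])               # label ids present in the image
values = np.array([100.0, 200.0, 300.0])   # one feature value per label

lut = np.zeros(labels.max() + 1)           # index 0 stays 0 for the background
lut[labels] = values                       # scatter values to their label slots
print(lut.tolist())  # [0.0, 100.0, 200.0, 300.0]
```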
| napari_feature_visualization/napari-feature-vis_test_case.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ### ridge regression
# + deletable=true editable=true
# import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import Ridge
import pandas as pd
# + deletable=true editable=true
def load_data(filename):
cols=["N", "Time_in_us"]
return pd.read_csv(filename, names=cols)
df = load_data('dev0_h2d_stepsize4.csv')
# df = load_data('dev1_h2d_stepsize4.csv')
X = df["N"]
y = df["Time_in_us"]
X = np.array(X)
y = np.array(y)
X = X.reshape(-1,1)
y = y.reshape(-1,1)
# -
X.shape
# + deletable=true editable=true
clf = Ridge(alpha=1.0)
clf.fit(X, y)
# + deletable=true editable=true
print(clf.get_params())
# + deletable=true editable=true
print("Mean squared error: %.6f" % np.mean((clf.predict(X) - y) ** 2))
print('Variance score: %.6f' % clf.score(X, y))
# + [markdown] deletable=true editable=true
# ### testing
# + deletable=true editable=true
clf.predict([[500]])
# + deletable=true editable=true
clf.predict([[1000]])
# + deletable=true editable=true
clf.predict([[500000]])
# + deletable=true editable=true
# 1m
clf.predict([[1000000]])
# + deletable=true editable=true
# 10m
clf.predict([[10000000]])
# + deletable=true editable=true
# 100m
clf.predict([[100000000]])
# + [markdown] deletable=true editable=true
# ### contention
# + deletable=true editable=true
clf.predict([[409600]])
# + deletable=true editable=true
clf.predict([[819200]])
# + [markdown] deletable=true editable=true
# ### permutation
# + deletable=true editable=true
np.random.seed(42)
sample = np.random.choice(df.index, size= int(len(df) * 0.9), replace=False)
data, test_data = df.loc[sample], df.drop(sample)  # .ix was removed in modern pandas; .loc is the replacement
# + deletable=true editable=true
X_train, y_train = data["N"], data["Time_in_us"]
X_test, y_test = test_data["N"], test_data["Time_in_us"]
X_train = X_train.reshape(-1,1)
y_train = y_train.reshape(-1,1)
X_test = X_test.reshape(-1,1)
y_test = y_test.reshape(-1,1)
clf.fit(X_train, y_train)
# print lr_model.coef_
# print lr_model.intercept_
# + deletable=true editable=true
print("Mean squared error: %.6f" % np.mean((clf.predict(X_test) - y_test) ** 2))
print('Variance score: %.6f' % clf.score(X_test, y_test))
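# The permutation split above samples 90% of the row indices without
# replacement; the same idea as a pure-Python sketch (names illustrative):

```python
import random

random.seed(42)
n = 20
idx = list(range(n))
random.shuffle(idx)
cut = int(n * 0.9)                    # 90% train / 10% test
train_idx, test_idx = idx[:cut], idx[cut:]
print(len(train_idx), len(test_idx))  # 18 2
```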
| logGP_lr_ridge/ridge_reg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.12 ('wikirl-gym')
# language: python
# name: python3
# ---
# +
import gym
import numpy as np
import torch
import wandb
import argparse
import pickle
import random
import sys
sys.path.append('/Users/shiro/research/projects/rl-nlp/can-wikipedia-help-offline-rl/code')
from decision_transformer.evaluation.evaluate_episodes import (
evaluate_episode,
evaluate_episode_rtg,
)
from decision_transformer.models.decision_transformer import DecisionTransformer
from decision_transformer.models.mlp_bc import MLPBCModel
from decision_transformer.training.act_trainer import ActTrainer
from decision_transformer.training.seq_trainer import SequenceTrainer
from utils import get_optimizer
import os
from tqdm.notebook import tqdm
import seaborn as sns
import matplotlib.pyplot as plt
# +
def discount_cumsum(x, gamma):
discount_cumsum = np.zeros_like(x)
discount_cumsum[-1] = x[-1]
for t in reversed(range(x.shape[0] - 1)):
discount_cumsum[t] = x[t] + gamma * discount_cumsum[t + 1]
return discount_cumsum
def prepare_data(variant):
env_name, dataset = variant["env"], variant["dataset"]
model_type = variant["model_type"]
exp_prefix = 'gym-experiment'
group_name = f"{exp_prefix}-{env_name}-{dataset}"
exp_prefix = f"{group_name}-{random.randint(int(1e5), int(1e6) - 1)}"
if env_name == "hopper":
env = gym.make("Hopper-v3")
max_ep_len = 1000
env_targets = [3600, 1800] # evaluation conditioning targets
scale = 1000.0 # normalization for rewards/returns
elif env_name == "halfcheetah":
env = gym.make("HalfCheetah-v3")
max_ep_len = 1000
env_targets = [12000, 6000]
scale = 1000.0
elif env_name == "walker2d":
env = gym.make("Walker2d-v3")
max_ep_len = 1000
env_targets = [5000, 2500]
scale = 1000.0
elif env_name == "reacher2d":
from decision_transformer.envs.reacher_2d import Reacher2dEnv
env = Reacher2dEnv()
max_ep_len = 100
env_targets = [76, 40]
scale = 10.0
else:
raise NotImplementedError
if model_type == "bc":
env_targets = env_targets[
:1
] # since BC ignores target, no need for different evaluations
state_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
# load dataset
dataset_path = f"../data/{env_name}-{dataset}-v2.pkl"
with open(dataset_path, "rb") as f:
trajectories = pickle.load(f)
# save all path information into separate lists
mode = variant.get("mode", "normal")
states, traj_lens, returns = [], [], []
for path in trajectories:
if mode == "delayed": # delayed: all rewards moved to end of trajectory
path["rewards"][-1] = path["rewards"].sum()
path["rewards"][:-1] = 0.0
states.append(path["observations"])
traj_lens.append(len(path["observations"]))
returns.append(path["rewards"].sum())
traj_lens, returns = np.array(traj_lens), np.array(returns)
# used for input normalization
states = np.concatenate(states, axis=0)
state_mean, state_std = np.mean(states, axis=0), np.std(states, axis=0) + 1e-6
num_timesteps = sum(traj_lens)
print("=" * 50)
print(f"Starting new experiment: {env_name} {dataset}")
print(f"{len(traj_lens)} trajectories, {num_timesteps} timesteps found")
print(f"Average return: {np.mean(returns):.2f}, std: {np.std(returns):.2f}")
print(f"Max return: {np.max(returns):.2f}, min: {np.min(returns):.2f}")
print("=" * 50)
pct_traj = variant.get("pct_traj", 1.0)
# only train on top pct_traj trajectories (for %BC experiment)
num_timesteps = max(int(pct_traj * num_timesteps), 1)
sorted_inds = np.argsort(returns) # lowest to highest
num_trajectories = 1
timesteps = traj_lens[sorted_inds[-1]]
ind = len(trajectories) - 2
while ind >= 0 and timesteps + traj_lens[sorted_inds[ind]] < num_timesteps:
timesteps += traj_lens[sorted_inds[ind]]
num_trajectories += 1
ind -= 1
sorted_inds = sorted_inds[-num_trajectories:]
# used to reweight sampling so we sample according to timesteps instead of trajectories
p_sample = traj_lens[sorted_inds] / sum(traj_lens[sorted_inds])
return trajectories, sorted_inds, state_dim, act_dim, max_ep_len, state_mean, state_std, num_trajectories, p_sample, scale
def get_batch(
batch_size,
max_len,
trajectories,
sorted_inds,
state_dim,
act_dim,
max_ep_len,
state_mean,
state_std,
num_trajectories,
p_sample,
scale,
device
):
batch_inds = np.random.choice(
np.arange(num_trajectories),
size=batch_size,
replace=True,
p=p_sample, # reweights so we sample according to timesteps
)
s, a, r, d, rtg, timesteps, mask = [], [], [], [], [], [], []
for i in range(batch_size):
traj = trajectories[int(sorted_inds[batch_inds[i]])]
si = random.randint(0, traj["rewards"].shape[0] - 1)
# get sequences from dataset
s.append(traj["observations"][si : si + max_len].reshape(1, -1, state_dim))
a.append(traj["actions"][si : si + max_len].reshape(1, -1, act_dim))
r.append(traj["rewards"][si : si + max_len].reshape(1, -1, 1))
if "terminals" in traj:
d.append(traj["terminals"][si : si + max_len].reshape(1, -1))
else:
d.append(traj["dones"][si : si + max_len].reshape(1, -1))
timesteps.append(np.arange(si, si + s[-1].shape[1]).reshape(1, -1))
timesteps[-1][timesteps[-1] >= max_ep_len] = (
max_ep_len - 1
) # padding cutoff
rtg.append(
discount_cumsum(traj["rewards"][si:], gamma=1.0)[
: s[-1].shape[1] + 1
].reshape(1, -1, 1)
)
if rtg[-1].shape[1] <= s[-1].shape[1]:
rtg[-1] = np.concatenate([rtg[-1], np.zeros((1, 1, 1))], axis=1)
# padding and state + reward normalization
tlen = s[-1].shape[1]
s[-1] = np.concatenate(
[np.zeros((1, max_len - tlen, state_dim)), s[-1]], axis=1
)
s[-1] = (s[-1] - state_mean) / state_std
a[-1] = np.concatenate(
[np.ones((1, max_len - tlen, act_dim)) * -10.0, a[-1]], axis=1
)
r[-1] = np.concatenate([np.zeros((1, max_len - tlen, 1)), r[-1]], axis=1)
d[-1] = np.concatenate([np.ones((1, max_len - tlen)) * 2, d[-1]], axis=1)
rtg[-1] = (
np.concatenate([np.zeros((1, max_len - tlen, 1)), rtg[-1]], axis=1)
/ scale
)
timesteps[-1] = np.concatenate(
[np.zeros((1, max_len - tlen)), timesteps[-1]], axis=1
)
mask.append(
np.concatenate(
[np.zeros((1, max_len - tlen)), np.ones((1, tlen))], axis=1
)
)
s = torch.from_numpy(np.concatenate(s, axis=0)).to(
dtype=torch.float32, device=device
)
a = torch.from_numpy(np.concatenate(a, axis=0)).to(
dtype=torch.float32, device=device
)
r = torch.from_numpy(np.concatenate(r, axis=0)).to(
dtype=torch.float32, device=device
)
d = torch.from_numpy(np.concatenate(d, axis=0)).to(
dtype=torch.long, device=device
)
rtg = torch.from_numpy(np.concatenate(rtg, axis=0)).to(
dtype=torch.float32, device=device
)
timesteps = torch.from_numpy(np.concatenate(timesteps, axis=0)).to(
dtype=torch.long, device=device
)
mask = torch.from_numpy(np.concatenate(mask, axis=0)).to(device=device)
return s, a, r, d, rtg, timesteps, mask
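# discount_cumsum above fills discounted returns-to-go by a reverse
# accumulation; a pure-Python sketch with a hand-checked value:

```python
def discount_cumsum(x, gamma):
    # out[t] = x[t] + gamma * out[t + 1], filled from the last step backwards
    out = [0.0] * len(x)
    out[-1] = x[-1]
    for t in reversed(range(len(x) - 1)):
        out[t] = x[t] + gamma * out[t + 1]
    return out

print(discount_cumsum([1.0, 1.0, 1.0], 0.5))  # [1.75, 1.5, 1.0]
```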
# +
seed=666
epoch=1
env_name='hopper'
reward_state_action = 'state'
torch.manual_seed(seed)
dataset_name = 'medium'
model_names = ['gpt2', 'igpt', 'dt'] # ['gpt2', 'igpt', 'dt']
grad_norms_list = []
for model_name in model_names:
if model_name == 'gpt2':
pretrained_lm1 = 'gpt2'
elif model_name == 'clip':
pretrained_lm1 = 'openai/clip-vit-base-patch32'
elif model_name == 'igpt':
pretrained_lm1 = 'openai/imagegpt-small'
elif model_name == 'dt':
pretrained_lm1 = False
variant = {
'embed_dim': 768,
'n_layer': 12,
'n_head': 1,
'activation_function': 'relu',
'dropout': 0.2, # 0.1
'load_checkpoint': False if epoch==0 else f'../checkpoints/{model_name}_medium_{env_name}_666/model_{epoch}.pt',
'seed': seed,
'outdir': f"checkpoints/{model_name}_{dataset_name}_{env_name}_{seed}",
'env': env_name,
'dataset': dataset_name,
'model_type': 'dt',
'K': 20, # 2
'pct_traj': 1.0,
'batch_size': 100, # 64
'num_eval_episodes': 100,
'max_iters': 40,
'num_steps_per_iter': 2500,
'pretrained_lm': pretrained_lm1,
'gpt_kmeans': None,
'kmeans_cache': None,
'frozen': False,
'extend_positions': False,
'share_input_output_proj': True
}
os.makedirs(variant["outdir"], exist_ok=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
trajectories, sorted_inds, state_dim, act_dim, max_ep_len, state_mean, state_std, num_trajectories, p_sample, scale = prepare_data(variant)
K = variant["K"]
batch_size = variant["batch_size"]
loss_fn = lambda s_hat, a_hat, r_hat, s, a, r: torch.mean((a_hat - a) ** 2)
model = DecisionTransformer(
args=variant,
state_dim=state_dim,
act_dim=act_dim,
max_length=K,
max_ep_len=max_ep_len,
hidden_size=variant["embed_dim"],
n_layer=variant["n_layer"],
n_head=variant["n_head"],
n_inner=4 * variant["embed_dim"],
activation_function=variant["activation_function"],
n_positions=1024,
resid_pdrop=variant["dropout"],
attn_pdrop=0.1,
)
if variant["load_checkpoint"]:
state_dict = torch.load(variant["load_checkpoint"], map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
print(f"Loaded from {variant['load_checkpoint']}")
# model.eval()
# grad = {}
# def get_grad(name):
# def hook(model, input, output):
# grad[name] = output.detach()
# return hook
# for block_id in range(len(model.transformer.h)):
# model.transformer.h[block_id].ln_1.register_backward_hook(get_grad(f'{block_id}.ln_1'))
# model.transformer.h[block_id].attn.c_attn.register_backward_hook(get_grad(f'{block_id}.attn.c_attn'))
# model.transformer.h[block_id].attn.c_proj.register_backward_hook(get_grad(f'{block_id}.attn.c_proj'))
# model.transformer.h[block_id].attn.attn_dropout.register_backward_hook(get_grad(f'{block_id}.attn.attn_dropout'))
# model.transformer.h[block_id].attn.resid_dropout.register_backward_hook(get_grad(f'{block_id}.attn.resid_dropout'))
# model.transformer.h[block_id].ln_2.register_backward_hook(get_grad(f'{block_id}.ln_2'))
# model.transformer.h[block_id].mlp.c_fc.register_backward_hook(get_grad(f'{block_id}.mlp.c_fc'))
# model.transformer.h[block_id].mlp.c_proj.register_backward_hook(get_grad(f'{block_id}.mlp.c_proj'))
# model.transformer.h[block_id].mlp.act.register_backward_hook(get_grad(f'{block_id}.mlp.act'))
# model.transformer.h[block_id].mlp.dropout.register_backward_hook(get_grad(f'{block_id}.mlp.dropout'))
states, actions, rewards, dones, rtg, timesteps, attention_mask = get_batch(batch_size,
K,
trajectories,
sorted_inds,
state_dim,
act_dim,
max_ep_len,
state_mean,
state_std,
num_trajectories,
p_sample,
scale,
device
)
action_target = torch.clone(actions)
grads_list = []
for batch_id in tqdm(range(batch_size)):
##### Gradient computation #####
action_target_batch = action_target[batch_id, :, :].unsqueeze(0)
state_preds, action_preds, reward_preds, all_embs = model.forward(
states[batch_id, :, :].unsqueeze(0),
actions[batch_id, :, :].unsqueeze(0),
rewards[batch_id, :, :].unsqueeze(0),
rtg[batch_id, :-1].unsqueeze(0),
timesteps[batch_id, :].unsqueeze(0),
attention_mask=attention_mask[batch_id, :].unsqueeze(0),
)
act_dim = action_preds.shape[2]
action_preds = action_preds.reshape(-1, act_dim)[attention_mask[batch_id, :].unsqueeze(0).reshape(-1) > 0]
action_target_batch = action_target_batch.reshape(-1, act_dim)[
attention_mask[batch_id, :].unsqueeze(0).reshape(-1) > 0
]
model.zero_grad()
loss = loss_fn(
None,
action_preds,
None,
None,
action_target_batch,
None,
)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), .25)
grads = []
for name, param in model.transformer.h.named_parameters():
grads.append(param.grad.view(-1))
grads = torch.cat(grads)
grads_list.append(grads)
# torch.nn.utils.clip_grad_norm_(model.parameters(), 0.25) # apply a gradient-norm threshold
# grad_ordered = {}
# for block_id in range(len(model.transformer.h)):
# for block_name in block_name_list:
# grad_ordered[f'{block_id}.{block_name}'] = grad[f'{block_id}.{block_name}']
# np.save(f'results/grad_{epoch}_{model_name}_{env_name}_{dataset_name}_{seed}_{reward_state_action}.npy', grads_list)
grad_norm_list = []
for grads in tqdm(grads_list):
grad_norm_list.append(torch.norm(grads).numpy())
grad_norms = np.array(grad_norm_list)
grad_norms_list.append(grad_norms)
np.save(f'results/gradnorms_{epoch}_gpt2_igpt_dt_{env_name}_{dataset_name}_{seed}_{reward_state_action}.npy', grad_norms_list)
model_name_label = ['GPT2', 'iGPT', 'Random Init']
colors = [(0.372, 0.537, 0.537), (0.627, 0.352, 0.470), (0.733, 0.737, 0.870)]
my_palette = sns.color_palette(colors)
sns.boxplot(data=grad_norms_list, palette=my_palette) # "PuBuGn_r"
plt.xticks(np.arange(3), model_name_label)
# plt.savefig(f'figs/gradnorms_{epoch}_gpt2_igpt_dt_{env_name}_{dataset_name}_{seed}_{reward_state_action}.pdf')
plt.show()
# -
model_name_label = ['GPT2', 'iGPT', 'Random Init']
colors = [(0.372, 0.537, 0.537), (0.627, 0.352, 0.470), (0.733, 0.737, 0.870)]
my_palette = sns.color_palette(colors)
sns.boxplot(data=grad_norms_list, palette=my_palette) # "PuBuGn_r"
plt.xticks(np.arange(3), model_name_label)
# plt.savefig(f'figs/gradnorms_{epoch}_gpt2_igpt_dt_{env_name}_{dataset_name}_{seed}_{reward_state_action}.pdf')
plt.show()
for i in range(3):
print(min(grad_norms_list[i]))
plt.hist(grad_norms_list[i], bins=100)
plt.show()
parameters = [p for p in model.parameters() if p.grad is not None]
torch.norm(torch.stack([torch.norm(p.grad.detach()) for p in parameters]))
torch.nn.utils.clip_grad_norm_(model.parameters(), .25)
| code/notebooks/grad_norm_batch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A simple MNIST classifier which displays summaries in TensorBoard.
This is an unimpressive MNIST model, but it is a good example of using
tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of
naming summary tags so that they are grouped meaningfully in TensorBoard.
It demonstrates the functionality of every TensorBoard dashboard.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
'for unit testing.')
flags.DEFINE_integer('max_steps', 1000, 'Number of steps to run trainer.')
flags.DEFINE_float('learning_rate', 0.001, 'Initial learning rate.')
flags.DEFINE_float('dropout', 0.9, 'Keep probability for training dropout.')
flags.DEFINE_string('data_dir', './tmp/data', 'Directory for storing data')
flags.DEFINE_string('summaries_dir', './tmp/mnist_logs', 'Summaries directory')
def train():
# Import data
mnist = input_data.read_data_sets(FLAGS.data_dir,
one_hot=True,
fake_data=FLAGS.fake_data)
sess = tf.InteractiveSession()
# Create a multilayer model.
# Input placeholders
with tf.name_scope('input'):
x = tf.placeholder(tf.float32, [None, 784], name='x-input')
y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')
with tf.name_scope('input_reshape'):
image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
tf.image_summary('input', image_shaped_input, 10)
# We can't initialize these variables to 0 - the network will get stuck.
def weight_variable(shape):
"""Create a weight variable with appropriate initialization."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
"""Create a bias variable with appropriate initialization."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def variable_summaries(var, name):
"""Attach a lot of summaries to a Tensor."""
with tf.name_scope('summaries'):
mean = tf.reduce_mean(var)
tf.scalar_summary('mean/' + name, mean)
with tf.name_scope('stddev'):
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.scalar_summary('stddev/' + name, stddev)
tf.scalar_summary('max/' + name, tf.reduce_max(var))
tf.scalar_summary('min/' + name, tf.reduce_min(var))
tf.histogram_summary(name, var)
def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
"""Reusable code for making a simple neural net layer.
It does a matrix multiply, bias add, and then uses relu to nonlinearize.
It also sets up name scoping so that the resultant graph is easy to read,
and adds a number of summary ops.
"""
# Adding a name scope ensures logical grouping of the layers in the graph.
with tf.name_scope(layer_name):
# This Variable will hold the state of the weights for the layer
with tf.name_scope('weights'):
weights = weight_variable([input_dim, output_dim])
variable_summaries(weights, layer_name + '/weights')
with tf.name_scope('biases'):
biases = bias_variable([output_dim])
variable_summaries(biases, layer_name + '/biases')
with tf.name_scope('Wx_plus_b'):
preactivate = tf.matmul(input_tensor, weights) + biases
tf.histogram_summary(layer_name + '/pre_activations', preactivate)
activations = act(preactivate, 'activation')
tf.histogram_summary(layer_name + '/activations', activations)
return activations
hidden1 = nn_layer(x, 784, 500, 'layer1')
with tf.name_scope('dropout'):
keep_prob = tf.placeholder(tf.float32)
tf.scalar_summary('dropout_keep_probability', keep_prob)
dropped = tf.nn.dropout(hidden1, keep_prob)
y = nn_layer(dropped, 500, 10, 'layer2', act=tf.nn.softmax)
with tf.name_scope('cross_entropy'):
diff = y_ * tf.log(y)
with tf.name_scope('total'):
cross_entropy = -tf.reduce_mean(diff)
tf.scalar_summary('cross entropy', cross_entropy)
with tf.name_scope('train'):
train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
cross_entropy)
with tf.name_scope('accuracy'):
with tf.name_scope('correct_prediction'):
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
with tf.name_scope('accuracy'):
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.scalar_summary('accuracy', accuracy)
# Merge all the summaries and write them out to /tmp/mnist_logs (by default)
merged = tf.merge_all_summaries()
train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train',
sess.graph)
test_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/test')
tf.initialize_all_variables().run()
# Train the model, and also write summaries.
# Every 10th step, measure test-set accuracy, and write test summaries
# All other steps, run train_step on training data, & add training summaries
def feed_dict(train):
"""Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
if train or FLAGS.fake_data:
xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
k = FLAGS.dropout
else:
xs, ys = mnist.test.images, mnist.test.labels
k = 1.0
return {x: xs, y_: ys, keep_prob: k}
for i in range(FLAGS.max_steps):
if i % 10 == 0: # Record summaries and test-set accuracy
summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
test_writer.add_summary(summary, i)
print('Accuracy at step %s: %s' % (i, acc))
else: # Record train set summaries, and train
if i % 100 == 99: # Record execution stats
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
summary, _ = sess.run([merged, train_step],
feed_dict=feed_dict(True),
options=run_options,
run_metadata=run_metadata)
train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
train_writer.add_summary(summary, i)
print('Adding run metadata for', i)
else: # Record a summary
summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
train_writer.add_summary(summary, i)
train_writer.close()
test_writer.close()
def main(_):
if tf.gfile.Exists(FLAGS.summaries_dir):
tf.gfile.DeleteRecursively(FLAGS.summaries_dir)
tf.gfile.MakeDirs(FLAGS.summaries_dir)
train()
if __name__ == '__main__':
tf.app.run()
# -
print(__name__)
| misc/deep_learning_notes/Ch2 Intro to Tensorflow/Graph Summary and Logging.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
animals = ["lion", "tiger"]
animals.append("bear")
animals.append("lion, monkey")
print(animals)
a = animals.pop()
print(a)
print(animals)
# -
# ## Meghna
def hello(name="world"):
    print("hello, " + name)
hello()
hello("Meghna")
hello("meghna")
# +
import random
random.randint(0,10)
# +
import numpy as np
np.power(2,2)
# +
from random import *
randint(0, 10)
# +
import numpy as np
import matplotlib.pyplot as plt
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
plt.plot(X, C)
plt.plot(X, S)
plt.show()
# -
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (16, 9)
import pqr
# # Data preprocessing
# +
prices = pd.read_excel('factors/russia/monthlyprice.xlsx', index_col=0, parse_dates=True)
mcap = pd.read_excel('factors/russia/mcap.xlsx', index_col=0, parse_dates=True)
pe = pd.read_excel('factors/russia/PE.xlsx', index_col=0, parse_dates=True)
volume = pd.read_excel('factors/russia/betafilter.xlsx', index_col=0, parse_dates=True)
imoex = pd.read_excel('factors/russia/imoex.xlsx', index_col=0, parse_dates=True)
for df in (prices, mcap, pe, volume, imoex):
df.replace(0, np.nan, inplace=True)
# +
universe = pqr.Universe(prices)
universe.filter(volume >= 10_000_000)
preprocessor = [
pqr.Filter(universe.mask),
pqr.LookBackMean(3),
pqr.Hold(3),
]
size = pqr.Factor(mcap, "less", preprocessor)
value = pqr.Factor(pe, "less", preprocessor)
benchmark = pqr.Benchmark.from_index(imoex["IMOEX"], name="IMOEX")
# -
# First, build single-factor portfolios from the top 20%
# +
q02 = pqr.fm.Quantiles(0, 0.2)
size_portfolio = pqr.Portfolio(
universe,
longs=q02(size),
allocation_strategy=pqr.EqualWeights(),
name="Size"
)
value_portfolio = pqr.Portfolio(
universe,
longs=q02(value),
allocation_strategy=pqr.EqualWeights(),
name="Value"
)
# -
summary = pqr.dash.Dashboard(
pqr.dash.Graph(pqr.metrics.CompoundedReturns(), benchmark=benchmark),
pqr.dash.Table(
pqr.metrics.MeanReturn(annualizer=1),
pqr.metrics.Volatility(annualizer=1),
pqr.metrics.SharpeRatio(rf=0),
pqr.metrics.MeanExcessReturn(benchmark),
pqr.metrics.Alpha(benchmark),
pqr.metrics.Beta(benchmark),
)
)
summary([size_portfolio, value_portfolio])
# # Weighted Multifactor
# Simple: reduce the multifactor selection to a single-factor one
# +
size_value_weighted = pqr.Factor(
0.5 * size.values + 0.5 * value.values,
better="less"
)
size_value_weighted_portfolio = pqr.Portfolio(
universe,
longs=q02(size_value_weighted),
allocation_strategy=pqr.EqualWeights(),
name="Size + Value Weighted"
)
summary([size_portfolio, value_portfolio, size_value_weighted_portfolio])
# -
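# The weighted combination above can be sketched generically with pandas,
# independent of pqr (hypothetical tickers and scores; "less is better",
# as for the factors above):

```python
import pandas as pd

# Hypothetical factor scores for five tickers; lower is better.
size = pd.Series([1.0, 5.0, 3.0, 2.0, 4.0], index=list("ABCDE"))
value = pd.Series([4.0, 1.0, 2.0, 5.0, 3.0], index=list("ABCDE"))

# Weighted combination, mirroring 0.5 * size.values + 0.5 * value.values.
combined = 0.5 * size + 0.5 * value

# Go long the bottom 20% of the combined score (lower is better).
cutoff = combined.quantile(0.2)
longs = sorted(combined[combined <= cutoff].index)
```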
# # Intercept Multifactor
# +
size_value_intercept_portfolio = pqr.Portfolio(
universe,
longs=q02(size) & q02(value),
allocation_strategy=pqr.EqualWeights(),
name="Size + Value Intercept"
)
summary([size_portfolio, value_portfolio, size_value_weighted_portfolio, size_value_intercept_portfolio])
# -
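# The intercept construction is just an elementwise AND of the two quantile
# masks; a pandas sketch with hypothetical membership masks:

```python
import pandas as pd

# Hypothetical boolean quantile masks for five tickers.
in_size_q = pd.Series([True, False, True, True, False], index=list("ABCDE"))
in_value_q = pd.Series([True, True, False, True, False], index=list("ABCDE"))

# The intercept portfolio holds only names passing BOTH screens,
# mirroring q02(size) & q02(value) above.
intercept = in_size_q & in_value_q
members = intercept[intercept].index.tolist()
```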
# # Double Sort Multifactor
# +
size_universe = pqr.Universe(prices)
size_universe.filter(q02(size))
sort_preprocessor_size = [
pqr.Filter(universe.mask),
pqr.LookBackMean(3),
pqr.Hold(3),
pqr.Filter(size_universe.mask)
]
value_presorted = pqr.Factor(pe, "less", sort_preprocessor_size)
size_value_double_sort_portfolio = pqr.Portfolio(
universe,
longs=q02(value_presorted),
allocation_strategy=pqr.EqualWeights(),
name="Size -> Value Double Sort"
)
value_universe = pqr.Universe(prices)
value_universe.filter(q02(value))
sort_preprocessor_value = [
pqr.Filter(universe.mask),
pqr.LookBackMean(3),
pqr.Hold(3),
pqr.Filter(value_universe.mask)
]
size_presorted = pqr.Factor(mcap, "less", sort_preprocessor_value)
value_size_double_sort_portfolio = pqr.Portfolio(
universe,
longs=q02(size_presorted),
allocation_strategy=pqr.EqualWeights(),
name="Value -> Size Double Sort"
)
summary(
[
size_portfolio, value_portfolio,
size_value_weighted_portfolio,
size_value_intercept_portfolio,
size_value_double_sort_portfolio,
value_size_double_sort_portfolio
]
)
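# The double-sort idea above (filter by one factor, then rank within the
# survivors) in a standalone pandas sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical scores; lower is better for both factors.
df = pd.DataFrame({
    "size": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "value": [5, 1, 9, 2, 8, 3, 7, 4, 10, 6],
}, index=[f"T{i}" for i in range(10)])

# First sort: keep the bottom 40% by size.
first = df.nsmallest(4, "size")
# Second sort: within the survivors, keep the bottom 50% by value.
second = first.nsmallest(2, "value")
picks = sorted(second.index)
```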
| examples/multifactors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Threat to Coral Reef from Fishing Practices bio.024.4 http://www.wri.org/publication/reefs-risk-revisited
# +
import numpy as np
import pandas as pd
import rasterio
import boto3
import requests as req
from matplotlib import pyplot as plt
# %matplotlib inline
import os
import sys
import threading
# -
# If data already on s3, create a staging key and download to staging folder
# +
s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/bio_024_4_coral_reef_threat_from_fishing_practices/"
s3_key_orig = s3_folder + "bio_024_4_coral_reef_threat_from_fishing_practices.tif"
s3_key_edit = s3_key_orig[0:-4] + "_edit.tif"
temp_folder = "/Users/nathansuberi/Desktop/WRI_Programming/RW_Data/temp/"
local_orig = temp_folder + "bio_024_4.tif"
local_edit = local_orig[:-4] + "_edit.tif"
s3 = boto3.resource('s3')
s3.meta.client.download_file(s3_bucket, s3_key_orig, local_orig)
#s3.meta.client.download_file(s3_bucket, s3_key_edit, local_edit)
# -
# <b>Regardless of any needed edits, upload original file</b>
#
# <i>Upload tif to S3 folder</i>
#
# http://boto3.readthedocs.io/en/latest/guide/s3-example-creating-buckets.html
#
# <i>Monitor Progress of Upload</i>
#
# http://boto3.readthedocs.io/en/latest/_modules/boto3/s3/transfer.html
# https://boto3.readthedocs.io/en/latest/guide/s3.html#using-the-transfer-manager
# +
s3 = boto3.client("s3")
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify we'll assume this is hooked up
# to a single filename.
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write(
"\r%s %s / %s (%.2f%%)" % (
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
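# The callback above can be exercised without S3. Below is a stripped-down
# re-implementation of the same idea (no locking, and it returns the
# percentage instead of printing it), fed fake byte counts:

```python
import os
import tempfile

class Progress:
    """Minimal stand-in for the ProgressPercentage callback above."""
    def __init__(self, filename):
        self._size = float(os.path.getsize(filename))
        self._seen = 0

    def __call__(self, bytes_amount):
        # boto3 calls this with the number of bytes transferred so far
        # in each chunk; we accumulate and report a percentage.
        self._seen += bytes_amount
        return (self._seen / self._size) * 100

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 100)
    path = f.name

p = Progress(path)
half = p(50)
done = p(50)
os.remove(path)
```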
# +
# Defined above:
# s3_bucket
# s3_key_orig
# s3_key_edit
# staging_key_orig
# staging_key_edit
s3.upload_file(local_orig, s3_bucket, s3_key_orig,
Callback=ProgressPercentage(local_orig))
# -
# Check for compression, projection
#
# Create edit file if necessary
with rasterio.open(local_orig) as src:
print(src.profile)
# +
local_edit_tb = local_edit[:-4] + "_tb.tif"
local_edit_t = local_edit[:-4] + "_t.tif"
local_edit_b = local_edit[:-4] + "_b.tif"
with rasterio.open(local_orig) as src:
data = src.read(1)
# Return lat info
south_lat = -90
north_lat = 90
# Return lon info
west_lon = -180
east_lon = 180
# Transformation function
transform = rasterio.transform.from_bounds(west_lon, south_lat, east_lon, north_lat, data.shape[1], data.shape[0])
# Profile
kwargs = src.profile
kwargs.update(
driver = 'GTiff',
dtype = rasterio.int16,
crs = 'EPSG:4326',
compress = 'lzw',
nodata = -9999,
transform = transform,
)
kwargs_tiled_blocked = dict(kwargs)
kwargs["tiled"] = False
kwargs_blocked = dict(kwargs)
kwargs.pop("blockxsize", None)
kwargs.pop("blockysize", None)
kwargs_no_tile_no_block = dict(kwargs)
kwargs["tiled"] = True
kwargs_tiled = dict(kwargs)
np.putmask(data, data==-32768, -9999)
with rasterio.open(local_edit_tb, 'w', **kwargs_tiled_blocked) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
with rasterio.open(local_edit_t, 'w', **kwargs_tiled) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
with rasterio.open(local_edit_b, 'w', **kwargs_blocked) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
with rasterio.open(local_edit, 'w', **kwargs_no_tile_no_block) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
# -
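# What np.putmask does to the nodata values above, in miniature
# (tiny invented band, same dtype and sentinel values):

```python
import numpy as np

# np.putmask rewrites matching values in place, here swapping the
# source nodata sentinel (-32768) for the target one (-9999).
band = np.array([[-32768, 10], [20, -32768]], dtype=np.int16)
np.putmask(band, band == -32768, -9999)
```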
local_edit
with rasterio.open(local_edit) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
with rasterio.open(local_edit_t) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
with rasterio.open(local_edit_b) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
with rasterio.open(local_edit_tb) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
# Upload edited files to S3
# +
# Defined above:
# s3_bucket
# s3_key_orig
# s3_key_edit
# staging_key_orig
# staging_key_edit
s3_key_edit_t = s3_key_edit[:-4] + "_t.tif"
s3_key_edit_b = s3_key_edit[:-4] + "_b.tif"
s3_key_edit_tb = s3_key_edit[:-4] + "_tb.tif"
s3.upload_file(local_edit, s3_bucket, s3_key_edit,
Callback=ProgressPercentage(local_edit))
s3.upload_file(local_edit_t, s3_bucket, s3_key_edit_t,
Callback=ProgressPercentage(local_edit_t))
s3.upload_file(local_edit_b, s3_bucket, s3_key_edit_b,
Callback=ProgressPercentage(local_edit_b))
s3.upload_file(local_edit_tb, s3_bucket, s3_key_edit_tb,
Callback=ProgressPercentage(local_edit_tb))
# -
s3_key_edit
# Layer definition
#
# https://github.com/resource-watch/notebooks/blob/master/ResourceWatch/Api_definition/layer_definition.ipynb
# Upload to server destination
# +
# Too big for ArcGIS Online to upload using their web interface... 1 GB limit
| bio.024.4_done.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Testing after getting requests working with Compute
#
# +
import os
from pathlib import Path
from bravado.client import SwaggerClient
from bravado.swagger_model import load_file
from bravado.requests_client import RequestsClient
# from bravado.fido_client import FidoClient
import json
import keyring
import requests
import dataclasses
from requests import Session
from requests.auth import HTTPBasicAuth
from requests.auth import _basic_auth_str
from secrets import opc_username, opc_password
# -
import keyring
import requests
from json import load, loads, dump, dumps
from requests import Session
from secrets import opc_username, opc_password
## https://medium.com/@betz.mark/validate-json-models-with-swagger-and-bravado-5fad6b21a825
# Validate json models with swagger and bravado
from bravado_core.spec import Spec
from bravado_core.validate import validate_object
from yaml import load, Loader, dump, Dumper
# +
idm_domain_name = 'gc30003'
idm_service_instance_id = '587626604'
iaas_rest_endpoint = r'https://compute.uscom-central-1.oraclecloud.com'
iaas_auth_endpoint = f'{iaas_rest_endpoint}/authenticate/'
print(f'iaas_rest_endpoint: {iaas_rest_endpoint}')
print(f'iaas_auth_endpoint: {iaas_auth_endpoint}\n')
# +
### Username/pass setup
idm_domain_username = f'/Compute-{idm_domain_name}/{opc_username}'
idm_service_instance_username = f'/Compute-{idm_service_instance_id}/{opc_username}'
# username = traditional_iaas_username
username = idm_service_instance_username
# basic_auth_cred = _basic_auth_str(username, opc_password)
print(f'idm_domain_username: {idm_domain_username}')
print(f'idm_service_instance_username: {idm_service_instance_username}')
print(f'username: {username}')
# print(f'basic_auth_cred: {basic_auth_cred}')
### END Username/pass setup
json_data = {"user": username, "password": opc_password}
print(f'\njson_data: {json_data}')
files = None
params = None
### https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsa/SendRequests.html
### Supported Headers shown here
# headers = dict([('Content-Type', 'application/oracle-compute-v3+json'),
# ('Accept', 'application/oracle-compute-v3+directory+json'),
# ('Accept-Encoding', 'gzip;q=1.0, identity; q=0.5'),
# ('Content-Encoding', 'deflate'),
# ('Cookie', '<Set from /authenticate>')
# ])
headers = dict([('Content-Type', 'application/oracle-compute-v3+json'),
('Accept', 'application/oracle-compute-v3+directory+json'),
])
print(f'headers: {headers}')
# +
requests_client = RequestsClient()
requests_client.session.headers.update(headers)
print(f"requests_client.session.headers before update: {requests_client.session.headers}\n")
requests_client.session.headers.update(headers)
print(f"requests_client.session.headers after update: {requests_client.session.headers}\n")
# -
# #### Proxies
#
#
# [http://docs.python-requests.org/en/master/user/advanced/#proxies](http://docs.python-requests.org/en/master/user/advanced/#proxies)
#
#
# Proxies
# If you need to use a proxy, you can configure individual requests with the proxies argument to any request method:
#
# import requests
#
# proxies = {
# 'http': 'http://10.10.1.10:3128',
# 'https': 'http://10.10.1.10:1080',
# }
#
# requests.get('http://example.org', proxies=proxies)
# You can also configure proxies by setting the environment variables HTTP_PROXY and HTTPS_PROXY.
#
# $ export HTTP_PROXY="http://10.10.1.10:3128"
# $ export HTTPS_PROXY="http://10.10.1.10:1080"
#
# $ python
# >>> import requests
# >>> requests.get('http://example.org')
# To use HTTP Basic Auth with your proxy, use the http://user:password@host/ syntax:
#
# proxies = {'http': 'http://user:pass@10.10.1.10:3128/'}
# To give a proxy for a specific scheme and host, use the scheme://hostname form for the key. This will match for any request to the given scheme and exact hostname.
#
# proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}
# Note that proxy URLs must include the scheme.
#
# +
# proxies = {
# 'http': 'http://dmz-proxy-adcq7.us.oracle.com:80',
# 'https': 'https://dmz-proxy-adcq7.us.oracle.com:80',
# }
# print(f'proxies: {proxies}')
# requests_client.session.proxies.update(proxies)
# -
# #### SSL Cert Verification
#
#
# [http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification](http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification)
#
# ####### Requests can also ignore verifying the SSL certificate if you set ** verify to False**:
#
#
# <pre>
# MaxRetryError: HTTPSConnectionPool(host='compute.uscom-central-1.oraclecloud.com', port=443): Max retries exceeded with url: /instance/%2FCompute-587626604%2Feric.harris%40oracle.com%2F (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",),))
#
# During handling of the above exception, another exception occurred:
#
# SSLError Traceback (most recent call last)
#
# .
# .
#
# C:\Apps\python\python_3.6\envs\psmcli\lib\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
# 504 if isinstance(e.reason, _SSLError):
# 505 # This branch is for urllib3 v1.22 and later.
# --> 506 raise SSLError(e, request=request)
# 507
# 508 raise ConnectionError(e, request=request)
#
# SSLError: HTTPSConnectionPool(host='compute.uscom-central-1.oraclecloud.com', port=443): Max retries exceeded with url: /instance/%2FCompute-587626604%2Feric.harris%40oracle.com%2F (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",),))
# </pre>
#
#
# +
# requests_client.session.verify = False
# +
response = requests_client.session.post(url=iaas_auth_endpoint, json=json_data)
print(f'Response OK: {response.ok}, Status Code: {response.status_code}, URL: {response.url}')
if response.ok and 'Set-Cookie' in response.headers:
print(f"Auth request succeess.\n")
### The auth cookie is already placed in the session ... nothing else needs to be done.
print(f"\nSession Cookies: {requests_client.session.cookies}")
print(f"\nResponse Headers['Set-Cookie']: {response.headers['Set-Cookie']}")
else:
print(f'Something failed! Response OK: {response.ok}, Status Code: {response.status_code}')
# -
print(f"requests_client.session.headers before update: {requests_client.session.headers}\n")
cookie_header = {'Cookie': response.headers['Set-Cookie']}
print(f"cookie_header: {cookie_header}\n")
requests_client.session.headers.update(cookie_header)
print(f"requests_client.session.headers after update: {requests_client.session.headers}\n")
# #### Loading swagger.json by file path
# [http://bravado.readthedocs.io/en/latest/advanced.html#loading-swagger-json-by-file-path](http://bravado.readthedocs.io/en/latest/advanced.html#loading-swagger-json-by-file-path)
#
#
# bravado also accepts swagger.json from a file path. Like so:
#
# ```
# client = SwaggerClient.from_url('file:///some/path/swagger.json')
# ```
#
# Alternatively, you can also use the load_file helper method.
#
# <pre>
# from bravado.swagger_model import load_file
# client = SwaggerClient.from_spec(load_file('/path/to/swagger.json'))
# </pre>
#
# This uses the from_spec() class method
#
# <pre>
# @classmethod
# def from_spec(cls, spec_dict, <b>origin_url</b>=None, http_client=None,
# config=None):
# """
# Build a :class:`SwaggerClient` from a Swagger spec in dict form.
#
# :param spec_dict: a dict with a Swagger spec in json-like form
# :param origin_url: the url used to retrieve the spec_dict
# :type origin_url: str
# :param config: Configuration dict - see spec.CONFIG_DEFAULTS
#
# :rtype: :class:`bravado_core.spec.Spec`
# """
# http_client = http_client or RequestsClient()
#
# # Apply bravado config defaults
# config = dict(CONFIG_DEFAULTS, **(config or {}))
#
# also_return_response = config.pop('also_return_response', False)
# swagger_spec = Spec.from_dict(
# spec_dict, origin_url, http_client, config,
# )
# return cls(swagger_spec, also_return_response=also_return_response)
# </pre>
#
#
#
#
# +
cwd = os.getcwd()
spec_file_path = Path().joinpath('open_api_definitions/iaas_instances.json').resolve()
print(f'spec_file_path exists: {spec_file_path.exists()}, spec_file_path: {spec_file_path}')
#### http://bravado.readthedocs.io/en/latest/advanced.html#loading-swagger-json-by-file-path
## needed for: client = SwaggerClient.from_url('file:///some/path/swagger.json')
spec_file_uri = f"file:///{spec_file_path}"
print(f'spec_file_uri: {spec_file_uri}')
# -
# ##### Update swagger definition to use https as the scheme
#
# If we don't update the schemes key to https the client will use http
#
# swagger_spec.api_url: http://compute.uscom-central-1.oraclecloud.com
#
# instead of
#
# swagger_spec.api_url: https://compute.uscom-central-1.oraclecloud.com
#
# <pre>
# {
# "swagger" : "2.0",
# "info" : {
# "version" : "18.1.2-20180126.052521",
# "description" : "A Compute Classic instance is a virtual machine running a specific operating system and with CPU and memory resources that you specify. See <a target=\"_blank\" href=\"http://www.oracle.com/pls/topic/lookup?ctx=stcomputecs&id=STCSG-GUID-F928F362-2DB6-4E45-843F-C269E0740A36\">About Instances</a> in <em>Using Oracle Cloud Infrastructure Compute Classic</em>.<p>You can view and delete instances using the HTTP requests listed below.",
# "title" : "Instances"
# },
# <b>"schemes" : [ "http" ]</b>,
# "consumes" : [ "application/oracle-compute-v3+json", "application/oracle-compute-v3+directory+json" ],
# "produces" : [ "application/oracle-compute-v3+json", "application/oracle-compute-v3+directory+json" ],
# "paths" : {
# "/instance/" : {
# "get" : {
# "tags" : [ "Instances" ],
# </pre>
#
#
#
#
#
#
spec_dict = load_file(spec_file_path)
spec_dict['schemes']
print(f"Original spec: spec_dict['schemes']: {spec_dict['schemes']}")
spec_dict['schemes'] = ['https']
print(f"Spec after scheme update: spec_dict['schemes']: {spec_dict['schemes']}")
swagger_spec = Spec.from_dict(spec_dict=spec_dict,
origin_url=iaas_rest_endpoint,
http_client=requests_client,
)
swagger_spec.api_url
swagger_spec.origin_url
# +
# swagger_client = SwaggerClient.from_spec(spec_dict=load_file(spec_file_path),
# origin_url=iaas_rest_endpoint,
# http_client=requests_client,
# config={'also_return_response': True})
# -
# ## Client Configuration
# [https://bravado.readthedocs.io/en/latest/configuration.html#client-configuration](https://bravado.readthedocs.io/en/latest/configuration.html#client-configuration)
#
# <pre>
# from bravado.client import SwaggerClient, SwaggerFormat
#
# my_super_duper_format = SwaggerFormat(...)
#
# config = {
# # === bravado config ===
#
# # Determines what is returned by the service call.
# 'also_return_response': False,
#
# # === bravado-core config ====
#
# # validate incoming responses
# 'validate_responses': True,
#
# # validate outgoing requests
# 'validate_requests': True,
#
# # validate the swagger spec
# 'validate_swagger_spec': True,
#
# # Use models (Python classes) instead of dicts for #/definitions/{models}
# 'use_models': True,
#
# # List of user-defined formats
# 'formats': [my_super_duper_format],
#
# }
#
# client = SwaggerClient.from_url(..., config=config)
#
# </pre>
#
#
swagger_client = SwaggerClient.from_spec(spec_dict=spec_dict,
origin_url=iaas_rest_endpoint,
http_client=requests_client,
config={'also_return_response': True,
'validate_responses': False,
'validate_swagger_spec': False,})
swagger_client
swagger_client.Instances.resource.operations
# ## discoverInstance
#
# ** From swagger_client **
# <pre>
# Type: Operation
# String form: Operation(discoverInstance)
# File: c:\apps\python\python_3.6\envs\psmcli\lib\site-packages\bravado_core\operation.py
# Docstring: <no docstring>
# Init docstring:
# Swagger operation defined by a unique (http_method, path_name) pair.
#
# :type swagger_spec: :class:`Spec`
# :param path_name: path of the operation. e.g. /pet/{petId}
# :param http_method: get/put/post/delete/etc
# :param op_spec: operation specification in dict form
# </pre>
#
#
# ** From swagger.json instances_swagger_18.1.2.json **
#
# <pre>
# "/instance/{container}" : {
# "get" : {
# "tags" : [ "Instances" ],
# "summary" : "Retrieve Names of all Instances and Subcontainers in a Container",
# "description" : "Retrieves the names of objects and subcontainers that you can access in the specified container.<p><b>Required Role: </b>To complete this task, you must have the <code>Compute_Operations</code> role. If this role isn't assigned to you or you're not sure, then ask your system administrator to ensure that the role is assigned to you in Oracle Cloud My Services. See <a target=\"_blank\" href=\"http://www.oracle.com/pls/topic/lookup?ctx=stcomputecs&id=MMOCS-GUID-54C2E747-7D5B-451C-A39C-77936178EBB6\">Modifying User Roles</a> in <em>Managing and Monitoring Oracle Cloud</em>.",
# <b> "operationId" : "discoverInstance", </b>
# "responses" : {
# "200" : {
# "headers" : {
# "set-cookie" : {
# "type" : "string",
# "description" : "The cookie value is returned if the session is extended"
# }
# },
# "description" : "OK. See <a class=\"xref\" href=\"Status%20Codes.html\">Status Codes</a> for information about other possible HTTP status codes.",
# <b> "schema" : {
# "$ref" : "#/definitions/Instance-discover-response"
# }</b>
# }
# },
# "consumes" : [ "application/oracle-compute-v3+json" ],
# "produces" : [ "application/oracle-compute-v3+directory+json" ],
# "parameters" : [ {
# <b> "name" : "container", </b>
# "in" : "path",
# <b> "description" : "Specify <code>/Compute-<i>identityDomain</i>/<i>user</i>/</code> to retrieve the names of objects that you can access. Specify <code>/Compute-<i>identityDomain</i>/</code> to retrieve the names of containers that contain objects that you can access.",</b>
# "required" : true,
# "type" : "string"
# }, {
# "name" : "Cookie",
# "in" : "header",
# "type" : "string",
# "description" : "The Cookie: header must be included with every request to the service. It must be set to the value of the set-cookie header in the response received to the POST /authenticate/ call."
# } ]
# }
# },
# </pre>
#
#
#
op = swagger_client.Instances.resource.operations['discoverInstance']
op_api_url = op.swagger_spec.api_url
print(f"discoverInstance Operation: {op}, discoverInstance.api_url: {op_api_url}")
instances = swagger_client.Instances
print(f"instances: {instances}, idm_service_instance_username: {idm_service_instance_username}")
container = f"{idm_service_instance_username}/"
print(f"container: {container}")
discover_instance = instances.discoverInstance(container=container)
print(f"""discover_instance: {discover_instance},
discover_instance.operation: {discover_instance.operation},
discover_instance.operation.params: {discover_instance.operation.params},
discover_instance.operation.operation_id: {discover_instance.operation.operation_id}""")
discover_instance_result = discover_instance.result()
# ###### nope, discover_instance_result = discover_instance.result()
#
# We're now getting:
#
# ```
# ConnectionError: HTTPSConnectionPool(host='compute.uscom-central-1.oraclecloud.com', port=443): Max retries exceeded with url: /instance/%2FCompute-587626604%2Feric.harris%40oracle.com%2F (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0000021B45D240B8>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
#
# ```
#
# %2F is the percent-encoding of '/' (and %40 of '@')
# /instance/%2FCompute-587626604%2Feric.harris%40oracle.com%2F -> /instance//Compute-587626604/eric.harris@oracle.com/
#
# <pre>
# "parameters" : [ {
# <b> "name" : "container", </b>
# "in" : "path",
# <b> "description" : "Specify <code>/Compute-<i>identityDomain</i>/<i>user</i>/</code> to retrieve the names of objects that you can access. Specify <code>/Compute-<i>identityDomain</i>/</code> to retrieve the names of containers that contain objects that you can access.",</b>
# "required" : true,
# "type" : "string"
# }, {
# "name" : "Cookie",
# "in" : "header",
# "type" : "string",
# "description" : "The Cookie: header must be included with every request to the service. It must be set to the value of the set-cookie header in the response received to the POST /authenticate/ call."
# } ]
# }
# },
# </pre>
#
container = idm_service_instance_username[1:]  # drop the leading '/' so the path is not percent-encoded into a %2F prefix
print(f"container: {container}")
discover_instance = instances.discoverInstance(container=container)
discover_instance_result = discover_instance.result()
# +
from bravado.swagger_model import Loader
loader = Loader(requests_client, request_headers=None)
spec_dict = loader.load_spec(spec_file_uri)
spec_dict
# -
| gc3_query/var/scratchpad/Bravado_latest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# ========================================================
# Gaussian process regression (GPR) on Mauna Loa CO2 data.
# ========================================================
#
# This example is based on Section 5.4.3 of "Gaussian Processes for Machine
# Learning" [RW2006]. It illustrates an example of complex kernel engineering and
# hyperparameter optimization using gradient ascent on the
# log-marginal-likelihood. The data consists of the monthly average atmospheric
# CO2 concentrations (in parts per million by volume (ppmv)) collected at the
# Mauna Loa Observatory in Hawaii, between 1958 and 1997. The objective is to
# model the CO2 concentration as a function of the time t.
#
# The kernel is composed of several terms that are responsible for explaining
# different properties of the signal:
#
# - a long term, smooth rising trend is to be explained by an RBF kernel. The
# RBF kernel with a large length-scale enforces this component to be smooth;
# it is not enforced that the trend is rising which leaves this choice to the
# GP. The specific length-scale and the amplitude are free hyperparameters.
#
# - a seasonal component, which is to be explained by the periodic
# ExpSineSquared kernel with a fixed periodicity of 1 year. The length-scale
# of this periodic component, controlling its smoothness, is a free parameter.
# In order to allow decaying away from exact periodicity, the product with an
# RBF kernel is taken. The length-scale of this RBF component controls the
# decay time and is a further free parameter.
#
# - smaller, medium term irregularities are to be explained by a
# RationalQuadratic kernel component, whose length-scale and alpha parameter,
# which determines the diffuseness of the length-scales, are to be determined.
# According to [RW2006], these irregularities can better be explained by
# a RationalQuadratic than an RBF kernel component, probably because it can
# accommodate several length-scales.
#
# - a "noise" term, consisting of an RBF kernel contribution, which shall
# explain the correlated noise components such as local weather phenomena,
# and a WhiteKernel contribution for the white noise. The relative amplitudes
# and the RBF's length scale are further free parameters.
#
# Maximizing the log-marginal-likelihood after subtracting the target's mean
# yields the following kernel with an LML of -83.214::
#
# 34.4**2 * RBF(length_scale=41.8)
# + 3.27**2 * RBF(length_scale=180) * ExpSineSquared(length_scale=1.44,
# periodicity=1)
# + 0.446**2 * RationalQuadratic(alpha=17.7, length_scale=0.957)
# + 0.197**2 * RBF(length_scale=0.138) + WhiteKernel(noise_level=0.0336)
#
# Thus, most of the target signal (34.4ppm) is explained by a long-term rising
# trend (length-scale 41.8 years). The periodic component has an amplitude of
# 3.27ppm, a decay time of 180 years and a length-scale of 1.44. The long decay
# time indicates that we have a locally very close to periodic seasonal
# component. The correlated noise has an amplitude of 0.197ppm with a length
# scale of 0.138 years and a white-noise contribution of about 0.18ppm
# (sqrt of the 0.0336 noise level). Thus, the
# overall noise level is very small, indicating that the data can be very well
# explained by the model. The figure also shows that the model makes very
# confident predictions until around 2015.
#
#
# +
print(__doc__)
# Authors: <NAME> <<EMAIL>>
#
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels \
import RBF, WhiteKernel, RationalQuadratic, ExpSineSquared
# fetch_mldata was removed from scikit-learn (mldata.org is offline); the same
# data is available from OpenML (column layout assumed from OpenML dataset 41187)
from sklearn.datasets import fetch_openml
co2 = fetch_openml(data_id=41187, as_frame=True).frame
co2["date"] = co2["year"] + (co2["month"] - 1) / 12  # decimal year
co2 = co2.groupby("date")["co2"].mean().reset_index()
X = co2[["date"]].values
y = co2["co2"].values
# Kernel with parameters given in GPML book
k1 = 66.0**2 * RBF(length_scale=67.0) # long term smooth rising trend
k2 = 2.4**2 * RBF(length_scale=90.0) \
* ExpSineSquared(length_scale=1.3, periodicity=1.0) # seasonal component
# medium term irregularity
k3 = 0.66**2 \
* RationalQuadratic(length_scale=1.2, alpha=0.78)
k4 = 0.18**2 * RBF(length_scale=0.134) \
+ WhiteKernel(noise_level=0.19**2) # noise terms
kernel_gpml = k1 + k2 + k3 + k4
gp = GaussianProcessRegressor(kernel=kernel_gpml, alpha=0,
optimizer=None, normalize_y=True)
gp.fit(X, y)
print("GPML kernel: %s" % gp.kernel_)
print("Log-marginal-likelihood: %.3f"
% gp.log_marginal_likelihood(gp.kernel_.theta))
# Kernel with optimized parameters
k1 = 50.0**2 * RBF(length_scale=50.0) # long term smooth rising trend
k2 = 2.0**2 * RBF(length_scale=100.0) \
* ExpSineSquared(length_scale=1.0, periodicity=1.0,
periodicity_bounds="fixed") # seasonal component
# medium term irregularities
k3 = 0.5**2 * RationalQuadratic(length_scale=1.0, alpha=1.0)
k4 = 0.1**2 * RBF(length_scale=0.1) \
+ WhiteKernel(noise_level=0.1**2,
noise_level_bounds=(1e-3, np.inf)) # noise terms
kernel = k1 + k2 + k3 + k4
gp = GaussianProcessRegressor(kernel=kernel, alpha=0,
normalize_y=True)
gp.fit(X, y)
print("\nLearned kernel: %s" % gp.kernel_)
print("Log-marginal-likelihood: %.3f"
% gp.log_marginal_likelihood(gp.kernel_.theta))
X_ = np.linspace(X.min(), X.max() + 30, 1000)[:, np.newaxis]
y_pred, y_std = gp.predict(X_, return_std=True)
# Illustration
plt.scatter(X, y, c='k')
plt.plot(X_, y_pred)
plt.fill_between(X_[:, 0], y_pred - y_std, y_pred + y_std,
alpha=0.5, color='k')
plt.xlim(X_.min(), X_.max())
plt.xlabel("Year")
plt.ylabel(r"CO$_2$ in ppm")
plt.title(r"Atmospheric CO$_2$ concentration at Mauna Loa")
plt.tight_layout()
plt.show()
| scikit-learn-official-examples/gaussian_process/plot_gpr_co2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Handling TensorFlow graphs and sessions
# --
#
# ##### Check the [Tips and Tricks notebook](../tips_and_tricks.ipynb) for more examples of working with graphs and sessions.
import gpflow
import tensorflow as tf
#
# TensorFlow gives you a default graph, which you fill with operations and tensors - the nodes and edges of the graph, respectively. You can find details [here](https://www.tensorflow.org/guide/graphs#building_a_tfgraph) on how to change the default graph to another one or exploit multiple graphs.
#
# A TensorFlow graph is a representation of your computation; to execute it you need a TensorFlow [session](https://www.tensorflow.org/guide/graphs#executing_a_graph_in_a_tfsession). You can think of the graph as a compiled binary and the session as the process that actually runs it. TensorFlow does not provide a default session, but GPflow creates one, which you can get via:
session = gpflow.get_default_session()
# To change GPflow's default session:
gpflow.reset_default_session()
assert session is not gpflow.get_default_session()
# You can manipulate sessions manually, but you have to make them default for GPflow:
with tf.Session() as session:
k = gpflow.kernels.RBF(1)
k.lengthscales = 2.0
k.variance = 3.0
# Now, the TensorFlow variables and tensors for the created RBF kernel are initialised with a session that was closed when the Python context ended. You can reuse the RBF object by re-initialising it:
k.initialize(gpflow.get_default_session())
k
| doc/source/notebooks/understanding/tf_graphs_and_sessions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
from PIL import Image, ImageDraw
import torch
from torch.utils.data import DataLoader
from torchvision import transforms, datasets
import numpy as np
import pandas as pd
from time import time
import sys, os
import glob
from tqdm.auto import tqdm
from facenet_pytorch.models.mtcnn import MTCNN, prewhiten
from facenet_pytorch.models.inception_resnet_v1 import InceptionResnetV1, get_torch_home
from facenet_pytorch.models.utils.detect_face import extract_face
# + pycharm={"is_executing": false, "name": "#%%\n"}
def get_image(path, trans):
img = Image.open(path)
img = trans(img)
return img
# + pycharm={"is_executing": false, "name": "#%%\n"}
trans = transforms.Compose([
transforms.Resize(512)
])
trans_cropped = transforms.Compose([
np.float32,
transforms.ToTensor(),
prewhiten
])
# + pycharm={"is_executing": false, "name": "#%%\n"}
dataset = datasets.ImageFolder('dataset/lfw', transform=trans)
dataset.idx_to_class = {k: v for v, k in dataset.class_to_idx.items()}
loader = DataLoader(dataset, collate_fn=lambda x: x[0])
# + pycharm={"is_executing": false, "name": "#%%\n"}
mtcnn = MTCNN(device=torch.device('cuda:0'))
# -
resnet = InceptionResnetV1(pretrained='casia-webface').eval().cuda()
# + pycharm={"is_executing": false, "name": "#%%\n"}
total_item = len(dataset)
names = []
aligned = []
embs = []
for img, idx in tqdm(loader):
name = dataset.idx_to_class[idx]
# start = time()
img_align = mtcnn(img)#, save_path = "data/aligned/{}/{}.png".format(name, str(idx)))
# print('MTCNN time: {:6f} seconds'.format(time() - start))
if img_align is not None:
names.append(name)
# aligned.append(img_align)
embs.append(resnet(img_align.unsqueeze(0).cuda()).cpu().detach().numpy())
embs = np.concatenate(embs)  # each entry is a (1, 512) numpy array, so concatenate instead of torch.stack
# -
df = pd.DataFrame({"name": names,
"emb": list(embs)})  # one row per image, each embedding stored as an array entry
df.head()
df.to_csv("embeddings.csv", index=False)
arr = torch.stack(aligned).numpy()  # requires the aligned.append(img_align) line above to be uncommented
np.save("aligned", arr)
aligned = np.load('aligned.npy')
aligned = torch.from_numpy(aligned)
# +
import torch
from torch.utils import data
class Dataset(data.Dataset):
'Characterizes a dataset for PyTorch'
def __init__(self, aligned_pics):
'Initialization'
self.aligned_pics = aligned_pics
def __len__(self):
'Denotes the total number of samples'
return len(self.aligned_pics)
def __getitem__(self, index):
'Generates one sample of data'
# Select sample
aligned_pic = self.aligned_pics[index]
return aligned_pic
# -
dataset = Dataset(aligned)
data_loader = DataLoader(dataset, batch_size=64, shuffle=False)
next(iter(data_loader)).shape
# + pycharm={"name": "#%%\n"}
# resnet = InceptionResnetV1(pretrained='casia-webface').eval().cuda()
# # aligned = aligned.cuda()
# start = time()
# embs = resnet(next(iter(data_loader)))
# print('\nResnet time: {:6f} seconds\n'.format(time() - start))
# # dists = [[(emb - e).norm().item() for e in embs] for emb in embs]
# # print('\nOutput:')
# # print(pd.DataFrame(dists, columns=names, index=names))
# -
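# The commented-out distance matrix above can also be computed directly with NumPy once the embeddings are stacked. A minimal sketch with toy 2-D embeddings (the real ones are 512-D):

```python
import numpy as np

def pairwise_dists(embs):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b; the clip guards against tiny negatives
    sq = (embs ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * embs @ embs.T
    return np.sqrt(np.clip(d2, 0.0, None))

e = np.array([[0.0, 0.0], [3.0, 4.0]])
print(pairwise_dists(e))  # [[0. 5.] [5. 0.]]
```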
resnet = InceptionResnetV1(pretrained='casia-webface').eval().cuda()
| evaluate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DQ0 SDK Demo
# ## Prerequisites
# * Installed DQ0 SDK. Install with `pip install dq0-sdk`
# * Installed DQ0 CLI
# * Proxy running and registered from the DQ0 CLI with `dq0-cli proxy add ...`
# * Valid session of DQ0. Log in with `dq0 user login`
# * Running instance of DQ0 CLI server: `dq0 server start`
# ## Concept
# The two main structures to work with DQ0 quarantine via the DQ0 SDK are
# * Project - the current model environment, a workspace and directory the user can define models in. Project also provides access to trained models.
# * Experiment - the DQ0 runtime to execute training runs in the remote quarantine.
# Start by importing the core classes
# import dq0-sdk cli
from dq0.sdk.cli import Project, Experiment
# ## Create a project
# Projects act as the working environment for model development.
# Each project has a model directory with a .meta file containing the model uuid, attached data sources etc.
# Creating a project with `Project.create(name='model_1')` is equivalent to calling the DQ0 Cli command `dq0-cli project create model_1`
# create a project with name 'model_1'. Automatically creates the 'model_1' directory and changes to this directory.
project = Project(name='model_1')
# ## Load a project
# Alternatively, you can load an existing project by first cd'ing into this directory and then call Project.load()
# This will read in the .meta file of this directory
# %cd model_1
# Alternative: load a project from the current model directory
project = Project.load()
# ## Create Experiment
# To execute DQ0 training commands inside the quarantine you define experiments for your projects.
# You can create as many experiments as you like for one project.
# Create experiment for project
experiment = Experiment(project=project, name='experiment_1')
# ## Get and attach data source
# For new projects you need to attach a data source. Existing (loaded) projects usually already have data sources attached.
# +
# first get some info about available data sources
sources = project.get_available_data_sources()
# print info about the first source
info = project.get_data_info(sources[0]['uuid'])
info
# -
# Get the dataset description:
# print data description
info['description']
# Also, inspect the data column types including allowed values for feature generation:
# print information about column types and values
info['types']
# And some sample data if available:
# get sample data
project.get_sample_data(sources[0]['uuid'])
# Now, attach the dataset to our project
# attach the first dataset
project.attach_data_source(sources[0]['uuid'])
# ## Define a neural network for the 20 newsgroups text dataset
# Working with DQ0 is basically about defining two functions:
# * setup_data() - called right before model training to prepare attached data sources
# * setup_model() - actual model definition code
# The easiest way to define those functions is to write them in the notebook (inline) and pass them to the project before calling deploy. Alternatively, the user can write the complete user_model.py to the project's directory.
#
# ### Define functions inline
# First variant with functions passed to the project instance. Note that you need to define imports inline inside the functions as only those code blocks are replaced in the source files.
# +
# define functions
def setup_data(self):
# load input data
if self.data_source is None:
logger.error('No data source found')
return
X, y = self.data_source.read()
# check data format
import pandas as pd
import numpy as np
if isinstance(X, pd.DataFrame):
X = X.values
else:
if not isinstance(X, np.ndarray):
raise Exception('X is not np.ndarray')
if isinstance(y, pd.Series):
y = y.values
else:
if not isinstance(y, np.ndarray):
raise Exception('y is not np.ndarray')
# prepare data
if y.ndim == 2:
# flatten to a one-dimensional array (just to avoid warnings from sklearn)
y = np.ravel(y)
self._num_features = X.shape[1]
self._num_classes = len(np.unique(y)) # np.nan, np.Inf in y are
# counted as classes by np.unique
# encodes target labels with integer values between 0 and
# self._num_classes - 1
from sklearn.preprocessing import LabelEncoder
self.label_encoder = LabelEncoder()
y = self.label_encoder.fit_transform(y)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
random_state=42)
# back to column vector: transform one-dimensional array into column vector
y_train = y_train[:, np.newaxis]
y_test = y_test[:, np.newaxis]
# set data member variables
self.X_train = X_train
self.X_test = X_test
self.y_train = y_train
self.y_test = y_test
def setup_model(self):
import tensorflow.compat.v1 as tf
self.optimizer = 'Adam'
# To set optimizer parameters, instantiate the class:
# self.optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
self.metrics = ['accuracy']
self.loss = tf.keras.losses.SparseCategoricalCrossentropy()
# As an alternative, define the loss function with a string
self.epochs = 50
self.batch_size = 250
self.model = tf.keras.Sequential([
tf.keras.layers.Input(self._num_features),
tf.keras.layers.Dense(
128,
activation='tanh',
kernel_regularizer=tf.keras.regularizers.l2(1e-3)
),
tf.keras.layers.Dense(self._num_classes, activation='softmax')
])
self.model.summary()
# set model code in project
project.set_model_code(setup_data=setup_data, setup_model=setup_model,
parent_class_name='NeuralNetworkClassification')
# -
# ### Define functions as source code
# Second variant: write the complete model file. The template, created by `Project.create()`, can be inspected with `!cat models/user_model.py`.
# +
# %%writefile models/user_model.py
import logging
from dq0.sdk.models.tf import NeuralNetworkClassification
logger = logging.getLogger()
class UserModel(NeuralNetworkClassification):
"""Derived from dq0.sdk.models.tf.NeuralNetwork class
Model classes provide a setup method for data and model
definitions.
"""
def __init__(self):
super().__init__()
def setup_data(self):
"""Setup data function. See code above..."""
pass
def setup_model(self):
"""Setup model function. See code above..."""
pass
# -
# ## Train the model
# After testing the model locally directly in this notebook, it's time to train it inside the DQ0 quarantine. This is done by calling experiment.train(), which in turn calls the CLI commands `dq0-cli project deploy` and `dq0-cli model train`.
run = experiment.train()
# train is executed asynchronously. You can wait for the run to complete or query its state with get_state().
# (TBD: in the future there could be a Jupyter extension that shows the run progress in a widget.)
# wait for completion
run.wait_for_completion(verbose=True)
# When the run has completed you can retrieve the results:
# get training results
print(run.get_results())
# After training, DQ0 runs the model checker to evaluate whether the trained model is safe and allowed for prediction. Get the state of the checker run, together with the other state information, via get_state():
# get the state whenever you like
print(run.get_state())
# ## Predict
# Finally, it's time to use the trained model to predict something
# +
import numpy as np
import pandas as pd
# get the latest model
model = project.get_latest_model()
# check DQ0 privacy clearing
if model.predict_allowed:
# get numpy predict data
predict_data = model.X_test[:10]
# call predict
run = model.predict(predict_data)
# wait for completion
run.wait_for_completion(verbose=True)
# -
# get predict results
y_pred = run.get_results()['predict']
print(y_pred)
# Let us quickly assess how good the predictions of our model are by generating the confusion matrix:
# +
import numpy as np
from dq0.sdk.data.utils.plotting import compute_confusion_matrix
y_actual = model.y_test[:10]
y_actual = model.label_encoder.inverse_transform(np.ravel(y_actual))
y_pred = model.label_encoder.inverse_transform(np.ravel(y_pred))
normalize = False # set to True to have the matrix entries normalized per row
cm, labels_list = compute_confusion_matrix(y_actual, y_pred, normalize=normalize)
if normalize:
fmt = '.2f'
else:
fmt = 'd'
import matplotlib.pyplot as plt
import seaborn as sns
cmap = 'Blues'  # any matplotlib colormap name
title = 'Confusion matrix'
fig, ax = plt.subplots()
if len(labels_list) > 10:
annot_kws = {'size': 6}  # reduce font size to avoid cluttering
xticks_rotation = '45'
else:
annot_kws = None
xticks_rotation = 'horizontal'
sns.heatmap(cm, ax=ax, annot=True, cbar=True, fmt=fmt, cmap=cmap,
annot_kws=annot_kws)
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('Actual labels')
ax.set_title(title)
ax.xaxis.set_ticklabels(labels_list)
ax.yaxis.set_ticklabels(labels_list)
ax.grid(False)
# rotate the tick labels and set their alignment
if xticks_rotation.lower() != 'horizontal'.lower():
for c_ax in [ax.get_xticklabels(), ax.get_yticklabels()]:
plt.setp(c_ax, rotation=45, ha="right", rotation_mode="anchor")
fig.tight_layout()
plt.close(fig)
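# The counts can be cross-checked against scikit-learn's confusion_matrix, where rows are actual labels and columns are predicted labels (toy labels below for illustration):

```python
from sklearn.metrics import confusion_matrix

y_true = ["cat", "dog", "cat", "dog"]
y_hat = ["cat", "cat", "cat", "dog"]
print(confusion_matrix(y_true, y_hat, labels=["cat", "dog"]))
# [[2 0]
#  [1 1]]
```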
| notebooks/DQ0SDK-Quickstart_Newsgroups.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# language: python
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %config IPCompleter.greedy=True
# %config IPCompleter.use_jedi=False
# +
import pandas as pd
import numpy as np
# Load data from google sheet
sheet_id = "1QaXBrhDY8tyF9cIBjejDTGMiv5vDUQG0KQ7iVapGQoY"
gid = 1987847080
link = f"https://docs.google.com/feeds/download/spreadsheets/Export?key={sheet_id}&exportFormat=csv&gid={gid}"
envs = pd.read_csv(link)
# Preprocessing
# Add house type icons
def emojify(row):
house_type = row["House type"]
if house_type == "residential":
return "|home|"
elif house_type == "office":
return "|office|"
elif house_type == "industrial":
return "|industry|"
return "other"
def new_type(row):
house_type = row["House type"]
if house_type == "residential":
return "residential building"
elif house_type == "office":
return "office building"
elif house_type == "industrial":
return "data center"
return "data center"
envs['Type'] = envs.apply(lambda row: emojify(row), axis=1)
envs['new_type'] = envs.apply(lambda row: new_type(row), axis=1)
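# The two row-wise `apply` calls above can equivalently be written as vectorised `map` lookups; a sketch on a toy frame (the column values are assumed to match the sheet):

```python
import pandas as pd

# hypothetical mini-frame standing in for the Google-sheet data
icon_map = {"residential": "|home|", "office": "|office|", "industrial": "|industry|"}
toy = pd.DataFrame({"House type": ["residential", "industrial", "lab"]})
toy["Type"] = toy["House type"].map(icon_map).fillna("other")  # unmapped values -> "other"
print(toy["Type"].tolist())  # ['|home|', '|industry|', 'other']
```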
# Make sure each env has gym
envs['Gym'] = envs['Gym'].replace('', np.NaN)
envs['Gym'] = envs['Gym'].fillna(method='ffill',axis=0)
# +
# Create main environment list
def create_main_list(filepath="../../docs/envs/envs_list.rst"):
rst_out = ""
for i, gym in enumerate(envs['Gym'].unique()):
gym_envs = envs[envs['Gym']==gym]
rst_out += f":{gym}:\n"
for index, row in gym_envs.iterrows():
env = row["Environment"]
symbol = row["Type"]
#rst_out += f" - `{env}`_ {symbol}\n"
rst_out += f" - `{env}`_\n"
# Add hline between gyms
if i < len(envs['Gym'].unique()) - 1:
rst_out += "\n----\n\n"
# Add image links
rst_out += """
.. |office| image:: https://raw.githubusercontent.com/tabler/tabler-icons/master/icons/building-skyscraper.svg
.. |home| image:: https://raw.githubusercontent.com/tabler/tabler-icons/master/icons/home.svg
.. |industry| image:: https://raw.githubusercontent.com/tabler/tabler-icons/master/icons/building-factory.svg
"""
with open(filepath, 'w') as file:
file.write(rst_out)
create_main_list(filepath="../../docs/envs/envs_list.rst")
# +
def create_env_descriptions():
for _, gym in enumerate(envs['Gym'].unique()):
env_descr_file = f"../../docs/envs/{gym}_descriptions.rst"
rst_out = ""
gym_envs = envs[envs['Gym']==gym]
for _, row in gym_envs.iterrows():
env = row["Environment"]
rst_out += f"\n\n.. _env-{env}: \n\n"
rst_out += f"``{env}``\n"
rst_out += '"' * (len(env) + 4) + "\n\n"
#rst_out += f":Type: {row['new_type']} ({row['Type']})\n"
rst_out += f":Type: {row['new_type']}\n"
rst_out += f":More info: `framework docs <{row['Original docs']}>`_\n"
with open(env_descr_file, 'w') as file:
file.write(rst_out)
create_env_descriptions()
# -
| notebooks/utils/nb001_create_envs_table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Author : <NAME>
# ### LGM VIP - Data Science September-2021
# ### Advanced Level Task 2 : Next Word Prediction
# ### Dataset Link : https://drive.google.com/file/d/1GeUzNVqiixXHnTl8oNiQ2W3CynX_lsu2/view
# ### Importing required Libraries
# +
import os
import pickle
import heapq
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from nltk.tokenize import RegexpTokenizer
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Embedding, LSTM, Dense, Activation
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
# -
import warnings
warnings.filterwarnings('ignore')
# ### Loading Data
# source text
file = open('1661-0.txt', encoding="utf8").read().lower()
print('corpus length:', len(file))
# ### Cleaning the data
# +
file = open("1661-0.txt", "r", encoding = "utf8")
lines = []
for i in file:
lines.append(i)
print(lines)
# +
data = ' '.join(lines)  # join all lines into a single string
data = data.replace('\n', '').replace('\r', '').replace('\ufeff', '')
data[:360]
# +
import string
translator = str.maketrans(string.punctuation, ' '*len(string.punctuation)) #map punctuation to space
new_data = data.translate(translator)
new_data[:500]
# +
z = []
for i in data.split():
if i not in z:
z.append(i)
data = ' '.join(z)
data[:500]
# -
# ### Tokenization
# +
tokenizer = Tokenizer()
tokenizer.fit_on_texts([data])
# saving the tokenizer for predict function.
pickle.dump(tokenizer, open('tokenizer1.pkl', 'wb'))
sequence_data = tokenizer.texts_to_sequences([data])[0]
sequence_data[:10]
# -
vocab_size = len(tokenizer.word_index) + 1
print(vocab_size)
# +
sequences = []
for i in range(1, len(sequence_data)):
words = sequence_data[i-1:i+1]
sequences.append(words)
print("The Length of sequences are: ", len(sequences))
sequences = np.array(sequences)
sequences[:10]
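# The loop above slides a window of length two over the token-id stream, so every training example pairs a word with its successor. A pure-Python illustration with a toy id sequence:

```python
def bigram_pairs(ids):
    # each training example is [previous word id, next word id]
    return [ids[i - 1:i + 1] for i in range(1, len(ids))]

print(bigram_pairs([5, 2, 9, 2]))  # [[5, 2], [2, 9], [9, 2]]
```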
# +
X = []
y = []
for i in sequences:
X.append(i[0])
y.append(i[1])
X = np.array(X)
y = np.array(y)
# -
print("The Data is: ", X[:5])
print("The responses are: ", y[:5])
y = to_categorical(y, num_classes=vocab_size)
y[:5]
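# `to_categorical` one-hot encodes the target word ids; a NumPy equivalent makes the transformation explicit:

```python
import numpy as np

def one_hot(y, num_classes):
    # one row per label, a single 1.0 in the column given by the label id
    out = np.zeros((len(y), num_classes), dtype=np.float32)
    out[np.arange(len(y)), y] = 1.0
    return out

print(one_hot(np.array([0, 2, 1]), 3))
```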
# ### Creating the Model
model = Sequential()
model.add(Embedding(vocab_size, 10, input_length=1))
model.add(LSTM(1000, return_sequences=True))
model.add(LSTM(1000))
model.add(Dense(1000, activation="relu"))
model.add(Dense(vocab_size, activation="softmax"))
model.summary()
# ### Plot The Model
# +
from tensorflow import keras
keras.utils.plot_model(model, to_file='model.png', show_layer_names=True)
# -
# ### Callbacks
# +
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.callbacks import TensorBoard
checkpoint = ModelCheckpoint("nextword1.h5", monitor='loss', verbose=1,
save_best_only=True, mode='auto')
reduce = ReduceLROnPlateau(monitor='loss', factor=0.2, patience=3, min_lr=0.0001, verbose = 1)
logdir='logsnextword1'
tensorboard_Visualization = TensorBoard(log_dir=logdir)
# -
# ### Compile The Model
model.compile(loss="categorical_crossentropy", optimizer=Adam(learning_rate=0.001))
# ### Fit The Model
model.fit(X, y, epochs=150, batch_size=64, callbacks=[checkpoint, reduce, tensorboard_Visualization])
# ### Graph
# +
# https://stackoverflow.com/questions/26649716/how-to-show-pil-image-in-ipython-notebook
# tensorboard --logdir="./logsnextword1"
# http://DESKTOP-U3TSCVT:6006/
#Check out this article for the complete breakdown of the code graph.png
#https://towardsdatascience.com/next-word-prediction-with-nlp-and-deep-learning-48b9fe0a17bf
from IPython.display import Image
pil_img = Image(filename='graph (1).png')
display(pil_img)
# -
# ### Prediction
# ### For prediction, we load the tokenizer that we stored in pickle format, together with the saved next-word model. The same tokenizer tokenises each input sentence for which predictions should be made; after that, we can make predictions on the input sentence with the saved model.
#
# ### Thank You
| 03 Advanced Level Task/Task 2-Next Word Prediction/Advanced Level Task 2-Next Word Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import division, print_function
# %matplotlib inline
import sys
sys.path.insert(0,'..') # allow us to format the book
sys.path.insert(0,'../code')
# use same formatting as rest of book so that the plots are
# consistent with that look and feel.
import book_format
book_format.load_style(directory='..')
# +
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import randn, random, uniform, seed
import scipy.stats
class ParticleFilter(object):
def __init__(self, N, x_dim, y_dim):
self.particles = np.empty((N, 3)) # x, y, heading
self.N = N
self.x_dim = x_dim
self.y_dim = y_dim
# distribute particles randomly with uniform weight
self.weights = np.empty(N)
self.weights.fill(1./N)
self.particles[:, 0] = uniform(0, x_dim, size=N)
self.particles[:, 1] = uniform(0, y_dim, size=N)
self.particles[:, 2] = uniform(0, 2*np.pi, size=N)
def predict(self, u, std):
""" move according to control input u with noise std"""
self.particles[:, 2] += u[0] + randn(self.N) * std[0]
self.particles[:, 2] %= 2 * np.pi
d = u[1] + randn(self.N) * std[1]
self.particles[:, 0] += np.cos(self.particles[:, 2]) * d
self.particles[:, 1] += np.sin(self.particles[:, 2]) * d
def weight(self, z, var):
dist = np.sqrt((self.particles[:, 0] - z[0])**2 +
(self.particles[:, 1] - z[1])**2)
# simplification assumes variance is invariant to world projection
n = scipy.stats.norm(0, np.sqrt(var))
prob = n.pdf(dist)
# particles far from a measurement will give us 0.0 for a probability
# due to floating point limits. Once we hit zero we can never recover,
# so add some small nonzero value to all points.
prob += 1.e-12
self.weights += prob
self.weights /= sum(self.weights) # normalize
def neff(self):
return 1. / np.sum(np.square(self.weights))
def resample(self):
p = np.zeros((self.N, 3))
w = np.zeros(self.N)
cumsum = np.cumsum(self.weights)
for i in range(self.N):
index = np.searchsorted(cumsum, random())
p[i] = self.particles[index]
w[i] = self.weights[index]
self.particles = p
self.weights = w / np.sum(w)
def estimate(self):
""" returns mean and variance """
pos = self.particles[:, 0:2]
mu = np.average(pos, weights=self.weights, axis=0)
var = np.average((pos - mu)**2, weights=self.weights, axis=0)
return mu, var
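# A quick sanity check of the effective-sample-size formula in neff() above: uniform weights give N_eff = N, while a single dominant weight drives N_eff toward 1:

```python
import numpy as np

def neff(weights):
    # effective sample size: 1 / sum(w_i^2) for normalized weights
    return 1.0 / np.sum(np.square(weights))

print(neff(np.full(4, 0.25)))  # 4.0 (uniform weights)
print(round(neff(np.array([0.97, 0.01, 0.01, 0.01])), 2))  # 1.06 (degenerate)
```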
# +
from pf_internal import plot_pf
seed(1234)
N = 3000
pf = ParticleFilter(N, 20, 20)
xs = np.linspace (1, 10, 20)
ys = np.linspace (1, 10, 20)
zxs = xs + randn(20)
zys = xs + randn(20)
def animatepf(i):
if i == 0:
plot_pf(pf, 10, 10, weights=False)
idx = int((i-1) / 3)
x, y = xs[idx], ys[idx]
z = [x + randn()*0.2, y + randn()*0.2]
step = (i % 3) + 1
if step == 2:
pf.predict((0.5, 0.5), (0.2, 0.2))
pf.weight(z=z, var=.6)
plot_pf(pf, 10, 10, weights=False)
plt.title('Step {}: Predict'.format(idx+1))
elif step == 3:
pf.resample()
plot_pf(pf, 10, 10, weights=False)
plt.title('Step {}: Resample'.format(idx+1))
else:
mu, var = pf.estimate()
plot_pf(pf, 10, 10, weights=False)
plt.scatter(mu[0], mu[1], color='g', s=100, label='PF')
plt.scatter(x, y, marker='x', color='r', s=180, lw=3, label='Robot')
plt.title('Step {}: Estimate'.format(idx+1))
#plt.scatter(mu[0], mu[1], color='g', s=100, label="PF")
#plt.scatter([x+1], [x+1], marker='x', color='r', s=180, label="True", lw=3)
plt.legend(scatterpoints=1, loc=2)
plt.tight_layout()
from gif_animate import animate
animate('particle_filter_anim.gif', animatepf,
frames=40, interval=800, figsize=(4, 4))
# -
# <img src='particle_filter_anim.gif'>
| animations/particle_animate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Somehow figure out what to send to LCO, and preferably edit submit files manually
# -
# #%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import marshaltools
import re
from Observatory import Observatory
from utils import get_config, plot_visibility, prepare_snifs_schedule
from astropy.time import Time
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.sdss import SDSS
from io import StringIO
import json, copy
import requests
# Get the lco token
apitoken = { "token": "<KEY>" }
df_hosts = pd.read_csv('target_for_lco_brighterthan19gmag_sorted_by_dec.csv')
df_hosts
# Check visibility
observatories ={
'uh88': Observatory('uh88', 19.8231, -155.47, 4205, sun_alt_th=-17, logger=None),
}
obs = 'uh88'
date = '2018-11-14'
tshift = {'ntt': -0.25, 'uh88': 0} # In units of days
trange = [Time(Time(date).jd+tshift[obs],format='jd'), Time(Time(date).jd+7.+tshift[obs],format='jd')]
aentries = {}
for row in df_hosts.iterrows():
ra = row[1]['Host_candi_ra']
dec = row[1]['Host_candi_dec']
print('Calculating visibility of source %s (ra: %f, dec: %f) with %s.'%(row[1]['SN_Name'], ra, dec, obs))
obs_w = observatories[obs].compute_visibility(ra, dec, trange,airmass_th=1.6,dt_min=30)
amasses = obs_w['airmass']
# If we have more than five entries in the good airmass range, submit
aentries[row[1]['SN_Name']] = len(amasses)
print('Visibility : %s'%(len(amasses)))
# Load LCO template
with open('lcoapi_hosts.json','r') as fh:
api = json.load(fh)
# +
lcoout = {}
for row in df_hosts.iterrows():
ra = row[1]['Host_candi_ra']
dec = row[1]['Host_candi_dec']
name = 'V'+row[1]['SN_Name']
snapi = copy.deepcopy(api)
# target
snapi['requests'][0]['target']['name'] = name
snapi['requests'][0]['target']['ra'] = ra
snapi['requests'][0]['target']['dec'] = dec
# windows we leave as default (could change default window)
# location is also default (2m telescope)
snapi['requests'][0]['constraints']['max_airmass'] = 1.6
# We could leave the molecules (obs) as default, or change the slit width
for mol in snapi['requests'][0]['molecules']:
mol['spectra_slit'] = 'slit_2.0as'
# print(snapi)
apiresponse = requests.post(
# 'https://observe.lco.global/api/userrequests/validate/',
'https://observe.lco.global/api/userrequests/',
headers={'Authorization':'Token {}'.format(apitoken['token'])},
json=snapi )
# Make sure this api call was successful
try:
apiresponse.raise_for_status()
except requests.exceptions.HTTPError as exc:
print('Request failed: {}'.format(apiresponse.content))
raise exc
print(apiresponse.json())
lcoout[name] = apiresponse.json()
# with open('lco_%s.json'%(snname),'w') as fh:
# json.dump(snapi,fh)
# -
# Now look at cpl SNIa for redshift
obs = 'uh88'
date = '2018-11-14'
tshift = {'ntt': -0.25, 'uh88': 0} # In units of days
trange = [Time(Time(date).jd+tshift[obs],format='jd'), Time(Time(date).jd+14.+tshift[obs],format='jd')]
for snname, sninfo in cpl.sources.items():
if sninfo['classification'] is None: continue
if not re.search('Ia',sninfo['classification']) : continue
ra = cpl.sources[snname]['ra']
dec = cpl.sources[snname]['dec']
print('Calculating visibility of source %s (ra: %f, dec: %f) with %s.'%(snname, ra, dec, obs))
obs_w = observatories[obs].compute_visibility(ra, dec, trange,airmass_th=2.0,dt_min=30)
amasses = obs_w['airmass']
if (len(amasses)<10):
print('Not enough visibility')
# Check SDSS
# Explore SDSS query - should do for the full sample
pos = SkyCoord(ra,dec, frame='icrs', unit="deg")
response = SDSS.query_crossid_async(pos)
sdsstab = pd.read_table(StringIO(response.text),sep=',',header=1)
print(sdsstab)
continue
sdsstypes = sdsstab['type']
if len(sdsstypes) == 0:
# No SDSS match, continue
pass
elif len(sdsstypes) == 1 and cut_sdss_stars:
if sdsstypes[0]=='STAR':
msg = "%s: Skipping %s due to photometric SDSS def"%(date,sne[1]["ztf_name"])
logger.info(msg)
continue
else:
print(sdsstypes)
print('Multiple SDSS photometric matches...')
aentries[snname] = len(amasses)
print('snia')
print(snname)
print(sninfo)
api
lcojson = {"group_id":"ampelLCO","proposal":"CON2018B-005","ipp_value":"1.0","operator":"SINGLE","observation_type":"NORMAL","requests":[{"acceptability_threshold":100,"target":{"name":"ZTFSN","type":"SIDEREAL","ra":"166.554549","dec":"50.261242","proper_motion_ra":0,"proper_motion_dec":0,"epoch":2000,"parallax":0,"rot_mode":"VFLOAT","rot_angle":0},"molecules":[{"type":"LAMP_FLAT","instrument_name":"2M0-FLOYDS-SCICAM","exposure_count":"2","bin_x":1,"bin_y":1,"fill_window":false,"defocus":0,"ag_mode":"OPTIONAL","acquire_mode":"WCS","exposure_time":60,"spectra_slit":"slit_2.0as"},{"type":"ARC","instrument_name":"2M0-FLOYDS-SCICAM","exposure_count":"2","bin_x":1,"bin_y":1,"fill_window":false,"defocus":0,"ag_mode":"OPTIONAL","acquire_mode":"WCS","exposure_time":60,"spectra_slit":"slit_2.0as"},{"type":"SPECTRUM","instrument_name":"2M0-FLOYDS-SCICAM","exposure_count":"2","bin_x":1,"bin_y":1,"fill_window":false,"defocus":0,"ag_mode":"ON","acquire_mode":"WCS","exposure_time":"900","spectra_slit":"slit_2.0as"},{"type":"ARC","instrument_name":"2M0-FLOYDS-SCICAM","exposure_count":"2","bin_x":1,"bin_y":1,"fill_window":false,"defocus":0,"ag_mode":"OPTIONAL","acquire_mode":"WCS","exposure_time":60,"spectra_slit":"slit_2.0as"},{"type":"LAMP_FLAT","instrument_name":"2M0-FLOYDS-SCICAM","exposure_count":"2","bin_x":1,"bin_y":1,"fill_window":false,"defocus":0,"ag_mode":"OPTIONAL","acquire_mode":"WCS","exposure_time":60,"spectra_slit":"slit_2.0as"}],"windows":[{"start":"2018-11-14 09:52:13","end":"2018-11-20 09:52:13"}],"location":{"telescope_class":"2m0"},"constraints":{"max_airmass":"1.8","min_lunar_distance":30}}]}
| LCOsubmit_hosts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: fattails
# language: python
# name: fattails
# ---
# # About
# Numerical visualisation of the Central Limit Theorem.
#
# I show how to transform samples of the uniform distribution into a bell shaped distribution.
import numpy as np
import pandas as pd
# # Functions
def summed_uniform_sample(summand_count):
"""Generate random uniform values and sum them up.
Parameters:
summand_count (int): Specifies how many uniform values we want to sum.
"""
summands = np.random.uniform(0,1,summand_count)
sum_total = summands.sum()
return sum_total
def plot_uniform_sum_samples(summand_count, sample_size, bins=50):
"""Plot the sample distribution of summed uniform variables as a histogram.
Generates lots of samples of summed uniform variables.
Then plots these as a histogram.
Parameters:
summand_count (int): The number of uniform variables in each sum
sample_size (int): The number of times we want to sample the summed uniform.
"""
samples = [summed_uniform_sample(summand_count) for idx in range(sample_size)]
samples = pd.Series(samples)
fig = samples.hist(bins=bins)
return fig
# # Examples
# ### Uniform to Bell Shape in Three Steps
# First off, let's focus on how the uniform distribution starts looking bell shaped with just three summands...
# We want to approximate an infinite sample size
# That way we get a histogram which matches the distribution
sample_size=10**5
# Plot the distribution of one summand. It should be a flat horizontal line, i.e. the uniform distribution.
summand_count = 1
plot_uniform_sum_samples(summand_count, sample_size)
# Plot the distribution of two summed variables. It should be triangle shaped.
summand_count = 2
plot_uniform_sum_samples(summand_count, sample_size)
# Plot the distribution of three summed uniform variables. Now we should start to see a bell shape...
summand_count = 3
plot_uniform_sum_samples(summand_count, sample_size)
# Let's skip all the way to 24 summands; this should be pretty close to Gaussian:
summand_count = 24
plot_uniform_sum_samples(summand_count, sample_size, bins=100)
# ToDo: Plot a gaussian of same mean and variance for comparison
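# Addressing the ToDo above with a rough sketch (not part of the original notebook): overlay a Gaussian with matched mean and variance. The sum of n independent Uniform(0,1) variables has mean n/2 and variance n/12.

```python
import numpy as np
import matplotlib.pyplot as plt

summand_count = 24
sample_size = 10**5
samples = np.random.uniform(0, 1, (sample_size, summand_count)).sum(axis=1)

mean = summand_count / 2            # n/2 = 12
std = np.sqrt(summand_count / 12)   # sqrt(n/12) = sqrt(2)

# Normalized histogram of the summed uniforms...
plt.hist(samples, bins=100, density=True, alpha=0.5, label='summed uniforms')
# ...with the matched Gaussian density computed directly from its formula.
x = np.linspace(samples.min(), samples.max(), 200)
pdf = np.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (std * np.sqrt(2 * np.pi))
plt.plot(x, pdf, label='Gaussian, same mean/variance')
plt.legend()
```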
# # Preasymptotic Distribution
#
# Let's use a smaller sample size and see how that fares.
#
# Try a one-year sample size. Think of it as getting one uniform sum every day for a year.
sample_size = 365
summand_count = 1
plot_uniform_sum_samples(summand_count, sample_size, bins=50)
summand_count = 2
plot_uniform_sum_samples(summand_count, sample_size, bins=50)
summand_count = 3
plot_uniform_sum_samples(summand_count, sample_size, bins=50)
# ##### Real Life Example
# * Suppose you buy one unit electricity every hour and care about the total cost per day.
# * Next suppose that electricity price is uniform (which it is not) between 0 and 1 $/unit.
# * In that case the plot below shows the distribution of your daily cost across one year.
summand_count = 24
plot_uniform_sum_samples(summand_count, sample_size, bins=50)
# ### Conclusion
# The uniform distribution converges quickly (compared to other distributions), but with a real-life sample size you'll see something rougher than the smooth asymptotic distribution.
| notebooks/NB-22 - Visual Central Limit Theorem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Understand the Normal Curve
# ## Mini-Lab: Characteristics of the Normal Curve
# Welcome to your next mini-lab! Go ahead and run the following cell to get started. You can do that by clicking on the cell and then clicking `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell.
# +
from datascience import *
import numpy as np
import matplotlib
# %matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
# -
# Rather than a traditional mini-lab format with graded questions, this mini-lab is more akin to an exploratory analysis of normal curves and similar content, kind of like the interactive content we've supplied you during your educational journey.
#
# In the previous lab, we bootstrapped and created a confidence interval of novel COVID-19 test cases. For this demonstration, go ahead and replace the relevant cells below with your solution from the previous lab.
# +
def proportion_positive(test_results):
numerator = np.count_nonzero(test_results == "positive")
denominator = len(test_results)
return numerator / denominator
def sample_population(population_table):
sampled_population = population_table.sample()
return sampled_population
def apply_statistic(sample_table, column_name, statistic_function):
return statistic_function(sample_table.column(column_name))
def bootstrap(sample_table, column_name, test_statistic):
resampled_table = sample_table.sample()
return apply_statistic(resampled_table, column_name, test_statistic)
# -
new_tests = Table().read_table("../datasets/new_covid19_village_tests.csv")
new_tests.show(5)
# The following cell will take the bootstrap code and put it in a function for ease of use. You will be running this function over and over again so it's best to make things as simple as possible.
def full_bootstrap_simulation(iterations):
bootstrap_samples = make_array()
for iteration in np.arange(iterations):
bootstrap_result = bootstrap(new_tests, "COVID-19 Test Result", proportion_positive)
bootstrap_samples = np.append(bootstrap_samples, bootstrap_result)
simulation_table = Table().with_column("Bootstrap Test Statistics", bootstrap_samples)
simulation_table.hist(bins=np.arange(0.225, 0.375, 0.006));
# Now that everything is running and imported go ahead and run the function below. Change the number of iterations every time to something much smaller and much larger (though if you use a ridiculously large number, it may take some time to render.)
#
# What changes specifically and how does it change? What about the smoothness of the curve? How does the maximum `Percent per unit` change as you increase or decrease the number of iterations?
#
# Do you notice anything interesting about our normal curve when you use a large number such as 100,000? If I told you that the true mean is 0.268, would that explain any phenomena in the graph?
full_bootstrap_simulation(1000)
# Feel free to tinker around with the `full_bootstrap_simulation` function as well as the test statistic being used. Feel free also to import different data or to create your own data!
| minilabs/understand-the-normal-curve/normal_curve_minilab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# #############################
# Fermion Tensor Network Basics
# #############################
#
# The fermionic tensor network modules in ``quimb`` use the ``pyblock3.algebra.fermion`` submodule as the backend.
# Users can choose between two numerical backends (Python and C++): the Python backend uses :class:`~pyblock3.algebra.fermion.SparseFermionTensor` as the base data class, while the C++ backend uses :class:`~pyblock3.algebra.fermion.FlatFermionTensor`.
#
# Additionally, the ``pyblock3`` backend currently supports four types of symmetries (U1, Z2, U1 \otimes U1, Z2 \otimes Z2).
#
# Backend settings can be passed to ``pyblock3`` via :func:`~quimb.tensor.block_interface.set_options`.
#
# In this example, we'll use Z2 symmetry and the Python backend for easier demonstration.
# -
from quimb.tensor.fermion.block_interface import set_options
from quimb.tensor.fermion.fermion_core import FermionTensor, FermionTensorNetwork
set_options(use_cpp=False, symmetry="Z2")
# + active=""
# Creating Fermion Tensors
# ------------------------
#
# Fermion Tensors are created using the class :class:`~quimb.tensor.fermion.FermionTensor`, which inherits from base
# class :class:`~quimb.tensor.tensor_core.Tensor` with the requirement that the underlying data must be a ``pyblock3`` block tensor.
#
# For users experienced with ``pyblock3``, it's encouraged to create the block tensor data via ``pyblock3.algebra.fermion`` directly. Additionally, for users seeking basic functionality, the ``quimb.tensor.block_gen`` submodule provides routines to generate random fermionic tensors with symmetry.
#
# Key functionality:
#
# - :func:`~quimb.tensor.block_gen.rand_single_block`
# - :func:`~quimb.tensor.block_gen.rand_all_blocks`
#
# In this example we'll create a random three-index FermionTensor T_{abc} with the symmetry pattern shown below.
# ^
# b|
# a | c
# --->o--->
#
# The symmetry information for this tensor is: S_a - S_b - S_c = Net symmetry.
# + active=""
# To create a random single-block tensor with net symmetry Z2(1), we pass the block shape, the symmetry pattern, the net symmetry and the index where we want the net symmetry to reside.
# -
from quimb.tensor.fermion.block_gen import rand_single_block
# +
shape = (2, 3, 5) # (a, b, c)
pattern = "+--" # corresponds to S_a - S_b - S_c = Net symmetry
dq = 1 # Net symmetry being Z2(1)
ind = 2 # allow last index to take the net symmetry
data = rand_single_block(shape, pattern=pattern, dq=dq, ind=ind)
print("Symmetry Pattern:", data.pattern)
print("Net Symmetry: ", data.dq)
'''
if the Python backend is used, we can print out the symmetry information
'''
if hasattr(data, "blocks"):
for iblk in data.blocks:
print(iblk.q_labels, iblk.shape) # Z2(0)-Z2(0)-Z2(1) = Net Symmetry Z2(1)
T = FermionTensor(data, inds=('a','b','c'))
# + active=""
# To create a random tensor with multiple blocks, we pass the shape for each block, a tuple of allowed symmetries for each dimension, and the net symmetry to :func:`~quimb.tensor.block_gen.rand_all_blocks`
# +
from quimb.tensor.fermion.block_gen import rand_all_blocks
shape = (2, 3, 5)
symmetry_info = ((0,1), (0,1), (0,))
# allow S_a/S_b to be Z2(0) or Z2(1); only Z2(0) for S_c
pattern = "+--"
dq = 1
data = rand_all_blocks(shape, symmetry_info, pattern=pattern, dq=dq)
if hasattr(data, "blocks"):
for iblk in data.blocks:
print(iblk.q_labels, iblk.shape)
T = FermionTensor(data, inds=('a','b','c'))
print(T)
# + active=""
# Contraction of Free Fermion Tensors
# -----------------------------------
#
# Contraction between fermion tensors is equivalent to contracting operators. The order of the input operands affects the output: for instance, in most cases \hat{T1} \hat{T2} != \hat{T2} \hat{T1}
#
# Contractions are handled by :func:`~quimb.tensor.fermion.tensor_contract`. If the input tensors are free fermion tensors (not part of any fermion tensor network), the order of the operands is the same as the order in which they appear in the arguments.
# +
import numpy as np
shape = (1,1,1)
data1 = rand_all_blocks(shape, symmetry_info, pattern=pattern, dq=dq)
T1 = FermionTensor(data1, inds=('a','b', 'c'))
data2 = rand_all_blocks(shape, symmetry_info, pattern=pattern, dq=dq)
T2 = FermionTensor(data2, inds=('c','d', 'e'))
from quimb.tensor.tensor_core import tensor_contract
out = tensor_contract(T1, T2, output_inds=('a', 'b', 'd', 'e')) # \hat{T}_1 \hat{T}_2
out1 = tensor_contract(T2, T1, output_inds=('a', 'b', 'd', 'e')) # \hat{T}_2 \hat{T}_1
# note contraction has an order in Fermion Tensors
# T1 \dot T2 != T2 \dot T1
print("T1 @ T2")
for iblk in out.data:
print(np.asarray(iblk).ravel(), iblk.q_labels)
print("T2 @ T1")
for iblk in out1.data:
print(np.asarray(iblk).ravel(), iblk.q_labels)
# + active=""
# Creating Fermion Tensor Network
# -------------------------------
#
#
# We can combine these tensors into a :class:`~quimb.tensor.fermion.FermionTensorNetwork` using the ``&`` operator overload:
# +
tn = T1 & T2 # equivalent to \hat{T2} \hat{T1}
out = tn.contract(all)
for iblk in out.data:
print(np.asarray(iblk).ravel(), iblk.q_labels)
from quimb.tensor.fermion.fermion_core import FermionTensorNetwork
tn = FermionTensorNetwork((T1, T2), virtual=True) # T_2 T_1
out = tn.contract(all)
for iblk in out.data:
print(np.asarray(iblk).ravel(), iblk.q_labels)
# + active=""
# Creating Random 2D Fermionic PEPS
# ---------------------------------
#
#
# When generating a random 2D fermionic PEPS, one needs to decide the initial net symmetry on each site and the allowed symmetry sectors on each leg. These two arguments are passed to :func:`~quimb.tensor.block_gen.gen_2d_bonds` to generate the symmetry information needed to construct a 2D fermionic PEPS using :meth:`~quimb.tensor.fermion_2d.FPEPS.rand`.
#
# In this example we show how to generate a 4x3 fermionic PEPS with all Z2 symmetry sectors allowed on each leg. The bond dimension per block is 2, and we place a net symmetry of Z2(1) on each site.
# +
from quimb.tensor.fermion.block_gen import gen_2d_bonds
from quimb.tensor.fermion.fermion_2d import FPEPS
Lx, Ly = (4, 3)
z2arr = np.ones((Lx, Ly)) # net Z2(1) at each site
physical_infos = dict()
for ix in range(Lx):
for iy in range(Ly):
physical_infos[ix, iy] = (0,1) # all physical legs allow Z2(0) and Z2(1) symmetry
symmetry_infos, dq_infos = gen_2d_bonds(z2arr, physical_infos)
bond_dim = 2
peps = FPEPS.rand(Lx, Ly, bond_dim, symmetry_infos, dq_infos, phys_dim=2)
peps.normalize_()
# + active=""
# Getting Hubbard Hamiltonian
# ---------------------------
#
# The fermionic module provides a default interface to encode the 2D Hubbard model in the backend. This is achieved through :func:`~quimb.tensor.fermion_2d.Hubbard2D`. The hopping Hamiltonian is implemented as a block tensor that acts on two sites. The single-site onsite Coulomb repulsion and chemical potential parts are folded into the same two-site tensor.
#
# In this example, we compute the energy of the random state in 2D Hubbard Hamiltonian
#
# -
from quimb.tensor.fermion.fermion_2d_tebd import Hubbard2D
t = 1
u = 4
mu = 0
ham = Hubbard2D(t, u, Lx, Ly, mu=mu)
ene = peps.compute_local_expectation(ham.terms, max_bond=32)
print(ene)
# + active=""
# Measuring Local Observables
# ---------------------------
#
# The fermionic module provides a default encoding scheme to construct operators/states for spinful fermions.
#
# Symmetry |vac> |+-> |+> |->
# --------------------------------------------------
# Z2 Z2(0) Z2(0) Z2(1) Z2(1)
# Z2*Z2 (0,0) (0,1) (1,0) (1,1)
# U1 U1(0) U1(2) U1(1) U1(1)
# U1*U1 (0,0) (2,0) (1,1) (1,-1)
# --------------------------------------------------
#
# Key operators for such encoding scheme:
#
# - :func:`~quimb.tensor.block_interface.ParticleNumber`
# - :func:`~quimb.tensor.block_interface.measure_SZ`
# - :func:`~quimb.tensor.block_interface.Hubbard`
#
# For a customized encoding scheme, the user needs to implement the Hamiltonian.
# -
from quimb.tensor.fermion.block_interface import ParticleNumber, measure_SZ
pnterms = dict()
pnop = ParticleNumber()
szterms = dict()
szop = measure_SZ()
for ix in range(Lx):
for iy in range(Ly):
pnterms[ix,iy]=pnop
szterms[ix,iy]=szop
pntot = peps.compute_local_expectation(pnterms, max_bond=32)
sztot = peps.compute_local_expectation(szterms, max_bond=32)
print(pntot)
print(sztot)
# + active=""
# Simple Update
# -------------
#
#
# +
from quimb.tensor.fermion.fermion_2d_tebd import SimpleUpdate
su = SimpleUpdate(
psi0=peps,
ham=ham,
D=6,
chi=32,
compute_energy_every=10,
compute_energy_per_site=True,
ordering = 'random',
keep_best=True,
)
su.evolve(10, tau=0.01)
# -
psi = su.get_state()
pntot1 = psi.compute_local_expectation(pnterms, max_bond=32, normalized=True)
sztot1 = psi.compute_local_expectation(szterms, max_bond=32, normalized=True)
print("Before Simple Update")
print("N=%.6f, Sz=%.6f"%(pntot, sztot))
print("After Simple Update")
print("N=%.6f, Sz=%.6f"%(pntot1, sztot1))
| quimb/tensor/fermion/tutorial/quimb_fermion_example.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: UIMA Ruta
# language: ruta
# name: ruta
# ---
# # Exercise 2: Sequential Patterns
#
# This exercise provides an introduction to how annotations are combined in sequential patterns.
# #### Setup
# First, we define some input text for the following examples.
%%documentText
The dog barked at the cat.
Dogs, cats and mice are mammals.
There are 12 tuna swimming in the sea.
This text was created approx. on 13.04.2021.
# Then, we provide some initial `Animal` annotations using a dictionary.
DECLARE Animal;
WORDLIST AnimalList = 'resources/animals.txt';
MARKFAST(Animal, AnimalList, true);
# This annotates all animals in the text (dog, cat, dogs, cats, mice, tuna). The annotations are not visible as we did not execute the statement `COLOR(Animal, "lightgreen");`.
# ### Sequential Patterns
# #### Simple pattern: An animal preceded by a number
# In the previous exercise 1, you have seen how we can annotate a word based on its covered text. Oftentimes, it is desirable to only create an annotation if the word is part of a certain sequence. This is very easy to do with Ruta.
#
# Suppose that we want to create annotations of the type `MultipleAnimals`, if an Animal is preceded by a number. Then we can do the following.
DECLARE MultipleAnimals; // Declaring a new type MultipleAnimals
(NUM Animal){-> MultipleAnimals}; // If a number is followed by an Animal, create a new annotation of type MultipleAnimals
COLOR(MultipleAnimals,"lightblue"); // Show "MultipleAnimals" in blue
# #### Annotating an enumeration of animals
# Next, we want to annotate enumerations of Animals, i.e. "*Dogs, cats and mice*" in this text example. This should be annotated with a new Type called `AnimalEnum`.
#
# As a preliminary step, we will annotate all `Conjunction` elements with the rule below using the Condition-Action structure that we have already seen in exercise 1. The rule in line 2 goes through all tokens of the document (`ANY`) and matches if this token is a `COMMA` or if the token is the word "and" or "or". In these cases, it creates a new `Conjunction` annotation.
DECLARE Conjunction;
ANY{OR(IS(COMMA), REGEXP("and|or")) -> Conjunction};
COLOR(Conjunction,"pink");
# Now, we declare a new annotation type `AnimalEnum` and annotate enumerations of Animals using composed rule elements and a quantifier. The general structure of these enumerations should be: `Animal Conjunction Animal ... Conjunction Animal`. Similar to regular expressions, the plus quantifier `+` can be used to model this behavior. It matches if there is at least one occurrence of the part `(Conjunction Animal)`.
#
# Other important quantifiers are `?` (matches zero or one time), `*` (matches zero or more times) and the notation `[2,5]` - matches two to five repetitions. These quantifiers are greedy and always match with the longest valid sequence.
DECLARE AnimalEnum;
(Animal (Conjunction Animal)+) {-> AnimalEnum};
COLOR(AnimalEnum, "lightgreen");
# #### Annotating sentences
# Let's reset the document with all its annotations — which is called Common Analysis Structure, `CAS` — and try to annotate sentences.
%resetCas
%%documentText
The dog barked at the cat.
Dogs, cats and mice are mammals.
Zander and tuna are fishes.
This text was created approx. on 13.04.2021.
# We declare a new annotation type `Sentence` and create a Sentence annotation for each sentence in the input document. We use the wildcard `#` which uses the next rule element to determine its match and always takes the shortest possible sequence. For instance, the rule `(# PERIOD){-> Sentence};` creates a Sentence annotation on anything until the first `PERIOD` token. The second rule `PERIOD (# PERIOD){-> Sentence};` creates a Sentence annotation on anything between two consecutive `PERIOD` tokens.
# +
// Let's also switch to a different output display mode
// We list the detected sentences in a table
%displayMode CSV
%csvConfig Sentence
DECLARE Sentence;
(# PERIOD){-> Sentence};
PERIOD (# PERIOD){-> Sentence};
# -
# This initial attempt to annotate sentences has several problems with the period symbols in `approx.` and within the date.
// To start over, we can remove the initial faulty Sentence annotations.
// For each Sentence, we apply UNMARK() which removes annotations.
s:Sentence{-> UNMARK(s)};
# Let's introduce a "helper" annotation for sentence ends and improve the resulting annotations.
# +
DECLARE SentenceEnd;
// A period is a "SentenceEnd" only if it is followed by any Token (_) that is
// not a number and not a small written word (SW).
//
// The "_" is a special matching condition. It is also fulfilled if nothing is left to match.
// This is necessary to match the last Sentence.
PERIOD{-> SentenceEnd} _{-PARTOF(NUM), -PARTOF(SW)};
(# SentenceEnd){-> Sentence}; // Matches the first sentence.
SentenceEnd (# SentenceEnd){-> Sentence}; // Matches other sentences.
# -
# Looks good! Of course, there are still problems, e.g. exclamation marks and question marks should also be considered a `SentenceEnd` ...
# ### Annotating a simple Date pattern
# In the next cell, we declare four new annotation types: `Day`, `Month`, `Year` and `Date`.
# We create a single rule for detecting dates of the form `DD.MM.YYYY`.
# This single rule should create four annotations:
# 1. A "Date" annotation for the complete date mention.
# 2. A "Day" annotation for the two digits of the day.
# 3. A "Month" annotation for the two digits of the month.
# 4. A "Year" annotation for the four digits of the year.
# +
DECLARE Date, Day, Month, Year;
//we restrict the number using a regex
(NUM{REGEXP("..")-> Day} PERIOD
NUM{REGEXP("..")-> Month} PERIOD
NUM{REGEXP(".{4}")-> Year}){-> Date};
COLOR(Day, "pink");
COLOR(Month, "lightgreen");
COLOR(Year, "lightblue");
COLOR(Date, "lightgrey");
# -
# It is often very useful to specify more than one action in a rule. This way, we can detect multiple entities using a single sequential pattern and only in combination with other information.
| notebooks/ruta-training/Chapter 1 - Language elements/Exercise 2 - Sequential Patterns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import json
import math
from imutils.video import VideoStream
import argparse
import datetime
import motmetrics as mm
import imutils
import time
from sklearn import preprocessing
import cv2 as cv
from sort import *
from utils import load_detections
from detection import Detection
from tracker import Tracker
acc = mm.MOTAccumulator(auto_id=True)
iou_overlaps = []
desc_dists = []
confusion_frames = []
confusion_tracks = []
confusion_distances =[]
colors = [[0,0,128],[0,255,0],[0,0,255],[255,0,0],[0,128,128],[128,0,128],[128,128,0],[255,255,0],[0,255,255],[255,255,0],[128,0,0],[0,128,0]
,[0,128,255],[0,255,128],[255,0,128],[128,255,0],[255,128,0],[128,255,255],[128,0,255],[128,128,128],[128,255,128]]
tracking_methods=['kalman_corners']
#tracking_methods=['center_flow','keypoint_flow','kalman_center','kalman_corners','SORT']
detectors = ['yolo']
#detectors = ['ssd300','retinanet','yolo']
#'center_fow','keypoint_flow','kalman_center','kalman_corners',
datasets=['garda_2']
times = {}
for dataset in datasets:
times[dataset]={}
images_input_path='../%s/'%dataset
image_id_prefix= dataset
frame_width=1032
frame_height=778
if(dataset=='venc'):
frame_width = 1280
frame_height = 960
if(dataset=='modd'):
frame_width=640
frame_height=464
if(dataset=='garda_1' or dataset=='garda_2'):
frame_width=1280
frame_height=720
if(dataset=='mot_1'):
frame_width=768
frame_height=576
iou_threshold = 0.1
for detector in detectors:
times[dataset][detector] = {}
boat_class=8
min_conf=0
if(detector=='ssd300'):
boat_class=4
min_conf=0
if(detector=='def'):
boat_class=1
path = '%s/%s_videos'%(detector,image_id_prefix)
detections = load_detections(image_id_prefix,detector,boat_class,0)
for tracking_method in tracking_methods:
times[dataset][detector][tracking_method] = []
video_output_path='%s/%s.avi'%(path,tracking_method)
json_output_path='%s/%s.json'%(path,tracking_method)
out_tracking = cv.VideoWriter(video_output_path,cv.VideoWriter_fourcc('M','J','P','G'), 30, (frame_width,frame_height))
frameCount =0
no_tracking_res = []
tracking_res = []
kalman_trackers=[]
# initialize the first frame in the video stream
frameCount =0
step_up = 0.1
step_down = 0.2
print('Running: Dataset:%s, Detector:%s, Tracker:%s, @%dx%d'%(dataset,detector,tracking_method,frame_width,frame_height))
preds = []
tracks=[]
started = False
multiplier=0
cc=0
prev_frame=None
total_frames=641
if(tracking_method=='SORT'):
mot_tracker = Sort()
else:
tracker_wrapper = Tracker(tracking_method)
tracker_wrapper.frame_width = frame_width
tracker_wrapper.frame_height = frame_height
if(dataset=='modd'):
tracker_wrapper.A = np.array([int(frame_width/2),int(frame_height/2)])
tracker_wrapper.B = np.array([int(frame_width/5),int(frame_height-1)])
tracker_wrapper.C= np.array([int(4*frame_width/5),frame_height-1])
elif(dataset=='graal_1' or dataset == 'graal_2' or dataset=='garda_1' or dataset=='garda_2'):
tracker_wrapper.A = np.array([int(frame_width/2),int(frame_height/2)])
tracker_wrapper.B = np.array([int(frame_width/5),int(frame_height-1)])
tracker_wrapper.C= np.array([int(4*frame_width/5),frame_height-1])
while frameCount<total_frames:
# grab the current frame and initialize the occupied/unoccupied
# text
frame = cv.imread('%s%s.jpg'%(images_input_path,str(frameCount+1).zfill(5)))
# if the frame could not be grabbed, then we have reached the end
# of the video
if frame is None:
break
if(frameCount<0):
continue
preds = []
if '%s/%s.jpg'%(image_id_prefix,str(frameCount+1).zfill(5)) in detections and tracking_method=='SORT':
for box in detections['%s/%s.jpg'%(image_id_prefix,str(frameCount+1).zfill(5))]:
if(box.conf<min_conf):
continue
temp_pred = box[2:]
temp_pred = np.insert(temp_pred,4,box[1])
preds.append(temp_pred)
start= time.time()
if(tracking_method=='SORT'):
preds = np.asarray(preds)
trackers = mot_tracker.update(preds)
to_display = []
for itrk,tracker in enumerate(trackers):
to_display.append([tracker[4],boat_class,preds[itrk][4],tracker[0],tracker[1],tracker[2],tracker[3]])
else:
if '%s/%s.jpg'%(image_id_prefix,str(frameCount+1).zfill(5)) in detections:
preds = detections['%s/%s.jpg'%(image_id_prefix,str(frameCount+1).zfill(5))]
for det in preds:
det.calc_hog_descriptor(frame)
tracker_wrapper.track(preds,frame,prev_frame)
to_display = tracker_wrapper.get_display_tracks()
#print(acc.mot_events.loc[frameId])
col_points = []#tracker_wrapper.get_collision_points()
if(len(col_points)>0):
col = [0,0,255]
cv.putText(frame,'Collision detected!', (20, 20),cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
else:
col = [0,255,255]
for p in col_points:
cv.circle(frame,(int(p[0]),int(p[1])),8,(0,0,255),1)
i=0
for box in to_display:
# Extract integer corner coordinates of the tracked box for drawing.
xmin = int(box.xmin)
ymin = int(box.ymin)
xmax = int(box.xmax)
ymax = int(box.ymax)
cv.rectangle(frame, (int(xmin), int(ymin)), (int(xmax),int(ymax)), colors[int(box.track_id)%len(colors)], 3)
#cv.rectangle(frame, (int(box.new_box[0] - box.new_box[2]/2), int(box.new_box[1] - box.new_box[3]/2)), (int(box.new_box[0] + box.new_box[2]/2),int(box.new_box[1] + box.new_box[3]/2)), (255,255,255), 2)
p1 = box.center()
p2 = box.center()
p2[0] += box.offset[0]*2
p2[1] += box.offset[1]*2
cv.arrowedLine(frame,(int(p1[0]),int(p1[1])),(int(p2[0]),int(p2[1])),(0,0,255),2)
if(tracking_method=='kalman_center'):
cv.circle(frame,(int(predictions[i][0][0]),int(predictions[i][1][0])),5,(255,0,0),2)
#if(tracking_method=='kalman_corners'):
#cv.circle(frame,(int(predictions[i][0][0]),int(predictions[i][1][0])),5,(255,0,0),2)
#cv.circle(frame,(int(predictions[i][2][0]),int(predictions[i][3][0])),5,(255,0,0),2)
cv.putText(frame,'{:.2f}'.format( box.conf), (xmin, ymin),cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
tracking_res.append({"image_id" : frameCount+1, "category_id" : 1, "bbox" : [float(xmin),float(ymin),float(xmax-xmin),float(ymax-ymin)], "score" : np.minimum(1.0,box.conf),"id":box.track_id})
#f.write("graal_2/%s.jpg,%s,%d,%f,%f,%f,%f,%f\n"%(str(frameCount+1).zfill(5),classes[int(box[1])],box[1],box[2],xmin,ymin,xmax,ymax))
i+=1
times[dataset][detector][tracking_method].append(time.time()-start)
out_tracking.write(frame)
#cv.imwrite('debug_frames/%s.jpg'%str(frameCount+1),frame)
frameCount+=1
prev_frame=frame
flow = None
# cleanup the camera and close any open windows
out_tracking.release()
with open(json_output_path, 'w') as outfile:
json.dump(tracking_res, outfile)
# -
print(np.average(times['garda_2']['yolo']['kalman_corners']))
| Tracking new style.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
from konlpy.tag import Okt
from functools import reduce
from wordcloud import WordCloud
# -
DATA_IN_PATH = './data_in/'
data = pd.read_csv(DATA_IN_PATH + 'ChatBotData.csv', encoding='utf-8')
data
print(data.head())
sentences = list(data['Q']) + list(data['A'])
# +
tokenized_sentences = [s.split() for s in sentences]
sent_len_by_token = [len(t) for t in tokenized_sentences]
sent_len_by_eumjeol = [len(s.replace(' ', '')) for s in sentences]
okt = Okt()
morph_tokenized_sentences = [okt.morphs(s.replace(' ', '')) for s in sentences]
sent_len_by_morph = [len(t) for t in morph_tokenized_sentences]
# -
plt.figure(figsize=(12, 5))
plt.hist(sent_len_by_token, bins=50, range=[0,50], alpha=0.5, color= 'r', label='eojeol')
plt.hist(sent_len_by_morph, bins=50, range=[0,50], alpha=0.5, color='g', label='morph')
plt.hist(sent_len_by_eumjeol, bins=50, range=[0,50], alpha=0.5, color='b', label='eumjeol')
plt.title('Sentence Length Histogram')
plt.xlabel('Sentence Length')
plt.ylabel('Number of Sentences')
plt.figure(figsize=(12, 5))
plt.hist(sent_len_by_token, bins=50, range=[0,50], alpha=0.5, color= 'r', label='eojeol')
plt.hist(sent_len_by_morph, bins=50, range=[0,50], alpha=0.5, color='g', label='morph')
plt.hist(sent_len_by_eumjeol, bins=50, range=[0,50], alpha=0.5, color='b', label='eumjeol')
plt.yscale('log')
plt.title('Sentence Length Histogram by Eojeol Token')
plt.xlabel('Sentence Length')
plt.ylabel('Number of Sentences')
print('Eojeol max length: {}'.format(np.max(sent_len_by_token)))
print('Eojeol min length: {}'.format(np.min(sent_len_by_token)))
print('Eojeol mean length: {:.2f}'.format(np.mean(sent_len_by_token)))
print('Eojeol length std dev: {:.2f}'.format(np.std(sent_len_by_token)))
print('Eojeol median length: {}'.format(np.median(sent_len_by_token)))
print('Eojeol 1st quartile length: {}'.format(np.percentile(sent_len_by_token, 25)))
print('Eojeol 3rd quartile length: {}'.format(np.percentile(sent_len_by_token, 75)))
print('Morpheme max length: {}'.format(np.max(sent_len_by_morph)))
print('Morpheme min length: {}'.format(np.min(sent_len_by_morph)))
print('Morpheme mean length: {:.2f}'.format(np.mean(sent_len_by_morph)))
print('Morpheme length std dev: {:.2f}'.format(np.std(sent_len_by_morph)))
print('Morpheme median length: {}'.format(np.median(sent_len_by_morph)))
print('Morpheme 25th percentile length: {}'.format(np.percentile(sent_len_by_morph, 25)))
print('Morpheme 75th percentile length: {}'.format(np.percentile(sent_len_by_morph, 75)))
print('Eumjeol max length: {}'.format(np.max(sent_len_by_eumjeol)))
print('Eumjeol min length: {}'.format(np.min(sent_len_by_eumjeol)))
print('Eumjeol mean length: {:.2f}'.format(np.mean(sent_len_by_eumjeol)))
print('Eumjeol length std dev: {:.2f}'.format(np.std(sent_len_by_eumjeol)))
print('Eumjeol median length: {}'.format(np.median(sent_len_by_eumjeol)))
print('Eumjeol 25th percentile length: {}'.format(np.percentile(sent_len_by_eumjeol, 25)))
print('Eumjeol 75th percentile length: {}'.format(np.percentile(sent_len_by_eumjeol, 75)))
plt.figure(figsize=(12, 5))
plt.boxplot([sent_len_by_token, sent_len_by_morph, sent_len_by_eumjeol],
labels=['Eojeol', 'Morph', 'Eumjeol'],
showmeans=True)
# +
query_sentences = list(data['Q'])
answer_sentences = list(data['A'])
query_morph_tokenized_sentences = [okt.morphs(s.replace(' ', '')) for s in query_sentences]
query_sent_len_by_morph = [len(t) for t in query_morph_tokenized_sentences]
answer_morph_tokenized_sentences = [okt.morphs(s.replace(' ', '')) for s in answer_sentences]
answer_sent_len_by_morph = [len(t) for t in answer_morph_tokenized_sentences]
# -
plt.figure(figsize=(12, 5))
plt.hist(query_sent_len_by_morph, bins=50, range=[0,50], color='g', label='Query')
plt.hist(answer_sent_len_by_morph, bins=50, range=[0,50], color='r', alpha=0.5, label='Answer')
plt.legend()
plt.title('Query Length Histogram by Morph Token')
plt.xlabel('Query Length')
plt.ylabel('Number of Queries')
plt.figure(figsize=(12, 5))
plt.hist(query_sent_len_by_morph, bins=50, range=[0,50], color='g', label='Query')
plt.hist(answer_sent_len_by_morph, bins=50, range=[0,50], color='r', alpha=0.5, label='Answer')
plt.legend()
plt.yscale('log', nonposy='clip')
plt.title('Query Length Log Histogram by Morph Token')
plt.xlabel('Query Length')
plt.ylabel('Number of Queries')
print('Query morpheme max length: {}'.format(np.max(query_sent_len_by_morph)))
print('Query morpheme min length: {}'.format(np.min(query_sent_len_by_morph)))
print('Query morpheme mean length: {:.2f}'.format(np.mean(query_sent_len_by_morph)))
print('Query morpheme length std dev: {:.2f}'.format(np.std(query_sent_len_by_morph)))
print('Query morpheme median length: {}'.format(np.median(query_sent_len_by_morph)))
print('Query morpheme 25th percentile length: {}'.format(np.percentile(query_sent_len_by_morph, 25)))
print('Query morpheme 75th percentile length: {}'.format(np.percentile(query_sent_len_by_morph, 75)))
print('Answer morpheme max length: {}'.format(np.max(answer_sent_len_by_morph)))
print('Answer morpheme min length: {}'.format(np.min(answer_sent_len_by_morph)))
print('Answer morpheme mean length: {:.2f}'.format(np.mean(answer_sent_len_by_morph)))
print('Answer morpheme length std dev: {:.2f}'.format(np.std(answer_sent_len_by_morph)))
print('Answer morpheme median length: {}'.format(np.median(answer_sent_len_by_morph)))
print('Answer morpheme 25th percentile length: {}'.format(np.percentile(answer_sent_len_by_morph, 25)))
print('Answer morpheme 75th percentile length: {}'.format(np.percentile(answer_sent_len_by_morph, 75)))
okt.pos('오늘밤은유난히덥구나')
# +
query_NVA_token_sentences = list()
answer_NVA_token_sentences = list()
for s in query_sentences:
for token, tag in okt.pos(s.replace(' ', '')):
if tag == 'Noun' or tag == 'Verb' or tag == 'Adjective':
query_NVA_token_sentences.append(token)
for s in answer_sentences:
for token, tag in okt.pos(s.replace(' ', '')):
if tag == 'Noun' or tag == 'Verb' or tag == 'Adjective':
answer_NVA_token_sentences.append(token)
query_NVA_token_sentences = ' '.join(query_NVA_token_sentences)
answer_NVA_token_sentences = ' '.join(answer_NVA_token_sentences)
# +
query_wordcloud = WordCloud(font_path= DATA_IN_PATH + 'NanumGothic.ttf').generate(query_NVA_token_sentences)
plt.imshow(query_wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
# +
answer_wordcloud = WordCloud(font_path= DATA_IN_PATH + 'NanumGothic.ttf').generate(answer_NVA_token_sentences)
plt.imshow(answer_wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
| 6.CHATBOT/6.2.EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import mnist_loader
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
import network2
import matplotlib.pyplot as plt
import numpy as np
def plot_accuracy_epochs(name,ev_accuracy,train_accuracy): #plots ev and training accuracy against epoch no
x=range(1,len(train_accuracy)+1)
ev_accuracy=np.divide(ev_accuracy,10000.0)
train_accuracy=np.divide(train_accuracy,50000.0)
savename='plots/'+name+'_accuracy.png'
plt.figure()
ev,=plt.plot(x,ev_accuracy,label='ev')
train,=plt.plot(x,train_accuracy,label='train')
plt.xlabel('epoch number')
plt.ylabel('accuracy')
plt.title(name+ ' accuracy')
plt.legend([ev,train],['testing dataset','training dataset'])
plt.savefig(savename)
def plot_cost_epochs(name,ev_cost,train_cost): #plots ev and training cost against epoch no
x=range(1,len(train_cost)+1)
savename='plots/'+name+'_cost.png'
plt.figure()
ev,=plt.plot(x,ev_cost)
train,=plt.plot(x,train_cost)
plt.xlabel('epoch number')
plt.ylabel('cost')
plt.title(name+' cost')
plt.legend([ev,train],['testing dataset','training dataset'])
plt.savefig(savename)
net = network2.Network([784, 30, 10])
#default parameters are epochs=10, mini batch size=10, learning rate=3, number of neurons=30
#each parameter is varied keeping the other parameters as defaults.
mini_batch_size_def=10
eta_def=3.0
lmbda_def=0.0
epochs_def=10
#default
ev_cost,ev_accuracy,train_cost,train_accuracy=net.SGD(training_data, epochs=epochs_def, mini_batch_size=mini_batch_size_def, eta=eta_def,lmbda=lmbda_def,evaluation_data=test_data,monitor_evaluation_cost=True,
monitor_evaluation_accuracy=True,
monitor_training_cost=True,
monitor_training_accuracy=True)
plot_accuracy_epochs('default',ev_accuracy,train_accuracy)
plot_cost_epochs('default',ev_cost,train_cost)
# +
#variation of training parameter eta
net = network2.Network([784, 30, 10])
mini_batch_size_def=10
lmbda_def=0.0
epochs_def=10
eta=[0.01,0.1,1,10,100]
labels=[None]*len(eta)
for i in range(0,len(eta)):
labels[i]='eta='+str(eta[i])
plt.figure()
fig, (ax1,ax2)=plt.subplots(1,2)
fig.tight_layout()
#fig.suptitle('Training parameter variation')
ax1.set_title('accuracy')
ax1.set(xlabel='epoch',ylabel='accuracy')
ax2.set_title('cost')
ax2.set(xlabel='epoch',ylabel='cost')
x=range(1,len(train_cost)+1)
for i in range(0,len(eta)):
ev_cost,ev_accuracy,train_cost,train_accuracy=net.SGD(training_data, epochs=epochs_def, mini_batch_size=mini_batch_size_def, eta=eta[i],lmbda=lmbda_def,evaluation_data=test_data,monitor_evaluation_cost=True,
monitor_evaluation_accuracy=True,
monitor_training_cost=True,
monitor_training_accuracy=True)
ax1.plot(x,ev_accuracy,label=labels[i])
ax2.plot(x,ev_cost)
plot_accuracy_epochs('training parameter='+str(eta[i]),ev_accuracy,train_accuracy)
plot_cost_epochs('training parameter='+str(eta[i]),ev_cost,train_cost)
fig.legend()
fig.savefig('plots/eta.png')
# -
#variation of mini batch
net = network2.Network([784, 30, 10])
mini_batch_size=[5,10,15,20,25,30]
lmbda_def=0.0
epochs_def=10
eta_def=3.0
labels=[None]*len(mini_batch_size)
for i in range(0,len(mini_batch_size)):
labels[i]='mini batch size='+str(mini_batch_size[i])
plt.figure()
fig, (ax1,ax2)=plt.subplots(1,2)
fig.tight_layout()
#fig.suptitle('Training parameter variation')
ax1.set_title('accuracy')
ax1.set(xlabel='epoch',ylabel='accuracy')
ax2.set_title('cost')
ax2.set(xlabel='epoch',ylabel='cost')
x=range(1,len(train_cost)+1)
for i in range(0,len(mini_batch_size)):
ev_cost,ev_accuracy,train_cost,train_accuracy=net.SGD(training_data, epochs=epochs_def, mini_batch_size=mini_batch_size[i], eta=eta_def,lmbda=lmbda_def,evaluation_data=test_data,monitor_evaluation_cost=True,
monitor_evaluation_accuracy=True,
monitor_training_cost=True,
monitor_training_accuracy=True)
ax1.plot(x,ev_accuracy,label=labels[i])
ax2.plot(x,ev_cost)
plot_accuracy_epochs('mini batch size='+str(mini_batch_size[i]),ev_accuracy,train_accuracy)
plot_cost_epochs('mini batch size='+str(mini_batch_size[i]),ev_cost,train_cost)
fig.legend()
fig.savefig('plots/mini_batch.png')
#variation of number of neurons
neuron_size=[5,10,15,20,25,30]
mini_batch_size_def=10
lmbda_def=0.0
epochs_def=10
eta_def=3.0
labels=[None]*len(neuron_size)
for i in range(0,len(neuron_size)):
labels[i]='no of neurons='+str(neuron_size[i])
plt.figure()
fig, (ax1,ax2)=plt.subplots(1,2)
fig.tight_layout()
#fig.suptitle('Training parameter variation')
ax1.set_title('accuracy')
ax1.set(xlabel='epoch',ylabel='accuracy')
ax2.set_title('cost')
ax2.set(xlabel='epoch',ylabel='cost')
x=range(1,len(train_cost)+1)
for i in range(0,len(neuron_size)):
net = network2.Network([784, neuron_size[i], 10])
ev_cost,ev_accuracy,train_cost,train_accuracy=net.SGD(training_data, epochs=10, mini_batch_size=mini_batch_size_def, eta=eta_def,lmbda=lmbda_def,evaluation_data=test_data,monitor_evaluation_cost=True,
monitor_evaluation_accuracy=True,
monitor_training_cost=True,
monitor_training_accuracy=True)
ax1.plot(x,ev_accuracy,label=labels[i])
ax2.plot(x,ev_cost)
plot_accuracy_epochs('neuron size='+str(neuron_size[i]),ev_accuracy,train_accuracy)
plot_cost_epochs('neuron size='+str(neuron_size[i]),ev_cost,train_cost)
fig.legend()
fig.savefig('plots/neuron_size.png')
| src/Evaluate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week 2 Recap
# ## [Lecture 2.1](2.1_time_series_data/time_series_data.ipynb)
# ---
# A **Time Series** is a sequence of data points indexed by _time_. In Pandas, there are three main time series classes: `Timestamp`, `Timedelta`, and `Period`, and three matching time indexes, respectively: `DatetimeIndex`, `TimedeltaIndex`, and `PeriodIndex`.
# Pandas can parse most datetime formats with the **`pd.to_datetime()`** constructor, including named arguments, native python datetimes, or natural language strings.
# This parsing is also carried inside **access methods** like `[]`, `.loc[]`, or `.iloc[]`.
#
# e.g: `ts['3rd of January 2000':'2000/01/05']`
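The parsing behaviour above can be sketched with a small made-up series (the dates and values are illustrative only):

```python
import numpy as np
import pandas as pd

# A minimal daily time series with a DatetimeIndex (toy values).
ts = pd.Series(np.arange(7), index=pd.date_range('2000-01-01', periods=7, freq='D'))

# pd.to_datetime() parses many formats, including natural-language strings.
stamp = pd.to_datetime('3rd of January 2000')  # Timestamp('2000-01-03')

# The same flexible parsing is carried inside access methods;
# note that label-based slicing is endpoint-inclusive.
window = ts['3rd of January 2000':'2000/01/05']
```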
# The **`.pivot()`** function is used to create a new index or new columns from `DataFrame` cell values. This helps unfold datasets in a _stacked_ format.
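As a small, self-invented example of unfolding a stacked dataset:

```python
import pandas as pd

# A "stacked" dataset: one row per (date, city) observation (toy values).
stacked = pd.DataFrame({
    'date': ['2000-01-01', '2000-01-01', '2000-01-02', '2000-01-02'],
    'city': ['Paris', 'Lyon', 'Paris', 'Lyon'],
    'temp': [3.0, 5.0, 4.0, 6.0],
})

# pivot() turns cell values into a new index and new columns:
# dates become the row index, cities become the columns.
wide = stacked.pivot(index='date', columns='city', values='temp')
```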
# _Periodicity_ in a time-indexed dataset can be visualised with **seasonality plots**. These superpose or aggregate data over a recurring period (e.g a year), to highlight repeating patterns.
# The **`.shift()`** function can shift time series values or their index by a certain duration. This is useful to clean erroneous timestamps, or harmonise data across timezones.
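Both kinds of shift, on a toy series:

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=pd.date_range('2000-01-01', periods=3, freq='D'))

# Shift the *values* one step forward; the index stays put and the
# first position becomes NaN.
shifted_values = s.shift(1)

# Shift the *index* by a duration instead; values stay attached. This is
# the variant used to repair timestamps or harmonise timezones.
shifted_index = s.shift(1, freq='D')
```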
# It is often useful to carry out aggregate calculations over _time windows_, and incrementally step through the entire time series. These **rolling statistics** are implemented with the **`.rolling()`** method. Just like group-bys, it is followed by an aggregation function, e.g: `.mean()`, or `.sum()`.
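For instance, a 3-sample rolling mean (toy data):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5], index=pd.date_range('2000-01-01', periods=5, freq='D'))

# Like a group-by, .rolling() is followed by an aggregation; the first
# two windows are incomplete, so their result is NaN.
rolled = s.rolling(window=3).mean()
```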
# Sometimes time indices are not regularly spaced, which can create problems for downstream tasks. The **`.resample()`** method aggregates values across consistent & consecutive time windows.
# **`.interpolate()`** can repair missing data by guessing likely values, e.g with a linear function. Use with caution!
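Both methods on an irregular, made-up series:

```python
import pandas as pd

# Irregularly spaced observations: January 3rd is missing.
idx = pd.to_datetime(['2000-01-01', '2000-01-02', '2000-01-04'])
s = pd.Series([1.0, 2.0, 4.0], index=idx)

# Resampling to a regular daily grid introduces a NaN for the gap...
daily = s.resample('D').mean()

# ...which interpolate() fills, here with a linear guess.
filled = daily.interpolate()
```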
# ## [Lecture 2.2](2.2_text_and_image_data/text_and_image_data.ipynb)
# ---
# **Text data** is hard to summarise and significant preprocessing is required to extract insights from it.
#
# Contrary to tabular data, **text data munging** is typically done with _native python methods_. Common cleaning operations include `.join()`, `.split()`, `.strip()`, or regex pattern matching with the `re` module.
#
# Strings of text are typically _segmented_ into semantic units than can then be aggregated or analysed, e.g words or characters. This process is called **tokenization**.
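A minimal sketch of cleaning and naive tokenization with the standard library only (the sentence is made up):

```python
import re

text = "  Time flies like an arrow; fruit flies like a banana.  "

# Common cleaning: trim whitespace and normalise case.
clean = text.strip().lower()

# Naive whitespace tokenization keeps punctuation glued to words...
word_tokens = clean.split()

# ...whereas a regex can segment into purely alphabetic tokens.
regex_tokens = re.findall(r"[a-z]+", clean)
```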
#
# Manipulations that require grammatical or semantic knowledge require advanced Natural Language Processing (NLP) techniques. The **spacy** library offers comprehensive out-of-the-box text analysis, including state-of-the-art _tokenization_, _dependency parsing_, and _entity tagging_. This linguistic metadata is stored on a `doc = nlp(string)` object, with `nlp` the downloaded NLP model.
#
#
# Large **image datasets** can be hard to visualise in their entirety, and can require heavy computation to process.
#
# Images are just pixel **tensors** (N-dimensional arrays). They can therefore be stored in NumPy `ndarrays` and manipulated with standard access methods like `[]`.
#
# **Pillow** offers more comprehensive out-of-the-box image operations like _io_, _transposes_, and _resizes_. These are useful for the preprocessing of image data before downstream computer vision tasks.
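The tensor view can be illustrated with NumPy alone, using a made-up 4×4 RGB image:

```python
import numpy as np

# A tiny 4x4 RGB "image": shape (height, width, channels), uint8 pixels.
img = np.zeros((4, 4, 3), dtype=np.uint8)

# Standard indexing paints a red 2x2 square in the top-left corner.
img[:2, :2, 0] = 255

# Crops and channel extraction are just slices of the tensor.
crop = img[:2, :2]
red_channel = img[..., 0]
```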
# ## [Lecture 2.3](2.3_data_visualization/data_visualization.ipynb)
# ---
# Effective data visualisation requires:
# * **data literacy** to understand the dataset in detail
# * **visual literacy** to communicate this detailed understanding
#
# The language of data viz is **data encodings**, e.g lengths, colors, alignment, shapes. They are used to _guide attention_, _transmit quantitative information_, and _enable mental calculations_ in the reader's brain.
#
# Tools to transform boring graphs into impactful **data stories** include:
# * _minimalism & high data-ink ratios_
# * _apt color schemes_
# * _accurate representation of variation in the data_
# * _descriptive text_
# * _acknowledged conventions_
# * _asking questions before finding answers_
#
# Data viz typically takes two roles in the data science workflow:
# * **data exploration**: quick iterative graphs to better understand a dataset
# * **analysis communication**: polished plots to convey final results
# **Pandas** integrates matplotlib visualisations directly into `DataFrame`s with `df.plot`. However, directly using the matplotlib module gives more control over the appearance of graphs.
#
# The **object-oriented api** provides classes to design the virtual plot space, including `figure`, `axes`, `lines`, `tickers`, etc.
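A minimal object-oriented sketch (made-up data; the Agg backend is selected here only so the example runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Explicit Figure and Axes objects instead of the implicit pyplot state.
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot([1, 2, 3], [2, 4, 1], label='demo')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.tight_layout()
```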
#
# All the available plotting methods can be found [here](https://matplotlib.org/stable/api/axes_api.html#plotting).
#
# The most appropriate type of graph depends on the data and the insights being conveyed ([data-to-viz flowchart](https://www.data-to-viz.com/)).
| week_2/week_2_recap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Multi-Class Classification 1D CNN
#
# 1D CNN with multi-class classification.
#
# We are going to see if we can create a 1D CNN that can classify an individual as a healthy control or as a subject diagnosed with one of 3 neuropsychiatric disorders, based on the shape (LB spectrum) of their white matter tracts.
#
# The data set we will use contains individuals that have been diagnosed with bipolar disorder, ADHD, and schizophrenia.
# ### Import the needed libraries
# +
#to read in the data
import pickle
#for plotting, numbers etc.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
#for splitting the data
from sklearn.model_selection import train_test_split
#keras functions
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D
from keras.utils import np_utils, plot_model, to_categorical
from keras.optimizers import Adam, RMSprop
from keras.regularizers import l1, l2, l1_l2
#normalize the data
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegressionCV
# -
# ### Import the data
# eigenvalue dictionary with entry for each tract, 600 evs per tract
allgroups_ev_dict = pickle.load(open("allgroups_ev_dict.p",'rb'))
# list of tracts we want to use
all_tracts = pickle.load(open('all_tracts.p','rb'))
# subject list
allsubjs = pickle.load(open('allsubjs.p','rb'))
# list of subject group 0 = control, 1 = adhd, 2 = bipolar, 3 = schizophrenia
groupid = pickle.load(open('groupid.p','rb'))
# Since we examined the data in the last notebook we will not do that here.
#
# As a reminder: We have 158 subjects in total, with around 40 samples (individuals) in each group. The original control group had over 100 subjects; we randomly chose 40 to keep the group sizes similar. 0 is controls - 40, 1 is ADHD - 37, 2 is bipolar - 43, and 3 is schizophrenia - 38.
# ## Preprocess the data
#
# The eigenvalue data is already in a vector format, so we do not need to vectorize it. However, we will need to combine the vectors of all the tracts so that we have a single vector per subject.
#
# We also need to normalize the data so that each set of eigenvalues has a mean of 0 and a standard deviation of 1. We will write a function to do this using sklearn's StandardScaler function.
# ### Normalize the data
def scale_ev_dict(ev_dict):
scaled_dict = {}
for tract in ev_dict.keys():
scaler = StandardScaler()
scaled_dict[tract] = scaler.fit_transform(ev_dict[tract])
return scaled_dict
# normalize all of the tracts so that each ev is centered on 0.
allgroups_ev_dict_scaled = scale_ev_dict(allgroups_ev_dict)
# ### Reorganize the data
#
# Currently the data is a dictionary of 2D matrices; we want to reorganize it into a single 2D matrix with the shape (158, n * 20), where 158 is the number of subjects and n is the number of eigenvalues we are using. It is likely that 600 eigenvalues are far more than we need, but we do not know how many are optimal. We will write a function to do this reorganization so we can easily try several eigenvalue counts if necessary. We will start with just 200 eigenvalues for all tracts.
# change the organization to be one vector per subject with all evs for all tracts
def reorganize_spectrums(ev_dict_scaled, numev, HCP_subj_list=allsubjs, tractstouse=all_tracts):
# create an empty numpy array of the shape we want
# numev is the number of eigenvalues we want per tract
allsubjs_alltracts_scaled = np.zeros([len(HCP_subj_list), numev*len(tractstouse)])
for i in range(len(tractstouse)):
allsubjs_alltracts_scaled[:, i*numev:i*numev+numev] = ev_dict_scaled[tractstouse[i]][:, 0:numev]
return allsubjs_alltracts_scaled
allgroups_ev_dict_scaled_alltracts_2d = reorganize_spectrums(allgroups_ev_dict_scaled, 200)
allgroups_ev_dict_scaled_alltracts_2d.shape
# ### Reshape the data for CNN layers
#
# We need to change the shape again to be a 3D array instead of a 2D array. We need each subject to have their own 2D array. The final shape will be (num subjects, num total eigenvalues, 1).
#
# If this were a sequence it would be (num samples, num timestamps, num features per timestamp). I find it helps with understanding the shape if you think about the data in terms of timestamps, even though it is not sequential data.
#
# Another way we could format the data is by splitting each tract into its own feature, rather than combining them all into one. In that case, the shape would be (num subjects, num eigenvalues per tract, num tracts). Each tract is a feature, and each eigenvalue is a timestamp.
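The two layouts can be sketched with NumPy; the sizes (20 tracts, 200 eigenvalues per tract) are this notebook's choices and purely illustrative here:

```python
import numpy as np

n_subj, n_tracts, n_ev = 158, 20, 200

# Layout used below: one long "timestamp" axis, a single feature channel.
flat = np.zeros((n_subj, n_tracts * n_ev, 1))

# Alternative layout: starting from the (subjects, tracts*ev) matrix laid
# out tract-by-tract, split the feature axis and move tracts last, giving
# (subjects, eigenvalues per tract, tracts) -- one channel per tract.
matrix = np.zeros((n_subj, n_tracts * n_ev))
per_tract = matrix.reshape(n_subj, n_tracts, n_ev).transpose(0, 2, 1)
```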
allgroups_ev_dict_scaled_alltracts = allgroups_ev_dict_scaled_alltracts_2d.reshape(allgroups_ev_dict_scaled_alltracts_2d.shape[0], allgroups_ev_dict_scaled_alltracts_2d.shape[1], 1)
print allgroups_ev_dict_scaled_alltracts.shape
# ### One-hot encode the labels
#
# Right now the labels are integers; we need to change them so they are one-hot encoded. This time we will use Keras's built-in function.
groupid_one = to_categorical(groupid)
# ### Check datatype
groupid_one = groupid_one.astype('float32')
allgroups_ev_dict_scaled_alltracts = allgroups_ev_dict_scaled_alltracts.astype('float32')
# ### Split the dataset
#
# This is a very small dataset, so it will likely not do well if we split the data into three training/validation/testing sets. Since the dataset is so small, this is just an initial test, and we are not even sure a neural network will work, we are going to split the data into just two sets: a training set and a testing set.
# +
X = allgroups_ev_dict_scaled_alltracts
Y = groupid_one
#first split the training/validation data from the testing data
trainX, testX, trainY, testY = train_test_split(X, Y, train_size = .8, test_size = .2, random_state=0)
print len(trainX)
print len(testX)
# -
#print the sum of each column in the one hot encoded labels
np.sum(testY, axis=0)
# It is important to note that the train_test_split function will shuffle the data for you. In the testing dataset we have a fairly even distribution of classes.
#
# We now have a training set and a testing set of data and we are ready to try classification with a 1D CNN. Given that we have 4 categories of labels and roughly even amounts of samples for each label, by chance we should get ~25% correct. Our goal is to get a classification accuracy higher than chance.
#
# We learned with the MLP that adding weights to the categories helped with classification. Let's compute the weights now.
# ### Compute the class weights
from sklearn.utils import class_weight
y_ints = [y.argmax() for y in trainY]
class_weights = class_weight.compute_class_weight('balanced',
np.unique(y_ints),
y_ints)
print(class_weights)
# ## Set up the 1D CNN
# We are going to try a basic 1D CNN. This will be the same model architecture we started with for the binary 1D CNN, however, the last Dense layer will have 4 nodes (1 per category) and use the softmax activation function. We will also use the categorical_crossentropy loss function.
def plot_history(network_history):
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(network_history.history['loss'])
plt.plot(network_history.history['val_loss'])
plt.legend(['Training','Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.plot(network_history.history['acc'])
plt.plot(network_history.history['val_acc'])
plt.legend(['Training','Validation'])
plt.show()
# +
model = Sequential()
model.add(Conv1D(64, 3, activation='relu', input_shape=(trainX.shape[1], 1)))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 3, activation='relu'))
model.add(Conv1D(64, 3, activation='relu'))
model.add(GlobalAveragePooling1D())
model.add(Dropout(.5))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(trainX, trainY, epochs=20, batch_size=10, validation_data=(testX, testY), class_weight=class_weights)
plot_history(history)
# -
# Ok, right away we are not doing much better than chance. Let's try changing some of the parameters and removing the global pooling layer as that has helped in the past.
# +
model = Sequential()
model.add(Conv1D(100, 3, activation='relu', input_shape=(trainX.shape[1], 1)))
model.add(Conv1D(100, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(100, 3, activation='relu'))
model.add(Conv1D(100, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Flatten())
# model.add(GlobalAveragePooling1D())
model.add(Dropout(.5))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(trainX, trainY, epochs=20, batch_size=10, validation_data=(testX, testY), class_weight=class_weights)
plot_history(history)
# -
# Ok, we reached 37% accuracy, but the model is very overfit. Let's add in regularization and keep trying other parameters.
# +
# hidden_units = 64
# kernel_size = 3
# dropout = .5
# optim = 'adam'
# epochs = 20
# batch = 10
# reg=l1(0.01)
def make_model(hidden_units, kernel_size, dropout, optim, epochs, batch, reg):
model = Sequential()
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', input_shape=(trainX.shape[1], 1), kernel_regularizer=reg))
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', kernel_regularizer=reg))
model.add(MaxPooling1D(kernel_size))
model.add(Dropout(dropout))
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', kernel_regularizer=reg))
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', kernel_regularizer=reg))
model.add(MaxPooling1D(kernel_size))
model.add(Flatten())
model.add(Dropout(dropout))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer=optim, loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(trainX, trainY, epochs=epochs, batch_size=batch, validation_data=(testX, testY), class_weight=class_weights)
plot_history(history)
return model
# +
hidden_units = 100
kernel_size = 3
dropout = .3
optim = RMSprop(lr=0.001)
epochs = 10
batch = 1
reg=l2(0.001)
model = make_model(hidden_units, kernel_size, dropout, optim, epochs, batch, reg)
# -
# After trying a few parameter combinations, we reached 43% accuracy, however the model is still overfitting. Let's take a look at how the data is being represented by the model.
# +
pred = model.predict(allgroups_ev_dict_scaled_alltracts)
pred_class = []
for i in range(len(pred)):
pred_class.append(np.argmax(pred[i]))
pca = PCA(n_components=2)
pts = pca.fit_transform(pred)
plt.scatter(pts[:,0], pts[:,1], c = groupid)
plt.colorbar()
plt.show()
plt.scatter(pts[:,0], pts[:,1], c = pred_class)
plt.colorbar()
plt.show()
# -
# It is creating clear boundaries for the four groups, but the boundaries are not very accurate.
plt.hist(pred_class)
# The distribution of the predictions of all of the subjects is well spread out, however, which is good. The model is not just predicting one or two classes.
# Let's add back in the global pooling layer; with that layer the model was much less overfit. Let's also save the best model during training and take a look at how it represents the data.
from keras.callbacks import ModelCheckpoint
# +
best_model = ModelCheckpoint('/home/pestillilab/lindsey/DeepLearningTutorial_LBspectrum/bestmodel_MC_1dcnn1.h5', monitor='val_acc', save_best_only=True)
model = Sequential()
model.add(Conv1D(100, 3, activation='relu', input_shape=(trainX.shape[1], 1)))
model.add(Conv1D(100, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(100, 3, activation='relu'))
model.add(Conv1D(100, 3, activation='relu'))
# model.add(MaxPooling1D(3))
# model.add(Flatten())
model.add(GlobalAveragePooling1D())
model.add(Dropout(.5))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(trainX, trainY, epochs=20, batch_size=10, validation_data=(testX, testY), callbacks=[best_model], class_weight=class_weights)
plot_history(history)
# -
# load the best model we saved during training using the callback function
from keras.models import load_model
best_model1 = load_model('bestmodel_MC_1dcnn1.h5')
score = best_model1.evaluate(testX, testY)
print score
# +
pred = best_model1.predict(allgroups_ev_dict_scaled_alltracts)
pred_class = []
for i in range(len(pred)):
pred_class.append(np.argmax(pred[i]))
pca = PCA(n_components=2)
pts = pca.fit_transform(pred)
plt.scatter(pts[:,0], pts[:,1], c = groupid)
plt.colorbar()
plt.show()
plt.scatter(pts[:,0], pts[:,1], c = pred_class)
plt.colorbar()
plt.show()
plt.hist(pred_class)
# -
# While the global layer prevents the model from becoming too overfit, the model only learns/predicts two of the categories, even with the class weights.
#
# Let's go back to the model without the global pooling.
# +
hidden_units = 100
kernel_size = 3
dropout = .3
optim = RMSprop(lr=0.001)
epochs = 10
batch = 1
reg=l2(0.001)
best_model = ModelCheckpoint('/home/pestillilab/lindsey/DeepLearningTutorial_LBspectrum/bestmodel_MC_1dcnn2.h5', monitor='val_acc', save_best_only=True)
model = Sequential()
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', input_shape=(trainX.shape[1], 1), kernel_regularizer=reg))
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', kernel_regularizer=reg))
model.add(MaxPooling1D(kernel_size))
model.add(Dropout(dropout))
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', kernel_regularizer=reg))
model.add(Conv1D(hidden_units, kernel_size, activation='tanh', kernel_regularizer=reg))
model.add(MaxPooling1D(kernel_size))
model.add(Flatten())
model.add(Dropout(dropout))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer=optim, loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(trainX, trainY, epochs=epochs, batch_size=batch, validation_data=(testX, testY), class_weight=class_weights, callbacks=[best_model])
plot_history(history)
# -
best_model2 = load_model('bestmodel_MC_1dcnn2.h5')
score = best_model2.evaluate(testX, testY)
print score
# +
pred = best_model2.predict(allgroups_ev_dict_scaled_alltracts)
pred_class = []
for i in range(len(pred)):
pred_class.append(np.argmax(pred[i]))
pca = PCA(n_components=2)
pts = pca.fit_transform(pred)
plt.scatter(pts[:,0], pts[:,1], c = groupid)
plt.colorbar()
plt.show()
plt.scatter(pts[:,0], pts[:,1], c = pred_class)
plt.colorbar()
plt.show()
plt.hist(pred_class)
# -
# We were able to reach 40% correct and we can see that the prediction class plot is starting to look more like the actual class plot.
#
# Let's look at the predictions on the test data and see the distribution there.
# +
testpred = best_model2.predict(testX)
predicted_ints = [y.argmax() for y in testpred]
print predicted_ints
testy_ints = [y.argmax() for y in testY]
print testy_ints
correct = [0,0,0,0]
for i in range(len(testY)):
if testy_ints[i] == predicted_ints[i]:
correct[testy_ints[i]] += 1
print correct
print np.sum(testY, axis=0)
# -
# We can see that the model correctly predicts 4/7 controls, 1/9 ADHD subjects, 7/8 bipolar subjects, and 1/8 schizophrenic subjects. There is definitely a bias towards the control and bipolar subjects. It seems that the class weighting does not do as well with CNNs as it does with the MLP.
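# The per-class tally above generalizes to a full confusion matrix. A minimal pure-Python sketch (the label lists below are illustrative stand-ins, not the actual `testy_ints`/`predicted_ints` from this run; the class numbering 0=control, 1=ADHD, 2=bipolar, 3=schizophrenia is an assumption):

```python
def confusion_matrix(true_labels, pred_labels, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(true_labels, pred_labels):
        cm[t][p] += 1
    return cm

# Illustrative labels only (0=control, 1=ADHD, 2=bipolar, 3=schizophrenia)
testy_ints = [0, 0, 1, 2, 2, 3, 3]
predicted_ints = [0, 2, 1, 2, 2, 0, 3]
cm = confusion_matrix(testy_ints, predicted_ints, 4)
for row in cm:
    print(row)
# The diagonal holds the per-class correct counts, matching the
# `correct` list computed in the cell above.
```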
# ## Final Comments
#
# The goal was to build a multi-class 1D CNN that could classify individuals as a control or one of three neuropsychiatric disorders based on the shape of their white matter tracts (via the LB spectrum).
#
# This goal was met. I was able to build a 1D CNN that classified higher than chance overall. It did not classify higher than chance for all categories, however, and did very poorly with two of the classes.
#
# The final model was best at predicting new bipolar subjects, followed by control subjects. The model did the worst with ADHD and schizophrenic subjects. Even with the class weights it seems that the CNN is affected by the small imbalance in the dataset.
#
# The secondary goal was to improve upon (if possible) the classification accuracy of a basic 'non-deep' machine learning algorithm.
#
# This goal was also met. The final 'best' model achieved 40% classification accuracy on the testing data, while the logistic regression classifier had 22%, essentially chance. This was 18 percentage points higher than the classification accuracy of the logistic regression classifier.
#
| Multi-Class Classification 1D CNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.9 64-bit
# name: python3
# ---
# # Basics of an object detection program using OpenVINO
#
# You will learn the basics of an object detection program using OpenVINO through this exercise.
#
# Here, we'll go through a simple object detection program using an SSD (Single Shot multi-box Detection) model and learn how it works.
# ### Installing required Python packages
# We'll use `matplotlib` to display image data in this exercise. Let's install `matplotlib`.
# Linux
# !pip3 install matplotlib
# Windows
# !pip install matplotlib
# ### Preparing an input image and label text data files
# First, let's prepare the input image file and the class label text file. Those files are in the OpenVINO install directory. We'll simply copy them to the current working directory.
# Linux
# !cp $INTEL_OPENVINO_DIR/deployment_tools/demo/car_1.bmp .
# !cp $INTEL_OPENVINO_DIR/deployment_tools/open_model_zoo/data/dataset_classes/voc_20cl_bkgr.txt .
# Windows
# !copy "%INTEL_OPENVINO_DIR%\deployment_tools\demo\car_1.bmp" .
# !copy "%INTEL_OPENVINO_DIR%\deployment_tools\open_model_zoo\data\dataset_classes\voc_20cl_bkgr.txt" .
# Show the image file and check the picture.
# **Note:** `IPython.display.Image` doesn't support `.bmp` format, so we use `OpenCV` and `matplotlib` this time.
# %matplotlib inline
import cv2
import matplotlib.pyplot as plt
img=cv2.imread('car_1.bmp')
img=cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
# ### Preparing a DL model for inferencing
# Download a DL model for object detection using `Model downloader` and convert it into an OpenVINO IR model with `Model converter`.
# We'll use `mobilenet-ssd` model for this practice.
# Linux
# !python3 $INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/downloader.py --name mobilenet-ssd
# !python3 $INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/converter.py --name mobilenet-ssd --precisions FP16
# !ls public/mobilenet-ssd/FP16 -l
# Windows
# !python "%INTEL_OPENVINO_DIR%\deployment_tools\tools\model_downloader\downloader.py" --name mobilenet-ssd
# !python "%INTEL_OPENVINO_DIR%\deployment_tools\tools\model_downloader\converter.py" --name mobilenet-ssd --precisions FP16
# !dir public\mobilenet-ssd\FP16
# ----
# The Python inferencing code starts from here.
# ### Importing required Python modules
# First, let's import the required Python modules.
import cv2
import numpy as np
from openvino.inference_engine import IECore
# ### Reading class label text file
# +
label = [ s.replace('\n', '') for s in open('voc_20cl_bkgr.txt').readlines() ]
print(len(label), 'labels read') # Show the number of labels
print(label[:20]) # Show first 20 labels
# -
# ### Creating an Inference Engine core object and doing some preparation
# - Create an Inference Engine core object
# - Read the IR model data
# - Extract information from the input and output blobs (= buffers)
# +
# Create an Inference Engine core object
ie = IECore()
# Read an IR model data to memory
model = './public/mobilenet-ssd/FP16/mobilenet-ssd'
net = ie.read_network(model=model+'.xml', weights=model+'.bin')
# Obtain the name of the input and output blob, and input blob shape
input_blob_name = list(net.input_info.keys())[0]
output_blob_name = list(net.outputs.keys())[0]
batch,channel,height,width = net.input_info[input_blob_name].tensor_desc.dims
# Show model input and output information
print(input_blob_name, batch, channel, height, width)
print(output_blob_name, net.outputs[output_blob_name].shape)
# -
# ### Loading model data to the IE core object
# Load the model data to the IE core object. The `CPU` is specified as the processor to use for the inferencing job.
exec_net = ie.load_network(network=net, device_name='CPU', num_requests=1)
# ### Reading and manipulating the input image
# Read the input image file, then resize and transform it to fit the input blob of the DL model using OpenCV.
# +
print('input blob: name="{}", N={}, C={}, H={}, W={}'.format(input_blob_name, batch, channel, height, width))
img = cv2.imread('car_1.bmp')
in_img = cv2.resize(img, (width,height))
in_img = in_img.transpose((2, 0, 1))
in_img = in_img.reshape((1, channel, height, width))
print(in_img.shape)
# -
# ### Running inference
# The `infer()` API is a blocking function; control does not return until the inferencing task is completed.
# Input data can be passed with a dictionary object in `{input_blob_name:input_data}` format.
#
# **IMPORTANT:** As you may have already noticed, the Python code up to this point is almost identical to the image classification code in the previous exercise (the model, label, and input image file names are the only differences). An OpenVINO application is very simple: most of the code is common across different models, and only the result parsing and processing code is specialized for the model.
# +
res = exec_net.infer(inputs={input_blob_name: in_img})
print(res[output_blob_name].shape)
# -
# ### Displaying the inference result
# The `mobilenet-ssd` model outputs 100 object candidates (regardless of how many objects are in the picture) and each object has 7 parameters (the shape of the output blob is [1,1,100,7]).
# Each object candidate's parameters are [`id`, `class#`, `confidence`, `xmin`, `ymin`, `xmax`, `ymax`].
# `class#` is the class number, and `confidence` is the 'likeness' to that class, ranging from 0.0 to 1.0 (1.0 = 100%). (`xmin`,`ymin`)-(`xmax`,`ymax`) are the top-left and bottom-right coordinates of the bounding box. These coordinates are normalized to the range 0.0 to 1.0, so multiply them by the image width and height to convert them into pixel coordinates in the picture.
# The code checks the confidence and draws a bounding box and label on the image when the confidence is higher than 0.6 (60%).
#
# The result image is displayed at the end.
# +
print('output blob: name="{}", shape={}'.format(output_blob_name, net.outputs[output_blob_name].shape))
result = res[output_blob_name][0][0]
img_h, img_w, _ = img.shape
for obj in result:
imgid, clsid, confidence, x1, y1, x2, y2 = obj
if confidence>0.6:
x1 = int(x1 * img_w)
y1 = int(y1 * img_h)
x2 = int(x2 * img_w)
y2 = int(y2 * img_h)
cv2.rectangle(img, (x1, y1), (x2, y2), (0,255,255), thickness=4 )
cv2.putText(img, label[int(clsid)], (x1, y1), cv2.FONT_HERSHEY_PLAIN, fontScale=4, color=(0,255,255), thickness=4)
# %matplotlib inline
import matplotlib.pyplot as plt
img=cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
# -
# ----
# Now, you have learned how an object detection program using OpenVINO works.
#
# As you have already seen, most of the code in an OpenVINO application is common except for the result-processing part.
# What you need to know to develop an application using OpenVINO is the input and output blob format. You don't need to know the details or the internal behavior of the model to develop an application for OpenVINO.
# ## Next => Basic of asynchronous inferencing - [classification-async-single.ipynb](./classification-async-single.ipynb)
| object-detection-ssd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="7h1ABDJuNy5i" colab_type="text"
# # Topic 2.1 - Motion
#
# __Formula booklet:__ four SUVAT equations
#
# *velocity* $$
# v = u + at $$
#
# *displacement* $$
# s = ut + \frac{1}{2}at^2$$
#
# *timeless* $$
# v^2 = u^2 + 2as$$
#
# *average displacement* $$
# s = \frac{(v + u)t}{2} $$
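# The four SUVAT equations above are mutually consistent, which a quick numerical check illustrates (the values of $u$, $a$, and $t$ below are arbitrary example values):

```python
# Arbitrary example values: u = initial velocity, a = acceleration, t = time
u, a, t = 3.0, 2.0, 4.0
v = u + a * t                  # velocity equation
s = u * t + 0.5 * a * t**2     # displacement equation
assert abs(v**2 - (u**2 + 2 * a * s)) < 1e-9   # timeless equation
assert abs(s - (v + u) * t / 2) < 1e-9         # average-displacement equation
print(v, s)  # v = 11.0 m/s, s = 28.0 m
```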
# + [markdown] id="iTFahyFxQdCE" colab_type="text"
# ## Question 1
#
# A fly travels along the x-axis. His starting point is $x = -8.0 m$ and his ending point is $x = -16 m$. His flight lasts $2.0$ seconds. What is his velocity?
#
# __Given__
# - $x_i = -8.0 m$
# - $x_f = -16 m$
# - $t = 2 s$
#
# __Formula__
# - $\Delta x = x_f - x_i$
# - $v = \frac{\Delta x}{t}$
#
# __Solution__
# - $\Delta x = x_f - x_i = -16 - (-8) = -8m$
# - $v = \frac{\Delta x}{t} = \frac{-8}{2} = -4 \frac{m}{s}$
#
# __Answer:__ The velocity of the fly is $-4 \frac{m}{s}$.
# + id="DAoNTDtwP2UT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c35568ff-f1f2-4963-ed99-122633a1b1bf"
x_i = -8.0 # initial point in m
x_f = -16 # final point in m
t = 2 # time to travel the distance in s
x = x_f - x_i # displacement in m
v = x / t # velocity
print('The velocity of the fly is', v, 'm/s.')
# + [markdown] id="xC_uHHcdTRDd" colab_type="text"
# ## Question 2
#
# A car traveling at $48 ms^{-1}$ is brought to a stop in $3.0$ seconds. What is its acceleration?
#
# __Given__
# - $u = 48 \frac{m}{s}$
# - $t = 3 s$
# - $v = 0$
#
# __Formula__ velocity
# - $v = u + at$
#
# __Solution__
# - Since $v = 0$ the formula rearranges:
# - $-u = at$ or $a = -\frac{u}{t} = -\frac{48}{3} = -16\frac{m}{s^2}$
#
# __Answer:__ The acceleration of the car is $-16\frac{m}{s^2}$.
# + id="uOHhw4NONv0H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bf0eb66f-ed31-404a-97b8-f72b9bcae63c"
v = 0 # final velocity - implicit - stop or zero
u = 48 # initial velocity
t = 3 # time to stop
a = -u / t # acceleration is change in velocity over time
print('The acceleration of the car is',a,'m/s²')
| formative_ipynb-solution/2.1 Motion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import redditDataset
import praw
r = praw.Reddit(user_agent='date_test')
reload(redditDataset)
sub = redditDataset.getSubreddits(r, ['funny'])
sub = sub.next()
"""
Get submission 202wd3
http://www.reddit.com/r/funny/comments/202wd3/i_participated_in_one_of_the_biggest_magic_the/
"""
post = r.get_submission('http://www.reddit.com/r/funny/comments/202wd3/i_participated_in_one_of_the_biggest_magic_the/')
# +
"""
Get posts within timeframe
"""
import datetime, time
startDate = '140101'
endDate = '140102'
startDate = time.mktime(datetime.datetime.strptime(startDate, "%y%m%d").timetuple())
endDate = time.mktime(datetime.datetime.strptime(endDate, "%y%m%d").timetuple())
searchTerm = 'timestamp:' + str(startDate)[:-2] + '..' + str(endDate)[:-2]
print searchTerm
posts = sub.search(searchTerm, sort='top', syntax='cloudsearch', limit=100)
len(list(posts))
# -
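# The timestamp construction above can be wrapped in a small helper; a sketch (the function name is ours, not part of `redditDataset`):

```python
import datetime
import time

def timestamp_search_term(start_date, end_date, fmt='%y%m%d'):
    """Build a cloudsearch 'timestamp:start..end' term from yymmdd date strings."""
    start = time.mktime(datetime.datetime.strptime(start_date, fmt).timetuple())
    end = time.mktime(datetime.datetime.strptime(end_date, fmt).timetuple())
    # mktime returns a float; drop the fractional part, as the [:-2] slice above does
    return 'timestamp:%d..%d' % (int(start), int(end))

print(timestamp_search_term('140101', '140102'))
```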
# test function out
reload(redditDataset)
posts = redditDataset.getPostsWithinRange(sub, '140101', '140102')
len(list(posts))
# TEST COMBINING GENERATORS
reload(redditDataset)
import itertools
gen1 = redditDataset.getPostsWithinRange(sub, '140101', '140102', nPosts=50)
gen2 = redditDataset.getPostsWithinRange(sub, '140103', '140104', nPosts=50)
fullgen = itertools.chain(gen1, gen2)
empty = []
emptygen = itertools.chain(empty, fullgen)
len(list(emptygen))
# test fine scale function
reload(redditDataset)
posts = redditDataset.getAllPostsWithinRangeFineScale(sub, '140101', '140104', fineScale=12)
len(list(posts))
| Get reddit data between dates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <table>
# <tr><td><img style="height: 150px;" src="images/geo_hydro1.jpg"></td>
# <td bgcolor="#FFFFFF">
# <p style="font-size: xx-large; font-weight: 900; line-height: 100%">AG Dynamics of the Earth</p>
# <p style="font-size: large; color: rgba(0,0,0,0.5);">Jupyter notebooks</p>
# <p style="font-size: large; color: rgba(0,0,0,0.5);"><NAME></p>
# </td>
# </tr>
# </table>
# # Dynamic systems: 3. Continuity
# ## Heat transport with `scalarTransportFoam`
# ---
# *<NAME>,
# Geophysics Section,
# Institute of Geological Sciences,
# Freie Universität Berlin,
# Germany*
# **In this notebook, we will learn to**
#
# - use `scalarTransportFoam` to prepare a simple model for solving the continuity
# equation for transporting heat.
#
# **Prerequisites:** (text)
#
# **Result:** You should get a figure similar to
# <img src="images/HeatTransport.jpg" style=width:10cm>
#
# <a href="#top">**Table of contents**</a>
#
# 1. [Solver and equations](#one)
# 2. [Implementation](#two)
# 3. [Running](#three)
# 4. [Post-processing](#four)
# 5. [Technical aspects](#five)
# <div id="one"></div>
#
# ----
# ## 1. Solver and equations
#
# `scalarTransportFoam` is a
#
# - transient
# - incompressible
#
# solver for the transport equation.
#
# We thus need the continuity equation and the Fourier law:
# $$
# \begin{array}{rcl}
# \rho c_p \frac{\partial T}{\partial t}
# + \rho c_p \vec{u} \cdot \nabla T
# + \nabla \cdot \vec{F} &=& H \\
# \vec{F} &=& - K_h \nabla T
# \end{array}
# $$
# with
# $\rho$ [kg/m$^3$] density,
# $c_p$ [J/kg/K] specific heat,
# $T$ [K] temperature,
# $\vec{u}$ [m/s] velocity,
# $t$ [s] time,
# $\nabla$ [1/m] nabla operator,
# $K_h$ [W/m/K] thermal conductivity,
# $F$ [J/m$^2$/s] heat flux, and
# $H$ [W/m$^3$] heat sources.
#
# Inserting yields:
# $$
# \frac{\partial T}{\partial t}
# + \vec{u} \cdot \nabla T
# - \nabla \cdot \kappa_h \nabla T = {{H}\over{\rho c_p}}
# $$
# with $\kappa$ [m$^2$/s] thermal diffusivity.
#
# This equation is solved by `scalarTransportFoam`...
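# To build intuition for this equation before setting up the OpenFOAM case, here is a toy explicit finite-difference sketch (upwind advection, central diffusion, periodic boundaries via `np.roll`) of a hot anomaly being advected and smeared. This is illustrative only; it is not how `scalarTransportFoam` discretises the equation, and the grid and time step are chosen just to keep the scheme stable:

```python
import numpy as np

# Toy 1D advection-diffusion: dT/dt + u dT/dx = kappa d2T/dx2
nx, L = 200, 10.0                # cells, channel length [m]
dx = L / nx
u, kappa, dt = 0.5, 1e-2, 0.005  # velocity [m/s], diffusivity [m2/s], time step [s]
x = np.linspace(0.0, L, nx, endpoint=False)

T = np.full(nx, 10.0)            # 10 degC background temperature
T[(x > 4.5) & (x < 5.5)] = 20.0  # hot anomaly between 4.5 m and 5.5 m

for _ in range(400):             # advance 2 s of model time
    adv = -u * (T - np.roll(T, 1)) / dx                               # upwind advection
    dif = kappa * (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2  # central diffusion
    T = T + dt * (adv + dif)

# The anomaly has moved ~u*t = 1 m downstream and its peak has decayed below 20 degC
print(T.max(), x[T.argmax()])
```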
# <div id="two"></div>
#
# ----
# ## 2. Implementation
#
# We consider flow along a channel 10 m long, from left to right:
# <img src="images/1D_transport.jpg" style=width:15cm>
# Black numbers mark sizes, red numbers are vertex numbers.
#
# For the 1D transport of heat, we have an inflow face (0 4 7 3) and an outflow face (1 2 6 5).
# The vertex ordering of each face defines the **unit normal vector** in a right-hand sense; for both
# faces, the unit vector points outwards.
#
# The other faces are not relevant, as we consider a 2D problem, thus they will be defined empty.
#
# We have to set up the **mesh**, the **initial conditions**, a **temperature anomaly**, then **run** the program.
#
# ### Directory structure and files
#
# ~~~
# HeatTransport_scalarTransportFoam
# |-- 0
# |-- U
# |-- T
# |- constant
# |-- transportProperties
# |- system
# |-- blockMeshDict
# |-- controlDict
# |-- fvSchemes
# |-- fvSolution
# |-- setFieldsDict
# ~~~
#
# - `system/blockMeshDict`
# Eight vertices are defined, use the figure above. Only two faces, inflow and outflow side, are relevant, and defined as patches. Choose direction of points such that the normal vectors are pointing outside! Create one block, with 1000 elements in $x$ direction.
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# convertToMeters 1;
#
# vertices
# (
# ( 0 -1 -1)
# (10 -1 -1)
# (10 1 -1)
# ( 0 1 -1)
# ( 0 -1 1)
# (10 -1 1)
# (10 1 1)
# ( 0 1 1)
# );
#
# blocks
# (
# hex (0 1 2 3 4 5 6 7) (1000 1 1) simpleGrading (1 1 1)
# );
#
# edges
# (
# );
#
# boundary
# (
# sides
# {
# type patch;
# faces
# (
# (1 2 6 5)
# (0 4 7 3)
# );
# }
# empty
# {
# type empty;
# faces
# (
# (0 1 5 4)
# (5 6 7 4)
# (3 7 6 2)
# (0 3 2 1)
# );
# }
# );
#
# mergePatchPairs
# (
# );
# ~~~
#
# </details>
#
# - `0/U.orig`
# Set entire velocity field to zero. We use `U.orig` as file here, because we modify the initial
# conditions later with `setFields`.
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# dimensions [0 1 -1 0 0 0 0];
#
# internalField uniform (0 0 0);
#
# boundaryField
# {
# sides
# {
# type zeroGradient;
# }
# empty
# {
# type empty;
# }
# }
# ~~~
#
# </details>
#
# - `0/T.orig`
# Set entire temperature field to 10 degrees. We use `T.orig` as file here, because we modify the initial
# conditions later with `setFields`.
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# dimensions [0 0 0 1 0 0 0];
#
# internalField uniform 10;
#
# boundaryField
# {
# sides
# {
# type zeroGradient;
# }
#
# empty
# {
# type empty;
# }
# }
# ~~~
#
# </details>
#
# - `constant/transportProperties`
# We set the thermal diffusivity, following the table below.
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# DT DT [0 2 -1 0 0 0 0] 1e-6; // run1a
# //DT DT [0 2 -1 0 0 0 0] 1e-4; // run1b
# //DT DT [0 2 -1 0 0 0 0] 1e-2; // run1c
# //DT DT [0 2 -1 0 0 0 0] 1e-6; // run2a
# //DT DT [0 2 -1 0 0 0 0] 1e-6; // run3a
# //DT DT [0 2 -1 0 0 0 0] 1e-2; // run3c
# ~~~
#
# </details>
#
# - `system/controlDict`
# We set a run time of 8s. We assume a maximum velocity of $u=1$m/s, have discretised with $\Delta x=0.01$m in
# $x$ direction. For a **Courant number** below 1, we use
# $$
# C = {{u \Delta t}\over{\Delta x}} < 1
# $$
# thus $\Delta t < 0.01$s should be sufficient.
# Saving interval `writeInterval` is set to 1s.
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# application scalarTransportFoam;
# startFrom startTime;
# startTime 0;
# stopAt endTime;
# endTime 8.0;
# deltaT 0.01;
# writeControl runTime;
# writeInterval 1;
# purgeWrite 0;
# writeFormat ascii;
# writePrecision 6;
# writeCompression off;
# timeFormat general;
# timePrecision 6;
# runTimeModifiable true;
# ~~~
#
# </details>
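# The Courant-number estimate above can be checked with a few lines of Python (values taken from this case; $u=0.5$ m/s is the run velocity set later in `setFieldsDict`, $u=1$ m/s the assumed worst case):

```python
def courant(u, dt, dx):
    """Courant number C = u*dt/dx; explicit schemes typically require C < 1."""
    return u * dt / dx

dx = 10.0 / 1000   # 10 m channel discretised into 1000 cells
dt = 0.01          # deltaT from controlDict
print(courant(0.5, dt, dx))  # run velocity: C = 0.5, comfortably stable
print(courant(1.0, dt, dx))  # assumed worst case: C = 1.0, marginal
```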
#
# - `system/fvSchemes`
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# ddtSchemes
# {
# default Euler;
# }
#
# gradSchemes
# {
# default Gauss linear;
# }
#
# divSchemes
# {
# default none;
# div(phi,T) Gauss linearUpwind grad(T);
# }
#
# laplacianSchemes
# {
# default none;
# laplacian(DT,T) Gauss linear corrected;
# }
#
# interpolationSchemes
# {
# default linear;
# }
#
# snGradSchemes
# {
# default corrected;
# }
# ~~~
#
# </details>
#
# - `system/fvSolution`
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# solvers
# {
# T
# {
# solver PBiCGStab;
# preconditioner DILU;
# tolerance 1e-06;
# relTol 0;
# }
# }
#
# SIMPLE
# {
# nNonOrthogonalCorrectors 0;
# }
# ~~~
#
# </details>
#
# - `system/setFieldsDict`
# is new. Here we assign field conditions:
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# defaultFieldValues ( volVectorFieldValue U (0.5 0 0) volScalarFieldValue T 10. );
#
# regions ( boxToCell { box (4.5 -1 -1) (5.5 1 1) ; fieldValues ( volScalarFieldValue T 20. ) ; } );
# ~~~
#
# </details>
#
# The velocity in the above case is set to $0.5$ m/s in the $x$-direction and temperatures to 10$^{\circ}$C,
# except in a box region between 4.5 and 5.5 m, where they are elevated to 20$^{\circ}$C.
# Note that this dictionary needs to be processed before running the code with `setFields`.
#
# We set up a sensitivity study. $U$ [m/s] is the velocity in $x$ direction, $DT$ [m$^2$/s] the
# thermal diffusivity.
#
# | U/DT | $10^{-6}$ | $10^{-4}$ | $10^{-2}$ |
# |---------|-----------|-----------|-----------|
# |0.0 | run1a | run1b | run1c |
# |0.2 | run2a | - | - |
# |0.5 | run3a | - | run3c |
# <div id="three"></div>
#
# ----
# ## 3. Running
#
# Running a particular example is done with the following set of commands:
# ~~~
# $ foamCleanTutorials
# $ blockMesh
# $ setFields
# $ scalarTransportFoam
# ~~~
#
# If the run was successful, create a dummy file `show.foam` in the directory and
# open the run in `paraview`. In the case below, several runs have been loaded into
# `paraview` and shifted for visibility.
#
# <img src="images/HeatTransport.jpg" style=width:25cm>
# <div id="four"></div>
#
# ----
# ## 4. Post-processing: Profiles
#
# Use dictionary `system/sampleDict` to extract data after main run:
#
# <details><summary><b>> Show code</b></summary>
#
# ~~~
# type sets;
#
# setFormat raw;
#
# interpolationScheme cell;
# //interpolationScheme cellPoint;
# //interpolationScheme cellPointFace;
#
# // Fields to sample.
# fields
# (
# T
# );
#
# sets
# (
# transport1D_run1a
# {
# type uniform;
# nPoints 100;
#
# axis xyz;
# start ( 0 0.0 0.0);
# end (10 0.0 0.0);
# }
#
# );
# ~~~
# </details>
#
# Run:
#
# ~~~
# $ postProcess -func sampleDict -latestTime
# ~~~
#
# In this case, `postProcess` creates sampled data for temperature $T$ in a file
# `postProcessing/sampleDict/8/transport1D_run1a_T.xy` for each run.
# +
import numpy as np
import matplotlib.pyplot as plt
# calculate analytical data
def T(x,T0=400,T1=500):
Hsternrhocp = 1e-5
D = 1e-6
L = 10
c1 = (T1-T0)/L + Hsternrhocp/2/D*L
c2 = T0
T = -Hsternrhocp/2/D*x**2 + c1*x + c2
return T
# load laplacianFoam postprocessed data
data1a = np.loadtxt('data/transport1D_run1a_T.xy')
data1b = np.loadtxt('data/transport1D_run1b_T.xy')
data1c = np.loadtxt('data/transport1D_run1c_T.xy')
data2a = np.loadtxt('data/transport1D_run2a_T.xy')
data3a = np.loadtxt('data/transport1D_run3a_T.xy')
data3c = np.loadtxt('data/transport1D_run3c_T.xy')
plt.figure(figsize=(10,6))
plt.xlabel('x [m]')
plt.ylabel('T [K]')
#plt.plot(x,Tanalytical,linewidth=10,color='gray',label='analytical')
plt.plot(data1a[:,0],data1a[:,3],linewidth=2,color='red',label='D=10$^{-6}$ m$^2$/s,u=0.0 m/s')
plt.plot(data1b[:,0],data1b[:,3],linewidth=2,color='green',label='D=10$^{-4}$ m$^2$/s,u=0.0 m/s')
plt.plot(data1c[:,0],data1c[:,3],linewidth=2,color='blue',label='D=10$^{-2}$ m$^2$/s,u=0.0 m/s')
plt.plot(data2a[:,0],data2a[:,3],linewidth=2,linestyle=':',color='red',label='D=10$^{-6}$ m$^2$/s,u=0.2 m/s')
plt.plot(data3a[:,0],data3a[:,3],linewidth=2,linestyle='--',color='red',label='D=10$^{-6}$ m$^2$/s,u=0.5 m/s')
plt.plot(data3c[:,0],data3c[:,3],linewidth=2,linestyle='--',color='blue',label='D=10$^{-2}$ m$^2$/s,u=0.5 m/s')
plt.legend()
# -
# ... done
| .ipynb_checkpoints/Dynamics_lab03_HeatTransport_scalarTransportFoam-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # GPyTorch Regression Tutorial
#
# ## Introduction
#
# In this notebook, we demonstrate many of the design features of GPyTorch using the simplest example, training an RBF kernel Gaussian process on a simple function. We'll be modeling the function
#
# \begin{align*}
# y &= \sin(2\pi x) + \epsilon \\
# \epsilon &\sim \mathcal{N}(0, 0.2)
# \end{align*}
#
# with 100 training examples, and testing on 51 test examples.
#
# **Note:** this notebook is not necessarily intended to teach the mathematical background of Gaussian processes, but rather how to train a simple one and make predictions in GPyTorch. For a mathematical treatment, Chapter 2 of Gaussian Processes for Machine Learning provides a very thorough introduction to GP regression (this entire text is highly recommended): http://www.gaussianprocess.org/gpml/chapters/RW2.pdf
# +
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# -
# ### Set up training data
#
# In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2
# ## Setting up the model
#
# The next cell demonstrates the most critical features of a user-defined Gaussian process model in GPyTorch. Building a GP model in GPyTorch is different in a number of ways.
#
# First in contrast to many existing GP packages, we do not provide full GP models for the user. Rather, we provide *the tools necessary to quickly construct one*. This is because we believe, analogous to building a neural network in standard PyTorch, it is important to have the flexibility to include whatever components are necessary. As can be seen in more complicated examples, like the [CIFAR10 Deep Kernel Learning](../08_Deep_Kernel_Learning/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb) example which combines deep learning and Gaussian processes, this allows the user great flexibility in designing custom models.
#
# For most GP regression models, you will need to construct the following GPyTorch objects:
#
# 1. A **GP Model** (`gpytorch.models.ExactGP`) - This handles most of the inference.
# 1. A **Likelihood** (`gpytorch.likelihoods.GaussianLikelihood`) - This is the most common likelihood used for GP regression.
# 1. A **Mean** - This defines the prior mean of the GP.
# - If you don't know which mean to use, a `gpytorch.means.ConstantMean()` is a good place to start.
# 1. A **Kernel** - This defines the prior covariance of the GP.
# - If you don't know which kernel to use, a `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())` is a good place to start.
# 1. A **MultivariateNormal** Distribution (`gpytorch.distributions.MultivariateNormal`) - This is the object used to represent multivariate normal distributions.
#
#
# ### The GP Model
#
# The components of a user built (Exact, i.e. non-variational) GP model in GPyTorch are, broadly speaking:
#
# 1. An `__init__` method that takes the training data and a likelihood, and constructs whatever objects are necessary for the model's `forward` method. This will most commonly include things like a mean module and a kernel module.
#
# 2. A `forward` method that takes in some $n \times d$ data `x` and returns a `MultivariateNormal` with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP.
#
# This specification leaves a large amount of flexibility when defining a model. For example, to compose two kernels via addition, you can either add the kernel modules directly:
#
# ```python
# self.covar_module = ScaleKernel(RBFKernel() + WhiteNoiseKernel())
# ```
#
# Or you can add the outputs of the kernel in the forward method:
#
# ```python
# covar_x = self.rbf_kernel_module(x) + self.white_noise_module(x)
# ```
# +
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
# -
# ### Model modes
#
# Like most PyTorch modules, the `ExactGP` has a `.train()` and `.eval()` mode.
# - `.train()` mode is for optimizing model hyperparameters.
# - `.eval()` mode is for computing predictions through the model posterior.
# ## Training the model
#
# In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
#
# The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from `torch.optim`, and all trainable parameters of the model should be of type `torch.nn.Parameter`. Because GP models directly extend `torch.nn.Module`, calls to methods like `model.parameters()` or `model.named_parameters()` function as you might expect coming from PyTorch.
#
# In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:
#
# 1. Zero all parameter gradients
# 2. Call the model and compute the loss
# 3. Call backward on the loss to fill in gradients
# 4. Take a step on the optimizer
#
# However, defining custom training loops allows for greater flexibility. For example, it is easy to save the parameters at each step of training, or use different learning rates for different parameters (which may be useful in deep kernel learning for example).
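For instance, here is a minimal sketch of per-group learning rates using plain `torch`; the two `Parameter`s below are hypothetical stand-ins for, say, kernel hyperparameters versus the likelihood's noise:

```python
import torch

# Two hypothetical parameter tensors standing in for model sub-modules
kernel_param = torch.nn.Parameter(torch.zeros(1))
noise_param = torch.nn.Parameter(torch.zeros(1))

# Each dict is a parameter group with its own learning rate
optimizer = torch.optim.Adam([
    {'params': [kernel_param], 'lr': 0.1},
    {'params': [noise_param], 'lr': 0.01},
])
lrs = [group['lr'] for group in optimizer.param_groups]
# lrs → [0.1, 0.01]
```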
# +
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.parameters()}, # Includes GaussianLikelihood parameters
], lr=0.1)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
training_iter = 50
for i in range(training_iter):
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
# -
# ## Make predictions with the model
#
# In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
#
# Just as a user defined GP model returns a `MultivariateNormal` containing the prior mean and covariance from forward, a trained GP model in eval mode returns a `MultivariateNormal` containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like:
#
# ```python
# f_preds = model(test_x)
# y_preds = likelihood(model(test_x))
#
# f_mean = f_preds.mean
# f_var = f_preds.variance
# f_covar = f_preds.covariance_matrix
# f_samples = f_preds.sample(sample_shape=torch.Size([1000]))
# ```
#
# The `gpytorch.settings.fast_pred_var` context is not needed, but here we are giving a preview of using one of our cool features, getting faster predictive distributions using [LOVE](https://arxiv.org/abs/1803.06058).
# +
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
# -
# ## Plot the model fit
#
# In the next cell, we plot the mean and confidence region of the Gaussian process model. The `confidence_region` method is a helper method that returns 2 standard deviations above and below the mean.
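`confidence_region` thus corresponds to $\mu \pm 2\sigma$ pointwise; a small numpy sketch of the same bounds on hypothetical values:

```python
import numpy as np

mean = np.array([0.0, 1.0])        # hypothetical posterior mean
var = np.array([0.25, 1.0])        # hypothetical posterior variance
lower = mean - 2 * np.sqrt(var)    # two standard deviations below
upper = mean + 2 * np.sqrt(var)    # two standard deviations above
# lower → [-1., -1.], upper → [1., 3.]
```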
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(4, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
| examples/01_Simple_GP_Regression/Simple_GP_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/marongkang/MLeveryday/blob/main/2022c.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="tXMO8foo-pYG"
import urllib.request
# + colab={"base_uri": "https://localhost:8080/"} id="jRyw47j7-18s" outputId="59594c30-14a0-4dac-c6fc-0bebf7230b59"
BCurl='https://github.com/marongkang/datasets/raw/main/BCHAIN-MKPRU.csv'
Gurl='https://github.com/marongkang/datasets/raw/main/LBMA-GOLD.csv'
BCrps=urllib.request.urlopen(BCurl)
Grps=urllib.request.urlopen(Gurl)
BChtml=BCrps.read()
Ghtml=Grps.read()
with open('BCHAIN-MKPRU.csv','wb') as f1:
    f1.write(BChtml)
with open('LBMA-GOLD.csv','wb') as f2:
    f2.write(Ghtml)
# + id="mSKvd5VLAYuo"
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.pyplot import MultipleLocator
import numpy as np
# + id="CCQ4vjhcASoO"
BCdf=pd.read_csv('BCHAIN-MKPRU.csv')
Gdf=pd.read_csv('LBMA-GOLD.csv')
# + id="AgEpSY5fAlQe"
BCdf['Date']=pd.to_datetime(BCdf['Date'])
Gdf['Date']=pd.to_datetime(Gdf['Date'])
Gdf['Return']=0
# + [markdown] id="RdyBMweSmzuH"
# #Data Preprocessing
# **Missing-value analysis**
# + [markdown] id="w9FhlNELnj05"
# **KNN**
# + id="OHVLDRMvnc6g"
import sklearn
from sklearn.impute import KNNImputer
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="_cC0uYWzB5T3" outputId="96f87846-8e01-48c1-aefc-4811d59fbcc7"
plt.plot(Gdf['Date'],Gdf['USD (PM)'])
# + id="Fp-PbbTynnV-" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="4e8ad10b-5dfd-43e3-f2e2-e202ea2bdfbb"
# KNN imputation gave poor results, so this block is left disabled
'''
imputer = KNNImputer(n_neighbors=30,
weights='distance'
)
imputed = imputer.fit_transform(Gdf[['Return','USD (PM)']])
df_imputed = pd.DataFrame(imputed, columns=Gdf[['Return','USD (PM)']].columns)
Gdf[['Return','USD (PM)']]=df_imputed
'''
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="21ihcoBRBWuN" outputId="ddadf13f-596c-47f1-e8a4-ee7333aecdae"
plt.plot(Gdf['Date'],Gdf['USD (PM)'])
# + [markdown] id="0wiPyJmbEQiF"
# **dropna**
# + id="yvpk-ZYuEdkl"
Gdf=Gdf.dropna()
Gdf=Gdf.reset_index()
# + [markdown] id="UQ01EXPLnD5e"
# #Compute Key Metrics
# + id="m5e1FK_VCJ-n"
BCret=[0]
for i in range(1,len(BCdf)):
BCret.append((BCdf['Value'][i]-BCdf['Value'][i-1])/BCdf['Value'][i-1])
Gret=[0]
for i in range(1,len(Gdf)):
Gret.append((Gdf['USD (PM)'][i]-Gdf['USD (PM)'][i-1])/Gdf['USD (PM)'][i-1])
# + id="l9XJLltKVVzc"
BCdf['Return']=BCret
Gdf['Return']=Gret
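The loops above compute simple one-day returns $(p_t - p_{t-1})/p_{t-1}$; pandas' built-in `pct_change` gives the same values, sketched here on a hypothetical price column:

```python
import pandas as pd

df = pd.DataFrame({'Value': [100.0, 110.0, 99.0]})   # hypothetical prices
ret = df['Value'].pct_change().fillna(0).tolist()    # first entry padded with 0
# ret ≈ [0.0, 0.1, -0.1]
```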
# + [markdown] id="B3t2jIG9eMDo"
# |index|Value|Return|Var|
# |---|---|---|---|
# |count|1826.0|1826.0|1826.0|
# |mean|12206.07|0.003233|0.001605|
# |std|14043.89|0.041476|0.000494|
# |min|594.08|-0.391404|0.0|
# |25%|3994.98|-0.012508|0.001663|
# |50%|7924.46|0.001439|0.001725|
# |75%|11084.73|0.019062|0.001809|
# |max|63554.44|0.218669|0.002272|
# + id="Ma3mheMr2afb"
# Compute return-level labels for gold (quartile cut-points)
Glabel=[]
for i in range(len(Gdf)):
if Gdf['Return'][i]>= 0.004494:
Glabel.append(0)
elif Gdf['Return'][i]>=0.000203:
Glabel.append(1)
elif Gdf['Return'][i]>=-0.004267:
Glabel.append(2)
else:
Glabel.append(3)
Gdf['Label']=Glabel
# + id="ea8VIyl85AkV"
# Compute return-level labels for bitcoin (quartile cut-points from the table above)
BClabel=[]
for i in range(len(BCdf)):
if BCdf['Return'][i]>= 0.019062:
BClabel.append(0)
elif BCdf['Return'][i]>=0.001439:
BClabel.append(1)
elif BCdf['Return'][i]>=-0.012508:
BClabel.append(2)
else:
BClabel.append(3)
BCdf['Label']=BClabel
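The if/elif ladders above bin each day's return at the quartile cut-points; `np.digitize` produces the same labels in one call, sketched with the bitcoin thresholds:

```python
import numpy as np

bins = [-0.012508, 0.001439, 0.019062]        # ascending quartile cut-points
ret = np.array([0.03, 0.002, -0.002, -0.05])  # hypothetical daily returns
labels = 3 - np.digitize(ret, bins)           # flip so label 0 = highest bucket
# labels → [0, 1, 2, 3]
```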
# + [markdown] id="_eAeTabsBuRw"
# **Exponentially Weighted Moving Average**
# + id="hSIMTMnkC1pH"
#args
beta=0.9
# + id="tKwJnd6UA04H"
def EW_avg(data):
    # Exponentially weighted average X_t = beta*X_{t-1} + (1-beta)*R_t,
    # reported with the standard bias correction 1/(1 - beta**i)
    res=[0]*len(data)
    avg=0
    for i in range(1,len(data)):
        avg=beta*avg+(1-beta)*data[i]
        res[i]=avg/(1-beta**i)
    return res
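A quick sanity check of bias-corrected exponential weighting: on a constant input series the corrected average equals the constant at every step. A self-contained sketch with $\beta = 0.9$:

```python
import numpy as np

beta = 0.9
data = np.ones(10)                       # constant input series
avg, out = 0.0, []
for t, x in enumerate(data, start=1):
    avg = beta * avg + (1 - beta) * x    # running exponentially weighted average
    out.append(avg / (1 - beta ** t))    # bias-corrected estimate
# out ≈ [1.0] * 10
```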
# + colab={"base_uri": "https://localhost:8080/", "height": 744} id="-j71nNQOEKC6" outputId="631138c1-8fb2-492e-89b3-d928719d8386"
BCdf['Avg_ret']=EW_avg(BCdf['Return'])
plt.figure(dpi=200)
plt.plot(BCdf['Date'],BCdf['Return'],label='Return')
plt.plot(BCdf['Date'],BCdf['Avg_ret'],label='EW_avg')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 744} id="tZXmIGe-F9si" outputId="a5fc7770-1fa9-43ad-e54b-2b52d04f5af3"
Gdf['Avg_ret']=EW_avg(Gdf['Return'])
plt.figure(dpi=200)
plt.plot(Gdf['Date'],Gdf['Return'],label='Return')
plt.plot(Gdf['Date'],Gdf['Avg_ret'],label='EW_avg')
plt.show()
# + [markdown] id="jqET1tW1cRFc"
# #Plausibility Check
# + colab={"base_uri": "https://localhost:8080/", "height": 708} id="Okis0nJJN16U" outputId="36a509c3-18d3-44db-f5a3-f8fdb9e70bc6"
y=np.random.rand(30)
plt.figure(dpi=200)
plt.plot(range(30),y,label='Return')
plt.plot(range(30),EW_avg(y),label='EW_avg')
plt.show()
# + [markdown] id="FhKzVxeT6NjC"
# #Torch
# + id="Mkta8JO7j80i"
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data.dataset import random_split
import torch.utils.data as data
from sklearn.preprocessing import MinMaxScaler
# + [markdown] id="gYEdUC2a1MsK"
# #LSTM
# + id="YKSIQcD0o8OH"
#ML Parameters
lr = 1E-4
epochs = 100
batch_size = 20
scaler = MinMaxScaler()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# + [markdown] id="tUI0pD2qryfG"
#
# **Build the dataset**
# + id="eA_QtAa4pDK-"
class StockDataset(data.Dataset):
def __init__(self,input):
self.df=input
self.orig_dataset = self.df[['Value','Return']].to_numpy()
self.TLabel=self.df[['Label']].to_numpy()
self.normalized_dataset = np.copy(self.orig_dataset)
#self.normalized_dataset = self.normalized_dataset.reshape(-1, 1)
self.normalized_dataset = scaler.fit_transform(self.normalized_dataset)
#self.normalized_dataset = self.normalized_dataset.reshape(-1)
self.sample_len = 20
def __len__(self):
if len(self.orig_dataset) > self.sample_len:
return len(self.orig_dataset) - self.sample_len
else:
return 0
def __getitem__(self, idx):
target = self.normalized_dataset[idx+self.sample_len,0]
i = self.normalized_dataset[idx:(idx+self.sample_len),0]
#i = i.reshape((-1, 1))
i = torch.from_numpy(i)
i=i.reshape((-1,1))
target = torch.Tensor([target])
i=i.double()
target=target.double()
return i, target
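`__getitem__` above is a sliding-window extraction: `sample_len` consecutive values form the input, and the next value is the target. A numpy sketch on a hypothetical series:

```python
import numpy as np

series = np.arange(25, dtype=float)      # hypothetical normalized series
sample_len = 20
idx = 2
window = series[idx: idx + sample_len]   # model input, 20 consecutive values
target = series[idx + sample_len]        # the next value, to be predicted
# window.shape → (20,), target → 22.0
```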
# + colab={"base_uri": "https://localhost:8080/"} id="Ke-bFR3HrxM3" outputId="b06c32c3-1071-48e6-b7ec-251172ea82b0"
# Load dataset
dataset = StockDataset(BCdf[['Value','Return','Label']])
# Split training and validation set
train_len = int(0.7*len(dataset))
valid_len = len(dataset) - train_len
TrainData, ValidationData = random_split(dataset,[train_len, valid_len])
# Load into Iterator (each time get one batch)
train_loader = data.DataLoader(TrainData, batch_size=batch_size, shuffle=True, num_workers=2)
test_loader = data.DataLoader(ValidationData, batch_size=batch_size, shuffle=True, num_workers=2)
# Print statistics
print("Total: ", len(dataset))
print("Training Set: ", len(TrainData))
print("Validation Set: ", len(ValidationData))
# + colab={"base_uri": "https://localhost:8080/"} id="byIa4wLjwe7E" outputId="b6641f98-2a32-4690-dac4-6d9d20f3a0c5"
dataset.normalized_dataset
# + id="WavpX1dSHsAJ"
for i,j in train_loader:
#print(i,j)
continue
# + id="OgPv1m2rsgiB"
class TempLSTM(nn.Module):
def __init__(self):
# Required in PyTorch Model
super(TempLSTM, self).__init__()
# Parameters
self.feature_dim = 1
self.hidden_dim = 500
self.num_layers = 3
self.output_dim = 1
# Neural Networks
self.lstm = nn.LSTM(self.feature_dim, self.hidden_dim, self.num_layers, dropout=0.1, batch_first=True)
if torch.cuda.is_available():
self.lstm.cuda()
self.fc = nn.Linear(self.hidden_dim , self.output_dim)
def forward(self, i):
h0 = torch.randn([self.num_layers, i.shape[0], self.hidden_dim], dtype=torch.double, device=device) #.requires_grad_()
c0 = torch.randn([self.num_layers, i.shape[0], self.hidden_dim], dtype=torch.double, device=device) #.requires_grad_()
# Forward propagate LSTM
out, _ = self.lstm.forward(i, (h0.detach(), c0.detach())) # output shape (batch, sequence, hidden_dim)
Lout = self.fc(out[:, -1, :])
return Lout
# + colab={"base_uri": "https://localhost:8080/"} id="s1xlupQttfUr" outputId="e95f43a4-5de0-445a-c7ba-057ff641633e"
# Define model
model = TempLSTM()
model = model.double()
#model = Model(1)
print(model)
# Load into GPU if necessary
if torch.cuda.is_available():
model = model.cuda()
# Define loss function
criterion = nn.MSELoss()
# Define optimization strategy
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# + id="Ri5_O-5AtgkI"
def train(model, iterator, optimizer, criterion, device):
model.train() # Enter Train Mode
train_loss =0
for _, (ii, targets) in enumerate(iterator):
# move to GPU if necessary
if torch.cuda.is_available():
ii, targets = ii.cuda(), targets.cuda()
# generate prediction
#print(ii)
output = model(ii)
#print(output)
# calculate loss
loss = criterion(output, targets)
# compute gradients and update weights
optimizer.zero_grad()
loss.backward()
optimizer.step()
# record training losses
train_loss+=loss.item()
# print completed result
#print('train_loss: %f' % (train_loss))
return train_loss
# + id="QrBuLfKKt0Zp"
def test(model, iterator, criterion, device):
model.eval() # Enter Evaluation Mode
test_loss =0
with torch.no_grad():
for _, (ii, targets) in enumerate(iterator):
# move to GPU if necessary
ii, targets = ii.to(device), targets.to(device)
# generate prediction
output = model(ii)
#print(output)
# calculate loss
loss = criterion(output,targets)
# record training losses
test_loss+=loss.item()
# print completed result
#print('test_loss: %s' % (test_loss))
return test_loss
# + id="ylO2d-SiuDCq"
def predict(model,data):
if torch.cuda.is_available():
data=data.cuda()
model.eval() # Enter Evaluation Mode
with torch.no_grad():
pred = model(data)
return pred
# + colab={"background_save": true} id="ySYZjyKighTV"
# !pip install git+https://github.com/d2l-ai/d2l-zh@release
from d2l import torch as d2l
from IPython import display
# %matplotlib inline
# + id="ud-HGg3LgdCe"
class Animator:
def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None,
ylim=None, xscale='linear', yscale='linear',
fmts=('-', 'm--', 'g-.', 'r:'), nrows=1, ncols=1,
figsize=(3.5, 2.5)):
if legend is None:
legend=[]
d2l.use_svg_display()
self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [self.axes, ]
self.config_axes = lambda: d2l.set_axes(
self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
        # Add multiple data points to the chart
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
# + id="IwKTeIhC44rP"
if torch.cuda.is_available():
model=model.cuda()
# + colab={"base_uri": "https://localhost:8080/", "height": 261} id="6gifNybWuKQq" outputId="817e2ea1-c4f6-4e62-e9e8-89bb1c9cab1a"
#animator
animator=Animator(xlabel='epoch', xlim=[1, epochs],
legend=['train loss','test_loss'])
for epoch in range(epochs):
train_loss=train(model, train_loader, optimizer, criterion, device)
test_loss=test(model, test_loader, criterion, device)
animator.add(epoch+1,(train_loss,test_loss))
# + id="pTyshnNnLaSJ"
for _, (ii, targets) in enumerate(test_loader):
    pred=predict(model, ii)
print(pred,targets)
# + id="AjY8pfLhury6"
#torch.save(model,'/content/drive/MyDrive/Models/LSMT.pt')
# + id="YFG5PvMC0lGQ"
model=torch.load('/content/drive/MyDrive/Models/LSMT.pt',map_location='cpu')
# + id="pMVR4-6a0z_V"
# + [markdown] id="Xxd3wJQMcIN5"
# #Trading Simulation
# + [markdown] id="k_UyO5SED4bJ"
# **Price prediction**
# + colab={"base_uri": "https://localhost:8080/"} id="4O81_yHOTnXy" outputId="3b930819-d3cf-429e-f646-3572da295b22"
Gdf.count()
# + colab={"base_uri": "https://localhost:8080/"} id="OImDWJofchJk" outputId="3e8c19a7-0ab7-41d9-b6ad-41d01e0aa82a"
scaler.inverse_transform([[predict(model,torch.Tensor([scaler.fit_transform(BCdf[['Value']])[0:20]]).double()).item()]])[0][0]
# + id="BPpk_76o4Lhy"
pred=[]
for i in range(len(BCdf)-20):
pred.append(scaler.inverse_transform([[predict(model,torch.Tensor([scaler.fit_transform(BCdf[['Value']])[i:i+20]]).double()).item()]])[0][0])
# + colab={"base_uri": "https://localhost:8080/", "height": 708} id="IEaikgJi47Ws" outputId="1668cef2-8a33-4266-ea1a-7bc811b644fe"
plt.figure(dpi=200)
plt.plot(BCdf['Date'],BCdf['Value'],label='bitcoin_price')
plt.plot(BCdf['Date'][20:],pred,label='predict')
plt.legend(loc = 'upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="n3UPFTR_KLd7" outputId="3b9e5ea0-5fe5-4197-b5f3-ced8a7d71da4"
BCdf['Pred']=0
BCdf['Pred'][20:]=pred
# + id="0UmfLeTL9k3b"
Gscaler=MinMaxScaler()
Gscaler.fit_transform(Gdf[['USD (PM)']])
Gpred=[]
for i in range(len(Gdf)-20):
Gpred.append(Gscaler.inverse_transform([[predict(model,torch.Tensor([Gscaler.fit_transform(Gdf[['USD (PM)']])[i:i+20]]).double()).item()]])[0][0])
# + colab={"base_uri": "https://localhost:8080/", "height": 708} id="3HyFYlOW_8WB" outputId="9705f186-2eab-4990-8e12-6ba7c3078dec"
plt.figure(dpi=200)
plt.plot(Gdf['Date'],Gdf['USD (PM)'],label='gold_price')
plt.plot(Gdf['Date'][20:],Gpred,label='predict')
plt.legend(loc = 'upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="lQ1GiwCf9hwQ" outputId="c32ffd89-ac7a-4c27-84df-aa5228b30023"
Gdf['Pred']=0
Gdf['Pred'][20:]=Gpred
# + colab={"base_uri": "https://localhost:8080/"} id="S_POWov0NeT9" outputId="789ee1ad-7a3e-4c79-8428-f7f908c4ef4c"
Pred_ret=[]
for i in range(20,len(BCdf)):
Pred_ret.append((BCdf['Pred'][i]-BCdf['Value'][i])/BCdf['Value'][i])
BCdf['pred_ret']=0
BCdf['pred_ret'][20:]=Pred_ret
# + colab={"base_uri": "https://localhost:8080/"} id="UBfvM-5EPO-w" outputId="e47e7bf5-498b-4aca-dd0e-d735708abcca"
Pred_ret=[]
for i in range(20,len(Gdf)):
Pred_ret.append((Gdf['Pred'][i]-Gdf['USD (PM)'][i])/Gdf['USD (PM)'][i])
Gdf['pred_ret']=0
Gdf['pred_ret'][20:]=Pred_ret
# + colab={"base_uri": "https://localhost:8080/", "height": 552} id="5pXd4U_x95Ok" outputId="7b01c99a-25b2-42fd-deac-f718f4f8b302"
plt.figure(dpi=150)
plt.plot(BCdf['Date'],BCdf['pred_ret'])
# + colab={"base_uri": "https://localhost:8080/", "height": 552} id="fIHPtO8K-Zbo" outputId="ff02d280-e71c-4ad2-8410-3b1a9a414a26"
plt.figure(dpi=150)
plt.plot(Gdf['Date'],Gdf['pred_ret'])
plt.plot(Gdf['Date'],Gdf['Return'])
# + [markdown] id="l3q6gvypECnp"
# **Compute variance and covariance**
# + id="I9TmfpM_L3HA"
xy=[]
# + id="KW_tPWk9LM2j"
j=0
for i in range(len(BCdf)):
xy.append(BCdf['Return'][i]*Gdf['Return'][j])
if BCdf['Date'][i]>=Gdf['Date'][j]:
j=j+1
# + id="KLX8ZiHQMk6L"
BCdf['xy']=xy
# + colab={"base_uri": "https://localhost:8080/"} id="IP02gTyoM7F1" outputId="ddb68cb7-eea5-4a8f-b733-45646d83be0c"
BCdf['xy_avg']=EW_avg(BCdf['xy'])
# + id="mZ6ZnMo3NS33"
cov=[]
j=0
for i in range(len(BCdf)):
cov.append(BCdf['xy_avg'][i]-BCdf['Avg_ret'][i]*Gdf['Avg_ret'][j])
if BCdf['Date'][i]>=Gdf['Date'][j]:
j=j+1
# + id="KfZ2m0CDNAPm"
BCdf['cov']=cov
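The `cov` column uses the identity $\mathrm{cov}(X,Y) = E[XY] - E[X]\,E[Y]$, with each expectation replaced by its exponentially weighted version. A plain-average check of the identity on hypothetical series:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical return series
y = np.array([2.0, 1.0, 4.0, 3.0])
cov_identity = (x * y).mean() - x.mean() * y.mean()
# equals the population covariance np.cov(x, y, bias=True)[0, 1]
```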
# + id="iW-acyr-NxnL"
def squareX_EX(df):
return (df['Return']-df['Avg_ret'])**2
# + [markdown] id="-N4dBMaUT4wV"
# Exponentially weighted averages
#
# $X_t=\beta X_{t-1}+(1-\beta)R_t$
#
# $V_t=\beta V_{t-1}+(1-\beta)(R_t-X_t)^2$
#
# where:
# * $X_t$: exponentially weighted average return on day $t$
# * $R_t$: actual return on day $t$
# * $V_t$: deviation (variance estimate) on day $t$
# + colab={"base_uri": "https://localhost:8080/"} id="iyINlNHbSwQC" outputId="0274dbd9-8e09-4a11-eef8-a689e7db3c9c"
BCdf['squareX_EX']=squareX_EX(BCdf)
BCdf['EW_var']=EW_avg(BCdf['squareX_EX'])
Gdf['squareX_EX']=squareX_EX(Gdf)
Gdf['EW_var']=EW_avg(Gdf['squareX_EX'])
# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="m7zDHAoVS9q3" outputId="ca2ae194-ea0d-4f77-f807-8ccfb4246304"
BCdf
# + id="OsqzUmgOcnUS"
#args
epochs=len(BCdf)
# + [markdown] id="JBsWymyNEcQJ"
# **Portfolio investment strategy**
# + [markdown] id="mTEGHs4DewY0"
# $R_t=\frac{T_t}{T_{t-1}}-1$
# + id="0fos7AVZkyNa"
def portfolio(i,j,W,beta=0.3,BCFrate=0.02,GFrate=0.01):
pR=[]
pV=[]
w_0=np.array(W[1:])*100
mR,mV,alc=-100,-100,W[1]
    '''
    Candidate bitcoin allocations range from 20% to 80%,
    searched in 1% steps
    '''
for w in range(20,80,1):
        c=abs(w-w_0[0])*BCFrate+abs((100-w)-w_0[1])*GFrate  # rebalancing cost for both assets
portfolioR=w*BCdf['Avg_ret'][i]+(100-w)*Gdf['Avg_ret'][j]-c
portfolioV=w*BCdf['EW_var'][i]+(100-w)*Gdf['EW_var'][j]+w*(100-w)*BCdf['cov'][i]
if portfolioV < beta:
if mR<portfolioR:
mR=portfolioR
mV=portfolioV
alc=w/100
pR.append(portfolioR)
pV.append(portfolioV)
if mR < 0:
alc=0
return pR,pV,[mR,mV,alc]
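The selection rule inside `portfolio` reduces to: among candidate allocations whose risk stays below the threshold, keep the one with the highest expected return. A pure-Python sketch with hypothetical numbers:

```python
# (bitcoin weight, expected return, risk) — hypothetical candidates
candidates = [
    (0.2, 0.010, 0.020),
    (0.5, 0.025, 0.028),
    (0.8, 0.060, 0.045),
]
risk_cap = 0.03
feasible = [c for c in candidates if c[2] < risk_cap]  # drop too-risky options
best = max(feasible, key=lambda c: c[1])               # maximize expected return
# best → (0.5, 0.025, 0.028)
```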
# + [markdown] id="efgwmtYoDLyX"
# $\mu=E(R)=\sum w_i(X_i-S_i)$
#
# $\sigma^2=Var(R)=\sum_i\sum_j w_iw_j\,\mathrm{cov}(X_i,X_j)$
#
# where:
# * $\mu$: portfolio return
# * $\sigma^2$: portfolio risk
# * $w_i$: amount invested in asset $i$
# * $X_i$: return rate of asset $i$
# * $S_i$: transaction-fee rate of asset $i$
# + colab={"base_uri": "https://localhost:8080/", "height": 400} id="7FuSv4GMY0Kl" outputId="d889cdac-458f-4fcf-f879-9324355b110c"
pR,pV,eP=portfolio(900,1000,[0,0.5,0.5])
plt.figure(dpi=100)
plt.scatter(pV,pR,label='allocation_line')
plt.scatter(eP[1],eP[0],label='efficient_point')
plt.xlabel('Risk')
plt.ylabel('Expected_return')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/"} id="m9NSIgfu3Fps" outputId="b0b054be-fd06-42e6-80a2-ea8dd176d45e"
eP
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="cq7qF9f_qGlh" outputId="7bccaea7-5c10-4e89-b020-ef88adc9c72a"
BCdf.describe()
# + id="vS6ffnDqcjkz"
def simulate(epoch,tradeGap,beta=0.3,BCFrate=0.02,GFrate=0.01):
j=0
state=[[1000,0,0]]
transactionFee=[]
W=[]
w=[1,0,0]
for i in range(1,epoch):
#print(Gdf['Date'][j],BCdf['Date'][i])
        # Compute total assets
total=state[i-1][0]+state[i-1][1]*(1+BCdf['Return'][i])+state[i-1][2]*(1+Gdf['Return'][j])
        # Allocation decision
if i>20 and i%tradeGap==0:
            pR,pV,eP=portfolio(i,j,w,beta) # allocation weights (sum to 1)
if eP[2]!=0:
w=[0,eP[2],1-eP[2]]
else:
w=[1,0,0]
cash=w[0]*total
bc=w[1]*total
gold=w[2]*total
        # Transaction fees
bcFee=abs(bc-state[i-1][1])*BCFrate
goldFee=abs(gold-state[i-1][2])*GFrate
transactionFee.append(bcFee+goldFee)
        # Assets remaining after fees
total=total-bcFee-goldFee
bc=total*w[1]
gold=w[2]*total
cash=total-bc-gold
state.append([cash,bc,gold])
W.append(w)
        # Keep the bitcoin and gold date indices in sync
if BCdf['Date'][i]>=Gdf['Date'][j]:
j=j+1
return transactionFee,W,state
# + id="oy5OxCd9fKdW"
transactionFee,W,state=simulate(epochs,29,0.49,0.04,0.02)
# + [markdown] id="136etYbWIOqQ"
# | |transactionCost|Value|
# |---|---|---|
# |200%|5749.53|11648.55|
# |100%|4271.74|20320.91|
# |50%|2619.12|26812.17|
# |10%|618.24|33452.79|
#
# + colab={"base_uri": "https://localhost:8080/"} id="ooxKfKPAjg3O" outputId="46fc7ae9-fed6-44e9-ba0f-163335ed71e7"
sum(transactionFee)
# + id="0VBl1T856-QB"
state=np.array(state)
len(state)
asset=pd.DataFrame()
asset['Date']=BCdf['Date']
asset['Property']=state.sum(axis=1)
profit=[0]
for i in range(len(asset)-1):
profit.append((asset['Property'][i+1]-asset['Property'][i])/asset['Property'][i])
asset['profit_rate']=profit
# + id="8TJb-tCpHUVr"
asset=asset.set_index('Date')
# + id="J1_8hw6IJVcH"
def maximum_drawdown(asset):
    # Peak-to-trough decline: (peak - lowest value after the peak) / peak
    peak_idx = asset['Property'].idxmax()
    peak = asset['Property'][peak_idx]
    trough = asset['Property'][peak_idx:].min()
    return (peak - trough) / peak
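A numeric check of the peak-to-trough definition above, on a hypothetical equity curve:

```python
import numpy as np

prop = np.array([100.0, 150.0, 90.0, 120.0])        # hypothetical asset values
peak = prop.argmax()                                # peak at index 1 (value 150)
dd = (prop[peak] - prop[peak:].min()) / prop[peak]  # drop from 150 to 90
# dd → 0.4
```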
# + id="htXx7IVDRWNg"
def std(asset):
return asset['profit_rate'].std()
# + id="sW248NQKM142"
#def sharp(asset):
# (asset['Property'][len(asset)-1]-asset['Property'][0])/asset['Property'][0]/asset['Property']
# + colab={"base_uri": "https://localhost:8080/"} id="8b2nPVnVIZd0" outputId="56f0fb67-a870-4d3f-f836-0174004328b8"
df2017=pd.DataFrame(asset[['Property','profit_rate']]['2017'])
df2018=pd.DataFrame(asset[['Property','profit_rate']]['2018'])
df2019=pd.DataFrame(asset[['Property','profit_rate']]['2019'])
df2020=pd.DataFrame(asset[['Property','profit_rate']]['2020'])
df2021=pd.DataFrame(asset[['Property','profit_rate']]['2021'])
# + colab={"base_uri": "https://localhost:8080/", "height": 455} id="yysNDgb9RzpX" outputId="62a253b5-3185-45fb-bd1c-e1ca308c3074"
df2017
# + id="-L39D78dKDav"
x=[]
x.append(maximum_drawdown(df2017))
x.append(maximum_drawdown(df2018))
x.append(maximum_drawdown(df2019))
x.append(maximum_drawdown(df2020))
x.append(maximum_drawdown(df2021))
# + colab={"base_uri": "https://localhost:8080/"} id="dUMnCsLELqkt" outputId="4e33f313-8ab5-4afc-ca7f-271a5c08b13d"
x
# + [markdown] id="h2oPw0d-LwQL"
# |Date|maximum_drawdown|
# |---|---|
# |2017|23.9%|
# |2018|50.7%|
# |2019|29.6%|
# |2020|0%|
# |2021|33.3%
#
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="uLtm5m8sK_Ye" outputId="843e2264-68b5-469f-ad6f-1c53efbf2cbe"
plt.plot(range(2017,2022,1),x)
# + colab={"base_uri": "https://localhost:8080/"} id="38L4dpmuRlAa" outputId="9bf7d934-e54a-459a-b98d-940bdc9a5e0a"
year_std=[]
year_std.append(std(df2017))
year_std.append(std(df2018))
year_std.append(std(df2019))
year_std.append(std(df2020))
year_std.append(std(df2021))
year_std
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="Y0nT7DPZSfdg" outputId="8016129e-d53b-48f4-e67c-22e79cb764aa"
plt.plot(range(2017,2022,1),year_std)
# + [markdown] id="iMV5YnIUSlmG"
# |Date|std|
# |---|---|
# |2017|3.46%|
# |2018|1.47%|
# |2019|1.95%|
# |2020|2.29%|
# |2021|2.76%
#
# + colab={"base_uri": "https://localhost:8080/", "height": 467} id="zhh6RswvJwkF" outputId="154282b0-3774-4c5a-fa78-8740aeac97ee"
colors = ['#ff9999','#9999ff','#cc1234']
plt.figure(dpi=150,figsize=(10, 3))
plt.stackplot(BCdf['Date'],
              state[:,0],state[:,1],state[:,2], # variadic: accepts multiple y series
              labels = ['cash','bitcoin','gold'], # meaning of each stacked area
              #colors = colors # fill color of each area
)
ax=plt.gca()
#ax.xaxis.set_major_locator(MultipleLocator(90))
#ax.xaxis.set_tick_params(rotation=90)
plt.xlabel('Date')
plt.ylabel('Total Value')
plt.legend(loc = 'upper left')
# + [markdown] id="m_C5-u-PGheC"
# $\left(\frac{\text{return}}{\text{principal}}\right)^{-\frac{\text{holding days}}{365}}$
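Reading the formula above as annualizing the growth factor over the holding period (exponent $365/\text{days}$, the usual convention), a sketch with hypothetical numbers:

```python
# Hypothetical: principal 1000 grows to 2000 over 730 days
initial, final, days = 1000.0, 2000.0, 730
annualized = (final / initial) ** (365 / days) - 1   # annualized growth rate
# annualized → 2**0.5 - 1 ≈ 0.414
```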
# + id="eOYBTCaIjVy7"
res=pd.DataFrame()
res['Date']=BCdf['Date']
res['Property']=state.sum(axis=1)
# + id="7FfU0TTflcUm"
profit=[0]
for i in range(len(res)-1):
profit.append((res['Property'][i+1]-res['Property'][i])/res['Property'][i])
res['profit_rate']=profit
# + colab={"base_uri": "https://localhost:8080/"} id="oPTP-OC8mlkb" outputId="e9ee1b2a-1c18-40b6-d304-f2c0c8678d5b"
res['profit_rate'].std()
# + id="tuZoLh5f-_nM" colab={"base_uri": "https://localhost:8080/"} outputId="7e70f365-6ba1-438d-975f-7d3365f6e106"
res['Date'][1000]-res['Date'][1]
# + id="15V_6BYuFbQR"
pred_acc=[]
for i in range(20,len(Gdf)-1):
pred_acc.append(Gdf['pred_ret'][i]*Gdf['Return'][i+1])
# + colab={"base_uri": "https://localhost:8080/"} id="wuASCCUhFCZc" outputId="32aacdb3-d186-4206-c877-54a9294ee51b"
pred_acc=np.array(pred_acc)
len(pred_acc[pred_acc>=0])/len(pred_acc)
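`pred_acc` above scores a prediction as correct when the predicted and realized returns agree in sign, i.e. their product is non-negative; a small sketch:

```python
import numpy as np

pred = np.array([0.02, -0.01, 0.03, -0.02])  # hypothetical predicted returns
real = np.array([0.01,  0.02, 0.05, -0.01])  # hypothetical realized returns
acc = np.mean(pred * real >= 0)              # fraction of sign agreements
# acc → 0.75
```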
# + id="ovlkBJw6bHJr"
def test(epochs,gap,beta,BCFrate=0.02,GFrate=0.01):
transactionFee,W,state=simulate(epochs,gap,beta,BCFrate,GFrate)
state=np.array(state)
asset=pd.DataFrame()
asset['Date']=BCdf['Date']
asset['Property']=state.sum(axis=1)
profit=[0]
for i in range(len(asset)-1):
profit.append((asset['Property'][i+1]-asset['Property'][i])/asset['Property'][i])
asset['profit_rate']=profit
return asset['profit_rate'].std(),sum(transactionFee),state.sum(axis=1)[len(state)-1],(asset['Property'][asset['Property'].idxmax()]-asset['Property'][asset['Property'][asset['Property'].idxmax():].idxmin()])/asset['Property'][asset['Property'].idxmax()]
# + id="hVIm-yGu-zdp"
cycle_test_inf=[]
for i in range(1,35,2):
profit_rate_std,transactionFee,property_sum,maximum_drawdown=test(len(BCdf),i,0.49)
cycle_test_inf.append([i,profit_rate_std,transactionFee,property_sum,maximum_drawdown])
# + id="b6PtoZEV_R4o"
cycle_test_inf=np.array(cycle_test_inf)
# + colab={"base_uri": "https://localhost:8080/", "height": 400} id="pQPGsSSp_-h5" outputId="804026dc-7d9a-4ca1-ec4d-140b34d1ab62"
#plt.plot(range(1,35,2),test_inf[:,3],label='Value')
fig = plt.figure(dpi=100)
ax1 = fig.add_subplot()
ax1.plot(range(1,35,2),cycle_test_inf[:,3],label='Value')
ax1.set_xlabel('Trading Cycle(days)')
ax1.set_ylabel('Total Value')
ax1.legend()
#ax2 = ax1.twinx()
#ax2.plot(range(1,35,2),cycle_test_inf[:,4],label='Maximum Drawdown',c='r')
#ax2.set_ylabel('Maximum Drawdown')
#fig.legend(loc='upper left')
#ax2.legend(loc='upper left')
# + id="9wBXGyhrBMrh"
test_inf=[]
for i in range(1,100,1):
profit_rate_std,transactionFee,property_sum,maximum_drawdown=test(len(BCdf),29,i/100)
test_inf.append([i,profit_rate_std,transactionFee,property_sum,maximum_drawdown])
test_inf=np.array(test_inf)
# + colab={"base_uri": "https://localhost:8080/", "height": 400} id="QDhpf6gTIHrC" outputId="3426a569-93d7-40b7-c303-10aadf5c1705"
plt.figure(dpi=100)
plt.plot(range(1,100,1),test_inf[:,3],label='Value')
plt.xlabel('Risk threshold(%)')
plt.ylabel('Total Value')
plt.legend(loc = 'upper left')
# + colab={"base_uri": "https://localhost:8080/", "height": 400} id="9L_967YN1v7v" outputId="b74401fa-9522-47ae-b95b-2949938c17d8"
fig = plt.figure(dpi=100)
ax1 = fig.add_subplot()
ax1.plot(range(1,100,1),test_inf[:,3],label='Value')
ax1.set_xlabel('Risk threshold(%)')
ax1.set_ylabel('Total Value')
ax2 = ax1.twinx()
ax2.plot(range(1,100,1),test_inf[:,4],label='Maximum Drawdown',c='r')
ax2.set_ylabel('Maximum Drawdown')
ax1.legend()
ax2.legend()
# + id="Y6CM0olbJIFm"
profit_rate_std,transactionFee,property_sum,maximum_drawdown=test(len(BCdf),29,0.49)
# + colab={"base_uri": "https://localhost:8080/"} id="2_dZ1SmTtSYC" outputId="92eba0cf-7fcc-400a-c08f-5281fc28af87"
property_sum
# + [markdown] id="Ho764Hk3avOg"
# $w_t$: allocation vector for day $t$, $w_t\in\mathbb{R}^{3\times 1}$
# $T_t$: price on day $t$
# $R_t$: return on day $t$
# $\mu$: portfolio return
# $\sigma^2$: portfolio risk
# + [markdown] id="1eepT-wHzk9r"
# In traditional portfolio strategies, the mean return and its standard deviation are typically used to measure a stock's return and risk. In this problem we make an allocation decision once per trading cycle, and a plain mean and standard deviation cannot effectively capture an asset's behavior over the recent period, so we compute them with exponential weighting instead.
#
# Formula
#
# where $\beta$ is the rate at which the weights decay; the result approximates the mean and variance of roughly the most recent $\frac{1}{1-\beta}$ observations.
# + id="CgZtO5-Z6nn5"
| 2022c.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 151} colab_type="code" executionInfo={"elapsed": 4068, "status": "ok", "timestamp": 1590843742484, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02763474582244212418"}, "user_tz": -120} id="lmUq5kQ-A8JC" outputId="0058866e-9b4d-41de-84d6-583a4f0f4b4d"
'''
#Mount drive
class FileSystem:
def __init__(self, colab_dir="PersonTracking", local_dir="./", data_dir="data"): # replace with dlav path
IN_COLAB = 'google.colab' in sys.modules
if (IN_COLAB):
from google.colab import drive
drive.mount('/gdrive')
self.root_dir = os.path.join("/gdrive/My Drive/", colab_dir)
else:
self.root_dir = local_dir
self.data_dir = data_dir
self.change_directory = False
def data_path(self, name):
return os.path.join(self.data_dir, name) if self.change_directory else os.path.join(self.root_dir, self.data_dir, name)
def path(self, name):
return os.path.join("./", name) if self.change_directory else os.path.join("./", self.root_dir, name)
def cd(self):
%cd {self.root_dir}
%ls
self.change_directory = True
fs = FileSystem()
fs.cd()
'''
# + colab={} colab_type="code" id="K7oks2aQkZKz"
import time
import datetime
import argparse
import random
import os, sys
import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable
import torchvision.transforms as T
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.ticker import NullLocator
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
import numpy as np
from tqdm import tqdm
import io
import PIL
from PIL import Image
import requests
import cv2
import shutil
#from google.colab.patches import cv2_imshow
import os.path
import glob
import warnings
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 7158, "status": "ok", "timestamp": 1590843260130, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02763474582244212418"}, "user_tz": -120} id="LqvwXczNaUws" outputId="89110da4-6e67-44f4-960b-74af0861a491"
# !pwd
# + colab={"base_uri": "https://localhost:8080/", "height": 252} colab_type="code" executionInfo={"elapsed": 19955, "status": "ok", "timestamp": 1590843851657, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02763474582244212418"}, "user_tz": -120} id="d0BsRp7dCd5y" outputId="4a5fdb14-c4a7-4b42-cb05-f5023149c675"
try:
shutil.rmtree('reid-strong-baseline')
except OSError as e:
pass
try:
shutil.rmtree('YOLOv3-pedestrian')
except OSError as e:
pass
# #!git clone https://github.com/nodiz/reid-strong-baseline.git
# !git clone https://github.com/nodiz/YOLOv3-pedestrian.git
# #!cd YOLOv3-pedestrian && git checkout AnchorsForVideo
# !cp /home/fabien/Desktop/DLfAD/YOLOv3-pedestrian/weights/yolov3_ckpt_current_50.pth YOLOv3-pedestrian/weights
# + [markdown] colab_type="text" id="8DXZr9lUO412"
# **Affiliation**
# + colab={} colab_type="code" id="Bzo4ZU--O27j"
def crop_affilition(distmat,dist_thld,framenr,gallery_path,name_query,name_gallery):
#For distmat: rows are queries, columns are gallery entries
maxiter=np.shape(distmat)[0]
gallength=len(os.listdir(gallery_path))
querynr=0
query = np.arange(0,maxiter)
aff_mat=np.zeros((maxiter,2))
while querynr < maxiter:
q_min=np.amin(distmat)
result = np.where(distmat == q_min)
result=(result[0][0], result[1][0])
idx=(result[0], result[1])
#print(distmat[idx], dist_thld)
if (distmat[idx]<dist_thld and querynr<gallength):
#print('Put Query '+str(result[0])+' into Gallery ' +f'{result[1]}')
shutil.move(name_query[result[0]], name_gallery[result[1]][:-9]+'/'+str(framenr)+'.jpg')
aff_mat[querynr,:]=[result[0],int(name_gallery[result[1]][-13:-9])]
for i in range(0,np.shape(name_gallery)[0]):
if name_gallery[i][-13:-9] == name_gallery[result[1]][-13:-9]:
distmat[:,i]=float("inf")
distmat[:,result[1]]=float("inf")
distmat[result[0],:]=float("inf")
query = np.delete(query, np.where(query == result[0]))
querynr+=1
else:
#put remaining queries in new folder
for i in query:
#print('Put Query '+str(i)+' into new Gallery ' +f'{gallength}')
os.makedirs(name_gallery[0][:-14]+'/'+f'{gallength:04}', exist_ok = True)
shutil.move(name_query[i], name_gallery[0][:-14]+'/'+f'{gallength:04}'+'/'+str(framenr)+'.jpg')
aff_mat[querynr,:]=[i,gallength]
gallength+=1
querynr+=1
return aff_mat
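The loop above implements a greedy assignment: repeatedly take the global minimum of the distance matrix, pair that query with that gallery identity, and invalidate the used row and column. A minimal NumPy sketch of the core idea (the matrix and threshold values are illustrative, not taken from this project):

```python
import numpy as np

# Illustrative distance matrix: rows are queries, columns are gallery identities
distmat = np.array([[0.1, 0.9],
                    [0.8, 0.2],
                    [0.7, 0.6]])
threshold = 0.5  # distances above this would start a new identity instead

pairs = []
d = distmat.copy()
while np.isfinite(d).any():
    q, g = np.unravel_index(np.argmin(d), d.shape)
    if d[q, g] >= threshold:
        break  # every remaining distance is too large to be a match
    pairs.append((int(q), int(g)))
    d[q, :] = np.inf  # this query has been assigned
    d[:, g] = np.inf  # this gallery identity is taken

print(pairs)  # [(0, 0), (1, 1)]  (query 2 stays unmatched)
```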
# + [markdown] colab_type="text" id="Ebhh6I_BT47i" pycharm={"name": "#%% md\n"}
# **RE-ID Function**
# + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" executionInfo={"elapsed": 1562, "status": "ok", "timestamp": 1590843920108, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02763474582244212418"}, "user_tz": -120} id="mfAdtf-HhbTH" outputId="1a1bfb14-99ed-4a3f-e5f7-abdb22492939" pycharm={"is_executing": true, "name": "#%%\n"}
# Build model
# #!ls #/gdrive#/.shortcut-targets-by-id/ #1xcANAd3HJkfxNCBAtjJOyodOgmX0iO8b/PersonTracking
from reidLib.modeling.baseline import Baseline
__file__ = 'reidLib/modeling'
sys.path.append(os.path.dirname(__file__))
def build_model(num_classes):
# model = Baseline(num_classes, cfg.MODEL.LAST_STRIDE, cfg.MODEL.PRETRAIN_PATH, cfg.MODEL.NECK, cfg.TEST.NECK_FEAT \ after)
model = Baseline(num_classes, 1, "WeightsReid/resnet50-19c8e357.pth", 'bnneck', 'after', 'resnet50', 'imageNet') # maybe try self instead of imageNet
return model
device = 'cuda'
model = build_model(1041) # 1041 identities in training dataset
model.eval() # evaluation mode
#model=torch.load('WeightsReid/resnet50_ibn_a_center_param_120.pth')
model.load_param('WeightsReid/WeightFab/center/resnet50_model_100.pth')
model.to(device);
# + colab={} colab_type="code" id="PaExWreZf8t0" pycharm={"is_executing": true}
# Prepare transform
normalize_transform = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
transform = T.Compose([
T.Resize([256, 128]),
T.ToTensor(),
normalize_transform
])
#Dataset
class ImageDataset(Dataset):
"""Image Person ReID Dataset"""
def __init__(self, dataset, transform=None):
self.dataset = dataset
self.transform = transform
def __len__(self):
return len(self.dataset)
def __getitem__(self, index):
img_path = self.dataset[index]
img = Image.open(img_path).convert('RGB')
if self.transform is not None:
img = self.transform(img)
return img
# Dataloader
def val_collate_fn(batch):
return torch.stack(batch, dim=0)
# + colab={} colab_type="code" id="nm86mLxyToqB" pycharm={"is_executing": true}
def reid(gallery_path='pic/*', query_path='pic/*'):
#Load images query
Gallery_size = 140
imgs_query = sorted(glob.glob(os.path.join(query_path, "*.*")))
num_query = np.shape(imgs_query)[0] # first x images in dataset
# Load images gallery
imgs_gallery = sorted(glob.iglob(os.path.join(gallery_path, "*.*")),key=os.path.getctime, reverse=True) #pic = gallery_path
if np.shape(imgs_gallery)[0]>Gallery_size :
imgs_gallery = imgs_gallery[:Gallery_size ]
imgs=imgs_query + imgs_gallery
#print([os.path.basename(x) for x in imgs[:num_query]])
#print([os.path.basename(x) for x in imgs[num_query:]])
# build dataset and dataloader !!!! np.shape(imgs)[0]
imdatas = ImageDataset(imgs, transform)
demo_loader = DataLoader(
imdatas, batch_size=np.shape(imgs)[0], shuffle=False, num_workers=4,
collate_fn=val_collate_fn
)
# model evaluation
with torch.no_grad():
for batch in demo_loader:
batch = batch.to(device)
feat = model(batch) # (bs, 2048)
feat_norm = 1
if feat_norm:
feat = torch.nn.functional.normalize(feat, dim=1, p=2)
# query
qf = feat[:num_query]
# gallery
gf = feat[num_query:]
m, n = qf.shape[0], gf.shape[0]
distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
distmat.addmm_(1, -2, qf, gf.t())
distmat = distmat.cpu().numpy()
return distmat, imgs_query, imgs_gallery
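The `distmat` computation above uses the algebraic expansion ||q - g||^2 = ||q||^2 + ||g||^2 - 2 q.g rather than forming all pairwise differences explicitly. A small NumPy sketch of the same trick, checked against the naive computation (the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
qf = rng.random((3, 5))  # 3 query feature vectors of dimension 5
gf = rng.random((4, 5))  # 4 gallery feature vectors of dimension 5

# Expansion: ||q - g||^2 = ||q||^2 + ||g||^2 - 2 q.g
distmat = (qf ** 2).sum(axis=1, keepdims=True) + (gf ** 2).sum(axis=1) - 2.0 * qf @ gf.T

# Naive pairwise computation for comparison
direct = ((qf[:, None, :] - gf[None, :, :]) ** 2).sum(axis=2)
assert np.allclose(distmat, direct)
print(distmat.shape)  # (3, 4)
```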
# + [markdown] pycharm={"name": "#%% md\n"}
#
# + pycharm={"is_executing": true, "name": "#%%\n"}
def white_balance(img):
result = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
avg_a = np.average(result[:, :, 1])
avg_b = np.average(result[:, :, 2])
result[:, :, 1] = result[:, :, 1] - ((avg_a - 128) * (result[:, :, 0] / 255.0) * 1.1)
result[:, :, 2] = result[:, :, 2] - ((avg_b - 128) * (result[:, :, 0] / 255.0) * 1.1)
result = cv2.cvtColor(result, cv2.COLOR_LAB2BGR)
return result
def show(final):
print('display')
    cv2.imshow('frame', final)  # display the argument, not the global frame
cv2.waitKey(0)
cv2.destroyAllWindows()
# + [markdown] colab_type="text" id="PnuKSzPjs4cO" pycharm={"name": "#%% md\n"}
# **Bounding Box**
# + colab={} colab_type="code" id="13xkcF6ns2VZ" pycharm={"is_executing": true, "name": "#%%\n"}
def bboxdrawer(img_path,txtfile_path,aff_mat, gallery_path, idx):
# Creates bounding box by overwriting raw frame at img_path
aff_mat=aff_mat[aff_mat[:,0].argsort()]
img = np.array(Image.open(img_path))
plt.figure()
fig, ax = plt.subplots(1)
ax.imshow(img)
cmap = plt.get_cmap("tab20b")
hsvBig = plt.get_cmap('hsv', 512)
cmap = ListedColormap(hsvBig(np.linspace(0, 1, 256)))
colors = [cmap(i) for i in np.linspace(0, 1, 256)]
bboxmatrix = np.loadtxt(txtfile_path)
if np.shape(aff_mat)[0] == 1:
bboxmatrix[0] = aff_mat[0][1]
bboxmatrix=[bboxmatrix]
bb_boxfor=bboxmatrix
else:
bboxmatrix[:,0]=aff_mat[:,1]
bboxmatrix=[bboxmatrix]
bb_boxfor=bboxmatrix[0]
bbox_colors = np.array(colors)[idx]
for label,x1,x2,y1,y2 in bb_boxfor:
'''
label=bboxmatrix[j,0]
x1=bboxmatrix[j,1]
x2=bboxmatrix[j,2]
y1=bboxmatrix[j,3]
y2=bboxmatrix[j,4]
'''
box_w = x2 - x1
box_h = y2 - y1
color = bbox_colors[int(label-1)]
#print(color)
bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=1, edgecolor=color, facecolor="none")
ax.add_patch(bbox)
plt.text(
x1,
y1,
s=int(label),
color="white",
verticalalignment="top",
bbox={"color": color, "pad": 0},
)
plt.axis("off")
plt.gca().xaxis.set_major_locator(NullLocator())
plt.gca().yaxis.set_major_locator(NullLocator())
plt.savefig(img_path, bbox_inches="tight", pad_inches=0.0)
#plt.close()
#bboxdrawer('Frames/MOT16-11-raw/frame2.jpg','DetectedImages/MOT16-11-raw/Detection2.txt')
#bboxdrawer('Frames/MOT16-11-raw/frame3.jpg','DetectedImages/MOT16-11-raw/Detection3.txt')
# + [markdown] colab_type="text" id="0LudXFQgk6tu" pycharm={"name": "#%% md\n"}
# **Detection Algorithm**
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 30881, "status": "error", "timestamp": 1590843961126, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "02763474582244212418"}, "user_tz": -120} id="tfisITLwEo6V" outputId="c7f2962a-808d-4028-db66-5d52ba198f11" pycharm={"is_executing": true, "name": "#%%\n"}
#clear sample folder in YoloV3
files = glob.glob('YOLOv3-pedestrian/data/samples/*')
for f in files:
os.remove(f)
files = glob.glob('YOLOv3-pedestrian/output/*')
for f in files:
os.remove(f)
#define and read video
videoname = 'MOT16-11-raw'
threshold=0.13 #0.13
# Color BBOX
n_cls = 150 #Maximum color
random.seed(1)
index=np.linspace(0,255, n_cls).astype(int)
np.random.shuffle(index)
# data aug
hsv=True
wbalance=True
#delete at every execution folder such that code can be executed again
try:
shutil.rmtree('DetectedImages/'+videoname)
except OSError as e:
pass
frame_path='Frames/'+videoname
try:
shutil.rmtree(frame_path)
except OSError as e:
pass
os.makedirs(frame_path, exist_ok=True)
videoframe = cv2.VideoCapture('VideoToTrack/'+videoname+'.webm')
framenr=0
firstdetection=True
if (videoframe.isOpened()== False):
print("Error opening video stream or file")
while(videoframe.isOpened()):
# Capture frame-by-frame
ret, frame = videoframe.read()
if ret == True:
framenr+=1
# Display the resulting frame
if wbalance ==True:
frame = white_balance(frame)
if hsv==True:
hsvImg = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
#multiply by a factor to change the saturation
hsvImg[...,1] = hsvImg[...,1]*1.5
#multiply by a factor of less than 1 to reduce the brightness
hsvImg[...,2] = hsvImg[...,2]*0.99
frame=cv2.cvtColor(hsvImg,cv2.COLOR_HSV2BGR)
#cv2_imshow(frame)
imagename='YOLOv3-pedestrian/data/samples/'+videoname+str(framenr)+'.jpg'
cv2.imwrite(imagename, frame)
cv2.imwrite(frame_path+'/frame'+str(framenr)+'.jpg', frame)
# !python3 YOLOv3-pedestrian/detectbbox.py --conf_thres 0.9 --img_size 540 --image_folder YOLOv3-pedestrian/data/samples --model_def YOLOv3-pedestrian/config/yolov3-custom.cfg --output_dir YOLOv3-pedestrian/output --weights_path YOLOv3-pedestrian/weights/yolov3_ckpt_current_50.pth --class_path YOLOv3-pedestrian/data/classes.names
os.remove('YOLOv3-pedestrian/data/samples/'+videoname+str(framenr)+'.jpg')
os.makedirs('DetectedImages/'+videoname, exist_ok=True)
os.makedirs('DetectedImages/'+videoname+'-txtfiles', exist_ok=True)
#if a detection is made, do re-identification; otherwise warn that nothing was detected
if os.path.exists('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.txt') and np.size(np.loadtxt('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.txt')) != 0:
print(firstdetection)
shutil.move('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.txt', 'DetectedImages/'+videoname+'-txtfiles/Detection'+str(framenr)+'.txt')
#for the first frame put every detected person crop in separate folder
if firstdetection==True:
DIR = 'YOLOv3-pedestrian/output/'
detection = sorted(glob.glob(os.path.join(DIR, "*.jpg")))
print(detection, range(1,np.size(detection)+1) )
for pid1 in range(0,np.size(detection)):
os.mkdir('DetectedImages/'+videoname+'/'+f'{pid1:04}')
shutil.move('YOLOv3-pedestrian/output/'+f'{pid1+1:04}'+'.jpg', 'DetectedImages/'+videoname+'/'+f'{pid1:04}'+'/'+f'{framenr:04}'+'.jpg' )
firstdetection=False
else: # np.size(np.loadtxt('DetectedImages/'+videoname+'-txtfiles/Detection'+str(framenr)+'.txt')) != 0:
#implement person reid here and affilate image to correct folder
distmat, name_query, name_gallery = reid(gallery_path = 'DetectedImages/'+videoname+'/*', query_path='YOLOv3-pedestrian/output/')
print('QUER', np.shape(name_query))
print('GAL', np.shape(name_gallery))
amat=crop_affilition(distmat= distmat, dist_thld = threshold, framenr=f'{framenr:04}', gallery_path = 'DetectedImages/'+videoname, name_query = name_query, name_gallery = name_gallery)
bboxdrawer(frame_path+'/frame'+str(framenr)+'.jpg','DetectedImages/'+videoname+'-txtfiles/'+'Detection'+str(framenr)+'.txt',amat,gallery_path = 'DetectedImages/'+videoname, idx=index)
torch.cuda.empty_cache()
elif os.path.exists('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.txt') and np.size(np.loadtxt('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.txt')) == 0: #pippo
os.remove('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.txt')
warnings.warn('no detection in frame number '+str(framenr))
else:
warnings.warn('no detection in frame number '+str(framenr))
# Break the loop when video is finished
else: #here
break
# When everything done, release the video capture object
videoframe.release()
# Closes all the frames
cv2.destroyAllWindows()
# + colab={} colab_type="code" id="-ywIWNlNgU6U" pycharm={"is_executing": true}
# !pwd
# !ls DetectedImages/MOT16-10-raw/
# + colab={} colab_type="code" id="VEqH1TLh2ocW" pycharm={"is_executing": true}
'''
aff_mat=amat
aff_mat=aff_mat[aff_mat[:,0].argsort()]
cmap = plt.get_cmap("tab20b")
hsvBig = plt.get_cmap('hsv', 512)
cmap = ListedColormap(hsvBig(np.linspace(0, 1, 256)))
colors = [cmap(i) for i in np.linspace(0, 1, 256)]
bboxmatrix = np.loadtxt('DetectedImages/'+videoname+'-txtfiles/'+'Detection'+str(framenr)+'.txt')
if np.shape(aff_mat)[0] == 1:
bboxmatrix[0] = aff_mat[0][1]
bboxmatrix=[bboxmatrix]
bb_boxfor=bboxmatrix
else:
bboxmatrix[:,0]=aff_mat[:,1]
bb_boxfor=bboxmatrix[0]
#if type(bboxmatrix)==np.ndarray:
#print('pippo')
#bboxmatrix=[bboxmatrix]
n_cls=len(os.listdir('DetectedImages/'+videoname))
random.seed(1)
bbox_colors = random.sample(colors, n_cls)
print(bb_boxfor)
for label,x1,x2,y1,y2 in bb_boxfor:
print('pippo')
'''
# + [markdown] colab_type="text" id="TxB5R3a2tPEa" pycharm={"name": "#%% md\n"}
# Frame to video
# + colab={} colab_type="code" id="QxUPafD6C-Vo" pycharm={"is_executing": true}
image_folder = 'Frames/'+videoname +'/'
video_name = 'Video/'+videoname+'_BoundingBoxv3.avi'
#images = [img for img in os.listdir(image_folder) if img.endswith(".jpg")]
#print(images[0:10])
images = sorted(glob.iglob(os.path.join(image_folder, "*.jpg")),key=os.path.getctime, reverse=True)[::-1]
images= images[1:]
frame = cv2.imread(images[0])
height, width, layers = frame.shape
video = cv2.VideoWriter(video_name, 0, 5, (width,height))
print(height,width)
for image in images:
a = cv2.imread(image)
a = cv2.resize(a, (width,height), interpolation = cv2.INTER_AREA)
print(image, a.shape)
video.write(a)
cv2.destroyAllWindows()
video.release()
# + [markdown] colab_type="text" id="mcqMC0W7DhYh" pycharm={"name": "#%% md\n"}
# Tests (not important)
# + pycharm={"is_executing": true, "name": "#%%\n"}
'''
videoname = 'MOT16-10-raw'
videoframe = cv2.VideoCapture('VideoToTrack/'+videoname+'.webm')
framenr=0
while(videoframe.isOpened()):
ret, frame = videoframe.read()
if ret ==True:
#cv2_imshow(frame)
framenr+=1
imagename='YOLOv3-pedestrian/data/samples/'+videoname+str(framenr)+'.jpg'
cv2.imwrite(imagename, frame)
%cd YOLOv3-pedestrian/
!python detect.py --model_def config/yolov3-custom.cfg --weights_path weights/yolov3_ckpt_current_50.pth
os.remove('data/samples/'+videoname+str(framenr)+'.jpg')
%cd ..
os.makedirs('Detection/'+videoname, exist_ok=True)
shutil.move('YOLOv3-pedestrian/output/'+videoname+str(framenr)+'.png', 'Detection/'+videoname+str(framenr)+'.png' )
#cv2_imshow( 'Detection/'+videoname+str(framenr)+'.png')
else:
break
videoframe.release()
# Closes all the frames
cv2.destroyAllWindows()
'''
# + pycharm={"is_executing": true, "name": "#%%\n"}
videoname = 'MOT16-11-raw'
videoframe = cv2.VideoCapture('VideoToTrack/'+videoname+'.webm')
if (videoframe.isOpened()== False):
print("Error opening video stream or file")
else:
ret, frame = videoframe.read()
cv2.imshow('blabla.jpg',frame)
hsvImg = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
#multiply by a factor to change the saturation
hsvImg[...,1] = hsvImg[...,1]*1.5
#multiply by a factor of less than 1 to reduce the brightness
hsvImg[...,2] = hsvImg[...,2]*0.5
image=cv2.cvtColor(hsvImg,cv2.COLOR_HSV2BGR)
cv2.imshow('blabla2.jpg',image)
# + pycharm={"is_executing": true, "name": "#%%\n"}
# !pwd
# !ls
#os.remove('MOT16-10-raw1.jpg')
# !pwd
| notebooks/VideoPersonTracking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pandas import DataFrame
from ioUtils import getFile
manualEntries = getFile("manualEntries.yaml")
medf = DataFrame(manualEntries).T
artistCnts = medf["ArtistName"].value_counts()
artistCnts.head(100).tail(25)
# +
# Assassin -> 4
# Juju -> 4
# Exile -> 3
# Kamelot -> 2
# <NAME> -> 2
# <NAME>
| mergers/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Read and write files
# We have made good progress, and now we can get down to the serious business of manipulating data files. This is one of the most important topics in this training.
#
#
# N.B.: Most of the files in `./data/` are files that will be used in the exercises below.
# To open/edit a file in python we use the `open()` function.
#
# This function takes as first parameter the path of the file (*relative* or *absolute*) and as second parameter the type of opening, i.e. reading or writing mode.
#
# *A relative path in computing is a path that takes into account the current location.*
# The path is **relative** to where it is called from.
#
# *An absolute path is a complete path that can be read regardless of the reading location.*
#
#
f = open("./data/data.txt", "r") # r for "read"
# - `"r"`, for a read opening (READ).
#
# - `"w"`, for a write opening (WRITE), each time the file is opened, the content of the file is overwritten. If the file does not exist, python creates it.
#
# *The Python docs say that w+ will "overwrite the existing file if the file exists". So as soon as you open a file with w+, it is now an empty file: it contains 0 bytes. If it used to contain data, that data has been truncated — cut off and thrown away — and now the file size is 0 bytes, so you can't read any of the data that existed before you opened the file with w+. If you actually wanted to read the previous data and add to it, you should use r+ instead of w+* [[Source]](https://stackoverflow.com/questions/16208206/confused-by-python-file-mode-w#comment83227862_16208298)
#
#
#
# - `"a"`, for an opening in add mode at the end of the file (APPEND). If the file does not exist, python creates it.
#
# - `"x"`, creates a new file and opens it for writing
#
# You can also append a `+` and a `b` to nearly all of the above commands. [[More info here]](https://stackabuse.com/file-handling-in-python/)
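As a quick demonstration of the modes listed above (using a temporary file rather than the course data):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:   # "w": create or truncate, then write
    f.write("first line\n")
with open(path, "a") as f:   # "a": append at the end of the file
    f.write("second line\n")
with open(path, "r") as f:   # "r": read
    content = f.read()

print(content.splitlines())  # ['first line', 'second line']
```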
# Like any opened resource, a file must be closed once you are done with it. To do this, we use the `close()` method.
f.close()
# Let's find out what's going on there
f = open("./data/data.txt", "r")
print(f.read())
f.close()
# Another way to open a file, which closes it automatically
with open("./data/data.txt", "r") as fichier:
print(fichier.read())
# Can you put the contents of this file in the form of a list in which each element is a sentence?
# *(Use `.split()` for example...)*
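One possible sketch, shown on an inline string since the exact content of `data.txt` may vary: split on the full stop and drop empty pieces.

```python
# Inline text stands in for the contents of "./data/data.txt"
text = "First sentence. Second sentence. Third sentence."

sentences = [s.strip() for s in text.split(".") if s.strip()]
print(sentences)  # ['First sentence', 'Second sentence', 'Third sentence']
```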
# To write to a file, just open it (existing or not), write to it, and close it. We open it in `"w"` mode so that any previous data is deleted and new data can be added.
file = open("./data/write.txt", "w")
file.write("Hi everyone, I'm adding sentences to the file !")
file.close()
# Can you take the content of the `data.txt` file from the `.data/` directory, capitalize all the words and write them in the file that you created just before, after the sentences you added?
#
# +
# It's up to you to write the end
arr =[]
with open("./data/write.txt", "r+") as fichier:
    pass  # Add your code
# -
# ## Management of directory paths...
# The `os` module is a library that provides portable way of using operating system dependent functionality.
# In this chapter, we are interested in using its powerful file path handling capabilities using `os.path`
import os
# Each file or folder is associated with a path, a kind of address that identifies it unambiguously. Two files in the same folder cannot have the same name (unless their extensions differ).
#
# As said before, there are two kinds of paths: the absolute path from the root of your file system and the relative path from the folder being read.
import os.path
# Using the `help()` function, we can see the available methods
help(os.path)
# To know your current absolute path, use `abspath('')`
# In Python, a path is a string, so the usual string methods can be used to manipulate it.
path=os.path.abspath('')
print(path)
print(type(path))
# To know the part of the path that consists of directories, use `dirname(path)`.
os.path.dirname(path)
# To know the part of the path that is your file only, use `basename(path)`.
os.path.basename(path)
# To add a directory, let's say "text", to the path, we use `join()`
rep_text=os.path.join(path, "text")
print(rep_text)
# To retrieve all the elements of a folder as a list, you can use the `listdir()` method.
# Items are returned as a list, which includes folders and hidden files.
os.listdir("../")
# ### How to display all the elements of a folder as well as its child folders?
#
# With the function `walk()`:
#
# ```
# walk(top, topdown=True, onerror=None, followlinks=False)
# ```
#
# +
folder_path = os.path.abspath('./')
print(folder_path)
for path, dirs, files in os.walk(folder_path):
for filename in files:
print(filename)
# -
# Put all the **`.txt` files** from the `data/` directory into a variable.
# Then, copy the content of all the files from this variable into a file in `data/` that you will name `final.txt`
#
# New task. Can you open all the files from your `data/` directory and save all their contents in a variable,
# using a loop?
# Finally, save this concatenated content in a new file.
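A possible sketch for this task, using a temporary directory in place of `data/` so the snippet is self-contained (the file names and contents are illustrative):

```python
import glob
import os
import tempfile

data_dir = tempfile.mkdtemp()  # stands in for "./data/"
for name, body in [("a.txt", "alpha\n"), ("b.txt", "beta\n")]:
    with open(os.path.join(data_dir, name), "w") as f:
        f.write(body)

# Gather every .txt file (sorted for a stable order) and concatenate the contents
contents = ""
for txt_path in sorted(glob.glob(os.path.join(data_dir, "*.txt"))):
    with open(txt_path, "r") as f:
        contents += f.read()

# Write the concatenated content into final.txt
with open(os.path.join(data_dir, "final.txt"), "w") as f:
    f.write(contents)

print(contents)  # "alpha\nbeta\n"
```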
| Content/1.python/2.python_advanced/04.File-handling/00.python_handling_file.ipynb |
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # max_flow_winston1
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/max_flow_winston1.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/max_flow_winston1.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Max flow problem in Google CP Solver.
From Winston 'Operations Research', page 420f, 423f
Sunco Oil example.
Compare with the following models:
* MiniZinc: http://www.hakank.org/minizinc/max_flow_winston1.mzn
* Comet: http://hakank.org/comet/max_flow_winston1.co
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
import sys
from ortools.constraint_solver import pywrapcp
# Create the solver.
solver = pywrapcp.Solver('Max flow problem, Winston')
#
# data
#
n = 5
nodes = list(range(n))
# the arcs
# Note:
# This is 1-based to be compatible with other
# implementations.
arcs1 = [[1, 2], [1, 3], [2, 3], [2, 4], [3, 5], [4, 5], [5, 1]]
# convert arcs to 0-based
arcs = []
for (a_from, a_to) in arcs1:
a_from -= 1
a_to -= 1
arcs.append([a_from, a_to])
num_arcs = len(arcs)
# capacities
cap = [2, 3, 3, 4, 2, 1, 100]
# convert arcs to matrix
# for sanity checking below
mat = {}
for i in nodes:
for j in nodes:
c = 0
for k in range(num_arcs):
if arcs[k][0] == i and arcs[k][1] == j:
c = 1
mat[i, j] = c
#
# declare variables
#
flow = {}
for i in nodes:
for j in nodes:
flow[i, j] = solver.IntVar(0, 200, 'flow %i %i' % (i, j))
flow_flat = [flow[i, j] for i in nodes for j in nodes]
z = solver.IntVar(0, 10000, 'z')
#
# constraints
#
solver.Add(z == flow[n - 1, 0])
# capacity of arcs
for i in range(num_arcs):
solver.Add(flow[arcs[i][0], arcs[i][1]] <= cap[i])
# inflows == outflows
for i in nodes:
s1 = solver.Sum([
flow[arcs[k][0], arcs[k][1]] for k in range(num_arcs) if arcs[k][1] == i
])
s2 = solver.Sum([
flow[arcs[k][0], arcs[k][1]] for k in range(num_arcs) if arcs[k][0] == i
])
solver.Add(s1 == s2)
# sanity: just arcs with connections can have a flow
for i in nodes:
for j in nodes:
if mat[i, j] == 0:
solver.Add(flow[i, j] == 0)
# objective: maximize z
objective = solver.Maximize(z, 1)
#
# solution and search
#
db = solver.Phase(flow_flat, solver.INT_VAR_DEFAULT, solver.INT_VALUE_DEFAULT)
solver.NewSearch(db, [objective])
num_solutions = 0
while solver.NextSolution():
num_solutions += 1
print('z:', z.Value())
for i in nodes:
for j in nodes:
print(flow[i, j].Value(), end=' ')
print()
print()
print('num_solutions:', num_solutions)
print('failures:', solver.Failures())
print('branches:', solver.Branches())
print('WallTime:', solver.WallTime(), 'ms')
| examples/notebook/contrib/max_flow_winston1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="rnuei4rDdyyN" colab_type="code" colab={}
#importing the libraries
import os # for file related and system handling
import shutil ## for file related and system handling
# + id="VKgUYQg6zEva" colab_type="code" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": "OK"}}, "base_uri": "https://localhost:8080/", "height": 78} outputId="2b2a2bf7-b451-48cf-b3e6-de017619a1c2"
# !pip install -q kaggle
from google.colab import files
files.upload()
# ! mkdir ~/.kaggle
# ! cp kaggle.json ~/.kaggle/
# ! chmod 600 ~/.kaggle/kaggle.json
# # ! kaggle competitions download -c 'name-of-competition'
# + id="-gghZp3CzdPB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="8e5286f6-b856-4311-deb1-4a03655d5bf8"
# !kaggle competitions download -c dogs-vs-cats-redux-kernels-edition # to download the data from kaggle api command
# + id="Tvfj21AhztbA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 86} outputId="970b811c-2635-4ec6-86cf-0581092d9810"
#Create a separate directory for each class
# !mkdir /content/Test
# !mkdir /content/Train
# !mkdir /content/Train/dog
# !mkdir /content/Train/cat
# + id="gdN8GfMR1HII" colab_type="code" colab={}
#Unzipping the data
# !unzip train.zip
# !unzip test.zip
# + id="iEXa1GMb0fXk" colab_type="code" colab={}
#Copying the data to required folders
for filename in os.listdir("/content/train"):
# print(filename)
if "cat" in filename:
shutil.copy(os.path.join("/content/train",filename),"/content/Train/cat")
else :
shutil.copy(os.path.join("/content/train",filename),"/content/Train/dog")
for filename in os.listdir("/content/test"):
shutil.copy(os.path.join("/content/test",filename),"/content/Test")
# + id="etm3YcEw-uqb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="6615cdea-4036-4a76-e7ae-862da2fb9743"
#Defining model
#Alexnet
import keras
model=keras.Sequential()
model.add(keras.layers.Conv2D(filters=96,kernel_size=(11,11),strides=4,padding='valid',activation='relu',input_shape=(150,150,3)))
model.add(keras.layers.MaxPool2D(pool_size=(3,3),strides=2))
model.add(keras.layers.Conv2D(filters=256,kernel_size=(5,5),padding='same',activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=(3,3),strides=2))
model.add(keras.layers.Conv2D(filters=384,kernel_size=(3,3),padding='same',activation='relu'))
# model.add(keras.layers.Conv2D(filters=384,kernel_size=(3,3),padding='same',activation='relu'))
# model.add(keras.layers.Conv2D(filters=384,kernel_size=(3,3),padding='same',activation='relu'))
# model.add(keras.layers.MaxPool2D(pool_size=(3,3),strides=2))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(4096,activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(4096,activation='relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(units = 1, activation = 'sigmoid'))
from keras import optimizers
model.compile(loss='binary_crossentropy',optimizer=optimizers.RMSprop(lr=1e-3),metrics=['accuracy'])
model.summary()
# + id="sgQL0eKfHjm8" colab_type="code" colab={}
import keras
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
validation_split=0.2) # set validation split
batch_size=32
train_generator = train_datagen.flow_from_directory(
"/content/Train",
target_size=(150, 150),
batch_size=batch_size,
class_mode='binary',
subset='training')
# set as training data
validation_generator = train_datagen.flow_from_directory(
"/content/Train", # same directory as training data
target_size=(150, 150),
batch_size=batch_size,
class_mode='binary',
subset='validation')
# set as validation data
nb_epochs=15
model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = nb_epochs)
# + id="mrn9kuSp7dJK" colab_type="code" colab={}
from keras.models import load_model
model.save('Alexnet_version1.h5') # creates a HDF5 file 'my_model.h5'
model = load_model('Alexnet_version1.h5')
# + id="ppypkC2TDfLB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="28af33d3-2503-4b7b-d890-9a3944e16411"
from keras.models import load_model
from keras.preprocessing import image
import numpy as np
import os
# image folder
folder_path = '/content/Test'
# path to model
model_path = "Alexnet_version1.h5"
# dimensions of images
img_width, img_height = 150, 150
# load the trained model
model = load_model(model_path)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# load all images into a list
images = []
filelist = os.listdir(folder_path)
filelist = sorted(filelist,key=lambda x: int(os.path.splitext(x)[0]))
for img in filelist:
img = os.path.join(folder_path, img)
img = image.load_img(img, target_size=(img_width, img_height))
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
images.append(img)
# stack up images list to pass for prediction
images = np.vstack(images)
classes = model.predict_classes(images, batch_size=10)
print(classes)
class_indices = train_generator.class_indices
class_indices
# (1 = dog, 0 = cat).
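# The `class_indices` mapping above (label name to integer) can be inverted to turn the 0/1 predictions back into label names. A minimal sketch with an illustrative mapping (values stand in for `train_generator.class_indices`):

```python
# Invert a Keras-style class_indices mapping (name -> index) so that
# integer predictions can be turned back into label names.
class_indices = {'cat': 0, 'dog': 1}          # e.g. train_generator.class_indices
index_to_label = {v: k for k, v in class_indices.items()}

predictions = [1, 0, 1, 1]                    # e.g. flattened predict_classes output
labels = [index_to_label[p] for p in predictions]
print(labels)                                  # ['dog', 'cat', 'dog', 'dog']
```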
# + id="TwCfipfkYWfP" colab_type="code" colab={}
import pandas as pd
df=pd.read_csv("/content/sample_submission.csv")
df.label=classes
df.head(15)
# + id="g2G8llrvFm5V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 516} outputId="795b63ef-e8b1-45bf-a68d-4512a8237c02"
from IPython.display import Image
Image(filename="/content/Test/1.jpg")
| Alexnet/ALexnet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import external datasets
#
# The Digital Earth Africa Sandbox allows users to use external data, such as shapefiles and .geojson files, in their analyses.
#
# This tutorial will take you through:
#
# 1. The packages to import
# 2. Setting the path for the vector file
# 3. Loading the external dataset
# 4. Displaying the dataset on a basemap
# 5. Loading the satellite imagery by using the extent of the external dataset
# 6. Mask the area of interest from the satellite imagery using the external dataset
#
# For this tutorial, the example external dataset is in a shapefile format.
# + raw_mimetype="text/restructuredtext" active=""
# .. note::
# Before you proceed, ensure you have completed all lessons in the `DE Africa Six-Week Training Course <session_1/01_what_is_digital_earth_africa.ipynb>`_.
# -
# ## Set up notebook
#
# In your **Training** folder, create a new Python 3 notebook. Name it `external_dataset.ipynb`. For more instructions on creating a new notebook, see the [instructions from Session 2](./session_2/04_load_data_exercise.ipynb).
# ### Load packages and functions
#
# In the first cell, type the following code and then run the cell to import necessary Python dependencies.
#
# import sys
# import datacube
# import numpy as np
# import pandas as pd
# import geopandas as gpd
#
# from datacube.utils import geometry
#
# sys.path.append('../Scripts')
# from deafrica_datahandling import load_ard, mostcommon_crs
# from deafrica_plotting import map_shapefile, rgb
# from deafrica_spatialtools import xr_rasterize
# Note how the packages below were imported alongside the other packages above.
#
# These are the packages you will need whenever you want to use an external dataset.
#
# import geopandas as gpd
# from datacube.utils import geometry
# from deafrica_plotting import map_shapefile
# from deafrica_spatialtools import xr_rasterize
# ### Connect to the datacube
#
# Enter the following code and run the cell to create our `dc` object, which provides access to the datacube.
#
# dc = datacube.Datacube(app='import_dataset')
# Create a folder called **data** in the **Training** directory.
# Download this [zip file](_static/external_dataset/reserve.zip) and extract on your local machine.
# Upload the `reserve` shapefile (cpg, dbf, shp, shx) into the **data** folder.
#
# Create a variable called `shapefile_path` to store the path of the shapefile, as shown below.
#
# shapefile_path = "data/reserve.shp"
# Read the shapefile into a GeoDataFrame using the `gpd.read_file` function.
#
# gdf = gpd.read_file(shapefile_path)
#
# Convert all of the shapes into a datacube geometry using `geometry.Geometry`
#
# geom = geometry.Geometry(gdf.unary_union, gdf.crs)
# Use the `map_shapefile` function to display the shapefile on a basemap.
#
# map_shapefile(gdf, attribute=gdf.columns[0], fillOpacity=0, weight=2)
#
# <img align="middle" src="_static/external_dataset/1.PNG" alt="The DE Africa" width="400">
# ## Create a query object
#
# We will replace the `x` and `y` arguments in the query with `geopolygon`, as shown below.
#
# query = {
# 'x' : x,
# 'y' : y,
# 'group_by': 'solar_day',
# 'time' : ('2019-01-15'),
# 'resolution': (-10, 10),
# }
#
# Remove `x`, `y` from `query` and update with `geopolygon`:
#
# query = {
# 'geopolygon' : geom,
# 'group_by': 'solar_day',
# 'time' : ('2019-01-15'),
# 'resolution': (-10, 10),
# }
#
#
# We then identify the most common projection system in the input query, and load the dataset `ds`.
#
# output_crs = mostcommon_crs(dc=dc, product='s2_l2a', query=query)
#
# ds = load_ard(dc=dc,
# products=['s2_l2a'],
# output_crs=output_crs,
# measurements=["red","green","blue"],
# **query
# )
#
# Print the `ds` result.
#
# ds
# ## Plotting the result
# We will display the returned dataset using the `rgb` function.
#
# rgb(ds)
#
# <img align="middle" src="_static/external_dataset/2.PNG" alt="The DE Africa" width="400">
# ## Rasterise the shapefile
#
# Before we can apply the shapefile data as a mask, we need to convert the shapefile to a raster using the `xr_rasterize` function.
#
# mask = xr_rasterize(gdf, ds)
#
# ## Mask the dataset
#
# Mask the dataset using `ds.where` with the `mask`, setting pixels outside the polygon to `NaN`.
#
# ds = ds.where(mask)
#
# Plot the masked result of the dataset
#
# rgb(ds)
#
# <img align="middle" src="_static/external_dataset/3.PNG" alt="The DE Africa" width="400">
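# Conceptually, `ds.where(mask)` keeps the pixels where the rasterised polygon is 1 and replaces everything else with `NaN`. A minimal numpy sketch of the same idea, on toy arrays rather than the real satellite data:

```python
import numpy as np

# Toy 3x3 "image" and a binary mask standing in for the rasterised polygon.
data = np.arange(9, dtype=float).reshape(3, 3)
mask = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]])

# Equivalent of ds.where(mask): keep masked-in pixels, NaN elsewhere.
masked = np.where(mask == 1, data, np.nan)
print(masked)
```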
# ## Conclusion
#
# You can apply this method to existing notebooks you are working with. It is useful for selecting specific areas of interest, and for transferring information between the Sandbox and GIS platforms.
| docs/External_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/johnfred-delossantos/CPEN-21A-CPE-1-2/blob/main/Operations_and_Expressions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="hsIRX_wc0cCV"
# ##Boolean Operators
# + colab={"base_uri": "https://localhost:8080/"} id="r18DlUs7z_M1" outputId="f20ed0ed-e0d3-4d23-a777-f23be31da03a"
#Booleans represent one of two values: True or False
print(10>9)
print(10==9)
print(9>10)
# + colab={"base_uri": "https://localhost:8080/"} id="rX-Z3ns21D9K" outputId="c7326852-6357-46ff-9a84-46a6f9d59b20"
a = 10
b = 9
print(a>b)
print(a==a)
print(b>a)
# + colab={"base_uri": "https://localhost:8080/"} id="qNgGGzXq10PR" outputId="320f9e3f-f4d9-40d3-f2bc-4d9ec61b7feb"
print(bool("Hello"))
print(bool(15))
print(bool(True))
# + colab={"base_uri": "https://localhost:8080/"} id="vpF7aQvd2ekK" outputId="bc170777-21e0-4260-b3c6-3866b36ce6be"
print(bool(False))
print(bool(None))
print(bool(0))
print(bool([])) #an empty list evaluates to False
# + colab={"base_uri": "https://localhost:8080/"} id="5FvhXSFf20Dy" outputId="f2a79c29-3c08-4a89-c628-f4b9c406c64c"
def myFunction(): return True
print(myFunction())
# + colab={"base_uri": "https://localhost:8080/"} id="VgukJFlT3JcJ" outputId="f4870618-6f89-4d76-c20b-d429cbd7cf6f"
def myFunction(): return False
print(myFunction())
if myFunction():
print("Yes!")
else:
print("No!")
# + [markdown] id="Gt2VWFQr4mVI"
# ##You Try!
# + colab={"base_uri": "https://localhost:8080/"} id="-PI-rrc54P8P" outputId="339e4d55-ae72-43aa-95ed-c1254e6c1ee2"
print(10>9)
a=6
b=7
print(a==b)
print(a!=a)
# + colab={"base_uri": "https://localhost:8080/"} id="A8lk7iv749ch" outputId="d1c250ac-05cd-421e-9bf0-562686103bd6"
print(10+5)
print(10-5)
print(10*5)
print(10/5) #division - quotient
print(10%5) #modulo division
print(10%3) #modulo division
print(10//3) #floor division
print(10**2) #exponentiation
# + colab={"base_uri": "https://localhost:8080/"} id="rgZq5G-36rp4" outputId="469d175a-2c29-420e-e712-2fe75d237da1"
a = 60 #0011 1100
b = 13 #0000 1101
print(a & b)
print(a | b)
print(a ^ b)
print(~a)
print(a<<2)
print(a>>2)
print(b<<2)
print(b>>2)
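# As a quick sanity check on the shift operators above: for non-negative integers, `x << n` is the same as multiplying by `2**n`, and `x >> n` is the same as floor-dividing by `2**n`.

```python
a, b = 60, 13

# Left shift multiplies by powers of two; right shift floor-divides.
assert a << 2 == a * 2**2 == 240
assert a >> 2 == a // 2**2 == 15
assert b << 2 == b * 2**2 == 52
assert b >> 2 == b // 2**2 == 3
print("shift identities hold")
```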
# + colab={"base_uri": "https://localhost:8080/"} id="tqvP9dFx-xwx" outputId="fd2675f6-9cb6-46d4-d3a2-9ae94372da3f"
x = 6
x += 3 #x = x+3
print(x)
x%=3 #x = x%3, remainder 0
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="CdmP_BI8_e-y" outputId="cd16f58c-ec95-4790-d43f-0656bfd06f44"
a=True
b=False
print(a and b)
print(a or b)
print(not(a and b))
print(not(a or b)) #negation
# + colab={"base_uri": "https://localhost:8080/"} id="rdzmJF_TAHTa" outputId="c5ef3ff8-a658-47b2-838c-e49718c7860e"
print(a is b)
print(a is not b)
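# Note that `is` tests object identity rather than value equality; with booleans and small integers the two often coincide, which can mislead. A short sketch of the difference:

```python
x = [1, 2, 3]
y = [1, 2, 3]
z = x

print(x == y)   # True: same value
print(x is y)   # False: two distinct list objects
print(x is z)   # True: both names refer to the same object
```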
| Operations_and_Expressions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Running multiple train-test-validation splits
#
# ### to better estimate predictive accuracy in modeling genre.
#
# This notebook attempts a slight improvement on the methods deployed in my 2015 article, "The Life Cycles of Genres."
#
# In 2015, I used a set number of features and a set regularization constant. Now I optimize *n* (number of features) and *c* (the regularization constant) through gridsearch, running multiple crossvalidations on a train/test set to find the best constants for a given sample.
#
# To avoid exaggerating accuracy through multiple trials, I have also moved to a train/test/validation split: constants are optimized through crossvalidation on the train-test set, but the model is then tested on a separate validation set. I repeat that process on random train/test/validation splits in order to visualize model accuracy as a distribution.
#
# Getting the train/test vs. validation split right can be challenging, because we want to avoid repeating *authors* from the train/test set in validation. (Or in both train and test for that matter.) Authorial diction is constant enough that this could become an unfair advantage for genres with a few prolific authors. We also want to ensure that the positive & negative classes within a given set have a similar distribution across historical time. (Otherwise the model will become a model of language change.) Building sets where all these conditions hold is more involved than a random sample of volumes.
#
# Most of the code in this notebook is concerned with creating the train/test-vs-validation split. The actual modeling happens in versatiletrainer2, which we import in the first cell.
import sys
import os, csv, random
import numpy as np
import pandas as pd
import versatiletrainer2
import metaselector
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
# #### Managing the validation split.
#
# The functions defined below are used to create a train/test/validation divide, while also ensuring
#
# 1. No author is present in more than one of those sets, so we don't overfit on a specific style.
# 2. Positive and negative classes are equally distributed across time (so we don't end up modeling language change instead of genre!)
#
# But the best way to understand the overall workflow may be to scan down a few cells to the bottom function, **train_and_validate().**
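# The core idea of the split (assign whole authors, never individual books, to one side) can be sketched in a few lines of pure Python. This is a simplified illustration only; the full date-matching logic follows below.

```python
import random

# Toy metadata: (book_id, author). All books by one author must land
# in the same split, so we shuffle and assign at the author level.
books = [(1, 'A'), (2, 'A'), (3, 'B'), (4, 'C'), (5, 'C'), (6, 'D')]

random.seed(0)
authors = sorted({a for _, a in books})
random.shuffle(authors)

half = len(authors) // 2
train_authors = set(authors[:half])

train = [b for b, a in books if a in train_authors]
valid = [b for b, a in books if a not in train_authors]

# No author appears in both splits.
assert not ({a for b, a in books if b in train} &
            {a for b, a in books if b in valid})
```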
def evenlymatchdate(meta, tt_positives, v_positives, negatives):
'''
Given a metadata file, two lists of positive indexes and a (larger) list
of negative indexes, this assigns negatives that match the date distribution
of the two positive lists as closely as possible, working randomly so that
neither list gets "a first shot" at maximally close matches.
The task is complicated by our goal of ensuring that authors are only
represented in the train/test OR the validation set. To do this while
using as much of our sample as we can, we encourage the algorithm to choose
works from already-selected authors when they fit the date parameters needed.
This is the purpose of the neg_unmatched dict: works by authors we have
chosen, not yet matched to a positive work.
'''
assert len(negatives) > (len(tt_positives) + len(v_positives))
authors = dict()
authors['tt'] = set(meta.loc[tt_positives, 'author'])
authors['v'] = set(meta.loc[v_positives, 'author'])
neg_matched = dict()
neg_matched['tt'] = []
neg_matched['v'] = []
neg_unmatched = dict()
neg_unmatched['v'] = []
neg_unmatched['tt'] = []
negative_meta = meta.loc[negatives, : ]
allpositives = [(x, 'tt') for x in tt_positives]
allpositives.extend([(x, 'v') for x in v_positives])
random.shuffle(allpositives)
for idx, settype in allpositives:
if settype == 'v':
inversetype = 'tt'
else:
inversetype = 'v'
date = meta.loc[idx, 'firstpub']
found = False
negative_meta = negative_meta.assign(diff = np.abs(negative_meta['firstpub'] - date))
for idx2 in neg_unmatched[settype]:
matchdate = meta.loc[idx2, 'firstpub']
if abs(matchdate - date) < 3:
neg_matched[settype].append(idx2)
location = neg_unmatched[settype].index(idx2)
neg_unmatched[settype].pop(location)
found = True
break
if not found:
candidates = []
for i in range(200):
aspirants = negative_meta.index[negative_meta['diff'] == i].tolist()
# the following section ensures that authors in
# traintest don't end up also in validation
for a in aspirants:
asp_author = meta.loc[a, 'author']
if asp_author not in authors[inversetype]:
# don't even consider books by authors already
# in the other set
candidates.append(a)
if len(candidates) > 0:
break
chosen = random.sample(candidates, 1)[0]
chosenauth = negative_meta.loc[chosen, 'author']
allbyauth = negative_meta.index[negative_meta['author'] == chosenauth].tolist()
authors[settype].add(chosenauth)
if len(allbyauth) < 1:
print('error')
for idx3 in allbyauth:
if idx3 == chosen:
neg_matched[settype].append(idx3)
# the one we actually chose
else:
neg_unmatched[settype].append(idx3)
# others by same author, to be considered first in future
negative_meta.drop(allbyauth, inplace = True)
if len(negative_meta) == 0:
print('Exhausted negatives! This is surprising.')
break
# other books by same authors can be added to the set in the end
tt_neg = neg_matched['tt'] + neg_unmatched['tt']
v_neg = neg_matched['v'] + neg_unmatched['v']
remaining_neg = negative_meta.index.tolist()
return tt_neg, v_neg, remaining_neg
# +
def tags2tagset(x):
''' function that will be applied to transform
fantasy|science-fiction into {'fantasy', 'science-fiction'} '''
if type(x) == float:
return set()
else:
return set(x.split(' | '))
def divide_training_from_validation(tags4positive, tags4negative, sizecap, metadatapath):
''' This function divides a dataset into two parts: a training-and-test set, and a
validation set. We ensure that authors are represented in one set *or* the other,
not both.
A model is optimized by gridsearch and crossvalidation on the training-and-test set. Then this model
is applied to the validation set, and accuracy is recorded.
'''
meta = pd.read_csv(metadatapath)
column_of_sets = meta['genretags'].apply(tags2tagset)
meta = meta.assign(tagset = column_of_sets)
overlap = []
negatives = []
positives = []
for idx, row in meta.iterrows():
if 'drop' in row['tagset']:
continue
# these works were dropped and will not be present in the data folder
posintersect = len(row['tagset'] & tags4positive)
negintersect = len(row['tagset'] & tags4negative)
if posintersect and negintersect:
overlap.append(idx)
elif posintersect:
positives.append(idx)
elif negintersect:
negatives.append(idx)
print()
print('-------------')
print('Begin construction of validation split.')
print("Positives/negatives:", len(positives), len(negatives))
random.shuffle(overlap)
print('Overlap (assigned to pos class): ' + str(len(overlap)))
positives.extend(overlap)
# We do selection by author
positiveauthors = list(set(meta.loc[positives, 'author'].tolist()))
random.shuffle(positiveauthors)
traintest_pos = []
validation_pos = []
donewithtraintest = False
for auth in positiveauthors:
this_auth_indices = meta.index[meta['author'] == auth].tolist()
confirmed_auth_indices = []
for idx in this_auth_indices:
if idx in positives:
confirmed_auth_indices.append(idx)
if not donewithtraintest:
traintest_pos.extend(confirmed_auth_indices)
else:
validation_pos.extend(confirmed_auth_indices)
if len(traintest_pos) > sizecap:
# that's deliberately > rather than >= because we want a cushion
donewithtraintest = True
# Now let's get a set of negatives that match the positives' distribution
# across the time axis.
traintest_neg, validation_neg, remaining_neg = evenlymatchdate(meta, traintest_pos, validation_pos, negatives)
traintest = meta.loc[traintest_pos + traintest_neg, : ]
realclass = ([1] * len(traintest_pos)) + ([0] * len(traintest_neg))
traintest = traintest.assign(realclass = realclass)
print("Traintest pos/neg:", len(traintest_pos), len(traintest_neg))
if len(validation_neg) > len(validation_pos):
validation_neg = validation_neg[0: len(validation_pos)]
# we want the balance of pos and neg examples to be even
print("Validation pos/neg:", len(validation_pos), len(validation_neg))
validation = meta.loc[validation_pos + validation_neg, : ]
realclass = ([1] * len(validation_pos)) + ([0] * len(validation_neg))
validation = validation.assign(realclass = realclass)
return traintest, validation
# -
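# The `tags2tagset` helper above splits on `' | '` and treats `NaN` (which pandas delivers as a float) as an empty set. A self-contained check of that behaviour, with the function re-defined here so the sketch runs on its own:

```python
def tags2tagset(x):
    """Turn 'fantasy | science-fiction' into a set of tags;
    NaN cells arrive as floats and become the empty set."""
    if type(x) == float:
        return set()
    return set(x.split(' | '))

assert tags2tagset('fantasy | science-fiction') == {'fantasy', 'science-fiction'}
assert tags2tagset(float('nan')) == set()
assert tags2tagset('gothic') == {'gothic'}
print("tags2tagset behaves as expected")
```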
# #### Iteratively testing multiple splits.
#
# Because we have a relatively small number of data points for our positive classes, there's a fair amount of variation in model accuracy depending on the exact sample chosen. It's therefore necessary to run the whole train/test/validation cycle multiple times to get a distribution and a median value.
#
# The best way to understand the overall workflow may be to look first at the bottom function, **train_and_validate()**. Essentially we create a split between train/test and validation sets, and write both as temporary files. Then the first, train/test file is passed to a function that runs a grid-search on it (via crossvalidation). We get back some parameters, including cross-validated accuracy; the model and associated objects (e.g. vocabulary, scaler, etc) are pickled and written to disk.
#
# Then finally we apply the pickled model to the held-out *validation* set in order to get validation accuracy.
#
# We do all of that multiple times to get a sense of the distribution of possible outcomes.
# +
def tune_a_model(name, tags4positive, tags4negative, sizecap, sourcefolder, metadatapath):
'''
This tunes a model through gridsearch, and puts the resulting model in a ../temp
folder, where it can be retrieved
'''
vocabpath = '../lexica/' + name + '.txt'
modeloutpath = '../temp/' + name + '.csv'
c_range = [.0001, .001, .003, .01, .03, 0.1, 1, 10, 100, 300, 1000]
featurestart = 1000
featureend = 7000
featurestep = 500
modelparams = 'logistic', 10, featurestart, featureend, featurestep, c_range
forbiddenwords = {}
floor = 1700
ceiling = 2020
metadata, masterdata, classvector, classdictionary, orderedIDs, authormatches, vocablist = versatiletrainer2.get_simple_data(sourcefolder, metadatapath, vocabpath, tags4positive, tags4negative, sizecap, extension = '.fic.tsv', excludebelow = floor, excludeabove = ceiling,
forbid4positive = {'drop'}, forbid4negative = {'drop'}, force_even_distribution = False, forbiddenwords = forbiddenwords)
matrix, maxaccuracy, metadata, coefficientuples, features4max, best_regularization_coef = versatiletrainer2.tune_a_model(metadata, masterdata, classvector, classdictionary, orderedIDs, authormatches,
vocablist, tags4positive, tags4negative, modelparams, name, modeloutpath)
meandate = int(round(np.sum(metadata.firstpub) / len(metadata.firstpub)))
floor = np.min(metadata.firstpub)
ceiling = np.max(metadata.firstpub)
os.remove(vocabpath)
return floor, ceiling, meandate, maxaccuracy, features4max, best_regularization_coef, modeloutpath
def confirm_separation(df1, df2):
'''
Just some stats on the train/test vs validation split.
'''
authors1 = set(df1['author'])
authors2 = set(df2['author'])
overlap = authors1.intersection(authors2)
if len(overlap) > 0:
print('Overlap: ', overlap)
pos1date = np.mean(df1.loc[df1.realclass == 1, 'firstpub'])
neg1date = np.mean(df1.loc[df1.realclass == 0, 'firstpub'])
pos2date = np.mean(df2.loc[df2.realclass == 1, 'firstpub'])
neg2date = np.mean(df2.loc[df2.realclass == 0, 'firstpub'])
print("Traintest mean date pos:", pos1date, "neg:", neg1date)
print("Validation mean date pos", pos2date, "neg:", neg2date)
print()
def train_and_validate(modelname, tags4positive, tags4negative, sizecap, sourcefolder, metadatapath):
outmodels = modelname + '_models.tsv'
if not os.path.isfile(outmodels):
with open(outmodels, mode = 'w', encoding = 'utf-8') as f:
outline = 'name\tsize\tfloor\tceiling\tmeandate\ttestacc\tvalidationacc\tfeatures\tregularization\ti\n'
f.write(outline)
for i in range(10):
name = modelname + str(i)
traintest, validation = divide_training_from_validation(tags4positive, tags4negative, sizecap, metadatapath)
confirm_separation(traintest, validation)
traintest.to_csv('../temp/traintest.csv', index = False)
validation.to_csv('../temp/validation.csv', index = False)
floor, ceiling, meandate, testacc, features4max, best_regularization_coef, modeloutpath = tune_a_model(name, tags4positive, tags4negative, sizecap, sourcefolder, '../temp/traintest.csv')
modelinpath = modeloutpath.replace('.csv', '.pkl')
results = versatiletrainer2.apply_pickled_model(modelinpath, sourcefolder, '.fic.tsv', '../temp/validation.csv')
right = 0
wrong = 0
columnname = 'alien_model'
for idx, row in results.iterrows():
if float(row['realclass']) >= 0.5 and row[columnname] >= 0.5:
right +=1
elif float(row['realclass']) <= 0.5 and row[columnname] <= 0.5:
right += 1
else:
wrong += 1
validationacc = right / (right + wrong)
validoutpath = modeloutpath.replace('.csv', '.validate.csv')
results.to_csv(validoutpath)
print()
print('Validated: ', validationacc)
with open(outmodels, mode = 'a', encoding = 'utf-8') as f:
outline = '\t'.join([name, str(sizecap), str(floor), str(ceiling), str(meandate), str(testacc), str(validationacc), str(features4max), str(best_regularization_coef), str(i)]) + '\n'
f.write(outline)
# -
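# The validation-accuracy loop in `train_and_validate()` thresholds both the real class and the model probability at 0.5 and counts agreements. A minimal sketch of that logic on toy values (simplified to a single `>=` comparison on each side):

```python
# Toy (realclass, model_probability) pairs standing in for the
# validation dataframe; agreement on either side of 0.5 counts as right.
pairs = [(1, 0.9), (0, 0.2), (1, 0.4), (0, 0.7), (1, 0.6)]

right = sum(1 for real, prob in pairs
            if (real >= 0.5) == (prob >= 0.5))
accuracy = right / len(pairs)
print(accuracy)  # 0.6
```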
train_and_validate('BoWGothic', {'lochorror', 'pbgothic', 'locghost', 'stangothic', 'chihorror'},
{'random', 'chirandom'}, 125, '../newdata/', '../meta/finalmeta.csv')
train_and_validate('BoWSF', {'anatscifi', 'locscifi', 'chiscifi', 'femscifi'},
{'random', 'chirandom'}, 125, '../newdata/', '../meta/finalmeta.csv')
train_and_validate('BoWMystery', {'locdetective', 'locdetmyst', 'chimyst', 'det100'},
{'random', 'chirandom'}, 125, '../newdata/', '../meta/finalmeta.csv')
# ### Trials on reduced data
#
# The same models run on a corpus down-sampled to 5% of the data (each word instance had a 5% chance of being recorded) and 80 instead of 125 volumes.
#
# We used this alternate version of **tune_a_model():**
def tune_a_model(name, tags4positive, tags4negative, sizecap, sourcefolder, metadatapath):
'''
This tunes a model through gridsearch, and puts the resulting model in a ../temp
folder, where it can be retrieved
'''
vocabpath = '../lexica/' + name + '.txt'
modeloutpath = '../temp/' + name + '.csv'
c_range = [.00001, .0001, .001, .003, .01, .03, 0.1, 1, 10, 100, 300, 1000]
featurestart = 10
featureend = 1500
featurestep = 100
modelparams = 'logistic', 10, featurestart, featureend, featurestep, c_range
forbiddenwords = {}
floor = 1700
ceiling = 2020
metadata, masterdata, classvector, classdictionary, orderedIDs, authormatches, vocablist = versatiletrainer2.get_simple_data(sourcefolder, metadatapath, vocabpath, tags4positive, tags4negative, sizecap, extension = '.fic.tsv', excludebelow = floor, excludeabove = ceiling,
forbid4positive = {'drop'}, forbid4negative = {'drop'}, force_even_distribution = False, forbiddenwords = forbiddenwords)
matrix, maxaccuracy, metadata, coefficientuples, features4max, best_regularization_coef = versatiletrainer2.tune_a_model(metadata, masterdata, classvector, classdictionary, orderedIDs, authormatches,
vocablist, tags4positive, tags4negative, modelparams, name, modeloutpath)
meandate = int(round(np.sum(metadata.firstpub) / len(metadata.firstpub)))
floor = np.min(metadata.firstpub)
ceiling = np.max(metadata.firstpub)
os.remove(vocabpath)
return floor, ceiling, meandate, maxaccuracy, features4max, best_regularization_coef, modeloutpath
train_and_validate('BoWShrunkenGothic', {'lochorror', 'pbgothic', 'locghost', 'stangothic', 'chihorror'},
{'random', 'chirandom'}, 40, '../reduced_data/', '../meta/finalmeta.csv')
sf = pd.read_csv('../results/ABsfembeds_models.tsv', sep = '\t')
sf.head()
sf.shape
new = sf.loc[[x for x in range(31,41)], : ]
old = sf.loc[[x for x in range(21,31)], : ]
print(np.median(new.testacc), np.median(old.testacc))
print(np.mean(new.validationacc), np.mean(old.validationacc))
new
old
print(np.mean(new.features), np.mean(old.features))
hist = pd.read_csv('../results/HistShrunkenGothic_models.tsv', sep = '\t')
hist1990 = pd.read_csv('../results/Hist1990ShrunkenGothic_models.tsv', sep = '\t')
bow = pd.read_csv('../results/BoWShrunkenGothic_models.tsv', sep = '\t')
glove = pd.read_csv('../results/GloveShrunkenGothic_models.tsv', sep = '\t')
print(np.mean(hist.testacc), np.mean(bow.testacc), np.mean(glove.testacc))
print(np.mean(hist.validationacc), np.mean(hist1990.validationacc), np.mean(bow.validationacc), np.mean(glove.validationacc))
print(np.mean(hist.features), np.mean(hist1990.features), np.mean(bow.features), np.mean(glove.features))
# (assumes `myst` holds a mystery-genre results table loaded earlier, e.g. from a *_models.tsv file)
print(np.mean(myst.testacc[0:10]), np.mean(myst.testacc[10: ]))
hist = pd.read_csv('../results/HistGothic_models.tsv', sep = '\t')
# (assumes `bowgoth` holds the BoW Gothic results table loaded earlier)
print(np.mean(hist.validationacc), np.mean(bowgoth.validationacc))
hist = pd.read_csv('../results/HistGothic_models.tsv', sep = '\t')
hist1990 = pd.read_csv('../results/Hist1990Gothic_models.tsv', sep = '\t')
bow = pd.read_csv('../results/BoWGothic_models.tsv', sep = '\t')
print(np.mean(hist.validationacc), np.mean(hist1990.validationacc), np.mean(bow.validationacc))
bow = pd.read_csv('BoWMystery_models.tsv', sep = '\t')
np.mean(bow.validationacc[0:30])
np.mean(bow.validationacc[30: ])
| variation/make_validation_splits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
x = 5
if x==5:
print('x is equal to 5')
if x<5:
print('x is less than 5')
if x>5:
print('x is greater than 5')
if x<= 5:
print('x is less than or equal to 5')
if x>=5:
print('x is greater than or equal to 5')
y = 9
if y == 9:
print('y is 9')
print('y is still 9')
print('y is increasing')
for i in range(7):
print(i)
if i>7:
print('i is greater than 7')
print('done with i', i)
print('all done')
x = 8
if x> 2:
print('true')
else:
print('false')
print('')
x = 4
if x<6:
print('small')
elif x<9:
print('medium')
else:
print('greater')
print('all done')
x = 9
if x<5:
print ('mini')
elif x<9:
print('micro')
elif x<15:
print('small')
elif x<25:
print('medium')
elif x<36:
print('large')
else:
print('all done')
# +
astr = 'i am haris'
try:
istr = int(astr)
except:
istr = -1
print('first is', istr)
astr = '123'
try:
istr = int(astr)
except:
istr = -2
print('second is', istr)
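# The two try/except blocks above repeat the same pattern; it can be wrapped in a small helper. This is a sketch, with the fallback value made a parameter:

```python
def safe_int(text, fallback=-1):
    """Convert text to an int, returning fallback when conversion fails."""
    try:
        return int(text)
    except ValueError:
        return fallback

print(safe_int('i am haris'))  # -1
print(safe_int('123'))         # 123
```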
# +
x = input('please enter anything:')
try:
y = int(x)
except:
y = -112
if y>0:
print('nice work')
else:
print('not a number')
# -
hrs = 47.50
rate = 10.50
k = hrs*rate
print(k)
| Code/2 Conditional statements.ipynb |