code stringlengths 38 801k | repo_path stringlengths 6 263 |
|---|---|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (DL virtualenv)
# language: python
# name: dssc_dl_2021
# ---
# # Deep Learning Course - LAB 1
#
# ## Intro to PyTorch
#
# PyTorch (PT) is a Python (and C++) library for Machine Learning (ML) particularly suited for Neural Networks and their applications.
#
# Its great selection of built-in modules, models, functions, CUDA capability, tensor arithmetic support and automatic differentiation functionality make it one of the most used scientific libraries for Deep Learning.
#
# Note: for this series of labs, we advise to install Python >= 3.7
# ### Installing PyTorch
#
# We advise to install PyTorch following the directions given in its [home page](https://pytorch.org/get-started/locally/). Just typing `pip install torch` may not be the correct action as you have to take into account the compatibility with `cuda`. If you have `cuda` installed, you can find your version by typing `nvcc --version` in a terminal (Linux/iOS).
#
# If you're using Windows, we first suggest first to install Anaconda and then install PyTorch from the `anaconda prompt` software via `conda` (preferably) or `pip`.
#
# If you're using Google Colab, all the libraries needed to follow this lecture should be pre-installed there.
#
# We see now how to operate on Colab.
# ### For Colab users
#
# Google Colab is a handy tool that we suggest you use for this course---especially if your laptop does not support CUDA or has limited hardware capabilities. Anyway, note that **we'll try to avoid GPU code as much as possible**.
#
# Essentially, Colab renders available to you a virtual machine with a limited hardware capability and disk where you can execute your code inside a given time window. You can even ask for a GPU (if you use it too much you'll need to start waiting a lot before it's available though).
# #### Your (maybe) first Colab commands
#
# Colab Jupyter-style notebook interface with a few tweaks.
#
# For instance, you may run (some) bash command from here prepending `!` to your code.
# !ls
# !pwd
# !git clone https://github.com/mnzluca/IntroToAI
# This makes it very easy to operate your virtual machine without the need for a terminal.
# #### File transfer on Colab
#
# One of the most intricate action in Colab is file transfer. Since your files reside on the virtual machine, there're two main ways to operate file transfer on Colab.
#
# * `files.download()` / `.upload()`
from google.colab import files
files.upload()
files.download("sample_data/README.md")
# Although it may be much more handy to connect your Google Drive to Colab. Here is a snippet that lets you do this.
# +
from google.colab import drive
folder_mount = '/content/drive' # Your Drive will be mounted on top of this path
drive.mount(folder_mount)
# -
# ### Dive into PyTorch - connections with NumPy
#
# Like NumPy, PyTorch provides its own multidimensional array class, called `Tensor`. `Tensor`s are essentially the equivalent of NumPy `ndarray`s.
# If we wish to operate a very superficial comparison between `Tensor` and `ndarray`, we can say that:
# * `Tensor` draws a lot of methods from NumPy, although it's missing some (see [this GitHub issue](if you're interested))
# * `Tensor` is more OO than `ndarray` and solves some inconsistencies within NumPy
# * `Tensor` has CUDA support
# +
import torch
import numpy as np
# create custom Tensor and ndarray
x = torch.Tensor([[1,5,4],[3,2,1]])
y = np.array([[1,5,4],[3,2,1]])
def pretty_print(obj, title=None):
if title is not None:
print(title)
print(obj)
print("\n")
pretty_print(x, "x")
pretty_print(y, "y")
# -
# What are the types of these objs?
x.dtype, y.dtype
# `torch` already thinks with Machine Learning in mind as the `Tensor` is implicitly converted to `dtype float32`, while NumPy makes no such assumption.
#
# For more info on `Tensor` data types, please check the beginning of [this page](https://pytorch.org/docs/stable/tensors.html).
#
# As in NumPy, we can call the `.shape` attribute to get the shape of the structures. Moreover, `Tensor`s have also the `.size()` method which is analogous to `.shape`.
x.shape, y.shape, x.size()
# Notice how a `Tensor` shape is **not** a tuple.
#
# We can also create a random `Tensor` analogously to NumPy.
#
# A `2 × 3 × 3` `Tensor` is the same as saying "2 3 × 3 matrices", or a "cubic matrix"
#
# 
x = torch.rand([2, 3, 3]) # you always have to specify wih a tuple the shape of the tensor!
x
y = np.random.rand(2, 3, 3)
y
# We can get the total number of elements in a `Tensor` via the `numel()` method
x.numel()
# We can get the memory occupied by each element of a `Tensor` via `element_size()`
x.element_size()
# Hence, we can quickly calculate the size of the `Tensor` within the RAM
x.numel() * x.element_size()
# #### Slicing a `Tensor`
#
# You can slice a `Tensor` (*i.e.*, extract a substructure of a `Tensor`) as in NumPy using the square brackets:
# +
# extract first element (i.e., matrix) of first dimension
pretty_print(x[0], "Slice first element (x[0])")
# extract a specific element
pretty_print(x[1,2,0], "Slice element at (1, 2, 0) (x[1, 2, 0])")
# extract first element of second dimension (":" means all the elements of the given dim)
pretty_print(x[:, 0], "Slice first element of second dim (x[:, 0])")
# note that it is equivalent to
pretty_print(x[:, 0, :], "As above (x[:, 0] equivalent to x[:, 0, :])")
# extract range of dimensions (first and second element of third dim)
pretty_print(x[:, :, 0:2], "Slice first and second el of third dim (x[:, :, 0:2])")
# note that it is equivalent to (i.e., you can also pass list for slicing, as opposed to Py vanilla lists/tuples)
pretty_print(x[:, :, (0, 1)], "As above (x[:, :, 0:2] equivalent to x[:, :, (0, 1)])")
# -
# In Py, you can also slice any list by interval via the "double colon" notation `::` (`from`:`to - 1`:`step`). Note that `::3` means "take all elements of the object by step of 3 starting from 0 until the list ends".
torch.range(0, 10)[0:7:3]
# #### `Tensor` supports linear algebra
# +
z1 = torch.rand([4, 5])
print("z1")
print("shape", z1.shape)
print(z1)
# transposition
z2 = z1.T # specific for matrices
# z2 = z1.transpose(INDICATE DIMENSIONS!)
print("\nz2")
print("shape", z2.shape)
print(z2)
# +
# matrix multiplication
pretty_print(z1 @ z2, "Matrix multiplication: with '@'")
# equivalent to
pretty_print(torch.matmul(z1, z2), "Matrix multiplication: with torch.matmul")
# and also
pretty_print(z1.matmul(z2), "Matrix multiplication: with Tensor.matmul")
# -
# Note that `@` identifies the matrix product.
#
# Don't mistake `@` and `*` as the latter is the Hadamard (element-by-element) product!
z1 * z2 # this gives an Exception
z1 * z1
# Generally, the "regular" arithmetic operators for Python act as element-wise operators in `Tensor`s (as in `ndarrays`)
z1 ** 2 # Equivalent to above: NOT matrix product!
z3 = torch.Tensor([[1,2,3,4,7],[0.2,2,4,5,3],[-1,3,-4,2,2],[1,1,1,1,2]])
pretty_print(z1 % z3, "z1 % z3 (remainder of integer division)")
pretty_print(z3 // z1, "z3 // z1 (integer division)") # integer division
z3 /= z1
pretty_print(z3, "in-place tensor division (z3 /= z1)")
# As for `ndarrays`, `Tensor`s arithmetic operations support **broadcasting**. Roughly speaking, when two `Tensor`s have different shapes and a binary+ operator is applied to them, PT will try to find a way to make these objects "compatible" for the operation.
#
# Of course, broadcasting is not always possible, but as a rule of thumb, if some dimensions of a `Tensor` are one and the other dimensions are the same, broadcasting works.
# +
small_vector_5 = torch.Tensor([1,2,3,5,2]) # this is treated as a row vector (1 x 5 matrix)
print("small_vector_5:", small_vector_5, "; Shape:", small_vector_5.shape, "\n")
pretty_print(z1 / small_vector_5, "Broadcasting: dividing matrix by row vector")
small_vector_4 = torch.Tensor([4,2,3,1])
small_vector_4 = small_vector_4.unsqueeze(-1) # this operation "transposes" the vector into a column vector (4 x 1 matrix)
print("small_vector_4:\n", small_vector_4, "\nShape:", small_vector_4.shape, "\n")
pretty_print(z1 / small_vector_4, "Broadcasting: dividing matrix by column vector")
# -
torch.Tensor([1,2,3]) == torch.Tensor([[1,2,3]]) # single-dim Tensors are also row vectors
# #### Reshaping and permuting
#
# Sometimes it may be necessary to reshape the tensors to apply some specific operators.
#
# Take the example of RGB images: they can be seen as `3 x h x w` tensors, where `h` is the height and `w` the width.
#
# 
#
# Sometimes, it may be necessary to "flatten" the three matrices into vectors, thus obtaining a `3 x hw` tensor.
image = torch.load("data/img.pt")
image.shape
# This flattening may be achieved via the `reshape` method.
image_reshaped = image.reshape(3, 243*880)
pretty_print(image_reshaped.shape, "shape of image_reshaped")
# We can alternatively use the `view` method...
image_view = image.view(3, 243*880)
pretty_print(image_view.shape, "shape of image_view")
# **Q**: what is the difference between `reshape` and `view`?
# Some libraries encode images as `h x w x 3` tensors instead of `3 x h x w`.
#
# To convert between these two format, one may be tempted to `reshape` or `view` the tensor: in the end, they share the number of elements.
#
#
# +
from matplotlib import pyplot as plt
image2 = image.reshape(243, 880, 3)
plt.imshow(image2)
plt.show()
# -
# That does not seem to work though: reshape does not change the order of the elements within the memory.
#
# In order to do so, we need to use `permute`, which changes shape **and** the order of the elements.
# We need to pass the new order of the dimensions to it.
image3 = image.permute(1,2,0) # 1,2,0 --> old dim1 goes first, old dim2 goes second, dim0 goes last
# can also do image.permute(-2,-1,0) -- works with negative indices as well
plt.imshow(image3)
plt.show()
# We already saw a case of incompatible `Tensor`s above. Which one is it?
#
# Some more lineal algebra...
#
z3_norm = z3.norm(2)
pretty_print(z3_norm, "Tensor norm")
pretty_print(np.linalg.norm(y), "ndarray norm") # notice how torch is more OO
# Notice how methods reducing `Tensor`s to scalars still return singleton `Tensor`s. (be wary of this feature when scripting something in PT)
#
# To "disentangle" the scalar from a `Tensor` use the `.item()` method.
z3_norm.item()
# Note that, as for NumPy, PT supports `Tensor`s operator on a subset of its dimensions.
#
# For example, given a `3x4x4 Tensor`, we might want to calculate the norm of each of the three `4x4` matrices composing it. We must hence specify to the `norm` method the dimensions on which we want it to operate the reduction:
z4 = torch.rand((3,4,4))
pretty_print(z4.norm())
pretty_print(z4.norm(dim=(1,2)), "Norm of the three matrices composing z4 -- z4.norm(dim=(1,2))")
# As expected, the result is a `1x3 Tensor`, showing the norm of each of the matrices.
#
# We can notice this behaviour in other `Tensor` operator applying a reduction:
print(z4)
print(z4[0,:,0])
print(z4[0,:,0].sum())
# +
pretty_print(z4.sum(dim=0), "z4.sum(dim=0) -- Sum of the three matrices composing the tensor -- is a 4x4 matrix")
pretty_print(z4.prod(dim=1), "z4.prod(dim=1) -- Product of the columns of each matrix composing the tensor")
# -
pretty_print(z4[0, :, 0], "This is how the first element of z4.prod(dim=1) is obtained. We fix the 1st and 3rd dim to `0` ...")
pretty_print(z4[0, :, 0].prod(), "... and we operate the prod on it")
print("The other elements are obtained by looping through all of the other possible 1st and 3rd dimension indices. For instance, the element `2,3` of the reduction is obtained as\nz4[2, :, 3].prod().\n")
pretty_print(z4[2, : ,3], "z4[2, : ,3]")
pretty_print(z4[2, :, 3].prod(), "z4[2, : ,3].prod()")
# #### Seamless conversion from NumPy to PT
# +
y_torch = torch.from_numpy(y)
pretty_print(y_torch, "y converted to torch.Tensor")
x_numpy = x.numpy()
pretty_print(x_numpy, "x converted to numpy.ndarray")
# Note that NumPy implicitly converts Tensor to ndarray whenever it can; the same doesn't happen for PT
pretty_print(np.linalg.norm(x), "Example of implicit conversion Tensor → ndarray")
# torch.norm(x_numpy) # ERROR: numpy.... has no attribute dim
# -
# #### Stochastic functionalities
# We can render the (pseudo)random number generator deterministic by calling `torch.manual_seed(integer)`.
#
# This works for both CPU and CUDA RNG calls.
torch.manual_seed(123456)
print("...from now on our random tensor should be the same...")
# +
pretty_print(torch.randperm(10), "(randperm) Random permutation of 0:10")
pretty_print(torch.rand_like(z1), "(rand_like) Create random vector with the same shape of z_1")
pretty_print(torch.randint(10, (3, 3)), "(randint) Like rand, but with integers up to 10")
pretty_print(torch.normal(0, 1, (3, 3)), "(normal) Sampling a 3x3 iid scalars from N(0,1)")
pretty_print(torch.normal(torch.Tensor([[1,2,3],[4,5,6],[0,0,0]]), torch.Tensor([[1,0.5,0.9],[0.5,1,0.1],[3,4,1]])), "Sampling from 9 normals with different means and std into a (3x3) Tensor")
# -
# ### Using GPUs
#
# All `Torch.Tensor` methods support GPU computation via built-in CUDA wrappers.
#
# Just transfer the involved `Tensor`s to CUDA and let the magic happen :)
# +
# check if cuda is available on this machine
torch.cuda.is_available()
has_cuda_gpu = torch.cuda.is_available()
# -
if has_cuda_gpu:
dim = 10000
large_cpu_matrix = torch.rand((dim, dim))
large_gpu_matrix = large_cpu_matrix.to("cuda") # Can also specify "cuda:gpu_id" if multiple GPUs: cuda:0, cuda:1, etc.
# alternatively, you may also call large_cpu_matrix.cuda() or large_cpu_matrix.cuda(0)
else:
print("Sorry, this part of the notebook is inaccessible since it seems you don't have a CUDA-capable GPU on your device :/")
pretty_print(large_cpu_matrix.device, "Device of large_cpu_matrix")
pretty_print(large_gpu_matrix.device, "Device of large_gpu_matrix")
pretty_print(large_gpu_matrix, "If a tensor is not on CPU, the device will also be printed if you print the tensor itself")
if has_cuda_gpu:
import timeit
# NOTE: please fix this number w.r.t. your GPU and CPU
repetitions = 100
print("Norm of large cpu matrix. Time:", timeit.timeit("large_cpu_matrix.norm()", number=repetitions, globals=locals()))
print("Norm of large gpu matrix. Time:", timeit.timeit("large_gpu_matrix.norm()", number=repetitions, globals=locals()))
else:
print("Sorry, this part of the notebook is inaccessible since it seems you don't have a CUDA-capable GPU on your device :/")
# Captain obvious: Use `tensor.cpu()` or `tensor.to("cpu")` to move a tensor to your CPU
# ### Building easy ML models
#
# By using all the pieces we've seen till now, we can build our first ML model using PyTorch: a linear regressor, whose model is
#
# `y = XW + b`
#
# which can also be simplified as
#
# `y = XW`
#
# if we incorporate the bias `b` inside `W` and add to the `X` a column of ones to the right.
#
# We'll first create our data. The `X`s are the 0:9 range plus some iid random noise, while the `y` is just the 0:9 range
# +
x1 = torch.range(0, 9).unsqueeze(-1)
x2 = torch.range(0, 9).unsqueeze(-1)
x3 = torch.range(0, 9).unsqueeze(-1)
x0 = torch.ones([10]).unsqueeze(-1)
X = torch.cat((x1, x2, x3), dim=1)
eps = torch.normal(0, .3, (10, 3))
X += eps
X = torch.cat((X, x0), dim=1) # to concatenate tensors
y = torch.range(0, 9).unsqueeze(-1)
pretty_print(X, "X (covariates)")
pretty_print(y, "y (response)")
# -
# For the case of linear regression, we usually wish to obtain a set of weights minimizing the so called mean square error/loss (MSE), which is the squared difference between the ground truth and the model prediction, summed for each data instance.
#
# We know that the OLS/Max Likelihood esitmator is the one yielding the optimal set of weights in that regard.
# +
W_hat = ((X.T @ X).inverse()) @ X.T @ y # OLS estimator
pretty_print(W_hat, "W (weights - coefficients and bias/intercept)")
# -
# We can evaluate our model on the mean square loss
def mean_square_loss(y, y_hat):
return (((y - y_hat).norm())**2).item() / y.shape[0]
# Let's apply it to our data.
#
# First we need to obtain the predictions, then we can evaluate the MSE.
# +
y_hat = X @ W_hat
pretty_print(y_hat, "Predictions (y_hat)")
pretty_print(mean_square_loss(y, y_hat), "Loss (MSE)")
# -
# #### Using PT built-ins
#
# We will now be exploring the second chunk of PT functionalities, namely the built-in structures and routines supporting the creation of ML models.
#
# We can create the same model we have seen before using PT built-in structures, so we start to see them right away.
#
# Usually, a PT model is a `class` inheriting from `torch.nn.Module`. Inside this class, we'll define two methods:
# * the constructor (`__init__`) in which we define the building blocks of our model as class variables (later during our lectures we'll see more "elegant" methods to build models architectures)
# * the `forward` method, which specifies as the data fed into the model needs to be processed in order to produce the output
#
# Note for those who already know something about NNs: we don't need to define `backward` methods since we're constructing our model with built-in PT building blocks. PT automatically creates a `backward` routine based upon the `forward` method.
#
# Our model only has one building block (layer) which is a `Linear` layer.
# We need to specify the size of the input (i.e. the coefficients `W` of our linear regressor) and the size of the output (i.e. how many scalars it produces) of the layer. We additionaly request our layer to has a bias term `b` (which acts as the intercept of the line we saw before).
#
# The `Linear` layer processes its input as `XW + b`, which is exactly the (first) equation of the linear regressor we saw before.
#
#
class LinearRegressor(torch.nn.Module):
def __init__(self):
super().__init__()
self.regressor = torch.nn.Linear(in_features=3, out_features=1, bias=True)
def forward(self, X):
return self.regressor(X)
# We can create an instance of our model and inspect the current parameters by using the `state_dict` method, which prints the building blocks of our model and their current parameters. Note that `state_dict` is essentially a dictonary indexed by the names of the building blocks which we defined inside the constructor (plus some additional identifiers if a layer has more than one set of parameters).
# +
lin_reg = LinearRegressor()
for param_name, param in lin_reg.state_dict().items():
print(param_name, param)
# -
# We can update the parameters via `state_dict` and re-using the same OLS estimates we obtained before.
#
# Note that PT is thought of for Deep Learning: it does not have (I think) the routines to solve different ML problems.
#
# Next time, we'll see as we can unleash PT's gradient-based iterative training routines and compare the results w.r.t. the OLS estimators.
state_dict = lin_reg.state_dict()
state_dict["regressor.weight"] = W_hat[:3].T
state_dict["regressor.bias"] = W_hat[3]
lin_reg.load_state_dict(state_dict)
# Check if it worked
for param_name, param in lin_reg.state_dict().items():
print(param_name, param)
# The `forward` method gets implicitly called by passing the data to our model's instance `lin_reg`:
X_lin_reg = X[:,:3]
predictions_lin_reg = lin_reg(X_lin_reg)
pretty_print(predictions_lin_reg, "Predictions of torch class")
# The predictions are the same as before
pretty_print(y_hat, "Predictions of linear model")
# ### Adding non-linearity
#
# One of the staples of DL is that the relationship between the `X`s and the predictions is **non-linear**.
#
# The non-linearity is obtain by applying a non-linear function (called *activation function*) after each linear layer.
#
# We can complicate just a little bit our linear model to create a **logistic regressor**:
#
# `y = logistic(XW + b)`,
#
# where `logistic(z) = exp(z) / (1 + exp(z))`
#
# The logistic function has different names:
# * in statistics, it's usually called *inverse logit* as well
# * in DL and mathematics, it's called *sigmoid function* due to its "S" shape
#
# Hystorically, the sigmoid was between the first activation functions used in NNs.
#
# 
#
# Logistic regression is usually used as a **binary classification model** instead of a regression model.
# In this setting, we suppose we have two destination classes to which we assign values 0 and 1: `y ∈ {0, 1}`.
# Since the codomain of the sigmoid is `[0,1]`, we can interpret its output `ŷ` as a probability value, and assign each data to the class 0 if `ŷ <= 0.5`, to the class 1 otherwise.
#
y = torch.Tensor([0,1,0,0,1,1,1,0,1,1])
pretty_print(y, "y for our classification problem")
# Note that we may also want our y to be a vector of `int`s.
# We can convert the `Tensor` type to `int` by calling the method `.long()` or `.int()` of `Tensor`.
#
# As in NumPy, the type of the `Tensor` is found within the `dtype` variable of the given `Tensor`.
y = y.long() # ...PT expects a long(=int64)
pretty_print(y, "y converted to int")
pretty_print(y.dtype, "Data type of y")
# Let us now build our logistic regressor in PT.
#
# We only need one single addition wrt the linear regressor: in the `forward` method, we'll add the sigmoidal non-linearity by calling the `sigmoid` function within the `torch.nn.functional` library.
#
# Note that there also exist some "mirror" alias of these functionals within `torch.nn` (e.g. `torch.nn.Sigmoid`): we'll learn in the following lecture why these aliases are there and how to use them.
class LogisticRegressor(torch.nn.Module):
def __init__(self):
super().__init__()
# no difference wrt linear regressor
self.regressor = torch.nn.Linear(in_features=3, out_features=1)
def forward(self, X):
out = self.regressor(X)
# here we apply the sigmoid fct to the output of regressor
out = torch.nn.functional.sigmoid(out)
return out
# We can instantiate our logistic regressor and use it to calculate our predictions on the same X as before.
#
# Note that **we're using the initial (random) weights which PT has assigned to the model parameters**.
# For our linear regressor, we were able to analytically obtain the OLS value of the parameters.
# In the case of logistic regression, there's no MaxLikelihood estimator obtainable in close form and we need to resort to numerical methods to obtain them.
# Since the part concerning numerical optimization will be discussed during the next Lab, we will not be training our model (hence results will obviously be sub-par).
log_reg = LogisticRegressor()
y_hat = log_reg(X_lin_reg)
pretty_print(y_hat, "logistic regressor predictions")
# There exist many ways to evaluate the performance of the logistic regressor: one of them is **accuracy** (correctly identified units / total number of units).
# We can define a function to evaluate accuracy and calculate it on our model and data
def accuracy(y, y_hat):
# Assign each y_hat to its predicted class
pred_classes = torch.where(y_hat < .5, 0, 1).squeeze().long()
correct = (pred_classes == y).sum()
return (correct / y.shape[0]).item()
accuracy(y, y_hat)
# #### Visualizing linear and logistic regression as a computational graph
#
# We now need to convert the equation of the linear and the logistic regression:
# * `y = σ(WX + b)`
#
# where `σ` is a generic `ℝ → ℝ` function: sigmoid for logistic regression, identity for linear regression.
#
# 
#
# We organize the input in *nodes* (on the left part) s.t. each node represents one dimension/covariate.
# For each data instance, we substitute to each node the corresponding numeric value.
# The nodes undergo one or more operations, namely, from left to right:
#
# 1. Each node is multiplied by its corresponding weight (a value placed on the edge indicates that the node is multiplied by said value)
# 2. All the corresponding outputs are summed together
#
# These two operations identify the dot product between vectors `X` and `W`
#
# 3. The bias term `b` is added
# 4. The non-linear function `σ` is applied to the result of this sum
# 5. Finally, we assign that value to the variable `ŷ`, which is also indicated as a node
#
# ### Our first MultiLayer Perceptron (MLP)
#
# The MLP is a family of Artificial NNs in which the input is a vector of size `ℝ^d` and the output is again a vector of size `ℝ^p`, where `p` is determined upon the nature of the problem we wish to solve. Additionally, a MLP is characterized by multiple stages (*layers*) of sequential vector-matrix multiplication and non-linearity (steps 1., 2., 3. above) in which each output of the layer `l-1` acts as input to the layer `l`.
#
# Taking inspiration to the graph of the logistic regression, we can translate all into an image to give sense to these words:
#
# 
#
# In NNs, each of the nodes within the graph is called a **neuron**
#
# Neurons are organized in **layers**
#
# In computational graphs, layers are shown from left to right (or bottom to top sometimes), which is the direction of the flow of information inside the NN.
#
# The first layer is called **input layer** and represents the dimensions of our data.
#
# The last layer is called **output layer** and represent the output of our NN.
#
# All the intermediate layers are called **hidden layers**. To be defined MLP, there must be at least one hidden layer inside our model.
#
# If the NN is an MLP, each neuron in a given layer (except for the input) receives information from every neuron of the previous; moreover, each neuron in any layer (except for the output) sends information to every neuron of the next layer. There's no communication between neurons of the same layer (if it happens, we have a **Recursive Neural Network**).
# For the sake of brevity, usually in NN computational graphs we drop the blocks `+` and `σ`, the values of weights, and the reference to the bias terms, remaining with a scheme conveying info about
# * the number of neurons per layer
# * the connectivity of neurons
#
# The graph above becomes:
#
# 
#
# We can then start programming our simple MLP in PT.
#
# We will suppose that our MLP is for **binary classification**, hence the activation function `τ` is the sigmoid.
#
# **Q**: what if we wanted our MLP to operate a regression?
class MLP(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer1 = torch.nn.Linear(in_features=3, out_features=2)
self.layer2 = torch.nn.Linear(in_features=2, out_features=1)
def forward(self, X):
out = self.layer1(X)
out = torch.nn.functional.relu(out)
out = self.layer2(X)
out = torch.nn.functional.sigmoid(out)
return out
# For the great majority of MLP, it's very hard to get analytical solutions to our sets of weights and biases. We then resort to numerical methods for optimization.
#
# In DL, we normally used gradient-based methods like Stochastic Gradient Descent with *backpropagation* to find approximate solutions.
#
# We'll cover these topics in future lectures. For now, the focus is to build a MLP in PT and perform the *forward pass* (=evaluate the model on a set of data).
# We can analyse the structure of our MLP by just printing the model
model = MLP()
model
# although we might wanna have additional informations.
#
# There's an additional package, called `torch-summary` which helps us producing more informative and exhaustive model summaries.
import sys
# !{sys.executable} -m pip install torch-summary
from torchsummary import summary
summary(model)
# Let us suppose we wish to build a larger model from the graph below.
#
# 
#
# We suppose that
#
# 1. The layers have no bias units
# 2. The activation function for hidden layers is `ReLU`
#
# Moreover, we suppose that this is a classification problem.
#
# As you might recall, when the number of classes is > 2, we encode the problem in such a way that the output layer has a no. of neurons corresponding to the no. of classes. Doing so, we establish a correspondence between output units and classes. The value of the $j$-th neuron represents the **confidence** of the network in assigning a given data instance to the $j$-th class.
#
# Classically, when the network is encoded in such way, the activation function for the final layer is the **softmax** function.
# If $C$ is the total number of classes,
#
# $softmax(z_j) = \frac{\exp(z_j)}{\sum_{k=1}^C \exp(z_k)}$
#
# where $j\in \{1,\cdots,C\}$ is one of the classes.
#
# If we repeat this calculation for all $j$s, we end up with $C$ normalized values (i.e., between 0 and 1) which can be interpreted as probability that the network assigns the instance to the corresponding class.
# +
# Task: build this network from scratch during class
# +
# instantiate and summarise
| labs/01-intro-to-pt_class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from librosa.core import cqt
import matplotlib.pyplot as plt
import time
import torch
import torch.nn as nn
from librosa.core import note_to_hz
import pandas as pd
import sys
sys.path.insert(0,'../')
import Spectrogram
# -
y_list = np.load('./y_list.npy')
t_start = time.time()
mel_layer = Spectrogram.MelSpectrogram(sr=44100)
time_used = time.time()-t_start
print(time_used)
y_torch = torch.tensor(y_list, dtype=torch.float)
# +
timing = []
for e in range(5):
t_start = time.time()
spec = mel_layer(y_torch)
time_used = time.time()-t_start
print(time_used)
timing.append(time_used)
# -
print("mean = ",np.mean(timing))
print("std = ", np.std(timing))
data = pd.DataFrame(timing,columns=['t_avg'])
data['Type'] = 'torch_CPU'
data.to_csv('Mel_torch')
plt.imshow(spec[0], aspect='auto', origin='lower')
| speed_test/Mel Spectrogram Torch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# hide
# %load_ext autoreload
# %autoreload 2
# %load_ext nb_black
# %load_ext lab_black
# +
# default_exp preprocessing
# -
# # Preprocessing
# > Feature/target selection, engineering and manipulation.
#
# ## Overview
# This section provides functionality for all data manipulation steps that are needed before data is passed into a model for prediction. We group all these steps under Preprocessing. This includes feature/target selection, feature/target engineering and feature/target manipulation.
#
# Some preprocessors work with both Pandas DataFrames and NumerFrames. Most preprocessors use specific `NumerFrame` functionality.
#
# In the last section we explain how you can implement your own Preprocessor that integrates well with the rest of this framework.
# hide
from nbdev.showdoc import *
# +
# export
import os
import time
import warnings
import numpy as np
import pandas as pd
import datetime as dt
from umap import UMAP
import tensorflow as tf
from tqdm.auto import tqdm
from functools import wraps
from scipy.stats import rankdata
from typeguard import typechecked
from abc import ABC, abstractmethod
from rich import print as rich_print
from typing import Union, List, Tuple
from multiprocessing.pool import Pool
from sklearn.linear_model import Ridge
from sklearn.mixture import BayesianGaussianMixture
from sklearn.preprocessing import QuantileTransformer, MinMaxScaler
from numerblox.download import NumeraiClassicDownloader
from numerblox.numerframe import NumerFrame, create_numerframe
# -
# ## 0. Base
# These objects will provide a base for all pre- and post-processing functionality and log relevant information.
# ## 0.1. BaseProcessor
# `BaseProcessor` defines common functionality for `preprocessing` and `postprocessing` (Section 5).
#
# Every Preprocessor should inherit from `BaseProcessor` and implement the `.transform` method.
# export
class BaseProcessor(ABC):
"""Common functionality for preprocessors and postprocessors."""
def __init__(self):
...
@abstractmethod
def transform(
self, dataf: Union[pd.DataFrame, NumerFrame], *args, **kwargs
) -> NumerFrame:
...
def __call__(
self, dataf: Union[pd.DataFrame, NumerFrame], *args, **kwargs
) -> NumerFrame:
return self.transform(dataf=dataf, *args, **kwargs)
# ## 0.2. Logging
# We would like to keep an overview of which steps are done in a data pipeline and where processing bottlenecks occur.
# The decorator below will display for a given function/method:
# 1. When it has finished.
# 2. What the output shape of the data is.
# 3. How long it took to finish.
#
# To use this functionality, simply add `@display_processor_info` as a decorator to the function/method you want to track.
#
# We will use this decorator throughout the pipeline (`preprocessing`, `model` and `postprocessing`).
#
# Inspiration for this decorator: [Calmcode Pandas Pipe Logs](https://calmcode.io/pandas-pipe/logs.html)
# export
def display_processor_info(func):
"""Fancy console output for data processing."""
@wraps(func)
def wrapper(*args, **kwargs):
tic = dt.datetime.now()
result = func(*args, **kwargs)
time_taken = str(dt.datetime.now() - tic)
class_name = func.__qualname__.split(".")[0]
rich_print(
f":white_check_mark: Finished step [bold]{class_name}[/bold]. Output shape={result.shape}. Time taken for step: [blue]{time_taken}[/blue]. :white_check_mark:"
)
return result
return wrapper
# +
# hide_input
class TestDisplay:
"""
Small test for logging.
Output should mention 'TestDisplay',
Return output shape of (10, 314) and
time taken for step should be close to 2 seconds.
"""
def __init__(self, dataf: NumerFrame):
self.dataf = dataf
@display_processor_info
def test(self) -> NumerFrame:
time.sleep(2)
return self.dataf
dataf = create_numerframe("test_assets/mini_numerai_version_1_data.csv")
TestDisplay(dataf).test()
# -
# ## 1. Common preprocessing steps
#
# This section implements commonly used preprocessing for Numerai. We invite the Numerai community to develop new preprocessors.
# ## 1.0 Tournament agnostic
# Preprocessors that can be applied for both Numerai Classic and Numerai Signals.
# ### 1.0.1. CopyPreProcessor
#
# The first and obvious preprocessor is copying, which is implemented as a default in `ModelPipeline` (Section 4) to avoid manipulation of the original DataFrame or `NumerFrame` that you load in.
# export
@typechecked
class CopyPreProcessor(BaseProcessor):
"""Copy DataFrame to avoid manipulation of original DataFrame."""
def __init__(self):
super().__init__()
@display_processor_info
def transform(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame:
return NumerFrame(dataf.copy())
dataset = create_numerframe(
"test_assets/mini_numerai_version_1_data.csv", metadata={"version": 1}
)
copied_dataset = CopyPreProcessor().transform(dataset)
assert np.array_equal(copied_dataset.values, dataset.values)
assert dataset.meta == copied_dataset.meta
# ### 1.0.2. FeatureSelectionPreProcessor
#
# `FeatureSelectionPreProcessor` will keep all features that you pass + keeps all other columns that are not features.
# export
@typechecked
class FeatureSelectionPreProcessor(BaseProcessor):
"""
Keep only features given + all target, predictions and aux columns.
"""
def __init__(self, feature_cols: Union[str, list]):
super().__init__()
self.feature_cols = feature_cols
@display_processor_info
def transform(self, dataf: NumerFrame) -> NumerFrame:
keep_cols = (
self.feature_cols
+ dataf.target_cols
+ dataf.prediction_cols
+ dataf.aux_cols
)
dataf = dataf.loc[:, keep_cols]
return NumerFrame(dataf)
# +
selected_dataset = FeatureSelectionPreProcessor(
feature_cols=["feature_wisdom1"]
).transform(dataset)
assert selected_dataset.get_feature_data.shape[1] == 1
assert dataset.meta == selected_dataset.meta
# -
selected_dataset.head(2)
# ### 1.0.3. TargetSelectionPreProcessor
#
# `TargetSelectionPreProcessor` will keep all targets that you pass + all other columns that are not targets.
#
# Not relevant for an inference pipeline, but especially convenient for Numerai Classic training if you train on a subset of the available targets. Can also be applied to Signals if you are using engineered targets in your pipeline.
#
# export
@typechecked
class TargetSelectionPreProcessor(BaseProcessor):
"""
Keep only features given + all target, predictions and aux columns.
"""
def __init__(self, target_cols: Union[str, list]):
super().__init__()
self.target_cols = target_cols
@display_processor_info
def transform(self, dataf: NumerFrame) -> NumerFrame:
keep_cols = (
self.target_cols
+ dataf.feature_cols
+ dataf.prediction_cols
+ dataf.aux_cols
)
dataf = dataf.loc[:, keep_cols]
return NumerFrame(dataf)
dataset = create_numerframe(
"test_assets/mini_numerai_version_2_data.parquet", metadata={"version": 2}
)
target_cols = ["target", "target_nomi_20", "target_nomi_60"]
selected_dataset = TargetSelectionPreProcessor(target_cols=target_cols).transform(
dataset
)
assert selected_dataset.get_target_data.shape[1] == len(target_cols)
selected_dataset.head(2)
# ### 1.0.4. ReduceMemoryProcessor
#
# Numerai datasets can take up a lot of RAM and may put a strain on your compute environment.
#
# For Numerai Classic, many of the feature and target columns can be downscaled to `float16`. `int8` if you are using the Numerai int8 datasets. For Signals it depends on the features you are generating.
#
# `ReduceMemoryProcessor` downscales the type of your numeric columns to reduce the memory footprint as much as possible.
# export
class ReduceMemoryProcessor(BaseProcessor):
"""
Reduce memory usage as much as possible.
Credits to kainsama and others for writing about memory usage reduction for Numerai data:
https://forum.numer.ai/t/reducing-memory/313
:param deep_mem_inspect: Introspect the data deeply by interrogating object dtypes.
Yields a more accurate representation of memory usage if you have complex object columns.
"""
def __init__(self, deep_mem_inspect=False):
super().__init__()
self.deep_mem_inspect = deep_mem_inspect
@display_processor_info
def transform(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame:
dataf = self._reduce_mem_usage(dataf)
return NumerFrame(dataf)
def _reduce_mem_usage(self, dataf: pd.DataFrame) -> pd.DataFrame:
"""
Iterate through all columns and modify the numeric column types
to reduce memory usage.
"""
start_memory_usage = (
dataf.memory_usage(deep=self.deep_mem_inspect).sum() / 1024**2
)
rich_print(
f"Memory usage of DataFrame is [bold]{round(start_memory_usage, 2)} MB[/bold]"
)
for col in dataf.columns:
col_type = dataf[col].dtype.name
if col_type not in [
"object",
"category",
"datetime64[ns, UTC]",
"datetime64[ns]",
]:
c_min = dataf[col].min()
c_max = dataf[col].max()
if str(col_type)[:3] == "int":
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
dataf[col] = dataf[col].astype(np.int16)
elif (
c_min > np.iinfo(np.int16).min
and c_max < np.iinfo(np.int16).max
):
dataf[col] = dataf[col].astype(np.int16)
elif (
c_min > np.iinfo(np.int32).min
and c_max < np.iinfo(np.int32).max
):
dataf[col] = dataf[col].astype(np.int32)
elif (
c_min > np.iinfo(np.int64).min
and c_max < np.iinfo(np.int64).max
):
dataf[col] = dataf[col].astype(np.int64)
else:
if (
c_min > np.finfo(np.float16).min
and c_max < np.finfo(np.float16).max
):
dataf[col] = dataf[col].astype(np.float16)
elif (
c_min > np.finfo(np.float32).min
and c_max < np.finfo(np.float32).max
):
dataf[col] = dataf[col].astype(np.float32)
else:
dataf[col] = dataf[col].astype(np.float64)
end_memory_usage = (
dataf.memory_usage(deep=self.deep_mem_inspect).sum() / 1024**2
)
rich_print(
f"Memory usage after optimization is: [bold]{round(end_memory_usage, 2)} MB[/bold]"
)
rich_print(
f"[green] Usage decreased by [bold]{round(100 * (start_memory_usage - end_memory_usage) / start_memory_usage, 2)}%[/bold][/green]"
)
return dataf
dataf = create_numerframe("test_assets/mini_numerai_version_2_data.parquet")
rmp = ReduceMemoryProcessor()
dataf = rmp.transform(dataf)
# hide
dataf.head(2)
# ### 1.0.5. DeepDreamDataGenerator
# Best known for its computer vision applications, DeepDream excites activations in a trained model to augment original input. It uses sevral steps of gradient ascent to achieve this. Numerai participant [nyuton (nemethpeti on Github)](https://github.com/nemethpeti/numerai/blob/main/DeepDream/deepdream.py) implemented a way to apply this technique on Numerai data. Therefore, it allows us to generate synthetic training data. Check out `nbs/edu_nbs/synthetic_data_generation.ipynb` for experiments that demonstrate the effectiveness of using this additional data for training Numerai models.
#
# 
# Source: Example of image generated with DeepDream (deepdreamgenerator.com)
# export
class DeepDreamGenerator(BaseProcessor):
"""
Generate synthetic eras using DeepDream technique. \n
Based on implementation by nemethpeti: \n
https://github.com/nemethpeti/numerai/blob/main/DeepDream/deepdream.py
:param model_path: Path to trained DeepDream model. Example can be downloaded from \n
https://github.com/nemethpeti/numerai/blob/main/DeepDream/model.h5 \n
:param batch_size: How much synthetic data to process in each batch. \n
:param steps: Number of gradient ascent steps to perform. More steps will lead to more augmentation. \n
:param step_size: How much to augment the batch based on computed gradients. \n
Like with the number of steps, a larger step size will lead to more dramatic changes to the input features. \n
The default parameters are found to work well in practice, but could be further optimized.
"""
def __init__(
self,
model_path: str,
batch_size: int = 200_000,
steps: int = 5,
step_size: float = 0.01,
feature_names: list = None,
):
super().__init__()
tf.config.run_functions_eagerly(True)
self.model_path = model_path
self.model = self.__load_model(self.model_path)
self.batch_size = batch_size
self.steps = steps
self.step_size = step_size
self.feature_names = feature_names
@display_processor_info
def transform(self, dataf: NumerFrame) -> NumerFrame:
dream_dataf = self.get_synthetic_batch(dataf)
dataf = pd.concat([dataf, dream_dataf])
return NumerFrame(dataf)
def get_synthetic_batch(self, dataf: NumerFrame) -> NumerFrame:
"""
Produce a synthetic version of the full input dataset.
Target features will stay the same as in the original input data.
"""
features = self.feature_names if self.feature_names else dataf.feature_cols
targets = dataf.target_cols
dream_dataf = pd.DataFrame(columns=features)
for i in tqdm(
np.arange(0, len(dataf), self.batch_size),
desc="Deepdreaming Synthetic Batches",
):
start = i
end = np.minimum(i + self.batch_size - 1, len(dataf) - 1)
sub_dataf = dataf.reset_index(drop=False).iloc[start:end]
batch = tf.convert_to_tensor(
sub_dataf.loc[:, features].astype(np.float32).values
)
dream_arr = self._dream(batch)
batch_dataf = pd.DataFrame(dream_arr, columns=features)
batch_dataf[targets] = sub_dataf[targets]
dream_dataf = pd.concat([dream_dataf, batch_dataf])
return NumerFrame(dream_dataf)
def _dream(self, batch: tf.Tensor) -> np.ndarray:
"""
Perform gradient ascent on batch of data.
This loop perturbs the original features to create synthetic data.
"""
for _ in tf.range(self.steps):
with tf.GradientTape() as tape:
tape.watch(batch)
layer_activations = self.model(batch)
loss = tf.math.reduce_mean(layer_activations, -1)
gradients = tape.gradient(loss, batch)
gradients /= tf.expand_dims(tf.math.reduce_std(gradients, -1), 1) + 1e-8
# In gradient ascent, the "loss" is maximized so that the input row increasingly "excites" the layers.
batch = batch + gradients * self.step_size
batch = tf.clip_by_value(batch, 0, 1)
return batch.numpy()
@staticmethod
def __load_model(
model_path: str, output_layer_name: str = "concat"
) -> tf.keras.Model:
"""
Load in Keras model from given path.
output_layer_name will be the layer used to augment data.
"""
base_model = tf.keras.models.load_model(model_path)
base_model.compile(run_eagerly=True)
# Maximize the activations of these layers
layers = base_model.get_layer(output_layer_name).output
# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
return dream_model
# hide
directory = "deepdream_test/"
downloader = NumeraiClassicDownloader(directory_path=directory)
downloader.download_single_dataset(
filename="numerai_validation_data.parquet",
dest_path=directory + "numerai_validation_data.parquet",
)
val_dataf = create_numerframe(f"{directory}numerai_validation_data.parquet")
# For our example we will use the model open sourced by [nemethpeti](https://github.com/nemethpeti) which you can download [here](https://github.com/nemethpeti/numerai/blob/main/DeepDream/model.h5). This model works on the v3 medium feature set. We therefore use v3 data in this example. The v3 medium feature set can be easily retrieved using `NumeraiClassicDownloader`.
# hide_output
feature_set = downloader.get_classic_features(filename="v3/features.json")
feature_names = feature_set["feature_sets"]["medium"]
# [Download link to deepdream_model.h5 used here (Github).](https://github.com/nemethpeti/numerai/blob/main/DeepDream/model.h5)
ddg = DeepDreamGenerator(
model_path="test_assets/deepdream_model.h5", feature_names=feature_names
)
sample_dataf = NumerFrame(val_dataf.sample(100))
dreamed_dataf = ddg.transform(sample_dataf)
# The new dreamed `NumerFrame` consists of the original data and 100 new additional rows. Note that targets are the same.
#
# Also, `era`, `data_type` and any other columns besides features and targets will be `NaN`s.
print(dreamed_dataf.shape)
dreamed_dataf.tail()
# To only keep new synthetic data use `.get_synthetic_batch`.
synth_dataf = ddg.get_synthetic_batch(sample_dataf)
print(synth_dataf.shape)
synth_dataf.head()
# ### 1.0.6. UMAPFeatureGenerator
# Uniform Manifold Approximation and Projection (UMAP) is a dimensionality reduction technique that we can utilize to generate new Numerai features. This processor uses [umap-learn](https://pypi.org/project/umap-learn) under the hood to model the manifold. The dimension of the input data will be reduced to `n_components` number of features.
# export
class UMAPFeatureGenerator(BaseProcessor):
"""
Generate new Numerai features using UMAP. Uses umap-learn under the hood: \n
https://pypi.org/project/umap-learn/
:param n_components: How many new features to generate.
:param n_neighbors: Number of neighboring points used in local approximations of manifold structure.
:param min_dist: How tightly the embedding is allows to compress points together.
:param metric: Metric to measure distance in input space. Correlation by default.
:param feature_names: Selection of features used to perform UMAP on. All features by default.
*args, **kwargs will be passed to initialization of UMAP.
"""
def __init__(
self,
n_components: int = 5,
n_neighbors: int = 15,
min_dist: float = 0.0,
metric: str = "correlation",
feature_names: list = None,
*args,
**kwargs,
):
super().__init__()
self.n_components = n_components
self.n_neighbors = n_neighbors
self.min_dist = min_dist
self.feature_names = feature_names
self.metric = metric
self.umap = UMAP(
n_components=self.n_components,
n_neighbors=self.n_neighbors,
min_dist=self.min_dist,
metric=self.metric,
*args,
**kwargs,
)
def transform(self, dataf: NumerFrame, *args, **kwargs) -> NumerFrame:
feature_names = self.feature_names if self.feature_names else dataf.feature_cols
new_feature_data = self.umap.fit_transform(dataf[feature_names])
umap_feature_names = [f"feature_umap_{i}" for i in range(self.n_components)]
norm_new_feature_data = MinMaxScaler().fit_transform(new_feature_data)
dataf.loc[:, umap_feature_names] = norm_new_feature_data
return NumerFrame(dataf)
n_components = 3
umap_gen = UMAPFeatureGenerator(n_components=n_components, n_neighbors=9)
dataf = create_numerframe("test_assets/mini_numerai_version_2_data.parquet")
dataf = umap_gen(dataf)
# The new features will be names with the convention `f"feature_umap_{i}"`.
umap_features = [f"feature_umap_{i}" for i in range(n_components)]
dataf[umap_features].head(3)
# ## 1.1. Numerai Classic
# The Numerai Classic dataset has a certain structure that you may not encounter in the Numerai Signals tournament.
# Therefore, this section has all preprocessors that can only be applied to Numerai Classic.
# ### 1.1.0 Numerai Classic: Version agnostic
#
# Preprocessors that work for all Numerai Classic versions.
# #### 1.1.0.1. BayesianGMMTargetProcessor
# export
class BayesianGMMTargetProcessor(BaseProcessor):
"""
Generate synthetic (fake) target using a Bayesian Gaussian Mixture model. \n
Based on Michael Oliver's GitHub Gist implementation: \n
https://gist.github.com/the-moliver/dcdd2862dc2c78dda600f1b449071c93
:param target_col: Column from which to create fake target. \n
:param feature_names: Selection of features used for Bayesian GMM. All features by default.
:param n_components: Number of components for fitting Bayesian Gaussian Mixture Model.
"""
def __init__(
self,
target_col: str = "target",
feature_names: list = None,
n_components: int = 6,
):
super().__init__()
self.target_col = target_col
self.feature_names = feature_names
self.n_components = n_components
self.ridge = Ridge(fit_intercept=False)
self.bins = [0, 0.05, 0.25, 0.75, 0.95, 1]
@display_processor_info
def transform(self, dataf: NumerFrame, *args, **kwargs) -> NumerFrame:
all_eras = dataf[dataf.meta.era_col].unique()
coefs = self._get_coefs(dataf=dataf, all_eras=all_eras)
bgmm = self._fit_bgmm(coefs=coefs)
fake_target = self._generate_target(dataf=dataf, bgmm=bgmm, all_eras=all_eras)
dataf[f"{self.target_col}_fake"] = fake_target
return NumerFrame(dataf)
def _get_coefs(self, dataf: NumerFrame, all_eras: list) -> np.ndarray:
"""
Generate coefficients for BGMM.
Data should already be scaled between 0 and 1
(Already done with Numerai Classic data)
"""
coefs = []
for era in all_eras:
features, target = self.__get_features_target(dataf=dataf, era=era)
self.ridge.fit(features, target)
coefs.append(self.ridge.coef_)
stacked_coefs = np.vstack(coefs)
return stacked_coefs
def _fit_bgmm(self, coefs: np.ndarray) -> BayesianGaussianMixture:
"""
Fit Bayesian Gaussian Mixture model on coefficients and normalize.
"""
bgmm = BayesianGaussianMixture(n_components=self.n_components)
bgmm.fit(coefs)
# make probability of sampling each component equal to better balance rare regimes
bgmm.weights_[:] = 1 / self.n_components
return bgmm
def _generate_target(
self, dataf: NumerFrame, bgmm: BayesianGaussianMixture, all_eras: list
) -> np.ndarray:
"""Generate fake target using Bayesian Gaussian Mixture model."""
fake_target = []
for era in tqdm(all_eras, desc="Generating fake target"):
features, _ = self.__get_features_target(dataf=dataf, era=era)
# Sample a set of weights from GMM
beta, _ = bgmm.sample(1)
# Create fake continuous target
fake_targ = features @ beta[0]
# Bin fake target like real target
fake_targ = (rankdata(fake_targ) - 0.5) / len(fake_targ)
fake_targ = (np.digitize(fake_targ, self.bins) - 1) / 4
fake_target.append(fake_targ)
return np.concatenate(fake_target)
def __get_features_target(self, dataf: NumerFrame, era) -> tuple:
"""Get features and target for one era and center data."""
sub_df = dataf[dataf[dataf.meta.era_col] == era]
features = self.feature_names if self.feature_names else sub_df.get_feature_data
target = sub_df[self.target_col]
features = features.values - 0.5
target = target.values - 0.5
return features, target
bgmm = BayesianGMMTargetProcessor()
sample_dataf = bgmm(sample_dataf)
sample_dataf[["target", "target_fake"]].head(3)
# hide
downloader.remove_base_directory()
# ### 1.1.1. Numerai Classic: Version 1 specific
#
# Preprocessors that only work for version 1 (legacy data).
# When using version 1 preprocessor it is recommended that the input `NumerFrame` has `version` in its metadata.
# This avoids using version 1 preprocessors on version 2 data and encountering confusing error messages.
#
# As a new user we recommend to start modeling the version 2 data and avoid version 1.
# The preprocessors below are only there for legacy and compatibility reasons.
# #### 1.1.1.1. GroupStatsPreProcessor
# The version 1 legacy data has 6 groups of features which allows us to calculate aggregate features.
# export
class GroupStatsPreProcessor(BaseProcessor):
"""
WARNING: Only supported for Version 1 (legacy) data. \n
Calculate group statistics for all data groups. \n
| :param groups: Groups to create features for. All groups by default.
"""
def __init__(self, groups: list = None):
super().__init__()
self.all_groups = [
"intelligence",
"wisdom",
"charisma",
"dexterity",
"strength",
"constitution",
]
self.group_names = groups if groups else self.all_groups
@display_processor_info
def transform(self, dataf: NumerFrame, *args, **kwargs) -> NumerFrame:
"""Check validity and add group features."""
self._check_data_validity(dataf=dataf)
dataf = dataf.pipe(self._add_group_features)
return NumerFrame(dataf)
def _add_group_features(self, dataf: pd.DataFrame) -> pd.DataFrame:
"""Mean, standard deviation and skew for each group."""
for group in self.group_names:
cols = [col for col in dataf.columns if group in col]
dataf[f"feature_{group}_mean"] = dataf[cols].mean(axis=1)
dataf[f"feature_{group}_std"] = dataf[cols].std(axis=1)
dataf[f"feature_{group}_skew"] = dataf[cols].skew(axis=1)
return dataf
def _check_data_validity(self, dataf: NumerFrame):
"""Make sure this is only used for version 1 data."""
assert hasattr(
dataf.meta, "version"
), f"Version should be specified for '{self.__class__.__name__}' This Preprocessor will only work on version 1 data."
assert (
getattr(dataf.meta, "version") == 1
), f"'{self.__class__.__name__}' only works on version 1 data. Got version: '{getattr(dataf.meta, 'version')}'."
dataf = create_numerframe(
"test_assets/mini_numerai_version_1_data.csv", metadata={"version": 1}
)
group_features_dataf = GroupStatsPreProcessor().transform(dataf)
group_features_dataf.head(2)
assert group_features_dataf.meta.version == 1
# hide
new_cols = [
"feature_intelligence_mean",
"feature_intelligence_std",
"feature_intelligence_skew",
"feature_wisdom_mean",
"feature_wisdom_std",
"feature_wisdom_skew",
"feature_charisma_mean",
"feature_charisma_std",
"feature_charisma_skew",
"feature_dexterity_mean",
"feature_dexterity_std",
"feature_dexterity_skew",
"feature_strength_mean",
"feature_strength_std",
"feature_strength_skew",
"feature_constitution_mean",
"feature_constitution_std",
"feature_constitution_skew",
]
assert set(group_features_dataf.columns).intersection(new_cols)
group_features_dataf.get_feature_data[new_cols].head(2)
# `GroupStatsPreProcessor` should break if `version != 1`.
# +
# hide
def test_invalid_version(dataf: NumerFrame):
copied_dataf = dataf.copy()
copied_dataf.version = 2
try:
GroupStatsPreProcessor().transform(copied_dataf)
except AssertionError:
return True
return False
test_invalid_version(dataf)
# -
# ### 1.1.2. Numerai Classic: Version 2 specific
#
# Preprocessors that are only compatible with version 2 data. If the preprocessor is agnostic to Numerai Classic version implement under heading 1.1.0.
# +
# 1.1.2
# No version 2 specific Numerai Classic preprocessors implemented yet.
# -
# ## 1.2. Numerai Signals
#
# Preprocessors that are specific to Numerai Signals.
# ### 1.2.1. TA-Lib Features (TalibFeatureGenerator)
#
# [TA-Lib](https://mrjbq7.github.io/ta-lib) is an optimized technical analysis library. It is based on Cython and includes 150+ indicators. We have selected features based on feature importances, SHAP and correlation with the Numerai Signals target. If you want to implement other features check out the [TA-Lib documentation](https://mrjbq7.github.io/ta-lib/index.html).
#
# Installation of TA-Lib is a bit more involved than just a pip install and is an optional dependency for this library. Visit the [installation documentation](https://mrjbq7.github.io/ta-lib/install.html) for instructions.
# export
class TalibFeatureGenerator(BaseProcessor):
"""
Generate relevant features available in TA-Lib. \n
More info: https://mrjbq7.github.io/ta-lib \n
Input DataFrames for these functions should have the following columns defined:
['open', 'high', 'low', 'close', 'volume'] \n
Make sure that all values are sorted in chronological order (by ticker). \n
:param windows: List of ranges for window features.
Windows will be applied for all features specified in self.window_features. \n
:param ticker_col: Which column to groupby for feature generation.
"""
def __init__(self, windows: List[int], ticker_col: str = "bloomberg_ticker"):
self.__check_talib_import()
super().__init__()
self.windows = windows
self.ticker_col = ticker_col
self.window_features = [
"NATR",
"ADXR",
"AROONOSC",
"DX",
"MFI",
"MINUS_DI",
"MINUS_DM",
"MOM",
"ROCP",
"ROCR100",
"PLUS_DI",
"PLUS_DM",
"BETA",
"RSI",
"ULTOSC",
"TRIX",
"ADXR",
"CCI",
"CMO",
"WILLR",
]
self.no_window_features = ["AD", "OBV", "APO", "MACD", "PPO"]
self.hlocv_cols = ["open", "high", "low", "close", "volume"]
def get_no_window_features(self, dataf: pd.DataFrame):
for func in tqdm(self.no_window_features, desc="No window features"):
dataf.loc[:, f"feature_{func}"] = (
dataf.groupby(self.ticker_col)
.apply(lambda x: pd.Series(self._no_window(x, func)).bfill())
.values.astype(np.float32)
)
return dataf
def get_window_features(self, dataf: pd.DataFrame):
for win in tqdm(self.windows, position=0, desc="Window features"):
for func in tqdm(self.window_features, position=1):
dataf.loc[:, f"feature_{func}_{win}"] = (
dataf.groupby(self.ticker_col)
.apply(lambda x: pd.Series(self._window(x, func, win)).bfill())
.values.astype(np.float32)
)
return dataf
def get_all_features(self, dataf: pd.DataFrame) -> pd.DataFrame:
dataf = self.get_no_window_features(dataf)
dataf = self.get_window_features(dataf)
return dataf
def transform(self, dataf: pd.DataFrame, *args, **kwargs) -> NumerFrame:
return NumerFrame(self.get_all_features(dataf=dataf))
def _no_window(self, dataf: pd.DataFrame, func) -> pd.Series:
from talib import abstract as tab
inputs = self.__get_inputs(dataf)
if func in ["MACD"]:
# MACD outputs tuple of 3 elements (value, signal and hist)
return tab.Function(func)(inputs["close"])[0]
else:
return tab.Function(func)(inputs)
def _window(self, dataf: pd.DataFrame, func, window: int) -> pd.Series:
from talib import abstract as tab
inputs = self.__get_inputs(dataf)
if func in ["ULTOSC"]:
# ULTOSC requires 3 timeperiods as input
return tab.Function(func)(
inputs["high"],
inputs["low"],
inputs["close"],
timeperiod1=window,
timeperiod2=window * 2,
timeperiod3=window * 4,
)
else:
return tab.Function(func)(inputs, timeperiod=window)
def __get_inputs(self, dataf: pd.DataFrame) -> dict:
return {col: dataf[col].values.astype(np.float64) for col in self.hlocv_cols}
@staticmethod
def __check_talib_import():
try:
from talib import abstract as tab
except ImportError:
raise ImportError(
"TA-Lib is not installed for this environment. If you are using this class make sure to have TA-Lib installed. check https://mrjbq7.github.io/ta-lib/install.html for instructions on installation."
)
# +
# hide
# Example usage
# dataf = pd.DataFrame() # Your Signals DataFrame here.
# tfg = TalibFeatureGenerator(windows=[10, 20, 40], ticker_col="bloomberg_ticker")
# ta_dataf = tfg.transform(dataf=dataf)
# ta_dataf.head(2)
# -
# ### 1.2.2. KatsuFeatureGenerator
#
# [Katsu1110](https://www.kaggle.com/code1110) provides an excellent and fast feature engineering scheme in his [Kaggle notebook on starting with Numerai Signals](https://www.kaggle.com/code1110/numeraisignals-starter-for-beginners). It is surprisingly effective, fast and works well for modeling. This preprocessor is based on his feature engineering setup in that notebook.
#
# Features generated:
# 1. MACD and MACD signal
# 2. RSI
# 3. Percentage rate of return
# 4. Volatility
# 5. MA (moving average) gap
#
# export
class KatsuFeatureGenerator(BaseProcessor):
"""
Effective feature engineering setup based on Katsu's starter notebook.
Based on source by Katsu1110: https://www.kaggle.com/code1110/numeraisignals-starter-for-beginners
:param windows: Time interval to apply for window features: \n
1. Percentage Rate of change \n
2. Volatility \n
3. Moving Average gap \n
:param ticker_col: Columns with tickers to iterate over. \n
:param close_col: Column name where you have closing price stored.
"""
warnings.filterwarnings("ignore")
def __init__(
self,
windows: list,
ticker_col: str = "ticker",
close_col: str = "close",
num_cores: int = None,
):
super().__init__()
self.windows = windows
self.ticker_col = ticker_col
self.close_col = close_col
self.num_cores = num_cores if num_cores else os.cpu_count()
@display_processor_info
def transform(self, dataf: Union[pd.DataFrame, NumerFrame]) -> NumerFrame:
"""Multiprocessing feature engineering."""
tickers = dataf.loc[:, self.ticker_col].unique().tolist()
rich_print(
f"Feature engineering for {len(tickers)} tickers using {self.num_cores} CPU cores."
)
dataf_list = [
x
for _, x in tqdm(
dataf.groupby(self.ticker_col), desc="Generating ticker DataFrames"
)
]
dataf = self._generate_features(dataf_list=dataf_list)
return NumerFrame(dataf)
def feature_engineering(self, dataf: pd.DataFrame) -> pd.DataFrame:
"""Feature engineering for single ticker."""
close_series = dataf.loc[:, self.close_col]
for x in self.windows:
dataf.loc[
:, f"feature_{self.close_col}_ROCP_{x}"
] = close_series.pct_change(x)
dataf.loc[:, f"feature_{self.close_col}_VOL_{x}"] = (
np.log1p(close_series).pct_change().rolling(x).std()
)
dataf.loc[:, f"feature_{self.close_col}_MA_gap_{x}"] = (
close_series / close_series.rolling(x).mean()
)
dataf.loc[:, "feature_RSI"] = self._rsi(close_series)
macd, macd_signal = self._macd(close_series)
dataf.loc[:, "feature_MACD"] = macd
dataf.loc[:, "feature_MACD_signal"] = macd_signal
return dataf.bfill()
def _generate_features(self, dataf_list: list) -> pd.DataFrame:
"""Add features for list of ticker DataFrames and concatenate."""
with Pool(self.num_cores) as p:
feature_datafs = list(
tqdm(
p.imap(self.feature_engineering, dataf_list),
desc="Generating features",
total=len(dataf_list),
)
)
return pd.concat(feature_datafs)
@staticmethod
def _rsi(close: pd.Series, period: int = 14) -> pd.Series:
"""
See source https://github.com/peerchemist/finta
and fix https://www.tradingview.com/wiki/Talk:Relative_Strength_Index_(RSI)
"""
delta = close.diff()
up, down = delta.copy(), delta.copy()
up[up < 0] = 0
down[down > 0] = 0
gain = up.ewm(com=(period - 1), min_periods=period).mean()
loss = down.abs().ewm(com=(period - 1), min_periods=period).mean()
rs = gain / loss
return pd.Series(100 - (100 / (1 + rs)))
def _macd(
self, close: pd.Series, span1=12, span2=26, span3=9
) -> Tuple[pd.Series, pd.Series]:
"""Compute MACD and MACD signal."""
exp1 = self.__ema1(close, span1)
exp2 = self.__ema1(close, span2)
macd = 100 * (exp1 - exp2) / exp2
signal = self.__ema1(macd, span3)
return macd, signal
@staticmethod
def __ema1(series: pd.Series, span: int) -> pd.Series:
"""Exponential moving average"""
a = 2 / (span + 1)
return series.ewm(alpha=a).mean()
# +
# other
from numerblox.download import KaggleDownloader
# Get price data from Kaggle
home_dir = "katsu_features_test/"
kd = KaggleDownloader(home_dir)
kd.download_training_data("code1110/yfinance-stock-price-data-for-numerai-signals")
# -
# other
dataf = create_numerframe(f"{home_dir}/full_data.parquet")
dataf.loc[:, "friday_date"] = dataf["date"]
# Take 500 ticker sample for test
dataf = dataf[dataf["ticker"].isin(dataf["ticker"].unique()[:500])]
# other
kfpp = KatsuFeatureGenerator(windows=[20, 40, 60], num_cores=8)
new_dataf = kfpp.transform(dataf)
# 12 features are generated in this test (3*3 window features + 3 non window features).
# other
new_dataf.sort_values(["ticker", "date"]).get_feature_data.tail(2)
# ### 1.2.3. EraQuantileProcessor
# Numerai Signals' objective is predicting a ranking of equities. Therefore, we can benefit from creating rankings out of the features. Doing this reduces noise and works as a normalization mechanism for your features. `EraQuantileProcessor` bins features in a given number of quantiles for each era in the dataset.
# export
class EraQuantileProcessor(BaseProcessor):
"""
Transform features into quantiles on a per-era basis
:param num_quantiles: Number of buckets to split data into. \n
:param era_col: Era column name in the dataframe to perform each transformation. \n
:param features: All features that you want quantized. All feature cols by default. \n
:param num_cores: CPU cores to allocate for quantile transforming. All available cores by default. \n
:param random_state: Seed for QuantileTransformer.
"""
def __init__(
self,
num_quantiles: int = 50,
era_col: str = "friday_date",
features: list = None,
num_cores: int = None,
random_state: int = 0,
):
super().__init__()
self.num_quantiles = num_quantiles
self.era_col = era_col
self.num_cores = num_cores if num_cores else os.cpu_count()
self.features = features
self.random_state = random_state
def _process_eras(self, groupby_object):
quantizer = QuantileTransformer(
n_quantiles=self.num_quantiles, random_state=self.random_state
)
qt = lambda x: quantizer.fit_transform(x.values.reshape(-1, 1)).ravel()
column = groupby_object.transform(qt)
return column
@display_processor_info
def transform(
self,
dataf: Union[pd.DataFrame, NumerFrame],
) -> NumerFrame:
"""Multiprocessing quantile transforms by era."""
self.features = self.features if self.features else dataf.feature_cols
rich_print(
f"Quantiling for {len(self.features)} features using {self.num_cores} CPU cores."
)
date_groups = dataf.groupby(self.era_col)
groupby_objects = [date_groups[feature] for feature in self.features]
with Pool() as p:
results = list(
tqdm(
p.imap(self._process_eras, groupby_objects),
total=len(groupby_objects),
)
)
quantiles = pd.concat(results, axis=1)
dataf[
[f"{feature}_quantile{self.num_quantiles}" for feature in self.features]
] = quantiles
return NumerFrame(dataf)
# other
new_dataf = new_dataf.sample(10000)
era_quantiler = EraQuantileProcessor(num_quantiles=50)
era_dataf = era_quantiler.transform(new_dataf)
# other
era_dataf.get_feature_data.tail(2)
# other
# hide
kd.remove_base_directory()
# ### 1.2.4. TickerMapper
#
# Numerai Signals data APIs may work with different ticker formats. Our goal with `TickerMapper` is to map `ticker_col` to `target_ticker_format`.
# export
class TickerMapper(BaseProcessor):
"""
Map ticker from one format to another. \n
:param ticker_col: Column used for mapping. Must already be present in the input data. \n
:param target_ticker_format: Format to map tickers to. Must be present in the ticker map. \n
Supported ticker formats are: ['ticker', 'bloomberg_ticker', 'yahoo']
"""
def __init__(
self, ticker_col: str = "ticker", target_ticker_format: str = "bloomberg_ticker"
):
super().__init__()
self.ticker_col = ticker_col
self.target_ticker_format = target_ticker_format
self.signals_map_path = "https://numerai-signals-public-data.s3-us-west-2.amazonaws.com/signals_ticker_map_w_bbg.csv"
self.ticker_map = pd.read_csv(self.signals_map_path)
assert (
self.ticker_col in self.ticker_map.columns
), f"Ticker column '{self.ticker_col}' is not available in ticker mapping."
assert (
self.target_ticker_format in self.ticker_map.columns
), f"Target ticker column '{self.target_ticker_format}' is not available in ticker mapping."
self.mapping = dict(
self.ticker_map[[self.ticker_col, self.target_ticker_format]].values
)
@display_processor_info
def transform(
self, dataf: Union[pd.DataFrame, NumerFrame], *args, **kwargs
) -> NumerFrame:
dataf[self.target_ticker_format] = dataf[self.ticker_col].map(self.mapping)
return NumerFrame(dataf)
test_dataf = pd.DataFrame(["AAPL", "MSFT"], columns=["ticker"])
mapper = TickerMapper()
mapper.transform(test_dataf)
# ### 1.2.5. SignalsTargetProcessor
#
# Numerai provides [targets for 5000 stocks](https://docs.numer.ai/numerai-signals/signals-overview#universe) that are neutralized against all sorts of factors. However, it can be helpful to experiment with creating your own targets. You might want to explore different windows, different target binning and/or neutralization. `SignalsTargetProcessor` engineers 3 different targets for every given windows:
# - `_raw`: Raw return based on price movements.
# - `_rank`: Ranks of raw return.
# - `_group`: Binned returns based on rank.
#
# Note that Numerai provides targets based on 4-day returns and 20-day returns. While you can explore any window you like, it makes sense to start with `windows` close to these timeframes.
#
# For the `bins` argument there are also many options possible. The followed are commonly used binning:
# - Nomi bins: `[0, 0.05, 0.25, 0.75, 0.95, 1]`
# - Uniform bins: `[0, 0.20, 0.40, 0.60, 0.80, 1]`
# export
class SignalsTargetProcessor(BaseProcessor):
"""
Engineer targets for Numerai Signals. \n
More information on implements Numerai Signals targets: \n
https://forum.numer.ai/t/decoding-the-signals-target/2501
:param price_col: Column from which target will be derived. \n
:param windows: Timeframes to use for engineering targets. 10 and 20-day by default. \n
:param bins: Binning used to create group targets. Nomi binning by default. \n
:param labels: Scaling for binned target. Must be same length as resulting bins (bins-1). Numerai labels by default.
"""
def __init__(
self,
price_col: str = "close",
windows: list = None,
bins: list = None,
labels: list = None,
):
super().__init__()
self.price_col = price_col
self.windows = windows if windows else [10, 20]
self.bins = bins if bins else [0, 0.05, 0.25, 0.75, 0.95, 1]
self.labels = labels if labels else [0, 0.25, 0.50, 0.75, 1]
@display_processor_info
def transform(self, dataf: NumerFrame) -> NumerFrame:
for window in tqdm(self.windows, desc="Signals target engineering windows"):
dataf.loc[:, f"target_{window}d_raw"] = (
dataf[self.price_col].pct_change(periods=window).shift(-window)
)
era_groups = dataf.groupby(dataf.meta.era_col)
dataf.loc[:, f"target_{window}d_rank"] = era_groups[
f"target_{window}d_raw"
].rank(pct=True, method="first")
dataf.loc[:, f"target_{window}d_group"] = era_groups[
f"target_{window}d_rank"
].transform(
lambda group: pd.cut(
group, bins=self.bins, labels=self.labels, include_lowest=True
)
)
return NumerFrame(dataf)
# other
stp = SignalsTargetProcessor()
era_dataf.meta.era_col = "date"
new_target_dataf = stp.transform(era_dataf)
new_target_dataf.get_target_data.head(2)
# ### 1.2.6. LagPreProcessor
# Many models like Gradient Boosting Machines (GBMs) don't learn any time-series patterns by itself. However, if we create lags of our features the models will pick up on time dependencies between features. `LagPreProcessor` create lag features for given features and windows.
# export
class LagPreProcessor(BaseProcessor):
"""
Add lag features based on given windows.
:param windows: All lag windows to process for all features. \n
[5, 10, 15, 20] by default (4 weeks lookback) \n
:param ticker_col: Column name for grouping by tickers. \n
:param feature_names: All features for which you want to create lags. All features by default.
"""
def __init__(
self,
windows: list = None,
ticker_col: str = "bloomberg_ticker",
feature_names: list = None,
):
super().__init__()
self.windows = windows if windows else [5, 10, 15, 20]
self.ticker_col = ticker_col
self.feature_names = feature_names
@display_processor_info
def transform(self, dataf: NumerFrame, *args, **kwargs) -> NumerFrame:
feature_names = self.feature_names if self.feature_names else dataf.feature_cols
ticker_groups = dataf.groupby(self.ticker_col)
for feature in tqdm(feature_names, desc="Lag feature generation"):
feature_group = ticker_groups[feature]
for day in self.windows:
shifted = feature_group.shift(day, axis=0)
dataf.loc[:, f"{feature}_lag{day}"] = shifted
return NumerFrame(dataf)
# other
lpp = LagPreProcessor(ticker_col="ticker", feature_names=["close", "volume"])
dataf = lpp(dataf)
# All lag features will contain `lag` in the column name.
# other
dataf.get_pattern_data("lag").tail(2)
# ### 1.2.7. DifferencePreProcessor
# After creating lags with the `LagPreProcessor`, it may be useful to create new features that calculate the difference between those lags. Through this process in `DifferencePreProcessor`, we can provide models with more time-series related patterns.
# export
class DifferencePreProcessor(BaseProcessor):
"""
Add difference features based on given windows. Run LagPreProcessor first.
:param windows: All lag windows to process for all features. \n
:param feature_names: All features for which you want to create differences. All features that also have lags by default. \n
:param pct_change: Method to calculate differences. If True, will calculate differences with a percentage change. Otherwise calculates a simple difference. Defaults to False \n
:param abs_diff: Whether to also calculate the absolute value of all differences. Defaults to True \n
"""
def __init__(
self,
windows: list = None,
feature_names: list = None,
pct_diff: bool = False,
abs_diff: bool = False,
):
super().__init__()
self.windows = windows if windows else [5, 10, 15, 20]
self.feature_names = feature_names
self.pct_diff = pct_diff
self.abs_diff = abs_diff
@display_processor_info
def transform(self, dataf: NumerFrame, *args, **kwargs) -> NumerFrame:
feature_names = self.feature_names if self.feature_names else dataf.feature_cols
for feature in tqdm(self.feature_names, desc="Difference feature generation"):
lag_columns = dataf.get_pattern_data(f"{feature}_lag").columns
if not lag_columns.empty:
for day in self.windows:
differenced_values = (
(dataf[feature] / dataf[f"{feature}_lag{day}"]) - 1
if self.pct_diff
else dataf[feature] - dataf[f"{feature}_lag{day}"]
)
dataf[f"{feature}_diff{day}"] = differenced_values
if self.abs_diff:
dataf[f"{feature}_absdiff{day}"] = np.abs(
dataf[f"{feature}_diff{day}"]
)
else:
rich_print(
f":warning: WARNING: Skipping {feature}. Lag features for feature: {feature} were not detected. Have you already run LagPreProcessor? :warning:"
)
return NumerFrame(dataf)
# other
dpp = DifferencePreProcessor(
feature_names=["close", "volume"], windows=[5, 10, 15, 20], pct_diff=True
)
dataf = dpp.transform(dataf)
# All difference features will contain `diff` in the column name.
# other
dataf.get_pattern_data("diff").tail(2)
# ## 2. Custom preprocessors
# There are an almost unlimited number of ways to preprocess (selection, engineering and manipulation). We have only scratched the surface with the preprocessors currently implemented. We invite the Numerai community to develop Numerai Classic and Numerai Signals preprocessors.
#
# A new Preprocessor should inherit from `BaseProcessor` and implement a `transform` method. For efficient implementation, we recommend you use `NumerFrame` functionality for preprocessing. You can also support Pandas DataFrame input as long as the `transform` method returns a `NumerFrame`. This ensures that the Preprocessor still works within a full `numerai-blocks` pipeline. A template for new preprocessors is given below.
#
# To enable fancy logging output. Add the `@display_processor_info` decorator to the `transform` method.
# export
class AwesomePreProcessor(BaseProcessor):
""" TEMPLATE - Do some awesome preprocessing. """
def __init__(self):
super().__init__()
@display_processor_info
def transform(self, dataf: NumerFrame, *args, **kwargs) -> NumerFrame:
# Do processing
...
# Parse all contents of NumerFrame to the next pipeline step
return NumerFrame(dataf)
# -------------------------------------------
# +
# hide
# Run this cell to sync all changes with library
from nbdev.export import notebook2script
notebook2script()
# -
| nbs/03_preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# slow down a bit when hacking something together, e.g. I forgot to add a simple function call
# tuple unpacking is nice, but cannot be done in a nested list comprehension
# don't forget .items in for k,v in dict.items()
# use hashlib for md5 encodings
# multiline list comprehensions don't need extra parentheses, but multiline if statements do
# np.clip min and max can be omitted by specifying None
# try except looks nice untill it obscures your real error
# parsing ints to ints instead of strings is really important
# checking whether someting is an int should be done with isinstance, not with isalpha() (fails on int)
# removing from a list while iterating can be done safely by iterating over a slice(?)
# with re make sure to use r'' literal strings
# read assignment before tinkering with networkx and discovering its not necessary
# sometimes a simple for loop works better then a list comprehension when parsing the input, and just add to concept variables
from dataclasses import dataclass
from math import gcd, ceil
import re
from collections import Counter, defaultdict, namedtuple, deque
import itertools
import numpy as np
from matplotlib import pyplot as plt
import aoc
import networkx as nx
# +
inp = '1113122113'
def say(inp):
res = ''
prev = ''
count = 0
for s in inp:
# print(s, count)
if s == prev:
count += 1
else:
if count > 0: res += str(count)+prev
count = 1
prev = s
if count > 0: res += str(count)+prev
return res
for _ in range(50):
print(_)
inp = say(inp)
print(len(inp))
# +
from itertools import groupby
def look_and_say(input_string, num_iterations):
for i in range(num_iterations):
input_string = ''.join([str(len(list(g))) + str(k) for k, g in groupby(input_string)])
return input_string
len(look_and_say('1113122113',50))
# +
import re
re_d = re.compile(r'((\d)\2*)')
def replace(match_obj):
s = match_obj.group(1)
return str(len(s)) + s[0]
s = '1321131112'
for i in range(50):
s = re_d.sub(replace,s)
print (len(s))
# -
| advent_of_code_2015/day 10/solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DGA Detection, Modeling and Learning
#
# ## Load Data
import pandas as pd
import os
import torch
import torch.utils.data
# +
# # !ls -la data
# -
# should be the name of directory you created to save your features data
data_dir = 'data'
# +
# take a look at some matsnu example domains
from dga import matsnu
for i in range(10):
print(matsnu.generate_domain())
# +
matsnu_list = []
for i in range(10000):
matsnu_list.append(matsnu.generate_domain())
matsnu_df = pd.DataFrame(matsnu_list, columns=['domain'])
print("Matsnu Shape:", matsnu_df.shape)
matsnu_df.head()
# -
# save in data file
matsnu_df.to_csv(data_dir + "/matsnu.csv")
# alex top 1 million
read_csv(data_dir + "/")
# ---
#
# # Modeling
#
# ---
#
# Directory of train.py
# !pygmentize model/train.py
# +
# torch imports
import torch
import torch.nn.functional as F
import torch.nn as nn
class LSTMClassifier(nn.Module):
"""
This is the simple RNN model we will be using to perform Sentiment Analysis.
"""
def __init__(self, embedding_dim, hidden_dim, vocab_size):
"""
Initialize the model by settingg up the various layers.
"""
super(LSTMClassifier, self).__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
self.dense = nn.Linear(in_features=hidden_dim, out_features=1)
self.sig = nn.Sigmoid()
self.word_dict = None
def forward(self, x):
"""
Perform a forward pass of our model on some input.
"""
x = x.t()
lengths = x[0,:]
reviews = x[1:,:]
embeds = self.embedding(reviews)
lstm_out, _ = self.lstm(embeds)
out = self.dense(lstm_out)
out = out[lengths - 1, range(len(lengths))]
return self.sig(out.squeeze())
# -
# ---
# # Create an Estimator
#
#
#
# ## Define PyTorch estimators
# +
train_data = pd.read_csv(os.path.join(data_dir, "train.csv"), header=None, names=None)
train_y = torch.from_numpy(train_data[[0]].values).float().squeeze()
train_x = torch.from_numpy(train_data.drop([0], axis=1).values).float()
train_ds = torch.utils.data.TensorDataset(train_x, train_y)
train_sample_dl = torch.utils.data.DataLoader(train_ds, batch_size=10)
# -
# Training function for LSTM
def train_lstm(model, train_loader, epochs, criterion, optimizer, device):
"""
This is the training method that is called by the PyTorch training script of the LSTM model. The parameters
passed are as follows:
model - The PyTorch model that we wish to train.
train_loader - The PyTorch DataLoader that should be used during training.
epochs - The total number of epochs to train for.
criterion - The loss function used for training.
optimizer - The optimizer to use during training.
device - Where the model and data should be loaded (gpu or cpu).
"""
# training loop is provided
for epoch in range(1, epochs + 1):
model.train() # Make sure that the model is in training mode.
total_loss = 0
for batch in train_loader:
# get data
batch_x, batch_y = batch
#
batch_x = torch.from_numpy(batch_x).float().squeeze()
batch_y = torch.from_numpy(batch_y).float()
batch_x = batch_x.to(device)
batch_y = batch_y.to(device)
optimizer.zero_grad()
model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_dim),
torch.zeros(1, 1, model.hidden_layer_dim))
# get predictions from model
y_pred = model(batch_x)
# perform backprop
loss = criterion(y_pred, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
if epoch%25 == 1:
print("Epoch: {}, Loss: {}".format(epoch, total_loss / len(train_loader)))
# +
import torch.optim as optim
# from model.LSTM_Estimator import LSTMEstimator
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMEstimator(8, 30, 1, 8)
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_fn = torch.nn.L1Loss()
train_lstm(model, processed_data, 100, loss_fn, optimizer, device)
# -
| 3_Train_Deploy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Comparing IQ Scores for Different Test Groups by Using a Box Plot
# In this exercise, we will compare IQ scores among different test groups using a box plot of the Seaborn library to demonstrate how easy and efficient it is to create plots with Seaborn provided that we have a proper DataFrame. This exercise also shows how to quickly change the style and context of a Figure using the preconfigurations supplied by Seaborn.
# Let's compare IQ scores among different test groups using the Seaborn library:
# Import the necessary modules and enable plotting within a Jupyter Notebook.
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Use the pandas read_csv() function to read the data located in the data folder:
mydata = pd.read_csv("../../Datasets/iq_scores.csv")
# Access the data of each test group in the column. Convert them into a list using the tolist() method. Once the data of each test group has been converted into a list, assign this list to variables of each respective test group:
group_a = mydata[mydata.columns[0]].tolist()
group_b = mydata[mydata.columns[1]].tolist()
group_c = mydata[mydata.columns[2]].tolist()
group_d = mydata[mydata.columns[3]].tolist()
# Print the values of each group to check whether the data inside it is converted into a list. This can be done with the help of the print() function:
print(group_a)
print(group_b)
print(group_c)
print(group_d)
# Once we have the data for each test group, we need to construct a DataFrame from this given data. This can be done with the help of the pd.DataFrame() function, which is provided by pandas.
data = pd.DataFrame({'Groups': ['Group A'] * len(group_a) + ['Group B'] * len(group_b) + ['Group C'] * len(group_c) + ['Group D'] * len(group_d),
'IQ score': group_a + group_b + group_c + group_d})
# If you don’t create your own DataFrame, it is often helpful to print the column names, which is done by calling print(data.columns). The output is as follows:
print(data.columns)
# You can see that our DataFrame has two variables with the labels Groups and IQ score. This is especially interesting since we can use them to specify which variable to plot on the x-axis and which one on the y-axis.
# Now, since we have the DataFrame, we need to create a box plot using the boxplot() function provided by Seaborn. Within this function, specify the variables for both the axes along with the DataFrame. Make Groups the variable to plot on the x-axis and IQ score the variable for the y-axis. Pass data as a parameter. Here, data is the DataFrame that we obtained from the previous step. Moreover, use the whitegrid style, set the context to talk, and remove all axes splines, except the one on the bottom.
plt.figure(dpi=300)
# Set style to 'whitegrid'
sns.set_style('whitegrid')
# Set context to 'talk'
sns.set_context('talk')
# Create boxplot
sns.boxplot('Groups', 'IQ score', data=data)
# Despine
sns.despine(left=True, right=True, top=True)
# Add title
plt.title('IQ scores for different test groups')
# Show plot
plt.show()
# The despine() function helps in removing the top and right spines from the plot by default (without passing any arguments to the function). Here, we have also removed the left spine. Using the title() function, we have set the title for our plot. The show() function visualizes the plot.
| Chapter04/Exercise4.01/Exercise4.01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Online Retails Purchase ***
# ### Step 1. Import the necessary libraries
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# set the graphs to show in the jupyter notebook
# %matplotlib inline
# set seaborn graphs to a better style
sns.set(style="ticks")
# -
# ### Step 2. Import the dataset `Online_Retail.csv` from `data`.
# ### Step 3. Assign it to a variable called `online_rt`
# Note: if you receive a utf-8 decode error, set `encoding = 'latin1'` in `pd.read_csv()`.
online_rt = pd.read_csv('../data/Online_Retail.csv', encoding = 'latin1')
online_rt.head()
# ### Step 4. Create a histogram with the 10 countries that have the most 'Quantity' ordered except UK
countries = online_rt.groupby('Country').sum()
countries_sorted = countries.sort_values('Quantity', ascending = False)[1:11]
countries_sorted['Quantity']
# +
plt.bar(countries_sorted['Quantity'].index, countries_sorted['Quantity'].values,\
width = 0.5, color = ['b', 'orange', 'r', 'c', 'm', 'y', 'pink', 'lightblue', 'grey', 'g'])
plt.xticks(rotation = 90)
plt.xlabel('Countries')
plt.ylabel('Quantity')
plt.title('10 Countries with more orders')
# -
# ### Step 5. Exclude negative Quantity entries
online_rt = online_rt[online_rt['Quantity'] > 0]
online_rt.head()
# ### Step 6. Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries (except UK)
# +
customers = online_rt.groupby(['CustomerID', 'Country'], as_index = False).sum()
top_countries = ['Netherlands', 'EIRE', 'Germany']
customers = customers[customers['Country'].isin(top_countries)]
# +
#1 Matplotlib
germany = customers[customers['Country'] == 'Germany']
EIRE = customers[customers['Country'] == 'EIRE']
netherlands = customers[customers['Country'] == 'Netherlands']
fig = plt.figure(figsize = (11,6))
ax = fig.add_subplot(2,3,1)
ax.scatter(germany['Quantity'], germany['UnitPrice'])
ax.set(title = "Country = Germany", ylabel = "UnitPrice", xlabel = "Quantity")
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax1 = fig.add_subplot(2,3,2)
ax1.scatter(netherlands['Quantity'], netherlands['UnitPrice'])
ax1.set(title = "Country = Netherlands", xlabel = "Quantity")
plt.setp(ax1.get_yticklabels(), visible=False)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax2 = fig.add_subplot(2,3,3)
ax2.scatter(EIRE['Quantity'], EIRE['UnitPrice'])
ax2.set(title = "Country = EIRE", xlabel = "Quantity")
plt.setp(ax2.get_yticklabels(), visible=False)
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax_list = [ax, ax1, ax2]
ax_list[1].get_shared_x_axes().join(ax, ax1, ax2)
ax_list[2].get_shared_y_axes().join(ax, ax1, ax2)
# +
#2 Seaborn
sea = sns.FacetGrid(customers, col="Country")
sea.map(plt.scatter, "Quantity", "UnitPrice", alpha=1)
sea.add_legend()
# -
# ### Step 7. Investigate why the previous results look so uninformative.
#
# This section might seem a bit tedious to go through. But I've thought of it as some kind of a simulation of problems one might encounter when dealing with data and other people. Besides there is a prize at the end (i.e. Section 8).
#
# (But feel free to jump right ahead into Section 8 if you want; it doesn't require that you finish this section.)
#
# #### Step 7.1 Look at the first line of code in Step 6. And try to figure out if it leads to any kind of problem.
# ##### Step 7.1.1 Display the first few rows of that DataFrame.
online_rt.groupby(['CustomerID','Country']).sum().head()
# ##### Step 7.1.2 Think about what that piece of code does and display the dtype of `UnitPrice`
customers['UnitPrice'].dtype
# ##### Step 7.1.3 Pull data from `online_rt`for `CustomerID`s 12346.0 and 12347.0.
display(online_rt[online_rt.CustomerID == 12347.0].
sort_values(by='UnitPrice', ascending = False).head())
display(online_rt[online_rt.CustomerID == 12346.0].
sort_values(by='UnitPrice', ascending = False).head())
# #### Step 7.2 Reinterpreting the initial problem.
#
# To reiterate the question that we were dealing with:
# "Create a scatterplot with the Quantity per UnitPrice by CustomerID for the top 3 Countries"
#
# The question is open to a set of different interpretations.
# We need to disambiguate.
#
# We could do a single plot by looking at all the data from the top 3 countries.
# Or we could do one plot per country. To keep things consistent with the rest of the exercise,
# let's stick to the latter oprion. So that's settled.
#
# But "top 3 countries" with respect to what? Two answers suggest themselves:
# Total sales volume (i.e. total quantity sold) or total sales (i.e. revenue).
# This exercise goes for sales volume, so let's stick to that.
#
# ##### Step 7.2.1 Find out the top 3 countries in terms of sales volume.
# +
sales_volume = online_rt.groupby('Country')['Quantity'].sum().sort_values(ascending=False)
top3 = sales_volume.index[1:4]
top3.values
# -
# ##### Step 7.2.2
#
# Now that we have the top 3 countries, we can focus on the rest of the problem:
# "Quantity per UnitPrice by CustomerID".
# We need to unpack that.
#
# "by CustomerID" part is easy. That means we're going to be plotting one dot per CustomerID's on our plot. In other words, we're going to be grouping by CustomerID.
#
# "Quantity per UnitPrice" is trickier. Here's what we know:
# *One axis will represent a Quantity assigned to a given customer. This is easy; we can just plot the total Quantity for each customer.
# *The other axis will represent a UnitPrice assigned to a given customer. Remember a single customer can have any number of orders with different prices, so summing up prices isn't quite helpful. Besides it's not quite clear what we mean when we say "unit price per customer"; it sounds like price of the customer! A reasonable alternative is that we assign each customer the average amount each has paid per item. So let's settle that question in that manner.
#
# #### Step 7.3 Modify, select and plot data
# ##### Step 7.3.1 Add a column to online_rt called `Revenue` calculate the revenue (Quantity * UnitPrice) from each sale.
# We will use this later to figure out an average price per customer.
online_rt['Revenue'] = online_rt.Quantity * online_rt.UnitPrice
online_rt.head()
# ##### Step 7.3.2 Group by `CustomerID` and `Country` and find out the average price (`AvgPrice`) each customer spends per unit.
# +
grouped = online_rt[online_rt.Country.isin(top3)].groupby(['CustomerID','Country'])
plottable = grouped[['Quantity','Revenue']].agg('sum')
plottable['AvgPrice'] = plottable.Revenue / plottable.Quantity
plottable['Country'] = plottable.index.get_level_values(1)
plottable.head()
# -
# ##### Step 7.3.3 Plot
# +
#1 Matplotlib
germany = plottable[plottable['Country'] == 'Germany']
EIRE = plottable[plottable['Country'] == 'EIRE']
netherlands = plottable[plottable['Country'] == 'Netherlands']
fig = plt.figure(figsize = (11,6))
ax = fig.add_subplot(2,3,1)
ax.scatter(germany['Quantity'], germany['AvgPrice'])
ax.set(title = "Country = Germany", ylabel = "AvgPrice", xlabel = "Quantity")
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax1 = fig.add_subplot(2,3,2)
ax1.scatter(netherlands['Quantity'], netherlands['AvgPrice'])
ax1.set(title = "Country = Netherlands", xlabel = "Quantity")
plt.setp(ax1.get_yticklabels(), visible=False)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax2 = fig.add_subplot(2,3,3)
ax2.scatter(EIRE['Quantity'], EIRE['AvgPrice'])
ax2.set(title = "Country = EIRE", xlabel = "Quantity")
plt.setp(ax2.get_yticklabels(), visible=False)
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax_list = [ax, ax1, ax2]
ax_list[1].get_shared_x_axes().join(ax, ax1, ax2)
ax_list[2].get_shared_y_axes().join(ax, ax1, ax2)
# +
#2 Seaborn
sea1 = sns.FacetGrid(plottable, col="Country")
sea1.map(plt.scatter, "Quantity", "AvgPrice", alpha=1)
sea1.add_legend();
# -
# #### Step 7.4 What to do now?
# We aren't much better-off than what we started with. The data are still extremely scattered around and don't seem quite informative.
#
# But we shouldn't despair!
# There are two things to realize:
# 1) The data seem to be skewed towaards the axes (e.g. we don't have any values where Quantity = 50000 and AvgPrice = 5). So that might suggest a trend.
# 2) We have more data! We've only been looking at the data from 3 different countries and they are plotted on different graphs.
#
# So: we should plot the data regardless of `Country` and hopefully see a less scattered graph.
#
# ##### Step 7.4.1 Plot the data for each `CustomerID` on a single graph
# +
plottable1 = online_rt.groupby(['CustomerID'])[['Quantity','Revenue']].agg('sum')
plottable1['AvgPrice'] = plottable1.Revenue / plottable1.Quantity
plt.scatter(plottable1.Quantity, plottable1.AvgPrice)
plt.plot()
# -
# ##### Step 7.4.2 Zoom in so we can see that curve more clearly
# +
plottable1 = online_rt.groupby(['CustomerID'])[['Quantity','Revenue']].agg('sum')
plottable1['AvgPrice'] = plottable1.Revenue / plottable1.Quantity
plt.scatter(plottable1.Quantity, plottable1.AvgPrice)
plt.xlim(-40,2000)
plt.ylim(-1,80)
#incluimos valores negativos para que se vean los puntos completos
# -
# ### 8. Plot a line chart showing revenue (y) per UnitPrice (x).
#
# Did Step 7 give us any insights about the data? Sure! As average price increases, the quantity ordered decreses. But that's hardly surprising. It would be surprising if that wasn't the case!
#
# Nevertheless the rate of drop in quantity is so drastic, it makes me wonder how our revenue changes with respect to item price. It would not be that surprising if it didn't change that much. But it would be interesting to know whether most of our revenue comes from expensive or inexpensive items, and how that relation looks like.
#
# That is what we are going to do now.
#
# #### 8.1 Group `UnitPrice` by intervals of 1 for prices [0,50), and sum `Quantity` and `Revenue`.
# +
price_start = 0
price_end = 50
price_interval = 1
buckets = np.arange(price_start,price_end,price_interval)
online_rt['UnitBucketPrice'] = pd.cut(online_rt['UnitPrice'], buckets)
#https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html
online_rt['UnitBucketPrice'].to_frame()
# -
revenue_per_price = online_rt.groupby('UnitBucketPrice')['Revenue'].sum()
revenue_per_price.head()
# #### 8.3 Plot.
# +
plt.plot(revenue_per_price.values)
plt.xlabel('Unit Price (in intervals of '+str(price_interval)+')')
plt.ylabel('Revenue')
# -
# #### 8.4 Make it look nicer.
# x-axis needs values.
# y-axis isn't that easy to read; show in terms of millions.
# +
plt.plot(revenue_per_price.values)
plt.xlabel('Unit Price (in buckets of '+str(price_interval)+')')
plt.ylabel('Revenue')
plt.xticks(np.arange(price_start,price_end,3),
np.arange(price_start,price_end,3))
plt.yticks([0, 500000, 1000000, 1500000, 2000000, 2500000],
['0', '$0.5M', '$1M', '$1.5M', '$2M', '$2.5M'])
# -
| Task_Visualization/Online_Retail_viz-RES.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Baseline TREC-6: TEXT Classification + BERT + Ax
# ## Librairies
# +
# # !pip install transformers==4.8.2
# # !pip install datasets==1.7.0
# # !pip install ax-platform==0.1.20
# -
import os
import sys
# +
import io
import re
import pickle
from timeit import default_timer as timer
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from datasets import load_dataset, Dataset, concatenate_datasets
from transformers import AutoTokenizer
from transformers import BertModel
from transformers.data.data_collator import DataCollatorWithPadding
from ax import optimize
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
from ax.utils.notebook.plotting import render, init_notebook_plotting
import esntorch.core.reservoir as res
import esntorch.core.learning_algo as la
import esntorch.core.merging_strategy as ms
import esntorch.core.baseline as bs
# -
# %config Completer.use_jedi = False
# %load_ext autoreload
# %autoreload 2
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
SEED = 42
# ## Global variables
# +
RESULTS_PATH = '~/Results/Ax_results/Baseline' # path of your result folder
CACHE_DIR = '~/Data/huggignface/' # path of your folder
PARAMS_FILE = 'trec-6_baseline_params.pkl'
RESULTS_FILE = 'trec-6_baseline_results.pkl'
# -
# ## Dataset
# +
# rename correct column as 'labels': depends on the dataset you load
def load_and_enrich_dataset(dataset_name, split, cache_dir):
dataset = load_dataset(dataset_name, split=split, cache_dir=CACHE_DIR)
dataset = dataset.rename_column('label-coarse', 'labels') # 'label-fine'
dataset = dataset.map(lambda e: tokenizer(e['text'], truncation=True, padding=False), batched=True)
dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
def add_lengths(sample):
sample["lengths"] = sum(sample["input_ids"] != 0)
return sample
dataset = dataset.map(add_lengths, batched=False)
return dataset
# +
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
full_train_dataset = load_and_enrich_dataset('trec', split='train', cache_dir=CACHE_DIR).sort("lengths")
train_val_datasets = full_train_dataset.train_test_split(train_size=0.8, shuffle=True)
train_dataset = train_val_datasets['train'].sort("lengths")
val_dataset = train_val_datasets['test'].sort("lengths")
test_dataset = load_and_enrich_dataset('trec', split='test', cache_dir=CACHE_DIR).sort("lengths")
dataset_d = {
'full_train': full_train_dataset,
'train': train_dataset,
'val': val_dataset,
'test': test_dataset
}
dataloader_d = {}
for k, v in dataset_d.items():
dataloader_d[k] = torch.utils.data.DataLoader(v, batch_size=256, collate_fn=DataCollatorWithPadding(tokenizer))
# -
dataset_d
# ## Optimization
def fitness(alpha,
dataset_d,
dataloader_d,
return_test_acc=False):
# parameters
esn_params = {
'embedding_weights': 'bert-base-uncased', # TEXT.vocab.vectors,
'input_dim' : 768, # dim of encoding!
'learning_algo' : None,
'criterion' : None,
'optimizer' : None,
'merging_strategy' : 'mean',
'lexicon' : None,
'bidirectional' : False,
'device': device,
'seed' : 42
}
# model
ESN = bs.Baseline(**esn_params)
ESN.learning_algo = la.RidgeRegression(alpha = alpha)# , mode='normalize')
ESN = ESN.to(device)
# predict
if return_test_acc:
t0 = timer()
LOSS = ESN.fit(dataloader_d["train"])
t1 = timer()
acc = ESN.predict(dataloader_d["test"], verbose=False)[1].item()
else:
LOSS = ESN.fit(dataloader_d["train"])
acc = ESN.predict(dataloader_d["val"], verbose=False)[1].item()
# clean objects
del ESN.learning_algo
del ESN.criterion
del ESN.merging_strategy
del ESN
torch.cuda.empty_cache()
if return_test_acc:
return acc, t1 - t0
else:
return acc
# +
# # %%time
# fitness(alpha=10, dataset_d=dataset_d, dataloader_d=dataloader_d)
# -
def wrapped_fitness(d, return_test_acc=False):
return fitness(alpha=d['alpha'],
dataset_d=dataset_d,
dataloader_d=dataloader_d,
return_test_acc=return_test_acc)
# +
best_params_d = {}
best_parameters, best_values, experiment, model = optimize(
parameters=[
{
"name": "alpha",
"value_type": "float",
"type": "range",
"log_scale": True,
"bounds": [1e-3, 1e3],
}
],
# Booth function
evaluation_function = wrapped_fitness,
minimize = False,
objective_name = 'val_accuracy',
total_trials = 10
)
# results
best_params_d['best_parameters'] = best_parameters
best_params_d['best_values'] = best_values
best_params_d['experiment'] = experiment
# best_params_d[res_dim]['model'] = model
# -
# ## Results
# +
# best parameters
with open(os.path.join(RESULTS_PATH, PARAMS_FILE), 'wb') as fh:
pickle.dump(best_params_d, fh)
# +
# # load results
# with open(os.path.join(RESULTS_PATH, PARAMS_FILE), 'rb') as fh:
# best_params_d = pickle.load(fh)
# -
best_params_d
# +
# results
best_parameters = best_params_d['best_parameters']
acc, time = wrapped_fitness(best_parameters, return_test_acc=True)
results_tuple = acc, time
print("Experiment finished.")
# -
results_tuple
with open(os.path.join(RESULTS_PATH, RESULTS_FILE), 'wb') as fh:
pickle.dump(results_tuple, fh)
| notebooks_paper_2021/Baselines/Baseline_TREC-6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Documentation
# > From the TSNE analysis for transformer encoder model, there may be too many hidden states output by transformer, and so the PCA and subsequent TSNE have the linear pattern. Perform a SVD to see how each PC account for the variance.
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import numpy as np
import pandas as pd
# ## Transformer Encoder
hn_motortoolkit = np.load("../data/hn_transformerencoder_motortoolkit.npy")
hn_pfammotors= np.load("../data/hn_transformerencoder_pfammotors.npy")
hn_dfdev = np.load("../data/hn_transformerencoder_dfdev.npy")
print(hn_motortoolkit.shape)
print(hn_pfammotors.shape)
print(hn_dfdev.shape)
u, s, v = np.linalg.svd(hn_motortoolkit)
s[0:10]
u, s, v = np.linalg.svd(hn_pfammotors[1:10000,:])
s[0:10]
u, s, v = np.linalg.svd(hn_dfdev[110000:120000,:])
s[0:10]
# From the analysis, we could tell that only one principle component dominate over, and so the hidden states are actually very linearly dependent. The network need to be randomized/further inspected to see if next token prediction is not effective
# With above analysis, perform it also on LSTM5 and Seq2Seq result to see if only one PC dominates entirely
# ## Seq2Seq
hn_motortoolkit = np.load("../data/hn_s2sencoder_motortoolkit.npy")
hn_pfammotors= np.load("../data/hn_s2sencoder_pfammotors.npy")
hn_dfdev = np.load("../data/hn_s2sencoder_dfdev.npy")
u, s, v = np.linalg.svd(hn_motortoolkit)
s[0:10]
u, s, v = np.linalg.svd(hn_pfammotors[1:20000,:])
s[0:10]
u, s, v = np.linalg.svd(hn_dfdev[110000:120000,:])
s[0:10]
# ## LSTM5
hn_motortoolkit = np.load("../data/hn_lstm5_motortoolkit.npy")
hn_pfammotors= np.load("../data/hn_lstm5_pfammotors.npy")
hn_dfdev = np.load("../data/hn_lstm5_dfdev.npy")
print(hn_motortoolkit.shape)
print(hn_pfammotors.shape)
print(hn_dfdev.shape)
u, s, v = np.linalg.svd(hn_motortoolkit)
s[0:10]
u, s, v = np.linalg.svd(hn_pfammotors[1:20000,:])
s[0:10]
u, s, v = np.linalg.svd(hn_dfdev[110000:120000,:])
s[0:10]
# From PCA, it seems only transformer model have the pc of super large magnitude
# ## Perform the analysis but first normalize each hidden dimension
hn_motortoolkit = np.load("../../data/first_try/hn_transformerencoder_motortoolkit.npy")
hn_pfammotors= np.load("../../data/first_try/hn_transformerencoder_pfammotors.npy")
hn_dfdev = np.load("../../data/first_try/hn_transformerencoder_dfdev.npy")
print(hn_motortoolkit.shape)
print(hn_pfammotors.shape)
print(hn_dfdev.shape)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(hn_motortoolkit)
hn_motortoolkit = scaler.transform(hn_motortoolkit)
u, s, v = np.linalg.svd(hn_motortoolkit)
s[0:10]
scaler = StandardScaler()
scaler.fit(hn_pfammotors)
hn_pfammotors = scaler.transform(hn_pfammotors)
u, s, v = np.linalg.svd(hn_pfammotors[1:20000,:])
s[0:10]
# +
# The function reduced the variance being accounted from x100000 to x10000 and so did help to a limited extent
# -
| code/first_try/dimension_analysis_201022.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="cn5_7mqggh2H"
# ## sigMF STFT on GPU
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 2251, "status": "ok", "timestamp": 1548951950015, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11966704463856227449"}, "user_tz": 300} id="r80FflgHhCiH" outputId="143411b2-cc11-47a1-c334-a76291219798"
import os
import itertools
from sklearn.utils import shuffle
import torch, torchvision
import torch.nn as nn
import torch.nn.functional as d
import torch.optim as optim
import torch.nn.functional as F
import torch.nn.modules as mod
import torch.utils.data
import torch.utils.data as data
from torch.nn.utils.rnn import pack_padded_sequence
from torch.nn.utils.rnn import pad_packed_sequence
from torch.autograd import Variable
import numpy as np
import sys
import importlib
import time
import matplotlib.pyplot as plt
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torchvision.utils import save_image
import librosa
from scipy import signal
from scipy import stats
from scipy.special import comb
import matplotlib.pyplot as plt
import glob
import json
import pickle
from random import randint, choice
import random
from timeit import default_timer as timer
from torch import istft
# from torchaudio.functional import istft
from sklearn.decomposition import NMF
global GPU, n_fft
GPU = 1
Fs = 1000000
n_fft = 1024
plt.style.use('default')
device = torch.device('cuda:1')
print('Torch version =', torch.__version__, 'CUDA version =', torch.version.cuda)
print('CUDA Device:', device)
print('Is cuda available? =',torch.cuda.is_available())
# +
# # %matplotlib notebook
# # %matplotlib inline
# + [markdown] colab_type="text" id="2t_9_D3l0Px9"
# #### Machine paths
# -
path_save = "/home/david/sigMF_ML/SVD/UDV_matrix/" # ace
path = "/home/david/sigMF_ML/SVD/" # ACE
# path = "/home/david/sigMF_ML/class2/data3/" # ACE
print(path)
os.chdir(path)
db = np.fromfile("UHF_vodeson_snr_hi.sigmf-data", dtype="float32")
# +
def meta_encoder(meta_list, num_classes):
a = np.asarray(meta_list, dtype=int)
return a
def read_meta(meta_files):
meta_list = []
for meta in meta_files:
all_meta_data = json.load(open(meta))
meta_list.append(all_meta_data['global']["core:class"])
meta_list = list(map(int, meta_list))
return meta_list
def read_num_val(x):
x = len(meta_list_val)
return x
# -
# #### torch GPU Cuda stft
def gpu(db):
I = db[0::2]
Q = db[1::2]
start = timer()
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda(GPU)
I_stft = torch.stft(torch.tensor(I).cuda(GPU), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=True)
Q_stft = torch.stft(torch.tensor(Q).cuda(GPU), n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=True)
X_stft = I_stft[...,0] + Q_stft[...,0] + I_stft[...,1] + -1*Q_stft[...,1]
X_stft = torch.cat((X_stft[n_fft//2:],X_stft[:n_fft//2]))
end = timer()
gpu_stft_time = end - start
print('GPU STFT time = ', gpu_stft_time)
torch.cuda.empty_cache()
return X_stft, I_stft, Q_stft, gpu_stft_time
# #### scipy CPU stft function
def cpu(db):
t = len(db)
db2 = db[0::]
start = timer()
db = db.astype(np.float32).view(np.complex64)
I_t, I_f, Z = signal.stft(db, fs=Fs, nperseg=n_fft, return_onesided=False)
Z = np.vstack([Z[n_fft//2:], Z[:n_fft//2]])
end = timer()
cpu_stft_time = end - start
print('CPU STFT time = ', cpu_stft_time)
return Z, cpu_stft_time
# ### GPU Timing: first time slowest
stft_gpu, I_stft, Q_stft, gpu_stft_time = gpu(db)
I_stft.shape, Q_stft.shape, stft_gpu.shape
plt.figure(figsize=(9, 6))
fig3 = plt.figure()
plt.imshow(20*np.log10(np.abs(stft_gpu.cpu()+1e-8)), aspect='auto', origin='lower')
title = "Vodeson Original spectrum"
plt.title(title)
plt.xlabel('Time in bins')
plt.ylabel('Frequency bins(1Khz resolution)')
plt.minorticks_on()
# plt.yticks(np.arange(0,60, 6))
fig3.savefig('vodeson_full_spectrum.pdf', format="pdf")
plt.show()
# #### GPU SVD
def udv_stft(I_stft,Q_stft):
start = timer()
U_I0, D_I0, V_I0 = torch.svd(I_stft[...,0])
U_I1, D_I1, V_I1 = torch.svd(I_stft[...,1])
U_Q0, D_Q0, V_Q0 = torch.svd(Q_stft[...,0])
U_Q1, D_Q1, V_Q1 = torch.svd(Q_stft[...,1])
end = timer()
usv_stft_time = end - start
print('SVD time: ',usv_stft_time)
return U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1, usv_stft_time
# #### Inverse stft
def ISTFT(db):
w = n_fft
win = torch.hann_window(w, periodic=True, dtype=None, layout=torch.strided, requires_grad=False).cuda(GPU)
start = timer()
Z = istft(db, n_fft=n_fft, hop_length=n_fft//2, win_length=w, window=win, center=True, normalized=True, onesided=True)
end = timer()
istft_time = end - start
print('ISTFT time = ',istft_time)
torch.cuda.empty_cache()
return Z, istft_time
# #### Re-combine UDV to approximate original signal
def udv(u, d, v, k):
# print('u shape = ', u.shape)
# print('d shape = ', d.shape)
# print('v shape = ', v.shape)
start = timer()
UD = torch.mul(u[:, :k], d[:k])
print('UD shape = ', UD.shape)
v = torch.transpose(v,1,0)
UDV = torch.mm(UD, v[:k, :])
end = timer()
udv_time = end - start
# print('u new shape = ', u[:, :k].shape)
# print('d new shape = ', d[:k].shape)
# print('v new shape = ', v[:k, :].shape)
print('UDV time: ',udv_time)
return UDV, udv_time
def udv_from_file(u, d, v):
start = timer()
# print('u shape = ', u.shape)
# print('d shape = ', d.shape)
# print('v shape = ', v.shape)
UD = torch.mul(u[:, :], d[:])
# print('UD shape = ', UD.shape)
v = torch.transpose(v,1,0)
UDV = torch.mm(UD, v[:, :])
end = timer()
udv_time = end - start
print('UDV time: ',udv_time)
return UDV, udv_time
print(path_save)
os.chdir(path_save)
# +
# np.save('I_stft', I_stft.detach().cpu().numpy())
# np.save('Q_stft', Q_stft.detach().cpu().numpy())
# -
# ### Main function to run all sub function calls
def complete_gpu(num):
stft_gpu, I_stft, Q_stft, gpu_stft_time = gpu(db)
U_I0, D_I0, V_I0, U_I1, D_I1, V_I1, U_Q0, D_Q0, V_Q0, U_Q1, D_Q1, V_Q1, udv_time = udv_stft(I_stft,Q_stft)
torch.cuda.empty_cache()
print('UDV I0 shapes = ',U_I0.shape, D_I0.shape, V_I0.shape)
print('UDV I1 shapes = ',U_I1.shape, D_I1.shape, V_I1.shape)
print('UDV Q0 shapes = ', U_Q0.shape, D_Q0.shape, V_Q0.shape)
print('UDV Q1 shapes = ', U_Q1.shape, D_Q1.shape, V_Q1.shape)
# ------------ I0 ------------------------------------------------------
np.save('U_I0', U_I0[:, :num].detach().cpu().numpy())
np.save('D_I0', D_I0[:num].detach().cpu().numpy())
np.save('V_I0', V_I0[:, :num].detach().cpu().numpy())
# print('saved V_IO size = ', V_I0[:, :num].shape)
# ------------ I1 ------------------------------------------------------
np.save('U_I1', U_I1[:, :num].detach().cpu().numpy())
np.save('D_I1', D_I1[:num].detach().cpu().numpy())
np.save('V_I1', V_I1[:, :num].detach().cpu().numpy())
# print('saved V_I1 size = ', V_I1[:, :num].shape)
# ------------ Q0 ------------------------------------------------------
np.save('U_Q0', U_Q0[:, :num].detach().cpu().numpy())
np.save('D_Q0', D_Q0[:num].detach().cpu().numpy())
np.save('V_Q0', V_Q0[:, :num].detach().cpu().numpy())
# print('saved V_QO size = ', V_Q0[:, :num].shape)
# ------------ Q1 ------------------------------------------------------
np.save('U_Q1', U_Q1[:, :num].detach().cpu().numpy())
np.save('D_Q1', D_Q1[:num].detach().cpu().numpy())
np.save('V_Q1', V_Q1[:, :num].detach().cpu().numpy())
# print('saved V_Q1 size = ', V_Q1[:, :num].shape)
# -----------------------------------------------------------------------
udv_I0, udv_time1 = udv(U_I0, D_I0, V_I0,num)
udv_I1, udv_time2 = udv(U_I1, D_I1, V_I1,num)
udv_Q0, udv_time3 = udv(U_Q0, D_Q0, V_Q0,num)
udv_Q1, udv_time4 = udv(U_Q1, D_Q1, V_Q1,num)
torch.cuda.empty_cache()
print('udv I shapes = ',udv_I0.shape,udv_I1.shape)
print('udv Q shapes = ',udv_Q0.shape,udv_Q1.shape)
# -------------stack and transpose----------------------------------------
start_misc = timer()
UDV_I = torch.stack([udv_I0,udv_I1])
UDV_I = torch.transpose(UDV_I,2,0)
UDV_I = torch.transpose(UDV_I,1,0)
UDV_Q = torch.stack([udv_Q0,udv_Q1])
UDV_Q = torch.transpose(UDV_Q,2,0)
UDV_Q = torch.transpose(UDV_Q,1,0)
stop_misc = timer()
misc_time = stop_misc - start_misc
torch.cuda.empty_cache()
#--------------------------------------------------------------------------
I, istft_time1 = ISTFT(UDV_I)
Q, istft_time2 = ISTFT(UDV_Q)
torch.cuda.empty_cache()
I = I.detach().cpu().numpy()
Q = Q.detach().cpu().numpy()
end = len(I)*2
IQ_SVD = np.zeros(len(I)*2) # I and Q must be same length
IQ_SVD[0:end:2] = I
IQ_SVD[1:end:2] = Q
time_sum = gpu_stft_time+udv_time+misc_time+udv_time1+udv_time2+udv_time3+udv_time4+istft_time1+istft_time2
IQ_SVD = IQ_SVD.astype(np.float32).view(np.complex64)
return IQ_SVD, time_sum
torch.cuda.empty_cache()
# ### Perform SVD on IQ stft data
num = 2 # number to reconstruct SVD matrix from
IQ_SVD, time_sum = complete_gpu(num)
time_sum # double sided = true GPU only all functions
# ### Write reconstructed IQ file to file
from array import array
IQ_file = open("vod_clean_svd2", 'wb')
IQ_SVD.tofile(IQ_file)
IQ_file.close()
# #### load arrays for reconstruction
def udv_file_reconstruct():
os.chdir(path_save)
# ****** D **************
D_I0 = np.load('D_I0.npy')
D_I1 = np.load('D_I1.npy')
D_Q0 = np.load('D_Q0.npy')
D_Q1 = np.load('D_Q1.npy')
# ****** U **************
U_I0 = np.load('U_I0.npy')
U_I1 = np.load('U_I1.npy')
U_Q0 = np.load('U_Q0.npy')
U_Q1 = np.load('U_Q1.npy')
# ****** V **************
V_I0 = np.load('V_I0.npy')
V_I1 = np.load('V_I1.npy')
V_Q0 = np.load('V_Q0.npy')
V_Q1 = np.load('V_Q1.npy')
# ****** d to torch **************
d_i0 = torch.tensor(D_I0).cuda(GPU)
d_i1 = torch.tensor(D_I1).cuda(GPU)
d_q0 = torch.tensor(D_Q0).cuda(GPU)
d_q1 = torch.tensor(D_Q1).cuda(GPU)
# ****** u to torch **************
u_i0 = torch.tensor(U_I0).cuda(GPU)
u_i1 = torch.tensor(U_I1).cuda(GPU)
u_q0 = torch.tensor(U_Q0).cuda(GPU)
u_q1 = torch.tensor(U_Q1).cuda(GPU)
# ****** v to torch **************
v_i0 = torch.tensor(V_I0).cuda(GPU)
v_i1 = torch.tensor(V_I1).cuda(GPU)
v_q0 = torch.tensor(V_Q0).cuda(GPU)
v_q1 = torch.tensor(V_Q1).cuda(GPU)
# ****** reconstruction *********************
udv_I0, udv_time1 = udv_from_file(u_i0, d_i0, v_i0)
udv_I1, udv_time2 = udv_from_file(u_i1, d_i1, v_i1)
udv_Q0, udv_time3 = udv_from_file(u_q0, d_q0, v_q0)
udv_Q1, udv_time4 = udv_from_file(u_q1, d_q1, v_q1)
torch.cuda.empty_cache()
print('udv I shapes = ',udv_I0.shape,udv_I1.shape)
print('udv Q shapes = ',udv_Q0.shape,udv_Q1.shape)
# -------------stack and transpose----------------------------------------
start_misc = timer()
UDV_I = torch.stack([udv_I0,udv_I1])
UDV_I = torch.transpose(UDV_I,2,0)
UDV_I = torch.transpose(UDV_I,1,0)
UDV_Q = torch.stack([udv_Q0,udv_Q1])
UDV_Q = torch.transpose(UDV_Q,2,0)
UDV_Q = torch.transpose(UDV_Q,1,0)
stop_misc = timer()
misc_time = stop_misc - start_misc
torch.cuda.empty_cache()
#--------------------------------------------------------------------------
I, istft_time1 = ISTFT(UDV_I)
Q, istft_time2 = ISTFT(UDV_Q)
torch.cuda.empty_cache()
I = I.detach().cpu().numpy()
Q = Q.detach().cpu().numpy()
end = len(I)*2
IQ_SVD = np.zeros(len(I)*2) # I and Q must be same length
IQ_SVD[0:end:2] = I
IQ_SVD[1:end:2] = Q
time_sum = misc_time+udv_time1+udv_time2+udv_time3+udv_time4+istft_time1+istft_time2
IQ_SVD = IQ_SVD.astype(np.float32).view(np.complex64)
torch.cuda.empty_cache()
return IQ_SVD, time_sum
IQ_SVD2, time_sum2 = udv_file_reconstruct()
time_sum2
from array import array
IQ_file = open("vod_clean_svd2", 'wb')
IQ_SVD2.tofile(IQ_file)
IQ_file.close()
| GPU only STFT SVD reconstruction 1MSPS times calculated FINAL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import gym, importlib, sys, warnings, IPython
import tensorflow as tf
import itertools
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %autosave 240
warnings.filterwarnings("ignore")
print(tf.__version__)
# -
sys.path.append('../../embodied_arch/')
import embodied as emg
from embodied_misc import ActionPolicyNetwork, SensoriumNetworkTemplate
importlib.reload(emg)
# ## Cartpole Benchmark Setup
actor = lambda s: ActionPolicyNetwork(s, hSeq=(10,), gamma_reg=1e-1)
sensor = lambda st, out_dim: SensoriumNetworkTemplate(st, hSeq=(16,8,), out_dim=out_dim, gamma_reg=5.)
# +
tf.reset_default_graph()
importlib.reload(emg)
env = gym.make('CartPole-v0')
# cprf = emg.EmbodiedAgentRF(
# name="cp-emb", env_=env,
# space_size = (4,1),latentDim=4,
# alpha=1e6, actorNN=actor, sensorium=sensor
# )
cprf = emg.EmbodiedAgentRFBaselined(
name="cp-emb-b", env_=env,
space_size = (4,1),latentDim=4,
alpha_p=5e3, alpha_v=5e2, actorNN=actor, sensorium=sensor
)
print(cprf, cprf.s_size, cprf.a_size)
# +
saver = tf.train.Saver(max_to_keep=1) #n_epochs = 1000
sess = tf.InteractiveSession()
cprf.init_graph(sess)
num_episodes = 100
n_epochs = 751
# +
## Verify step + play set up
state = cprf.env.reset()
print(state, cprf.act(state, sess))
# cprf.env.step(cprf.act(state, sess))
cprf.play(sess)
cprf.episode_length()
# -
# ## Baseline
print('Baselining untrained pnet...')
uplen0 = []
for k in range(num_episodes):
cprf.play(sess, terminal_reward=0.)
uplen0.append(cprf.last_total_return) # uplen0.append(len(cprf.episode_buffer))
if k%20 == 0: print("\rEpisode {}/{}".format(k, num_episodes),end="")
base_perf = np.mean(uplen0)
print("\nCartpole stays up for an average of {} steps".format(base_perf))
st = cprf.env.reset()
# ## Train
# Train pnet on cartpole episodes
print('Training...')
saver = tf.train.Saver(max_to_keep=1)
cprf.work(sess, saver, num_epochs=n_epochs);
# ## Test
# Test pnet!
print('Testing...')
uplen = []
for k in range(num_episodes):
cprf.play(sess, terminal_reward=0.)
uplen.append(cprf.last_total_return) # uplen.append(len(cprf.episode_buffer))
if k%20 == 0: print("\rEpisode {}/{}".format(k, num_episodes),end="")
trained_perf = np.mean(uplen)
print("\nCartpole stays up for an average of {} steps compared to baseline {} steps".format(trained_perf, base_perf) )
# ## Evaluate
fig, axs = plt.subplots(2, 1, sharex=True)
sns.boxplot(uplen0, ax = axs[0])
axs[0].set_title('Baseline Episode Lengths')
sns.boxplot(uplen, ax = axs[1])
axs[1].set_title('Trained Episode Lengths')
# +
# buf = []
# last_total_return, d, s = 0, False, cprf.env.reset()
# while (len(buf) < 1000) and not d:
# a_t = cprf.act(s, sess)
# s1, r, d, *rest = cprf.env.step(a_t)
# cprf.env.render()
# buf.append([s, a_t, float(r), s1])
# last_total_return += float(r)
# s = s1
# print("\r\tEpisode Length", len(buf), end="")
# +
# sess.close()
# +
# fdic = {
# cprf.states_St:[st, st],
# cprf.actions_At :np.vstack([1,1])
# }
# print(sess.run(
# [cprf.a_prob, cprf.a_logit],
# feed_dict=fdic)
# )
# print(sess.run(
# [cprf.entropy],
# feed_dict=fdic)
# )
# print(
# np.squeeze(
# sess.run([cprf.lnPi_t],
# feed_dict=fdic)
# )
# )
# +
# sess.run(
# tf.one_hot(tf.cast(tf.reshape(cprf.actions_At, shape=[-1]), dtype=tf.uint8), depth=2), #
# feed_dict={cprf.actions_At :np.vstack([1,0,1,1,0,0])}
# )
| embodied_arch/unit-tests/recertify-CP-RF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %load_ext autoreload
# %autoreload 2
# # %matplotlib inline
import os
from pathlib import Path
import warnings
import numpy as np
import nibabel as nb
import pandas as pd
import matplotlib as mpl
mpl.use('pgf')
from matplotlib import pyplot as plt
from matplotlib import gridspec, colors
import seaborn as sn
import palettable
from niworkflows.data import get_template
from nilearn.image import concat_imgs, mean_img
from nilearn import plotting
warnings.simplefilter('ignore')
DATA_HOME = Path(os.getenv('FMRIPREP_DATA_HOME', os.getcwd())).resolve()
DS030_HOME = DATA_HOME / 'ds000030' / '1.0.3'
DERIVS_HOME = DS030_HOME / 'derivatives'
ATLAS_HOME = get_template('MNI152NLin2009cAsym')
ANALYSIS_HOME = DERIVS_HOME / 'fmriprep_vs_feat_2.0-oe'
fprep_home = DERIVS_HOME / 'fmriprep_1.0.8' / 'fmriprep'
feat_home = DERIVS_HOME / 'fslfeat_5.0.10' / 'featbids'
out_folder = Path(os.getenv('FMRIPREP_OUTPUTS') or '').resolve()
# Load MNI152 nonlinear, asymmetric 2009c atlas
atlas = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_T1w.nii.gz'))
mask1mm = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-01_brainmask.nii.gz')).get_data() > 0
mask2mm = nb.load(str(ATLAS_HOME / 'tpl-MNI152NLin2009cAsym_space-MNI_res-02_brainmask.nii.gz')).get_data() > 0
data = atlas.get_data()
data[~mask1mm] = data[~mask1mm].max()
atlas = nb.Nifti1Image(data, atlas.affine, atlas.header)
# +
# sn.set_style("whitegrid", {
# 'ytick.major.size': 5,
# 'xtick.major.size': 5,
# })
# sn.set_context("notebook", font_scale=1.5)
# pgf_with_custom_preamble = {
# 'ytick.major.size': 0,
# 'xtick.major.size': 0,
# 'font.size': 30,
# 'font.sans-serif': ['HelveticaLTStd-Light'],
# 'font.family': 'sans-serif', # use serif/main font for text elements
# 'text.usetex': False, # use inline math for ticks
# }
# mpl.rcParams.update(pgf_with_custom_preamble)
pgf_with_custom_preamble = {
'text.usetex': True, # use inline math for ticks
'pgf.rcfonts': False, # don't setup fonts from rc parameters
'pgf.texsystem': 'xelatex',
'verbose.level': 'debug-annoying',
"pgf.preamble": [
r"""\usepackage{fontspec}
\setsansfont{HelveticaLTStd-Light}[
Extension=.otf,
BoldFont=HelveticaLTStd-Bold,
ItalicFont=HelveticaLTStd-LightObl,
BoldItalicFont=HelveticaLTStd-BoldObl,
]
\setmainfont{HelveticaLTStd-Light}[
Extension=.otf,
BoldFont=HelveticaLTStd-Bold,
ItalicFont=HelveticaLTStd-LightObl,
BoldItalicFont=HelveticaLTStd-BoldObl,
]
\setmonofont{Inconsolata-dz}
""",
r'\renewcommand\familydefault{\sfdefault}',
# r'\setsansfont[Extension=.otf]{Helvetica-LightOblique}',
# r'\setmainfont[Extension=.ttf]{DejaVuSansCondensed}',
# r'\setmainfont[Extension=.otf]{FiraSans-Light}',
# r'\setsansfont[Extension=.otf]{FiraSans-Light}',
]
}
mpl.rcParams.update(pgf_with_custom_preamble)
# +
res_shape = np.array(mask1mm.shape[:3])
bbox = np.argwhere(mask1mm)
new_origin = np.clip(bbox.min(0) - 5, a_min=0, a_max=None)
new_end = np.clip(bbox.max(0) + 5, a_min=0,
a_max=res_shape - 1)
# Find new origin, and set into new affine
new_affine_4 = atlas.affine.copy()
new_affine_4[:3, 3] = new_affine_4[:3, :3].dot(
new_origin) + new_affine_4[:3, 3]
cropped_atlas = atlas.__class__(
atlas.get_data()[new_origin[0]:new_end[0], new_origin[1]:new_end[1], new_origin[2]:new_end[2]],
new_affine_4,
atlas.header
)
# +
plt.clf()
fig = plt.gcf()
_ = fig.set_size_inches(12, 10)
gs = gridspec.GridSpec(4, 3, height_ratios=[15, 15, 4, 0.5], hspace=0.0, wspace=0.)
ax1 = plt.subplot(gs[0, :])
ax2 = plt.subplot(gs[1, :])
cut_coords = [0, 15, 30]
disp = plotting.plot_stat_map(str(ANALYSIS_HOME / 'acm_fpre.nii.gz'),
bg_img=cropped_atlas, threshold=0.25, display_mode='z',
cut_coords=cut_coords, vmax=0.8, alpha=0.8,
axes=ax1, colorbar=False, annotate=False)
disp.annotate(size=20, left_right=True, positions=True)
disp = plotting.plot_stat_map(str(ANALYSIS_HOME / 'acm_feat.nii.gz'),
bg_img=cropped_atlas, threshold=0.25, display_mode='z',
cut_coords=cut_coords, vmax=0.8, alpha=0.8,
axes=ax2, colorbar=False, annotate=False)
disp.annotate(size=24, left_right=False, positions=False, scalebar=True,
loc=3, size_vertical=2, label_top=False, frameon=True, borderpad=0.1,
bg_color='None')
ax1.annotate(
'fMRIPrep',
xy=(0., .5), xycoords='axes fraction', xytext=(-40, .0),
textcoords='offset points', va='center', color='k', size=24,
rotation=90);
ax2.annotate(
r'\texttt{feat}',
xy=(0., .5), xycoords='axes fraction', xytext=(-40, .0),
textcoords='offset points', va='center', color='k', size=24,
rotation=90);
ax3 = fig.add_subplot(gs[3, 2])
cmap = plotting.cm.cold_hot
gradient = np.linspace(-0.8, 0.8, cmap.N)
# istart = int(norm(-offset, clip=True) * (our_cmap.N - 1))
# istop = int(norm(offset, clip=True) * (our_cmap.N - 1))
GRAY = (0.85, 0.85, 0.85, 1.)
cmaplist = []
for i in range(cmap.N):
cmaplist += [cmap(i)] if not -0.25 < gradient[i] < 0.25 else [GRAY] # just an average gray color
cmap = colors.LinearSegmentedColormap.from_list('Custom cmap', cmaplist, cmap.N)
th_index = cmaplist.index(GRAY)
gradient = np.vstack((gradient, gradient))
ax3.imshow(gradient, aspect='auto', cmap=cmap)
ax3.set_title(r'\noindent\parbox{7.5cm}{\centering\textbf{Fraction of participants} \\ with significant response}',
size=18, position=(0.5, 3.0))
ax3.xaxis.set_ticklabels(['80\%', '25\%', '25\%', '80\%'], size=20)
ax3.xaxis.set_ticks([0, th_index, cmap.N - th_index - 1, cmap.N])
ax3.yaxis.set_ticklabels([])
ax3.yaxis.set_ticks([])
for pos in ['top', 'bottom', 'left', 'right']:
# ax3.spines[pos].set_visible(False)
ax3.spines[pos].set_color(GRAY)
# ax3.annotate(
# r'\noindent\parbox{15cm}{'
# r'Cold hues represent negative activation (response inhibition). '
# r'Warm hues represent positive activation. '
# r'Activation count maps are derived from N=257 biologically independent participants.}',
# xy=(-2.05, 0.07), xycoords='axes fraction', xytext=(.0, 10.0),
# textcoords='offset points', va='center', color='k', size=14,
# )
plt.savefig(str(out_folder / 'figure04.pdf'),
format='pdf', bbox_inches='tight', pad_inches=0.2, dpi=300)
# -
| 02 - Figure 4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
data = pd.read_csv("iris.csv")
data.head()
X = data.iloc[:,1:5]
Y = data.iloc[:,5]
X
from sklearn.preprocessing import LabelEncoder
#one_hot encoding in species
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y=encoder.transform(Y)
encoded_Y
Y=encoded_Y
Y
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(X,Y, test_size = 0.2, random_state =24)
Y_test
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
X_train = np.array(X_train)
X_test = np.array(X_test)
Y_train = np.array(Y_train)
Y_test = np.array(Y_test)
Y_test
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
X_train,Y_train = shuffle(X_train,Y_train)
X_test, Y_test = shuffle(X_test, Y_test)
#preprocessing
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#simple tf.keras sequential model
import tensorflow as tf
from tensorflow import keras
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# +
#model
model= keras.models.Sequential([
keras.layers.Dense(units=5, kernel_initializer = 'he_uniform',activation='relu',input_dim = 4),
keras.layers.Dense(units=3, kernel_initializer = 'glorot_uniform', activation = 'softmax')
])
# -
model.summary()
model.compile(optimizer=keras.optimizers.Adam(lr=0.001,decay=1e-4),loss="sparse_categorical_crossentropy", metrics=["accuracy"])
X_train
history = model.fit(X_train,Y_train,validation_split=0.1, batch_size = 10, epochs=30,shuffle=True,verbose=2)
prediction = model.predict(X_test,batch_size=10,verbose=0)
for i in prediction:
print(i)
rounded_prediction= np.argmax(prediction,axis=1)
for i in rounded_prediction:
print(i)
from sklearn.metrics import confusion_matrix
import itertools
cm=confusion_matrix(y_true=Y_test,y_pred=rounded_prediction)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm_plot_labels = ['Iris-setosa','Iris-versicolor',"Iris-virginica"]
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')
model.save("model.h5")
model1 = keras.models.load_model("model.h5")
sa=model1.predict([[5.1,2.1,1.4,0.2]])
sa
rounded_prediction= np.argmax(sa,axis=1)
rounded_prediction
| Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# # 'Plateaus' and 'Dips' in case numbers as an artifact of changing policies and partially isolated subpopulations
#
# Many simulations and pulibcations have talked about the possibility of 'peaks' in various places around the world. In effect, a peak is a time where the effective reproductive number R(t) < 1: each infected person infects less than one additional infection. Under this scenario, you'd expect the new case count / hospitalization count / death count to asymptotically decrease to 0, unless behavior changes in a way that results in the effective reproductive number increasing again.
#
# Many people have been optimistically looking for peaks by looking at the number of cases/hospitalizations/deaths and noticing that in several places, these seem to have plateaued or are decreasing. While it is tempting to do so and look for bright spots in this time, over-optimism can be harmful in terms of how we plan for the future.
#
# Unfortunately, a plateau/dip of this sort has many explanations. Many people have talked about the possibility that people are dying in their homes or that testing rates are changing, but even controlling for these things, several mathematical effects that come out of exponential growth can result in these numbers occurring while the effective reproduction number R(t) > 1.
#
# A subset of these are discussed below:
#
# 1. Quick changes in R(t) (i.e. policy) can result in plateaus or even large decreases in the daily count of new cases/hospitalizations/deaths temporarily.
# 2. In the presence of heterogenous subpopulations where some reduce R(t) to < 1, but others do not, similar dips and plateaus can arise when looking at the aggregated data. This holds at every scale where people don't mix completely randomly or people display different behaviors
#
# We simulate these effects with a simple SIR model, but more realistic compartmental models would also show these effects, and we believe that they may arrise in reality.
# ### Modeling of dips that are in response to policy changes that alter the effective reproductive number
#
# In the below model, a single homogenous population has it's R(t) change over time linearly for a few days after policy implementation, before settling at a new level. Additionally, we model a period where the R(t) does in fact go below 1 in order to show that the shapes can look locally similar.
susceptible = np.zeros(200)
infected = np.zeros(200)
resistant = np.zeros(200)
susceptible[0] = 1e7
infected[0] = 10
infection_length = 14
growth_rate = np.concatenate([
np.repeat(3.4, 20),
np.linspace(3.4, 2.2, 5),
np.repeat(2.2, 15),
np.linspace(2.2, 1.6, 5),
np.repeat(1.6, 15),
np.linspace(1.6, .9, 30),
np.repeat(.7, 200)
])[:200]
for v in range(len(susceptible)):
if v != 0:
susceptible[v] = susceptible[v - 1] - infected[v - 1] * growth_rate[v]/infection_length
infected[v] = infected[v - 1] + infected[v - 1] * growth_rate[v]/infection_length - infected[v - 1]/infection_length
resistant[v] = resistant[v - 1] + infected[v - 1]/infection_length
# +
fig, (ax1, ax3) = plt.subplots(1,2, figsize=(16, 9))
color = 'tab:red'
ax1.set_xlabel('Days since first case')
ax1.set_ylabel('Growth Rate R(t)')
pd.Series(growth_rate, name='R(t)').plot(ax=ax1, color=color).legend(loc='upper left', bbox_to_anchor=(0, 1.05), ncol=3)
ax1.tick_params(axis='y')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('# per day') # we already handled the x-label with ax1
pd.concat([
pd.Series(susceptible, name='new cases').diff(-1),
pd.Series(resistant * .01, name='new deaths').diff(1)
], axis=1).plot(ax=ax2).legend(loc='upper right', bbox_to_anchor=(1.0, 1.05), ncol=3)
color = 'tab:red'
ax3.set_xlabel('Days since first case')
ax3.set_ylabel('Growth Rate R(t)')
pd.Series(growth_rate, name='R(t)').plot(ax=ax3, color=color).legend(loc='upper left', bbox_to_anchor=(0, 1.05), ncol=3)
ax3.tick_params(axis='y')
ax4 = ax3.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax4.set_ylabel('Cumulative #') # we already handled the x-label with ax1
pd.concat([
pd.Series(susceptible, name='cum cases').diff(-1),
pd.Series(resistant * .01, name='cum deaths').diff(1)
], axis=1).cumsum().plot(ax=ax4).legend(loc='upper right', bbox_to_anchor=(1.0, 1.05), ncol=3)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.suptitle('Influence of changes in R(t) (i.e. through policy changes) on cases per day', y=1.05)
fig.set_facecolor('white')
plt.show()
# -
# We repeat this plotting exercise with a log scale
# +
fig, (ax1, ax3) = plt.subplots(1,2, figsize=(16, 9))
color = 'tab:red'
ax1.set_xlabel('Days since first case')
ax1.set_ylabel('Growth Rate R(t)')
pd.Series(growth_rate, name='R(t)').plot(ax=ax1, color=color).legend(loc='upper left', bbox_to_anchor=(0, 1.05), ncol=3)
ax1.tick_params(axis='y')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('# per day') # we already handled the x-label with ax1
pd.concat([
pd.Series(susceptible, name='new cases').diff(-1),
pd.Series(resistant * .01, name='new deaths').diff(1)
], axis=1).plot(ax=ax2, logy=True).legend(loc='upper right', bbox_to_anchor=(1.0, 1.05), ncol=3)
color = 'tab:red'
ax3.set_xlabel('Days since first case')
ax3.set_ylabel('Growth Rate R(t)')
pd.Series(growth_rate, name='R(t)').plot(ax=ax3, color=color).legend(loc='upper left', bbox_to_anchor=(0, 1.05), ncol=3)
ax3.tick_params(axis='y')
ax4 = ax3.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax4.set_ylabel('Cumulative #') # we already handled the x-label with ax1
pd.concat([
pd.Series(susceptible, name='cum cases').diff(-1),
pd.Series(resistant * .01, name='cum deaths').diff(1)
], axis=1).cumsum().plot(ax=ax4, logy=True).legend(loc='upper right', bbox_to_anchor=(1.0, 1.05), ncol=3)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.suptitle('Influence of changes in R(t) (i.e. through policy changes) on cases per day, semilog', y=1.05)
fig.set_facecolor('white')
plt.show()
# -
# As can be seen, the policy changes can result in plateaus in confirmed cases per day. Notably, the progression of deaths per day does not seem to have the same pattern, but this is largely an artifact of simplistic model assumptions. With more stochasticity, the possibility of temporary dips or stability is much higher.
#
# However, other network mechanics make this type of effect even more profound. For example, lets consider two different populations that are being treated as one.
# ### Modeling of dips that are in response to different populations responding differently to policy measures.
#
# In the below model, two isolated populations of equal size have their R(t) change over time linearly for a few days after policy implementation, before settling at a new level.
#
# In the first population, the initial case load is higher, but they respond more directly to the policy changes, bringing their effective reproduction rate below 1.
#
# In the second populatino, the initial case load is lower, but they do not respond as much to the policy changes, and their effective reproduction rate stay above 1.
susceptible = np.zeros((200, 2))
infected = np.zeros((200, 2))
resistant = np.zeros((200, 2))
susceptible[0:] = 5e6, 5e6
infected[0,:] = 10000, 1000
infection_length = 14
growth_rate_a = np.concatenate([
np.repeat(3.4, 10),
np.linspace(3.4, .7, 5),
np.repeat(.7, 200)
])[:200]
growth_rate_b = np.concatenate([
np.repeat(3.4, 10),
np.linspace(3.4, 1.5, 10),
np.repeat(1.5, 200)
])[:200]
growth_rate = np.column_stack((np.array(growth_rate_a), np.array(growth_rate_b)))
for v in range(len(susceptible)):
if v != 0:
susceptible[v] = susceptible[v - 1] - infected[v - 1] * growth_rate[v]/infection_length
infected[v] = infected[v - 1] + infected[v - 1] * growth_rate[v]/infection_length - infected[v - 1]/infection_length
resistant[v] = resistant[v - 1] + infected[v - 1]/infection_length
# +
new_infections_per_day = pd.DataFrame(susceptible, columns=['a', 'b']).diff(-1)
new_infections_per_day = new_infections_per_day.assign(total=new_infections_per_day.a + new_infections_per_day.b)
new_deaths_per_day = pd.DataFrame(resistant * .01, columns=['a', 'b']).diff(1)
new_deaths_per_day = new_deaths_per_day.assign(total=new_deaths_per_day.a + new_deaths_per_day.b)
# +
fig, (ax1, ax3) = plt.subplots(1,2, figsize=(16, 9))
color = 'tab:red'
ax1.set_xlabel('Days since first case')
ax1.set_ylabel('Growth Rate R(t)')
pd.concat([
pd.Series(growth_rate_a, name='a R(t)'),
pd.Series(growth_rate_b, name='b R(t)')
], axis=1).plot(ax=ax1, color=['red', 'purple']).legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), ncol=3, fancybox=True)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('New cases per day') # we already handled the x-label with ax1
new_infections_per_day.plot(ax=ax2, logy=True).legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ncol=3, fancybox=True)
color = 'tab:red'
ax3.set_xlabel('Days since first case')
ax3.set_ylabel('Growth Rate R(t)')
pd.concat([
pd.Series(growth_rate_a, name='a R(t)'),
pd.Series(growth_rate_b, name='b R(t)')
], axis=1).plot(ax=ax3, color=['red', 'purple']).legend(loc='upper center', bbox_to_anchor=(0.5, 1.0), ncol=3, fancybox=True)
ax4 = ax3.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax4.set_ylabel('Cumulative cases') # we already handled the x-label with ax1
new_infections_per_day.cumsum().plot(ax=ax4, logy=True).legend(loc='upper center', bbox_to_anchor=(0.5, 1.05), ncol=3, fancybox=True)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.suptitle('Time effect of different responses by subpopulations on cases per day (semi-log)', y=1.05)
fig.set_facecolor('white')
plt.show()
# +
fig, (ax1, ax3) = plt.subplots(1,2, figsize=(16, 9))
color = 'tab:red'
ax1.set_xlabel('Days since first case')
ax1.set_ylabel('Growth Rate R(t)')
pd.concat([
pd.Series(growth_rate_a, name='a R(t)'),
pd.Series(growth_rate_b, name='b R(t)')
], axis=1).plot(ax=ax1, color=['red', 'purple']).legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), ncol=3, fancybox=True)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('New deaths per day') # we already handled the x-label with ax1
new_deaths_per_day.plot(ax=ax2, logy=True).legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ncol=3, fancybox=True)
color = 'tab:red'
ax3.set_xlabel('Days since first case')
ax3.set_ylabel('Growth Rate R(t)')
pd.concat([
pd.Series(growth_rate_a, name='a R(t)'),
pd.Series(growth_rate_b, name='b R(t)')
], axis=1).plot(ax=ax3, color=['red', 'purple']).legend(loc='upper center', bbox_to_anchor=(0.5, 1.0), ncol=3, fancybox=True)
ax4 = ax3.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax4.set_ylabel('Cumulative deaths') # we already handled the x-label with ax1
new_deaths_per_day.cumsum().plot(ax=ax4, logy=True).legend(loc='upper center', bbox_to_anchor=(0.5, 1.05), ncol=3, fancybox=True)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.suptitle('Time effect of different responses by subpopulations on deaths per day (semi-log)', y=1.05)
fig.set_facecolor('white')
plt.show()
# -
# In this case, a profound dip in the number of deaths is temporarily visible even in the deaths per day.
#
# This simulation assumes two isolated populations, but in reality these populations are likely to mix to some degree. As the second population's infection count increases, it is possible that mixing could bring the effective reproduction number of the other population back above 1 over time.
| communication/growth_rate_dips.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="QL28WSHoBsfp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="f22d55d4-e94a-4a41-e258-28a2a2077a73" executionInfo={"status": "ok", "timestamp": 1547340412889, "user_tz": 120, "elapsed": 1980, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}}
# -*- coding: utf-8 -*-
#@author: alison
import re
import string
import pickle
import keras
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from sklearn.model_selection import train_test_split
from nltk.stem import PorterStemmer, SnowballStemmer
from nltk.tokenize import TweetTokenizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.models import Sequential, Model
from keras.layers import Dense, Flatten, Activation
from keras.layers import Conv2D, MaxPool2D, Reshape
from keras.layers import Input, concatenate, Dropout
from keras.layers import Embedding, Concatenate
from keras.optimizers import Adam, SGD, RMSprop
from keras import optimizers
from keras import regularizers
from keras.utils.vis_utils import plot_model
# + id="XdekfYsoV2c2" colab_type="code" outputId="db992cdb-e32b-4770-e3f9-6acfdec461f3" executionInfo={"status": "ok", "timestamp": 1547340413128, "user_tz": 120, "elapsed": 2154, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 90}
import nltk
nltk.download('stopwords')
# + id="-TpI0dMJEi6p" colab_type="code" colab={}
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + id="XEfvBz8IEqrO" colab_type="code" outputId="5643ebc1-d116-4ccd-d21d-5a888eda91fd" executionInfo={"status": "ok", "timestamp": 1547340421338, "user_tz": 120, "elapsed": 10278, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 390}
file_list = drive.ListFile({'q': "'1Hx5OP1Yrlh37yYzSMtsv6Ui_fuzOuG04' in parents and trashed=false"}).GetList()
for file1 in file_list:
print('title: %s, id: %s' % (file1['title'], file1['id']))
# + id="h2hLjMJyE3dR" colab_type="code" colab={}
train_downloaded = drive.CreateFile({'id': '1TIjlRkVNIvM8NL3P-4UAmMY6moF3mff0'})
train_downloaded.GetContentFile('train_en.tsv')
test_downloaded = drive.CreateFile({'id': '1QqOc_95fjvjbw7-uYT37veooqMzfYn-p'})
test_downloaded.GetContentFile('dev_en.tsv')
trial_downloaded = drive.CreateFile({'id': '1rQ4h1lQi12lyAo2VL7xuUR5iZ9Bsa4MR'})
trial_downloaded.GetContentFile('trial_en.tsv')
# + id="0s3Dnnk8FF4j" colab_type="code" colab={}
train = pd.read_csv('train_en.tsv', delimiter='\t',encoding='utf-8')
dev = pd.read_csv('dev_en.tsv', delimiter='\t',encoding='utf-8')
#trial = pd.read_csv('trial_en.tsv', delimiter='\t',encoding='utf-8')
# + id="lrslbThJCeSY" colab_type="code" colab={}
# Etapa de pré-processamento
def clean_tweets(tweet):
tweet = re.sub('@(\\w{1,15})\b', '', tweet)
tweet = tweet.replace("via ", "")
tweet = tweet.replace("RT ", "")
tweet = tweet.lower()
return tweet
def clean_url(tweet):
tweet = re.sub('http\\S+', '', tweet, flags=re.MULTILINE)
return tweet
def remove_stop_words(tweet):
stops = set(stopwords.words("english"))
stops.update(['.',',','"',"'",'?',':',';','(',')','[',']','{','}'])
toks = [tok for tok in tweet if not tok in stops and len(tok) >= 3]
return toks
def stemming_tweets(tweet):
stemmer = SnowballStemmer('english')
stemmed_words = [stemmer.stem(word) for word in tweet]
return stemmed_words
def remove_number(tweet):
newTweet = re.sub('\\d+', '', tweet)
return newTweet
def remove_hashtags(tweet):
result = ''
for word in tweet.split():
if word.startswith('#') or word.startswith('@'):
result += word[1:]
result += ' '
else:
result += word
result += ' '
return result
# + id="QDsnLDRQGl71" colab_type="code" colab={}
def preprocessing(tweet, swords = True, url = True, stemming = True, ctweets = True, number = True, hashtag = True):
if ctweets:
tweet = clean_tweets(tweet)
if url:
tweet = clean_url(tweet)
if hashtag:
tweet = remove_hashtags(tweet)
twtk = TweetTokenizer(strip_handles=True, reduce_len=True)
if number:
tweet = remove_number(tweet)
tokens = [w.lower() for w in twtk.tokenize(tweet) if w != "" and w is not None]
if swords:
tokens = remove_stop_words(tokens)
if stemming:
tokens = stemming_tweets(tokens)
text = " ".join(tokens)
return text
# + id="SunHFjyyFLR3" colab_type="code" colab={}
train_text = train['text'].map(lambda x: preprocessing(x, swords = True, url = True, stemming = False, ctweets = True, number = False, hashtag = False))
y_train = train['HS']
id_train = train['id']
test_text = dev['text'].map(lambda x: preprocessing(x, swords = True, url = True, stemming = False, ctweets = True, number = False, hashtag = False))
y_test = dev['HS']
id_test = dev['id']
data = np.concatenate((train_text, test_text), axis=0)
classes = np.concatenate((y_train, y_test), axis=0)
# + id="cZzoxLL3nsKn" colab_type="code" outputId="bb054c3f-56c2-405c-b8f0-65e330691307" executionInfo={"status": "ok", "timestamp": 1547340426851, "user_tz": 120, "elapsed": 15562, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 158}
file_list = drive.ListFile({'q': "'1sBKK1i4JXIluelnAPqjOU4xVIIPA3Vmy' in parents and trashed=false"}).GetList()
for file1 in file_list:
print('title: %s, id: %s' % (file1['title'], file1['id']))
# + id="FIV4AdDpnwV7" colab_type="code" colab={}
#we = {'id': '1FRfAM3GouOxelBo_gwj5n4YNWDMP6CkL', 'file': 'glove.6B.300d_en.txt'}
we = {'id': '18zPA_9tWmlNZChdfMKHTp0Ka1SQsnkj9', 'file': 'wiki_fasttext_en.vec'}
# + id="ygdK2gi1n2cW" colab_type="code" colab={}
word_vector = drive.CreateFile({'id': we['id']})
word_vector.GetContentFile(we['file'])
# + id="dF4yXFPKoA1g" colab_type="code" colab={}
def word_embeddings(word_index, num_words, word_embedding_dim):
embeddings_index = {}
f = open(we['file'], 'r', encoding='utf-8')
for line in tqdm(f):
values = line.rstrip().rsplit(' ')
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
matrix = np.zeros((num_words, word_embedding_dim))
for word, i in word_index.items():
if i >= max_features:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
matrix[i] = embedding_vector
return matrix
# + id="KyUJRLIQoSVj" colab_type="code" colab={}
embedding_dim = 300
max_features = 25000
maxlen = 100
batch_size = 32
epochs = 5
filter_sizes = [3,4]
num_filters = 512
drop = 0.5
# Treina um tokenizaddor nos dados de treino
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(data)
# Tokeniza os dados
X = tokenizer.texts_to_sequences(data)
Y = tokenizer.texts_to_sequences(test_text)
tweets = sequence.pad_sequences(X, maxlen=maxlen)
x_test = sequence.pad_sequences(Y, maxlen=maxlen)
word_index = tokenizer.word_index
num_words = min(max_features, len(word_index) + 1)
# + id="BvKdMmt8wn-J" colab_type="code" colab={}
x_train, x_val, y_train, y_val = train_test_split(tweets, classes, test_size=0.1, random_state=None)
# + id="3O5IjISOoGAV" colab_type="code" outputId="66204646-f601-46ff-9552-9d4948350819" executionInfo={"status": "ok", "timestamp": 1547340476319, "user_tz": 120, "elapsed": 64875, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
from tqdm import tqdm
embedding_matrix = word_embeddings(word_index, num_words, embedding_dim)
# + id="pTZGv8lcWGIU" colab_type="code" outputId="691014f5-fa16-4a7c-ab97-ac8b84fd74bf" executionInfo={"status": "ok", "timestamp": 1547340652057, "user_tz": 120, "elapsed": 60335, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 781}
# Fase de classificação
tweet_input = Input(shape=(maxlen,), dtype='int32')
embedding = Embedding(num_words, embedding_dim, weights=[embedding_matrix], input_length=maxlen, trainable=True)(tweet_input)
reshape = Reshape((maxlen, embedding_dim, 1))(embedding)
cnn1 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embedding_dim), padding='valid', kernel_initializer='normal', activation='tanh')(reshape)
max1 = MaxPool2D(pool_size=(maxlen - filter_sizes[0] + 1, 1), strides=(1,1), padding='valid')(cnn1)
cnn2 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embedding_dim), padding='valid', kernel_initializer='normal', activation='tanh')(reshape)
max2 = MaxPool2D(pool_size=(maxlen - filter_sizes[1] + 1, 1), strides=(1,1), padding='valid')(cnn2)
concatenated_tensor = Concatenate(axis=1)([max1, max2])
flatten = Flatten()(concatenated_tensor)
dropout = Dropout(drop)(flatten)
dens = Dense(num_filters, activation='relu')(dropout)
output = Dense(1, activation='sigmoid')(dens)
model = Model(inputs=tweet_input, outputs=output)
opt = RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, shuffle=True, validation_data=(x_val, y_val))
y_pred = (model.predict(x_test, batch_size=batch_size) > .5).astype(int)
# + id="rx8QHLaNWgdn" colab_type="code" outputId="867d28f2-de4e-4807-93bc-3d96e3bdb6d0" executionInfo={"status": "ok", "timestamp": 1547340661156, "user_tz": 120, "elapsed": 547, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 87}
from sklearn.metrics import f1_score, precision_score, accuracy_score, recall_score
print("F1.........: %f" %(f1_score(y_test, y_pred, average="macro")))
print("Precision..: %f" %(precision_score(y_test, y_pred, average="macro")))
print("Recall.....: %f" %(recall_score(y_test, y_pred, average="macro")))
print("Accuracy...: %f" %(accuracy_score(y_test, y_pred)))
# + id="o2oDoSc55nxN" colab_type="code" outputId="9caf5d4f-d166-4b04-d3f8-20da0cf18223" executionInfo={"status": "ok", "timestamp": 1547340534192, "user_tz": 120, "elapsed": 122675, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-p2L5zKSs9Oc/AAAAAAAAAAI/AAAAAAAAAEE/PzwO-PqNro8/s64/photo.jpg", "userId": "16421844834226782203"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
output = []
for array in y_pred:
output.append(array[0])
print(len(output), len(id_test))
# + id="H6MN2vcq5OyW" colab_type="code" colab={}
from google.colab import files
with open("en_a.tsv", "w") as file:
for i in range(len(y_pred)):
file.write(str(id_test[i]))
file.write('\t')
file.write(str(output[i]))
file.write('\n')
#files.download('en_a.tsv')
# + id="S-hRWbhAHKW2" colab_type="code" colab={}
#plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
# + id="e7-utUYPIifw" colab_type="code" colab={}
#files.download('model_plot.png')
| English/TaskA/CNN_en_a.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="c05P9g5WjizZ"
# # Classify Structured Data
# + [markdown] colab_type="text" id="VxyBFc_kKazA"
# ## Import TensorFlow and Other Libraries
# + colab={} colab_type="code" id="9dEreb4QKizj"
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import feature_column
from os import getcwd
from sklearn.model_selection import train_test_split
# + [markdown] colab_type="text" id="KCEhSZcULZ9n"
# ## Use Pandas to Create a Dataframe
#
# [Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset and load it into a dataframe.
# + colab={} colab_type="code" id="REZ57BXCLdfG"
filePath = f"{getcwd()}/../tmp2/heart.csv"
dataframe = pd.read_csv(filePath)
dataframe.head()
# + [markdown] colab_type="text" id="u0zhLtQqMPem"
# ## Split the Dataframe Into Train, Validation, and Test Sets
#
# The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
# + colab={} colab_type="code" id="YEOpw7LhMYsI"
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
# + [markdown] colab_type="text" id="84ef46LXMfvu"
# ## Create an Input Pipeline Using `tf.data`
#
# Next, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly.
# +
# EXERCISE: A utility method to create a tf.data dataset from a Pandas Dataframe.
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
# Use Pandas dataframe's pop method to get the list of targets.
labels = dataframe.pop("target")
# Create a tf.data.Dataset from the dataframe and labels.
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
# Shuffle dataset.
ds = ds.shuffle(buffer_size=100)
# Batch dataset with specified batch_size parameter.
ds = ds.batch(batch_size)
return ds
# + colab={} colab_type="code" id="CXbbXkJvMy34"
batch_size = 5 # A small batch sized is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
# + [markdown] colab_type="text" id="qRLGSMDzM-dl"
# ## Understand the Input Pipeline
#
# Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
# + colab={} colab_type="code" id="CSBo3dUVNFc9"
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
# + [markdown] colab_type="text" id="OT5N6Se-NQsC"
# We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
# + [markdown] colab_type="text" id="ttIvgLRaNoOQ"
# ## Create Several Types of Feature Columns
#
# TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
# + colab={} colab_type="code" id="mxwiHFHuNhmf"
# Try to demonstrate several types of feature columns by getting an example.
example_batch = next(iter(train_ds))[0]
# + colab={} colab_type="code" id="0wfLB8Q3N3UH"
# A utility method to create a feature column and to transform a batch of data.
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column, dtype='float64')
print(feature_layer(example_batch).numpy())
# + [markdown] colab_type="text" id="Q7OEKe82N-Qb"
# ### Numeric Columns
#
# The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features.
# + colab={} colab_type="code" id="QZTZ0HnHOCxC"
# EXERCISE: Create a numeric feature column out of 'age' and demo it.
age = feature_column.numeric_column("age")
demo(age)
# + [markdown] colab_type="text" id="7a6ddSyzOKpq"
# In the heart disease dataset, most columns from the dataframe are numeric.
# + [markdown] colab_type="text" id="IcSxUoYgOlA1"
# ### Bucketized Columns
#
# Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column).
# + colab={} colab_type="code" id="wJ4Wt3SAOpTQ"
# EXERCISE: Create a bucketized feature column out of 'age' with
# the following boundaries and demo it.
boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
# + [markdown] colab_type="text" id="-me1NKJ4BIEB"
# Notice the one-hot values above describe which age range each row matches.
# + [markdown] colab_type="text" id="r1tArzewPb-b"
# ### Categorical Columns
#
# In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets).
#
# **Note**: You will probably see some warning messages when running some of the code cell below. These warnings have to do with software updates and should not cause any errors or prevent your code from running.
# + colab={} colab_type="code" id="DJ6QnSHkPtOC"
# EXERCISE: Create a categorical vocabulary column out of the
# above mentioned categories with the key specified as 'thal'.
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
# + [markdown] colab_type="text" id="zQT4zecNBtji"
# The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
# + [markdown] colab_type="text" id="LEFPjUr6QmwS"
# ### Embedding Columns
#
# Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. You can tune the size of the embedding with the `dimension` parameter.
# + colab={} colab_type="code" id="hSlohmr2Q_UU"
# EXERCISE: Create an embedding column out of the categorical
# vocabulary you just created (thal). Set the size of the
# embedding to 8, by using the dimension parameter.
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
# + [markdown] colab_type="text" id="urFCAvTVRMpB"
# ### Hashed Feature Columns
#
# Another way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
# + colab={} colab_type="code" id="YHU_Aj2nRRDC"
# EXERCISE: Create a hashed feature column with 'thal' as the key and
# 1000 hash buckets.
thal_hashed = feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
# + [markdown] colab_type="text" id="fB94M27DRXtZ"
# ### Crossed Feature Columns
# Combining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/#feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
# + colab={} colab_type="code" id="oaPVERd9Rep6"
# EXERCISE: Create a crossed column using the bucketized column (age_buckets),
# the categorical vocabulary column (thal) previously created, and 1000 hash buckets.
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
# + [markdown] colab_type="text" id="ypkI9zx6Rj1q"
# ## Choose Which Columns to Use
#
# We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this exercise is to show you the complete code needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.
#
# If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
# + colab={} colab_type="code" id="Eu8bJWmCScfC"
dataframe.dtypes
# + [markdown] colab_type="text" id="2pV4tSI3SkuX"
# You can use the above list of column datatypes to map the appropriate feature column to every column in the dataframe.
# + colab={} colab_type="code" id="4PlLY7fORuzA"
# EXERCISE: Fill in the missing code below
feature_columns = []
# Numeric Cols.
# Create a list of numeric columns. Use the following list of columns
# that have a numeric datatype: ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca'].
numeric_columns = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']
for header in numeric_columns:
# Create a numeric feature column out of the header.
numeric_feature_column = feature_column.numeric_column(header)
feature_columns.append(numeric_feature_column)
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
# + [markdown] colab_type="text" id="M-nDp8krS_ts"
# ### Create a Feature Layer
#
# Now that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to our Keras model.
# + colab={} colab_type="code" id="6o-El1R2TGQP"
# EXERCISE: Create a Keras DenseFeatures layer and pass the feature_columns you just created.
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
# + [markdown] colab_type="text" id="8cf6vKfgTH0U"
# Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
# + colab={} colab_type="code" id="gcemszoGSse_"
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
# + [markdown] colab_type="text" id="bBx4Xu0eTXWq"
# ## Create, Compile, and Train the Model
# + colab={} colab_type="code" id="_YJPPb3xTPeZ"
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(train_ds,
validation_data=val_ds,
epochs=100)
# + colab={} colab_type="code" id="GnFmMOW0Tcaa"
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
# -
# # Submission Instructions
# +
# Now click the 'Submit Assignment' button above.
# -
# # When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This frees up resources for your fellow learners.
# + language="javascript"
# <!-- Save the notebook -->
# IPython.notebook.save_checkpoint();
# + language="javascript"
# <!-- Shutdown and close the notebook -->
# window.onbeforeunload = null
# window.close();
# IPython.notebook.session.delete();
| tf_data_dep/c_3/TFDS-Week2-Question.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:python3]
# language: python
# name: conda-env-python3-py
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import tqdm
import spacepy.toolbox as tb
import spacepy.plot as spp
# %matplotlib inline
# -
# # Setup a simulated data to see if we can make it look right
#
#
# +
np.random.seed(8675309)
sim_pa = np.arange(20,175)
sim_c = 890*np.sin(np.deg2rad(sim_pa))**0.8
# at each point draw a poisson variable with that mean
sim_c_n = np.asarray([np.random.poisson(v) for v in sim_c ])
prob=0.1
sim_c_n2 = np.asarray([np.random.negative_binomial((v*prob)/(1-prob), prob) for v in sim_c ])
# Two subplots, the axes array is 1-d
f, axarr = plt.subplots(3, sharex=True, sharey=True, figsize=(7,9))
axarr[0].plot(sim_pa, sim_c, lw=2)
axarr[0].set_ylabel('Truth')
axarr[0].set_xlim((0,180))
axarr[0].set_yscale('log')
axarr[1].plot(sim_pa, sim_c_n, lw=2)
axarr[1].set_ylabel('Measured\nPoisson')
axarr[1].set_xlim((0,180))
axarr[1].set_yscale('log')
axarr[2].plot(sim_pa, sim_c_n2, lw=2)
axarr[2].set_ylabel('Measured\nNegBin')
axarr[2].set_xlim((0,180))
axarr[2].set_yscale('log')
axarr[2].set_ylim(axarr[1].get_ylim())
# -
# ## This will then serve as the background
#
# Can the simplest model get started?
#
# +
# generate some data
with pm.Model() as model:
bkg = pm.NegativeBinomial('bkg', mu=pm.Uniform('m_bkg', 0, 1e5,shape=len(sim_pa), testval=1e3),
alpha=0.1,
observed=sim_c_n2, shape=len(sim_pa))
# truth_mc = pm.Uniform('truth', 0, 100, shape=dat_len)
# noisemean_mc = pm.Uniform('noisemean', 0, 100)
# noise_mc = pm.Poisson('noise', noisemean_mc, observed=obs[1:20])
# real_n_mc = pm.Poisson('real_n', truth_mc+noisemean_mc, shape=dat_len)
# psf = pm.Uniform('psf', 0, 5, observed=det)
# obs_mc = pm.Normal('obs', (truth_mc+noisemean_mc)*psf.max(), 1/5**2, observed=obs, shape=dat_len)
trace = pm.sample(50000)
# -
pm.traceplot(trace)
pm.summary(trace)
| Deconvolution/convolution2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
""" creates fusions x cell dataframe, as well as fusions x patient
needed for input to makeSummaryTable script"""
import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 999)
pd.options.display.max_columns = None
def readFunc(fus):
fName = './out/' + fus + '.query.out.csv'
f = pd.read_csv(fName)
toKeep = f['fusionPresent_bool'] == 1
f = f[toKeep]
f = f.reset_index(drop=True)
return f
fusionsList = ['ALK--NPM1', 'ALK--TPM3', 'ALK--TFG', 'ALK--TPM4', 'ALK--ATIC', 'ALK--CLTC', 'ALK--MSN',
'ALK--RNF213', 'ALK--CARS', 'ALK--EML4', 'ALK--KIF5B', 'ALK--C2orf22', 'ALK--DCTN1',
'ALK--HIP1', 'ALK--TPR', 'ALK--RANBP2', 'ALK--PPFIBP1', 'ALK--SEC31A', 'ALK--STRN',
'ALK--VCL', 'ALK--C2orf44', 'ALK--KLC1', 'BRAF--AKAP9', 'BRAF--KIAA1549', 'BRAF--CEP88',
'BRAF--LSM14A', 'BRAF--SND1', 'BRAF--FCHSD1', 'BRAF--SLC45A3', 'BRAF--FAM131B', 'BRAF--RNF130',
'BRAF--CLCN6', 'BRAF--MKRN1', 'BRAF--GNAI1', 'BRAF--AGTRAP', 'ROS1--CD74', 'ROS1--GOPC',
'ROS1--SDC4', 'ROS1--SLC34A2', 'ROS1--EZR', 'ROS1--LRIG3', 'ROS1--HLA-A', 'ROS1--MYO5A',
'ROS1--PPFIBP1', 'ROS1--ERC1', 'ROS1--CLIP1', 'ROS1--TPM3', 'ROS1--ZCCHHC8', 'ROS1--KIAA1598',
'ROS1--PWWP2A', 'TGF--NR4A3', 'PDGFRB--HIP1', 'PDGFRB--CCDC6', 'CD74--NRG1', 'RET--H4',
'RET--PRKAR1A', 'RET--NCOA4', 'RET--PCM1', 'RET--GOLGA5', 'RET--TRIM33', 'RET--KTN1', 'RET--TRIM27',
'RET--HOOK3', 'RET--KIF5B', 'RET--CCDC6', 'TP63--TBL1XR1', 'NOTCH1--TRB', 'NOTCH1--SEC16A',
'NOTCH1--GABBR2', 'NTRK1--TPM3', 'NTRK1--TP53', 'NTRK1--TPR', 'NTRK1--TFG']
fusionsDF = pd.DataFrame('Nan', index=np.arange(50), columns=fusionsList)
fusionsDF
for currFus in fusionsList:
currName = currFus.split('--')[0] + '_' + currFus.split('--')[1]
df = readFunc(currFus)
fusionsDF[currFus] = pd.Series(df['cellName'])
fusionsDF
fusionsDF['ALK--EML4']
fusionsDF.to_csv('./fusion_dataframe.csv', index=False)
# +
#//////////////////////////////////////////////////////////////////////////
#//////////////////////////////////////////////////////////////////////////
#//////////////////////////////////////////////////////////////////////////
# -
patientMetadata = pd.read_csv('/Users/lincoln.harris/Desktop/152-LAUD_cell_lists_and_various_shit/cDNA_plate_metadata.csv')
patientMetadata
patientMetadata = patientMetadata.drop([0,1])
patientMetadata
fusions_by_patient = pd.DataFrame(columns=['patientID', 'ALK--EML4', 'ALK_any', 'EML4_any', 'NTRK_any', 'RET_any', 'ROS1_any'])
fusions_by_patient
unique_patientIDs = set(patientMetadata['patient_id'])
unique_patientIDs
len(unique_patientIDs)
fusions_by_patient['patientID'] = list(unique_patientIDs)
fusions_by_patient
fusions_by_patient['ALK--EML4'] = 0
fusions_by_patient['ALK_any'] = 0
fusions_by_patient['EML4_any'] = 0
fusions_by_patient['NTRK_any'] = 0
fusions_by_patient['RET_any'] = 0
fusions_by_patient['ROS1_any'] = 0
fusions_by_patient
# +
# dear god this is horrible
for i in range(0, len(fusionsDF.index)-1): # row-wise
currRow = list(fusionsDF.iloc[i])
currFus = ''
for j in range(0, len(currRow)): # col-wise
if j == 0:
currFus = 'ALK--EML4'
if j == 1:
currFus = 'ALK_any'
if j == 2:
currFus = 'EML4_any'
if j == 3:
currFus = 'NTRK_any'
if j == 4:
currFus = 'RET_any'
if j == 5:
currFus = 'ROS1_any'
try:
currCell = currRow[j]
currCellSplit = currCell.split('_')
currPlate = currCellSplit[1]
keepList = patientMetadata['plate'] == currPlate
keepRow = patientMetadata[keepList]
currPatient = list(keepRow['patient_id'])[0]
# now modify vals in fusions_by_patient df
match_row = fusions_by_patient[fusions_by_patient['patientID'] == currPatient]
match_row_index = match_row.index[0]
fusions_by_patient[currFus][match_row_index] += 1
except:
continue
# -
fusions_by_patient
fusions_by_patient.to_csv('./fusions_by_patient.csv', index=False)
| fusionSearch/fusions_x_patient_notebook.ipynb |
# + [markdown] colab_type="text" id="8tQJd2YSCfWR"
#
# + [markdown] colab_type="text" id="D7tqLMoKF6uq"
# Deep Learning
# =============
#
# Assignment 6
# ------------
#
# After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="MvEblsgEXxrd"
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import os
import numpy as np
import random
import string
import tensorflow as tf
import zipfile
from six.moves import range
from six.moves.urllib.request import urlretrieve
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 5993, "status": "ok", "timestamp": 1445965582896, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "6f6f07b359200c46", "userId": "102167687554210253930"}, "user_tz": 420} id="RJ-o3UBUFtCw" outputId="d530534e-0791-4a94-ca6d-1c8f1b908a9e"
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 5982, "status": "ok", "timestamp": 1445965582916, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "6f6f07b359200c46", "userId": "102167687554210253930"}, "user_tz": 420} id="Mvf09fjugFU_" outputId="8f75db58-3862-404b-a0c3-799380597390"
def read_data(filename):
with zipfile.ZipFile(filename) as f:
name = f.namelist()[0]
data = tf.compat.as_str(f.read(name))
return data
text = read_data(filename)
print('Data size %d' % len(text))
# + [markdown] colab_type="text" id="ga2CYACE-ghb"
# Create a small validation set.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 6184, "status": "ok", "timestamp": 1445965583138, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "6f6f07b359200c46", "userId": "102167687554210253930"}, "user_tz": 420} id="w-oBpfFG-j43" outputId="bdb96002-d021-4379-f6de-a977924f0d02"
valid_size = 1000
valid_text = text[:valid_size]
train_text = text[valid_size:]
train_size = len(train_text)
print(train_size, train_text[:64])
print(valid_size, valid_text[:64])
# + [markdown] colab_type="text" id="Zdw6i4F8glpp"
# Utility functions to map characters to vocabulary IDs and back.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 6276, "status": "ok", "timestamp": 1445965583249, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": <KEY>", "sessionId": "6f6f07b359200c46", "userId": "102167687554210253930"}, "user_tz": 420} id="gAL1EECXeZsD" outputId="88fc9032-feb9-45ff-a9a0-a26759cc1f2e"
vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '
first_letter = ord(string.ascii_lowercase[0])
def char2id(char):
if char in string.ascii_lowercase:
return ord(char) - first_letter + 1
elif char == ' ':
return 0
else:
print('Unexpected character: %s' % char)
return 0
def id2char(dictid):
if dictid > 0:
return chr(dictid + first_letter - 1)
else:
return ' '
print(char2id('a'), char2id('z'), char2id(' '), char2id('ï'))
print(id2char(1), id2char(26), id2char(0))
# + [markdown] colab_type="text" id="lFwoyygOmWsL"
# Function to generate a training batch for the LSTM model.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 6473, "status": "ok", "timestamp": 1445965583467, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "6f6f07b359200c46", "userId": "102167687554210253930"}, "user_tz": 420} id="d9wMtjy5hCj9" outputId="3dd79c80-454a-4be0-8b71-4a4a357b3367"
batch_size=64
num_unrollings=10
class BatchGenerator(object):
def __init__(self, text, batch_size, num_unrollings):
self._text = text
self._text_size = len(text)
self._batch_size = batch_size
self._num_unrollings = num_unrollings
segment = self._text_size // batch_size
self._cursor = [ offset * segment for offset in range(batch_size)]
self._last_batch = self._next_batch()
def _next_batch(self):
"""Generate a single batch from the current cursor position in the data."""
batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float)
for b in range(self._batch_size):
batch[b, char2id(self._text[self._cursor[b]])] = 1.0
self._cursor[b] = (self._cursor[b] + 1) % self._text_size
return batch
def next(self):
"""Generate the next array of batches from the data. The array consists of
the last batch of the previous array, followed by num_unrollings new ones.
"""
batches = [self._last_batch]
for step in range(self._num_unrollings):
batches.append(self._next_batch())
self._last_batch = batches[-1]
return batches
def characters(probabilities):
"""Turn a 1-hot encoding or a probability distribution over the possible
characters back into its (most likely) character representation."""
return [id2char(c) for c in np.argmax(probabilities, 1)]
def batches2string(batches):
"""Convert a sequence of batches back into their (most likely) string
representation."""
s = [''] * batches[0].shape[0]
for b in batches:
s = [''.join(x) for x in zip(s, characters(b))]
return s
train_batches = BatchGenerator(train_text, batch_size, num_unrollings)
valid_batches = BatchGenerator(valid_text, 1, 1)
print(batches2string(train_batches.next()))
print(batches2string(train_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="KyVd8FxT5QBc"
def logprob(predictions, labels):
"""Log-probability of the true labels in a predicted batch."""
predictions[predictions < 1e-10] = 1e-10
return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]
def sample_distribution(distribution):
"""Sample one element from a distribution assumed to be an array of normalized
probabilities.
"""
r = random.uniform(0, 1)
s = 0
for i in range(len(distribution)):
s += distribution[i]
if s >= r:
return i
return len(distribution) - 1
def sample(prediction):
"""Turn a (column) prediction into 1-hot encoded samples."""
p = np.zeros(shape=[1, vocabulary_size], dtype=np.float)
p[0, sample_distribution(prediction[0])] = 1.0
return p
def random_distribution():
"""Generate a random column of probabilities."""
b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])
return b/np.sum(b, 1)[:,None]
# + [markdown] colab_type="text" id="K8f67YXaDr4C"
# Simple LSTM Model.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="Q5rxZK6RDuGe"
num_nodes = 64
graph = tf.Graph()
with graph.as_default():
# Parameters:
# Input gate: input, previous output, and bias.
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
ib = tf.Variable(tf.zeros([1, num_nodes]))
# Forget gate: input, previous output, and bias.
fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
fb = tf.Variable(tf.zeros([1, num_nodes]))
# Memory cell: input, state and bias.
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
cb = tf.Variable(tf.zeros([1, num_nodes]))
# Output gate: input, previous output, and bias.
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
ob = tf.Variable(tf.zeros([1, num_nodes]))
# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
# Classifier weights and biases.
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1))
b = tf.Variable(tf.zeros([vocabulary_size]))
# Definition of the cell computation.
def lstm_cell(i, o, state):
"""Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf
Note that in this formulation, we omit the various connections between the
previous state and the gates."""
input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)
forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)
update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb
state = forget_gate * state + input_gate * tf.tanh(update)
output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)
return output_gate * tf.tanh(state), state
# Input data.
train_data = list()
for _ in range(num_unrollings + 1):
train_data.append(
tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size]))
train_inputs = train_data[:num_unrollings]
train_labels = train_data[1:] # labels are inputs shifted by one time step.
# Unrolled LSTM loop.
outputs = list()
output = saved_output
state = saved_state
for i in train_inputs:
output, state = lstm_cell(i, output, state)
outputs.append(output)
# State saving across unrollings.
with tf.control_dependencies([saved_output.assign(output),
saved_state.assign(state)]):
# Classifier.
logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
labels=tf.concat(train_labels, 0), logits=logits))
# Optimizer.
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
optimizer = optimizer.apply_gradients(
zip(gradients, v), global_step=global_step)
# Predictions.
train_prediction = tf.nn.softmax(logits)
# Sampling and validation eval: batch 1, no unrolling.
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])
saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]))
saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))
reset_sample_state = tf.group(
saved_sample_output.assign(tf.zeros([1, num_nodes])),
saved_sample_state.assign(tf.zeros([1, num_nodes])))
sample_output, sample_state = lstm_cell(
sample_input, saved_sample_output, saved_sample_state)
with tf.control_dependencies([saved_sample_output.assign(sample_output),
saved_sample_state.assign(sample_state)]):
sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 41}, {"item_id": 80}, {"item_id": 126}, {"item_id": 144}]} colab_type="code" executionInfo={"elapsed": 199909, "status": "ok", "timestamp": 1445965877333, "user": {"color": "#1FA15D", "displayName": "<NAME>", "isAnonymous": false, "isMe": true, "permissionId": "05076109866853157986", "photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg", "sessionId": "6f6f07b359200c46", "userId": "102167687554210253930"}, "user_tz": 420} id="RD9zQCZTEaEm" outputId="5e868466-2532-4545-ce35-b403cf5d9de6"
num_steps = 7001
summary_frequency = 100
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
mean_loss = 0
for step in range(num_steps):
batches = train_batches.next()
feed_dict = dict()
for i in range(num_unrollings + 1):
feed_dict[train_data[i]] = batches[i]
_, l, predictions, lr = session.run(
[optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict)
mean_loss += l
if step % summary_frequency == 0:
if step > 0:
mean_loss = mean_loss / summary_frequency
# The mean loss is an estimate of the loss over the last few batches.
print(
'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr))
mean_loss = 0
labels = np.concatenate(list(batches)[1:])
print('Minibatch perplexity: %.2f' % float(
np.exp(logprob(predictions, labels))))
if step % (summary_frequency * 10) == 0:
# Generate some samples.
print('=' * 80)
for _ in range(5):
feed = sample(random_distribution())
sentence = characters(feed)[0]
reset_sample_state.run()
for _ in range(79):
prediction = sample_prediction.eval({sample_input: feed})
feed = sample(prediction)
sentence += characters(feed)[0]
print(sentence)
print('=' * 80)
# Measure validation set perplexity.
reset_sample_state.run()
valid_logprob = 0
for _ in range(valid_size):
b = valid_batches.next()
predictions = sample_prediction.eval({sample_input: b[0]})
valid_logprob = valid_logprob + logprob(predictions, b[1])
print('Validation set perplexity: %.2f' % float(np.exp(
valid_logprob / valid_size)))
# + [markdown] colab_type="text" id="pl4vtmFfa5nn"
# ---
# Problem 1
# ---------
#
# You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.
#
# ---
# + [markdown] colab_type="text" id="4eErTCTybtph"
# ---
# Problem 2
# ---------
#
# We want to train a LSTM over bigrams, that is pairs of consecutive characters like 'ab' instead of single characters like 'a'. Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally.
#
# a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves.
#
# b- Write a bigram-based LSTM, modeled on the character LSTM above.
#
# c- Introduce Dropout. For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329).
#
# ---
# + [markdown] colab_type="text" id="Y5tapX3kpcqZ"
# ---
# Problem 3
# ---------
#
# (difficult!)
#
# Write a sequence-to-sequence LSTM which mirrors all the words in a sentence. For example, if your input is:
#
# the quick brown fox
#
# the model should attempt to output:
#
# eht kciuq nworb xof
#
# Refer to the lecture on how to put together a sequence-to-sequence model, as well as [this article](http://arxiv.org/abs/1409.3215) for best practices.
#
# ---
| courses/udacity_deep_learning/6_lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import karps as ks
import karps.functions as f
from karps.display import show_phase
employees = ks.dataframe([
("ACME", "John", "12/01", 12.0),
("ACME", "Kate", "09/04", 11.4),
], schema=["company_name", "employee_name", "dob", "shoe_size"],
name="employees")
employees
# +
# Group employees by date of birth, and count how many employees per calendar date.
#df2 = employees.groupby(employees.dob).agg({"count", f.count})
# Count how many dates have more than one employee.
#num_collisions = f.count(df2[df2.count >= 2], name="num_collisions")
#print("number of days with more than one b-day", s.run(num_collisions))
# +
# Group employees by date of birth, and count how many employees per calendar date.
#df2 = df.groupby(df.employees_dob).agg({"count", f.count})
# Count how many dates have more than one employee.
#num_collisions = f.count(f.filter(df2.count >= 2, df2.employees_dob), name="num_collisions")
#print("number of days with more than one b-day", s.run(num_collisions))
# -
# Put that into a function:
def num_collisions(dob_col):
by_dob = dob_col.groupby(dob_col)
count_dob = by_dob.agg({"count": f.count}, name="count_by_dob")
num_collisions = f.count(
count_dob[count_dob.count >= 2],
name="num_collisions")
return num_collisions
# Let's check that it works against a small amount of data
sample_dobs = ks.dataframe(["12/1", "1/4", "12/1"])
ks.display(num_collisions(sample_dobs))
# We can also use this function as an aggregation function!
collisions_by_company =
(employees.employees_dob
.groupby(employees.company)
.agg({"num_collisions": num_collisions}))
# We can also use this function as an aggregation function!
collisions_by_company =
(employees
.groupby(employees.company)
.agg({
"num_collisions": num_collisions, # Functional
"shoe_size": f.mean(employee.shoe_size) # Direct call to a column of the data being grouped.
# lambda col: f.mean(col.shoe_size)
}))
s = ks.session("test3")
comp = s.compute(the_mean)
show_phase(comp, "initial")
show_phase(comp, "REMOVE_OBSERVABLE_BROADCASTS")
show_phase(comp, "MERGE_AGGREGATIONS")
show_phase(comp, "final")
comp.values()
s.run(the_mean)
| python/notebooks/Demo 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
x=pd.read_excel("Sample 1.xlsx")
x
import pandas as pd
x=pd.read_excel("Sample 1.xlsx")
x
x=x.dropna()
x
new_x=x.dropna()
new_x
x=pd.read_excel("Sample.xlsx")
x
x.loc[18,"Apple"]=40
x
| Day 15.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bonus: Temperature Analysis I
import pandas as pd
from datetime import datetime as dt
# "tobs" is "temperature observations"
df = pd.read_csv('Resources/hawaii_measurements.csv')
df.head()
# Convert the date column format from string to datetime
df.date = pd.to_datetime(df.date,infer_datetime_format = True)
# Set the date column as the DataFrame index
df = df.set_index(df['date'])
df.head()
# Drop the date column
df = df.drop(columns='date')
df.head()
# ### Compare June and December data across all years
from scipy import stats
# Filter data for desired months
dec_data = df[df.index.month == 12]
dec_data.head()
# Filter data for desired months
june_data = df[df.index.month == 6]
june_data.head()
# Identify the average temperature for June
june_data.mean()
# Identify the average temperature for December
dec_data.mean()
# Create collections of temperature data
june = june_data.tobs
june
dec = dec_data.tobs
dec
# Run paired t-test
stats.ttest_ind(june,dec)
# ### Analysis
# +
## Average temperature in June was 74.94 degrees while the average temperature in
## dec was 71.04 degrees. This is not a huge difference indicating that the temperature in
## Hawaii is fairly stable year round. the p Value indicates that the results were statistically
## significant
| temp_analysis_bonus_1_starter.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.1
# language: julia
# name: julia-1.0
# ---
# # Basic OLS
#
# ## Loading Packages
# +
using Dates, DelimitedFiles, Statistics, LinearAlgebra, StatsBase, Distributions
include("jlFiles/printmat.jl")
include("jlFiles/NWFn.jl")
# -
# ## Loading Data
# +
x = readdlm("Data/FFmFactorsPs.csv",',',skipstart=1)
#yearmonth, market, small minus big, high minus low
(ym,Rme,RSMB,RHML) = (x[:,1],x[:,2]/100,x[:,3]/100,x[:,4]/100)
x = nothing
println(size(Rme))
# -
# ## Point Estimates
# Consider the linear regression
#
# $
# y_{t}=\beta^{\prime}x_{t}+\varepsilon_{t},
# $
#
# where $y_{t}$ is a scalar and $x_{t}$ is $k\times1$. The OLS estimate is
#
# $
# \hat{\beta} = S_{xx}^{-1}S_{xy}, \: \text{ where } \:
# S_{xx} = \frac{1}{T}\sum\nolimits_{t=1}^{T}x_{t}x_{t}^{\prime}
# \: \text{ and } \:
# S_{xy} = \frac{1}{T}\sum\nolimits_{t=1}^{T}x_{t}y_{t}.
# $
#
# (The $1/T$ terms clearly cancel, but are sometimes useful to keep to preserve
# numerical precision.)
#
# Instead of these sums (loops over $t$), matrix multiplication can be used to
# speed up the calculations. Create matrices $X_{T\times k}$ and $Y_{T\times1}$
# by letting $x_{t}^{\prime}$ and $y_{t}$ be the $t^{th}$ rows
#
# $
# X_{T\times k}=\left[
# \begin{array}[c]{l}
# x_{1}^{\prime}\\
# \vdots\\
# x_{T}^{\prime}
# \end{array}
# \right] \ \text{ and } \ Y_{T\times1}=\left[
# \begin{array}[c]{l}
# y_{1}\\
# \vdots\\
# y_{T}
# \end{array}
# \right].
# $
#
# We can then calculate the same matrices as
#
# $
# S_{xx} =X^{\prime}X/T \ \text{ and } \: S_{xy}=X^{\prime}Y/T \: \text{, so } \:
# \hat{\beta} =(X^{\prime}X)^{-1}X^{\prime}Y.
# $
#
# However, instead of inverting $S_{xx}$, we typically get much better numerical
# precision by solving the system of $T$ equations
#
# $
# X_{T\times k}b_{k\times1}=Y_{T\times1}
# $
#
# for the vector $b$ that minimizes the sum of squared errors. This
# is easily done by using the command
# ```
# b = X\Y
# ```
# +
println("Three different ways to calculate OLS estimates")
Y = Rme
T = size(Y,1)
X = [ones(T) RSMB RHML]
K = size(X,2)
S_xx = zeros(K,K)
S_xy = zeros(K,1)
for t = 1:T
local x_t, y_t
global S_xx, S_xy
x_t = X[t,:] #a vector
y_t = Y[t:t,:]
S_xx = S_xx + x_t*x_t'/T #KxK
S_xy = S_xy + x_t*y_t/T #Kx1
end
b1 = inv(S_xx)*S_xy #OLS coeffs, version 1
b2 = inv(X'X)*X'Y #OLS coeffs, version 2
b3 = X\Y #OLS coeffs, version 3
println("\nb1, b2 and b3")
printmat([b1 b2 b3])
# -
# ## Distribution of OLS Estimates
# The distribution of the estimates is (typically)
#
# $
# \sqrt{T}(\hat{\beta}-\beta_{0})\overset{d}{\rightarrow}N(0,V)
# \: \text{ where } \: V=S_{xx}^{-1} S S_{xx}^{-1}
# $
#
# where $S$ is the covariance matrix of $\sqrt{T}\bar{g}$, where $\bar{g}$
# is the sample average of
#
# $
# g_{t}=x_{t}(y_{t}-x_{t}^{\prime}\beta),
# $
#
# and $S_{xx}$ is defined as
#
# $
# S_{xx}=-\sum_{t=1}^{T}x_{t}x_{t}^{\prime}/T.
# $
# +
b = X\Y
u = Y - X*b #residuals
g = X.*u #TxK, moment conditions
println("\navg moment conditions")
printmat(mean(g,dims=1))
S = NWFn(g,1) #Newey-West covariance matrix
Sxx = -X'X/T
V = inv(Sxx)'S*inv(Sxx) #Cov(sqrt(T)*b)
println("\nb and std(b)")
printmat([b3 sqrt.(diag(V/T))])
# -
# ## A Function for OLS
"""
OlsFn(y,x,m=1)
LS of y on x; for one dependent variable
# Usage
(b,res,yhat,V,R2a) = OlsFn(y,x,m)
# Input
- `y::Array`: Tx1, the dependent variable
- `x::Array`: Txk matrix of regressors (including deterministic ones)
- `m::Int`: scalar, bandwidth in Newey-West
# Output
- `b::Array`: kx1, regression coefficients
- `u::Array`: Tx1, residuals y - yhat
- `yhat::Array`: Tx1, fitted values x*b
- `V::Array`: kxk matrix, covariance matrix of sqrt(T)b
- `R2a::Number`: scalar, R2 value
"""
function OlsFn(y,x,m=0)
T = size(y,1)
b = x\y
yhat = x*b
u = y - yhat
g = x.*u
S0 = NWFn(g,m) #Newey-West covariance matrix
Sxx = -x'x/T
V = inv(Sxx)'S0*inv(Sxx)
R2a = 1 - var(u)/var(y)
return b,u,yhat,V,R2a
end
(b4,_,_,V,R2a) = OlsFn(Y,X,1)
println("\n with NW standard errors")
printmat([b4 sqrt.(diag(V/T))])
# ## Testing a Hypothesis
# Since the estimator $\hat{\beta}_{_{k\times1}}$ satisfies
#
# $
# \sqrt{T}(\hat{\beta}-\beta_{0})\overset{d}{\rightarrow}N(0,V_{k\times k}) ,
# $
#
# we can easily apply various tests. To test a joint linear hypothesis of the
# form
#
# $
# \gamma_{q\times1}=R\beta-a,
# $
#
# use the test
#
# $
# (R\beta-a)^{\prime}(\Lambda/T) ^{-1}(R\beta
# -a)\overset{d}{\rightarrow}\chi_{q}^{2} \: \text{, where } \: \Lambda=RVR^{\prime}.
# $
R = [0 1 0; #testing if b(2)=0 and b(3)=0
0 0 1]
a = [0;0]
Γ = R*V*R'
test_stat = (R*b-a)'inv(Γ/T)*(R*b-a)
println("\ntest-statictic and 10% critical value of chi-square(2)")
printmat([test_stat 4.61])
# ## Regression Diagnostics: Testing All Slope Coefficients
#
# The function in the next cell tests all slope coefficients of the regression.
# +
"""
OlsR2TestFn(R2a,T,k)
"""
function OlsR2TestFn(R2a,T,k)
RegrStat = T*R2a/(1-R2a)
pval = 1 - cdf(Chisq(k-1),RegrStat)
Regr = [RegrStat pval (k-1)]
return Regr
end
# +
Regr = OlsR2TestFn(R2a,T,size(X,2))
println("Test of all slopes: stat, p-val, df")
printmat(Regr)
# -
# ## Regression Diagnostics: Autocorrelation of the Residuals
#
# The function in the next cell estimates autocorrelations, calculates the DW and Box-Pierce statistics.
# +
"""
OlsAutoCorrFn(u,m=1)
Test the autocorrelation of OLS residuals
# Input:
- `u::Array`: Tx1, residuals
- `m::Int`: scalar, number of lags in autocorrelation and Box-Pierce test
# Output
- `AutoCorr::Array`: mx2, autorrelation and p-value
- `DW::Number`: scalar, DW statistic
- `BoxPierce::Array`: 1x2, Box-Pierce statistic and p-value
# Requires
- StatsBase, Distributions
"""
function OlsAutoCorrFn(u,m=1)
T = size(u,1)
Stdu = std(u)
rho = autocor(u,1:m)
#use map to get around bug in cdf.()
pval = 2*(1.0 .- map(x->cdf(Normal(0,1),x),sqrt(T)*abs.(rho)))
AutoCorr = [rho pval]
BPStat = T*sum(rho.^2)
pval = 1 - cdf(Chisq(m),BPStat)
BoxPierce = [BPStat pval]
dwStat = mean(diff(u).^2)/Stdu^2
return AutoCorr,dwStat,BoxPierce
end
# +
(AutoCorr,dwStat,BoxPierce) = OlsAutoCorrFn(u,3)
println(" lag autoCorr p-val:")
printmat([1:3 AutoCorr])
printlnPs("DW:",dwStat)
println("\nBoxPierce: stat, p-val")
printmat(BoxPierce)
# -
# ## Regression Diagnostics: Heteroskedasticity
#
# The function in the next cell performs White's test for heteroskedasticity.
# +
"""
OlsWhitesTestFn(u,x)
# Input:
- `u::Array`: Tx1, residuals
- `x::Array`: Txk, regressors
"""
function OlsWhitesTestFn(u,x)
(T,k) = (size(x,1),size(x,2))
psi = zeros(T,round(Int,k*(k+1)/2)) #matrix of cross products of x
vv = 0
for i = 1:k, j = i:k
vv = vv + 1
psi[:,vv] = x[:,i].*x[:,j] #all cross products, incl own
end
(_,_,_,_,R2a) = OlsFn(u.^2,psi) #White's test for heteroskedasticity
WhiteStat = T*R2a/(1-R2a)
pval = 1 - cdf(Chisq(size(psi,2)-1),WhiteStat)
White = [WhiteStat pval (size(psi,2)-1)]
return White
end
# +
White = OlsWhitesTestFn(u,X)
println("White: stat,p-val, df ")
printmat(White)
# -
# # A Function for SURE (OLS)
#
#
# Consider the linear regression
#
# $
# y_{it}=\beta_i^{\prime}x_{t}+\varepsilon_{it},
# $
#
# where $i=1,2,..,n$ indicates $n$ different dependent variables. The regressors are the *same* across the regressions.
"""
OlsSureFn(y,x,m=1)
LS of y on x; for one n dependent variables, same regressors
# Usage
(b,res,yhat,Covb,R2a) = OlsSureFn(y,x,m)
# Input
- `y::Array`: Txn, the n dependent variables
- `x::Array`: Txk matrix of regressors (including deterministic ones)
- `m::Int`: scalar, bandwidth in Newey-West
# Output
- `b::Array`: n*kx1, regression coefficients
- `u::Array`: Txn, residuals y - yhat
- `yhat::Array`: Txn, fitted values x*b
- `V::Array`: matrix, covariance matrix of sqrt(T)vec(b)
- `R2a::Number`: n vector, R2 value
"""
function OlsSureFn(y,x,m=0)
(T,n) = (size(y,1),size(y,2))
k = size(x,2)
b = x\y
yhat = x*b
u = y - yhat
g = zeros(T,n*k)
for i = 1:n
vv = (1+(i-1)*k):(i*k) #1:k,(1+k):2k,...
g[:,vv] = x.*u[:,i] #moment conditions for y[:,i] regression
end
S0 = NWFn(g,m) #Newey-West covariance matrix
Sxxi = -x'x/T
Sxx_1 = kron(Matrix(1.0I,n,n),inv(Sxxi))
V = Sxx_1 * S0 * Sxx_1
R2a = 1.0 .- var(u,dims=1)./var(y,dims=1)
return b,u,yhat,V,R2a
end
# +
println("regressing [RSMB RHML] on Rme: [vec(coef) vec(std)]")
(b,u,yhat,V,R2a) = OlsSureFn([RSMB RHML],[ones(T) Rme],1)
printmat([vec(b) sqrt.(diag(V/T))])
R = [1 0 -1 0] #Testing if the alphas are the same
Γ = R*V*R'
test_stat = (R*vec(b))'inv(Γ/T)*(R*vec(b))
println("test-statictic of alpha1=alpha2 and 10% critical value of chi-square(1)")
printmat([test_stat 2.71])
# -
| Ols.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/GeekBoySupreme/Colab_Notebooks/blob/master/fashion_mnist_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="fLWblBHaknxq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 377} outputId="80789a76-be3b-4cb1-9bff-f35fe5132430"
# !pip install tensorflow==2.0.0-beta1
# + id="ROun-Ku3lOAp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="ca7189ec-4a0d-47cc-c0d1-2f2f4acbcb9f"
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras import datasets, layers, models
# Loading the fashion-mnist pre-shuffled train data and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', #since the data is labeled in numbers
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# + id="azJiFJpslTjN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 592} outputId="ad5d9617-1b44-479b-cd3d-64e47999d543"
# Normalize pixel values to be between 0 and 1
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train[i], cmap=plt.cm.binary)
plt.xlabel(class_names[y_train[i]])
plt.show()
# + id="kHVfvL3hlaic" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="5e4d2099-292a-48a5-fc4e-77b4c0bf6b2c"
#Building model with Keras' Sequential function
x_train = x_train.reshape((60000, 28, 28, 1))
x_test = x_test.reshape((10000, 28, 28, 1))
model = tf.keras.models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
# + id="PbMIlpUzlfTJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 292} outputId="90b4dd85-87af-48b0-e01a-d3133d7b78b1"
#training model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
# + id="QV3g3s9Klkjl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="686ea3f1-27bd-4c76-d8f9-adc1bfb3d132"
#testing model prediction
test_loss, test_acc = model.evaluate(x_test, y_test)
print(test_acc)
| fashion_mnist_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import csv
df = pd.DataFrame({
"num1":[222,333,444,555],
"text":[
'foo " bar baz',
' dasldf \n \\" foo bar " and more \\" foo bar " dlasjdsaij',
' foo \n \ bar C\\"" ',
'foo \"" bar'
],
"num":[1,2,3,4]
})
df.head()
df.to_csv(
"preproc.csv",
quoting=csv.QUOTE_NONNUMERIC,
escapechar="\\",
doublequote=False,
index=False)
df = pd.read_csv(
"preproc.csv",
escapechar="\\")
df.head()
| python3/notebooks/pandas-csv/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Practical_RL week3 homework
#
# In this notebook we'll get more perspective on how value-based methods work.
#
# We assume that you've already done either seminar_main or seminar_alternative.
#
# To begin with, __please edit qlearning.py__ - just copy your implementation from the first part of this assignment.
# +
# %load_ext autoreload
# %autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from IPython.display import clear_output
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
# !bash ../xvfb start
# %env DISPLAY=:1
# -
# ## 1. Q-learning in the wild (3 pts)
#
# Here we use the qlearning agent on taxi env from openai gym.
# You will need to insert a few agent functions here.
# +
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
# +
from qlearning import QLearningAgent
agent = QLearningAgent(alpha=0.5,epsilon=0.25,discount=0.99,
getLegalActions = lambda s: range(n_actions))
# +
def play_and_train(env,agent,t_max=10**4):
"""This function should
- run a full game, actions given by agent.getAction(s)
- train agent using agent.update(...) whenever possible
- return total reward"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
a = #<get agent to pick action given state s>
next_s,r,done,_ = env.step(a)
#<train (update) agent for state s>
s = next_s
total_reward +=r
if done:break
return total_reward
# -
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
if i %100 ==0:
clear_output(True)
print("mean reward",np.mean(rewards[-100:]))
plt.plot(rewards)
plt.show()
# ### 1.1 reducing epsilon
#
# Try decreasing agent epsilon over time to make him reach positive score.
#
# The straightforward way to do so is to reduce epsilon every N games:
# * either multiply agent.epsilon by a number less than 1 (e.g. 0.99)
# * or substract a small value until it reaches 0
#
# You can, of-course, devise other strategies.
#
# __The goal is to reach positive reward!__
# ## 2. Expected value SARSA (1 pt)
#
# Let's try out expected-value SARSA. You will have to implement EV-SARSA as an agent, resembling the one you used in qlearning.py ,
#
# ```<go to expected_value_sarsa.py and implement missing lines in getValue(state)```
#
# __[bonus, 2pt]__ implement EV-SARSA for softmax policy:
#
# $$ \pi(a_i|s) = softmax({Q(s,a_i) \over \tau}) = {e ^ {Q(s,a_i)/ \tau} \over {\sum_{a_j} e ^{Q(s,a_j) / \tau }}} $$
# +
import gym
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
# -
from expected_value_sarsa import EVSarsaAgent
agent = EVSarsaAgent(alpha=0.5,epsilon=0.25,discount=0.99,
getLegalActions = lambda s: range(n_actions))
# ### Train EV-SARSA
#
# Note that it uses __the same update parameters as__ qlearning so you adapt use the ```play_and_train``` code above.
#
# Please try both constant epsilon = 0.25 and decreasing epsilon.
<your code here>
# ## 3. Continuous state space (2 pt)
#
# Use agent to train on CartPole-v0
#
# This environment has a continuous number of states, so you will have to group them into bins somehow.
#
# The simplest way is to use `round(x,n_digits)` (or numpy round) to round real number to a given amount of digits.
#
# The tricky part is to get the n_digits right for each state to train effectively.
#
# Note that you don't need to convert state to integers, but to __tuples__ of any kind of values.
# +
env = gym.make("CartPole-v0")
n_actions = env.action_space.n
print("first state:%s"%(env.reset()))
plt.imshow(env.render('rgb_array'))
# -
# ### Play a few games
#
# We need to estimate observation distributions. To do so, we'll play a few games and record all states.
# +
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s,r,done,_ = env.step(env.action_space.sample())
all_states.append(s)
if done:break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
# -
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
if i %100 ==0:
clear_output(True)
print("mean reward",np.mean(rewards[-100:]))
plt.plot(rewards)
plt.show()
# ## Binarize environment
from gym.core import ObservationWrapper
class Binarizer(ObservationWrapper):
def _observation(self,state):
#state = <round state to some amount digits.>
#hint: you can do that with round(x,n_digits)
#you will need to pick a different n_digits for each dimension
return tuple(state)
env = Binarizer(gym.make("CartPole-v0"))
# +
all_states = []
for _ in range(1000):
all_states.append(env.reset())
done = False
while not done:
s,r,done,_ = env.step(env.action_space.sample())
all_states.append(s)
if done:break
all_states = np.array(all_states)
for obs_i in range(env.observation_space.shape[0]):
plt.hist(all_states[:,obs_i],bins=20)
plt.show()
# -
# ## Learn
agent = QLearningAgent(alpha=0.5,epsilon=0.25,discount=0.99,
getLegalActions = lambda s: range(n_actions))
rewards = []
for i in range(1000):
rewards.append(play_and_train(env,agent))
if i %100 ==0:
clear_output(True)
print("mean reward",np.mean(rewards[-100:]))
plt.plot(rewards)
plt.show()
# ## 3.2 EV-sarsa on CartPole
#
# Now train the `EVSarsaAgent` on CartPole-v0 env with binarizer you used above for Q-learning.
# +
env = <make env and wrap it with binarizer>
agent = <your code>
# -
<train me>
# ## 4. Experience replay (4 pts)
#
# There's a powerful technique that you can use to improve sample efficiency for off-policy algorithms: [spoiler] Experience replay :)
#
# The catch is that you can train Q-learning and EV-SARSA on `<s,a,r,s'>` tuples even if they aren't sampled under current agent's policy. So here's what we're gonna do:
#
# #### Training with experience replay
# 1. Play game, sample `<s,a,r,s'>`.
# 2. Update q-values based on `<s,a,r,s'>`.
# 3. Store `<s,a,r,s'>` transition in a buffer.
# 3. If buffer is full, delete earliest data.
# 4. Sample K such transitions from that buffer and update q-values based on them.
#
#
# To enable such training, first we must implement a memory structure that would act like such a buffer.
import random
class ReplayBuffer(object):
def __init__(self, size):
"""Create Replay buffer.
Parameters
----------
size: int
Max number of transitions to store in the buffer. When the buffer
overflows the old memories are dropped.
"""
self._storage = []
self._maxsize = size
<any other vars>
def __len__(self):
return len(self._storage)
def add(self, obs_t, action, reward, obs_tp1, done):
'''
Make sure, _storage will not exceed _maxsize.
Make sure, FIFO rule is being followed: the oldest examples has to be removed earlier
'''
data = (obs_t, action, reward, obs_tp1, done)
<add data to storage.>
def sample(self, batch_size):
"""Sample a batch of experiences.
Parameters
----------
batch_size: int
How many transitions to sample.
Returns
-------
obs_batch: np.array
batch of observations
act_batch: np.array
batch of actions executed given obs_batch
rew_batch: np.array
rewards received as results of executing act_batch
next_obs_batch: np.array
next set of observations seen after executing act_batch
done_mask: np.array
done_mask[i] = 1 if executing act_batch[i] resulted in
the end of an episode and 0 otherwise.
"""
idxes = <randomly generate indexes of samples>
###Your code: collect <s,a,r,s',done> for each index
return np.array(<states>), np.array(<actions>), np.array(<rewards>), np.array(<next_states>), np.array(<is_done>)
# Some tests to make sure your buffer works right
replay = ReplayBuffer(2)
obj1 = tuple(range(5))
obj2 = tuple(range(5, 10))
replay.add(*obj1)
assert replay.sample(1)==obj1, "If there's just one object in buffer, it must be retrieved by buf.sample(1)"
replay.add(*obj2)
assert len(replay._storage)==2, "Please make sure __len__ methods works as intended."
replay.add(*obj2)
assert len(replay._storage)==2, "When buffer is at max capacity, replace objects instead of adding new ones."
assert tuple(np.unique(a) for a in replay.sample(100))==obj2
replay.add(*obj1)
assert max(len(np.unique(a)) for a in replay.sample(100))==2
replay.add(*obj1)
assert tuple(np.unique(a) for a in replay.sample(100))==obj1
print ("Success!")
# Now let's use this buffer to improve training:
agent = <create agent>
replay = ReplayBuffer(10000)
def play_and_train(env,agent,replay,t_max=10**4):
"""This function should
- run a full game, actions given by agent.getAction(s)
- train agent using agent.update(...) whenever possible
- return total reward"""
total_reward = 0.0
s = env.reset()
<How you need to modify pipeline in order to use ER?>
for t in range(t_max):
a = #<get agent to pick action given state s>
next_s,r,done,_ = env.step(a)
###Your code here: store current <s,a,r,s'> transition in buffer
###Your code here: train on both current
s = next_s
total_reward +=r
if done:break
return total_reward
# Train with experience replay
###Your code:
# - build a training loop
# - plot learning curves
<...>
# ### Bonus I: TD($ \lambda $) (5+ points)
#
# There's a number of advanced algorithms you can find in week 3 materials (Silver lecture II and/or reading about eligibility traces). One such algorithm is TD(lambda), which is based on the idea of eligibility traces. You can also view it as a combination of N-step updates for alll N.
# * N-step temporal difference from Sutton's book - [url](http://incompleteideas.net/sutton/book/ebook/node73.html)
# * Eligibility traces from Sutton's book - [url](http://incompleteideas.net/sutton/book/ebook/node72.html)
# * Blog post on eligibility traces - [url](http://pierrelucbacon.com/traces/)
#
# Here's a practical algorithm you can start with: [url](https://stackoverflow.com/questions/40862578/how-to-understand-watkinss-q%CE%BB-learning-algorithm-in-suttonbartos-rl-book/40892302)
#
#
# Implementing this algorithm will prove more challenging than q-learning or sarsa, but doing so will earn you a deeper understanding of how value-based methods work [in addition to some bonus points].
#
# More kudos for comparing and analyzing TD($\lambda$) against Q-learning and EV-SARSA in different setups (taxi vs cartpole, constant epsilon vs decreasing epsilon).
# ### Bonus II: More pacman (5+ points)
#
# Remember seminar_main where your vanilla q-learning had hard time solving Pacman even on a small grid. Now's the time to fix that issue.
#
# We'll focus on those grids for pacman setup.
# * python pacman.py -p PacmanQAgent -x N_TRAIN_GAMES -n N_TOTAL_GAMES -l __mediumGrid__
# * python pacman.py -p PacmanQAgent -x N_TRAIN_GAMES -n N_TOTAL_GAMES -l __mediumClassic__
#
# Even if you adjust N_TRAIN_GAMES to 10^5 and N_TOTAL_GAMES to 10^5+100 (100 last games are for test), pacman won't solve those environments
#
# The problem with those environments is that they have a large amount of unique states. However, you can devise a smaller environment state by choosing different observation parameters, e.g.:
# * distance and direction to nearest ghost
# * where is nearest food
# * 'center of mass' of all food points (and variance, and whatever)
# * is there a wall in each direction
# * and anything else you see fit
#
# Here's how to get this information from [state](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/pacman.py#L49),
# * Get pacman position: [state.getPacmanPosition()](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/pacman.py#L128)
# * Is there a wall at (x,y)?: [state.hasWall(x,y)](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/pacman.py#L189)
# * Get ghost positions: [state.getGhostPositions()](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/pacman.py#L144)
# * Get all food positions: [state.getCapsules()](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/pacman.py#L153)
#
# You can call those methods anywhere you see state.
# * e.g. in [agent.getValue(state)](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/qlearningAgents.py#L52)
# * Defining a function that extracts all features and calling it in [getQValue](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/qlearningAgents.py#L38) and [setQValue](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/qlearningAgents.py#L44) is probably enough.
# * You can also change agent parameters. The simplest way is to hard-code them in [PacmanQAgent](https://github.com/yandexdataschool/Practical_RL/blob/master/week2/assignment/qlearningAgents.py#L140)
#
# Also, don't forget to optimize ```learning_rate```, ```discount``` and ```epsilon``` params of model, this may also help to solve this env.
| week3_model_free/homework/homework.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.2
# language: julia
# name: julia-1.5
# ---
# # Time-varying bid problems with [PowerSimulations.jl](https://github.com/NREL-SIIP/PowerSimulations.jl)
# **Originally Contributed by**: <NAME>
# ## Introduction
# PowerSimulations.jl supports the construction of Operations problems in power system
# with three part cost bids for each time step. MarketBidCost allows the user to pass a
# time-series of variable cost for energy and ancillary services jointly.
# This example shows how to build a Operations problem with MarketBidCost and how to add
# the time-series data to the devices.
# ## Dependencies
using SIIPExamples
# ### Modeling Packages
using PowerSystems
using PowerSimulations
const PSI = PowerSimulations
using D3TypeTrees
# ### Data management packages
using Dates
using DataFrames
using TimeSeries
# ### Optimization packages
using Cbc #solver
# ### Data
# This data depends upon the [RTS-GMLC](https://github.com/gridmod/rts-gmlc) dataset. Let's
# download and extract the data.
rts_dir = SIIPExamples.download("https://github.com/GridMod/RTS-GMLC")
rts_src_dir = joinpath(rts_dir, "RTS_Data", "SourceData")
rts_siip_dir = joinpath(rts_dir, "RTS_Data", "FormattedData", "SIIP");
# ### Create a `System` from RTS-GMLC data just like we did in the [parsing tabular data example.](../../notebook/2_PowerSystems_examples/04_parse_tabulardata.jl)
rawsys = PowerSystems.PowerSystemTableData(
rts_src_dir,
100.0,
joinpath(rts_siip_dir, "user_descriptors.yaml"),
timeseries_metadata_file = joinpath(rts_siip_dir, "timeseries_pointers.json"),
generator_mapping_file = joinpath(rts_siip_dir, "generator_mapping.yaml"),
);
sys = System(rawsys; time_series_resolution = Dates.Hour(1));
# ### Creating the Time Series data for Energy bid
MultiDay = collect(
DateTime("2020-01-01T00:00:00"):Hour(1):(DateTime("2020-01-01T00:00:00") + Hour(8783)),
);
# ### Replacing existing ThreePartCost with MarketBidCost
# Here we replace the existing ThreePartCost with MarketBidCost, and add the energy bid
# time series to the system. The TimeSeriesData that holds the energy bid data can be of any
# type (i.e. `SingleTimeSeries` or `Deterministic`) and bid data should be of type
# `Array{Float64}`,`Array{Tuple{Float64, Float64}}` or `Array{Array{Tuple{Float64,Float64}}`.
for gen in get_components(ThermalGen, sys)
varcost = get_operation_cost(gen)
market_bid_cost = MarketBidCost(;
variable = nothing,
no_load = get_fixed(varcost),
start_up = (hot = get_start_up(varcost), warm = 0.0, cold = 0.0),
shut_down = get_shut_down(varcost),
ancillary_services = Vector{Service}(),
)
set_operation_cost!(gen, market_bid_cost)
data = TimeArray(MultiDay, repeat([get_cost(get_variable(varcost))], 8784))
_time_series = SingleTimeSeries("variable_cost", data)
set_variable_cost!(sys, gen, _time_series)
end
# ### Transforming SingleTimeSeries into Deterministic
horizon = 24;
interval = Dates.Hour(24);
transform_single_time_series!(sys, horizon, interval)
# In the [OperationsProblem example](../../notebook/3_PowerSimulations_examples/1_operations_problems.ipynb)
# we defined a unit-commitment problem with a copper plate representation of the network.
# Here, we want do define unit-commitment problem with ThermalMultiStartUnitCommitment
# formulation for thermal device representation.
# For now, let's just choose a standard ACOPF formulation.
uc_template = template_unit_commitment(network = CopperPlatePowerModel)
# Currently energy budget data isn't stored in the RTS-GMLC dataset.
# +
uc_template.devices[:Generators] =
DeviceModel(ThermalStandard, ThermalMultiStartUnitCommitment)
solver = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 1, "ratioGap" => 0.5)
# -
# Now we can build a 4-hour economic dispatch / ACOPF problem with the RTS data.
problem = OperationsProblem(
EconomicDispatchProblem,
uc_template,
sys,
horizon = 4,
optimizer = solver,
balance_slack_variables = true,
)
# And solve it ...
solve!(problem)
# ---
#
# *This notebook was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| notebook/3_PowerSimulations_examples/10_market_bid_cost.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # ML Pipeline Preparation
# Follow the instructions below to help you create your ML pipeline.
# ### 1. Import libraries and load data from database.
# - Import Python libraries
# - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
# - Define feature and target variables X and Y
# +
# import libraries
import time
import re
import nltk
import pickle
import pandas as pd
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords, wordnet
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
# Suppress warnings
import warnings; warnings.simplefilter('ignore')
# -
# load data from database
engine = create_engine('sqlite:///Messages.db')
df = pd.read_sql_table('Messages', engine)
# Print first few lines of dataframe
df.head(2)
# Define X and Y variables
X = df['message']
Y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
# ### 2. Write a tokenization function to process your text data
# +
# Define tokenize function
# Excluded the lemmatization step to increase speed
# Defining the variable "stop_words" makes the code run significantly faster
def tokenize(text):
# Case normalization
text = text.lower()
# Puncuation removal
text = re.sub(r'[^a-zA-Z0-9]', ' ', text)
# Tokenize words
words = word_tokenize(text)
# Stop word removal
stop_words = stopwords.words("english")
words = [w for w in words if w not in stop_words]
# Perform stemming
stemmer = PorterStemmer()
words = [stemmer.stem(w) for w in words if w not in stop_words]
return words
# +
# %%timeit
# Time how long the tokenize function to work on this short sentence
# to help with speed optimization
text = "You are the greatest, no 1.0, person in the world."
tokenize(text)
# -
# ### 3. Build a machine learning pipeline
# This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
# Build ML pipeline using random forest classifier
pipeline1 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
# ### 4. Train pipeline
# - Split data into train and test sets
# - Train pipeline
# +
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y)
start = time.time()
# Train model
pipeline1.fit(X_train, y_train)
end = time.time()
train_time1 = end - start
print(train_time1)
# -
# ### 5. Test your model
# Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
# +
# Predict labels using model
start = time.time()
y_pred1 = pipeline1.predict(X_test)
end = time.time()
pred_time1 = end - start
print(pred_time1)
# -
# Print accuracy report
report1 = pd.DataFrame.from_dict(classification_report(y_test, y_pred1, target_names=Y.columns, output_dict=True))
report1 = pd.DataFrame.transpose(report1)
report1
# ### 6. Improve your model
# Use grid search to find better parameters.
# +
# Use grid search to find better parameters
start = time.time()
parameters = {
'clf__estimator__n_estimators': [10, 100],
'clf__estimator__min_samples_split': [2, 5]
}
cv = GridSearchCV(pipeline1, param_grid=parameters, verbose=1)
cv.fit(X_train, y_train)
y_pred_cv = cv.predict(X_test)
end = time.time()
cv_time = end - start
print(cv_time)
# +
# Show results of gridsearch.
# According to the rank_test_score and train_scores:
# param_clf__estimator__min_samples_split = 2 and param_clf__estimator__n_estimators = 100
# is the best option (also the slowest).
pd.DataFrame.from_dict(cv.cv_results_)
# -
# ### 7. Test your model
# Show the accuracy, precision, and recall of the tuned model.
#
# Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
# Print accuracy report
report_cv = pd.DataFrame.from_dict(classification_report(y_test, y_pred_cv, target_names=Y.columns, output_dict=True))
report_cv = pd.DataFrame.transpose(report_cv)
report_cv
# Compare f1 scores between default model and improved model
report_cv['f1-score'] - report1['f1-score']
# The default RF and the improved model appear to be similar in terms of f1 score.
# ### 8. Try improving your model further. Here are a few ideas:
# * try other machine learning algorithms
# * add other features besides the TF-IDF
# Create new pipeline using MultinomialNB classifier
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(MultinomialNB()))
])
# +
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y)
start = time.time()
# Train model
pipeline2.fit(X_train, y_train)
end = time.time()
train_time2 = end - start
print(train_time2)
# +
# Test model
start = time.time()
y_pred2 = pipeline2.predict(X_test)
end = time.time()
pred_time2 = end - start
print(pred_time2)
# -
# Print accuracy report
report2 = pd.DataFrame.from_dict(classification_report(y_test, y_pred2, target_names=Y.columns, output_dict=True))
report2 = pd.DataFrame.transpose(report2)
report2
# The MultinomialNB classifier performs poorly compared to the RandomForestClassifier as it cannot predict many of the categories.
# Create new pipeline using SVC classifier
pipeline3 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(SVC()))
])
# +
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y)
start = time.time()
# Train model
pipeline3.fit(X_train, y_train)
end = time.time()
train_time3 = end - start
print(train_time3)
# +
# Test model
start = time.time()
y_pred3 = pipeline3.predict(X_test)
end = time.time()
pred_time3 = end - start
print(pred_time3)
# -
# Print accuracy report
report3 = pd.DataFrame.from_dict(classification_report(y_test, y_pred3, target_names=Y.columns, output_dict=True))
report3 = pd.DataFrame.transpose(report3)
report3
# The SVC classifier performs poorly compared to the RandomForestClassifier as it cannot predict anything other than 'related'.
# ### 9. Export your model as a pickle file
pickle.dump(cv, open('disaster_model.sav', 'wb'))
# ### 10. Use this notebook to complete `train.py`
# Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
| ML_Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] run_control={"marked": true}
# <img src="https://www.colorado.edu/rc/sites/default/files/page/logo.png"
# alt="Logo for Research Computing @ University of Colorado Boulder"
# width="400" />
#
# # Install `ipyparallel` using `conda`
# -
# !conda install --yes ipyparallel
# # Install the cluster tab for the notebooks
# !jupyter nbextension install --user --py ipyparallel
# !jupyter nbextension enable --user --py ipyparallel
# !jupyter nbextension enable --user --py ipyparallel
# # Quit and restart your jupyter hub
| 01-install-software.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 <NAME>, <NAME>, <NAME>. Partly based on [HyperPython](http://nbviewer.ipython.org/github/ketch/HyperPython/tree/master/) by <NAME>, also under CC-BY.
# # Riding the wave
# This is the fourth and final lesson of Module 3, _Riding the wave: convection problems_, of the course **"Practical Numerical Methods with Python"** (a.k.a., [#numericalmooc](https://twitter.com/hashtag/numericalmooc)). We learned about conservation laws and the traffic-flow model in the [first lesson](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_01_conservationLaw.ipynb), and then about better numerical schemes for convection in [lesson 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_02_convectionSchemes.ipynb).
#
# By then, you should have started to recognize that both mathematical models and numerical schemes work together to give us a good solution to a problem. To drive the point home, [lesson 3](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_03_aBetterModel.ipynb) deals only with an improved model—and showed you some impressive SymPy tricks!
#
# In this lesson, we'll learn about a new class of discretization schemes, known as finite-volume methods. They are the _most widely used_ methods in computational fluid dynamics, and for good reasons! Let's get started ...
# ## Finite-volume method
# Are you curious to find out why the finite-volume method (FVM) is the _most popular method_ in computational fluid dynamics? In fact, almost all of the commercial CFD software packages are based on the finite-volume discretization. Here are some reasons:
#
# * FVM discretizations are very general and have no requirement that the grid be structured, like in the finite-difference method. This makes FVM very _flexible_.
#
# * FVM gives a _conservative discretization_ automatically by using directly the conservation laws in integral form.
# ### Conservative discretization
# Let's go right back to the start of this module, where we explained conservation laws looking at a tiny control volume. To simplify the discussion, we just looked at flow in one dimension, with velocity $u$. Imagining a tiny cylindrical volume, like the one shown in Figure 1, there is flux on the left face and right face and we easily explained conservation of mass in that case.
# 
# #### Figure 1. Tiny control volume in the shape of a cylinder.
# The law of conservation of mass says that the rate of change of mass in the control volume, plus the net rate of flow of mass across the control surfaces must be zero. The same idea works for other conserved quantities.
#
# Conservation means that any change in the quantity within a volume is due to the amount of that quantity that crosses the boundary. Sounds simple enough. (Remember that we are ignoring possible internal sources of the quantity.) The amount crossing the boundary is the flux. A general conservation law for a quantity $e$ is thus:
#
# \begin{equation}
# \frac{\partial}{\partial t}\int_{\text{cv}}e \, dV + \oint_{\text{cs}}\vec{F}\cdot d\vec{A} =0
# \end{equation}
#
# where $\vec{F}$ is the flux, and $\text{cv}$ denotes the control volume with control surface $\text{cs}$.
# **Why not make the control volume itself our computational cell?**
#
# Imagine that the one-dimensional domain of interest is divided using grid points $x_i$. But instead of using local values at the grid points, like we did before, we now want to use _average_ values within each one-dimensional cell of width $\Delta x$ with center at $x_i$.
#
# Define $e_i$ as the integral average across the little control volume on the cell with center at $x_i$ (see Figure 2).
#
# $$
# \begin{equation}
# e_i = \frac{1}{\Delta x} \int_{x_i - \Delta x / 2}^{x_i + \Delta x / 2} e(x, t) \, dx.
# \end{equation}
# $$
#
# If we know the flux terms at the boundaries of the control volume, which are at $x_{i-1/2}$ and $x_{i+1/2}$, the general conservation law for this small control volume gives:
#
# $$
# \begin{equation}
# \frac{\partial}{\partial t} e_i + \frac{1}{\Delta x} \left[ F \left( x_{i+1/2}, t \right) - F \left( x_{i - 1 / 2}, t \right) \right] = 0.
# \end{equation}
# $$
#
# This now just requires a time-stepping scheme, and is easy to solve *if* we can find $F$ on the control surfaces.
# 
#
# ####Figure 2. Discretizing a 1D domain into finite volumes.
# We've seen with the traffic model that the flux can depend on the conserved quantity (in that case, the traffic density). That is generally the case, so we write that $F = F(e)$. We will need to compute, or approximate, the flux terms at the cell edges (control surfaces) from the integral averages, $e_i$.
#
# If we had a simple convection equation with $c>0$, then the flux going into the cell centered at $x_i$ from the left would be $F(e_{i-1})$ and the flux going out the cell on the right side would be $F(e_{i})$ (see Figure 2). Applying these fluxes in Equation (3) results in a scheme that is equivalent to our tried-and-tested backward-space (upwind) scheme!
#
# We know from previous lessons that the backward-space scheme is first order and the error introduces numerical diffusion. Also, remember what happened when we tried to use it with the non-linear [traffic model](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_01_conservationLaw.ipynb) in the green-light problem? *It blew up!* That was because the problem contains both right-moving and left-moving waves (if you don't remember that discussion, go back and review it; it's important!).
#
# To skirt this difficulty in the green-light problem, we chose initial conditions that don't produce negative wave speeds. But that's cheating! A genuine solution would be to have a scheme that can deal with both positive and negative wave speeds. Here is where Godunov comes in.
# ### Godunov's method
# Godunov proposed a first-order method in 1959 that uses the integral form of the conservation laws, Equation (1), and a piecewise constant representation of the solution, as shown in Figure (2). Notice that representing the solution in this way is like having a myriad of little shocks at the cell boundaries (control surfaces).
#
# For each control surface, we have *two values* for the solution $e$ at a given time: the constant value to the left, $e_L$, and the constant value to the right, $e_R$. A situation where you have a conservation law with a constant initial condition, except for a single jump discontinuity is called a **Riemann problem**.
#
# The Riemann problem has an exact solution for the Euler equations (as well as for any scalar conservation law). The [shock-tube problem](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_05_Sods_Shock_Tube.ipynb), subject of your assignment for this course module, is a Riemann problem! And because it has an analytical solution, we can use it for testing the accuracy of numerical methods.
#
# But Godunov had a better idea. With the solution represented as piecewise constant (Figure 2), why not use the analytical solution of the Riemann problem at each cell boundary? Solving a Riemann problem gives all the information about the characteristic structure of the solution, including the sign of the wave speed. The full solution can then be reconstructed from the union of all the Riemann solutions at cell boundaries. *Neat idea, Godunov!*
#
# Figure 3 illustrates a Riemann problem for the Euler equations, associated to the shock tube. The space-time plot shows the characteristic lines for the left-traveling expansion wave, and the right-traveling contact discontinuity and shock.
# 
#
# ####Figure 3. The shock tube: a Riemann problem for Euler's equations. Physical space (top) and $x, t$ space (bottom).
# We need to solve many Riemann problems from $t$ to $\Delta t$, one on each cell boundary (illustrated in Figure 4). The numerical flux on $x_{i+1/2}$ is
#
# \begin{equation}
# F_{i+1/2}= \frac{1}{\Delta t} \int_{t^n}^{t^{n+1}} F\left(e(x_{i+1/2},t) \right)\,dt
# \end{equation}
#
# To be able to solve each Riemann problem independently, they should not interact, which imposes a limit on $\Delta t$. Looking at Figure 4, you might conclude that we must require a CFL number of 1/2 to avoid interactions between the Riemann solutions, but the numerical flux above only depends on the state at $x_{i+1/2}$, so we're fine as long as the solution there is not affected by that at $x_{i-1/2}$—i.e., the CFL limit is really 1.
# 
#
# ####Figure 4. Riemann problems on each cell boundary.
# The Riemann solution, even though known analytically, can get quite hairy for non-linear systems of conservation laws (like the Euler equations). And we need as many Riemann solutions as there are finite-volume cell boundaries, and again at each time step! This gets really cumbersome.
#
# Godunov solved the Riemann problems exactly, but many after him proposed *approximate* Riemann solutions instead. We'll be calculating the full solution numerically, after all, so some controlled approximations can be made. You might imagine a simple approximation for the flux at a cell boundary that is just the average between the left and right values, for example: $\frac{1}{2}\left[F(e_L)+F(e_R)\right]$. But that leads to a central scheme and, on its own, is unstable. Adding a term proportional to the difference between left and right states, $e_R-e_L$, supplies artificial dissipation and gives stability (see van Leer et al., 1987).
#
# One formula for the numerical flux at $x_{i+1/2}$ called the Rusanov flux, a.k.a. Lax-Friedrichs flux, is given by
#
# \begin{equation}
# F_{i+1/2}= \frac{1}{2} \left[ F \left( e_L \right) + F \left( e_R \right) \right] - \frac{1}{2} \max \left|F'(e)\right| \left( e_R - e_L \right)
# \end{equation}
#
# where $F'(e)$ is the Jacobian of the flux function and $\max\left|F'(e)\right|$ is the local propagation speed of the fastest traveling wave. The Riemann solutions at each cell boundary do not interact if $\max|F'(e)|\leq\frac{\Delta x}{\Delta t}$, which leads to a flux formula we can now use:
#
# \begin{equation}
# F_{i+1/2}= \frac{1}{2} \left( F \left( e_{i} \right) + F \left( e_{i+1} \right) - \frac{\Delta x}{\Delta t} \left( e_{i+1} - e_{i} \right) \right)
# \end{equation}
# ### Let's try it!
# Let's apply Godunov's method to the [LWR traffic model](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_01_conservationLaw.ipynb). In [lesson 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_02_convectionSchemes.ipynb) we already wrote functions to set the initial conditions for a red-light problem and to compute the fluxes. To save us from writing this out again, we saved those functions into a Python file named `traffic.py` (found in the same directory of the course repository). Now, we can use those functions by importing them in the same way that we import NumPy or any other library. Like this:
from traffic import rho_red_light, computeF
#
# You've probably noticed that we have the habit of writing detailed explanations of what a function does after defining it, in comments. These comments are called *docstrings* and it is good practice to include them in all your functions. It can be very useful when loading a function that you aren't familiar with (or don't remember!), because the `help` command will print them out for you. Check it out:
help(rho_red_light)
# Now, we can write some code to set up our notebook environment, and set the calculation parameters, with the functions imported above readily available.
# %matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
from matplotlib import animation
from JSAnimation.IPython_display import display_animation
# +
#Basic initial condition parameters
#defining grid size, time steps, CFL condition, etc...
nx = 101
nt = 30
dx = 4.0/(nx-2)
rho_in = 5.
rho_max = 10.
V_max = 1.
x = numpy.linspace(0,4,nx-1)
rho = rho_red_light(nx-1, rho_max, rho_in)
# -
def animate(data):
x = numpy.linspace(0,4,nx-1)
y = data
line.set_data(x,y)
return line,
# The cells above are code that you are already familiar with from lesson 2. Below is a new function for applying Godunov's method with Lax-Friedrichs fluxes. Study it carefully.
def godunov(rho, nt, dt, dx, rho_max, V_max):
""" Computes the solution with the Godunov scheme using the Lax
Parameters
----------
rho : array of floats
Density at current time-step
nt : int
Number of time steps
dt : float
Time-step size
dx : float
Mesh spacing
rho_max: float
Maximum allowed car density
V_max : float
Speed limit
Returns
-------
rho_n : array of floats
Density after nt time steps at every point x
"""
#initialize our results array with dimensions nt by nx
rho_n = numpy.zeros((nt,len(rho)))
#copy the initial u array into each row of our new array
rho_n[:,:] = rho.copy()
#setup some temporary arrays
rho_plus = numpy.zeros_like(rho)
rho_minus = numpy.zeros_like(rho)
flux = numpy.zeros_like(rho)
for t in range(1,nt):
rho_plus[:-1] = rho[1:] # Can't do i+1/2 indices, so cell boundary
rho_minus = rho.copy() # arrays at index i are at location i+1/2
flux = 0.5 * (computeF(V_max, rho_max, rho_minus) +
computeF(V_max, rho_max, rho_plus) +
dx / dt * (rho_minus - rho_plus))
rho_n[t,1:-1] = rho[1:-1] + dt/dx*(flux[:-2]-flux[1:-1])
rho_n[t,0] = rho[0]
rho_n[t,-1] = rho[-1]
rho = rho_n[t].copy()
return rho_n
# We can run using a CFL of $1$, to start with, but you should experiment with different values.
# +
sigma = 1.0
dt = sigma*dx/V_max
rho = rho_red_light(nx-1, rho_max, rho_in) #make sure that u is set to our expected initial conditions
rho_n = godunov(rho, nt, dt, dx, rho_max, V_max)
# +
fig = pyplot.figure();
ax = pyplot.axes(xlim=(0,4),ylim=(4.5,11),xlabel=('Distance'),ylabel=('Traffic density'));
line, = ax.plot([],[],color='#003366', lw=2);
anim = animation.FuncAnimation(fig, animate, frames=rho_n, interval=50)
display_animation(anim, default_mode='once')
# -
# You'll see that the result is very similar to the original Lax-Friedrichs method, and with good reason: they're essentially the same! But this is only because we are using a uniform grid. In the finite-volume approach, using the integral form of the equations, we were free to use a spatially varying grid spacing, if we wanted to.
#
# The original Godunov method is first-order accurate, due to representing the conserved quantity by a piecewise-constant approximation. That is why you see considerable numerical diffusion in the solution. But Godunov's method laid the foundation for all finite-volume methods to follow and it was a milestone in numerical solutions of hyperbolic conservation laws. A whole industry developed inventing "high-resolution" methods that offer second-order accuracy and higher.
# ##### Dig deeper
# * Godunov's method works in problems having waves moving with positive or negative wave speeds. Try it on the green-light problem introduced in [lesson 1](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_01_conservationLaw.ipynb) using the initial condition containing waves traveling in both directions.
#
# * Investigate two or three different numerical flux schemes (you can start with van Leer et al., 1987, or Google for other references. Implement the different flux schemes and compare!
# ## MUSCL schemes
# Godunov's method is first-order accurate, which we already know is not appropriate for hyperbolic conservation laws, due to the high numerical diffusion. This poses particular difficulty near sharp gradients in the solution.
#
# To do better, we can replace the piecewise constant representation of the solution with a piecewise linear version (still discontinuous at the edges). This leads to the MUSCL scheme (for Monotonic Upstream-Centered Scheme for Conservation Laws), invented by van Leer (1979).
# ### Reconstruction in space
# The piecewise linear reconstruction consists of representing the solution inside each cell with a *straight line* (see Figure 5). Define the cell representation as follows:
#
# \begin{equation}
# e(x) = e_i + \sigma_i (x - x_i).
# \end{equation}
#
# where $\sigma_i$ is the *slope* of the approximation within the cell (to be defined), and $e_i$ is the Godunov cell average. The choice $\sigma_i=0$ gives Godunov's method.
#
# Standard central differencing would give
#
# \begin{equation}
# \sigma_i = \frac{e_{i+1} - e_{i-1}}{2 \Delta x}.
# \end{equation}
# <img src="./figures/cell_boundaries.svg">
#
# ####Figure 5. Piecewise linear approximation of the solution.
# But we saw with the results [in the second lesson](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_02_convectionSchemes.ipynb) that this can lead to oscillations near shocks. These [Gibbs oscillations](http://en.wikipedia.org/wiki/Gibbs_phenomenon) will always appear (according to [Godunov's theorem](http://en.wikipedia.org/wiki/Godunov's_theorem)) unless we use constant reconstruction. So we have to modify, or *limit* the slope, near shocks.
#
# The easiest way to limit is to compute one-sided slopes
#
# \begin{equation}
# \Delta e^- = \frac{e_i - e_{i-1}}{\Delta x}, \quad \Delta e^+ = \frac{e_{i+1} - e_{i}}{\Delta x},
# \end{equation}
# <img src="./figures/calc_sigma.svg">
# #### Figure 6. One-sided slopes
# Now build the *minmod* slope
#
# \begin{align}
# \sigma_i & = \text{minmod}(\Delta e^-, \Delta e^+) \\
# & = \begin{cases} \min(\Delta e^-, \Delta e^+) & \text{ if } \Delta e^-, \Delta e^+ > 0 \\
# \max(\Delta e^-, \Delta e^+) & \text{ if } \Delta e^-, \Delta e^+ < 0 \\
# 0 & \text{ if } \Delta e^- \cdot \Delta e^+ \leq 0
# \end{cases}
# \end{align}
#
# That is, use the *smallest* one-sided slope in magnitude, unless the slopes have different sign, in which cases it uses the constant reconstruction (i.e., Godunov's method).
#
# Once the *minmod* slope is calculated, we can use it to obtain the values at the interfaces between cells.
#
# \begin{align}
# e^{R}_{i-1/2} &= e_i - \sigma_i \frac{\Delta x}{2}\\
# e^{L}_{i+1/2} &= e_i + \sigma_i \frac{\Delta x}{2}
# \end{align}
#
# where $e^R$ and $e^L$ are the local interpolated values of the conserved quantity immediately to the right and left of the cell boundary, respectively.
# ##### Index headache
# Notice that for the cell with index $i$, we calculate $e^R_{i-1/2}$ and $e^L_{i+1/2}$. Look at Figure 5: those are the two local values of the solution are at opposite cell boundaries.
#
# However, when we calculate the local flux at the cell boundaries, we use the local solution values on either side of that cell boundary. That is:
#
# \begin{equation}
# F_{i+1/2} = f(e^L_{i+1/2}, e^R_{i+1/2})
# \end{equation}
#
# You can calculate two flux vectors; one for the right-boundary values and one for the left-boundary values. Be careful that you know which boundary value a given index in these two vectors might refer to!
#
# _____
# Here is a Python function implementing minmod.
def minmod(e, dx):
"""
Compute the minmod approximation to the slope
Parameters
----------
e : array of float
input data
dx : float
spacestep
Returns
-------
sigma : array of float
minmod slope
"""
sigma = numpy.zeros_like(e)
de_minus = numpy.ones_like(e)
de_plus = numpy.ones_like(e)
de_minus[1:] = (e[1:] - e[:-1])/dx
de_plus[:-1] = (e[1:] - e[:-1])/dx
# The following is inefficient but easy to read
for i in range(1, len(e)-1):
if (de_minus[i] * de_plus[i] < 0.0):
sigma[i] = 0.0
elif (numpy.abs(de_minus[i]) < numpy.abs(de_plus[i])):
sigma[i] = de_minus[i]
else:
sigma[i] = de_plus[i]
return sigma
# ### Evolution in time
# Since we are aiming for second-order accuracy in space, we might as well try for second-order in time, as well. We need a method to evolve the *ordinary* differential equation forwards in time:
#
# \begin{equation}
# \frac{\partial}{\partial t} e_i + \frac{1}{\Delta x} \left[ F \left( x_{i+1/2}, t \right) - F \left( x_{i - 1 / 2}, t \right) \right] = 0
# \end{equation}
#
# A second-order Runge-Kutta method with special characteristics (due to Shu & Osher, 1988) gives the following scheme:
#
# \begin{align}
# e^*_i & = e^n_i + \frac{\Delta t}{\Delta x}\left( F^n_{i-1/2} - F^n_{i+1/2} \right) \\
# e^{n+1}_i & = \frac{1}{2} e^n_i + \frac{1}{2}\left( e^*_i + \frac{\Delta t}{\Delta x}\left( F^*_{i-1/2} - F^*_{i+1/2} \right) \right)
# \end{align}
#
# Recall that the Rusanov flux is defined as
#
# $$
# F_{i+1/2}= \frac{1}{2} \left[ F \left( e_L \right) + F \left( e_R \right) \right] - \frac{1}{2} \max \left|F'(e)\right| \left( e_R - e_L \right)
# $$
#
# Armed with the interpolated values of $e$ at the cell boundaries we can generate a more accurate Rusanov flux. At cell boundary $i+1/2$, for example, this is:
#
# \begin{equation}
# F_{i+1/2} = \frac{1}{2} \left( F \left( e^L_{i+1/2} \right) + F \left( e^R_{i+1/2} \right) - \frac{\Delta x}{\Delta t} \left( e^R_{i+1/2} - e^L_{i+1/2} \right) \right)
# \end{equation}
#
# Now we are ready to try some MUSCL!
def muscl(rho, nt, dt, dx, rho_max, V_max):
""" Computes the solution with the MUSCL scheme using the Lax-Friedrichs flux,
RK2 in time and minmod slope limiting.
Parameters
----------
rho : array of floats
Density at current time-step
nt : int
Number of time steps
dt : float
Time-step size
dx : float
Mesh spacing
rho_max: float
Maximum allowed car density
V_max : float
Speed limit
Returns
-------
rho_n : array of floats
Density after nt time steps at every point x
"""
#initialize our results array with dimensions nt by nx
rho_n = numpy.zeros((nt,len(rho)))
#copy the initial u array into each row of our new array
rho_n[:,:] = rho.copy()
#setup some temporary arrays
rho_plus = numpy.zeros_like(rho)
rho_minus = numpy.zeros_like(rho)
flux = numpy.zeros_like(rho)
rho_star = numpy.zeros_like(rho)
for t in range(1,nt):
sigma = minmod(rho,dx) #calculate minmod slope
#reconstruct values at cell boundaries
rho_left = rho + sigma*dx/2.
rho_right = rho - sigma*dx/2.
flux_left = computeF(V_max, rho_max, rho_left)
flux_right = computeF(V_max, rho_max, rho_right)
#flux i = i + 1/2
flux[:-1] = 0.5 * (flux_right[1:] + flux_left[:-1] - dx/dt *\
(rho_right[1:] - rho_left[:-1] ))
#rk2 step 1
rho_star[1:-1] = rho[1:-1] + dt/dx * (flux[:-2] - flux[1:-1])
rho_star[0] = rho[0]
rho_star[-1] = rho[-1]
sigma = minmod(rho_star,dx) #calculate minmod slope
#reconstruct values at cell boundaries
rho_left = rho_star + sigma*dx/2.
rho_right = rho_star - sigma*dx/2.
flux_left = computeF(V_max, rho_max, rho_left)
flux_right = computeF(V_max, rho_max, rho_right)
flux[:-1] = 0.5 * (flux_right[1:] + flux_left[:-1] - dx/dt *\
(rho_right[1:] - rho_left[:-1] ))
rho_n[t,1:-1] = .5 * (rho[1:-1] + rho_star[1:-1] + dt/dx * (flux[:-2] - flux[1:-1]))
rho_n[t,0] = rho[0]
rho_n[t,-1] = rho[-1]
rho = rho_n[t].copy()
return rho_n
sigma = 1.
dt = sigma*dx/V_max
rho = rho_red_light(nx-1, rho_max, rho_in) #make sure that u is set to our expected initial conditions
rho_n = muscl(rho, nt, dt, dx, rho_max, V_max)
# +
fig = pyplot.figure();
ax = pyplot.axes(xlim=(0,4),ylim=(4.5,11),xlabel=('Distance'),ylabel=('Traffic density'));
line, = ax.plot([],[],color='#003366', lw=2);
anim = animation.FuncAnimation(fig, animate, frames=rho_n, interval=50)
display_animation(anim, default_mode='once')
# -
# This MUSCL scheme does not show any of the oscillations you might see with MacCormack or Lax-Wendroff, but the features are not as sharp. Using the _minmod_ slopes led to some smearing of the shock, which motivated many researchers to investigate other options. Bucketloads of so-called _shock-capturing_ schemes exist and whole books are written on this topic. Some people dedicate their lives to developing numerical methods for hyperbolic equations!
# ##### Challenge task
# * Go back to Sod! Calculate the shock-tube problem using the MUSCL scheme and compare with your previous results. What do think?
# ## References
# * <NAME>. (1959), "A difference scheme for numerical computation of discontinuous solutions of equations of fluid dynamics," _Math. Sbornik_, Vol. 47, pp. 271–306.
#
# * <NAME> (1979), "Towards the ultimate conservative difference scheme, V. A second-order sequel to Godunov's method," _J. Comput. Phys._, Vol. 32, pp. 101–136
#
# * <NAME>., <NAME>, <NAME>, <NAME> (1987). "A comparison of numerical flux formulas for the Euler and Navier-Stokes equations," AIAA paper 87-1104 // [PDF from umich.edu](http://deepblue.lib.umich.edu/bitstream/handle/2027.42/76365/AIAA-1987-1104-891.pdf), checked 11/01/14.
#
# * <NAME> <NAME> (1988). "Efficient implementation of essentially non-oscillatory shock-capturing schemes," _J. Comput. Phys._, Vol. 77, pp. 439–471 // [PDF from NASA Tech. Report server](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19870013797.pdf)
# ---
#
# ######The cell below loads the style of the notebook.
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
| lessons/03_wave/03_04_MUSCL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TEXT MINING AND NATURAL LANGUAGE PROCESSING USING NLTK
# ## Text Mining:
# ### Text Analysis or Text Mining is the process of deriving meaningful data from natural language text. This usually involves the process of structuring the input text (since most of text data aroung us in unstructured) deriving a pattern, interpreting the data an processing it for use
#
# ## Natural Language Processing:
# ### As text mining is the process of deriving high quality info from the text. The overall goal is to turn this text into data for analysis via application of NLP. We use NLP in various applications like sentiment analysis, speech recognition, chatbot, machine translation (eg: google translate), spell checking, keyword search, info extraction, advertisement matching
# ## NLP: Components
# ### NLP has two components Natural language understanding and natural language generation. NLU involves processes like mapping input to useful representation, analysing different aspects of the language, whereas NLG involves processes like text planning, sentence planning, text realization
#
# ## NLP: Ambiguity
# ### Lexical Ambiguity eg: The fisherman is going to tha bank. Here bank can be either a real bank where we borrow money, or a river bank.
# ### Syntactical Ambiguity: Where the sentence may have a double meaning. eg: Visiting relatives can be boring.
# ### Referential Ambiguity: The boy told his father of the thief. He was very upset. So he can be either the boy or the father or the thief.
# ________________
# ### TOKENIZATION:
# #### Tokenization is the first step in NLP. It involves processes like breaking a complex sentence into words, understanding the importance of each of the words with respect to the sentence & to produce a structural description of the language
import os # We can find info about OS related processes like my location, process id, current working directory and so on
import nltk
import nltk.corpus
# +
# nltk.download() # It downloads all the nltk dataset one of which is the corpus dataset
# nltk.download('punkt')
# -
nltk.corpus.gutenberg.fileids() # Shows all the file ids in the gutenberg file of the corpus dataset
# type(nltk.corpus.gutenberg)
hamlet = nltk.corpus.gutenberg.words('shakespeare-hamlet.txt')
hamlet # This shows all the words present in the file shakespeare-hamlet
# Lets take a look at the first 500 words present in this file
#for words in hamlet[:500]:
# print(words, sep=' ', end=' ')
type(hamlet) # So this is not in a list format so we have to convert it into strings
string = []
for word in hamlet[:500]:
string.append(word)
string
type(string) # This converts corpus type into list. Now lets convert it into string type by join() function
# string is a list in a list
# join function is used to join list
string = (" ".join(string))
#type(string)
string
from nltk.tokenize import word_tokenize # This is to tokenize the above file
all_tokens = word_tokenize(string)
all_tokens # willshow all my tokenized words
len(all_tokens) # includes all the special characters and the words
type(all_tokens)
# +
# Now to find the frequency of the various words in this string we use
# The frequency distinct function freqdist() which falls under the nltk.probability
from nltk.probability import FreqDist
#fdist = FreqDist(all_tokens)
type(fdist)
for word in all_tokens:
word = word.lower()
#print(word)
#all_tokens
fdist = FreqDist(all_tokens)
fdist
# fdist['the']
fdist_top10 = fdist.most_common(10) # 10 most common word
fdist_top10
# All of these words are of no importance like of, the, - and ,
# +
# _-------------------- T E S T I N G -------------------------
#sent = "Hello this is me"
#tokens = word_tokenize(sent)
#type(tokens)
#fdist = FreqDist(tokens)
#for word in tokens:
#fdist[word.lower()]+=1
#fdist
# -
# ##### Blank Tokenizer
import nltk
from nltk.tokenize import blankline_tokenize
token_blank = blankline_tokenize(string)
# token_blank
# op here is 1. This represents how many paragraphs we have and how many paragraphs are separated with a new line
# Sometimes it may seem like there is just one paragraph but it may be not
# _______________
# ### Bigrams: These are tokens of two consequetive written words
# ### Trigrams: tokens of 3 consequetive written words
# ### Ngrams: Tokens of any number of words
from nltk.util import bigrams, trigrams, ngrams
string = "The best and the most beautiful things in the world cannot be seen or teven touched, they must be felt with the heart"
gram_token = nltk.word_tokenize(string)
gram_token # this is a list
# so we use the list functions to show the bigrams
bigram = list(nltk.bigrams(gram_token))
bigram # tokens are in a pair form
# similarly trigrams can be formed
# for ngrams use nltk.ngrams(gram_token, n) where n is the number
# _______________________________
# ## Stemming
# #### Now we have our tokens we can wordk on them. Stemming is a process which normalizes word into its base form or root form.
# eg: affection, affect, affectionate, affected all have the root word affect
# #### This is not the best word form, as in many cases stemming does not word
# ##### PORTER STEMMER
from nltk.stem import PorterStemmer
prt = PorterStemmer()
prt.stem("having")
words = ['give', 'given', 'gave', 'giving']
for word in words:
print(word + ": " + prt.stem(word))
# ##### LANCASTER STEMMER (More Powerful than Porter stemmer)
from nltk.stem import LancasterStemmer
lcs = LancasterStemmer()
for word in words:
print(word + ": " + lcs.stem(word))
# ##### There is another stemming algo known as the Snowburst stemmer where we need to specify the language as well
# _________________________
# ### Lametization
# ##### It is a process which groups together different infected forms of a word known as lemma. It is somehow similar to stemming, but unlike stemming where sometimes the root word formed may not be a proper legible word like in the above example of giv, lemmatization always gives a proper word. For this lemmatization requires a detailed dictionary. Eg: lemmatization on the words gone, going will be go
#
# +
# nltk.download('wordnet') # This wordnet i=s the reference dictionary
from nltk.stem import wordnet
from nltk.stem import WordNetLemmatizer
lem = WordNetLemmatizer()
# -
lem.lemmatize('corpora')
prt.stem('corpora')
for word in words:
print(word + ": " + lem.lemmatize(word))
# +
# The above resul is due to the fact that the lemmatizer function has assumed that all of them are noun and not verbs
# Since a POS tag was not provided.
# POS stands for parts of speech which means that it tells whether the word is a noun or a verb or an adjective
# -
# ___________
# ### POS: Part of speech helps us in identifying whether a given word in a sentence is a noun or a verb or an adjective or determinant and so on.
# #### There is list of all the POS that are present and its description on the internet
# +
# nltk.download('averaged_perceptron_tagger')
# +
sent = "Tim is a natural when it comes to drawing"
sent_token = word_tokenize(sent)
for token in sent_token:
print(nltk.pos_tag([token]))
# -
# ___
# ### STOP WORDS
# #### There are some words in the english language which are often used in sentence making, such as an, a, the , is etc. These words do not play any role in NLP even though they have very high occurance. These words are known as Stop Words. Nltk has its own library of Stop Words
nltk.download('stopwords')
from nltk.corpus import stopwords
stopwords.words('english')
# len(stopwords.words('english'))
# ________________________
# ## Named Entity Relation
# ##### Named entities can be anything such as name of a person, name of a place, location name, currency name, organization name, movie name etc. Recognition of such words fall under Named Entity Relation
#nltk.download('maxent_ne_chunker')
nltk.download('words')
from nltk import ne_chunk ## This imports the chunk of dictionary which acts like a look up table
sent = 'The US President stays in the White House'
sent_tokens = word_tokenize(sent)
sent_tags = nltk.pos_tag(sent_tokens)
sent_tags # Gives the pos of the differnt tokens
sent_ner = ne_chunk(sent_tags)
sent_ner
# ________________________
# ## Syntax
# ### The term syntax refers to a set of principles, rules and processes which governs the structure of a given sentence in a language. Also refers to the study of such principles
# ### Syntax tree
# ###### A syntax tree refers to a tree representation of syntactic structure of sentences or strings. A sentence has three phrases usually. The noun phrase and the prepositions phrase. Noun has the article such as A, an , the which comes before the noun. Similarly in the preposition phrase we have the preposition and the noun phrase and this chain continues.
# ##### Ghostscript can be used for doing such tedious works. Ghostscripts are used to render these syntax trees in our notebook. Ghostscript is a rendering engine
# __________________________________
# ## Chunking
# ### Chunking is a part of analysing the sentence structure. It basicaly means picking up individual pieces of information and grouping then into bigger pieces. These bigger pieces are also known as chunks
# Eg: We caught the black panther
# Here We is a preposition, caught is a verb, the is a determinant, black is an adjective and panther is a noun.
# So chunking creates chunks of tokens such that "We" which is a noun phrase of a syntax tree and "the black panther" which is also a NP, are grouped together into chunk
sent = 'The big cat ate the little mouse who was after fresh cheese'
sent_tokens = nltk.pos_tag(word_tokenize(sent))
sent_tokens
# ### The rule states that whenever the chunk finds an optional determiner (DT) followed by any number of adjectives (JJ) and then a noun (NN) then the Noun Phrase(NP) chunk should be formed.
grammer_np = r"NP: {<DT>?<JJ>*<NN>}" #I dont know what happened here
chunk_parser = nltk.RegexpParser(grammer_np) # This is one of the chunk parsing module
chunk_result = chunk_parser.parse(sent_tokens)
chunk_result # The output of this will give an error, bcoz it is trying to form a syntax tree using the ghostscript.
# so for this to work we have to set the ghost script to the path enviromnet
# Still a tree will be formed in descriptive manner which can be used to see the chunking formed
# _______________
# # Using all the above concept to build a ML classifier on the movie reviews from NLTK library
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
print(os.listdir(nltk.data.find('corpora'))) # It has a movie review.zip file as well
from nltk.corpus import movie_reviews
type(movie_reviews)
type(movie_reviews.categories())
movie_reviews.categories()
# +
# gutenberg is a plain text corpus hence it has no categories
# Whereas movie_reviews is a LazyCorpus and has categorical values
# http://www.nltk.org/api/nltk.corpus.reader.html#nltk.corpus.reader.plaintext.PlaintextCorpusReader.paras
# Categorical values is for categorizedplaintextcorpus and lazycorpus
# fielids is for plaintextcorpusrreader
# -
print(len(movie_reviews.fileids('pos')))
print(movie_reviews.fileids('pos')) # There are 1000 positive file reviews
# Lets take one such text file from the positive review and pass it into the tokenizer
pos_rev = nltk.corpus.movie_reviews.words('pos/cv029_18643.txt')
pos_rev[:500]
type(pos_rev)
# Converting it into a list form from corpus form
string = []
for word in pos_rev:
string.append(word)
type(string)
# +
# Converting it into a string format from a list format
string_new # we find that the words are not separated by spaces
# So lets join by adding spaces between them
string_new = " ".join(word for word in string)
# string_new
# Now lets replace the punctuators in the file
punctuators = '''!()-[]{};:'"\,<>./?@#$%^&*_~''' + "\'"
string_pos_nopunc = ""
for char in string_new:
if char not in punctuators:
string_pos_nopunc = string_pos_nopunc + char
string_pos_nopunc
# To remove extra spaces
import re
re.sub(' +', ' ', string_pos_nopunc)
# +
# Now lets do the same thing with the negative reviews
print(movie_reviews.fileids('neg'))
# -
neg_rev = nltk.corpus.movie_reviews.words('neg/cv000_29416.txt')
neg_rev[:500]
type(neg_rev)
string = []
for word in neg_rev:
string.append(word)
type(string)
# +
# Converting it into a string format from a list format
string_new # we find that the words are not separated by spaces
# So lets join by adding spaces between them
string_new = " ".join(word for word in string)
# string_new
# Now lets replace the punctuators in the file
punctuators = '''!()-[]{};:'"\,<>./?@#$%^&*_~''' + "\'"
string_neg_nopunc = ""
for char in string_new:
if char not in punctuators:
string_neg_nopunc = string_neg_nopunc + char
string_neg_nopunc
# To remove extra spaces
import re
re.sub(' +', ' ', string_neg_nopunc)
# -
len(string_pos_nopunc)
| Text Mining and NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Conditional Probability
#
# - Conditional probability has many applications, we learn it by mentioning its application in text analysis
#
# - Assume this small dataset is given:
#
# <img src="spam_ham_data_set.png" width="600" height="600">
# ## Question: What is the probability that an email be spam? What is the probability that an email be ham?
#
# - $P(spam) = ?$
#
# - $P(ham) = ?$
# ## Question: We know an email is spam, what is the probability that password be a word in it? (What is the frequency of password in a spam email?)
#
# - Hint: Create the dictionary of spam where its key would be unique words in spam emails and the value shows the occurance of that word
spam = {
"password": 2,
"review": 1,
"send": 3,
"us": 3,
"your": 3,
"account": 1
}
# $P(password \mid spam) = 2/(2+1+3+3+3+1) = 2/13$
# or
p_password_given_spam = spam['password']/sum(spam.values())
print(p_password_given_spam)
# ## Question: We know an email is ham, what is the probability that password be a word in it? (What is the frequency of password in a ham email?)
#
# - Hint: Create the dictionary of ham where its key would be unique words in spam emails and the value shows the occurance of that word
ham = {
"password": 1,
"review": 2,
"send": 1,
"us": 1,
"your": 2,
"account": 0
}
# $P(password \mid ham) = 1/(1+2+1+1+1+0) = 1/6$
# or
p_password_given_ham = ham['password']/sum(ham.values())
print(p_password_given_ham)
# ## Question: Assume we have seen password in an email, what is the probability that the email be spam?
#
# - $P(spam \mid password) = ?$
#
# - Hint: Use Bayes' rule:
#
# $P(spam \mid password) = (P(password \mid spam) P(spam))/ P(password)$
#
# $P(password) = P(password \mid spam) P(spam) + P(password \mid ham) P(ham)$
#
p_spam = 4/6
p_ham = 2/6
p_password = <PASSWORD> + p_password_given_ham*p_ham
print(p_password)
p_spam_given_password = <PASSWORD>
print(p_spam_given_password)
# ## Activity: Do the above computation for each word by writing code
p_spam = 4/6
p_ham = 2/6
ls1 = []
ls2 = []
for i in spam:
print(i)
p_word_given_spam = # TODO
p_word_given_ham = # TODO
# obtain the probability of each word by assuming the email is spam
# obtain the probability of each word by assuming the email is ham
#TODO
# obtain the probability that for a seen word it belongs to spam email
# obtain the probability that for a seen word it belongs to ham email
#TODO
# ## Quiz: Compute the expected value of a fair dice
#
# By Definition, the expected value of random events (a random variable) like rolling a dice is computed as:
#
# $E(X) = \sum_{i=1}^{6}i * P(dice = i)$
#
# <img src="dice.jpg" width="100" height="100">
#
# 1- For a fair dice,
#
# compute the probability that when roll the dice then 1 apprears (P(dice = 1)),
#
# compute the probability that when roll the dice then 2 apprears (P(dice = 2)),
#
# .
#
# .
#
# .
#
# compute the probability that when roll the dice then 2 apprears (P(dice = 6))
#
# 2- Compute $E(X)$ from the above steps.
# ### Answer:
#
# The expected value for a fair dice is:
#
# $E(X) = (1*1/6) + (2*1/6) + (3*1/6)+ (4*1/6) + (5*1/6) + (6*1/6)$
#
# $E(X) = 3.5$
#
# We can show that E(X) is the mean of the random variable
import numpy as np
# lets roll the dice 1000 times
dice = np.random.randint(low=1.0, high=7.0, size=1000)
print(dice)
# Compute the mean of dice list
print(np.mean(dice))
print(sum(dice)/len(dice))
| Notebooks/Conditional_Probability/.ipynb_checkpoints/Conditional_probability-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py37
# language: python
# name: py37
# ---
import pandas
food_info = pandas.read_csv("sample.csv",encoding="gbk")
print(type(food_info))
print(food_info.dtypes)
food_info = pandas.read_csv("utf8_sample.csv")
print(type(food_info))
print(food_info.dtypes)
food_info = pandas.read_csv("utf8_测试故障工单.csv")
print(food_info)
print(type(food_info))
print(food_info.dtypes)
| data/根因分析/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lung Nodule Detector Model
#
# ## My first pass using UNet-like architecture [Link](https://www.researchgate.net/figure/Convolutional-neural-network-CNN-architecture-based-on-UNET-Ronneberger-et-al_fig2_323597886)
#
# This model is trained using LUNA 16 dataset. The preprocessing procedure will be uploaded at a later date
#
# I used Deep-Learning-with-PyTorch book ( <NAME> and <NAME>) as reference, but I used different data preprocessing, data augmentation, model architecture, a learning rate schedule, Dice loss, as well as weighted BCE Loss.
#
# - Initialize model + data loader
# - data augmentation
# - visualizing data together with labeled data. (plot it out or use ipywidget)
# - Pass the batches into model
# - calculate train loss + back propagation
# - calculate validation loss + record params
# - save weights and print out all info..
# - Visualize prediction compared to labels.
# - Get the candidates of training data that the model is doing poorly against (False positive candidates) and to later train on that data to reduce false positive rate.
# Reset VRAM
from numba import cuda
device = cuda.get_current_device()
device.reset()
# +
# I use Line messenger to send me loss updates while working on my full time job!
# from parinya import LINE
# line = LINE("")
# -
import torch
import torch.nn as nn
from torchvision import datasets
from torch.utils.data import DataLoader
import os
import glob
import numpy as np
import time
import datetime
from ipywidgets import interact
import matplotlib.pyplot as plt
from skimage.transform import rotate, resize
from random import randint
import shutil
# how to define a dataloader https://www.kaggle.com/dhananjay3/image-segmentation-from-scratch-in-pytorch
#
# - Note to self: glob.glob is important else will not get all the files
# - Note: the torchvision augmentation doesn't support ndarray, so I tried using PIL - Image.fromarray(image) - still causing error. I decided to use skimage instead
class DataLoaderImg(torch.utils.data.Dataset):
def __init__(self, folder_path="train3", random_rotation=None, get_path=False):
super(DataLoaderImg, self).__init__()
self.img_files = glob.glob(os.path.join(folder_path, '*.npy'))
self.random_rotation = random_rotation
self.get_path = get_path
def __getitem__(self, index):
img_path = self.img_files[index]
data = np.load(img_path)
image = data[0]
label = data[1]
if self.random_rotation:
rand_angle = (randint(-10,10))
image = rotate(image, rand_angle, resize=True)
label = rotate(label, rand_angle, resize=True)
image = resize(image, (32,32,32)) #เนื่องจากพอ rotate แล้ว shape จะเปลี่ยน
label = resize(label, (32,32,32))
if self.get_path:
return torch.from_numpy(image).float().unsqueeze(0), torch.from_numpy(label).float().unsqueeze(0), img_path
return torch.from_numpy(image).float().unsqueeze(0), torch.from_numpy(label).float().unsqueeze(0)
def __len__(self):
return len(self.img_files)
train_dataset = DataLoaderImg(folder_path = "train5", random_rotation = False)
valid_dataset = DataLoaderImg(folder_path = "valid5", random_rotation = False)
# +
batch_size = 32
num_workers = 0
train_loader = DataLoader(train_dataset, batch_size = batch_size, shuffle=True, num_workers=num_workers)
valid_loader = DataLoader(valid_dataset, batch_size = batch_size, shuffle=True, num_workers=num_workers)
# -
len(train_loader)
# +
# for (i,l) in train_loader:
# print(i.shape)
# -
# # Visualizing data with labeling
# find labeled slices, return index of the middle slice of the nodule (assumed to be the most visible).
def find_marked(display_label):
layer_i = []
for i in range(display_label.shape[0]):
test_mask = display_label[i, :, :]
if np.sum(test_mask)>0:
layer_i.append(i)
if len(layer_i)>0:
return layer_i[int(len(layer_i)/2)]
else:
return int(display_label.shape[0]/2)
# +
def plot_ct_scan_with_labels(loader, plot_size=50, cmap=plt.cm.gray):
"""accepts train_loader"""
data = next(iter(loader))
paths = data[2]
display_labels = data[1].squeeze().detach().cpu().numpy()
display_images = data[0].squeeze().detach().cpu().numpy() #1 batch of display_image 32,32,32,32
f, plots = plt.subplots(int(display_images[0].shape[0] / 4) , 4, figsize=(plot_size, plot_size))
f.suptitle('Label', fontsize=50, y=0.92)
for img in range(0, display_images.shape[0]): #batch_size
each_path = paths[img]
each_label = display_labels[img]
each_image = display_images[img]
marked = find_marked(each_label)
print(each_path)
plots[int((img / 4)), int(img % 4)].imshow(each_image[marked,:,:], cmap="gray")
label = np.ma.masked_where((each_label < 0.05), each_label)
plots[int((img / 4)), int(img % 4)].imshow(label[marked, :, :],cmap="hsv", alpha=0.25)
plots[int((img / 4)), int(img % 4)].axis('off')
plots[int((img / 4)), int(img % 4)].set_title(str(each_path))
if False: #set to True, to enable plotting..
display_dataset = DataLoaderImg(folder_path = "train5",random_rotation = True, get_path=True)
display_loader = DataLoader(display_dataset, batch_size = 32, shuffle=True, num_workers=0)
plot_ct_scan_with_labels (display_loader)
# +
### visualize false candidates in a widget.
# display_dataset = DataLoaderImg(folder_path = "false_cand",random_rotation = True, get_path=True)
# display_loader = DataLoader(display_dataset, batch_size = 32, shuffle=True, num_workers=0)
# data = next(iter(display_loader))
# index = 0 #choose 0-31
# +
# print(data[2][index])
# display_label = data[1][index].squeeze().detach().cpu().numpy()
# display_image = data[0][index].squeeze().detach().cpu().numpy() #1 batch
# def explore_3dimage(layer=find_marked(display_label)):
# plt.figure(figsize=(10, 5))
# plt.imshow(display_image[layer, :, :], cmap='gray')
# label = np.ma.masked_where((display_label < 0.05), display_label)
# plt.imshow(label[layer, :, :], cmap="hsv", alpha=0.1); #label อยู่ตรงนี้นะ
# plt.title('Label', fontsize=20)
# plt.axis('off')
# return layer
# interact(explore_3dimage, layer=(0, display_image.shape[0]))
# index += 1
# -
# This is my first project outside of a course
# This time I'll be implementing 3d-conv in pytorch using U-net-like architecture
#
# https://www.researchgate.net/figure/Convolutional-neural-network-CNN-architecture-based-on-UNET-Ronneberger-et-al_fig2_323597886
#
# note to self:
# - If you didn't add super init -> will cause error model called before init (when defining LunaModel class)! (it creates a proxy for that subclass?)
train_on_gpu = torch.cuda.is_available()
device = torch.device("cuda" if train_on_gpu else "cpu")
print('GPU_available :',train_on_gpu)
print(torch.cuda.get_device_name(torch.cuda.current_device()))
# +
class LunaBlockDown(nn.Module):
def __init__(self, in_channels, conv_channels):
super(LunaBlockDown, self).__init__()
self.conv1 = nn.Conv3d(
in_channels, conv_channels, kernel_size=3, padding=1, bias=True)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv3d(
conv_channels, conv_channels, kernel_size=3, padding=1, bias=True)
self.relu2 = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool3d(2,2)
self.batchnorm = nn.BatchNorm3d(conv_channels)
def forward(self, input_batch):
block_out = self.conv1(input_batch)
block_out = self.relu1(block_out)
block_out = self.conv2(block_out)
block_out = self.batchnorm(block_out)
block_out = self.relu2(block_out)
return self.maxpool(block_out), block_out
class LunaBlockUp(nn.Module):
def __init__(self, in_channels, conv_channels):
super(LunaBlockUp, self).__init__()
self.t_conv_layer = nn.ConvTranspose3d(
in_channels, in_channels,kernel_size =2, stride=2, padding=0, bias = False)
self.conv1 = nn.Conv3d(
in_channels, conv_channels, kernel_size=3, padding=1, bias=True)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv3d(
conv_channels, conv_channels, kernel_size=3, padding=1, bias=True)
self.relu2 = nn.ReLU(inplace=True)
self.batchnorm = nn.BatchNorm3d(conv_channels)
# output size: (32-1)*s -2p+k = 64
def forward(self, input_batch):
block_out = self.t_conv_layer(input_batch)
block_out = self.conv1(block_out)
block_out = self.batchnorm(block_out)
block_out = self.relu1(block_out)
block_out = self.conv2(block_out)
block_out = self.batchnorm(block_out)
block_out = self.relu2(block_out)
return block_out
# +
class LunaModel(nn.Module):
def __init__(self, in_channels=1, conv_channels=32):
super(LunaModel, self).__init__()
self.block1 = LunaBlockDown(in_channels, conv_channels)
self.block2 = LunaBlockDown(conv_channels, conv_channels * 2)
self.block3 = LunaBlockDown(conv_channels * 2,conv_channels * 4)
self.block4 = LunaBlockUp(conv_channels * 4, conv_channels * 4)
self.block5 = LunaBlockUp(conv_channels * 8, conv_channels*2)
self.block6 = LunaBlockUp(conv_channels*4, conv_channels)
self.conv1 = nn.Conv3d(
conv_channels*2, conv_channels, kernel_size=3, padding=1, bias=True)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv3d(
conv_channels, conv_channels, kernel_size=3, padding=1, bias=True)
self.relu2 = nn.ReLU(inplace=True)
self.conv3 = nn.Conv3d(
conv_channels, in_channels, kernel_size=1, padding=0, bias=True)
self.batchnorm = nn.BatchNorm3d(conv_channels)
self.dropout = nn.Dropout(p=0.2, inplace=True)
def forward(self, input_batch):
block_out, layer1 = self.block1(input_batch) #128x128x20
block_out, layer2 = self.block2(block_out) #64x64x10
block_out, layer3 = self.block3(block_out) #32x32x5
block_out = self.block4(block_out)
block_out = torch.cat((block_out, layer3), dim=1)
block_out = self.dropout(block_out)
block_out = self.block5(block_out)
block_out = torch.cat((block_out, layer2), dim=1)
block_out = self.dropout(block_out)
block_out = self.block6(block_out)
block_out = torch.cat((block_out, layer1), dim=1)
block_out = self.dropout(block_out)
block_out = self.conv1(block_out)
block_out = self.batchnorm(block_out)
block_out = self.relu1(block_out)
block_out = self.conv2(block_out)
block_out = self.batchnorm(block_out)
block_out = self.relu2(block_out)
block_out = self.conv3(block_out)
# no batchnorm for last layer
# return torch.sigmoid(block_out) #if use dice_loss
return block_out #if use BCEWithLogitsLoss
def _init_weights(self):
for m in self.modules():
if type(m) in {
nn.Conv3d,
nn.ConvTranspose3d
}:nn.init.kaiming_normal_(
m.weight.data, a=0, mode='fan_out', nonlinearity='relu',
)
if m.bias is not None:
fan_in, fan_out = nn.init._calculate_fan_in_and_fan_out(m.weight.data)
bound = 1 / math.sqrt(fan_out)
nn.init.normal_(m.bias, -bound, bound)
model = LunaModel()
# move everything to gpu
if train_on_gpu:
model.cuda()
model
# -
pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(pytorch_total_params)
# +
# Soft dice loss https://www.jeremyjordan.me/semantic-segmentation/
# +
import torch.optim as optim
from torch.utils.data.sampler import SubsetRandomSampler
from torch.autograd import Variable
# Note: soft dice loss not dice loss
class diceloss(torch.nn.Module):
"""
Compute mean dice coefficient over all abnormality classes.
Args:
y_true (tensor): shape: (num_classes, x_dim, y_dim, z_dim)
y_pred (tensor): tensor of predictions for all classes.
shape: (num_classes, x_dim, y_dim, z_dim)
axis (tuple): spatial axes to sum over when computing numerator and
denominator of dice coefficient.
Hint: pass this as the 'axis' argument to the K.sum
and K.mean functions.
epsilon (float): small constant add to numerator and denominator to
avoid divide by 0 errors.
Returns:
dice_coefficient (float): computed value of dice coefficient.
"""
def init(self):
super(diceLoss, self).init()
def forward(self,y_true, y_pred, epsilon=0.00001):
axis = tuple(range(1, len(y_pred.shape)-1))
dice_numerator = torch.sum(y_pred*y_true,axis=axis)*2 + epsilon
dice_denominator = torch.sum(y_true**2,axis=axis) + torch.sum(y_pred**2,axis=axis) + epsilon
dice_loss = 1-torch.mean(dice_numerator/dice_denominator)
return dice_loss
# -
# compute confusion matrix
def compute_CM (pred, label, threshold=0.5):
"""Only accepts numpy.."""
threshold = 0.5
pred[pred >= threshold] = 1.0
pred[pred < threshold] = 0.0
pred = pred.detach().cpu().numpy()
label = label.detach().cpu().numpy()
tp = np.sum((pred == 1) & (label==1))
tn = np.sum((pred == 0) & (label==0))
fp = np.sum((pred == 1) & (label==0))
fn = np.sum((pred == 0) & (label==1))
accuracy = ((tp+tn)/(tp+tn+fp+fn))
sensitivity = ((tp/(tp+fn)))
specificity = (tn/(tn+fp))
ppv = (tp/(tp+fp))
return round(accuracy*100, 2),round(sensitivity*100, 2),round(specificity*100, 2),round(ppv*100, 2)
# +
# Rectified Adam https://www.kaggle.com/dhananjay3/image-segmentation-from-scratch-in-pytorch
# RAdam better than Adam? https://medium.com/@lessw/new-state-of-the-art-ai-optimizer-rectified-adam-radam-5d854730807b
# note to self learning rate decay of 0.94 is too fast since my dataset is small (10 epochs = 0.50) -> adjusted to 0.972
# +
num_epochs = 150
lr = 1*1e-3
use_learning_rate_decay = True #set to false when use LR finder
if use_learning_rate_decay == True:
optimizer = optim.Adam(model.parameters(), lr=lr)
LRdecayRate = 0.972
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=LRdecayRate)
else:
optimizer = optim.Adam(model.parameters())
# criterion = diceloss()
pos_weight = torch.ones([1]).to(device)*70 # 1 class and weight = 70, the number of neg_px is about 70 times that of pos_px in my training dataset
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
# -
# ### Find optimal learning rate with : https://github.com/davidtvs/pytorch-lr-finder
# *cylical learning rate vs adaptive learning rate?*
#
# cyclical -> believes difficulty in minimizing the loss arises from "saddle points" rather than poor local minima.
# Cyclical Learning Rates for Training Neural Networks by <NAME> https://arxiv.org/abs/1506.01186
#
# The code from the link used cyclical learning rate to find optimal lr...
# while I will just use simple leanring rate decay..
# +
if use_learning_rate_decay == False:
from torch.utils.data import DataLoader
from torch_lr_finder import LRFinder
model.load_state_dict(torch.load('save/save43.pt'))
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(valid_loader, end_lr=5*1e-5, num_iter=200, step_mode="exp")
lr_finder.plot()
lr_finder.reset()
# -
# ### Optimal learning rate
# The optimal learning rate from code above is about 5*1e-4
# 
#
# # Start Training!
def print_and_send_line(text):
try:
line.sendtext(str(text))
except:
pass
print(str(text))
# +
model.load_state_dict(torch.load('save/save60.pt'))
print_and_send_line("Starting new training session")
losses = []
# valid_loss_min = np.Inf
valid_loss_min = 6 #for BCE loss(w=70)
# valid_loss_min = 2.3 #for Dice loss
CM_min = 141 #confusion matrix
print_every = 15
clip = 1
save_number = 1
total_time = time.time()
for e in range(num_epochs):
start_time = time.time()
train_loss = 0.0
train_loss_small = 0.0
valid_loss = 0.0
################### # train the model # ###################
model.train()
for batch_i,(i, l) in enumerate(train_loader,1):
optimizer.zero_grad()
i, l = i.to(device), l.to(device)
pred = model(i)
loss = criterion(pred,l)
loss.backward()
# apply gradient clipping
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm = clip)
optimizer.step()
train_loss_small += loss.item()*i.size(0)
train_loss += train_loss_small
if batch_i%print_every == 0:
print("Epoch: {}/{}, train_loss: {}".format(e+1,num_epochs, train_loss_small))
try:
line.sendtext("Epoch: {}/{}, train_loss: {}".format(e+1,num_epochs, train_loss_small))
except:
pass
train_loss_small = 0.0
print("--- %s seconds ---" % (time.time() - start_time))
###################### # validate the model # ######################
model.eval()
for (i,l) in valid_loader:
i, l = i.to(device), l.to(device)
pred = model(i)
loss = criterion(pred,l)
valid_loss += loss.item()*i.size(0)
my_lr_scheduler.step()
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
losses.append((train_loss, valid_loss))
accuracy, sensitivity, specificity, ppv = compute_CM(pred,l)
print_and_send_line("Epoch: {}/{}, train_loss: {}, Validation_loss: {:.6f} ".format(e+1,num_epochs, train_loss, valid_loss))
print_and_send_line("Accuracy: {}%, Sensitivity: {}%, Specificity: {}%, , PPV: {}%".format(accuracy, sensitivity, specificity, ppv))
print_and_send_line("--- %s seconds ---" % (time.time() - start_time))
if valid_loss <= valid_loss_min:
while os.path.isfile('save/save{}.pt'.format(save_number)):
save_number += 1
print_and_send_line ("val loss min decreased, Saving model...{}".format(save_number))
torch.save(model.state_dict(), 'save/save{}.pt'.format(save_number))
valid_loss_min = valid_loss
CM = sensitivity + ppv
if CM > CM_min:
while os.path.isfile('save/save{}.pt'.format(save_number)):
save_number += 1
print_and_send_line ("sum of params in confusion matrix increased, Saving model...{}".format(save_number))
torch.save(model.state_dict(), 'save/save{}.pt'.format(save_number))
CM_min = CM
back_up = "save/" + datetime.datetime.now().strftime("%d-%m-%Y_%H-%M") + ".pt"
torch.save(model.state_dict(), back_up)
print_and_send_line("Finished Training, Total time elapsed: ".format(time.time() - start_time))
# +
# torch.save(model.state_dict(), 'save/save39.pt')
# -
# # Visualing Results
#
# ### loss
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Train_loss', alpha=0.5)
plt.plot(losses.T[1], label='Valid_loss', alpha=0.5)
plt.title("Losses")
plt.legend()
# ### Preds in comparison to labels
# +
model.load_state_dict(torch.load('save/save19.pt'))
display_dataset = DataLoaderImg(folder_path = "test5",random_rotation = True, get_path=True)
display_loader = DataLoader(display_dataset, batch_size = 50, shuffle=True, num_workers=0)
model.eval()
def plot_ct_scan_with_labels_and_pred(loader, plot_size=50, cmap=plt.cm.gray):
"""accepts train_loader"""
data = next(iter(loader))
i = data[0].to(device)
pred = model(i).squeeze().detach().cpu().numpy()
display_labels = data[1].squeeze().detach().cpu().numpy()
display_images = i.squeeze().detach().cpu().numpy() #1 batch of display_image 32,32,32,32
paths = data[2]
f, plots = plt.subplots(int(display_images.shape[0] / 2) , 4, figsize=(plot_size, plot_size))
f.suptitle('Red = Label, Yellow = Prediction', fontsize=50, y=0.92, x=0.2)
for img in range(0, display_images.shape[0]): #batch_size
each_path = paths[img]
each_label = display_labels[img]
each_image = display_images[img]
each_pred = pred[img]
marked = find_marked(each_label)
print(each_path)
plots[int((img / 2)), int(img % 2)*2].imshow(each_image[marked,:,:], cmap="gray")
plots[int((img / 2)), (int(img % 2)*2)+1].imshow(each_image[marked,:,:], cmap="gray")
label = np.ma.masked_where((each_label < 0.05), each_label)
pred_mask = np.ma.masked_where((each_pred < 0.05), each_pred)
plots[int((img / 2)), int(img % 2)*2].imshow(label[marked, :, :],cmap="hsv", alpha=0.5)
plots[int((img / 2)), (int(img % 2)*2)+1].imshow(pred_mask[marked,:,:], cmap="Wistia", alpha=1.0)
plots[int((img / 2)), int(img % 2)*2].axis('off')
plots[int((img / 2)), (int(img % 2)*2)+1].axis('off')
plots[int((img / 2)), int(img % 2)*2].set_title(str(each_path))
plt.subplots_adjust(wspace=0, hspace=0.2, left=0, right=0.4)
plot_ct_scan_with_labels_and_pred (display_loader)
# +
index = 31 #ใส่เลข 0-31
display_pred = pred[index].squeeze().detach().cpu().numpy()
display_label = l[index].squeeze().detach().cpu().numpy()
display_image = i[index].squeeze().detach().cpu().numpy()
# +
def explore_3dimage(layer=find_marked(display_label)):
plt.figure(figsize=(10, 5))
plt.imshow(display_image[layer, :, :], cmap='gray');
global display_pred
mask = np.ma.masked_where((display_pred < 0.05), display_pred)
plt.imshow(mask[layer, :, :], cmap="Wistia", alpha=0.5); #mask อยู่ตรงนี้นะ
label = np.ma.masked_where((display_label < 0.05), display_label)
plt.imshow(label[layer, :, :], cmap="hsv", alpha=0.5); #label อยู่ตรงนี้นะ
plt.title('Label', fontsize=20)
plt.axis('off')
return layer
interact(explore_3dimage, layer=(0, display_image.shape[0]))
# -
# # Find false positive candidates
# Find the file containing the false positive candidates and move it to another location. These can be used to lower the false positivity rates.
# +
def find_false_cands(loader, plot_size=50, cmap=plt.cm.gray):
"""accepts train_loader"""
data = next(iter(loader))
i = data[0].to(device)
pred = model(i).squeeze().detach().cpu().numpy()
display_labels = data[1].squeeze().detach().cpu().numpy()
display_images = i.squeeze().detach().cpu().numpy() #1 batch of display_image 32,32,32,32
paths = data[2]
all_paths = []
for img in range(0, display_images.shape[0]): #batch_size
each_path = paths[img]
each_label = display_labels[img]
each_image = display_images[img]
each_pred = pred[img]
marked = find_marked(each_pred)
all_paths.append(each_path) #will delete files that are not moved
for x in range(each_pred.shape[0]):
test_mask = each_pred[x, :, :]
# print(np.sum(test_mask))
if np.sum(test_mask)>-15500:
try:
shutil.move(each_path, 'train5-2')
except:
pass
for y in all_paths:
try:
os.remove(y)
except:
pass
# for i in range (0,1000):
# model.load_state_dict(torch.load('save/save22.pt'))
# display_dataset = DataLoaderImg(folder_path = "false_cand",random_rotation = True, get_path=True)
# display_loader = DataLoader(display_dataset, batch_size = 50, shuffle=True, num_workers=0)
# model.eval()
# all_paths = find_false_cands (display_loader)
| Model-5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Lecture 17b: Fashion MNIST with Stacked Autoencoders
# ==
# Load Packages
# ==
# +
# %matplotlib inline
import tqdm
import torch
import numpy as np
import torchvision
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
# from torch.autograd import Function
print(torch.__version__) # This code has been updated for PyTorch 1.0.0
# -
# Load Data:
# ===============
# +
transform = transforms.Compose([transforms.ToTensor()])
BatchSize = 1000
trainset = torchvision.datasets.FashionMNIST(root='./FMNIST', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BatchSize,
shuffle=True, num_workers=4) # Creating dataloader
testset = torchvision.datasets.FashionMNIST(root='./FMNIST', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=BatchSize,
shuffle=False, num_workers=4) # Creating dataloader
# +
# Check availability of GPU
use_gpu = torch.cuda.is_available()
if use_gpu:
print('GPU is available!')
device = "cuda"
else:
print('GPU is not available!')
device = "cpu"
# -
# Define the Autoencoder:
# ===============
# +
class autoencoder(nn.Module):
def __init__(self):
super(autoencoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(28*28, 400),
nn.Tanh())
self.decoder = nn.Sequential(
nn.Linear(400, 28*28),
nn.Sigmoid())
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
net = autoencoder()
print(net)
net = net.double().to(device)
# -
# Train Autoencoder:
# ===========
# +
iterations = 10
learning_rate = 0.98
criterion = nn.MSELoss()
for epoch in range(iterations): # loop over the dataset multiple times
runningLoss = 0.0
for data in tqdm.tqdm_notebook(trainloader):
# get the inputs
inputs, labels = data
inputs = inputs.view(-1, 28*28).double().to(device)
net.zero_grad() # zeroes the gradient buffers of all parameters
outputs = net(inputs) # forward
loss = criterion(outputs, inputs) # calculate loss
loss.backward() # backpropagate the loss
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate) # weight = weight - learning_rate * gradient (Update Weights)
runningLoss += loss.item()
print('At Iteration : %d / %d ; Mean-Squared Error : %f'%(epoch + 1,iterations,
runningLoss/(60000/BatchSize)))
print('Finished Training')
# -
# Stacking Layers:
# ================================
# Adding New Layer (Stacking)
net.encoder.add_module('New_Encoder_Layer', nn.Sequential(nn.Linear(400, 256),nn.Tanh()))
net.encoder.add_module('New_Decoder_Layer', nn.Sequential(nn.Linear(256, 400),nn.Tanh()))
print(net)
net = net.double().to(device)
# Train Autoencoder:
# ==========
for epoch in range(iterations): # loop over the dataset multiple times
runningLoss = 0.0
for data in tqdm.tqdm_notebook(trainloader):
# get the inputs
inputs, labels = data
inputs = inputs.view(-1, 28*28).double().to(device)
net.zero_grad() # zeroes the gradient buffers of all parameters
outputs = net(inputs) # forward
loss = criterion(outputs, inputs) # calculate loss
loss.backward() # backpropagate the loss
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate) # weight = weight - learning_rate * gradient (Update Weights)
runningLoss += loss.item()
print('At Iteration : %d / %d ; Mean-Squared Error : %f'%(epoch + 1,iterations,
runningLoss/(60000/BatchSize)))
# Modifying the autoencoder for classification:
# ================================
# Removing the decoder module from the autoencoder
new_classifier = nn.Sequential(*list(net.children())[:-1])
net = new_classifier
new_classifier = nn.Sequential(*list(net[0].children())[:-1])
net = new_classifier
# Adding linear layer for 10-class classification problem
net.add_module('classifier', nn.Sequential(nn.Linear(256, 10),nn.LogSoftmax(dim=1)))
print(net)
net = net.double().to(device)
# Train Classifier:
# ===========
# +
iterations = 10
learning_rate = 0.1
criterion = nn.NLLLoss()
for epoch in range(iterations): # loop over the dataset multiple times
runningLoss = 0.0
for data in tqdm.tqdm_notebook(trainloader):
# get the inputs
inputs, labels = data
inputs, labels = inputs.view(-1, 28*28).double().to(device), labels.to(device)
net.zero_grad() # zeroes the gradient buffers of all parameters
outputs = net(inputs) # forward
loss = criterion(outputs, labels) # calculate loss
loss.backward() # backpropagate the loss
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate) # weight = weight - learning_rate * gradient (Update Weights)
runningLoss += loss.item()
correct = 0
total = 0
net.eval()
with torch.no_grad():
for data in testloader:
inputs, labels = data
inputs, labels = inputs.view(-1, 28*28).double().to(device), labels.to(device)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('At Iteration : %d / %d ; Train Error : %f ;Test Accuracy : %f'%(epoch + 1,iterations,
runningLoss/(60000/BatchSize),100 * correct /float(total)))
print('Finished Training')
# -
| lecture17b.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pymongo
from collections import defaultdict
# print all available databases
client = pymongo.MongoClient('localhost', 27017)
cursor = client.list_databases()
for db in cursor:
print(db)
all_cols = ['claudience', 'clpersuasive', 'clsentiment', 'clagreement', 'cldisagreement', 'clinformative', 'clmean', 'clcontroversial', 'cltopic']
def get_mets(db, conf=None):
mydb = client[db]
res = mydb["metrics"].aggregate([{
"$match": {"name": 'kappa_score'} # only consider metric
},
{"$unwind": "$values"},
{"$group":
{'_id': '$_id',
'val': {'$max': "$values"}, 'run_id' : { '$first': '$run_id' }}
}, # find min values
{"$sort": {"val": -1}} # sort
])
if not conf is None:
runs = mydb['runs'].find(conf)
runs = [r['_id'] for r in list(runs)]
res = [r for r in res if r['run_id'] in runs]
best = list(res)[0]
epoch = None
max_epochs = 0
for x in mydb['metrics'].find({'run_id': best['run_id'], 'name': 'kappa_score'}):
max_epochs = len(x['values'])
for i, v in enumerate(x['values']):
if v == best['val'] and epoch is None:
epoch = i + 1
for x in mydb['metrics'].find({'run_id': best['run_id'], 'name': 'F1_macro'}):
f1_macro = x['values'][epoch - 1]
for x in mydb['metrics'].find({'run_id': best['run_id'], 'name': 'accuracy'}):
f1_micro = x['values'][epoch - 1]
run = list(mydb['runs'].find({'_id': best['run_id']}))[0]
mod = ''
if 'mod' in run['config']:
mod= run['config']['mod']
return best['val'], f1_micro, f1_macro, epoch, max_epochs, run['config']['exp_id'], run['config']['drop_mult'], mod
get_mets('lm_threads_cl_cltopic')
def get_all(db, **kwargs):
for col in all_cols:
print(col)
print(get_mets(db + col, **kwargs))
get_all('lm_threads_cl_', conf={'config.mod': 'simle_fit'})
get_all('lm_threads_cut_cl_', conf={'config.mod': 'simple_fit'})
get_all('lm_threads_cl_')
get_all('lm_threads_cut_cl_')
# the first audiance is not correct because it was run on wrong data, so better check again!
get_all('threads_headline_cl_')
get_all('threads_headline_article_cl_')
get_all('headline_root_threads_cl_')
get_all('headline_root_threads_no_over_cl_')
get_all('dat_false_par_true_hea_false30000_cl_')
get_all('threads_constructive')
for r in reas:
print(r)
def get_best(db, metric, config_param=None, config_val=None, config_param2=None, config_val2=None, one=True):
myclient = pymongo.MongoClient("mongodb://localhost:27017/")
mydb = myclient[db]
res = mydb["metrics"].aggregate([{
"$match": {"name": metric} # only consider metric
},
{"$unwind": "$values"},
{"$group":
{'_id': '$_id',
'val': {'$max': "$values"}, 'run_id' : { '$first': '$run_id' }}
}, # find min values
{"$sort": {"val": -1}} # sort
])
if config_param is None:
if one:
return list(res)[0]['val']
return [x['val'] for x in list(res)]
filtered_res = []
# filter only results for the config
for res_obj in res:
run = list(mydb['runs'].find({'_id': res_obj['run_id']}))[0]
if config_param in run['config']:
if run['config'][config_param] == config_val:
if config_param2 is None:
filtered_res.append(res_obj)
else:
if not config_param2 in run['config']:
continue
# print(run['config'][config_param2])
if run['config'][config_param2] == config_val2:
# print('yes')
filtered_res.append(res_obj)
else:
pass
# print('NB: just used run without considering param')
# filtered_res.append(res_obj)
if one:
return filtered_res[0]['val']
return [x['val'] for x in filtered_res]
| code/ynacc/Results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Solutions
# +
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def output_formula(features, weights, bias):
return sigmoid(np.dot(features, weights) + bias)
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
def update_weights(x, y, weights, bias, learnrate):
output = output_formula(x, weights, bias)
d_error = y - output
weights += learnrate * d_error * x
bias += learnrate * d_error
return weights, bias
| intro-neural-networks/gradient-descent/GradientDescentSolutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/100rab-S/Coursera-Tensorflow-Developer/blob/main/Course_3_Week_3_Exercise_Question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="zX4Kg8DUTKWO"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + id="hmA6EzkQJ5jt"
import json
import tensorflow as tf
import csv
import random
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import regularizers
embedding_dim = 100
max_length = 16
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_size= 160000 #Your dataset size here. Experiment using smaller values (i.e. 16000), but don't forget to train on at least 160000 to see the best effects
test_portion=.1
corpus = []
# + id="bM0l_dORKqE0"
# Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader
# You can do that yourself with:
# iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv
# I then hosted it on my site to make it easier to use in this notebook
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \
# -O /tmp/training_cleaned.csv
num_sentences = 0
with open("/tmp/training_cleaned.csv") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
# Your Code here. Create list items where the first item is the text, found in row[5], and the second is the label. Note that the label is a '0' or a '4' in the text. When it's the former, make
# your label to be 0, otherwise 1. Keep a count of the number of sentences in num_sentences
list_item=[]
list_item.append(row[5])
list_item.append(0 if row[0] == '0' else 1)
num_sentences = num_sentences + 1
corpus.append(list_item)
# + colab={"base_uri": "https://localhost:8080/"} id="3kxblBUjEUX-" outputId="dddbea4d-57f0-42c2-a099-b5e9768ba332"
print(num_sentences)
print(len(corpus))
print(corpus[1])
# Expected Output:
# 1600000
# 1600000
# ["is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!", 0]
# + id="ohOGz24lsNAD"
sentences=[]
labels=[]
random.shuffle(corpus)
for x in range(training_size):
sentences.append(corpus[x][0])
labels.append(corpus[x][1])
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
vocab_size=len(word_index)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
split = int(test_portion * training_size)
test_sequences = padded[:split]
training_sequences = padded[split:training_size]
test_labels = labels[:split]
training_labels = labels[split:training_size]
# + colab={"base_uri": "https://localhost:8080/"} id="gIrtRem1En3N" outputId="56cd8330-730b-4b54-9544-df107b8d76d7"
print(vocab_size)
print(word_index['i'])
# Expected Output if training size = 160000
# 138858
# 1
# + colab={"base_uri": "https://localhost:8080/"} id="C1zdgJkusRh0" outputId="97740f42-7067-4483-f7bc-c6f6cc6ecd80"
# Note this is the 100 dimension version of GloVe from Stanford
# I unzipped and hosted it on my site to make this notebook easier
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \
# -O /tmp/glove.6B.100d.txt
embeddings_index = {};
with open('/tmp/glove.6B.100d.txt') as f:
for line in f:
values = line.split();
word = values[0];
coefs = np.asarray(values[1:], dtype='float32');
embeddings_index[word] = coefs;
embeddings_matrix = np.zeros((vocab_size+1, embedding_dim));
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word);
if embedding_vector is not None:
embeddings_matrix[i] = embedding_vector;
# + colab={"base_uri": "https://localhost:8080/"} id="71NLk_lpFLNt" outputId="0e7450f1-5084-4042-b812-52d4c138606d"
print(len(embeddings_matrix))
# Expected Output
# 138859
# + id="iKKvbuEBOGFz" colab={"base_uri": "https://localhost:8080/"} outputId="6bc82863-17e4-42c4-93e5-ccd0d94825ab"
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(1024, activation = 'relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(1, activation = 'sigmoid')
])
model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.summary()
training_sequences = np.array(training_sequences)
training_labels = np.array(training_labels)
test_sequences = np.array(test_sequences)
test_labels = np.array(test_labels)
num_epochs = 10 #change to higher value for better results
history = model.fit(training_sequences, training_labels, epochs=num_epochs, validation_data=(test_sequences, test_labels))
print("Training Complete")
# + id="qxju4ItJKO8F" colab={"base_uri": "https://localhost:8080/", "height": 607} outputId="97336f46-6f55-423f-c5be-e99e23f85f43"
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['accuracy']
val_acc=history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r')
plt.plot(epochs, val_acc, 'b')
plt.title('Training and validation accuracy')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["Accuracy", "Validation Accuracy"])
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r')
plt.plot(epochs, val_loss, 'b')
plt.title('Training and validation loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss", "Validation Loss"])
plt.figure()
# Expected Output
# A chart where the validation loss does not increase sharply!
| Course_3_Week_3_Exercise_Question.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - (c) <NAME>, 2020/01/17
# - MIT License
# ## SVMによるBreast Cancerデータの識別
# - 入れ子交差検証で最適パラメータを探索
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.preprocessing import scale
# ### Breast Cancerデータのロード
# +
df = load_breast_cancer()
X = df.data
y = df.target
# z標準化
X = scale(X)
# -
print(df.DESCR)
# ### 入れ子交差検証でハイパーパラメータをグリッドサーチ
# - グリッドサーチパラメータリストの書き方は下記を参照
# - http://scikit-learn.org/stable/modules/grid_search.html#grid-search
# - SVCの可能なパラメータリストは下記を参照
# - http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC
# +
# 外側ループのための交差検証用データ生成インスタンス
kfold = StratifiedKFold(n_splits=10, shuffle=True)
acc_trn_list = [] #外側ループのfold毎の学習データに対するaccuracy格納用
acc_tst_list = [] #外側ループのfold毎のテストデータに対するaccuracy格納用
# グリッドサーチのパラメータリスト
parameters = {'kernel':['poly','rbf','sigmoid'],
'gamma':[0.01, 0.02, 0.05, 0.1, 0,2, 1, 10, 100],
'degree':[1,2,3,4],
'coef0':[1,3,5,10,100]
}
# 内側ループでグリッドサーチを行う交差検証インスタンス
gs = GridSearchCV(SVC(), parameters, cv=2, iid=False)
k=0
# 内側ループのグリッドサーチ
for train_itr, test_itr in kfold.split(X, y):
gs.fit(X[train_itr], y[train_itr])
print('Fold #{:2d}; Best Parameter: {}, Accuracy: {:.3f}'\
.format(k+1,gs.best_params_,gs.best_score_))
acc_trn_list.append(gs.score(X[train_itr],y[train_itr]))
acc_tst_list.append(gs.score(X[test_itr],y[test_itr]))
k=k+1
# -
# ### 平均Accuracy
print('Training data: %1.3f' % np.mean(acc_trn_list))
print('Test data: %1.3f' % np.mean(acc_tst_list))
| ch6-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pre-requirements
# #### 1. [Download](https://qgis.org/en/site/forusers/download.html) and install QGIS
# #### 2. A Shapefile
# + The shapefile should end with the '.shp' extension
#
# # Instructions
#
# #### 1. Launch the QGIS Application
# #### 2. Click on the _New Empty Project_ panel
# + This option is highlighted in red in the image below
# 
# #### 3. Using the _Browser_ sidepanel, find and click on the desired Shapefile
# 
# #### 4. Click on _Vector→Data Management Tools→Split Vector Layer..._
# 
# #### 5. Configure the _Split Vector Layer_ tool settings and click _Run_
# + **_Input layer_** specifies what Layer is being used to generate the files
# + **_Unique ID field_** specifies the prefix for the generated files
# + **_Output directory_** is the directory which will contain the generated files
# 
# #### 6. Once the _Split Vector Layer_ tool finishes running, click on _Layer→Add Layer →Add Vector Layer..._
# 
# #### 7. Configure the _Data Source Manager_ accordingly
# + Ensure that it is under the _Vector_ tab is selected
# + For the **_Source Type_** option, select _File_
# + Under **_Source_** navigate and select the generated file(s)
# 
# #### 8. The selected file(s) should now appear as a new layer
# 
# +
from haversine import haversine, Unit
from shapely.geometry import Point, Polygon
import matplotlib.pyplot as plt
import csv
import pandas as pd
import geopandas as gpd
import fiona
import os
import pprint
# %matplotlib inline
# -
df = pd.read_csv('population_AS47_2018-10-01.csv')
#df = df.iloc[100000:]
india_shp = gpd.read_file("shapefiles/India_Districts_ADM2_GADM.shp")
haversine(p1, p2, unit='in')
# find avg distance between consecutive points
sum_dist = 0
prev_pt = p1
n = 10
for i in range(1, n):
pt = (df['latitude'][i], df['longitude'][i])
dist = haversine(prev_pt, pt, unit='in') / 12
print(dist, " ft")
sum_dist += dist
prev_pt = pt
print("avg_dist in feet: ", sum_dist/(n-1))
# display shapefile
fig, ax = plt.subplots(figsize=(15,15))
india_shp.plot(ax = ax)
crs = {'init': 'epsg:4326'} # coordinate reference system (long, lat)
df.head()
# converting lat and long in df to coordinate shapely Points
geometry = [Point(xy) for xy in zip(df['longitude'], df['latitude'])]
geometry[:3]
geo_df = gpd.GeoDataFrame(df, #dataset
crs=crs, #coordinate system
geometry=geometry) # specify geometry list
geo_df.head()
# plot just a sliver of 18million points
lil_geo_df = geo_df.iloc[10000:30000]
fig, ax = plt.subplots(figsize=(15,15))
india_shp.plot(ax = ax, alpha=0.4, color='grey')
geo_df.plot(ax=ax, markersize=3, color='blue')
def shp_lookup(shp_dir, search_prop, query):
"""
function that returns name of shapefile (.shp) associated with a particualar field
If looking for shp by city, use
Args:
shp_dir : str
location of shapefiles to be searched
search_prop : str
field to search for in shapefiles
(city: 'NAME_2')
(region/state: 'NAME_1')
query : str
string to be searched for i.e. 'Delhi'
Returns:
str : filename of shapefile or None
"""
for filename in os.listdir(shp_dir):
if filename.endswith(".shp"):
with fiona.open(shp_dir + filename) as src:
if src[0]['properties'][search_prop] == query:
return filename
return "None"
shp_lookup(r'shapefiles/regions/', 'NAME_2', 'Delhi')
| notebooks/India_csv_distance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Network Operations
# ## Pre-Processing
# nuclio: ignore
import nuclio
# Define the MLRun environment
# %nuclio config kind = "job"
# %nuclio config spec.image = "mlrun/ml-models"
# ## Function
# +
import os
import pandas as pd
from mlrun.datastore import DataItem
from typing import Union
# -
def aggregate(context,
df_artifact: Union[DataItem, pd.core.frame.DataFrame],
save_to: str = 'aggregated-df.pq',
keys: list = None,
metrics: list = None,
labels: list = None,
metric_aggs: list = ['mean'],
label_aggs: list = ['max'],
suffix: str = '',
window: int = 3,
center: bool = False,
inplace: bool = False,
drop_na: bool = True,
files_to_select: int = 1):
"""Time-series aggregation function
Will perform a rolling aggregation on {df_artifact}, over {window} by the selected {keys}
applying {metric_aggs} on {metrics} and {label_aggs} on {labels}. adding {suffix} to the
feature names.
if not {inplace}, will return the original {df_artifact}, joined by the aggregated result.
:param df_artifact: MLRun input pointing to pandas dataframe (csv/parquet file path) or a
directory containing parquet files.
* When given a directory the latest {files_to_select} will be selected
:param save_to: Where to save the result dataframe.
* If relative will add to the {artifact_path}
:param keys: Subset of indexes from the source dataframe to aggregate by (default=all)
:param metrics: Array containing a list of metrics to run the aggregations on. (default=None)
:param labels: Array containing a list of labels to run the aggregations on. (default=None)
:param metric_aggs: Array containing a list of aggregation function names to run on {metrics}.
(Ex: 'mean', 'std') (default='mean')
:param label_aggs: Array containing a list of aggregation function names to run on {metrics}.
(Ex: 'max', 'min') (default='max')
:param suffix: Suffix to add to the feature name, E.g: <Feature_Name>_<Agg_Function>_<Suffix>
(Ex: 'last_60_mintes') (default='')
:param window: Window size to perform the rolling aggregate on. (default=3)
:param center: If True, Sets the value for the central sample in the window,
If False, will set the value to the last sample. (default=False)
:param inplace: If True, will return only the aggregated results.
If False, will join the aggregated results with the original dataframe
:param drop_na: Will drop na lines due to the Rolling.
:param files_to_select: Specifies the number of *latest* files to select (and concat) for aggregation.
"""
from_model = type(df_artifact) == pd.DataFrame
if from_model:
context.logger.info('Aggregating from Buffer')
input_df = df_artifact
else:
if df_artifact.url.endswith('/'): # is a directory?
mpath = [os.path.join(df_artifact.url, file) for file in df_artifact.listdir() if file.endswith(('parquet', 'pq'))]
files_by_updated = sorted(mpath, key=os.path.getmtime, reverse=True)
context.logger.info(files_by_updated)
latest = files_by_updated[:files_to_select]
context.logger.info(f'Aggregating {latest}')
input_df = pd.concat([context.get_dataitem(df).as_df() for df in latest])
else: # A regular artifact
context.logger.info(f'Aggregating {df_artifact.url}')
input_df = df_artifact.as_df()
# Verify there is work to be done
if not (metrics or labels):
raise ValueError('please specify metrics or labels param')
# Select the correct indexes
if keys:
current_index = input_df.index.names
indexes_to_drop = [col for col in input_df.index.names if col not in keys]
df = input_df.reset_index(level=indexes_to_drop)
else:
df = input_df
# For each metrics
if metrics:
metrics_df = df.loc[:, metrics].rolling(window=window, center=center).aggregate(metric_aggs)
# Flatten all the aggs
metrics_df.columns = ['_'.join(col).strip() for col in metrics_df.columns.values]
# Add suffix
if suffix:
metrics_df.columns = [f'{metric}_{suffix}' for metric in metrics_df.columns]
if not inplace:
final_df = pd.merge(input_df, metrics_df, suffixes=('', suffix), left_index=True, right_index=True)
else:
final_df = metrics_df
# For each label
if labels:
labels_df = df.loc[:, labels].rolling(window=window,
center=center).aggregate(label_aggs)
# Flatten all the aggs
labels_df.columns = ['_'.join(col).strip() for col in labels_df.columns.values]
# Add suffix
if suffix:
labels_df.columns = [f'{label}_{suffix}' for label in labels_df.columns]
if metrics:
final_df = pd.merge(final_df, labels_df, suffixes=('', suffix), left_index=True, right_index=True)
else:
if not inplace:
final_df = pd.merge(input_df, labels_df, suffixes=('', suffix), left_index=True, right_index=True)
else:
final_df = labels_df
if drop_na:
final_df = final_df.dropna()
# Save the result dataframe
context.logger.info('Logging artifact')
if not from_model:
context.log_dataset(key='aggregate',
df=final_df,
format='parquet',
local_path=save_to)
else:
return final_df
# +
# nuclio: end-code
# -
# ## Test
# > This test uses the metrics data, created by the [Generator function](https://github.com/mlrun/demo-network-operations/blob/master/notebooks/generator.ipynb) from MLRun's [Network Operations Demo](https://github.com/mlrun/demo-network-operations)
# To test it yourself, please generate this dataset or use any of your available csv/parquet datasets.
from mlrun import code_to_function, mount_v3io, NewTask, mlconf, run_local
mlconf.dbpath = mlconf.dbpath or 'http://mlrun-api:8080'
metrics_path = '/User/demo-network-operations/data/metrics.pq'
# ### Local Test
# Define the aggregate test task
aggregate_task = NewTask(name='aggregate',
project='network-operations',
params={'metrics': ['cpu_utilization'],
'labels': ['is_error'],
'metric_aggs': ['mean', 'sum'],
'label_aggs': ['max'],
'suffix': 'daily',
'inplace': False,
'window': 5,
'center': True,
'save_to': 'aggregate.pq',
'files_to_select': 2},
inputs={'df_artifact': metrics_path},
handler=aggregate)
aggregate_run = run_local(aggregate_task)
# ### Test on cluster
# Convert the code to an MLRun function
fn = code_to_function('aggregate', handler='aggregate', code_output='function.py')
fn.spec.description = "Rolling aggregation over Metrics and Lables according to specifications"
fn.metadata.categories = ["data-prep"]
fn.metadata.labels = {'author': 'orz'}
fn.export('function.yaml')
aggregate_run = fn.apply(mount_v3io()).run(aggregate_task, artifact_path=os.path.abspath('./'))
# ### Show results
pd.read_parquet(aggregate_run.artifact('aggregate')['target_path'])
| aggregate/aggregate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# <h4 style="background-color:lightblue;"> We have downloaded fasta File of Recurrent Mutation(-201 and +200 Position)</h4>
from IPython.display import Image as display
from theragen import fasta_description
# ##### Position and Reference Sequence
# ###### 1. ``` CHROSOME3_184033625 GCCC**C**ATCC```
# ###### 2. ``` CHROSOME20_62045527 TGTC**C**TCTCC...```
# ###### 3. ``` CHROSOME1_45797760 CACC**T**GAGAGG...```
# ###### 4. ``` CHROSOME7_781069 CTTT**G**CCCC....```
# ###### 5. ``` CHROSOME7_781073 GCCC**C**TCCCA....```
#
# <h3 style="background-color:Cornsilk;">------------------1. Checking the CHROSOME3_184033625 `C` ------------------</h3>
fasta_description("datas/chr3_184033625.fa")
display("datas/chr3_184033625.JPG")
# <h3 style="background-color:Cornsilk;">------------------Checking the CHROSOME20_62045527 `C` ------------------ </h3>
fasta_description("datas/chr20_62045527.fa")
# ###### Sample of Chromosom20 Position 62045527
display("datas/chr20_62045527_i.JPG")
# <h3 style="background-color:Cornsilk;">------------------Checking the CHROSOME1_45797760 `T`------------------</h3>
fasta_description("datas/chr1_45797760.fa")
# ###### Sample of Chromosome1 Position 45797760 `T`
display("datas/chr1_45797760.JPG")
# <h3 style="background-color:Cornsilk;">------------------Checking the CHROSOME7_781069 `G`------------------</h3>
fasta_description("datas/chr7_781069.fa")
# <h3 style="background-color:Cornsilk;">------------------Checking the CHROSOME7_781073 `C` ------------------</h3>
fasta_description("datas/chr7_781073.fa")
# ###### Sample of Chromosome CHROSOME7 POsition 781069 `G` & CHROSOME7 Position 781073 `C`
display("datas/chr7_all.JPG")
# <h2 style="background-color:LightGreen;">***************Thank You ***************</h2>
| Recurrent_Mutation_Position_Fasta_File.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
import pandas as pd
#plt.rcParams["figure.figsize"] = (10,7)
alphas = [-8,-6,-4,-2,0,2,4,6,8,10,12,14,16,18,20]
fs = 15
title_fs = 18
legend_fs = 12
ticks_fs = 16
#folder = '2d_comparisons_et_each_step/'
folder = '2d_comparisons_et_only_once_from_ground/'
# +
sdata = pd.read_csv(folder + 'polar-data2d.csv')
cdata = pd.read_csv(folder + 'cart-data2d.csv')
sdata['V_rel_block'] = sdata.apply(lambda row: row.V_block_fin - row.Wind_x, axis=1)
cdata['V_rel_block'] = cdata.apply(lambda row: row.V_block_fin - row.Wind_x, axis=1)
time = cdata['TotTime'].max()
cdata.tail(4)
# -
# ## 1) Block velocities as a function of the attack angle for various wind velocities
wmin = 4
wmax = 25
step = 2
# +
fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(14, 5), gridspec_kw={'wspace': 0})
for w in range(wmin, wmax):
fig.suptitle("Block velocity after " + str(time) + " sec for various wind velocities")
wdf = cdata.loc[cdata['Wind_x'] == float(w)]
#print(wdf.head(4))
axs[0].plot(wdf['Alpha'], wdf['V_block_fin'], label=str(w))
plt.grid(b=True)
wdf = sdata.loc[sdata['Wind_x'] == float(w)]
wdf.head(2)
axs[1].plot(wdf['Alpha'], wdf['V_block_fin'], label=str(w))
axs[0].grid(b=True)
axs[0].legend(title = r'Wind (m/s)', fontsize = legend_fs, loc='upper left', prop={"size":9})
axs[0].set_ylabel('Block velocity (m/s)')
#axs[0].set_xlim(0,20)
axs[0].set_title("Cartesian")
axs[1].set_title("Polar")
for ax in axs.flat:
ax.set(xlabel='Attack angle ' + r'$\alpha$')
ax.set_xticks(alphas)
#plt.savefig('Images/vblock_cart-pol.png')
# -
# ### a) Cartesian coordinates
# +
fig, axs = plt.subplots(1, 1, figsize=(11,8), gridspec_kw={'wspace': 0})
for w in range(wmin, wmax, step):
wdf = cdata.loc[cdata['Wind_x'] == float(w)]
axs.plot(wdf['Alpha'], wdf['V_block_fin'], label=str(w))
axs.set_ylabel('Block velocity (m/s)', fontsize=fs+3)
axs.set_xlabel('Attack angle ' + r'$\alpha$', fontsize=fs+3)
axs.set_title("Block velocity after "+ str(int(time)) + " for various wind velocities", fontsize=title_fs)
leg = axs.legend(title = r'Wind (m/s)', fontsize = legend_fs, loc='upper left')
axs.set_xlim(-8,20)
axs.tick_params(axis='both', labelsize=ticks_fs)
leg.get_title().set_fontsize(ticks_fs)
axs.set_xticks([-8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
plt.grid(b=True)
#plt.savefig('Images/vblock_alpha_cartesian.png')
# -
# ### b) Polar coordinates
# for w in range(wmin, wmax, step):
# wdf = sdata.loc[cdata['Wind_x'] == float(w)]
# plt.plot(wdf['Alpha'], wdf['V_block_fin'], label=str(w))
# plt.ylabel('Block velocity (m/s)', fontsize=fs)
# plt.xlabel('Attack angle ' + r'$\alpha$', fontsize=fs)
# plt.title("Block velocity after 300 sec for various wind velocities")
# plt.legend(title = 'Wind (m/s)', fontsize = legend_fs, loc='upper left')
# plt.grid(b=True)
# ## 2) Kite relative velocity as a function of wind velocities for various alpha
# +
fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(14, 5), gridspec_kw={'wspace': 0})
for i, angle in enumerate(alphas):
fig.suptitle("Kite relative velocity after " + str(time) + " sec for various attack angles")
wdfc = cdata.loc[cdata['Alpha'] == angle]
axs[0].plot(wdfc['Wind_x'], wdfc['Vrelx'], label=r'$\alpha$='+str(angle))
plt.grid(b=True)
wdfs = sdata.loc[sdata['Alpha'] == angle]
axs[1].plot(wdfs['Wind_x'], wdfs['Vrelx'], label=r'$\alpha$='+str(angle))
axs[0].grid(b=True)
axs[0].legend(title = r'Attack angles', fontsize = legend_fs, loc='upper left', prop={"size":9})
axs[0].set_ylabel(r'V$_{REL}$ (m/s)')
axs[0].set_title("Cartesian")
axs[1].set_title("Polar")
for ax in axs.flat:
ax.set(xlabel='Wind velocity (m/s)')
#plt.savefig('Images/vrel_wind_negative_pi4.png')
# +
fig, axs = plt.subplots(1, 1, figsize=(11,8), gridspec_kw={'wspace': 0})
for i, angle in enumerate(alphas):
wdfc = cdata.loc[cdata['Alpha'] == angle]
axs.plot(wdfc['Wind_x'], wdfc['Vrelx'], label=r'$\alpha$='+str(angle))
axs.set_ylabel(r'V$_{REL}$ (m/s)', fontsize=fs+3)
axs.set_xlabel('Wind velocity (m/s)', fontsize=fs+3)
#axs.set_title("Kite relative velocity after " + str(time) + " sec for various attack angles", fontsize=title_fs)
leg = axs.legend(title = r'Attack angles', fontsize = 12,loc='lower left',framealpha=1)
axs.set_xlim(0,38)
axs.tick_params(axis='both', labelsize=ticks_fs)
leg.get_title().set_fontsize(ticks_fs)
#axs.set_xticks([-8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
plt.grid(b=True)
plt.savefig('Images/vrel_wind_cartesian1.png', dpi=300)
# +
fig, axs = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(14, 5), gridspec_kw={'wspace': 0})
for w in range(wmin, wmax, step):
fig.suptitle("Kite relative velocity after " + str(time) + " sec for various winds and attack angles")
wdfc = cdata.loc[cdata['Wind_x'] == float(w)]
axs[0].plot(wdfc['Alpha'], wdfc['Vrelx'], label=str(w))
plt.grid(b=True)
wdfs = sdata.loc[sdata['Wind_x'] == float(w)]
axs[1].plot(wdfs['Alpha'], wdfs['Vrelx'], label=str(w))
axs[0].grid(b=True)
axs[0].legend(title = r'Wind (m/s)', fontsize = legend_fs, loc='upper left', prop={"size":9})
axs[0].set_ylabel(r'V$_{REL}$ (m/s)')
axs[0].set_title("Cartesian")
axs[1].set_title("Polar")
for ax in axs.flat:
ax.set(xlabel='Attack angle ' + r'$\alpha$')
#plt.savefig('Images/vrel_alpha_negative_pi4.png')
# +
fig, axs = plt.subplots(1, 1, figsize=(11,8), gridspec_kw={'wspace': 0})
for w in range(wmin, wmax, step):
wdfc = cdata.loc[cdata['Wind_x'] == float(w)]
axs.plot(wdfc['Alpha'], wdfc['Vrelx'], label=str(w))
axs.set_ylabel(r'V$_{REL}$ (m/s)', fontsize=fs)
axs.set_xlabel('Attack angle ' + r'$\alpha$', fontsize=fs)
axs.set_title("Kite relative velocity after " + str(time) + " sec for various winds and attack angles", fontsize=title_fs)
leg = axs.legend(title = r'Attack angles', fontsize = legend_fs, loc='lower left', bbox_to_anchor=(1, 0))
axs.set_xlim(-8,20)
axs.tick_params(axis='both', labelsize=ticks_fs)
leg.get_title().set_fontsize(ticks_fs)
fontP = FontProperties()
fontP.set_size('small')
axs.set_xticks([-8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
plt.grid(b=True)
#plt.savefig('Images/vrel_alpha_cartesian.png')
| Dynamics-Analysis/Motion-analysis2d.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reading and writting to a file
#
# ## 1) Reading files
#
# I have created two files that are in the data directory, lets look what they are. It is easy to use shell command line using in the jupyter notebook, they just need to be preceded by the "!". For example:
# !dir ..\data
# Let's open the first file called: animals.txt
filin = open (r"..\data\animals.txt", "r")
lines=filin.readlines()
print(lines)
filin.close()
# The open() instruction, open the file given in argument (be careful of the path to the file). The file is opened in readonly mode ("r"). At this point the file is only open not read. To read the file one use the .readlines() method that will create a list with the content of each line in the file. The .close() method close the file, and it's content is not available anymore when it is closed. Once the file is open and read it is possible to loop through its elements as usual.
for line in lines:
print(line)
# Each element of the list is a string. In Python it is also possible to make sure that the file is open and closed properly using the "with" instruction:
with open ("../data/animals.txt", 'r') as filin :
lines=filin.readlines()
for line in lines:
print(line)
# The file is closed at the end of the indented block corresponding to with.
#
# It is also possible to read the full file not line by line and get a single string:
with open (r"..\data\animals.txt", 'r') as filin :
file=filin.read()
print(file)
# It is also possible to open the file line by line and iterate over them using the .readline() method. For instance:
with open (r"..\data\animals.txt", "r") as filin :
line = filin .readline()
while line != "":
print ( line )
line = filin.readline()
# It is also possible to iterate directly on the opened file:
with open ("../data/animals.txt", 'r') as filin :
for line in filin:
print(line)
# ## 2) Writing files
#
# Writting files is equally simple, one just need to open a file in write mode ("w"):
animals2=["Pigeons","Frogs","Giraffes"]
with open (r"..\data\animals2.txt", 'w') as filout :
for animal in animals2:
filout.write(animal+"\n")
# If one want to separate each line of the file by a new line, it has to be declared explicitely: "\n".
#
# It is also possible to open many files at the same time with the with option:
with open (r"..\data\animals.txt", "r") as file1 , open (r"..\data\animals_3.txt", "w") as file2 :
for line in file1 :
file2 . write ("* " + line )
# ## 3) Conversion types with opened files
# When reading files, the content is always a string. If one has written other elements (int or float), they have to be explicitely converted to be usable. It is also important to notice that only strings can be written to files using these methods.
# ## 4) Exercise
#
# - Open the file ../data/numbers.txt create a list with each value. Compute the average value and the standard deviation for this list.
#
# - Draw a spiral:
# - create a file spiral.py to compute the cartesian coordinates of a spiral at 2 dimensions. The cartesian coordinates $x_a$ and $y_a$ of a point A on a circle with radius $r$ are expressed in term of the polar angle $\theta$, as $x_a=cos(\theta)\times r$ and $y_a=cos(\theta)\times r$.
# - To compute the cartesian coordinates that describes the spiral, vary both parameters at the same time:
# - The angle $\theta$ will take any values from $0$ to $4\pi$ by step of 0.1
# - The circle radius r, initiale value will be 0.5 and will be incremented by 0.5.
# - You will use the module math and the functions sin and cosin:
# - math.sin(), math.cos()
# - $\pi$ is also accessible in the same module using math.pi
# - Save the coordinates in a file ./data/spiral.dat
# - Each line of the file should contain a couple of coordinate ($x_a$, $y_a$). Each values should be separated with a space, and the coordinates should be displayed with 10 characters with 5 digits.
#
# - Display the output using the following piece of code:
# +
import numpy as np
nmbrlist = []
with open (r"..\data\numbers.txt") as nmbrs:
line = nmbrs.readline()
while line != "":
nmbrlist.append(int(line))
line = nmbrs.readline()
print(np.mean(nmbrlist), np.std(nmbrlist))
# do it as a function instead of an extra .py file
def spiral (turns = 2, innerrad = 0.5, spiralness = 0.5):
coords = []
currentrad = innerrad
currentphi = 0
phi = 0
while phi <= 2*np.pi*turns:
currentrad = innerrad + (phi/4*np.pi)*spiralness
coords.append([np.cos(phi)*currentrad, np.sin(phi)*currentrad])
phi += 0.1
return coords
coords = spiral(turns = 50, spiralness = 0.05, innerrad = 1)
with open(r"..\data\spiral.txt", "w+") as spiral:
for coord in coords: #sorry
spiral.write("{0:2.8f} {1:2.8f}\n".format(coord[0],coord[1]) )
# -
import matplotlib . pyplot as plt
x = []
y = []
with open (r"..\data\spiral.txt", "r") as f_in:
for line in f_in :
coords = line.split()
x. append ( float ( coords [0]))
y. append ( float ( coords [1]))
plt.figure ( figsize =(8 ,8))
mini = min (x+y) * 1.2
maxi = max (x+y) * 1.2
plt . xlim (mini , maxi )
plt . ylim (mini , maxi )
plt . plot (x, y)
#plt . savefig(r"..\figures\spirale.png")
| tutorial/07_Files_Windows.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# # Tutorial: Load demo data and enrich it with NOAA ISD Weather data.
#
# In this tutorial, you load the demo data (a parquet file in Azure Blob), check the data schema, enrich it with NOAA ISD Weather data.
#
# Prerequisites:
# > You must install the PyPI package on the cluster:
# > * azureml-opendatasets
#
# Learn how to:
# > * Load the demo data from Azure Blob
# > * Check the demo data schema
# > * Initialize NoaaIsdWeather class to load weather data
# > * Enrich the demo data with weather data
# > * Display the joined result annd stats
# ## Load demo parquet file from Azure Blob
# +
from azure.storage.blob import BlockBlobService
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
container_name = 'tutorials'
account_name = 'azureopendatastorage'
relative_path = 'noaa_isd_weather/demo.parquet'
df = spark.read.parquet('wasbs://%s@%s.blob.core.windows.net/%s' % (
container_name,
account_name,
relative_path))
df.count()
# -
# # Display the demo data
display(df.limit(10))
# # Initialize NoaaIsdWeather class, get the enricher from it and enrich demo data
# +
from azureml.opendatasets.accessories.location_data import LatLongColumn
from azureml.opendatasets.accessories.location_time_customer_data import LocationTimeCustomerData
from azureml.opendatasets import NoaaIsdWeather
_customer_data = LocationTimeCustomerData(df, LatLongColumn('lat', 'long'), 'datetime')
weather = NoaaIsdWeather(cols=["temperature", "windSpeed", "seaLvlPressure"])
weather_enricher = weather.get_enricher()
joined_data = weather_enricher.enrich_customer_data_with_agg(
customer_data_object=_customer_data,
location_match_granularity=5,
time_round_granularity='day',
agg='avg')
# -
# # Display the joined result
display(joined_data.data.limit(10))
# # Convert the joined spark dataframe to pandas dataframe
joined_data_pandas = joined_data.data.toPandas()
# # Check the stats of joined result
print(joined_data_pandas.info())
| tutorials/data-join/01-weather-join-in-spark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="8UMFqsCD0xyF" colab_type="code" colab={}
# Importing necessary packages
import pandas as pd
# + id="HSXgY0ze09cY" colab_type="code" colab={}
file_url = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter03/bank-full.csv'
bankData = pd.read_csv(file_url, sep=";")
# + id="NiRh58PlaJoZ" colab_type="code" colab={}
from pandas import set_option
# + id="9wn-4MiPabSL" colab_type="code" colab={}
bankNumeric = bankData[['age','balance','day','duration','campaign','pdays','previous']]
# + id="deI0ALoiaL3i" colab_type="code" outputId="ab50a776-ba9a-42ad-af0e-cce93cb3ac14" colab={"base_uri": "https://localhost:8080/", "height": 266} executionInfo={"status": "ok", "timestamp": 1573001855052, "user_tz": -660, "elapsed": 973, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCYY-iGjUIqBSnlLoszfZTN7rU7FRNg05Rdt9Ii3A=s64", "userId": "11809607246124237079"}}
set_option('display.width',150)
set_option('precision',3)
bankCorr = bankNumeric.corr(method = 'pearson')
bankCorr
# + id="F8JT-MtWb6eY" colab_type="code" outputId="8ab75e85-e755-4273-eff7-4910a92aa352" colab={"base_uri": "https://localhost:8080/", "height": 271}
# Plotting the correlation matrix
from matplotlib import pyplot
corFig = pyplot.figure()
figAxis = corFig.add_subplot(111)
corAx = figAxis.matshow(bankCorr,vmin=-1,vmax=1)
corFig.colorbar(corAx)
pyplot.show()
| Chapter03/Exercise3.05/Exercise3.05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sanjaykmenon/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/module2-loadingdata/Sanjay_Krishna_Unit_1_Sprint_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="pmU5YUal1eTZ"
# _Lambda School Data Science_
#
# # Join and Reshape datasets
#
# Objectives
# - concatenate data with pandas
# - merge data with pandas
# - understand tidy data formatting
# - melt and pivot data with pandas
#
# Links
# - [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
# - [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)
# - Combine Data Sets: Standard Joins
# - Tidy Data
# - Reshaping Data
# - Python Data Science Handbook
# - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append
# - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join
# - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping
# - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables
#
# Reference
# - Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)
# - Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
# + id="5MsWLLW4Xg_i" colab_type="code" outputId="b41efa21-9f4d-47d3-f57d-2ad50aad95b2" colab={"base_uri": "https://localhost:8080/", "height": 208}
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# + id="gfr4_Ya0XkLI" colab_type="code" outputId="79273a29-8162-4029-e102-357f20ab7c18" colab={"base_uri": "https://localhost:8080/", "height": 243}
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# + id="N4YyGPNdXrT0" colab_type="code" outputId="3bf7cc23-99fc-4085-ea67-6dccaee27bcc" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd instacart_2017_05_01
# + id="b26wmLUiXtlM" colab_type="code" outputId="5ac7b52e-52e9-4f49-e0ef-49a487549f89" colab={"base_uri": "https://localhost:8080/", "height": 121}
# !ls -lh *.csv
# + [markdown] colab_type="text" id="kAMtvSQWPUcj"
# # Assignment
#
# ## Join Data Practice
#
# These are the top 10 most frequently ordered products. How many times was each ordered?
#
# 1. Banana
# 2. Bag of Organic Bananas
# 3. Organic Strawberries
# 4. Organic Baby Spinach
# 5. Organic Hass Avocado
# 6. Organic Avocado
# 7. Large Lemon
# 8. Strawberries
# 9. Limes
# 10. Organic Whole Milk
#
# First, write down which columns you need and which dataframes have them.
#
# Next, merge these into a single dataframe.
#
# Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
# + id="vvE0EVHgXMFO" colab_type="code" outputId="2156e6ec-f6f4-4369-b29b-ef0a1055f4cf" colab={"base_uri": "https://localhost:8080/", "height": 363}
import pandas as pd #import pandas
df_orders = pd.read_csv('order_products__prior.csv') #create dataframe of orders
df_orders.head(10) #view dataframe
# + id="8yotS5gGkQgQ" colab_type="code" outputId="1d4669a2-3f55-4611-ea46-8916d6a5e3f4" colab={"base_uri": "https://localhost:8080/", "height": 206}
df_products = pd.read_csv('products.csv') #create dataframe of products
df_products.head() #view dataframe
# + id="brXmRH0unyIP" colab_type="code" outputId="6759b693-46c4-492e-f365-7b059600bf06" colab={"base_uri": "https://localhost:8080/", "height": 206}
c_df = pd.merge(df_products, df_orders, on='product_id') #merge both dataframes using product_id
c_df.head() #view combined dataframe
# + id="AcPb5MUFpU0K" colab_type="code" colab={}
freq = ['Banana','Bag of Organic Bananas','Organic Strawberries','Organic Baby Spinach','Organic Hass Avocado','Organic Avocado','Large Lemon','Strawberries','Limes','Organic Whole Milk']
#freq is list of most frequent products
filter = c_df.product_name.isin(freq) #condition of filter to be used for creating a new dataframe with only frequently ordered products
filtered_df = c_df[filter] #apply condition to combined dataframe
# + id="YSSD-Qj9RDd_" colab_type="code" colab={}
groupby_productname = filtered_df['reordered'].groupby(filtered_df['product_name']) #create groupby object of reordered items grouped by product name
# + id="CQWbWgfPRgQW" colab_type="code" outputId="c0340b18-63be-461f-b134-cb49a169726d" colab={"base_uri": "https://localhost:8080/", "height": 225}
groupby_productname.sum() #total number of times reordered of most frequently ordered items. I'm not sure if this is right. Need confirmation.
# + [markdown] id="RsiWi4DuXPLP" colab_type="text"
# ## Reshape Data Section
#
# - Replicate the lesson code
# - Complete the code cells we skipped near the beginning of the notebook
# - Table 2 --> Tidy
# - Tidy --> Table 2
# - Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
# + id="EFH67ggumMeT" colab_type="code" colab={}
# %matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan,2],[16,11],[3,1]],index=['<NAME>','<NAME>','<NAME>'], columns=['treatmenta','treatmentb'])
table2=table1.T
# + id="F4571mpQoCo9" colab_type="code" outputId="95999c24-d4be-4c62-fb95-4c85e9842f82" colab={"base_uri": "https://localhost:8080/", "height": 143}
table1
# + id="5tDPVwyFoLT2" colab_type="code" outputId="b228ab70-717e-4825-d08b-1281ecc68228" colab={"base_uri": "https://localhost:8080/", "height": 112}
table2
# + id="EJbMaqJ8pcgu" colab_type="code" outputId="f55ec24b-1f02-44ca-a8d3-336f6e02e563" colab={"base_uri": "https://localhost:8080/", "height": 112}
indexC = table2.columns.to_list()
table2 = table2.reset_index()
table2 = table2.rename(columns={'index':'trt'})
table2
# + id="61UPOpqV6jRz" colab_type="code" colab={}
table2.trt = table2.trt.str.replace('treatment','')
# + id="VpqaiHfG6wFG" colab_type="code" outputId="4eb5a332-bf10-401c-8ac6-f33ebd741575" colab={"base_uri": "https://localhost:8080/", "height": 112}
table2.head()
# + id="X4FhHgnX7A5-" colab_type="code" outputId="7772ff0b-0379-4f94-e9a3-1a0ced4dfcca" colab={"base_uri": "https://localhost:8080/", "height": 206}
tidyT2 = table2.melt(id_vars='trt',value_vars=indexC)
tidyT2.head()
# + id="fgxulJQq0uLw" colab_type="code" colab={}
import seaborn as sns
flights = sns.load_dataset('flights')
# + id="J4W7Cd949LWk" colab_type="code" outputId="af780c6f-c4c8-4abd-c6cd-50d130be5662" colab={"base_uri": "https://localhost:8080/", "height": 206}
flights.head()
# + id="1qKc88WI0up-" colab_type="code" outputId="1d7dfaa0-c0f2-47e8-fb8f-662cabc912af" colab={"base_uri": "https://localhost:8080/", "height": 489}
flights.pivot_table(index='year',columns='month')
# + [markdown] id="mnOuqL9K0dqh" colab_type="text"
# ## Join Data Stretch Challenge
#
# The [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)."
#
# The post says,
#
# > "We can also see the time of day that users purchase specific products.
#
# > Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.
#
# > **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"
#
# Your challenge is to reproduce the list of the top 25 latest ordered popular products.
#
# We'll define "popular products" as products with more than 2,900 orders.
#
#
# + id="B-QNMrVkYap4" colab_type="code" outputId="5e2f22af-5374-4a2b-d71c-823be117123c" colab={"base_uri": "https://localhost:8080/", "height": 1000}
##### YOUR CODE HERE #####
orders2 = pd.read_csv('orders.csv')
orders2.head()
new_df = pd.merge(df_orders, orders2, on='order_id')
new_df.head(100) #view combined dataframe
# + [markdown] id="Ij8S60q0YXxo" colab_type="text"
# ## Reshape Data Stretch Challenge
#
# _Try whatever sounds most interesting to you!_
#
# - Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"
# - Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"
# - Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)
# - Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
# + id="_d6IA2R0YXFY" colab_type="code" colab={}
##### YOUR CODE HERE #####
| module2-loadingdata/Sanjay_Krishna_Unit_1_Sprint_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="-T83BVkZAR_t"
# # Node2vec on ML-latest in Keras
#
# > Implementing the node2vec model to generate embeddings for movies from the MovieLens dataset.
# + [markdown] id="Te6L78rxASAE"
# ## Introduction
#
# Learning useful representations from objects structured as graphs is useful for
# a variety of machine learning (ML) applications—such as social and communication networks analysis,
# biomedicine studies, and recommendation systems.
# [Graph representation Learning](https://www.cs.mcgill.ca/~wlh/grl_book/) aims to
# learn embeddings for the graph nodes, which can be used for a variety of ML tasks
# such as node label prediction (e.g. categorizing an article based on its citations)
# and link prediction (e.g. recommending an interest group to a user in a social network).
#
# [node2vec](https://arxiv.org/abs/1607.00653) is a simple, yet scalable and effective
# technique for learning low-dimensional embeddings for nodes in a graph by optimizing
# a neighborhood-preserving objective. The aim is to learn similar embeddings for
# neighboring nodes, with respect to the graph structure.
#
# Given your data items structured as a graph (where the items are represented as
# nodes and the relationship between items are represented as edges),
# node2vec works as follows:
#
# 1. Generate item sequences using (biased) random walk.
# 2. Create positive and negative training examples from these sequences.
# 3. Train a [word2vec](https://www.tensorflow.org/tutorials/text/word2vec) model
# (skip-gram) to learn embeddings for the items.
#
# In this example, we demonstrate the node2vec technique on the
# [small version of the Movielens dataset](https://files.grouplens.org/datasets/movielens/ml-latest-small-README.html)
# to learn movie embeddings. Such a dataset can be represented as a graph by treating
# the movies as nodes, and creating edges between movies that have similar ratings
# by the users. The learnt movie embeddings can be used for tasks such as movie recommendation,
# or movie genres prediction.
#
# This example requires `networkx` package, which can be installed using the following command:
#
# ```shell
# pip install networkx
# ```
# + [markdown] id="dzCIyOACASAJ"
# ## Setup
# + id="MXx4sXh6ASAN"
import os
from collections import defaultdict
import math
import networkx as nx
import random
from tqdm import tqdm
from zipfile import ZipFile
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
# + [markdown] id="gU7M2Ls7ASAT"
# ## Download the MovieLens dataset and prepare the data
#
# The small version of the MovieLens dataset includes around 100k ratings
# from 610 users on 9,742 movies.
#
# First, let's download the dataset. The downloaded folder will contain
# three data files: `users.csv`, `movies.csv`, and `ratings.csv`. In this example,
# we will only need the `movies.dat`, and `ratings.dat` data files.
# + id="wxK0i1XSASAV"
urlretrieve(
"http://files.grouplens.org/datasets/movielens/ml-latest-small.zip", "movielens.zip"
)
ZipFile("movielens.zip", "r").extractall()
# + [markdown] id="lYtjW9ssASAY"
# Then, we load the data into a Pandas DataFrame and perform some basic preprocessing.
# + id="dVAHpFEoASAa" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632674927466, "user_tz": -330, "elapsed": 19, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="131ceb68-8d58-4f8d-beb2-890eea619bef"
# Load movies to a DataFrame.
movies = pd.read_csv("ml-latest-small/movies.csv")
# Create a `movieId` string.
movies["movieId"] = movies["movieId"].apply(lambda x: f"movie_{x}")
# Load ratings to a DataFrame.
ratings = pd.read_csv("ml-latest-small/ratings.csv")
# Convert the `ratings` to floating point
ratings["rating"] = ratings["rating"].apply(lambda x: float(x))
# Create the `movie_id` string.
ratings["movieId"] = ratings["movieId"].apply(lambda x: f"movie_{x}")
print("Movies data shape:", movies.shape)
print("Ratings data shape:", ratings.shape)
# + [markdown] id="lz39ix09ASAd"
# Let's inspect a sample instance of the `ratings` DataFrame.
# + id="DJwYiYElASAh" colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"status": "ok", "timestamp": 1632674930917, "user_tz": -330, "elapsed": 636, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="f4a4b664-0393-49a2-90bf-6e1307588a78"
ratings.head()
# + [markdown] id="vPBTQSfQASAj"
# Next, let's check a sample instance of the `movies` DataFrame.
# + id="f6je_bruASAk" colab={"base_uri": "https://localhost:8080/", "height": 206} executionInfo={"status": "ok", "timestamp": 1632674933809, "user_tz": -330, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="1a8405f8-84bd-4aa9-c47e-83637cbbbae6"
movies.head()
# + [markdown] id="lf6RuNclASAm"
# Implement two utility functions for the `movies` DataFrame.
# + id="eG6k_5ZmASAn"
def get_movie_title_by_id(movieId):
return list(movies[movies.movieId == movieId].title)[0]
def get_movie_id_by_title(title):
return list(movies[movies.title == title].movieId)[0]
# + [markdown] id="cvLfNbdQASAp"
# ## Construct the Movies graph
#
# We create an edge between two movie nodes in the graph if both movies are rated
# by the same user >= `min_rating`. The weight of the edge will be based on the
# [pointwise mutual information](https://en.wikipedia.org/wiki/Pointwise_mutual_information)
# between the two movies, which is computed as: `log(xy) - log(x) - log(y) + log(D)`, where:
#
# * `xy` is how many users rated both movie `x` and movie `y` with >= `min_rating`.
# * `x` is how many users rated movie `x` >= `min_rating`.
# * `y` is how many users rated movie `y` >= `min_rating`.
# * `D` total number of movie ratings >= `min_rating`.
# + [markdown] id="txbnvsEAASAs"
# ### Step 1: create the weighted edges between movies.
# + id="2CGIc-taASAu" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632674937087, "user_tz": -330, "elapsed": 906, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="cd714e2f-8050-4368-a89f-ea140b8b90e6"
min_rating = 5
pair_frequency = defaultdict(int)
item_frequency = defaultdict(int)
# Filter instances where rating is greater than or equal to min_rating.
rated_movies = ratings[ratings.rating >= min_rating]
# Group instances by user.
movies_grouped_by_users = list(rated_movies.groupby("userId"))
for group in tqdm(
movies_grouped_by_users,
position=0,
leave=True,
desc="Compute movie rating frequencies",
):
# Get a list of movies rated by the user.
current_movies = list(group[1]["movieId"])
for i in range(len(current_movies)):
item_frequency[current_movies[i]] += 1
for j in range(i + 1, len(current_movies)):
x = min(current_movies[i], current_movies[j])
y = max(current_movies[i], current_movies[j])
pair_frequency[(x, y)] += 1
# + [markdown] id="gUT67RLDASAw"
# ### Step 2: create the graph with the nodes and the edges
#
# To reduce the number of edges between nodes, we only add an edge between movies
# if the weight of the edge is greater than `min_weight`.
# + id="HMH7s8ohASAy" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632674938078, "user_tz": -330, "elapsed": 1004, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="978d4908-b6e2-48ca-bb07-51e4e7bf29b1"
min_weight = 10
D = math.log(sum(item_frequency.values()))
# Create the movies undirected graph.
movies_graph = nx.Graph()
# Add weighted edges between movies.
# This automatically adds the movie nodes to the graph.
for pair in tqdm(
pair_frequency, position=0, leave=True, desc="Creating the movie graph"
):
x, y = pair
xy_frequency = pair_frequency[pair]
x_frequency = item_frequency[x]
y_frequency = item_frequency[y]
pmi = math.log(xy_frequency) - math.log(x_frequency) - math.log(y_frequency) + D
weight = pmi * xy_frequency
# Only include edges with weight >= min_weight.
if weight >= min_weight:
movies_graph.add_edge(x, y, weight=weight)
# + [markdown] id="5MoixT6kASAz"
# Let's display the total number of nodes and edges in the graph.
# Note that the number of nodes is less than the total number of movies,
# since only the movies that have edges to other movies are added.
# + id="Xk4XQkXrASA0" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632674940942, "user_tz": -330, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="4399a044-c281-4318-92e3-895eaa0583b4"
print("Total number of graph nodes:", movies_graph.number_of_nodes())
print("Total number of graph edges:", movies_graph.number_of_edges())
# + [markdown] id="HN8lkNKkASA1"
# Let's display the average node degree (number of neighbours) in the graph.
# + id="rvOhJ429ASA1" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632674941332, "user_tz": -330, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="6ac43860-d216-4632-81e1-472cb6cc7235"
degrees = []
for node in movies_graph.nodes:
degrees.append(movies_graph.degree[node])
print("Average node degree:", round(sum(degrees) / len(degrees), 2))
# + [markdown] id="IENzc-f1ASA2"
# ### Step 3: Create vocabulary and a mapping from tokens to integer indices
#
# The vocabulary is the nodes (movie IDs) in the graph.
# + id="TQBpdDNaASA3"
vocabulary = ["NA"] + list(movies_graph.nodes)
vocabulary_lookup = {token: idx for idx, token in enumerate(vocabulary)}
# + [markdown] id="Bhi2QC_GASA3"
# ## Implement the biased random walk
#
# A random walk starts from a given node, and randomly picks a neighbour node to move to.
# If the edges are weighted, the neighbour is selected *probabilistically* with
# respect to weights of the edges between the current node and its neighbours.
# This procedure is repeated for `num_steps` to generate a sequence of *related* nodes.
#
# The [*biased* random walk](https://en.wikipedia.org/wiki/Biased_random_walk_on_a_graph) balances between **breadth-first sampling**
# (where only local neighbours are visited) and **depth-first sampling**
# (where distant neighbours are visited) by introducing the following two parameters:
#
# 1. **Return parameter** (`p`): Controls the likelihood of immediately revisiting
# a node in the walk. Setting it to a high value encourages moderate exploration,
# while setting it to a low value would keep the walk local.
# 2. **In-out parameter** (`q`): Allows the search to differentiate
# between *inward* and *outward* nodes. Setting it to a high value biases the
# random walk towards local nodes, while setting it to a low value biases the walk
# to visit nodes which are further away.
# + id="-Sr_nA_DASA4"
def next_step(graph, previous, current, p, q):
neighbors = list(graph.neighbors(current))
weights = []
# Adjust the weights of the edges to the neighbors with respect to p and q.
for neighbor in neighbors:
if neighbor == previous:
# Control the probability to return to the previous node.
weights.append(graph[current][neighbor]["weight"] / p)
elif graph.has_edge(neighbor, previous):
# The probability of visiting a local node.
weights.append(graph[current][neighbor]["weight"])
else:
# Control the probability to move forward.
weights.append(graph[current][neighbor]["weight"] / q)
# Compute the probabilities of visiting each neighbor.
weight_sum = sum(weights)
probabilities = [weight / weight_sum for weight in weights]
# Probabilistically select a neighbor to visit.
next = np.random.choice(neighbors, size=1, p=probabilities)[0]
return next
def random_walk(graph, num_walks, num_steps, p, q):
walks = []
nodes = list(graph.nodes())
# Perform multiple iterations of the random walk.
for walk_iteration in range(num_walks):
random.shuffle(nodes)
for node in tqdm(
nodes,
position=0,
leave=True,
desc=f"Random walks iteration {walk_iteration + 1} of {num_walks}",
):
# Start the walk with a random node from the graph.
walk = [node]
# Randomly walk for num_steps.
while len(walk) < num_steps:
current = walk[-1]
previous = walk[-2] if len(walk) > 1 else None
# Compute the next node to visit.
next = next_step(graph, previous, current, p, q)
walk.append(next)
# Replace node ids (movie ids) in the walk with token ids.
walk = [vocabulary_lookup[token] for token in walk]
# Add the walk to the generated sequence.
walks.append(walk)
return walks
# + [markdown] id="hx4DEvq4ASA5"
# ## Generate training data using the biased random walk
#
# You can explore different configurations of `p` and `q` to different results of
# related movies.
# + id="AlTVVRgpASA6" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632674985523, "user_tz": -330, "elapsed": 40457, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="0ff0f920-b0f0-4002-85dc-ccb7d66043a6"
# Random walk return parameter.
p = 1
# Random walk in-out parameter.
q = 1
# Number of iterations of random walks.
num_walks = 5
# Number of steps of each random walk.
num_steps = 10
walks = random_walk(movies_graph, num_walks, num_steps, p, q)
print("Number of walks generated:", len(walks))
# + [markdown] id="hEaRU0FeASA8"
# ## Generate positive and negative examples
#
# To train a skip-gram model, we use the generated walks to create positive and
# negative training examples. Each example includes the following features:
#
# 1. `target`: A movie in a walk sequence.
# 2. `context`: Another movie in a walk sequence.
# 3. `weight`: How many times these two movies occured in walk sequences.
# 4. `label`: The label is 1 if these two movies are samples from the walk sequences,
# otherwise (i.e., if randomly sampled) the label is 0.
# + [markdown] id="_dbZhfLXASA9"
# ### Generate examples
# + id="BlYw5qOJASA_" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632675002516, "user_tz": -330, "elapsed": 17029, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="3ec5510c-b76d-4fa4-8149-e498fe0fd5ee"
def generate_examples(sequences, window_size, num_negative_samples, vocabulary_size):
example_weights = defaultdict(int)
# Iterate over all sequences (walks).
for sequence in tqdm(
sequences,
position=0,
leave=True,
desc=f"Generating postive and negative examples",
):
# Generate positive and negative skip-gram pairs for a sequence (walk).
pairs, labels = keras.preprocessing.sequence.skipgrams(
sequence,
vocabulary_size=vocabulary_size,
window_size=window_size,
negative_samples=num_negative_samples,
)
for idx in range(len(pairs)):
pair = pairs[idx]
label = labels[idx]
target, context = min(pair[0], pair[1]), max(pair[0], pair[1])
if target == context:
continue
entry = (target, context, label)
example_weights[entry] += 1
targets, contexts, labels, weights = [], [], [], []
for entry in example_weights:
weight = example_weights[entry]
target, context, label = entry
targets.append(target)
contexts.append(context)
labels.append(label)
weights.append(weight)
return np.array(targets), np.array(contexts), np.array(labels), np.array(weights)
num_negative_samples = 4
targets, contexts, labels, weights = generate_examples(
sequences=walks,
window_size=num_steps,
num_negative_samples=num_negative_samples,
vocabulary_size=len(vocabulary),
)
# + [markdown] id="o8IE6-sXASBA"
# Let's display the shapes of the outputs
# + id="lWTdap4PASBB" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632675002517, "user_tz": -330, "elapsed": 30, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="04021d44-250c-45f2-8741-4ea9cf92b1c9"
print(f"Targets shape: {targets.shape}")
print(f"Contexts shape: {contexts.shape}")
print(f"Labels shape: {labels.shape}")
print(f"Weights shape: {weights.shape}")
# + [markdown] id="nJnAso-BASBC"
# ### Convert the data into `tf.data.Dataset` objects
# + id="OQt0G2TYASBD"
batch_size = 1024
def create_dataset(targets, contexts, labels, weights, batch_size):
inputs = {
"target": targets,
"context": contexts,
}
dataset = tf.data.Dataset.from_tensor_slices((inputs, labels, weights))
dataset = dataset.shuffle(buffer_size=batch_size * 2)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(tf.data.AUTOTUNE)
return dataset
dataset = create_dataset(
targets=targets,
contexts=contexts,
labels=labels,
weights=weights,
batch_size=batch_size,
)
# + [markdown] id="9zAf2zxPASBE"
# ## Train the skip-gram model
#
# Our skip-gram is a simple binary classification model that works as follows:
#
# 1. An embedding is looked up for the `target` movie.
# 2. An embedding is looked up for the `context` movie.
# 3. The dot product is computed between these two embeddings.
# 4. The result (after a sigmoid activation) is compared to the label.
# 5. A binary crossentropy loss is used.
# + id="H9y4WII4ASBE"
learning_rate = 0.001
embedding_dim = 50
num_epochs = 10
# + [markdown] id="tGxbuOX8ASBF"
# ### Implement the model
# + id="5fqvCesPASBG"
def create_model(vocabulary_size, embedding_dim):
inputs = {
"target": layers.Input(name="target", shape=(), dtype="int32"),
"context": layers.Input(name="context", shape=(), dtype="int32"),
}
# Initialize item embeddings.
embed_item = layers.Embedding(
input_dim=vocabulary_size,
output_dim=embedding_dim,
embeddings_initializer="he_normal",
embeddings_regularizer=keras.regularizers.l2(1e-6),
name="item_embeddings",
)
# Lookup embeddings for target.
target_embeddings = embed_item(inputs["target"])
# Lookup embeddings for context.
context_embeddings = embed_item(inputs["context"])
# Compute dot similarity between target and context embeddings.
logits = layers.Dot(axes=1, normalize=False, name="dot_similarity")(
[target_embeddings, context_embeddings]
)
# Create the model.
model = keras.Model(inputs=inputs, outputs=logits)
return model
# + [markdown] id="H9bFiYZRASBH"
# ### Train the model
# + [markdown] id="9jCmIqMjASBH"
# We instantiate the model and compile it.
# + id="0IGjmFDpASBH"
model = create_model(len(vocabulary), embedding_dim)
model.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
)
# + [markdown] id="cIE7WW5eASBI"
# Let's plot the model.
# + id="ReS9WBU7ASBI" colab={"base_uri": "https://localhost:8080/", "height": 312} executionInfo={"status": "ok", "timestamp": 1632675004582, "user_tz": -330, "elapsed": 1376, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="01c7fd83-144f-47b9-ae3f-b577ea9582a3"
keras.utils.plot_model(
model, show_shapes=True, show_dtype=True, show_layer_names=True,
)
# + [markdown] id="3sKUOOMfASBI"
# Now we train the model on the `dataset`.
# + id="2u5QM26OASBJ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632675059964, "user_tz": -330, "elapsed": 55400, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="0abf3af4-fd52-423b-f997-dc86ee2d5278"
history = model.fit(dataset, epochs=num_epochs)
# + [markdown] id="Y5rWBMq-ASBJ"
# Finally we plot the learning history.
# + id="vo8rIuhhASBK" colab={"base_uri": "https://localhost:8080/", "height": 279} executionInfo={"status": "ok", "timestamp": 1632675060680, "user_tz": -330, "elapsed": 754, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="e56a9075-06ef-4da1-ef25-dc345038d7b5"
plt.plot(history.history["loss"])
plt.ylabel("loss")
plt.xlabel("epoch")
plt.show()
# + [markdown] id="f9x5qeZWASBK"
# ## Analyze the learnt embeddings.
# + id="vHjLPXhxASBK" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632675060682, "user_tz": -330, "elapsed": 30, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="2eb2f71c-6ab3-4c9c-a83d-aa5b314c7c1e"
movie_embeddings = model.get_layer("item_embeddings").get_weights()[0]
print("Embeddings shape:", movie_embeddings.shape)
# + [markdown] id="BV0VxFKWASBL"
# ### Find related movies
#
# Define a list with some movies called `query_movies`.
# + id="c7WpjT8lASBL"
query_movies = [
"Matrix, The (1999)",
"Star Wars: Episode IV - A New Hope (1977)",
"Lion King, The (1994)",
"Terminator 2: Judgment Day (1991)",
"Godfather, The (1972)",
]
# + [markdown] id="mIbZHTcSASBM"
# Get the embeddings of the movies in `query_movies`.
# + id="JxzENVjmASBM"
query_embeddings = []
for movie_title in query_movies:
movieId = get_movie_id_by_title(movie_title)
token_id = vocabulary_lookup[movieId]
movie_embedding = movie_embeddings[token_id]
query_embeddings.append(movie_embedding)
query_embeddings = np.array(query_embeddings)
# + [markdown] id="RrRNBTxaASBN"
# Compute the [consine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) between the embeddings of `query_movies`
# and all the other movies, then pick the top k for each.
# + id="GlfcTvC5ASBO"
similarities = tf.linalg.matmul(
tf.math.l2_normalize(query_embeddings),
tf.math.l2_normalize(movie_embeddings),
transpose_b=True,
)
_, indices = tf.math.top_k(similarities, k=5)
indices = indices.numpy().tolist()
# + [markdown] id="MViqDx34ASBP"
# Display the top related movies in `query_movies`.
# + id="gPhK7ZeVASBP" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1632675060691, "user_tz": -330, "elapsed": 32, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="5ca899b1-5000-417c-b564-a5087a39b2cd"
for idx, title in enumerate(query_movies):
print(title)
print("".rjust(len(title), "-"))
similar_tokens = indices[idx]
for token in similar_tokens:
similar_movieId = vocabulary[token]
similar_title = get_movie_title_by_id(similar_movieId)
print(f"- {similar_title}")
print()
# + [markdown] id="PxFtA08eASBQ"
# ### Visualize the embeddings using the Embedding Projector
# + id="snumVN70ASBQ"
import io
out_v = io.open("embeddings.tsv", "w", encoding="utf-8")
out_m = io.open("metadata.tsv", "w", encoding="utf-8")
for idx, movie_id in enumerate(vocabulary[1:]):
movie_title = list(movies[movies.movieId == movie_id].title)[0]
vector = movie_embeddings[idx]
out_v.write("\t".join([str(x) for x in vector]) + "\n")
out_m.write(movie_title + "\n")
out_v.close()
out_m.close()
# + [markdown] id="1AlttAcrASBQ"
# Download the `embeddings.tsv` and `metadata.tsv` to analyze the obtained embeddings
# in the [Embedding Projector](https://projector.tensorflow.org/).
| docs/T894941_Node2vec_on_ML_latest_in_Keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
from sklearn.linear_model import LogisticRegression
import json
import csv
from sklearn import metrics
from label_decoders import *
from prepareData import *
config = json.load(open('settings.json'))
# -
train_ohd,test_ohd,ind_list,ld,features = process_data(model_prefix='lr', feature_count=13)
# +
i = 0
for l in ld:
i = i + 1
for j in range(10):
X_1, X_2 = ind_list[j][1], ind_list[j][0]
y_1, y_2 = train_ohd.iloc[X_1]['Response'], train_ohd.iloc[X_2]['Response']
lr = LogisticRegression(random_state=1)
lr.fit(train_ohd.iloc[X_1][features],l(y_1))
train_ohd.iloc[X_2]['lr%s' % (i)] = lr.predict_proba(train_ohd.iloc[X_2][features]).T[1]
train_ohd.to_csv(config['train_lr'],index=0)
y = train_ohd['Response']
i = 0
for l in ld:
i = i + 1
###1
lr = LogisticRegression(random_state=1)
lr.fit(train_ohd[features],l(y)), i
test_ohd['lr%s' % (i)] = lr.predict_proba(test_ohd[features]).T[1]
test_ohd.to_csv(config['test_lr'],index=0)
# -
train_ohd.head(5)
test_ohd.head(5)
| notebooks/LogisticRegression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Procedural programming in python
#
# ## Topics
#
# * Review of procedural Python
# * Exceptions & asserts
# * Tests
# * Testing tests
# * Units of testing
# * Using nosetests
#
# <hr>
# <hr>
# ## Flow control
#
# <img src="https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/lnpls008.gif">Flow control figure</img>
#
# Flow control refers how to programs do loops, conditional execution, and order of functional operations.
# <hr>
# ### Functions
#
# For loops let you repeat some code for every item in a list. Functions are similar in that they run the same lines of code for new values of some variable. They are different in that functions are not limited to looping over items.
#
# Functions are a critical part of writing easy to read, reusable code.
#
# Create a function like:
# ```
# def function_name (parameters):
# """
# docstring
# """
# function expressions
# return [variable]
# ```
#
# _Note:_ Sometimes I use the word argument in place of parameter.
#
# ### Parameters have three different types:
#
# | type | behavior |
# |------|----------|
# | required | positional, must be present or error, e.g. `my_func(first_name, last_name)` |
# | keyword | position independent, e.g. `my_func(first_name, last_name)` can be called `my_func(first_name='Dave', last_name='Beck')` or `my_func(last_name='Beck', first_name='Dave')` |
# | default | keyword params that default to a value if not provided |
#
# Functions can contain any code that you put anywhere else including:
# * if...elif...else
# * for...else
# * while
# * other function calls
# <HR>
# ## From nothing to something
#
# ### Task: Compute the pairwise Pearson correlation between rows in a dataframe.
#
# Let's say we have three molecules (A, B, C) with three measurements each (v1, v2, v3). So for each molecule we have a vector of measurements:
#
# $$X=\begin{bmatrix}
# X_{v_{1}} \\
# X_{v_{2}} \\
# X_{v_{3}} \\
# \end{bmatrix} $$
#
# Where X is a molecule and the components are the values for each of the measurements. These make up the rows in our matrix.
#
# Often, we want to compare molecules to determine how similar or different they are. One measure is the Pearson correlation.
#
# Pearson correlation: <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/01d103c10e6d4f477953a9b48c69d19a954d978a"/>
#
# Expressed graphically, when you plot the paired measurements for two samples (in this case molecules) against each other you can see positively correlated, no correlation, and negatively correlated. Eg.
# <img src="http://www.statisticshowto.com/wp-content/uploads/2012/10/pearson-2-small.png"/>
#
#
# Simple input dataframe (_note_ when you are writing code it is always a good idea to have a simple test case where you can readily compute by hand or know the output):
#
# | index | v1 | v2 | v3 |
# |-------|----|----|----|
# | A | -1 | 0 | 1 |
# | B | 1 | 0 | -1 |
# | C | .5 | 0 | .5 |
#
# * If the above is a dataframe what shape and size is the output?
# 3 x 3 matrix
# * Whare are some unique features of the output?
# Square, symmetric, unit diagonal
# For our test case, what will the output be?
# | | A | B | C |
# |---|---|---|---|
# | A | 1 | -1 | 0 |
# | B | -1 | 1 | 0 |
# | C | 0 | 0 | 1 |
# ### Let's sketch the idea...
#
#
#
# nested for loop...
#
# ```
# for some row in dataframe rows
# for some row compare to other rows
# computer pearson between some row and other row
# save pearson someplace
#
# return
# ```
# +
def pairwise_correlation(df):
# intitialize empty matrix
# for iterows outer loop
# initialize empty matrix
# for iterows inner loop
# append to second empty matrix outer to inner corr
# append second matrix to first matrix
return first_matrix
def pairwise_correlation(df):
# intitialize empty matrix
# for i in range(length(df.rows))
# initialize empty matrix
# for j in range(...):
# matrix[i][j] = corr
# append second matrix to first matrix
return first_matrix
# -
# ## In class exercise
# ### 20-30 minutes
# #### Objectives:
# 1. Write code using functions to compute the pairwise Pearson correlation between rows in a pandas dataframe. You will have to use ``for`` and possibly ``if``.
# 2. Use a cell to test each function with an input that yields an expected output. Think about the shape and values of the outputs.
# 3. Put the code in a ``.py`` file in the directory with the Jupyter notebook, import and run!
#
#
# #### To help you get started...
# To create the sample dataframe:
# ```
# df = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])
# ```
#
# To loop over rows in a dataframe, check out (Google is your friend):
# ```
# DataFrame.iterrows
# ```
#
# For a row, to compute correlation to another list, series, vector, use:
# ```
# my_row.corr(other_row)
# ```
import pandas as pd
df = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])
df
# +
def row_i_to_row_j(row_i, row_j):
"""docstring goes here"""
return row_i.corr(row_j)
def row_i_to_all_rows(df, index_i, row_i, metric):
"""docstring goes here"""
for index_j, row_j in df.iterrows():
# use special features of the matrix
if index_j < index_i:
continue
elif index_i == index_j:
metric.loc[index_i, index_j] = 1
else:
metric.loc[index_i, index_j] = row_i_to_row_j(row_i, row_j)
metric.loc[index_j, index_i] = metric.loc[index_i, index_j]
return
def pairwise_correlation(df):
"""docstring goes here"""
metric = pd.DataFrame()
for index_i, row_i in df.iterrows():
row_i_to_all_rows(df, index_i, row_i, metric)
return metric
# -
pairwise_correlation(df)
# <hr>
# ## How do we know it is working?
#
#
# #### Use the test case!
# Our three row example is a useful tool for checking that our code is working. We can write some tests that compare the output of our functions to our expectations.
#
# E.g. The diagonals should be 1, and corr(A, B) = -1, ...
#
# #### But first, let's talk ``assert`` and ``raise``
#
# We've already briefly been exposed to assert in this code:
# ```
# if os.path.exists(filename):
# pass
# else:
# req = requests.get(url)
# # if the download failed, next line will raise an error
# assert req.status_code == 200
# with open(filename, 'wb') as f:
# f.write(req.content)
# ```
#
# What is the assert doing there?
#
# Let's play with ``assert``. What should the following asserts do?
# ```
# assert True == False, "You assert wrongly, sir!"
# assert 'Dave' in instructors
# assert function_that_returns_True_or_False(parameters)
# ```
#
# So when an assert statement is true, the code keeps executing and when it is false, it ``raises`` an exception (also known as an error).
#
# We've all probably seen lots of exception. E.g.
#
# ```
# def some_function(parameter):
# return
#
# some_function()
# ```
#
# ```
# some_dict = { }
# print(some_dict['invalid key'])
# ```
#
# ```
# 'fourty' + 2
# ```
#
# Like C++ and other languages, Python let's you ``raise`` your own exception. You can do it with ``raise`` (surprise!). Exceptions are special objects and you can create your own type of exceptions. For now, we are going to look at the simplest ``Exception``.
#
# We create an ``Exception`` object by calling the generator:
# ```
# Exception()
# ```
#
# This isn't very helpful. We really want to supply a description. The Exception object takes any number of strings. One good form if you are using the generic exception object is:
# ```
# Exception('Short description', 'Long description')
# ```
#
type(df)
def df_check(df):
if isinstance(df, pd.DataFrame):
print('is a dataframe')
else:
raise Exception('Bad type', 'Not a dataframe')
return
df_check(df)
df_check(3)
3+'4'
# Creating an exception object isn't useful alone, however. We need to send it down the software stack to the Python interpreter so that it can handle the exception condition. We do this with ``raise``.
#
# ```
# raise Exception("An error has occurred.")
# ```
#
# Now you can create your own error messages like a pro!
# #### DETOUR!
#
# There are lots of types of exceptions beyond the generic class ``Exception``. You can use them in your own code if they make sense. E.g.
# ```
# import math
# my_variable = math.inf
# if my_variable == math.inf:
# raise ValueError('my_variable cannot be infinity')
# ```
#
# <p>List of Standard Exceptions −</p>
# <table class="table table-bordered">
# <tr>
# <th><b>EXCEPTION NAME</b></th>
# <th><b>DESCRIPTION</b></th>
# </tr>
# <tr>
# <td>Exception</td>
# <td>Base class for all exceptions</td>
# </tr>
# <tr>
# <td>StopIteration</td>
# <td>Raised when the next() method of an iterator does not point to any object.</td>
# </tr>
# <tr>
# <td>SystemExit</td>
# <td>Raised by the sys.exit() function.</td>
# </tr>
# <tr>
# <td>StandardError</td>
# <td>Base class for all built-in exceptions except StopIteration and SystemExit.</td>
# </tr>
# <tr>
# <td>ArithmeticError</td>
# <td>Base class for all errors that occur for numeric calculation.</td>
# </tr>
# <tr>
# <td>OverflowError</td>
# <td>Raised when a calculation exceeds maximum limit for a numeric type.</td>
# </tr>
# <tr>
# <td>FloatingPointError</td>
# <td>Raised when a floating point calculation fails.</td>
# </tr>
# <tr>
# <td>ZeroDivisonError</td>
# <td>Raised when division or modulo by zero takes place for all numeric types.</td>
# </tr>
# <tr>
# <td>AssertionError</td>
# <td>Raised in case of failure of the Assert statement.</td>
# </tr>
# <tr>
# <td>AttributeError</td>
# <td>Raised in case of failure of attribute reference or assignment.</td>
# </tr>
# <tr>
# <td>EOFError</td>
# <td>Raised when there is no input from either the raw_input() or input() function and the end of file is reached.</td>
# </tr>
# <tr>
# <td>ImportError</td>
# <td>Raised when an import statement fails.</td>
# </tr>
# <tr>
# <td>KeyboardInterrupt</td>
# <td>Raised when the user interrupts program execution, usually by pressing Ctrl+c.</td>
# </tr>
# <tr>
# <td>LookupError</td>
# <td>Base class for all lookup errors.</td>
# </tr>
# <tr>
# <td><p>IndexError</p><p>KeyError</p></td>
# <td><p>Raised when an index is not found in a sequence.</p><p>Raised when the specified key is not found in the dictionary.</p></td>
# </tr>
# <tr>
# <td>NameError</td>
# <td>Raised when an identifier is not found in the local or global namespace.</td>
# </tr>
# <tr>
# <td><p>UnboundLocalError</p><p>EnvironmentError</p></td>
# <td><p>Raised when trying to access a local variable in a function or method but no value has been assigned to it.</p><p>Base class for all exceptions that occur outside the Python environment.</p></td>
# </tr>
# <tr>
# <td><p>IOError</p><p>IOError</p></td>
# <td><p>Raised when an input/ output operation fails, such as the print statement or the open() function when trying to open a file that does not exist.</p><p>Raised for operating system-related errors.</p></td>
# </tr>
# <tr>
# <td><p>SyntaxError</p><p>IndentationError</p></td>
# <td><p>Raised when there is an error in Python syntax.</p><p>Raised when indentation is not specified properly.</p></td>
# </tr>
# <tr>
# <td>SystemError</td>
# <td>Raised when the interpreter finds an internal problem, but when this error is encountered the Python interpreter does not exit.</td>
# </tr>
# <tr>
# <td>SystemExit</td>
# <td>Raised when Python interpreter is quit by using the sys.exit() function. If not handled in the code, causes the interpreter to exit.</td>
# </tr>
# <tr>
# <td>Raised when Python interpreter is quit by using the sys.exit() function. If not handled in the code, causes the interpreter to exit.</td>
# <td>Raised when an operation or function is attempted that is invalid for the specified data type.</td>
# </tr>
# <tr>
# <td>ValueError</td>
# <td>Raised when the built-in function for a data type has the valid type of arguments, but the arguments have invalid values specified.</td>
# </tr>
# <tr>
# <td>RuntimeError</td>
# <td>Raised when a generated error does not fall into any category.</td>
# </tr>
# <tr>
# <td>NotImplementedError</td>
# <td>Raised when an abstract method that needs to be implemented in an inherited class is not actually implemented.</td>
# </tr>
# </table>
# #### Put it all together... ``assert`` and ``raise``
#
# Breaking assert down, it is really just an if test followed by a raise. So the code below:
# ```
# assert <some_test>, <message>
# ```
# is equivalent to a short hand for:
# ```
# if not <some_test>:
# raise AssertionError(<message>)
# ```
#
# Prove it? OK.
#
# ```
# instructors = ['<NAME>', 'Jim']
# assert 'Dave' in instructors, "Dave isn't in the instructor list!"
# ```
#
# ```
# instructors = ['<NAME>', 'Jim']
# assert 'Dave' in instructors, "Dave isn't in the instructor list!"
# if not 'Dave' in instructors:
# raise AssertionError("Dave isn't in the instructor list!")
# ```
#
# #### Questions?
#
# ### All of this was in preparation for some testing...
#
# Can we write some quick tests that make sure our code is doing what we think it is? Something of the form:
#
# ```
# corr_matrix = pairwise_row_correlations(my_sample_dataframe)
# assert corr_matrix looks like what we expect, "The function is broken!"
# ```
#
# What are the smallest units of code that we can test?
#
# What asserts can we make for these pieces of code?
#
# #### Remember, in computers, 1.0 does not necessarily = 1
#
# Put the following in an empty cell:
# ```
# .99999999999999999999
# ```
#
# How can we test for two floating point numbers being (almost) equal? Pro tip: [Google!](http://lmgtfy.com/?q=python+assert+almost+equal)
#
assert pairwise_correlation(df) = [[],[],[]], "function is broken"
# +
def row_i_to_row_j(row_i, row_j):
"""docstring goes here"""
return row_i.corr(row_j)
def row_i_to_all_rows(df, index_i, row_i, metric):
"""docstring goes here"""
for index_j, row_j in df.iterrows():
# use special features of the matrix
if index_j < index_i:
continue
elif index_i == index_j:
metric.loc[index_i, index_j] = 1
else:
metric.loc[index_i, index_j] = row_i_to_row_j(row_i, row_j)
metric.loc[index_j, index_i] = metric.loc[index_i, index_j]
return
def pairwise_correlation(df):
"""docstring goes here"""
metric = pd.DataFrame()
for index_i, row_i in df.iterrows():
row_i_to_all_rows(df, index_i, row_i, metric)
return metric
# -
# Write a test using asserts of ``row_i_to_row_j``
import pandas as pd
def pairwise_correlation(df):
"""docstring goes here"""
if not isinstance(df, pd.DataFrame):
raise TypeError("passed object is not a pandas dataframe")
else:
pass
return None
pairwise_correlation(43)
def test_pairwise_correlation():
"""docstring goes here"""
try:
pairwise_correlation(42)
except Exception as e:
assert isinstance(e, TypeError)
"""continue the unit tests"""
return
test_pairwise_correlation()
# +
import pandas as pd
df = pd.DataFrame()
# -
# ### Put our tests in a `.py` file
#
# Copy the code for our `test_` prefixed functions to a file, e.g. `tests.py`
#
# Run the tests with `nosetest`.
#
# You can even do it in your notebook (not recommended):
#
# ```
# # %%bash
# nosetests
# ```
# ## From nothing to something wrap up
#
# Here we created some functions from just a short description of our needs.
# * Before we wrote any code, we walked through the flow control and decided on the parts that were necessary.
# * Before we wrote any code, we created a simple test example with simple predictable output.
# * We wrote some code according to our specifications.
# * We wrote tests using ``assert`` to verify our code against the simple test example.
# * Tests go into a `.py` file and are have function names prefixed with `test_`
# * We run tests with `nosetests` which can be `pip install nosetests` installed.
#
# ### QUESTIONS?
| Wi20_content/SEDS/L7.Testing.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// + [markdown] slideshow={"slide_type": "slide"}
// # Jan 31 Lecture: Java JVM, numeric data type, selection (switch), and class
//
// + [markdown] slideshow={"slide_type": "slide"}
// I determine to migrate all my materials to Jupyter notebooks. They will help when you
// - miss a class
// - fall asleep in a class
// - can not catch up with the class
// - want to run demo codes
//
// It will save my time on
// - uploading my course materials to Canvas (because I will not),
// - make my slides.
// + [markdown] slideshow={"slide_type": "slide"}
// ## More Resource
//
// - [Textbook](http://www.primeuniversity.edu.bd/160517/vc/eBook/download/IntroductiontoJava.pdf) and [Textbook Solutions](https://github.com/jsquared21/Intro-to-Java-Programming)(not official)
// - All the source codes of the [examples](http://www.cs.armstrong.edu/liang/intro10e/examplesource.html) in our textbook.
// - [My Jupyter notebooks](https://github.com/XiangHuang-LMC/ijava-binder) for the lecture.
//
// 
// + [markdown] slideshow={"slide_type": "slide"}
// ## More about Java and our developing tools
// Now you need to write your own code (your first assignment), let's review how your code will be excuted on your machine.
//
// 
//
// The first step will be accomplished by **javac** command, the second will be accompliesh by **java** command, as we demostrated last time.
//
// - **javac** your_.java_file_name
// - **java** your_.class_file_name
//
// + [markdown] slideshow={"slide_type": "slide"}
// The process of writing your code:
//
// 
// + [markdown] slideshow={"slide_type": "slide"}
// Here I recommand [jGrasp](https://spider.eng.auburn.edu/user-cgi/grasp/grasp.pl?;dl=download_jgrasp.html), or you can
// find it in **AJ apps** on our virtual desktop. Install the one bundled with Java, or if you use MacOS, I guess you should have Java already. If you run into any trouble installing jGrasp or Java on your local machine, please let me know.
//
// 
// + [markdown] slideshow={"slide_type": "slide"}
// ## Numeric Data types
//
// 
// + [markdown] slideshow={"slide_type": "slide"}
// For Scanners you get the corresponding methods to read your numeric datatypes.
// 
// + [markdown] slideshow={"slide_type": "slide"}
// ## **switch** Statements
//
// Today we are going to learn a new type of statements: **switch** statements. A **switch** statement executes statements based on the value of a variable or an expression.
//
// 
//
// + [markdown] slideshow={"slide_type": "slide"}
// 
//
// + [markdown] slideshow={"slide_type": "slide"}
// 
// + [markdown] slideshow={"slide_type": "slide"}
// ### Code demo: switch
// + slideshow={"slide_type": "slide"}
int day=3;
switch (day){
case 1:
case 2:
case 3:
case 4:
case 5: System.out.println("Workday"); break;
case 0: case 6: System.out.println("Weekend");
}
// + [markdown] slideshow={"slide_type": "slide"}
// ### Code demo: [The Chinese Zodiac ](http://www.cs.armstrong.edu/liang/intro10e/html/ChineseZodiac.html)
// + [markdown] slideshow={"slide_type": "slide"}
// 
// + slideshow={"slide_type": "slide"}
import java.util.Scanner;
public class ChineseZodiac {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
System.out.print("Enter a year: ");
int year = input.nextInt();
switch (year % 12) {
case 0: System.out.println("monkey"); break;
case 1: System.out.println("rooster"); break;
case 2: System.out.println("dog"); break;
case 3: System.out.println("pig"); break;
case 4: System.out.println("rat"); break;
case 5: System.out.println("ox"); break;
case 6: System.out.println("tiger"); break;
case 7: System.out.println("rabbit"); break;
case 8: System.out.println("dragon"); break;
case 9: System.out.println("snake"); break;
case 10: System.out.println("horse"); break;
case 11: System.out.println("sheep"); break;
}
}
}
// + [markdown] slideshow={"slide_type": "slide"}
// ## Conditional Expressions
//
// + slideshow={"slide_type": "slide"}
int x=4;
int y;
if(x>0){
y=1;
}
else{
y=-1;
}
System.out.println("y is "+ y);
// + [markdown] slideshow={"slide_type": "slide"}
// The expression should be written as
// **boolean-expression ? expression 1; expression 2**
// + slideshow={"slide_type": "slide"}
x= 4;
y = (x>0)? 1 : -1;
System.out.println("y is "+ y);
// + [markdown] slideshow={"slide_type": "slide"}
// ### Excercise:
// Change the following condition expression using **if-else** statements:
//
// score = (x>10)? 3* scale : 4* scale;
//
// + slideshow={"slide_type": "slide"}
if(x>10){
score = 3*scale;
}
else{
score = 4* scale;
}
// + [markdown] slideshow={"slide_type": "slide"}
// Rewrite the following **if** statement using the conditional operator.
//
// ```java
// if (age >= 16){
// ticketPrice = 20;
// }
// else{
// ticketPrice = 10;
// }
// ```
// + slideshow={"slide_type": "slide"}
ticketPrice = (age>=16)? 20:10;
// + slideshow={"slide_type": "slide"}
int age =19;
if(age > 16)
{
System.out.println("Your price is 20.");
}
else{
System.out.println("Your price is 20.");
}
| CSC176/More about Java numeric data type, selection, and class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/AlessandroXavierOcasion/OOP-1-1/blob/main/Activity_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BU6WkSb7RBYo"
# Application 1
# 1. Create a Python program that displays the name of three students (Student 1, Student 2, Student 3) and their term grades
# 2. Create a class name Person and attributes - std1,std2,std3,prs,mid,fin
# 3. Compute the average of each term grade using Grade() method
# 4. Information about student's grades must be hidden from others
# + colab={"base_uri": "https://localhost:8080/"} id="9tc56azdRfde" outputId="64263cfb-40a1-4350-9a20-dded8adc7cbe"
class data:
def __init__(self,firstname,lastname,prelim,midterm,final):
self.firstname = firstname
self.lastname = lastname
self.prelim = prelim
self.midterm = midterm
self.final = final
def printname(self):
print(self.firstname, self.lastname)
class stud1(data):
def average(self):
return ((self.prelim + self.midterm + self.final)/3)
fname_input1 = str(input("Enter Your First Name: "))
lname_input1 = str(input("Enter Your Last Name: "))
prelims_input1 = float(input("Prelims:"))
midterms_input1 = float(input("Midterm:"))
finals_input1 = float(input("Finals:"))
student1 = stud1(fname_input1, lname_input1,prelims_input1, midterms_input1, finals_input1)
print("\n")
student1.printname()
print("Average:" , round(student1.average(),2), "\n")
class stud2(data):
def average(self):
return ((self.prelim + self.midterm + self.final)/3)
fname_input2 = str(input("Enter Your First Name: "))
lname_input2 = str(input("Enter Your Last Name: "))
prelims_input2 = float(input("Prelims:"))
midterms_input2 = float(input("Midterm:"))
finals_input2 = float(input("Finals:"))
student2 = stud2(fname_input2, lname_input2,prelims_input2, midterms_input2, finals_input2)
print("\n")
student2.printname()
print("Average:" , round(student2.average(),2), "\n")
class stud3(data):
def average(self):
return ((self.prelim + self.midterm + self.final)/3)
fname_input3 = str(input("Enter Your First Name: "))
lname_input3 = str(input("Enter Your Last Name: "))
prelims_input3 = float(input("Prelims:"))
midterms_input3 = float(input("Midterm:"))
finals_input3 = float(input("Finals:"))
student3 = stud3(fname_input3, lname_input3,prelims_input3, midterms_input3, finals_input3)
print("\n")
student3.printname()
print("Average:" , round(student3.average(),2), "\n")
| Activity_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import glob
import json
from indra_world.ontology import load_world_ontology
new_onto_url = 'https://raw.githubusercontent.com/WorldModelers/Ontologies/'\
'3.0/CompositionalOntology_metadata.yml'
old_onto_url = 'https://raw.githubusercontent.com/WorldModelers/Ontologies/'\
'2.3/CompositionalOntology_metadata.yml'
new_onto = load_world_ontology(new_onto_url)
new_onto.initialize()
old_onto = load_world_ontology(old_onto_url)
old_onto.initialize()
new_onto_terms = {new_onto.get_id(node) for node in new_onto.nodes()}
old_onto_terms = {old_onto.get_id(node) for node in old_onto.nodes()}
print(f'Number of old ontology terms: {len(old_onto_terms)}')
print(f'Number of new ontology terms: {len(new_onto_terms)}')
for term in sorted(new_onto_terms-old_onto_terms):
print(term)
# +
def get_grounding(concept):
if 'theme' not in concept:
return (concept['concept'], None, None, None)
parts = ['theme', 'theme_property', 'process', 'process_property']
return tuple(None if not concept[part] else concept[part] for part in parts)
def get_all_groundings(stmts):
groundings = []
for stmt in stmts:
for concept in [stmt['obj'], stmt['subj']]:
if 'theme' not in concept:
continue
groundings.append(get_grounding(concept))
return groundings
def get_grounding_configs(unique_gr):
grounding_configs = []
for gr in unique_gr:
parts = []
for val, part in zip(gr, ['theme', 'property', 'process', 'process_property']):
if val is not None:
parts.append(part)
grounding_configs.append('_'.join(parts))
return grounding_configs
def get_all_grounding_terms(unique_gr):
all_terms = set()
for gr in unique_gr:
for part in gr:
if part:
all_terms.add(part)
return all_terms
# -
# # ATA
fnames = glob.glob('ATA/*.json')
all_groundings_ata = []
for fname in fnames:
with open(fname, 'r') as fh:
jj = json.load(fh)
all_groundings_ata += get_all_groundings(jj['statements'])
print(f'All ATA statements: {int(len(all_groundings_ata)/2)}')
unique_gr_ata = set(all_groundings_ata)
print(f'Unique groundings in ATA models: {len(unique_gr_ata)}')
Counter(get_grounding_configs(unique_gr_ata))
all_terms_ata = get_all_grounding_terms(unique_gr_ata)
nterms_ata = len(all_terms_ata)
nterms_ata_old = len(all_terms_ata & old_onto_terms)
nterms_ata_new = len(all_terms_ata & new_onto_terms)
nterms_ata_new_only = len((all_terms_ata & new_onto_terms) - (all_terms_ata & old_onto_terms))
nterms_ata_custom = len((all_terms_ata - new_onto_terms) - old_onto_terms)
print(f'All terms: {nterms_ata}, '
f'Old terms: {nterms_ata_old}, '
f'New terms: {nterms_ata_new}, '
f'New only terms: {nterms_ata_new_only}, '
f'Custom terms: {nterms_ata_custom}, ')
for term in sorted((all_terms_ata & new_onto_terms) - (all_terms_ata & old_onto_terms)):
print(term)
for term in sorted((all_terms_ata - new_onto_terms) - old_onto_terms):
print(term)
# # NAF
# +
fnames = glob.glob('NAF/*.json')
unique_stmts_naf = {}
for fname in fnames:
if 'September Master' in fname:
print(fname)
with open(fname, 'r') as fh:
jj = json.load(fh)
for stmt in jj['statements']:
if ('theme' not in stmt['obj']) and stmt['obj']['concept'].startswith('wm/'):
continue
if ('theme' not in stmt['subj']) and stmt['subj']['concept'].startswith('wm/'):
continue
unique_stmts_naf[(get_grounding(stmt['obj']),
get_grounding(stmt['subj']))] = stmt
all_groundings_naf = []
for (sgr, ogr), stmt in unique_stmts_naf.items():
all_groundings_naf += [sgr, ogr]
# -
print(f'All NAF statements: {len(unique_stmts_naf)}')
unique_gr_naf = set(all_groundings_naf)
print(f'Unique groundings in NAF models: {len(unique_gr_naf)}')
Counter(get_grounding_configs(unique_gr_naf))
all_terms_naf = get_all_grounding_terms(unique_gr_naf)
nterms_naf = len(all_terms_naf)
nterms_naf_old = len(all_terms_naf & old_onto_terms)
nterms_naf_new = len(all_terms_naf & new_onto_terms)
nterms_naf_new_only = len((all_terms_naf & new_onto_terms) - (all_terms_naf & old_onto_terms))
nterms_naf_custom = len((all_terms_naf - new_onto_terms) - old_onto_terms)
print(f'All terms: {nterms_naf}, '
f'Old terms: {nterms_naf_old}, '
f'New terms: {nterms_naf_new}, '
f'New only terms: {nterms_naf_new_only}, '
f'Custom terms: {nterms_naf_custom}, ')
for term in sorted((all_terms_naf & new_onto_terms) - (all_terms_naf & old_onto_terms)):
print(term)
for term in sorted((all_terms_naf - new_onto_terms) - old_onto_terms):
print(term)
| scripts/oiad/august_embed_cags/embeds_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from simulation import *
from idea import dynamic_IDEA
EXPERIMENT_ID = 4
TRIAL_ID = 0
# -
# Ramię musi pokonać dwa rzędy przeszkód poruszające się w przeciwnych kierunkach, z wieloma dostępnymi szczelinami. Cel również porusza się ruchem jednostajnym. Ramię musi wielokrotnie zmieniać wykorzystywane szczeliny.
# + pycharm={"name": "#%%\n"}
# %%time
random_state(f"rng_states/experiment{EXPERIMENT_ID}.npy")
S = [1.] * 12
d = len(S)
x_min = -np.pi
x_max = np.pi
T = 12
rectangles = [((i - 0.25, 5), (i + 1.25, 7)) for i in range(-9, 10, 2)] + [((i - 0.25, 2), (i + 1.25, 3)) for i in range(-9, 10, 2)]
rectangle_vs = [(-0.4, 0)] * (len(rectangles) // 2) + [(0.4, 0)] * (len(rectangles) // 2)
target = (3., 8.5)
target_v = (-0.5, -0.1)
targets = target_with_velocity(target, target_v, T)
rectangle_lists = rectangles_with_velocity(rectangles, rectangle_vs, T)
objective = dynamic_inverse_kinematics_objectives(S, targets, rectangle_lists)
n_constraints = len(rectangles)
n = 600
alpha_inf = 0.8
n_immigrants = 100
eta_c = 3.
eta_m = 20.
p_c = 0.9
p_m = 0.05
num_iterations_init = 160
num_iterations = 120
population_hist, score_hist = dynamic_IDEA(objective, n_constraints, T, x_min, x_max, d, n, alpha_inf, eta_c, eta_m, p_c, p_m,
num_iterations=num_iterations, num_iterations_init=num_iterations_init, n_immigrants=n_immigrants, log_interval=20)
TRIAL_ID += 1
np.savez(f"/tmp/histories/experiment{EXPERIMENT_ID}_{TRIAL_ID}", populations=population_hist, scores=score_hist)
# + pycharm={"name": "#%%\n"}
feasible_populations = []
for populations, scores in zip(population_hist, score_hist):
mask = scores[-1][:, 1] == 0.
feasible_population = populations[-1][mask, :]
best = np.argsort(scores[-1, mask, 0])[:10]
feasible_populations.append(feasible_population[best])
draw_dynamic_solutions(feasible_populations, S, targets, rectangle_lists, np.arange(T),
nrows=4, ncols=3, figsize=(20, 25), xlim=(-5, 5), ylim=(-2, 10))
# + pycharm={"name": "#%%\n"}
infeasible_populations = []
for populations, scores in zip(population_hist, score_hist):
mask = scores[-1][:, 1] == 0.
infeasible_population = populations[-1][~mask, :]
infeasible_populations.append(infeasible_population[::sum(~mask) // 10])
draw_dynamic_solutions(infeasible_populations, S, targets, rectangle_lists, np.arange(T),
nrows=4, ncols=3, figsize=(20, 25), xlim=(-5, 5), ylim=(-2, 10))
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
# %%time
random_state(f"rng_states/experiment{EXPERIMENT_ID}.npy")
S = [1.] * 12
d = len(S)
x_min = -np.pi
x_max = np.pi
T = 12
rectangles = [((i - 0.25, 5), (i + 1.25, 7)) for i in range(-9, 10, 2)] + [((i - 0.25, 2), (i + 1.25, 3)) for i in range(-9, 10, 2)]
rectangle_vs = [(-0.4, 0)] * (len(rectangles) // 2) + [(0.4, 0)] * (len(rectangles) // 2)
target = (3., 8.5)
target_v = (-0.5, -0.1)
targets = target_with_velocity(target, target_v, T)
rectangle_lists = rectangles_with_velocity(rectangles, rectangle_vs, T)
objective = dynamic_inverse_kinematics_objectives(S, targets, rectangle_lists)
n_constraints = len(rectangles)
n = 600
alpha_inf = 0.8
n_immigrants = 100
eta_c = 3.
eta_m = 20.
p_c = 0.9
p_m = np.geomspace(0.02, 0.2, d)
num_iterations_init = 160
num_iterations = 120
population_hist, score_hist = dynamic_IDEA(objective, n_constraints, T, x_min, x_max, d, n, alpha_inf, eta_c, eta_m, p_c, p_m,
num_iterations=num_iterations, num_iterations_init=num_iterations_init, n_immigrants=n_immigrants, log_interval=20)
TRIAL_ID += 1
np.savez(f"/tmp/histories/experiment{EXPERIMENT_ID}_{TRIAL_ID}", populations=population_hist, scores=score_hist)
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
feasible_populations = []
for populations, scores in zip(population_hist, score_hist):
mask = scores[-1][:, 1] == 0.
feasible_population = populations[-1][mask, :]
best = np.argsort(scores[-1, mask, 0])[:10]
feasible_populations.append(feasible_population[best])
draw_dynamic_solutions(feasible_populations, S, targets, rectangle_lists, np.arange(T),
nrows=4, ncols=3, figsize=(20, 25), xlim=(-5, 5), ylim=(-2, 10))
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
infeasible_populations = []
for populations, scores in zip(population_hist, score_hist):
mask = scores[-1][:, 1] == 0.
infeasible_population = populations[-1][~mask, :]
infeasible_populations.append(infeasible_population[::sum(~mask) // 10])
draw_dynamic_solutions(infeasible_populations, S, targets, rectangle_lists, np.arange(T),
nrows=4, ncols=3, figsize=(20, 25), xlim=(-5, 5), ylim=(-2, 10))
# -
| Experiment4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#%%
import numpy as np
from sklearn import metrics
import numpy as np
from tqdm import tqdm
from typing import List
import random
from shared import bootstrap_auc
import matplotlib.pyplot as plt
import torch
from torch import nn, optim
# -
# start off by seeding random number generators:
RANDOM_SEED = 12345
random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
# +
# import data; choose feature space
from dataset_poetry import y_train, Xd_train, y_vali, Xd_vali
X_train = Xd_train["numeric"]
X_vali = Xd_vali["numeric"]
# +
#%%
from sklearn.linear_model import LogisticRegression
m = LogisticRegression(random_state=RANDOM_SEED, penalty="none", max_iter=2000)
m.fit(X_train, y_train)
print("skLearn-LR AUC: {:.3}".format(np.mean(bootstrap_auc(m, X_vali, y_vali))))
print("skLearn-LR Acc: {:.3}".format(m.score(X_vali, y_vali)))
# +
def nearly_eq(x, y, tolerance=1e-6):
return abs(x - y) < tolerance
#%%
(N, D) = X_train.shape
X = torch.from_numpy(X_train).float()
y = torch.from_numpy(y_train).long()
Xv = torch.from_numpy(X_vali).float()
yv = torch.from_numpy(y_vali).long()
def train(name: str, model, optimizer, objective, max_iter=2000):
train_losses = []
vali_losses = []
samples = []
for it in tqdm(range(max_iter)):
model.train()
# Perform one step of training:
optimizer.zero_grad()
loss = objective(model(X), y)
loss.backward()
optimizer.step()
# every 25 steps, sample validation performance.
if it % 25 == 0:
model.eval()
y_probs = model(X).detach().numpy()
vali_loss = objective(model(Xv), yv)
train_losses.append(loss.item())
vali_losses.append(vali_loss.item())
samples.append(it)
model.eval()
# Predict on the Validation Set
y_probs = model(Xv).detach().numpy()
y_pred = (y_probs[:, 1] > 0.5).ravel()
print(
"Validation. Acc: {:.3} Auc: {:.3}".format(
metrics.accuracy_score(yv, y_pred),
metrics.roc_auc_score(yv, y_probs[:, 1].ravel()),
)
)
plt.plot(samples, train_losses, label="Training Loss", alpha=0.7)
plt.plot(samples, vali_losses, label="Validation Loss", alpha=0.7)
plt.title("{} Training Loss".format(name))
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.legend()
plt.tight_layout()
plt.savefig("graphs/p13-{}-loss.png".format(name))
plt.show()
return model
# +
# Actually train a LogisticRegression; just one 'Linear' layer.
n_classes = len([0, 1])
model = nn.Linear(D, n_classes, bias=True)
objective = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
train("logistic-regression", model, optimizer, objective)
# -
def make_neural_net(D: int, hidden: List[int], num_classes: int = 2, dropout=0.2):
""" Using nn.Sequential; construct a list of layers that make this a neural-network. """
layers = []
for i, dim in enumerate(hidden):
if i == 0:
layers.append(nn.Linear(D, dim))
layers.append(nn.Dropout(p=DROPOUT))
layers.append(nn.ReLU())
else:
layers.append(nn.Linear(hidden[i - 1], dim))
layers.append(nn.Dropout(p=DROPOUT))
layers.append(nn.ReLU())
layers.append(nn.Linear(hidden[-1], num_classes))
return nn.Sequential(*layers)
# ### Lab TODO:
# ### 0. In this version; consider commenting out other calls to train/fit!
# ### 1. Investigate LEARNING_RATE, DROPOUT, MOMENTUM, REGULARIZATION.
# LEARNING_RATE = 0.1
# DROPOUT = 0.1 # randomly turn off this fraction of the neural-net while training.
# MOMENTUM = 0.9
# REGULARIZATION = 0.01
#
# achieved: Validation. Acc: 0.927 Auc: 0.973
# and converged in the fewest iterations
# learning rate of 1 was too large it definitely was missing the global minimum
#
# ### 2. What do you think these variables change?
# Learning rate -
# The amount that the weights are updated during training is referred to as the step size or the “learning rate.”
# The learning rate controls how quickly the model is adapted to the problem. Smaller learning rates require more training epochs given the smaller changes made to the weights each update, whereas larger learning rates result in rapid changes and require fewer training epochs.
#
# Dropout - A single model can be used to simulate having a large number of different network architectures by randomly dropping out nodes during training. This is called dropout and offers a very computationally cheap and remarkably effective regularization method to reduce overfitting and improve generalization error in deep neural networks of all kinds.
#
# Momentum - replaces the current gradient with m (“momentum”), which is an aggregate of gradients. This aggregate is the exponential moving average of current and past gradients (i.e. up to time t).
# If the momentum term is large then the learning rate should be kept smaller. A large value of momentum also means that the convergence will happen fast. But if both the momentum and learning rate are kept at large values, then you might skip the minimum with a huge step.
#
# regularization - neural networks are easily overfit. regularization makes it so neural networks generalize better
# ### 3. Consider a shallower, wider network.
# ### - Changing [16,16] to something else... might require revisiting step 1.
#
# Comparison
# LEARNING_RATE = 0.1
# DROPOUT = 0.1 # randomly turn off this fraction of the neural-net while training.
# MOMENTUM = 0.9
# REGULARIZATION = 0.01
# [16,16]
# achieved: Validation. Acc: 0.927 Auc: 0.973
#
#
# shallower network -
# [1,1] - accuracy decreased to 0.683
# [2,2] - accuracy decreased to .683 auc decreased a little bit
# [5,5] - didn't make much of a difference
# [10,10] - didn't make much of a difference
#
# wider -
# [20,20] - didn't make much of a difference
# [100,100] didn't make much of a difference
#
# +
LEARNING_RATE = 0.1
DROPOUT = 0.1 # randomly turn off this fraction of the neural-net while training.
MOMENTUM = 0.9
REGULARIZATION = 0.01 # try 0.1, 0.01, etc.
# two hidden layers, 16 nodes, each.
model = make_neural_net(D, [5, 5], dropout=DROPOUT)
objective = nn.CrossEntropyLoss()
optimizer = optim.SGD(
model.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM, weight_decay=REGULARIZATION
)
train("neural_net", model, optimizer, objective, max_iter=1000)
# -
| p13-lr-torch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.base import clone
from rfpimp import *
import warnings
warnings.filterwarnings('ignore')
def mkdf(columns, importances):
I = pd.DataFrame(data={'Feature':columns, 'Importance':importances})
I = I.set_index('Feature')
I = I.sort_values('Importance', ascending=False)
return I
# -
df = pd.read_csv("data/rent.csv")
features = ['bathrooms','bedrooms','longitude','latitude',
'price']
df = df[features].copy()
df['price'] = np.log(df['price'])
df.head(5)
# # Built-in scikit importances
# +
base_rf = RandomForestRegressor(n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
oob_score=True)
X_train, y_train = df.drop('price',axis=1), df['price']
rf = clone(base_rf)
rf.fit(X_train, y_train)
print(rf.oob_score_)
I = mkdf(X_train.columns,rf.feature_importances_)
viz = plot_importances(I, title="Feature importance via avg drop in variance (sklearn)")
viz.save('../article/images/regr_dflt.svg')
viz.close()
X_train2 = X_train.copy()
X_train2['random'] = np.random.random(size=len(X_train))
rf2 = clone(base_rf)
rf2.fit(X_train2, y_train)
I = mkdf(X_train2.columns,rf2.feature_importances_)
viz = plot_importances(I, title="Feature importance via avg drop in variance (sklearn)")
viz.save('../article/images/regr_dflt_random.svg')
viz.save('../article/images/regr_dflt_random.pdf')
viz
# -
# ## Examine cost of dropping columns
# +
from sklearn.base import clone
# max_features=n_features for regressors but sqrt for classifiers
def dropcol_importances(rf, X_train, y_train):
rf_ = clone(rf)
rf_.random_state = 999
rf_.fit(X_train, y_train)
baseline = rf_.oob_score_
imp = []
for col in X_train.columns:
X = X_train.drop(col, axis=1)
rf_ = clone(rf)
rf_.random_state = 999
rf_.fit(X, y_train)
o = rf_.oob_score_
imp.append(baseline - o)
return np.array(imp)
# +
base_rf = RandomForestRegressor(n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
oob_score=True,
random_state = 999) # same boostrapping samples
X_train, y_train = df.drop('price',axis=1), df['price']
rf = clone(base_rf)
imp = dropcol_importances(rf, X_train, y_train)
I = mkdf(X_train.columns,imp)
viz = plot_importances(I)
viz.save('../article/images/regr_dropcol.svg')
X_train, y_train = df.drop('price',axis=1), df['price']
X_train2 = X_train.copy()
X_train2['random'] = np.random.random(size=len(X_train))
rf2 = clone(base_rf)
imp = dropcol_importances(rf2, X_train2, y_train)
I = mkdf(X_train2.columns,imp)
viz = plot_importances(I)
viz.save('../article/images/regr_dropcol_random.svg')
viz
# -
# # Roll your own OOB R^2 score
# +
from sklearn.ensemble.forest import _generate_unsampled_indices
from sklearn.metrics import r2_score
import warnings
# TODO: add arg for subsample size to compute oob score
def oob_regression_r2_score(rf, X_train, y_train):
X = X_train.values
y = y_train.values
n_samples = len(X)
n_classes = len(np.unique(y))
predictions = np.zeros(n_samples)
n_predictions = np.zeros(n_samples)
for tree in rf.estimators_:
unsampled_indices = _generate_unsampled_indices(tree.random_state, n_samples)
tree_preds = tree.predict(X[unsampled_indices, :])
predictions[unsampled_indices] += tree_preds
n_predictions[unsampled_indices] += 1
if (n_predictions == 0).any():
warnings.warn("Too few trees; some variables do not have OOB scores.")
n_predictions[n_predictions == 0] = 1
predictions /= n_predictions
oob_score = r2_score(y, predictions)
return oob_score
# -
# # Permutation importance
def permutation_importances(rf, X_train, y_train, metric):
"""
Return importances from pre-fit rf; this function
works for regressors and classifiers. The metric
arg is function that measures accuracy or R^2 or
similar; it should use the out-of-bag samples from
training set.
"""
baseline = metric(rf, X_train, y_train)
imp = []
for col in X_train.columns:
save = X_train[col].copy()
X_train[col] = np.random.permutation(X_train[col])
m = metric(rf, X_train, y_train)
X_train[col] = save
imp.append(baseline - m)
return np.array(imp)
# +
X_train, y_train = df.drop('price',axis=1), df['price']
base_rf = RandomForestRegressor(n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
max_features=len(X_train.columns),
oob_score=True,
random_state = 999) # same boostrapping samples
rf = clone(base_rf)
rf.fit(X_train, y_train)
imp = permutation_importances(rf, X_train, y_train,
oob_regression_r2_score)
I = mkdf(X_train.columns,imp)
viz = plot_importances(I)
viz.save('../article/images/regr_permute.svg')
X_train2 = X_train.copy()
X_train2['random'] = np.random.random(size=len(X_train))
rf2 = clone(base_rf)
rf2.fit(X_train2, y_train)
imp = permutation_importances(rf2, X_train2, y_train,
oob_regression_r2_score)
I = mkdf(X_train2.columns,imp)
viz = plot_importances(I)
viz.save('../article/images/regr_permute_random.svg')
viz
| notebooks/permutation-importances-regressor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <img align="right" style="max-width: 200px; height: auto" src="cfds_logo.png">
#
# ### Lab 14 - "Long Short-Term Memory (LSTM) Neural Networks"
#
# Chartered Financial Data Scientist (CFDS), Autumn Term 2020
# In this lab, we will learn how to apply another type of deep learning technique referred to as **Long Short-Term Memory (LSTM)** neural networks. Unlike standard feedforward neural networks, LSTMs encompass feedback connections that make it a "general-purpose computer". LSTMs are designed to process not only a single data point (such as images), but also entire sequences of data, e.g., such as speech, video, or financial time series.
#
#
# We will again use the functionality of the `PyTorch` library to implement and train an LSTM based neural network. The network will be trained on the historic daily (in-sample) returns of an exemplary financial stock. Once the network is trained, we will use the learned model to predict future (out-of-sample) returns. Finally, we will convert the predictions into tradable signals and the backtest the signals accordingly.
#
# The figure below illustrates a high-level view on the machine learning process we aim to establish in this lab.
# <img align="center" style="max-width: 700px" src="process.png">
# As always, pls. don't hesitate to ask all your questions either during the lab, post them in our NextThought lab discussion forum (https://financial-data-science.nextthought.io), or send us an email (using our fds.ai email addresses).
# ### Lab Objectives:
# After today's lab, you should be able to:
#
# > 1. Understand the basic concepts, intuitions and major building blocks of **Long Short-Term Memory (LSTM) Neural Networks**.
# > 2. Know how to **implement and to train an LSTM** to learn a model of financial time-series data.
# > 3. Understand how to apply such a learned model to **predict future data points of a time-series**.
# > 4. Know how to **interpret the model's prediction results** and backtest the predictions.
# Before we start let's watch a motivational video:
from IPython.display import YouTubeVideo
# Nvidia GTC 2016: "The Deep Learning Revolution" Opening in Keynote"
# YouTubeVideo('Dy0hJWltsyE', width=800, height=400)
# ### Setup of the Jupyter Notebook Environment
# Suppress potential warnings:
import warnings
warnings.filterwarnings('ignore')
# Similar to the previous labs, we need to import a couple of Python libraries that allow for data analysis and data visualization. We will mostly use the `PyTorch`, `Numpy`, `Sklearn`, `Matplotlib`, `Seaborn`, `BT` and a few utility libraries throughout the lab:
# import python data science and utility libraries
import os, sys, itertools, urllib, io
import datetime as dt
import pandas as pd
import pandas_datareader as dr
import yfinance as yf
yf.pdr_override() # needed to access Yahoo! Finance
import numpy as np
# Import the backtesting library:
import bt as bt # library to backtest trading signals
# Import the Python machine / deep learning libraries:
# pytorch libraries
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils import data
from torch.utils.data import dataloader
# Import Python plotting libraries and set general plotting parameters:
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams['figure.dpi']= 150
# Enable notebook matplotlib inline plotting:
# %matplotlib inline
# Create a notebook folder structure to store the data as well as the trained neural network models:
# +
# create data sub-directory inside the current directory
data_directory = './data'
if not os.path.exists(data_directory): os.makedirs(data_directory)
# create models sub-directory inside the current directory
models_directory = './models'
if not os.path.exists(models_directory): os.makedirs(models_directory)
# -
# Set random seed value to obtain reproducable results:
# init deterministic seed
seed_value = 1234
np.random.seed(seed_value) # set numpy seed
torch.manual_seed(seed_value); # set pytorch seed CPU
# Enable GPU computing by setting the `device` flag and init a `CUDA` seed:
# +
# set cpu or gpu enabled device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu').type
# init deterministic GPU seed
torch.cuda.manual_seed(seed_value)
# log type of device enabled
now = dt.datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] notebook with \'{}\' computation enabled'.format(str(now), str(device)))
# -
# ### 1. Dataset Download and Data Assessment
# In this section of the lab notebook we will download and access historic daily stock market data ranging from **01/01/2000** to **31/12/2017** of the **"International Business Machines" (IBM)** corporation (ticker symbol: "IBM"). Thereby, we will utilize the `datareader` of the `Pandas` library that provides the ability to interface the `Yahoo` finance API.
#
# To start the data download, let's specify the start and end date of the stock market data download:
start_date = dt.datetime(2000, 1, 1)
end_date = dt.datetime(2017, 12, 31)
# Download the daily "International Business Machines" (IBM) stock market data:
stock_data = dr.data.DataReader('IBM', data_source='yahoo', start=start_date, end=end_date)
# Inspect the top 10 records of the retreived IBM stock market data:
stock_data.head(10)
# Let's also evaluate the data quality of the download by creating a set of summary statistics of the retrieved data:
stock_data.describe()
# Visually inspect the daily closing prices of the "International Business Machines" (IBM) stock market data:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
# plot daily closing prices
ax.plot(stock_data.index, stock_data['Close'], color='#9b59b6')
# rotate x-tick labels
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set x-axis labels and limits
ax.set_xlabel('[time]', fontsize=10)
ax.set_xlim([pd.to_datetime('01-01-2000'), pd.to_datetime('31-12-2017')])
# set y-axis labels and limits
ax.set_ylabel('[stock closing price]', fontsize=10)
ax.set_ylim(20, 220)
# set plot title
plt.title('International Business Machines (IBM) - Daily Historical Stock Closing Prices', fontsize=10);
# -
# Save the obtained and validated stock market data to the local data directory:
# save retrieved data to local data directory
stock_data.to_csv(os.path.join(data_directory, 'ibm_data_2010_2017_daily.csv'), sep=';', encoding='utf-8')
# ### 2. Data Pre-Processing
# In this section, we will obtain daily returns of the retrieved daily closing prices. Also, we will convert the time-series of daily returns into a set of sequences $s$ of $n$ time steps respectively. The created sequences will then be used to learn a model using an Long Short-Term Memory neural network.
# #### 2.1 Weekend and Holiday Padding
# Let's always forward propagate the last valid available price information observation to the next available valid price information using the Panda's `reindex()` function. This in order to also obtain market price information of weekend's and holidays:
# fill weekends and holidays
stock_data = stock_data.reindex(index=pd.date_range(stock_data.index.min(), stock_data.index.max()), method='ffill')
# Inspect the padded stock market data of the "International Business Machines" (IBM) stock:
stock_data.head(10)
# Inspect the number of records obtained after the data padding:
stock_data.shape
# #### 2.2 Daily Return Calculation
# Determine the daily returns of the "International Business Machines" (IBM) daily closing prices using the Panda's `pct_change()` function:
stock_data['RETURN'] = stock_data['Close'].pct_change()
# Inspect the daily returns of the closing prices:
stock_data['RETURN']
# Remove the first row corresponding to a `nan` daily return of the stock data dataframe:
stock_data = stock_data.iloc[1:]
# Inspect the daily returns of the adjusted closing prices for the removed row:
stock_data.head(10)
# Visually inspect the obtained daily returns:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
# plot daily closing prices
ax.plot(stock_data.index, stock_data['RETURN'], color='#9b59b6')
# rotate x-tick labels
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set axis labels and limits
ax.set_xlabel('[time]', fontsize=10)
ax.set_xlim([pd.to_datetime('01-01-2000'), pd.to_datetime('31-12-2017')])
ax.set_ylabel('[daily stock returns]', fontsize=10)
# set plot title
plt.title('International Business Machines (IBM) - Daily Historical Stock Closing Prices', fontsize=10);
# -
# #### 2.3 Conduct Train-Test Split for Neural Network Training
# To understand and evaluate the performance of any trained **supervised machine learning** model, it is good practice, to divide the dataset into a **training set** or **"in-sample"** data (the fraction of data records solely used for training purposes) and a **evaluation set** or **"out-of-sample"** data (the fraction of data records solely used for evaluation purposes). Pls. note, the **evaluation set** will never be shown to the model as part of the training process.
# <img align="center" style="max-width: 500px" src="traintestsplit.png">
# We set the split fraction of training sequences to **90%** of the total number of obtained sequences:
split_fraction = 0.9
split_row = int(stock_data.shape[0] * split_fraction)
# Split obtained returns into training ("in-sample") returns $r^{i}_{train}$ and validation ("out-of-sample") returns $r^{i}_{valid}$:
train_stock_data = stock_data.iloc[:split_row]
valid_stock_data = stock_data.iloc[split_row:]
# Visually inspect the obtained train and validation stock returns:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
# plot daily stock returns
ax.plot(stock_data.index[:split_row,], train_stock_data['RETURN'], c='C0', label='train')
ax.plot(stock_data.index[split_row:,], valid_stock_data['RETURN'], c='C1', label='valid')
# rotate x-tick labels
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set axis labels and limits
ax.set_xlabel('[time]', fontsize=10)
ax.set_xlim([pd.to_datetime('01-01-2000'), pd.to_datetime('31-12-2017')])
ax.set_ylabel('[daily stock returns]', fontsize=10)
# set plot legend
plt.legend(loc="lower right", numpoints=1, fancybox=True)
# set plot title
plt.title('International Business Machines (IBM) - Daily Historical Stock Returns', fontsize=10);
# -
# Determine count (shape) of daily return train sequences $r^{i}_{train}$:
train_stock_data.shape
# Determine count (shape) of daily return train sequences $r^{i}_{valid}$:
valid_stock_data.shape
# #### 2.4 Transform Time-Series Into Sequences
# In the following, we determine the number of return time-steps $n$ each individual sequence $s^{i}$ should be comprised of. Each sequence is thereby determined by the number of predictor (return) time-steps $t$ and the prediction (return) horizon $h = t+1$.
# <img align="center" style="max-width: 500px" src="timesteps.png">
# In this example, we will set the number of predictor (return) time-steps to $t$=4. This indicates that the input sequence of each sample is a vector of 4 sequential daily stock returns (pls. note, the choice of $t$=4 is arbitrary and should be selected through experimentation). Furthermore, we set the predicted return horizon to 1, which specifies that we aim to forecast a single future time-step.
time_steps = 4 # number of predictor timesteps
horizon = 1 # number of timesteps to be predicted
sequence_length = time_steps + horizon # determine sequence length
# Next, we extract the sequences $s^i$ of 5 time-steps.
#
# Thereby, we will step-wise iterate ("rolling window") over the entire sequence of daily stock returns $r_i$. In each iteration step, we extract an individual sequence of stock returns consisting of $n$ time-steps. The extracted individual sequences of daily closing prices are then collected in a single data frame.
# <img align="center" style="max-width: 900px" src="sequences.png">
# Next, we will determine the number of available **rolling training ("in-sample") target sequences**. Remember that each sequence exhibits consists of **a sequence length of 5 time-steps** as defined by the variable `sequence_length` above:
no_train_sequences = ((train_stock_data.shape[0] // sequence_length) - 1) * sequence_length
# Let's now print the number of available rolling training sequences comprised of 5 daily return time-steps each:
no_train_sequences
# Initialize a 2D tensor (a 2D array) that contains the individual sequences from which we aim from to learn as part of the training procedure. The training data corresponds to a **2D tensor of size** `no_train_sequences`$\times$`sequence_length`. The tensor contains the daily returns of the IBM market price data:
# +
# init the train sequence target daily returns data 2D matrix
train_stock_sequence_data = np.zeros([no_train_sequences, sequence_length])
# init the train sequence target daily dates data 2D matrix
train_stock_sequence_data_date = np.empty([no_train_sequences, sequence_length], dtype='datetime64[s]')
# -
# Let's now fill the created 2D tensor with the rolling sequences of the IBM's daily returns:
# iterate over the distinct daily returns of the training dataset
for i in range(0, no_train_sequences):
# determine current training sequence returns
return_sequence = np.array(train_stock_data['RETURN'][i:i + sequence_length].T)
# determine current training sequence dates
date_sequence = np.array(train_stock_data.index[i:i + sequence_length].T)
# fill 2D matrix of train stock target sequences with current sequence
train_stock_sequence_data[i, :] = return_sequence
# fill 2D matrix of train stock target sequences with current sequence
train_stock_sequence_data_date[i, :] = date_sequence
# Ultimately, let's inspect the final shape of the filled 2D-array of daily return **training** sequences:
train_stock_sequence_data.shape
# Also, inspect the **first 10 sequences** of the initialized and filled 2D-array of daily return training sequences:
train_stock_sequence_data[0:10,]
# Also, inspect the **last 10 sequences** of the initialized and filled 2D-array of daily return training sequences:
train_stock_sequence_data[-10:-1,]
# Next, we will determine the number of available **rolling validation ("out-of-sample") target sequences**. Remember again that each sequence exhibits consists of a a total number of 5 time-steps as defined by the variable `sequence_length` above:
no_valid_sequences = ((valid_stock_data.shape[0] // sequence_length) - 1) * sequence_length
# Let's now print the number of available rolling validation sequences comprised of 5 daily return time-steps each:
no_valid_sequences
# Initialize a 2D tensor that contains the individual sequences which we use to validate the trained models as part of the validation procedure. The training data corresponds to a **2D tensor of size** `no_valid_sequences`$\times$`sequence_length`. The tensor contains the daily returns of the IBM market price data:
# +
# init the valid sequence target daily returns data 2D matrix
valid_stock_sequence_data = np.zeros([no_valid_sequences, sequence_length])
# init the valid sequence target daily dates data 2D matrix
valid_stock_sequence_data_date = np.empty([no_valid_sequences, sequence_length], dtype='datetime64[s]')
# -
# Extract individual validation sequences of length $5$ from the obtained daily returns:
# iterate over the distinct daily returns of the validation dataset
for i in range(0, no_valid_sequences):
# determine current validation sequence returns
return_sequence = np.array(valid_stock_data['RETURN'][i:i + sequence_length].T)
# determine current validation sequence dates
date_sequence = np.array(valid_stock_data.index[i:i + sequence_length].T)
# fill 2D matrix of valid stock target sequences with current sequence
valid_stock_sequence_data[i, :] = return_sequence
# fill 2D matrix of valid stock target sequences with current sequence
valid_stock_sequence_data_date[i, :] = date_sequence
# Ultimately, let's inspect the final shape of the filled 2D-array of daily return **validation** sequences:
valid_stock_sequence_data.shape
# Also, inspect the **first 10 sequences** of the initialized and filled 2D-array of daily return validation sequences:
valid_stock_sequence_data[0:10,]
# Also, inspect the **last 10 sequences** of the initialized and filled 2D-array of daily return validation sequences:
valid_stock_sequence_data[-10:-1,]
# #### 2.4 Conduct Input-Target Split for Neural Network Training
# Before we continue the data pre-processing, let's briefly revisit how RNN's or, more specifically, LSTM based NN's can be trained to predict the next element of an input sequence. The cartoon below is derived from the "Next Word Predictor" Example that we also discussed in the course. For each **input return** $r_{i}$ of the input return training sequence $s^i$ the LSTM is supposed to learn to **predict the return** of the next time-step $\hat{r}_{i+1}$. In order to make such a future return $\hat{r}_{i+1}$ prediction the LSTM uses it's learned hidden state information $h_{i}$ as well as the current return $r_{i}$ as an input.
#
# For each time-step the predicted return $\hat{r}_{i+1}$ is then compared to the **target return** $r_{i+1}$. The discrepancy between both is collected as a loss $\mathcal{L}$ for the distinct timesteps. The accumulation of the individual time-step losses is accumulated as the total loss of a sequence $\mathcal{L}_{All}$.
# <img align="center" style="max-width: 600px" src="training.png">
# Seperate each training sequence $s^{i}$ into time-steps of input returns denoted by $s^{i}_{train, input}=\{r_{t-n-1}, ..., r_{t-1}, r_{t}\}$ and the time-step of the to be predicted target return denoted by $s^{i}_{train, target}=r_{t+1}$.
# <img align="center" style="max-width: 700px" src="sequencesplit.png">
# In addition, we convert both the input returns as well as the target returns to PyTorch tensors:
train_sequences_input = torch.from_numpy(train_stock_sequence_data[:, :-1]).float()
train_sequences_target = torch.from_numpy(train_stock_sequence_data[:, 1:]).float()
# Seperate each validation sequence $s^{i}$ into time-steps of input returns denoted by $s^{i}_{valid, input}=\{r_{t-n-1}, ..., r_{t-1}, r_{t}\}$ and the time-step of the to be predicted target return denoted by $s^{i}_{valid, target}=r_{t+1}$. In addition, we convert both the input returns as well as the target returns to PyTorch tensors:
valid_sequences_input = torch.from_numpy(valid_stock_sequence_data[:, :-1]).float()
valid_sequences_target = torch.from_numpy(valid_stock_sequence_data[:, 1:]).float()
# To train an LSTM neural network, we tailor the dataset class provided by the PyTorch library. We overwrite the individual functions of the dataset class. So that our dataset will supply the neural network with the individual training sequences $s^{i}_{train, input}$ and corresponding targets $s^{i}_{train, target}$ throughout the training process:
# define daily returns dataset
class DailyReturnsDataset(data.Dataset):
# define the class constructor
def __init__(self, sequences, targets):
# init sequences and corresponding targets
self.sequences = sequences
self.targets = targets
# define the length method
def __len__(self):
# returns the number of samples
return len(self.targets)
# define the get item method
def __getitem__(self, index):
# determine single sequence and corresponding target
sequence = self.sequences[index, :]
target = self.targets[index, :]
# return sequences and target
return sequence, target
# Once we have specified the daily returns dataset class we instantiate it using the new daily closing dataset using the prepared training input sequences $s^{i}_{train, input}$ and corresponding targets $s^{i}_{train, target}$:
train_dataset = DailyReturnsDataset(train_sequences_input, train_sequences_target)
# Let's see how it works by getting the 42th sequence and its corresponding targets:
train_dataset.__getitem__(42)
# ### 3. Neural Network Implementation and Loss Function
# In this section, we will implement the LSTM architecture of the to be learned time series model. Furthermore, we will specify the loss-function, learning-rate and optimization technique used in the network training.
# #### 3.1. Implementation of the LSTM Architecture
# In this section, we will implement the architecture of the LSTM neural network utilized to predict future returns of financial time series data, e.g. as in this example, the future returns of a given stock. The neural network, which we name **'LSTMNet'** consists in total of three layers. The first two layers correspond to LSTM cells, while the third layer corresponds to a fully-connected linear layer.
# <img align="center" style="max-width: 400px" src="lstmnet.png">
# The general LSTM cell structure as well as the formal definition of its individual gate functions are shown in the following (not considering the bias of each layer for simplicity):
# <img align="center" style="max-width: 700px" src="lstmcell.png">
# (Source: https://pytorch.org/docs/stable/nn.html)
# Each LSTM layer consits of a LSTM cell exhibiting a hidden state of 51 dimensions. The third linear squeezes the 51 hidden state dimensions of the second LSTM cell into a single output dimension. The single output signal of the linear layer refers to the return of the next time-step predicted by the neural network. Please note, that the choice of the implemented architecture and network hyperparameters is arbitrary and should in a real-world scenario be evaluated and selected thoroughly through experimentation.
# implement the LSTMNet network architecture
class LSTMNet(nn.Module):
# define class constructor
def __init__(self):
super(LSTMNet, self).__init__()
# define the lstm nn architecture
# the first lstm layer
self.lstm1 = nn.LSTMCell(1, 51)
# the second lstm layer
self.lstm2 = nn.LSTMCell(51, 51)
# the final linear layer
self.linear = nn.Linear(51, 1)
# define network forward pass
def forward(self, input):
# init predictions
predictions = []
# init the lstm hidden states
h_t1 = torch.zeros(input.size(0), 51, dtype=torch.float).to(device)
h_t2 = torch.zeros(input.size(0), 51, dtype=torch.float).to(device)
# init the lstm cell states
c_t1 = torch.zeros(input.size(0), 51, dtype=torch.float).to(device)
c_t2 = torch.zeros(input.size(0), 51, dtype=torch.float).to(device)
# iterate over distinct time steps
for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
# propagate through the first lstm layer
h_t1, c_t1 = self.lstm1(input_t, (h_t1, c_t1))
# propagate through the second lstm layer
h_t2, c_t2 = self.lstm2(h_t1, (h_t2, c_t2))
# propagate through the final linear layer
prediction = self.linear(h_t2)
# collect predictions
predictions += [prediction]
# stack predictions
predictions = torch.stack(predictions, 1).squeeze(2)
# return predictions
return predictions
# Now, that we have implemented our first LSTM neural network we are ready to instantiate a model to be trained:
lstm_model = LSTMNet().to(device)
# Once the model is initialized, we can visualize the model structure and review the implemented network architecture by execution of the following cell:
# print the initialized architectures
print('[LOG] LSTMNet architecture:\n\n{}\n'.format(lstm_model))
# Looks like intended? Great! Finally, let's have a look into the number of model parameters that we aim to train in the next steps of the notebook:
# +
# init the number of model parameters
num_params = 0
# iterate over the distinct parameters
for param in lstm_model.parameters():
# collect number of parameters
num_params += param.numel()
# print the number of model paramters
print('[LOG] Number of to be trained LSTMNet model parameters: {}.'.format(num_params))
# -
# Ok, our "simple" `LSTMNet` model already encompasses an impressive number **32'284 model parameters** to be trained.
# #### 3.2. Definition of the Training Loss Function and Learning Rate
# We are now good to train the network. However, prior to starting the training, we need to define an appropriate loss function. Remember, we aim to train our model to learn a set of model parameters $\theta$ that minimize the prediction error of the true return $r_{t+1}$ and the by the model predicted return $\hat{r}_{t+1}$ at a given time-step $t+1$ of sequence $s^{i}$. In other words, for a given sequence of historic returns we aim to learn a function $f_\theta$ that is capable to predicts the return of the next timestep as faithfully as possible, as expressed by:
# <center> $\hat{r}_{t+1} = f_\theta(r_{t}, r_{t-1}, ..., r_{t-n})$. </center>
# Thereby, the training objective is to learn a set of optimal model parameters $\theta^*$ that optimize $\min_{\theta} \|r_{t+1} - f_\theta(r_{t}, r_{t-1}, ..., r_{t-n})\|$ over all time-steps $t$ contained in the set of training sequences $s_{train}$. To achieve this optimization objective, one typically minimizes a loss function $\mathcal{L_{\theta}}$ while training the neural network. In this lab we use the **'Mean Squared Error (MSE)'** loss, as denoted by:
# <center> $\mathcal{L}^{MSE}_{\theta} (r_{t+1}, \hat{r}_{t+1}) = \frac{1}{N} \sum_{i=1}^N \| r_{t+1} - \hat{r}_{t+1}\|^{2}$, </center>
loss_function = nn.MSELoss().to(device)
# Throughout the training process, the PyTorch library will automatically calculate the loss magnitude, compute the gradient, and update the parameters $\theta$ of the LSTM neural network. We will use the **"Adaptive Moment Estimation Optimization" (ADAM)** technique to optimize the network parameters. Furthermore, we specify a constant learning rate of $l = 1e-06$. For each training step, the optimizer will update the model parameters $\theta$ values according to the degree of prediction error (the MSE loss).
learning_rate = 1e-06 # set constant learning rate
optimizer = optim.Adam(lstm_model.parameters(), lr=learning_rate) # define optimization technique
# Now that we have successfully implemented and defined the three LSTM building blocks let's take some time to review the `LSTMNet` model definition as well as the `MSE loss` function. Please, read the above code and comments carefully and don't hesitate to let us know any questions you might have.
# ### 4. Training the Neural Network Model
# In this section, we will train the LSTM neural network model (as implemented in the section above) using the prepared dataset of daily return sequences. Therefore, we will have a detailed look into the distinct training steps and monitor the training progress.
# #### 4.1. Preparing the Network Training
# Let's now start to learn a model by training the NN for **5 epochs** in mini-batches of the size of **128 sequences** per batch. This implies that the whole dataset will be fed to the network **5 times** in chunks of 128 sequences yielding to **32 mini-batches** (4'068 training sequences / 128 sequences per mini-batch) per epoch:
# specify the training parameters
num_epochs = 200 # number of training epochs
mini_batch_size = 128 # size of the mini-batches
# Furthermore, lets specify and instantiate a corresponding PyTorch data loader that feeds the image tensors to our neural network:
train_dataloader = dataloader.DataLoader(train_dataset, batch_size=mini_batch_size, shuffle=True)
# #### 4.2. Running the Network Training
# Finally, we start training the model. The training procedure of each mini-batch is performed as follows:
#
# >1. do a forward pass through the LSTMNet network,
# >2. compute the mean-squared prediction error $\mathcal{L}^{MSE}_{\theta} (r_{t+1}, \hat{r}_{t+1}) = \frac{1}{N} \sum_{i=1}^N \| r_{t+1} - \hat{r}_{t+1}\|^{2}$,
# >3. do a backward pass through the LSTMNet network, and
# >4. update the parameters of the network $f_\theta(\cdot)$.
#
# To ensure learning while training the LSTM model we will monitor whether the loss decreases with progressing training. Therefore, we obtain and evaluate the mean prediction performance over all mini-batches in each training epoch. Based on this evaluation we can conclude on the training progress and whether the loss is converging (indicating that the model might not improve any further).
#
# The following elements of the network training code below should be given particular attention:
#
# >- `loss.backward()` computes the gradients based on the magnitude of the mean-squared-loss,
# >- `optimizer.step()` updates the network parameters based on the gradient.
# +
# init collection of training epoch losses
train_epoch_losses = []
# set the model in training mode
lstm_model.train()
# init the best loss
best_loss = 100.00
# iterate over epochs
for epoch in range(0, num_epochs):
# init collection of mini-batch losses
train_mini_batch_losses = []
# iterate over mini-batches
for sequence_batch, target_batch in train_dataloader:
# print(sequence_batch)
# push mini-batch data to computation device
sequence_batch = sequence_batch.to(device)
target_batch = target_batch.to(device)
# predict sequence output
prediction_batch = lstm_model(sequence_batch)
# calculate batch loss
batch_loss = loss_function(prediction_batch, target_batch)
# run backward gradient calculation
batch_loss.backward()
# update network parameters
optimizer.step()
# collect mini-batch loss
train_mini_batch_losses.append(batch_loss.data.item())
# determine mean min-batch loss of epoch
train_epoch_loss = np.mean(train_mini_batch_losses)
# print epoch loss
now = dt.datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] epoch: {} train-loss: {}'.format(str(now), str(epoch), str(train_epoch_loss)))
# determine mean min-batch loss of epoch
train_epoch_losses.append(train_epoch_loss)
# print epoch and save models
if epoch % 10 == 0 and epoch > 0:
# case: new best model trained
if train_epoch_loss < best_loss:
# store new best model
model_name = 'best_lstm_model_{}.pth'.format(str(epoch))
torch.save(lstm_model.state_dict(), os.path.join(models_directory, model_name))
# update best loss
best_loss = train_epoch_loss
# print epoch loss
now = dt.datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] epoch: {} new best train-loss: {} found'.format(str(now), str(epoch), str(train_epoch_loss)))
# -
# Upon successful training let's visualize and inspect the training loss per epoch:
# +
# prepare plot
fig = plt.figure()
ax = fig.add_subplot(111)
# add grid
ax.grid(linestyle='dotted')
# plot the training epochs vs. the epochs' prediction error
ax.plot(np.array(range(1, len(train_epoch_losses)+1)), train_epoch_losses, label='epoch loss (blue)')
# add axis legends
ax.set_xlabel("[training epoch $e_i$]", fontsize=10)
ax.set_ylabel("[Prediction Error $\mathcal{L}^{MSE}$]", fontsize=10)
# set plot legend
plt.legend(loc="upper right", numpoints=1, fancybox=True)
# add plot title
plt.title('Training Epochs $e_i$ vs. Prediction Error $L^{MSE}$', fontsize=10);
# -
# Ok, fantastic. The training error is nicely going down. We could train the network a couple more epochs until the error converges. But let's stay with the 200 training epochs for now and continue with evaluating our trained model.
# ### 5. Evaluating the Neural Network Model
# In this section, we will conduct a visual comparison of the predicted daily returns to the actual ('true') daily returns. The comparison will encompass the daily returns of the in-sample time period as well as the returns of the out-of-sample time period.
# #### 5.1. In-Sample Evaluation of the Trained Neural Network Model
# Before starting our evaluation, let's load the best performing model or an already pre-trained model (as done below). Remember, that we stored a snapshot of the model after each training epoch to our local model directory. We will now load one of the (hopefully well-performing) snapshots saved.
# +
# init the pre-trained model architecture
lstm_model_pretrained = LSTMNet().to(device)
# set the pre-trained model name we aim to load
lstm_model_name_pretrained = 'https://raw.githubusercontent.com/financial-data-science/CFDS-Notebooks/master/lab_14/models/best_lstm_model_30000.pth'
# read pretrained model from the remote location
lstm_model_bytes = urllib.request.urlopen(lstm_model_name_pretrained)
# load tensor from io.BytesIO object
lstm_model_buffer = io.BytesIO(lstm_model_bytes.read())
# load the pre-trained model paramaters
lstm_model_pretrained.load_state_dict(torch.load(lstm_model_buffer, map_location=lambda storage, loc: storage))
# -
# Let's inspect if the model was loaded successfully:
# set model in evaluation mode
lstm_model_pretrained.eval()
# Use the pre-trained model to determine the daily return predictions of the **in-sample** sequence population:
# don't calculate gradients
with torch.no_grad():
# predict sequence output
train_predictions = lstm_model_pretrained(train_sequences_input.to(device))
# collect prediction batch results
train_predictions_list = train_predictions.cpu().detach().numpy()[:, -1].tolist()
# collect target batch results
train_targets_list = train_sequences_target.numpy()[:, -1].tolist()
# Plot the pre-trained `LSTMNet` daily **in-sample** predictions vs. the target ("ground-truth") daily returns:
# +
# plot the prediction results
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(train_stock_sequence_data_date[:, -1], train_targets_list, color='C1', label='groundtruth (green)')
ax.plot(train_stock_sequence_data_date[:, -1], train_predictions_list, color='C0', label='predictions (blue)')
# set y-axis limits
ax.set_xlim(train_stock_sequence_data_date[:, -1].min(), train_stock_sequence_data_date[:, -1].max())
# set plot legend
plt.legend(loc="lower right", numpoints=1, fancybox=True)
# set plot title
plt.title('LSTM NN In-Sample Prediction vs. Ground-Truth Market Prices', fontsize=10)
# set axis labels
plt.xlabel('[time]', fontsize=8)
plt.ylabel('[market price]', fontsize=8)
# set axis ticks fontsize
plt.xticks(fontsize=8)
plt.yticks(fontsize=8);
# -
# #### 5.2. Out-of-Sample Evaluation of the Trained Neural Network Model
# Use the pre-trained model to determine the daily return predictions of the **out-of-sample** sequence population:
# don't calculate gradients
with torch.no_grad():
# predict sequence output
valid_predictions = lstm_model_pretrained(valid_sequences_input.to(device))
# collect prediction batch results
valid_predictions_list = valid_predictions.cpu().detach().numpy()[:, -1].tolist()
# collect target batch results
valid_targets_list = valid_sequences_target.numpy()[:, -1].tolist()
# Plot the pre-trained `LSTMNet` daily **out-of-sample** predictions vs. the target ("ground-truth") daily returns:
# +
# plot the prediction results
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(valid_stock_sequence_data_date[:, -1], valid_targets_list, color='C1', label='groundtruth (green)')
ax.plot(valid_stock_sequence_data_date[:, -1], valid_predictions_list, color='C0', label='predictions (blue)')
# set y-axis limits
ax.set_xlim(valid_stock_sequence_data_date[:, -1].min(), valid_stock_sequence_data_date[:, -1].max())
# set plot legend
plt.legend(loc="lower right", numpoints=1, fancybox=True)
# set plot title
plt.title('LSTM NN Out-of-Sample Prediction vs. Ground-Truth Market Prices', fontsize=10)
# set axis labels
plt.xlabel('[time]', fontsize=8)
plt.ylabel('[market price]', fontsize=8)
# set axis ticks fontsize
plt.xticks(fontsize=8)
plt.yticks(fontsize=8);
# -
# ### 6. Backtest of the Trained Neural Network Model
# In this section, we will backtest the python `bt` library. Python `bt` is a flexible, backtest framework that can be used to test quantitative trading strategies. In general, backtesting is the process of testing a strategy over a given data set (more details about the `bt` library can be found via: https://pmorissette.github.io/bt/).
#
# In order to test the predictions derived from the LSTM model we will view its predictions $\hat{r}_{i+1}$ as trade signals $\phi$. Thereby, we will interpret any positive future return prediction $r_{t+1} > 0.0$ of a sequence $s^i$ as a "long" (buy) signal. Likewise, we will interpret any negative future return prediction $r_{t+1} < 0.0$ of a sequence $s$ as a "short" (sell) signal.
# #### 6.1. LSTM Trading Signal Preparation
# Let's start by converting the out-of-sample model predictions into a trading signal, as described above. Therefore, we first convert the obtained predictions into a data frame that contains (1) the **date of the predicted returns** as well as (2) the **predicted returns $r_{t+1}$** itself:
signal_data = pd.DataFrame(valid_predictions_list, columns=['PREDICTIONS'], index=valid_stock_sequence_data_date[:, -1])
# Furthermore, let's briefly ensure the successful conversion by inspecting the top 10 rows of the created data frame:
signal_data.head(10)
# Now, let's derive a trading signal from the converted predictions. As already described, we will generate the trading signal $\phi(\hat{r}_{t+1})$ according to the following function:
# <center>
# $
# \\
# \phi(\hat{r}_{t+1})=
# \begin{cases}
# 1.0 & \textrm{("long signal")}, & for & \hat{r}_{t+1} > 0.0\\
# -1.0 & \textrm{("short signal")}, & for & \hat{r}_{t+1} < 0.0\\
# \end{cases}
# $
# </center>
# where $\hat{r}_{t+1}$ denotes a by the model predicted future return at time $t+1$.
signal_data['SIGNAL'] = np.where(signal_data['PREDICTIONS'] > 0.0, 1.0, -1.0)
# Let's inspect the top 10 rows of the prepared trading signals:
signal_data.head(10)
# Let's now offset the prepared trading signal by a single day $t-1$. Thereby, we rebalance our stock positions one day prior and according to the closing price predicted by the `LSTMNet` model. As a result we will be able to anticipate the model's future closing price prediction for a particular day $t$:
signal_data = signal_data.set_index(signal_data['SIGNAL'].index - pd.DateOffset(1))
# Let's inspect the top 10 rows of the prepared and offset trading signals:
signal_data.head(10)
# Visualize the predicted and prepared trading signals of the `LSTMNet` model:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(signal_data['SIGNAL'], lw=1.0, color='C3', label='LSTM trade signals')
# set axis ranges
ax.set_xlim([signal_data.index[0], signal_data.index[-1]])
ax.set_ylim([-1.1, 1.1])
# set axis labels
ax.set_xlabel('[time]', fontsize=10)
ax.set_ylabel('[lstm tade signal]', fontsize=10)
# rotate x-axis ticks
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set plot title
ax.set_title('International Business Machines Corporation (IBM) - LSTM Trading Signals', fontsize=10);
# -
# Determine the number of trade signal changes (trades to be executed) within the out-of-sample timeframe **03/2016** until **12/2017**, resulting in a total in-sample timeframe of approx. **21 months** (9 + 12):
# determine number of signal changes
len(list(itertools.groupby(signal_data['SIGNAL'], lambda x: x > 0)))
# On average around **7** signal changes (trades) per month (148 signal changes / 21 months) within the out-of-sample time period.
# #### 6.2. Stock Market Data Preperation
# Now, let's prepare the daily closing prices so that they can be utilized in the backtest:
stock_market_data = pd.DataFrame(stock_data['Close'])
stock_market_data = stock_market_data.rename(columns={'Close': 'PRICE'})
stock_market_data = stock_market_data.set_index(pd.to_datetime(stock_data.index))
# Let's inspect the top 5 rows of the prepared closing prices:
stock_market_data.head(5)
# Sub-sample the prepared daily closing prices to the out-of-sample time period:
stock_market_data = stock_market_data[stock_market_data.index >= signal_data.index[0]]
stock_market_data = stock_market_data[stock_market_data.index <= signal_data.index[-1]]
# Let's inspect the top 5 rows of the prepared closing prices:
stock_market_data.head(5)
# Visualize the out-of-sample daily closing prices:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(stock_market_data['PRICE'], color='#9b59b6')
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set axis labels
ax.set_xlabel('[time]', fontsize=10)
ax.set_ylabel('[equity %]', fontsize=10)
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set axis labels and limits
ax.set_xlabel('[time]', fontsize=10)
ax.set_xlim([stock_market_data.index[0], stock_market_data.index[-1]])
ax.set_ylabel('[adj. closing price]', fontsize=10)
# set plot title
plt.title('International Business Machines Corporation (IBM) - Daily Historical Stock Closing Prices', fontsize=10);
# -
# Let's calculate the potentially gained return by the application of a simple **"buy and hold"** strategy:
np.abs(stock_market_data.iloc[0]['PRICE'] - stock_market_data.iloc[-1]['PRICE']) / stock_market_data.iloc[0]['PRICE']
# Ok, with such a simple strategy we would have been able to yield a total return of approx. **5.32%**.
# #### 6.3. Backtest Preparation
# Now that we have trading signals as well as the market data let's implement the LSTM based trading strategy which we name `LSTMStrategy`:
class LSTMStrategy(bt.Algo):
def __init__(self, signals):
# set class signals
self.signals = signals
def __call__(self, target):
if target.now in self.signals.index[1:]:
# get actual signal
signal = self.signals[target.now]
# set target weights according to signal
target.temp['weights'] = dict(PRICE=signal)
# return True since we want to move on to the next timestep
return True
# Let's instantiate our LSTM based trading strategy:
lstm_strategy = bt.Strategy('lstm', [bt.algos.SelectAll(), LSTMStrategy(signal_data['SIGNAL']), bt.algos.Rebalance()])
# Initialize the backtest of our LSTM based trading strategy using the strategy and prepared market data:
backtest_lstm = bt.Backtest(strategy=lstm_strategy, data=stock_market_data, name='stock_lstm_backtest')
# In addition, let's also prepare a backtest of a "baseline" buy-and-hold trading strategy for comparison purposes. Our buy-and-hold strategy sends a "long" (+1.0) signal at each time step of the out-of-sample time frame:
signal_data_base = signal_data.copy(deep=True)
signal_data_base['SIGNAL'] = 1.0
# Init the buy-and-hold ("base") strategy as well as the corresponding backtest:
base_strategy = bt.Strategy('base', [bt.algos.SelectAll(), LSTMStrategy(signal_data_base['SIGNAL']), bt.algos.Rebalance()])
backtest_base = bt.Backtest(strategy=base_strategy, data=stock_market_data, name='stock_base_backtest')
# #### 6.4. Running the Backtest and Evaluate Results
# Run the backtest for both trading strategies:
backtest_results = bt.run(backtest_lstm, backtest_base)
# Inspect the individual backtest results and performance measures:
backtest_results.display()
# Collect detailed backtest performance per timestep of the LSTM trading strategy:
backtest_lstm_details = backtest_lstm.strategy.prices.to_frame(name='Rel. EQUITY')
backtest_lstm_details['Abs. EQUITY'] = backtest_lstm.strategy.values # equity per timestep
backtest_lstm_details['CASH'] = backtest_lstm.strategy.cash # cash per timestep
backtest_lstm_details['POSITIONS'] = backtest_lstm.strategy.positions # positions per timestep
backtest_lstm_details['FEES'] = backtest_lstm.strategy.fees # trading fees per timestep
# Inspect the LSTM trading strategy backtest details:
backtest_lstm_details.head(10)
# Visualize the monthly returns obtained by the LSTM based trading strategy:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
# plot heatmap of monthly returns generated by the strategy
ax = sns.heatmap(backtest_lstm.stats.return_table, annot=True, cbar=True, vmin=-0.5, vmax=0.5)
# set axis labels
ax.set_xlabel('[month]', fontsize=10)
ax.set_ylabel('[year]', fontsize=10)
# set plot title
ax.set_title('International Business Machines Corporation (IBM) - Monthly Returns LSTM Strategy', fontsize=10);
# -
# Collect detailed backtest performance per timestep of the "buy-and-hold" trading strategy:
backtest_base_details = backtest_base.strategy.prices.to_frame(name='Rel. EQUITY')
backtest_base_details['Abs. EQUITY'] = backtest_base.strategy.values # equity per timestep
backtest_base_details['CASH'] = backtest_base.strategy.cash # cash per timestep
backtest_base_details['POSITIONS'] = backtest_base.strategy.positions # positions per timestep
backtest_base_details['FEES'] = backtest_base.strategy.fees # trading fees per timestep
# Inspect the "buy-and-hold" trading strategy backtest details:
backtest_base_details.head(10)
# Visualize the monthly returns obtained by the "buy-and-hold" trading strategy:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
# plot heatmap of monthly returns generated by the strategy
ax = sns.heatmap(backtest_base.stats.return_table, annot=True, cbar=True, vmin=-0.5, vmax=0.5)
# set axis labels
ax.set_xlabel('[month]', fontsize=10)
ax.set_ylabel('[year]', fontsize=10)
# set plot title
ax.set_title('International Business Machines Corporation (IBM) - Monthly Returns \'buy-and-hold\' Strategy', fontsize=10);
# -
# Visualize the equity progression of both strategies over time:
# +
plt.rcParams['figure.figsize'] = [15, 5]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(backtest_lstm_details['Rel. EQUITY'], color='C1',lw=1.0, label='lstm strategy (green)')
ax.plot(backtest_base_details['Rel. EQUITY'], color='C2',lw=1.0, label='base strategy (red)')
for tick in ax.get_xticklabels():
tick.set_rotation(45)
# set axis labels
ax.set_xlabel('[time]', fontsize=10)
ax.set_xlim(valid_stock_sequence_data_date[:, -1].min(), valid_stock_sequence_data_date[:, -1].max())
ax.set_ylabel('[equity %]', fontsize=10)
# set plot legend
plt.legend(loc="upper right", numpoints=1, fancybox=True)
# set plot title
plt.title('International Business Machines Corporation (IBM) - Backtest % Equity Progression', fontsize=10);
# -
# ### Exercises:
# We recommend you to try the following exercises as part of the lab:
#
# **1. Evaluation of Shallow vs. Deep RNN Models.**
#
# > Download the daily closing prices of the IBM stock within the time frame starting from 01/01/2000 until 12/31/2019. In addition to the architecture of the lab notebook, evaluate further (more shallow as well as more deep) RNN architectures by either: **(1) re-moving/adding layers of LSTM cells, and/or (2) increasing/decreasing the dimensionality of the LSTM cells hidden state**. Train your model (using architectures you selected) for at least 20'000 training epochs but keep the following parameters unchanged (a) sequence length: 5 time-steps (days) and (b) train vs. test fraction: 0.9.
#
# > Analyze the prediction performance of the trained models in terms of training time and prediction accuracy. Furthermore, backtest the out-of-sample signals predicted by each of your models and evaluate them in terms of total return and equity progression. Which of your architecture results in the best performing model, and why?
# +
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
# -
# **2. Training and Evaluation of Models Learned from Additional Stocks.**
#
# > Download the daily closing prices of at least two additional stocks (e.g., Alphabet, Deutsche Bank) within the time frame starting from 01/01/2000 until 12/31/2019. **Pls. select two stocks that you are interested in to investigate ( e.g. stocks that you may occasionally trade yourself)**. Learn an ’optimal’ RNN model of both stocks and backtest their corresponding trade signals by following the approach outlined in the lab notebook regarding the IBM stock. Pls. keep the train vs. test dataset fraction fixed to 0.9, all other parameters of the data preparation and model training can be changed.
#
# > Analyse the performance of the learned models in terms of their prediction accuracy as well as their out-of-sample backtest performance (e. g. the total return and equity progression). What architectures and corresponding training parameters result in the best performing models?
# +
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
# -
# **3. Training and Evaluation of Models Learned from Augmented Data.**
#
# > In the prior exercises, we used the historical daily returns of a single stock to learn a model that can predict the stocks’ future closing price (log- return) movement. However, one of the advantages of NN’s **lies in their capability to learn a model from multiple sources of input data**. For each of the two stocks (’target stocks’) that you selected in `exercise 2.`. learn an ’optimal’ RNN model using the daily returns as a target label. However, before training your models **augment the training data of each stock by the return sequences of at least three additional stocks**. The additional stocks, used for data augmentation, should exhibit a high correlation to the historical closing prices of the target stock price movement you aim to model.
#
# > Analyse the performance of the learned models in terms of their prediction accuracy as well as their out-of-sample backtest performance (e. g. the total return and equity progression). Do you observe an improvement of the trained model in terms of out-of-sample backtest performance comparison to `exercise 1.`?
# +
# ***************************************************
# INSERT YOUR CODE HERE
# ***************************************************
# -
# ### Lab Summary:
# In this lab, a step by step introduction into **design, implementation, training and evaluation** of a LSTM neural network based trading strategy is presented.
#
# The strategy trades a specific financial instrument based on its historical market prices. The degree of success of the implemented strategy is evaluated based in its backtest performance with particular focus on (1) the strategy's **total return** as well as (2) its **equity progression** over time.
#
# The code provided in this lab provides a blueprint for the development and testing of more complex trading strategies.
# You may want to execute the content of your lab outside of the Jupyter notebook environment, e.g. on a compute node or a server. The cell below converts the lab notebook into a standalone and executable python script. Pls. note that to convert the notebook, you need to install Python's **nbconvert** library and its extensions:
# installing the nbconvert library
# !pip3 install nbconvert
# !pip3 install jupyter_contrib_nbextensions
# Let's now convert the Jupyter notebook into a plain Python script:
# !jupyter nbconvert --to script cfds_lab_14.ipynb
| lab_14/cfds_lab_14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="3zhQKhAwTHbp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} executionInfo={"status": "ok", "timestamp": 1600484964954, "user_tz": 300, "elapsed": 4665, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj2iNkr_0OYUQ043vwPYSkRsQcDqvE4PcGGpIJULA=s64", "userId": "07537237062042895789"}} outputId="1f995696-3455-4b0a-d834-53ff901b1285"
# !pip install cerbo
# + id="0r43g-5dTTRM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} executionInfo={"status": "ok", "timestamp": 1600484992825, "user_tz": 300, "elapsed": 26491, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj2iNkr_0OYUQ043vwPYSkRsQcDqvE4PcGGpIJULA=s64", "userId": "07537237062042895789"}} outputId="3f349552-ddfd-488d-b7a8-668300ac6c34"
from cerbo.preprocessing import *
from cerbo.DL import *
import keras
from keras.datasets import cifar10
import numpy as np
(x_train, y_train), (x_test, y_test) = cifar10.load_data() #loads in the data
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes) #labels the training data
y_test = keras.utils.to_categorical(y_test, num_classes) #labels the test data
x_train = x_train/255.0
x_test = x_test/255.0
print(y_train.shape)
# + id="x-8DhBs5Tmxf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 714} executionInfo={"status": "ok", "timestamp": 1600485078544, "user_tz": 300, "elapsed": 85999, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj2iNkr_0OYUQ043vwPYSkRsQcDqvE4PcGGpIJULA=s64", "userId": "07537237062042895789"}} outputId="01a545ff-8ee0-428e-ffc5-d933f15059cb"
model = CNN(10, x_train,y_train,x_test,y_test,type='custom')
model.addLayer('conv')
model.addLayer('pool')
model.addLayer('conv')
model.addLayer('pool')
model.addLayer('conv')
model.addLayer('pool')
model.addLayer('fc')
model.addLayer('out','softmax')
test_loss, test_acc = model.compute(loss='categorical_crossentropy',batch=128,num_epochs=20)
print(test_acc)
model.save(name='cnn')
| examples/Deep Learning/CNN/CNN3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Implementing a new model with Jack
#
# In this tutorial, we focus on the minimal steps required to implement a new model from scratch using Jack.
# Please note that this tutorial has a lot of detail. It is aimed at developers who want to understand the internals of Jack.
#
# In order to implement a Jack Reader, we define three modules:
# - **Input Module**: Responsible for mapping `QASetting`s to numpy arrays assoicated with `TensorPort`s
# - **Model Module**: Defines the differentiable model architecture graph (_TensorFlow_ or _PyTorch_)
# - **Output Module**: Converting the network output to human-readable overall system output.
#
# Jack is modular, in the sense that any particular input/model/output module can be exchanged with another one.
# To illustrate this, we will implement an entire reader, and will then go on to implement another reader, but reusing the _Input_ and _Output_ module of the first.
#
# The first reader will have a _Model Module_ based on *TensorFlow*, the second will have a _Model Module_ based on *PyTorch*.
#
# ### Model Overview
# As example, we will implement a simple Bi-LSTM baseline for extractive question answering, which involves extracting the answer to a question from a given text. On a high level, the architecture looks as follows:
# - Words of question and support are embedded using random embeddings (not trained)
# - Both word and question are encoded using a bi-directional LSTM
# - The question is summarized by a weighted token representation
# - A feedforward NN scores each of the support tokens to be the _start_ of the answer
# - A feedforward NN scores each of the support tokens to be the _end_ of the answer
#
# First change dir to jack parent
import os
os.chdir('..')
import re
from jack.core import *
from jack.core.tensorflow import TFReader, TFModelModule
from jack.io.embeddings import Embeddings
from jack.util.hooks import LossHook
from jack.util.vocab import *
from jack.readers.extractive_qa.shared import XQAPorts, XQAOutputModule
from jack.readers.extractive_qa.util import prepare_data
from jack.readers.extractive_qa.util import tokenize
from jack import tfutil
from jack.tfutil import sequence_encoder
from jack.tfutil.misc import mask_for_lengths
from jack.util.map import numpify
from jack.util.preprocessing import stack_and_pad
import tensorflow as tf
_tokenize_pattern = re.compile('\w+|[^\w\s]')
# ## Ports
#
# All communication between _Input_, _Model_ and _Output_ modules happens via `TensorPort`s (see `jack/core/tensorport.py`). Tensorports can be understood as placeholders for tensors, and define the ways in which information is communicated between the differentiable model architecture (_Model_ module), and the _Input_ and _Output_ modules.
#
# This is useful when implementing new models: often there already exists a model for the same task, and you can re-use existing _Input_ or _Output_ modules. You can re-use existing modules by making sure that your new module is compatible to the ports specified in the already existing modules.
#
# In case you can reuse existing _Input_ or _Output_ modules, it is then enough to simply
# implement a new _Model_ Module (see below) that adheres to the same Tensorport interface.
# See `jack/readers/implementations.py` to see how different readers re-use the same modules.
#
# If you need a new port, however, it is also straight-forward to define one.
# For this tutorial, we will define most ports here.
class MyPorts:
embedded_question = TensorPort(np.float32, [None, None, None],
"embedded_question",
"Represents the embedded question",
"[B, max_num_question_tokens, N]")
# or reuse Ports.Misc.embedded_question
question_length = TensorPort(np.int32, [None],
"question_length",
"Represents length of questions in batch",
"[B]")
# or reuse Ports.Input.question_length
embedded_support = TensorPort(np.float32, [None, None, None],
"embedded_support",
"Represents the embedded support",
"[B, max_num_tokens, N]")
# or reuse Ports.Misc.embedded_support
support_length = TensorPort(np.int32, [None],
"support_length",
"Represents length of support in batch",
"[B]")
# or reuse Ports.Input.support_length
start_scores = TensorPort(np.float32, [None, None],
"start_scores",
"Represents start scores for each support sequence",
"[B, max_num_tokens]")
# or reuse Ports.Prediction.start_scores
end_scores = TensorPort(np.float32, [None, None],
"end_scores",
"Represents end scores for each support sequence",
"[B, max_num_tokens]")
# or reuse Ports.Prediction.end_scores
span_prediction = TensorPort(np.int32, [None, 2],
"span_prediction",
"Represents predicted answer as a (start, end) span",
"[B, 2]")
# or reuse Ports.Prediction.span_prediction
answer_span = TensorPort(np.int32, [None, 2],
"answer_span_target",
"Represents target answer as a (start, end) span",
"[B, 2]")
# or reuse Ports.Target.answer_span
token_offsets = TensorPort(np.int32, [None, None],
"token_offsets",
"Character index of tokens in support.",
"[B, support_length]")
# or reuse XQAPorts.token_offsets
loss = Ports.loss # this port must be used
# ## Implementing an Input Module
#
# The _Input_ module is responsible for converting `QASetting` instances (the inputs to the reader) into numpy
# arrays, which are mapped to `TensorPort`s and passed on to the _Model_ module.
# Effectively, we are building the tensorflow _feed dictionary_ used during training and inference.
# There are _Input_ modules for
# several readers that can easily be reused when your model requires the same
# pre-processing and input as another model.
# **Note**: Similarly, this is also true for the _Output_ Module.
#
# To implement a new _Input_ module, you could implement the `InputModule` interface, but in many cases it'll be
# easier to inherit from `OnlineInputModule`, which already comes with useful functionality. In our implementation we will do the latter. We will need to:
# - Define the output `TensorPort`s of our input module. These will be used to communicate with the _Model_ module
# - Implement the actual preprocessing (e.g. tokenization, mapping to embedding vectors, ...). The result of this step is one *annotation* per instance; this annotation is a `dict` with values for every Tensorport to pass on to the _Model_ module (see `_preprocess_instance()` below).
# - Implement batching.
class MyInputModule(OnlineInputModule):
def setup(self):
self.vocab = self.shared_resources.vocab
self.emb_matrix = self.vocab.emb.lookup
# We will now define the input and output TensorPorts of our model.
@property
def output_ports(self):
return [MyPorts.embedded_question, # Question embeddings
MyPorts.question_length, # Lengths of the questions
MyPorts.embedded_support, # Support embeddings
MyPorts.support_length, # Lengths of the supports
MyPorts.token_offsets # Character offsets of tokens in support, used for in ouput module
]
@property
def training_ports(self):
return [MyPorts.answer_span] # Answer span, one for each question
# Now, we implement our preprocessing. This involves tokenization,
# mapping to token IDs, mapping to to token embeddings,
# and computing the answer spans.
def _get_emb(self, idx):
"""Maps a token ID to it's respective embedding vector"""
if idx < self.emb_matrix.shape[0]:
return self.vocab.emb.lookup[idx]
else:
# <OOV>
return np.zeros([self.vocab.emb_length])
def preprocess(self, questions, answers=None, is_eval=False):
"""Maps a list of instances to a list of annotations.
Since in our case, all instances can be preprocessed independently, we'll
delegate the preprocessing to a `_preprocess_instance()` method.
"""
if answers is None:
answers = [None] * len(questions)
return [self._preprocess_instance(q, a)
for q, a in zip(questions, answers)]
def _preprocess_instance(self, question, answers=None):
"""Maps an instance to an annotation.
An annotation contains the embeddings and length of question and support,
token offsets, and optionally answer spans.
"""
has_answers = answers is not None
# `prepare_data()` handles most of the computation in our case, but
# you could implement your own preprocessing here.
q_tokenized, q_ids, _, q_length, s_tokenized, s_ids, _, s_length, \
word_in_question, offsets, answer_spans = \
prepare_data(question, answers, self.vocab,
with_answers=has_answers,
max_support_length=100)
# there is only 1 support
s_tokenized, s_ids, s_length, offsets = s_tokenized[0], s_ids[0], s_length[0], offsets[0]
# For both question and support, we'll fill an embedding tensor
emb_support = np.zeros([s_length, self.emb_matrix.shape[1]])
emb_question = np.zeros([q_length, self.emb_matrix.shape[1]])
for k in range(len(s_ids)):
emb_support[k] = self._get_emb(s_ids[k])
for k in range(len(q_ids)):
emb_question[k] = self._get_emb(q_ids[k])
# Now, we build the annotation for the question instance. We'll use a
# dict that maps from `TensorPort` to numpy array, but this could be
# any data type, like a custom class, or a named tuple.
annotation = {
MyPorts.question_length: q_length,
MyPorts.embedded_question: emb_question,
MyPorts.support_length: s_length,
MyPorts.embedded_support: emb_support,
MyPorts.token_offsets: offsets
}
if has_answers:
# For the purpose of this tutorial, we'll only use the first answer, such
# that we will have exactly as many answers as questions.
annotation[MyPorts.answer_span] = answer_spans[0][0]
return numpify(annotation, keys=annotation.keys())
def create_batch(self, annotations, is_eval, with_answers):
"""Now, we need to implement the mapping of a list of annotations to a feed dict.
Because our annotations already are dicts mapping TensorPorts to numpy
arrays, we only need to do padding here.
"""
return {key: stack_and_pad([a[key] for a in annotations])
for key in annotations[0].keys()}
# ## Implementing a Model Module
#
# The _Model_ module defines the differentiable computation graph.
# It takes _Input_ module outputs as inputs, and produces outputs (such as the loss, or logits)
# that match the inputs to the _Output_ module.
#
# We first look at a _TensorFlow_ implementation of the _Model_ module; futher below you can find an implementation using _PyTorch_.
class MyModelModule(TFModelModule):
@property
def input_ports(self) -> Sequence[TensorPort]:
return [MyPorts.embedded_question,
MyPorts.question_length,
MyPorts.embedded_support,
MyPorts.support_length]
@property
def output_ports(self) -> Sequence[TensorPort]:
return [MyPorts.start_scores,
MyPorts.end_scores,
MyPorts.span_prediction]
@property
def training_input_ports(self) -> Sequence[TensorPort]:
return [MyPorts.start_scores,
MyPorts.end_scores,
MyPorts.answer_span]
@property
def training_output_ports(self) -> Sequence[TensorPort]:
return [MyPorts.loss]
def create_output(self, shared_resources, input_tensors):
"""
Implements the "core" model: The TensorFlow subgraph which computes the
answer span from the embedded question and support.
Args:
emb_question: [Q, L_q, N]
question_length: [Q]
emb_support: [Q, L_s, N]
support_length: [Q]
Returns:
start_scores [B, L_s, N], end_scores [B, L_s, N], span_prediction [B, 2]
"""
tensors = TensorPortTensors(input_tensors)
with tf.variable_scope("fast_qa", initializer=tf.contrib.layers.xavier_initializer()):
dim = shared_resources.config['repr_dim']
# set shapes for inputs
tensors.embedded_question.set_shape([None, None, dim])
tensors.embedded_support.set_shape([None, None, dim])
# encode question and support
rnn = tf.contrib.rnn.LSTMBlockFusedCell
encoded_question = sequence_encoder.bi_lstm(dim, tensors.embedded_question,
tensors.question_length, name='bilstm',
with_projection=True)
encoded_support = sequence_encoder.bi_lstm(dim, tensors.embedded_support,
tensors.support_length, name='bilstm',
reuse=True, with_projection=True)
start_scores, end_scores, predicted_start_pointer, predicted_end_pointer = \
self._output_layer(dim, encoded_question, tensors.question_length,
encoded_support, tensors.support_length)
span = tf.concat([predicted_start_pointer, predicted_end_pointer], 1)
return TensorPort.to_mapping(self.output_ports, (start_scores, end_scores, span))
def _output_layer(self,
dim,
encoded_question,
question_length,
encoded_support,
support_length):
"""Simple span prediction layer of our network"""
batch_size = tf.shape(question_length)[0]
# Computing weighted question state
attention_scores = tf.contrib.layers.fully_connected(encoded_question, 1,
scope="question_attention")
q_mask = mask_for_lengths(question_length, batch_size)
attention_scores = attention_scores + tf.expand_dims(q_mask, 2)
question_attention_weights = tf.nn.softmax(attention_scores, 1,
name="question_attention_weights")
question_state = tf.reduce_sum(question_attention_weights * encoded_question, [1])
# Prediction
support_mask = mask_for_lengths(support_length, batch_size)
interaction = tf.expand_dims(question_state, 1) * encoded_support
def predict():
scores = tf.layers.dense(tf.concat([interaction, encoded_support], axis=2), 1)
scores = tf.squeeze(scores, [2])
scores = scores + support_mask
_, predicted = tf.nn.top_k(scores, 1)
return scores, predicted
start_scores, predicted_start_pointer = predict()
end_scores, predicted_end_pointer = predict()
return start_scores, end_scores, predicted_start_pointer, predicted_end_pointer
def create_training_output(self,
shared_resources,
input_tensors) -> Sequence[TensorPort]:
"""Compute loss from start & end scores and the gold-standard `answer_span`."""
tensors = TensorPortTensors(input_tensors)
start, end = [tf.squeeze(t, 1) for t in tf.split(tensors.answer_span_target, 2, 1)]
start_score_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=tensors.start_scores,
labels=start)
end_score_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=tensors.end_scores,
labels=end)
loss = start_score_loss + end_score_loss
return TensorPort.to_mapping(self.training_output_ports, [tf.reduce_mean(loss)])
# ## Implementing an Output Module
#
# The _Output_ module converts model predictions from the differentiable computation graph into `Answer` instances.
# Since our model is a standard extractive QA model, we could reuse the existing `XQAOutputModule`, rather than implementing our own.
class MyOutputModule(OutputModule):
@property
def input_ports(self) -> List[TensorPort]:
return [MyPorts.span_prediction,
MyPorts.token_offsets,
MyPorts.start_scores,
MyPorts.end_scores]
def __call__(self,
questions,
input_tensors) -> Sequence[Answer]:
"""Produces best answer for each question."""
answers = []
tensors = TensorPortTensors(input_tensors)
for i, question in enumerate(questions):
offsets = tensors.token_offsets[i]
start, end = tensors.span_prediction[i]
score = tensors.start_scores[i, start] + tensors.end_scores[i, end]
# map token to char span
char_start = offsets[start]
char_end = offsets[end + 1] if end < len(offsets) - 1 else len(question.support[0])
answer = question.support[0][char_start: char_end]
answer = answer.rstrip()
char_end = char_start + len(answer)
answers.append(Answer(answer, span=(char_start, char_start), score=score))
return answers
# # Putting Together all Modules
#
# We are now ready to put together the above defined _Input_, _Model_, and _Output_ modules into one _Reader_.
#
# For illustration purposes, we will use a toy data example with just one example question:
data_set = [
(QASetting(
question="Which is it?",
support=["While b seems plausible, answer a is correct."],
id="1"),
[Answer(text="a", span=(32, 33))])
]
# Before assembling the parts of our newly defined reader, we will need to define some shared resources, which all of the modules can depend on. This includes a vocabulary `Vocab`, and a configuration hyperparameter dictionary `config`.
#
# We build the vocabulary directly from the above data set using the function `build_vocab()`, which also associates each word with random embedding vectors.
# +
embedding_dim = 10
def build_vocab(questions):
"""Build a vocabulary of random vectors."""
embedding_lookup = dict()
for question in questions:
for t in tokenize(question.question):
if t not in embedding_lookup:
embedding_lookup[t] = len(embedding_lookup)
embeddings = Embeddings(embedding_lookup,
np.random.random([len(embedding_lookup),
embedding_dim]))
vocab = Vocab(emb=embeddings, init_from_embeddings=True)
return vocab
questions = [q for q, _ in data_set]
shared_resources = SharedResources(build_vocab(questions),
config={'repr_dim': 10,
'repr_dim_input': embedding_dim})
# -
# We then instantiate our above defined modules with these `shared_resources` as input parameter.
# +
tf.reset_default_graph()
input_module = MyInputModule(shared_resources)
model_module = MyModelModule(shared_resources)
output_module = MyOutputModule()
reader = TFReader(shared_resources, input_module, model_module, output_module)
# -
# At this point, the Reader is complete! It is composed of the three modules and shared resources, and is ready to generate predictions, or to train it.
# +
batch_size = 1
hooks = [LossHook(reader, iter_interval=1)]
optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
reader.train(optimizer, data_set, batch_size, max_epochs=10, hooks=hooks)
print()
print(questions[0].question, questions[0].support[0])
answers = reader(questions)
print("{}, {}, {}".format(answers[0].score, answers[0].span, answers[0].text))
# -
#
# **Note:** If you want to train your newly implemented model using the main training script `jack/train_reader.py`, you first have to register a name for your new model in `jack.core.implementations`.
#
# ### Hooks
# In the above example, we are making use of a _hook_. Hooks are used to monitor progress throughout training. For example, the `LossHook` monitors the loss throughout training, but other hooks can measure validation accuracy, time elapsed, etc.
# Jack comes with several hooks predefined (see `jack.util.hooks`), but you can always extend them or add your own.
#
# ## Implementing a QA model in PyTorch
#
# Above, we have implemented a complete reader from scratch, using _TensorFlow_ to define the differentiable computation graph in the _Model_ module. Let's now implement another reader, reusing as much as possible, but change frameworks from _TensorFlow_ to _PyTorch_.
#
# All we need to do to accomplish this, is to write another _ModelModule_.
#
# **Note:** the following code requires that you to have installed [PyTorch](http://pytorch.org/).
#
# ### Differentiable Model Architecture (PyTorch)
#
# Let's first define _PyTorch_ modules that define the differentiable model architecture. This is independent of Jack, but we do offer some convenience functions, similar to TF.
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
from jack.torch_util import embedding, misc, xqa
from jack.torch_util.highway import Highway
from jack.torch_util.rnn import BiLSTM
class MyPredictionTorchModule(nn.Module):
def __init__(self, shared_resources):
super(MyPredictionTorchModule, self).__init__()
self._shared_resources = shared_resources
repr_dim_input = shared_resources.config["repr_dim_input"]
repr_dim = shared_resources.config["repr_dim"]
# nn child modules
self._bilstm = BiLSTM(repr_dim_input, repr_dim)
self._linear_question_attention = nn.Linear(2 * repr_dim, 1, bias=False)
self._linear_start_scores = nn.Linear(2 * repr_dim, 1, bias=False)
self._linear_end_scores = nn.Linear(2 * repr_dim, 1, bias=False)
def forward(self, emb_question, question_length, emb_support, support_length):
# encode
encoded_question = self._bilstm(emb_question)[0]
encoded_support = self._bilstm(emb_support)[0]
# answer
# computing attention over question
attention_scores = self._linear_question_attention(encoded_question)
q_mask = misc.mask_for_lengths(question_length)
attention_scores = attention_scores.squeeze(2) + q_mask
question_attention_weights = F.softmax(attention_scores, dim=1)
question_state = torch.matmul(question_attention_weights.unsqueeze(1),
encoded_question).squeeze(1)
interaction = question_state * encoded_support
# Prediction
start_scores = self._linear_start_scores(interaction).squeeze(2)
end_scores = self._linear_start_scores(interaction).squeeze(2)
# Mask
support_mask = misc.mask_for_lengths(support_length)
start_scores += support_mask
end_scores += support_mask
_, predicted_start_pointer = start_scores.max(1)
_, predicted_end_pointer = end_scores.max(1)
# end pointer cannot come before start
predicted_end_pointer = torch.max(predicted_end_pointer, predicted_start_pointer)
span = torch.stack([predicted_start_pointer, predicted_end_pointer], 1)
return start_scores, end_scores, span
class MyLossTorchModule(nn.Module):
def forward(self, start_scores, end_scores, answer_span):
start, end = answer_span[:, 0], answer_span[:, 1]
# start prediction loss
loss = -torch.index_select(F.log_softmax(start_scores, dim=1), dim=1, index=start.long())
# end prediction loss
loss -= torch.index_select(F.log_softmax(end_scores, dim=1), dim=1, index=end.long())
# mean loss over the current batch
return loss.mean()
# -
# ### Implementing the Jack _Model_ Module with PyTorch
#
# After defining our `torch nn.Module` classes, we can use them in a Jack `ModelModule`. Note that the signature of the `nn.Module` torch implementations above must match the tensorport signature of the `ModelModule`.
# +
from jack.core.torch import PyTorchModelModule, PyTorchReader
class MyTorchModelModule(PyTorchModelModule):
@property
def input_ports(self) -> Sequence[TensorPort]:
return [MyPorts.embedded_question,
MyPorts.question_length,
MyPorts.embedded_support,
MyPorts.support_length]
@property
def output_ports(self) -> Sequence[TensorPort]:
return [MyPorts.start_scores,
MyPorts.end_scores,
MyPorts.span_prediction]
@property
def training_input_ports(self) -> Sequence[TensorPort]:
return [MyPorts.start_scores,
MyPorts.end_scores,
MyPorts.answer_span]
@property
def training_output_ports(self) -> Sequence[TensorPort]:
return [MyPorts.loss]
def create_loss_module(self, shared_resources: SharedResources):
return MyLossTorchModule()
def create_prediction_module(self, shared_resources: SharedResources):
return MyPredictionTorchModule(shared_resources)
# -
# After defining our new PyTorchModelModule we can create our JackReader similar as before, by instantiating a `PyTorchReader`, rather than a `TFReader`, as before.
# +
input_module = MyInputModule(shared_resources)
model_module = MyTorchModelModule(shared_resources) # was MyModelModule
output_module = MyOutputModule()
reader = PyTorchReader(shared_resources,
input_module,
model_module,
output_module) # was TFReader
# -
# Interacting with the instantiated readers is transparent. For the user it doesn't matter whether it is a `TFReader` or a `PyTorchReader`.
# +
batch_size = 1
# torch needs to be setup already at this point, to get the parameters
reader.setup_from_data(data_set, is_training=True)
optimizer = torch.optim.Adam(reader.model_module.prediction_module.parameters(), lr=0.1)
hooks = [LossHook(reader, iter_interval=1)]
reader.train(optimizer,
data_set,
batch_size,
max_epochs=10,
hooks=hooks)
print()
print(questions[0].question, questions[0].support[0])
answers = reader(questions)
print("{}, {}, {}".format(answers[0].score, answers[0].span, answers[0].text))
# -
| notebooks/model_implementation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# **[Back to Fan's Intro Stat Table of Content](https://fanwangecon.github.io/Stat4Econ/)**
#
# # Discrete Random Variable and Binomial Experiment
#
# We have been looking at various examples of discrete Random Variables in [Sample Space, Experimental Outcomes, Events, Probabilities](https://fanwangecon.github.io/Stat4Econ/probability/samplespace.html), [Examples of Sample Space and Probabilities](https://fanwangecon.github.io/Stat4Econ/probability/samplespaceexa.html), [Multiple-Step Experiment: Playing the Lottery Three times](https://fanwangecon.github.io/Stat4Econ/probability/lottery.html), and [Throw an Unfair Four Sided Dice](https://fanwangecon.github.io/Stat4Econ/probability/samplespacedice.html).
#
# Now we state this a little bit more formally.
# ## Discrete Random Variable
#
# 1. Random Variable
# - "A random variable is a numerical description of the outcome of an experiment." (ASWCC P217)
# $$ \text{we could use letter } x \text{ to represent a random variable}$$
# 2. Discrete Random Variables
# - "A random variable that may assume either a finite number of values or an infinite sequence of values such as $0,1,2,...$ is referred to as a discrete random variable." (ASWCC P217)
# 3. Discrete Probability Mass Function
# - "For a discrete random variable $x$, a probability function, denoted by $f(x)$, provides the probability for each value of the random variable." (ASWCC P220)
# - We can think of different values of $x$ as been mapped from an experimental outcome in the sample space. Probability can not be negative, and the sum of the probability of different possible $x$ outcomes sum to 1 (ASWCC P221):
# $$ f(x) \ge 0$$
# $$\Sigma f(x) = 1$$
# 4. Expected Value of a Discrete Random Variable (ASWCC P225)
# $$E(x) = \mu_x = \Sigma x \cdot f(x)$$
# 4. Variance of a Discrete Random Variable (ASWCC P225)
# $$Var(x) = \sigma_x^2 = \Sigma \left( (x - \mu_x)^2 \cdot f(x) \right)$$
# ## Well Known Discrete Probability Distributions
#
# There is a [variety of](https://en.wikipedia.org/wiki/List_of_probability_distributions#Discrete_distributions) different Discrete Random Variable Probability Mass Functions that are appropriate for analyzing different situations. Think of these as tools, you don't need to memorize the formulas, but you can learn about under what conditions these distributions are useful. Learn about the parameters of these distributions, and use the appropriate Excel or R function.
#
# The simplest distribution is the [Discrete Uniform Distribution](https://en.wikipedia.org/wiki/Discrete_uniform_distribution). The uniform probability mass function is: $f(x) = \frac{1}{n}$, where $n$ is the *the number of values the random variable may assume*, corresponding to experimental outcomes. (ASWCC P222)
#
# We focus on the Binomial Distribution here, a variety of other distributions are related to the binomial distribution:
#
# - [Binomial Distribution](https://en.wikipedia.org/wiki/Binomial_distribution)
# - [Bernoulli Distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution)
# - [Hypergeometric Distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution)
# - [Beta-Binomial Distribution](https://en.wikipedia.org/wiki/Beta-binomial_distribution)
# ## Binomial Experiment
#
# We have already examined [Multiple-Step Experiment: Playing the Lottery Three times](https://fanwangecon.github.io/Stat4Econ/probability/lottery.html). Here we re-state the four properties of the [Binomial Experiment](https://en.wikipedia.org/wiki/Binomial_distribution) (ASWCC P240):
#
# 1. "The experiment consists of a sequence of $n$ identical trials."
# 2. "Two outcomes are possible on each trial. We refer to one outcome as a success and the other outcome as a failure."
# 3. "The probability of a success, denoted by $p$, does not change from trial to trial. Consequently, the probability of a failure, denoted by $1-p$, does not change from trial to trial."
# 4. "The trials are independent."
#
# ### Binomial Sample Space
#
# Note that given that there are $n$ trials, the possible $x$ go from $x=0$ to $x=n$. In another word, if there are three trials, $n=3$, there are four possible experimental outcomes for $x$.
#
# - Experiment: The binomial experiment with $n$ trials
# - Experimental outcomes: $x=0,x=1,...,x=\left(n-1\right),x=n$
# - Sample Space: $S=\left\{0,1,...,n-1,n\right\}$
#
# ### Binomial Probability Mass Funcion
#
# What is the probability of having $x$ success out of $n$ trials, given that there is $p$ chance of success for each trial and the assumptions above? The answer is given by this formula, which is the binomial probability mass function:
# $$
# \begin{eqnarray}
# f(x;n,p) &=& C^{n}_{x} \cdot p^x \cdot \left(1-p\right)^{n-x}\\
# &=& \frac{n!}{\left(n-x\right)! \cdot x!} \cdot p^x \cdot \left(1-p\right)^{n-x}\\
# \end{eqnarray}
# $$
#
# With the binomial experiment, we can now use a formula to assign probabilities. A formula is a piece of machinery that takes inputs and spits out outputs, Thie binomial probability mass function has three inputs, $x$, $n$ and $p$. You need to tell R, Excel, or alternative programs what these three numbers are, and the program will spit a probability out for you. $n$ and $p$ are the parameters.
#
# ### Binomial Expected Value and Variance
#
# For the binomial discrete random variable, it turns out the expected value and variance are:
# $$E(x) = n \cdot p$$
# $$Var(x) = n \cdot p \cdot (1-p)$$
# We can find the expected value and variance by summing over all terms following: $E(x) = \mu_x = \Sigma x \cdot f(x)$ and $Var(x) = \sigma_x^2 = \Sigma \left( (x - \mu_x)^2 \cdot f(x) \right)$, and we will always end up with these two results. It is intuitive. The average number of wins given that you play $n$ trials and given that the chance of winning each game is $p$ is $n \cdot p$.
#
# ## Binomial Example: Larceny
#
# In 2004, in the United State, [18.3 percent of Larceny-Theft were cleared](https://en.wikipedia.org/wiki/Clearance_rate). "Clearance rates are used by various groups as a measure of crimes solved by the police."
#
# ### Chance of x out n Arrested
#
# 10 people commit 10 larceny-theft crimes, suppose the situation follows the conditions of the binomial experiment, what is the chance that nobody is arrested? The chance that 1, 2, 3, or all 10 are arrested?
#
# - $n=10$: 10 crimes, 10 trials
# - $p=0.183$: what is $p$ here? For the police success is clearing a crime.
# - $x$: if $x=10$, that means all larceny-thefts were cleared, if $x=0$, this means the police failed to clear all 10 thefts.
#
# For example, the chance that $2$ out of $10$ is arrested is:
# $$f\left(x=2;n=10,p=0.183\right) = \frac{10!}{\left(10-2\right)! \cdot 2!} \cdot 0.183^2 \cdot \left(1-0.183\right)^{10-2} = 0.299$$
#
# We can use the r function, *dbinom*, to calculate these probabilites: *dbinom(2,10,0.183)*. Additionally, *pbinom(2,10,0.183)* tells us the cumulative probability that at most 2 out of 10 are arrested.
#
# We do this in the code below. From the graph below, we can see that there is a 13.3% chance that none of the 10 thieves would be arrested, and alomst 0 percent chance that all 10 of then would be arrested. The chance that 6 out of the 10 thiefs would be arrested is less than 1 percent.
# +
# library
library(tidyverse)
# Parameters
n <- 10
p <- 0.183
# A vector of different arrest counts
zero2max <- 0:n
# Probability for different arrest counts
prob_of_arrests <- dbinom(zero2max, n, p)
# Control Graph Size
options(repr.plot.width = 5, repr.plot.height = 4)
# Number of Arrests Probabilities
arrest.prob.mass <- tibble(arrests=zero2max, prob=prob_of_arrests)
# Titles Strings etc
graph.title <- sprintf('Prob for Number of Theft-Criminals Arrested out of %s' ,n)
graph.caption <- sprintf(
paste0('Assuming the binomial properties apply\n',
'2004 Larceny/Left USA Clearance Rate = %s'), p)
# Create a Table and Graph it
arrest.graph <- arrest.prob.mass %>%
ggplot(aes(x=arrests, y=prob, label = sprintf('%0.3f', prob))) +
geom_bar(stat = 'identity') +
geom_text(vjust = -0.5, size = 3) +
labs(title = graph.title,
x = 'Number of Criminals Arrests',
y = 'Prob of x Arrested',
caption = graph.caption) +
scale_x_continuous(labels = zero2max, breaks = zero2max) +
theme_bw()
print(arrest.graph)
# -
# ### What is the Chance of Arrest at Least X out of N People
#
# What is the chance that at most 3 out of the 10 thiefs are arrested? That includes the chance that no one is arrested, 1, 2, or 3 people are arrested:
#
# $$
# \begin{eqnarray}
# & & p\left(x \le 3;n=10,p=0.183\right) \\
# &=&\Sigma_{x=0}^{3} \left( f\left(x;n=10,p=0.183\right) \right) \\
# &=& \Sigma_{x=0}^3 \left( \frac{10!}{\left(10-x\right)! \cdot x!} \cdot 0.183^x \cdot \left(1-0.183\right)^{10-x} \right)\\
# &=&0.133 + 0.297 + 0.299 + 0.179\\
# &=&0.907\\
# \end{eqnarray}
# $$
#
# Given the low clearance rate, there is a $90$ percent chance that at most 3 out of 10 criminals are arrested.
#
# We can graph this out. We will graph the previous graph again but overlay the cumulative probability on top.
# +
# Cumulative Probability for different arrest counts, before dbinom, now pbinom
cumulative_prob_of_arrests <- pbinom(zero2max, n, p)
# Data File that Includes Cumulative Probability and Mass
arrest.prob <- tibble(arrests=(zero2max), prob=prob_of_arrests, cum.prob=cumulative_prob_of_arrests)
# Control Graph Size
options(repr.plot.width = 5, repr.plot.height = 4)
# Create a Table and Graph it
# geom_text(aes(y=prob, label = sprintf('%0.3f', prob)), vjust = -0.5, size = 3) +
axis.sec.ratio <- max(cumulative_prob_of_arrests)/max(prob_of_arrests)
right.axis.color <- 'blue'
left.axis.color <- 'red'
# Probabilities
arrest.graph <- arrest.prob %>%
ggplot(aes(x=arrests)) +
geom_bar(aes(y=prob),
stat='identity', alpha=0.5, width=0.5, fill=left.axis.color) +
geom_text(aes(y=prob,
label = paste0(sprintf('%2.1f', prob*100), '%')),
vjust = -0.5, size = 3, color=left.axis.color, fontface='bold')
# Cumulative Probabilities
arrest.graph <- arrest.graph +
geom_line(aes(y=cum.prob/axis.sec.ratio),
alpha=0.25, size=1, color=right.axis.color) +
geom_point(aes(y=cum.prob/axis.sec.ratio),
alpha=0.75, size=2, shape=23, fill=right.axis.color) +
geom_text(aes(y=cum.prob/axis.sec.ratio,
label = paste0(sprintf('%2.0f', cum.prob*100), '%')),
vjust = -0.5, size = 3, color=right.axis.color, fontface='bold')
# Titles Strings etc
graph.title <- sprintf(
paste0('Number of Theft-Criminals Arrested out of %s\n',
'Prob Mass (Left) and Cumulative Prob (Right)'), n)
graph.caption <- sprintf(
paste0('Assuming the binomial properties apply\n',
'2004 Larceny/Left USA Clearance Rate = %s'), p)
graph.title.x <- 'Number of Criminals Arrests'
graph.title.y.axisleft <- 'Prob of x Arrested'
graph.title.y.axisright <- 'Prob of at most x Arrested'
# Title
arrest.graph <- arrest.graph +
labs(title = graph.title,
x = graph.title.x, y = graph.title.y.axisleft,
caption = graph.caption) +
scale_y_continuous(sec.axis =
sec_axis(~.*axis.sec.ratio, name = graph.title.y.axisright)) +
scale_x_continuous(labels = zero2max, breaks = zero2max) +
theme(axis.text.y = element_text(face='bold'),
axis.text.y.right = element_text(color = right.axis.color),
axis.text.y.left = element_text(color = left.axis.color))
# Print
print(arrest.graph)
# -
# ## Binomial Example: WWII German Soldier
#
# During WWII, 13.6 million Germans served in the German army, and 4.2 million were killed or missing. This is a death rate of [30.9 percent](https://en.wikipedia.org/wiki/World_War_II_casualties#Military_casualties_by_branch_of_service).
#
# Suppose there are many German towns that each sent 100 soldiers to the front, suppose the binomial properties apply, what is the fraction of solidiers who will return to their towns after the war?
#
# - $n=100$: suppose 100 solidiers from each German town went to join the German Army during WWII.
# - $p=1-0.309=0.691$: $p$ is the chance of survival.
# - $x$: if $x=1$, that means 1 out of 100 survived.
# +
# Parameters
n <- 100
p <- 1-0.309
# Generate Data
# A vector of different survival counts
zero2max <- 0:n
# Probability for different survival counts
prob_of_survives <- dbinom(zero2max, n, p)
# Cumulative Probability for different survival counts, before dbinom, now pbinom
cumulative_prob_of_survives <- pbinom(zero2max, n, p)
# Data File that Includes Cumulative Probability and Mass
survive.prob <- tibble(survives=(zero2max), prob=prob_of_survives, cum.prob=cumulative_prob_of_survives)
# -
# ### WWII German Soldier--Graph
# We can see the results graphically as well below. Note that the graph looks normal. This is indeed the case, when $n$ gets larger, the bionomial distribution approximates the normal distribution, with mean $n\cdot p$ and variance $n \cdot p \cdot (1-p)$.
#
# Again this is given binomial assumptions. Which means in this case that soldiers from each town has equal probability of dying. And the chance of one soldier from a town dying does not change the chance of other soldiers from the same town dying. These are unlikely to be correct assumptions, but maybe they are approximately right.
#
# We can see from the figure below that if the survival distribution follows binomial, less than 1 percent of towns should expect more than 80 out of 100 soldiers to return. And less than 1 percent of towns should expect less than 57 soldiers to return. Hence:
#
# - Given a 30.9 percent death rate, nearly all German towns will have between 57 to 80 soldiers returning from the war out the 100 they sent to join the German army.
#
# +
# Control Graph Size
options(repr.plot.width = 5, repr.plot.height = 4)
# Two axis colors
axis.sec.ratio <- max(cumulative_prob_of_survives)/max(prob_of_survives)
right.axis.color <- 'blue'
left.axis.color <- 'red'
# Probabilities
survive.graph <- survive.prob %>%
ggplot(aes(x=survives)) +
geom_bar(aes(y=prob),
stat='identity', alpha=0.5, width=0.5, fill=left.axis.color)
# Cumulative Probabilities
survive.graph <- survive.graph +
geom_line(aes(y=cum.prob/axis.sec.ratio),
alpha=0.75, size=1, color=right.axis.color)
# Titles Strings etc
graph.title <- sprintf(
paste0('Number of Surviving Soldiers out of %s from Town\n',
'Prob Mass (Left) and Cumulative Prob (Right)') ,n)
graph.caption <- sprintf(
paste0('Assuming the binomial properties apply\n',
'German Army Soldier WWII survival rate = %s'), p)
graph.title.x <- 'Number of Soldiers Survived'
graph.title.y.axisleft <- 'Prob of x survived'
graph.title.y.axisright <- 'Prob of at most x surviveed'
# Titles etc
survive.graph <- survive.graph +
labs(title = graph.title,
x = graph.title.x,
y = graph.title.y.axisleft,
caption = graph.caption) +
scale_y_continuous(sec.axis =
sec_axis(~.*axis.sec.ratio, name = graph.title.y.axisright)) +
scale_x_continuous(labels = zero2max[floor(seq(1,n,length.out=10))],
breaks = zero2max[floor(seq(1,n,length.out=10))]) +
theme(axis.text.y = element_text(face='bold'),
axis.text.y.right = element_text(color = right.axis.color),
axis.text.y.left = element_text(color = left.axis.color))
# Print
print(survive.graph)
# -
# ### WWII German Soldier--Table
# We can see from the table of results what is the distribution of the number of soldiers returning to each village in greater detail.
f_table <- function(start_idx) {
survive.subset <- round(survive.prob[seq(start_idx, start_idx+20, length.out=20),], 4)
survive.subset$prob <- paste0(survive.subset$prob*100,
'% chance EXACTLY (n=', survive.subset$survives, ') survived')
survive.subset$one.minus.cum.prob <- paste0((1-survive.subset$cum.prob)*100,
'% chance AT LEAST (n=', survive.subset$survives, ') survived')
survive.subset$cum.prob <- paste0(round((survive.subset$cum.prob)*100, 4),
'% chance AT MOST (n=', survive.subset$survives, ') survived')
return(survive.subset[,2:4])
}
lapply(c(1,21,41,61,81), f_table)
| probability_discrete/binomial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1-2.3 Intro Python
# ## Strings: input, testing, formatting
# - input() - gathering user input
# - print() formatting
# - **Quotes inside strings**
# - **Boolean string tests methods**
# - String formatting methods
# - Formatting string input()
# - Boolean `in` keyword
#
# -----
#
# ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - gather, store and use string `input()`
# - format `print()` output
# - **test string characteristics**
# - format string output
# - search for a string in a string
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## quotes inside strings
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c21bfb29-21f6-4bef-b00e-25d2f7440153/Unit1_Section2-3-Quotes_in_Strings.vtt","srclang":"en","kind":"subtitles","label":"english"}])
# ### single quotes in double quotes
# to display single quotes **`'`** in a string - double quotes can be used as the outer quotes: **`"it's time"`**
# ### double quotes in single quotes
# to display double quotes **`"`** in a string- single quotes can be used as the outer quotes: **`'Alton said "Hello"'`**
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# +
# review and run the code
# Single quote surrounded by Double
print("It's time to save your code")
# Double quote surrounded by Single
print('I said to the class "sometimes you need to shut down and restart a notebook when cells refuse to run"')
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
# - **[ ] `print()`** strings that display double and single quotation marks
# +
# [ ] using a print statement, display the text: Where's the homework?
print ("Where's the homework?")
# -
# [ ] output with double quotes: "Education is what remains after one has forgotten what one has learned in school" - <NAME>
print ('"Education is what remains after one has forgotten what one has learned in school" - <NAME>')
# >**note:**: Quotes in quotes handles only simple cases of displaying quotation marks. More complex case are covered later under *escape sequences.*
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## Boolean string tests
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"https://jupyternootbookwams.streaming.mediaservices.windows.net/dfe6e85f-8022-471c-8d92-0b1d61ebffbd/Unit1_Section2-3-Boolean_String_Methods.vtt","srclang":"en","kind":"subtitles","label":"english"}])
# methods
# - .isalpha()
# - .isalnum()
# - .istitle()
# - .isdigit()
# - .islower()
# - .isupper()
# - .startswith()
#
# type **`str`** has methods that return a Boolean (True or False) for different tests on the properties of stings.
# >```python
# "Hello".isapha()
# ```
# out:[ ] `True`
#
# `.isalpha()` returns True if all characters in the string ("Hello") are alphabetical, otherwise returns False
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# ## Boolean String Tests
# - **[ ] review and run code in each cell**
"Python".isalpha()
"3rd".isalnum()
"A Cold Stromy Night".istitle()
"1003".isdigit()
cm_height = "176"
print("cm height:",cm_height, "is all digits =",cm_height.isdigit())
print("SAVE".islower())
print("SAVE".isupper())
"Boolean".startswith("B")
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 2: multi-part</B></font>
# ### test stings with **`.isalpha()`**
# [ ] Use .isalpha() on the string "alphabetical"
"alphabetical".isalpha()
# +
# [ ] Use .isalpha() on the string: "Are spaces and punctuation Alphabetical?"
"Are spaces and punctuation Alphaberical?".isalpha()
# +
# [ ] initailize variable alpha_test with input
fir_name = input("enter first name: ")
# [ ] use .isalpha() on string variable alpha_test
fir_name.isalpha()
# -
# [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
| Python Absolute Beginner/Module_1_2.3_Absolute_Beginner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"checksum": "48770f8b5f5d3062d3badd51fcafc401", "grade": false, "grade_id": "cell-a6c4f74309fc2379", "locked": true, "schema_version": 1, "solution": false}
# # Assignment 4
# ## Description
# In this assignment you must read in a file of metropolitan regions and associated sports teams from [assets/wikipedia_data.html](assets/wikipedia_data.html) and answer some questions about each metropolitan region. Each of these regions may have one or more teams from the "Big 4": NFL (football, in [assets/nfl.csv](assets/nfl.csv)), MLB (baseball, in [assets/mlb.csv](assets/mlb.csv)), NBA (basketball, in [assets/nba.csv](assets/nba.csv) or NHL (hockey, in [assets/nhl.csv](assets/nhl.csv)). Please keep in mind that all questions are from the perspective of the metropolitan region, and that this file is the "source of authority" for the location of a given sports team. Thus teams which are commonly known by a different area (e.g. "Oakland Raiders") need to be mapped into the metropolitan region given (e.g. San Francisco Bay Area). This will require some human data understanding outside of the data you've been given (e.g. you will have to hand-code some names, and might need to google to find out where teams are)!
#
# For each sport I would like you to answer the question: **what is the win/loss ratio's correlation with the population of the city it is in?** Win/Loss ratio refers to the number of wins over the number of wins plus the number of losses. Remember that to calculate the correlation with [`pearsonr`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html), so you are going to send in two ordered lists of values, the populations from the wikipedia_data.html file and the win/loss ratio for a given sport in the same order. Average the win/loss ratios for those cities which have multiple teams of a single sport. Each sport is worth an equal amount in this assignment (20%\*4=80%) of the grade for this assignment. You should only use data **from year 2018** for your analysis -- this is important!
#
# ## Notes
#
# 1. Do not include data about the MLS or CFL in any of the work you are doing, we're only interested in the Big 4 in this assignment.
# 2. I highly suggest that you first tackle the four correlation questions in order, as they are all similar and worth the majority of grades for this assignment. This is by design!
# 3. It's fair game to talk with peers about high level strategy as well as the relationship between metropolitan areas and sports teams. However, do not post code solving aspects of the assignment (including such as dictionaries mapping areas to teams, or regexes which will clean up names).
# 4. There may be more teams than the assert statements test, remember to collapse multiple teams in one city into a single value!
# + [markdown] deletable=false editable=false nbgrader={"checksum": "369ff9ecf0ee04640574205cbc697f94", "grade": false, "grade_id": "cell-712b2b5da63d4505", "locked": true, "schema_version": 1, "solution": false}
# ## Question 1
# For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the **NHL** using **2018** data.
# + deletable=false nbgrader={"checksum": "1cac4803b02502929f5b1612d48db2b5", "grade": false, "grade_id": "cell-69b16e4386e58030", "locked": false, "schema_version": 1, "solution": true}
import pandas as pd
import numpy as np
import scipy.stats as stats
import re
def nhl_correlation():
nhl_df=pd.read_csv("assets/nhl.csv")
cities=pd.read_html("assets/wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
nhl_df.columns = [x.lower().strip() for x in nhl_df.columns]
nhl_df = nhl_df[nhl_df['year'] == 2018]
cities.columns = [x.lower().strip() for x in cities.columns]
cities.rename(columns = {"population (2016 est.)[8]": "population",
"metropolitan area": "city"}, inplace=True)
for key,value in cities.iteritems():
value = value.replace(r"-?[ ]?\[(.*?)\]", "",regex=True, inplace=True)
vals_to_replace = {'—':np.nan, "":np.nan}
for key,value in cities.iteritems():
value = value.replace(vals_to_replace, inplace=True)
cities_nhl = cities[['city', 'population', 'nhl']]
cities_nhl = cities_nhl[cities_nhl['nhl'].notna()]
cities_nhl.index = pd.RangeIndex(len(cities_nhl.index)) # This is faster than reset_index
nhl = nhl_df[['team', 'w', 'l']]
nhl = nhl.replace(r"\*", "",regex=True)
nhl.drop(nhl.index[[0,9,18,26]], inplace=True)
nhl.index = pd.RangeIndex(len(nhl.index))
nhl["team_only"]=nhl['team'].apply(lambda x: x.rsplit(" ")[-1])
nhl.loc[2, 'team_only'] = 'Maple Leafs'
nhl.loc[4, 'team_only'] = 'Red Wings'
nhl.loc[11, 'team_only'] = 'Blue Jackets'
nhl.loc[23, 'team_only'] = 'Golden Knights'
# Some cities have multiple teams and joined in one
nhl.loc[15, 'team_only'] = 'RangersIslandersDevils'
nhl.loc[14, 'team_only'] = 'RangersIslandersDevils'
nhl.loc[12, 'team_only'] = 'RangersIslandersDevils'
nhl.loc[26, 'team_only'] = 'KingsDucks'
nhl.loc[24, 'team_only'] = 'KingsDucks'
nhl[['w', 'l']] = nhl[['w', 'l']].astype(float)
nhl = nhl.groupby(by='team_only')['w','l'].mean()
nhl.reset_index(inplace=True)
data = pd.merge(cities_nhl,nhl, how='inner',left_on='nhl', right_on='team_only')
data['ratio'] = data['w']/(data['w']+data['l'])
data['population'] = data['population'].astype(float)
# raise NotImplementedError()
population_by_region = list(data['population']) # pass in metropolitan area population from cities
win_loss_by_region = list(data['ratio']) # pass in win/loss ratio from nhl_df in the same order as cities["Metropolitan area"]
assert len(population_by_region) == len(win_loss_by_region), "Q1: Your lists must be the same length"
assert len(population_by_region) == 28, "Q1: There should be 28 teams being analysed for NHL"
return (stats.pearsonr(population_by_region, win_loss_by_region))[0]
# -
nhl_correlation()
# + deletable=false editable=false nbgrader={"checksum": "52a581df513c71153e105b93764cda4b", "grade": true, "grade_id": "cell-ebe0b2dfe1067e63", "locked": true, "points": 20, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "988912cae4968d81473f46d783e79c16", "grade": false, "grade_id": "cell-cb964e690298b71d", "locked": true, "schema_version": 1, "solution": false}
# ## Question 2
# For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the **NBA** using **2018** data.
# + deletable=false nbgrader={"checksum": "9394222aafc8ccab0a228098ba0d6010", "grade": false, "grade_id": "cell-5a5f21279e3d3572", "locked": false, "schema_version": 1, "solution": true}
import pandas as pd
import numpy as np
import scipy.stats as stats
import re
def nba_correlation():
nba_df=pd.read_csv("assets/nba.csv")
cities=pd.read_html("assets/wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
nba_df.columns = [x.lower().strip() for x in nba_df.columns]
nba_df = nba_df[nba_df['year'] == 2018]
cities.columns = [x.lower().strip() for x in cities.columns]
cities.rename(columns = {"population (2016 est.)[8]": "population",
"metropolitan area": "city"}, inplace=True)
for key,value in cities.iteritems():
value = value.replace(r"-?[ ]?\[(.*?)\]", "",regex=True, inplace=True)
vals_to_replace = {'—':np.nan, "":np.nan}
for key,value in cities.iteritems():
value = value.replace(vals_to_replace, inplace=True)
cities_nba = cities[['city', 'population', 'nba']]
cities_nba = cities_nba[cities_nba['nba'].notna()]
cities_nba.index = pd.RangeIndex(len(cities_nba.index)) # This is faster than reset_index
nba_df = nba_df[['team', 'w', 'l']]
nba_df['team'] = nba_df['team'].replace(r"\((.*?)\)", "",regex=True)
nba_df['team'] = nba_df['team'].replace(r"(\*\s+)", "",regex=True)
nba_df["team_only"]=nba_df['team'].apply(lambda x: x.rsplit(" ")[-1])
nba_df["team_only"] = nba_df["team_only"].replace(r"\s+", "",regex=True)
# # Some cities have multiple teams and joined in one
nba_df.loc[24, 'team_only'] = 'LakersClippers'
nba_df.loc[25, 'team_only'] = 'LakersClippers'
nba_df.loc[17, 'team_only'] = 'Trail Blazers'
nba_df.loc[10, 'team_only'] = 'KnicksNets'
nba_df.loc[11, 'team_only'] = 'KnicksNets'
nba_df[['w', 'l']] = nba_df[['w', 'l']].astype(float)
nba_df = nba_df.groupby(by='team_only')['w','l'].mean()
nba_df.reset_index(inplace=True)
data = pd.merge(cities_nba,nba_df, how='inner',left_on='nba', right_on='team_only')
data['ratio'] = data['w']/(data['w']+data['l'])
data['population'] = data['population'].astype(float)
# raise NotImplementedError()
population_by_region = list(data['population']) # pass in metropolitan area population from cities
win_loss_by_region = list(data['ratio']) # pass in win/loss ratio from nba_df in the same order as cities["Metropolitan
assert len(population_by_region) == len(win_loss_by_region), "Q2: Your lists must be the same length"
assert len(population_by_region) == 28, "Q2: There should be 28 teams being analysed for NBA"
return (stats.pearsonr(population_by_region, win_loss_by_region))[0]
# -
nba_correlation()
# + deletable=false editable=false nbgrader={"checksum": "bbdeb8eb22f525a34c10dc8798324e42", "grade": true, "grade_id": "cell-e573b2b4a282b470", "locked": true, "points": 20, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "1a1a5809f675ca033086422007cd73bd", "grade": false, "grade_id": "cell-96e15e4335df78f4", "locked": true, "schema_version": 1, "solution": false}
# ## Question 3
# For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the **MLB** using **2018** data.
# + deletable=false nbgrader={"checksum": "27e8c0da6c9fa0dffc10488314335b6c", "grade": false, "grade_id": "cell-33b00fc3f3467b0c", "locked": false, "schema_version": 1, "solution": true}
import pandas as pd
import numpy as np
import scipy.stats as stats
import re
def mlb_correlation():
mlb_df=pd.read_csv("assets/mlb.csv")
cities=pd.read_html("assets/wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
cities.columns = [x.lower().strip() for x in cities.columns]
cities.rename(columns = {"population (2016 est.)[8]": "population",
"metropolitan area": "city"}, inplace=True)
mlb_df.columns = [x.lower().strip() for x in mlb_df.columns]
mlb_df = mlb_df[mlb_df['year'] == 2018]
for key,value in cities.iteritems():
value = value.replace(r"-?[ ]?\[(.*?)\]", "",regex=True, inplace=True)
vals_to_replace = {'—':np.nan, "":np.nan}
for key,value in cities.iteritems():
value = value.replace(vals_to_replace, inplace=True)
cities_mlb = cities[['city', 'population', 'mlb']]
cities_mlb = cities_mlb[cities_mlb['mlb'].notna()]
cities_mlb.index = pd.RangeIndex(len(cities_mlb.index)) # This is faster than reset_index
mlb_df = mlb_df[['team', 'w', 'l']]
mlb_df["team_only"]=mlb_df['team'].apply(lambda x: x.rsplit(" ")[-1])
# # Some cities have multiple teams and joined in one
mlb_df.loc[3, 'team_only'] = 'Blue Jays'
mlb_df.loc[0, 'team_only'] = 'Red Sox'
mlb_df.loc[21, 'team_only'] = 'CubsWhite Sox'
mlb_df.loc[8, 'team_only'] = 'CubsWhite Sox'
mlb_df.loc[1, 'team_only'] = 'YankeesMets'
mlb_df.loc[18, 'team_only'] = 'YankeesMets'
mlb_df.loc[28, 'team_only'] = 'GiantsAthletics'
mlb_df.loc[11, 'team_only'] = 'GiantsAthletics'
mlb_df.loc[13, 'team_only'] = 'DodgersAngels'
mlb_df.loc[25, 'team_only'] = 'DodgersAngels'
mlb_df[['w', 'l']] = mlb_df[['w', 'l']].astype(float)
mlb_df = mlb_df.groupby(by='team_only')['w','l'].mean()
mlb_df.reset_index(inplace=True)
data = pd.merge(cities_mlb,mlb_df, how='inner',left_on='mlb', right_on='team_only')
data['ratio'] = data['w']/(data['w']+data['l'])
data['population'] = data['population'].astype(float)
# raise NotImplementedError()
population_by_region = list(data['population']) # pass in metropolitan area population from cities
win_loss_by_region = list(data['ratio']) # pass in win/loss ratio from mlb_df in the same order as cities["Metropolitan
assert len(population_by_region) == len(win_loss_by_region), "Q3: Your lists must be the same length"
assert len(population_by_region) == 26, "Q3: There should be 26 teams being analysed for MLB"
return (stats.pearsonr(population_by_region, win_loss_by_region))[0]
# -
mlb_correlation()
# + deletable=false editable=false nbgrader={"checksum": "cda33b094ba19ccc37a481e0dd29e0bc", "grade": true, "grade_id": "cell-764d4476f425c5a2", "locked": true, "points": 20, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "6977a6da9ed6d8b7a0b7e37bbeda709b", "grade": false, "grade_id": "cell-793df6c04dfb126e", "locked": true, "schema_version": 1, "solution": false}
# ## Question 4
# For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the **NFL** using **2018** data.
# + deletable=false nbgrader={"checksum": "c4914ad1e119278ec2bd567c52640b66", "grade": false, "grade_id": "cell-8ccebc209aeec8d9", "locked": false, "schema_version": 1, "solution": true}
import pandas as pd
import numpy as np
import scipy.stats as stats
import re
def nfl_correlation():
nfl_df=pd.read_csv("assets/nfl.csv")
cities=pd.read_html("assets/wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
cities.columns = [x.lower().strip() for x in cities.columns]
cities.rename(columns = {"population (2016 est.)[8]": "population",
"metropolitan area": "city"}, inplace=True)
nfl_df.columns = [x.lower().strip() for x in nfl_df.columns]
nfl_df = nfl_df[nfl_df['year'] == 2018]
for key,value in cities.iteritems():
value = value.replace(r"-?[ ]?\[(.*?)\]", "",regex=True, inplace=True)
vals_to_replace = {'—':np.nan, "":np.nan}
for key,value in cities.iteritems():
value = value.replace(vals_to_replace, inplace=True)
cities_nfl = cities[['city', 'population', 'nfl']]
cities_nfl = cities_nfl[cities_nfl['nfl'].notna()]
cities_nfl.index = pd.RangeIndex(len(cities_nfl.index)) # This is faster than reset_index
nfl_df = nfl_df[['team', 'w', 'l']]
nfl_df= nfl_df.replace(r"\*|\+", "",regex=True)
nfl_df.drop(nfl_df.index[[0,5,10,15,20,25,30,35]], inplace=True)
nfl_df.index = pd.RangeIndex(len(nfl_df.index))
nfl_df["team_only"]=nfl_df['team'].apply(lambda x: x.rsplit(" ")[-1])
nfl_df.loc[3, 'team_only'] = 'GiantsJets'
nfl_df.loc[19, 'team_only'] = 'GiantsJets'
nfl_df.loc[13, 'team_only'] = 'RamsChargers'
nfl_df.loc[28, 'team_only'] = 'RamsChargers'
nfl_df.loc[15, 'team_only'] = '49ersRaiders'
nfl_df.loc[30, 'team_only'] = '49ersRaiders'
nfl_df[['w', 'l']] = nfl_df[['w', 'l']].astype(float)
nfl_df = nfl_df.groupby(by='team_only')['w','l'].mean()
nfl_df.reset_index(inplace=True)
data = pd.merge(cities_nfl,nfl_df, how='inner',left_on='nfl', right_on='team_only')
data['ratio'] = data['w']/(data['w']+data['l'])
data['population'] = data['population'].astype(float)
# raise NotImplementedError()
population_by_region = list(data['population']) # pass in metropolitan area population from cities
win_loss_by_region = list(data['ratio']) # pass in win/loss ratio from nfl_df in the same order as cities["Metropolitan
assert len(population_by_region) == len(win_loss_by_region), "Q4: Your lists must be the same length"
assert len(population_by_region) == 29, "Q4: There should be 29 teams being analysed for NFL"
return (stats.pearsonr(population_by_region, win_loss_by_region))[0]
# -
nfl_correlation()
# + deletable=false editable=false nbgrader={"checksum": "e9415d6399aa49e3a1a60813afdefa3b", "grade": true, "grade_id": "cell-de7b148b9554dbda", "locked": true, "points": 20, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "b02d5cd3273f561e4ae939bb2a41740c", "grade": false, "grade_id": "cell-97b49d8639e908c4", "locked": true, "schema_version": 1, "solution": false}
# ## Question 5
# In this question I would like you to explore the hypothesis that **given that an area has two sports teams in different sports, those teams will perform the same within their respective sports**. How I would like to see this explored is with a series of paired t-tests (so use [`ttest_rel`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_rel.html)) between all pairs of sports. Are there any sports where we can reject the null hypothesis? Again, average values where a sport has multiple teams in one region. Remember, you will only be including, for each sport, cities which have teams engaged in that sport, drop others as appropriate. This question is worth 20% of the grade for this assignment.
# + deletable=false nbgrader={"checksum": "6d78c961eb66f8d8c81f06d33ae8f393", "grade": false, "grade_id": "cell-92f25f44b8d1179f", "locked": false, "schema_version": 1, "solution": true}
import pandas as pd
import numpy as np
import scipy.stats as stats
import re
mlb_df=pd.read_csv("assets/mlb.csv")
nhl_df=pd.read_csv("assets/nhl.csv")
nba_df=pd.read_csv("assets/nba.csv")
nfl_df=pd.read_csv("assets/nfl.csv")
cities=pd.read_html("assets/wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
def sports_team_performance():
# YOUR CODE HERE
raise NotImplementedError()
# Note: p_values is a full dataframe, so df.loc["NFL","NBA"] should be the same as df.loc["NBA","NFL"] and
# df.loc["NFL","NFL"] should return np.nan
sports = ['NFL', 'NBA', 'NHL', 'MLB']
p_values = pd.DataFrame({k:np.nan for k in sports}, index=sports)
assert abs(p_values.loc["NBA", "NHL"] - 0.02) <= 1e-2, "The NBA-NHL p-value should be around 0.02"
assert abs(p_values.loc["MLB", "NFL"] - 0.80) <= 1e-2, "The MLB-NFL p-value should be around 0.80"
return p_values
# + deletable=false editable=false nbgrader={"checksum": "2a596ab421a45cc01168d10e8fbb8f89", "grade": true, "grade_id": "cell-fb4b9cb5ff4570a6", "locked": true, "points": 20, "schema_version": 1, "solution": false}
| Assignments Submitted/Week 4/assignment4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import NotebookImport
from DX_screen import *
v = rna_df.ix['ZWINT']
v = v.unstack()[['01','11']].dropna(subset=['01'])
# +
fig, axs = subplots(1,3, figsize=(5,10))
ax=axs[0]
ax.imshow(v, aspect=1./500, interpolation='nearest', cmap=cm.RdYlBu)
ax.set_yticks([])
ax.set_ylabel('mRNA Gene Expression')
ax.set_xticks([0,1])
ax.set_xticklabels(['Tumor','Normal'], rotation=330, ha='left')
ax=axs[1]
ax.imshow(v.dropna(), aspect=1./500, interpolation='nearest', cmap=cm.RdYlBu)
ax.set_yticks([])
#ax.set_ylabel('Patients')
ax.set_xticks([0,1])
ax.set_xticklabels(['Tumor','Normal'], rotation=330, ha='left')
ax = axs[2]
dx = (v['01'] > v['11']).ix[v.dropna().index]
ax.imshow(pd.DataFrame([dx]).T,
aspect=1./1000, interpolation='nearest')
ax.set_yticks([])
ax.set_ylabel('')
ax.set_xticks([0])
ax.set_xticklabels(['Tumor >\n Normal'], rotation=0, ha='center')
fig.tight_layout()
fig.savefig(FIGDIR + 'dx_pipeline.pdf')
# -
violin_plot_series(v.stack())
| Notebooks/Figures/DX_Pipeline_Figure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
# title: LiDAR
# tags: [geo, geospatial,]
# keywords: geo-python gdal QGIS
# summary: "Digest lidar data in Python"
# sidebar: mydoc_sidebar
# permalink: lidar.html
# folder: geopy
# ---
#
#
# {% include tip.html content="Use [*QGIS*](geo_software.html#qgis) to display geospatial data products." %}
#
# ## Laspy
#
# * [Documentation](https://laspy.readthedocs.io/en/latest/)
# * [Tutorials](https://laspy.readthedocs.io/en/latest/tut_background.html)
#
# {% include windows.html content="In order to work with the *LAStools* plugin in *QGIS*, download `LAStools.zip` from [https://www.cs.unc.edu/~isenburg/lastools/](https://www.cs.unc.edu/~isenburg/lastools/) (not a freeware), and unpack the zip folder to `C:\LAStools\`. Make sure that the directory `C:\LAStools\bin` exists and contains `las2shp.exe`." %}
#
#
# ### Install
#
# Type in *Anaconda Prompt*:
#
# ```
# conda install -c conda-forge laspy
# ```
#
# Find advanced installation instructions on [laspy.readthedocs.io](https://laspy.readthedocs.io/en/latest/tut_part_1.html).
#
# ### Usage
# *laspy* uses *numpy* arrays to store data and this is why both libraries need be imported to read a *las* file:
import numpy
import laspy
las_file_name = os.path.abspath("") + "/data/subsample.las"
# Then, we can load a *las* file with `file_object = laspy.file.File(`file_name`, mode="rw")`. Allowed `mode`s are `"r"` (read), `"w"` (write), and `"rw"` (read-write).
#
# To read essential data (points with attributes) from a *las* file, extract the points (*numpy* array) and have a look at the *dtypes*. The following code block uses a `with` statement to avoid that the *las* file is locked by *Python*:
# +
with laspy.file.File(las_file_name, mode="r") as las_file:
pts = las_file.points
print(pts.dtype)
# -
# `('X', '<i4'), ('Y', '<i4'), ('Z', '<i4')` tells us, that the first three array entries are a point's X, Y, and Z coordinates, , followed by `'intensity'` and so on. So the first row of points looks like this:
# The *las* file has many more attribute such as color ranges (*Red*, *Green*, *Blue*), spatial reference, or *NIR* (near-infrared). The following code block extracts and prints some of the file properties including its header:
with laspy.file.File(las_file_name, mode="r") as las_file:
print(dir(las_file))
headers = [str(spec.name) for spec in las_file.header.header_format]
print(", ".join(headers))
# To access geospatial and point properties
with laspy.file.File(las_file_name, mode="rw") as las_file:
pts = las_file.points
x_dim = las_file.X
y_dim = las_file.Y
scale = las_file.header.scale[0]
offset = las_file.header.offset[0]
print("x_dim: " + str(x_dim))
print("y_dim: " + str(y_dim))
print("scale: " + str(scale))
print("offset: " + str(offset))
# +
import copy
las_file = laspy.file.File(las_file_name, mode="r")
print(dir(las_file.header.vlrs))
for spec in las_file.reader.point_format:
in_spec = las_file.reader.get_dimension(spec.name)
print(in_spec)
print(spec.name)
# -
| intro2laspy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import random
from sklearn.externals import joblib
from scipy import ndimage
import matplotlib.pyplot as plt
# #%matplotlib inline
def shuffle_in_unison_inplace(a, b):
assert len(a) == len(b)
p = np.random.permutation(len(a))
return a[p], b[p]
def getData(pct=0, cast=False):
# Read data
train = pd.read_csv('../../data/train/train.csv')
labels = train.ix[:,0].values
a_train = (train.ix[:,1:].values)/255.0
a_test = (pd.read_csv('../../data/test/test.csv').values)/255.0
b_train = labels
b_test = np.array([random.randint(0,10) for i in range(a_test.shape[0])])
a,b = shuffle_in_unison_inplace(a_train, b_train)
X_train, y_train = a[pct*a.shape[0]/10:, :], b[pct*a.shape[0]/10:]
X_valid, y_valid = a[:pct*a.shape[0]/10, :], b[:pct*a.shape[0]/10]
X_test, y_test = a_test, b_test
if cast:
return (X_train.astype('float32'), y_train.astype('int32'),
X_valid.astype('float32'), y_valid.astype('int32'),
X_test.astype('float32'), y_test.astype('int32'))
else:
return (X_train, y_train, X_valid, y_valid, X_test, y_test)
def getData2(pct=0, cast=False):
# Read data
train = pd.read_csv('../../data/train/train.csv')
labels = train.ix[:,0].values
a_train = np.array(joblib.load('../../data/train/train2.csv'))
a_test = np.array(joblib.load('../../data/test/test2.csv'))
b_train = labels
b_test = np.array([random.randint(0,10) for i in range(a_test.shape[0])])
a,b = shuffle_in_unison_inplace(a_train, b_train)
X_train, y_train = a[pct*a.shape[0]/10:, :], b[pct*a.shape[0]/10:]
X_valid, y_valid = a[:pct*a.shape[0]/10, :], b[:pct*a.shape[0]/10]
X_test, y_test = a_test, b_test
if cast:
return (X_train.astype('float32'), y_train.astype('int32'),
X_valid.astype('float32'), y_valid.astype('int32'),
X_test.astype('float32'), y_test.astype('int32'))
else:
return (X_train, y_train, X_valid, y_valid, X_test, y_test)
def getDataRot(pct=0, cast=False):
# Read data
train = pd.read_csv('../../data/train/train.csv')
labels = train.ix[:,0].values
a_train1 = (train.ix[:,1:].values)/255.0
a_train2 = np.array(joblib.load('../../data/train/trainRot1.csv'))
a_train3 = np.array(joblib.load('../../data/train/trainRot2.csv'))
a_train = np.concatenate((a_train1, a_train2, a_train3))
b_train1 = labels
b_train2 = labels
b_train3 = labels
b_train = np.concatenate((b_train1, b_train2, b_train3))
a_test = (pd.read_csv('../../data/test/test.csv').values)/255.0
#a_test2 = np.array(joblib.load('../../data/test/testRot1.csv'))
#a_test3 = np.array(joblib.load('../../data/test/testRot2.csv'))
#a_test = np.concatenate((a_test1, a_test2, a_test3))
b_test = np.array([random.randint(0,10) for i in range(a_test.shape[0])])
#b_test2 = np.array([random.randint(0,10) for i in range(a_test.shape[0])])
#b_test3 = np.array([random.randint(0,10) for i in range(a_test.shape[0])])
#b_test = np.concatenate((b_test1, b_test2, b_test3))
a,b = shuffle_in_unison_inplace(a_train, b_train)
X_train, y_train = a[pct*a.shape[0]/10:, :], b[pct*a.shape[0]/10:]
X_valid, y_valid = a[:pct*a.shape[0]/10, :], b[:pct*a.shape[0]/10]
X_test, y_test = a_test, b_test
if cast:
return (X_train.astype('float32'), y_train.astype('int32'),
X_valid.astype('float32'), y_valid.astype('int32'),
X_test.astype('float32'), y_test.astype('int32'))
else:
return (X_train, y_train, X_valid, y_valid, X_test, y_test)
# +
#a_test_rot1 = [ndimage.interpolation.rotate(i.reshape(28,28), 15.)[3:-3, 3:-3].reshape(784) for i in a_test]
#a_test_rot2 = [ndimage.interpolation.rotate(i.reshape(28,28), -15.)[3:-3, 3:-3].reshape(784) for i in a_test]
#joblib.dump(a_test_rot2, '../../data/test/testRot2.csv', compress=7)
# -
| python/notebook/preprocess_data_lib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !pip install pymysql
# # !pip install plotly
# -
from sqlalchemy import create_engine
import pandas as pd
import seaborn as sns # For creating plots
# import matplotlib.ticker as mtick # For specifying the axes tick format
import matplotlib.pyplot as plt
# %matplotlib inline
from scipy import stats
from matplotlib import rcParams
# +
# db_host = ''
# username = ''
# user_pass = ''
# db_name = 'project'
# conn = create_engine('mysql+pymysql://'+username+':'+user_pass+'@'+db_host+'/'+db_name)
data=pd.read_csv('cleaned_dataset.csv')
# +
# query = "select * from telecom_churn_data"
# +
# data = pd.read_sql(query,conn)
# -
data.shape
data.columns = ['State','Account_Length','Area_Code','Phone','International_Plan'
,'VMail_Plan',
'VMail_Message','Day_Mins','Day_Calls','Day_Charge','Eve_Mins','Eve_Calls',
'Eve_Charge','Night_Mins','Night_Calls',
'Night_Charge','International_Mins','International_calls','International_Charge'
,'CustServ_Calls','Churn'];
data.info()
data.head()
# data.sort_values(['Phone'], ascending=True)
data.drop("Phone",axis=1,inplace=True)
data.drop("Area_Code",axis=1,inplace=True)
data.head()
data.Account_Length=data.Account_Length.astype('int64')
# data.Area_Code=data.Area_Code.astype('int64')
data.VMail_Message=data.VMail_Message.astype('int64')
data.Day_Mins=data.Day_Mins.astype('float64')
data.Day_Calls=data.Day_Calls.astype('int64')
data.Day_Charge=data.Day_Charge.astype('float64')
data.Eve_Mins=data.Eve_Mins.astype('float64')
data.Eve_Calls=data.Eve_Calls.astype('int64')
data.Eve_Charge=data.Eve_Charge.astype('float64')
data.Night_Mins=data.Night_Mins.astype('float64')
data.Night_Calls=data.Night_Calls.astype('int64')
data.Night_Charge=data.Night_Charge.astype('float64')
data.International_Mins=data.International_Mins.astype('float64')
data.International_calls=data.International_calls.astype('int64')
data.International_Charge=data.International_Charge.astype('float64')
data.CustServ_Calls=data.CustServ_Calls.astype('int64')
# +
# data.isnull().sum()
# -
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
data.State = enc.fit_transform(data.State)
data.VMail_Plan = enc.fit_transform(data.VMail_Plan)
data.International_Plan = enc.fit_transform(data.International_Plan)
data.Churn = enc.fit_transform(data.Churn)
# data['Churn'].replace(to_replace='Yes', value=1, inplace=True)
# data['Churn'].replace(to_replace='No', value=0, inplace=True)
data.head()
data.info()
data.isnull().sum().sum()
df_dummies = pd.get_dummies(data)
df_dummies.head()
# # EDA
import seaborn as sns # For creating plots
import matplotlib.ticker as mtick # For specifying the axes tick format
import matplotlib.pyplot as plt
import io
# import plotly.offline as py#visualization
# py.init_notebook_mode(connected=True)#visualization
# import plotly.graph_objs as go#visualization
# import plotly.tools as tls#visualization
# import plotly.figure_factory as ff#visualization
plt.figure(figsize=(15,8))
df_dummies.corr()['Churn'].sort_values(ascending = False).plot(kind='bar')
# data.sort_values(['Day_Charge'], ascending=True)
data.describe()
import seaborn as sns
plt.rcParams['figure.figsize'] = (8, 6)
sns.countplot(x='International_Plan', hue='Churn', data=data);
ax = sns.distplot(data['Account_Length'], hist=True, kde=False,
bins=int(180/5), color = 'darkblue',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 4})
ax.set_ylabel('# of Customers')
ax.set_xlabel('Account_Length (months)')
ax.set_title('# of Customers by their tenure')
# +
colors = ['#4D3425','#E4512B']
ax = (data['Account_Length'].value_counts()*100.0 /len(data)).plot(kind='bar',
stacked = True,
rot = 0,
color = colors)
ax.yaxis.set_major_formatter(mtick.PercentFormatter())
ax.set_ylabel('% Customers')
ax.set_xlabel('Account_Length')
ax.set_ylabel('% Customers')
ax.set_title('Distribution')
# create a list to collect the plt.patches data
totals = []
# find the values and append to list
for i in ax.patches:
totals.append(i.get_width())
# set individual bar lables using above list
total = sum(totals)
for i in ax.patches:
# get_width pulls left or right; get_y pushes up or down
ax.text(i.get_x()+.15, i.get_height()-3.5, \
str(round((i.get_height()/total), 1))+'%',
fontsize=12,
color='white',
weight = 'bold')
# -
corr = data.corr()
sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values, annot = True, annot_kws={'size':12})
heat_map=plt.gcf()
heat_map.set_size_inches(20,15)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.show()
from collections import Counter
col=data.columns
for i in col:
print('Mode of--',i,'-- Occurance of',stats.mode(data[i])[0],'is',stats.mode(data[i])[1])
data.hist(figsize=(10,10))
# X = data.loc[:,['State','Account_Length','Area_Code','International_Plan', 'VMail_Plan', 'VMail_Message',
# 'Day_Mins', 'Day_Calls', 'Day_Charge', 'Eve_Mins', 'Eve_Calls', 'Eve_Charge', 'Night_Mins',
# 'Night_Calls','Night_Charge','International_Mins','International_calls',
# 'International_Charge','CustServ_Calls']
# ]
X = data.loc[:,['Account_Length','International_Plan', 'VMail_Plan', 'VMail_Message',
'Day_Charge', 'Eve_Charge'
,'Night_Charge',
'International_Charge','CustServ_Calls']
]
y = data.Churn
X.shape
model_rM.feature_importances_
# +
# data_
# -
# # Basic Model implementation
# # Random Forest
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
X_train, X_test,y_train, y_test = train_test_split(X,y,test_size=0.3,
random_state=10)
model_rM= RandomForestClassifier(random_state=4,max_depth=20)
model_rM.fit(X_train,y_train)
y_predict_rm = model_rM.predict(X_test)
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
print(accuracy_score(y_test, y_predict_rm))
pd.crosstab(y_test, y_predict_rm)
importances = model_rM.feature_importances_
weights = pd.Series(importances,
index=X.columns.values)
weights.sort_values()[-10:].plot(kind = 'barh')
print(classification_report(y_test,y_predict_rm))
# # Logistic Regression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
model_lr = LogisticRegression()
model_lr.fit(X_train,y_train)
y_predict_lr=model_lr.predict(X_test)
print(classification_report(y_test,y_predict_lr))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_predict_lr)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_predict_lr))
pd.crosstab(y_test, y_predict)
print(classification_report(y_test,y_predict_lr))
y_predict_prob = model_lr.predict_proba(X_test)
pd.DataFrame(y_predict_prob)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn import model_selection
models = []
models.append(('LR', LogisticRegression()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('RF', RandomForestClassifier()))
models.append(('SVM', SVC()))
results = []
names = []
for name,model in models:
kfold = model_selection.KFold(n_splits=10)
cv_result = model_selection.cross_val_score(model,X_train,y_train, cv = kfold, scoring = "accuracy")
names.append(name)
results.append(cv_result)
for i in range(len(names)):
print(names[i],results[i].mean())
from sklearn.externals import joblib
joblib.dump(model_rM, "Telecom-churn.ml")
# inputs: 'Account_Length','International_Plan', 'VMail_Plan', 'VMail_Message'
# , 'Day_Charge','Eve_Charge'
# ,'Night_Charge',
# 'International_Charge','CustServ_Calls'
# +
loaded_model = joblib.load("Telecom-churn.ml")
# result = loaded_model.score(y_predict)
y_predict_churn = loaded_model.predict([[200,0,0,2,20.0,45,10.0,150.0,5]])
print(y_predict_churn)
# -
| Telecom-ChurnV1_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import os
import time
import json
sys.path.append(".")
os.environ["JUPYTER_PATH"] = "."
CLIENT_ID = "e6c75d97-532a-4c88-b031-8584a319fa3e"
from globus_automate_client import (
create_action_client,
create_flows_client,
graphviz_format,
state_colors_for_log,
)
from ipywidgets import widgets
from IPython.display import display, display_svg, clear_output
import json
import time
# -
# ## Important: A Note on Authentication and Authorization
#
# * All interactions between users and services with the Globus Automate Platform are governed by use of Authentication and Authorization using the Globus Auth system.
# * In particular, this means that consent must be given by the user for each interaction taking place on their part, including in this Notebook.
# * The first time you interact with each service such as the Flow service, and Action, or even a Flow instance, an additional browser window will open for you to Consent to proceeding..
# * You may close the additional window after completing the Consent.
# # Globus Automate: Flows and Actions
#
# ## Flow Definition
#
# * Flows are composed of *Action* invocations
# * Each Action invocation reads from and contributes back to the *Flow State* as referenced by the `ResultPath` properties of an Action.
# +
flow_definition = {
"Comment": "Two step transfer from Jupyter",
"StartAt": "Transfer1",
"States": {
"Transfer1": {
"Comment": "Initial Transfer from Campus to DTN in DMZ",
"Type": "Action",
"ActionUrl": "https://actions.globus.org/transfer/transfer",
"Parameters": {
"source_endpoint_id.$": "$.source_endpoint",
"destination_endpoint_id.$": "$.intermediate_endpoint",
"transfer_items": [
{
"source_path.$": "$.source_path",
"destination_path.$": "$.intermediate_path",
"recursive": True,
}
],
},
"ResultPath": "$.Transfer1Result",
"Next": "Transfer2",
},
"Transfer2": {
"Comment": "Transfer from DMZ to dataset repository",
"Type": "Action",
"ActionUrl": "https://actions.globus.org/transfer/transfer",
"Parameters": {
"source_endpoint_id.$": "$.intermediate_endpoint",
"destination_endpoint_id.$": "$.destination_endpoint",
"transfer_items": [
{
"source_path.$": "$.intermediate_path",
"destination_path.$": "$.destination_path",
"recursive": True,
}
],
},
"ResultPath": "$.Transfer2Result",
"End": True,
},
},
}
input_schema = {
"type": "object",
"additionalProperties": False,
"properties": {
"source_endpoint": {"type": "string"},
"source_path": {"type": "string"},
"intermediate_endpoint": {"type": "string"},
"intermediate_path": {"type": "string"},
"destination_endpoint": {"type": "string"},
"destination_path": {"type": "string"},
},
"required": [
"source_endpoint",
"source_path",
"intermediate_endpoint",
"intermediate_path",
"destination_endpoint",
"destination_path",
],
}
# -
# * This flow composes two transfers into a single logical operation
# * Suitable, for example, for doing a two stage transfer between a local campus endpoint, a DMZ data transfer endpoint, and a dataset repository.
# * Each step in the Flow uses the same Action: Transfer which is referenced by URL
# * Source and destination information for the Transfer state are given in `Parameters` and `ResultPath`
#
# * The `input_schema` defines, in JSONSchema format, the required input to the Flow. In this case, it is simply a list of string values indicating the source, intermediate, and destination endpoints and paths for the two step Transfer operation.
# * When the input schema is provided with a Flow, the Flow service will validate the user's input using the schema prior to running the Flow. Thus, errors which may have resulted due to improper input can be caught before the Flow is run.
# * Providing an input schema is not required when defining a Flow, but it is encouraged as it allows validation, and in the future may also provide hints to tools and user interface elements which help users creating proper input for running the Flow.
#
# Next we _deploy_ the Flow so that we can execute below
#
flows_client = create_flows_client(CLIENT_ID)
flow = flows_client.deploy_flow(
flow_definition, title="Example Two Step Transfer Flow", input_schema=input_schema
)
flow_id = flow["id"]
print(f"Newly created flow with id:\n{flow_id}")
# * The newly created flow has an id which we use for referencing it, such as running it later.
#
# * We can also use the id to lookup the Flow in the Flows service and get a rough visualization of the Flow's contents.
get_resp = flows_client.get_flow(flow_id)
flow_def = get_resp.data["definition"]
flow_graph = graphviz_format(flow_def)
display(flow_graph)
# * The displayed output represents each Action state as a rectangle and provides the name of the state and the Parameters which will be used when it is run.
# * The Parameters reference the input shown below, and the required values are specific to the particular Action, Transfer, being run in the Action state.
#
# * We prepare the Input and run the Flow below.
# * As defined here, the following need to be prepared using Globus Transfer:
# * On the endpoint 'go#ep1' a folder called `campus_source` containing a child folder called `dataset1`
# * A small data file may be placed in the `dataset1` folder to show data movement.
# * On the endpoint 'go#ep2' a folder called `campus_source`
# * On the endpoint 'go#ep1' a folder called `dataset_repository`
# * We run the flow and monitor the Flow's execution
# * Periodically, we poll to get the progress of the execution and represent the progress with a colored representation of the same flow visualization shown above.
# * Yellow or Orange represent states which are running or have not run yet
# * Green represents completed states
# * Upon completion, all details of the execution are displayed below the visualization.
# +
flow_input = {
"source_endpoint": "go#ep1",
"source_path": "/~/campus_source/dataset1/",
"intermediate_endpoint": "go#ep2",
"intermediate_path": "/~/campus_source/dataset1/",
"destination_endpoint": "go#ep1",
"destination_path": "/~/dataset_repository/dataset1",
}
action_id = ""
run_resp = flows_client.run_flow(flow_id, None, flow_input)
action_id = run_resp.data["action_id"]
print(f"action_id: {action_id}")
while True:
status_resp = flows_client.flow_action_status(flow_id, None, action_id)
log_resp = flows_client.flow_action_log(flow_id, None, action_id, limit=100)
state_colors = state_colors_for_log(log_resp.data["entries"])
run_graph = graphviz_format(flow_def, state_colors)
print(
f'Action {action_id} is in state {status_resp.data["status"]} at time {time.ctime()}'
)
display(run_graph)
if status_resp.data["status"] in ("SUCCEEDED", "FAILED"):
break
print(f'Recent state details: {status_resp["details"]}')
time.sleep(5)
clear_output(wait=True)
print(f"Final Result: {json.dumps(status_resp['details'], indent=2)}")
# -
| examples/notebooks/flow-deploy-and-monitor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print("Hello World") #Use("") to print thing inside()
print(2+2) #Use() to perform computation
print("2+2") #does not process the calculations since it is under ""
print(2+2-6+50*234)
a = 2+2 # introducing variable(a) to store the data which can be called later to perform calc
b = 2+2-6+50*234
print(a+b)
print(a-2)
input("Whats your number" )
num = input("Whats your number" )
print(num)
print ("Hi")
n = input("Whats your name?")
print(n + " is very beautiful")
na_me = input("Whats your name?")
if na_me == "shreeya": #Use : after if stamement
print("You are very beautiful")
else: # Use : after else followed by if_else statement
print("You are fine")
q = input("Kata gako tme?")
if q == "khelera ako":
print("aba pitai khais")
elif q == "padhera ako":
print("tmelai nachineko hora")
elif q == "Momo khayera ako":
print("kati vannu tyo momo vanne jantu nakhanu vanera")
else:
print("la thikai cha jaa pad")
| basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pangeo
# language: python
# name: conda_pangeo
# ---
# # Hurricane Ike Maximum Water Levels
# Compute the maximum water level during Hurricane Ike on a 9 million node triangular mesh storm surge model. Plot the results with Datashader.
# +
import xarray as xr
import numpy as np
import pandas as pd
import hvplot.xarray
import fsspec
from dask.distributed import Client, progress
# +
#from dask_kubernetes import KubeCluster
#cluster = KubeCluster()
# -
# %%time
from dask_cloudprovider import FargateCluster
cluster = FargateCluster(n_workers=1, image='rsignell/dask:latest', find_address_timeout=60)
cluster.dashboard_link
# %%time
from dask_cloudprovider import FargateCluster
cluster = FargateCluster(n_workers=1, image='rsignell/pangeo-worker:2020-01-23b', find_address_timeout=60)
cluster.dashboard_link
# ### Start a dask cluster to crunch the data
cluster.scale(2);
cluster
# For demos, I often click in this cell and do "Cell=>Run All Above", then wait until the workers appear. This can take several minutes (up to 6!) for instances to spin up and Docker containers to be downloaded. Then I shutdown the notebook and run again from the beginning, and the workers will fire up quickly because the instances have not spun down yet.
# +
#cluster.adapt(maximum=10);
# -
# %%time
client = Client(cluster)
client
# ### Read the data using the cloud-friendly zarr data format
ds = xr.open_zarr(fsspec.get_mapper('s3://pangeo-data-uswest2/esip/adcirc/ike', anon=False, requester_pays=True))
# +
#ds = xr.open_zarr(fsspec.get_mapper('gcs://pangeo-data/rsignell/adcirc_test01'))
# -
ds['zeta']
ds['zeta'][:,1000].hvplot()
# How many GB of sea surface height data do we have?
ds['zeta'].nbytes/1.e9
# Take the maximum over the time dimension and persist the data on the workers in case we would like to use it later. This is the computationally intensive step.
# %%time
max_var = ds['zeta'].max(dim='time').compute()
progress(max_var)
# ### Visualize data on mesh using HoloViz.org tools
# +
import numpy as np
import datashader as dshade
import holoviews as hv
import geoviews as gv
import cartopy.crs as ccrs
import hvplot.xarray
import holoviews.operation.datashader as dshade
dshade.datashade.precompute = True
hv.extension('bokeh')
# -
v = np.vstack((ds['x'], ds['y'], max_var)).T
verts = pd.DataFrame(v, columns=['x','y','vmax'])
points = gv.operation.project_points(gv.Points(verts, vdims=['vmax']))
tris = pd.DataFrame(ds['element'].values.astype('int')-1, columns=['v0','v1','v2'])
tiles = gv.tile_sources.OSM
value = 'max water level'
label = '{} (m)'.format(value)
trimesh = gv.TriMesh((tris, points), label=label)
mesh = dshade.rasterize(trimesh).opts(
cmap='rainbow', colorbar=True, width=600, height=400)
tiles * mesh
# ### Extract a time series at a specified lon, lat location
# Because Xarray does not yet understand that `x` and `y` are coordinate variables on this triangular mesh, we create our own simple function to find the closest point. If we had a lot of these, we could use a more fancy tree algorithm.
# find the indices of the points in (x,y) closest to the points in (xi,yi)
def nearxy(x,y,xi,yi):
ind = np.ones(len(xi),dtype=int)
for i in range(len(xi)):
dist = np.sqrt((x-xi[i])**2+(y-yi[i])**2)
ind[i] = dist.argmin()
return ind
#just offshore of Galveston
lat = 29.2329856
lon = -95.1535041
ind = nearxy(ds['x'].values,ds['y'].values,[lon], [lat])[0]
ind
ds['zeta'][:,ind].hvplot(grid=True)
| hurricane_ike.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ''
# language: python
# name: ''
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
#
# # Training an Image Classification Multi-Label model using AutoML
# In this notebook, we go over how you can use AutoML for training an Image Classification Multi-Label model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. For detailed information please refer to the [documentation of AutoML for Images](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models).
# 
# **Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/).
# ## Environment Setup
# Please follow the ["Setup a new conda environment"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#3-setup-a-new-conda-environment) instructions to get started.
# ## Workspace setup
# In order to train and deploy models in Azure ML, you will first need to set up a workspace.
#
# An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#workspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.
#
# Create an Azure ML Workspace within your Azure subscription or load an existing workspace.
# +
# specify workspace parameters
subscription_id = "<my-subscription-id>"
resource_group = "<my-resource-group>"
workspace_name = "<my-workspace-name>"
from azureml.core.workspace import Workspace
ws = Workspace.create(
name=workspace_name,
subscription_id=subscription_id,
resource_group=resource_group,
exist_ok=True,
)
# -
# ## Compute target setup
# You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model.
# +
from azureml.core.compute import AmlCompute, ComputeTarget
cluster_name = "gpu-cluster-nc6"
try:
compute_target = ws.compute_targets[cluster_name]
print("Found existing compute target.")
except KeyError:
print("Creating a new compute target...")
compute_config = AmlCompute.provisioning_configuration(
vm_size="Standard_NC6",
idle_seconds_before_scaledown=1800,
min_nodes=0,
max_nodes=4,
)
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(
show_output=True, min_node_count=None, timeout_in_minutes=20
)
# -
# ## Experiment Setup
# Create an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#experiments) in your workspace to track your model training runs
# +
from azureml.core import Experiment
experiment_name = "automl-image-classification-multilabel"
experiment = Experiment(ws, name=experiment_name)
# -
# ## Dataset with input Training Data
#
# In order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that you have exported from a [Data Labeling](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-data) project, or create a new Tabular Dataset with your labeled training data.
# In this notebook, we use a toy dataset called Fridge Objects, which consists of 128 images of 4 labels of beverage container {can, carton, milk bottle, water bottle} photos taken on different backgrounds. It also includes a labels file in .csv format. This is one of the most common data formats for Image Classification Multi-Label: one csv file that contains the mapping of labels to a folder of images.
#
# All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
#
# We first download and unzip the data locally.
# +
import os
import urllib
from zipfile import ZipFile
# download data
download_url = "https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip"
data_file = "./multilabelFridgeObjects.zip"
urllib.request.urlretrieve(download_url, filename=data_file)
# extract files
with ZipFile(data_file, "r") as zip:
print("extracting files...")
zip.extractall()
print("done")
# delete zip file
os.remove(data_file)
# -
# This is a sample image from this dataset:
# +
from IPython.display import Image
sample_image = "./multilabelFridgeObjects/images/56.jpg"
Image(filename=sample_image)
# -
# ### Convert the downloaded data to JSONL
# In this example, the fridge object dataset is annotated in the CSV file, where each image corresponds to a line. It defines a mapping of the filename to the labels. Since this is a multi-label classification problem, each image can be associated to multiple labels. In order to use this data to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset), we first need to convert it to the required JSONL format. Please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).
#
# The following script is creating two .jsonl files (one for training and one for validation) in the parent folder of the dataset. The train / validation ratio corresponds to 20% of the data going into the validation file.
# +
import json
import os
src = "./multilabelFridgeObjects"
train_validation_ratio = 5
# Retrieving default datastore that got automatically created when we setup a workspace
workspaceblobstore = ws.get_default_datastore().name
# Path to the labels file.
labelFile = os.path.join(src, "labels.csv")
# Path to the training and validation files
train_annotations_file = os.path.join(src, "train_annotations.jsonl")
validation_annotations_file = os.path.join(src, "validation_annotations.jsonl")
# sample json line dictionary
json_line_sample = {
"image_url": "AmlDatastore://" + workspaceblobstore + "/multilabelFridgeObjects",
"label": [],
}
# Read each annotation and convert it to jsonl line
with open(train_annotations_file, "w") as train_f:
with open(validation_annotations_file, "w") as validation_f:
with open(labelFile, "r") as labels:
for i, line in enumerate(labels):
# Skipping the title line and any empty lines.
if i == 0 or len(line.strip()) == 0:
continue
line_split = line.strip().split(",")
if len(line_split) != 2:
print("Skipping the invalid line: {}".format(line))
continue
json_line = dict(json_line_sample)
json_line["image_url"] += f"/images/{line_split[0]}"
json_line["label"] = line_split[1].strip().split(" ")
if i % train_validation_ratio == 0:
# validation annotation
validation_f.write(json.dumps(json_line) + "\n")
else:
# train annotation
train_f.write(json.dumps(json_line) + "\n")
# -
# ### Upload the JSONL file and images to Datastore
# In order to use the data for training in Azure ML, we upload it to our Azure ML Workspace via a [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#datasets-and-datastores). The datastore provides a mechanism for you to upload/download data and interact with it from your remote compute targets. It is an abstraction over Azure Storage.
# Retrieving default datastore that got automatically created when we setup a workspace
ds = ws.get_default_datastore()
ds.upload(src_dir="./multilabelFridgeObjects", target_path="multilabelFridgeObjects")
# Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation.
# +
from azureml.core import Dataset
from azureml.data import DataType
# get existing training dataset
training_dataset_name = "multilabelFridgeObjectsTrainingDataset"
if training_dataset_name in ws.datasets:
training_dataset = ws.datasets.get(training_dataset_name)
print("Found the training dataset", training_dataset_name)
else:
# create training dataset
training_dataset = Dataset.Tabular.from_json_lines_files(
path=ds.path("multilabelFridgeObjects/train_annotations.jsonl"),
set_column_types={"image_url": DataType.to_stream(ds.workspace)},
)
training_dataset = training_dataset.register(
workspace=ws, name=training_dataset_name
)
# get existing validation dataset
validation_dataset_name = "multilabelFridgeObjectsValidationDataset"
if validation_dataset_name in ws.datasets:
validation_dataset = ws.datasets.get(validation_dataset_name)
print("Found the validation dataset", validation_dataset_name)
else:
# create validation dataset
validation_dataset = Dataset.Tabular.from_json_lines_files(
path=ds.path("multilabelFridgeObjects/validation_annotations.jsonl"),
set_column_types={"image_url": DataType.to_stream(ds.workspace)},
)
validation_dataset = validation_dataset.register(
workspace=ws, name=validation_dataset_name
)
print("Training dataset name: " + training_dataset.name)
print("Validation dataset name: " + validation_dataset.name)
# -
# Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#model-agnostic-hyperparameters) for more details.
#
# This is what the training dataset looks like:
training_dataset.to_pandas_dataframe()
# ## Configuring your AutoML run for image tasks
# AutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model as well as perform a sweep across the hyperparameter space to generate an optimal model. Parameters for configuring your AutoML Image run are specified using the `AutoMLImageConfig` - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-your-experiment-settings) for the details on the parameters that can be used and their values.
# When using AutoML for image tasks, you need to specify the model algorithms using the `model_name` parameter. You can either specify a single model or choose to sweep over multiple models. Please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms.
#
# ### Using default hyperparameter values for the specified algorithm
# Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values for a given model to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This allows an iterative approach, as with multiple models and multiple hyperparameters for each (as we showcase in the next section), the search space grows exponentially, and you need more iterations to find optimal configurations.
#
# If you wish to use the default hyperparameter values for a given algorithm (say `vitb16r224`), you can specify the config for your AutoML Image runs as follows:
# +
from azureml.automl.core.shared.constants import ImageTask
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import GridParameterSampling, choice
image_config_vit = AutoMLImageConfig(
task=ImageTask.IMAGE_CLASSIFICATION_MULTILABEL,
compute_target=compute_target,
training_data=training_dataset,
validation_data=validation_dataset,
hyperparameter_sampling=GridParameterSampling({"model_name": choice("vitb16r224")}),
iterations=1,
)
# -
# ## Submitting an AutoML run for Computer Vision tasks
# Once you've created the config settings for your run, you can submit an AutoML run using the config in order to train a vision model using your training dataset.
automl_image_run = experiment.submit(image_config_vit)
automl_image_run.wait_for_completion(wait_post_processing=True)
# ### Hyperparameter sweeping for your AutoML models for computer vision tasks
# In this example, we use the AutoMLImageConfig to train an Image Classification model using the `vitb16r224` and `seresnext` model algorithms.
#
# When using AutoML for Images, you can perform a hyperparameter sweep over a defined parameter space to find the optimal model. In this example, we sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, grad_accumulation_step, valid_resize_size, etc., to generate a model with the optimal 'accuracy'. If hyperparameter values are not specified, then default values are used for the specified algorithm.
#
# We use Random Sampling to pick samples from this parameter space and try a total of 20 iterations with these different samples, running 4 iterations at a time on our compute target, which has been previously set up using 4 nodes. Please note that the more parameters the space has, the more iterations you need to find optimal models.
#
# We leverage the Bandit early termination policy which will terminate poor performing configs (those that are not within 20% slack of the best performing config), thus significantly saving compute resources.
#
# For more details on model and hyperparameter sweeping, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
# +
from azureml.automl.core.shared.constants import ImageTask
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling
from azureml.train.hyperdrive import choice, uniform
parameter_space = {
"learning_rate": uniform(0.005, 0.05),
"model": choice(
{
"model_name": choice("vitb16r224"),
"number_of_epochs": choice(15, 30),
"grad_accumulation_step": choice(1, 2),
},
{
"model_name": choice("seresnext"),
# model-specific, valid_resize_size should be larger or equal than valid_crop_size
"valid_resize_size": choice(288, 320, 352),
"valid_crop_size": choice(224, 256), # model-specific
"train_crop_size": choice(224, 256), # model-specific
},
),
}
tuning_settings = {
"iterations": 20,
"max_concurrent_iterations": 4,
"hyperparameter_sampling": RandomParameterSampling(parameter_space),
"early_termination_policy": BanditPolicy(
evaluation_interval=2, slack_factor=0.2, delay_evaluation=6
),
}
automl_image_config = AutoMLImageConfig(
task=ImageTask.IMAGE_CLASSIFICATION_MULTILABEL,
compute_target=compute_target,
training_data=training_dataset,
validation_data=validation_dataset,
**tuning_settings,
)
# -
automl_image_run = experiment.submit(automl_image_config)
automl_image_run.wait_for_completion(wait_post_processing=True)
# When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main `automl_image_run` from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of this HyperDrive parent run. Alternatively, here below you can see directly the HyperDrive parent run and navigate to its 'Child runs' tab:
# +
from azureml.core import Run
hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + "_HD")
hyperdrive_run
# -
# ## Register the optimal vision model from the AutoML run
# Once the run completes, we can register the model that was created from the best run (configuration that resulted in the best primary metric)
# +
# Register the model from the best run
best_child_run = automl_image_run.get_best_child()
model_name = best_child_run.properties["model_name"]
model = best_child_run.register_model(
model_name=model_name, model_path="outputs/model.pt"
)
# -
# ## Deploy model as a web service
# Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service)). Please note that ACI only supports small models under 1 GB in size. For testing larger models or for the high-scale production stage, we recommend using AKS.
# In this tutorial, we will deploy the model as a web service in AKS.
# You will need to first create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster
# +
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.exceptions import ComputeTargetException
# Choose a name for your cluster
aks_name = "cluster-aks-gpu"
# Check to see if the cluster already exists
try:
aks_target = ComputeTarget(workspace=ws, name=aks_name)
print("Found existing compute target")
except ComputeTargetException:
print("Creating a new compute target...")
# Provision AKS cluster with GPU machine
prov_config = AksCompute.provisioning_configuration(
vm_size="STANDARD_NC6", location="eastus2"
)
# Create the cluster
aks_target = ComputeTarget.create(
workspace=ws, name=aks_name, provisioning_configuration=prov_config
)
aks_target.wait_for_completion(show_output=True)
# -
# Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#update-inference-configuration), that describes how to set up the web-service containing your model. You can use the scoring script and the environment from the training run in your inference config.
#
# <b>Note:</b> To change the model's settings, open the downloaded scoring script and modify the model_settings variable <i>before</i> deploying the model.
# +
from azureml.core.model import InferenceConfig
best_child_run.download_file(
"outputs/scoring_file_v_1_0_0.py", output_file_path="score.py"
)
environment = best_child_run.get_environment()
inference_config = InferenceConfig(entry_script="score.py", environment=environment)
# -
# You can then deploy the model as an AKS web service.
# +
# Deploy the model from the best run as an AKS web service
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model
aks_config = AksWebservice.deploy_configuration(
autoscale_enabled=True, cpu_cores=1, memory_gb=20, enable_app_insights=True
)
aks_service = Model.deploy(
ws,
models=[model],
inference_config=inference_config,
deployment_config=aks_config,
deployment_target=aks_target,
name="automl-image-test",
overwrite=True,
)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
# -
# ## Test the web service
# Finally, let's test our deployed web service to predict new images. You can pass in any image. In this case, we'll use a random image from the dataset and pass it to the scoring URI.
# +
import requests
from IPython.display import Image
# URL for the web service
scoring_uri = aks_service.scoring_uri
# If the service is authenticated, set the key or token
key, _ = aks_service.get_keys()
sample_image = "./test_image.jpg"
# Load image data
data = open(sample_image, "rb").read()
# Set the content type
headers = {"Content-Type": "application/octet-stream"}
# If authentication is enabled, set the authorization header
headers["Authorization"] = f"Bearer {key}"
# Make the request and display the response
resp = requests.post(scoring_uri, data, headers=headers)
print(resp.text)
# -
# ## Visualize predictions
# Now that we have scored a test image, we can visualize the predictions for this image
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
import json
IMAGE_SIZE = (18, 12)
plt.figure(figsize=IMAGE_SIZE)
img_np = mpimg.imread(sample_image)
img = Image.fromarray(img_np.astype("uint8"), "RGB")
x, y = img.size
fig, ax = plt.subplots(1, figsize=(15, 15))
# Display the image
ax.imshow(img_np)
prediction = json.loads(resp.text)
score_threshold = 0.5
label_offset_x = 30
label_offset_y = 30
for index, score in enumerate(prediction["probs"]):
if score > score_threshold:
label = prediction["labels"][index]
display_text = "{} ({})".format(label, round(score, 3))
print(display_text)
color = "red"
plt.text(label_offset_x, label_offset_y, display_text, color=color, fontsize=30)
label_offset_y += 30
plt.show()
| python-sdk/tutorials/automl-with-azureml/image-classification-multilabel/auto-ml-image-classification-multilabel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (jupyter)
# language: python
# name: jupyter
# ---
# ### Figure generation notebook for MERFISH single cell quality comparisons with MACA
# +
import os
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import patches as mpatches
import numpy as np
from scipy.sparse import csr_matrix
from scipy.stats import ks_2samp
import anndata
import scanpy
import string
import seaborn as sns
import h5py
import tifffile
import fs
from fs import open_fs
from matplotlib_scalebar.scalebar import ScaleBar
from tqdm import tqdm
mpl.rcParams.update(mpl.rcParamsDefault) #Reset rcParams to default
colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] # Colors in this style
# Plotting style function (run this before plotting the final figure)
def set_plotting_style():
plt.style.use('seaborn-paper')
plt.rc('axes', labelsize=12)
plt.rc('axes', titlesize=12)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
plt.rc('legend', fontsize=10)
plt.rc('text.latex', preamble=r'\usepackage{sfmath}')
plt.rc('xtick.major', pad=2)
plt.rc('ytick.major', pad=2)
plt.rc('mathtext', fontset='stixsans', sf='sansserif')
plt.rc('figure', figsize=[10,9])
plt.rc('svg', fonttype='none')
# +
# Load postprocessed MERFISH and MACA results
# We should run SI fig 2 first to determine the count threshold cutoff, and then create
# the filtered .h5ad object to use here eventually.
# Define the path of the analyzed MERFISH data
dataPathPost = '/mnt/ibm_lg/spatial-seq/MERlin_Analysis/ProcessedResults'
# Define the experiments
experimentName = ['MsLiver_Cellbound_VZG116_V1_JH_09-18-2021',
'MsKidney_CellBoundary_VZG116_111921',
'MsKidney_CellBoundary_VZG116_121021']
prefixCountsFilter = 'FilteredCounts'
suffixCountsFilter = '_FilteredSingleCellCounts.h5ad'
VizgenCountsFilter = []
QCFilter = []
for i in range(len(experimentName)):
# Filtered counts per segmented cell
VizgenCountsFilter.append(anndata.read_h5ad(os.path.join(dataPathPost, prefixCountsFilter,
experimentName[i] + suffixCountsFilter)))
# Get filtering criteria
QCFilter.append(VizgenCountsFilter[i].uns['QC_filter'])
# Convert all gene names to lower case
for i in range(len(experimentName)):
VizgenCountsFilter[i].var.index = VizgenCountsFilter[i].var.index.str.lower()
# Rename the 5 genes that are inconsistent with MACA (NOTE: MIR205HG doesn't seem to be in MACA at all)
rename_map = {'mir205hg':'4631405k08rik',
'ackr1':'darc',
'adgrl4':'eltd1',
'cavin2':'sdpr',
'jchain':'igj'}
for i in range(len(experimentName)):
gene_list = list(VizgenCountsFilter[i].var.index)
for j in range(len(gene_list)):
if gene_list[j] in rename_map.keys():
gene_list[j] = rename_map[gene_list[j]]
VizgenCountsFilter[i].var.index = gene_list
# Combine into single tissue objects
liver_ind = [0]
kidney_ind = [1,2]
VizgenLiver_all = []
VizgenKidney_all = []
for i in liver_ind:
VizgenLiver_all.append(VizgenCountsFilter[i])
for i in kidney_ind:
VizgenKidney_all.append(VizgenCountsFilter[i])
VizgenLiver = VizgenLiver_all[0]
for i in range(len(liver_ind)-1):
VizgenLiver = VizgenLiver.concatenate(VizgenLiver_all[i+1])
VizgenKidney = VizgenKidney_all[0]
for i in range(len(kidney_ind)-1):
VizgenKidney = VizgenKidney.concatenate(VizgenKidney_all[i+1])
# Load raw MACA data (10x)
dataPathMACA = '/mnt/ibm_lg/angela/'
experimentMACA10x = 'tabula-muris-senis-droplet-official-raw-obj.h5ad'
MACA_10x = anndata.read(os.path.join(dataPathMACA, experimentMACA10x))
# Use only the reference datasets that are 18 months or younger
ind = np.logical_or(np.logical_or(MACA_10x.obs['age'] == '1m', MACA_10x.obs['age'] == '3m'),
MACA_10x.obs['age'] == '18m')
MACA_10x = MACA_10x[ind]
# Load raw MACA data (SmartSeq)
experimentMACASmartSeq = 'tabula-muris-senis-facs-official-raw-obj.h5ad'
MACA_SmartSeq = anndata.read(os.path.join(dataPathMACA, experimentMACASmartSeq))
# Select only the tissue-specific cells from the raw datasets
MACAliver_10x= MACA_10x[MACA_10x.obs['tissue'] == 'Liver'].copy()
MACAliver_SmartSeq = MACA_SmartSeq[MACA_SmartSeq.obs['tissue'] == 'Liver'].copy()
MACAkidney_10x= MACA_10x[MACA_10x.obs['tissue'] == 'Kidney'].copy()
MACAkidney_SmartSeq = MACA_SmartSeq[MACA_SmartSeq.obs['tissue'] == 'Kidney'].copy()
# Convert genes to lower case
MACAliver_10x.var.index = MACAliver_10x.var.index.str.lower()
MACAliver_SmartSeq.var.index = MACAliver_SmartSeq.var.index.str.lower()
MACAkidney_10x.var.index = MACAkidney_10x.var.index.str.lower()
MACAkidney_SmartSeq.var.index = MACAkidney_SmartSeq.var.index.str.lower()
# Select shared gene panel genes only
genes_Vizgen = VizgenCountsFilter[0].var.index
genes_10x = MACAliver_10x.var.index
genes_SmartSeq = MACAliver_SmartSeq.var.index
genes_shared = genes_Vizgen.intersection(genes_10x) # List of shared genes
VizgenLiver = VizgenLiver[:, genes_Vizgen.isin(genes_shared)].copy()
VizgenKidney = VizgenKidney[:, genes_Vizgen.isin(genes_shared)].copy()
MACAliver_10x = MACAliver_10x[:, genes_10x.isin(genes_shared)]
MACAliver_SmartSeq = MACAliver_SmartSeq[:, genes_SmartSeq.isin(genes_shared)]
MACAkidney_10x = MACAkidney_10x[:, genes_10x.isin(genes_shared)]
MACAkidney_SmartSeq = MACAkidney_SmartSeq[:, genes_SmartSeq.isin(genes_shared)]
# Remove MERFISH cells with fewer than 20 counts
min_counts = 20
scanpy.pp.filter_cells(VizgenLiver, min_counts=min_counts)
scanpy.pp.filter_cells(VizgenKidney, min_counts=min_counts)
print('Processed data loaded.')
# -
# ### Panel A, B: distribution of counts/cell for liver and kidney
# +
bins = np.arange(0,3000,20)
fig, axes = plt.subplots(2,2, figsize=(10,8))
counts_VizgenLiver = VizgenLiver.X.sum(axis=1)
counts_MACAliver_10x = MACAliver_10x.X.sum(axis=1)
counts_MACAliver_SmartSeq = MACAliver_SmartSeq.X.sum(axis=1)
counts_VizgenKidney = VizgenKidney.X.sum(axis=1)
counts_MACAkidney_10x = MACAkidney_10x.X.sum(axis=1)
counts_MACAkidney_SmartSeq = MACAkidney_SmartSeq.X.sum(axis=1)
# liver
ax = axes[0,0]
ax.hist(counts_VizgenLiver,alpha=0.5,bins=bins,label='Vizgen')
ax.hist(counts_MACAliver_10x,alpha=0.5,bins=bins,label='MACA 10X')
ax.hist(counts_MACAliver_SmartSeq,alpha=0.5,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('counts per cell')
ax.set_ylabel('number of cells')
ax.set_xlim((0,3000))
ax.set_yscale('log')
ax.legend()
ax.set_title('liver')
ax = axes[0,1]
ax.hist(counts_VizgenLiver,alpha=0.5,density=True,bins=bins,label='Vizgen')
ax.hist(counts_MACAliver_10x,alpha=0.5,density=True,bins=bins,label='MACA 10X')
ax.hist(counts_MACAliver_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('counts per cell')
ax.set_ylabel('frequency')
ax.set_xlim((0,3000))
ax.set_yscale('log')
ax.legend()
ax.set_title('liver')
# kidney
ax = axes[1,0]
ax.hist(counts_VizgenKidney,alpha=0.5,bins=bins,label='Vizgen')
ax.hist(counts_MACAkidney_10x,alpha=0.5,bins=bins,label='MACA 10X')
ax.hist(counts_MACAkidney_SmartSeq,alpha=0.5,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('counts per cell')
ax.set_ylabel('number of cells')
ax.set_xlim((0,3000))
ax.set_yscale('log')
ax.legend()
ax.set_title('kidney')
ax = axes[1,1]
ax.hist(counts_VizgenKidney,alpha=0.5,density=True,bins=bins,label='Vizgen')
ax.hist(counts_MACAkidney_10x,alpha=0.5,density=True,bins=bins,label='MACA 10X')
ax.hist(counts_MACAkidney_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('counts per cell')
ax.set_ylabel('frequency')
ax.set_xlim((0,3000))
ax.set_yscale('log')
ax.legend()
ax.set_title('kidney')
fig.tight_layout()
plt.show()
# -
# ### Panel C, D: dropout rates/n_genes for each technology
# +
# Number of genes that have zero count for each cell divided by total number of genes in panel
n_genes = VizgenLiver.shape[1]
VizgenLiverDropoutFrac = 1 - np.count_nonzero(VizgenLiver.X, axis=1) / n_genes
VizgenKidneyDropoutFrac = 1 - np.count_nonzero(VizgenKidney.X, axis=1) / n_genes
MACALiverDropoutFrac_10x = 1 - np.count_nonzero(MACAliver_10x.X.toarray(), axis=1) / n_genes
MACALiverDropoutFrac_SmartSeq = 1 - np.count_nonzero(MACAliver_SmartSeq.X.toarray(), axis=1) / n_genes
MACAKidneyDropoutFrac_10x = 1 - np.count_nonzero(MACAkidney_10x.X.toarray(), axis=1) / n_genes
MACAKidneyDropoutFrac_SmartSeq = 1 - np.count_nonzero(MACAkidney_SmartSeq.X.toarray(), axis=1) / n_genes
# Plot the dropout rate
bins = np.arange(0,1,0.01)
fig, axes = plt.subplots(1,2, figsize=(10,4))
ax = axes[0]
ax.hist(VizgenLiverDropoutFrac,alpha=0.5,density=True,bins=bins,label='Vizgen')
ax.hist(MACALiverDropoutFrac_10x,alpha=0.5,density=True,bins=bins,label='MACA 10x')
ax.hist(MACALiverDropoutFrac_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('dropout rate')
ax.set_ylabel('frequency')
ax.set_xlim((0,1))
ax.set_title('liver')
ax.legend()
ax = axes[1]
ax.hist(VizgenKidneyDropoutFrac,alpha=0.5,density=True,bins=bins,label='Vizgen')
ax.hist(MACAKidneyDropoutFrac_10x,alpha=0.5,density=True,bins=bins,label='MACA 10x')
ax.hist(MACAKidneyDropoutFrac_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('dropout rate')
ax.set_ylabel('frequency')
ax.set_xlim((0,1))
ax.set_title('kidney')
ax.legend()
plt.show()
# -
# ### Panel E, F: scatter plot of fraction of cells detecting each gene between 10x and Vizgen
# +
# Fraction of cells detecting a gene
# Do this in a joint DataFrame to ensure gene mapping are the same
frac_cells_VizgenLiver = VizgenLiver.to_df().astype(bool).sum(axis=0) / len(VizgenLiver)
frac_cells_VizgenKidney = VizgenKidney.to_df().astype(bool).sum(axis=0) / len(VizgenKidney)
frac_cells_MACAliver_10x = MACAliver_10x.to_df().astype(bool).sum(axis=0) / len(MACAliver_10x)
frac_cells_MACAkidney_10x = MACAkidney_10x.to_df().astype(bool).sum(axis=0) / len(MACAkidney_10x)
# Log median expression ignoring zeros
median_expression_liver = np.log(MACAliver_10x.to_df()[MACAliver_10x.to_df() != 0].median(axis=0))
median_expression_kidney = np.log(MACAkidney_10x.to_df()[MACAkidney_10x.to_df() != 0].median(axis=0))
# Z score of log median expression
zscore_expression_liver = (median_expression_liver -
median_expression_liver.mean()) / median_expression_liver.std()
zscore_expression_kidney = (median_expression_kidney -
median_expression_kidney.mean()) / median_expression_kidney.std()
frac_cells_liver = pd.concat([frac_cells_VizgenLiver, frac_cells_MACAliver_10x,
median_expression_liver, zscore_expression_liver], axis=1)
frac_cells_liver = frac_cells_liver.rename(columns={0:'MERFISH', 1:'scRNA-seq',
2:'median_expression', 3:'zscore_expression'})
frac_cells_kidney = pd.concat([frac_cells_VizgenKidney, frac_cells_MACAkidney_10x,
median_expression_kidney, zscore_expression_kidney], axis=1)
frac_cells_kidney = frac_cells_kidney.rename(columns={0:'MERFISH', 1:'scRNA-seq',
2:'median_expression', 3:'zscore_expression'})
# Ratio of Vizgen to 10x
ratio_cells_liver = frac_cells_liver['MERFISH'] / frac_cells_liver['scRNA-seq']
ratio_cells_kidney = frac_cells_kidney['MERFISH'] / frac_cells_kidney['scRNA-seq']
fig, axes = plt.subplots(2,2, figsize=(12,8))
# liver
ax = axes[0,0]
frac_cells_liver.plot.scatter('MERFISH','scRNA-seq', c='zscore_expression', colormap='viridis', colorbar=True, ax=ax)
ax.plot([0,1],[0,1],'k--')
ax.set_xlabel('fraction of cells Vizgen')
ax.set_ylabel('fraction of cells MACA 10x')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_title('liver')
# kidney
ax = axes[0,1]
frac_cells_kidney.plot.scatter('MERFISH','scRNA-seq', c='zscore_expression', colormap='viridis', colorbar=True, ax=ax)
ax.plot([0,1],[0,1],'k--')
ax.set_xlabel('fraction of cells Vizgen')
ax.set_ylabel('fraction of cells MACA 10x')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_title('kidney')
# liver ratio
ax = axes[1,0]
bins = np.logspace(-2,1,30)
ax.hist(ratio_cells_liver, density=False, bins=bins)
ax.set_xscale('log')
ax.set_xlabel('ratio of fraction of cells Vizgen / MACA 10x')
ax.set_ylabel('number of genes')
ax.set_title('liver')
# kidney ratio
ax = axes[1,1]
bins = np.logspace(-2,1,30)
ax.hist(ratio_cells_kidney, density=False, bins=bins)
ax.set_xscale('log')
ax.set_xlabel('ratio of fraction of cells Vizgen / MACA 10x')
ax.set_ylabel('number of genes')
ax.set_title('kidney')
fig.tight_layout()
plt.show()
# -
# ### Panel G, H: leakiness of each technology
# - get a list of pancreas genes and make a bar plot of mean expression for each gene for Vizgen vs MACA 10x
# +
pancreas_genes = ['ace2','chga','cldn3','bambi',
'hhex','pcsk2']
liver_genes = 'hmgcs2'
kidney_genes = 'kcnj1'
all_genes = pancreas_genes.copy()
all_genes.append(liver_genes)
all_genes.append(kidney_genes)
# Get subset of pancreas genes
VizgenLiver_pancreas = VizgenLiver[:, all_genes]
VizgenKidney_pancreas = VizgenKidney[:, all_genes]
MACAliver_10x_pancreas = MACAliver_10x[:, all_genes]
MACAkidney_10x_pancreas = MACAkidney_10x[:, all_genes]
# Calculate mean + SD expression for these genes for cells with nonzero expression
mean_VizgenLiver_pancreas = VizgenLiver_pancreas.to_df()[VizgenLiver_pancreas.to_df() != 0].mean(axis=0)
mean_VizgenKidney_pancreas = VizgenKidney_pancreas.to_df()[VizgenKidney_pancreas.to_df() != 0].mean(axis=0)
mean_MACAliver_10x_pancreas = MACAliver_10x_pancreas.to_df()[MACAliver_10x_pancreas.to_df() != 0].mean(axis=0)
mean_MACAkidney_10x_pancreas = MACAkidney_10x_pancreas.to_df()[MACAkidney_10x_pancreas.to_df() != 0].mean(axis=0)
se_VizgenLiver_pancreas = VizgenLiver_pancreas.to_df()[
VizgenLiver_pancreas.to_df() != 0].std(axis=0) / np.sqrt(len(VizgenLiver))
se_VizgenKidney_pancreas = VizgenKidney_pancreas.to_df()[
VizgenKidney_pancreas.to_df() != 0].std(axis=0) / np.sqrt(len(VizgenKidney))
se_MACAliver_10x_pancreas = MACAliver_10x_pancreas.to_df()[
MACAliver_10x_pancreas.to_df() != 0].std(axis=0) / np.sqrt(len(MACAliver_10x))
se_MACAkidney_10x_pancreas = MACAkidney_10x_pancreas.to_df()[
MACAkidney_10x_pancreas.to_df() != 0].std(axis=0) / np.sqrt(len(MACAkidney_10x))
mean_liver_pancreas = pd.concat([mean_VizgenLiver_pancreas, mean_MACAliver_10x_pancreas],
axis=1)
mean_liver_pancreas = mean_liver_pancreas.rename(columns={0:'MERFISH', 1:'scRNA-seq'})
mean_kidney_pancreas = pd.concat([mean_VizgenKidney_pancreas, mean_MACAkidney_10x_pancreas],
axis=1)
mean_kidney_pancreas = mean_kidney_pancreas.rename(columns={0:'MERFISH', 1:'scRNA-seq'})
se_liver_pancreas = pd.concat([se_VizgenLiver_pancreas, se_MACAliver_10x_pancreas],
axis=1)
se_liver_pancreas = se_liver_pancreas.rename(columns={0:'MERFISH', 1:'scRNA-seq'})
se_kidney_pancreas = pd.concat([se_VizgenKidney_pancreas, se_MACAkidney_10x_pancreas],
axis=1)
se_kidney_pancreas = se_kidney_pancreas.rename(columns={0:'MERFISH', 1:'scRNA-seq'})
# Plot
fig, axes = plt.subplots(1,2, figsize=(12,4))
# liver
ax = axes[0]
mean_liver_pancreas.plot.bar(ax=ax, yerr=se_liver_pancreas, rot=45, capsize=2)
ax.set_ylabel('mean transcript count')
ax.set_title('liver')
# kidney
ax = axes[1]
mean_kidney_pancreas.plot.bar(ax=ax, yerr=se_kidney_pancreas, rot=45, capsize=2)
ax.set_ylabel('mean transcript count')
ax.set_title('kidney')
plt.show()
# -
# ### Plot the combined figure
# +
# Plot the whole figure
# Ignore the SmartSeq2 data for now
set_plotting_style()
fig, axes = plt.subplots(ncols=2, nrows=4, figsize=(8,12)) #Create a grid
# # Inset
# # barplots
# ax1 = ax[0].inset_axes([2, -9, 6, 4], transform=ax[0].transData)
# x_bar = [4,5]
# y_bar = [coord_df.loc[('liver','heart'),'macrophage'], coord_control_df_mean.loc[('liver','heart'),'macrophage']]
# y_err = [0, coord_control_df_sd.loc[('liver','heart'),'macrophage']]
# colors = ['tab:blue','tab:gray']
# ax1.bar(x_bar, y_bar, yerr=y_err, width=0.5, capsize=4, color=colors)
# ax1.set_ylim((0,1))
# ax1.set_yticks([0,1])
# #ax1.legend()
# Panel A: liver counts/cell comparison
#bins = np.arange(0,3000,20)
bins = np.logspace(0,3.5,30)
ax = axes[0,0]
ax.hist(counts_VizgenLiver,alpha=0.5,density=True,bins=bins,label='MERFISH')
ax.hist(counts_MACAliver_10x,alpha=0.5,density=True,bins=bins,label='scRNA-seq')
#ax.hist(counts_MACAliver_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('counts per cell')
ax.set_ylabel('probability density')
#ax.set_xlim((0,3000))
ax.set_yscale('log')
ax.set_xscale('log')
ax.legend()
ax.set_title('liver')
# Panel B: kidney counts/cell comparison
ax = axes[0,1]
ax.hist(counts_VizgenKidney,alpha=0.5,density=True,bins=bins,label='MERFISH')
ax.hist(counts_MACAkidney_10x,alpha=0.5,density=True,bins=bins,label='scRNA-seq')
#ax.hist(counts_MACAkidney_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('counts per cell')
ax.set_ylabel('probability density')
#ax.set_xlim((0,3000))
ax.set_yscale('log')
ax.set_xscale('log')
ax.legend()
ax.set_title('kidney')
# Panel C: liver dropout rate
bins = np.arange(0.5,1,0.02)
ax = axes[1,0]
ax.hist(VizgenLiverDropoutFrac,alpha=0.5,density=True,bins=bins,label='MERFISH')
ax.hist(MACALiverDropoutFrac_10x,alpha=0.5,density=True,bins=bins,label='scRNA-seq')
#ax.hist(MACALiverDropoutFrac_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('dropout rate')
ax.set_ylabel('probability density')
ax.set_xlim((0.5,1))
ax.set_title('liver')
ax.legend()
# Panel D: kidney dropout rate
ax = axes[1,1]
ax.hist(VizgenKidneyDropoutFrac,alpha=0.5,density=True,bins=bins,label='MERFISH')
ax.hist(MACAKidneyDropoutFrac_10x,alpha=0.5,density=True,bins=bins,label='scRNA-seq')
#ax.hist(MACAKidneyDropoutFrac_SmartSeq,alpha=0.5,density=True,bins=bins,label='MACA SmartSeq')
ax.set_xlabel('dropout rate')
ax.set_ylabel('probability density')
ax.set_xlim((0.5,1))
ax.set_title('kidney')
ax.legend()
# Panel E: liver cells/gene
ax = axes[2,0]
frac_cells_liver.plot.scatter('MERFISH','scRNA-seq', c='zscore_expression',
colormap='viridis', colorbar=True, ax=ax)
ax.plot([0,1],[0,1],'k--')
ax.set_xlabel('fraction of cells Vizgen')
ax.set_ylabel('fraction of cells MACA 10x')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_title('liver')
# Panel F: kidney cells/gene
ax = axes[2,1]
frac_cells_kidney.plot.scatter('MERFISH','scRNA-seq', c='zscore_expression', colormap='viridis', colorbar=True, ax=ax)
ax.plot([0,1],[0,1],'k--')
ax.set_xlabel('fraction of cells Vizgen')
ax.set_ylabel('fraction of cells MACA 10x')
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_title('kidney')
# Panel G: liver leakiness
ax = axes[3,0]
mean_liver_pancreas.plot.bar(ax=ax, yerr=se_liver_pancreas, rot=45, capsize=4)
ax.set_ylabel('mean transcript count')
ax.set_title('liver')
# Panel H: kidney leakiness
ax = axes[3,1]
mean_kidney_pancreas.plot.bar(ax=ax, yerr=se_kidney_pancreas, rot=45, capsize=4)
ax.set_ylabel('mean transcript count')
ax.set_title('kidney')
# Label subpanels
axes_label = [axes[0,0], axes[0,1], axes[1,0], axes[1,1],
axes[2,0], axes[2,1], axes[3,0], axes[3,1]]
for n, ax in enumerate(axes_label):
ax.text(-0.1, 1.1, string.ascii_uppercase[n], transform=ax.transAxes,
size=20, weight='bold')
fig.tight_layout()
plt.show()
# -
# Export figures
fig.savefig('../../figures/Fig5_singlecellcomparison.pdf')
fig.savefig('../../figures/Fig5_singlecellcomparison.png')
| notebooks/figures/.ipynb_checkpoints/Fig5_singlecellcomparison-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this lab, we will optimize the weather simulation application written in C++ (if you prefer to use Fortran, click [this link](../../Fortran/jupyter_notebook/profiling-fortran.ipynb)).
#
# Let's execute the cell below to display information about the GPUs running on the server by running the pgaccelinfo command, which ships with the PGI compiler that we will be using. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
# !pgaccelinfo
# ## Exercise 3
#
# ### Learning objectives
# Learn how to improve GPU occupancy and extract more parallelism by adding more descriptive clauses to the OpenACC loop constructs in the application. In this exercise you will:
#
# - Learn about GPU occupancy, and OpenACC vs CUDA execution model
# - Learn how to find out GPU occupancy from the Nsight Systems profiler
# - Learn how to improve the occupancy and saturate compute resources
# - Learn about collapse clause for further optimization of the parallel nested loops and when to use them
# - Apply collapse clause to eligible nested loops in the application and investigate the profiler report
#
# Look at the profiler report from the previous exercise again. From the timeline, have a close look at the the kernel functions. We can see that the for example `compute_tendencies_z_383_gpu` kernel has the theoretical occupancy of 37.5% . It clearly shows that the occupancy is a limiting factor. *Occupancy* is a measure of how well the GPU compute resources are being utilized. It is about how much parallelism is running / how much parallelism the hardware could run.
#
# <img src="images/occu-2.png" width="30%" height="30%">
#
# NVIDIA GPUs are comprised of multiple streaming multiprocessors (SMs) where it can manage up to 2048 concurrent threads (not actively running at the same time). Low occupancy shows that there are not enough active threads to fully utilize the computing resources. Higher occupancy implies that the scheduler has more active threads to choose from and hence achieves higher performance. So, what does this mean in OpenACC execution model?
#
# **Gang, Worker, and Vector**
# CUDA and OpenACC programming model use different terminologies for similar ideas. For example, in CUDA, parallel execution is organized into grids, blocks, and threads. On the other hand, the OpenACC execution model has three levels of gang, worker, and vector. OpenACC assumes the device has multiple processing elements (Streaming Multiprocessors on NVIDIA GPUs) running in parallel and mapping of OpenACC execution model on CUDA is as below:
#
# - An OpenACC gang is a threadblock
# - A worker is a warp
# - An OpenACC vector is a CUDA thread
#
# <img src="images/diagram.png" width="50%" height="50%">
#
# So, in order to improve the occupancy, we have to increase the parallelism within the gang. In other words, we have to increase the number of threads that can be scheduled on the GPU to improve GPU thread occupancy.
#
# **Optimizing loops and improving occupancy**
# Let's have a look at the compiler feedback (*Line 315*) and the corresponding code snippet showing three tightly nested loops.
#
# <img src="images/cfeedback2.png" width="80%" height="80%">
#
# The iteration count for the outer loop is `NUM_VARS` which is 4. As you can see from the above screenshot, the block dimension is <4,1,1> which shows the small amount of parallelism within the gang.
#
# ```cpp
# #pragma acc parallel loop private(indt, indf1, indf2)
# for (ll = 0; ll < NUM_VARS; ll++)
# {
# for (k = 0; k < nz; k++)
# {
# for (i = 0; i < nx; i++)
# {
# indt = ll * nz * nx + k * nx + i;
# indf1 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i;
# indf2 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i + 1;
# tend[indt] = -(flux[indf2] - flux[indf1]) / dx;
# }
# }
# }
# ```
#
# In order to expose more parallelism and improve the occupancy, we can use an additional clause called `collapse` in the `#pragma acc loop` to optimize loops. The loop directive gives the compiler additional information about the next loop in the source code through several clauses. Apply the `collapse(N)` clause to a loop directive to collapse the next `N` tightly-nested loops to be collapsed into a single, flattened loop. This is useful if you have many nested loops or when you have really short loops.
#
# When the loop count in any of some tightly nested loops is relatively small compared to the available number of threads in the device, creating a single iteration space across all the nested loops, increases the iteration count thus allowing the compiler to extract more parallelism.
#
# **Tips on where to use:**
# - Collapse outer loops to enable creating more gangs.
# - Collapse inner loops to enable longer vector lengths.
# - Collapse all loops, when possible, to do both
#
# Now, add `collapse` clause to the code and make necessary changes to the loop directives. Once done, save the file, re-compile via `make`, and profile it again.
#
# From the top menu, click on *File*, and *Open* `miniWeather_openacc.cpp` and `Makefile` from the current directory at `C/source_code/lab3` directory. Remember to **SAVE** your code after changes, before running below cells.
# !cd ../source_code/lab3 && make clean && make
# Let us start inspecting the compiler feedback and see if it applied the optimizations. Here is the screenshot of expected compiler feedback after adding the `collapse`clause to the code. You can see that nested loops on line 277 and 315 have been successfully collapsed.
#
# <img src="images/cfeedback3.png" width="80%" height="80%">
#
# Now, **Profile** your code with Nsight Systems command line `nsys`.
# !cd ../source_code/lab3 && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o miniWeather_4 ./miniWeather
# [Download the profiler output](../source_code/lab3/miniWeather_4.qdrep) and open it via the GUI. Now have a close look at the kernel functions on the timeline and the occupancy.
#
# <img src="images/occu-3.png" width="40%" height="40%">
#
# As you can see from the above screenshot, the theoretical occupancy is now 75% and the block dimension is now `<128,1,1>` where *128* is the vector size per gang. **Screenshots represents profiler report for the values of 400,200,1500.**
#
# ```cpp
# #pragma acc parallel loop collapse(3) private(indt, indf1, indf2)
# for (ll = 0; ll < NUM_VARS; ll++)
# {
# for (k = 0; k < nz; k++)
# {
# for (i = 0; i < nx; i++)
# {
# indt = ll * nz * nx + k * nx + i;
# indf1 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i;
# indf2 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i + 1;
# tend[indt] = -(flux[indf2] - flux[indf1]) / dx;
# }
# }
# }
# ```
#
# The iteration count for the collapsed loop is `NUM_VARS * nz * nx` where (in the example screenshot),
#
# - nz= 200,
# - nx = 400, and
# - NUM_VARS = 4
#
# So, the interaction count for this particular loop inside the `compute_tendencies_z_383_gpu` function is 320K. This number divided by the vector length of *128* would gives us the grid dimension of `<2500,1,1>`.
#
# By creating a single iteration space across the nested loops and increasing the iteration count, we improved the occupancy and extracted more parallelism.
#
# **Notes:**
# - 100% occupancy is not required for, nor does it guarantee best performance.
# - Less than 50% occupancy is often a red flag
#
# How much this optimization will speed-up the code will vary according to the application and the target accelerator, but it is not uncommon to see large speed-ups by using collapse on loop nests.
# ## Post-Lab Summary
#
# If you would like to download this lab for later viewing, it is recommend you go to your browsers File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
# + language="bash"
# cd ..
# rm -f openacc_profiler_files.zip
# zip -r openacc_profiler_files.zip *
# -
# **After** executing the above zip command, you should be able to download the zip file [here](../openacc_profiler_files.zip).
# -----
#
# # <p style="text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em"> <a href=../../profiling_start.ipynb>HOME</a> <span style="float:center"> <a href=profiling-c-lab4.ipynb>NEXT</a></span> </p>
#
# -----
# # Links and Resources
#
# [OpenACC API Guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)
#
# [NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)
#
# [CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)
#
# **NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).
#
# Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
#
# ---
#
# ## Licensing
#
# This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0).
| hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Introducción a Pandas
# Ing. <NAME>, MSF
# MF-013 Análisis de Inversión<br>
# Clase del 12 de octubre 2021<br>
#
# Maestría de Finanzas, Facultad de Economía<br>
# UANL<br>
# + [markdown] slideshow={"slide_type": "slide"}
# ## 
# + [markdown] slideshow={"slide_type": "slide"}
# > _"pandas es una librería de Python que provee una rápida y flexible estructura de datos diseñada para trabajar con datos relacionados o etiquetados de manera fácil e intuitiva."_
# >
# > _"Python junto con pandas se usa en una amplia variedad áreas que incluyen finanzas, data science, neurociencia, economía, estadísticas, publicidad, análisis web y más."_<br>
# > https://pandas.pydata.org
# + [markdown] slideshow={"slide_type": "slide"}
# ## Características de Pandas
# + [markdown] slideshow={"slide_type": "fragment"}
# * Rápido y eficiente para la manipulación de información a través de DataFrames
# + [markdown] slideshow={"slide_type": "fragment"}
# * Herramientas para leer y guardar información entre los datos generados y diferentes formatos como CSV, Excel, archivos de texto, bases de datos SQL, HDF5.
# + [markdown] slideshow={"slide_type": "fragment"}
# * Funcionalidad de series de tiempo.
# -
# ## Estructura de datos en Pandas
# + [markdown] slideshow={"slide_type": "slide"}
# Las estructuras de datos de Pandas más importante son:
# 1. Series: Arreglos unidimensional que contiene una secuencia de valores y un array asociado de etiquetas para los datos, que se llama *__index__*.
# 2. DataFrames: Representa una tabla rectangular que contiene una colección ordenada de columnas. Contiene *__index__* tanto para las filas, como para las columnas.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Cargando la librería pandas en Python
# -
import pandas as pd
# + [markdown] slideshow={"slide_type": "slide"}
# ## Tipos de estructuras de datos en pandas
# -
# ### Serie
# Podemos hacer Series utilizando listas:
precios = pd.Series([12.2, 13, 14, 15, 16, 17, 18])
precios
# A las series le podemos especificar index o etiquetas:
precios = pd.Series([12.2, 13, 14, 15], index=['08/07/21', '08/08/21', '08/09/21', '08/10/21'])
precios
# Asignarle index a las series nos funciona para accesar el valor por el nombre/valor del index, en este caso una fecha:
precios['08/07/21']
# Podemos filtar por valores
precios[precios>13]
precios[precios==13]
# Las operaciones con escalares son para cada elemento de la Serie:
precios * 10
precios + precios
precios-precios
precios2 = precios * 10
# Las operaciones matemáticas de dos o más series se efectúan en función al "match" entre index's, no importa su posición en la serie. Lo único que importa en el nombre del index.
precios2
# #### Aplicar funciones de NumPy en pandas
# Podemos aplicar también funciones de NumPy en pandas.
import numpy as np
# Aplicarle logaritmo natural al df2 con `np.log`:
x = np.log(precios2)
x
precios
# Convertir los valores de la serie `precios` a array de numpy:
array_precios = precios.values
array_precios
array_precios[0]
# #### Crear una Serie a partir de un diccionario
dicc_cierre = {'btc':50000, 'xrp':.8, 'eth':3500, 'doge':.3, 'btg':500}
dicc_cierre
precios_cierre = pd.Series(dicc_cierre)
precios_cierre
precios_cierre['eth']
# #### Definir orden en la Serie
# Definir una lista con el órden deseado y ponerlo como index
# +
orden = ['btc', 'doge', 'eth', 'xrp']
precios_cierre2 = pd.Series(dicc_cierre, index=orden)
# -
precios_cierre2
precios_cierre + precios_cierre2
# ### DataFrame
# Podemos hacer un DataFrame (df) con un diccionario:
datos = {'Estados':['Coah', 'NL', 'Tam', 'SLP'], 'Poblacion':[2.9, 5.0, 3.5, 2.7],
'#Municipios':[38, 51, 43, 58]}
df = pd.DataFrame(datos)
df
# #### Seleccionar los valores de la columna `"Estados"`
df['Estados']
# #### Seleccionar valores puntuales
# Podemos seleccionar el valor `'NL'` haciendo referencia primero al nombre de la columna y después a su index:
df['Estados'][1]
# Pero *__NO__* podemos seleccionar la fila index `1` de esta misma manera:
df[1]
# Para eso necesitamos utilizar `iloc` o `loc` que veremos más adelante.
# #### Seleccionar varias columnas
# Seleccionar una columna:
df['Poblacion']
# Seleccionar dos columnas (o más):
df[['Estados', 'Poblacion']]
df
# #### Asignar una columna nueva y un valor escalar para toda la columna
df['PIB'] = 23.4
df
# #### Asignar una columna nueva y diferentes valores (utilizando la función de numpy arange)
import numpy as np
df['PIB'] = np.arange(4)
df
# #### Egregar una serie a un DataFrame
# Hacer Serie `pobreza`
# #### SOLUCIóN PROPUESTA POR BRENDA Y DANIEL:
pobreza = pd.Series([.03, .02, .01], index=[1, 2, 3])
pobreza
df['IndicePobreza'] = pobreza
df
# #### LO QUE YO INTENTABA HACER:
pobreza = pd.Series([.03, .02, .01], index=['NL', 'Tam', 'SLP'])
df['IndicePobreza'] = pobreza
df
# Al no tener los Estados como index, Panda no encontraba los index 'NL', 'Tam' y 'SLP' y por eso asignaba `NaN` a los index del `0` al `3`
# # HASTA AQUí TERMINAMOS LA CLASE
# #### Que es lo que tenemos que hacer? Cambiar los estados a index
df.set_index('Estados', inplace=True)
df
df['IndicePobreza'] = pobreza
pobreza
df
# ### Comparando valores
df['MuchosMunicipios'] = df['#Municipios'] >= 45
df
# ### Una manera de borrar columnas
del df['MuchosMunicipios']
df
# ## Trabajando con DataFrame's
ingresos_anuales = {'Cemex':{"'17":95,"'18":100, "'19":110, "'20":100},
'Femsa':{"'18":200, "'19":280, "'20":290},
'Alfa':{"'18":300, "'19":230, "'20": 200}
}
ingresos_anuales
df = pd.DataFrame(ingresos_anuales)
df
df.T
df.values
# ### Formas para construir desde cero un DataFrame
# <table>
# <tr>
# <th><center>Tipo</center></th>
# <th><center>Comentario</center></th>
#
# </tr>
# <tr>
# <td style="text-align:left;">ndarryas de 2D.</td>
# <td style="text-align:left;">Una matriz de datos.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Diccionario de arrays, listas o tuplas.</td>
# <td style="text-align:left;">Cada secuencia se vuelve una columna en el DataFrame. Todas las secuencias deben de ser de la misma longitud.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Numpy.</td>
# <td style="text-align:left;">pandas lo trata como un diccionario de arrays.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Diccionario de Series.</td>
# <td style="text-align:left;">Cada valor se vuelve una columna. Los index de cada Serie son unidos para formar el index del DataFrame.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Diccionario de diccionario</td>
# <td style="text-align:left;">Cada diccionario interno se vuelve una columna y sus llaves se vuelven el index. La llave exterior de los diccionarios se vuelven los titulos de las columnas.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Lista de diccionarios o Series.</td>
# <td style="text-align:left;">Cada elemento se convierte en una fila del DataFrame. La unión de las llaves de los diccionarios o el index de las Series, se vuelven los titulos de las columnas.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Lista de listas.</td>
# <td style="text-align:left;">Se trata al igual que los ndarrays de 2D.</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">Otro DataFrame</td>
# <td style="text-align:left;">Se utilizan los index del DataFrame original, al menos que se indiquen otros.</td>
# </tr>
#
# </table>
# ### Método "drop"
# #### "drop" momentaneo
df.drop(['Cemex'], axis = 1)
df
df.drop(["'17"])
# #### "drop" permanente
df = df.drop(["'17"])
# ### Atributo "inplace"
df.drop(["'17"], inplace = True)
df
# ### Indexing
# #### Indexing en series
pobreza
pobreza['NL']
pobreza[0]
# #### Indexing en DataFrames
df['Cemex']
df[['Cemex','Femsa']]
df
# ### Slicing index con "iloc" y "loc"
# *__iloc__* y *__loc__* están diseñados para hacer slicing en los *__index__* de los DataFrame's. Nos permiten seleccionar un subset de **indexes** y también **columnas**.
# * *__loc__* lo utilizamos si queremos seleccionar con las **etiquetas** del index o columna
# * *__iloc__* lo utilizamos si queremos seleccionar con la **posición númerica** del index o columna.
df.loc["'18"]
df.iloc[0]
# ### Operaciones artiméticas y alineación de datos
df + df
# + [markdown] slideshow={"slide_type": "slide"}
# ## Estadísticas descriptivas
# -
#
# <table>
# <tr>
# <th style="text-align:center;">Método</th>
# <th style="text-align:center;">Descripción</th>
# </tr>
#
# <tr>
# <td style="text-align:left;">count</td>
# <td style="text-align:left;">El número de valores que no sean NaN</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">describe</td>
# <td style="text-align:left;">Calcula un resumen de varias estadísticas por columna </td>
# </tr>
#
# <tr>
# <td style="text-align:left;">min</td>
# <td style="text-align:left;">Valor mínimo</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">max</td>
# <td style="text-align:left;">Valor máximo</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">idxmin</td>
# <td style="text-align:left;">Encuentra el valor del index en donde está el valor mínimo</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">idxmax</td>
# <td style="text-align:left;">Encuentra el valor del index en donde está el valor máximo</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">quantile</td>
# <td style="text-align:left;">Valor cuantil</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">sum</td>
# <td style="text-align:left;">Suma los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">mean</td>
# <td style="text-align:left;">Media de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">median</td>
# <td style="text-align:left;">Media de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">prod</td>
# <td style="text-align:left;">Multiplica todos los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">var</td>
# <td style="text-align:left;">Varianza de la muestra de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">std</td>
# <td style="text-align:left;">Desviación estándar de la muestra de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">skew</td>
# <td style="text-align:left;">Asimetría de la muestra de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">kurt</td>
# <td style="text-align:left;">Kurtosis de la muestra de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">cumsum</td>
# <td style="text-align:left;">Suma acomulada de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">cumprod</td>
# <td style="text-align:left;">Producto acomulado de los valores</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">pct_change</td>
# <td style="text-align:left;">Porcentaje de cambio</td>
# </tr>
#
#
#
# </table>
df.sum()
df.mean()
femsa = df['Femsa'].mean()
femsa
df[['Cemex', 'Femsa']].max()
df.idxmin()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Series de tiempo
# -
# *__DateTime__* tiene por lo menos tres módulos:
# 1. datetime
# 1. date
# 1. calendar
from datetime import datetime
ahora = datetime.now()
ahora
ahora.year
ahora.month
ahora.day
datetime(2021,10,1)
cambio = datetime(2021,10,1) - datetime(2020,9,1)
cambio
rango_fechas = pd.date_range('2010/01/01', '2021/10/19', freq = 'D')
rango_fechas
# Estos son algunas de las frecuencias de mayor utilidad:
# <table>
# <tr>
# <th style="text-align:center;">Alias</th>
# <th style="text-align:center;">Tipo</th>
# <th style="text-align:center;">Descripción</th>
# </tr>
#
# <tr>
# <td style="text-align:center;">D</td>
# <td style="text-align:left;">Día</td>
# <td style="text-align:left;">Día calendario</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">B</td>
# <td style="text-align:left;">Dia laborables</td>
# <td style="text-align:left;">Dias laborales diarios</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">H</td>
# <td style="text-align:left;">Hora</td>
# <td style="text-align:left;">Por hora</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">T</td>
# <td style="text-align:left;">minuto</td>
# <td style="text-align:left;">Por minuto</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">S</td>
# <td style="text-align:left;">Segundos</td>
# <td style="text-align:left;">Por segundo</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">M</td>
# <td style="text-align:left;">Fin de mes</td>
# <td style="text-align:left;">Último día calendario del mes</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">BM</td>
# <td style="text-align:left;"></td>
# <td style="text-align:left;">Último día laboral del mes</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">MS</td>
# <td style="text-align:left;"></td>
# <td style="text-align:left;">Primer día calendario del mes</td>
# </tr>
#
# <tr>
# <td style="text-align:center;">BMS</td>
# <td style="text-align:left;"></td>
# <td style="text-align:left;">Primer día laboral entre semana</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">W-MON, W-TUE,...</td>
# <td style="text-align:left;">Semana</td>
# <td style="text-align:left;">Semanal en el día dado (MON, TUE, WED, THU, FRI, SAT, SUN</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">BQ-JAN, BQ-FEB...</td>
# <td style="text-align:left;">Trimestres</td>
# <td style="text-align:left;">Trimestral a partir del último día laboral del mes seleccionado</td>
# </tr>
#
# </table>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Leer datos externos
# -
# <table>
# <tr>
# <th style="text-align:center;">Función</th>
# <th style="text-align:center;">Descripción</th>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_csv</td>
# <td style="text-align:left;">Lee archivos delimitados de un directorio, URL. El delimitador por default es ","</td>
# </tr>
#
# <tr>
#
# <tr>
# <td style="text-align:left;">read_table</td>
# <td style="text-align:left;">Lee archivos delimitados de un directorio, URL. El delimitador por default es "-TAB-"</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_fwf</td>
# <td style="text-align:left;">Lee archivos con datos con anchos de columna fijos, no delimitados</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_clipboard</td>
# <td style="text-align:left;">Similar a read_table, pero lee lo que tienes copiado en el clipboard!</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_excel</td>
# <td style="text-align:left;">Lee datos tabulares en XLS o XLSX</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_html</td>
# <td style="text-align:left;">Lee todas las tablas HTML de una dirección web</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_json</td>
# <td style="text-align:left;">Lee datos de archivos JSON</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_sas</td>
# <td style="text-align:left;">Lee datos de archivos SAS</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_sql</td>
# <td style="text-align:left;">Lee datos de la BD SQL</td>
# </tr>
#
# <tr>
# <td style="text-align:left;">read_stat</td>
# <td style="text-align:left;">Lee datos de archivos STAT</td>
# </tr>
# ## Datos históricos del Baseball 1871-2021 (bateo)
# > "*The game breaks your heart. It is designed to break your heart. The game begins in spring, when everything else begins again, and it blossoms in the summer, filling the afternoons and evenings, and then as soon as the chill rains come, it stops and leaves you to face the fall alone.”.-* <NAME><br><br>
# >"*A baseball game has six minutes of action crammed into two-and-one-half hours.*".- <NAME>
url = 'https://github.com/chadwickbureau/baseballdatabank/raw/master/core/Batting.csv'
mlb = pd.read_csv(url)
mlb.info()
mlb
mlb.head()
mlb.head()
mlb.tail()
mlb.tail(10)
mlb['HR'].max()
mlb['HR'].idxmax()
mlb.iloc[80767]
mlb.iloc[mlb['HR'].idxmax()]
maximo = mlb['HR'].idxmax()
mlb.iloc[maximo]
# $$TBHR=\frac{TurnosBat}{HR's}$$
mlb['TBHR'] = mlb['AB'] / mlb['HR']
mlb
mlb['TBHR'].min()
mlb2 = mlb[mlb['AB']>=300]
mlb2.info()
mlb2['TBHR'].min()
mlb2['TBHR'].idxmin()
mlb.iloc[minimos]
mlb2.nsmallest(10, 'TBHR')
mlb2.nlargest(5,'HR')
mlb2['HR'].nlargest(5)
mlb2.info()
mlb
mlb.groupby('playerID').sum().sort_values(by=['HR'], ascending=False)
mlb['HR'].plot(figsize=(12,8));
# ## Graficar utilizando Matplotlib
| Codigo/20211012Clase6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fields
#
# Fields allow you to define an area where certain Scene Elements will appear randomly
# This can be interesting in foraging scenario, where an agents move around to collect rewards.
# +
from simple_playgrounds.playgrounds import SingleRoom
from simple_playgrounds import Engine
# %matplotlib inline
import matplotlib.pyplot as plt
def plt_image(img):
plt.axis('off')
plt.imshow(img)
plt.show()
my_playground = SingleRoom(size=(200, 200))
# we use the option screen=True to use a keyboard controlled agent later on.
engine = Engine(time_limit=1000, playground= my_playground, screen=False)
plt_image(engine.generate_playground_image(plt_mode = True))
# -
# Now that we have an empty playground, lets create a field that produces candies in the top-left area of the playground.
# +
from simple_playgrounds.playgrounds.scene_elements import Candy, Field
from simple_playgrounds.utils import PositionAreaSampler
area = PositionAreaSampler( center = (50, 150), area_shape='circle', radius= 30 )
field = Field(probability=0.1, limit=10, entity_produced=Candy, production_area=area)
my_playground.add_scene_element(field)
# -
# Now we can let the playground run for some time, and see if the field produces candies.
#
# Run the following cell multiplt times.
# +
engine.run(steps = 10)
plt_image(engine.generate_playground_image(plt_mode = True))
# -
engine.terminate()
# Finally, we can add an agent controlled by a keyboard.
# We see that the Candies are replaced little by little.
# +
from simple_playgrounds.agents.controllers import Keyboard
from simple_playgrounds.agents.agents import BaseAgent
from simple_playgrounds.agents.parts.platform import ForwardPlatform
from simple_playgrounds.playgrounds.scene_elements import Candy, Field
from simple_playgrounds.utils import PositionAreaSampler
my_playground = SingleRoom(size=(200, 200))
engine = Engine(time_limit=10000, playground= my_playground, screen=True)
area = PositionAreaSampler( center = (100, 100), area_shape='circle', radius= 100 )
field = Field(probability=0.1, limit=10, entity_produced=Candy, production_area=area)
my_playground.add_scene_element(field)
my_agent = BaseAgent(controller=Keyboard(), platform = ForwardPlatform, interactive = True)
my_playground.add_agent(my_agent)
engine.run(update_screen=True)
engine.terminate()
# -
# You probably noticed that the field stops producing after some time.
# There is a total limit of total produced entities that can be set.
| tutorials/jupyter/05_Fields.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generating Custom Abundance Widget
# This notebook demonstrates how to generate and display Custom Abundance Widget.
# First, import `CustomAbundanceWidget` module from `visualization` subpackage to create the widget.
from tardis.visualization import CustomAbundanceWidget
# ## Initialize the GUI
# There are four ways to generate the widget. You can generate it from `.yml`/`.csvy` configuration files before running any simulation.
# ### Using a YAML file
widget = CustomAbundanceWidget.from_yml("tardis_example.yml")
# ### Using a CSVY file
# +
# widget = CustomAbundanceWidget.from_csvy("demo.csvy")
# -
# Alternatively, you can generate the widget after the simulation from a Simulation instance or a saved simulation (HDF file).
# ### Using a Simulation object
# +
# sim = run_tardis("tardis_example.yml")
# widget = CustomAbundanceWidget.from_sim(sim)
# -
# ### Using a HDF file
# +
# widget = CustomAbundanceWidget.from_hdf("demo.hdf")
# -
# ## Display the GUI
# No matter which way you use to initialize the widget, you can call `.display()` to display the GUI easily.
widget.display()
# The image below is just a screenshot of the GUI for a demonstration purpose. If you want to interact with the GUI, please run the code in the notebook.
#
# 
| docs/io/visualization/abundance_widget.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="e8OyCTg693VH"
import pandas as pd
import numpy as np
# + id="s34YabiW95QK"
df = pd.read_csv('IMDB-Movie-Data.csv')
df
# + id="cbMjS7cx97BI"
df = df.sample(frac = 1)
df = df.reset_index()
df = df.fillna(0)
# + id="_pWKJNH697Dw" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="9b6c40a6-d728-4a17-ebe2-0373d98dfa96"
labels = df['Metascore']
df = df.drop('Metascore', axis=1)
df = df.drop('index', axis=1)
df = df.drop('Rank', axis=1)
df = df.drop('Title', axis=1)
df = df.drop('Genre', axis=1)
df = df.drop('Description', axis=1)
df = df.drop('Director', axis=1)
df = df.drop('Actors', axis=1)
df = df.astype(np.float32)
df
# + colab={"base_uri": "https://localhost:8080/"} id="18YicsQB97GL" outputId="b59580b6-8ff7-460e-8694-fad66a3815ea"
labels = np.array(labels)
labels.shape
# + colab={"base_uri": "https://localhost:8080/"} id="uIgBNIfo97Ky" outputId="58a956a7-9afa-46f9-bb9f-25bac6cd37fc"
features = df.iloc[:, 0:].values
for i in range(len(df.columns)):
features[:,i] = list(map(lambda x: ((x-min(features[:,i])) / (max(features[:,i]) - min(features[:,i]))) , features[:,i]))
features = np.array(features)
features.shape
# + id="od3q8IS697NG" colab={"base_uri": "https://localhost:8080/"} outputId="cf229ce2-c439-4543-de05-71f3613a00a8"
x_train = features[:int((labels.shape[0] * 80) / 100), :]
y_train = labels[:int((labels.shape[0] * 80) / 100)]
x_test = features[int((labels.shape[0] * 80) / 100):, :]
y_test = labels[int((labels.shape[0] * 80) / 100):]
x_train.shape
# + [markdown] id="ymy3rNCeLaJR"
# **Regresyon**
# + id="GmwVQDdc97P_"
from sklearn.linear_model import LinearRegression
clf = LinearRegression().fit(x_train, y_train)
y_pred = clf.predict(x_test)
y_pred, y_test
# + colab={"base_uri": "https://localhost:8080/"} id="IOWaEwiW97SI" outputId="157d4cfb-f664-4e08-af9e-186fd28cfaeb"
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
r2_score(y_test,y_pred)
# + colab={"base_uri": "https://localhost:8080/"} id="Kcfop-Ga97Vl" outputId="ad527712-2510-4cbc-808e-b6b00946efad"
print('MAE : ', mean_absolute_error(y_test, y_pred))
print('MSE : ', mean_squared_error(y_test, y_pred, squared=False))
# + [markdown] id="U4RVVHoiEcC6"
# **Yapay Sinir Ağları**
# + id="sG3MOpM597Id"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.models import Sequential
from keras.layers import Dense, Activation
from sklearn.preprocessing import StandardScaler
# + id="gtzr_WxyeNfk" colab={"base_uri": "https://localhost:8080/"} outputId="cf424ad8-f02e-4074-9052-ac76e32fc6e7"
model = Sequential()
model.add(layers.Dense(8, activation="sigmoid", input_dim=5))
model.add(layers.Dense(16, activation="sigmoid"))
model.add(layers.Dense(32, activation="sigmoid"))
model.add(layers.Dense(64, activation="sigmoid"))
model.add(layers.Dense(128, activation="sigmoid"))
model.add(layers.Dense(64, activation="sigmoid"))
model.add(layers.Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse'])
history = model.fit(x_train, y_train, batch_size=64, epochs=555, validation_split=0.2)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="VF9WEMqZNQek" outputId="f42b91f4-d815-4b5e-9162-9b688b742f9a"
import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="etoLe3B8Pj6i" outputId="ed17832d-76ae-44b8-8694-cba59bb31bb7"
print("mse: {}".format(model.evaluate(x=x_test, y=y_test, batch_size=64)))
| regresyon_ysa.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Maps SARS-CoV-2 Mutations to 3D Protein Structures
# [Work in progress]
#
# This notebook map mutation frequency of SARS-CoV-2 strains onto 3D protein structures in the [Protein Data Bank](https://www.wwpdb.org/).
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import matplotlib.cm as cm
#import matplotlib
from py2neo import Graph
import ipywidgets as widgets
from ipywidgets import interact, IntSlider
import py3Dmol
pd.options.display.max_rows = None # display all rows
pd.options.display.max_columns = None # display all columsns
# #### Connect to COVID-19-Community Knowledge Graph
graph = Graph("bolt://172.16.58.3:7687", user="reader", password="<PASSWORD>")
# ### Get list of SARS-CoV-2 proteins
taxonomy_id = 'taxonomy:2697049' # SARS-CoV-2 taxonomy id
query = """
MATCH (p:Protein{taxonomyId:$taxonomy_id})-[t:HAS_TERTIARY_STRUCTURE]->(:Chain)-[:IS_PART_OF_STRUCTURE]->(s:Structure)
WHERE t.coverage > 0.2 // exclude polyproteins
RETURN DISTINCT(p.name) AS protein
ORDER BY protein
"""
proteins = graph.run(query, taxonomy_id=taxonomy_id).to_data_frame()['protein'].values
graph.run(query, taxonomy_id=taxonomy_id).to_data_frame()
protein_widget = widgets.Dropdown(options=proteins, description='Select protein:', value='Spike glycoprotein')
display(protein_widget)
protein_name = protein_widget.value
print('Protein name :', protein_name)
# ### Get total number of strains with variant annotation
query = """
MATCH (s:Strain)-[:HAS_VARIANT]->(:Variant)
WHERE s.hostTaxonomyId = 'taxonomy:9606'
RETURN count(s)
"""
strains = graph.evaluate(query)
print('Total number of human strains with variant annotation:', strains)
# ### Get variants for selected protein
query = """
MATCH (p:Protein{name: $protein_name})-[:HAS_VARIANT]->(v:Variant{variantConsequence:'missense_variant'})<-[:HAS_VARIANT]-(s:Strain)
WHERE s.hostTaxonomyId = 'taxonomy:9606'
WITH v.proteinPosition AS residue, count(v.proteinVariant) AS count,
split(v.proteinVariant, ':')[1] + '(' + count(v.proteinVariant) + ')' AS mutation ORDER by count DESC
WITH residue, count, mutation
RETURN residue, collect(mutation) AS mutations, sum(count) AS count ORDER BY residue
"""
variants = graph.run(query, protein_name=protein_name).to_data_frame()
variants.head()
# #### Add mutation annotation to each residue
variants['annotation'] = variants['mutations'].apply(lambda x: ', '.join(x))
variants['annotation'] = variants['annotation'].str.replace('p.', '')
# #### Create a color scale based on the log mutation frequency
variants['scale'] = variants['count'].apply(np.log) / math.log(strains)
# +
n_colors = 100
colors = cm.Reds(np.linspace(0.0, 1.0, n_colors))
col = np.empty(n_colors, dtype=object)
for i, color in enumerate(colors):
col[i] = matplotlib.colors.rgb2hex(color)
# -
variants['color'] = variants['scale'].apply(lambda x: col[round(x*n_colors)])
variants.head()
# ### Get PDB structures for selected protein
query = """
MATCH (p:Protein{name: $protein_name})-[h:HAS_TERTIARY_STRUCTURE]->(c:Chain)-[:IS_PART_OF_STRUCTURE]->(s:Structure)
RETURN p.name AS name, p.start, p.end, c.name, c.uniprotStart, c.uniprotEnd, c.pdbStart, c.pdbEnd, s.resolution AS resolution, s.description AS description, h.coverage AS coverage
ORDER BY resolution, coverage DESC
"""
chains = graph.run(query, protein_name=protein_name).to_data_frame()
chains.head()
chains.drop_duplicates(subset=['c.name'], inplace=True)
# #### Map uniprot residue numbers to PDB residue numbers
# + jupyter={"source_hidden": true}
def uniprot_to_pdb_mapping(row):
mapping = dict()
for (us,ue, ps, pe) in zip(row['c.uniprotStart'], row['c.uniprotEnd'], row['c.pdbStart'], row['c.pdbEnd']):
ps = int(ps)
pe = int(pe)
if (ue-us != pe-ps):
print('length mismatch:', row['c.name'], ue-us, pe-ps)
else:
offset = ps - us
for v in range(us, ue+1):
mapping[v] = offset + v
#print(mapping)
return mapping
# -
chains['mapping'] = chains.apply(lambda row: uniprot_to_pdb_mapping(row), axis=1)
chains.head()
# ### Visualize mutation sites
#
# Mutations are mapped onto protein chains for available 3D protein structures.
#
# Display options:
#
# |||
# |:-|:-|
# | *show_bio_assembly* | Toggle display of the biologically relevant quaternary structure |
# | *show_surface* | Toggle surface for protein chain |
# | *show_annotations* | Toggle display of mutation information<br>{PDBId}.{chainId}.{PDBResidue}: {UniProtResidue}{aminoAcid1}>{aminoAcid2}(# observations)<br>Example: 6Z43.A.614: 614D>G(58984), 614D>N(6) |
# | *size* | Change size of visualization |
# | *font* | Change font size of annotations |
# | *logFreq* | Change minimum threshold to display mutations based on normalized log of mutation frequency [0.0 - 1.0]|
# | *structure* | Move slider to browse through available structures |
# + jupyter={"source_hidden": true}
# Setup viewer
def view_mutations(df, variants, *args):
chainIds = list(df['c.name'])
def view3d(show_bio_assembly, show_surface, show_annotations, size, font, logFreq, i):
pdb_chain_id = chainIds[i].split(':')[1]
pdb_id, chain_id = pdb_chain_id.split('.')
global viewer1
viewer1 = py3Dmol.view(query='pdb:' + pdb_id, options={'doAssembly': show_bio_assembly}, width=size, height=size)
# polymer style
viewer1.setStyle({'cartoon': {'colorscheme': 'chain', 'width': 0.6, 'opacity':0.8}})
# non-polymer style
viewer1.setStyle({'hetflag': True}, {'stick':{'radius': 0.3, 'singleBond': False}})
# highlight chain of interest in blue
viewer1.setStyle({'chain': chain_id},{'cartoon': {'color': 'blue'}})
mapping = df['mapping'].iloc[i]
for row in variants.itertuples():
# get PDB residue mapping from a UniProt residue number
res_num = mapping.get(row.residue, 0)
col = row.color
if res_num > 0 and row.scale > logFreq:
mut_res = {'resi': res_num, 'chain': chain_id}
viewer1.addStyle(mut_res, {'sphere':{'color':col, 'opacity': 1.0}})
if show_annotations:
annotation = row.annotation
label = pdb_chain_id + "." + str(res_num) + ": " + annotation
viewer1.addLabel(label, {'fontSize':font,'fontColor': 'black','backgroundColor':'ivory', 'opacity': 1.0}, {'resi': res_num, 'chain': chain_id})
description = df['description'].iloc[i]
resolution = df['resolution'].iloc[i]
coverage = df['coverage'].iloc[i]
print(f"PDB Id:{pdb_id}, chain Id:{chain_id}, resolution:{resolution}, sequence coverage:{coverage:.2f}")
print(description)
# print any specified additional columns from the dataframe
for a in args:
print(a + ": " + df.iloc[i][a])
viewer1.zoomTo({'chain': chain_id})
viewer1.center({'chain': chain_id})
if show_surface:
viewer1.addSurface(py3Dmol.SES,{'opacity':0.8,'color':'lightblue'},{'chain': chain_id})
return viewer1.show()
s_widget = IntSlider(min=0, max=len(chainIds)-1, description='structure', continuous_update=False)
return interact(view3d, show_bio_assembly=False, show_surface=False, show_annotations=True, size=750, font=9, logFreq=0.33, i=s_widget)
def view_image1():
return viewer1.png()
# -
view_mutations(chains, variants);
| notebooks/analyses/MapMutationsTo3D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# name: pysparkkernel
# ---
# + [markdown] azdata_cell_guid="2757df21-8174-4bb1-a52c-66eaf94f6b96"
# # Using MsSQLSparkConnector with Integrated AD Auth
#
# This sample shows how to use the MsSQLSparkConnector with integrated AD Auth when using use principal and keytab instead of username/password.
#
# ## PreReq
# -------
# - SQL Server 2019 big data cluster is deployed with AD
# - Have access to AD controller to create keytabs that we will use in this sample.
# - Download [AdultCensusIncome.csv]( https://amldockerdatasets.azureedge.net/AdultCensusIncome.csv ) to your local machine. Upload this file to hdfs folder named *spark_data*.
# - The sample uses a SQL database *spark_sql_db* to create/update tables. The database needs to be created before the sample is run.
#
# + [markdown] azdata_cell_guid="48874729-541f-4b01-888f-3fdd2aeb59da"
#
#
# + [markdown] azdata_cell_guid="5e733ddd-cbf8-4aa1-9f97-9fff40c28332"
# # Creating KeyTab file
# The following section shows how to generate principal and keytab. This assumes you have a SS19 Big Data Cluster installed with Windows AD contoller for domain AZDATA.LOCAL. One of the users is <EMAIL> and the user is part of Domain Admin group.
#
# ## Create KeyTab file using ktpass
# 1. Login to the Windows AD controller with user1 credentials.
# 2. Open command prompt in Administrator mode.
# 3. Use ktpass to create a key tab. Refer [here](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/ktpass) for documentation on using ktpass.
#
# ```sh
# ktpass -out testusera1.keytab -mapUser <EMAIL> -pass <<PASSWORD>> -mapOp set +DumpSalt -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -princ <EMAIL>
# ```
#
# The command above should generate a keytab file named testusera1.keytab. Transfer this file to hdfs folder in Big Data Cluster. In this sample we transfer the file to /user/testusera1/testusera1.keytab
#
# ## Create KeyTab file using kinit
#
# If you are on a linux machine kinit can be used as follows to create keytab. Note that you linux machine shoud be connected to the domain controler.
#
# ``` sh
# ktutil
# ktutil : add_entry -password -p <EMAIL> -k 1 -e arcfour-hmac-md5
# Password for testusera1@myDomain:
# ktutil : add_entry -password -p <EMAIL> -k 1 -e des-cbc-md4
# ktutil : wkt <EMAIL>a1@<EMAIL>.keytab
# ```
#
# ``` sh
# ## Check if keytab generated properly. Any error implies that keytab is not generated right.
# kinit -kt testusera1.keytab <EMAIL>
# ```
#
# Load Keytab to HDFS for use
#
# ```sh
# hadoop fs -mkdir -p /user/testusera1/
# hadoop fs -copyFromLocal -f testusera1.keytab /user/testusera1/testusera1.keytab
# ```
#
#
#
#
# + [markdown] azdata_cell_guid="453e7b2f-e590-4b95-9fbd-5dc8b9d1f02c"
# # Create a database user
#
# ``` sql
# IF NOT EXISTS (select name from sys.server_principals where name='AZDATA.LOCAL\testusera1')
# BEGIN
# CREATE LOGIN [AZDATA.LOCAL\testusera1] FROM WINDOWS
# END
#
# ALTER SERVER ROLE dbcreator ADD MEMBER [AZDATA.LOCAL\testusera1]
# GRANT VIEW SERVER STATE to [AZDATA.LOCAL\testusera1]
#
# # Create a database named "spark_mssql_db"
# IF NOT EXISTS (SELECT * FROM sys.databases WHERE name = N'spark_mssql_db')
# CREATE DATABASE spark_mssql_db
# ```
# + [markdown] azdata_cell_guid="8ecbc487-996a-4bd1-b4f0-bcbf439decea"
# # Configure Spark applicaion to point to the key tab file
# Here we configure spark to use the keytab file once the keytab is created and uploaded to HDFS (/user/testusera1/testusera1.keytab).
# Note the usage of "spark.files" : "/user/testusera1/testusera1.keytab". As a result of this configuration Spark driver distributes the file to all executors.
#
# Run the cell below to start the spark application.
#
# + azdata_cell_guid="32acad02-f758-4a0a-a4e3-e64a52948986" tags=[]
# %%configure -f
{"conf": {
"spark.files" : "/user/testusera1/testusera1.keytab",
"spark.executor.memory": "4g",
"spark.driver.memory": "4g",
"spark.executor.cores": 2,
"spark.driver.cores": 1,
"spark.executor.instances": 4
}
}
# + [markdown] azdata_cell_guid="ed8b58e0-3607-4a71-8dc8-034bc0180ee4"
# # Read CSV into a data frame
# In this step we read the CSV into a data frame. This dataframe would then be written to SQL table using MSSQL Spark Connector
#
#
#
# + azdata_cell_guid="813bbfa3-2613-45dd-9556-94faba602977"
#spark = SparkSession.builder.getOrCreate()
sc.setLogLevel("INFO")
#Read a file and then write it to the SQL table
datafile = "/spark_data/AdultCensusIncome.csv"
df = spark.read.format('csv').options(header='true', inferSchema='true', ignoreLeadingWhiteSpace='true', ignoreTrailingWhiteSpace='true').load(datafile)
df.show(5)
#Process this data. Very simple data cleanup steps. Replacing "-" with "_" in column names
columns_new = [col.replace("-", "_") for col in df.columns]
df = df.toDF(*columns_new)
df.show(5)
# + [markdown] azdata_cell_guid="a6afceb2-6fbc-435b-af88-e9f5cc784f5d"
# # Write and READ to/from SQL Table ( using Integrated Auth)
# - Write dataframe to SQL table to Master instance
# - Read SQL Table to Spark dataframe
#
# In both scenarions here we use integrated auth with principal\keytab file rather than username\password of the user.
# + azdata_cell_guid="b851fe61-6e85-4e46-a20f-4063fc6586e0"
#Write from Spark to SQL table using MSSQL Spark Connector
print("MSSQL Spark Connector write(overwrite) start ")
servername = "jdbc:sqlserver://master-p-svc:1433"
dbname = "spark_mssql_db"
security_spec = ";integratedSecurity=true;authenticationScheme=JavaKerberos;"
url = servername + ";" + "databaseName=" + dbname + security_spec
dbtable = "AdultCensus_test"
principal = "<EMAIL>"
keytab = "/user/testusera1/testusera1.keytab"
try:
df.write \
.format("com.microsoft.sqlserver.jdbc.spark") \
.mode("overwrite") \
.option("url", url) \
.option("dbtable", dbtable) \
.option("principal", principal) \
.option("keytab", keytab) \
.save()
except ValueError as error :
print("MSSQL Spark Connector write(overwrite) failed", error)
print("MSSQL Connector write(overwrite) done ")
# + azdata_cell_guid="e3e19e1f-1325-47ea-87d1-1170f316d2d8"
#Read from SQL table using MSSQ Connector
print("MSSQL Spark Connector read start ")
jdbcDF = spark.read \
.format("com.microsoft.sqlserver.jdbc.spark") \
.option("url", url) \
.option("dbtable", dbtable) \
.option("url", url) \
.option("dbtable", dbtable) \
.option("principal", principal) \
.option("keytab", keytab).load()
jdbcDF.show(5)
print("MSSQL Spark Connector read done")
| samples/features/sql-big-data-cluster/spark/data-virtualization/mssql_spark_connector_ad_pyspark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <u> Problem </U>
# ## We attempt to classify real youtube (artist videos) comments as spam or not spam.
# ### Using datasets: youtube-spam-dataset
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score, KFold
from sklearn.decomposition import PCA
import keras
from keras import Sequential
from keras.layers import Dense
from keras.losses import *
from keras.optimizers import *
from keras.preprocessing.text import Tokenizer
from sklearn.feature_extraction.text import CountVectorizer
from keras.utils import to_categorical
import matplotlib.pyplot as plt
from keras.callbacks import TensorBoard
from sklearn.metrics import accuracy_score, log_loss
from keras.metrics import categorical_crossentropy
# +
path = "/Users/aaditkapoor/Desktop/youtube-spam-dataset"
data1 = pd.read_csv(path+"/1.csv")
data2 = pd.read_csv(path+"/2.csv")
data3 = pd.read_csv(path+"/3.csv")
data4 = pd.read_csv(path+"/4.csv")
# The combined data
data = pd.concat([data1, data2, data3, data4])
# -
data.shape
data1.shape
data2.shape
data3.shape
data4.shape
data.columns
# Removing useless columns
data.drop(columns=['COMMENT_ID','AUTHOR','DATE'], axis=1, inplace=True)
data.head()
# - 1 is positive(spam) and 0 is negative(not spam)
features = data.CONTENT.values
labels = data.CLASS.values
labels = to_categorical(labels)
# Vectorizing data
tokenizer = Tokenizer()
tokenizer.fit_on_texts(features)
features = tokenizer.texts_to_matrix(features, mode='tfidf')
features.shape # 4377 as input_dim we can also reduce it.
pca = PCA()
pca.fit(features)
# Plotting
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel("number of features")
plt.ylabel("accuracy (feature use %)")
# We will test pca afterwards on text data
# TODO
pca = PCA(n_components=1400)
features = pca.fit_transform(features)
# +
# Creating custom callback
class StopAtHundered(keras.callbacks.Callback):
def on_epoch_end(self, epoch,logs={}):
if logs.get('acc') == 1.0:
print ("Stopped")
print ("Accuracy is:", logs.get('acc'))
model.is_training = False
custom = StopAtHundered()
# -
features_train, features_test, labels_train, labels_test = train_test_split(features, labels, random_state=3, shuffle=True)
# Building
model = Sequential()
model.add(Dense(100, activation="relu", input_dim=features_train.shape[1]))
model.add(Dense(100, activation="relu"))
model.add(Dense(100, activation="relu"))
model.add(Dense(2, activation="softmax"))
model.compile(optimizer="adam", loss=keras.losses.categorical_crossentropy, metrics=['acc'])
tensorboard = TensorBoard(log_dir='./youtube_spam_ham_logs')
# Fitting
history = model.fit(features_train, labels_train, epochs=1000, batch_size=512, callbacks=[tensorboard, custom], validation_data=(features_test, labels_test))
predictions = model.predict(features_test)
predictions = np.argmax(predictions, axis=1)
labels_test = np.argmax(labels_test, axis=1)
print ("The accuracy is: ", accuracy_score(labels_test, predictions))
print ("The log loss is: ", log_loss(labels_test, predictions))
model.save("youtube_spam_ham.h5")
# Loss
plt.plot(history.history['loss'], "r")
plt.plot(history.history['val_loss'],"b")
# acc
plt.plot(history.history['acc'], "r")
plt.plot(history.history['val_acc'],"b")
model.summary()
len(model.get_weights())
| youtube-spam-ham.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] hide_input=true
#
# # Crypto
#
#
# -
# ## RSA Attacks
#
# + [markdown] hide_input=true
# ### Wiener Attack
#
#
# >$\phi(n)$ = (p - 1) (q - 1) <br>
# $\phi(n)$ = pq - (p + q) + 1 <br>
# $\phi(n)$ $\cong$ n <br><br>
# ed = k * $\phi(n)$ + n <br>
# ed - k * $\phi(n)$ = 1 <br><br><br>
# $\frac{e}{\phi(n))}$ - $\frac{k}{d}$ = $\frac1{d * \phi(n)}$ <br><br>
# Now $\frac1{ d * phi(n)}$ is very small <br>
# $\therefore \frac{e}{n}$ $\cong$ $\frac{d}{n}$ <br>
# We use continued fractions to get approx of $\frac{k}{d}$ <br>
#
# <br>
# <br>
#
#
# <b>Few Observations:</b>
# 1. As ed = 1 mod $\phi$(n) , and $\phi$(n) is even, Therefore e and d must be odd
# 2. $\phi$(n) is a whole number. Therefore (ed - 1) / k must be a whole number
# 3. If we get $\phi$(n) as a whole number, we can check if it is indeed the correct $\phi$(n) by: \
# $\phi$(n) = (p - 1) * (q - 1) = n - (p + q) + 1 \
# p + q = n - $\phi$(n) + 1 \
# Consider a quadratic equation :
# (x - p) (x - q) = 0 <br>
# ${x}^2$ - (p + q) + pq = 0 <br>
# ${x}^2$ - (n - $\phi$(n) + 1) + n = 0 <br>
# If this equation has integral roots, then the $\phi$(n) is correct.<br>
#
# Ref : https://www.youtube.com/watch?v=OpPrrndyYNU
# -
# ### Fermat factoring algorithm
#
#
# The algorithm is based upon the being able to factor the difference of 2 squares
#
#
# >n = ${x}^2$ - ${y}^2$ <br>
# n = (x - y) / (x + y) <br>
# And we know that every positive odd integer can be written as the difference of two squares.<br><br>
# For n = pq <br>
# n = ${(\frac{p + q}{2})} ^2$ - ${(\frac{p - q}{2})} ^2$ <br><br>
# Let k be the smallest positive integer so that k > $\sqrt n$ \
# Consider ${k}^2$ - n, <br>
# if it is a perfect square i.e. <br>
#
# n = (k + h) * (k - h) <br>
# else: <br>
# k = k + 1 <br>
# Eventually we will find an h such that ${k + h}^2$ - n can be factored because: <br>
# ${\frac{n + 1}{2}^2}$ - n = ${\frac{n - 1}{2}^2}$ <br><br>
# Fermat’s factorization algorithm works well if the factors are roughly the ${\bf same \ size.}$ (p and q are close together)
#
# + [markdown] heading_collapsed=true
# ### Hastads Broadcasting Attack
#
#
#
# >c_1 = $m^{e}$ mod N1 <br>
# c_2 = $m^{e}$ mod N2 <br>
# c_3 = $m^{e}$ mod N3 <br><br>
# Given GCD(N_i, N_j) = 1 <br>
# c = c1 mod N1 <br>
# c = c2 mod N2 <br>
# c = c3 mod N3 <br><br>
# Using CRT, we can find c <br>
# c = $m^e$ mod n1 * n2 * n3 <br><br>
# $\therefore$ m = $\sqrt[e]{c}$
#
#
# Ref: https://www.youtube.com/watch?v=DKWnvyCsh9A
# -
# ## Number Theory
# ### Fermat Little Theorem
#
# if gcd(a, p) = 1, p is a prime number and a is an integer <br>
# ${a}^{p - 1}$ = 1 mod p
#
# We considering the first p - 1 positive multiples of a <br>
#
# a , 2a , 3a , ... , (p - l)a <br>
#
# None of these numbers is congruent modulo p to any other nor is any congruent to zero. Because it it happened, <br>
# ra $\cong$ sa mod(p) where 1 <= r < s <= p - 1
#
# then a could be canceled to give r = s (mod p), which is impossible. <br>
#
# ra - sa = kp
# a(r - s) = kp
# and p is prime.
#
# $\therefore$ the previous set of integers must be congruent modulo 1, 2 , 3 , ... , p - 1
#
# Multiplying,
# a· 2a · 3a · · · (p - l)a = 1 · 2 · 3 · · · (p - 1) (mod p) <br>
# ${a}^{p - 1}$ = 1 mod p
#
# ### Chinese Remainder Theorem
#
# Let n1, n2, ... , nr be positive integers such that gcd(ni, nj) = 1 for i != j
#
# x = a1 (mod n1)
# x = a2 (mod n2)
# ...
# ...
# x = ar (mod nr)
#
# Then this has a simultaneous solution, which is unique modulo the integer n1n2...nr
#
# Proof(Intuition):
# Let N = n1*n2....*nr
#
# And Nk = N / nk
#
# Then x = a1*N1*x1 + a2*N2*x2 .......... + ar*Nr*xr
# is a simultaneous solution for this system.
#
# Observe: Ni = 0 mod(nk) because nk divides Ni (As Ni = N/ni )
# + [markdown] heading_collapsed=true
# ### Euler Theorem
#
#
# if n > 1 and gcd(a, n) = 1
# \begin{equation}
# a^{\phi (n)} = 1 \ mod \ n
# \end{equation}
#
#
#
#
# -
# ### Extended Euclidean Algorithm:
#
# Extended Euclidean algorithm also finds integer coefficients x and y such that:
# ax + by = gcd(a, b)
# ...
# ### Other
# 1. Let p, q be prime and p != q
#
# if a $\cong$ M mod(p) and a $\cong$ M mod(q)
#
# then a $\cong$ M mod(p $\times$ q)
#
#
# 2. If p and q are distinct primes with $a^p$ = a mod(p) and $a^q$ = a mod(p)
#
# then $a^{pq}$ = a (mod pq)
#
#
# 3. The linear congruence ax $\cong$ b (mod n) has a solution if and only if d divides b,
#
# where d = gcd(a, n). If d divides b, then it has d mutually incongruent solutions modulo n.
#
#
# 4. Wilson Theorem:
#
# If p is a prime, then (p - 1)! $\cong$ -1 (mod p ).
# ## PRNG
# ### Linear Congruential Generators
# Link : https://tailcall.net/blog/cracking-randomness-lcgs/
# + code_folding=[] hide_input=false
### Implementation
class prng_lcg:
m = 672257317069504227 # the "multiplier"
c = 7382843889490547368 # the "increment"
n = 9223372036854775783 # the "modulus"
def __init__(self, seed):
self.state = seed # the "seed"
def next(self):
self.state = (self.state * self.m + self.c) % self.n
return self.state
def test():
gen = prng_lcg(123) # seed = 123
print (gen.next()) # generate first value
print (gen.next()) # generate second value
print (gen.next()) # generate third value
# + hide_input=false
# Attacks
## unknown increment
def crack_unknown_increment(states, modulus, multiplier):
increment = (states[1] - states[0]*multiplier) % modulus
return modulus, multiplier, increment
## unknown increment and multiplier
def crack_unknown_multiplier(states, modulus):
multiplier = (states[2] - states[1]) * modinv(states[1] - states[0], modulus) % modulus
return crack_unknown_increment(states, modulus, multiplier)
## crack_unknown_modulus
def crack_unknown_modulus(states):
diffs = [s1 - s0 for s0, s1 in zip(states, states[1:])]
zeroes = [t2*t0 - t1*t1 for t0, t1, t2 in zip(diffs, diffs[1:], diffs[2:])]
modulus = abs(reduce(gcd, zeroes))
return crack_unknown_multiplier(states, modulus)
| Crypto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] azdata_cell_guid="491417cb-b92b-43bc-b87b-7e67bcae5589"
# # Exploring Memory-Optmized TempDB Metadata
#
# TempDB metadata contention has historically been a bottleneck to scalability for many workloads running on SQL Server. SQL Server 2019 introduces a new feature that is part of the [In-Memory Database](https://docs.microsoft.com/sql/relational-databases/in-memory-database) feature family, memory-optimized tempdb metadata, which effectively removes this bottleneck and unlocks a new level of scalability for tempdb-heavy workloads. In SQL Server 2019, the system tables involved in managing temp table metadata can be moved into latch-free non-durable memory-optimized tables.
#
# To learn more about tempdb metadata contention, along with other types of tempdb contention, check out the blog article [TEMPDB - Files and Trace Flags and Updates, Oh My!](https://techcommunity.microsoft.com/t5/SQL-Server/TEMPDB-Files-and-Trace-Flags-and-Updates-Oh-My/ba-p/385937). Keep reading to explore the new memory-optimized tempdb metadata feature.
# + [markdown] azdata_cell_guid="c740199e-107b-484a-9994-6883680db75e"
# ## Configure your environment
#
# Contention in tempdb happens when a large number of concurrent threads are attemping to create, modify or drop temp tables. In order to simulate this situation, you'll need to have a SQL Server instance that has multiple cores (4 or more is recommended), and a way to simulate multiple concurrent threads. For this example, we'll be using the ostress.exe tool to generate multiple concurrent threads. If you have an existing multi-core SQL Server instance, you can follow the T-SQL instructions to set up the demo, otherwise you can try the docker container steps instead.
#
# First, download the demo files to your local computer.
#
# ### Docker Container Setup
# Note that the docker commands may take some time to execute, but you will not see progress here until they are complete.
#
# 1. Make sure you have your docker environment configured, more information [here](https://docs.docker.com/get-started/).
# > NOTE
# > <br>If you are using Docker Desktop for Windows or Mac, the default configuration will limit your containers to 2 cores, regardless of the number of cores on your computer. Be sure to configure docker to allow at least 4 cores and 4GB of RAM for this demo to run properly. To do this, right click on the Docker Desktop icon in the status bar and choose Settings -> Advanced.
# 2. Pull the demo container with the following command:
# + azdata_cell_guid="1c435674-3a29-43db-af14-3bbaeb69cba8"
# ! docker pull bluefooted/sql2019tempdbdemo
# + [markdown] azdata_cell_guid="94a1d4c5-806b-4823-bbcc-527a46dcac53"
# 3. Start the demo container with the following command:
# + azdata_cell_guid="50d697be-9130-4bf8-8071-a670fce06a0c"
# ! docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<PASSWORD>!" -p 1455:1433 --name sql2019tempdbdemo -d bluefooted/sql2019tempdbdemo
# + [markdown] azdata_cell_guid="b713d8ad-99df-4a32-a65e-ab0779575f3b"
# > NOTE:
# > <br> If you see the following error, you may already have run the docker run command with this image:
# <br> <br> *docker: Error response from daemon: Conflict. The container name "/sql2019tempdbdemo" is already in use by container "3f662e0fd9b8cbdc1013e874722e066aa8e81ec3a07423fc3ab95cb75e640af9". You have to remove (or rename) that container to be able to reuse that name.
# See 'docker run --help'.*
#
# If you see this message, you can start the container instead with the following command:
# + azdata_cell_guid="e0d6ed1d-c6bc-4f08-a041-b6b70ea2054e"
# ! docker start sql2019tempdbdemo
# + [markdown] azdata_cell_guid="895b3f6a-1ba1-4480-9b06-9cfb2b3c246d"
# 4. Connect to the demo SQL Server instance using Azure Data Studio or SQL Server Management Studio using the following information:
#
# **Server Name**: localhost,1455<br>
# **Username**: sa<br>
# **Password**: <PASSWORD>!
# + [markdown] azdata_cell_guid="0302cf7c-5ee1-426c-b1a5-17fb492a0a8a"
# ### Existing SQL Server Instance Setup (skip if you are using the demo container)
#
# If you already have a SQL Server instance with a minimum of 4 cores, you can download and restore the AdventureWorks database and use the scripts in this repo to configure the database. Follow the steps in the T-SQL notebook to complete this setup.
#
# + [markdown] azdata_cell_guid="7ac2df51-c609-45fc-9e97-996c7dc6b505"
# ## Detecting TempDB Metadata Contention
#
# The first thing to figure out before you turn on this new feature is whether or not you are experiencing TempDB metadata contention. The main symptom of this contention is a number of sessions in `Suspended` state with a wait type of `PAGELATCH_xx` and a wait resource of a page that hosts a TempDB system table, such as `2:1:118`. In order to know whether or not the page is part of a TempDB system table, you can use the new dynamic management function `sys.dm_db_page_info()` in SQL Server 2019 or later, or the older `DBCC PAGE` command in older versions. For this demo, let's focus on `sys.dm_db_page_info()`. Note that this command will take some time to complete, you'll want to proceed to the next step before it completes.
#
# First, start the workload using the `ostress.exe` tool that is included in the downloads. Note that if you are not using the demo container, you will need to change the server name and login information.
# + azdata_cell_guid="b738ca65-ec24-4762-a773-d37674b41884"
# ! ostress.exe -Slocalhost,1455 -Usa -PP@ssw0rd! -dAdventureWorks -Q"EXEC dbo.usp_EmployeeBirthdayList 4" -mstress -quiet -n16 -r120 | FINDSTR "QEXEC Starting Creating elapsed"
# + [markdown] azdata_cell_guid="a3421322-4a07-4348-88f0-2e870368291b"
# While the above script is running, switch over to the T-SQL notebook and run the script to monitor your workload for page contention. You should see several sessions with a `wait_type` of `PAGELATCH_EX` or `PAGELATCH_SH`, often with an `object_name` of `sysschobjs`.
#
# > NOTE
# > <br>If this query does not return any results, make sure the command above is still running. If it is not running, start it and try the query again. If you still do not see any sessions waiting, you may need to increase the number of CPUs available to your server, and/or increase the number of concurrent threads by increasing the `-n` parameter in the command. This demo was tested with 4 cores and 16 concurrent sessions, which should yield the expected results. If you would like more time to examine the contention, you can increase the `-r` parameter, which will increase the number of iterations.
# + [markdown] azdata_cell_guid="aa10efd1-a1f1-4fbc-9e20-5ab94929d9ff"
# ## Improve performance with Memory-Optimized TempDB Metadata
#
# Now that you have observed TempDB metadata contention, let's see how SQL Server 2019 addresses this contention. Switch over to the T-SQL notebook to review and run the script to enable Memory-Optimized TempDB Metadata.
#
# Once you have run the T-SQL command, you will need to restart the service. If you are using the demo container, you can do so with the following command:
# + azdata_cell_guid="65567cd7-bdda-4501-b04a-3e7c3579252c"
# ! docker restart sql2019tempdbdemo
# + [markdown] azdata_cell_guid="9624ed6d-06b4-49b2-bb5e-5a0c2b6b457c"
# Once the server is restarted, you can use queries in the T-SQL notebook to verify that the feature has been enabled.
#
# > NOTE
# > <br> It's a good idea to run a few T-SQL queries after the restart to make sure the server is up and running before you attempt the scenario again.
#
# Now that we have enabled Memory-Optimized TempDB Metadata, let's try running the workload again:
# + azdata_cell_guid="58c2f587-a9a5-4d6c-8cd3-93e555f08dee"
# ! ostress.exe -Slocalhost,1455 -Usa -PP@ssw0rd! -dAdventureWorks -Q"EXEC dbo.usp_EmployeeBirthdayList 4" -mstress -quiet -n16 -r120 | FINDSTR "QEXEC Starting Creating elapsed"
# + [markdown] azdata_cell_guid="da79d28b-80d6-4644-98e5-23f250ec9c19"
# While this is running, switch over to the T-SQL notebook to run the monitoring script again. This time, you should not see any sessions waiting on `PAGELATCH_EX` or `PAGELATCH_SH`. Also, the script should complete faster than before the change was made.
| samples/features/in-memory-database/memory-optimized-tempdb-metadata/MemoryOptmizedTempDBMetadata-Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Standard Scalar
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import pandas as pd
#categorizing the input and output data
bat_data = pd.read_csv('NEWTrainingData_StandardScaler.csv')
train_bat = bat_data
D_in, H, H2, D_out, N = 165, 100, 75, 2, 4000
dtype = torch.float
device = torch.device('cpu')
#define testing sets
x_train = train_bat.drop(columns=['Unnamed: 0', 'Gravimetric Capacity (units)', 'Volumetric Capacity', 'Max Delta Volume'])
y_train = train_bat[['Gravimetric Capacity (units)', 'Volumetric Capacity']]
#test-train split
#shuffle
x_train = x_train.sample(frac = 1)
y_train = y_train.sample(frac = 1)
#optimize for user data
x_test = x_train[4000:]
y_test = y_train[4000:]
x_train = x_train[:4000]
y_train = y_train[:4000]
#Defining training and testing data
x_train_np = np.array(x_train)
x_train_torch = torch.tensor(x_train_np, device = device, dtype = dtype)
y_train_np = np.array(y_train)
y_train_torch = torch.tensor(y_train_np, device = device, dtype = dtype)
x_test_np = np.array(x_test)
x_test_torch = torch.tensor(x_test_np, device = device, dtype = dtype)
y_test_np = np.array(y_test)
y_test_torch = torch.tensor(y_test_np, device = device, dtype = dtype)
#Defining weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
#w3 = torch.randn(H2, D_out, device=device, dtype=dtype)
#define model
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.LeakyReLU(),
torch.nn.Linear(H, H2),
torch.nn.LeakyReLU(),
torch.nn.Linear(H2, D_out),
#nn.Softmax(dim=1) #normalizing the data,
)
optimizer = optim.SGD(model.parameters(), lr=1e-5)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-5
for t in range(1000):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
y_pred = model(x_train_torch)
# Compute and print loss. We pass Tensors containing the predicted and true
# values of y, and the loss function returns a Tensor containing the
# loss.
loss = loss_fn(y_pred, y_train_torch)
if t % 100 == 99:
print(t, loss.item())
# Zero the gradients before running the backward/training pass.
optimizer.zero_grad()
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Tensors with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
optimizer.step()
# Update the weights using gradient descent. Each parameter is a Tensor, so
# we can access its gradients like we did before.
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
# -
# ### Test Error
y_pred_test = model(x_test_torch)
loss_fn(y_pred_test, y_test_torch)
# ### RSS
RSS = ((model(x_train_torch)-y_train_torch)**2).sum()
print(RSS)
# ## MinMax Scalar
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import pandas as pd
#categorizing the input and output data
bat_data = pd.read_csv('NEWTrainingData_MinMaxScaler.csv')
train_bat = bat_data
D_in, H, H2, D_out, N = 115, 100, 75, 2, 4000
dtype = torch.float
device = torch.device('cpu')
#define testing sets
x_train = train_bat.drop(columns=['Unnamed: 0', 'Gravimetric Capacity (units)', 'Volumetric Capacity', 'Max Delta Volume'])
y_train = train_bat[['Gravimetric Capacity (units)', 'Volumetric Capacity']]
#shuffle data
x_train = x_train.sample(frac=1)
y_train = y_train.sample(frac=1)
#test-train split
#optimize for user data
x_test = x_train[4000:]
y_test = y_train[4000:]
x_train = x_train[:4000]
y_train = y_train[:4000]
#Defining training and testing data
x_train_np = np.array(x_train)
x_train_torch = torch.tensor(x_train_np, device = device, dtype = dtype)
y_train_np = np.array(y_train)
y_train_torch = torch.tensor(y_train_np, device = device, dtype = dtype)
x_test_np = np.array(x_test)
x_test_torch = torch.tensor(x_test_np, device = device, dtype = dtype)
y_test_np = np.array(y_test)
y_test_torch = torch.tensor(y_test_np, device = device, dtype = dtype)
#Defining weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
#w3 = torch.randn(H2, D_out, device=device, dtype=dtype)
#define model
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.LeakyReLU(),
torch.nn.Linear(H, H2),
torch.nn.LeakyReLU(),
torch.nn.Linear(H2, D_out),
#nn.Softmax(dim=1) #normalizing the data,
)
optimizer = optim.SGD(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(1000):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
y_pred = model(x_train_torch)
# Compute and print loss. We pass Tensors containing the predicted and true
# values of y, and the loss function returns a Tensor containing the
# loss.
loss = loss_fn(y_pred, y_train_torch)
if t % 100 == 99:
print(t, loss.item())
# Zero the gradients before running the backward/training pass.
optimizer.zero_grad()
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Tensors with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
optimizer.step()
# Update the weights using gradient descent. Each parameter is a Tensor, so
# we can access its gradients like we did before.
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
# -
# ### Test Error
y_pred_test = model(x_test_torch)
loss_fn(y_test_torch, y_pred_test)
# ### RSS
RSS = ((model(x_train_torch)-y_train_torch)**2).sum()
print(RSS)
# ## Predicting Volume Change
train_bat
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import pandas as pd
#categorizing the input and output data
bat_data = pd.read_csv('NEWTrainingData_MinMaxScaler.csv')
train_bat = bat_data
D_in, H, H2, D_out, N = 2, 100, 75, 1, 4000
dtype = torch.float
device = torch.device('cpu')
#define testing sets
x_train = train_bat[['Gravimetric Capacity (units)', 'Volumetric Capacity']]
y_train = train_bat['Max Delta Volume']
#shuffle data
x_train = x_train.sample(frac=1)
y_train = y_train.sample(frac=1)
#test-train split
#optimize for user data
x_test = x_train[4000:]
y_test = y_train[4000:]
x_train = x_train[:4000]
y_train = y_train[:4000]
#Defining training and testing data
x_train_np = np.array(x_train)
x_train_torch = torch.tensor(x_train_np, device = device, dtype = dtype)
y_train_np = np.array(y_train)
y_train_torch = torch.tensor(y_train_np, device = device, dtype = dtype)
x_test_np = np.array(x_test)
x_test_torch = torch.tensor(x_test_np, device = device, dtype = dtype)
y_test_np = np.array(y_test)
y_test_torch = torch.tensor(y_test_np, device = device, dtype = dtype)
#Defining weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
#w3 = torch.randn(H2, D_out, device=device, dtype=dtype)
#define model
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.LeakyReLU(),
torch.nn.Linear(H, H2),
torch.nn.LeakyReLU(),
torch.nn.Linear(H2, D_out),
#nn.Softmax(dim=1) #normalizing the data,
)
optimizer = optim.SGD(model.parameters(), lr=1e-8)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-8
for t in range(1000):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
y_pred = model(x_train_torch)
# Compute and print loss. We pass Tensors containing the predicted and true
# values of y, and the loss function returns a Tensor containing the
# loss.
loss = loss_fn(y_pred, y_train_torch)
if t % 100 == 99:
print(t, loss.item())
# Zero the gradients before running the backward/training pass.
optimizer.zero_grad()
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Tensors with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
optimizer.step()
# Update the weights using gradient descent. Each parameter is a Tensor, so
# we can access its gradients like we did before.
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
# -
# ### Test Error
y_pred_test = model(x_test_torch)
loss_fn(y_test_torch, y_pred_test)
# ### RSS
RSS = ((model(x_train_torch)-y_train_torch)**2).sum()
print(RSS)
| kratosbat/NN/SecondPassNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Word Boundaries
#
# We will now learn about another special sequence that you can create using the backslash:
#
# * `\b`
#
# This special sequence doesn't really match a particular set of characters, but rather determines word boundaries. A word in this context is defined as a sequence of alphanumeric characters, while a boundary is defined as a white space, a non-alphanumeric character, or the beginning or end of a string. We can have boundaries either before or after a word. Let's see how this works with an example.
#
# In the code below, our `sample_text` string contains the following sentence:
#
# ```
# The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.
# ```
#
# As we can see the word `class` appears in three different positions:
#
# 1. As a stand-alone word: The word `class` has white spaces both before and after it.
#
#
# 2. At the beginning of a word: The word `class` in `classroom` has a white space before it.
#
#
# 3. At the end of a word: The word `class` in `subclass` has a whitespace after it.
#
# If we use `class` as our regular expression, we will match the word `class` in all three positions as shown in the code below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression 'class'
regex = re.compile(r'class')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# We can see that we have three matches, corresponding to all the instances of the word `class` in our `sample_text` string.
#
# Now, let's use word boundaries to only find the word `class` when it appears in particular positions. Let’s start by using `\b` to only find the word `class` when it appears at the beginning of a word. We can do this by adding `\b` before the word `class` in our regular expression as shown below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression '\bclass'
regex = re.compile(r'\bclass')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# We can see that now we only have two matches because it's only matching the stand-alone word, `class`, and the `class` in `classroom` since both of them have a word boundary (in this case a white space) directly before them. We can also see that it is not matching the `class` in `subclass` because there is no word boundary directly before it.
#
# Now, let's use `\b` to only find the word `class` when it appears at the end of a word. We can do this by adding `\b` after the word `class` in our regular expression as shown below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression 'class\b'
regex = re.compile(r'class\b')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# We can see that in this case we have two matches as well because it's matching the stand-alone word, `class` again, and the `class` in `subclass` since both of them have a word boundary (in this case a white space) directly after them. We can also see that it is not matching the `class` in `classroom` because there is no word boundary directly after it.
#
# Now, let's use `\b` to only find the word `class` when it appears as a stand-alone word. We can do this by adding `\b` both before and after the word `class` in our regular expression as shown below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression '\bclass\b'
regex = re.compile(r'\bclass\b')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# We can see that now we only have one match because the stand-alone word, `class`, is the only one that has a word boundary (in this case a white space) directly before and after it.
# # TODO: Find All 3-Letter Words
#
# In the cell below, write a regular expression that can match all 3-letter words in the `sample_text` string. As usual, save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Finally, write a loop to print all the `matches` found by the `.finditer()` method.
# +
# Import re module
import re
# Sample text
sample_text = 'John went to the store in his car, but forgot to buy bread.'
# Create a regular expression object with the regular expression
regex = re.compile(r'\b\w\w\w\b')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# # Not A Word Boundary
#
# As with the other special sequences that we saw before, we also have the uppercase version of `\b`, namely:
#
# * `\B`
#
# As with the other special sequences, `\B` indicates the opposite of `\b`. So if `\b` is used to indicate a word boundary, `\B` is used to indicate **not** a word boundary. Let's see how this works:
#
# Let's use `\B` to only find the word `class` when it **doesn't** have a word boundary directly before it. We can do this by adding `\B` before the word `class` in our regular expression as shown below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression '\Bclass'
regex = re.compile(r'\Bclass')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# We can see that we only get one match because the `class` in `subclass` is the only one that **doesn't** have a word boundary directly before it.
#
# Now, let's use `\B` to only find the word `class` when it **doesn't** have a word boundary directly after it. We can do this by adding `\B` after the word `class` in our regular expression as shown below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression 'class\B'
regex = re.compile(r'class\B')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# We can see that again we only have one match because the `class` in `classroom` is the only one that **doesn't** have a boundary directly after it.
#
# Finally, let's use `\B` to only find the word `class` when it **doesn't** have a word boundary directly before or after it. We can do this by adding `\B` both before and after the word `class` in our regular expression as shown below:
# +
# Import re module
import re
# Sample text
sample_text = 'The biology class will meet in the first floor classroom to learn about Theria, a subclass of mammals.'
# Create a regular expression object with the regular expression '\Bclass\B'
regex = re.compile(r'\Bclass\B')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# In this case, we can see that we get no matches. This is because all instances of the word `class` in our `sample_text` string, have a boundary either before or after it. In order to have a match in this case, the word `class` will have to appear in the middle of a word, such as in the word `declassified`. Let's see an example:
# +
# Import re module
import re
# Sample text
sample_text = 'declassified'
# Create a regular expression object with the regular expression '\Bclass\B'
regex = re.compile(r'\Bclass\B')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
for match in matches:
print(match)
# -
# # TODO: Finding Last Digits
#
# In the cell below, our `sample_text` string contains some numbers separated by whitespace characters.
#
# Write code that uses a regular expression to count how many numbers (greater than 3), have 3 as their last digit. For example, 93 is greater than 3 and its last digit is 3, so your code should count this number as a match. However, the number 3 by itself should not be counted as a match.
#
# As usual, save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Then, write a loop to print all the `matches` found by the `.finditer()` method. Finally, print the total number of matches.
# +
# Import re module
import re
# Sample text
sample_text = '203 3 403 687 283 234 983 345 23 3 74 978'
# Create a regular expression object with the regular expression
regex = re.compile(r'\B3\b')
# Search the sample_text for the regular expression
matches = regex.finditer(sample_text)
# Print all the matches
cnt = 0
for match in matches:
cnt += 1
print(match)
# Print the total number of matches
print(f'Total number of matches = {cnt}')
| Quiz/m5_financial_statements/word_boundaries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.032637, "end_time": "2020-09-25T19:48:19.468299", "exception": false, "start_time": "2020-09-25T19:48:19.435662", "status": "completed"} tags=[] id="PB3EzyAeZje2"
# # **Sentiment Analysis with Deep Learning using BERT**
#
#
# ## **What is BERT?**
#
# BERT is a large-scale transformer-based Language Model that can be finetuned for a variety of tasks.
#
# For more information, the original paper can be found here (https://arxiv.org/abs/1810.04805).
#
# HuggingFace documentation (https://huggingface.co/transformers/model_doc/bert.html)
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.049233, "end_time": "2020-09-25T19:48:19.549659", "exception": false, "start_time": "2020-09-25T19:48:19.500426", "status": "completed"} tags=[] id="U_f7Wnb-Zje6" executionInfo={"status": "ok", "timestamp": 1639318175666, "user_tz": -330, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + [markdown] papermill={"duration": 0.04373, "end_time": "2020-09-25T19:48:19.638706", "exception": false, "start_time": "2020-09-25T19:48:19.594976", "status": "completed"} tags=[] id="2kNXLTeVZje9"
# ## 1: Exploratory Data Analysis and Preprocessing
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" papermill={"duration": 1.602442, "end_time": "2020-09-25T19:48:21.286204", "exception": false, "start_time": "2020-09-25T19:48:19.683762", "status": "completed"} tags=[] id="l8CMcl8qZje9" executionInfo={"status": "ok", "timestamp": 1639318191204, "user_tz": -330, "elapsed": 6168, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
import torch
from tqdm.notebook import tqdm
# + papermill={"duration": 0.067018, "end_time": "2020-09-25T19:48:21.384464", "exception": false, "start_time": "2020-09-25T19:48:21.317446", "status": "completed"} tags=[] id="oJCcya0-Zje9" executionInfo={"status": "ok", "timestamp": 1639318229654, "user_tz": -330, "elapsed": 314, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
df = pd.read_csv('corpus.csv',
names=['text', 'category'])
df = df[1:]
df.dropna()
df.insert(0, 'id', range(1, 1 + len(df)))
df.set_index('id', inplace=True)
# + papermill={"duration": 0.046982, "end_time": "2020-09-25T19:48:21.462996", "exception": false, "start_time": "2020-09-25T19:48:21.416014", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 237} id="7bQgHecxZje-" executionInfo={"status": "ok", "timestamp": 1639318229981, "user_tz": -330, "elapsed": 19, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="9470cb37-e76b-4761-fb45-4170844c734e"
df.head()
# + papermill={"duration": 0.041243, "end_time": "2020-09-25T19:48:21.535564", "exception": false, "start_time": "2020-09-25T19:48:21.494321", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="_b_AXYFFZje_" executionInfo={"status": "ok", "timestamp": 1639318229982, "user_tz": -330, "elapsed": 16, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="604415dc-4228-47b3-c5cd-b49361c05300"
df.category.value_counts()
# + papermill={"duration": 0.04214, "end_time": "2020-09-25T19:48:21.684168", "exception": false, "start_time": "2020-09-25T19:48:21.642028", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="KgKKJghiZjfB" executionInfo={"status": "ok", "timestamp": 1639318229983, "user_tz": -330, "elapsed": 13, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="1a8d11c8-a331-46a6-d6fd-7ed8eba92ddb"
df.category.value_counts()
# + papermill={"duration": 0.039202, "end_time": "2020-09-25T19:48:21.755805", "exception": false, "start_time": "2020-09-25T19:48:21.716603", "status": "completed"} tags=[] id="ZARDZX2oZjfB" executionInfo={"status": "ok", "timestamp": 1639318232343, "user_tz": -330, "elapsed": 4, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
possible_labels = df.category.unique()
# + papermill={"duration": 0.039653, "end_time": "2020-09-25T19:48:21.827872", "exception": false, "start_time": "2020-09-25T19:48:21.788219", "status": "completed"} tags=[] id="2n7gNOPJZjfB" executionInfo={"status": "ok", "timestamp": 1639318232682, "user_tz": -330, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
label_dict = {}
for index, possible_label in enumerate(possible_labels):
label_dict[possible_label] = index
# + papermill={"duration": 0.040137, "end_time": "2020-09-25T19:48:21.900939", "exception": false, "start_time": "2020-09-25T19:48:21.860802", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="qqsYs4pNZjfC" executionInfo={"status": "ok", "timestamp": 1639318233011, "user_tz": -330, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="f268b05d-a01b-4192-b8e9-a26d5fcd59bd"
label_dict
# + papermill={"duration": 0.044378, "end_time": "2020-09-25T19:48:21.979606", "exception": false, "start_time": "2020-09-25T19:48:21.935228", "status": "completed"} tags=[] id="i5QNZ3DEZjfC" executionInfo={"status": "ok", "timestamp": 1639318233491, "user_tz": -330, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
df.category = df['category'].map(label_dict)
# + papermill={"duration": 0.057347, "end_time": "2020-09-25T19:48:22.091890", "exception": false, "start_time": "2020-09-25T19:48:22.034543", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 394} id="LpA_HUTxZjfD" executionInfo={"status": "ok", "timestamp": 1639318234608, "user_tz": -330, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="b9f45ac2-10b2-4f49-d2a3-8c2b7fb2a7a5"
df.head(10)
# + [markdown] papermill={"duration": 0.038215, "end_time": "2020-09-25T19:48:22.167600", "exception": false, "start_time": "2020-09-25T19:48:22.129385", "status": "completed"} tags=[] id="JqV2GTcpZjfD"
# Classes are imbalanced as visible
# + [markdown] papermill={"duration": 0.038175, "end_time": "2020-09-25T19:48:22.242791", "exception": false, "start_time": "2020-09-25T19:48:22.204616", "status": "completed"} tags=[] id="zbzBU3J5ZjfD"
# ## 2: Training/Validation Split
# + papermill={"duration": 1.053035, "end_time": "2020-09-25T19:48:23.332257", "exception": false, "start_time": "2020-09-25T19:48:22.279222", "status": "completed"} tags=[] id="D9cr5BIHZjfD" executionInfo={"status": "ok", "timestamp": 1639318236123, "user_tz": -330, "elapsed": 2, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
from sklearn.model_selection import train_test_split
# + papermill={"duration": 0.072998, "end_time": "2020-09-25T19:48:23.459663", "exception": false, "start_time": "2020-09-25T19:48:23.386665", "status": "completed"} tags=[] id="ehpaRMK7ZjfD" executionInfo={"status": "ok", "timestamp": 1639318236500, "user_tz": -330, "elapsed": 2, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
X_train, X_val, y_train, y_val = train_test_split(df.index.values,
df.category.values,
test_size=0.15,
random_state=42,
stratify=df.category.values)
# + papermill={"duration": 0.074634, "end_time": "2020-09-25T19:48:23.591300", "exception": false, "start_time": "2020-09-25T19:48:23.516666", "status": "completed"} tags=[] id="bCl2CePBZjfE" executionInfo={"status": "ok", "timestamp": 1639318237660, "user_tz": -330, "elapsed": 2, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
df['data_type'] = ['not_set']*df.shape[0]
# + papermill={"duration": 0.064226, "end_time": "2020-09-25T19:48:23.712026", "exception": false, "start_time": "2020-09-25T19:48:23.647800", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 237} id="kDFV1ZNvZjfE" executionInfo={"status": "ok", "timestamp": 1639318238569, "user_tz": -330, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="0643ddda-abc8-407b-bd9b-f9ec6037af2f"
df.head()
# + papermill={"duration": 0.063313, "end_time": "2020-09-25T19:48:23.821906", "exception": false, "start_time": "2020-09-25T19:48:23.758593", "status": "completed"} tags=[] id="W8hQRO_7ZjfE" executionInfo={"status": "ok", "timestamp": 1639318239818, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
# + papermill={"duration": 0.064645, "end_time": "2020-09-25T19:48:23.923094", "exception": false, "start_time": "2020-09-25T19:48:23.858449", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 269} id="WkOvtTxQZjfE" executionInfo={"status": "ok", "timestamp": 1639318241474, "user_tz": -330, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="2ac8c50e-dcc6-4a4a-a2a4-919521e3dd9c"
df.groupby(['category', 'data_type']).count()
# + [markdown] papermill={"duration": 0.049916, "end_time": "2020-09-25T19:48:24.020356", "exception": false, "start_time": "2020-09-25T19:48:23.970440", "status": "completed"} tags=[] id="sw9KYLScZjfF"
# # 3. Loading Tokenizer and Encoding our Data
# + colab={"base_uri": "https://localhost:8080/"} id="rzlbQxuwacL0" executionInfo={"status": "ok", "timestamp": 1639318253100, "user_tz": -330, "elapsed": 8997, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="302ca6f1-4107-4111-94d0-4de273e4fead"
# !pip install transformers
# + papermill={"duration": 6.608677, "end_time": "2020-09-25T19:48:30.679359", "exception": false, "start_time": "2020-09-25T19:48:24.070682", "status": "completed"} tags=[] id="JD8kJWXzZjfF" executionInfo={"status": "ok", "timestamp": 1639318264366, "user_tz": -330, "elapsed": 2939, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
from transformers import XLMTokenizer
from torch.utils.data import TensorDataset
# + papermill={"duration": 0.359537, "end_time": "2020-09-25T19:48:31.075982", "exception": false, "start_time": "2020-09-25T19:48:30.716445", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 145, "referenced_widgets": ["bee6372ec09a45cab8c73f96ff248fae", "ba4f6f0102304f8cbb3f7231ceb8d654", "960cd07c6e484d96b6ac5aa30d511147", "<KEY>", "f597124af27e4ed2a8ac056f9250e2ab", "<KEY>", "<KEY>", "2d3e582b2be54f71995f8fc47267ff4b", "<KEY>", "e518edd212df4347bfc2a113b2a7bb20", "<KEY>", "<KEY>", "<KEY>", "6f5c9f5da0264d468188bbe63cab2fa8", "<KEY>", "a0a853a3259d4bdea5f8c1a12747fba4", "<KEY>", "<KEY>", "130e7dd9b81c40db8ad3d5b7ce5d8930", "97d9da140a3a44c4af3f20bd91be9ae1", "c93f947e679e4f6d89ac67c481a44214", "145d3d79e5a44c449219b58eab634795", "<KEY>", "<KEY>", "f45255847a7349dbb94834ebfe7036f0", "<KEY>", "<KEY>", "26989e88c6e3401baffb00eac8aa9d57", "ca321ae7ba97424585aafce6990c72ee", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "daffe76f4f654158aaa18c7efb8dcf47", "<KEY>", "<KEY>", "830a75ad94624e57ac70dd3c037461ee", "<KEY>", "<KEY>", "<KEY>", "737d0abbe7664611b332d2fed01ad520", "<KEY>", "<KEY>"]} id="aH3mf07xZjfF" executionInfo={"status": "ok", "timestamp": 1639318282157, "user_tz": -330, "elapsed": 3020, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="813f468c-e424-49d0-cbf4-00de413861a4"
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-xnli15-1024')
# + colab={"base_uri": "https://localhost:8080/", "height": 455} id="tQrL4kqya_BA" executionInfo={"status": "ok", "timestamp": 1639318282158, "user_tz": -330, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="cad779e5-cacb-4b7c-8570-57c3a7b9b5c8"
df.dropna()
# + colab={"base_uri": "https://localhost:8080/"} id="eDyoFe0CbOwH" executionInfo={"status": "ok", "timestamp": 1639318283954, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="96fe7a01-6ad6-4e2b-d7ed-ed19c0fecf77"
df['text'] = df['text'].astype('str')
df[df.data_type=='train'].text.values
# + id="zDHlRFaYbd3F" executionInfo={"status": "ok", "timestamp": 1639318285197, "user_tz": -330, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
all_text_train = df[df.data_type=='train'].text.tolist()
# + papermill={"duration": 1.266904, "end_time": "2020-09-25T19:48:32.381780", "exception": false, "start_time": "2020-09-25T19:48:31.114876", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="RvCxZqElZjfF" executionInfo={"status": "ok", "timestamp": 1639318307298, "user_tz": -330, "elapsed": 14441, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="52c1db14-ecdc-4580-8f1d-6d56b5ca448f"
encoded_data_train = tokenizer.batch_encode_plus(
all_text_train,
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=256,
return_tensors='pt'
)
encoded_data_val = tokenizer.batch_encode_plus(
df[df.data_type=='val'].text.tolist(),
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=256,
return_tensors='pt'
)
input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(df[df.data_type=='train'].category.values)
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(df[df.data_type=='val'].category.values)
# + colab={"base_uri": "https://localhost:8080/"} id="SNOtdwutbeTT" executionInfo={"status": "ok", "timestamp": 1639318307299, "user_tz": -330, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="b4f81512-6ec6-48c6-d25d-ca7d370bcf79"
len(input_ids_train), len(attention_masks_train), len(labels_train)
# + papermill={"duration": 0.048886, "end_time": "2020-09-25T19:48:32.470406", "exception": false, "start_time": "2020-09-25T19:48:32.421520", "status": "completed"} tags=[] id="1b-ihvLJZjfG" executionInfo={"status": "ok", "timestamp": 1639318307299, "user_tz": -330, "elapsed": 20, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
dataset_train = TensorDataset(input_ids_train,
attention_masks_train,
labels_train)
dataset_val = TensorDataset(input_ids_val,
attention_masks_val,
labels_val)
# + papermill={"duration": 0.046463, "end_time": "2020-09-25T19:48:32.554501", "exception": false, "start_time": "2020-09-25T19:48:32.508038", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="Ka76yNGBZjfG" executionInfo={"status": "ok", "timestamp": 1639318307300, "user_tz": -330, "elapsed": 20, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="52b626f0-4f02-480a-cd0f-b516413f868c"
len(dataset_train)
# + papermill={"duration": 0.067851, "end_time": "2020-09-25T19:48:32.660735", "exception": false, "start_time": "2020-09-25T19:48:32.592884", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="p_VGPdvlZjfG" executionInfo={"status": "ok", "timestamp": 1639318307300, "user_tz": -330, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="b8973e21-6dd3-410e-b7e4-6b07fe0c875b"
dataset_val.tensors
# + [markdown] papermill={"duration": 0.038163, "end_time": "2020-09-25T19:48:32.739040", "exception": false, "start_time": "2020-09-25T19:48:32.700877", "status": "completed"} tags=[] id="S2PPOrl0ZjfG"
# # 4. Setting up BERT Pretrained Model
# + papermill={"duration": 0.045229, "end_time": "2020-09-25T19:48:32.823259", "exception": false, "start_time": "2020-09-25T19:48:32.778030", "status": "completed"} tags=[] id="ukyhArGpZjfG" executionInfo={"status": "ok", "timestamp": 1639318337414, "user_tz": -330, "elapsed": 309, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
from transformers import AutoConfig, XLMForSequenceClassification
# + papermill={"duration": 35.879219, "end_time": "2020-09-25T19:49:08.742460", "exception": false, "start_time": "2020-09-25T19:48:32.863241", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 160, "referenced_widgets": ["debe75f41df14dad8caf5063c76fe3c6", "fce0ae24b0e2429ab4eb8af8646bda6b", "dfcf7530522044ec8ff027f9687f9479", "321c6e9d063642a2b8b8daa493176814", "5e50457b66424437a0dbd1d525887475", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "7ef4fede5c2e4596b2dc2e40c8591db9", "487926156d7f435eb2acbc6e4bd1ebce"]} id="pXj2ldcDZjfH" executionInfo={"status": "ok", "timestamp": 1639318394423, "user_tz": -330, "elapsed": 48003, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="34b37a8e-c148-4478-9954-325d7d1ec8b3"
model = XLMForSequenceClassification.from_pretrained(
'xlm-mlm-xnli15-1024',
num_labels = len(label_dict),
output_attentions = False,
output_hidden_states = False
)
# + [markdown] papermill={"duration": 0.041057, "end_time": "2020-09-25T19:49:08.825580", "exception": false, "start_time": "2020-09-25T19:49:08.784523", "status": "completed"} tags=[] id="YbVCrvo0ZjfH"
# # 5. Creating Data Loaders
# + papermill={"duration": 0.049908, "end_time": "2020-09-25T19:49:08.917274", "exception": false, "start_time": "2020-09-25T19:49:08.867366", "status": "completed"} tags=[] id="m_cZirsVZjfH" executionInfo={"status": "ok", "timestamp": 1639318394424, "user_tz": -330, "elapsed": 15, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
# + papermill={"duration": 0.05037, "end_time": "2020-09-25T19:49:09.008401", "exception": false, "start_time": "2020-09-25T19:49:08.958031", "status": "completed"} tags=[] id="2ldIzMECZjfH" executionInfo={"status": "ok", "timestamp": 1639318394425, "user_tz": -330, "elapsed": 14, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
batch_size = 4
dataloader_train = DataLoader(
dataset_train,
sampler=RandomSampler(dataset_train),
batch_size=batch_size
)
dataloader_val = DataLoader(
dataset_val,
sampler=RandomSampler(dataset_val),
batch_size=32
)
# + [markdown] papermill={"duration": 0.041482, "end_time": "2020-09-25T19:49:09.090888", "exception": false, "start_time": "2020-09-25T19:49:09.049406", "status": "completed"} tags=[] id="D87ySXdUZjfH"
# # 6. Setting Up Optimizer and Scheduler
# + papermill={"duration": 0.048269, "end_time": "2020-09-25T19:49:09.180451", "exception": false, "start_time": "2020-09-25T19:49:09.132182", "status": "completed"} tags=[] id="TwX6fkxiZjfH" executionInfo={"status": "ok", "timestamp": 1639318394425, "user_tz": -330, "elapsed": 13, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
from transformers import AdamW, get_linear_schedule_with_warmup
# + papermill={"duration": 0.051332, "end_time": "2020-09-25T19:49:09.275138", "exception": false, "start_time": "2020-09-25T19:49:09.223806", "status": "completed"} tags=[] id="_-S-hXBvZjfI" executionInfo={"status": "ok", "timestamp": 1639318394426, "user_tz": -330, "elapsed": 13, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
optimizer = AdamW(
model.parameters(),
lr = 1e-5,
eps = 1e-8
)
# + papermill={"duration": 0.04937, "end_time": "2020-09-25T19:49:09.366346", "exception": false, "start_time": "2020-09-25T19:49:09.316976", "status": "completed"} tags=[] id="7ufHbhbUZjfI" executionInfo={"status": "ok", "timestamp": 1639318394427, "user_tz": -330, "elapsed": 13, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
epochs = 5
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=0,
num_training_steps = len(dataloader_train)*epochs
)
# + [markdown] papermill={"duration": 0.041344, "end_time": "2020-09-25T19:49:09.449933", "exception": false, "start_time": "2020-09-25T19:49:09.408589", "status": "completed"} tags=[] id="uRhVjKFWZjfI"
# # 7. Defining our Performance Metrics
# + papermill={"duration": 0.048777, "end_time": "2020-09-25T19:49:09.540839", "exception": false, "start_time": "2020-09-25T19:49:09.492062", "status": "completed"} tags=[] id="FzdDtGjVZjfI" executionInfo={"status": "ok", "timestamp": 1639318394428, "user_tz": -330, "elapsed": 13, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
import numpy as np
from sklearn.metrics import f1_score
# + papermill={"duration": 0.050586, "end_time": "2020-09-25T19:49:09.634137", "exception": false, "start_time": "2020-09-25T19:49:09.583551", "status": "completed"} tags=[] id="ehPP23vfZjfI" executionInfo={"status": "ok", "timestamp": 1639318394974, "user_tz": -330, "elapsed": 558, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
def f1_score_func(preds, labels):
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return f1_score(labels_flat, preds_flat, average = 'weighted')
# + papermill={"duration": 0.053645, "end_time": "2020-09-25T19:49:09.730645", "exception": false, "start_time": "2020-09-25T19:49:09.677000", "status": "completed"} tags=[] id="bh6xyhUCZjfI" executionInfo={"status": "ok", "timestamp": 1639318394975, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
def accuracy_per_class(preds, labels):
label_dict_inverse = {v: k for k, v in label_dict.items()}
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
for label in np.unique(labels_flat):
y_preds = preds_flat[labels_flat==label]
y_true = labels_flat[labels_flat==label]
print(f'Class: {label_dict_inverse[label]}')
print(f'Accuracy:{len(y_preds[y_preds==label])}/{len(y_true)}\n')
# + [markdown] papermill={"duration": 0.042252, "end_time": "2020-09-25T19:49:09.816391", "exception": false, "start_time": "2020-09-25T19:49:09.774139", "status": "completed"} tags=[] id="xApqMoJmZjfI"
# # 8. Creating our Training Loop
# + papermill={"duration": 0.060651, "end_time": "2020-09-25T19:49:09.919256", "exception": false, "start_time": "2020-09-25T19:49:09.858605", "status": "completed"} tags=[] id="Wj-k5_uDZjfJ" executionInfo={"status": "ok", "timestamp": 1639318394976, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
import random
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# + papermill={"duration": 7.770862, "end_time": "2020-09-25T19:49:17.733331", "exception": false, "start_time": "2020-09-25T19:49:09.962469", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="BzLmjw4aZjfJ" executionInfo={"status": "ok", "timestamp": 1639318406545, "user_tz": -330, "elapsed": 11573, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="66334f44-4100-4833-ac87-33bb2bfbe5f2"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print(device)
# + papermill={"duration": 0.055972, "end_time": "2020-09-25T19:49:17.832806", "exception": false, "start_time": "2020-09-25T19:49:17.776834", "status": "completed"} tags=[] id="bGMqbM8QZjfJ" executionInfo={"status": "ok", "timestamp": 1639318406545, "user_tz": -330, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}}
def evaluate(dataloader_val):
model.eval()
loss_val_total = 0
predictions, true_vals = [], []
for batch in tqdm(dataloader_val):
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
with torch.no_grad():
outputs = model(**inputs)
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item()
logits = logits.detach().cpu().numpy()
label_ids = inputs['labels'].cpu().numpy()
predictions.append(logits)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader_val)
predictions = np.concatenate(predictions, axis=0)
true_vals = np.concatenate(true_vals, axis=0)
return loss_val_avg, predictions, true_vals
# + papermill={"duration": 471.890013, "end_time": "2020-09-25T19:57:09.765836", "exception": false, "start_time": "2020-09-25T19:49:17.875823", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 771, "referenced_widgets": ["7ce2424285fd49528a03a3d57798d135", "acf1c8de292f46129f17c7752630cdb8", "b9f585d0824f406f923f136162825d6b", "a67b21e238c64a97969c1bdf20e8b909", "e20592af27084ac69157522422243721", "49497f81454946d681e0f571bf7b9f1e", "13959eb7f6d64578b10c0569c4b365c5", "d16612031986418680b0067678ca7e3f", "2b9877bc60674514af64691e5ddf50da", "<KEY>", "<KEY>", "159d164fce964ae491277ce021f1ae89", "<KEY>", "<KEY>", "<KEY>", "86111535c3c9494692298a45622e449f", "e0d181b00eb544d1a30a85b15a59cefc", "b92e2f0dcd704767892d350d1280324b", "c1cb70acf3ac4655ae08e71de9f86283", "<KEY>", "c4b83bfabbe34d46b2278e161893995e", "f8d6afa79d90413c9e772a7bd6917a57", "e70b9359704a44bdaea063dedb15954c", "<KEY>", "f54782fef82e4919922394c9559f2861", "<KEY>", "<KEY>", "27c23ec71c7e4c018be64045add9e0ad", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "494add1c2a164f539ad29d1f88ffbcea", "29d64f7657c74f1f9357ad35c2ac8a84", "<KEY>", "183c75d8cc894595ac2074c87e1a3d93", "463d5d82ec7541edb88dab76d0c82040", "45d6b5cea3c74a5aa166fe241272113e", "<KEY>", "<KEY>", "cba43c5f82314631a407b25de6ca4a6d", "d27c4028e6b447518b1d12e8d1cf3c52", "<KEY>", "18f1b99a28ae4be1b8d5545b2a73572c", "<KEY>", "f570a39698ca47249fd1da8239f544af", "<KEY>", "<KEY>", "<KEY>", "36785ba3513b4ca0ac9024ebe9d4a675", "1ec5dfa3c7a54b16806a4e63f1c1be25", "5e640703d1654be9a131c58097e7ca37", "<KEY>", "71d4bc75765a44b7bea22c4c60da1d4c", "e54dad0384b540afa4a878b21357f324", "9f727dadbc6c463e8da6076845626db5", "ffd2aff572ad417b835bf54a67b99134", "ae23a742165f45d5963ec21d3de166f3", "<KEY>", "72b92f768a474f2896e200376a549060", "3b7175a4117f4a8cb8a4af73c317ebd4", "14d9bcbc3cdf42ef9e7313d5460a380d", "774297d515554a40a8197576f9373d4b", "ebaca7aa76ca4635a6612708f654133e", "62fd34afecb2412ca4e29b31642f038f", "bea11d53093f4e39af0690698869ee60"]} id="M6Krjr6BZjfJ" executionInfo={"status": "error", "timestamp": 1639325194525, "user_tz": -330, "elapsed": 28487, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="c1db73e1-4433-4fef-8027-50d5a950a842"
for epoch in tqdm(range(1, epochs+1)):
model.train()
loss_train_total = 0
progress_bar = tqdm(dataloader_train,
desc='Epoch {:1d}'.format(epoch),
leave=False,
disable=False)
for batch in progress_bar:
model.zero_grad()
batch = tuple(b.to(device) for b in batch)
inputs = {
'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2]
}
outputs = model(**inputs)
loss = outputs[0]
loss_train_total +=loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item()/len(batch))})
#torch.save(model.state_dict(), f'Models/BERT_ft_Epoch{epoch}.model')
tqdm.write('\nEpoch {epoch}')
loss_train_avg = loss_train_total/len(dataloader_train)
tqdm.write(f'Training loss: {loss_train_avg}')
val_loss, predictions, true_vals = evaluate(dataloader_val)
val_f1 = f1_score_func(predictions, true_vals)
tqdm.write(f'Validation loss: {val_loss}')
tqdm.write(f'F1 Score (weighted): {val_f1}')
# + [markdown] papermill={"duration": 0.061737, "end_time": "2020-09-25T19:57:09.890836", "exception": false, "start_time": "2020-09-25T19:57:09.829099", "status": "completed"} tags=[] id="EQZjN_gcZjfJ"
# # 9. Evaluating our Model
# + papermill={"duration": 0.075568, "end_time": "2020-09-25T19:57:10.028406", "exception": false, "start_time": "2020-09-25T19:57:09.952838", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="rCS9Iy6EZjfJ" executionInfo={"status": "ok", "timestamp": 1639325203431, "user_tz": -330, "elapsed": 2360, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjEjwI6P0ZAQiMEo4guQss8a2zjdDORv9mymlbCeg=s64", "userId": "12532207834237888029"}} outputId="f657218c-8ff5-496f-cc7f-5e9a94e01392"
accuracy_per_class(predictions, true_vals)
# + id="7ovJ67f4eU8U"
| XLM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''.venv'': venv)'
# language: python
# name: python3
# ---
import pandas as pd
aa_properties = pd.read_csv("../../data/amino_acid_properties.csv")
aa_properties = aa_properties.rename(columns ={'hydropathy index (Kyte-Doolittle method)': 'hydropathy'})
#aa_properties = aa_properties[['1-letter code', 'pI','hydropathy']]
#aa_properties = aa_properties.drop(columns = aa_properties.columns[2:10])
aa_properties = aa_properties.set_index('1-letter code')
metrics = aa_properties.to_dict('dict')
aa_properties
protein_s = ''
with open ('gprot.fasta', 'r') as f:
next(f)
for line in f:
line = line.strip()
protein_s += line
protein_s
# +
import pandas as pd
import numpy as np
import plotly.graph_objects as go
from collections import deque
class Protein:
aa_properties = pd.read_csv("../../data/amino_acid_properties.csv")
aa_properties = aa_properties.rename(columns ={'hydropathy index (Kyte-Doolittle method)': 'hydropathy'})
aa_properties = aa_properties[['1-letter code', 'pI','hydropathy']]
aa_properties = aa_properties.set_index('1-letter code')
metrics = aa_properties.to_dict('dict')
def __init__(self, name, sequence):
self.sequence = sequence
self.name = name
def plot(self, metric ="hydropathy", window_size = 5):
sequence_l = list(self.sequence)
metric_values = []
for aa in list(sequence_l):
metric_values.append(Protein.metrics[metric][aa])
pos = list(np.arange(len(sequence_l)))
window = deque([], maxlen = window_size)
mean_values = []
for metric_value in metric_values:
window.append(metric_value)
mean_values.append(np.mean(window))
data = [
go.Bar(
x=pos,
y=mean_values,
)
]
fig = go.Figure(data=data)
fig.update_layout(template="plotly_dark", title="{} of protein {}".format(metric, self.name))
fig.show()
return fig
# -
protein = Protein('G', protein_s)
h_5 = protein.plot(window_size=5)
h_50 = protein.plot(window_size=100)
h_5.write_image("h_5.png")
# Hydrophilic and hydrophobic areas are visible (positive values: hydrophobic, negative values: hydrophilic).
| exercises/day4/homework_04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv('Fake_News_Dataset.csv')
df.head(20)
# Get the Independent Feature
x = df.drop('label' , axis = 1)
x.head(20)
# Get the Dependent Feature
y = df['label']
y.head(20)
# Total number of words
df.shape
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
df = df.dropna()
df
messages = df.copy()
messages.reset_index(inplace = True)
messages['title'][6]
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import re
ps = PorterStemmer()
corpus = []
for i in range(0, len(messages)):
review = re.sub('[^a-zA-Z]', ' ', messages['title'][i])
review = review.lower()
review = review.split()
review = [ps.stem(word) for word in review if not word in stopwords.words('english')]
review = ' '.join(review)
corpus.append(review)
corpus[3]
# +
#Applying CountVectorizer
#Creating the Bag of Words Model
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features=5000,ngram_range=(1,3))
X = cv.fit_transform(corpus).toarray()
# -
X.shape
y=messages['label']
print(y)
# +
## Divide the dataset into Train and Test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
# -
cv.get_feature_names()[:20]
cv.get_params()
count_df = pd.DataFrame(X_train, columns=cv.get_feature_names())
count_df.head()
import matplotlib.pyplot as plt
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
## This function prints and plots the confusion matrix.
## Normalization can be applied by setting `normalize=True`.
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# # MultinomialNB Algorithm
from sklearn.naive_bayes import MultinomialNB
classifier=MultinomialNB()
from sklearn import metrics
import numpy as np
import itertools
classifier.fit(X_train, y_train)
pred = classifier.predict(X_test)
score = metrics.accuracy_score(y_test, pred)
print("accuracy: %0.3f" % score)
cm = metrics.confusion_matrix(y_test, pred)
plot_confusion_matrix(cm, classes=['FAKE', 'REAL'])
classifier.fit(X_train, y_train)
pred = classifier.predict(X_test)
score = metrics.accuracy_score(y_test, pred)
score
y_train.shape
# # Passive Aggressive Classifier Algorithm
from sklearn.linear_model import PassiveAggressiveClassifier
linear_clf = PassiveAggressiveClassifier()
linear_clf.fit(X_train, y_train)
pred = linear_clf.predict(X_test)
score = metrics.accuracy_score(y_test, pred)
print("accuracy: %0.3f" % score)
cm = metrics.confusion_matrix(y_test, pred)
plot_confusion_matrix(cm, classes=['FAKE Data', 'REAL Data'])
# # Multinomial Classifier with Hyperparameter
classifier=MultinomialNB(alpha=0.1)
previous_score=0
for alpha in np.arange(0,1,0.1):
sub_classifier=MultinomialNB(alpha=alpha)
sub_classifier.fit(X_train,y_train)
y_pred=sub_classifier.predict(X_test)
score = metrics.accuracy_score(y_test, y_pred)
if score>previous_score:
classifier=sub_classifier
print("Alpha: {}, Score : {}".format(alpha,score))
## Get Features names
feature_names = cv.get_feature_names()
classifier.coef_[0]
### Most real
sorted(zip(classifier.coef_[0], feature_names), reverse=True)[:20]
### Most fake
sorted(zip(classifier.coef_[0], feature_names))[:5000]
| FakeNews_Classifier/Fake News Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Euclidean distance
#
# - 두 수치 변수 사이에 거리를 계산 하는 공식
# - 같은 dimension을 가진 data에 적용 가능
# - 일반적으로 많이 쓰이는 계산 방법
# - 두 수치의 차의 제곱을 하므로 거리를 나타냄.
# - 뒤에 나올 pearson’s correlation, spearman’s correlation, cosine similarity에서도 사용.
# 
# ## Pearson’s correlation algor
# - 두 변수 사이에 Linear Correlation relationship을 계량화한 수치 ([-1,1]값을 가짐.)
# - 두 변수의 공분산(Covariance)을 각각의 표준편차(std)의 곱으로 나누어 준 값
# - 두 변수가 Gaussian or Gaussian-like distribution를 가지고 있다고 가정
#
# 
# 
# ## Spearman’s correlation
# - 두 변수 사이에 non-Linear Correlation relationship을 계량화한 수치 ([-1,1]값)
# - 두 변수의 단조성을 평가
# - 두 변수가 normal distribution이 아니거나 순위 척도 분포일때 사용가능.
# 
| cs224w/datasimilarity1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
from generator.rnn import LSTMGenerator
from dataset.loader import load_author
treny = list(load_author('kochanowski', ['treny']).values())
lstm = LSTMGenerator()
# lstm.fit(treny)
print(lstm.generate("hue", model_name="trener.hdf5", data=treny))
| examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python3
# ---
# # **SpaceX Falcon 9 First Stage Landing Prediction**
# ## Data Visualization , EDA and Data Feature Engineering
#
#
#
# ## Objectives
#
# We perform exploratory Data Analysis and Feature Engineering using `Pandas` and `Matplotlib`
#
# * Exploratory Data Analysis
# * Preparing Data Feature Engineering
#
# ### Our Libraries and Auxiliary Functions
#
# pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
#NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Matplotlib is a plotting library for python and pyplot gives us a MatLab like plotting framework. We will use this in our plotter function to plot data.
import matplotlib.pyplot as plt
#Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics
import seaborn as sns
# ## Exploratory Data Analysis
#
# Viewing our dataset
#
df=pd.read_csv("dataset_part_2.csv")
df.head(5)
# First, let's try to see how the `FlightNumber` (indicating the continuous launch attempts.) and `Payload` variables would affect the launch outcome.
#
# We can plot out the <code>FlightNumber </code> vs. <code>PayloadMass </code>and overlay the outcome of the launch. We see that as the flight number increases, the first stage is more likely to land successfully. The payload mass is also important; it seems the more massive the payload, the less likely the first stage will return.
#
sns.catplot(y="PayloadMass", x="FlightNumber", hue="Class", data=df, aspect = 5)
plt.xlabel("Flight Number",fontsize=20)
plt.ylabel("Pay load Mass (kg)",fontsize=20)
plt.show()
# We see that different launch sites have different success rates. <code>CCAFS LC-40</code>, has a success rate of 60 %, while <code>KSC LC-39A</code> and <code>VAFB SLC 4E</code> has a success rate of 77%.
#
# ### Relationship between Flight Number and Launch Site
#
sns.catplot(y="FlightNumber", x="LaunchSite", hue="Class", data=df, aspect = 5)
plt.xlabel("Flight Site",fontsize=20)
plt.ylabel("Flight Number",fontsize=20)
plt.show()
# The more amount of flights at a launch site the greater the success rate at a launch site.
# ### The relationship between Payload and Launch Site
#
# We also want to observe if there is any relationship between launch sites and their payload mass.
#
#
sns.catplot(y="PayloadMass",x="LaunchSite",hue ="Class",data=df,aspect= 5)
plt.xlabel("Launch Site",fontsize=20)
plt.ylabel("Pay Load Mass (kg)",fontsize=20)
plt.show()
# The greater the payload mass for Launch Site CCAFS SLC 40 the higher the success rate for the Rocket.
#
# There is not quite a clear pattern to be found using this visualization to make a decision if the Launch Site is dependant on Pay Load Mass for a success launch.
# ### Relationship between success rate of each orbit type
# We want to visually check if there are any relationship between success rate and orbit type.
#
df.groupby(['Orbit']).mean()
sns.catplot(x="Orbit",y="Class", kind="bar",data=df)
plt.xlabel("Orbit",fontsize=20)
plt.ylabel("Mean",fontsize=20)
plt.show()
# Orbit GEO,HEO,SSO,ES-L1 has the best Success Rate
#
# ### Relationship between FlightNumber and Orbit type
#
# For each orbit, we want to see if there is any relationship between FlightNumber and Orbit type.
#
sns.scatterplot(x="Orbit",y="FlightNumber",hue="Class",data = df)
plt.xlabel("Orbit",fontsize=20)
plt.ylabel("Flight Number",fontsize=20)
plt.show()
# You should see that in the LEO orbit the Success appears related to the number of flights; on the other hand, there seems to be no relationship between flight number when in GTO orbit.
#
# ### Relationship between Payload and Orbit type
#
sns.scatterplot(x="Orbit",y="PayloadMass",hue="Class",data = df)
plt.xlabel("Orbit",fontsize=20)
plt.ylabel("PayloadMass",fontsize=20)
plt.show()
# You should observe that Heavy payloads have a negative influence on GTO orbits and positive on GTO and Polar LEO (ISS) orbits.
#
# ### Launch Success Yearly Trend
#
# +
year = pd.DatetimeIndex(df['Date']).year
year = np.array(list(year))
successratelist = []
successrate = 0.00
records = 1
data = 0
for x in df['Class']:
data = x + data
successrate = data/records
successratelist.append(successrate)
records= records +1
successratelist = np.array(successratelist)
d = {'successrate':successratelist,'year':year}
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.lineplot(data=d, x="year", y="successrate" )
plt.xlabel("Year",fontsize=20)
plt.title('Space X Rocket Success Rates')
plt.ylabel("Success Rate",fontsize=20)
plt.show()
# -
# you can observe that the success rate since 2013 kept increasing till 2020
#
# ## Features Engineering
#
features = df[['FlightNumber', 'PayloadMass', 'Orbit', 'LaunchSite', 'Flights', 'GridFins', 'Reused', 'Legs', 'LandingPad', 'Block', 'ReusedCount', 'Serial']]
features.head()
features_hot = df[['Orbit','LaunchSite','LandingPad','Serial']]
features_hot['Orbit'] = pd.get_dummies(df['Orbit'])
features_hot['LaunchSite'] = pd.get_dummies(df['LaunchSite'])
features_hot['LandingPad'] = pd.get_dummies(df['LandingPad'])
features_hot['Serial'] = pd.get_dummies(df['Serial'])
features_hot.head()
# +
features_hot.astype('float64')
features_hot
features_hot.to_csv('dataset_part_3.csv',index=False)
# -
# ### Author : <NAME>
| eda-dataviz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1_ModelTypes/W1D1_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text"
#
# # Neuromatch Academy: Week 1, Day 1, Tutorial 1
# # Model Types: "What" models
# __Content creators:__ <NAME>, <NAME>, <NAME>
#
# __Content reviewers:__ <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
#
# We would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here.
#
# + [markdown] colab_type="text"
# ___
# # Tutorial Objectives
# This is tutorial 1 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore 'What' models, used to describe the data. To understand what our data looks like, we will visualize it in different ways. Then we will compare it to simple mathematical models. Specifically, we will:
#
# - Load a dataset with spiking activity from hundreds of neurons and understand how it is organized
# - Make plots to visualize characteristics of the spiking activity across the population
# - Compute the distribution of "inter-spike intervals" (ISIs) for a single neuron
# - Consider several formal models of this distribution's shape and fit them to the data "by hand"
# + cellView="form" colab={} colab_type="code"
#@title Video 1: "What" Models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='KgqR_jbjMQg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text"
# # Setup
#
#
# + [markdown] colab_type="text"
# Python requires you to explictly "import" libraries before their functions are available to use. We will always specify our imports at the beginning of each notebook or script.
# + cellView="both" colab={} colab_type="code"
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] colab_type="text"
# Tutorial notebooks typically begin with several set-up steps that are hidden from view by default.
#
# **Important:** Even though the code is hidden, you still need to run it so that the rest of the notebook can work properly. Step through each cell, either by pressing the play button in the upper-left-hand corner or with a keyboard shortcut (`Cmd-Return` on a Mac, `Ctrl-Enter` otherwise). A number will appear inside the brackets (e.g. `[3]`) to tell you that the cell was executed and what order that happened in.
#
# If you are curious to see what is going on inside each cell, you can double click to expand. Once expanded, double-click the white space to the right of the editor to collapse again.
# + cellView="form" colab={} colab_type="code"
#@title Figure Settings
import ipywidgets as widgets #interactive display
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + cellView="form" colab={} colab_type="code"
# @title Helper functions
# @markdown Most of the tutorials make use of helper functions
# @markdown to simplify the code that you need to write. They are defined here.
# Please don't edit these, or worry about understanding them now!
def restrict_spike_times(spike_times, interval):
"""Given a spike_time dataset, restrict to spikes within given interval.
Args:
spike_times (sequence of np.ndarray): List or array of arrays,
each inner array has spike times for a single neuron.
interval (tuple): Min, max time values; keep min <= t < max.
Returns:
np.ndarray: like `spike_times`, but only within `interval`
"""
interval_spike_times = []
for spikes in spike_times:
interval_mask = (spikes >= interval[0]) & (spikes < interval[1])
interval_spike_times.append(spikes[interval_mask])
return np.array(interval_spike_times, object)
# + cellView="form" colab={} colab_type="code"
# @title Data retrieval
# @markdown This cell downloads the example dataset that we will use in this tutorial.
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Failed to download data')
else:
spike_times = np.load(io.BytesIO(r.content),
allow_pickle=True)['spike_times']
# + [markdown] colab_type="text"
# ---
#
# # Section 1: Exploring the Steinmetz dataset
#
# In this tutorial we will explore the structure of a neuroscience dataset.
#
# We consider a subset of data from a study of [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x). In this study, Neuropixels probes were implanted in the brains of mice. Electrical potentials were measured by hundreds of electrodes along the length of each probe. Each electrode's measurements captured local variations in the electric field due to nearby spiking neurons. A spike sorting algorithm was used to infer spike times and cluster spikes according to common origin: a single cluster of sorted spikes is causally attributed to a single neuron.
#
# In particular, a single recording session of spike times and neuron assignments was loaded and assigned to `spike_times` in the preceding setup.
#
# Typically a dataset comes with some information about its structure. However, this information may be incomplete. You might also apply some transformations or "pre-processing" to create a working representation of the data of interest, which might go partly undocumented depending on the circumstances. In any case it is important to be able to use the available tools to investigate unfamiliar aspects of a data structure.
#
# Let's see what our data looks like...
# + [markdown] colab_type="text"
# ## Section 1.1: Warming up with `spike_times`
# + [markdown] colab_type="text"
# What is the Python type of our variable?
# + colab={} colab_type="code"
type(spike_times)
# + [markdown] colab_type="text"
# You should see `numpy.ndarray`, which means that it's a normal NumPy array.
#
# If you see an error message, it probably means that you did not execute the set-up cells at the top of the notebook. So go ahead and make sure to do that.
#
# Once everything is running properly, we can ask the next question about the dataset: what's its shape?
# + colab={} colab_type="code"
spike_times.shape
# + [markdown] colab_type="text"
# There are 734 entries in one dimension, and no other dimensions. What is the Python type of the first entry, and what is *its* shape?
# + colab={} colab_type="code"
idx = 0
print(
type(spike_times[idx]),
spike_times[idx].shape,
sep="\n",
)
# + [markdown] colab_type="text"
# It's also a NumPy array with a 1D shape! Why didn't this show up as a second dimension in the shape of `spike_times`? That is, why not `spike_times.shape == (734, 826)`?
#
# To investigate, let's check another entry.
# + colab={} colab_type="code"
idx = 321
print(
type(spike_times[idx]),
spike_times[idx].shape,
sep="\n",
)
# + [markdown] colab_type="text"
# It's also a 1D NumPy array, but it has a different shape. Checking the NumPy types of the values in these arrays, and their first few elements, we see they are composed of floating point numbers (not another level of `np.ndarray`):
# + colab={} colab_type="code"
i_neurons = [0, 321]
i_print = slice(0, 5)
for i in i_neurons:
print("Neuron {}:".format(i),
spike_times[i].dtype,
spike_times[i][i_print],
"\n",
sep="\n")
# + [markdown] colab_type="text"
# Note that this time we've checked the NumPy `dtype` rather than the Python variable type. These two arrays contain floating point numbers ("floats") with 32 bits of precision.
#
# The basic picture is coming together:
# - `spike_times` is 1D, its entries are NumPy arrays, and its length is the number of neurons (734): by indexing it, we select a subset of neurons.
# - An array in `spike_times` is also 1D and corresponds to a single neuron; its entries are floating point numbers, and its length is the number of spikes attributed to that neuron. By indexing it, we select a subset of spike times for that neuron.
#
# Visually, you can think of the data structure as looking something like this:
#
# ```
# | . . . . . |
# | . . . . . . . . |
# | . . . |
# | . . . . . . . |
# ```
#
# Before moving on, we'll calculate and store the number of neurons in the dataset and the number of spikes per neuron:
# + colab={} colab_type="code"
n_neurons = len(spike_times)
total_spikes_per_neuron = [len(spike_times_i) for spike_times_i in spike_times]
print(f"Number of neurons: {n_neurons}")
print(f"Number of spikes for first five neurons: {total_spikes_per_neuron[:5]}")
# + cellView="form" colab={} colab_type="code"
#@title Video 2: Exploring the dataset
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='oHwYWUI_o1U', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text"
# ## Section 1.2: Getting warmer: counting and plotting total spike counts
#
# As we've seen, the number of spikes over the entire recording is variable between neurons. More generally, some neurons tend to spike more than others in a given period. Lets explore what the distribution of spiking looks like across all the neurons in the dataset.
# + [markdown] colab_type="text"
# Are most neurons "loud" or "quiet", compared to the average? To see, we'll define bins of constant width in terms of total spikes and count the neurons that fall in each bin. This is known as a "histogram".
#
# You can plot a histogram with the matplotlib function `plt.hist`. If you just need to compute it, you can use the numpy function `np.histogram` instead.
# + colab={} colab_type="code"
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons");
# + [markdown] colab_type="text"
# Let's see what percentage of neurons have a below-average spike count:
# + colab={} colab_type="code"
mean_spike_count = np.mean(total_spikes_per_neuron)
frac_below_mean = (total_spikes_per_neuron < mean_spike_count).mean()
print(f"{frac_below_mean:2.1%} of neurons are below the mean")
# + [markdown] colab_type="text"
# We can also see this by adding the average spike count to the histogram plot:
# + colab={} colab_type="code"
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons")
plt.axvline(mean_spike_count, color="orange", label="Mean neuron")
plt.legend()
# + [markdown] colab_type="text"
# This shows that the majority of neurons are relatively "quiet" compared to the mean, while a small number of neurons are exceptionally "loud": they must have spiked more often to reach a large count.
#
# ### Exercise 1: Comparing mean and median neurons
#
# If the mean neuron is more active than 68% of the population, what does that imply about the relationship between the mean neuron and the median neuron?
#
# *Exercise objective:* Reproduce the plot above, but add the median neuron.
#
# + colab={} colab_type="code"
median_spike_count = np.median(total_spikes_per_neuron)
frac_below_median = (total_spikes_per_neuron < median_spike_count).mean()
print(f"{frac_below_median:2.1%} of neurons are below the median")
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.axvline(x=median_spike_count, color="orange", label="Median neuron")
plt.axvline(x=mean_spike_count, color="magenta", label="Mean neuron")
plt.xlabel('Total spikes per neuron')
plt.ylabel('Number of neurons')
plt.legend();
# + colab={} colab_type="code"
median_spike_count = np.median(total_spikes_per_neuron)
with plt.xkcd():
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.axvline(median_spike_count, color="limegreen", label="Median neuron")
plt.axvline(mean_spike_count, color="orange", label="Mean neuron")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons")
plt.legend()
# + [markdown] colab_type="text"
#
# *Bonus:* The median is the 50th percentile. What about other percentiles? Can you show the interquartile range on the histogram?
# +
lq_spike_count = np.percentile(total_spikes_per_neuron, 25)
uq_spike_count = np.percentile(total_spikes_per_neuron, 75)
with plt.xkcd():
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.axvline(median_spike_count, color="limegreen", label="Median neuron")
plt.axvline(mean_spike_count, color="orange", label="Mean neuron")
plt.axvline(lq_spike_count, color="yellow", label="Lower quartile neuron")
plt.axvline(uq_spike_count, color="yellow", label="Upper quartile neuron")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons")
plt.legend()
# + [markdown] colab_type="text"
# ---
#
# # Section 2: Visualizing neuronal spiking activity
# + [markdown] colab_type="text"
# ## Section 2.1: Getting a subset of the data
#
# Now we'll visualize trains of spikes. Because the recordings are long, we will first define a short time interval and restrict the visualization to only the spikes in this interval. We defined a utility function, `restrict_spike_times`, to do this for you. If you call `help()` on the function, it will tell you a little bit about itself:
# + colab={} colab_type="code"
help(restrict_spike_times)
# + colab={} colab_type="code"
t_interval = (5, 15) # units are seconds after start of recording
interval_spike_times = restrict_spike_times(spike_times, t_interval)
# + [markdown] colab_type="text"
# Is this a representative interval? What fraction of the total spikes fall in this interval?
# + colab={} colab_type="code"
original_counts = sum([len(spikes) for spikes in spike_times])
interval_counts = sum([len(spikes) for spikes in interval_spike_times])
frac_interval_spikes = interval_counts / original_counts
print(f"{frac_interval_spikes:.2%} of the total spikes are in the interval")
# + [markdown] colab_type="text"
# How does this compare to the ratio between the interval duration and the experiment duration? (What fraction of the total time is in this interval?)
#
# We can approximate the experiment duration by taking the minimum and maximum spike time in the whole dataset. To do that, we "concatenate" all of the neurons into one array and then use `np.ptp` ("peak-to-peak") to get the difference between the maximum and minimum value:
# + colab={} colab_type="code"
spike_times_flat = np.concatenate(spike_times)
experiment_duration = np.ptp(spike_times_flat)
interval_duration = t_interval[1] - t_interval[0]
frac_interval_time = interval_duration / experiment_duration
print(f"{frac_interval_time:.2%} of the total time is in the interval")
# + [markdown] colab_type="text"
# These two values—the fraction of total spikes and the fraction of total time—are similar. This suggests the average spike rate of the neuronal population is not very different in this interval compared to the entire recording.
#
# ## Section 2.2: Plotting spike trains and rasters
#
# Now that we have a representative subset, we're ready to plot the spikes, using the matplotlib `plt.eventplot` function. Let's look at a single neuron first:
# + colab={} colab_type="code"
neuron_idx = 1
plt.eventplot(interval_spike_times[neuron_idx], color=".2")
plt.xlabel("Time (s)")
plt.yticks([]);
# + [markdown] colab_type="text"
# We can also plot multiple neurons. Here are three:
# + colab={} colab_type="code"
neuron_idx = [1, 11, 51]
plt.eventplot(interval_spike_times[neuron_idx], color=".2")
plt.xlabel("Time (s)")
plt.yticks([]);
# + [markdown] colab_type="text"
# This makes a "raster" plot, where the spikes from each neuron appear in a different row.
#
# Plotting a large number of neurons can give you a sense for the characteristics in the population. Let's show every 5th neuron that was recorded:
# + colab={} colab_type="code"
neuron_idx = np.arange(0, len(spike_times), 5)
plt.eventplot(interval_spike_times[neuron_idx], color=".2")
plt.xlabel("Time (s)")
plt.yticks([]);
# + [markdown] colab_type="text"
# *Question*: How does the information in this plot relate to the histogram of total spike counts that you saw above?
# -
# *Answer*: Each row of the raster represents a single neuron, which will be included as a single count, on the y axis, the number of spikes, seen as black lines in the raster, define where on the x axis bin the neuron count will be added. Neurons with more black marks in the raster plot, will be reflected in bars on the right hand side of the histogram.
# + cellView="form" colab={} colab_type="code"
#@title Video 3: Visualizing activity
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='QGA5FCW7kkA', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text"
# ---
#
# # Section 3: Inter-spike intervals and their distributions
# + [markdown] colab_type="text"
# Given the ordered arrays of spike times for each neuron in `spike_times`, which we've just visualized, what can we ask next?
#
# Scientific questions are informed by existing models. So, what knowledge do we already have that can inform questions about this data?
#
# We know that there are physical constraints on neuron spiking. Spiking costs energy, which the neuron's cellular machinery can only obtain at a finite rate. Therefore neurons should have a refractory period: they can only fire as quickly as their metabolic processes can support, and there is a minimum delay between consecutive spikes of the same neuron.
#
# More generally, we can ask "how long does a neuron wait to spike again?" or "what is the longest a neuron will wait?" Can we transform spike times into something else, to address questions like these more directly?
#
# We can consider the inter-spike times (or interspike intervals: ISIs). These are simply the time differences between consecutive spikes of the same neuron.
#
# ### Exercise 2: Plot the distribution of ISIs for a single neuron
#
# *Exercise objective:* make a histogram, like we did for spike counts, to show the distribution of ISIs for one of the neurons in the dataset.
#
# Do this in three steps:
#
# 1. Extract the spike times for one of the neurons
# 2. Compute the ISIs (the amount of time between spikes, or equivalently, the difference between adjacent spike times)
# 3. Plot a histogram with the array of individual ISIs
# + colab={} colab_type="code"
def compute_single_neuron_isis(spike_times, neuron_idx):
"""Compute a vector of ISIs for a single neuron given spike times.
Args:
spike_times (list of 1D arrays): Spike time dataset, with the first
dimension corresponding to different neurons.
neuron_idx (int): Index of the unit to compute ISIs for.
Returns:
isis (1D array): Duration of time between each spike from one neuron.
"""
# Extract the spike times for the specified neuron
single_neuron_spikes = spike_times[neuron_idx]
# Compute the ISIs for this set of spikes
isis = np.diff(single_neuron_spikes)
return isis
# Uncomment the following lines when you are ready to test your function
with plt.xkcd():
plt.hist(single_neuron_isis, bins=50, histtype="stepfilled")
plt.axvline(single_neuron_isis.mean(), color="orange", label="Mean ISI")
plt.xlabel("ISI duration (s)")
plt.ylabel("Number of spikes")
plt.legend()
# + colab={} colab_type="code"
# to_remove solution
def compute_single_neuron_isis(spike_times, neuron_idx):
"""Compute a vector of ISIs for a single neuron given spike times.
Args:
spike_times (list of 1D arrays): Spike time dataset, with the first
dimension corresponding to different neurons.
neuron_idx (int): Index of the unit to compute ISIs for.
Returns:
isis (1D array): Duration of time between each spike from one neuron.
"""
# Extract the spike times for the specified neuron
single_neuron_spikes = spike_times[neuron_idx]
# Compute the ISIs for this set of spikes
# Hint: the function np.diff computes discrete differences along an array
isis = np.diff(single_neuron_spikes)
return isis
single_neuron_isis = compute_single_neuron_isis(spike_times, neuron_idx=284)
with plt.xkcd():
plt.hist(single_neuron_isis, bins=100, histtype="stepfilled")
plt.axvline(single_neuron_isis.mean(), color="orange", label="Mean ISI")
plt.xlabel("ISI duration (s)")
plt.ylabel("Number of spikes")
plt.legend()
# + [markdown] colab_type="text"
# ---
#
# In general, the shorter ISIs are predominant, with counts decreasing rapidly (and smoothly, more or less) with increasing ISI. However, counts also rapidly decrease to zero with _decreasing_ ISI, below the maximum of the distribution (8-11 ms). The absence of these very low ISIs agrees with the refractory period hypothesis: the neuron cannot fire quickly enough to populate this region of the ISI distribution.
#
# Check the distributions of some other neurons. To resolve various features of the distributions, you might need to play with the value of `n_bins`. Using too few bins might smooth over interesting details, but if you use too many bins, the random variability will start to dominate.
#
# You might also want to restrict the range to see the shape of the distribution when focusing on relatively short or long ISIs. *Hint:* `plt.hist` takes a `range` argument
# + [markdown] colab_type="text"
# ---
#
# # Section 4: What is the functional form of an ISI distribution?
# + cellView="form" colab={} colab_type="code"
#@title Video 4: ISI distribution
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='DHhM80MOTe8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text"
# The ISI histograms seem to follow continuous, monotonically decreasing functions above their maxima. The function is clearly non-linear. Could it belong to a single family of functions?
#
# To motivate the idea of using a mathematical function to explain physiological phenomena, let's define a few different function forms that we might expect the relationship to follow: exponential, inverse, and linear.
# + colab={} colab_type="code"
def exponential(xs, scale, rate, x0):
"""A simple parametrized exponential function, applied element-wise.
Args:
xs (np.ndarray or float): Input(s) to the function.
scale (float): Linear scaling factor.
rate (float): Exponential growth (positive) or decay (negative) rate.
x0 (float): Horizontal offset.
"""
ys = scale * np.exp(rate * (xs - x0))
return ys
def inverse(xs, scale, x0):
"""A simple parametrized inverse function (`1/x`), applied element-wise.
Args:
xs (np.ndarray or float): Input(s) to the function.
scale (float): Linear scaling factor.
x0 (float): Horizontal offset.
"""
ys = scale / (xs - x0)
return ys
def linear(xs, slope, y0):
"""A simple linear function, applied element-wise.
Args:
xs (np.ndarray or float): Input(s) to the function.
slope (float): Slope of the line.
y0 (float): y-intercept of the line.
"""
ys = slope * xs + y0
return ys
# + [markdown] colab_type="text"
# ### Interactive Demo: ISI functions explorer
#
# Here is an interactive demo where you can vary the parameters of these functions and see how well the resulting outputs correspond to the data. Adjust the parameters by moving the sliders and see how close you can get the lines to follow the falling curve of the histogram. This will give you a taste of what you're trying to do when you *fit a model* to data.
#
# "Interactive demo" cells have hidden code that defines an interface where you can play with the parameters of some function using sliders. You don't need to worry about how the code works – but you do need to **run the cell** to enable the sliders.
#
# + cellView="form" code_folding=[] colab={} colab_type="code"
# @title
# @markdown Be sure to run this cell to enable the demo
# Don't worry about understanding this code! It's to setup an interactive plot.
single_neuron_idx = 283
single_neuron_spikes = spike_times[single_neuron_idx]
single_neuron_isis = np.diff(single_neuron_spikes)
counts, edges = np.histogram(single_neuron_isis,
bins=50,
range=(0, single_neuron_isis.max()))
functions = dict(
exponential=exponential,
inverse=inverse,
linear=linear,
)
colors = dict(
exponential="C1",
inverse="C2",
linear="C4",
)
@widgets.interact(
exp_scale=widgets.FloatSlider(1000, min=0, max=20000, step=250),
exp_rate=widgets.FloatSlider(-10, min=-200, max=50, step=1),
exp_x0=widgets.FloatSlider(0.1, min=-0.5, max=0.5, step=0.005),
inv_scale=widgets.FloatSlider(1000, min=0, max=3e2, step=10),
inv_x0=widgets.FloatSlider(0, min=-0.2, max=0.2, step=0.01),
lin_slope=widgets.FloatSlider(-1e5, min=-6e5, max=1e5, step=10000),
lin_y0=widgets.FloatSlider(10000, min=0, max=4e4, step=1000),
)
def fit_plot(
exp_scale=1000,
exp_rate=-10,
exp_x0=0.1,
inv_scale=1000,
inv_x0=0,
lin_slope=-1e5,
lin_y0=2000,
):
"""Helper function for plotting function fits with interactive sliders."""
func_params = dict(
exponential=(exp_scale, exp_rate, exp_x0),
inverse=(inv_scale, inv_x0),
linear=(lin_slope, lin_y0),
)
f, ax = plt.subplots()
ax.fill_between(edges[:-1], counts, step="post", alpha=.5)
xs = np.linspace(1e-10, edges.max())
for name, function in functions.items():
ys = function(xs, *func_params[name])
ax.plot(xs, ys, lw=3, color=colors[name], label=name)
ax.set(
xlim=(edges.min(), edges.max()),
ylim=(0, counts.max() * 1.1),
xlabel="ISI (s)",
ylabel="Number of spikes",
)
ax.legend()
# + cellView="form" colab={} colab_type="code"
#@title Video 5: Fitting models by hand
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='uW2HDk_4-wk', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text"
# # Summary
#
# In this tutorial, we loaded some neural data and poked at it to understand how the dataset is organized. Then we made some basic plots to visualize (1) the average level of activity across the population and (2) the distribution of ISIs for an individual neuron. In the very last bit, we started to think about using mathematical formalisms to understand or explain some physiological phenomenon. All of this only allowed us to understand "What" the data looks like.
#
# This is the first step towards developing models that can tell us something about the brain. That's what we'll focus on in the next two tutorials.
| tutorials/W1D1_ModelTypes/W1D1_Tutorial1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # we gonna learn
#
# Using Class Keyword
# Creating Attributes
# Creating Methods in class
# Knowing about the Inheritence Concept
# Knowing about the Polymorphism
# Knowing about the Special Methods for classes
myList = [1,2,3,4,5,6,7]
myList.append(1)
myList.count(1)
print(type(1))
print(type([]))
print(type(()))
print(type({}))
class NameOfClass():
pass
my_yash = NameOfClass()
type(my_yash)
class Vinay():
def __init__(self,gf1,gf2):
self.gf1 = gf1
self.gf2 = gf2
our_vinay = Vinay("harshita","divya")
type(our_vinay)
our_vinay.gf1
our_vinay.gf2
# +
class Circle():
pi = 3.14
def __init__(self,radius=1):
self.radius = radius
self.area = radius*radius*Circle.pi
def setRadius(self,newRadius):
self.radius = newRadius
self.area = newRadius*newRadius*self.pi
def getPeri(self):
return self.radius*self.pi*2
# -
c = Circle()
c.area
c.radius
c.setRadius(5)
c.getPeri()
# # Inheritance
# +
class Animal:
def __init__(self):
print('Animal Created')
def whoAmI(self):
print('I am Animal')
def drink(self):
print('Drinking Water')
class Yash(Animal):
def __init__(self):
Animal.__init__(self)
print("Yash Created")
def whoAmI(self):
print("I am Yaswanth 12'O clock")
def shout(self):
print("<NAME>")
# -
our_yash = Yash()
our_yash.whoAmI()
# # Polymorphism
# +
class Animal:
def __init__(self,name):
self.name = name
def speak(self):
raise NotImplementedError("Sub class must write the abstract class")
class Yash(Animal):
def speak(self):
return self.name+ ' says <NAME>'
class Akash(Animal):
def speak(self):
return self.name+ ' says Aishwarya'
our_yash = Yash('Yaswanth')
our_akash = Akash('Akash')
print(our_yash.speak())
print(our_akash.speak())
# -
| Object Oriented Programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
from ztlearn.utils import *
from ztlearn.dl.models import Sequential
from ztlearn.optimizers import register_opt
from ztlearn.datasets.digits import fetch_digits
from ztlearn.dl.layers import LSTM, Dense, Flatten
# +
data = fetch_digits(custom_path = os.getcwd() + '/..')
train_data, test_data, train_label, test_label = train_test_split(data.data,
data.target,
test_size = 0.3,
random_seed = 15)
# plot samples of training data
plot_img_samples(train_data, train_label)
# +
# optimizer definition
opt = register_opt(optimizer_name = 'adam', momentum = 0.01, learning_rate = 0.001)
# Model definition
model = Sequential()
model.add(LSTM(128, activation = 'tanh', input_shape = (8, 8)))
model.add(Flatten())
model.add(Dense(10, activation = 'softmax')) # 10 digits classes
model.compile(loss = 'categorical_crossentropy', optimizer = opt)
model.summary('digits lstm')
# -
model_epochs = 100
fit_stats = model.fit(train_data.reshape(-1, 8, 8),
one_hot(train_label),
batch_size = 128,
epochs = model_epochs,
validation_data = (test_data.reshape(-1, 8, 8), one_hot(test_label)),
shuffle_data = True)
predictions = unhot(model.predict(test_data.reshape(-1, 8, 8), True))
print_results(predictions, test_label)
plot_img_results(test_data, test_label, predictions)
model_name = model.model_name
plot_metric('loss', model_epochs, fit_stats['train_loss'], fit_stats['valid_loss'], model_name = model_name)
plot_metric('accuracy', model_epochs, fit_stats['train_acc'], fit_stats['valid_acc'], model_name = model_name)
| examples/notebooks/digits/digits_lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Notebook for Development
# +
import os
import sys
import re
import numpy as np
from pycorenlp import StanfordCoreNLP
import nltk
import subprocess
import pandas as pd
from lxml import etree
from jsonrpclib.jsonrpc import ServerProxy
nlp = StanfordCoreNLP('http://localhost:9000')
data_dir = '../data/'
tregex_dir = './stanford-tregex-2018-02-27/'
ctakes_folder = './ctakes/'
# can be extended to batch processing if needed (feed a list of filenames)
filenames = ['dev.txt']
#filenames = ['3.txt']
#filenames = ['test_ready.txt']
# +
neg_list = pd.read_csv(data_dir + 'multilingual_lexicon-en-de-fr-sv.csv', sep=',', header=0)[['ITEM', 'CATEGORY', 'EN (SV) ACTION']]
neg_list = neg_list[neg_list['CATEGORY'].isin(['definiteNegatedExistence', 'probableNegatedExistence', 'pseudoNegation'])]
neg_list['NEG'] = ''
neg_list['FIRST_TOKEN'] = ''
neg_list['FIRST_POS'] = ''
neg_list['LAST_TOKEN'] = ''
neg_list['LAST_POS'] = ''
for idx in neg_list.index:
if neg_list['CATEGORY'][idx] == 'definiteNegatedExistence' and neg_list['EN (SV) ACTION'][idx] == 'forward':
neg_list['NEG'][idx] = 'PREN'
if neg_list['CATEGORY'][idx] == 'definiteNegatedExistence' and neg_list['EN (SV) ACTION'][idx] == 'backward':
neg_list['NEG'][idx] = 'POST'
if neg_list['CATEGORY'][idx] == 'definiteNegatedExistence' and neg_list['EN (SV) ACTION'][idx] == 'bidirectional':
neg_list['NEG'][idx] = 'POST'
if neg_list['CATEGORY'][idx] == 'probableNegatedExistence' and neg_list['EN (SV) ACTION'][idx] == 'forward':
neg_list['NEG'][idx] = 'PREP'
if neg_list['CATEGORY'][idx] == 'probableNegatedExistence' and neg_list['EN (SV) ACTION'][idx] == 'backward':
neg_list['NEG'][idx] = 'POSP'
if neg_list['CATEGORY'][idx] == 'probableNegatedExistence' and neg_list['EN (SV) ACTION'][idx] == 'bidirectional':
neg_list['NEG'][idx] = 'POSP'
if neg_list['CATEGORY'][idx] == 'pseudoNegation':
neg_list['NEG'][idx] = 'PSEU'
neg_list['FIRST_TOKEN'][idx] = neg_list['ITEM'][idx].split()[0]
neg_list['FIRST_POS'][idx] = nltk.pos_tag(nltk.word_tokenize(neg_list['FIRST_TOKEN'][idx]))[0][1]
neg_list['LAST_TOKEN'][idx] = neg_list['ITEM'][idx].split()[len(neg_list['ITEM'][idx].split())-1]
neg_list['LAST_POS'][idx] = nltk.pos_tag(nltk.word_tokenize(neg_list['LAST_TOKEN'][idx]))[0][1]
neg = neg_list['ITEM'].values
neg_list.head()
neg_list.to_csv(data_dir + 'neg_list.txt', sep='\t', index=False, quoting=False)
neg_term = [' ' + item + ' ' for item in neg]
neg_term.extend(item + ' ' for item in neg)
# use the labeled list (annotated 'type')
neg_list = pd.read_csv(data_dir + 'neg_list_complete.txt', sep='\t', header=0)
neg = neg_list['ITEM'].values
neg_term = [' ' + item + ' ' for item in neg]
neg_term.extend(item + ' ' for item in neg)
# -
# ## Section and senetence tokenization
# +
section_names = ['Allergies', 'Chief Complaint', 'Major Surgical or Invasive Procedure', 'History of Present Illness',
'Past Medical History', 'Social History', 'Family History', 'Brief Hospital Course',
'Medications on Admission', 'Discharge Medications', 'Discharge Diagnosis', 'Discharge Condition',
'Discharge Instructions']
section_dict ={
'Allergies': ['allergy'],
'Brief Hospital Course': ['hospital', 'course'],
'Chief Complaint': ['chief', 'complaint'],
'Discharge Condition': ['discharge', 'condition'],
'Discharge Diagnosis': ['discharge', 'diagnosis'],
'Discharge Instructions': ['discharge', 'instruction'],
'Discharge Medications': ['discharge', 'medication'],
'Family History': ['family', 'history'],
'History of Present Illness': ['history', 'present', 'illness'],
'Major Surgical or Invasive Procedure': ['major',
'surgical',
'invasive',
'procedure'],
'Medications on Admission': ['medication', 'admission'],
'Past Medical History': ['medical', 'history'],
'Social History': ['social', 'history']}
other_section_names = ['Followup Instructions', 'Physical Exam', 'Pertinent Results', 'Facility', 'Discharge Disposition']
other_section_dict = {
'Discharge Disposition': ['discharge', 'disposition'],
'Facility': ['facility'],
'Followup Instructions': ['followup', 'instruction'],
'Pertinent Results': ['pertinent', 'result'],
'Physical Exam': ['physical', 'exam']}
all_section_dict = {}
all_section_dict.update(section_dict)
all_section_dict.update(other_section_dict)
section_names_list = list(section_dict.keys())
section_to_parse = ['History of Present Illness', 'Brief Hospital Course', 'Discharge Instructions']
section_not_to_parse = [item for item in section_names_list if item not in section_to_parse] + ['None']
hard_section_list = ['History of Present Illness', 'Past Medical History', 'Brief Hospital Course', 'Discharge Diagnosis', 'Discharge Instructions']
easy_section_list = [item for item in section_names_list if item not in hard_section_list]
# -
def match_section_name(name, section_dict, nlp_parser):
output = nlp_parser.annotate(name.lower(), properties={
'annotators': 'lemma',
'outputFormat': 'json',
'threads': '4',
'tokenize.options': 'normalizeParentheses=false, normalizeOtherBrackets=false'
})
try:
name_lemma = set([[str(token['lemma']) for token in sent['tokens']] for sent in output['sentences']][0])
except:
return 'None'
else:
for section_name, section_name_lemma in section_dict.items():
if all([item in name_lemma for item in section_name_lemma]):
return section_name
return 'None'
for idx in range(1):
sections = {}
sections['None'] = []
with open(os.path.join(data_dir, filenames[idx]), 'r') as f:
for _ in range(3): next(f)
lines_buffer = []
previous_section_name = 'None'
for line in f:
line = line.strip()
if line:
if line.lower() == 'attending:':
continue
lines_buffer.append(line)
else:
if lines_buffer:
lines_buffer_head = lines_buffer[0]
if ':' in lines_buffer_head:
section_name = lines_buffer_head.split(':')[0]
matched_section_name = match_section_name(section_name, all_section_dict, nlp)
if matched_section_name != 'None':
previous_section_name = matched_section_name
if len(lines_buffer_head.split(':')[1:]) > 1:
sections[matched_section_name] = [' '.join(lines_buffer_head.split(':')[1:])] + lines_buffer[1:]
else:
sections[matched_section_name] = lines_buffer[1:]
lines_buffer = []
continue
sections[previous_section_name] = sections.get(previous_section_name, None) + lines_buffer
lines_buffer = []
# +
for section_name in section_to_parse:
if section_name in sections:
text = ' '.join(sections[section_name])
output = nlp.annotate(text, properties={
'annotators': 'ssplit',
'outputFormat': 'json',
'threads': '4',
'tokenize.options': 'normalizeParentheses=false, normalizeOtherBrackets=false'
})
try:
sents = [[str(token['word']) for token in sent['tokens']] for sent in output['sentences']]
except Exception as e:
pass
else:
sections[section_name] = [' '.join(sent) for sent in sents if sent != ['.']]
for section_name in section_not_to_parse:
if section_name in sections:
new_section_content = []
for text in sections[section_name]:
output = nlp.annotate(text, properties={
'annotators': 'ssplit',
'outputFormat': 'json',
'threads': '4',
'tokenize.options': 'normalizeParentheses=false, normalizeOtherBrackets=false'
})
try:
sents = [[str(token['word']) for token in sent['tokens']] for sent in output['sentences']]
except Exception as e:
pass
else:
new_section_content.append(' '.join([' '.join(sent) for sent in sents if sent != ['.']]))
sections[section_name] = new_section_content
# -
with open(data_dir + 'tmp', 'w') as f:
for section_name in hard_section_list:
# add section head tag
f.write('\n\n\n\n[SECTION-{}-START]'.format(section_name))
if section_name in sections:
for item in sections[section_name]:
# tag negated or affirmed based on string matching --- negation term list
# add one space to prevent loss of 'no ', 'not ', ... etc.
if any(substring in ' ' + item for substring in neg_term):
f.write('\n\n\n\n' + item + '\t [NEGATED]')
else:
f.write('\n\n\n\n' + item + '\t [AFFIRMED]')
# add section end tag
f.write('\n\n\n\n[SECTION-{}-END]'.format(section_name)) # this file for concept extraction and sentence parsing
# ## Concept extraction
# +
def get_cui_spans(xml_filename):
tree = etree.parse(xml_filename)
textsems = tree.xpath('*[@_ref_ontologyConceptArr]')
span = lambda e: (int(e.get('begin')), int(e.get('end')))
ref_to_span = {e.get('_ref_ontologyConceptArr'): span(e) for e in textsems}
fsarrays = tree.xpath('uima.cas.FSArray')
id_to_ref = {e.text: fs.get('_id') for fs in fsarrays for e in fs}
umlsconcepts = tree.xpath('org.apache.ctakes.typesystem.type.refsem.UmlsConcept')
cui_ids = [(c.get('cui'), c.get('tui'), c.get('preferredText'), c.get('_id')) for c in umlsconcepts]
id_to_span = lambda _id: ref_to_span[id_to_ref[_id]]
cui_spans = [(cui, tui, pt, id_to_span(_id)) for cui, tui, pt, _id in cui_ids]
seen = set()
seen_add = seen.add
return [cs for cs in cui_spans if not (cs in seen or seen_add(cs))]
def extract_cuis(xml_filename):
cui_spans = get_cui_spans(xml_filename)
cui_spans.sort(key=lambda cs: cs[3])
row_id = os.path.basename(xml_filename).split('.')[0]
txt = etree.parse(xml_filename).xpath('uima.cas.Sofa')[0].get('sofaString')
return [(row_id, str(cs[3][0]), str(cs[3][1]), cs[0], cs[1], txt[(cs[3][0]):(cs[3][1])], cs[2]) for cs in cui_spans]
# keep: 047, 046, 033, 184, 061, 048, 131
# discard: 029, 034, 197, 121, 023, 059, 060, 195, 109, 022, 122,
d = {
"ddx": ["T047", "T191"], # disease/disorder/syndrome
"ssx": ["T033", "T040", "T046", "T048", "T049", "T184"], # symptoms/signs
"med": ["T116", "T123", "T126", "T131"], # medications
"dxp": [], # diagnostic proc
"txp": ["T061"], # therapeutic proc
"lab": [], # labs
"ana": ["T017", "T024", "T025"], # anatomy
}
tui_list = []
for k, v in d.items():
tui_list.extend(v)
# +
# def execute(command):
# process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# while True:
# nextline = process.stdout.readline()
# if nextline == '' and process.poll() is not None:
# break
# sys.stdout.write(nextline)
# sys.stdout.flush()
# output = process.communicate()[0]
# exitCode = process.returncode
# if (exitCode == 0):
# return output
# else:
# raise ProcessException(command, exitCode, output)
os.system('find . -name ".DS_Store" -type f -delete -print; ')
os.system('cp ' + data_dir + 'tmp ' + ctakes_folder + 'note_input/')
os.system('sh ' + ctakes_folder + 'bin/pipeline.sh')
#(output, err) = p.communicate()
os.system('rm ' + ctakes_folder + 'note_input/tmp')
os.system('mv ' + ctakes_folder + 'note_output/tmp.xml '+ data_dir)
# +
# #subprocess.check_output(['bash', '-c', 'find . -name "' + data_dir + 'tmp" -exec cp {} ' + ctakes_folder + 'note_input/ \;'])
# subprocess.Popen('find . -name ".DS_Store" -type f -delete -print', stdout=subprocess.PIPE, shell=True)
# subprocess.Popen('cp ' + data_dir + 'tmp ' + ctakes_folder + 'note_input/', stdout=subprocess.PIPE, shell=True)
# p = subprocess.Popen("sh " + ctakes_folder + "bin/pipeline.sh", stdout=subprocess.PIPE, shell=True)
# (output, err) = p.communicate()
# subprocess.check_output(['bash', '-c', 'find . -name "' + ctakes_folder + 'note_output/tmp.xml" -exec cp {} ../data/ \;'])
# subprocess.Popen('rm ' + ctakes_folder + 'note_input/tmp', stdout=subprocess.PIPE, shell=True)
# subprocess.Popen('mv ' + ctakes_folder + 'note_output/tmp.xml '+ data_dir, stdout=subprocess.PIPE, shell=True)
# +
#os.system('rm tmp')
d = [e for e in extract_cuis(data_dir + 'tmp.xml')]
df = pd.DataFrame(d, columns=['fname', 'start', 'end', 'cui', 'tui', 'original', 'preferred'])
df = df[df['tui'].isin(tui_list)]
with open(data_dir + 'tmp', 'r') as f:
doc = f.read()
sec_dict = {}
for sec_head in hard_section_list:
sec_dict[sec_head] = (doc.index('[SECTION-' + sec_head + '-START]') + len('[SECTION-' + sec_head + '-START]'), \
doc.index('[SECTION-' + sec_head + '-END]'))
sec_dict
df['section'] = ''
for idx in df.index:
for k, v in sec_dict.iteritems():
if int(df['start'][idx]) > v[0] and int(df['end'][idx]) < v[1]:
df['section'][idx] = str(k)
# +
# d = [e for e in extract_cuis(data_dir + 'tmp.xml')]
# df = pd.DataFrame(d, columns=['fname', 'start', 'end', 'cui', 'tui', 'original', 'preferred'])
# with open(data_dir + 'tmp', 'r') as f:
# doc = f.read()
# sec_dict = {}
# for sec_head in hard_section_list:
# sec_dict[sec_head] = (doc.index('[SECTION-' + sec_head + '-START]') + len('[SECTION-' + sec_head + '-START]'), \
# doc.index('[SECTION-' + sec_head + '-END]'))
# sec_dict
# df['section'] = ''
# for idx in df.index:
# for k, v in sec_dict.iteritems():
# if int(df['start'][idx]) > v[0] and int(df['end'][idx]) < v[1]:
# df['section'][idx] = str(k)
# df[df.section != ""].shape
# +
s_neg_start = [s.start() for s in re.finditer('\\n\\n\\n\\n.*\\t \[NEGATED\]', doc)]
s_neg_end = [s.start() for s in re.finditer('\[NEGATED\]', doc)]
s_neg = zip(s_neg_start, s_neg_end) # range of negation in sentence level
neg_range_list = [range(r[0], r[1]) for r in s_neg]
#neg_range_list = [y for x in neg_range_list for y in x]
df['negation'] = 0
df['sent_id'] = 0
df['sent_loc'] = 0
for idx in df.index:
for i, nl in enumerate(neg_range_list):
if int(df['start'][idx]) in nl:
df['negation'][idx] = 0
df['sent_id'][idx] = i + 1 # sent_id from 1
df['sent_loc'][idx] = int(df['start'][idx]) - nl[0] + 1 # sent_loc also start from 1
# -
df1 = df[df.sent_id != 0]
df0 = df[df.sent_id == 0]
# ## Syntactic parsing
# +
class OpenNLP:
def __init__(self, host='localhost', port=8080):
uri = "http://%s:%d" % (host, port)
self.server = ServerProxy(uri)
def parse(self, text):
return self.server.parse(text)
nlp = OpenNLP()
# preparing sentence for parsing
l = []
sl = []
with open(data_dir + 'tmp') as fr:
for sent in fr:
if sent.endswith('[NEGATED]\n') or sent == '\n':
l.append(sent)
if sent.endswith('[NEGATED]\n'):
sl.append(sent)
# # opennlp parsing
# print '\n--- parse full sentence ---\n'
# tree_list = []
# with open(data_dir + 'tmp_tree', 'w') as fw:
# for i, s in enumerate(l):
# t = (nlp.parse(s.replace('[NEGATED]', '')))
# if t != '':
# # print s
# # print i, t
# fw.write(t + '\n')
# tree_list.append(t)
# print len(sl)
# print len(tree_list)
# remove before/after words!
# neg_front = [i + ' ' for i in neg_list[neg_list['EN (SV) ACTION'] == 'forward']['ITEM'].tolist()]
# neg_back = [' ' + i for i in neg_list[neg_list['EN (SV) ACTION'] == 'backward']['ITEM'].tolist()]
ll = []
for ss in l:
s = ''
flag = ''
for nw in sorted(neg_list['ITEM'].tolist(), key=len, reverse=True):
if nw in neg_list[neg_list['EN (SV) ACTION'] == 'forward']['ITEM'].tolist():
try:
s = ss[ss.index(nw):]
flag = 'f'
break
except:
continue
else:
try:
s = ss[:(ss.index(nw)+len(nw))]
flag = 'b'
break
except:
continue
ll.append(s)
tree_list = []
while len(sl) != len(tree_list): # run until opennlp can parse with correct number of sentences. bug???
# opennlp parsing the neg tree
print('\n--- parse negated part of the sentence ---\n')
tree_list = []
with open(data_dir + 'tmp_neg_tree', 'w') as fw:
for i, s in enumerate(ll):
t = (nlp.parse(s.replace('[NEGATED]', '')))
if t != '':
# print s
# print i, t
fw.write(t + '\n')
tree_list.append(t)
print len(sl)
print len(tree_list)
# +
# using stanford corenlp parsing too slow
import requests
def extract_subtree(text, tregex):
r = requests.post(url="http://localhost:9000/tregex",
data=text,
params={"pattern": tregex})
js = r.json()
if js['sentences'][0] and '0' in js['sentences'][0] and 'namedNodes' in js['sentences'][0]['0']:
return js['sentences'][0]['0']['namedNodes']
return ''
def extract_subtree_treefile(f, tregex):
t = subprocess.Popen(tregex_dir + 'tregex.sh ' + tregex + ' ' + f , stdout=subprocess.PIPE, shell=True)
p = subprocess.Popen(tregex_dir + 'tregex.sh ' + tregex + ' ' + f + ' -t', stdout=subprocess.PIPE, shell=True)
(tree, err) = t.communicate()
(output, err) = p.communicate()
print(tree)
print(output)
return output
def tregex_tsurgeon(f, pos):
cmd = trts[pos][0] + '\n\n' + trts[pos][1].replace(',', '\n')
with open('./stanford-tregex-2018-02-27/ts', 'w') as fw:
fw.write(cmd)
t = subprocess.Popen('cd ' + tregex_dir + '; ./tsurgeon.sh -treeFile ../' + f + ' ts; cd ..', stdout=subprocess.PIPE, shell=True)
p = subprocess.Popen('cd ' + tregex_dir + '; ./tsurgeon.sh -treeFile ../' + f + ' ts -s; cd ..', stdout=subprocess.PIPE, shell=True)
(tree, err) = t.communicate()
(output, err) = p.communicate()
print('constituency tree: ' + output.replace('\n', ''))
ts_out = re.sub('\([A-Z]*\$? |\(-[A-Z]+- |\)|\)|\(, |\(. |\n', '', output)
ts_out = re.sub('-LRB-', '(', ts_out)
ts_out = re.sub('-RRB-', ')', ts_out)
return ts_out, tree
# +
trts = {}
# no "jvd|murmurs|deficits" not work, pleural -> vbz?
# trts['NP'] = ('NP=target << DT=neg <<, /no|without/ !> NP >> TOP=t >> S=s', \
# 'excise s target,delete neg')
# if np with top node=S???
# trts['NP'] = ('NP=target << DT=neg <<, /no|without/ !> NP >> TOP=t >> S=s', \
# 'excise s target,delete neg')
trts['NP'] = ('NP=target << DT=neg <<, /no|without/ !> NP >> TOP=t', \
'delete neg')
# if np with top node=NP
trts['NP-nS'] = ('NP=target <<, /DT|NN|RB/=neg <<, /no|without/ !> NP >> TOP=t', \
'delete neg')
# denies -> mis pos to nns
trts['NP-denies'] = ('NP=target <<, /denies|deny|denied/=neg >> TOP=t', \
'delete neg')
# vp only
trts['VP-A'] = ('VP=target << /VBZ|VBD|VB/=neg >> TOP=t', \
'delete neg')
trts['VP-CC'] = ('VP=target <<, /VBZ|VBD|VB/=neg < CC >> TOP=t', \
'delete neg')
# vp only, 'resolved', add that neg1 part to prevent jvd -> VP, rashes -> VP error pos tagging
# trts['VP-P'] = ('NP=target <<, DT=neg1 <<, /no|negative|not/ $ VP=neg2 >> TOP=t >> S=s', \
# 'delete neg1')
trts['VP-P'] = ('VP=vp <<- /free|negative|absent|ruled|out|doubtful|unlikely|excluded|resolved|given/=neg $ NP=head >> TOP=t >> S=s', \
'excise s head')
# this is post, ... is negative
# trts['ADJP-P'] = ('VP=vp < ADJP <<- /negative/=neg $ NP=target >> TOP=t >> S=s', \
# 'delete vp,excise s target')
trts['ADJP-P'] = ('VP=vp <<- /free|negative|absent|ruled|out|doubtful|unlikely|excluded|resolved|given/=neg $ NP=head >> TOP=t >> S=s', \
'excise s head')
# this is ant, negative for ...
# trts['ADJP-A'] = ('PP=head $ JJ=neg < NP=target >> TOP=t > ADJP=s', \
# 'delete neg')
trts['ADJP-A'] = ('PP=head $ /JJ|ADJP|NP/=neg <- NP=target >> TOP=t >> /S|NP|ADJP/=s', \
'excise s target')
# not
# trts['ADVP-P'] = ('VP=target <<, /VB*|MD/ $ RB=neg >> TOP=t >> S=s', \
# 'excise s target')
trts['ADVP-P'] = ('VP=head $ RB=neg <<, /VB*|MD/=be >> TOP=t >> S=s', \
'delete head,delete neg')
# trts['ADVP-A'] = ('VP=target <<, /VB*|MD/ $ RB=neg >> TOP=t >> S=s', \
# 'excise s target')
trts['ADVP-A'] = ('VP=head $ RB=neg <<, /VB*|MD/ >> TOP=t >> S=s', \
'excise s head')
trts['ADVP-A2'] = ('VP=head << RB=neg <<, /VB*|MD/ << /ADJP|VP/=target >> TOP=t >> S=s', \
'excise s target')
# remove sbar
trts['ADVP-sbar'] = ('PP=head <<, /of|without/=neg > NP $ NP < NP=target >> TOP=t >> NP=st << SBAR=sbar', \
'excise st target,delete sbar')
trts['ADVP-advp'] = ('ADVP=advp', \
'delete advp')
trts['forced-sbar'] = ('SBAR=sbar', \
'delete sbar')
# remove RB
trts['ADVP-RB'] = ('TOP=target <<, RB=neg', \
'delete neg')
# sob become this, so need to be after np and vp
# trts['PP'] = ('PP=head <<, /of|without/=neg > NP $ NP < NP=target >> TOP=t >> NP=s', \
# 'excise s target')
trts['PP'] = ('PP=head <<, IN=neg1 < NP=target >> TOP=t >> /S|NP|ADJP/=s $ /JJ|NP/=neg2', \
'excise s target')
trts['PP-2'] = ('PP=head << IN=neg <<, /of|without/ >> TOP=t', \
'delete neg')
trts['NP-CC'] = ('S=s < NP =head<< PP=target << DT=neg <<, /no|without/ < CC=but << but < S=rm < /\.|\,/=punct << SBAR=sbar !> NP > TOP=t',
'delete neg,delete sbar,delete punct,delete but,delete rm')
trts['NP-although'] = ('S=s < NP =head<< PP=target << DT=neg <<, /no|without/ << /although|but/ < /\.|\,/=punct << SBAR=sbar !> NP > TOP=t',
'delete neg,delete sbar,delete punct')
# -
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
RM_POS = ['NN', 'NNS', 'RB', 'NP', 'ADVP', 'IN']
RM_CP = ['however', 'although', 'but']
# +
from difflib import SequenceMatcher
for i, t in enumerate(tree_list):
print('sent: ' + str(i))
print('original: ' + sl[i])
# get negated part of the sentence
with open(data_dir + 'ntree_tmp', 'w') as fw:
fw.write(t)
s = re.sub('\([A-Z]*\$? |\(-[A-Z]+- |\)|\)|\(, |\(. ', '', t)
print('neg part: ' + s)
# find what neg term is matched and use its neg type
try:
m = ''
for neg in [x for x in sorted(neg_list['ITEM'].tolist(), key=len, reverse=True)]:
#for neg in ['negative for']:
match = SequenceMatcher(None, s, neg).find_longest_match(0, len(s), 0, len(neg))
matched_string = s[match.a: match.a + match.size]
try: # if next char might be different, means partial match
if s[match.a + match.size + 1] == neg[match.b + match.size + 1] and \
s[match.a + match.size + 2] == neg[match.b + match.size + 2]:
if (len(matched_string) > len(m)) and \
((matched_string[0] == s[0] and matched_string[1] == s[1]) or \
(matched_string[len(matched_string)-1] == s[len(s)-1] and matched_string[len(matched_string)-2] == s[len(s)-2])): # either match from the beginning or laast
m = matched_string
matched_neg_item = neg[match.b: match.b + match.size]
if matched_neg_item[len(matched_neg_item)-1] == ' ':
matched_neg_item = matched_neg_item[0:len(matched_neg_item)-1]
else:
continue
except: # if no next char, means full match
try:
if (len(matched_string) > len(m)) and \
((matched_string[0] == s[0] and matched_string[1] == s[1]) or \
(matched_string[len(matched_string)-1] == s[len(s)-1] and matched_string[len(matched_string)-2] == s[len(s)-2])): # either match from the beginning or laast
m = matched_string
matched_neg_item = neg[match.b: match.b + match.size]
if matched_neg_item[len(matched_neg_item)-1] == ' ':
matched_neg_item = matched_neg_item[0:len(matched_neg_item)-1]
except: # match only one char!? rare case
if (len(matched_string) > len(m)) and \
(matched_string[0] == s[0]): # either match from the beginning or laast
m = matched_string
matched_neg_item = neg[match.b: match.b + match.size]
if matched_neg_item[len(matched_neg_item)-1] == ' ':
matched_neg_item = matched_neg_item[0:len(matched_neg_item)-1]
print('negated term: ' + matched_neg_item)
neg_type = neg_list[neg_list.ITEM == matched_neg_item]['TYPE'].values[0]
print('--- tregex/tsurgeon with negated type: ' + neg_type)
# run tregex/tsurgeon based on the selected neg type
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', neg_type)
# deal with corner cases
if neg_type == 'NP' and ('that' in ts_out):
print('--- NP with that')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'NP-denies')
if neg_type == 'NP' and s == ts_out:
print('--- NP without S node')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'NP-nS')
if neg_type == 'PP' and sum([item in neg_list['ITEM'].tolist() for item in ts_out.split()]) > 0:
print('--- NP without S node')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'NP-nS')
if neg_type == 'VP-A' and s == ts_out:
print('--- VP-A remove denies')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'NP-denies')
if neg_type == 'ADVP-A' and s == ts_out:
print('--- ADVP-A type 2')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'ADVP-A2')
if neg_type == 'ADVP-A' and s == ts_out:
print('--- ADVP-A remove SBAR')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'ADVP-sbar')
if neg_type == 'ADVP-A' and s == ts_out: # no longer
print('--- ADVP-A remove ADVP')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'ADVP-advp')
if neg_type == 'ADVP-A' and s == ts_out:
print('--- ADVP-A remove RB')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'ADVP-RB')
if 'SBAR' in tree:
print('--- forced remove SBAR')
ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'forced-sbar')
# if sum([item in neg_list['ITEM'].tolist() for item in ts_out.split()]) > 0:
# print('--- remove neg terms if exists')
# ts_out = ' '.join(ts_out.split()[1:])
if sum([item in RM_POS for item in ts_out.split()]) > 0:
print('--- remove POS')
ts_out = ' '.join(ts_out.split()[1:])
if sum([item in RM_CP for item in ts_out.split()]) > 0:
print('--- remove CP')
for cp in RM_CP:
try:
cp_loc = ts_out.split().index(cp)
except:
continue
ts_out = ' '.join(ts_out.split()[:cp_loc])
if ts_out.split()[0] in neg_list['ITEM'].tolist() + stopwords:
print('--- remove first token f if f in negated list or stopword list')
ts_out = ' '.join(ts_out.split()[1:])
# if neg_type == 'VP-A' and len(ts_out) < 2:
# print('--- VP-A CC')
# ts_out, tree = tregex_tsurgeon(data_dir + 'ntree_tmp', 'VP-CC')
print('>> ' + ts_out + '\n')
try:
neg_range = (sl[i].index(ts_out) + 1, sl[i].index(ts_out) + len(ts_out)) # negated place
except:
neg_range = (0, len(sl))
print(neg_range)
for idx in df1.index:
if df1['sent_id'][idx] == i+1 and df1['sent_loc'][idx] in range(neg_range[0], neg_range[1]+1):
df1['negation'][idx] = 1
except: # need to debug why very few cases don't work
continue
# -
# preserve the longest strings/concepts
df_s = df1
df_s['start'] = df_s['start'].astype(int)
df_s['len'] = df_s['original'].str.len()
df_s = df_s.sort_values('len', ascending=False)
df_s = df_s.drop_duplicates(['sent_id', 'start'], keep='first')
df_s = df_s.drop_duplicates(['sent_id', 'end'], keep='first')
df_s = df_s.sort_values('start', ascending=True)
df_s
# +
pd.set_option('display.max_rows', None)
df_ss = df_s[(df_s.sent_id != 0) & (df_s.section != '')]
def print_out_result(df):
for s in set(df['section'].values):
if s != '':
subset = df[df['section'] == s][['preferred', 'negation']]
subset['preferred'] = np.where(subset['negation'] == 1, subset['preferred'] + '(-)', subset['preferred'] + '(+)')
print '--- ' + s + ' ---\n' + ', '.join(subset['preferred'])
print_out_result(df_ss)
# -
| src/nlp_dev.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
#
#
# # Breve introducción al análisis de algoritmos #
# <div style="text-align: right">
# Autores: <NAME>, <NAME>, <NAME>, <NAME>
# </div>
#
# - _Nota 1: Este notebook es un guión, por favor revise el material en el aula virtual_
# - _Nota 2: Realice los ejercicios propuestos al final de este notebook, esos no estan en el aula virtual_
#
# <hr/>
# Nos sirve para el costo o dificultad de un problema o algoritmo para con una entrada dada. También se le suele llamar *complejidad computacional*.
#
# Queremos medir la complejidad computacional para:
#
# - **Conocer** si es posible mejorar un algoritmo para un problema dado
# - **Decidir** que algoritmo nos conviene para una *clase* de entrada
#
# En general, lo que se pretende es **contar** los recursos necesarios para resolver un problema o para correr un algoritmo.
# Para esto debemos aislar la operación o recurso básico a contar ya que suele ser MUY complicado hacerlo para todos los recursos involucrados.
# A esto se le suele llamar **modelo de computación**.
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Tipos de costo
# ## Sobre a qué se aplica
# - Costo de un problema
# - Costo de un algoritmo
# - Costo dependiente de los datos:
# - una clase bien caracterizada de instancias
# - una instancia particular
#
# ## Sobre cómo se calcula
# - Análisis de *mejor* y *peor* caso
# - Análisis promedio
# - Análisis amortizado
# + [markdown] slideshow={"slide_type": "slide"}
# ## Costo computacional ##
#
# - *Enfoque analítico*: Consiste en crear un modelo analítico para *predecir cuanto nos costaría* aplicar un algoritmo a una entrada dada, sin tener que ejecutarlo. Ej.
#
# - *Enfoque experimental*: Mide directamente el desempeño de un algoritmo utilizando diferentes entradas. Ejemplo:
# - Medir el tiempo real que se utiliza para completar una serie de corridas bien caracterizadas
# - Medir la cantidad de memoria alojada por un proceso
#
# + [markdown] slideshow={"slide_type": "slide"}
# Recordemos que una mejorá de $k$ veces en el desempeño tiene efectos interesantes:
#
# - nos puede ahorrar en un factor $\sim{k}$ veces en infraestructura y operación
# - nos puede permitir procesar $k$ veces más información en el mismo tiempo.
#
# El costo no siempre esta asociado directamente con el tiempo, también puede referirse a la memoria
#
# ***
# *una mejora en la complejidad de un algoritmo puede significar tener la capacidad de manejar problemas más grandes o a más detalle.*
# ***
# + [markdown] slideshow={"slide_type": "slide"}
#
# ## Costo como función del tamaño de entrada ##
# Es muy común caracterizar un algoritmo utilizando el tamaño de la entrada, suele ser útil para costos de peor caso. Ej.
#
#
# ## Costo como función de la distribución de los datos ##
#
#
# # Comparación
# ## Notación asimptótica
# ### $\Theta(g(n))$
# Para una función dada $g(n)$ se denota $\Theta(g(n))$ al conjunto de funciones<br/>
# $ \Theta(g(n)) = \{f(n): \text{existen las constantes positivas } c_1, c_2, n_0 \text{ tal que } 0 \leq c_1 g(n) \leq f(n) \leq c_2 g(n) \text{ para todo } n \geq n_0 \}$
# ### $O(g(n))$
# Dada la función $g(n)$, $O(g(n))$ esta definida como el conjunto de funciones<br/>
# $ O(g(n)) = \{f(n): \text{existen las constantes positivas } c, n_0 \text{ tal que } 0 \leq f(n) \leq c g(n) \text{ para todo } n \geq n_0 \}$
#
# ### $\Omega(g(n))$
# Dada la función $g(n)$, $\Omega(g(n))$ esta definida como el conjunto de funciones<br/>
# $\Omega(g(n)) = \{f(n): \text{existen las constantes positivas } c, n_0 \text{ tal que } 0 \leq c g(n) \leq f(n) \text{ para todo } n \geq n_0 \}$
#
# ## Notación asimptótica no _ajustada_
# Tanto $O$ como $\Omega$ describen conjuntos de funciones que pueden estar ajustados o no a $g(n)$ de manera asimptótica. Cuando es necesario describir costos donde no hay posibilidad que $f(n)$ se _acerque_ a $g(n)$ después de cierto $n_0$ se suelen utilizar las siguientes variantes:
#
# - $o(g(n)) = \{f(n): \text{para todas las constantes positivas } c > 0 \text{, existe una constante }n_0 > 0 \text{ tal que } 0 \leq f(n) \leq c g(n) \text{ para todo } n \geq n_0 \}$
# - $\omega(g(n)) = \{f(n): \text{para todas las constantes positivas } c > 0 \text{, existe una constante }n_0 > 0 \text{ tal que } 0 \leq c g(n) \leq f(n) \text{ para todo } n \geq n_0 \}$
#
# -
# # Ejercicios
#
# 1. Implemente los siguientes algoritmos:
#
# - Insertion-sort
# - Bubble-sort
# - Merge-sort
# - Heap-sort
#
# 2. Compare el desempeño de los 4 algoritmos. Considere el tamaño de arreglo $n$, genere arreglos de números con las siguientes propiedades:
#
# - Un arreglo ordenado $A_O$
# - Tomando el arreglo ordenado, intercambie de manera aleatoria:
# - $n/10$ pares, $A_{10}$
# - $n/30$ pares, $A_{30}$
# - $n/100$ pares, $A_{100}$
#
# Para los tamaños $n=10^4, 10^5, 10^6$, compare el desempeño de los 4 algoritmos de ordenamiento sobre los 4 tipos de arreglos ($A_O, A_{10}, A_{30}$, y $A_{100}$).
| 05-Algoritmos.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++14
// language: C++14
// name: xeus-cling-cpp14
// ---
#include "../include/xvolume/xvolume_figure.hpp"
std::vector<float> x = std::vector<float>({0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1});
auto y = x;
std::vector<float> z = std::vector<float>({0, 0.3, 0.1, 0.6, 0.4, 0.1, 0.6, 0.7, 0.8, 0.9, 1});
xvl::scatter s;
s.x = x;
s.x = x;
s.y = y;
s.z = z;
s.color = "blue";
xvl::volume_figure a;
a
a.scatters = std::vector<xvl::scatter>({s});
a.scatters()[0].color="red"
a.scatters()[0].geo = "sphere"
a.scatters()[0].connected = false
a.scatters()[0].x = std::vector<float>({0.1, 0.4, 0.2, 0.5, 0.1, 0.3, 0.5, 0.1, 0.2, 0.8});
| notebooks/xvolume_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import re
import pandas as pd
import numpy as np
import pickle
from sklearn.impute import KNNImputer
import seaborn as sns
from sklearn.decomposition import NMF
# -
# # Import movie data
path = 'recommender/data/ml-latest-small/ratings.csv'
df_rating = pd.read_csv(path, sep=',')
df_rating
path1 = 'recommender/data/ml-latest-small/movies.csv'
df_movies = pd.read_csv(path1, sep=',')
df_movies
df_sp = df_movies.copy()
df_sp['just the title'] = df_sp['title'].copy
for i in range(len(df_movies.index)):
title = df_sp['title'][i]
title = re.sub(' \([0-9]{4}\)', "", title)
df_sp['just the title'][i] = title
df_sp
df_movies['title only'] = df_sp['just the title']
df_movies.head()
df_movies.loc[df_movies['title only'] == 'Sabrina']
# # movie, ratings by users
df_rating.groupby('movieId')['rating'].mean().sort_values(ascending=False).head(10)
# movie, ratings by users
R = df_rating.set_index(['movieId', 'userId'])['rating'].unstack(0)
type(R)
R
#make R.column labels movie titles
R_movieIds = pd.DataFrame(R.columns)
titles = R_movieIds.merge(df_movies, on='movieId')
R.columns = titles['title']
R
imputer = KNNImputer() # k-nearste neighor
R_imputed = pd.DataFrame(imputer.fit_transform(R), index=R.index, columns=R.columns)
R_imputed
# ### save imputed data frame R to speed up web app
R_imputed.to_pickle('recommender/data/R_imputed.pkl')
average_movie_rating = R.mean().mean()
average_movie_rating#used later for new user fill nan
#average_movie_rating
# # Train NMF
# Instantiate the nmf
nmf = NMF(n_components=20, max_iter=10000)
# As usual with scikit-learn Classes, we fit the nmf
nmf.fit(R_imputed)
R_imputed.shape
# ### save model for later use in other files
filepath = 'recommender/data/recommender_model_n-comp-20_max-iter-10000_KNN-imputer.sav'
with open(filepath, 'wb') as file:
pickle.dump(nmf, file)
# ## Check out the sub-matrices and reconstruction error
# Extract the movie-feature matrix
Q = nmf.components_
Q.shape # shape of Q should have a shape of? Anja: 20 features (rows) x # movies = 9724 (columns)
# Make a data frame
pd.DataFrame(Q, columns=R.columns)
# Extract the user-feature matrix
P = nmf.transform(R_imputed)
P.shape # We expect a shape of? # userId 610 (rows) x 20 components (columns)
# Make a DataFrame out of it
pd.DataFrame(P, index=R.index)
# Look at the reconstruction error
round(nmf.reconstruction_err_, 2)
# The error can be interpreted relative to the error of other models
# # Reconstruct the original matrix
# Look at the original matrix R
R.head()
# Calculate R_hat
R_hat = pd.DataFrame(np.matmul(P, Q), index=R.index, columns=R.columns)
R_hat
# # Make prediction based on new user input
len(R.index)
# Create a dictionary for a new user
new_user_input = {'<NAME> (2014)':3.2, 'Flint (2017)':4.6, 'Jumanji (1995)':5.0} # similar to JSON data that we will have to work with in the end
new_user_input
# Convert it to a pd.DataFrame
new_user = pd.DataFrame(new_user_input, columns=R.columns, index=[len(R.index)+1])
new_user
#Fill missing data
new_user = new_user.fillna(average_movie_rating)
new_user
#Prediction step 1 - generate user_P
user_P = nmf.transform(new_user)
user_P
#new user R - reconstruct R but for this new user only
user_R = pd.DataFrame(np.matmul(user_P, Q), index=new_user.index, columns=R.columns)
user_R
# I have a list of predicted films!! Can I now use this for my recommendations?
# We want to get rid of movies we have already watchend
recommendation = user_R.drop(columns=new_user_input.keys())
# Sort recommendations
recommendation.sort_values(by=len(R.index)+1, axis=1, ascending=False, inplace=True)
round(recommendation, 2)
round(recommendation.iloc[:,0:5], 2)
| Recommender/movie_recommender.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1><div align="center">Social Data Mining</div></h1>
# <h2><div align="center">Lesson IV - Twitter with Tweepy</div></h2>
# <div align="center"><NAME></div>
# <div align="center"><a href="http://www.data4sci.com/">www.data4sci.com</a></div>
# <div align="center">@bgoncalves, @data4sci</div>
# +
import json
import numpy as np
import matplotlib.pyplot as plt
import tweepy
import watermark
# %load_ext watermark
# %matplotlib inline
# -
# Let's start by print out the versions of the libraries we're using for future reference
# %watermark -n -v -m -p numpy,tweepy,matplotlib
# The first step is to load up the account information. I recommend that you keep all your account credentials in a dictionary like this one to make it easier to switch between accounts. You can find this stup in **twitter_accounts_STUB.py**
accounts = {
"social" : { 'api_key' : 'API_KEY',
'api_secret' : 'API_SECRET',
'token' : 'TOKEN',
'token_secret' : 'TOKEN_SECRET'
},
}
# All my credentials are listed in **twitter_accounts.py**. You can find a stub version of this file in the github repository. You can just fill in your own credentials and then proceed to doing:
from twitter_accounts import accounts
# You load the credentials for a specific account using its dictionary key:
app = accounts["bgoncalves"]
# Which contains all the information you need to create an OAuth Handler
auth = tweepy.OAuthHandler(app["api_key"], app["api_secret"])
auth.set_access_token(app["token"], app["token_secret"])
# That you can finally pass to the tweepy module
twitter_api = tweepy.API(auth)
# This object will be your main way of interacting with the twitter API.
# # Searching Tweets
query = "instagram"
count = 200
# To search for tweets matching a specific query we simply use the **search** method
statuses = twitter_api.search(q=query, count=count)
# The count parameter specifies the number of results we want. 200 is the maximum per call. The **search** method returns a *SearchResults* object
type(statuses)
# That contains a lot of metadata in addition to just the list of results
print("max_id:", statuses.max_id)
print("since_id:", statuses.since_id)
print("refresh_url:", statuses.refresh_url)
print("completed_in:", statuses.completed_in)
print("query:", statuses.query)
print("count:", statuses.count)
print("next_results:", statuses.next_results)
# We can also access the results as if the *SearchResults* was a list
tweet = statuses[0]
print(tweet.text)
# To request the next page of results we pass the *next_results* field to the next call to **search**
try:
for tweet in statuses:
print (tweet.text)
next_results = statuses.next_results
args = dict(parse.parse_qsl(next_results[1:]))
statuses = twitter_api.search(**args)
except:
pass
# # Streaming
# If instead we are interested in real time results, we can use the *Streaming* API. We simply declare a Listener that overrides the on_data and on_error functions appropriately.
class StdOutListener(tweepy.StreamListener):
def on_status(self, status):
print(status.text)
return True
def on_error(self, status):
print(status)
def on_timeout(self):
print('Timeout...', file=sys.stderr)
return True
# Instanciate the Listener
listen = StdOutListener()
# And pass the listener and the OAuth object to the **Stream** module.
stream = tweepy.Stream(auth, listen)
# This will return a stream object that we can (finally) use to track the Twitter stream for specific results
stream.filter(track=[query])
# # User information
# ### Profile
# Profile information is just a API call away
screen_name = 'neiltyson'
user = twitter_api.get_user(screen_name=screen_name)
print(user.screen_name, "has", user.followers_count, "followers and follows", user.friends_count, "other users")
# ### Friends
# Requesting information on a users friends is also simple
friends = twitter_api.friends(screen_name=screen_name, count=200)
# And we can see that we retrieved all the friends
len(friends)
# And their screen names are:
for i, friend in enumerate(friends):
print(i, friend.screen_name)
# ### Followers
# Since we already saw that the number of followers is significantly larger, we use a **Cursor** to seamlessly paginate through all the results. For expediency, here we only request the first 100 results.
for i, follower in enumerate(tweepy.Cursor(twitter_api.followers, screen_name=screen_name).items(100)):
print(i, follower.screen_name)
# ### User timeline
# The *user_timeline* method returns the tweets of a given user. As before, we can use a Cursor to iterate over all teh tweets, but do keep in mind that Twitter limits our access to only the 3200 most recent tweets
# +
screen_name = "BarackObama"
tweets = []
for status in tweepy.Cursor(twitter_api.user_timeline, screen_name=screen_name).items():
tweets.append(status.text)
print("Found", len(tweets), "tweets")
# -
# # Social Interactions
# By following the timeline of the a user, we can see who s/he interacts with to generate a social interaction graph. We define the edge direction to be the direction of information flow, so:
# - **retweet** - information flows from the author of the original tweet to the retweeter
# - **mention** - information flows from the author of the tweet to the one being mentioned
for status in tweepy.Cursor(twitter_api.user_timeline, screen_name=screen_name).items(200):
if hasattr(status, 'retweeted_status'):
print(status.retweeted_status.author.screen_name, '->', screen_name)
elif status.in_reply_to_screen_name is not None:
print(screen_name, '->', status.in_reply_to_screen_name)
# For convenience, here we chose to just list out all the edges in the order in which they appear. This information could naturally have been used to define a *NetworkX* graph for further analysis.
# # Geolocated data
# Here we demonstrate how to search for tweets containing geolocation information. We also take the opportunity to illustrate a more sophisticated StreamListener implementation
class FileOutListener(tweepy.StreamListener):
def __init__(self, fp = None):
super().__init__()
self.tweet_count = 0
if fp is not None:
self.fp = fp
else:
self.fp = open("tweets.json", "wt")
def on_data(self, data):
# Using on_data (instead of on_status) Tweets are return as json strings.
# We can parse them to extract the information we require
status = json.loads(data)
self.tweet_count += 1
print (self.tweet_count, status["id"])
print(data.strip(), file=self.fp)
return True
def on_error(self, status):
print(status)
def on_timeout(self):
print('Timeout...', file=sys.stderr)
return True
# Our bounding box will be NYC
bb = [-74,40,-73,41] # NYC
# And we will save the raw json from the tweets we obtain in a text file
with open("NYC.json", "wt") as fp:
listener = FileOutListener(fp)
stream = tweepy.Stream(auth, listener)
stream.filter(locations=bb)
| Lesson IV - Twitter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# <h2> Combine all Oscars data into one file <h2>
# +
#Dictionary with categories:
dict = {
'ACTOR': 'ACTOR IN A LEADING ROLE',
'ACTOR IN A LEADING ROLE':'ACTOR IN A LEADING ROLE',
'LEAD ACTOR':'ACTOR IN A LEADING ROLE',
'ACTRESS': 'ACTRESS IN A LEADING ROLE',
'ACTRESS IN A LEADING ROLE': 'ACTRESS IN A LEADING ROLE',
'LEAD ACTRESS':'ACTRESS IN A LEADING ROLE',
'ACTOR IN A SUPPORTING ROLE': 'ACTOR IN A SUPPORTING ROLE',
'SUPPORTING ACTOR':'ACTOR IN A SUPPORTING ROLE',
'ACTRESS IN A SUPPORTING ROLE': 'ACTRESS IN A SUPPORTING ROLE',
'SUPPORTING ACTRESS':'ACTRESS IN A SUPPORTING ROLE',
'CINEMATOGRAPHY': 'CINEMATOGRAPHY',
'CINEMATOGRAPHY (Black-and-White)': 'CINEMATOGRAPHY',
'CINEMATOGRAPHY (Color)': 'CINEMATOGRAPHY',
'DIRECTING (Comedy Picture)': 'DIRECTING',
'DIRECTING (Dramatic Picture)': 'DIRECTING',
'DIRECTING': 'DIRECTING',
'DIRECTOR':'DIRECTING',
'OUTSTANDING PICTURE': 'BEST PICTURE',
'OUTSTANDING MOTION PICTURE': 'BEST PICTURE',
'BEST MOTION PICTURE': 'BEST PICTURE',
'BEST PICTURE':'BEST PICTURE',
}
# +
#1929-2016
till2016 = pd.read_csv('data/database_oscars_goodone.csv')
#Fix year
yrs = {
'1927/1928':'1928',
'1928/1929':'1929',
'1929/1930':'1930',
'1930/1931':'1931',
'1931/1932':'1932',
'1932/1933':'1933',
}
till2016=till2016.replace({'Year':yrs})
till2016['Year']=pd.to_numeric(till2016['Year'])+1
# -
till2016['Award']=till2016['Award'].str.upper()
till2016['Award']=till2016['Award'].map(dict)
till2016=till2016[till2016['Award'].notnull()]
till2016=till2016.copy()
till2016.drop(columns=['Ceremony'], inplace=True)
till2016=till2016[till2016['Year']>1940]
#2017
osc17 = pd.read_csv('data/past_nominations.csv') #2000-2017
osc17=osc17[osc17['Year']==2017]
osc17=osc17[osc17['Award'].isin(dict)]
#2019 nominations
osc19 = pd.read_csv('data/2019_nominations.csv')
osc19=osc19[osc19['Award'].isin(dict)].reset_index().drop(columns=['index', 'Detail'])
#Add 2019 winners
osc19['winner']=np.nan
osc19.iloc[3,4]=1
osc19.iloc[5,4]=1
osc19.iloc[12,4]=1
osc19.iloc[17,4]=1
osc19.iloc[23,4]=1
osc19.iloc[28,4]=1
osc19.iloc[34,4]=1
#Import 2020 oscars, will process them separately
osc2020=pd.read_csv('data/2020nominees.csv')
osc2020=osc2020.drop(columns=osc2020.columns[0])
osc2020['category']=osc2020['category'].str.upper()
osc2020=osc2020[osc2020['category'].isin(dict)]
# +
osc2020
name=osc2020[osc2020['name'].isnull()]
name['name']=name['movie']
osc2020=pd.concat([osc2020[9:],name], axis=0)
osc2020=osc2020.copy()
osc2020['name']=osc2020['name'].str.title()
osc2020['movie']=osc2020['movie'].str.title()
# -
osc2020
till2016
# +
#combine till2016, 2017, 2019
till2016.columns=['Year','Award','Winner','Nominee','Movie']
till2016=till2016[['Year', 'Award', 'Movie', 'Nominee', 'Winner']]
osc19.columns=osc19.columns.str.title()
print(till2016.columns)
print(osc17.columns)
print(osc19.columns)
# +
osc=pd.concat([till2016,osc17,osc19],axis=0, sort=False)
osc['Movie']=osc['Movie'].str.title().str.strip()
osc['Nominee']=osc['Nominee'].str.title().str.strip()
#let's separate best picture nominations from here:
picture=osc[osc['Award']=='BEST PICTURE']
picture=picture.copy()
print(picture.shape)
#We still need to add 2018, once we match the nominations
# -
osc
# <h2>Feature engineering: Add nominations</h2>
# +
#Function to add film nomination in other categories
def match_awards(all_oscars, picture):
picture=picture.copy()
#Remove "BEST PICTURE" from the list
awards=all_oscars['Award'].unique()
n=np.where(awards=="BEST PICTURE")
awards=np.delete(awards,n[0][0])
for award in awards:
subset=pd.concat([all_oscars[all_oscars['Award']==award]['Movie'],all_oscars[all_oscars['Award']==award]['Nominee']])
name=award.replace(' ','_')+"_nomination"
picture[name]=picture['Nominee'].isin(subset)
return picture
# -
picture=match_awards(osc, picture)
#Add 2018
osc18=pd.read_csv('data/missing_data.csv') #Combine with data when matched other awards
osc18=osc18[osc18['year']==2018]
osc18=osc18[['year', 'award', 'movie', 'nominee', 'winner','lead_actor_nomination',
'lead_actor_nomination',
'cinematography_nomination', 'directing_nomination',
'support_actor_nomination', 'support_actress_nomination']]
osc18.columns=picture.columns
picture=pd.concat([picture, osc18], axis=0)
picture
# <h2>Add IMDB data </h2>
# +
#Edit picture fields
picture['Nominee']=picture['Nominee'].str.strip().str.title()
picture['Year']=pd.to_numeric(picture['Year'])
picture['Year_film']=picture['Year']-1
picture ['Year_film2']=picture['Year']-2
picture ['Year_film3']=picture['Year']-3
names_ = {'<NAME>': '<NAME>',
'The Godfather Part Ii':'The Godfather: Part Ii',
'The Godfather, Part Iii':'The Godfather: Part Iii',
"Precious: Based On The Novel 'Push' By Sapphire":'Precious',
'The Postman (Il Postino)': 'Il Postino',
'Henry V': 'The Chronicle History Of King Henry The Fifth With His Battell Fought At Agincourt In France'
}
picture=picture.replace({'Nominee':names_})
# -
#First, add title and genre -- this is a big DB
imdb = pd.read_csv('data/imdb/data.tsv', sep='\t', na_values='\\N')
imdb=imdb[imdb['titleType']=='movie']
imdb['primaryTitle']=imdb['primaryTitle'].str.strip().str.title()
imdb=imdb.drop(columns=['titleType','originalTitle','isAdult','endYear'])
imdb.columns
#This is Oscar Nominated movies list
top=pd.read_csv('data/all_best_pic_nominated.csv', encoding='latin-1')
top['Title']=top['Title'].str.strip().str.title()
top=top.drop(columns=['Position','IMDb Rating', 'Num Votes','Created','Modified','Description','URL','Title Type','Release Date', 'Directors'])
top=top[['Const', 'Title', 'Year', 'Runtime (mins)', 'Genres']]
top.columns=['tconst', 'primaryTitle', 'startYear', 'runtimeMinutes', 'genres']
# +
#Function to first map Oscar Nominations list, then imdb overall
def merge_imdb(picture):
merge=picture.merge(top, left_on=['Nominee', 'Year_film'], right_on=['primaryTitle','startYear'], how='inner')
missing=[]
pictures=picture['Nominee'].to_list()
for pic in pictures:
if pic not in np.array(merge['Nominee']):
missing.append(pic)
to_match=picture[picture['Nominee'].isin(missing)]
merge2=to_match.merge(imdb, left_on=['Nominee', 'Year_film'], right_on=['primaryTitle','startYear'], how='inner')
merge3=to_match.merge(imdb, left_on=['Nominee', 'Year_film2'], right_on=['primaryTitle','startYear'], how='inner')
merge4=to_match.merge(imdb, left_on=['Nominee', 'Year_film3'], right_on=['primaryTitle','startYear'], how='inner')
# print(merge.columns, merge2.columns, merge3.columns, merge4.columns)
final=pd.concat([merge,merge2,merge3,merge4], axis=0, sort=False)
print(final.shape)
return final
# -
picture=merge_imdb(picture)
# <h2>Match IMDB ratings</h2>
# Now, we're gonna match movie rating and movie votes
#
rating=pd.read_csv('data/imdb/film_rating_num_votes.tsv', sep='\t', na_values='\\N')
picture=picture.merge(rating, on='tconst', how='left')
picture
# <h2>Add director's age</h2>
# +
director = pd.read_csv('data/imdb/title_to_director_writer.tsv', sep='\t', na_values='\\N')
picture=picture.merge(director, on='tconst', how='left')
#Add age
dire = pd.read_csv('data/imdb/directors_actors_data.tsv', sep='\t', na_values='\\N')
picture['director']=picture['directors'].str.split(',').str[0]
picture=picture.merge(dire[['nconst','birthYear']], left_on='director', right_on='nconst', how='left')
# -
picture['dir_age']=picture['Year']-picture['birthYear']
# +
#Add gender
gender = pd.read_csv('data/Gender - gender.csv')
# -
picture=picture.merge(gender[['nconst','is_woman']], left_on='director', right_on='nconst')
# <h2>Add scraped stuff </h2>
picture[['tconst']].to_csv('data/movies to crawl.csv')
stuff=pd.read_csv('data/scraped_movie_info.csv')
new=picture.merge(stuff, left_on='tconst', right_on='tconst', how='left')
new['winner_bool']=new['Winner']==1
new.columns
# <h2>Do the same with the test data</h2>
# +
osc2020['Year']=2020
osc2020.columns=['Award','Nominee','Movie','Year']
osc2020['Award']=osc2020['Award'].map(dict)
pic2020 = osc2020[osc2020['Award']=='BEST PICTURE']
pic2020=match_awards(osc2020, pic2020)
pic2020
# +
pic2020['Year_film']=pic2020['Year']-1
pic2020 ['Year_film2']=pic2020['Year']-2
pic2020 ['Year_film3']=pic2020['Year']-3
pic2020=merge_imdb(pic2020)
pic2020
# -
#add ratings
pic2020=pic2020.merge(rating, on='tconst', how='left')
pic2020=pic2020[:9]
# +
pic2020.iloc[4,18]=8.1
pic2020.iloc[4,19]=43284
pic2020.iloc[6,18]=8.5
pic2020.iloc[6,19]=133686
pic2020
# +
#add director's age and gender
pic2020=pic2020.merge(director, on='tconst', how='left')
pic2020['director']=pic2020['directors'].str.split(',').str[0]
# -
#Age
pic2020=pic2020.merge(dire[['nconst','birthYear']], left_on='director', right_on='nconst', how='left')
pic2020['dir_age']=pic2020['Year']-pic2020['birthYear']
pic2020['is_woman']=0
# +
#Add scraped data
data=pd.read_csv('data/scraped_movie_info2020.csv')
pic2020=pic2020.merge(data, on='tconst')
# +
# pic2020.to_csv('test.csv', index=False)
# new.to_csv('train.csv', index=False)
# -
pic2020
# +
#### More wrangling of the data -- to the final set
# +
# traindata=pd.read_csv('train.csv') new
# test=pd.read_csv('test.csv') pic2020
# +
#Import CPI data
cpi = pd.read_csv('data/CPIAUCNS.csv')
cpi['year']=pd.to_datetime(cpi['DATE']).dt.year
cpi=cpi.groupby('year').first().reset_index()
# create index multiplier
#inflation[‘CPI_Multiplier’] = inflation[‘CPI’].iloc[-1] / inflation[‘CPI’]
cpi['cpi_multiplier']=cpi['CPIAUCNS'].iloc[-1]/cpi['CPIAUCNS']
cpi=cpi.drop(columns=['CPIAUCNS','DATE'])
cpi.loc[cpi.shape[0]]=[2020, 1]
# +
#Define function to get_dummies for genres
import numpy as np
concat=pd.concat([new['genres'].str.split(',').str[0], new['genres'].str.split(',').str[1], new['genres'].str.split(',').str[2]], axis=0).unique()
genres=pd.Series(concat).str.strip().str.lower().unique()
def add_genres(X):
X['genres']=X['genres'].str.lower()
for genre in genres:
if pd.isnull(genre)==False:
X[genre]=X['genres'].str.contains(genre)
return X
#Define function to get N globes
def n_globes(X):
globes = pd.read_csv('data/golden_globe_awards.csv')
get_films=X.merge(globes[['film','win','category']], left_on='nominee', right_on='film', how='inner')
n_nominations=pd.DataFrame(get_films['tconst'].value_counts()).reset_index()
n_nominations.columns=['tconst','n_globes']
X=X.merge(n_nominations, on='tconst',how='left')
return X
# +
new['budget']=pd.to_numeric(new['budget'].str.replace("$",'').str.replace(',','')
.str.replace('EUR','').str.replace('GBP',''))
new['budget'].dtypes=='object'
# +
#Wrangle data
def wrangle(X):
#Columns to lower
X.columns=X.columns.str.lower()
#Get genres
X=add_genres(X)
#Make sure all budgets are numeric
for cols in ['budget','opening_wk','gross','world']:
if (X[cols].dtypes=='object'):
X[cols]=pd.to_numeric(X[cols].str.replace('$','').str.replace('EUR','').str.replace('GBP','').str.replace(',',''))
#Adjust box office numbers for inflation
X=X.merge(cpi, on='year',how='left')
X['budget']=X['budget']*X['cpi_multiplier']
X['opening_wk']=X['opening_wk']*X['cpi_multiplier']
X['gross']=X['gross']*X['cpi_multiplier']
X['world']=X['world']*X['cpi_multiplier']
#Get golden globes n
X=n_globes(X)
#Convert columns to correct formats
return X
# -
final_train=wrangle(new)
final_train.columns
final_test=wrangle(pic2020)
final_train.to_csv('traindata.csv', index=False)
final_test.to_csv('test.csv', index=False)
| notebooks/Prediction_data_prep.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Random selection of 64 conformations
# - Procedure done only for `EGFR` and `HSP90` proteins.
# - A random seed is provided for reproducibility
import pandas as pd
import numpy as np
import tarfile
# Load the data frame with the metadata information
prot_name = 'hsp90'
df_prot_file = '../1_Download_and_prepare_protein_ensembles/TABLA_MTDATA_HSP90_298_crys_LIGS_INFO.json'
df_prot = pd.read_json(df_prot_file)
print(df_prot.shape)
# +
np.random.seed(42)
pdb_ids = np.random.choice(df_prot.index, 64, replace = False)
pdb_ids_dir = '../1_Download_and_prepare_protein_ensembles/pdb_structures/pdb_prepared/'
with tarfile.open(f'64_{prot_name.upper()}_conformations.pdb.tar', 'w:gz') as tar:
for name in pdb_ids:
tar.add(f'{pdb_ids_dir}/{name}_ENS.pdb')
tar.close()
# -
# ### Selected PDB ids
np.sort(pdb_ids)
| hsp90/3_Protein_Ensembles_Analysis/1_Random_selection_of_64_conformations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 3.2 Learning from data
# First the problem is solved as per the question, then its solved using weights from Linear regression
from matplotlib import pyplot as plt
import numpy as np
import random
from random import seed
np.random.seed(1)
# +
#creating a linearly in-sepearable datasset
def generate_samples(n_train=100, n_test=1000):
X_train = np.zeros((n_train, 2), dtype=np.float)
X_test = np.zeros((n_test, 2), dtype=np.int)
y_train = np.zeros((n_train, ), dtype=np.float)
y_test = np.zeros((n_test, ), dtype=np.int)
X_train[:, 0] = np.random.rand(n_train)
X_test[:, 0] = np.random.rand(n_test)
X_train[:, 1] = np.random.rand(n_train)
X_test[:, 1] = np.random.rand(n_test)
while True:
weights_for_line = np.random.rand(3, 1)
b = weights_for_line[0] / (0.5 - 0.2)
w_1 = weights_for_line[1] / 0.9 - 0.3
w_2 = weights_for_line[2] / 0.5 - 0.1
for i in range(n_train):
y_train[i] = np.sign(w_1 * X_train[i, 0] + w_2 * X_train[i, 1] + b)
if np.abs((np.count_nonzero(y_train == 1) + 1) /
(np.count_nonzero(y_train == -1) + 1) - 0.5) < 0.1:
break
# flip the labels of the 10% dataset
flip_y = np.random.choice(n_train, n_train // 10, replace=True)
y_train[flip_y] = -y_train[flip_y]
for i in range(n_test):
y_test[i] = np.sign(w_1 * X_test[i, 0] + w_2 * X_test[i, 1] + b)
flip_y = np.random.choice(n_test, n_test // 10, replace=True)
y_test[flip_y] = -y_test[flip_y]
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_samples(100)
X = X_train
err_train_now = []
err_train_hat = []
err_test_now = []
err_test_hat = []
train_err_now = 1
train_err_min = 1
pos_x = []
pos_y = []
neg_x = []
neg_y = []
for i in range(X.shape[0]):
if y_train[i] == 1:
pos_x.append(X[i, 0])
pos_y.append(X[i, 1])
else:
neg_x.append(X[i, 0])
neg_y.append(X[i, 1])
plt.scatter(pos_x, pos_y, c='blue')
plt.scatter(neg_x, neg_y, c='red')
plt.show()
# -
learningRate = 0.01
Y = y_train
oneVector = np.ones((X_train.shape[0], 1))
X_train = np.concatenate((oneVector, X_train), axis=1)
oneVector_test = np.ones((X_test.shape[0], 1))
X_test= np.concatenate((oneVector_test, X_test), axis=1)
plotData = []
weights = np.random.rand(3, 1)
w_hat = weights
misClassifications = 1
minMisclassifications = 10000
iteration = 0
print(weights.shape)
def evaluate_error(w, X, y):
n = X.shape[0]
pred = np.matmul(X, w)
pred = np.sign(pred) - (pred == 0)
pred = pred.reshape(-1)
return np.count_nonzero(pred == y) / n
while (misClassifications != 0 and (iteration<1000)):
iteration += 1
misClassifications = 0
for i in range(0, len(X_train)):
currentX = X_train[i].reshape(-1, X_train.shape[1])
currentY = Y[i]
wTx = np.dot(currentX, weights)[0][0]
if currentY == 1 and wTx < 0:
misClassifications += 1
weights = weights + learningRate * np.transpose(currentX)
elif currentY == -1 and wTx > 0:
misClassifications += 1
weights = weights - learningRate * np.transpose(currentX)
train_err_now = evaluate_error(weights, X_train, y_train)
err_train_now.append(train_err_now)
if train_err_now < train_err_min :
train_err_min = train_err_now
err_train_hat.append(train_err_min)
w_hat = weights
test_err_hat = evaluate_error(w_hat,X_test,y_test)
test_err_now = evaluate_error(weights,X_test,y_test)
err_test_hat.append(test_err_hat)
err_test_now.append(test_err_now)
plotData.append(misClassifications)
if misClassifications<minMisclassifications:
minMisclassifications = misClassifications
print(weights.transpose())
print ("Best Case Accuracy of Pocket Learning Algorithm is: ",(((X_train.shape[0]-minMisclassifications)/X_train.shape[0])*100),"%")
plt.title('Number of misclassifications over the number of iterations')
plt.plot(np.arange(0,iteration),plotData)
plt.xlabel("Number of Iterations")
plt.ylabel("Number of Misclassifications")
plt.show()
plt.figure()
plt.title('TRAIN SET - Error over iterations')
plt.xlabel("Number of Iterations")
plt.ylabel("errors")
plt.plot(err_train_now,color = 'magenta')
plt.show
plt.figure()
plt.title('TRAIN SET - Changes in Minimum Error')
plt.xlabel("Number of times the minimum error changed")
plt.ylabel("errors")
plt.plot(err_train_hat,color='black')
plt.show
plt.figure()
plt.title('TEST SET - Error over iterations')
plt.xlabel("Number of Iterations")
plt.ylabel("errors")
plt.plot(err_test_now,color = 'blue')
plt.show
plt.figure()
plt.title('TEST SET - Changes in Minimum Error')
plt.xlabel("Number of times the minimum error changed")
plt.ylabel("errors")
plt.plot(err_test_hat,color='red')
plt.show
# # Using weights from Linear Regression
# %reset
from sklearn import linear_model
from matplotlib import pyplot as plt
import numpy as np
import random
from random import seed
np.random.seed(1)
# +
#creating a linearly in-sepearable datasset
def evaluate_error(w, X, y):
n = X.shape[0]
pred = np.matmul(X, w)
pred = np.sign(pred) - (pred == 0)
pred = pred.reshape(-1)
return np.count_nonzero(pred == y) / n
def generate_samples(n_train=100, n_test=1000):
X_train = np.zeros((n_train, 2), dtype=np.float)
X_test = np.zeros((n_test, 2), dtype=np.int)
y_train = np.zeros((n_train, ), dtype=np.float)
y_test = np.zeros((n_test, ), dtype=np.int)
X_train[:, 0] = np.random.rand(n_train)
X_test[:, 0] = np.random.rand(n_test)
X_train[:, 1] = np.random.rand(n_train)
X_test[:, 1] = np.random.rand(n_test)
while True:
weights_for_line = np.random.rand(3, 1)
b = weights_for_line[0] / (0.5 - 0.2)
w_1 = weights_for_line[1] / 0.9 - 0.3
w_2 = weights_for_line[2] / 0.5 - 0.1
for i in range(n_train):
y_train[i] = np.sign(w_1 * X_train[i, 0] + w_2 * X_train[i, 1] + b)
if np.abs((np.count_nonzero(y_train == 1) + 1) /
(np.count_nonzero(y_train == -1) + 1) - 0.5) < 0.1:
break
# flip the labels of the 10% dataset
flip_y = np.random.choice(n_train, n_train // 10, replace=True)
y_train[flip_y] = -y_train[flip_y]
for i in range(n_test):
y_test[i] = np.sign(w_1 * X_test[i, 0] + w_2 * X_test[i, 1] + b)
flip_y = np.random.choice(n_test, n_test // 10, replace=True)
y_test[flip_y] = -y_test[flip_y]
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_samples(100)
X = X_train
err_train_now = []
err_train_hat = []
err_test_now = []
err_test_hat = []
train_err_now = 1
train_err_min = 1
pos_x = []
pos_y = []
neg_x = []
neg_y = []
for i in range(X.shape[0]):
if y_train[i] == 1:
pos_x.append(X[i, 0])
pos_y.append(X[i, 1])
else:
neg_x.append(X[i, 0])
neg_y.append(X[i, 1])
plt.scatter(pos_x, pos_y, c='blue')
plt.scatter(neg_x, neg_y, c='red')
plt.show()
# -
learningRate = 0.01
Y = y_train
oneVector = np.ones((X_train.shape[0], 1))
X_train = np.concatenate((oneVector, X_train), axis=1)
oneVector_test = np.ones((X_test.shape[0], 1))
X_test= np.concatenate((oneVector_test, X_test), axis=1)
plotData = []
misClassifications = 1
minMisclassifications = 10000
iteration = 0
reg = linear_model.LinearRegression()
reg.fit(X_train,y_train)
weights_linear_regression = reg.coef_
l = []
for i in weights_linear_regression:
l.append(i)
weights = np.array(l)
weights = weights.reshape(3,1)
w_hat = weights
err_train_now = []
err_train_hat = []
err_test_now = []
err_test_hat = []
train_err_now = 1
train_err_min = 1
while (misClassifications != 0 and (iteration<1000)):
iteration += 1
misClassifications = 0
for i in range(0, len(X_train)):
currentX = X_train[i].reshape(-1, X_train.shape[1])
currentY = Y[i]
wTx = np.dot(currentX, weights)[0][0]
if currentY == 1 and wTx < 0:
misClassifications += 1
weights = weights + learningRate * np.transpose(currentX)
elif currentY == -1 and wTx > 0:
misClassifications += 1
weights = weights - learningRate * np.transpose(currentX)
train_err_now = evaluate_error(weights, X_train, y_train)
err_train_now.append(train_err_now)
if train_err_now < train_err_min :
train_err_min = train_err_now
err_train_hat.append(train_err_min)
w_hat = weights
test_err_hat = evaluate_error(w_hat,X_test,y_test)
test_err_now = evaluate_error(weights,X_test,y_test)
err_test_hat.append(test_err_hat)
err_test_now.append(test_err_now)
plotData.append(misClassifications)
if misClassifications<minMisclassifications:
minMisclassifications = misClassifications
print(weights.transpose())
print ("Best Case Accuracy of Pocket Learning Algorithm is: ",(((X_train.shape[0]-minMisclassifications)/X_train.shape[0])*100),"%")
plt.title('Number of misclassifications over the number of iterations')
plt.plot(np.arange(0,iteration),plotData)
plt.xlabel("Number of Iterations")
plt.ylabel("Number of Misclassifications")
plt.show()
plt.figure()
plt.title('TRAIN SET - Error over iterations')
plt.xlabel("Number of Iterations")
plt.ylabel("errors")
plt.plot(err_train_now,color = 'magenta')
plt.show
plt.figure()
plt.title('TRAIN SET - Changes in Minimum Error')
plt.xlabel("Number of times the minimum error changed")
plt.ylabel("errors")
plt.plot(err_train_hat,color='black')
plt.show
plt.figure()
plt.title('TEST SET - Error over iterations')
plt.xlabel("Number of Iterations")
plt.ylabel("errors")
plt.plot(err_test_now,color = 'blue')
plt.show
plt.figure()
plt.title('TEST SET - Changes in Minimum Error')
plt.xlabel("Number of times the minimum error changed")
plt.ylabel("errors")
plt.plot(err_test_hat,color='red')
plt.show
# # Using weights from Logistic Regression
# %reset
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot as plt
import numpy as np
import random
from random import seed
np.random.seed(1)
# +
#creating a linearly in-sepearable datasset
def evaluate_error(w, X, y):
n = X.shape[0]
pred = np.matmul(X, w)
pred = np.sign(pred) - (pred == 0)
pred = pred.reshape(-1)
return np.count_nonzero(pred == y) / n
def generate_samples(n_train=100, n_test=1000):
X_train = np.zeros((n_train, 2), dtype=np.float)
X_test = np.zeros((n_test, 2), dtype=np.int)
y_train = np.zeros((n_train, ), dtype=np.float)
y_test = np.zeros((n_test, ), dtype=np.int)
X_train[:, 0] = np.random.rand(n_train)
X_test[:, 0] = np.random.rand(n_test)
X_train[:, 1] = np.random.rand(n_train)
X_test[:, 1] = np.random.rand(n_test)
while True:
weights_for_line = np.random.rand(3, 1)
b = weights_for_line[0] / (0.5 - 0.2)
w_1 = weights_for_line[1] / 0.9 - 0.3
w_2 = weights_for_line[2] / 0.5 - 0.1
for i in range(n_train):
y_train[i] = np.sign(w_1 * X_train[i, 0] + w_2 * X_train[i, 1] + b)
if np.abs((np.count_nonzero(y_train == 1) + 1) /
(np.count_nonzero(y_train == -1) + 1) - 0.5) < 0.1:
break
# flip the labels of the 10% dataset
flip_y = np.random.choice(n_train, n_train // 10, replace=True)
y_train[flip_y] = -y_train[flip_y]
for i in range(n_test):
y_test[i] = np.sign(w_1 * X_test[i, 0] + w_2 * X_test[i, 1] + b)
flip_y = np.random.choice(n_test, n_test // 10, replace=True)
y_test[flip_y] = -y_test[flip_y]
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_samples(100)
X = X_train
err_train_now = []
err_train_hat = []
err_test_now = []
err_test_hat = []
train_err_now = 1
train_err_min = 1
pos_x = []
pos_y = []
neg_x = []
neg_y = []
for i in range(X.shape[0]):
if y_train[i] == 1:
pos_x.append(X[i, 0])
pos_y.append(X[i, 1])
else:
neg_x.append(X[i, 0])
neg_y.append(X[i, 1])
plt.scatter(pos_x, pos_y, c='blue')
plt.scatter(neg_x, neg_y, c='red')
plt.show()
# -
learningRate = 0.01
Y = y_train
oneVector = np.ones((X_train.shape[0], 1))
X_train = np.concatenate((oneVector, X_train), axis=1)
oneVector_test = np.ones((X_test.shape[0], 1))
X_test= np.concatenate((oneVector_test, X_test), axis=1)
plotData = []
misClassifications = 1
minMisclassifications = 10000
iteration = 0
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train, y_train)
logisticRegr.fit(X_train, y_train)
weights_logistic_regression = logisticRegr.coef_
l = []
for i in weights_logistic_regression:
l.append(i)
weights = np.array(l)
weights = weights.reshape(3,1)
w_hat = weights
err_train_now = []
err_train_hat = []
err_test_now = []
err_test_hat = []
train_err_now = 1
train_err_min = 1
while (misClassifications != 0 and (iteration<1000)):
iteration += 1
misClassifications = 0
for i in range(0, len(X_train)):
currentX = X_train[i].reshape(-1, X_train.shape[1])
currentY = Y[i]
wTx = np.dot(currentX, weights)[0][0]
if currentY == 1 and wTx < 0:
misClassifications += 1
weights = weights + learningRate * np.transpose(currentX)
elif currentY == -1 and wTx > 0:
misClassifications += 1
weights = weights - learningRate * np.transpose(currentX)
train_err_now = evaluate_error(weights, X_train, y_train)
err_train_now.append(train_err_now)
if train_err_now < train_err_min :
train_err_min = train_err_now
err_train_hat.append(train_err_min)
w_hat = weights
test_err_hat = evaluate_error(w_hat,X_test,y_test)
test_err_now = evaluate_error(weights,X_test,y_test)
err_test_hat.append(test_err_hat)
err_test_now.append(test_err_now)
plotData.append(misClassifications)
if misClassifications<minMisclassifications:
minMisclassifications = misClassifications
print(weights.transpose())
print ("Best Case Accuracy of Pocket Learning Algorithm is: ",(((X_train.shape[0]-minMisclassifications)/X_train.shape[0])*100),"%")
plt.title('Number of misclassifications over the number of iterations')
plt.plot(np.arange(0,iteration),plotData)
plt.xlabel("Number of Iterations")
plt.ylabel("Number of Misclassifications")
plt.show()
plt.figure()
plt.title('TRAIN SET - Error over iterations')
plt.xlabel("Number of Iterations")
plt.ylabel("errors")
plt.plot(err_train_now,color = 'magenta')
plt.show
plt.figure()
plt.title('TRAIN SET - Changes in Minimum Error')
plt.xlabel("Number of times the minimum error changed")
plt.ylabel("errors")
plt.plot(err_train_hat,color='black')
plt.show
plt.figure()
plt.title('TEST SET - Error over iterations')
plt.xlabel("Number of Iterations")
plt.ylabel("errors")
plt.plot(err_test_now,color = 'blue')
plt.show
plt.figure()
plt.title('TEST SET - Changes in Minimum Error')
plt.xlabel("Number of times the minimum error changed")
plt.ylabel("errors")
plt.plot(err_test_hat,color='red')
plt.show
| PocketAlgorithm_Exercise_3_2.ipynb |