# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/harshnarang8/sc779-comp/blob/main/test1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="CyuNBOVy64JJ"
# This notebook is for testing how the `TransformerEncoder` and `TransformerEncoderLayer` modules in PyTorch work. The code has mostly been sourced from the transformer tutorial on the PyTorch website, linked in the doc given in the CS779 course.
# + id="ilaOQl_zrCHn"
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
# + id="53GJW8ZXrhBg"
class TransformerModel(nn.Module):
    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
        super(TransformerModel, self).__init__()
        from torch.nn import TransformerEncoder, TransformerEncoderLayer
        self.model_type = 'Transformer'
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.encoder = nn.Embedding(ntoken, ninp)
        self.ninp = ninp
        self.decoder = nn.Linear(ninp, ntoken)
        self.init_weights()

    def generate_square_subsequent_mask(self, sz):
        mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
        return mask

    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)

    def forward(self, src, src_mask):
        src = self.encoder(src) * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_mask)
        output = self.decoder(output)
        return output
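# The mask built by `generate_square_subsequent_mask` puts `-inf` above the diagonal so that position i cannot attend to positions after i. The same idea in a minimal NumPy sketch (an illustrative helper, not part of the tutorial code):

```python
import numpy as np

def causal_mask(sz):
    # 0.0 on and below the diagonal, -inf above it (future positions)
    mask = np.zeros((sz, sz), dtype=np.float32)
    mask[np.triu_indices(sz, k=1)] = -np.inf
    return mask
```

# Adding this mask to the attention scores before the softmax zeroes out the weight given to future tokens.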
# + id="BIC1vthNvMc5"
# test1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Coords 1: Getting Started with astropy.coordinates
#
# ## Authors
# <NAME>, <NAME>, <NAME>, <NAME>
#
# ## Learning Goals
# * Create `astropy.coordinates.SkyCoord` objects using names and coordinates
# * Use SkyCoord objects to become familiar with object oriented programming (OOP)
# * Interact with a `SkyCoord` object and access its attributes
# * Use a `SkyCoord` object to query a database
#
# ## Keywords
# coordinates, OOP, file input/output
#
#
# ## Summary
# In this tutorial, we're going to investigate the area of the sky around the picturesque group of galaxies named "Hickson Compact Group 7," download an image, and do something with its coordinates.
# ## Imports
# +
# Python standard-library
from urllib.parse import urlencode
from urllib.request import urlretrieve
# Third-party dependencies
from astropy import units as u
from astropy.coordinates import SkyCoord
from IPython.display import Image
# -
# ## Describing on-sky locations with `coordinates`
# The `SkyCoord` class in the `astropy.coordinates` package is used to represent celestial coordinates. First, we'll make a SkyCoord object based on our object's name, "Hickson Compact Group 7" or "HCG 7" for short. Most astronomical object names can be found by [SESAME](http://cdsweb.u-strasbg.fr/cgi-bin/Sesame), a service which queries Simbad, NED, and VizieR and returns the object's type and its J2000 position. This service can be used via the `SkyCoord.from_name()` [class method](https://julien.danjou.info/blog/2013/guide-python-static-class-abstract-methods):
# initialize a SkyCoord object named hcg7_center at the location of HCG 7
hcg7_center = SkyCoord.from_name('HCG 7')
# <div class="alert alert-info">
# Note that the above command requires an internet connection. If you don't have one, execute the following line instead:
# </div>
# +
# uncomment and run this line if you don't have an internet connection
# hcg7_center = SkyCoord(9.81625*u.deg, 0.88806*u.deg, frame='icrs')
# -
type(hcg7_center)
# Show the available methods and attributes of the SkyCoord object we've created called `hcg7_center`
dir(hcg7_center)
# Show the RA and Dec.
print(hcg7_center.ra, hcg7_center.dec)
print(hcg7_center.ra.hour, hcg7_center.dec)
# We see that, according to SESAME, HCG 7 is located at ra = 9.816 deg and dec = 0.888 deg.
# This object we've just created has various useful ways of accessing the information contained within it. In particular, the ``ra`` and ``dec`` attributes are specialized [Quantity](http://docs.astropy.org/en/stable/units/index.html) objects (actually, a subclass called [Angle](http://docs.astropy.org/en/stable/api/astropy.coordinates.Angle.html), which in turn is subclassed by [Latitude](http://docs.astropy.org/en/stable/api/astropy.coordinates.Latitude.html) and [Longitude](http://docs.astropy.org/en/stable/api/astropy.coordinates.Longitude.html)). These objects store angles and provide pretty representations of those angles, as well as some useful attributes to quickly convert to common angle units:
type(hcg7_center.ra), type(hcg7_center.dec)
hcg7_center.ra, hcg7_center.dec
hcg7_center
hcg7_center.ra.hour
# SkyCoord will also accept string-formatted coordinates either as separate strings for RA/Dec or a single string. You'll need to give units, though, if they aren't part of the string itself.
SkyCoord('0h39m15.9s', '0d53m17.016s', frame='icrs')
# ## Download an image
# Now that we have a `SkyCoord` object, we can try to use it to access data from the [Sloan Digital Sky Survey](http://www.sdss.org/) (SDSS). Let's start by trying to get a picture using the SDSS image cutout service to make sure HCG 7 is in the SDSS footprint and has good image quality.
#
# This requires an internet connection, but if it fails, don't worry: the file is included in the repository, so you can just let it use the local file ``'HCG7_SDSS_cutout.jpg'``, defined at the top of the cell.
# +
# tell the SDSS service how big of a cutout we want
im_size = 12*u.arcmin # get a 12 arcmin square
im_pixels = 1024
cutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'
query_string = urlencode(dict(ra=hcg7_center.ra.deg,
dec=hcg7_center.dec.deg,
width=im_pixels, height=im_pixels,
scale=im_size.to(u.arcsec).value/im_pixels))
url = cutoutbaseurl + '?' + query_string
# this downloads the image to your disk
urlretrieve(url, 'HCG7_SDSS_cutout.jpg')
# -
Image('HCG7_SDSS_cutout.jpg')
# Very pretty!
#
# The saga of HCG 7 continues in [Coords 2: Transforming between coordinate systems](http://learn.astropy.org/rst-tutorials/Coordinates-Transform.html).
# ## Exercises
# ### Exercise 1
# Create a `SkyCoord` of some other astronomical object you find interesting. Using only a single method/function call, get a string with the RA/Dec in the form 'HH:MM:SS.S DD:MM:SS.S'. Check your answer against an academic paper or a website like [SIMBAD](http://simbad.u-strasbg.fr/simbad/) that will show you sexagesimal coordinates for the object.
#
# (Hint: `SkyCoord.to_string()` might be worth reading up on.)
# ### Exercise 2
# Now get an image of that object from the Digitized Sky Survey and download it and/or show it in the notebook. Bonus points if you figure out the (one-line) trick to get it to display in the notebook *without* ever downloading the file yourself.
#
# (Hint: STScI has an easy-to-access [copy of the DSS](https://archive.stsci.edu/dss/). The pattern to follow for the web URL is ``http://archive.stsci.edu/cgi-bin/dss_search?f=GIF&ra=RA&dec=DEC``.)
# notebooks/Coordinates-Intro/Coordinates-Intro.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv-DL
# language: python
# name: venv-dl
# ---
# ______
#
# ## 3.2 Tensorflow: Image Classification.
#
# In image classification, we have a set of couples (input, output), and we want a predictor that maps inputs (images) to outputs (labels) correctly. For example, if you have a bunch of labeled images of dogs and cats, you can train a model to distinguish dogs from cats. Below are some important things to settle before starting training.
#
#
# 1. The **training set**: a set of couples (input, output). We search for functions that are capable of mapping inputs to outputs for the whole training set.
# 2. The **test set**: To see whether our predictors generalize well, we use another set to test our predictors.
# 3. The **predictor class**: The set of possible predictors. We search among the predictor class for good ones.
# 4. The **loss functions**: The loss should be low for good predictors and should be high for bad ones.
# 5. The **model**: When we combine a loss and a predictor class, we get a model.
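# As a toy illustration of how these pieces fit together (all names and numbers made up), a "predictor class" can be as simple as a family of threshold functions, and the "loss" a count of mistakes:

```python
import numpy as np

# training set: couples (input, output)
X_train = np.array([0.1, 0.4, 0.6, 0.9])
Y_train = np.array([0, 0, 1, 1])

# predictor class: all threshold functions, parameterized by t
predict = lambda x, t: (x > t).astype(int)

# loss: high for bad predictors, low for good ones (here: number of mistakes)
loss = lambda t: int(np.sum(predict(X_train, t) != Y_train))

# "training": search the predictor class for a low-loss member
best_t = min([0.2, 0.5, 0.8], key=loss)
```

# Real training replaces the brute-force search with gradient descent, but the ingredients are the same.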
# ______
#
# ### 3.2.1 Get The Data.
#
# 1. Download the dataset.
# 2. Split into train and test sets.
# 3. Define loading procedures.
#
# + tags=[]
import wget, zipfile, os, random
if not os.path.isfile("data.zip"):
    print("downloading...")
    wget.download("https://storage.googleapis.com/kaggle-competitions-data/kaggle-v2/3362/31148/bundle/archive.zip?GoogleAccessId=<EMAIL>&Expires=1646390400&Signature=mEr6DFSAZ3dCo2el6QcXOhwDIubJ1d%2B%2Fk1tLEHL9hr0LM682fco0IOWdNOfMMQG9no27M%2F%2BD2pU4ZbXh%2BLo1c0Gve%2BsAJgdYHblfmfg41hvyVOMXvrqZ%2BCGDrD3KhRSU%2Bc9xCHAQIfBppYJg0zKitt1zT69ccYkOK17cYs2now6PgJ1o4STlh6B%2BUTAP%2BwmQ4nYzr5hZ3s%2BAE8d%2FxiEVEGP53pkHMYJv60Pl8JSj80p7DCY8EJ7RfLZcbW2G6%2B%2F7TsblvZeDJQmFQEsQlUOLuQUvePPhh%2BysYDBGwyC27gf5fGixl31z92AppMkUH2ooWb9GDrnWt7ey9zeaNQoQxA%3D%3D&response-content-disposition=attachment%3B+filename%3Ddogs-vs-cats.zip", "data.zip")
if not (os.path.isdir("data") and os.path.isdir("data/train") and os.path.isdir("data/test1")):
    print("extracting...")
    with zipfile.ZipFile("data.zip" , 'r') as file: file.extractall("./data/")
    with zipfile.ZipFile("data/train.zip", 'r') as file: file.extractall("./data/")
    with zipfile.ZipFile("data/test1.zip", 'r') as file: file.extractall("./data/")
paths = list(map(lambda name: os.path.join("data/train", name), os.listdir("data/train/")))
random.shuffle(paths)
test_paths = paths[:len(paths)//3]
train_paths = paths[len(paths)//3:]
# + tags=[]
import random
from PIL import Image
Image.open(random.choice(train_paths)).resize((256,256), Image.NEAREST)
# +
import random, PIL
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
def to_standard_image(path, size=(64, 64)):
    return tf.keras.utils.img_to_array(PIL.Image.open(path).resize(size, PIL.Image.NEAREST)).mean(-1)/255

def to_standard_label(path):
    return 0 if "cat" in path else 1

def load_normalized_batch(paths, batch_size=10, image_size=(64, 64)):
    batch = random.choices(paths, k=batch_size)
    X = np.stack([to_standard_image(p, size=image_size) for p in batch])
    Y = np.array([to_standard_label(p) for p in batch])
    return X, Y

X, Y = load_normalized_batch(train_paths)
plt.imshow(X[0], cmap='gray')
print(f"label: {Y[0]}")
# + [markdown] tags=[]
# ______
# ### 3.2.2 Define A Model.
#
# In tensorflow, you can define a model in many ways. One of the most flexible is to extend [tf.keras.Model]: the `__init__` method defines which components you are going to use, while the `call` method defines how those pre-defined components interact.
#
# In `tf.keras.layers` you can find a vast number of layers, some very common, some very specific. You can build a new layer by combining predefined ones, or define entirely new layers using tensorflow Variables.
# ______
# #### 3.2.2.1 Dense Layer
# For example, one of the most common layers is `tf.keras.layers.Dense`. It computes:
#
# $$g(xW^T + b)$$
#
# Where $x$ is the layer input, $x \in \mathbb{R}^{n}$; $W$ is a matrix of learnable variables, $W \in \mathbb{R}^{m\times n}$; $b$ is a vector of learnable variables, $b \in \mathbb{R}^m$; and $g$ is an activation function. Without $g$, the dense layer would be a simple linear layer. Both $g$ and $m$ are hyper-parameters of choice. Why hyper-parameters? If parameters define a class of functions, hyper-parameters define a class of classes of functions. Do not think about it too much: it is just a convenient way to say that it is a value you choose freely and that is not trained. Often, you can find a dense layer represented as:
#
#
# <center><img src="https://slugnet.jarrodkahn.com/_images/tikz-be0593f4dad31d763e7f8371668007610e7907c1.png" alt="drawing" width="400"/></center>
#
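# The formula above maps directly to code. A NumPy sketch with made-up sizes ($n = 3$, $m = 2$) and ReLU as $g$:

```python
import numpy as np

n, m = 3, 2                       # input size and output size (a hyper-parameter)
x = np.array([1.0, -2.0, 0.5])    # layer input, shape (n,)
W = np.ones((m, n))               # learnable weight matrix, shape (m, n)
b = np.array([0.0, -1.0])         # learnable bias vector, shape (m,)
g = lambda z: np.maximum(z, 0.0)  # activation function (ReLU)

out = g(x @ W.T + b)              # g(x W^T + b), shape (m,)
```

# In the real layer, $W$ and $b$ start random and are adjusted by training; here they are fixed just to show the computation.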
# ________
#
# #### 3.2.2.2 Convolution Layer
#
# [Convolution] is another deep learning layer. It was originally inspired by the structure of the human visual cortex. In practice, convolution layers work extremely well with visual tasks. The Convolution layer is already defined in tensorflow and you can use it easily. Below there is a simple gif showing a convolution operation.
#
# * the **kernel** is the sliding window that you see below. It is composed of trainable parameters. While it slides, it computes the element-wise multiplication between the kernel and the window.
# * the kernel is said to become **active** when it produces a high value. In practice, this means that it becomes active when it is multiplied against a window similar to the kernel.
#
# <center><img src="https://theano-pymc.readthedocs.io/en/latest/_images/numerical_no_padding_no_strides.gif" alt="drawing" width="400"/></center>
#
# In modern architectures, convolution layers are stacked on top of each other. Layers closer to the input become active for simple shapes. The layers on top activate with complex patterns formed by the features below. The figure below shows some activation patterns for low/mid/high-level layers.
#
# <center><img src="https://d33wubrfki0l68.cloudfront.net/05c47b9b612f8b9f2f57cae4505a4773415f3f22/b9ed6/assets/convnets/cnn20.png" alt="drawing" width="400"/></center>
#
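# The sliding-window operation in the gif can be sketched directly in NumPy (a deliberately naive version: no padding, stride 1, toy kernel and image chosen for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise multiply the kernel with the current window, then sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])           # tiny hand-made "edge" kernel
img  = np.array([[0.0, 0.0, 1.0, 1.0]])  # the kernel responds at the step
```

# In a real convolution layer the kernel values are trainable; libraries also implement this far more efficiently than the explicit loops above.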
# _________
#
# #### 3.2.2.3 Max Pooling Layer
# The [MaxPooling2D] layer is another common layer found in neural networks for vision. In practice, it works similarly to the convolution layer, except that instead of a kernel of trainable parameters, it simply takes the maximum of each window.
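# A NumPy sketch of 2x2 max pooling with stride 2; note there are no trainable parameters, each window just contributes its maximum:

```python
import numpy as np

def max_pool2d(x, k=2):
    h, w = x.shape[0] // k, x.shape[1] // k
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # each output pixel is the maximum of a k-by-k window
            out[i, j] = x[i*k:(i+1)*k, j*k:(j+1)*k].max()
    return out

x = np.arange(16.0).reshape(4, 4)
```

# Pooling halves each spatial dimension here, which is why it is often interleaved with convolutions to shrink feature maps.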
# ______
#
#
# [MaxPooling2D]:https://keras.io/api/layers/pooling_layers/max_pooling2d/
# [Dense]:https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense
# [Convolution]:https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D
# +
batch_size = 100
image_size = (64,64)
class MyModel(tf.keras.Model):
    def __init__(self, batch_size, image_size):
        super(MyModel, self).__init__()
        self.conv1 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')
        self.maxp1 = tf.keras.layers.MaxPooling2D((2, 2))
        self.conv2 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')
        self.maxp2 = tf.keras.layers.MaxPooling2D((2, 2))
        self.conv3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')
        self.flat1 = tf.keras.layers.Flatten()
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(1 , activation="relu")

    def call(self, x):
        x = tf.expand_dims(x, -1)
        x = self.maxp1(self.conv1(x))
        x = self.maxp2(self.conv2(x))
        x = self.flat1(self.conv3(x))
        x = self.dense2(self.dense1(x))
        return x[:, 0]

lossfn = lambda preds, truths: tf.reduce_sum((preds - truths)**2)
# -
# ______
# ### 3.2.3. Training Step
#
# Now we can define a training step. We need to do the following things:
# 1. Feed data to the model.
# 2. Compute gradients.
# 3. Update model parameters (aka tensorflow variables).
# 4. Compute additional metrics, such as accuracy.
def train_step(X, Y, model):
    # register operations
    with tf.GradientTape() as tape:
        X = tf.cast(tf.convert_to_tensor(X), tf.float32)
        Y = tf.cast(tf.convert_to_tensor(Y), tf.float32)
        P = model(X, training=True)
        loss = lossfn(Y, P)
    # compute gradients
    gradients = tape.gradient(loss, model.trainable_variables)
    # update parameters
    for v, g in zip(model.trainable_variables, gradients): v.assign(v - 0.0001*g)
    # compute accuracy
    acc = tf.reduce_sum(tf.cast(Y == tf.round(P), tf.int32))/X.shape[0]
    return loss.numpy(), acc.numpy()
# + [markdown] tags=[]
# _____
# ### 3.2.4. Training the model
# To train, we just need to repeat the training step until we are satisfied.
# -
model = MyModel(batch_size, image_size)
for i in range(2000):
    X, Y = load_normalized_batch(train_paths, batch_size=batch_size, image_size=image_size)
    l, a = train_step(X, Y, model=model)
    print(f"\r batch: {i}, loss: {l}, acc: {a}", end=" ")
# ______
# ### 3.2.4. Testing the model
#
# To test the model, we collect the metrics on the test set, without performing any training.
# +
import random
tp, fp, fn, tn = 0, 0, 0, 0
random.shuffle(test_paths)
paths = test_paths[:1000]
for i, p in enumerate(paths):
    X = to_standard_image(p, size=image_size)
    Y = to_standard_label(p)
    P = tf.round(model(X.reshape(1, image_size[0], image_size[1]), training=False)).numpy()[0]
    if Y == 1 and P == 1: tp += 1
    if Y == 0 and P == 1: fp += 1
    if Y == 1 and P == 0: fn += 1
    if Y == 0 and P == 0: tn += 1
    acc = (tp + tn) / (tp + tn + fp + fn)
    print(f"\r {i}/{len(paths)}, acc: {acc}", end="")
print(f"\ntest accuracy: {acc}")
# -
# _______
# ### 3.2.5. Improving the Model.
#
# #### 3.2.5.1 Loss.
#
# All losses have one thing in common: they are large when the model makes errors and small when everything is right. By minimizing a loss, you are forcing your model to behave the way you want. However, different losses may affect the learning process differently, and using the right loss for your task can lead to better results and faster convergence. Unfortunately, knowing the right loss to use is not an easy task. Luckily, tensorflow already defines several losses that you can try out easily.
#
# One common loss for classification tasks is the [CategoricalCrossentropy].
#
# [CategoricalCrossentropy]:https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy
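# To see why cross-entropy suits classification, here is a hand-rolled NumPy version of the same idea for one-hot targets and predicted probabilities (a sketch, ignoring the `from_logits` option of the tensorflow loss):

```python
import numpy as np

def categorical_crossentropy(y_true, y_prob, eps=1e-12):
    # y_true: one-hot targets; y_prob: predicted probabilities (rows sum to 1)
    return -np.mean(np.sum(y_true * np.log(y_prob + eps), axis=-1))

y_true    = np.array([[1.0, 0.0], [0.0, 1.0]])
confident = np.array([[0.9, 0.1], [0.1, 0.9]])
unsure    = np.array([[0.5, 0.5], [0.5, 0.5]])
```

# Confident correct predictions yield a much lower loss than hedged ones, which is exactly the pressure we want during training.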
# ______
#
# #### 3.2.5.2 Optimizer.
#
# Up until now, we have updated each model parameter $w$ by simply following the partial derivative of the loss $L$ with respect to $w$:
# $$ w \leftarrow w - \frac{\partial L}{\partial w} $$
#
# This little procedure is called an optimizer. It turns out that there are many ways to define one. For example, instead of blindly following the current gradient, you can average it with previous gradients; you are still doing gradient descent, just with a different optimization procedure. One popular optimizer is [Adam]. Adam, among many others, is already defined in tensorflow and can be adopted easily.
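# The idea of averaging the current gradient with previous ones is called momentum. A minimal sketch on a one-dimensional quadratic (the learning rate and averaging factor are arbitrary illustrative values):

```python
# minimize L(w) = w**2, whose gradient is dL/dw = 2*w
w, velocity = 5.0, 0.0
lr, beta = 0.1, 0.9          # hypothetical step size and averaging factor
for _ in range(200):
    grad = 2 * w
    velocity = beta * velocity + (1 - beta) * grad  # running average of gradients
    w = w - lr * velocity
# w ends up close to the minimum at 0
```

# Adam goes one step further and also keeps a running average of squared gradients to scale each parameter's step size.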
#
# ______
#
# #### 3.2.5.3 Skip-Connections.
#
# A bigger model can, in principle, approximate more complex functions. In practice, however, this is not always the case: in many situations, adding more layers actually hurts the results. Skip-connections are one mechanism that can help avoid this issue. Given a layer $layer$ with input $x$, you can define a skip-connection for $x$ as follows:
#
# $$x = x + layer(x)$$
#
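# In code, a skip-connection is just an element-wise sum, so the layer output must have the same shape as its input. A NumPy sketch with a stand-in layer:

```python
import numpy as np

def layer(x):
    # stand-in for any shape-preserving layer
    return np.maximum(0.0, -x)

x = np.array([1.0, -2.0, 3.0])
out = x + layer(x)  # even if layer(x) is ~0, the input still flows through
```

# Because the input passes through unchanged, gradients also flow straight back through the sum, which is what makes very deep stacks trainable.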
# ______
#
# #### 3.2.5.4 Pretraining
#
# When we say nothing, tensorflow initializes the parameters randomly, which is perfectly fine for most scenarios. However, it is well known that some initializations are better than others. For example, if we first train our model on a task for which plenty of data is available, we can hope that it learns features general enough to be useful for our task of interest.
#
# * When we train a model on a task with a huge dataset available, with the intention of reusing its parameters later, we are doing **pretraining**.
# * When we train an already trained model on another task (or a different dataset), we are doing **fine-tuning**.
#
# Luckily, tensorflow has some pretrained models already defined so that we can just fine-tune them. Among these, one famous pretrained model is [ResNet]. ResNet relies heavily on both convolutions and skip-connections.
# ______
#
# Finally, we can put everything together and train a much better model.
#
# [Adam]:https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam
# [ResNet]:https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/ResNet50
# +
import tensorflow as tf
import random
class ResNet(tf.keras.Model):
    def __init__(self):
        super(ResNet, self).__init__()
        self.resnet = tf.keras.applications.resnet50.ResNet50(include_top=False, weights='imagenet', input_shape=(224,224,3))
        self.avg = tf.keras.layers.AveragePooling2D(pool_size=(7, 7))
        self.dense = tf.keras.layers.Dense(2, activation="softmax")
        self.squeeze = tf.keras.layers.Reshape((2,))

    def call(self, x):
        return self.squeeze(self.dense(self.avg(self.resnet(x))))

resnet = ResNet()
optim = tf.keras.optimizers.Adam(learning_rate=0.0001)
# the model already applies a softmax, so the loss receives probabilities, not logits
lossfn = tf.keras.losses.CategoricalCrossentropy(from_logits=False, axis=-1)

### TRAIN LOOP ###
for i in range(20):
    # load a batch worth of images
    batch = random.choices(train_paths, k=10)
    batchX = [tf.keras.preprocessing.image.load_img(x, target_size=(224,224)) for x in batch]
    batchY = [[1,0] if "cat" in p else [0, 1] for p in batch]
    batchX = [tf.keras.preprocessing.image.img_to_array(x) for x in batchX]
    batchX = [tf.keras.applications.resnet50.preprocess_input(x) for x in batchX]
    batchX = tf.stack(batchX)
    batchY = tf.convert_to_tensor(batchY)
    # register the computational graph
    with tf.GradientTape() as tape:
        batchP = resnet(batchX, training=True)
        loss = lossfn(batchY, batchP)
    # compute gradient and update the parameters
    gradients = tape.gradient(loss, resnet.trainable_variables)
    optim.apply_gradients(zip(gradients, resnet.trainable_variables))
    # compute accuracy score
    acc = tf.reduce_sum(tf.cast(tf.argmax(batchY,-1) == tf.argmax(batchP,-1), tf.int32))/batchX.shape[0]
    print(f" train {i}/{20}, loss:{loss.numpy()}, acc:{acc.numpy()}")

### TEST LOOP ###
tp, fp, fn, tn = 0, 0, 0, 0
for i, p in enumerate(test_paths[:100]):
    X = tf.keras.preprocessing.image.load_img(p, target_size=(224,224))
    X = tf.keras.preprocessing.image.img_to_array(X)
    X = tf.keras.applications.resnet50.preprocess_input(X)
    X = tf.expand_dims(X, 0)
    Y = [1,0] if "cat" in p else [0,1]
    Y = tf.convert_to_tensor(Y)
    Y = tf.expand_dims(Y, 0)
    P = resnet(X, training=False)
    Y, P = tf.argmax(Y,-1).numpy(), tf.argmax(P,-1).numpy()
    if Y == 1 and P == 1: tp += 1
    if Y == 0 and P == 1: fp += 1
    if Y == 1 and P == 0: fn += 1
    if Y == 0 and P == 0: tn += 1
    print(f"\rtest {i}/{100}, acc:{(tp + tn) / (tp + tn + fp + fn)}", end="")
# +
paths = list(map(lambda name:os.path.join("data/test1",name), os.listdir("data/test1/")))
image = tf.keras.preprocessing.image.load_img(random.choice(paths), target_size=(224,224))
X = tf.keras.preprocessing.image.img_to_array(image)
X = tf.keras.applications.resnet50.preprocess_input(X)
X = tf.expand_dims(X,0)
P = resnet(X, training=False)
label2name = {0:"cat", 1:"dog"}
print(f"logit: {P.numpy().tolist()}")
print(f"label: {label2name[tf.argmax(P,-1).numpy()[0]]}")
image
# DSE/DeepLearning/Vision.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Author: <NAME>
# github.com/kaylani2
# kaylani AT gta DOT ufrj DOT br
### K: Model: LSTM
import sys
import time
import pandas as pd
import os
import math
sys.path.insert(1, '../')
import numpy as np
from numpy import mean, std
from unit import remove_columns_with_one_value, remove_nan_columns, load_dataset
from unit import display_general_information, display_feature_distribution
from collections import Counter
#from imblearn.over_sampling import RandomOverSampler, RandomUnderSampler
import sklearn
from sklearn import set_config
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, OrdinalEncoder
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.metrics import f1_score, classification_report, accuracy_score
from sklearn.metrics import cohen_kappa_score, mean_squared_error
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split, PredefinedSplit, RandomizedSearchCV
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif, chi2, mutual_info_classif
from sklearn.utils import class_weight
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
import keras.utils
from keras import metrics
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Conv2D, MaxPooling2D, Flatten, LSTM
from keras.optimizers import RMSprop, Adam
from keras.constraints import maxnorm
###############################################################################
## Define constants
###############################################################################
pd.set_option ('display.max_rows', None)
pd.set_option ('display.max_columns', 5)
BOT_IOT_DIRECTORY = '../../../../../datasets/bot-iot/'
BOT_IOT_FEATURE_NAMES = 'UNSW_2018_IoT_Botnet_Dataset_Feature_Names.csv'
BOT_IOT_FILE_5_PERCENT_SCHEMA = 'UNSW_2018_IoT_Botnet_Full5pc_{}.csv' # 1 - 4
FIVE_PERCENT_FILES = 4
BOT_IOT_FILE_FULL_SCHEMA = 'UNSW_2018_IoT_Botnet_Dataset_{}.csv' # 1 - 74
FULL_FILES = 74
FILE_NAME = BOT_IOT_DIRECTORY + BOT_IOT_FILE_5_PERCENT_SCHEMA
FEATURES = BOT_IOT_DIRECTORY + BOT_IOT_FEATURE_NAMES
NAN_VALUES = ['?', '.']
TARGET = 'attack'
INDEX_COLUMN = 'pkSeqID'
LABELS = ['attack', 'category', 'subcategory']
STATE = 0
try:
    STATE = int (sys.argv [1])
except:
    pass
#for STATE in [1, 2, 3, 4, 5]:
np.random.seed (STATE)
print ('STATE:', STATE)
###############################################################################
## Load dataset
###############################################################################
df = load_dataset (FILE_NAME, FIVE_PERCENT_FILES, INDEX_COLUMN, NAN_VALUES)
###############################################################################
## Clean dataset
###############################################################################
###############################################################################
### Remove columns with only one value
df, log = remove_columns_with_one_value (df, verbose = False)
print (log)
###############################################################################
### Remove redundant columns, useless columns and unused targets
### K: _number columns are numerical representations of other existing columns.
### K: category and subcategory are other labels.
### K: saddr and daddr may specialize the model to a single network
redundant_columns = ['state_number', 'proto_number', 'flgs_number']
other_targets = ['category', 'subcategory']
misc_columns = ['saddr', 'daddr']
print ('Removing redundant columns:', redundant_columns)
print ('Removing useless targets:', other_targets)
print ('Removing misc columns:', misc_columns)
columns_to_remove = redundant_columns + other_targets + misc_columns
df.drop (axis = 'columns', columns = columns_to_remove, inplace = True)
###############################################################################
### Remove NaN columns (with a lot of NaN values)
df, log = remove_nan_columns (df, 1/2, verbose = False)
print (log)
###############################################################################
### Encode categorical features
print ('Encoding categorical features (ordinal encoding).')
my_encoder = OrdinalEncoder ()
df ['flgs'] = my_encoder.fit_transform (df ['flgs'].values.reshape (-1, 1))
df ['proto'] = my_encoder.fit_transform (df ['proto'].values.reshape (-1, 1))
df ['sport'] = my_encoder.fit_transform (df ['sport'].astype (str).values.reshape (-1, 1))
df ['dport'] = my_encoder.fit_transform (df ['dport'].astype (str).values.reshape (-1, 1))
df ['state'] = my_encoder.fit_transform (df ['state'].values.reshape (-1, 1))
print ('Objects:', list (df.select_dtypes ( ['object']).columns))
###############################################################################
## Quick sanity check
###############################################################################
display_general_information (df)
###############################################################################
## Split dataset into train and test sets
###############################################################################
### K: Dataset is too big? Drop.
drop_indices = np.random.choice (df.index, int (df.shape [0] * 0.9),
replace = False)
df = df.drop (drop_indices)
TEST_SIZE = 3/10
VALIDATION_SIZE = 1/4
print ('Splitting dataset (test/train):', TEST_SIZE)
X_train_df, X_test_df, y_train_df, y_test_df = train_test_split (
df.loc [:, df.columns != TARGET],
df [TARGET],
test_size = TEST_SIZE,
random_state = STATE,)
print ('Splitting dataset (validation/train):', VALIDATION_SIZE)
X_train_df, X_val_df, y_train_df, y_val_df = train_test_split (
X_train_df,
y_train_df,
test_size = VALIDATION_SIZE,
random_state = STATE,)
print ('X_train_df shape:', X_train_df.shape)
print ('y_train_df shape:', y_train_df.shape)
print ('X_val_df shape:', X_val_df.shape)
print ('y_val_df shape:', y_val_df.shape)
print ('X_test_df shape:', X_test_df.shape)
print ('y_test_df shape:', y_test_df.shape)
###############################################################################
## Convert dataframe to a numpy array
###############################################################################
print ('\nConverting dataframe to numpy array.')
X_train = X_train_df.values
y_train = y_train_df.values
X_val = X_val_df.values
y_val = y_val_df.values
X_test = X_test_df.values
y_test = y_test_df.values
print ('X_train shape:', X_train.shape)
print ('y_train shape:', y_train.shape)
print ('X_val shape:', X_val.shape)
print ('y_val shape:', y_val.shape)
print ('X_test shape:', X_test.shape)
print ('y_test shape:', y_test.shape)
###############################################################################
## Apply normalization
###############################################################################
### K: NOTE: Only use derived information from the train set to avoid leakage.
print ('\nApplying normalization.')
startTime = time.time ()
scaler = StandardScaler ()
#scaler = MinMaxScaler (feature_range = (0, 1))
scaler.fit (X_train)
X_train = scaler.transform (X_train)
X_val = scaler.transform (X_val)
X_test = scaler.transform (X_test)
print (str (time.time () - startTime), 'to normalize data.')
###############################################################################
## Perform feature selection
###############################################################################
NUMBER_OF_FEATURES = 9 #'all'
print ('\nSelecting top', NUMBER_OF_FEATURES, 'features.')
startTime = time.time ()
#fs = SelectKBest (score_func = mutual_info_classif, k = NUMBER_OF_FEATURES)
### K: ~30 minutes to FAIL fit mutual_info_classif to 5% bot-iot
#fs = SelectKBest (score_func = chi2, k = NUMBER_OF_FEATURES) # X must be >= 0
### K: ~4 seconds to fit chi2 to 5% bot-iot (MinMaxScaler (0, 1))
fs = SelectKBest (score_func = f_classif, k = NUMBER_OF_FEATURES)
### K: ~4 seconds to fit f_classif to 5% bot-iot
fs.fit (X_train, y_train)
X_train = fs.transform (X_train)
X_val = fs.transform (X_val)
X_test = fs.transform (X_test)
print (str (time.time () - startTime), 'to select features.')
print ('X_train shape:', X_train.shape)
print ('y_train shape:', y_train.shape)
print ('X_val shape:', X_val.shape)
print ('y_val shape:', y_val.shape)
print ('X_test shape:', X_test.shape)
print ('y_test shape:', y_test.shape)
bestFeatures = []
for feature in range (len (fs.scores_)):
bestFeatures.append ({'f': feature, 's': fs.scores_ [feature]})
bestFeatures = sorted (bestFeatures, key = lambda k: k ['s']) # ascending: the highest-scoring features print last
for feature in bestFeatures:
print ('Feature %d: %f' % (feature ['f'], feature ['s']))
#pyplot.bar ( [i for i in range (len (fs.scores_))], fs.scores_)
#pyplot.show ()
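What `SelectKBest` keeps can be sketched with numpy alone: the k columns whose scores are highest. The `scores` values below are hypothetical stand-ins for `fs.scores_`:

```python
import numpy as np

# Sketch of SelectKBest's selection rule: keep the k highest-scoring columns.
scores = np.array([0.2, 3.1, 0.9, 2.4])  # hypothetical fs.scores_ values
k = 2
top_k_idx = np.argsort(scores)[::-1][:k]  # indices of the k best features
print(sorted(top_k_idx.tolist()))  # [1, 3]
```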
###############################################################################
## Rearrange samples for RNN
###############################################################################
print ('\nRearranging dataset for the RNN.')
print ('X_train shape:', X_train.shape)
print ('y_train shape:', y_train.shape)
print ('X_val shape:', X_val.shape)
print ('y_val shape:', y_val.shape)
print ('X_test shape:', X_test.shape)
print ('y_test shape:', y_test.shape)
### K: JUMPING WINDOWS APPROACH: WRONG!!!
#if ( (X_train.shape [0] % STEPS) != 0):
# X_train = X_train [:- (X_train.shape [0] % STEPS), :]
#
#X_train = X_train.reshape ( (X_train.shape [0] // STEPS, STEPS,
# X_train.shape [1]),
# order = 'C')
#startTime = time.time ()
#
## X_train
#if ( (X_train.shape [0] % STEPS) != 0):
# X_train = X_train [:- (X_train.shape [0] % STEPS), :]
#X_train = X_train.reshape ( (X_train.shape [0] // STEPS, STEPS, X_train.shape [1]),
# order = 'C')
#print ('Finished X_train.')
#
## X_val
#if ( (X_val.shape [0] % STEPS) != 0):
# X_val = X_val [:- (X_val.shape [0] % STEPS), :]
#X_val = X_val.reshape ( (X_val.shape [0] // STEPS, STEPS, X_val.shape [1]),
# order = 'C')
#print ('Finished X_val.')
#
## X_test
#if ( (X_test.shape [0] % STEPS) != 0):
# X_test = X_test [:- (X_test.shape [0] % STEPS), :]
#X_test = X_test.reshape ( (X_test.shape [0] // STEPS, STEPS, X_test.shape [1]),
# order = 'C')
#print ('Finished X_test.')
#
## Y_train
#if ( (y_train.shape [0] % STEPS) != 0):
# y_train = y_train [:- (y_train.shape [0] % STEPS)]
#y_train = y_train.reshape ( (y_train.shape [0] // STEPS, STEPS), order = 'C')
#
## Y_val
#if ( (y_val.shape [0] % STEPS) != 0):
# y_val = y_val [:- (y_val.shape [0] % STEPS)]
#y_val = y_val.reshape ( (y_val.shape [0] // STEPS, STEPS), order = 'C')
#
## Y_test
#if ( (y_test.shape [0] % STEPS) != 0):
# y_test = y_test [:- (y_test.shape [0] % STEPS)]
#y_test = y_test.reshape ( (y_test.shape [0] // STEPS, STEPS), order = 'C')
#
#print (str (time.time () - startTime), 's reshape data.')
### SLIDING WINDOW APPROACH: TAKES TOO LONG!
#from numpy import array
#LENGTH = 5
#
#sets_list = [X_train, X_test]
#for index, data in enumerate (sets_list):
# n = data.shape [0]
# samples = []
#
# # step over the X_train.shape [0] (samples) in jumps of 200 (time_steps)
# for i in range (0,n,LENGTH):
# print ('index, i1:', index, i)
# # grab from i to i + 200
# sample = data [i:i+LENGTH]
# samples.append (sample)
#
# # convert list of arrays into 2d array
# new_data = list ()
# new_data = np.array (new_data)
# for i in range (len (samples)):
# print ('index, i2:', index, i)
# new_data = np.append (new_data, samples [i])
#
# sets_list [index] = new_data.reshape (len (samples), LENGTH, data.shape [1])
#
#
#X_train = sets_list [0]
#X_test = sets_list [1]
### SLIDING WINDOW: JUST RIGHT!
STEPS = 3
FEATURES = X_train.shape [1]
def window_stack (a, stride = 1, numberOfSteps = 3):
return np.hstack ( [ a [i:1+i-numberOfSteps or None:stride] for i in range (0,numberOfSteps) ])
X_train = window_stack (X_train, stride = 1, numberOfSteps = STEPS)
X_train = X_train.reshape (X_train.shape [0], STEPS, FEATURES)
X_val = window_stack (X_val, stride = 1, numberOfSteps = STEPS)
X_val = X_val.reshape (X_val.shape [0], STEPS, FEATURES)
X_test = window_stack (X_test, stride = 1, numberOfSteps = STEPS)
X_test = X_test.reshape (X_test.shape [0], STEPS, FEATURES)
y_train = y_train [ (STEPS - 1):]
y_val = y_val [ (STEPS - 1):]
y_test = y_test [ (STEPS - 1):]
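The sliding-window reshaping above can be checked on a tiny array. This is a self-contained sketch of the same `window_stack` helper with `STEPS = 3`:

```python
import numpy as np

def window_stack(a, stride=1, numberOfSteps=3):
    # Same sliding-window helper as above: hstack shifted slices of `a`.
    return np.hstack([a[i:1 + i - numberOfSteps or None:stride]
                      for i in range(0, numberOfSteps)])

a = np.arange(10).reshape(5, 2)        # 5 samples, 2 features
w = window_stack(a, stride=1, numberOfSteps=3)
print(w.shape)                         # (3, 6): 5 - 3 + 1 windows, 3 * 2 features
print(w.reshape(w.shape[0], 3, 2)[0])  # first window = samples 0..2
```

Each row of `w` concatenates three consecutive samples, which is why the labels are trimmed to `y[(STEPS - 1):]` — each window predicts the label of its last sample.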
print ('X_train shape:', X_train.shape)
print ('y_train shape:', y_train.shape)
print ('X_val shape:', X_val.shape)
print ('y_val shape:', y_val.shape)
print ('X_test shape:', X_test.shape)
print ('y_test shape:', y_test.shape)
###############################################################################
## Create learning model (Multilayer Perceptron) and tune hyperparameters
###############################################################################
### K: One hot encode the output.
#numberOfClasses = len (df [TARGET].unique ())
#print ('y_val:')
#print (y_val [:50])
#print (y_val.shape)
#y_train = keras.utils.to_categorical (y_train, numberOfClasses)
#y_val = keras.utils.to_categorical (y_val, numberOfClasses)
#y_test = keras.utils.to_categorical (y_test, numberOfClasses)
### -1 indices -> train
### 0 indices -> validation
test_fold = np.repeat ( [-1, 0], [X_train.shape [0], X_val.shape [0]])
myPreSplit = PredefinedSplit (test_fold)
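The `test_fold` convention above can be sanity-checked without scikit-learn: -1 marks samples that stay in training for every split, 0 marks samples that form the single validation fold.

```python
import numpy as np

# PredefinedSplit convention: -1 -> always train, 0 -> validation fold.
n_train, n_val = 3, 2  # hypothetical split sizes
test_fold = np.repeat([-1, 0], [n_train, n_val])
print(test_fold)  # three -1 entries followed by two 0 entries

train_idx = np.where(test_fold == -1)[0]
val_idx = np.where(test_fold == 0)[0]
print(train_idx.tolist(), val_idx.tolist())  # [0, 1, 2] [3, 4]
```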
print ('y_val:')
print (y_val [:50])
print (y_val.shape)
#y_val = y_val.argmax (axis = 1)
print ('y_val:')
print (y_val [:50])
print (y_val.shape)
#y_train = y_train.argmax (axis = 1)
#def create_model (learn_rate = 0.01, dropout_rate = 0.0, weight_constraint = 0):
# model = Sequential ()
# model.add (LSTM (50, activation= 'relu' , input_shape= (X_train.shape [1], X_train.shape [2])))
# model.add (Dense (1, activation = 'sigmoid'))
# model.compile (optimizer = 'adam', loss = 'binary_crossentropy',)
# return model
#
#model = KerasClassifier (build_fn = create_model, verbose = 2)
#batch_size = [30]#10, 30, 50]
#epochs = [3]#, 5, 10]
#learn_rate = [0.001, 0.01, 0.1, 0.2, 0.3]
#dropout_rate = [0.0]#, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
#weight_constraint = [0]#, 2, 3, 4, 5]
#param_grid = dict (batch_size = batch_size, epochs = epochs,
# dropout_rate = dropout_rate, learn_rate = learn_rate,
# weight_constraint = weight_constraint)
#grid = GridSearchCV (estimator = model, param_grid = param_grid,
# scoring = 'f1_weighted', cv = myPreSplit, verbose = 2,
# n_jobs = -1)
#
#grid_result = grid.fit (np.concatenate ( (X_train, X_val), axis = 0),
# np.concatenate ( (y_train, y_val), axis = 0))
#print (grid_result.best_params_)
#
#print ("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
#means = grid_result.cv_results_ ['mean_test_score']
#stds = grid_result.cv_results_ ['std_test_score']
#params = grid_result.cv_results_ ['params']
#for mean, stdev, param in zip (means, stds, params):
# print ("%f (%f) with: %r" % (mean, stdev, param))
#sys.exit ()
#Best: 0.999989 using {'epochs': 3, 'learn_rate': 0.001, 'weight_constraint': 0, 'batch_size': 30, 'dropout_rate': 0.0}
###############################################################################
## Finished model
METRICS = [keras.metrics.TruePositives (name = 'TP'),
keras.metrics.FalsePositives (name = 'FP'),
keras.metrics.TrueNegatives (name = 'TN'),
keras.metrics.FalseNegatives (name = 'FN'),
keras.metrics.BinaryAccuracy (name = 'Acc.'),
keras.metrics.Precision (name = 'Prec.'),
keras.metrics.Recall (name = 'Recall'),
keras.metrics.AUC (name = 'AUC'),]
BATCH_SIZE = 30
NUMBER_OF_EPOCHS = 3
LEARNING_RATE = 0.001
WEIGHT_CONSTRAINT = 1
clf = Sequential ()
clf.add (LSTM (50, activation = 'relu',
input_shape = (X_train.shape [1], X_train.shape [2])))
clf.add (Dense (1, activation = 'sigmoid'))
print ('Model summary:')
clf.summary ()
###############################################################################
## Compile the network
###############################################################################
print ('\nCompiling the network.')
clf.compile (optimizer = 'adam',
loss = 'binary_crossentropy',
metrics = METRICS)
###############################################################################
## Fit the network
###############################################################################
print ('\nFitting the network.')
startTime = time.time ()
history = clf.fit (X_train, y_train,
batch_size = BATCH_SIZE,
epochs = NUMBER_OF_EPOCHS,
verbose = 2, #1 = progress bar, not useful for logging
workers = 0,
use_multiprocessing = True,
#class_weight = 'auto',
validation_data = (X_val, y_val))
#clf.fit (X_train, y_train, epochs = NUMBER_OF_EPOCHS,
#use_multiprocessing = True, verbose = 2)
print (str (time.time () - startTime), 's to train model.')
###############################################################################
## Analyze results
###############################################################################
#print ('y_val:')
#print (y_val [:50])
#print (y_val.shape)
#y_val = y_val.reshape (y_val.shape [0], 1))
#print ('y_val after reshape:')
#print (y_val.shape)
#y_val = y_val.argmax (axis = 1)
#print ('y_pred:')
#print (y_pred [:50])
#print (y_pred.shape)
#print ('y_pred after reshape:')
#print (y_pred [:50])
#print (y_pred.shape)
#y_train = y_train.argmax (axis = 1)
# -
X_train.shape
# +
print ('\nPerformance on TRAIN set:')
y_pred = clf.predict (X_train)
y_pred = y_pred.round ()
y_pred = y_pred.reshape (y_pred.shape [0], )
# -
y_pred.shape
y_train.shape
y_pred = y_pred.round ()
print (y_pred [:10])
print (y_train [:10])
# +
my_confusion_matrix = confusion_matrix (y_train, y_pred,
labels = df [TARGET].unique ())
tn, fp, fn, tp = my_confusion_matrix.ravel ()
### K: NOTE: Scikit's confusion matrix is different from keras. We want attacks to be
### the positive class:
tp, tn, fp, fn = tn, tp, fn, fp
print ('Confusion matrix:')
print (my_confusion_matrix)
print ('Accuracy:', accuracy_score (y_train, y_pred))
print ('Precision:', precision_score (y_train, y_pred, average = 'macro'))
print ('Recall:', recall_score (y_train, y_pred, average = 'macro'))
print ('F1:', f1_score (y_train, y_pred, average = 'macro'))
print ('Cohen Kappa:', cohen_kappa_score (y_train, y_pred,
labels = df [TARGET].unique ()))
print ('TP:', tp)
print ('TN:', tn)
print ('FP:', fp)
print ('FN:', fn)
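The tp/tn swap above depends on how the 2x2 matrix is laid out. A plain-Python sketch of scikit-learn's convention for binary labels [0, 1] (`confusion_matrix(...).ravel()` returns tn, fp, fn, tp with 1 as the positive class) — so the swap in the script is only needed if attacks are encoded as 0, an assumption worth verifying:

```python
# Count a 2x2 confusion matrix by hand for labels {0, 1}, mirroring
# scikit-learn's ravel order (tn, fp, fn, tp) with 1 as the positive class.
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
print(tn, fp, fn, tp)  # 1 1 1 2
```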
sys.exit ()
### K: Only before publishing... Don't peek.
print ('\nPerformance on TEST set:')
y_pred = clf.predict (X_test)
y_pred = y_pred.round ()
y_pred = y_pred.reshape (y_pred.shape [0], )
my_confusion_matrix = confusion_matrix (y_test, y_pred,
labels = df [TARGET].unique ())
tn, fp, fn, tp = my_confusion_matrix.ravel ()
### K: NOTE: Scikit's confusion matrix is different from keras. We want attacks to be
### the positive class:
tp, tn, fp, fn = tn, tp, fn, fp
print ('Confusion matrix:')
print (my_confusion_matrix)
print ('Accuracy:', accuracy_score (y_test, y_pred))
print ('Precision:', precision_score (y_test, y_pred, average = 'macro'))
print ('Recall:', recall_score (y_test, y_pred, average = 'macro'))
print ('F1:', f1_score (y_test, y_pred, average = 'macro'))
print ('Cohen Kappa:', cohen_kappa_score (y_test, y_pred,
labels = df [TARGET].unique ()))
print ('TP:', tp)
print ('TN:', tn)
print ('FP:', fp)
print ('FN:', fn)
| src/specific_models/bot-iot/attack_identification/jupyter_notebooks/lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %%capture
# !pip install openmined_psi
import syft as sy
duet = sy.launch_duet(loopback=True)
import openmined_psi as psi
class PsiServerDuet:
def __init__(self, duet, server_items, reveal_intersection=False, fpr=1e-6):
# save a reference to the current duet session
self.duet = duet
# send the reveal intersection flag to the client
self.duet.requests.add_handler(
name="reveal_intersection",
action="accept"
)
sy_reveal_intersection = sy.lib.python.Bool(reveal_intersection)
sy_reveal_intersection_ptr = sy_reveal_intersection.tag("reveal_intersection").send(self.duet, pointable=True)
# allow the client to access the false positive rate
self.duet.requests.add_handler(
name="fpr",
action="accept"
)
sy_fpr = sy.lib.python.Float(fpr)
sy_fpr_ptr = sy_fpr.tag("fpr").send(self.duet, pointable=True)
# start the server
self.server = psi.server.CreateWithNewKey(reveal_intersection)
# send ServerSetup
self.duet.requests.add_handler(
name="setup",
action="accept"
)
setup = self.server.CreateSetupMessage(fpr, 1, server_items)
setup_ptr = setup.tag("setup").send(self.duet, pointable=True)
def accept(self, timeout_secs=-1):
# block until a request is received from the client
while True:
try:
self.duet.store["request"]
except KeyError:
continue
break
# get the Request from the client
request_ptr = self.duet.store["request"]
request = request_ptr.get(
request_block=True,
name="request",
reason="To get the client request",
timeout_secs=timeout_secs,
delete_obj=True
)
# process the request and send Response to client
self.duet.requests.add_handler(
name="response",
action="accept"
)
response = self.server.ProcessRequest(request)
response_ptr = response.tag("response").send(self.duet, pointable=True)
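The busy-wait inside `accept` above can be expressed as a small timeout-bounded polling helper. This is a sketch over any dict-like store; `wait_for_key` is a hypothetical name, not part of the syft API:

```python
import time

def wait_for_key(store, key, timeout_secs=30.0, poll_interval=0.1):
    # Poll `store` until `key` appears or the timeout expires.
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:
        try:
            return store[key]
        except KeyError:
            time.sleep(poll_interval)
    raise TimeoutError(f"'{key}' did not appear within {timeout_secs}s")

print(wait_for_key({"request": "payload"}, "request"))  # payload
```

Polling with a deadline and a short sleep avoids spinning the CPU and guarantees the server eventually gives up instead of blocking forever.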
server_items = ["Element " + str(2 * i) for i in range(1000)]
server = PsiServerDuet(duet, server_items)
# data owner has full control over the number of intersections that can be done
server.accept()
| examples/private-set-intersection/PSI_Server_Syft_Data_Owner_user_friendly.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# +
import os
import boto3
import logging
import sagemaker
import sagemaker.session
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.model_metrics import (
MetricsSource,
ModelMetrics,
)
from sagemaker.processing import (
ProcessingInput,
ProcessingOutput,
ScriptProcessor,
)
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo
from sagemaker.workflow.condition_step import (
ConditionStep,
)
from sagemaker.workflow.functions import (
JsonGet,
)
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.steps import (
ProcessingStep,
TrainingStep,
)
from sagemaker.workflow.step_collections import RegisterModel
from botocore.exceptions import ClientError
# +
logger = logging.getLogger(__name__)
"""Environment Variables"""
proj_dir = "TO_BE_DEFINED"
region= "TO_BE_DEFINED"
model_artefact_bucket= "TO_BE_DEFINED"
role = "TO_BE_DEFINED"
project_name= "TO_BE_DEFINED"
stage= "test"
model_package_group_name="AbalonePackageGroup",
pipeline_name="AbalonePipeline",
base_job_prefix="Abalone",
project_id="SageMakerProjectId",
processing_image_uri=None
training_image_uri=None
inference_image_uri=None
# -
def get_session(region, default_bucket):
"""Gets the sagemaker session based on the region.
Args:
region: the aws region to start the session
default_bucket: the bucket to use for storing the artifacts
Returns:
`sagemaker.session.Session` instance
"""
boto_session = boto3.Session(region_name=region)
sagemaker_client = boto_session.client("sagemaker")
runtime_client = boto_session.client("sagemaker-runtime")
return sagemaker.session.Session(
boto_session=boto_session,
sagemaker_client=sagemaker_client,
sagemaker_runtime_client=runtime_client,
default_bucket=default_bucket,
)
sagemaker_session = get_session(region, model_artefact_bucket)
# ## Feature Engineering
# This section describes the different steps involved in feature engineering which includes loading and transforming different data sources to build the features needed for the ML Use Case
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.m5.xlarge")
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.m5.xlarge")
inference_instance_type = ParameterString(name="InferenceInstanceType", default_value="ml.m5.xlarge")
model_approval_status = ParameterString(name="ModelApprovalStatus", default_value="PendingManualApproval")
input_data = ParameterString(
name="InputDataUrl",
default_value=f"s3://sagemaker-servicecatalog-seedcode-{region}/dataset/abalone-dataset.csv",
)
processing_image_name = "sagemaker-{0}-processingimagebuild".format(project_id)
training_image_name = "sagemaker-{0}-trainingimagebuild".format(project_id)
inference_image_name = "sagemaker-{0}-inferenceimagebuild".format(project_id)
# +
# processing step for feature engineering
try:
processing_image_uri = sagemaker_session.sagemaker_client.describe_image_version(
ImageName=processing_image_name
)["ContainerImage"]
except (sagemaker_session.sagemaker_client.exceptions.ResourceNotFound):
processing_image_uri = sagemaker.image_uris.retrieve(
framework="xgboost",
region=region,
version="1.0-1",
py_version="py3",
instance_type=processing_instance_type,
)
# -
# Define Script Processor
script_processor = ScriptProcessor(
image_uri=processing_image_uri,
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=f"{base_job_prefix}/sklearn-abalone-preprocess",
command=["python3"],
sagemaker_session=sagemaker_session,
role=role,
)
# Define ProcessingStep
step_process = ProcessingStep(
name="PreprocessAbaloneData",
processor=script_processor,
outputs=[
ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="validation", source="/opt/ml/processing/validation"),
ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
],
code="source_scripts/preprocessing/prepare_abalone_data/main.py", # we must figure out this path to get it from step_source directory
job_arguments=["--input-data", input_data],
)
# # Training an XGBoost model
# +
# training step for generating model artifacts
model_path = f"s3://{sagemaker_session.default_bucket()}/{base_job_prefix}/AbaloneTrain"
try:
training_image_uri = sagemaker_session.sagemaker_client.describe_image_version(ImageName=training_image_name)[
"ContainerImage"
]
except (sagemaker_session.sagemaker_client.exceptions.ResourceNotFound):
training_image_uri = sagemaker.image_uris.retrieve(
framework="xgboost",
region=region,
version="1.0-1",
py_version="py3",
instance_type=training_instance_type,
)
xgb_train = Estimator(
image_uri=training_image_uri,
instance_type=training_instance_type,
instance_count=1,
output_path=model_path,
base_job_name=f"{base_job_prefix}/abalone-train",
sagemaker_session=sagemaker_session,
role=role,
)
xgb_train.set_hyperparameters(
objective="reg:linear",
num_round=50,
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.7,
silent=0,
)
step_train = TrainingStep(
name="TrainAbaloneModel",
estimator=xgb_train,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri,
content_type="text/csv",
),
"validation": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs["validation"].S3Output.S3Uri,
content_type="text/csv",
),
},
)
# -
# # Evaluate the Model
# processing step for evaluation
script_eval = ScriptProcessor(
image_uri=training_image_uri,
command=["python3"],
instance_type=processing_instance_type,
instance_count=1,
base_job_name=f"{base_job_prefix}/script-abalone-eval",
sagemaker_session=sagemaker_session,
role=role,
)
evaluation_report = PropertyFile(
name="AbaloneEvaluationReport",
output_name="evaluation",
path="evaluation.json",
)
step_eval = ProcessingStep(
name="EvaluateAbaloneModel",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
),
ProcessingInput(
source=step_process.properties.ProcessingOutputConfig.Outputs["test"].S3Output.S3Uri,
destination="/opt/ml/processing/test",
),
],
outputs=[
ProcessingOutput(output_name="evaluation", source="/opt/ml/processing/evaluation"),
],
code="source_scripts/evaluate/evaluate_xgboost/main.py",
property_files=[evaluation_report],
)
# # Conditional step to push model to SageMaker Model Registry
# +
# register model step that will be conditionally executed
model_metrics = ModelMetrics(
model_statistics=MetricsSource(
s3_uri="{}/evaluation.json".format(
step_eval.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"]["S3Uri"]
),
content_type="application/json",
)
)
try:
inference_image_uri = sagemaker_session.sagemaker_client.describe_image_version(ImageName=inference_image_name)[
"ContainerImage"
]
except (sagemaker_session.sagemaker_client.exceptions.ResourceNotFound):
inference_image_uri = sagemaker.image_uris.retrieve(
framework="xgboost",
region=region,
version="1.0-1",
py_version="py3",
instance_type=inference_instance_type,
)
step_register = RegisterModel(
name="RegisterAbaloneModel",
estimator=xgb_train,
image_uri=inference_image_uri,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
content_types=["text/csv"],
response_types=["text/csv"],
inference_instances=["ml.t2.medium", "ml.m5.large"],
transform_instances=["ml.m5.large"],
model_package_group_name=model_package_group_name,
approval_status=model_approval_status,
model_metrics=model_metrics,
)
# condition step for evaluating model quality and branching execution
cond_lte = ConditionLessThanOrEqualTo(
left=JsonGet(
step_name=step_eval.name, property_file=evaluation_report, json_path="regression_metrics.mse.value"
),
right=6.0,
)
step_cond = ConditionStep(
name="CheckMSEAbaloneEvaluation",
conditions=[cond_lte],
if_steps=[step_register],
else_steps=[],
)
# -
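The condition step above implements, in effect, the following gating logic. This is a local sketch of the decision, not the SageMaker Pipelines API; the report structure matches the `regression_metrics.mse.value` JSONPath used by `JsonGet`:

```python
import json

def should_register(evaluation_json: str, mse_threshold: float = 6.0) -> bool:
    # Mirror of the ConditionLessThanOrEqualTo gate: register the model
    # only when the evaluation report's MSE is at or below the threshold.
    report = json.loads(evaluation_json)
    mse = report["regression_metrics"]["mse"]["value"]
    return mse <= mse_threshold

report = json.dumps({"regression_metrics": {"mse": {"value": 4.2}}})
print(should_register(report))  # True
```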
# # Create and run the Pipeline
# pipeline instance
pipeline = Pipeline(
name=pipeline_name,
parameters=[
processing_instance_type,
processing_instance_count,
training_instance_type,
model_approval_status,
input_data,
],
steps=[step_process, step_train, step_eval, step_cond],
sagemaker_session=sagemaker_session,
)
# +
import json
definition = json.loads(pipeline.definition())
definition
# -
pipeline.upsert(role_arn=role, description=f'{stage} pipelines for {project_name}')
pipeline.start()
| mlops-multi-account-cdk/mlops-sm-project-template-rt/seed_code/build_app/notebooks/sm_pipelines_runbook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/IgnatiusEzeani/IGBONLP/blob/master/ig_en_mt/scripts/1.%20Seq2Seq%20MT%20Model%20for%20Igbo-English.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="KrNB5PrA39Xm" colab_type="text"
# # Seq2Seq MT Model for Igbo-English
#
# In this work, we discuss our effort toward building a standard evaluation benchmark dataset for Igbo-English machine translation tasks.
#
# ## Table of Contents
#
# <!-- [TOC] -->
# <div class="toc">
# <ol>
# <li><a href="#introduction">Introduction</a></li>
# <li><a href="#evaldataset">Evaluation Dataset</a></li>
# <li><a href="#buildingModel">Building the Igbo-English Translation Model</a>
# <li><a href="#seq2seqModel">Overview of the Seq2Seq Model</a></li>
# <li><a href="#dataprep">Data Preparation</a></li>
#
# <li><a href="#acknowledgement">Acknowledgement</a></li>
# </ol>
# </div>
#
# <a id="introduction"></a>
# ## Introduction
#
# [Igbo language](https://en.wikipedia.org/wiki/Igbo_language) is one of the 3 major Nigerian languages, spoken by approximately 50 million people globally, 70% of whom are in southeastern Nigeria. It is low-resourced in terms of natural language processing, despite some efforts toward developing IgboNLP such as part-of-speech tagging: [Onyenwe et al. (2014)](http://eprints.whiterose.ac.uk/117817/1/W14-4914.pdf), [Onyenwe et al. (2019)](https://www.researchgate.net/publication/333333916_Toward_an_Effective_Igbo_Part-of-Speech_Tagger); and diacritic restoration: [Ezeani et al. (2016)](http://eprints.whiterose.ac.uk/117833/1/TSD_2016_Ezeani.pdf), [Ezeani et al. (2018)](https://www.aclweb.org/anthology/N18-4008).
#
# Although there are existing sources for collecting Igbo monolingual and parallel data, such as the [OPUS Project (Tiedemann, 2012)](http://opus.nlpl.eu/) or [JW.ORG](https://www.jw.org/ig), they have certain limitations. The OPUS Project is a good source of training data but, given that there are no human validations, may not serve well as an evaluation benchmark. JW.ORG contents, on the other hand, are human-generated and of good quality, but the genre is often skewed to religious contexts and therefore may not be good for building a generalisable model.
#
# This project aims to build, maintain, and publicly release a standard evaluation benchmark dataset for Igbo-English machine translation research for the NLP research community.
#
# <a id="evaldataset"></a>
# ## Evaluation Dataset
# The evaluation dataset was created using the approach laid out in the paper: [Igbo-English Machine Translation: An Evaluation Benchmark](https://eprints.lancs.ac.uk/id/eprint/143011/1/2004.00648.pdf).
#
# There are three key objectives:
# 1. Create a minimum of 10,000 English-Igbo human-level quality sentence pairs, mostly from the news domain
# 2. Assemble and clean a minimum of 100,000 monolingual Igbo sentences, mostly from the news domain, as companion monolingual data for training MT models
# 3. Release the dataset to the research community, present it at a conference, and publish a journal paper that details the processes involved
#
# All available data can be accessed from the [IGBONLP GitHub Repo](https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt).
#
# <a id="buildingModel"></a>
# ## Building the Igbo-English Translation Model
#
# With this Jupyter notebook, we present a simple usage demo of the evaluation benchmark dataset which was reported in the [paper](https://eprints.lancs.ac.uk/id/eprint/143011/1/2004.00648.pdf). We will also be building a machine learning model for Igbo-English machine translation, using [PyTorch](https://pytorch.org/) and [TorchText](https://pytorch.org/text/index.html).
#
# <a id="seq2seqModel"></a>
# ### Overview of the Seq2Seq Model
# We will build a *Sequence-to-Sequence* (Seq2Seq) model based on the method described in the paper [Sequence to Sequence Learning with Neural Networks by Sutskever et al.](https://arxiv.org/abs/1409.3215). This requires building an *encoder-decoder* model that uses a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single *context vector*. This vector is then passed to another RNN to be *decoded*, i.e. to learn to output the target (output) sentence by generating it one word at a time. We will be using the [Long Short-Term Memory (LSTM)](https://en.wikipedia.org/wiki/Long_short-term_memory), a variant of the RNN, in this demo.
#
# <a id="dataprep"></a>
# ### Data Preparation
# The models will be coded in PyTorch while the data pre-processing and preparation will be done using TorchText. We will assume that these required modules have been installed: `torch`, `torchtext`, `numpy`
# + id="hLrncMEM4VFa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 141} outputId="7bae71ee-3e36-4dbf-f216-4fa1cbbc545c"
# !git clone https://github.com/IgnatiusEzeani/IGBONLP.git
# + [markdown] id="-iIhnfqG4ivH" colab_type="text"
# Let's start by making the basic imports
# + id="BJVInRii39Xo" colab_type="code" colab={}
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import TranslationDataset
from torchtext.data import Field, BucketIterator
import os
import numpy as np
import random
import math
import time
# + [markdown] id="fGAJaOlA39X1" colab_type="text"
# We need a `tokenizer` for both languages. Here we are using a very basic one that splits the text on spaces. This is basically because the dataset has been presented in that format. Ideally, it may be necessary to use a more tailored tokenizer depending on the language and/or the task. But this will do for now.
#
# We will then use TorchText's [Field](https://pytorch.org/text/_modules/torchtext/data/field.html) class (see the implementation [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61)) to define the structure of the data. The `tokenize` argument is set to the tokenization function for both languages. The `Field` appends the "start of sequence" (`init_token`) and "end of sequence" (`eos_token`) tokens, and all words are converted to lowercase.
# + id="OPov500q39X3" colab_type="code" colab={}
def tokenize(text):
return [token for token in text.split()]
SRC = Field(tokenize = tokenize, init_token = '<sos>', eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize, init_token = '<sos>', eos_token = '<eos>',
lower = True)
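The whitespace tokenizer above is deliberately simple; on a sample sentence it just splits on spaces:

```python
def tokenize(text):
    # Same whitespace tokenizer as above: split on spaces only.
    return [token for token in text.split()]

print(tokenize("Kedu ka ị mere"))  # ['Kedu', 'ka', 'ị', 'mere']
```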
# + [markdown] id="oFMGP7wl39X_" colab_type="text"
# We then load the evaluation data from our [dataset](https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt/ig_parallel). So far it contains approximately 12,000 parallel Igbo and English sentences.
#
# We split the dataset into *train*, *validate* and *test* sets by specifying the source and target extensions (`exts`) and using the `Field` objects created above. The `exts` argument indicates the languages to use as the *source* and *target* (source first) and `fields` specifies which field to use for the source and target.
# + id="tLSUNc-S6Q5e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cabf122e-4244-4143-a9e8-c648f9575cd6"
# cd IGBONLP/ig_en_mt/scripts/
# + id="DQNabUdn39YC" colab_type="code" colab={}
fields, exts = (SRC,TRG), ('.ig','.en')
train_data, validate_data, test_data = TranslationDataset.splits(fields=fields, exts=exts,
path=os.path.join('./..','data'),
train='train', validation='val', test='test')
# + [markdown] id="DkipY1mh39YM" colab_type="text"
# To be sure that we loaded the data correctly, we can check whether the number of examples in each split matches what we expect.
# + id="FME3TZT539YR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="dc6cc4d1-5326-41a0-da04-72e99d3d0107"
print(f"{'Training examples':>20s}: {len(train_data.examples)}")
print(f"{'Validation examples':>20s}: {len(validate_data.examples)}")
print(f"{'Testing examples':>20s}: {len(test_data.examples)}")
# + [markdown] id="nUqoVwsr39YX" colab_type="text"
# We can also confirm that the language pairs are correctly loaded i.e. the correct *source* and *target* in their positions
# + id="lRlchecW39Yb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 125} outputId="33cbbbd6-d894-4239-a21f-7fe903101574"
example0 = vars(train_data.examples[0])
example5 = vars(train_data.examples[5])
print(f"Example 0:\n{example0}\n\nExample 5:\n{example5}")
# + [markdown] id="udbIgn2239Yg" colab_type="text"
# We'll set the random seeds for deterministic results.
# + id="7bOLwAud39Yh" colab_type="code" colab={}
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# + [markdown] id="QKJ6ZIRt39Yq" colab_type="text"
# Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.
#
# Using the `min_freq` argument, we only include tokens that appear at least twice in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.
#
# It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artificially inflated validation/test scores.
# + id="0xq48Nvc39Ys" colab_type="code" colab={}
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
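# A hypothetical, standard-library-only sketch of what `build_vocab(min_freq=2)` does conceptually (the special tokens and function name are illustrative, not torchtext's actual internals): count token frequencies on the training corpus, then keep only tokens that meet the threshold.

```python
from collections import Counter

def build_vocab(tokenized_sentences, min_freq=2, specials=("<unk>", "<pad>")):
    # Count token frequencies across the training corpus only
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    # Special tokens come first, then tokens meeting the frequency threshold
    itos = list(specials) + sorted(
        tok for tok, freq in counts.items() if freq >= min_freq
    )
    return {tok: i for i, tok in enumerate(itos)}

corpus = [["a", "b", "a"], ["b", "c"]]   # "c" appears only once
vocab = build_vocab(corpus, min_freq=2)
print(vocab)                             # "c" is excluded, so it would map to <unk>
```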
# + id="THsX6wAR39Yy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="249bcdc1-0309-4361-e2e0-4a790e258beb"
print(f"Unique tokens in source (ig) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
# + [markdown] id="mfsPtVfi39Y5" colab_type="text"
# The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
#
# We also need to define a `torch.device`. This is used to tell TorchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.
#
# When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, TorchText iterators handle this for us!
#
# We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
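# A minimal sketch of the idea behind `BucketIterator` (illustrative only, not its real implementation): grouping sentences of similar length into the same batch wastes far less padding, since each sentence is only padded up to its batch's longest sentence.

```python
def padding_cost(sentence_lengths, batch_size, bucket=True):
    # Bucketing = sort by length before slicing into batches
    lengths = sorted(sentence_lengths) if bucket else list(sentence_lengths)
    batches = [lengths[i:i + batch_size] for i in range(0, len(lengths), batch_size)]
    # Each sentence is padded up to its batch's longest sentence
    return sum(max(b) * len(b) - sum(b) for b in batches)

lengths = [3, 25, 4, 27, 5, 26]
pad_random = padding_cost(lengths, batch_size=2, bucket=False)
pad_bucketed = padding_cost(lengths, batch_size=2, bucket=True)
print(pad_random, pad_bucketed)  # bucketing pads far less
```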
# + id="1UBOI1r_39Y7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3d94d371-b7a5-4777-fbc3-cd5cbce18674"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# + id="LEL0cGU139ZC" colab_type="code" colab={}
BATCH_SIZE = 8
train_iterator, validate_iterator, test_iterator = BucketIterator.splits(
(train_data, validate_data, test_data),
batch_size = BATCH_SIZE,
device = device)
# + [markdown] id="Ts08AvRO39ZI" colab_type="text"
# ### Building the Seq2Seq Model
#
# We'll be building our model in three parts: the encoder, the decoder, and a seq2seq model that encapsulates them both and provides an interface to each.
#
# #### Encoder LSTM
# + id="rLXN56i-39ZM" colab_type="code" colab={}
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
# + [markdown] id="t3wYRSPQ39ZS" colab_type="text"
# #### Decoder
# + id="jMF9UB_439ZU" colab_type="code" colab={}
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
# + [markdown] id="2-sBJOyi39Zd" colab_type="text"
# ### Seq2Seq Model
# + id="CMfgxSzr39Ze" colab_type="code" colab={}
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
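# The teacher-forcing decision in the loop above is just a Bernoulli draw per time step; a standalone sketch of that selection rule (the token values are made up):

```python
import random

def next_input(ground_truth_token, predicted_token, teacher_forcing_ratio):
    # With probability `teacher_forcing_ratio`, feed the ground-truth token;
    # otherwise feed the model's own previous prediction.
    teacher_force = random.random() < teacher_forcing_ratio
    return ground_truth_token if teacher_force else predicted_token

# ratio=1.0 always teacher-forces, ratio=0.0 never does
always = [next_input("gold", "pred", 1.0) for _ in range(100)]
never = [next_input("gold", "pred", 0.0) for _ in range(100)]
print(set(always), set(never))
```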
# + [markdown] id="Bc7zXpJ939Zk" colab_type="text"
# # Training the Seq2Seq Model
#
# Now that we have our model implemented, we can begin training it.
#
# First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same.
#
# We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
# + id="IHoMNT_i39Zl" colab_type="code" colab={}
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
# + [markdown] id="HwkCnXPA39Zp" colab_type="text"
# Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
#
# We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
# + id="vBVsOWJm39Zr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 247} outputId="6f625d97-61eb-4fea-d5ac-31c06872681b"
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
# + [markdown] id="FiHLL88139Zx" colab_type="text"
# We also define a function that will calculate the number of trainable parameters in the model.
# + id="8n78fEkB39Zy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3031320d-f353-43cd-af53-e4e4fe8a7d75"
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
# + [markdown] id="nwTUK8o-39Z4" colab_type="text"
# We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
# + id="0e3QNu0r39Z4" colab_type="code" colab={}
optimizer = optim.Adam(model.parameters())
# + [markdown] id="7rcScefB39Z-" colab_type="text"
# Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions.
#
# Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
# + id="gIxcRu1x39Z-" colab_type="code" colab={}
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
# + [markdown] id="BKVx4Qsa39aC" colab_type="text"
# Next, we'll define our training loop.
#
# First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.
#
# As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
#
# $$\begin{align*}
# \text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\
# \text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
# \end{align*}$$
#
# Here, when we calculate the loss, we cut off the first element of each tensor to get:
#
# $$\begin{align*}
# \text{trg} = [&y_1, y_2, y_3, <eos>]\\
# \text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
# \end{align*}$$
#
# At each iteration:
# - get the source and target sentences from the batch, $X$ and $Y$
# - zero the gradients calculated from the last batch
# - feed the source and target into the model to get the output, $\hat{Y}$
# - as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`
# - we slice off the first column of the output and target tensors as mentioned above
# - calculate the gradients with `loss.backward()`
# - clip the gradients to prevent them from exploding (a common issue in RNNs)
# - update the parameters of our model by doing an optimizer step
# - sum the loss value to a running total
#
# Finally, we return the loss that is averaged over all batches.
# + id="aNI4rUGU39aD" colab_type="code" colab={}
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
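# `clip_grad_norm_` rescales all gradients whenever their combined L2 norm exceeds `clip`; a pure-Python sketch of that rule (not PyTorch's actual implementation):

```python
import math

def clip_by_norm(grads, clip):
    # Compute the global L2 norm of all gradient values
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > clip:
        # Scale every gradient down so the global norm equals `clip`
        scale = clip / total_norm
        grads = [g * scale for g in grads]
    return grads

clipped = clip_by_norm([3.0, 4.0], clip=1.0)  # norm 5.0, rescaled to norm 1.0
print(clipped)
```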
# + [markdown] id="96WImgRS39aG" colab_type="text"
# Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
#
# We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).
#
# We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up.
#
# The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
# + id="BfvxayMx39aG" colab_type="code" colab={}
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
# + [markdown] id="TW3kzzSe39aK" colab_type="text"
# Next, we'll create a function that we'll use to tell us how long an epoch takes.
# + id="MtBAGtP_39aK" colab_type="code" colab={}
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
# + [markdown] id="mu-Zi9Yi39aa" colab_type="text"
# We can finally start training our model!
#
# At each epoch, we'll check whether our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters that achieved the best validation loss.
#
# We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
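# Perplexity is just the exponential of the (per-token) cross-entropy loss, so a small loss change shows up as a large, easy-to-read perplexity change:

```python
import math

def perplexity(loss):
    # PPL = e^loss, where loss is the average per-token cross-entropy
    return math.exp(loss)

# a loss drop of only 0.1 moves perplexity by more than 10 points at this scale
print(perplexity(5.0) - perplexity(4.9))
```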
# + id="l1VnNbA239ac" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="ef58bfda-983b-496a-ca9c-6216f0414956"
N_EPOCHS = 30
CLIP = 1
print(f'-----------<Training started>-----------')
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, validate_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'igen-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
print(f'-----------<Training ended>-----------')
# + [markdown] id="dPQrXioQ39aj" colab_type="text"
# We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.
# + id="Z8rdjMM739ak" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="afb4e35c-3d54-4f0d-a8c5-fb4938073c7b"
model.load_state_dict(torch.load('igen-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
# + [markdown] id="-iBlE2Va39aq" colab_type="text"
# This is our first implementation and, as may be observed, it performed poorly on _validation_ and _test_.
# File: ig_en_mt/scripts_notebooks/1. Seq2Seq MT Model for Igbo-English.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
from glob import glob
import numpy as np
import torch
import torchvision.models as models
from torchvision import datasets
import torchvision.transforms as transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
use_cuda = torch.cuda.is_available()
# +
train_transforms = transforms.Compose([transforms.RandomRotation(60),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
image_path = 'data'
train_path = os.path.join(image_path, 'train')
val_path = os.path.join(image_path, 'valid')
test_path = os.path.join(image_path, 'test')
train_dataset = datasets.ImageFolder(train_path, train_transforms)
val_dataset = datasets.ImageFolder(val_path, train_transforms)
test_dataset = datasets.ImageFolder(test_path, test_transforms)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=128, shuffle=True)
loaders = {'train': train_loader, 'valid': val_loader, 'test': test_loader}
# +
import torch.nn as nn
import torch.nn.functional as F
def calc_w_conv_out(conv, pool_stride = 1):
return (((conv["W"] - conv["F"] + (2*conv["P"])) / conv["S"]) + 1) / pool_stride
conv1_w_in = 224
conv1 = {"W": conv1_w_in, "D": 3, "K": 4, "F": 7, "P": 0, "S": 7}
conv1_w_out = calc_w_conv_out(conv1)
conv2 = {"W": conv1_w_out, "D": conv1["K"], "K": 8, "F": 3, "P": 1, "S": 1}
conv2_w_out = calc_w_conv_out(conv2, 2)
conv3 = {"W": conv2_w_out, "D": conv2["K"], "K": 12, "F": 3, "P": 1, "S": 1}
conv3_w_out = calc_w_conv_out(conv3)
conv4 = {"W": conv3_w_out, "D": conv3["K"], "K": 16, "F": 3, "P": 1, "S": 1}
conv4_w_out = calc_w_conv_out(conv4, 4)
conv_features_out = conv4_w_out**2 * conv4["K"]
print(conv1_w_out, conv2_w_out, conv3_w_out, conv4_w_out, conv_features_out)
def make_nn_conv(conv):
return nn.Conv2d(conv["D"], conv["K"], conv["F"], padding=conv["P"], stride=conv["S"])
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
## Define layers of a CNN
## Layer 1
self.conv1 = make_nn_conv(conv1)
self.conv2 = make_nn_conv(conv2)
## Layer 2
self.conv3 = make_nn_conv(conv3)
self.conv4 = make_nn_conv(conv4)
self.fc1 = nn.Linear(int(conv_features_out), 3)
def forward(self, x):
## Define forward behavior
batch_size = x.size()[0]
# layer 1
x = F.dropout(F.relu(self.conv1(x)), 0.2)
x = F.dropout(F.max_pool2d(F.relu(self.conv2(x)), 2, 2), 0.2)
# layer 2
x = F.dropout(F.relu(self.conv3(x)), 0.2)
x = F.dropout(F.max_pool2d(F.relu(self.conv4(x)), 4, 4), 0.2)
x = x.view(batch_size, -1)
x = self.fc1(x)
return x
model = Net()
if use_cuda:
model.cuda()
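# As a sanity check on `calc_w_conv_out` above, the spatial sizes follow from the standard formula W_out = (W - F + 2P)/S + 1, divided by any pooling stride; working through the four layers by hand (using the hyperparameters defined above):

```python
def conv_out(w, f, p, s, pool_stride=1):
    # Standard convolution output-width formula, then pooling downsampling
    return ((w - f + 2 * p) // s + 1) // pool_stride

w1 = conv_out(224, f=7, p=0, s=7)                # (224-7)/7 + 1 = 32
w2 = conv_out(w1, f=3, p=1, s=1, pool_stride=2)  # 32 / 2 = 16
w3 = conv_out(w2, f=3, p=1, s=1)                 # 16
w4 = conv_out(w3, f=3, p=1, s=1, pool_stride=4)  # 16 / 4 = 4
features = w4 * w4 * 16                          # conv4 has 16 output channels
print(w1, w2, w3, w4, features)
```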
# +
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# -
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
train_loss = 0.0
valid_loss = 0.0
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss += loss.item()*data.size(0)
train_loss = train_loss/len(loaders['train'].sampler)
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
if use_cuda:
data, target = data.cuda(), target.cuda()
output = model(data)
loss = criterion(output, target)
valid_loss += loss.item()*data.size(0)
valid_loss = valid_loss/len(loaders['valid'].sampler)
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
if valid_loss < valid_loss_min:
            print(f'Saved {save_path}, validation loss decreased: {valid_loss_min} => {valid_loss}')
valid_loss_min = valid_loss
            torch.save(model.state_dict(), save_path)
return model
model = train(50, loaders, model, optimizer,
criterion, use_cuda, 'model.pt')
# +
model.load_state_dict(torch.load('model.pt'))
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
test(loaders, model, criterion, use_cuda)
# -
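# The running test-loss update above (`test_loss + (1/(batch_idx+1)) * (loss - test_loss)`) is the standard incremental-mean formula; a small sketch showing it matches a plain average:

```python
def incremental_mean(values):
    mean = 0.0
    for i, v in enumerate(values):
        # Same update rule as in test(): mean += (v - mean) / (i + 1)
        mean = mean + (v - mean) / (i + 1)
    return mean

losses = [2.0, 4.0, 6.0]
print(incremental_mean(losses), sum(losses) / len(losses))
```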
model.load_state_dict(torch.load('model.pt'))
def make_pred(img):
model.eval()
    if use_cuda:
        img = img.cuda()
    output = model(img.unsqueeze(0)).detach().cpu().numpy()
return output
# +
import pandas as pd
imgs = np.array(test_dataset.imgs)[:, 0]
pred_df = pd.DataFrame(data={'Id': imgs, 'task_1': 0, 'task_2': 0})
pred_df.head()
# -
for i in range(len(imgs)):
pred = make_pred(test_dataset.__getitem__(i)[0])
pred_df.loc[i, 'task_1'] = pred[0][0]
pred_df.loc[i, 'task_2'] = pred[0][2]
pred_df.head()
pred_df.to_csv('predictions.csv', index=False)
# +
#model.load_state_dict(torch.load('model.pt'))
import matplotlib.pyplot as plt
# %matplotlib inline
# !python get_results.py predictions.csv 0.4
# -
# File: dermatologist-ai/Cancer Detection Project.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # PRMT-2321 Increase in EMIS-EMIS not LM compliant in July - is this within a small number of practices?
#
# ## Context
# In the month of July we saw a spike in EMIS-EMIS error code 23 (sender not LM compliant). This is unexpected, as EMIS is Large Message compliant overall. Feedback from EMIS support said that Large Messaging is configurable at a practice level and needs to be enabled.
#
# ## Scope
# Are these across many practices or across a small group?
#
# If a small group, is there anything special about these practices, e.g. can we see if they are a practice that recently migrated from another supplier system?
#
# ## Hypothesis
# A small group of practices make up the majority of EMIS-EMIS not LM compliant errors.
#
#
import pandas as pd
import matplotlib.pyplot as plt
# ### Load June and July 2021 data
transfer_file_location = "s3://prm-gp2gp-transfer-data-preprod/v4/"
transfer_files = [
"2021/6/transfers.parquet",
"2021/7/transfers.parquet"
]
transfer_input_files = [transfer_file_location + f for f in transfer_files]
transfers_raw = pd.concat((
pd.read_parquet(f)
for f in transfer_input_files
))
transfers = transfers_raw.copy()
# +
asid_lookup_file_location = "s3://prm-gp2gp-asid-lookup-preprod/"
asid_lookup_files = [
"2021/7/asidLookup.csv.gz",
"2021/8/asidLookup.csv.gz"
]
asid_lookup_input_files = [asid_lookup_file_location + f for f in asid_lookup_files]
asid_lookup = pd.concat((
pd.read_csv(f)
for f in asid_lookup_input_files
)).drop_duplicates()
lookup = asid_lookup[["ASID", "NACS","OrgName"]]
transfers = transfers.merge(lookup, left_on='requesting_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'ASID': 'requesting_supplier_asid', 'NACS': 'requesting_ods_code','OrgName':'requesting_practice_name'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'ASID': 'sending_supplier_asid', 'NACS': 'sending_ods_code','OrgName':'sending_practice_name'}, axis=1)
# -
# ### EMIS - EMIS: Sender not Large Message compliant (error 23)
# +
emis_sender_bool = transfers["sending_supplier"]=="EMIS"
emis_requester_bool = transfers["requesting_supplier"]=="EMIS"
sender_error_23_bool = transfers["sender_error_codes"].apply(lambda error_codes: 23 in error_codes)
emis_transfers_with_error_23 = transfers[emis_sender_bool & emis_requester_bool & sender_error_23_bool].copy()
grouped_emis_transfers_with_error_23 = emis_transfers_with_error_23.groupby(by='sending_practice_name').agg({'conversation_id': 'count'}).sort_values(by='conversation_id', ascending=False).reset_index()
# -
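# The boolean masks above just test, per row, that both suppliers are EMIS and that error code 23 appears in the sender's error list; a standalone sketch of the same filter on plain Python rows (the field names mirror the dataframe columns, but the data is made up):

```python
rows = [
    {"sending_supplier": "EMIS", "requesting_supplier": "EMIS", "sender_error_codes": [23]},
    {"sending_supplier": "EMIS", "requesting_supplier": "TPP",  "sender_error_codes": [23]},
    {"sending_supplier": "EMIS", "requesting_supplier": "EMIS", "sender_error_codes": [6, 7]},
]

def is_emis_emis_error_23(row):
    # Keep only EMIS-to-EMIS transfers whose sender errors include code 23
    return (row["sending_supplier"] == "EMIS"
            and row["requesting_supplier"] == "EMIS"
            and 23 in row["sender_error_codes"])

matches = [r for r in rows if is_emis_emis_error_23(r)]
print(len(matches))
```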
grouped_emis_transfers_with_error_23
grouped_emis_transfers_with_error_23.plot.bar(x='sending_practice_name', y='conversation_id', xlabel="Practice names", ylabel="Count of transfers", title="Distribution of LM Errors for EMIS-EMIS", figsize=(60,20))
grouped_emis_transfers_with_error_23_plot = plt.xticks(rotation='vertical')
# +
emis_transfers_with_error_23["date"] = emis_transfers_with_error_23["date_requested"].dt.date
emis_transfers_with_error_23_grouped_by_date = emis_transfers_with_error_23.groupby(by="date").agg({"conversation_id": "count"}).reset_index()
emis_transfers_with_error_23_grouped_by_date.plot.bar(x='date', y='conversation_id', xlabel='June-July', ylabel='Total per day', title="Distribution of Large Messaging Errors between EMIS-EMIS", figsize=(20,10))
emis_transfers_with_error_23_grouped_by_date_plot = plt.xticks(rotation='vertical')
# -
# File: notebooks/62-PRMT-2321--emis-emis-not-large-message-compliant-july.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sample WordCloud project with nltk dataset
# Download the nltk corpora if you need to get the dataset
# +
# import nltk
# nltk.download()
# -
from nltk.corpus import abc, stopwords
from collections import Counter
import re
# ### Limit the data to the 300 most common words
#
# And remove 'stopwords'
# +
stopwords = stopwords.words('english')
def get_300_words(filename, number_of_words=300):
    # Raw strings avoid invalid-escape warnings in the regex patterns
    return Counter(
        [
            word.lower() for word in abc.words(filename)
            if re.search(r"\w", word) and not re.search(r"\d", word) and word.lower() not in stopwords
        ]).most_common(number_of_words)
science_words = {k: v for k, v in get_300_words('science.txt')}
rural_words = {k: v for k, v in get_300_words('rural.txt')}
# -
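# The filter inside `get_300_words` keeps lowercased words that contain a word character, contain no digit, and aren't stopwords; the same logic on a toy token list (with a small hand-written stopword set standing in for NLTK's):

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and"}  # stand-in for nltk's English stopwords

def most_common_words(tokens, n=3):
    kept = [
        tok.lower() for tok in tokens
        if re.search(r"\w", tok) and not re.search(r"\d", tok)
        and tok.lower() not in STOPWORDS
    ]
    return Counter(kept).most_common(n)

# "," has no word character, "b2b" contains a digit, "The"/"of"/"and" are stopwords
tokens = ["The", "science", "of", "science", ",", "b2b", "data", "and", "data"]
print(most_common_words(tokens))
```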
# ### Remove the 'title' words from the dictionary since we will add them back later on ... special like
rural_words.pop('rural', None)
science_words.pop('science', None)
# ### Set the color scheme for the project
# +
default_color = 'red'
colors = [
'hsl(203, 80%, 50%)',
'hsl(253, 80%, 50%)',
'hsl(103, 80%, 50%)',
'hsl(153, 80%, 50%)',
'hsl(53, 80%, 50%)',
]
# -
# ### Use the example class provided in the docs to organize colors by group
#
# https://amueller.github.io/word_cloud/auto_examples/colored_by_group.html
from wordcloud import get_single_color_func
class GroupedColorFunc(object):
"""Create a color function object which assigns DIFFERENT SHADES of
specified colors to certain words based on the color to words mapping.
Uses wordcloud.get_single_color_func
Parameters
----------
color_to_words : dict(str -> list(str))
A dictionary that maps a color to the list of words.
default_color : str
Color that will be assigned to a word that's not a member
of any value from color_to_words.
"""
def __init__(self, color_to_words, default_color):
self.color_func_to_words = [
(get_single_color_func(color), set(words))
for (color, words) in color_to_words.items()]
self.default_color_func = get_single_color_func(default_color)
def get_color_func(self, word):
"""Returns a single_color_func associated with the word"""
try:
color_func = next(
color_func for (color_func, words) in self.color_func_to_words
if word in words)
except StopIteration:
color_func = self.default_color_func
return color_func
def __call__(self, word, **kwargs):
return self.get_color_func(word)(word, **kwargs)
# ### Break the words into 'color groups' evenly
#
# Based on the number of colors used and the number of words (default 300 - 1)
from random import shuffle
from math import ceil
# +
def get_color_groups(words, colors):
words_list = list(words.keys())
# Use ceil if the length of words is not a multiple of the length of colors
cnt = ceil(len(words_list)/len(colors))
shuffle(words_list)
return {color: words_list[i*cnt:(i+1)*cnt] for i, color in enumerate(colors)}
rural_color_groups = get_color_groups(rural_words, colors)
science_color_groups = get_color_groups(science_words, colors)
# -
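# The chunking in `get_color_groups` slices the (shuffled) word list into `ceil(len(words)/len(colors))`-sized groups, one per color; a deterministic sketch without the shuffle, showing how the group sizes fall out when the counts don't divide evenly:

```python
from math import ceil

def chunk_by_color(words_list, colors):
    # Ceil so every word lands in some group even when counts don't divide evenly
    cnt = ceil(len(words_list) / len(colors))
    return {color: words_list[i * cnt:(i + 1) * cnt] for i, color in enumerate(colors)}

groups = chunk_by_color(list("abcdefg"), ["red", "green", "blue"])
print({color: len(words) for color, words in groups.items()})
```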
# Validate the counts for each color
for k, v in rural_color_groups.items():
print(k, len(v))
# Validate the counts for each color
for k, v in science_color_groups.items():
print(k, len(v))
# ### Prime time! Create the WordCloud and save the images
from wordcloud import WordCloud
# +
def create_wordcloud(name, color_groups, words):
# Create a color function with multiple tones
color_func = GroupedColorFunc(color_groups, default_color)
words[name] = 2*max(words.values())
wordcloud = WordCloud(width=6*300,
height=9*300,
color_func=color_func).generate_from_frequencies(words)
wordcloud.to_file("./imgs/" + name + "_blk.png")
print(wordcloud.layout_[0])
create_wordcloud('rural', rural_color_groups, rural_words)
create_wordcloud('science', science_color_groups, science_words)
print('Done!')
# -
# ### Oops! The 'default' red color is being modified by the get_single_color_func() random state
#
# Wouldn't it be nice for the title word to be a nice consistent bright red color?
class GroupedColorFunc(object):
""" Modify the provided class from WordCloud documentation
Set the self.default_color_fun in the __init__() method
"""
def __init__(self, color_to_words, default_color):
self.color_func_to_words = [
(get_single_color_func(color), set(words))
for (color, words) in color_to_words.items()]
# set the default color to always be 'red'
self.default_color_func = lambda *args, **kwargs: "red"
def get_color_func(self, word):
"""Returns a single_color_func associated with the word"""
try:
color_func = next(
color_func for (color_func, words) in self.color_func_to_words
if word in words)
except StopIteration:
color_func = self.default_color_func
return color_func
def __call__(self, word, **kwargs):
return self.get_color_func(word)(word, **kwargs)
# ### Round 2! Create the WordCloud and save the images
# +
def create_wordcloud(name, color_groups, words):
# Create a color function with multiple tones
color_func = GroupedColorFunc(color_groups, default_color)
words[name] = 2*max(words.values())
wordcloud = WordCloud(width=6*300,
height=9*300,
color_func=color_func).generate_from_frequencies(words)
wordcloud.to_file("./imgs/" + name + "_blk.png")
print(wordcloud.layout_[0])
create_wordcloud('rural', rural_color_groups, rural_words)
create_wordcloud('science', science_color_groups, science_words)
print('Done!')
# -
# ### That's better! Nice and bright!
# File: 20200728 Sample wordcloud project.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Week 02, Part 3: AVOID text
#
# See lecture slides for more info about how this dataset was collected.
#
# ### Topics:
# 1. Read in text data
# 1. Explore text data
#
# Resize plots:
require(repr)
options(repr.plot.width=10, repr.plot.height=4)
# ## 1. Read in AVOID data
# There are a few packages that are helpful for reading in text data; we'll install one:
# +
#install.packages("readtext") # only need to run once, comment out once you have
# NOTE: you need to have pkg-config installed
# -
library("readtext")
txtfile = readtext::readtext("introAVoid.txt")
# Make sure this is stored somewhere you can remember! You can put it in the same directory as this file (or whatever R-script you are working from) or you can specify a location. For example, on my Mac I can specify the default `Downloads` folder as the location with:
#
# ```r
# txtfile = readtext::readtext("~/Downloads/introAVoid.txt")
# ```
# What does this dataset look like?
txtfile
# ## 2. Exploring the AVOID data
# As it is, we have a big block of text. We have to do some data manipulation to really work with this data.
#
# While we won't get into too much NLP type stuff here, there are some usages in R that are fun to look at if you decide you like R and natural language processing, e.g.:
# * https://towardsdatascience.com/r-packages-for-text-analysis-ad8d86684adb
# * https://www.datacamp.com/community/tutorials/ML-NLP-lyric-analysis
# * https://www.kaggle.com/rtatman/nlp-in-r-topic-modelling
#
# Here, we'll just stick to some simple manipulation by grammar.
#
# First, let's split this into words:
twords = strsplit(txtfile[,2],split=" ") # split by spaces into words
twords
# Unfortunately, there are some punctuation marks in there that we probably want to take out (though, depending on what you want to do with this dataset, you might want to leave them in!)
#
# While we won't go through this in detail, we can do a for-loop with some greps and some character substitutions for us:
wordstmp = twords[[1]]
words = c()
# take out punctuation; grepl/gsub use fixed=TRUE so the punctuation
# characters are matched literally, not as regex metacharacters
for (i in 1:length(wordstmp)) {
    if (grepl(".", wordstmp[i], fixed=TRUE)) {          # 1 => .
        words = c(words, gsub(".", "", wordstmp[i], fixed=TRUE))
    } else if (grepl(",", wordstmp[i], fixed=TRUE)) {   # 2 => ,
        words = c(words, gsub(",", "", wordstmp[i], fixed=TRUE))
    } else if (grepl(")", wordstmp[i], fixed=TRUE)) {   # 3 => )
        words = c(words, gsub(")", "", wordstmp[i], fixed=TRUE))
    } else if (grepl("(", wordstmp[i], fixed=TRUE)) {   # 4 => (
        words = c(words, gsub("(", "", wordstmp[i], fixed=TRUE))
    } else if (grepl("—", wordstmp[i], fixed=TRUE)) {   # 5 => — (em dash)
        tmp = gsub("—", "", wordstmp[i], fixed=TRUE)    # sometimes the dash is the whole "word"
        if (tmp != "") {
            words = c(words, tmp)
        }
    } else {                                            # no punctuation: keep the word as-is
        words = c(words, wordstmp[i])
    }
}
for (i in 1:length(words)){
print(words[i])
}
rlang::is_empty("")
barplot(table(country)) # note: if you stretch the plot window in RStudio you see more/less data
# What are the different countries?
print(levels(country))
# How about year of data?
barplot(table(year))
# (this is where we got in Fall 2020)
# How about type of fish?
barplot(table(type))
# Since it's hard to see the labels, maybe we want to print out levels by hand:
levels(type)
# So, all-in-all we see we have about 87 different types of fish import/export in this dataset.
# How about transaction type? (import, export, re-export/import)
barplot(table(transaction))
# How about the cash amount of the transaction?
hist(trade_usd) # numerical data
# We can see here that this histogram tells us very little.
#
# Why? Well let's print out the values of `trade_usd` and take a look:
head(trade_usd, n=10)
# These numbers are very large overall. One option is that we can divide by something like $1000 and see what that looks like:
hist(trade_usd/1000., xlab='Trade in $1000 USD')
# Well that didn't do too much good! Why is that? Let's look at some summary stats for this variable:
print(summary(trade_usd))
# So, the min seems to be $0-$1 and the max is $5.2 billion! You can see that the median & mean are very different, and the IQR extends from 1000 to almost 10 million.
#
# When we have data over such a huge range that we want to take a look at, one good idea is to take the log and plot that.
#
# Recall log10 means "log base 10".
#
# What about a log scale plot?
hist(log10(trade_usd))
# Now we can see a lot more detail - what this plot means is that the peak of the distribution is log10 ~ 5 or at 0.1 million dollars ($10^5$ dollars).
# How about the weight of the fish in kg?
hist(weight)
# Hard to see - let's look at a summary again:
print(summary(weight))
# So again, min & max have a wide range, and a large spread in quartiles. Let's try a log plot again:
hist(log10(weight))
# That this looks similar to the trade_usd histogram makes sense intuitively - more weight of fish probably corresponds to more money flowing.
# How about the quantity name?
levels(quant_name)
# Looks like some of the "quantity" measures are weight, or # of items, or nothing.
#
# Since this is non-numeric, and takes only 3 values, let's just look at the table:
table(quant_name)
# It looks like most entries are in kg, and only a few are in #'s. A few specify `No Quantity` - we might want to be careful that we are comparing "apples to apples" - i.e. "like things to like things" and make sure we are not comparing measurements in kg to measurements in "# items".
# We are lucky in this dataset that we are going to be able to subset by the quantity and not have to deal with NA's.
#
# There are a few ways to specify an `na.omit` policy. We can in-fact check out the na.omit function to see all it covers, e.g. here: https://www.rdocumentation.org/packages/photobiology/versions/0.10.5/topics/na.omit
#
# And there are many other packages that have their own na.omit representations (for example, for data tables specifically): https://www.rdocumentation.org/packages/data.table/versions/1.13.6/topics/na.omit.data.table
#
# And googling gives you a lot of nice examples you can check out for how to use/when to use in different cases: https://statisticsglobe.com/na-omit-r-example/#:~:text=The%20na.,basic%20programming%20code%20for%20na.
# ## 3. Further data exploration and some beginning Stats Functions
# I'm going to show a few stats functions that we'll use later in class. We'll go over them in a lot of detail later, but for right now, I'm just showing an example of how one might use R to explore a dataset and try to learn stuff about it.
#
# I'll say a lot of "this will be quantified later" and it will! So don't get frustrated if it's weird or vague at this point!
# #### 3A - Example 1: Croatian Imports
#
# Let's start by grabbing a subset of our data. We can do this by "masking" out our dataset and only looking at these specific values using "boolean" operators.
#
# Let's say I want *only* Croatian data:
mask = country=="Croatia" # Feel feel to pick your own country! recall: print(levels(country)) to take a look
# I can then make a subset of this dataset to work with, for example, to plot the total amount of trade in USD from Croatia:
trade_usd_croatia = subset(trade_usd,mask)
# Here, I use the "subset" function to grab only data with a country code of "croatia".
#
# What does the histogram look like?
hist(trade_usd_croatia)
# Again, we probably want to do a log10-based histogram:
hist(log10(trade_usd_croatia))
# We can make more complex masks, for example, remember how we only wanted to compare "like to like" in terms of how things are weighted/counted?
#
# Let's also "mask out" only Croatian data that is measured in kg:
mask = (country=="Croatia") & (quant_name == "Weight in kilograms")
trade_usd_croatia_kg = subset(trade_usd,mask)
# Let's overplot this new dataset on our old histogram:
# +
hist(log10(trade_usd_croatia))
hist(log10(trade_usd_croatia_kg),col=rgb(1,0,0),add=T) # add=T to include on old plot
# -
# It turns out this was an important addition to our mask - it changes how things look at the lowest trade in USD values.
#
# Finally, we can further subset and look at only how much import trade they are doing:
mask = (country=="Croatia") & (quant_name == "Weight in kilograms") & (transaction == "Import")
trade_usd_croatia_kg_import = subset(trade_usd,mask)
# +
options(repr.plot.width=10, repr.plot.height=6)
hist(log10(trade_usd_croatia))
hist(log10(trade_usd_croatia_kg),col=rgb(1,0,0),add=T)
hist(log10(trade_usd_croatia_kg_import),col=rgb(0,0,1),add=T)
# and of course, no plot is complete without a legend!
legend("topleft",c("Croatian Trade","Croatian Trade (kg)", "Croatian Imports (kg)"),
fill=c(rgb(1,1,1),rgb(1,0,0),rgb(0,0,1)))
# -
# This subsetting is also useful for looking at summary statistics of this dataset:
print(summary(trade_usd_croatia))
print(summary(trade_usd_croatia_kg))
# With the difference between these two, we can already see that if we select for things weighted in kg we find a slightly higher mean/median, etc.
#
# This sort of lines up with what we expect from looking at the histograms.
#
# Let's also finally compare the imports to the exports from Croatia:
mask = (country=="Croatia") & (quant_name == "Weight in kilograms") & (transaction == "Export")
trade_usd_croatia_kg_export = subset(trade_usd,mask)
# +
hist(log10(trade_usd_croatia))
hist(log10(trade_usd_croatia_kg),col=rgb(1,0,0),add=T)
hist(log10(trade_usd_croatia_kg_import),col=rgb(0,0,1),add=T)
hist(log10(trade_usd_croatia_kg_export),col=rgb(0,1,0),add=T)
# and, obviously, update our legend:
legend("topleft",c("Croatian Trade","Croatian Trade (kg)",
"Croatian Imports (kg)", "Croatian Exports (kg)"),
fill=c(rgb(1,1,1),rgb(1,0,0),rgb(0,0,1),rgb(0,1,0)))
# -
# By eye it looks like the means/medians might be different, but let's use summary to check:
print('IMPORTS')
print(summary(trade_usd_croatia_kg_import))
print('EXPORTS')
print(summary(trade_usd_croatia_kg_export))
# Again, our histogram seems to be accurate - the export median < the import median, though note this is not true of the mean.
#
# This makes sense because if we look at the STDDEV of each:
print(sd(trade_usd_croatia_kg_import))
print(sd(trade_usd_croatia_kg_export))
# The sd of the export > import meaning there is a larger spread of trade in USD in the export dataset so it makes sense the mean might be different from the median.
# **Practice Question: what is skewness of each histogram?**
# **Practice Question : Can we accurately say for sure that the medians between these are different? Can we quantify how sure we are these means or medians are different?**
#
# $\rightarrow$ more on these concepts later in class.
| week02/.ipynb_checkpoints/prep_notebook_avoid_week02_part3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Rather than setting up a recorder and performing SQL queries, let's try doing data science just by calling the HASS [restful-API](https://home-assistant.io/developers/rest_api/). I start by working on localhost, where no password is required.
# +
from requests import get, post
import json
from pprint import PrettyPrinter
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd
import datetime as dt
# %matplotlib inline
def print_json(json_data):
PrettyPrinter().pprint(json_data)
headers = {'content-type': 'application/json'}
# -
# Let's first check what's in my config
url = 'http://localhost:8123/api/config'
response = get(url, headers=headers).json()
print_json(response['components'])
# I know that **sensor.living_room_motion_sensor** has some data to display
url = 'http://localhost:8123/api/history/period?filter_entity_id=sensor.living_room_motion_sensor' #
response = get(url, headers=headers).json()
data = response[0]
len(data)
print_json(data[0])
data[0]['last_updated']
# So each last_updated is an event
data[0]['state']
# Let's use a comprehension to get the state data into a dict
living_room_motion_sensor = {v['last_updated']: v['state'] for v in data}
# Now let's convert the dict to a pandas Series
living_room_motion_sensor_ds = pd.Series(living_room_motion_sensor)
living_room_motion_sensor_ds.head()
# Let's convert this categorical series into numeric values for plotting
living_room_motion_sensor_ds = pd.get_dummies(living_room_motion_sensor_ds)['on']
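A quick sketch of what this conversion does, on a tiny on/off series (the timestamps here are hypothetical placeholders): `get_dummies` one-hot encodes the states, and taking the `'on'` column yields 1 where the sensor was on and 0 elsewhere.

```python
import pandas as pd

# A toy on/off series keyed by (hypothetical) timestamps:
states = pd.Series({"t0": "off", "t1": "on", "t2": "on", "t3": "off"})

# One-hot encode, then keep the 'on' indicator as the numeric signal:
numeric = pd.get_dummies(states)["on"]
print(list(numeric.astype(int)))  # [0, 1, 1, 0]
```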
# Finally, let's do a quick plot
f, ax = plt.subplots(figsize=(16, 6))
ax.step(living_room_motion_sensor_ds, 'bo', where='post')
ax.set_ylabel('living_room_motion_sensor', color='b')
| HASS API experiments simple localhost 21-1-2018.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ```
# 1 Environment
# 2 Hyper Parameters
# 3 Training Data
# 4 Prepare for Training
# 4.1 mx Graph Input
# 4.2 Construct a linear model
# 4.3 Mean squared error
# 5 Start training
# 6 Regression result
# ```
#
# ---
# # Environment
# +
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd
import numpy
import matplotlib.pyplot as plt
mx.random.seed(1)
# -
# # Hyper Parameters
learning_rate = 0.01
training_epochs = 1000
smoothing_constant = 0.01
display_step = 50
ctx = mx.cpu()
# # Training Data
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
7.042, 10.791, 5.313, 7.997, 5.654, 9.27,3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]
# # Prepare for Training
# ## mx Graph Input
# +
# Set model weights (initialize the model parameters)
W = nd.random_normal(shape=1)
b = nd.random_normal(shape=1)
params = [W, b]
for param in params:
param.attach_grad()
# -
# ## Construct a linear model
def net(X):
return X*W + b
# ## Mean squared error
# +
# Mean squared error loss function
def square_loss(yhat, y):
return nd.mean((yhat - y) ** 2)
# Gradient descent optimizer
def SGD(params, lr):
for param in params:
param[:] = param - lr * param.grad
# -
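The `square_loss`/`SGD` pair above can be sketched in plain NumPy, so it runs without mxnet: one gradient-descent step repeatedly applied to a one-variable linear model under mean squared error. The toy data below is hypothetical, chosen so the true line is y = 2x + 1.

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0])
Y = np.array([1.0, 3.0, 5.0])   # true line: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.1

for _ in range(500):
    err = (w * X + b) - Y
    grad_w = 2 * np.mean(err * X)   # d/dw of mean((w*x + b - y)^2)
    grad_b = 2 * np.mean(err)       # d/db of the same loss
    w -= lr * grad_w                # the same update SGD() performs per param
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The mxnet version does exactly this, except `autograd.record()`/`loss.backward()` compute the gradients instead of the hand-derived expressions.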
# # Start training
# +
# Fit training data
data = nd.array(train_X)
label = nd.array(train_Y)
losses = []
moving_loss = 0
niter = 0
for e in range(training_epochs):
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
SGD(params, learning_rate)
##########################
# Keep a moving average of the losses
##########################
niter +=1
curr_loss = nd.mean(loss).asscalar()
moving_loss = (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss
# correct the bias from the moving averages
est_loss = moving_loss/(1-(1-smoothing_constant)**niter)
losses.append(est_loss)
if (e + 1) % display_step == 0:
print("Epoch:", '%04d' % (e), "cost=", "{:.9f}".format(curr_loss), "W=", W.asnumpy()[0], "b=", b.asnumpy()[0])
# -
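The bias correction in the loop above is easiest to see in isolation: an exponential moving average started at 0 underestimates early values, and dividing by 1-(1-k)^n exactly compensates. A small sketch with a constant "loss" (the value 5.0 is arbitrary):

```python
k = 0.01          # smoothing_constant
moving = 0.0
for n in range(1, 6):
    curr = 5.0    # a constant loss makes the bias obvious
    moving = (1 - k) * moving + k * curr
    corrected = moving / (1 - (1 - k) ** n)
    print(n, round(moving, 4), round(corrected, 4))
# the raw average creeps toward 5.0 very slowly;
# the corrected estimate equals 5.0 from the very first step
```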
# # Regression result
# +
def plot(losses, X, Y, n_samples=10):
xs = list(range(len(losses)))
f, (fg1, fg2) = plt.subplots(1, 2)
fg1.set_title('Loss during training')
fg1.plot(xs, losses, '-r')
fg2.set_title('Estimated vs real function')
fg2.plot(X.asnumpy(), net(X).asnumpy(), 'or', label='Estimated')
fg2.plot(X.asnumpy(), Y.asnumpy(), '*g', label='Real')
fg2.legend()
plt.show()
plot(losses, data, label)
| 01_TF_basics_and_linear_regression/linear_regression_mx.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import joblib
def PCAconv(data, n=13):
    tmp = StandardScaler().fit_transform(data.values)
    pca = PCA(n_components=n)
    principalComponents = pca.fit_transform(tmp)
    X = pd.DataFrame(data=principalComponents)
    return X
def testing(filename,filename2):
data=pd.read_excel(filename)
data['time_sourced']=(data.datetime_sourced-data.datetime_ordered).map(lambda x:int(x.seconds/(60)))
data['time_make']=(data.datetime_product_ready-data.datetime_ordered).map(lambda x: int(x.seconds/(3600)))
data['time_ready']=(data.datetime_product_ready-data.datetime_ordered).map(lambda x: int(x.days/(1)))
data_corr=data;
data_corr=data_corr.drop(['order_id','delivered_to_plan','datetime_ordered','datetime_sourced','datetime_product_ready','datetime_planned'],axis=1)
col=['country','shipping_method','facility','product_category','on_sale']#categorical colums
tmp=[];
for i,j in enumerate(col):
a=data_corr[j].unique()
tmp.append(dict(zip(a,list(range(0,len(a))))))
print(tmp[i])
data_corr[j]=data_corr[j].map(tmp[i])
X=PCAconv(data_corr,11)
classifier = joblib.load(filename2)
y_pred = classifier.predict(X)
# print(accuracy_score(y_test, y_pred))
    sub = pd.DataFrame(data=y_pred)
    sub.to_csv('Predicted.csv')
# Example call with hypothetical paths -- replace with the real data file and saved model:
# testing('orders.xlsx', 'classifier.joblib')
| .ipynb_checkpoints/tester-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="13pL--6rycN3"
# # Practice 02: Dealing with texts using CNN
#
# Today we're gonna apply the newly learned tools for the task of predicting job salary.
#
# <img src="https://storage.googleapis.com/kaggle-competitions/kaggle/3342/logos/front_page.png" width=400px>
#
# Based on YSDA [materials](https://github.com/yandexdataschool/nlp_course/blob/master/week02_classification/seminar.ipynb). _Special thanks to [<NAME>](https://github.com/Omrigan/) for the core assignment idea._
# + id="P8zS7m-gycN5"
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import pandas as pd
# + [markdown] id="34x92vWQycN_"
# ## About the challenge
# For starters, let's download and unpack the data from [here](https://www.dropbox.com/s/5msc5ix7ndyba10/Train_rev1.csv.tar.gz?dl=0).
#
# You can also get it from [yadisk url](https://yadi.sk/d/vVEOWPFY3NruT7) on the competition [page](https://www.kaggle.com/c/job-salary-prediction/data) (pick `Train_rev1.*`).
# + id="vwN72gd4ycOA" colab={"base_uri": "https://localhost:8080/"} outputId="732c0c1a-87dd-4c89-e929-99bbff304321"
# Do this only once:
# !curl -L "https://www.dropbox.com/s/5msc5ix7ndyba10/Train_rev1.csv.tar.gz?dl=1" \
# -o Train_rev1.csv.tar.gz
# !tar -xvzf ./Train_rev1.csv.tar.gz
# + id="39aNIiVbRJim" colab={"base_uri": "https://localhost:8080/"} outputId="1ef3a14c-521f-42b3-91cc-729bcb6cca8e"
data = pd.read_csv("./Train_rev1.csv", index_col=None)
data.shape
# + [markdown] id="z7kznuJfycOH"
# One problem with salary prediction is that it's oddly distributed: there are many people who are paid standard salaries and a few that get tons of money. The distribution is fat-tailed on the right side, which is inconvenient for MSE minimization.
#
# There are several techniques to combat this: using a different loss function, predicting log-target instead of raw target or even replacing targets with their percentiles among all salaries in the training set. We will use logarithm for now.
#
# _You can read more [in the official description](https://www.kaggle.com/c/job-salary-prediction#description)._
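`np.log1p` computes log(1 + x), and `np.expm1` inverts it, so a model trained on `Log1pSalary` can map its predictions back to salaries. A quick sketch (the salary values below are hypothetical):

```python
import numpy as np

salaries = np.array([12000.0, 30000.0, 250000.0])
log_targets = np.log1p(salaries)     # compress the fat right tail
print(np.round(log_targets, 2))

recovered = np.expm1(log_targets)    # invert back to the original scale
print(np.allclose(recovered, salaries))  # True
```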
# + id="UuuKIKfrycOH" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="b9c9891e-a178-4104-bb27-613bbceb89a7"
data["Log1pSalary"] = np.log1p(data["SalaryNormalized"]).astype("float32")
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.hist(data["SalaryNormalized"], bins=20);
plt.subplot(1, 2, 2)
plt.hist(data["Log1pSalary"], bins=20);
# + [markdown] id="Fcu-qmHRycOK"
# Our task is to predict one number, __Log1pSalary__.
#
# To do so, our model can access a number of features:
# * free text: __`Title`__ and __`FullDescription`__;
# * categorical: __`Category`__, __`Company`__, __`LocationNormalized`__, __`ContractType`__, and __`ContractTime`__.
# + id="p9vyA_erycOK" colab={"base_uri": "https://localhost:8080/", "height": 290} outputId="11494165-dabe-48c0-9770-f7894a0debd4"
text_columns = ["Title", "FullDescription"]
categorical_columns = ["Category", "Company", "LocationNormalized",
"ContractType", "ContractTime"]
target_column = "Log1pSalary"
# Cast missing values to string "NaN":
data[categorical_columns] = data[categorical_columns].fillna("NaN")
data.sample(3)
# + [markdown] id="IUdclucmycON"
# ## Preprocessing text data
#
# Just like last week, applying NLP to a problem begins from tokenization: splitting raw text into sequences of tokens (words, punctuation, etc).
#
# __Your task__ is to lowercase and tokenize all texts under `Title` and `FullDescription` columns. Store the tokenized data as a __space-separated__ string of tokens for performance reasons.
#
# It's okay to use `nltk` tokenizers. Assertions were designed for `WordPunctTokenizer`, slight deviations are okay.
# + id="YzeOxD_aycOO" colab={"base_uri": "https://localhost:8080/"} outputId="29221013-701c-4eb6-8aab-6ff3cb0ae75f"
print("Raw text:")
print(data["FullDescription"][2::100000])
# + id="RUWkpd7PycOQ"
import nltk
tokenizer = nltk.tokenize.WordPunctTokenizer()
# See the task above.
def normalize(text):
text = str(text).lower()
tokens = tokenizer.tokenize(text)
return ' '.join(tokens)
data[text_columns] = data[text_columns].applymap(normalize)
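Roughly what `normalize` does to a single string: lowercase, tokenize, and re-join with single spaces. Since `nltk` may not be installed everywhere, this sketch approximates `WordPunctTokenizer` with a regex (runs of word characters vs. runs of punctuation) — the real tokenizer may differ on edge cases:

```python
import re

def normalize_sketch(text):
    text = str(text).lower()
    # \w+ matches word runs, [^\w\s]+ matches punctuation runs:
    tokens = re.findall(r"\w+|[^\w\s]+", text)
    return " ".join(tokens)

print(normalize_sketch("Account Manager (German)"))
# account manager ( german )
```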
# + [markdown] id="o3pQdHihycOT"
# Now we can assume that our text is a space-separated list of tokens:
# + id="Gs-6lnS_ycOU" colab={"base_uri": "https://localhost:8080/"} outputId="20679b8d-7391-4ddc-b959-1780f6441d82"
print("Tokenized:")
print(data["FullDescription"][2::100000])
assert data["FullDescription"][2][:50] == \
"mathematical modeller / simulation analyst / opera"
assert data["Title"][54321] == \
"international digital account manager ( german )"
# + [markdown] id="ouE3L2hyycOX"
# Not all words are equally useful. Some of them are typos or rare words that are only present a few times.
#
# Let's count how many times each word appears in the data so that we can build a "white list" of known words.
# + id="iC7hBwwjycOX"
# Count how many times does each token occur in both
# "Title" and "FullDescription" in total.
# Build a dictionary { token -> it's count }.
from collections import Counter
from tqdm import tqdm as tqdm
token_counts = Counter()
for row in data[text_columns].values.flatten():
token_counts.update(row.split(' '))
# + id="GiOWbc15ycOb" colab={"base_uri": "https://localhost:8080/"} outputId="657733d7-3de0-44e2-a1e0-ac71b753cfad"
print("Total unique tokens:", len(token_counts))
print('\n'.join(map(str, token_counts.most_common(n=5))))
print("...")
print('\n'.join(map(str, token_counts.most_common()[-3:])))
assert token_counts.most_common(1)[0][1] in range(2600000, 2700000)
assert len(token_counts) in range(200000, 210000)
print("Correct!")
# + id="nd5v3BNfycOf" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="90ef92d7-ee69-4b0f-b9e8-8b9286420526"
# Let's see how many words are there for each count
plt.hist(list(token_counts.values()), range=[0, 10 ** 4], bins=50, log=True)
plt.xlabel("Word counts");
# + [markdown] id="znuXxeghycOh"
# Now filter tokens to a list of all tokens that occur at least 10 times.
# + id="SeNFBWx5ycOh"
min_count = 10
# Tokens from token_counts keys that had at least
# min_count occurrences throughout the dataset:
tokens = [token for token, count in token_counts.items() if count >= min_count]
# + id="RATIRyPKycOk" colab={"base_uri": "https://localhost:8080/"} outputId="a6093d05-954e-4fb2-ba6c-7fb35b9836ad"
# Add special tokens for unknown words and padding:
UNK, PAD = "UNK", "PAD"
tokens = [UNK, PAD] + sorted(tokens)
print("Vocabulary size:", len(tokens))
assert type(tokens) == list
assert len(tokens) in range(32000, 35000)
assert "me" in tokens
assert UNK in tokens
print("Correct!")
# + [markdown] id="cqEsgbjZycOo"
# Build an inverse token index: a dictionary from token (string) to its index in `tokens` (int).
# + id="L60lo1l_ycOq"
token_to_id = {
token: index
for index, token in enumerate(tokens)
}
# + id="DeAoVo4mycOr" colab={"base_uri": "https://localhost:8080/"} outputId="3e2ee253-9e29-43fc-c8fa-b20f30660f1e"
assert isinstance(token_to_id, dict)
assert len(token_to_id) == len(tokens)
for token in tokens:
assert tokens[token_to_id[token]] == token
print("Correct!")
# + [markdown] id="cmJAkq3gycOv"
# And finally, let's use the vocabulary we've built to map text lines into neural network-digestible matrices.
# + id="JEsLeBjVycOw"
UNK_IX, PAD_IX = map(token_to_id.get, [UNK, PAD])
def as_matrix(sequences, max_len=None):
"""Convert a list of tokens into a matrix with padding."""
if isinstance(sequences[0], str):
sequences = list(map(str.split, sequences))
max_len = min(max(map(len, sequences)), max_len or float("inf"))
matrix = np.full((len(sequences), max_len), np.int32(PAD_IX))
for i,seq in enumerate(sequences):
row_ix = [token_to_id.get(word, UNK_IX) for word in seq[:max_len]]
matrix[i, :len(row_ix)] = row_ix
return matrix
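How the padding works, shown on a toy vocabulary (the words and indices below are made up; the real `token_to_id` is far larger). Each line becomes a row, short rows are right-padded with `PAD`, and out-of-vocabulary words map to `UNK`:

```python
import numpy as np

toy_vocab = {"UNK": 0, "PAD": 1, "account": 2, "manager": 3, "sales": 4}
UNK_IX, PAD_IX = 0, 1

def as_matrix_sketch(lines):
    seqs = [line.split() for line in lines]
    max_len = max(len(s) for s in seqs)
    matrix = np.full((len(seqs), max_len), PAD_IX, dtype=np.int32)
    for i, seq in enumerate(seqs):
        matrix[i, :len(seq)] = [toy_vocab.get(w, UNK_IX) for w in seq]
    return matrix

print(as_matrix_sketch(["account manager", "senior sales manager"]))
# [[2 3 1]
#  [0 4 3]]
```

"senior" is not in the toy vocabulary, so it becomes `UNK` (0); the shorter first line is padded with `PAD` (1).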
# + id="JiBlPkdKycOy" colab={"base_uri": "https://localhost:8080/"} outputId="bef55d19-6f14-449b-c1e3-b6682e246e73"
print("Lines:")
print('\n'.join(data["Title"][::100000].values), end="\n\n")
print("Matrix:")
print(as_matrix(data["Title"][::100000]))
# + [markdown] id="nGOdZ3-dycO4"
# Now let's encode the categorical data we have.
#
# As usual, we shall use one-hot encoding for simplicity. Kudos if you implement more advanced encodings: tf-idf, pseudo-time-series, etc.
# + id="DpOlBp7ZycO6" colab={"base_uri": "https://localhost:8080/"} outputId="65e43843-ee37-42d5-9950-166d0f979a73"
from sklearn.feature_extraction import DictVectorizer
# We only consider top-1k most frequent companies to minimize memory usage:
top_companies, top_counts = zip(*Counter(data["Company"]).most_common(1000))
recognized_companies = set(top_companies)
data["Company"] = data["Company"].apply(
lambda comp: comp if comp in recognized_companies else "Other"
)
categorical_vectorizer = DictVectorizer(dtype=np.float32, sparse=False)
categorical_vectorizer.fit(data[categorical_columns].apply(dict, axis=1))
# + [markdown] id="yk4jmtAYycO8"
# ## The Deep Learning part
#
# Once we've learned how to tokenize the data, let's design a machine learning experiment.
#
# As before, we won't focus too much on validation, opting for a simple train-test split.
#
# __To be completely rigorous,__ we've committed a small crime here: we used the whole data for tokenization and vocabulary building. A more strict way would be to do that part on the training set only. You may want to do that and measure the magnitude of changes.
# + id="TngLcWA0ycO_" colab={"base_uri": "https://localhost:8080/"} outputId="81922f4b-36b9-41a2-ea33-fc84a2b98081"
from sklearn.model_selection import train_test_split
data_train, data_val = train_test_split(data, test_size=0.2, random_state=42)
data_train.index = range(len(data_train))
data_val.index = range(len(data_val))
print("Train size:", len(data_train))
print("Validation size:", len(data_val))
# + id="2PXuKgOSycPB"
def make_batch(data, max_len=None, word_dropout=0):
"""
Creates a neural-network-friendly dict from the batch data.
:param word_dropout: replaces token index with UNK_IX with this probability
:returns: a dict with {'title' : int64[batch, title_max_len]}
"""
batch = {}
batch["Title"] = as_matrix(data["Title"].values, max_len)
batch["FullDescription"] = as_matrix(data["FullDescription"].values,
max_len)
batch['Categorical'] = categorical_vectorizer.transform(
data[categorical_columns].apply(dict, axis=1)
)
if word_dropout != 0:
batch["FullDescription"] = apply_word_dropout(batch["FullDescription"],
1. - word_dropout)
if target_column in data.columns:
batch[target_column] = data[target_column].values
return batch
def apply_word_dropout(matrix, keep_prop, replace_with=UNK_IX, pad_ix=PAD_IX,):
dropout_mask = np.random.choice(2, np.shape(matrix),
p=[keep_prop, 1 - keep_prop])
dropout_mask &= matrix != pad_ix
return np.choose(dropout_mask, [matrix, np.full_like(matrix, replace_with)])
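A sanity check on the dropout logic above, exercised at the two extremes. The sketch repeats the same computation under a different name so it is self-contained; with keep probability 1 nothing changes, and with keep probability 0 every non-padding token becomes `UNK` while `PAD` positions are left alone:

```python
import numpy as np

UNK_IX, PAD_IX = 0, 1

def word_dropout_sketch(matrix, keep_prop, replace_with=UNK_IX, pad_ix=PAD_IX):
    # 1 with probability (1 - keep_prop), 0 otherwise:
    dropout_mask = np.random.choice(2, np.shape(matrix),
                                    p=[keep_prop, 1 - keep_prop])
    dropout_mask &= matrix != pad_ix   # never drop padding positions
    # where mask is 1, take the replacement; where 0, keep the original:
    return np.choose(dropout_mask, [matrix, np.full_like(matrix, replace_with)])

m = np.array([[2, 3, 1], [4, 5, 1]])          # 1 is PAD
print(word_dropout_sketch(m, keep_prop=1.0))  # unchanged
print(word_dropout_sketch(m, keep_prop=0.0))  # real tokens -> 0 (UNK), PAD kept
```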
# + id="I6LpEQf0ycPD"
a = make_batch(data_train[:3], max_len=10)
# + [markdown] id="0eI5h9UMycPF"
# ### Architecture
#
# Our main model consists of three branches:
# * Title encoder
# * Description encoder
# * Categorical features encoder
#
# We will then feed all 3 branches into one common network that predicts salary.
#
# <img src="https://github.com/yandexdataschool/nlp_course/raw/master/resources/w2_conv_arch.png" width=600px>
#
# This clearly doesn't fit into PyTorch __Sequential__ interface. To build such a network, one will have to use [__PyTorch nn.Module API__](https://pytorch.org/docs/stable/nn.html#torch.nn.Module).
# + [markdown] id="pmoUUz6ARJiw"
# But to start with, let's build a simple model using only part of the data. Let's create a baseline solution using only the description field (so it definitely fits into the Sequential model).
# + id="PBwYKnnpRJiw"
import torch
from torch import nn
import torch.nn.functional as F
# + id="ztyjBXN6RJiw"
# We will need these to make it simple:
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
class Reorder(nn.Module):
def forward(self, input):
return input.permute((0, 2, 1))
# + [markdown] id="HIAAc1AXRJiw"
# To generate minibatches we will use a simple Python generator:
# + id="PzExHeftRJiw"
def iterate_minibatches(data, batch_size=256, shuffle=True, cycle=False,
**kwargs):
"""Iterate over minibatches of data in random order."""
while True:
indices = np.arange(len(data))
if shuffle:
indices = np.random.permutation(indices)
for start in range(0, len(indices), batch_size):
batch = make_batch(data.iloc[indices[start : start + batch_size]],
**kwargs)
target = batch.pop(target_column)
yield batch, target
if not cycle:
break
# + id="Ni7r-s7hRJiw"
iterator = iterate_minibatches(data_train, 3)
batch, target = next(iterator)
# + id="Eei7FUikddJT"
# Here is some startup code:
n_tokens = len(tokens)
n_cat_features = len(categorical_vectorizer.vocabulary_)
hid_size = 64
# + id="u-ICT1rkRJix"
simple_model = nn.Sequential()
simple_model.add_module("emb", nn.Embedding(num_embeddings=n_tokens,
embedding_dim=hid_size))
simple_model.add_module("reorder", Reorder())
simple_model.add_module("conv1", nn.Conv1d(in_channels=hid_size,
out_channels=hid_size * 2,
kernel_size=3))
simple_model.add_module("relu1", nn.ReLU())
simple_model.add_module("bn1", nn.BatchNorm1d(num_features=hid_size * 2))
simple_model.add_module("conv2", nn.Conv1d(in_channels=hid_size * 2,
out_channels=hid_size * 4,
kernel_size=3))
simple_model.add_module("relu2", nn.ReLU())
simple_model.add_module("bn2", nn.BatchNorm1d(num_features=hid_size * 4))
simple_model.add_module("conv3", nn.Conv1d(in_channels=hid_size * 4,
out_channels=hid_size * 8,
kernel_size=2))
simple_model.add_module("adaptive_pool", nn.AdaptiveMaxPool1d(1))
simple_model.add_module("flatten", nn.Flatten())
simple_model.add_module("out", nn.Linear(8 * hid_size, 1))
# + [markdown] id="8J6t8GBNRJix"
# __Remember!__ We are working with regression problem and predicting only one number.
# + id="nlEjaQt8RJix" colab={"base_uri": "https://localhost:8080/"} outputId="eeef1ef3-88d1-4279-c547-c73c41d78f34"
# Try this to check your model.
# `torch.long` tensors are required for nn.Embedding layers.
simple_model(torch.tensor(batch["FullDescription"], dtype=torch.long)).shape
# + [markdown] id="vYf9NFdXRJix"
# And now a simple training pipeline:
# + colab={"base_uri": "https://localhost:8080/"} id="RUyKTKsPgu9p" outputId="28d6eb38-acfa-490c-acf5-8d0c78922451"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# + id="HVzuiX6CRJix" colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="65abf0fd-0eff-465a-dd5f-91ce50da55ac"
from IPython.display import clear_output
from random import sample
epochs = 1
model = simple_model.to(device)
opt = torch.optim.Adam(model.parameters())
loss_func = nn.MSELoss()
history = []
for epoch_num in range(epochs):
for idx, (batch, target) in enumerate(iterate_minibatches(data_train)):
opt.zero_grad()
# Preprocessing the batch data and target:
batch = torch.tensor(batch["FullDescription"],
dtype=torch.long).to(device)
target = torch.tensor(target).to(device)
predictions = model(batch)
predictions = predictions.view(predictions.size(0))
loss = loss_func(predictions, target)
# Train with backprop:
loss.backward()
opt.step()
history.append(loss.item())
if (idx + 1) % 10 == 0:
clear_output(True)
plt.plot(history, label="loss")
plt.legend()
plt.show()
# + [markdown] id="xljrKb4yRJix"
# To evaluate the model, it should be switched to `eval` mode.
# + id="mLsEXaIKRJix" colab={"base_uri": "https://localhost:8080/"} outputId="7fb03815-08a3-4698-9a38-a640c1f0cf4e"
simple_model.eval()
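# `eval()` matters here because the model contains `BatchNorm1d` layers, which switch from per-batch statistics to running statistics in eval mode. For inference it is also common to wrap the forward pass in `torch.no_grad()` to skip gradient bookkeeping. A minimal sketch with a toy stand-in model (not the model above):

```python
import torch
import torch.nn as nn

# A tiny stand-in model with BatchNorm to illustrate train/eval behavior.
toy = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))
toy.eval()  # BatchNorm now uses running statistics; Dropout (if any) is disabled

with torch.no_grad():  # skip autograd bookkeeping during inference
    out = toy(torch.randn(2, 4))

assert not toy.training        # eval() clears the training flag on every submodule
assert not out.requires_grad   # no_grad() prevents graph construction
```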
# + [markdown] id="wS5zRKFWRJix"
# Let's check the model quality.
# + id="T9XIwJaFkqfO"
from tqdm import tqdm, tqdm_notebook
batch_size = 256
def print_metrics(model, data, batch_size=batch_size, name="", **kw):
squared_error = abs_error = num_samples = 0.0
for batch_x, batch_y in tqdm(iterate_minibatches(data,
batch_size=batch_size,
shuffle=False, **kw)):
batch = torch.tensor(batch_x["FullDescription"],
dtype=torch.long).to(device)
batch_pred = model(batch)[:, 0].detach().cpu().numpy()
squared_error += np.sum(np.square(batch_pred - batch_y))
abs_error += np.sum(np.abs(batch_pred - batch_y))
num_samples += len(batch_y)
print("%s results:" % (name or ""))
print("Mean square error: %.5f" % (squared_error / num_samples))
print("Mean absolute error: %.5f" % (abs_error / num_samples))
return squared_error, abs_error
# + id="RpYSmHBhRJiy" colab={"base_uri": "https://localhost:8080/"} outputId="dedc01c9-de5b-4a20-fb7b-80e4fa217578"
print_metrics(simple_model, data_train, name="Train")
print_metrics(simple_model, data_val, name="Val");
# + [markdown] id="uFddHFftRJiy"
# ## Bonus area: three-headed network
#
# Now you can try to implement the network we've discussed above. Use [__PyTorch nn.Module API__](https://pytorch.org/docs/stable/nn.html#torch.nn.Module).
# + id="3rX2bWKVRJiy"
class ThreeInputsNet(nn.Module):
def __init__(self, n_tokens=len(tokens),
n_cat_features=len(categorical_vectorizer.vocabulary_),
hid_size=64):
super(ThreeInputsNet, self).__init__()
# Title head:
self.title_emb = nn.Embedding(num_embeddings=n_tokens,
embedding_dim=hid_size)
self.title_conv1 = nn.Conv1d(in_channels=hid_size,
out_channels=hid_size,
kernel_size=2)
self.title_relu1 = nn.ReLU()
self.title_bn1 = nn.BatchNorm1d(num_features=hid_size)
self.title_conv2 = nn.Conv1d(in_channels=hid_size,
out_channels=hid_size,
kernel_size=2)
self.title_pool = nn.AdaptiveMaxPool1d(1)
self.title_flatten = nn.Flatten()
self.title_out = nn.Linear(hid_size, 1)
# Description head:
self.full_emb = nn.Embedding(num_embeddings=n_tokens,
embedding_dim=hid_size)
self.full_conv1 = nn.Conv1d(in_channels=hid_size,
out_channels=hid_size * 2,
kernel_size=3)
self.full_relu1 = nn.ReLU()
self.full_bn1 = nn.BatchNorm1d(num_features=hid_size * 2)
self.full_conv2 = nn.Conv1d(in_channels=hid_size * 2,
out_channels=hid_size * 4,
kernel_size=3)
self.full_relu2 = nn.ReLU()
self.full_bn2 = nn.BatchNorm1d(num_features=hid_size * 4)
self.full_conv3 = nn.Conv1d(in_channels=hid_size * 4,
out_channels=hid_size * 8,
kernel_size=2)
self.full_pool = nn.AdaptiveMaxPool1d(1)
self.full_flatten = nn.Flatten()
self.full_out = nn.Linear(8 * hid_size, 1)
# Categorical head:
self.category_out = nn.Linear(n_cat_features, 1)
# Final layer:
self.out = nn.Linear(3, 1)
def forward(self, whole_input):
input1, input2, input3 = whole_input
title_beg = self.title_emb(input1).permute((0, 2, 1))
title = self.title_conv1(title_beg)
title = self.title_relu1(title)
title = self.title_bn1(title)
title = self.title_conv2(title)
title = self.title_pool(title)
title = self.title_flatten(title)
title = self.title_out(title)
full_beg = self.full_emb(input2).permute((0, 2, 1))
full = self.full_conv1(full_beg)
full = self.full_relu1(full)
full = self.full_bn1(full)
full = self.full_conv2(full)
full = self.full_relu2(full)
full = self.full_bn2(full)
full = self.full_conv3(full)
full = self.full_pool(full)
full = self.full_flatten(full)
full = self.full_out(full)
category = self.category_out(input3)
concatenated = torch.cat([
title.view(title.size(0), -1),
full.view(full.size(0), -1),
category.view(category.size(0), -1)
], dim=1)
out = self.out(concatenated)
return out
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="yZwCX-oCkU9f" outputId="18740d47-7c33-4df0-8c63-2fa71eb91a6c"
from IPython.display import clear_output
from random import sample
epochs = 1
model = ThreeInputsNet().to(device)
opt = torch.optim.Adam(model.parameters())
loss_func = nn.MSELoss()
history = []
for epoch_num in range(epochs):
for idx, (batch, target) in enumerate(iterate_minibatches(data_train)):
opt.zero_grad()
# Preprocessing the batch data and target:
batch = (
torch.tensor(batch["Title"], dtype=torch.long).to(device),
torch.tensor(batch["FullDescription"], dtype=torch.long).to(device),
torch.tensor(batch["Categorical"], dtype=torch.float32).to(device),
)
target = torch.tensor(target).to(device)
predictions = model(batch)
predictions = predictions.view(predictions.size(0))
loss = loss_func(predictions, target)
# Train with backprop:
loss.backward()
opt.step()
history.append(loss.item())
if (idx + 1) % 10 == 0:
clear_output(True)
plt.plot(history, label="loss")
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="VtiF1XRHkbYb" outputId="23b0d2c0-d755-447c-db08-fe6436160194"
model.eval()
# + id="UKSbnIHJovrA"
from tqdm import tqdm, tqdm_notebook
batch_size = 256
def print_metrics(model, data, batch_size=batch_size, name="", **kw):
squared_error = abs_error = num_samples = 0.0
for batch_x, batch_y in tqdm(iterate_minibatches(data,
batch_size=batch_size,
shuffle=False, **kw)):
batch = (
torch.tensor(batch_x["Title"],
dtype=torch.long).to(device),
torch.tensor(batch_x["FullDescription"],
dtype=torch.long).to(device),
torch.tensor(batch_x["Categorical"],
dtype=torch.float32).to(device),
)
batch_pred = model(batch)[:, 0].detach().cpu().numpy()
squared_error += np.sum(np.square(batch_pred - batch_y))
abs_error += np.sum(np.abs(batch_pred - batch_y))
num_samples += len(batch_y)
print("%s results:" % (name or ""))
print("Mean square error: %.5f" % (squared_error / num_samples))
print("Mean absolute error: %.5f" % (abs_error / num_samples))
return squared_error, abs_error
# + colab={"base_uri": "https://localhost:8080/"} id="ZjbMKjA0kewb" outputId="84cdea36-4200-407e-e6d1-a9a8fd7f76d4"
print_metrics(model, data_train, name="Train")
print_metrics(model, data_val, name="Val");
# + [markdown] id="XdFLY-r2RJiy"
# ## Bonus area 2: comparing RNN to CNN
# Try implementing a simple RNN (or LSTM) and applying it to this task. Compare the quality and performance of these networks.
#
# *Hint: try to build networks with roughly the same number of parameters.*
# + id="GLfqDkSZrDC9"
# <YOUR CODE HERE>
# + [markdown] id="hEFuF8XnRJiy"
# ## Bonus area 3: fixing the data leaks
# Fix the data leak we ignored at the beginning of the __Deep Learning part__. Compare results with and without the data leak using the same architectures and training time.
#
# + id="dSoDVKXtRJiy"
# <YOUR CODE HERE>
# + [markdown] id="wLjGTjqHRJiy"
# __Terrible start-up idea #1962:__ make a tool that automatically rephrases your job description (or CV) to meet salary expectations. :)
| solved_class_notebooks/02_cnn_for_texts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Kubeflow pipelines
#
# This notebook goes through the steps of using Kubeflow pipelines using the Python3 interpreter (command-line) to preprocess, train, tune and deploy the babyweight model.
#
# ### 1. Start Hosted Pipelines and Notebook
#
# To try out this notebook, first launch Kubeflow Hosted Pipelines and an AI Platform Notebooks instance.
# Follow the instructions in this [README.md](pipelines/README.md) file.
# ### 2. Install necessary packages
# %pip install --quiet kfp python-dateutil --upgrade --use-feature=2020-resolver
# Make sure to *restart the kernel* to pick up new packages (look for the button in the ribbon of icons above this notebook)
# ### 3. Connect to the Hosted Pipelines
#
# Visit https://console.cloud.google.com/ai-platform/pipelines/clusters
# and get the hostname for your cluster. You can get it by clicking the Settings icon.
# Alternatively, click the Open Pipelines Dashboard link and look at the URL.
# Change the settings in the following cell
# CHANGE THESE
PIPELINES_HOST='447cdd24f70c9541-dot-us-central1.notebooks.googleusercontent.com'
PROJECT='qwiklabs-gcp-01-974853e7c436'
BUCKET='ai-analytics-solutions-kfpdemo'
import kfp
import os
client = kfp.Client(host=PIPELINES_HOST)
#client.list_pipelines()
# ## 4. [Optional] Build Docker containers
#
# I have made my containers public (See https://cloud.google.com/container-registry/docs/access-control on how to do this), so you can simply use my images.
# + language="bash"
# cd pipelines/containers
# #bash build_all.sh
# -
# Check that the Docker images work properly ...
# +
# #!docker run -t gcr.io/ai-analytics-solutions/babyweight-pipeline-bqtocsv:latest --project $PROJECT --bucket $BUCKET --local
# -
# ### 5. Upload and execute pipeline
#
# Upload to the Kubeflow pipeline cluster
# +
from pipelines.containers.pipeline import mlp_babyweight
args = {
'project' : PROJECT,
'bucket' : BUCKET
}
#pipeline = client.create_run_from_pipeline_func(mlp_babyweight.preprocess_train_and_deploy, args)
os.environ['HPARAM_JOB'] = 'babyweight_200207_231639' # change to job from complete step
pipeline = client.create_run_from_pipeline_func(mlp_babyweight.train_and_deploy, args)
# +
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -
| courses/machine_learning/deepdive/06_structured/7_pipelines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (reco_gpu)
# language: python
# name: reco_gpu
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # DKN : Deep Knowledge-Aware Network for News Recommendation
# DKN \[1\] is a deep learning model which incorporates information from a knowledge graph for better news recommendation. Specifically, DKN uses the TransX \[2\] method for knowledge graph representation learning, then applies a CNN framework, named KCNN, to combine entity embeddings with word embeddings and generate a final embedding vector for a news article. CTR prediction is made via an attention-based neural scorer.
# <img src="https://recodatasets.z20.web.core.windows.net/kdd2020/images%2FDKN-introduction-pic.JPG" width="600">
#
# ## Properties of DKN:
# - DKN is a content-based deep model for CTR prediction rather than traditional ID-based collaborative filtering.
# - It makes use of knowledge entities and common sense in news content via joint learning from semantic-level and knowledge-level representations of news articles.
# - DKN uses an attention module to dynamically calculate a user's aggregated historical representation.
#
#
#
# ## Data format:
# ### DKN takes the following files as input:
# - training / validation / test files: each line in these files represents one instance. The impression ID is used to evaluate performance within an impression session, so it is only needed at evaluation time; you can set it to 0 in the training data. The format is: <br>
# `[label] [userid] [CandidateNews]%[impressionid] `<br>
# e.g., `1 train_U1 N1%0` <br>
# - user history file: each line in this file represents a user's click history. Set the `his_size` parameter in the config file, which is the maximum number of clicks per user that we use. If a user's click history is longer than `his_size`, we automatically keep only the last `his_size` clicks; if it is shorter, we automatically pad with 0. The format is: <br>
# `[Userid] [newsid1,newsid2...]`<br>
# e.g., `train_U1 N1,N2` <br>
# - document feature file:
# It contains the word and entity features of news articles. A news article is represented by (aligned) title words and title entities. As a quick example, a news title may be: Trump to deliver State of the Union address next week. The title words value may then be CandidateNews:34,45,334,23,12,987,3456,111,456,432 and the title entity value may be entity:45,0,0,0,0,0,0,0,0,0. Only the first value of the entity vector is non-zero, due to the word Trump. The title and entity values are hashed from 1 to n (where n is the number of distinct words or entities). Each feature length should be fixed at k (the doc_size parameter): if a document has more than k words, truncate it to k words; if it has fewer, pad with 0 at the end.
# The format is: <br>
# `[Newsid] [w1,w2,w3...wk] [e1,e2,e3...ek]`
# - word embedding / entity embedding / context embedding files: these are npy files of pretrained embeddings. After loading, each file is an [n+1, k] two-dimensional matrix, where n is the number of words (or entities) in the hash dictionary and k is the dimension of the embedding. Note that embedding 0 is reserved for zero padding.<br>
# In this experiment, we used GloVe \[4\] vectors to initialize the word embeddings. We trained entity embeddings using TransE \[2\] on the knowledge graph, and the context embedding is the average of an entity's neighbors in the knowledge graph.<br>
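# As a hedged illustration of the document feature format above — the `recommenders` iterator does the real parsing; this helper is purely hypothetical:

```python
def parse_doc_feature_line(line):
    """Parse '[Newsid] [w1,...,wk] [e1,...,ek]' into (id, word ids, entity ids).

    A sketch based on the format described above, not the library's parser.
    """
    news_id, words, entities = line.strip().split(" ")
    word_ids = [int(w) for w in words.split(",")]
    entity_ids = [int(e) for e in entities.split(",")]
    # Title words and title entities are aligned, so both have length k (doc_size).
    assert len(word_ids) == len(entity_ids)
    return news_id, word_ids, entity_ids

news_id, word_ids, entity_ids = parse_doc_feature_line(
    "N1 34,45,334,23,12,987,3456,111,456,432 45,0,0,0,0,0,0,0,0,0"
)
# news_id == "N1"; entity_ids[0] == 45 (the 'Trump' entity), the rest are padding
```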
# ## Global settings and imports
# + pycharm={"is_executing": false}
from recommenders.models.deeprec.deeprec_utils import *
from recommenders.models.deeprec.models.dkn import *
from recommenders.models.deeprec.io.dkn_iterator import *
import time
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
# -
# ## Data paths
# Usually we will debug and search hyper-parameters on a small dataset. You can switch between the small dataset and full dataset by changing the value of `tag`.
tag = 'small' # small or full
# + pycharm={"is_executing": false}
data_path = 'data_folder/my/DKN-training-folder'
yaml_file = './dkn.yaml' # os.path.join(data_path, r'../../../../../../dkn.yaml')
train_file = os.path.join(data_path, r'train_{0}.txt'.format(tag))
valid_file = os.path.join(data_path, r'valid_{0}.txt'.format(tag))
test_file = os.path.join(data_path, r'test_{0}.txt'.format(tag))
user_history_file = os.path.join(data_path, r'user_history_{0}.txt'.format(tag))
news_feature_file = os.path.join(data_path, r'../paper_feature.txt')
wordEmb_file = os.path.join(data_path, r'word_embedding.npy')
entityEmb_file = os.path.join(data_path, r'entity_embedding.npy')
contextEmb_file = os.path.join(data_path, r'context_embedding.npy')
infer_embedding_file = os.path.join(data_path, r'infer_embedding.txt')
# -
# ## Create hyper-parameters
# + pycharm={"is_executing": false}
epoch=5
hparams = prepare_hparams(yaml_file,
news_feature_file = news_feature_file,
user_history_file = user_history_file,
wordEmb_file=wordEmb_file,
entityEmb_file=entityEmb_file,
contextEmb_file=contextEmb_file,
epochs=epoch,
is_clip_norm=True,
max_grad_norm=0.5,
history_size=20,
MODEL_DIR=os.path.join(data_path, 'save_models'),
learning_rate=0.001,
embed_l2=0.0,
layer_l2=0.0,
use_entity=True,
use_context=True
)
print(hparams.values)
# + pycharm={"is_executing": false}
input_creator = DKNTextIterator
# -
# ## Train the DKN model
# <img src="https://recodatasets.z20.web.core.windows.net/kdd2020/images%2FDKN-main.JPG" width="600">
# + pycharm={"is_executing": false}
model = DKN(hparams, input_creator)
# + pycharm={"is_executing": false}
t01 = time.time()
print(model.run_eval(valid_file))
t02 = time.time()
print((t02-t01)/60)
# + pycharm={"is_executing": false}
model.fit(train_file, valid_file)
# -
# Now we can evaluate the performance again, this time on the test set:
t01 = time.time()
print(model.run_eval(test_file))
t02 = time.time()
print((t02-t01)/60)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Document embedding inference API
# After training, you can obtain document embeddings through this inference API. The input file format is the same as the document feature file. The output file format is: `[Newsid] [embedding]`
# + pycharm={"is_executing": false, "name": "#%%\n"}
model.run_get_embedding(news_feature_file, infer_embedding_file)
# -
# We compare DKN's performance with and without using knowledge entities (DKN(-)):
#
# | Models | Group-AUC | MRR |NDCG@2 | NDCG@4 |
# | :------| :------: | :------: | :------: | :------ |
# | DKN | 0.9557 | 0.8993 | 0.8951 | 0.9123 |
# | DKN(-) | 0.9506 | 0.8817 | 0.8758 | 0.8982 |
# | LightGCN | 0.8608 | 0.5605 | 0.4975 | 0.5792 |
# ## Reference
# \[1\] Wang, Hongwei, et al. "DKN: Deep Knowledge-Aware Network for News Recommendation." Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2018.<br>
#
| examples/07_tutorials/KDD2020-tutorial/step3_run_dkn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Day 5
#
# >You're starting to sweat as the ship makes its way toward Mercury. The Elves suggest that you get the air conditioner working by upgrading your ship computer to support the Thermal Environment Supervision Terminal.
#
#
# # Part 1
#
# >The Thermal Environment Supervision Terminal (TEST) starts by running a diagnostic program (your puzzle input). The TEST diagnostic program will run on your existing Intcode computer after a few modifications:
#
# >First, you'll need to add two new instructions:
#
# >- Opcode 3 takes a single integer as input and saves it to the address given by its only parameter. For example, the instruction 3,50 would take an input value and store it at address 50.
# >- Opcode 4 outputs the value of its only parameter. For example, the instruction 4,50 would output the value at address 50.
#
# >Second, you'll need to add support for parameter modes:
#
# >- Right now, your ship computer already understands parameter mode 0, position mode, which causes the parameter to be interpreted as a position - if the parameter is 50, its value is the value stored at address 50 in memory.
# >- Now, your ship computer will also need to handle parameters in mode 1, immediate mode. In immediate mode, a parameter is interpreted as a value - if the parameter is 50, its value is simply 50.
#
# >**Parameters that an instruction writes to will never be in immediate mode.**
#
# >The TEST diagnostic program will start by requesting from the user the ID of the system to test by running an input instruction - provide it 1, the ID for the ship's air conditioner unit.
#
# >It will then perform a series of diagnostic tests confirming that various parts of the Intcode computer, like parameter modes, function correctly. For each test, it will run an output instruction indicating how far the result of the test was from the expected value, where 0 means the test was successful. Non-zero outputs mean that a function is not working correctly; check the instructions that were run before the output instruction to see which one failed.
#
# What this means is that when we are running the intcode each time that it outputs a non-zero value, we need to halt the program. Otherwise, if it outputs zero (the end of the diagnostic test) then we can continue running the program until...
#
# >Finally, the program will output a diagnostic code and immediately halt. This final output isn't an error; an output followed immediately by a halt means the program finished. If all outputs were zero except the diagnostic code, the diagnostic program ran successfully.
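# The mode digits can also be decoded arithmetically, as an alternative to the string-padding approach used in `parse_opcode` below — a small sketch of the convention described above (mode digits are read right-to-left from the instruction):

```python
def decode(instruction):
    """Split an instruction into (opcode, [mode1, mode2, mode3])."""
    opcode = instruction % 100               # last two digits select the operation
    modes = [(instruction // 100) % 10,      # mode of parameter 1
             (instruction // 1000) % 10,     # mode of parameter 2
             (instruction // 10000) % 10]    # mode of parameter 3
    return opcode, modes

# 1002 -> multiply; param 1 in position mode, param 2 immediate, param 3 position
assert decode(1002) == (2, [0, 1, 0])
assert decode(104) == (4, [1, 0, 0])
```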
import math
from typing import Tuple, List, Union
def parse_opcode(opcode: int) -> Tuple[int]:
opcode_param_list = [int(num) for num in str(opcode).zfill(5)]
# Don't need the leading zero from the opcode
return tuple(opcode_param_list[:3] + opcode_param_list[4:])
assert parse_opcode(3) == (0, 0, 0, 3)
assert parse_opcode(104) == (0, 0, 1, 4)
assert parse_opcode(1102) == (0, 1, 1, 2)
def get_output_positions(
opcode_idx: int, first_param_mode: int, second_param_mode: int, intcode: List[int]
) -> Tuple[int]:
first_output_position = opcode_idx + 1 if first_param_mode == 1 else intcode[opcode_idx + 1]
try:
second_output_position = opcode_idx + 2 if second_param_mode == 1 else intcode[opcode_idx + 2]
except IndexError:
return (first_output_position, 0)
return (first_output_position, second_output_position)
assert get_output_positions(
opcode_idx=4, first_param_mode=0, second_param_mode=0, intcode=[1100,1,238,225,4,-1,0]
) == (-1, 0)
assert get_output_positions(
opcode_idx=4, first_param_mode=1, second_param_mode=1, intcode=[1100,1,238,225,4,-1,0]
) == (5, 6)
assert get_output_positions(
opcode_idx=4, first_param_mode=1, second_param_mode=0, intcode=[1100,1,238,225,104,0]
) == (5, 0)
def run_opcode(
opcode: int, opcode_idx: int, intcode: List[int], opcode_input: int = None
) -> Union[List[int], int]:
"""Given an optcode and it's index in the intcode, perform the operations on the intcode
Parameters
opcode: A number no greater than 5 digits, ending in 01, 02, 03, 04
opcode_idx: The opcode's current position in the intcode, or the address in the memory
opcode_input: An input into the program. Only valid for opcode 3
intcode: The intcode where the opcode is running
Returns:
The modified intcode for opcodes 1, 2, 3, 7, and 8
The output of the diagnostic test or diagnostic code for opcode 4
The new opcode_idx for the intcode for opcode 5 and 6
"""
# Third param mode is always in position mode
_, second_param_mode, first_param_mode, opcode = parse_opcode(opcode)
first_output_position, second_output_position = get_output_positions(
opcode_idx, first_param_mode, second_param_mode, intcode
)
try:
intcode_replace_idx = intcode[opcode_idx + 3]
except IndexError:
pass
if opcode == 4:
return intcode[first_output_position]
elif opcode == 5 and intcode[first_output_position] != 0:
return intcode[second_output_position]
elif opcode == 6 and intcode[first_output_position] == 0:
return intcode[second_output_position]
if opcode == 1:
intcode[intcode_replace_idx] = intcode[first_output_position] + intcode[second_output_position]
elif opcode == 2:
intcode[intcode_replace_idx] = intcode[first_output_position] * intcode[second_output_position]
elif opcode == 3:
intcode[first_output_position] = opcode_input
elif opcode == 7:
intcode[intcode_replace_idx] = 1 if intcode[first_output_position] < intcode[second_output_position] else 0
elif opcode == 8:
intcode[intcode_replace_idx] = 1 if intcode[first_output_position] == intcode[second_output_position] else 0
return intcode
# +
# Tests for opcodes 7 and 8
assert run_opcode(opcode=8, opcode_idx=2, intcode=[3,9,8,9,10,9,4,9,99,7,8]) == [3,9,8,9,10,9,4,9,99,0,8]
assert run_opcode(opcode=7, opcode_idx=2, intcode=[3,9,7,9,10,9,4,9,99,2,8]) == [3,9,7,9,10,9,4,9,99,1,8]
assert run_opcode(opcode=1108, opcode_idx=2, intcode=[3,3,1108,8,8,3,4,3,99]) == [3,3,1108,1,8,3,4,3,99]
assert run_opcode(opcode=1107, opcode_idx=2, intcode=[3,3,1107,9,8,3,4,3,99]) == [3,3,1107,0,8,3,4,3,99]
# +
# Tests for opcodes 5 and 6
test_intcode = [3, 3, 1105, 0, 9, 1101, 0, 0, 12, 4, 12, 99, 1]
assert run_opcode(opcode=1105, opcode_idx=2, intcode=test_intcode) == test_intcode
assert run_opcode(opcode=1105, opcode_idx=2, intcode=[3, 3, 1105, 100, 9, 1101, 0, 0, 12, 4, 12, 99, 1]) == 9
test_intcode = [3,12,6,12,15,1,13,14,13,4,13,99,100,0,1,9]
assert run_opcode(opcode=6, opcode_idx=2, intcode=test_intcode) == test_intcode
assert run_opcode(opcode=6, opcode_idx=2, intcode=[3,12,6,12,15,1,13,14,13,4,13,99,0,0,1,9])
# +
assert run_opcode(opcode=104, opcode_idx=4, intcode=[1100,1,238,225,104,0]) == 0
assert run_opcode(opcode=4, opcode_idx=4, intcode=[1100,1,238,225,4,0]) == 1100
assert run_opcode(opcode=4, opcode_idx=4, intcode=[1100,1,238,225,4,-1,0]) == 0
assert run_opcode(opcode=3, opcode_idx=4, intcode=[1,1,1,1,3,2], opcode_input=1000) == [1,1,1000,1, 3, 2]
assert run_opcode(opcode=1003, opcode_idx=4, intcode=[1,1,1,1,3,2], opcode_input=1000) == [1,1,1000,1,3,2]
assert run_opcode(opcode=1101, opcode_idx=0, intcode=[1101,0,0,0,99]) == [0,0,0,0,99]
assert run_opcode(opcode=1102, opcode_idx=0, intcode=[1102,3,0,3,99]) == [1102,3,0,0,99]
assert run_opcode(opcode=101, opcode_idx=0, intcode=[1101,0,0,1,99]) == [1101,1101,0,1,99]
assert run_opcode(opcode=1002, opcode_idx=0, intcode=[1102,0,1,1,99]) == [1102,1102,1,1,99]
assert run_opcode(opcode=1101, opcode_idx=0, intcode=[1101,-1,-1,0,99]) == [-2,-1,-1,0,99]
assert run_opcode(opcode=1002, opcode_idx=0, intcode=[1102,-1,1,1,99]) == [1102,99,1,1,99]
# -
# Previous tests should also still work
assert run_opcode(1, 0, [1,0,0,0,99]) == [2,0,0,0,99]
assert run_opcode(2, 0, [2,3,0,3,99]) == [2,3,0,6,99]
assert run_opcode(2, 0, [2,4,4,5,99,0]) == [2,4,4,5,99,9801]
def run_diagnostic_tests(intcode: List[int], diagnostic_input: int) -> int:
"""Keep running the optcodes in the intcode until 99 is reached"""
intcode = intcode.copy()
opcode_idx = 0
# Assumed to be a 3
opcode = intcode[opcode_idx]
# Run once with the given input, increment, then start loop
intcode = run_opcode(opcode, opcode_idx, intcode, diagnostic_input)
opcode_idx += 2
while True:
opcode = intcode[opcode_idx]
opcode_num = int(str(opcode)[-1])
# Look for exit first
if opcode_num == 4:
result = run_opcode(opcode, opcode_idx, intcode)
end_of_program = intcode[opcode_idx + 2] == 99
if result == 0 and not end_of_program:
# Passed the test
opcode_idx += 2
continue
elif result != 0 and not end_of_program:
raise ValueError("Non-zero Test result before end of program")
else:
return result
# Check opcodes that move the opcode_idx
if opcode_num in [5, 6]:
result = run_opcode(opcode, opcode_idx, intcode)
opcode_idx = result if isinstance(result, int) else opcode_idx + 3
continue
# Run regular opcodes
intcode = run_opcode(opcode, opcode_idx, intcode)
opcode_idx += 4
# +
assert run_diagnostic_tests(intcode=[3,4,1,0,1,5,4,0,99], diagnostic_input=2) == 3
assert run_diagnostic_tests(intcode=[3,4,2,0,1,5,4,5,99], diagnostic_input=2) == 6
assert run_diagnostic_tests(intcode=[3,3,1102,0,1,7,104,0,99], diagnostic_input=1) == 1
assert run_diagnostic_tests(intcode=[3,12,6,12,15,1,13,14,13,4,13,99,-1,0,1,9], diagnostic_input=0) == 0
assert run_diagnostic_tests(intcode=[3,12,6,12,15,1,13,14,13,4,13,99,-1,0,1,9], diagnostic_input=10) == 1
assert run_diagnostic_tests(intcode=[3,3,1105,-1,9,1101,0,0,12,4,12,99,1], diagnostic_input=0) == 0
assert run_diagnostic_tests(intcode=[3,3,1105,-1,9,1101,0,0,12,4,12,99,1], diagnostic_input=7) == 1
# big_test_intcode = [3,21,1008,21,8,20,1005,20,22,107,8,21,20,1006,20,31,1106,0,36,98,0,0,1002,21,125,20,4,20,1105,1,46,104,999,1105,1,46,1101,1000,1,20,4,20,1105,1,46,98,99]
# print(big_test_intcode, "\n")
# big_test_intcode1 = run_opcode(opcode=3, opcode_idx=0, intcode=big_test_intcode, opcode_input=7)
# print(big_test_intcode1, "\n")
# big_test_intcode2 = run_opcode(opcode=1108, opcode_idx=2, intcode=big_test_intcode1)
# print(big_test_intcode2, "\n")
# big_test_intcode3 = run_opcode(opcode=1005, opcode_idx=6, intcode=big_test_intcode2)
# print(big_test_intcode3, "\n")
# big_test_intcode4 = run_opcode(opcode=107, opcode_idx=9, intcode=big_test_intcode3)
# print(big_test_intcode4, "\n")
# new_idx = run_opcode(opcode=1006, opcode_idx=13, intcode=big_test_intcode4)
# print(new_idx, "\n")
# run_opcode(opcode=104, opcode_idx=31, intcode=big_test_intcode4)
# -
# Input given from puzzle
intcode = [3,225,1,225,6,6,1100,1,238,225,104,0,1101,33,37,225,101,6,218,224,1001,224,-82,224,4,224,102,8,223,223,101,7,224,224,1,223,224,223,1102,87,62,225,1102,75,65,224,1001,224,-4875,224,4,224,1002,223,8,223,1001,224,5,224,1,224,223,223,1102,49,27,225,1101,6,9,225,2,69,118,224,101,-300,224,224,4,224,102,8,223,223,101,6,224,224,1,224,223,223,1101,76,37,224,1001,224,-113,224,4,224,1002,223,8,223,101,5,224,224,1,224,223,223,1101,47,50,225,102,43,165,224,1001,224,-473,224,4,224,102,8,223,223,1001,224,3,224,1,224,223,223,1002,39,86,224,101,-7482,224,224,4,224,102,8,223,223,1001,224,6,224,1,223,224,223,1102,11,82,225,1,213,65,224,1001,224,-102,224,4,224,1002,223,8,223,1001,224,6,224,1,224,223,223,1001,14,83,224,1001,224,-120,224,4,224,1002,223,8,223,101,1,224,224,1,223,224,223,1102,53,39,225,1101,65,76,225,4,223,99,0,0,0,677,0,0,0,0,0,0,0,0,0,0,0,1105,0,99999,1105,227,247,1105,1,99999,1005,227,99999,1005,0,256,1105,1,99999,1106,227,99999,1106,0,265,1105,1,99999,1006,0,99999,1006,227,274,1105,1,99999,1105,1,280,1105,1,99999,1,225,225,225,1101,294,0,0,105,1,0,1105,1,99999,1106,0,300,1105,1,99999,1,225,225,225,1101,314,0,0,106,0,0,1105,1,99999,1107,677,226,224,1002,223,2,223,1005,224,329,101,1,223,223,8,677,226,224,102,2,223,223,1006,224,344,1001,223,1,223,108,677,677,224,1002,223,2,223,1006,224,359,1001,223,1,223,1108,226,677,224,102,2,223,223,1006,224,374,1001,223,1,223,1008,677,226,224,102,2,223,223,1005,224,389,101,1,223,223,7,226,677,224,102,2,223,223,1005,224,404,1001,223,1,223,1007,677,677,224,1002,223,2,223,1006,224,419,101,1,223,223,107,677,226,224,102,2,223,223,1006,224,434,101,1,223,223,7,677,677,224,1002,223,2,223,1005,224,449,101,1,223,223,108,677,226,224,1002,223,2,223,1006,224,464,101,1,223,223,1008,226,226,224,1002,223,2,223,1006,224,479,101,1,223,223,107,677,677,224,1002,223,2,223,1006,224,494,1001,223,1,223,1108,677,226,224,102,2,223,223,1005,224,509,101,1,223,223,1007,226,677,224,102,2,223,223,1005,224,524,1001,223,1,223,1008,677,677,224,102,2,223,223,
1005,224,539,1001,223,1,223,1107,677,677,224,1002,223,2,223,1006,224,554,1001,223,1,223,1007,226,226,224,1002,223,2,223,1005,224,569,1001,223,1,223,7,677,226,224,1002,223,2,223,1006,224,584,1001,223,1,223,108,226,226,224,102,2,223,223,1005,224,599,1001,223,1,223,8,677,677,224,102,2,223,223,1005,224,614,1001,223,1,223,1107,226,677,224,102,2,223,223,1005,224,629,1001,223,1,223,8,226,677,224,102,2,223,223,1006,224,644,1001,223,1,223,1108,226,226,224,1002,223,2,223,1006,224,659,101,1,223,223,107,226,226,224,1002,223,2,223,1006,224,674,1001,223,1,223,4,223,99,226]
run_diagnostic_tests(intcode, diagnostic_input=1)
# # Part 2
#
# So we got the air conditioner working :tada: but now it's putting the heat right back into the spacecraft :scream: We need to turn on the thermal vents -- and that means adding more opcodes - specifically 5, 6, 7, and 8
#
# We'll use these to get the code to test the thermal vents
run_diagnostic_tests(intcode, diagnostic_input=5)
| day-5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Data Processing
#
# Merge all initial raw data files into a single dataframe and csv
# +
from os import listdir
import pandas as pd
from sgfmill.sgf import Sgf_game
from tqdm import tqdm
files = listdir("../data/raw")
games = []
for file in tqdm(files):
file = "".join(["../data/raw/", file])
with open(file, "rb") as f:
game = Sgf_game.from_bytes(f.read())
    winner = game.get_winner()
    # Some files don't record a winner. Skip those before deriving the loser.
    if not winner:
        continue
    loser = 'w' if winner == 'b' else 'b'
games.append({
'win_name': game.get_player_name(winner),
'lose_name': game.get_player_name(loser),
'win_color': winner,
'lose_color': loser,
'date': game.get_root().get('DT'),
'win_rank': game.get_root().get(f'{winner.upper()}R'),
'lose_rank': game.get_root().get(f'{loser.upper()}R'),
'komi': game.get_root().get('KM')
})
games_df = pd.DataFrame(games)
games_df
# -
games_df.to_csv("../data/processed/data.csv", index=False)
| process/notebooks/process.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from spark_privacy_preserver.mondrian_preserver import Preserver
spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
# +
data = [[6, '1', 'test1', 'x', 20],
[6, '1', 'test1', 'y', 30],
[8, '2', 'test2', 'x', 50],
[8, '2', 'test3', 'x', 45],
[8, '1', 'test2', 'y', 35],
[4, '2', 'test3', 'y', 20]]
cSchema = StructType([StructField("column1", IntegerType()),
StructField("column2", StringType()),
StructField("column3", StringType()),
StructField("column4", StringType()),
StructField("column5", IntegerType())])
df = spark.createDataFrame(data, schema=cSchema)
df.show()
# +
# K-Anonymity
# variables
categorical = set((
'column2',
'column3',
'column4'
))
sensitive_column = 'column4'
feature_columns = ['column1', 'column2', 'column3']
schema = StructType([
StructField("column1", StringType()),
StructField("column2", StringType()),
StructField("column3", StringType()),
StructField("column4", StringType()),
StructField("count", IntegerType()),
])
k = 2
# anonymizing
dfn = Preserver.k_anonymize(
df, k, feature_columns, sensitive_column, categorical, schema)
dfn.show()
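# To see what the k-anonymity guarantee means, here is a quick,
# library-independent property check (a sketch): after anonymization, every
# combination of quasi-identifier values should occur in at least `k` rows.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs in
    at least k of the rows (rows given as dicts)."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(c >= k for c in counts.values())

rows = [
    {"age": "20-30", "zip": "12***"},
    {"age": "20-30", "zip": "12***"},
    {"age": "30-40", "zip": "13***"},
]
print(is_k_anonymous(rows, ["age", "zip"], 2))  # False: the 30-40 group has only one row
```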
# +
# K-Anonymity without row suppression
# variables
categorical = set((
'column2',
'column3',
'column4'
))
sensitive_column = 'column4'
feature_columns = ['column2', 'column3', 'column5']
schema = StructType([
StructField("column1", IntegerType()),
StructField("column2", StringType()),
StructField("column3", StringType()),
StructField("column4", StringType()),
StructField("column5", StringType()),
])
k = 2
# anonymizing
dfn = Preserver.k_anonymize_w_user(
df, k, feature_columns, sensitive_column, categorical, schema)
dfn.show()
# +
#Single user anonymization
# variables
categorical = set((
'column2',
'column3',
'column4'
))
sensitive_column = 'column4'
schema = StructType([
StructField("column1", StringType()),
StructField("column2", StringType()),
StructField("column3", StringType()),
StructField("column4", StringType()),
StructField("column5", StringType()),
])
user = 6
usercolumn_name = "column1"
k = 2
# anonymizing
dfn = Preserver.anonymize_user(
df, k, user, usercolumn_name, sensitive_column, categorical, schema)
dfn.show()
# -
| mondrian_preserver demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing, svm
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
import joblib
from matplotlib import pyplot as plt
cluster = pd.read_excel("clustering.xlsx")
train = pd.read_excel("datatrain.xlsx")
val = pd.read_excel("datatest.xlsx")
cluster=cluster[['BARRIO','cluster']]
# +
nombres=['SEMANA','ESPECIAL','DIA','MES','ANIO','BARRIO','atropello','caida_ocupante','choque','otro','volcamiento','incendio','choque_atropello','CLUSTER']
train = pd.merge(train, cluster, how='left', on='BARRIO')
train.columns = nombres
val = pd.merge(val, cluster, how='left', on='BARRIO')
val.columns = nombres
val.to_excel("val.xlsx",index = False)
train.to_excel("train.xlsx",index = False)
# -
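# Since `how='left'` keeps every row of the left frame, any BARRIO missing from
# the cluster table ends up with a NaN cluster label. A toy sketch
# (hypothetical values) illustrating the behavior:

```python
import pandas as pd

left = pd.DataFrame({"BARRIO": ["A", "B", "C"]})
clusters = pd.DataFrame({"BARRIO": ["A", "B"], "cluster": [0, 1]})

merged = pd.merge(left, clusters, how="left", on="BARRIO")
print(int(merged["cluster"].isna().sum()))  # 1 -> 'C' had no cluster assigned
```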
val
| directorio de trabajo/Tacho/TrabajoTAE/.ipynb_checkpoints/cluster-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## VAE MNIST example: BO in a latent space
# In this tutorial, we use the MNIST dataset and some standard PyTorch examples to show a synthetic problem where the input to the objective function is a `28 x 28` image. The main idea is to train a [variational auto-encoder (VAE)](https://arxiv.org/abs/1312.6114) on the MNIST dataset and run Bayesian Optimization in the latent space. We also refer readers to [this tutorial](http://krasserm.github.io/2018/04/07/latent-space-optimization/), which discusses [the method](https://arxiv.org/abs/1610.02415) of jointly training a VAE with a predictor (e.g., classifier), and shows a similar tutorial for the MNIST setting.
# +
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.float
# -
# ### Problem setup
#
# Let's first define our synthetic expensive-to-evaluate objective function. We assume that it takes the following form:
#
# $$\text{image} \longrightarrow \text{image classifier} \longrightarrow \text{scoring function}
# \longrightarrow \text{score}.$$
#
# The classifier is a convolutional neural network (CNN) trained using the architecture of the [PyTorch CNN example](https://github.com/pytorch/examples/tree/master/mnist).
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4 * 4 * 50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
# We next instantiate the CNN for digit recognition and load a pre-trained model.
#
# Here, you may have to change `PRETRAINED_LOCATION` to the location of the `pretrained_models` folder on your machine.
# +
PRETRAINED_LOCATION = "./pretrained_models"
cnn_model = Net().to(device)
cnn_state_dict = torch.load(os.path.join(PRETRAINED_LOCATION, "mnist_cnn.pt"), map_location=device)
cnn_model.load_state_dict(cnn_state_dict);
# -
# Our VAE model follows the [PyTorch VAE example](https://github.com/pytorch/examples/tree/master/vae), except that we use the same data transform from the CNN tutorial for consistency. We then instantiate the model and again load a pre-trained model. To train these models, we refer readers to the PyTorch Github repository.
# +
class VAE(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 400)
self.fc21 = nn.Linear(400, 20)
self.fc22 = nn.Linear(400, 20)
self.fc3 = nn.Linear(20, 400)
self.fc4 = nn.Linear(400, 784)
def encode(self, x):
h1 = F.relu(self.fc1(x))
return self.fc21(h1), self.fc22(h1)
def reparameterize(self, mu, logvar):
std = torch.exp(0.5*logvar)
eps = torch.randn_like(std)
return mu + eps*std
def decode(self, z):
h3 = F.relu(self.fc3(z))
return torch.sigmoid(self.fc4(h3))
def forward(self, x):
mu, logvar = self.encode(x.view(-1, 784))
z = self.reparameterize(mu, logvar)
return self.decode(z), mu, logvar
vae_model = VAE().to(device)
vae_state_dict = torch.load(os.path.join(PRETRAINED_LOCATION, "mnist_vae.pt"), map_location=device)
vae_model.load_state_dict(vae_state_dict);
# -
# We now define the scoring function that maps digits to scores. The function below prefers the digit '3'.
def score(y):
"""Returns a 'score' for each digit from 0 to 9. It is modeled as a squared exponential
centered at the digit '3'.
"""
return torch.exp(-2 * (y - 3)**2)
# Given the scoring function, we can now write our overall objective, which as discussed above, starts with an image and outputs a score. Let's say the objective computes the expected score given the probabilities from the classifier.
def score_image_recognition(x):
"""The input x is an image and an expected score based on the CNN classifier and
the scoring function is returned.
"""
with torch.no_grad():
probs = torch.exp(cnn_model(x)) # b x 10
scores = score(torch.arange(10, device=device, dtype=dtype)).expand(probs.shape)
return (probs * scores).sum(dim=1)
# Finally, we define a helper function `decode` that takes as input the parameters `mu` and `logvar` of the variational distribution and performs reparameterization and the decoding. We use batched Bayesian optimization to search over the parameters `mu` and `logvar`
def decode(train_x):
with torch.no_grad():
decoded = vae_model.decode(train_x)
return decoded.view(train_x.shape[0], 1, 28, 28)
# #### Model initialization and initial random batch
#
# We use a `SingleTaskGP` to model the score of an image generated by a latent representation. The model is initialized with points drawn from $[-6, 6]^{20}$.
# +
from botorch.models import SingleTaskGP
from gpytorch.mlls.exact_marginal_log_likelihood import ExactMarginalLogLikelihood
bounds = torch.tensor([[-6.0] * 20, [6.0] * 20], device=device, dtype=dtype)
def initialize_model(n=5):
# generate training data
train_x = (bounds[1] - bounds[0]) * torch.rand(n, 20, device=device, dtype=dtype) + bounds[0]
train_obj = score_image_recognition(decode(train_x))
best_observed_value = train_obj.max().item()
# define models for objective and constraint
model = SingleTaskGP(train_X=train_x, train_Y=train_obj)
model = model.to(train_x)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
mll = mll.to(train_x)
return train_x, train_obj, mll, model, best_observed_value
# -
# #### Define a helper function that performs the essential BO step
# The helper function below takes an acquisition function as an argument, optimizes it, and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. For this example, we'll use a small batch of $q=3$.
# +
from botorch.optim import joint_optimize
BATCH_SIZE = 3
def optimize_acqf_and_get_observation(acq_func):
"""Optimizes the acquisition function, and returns a new candidate and a noisy observation"""
# optimize
candidates = joint_optimize(
acq_function=acq_func,
bounds=bounds,
q=BATCH_SIZE,
num_restarts=10,
raw_samples=200,
)
# observe new values
new_x = candidates.detach()
new_obj = score_image_recognition(decode(new_x))
return new_x, new_obj
# -
# ### Perform Bayesian Optimization loop with qEI
# The Bayesian optimization "loop" for a batch size of $q$ simply iterates the following steps: (1) given a surrogate model, choose a batch of points $\{x_1, x_2, \ldots x_q\}$, (2) observe $f(x)$ for each $x$ in the batch, and (3) update the surrogate model. We run `N_BATCH=50` iterations. The acquisition function is approximated using `MC_SAMPLES=2000` samples. We also initialize the model with 5 randomly drawn points.
# +
from botorch import fit_gpytorch_model
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.acquisition.sampler import SobolQMCNormalSampler
seed=1
torch.manual_seed(seed)
N_BATCH = 50
MC_SAMPLES = 2000
best_observed = []
# call helper function to initialize model
train_x, train_obj, mll, model, best_value = initialize_model(n=5)
best_observed.append(best_value)
# -
# We are now ready to run the BO loop (this may take a few minutes, depending on your machine).
# +
import warnings
warnings.filterwarnings("ignore")
print("\nRunning BO ", end='')
from matplotlib import pyplot as plt
# run N_BATCH rounds of BayesOpt after the initial random batch
for iteration in range(N_BATCH):
# fit the model
fit_gpytorch_model(mll)
    # define the qEI acquisition module using a QMC sampler
qmc_sampler = SobolQMCNormalSampler(num_samples=MC_SAMPLES, seed=seed)
qEI = qExpectedImprovement(model=model, sampler=qmc_sampler, best_f=best_value)
# optimize and get new observation
new_x, new_obj = optimize_acqf_and_get_observation(qEI)
# update training points
train_x = torch.cat((train_x, new_x))
train_obj = torch.cat((train_obj, new_obj))
# update progress
best_value = score_image_recognition(decode(train_x)).max().item()
best_observed.append(best_value)
# reinitialize the model so it is ready for fitting on next iteration
model.set_train_data(train_x, train_obj, strict=False)
print(".", end='')
# -
# EI recommends the best point observed so far. We can visualize what the images corresponding to recommended points *would have* been if the BO process ended at various times. Here, we show the progress of the algorithm by examining the images at 0%, 10%, 25%, 50%, 75%, and 100% completion. The first image is the best image found through the initial random batch.
# +
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
fig, axes = plt.subplots(1, 6, figsize=(14, 14))
percentages = np.array([0, 10, 25, 50, 75, 100], dtype=np.float32)
inds = (N_BATCH * BATCH_SIZE * percentages / 100 + 4).astype(int)
for i, ax in enumerate(axes.flat):
    b = torch.argmax(score_image_recognition(decode(train_x[:inds[i], :])), dim=0)
    img = decode(train_x[b].view(1, -1)).squeeze().cpu()
    ax.imshow(img, alpha=0.8, cmap='gray')
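# The index arithmetic above can be checked by hand: with `N_BATCH=50` rounds of
# `BATCH_SIZE=3` candidates there are 150 BO evaluations, and the `+ 4` offset
# points at the last of the 5 initial random points (index 4):

```python
import numpy as np

N_BATCH, BATCH_SIZE = 50, 3
percentages = np.array([0, 10, 25, 50, 75, 100], dtype=np.float32)
inds = (N_BATCH * BATCH_SIZE * percentages / 100 + 4).astype(int)
print(list(inds))  # [4, 19, 41, 79, 116, 154]
```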
| tutorials/vae_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quickstart
# A quick introduction on how to use the OQuPy package to compute the dynamics of a quantum system that is possibly strongly coupled to a structured environment. We illustrate this by applying the TEMPO method to the strongly coupled spin boson model.
# **Contents:**
#
# * Example - The spin boson model
# * 1. The model and its parameters
# * 2. Create system, correlations and bath objects
# * 3. TEMPO computation
# First, let's import OQuPy and some other packages we are going to use
# +
import sys
sys.path.insert(0,'..')
import oqupy
import numpy as np
import matplotlib.pyplot as plt
# -
# and check what version of OQuPy we are using.
oqupy.__version__
# Let's also import some shorthands for the spin Pauli operators and density matrices.
sigma_x = oqupy.operators.sigma("x")
sigma_y = oqupy.operators.sigma("y")
sigma_z = oqupy.operators.sigma("z")
up_density_matrix = oqupy.operators.spin_dm("z+")
down_density_matrix = oqupy.operators.spin_dm("z-")
# -------------------------------------------------
# ## Example - The spin boson model
# As a first example let's try to reconstruct one of the lines in figure 2a of [Strathearn2018] ([Nat. Comm. 9, 3322 (2018)](https://doi.org/10.1038/s41467-018-05617-3) / [arXiv:1711.09641v3](https://arxiv.org/abs/1711.09641)). In this example we compute the time evolution of a spin which is strongly coupled to an ohmic bath (spin-boson model). Before we go through this step by step below, let's have a brief look at the script that will do the job - just to have an idea where we are going:
# +
Omega = 1.0
omega_cutoff = 5.0
alpha = 0.3
system = oqupy.System(0.5 * Omega * sigma_x)
correlations = oqupy.PowerLawSD(alpha=alpha,
zeta=1,
cutoff=omega_cutoff,
cutoff_type='exponential')
bath = oqupy.Bath(0.5 * sigma_z, correlations)
tempo_parameters = oqupy.TempoParameters(dt=0.1, dkmax=30, epsrel=10**(-4))
dynamics = oqupy.tempo_compute(system=system,
bath=bath,
initial_state=up_density_matrix,
start_time=0.0,
end_time=15.0,
parameters=tempo_parameters)
t, s_z = dynamics.expectations(0.5*sigma_z, real=True)
plt.plot(t, s_z, label=r'$\alpha=0.3$')
plt.xlabel(r'$t\,\Omega$')
plt.ylabel(r'$<S_z>$')
plt.legend()
# -
# ### 1. The model and its parameters
# We consider a system Hamiltonian
# $$ H_{S} = \frac{\Omega}{2} \hat{\sigma}_x \mathrm{,}$$
# a bath Hamiltonian
# $$ H_{B} = \sum_k \omega_k \hat{b}^\dagger_k \hat{b}_k \mathrm{,}$$
# and an interaction Hamiltonian
# $$ H_{I} = \frac{1}{2} \hat{\sigma}_z \sum_k \left( g_k \hat{b}^\dagger_k + g^*_k \hat{b}_k \right) \mathrm{,}$$
# where $\hat{\sigma}_i$ are the Pauli operators, and the $g_k$ and $\omega_k$ are such that the spectral density $J(\omega)$ is
# $$ J(\omega) = \sum_k |g_k|^2 \delta(\omega - \omega_k) = 2 \, \alpha \, \omega \, \exp\left(-\frac{\omega}{\omega_\mathrm{cutoff}}\right) \mathrm{.} $$
# Also, let's assume the initial density matrix of the spin is the up state
# $$ \rho(0) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} $$
# and the bath is initially at zero temperature.
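# As a quick numerical sanity check (a sketch, not part of OQuPy), the ohmic
# spectral density with exponential cutoff can be evaluated directly; its
# maximum sits at $\omega = \omega_\mathrm{cutoff}$:

```python
import numpy as np

alpha, omega_cutoff = 0.3, 5.0

def J(omega):
    """Ohmic spectral density with exponential cutoff: 2*alpha*w*exp(-w/wc)."""
    return 2.0 * alpha * omega * np.exp(-omega / omega_cutoff)

print(round(float(J(omega_cutoff)), 4))  # peak value 2*alpha*wc/e ~= 1.1036
```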
# For the numerical simulation it is advisable to choose a characteristic frequency and express all other physical parameters in terms of this frequency. Here, we choose $\Omega$ for this and write:
#
# * $\Omega = 1.0 \Omega$
# * $\omega_c = 5.0 \Omega$
# * $\alpha = 0.3$
Omega = 1.0
omega_cutoff = 5.0
alpha = 0.3
# ### 2. Create system, correlations and bath objects
# #### System
# $$ H_{S} = \frac{\Omega}{2} \hat{\sigma}_x \mathrm{,}$$
system = oqupy.System(0.5 * Omega * sigma_x)
# #### Correlations
# $$ J(\omega) = 2 \, \alpha \, \omega \, \exp\left(-\frac{\omega}{\omega_\mathrm{cutoff}}\right) $$
# Because the spectral density is of the standard power-law form,
# $$ J(\omega) = 2 \alpha \frac{\omega^\zeta}{\omega_c^{\zeta-1}} X(\omega,\omega_c) $$
# with $\zeta=1$ and $X$ of the type ``'exponential'`` we define the spectral density with:
correlations = oqupy.PowerLawSD(alpha=alpha,
zeta=1,
cutoff=omega_cutoff,
cutoff_type='exponential')
# #### Bath
# The bath couples with the operator $\frac{1}{2}\hat{\sigma}_z$ to the system.
bath = oqupy.Bath(0.5 * sigma_z, correlations)
# ### 3. TEMPO computation
# Now that we have the system and the bath objects ready, we can compute the dynamics of the spin starting in the up state, from time $t=0$ to $t=5\,\Omega^{-1}$
dynamics_1 = oqupy.tempo_compute(system=system,
bath=bath,
initial_state=up_density_matrix,
start_time=0.0,
end_time=5.0,
tolerance=0.01)
# and plot the result:
t_1, z_1 = dynamics_1.expectations(0.5*sigma_z, real=True)
plt.plot(t_1, z_1, label=r'$\alpha=0.3$')
plt.xlabel(r'$t\,\Omega$')
plt.ylabel(r'$<S_z>$')
plt.legend()
# Yay! This looks like the plot in figure 2a [Strathearn2018].
# Let's have a look at the above warning. It said:
#
# ```
# WARNING: Estimating parameters for TEMPO calculation. No guarantie that resulting TEMPO calculation converges towards the correct dynamics! Please refere to the TEMPO documentation and check convergence by varying the parameters for TEMPO manually.
# ```
# We got this message because we didn't tell the package what parameters to use for the TEMPO computation, but instead only specified a `tolerance`. The package tries its best by implicitly calling the function `oqupy.guess_tempo_parameters()` to find parameters that are appropriate for the given spectral density and system objects.
# #### TEMPO Parameters
# There are **three key parameters** to a TEMPO computation:
#
# * `dt` - Length of a time step $\delta t$ - It should be small enough such that a Trotterisation between the system Hamiltonian and the environment is valid, and the environment auto-correlation function is reasonably well sampled.
#
# * `dkmax` - Number of time steps $K \in \mathbb{N}$ - It must be large enough such that $\delta t \times K$ is larger than the necessary memory time $\tau_\mathrm{cut}$.
#
# * `epsrel` - The maximal relative error $\epsilon_\mathrm{rel}$ in the singular value truncation - It must be small enough such that the numerical compression (using tensor network algorithms) does not truncate relevant correlations.
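# As a rough back-of-the-envelope check (a sketch, using the parameters chosen
# later in this notebook): with `dt=0.1` and `dkmax=30` the covered memory
# window is $\delta t \times K = 3\,\Omega^{-1}$, which exceeds the roughly
# $2\,\Omega^{-1}$ over which the auto-correlations decay.

```python
dt, dkmax = 0.1, 30    # the rough parameters chosen later in this notebook
tau_cut = 2.0          # approximate decay time of the auto-correlations, in 1/Omega

memory_window = dt * dkmax
print(memory_window >= tau_cut)  # True -> this dkmax covers the memory time at this dt
```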
# To choose the right set of initial parameters, we recommend first using the `oqupy.guess_tempo_parameters()` function and then checking with the helper function `oqupy.helpers.plot_correlations_with_parameters()` whether the requirements above are satisfied:
parameters = oqupy.guess_tempo_parameters(system=system,
bath=bath,
start_time=0.0,
end_time=5.0,
tolerance=0.01)
print(parameters)
fig, ax = plt.subplots(1,1)
oqupy.helpers.plot_correlations_with_parameters(bath.correlations, parameters, ax=ax)
# In this plot you see the real and imaginary parts of the environment's auto-correlation function as a function of the delay time $\tau$, together with the sampling corresponding to the chosen parameters. The spacing and the number of sampling points are given by `dt` and `dkmax` respectively. We can see that the auto-correlation function is close to zero for delay times larger than approximately $2 \Omega^{-1}$ and that the sampling points follow the curve reasonably well. Thus this is a reasonable set of parameters.
# We can choose a set of parameters by hand and bundle them into a `TempoParameters` object,
tempo_parameters = oqupy.TempoParameters(dt=0.1, dkmax=30, epsrel=10**(-4), name="my rough parameters")
print(tempo_parameters)
# and check again with the helper function:
fig, ax = plt.subplots(1,1)
oqupy.helpers.plot_correlations_with_parameters(bath.correlations, tempo_parameters, ax=ax)
# We could feed this object into the `oqupy.tempo_compute()` function to get the dynamics of the system. However, instead of that, we can split up the work that `oqupy.tempo_compute()` does into several steps, which allows us to resume a computation to get later system dynamics without having to start over. For this we start with creating a `Tempo` object:
tempo = oqupy.Tempo(system=system,
bath=bath,
parameters=tempo_parameters,
initial_state=up_density_matrix,
start_time=0.0)
# We can start by computing the dynamics up to time $5.0\,\Omega^{-1}$,
tempo.compute(end_time=5.0)
# then get and plot the dynamics of expectation values,
dynamics_2 = tempo.get_dynamics()
plt.plot(*dynamics_2.expectations(0.5*sigma_z, real=True), label=r'$\alpha=0.3$')
plt.xlabel(r'$t\,\Omega$')
plt.ylabel(r'$<S_z>$')
plt.legend()
# then continue the computation to $15.0\,\Omega^{-1}$,
tempo.compute(end_time=15.0)
# and then again get and plot the dynamics of expectation values.
dynamics_2 = tempo.get_dynamics()
plt.plot(*dynamics_2.expectations(0.5*sigma_z, real=True), label=r'$\alpha=0.3$')
plt.xlabel(r'$t\,\Omega$')
plt.ylabel(r'$<S_z>$')
plt.legend()
# Finally, we note: to validate the accuracy of the result **it is vital to check the convergence of such a simulation by varying all three computational parameters!** For this we recommend repeating the same simulation with slightly "better" parameters (smaller `dt`, larger `dkmax`, smaller `epsrel`) and considering the difference of the results as an estimate of an upper bound on the accuracy of the simulation.
# -------------------------------------------------
| tutorials/quickstart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: drlnd
# language: python
# name: drlnd
# ---
# # Collaboration and Competition
#
# ---
#
# You are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!
#
# ### 1. Start the Environment
#
# Run the next code cell to install a few packages. This line will take a few minutes to run!
# !pip -q install ./python
# The environment is already saved in the Workspace and can be accessed at the file path provided below.
# +
from unityagents import UnityEnvironment
import numpy as np
env = UnityEnvironment(file_name="/data/Tennis_Linux_NoVis/Tennis")
# -
# Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# ### 2. Examine the State and Action Spaces
#
# Run the code cell below to print some information about the environment.
# +
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
# -
# ### 3. Take Random Actions in the Environment
#
# In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
#
# Note that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment.
for i in range(5): # play game for 5 episodes
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
        env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
# When finished, you can close the environment.
# +
# env.close()
# -
# ### 4. It's Your Turn!
#
# Now it's your turn to train your own agent to solve the environment! A few **important notes**:
# - When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
# ```python
# env_info = env.reset(train_mode=True)[brain_name]
# ```
# - To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file! You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook.
# - In this coding environment, you will not be able to watch the agents while they are training. However, **_after training the agents_**, you can download the saved model weights to watch the agents on your own machine!
# +
import numpy as np
import random
import copy
from collections import namedtuple, deque
import matplotlib.pyplot as plt
from MAddpg_agent2 import MADDPG_Agent
import torch
import torch.nn.functional as F
import torch.optim as optim
import requests
import time
# !pip install progressbar
import progressbar as pb
# +
# define training configuration
episode = 2000
tmax = 2000
# declare variables for storing training results
mean_score_his = []
max_score_his = []
max_score_window = deque(maxlen=100)
# create MADDPG agent
MADDPG = MADDPG_Agent(state_size, action_size, num_agents, 20)
# print every
print_every = 10
# save every
save_every = 100
# +
# main training cell
# create progress bar
widget = ['training loop: ', pb.Percentage(), ' ',
pb.Bar(), ' ', pb.ETA() ]
timer = pb.ProgressBar(widgets=widget, maxval=episode).start()
# start time tracker time
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for e in range(episode):
# check current time and old time difference, if difference is too big, send request to keep workspace alive
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations # get the current state (for each agent)
score = np.zeros(num_agents) # initialize the score (for each agent)
MADDPG.reset() # reset action noise generator
for t in range(tmax):
action = MADDPG.act(state, True)
env_info = env.step(action)[brain_name]
reward = env_info.rewards
next_state = env_info.vector_observations
done = env_info.local_done
score += env_info.rewards
MADDPG.step(state, action, reward, next_state, done)
state = next_state
if np.any(done):
break
mean_score_his.append(np.mean(score))
max_score_his.append(np.max(score))
max_score_window.append(np.max(score))
    if e % print_every == 0:
        print("Episode: {0:d}, score: {1:f}".format(e+1, np.mean(score)))
if len(max_score_window)>=100 and np.mean(max_score_window)>=0.5:
print("environment solved at episode {}".format(e+1))
break
# update progress widget bar
timer.update(e+1)
timer.finish()
# -
# calculate average
max_score_mean = []
for i in reversed(range(len(max_score_his))):
id_start = max(i-100+1,0)
max_score_mean_temp = np.mean(max_score_his[id_start:i+1])
max_score_mean.insert(0,max_score_mean_temp)
plt.figure(1)
plt.plot(max_score_his)
plt.plot(max_score_mean)
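# The explicit reversed loop above computes a trailing 100-episode mean; an
# equivalent, more compact sketch using numpy:

```python
import numpy as np

def trailing_mean(values, window=100):
    """Mean of the last `window` entries up to and including each index."""
    values = np.asarray(values, dtype=float)
    return np.array([values[max(i - window + 1, 0):i + 1].mean()
                     for i in range(len(values))])

print(trailing_mean([1, 2, 3, 4], window=2))  # [1.  1.5 2.5 3.5]
```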
| p3_collab-compet/Tennis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
from WaveAutoEncoder_model import WaveAutoEncoder
from WaveAutoEncoder_data import ToData
import config
import torch
import pytorch_lightning as pl
from DatasetLib import Dataset_onMemory
import matplotlib.pyplot as plt
from torch.utils import data as DataUtil
data_set = Dataset_onMemory(ToData.filepath,ToData.data_name,using_length=-1,log=True)
model = WaveAutoEncoder()
batch_size = 1024
EPOCHS = 1000
data_loader = DataUtil.DataLoader(data_set,batch_size,shuffle=True,num_workers=0,pin_memory=False)
trainer = pl.Trainer(gpus=1,num_nodes=1,precision=16,max_epochs=EPOCHS)
trainer.fit(model,data_loader)
from datetime import datetime
now = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
nameE = f'params/WaveEncoder_{now}.params'
nameD = f'params/WaveDecoder_{now}.params'
name = f'params/WaveAutoEncoder_{now}.params'
torch.save(model.encoder.state_dict(),nameE)
torch.save(model.decoder.state_dict(),nameD)
torch.save(model.state_dict(),name)
print('saved')
# +
#model.load_state_dict(torch.load(''))
# -
viewlen = 10
with torch.no_grad():
model.eval()
model.cuda()
model.half()
data = data_set.data[0][:viewlen].cuda()
out = model(data).cpu().detach().numpy()
data = data.detach().cpu().numpy()
for i in range(len(data)):
fig,ax = plt.subplots(1,2,figsize=(7,3))
ax[0].plot(data[i].reshape(-1))
ax[1].plot(out[i].reshape(-1))
plt.show()
| WaveAutoEncoder_train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regression Template - Pycaret
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 39277, "status": "ok", "timestamp": 1615340491404, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="EOLpI6zUQPub" outputId="ea3f9547-b69f-48c6-9b32-a082e2710fcd"
# !pip install pycaret
# !pip install pycaret-nightly
# + executionInfo={"elapsed": 4702, "status": "ok", "timestamp": 1615340509694, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="t4oJZrFVQfj0"
import pandas as pd
from pycaret.regression import *
# + colab={"base_uri": "https://localhost:8080/", "height": 197} executionInfo={"elapsed": 872, "status": "ok", "timestamp": 1615341144502, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="BrMl81DVQrdD" outputId="de057853-216c-4fac-93b7-3766a2cbdaf4"
from pycaret.datasets import get_data
df=get_data('insurance')
# + executionInfo={"elapsed": 863, "status": "ok", "timestamp": 1615341173508, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="AR8qgCpEzFZp"
def CategoricalImputation_OneHot(df,variable):
temp=pd.get_dummies(df[variable], drop_first=True)
items=[df,temp]
df=pd.concat(items,axis=1)
return df
# + executionInfo={"elapsed": 844, "status": "ok", "timestamp": 1615341175651, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="iEi347oBzv6z"
#use one-hot encoding for the sex and smoker columns
df=CategoricalImputation_OneHot(df,'smoker')
df=CategoricalImputation_OneHot(df,'sex')
# + colab={"base_uri": "https://localhost:8080/", "height": 197} executionInfo={"elapsed": 583, "status": "ok", "timestamp": 1615341177144, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="09XRnU8j0PkD" outputId="62ab1d1c-dd6c-4618-9d74-24fe716db2f6"
df.head()
# + executionInfo={"elapsed": 851, "status": "ok", "timestamp": 1615341272108, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="XPTbrLam1nxJ"
#Now let's drop the original smoker and sex columns
df=df.drop('sex',axis=1)
df=df.drop('smoker',axis=1)
# + executionInfo={"elapsed": 780, "status": "ok", "timestamp": 1615341707645, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="tdfQXpZd19EU"
#For region, let's use ordered ordinal (target-mean) encoding
ordered_labels=df.groupby(['region'])['charges'].mean().sort_values().index
ordinal_label = {k:i for i, k in enumerate(ordered_labels, 1)}
df['region_ordered']=df['region'].map(ordinal_label)
# + executionInfo={"elapsed": 869, "status": "ok", "timestamp": 1615341794482, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="FRQWzHvR3n0w"
#Now let's drop the region column
df=df.drop('region',axis=1)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 882, "status": "ok", "timestamp": 1615341799503, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="U6M5MDggTdUE" outputId="8c0958cb-4430-4c10-c75b-3c8fb41cfe0d"
df.isnull().sum()
# + executionInfo={"elapsed": 872, "status": "ok", "timestamp": 1615342161117, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="YVrrTFde4WPk"
#Next, let's standardize all numeric input variables (StandardScaler standardizes rather than min-max normalizes)
from sklearn.preprocessing import StandardScaler
# set up the scaler
scaler = StandardScaler()
# fit the scaler to the data; it learns each column's mean and standard deviation
scaler.fit(df.iloc[:,[0,1,2,4,5,6]])
df_scaled=scaler.transform(df.iloc[:,[0,1,2,4,5,6]])
# + executionInfo={"elapsed": 1033, "status": "ok", "timestamp": 1615342299116, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="JoHgAmQE5VrU"
df_scaled=pd.DataFrame(df_scaled,columns=df.iloc[:,[0,1,2,4,5,6]].columns)
# + executionInfo={"elapsed": 933, "status": "ok", "timestamp": 1615342337407, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="Sv2bmlM2532k"
df_scaled['charges']=df['charges']
# + colab={"base_uri": "https://localhost:8080/", "height": 197} executionInfo={"elapsed": 858, "status": "ok", "timestamp": 1615342372212, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="692HNVKP6Bj7" outputId="2df5f1d5-a22c-4e22-f563-281b4cfe16b9"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 617, "referenced_widgets": ["fe6a0e267d684d1387283300ec847b93", "449244cee6d9477b90d2d844c7f2c266", "291b6766938042c6b1c2fac27992bf19", "48ef29b31e444fd198188e193e715f27", "3458d376fd414f7b9486d63046d6477f", "2d64f9669f8c4c6bbe44d32584b024ff"]} executionInfo={"elapsed": 6409, "status": "ok", "timestamp": 1615342463123, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="2EHrNT2SUHk2" outputId="291b9bb5-5b47-4524-feae-cf6224cfaf33"
data=setup(df_scaled,target='charges',preprocess=False,use_gpu=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 557, "referenced_widgets": ["bcbc1fc25a8e4700a5636d7d0f2875cf", "60f7a26415f2418290ffee8042de9143", "725e197a1e0f42ffbec096e6488062f8"]} executionInfo={"elapsed": 29568, "status": "ok", "timestamp": 1615342507561, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="s3Cpj793VyoH" outputId="6b47ef21-5216-4b19-ea8a-fe69bf4ce6ba"
#Selecting top3 models for tuning
top3_models=compare_models(n_select=3)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 814, "status": "ok", "timestamp": 1615342511071, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="UvYuqotGWh30" outputId="aec79e9b-c08d-41f0-e279-edefc61e15c8"
print(top3_models)
# + executionInfo={"elapsed": 832, "status": "ok", "timestamp": 1615342519823, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="Tik1eZOiauA2"
pd.set_option('display.max_rows',None)
# + colab={"base_uri": "https://localhost:8080/", "height": 407, "referenced_widgets": ["c71c2133ce6f47a092b725d47072e6fb", "0c3bdd5b5d1a4e76ac841aa46ec51a6e", "dc21aa4a58cd4d4ca47c18a220063c2e", "0dbd3ab89f1f46bb8cdd0245d5918669", "82b32e1c787b4fb3898b64e35302c65b", "f12678d3148f4b679397c85c38eb8ebb", "ca8b05d618ec43189c4e68e00c269725", "<KEY>", "fe66f8744dcb49c4a7e253fe35a4efdb"]} executionInfo={"elapsed": 167640, "status": "ok", "timestamp": 1615342689267, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="5FnvzbfIUm6v" outputId="9182b1a3-ae09-4310-fb3b-c2dee268c8e4"
#Tuning the top 3 models
tuned_model_top3=[tune_model(i) for i in top3_models]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1089, "status": "ok", "timestamp": 1615342702662, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="zjpk1R0OXCeT" outputId="dc891444-06cb-46dd-eafa-9ba49d922127"
print(tuned_model_top3)
# + colab={"base_uri": "https://localhost:8080/", "height": 407, "referenced_widgets": ["d939287f95974d6e9e35bb8202c7e6c6", "7acd167981ce4db98093d64306f34005", "19e7295cee05401e94148bc99acf4e8d", "3e971f1688d84f329232a7092fde6c11", "eb14e0b9e3364195bea8c75759d1faa8", "67289b08177e45f487628501a4b29c0c", "2b2e2de0ed6e4588b27097b747ac4bf9", "f7143bdc115945029c8b50de5518d270", "62a67b62963e478c8fb082a798ec1a8f"]} executionInfo={"elapsed": 53164, "status": "ok", "timestamp": 1615342759978, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="EL8y1PzLcM5a" outputId="8821bc78-26d5-4628-d3db-a2bab857843b"
#Ensembling top 3 tuned models
bagged_tuned_top3=[ensemble_model(i,method='Bagging') for i in tuned_model_top3]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 845, "status": "ok", "timestamp": 1615342793883, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="FfytTYUqSgYy" outputId="cb72ec72-3911-436a-be45-2859bdb54aa7"
print(bagged_tuned_top3)
# + colab={"base_uri": "https://localhost:8080/", "height": 407, "referenced_widgets": ["3c220e2447ab4bd4aa58e33b47bb42f9", "b1772c737686411f810a4e73769ef99e", "540ccdcb8fbc4a52a936026fe494cda7"]} executionInfo={"elapsed": 11187, "status": "ok", "timestamp": 1615342809645, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="dV4AaWzlf7en" outputId="5991e54f-2f09-49c2-f2e8-334171b16854"
#Blend top3 models
blender=blend_models(estimator_list=top3_models)
# + colab={"base_uri": "https://localhost:8080/", "height": 407, "referenced_widgets": ["f6fe16b889434f6fbaf6c25f1a382fa8", "16602f1db065474b923b2b145e6177e3", "810a1bbaf4b7472bad587eb48d326e9d"]} executionInfo={"elapsed": 107896, "status": "ok", "timestamp": 1615342938605, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="GeZvLd5LgPKq" outputId="6b37ac45-7548-4e02-fc3e-5c5920b491d8"
stacker=stack_models(top3_models)
# + colab={"base_uri": "https://localhost:8080/", "height": 393, "referenced_widgets": ["da6d7f98b6e0476d9ad11756a76ddbb3", "08ffe4b1f477403e9165022d9c8dc78c", "a7f119ae33f24d4bae95a8eedb414d6f", "<KEY>", "8967b8bb9a0d471ba03d6fbe965467c0", "9cd69c83997c4ed49d2969ed5db4a06b"]} executionInfo={"elapsed": 5040, "status": "ok", "timestamp": 1615342951215, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="92Hr3DlK__eM" outputId="7bb1347f-140d-40b6-edc4-75e57bff4a09"
[plot_model(i) for i in top3_models]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["8bf3de78fdb841308711929c34fb2c58", "c0defe4d9adb4e128b87bb4f4b1d69cd", "62c2f03af388474c928aaf7d93f226c8", "25962e6a2dc2431e86a7aaf1a9712fd7", "dee83030743e43f7a8b09779b8a12bc4", "53a920eb235f43a6ba703e461616cb90", "0ac0d051d0104e74912a936b81859eb7", "<KEY>", "f5f1e13761a94a5d9904aa308d5c5611", "<KEY>", "3a7de82265e541318139943d6bdd6f5c", "<KEY>", "<KEY>", "62c87846702549fea658dbe549d1f6a5", "c1ca9e74d0d84e31be6cbe9b71810f71", "<KEY>", "<KEY>", "5caccec133db4a27bc5ee416c689b84b", "cef4da6626cb45e2a712c6f69cea62e1", "cd2a5646ee134590a1c535cfed405eb2", "320b54aebd8c483e9581d8969330ba89", "<KEY>", "1fc556fe3f334e03bd48d3400993ee63", "cc0870ef733f4cb999aeafd7588228df", "b64a2381ed0a4bc99a0f7a4c35881f89", "d18100b1246d4e71b14d35bfa9e694b6", "25dcc7c6ff714c93838aea7a314bbea9"]} executionInfo={"elapsed": 2273, "status": "ok", "timestamp": 1615342970789, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="he9-IMfuAygS" outputId="eaac2b20-dee0-42d2-848e-aac49aeef192"
[evaluate_model(i) for i in top3_models]
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 9452, "status": "ok", "timestamp": 1615343050364, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="lGzO5CopXbNW" outputId="bf2fb5bd-e258-4ce7-c05e-5fcb29e0a49f"
# !pip install shap
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1067, "status": "ok", "timestamp": 1615355782154, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="XKw4ZHultRGe" outputId="08c6b83e-51e9-4d30-ca87-fbc37caf3b72"
top3_models[1]
# + executionInfo={"elapsed": 1018, "status": "ok", "timestamp": 1615356877110, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="ngQ3X-Sy7HF9"
import shap
explainer = shap.Explainer(top3_models[0])
shap_values = explainer(df_scaled)
# + colab={"base_uri": "https://localhost:8080/", "height": 310} executionInfo={"elapsed": 948, "status": "error", "timestamp": 1615356929745, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="kjXbjkzl9UY5" outputId="f811a7a1-d1dd-4516-f154-a0b85b3bae8b"
shap.plots.waterfall(shap_values[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 193} executionInfo={"elapsed": 1551, "status": "ok", "timestamp": 1615355902898, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="hfINilmt-t5J" outputId="12845cca-ffea-44c7-f131-28ec790e7b0e"
shap.initjs()
#Visualize a single prediction
shap.plots.force(shap_values[2])
# + colab={"base_uri": "https://localhost:8080/", "height": 334} executionInfo={"elapsed": 927, "status": "error", "timestamp": 1615356402294, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="wK2GHzHquJrh" outputId="9ef7344c-b0ab-43fe-b551-396186a414c2"
display(shap.plots.force(explainer.expected_value[0], shap_values[0]))
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 924, "status": "ok", "timestamp": 1615356758127, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="MBxoha0nugE1" outputId="10f10ef0-7b23-449f-8ed1-a032b08feef2"
import xgboost
import shap
# train an XGBoost model
X, y = shap.datasets.boston()
model = xgboost.XGBRegressor().fit(X, y)
# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = shap.Explainer(model)
shap_values = explainer(X)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1031, "status": "ok", "timestamp": 1615356782925, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="QqNdWyUIw_t9" outputId="2e0e0949-9c35-4eaf-bafc-c3b5f9d5c4d4"
shap_values
# + id="mYW_Y7Qtw-Yp"
# visualize the first prediction's explanation
shap.plots.waterfall(shap_values[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 335} executionInfo={"elapsed": 1526, "status": "ok", "timestamp": 1615355919665, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="2R1O78NOCs--" outputId="d315738d-2d95-4017-cb4c-1c060e22eb92"
#Visualize all predictions
shap.plots.scatter(shap_values[:,"age"],shap_values)
# + colab={"base_uri": "https://localhost:8080/", "height": 335} executionInfo={"elapsed": 1713, "status": "ok", "timestamp": 1615349034590, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="poq9zzy2Tf6g" outputId="7d3b2419-e14d-448b-877c-e121b3d78ed4"
shap.plots.scatter(shap_values[:,"age"], color=shap_values)
# + colab={"base_uri": "https://localhost:8080/", "height": 302} executionInfo={"elapsed": 1623, "status": "ok", "timestamp": 1615349072557, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="eCcUbrWFTqaB" outputId="e2fee4ec-b6bf-42a1-97d4-92e775d9bcd8"
shap.plots.beeswarm(shap_values)
# + colab={"base_uri": "https://localhost:8080/", "height": 454} executionInfo={"elapsed": 443696, "status": "ok", "timestamp": 1615308333788, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="rPYvhY8LaGJf" outputId="79bf5599-1a00-46ee-c98d-baa2aaa9e9a3"
top3_models
#[interpret_model(i) for i in best_model]
interpret_model(top3_models[1])
# + executionInfo={"elapsed": 450742, "status": "ok", "timestamp": 1615308340843, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="6tVjD8sRasDL"
final_model=automl(optimize='R2')
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 450737, "status": "ok", "timestamp": 1615308340845, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="ptiHi1QPDPCD" outputId="49d26ad5-be07-40f4-a2ea-b4942f0d87b0"
print(final_model)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 578281, "status": "ok", "timestamp": 1615308468397, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="W_XNs7ILEnOS" outputId="00402b78-7a08-4bf0-e5f8-dd2d3919738c"
from google.colab import drive
drive.mount('/content/gdrive')
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 579003, "status": "ok", "timestamp": 1615308469130, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="S9IDcW2LScEu" outputId="e7f658d6-8adc-4ac4-ac38-48a74817fab1"
save_model(final_model,'/content/gdrive/My Drive/regression_final')
# + executionInfo={"elapsed": 578995, "status": "ok", "timestamp": 1615308469134, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00180303595658708163"}, "user_tz": -330} id="I3V-vDdjETmN"
| templates/modelling/regression_pycaret.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h4>Backprop Refactor</h4>
#
# <p></p>
# One of the goals in refactoring the backprop equations is to reduce the length of the backprop equations
# as we go deeper into the stages. Each stage should rely only on the terms before the current stage and the output
# of the stage.
# <p></p>
# Refactor the backprop equations into terms where each stage is consistent and we can plug in different
# derivatives as both the layers and activation functions change.
# For 1 logistic neuron one weight update = $\Delta w_i = \eta \delta x_i$
# <p></p>
# where $\delta$ is:
# <p></p>
# $\delta = (y-\hat y)f'(h)$
# <p></p>
# where f'(h) refers to the derivative of the activation function.
# <p></p>
# $h=\sum w_i x_i$
# <p></p>
# This refactor shows that the change delta in the hidden layer weights depends on the derivative of the error function multiplied
# by the derivative of the activation function and the output values of the hidden layer. Evaluate the derivative
# of the activation function at the hidden-layer output values.
# +
#from Udacity DL Course lesson 2: week 12. Gradient Descent: The Code
#this is a single gradient-descent step: no convergence check and no epochs
import numpy as np
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1/(1+np.exp(-x))
def sigmoid_prime(x):
"""
# Derivative of the sigmoid function
"""
return sigmoid(x) * (1 - sigmoid(x))
#the tf xor gate has 4 different input combos for its 2 input nodes
learnrate = 0.5
x = np.array([1, 2, 3, 4])
y = np.array(0.5)
# Initial weights
w = np.array([0.5, -0.5, 0.3, 0.1])
### Calculate one gradient descent step for each weight
### Note: Some steps have been consolidated, so there are
### fewer variable names than in the above sample code
# TODO: Calculate the node's linear combination of inputs and weights
h = np.dot(x, w)  # np.transpose is a no-op on 1-D arrays, so a plain dot product suffices
# TODO: Calculate output of neural network
nn_output = sigmoid(h)
# TODO: Calculate error of neural network
error = np.subtract(y,nn_output)
# TODO: Calculate the error term
# Remember, this requires the output gradient, which we haven't
# specifically added a variable for.
error_term = error*sigmoid_prime(h)
# TODO: Calculate change in weights
del_w = learnrate*error_term*x
#this is vectorized? no this is completely useless
print('Neural Network output:')
print(nn_output)
print('Amount of Error:')
print(error)
print('Change in Weights:')
print(del_w)
# -
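As a sanity check on the analytic derivative used above, here is a minimal sketch (not part of the Udacity code) comparing `sigmoid_prime` against a central finite difference, `f'(x) ≈ (f(x+h) - f(x-h)) / (2h)`:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # analytic derivative: f'(x) = f(x) * (1 - f(x))
    return sigmoid(x) * (1 - sigmoid(x))

h = 1e-6
xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
# central finite difference approximation of the derivative
numeric = (sigmoid(xs + h) - sigmoid(xs - h)) / (2 * h)
print(np.max(np.abs(numeric - sigmoid_prime(xs))))  # difference should be tiny
```

If the analytic form were wrong, the discrepancy would be of order 1 rather than numerically negligible.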
# <h4>Tensorflow inputs</h4>
# The inputs to TF come from a placeholder, which allows values to be supplied later: input data is streamed
# through the placeholder at run time. Defining a placeholder takes a type (e.g. tf.float32), a shape, and an optional
# name. If you don't assign a name, the runtime assigns one; debugging is easier when the name matches the assignment statement.
# <p></p>
# a = tf.placeholder(tf.float32, shape=(2,2), name="a")
# <p></p>
# A tensorflow shape can be defined similarly to a numpy shape, but it can also take None as a dimension, meaning that dimension is determined at run time.
#
#
#
# +
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import OneHotEncoder
def trans_for_ohe(labels):
"""Transform a flat list of labels to what one hot encoder needs."""
print ("labels before reshape:",labels.shape)
#can use newaxis.
a = np.array(labels).reshape(len(labels), -1)
print("labels after reshape:",a.shape)
return a
XOR_X=np.array([[0,0],[0,1],[1,0],[1,1]])
XOR_Y = np.array([0,1,1,0])
print("len:",len(XOR_X))
XOR_X = tf.placeholder(tf.float32, shape=[None,len(XOR_X[0])], name="x")
#test different onehot
enc = OneHotEncoder()
enc.fit(trans_for_ohe(XOR_Y))
XOR_T = enc.transform(trans_for_ohe(XOR_Y)).toarray()
print("XOR_T:",XOR_T)
#
yonehot = tf.one_hot(XOR_Y,2)  # depth 2: one column per class for the binary labels
#print(yonehot)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
sess.run(tf.Print(yonehot,[yonehot]))
sess.run(tf.shape(yonehot))
print("shape XOR_X:",tf.shape(XOR_X))
print("shape onehot:",tf.shape(yonehot))
#yonehot.eval()
# -
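Since the TF1 session code above is awkward to run on current TensorFlow, here is a minimal numpy-only sketch (an illustration, not from the course) of what one-hot encoding the XOR labels should produce:

```python
import numpy as np

XOR_Y = np.array([0, 1, 1, 0])
n_classes = 2
# row i of the identity matrix is the one-hot vector for class i
onehot = np.eye(n_classes)[XOR_Y]
print(onehot)
```

Label 0 maps to `[1, 0]` and label 1 to `[0, 1]`, matching the `OneHotEncoder` output printed as `XOR_T` above.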
# <h4>Gradient Descent</h4>
# Given the equations to change the weights via backprop here is the
# addition of a learning rate and how to change the weight values given
# a loss function. The code below runs gradient descent using these definitions:
# forward pass = $\hat y = f(\sum w_i x_i)$
# <p></p>
# error term = $\delta = (y - \hat y) \cdot f'(\sum w_i x_i)$
# <p></p>
# update the weight step $\Delta w_i = \Delta w_i + \delta x_i$
# <p></p>
# update the weights $w_i = w_i + \eta \frac{\Delta w_i}{m}$
# <p></p>
#
#
#
#
# +
#From Udacity DL Lesson 2 13: Implementing Gradient Descent. Run this first before cell below
import numpy as np
import pandas as pd
admissions = pd.read_csv('binary.csv')
# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)
# Standarize features
for field in ['gre', 'gpa']:
mean, std = data[field].mean(), data[field].std()
data.loc[:,field] = (data[field]-mean)/std
# Split off random 10% of the data for testing
np.random.seed(42)
sample = np.random.choice(data.index, size=int(len(data)*0.9), replace=False)
data, test_data = data.loc[sample], data.drop(sample)  # .ix is deprecated; use .loc
# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
# +
#From Udacity DL Lesson 2 13: Implementing Gradient Descent. Run cell above to replace imports
import numpy as np
#from data_prep import features, targets, features_test, targets_test
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1 / (1 + np.exp(-x))
# TODO: We haven't provided the sigmoid_prime function like we did in
# the previous lesson to encourage you to come up with a more
# efficient solution. If you need a hint, check out the comments
# in solution.py from the previous lecture.
# Use to same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Note: We haven't included the h variable from the previous
# lesson. You can add it if you want, or you can calculate
# the h together with the output
# TODO: Calculate the output
output = sigmoid(np.dot(x,weights))
# TODO: Calculate the error
error = y-output
# TODO: Calculate the error term
error_term = error * output*(1-output)
# TODO: Calculate the change in weights for this sample
# and add it to the total weight change
del_w += error_term * x
# TODO: Update weights using the learning rate and the average change in weights
weights += learnrate * del_w/n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
tes_out = sigmoid(np.dot(features_test, weights))
predictions = tes_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
# +
#From Udacity DL Lesson 2 14.: Multilayer Perceptron
import numpy as np
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1/(1+np.exp(-x))
# Network size
N_input = 4
N_hidden = 3
N_output = 2
np.random.seed(42)
# Make some fake data
X = np.random.randn(4)
weights_input_to_hidden = np.random.normal(0, scale=0.1, size=(N_input, N_hidden))
weights_hidden_to_output = np.random.normal(0, scale=0.1, size=(N_hidden, N_output))
# TODO: Make a forward pass through the network
hidden_layer_in = np.dot(X, weights_input_to_hidden)
hidden_layer_out = sigmoid(hidden_layer_in)
print('Hidden-layer Output:')
print(hidden_layer_out)
output_layer_in = np.dot(hidden_layer_out, weights_hidden_to_output)
output_layer_out = sigmoid(output_layer_in)
print('Output-layer Output:')
print(output_layer_out)
# +
#From Udacity DL class lecture 2 15 Backpropagation
import tensorflow as tf
#second version... modify for input placeholders
XOR_X = [[0, 0], [0, 1], [1, 0], [1, 1]] # Features
XOR_Y = [0, 1, 1, 0] # Class labels
x = tf.placeholder(tf.float32,shape=[None,len(XOR_X[0])],name="x")  # 2 input features per example
y = tf.placeholder(tf.float32,shape=[None,2],name="y")
w1 = tf.Variable(tf.random_uniform([2, 2], -1, 1, seed=0),
name="w1")
w2 = tf.Variable(tf.random_uniform([2, 2], -1, 1,
seed=0),
name="w2")
b1 = tf.Variable(tf.zeros([2]), name="b1")
b2 = tf.Variable(tf.zeros([2]), name="b2")
#Is this useful?There are no iterations for GD.
# +
#material from Udacity; is this useful? probably not. Redo this in clearer
#notation, with the refactor of each stage.
#
import tensorflow as tf
import numpy as np
init_op = tf.global_variables_initializer()  # initialize_all_variables is deprecated
#run the graph
x = np.array([0.5, 0.1, -0.2])
target = 0.6
learnrate = 0.5
weights_input_hidden = np.array([[0.5, -0.6],
[0.1, -0.2],
[0.1, 0.7]])
weights_hidden_output = np.array([0.1, -0.3])
x1 = tf.convert_to_tensor(x)
weights_input_hidden1 = tf.convert_to_tensor(weights_input_hidden)
#x1_shape = tf.get_shape(x1)
print("x1.get_shape:",x1.get_shape())
x1shape = tf.shape(x1)
with tf.Session() as sess:
print("x1shape:",sess.run(x1shape))
## Forward pass
# x is 1-D, so reshape it to a (1, 3) row vector before matmul (transposing a 1-D tensor is a no-op)
hidden_layer_input = tf.matmul(tf.reshape(x1, [1, -1]), weights_input_hidden1)
hidden_layer_output = tf.sigmoid(hidden_layer_input)
with tf.Session() as sess:
    sess.run(init_op) #execute init_op
    hidden_layer_output = sess.run(hidden_layer_output)[0]  # back to a numpy array for the backprop below
    print(hidden_layer_output)
output_layer_in = np.dot(hidden_layer_output, weights_hidden_output)
output = sigmoid(output_layer_in)  # sigmoid() is defined in an earlier cell
## Backwards pass
## TODO: Calculate output error
## the error refers to the derivative of the least squares error. y_n-t_n where y_n is expected, t_n is NN output
error = target-output
# TODO: Calculate error term for output layer
output_error_term = error * output*(1-output)
# TODO: Calculate error term for hidden layer
#
hidden_error = np.dot(output_error_term, weights_hidden_output)
hidden_error_term = hidden_error*hidden_layer_output*(1-hidden_layer_output)
# TODO: Calculate change in weights for hidden layer to output layer
delta_w_h_o = learnrate*output_error_term*hidden_layer_output
# TODO: Calculate change in weights for input layer to hidden layer
delta_w_i_h = learnrate*hidden_error_term*x[:,None]
print('Change in weights for hidden layer to output layer:')
print(delta_w_h_o)
print('Change in weights for input layer to hidden layer:')
print(delta_w_i_h)
# -
# <h4>Backpropagation with 1 hidden layer refactor</h4>
# <p></p>
# error_hidden is the same as the hidden_error_term. error_hidden is not the same as hidden_error; the two have very different
# definitions.
# <p></p>
# error_hidden = $\delta_j$
# <p></p>
# hidden_error = $w_{jk}\delta_k$
# <p></p>
# error: the derivative of the least squares error at the output stage. They reversed the signs to $y-\hat y$:
# error = target - output
# <p></p>
# output_error_term: the error term above times the derivative of the logistic, $(y-\hat y) \cdot \hat y(1-\hat y)$
# <p></p>
# output_error_term = error * output * (1 - output)
# <p></p>
# hidden_error=np.dot(weights_hidden_output,output_error_term)
# <p></p>
# hidden_error_term = hidden_error * hidden_layer_output * (1 - hidden_layer_output)
# <p></p>
# The delta rule consists of 3 terms,
# <li>learning rate times</li>
# <li>delta error which is defined above and has 2 separate definitions, one for the output layer
# and one for the hidden layer. The hidden layer contains the backpropagated term from the stage before, the output layer. </li>
# <p></p>
# $\delta^{o} = (y-\hat y) \cdot f'(Wa)$
# where you take the derivative of the stage and their term Wa is our z where $z=\sum\limits_{n=0}^N w_h h_o$
# <p></p>
# $\delta^{h} = W\delta^{o}f'(h)$ where W is the weight matrix of the hidden layer, $\delta^{o}$
# is the backpropagated error term from the stage before and f'(h) is the derivative of the hidden layer
# with h the output of the hidden layer. They really mean f'(h) = h(1-h).
# <p></p>
# <li>input into layer or output of stage before</li>
# delta_who = learnrate * output_error_term * hidden_layer_output
# <p></p>
# If x is a 1-D vector, use x[:, None] to turn it into a column vector:
# delta_whi = learnrate * hidden_error_term * x[:, None]
# <p></p>
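The `x[:, None]` broadcasting trick can be checked with shapes alone. A minimal sketch (made-up values, with shapes matching the 3-input, 2-hidden-unit network used earlier in this notebook):

```python
import numpy as np

learnrate = 0.5
x = np.array([0.5, 0.1, -0.2])             # shape (3,): one entry per input feature
hidden_error_term = np.array([0.2, -0.1])  # shape (2,): one entry per hidden unit

# x[:, None] has shape (3, 1); broadcasting it against the (2,) error term
# yields a (3, 2) outer product, matching the (n_features, n_hidden) weight matrix
delta_whi = learnrate * hidden_error_term * x[:, None]
print(delta_whi.shape)  # (3, 2)
```

Without the `[:, None]`, numpy would try to broadcast `(3,)` against `(2,)` and raise an error instead of producing the weight-shaped update.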
# +
#From Udacity Class lesson 2 16 Implementing Backpropagation run this first before cell below
import numpy as np
import pandas as pd
admissions = pd.read_csv('binary.csv')
# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)
# Standarize features
for field in ['gre', 'gpa']:
mean, std = data[field].mean(), data[field].std()
data.loc[:,field] = (data[field]-mean)/std
# Split off random 10% of the data for testing
np.random.seed(21)
sample = np.random.choice(data.index, size=int(len(data)*0.9), replace=False)
data, test_data = data.loc[sample], data.drop(sample)  # .ix is deprecated; use .loc
# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
# +
#From Udacity DL class Lesson 2 16 Implementing BackPropagation
import numpy as np
#data_prep is supposed to be executed in the above cell first
#from data_prep import features, targets, features_test, targets_test
np.random.seed(21)
def sigmoid(x):
"""
Calculate sigmoid
"""
return 1 / (1 + np.exp(-x))
# Hyperparameters
n_hidden = 2 # number of hidden units
epochs = 900
learnrate = 0.005
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
size=n_hidden)
for e in range(epochs):
del_w_input_hidden = np.zeros(weights_input_hidden.shape)
del_w_hidden_output = np.zeros(weights_hidden_output.shape)
for x, y in zip(features.values, targets):
## Forward pass ##
# TODO: Calculate the output
hidden_input = np.dot(x,weights_input_hidden)
hidden_output = sigmoid(hidden_input)
output = sigmoid(np.dot(hidden_output,weights_hidden_output))
## Backward pass ##
# TODO: Calculate the network's prediction error
error = y - output
# TODO: Calculate error term for the output unit
output_error_term = error*output*(1-output)
## propagate errors to hidden layer
# TODO: Calculate the hidden layer's contribution to the error
        hidden_error = np.dot(output_error_term, weights_hidden_output)
# TODO: Calculate the error term for the hidden layer
hidden_error_term = hidden_error * hidden_output * (1-hidden_output)
# TODO: Update the change in weights
del_w_hidden_output += output_error_term * hidden_output
del_w_input_hidden += hidden_error_term * x[:,None]
# TODO: Update weights
weights_input_hidden += learnrate * del_w_input_hidden/n_records
weights_hidden_output += learnrate * del_w_hidden_output/n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
        hidden_output = sigmoid(np.dot(features.values, weights_input_hidden))
out = sigmoid(np.dot(hidden_output,
weights_hidden_output))
loss = np.mean((out - targets) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
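The analytic backward pass above (in particular the hidden-layer error term) can be checked against finite differences. This is a sketch on made-up toy data; the names mirror the loop above but the values are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def loss(w_ih, w_ho, x, y):
    # Forward pass of the same 2-layer network, with squared-error loss.
    h = sigmoid(x @ w_ih)
    out = sigmoid(h @ w_ho)
    return 0.5 * (y - out) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=3)
y = 1.0
w_ih = rng.normal(scale=0.5, size=(3, 2))
w_ho = rng.normal(scale=0.5, size=2)

# Analytic gradients, matching the backward pass in the training loop.
h = sigmoid(x @ w_ih)
out = sigmoid(h @ w_ho)
error = y - out
output_error_term = error * out * (1 - out)
hidden_error_term = output_error_term * w_ho * h * (1 - h)
grad_w_ho = -output_error_term * h           # d(loss)/d(w_ho)
grad_w_ih = -hidden_error_term * x[:, None]  # d(loss)/d(w_ih)

# Finite-difference check on one entry of each weight array.
eps = 1e-6
w = w_ho.copy(); w[0] += eps
num_ho = (loss(w_ih, w, x, y) - loss(w_ih, w_ho, x, y)) / eps
w2 = w_ih.copy(); w2[0, 0] += eps
num_ih = (loss(w2, w_ho, x, y) - loss(w_ih, w_ho, x, y)) / eps
```

The numeric and analytic gradients should agree to several decimal places; a mismatch here is the quickest way to catch bugs like computing `hidden_error` from the wrong quantities.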
# +
import tensorflow as tf
tf.reset_default_graph()
input_value = tf.constant(0.5,name="input_value")
weight = tf.Variable(1.0,name="weight")
expected_output = tf.constant(0.0,name="expected_output")
model = tf.multiply(input_value,weight,"model")
loss_function = tf.pow(expected_output - model,2,name="loss_function")
optimizer = tf.train.GradientDescentOptimizer(0.025).minimize(loss_function)
for value in [input_value,weight,expected_output,model,loss_function]:
tf.summary.scalar(value.op.name,value)
summaries = tf.summary.merge_all()
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
summary_writer = tf.summary.FileWriter('log_simple_stats',sess.graph)
#sess.run(tf.global_variables_initializer())
for i in range(100):
summary_writer.add_summary(sess.run(summaries),i)
sess.run(optimizer)
# +
import tensorflow as tf
#
x = tf.Variable(2.0)
y = 2.0 * (x**3)
z = 3.0 + y**2
grad_z = tf.gradients(z,[x,y])
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
result = sess.run(grad_z)
print (result)
# Which style is preferable: the context-manager style used in Google's internal code, or the explicit-Session style, which is easier to remember?
#with tf.Session() as sess:
# sess.run(x.initializer)
# result = sess.run(grad_z)
# print (result)
# +
import tensorflow as tf
tf.reset_default_graph()
x1 = tf.constant(0.1)
x2 = tf.constant(0.3)
w1 = tf.Variable(0.4)
w2 = tf.Variable(-0.2)
z = tf.add(tf.multiply(x1,w1),tf.multiply(x2,w2))
sig_out = tf.sigmoid(z)
grad = tf.gradients(z,[w1,w2])
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
result = sess.run(grad)
print (result)
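Since `grad` is taken with respect to `z = x1*w1 + x2*w2` (not `sig_out`), the expected result is simply `[x1, x2] = [0.1, 0.3]`. A tiny finite-difference sketch confirms this without TensorFlow:

```python
def z(w1, w2, x1=0.1, x2=0.3):
    # Same linear node as above: z = x1*w1 + x2*w2.
    return x1 * w1 + x2 * w2

eps = 1e-6
w1, w2 = 0.4, -0.2
dz_dw1 = (z(w1 + eps, w2) - z(w1, w2)) / eps
dz_dw2 = (z(w1, w2 + eps) - z(w1, w2)) / eps
print(dz_dw1, dz_dw2)  # close to 0.1 and 0.3
```

For a linear node the partial with respect to each weight is just the corresponding input, which is why the gradients are independent of the weight values themselves.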
# +
tf.reset_default_graph()
global_step = tf.Variable(0, dtype=tf.int32, trainable=False,
name='global_step')
x_ = tf.placeholder(tf.float32, shape=[4,2], name="x-input")
y_ = tf.placeholder(tf.float32, shape=[4,1], name="y-input")
Theta1 = tf.Variable(tf.random_uniform([2,2], -1, 1), name="Theta1")
Theta2 = tf.Variable(tf.random_uniform([2,1], -1, 1), name="Theta2")
Bias1 = tf.Variable(tf.zeros([2]), name="Bias1")
Bias2 = tf.Variable(tf.zeros([1]), name="Bias2")
A2 = tf.sigmoid(tf.matmul(x_, Theta1) + Bias1)
Hypothesis = tf.sigmoid(tf.matmul(A2, Theta2) + Bias2)
cost = tf.reduce_mean(( (y_ * tf.log(Hypothesis)) +
((1 - y_) * tf.log(1.0 - Hypothesis)) ) * -1)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
XOR_X = [[0,0],[0,1],[1,0],[1,1]]
XOR_Y = [[0],[1],[1],[0]]
for value in [Theta1, Theta2, Bias1, Bias2, Hypothesis]:
    tf.summary.histogram(value.op.name, value)  # these are tensors, so histogram rather than scalar
summaries = tf.summary.merge_all()
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
summary_writer = tf.summary.FileWriter('log_simple_stats', sess.graph)
for i in range(100000):
sess.run(train_step, feed_dict={x_: XOR_X, y_: XOR_Y})
summary_writer.add_summary(sess.run(summaries),i)
if i % 1000 == 0:
print('Epoch ', i)
print('Hypothesis ', sess.run(Hypothesis, feed_dict={x_: XOR_X, y_: XOR_Y}))
print('Theta1 ', sess.run(Theta1))
print('Bias1 ', sess.run(Bias1))
print('Theta2 ', sess.run(Theta2))
print('Bias2 ', sess.run(Bias2))
print('cost ', sess.run(cost, feed_dict={x_: XOR_X, y_: XOR_Y}))
        writer = tf.summary.FileWriter("./logs/xor_logs", sess.graph)
# +
# Siraj code, verify from videos.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
learning_rate = 0.01
training_iteration = 30
batch_size = 100
display_step = 2
x = tf.placeholder("float", [None, 784])
y = tf.placeholder("float", [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
with tf.name_scope("Wx_b") as scope:
model = tf.nn.softmax(tf.matmul(x, W) + b)
w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)
# This is the cross-entropy cost.
with tf.name_scope("cost_function") as scope:
cost_function = -tf.reduce_sum(y*tf.log(model))
tf.summary.scalar("cost_function", cost_function)
with tf.name_scope("train") as scope:
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)
init = tf.global_variables_initializer() #tf.initialize_all_variables()
merged_summary_op = tf.summary.merge_all()  # note the call: merge_all is a function, not an op
#Launch the graph
with tf.Session() as sess:
sess.run(init)
    summary_writer = tf.summary.FileWriter('/sirajravel/logs', graph=sess.graph)
for iteration in range(training_iteration):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
summary_writer.add_summary(summary_str, iteration*total_batch + i)
if iteration % display_step == 0:
print("Iteration", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))
print("Tuning completed!")
predictions = tf.equal(tf.argmax(model,1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
# +
import tensorflow as tf
x1 = tf.Variable(0.1)
x2 = tf.Variable(0.3)
w1 = tf.Variable(0.4)
w2 = tf.Variable(-0.2)
y = tf.placeholder("float32")
z1 = tf.sigmoid(tf.multiply(x1,w1))
diff = y-z1
cost = tf.multiply(diff, diff)
step = tf.train.GradientDescentOptimizer(0.1)
#.minimize(cost)
compute_gradients = step.compute_gradients(cost)
#for i in range(10000):
# batch_xs, batch_ys =
# sess.run(step, feed_dict = {a_0: batch_xs,
# y : batch_ys})
# if i % 1000 == 0:
# res = sess.run(acct_res, feed_dict =
# {a_0: mnist.test.images[:1000],
# y : mnist.test.labels[:1000]})
# print (res)
# +
import tensorflow as tf
import numpy as np
batch_size = 5
dim = 3
hidden_units = 10
sess = tf.Session()
with sess.as_default():
x = tf.placeholder(dtype=tf.float32, shape=[None, dim], name="x")
y = tf.placeholder(dtype=tf.int32, shape=[None], name="y")
w = tf.Variable(initial_value=tf.random_normal(shape=[dim, hidden_units]), name="w")
b = tf.Variable(initial_value=tf.zeros(shape=[hidden_units]), name="b")
logits = tf.nn.tanh(tf.matmul(x, w) + b)
    # y holds integer class indices, so the sparse variant is the right call here
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y, name="xentropy")
# define model end
# begin training
optimizer = tf.train.GradientDescentOptimizer(1e-5)
grads_and_vars = optimizer.compute_gradients(cross_entropy, tf.trainable_variables())
# generate data
data = np.random.randn(batch_size, dim)
labels = np.random.randint(0, 10, size=batch_size)
print("data shape:", data.shape," labels shape", labels.shape)
print("data:", data)
print("labels:", labels)
    sess.run(tf.global_variables_initializer())  # initialize_all_variables is deprecated
gradients_and_vars = sess.run(grads_and_vars, feed_dict={x:data, y:labels})
for g, v in gradients_and_vars:
if g is not None:
print ("****************this is variable*************")
print ("variable's shape:", v.shape)
print (v)
print ("****************this is gradient*************")
print ("gradient's shape:", g.shape)
print (g)
sess.close()
# +
import tensorflow as tf
def sigma(x):
return tf.div(tf.constant(1.0),
tf.add(tf.constant(1.0), tf.exp(tf.negative(x))))
def sigmaprime(x):
return tf.multiply(sigma(x), tf.subtract(tf.constant(1.0), sigma(x)))
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
a_0 = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
middle = 30
w_1 = tf.Variable(tf.truncated_normal([784, middle]))
b_1 = tf.Variable(tf.truncated_normal([1, middle]))
w_2 = tf.Variable(tf.truncated_normal([middle, 10]))
b_2 = tf.Variable(tf.truncated_normal([1, 10]))
z_1 = tf.add(tf.matmul(a_0, w_1), b_1)
#a_1 = sigma(z_1)
a_1 = tf.sigmoid(z_1)
z_2 = tf.add(tf.matmul(a_1, w_2), b_2)
#a_2 = sigma(z_2)
a_2 = tf.sigmoid(z_2)
diff = tf.subtract(a_2, y)
#d_z_2 = tf.multiply(diff, sigmaprime(z_2))
#d_b_2 = d_z_2
#d_w_2 = tf.matmul(tf.transpose(a_1), d_z_2)
#d_a_1 = tf.matmul(d_z_2, tf.transpose(w_2))
#d_z_1 = tf.multiply(d_a_1, sigmaprime(z_1))
#d_b_1 = d_z_1
#d_w_1 = tf.matmul(tf.transpose(a_0), d_z_1)
eta = tf.constant(0.5)
#step = [
# tf.assign(w_1,
# tf.subtract(w_1, tf.multiply(eta, d_w_1)))
# , tf.assign(b_1,
# tf.subtract(b_1, tf.multiply(eta,
# tf.reduce_mean(d_b_1, axis=[0]))))
# , tf.assign(w_2,
# tf.subtract(w_2, tf.multiply(eta, d_w_2)))
# , tf.assign(b_2,
# tf.subtract(b_2, tf.multiply(eta,
# tf.reduce_mean(d_b_2, axis=[0]))))
#]
acct_mat = tf.equal(tf.argmax(a_2, 1), tf.argmax(y, 1))
acct_res = tf.reduce_sum(tf.cast(acct_mat, tf.float32))
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
cost = tf.multiply(diff, diff)
step = tf.train.GradientDescentOptimizer(0.1).minimize(cost)
for i in range(10000):
batch_xs, batch_ys = mnist.train.next_batch(10)
sess.run(step, feed_dict = {a_0: batch_xs,
y : batch_ys})
if i % 1000 == 0:
res = sess.run(acct_res, feed_dict =
{a_0: mnist.test.images[:1000],
y : mnist.test.labels[:1000]})
print (res)
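The `sigma`/`sigmaprime` pair above encodes the identity σ'(x) = σ(x)(1 − σ(x)), which the commented-out manual backward pass relies on. A quick NumPy central-difference check, independent of TensorFlow:

```python
import numpy as np

def sigma(x):
    return 1 / (1 + np.exp(-x))

def sigmaprime(x):
    # Derivative of the sigmoid via the standard identity.
    return sigma(x) * (1 - sigma(x))

xs = np.linspace(-4, 4, 9)
eps = 1e-6
# Central difference approximation of the derivative.
numeric = (sigma(xs + eps) - sigma(xs - eps)) / (2 * eps)
print(np.max(np.abs(numeric - sigmaprime(xs))))  # very small
```

Agreement here justifies using `sigmaprime` wherever the hand-derived `d_z_*` terms appear.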
# +
# + [markdown]
# ###### Student cost function
# Source: hinton/backprop/backprop2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import glob
import tensorflow as tf
from pathlib import Path
import math
from IPython.display import display
def datasetImageFolder(start_path,split=None):
filePath = getFilePaths(start_path,["jpg","png","jpeg"])
    # Extract the split folder name (e.g. 'train') that follows start_path; assumes Windows '\\' path separators.
    splits = list(set(map(lambda x: x[x.find('\\',len(start_path)-1 )+1:x.find('\\',x.find('\\',len(start_path)-1 )+1) ] , filePath)))
allSplits = []
for i in range(len(splits)):
rs = list(map(lambda x : x if splits[i] in x else None, filePath))
result = [x for x in rs if x]
allSplits.append(result)
datasets = dict()
for i in range(len(allSplits)):
images = tf.data.Dataset.from_tensor_slices(allSplits[i])
datasets[splits[i]] = images
return datasets
def getFilePaths(start_path,extensions= None):
paths = []
if(extensions is not None):
for extension in extensions:
paths.extend(list(Path(start_path).rglob("*." + extension)))
else:
        paths.extend(list(Path(start_path).rglob("*")))  # fix name typo; rglob requires a pattern
return [str(x) for x in paths if x.is_file()]
def getLabelFromFilePathTF(file_path):
splits = tf.strings.split(file_path,'\\')
label = splits[len(splits)-2]
return label
def getLabelFromFilePath(file_path):
lastIndex = file_path.rfind('\\')
firstIndex = file_path.rfind('\\',0,lastIndex)+1
label =file_path[firstIndex:lastIndex]
return label
def getAllLabels(start_path,generateUndefined=False):
paths = getFilePaths(start_path,["jpg","png","jpeg"])
distinctLabels = []
if (generateUndefined):
distinctLabels.append('Undefined')
distinctLabels.extend(list(set(map(lambda x: getLabelFromFilePath(x) , paths))))
return distinctLabels
def generateOneHotEncodeDict(labels):
indices = [x for x in range(len(labels))]
one_hots = tf.one_hot(indices,len(labels),dtype=tf.uint8)
one_hot_dict = dict()
for i in range(len(one_hots)):
one_hot_dict[labels[i]] = one_hots[i]
return one_hot_dict
def generateBinaryEncoding(labels):
indices = [x for x in range(len(labels))]
n_bits = int(math.log(len(labels),2)) + 1
binaryEncode = []
for index in indices:
encoding = []
for i in range(n_bits):
encoding.append( (index >> i) & 1)
encoding.reverse()
binaryEncode.append(encoding)
binaryEncodeTensors = [tf.convert_to_tensor(x,dtype=tf.uint8) for x in binaryEncode]
binary_encode_dict = dict()
for i in range(len(labels)):
binary_encode_dict[labels[i]] = binaryEncodeTensors[i]
return binary_encode_dict
def loadBestSavedModel(start_path):
models = getFilePaths(start_path,['h5'])
bestModel = -1
modelToLoad = ""
for modelPath in models:
firstIndex = modelPath.find(".",modelPath.find(".")+1)+1
modelNumber = modelPath[firstIndex:modelPath.rfind(".")]
if(float(modelNumber) > bestModel ):
bestModel = float(modelNumber)
modelToLoad = modelPath
print(modelToLoad)
#if len(modelToLoad) > 0:
#return keras.models.load_model(modelToLoad)
return modelToLoad
# -
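`generateBinaryEncoding` above packs each label index into `int(log2(n)) + 1` bits, most significant bit first. The same scheme in pure Python, with five hypothetical labels:

```python
import math

labels = ['cat', 'dog', 'bird', 'fish', 'frog']  # hypothetical label names
n_bits = int(math.log(len(labels), 2)) + 1       # 5 labels -> 3 bits

def binary_encode(index, n_bits):
    # Extract bits LSB-first, then reverse so the MSB comes first,
    # mirroring generateBinaryEncoding above.
    bits = [(index >> i) & 1 for i in range(n_bits)]
    bits.reverse()
    return bits

codes = {label: binary_encode(i, label_bits) if False else binary_encode(i, n_bits)
         for i, label in enumerate(labels)}
print(codes['fish'])  # index 3 -> [0, 1, 1]
```

Binary codes need only ⌈log2(n)⌉ outputs instead of n, at the cost of making the output units non-exclusive, which is why the notebook ultimately switches to one-hot encoding.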
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D,Input,Dense,Flatten,LeakyReLU,MaxPooling2D,Dropout,Softmax,ReLU
from tensorflow.keras import Model
from tensorflow.keras.regularizers import l2
import tensorflow_datasets as tfds
from PIL import Image
from tensorflow.keras import metrics  # use tf.keras throughout; mixing standalone keras with tf.keras breaks
# +
def vgg(filters=64,n_Class=1000,shape=(224,224,3)):
def convBlock(input_tensor,filters=filters):
x = Conv2D(filters,kernel_size=3,strides=1,padding='same',kernel_initializer=tf.keras.initializers.HeUniform(),kernel_regularizer=l2(0.0001))(input_tensor)
x = LeakyReLU(alpha=0.2)(x)
return x
input_tensor = Input(shape=shape)
x = convBlock(input_tensor,filters)
x = convBlock(x,filters)
x = MaxPooling2D(pool_size=(2,2),strides=2)(x)
x = convBlock(x,filters*2)
x = convBlock(x,filters*2)
x = MaxPooling2D(pool_size=(2,2),strides=2)(x)
x = convBlock(x,filters*4)
x = convBlock(x,filters*4)
x = convBlock(x,filters*4)
x = convBlock(x,filters*4)
x = MaxPooling2D(pool_size=(2,2),strides=2)(x)
x = convBlock(x,filters*8)
x = convBlock(x,filters*8)
x = convBlock(x,filters*8)
x = convBlock(x,filters*8)
x = MaxPooling2D(pool_size=(2,2),strides=2)(x)
x = convBlock(x,filters*8)
x = convBlock(x,filters*8)
x = convBlock(x,filters*8)
x = convBlock(x,filters*8)
x = MaxPooling2D(pool_size=(2,2),strides=2)(x)
x = Flatten()(x)
x = Dense(4096,kernel_initializer=tf.keras.initializers.HeUniform())(x)
x = Dropout(0.5)(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dense(4096,kernel_initializer=tf.keras.initializers.HeUniform())(x)
x = Dropout(0.5)(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dense(n_Class)(x)
x = Softmax(axis=-1)(x)
return Model(inputs=input_tensor,outputs=x)
def preprocessImageVgg(image):
img = tf.image.random_crop(image,size=[224,224,3])
img /= 127.5
img -= 1.
return img
def getDatasetFromImageFolder(imageFolder,batchSize=32,n_epochs=100):
def preprocessVgg(ds):
ds = ds.map(lambda image ,label: (preprocessImageVgg(image) , tf.reshape(label ,(270,))) , num_parallel_calls=tf.data.AUTOTUNE)
return ds
def augmentDataset(ds):
ds = ds.map(lambda x ,y: (tf.image.rot90(x, tf.random.uniform(shape=[], minval=0, maxval=4, dtype=tf.int32)) ,y), num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.map(lambda x ,y: (tf.image.random_flip_left_right(x),y), num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.map(lambda x ,y: (tf.image.random_flip_up_down(x),y), num_parallel_calls=tf.data.AUTOTUNE)
return ds
def process_img(img):
img = tf.image.decode_jpeg(img, channels=3)
#img = tf.image.convert_image_dtype(img, tf.float32)
return tf.image.resize(img, [224, 224])
def combine_images_labels(file_path: tf.Tensor):
img = tf.io.read_file(file_path)
img = process_img(img)
label = getLabelFromFilePathTF(file_path)
labelEncoded = encodingLabelDict[label.numpy().decode('utf-8')]
return img, labelEncoded
def prepareDataset(ds, training=False):
ds = ds.shuffle(buffer_size=10000)
ds = ds.map(lambda x : tf.py_function(func=combine_images_labels,
inp=[x], Tout=(tf.float32,tf.uint8)),
num_parallel_calls=tf.data.AUTOTUNE)
ds = preprocessVgg(ds)
if(training):
ds = augmentDataset(ds)
ds = ds.repeat()
ds = ds.batch(batchSize)
ds = ds.apply(tf.data.experimental.copy_to_device('/gpu:0'))
ds = ds.prefetch(buffer_size=tf.data.AUTOTUNE)
return ds
datasets = datasetImageFolder(imageFolder)
dsTrain, dsTest, dsValid = datasets['train'] ,datasets['test'],datasets['valid']
#encodingLabelDict = generateBinaryEncoding(getAllLabels(imageFolder))
encodingLabelDict = generateOneHotEncodeDict(getAllLabels(imageFolder))
steps_per_epoch_train = dsTrain.cardinality().numpy() // batchSize
steps_per_epoch_valid = dsValid.cardinality().numpy() // batchSize
labelDict = generateBinaryEncoding(getAllLabels(imageFolder))
dsTrain = prepareDataset(dsTrain,training=True)
dsTest = prepareDataset(dsTest)
dsValid = prepareDataset(dsValid)
return dsTrain,dsTest,dsValid,steps_per_epoch_train,steps_per_epoch_valid
def schedule(epoch,lr):
return lr / max(10 * int(epoch/20) ,1)
def trainVgg(model,trainDataset,validDataset,epochs=100,verbose=1,steps_per_epoch=None,steps_per_epoch_valid=None):
my_callbacks = [
tf.keras.callbacks.EarlyStopping(patience=10),
tf.keras.callbacks.ModelCheckpoint(filepath='model.{epoch:02d}.{val_loss:02f}.h5', monitor='val_loss', verbose=0, save_best_only=True),
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.LearningRateScheduler(schedule, verbose=0)
]
if steps_per_epoch is not None and steps_per_epoch_valid is not None :
hist = model.fit(x=trainDataset,validation_data=validDataset,steps_per_epoch=steps_per_epoch,epochs=epochs, callbacks=my_callbacks , verbose=verbose,validation_steps=steps_per_epoch_valid,initial_epoch=15)
else:
hist = model.fit(trainDataset,validation_data=validDataset,callbacks=my_callbacks,verbose=verbose)
return hist
# -
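The `schedule` callback above divides the incoming learning rate by `10 * (epoch // 20)`, clamped to at least 1 so epochs 0-19 pass through unchanged. Its behaviour is easy to check in isolation:

```python
def schedule(epoch, lr):
    # Same function as above: divisor grows by 10 per completed 20-epoch
    # block; max(..., 1) guards against division by zero early on.
    return lr / max(10 * int(epoch / 20), 1)

print(schedule(0, 1e-2))   # 0.01   (divisor clamped to 1)
print(schedule(20, 1e-2))  # 0.001  (divisor 10)
print(schedule(45, 1e-2))  # 0.0005 (divisor 20)
```

Note that Keras's `LearningRateScheduler` feeds the *current* learning rate back in each epoch, so in training the decay compounds rather than being applied to the initial rate.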
n_labels = len( getAllLabels('data/'))
n_Class = int(math.log(n_labels,2))+1
print( n_labels)
#model_vgg = vgg(filters=64,n_Class=n_labels,shape=(224,224,3))
# +
dsTrain,dsTest,dsValid ,stepTrain,steps_per_epoch_valid = getDatasetFromImageFolder('data/',batchSize=16)
for image, label in dsTrain.take(1):
print(label.numpy().shape)
# +
print(stepTrain)
print(steps_per_epoch_valid)
print(dsTrain.element_spec)
#checkSavedModel("./")
# +
def compileVgg(model ,learning_rate=1e-2):
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate,clipnorm=1,nesterov=True),loss='categorical_crossentropy',metrics=[metrics.categorical_accuracy])
model = loadBestSavedModel(".")
model_vgg = tf.keras.models.load_model(model)
#compileVgg(model_vgg,learning_rate=1e-2)
#model_vgg.load_weights(model)
hist = trainVgg(model_vgg,trainDataset=dsTrain,validDataset=dsValid,epochs=100,verbose=1,steps_per_epoch=stepTrain,steps_per_epoch_valid=steps_per_epoch_valid)
# -
# Source: Models/Vgg/vgg.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="IV9Cis3HdHp8" outputId="c84bd189-b3d7-425c-ff76-ad3f38bcf668"
# !pip install datasets
# + id="uaD4Nk85FsZc"
import math
import time
import random
import torch
import torch.nn as nn
import torch.optim as optim
import torchtext
import datasets
# + id="z8xZhIfJ9smN"
torch.manual_seed(0)
random.seed(0)
# + colab={"base_uri": "https://localhost:8080/"} id="djyFDC4QFwVh" outputId="81089a44-cf8a-4bee-f551-1c56a4e77220"
dataset = datasets.load_dataset('wikitext', 'wikitext-2-raw-v1')
# + id="gKcltvAsfgHW"
tokenizer = torchtext.data.utils.get_tokenizer('basic_english')
# + id="aODmmThhcpTx"
def tokenize_data(example, tokenizer):
tokens = {'tokens': tokenizer(example['text'])}
return tokens
# + colab={"base_uri": "https://localhost:8080/"} id="Q5x9lbSFc5Yh" outputId="179a6f8b-0001-4b56-b89c-244aa5086b15"
tokenized_dataset = dataset.map(tokenize_data, remove_columns=['text'], fn_kwargs={'tokenizer': tokenizer})
# + id="4c1NIY8yiHJo"
vocab = torchtext.vocab.build_vocab_from_iterator(tokenized_dataset['train']['tokens'],
min_freq=3)
# + id="54CbiCkw_5P3"
vocab.insert_token('<unk>', 0)
vocab.insert_token('<eos>', 1)
# + id="qfSCMdZRioAG"
vocab.get_itos()[:10]
# + id="grwHG5UN_eKN"
unk_index = vocab['<unk>']
vocab.set_default_index(unk_index)
# + id="JMUwATIrwkAi"
def get_data(dataset, vocab, batch_size):
data = []
for example in dataset:
if example['tokens']:
            example['tokens'].append('<eos>')  # append in place; list.append returns None
tokens = [vocab[token] for token in example['tokens']]
data.extend(tokens)
data = torch.LongTensor(data)
n_batches = data.shape[0] // batch_size
data = data.narrow(0, 0, n_batches * batch_size)
data = data.view(batch_size, -1)
return data
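`get_data` flattens the corpus into one long stream of token ids, trims it to a multiple of the batch size, and reshapes it to `[batch size, -1]` so each row is a contiguous slice of text. The same trimming and reshaping in plain Python:

```python
# Toy token-id stream standing in for the flattened corpus.
data = list(range(13))
batch_size = 4

n_cols = len(data) // batch_size          # tokens per row after trimming
data = data[: n_cols * batch_size]        # drop the ragged tail (id 12)
# Equivalent of tensor.view(batch_size, -1): contiguous row-major chunks.
rows = [data[i * n_cols:(i + 1) * n_cols] for i in range(batch_size)]
print(rows)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
```

Keeping rows contiguous matters for language modeling: the hidden state carried across batches then always continues the same stretch of text.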
# + id="FtjtIvuNisST"
batch_size = 80
train_data = get_data(tokenized_dataset['train'], vocab, batch_size)
valid_data = get_data(tokenized_dataset['validation'], vocab, batch_size)
test_data = get_data(tokenized_dataset['test'], vocab, batch_size)
# + id="lnFp5L1XOzvc"
class LockedDropout(nn.Module):
def __init__(self, p=0.5):
super().__init__()
self.p = p
def forward(self, x):
# x = [batch size, seq len, hidden dim]
if not self.training or not self.p:
return x
x = x.clone()
mask = x.new_empty(x.shape[0], 1, x.shape[2], requires_grad=False).bernoulli_(1 - self.p)
mask = mask.div_(1 - self.p)
mask = mask.expand_as(x)
return x * mask
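`LockedDropout` samples one Bernoulli mask of shape `[batch, 1, hidden]` and broadcasts it over the sequence axis, so every time step drops the *same* units; standard dropout would resample per position. A NumPy sketch of the same idea:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len, hidden = 2, 5, 4
x = np.ones((batch, seq_len, hidden))
p = 0.5

# One mask per (batch, hidden) pair, shared across the seq_len axis,
# scaled by 1/(1-p) so the expected activation is unchanged.
mask = rng.binomial(1, 1 - p, size=(batch, 1, hidden)) / (1 - p)
dropped = x * mask  # broadcasting reuses the same mask at every time step

print(np.array_equal(dropped[:, 0], dropped[:, 1]))  # True: identical mask per step
```

Sharing the mask across time is the "variational" dropout used in AWD-LSTM; it avoids destroying temporal information differently at each step.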
# + id="iyXHwjATP8kU"
def _setup_weight_drop(module, weights, dropout):
for name_w in weights:
w = getattr(module, name_w)
del module._parameters[name_w]
module.register_parameter(name_w + '_raw', nn.Parameter(w))
original_module_forward = module.forward
def forward(*args, **kwargs):
for name_w in weights:
raw_w = getattr(module, name_w + '_raw')
w = nn.Parameter(torch.nn.functional.dropout(raw_w, p=dropout, training=module.training))
setattr(module, name_w, w)
return original_module_forward(*args, **kwargs)
setattr(module, 'forward', forward)
# + id="4JHLBu86RodS"
class WeightDropLSTM(torch.nn.LSTM):
"""
Wrapper around :class:`torch.nn.LSTM` that adds ``weight_dropout`` named argument.
Args:
weight_dropout (float): The probability a weight will be dropped.
"""
def __init__(self, *args, weight_dropout=0.0, **kwargs):
super().__init__(*args, **kwargs)
weights = ['weight_hh_l' + str(i) for i in range(self.num_layers)]
_setup_weight_drop(self, weights, weight_dropout)
# + id="BAKBc-UUjO0-"
class AWDLSTM(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, n_layers,
embedding_dropout_rate, weight_dropout_rate, lstm_dropout_rate, output_dropout_rate,
tie_weights):
super().__init__()
self.n_layers = n_layers
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.tie_weights = tie_weights
self.embedding = nn.Embedding(vocab_size, embedding_dim)
lstms = []
for n in range(n_layers):
input_dim = embedding_dim if n == 0 else hidden_dim
output_dim = hidden_dim if n != n_layers - 1 else (embedding_dim if tie_weights else hidden_dim)
lstm = WeightDropLSTM(input_dim, output_dim, batch_first=True, weight_dropout=weight_dropout_rate)
lstms.append(lstm)
self.lstms = nn.ModuleList(lstms)
self.fc = nn.Linear(embedding_dim if tie_weights else hidden_dim, vocab_size)
self.embedding_dropout = LockedDropout(embedding_dropout_rate)
self.lstm_dropout = LockedDropout(lstm_dropout_rate)
self.output_dropout = LockedDropout(output_dropout_rate)
if tie_weights:
self.embedding.weight = self.fc.weight
self.init_weights()
def init_weights(self):
init_range = 0.1
self.embedding.weight.data.uniform_(-init_range, init_range)
self.fc.weight.data.uniform_(-init_range, init_range)
self.fc.bias.data.zero_()
def init_hidden(self, batch_size, device):
hiddens = []
for n in range(self.n_layers):
            dim = self.hidden_dim if n != self.n_layers - 1 else (self.embedding_dim if self.tie_weights else self.hidden_dim)
hidden = torch.zeros(1, batch_size, dim).to(device)
cell = torch.zeros(1, batch_size, dim).to(device)
hiddens.append((hidden, cell))
return hiddens
def detach_hidden(self, hidden):
if isinstance(hidden, torch.Tensor):
return hidden.detach()
else:
return tuple(self.detach_hidden(h) for h in hidden)
def forward(self, input, hidden):
# input = [batch size, seq len]
# hidden = list([1, batch size, hidden dim])
embedding = self.embedding_dropout(self.embedding(input))
# embedding = [batch size, seq len, embedding dim]
lstm_input = embedding
new_hiddens = []
for n, lstm in enumerate(self.lstms):
lstm_output, new_hidden = lstm(lstm_input, hidden[n])
# lstm_output = [batch size, seq len, hidden dim]
# new_hidden = [1, batch size, hidden dim]
if n != self.n_layers - 1:
lstm_output = self.lstm_dropout(lstm_output)
lstm_input = lstm_output
new_hiddens.append(new_hidden)
output = self.output_dropout(lstm_output)
prediction = self.fc(output)
# prediction = [batch size, seq len, vocab size]
# output = [batch size, seq len, hidden dim]
        # lstm_output = [batch size, seq len, hidden dim]
# new_hiddens = list([1, batch size, hidden dim])
return prediction, output, lstm_output, new_hiddens
# + id="QBYYhJ3WjWpY"
vocab_size = len(vocab)
embedding_dim = 400
hidden_dim = 1150
n_layers = 3
embedding_dropout_rate = 0.65
weight_dropout_rate = 0.5
lstm_dropout_rate = 0.2
output_dropout_rate = 0.4
tie_weights = True
model = AWDLSTM(vocab_size, embedding_dim, hidden_dim, n_layers,
embedding_dropout_rate, weight_dropout_rate, lstm_dropout_rate, output_dropout_rate,
tie_weights)
# + id="0C9nTPpxsclE"
criterion = nn.CrossEntropyLoss()
# + id="dkggfxKJBZ04" colab={"base_uri": "https://localhost:8080/"} outputId="4a992721-b229-442c-bc31-f1c43f424cc9"
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
# + id="Ojbdsa4sfFS8"
lr = 1e-3
weight_decay = 1e-6
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
# + id="Yv9GOZSgfGOj"
criterion = nn.CrossEntropyLoss()
# + colab={"base_uri": "https://localhost:8080/"} id="b5PZPWeHfHyY" outputId="9e4f7a42-83aa-4b74-bcfc-72ecd31d461c"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
# + id="5Q4ywswafKJT"
model = model.to(device)
criterion = criterion.to(device)
# + id="3U6UG3_0fM8K"
def train(model, data, optimizer, criterion, batch_size, base_seq_len, alpha, beta, clip, device):
epoch_loss = 0
model.train()
n_tokens = data.shape[-1]
base_lr = optimizer.param_groups[0]["lr"]
hidden = model.init_hidden(batch_size, device)
for input, target in get_batches(data, base_seq_len):
optimizer.zero_grad()
input = input.to(device)
target = target.to(device)
# input = [batch size, seq len]
# target = [batch size, seq len]
batch_size, seq_len = input.shape
scaled_lr = base_lr * seq_len / base_seq_len
optimizer.param_groups[0]["lr"] = scaled_lr
hidden = model.detach_hidden(hidden)
# hidden = list([1, batch size, hidden dim])
prediction, output, raw_output, hidden = model(input, hidden)
# prediction = [batch size, seq len, vocab size]
# output = [batch size, seq len, hidden dim]
# hidden = list([1, batch size, hidden dim])
prediction = prediction.reshape(batch_size * seq_len, -1)
target = target.reshape(-1)
# output = [batch size * seq len, vocab size]
# target = [batch size * seq len]
loss = criterion(prediction, target)
alpha_loss = (alpha * output.pow(2).mean()).sum()
beta_loss = (beta * (output[:,1:] - output[:,:-1]).pow(2).mean()).sum()
loss = loss + alpha_loss + beta_loss
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item() * seq_len
return epoch_loss / n_tokens
# + id="mIdNe9x4pkAw"
def get_batches(data, seq_len):
data_len = data.shape[-1]
total_seq_len = 0
sampled_seq_lens = []
min_seq_len, max_seq_len = int(seq_len * 0.9), int(seq_len * 1.1)
while total_seq_len < data_len:
sampled_seq_len = random.randint(min_seq_len, max_seq_len)
sampled_seq_lens.append(sampled_seq_len)
total_seq_len += sampled_seq_len
sampled_seq_lens = sampled_seq_lens[:-1]
remainder = data_len - sum(sampled_seq_lens)
if remainder > min_seq_len:
sampled_seq_lens.append(remainder - 1)
pos = 0
for sampled_seq_len in sampled_seq_lens:
input = data[:,pos:pos+sampled_seq_len]
target = data[:,pos+1:pos+sampled_seq_len+1]
pos += sampled_seq_len
yield input, target
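Each `(input, target)` pair yielded above is offset by one position, so the model learns to predict the next token everywhere in the window; the sampled lengths also vary around `seq_len` (±10%), following the AWD-LSTM recipe. The one-token offset can be illustrated directly:

```python
# Toy batch: one row of token ids.
data = [[10, 11, 12, 13, 14, 15]]
pos, seq_len = 1, 3

# Target is the input shifted right by one token.
input_ = [row[pos:pos + seq_len] for row in data]
target = [row[pos + 1:pos + seq_len + 1] for row in data]
print(input_, target)  # [[11, 12, 13]] [[12, 13, 14]]
```

The `pos + seq_len + 1` slice is why `get_batches` drops the last sampled window: it needs one extra token beyond the input to form the final target.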
# + id="WMfUy_H2IdUo"
def evaluate(model, data, criterion, batch_size, base_seq_len, device):
epoch_loss = 0
model.eval()
n_tokens = data.shape[-1]
hidden = model.init_hidden(batch_size, device)
with torch.no_grad():
for input, target in get_batches(data, base_seq_len):
input = input.to(device)
target = target.to(device)
# input = [batch size, seq len]
# target = [batch size, seq len]
batch_size, seq_len = input.shape
hidden = model.detach_hidden(hidden)
# hidden = list([1, batch size, hidden dim])
prediction, _, _, hidden = model(input, hidden)
# prediction = [batch size, seq len, vocab size]
# hidden = list([1, batch size, hidden dim])
prediction = prediction.reshape(batch_size * seq_len, -1)
target = target.reshape(-1)
# prediction = [batch size * seq len, vocab size]
# target = [batch size * seq len]
loss = criterion(prediction, target)
epoch_loss += loss.item() * seq_len
return epoch_loss / n_tokens
# + id="keYcu0DthiJ0"
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
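`epoch_time` simply splits elapsed seconds into whole minutes and leftover seconds:

```python
def epoch_time(start_time, end_time):
    # Same helper as above: minutes = elapsed // 60, seconds = remainder.
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

print(epoch_time(0, 125))  # (2, 5)
```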
# + id="SoPhYH3PJB-S" colab={"base_uri": "https://localhost:8080/"} outputId="d1ad4100-c267-4ed2-dc77-83d0786c50a6"
n_epochs = 50
seq_len = 70
clip = 0.25
alpha = 2
beta = 1
best_valid_loss = float('inf')
for epoch in range(n_epochs):
start_time = time.monotonic()
train_loss = train(model, train_data, optimizer, criterion, batch_size, seq_len, alpha, beta, clip, device)
valid_loss = evaluate(model, valid_data, criterion, batch_size, seq_len, device)
end_time = time.monotonic()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'awd-lstm_lm.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Perplexity: {math.exp(train_loss):.3f}')
print(f'\tValid Perplexity: {math.exp(valid_loss):.3f}')
# + colab={"base_uri": "https://localhost:8080/"} id="g9gMl22AweGP" outputId="39dd372a-4fbc-4633-e48b-e8341ae5f5f5"
model.load_state_dict(torch.load('awd-lstm_lm.pt'))
test_loss = evaluate(model, test_data, criterion, batch_size, seq_len, device)
print(f'Test Perplexity: {math.exp(test_loss):.3f}')
# + id="7ypQ5_huwmcf"
def generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed=None):
if seed is not None:
        torch.manual_seed(seed)
model.eval()
tokens = tokenizer(prompt)
indices = [vocab[t] for t in tokens]
batch_size = 1
hidden = model.init_hidden(batch_size, device)
with torch.no_grad():
for i in range(n_gen_tokens):
input = torch.LongTensor([indices]).to(device)
prediction, _, _, hidden = model(input, hidden)
probs = torch.softmax(prediction[:, -1] / temperature, dim=-1)
prediction = torch.multinomial(probs, num_samples=1).item()
indices.append(prediction)
itos = vocab.get_itos()
tokens = [itos[i] for i in indices]
return tokens
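The `prediction[:, -1] / temperature` step reshapes the next-token distribution before sampling: temperatures below 1 sharpen it towards the most likely token, temperatures above 1 flatten it towards uniform. A NumPy sketch of that effect:

```python
import numpy as np

def temperature_softmax(logits, t):
    # Divide logits by the temperature, then apply a stable softmax.
    scaled = np.asarray(logits, dtype=float) / t
    scaled -= scaled.max()  # subtract max for numerical stability
    e = np.exp(scaled)
    return e / e.sum()

logits = [2.0, 1.0, 0.5]
cold = temperature_softmax(logits, 0.1)   # near one-hot on the max logit
hot = temperature_softmax(logits, 10.0)   # near uniform
print(cold.round(3), hot.round(3))
```

This is why the low-temperature generations below are repetitive and the high-temperature ones are erratic: the sampler is drawing from a near-deterministic versus a near-uniform distribution.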
# + colab={"base_uri": "https://localhost:8080/"} id="NUBkgspjwn_J" outputId="8106a62a-04e1-4e19-ac47-db5853f36bd7"
prompt = 'the'
n_gen_tokens = 25
temperature = 0.5
seed = 0
generation = generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed)
# + colab={"base_uri": "https://localhost:8080/"} id="XqbGquhWwpSc" outputId="137a67e4-4fcb-4a80-832f-7ee2bab3c190"
generation
# + colab={"base_uri": "https://localhost:8080/"} id="nZrBvc5vwrth" outputId="9aa0c94e-3162-4ad0-908e-f254f4773781"
temperature = 0.1
generation = generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed)
# + colab={"base_uri": "https://localhost:8080/"} id="B4HKOCKywsL0" outputId="d14ae4dc-887d-406d-c98e-da31c95a3c92"
generation
# + colab={"base_uri": "https://localhost:8080/"} id="KTnFqFZywthG" outputId="629476f7-2a6b-4cc5-f80b-a7a40773a126"
temperature = 1.5
generation = generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed)
# + colab={"base_uri": "https://localhost:8080/"} id="fhAnRwinwtan" outputId="51b4c64e-0dd7-4e02-ff60-e2276bf83312"
generation
# + colab={"base_uri": "https://localhost:8080/"} id="j8UvBRy1wtUg" outputId="883e7288-fdbe-4feb-86d3-b4609a0eab3e"
temperature = 0.75
generation = generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed)
# + colab={"base_uri": "https://localhost:8080/"} id="deEaUs6owtKt" outputId="ef6fce5a-8221-4a9c-87f7-d7f6ae654a3b"
generation
# + colab={"base_uri": "https://localhost:8080/"} id="rBh_zvL1ws7c" outputId="2d18f1db-6d5e-4c68-ba2b-287dc6d08111"
temperature = 0.8
generation = generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed)
# + colab={"base_uri": "https://localhost:8080/"} id="k93j5Kbfw7qk" outputId="571712de-1655-4cb6-ca0c-64aea6e8925c"
generation
# + colab={"base_uri": "https://localhost:8080/"} id="n9fS58ldw8rV" outputId="2a886169-4970-4f83-c989-94cc56123a84"
temperature = 0.7
generation = generate(prompt, n_gen_tokens, temperature, model, tokenizer, vocab, device, seed)
# + colab={"base_uri": "https://localhost:8080/"} id="_IwhwPhyw8m_" outputId="edac3031-6157-4abd-a8d1-3ce124d4d517"
generation
| 2_awd-lstm_lm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from Acquire.Client import User, Drive, StorageCreds
root = "https://fn.acquire-aaai.com/t"
user = User("chryswoods", identity_url="%s/identity" % root)
result = user.request_login()
print(result["login_url"])
from Acquire.Client import create_qrcode
create_qrcode(result["login_url"])
user.wait_for_login()
creds = StorageCreds(user=user, service_url="%s/storage" % root)
drive = Drive(name="test", creds=creds)
print(drive, drive.metadata().acl())
drive.list_files()
drive.list_drives()
drive2 = Drive(name="test/subdrive", creds=creds)
print(drive2)
drive2.list_files()
file = drive.list_files(filename="test_call_function.ipynb", include_metadata=True)[0]
versions = file.open().list_versions()
for version in versions:
print(version.filename(), version.uploaded_when(), version.uid())
f = drive.upload("test_call_function.ipynb")
print(f)
f2 = drive2.upload("fingerprints.ipynb")
print(f2)
f.acl()
f.is_compressed(), f.uploaded_by(), f.uploaded_when(), f.checksum(), f.filename()
drive.list_files()
drive2.list_files()
filename = f.open().download()
filename
filename = f.open().download()
print(filename)
files = drive.list_files(include_metadata=True)
f = files[0]
f.is_compressed(), f.uploaded_by(), f.uploaded_when(), f.checksum(), f.filename()
f.to_data()
f
user.logout()
| user/notebooks/test_drives.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from keras.layers import Dense, Activation, Dropout, Reshape, concatenate, ReLU, Input
from keras.models import Model, Sequential
from keras.regularizers import l2, l1_l2
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
from keras.layers.normalization import BatchNormalization
from keras.constraints import unit_norm
from keras import optimizers
from keras import regularizers
from keras import initializers
import keras.backend as K
from sklearn.model_selection import train_test_split
from sklearn.utils import class_weight
from scipy.linalg import fractional_matrix_power
import tensorflow as tf
import numpy as np
from utils import *
from dfnets_optimizer import *
from dfnets_layer import DFNets
import warnings
warnings.filterwarnings('ignore')
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
# -
X, A, S, Y = load_data(dataset='cora')
A = np.array(A.todense())
# +
_, Y_val, _, train_idx, val_idx, test_idx, train_mask = get_splits(Y, S)
train_idx = np.array(train_idx)
val_idx = np.array(val_idx)
test_idx = np.array(test_idx)
labels = np.argmax(Y, axis=1) + 1
#Normalize X.
#X /= X.sum(1).reshape(-1, 1)
X = np.array(X)
# +
Y_train = np.zeros(Y.shape)
labels_train = np.zeros(labels.shape)
Y_train[train_idx] = Y[train_idx]
labels_train[train_idx] = labels[train_idx]
Y_test = np.zeros(Y.shape)
labels_test = np.zeros(labels.shape)
Y_test[test_idx] = Y[test_idx]
labels_test[test_idx] = labels[test_idx]
# +
#Identity matrix for self loop.
I = np.matrix(np.eye(A.shape[0]))
A_hat = A + I
#Degree matrix.
D_hat = np.array(np.sum(A_hat, axis=0))[0]
D_hat = np.matrix(np.diag(D_hat))
#Laplacian matrix.
L = I - (fractional_matrix_power(D_hat, -0.5) * A_hat * fractional_matrix_power(D_hat, -0.5))
L = L - ((lmax(L)/2) * I)
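The last line recentres the Laplacian's spectrum around zero, from [0, λmax] to [-λmax/2, λmax/2], which is the interval the ARMA filter coefficients are fit against. A self-contained numpy sketch of the same transformation on a toy graph, using `np.linalg.eigvalsh` in place of the notebook's `lmax` helper (an assumption, since `lmax` comes from the local `utils` module):

```python
import numpy as np

# Toy 3-node path graph to illustrate the spectral shift above.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
I = np.eye(3)
A_hat = A + I                                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=0)))
L = I - D_inv_sqrt @ A_hat @ D_inv_sqrt        # normalized Laplacian

lam_max = np.linalg.eigvalsh(L).max()
L_shifted = L - (lam_max / 2.0) * I            # centre the spectrum around 0

eigs = np.linalg.eigvalsh(L_shifted)
print(eigs.min(), eigs.max())                  # symmetric around zero
```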
# +
lambda_cut = 0.5
def step(x, a):
for index in range(len(x)):
if(x[index] >= a):
x[index] = float(1)
else:
x[index] = float(0)
return x
response = lambda x: step(x, lmax(L)/2 - lambda_cut)
#Since the eigenvalues might change, sample eigenvalue domain uniformly.
mu = np.linspace(0, lmax(L), 70)
#AR filter order.
Ka = 5
#MA filter order.
Kb = 3
#The parameter 'radius' controls the tradeoff between convergence efficiency and approximation accuracy.
#A higher value of 'radius' can lead to slower convergence but better accuracy.
radius = 0.90
b, a, rARMA, error = dfnets_coefficients_optimizer(mu, response, Kb, Ka, radius)
# +
h_zero = np.zeros(L.shape[0])
def L_mult_numerator(coef):
y = coef.item(0) * np.linalg.matrix_power(L, 0)
for i in range(1, len(coef)):
x = np.linalg.matrix_power(L, i)
y = y + coef.item(i) * x
return y
def L_mult_denominator(coef):
y_d = h_zero
for i in range(0, len(coef)):
x_d = np.linalg.matrix_power(L, i+1)
y_d = y_d + coef.item(i) * x_d
return y_d
poly_num = L_mult_numerator(b)
poly_denom = L_mult_denominator(a)
arma_conv_AR = K.constant(poly_denom)
arma_conv_MA = K.constant(poly_num)
# -
def dense_factor(inputs, input_signal, num_nodes, dropout):
    h_1 = BatchNormalization()(inputs)
    h_1 = DFNets(num_nodes,
                 arma_conv_AR,
                 arma_conv_MA,
                 input_signal,
                 kernel_initializer=initializers.glorot_normal(seed=1),
                 kernel_regularizer=l2(9e-2),
                 kernel_constraint=unit_norm(),
                 use_bias=True,
                 bias_initializer=initializers.glorot_normal(seed=1),
                 bias_constraint=unit_norm())(h_1)
    h_1 = ReLU()(h_1)
    output = Dropout(dropout)(h_1)
return output
def dense_block(inputs):
concatenated_inputs = inputs
num_nodes = [8, 16, 32, 64, 128]
    dropout = [0.9, 0.9, 0.9, 0.9, 0.9]
    for i in range(5):
        x = dense_factor(concatenated_inputs, inputs, num_nodes[i], dropout[i])
concatenated_inputs = concatenate([concatenated_inputs, x], axis=1)
return concatenated_inputs
def dense_block_model(x_train):
inputs = Input((x_train.shape[1],))
x = dense_block(inputs)
predictions = Dense(7, kernel_initializer=initializers.glorot_normal(seed=1),
kernel_regularizer=regularizers.l2(1e-10),
kernel_constraint=unit_norm(),
activity_regularizer=regularizers.l2(1e-10),
use_bias=True,
bias_initializer=initializers.glorot_normal(seed=1),
bias_constraint=unit_norm(),
activation='softmax', name='fc_'+str(1))(x)
    model = Model(inputs=inputs, outputs=predictions)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.002), metrics=['acc'])
return model
model_dense_block = dense_block_model(X)
model_dense_block.summary()
# +
nb_epochs = 200
class_weight = class_weight.compute_class_weight('balanced', np.unique(labels_train), labels_train)
class_weight_dic = dict(enumerate(class_weight))
for epoch in range(nb_epochs):
model_dense_block.fit(X, Y_train, sample_weight=train_mask, batch_size=A.shape[0], epochs=1, shuffle=False,
class_weight=class_weight_dic, verbose=0)
Y_pred = model_dense_block.predict(X, batch_size=A.shape[0])
_, train_acc = evaluate_preds(Y_pred, [Y_train], [train_idx])
_, test_acc = evaluate_preds(Y_pred, [Y_test], [test_idx])
print("Epoch: {:04d}".format(epoch), "train_acc= {:.4f}".format(train_acc[0]), "test_acc= {:.4f}".format(test_acc[0]))
# -
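The `'balanced'` option used above weights each class inversely to its frequency, as n_samples / (n_classes * count_c). A small numpy sketch of the same formula (illustrative, not the scikit-learn implementation):

```python
import numpy as np

# 'balanced' weighting: w_c = n_samples / (n_classes * count_c),
# so rare classes get proportionally larger weights.
labels = np.array([1, 1, 1, 2, 2, 3])
classes, counts = np.unique(labels, return_counts=True)
weights = len(labels) / (len(classes) * counts)
class_weight_dic = dict(zip(classes, weights))
print(class_weight_dic[3])  # 2.0 -- the rarest class gets the largest weight
```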
| dfnets (legacy version)/dfnet/dfnets_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sarahrdk/EscapeEarth/blob/main/Interns/Sarah/BLSresults.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="ytGgaxXNAL1N" outputId="4a4b6e02-1260-4f29-b69a-3bed01d49067"
from google.colab import drive
drive.mount('/content/gdrive')
# + id="GFDG9_GDBUlZ"
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# + id="0hH9nCdNA1Er"
path14 = '/content/gdrive/My Drive/EscapeEarthData/BLS_results/bls_statsdf_sec14.csv'
df_1 = pd.read_csv(path14)
path15 = '/content/gdrive/My Drive/EscapeEarthData/BLS_results/bls_statsdf_sec15.csv'
df_2 = pd.read_csv(path15)
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="reIBx8v3BSH_" outputId="9169f706-c69a-404a-f010-d0cb71182e39"
df_1
# + id="nrrPX9DJBdwt"
col_for_df = np.repeat(14, len(df_1))
df_1['Sector'] = col_for_df
col_for_df_2 = np.repeat(15, len(df_2))
df_2['Sector'] = col_for_df_2
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="8QaW0brVCCJF" outputId="6323c82d-e6a2-46c3-e1ba-d2f472ad9622"
df_1
# + id="KbMe2-RGCHaU"
main_df = pd.concat([df_1, df_2])
main_df = main_df.reset_index()
main_df = main_df.drop(columns=['index'])
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="aVOf5n_SCrNO" outputId="54759f99-18c9-4946-daa7-9a7c8eff4a0d"
main_df
# + colab={"base_uri": "https://localhost:8080/", "height": 436} id="9pzJsKZGCrwC" outputId="b3f5c484-7bf6-4a65-85c7-97e3c3e4eaab"
main_df_highpowers = main_df[ main_df['Power'] >= 50 ]
print('Original length: {}, Length after threshold cut: {}'.format( len(main_df), len(main_df_highpowers)))
main_df_highpowers
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="6gw2X9_zEJUy" outputId="e7c01ca8-ca50-4fbf-b436-0a40ae55e736"
cut_1 = main_df['Power'] >40
cut_2 = main_df['Period'] < 10
#apply both thresholds to select targets within those limits
main_df_after2cuts = main_df[ (cut_1) & (cut_2)]
main_df_after2cuts
# + colab={"base_uri": "https://localhost:8080/"} id="02CA7ihRE-j4" outputId="d6e46c42-c54f-4a02-b973-740685a76a7b"
#get power column data as a numpy array
col_power = main_df['Power'].to_numpy()
#get period column data as a numpy array
col_period = main_df['Period'].to_numpy()
#see how they look
print('Power array:',col_power)
print('Period array:', col_period)
#see what happens w/o .to_numpy() at the end----note this format can cause problems so use .to_numpy()
print('See that this is a dataframe object, not an iterable array:\n', main_df['Power'])
# + colab={"base_uri": "https://localhost:8080/", "height": 305} id="f1DEqglrFBZB" outputId="01e4ef7e-d968-4f29-f7f3-21b1c5e021e5"
#get power column data as a numpy array
col_power = main_df['Power'].to_numpy()
#get period column data as a numpy array
col_period = main_df['Period'].to_numpy()
#plot with matplotlib
plt.scatter(col_power, col_period, s= 10)
plt.title('My plot', fontsize = 20)
plt.xlabel('Power', fontsize=15); plt.ylabel('Period', fontsize=15);
# + colab={"base_uri": "https://localhost:8080/", "height": 293} id="8qcWykwXFDeI" outputId="e2be4385-5537-415a-e565-1e4afd12921d"
main_df.plot()
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="4hI2JJhIFFY7" outputId="4c97144a-2268-425f-fb54-687fd74e8dc4"
main_df['Sector'].plot()
# + id="OXy5E4PFFIAp"
mycut = main_df['Power'] > 60
data_subset = main_df[mycut]
print('data_subset:\n',data_subset) #so you can see what's going on
#select tic ids from the targets in the data subset
ids_subset = data_subset['TIC'].to_numpy()
print('\nids_subset:\n',ids_subset) #so you can see what's going on
print('\nStart of iteration:')#so you can see what's going on
for targetid in ids_subset:
# you can use these to open the data with our OpenAndPlot class
# but here I will just print the ID & what sector it comes from
#find row in df corresponding to ID
df_row = main_df[ main_df['TIC'] == targetid]
#select sector value for that row
sector = df_row['Sector'].to_numpy()[0] #[0] needed b/c I want just the value not the array form
#print result of ID & what sector it comes from
print('Star {} comes from Sector {}'.format( targetid, sector ))
# + colab={"base_uri": "https://localhost:8080/", "height": 367} id="yeKWrP1nFKi1" outputId="6077231f-053f-4bae-8777-3ff476c04a7a"
plt.hist(main_df["Period"])
# + colab={"base_uri": "https://localhost:8080/", "height": 350} id="bqv6IW2qJM3y" outputId="c75c411b-bfa1-4ad0-e22b-1ca3c8e346f3"
plt.hist(main_df["Duration"])
# + colab={"base_uri": "https://localhost:8080/", "height": 367} id="SR3U_ZJFJSOo" outputId="7f1b4299-8f70-43f8-e9b2-e2a4b84a36f4"
plt.hist(main_df["Transit Time"])
# + colab={"base_uri": "https://localhost:8080/", "height": 350} id="bt2vhh_SJSTv" outputId="808c6a4f-1c49-44de-990c-7c7f2bce906d"
plt.hist(main_df["Power"])
# + colab={"base_uri": "https://localhost:8080/", "height": 367} id="B3Z5HIG8JSc7" outputId="806f7ee1-2b27-4946-b637-c27d7af34eb0"
plt.hist(main_df["Depth"])
# + colab={"base_uri": "https://localhost:8080/", "height": 451} id="MoLwnUu3I6KT" outputId="7a81c96d-63ea-4a89-c165-f6ac2bc0459c"
main_df.hist()
# + id="gvYF278eJGul"
| Interns/Sarah/BLSresults.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# I used the "Forest Covertypes" dataset from <br/>
# http://archive.ics.uci.edu/ml/datasets/Covertype
#
# The original had about 500,000 instances, which was too many, so I extracted a subset.
# It is a 7-class classification problem with categorical and numerical features.
#
# The problem was that the instances of each class were very unevenly distributed:
# - 1 → 36 %
# - 2 → 48 %
# - 3 → 6 %
# - 4 → 0.4 %
# - 5 → 1 %
# - 6 → 3 %
# - 7 → 3.5 %
#
# So I took 700 random instances of each class and combined them into another dataset, which is the one I am loading here.
#
# That is why it has 4900 instances.
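The 700-per-class subsampling described above can be sketched like this (a toy stand-in, since the real subset was prepared offline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the full Covertype labels: 7 classes, very unequal counts.
labels = np.repeat(np.arange(1, 8), [3600, 4800, 600, 40, 100, 300, 350])
per_class = 30  # 700 in the real dataset; smaller here to keep the toy fast

# Draw the same number of rows from every class, without replacement.
chosen = np.concatenate([
    rng.choice(np.where(labels == c)[0], size=per_class, replace=False)
    for c in np.unique(labels)
])
balanced = labels[chosen]
print(np.bincount(balanced)[1:])  # every class now appears per_class times
```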
# + language="javascript"
# IPython.OutputArea.prototype._should_scroll = function(lines) {
# return false;
# }
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import json
import numpy as np
import math
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
# +
#chosen_clf = "DecisionTreeClassifier"
chosen_clf = "RandomForestClassifier"
estims = {}
if chosen_clf == "RandomForestClassifier":
estims = {"n_estimators":100}
# -
# Returns a (data, target) tuple with all the preprocessing done
def get_data():
with open("data_4900.json", "r") as jd:
d = json.load(jd)
data = d['data']
target = d['target']
data = np.array(data)
target = np.array(target)
    # Drop the columns that are all zeros
    data = data[:, np.count_nonzero(data, axis = 0) > 0]
    # Strictly, I should drop every column whose values are all identical,
    # since otherwise the std could also be 0. It does not matter for now.
mean = data.mean(axis = 0)
std = data.std(axis = 0)
data = (data - mean)/std
    #Shuffle the order, since the data is sorted by class
ordering = np.arange(len(target))
np.random.shuffle(ordering)
data = data[ordering]
target = target[ordering]
return data, target
def show_accuracy(tune_param, decreases):
data, target = get_data()
N = data.shape[0]
prop_train = 2 / 3
N_train = math.ceil(N * prop_train)
N_test = N - N_train
data_train = data[:N_train]
data_test = data[N_train:]
target_train = target[:N_train]
target_test = target[N_train:]
clf_pipe = Pipeline([
("pca", PCA(n_components = 0.9, svd_solver = "full")),
#("clf", DecisionTreeClassifier())
("clf", eval(chosen_clf)(**estims))
])
#clf = DecisionTreeClassifier()
clf = eval(chosen_clf)(**estims)
train_scores = []
test_scores = []
pca_train_scores = []
pca_test_scores = []
for dec in decreases:
#clf.set_params(min_impurity_decrease = dec)
clf.set_params(**{tune_param: dec})
clf.fit(data_train, target_train)
test_score = clf.score(data_test, target_test)
train_score = clf.score(data_train, target_train)
test_scores.append(test_score)
train_scores.append(train_score)
#clf_pipe.set_params(dtc__min_impurity_decrease = dec)
clf_pipe.set_params(**{"clf__"+tune_param: dec})
clf_pipe.fit(data_train, target_train)
pca_test_score = clf_pipe.score(data_test, target_test)
pca_train_score = clf_pipe.score(data_train, target_train)
pca_test_scores.append(pca_test_score)
pca_train_scores.append(pca_train_score)
plt.figure(figsize=(15,5))
accuracy = plt.subplot(121)
accuracy.set_title("Normal DT")
accuracy.plot(decreases, test_scores, label = "Test score")
accuracy.plot(decreases, train_scores, label = "Train score")
accuracy.locator_params(nbins = 15, axis = "x")
accuracy.set_xlabel(tune_param)
accuracy.set_ylabel("accuracy")
accuracy.grid(True)
accuracy.legend(loc='best')
pca_accuracy = plt.subplot(122, sharey = accuracy)
pca_accuracy.set_title("Using PCA")
pca_accuracy.plot(decreases, pca_test_scores, label = "Test score")
pca_accuracy.plot(decreases, pca_train_scores, label = "Train score")
pca_accuracy.locator_params(nbins = 15, axis = "x")
pca_accuracy.set_xlabel(tune_param)
pca_accuracy.set_ylabel("accuracy")
pca_accuracy.grid(True)
pca_accuracy.legend(loc='best')
plt.show()
# ## Default values
# +
data, target = get_data()
N = data.shape[0]
prop_train = 2 / 3
N_train = math.ceil(N * prop_train)
N_test = N - N_train
data_train = data[:N_train]
data_test = data[N_train:]
target_train = target[:N_train]
target_test = target[N_train:]
# -
#clf = DecisionTreeClassifier()
clf = eval(chosen_clf)(**estims)
clf.fit(data_train, target_train)
train_score = clf.score(data_train, target_train)
test_score = clf.score(data_test, target_test)
train_score, test_score
# ## Tuning parameters
params = {
"max_depth": np.linspace(1,150, dtype = np.int64),
"min_samples_split": np.linspace(2,150, dtype = np.int64),
"min_samples_leaf": np.linspace(1,150, dtype = np.int64),
"min_weight_fraction_leaf": np.linspace(0,0.5, dtype = np.float64),
"max_leaf_nodes": np.linspace(2,150, dtype = np.int64),
"min_impurity_decrease": np.linspace(0,150, dtype = np.float64),
}
for i in params:
show_accuracy(i, params[i])
| code/notebooks/python/.ipynb_checkpoints/DT con dataset Covertype-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Visualize distribution of metadata by plotting frequency histograms
# +
# Load the "autoreload" extension
# %load_ext autoreload
# always reload modules marked with "%aimport"
# %autoreload 1
import os
import sys
import dotenv
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn
# add the 'src' directory as one where we can import modules
project_dir = os.path.join(os.getcwd(), os.pardir)
src_dir = os.path.join(project_dir, 'src')
sys.path.append(src_dir)
# import my method from the source code
# %aimport data.tools
# load env
# %load_ext dotenv
dotenv_path = os.path.join(project_dir, '.env')
dotenv.load_dotenv(dotenv_path)
| notebooks/1.0-mwhitesi-visualize-metadata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/enrprz/CoffeeBytes/blob/master/CoffeeBytes_06.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="gZ8HE_gCAn6R" colab_type="text"
# #Week 6: Dictionaries && Modules
#
# Hello and welcome to the 6th part of the coding series. At the end of this lesson you should be able to:
#
#
# * Learn about Dictionaries.
# * Learn about Modules
# * Code your own Dictionaries.
# * Make use of existing Modules.
# * Practice using Dictionaries and Modules with previous lessons.
# * Read more technical terms.
#
# Today you will be introduced to Dictionaries and Modules. Adding these to our skill repertoire gets us closer to the end of the course. Be warned that as we learn more skills, the code might get slightly longer than usual.
# Remember if you have any problems or questions do not hesitate to ask the instructor ([Enrique](https://www.instagram.com/01000101.sh/?hl=en)) or to ask the hacker community on the SHPE Discord.
# + [markdown] id="xoSdQyIBPx_H" colab_type="text"
# # Dictionaries:
#
# A dictionary is a collection of unordered, changeable and indexed variables. You will know it is a dictionary by its use of curly braces {}. It also has the unique ability to contain **keys** and **values**. This is extremely helpful while coding, since it lets you look up values quickly. Let's take a look at the anatomy of a dictionary:
#
# ```
# name_of_dictionary = {
# "key1" : value1,
# "key2" : value2,
# "key3" : value3
# }
# ```
#
#
# + id="F0aZ9FIbAnFV" colab_type="code" colab={}
# Lets go ahead and start with something very familiar to you
# Contacts!
contacts = {
"Antonio" : 9560000101,
"Enrique" : 9560101010,
"Carlos" : 9561111000
}
# lets display the contacts dictionary
print(contacts)
# + [markdown] id="KQmI84Vb8anM" colab_type="text"
# But wait, now you ran into your aunt Mary. She got a new phone, how do we add it?
# + id="oEfTuX838iJO" colab_type="code" colab={}
contacts = {
"Antonio" : 9560000101,
"Enrique" : 9560101010,
"Carlos" : 9561111000
}
# lets add a new key AND value.
contacts["Mary"] = 9561111111
# lets display the contacts dictionary
print(contacts)
# + [markdown] id="_I0SGcUE8zwK" colab_type="text"
# But now you are no longer friends with Enrique, sadly, you need to delete his number. How do we do it?
# + id="ek0otaEq8-79" colab_type="code" colab={}
contacts = {
"Antonio" : 9560000101,
"Enrique" : 9560101010,
"Carlos" : 9561111000,
"Mary" : 9561111111
}
# Delete Enrique's key and value
del contacts["Enrique"]
print(contacts)
# + [markdown] id="yk2rjUXx6_9y" colab_type="text"
# Lets make use of some of the skills learned in previous lessons to get a better grasp of Dictionaries:
# + id="xYORT_547Jm_" colab_type="code" colab={}
# Loop thru the names (keys) of the dictionary using a FOR loop!
contacts = {
"Antonio" : 9560000101,
"Enrique" : 9560101010,
"Carlos" : 9561111000,
"Mary" : 9561111111
}
# create loop
for x in contacts:
print(x)
# + [markdown] id="p6S8EDzp7luW" colab_type="text"
# We can also display only the phone#'s (Values):
# + id="60KjceDh7vnp" colab_type="code" colab={}
contacts = {
"Antonio" : 9560000101,
"Enrique" : 9560101010,
"Carlos" : 9561111000,
"Mary" : 9561111111
}
# create loop to display phone #
for x in contacts.values():
print(x)
# + [markdown] id="qjJNWlNX781g" colab_type="text"
# But that's boring, we need to know BOTH the key and the values at times. How do we do it?
# + id="fmblFyZz78SC" colab_type="code" colab={}
contacts = {
"Antonio" : 9560000101,
"Enrique" : 9560101010,
"Carlos" : 9561111000,
"Mary" : 9561111111
}
# create loop to display the Keys and the Values inside dictionary
for x,y in contacts.items():
print(x,y)
# + [markdown] id="C6REimT4882D" colab_type="text"
# Python comes preloaded with some very useful functions for Dictionaries, here are some of them:
#
# Function | Description
# ____________________
# * **clear()** - Removes all the elements from the dictionary.
# * **copy()** - Returns a copy of the dictionary.
# * **fromkeys()** - Returns a dictionary with the specified keys and values.
# * **get()** - Returns the value of the specified key.
# * **items()** - Returns a list containing a tuple for each key-value pair.
# * **keys()** - Returns a list containing the dictionary's keys.
# * **pop()** - Removes the element with the specified key.
# * **popitem()** - Removes the last inserted key-value pair.
# * **setdefault()** - Returns the value of the specified key. If the key does not exist: insert the key, with the specified value.
# * **update()** - Updates the dictionary with the specified key-value pairs.
# * **values()** - Returns a list of all the values in the dictionary.
#
# Lets go thru some of the most useful examples, **Dictionaries** inside **Dictionaries**!
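Before the nested example, here is a quick runnable tour of a few of the methods listed above, reusing the contacts data from earlier:

```python
contacts = {"Antonio": 9560000101, "Carlos": 9561111000}

print(contacts.get("Antonio"))  # 9560000101
print(contacts.get("Mary"))     # None -- no KeyError, unlike contacts["Mary"]

contacts.setdefault("Mary", 9561111111)  # inserts only if the key is missing
contacts.update({"Carlos": 9562222000})  # overwrites the existing value

removed = contacts.pop("Antonio")  # removes the key and returns its value
print(removed)                     # 9560000101
print(sorted(contacts))            # ['Carlos', 'Mary']
```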
# + id="rLHTHark-kmq" colab_type="code" colab={}
# Create 3 contact notebooks, inside a main contact dictionary
myContacts = {
"work_contacts" : {
"name" : "Sheryl",
"Phone" : 9561110000
},
"school_contacts" : {
"name" : "Jaime",
"phone" : 9560001101
},
"friends_contacts" : {
"name" : "Jerry",
"phone" : 9561101010
}
}
# Now display all your contacts!
print(myContacts)
# + [markdown] id="CH4D1ndNBBM3" colab_type="text"
# You don't always have to write this much code to create a dictionary. Remember, reusability is part of the game. In the next exercise we make use of a 'constructor' to make a dictionary. Let's check it out:
# + id="CqoM2cvIBOPx" colab_type="code" colab={}
# Reserved for in class example:
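The in-class example is left blank above; one way it might look, using the `dict()` constructor instead of curly braces (a hypothetical sketch, since the original example isn't included):

```python
# dict() builds the same mapping without writing braces by hand.
contacts = dict(Antonio=9560000101, Enrique=9560101010, Carlos=9561111000)
print(contacts["Carlos"])  # 9561111000

# It also converts a list of (key, value) pairs into a dictionary.
pairs = [("Mary", 9561111111), ("Jaime", 9560001101)]
more_contacts = dict(pairs)
print(more_contacts["Mary"])  # 9561111111
```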
# + [markdown] id="BpwgQ8CdBQ0v" colab_type="text"
# # Modules:
#
# Hurray! This is it, the last of the skills that will serve you well beyond what you could ever imagine. Modules are libraries or programs made by other people, and you can add them to your program for better functionality, as well as easy-to-access functions. Some of the most commonly used are TensorFlow, NumPy, Pandas, Keras, and OS.
#
# Anatomy of a module:
#
# ```
# import name_of_module as short_name
#
# regular code
# ```
#
# Some of these modules will require you to download them beforehand on your computer. But once that is done, it's just the same as we've been doing so far. Ready to start? Let's use our first module for something fun.
# + [markdown] id="PfexoLQS4a4Q" colab_type="text"
# We will be using the **calendar** module to display the calendar for a given month and year. If you find this interesting, you can learn more about the module [here](https://docs.python.org/3/library/calendar.html).
# + id="YThrMpQ4KyJ6" colab_type="code" outputId="ea4063d2-f7f4-40d7-930c-e3d2452fdcf8" colab={"base_uri": "https://localhost:8080/", "height": 208}
# Display calendar of given month of the year
# Import the calendar module
import calendar
# Ask the user for the Month and Year!
y = int(input("Enter year: "))
m = int(input("Enter month: "))
# Display the calendar
print(calendar.month(y, m))
# + [markdown] id="tpKeymP340zV" colab_type="text"
# Life is very random, and to mimic life situations we need to make our programs react randomly as well! In the following program we make use of a library called **random** to create a random number within a given range. If you are interested in this library, you can find out more about it [here](https://docs.python.org/3/library/random.html).
# + id="28kYDi_94u4r" colab_type="code" colab={}
# Import the random module
import random
# remove the comments to ask the user for the numbers!
#low = int(input("Enter the start of the range:"))
#high = int(input("Enter the end of the range: "))
# Display random number (notice this number changes every time you run the program)
print(random.randint(0,1000))
# + [markdown] id="g-ny47FvF_5g" colab_type="text"
# What if I want to display some cool math functions? Well, for that you can use **sympy**! This module can help you create and customize your formulas to display while programming. To learn more about sympy you can click [here](https://docs.sympy.org/latest/tutorial/index.html#tutorial).
# + id="YxxrusNzHOl0" colab_type="code" outputId="81aa974d-3252-4750-8627-dbef713cd2b6" colab={"base_uri": "https://localhost:8080/", "height": 89}
# Import sympy module
import sympy
from sympy import *
# Declare the variables that we will use
x,y,z = symbols('x,y,z')
# Ignore this, this is just some google colab compatibility stuff.
# Only wizards read this
def custom_latex_printer(exp,**options):
from google.colab.output._publish import javascript
url = "https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=default"
javascript(url=url)
return sympy.printing.latex(exp,**options)
# This is the line used to display the content
init_printing(use_latex="mathjax",latex_printer=custom_latex_printer)
# Function to be displayed
Integral(sqrt(1/x)*8, x)
# + [markdown] id="dCXz9bKmMUVT" colab_type="text"
# Getting it so far? Awesome! Now that you see the power of Python on its own, and with the extension of modules, you can realize that your possibilities with it are endless! We are currently using a cloud environment to execute all of our code, and sadly not all modules work here. You can try them on your computer (just take your time installing the necessary files). However, we will be focusing our attention on Data Science.
#
# # Why Data Science?
#
# Data Science is a rapidly growing field, and the demand for [jobs](https://www.hiringlab.org/2019/01/17/data-scientist-job-outlook/) is high. At the end of this course you will be presented with an opportunity to take an online certification from [EDX](https://www.edx.org/course/python-basics-for-data-science-2).
#
# Next lesson we will be taking a look at examples of python used for Data Science, and many more applications (might require actual computer installation).
#
# # See you next time!
| CoffeeBytes_06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploring the Natality Dataset
#
# **Learning Objectives**
#
# - Explore a BigQuery dataset inside Jupyter notebook
# - Read data from BigQuery into Pandas dataframe
# - Examine the distribution of average baby weight across various features
# ## Introduction
#
# In this notebook we'll read data from BigQuery into our notebook to begin some preliminary data exploration of the natality dataset. To begin, we'll set environment variables related to our GCP project.
PROJECT = "qwiklabs-gcp-00-34ffb0f0dc65" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
# + language="bash"
# if ! gsutil ls | grep -q gs://${BUCKET}/; then
# gsutil mb -l ${REGION} gs://${BUCKET}
# fi
# -
# <h2> Explore data </h2>
#
# The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data.
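The month-hash split can be reproduced locally: fingerprint the concatenated year and month, then threshold the hash, so all rows from the same month land on the same side. A sketch with `hashlib` standing in for BigQuery's FARM_FINGERPRINT (an assumption; the real hash function differs):

```python
import hashlib

def hashmonth(year, month):
    # Deterministic fingerprint of the year-month key (FARM_FINGERPRINT stand-in).
    key = "{}{}".format(year, month).encode("utf-8")
    return int(hashlib.md5(key).hexdigest(), 16)

def is_train(year, month, train_fraction=0.8):
    # Every row sharing a year-month gets the same verdict, so twins born
    # in the same month never end up split across train and eval.
    return hashmonth(year, month) % 100 < train_fraction * 100

print(is_train(2005, 7) == is_train(2005, 7))  # True: the split is deterministic
```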
# Create SQL query using natality data after the year 2000
query_string = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
    ABS(FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING)))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
"""
# +
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
df = bq.query(query_string + "LIMIT 100").to_dataframe()
df.head()
# -
# Let's write a query to find the unique values for a given column and see how the number of babies and their average weight are distributed across those values. This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
def get_distinct_values(column_name):
sql_query = """
SELECT
{0},
COUNT(1) AS num_babies,
AVG(weight_pounds) AS avg_wt
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY
{0}
""".format(column_name)
return bq.query(sql_query).to_dataframe()
# We'll use the `get_distinct_values` function above to explore how the variables `num_babies` and `avg_wt` are distributed across the features `is_male`, `mother_age`, `plurality` and `gestation_weeks`.
# Bar plots to see how avg_wt and num_babies are distributed across is_male
df = get_distinct_values("is_male")
df.plot(x = "is_male", y = "num_babies", kind = "bar");
df.plot(x = "is_male", y = "avg_wt", kind = "bar");
# Line plots to see how avg_wt and num_babies are distributed across mother_age
df = get_distinct_values("mother_age")
df = df.sort_values("mother_age")
df.plot(x = "mother_age", y = "num_babies");
df.plot(x = "mother_age", y = "avg_wt");
# Bar plot to see plurality(singleton, twins, etc.) with avg_wt linear and num_babies logarithmic
df = get_distinct_values("plurality")
df = df.sort_values("plurality")
df.plot(x = "plurality", y = "num_babies", logy = True, kind = "bar");
df.plot(x = "plurality", y = "avg_wt", kind = "bar");
# Bar plot to see gestation_weeks with avg_wt linear and num_babies logarithmic
df = get_distinct_values("gestation_weeks")
df = df.sort_values("gestation_weeks")
df.plot(x = "gestation_weeks", y = "num_babies", logy = True, kind = "bar");
df.plot(x = "gestation_weeks", y = "avg_wt", kind = "bar");
# ## Conclusion
#
# All these factors seem to play a part in the baby's weight. Male babies are heavier on average than female babies. Teenaged and older moms tend to have lower-weight babies. Twins, triplets, etc. are lower weight than single births. Preemies weigh in lower, as do babies born to single moms. In addition, it is important to check whether you have enough data (number of babies) for each input value. Otherwise, model predictions for input values that don't have enough data may not be reliable.
# <p>
# In the next notebook, we will develop a machine learning model to combine all of these factors to come up with a prediction of a baby's weight.
# Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MikeGener/CPEN-21A-ECE-2-1/blob/main/Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="7zGuZq7y24-R"
# Laboratory 1
# + id="wjmEkEyq21sg"
print ("Welcome to Python Programming")
print ("Name: <NAME>")
print ("Address: Block 17 Lot 28 Camachile Subdivision, Pasong Camachile 1, General Trias City, Cavite")
print ("Age: 19 years old")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
#
# ## <font color="880000"> 4. Using the Solow Growth Model </font>
#
# ### <font color="888888"> 4.1. Convergence to the Balanced-Growth Path </font>
#
# #### <font color="000000"> 4.1.1. The Example of Post-WWII West Germany </font>
#
# Economies do converge to and then remain on their balanced-growth paths. The West German economy after World War II is a case in point.
#
# We can see such convergence in action in many places and times. For example, consider the post-World War II history of West Germany. The defeat of the Nazis left the German economy at the end of World War II in ruins. Output per worker was less than one-third of its prewar level. The economy’s capital stock had been wrecked and devastated by three years of American and British bombing and then by the ground campaigns of the last six months of the war. But in the years immediately after the war, the West German economy’s capital-output ratio rapidly grew and converged back to its prewar value. Within 12 years the West German economy had closed half the gap back to its pre-World War II growth path. And within 30 years the West German economy had effectively closed the entire gap between where it had started at the end of World War II and its balanced-growth path.
#
# +
# bring in libraries needed for python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# load data from the penn world table's estimates
# of growth among the g-7 economies
#
# (these estimates have problems; but all estimates
# have problems, and a lot of work has gone into
# making these)
pwt91_df = pd.read_csv('https://delong.typepad.com/files/pwt91-data.csv')
# +
# pull america out of the loaded data table
is_America = pwt91_df['countrycode'] == 'USA'
America_df = pwt91_df[is_America]
America_gdp_df = America_df[['year', 'rgdpna', 'emp']].copy()  # copy to avoid chained-assignment warnings
America_gdp_df['rgdpw'] = America_gdp_df.rgdpna/America_gdp_df.emp
America_pwg_ser = America_gdp_df[['year', 'rgdpw']]
America_pwg_ser.set_index('year', inplace=True)
# +
# pull germany out of the loaded data table, and
# plot growth
is_Germany = pwt91_df['countrycode'] == 'DEU'
Germany_df = pwt91_df[is_Germany]
Germany_gdp_df = Germany_df[['year', 'rgdpna', 'emp']].copy()  # copy to avoid chained-assignment warnings
Germany_gdp_df['rgdpw'] = Germany_gdp_df.rgdpna/Germany_gdp_df.emp
Germany_pwg_ser = Germany_gdp_df[['year', 'rgdpw']]
Germany_pwg_ser.set_index('year', inplace=True)
np.log(Germany_pwg_ser).plot()
plt.show()
# +
# compare german real national income per worker
# with american
Germany_ratio_ser = Germany_pwg_ser/America_pwg_ser
Germany_ratio_ser.plot()
plt.show()
# -
# The two figures above show, respectively, the natural logarithm of absolute real national income per worker for the German economy and real national income per worker relative to the U.S. value, both since 1950. By 1980 the German economy had converged: its period of rapid recovery growth was over, and national income per capita then grew at the same rate as that in the U.S., which had not suffered wartime destruction pushing it off and below its steady-state balanced-growth path. Then in 1990, at least according to this set of estimates, the absorption of the formerly communist East German state into the _Bundesrepublik_ was an enormous benefit: the expanded division of labor and return of the market economy allowed productivity in the German east to more than double almost overnight. Thereafter the German economy has lost some ground relative to the U.S. as the U.S.'s leading information technology hardware and software sectors have been much stronger leading sectors than Germany's precision machinery and manufacturing sectors.
#
# By comparison, the United States shows no analogous period of rapid growth catching up to a steady-state balanced-growth path. (There is, however, a marked boom in the 1960s, and then a return to early trends in the late 1970s and 1980s, followed by a return to previous normal growth in the 1990s and then a fall-off in growth after 2007.)
# +
# plot american growth since 1950
np.log(America_pwg_ser).plot()
plt.show()
# -
#
#
# #### <font color="000000"> 4.1.2. The Example of Post-WWII Japan </font>
#
# The same story holds in an even stronger form for the other defeated fascist power that surrendered unconditionally to the U.S. at the end of World War II.
#
# In 1950, largely as a result of <NAME>'s B-29s, Japan is only half as productive as Germany, and only one-fifth as productive as the United States. Once again, it converges rapidly. After 1990 Japan no longer grows faster than and catches up to the United States. Indeed, like Germany it thereafter loses ground as its world class manufacturing sectors are also less powerful leading sectors than the United States's information technology hardware and software complexes.
# +
# pull japan out of the loaded data table, and
# plot growth
is_Japan = pwt91_df['countrycode'] == 'JPN'
Japan_df = pwt91_df[is_Japan]
Japan_gdp_df = Japan_df[['year', 'rgdpna', 'emp']].copy()  # copy to avoid chained-assignment warnings
Japan_gdp_df['rgdpw'] = Japan_gdp_df.rgdpna/Japan_gdp_df.emp
Japan_pwg_ser = Japan_gdp_df[['year', 'rgdpw']]
Japan_pwg_ser.set_index('year', inplace=True)
np.log(Japan_pwg_ser).plot()
plt.show()
# +
# compare japan to america
Japan_ratio_ser = Japan_pwg_ser/America_pwg_ser
Japan_ratio_ser.plot()
plt.show()
# -
#
#
# #### <font color="008800"> 4.1.3. The Post-WWII G-7 </font>
#
# The same story holds for the other members of the G-7 group of large advanced industrial economies as well.
# +
# pull britain out of the loaded data table
is_Britain = pwt91_df['countrycode'] == 'GBR'
Britain_df = pwt91_df[is_Britain]
Britain_gdp_df = Britain_df[['year', 'rgdpna', 'emp']].copy()
Britain_gdp_df['rgdpw'] = Britain_gdp_df.rgdpna/Britain_gdp_df.emp
Britain_pwg_ser = Britain_gdp_df[['year', 'rgdpw']]
Britain_pwg_ser.set_index('year', inplace=True)
Britain_ratio_ser = Britain_pwg_ser/America_pwg_ser
# +
# pull italy out of the loaded data table
is_Italy = pwt91_df['countrycode'] == 'ITA'
Italy_df = pwt91_df[is_Italy]
Italy_gdp_df = Italy_df[['year', 'rgdpna', 'emp']].copy()
Italy_gdp_df['rgdpw'] = Italy_gdp_df.rgdpna/Italy_gdp_df.emp
Italy_pwg_ser = Italy_gdp_df[['year', 'rgdpw']]
Italy_pwg_ser.set_index('year', inplace=True)
Italy_ratio_ser = Italy_pwg_ser/America_pwg_ser
# +
# pull canada out of the loaded data table
is_Canada = pwt91_df['countrycode'] == 'CAN'
Canada_df = pwt91_df[is_Canada]
Canada_gdp_df = Canada_df[['year', 'rgdpna', 'emp']].copy()
Canada_gdp_df['rgdpw'] = Canada_gdp_df.rgdpna/Canada_gdp_df.emp
Canada_pwg_ser = Canada_gdp_df[['year', 'rgdpw']]
Canada_pwg_ser.set_index('year', inplace=True)
Canada_ratio_ser = Canada_pwg_ser/America_pwg_ser
# +
# pull france out of the loaded data table
is_France = pwt91_df['countrycode'] == 'FRA'
France_df = pwt91_df[is_France]
France_gdp_df = France_df[['year', 'rgdpna', 'emp']].copy()
France_gdp_df['rgdpw'] = France_gdp_df.rgdpna/France_gdp_df.emp
France_pwg_ser = France_gdp_df[['year', 'rgdpw']]
France_pwg_ser.set_index('year', inplace=True)
France_ratio_ser = France_pwg_ser/America_pwg_ser
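# The per-country cells above repeat the same extract-compute-index steps; they could be collapsed into one small helper. A sketch, assuming `pwt91_df` is loaded as above:

```python
def output_per_worker(df, countrycode):
    # Real GDP per worker for one country, indexed by year
    country = df[df['countrycode'] == countrycode].copy()
    country['rgdpw'] = country.rgdpna/country.emp
    return country[['year', 'rgdpw']].set_index('year')

# e.g. France_pwg_ser = output_per_worker(pwt91_df, 'FRA')
```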
# +
# plot g-7 natural logarithm levels of national
# income per worker
g7_df = pd.DataFrame()
g7_df['Japan'] = np.log(Japan_pwg_ser['rgdpw'])
g7_df['Germany'] = np.log(Germany_pwg_ser['rgdpw'])
g7_df['America'] = np.log(America_pwg_ser['rgdpw'])
g7_df['Italy'] = np.log(Italy_pwg_ser['rgdpw'])
g7_df['Canada'] = np.log(Canada_pwg_ser['rgdpw'])
g7_df['France'] = np.log(France_pwg_ser['rgdpw'])
g7_df['Britain'] = np.log(Britain_pwg_ser['rgdpw'])
g7_df.plot()
plt.show()
# +
# calculate and plot g-7 levels of national income
# per worker as a proportion of american
g7_ratio_df = pd.DataFrame()
g7_ratio_df['Japan'] = Japan_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df['Germany'] = Germany_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df['America'] = America_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df['Italy'] = Italy_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df['Canada'] = Canada_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df['France'] = France_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df['Britain'] = Britain_pwg_ser['rgdpw']/America_pwg_ser['rgdpw']
g7_ratio_df.plot()
plt.show()
# -
# The idea—derived from the Solow model—that economies pushed off and below their steady-state balanced-growth paths by the destruction and chaos of war thereafter experience a period of supergrowth, which ebbs as they approach those paths from below, holds for the other members of the G-7 group of large advanced industrial economies as well. In increasing order of the magnitude of their shortfall vis-a-vis the U.S. and the speed of their recovery supergrowth, we have: France, Italy, Germany, and Japan. The three economies that escaped wartime chaos and destruction—the U.S., Britain, and Canada—exhibit no such supergrowth, having no catch-up to their steady-state balanced-growth paths to make.
#
# There is a lot more going on in the post-WWII history of the G-7 economies than just catchup to their steady-state balanced-growth paths after the destruction of World War II: Why do the other economies lose ground vis-a-vis the U.S. after 1990? Why does the U.S. exhibit a small speedup, slowdown, speedup, and then renewed slowdown again? What is it with Britain's steady-state balanced-growth path having so much lower productivity than the other Europeans? Why is Japan the most different from its G-7 partners? And what is it with Italy's attaining U.S. worker productivity levels in 1980, and then its post-2000 relative collapse? (The post-2000 collapse in Italian growth is real; the estimate that it was as productive as the U.S. from 1980-2000 is a data construction error.)
#
#
# ### <font color="888888"> 4.2. Analyzing Jumps in Parameter Values </font>
#
# What if one or more of the parameters in the Solow growth model were to suddenly and substantially shift? What if the labor-force growth rate were to rise, or the rate of technological progress to fall?
#
# One principal use of the Solow growth model is to analyze questions like these: how changes in the economic environment and in economic policy will affect an economy’s long-run levels and growth path of output per worker Y/L.
#
# Let’s consider, as examples, several such shifts: an increase in the growth rate of the labor force n, a change in the economy’s saving-investment rate s, and a change in the growth rate of labor efficiency g. All of these will have effects on the balanced-growth path level of output per worker. But only one—the change in the growth rate of labor efficiency—will permanently affect the growth rate of the economy.
#
# We will assume that the economy starts on its balanced growth path—the old balanced growth path, the pre-shift balanced growth path. Then we will have one (or more) of the parameters—the savings-investment rate s, the labor force growth rate n, the labor efficiency growth rate g—jump discontinuously, and then remain at its new level indefinitely. The jump will shift the balanced growth path. But the level of output per worker will not immediately jump. Instead, the economy's variables will then, starting from their old balanced growth path values, begin to converge to the new balanced growth path—and converge in the standard way.
#
# Remind yourselves of the key equations for understanding the model:
#
# The level of output per worker is:
#
# >(4.1) $ \frac{Y}{L} = \left( \frac{K}{Y} \right)^{\theta}E $
#
# The balanced-growth path level of output per worker is:
#
# >(4.2) $ \left( \frac{Y}{L} \right)^* = \left( \frac{s}{n+g+δ} \right)^{\theta}E $
#
#
# The speed of convergence of the capital-output ratio to its balanced-growth path value is:
#
# >(4.3) $ \frac{d(K/Y)}{dt} = −(1−α)(n+g+δ) \left[ \frac{K}{Y} − \frac{s}{(n+g+δ)} \right] $
#
# where, you recall $ \theta = \alpha/(1-\alpha) $ and $ \alpha = \theta/(1+\theta) $
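# These two conversions are inverses of each other, which is quick to verify numerically:

```python
def theta_from_alpha(alpha):
    # theta = alpha / (1 - alpha)
    return alpha/(1 - alpha)

def alpha_from_theta(theta):
    # alpha = theta / (1 + theta)
    return theta/(1 + theta)

# alpha = 1/2 gives theta = 1, the value used in the examples below
assert theta_from_alpha(0.5) == 1.0
assert abs(alpha_from_theta(theta_from_alpha(0.3)) - 0.3) < 1e-12
```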
#
#
# #### <font color="000000"> 4.2.1. A Shift in the Labor-Force Growth Rate </font>
#
# Real-world economies exhibit profound shifts in labor-force growth. The average woman in India today has only half the number of children that the average woman in India had only half a century ago. The U.S. labor force in the early eighteenth century grew at nearly 3 percent per year, doubling every 24 years. Today the U.S. labor force grows at 1 percent per year. Changes in the level of prosperity, changes in the freedom of migration, changes in the status of women that open up new categories of jobs to them (Supreme Court Justice Sandra Day O’Connor could not get a private-sector legal job in San Francisco when she graduated from Stanford Law School even with her amazingly high class rank), changes in the average age of marriage or the availability of birth control that change fertility—all of these have powerful effects on economies’ rates of labor-force growth.
#
# What effects do such changes have on output per worker Y/L—on our measure of material prosperity? The faster the growth rate of the labor force n, the lower will be the economy’s balanced-growth capital-output ratio s/(n + g + δ). Why? Because each new worker who joins the labor force must be equipped with enough capital to be productive and to, on average, match the productivity of his or her peers. The faster the rate of growth of the labor force, the larger the share of current investment that must go to equip new members of the labor force with the capital they need to be productive. Thus the lower will be the amount of investment that can be devoted to building up the average ratio of capital to output.
#
# A sudden and permanent increase in the rate of growth of the labor force will lower the level of output per worker on the balanced-growth path. How large will the long-run change in the level of output be, relative to what would have happened had labor-force growth not increased? It is straightforward to calculate if we know the other parameter values, as is shown in the example below.
#
#
# **4.2.1.1. An Example: An Increase in the Labor Force Growth Rate**: Consider an economy in which the parameter α is 1/2, the efficiency of labor growth rate g is 1.5 percent per year, the depreciation rate δ is 3.5 percent per year, and the saving rate s is 21 percent. Suppose that the labor-force growth rate suddenly and permanently increases from 1 to 2 percent per year.
#
# Before the increase in the labor-force growth rate, in the initial steady-state, the balanced-growth equilibrium capital-output ratio was:
#
# >(4.4) $ \left( \frac{K_{in}}{Y_{in}} \right)^* = \frac{s_{in}}{(n_{in}+g_{in}+δ_{in})} = \frac{0.21}{(0.01 + 0.015 + 0.035)} = \frac{0.21}{0.06} = 3.5 $
#
# (with subscripts "in" for "initial").
#
# After the increase in the labor-force growth rate, in the alternative steady state, the new balanced-growth equilibrium capital-output ratio will be:
#
# >(4.5) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = \frac{s_{alt}}{(n_{alt}+g_{alt}+δ_{alt})} = \frac{0.21}{(0.02 + 0.015 + 0.035)} = \frac{0.21}{0.07} = 3 $
#
# (with subscripts "alt" for "alternative").
#
# Before the increase in labor-force growth, the level of output per worker along the balanced-growth path was equal to:
#
# >(4.6) $ \left( \frac{Y_{t, in}}{L_{t, in}} \right)^* = \left( \frac{s_{in}}{(n_{in}+g_{in}+δ_{in})} \right)^{α/(1−α)} E_{t, in} = 3.5 E_{t, in} $
#
# After the increase in labor-force growth, the level of output per worker along the balanced-growth path will be equal to:
#
# >(4.7) $ \left( \frac{Y_{t, alt}}{L_{t, alt}} \right)^* = \left( \frac{s_{alt}}{(n_{alt}+g_{alt}+δ_{alt})} \right)^{α/(1−α)} E_{t, alt} = 3 E_{t, alt} $
#
# This fall in the balanced-growth path level of output per worker means that in the long run—after the economy has converged to its new balanced-growth path—one-seventh of its per worker economic prosperity has been lost because of the increase in the rate of labor-force growth.
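# The example's arithmetic can be checked directly in code (a sketch using the parameter values just given):

```python
s, g, delta = 0.21, 0.015, 0.035
alpha = 0.5
theta = alpha/(1 - alpha)

def ky_star(n):
    # balanced-growth capital-output ratio s/(n + g + delta)
    return s/(n + g + delta)

ky_initial = ky_star(n=0.01)      # 3.5
ky_alternative = ky_star(n=0.02)  # 3.0

# long-run ratio of output per worker (E is the same in both scenarios)
loss_ratio = (ky_alternative/ky_initial)**theta  # 6/7: one-seventh lost
print(ky_initial, ky_alternative, loss_ratio)
```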
#
# In the short run of a year or two, however, such an increase in the labor-force growth rate has little effect on output per worker. In the months and years after labor-force growth increases, the increased rate of labor-force growth has had no time to affect the economy’s capital-output ratio. But over decades and generations, the capital-output ratio will fall as it converges to its new balanced-growth equilibrium level.
#
# A sudden and permanent change in the rate of growth of the labor force will immediately and substantially change the level of output per worker along the economy’s balanced-growth path: It will shift the balanced-growth path for output per worker up (if labor-force growth falls) or down (if labor-force growth rises). But there is no corresponding immediate jump in the actual level of output per worker in the economy. Output per worker doesn’t immediately jump—it is just that the shift in the balanced-growth path means that the economy is no longer in its Solow growth model long-run equilibrium.
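# Equation (4.3) lets us trace that gradual convergence. A sketch, stepping the differential equation forward year by year with the example's parameters after n jumps from 1% to 2%:

```python
s, g, delta = 0.21, 0.015, 0.035
alpha = 0.5
n = 0.02                       # new, higher labor-force growth rate
target = s/(n + g + delta)     # new balanced-growth K/Y = 3.0

ky = s/(0.01 + g + delta)      # start on the old path: K/Y = 3.5
path = [ky]
for year in range(100):
    # d(K/Y)/dt = -(1 - alpha)(n + g + delta) [K/Y - s/(n + g + delta)]
    ky += -(1 - alpha)*(n + g + delta)*(ky - target)
    path.append(ky)

print(path[0], path[50], path[100])  # drifts from 3.5 toward 3.0
```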
#
#
# **4.2.1.2. Empirics: The Labor-Force Growth Rate Matters**: The average country with a labor-force growth rate of less than 1 percent per year has an output-per-worker level that is nearly 60 percent of the U.S. level. The average country with a labor-force growth rate of more than 3 percent per year has an output-per-worker level that is only 20 percent of the U.S. level.
#
# To some degree poor countries have fast labor-force growth rates because they are poor: Causation runs both ways. Nevertheless, high labor-force growth rates are a powerful cause of low capital intensity and relative poverty in the world today.
#
#
#
# **Figure 4.2.1. The Labor Force Growth Rate Matters: Output per Worker and Labor Force Growth**
#
# <a class="asset-img-link" href="https://delong.typepad.com/.a/6a00e551f0800388340240a4d493ac200d-pi"><img class="asset asset-image at-xid-6a00e551f0800388340240a4d493ac200d img-responsive" style="width: 400px; display: block; margin-left: auto; margin-right: auto;" alt="Labor-force-growth-matters" title="Labor-force-growth-matters" src="https://delong.typepad.com/.a/6a00e551f0800388340240a4d493ac200d-400wi" /></a>
#
#
#
# How important is all this in the real world? Does a high rate of labor-force growth play a role in making countries relatively poor not just in economists’ models but in reality? It turns out that it is important. Of the 22 countries in the world in 2000 with output-per-worker levels at least half of the U.S. level, 18 had labor-force growth rates of less than 2 percent per year, and 12 had labor-force growth rates of less than 1 percent per year. The additional investment requirements imposed by rapid labor-force growth are a powerful reducer of capital intensity and a powerful obstacle to rapid economic growth.
#
# It takes time, decades and generations, for the economy to converge to its new balanced-growth path equilibrium, and thus for the shift in labor-force growth to affect average prosperity and living standards. But the time needed is reason for governments that value their countries’ long-run prosperity to take steps now (or even sooner) to start assisting the demographic transition to low levels of population growth. Female education, social changes that provide women with more opportunities than being a housewife, inexpensive birth control—all these pay large long-run dividends as far as national prosperity levels are concerned.
#
# U.S. President <NAME> used to tell a story of a retired French general, <NAME>, “who once asked his gardener to plant a tree. The gardener objected that the tree was slow-growing and would not reach maturity for a hundred years. The Marshal replied, ‘In that case, there is no time to lose, plant it this afternoon.’”
#
#
# #### <font color="000000"> 4.2.2. The Algebra of a Higher Labor Force Growth Rate </font>
#
# But rather than calculating example by example, set of parameter values by set of parameter values, we can gain some insight by resorting to algebra and considering in general the effect on capital-output ratios and output-per-worker levels of an increase Δn in the labor-force growth rate, following the old math convention of using "Δ" to stand for a sudden and discrete change.
#
# Assume the economy has its Solow growth parameters, and its initial balanced-growth path capital-output ratio
#
# >(4.8) $ \left( \frac{K_{in}}{Y_{in}} \right)^* = \frac{s_{in}}{(n_{in}+g_{in}+δ_{in})} $
#
# with "in" standing for "initial".
#
# And now let us consider an alternative scenario, with "alt" standing for "alternative", in which things had been different for a long time:
#
# >(4.9) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = \frac{s_{alt}}{(n_{alt}+g_{alt}+δ_{alt})} $
#
# For the s, g, and δ parameters, the initial values equal the alternative values. For the labor force growth rate, by contrast:
#
# >(4.10) $ n_{alt} = n_{in} + Δn $
#
# So we can then rewrite:
#
# >(4.11) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = \frac{s_{in}}{(n_{in}+g_{in}+δ_{in})} \frac{(n_{in}+g_{in}+δ_{in})}{(n_{in} + Δn +g_{in}+δ_{in})} = \frac{s_{in}}{(n_{in}+g_{in}+δ_{in})} \left[\frac{1}{1+\frac{Δn}{(n_{in}+g_{in}+δ_{in})}} \right] $
#
# The first term on the right hand side is just the initial capital-output ratio, and we know that 1/(1+x) is approximately 1−x for small values of x, so we can make an approximation:
#
# >(4.12) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = \left( \frac{K_{in}}{Y_{in}} \right)^* \left[ 1 - \frac{Δn}{(n_{in}+g_{in}+δ_{in})} \right] $
#
# Take the proportional change in the denominator (n+g+δ) of the expression for the balanced-growth capital-output ratio. Multiply that proportional change by the initial balanced-growth capital-output ratio. That is the differential we are looking for.
#
# And by amplifying or damping that change by raising to the α/(1−α) power, we get the differential for output per worker.
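# The quality of that first-order approximation is easy to check numerically (illustrative parameter values):

```python
s, n, g, delta = 0.21, 0.01, 0.015, 0.035
delta_n = 0.005  # a half-point jump in labor-force growth

exact = s/(n + delta_n + g + delta)
approx = (s/(n + g + delta))*(1 - delta_n/(n + g + delta))
print(exact, approx)  # close, and closer still for smaller delta_n
```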
#
#
# #### <font color="000000"> 4.2.3. A Shift in the Growth Rate of the Efficiency of Labor </font>
#
# **4.2.3.1. Efficiency of Labor the Master Key to Long Run Growth**: By far the most important impact on an economy’s balanced-growth path values of output per worker, however, comes from shifts in the growth rate of the efficiency of labor g. We already know that growth in the efficiency of labor is absolutely essential for sustained growth in output per worker, and that changes in g are the only things that cause permanent changes in growth rates that cumulate indefinitely.
#
# Recall yet one more time the capital-output ratio form of the production function:
#
# >(4.13) $ \frac{Y}{L} = \left( \frac{K}{Y} \right)^{\theta} E $
#
# Consider what this tells us. We know that a Solow growth model economy converges to a balanced-growth path. We know that the capital-output ratio K/Y is constant along the balanced-growth path. We know that the returns-to-investment parameter α is constant. And so the balanced-growth path level of output per worker Y/L grows only if, and grows only as fast as, the efficiency of labor E grows.
#
#
# **4.2.3.2. Efficiency of Labor Growth and the Capital-Output Ratio**: Yet when we took a look at the math of an economy on its balanced growth path:
#
# >(4.14) $ \left( \frac{Y}{L} \right)^* = \left( \frac{s}{n+g+δ} \right)^{\theta} E $
#
# we also see that an increase in g raises the denominator of the first term on the right-hand side—and so pushes the balanced-growth capital-output ratio down. That implies that the balanced-growth path level of output per worker associated with any given level of the efficiency of labor is pushed down as well.
#
# It is indeed the case that—just as in the case of an increased labor-force growth rate n—an increased efficiency-of-labor growth rate g reduces the economy’s balanced-growth capital-output ratio s/(n + g + δ). Why? Because, analogously with an increase in the labor force, increases in the efficiency of labor allow each worker to do the work of more than one, but each still needs the machines and buildings with which to do it. The faster the rate of growth of the efficiency of labor, the larger the share of current investment that must go to keep up with the rising efficiency of existing members of the labor force and supply them with the capital they need to be productive. Thus the lower will be the amount of investment that can be devoted to building up or maintaining the average ratio of capital to output.
#
#
# #### <font color="000000"> 4.2.4. The Algebra of Shifting the Efficiency-of-Labor Growth Rate </font>
#
# The arithmetic and algebra are, for the beginning and the middle, the same as they were for an increase in the rate of labor force growth:
#
# Assume the economy has its Solow growth parameters, and its initial balanced-growth path capital-output ratio:
#
# >(4.15) $ \left( \frac{K_{in}}{Y_{in}} \right)^* = \frac{s}{(n+g_{in}+δ)} $
#
# (with "in" standing for "initial"). Also consider an alternative scenario, with "alt" standing for "alternative", in which things had been different for a long time, with a higher efficiency-of-labor growth rate g+Δg since some time t=0 now far in the past:
#
# >(4.16) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = \frac{s}{(n+g_{in}+Δg+δ)} $
#
# We can rewrite this as:
#
# >(4.17) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = $
# $ \frac{s}{(n+g_{in}+δ)} \frac{(n+g_{in}+δ)}{(n +g_{in}+Δg+δ)} = $
# $ \frac{s}{(n+g_{in}+δ)} \left[\frac{1}{1+\frac{Δg}{(n+g_{in}+δ)}} \right] $
#
# Once again, the first term on the right-hand side is just the initial capital-output ratio, and we know that 1/(1+x) is approximately 1−x for small values of x, so we can make an approximation:
#
# >(4.18) $ \left( \frac{K_{alt}}{Y_{alt}} \right)^* = \left( \frac{K_{in}}{Y_{in}} \right)^* \left[ 1 - \frac{Δg}{(n+g_{in}+δ)} \right] $
#
# Take the proportional change in the denominator of the expression for the balanced-growth capital output ratio. Multiply that proportional change by the initial balanced-growth capital-output ratio. That is the differential in the balanced-growth capital-output ratio that we are looking for.
#
# But how do we translate that into a differential for output per worker? In the case of an increase in the labor force growth rate, it was simply by amplifying or damping the change in the balanced-growth capital-output ratio by raising it to the power $ \theta = (α/(1−α))$ in order to get the differential for output per worker. We could do that because the efficiency-of-labor at every time t $ E_t $ was the same in both the initial and the alternative scenarios.
#
# That is not the case here.
# Here, the efficiency of labor was the same in the initial and alternative scenarios back at time 0, now long ago. Since then E has been growing at its rate g in the initial scenario, and at its rate g+Δg in the alternative scenario, and so the time subscripts will be important. Thus for the alternative scenario:
#
# >(4.19) $ \left( \frac{Y_{t, alt}}{L_{t, alt}} \right)^* = \left( \frac{s}{n+g_{in} + \Delta g + \delta} \right)^{\theta} (1+(g_{in}+ \Delta g))^t E_0 $
# while for the initial scenario:
#
# >(4.20) $ \left( \frac{Y_{t, ini}}{L_{t, ini}} \right)^* = \left( \frac{s}{n+g_{in} + \delta} \right)^{\theta} (1+g_{in})^t E_0 $
#
# Now divide to get the ratio of output per worker under the alternative and initial scenarios:
#
# >(4.21) $ \left( \frac{Y_{t, alt}/L_{t, alt}}{Y_{t, ini}/L_{t, ini}} \right)^* = \left( \frac{n+g_{in}+\delta}{(n+g_{in}+\Delta g + \delta)} \right)^{\theta} (1+ \Delta g)^t $
#
# Thus we see that in the long run, as the second term on the right-hand side compounds as t grows, balanced-growth path output per worker under the alternative scenario becomes first larger and then immensely larger than output per worker under the initial scenario. Yes, the balanced-growth path capital-output ratio is lower. But the efficiency of labor at any time t is higher, and then vastly higher once Δg·t has had a chance to mount up and thus (1+Δg)<sup>t</sup> has had a chance to compound.
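# Equation (4.21) can be put into code to watch the two effects fight it out: the level term pushes the alternative path below the initial one, while the compounding term eventually dominates. A sketch with illustrative parameters:

```python
n, g, delta = 0.01, 0.015, 0.035
delta_g = 0.01
alpha = 0.5
theta = alpha/(1 - alpha)

def ratio(t):
    # (4.21): alternative over initial balanced-growth output per worker
    level_term = ((n + g + delta)/(n + g + delta_g + delta))**theta
    return level_term*(1 + delta_g)**t

print(ratio(0))   # below 1: the lower capital-output ratio costs at first
print(ratio(50))  # above 1: compounding efficiency growth has won
```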
#
# Yes, a positive shift in the efficiency-of-labor growth rate g does reduce the economy’s balanced-growth path capital-output ratio. But that effect is overwhelmed by the more direct effect of a larger g on output per worker. It is the economy with a high rate of efficiency-of-labor growth g that becomes by far the richest over time. This is our most important conclusion. In the very longest run, the growth rate of the standard of living—of output per worker—can change if and only if the growth rate of labor efficiency changes. Other factors—a higher saving-investment rate, a lower labor-force growth rate, or a lower depreciation rate—can shift the balanced-growth path level of output per worker up and down. But their effects are short- and medium-run effects: They do not permanently change the growth rate of output per worker, because after the economy has converged to its balanced growth path the only determinant of the growth rate of output per worker is the growth rate of labor efficiency: both are equal to g.
#
# Thus, if we are to increase the rate of growth of the standard of living permanently, we must pursue policies that increase the rate at which labor efficiency grows—policies that enhance technological and organizational progress, improve worker skills, and add to worker education.
#
#
# **4.2.4.1. An Example: Shifting the Growth Rate of the Efficiency of Labor**: What are the effects of an increase in the rate of growth of the efficiency of labor? Let's work through an example:
#
# Suppose we have, at some moment we will label time 0, t=0, an economy on its balanced growth path with a saving rate s of 20% per year, a labor-force growth rate n of 1% per year, a depreciation rate δ of 3% per year, an efficiency-of-labor growth rate g of 1% per year, and a production function curvature parameter α of 1/2, and thus $ \theta = 1 $. Suppose that at that moment t=0 the labor force $ L_0 $ is 150 million, and the efficiency of labor $ E_0 $ is 35000.
#
# It is straightforward to calculate the state of the economy at time 0. Because the economy is on its balanced growth path, its capital-output ratio K/Y is equal to the balanced-growth path capital-output ratio (K/Y)*:
#
# >(4.22) $ \frac{K_0}{Y_0} = \left( \frac{K}{Y} \right)^* = \frac{s}{n+g+\delta} = \frac{0.2}{0.01 + 0.01 + 0.03} = 4 $
#
# And with an efficiency-of-labor value $ E_0 = 35000 $, output per worker at time zero is:
#
# >(4.23) $ \frac{Y_0}{L_0} = \left( \frac{K_0}{Y_0} \right)^{\theta} E_0 = 4^1 (35000) = 140000 $
#
# Since the economy is on its balanced growth path, the rate of growth of output per worker is equal to the rate of growth of efficiency per worker. Since the efficiency of labor is growing at 1% per year, we can calculate what output per worker would be at any future time t should the parameters describing the economy remain the same:
#
# >(4.24) $ \left(\frac{Y_t}{L_t}\right)_{ini} = (140000)e^{0.01t} $
#
# where the subscript "ini" tells us that this value belongs to an economy that retains its initial parameter values into the future. Thus 69 years into the future, at t=69:
#
# >(4.25) $ \left(\frac{Y_{69}}{L_{69}}\right)_{ini} = (140000)e^{(0.01)(69)} = (140000)(1.9937) = 279120 $
#
# Now let us consider an alternative scenario in which output per worker is the same in year 0 but in which the efficiency-of-labor growth rate g is higher. Suppose $ g_{alt} = g_{ini} + Δg $, with the subscript "alt" reminding us that this parameter or variable belongs to the alternative scenario, just as "ini" reminds us of the initial scenario or set of values. How do we forecast the growth of the economy in an alternative scenario—in this case, one in which $ Δg=0.02 $?
#
# The first thing to do is to calculate the balanced growth path steady-state capital-output ratio in this alternative scenario. Thus we calculate:
#
# >(4.26) $ \left( \frac{K}{Y} \right)_{alt}^* = \frac{s}{n + g_{ini} + Δg + δ} = \frac{0.20}{0.01 + 0.01 + 0.02 + 0.03} = \frac{0.20}{0.07} = 2.857 $
#
# The steady-state balanced growth path capital-output ratio is much lower in the alternative scenario than it was in the initial scenario: 2.857 rather than 4. The capital-output ratio, of course, does not drop instantly to its new steady-state value. It takes time for the transition to occur.
#
# While the transition is occurring, the efficiency of labor in the alternative scenario is growing at not 1% but 3% per year. We can thus calculate the alternative scenario balanced growth path value of output per worker as:
#
# >(4.27) $ \left(\frac{Y_t}{L_t}\right)_{alt}^* = \left[ \left( \frac{K}{Y} \right)_{alt}^{*} \right]^{\theta} E_0 e^{(0.01+0.02)t} $
#
# And in the 69th year this will be:
#
# >(4.28) $ \left(\frac{Y_{69}}{L_{69}}\right)_{alt}^* = (2.857)(35000) e^{(0.03)(69)} = 792443 $
#
# How good would this balanced growth path value be as an estimate of the actual behavior of the economy? We know that a Solow growth model economy closes a fraction (1−α)(n+g+δ) of the gap between its current position and its steady-state balanced growth path capital-output ratio each period. For our parameter values (1−α)(n+g+δ)=0.035. That gives us about 20 years as the period needed to converge halfway to the balanced growth path. 69 years is thus about 3.5 such halvings of the gap—meaning that the economy will close 9/10 of the way. Thus assuming the economy is on its alternative scenario balanced growth path in year 69 is not a bad assumption.
#
# But what if we want to calculate the estimate exactly, without assuming full convergence? 820752.
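That exact figure can be reproduced by tracking the capital-output ratio's gradual convergence rather than assuming it has already reached its new steady state. This is a minimal sketch assuming the standard Solow convergence result that K/Y closes the gap to its balanced-growth value at rate (1 − α)(n + g + δ); small rounding differences mean the computed number lands near, not exactly on, 820752.

```python
import math

# Parameters of the alternative scenario
alpha = 0.5
n, g_alt, delta = 0.01, 0.03, 0.03
s, E0, t = 0.20, 35_000, 69
theta = alpha / (1 - alpha)          # = 1

ky_star = s / (n + g_alt + delta)    # alternative steady-state K/Y = 2.857
ky_0 = 4.0                           # economy starts on the initial balanced path

# K/Y converges to its steady state at rate (1 - alpha)(n + g + delta)
rate = (1 - alpha) * (n + g_alt + delta)
ky_t = ky_star + (ky_0 - ky_star) * math.exp(-rate * t)

y_per_worker = ky_t ** theta * E0 * math.exp(g_alt * t)
print(round(ky_t, 3))        # ≈ 2.959 years' worth of output, as cited later in the text
print(round(y_per_worker))   # ≈ 820,800, within rounding of the text's 820752
```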
#
# The takeaways are three:
#
# For these parameter values, 69 years is definitely long enough to justify the assumption that the economy has converged to its Solow-model balanced growth path. One year, no. Ten years, no. Sixty-nine years, yes.
#
# Shifts in the growth rate g of the efficiency of labor do, over time, deliver enormous differentials in output per worker across scenarios.
#
# The higher-efficiency-of-labor economy is, in a sense, a less capital-intensive economy: only 2.959 years' worth of current production is committed to and tied up in the economy's capital stock in the alternative scenario, while 4 years' worth was tied up in the initial scenario. But the reduction in output per worker generated by a lower capital-output ratio is absolutely swamped by the faster growth of the efficiency of labor, and thus by the much greater value of the efficiency of labor that the alternative scenario has reached by the 69th year.
#
#
# #### <font color="000000"> 4.2.5. Shifts in the Saving Rate s </font>
#
# **4.2.5.1. The Most Common Policy and Environment Shock**: Shifts in labor force growth rates do happen: changes in immigration policy, the coming of cheap and easy contraception (or, earlier, widespread female literacy), or increased prosperity and expected prosperity that trigger "baby booms" can all have powerful and persistent effects on labor force growth down the pike. Shifts in the growth rate of labor efficiency happen as well: economic policy disasters and triumphs, countless forecasted "new economies" and "secular stagnations", and the huge economic shocks that were the first and second Industrial Revolutions—the latter inaugurating that global era of previously unimagined increasing prosperity we call modern economic growth—push an economy's labor efficiency growth rate g up or down and keep it there.
#
# Nevertheless, the most frequent sources of shifts in the parameters of the Solow growth model are shifts in the economy’s saving-investment rate. The rise of politicians eager to promise goodies—whether new spending programs or tax cuts—to voters induces large government budget deficits, which can be a persistent drag on an economy’s saving rate and its rate of capital accumulation. Foreigners become alternately overoptimistic and overpessimistic about the value of investing in our country, and so either foreign saving adds to, or foreign capital flight subtracts from, our own saving-investment rate. Changes in households’ fears of future economic disaster, in households’ access to credit, or in any of numerous other factors change the share of household income that is saved and invested. Changes in government tax policy may push after-tax returns up enough to call forth additional savings, or down enough to make saving seem next to pointless. Plus rational or irrational changes in optimism or pessimism—what John Maynard Keynes labelled the "animal spirits" of individual entrepreneurs, individual financiers, or bureaucratic committees in firms or banks or funds—all can and do push an economy's saving-investment rate up and down.
#
#
#
# **4.2.5.2. Analyzing a Shift in the Saving Rate s**: What effects do changes in saving rates have on the balanced-growth path levels of Y/L?
#
# The higher the share of national product devoted to saving and gross investment—the higher is s—the higher will be the economy’s balanced-growth capital-output ratio s/(n + g + δ). Why? Because more investment increases the amount of new capital that can be devoted to building up the average ratio of capital to output. Double the share of national product spent on gross investment, and you will find that you have doubled the economy’s capital intensity, or its average ratio of capital to output.
#
# As before, the equilibrium will be that point at which the economy’s savings effort and its investment requirements are in balance so that the capital stock and output grow at the same rate, and so the capital-output ratio is constant. The savings effort of society is simply sY, the amount of total output devoted to saving and investment. The investment requirements are the amount of new capital needed to replace depreciated and worn-out machines and buildings, plus the amount needed to equip new workers who increase the labor force, plus the amount needed to keep the stock of tools and machines at the disposal of more efficient workers increasing at the same rate as the efficiency of their labor.
#
# >(4.29) $sY = (n+g+δ)K $
#
# And so an increase in the saving rate s will, holding output Y constant, call forth a proportional increase in the capital stock at which savings effort and investment requirements are in balance: increase the saving-investment rate, and you increase the balanced-growth path capital-output ratio in the same proportion:
#
# >(4.30) $ \left(\frac{K}{Y}\right)_{ini}^* = \frac{s_{ini}}{n+g+δ} $
#
# >(4.31) $ \left(\frac{K}{Y}\right)_{alt}^* = \frac{s_{ini}+Δs}{n+g+δ} $
#
# >(4.32) $ \left(\frac{K}{Y}\right)_{alt}^* - \left(\frac{K}{Y}\right)_{ini}^* = \frac{Δs}{n+g+δ} $
#
# with, once again, balanced growth path output per worker amplified or damped by the dependence of output per worker on the capital-output ratio:
#
# >(4.33) $ \left(\frac{Y}{L}\right)^* = \left(\frac{K}{Y}\right)^{*\theta} E $
#
#
# **4.2.5.3. Analyzing a Shift in the Saving-Investment Rate: An Example**: To see how an increase in the economy’s saving rate s changes the balanced-growth path for output per worker, consider an economy in which the parameter $ \theta = 2 $ (and $ α = 2/3 $), the rate of labor-force growth n is 1 percent per year, the rate of labor-efficiency growth g is 1.5 percent per year, and the depreciation rate δ is 3.5 percent per year.
#
# Suppose that the saving rate s, which had been 18 percent, suddenly and permanently jumped to 24 percent of output.
#
# Before the increase in the saving rate, when s was 18 percent, the balanced-growth equilibrium capital-output ratio was:
#
# >(4.34) $ \left(\frac{K}{Y}\right)_{ini}^* = \frac{s_{ini}}{n+g+δ} = \frac{0.18}{0.06} = 3 $
#
# After the increase in the saving rate, the new balanced-growth equilibrium capital-output ratio will be:
#
# >(4.35) $ \left(\frac{K}{Y}\right)_{alt}^* = \frac{s_{ini} + {\Delta}s}{n+g+δ} = \frac{0.24}{0.06} = 4 $
#
# We can see, with a value of $ \theta = 2 $, that balanced-growth path output per worker after the jump in the saving rate is higher by a factor of $ (4/3)^2 = 16/9 $, or fully 78 percent higher.
#
# Just after the increase in saving has taken place, the economy is still on its old balanced-growth path. But as decades and generations pass, the economy converges to its new balanced-growth path, where output per worker is not 9 but 16 times the efficiency of labor. The jump in capital intensity makes an enormous difference for the economy’s relative prosperity.
#
# Note that this example has been constructed to make the effects of capital intensity on relative prosperity large: The high value for $ \theta $ means that differences in capital intensity have large and powerful effects on output-per-worker levels.
#
# But even here, the shift in saving and investment does not permanently raise the economy’s growth rate. After the economy has settled onto its new balanced-growth path, the growth rate of output per worker returns to the same 1.5 percent per year that is g, the growth rate of the efficiency of labor.
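The arithmetic of this example is compact enough to verify directly. Here is a minimal sketch of equations (4.34), (4.35), and the resulting output-per-worker gain:

```python
# Saving-rate example: theta = 2 (alpha = 2/3), n = 1%, g = 1.5%, delta = 3.5%
n, g, delta = 0.01, 0.015, 0.035
theta = 2.0

def ky_star(s):
    """Balanced-growth capital-output ratio s / (n + g + delta)."""
    return s / (n + g + delta)

ky_ini = ky_star(0.18)                # 3.0, eq. (4.34)
ky_alt = ky_star(0.24)                # 4.0, eq. (4.35)

# Balanced-growth output per worker scales with (K/Y)*^theta, eq. (4.33)
gain = (ky_alt / ky_ini) ** theta     # (4/3)^2 = 16/9, about 78 percent higher
print(round(ky_ini, 2), round(ky_alt, 2), round(gain, 2))   # 3.0 4.0 1.78
```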
#
#
#
#
# ## <font color="880000"> Lecture Notes: Using the Solow Growth Model </font>
#
# <img src="https://tinyurl.com/20181029a-delong" width="300" style="float:right" />
#
# * Ask me two questions…
# * Make two comments…
# * Further reading…
#
# <br clear="all" />
#
# ----
#
# ### The readings this week consist of six jupyter notebook files:
#
# 1. Solow Model Intro: <http://datahub.berkeley.edu/user-redirect/interact?account=braddelong&repo=history-of-economic-growth-theory-readings&branch=main&path=heg-solow-1-intro.ipynb>
#
# 2. Solow Model Basics: <http://datahub.berkeley.edu/user-redirect/interact?account=braddelong&repo=history-of-economic-growth-theory-readings&branch=main&path=heg-solow-2-basics.ipynb>
#
# 3. Solow Model Growing: <http://datahub.berkeley.edu/user-redirect/interact?account=braddelong&repo=history-of-economic-growth-theory-readings&branch=main&path=heg-solow-3-growing.ipynb>
#
# 4. Solow Model Using: <http://datahub.berkeley.edu/user-redirect/interact?account=braddelong&repo=history-of-economic-growth-theory-readings&branch=main&path=heg-solow-4-using.ipynb>
#
# 5. Malthus Model: <http://datahub.berkeley.edu/user-redirect/interact?account=braddelong&repo=history-of-economic-growth-theory-readings&branch=main&path=heg-malthus-5-preindustrial.ipynb>
#
# 6. Innovation Model: <http://datahub.berkeley.edu/user-redirect/interact?account=braddelong&repo=history-of-economic-growth-theory-readings&branch=main&path=heg-jones-6-innovation.ipynb>
#
#
#
#
#
#
# ----
| heg-solow-4-using.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: doingud
# language: python
# name: doingud
# ---
# +
import param as pm
import panel as pn
import math
import hvplot.pandas
import pandas as pd
import numpy as np
import plotly
import plotly.express as px
pn.extension('plotly')
import holoviews as hv
import bokeh
# +
from typing import Union
import plotly.express as px
import plotly
class SineWave(pm.Parameterized):
'''
A Sinusoid object, designed to be as convenient
as making a Sine function on Desmos, with the additional
functionality of being rotatable.
'''
y_intercept = pm.Number(0, label="Y - Intercept")
amplitude = pm.Number(1, label="Amplitude")
period = pm.Number(10, label="Period (Cycle Per Unit)")
plot_range = pm.Number(500, label="Plot Range")
rotation = pm.Number(0.7, label="Rotation (In Radians):")
def __init__(self,
y_intercept: float = 0,
amplitude: float = 1.0,
period: float = 10.0,
plot_range: int = 500,
rotation: float = 0.7,
plot_method: str = "plotly"):
super().__init__()
'''
Initializing the parameters.
Defaults are given if no arguments are provided.
Also needed to check if given plot style is valid.
'''
if plot_method not in ("plotly", "hvplot"):
raise Exception("Please provide a plot method. (Enter 'plotly' or 'hvplot')")
else:
self.plot_method = plot_method
self.y_intercept = y_intercept
self.amplitude = amplitude
self.period = period
self.plot_range = plot_range
self.rotation = rotation
def show_controls(self):
'''
Shows all the adjustable Parameters to modify the sine wave.
'''
return pn.Column(
self.param.y_intercept,
self.param.amplitude,
self.param.period,
self.param.rotation
)
@pm.depends('y_intercept',
'amplitude',
'period',
'plot_range',
'rotation')
def xy_cols(self):
'''
If we just want the two columns of the x and y
values, we call this function. Very useful for debugging
and if SineWave is to be used within another function.
'''
a = self.rotation
b = self.period
c = self.amplitude
t = np.linspace(0, self.plot_range, self.plot_range + 1) # The limit of the Axis, since we're plotting discrete points.
x = np.cos(a)*(t) - np.sin(a) * np.sin(t/b*2*math.pi) * c
x *= self.plot_range / x[-1]
y = np.sin(a)*(t) + (np.cos(a)*np.sin(t/b*2*math.pi) * c) + self.y_intercept
y *= self.plot_range / y[-1]
return x, y
def show_linspace(self):
'''
The formula that we use to create the sinewave actually modifies the linspace
array we feed it, rather than just reading the values in the linspace array and
deriving an array of y-values from it.
So the result is we do get a Sinewave that is made up of the number of points
we provided, but is crunched.
To compensate for this, we multiply the original provided linspace range to ensure
we get values that reach up to the plot range we want, and then decimate the x and y cols
to decrease the resolution.
'''
return np.linspace(0, self.plot_range, self.plot_range + 1)
def data_frame(self):
'''
Same purpose as xy_cols, but if we want the data as a Pandas
Dataframe.
'''
x_col, y_col = self.xy_cols()
sine_dataframe = pd.DataFrame(zip(x_col, y_col), columns=['x', 'y'])
return sine_dataframe
@pm.depends('y_intercept',
'amplitude',
'period',
'plot_range',
'rotation')
def plot(self):
'''
Asks for the dataframe so it can be plotted.
'''
sine_plot = self.data_frame()
if self.plot_method == "plotly":
return px.line(sine_plot, x="x", y=['y'])
else:
return sine_plot.hvplot.line(x='x', y='y')
def full_view(self):
return pn.Row(
pn.Column(self.show_controls(),
self.plot,)
)
# +
class BSMargin(pm.Parameterized):
buy_adjustment = pm.Number(10, bounds=(-200, 200))
sell_adjustment = pm.Number(0, bounds=(-200, 200))
y_intercept = pm.Number(200, bounds=(0, 400))
slope = pm.Number(1, bounds=(0, 1))
def __init__(self) -> None:
super().__init__()
self.bs_sine = SineWave(y_intercept=200, period=10, amplitude=3)
def x_axis(self):
x = np.linspace(0, 500, 501)
return x
def function_generator(self):
# Making buy and sell functions
buy_price_function = np.poly1d(
[1*self.slope, self.y_intercept + self.buy_adjustment])
sell_price_function = np.poly1d(
[1*self.slope, self.y_intercept + self.sell_adjustment])
return buy_price_function, sell_price_function
def buy_area(self):
x = self.x_axis()
# Just grabbing the buy function
buy_func, sell_func = self.function_generator()
buy_y = buy_func(x)
buy_plot = pd.DataFrame(zip(x, buy_y), columns=['supply', 'buy_price'])
return px.area(buy_plot, x="supply", y=['buy_price'])
def sell_area(self):
x = self.x_axis()
# Just grabbing the buy function
buy_func, sell_func = self.function_generator()
sell_y = sell_func(x)
sell_plot = pd.DataFrame(zip(x, sell_y), columns=[
'supply', 'sell_price'])
return px.area(sell_plot, x="supply", y=['sell_price'])
def area_difference(self):
return self.buy_area() * self.sell_area()
def render_sine(self):
return self.bs_sine.plot()
def full_view(self):
# Generating x axis
x = self.x_axis()
# Making y axes for both buy and sell
buy_func, sell_func = self.function_generator()
buy_y = buy_func(x)
sell_y = sell_func(x)
sine_x, sine_y = self.bs_sine.xy_cols()
# print(sine_x)
# print(sine_y)
# --------------------------------------------
# Making the Dataframe for both the buy and sell functions
combined_plot = pd.DataFrame(zip(x, buy_y, sell_y, sine_y), columns=[
'supply', 'buy_price', 'sell_price', 'sine_y'])
plotly_element = px.line(combined_plot, x="supply", y=[
'buy_price', 'sell_price', 'sine_y'], width=500, height=500)
final_pane = pn.pane.Plotly(
plotly_element, config={'responsive': True})
view = pn.Row(
pn.Column(
self.param.buy_adjustment,
self.param.sell_adjustment,
self.param.y_intercept,
self.param.slope,
),
final_pane,
)
return view
# -
x = BSMargin()
gh = SineWave
b = SineWave(y_intercept=0, period=10, amplitude=1.0001, rotation=0.78539816339, plot_range=50)
b.plot()
# +
d = SineWave(y_intercept=205,
period=100,
amplitude=1.00001,
rotation=0.0,
plot_range=5000,
plot_method="hvplot")
#d.show_linspace()
# -
len(d.xy_cols()[0])
d.xy_cols()[1][-1]
d.full_view()
help(pm.Number)
| bonding_curve_studies/buy_sell_line.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0
# ---
# # Time Series DeepAR on My Life
# I am building a machine learning model to learn the patterns of my life
#
# I will then extend this project out to a public database of health records
#
# Modular code and all brooo
#
# > Here we go!
#
# ---
# +
# Importing the basics
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Importing the specials
from sklearn import preprocessing
from datetime import datetime
# %time
# -
# Functions I might use later
#date_rng = pd.date_range(start='5/24/2020', end='7/08/2020', freq='D')
###Timesavings_df2 = pd.read_csv('Aware-SagemakerS-ML/TimeSavingsCSV2.csv')
# +
#Creating Dataframe from CSV file
TimeSavings_df = pd.read_csv('TS ST.csv')
# Indexing df with the time stamp
TimeSavings_df = TimeSavings_df.set_index('Date')
#Dropping "Miles Ran" Column for now
TimeSavings_df.drop('Miles Ran', axis=1, inplace=True)
#Splitting dataframe into recent and long-term versions to test only 2 columns
#Printing a visual of the dataframe
print(TimeSavings_df.head(5))
# %time
# -
#Scaling all values to be between 0 and 1 so I can create a covariance matrix
TimeSavings_df = TimeSavings_df.apply(lambda x: x/x.max())
corr = TimeSavings_df.corr()
corr.style.background_gradient(cmap='RdYlGn', axis=None).set_precision(2)
# +
#Okay so these are the correlations on the same day,
#I wonder what it would look like on the same week?
#To Be Continued...
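Following up on that question: one way to look at same-week relationships is to resample the daily series to weekly means before correlating. This is a minimal sketch with a stand-in DataFrame; the column names are placeholders rather than the actual CSV's, and the real notebook would first convert its 'Date' index with pd.to_datetime.

```python
import numpy as np
import pandas as pd

# Small stand-in frame with a daily DatetimeIndex (the real notebook would
# reuse TimeSavings_df after converting its 'Date' index with pd.to_datetime)
idx = pd.date_range("2020-05-24", "2020-07-08", freq="D")
rng = np.random.default_rng(0)
df = pd.DataFrame({"reading": rng.random(len(idx)),
                   "coding": rng.random(len(idx))}, index=idx)

# Aggregate to weekly means, then correlate the week-level values
weekly = df.resample("W").mean()
weekly_corr = weekly.corr()
print(weekly_corr)
```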
| Activities Tracking ML/Time Series on My D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import jieba
filename = "haizi.txt"
with open(filename,encoding='gbk') as f:
mytext = f.read()
mytext = " ".join(jieba.cut(mytext))
from wordcloud import WordCloud
wordcloud = WordCloud(font_path="simsun.ttf").generate(mytext)
# %pylab inline
import matplotlib.pyplot as plt
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
| worldcloud_zh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# ## Environment Exploration
#
# The use case explored in this lab is about resource allocation. The idea is to train a reinforcement learning agent that learns to make optimal decisions, acting as a resource manager with the goal of minimizing the average job execution slowdown.
#
# In this notebook we will explore the reinforcement learning environment used to simulate the resource allocation use case.
# The environment exposes to the agent:
# - Five job slots, each one representing a job waiting to be scheduled to run at any point in time. The job representation includes the amount of each resource needed (horizontal axis) and the job duration (vertical axis).
# - One backlog queue, representing the number of jobs waiting to be placed in one of the job slots.
# - Two resources to execute jobs.
#
# The environment accepts the following integer values as commands:
# - Job slot index (0,1,2,3,4 in our case), which attempts to place the job at the corresponding job slot given by its index into the resource slot for execution.
# - Do nothing (5 in our case).
#
# The observed reward, at each time step, corresponds to the (negative) cumulative average job slowdown observed so far. Being negative makes the reinforcement learning objective easier to compute: maximizing the sum of expected rewards corresponds to minimizing the cumulative average job slowdown.
#
# The agent is also penalized when there are no more job slots available and jobs start to be placed into the job queue. But it is not penalized when it attempts to execute an invalid command (e.g. attempting to place a job to be executed that doesn’t fit the resource slots). And when it successfully places a job to be executed in the resource slots, it observes a reward of 0.
#
# I also suggest you refer to the [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/01/deeprm_hotnets16.pdf) from which this environment is based, where you find important details about its design.
# The cell below allows you to interact with the environment, step-by-step. The allowed commands were modified to make the interaction easier: 0 corresponds to do nothing, and 1 to 5 to the corresponding job slots.
#
# At the end of the simulation, you see the number of steps (commands executed) and the accumulated average job slowdown in the simulation cycle.
# +
import numpy as np
from IPython import display
import matplotlib.pyplot as plt
import time
from random import randint
import sys, os
sys.path.insert(0, os.path.join(os.getcwd(), '../agent_training/training_scripts/environment'))
from environment import Parameters, Env
pa = Parameters()
pa.simu_len = 50 # length of the simulation (corresponds to the number of steps executed in the resource slots)
pa.num_ex = 1 # number of job sets (unique sequences of jobs using across simulation cycles)
# default values for other parameters
pa.episode_max_length = 200 # enforcing an artificial terminal
pa.max_track_since_new = 10 # track how many time steps since last new jobs
pa.job_num_cap = 40 # maximum number of distinct colors in current work graph
pa.new_job_rate = 0.7 # lambda in new job arrival Poisson Process
"""
default values for other parameters
please notice that the visualization function is not prepared to render the environment with other values
than the default values below
"""
pa.num_res = 2 # number of resources in the system
pa.num_nw = 5 # maximum allowed number of work in the queue
pa.time_horizon = 20 # number of time steps in the graph
pa.max_job_len = 15 # maximum duration of new jobs
pa.res_slot = 10 # maximum number of available resource slots
pa.max_job_size = 10 # maximum resource request of new work
pa.backlog_size = 60 # backlog queue size
pa.compute_dependent_parameters()
env = Env(pa, render=False, repre='image')
img = env.plot_state_img()
plt.figure(figsize = (16,16))
plt.grid(color='w', linestyle='-', linewidth=0.5)
plt.text(0, -2, "EXECUTION QUEUE")
plt.text(-8, 10, "RESOURCE #1")
plt.text(-8, 30, "RESOURCE #2")
plt.text(14, -2, "JOB SLOT #1")
plt.text(26, -2, "JOB SLOT #2")
plt.text(38, -2, "JOB SLOT #3")
plt.text(50, -2, "JOB SLOT #4")
plt.text(62, -2, "JOB SLOT #5")
plt.text(76, 20, "BACKLOG")
plt.imshow(img, vmax=1, cmap='CMRmap')
ax = plt.gca()
ax.set_xticks(np.arange(-.5, 100, 1))
ax.set_xticklabels([])
ax.set_yticks(np.arange(-.5, 100, 1))
ax.set_yticklabels([])
ax.tick_params(axis=u'both', which=u'both',length=0)
image = plt.imshow(img, vmax=1, cmap='CMRmap')
display.display(plt.gcf())
actions = []
rewards = []
done = False
s = 0
while not done:
while True:
key = input('Enter action (0 - 5) to step into the environment\n')
if key in ['0', '1', '2', '3', '4', '5']:
if key == '0':
a = 5
else:
a = int(key) - 1
break
else:
print('\nInvalid action')
actions.append(a)
obs, reward, done, info = env.step(a)
rewards.append(reward)
if a != 5:
print('Action executed: run job at queue#', a+1)
else:
print('Action executed: do nothing')
print('Reward obtained:', reward)
print('Total rewards:', sum(rewards))
s += 1
print('Step:', s)
img = env.plot_state_img()
image.set_data(img)
display.display(plt.gcf())
display.clear_output(wait=True)
print('End')
print('Steps:', s, '\nTotal Average Job Slowdown:', round(-sum(rewards)))
# -
# The cell below shows you how a random policy behaves in this environment.
# +
import numpy as np
from IPython import display
import matplotlib.pyplot as plt
import time
from random import randint
import sys, os
sys.path.insert(0, os.path.join(os.getcwd(), '../agent_training/training_scripts/environment'))
from environment import Parameters, Env
pa = Parameters()
pa.simu_len = 50
pa.num_ex = 1
pa.compute_dependent_parameters()
env = Env(pa, render=False, repre='image')
img = env.plot_state_img()
plt.figure(figsize = (16,16))
plt.grid(color='w', linestyle='-', linewidth=0.5)
plt.text(0, -2, "EXECUTION QUEUE")
plt.text(-8, 10, "RESOURCE #1")
plt.text(-8, 30, "RESOURCE #2")
plt.text(14, -2, "JOB SLOT #1")
plt.text(26, -2, "JOB SLOT #2")
plt.text(38, -2, "JOB SLOT #3")
plt.text(50, -2, "JOB SLOT #4")
plt.text(62, -2, "JOB SLOT #5")
plt.text(76, 20, "BACKLOG")
plt.imshow(img, vmax=1, cmap='CMRmap')
ax = plt.gca()
ax.set_xticks(np.arange(-.5, 100, 1))
ax.set_xticklabels([])
ax.set_yticks(np.arange(-.5, 100, 1))
ax.set_yticklabels([])
ax.tick_params(axis=u'both', which=u'both',length=0)
image = plt.imshow(img, vmax=1, cmap='CMRmap')
display.display(plt.gcf())
actions = []
rewards = []
done = False
s=0
txt1 = plt.text(0, 45, '')
txt2 = plt.text(0, 47, '')
while not done:
a = randint(0, pa.num_nw)
actions.append(a)
obs, reward, done, info = env.step(a)
rewards.append(reward)
s += 1
txt1.remove()
txt2.remove()
txt1 = plt.text(0, 44, 'STEPS: ' + str(s), fontsize=14)
txt2 = plt.text(0, 46, 'TOTAL AVERAGE JOB SLOWDOWN: ' + str(round(-sum(rewards))), fontsize=14)
img = env.plot_state_img()
image.set_data(img)
display.display(plt.gcf())
display.clear_output(wait=True)
# -
# In the cell below, you see how a heuristics-based policy behaves in this environment. In this case, we are using the shortest-job-first heuristic.
# +
import numpy as np
from IPython import display
import matplotlib.pyplot as plt
import time
from random import randint
import sys, os
sys.path.insert(0, os.path.join(os.getcwd(), '../agent_training/training_scripts/environment'))
from environment import Parameters, Env
def get_sjf_action(machine, job_slot):
sjf_score = 0
act = len(job_slot.slot) # if no action available, hold
for i in range(len(job_slot.slot)):
new_job = job_slot.slot[i]
if new_job is not None: # there is a pending job
avbl_res = machine.avbl_slot[:new_job.len, :]
res_left = avbl_res - new_job.res_vec
if np.all(res_left[:] >= 0): # enough resource to allocate
tmp_sjf_score = 1 / float(new_job.len)
if tmp_sjf_score > sjf_score:
sjf_score = tmp_sjf_score
act = i
return act
pa = Parameters()
pa.simu_len = 50
pa.num_ex = 1
pa.compute_dependent_parameters()
env = Env(pa, render=False, repre='image')
img = env.plot_state_img()
plt.figure(figsize = (16,16))
plt.grid(color='w', linestyle='-', linewidth=0.5)
plt.text(0, -2, "EXECUTION QUEUE")
plt.text(-8, 10, "RESOURCE #1")
plt.text(-8, 30, "RESOURCE #2")
plt.text(14, -2, "JOB SLOT #1")
plt.text(26, -2, "JOB SLOT #2")
plt.text(38, -2, "JOB SLOT #3")
plt.text(50, -2, "JOB SLOT #4")
plt.text(62, -2, "JOB SLOT #5")
plt.text(76, 20, "BACKLOG")
plt.imshow(img, vmax=1, cmap='CMRmap')
ax = plt.gca()
ax.set_xticks(np.arange(-.5, 100, 1))
ax.set_xticklabels([])
ax.set_yticks(np.arange(-.5, 100, 1))
ax.set_yticklabels([])
ax.tick_params(axis=u'both', which=u'both',length=0)
image = plt.imshow(img, vmax=1, cmap='CMRmap')
display.display(plt.gcf())
actions = []
rewards = []
done = False
s = 0
txt1 = plt.text(0, 45, '')
txt2 = plt.text(0, 47, '')
while not done:
a = get_sjf_action(env.machine, env.job_slot)
actions.append(a)
obs, reward, done, info = env.step(a)
rewards.append(reward)
s += 1
txt1.remove()
txt2.remove()
txt1 = plt.text(0, 44, 'STEPS: ' + str(s), fontsize=14)
txt2 = plt.text(0, 46, 'TOTAL AVERAGE JOB SLOWDOWN: ' + str(round(-sum(rewards))), fontsize=14)
img = env.plot_state_img()
image.set_data(img)
display.display(plt.gcf())
display.clear_output(wait=True)
| environment_exploration/environment_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Positional and Keyword arguments
def my_func(a, b, c):
print("a={0}, b={1}, c={2}".format(a, b, c))
my_func(1, 2, 3)
my_func(1, 2)  # raises TypeError: my_func() missing 1 required positional argument: 'c'
def my_func(a, b=2, c=3):
print("a={0}, b={1}, c={2}".format(a,b,c))
my_func(10, 20, 30)
my_func(10, 20)
my_func(10)
my_func(1)
def my_func(a, b=2, c=3):
print("a={0}, b={1}, c={2}".format(a, b, c))
my_func(c=30, b=20, a=10)
my_func(10, c=30, b=20)
my_func(10, c=30)
| my_classes/FunctionParameters/PositionalKeywordArguments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # In-Class Coding Lab: Conditionals
#
# The goals of this lab are to help you to understand:
#
# - Relational and Logical Operators
# - Boolean Expressions
# - The if statement
# - Try / Except statement
# - How to create a program from a complex idea.
#
# # Understanding Conditionals
#
# Conditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
number = int(input("Enter an integer: "))
if number % 2 == 0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
# Make sure to run the cell more than once, entering both odd and even integers to try it out. After all, we don't know if the code really works until we test both options.
#
# On line 2, you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.
#
# The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`.
#
#
# ## Now Try It
#
# Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.
#
# To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
num = int(input("Enter an integer: "))
if num >= 0:
    print("Zero or Positive")
else:
    print("Negative")
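# The lab goals above also list the try / except statement, which the examples so far haven't shown. Here is a short sketch of guarding `int()` against non-numeric input; the value is hard-coded so the cell runs without interaction:

```python
text = "forty-two"  # imagine this came from input()
try:
    number = int(text)  # raises ValueError for non-numeric text
    print("You entered", number)
except ValueError:
    print("'%s' is not an integer!" % text)
```

# With `text = "forty-two"`, `int()` raises `ValueError`, so the `except` branch runs and prints the error message instead of crashing the program.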
# # Rock, Paper Scissors
#
# In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.
#
# The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before that it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
# ## Randomizing the Computer's Selection
# Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.
#
#
# To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html)
# It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
# Run the cell a couple of times. It should make a random selection from `choices` each time you run it.
#
# How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything!
# ## Getting input and guarding against stupidity
#
# With step one out of the way, it's time to move on to step 2: getting input from the user.
# +
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
# -
# This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem.
#
# We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play.
#
# ### In operator
#
# The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
#
# TODO Try these:
'rock' in choices, 'mike' in choices
# ### You Do It!
# Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
# +
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # TODO replaced with a membership test
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
# -
# ## Playing the game
#
# With the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:
#
# - rock beats scissors (rock smashes scissors)
# - scissors beats paper (scissors cuts paper)
# - paper beats rock (paper covers rock)
#
# So for example:
#
# - If you choose rock and the computer chooses paper, you lose because paper covers rock.
# - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.
# - If you both choose rock, it's a tie.
#
# ## It's too complicated!
#
# It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.
#
# One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
#
#
# +
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
# -
# Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended.
#
# ## Paper: Making the program a bit more complex.
#
# With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.
#
# At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current if with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder.
#
#
# ### You Do It
#
# In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
# +
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'paper'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("TODO")
    elif (you == 'paper' and computer == 'scissors'):
        print("TODO")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
# -
# ## The final program
#
# With the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise.
#
# Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and should also include the appropriate output messages.
#
#
# +
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("TODO")
    elif (you == 'paper' and computer == 'scissors'):
        print("TODO")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
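# -
# Once your `if...elif` ladder works, here is a more compact alternative for comparison (a sketch, not part of the required lab solution): a dictionary mapping each choice to the choice it beats.

```python
import random

choices = ['rock', 'paper', 'scissors']
# each key beats the value it maps to
beats = {'rock': 'scissors', 'scissors': 'paper', 'paper': 'rock'}

def play(you, computer):
    if you == computer:
        return 'tie'
    return 'win' if beats[you] == computer else 'lose'

print(play('rock', 'scissors'))  # win
print(play('rock', 'paper'))     # lose
print(play('rock', 'rock'))      # tie
```

# Against a random opponent you could then call `play(you, random.choice(choices))`. The dictionary makes the win condition a single lookup instead of six `elif` branches.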
| content/lessons/04/Class-Coding-Lab/CCL-Conditionals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1 Saving the trained ML model
#
# ### 1.1 Saving the model:
#
# saver = tf.train.Saver()
# saver.save(session, path_to_your_model)
# By default, all variables of the graph are saved to the file. If you do not need all of them, you can pass just the variables to be saved to tf.train.Saver() as a Python dictionary, e.g.
#
# v1 = tf.get_variable("v1", [3], initializer = tf.zeros_initializer)
# v2 = tf.get_variable("v2", [5], initializer = tf.zeros_initializer)
# saver = tf.train.Saver({"v2": v2})
#
# Here, only v2 is saved.
# ### 1.2 Saving the model after a certain number of iterations
# In deep learning, the model is often saved only after a certain number of iterations (e.g. every 1000 steps). The **global_step** parameter records the iteration count.
#
# saver.save(sess, './modelname', global_step=1000)
# This model file is marked with the suffix "-1000".
#
# ### 1.3 Saving the model periodically
# In practice, the model is also often saved periodically, e.g. every two hours, keeping only the five most recent model files:
#
# tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=2)
# By default, TensorFlow keeps only the five most recent models. If you want to keep more, you can configure this with the **max_to_keep** parameter.
#
#
# ## 2 Loading the trained model:
# ### 2.1 Loading via import_meta_graph
# As discussed, the graph specifies which components (tensors, ops, variables and constants) a model is built from. At execution time, each session needs at least one graph, but we do not have to re-implement it every time: once the trained model has been saved, its graph already lies in the .meta file.
# We can restore the graph with **import_meta_graph**.
#
# To run computations on the graph, a **Session** is also needed. Inside the session we can retrieve the previously defined variables (k, b):
#
# with tf.Session() as sess:
# sess.run(init)
# saver = tf.train.import_meta_graph('./model_all/model.ckpt.meta')
# saver.restore(sess, tf.train.latest_checkpoint('./model_all'))
# k, b = sess.run([k,b])
#
# ### 2.2 Passing variables
# You can also pass a value in via the feed_dict dictionary to perform a prediction.
#
# with tf.Session() as sess:
# sess.run(init)
# saver = tf.train.import_meta_graph('./model_all/model.ckpt.meta')
# saver.restore(sess, tf.train.latest_checkpoint('./model_all'))
#             sess.run(predict_weight, feed_dict={x: 1.90})
#
# In the training part we defined the following variables for **predict_weight**:
#
# x = tf.Variable(0.) # define a value to feed
# mult_var = tf.multiply(k, x) # k * x
# predict_weight = tf.add(mult_var, b) # k * x + b
#
# ## 3 Model files
# For each model, four new files are created by default:
# * checkpoint
# * modelname.ckpt.data-00000-of-00001
# * modelname.ckpt.index
# * modelname.ckpt.meta
#
# ### 3.1 checkpoint
# The checkpoint file lists all files of the saved model and is maintained by tf.train.Saver().
# ### 3.2 model.ckpt.meta
# The file "model.ckpt.meta" stores the metadata of the graph, e.g. metadata of operations and tensors.
# ### 3.3 model.ckpt
# This stores the variable data in SSTable format (an SSTable is similar to a set of key-value pairs).
# for more:
# https://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/
| 2_store_ml_model/2_description_model_save.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Description
#
# ```
# "发动机数据":{
# "发动机状态":"1",
# "曲轴转速":"2",
# "燃料消耗率":"2"
# },
# ```
# ## Import packages and common functions
import Ipynb_importer
import pandas as pd
from public_fun import *
# ## Class body
class fun_07_02_04(object):
def __init__(self, data):
cf = [1, 2, 2]
cf_a = hexlist2(cf)
data = data[2:]
self.o = data[0:cf_a[-1]]
list_o = [
"发动机状态",
"曲轴转速",
"燃料消耗率",
]
self.oj = list2dict(self.o, list_o, cf_a)
self.oj2 = {'发动机数据':self.oj}
self.ol = pd.DataFrame.from_dict(self.oj,orient='index').T
self.pj = {
            "发动机状态":dict_list_replace("07_02_01_01", self.oj['发动机状态']),
"曲轴转速":fun_07_02_04.fun_2(self.oj['曲轴转速']),
"燃料消耗率":fun_07_02_04.fun_3(self.oj['燃料消耗率']),
}
self.pj2 = {'发动机数据':self.pj}
self.pl = pd.DataFrame.from_dict(self.pj,orient='index').T
self.next = data[cf_a[-1]:]
self.nextMark = data[cf_a[-1]:cf_a[-1]+2]
    # 02_04_02 Crankshaft speed (曲轴转速)
def fun_2(data):
data = data.upper()
if data == 'FFFE':
return "异常"
elif data == "FFFF":
return "无效"
else :
return hex2dec(data)
    # 02_04_03 Fuel consumption rate (燃料消耗率)
def fun_3(data):
data = data.upper()
if data == 'FFFE':
return "异常"
elif data == "FFFF":
return "无效"
else :
return hex2dec(data, k=0.01)
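# The helpers used above (`hexlist2`, `list2dict`, `hex2dec`, `dict_list_replace`) come from `public_fun` and are not shown in this notebook. A minimal sketch of `hex2dec`, under the assumption that it parses a hex string and applies an optional scale factor `k` (the real implementation may differ):

```python
# Hypothetical sketch of the public_fun helper hex2dec
def hex2dec(data, k=1):
    # int(s, 16) parses a hexadecimal string; k rescales the result
    return int(data, 16) * k

print(hex2dec("00FA"))          # 250
print(hex2dec("00FA", k=0.01))  # 2.5
```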
# ## Execute
# +
data = "0101030100FA00000B361706273C64011E82DA0065020101043451AA4E8437170627270500072035A801EAC74A060179100E010D0FC801013C01043A0700000000000000000009010100483C3B3C3A3B3B3C3A3B3B3B3A3B3B3B3A3B3B3B3B3B3A3B3C3B3B3C3B3C3A3B3C3B3A3B3B3B3B3B3A3C3B3C3A3B3B3B3A3B3B3B3A3B3B3B3B3B3A3B3C3B3A3B3B3C3A3B3B3B3A3B3B0801011706273C00900001900FFA10040FFA0FFA0FFA0FFA10040FFA1004100410040FFA0FC8100410041004100410041004100410040FFA10041004100410041004100410040FFA10040FFA0FFA0FFA10040FFA0FFA0FFA0FFA100410040FFA0FFA100410040FFA0FFA10040FFA0FFA0FFA10040FFA0FFA0FFA0FFA0FFA0FFA0FF00FFA0FFA0FFA10040FFA0FFA100410040FF010040FFA0FFA0FFA0FFA0FFA10040FFA0FFA0FFA100410040FFA0FF00FFA0FFA0FFA0FFA0FFA0FFA0FFA10040FFA0FFA10041004100410040FFA0FFA0FFA0FFA0FFA0FFA0FFA0FFA0FFA100410040FFA0FFA1004100410040FFA0FFA0FFA0FFA0FFA0FFA0FFA0FFA100E0FF00FFA0FFA0FFA0FFA100410040FFA100410040FFA10040FFA10040FFA100410040FFA10041004100410040FFAAB"
fun_07_02_04 = fun_07_02_04(data)
# -
# ## Results
fun_07_02_04.o
fun_07_02_04.oj
fun_07_02_04.ol
fun_07_02_04.pj
fun_07_02_04.pl
fun_07_02_04.next
fun_07_02_04.nextMark
| notebook/.ipynb_checkpoints/fun_07_02_04-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 2
# ## Question 1
# #### Consider the following 1-d dataset with 5 points $ X = \{-1, 1, 10, -0.5, 0 \} $, on which we are going to perform Gaussian density estimation. For the exercise below, you may use Python for plotting but all the calculations have to be done by hand.
# - Compute the Maximum Likelihood Estimate (MLE) of the mean and variance. For the variance, compute both the unbiased and biased
# versions. Comment on what you observe. In particular, how does the presence of an outlier affect your estimates?
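# The hand calculations can be sanity-checked with a short snippet (this only verifies the arithmetic; the estimates themselves are derived by hand above):

```python
import numpy as np

x = np.array([-1, 1, 10, -0.5, 0])
mu_mle = x.mean()                                        # MLE of the mean
var_biased = ((x - mu_mle) ** 2).mean()                  # divide by N
var_unbiased = ((x - mu_mle) ** 2).sum() / (len(x) - 1)  # divide by N - 1
print(round(mu_mle, 4), round(var_biased, 4), round(var_unbiased, 4))
# 1.9 16.84 21.05
```

# The single outlier 10 inflates both variance estimates; without it the variance drops to well under 1, as the plot below shows.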
# +
import numpy as np
import matplotlib.pyplot as plt
def gaussian(x, mu, var):
return (1.0 / np.sqrt(2 * np.pi * var)) * np.exp( - (x - mu)**2 / (2 * var) )
x = [-1, 1, 10, -0.5, 0]
rd = np.arange(-15, 15, 0.1)
l = sorted(set(np.append(rd, x)))
p = np.asarray(l)
plt.title('Gaussian distribution over $ x $')
for mu, var, note in [(1.9, 16.84, 'biased variance with outlier'),
(1.9, 21.05, 'unbiased variance with outlier'),
(-0.125, 0.546875, 'biased variance without outlier'),
(-0.125, 0.729167, 'unbiased variance without outlier')]:
plt.plot(p, gaussian(p, mu, var), '-o', markevery=[l.index(e) for e in x], label=r'$\mu=' + str(mu) + ', \sigma^{2}=' + str(var) + '$ (' + note + ')')
plt.legend(loc='center', bbox_to_anchor=(1.3, 0.5))
# -
# - Assume that you have a $ \mathcal{N}(0, 1) $ prior over the mean parameter and set the standard deviation $ \sigma^{2} = 1 $. Compute the posterior distribution of the mean parameter and plot both the prior and the posterior distributions. Comment on what you observe.
# +
ru = np.arange(-5, 5, 0.1)
plt.title('Gaussian distribution over $ \mu $')
for mu, var, note in [(0, 1, 'prior'),
(1.583333, 0.166667, 'posterior')]:
plt.plot(ru, gaussian(ru, mu, var), label=r'$\mu=' + str(mu) + ', \sigma^{2}=' + str(var) + '$ (' + note + ')')
plt.legend(loc='center', bbox_to_anchor=(1.3, 0.5))
# -
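# The posterior parameters used above follow from the conjugate Gaussian update (precisions add; the posterior mean is a precision-weighted combination). A quick numeric check with the known likelihood variance $ \sigma^{2} = 1 $ and the $ \mathcal{N}(0, 1) $ prior:

```python
import numpy as np

x = np.array([-1, 1, 10, -0.5, 0])
sigma2 = 1.0           # known variance of the likelihood
mu0, tau2 = 0.0, 1.0   # N(0, 1) prior over the mean

post_var = 1.0 / (1.0 / tau2 + len(x) / sigma2)         # 1 / (1 + n)
post_mean = post_var * (mu0 / tau2 + x.sum() / sigma2)  # precision-weighted
print(round(post_mean, 6), round(post_var, 6))  # 1.583333 0.166667
```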
# - Now suppose we change the prior over the mean parameter to $ \mathcal{N}(10, 1) $. Compute the new posterior distribution, plot it, and contrast it with what you observed previously.
# +
ru = np.arange(-15, 15, 0.1)
plt.title('Gaussian distribution over $ \mu $')
for mu, var, note in [(10, 1, 'prior'),
(3.25, 0.166667, 'posterior')]:
plt.plot(ru, gaussian(ru, mu, var), label=r'$\mu=' + str(mu) + ', \sigma^{2}=' + str(var) + '$ (' + note + ')')
plt.legend(loc='center', bbox_to_anchor=(1.3, 0.5))
# -
# - Suppose 2 more data points get added to your dataset: $$ X = \{-1, 1, 10, -0.5, 0, 2, 0.5 \}$$
# Using the same $ \mathcal{N}(0, 1) $ prior over the mean parameter, compute and plot the posterior. How does observing new data points affect the posterior?
# +
ru = np.arange(-5, 5, 0.1)
plt.title('Gaussian distribution over $ \mu $')
for mu, var, note in [(0, 1, 'prior'),
(1.5, 0.125, 'posterior')]:
plt.plot(ru, gaussian(ru, mu, var), label=r'$\mu=' + str(mu) + ', \sigma^{2}=' + str(var) + '$ (' + note + ')')
plt.legend(loc='center', bbox_to_anchor=(1.3, 0.5))
# -
# ## Question 2
# #### Generate 100 data points as follows: Draw $ x $ uniformly at random from $ \left[-100, 100\right] $. For each $ x $ draw $ t $ from $ \mathcal{N}\{f(x), 1\} $ where $ f(x) = 0.1 + 2x + x^{2} + 3x^{3} $. In order to fit this curve, we will make use of the following probabilistic model:
# $$ p(t | x, \textbf{w}, \beta)=\mathcal{N}(t | y(x,\textbf{w}), \beta^{-1}) $$
# where $ y(x, \textbf{w})=w_{0} + w_{1}x + w_{2}x^{2} + w_{3}x^{3} $
# - Perform MLE estimation of $ \textbf{w} $ and $ \beta $. You may use the `optimize` module from `scipy` for this task. Comment on how well $ \textbf{w} $ and $ \beta $ match the true parameters used to generate the data. How do the
# estimates change when you use 1000 or 10,000 data points for your estimates?
# +
from scipy.optimize import minimize
import random
import numpy as np
NUM_POINTS = 100
# generate x
xs = []
for _ in range(NUM_POINTS):
xs.append(random.randint(-100, 100))
# generate t
def fx(x):
return 0.1 + (2 * x) + (x**2) + (3 * (x**3))
ts = []
for i in range(NUM_POINTS):
t = np.random.normal(fx(xs[i]), 1, 1)
ts.append(t[0])
# The first polynomial model
def yxw_1(w, x):
return w[0] + w[1] * x + w[2] * x**2 + w[3] * x**3
# The second polynomial model
def yxw_2(w, x):
return w[0] + w[1] * x + w[2] * x**2 + w[3] * x**3 + w[4] * x**4 + w[5] * x**5
# sum-of-squares-error function
def sose(w, xs, ts):
summ = 0
for i in range(NUM_POINTS):
# choose yxw_1 or yxw_2 here as a different polynomial model
summ += (yxw_1(w, xs[i]) - ts[i])**2
return 1.0 / 2 * summ
# Initial guess of w.
# Choose w0_1 when you use yxw_1 as the polynomial model
# in the sum-of-squares-error function.
# Correspondingly, choose wo_2 when you use yxw_2.
w0_1 = [1, 1, 1, 1]
w0_2 = [1, 1, 1, 1, 1, 1]
res = minimize(sose, w0_1, args=(xs,ts), method='Powell')
if (res.success):
w_ml = res.x
print('w_ml = ' + str(w_ml))
beta_ml = 1.0 / (sose(w_ml, xs, ts) * 2 / NUM_POINTS)
print('beta_ml = ' + str(beta_ml))
# -
# - Refer to the slides from the class, where we added a prior over $ \textbf{w} $ in order to derive Bayesian linear regression. Assume that we set the hyperparameter $ \alpha = 1 $ and plot the Bayesian estimate of the curve and the uncertainty around the estimate. How well does it match the observed data? How does the estimate change when you use 1000 or 10,000 data points?
# +
from numpy.matlib import zeros
from numpy.matlib import identity
import numpy as np
# the hyperparameter
alpha = 1
# MLE estimation of variance of likelihood distribution
beta = beta_ml
# the order of the polynomial model
M = 3
def phi(x):
return np.matrix([[x**i] for i in range(M + 1)])
summ = zeros((M + 1, M + 1))
# NUM_POINTS is defined at the previous program
for i in range(NUM_POINTS):
p = phi(xs[i])
summ += p * p.T
s_inv = alpha * identity(M + 1) + beta * summ
s = s_inv.I
sum_phi_t = zeros((M + 1, 1))
for i in range(NUM_POINTS):
sum_phi_t += phi(xs[i]) * ts[i]
def m(x):
return (beta * phi(x).T * s * sum_phi_t).item((0, 0))
def s_square(x):
return (1.0 / beta + phi(x).T * s * phi(x)).item((0, 0))
# plot the mean of predictive distribution
s_xs = sorted(xs)
plt.plot(s_xs, list(map(m, s_xs)), label='mean of predictive distribution')
def md_sigma_n(x):
return m(x) - np.sqrt(s_square(x))
def md_sigma_p(x):
return m(x) + np.sqrt(s_square(x))
# plot +/- one standard deviation around the mean of predictive distribution
plt.fill_between(s_xs,
list(map(md_sigma_n, s_xs)),
list(map(md_sigma_p, s_xs)),
facecolor='yellow',
alpha=0.5,
label='+/- one standard deviation around the mean of predictive distribution')
# plot the data points
plt.scatter(xs, ts, label='data points')
def m_ll(x):
return (w_ml * phi(x))[0, 0]
# plot the mean of maximized likelihood distribution
plt.plot(s_xs, list(map(m_ll, s_xs)), label='mean of maximized likelihood distribution')
plt.title('Posterior predictive distribution over t given different x')
plt.legend(loc='center', bbox_to_anchor=(0.5, -0.4))
w_t = np.matrix([0.1, 2, 1, 3])
s_mean = 0.0
s_var = 0.0
for i in range(NUM_POINTS):
tm = (w_t * phi(xs[i]))[0, 0]
s_mean += abs(tm - m(xs[i]))
s_var += abs(1 - s_square(xs[i]))
print("The average distance between the mean of the distribution used to generate data and the mean of the predictive distribution is: " + str(s_mean / NUM_POINTS))
print("The average distance between the variance of the distribution used to generate data and the variance of the predictive distribution is: " + str(s_var / NUM_POINTS))
| hw2/code/figure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from scipy import signal
from scipy import io
# pylab combines pyplot with numpy into a single namespace. This is convenient for interactive work,
# but for programming it is recommended that the namespaces be kept separate, e.g.:
import numpy as np
# +
def gen_square(t, freq=1, duty=0.5):
return signal.square(2 * np.pi * freq * t, duty)
def gen_sine(t, freq=1, phase=0):
return np.sin(2 * np.pi * freq * t + phase*np.pi/180)
def gen_sine2(t, amplitude = 1, offset=0, *args, **kwargs):
return amplitude*gen_sine(t, *args, **kwargs)+offset
# -
def gen_time(T_end = 10, dT=0.1):
return np.arange(0, T_end+dT, dT)
t1 = gen_time(T_end=100, dT=1)
t2 = gen_time(T_end=100, dT=0.1)
t3 = gen_time(T_end=100, dT=0.015)
# +
square_t1 = gen_square(t1, freq=0.1, duty=0.5)
square_t3 = gen_square(t3, freq=0.1, duty=0.5)
sine_t1 = gen_sine(t1, freq=0.01)
sine_t2 = gen_sine2(t2, amplitude=2, offset=2, freq=0.1)
sine_t3 = gen_sine(t3, freq=0.1)
data=dict()
data["t1"]=t1
data["t2"]=t2
data["t3"]=t3
data["sine_t1"]=sine_t1
data["sine_t2"]=sine_t2
data["sine_t3"]=sine_t3
data["square_t1"]=square_t1
data["square_t3"]=square_t3
io.savemat("faux_data.mat", mdict = data, long_field_names=True, do_compression=True)
# -
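# Round-tripping through `io.loadmat` is a quick sanity check for files written this way. Note that `loadmat` returns 2-D arrays (1-D arrays are saved as row vectors by default), so `ravel()` is needed before comparing. The file name below is a stand-in, not the file written above:

```python
import numpy as np
from scipy import io

t = np.arange(0, 101, 1.0)
io.savemat("faux_data_check.mat", mdict={"t1": t}, do_compression=True)

loaded = io.loadmat("faux_data_check.mat")
print(loaded["t1"].shape)  # (1, 101): 1-D arrays come back as row vectors
assert np.allclose(loaded["t1"].ravel(), t)
```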
| tools/DataGenerator-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github"
# <a href="https://colab.research.google.com/github/AlisonJD/tb_examples/blob/main/Add_a_Column_to_a_Data_Source_using_the_CLI.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="yunNOBebQ1qD"
# # Add a Column to a Data Source using the CLI
#
# Based on Tinybird blog post:
#
# https://blog.tinybird.co/2021/05/25/add-column/
# + [markdown] id="QJjXkx_2sZb6"
# If you have opened the notebook in Google Colab then `Copy to Drive` (see above).
# + colab={"base_uri": "https://localhost:8080/"} id="0m9SGvMoQ2M7" executionInfo={"status": "ok", "timestamp": 1628543453321, "user_tz": -120, "elapsed": 21143, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="8605eee2-5fd1-4324-9cdb-640c616d0f5b"
#@title Mount your Google Drive to save and use local files
from google.colab import drive
drive.mount('/content/gdrive', force_remount=False)
% cd "/content/gdrive/My Drive/Colab Notebooks/Tinybird/tb_examples"
# + id="7dDX75NlROEj" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628543468371, "user_tz": -120, "elapsed": 15056, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="9ae364fd-e6a7-4bd8-8bdd-c3c64925f89f"
#@title Install Tinybird CLI, os and your token
# !pip install tinybird-cli -q
import os
if not os.path.isfile('.tinyb'):
# !tb auth
if not os.path.isdir('datasources'):
# !tb init
# + id="Ok4fXnpES1Vd" executionInfo={"status": "ok", "timestamp": 1628543468372, "user_tz": -120, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
#@title Helper function to write to files
def write_text_to_file(filename, text):
with open(filename, 'w') as f: f.write(text)
# + [markdown] id="WQjLeU4mRkNR"
# # Worked Example from Blog:
#
# # Add a Column to a Data Source using the CLI
#
# Business changes, so does data. New attributes in datasets are the norm, not the exception.
#
# You can now add new columns to your existing Data Sources, without worrying about what happens with your existing data ingestion (we will keep importing data with the old schema and start accepting data with the new schema).
#
# Since you can materialize data to other Data Sources at ingestion time, changing the schema of your Data Source could have downstream effects. Don't worry we've solved that.
# + [markdown] id="qlxWb-MjS64g"
# ## 1. Create a Sample Data Source
# + id="CUn4TURuTQbe" executionInfo={"status": "ok", "timestamp": 1628545190251, "user_tz": -120, "elapsed": 278, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
filename="datasources/fixtures/my_ds.csv"
text='''
n,v
1,A
2,B
3,C
4,D
5,E
6,F
7,G
'''
write_text_to_file(filename, text)
# + colab={"base_uri": "https://localhost:8080/"} id="-0L9ZWHETqfg" executionInfo={"status": "ok", "timestamp": 1628545192665, "user_tz": -120, "elapsed": 2208, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="3f5300d4-e221-4e00-c8f2-ee27e6f661ab"
# !tb datasource generate datasources/fixtures/my_ds.csv --force
# + colab={"base_uri": "https://localhost:8080/"} id="uKPAkFWiVuul" executionInfo={"status": "ok", "timestamp": 1628545192666, "user_tz": -120, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="3aa0cbad-f39a-48ec-a1e9-ce5ac925137b"
# !cat datasources/my_ds.datasource
# + id="9GzyCEBRT2J_" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1628545195166, "user_tz": -120, "elapsed": 2504, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="4fe75659-9039-4dd7-d15f-e3d72e1755f4"
# !tb datasource append my_ds datasources/fixtures/my_ds.csv
# the row in quarantine is the row containing column names
# + colab={"base_uri": "https://localhost:8080/"} id="uOFjlAjOYj_C" executionInfo={"status": "ok", "timestamp": 1628545197193, "user_tz": -120, "elapsed": 2031, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="8a98846a-c704-439e-9d49-2885744b9ef4"
# !tb sql "select * from my_ds order by n"
# + [markdown] id="dBPMjutHYt0r"
# ## 2. Add New Columns
# + [markdown] id="3GYCjjXWYIqh"
#
#
# To add new columns, just add them to the end of the current schema definition and then do `tb push --force`.
# + id="AaJ13CSlYJdN" executionInfo={"status": "ok", "timestamp": 1628545197194, "user_tz": -120, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
filename="datasources/my_ds.datasource"
text='''
DESCRIPTION my_ds with some new columns
SCHEMA >
`n` Int16,
`v` String,
`v_str_def_thing` String DEFAULT 'thing',
`v_str_no_default` String,
`v_int_def_3` Int16 DEFAULT 3,
`v_int_no_default` Int16
'''
write_text_to_file(filename, text)
# + colab={"base_uri": "https://localhost:8080/"} id="KWzp86S1clVw" executionInfo={"status": "ok", "timestamp": 1628545197195, "user_tz": -120, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="188f1150-b8a9-4571-9582-e2fa0dde1ae0"
# !cat datasources/my_ds.datasource
# + colab={"base_uri": "https://localhost:8080/"} id="fAx1jTkHY-e4" executionInfo={"status": "ok", "timestamp": 1628545200688, "user_tz": -120, "elapsed": 3497, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="c3206925-7380-4af2-d704-59c79e30fd95"
# !tb push datasources/my_ds.datasource --force --yes
# + [markdown] id="6nbESu4OngbS"
# ## 3. Add Data with the New Columns
# + id="kQyl4bSkZFnw" executionInfo={"status": "ok", "timestamp": 1628545200689, "user_tz": -120, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
filename="datasources/fixtures/my_ds_new_cols.csv"
text='''
n,v,v_str_def_thing,v_str_no_default,v_int_def_3,v_int_no_default
8,H,other,word,5,10
9,I,,dog,0,6
10,J,again,words,1,8
'''
write_text_to_file(filename, text)
# + colab={"base_uri": "https://localhost:8080/"} id="yhHKaMbfbx1p" executionInfo={"status": "ok", "timestamp": 1628545203151, "user_tz": -120, "elapsed": 2468, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="e12b252e-d094-4619-ebfc-62bbb327045b"
# !tb datasource append my_ds datasources/fixtures/my_ds_new_cols.csv
# again the row of column names goes into quarantine
# + [markdown] id="4XPdsAMfntOc"
# ## 4. Add Data with the Old Columns
# + colab={"base_uri": "https://localhost:8080/"} id="97XY3y1Ib5Th" executionInfo={"status": "ok", "timestamp": 1628545205409, "user_tz": -120, "elapsed": 2261, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="53fe52ad-831c-453c-b47d-a0035d4b4c27"
# !tb sql "select * from my_ds order by n"
# + [markdown] id="LBQWR7XnfjNd"
# By default, new columns will contain an empty string or a 0, depending on the type. In the schema you can specify other default values for the new columns.
# + id="0wkX5MX0maxe" executionInfo={"status": "ok", "timestamp": 1628545205409, "user_tz": -120, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
filename="datasources/fixtures/my_ds_old_cols.csv"
text='''
n,v
11,K
12,L
'''
write_text_to_file(filename, text)
# + colab={"base_uri": "https://localhost:8080/"} id="pPoSBnGvmr5j" executionInfo={"status": "ok", "timestamp": 1628545207861, "user_tz": -120, "elapsed": 2455, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="8eba610e-895e-424b-db08-aaef4ec21c7b"
# !tb datasource append my_ds datasources/fixtures/my_ds_old_cols.csv
# again the row of column names goes into quarantine
# + colab={"base_uri": "https://localhost:8080/"} id="YfSaEWg5myLr" executionInfo={"status": "ok", "timestamp": 1628545209963, "user_tz": -120, "elapsed": 2104, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="5fd90246-2135-4f92-90a4-b3cc3b94f079"
# !tb sql "select * from my_ds order by n"
# + [markdown] id="j0uWBQ23k10q"
# ## 5. Notes
# 1. If you are materializing views from the Data Source that you are adding columns to with `SELECT * FROM ...`, the views will break because the target Data Sources won’t have all the columns. To avoid this, use column names instead of `*` when creating materialized views.
#
# 2. You can only add columns to Data Sources that have a `Null` engine or one in the `MergeTree` family.
#
# 3. You can keep importing data as if your schema hadn’t changed. Default values will be used for the new columns if a value is not provided for them. At any point, you can start importing with the new schema by sending data that contains the new columns.
#
# 4. All the new columns have to be added at the end of the schema of a current Data Source, not in between existing columns.
| Add_a_Column_to_a_Data_Source_using_the_CLI.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.1
# language: julia
# name: julia-1.3
# ---
using Languages, TextAnalysis, Flux, PyPlot, Statistics, Embeddings
# +
# Array of documents for data (example)
# Input directory given by the user
doc_array = ["well done",
"good work",
"great effort",
"nice work",
"excellent",
"weak",
"poor effort",
"not good",
"poor work",
"could have done better"]
# +
# positive or negative sentiment for each 'document' string
# labels can be loaded from a file given by the user
y = [true true true true true false false false false false]
# -
# pushing the text from the files to the string documents
docs=[]
for i in 1:length(doc_array)
push!(docs, StringDocument(doc_array[i]))
end
# +
# building a Corpus
corpus=Corpus(docs)
# updating the lexicon and creating the word dict
update_lexicon!(corpus)
doc_term_matrix=DocumentTermMatrix(corpus)
word_dict=doc_term_matrix.column_indices
# function to return the index of the word in the word dictionary
tk_idx(s) = haskey(word_dict, s) ? word_dict[s] : 0
# +
# Padding the corpus wrt the longest document
function pad_corpus(c, pad_size)
M=[]
for doc in 1:length(c)
tks = tokens(c[doc])
if length(tks)>=pad_size
tk_indexes=[tk_idx(w) for w in tks[1:pad_size]]
end
if length(tks)<pad_size
tk_indexes=zeros(Int64,pad_size-length(tks))
tk_indexes=vcat(tk_indexes, [tk_idx(w) for w in tks])
end
doc==1 ? M=tk_indexes' : M=vcat(M, tk_indexes')
end
return M
end
# -
# splitting words in the document
word_docs = map(split,doc_array)
# pad size is the number of words in the longest document
pad_size = maximum(length(word_docs[i]) for i in 1:length(doc_array))
# padding the docs
padded_docs = pad_corpus(corpus, pad_size)
# forming the data with the labels
x = padded_docs'
data = [(x, y)]
# return the index of word s in the embedding vocabulary (0 if not found)
function vec_idx(s)
    i = findfirst(x -> x == s, vocab)
    i === nothing ? 0 : i
end
# +
const embtable = load_embeddings(GloVe{:en},1) # or load_embeddings(FastText_Text) or ...
# Dict mapping each word to its index in the embedding table
const get_word_index = Dict(word=>ii for (ii,word) in enumerate(embtable.vocab))
function get_embedding(word)
ind = get_word_index[word]
emb = embtable.embeddings[:,ind]
return emb
end
# -
embeddings = embtable.embeddings
vocab = embtable.vocab
embed_size, max_features = size(embeddings)
# Building Flux Embeddings
N = size(padded_docs,1) #Number of documents
max_features = 50
# number of words in the vocabulary; it should always be higher than the maximum index in our dictionary
vocab_size = 20
max_features
# +
# Embedding layer for Flux model
# glorot_normal returns an Array of size dims containing random variables taken from a normal distribution with mean 0 and standard deviation (2 / sum(dims)).
embedding_matrix=Flux.glorot_normal(max_features, vocab_size)
for term in doc_term_matrix.terms
if vec_idx(term)!=0
embedding_matrix[:,word_dict[term]+1]=get_embedding(term)
end
end
# +
# Building the Flux model
m = Chain(x -> embedding_matrix * Flux.onehotbatch(reshape(x, pad_size*N), 0:vocab_size-1),
x -> reshape(x, max_features, pad_size, N),
x -> sum(x, dims=2),
x -> reshape(x, max_features, N),
Dense(max_features, 1, σ)
)
# -
# Layer 1. As x is fed into the model, the first layer’s embedding function matches the words in each document to corresponding word vectors. This is done by rolling all the word vectors one after the other and using onehotbatch to filter out the unwanted words. The output is a 8x40 array (W\[8x20\]\*onehotbatch\[20x40\]).
#
# Layer 2. Unrolls the vectors into the shape 8x4x10; i.e. 8 features and 10 documents of padded size 4.
#
# Layer 3. Now that our data is in the shape provided by layer 2, we can sum the word vectors to get an overall ‘meaning’ vector for each document. The output now has the shape 8 x 1 x 10.
#
# Layer 4: Drops an axis so that the shape of x is a size suitable for training. After this step the shape is 8x10.
#
# Layer 5 is a normal Dense layer with the sigmoid activation function to give us nice probabilities.
#
# If you’d like to see each layer in action I recommend using m\[1\](x) to see sample output from the first layer. m\[1:2\](x) to see output from the second layer and so on.
# ## Breaking the model
x
reshape(x,pad_size*N)
Flux.onehotbatch(reshape(x, pad_size*N), 0:vocab_size-1)
# One hot encoded matrix has 1s on the positions of the vocab index (Eg 0 0 13 3 for first word vector)
# All the vectors are concatenated side by side
loss_h=[]
accuracy_train=[]
accuracy(x, y) = mean(x .== y)
loss(x, y) = sum(Flux.binarycrossentropy.(m(x), y))
optimizer = Flux.Descent(0.01)
for epoch in 1:400
Flux.train!(loss, Flux.params(m), data, optimizer)
println(loss(x, y), " ", accuracy(m(x).>0.5,y))
push!(loss_h, loss(x, y))
push!(accuracy_train, accuracy(m(x).>0.5,y))
end
println(m(x).>0.5)
accuracy(m(x).>0.5,y)
# +
figure(figsize=(12,5))
subplot(121)
PyPlot.xlabel("Epoch")
ylabel("Loss")
plot(loss_h)
subplot(122)
PyPlot.xlabel("Epoch")
ylabel("Accuracy")
plot(accuracy_train, label="train")
| tutorials/Rango - Text Classifier Basic Model Tutorial-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# name: python3
# ---
# ## Imports
import numpy as np
np.seterr(all='ignore')
import os, os.path
import pydicom
import cv2
import math
import matplotlib.pyplot as plt
import scipy.ndimage
import scipy.signal
import statistics
from itertools import *
# ## `dcm_list_builder`
def dcm_list_builder(path, test_text = ""):
    # walk the directory tree and collect every directory that
    # contains at least one readable DICOM file
    dcm_path_list = []
    for (dirpath, dirnames, filenames) in os.walk(path, topdown=True):
        if dirpath not in dcm_path_list:
            for filename in filenames:
                try:
                    tmp_str = str(os.path.join(dirpath, filename))
                    pydicom.read_file(tmp_str, stop_before_pixels = True)
                    if dirpath not in dcm_path_list:
                        dcm_path_list.append(dirpath)
                except Exception:
                    pass
    return dcm_path_list
# ## `dcm_reader`
def dcm_reader(dcm_path):
dcm_files = []
for (dirpath, dirnames, filenames) in os.walk(dcm_path,topdown=False):
for filename in filenames:
try:
if not filename == 'DIRFILE':
dcm_file = str(os.path.join(dirpath, filename))
pydicom.read_file(dcm_file, stop_before_pixels = True)
dcm_files.append(dcm_file)
except:
pass
read_RefDs = True
while read_RefDs:
for index in range(len(dcm_files)):
try:
RefDs = pydicom.read_file(dcm_files[index], stop_before_pixels = False)
read_RefDs = False
break
except:
pass
slice_thick_ori = RefDs.SliceThickness
ConstPixelDims = (int(RefDs.Rows), int(RefDs.Columns), len(dcm_files))
dcm_array = np.zeros([ConstPixelDims[0],ConstPixelDims[1],len(dcm_files)],\
dtype=RefDs.pixel_array.dtype)
instances = []
for filenameDCM in dcm_files:
try:
ds = pydicom.read_file(filenameDCM, stop_before_pixels = True)
instances.append(int(ds.InstanceNumber))
except:
pass
instances.sort()
index = 0
for filenameDCM in dcm_files:
try:
ds = pydicom.read_file(filenameDCM)
dcm_array[:,:,instances.index(ds.InstanceNumber)] = ds.pixel_array
if ds.InstanceNumber in instances[:2]:
if ds.InstanceNumber == instances[0]:
loc_1 = ds.SliceLocation
else:
loc_2 = ds.SliceLocation
index += 1
except:
pass
try:
RefDs.SliceThickness = abs(loc_1 - loc_2)
except:
pass
dcm_array = dcm_array * RefDs.RescaleSlope + RefDs.RescaleIntercept
return RefDs, dcm_array, slice_thick_ori
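# The last line of `dcm_reader` converts stored pixel values to Hounsfield units via the header's rescale tags. A minimal sketch with hypothetical slope/intercept values (the real ones come from `RescaleSlope` and `RescaleIntercept` in the DICOM header):

```python
import numpy as np

# Hypothetical CT rescale tags; slope 1 and intercept -1024 are common,
# but the true values must always be read from the header.
rescale_slope, rescale_intercept = 1.0, -1024.0
stored = np.array([0, 1024, 2048], dtype=np.int16)
hu = stored * rescale_slope + rescale_intercept  # stored values -> HU
print(hu.tolist())  # → [-1024.0, 0.0, 1024.0]
```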
# ## Load DICOMs
root_path = r"/Users/daleblack/Google Drive/Datasets/Canon_Aquilion_One_Vision"
dcm_path_list = dcm_list_builder(root_path)
dcm_path_list
dcm_path_list[5]
header, dcm_array, slice_thick_ori = dcm_reader(dcm_path_list[5])
dcm_array_ori = dcm_array.copy()
dcm_array.max()
dcm_array.min()
# # Whole heart mask
# ## `find_circle`
def find_circle(point_1, point_2, point_3):
x1, y1 = point_1
x2, y2 = point_2
x3, y3 = point_3
x12 = x1 - x2
x13 = x1 - x3
y12 = y1 - y2
y13 = y1 - y3
y31 = y3 - y1
y21 = y2 - y1
x31 = x3 - x1
x21 = x2 - x1
sx13 = x1**2 - x3**2
sy13 = y1**2 - y3**2
sx21 = x2**2 - x1**2
sy21 = y2**2 - y1**2
f = (((sx13) * (x12) + (sy13) * (x12) + (sx21) * (x13) +\
(sy21) * (x13)) // (2 * ((y31) * (x12) - (y21) * (x13))))
g = (((sx13) * (y12) + (sy13) * (y12) + (sx21) * (y13) + (sy21) *\
(y13)) // (2 * ((x31) * (y12) - (x21) * (y13))))
    # equation of the circle: x^2 + y^2 + 2*g*x + 2*f*y + c = 0, with centre (h, k) = (-g, -f)
center_insert = [-g,-f]
return center_insert
find_circle([309, 309], [312, 200], [155, 155])
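# `find_circle` uses integer division (`//`), so its result is rounded. As an illustrative cross-check (not part of the pipeline), the circumcenter of the same three points can be computed in floating point and verified to be equidistant from all of them:

```python
import math

def circumcenter(p1, p2, p3):
    # standard floating-point circumcenter of three non-collinear points
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

ux, uy = circumcenter((309, 309), (312, 200), (155, 155))
radii = [math.hypot(px - ux, py - uy)
         for (px, py) in [(309, 309), (312, 200), (155, 155)]]
# all three radii agree, confirming (ux, uy) is the circle's center
print(max(radii) - min(radii) < 1e-6)  # → True
```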
# ## `mask_heart`
def mask_heart(array_used = None, radius_val = 95, slice_used_center = None):
try:
pixel_size = header.PixelSpacing[0]
except:
FOV = header.ReconstructionDiameter
matrix_size = header.Rows
pixel_size = FOV / matrix_size
radius = (radius_val/2) / pixel_size
central_image = array_used[:,:,slice_used_center].copy()
central_image[central_image > -200] = 0
central_image[central_image != 0] = 1
image_kernel = math.ceil(5 / header.PixelSpacing[0])
if image_kernel % 2 == 0:
image_kernel += 1
central_image = scipy.signal.medfilt2d(central_image, image_kernel)
center = [int(array_used.shape[0] / 2), int(array_used.shape[1] / 2)]
a = central_image.copy()
for index in range(int(array_used.shape[1] / 2)):
if (central_image[center[0] + index, center[1] + index] == 1 and\
central_image[center[0] + index, center[1] + index + 5] == 1):
point_1 = [center[0] + index, center[1] + index]
break
else:
a[center[0] + index, center[1] + index] = 2
pass
for index in range(int(array_used.shape[1] / 2)):
if (central_image[center[0] + index, center[1] - index] == 1 and\
central_image[center[0] + index, center[1] - index - 5] == 1):
point_2 = [center[0] + index, center[1] - index]
break
else:
a[center[0] + index, center[1] - index] = 2
pass
for index in range(int(array_used.shape[1] / 2)):
if (central_image[center[0] - index, center[1] - index] == 1 and\
central_image[center[0] - index, center[1] - index - 5] == 1):
point_3 = [center[0] - index, center[1] - index]
break
else:
a[center[0] - index, center[1] - index] = 2
pass
center_insert = find_circle(point_1, point_2, point_3)
Y, X = np.ogrid[:header.Rows, :header.Columns]
dist_from_center = np.sqrt((X - center_insert[1])**2 + (Y-center_insert[0])**2)
mask = dist_from_center <= radius
masked_array = np.zeros_like(array_used)
for index in range(array_used.shape[2]):
masked_array[:,:,index] = array_used[:,:,index] * mask
return masked_array, center_insert, mask
dcm_array, center_insert, mask = mask_heart(array_used = dcm_array, slice_used_center = int(dcm_array.shape[2]/2))
center_insert
plt.scatter(center_insert[1], center_insert[0])
plt.imshow(dcm_array_ori[:,:,10], cmap='gray')
plt.scatter(center_insert[1], center_insert[0])
plt.imshow(mask[:,:], cmap='gray')
plt.imshow(dcm_array[:,:,24], cmap='gray')
# # Calcium rod mask
# ## `get_calcium_slices`
def get_calcium_slices(dcm_array, header, calcium_threshold=130, comp_connect=4):
array = dcm_array.copy()
array[array < 1.1*calcium_threshold] = 0
array[array > 0] = 1
array = array.astype(dtype = np.uint8)
CCI_5mm_num_pixels = int(math.pi * (5/2)**2 / header.PixelSpacing[0]**2)
cal_rod_num_pixels = int(math.pi * (20/2)**2 / header.PixelSpacing[0]**2)
image_kernel = math.ceil(5 / header.PixelSpacing[0])
if image_kernel % 2 == 0:
image_kernel += 1
slice_dict = {}
large_index = []
cal_rod_dict = {}
for idx in range(array.shape[2]):
array_filtered = scipy.signal.medfilt2d(array[:,:,idx], image_kernel)
output = cv2.connectedComponentsWithStats(array_filtered, comp_connect,cv2.CV_32S)
count_5mm = 0
count = 0
for index in range(1,output[0]):
count += 1
area = output[2][index][4]
r1_1 = int(CCI_5mm_num_pixels * 0.6)
r1_2 = int(CCI_5mm_num_pixels * 1.5)
r2_1 = int(cal_rod_num_pixels * 0.7)
r2_2 = int(cal_rod_num_pixels * 1.3)
if area in range(r1_1, r1_2):
count_5mm += 1
elif area in range(r2_1, r2_2):
cal_rod_dict[index] = [int(output[3][index][1]), int(output[3][index][0])]
if (count_5mm > 0 and count_5mm < 4):
slice_dict[idx] = count_5mm
poppable_keys = []
for key in cal_rod_dict.keys():
start_coordinate = [cal_rod_dict[key][0], cal_rod_dict[key][1]]
x_right = 0
while array_filtered[start_coordinate[0], start_coordinate[1] + x_right] == 1:
x_right += 1
x_left = 0
while array_filtered[start_coordinate[0], start_coordinate[1] - x_left] == 1:
x_left += 1
y_top = 0
while array_filtered[start_coordinate[0] + y_top, start_coordinate[1]] == 1:
y_top += 1
y_bottom = 0
while array_filtered[start_coordinate[0] - y_bottom, start_coordinate[1]] == 1:
y_bottom += 1
x_dist = x_right + x_left
y_dist = y_top + y_bottom
if x_dist not in range(int(0.7*y_dist), int(1.2*y_dist)):
poppable_keys.append(key)
else:
pass
for key in poppable_keys:
cal_rod_dict.pop(key)
if len(cal_rod_dict) == 0:
pass
else:
large_index.append(idx)
return slice_dict, large_index
slice_dict, large_index = get_calcium_slices(dcm_array, header)
slice_dict, large_index
plt.imshow(dcm_array_ori[:, :, large_index[1]], cmap="gray")
# ## `get_calcium_center_slices`
def get_calcium_center_slices(dcm_array, slice_dict, large_index):
flipped_index = int(statistics.median(large_index))
# flipped_index = 31
edge_index = []
if flipped_index < (dcm_array.shape[2] / 2):
flipped = -1
for element in large_index:
# print("element: ", element)
# print("dcm_array.shape[2] / 2: ", dcm_array.shape[2] / 2)
if element > (dcm_array.shape[2] / 2):
edge_index.append(element)
if not edge_index:
pass
else:
for index_edge in range(min(edge_index), dcm_array.shape[2]):
try:
del(slice_dict[index_edge])
except:
pass
# print("slice_dict: ", slice_dict)
for element2 in edge_index:
large_index.remove(element2)
for element in range(max(large_index)):
try:
del(slice_dict[element])
except:
pass
else:
flipped = 1
for element in large_index:
if element < (dcm_array.shape[2] / 2):
edge_index.append(element)
if not edge_index:
pass
else:
for index_edge in range(max(edge_index)):
try:
del(slice_dict[index_edge])
except:
pass
for element2 in edge_index:
large_index.remove(element2)
for element in range(min(large_index), dcm_array.shape[2]):
try:
del(slice_dict[element])
except:
pass
return slice_dict, flipped, flipped_index
slice_dict, flipped, flipped_index = get_calcium_center_slices(dcm_array, slice_dict, large_index)
slice_dict, flipped, flipped_index
# ## `poppable_keys`
def poppable_keys(flipped, flipped_index, header, slice_dict):
poppable_keys = []
if flipped == -1:
for key in slice_dict.keys():
if key > (flipped_index + (55 / header.SliceThickness)):
poppable_keys.append(key)
elif flipped == 1:
for key in slice_dict.keys():
if key < (flipped_index - (55 / header.SliceThickness)):
poppable_keys.append(key)
for key in poppable_keys:
slice_dict.pop(key)
return slice_dict
poppable_keys(flipped, flipped_index, header, slice_dict)
# ## `compute_CCI`
def compute_CCI(dcm_array, header, slice_dict, calcium_threshold=130):
max_key, _ = max(zip(slice_dict.values(), slice_dict.keys()))
max_keys = []
for key in slice_dict.keys():
if slice_dict[key] is max_key:
max_keys.append(key)
slice_CCI = int(statistics.median(max_keys))
array = dcm_array.copy()
array[array < calcium_threshold] = 0
array[array > 0] = 1
array = array.astype(dtype = np.uint8)
calcium_image = array * dcm_array
quality_slice = round(slice_CCI - flipped * (20 / header.SliceThickness))
cal_rod_slice = slice_CCI + (flipped * int(30 / header.SliceThickness))
return calcium_image, slice_CCI, quality_slice, cal_rod_slice
calcium_image, slice_CCI, quality_slice, cal_rod_slice = compute_CCI(dcm_array, header, slice_dict)
slice_CCI, quality_slice, cal_rod_slice
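# `compute_CCI` places the quality slice 20 mm away from the CCI slice (against the scan direction) and the rod slice 30 mm along it. With hypothetical values — a 2 mm slice thickness, `flipped = 1`, and a CCI slice at index 20 — those offsets become 10 and 15 slices:

```python
# Hypothetical values purely for illustration; the real ones come from
# the scan header and the calcium-rod detection above.
slice_CCI, flipped, slice_thickness = 20, 1, 2.0
quality_slice = round(slice_CCI - flipped * (20 / slice_thickness))  # 20 mm back
cal_rod_slice = slice_CCI + flipped * int(30 / slice_thickness)      # 30 mm forward
print(quality_slice, cal_rod_slice)  # → 10 35
```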
plt.imshow(calcium_image[:, :, 14], cmap="gray")
# ## `mask_rod`
def mask_rod(dcm_array, header, calcium_threshold=130, comp_connect=4):
slice_dict, large_index = get_calcium_slices(dcm_array, header, calcium_threshold, comp_connect)
slice_dict, flipped, flipped_index = get_calcium_center_slices(dcm_array, slice_dict, large_index)
slice_dict = poppable_keys(flipped, flipped_index, header, slice_dict)
calcium_image, slice_CCI, quality_slice, cal_rod_slice = compute_CCI(
dcm_array, header, slice_dict, calcium_threshold
)
return calcium_image, slice_CCI, quality_slice, cal_rod_slice
calcium_image, CCI_slice, quality_slice, cal_rod_slice = mask_rod(dcm_array, header, calcium_threshold=130)
plt.imshow(calcium_image[:, :, CCI_slice], cmap='gray')
# # Calcium inserts mask
# ## `angle_calc`
def angle_calc(side1, side2):
    # Calculate the angle between two sides of a right triangle
if side1 == 0:
angle = 0
elif side2 == 0:
angle = math.pi / 2
else:
angle = math.atan(side1 / side2)
return angle
angle_calc(4, 3)
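# `angle_calc(4, 3)` returns `atan(4/3)` in radians; converting to degrees makes the value easier to read:

```python
import math

angle = math.atan(4 / 3)  # the same value angle_calc(4, 3) returns
print(round(math.degrees(angle), 2))  # → 53.13
```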
# ## `create_circular_mask`
def create_circular_mask(h, w, center_circle, radius_circle):
Y, X = np.ogrid[:h, :w]
dist_from_center = np.sqrt((X - center_circle[0])**2 + (Y-center_circle[1])**2)
mask = dist_from_center <= radius_circle
return mask
mask1 = create_circular_mask(40, 40, (20, 20), 1)
plt.imshow(mask1, cmap="gray")
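# The number of `True` pixels in such a mask approximates the disc area `pi * r**2`; a quick self-contained check with an assumed 100x100 grid and radius 10:

```python
import numpy as np

def circular_mask(h, w, center, radius):
    # same construction as create_circular_mask above, repeated here
    # so this sketch runs on its own
    Y, X = np.ogrid[:h, :w]
    return np.sqrt((X - center[0])**2 + (Y - center[1])**2) <= radius

mask = circular_mask(100, 100, (50, 50), 10)
# the lattice-point count is close to pi * 10**2 ≈ 314.16 (exactly 317 here)
print(int(mask.sum()))  # → 317
```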
# ## `calc_output`
def calc_output(dcm_array, CCI_slice, calcium_threshold = 130, comp_connect=4, print_plot=False):
# Actual scoring for CCI insert
# First step is to remove slices without calcium from arrays
CCI_min = int((CCI_slice - math.ceil(5 / header.SliceThickness)) - 1)
CCI_max = int((CCI_slice + math.ceil(5 / header.SliceThickness)) + 1)
central_CCI = int((CCI_max - CCI_min)/2)
if CCI_min < 0:
CCI_min = 0
if CCI_max > dcm_array.shape[2]:
CCI_max = dcm_array.shape[2]
CCI_array = dcm_array[:,:,CCI_min:CCI_max].copy()
CCI_array_binary = CCI_array.copy()
CCI_array_binary[CCI_array_binary < 1.0*calcium_threshold] = 0
CCI_array_binary[CCI_array_binary > 0] = 1
CCI_array_binary = CCI_array_binary.astype(dtype = np.uint8)
inp = CCI_array_binary[:,:,central_CCI - 1] + CCI_array_binary[:,:,central_CCI] + CCI_array_binary[:,:,central_CCI + 1]
_, _, _, centroids = cv2.connectedComponentsWithStats(inp, comp_connect, cv2.CV_32S)
centroids = np.delete(centroids,0,0)
image_kernel = math.ceil(3 / header.PixelSpacing[0])
if image_kernel % 2 == 0:
image_kernel += 1
image_for_center = scipy.signal.medfilt2d(CCI_array_binary[:,:,central_CCI - 1], image_kernel) +\
scipy.signal.medfilt2d(CCI_array_binary[:,:,central_CCI], image_kernel) +\
scipy.signal.medfilt2d(CCI_array_binary[:,:,central_CCI + 1], image_kernel)
if print_plot:
plt.imshow(image_for_center)
plt.show()
plt.imshow(image_for_center, cmap='bone')
plt.xticks(fontsize = 10)
plt.yticks(fontsize = 10)
plt.show()
output = cv2.connectedComponentsWithStats(image_for_center, comp_connect,cv2.CV_32S)
return output, CCI_array
output, CCI_array = calc_output(dcm_array, CCI_slice, print_plot=False)
output
plt.imshow(output[1])
CCI_array.shape
plt.imshow(CCI_array[:, :, 5], cmap="gray")
# ## `center_points`
def center_points(output, tmp_center, CCI_slice):
sizes = []
for size_index in range(1,len(output[2])):
area = output[2][size_index][4]
sizes.append(area)
# print("output[2][size_index][4]: ", output[2][size_index][4])
# global largest
largest = {}
for index in range(1,len(output[3])):
x = output[3][index][0]
y = output[3][index][1]
dist_loc = math.sqrt((tmp_center[1] - x)**2 +\
(tmp_center[0] - y)**2)
dist_loc *= header.PixelSpacing[0]
if dist_loc > 31:
largest[index] = [int(output[3][index][1]),int(output[3][index][0])]
else:
pass
# print("x: ", x, "y: ", y, "dist_loc: ", dist_loc)
# print("largest: ", largest)
max_dict = {}
for key in largest.keys():
tmp_arr = create_circular_mask(header.Rows, header.Columns,\
[largest[key][1],largest[key][0]],\
math.ceil(2.5 / header.PixelSpacing[0]))
# print(header.Rows, header.Columns, [largest[key][1],largest[key][0]], math.ceil(2.5 / header.PixelSpacing[0]))
tmp_arr = tmp_arr * dcm_array[:,:,CCI_slice] +\
tmp_arr * dcm_array[:,:,CCI_slice - 1] +\
tmp_arr * dcm_array[:,:,CCI_slice + 1]
tmp_arr[tmp_arr == 0] = np.nan
max_dict[key] = np.nanmedian(tmp_arr)
# print("max_dict: ", max_dict)
large1_index, large1_key = max(zip(max_dict.values(), max_dict.keys()))
max_dict.pop(large1_key)
large2_index, large2_key = max(zip(max_dict.values(), max_dict.keys()))
max_dict.pop(large2_key)
large3_index, large3_key = max(zip(max_dict.values(), max_dict.keys()))
center1 = largest[large1_key]
center2 = largest[large2_key]
center3 = largest[large3_key]
# global center
center = find_circle(center1, center2, center3)
return center, center1, center2, center3
center_points(output, center_insert, CCI_slice)
center_insert
# ## `calc_centers`
def calc_centers(output, tmp_center, CCI_slice):
center, center1, center2, center3 = center_points(output, tmp_center, CCI_slice)
centers = {}
for size_index4 in (center1, center2, center3):
center_index = size_index4
side_x = abs(center[0]-center_index[0])
side_y = abs(center[1]-center_index[1])
angle = angle_calc(side_x, side_y)
if (center_index[0] < center[0] and center_index[1] < center[1]):
medium_calc = [int(center_index[0] + (12.5 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] + (12.5 / header.PixelSpacing[1]) * math.cos(angle)))]
low_calc = [int(center_index[0] + (25 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] + (25 / header.PixelSpacing[1]) * math.cos(angle)))]
elif (center_index[0] < center[0] and center_index[1] > center[1]):
medium_calc = [int(center_index[0] + (12.5 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] - (12.5 / header.PixelSpacing[1]) * math.cos(angle)))]
low_calc = [int(center_index[0] + (25 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] - (25 / header.PixelSpacing[1]) * math.cos(angle)))]
elif (center_index[0] > center[0] and center_index[1] < center[1]):
medium_calc = [int(center_index[0] - (12.5 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] + (12.5 / header.PixelSpacing[1]) * math.cos(angle)))]
low_calc = [int(center_index[0] - (25 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] + (25 / header.PixelSpacing[1]) * math.cos(angle)))]
elif (center_index[0] > center[0] and center_index[1] > center[1]):
medium_calc = [int(center_index[0] - (12.5 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] - (12.5 / header.PixelSpacing[1]) * math.cos(angle)))]
low_calc = [int(center_index[0] - (25 / header.PixelSpacing[0]) * math.sin(angle)),\
int((center_index[1] - (25 / header.PixelSpacing[1]) * math.cos(angle)))]
elif (side_x == 0 and center_index[1] < center[1]):
medium_calc = [int(center_index[0]), int(center_index[1] + (12.5 / header.PixelSpacing[1]))]
low_calc = [int(center_index[0]), int(center_index[1] + (25 / header.PixelSpacing[1]))]
elif (side_x == 0 and center_index[1] > center[1]):
medium_calc = [int(center_index[0]), int(center_index[1] - (12.5 / header.PixelSpacing[1]))]
low_calc = [int(center_index[0]), int(center_index[1] - (25 / header.PixelSpacing[1]))]
elif (center_index[0] > center[0] and side_y == 0):
medium_calc = [int(center_index[0] - (12.5 / header.PixelSpacing[0])), int(center_index[1])]
low_calc = [int(center_index[0] - (25 / header.PixelSpacing[0])), int(center_index[1])]
        elif (center_index[0] < center[0] and side_y == 0):
medium_calc = [int(center_index[0] + (12.5 / header.PixelSpacing[0])), int(center_index[1])]
low_calc = [int(center_index[0] + (25 / header.PixelSpacing[0])), int(center_index[1])]
else:
print("unknown angle.. error!")
if size_index4 == center1:
centers['Large_HD'] = ([center_index])
centers['Medium_HD'] = ([medium_calc])
centers['Small_HD'] = ([low_calc])
elif size_index4 == center2:
centers['Large_MD'] = ([center_index])
centers['Medium_MD'] = ([medium_calc])
centers['Small_MD'] = ([low_calc])
elif size_index4 == center3:
centers['Large_LD'] = ([center_index])
centers['Medium_LD'] = ([medium_calc])
centers['Small_LD'] = ([low_calc])
else:
pass
return centers
calc_centers(output, center_insert, CCI_slice)
# ## `mask_inserts`
def mask_inserts(dcm_array, CCI_slice, center, calcium_threshold = 130, comp_connect=4, print_plot = False):
output, CCI_array = calc_output(dcm_array, CCI_slice, calcium_threshold, comp_connect, print_plot)
tmp_center = center.copy()
calc_size_density_VS_AS_MS = calc_centers(output, tmp_center, CCI_slice)
for key in calc_size_density_VS_AS_MS.keys():
calc_size_density_VS_AS_MS[key].append(0)
calc_size_density_VS_AS_MS[key].append(0)
calc_size_density_VS_AS_MS[key].append(0)
mask_L_HD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Large_HD'][0][1],\
calc_size_density_VS_AS_MS['Large_HD'][0][0]],math.ceil((5 / header.PixelSpacing[0])/2) + 1)
mask_L_MD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Large_MD'][0][1],\
calc_size_density_VS_AS_MS['Large_MD'][0][0]],math.ceil((5 / header.PixelSpacing[0])/2) + 1)
mask_L_LD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Large_LD'][0][1],\
calc_size_density_VS_AS_MS['Large_LD'][0][0]],math.ceil((5 / header.PixelSpacing[0])/2) + 1)
mask_M_HD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Medium_HD'][0][1],\
calc_size_density_VS_AS_MS['Medium_HD'][0][0]],math.ceil((3 / header.PixelSpacing[0])/2) + 1)
mask_M_MD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Medium_MD'][0][1],\
calc_size_density_VS_AS_MS['Medium_MD'][0][0]],math.ceil((3 / header.PixelSpacing[0])/2) + 1)
mask_M_LD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Medium_LD'][0][1],\
calc_size_density_VS_AS_MS['Medium_LD'][0][0]],math.ceil((3 / header.PixelSpacing[0])/2) + 1)
mask_S_HD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Small_HD'][0][1],\
calc_size_density_VS_AS_MS['Small_HD'][0][0]],math.ceil((1 / header.PixelSpacing[0])/2) + 1)
mask_S_MD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Small_MD'][0][1],\
calc_size_density_VS_AS_MS['Small_MD'][0][0]],math.ceil((1 / header.PixelSpacing[0])/2) + 1)
mask_S_LD = create_circular_mask(header.Columns, header.Rows, [calc_size_density_VS_AS_MS['Small_LD'][0][1],\
calc_size_density_VS_AS_MS['Small_LD'][0][0]],math.ceil((1 / header.PixelSpacing[0])/2) + 1)
masks1 = mask_L_HD + mask_M_HD + mask_S_HD
masks2 = mask_L_MD + mask_M_MD + mask_S_MD
masks3 = mask_L_LD + mask_M_LD + mask_S_LD
if print_plot:
plt.imshow(masks1 + masks2 + masks3, cmap='bone')
plt.xticks(fontsize = 10)
plt.yticks(fontsize = 10)
plt.show()
return mask_L_HD, mask_M_HD, mask_S_HD, mask_L_MD, mask_M_MD, mask_S_MD, mask_L_LD, mask_M_LD, mask_S_LD
mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9 = mask_inserts(dcm_array, CCI_slice, center_insert)
plt.imshow(mask1 + mask2 + mask3 + mask4 + mask5 + mask6 + mask7 + mask8 + mask9, cmap="gray")
| python/segmentation/segmentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="qeoHVjsu-x23" colab_type="text"
# importing all necessary libraries.
# + id="C5JZ2kXx-SUe" colab_type="code" outputId="376e7e28-aa6a-41c0-fb7e-86d7914b23ad" colab={"base_uri": "https://localhost:8080/", "height": 100}
#importing all necessary libraries
import os
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")
# %matplotlib inline
from tqdm import tqdm_notebook, tnrange
from itertools import chain
from skimage.io import imread, imshow, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.models import Model, load_model
from keras.layers import Input, BatchNormalization, Activation, Dense, Dropout
from keras.layers import Lambda, RepeatVector, Reshape
from keras.layers import Conv2D, Conv2DTranspose
from keras.layers import MaxPooling2D, GlobalMaxPool2D
from keras.layers import concatenate, add
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
# + id="-2fqWEcO-s2A" colab_type="code" colab={}
# Set some parameters: image width, height, and border
im_width = 128
im_height = 128
border = 5
# + id="OMIlLuG8gmFp" colab_type="code" outputId="885852ea-6249-4a46-c2b0-da0028ab3b15" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Print the number of images in the training dataset
ids = next(os.walk("drive/My Drive/salt/train/images"))[2]
print("No. of images = ", len(ids))
# + id="seYHGtmxirRJ" colab_type="code" colab={}
# Initialize arrays X and y with the image height and width
X = np.zeros((len(ids), im_height, im_width, 1), dtype=np.float32)
y = np.zeros((len(ids), im_height, im_width, 1), dtype=np.float32)
# + id="VzVbw7mCj0qU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": ["36366d693f7c4b2cb205a2273aafa0a5", "4ee43f25f5934ff6bedea2477b208bd4", "705cbf0104b64da9a19010c132df4f96", "7b8819f2176944fd8e583a1b0b82e2ab", "94a386a8f9f3466e9a7884f1c0dab581", "<KEY>", "633f073e5ebc4ffc923e0bab29c385d7", "3b8efcca7fa746acbeabd9a403865ac9"]} outputId="bca4abf2-51a8-4ed8-f58a-3820d070abf8"
# tqdm is used to display the progress bar
for n, id_ in tqdm_notebook(enumerate(ids), total=len(ids)):
# Load images
img = load_img("drive/My Drive/salt/train/images/{}".format(id_), color_mode="grayscale")
x_img = img_to_array(img)
x_img = resize(x_img, (128, 128, 1), mode = 'constant', preserve_range = True)
# Load masks
mask = img_to_array(load_img("drive/My Drive/salt/train/masks/{}".format(id_), color_mode="grayscale"))
mask = resize(mask, (128, 128, 1), mode = 'constant', preserve_range = True)
# Save images
X[n] = x_img/255.0
y[n] = mask/255.0
# + id="WdoJ5kSwlSyl" colab_type="code" colab={}
# Split train and valid
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.1, random_state=42)
# + id="WHVXFrCi2SaG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 610} outputId="a68dec84-5fc7-4d0a-9b0c-df76bc6b6946"
# Visualize a random image along with its mask
ix = random.randint(0, len(X_train) - 1)
has_mask = y_train[ix].max() > 0 # salt indicator
fig, (ax1, ax2) = plt.subplots(1, 2, figsize = (20, 15))
ax1.imshow(X_train[ix, ..., 0], cmap = 'seismic', interpolation = 'bilinear')
if has_mask: # if salt
# draw a boundary(contour) in the original image separating salt and non-salt areas
ax1.contour(y_train[ix].squeeze(), colors = 'k', linewidths = 5, levels = [0.5])
ax1.set_title('Seismic')
ax2.imshow(y_train[ix].squeeze(), cmap = 'gray', interpolation = 'bilinear')
ax2.set_title('Salt')
# + id="yYiQumla2e2g" colab_type="code" colab={}
# Convolutional block reused as a building function by the U-Net model
def conv2d_block(input_tensor, n_filters, kernel_size = 3, batchnorm = True):
"""Function to add 2 convolutional layers with the parameters passed to it"""
# first layer
x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),\
kernel_initializer = 'he_normal', padding = 'same')(input_tensor)
if batchnorm:
x = BatchNormalization()(x)
x = Activation('relu')(x)
# second layer
x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),\
kernel_initializer = 'he_normal', padding = 'same')(x)
if batchnorm:
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
# + id="MBjAlFwQ2icb" colab_type="code" colab={}
#standard u-net arch
def get_unet(input_img, n_filters = 16, dropout = 0.1, batchnorm = True):
"""Function to define the UNET Model"""
# Contracting Path
c1 = conv2d_block(input_img, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)
p1 = MaxPooling2D((2, 2))(c1)
p1 = Dropout(dropout)(p1)
c2 = conv2d_block(p1, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)
p2 = MaxPooling2D((2, 2))(c2)
p2 = Dropout(dropout)(p2)
c3 = conv2d_block(p2, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)
p3 = MaxPooling2D((2, 2))(c3)
p3 = Dropout(dropout)(p3)
c4 = conv2d_block(p3, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)
p4 = MaxPooling2D((2, 2))(c4)
p4 = Dropout(dropout)(p4)
c5 = conv2d_block(p4, n_filters = n_filters * 16, kernel_size = 3, batchnorm = batchnorm)
# Expansive Path
u6 = Conv2DTranspose(n_filters * 8, (3, 3), strides = (2, 2), padding = 'same')(c5)
u6 = concatenate([u6, c4])
u6 = Dropout(dropout)(u6)
c6 = conv2d_block(u6, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)
u7 = Conv2DTranspose(n_filters * 4, (3, 3), strides = (2, 2), padding = 'same')(c6)
u7 = concatenate([u7, c3])
u7 = Dropout(dropout)(u7)
c7 = conv2d_block(u7, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)
u8 = Conv2DTranspose(n_filters * 2, (3, 3), strides = (2, 2), padding = 'same')(c7)
u8 = concatenate([u8, c2])
u8 = Dropout(dropout)(u8)
c8 = conv2d_block(u8, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)
u9 = Conv2DTranspose(n_filters * 1, (3, 3), strides = (2, 2), padding = 'same')(c8)
u9 = concatenate([u9, c1])
u9 = Dropout(dropout)(u9)
c9 = conv2d_block(u9, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)
outputs = Conv2D(1, (1, 1), activation='sigmoid')(c9)
model = Model(inputs=[input_img], outputs=[outputs])
return model
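As a sanity check on the architecture above, the contracting and expansive paths must produce matching spatial sizes at every skip connection. A minimal arithmetic sketch (illustrative only, independent of Keras):

```python
def unet_spatial_sizes(size=128, depth=4):
    """Spatial sizes along the contracting path, from c1 down to the bottleneck c5."""
    sizes = [size]
    for _ in range(depth):
        size //= 2               # each MaxPooling2D((2, 2)) halves the size
        sizes.append(size)
    return sizes

down = unet_spatial_sizes(128)                  # [128, 64, 32, 16, 8]
up = [down[-1] * 2 ** k for k in range(1, 5)]   # stride-2 Conv2DTranspose doublings
assert down == [128, 64, 32, 16, 8]
assert up == down[-2::-1]   # each upsampled map matches its skip connection
```

This is why `concatenate([u6, c4])` and the following merges are shape-compatible for 128x128 inputs.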
# + id="ooyD0GT12qBA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="18c5af74-68b2-4234-af46-bfe32be57f16"
#compiling the model
input_img = Input((im_height, im_width, 1), name='img')
model = get_unet(input_img, n_filters=16, dropout=0.05, batchnorm=True)
model.compile(optimizer=Adam(), loss="binary_crossentropy", metrics=["accuracy"])
# + id="U15BKNgg202L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3535a259-bfa3-4c63-d657-8df827a10301"
model.summary()
# + id="ljiMpPUz22mG" colab_type="code" colab={}
#creating a checkpoint to save the best model
callbacks = [
EarlyStopping(patience=10, verbose=1),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('model-tgs-salt.h5', verbose=1, save_best_only=True, save_weights_only=True)
]
# + id="nA6D8Uzp26kC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="2f2224f8-c16c-42d0-ffdc-0c7cd82fe103"
#fit the data to the model by training it
results = model.fit(X_train, y_train, batch_size=32, epochs=50, callbacks=callbacks,\
validation_data=(X_valid, y_valid))
# + id="mZGUP0V-3268" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ae9210d2-2c72-40b5-819f-f296631d1532"
# !ls
# + id="rCdPO9-f4mjG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="602be0c1-d036-472b-e540-21195b031f23"
# Plot the model's training and validation loss
plt.figure(figsize=(8, 8))
plt.title("Learning curve")
plt.plot(results.history["loss"], label="loss")
plt.plot(results.history["val_loss"], label="val_loss")
plt.plot( np.argmin(results.history["val_loss"]), np.min(results.history["val_loss"]), marker="x", color="r", label="best model")
plt.xlabel("Epochs")
plt.ylabel("log_loss")
plt.legend();
# + id="ZqQMrTkO4sHD" colab_type="code" colab={}
# load the best model
model.load_weights('drive/My Drive/salt/train/model-tgs-salt.h5')
# + id="xPJkAsHa4vYt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="720411c0-3a05-460e-8e23-9b40c0e76dde"
# Evaluate on the validation set (should match the best validation log loss)
model.evaluate(X_valid, y_valid, verbose=1)
# + id="XQDUdibf4yqK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="9f354d1a-98a5-46e5-e0da-765b9ade32af"
# Predict on the training and validation sets
preds_train = model.predict(X_train, verbose=1)
preds_val = model.predict(X_valid, verbose=1)
# + id="xAcbcbse43RK" colab_type="code" colab={}
# Threshold predictions
preds_train_t = (preds_train > 0.5).astype(np.uint8)
preds_val_t = (preds_val > 0.5).astype(np.uint8)
# + id="okDMwVL55fz3" colab_type="code" colab={}
#function to plot the data and to check the predictions.
def plot_sample(X, y, preds, binary_preds, ix=None):
"""Function to plot the results"""
if ix is None:
ix = random.randint(0, len(X) - 1)
has_mask = y[ix].max() > 0
fig, ax = plt.subplots(1, 4, figsize=(20, 10))
ax[0].imshow(X[ix, ..., 0], cmap='seismic')
if has_mask:
ax[0].contour(y[ix].squeeze(), colors='k', levels=[0.5])
ax[0].set_title('Seismic')
ax[1].imshow(y[ix].squeeze())
ax[1].set_title('Salt')
ax[2].imshow(preds[ix].squeeze(), vmin=0, vmax=1)
if has_mask:
ax[2].contour(y[ix].squeeze(), colors='k', levels=[0.5])
ax[2].set_title('Salt Predicted')
ax[3].imshow(binary_preds[ix].squeeze(), vmin=0, vmax=1)
if has_mask:
ax[3].contour(y[ix].squeeze(), colors='k', levels=[0.5])
ax[3].set_title('Salt Predicted binary');
# + id="BSMJhq_y5hEB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="6168ef39-428c-47ab-f712-239ae9f2cbc1"
# Check if training data looks all right
plot_sample(X_train, y_train, preds_train, preds_train_t, ix=14)
# + id="gRcomL525mWl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="be2797af-29e8-4d28-d570-28fcb71db812"
# Check if valid data looks all right
plot_sample(X_valid, y_valid, preds_val, preds_val_t, ix=19)
# + id="5wAueaPR6Bnm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="48db5fbc-365c-4517-fa46-0b802e86b916"
# Plot a random sample from the validation set
plot_sample(X_valid, y_valid, preds_val, preds_val_t)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [DeepSphere]: a spherical convolutional neural network
# [DeepSphere]: https://github.com/SwissDataScienceCenter/DeepSphere
#
# [<NAME>](https://perraudin.info), [<NAME>](http://deff.ch), <NAME>, <NAME>
#
# # Figures for the paper
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# +
import os
import numpy as np
from scipy.interpolate import interp1d
from scipy import sparse
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib
import healpy as hp
import pygsp
import svgutils.compose as sc
import IPython.display as ipd
import hyperparameters
from deepsphere import utils, plot, models
# +
os.environ["CUDA_VISIBLE_DEVICES"] = ""
plt.rcParams['figure.figsize'] = (17, 5) # (9, 4) for matplotlib notebook
matplotlib.rcParams.update({'font.size': 10})
# -
pathfig = './figures/'
os.makedirs(pathfig, exist_ok=True)
# ## 1 Graph
# ### The full sphere
# +
fig = plt.figure(figsize=[8,6])
ax = fig.add_subplot(111, projection='3d')
G = utils.healpix_graph(nside=8, nest=True)
G.plotting.update(vertex_size=10)
G.plot(ax=ax,edges=False)
# Get rid of the ticks
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
# Get rid of the panes
ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
# Get rid of the spines
ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.set_title('Healpix sampling, Nside=8')
plt.savefig(pathfig+"healpix_sampling_8.pdf", bbox_inches='tight')
fig = plt.figure(figsize=[8,6])
ax = fig.add_subplot(111, projection='3d')
G = utils.healpix_graph(nside=4, nest=True)
G.plotting.update(vertex_size=20)
G.plot(ax=ax)
# Get rid of the ticks
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
# Get rid of the panes
ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
# Get rid of the spines
ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.set_title('Graph, full sphere, Nside=4')
plt.savefig(pathfig+"healpix_graph_4.pdf", bbox_inches='tight')
# -
# ### Half the sphere
# +
nside = 4
npoints = hp.nside2npix(nside)
indexes = hp.reorder(np.array(list(range(npoints))),n2r=True)[:npoints//2]
G = utils.healpix_graph(nside=nside, nest=True, indexes=indexes)
G.plotting['elevation']=90
G.plotting['azimuth']=0
G.plotting.update(vertex_size=50)
fig = plt.figure(figsize=[8,8])
ax = fig.add_subplot(111, projection='3d')
# plt.cm.Blues_r
# Highlight the node with a degree of 7 on the full sphere
G2 = utils.healpix_graph(nside=nside, nest=True)
snode = np.arange(0,G2.N)[G2.d==7]
sindex = set(indexes)
snode2 = [el for el in snode if el in sindex]
hl_index = [np.argmin(np.abs(indexes-el)) for el in snode2]
sig = np.zeros([G.N])
sig[hl_index]=1
G.plot_signal(1-sig, ax=ax,colorbar=False)
# G.plot(ax=ax)
# Get rid of the ticks
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
# Get rid of the panes
ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
# Get rid of the spines
ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
# Remove the title
# ax.set_title('Graph, half sphere, Nside=4')
ax.set_title('')
# Zoom in
c = 0.6
plt.axis([c*min(G.coords[:,0]), c*max(G.coords[:,0]), c*min(G.coords[:,1]), c*max(G.coords[:,1]) ])
fig.savefig(pathfig+"half_graph_{}.pdf".format(nside), bbox_inches='tight')
# -
# ## 2 Pooling
# +
order = 4
index = np.arange(hp.nside2npix(order)) + 1
mask = np.zeros_like(index, dtype=bool)
mask[:order**2] = 1
index *= mask
index = index.astype(float)
index[index==0] = hp.UNSEEN
hp.mollview(index, title='', nest=True, cbar=False,cmap=None, xsize=1600)
plt.savefig(pathfig+"pooling-order4.pdf", bbox_inches='tight')
order = 2
index = np.arange(hp.nside2npix(order)) + 1
mask = np.zeros_like(index, dtype=bool)
mask[:order**2] = 1
index *= mask
index = index.astype(float)
index[index==0] = hp.UNSEEN
hp.mollview(index, title='', nest=True, cbar=False,cmap=None, xsize=1600)
plt.savefig(pathfig+"pooling-order2.pdf", bbox_inches='tight')
order = 1
index = np.arange(hp.nside2npix(order)) + 1
mask = np.zeros_like(index, dtype=bool)
mask[:order**2] = 1
index *= mask
index = index.astype(float)
index[index==0] = hp.UNSEEN
hp.mollview(index, title='', nest=True, cbar=False,cmap=None, xsize=1600)
plt.savefig(pathfig+"pooling-order1.pdf", bbox_inches='tight')
index = np.array(list(range(12)))
hp.mollview(index, title='', nest=True, cbar=False,cmap=None, xsize=1600)
plt.savefig(pathfig+"12parts.pdf", bbox_inches='tight')
# -
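The hierarchical pooling visualized above works because HEALPix's NESTED ordering keeps the four children of each coarse pixel contiguous. A hedged numpy sketch of one pooling step (the reshape-and-reduce trick is an assumed implementation; DeepSphere's own pooling code may differ):

```python
import numpy as np

def healpix_pool(sig, reduce=np.mean):
    """Coarsen a NESTED-ordered HEALPix map by merging each group of 4 children."""
    assert sig.size % 4 == 0
    return reduce(sig.reshape(-1, 4), axis=1)

nside = 4
sig = np.arange(12 * nside**2, dtype=float)  # npix = 12 * nside**2 = 192
pooled = healpix_pool(sig)                   # one pooling step: nside 4 -> 2
assert pooled.size == 12 * (nside // 2)**2   # 48 coarse pixels
assert pooled[0] == 1.5                      # mean of children 0, 1, 2, 3
```

Passing `reduce=np.max` instead of `np.mean` gives the max-pooling variant.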
# ## 3 Fourier basis
#
# Let us display a few Fourier modes on the healpix map.
# +
n_eigenvectors = 16
G = utils.healpix_graph(nside=16, lap_type='normalized', nest=True, dtype=np.float64)
G.compute_fourier_basis(n_eigenvectors=n_eigenvectors)
fig = plt.figure(figsize=(8, 5))
cm = plt.cm.RdBu_r
cm.set_under('w')
l, m = 0, 0
lm = []
for idx in range(n_eigenvectors):
lm.append([l,m])
m += 1
if m > l:
l += 1
m = -l
ind = np.array([ 0, 1, 3, 2, 4, 5, 7, 6, 8, 10, 12, 9, 15, 14, 11, 13])
for idx in range(n_eigenvectors):
l,m = lm[ind[idx]]
hp.mollview(G.U[:, idx],
title='Mode {}: $\ell$={}, $|m|$={}'.format(idx, l, np.abs(m)),
nest=True,
sub=(np.sqrt(n_eigenvectors), np.sqrt(n_eigenvectors), idx+1),
max=np.max(np.abs(G.U[:, :n_eigenvectors])),
min=-np.max(np.abs(G.U[:, :n_eigenvectors])),
cbar=False,
cmap=cm)
hp.graticule(verbose=False)
plt.savefig(os.path.join(pathfig, "eigenvectors.pdf"), bbox_inches='tight')
# -
# ## 4 Convolution on graphs
# +
# taus = [5, 10, 20, 50]
taus = [5, 20, 50]
matplotlib.rcParams.update({'font.size': 14})
# fig, ax = plt.subplots(1,len(taus), figsize=(17, 4))
fig, ax = plt.subplots(1,len(taus), figsize=(12, 4))
for i,tau in enumerate(taus):
hf = pygsp.filters.Heat(G, tau=tau)
hf.plot(eigenvalues=False, sum=False, ax=ax[i])
ax[i].set_xlabel('Graph eigenvalues', fontsize=18)
if i != 0:
ax[i].set_ylabel('')
else:
ax[i].set_ylabel('Spectral response', fontsize=18)
ax[i].set_title('$t={}$'.format(tau), fontsize=22)
fig.tight_layout(rect=[0, 0.05, 1, 0.92])
plt.suptitle('Filter response in the graph spectral domain', fontsize=24)
plt.savefig(pathfig+"gaussian_filters_spectral.pdf", bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# +
hf = pygsp.filters.Heat(G,tau=taus)
def arcmin2rad(x):
return x / 60 / 360 * 2 * np.pi
def gaussian_smoothing(sig, sigma, nest=True):
if nest:
sig = hp.reorder(sig, n2r=True)
smooth = hp.sphtfunc.smoothing(sig, sigma=arcmin2rad(sigma))
if nest:
smooth = hp.reorder(smooth, r2n=True)
return smooth
_, center = plot.get_index_equator(hp.npix2nside(G.N), radius=20)
ind0 = center
sig = np.zeros(G.N)
sig[ind0] = 1
conv = hf.analyze(sig)
fig = plt.figure(figsize=(12, 5))
rel_diff = []
matplotlib.rcParams.update({'font.size': 18})
cm = plt.cm.seismic
# cm = plt.cm.jet
cm.set_under('w')
m = 0
#[315, 465, 670, 1080]
for i, (tau, sigma) in enumerate(zip(taus, [315, 670, 1080])):
with utils.HiddenPrints():
smooth = gaussian_smoothing(sig, sigma, nest=True)
m = max(m, max(smooth))
hp.mollview(conv[:, i],
title='$t={}$'.format(tau),
nest=True,
min=-m, max=m,
cbar=False,
rot=(180,0,180),
sub=(2, len(taus), i+1),
cmap=cm)
hp.mollview(smooth,
title='$\sigma={}$'.format(sigma),
nest=True,
min=-m, max=m,
cbar=False,
rot=(180,0,180),
sub=(2,len(taus),i+len(taus)+1),
cmap=cm)
diff = (conv[:, i]-smooth)
rel_diff.append(np.linalg.norm(diff)/np.linalg.norm(smooth))
# hp.mollview(diff,
# title='',
# nest=True,
# cbar=False,
# sub=(3, len(taus), i+2*len(taus)+1))
with utils.HiddenPrints():
hp.graticule();
print(rel_diff)
plt.savefig(pathfig+"gaussian_filters_sphere.pdf", bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# +
hf = pygsp.filters.Heat(G,tau=taus)
order = 20
matplotlib.rcParams.update({'font.size': 20})
fig = plt.figure( figsize=(12, 5.5))
plot.plot_filters_gnomonic(filters=hf,order=order, title='', graticule=True)
plt.suptitle('Gnomonic projection of a convolved delta', fontsize=27)
plt.savefig(pathfig+"gaussian_filters_gnomonic.pdf", bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# -
matplotlib.rcParams.update({'font.size': 14})
fig = plt.figure( figsize=(12, 4))
plot.plot_filters_section(hf, order=order, xlabel='', ylabel='', title='', marker='o')
plt.suptitle('Section of a convolved delta', fontsize=22)
plt.savefig(pathfig+"gaussian_filters_section.pdf", bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
plot.plot_index_filters_section(hf,order=order)
plt.savefig(pathfig+"index_plotting_order{}_nside16.pdf".format(order), bbox_inches='tight')
# ## 5 Experiment results
# +
sigma=3
deepsphere_result_fcn = np.load('results/deepsphere/deepsphere_results_list_sigma{}_FCN.npz'.format(sigma))['data'][-15:]
deepsphere_result_cnn = np.load('results/deepsphere/deepsphere_results_list_sigma{}_CNN.npz'.format(sigma))['data'][-15:]
hist_result = np.load('results/histogram/histogram_results_list_sigma{}.npz'.format(sigma))['data'][-15:]
psd_result = np.load('results/psd/psd_results_list_sigma{}.npz'.format(sigma))['data']
def get_xy(result, order):
x = []
y = []
for d in result:
if d[0]==order:
x.append(d[1])
y.append(d[2])
x = np.array(x)
y = np.array(y)
a = np.argsort(x)
x = x[a]
y = y[a]
return x, y
# -
for order in[1, 2, 4]:
x_hist, y_hist = get_xy(hist_result, order)
x_deepsphere_fcn, y_deepsphere_fcn = get_xy(deepsphere_result_fcn, order)
x_deepsphere_cnn, y_deepsphere_cnn = get_xy(deepsphere_result_cnn, order)
x_psd, y_psd = get_xy(psd_result, order)
acc_hist = (1-y_hist)*100
acc_deepsphere_fcn = (1-y_deepsphere_fcn)*100
acc_deepsphere_cnn = (1-y_deepsphere_cnn)*100
acc_psd = (1-y_psd)*100
plt.figure(figsize=[4,3])
plt.plot(x_deepsphere_fcn, acc_deepsphere_fcn,'g.-', label='HealPixNet (FCN variant)')
plt.plot(x_deepsphere_cnn, acc_deepsphere_cnn,'g.--', label='HealPixNet (CNN variant)')
plt.plot(x_psd, acc_psd,'b.-', label='PSD + linear SVM')
plt.plot(x_hist, acc_hist,'r.-', label='Histogram + linear SVM')
plt.legend(loc=3, prop={'size': 12})
plt.xlabel('Relative noise level')
plt.ylabel('Accuracy in %')
plt.title('Order {}'.format(order))
plt.savefig(pathfig+"result_order{}.pdf".format(order), bbox_inches='tight')
# +
# deepsphere_result_params = np.load('results/deepsphere/deepsphere_results_list_sigma{}_params.npz'.format(sigma))['data']
# +
# def make_tab(order, results):
# print('-'*48)
# print('| {} | {} |'.format('Network'.ljust(30),'Accuracy % '))
# print('-'*48)
# for result in results:
# if int(result[0])==int(order):
# print('| {} | {:0.2f} |'.format(result[3].ljust(30), 100*(1-float(result[2]))))
# print('-'*48)
# make_tab(4, deepsphere_result_params)
# +
# make_tab(2, deepsphere_result_params)
# -
# ## 6 Experiment data
orders = [1,2,4]
order_max = max(orders)
npix = hp.nside2npix(order_max)
index = np.zeros([npix])
for order in orders:
index[:order**2] = index[:order**2]+1
index = index.astype(float)
index[index==0] = hp.UNSEEN
hp.mollview(index, title='', nest=True, cbar=False,cmap=None, xsize=1600)
plt.savefig(pathfig+"part_sphere.pdf", bbox_inches='tight')
# +
def make_ball(map_test1, cmap=plt.cm.gray_r, sub=None, vmin =-0.5, vmax=1.5):
cmap.set_under('w')
cmap.set_bad('lightgray')
dot_size=10
rot = (0,30,345)
hp.visufunc.orthview(map=map_test1, half_sky=True, title='', rot=rot, cmap=cmap, cbar=False, hold=True, nest=True, min=vmin, max=vmax, notext=True, sub=sub);
theta, phi = hp.pix2ang(hp.npix2nside(len(map_test1)), range(len(map_test1)), nest=True);
hp.projscatter(theta, phi, c='k', s=dot_size);
hp.graticule();
hp.graticule(dmer=360,dpar=360,alpha=1, rot=(0,0,15), local=True);
hp.graticule(dmer=360,dpar=360,alpha=1, rot=(0,0,195), local=True);
orders = [1,2,4]
order_max = max(orders)
npix = hp.nside2npix(order_max)
index = np.zeros([npix])
for order in orders:
index[:order**2] = index[:order**2]+1
index = index.astype(float)
index[index==0] = hp.UNSEEN
make_ball(index, cmap=plt.cm.RdBu_r, vmin=0, vmax=np.max(index))
plt.savefig(pathfig+"part_sphere2.pdf", bbox_inches='tight')
# -
# ### Plotting some data
img1 = hp.read_map('data/same_psd/kappa_omega_m_0p31_s_2.fits')
img2 = hp.read_map('data/same_psd/kappa_omega_m_0p26_s_2.fits')
img1 = hp.reorder(img1, r2n=True)
img2 = hp.reorder(img2, r2n=True)
Nside = 1024
img1 = hp.ud_grade(img1, nside_out=Nside, order_in='NESTED')
img2 = hp.ud_grade(img2, nside_out=Nside, order_in='NESTED')
cmin = min(np.min(img1), np.min(img2))
cmax = max(np.max(img1), np.max(img2))
cmax = -2*cmin
# +
# _ = plt.hist(img1,bins=100)
# +
# hp.mollview(img1, title='Map 1, omega_m=0.31, pk_norm=0.82, h=0.7', nest=True, min=cmin, max=cmax)
# hp.mollview(img2, title='Map 2, omega_m=0.26, sigma_8=0.91, h=0.7', nest=True, min=cmin, max=cmax)
# +
def arcmin2rad(x):
return x / 60 / 360 * 2 * np.pi
def gaussian_smoothing(sig, sigma, nest=True):
if nest:
sig = hp.reorder(sig, n2r=True)
smooth = hp.sphtfunc.smoothing(sig, sigma=arcmin2rad(sigma))
if nest:
smooth = hp.reorder(smooth, r2n=True)
return smooth
sigma=3
# -
fig = plot.zoom_mollview(img1, cmin=cmin, cmax=cmax)
plt.suptitle('Sample from class 1, $\Omega_m=0.31$, $\sigma_8=0.82$',y=0.78, fontsize=18);
# omega_m=0.31, pk_norm=0.82, h=0.7
fig = plot.zoom_mollview(gaussian_smoothing(img1,sigma), cmin=cmin, cmax=cmax)
plt.suptitle('Smoothed map from class 1, $\Omega_m=0.31$, $\sigma_8=0.82$',y=0.78, fontsize=18)
plt.savefig(pathfig+"smooth_map_class_1.pdf", bbox_inches='tight')
# omega_m=0.31, pk_norm=0.82, h=0.7
fig = plot.zoom_mollview(img2, cmin=cmin, cmax=cmax)
_ = plt.suptitle('Sample from class 2, $\Omega_m=0.26$, $\sigma_8=0.91$',y=0.78, fontsize=18)
# omega_m=0.26, sigma_8=0.91, h=0.7
fig = plot.zoom_mollview(gaussian_smoothing(img2, sigma), cmin=cmin, cmax=cmax)
_ = plt.suptitle('Smoothed map from class 2, $\Omega_m=0.26$, $\sigma_8=0.91$',y=0.78, fontsize=18)
plt.savefig(pathfig+"smooth_map_class_2.pdf", bbox_inches='tight')
# omega_m=0.26, sigma_8=0.91, h=0.7
# ## 7 PSD plots
sigma = 3
compute = False
if compute:
def psd(x):
'''Spherical Power Spectral Densities'''
hatx = hp.map2alm(hp.reorder(x, n2r=True))
return hp.alm2cl(hatx)
data_path = 'data/same_psd/'
ds1 = np.load(data_path+'smoothed_class1_sigma{}.npz'.format(sigma))['arr_0']
ds2 = np.load(data_path+'smoothed_class2_sigma{}.npz'.format(sigma))['arr_0']
psds_img1 = [psd(img) for img in ds1]
psds_img2 = [psd(img) for img in ds2]
np.savez('results/psd_data_sigma{}'.format(sigma), psd_class1=psds_img1, psd_class2=psds_img2)
else:
psds_img1 = np.load('results/psd_data_sigma{}.npz'.format(sigma))['psd_class1']
psds_img2 = np.load('results/psd_data_sigma{}.npz'.format(sigma))['psd_class2']
# +
matplotlib.rcParams.update({'font.size': 14})
l = np.array(range(len(psds_img1[0])))
plot.plot_with_std(l,np.stack(psds_img1)*l*(l+1), label='class 1, $\Omega_m=0.31$, $\sigma_8=0.82$, $h=0.7$', color='r')
plot.plot_with_std(l,np.stack(psds_img2)*l*(l+1), label='class 2, $\Omega_m=0.26$, $\sigma_8=0.91$, $h=0.7$', color='b')
plt.legend(fontsize=16);
plt.xlim([11, np.max(l)])
plt.ylim([1e-6, 5e-4])
plt.yscale('log')
plt.xscale('log')
plt.xlabel('$\ell$: spherical harmonic index', fontsize=18)
plt.ylabel('$C_\ell \cdot \ell \cdot (\ell+1)$', fontsize=18)
plt.title('Power Spectrum Density, 3-arcmin smoothing, noiseless, Nside=1024', fontsize=18);
plt.savefig(pathfig+"psd_sigma{}.pdf".format(sigma), bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# -
# ## 8 Checking SVM sims
# +
sigma = 3
order = 2
sigma_noise = 1.5
# path = 'results/psd/'
# name = '40sim_1024sides_{0}arcmin_{2:.1f}noise_{1}order.npz'.format(sigma, order, sigma_noise)
path = 'results/histogram/'
# name = '40sim_1024sides_{2}noise_{1}order_{0}sigma.npz'.format(sigma, order, sigma_noise)
name = '40sim_1024sides_{2}noise_{1}order_{0}sigma.npz'.format(sigma, order, sigma_noise)
filepath = os.path.join(path,name)
data = np.load(filepath)['arr_0']
# +
matplotlib.rcParams.update({'font.size': 24})
plt.plot(data[0], data[1], linewidth=4)
plt.plot(data[0], data[2], linewidth=4)
plt.plot(data[0][-1], data[3],'x', markersize=10)
plt.legend(['Training','Validation', 'Testing'])
plt.xlabel('Number of training samples')
plt.ylabel('Error rate in %')
# plt.title('Error for the histogram + SVM, order: {}, noise level: {}'.format(order, sigma_noise))
plt.savefig(pathfig+"hist_error_order{}_noise{}.pdf".format(order,sigma_noise), bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# -
# ## 9 Plotting the filters
Nside = 1024
order = 2 # 1,2,4,8 correspond to 12,48,192,768 parts of the sphere.
sigma_noise = 2
sigma = 3
ntype = 'FCN'
EXP_NAME = '40sim_{}sides_{:0.1f}noise_{}order_{}sigma_{}'.format(Nside, sigma_noise, order, sigma, ntype)
# +
params = hyperparameters.get_params(12*40*0.8*order*order, EXP_NAME, order, Nside, ntype)
model = models.deepsphere(**params)
# -
folder = 'figures/filters/{}/'.format(EXP_NAME)
os.makedirs(folder, exist_ok=True)
layer = 5
model.plot_chebyshev_coeffs(layer, ind_in=range(5), ind_out=range(10))
plt.savefig('{}/layer{}_coefficients.png'.format(folder, layer), dpi=100)
# +
model.plot_filters_spectral(layer, ind_in=range(5), ind_out=range(10));
plt.savefig('{}/layer{}_spectral.png'.format(folder, layer), dpi=100)
# +
matplotlib.rcParams.update({'font.size': 16})
model.plot_filters_section(layer, ind_in=range(6), ind_out=range(4), title='');
plt.savefig(pathfig+"section_filter_last.pdf".format(order), bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# -
plt.rcParams['figure.figsize'] = (8, 12)
model.plot_filters_gnomonic(layer, ind_in=range(6), ind_out=range(4), title='');
plt.savefig(pathfig+"gnonomic_filter_last.pdf".format(order), bbox_inches='tight', dpi=100)
# +
plt.rcParams['figure.figsize'] = (17, 5) # (9, 4) for matplotlib notebook
matplotlib.rcParams.update({'font.size': 16})
model.plot_filters_section(1, ind_out=range(4), title='');
fig.savefig('{}/layer{}_section.png'.format(folder, layer), dpi=100)
plt.savefig(pathfig+"section_filter_first.pdf".format(order), bbox_inches='tight')
matplotlib.rcParams.update({'font.size': 10})
# -
# ## 10 Border effect of the convolution (part of the sphere)
# +
matplotlib.rcParams['image.cmap'] = 'RdBu_r'
nside = 16
indexes = range(nside**2)
G = utils.healpix_graph(nside=nside, indexes=indexes)
G.estimate_lmax()
tau = 30
hf = pygsp.filters.Heat(G, tau=tau)
index1 = 170
index2 = 64+2*16+2*4+2
sig1 = np.zeros([nside**2])
sig2 = np.zeros([nside**2])
sig1[index1] = 1
sig2[index2] = 1
sig1 = hf.filter(sig1)
sig2 = hf.filter(sig2)
m = max(np.max(sig1), np.max(sig2))
limits = [-m, m]
# sig = np.arange(nside**2)
fig = plt.figure(figsize=[12,6])
ax1 = fig.add_subplot(121, projection='3d')
G.plot_signal(sig1, ax=ax1, colorbar=False,limits=limits)
# Get rid of the ticks
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_zticks([])
# Get rid of the panes
ax1.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax1.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax1.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
# Get rid of the spines
ax1.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax1.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax1.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
# Zoom
a = 0.35
ax1.set_xlim(-a,a)
ax1.set_ylim(-a,a)
# Remove the title
# ax.set_title('Graph, half sphere, Nside=4')
ax1.set_title('', fontsize=16)
ax1.view_init(elev=10, azim=45)
ax2 = fig.add_subplot(122, projection='3d')
G.plot_signal(sig2, ax=ax2, limits=limits, colorbar=False)
# Get rid of the ticks
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_zticks([])
# Get rid of the panes
ax2.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax2.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax2.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
# Get rid of the spines
ax2.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax2.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
ax2.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
# Zoom
a = 0.35
ax2.set_xlim(-a,a)
ax2.set_ylim(-a,a)
# Remove the title
# ax.set_title('Graph, half sphere, Nside=4')
ax2.set_title('', fontsize=16)
ax2.view_init(elev=10, azim=45)
plt.tight_layout(pad=0)
plt.savefig(pathfig+"border_effects.pdf", bbox_inches='tight')
# +
# for nside in [16, 32, 64, 128, 256, 512, 1024, 2048]:
# print('Time for nside: {}'.format(nside))
# %timeit G = utils.healpix_graph(nside=nside)
# -
# ## 11 Filtering speed
#
# Numbers measured in the `spherical_vs_graph` notebook.
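The graph timings below rely on Chebyshev filtering, which only needs K repeated (sparse) matrix-vector products with the Laplacian. A minimal dense-numpy sketch of the recurrence (the toy Laplacian and coefficients are illustrative, not the paper's; `lmax` is assumed known, e.g. 2 for a normalized Laplacian):

```python
import numpy as np

def chebyshev_filter(L, x, coeffs, lmax=2.0):
    """Evaluate y = sum_k coeffs[k] * T_k(L_scaled) @ x via the Chebyshev recurrence."""
    n = L.shape[0]
    L_scaled = (2.0 / lmax) * L - np.eye(n)   # map the spectrum into [-1, 1]
    t_prev, t_curr = x, L_scaled @ x          # T_0 x and T_1 x
    y = coeffs[0] * t_prev + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * (L_scaled @ t_curr) - t_prev
        y = y + c * t_curr
    return y

# Toy 4-node path-graph Laplacian; the checks below are purely algebraic.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
x = np.array([1., 0., 0., 0.])
assert np.allclose(chebyshev_filter(L, x, [1.0, 0.0]), x)                    # T_0 only
assert np.allclose(chebyshev_filter(L, x, [0.0, 1.0]), (L - np.eye(4)) @ x)  # T_1 only
```

With a sparse Laplacian each product costs O(|E|), which is why the graph curve scales roughly linearly in the number of pixels.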
# +
results = np.load('results/filtering_speed.npz')
fig, ax = plt.subplots(figsize=(5.9, 3.2))
npix = [hp.nside2npix(nside) for nside in results['nsides']]
ax.loglog(npix, results['times_graph'], '-', marker='.')
ax.loglog(npix, results['times_sphere'], '--', marker='.')
#ax.loglog(npix, np.array(npix)/1e6, ':', color='#808080')
#ax.loglog(npix, (np.array(npix)/1e6)**1.5, ':', color='#808080')
labels = ['Graph, poly. order K={}'.format(order) for order in results['orders']]
labels += ['Sph. harm., $\ell_{{max}}$ = {}$N_{{side}}$'.format(lm) for lm in results['lmax']]
#labels += [r'Asymptotic $\mathcal{O}(N_{side})$']
#labels += [r'Asymptotic $\mathcal{O}(N_{side}^{3/2})$']
ax.legend(labels, loc='upper left')
for i, nside in enumerate(results['nsides']):
x = npix[i]
y = results['times_sphere'][i, -1] * 2
ax.text(x, y, '$N_{{side}}$\n{}'.format(nside), horizontalalignment='center')
ax.set_ylim(0.6 * results['times_graph'].min(), 15 * results['times_sphere'].max())
#ax.set_xlim(0.5 * min(npix), 2 * max(npix))
ax.set_xlabel('Number of pixels')
ax.set_ylabel('Processing time [s]')
fig.tight_layout()
fig.savefig(os.path.join(pathfig, 'filtering_speed.pdf'))
# -
# ## 12 Group of eigenvalues
# +
n_eigenvalues = 50
nside = 16
graph = utils.healpix_graph(nside=nside, lap_type='normalized', nest=True, dtype=np.float64)
graph.compute_fourier_basis(n_eigenvectors=n_eigenvalues)
fig, ax = plt.subplots(figsize=(6, 2.5))
ax.plot(graph.e, '.-')
idx = 1
xticks = [idx]
for l in range(1, 7):
ax.text(idx + l - 2.3, graph.e[idx + l] + 0.005, '$\ell$ = {}'.format(l))
idx += 2*l + 1
xticks.append(idx)
ax.set_xlabel('Eigenvalue $\lambda$')
ax.set_ylabel('Value')
ax.set_xticks(xticks)
fig.tight_layout()
fig.savefig(os.path.join(pathfig, 'graph_eigenvalues.pdf'))
# -
# ## 13 Correspondence of subspaces
# +
nside = 16
lmax = 8
n_harmonics = np.cumsum(np.arange(1, 2*lmax+2, 2))
harmonics = utils.compute_spherical_harmonics(nside, lmax=lmax)
graph = utils.healpix_graph(nside, lap_type='normalized', nest=True, dtype=np.float64)
graph.compute_fourier_basis(n_eigenvectors=n_harmonics[-1])
C = harmonics.T @ graph.U
fig, ax = plt.subplots(figsize=(5, 4))
im = ax.imshow(np.abs(C), cmap=plt.cm.gist_heat_r, aspect='equal')
ax.set_xlabel('Graph Fourier modes')
ax.set_ylabel('Spherical harmonics')
ax.set_xticks(n_harmonics - 0.5)
ax.set_yticks(n_harmonics - 0.5)
ax.set_xticklabels(n_harmonics)
ax.set_yticklabels(n_harmonics)
for l in range(4, lmax+1):
ax.text(n_harmonics[l-1] + l - 3.9, n_harmonics[l-1] - 1, '$\ell={}$'.format(l))
ax.grid(True)
fig.colorbar(im)
fig.tight_layout()
fig.savefig(os.path.join(pathfig, 'subspace_harmonics_eigenvectors.pdf'))
# -
# ## 14 Convolution basis: Chebyshev polynomials vs monomials
#
# 1. Orthogonality of the basis in the spectral domain.
# 1. Orthogonality of the basis in the vertex domain.
# 1. Expected shape of the filters given a distribution over the coefficients.
#
# Todo:
# * compute the expectation analytically
# +
matplotlib.rcParams.update({'font.size': 10})
# Order of Chebyshev polynomials. Degree of monomials
degree = 7
n_points = 1000
graph = pygsp.graphs.Path(64)
# Irregular graph. Otherwise the Chebyshev polynomials are exactly orthogonal.
graph.W.data = 0.5 + 0.1 * np.random.uniform(size=graph.W.data.shape)
graph = pygsp.graphs.Graph(pygsp.utils.symmetrize(graph.W))
#plt.imshow(graph.W.toarray())
graph.estimate_lmax()
graph.set_coordinates('line1D')
fig = plt.figure(figsize=(8, 5))
# Chebyshev
#x = np.linspace(0, 1.05*graph.lmax, 1000)
x = np.linspace(0, graph.lmax, n_points)
coefficients = np.identity(degree)
f = pygsp.filters.Chebyshev(graph, coefficients)
Y = f.evaluate(x)
ax = plt.subplot2grid((2, 3), (0, 0), colspan=2)
ax.plot(x / graph.lmax * 2 - 1, Y.T)
ax.legend(['k={}'.format(k) for k in range(degree)])
ax.set_xlabel('Eigenvalue $\lambda$')
ax.set_ylabel('Polynomial $T_k(\lambda)$')
ax.set_title('Chebyshev basis (spectral domain)')
ax.set_xticks([-1, 0, 1])
ax.set_yticks([-1, 0, 1])
ax.grid()
C = Y @ Y.T
ax = plt.subplot2grid((2, 3), (0, 2))
im = ax.imshow(np.abs(C), cmap=plt.cm.gist_heat_r)
fig.colorbar(im, ax=ax)
ax.set_title('Cross-correlation')
# Monomials
x = np.linspace(-1, 1, n_points)
Y = np.empty((degree, len(x)))
for k in range(degree):
Y[k] = x**k
ax = plt.subplot2grid((2, 3), (1, 0), colspan=2)
plt.plot(x, Y.T)
ax.legend(['k={}'.format(k) for k in range(degree)])
ax.set_xlabel('Eigenvalue $\lambda$')
ax.set_ylabel('Monomial $\lambda^k$')
ax.set_title('Monomial basis (spectral domain)')
ax.set_xticks([-1, 0, 1])
ax.set_yticks([-1, 0, 1])
ax.grid()
C = Y @ Y.T
ax = plt.subplot2grid((2, 3), (1, 2))
im = ax.imshow(np.abs(C), cmap=plt.cm.gist_heat_r)
fig.colorbar(im, ax=ax)
ax.set_title('Cross-correlation')
fig.tight_layout()
fig.savefig(os.path.join(pathfig, 'polynomial_bases_spectrum.pdf'))
# +
fig = plt.figure(figsize=(8, 5))
# Chebyshev
Y = f.localize(graph.N // 2)
ax = plt.subplot2grid((2, 3), (0, 0), colspan=2, fig=fig)
for k in range(degree):
graph.plot_signal(Y[k], ax=ax)
ax.legend(['k={}'.format(k) for k in range(degree)])
ax.set_ylim(1.1*Y.min(), 1.1*Y.max())
ax.set_xlim(graph.N // 2 - degree, graph.N // 2 + degree)
ax.set_xticks(np.arange(graph.N // 2 - degree + 1, graph.N // 2 + degree, 2))
ax.set_xticklabels('$v_{{{}}}$'.format(i) for i in range(- degree + 1, degree, 2))
ax.set_title('Chebyshev basis (localized on vertex $v_0$)')
ax.set_ylabel('($T_k(L) \delta_0)_j$')
C = Y @ Y.T
ax = plt.subplot2grid((2, 3), (0, 2), fig=fig)
im = ax.imshow(np.abs(C), cmap=plt.cm.gist_heat_r)
fig.colorbar(im, ax=ax)
ax.set_title('Cross-correlation')
# Monomials
Y = np.empty((degree, graph.N))
s = np.zeros(graph.N)
s[graph.N // 2] = 1
L = graph.L / graph.lmax * 2 - sparse.identity(graph.N)
for k in range(degree):
Y[k] = L**k @ s
ax = plt.subplot2grid((2, 3), (1, 0), colspan=2, fig=fig)
for k in range(degree):
graph.plot_signal(Y[k], ax=ax)
ax.legend(['k={}'.format(k) for k in range(degree)])
ax.set_ylim(1.1*Y.min(), 1.1*Y.max())
ax.set_xlim(graph.N // 2 - degree, graph.N // 2 + degree)
ax.set_xticks(np.arange(graph.N // 2 - degree + 1, graph.N // 2 + degree, 2))
ax.set_xticklabels('$v_{{{}}}$'.format(i) for i in range(- degree + 1, degree, 2))
ax.set_title('Monomial basis (localized on vertex $v_0$)')
ax.set_ylabel('($L^k \delta_0)_j$')
C = Y @ Y.T
ax = plt.subplot2grid((2, 3), (1, 2), fig=fig)
im = ax.imshow(np.abs(C), cmap=plt.cm.gist_heat_r)
fig.colorbar(im, ax=ax)
ax.set_title('Cross-correlation')
fig.tight_layout()
fig.savefig(os.path.join(pathfig, 'polynomial_bases_vertex.pdf'))
# +
degrees = [5, 20, 100]
n_realizations = int(1e4)
n_points = 100
x = np.linspace(-1, 1, n_points)
fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8.5, 3))
for degree in degrees:
coefficients = np.random.normal(0, 1, size=(degree, n_realizations))
#coefficients = np.random.uniform(-1, 1, size=(degree, n_realizations))
# Monomials.
y = np.zeros((n_realizations, n_points))
for k, c in enumerate(coefficients):
y += np.outer(c, x**k)
plot.plot_with_std(x, y, ax=axes[0])
# Chebyshev polynomials.
graph = pygsp.graphs.Path(n_points)
graph.estimate_lmax()
filters = pygsp.filters.Chebyshev(graph, coefficients)
y = filters.evaluate((x + 1) / 2 * graph.lmax)
plot.plot_with_std(x, y, ax=axes[1])
legend = ['degree $K={}$'.format(degree) for degree in degrees]
axes[0].legend(legend, loc='upper center')
axes[1].legend(legend, loc='upper center')
axes[0].set_xlabel('Scaled eigenvalue $x$')
axes[1].set_xlabel('Scaled eigenvalue $x$')
axes[0].set_ylabel(r'Expected filter value $\mathbf{E}_\theta[ g_\theta(x) ]$')
axes[0].set_title('Expected sum of monomials')
axes[1].set_title('Expected sum of Chebyshev polynomials')
axes[0].text(0, -7, r'$g_\theta(x) = \sum_{k=0}^K \theta_k x^k$', horizontalalignment='center')
axes[1].text(0, -6, r'$g_\theta(x) = \sum_{k=0}^K \theta_k T_k(x)$', horizontalalignment='center')
axes[1].text(0, -9.5, r'$T_k(x) = 2xT_{k-1}(x) - T_{k-2}(x), T_1(x) = x, T_0(x) = 1$', horizontalalignment='center')
fig.tight_layout()
fig.savefig(os.path.join(pathfig, 'expected_filters.pdf'))
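# As a quick sanity check of the Chebyshev recurrence shown in the figure, the recursively built polynomials agree with the closed form $T_k(x) = \cos(k \arccos x)$ on $[-1, 1]$ (a small numpy sketch):

```python
import numpy as np

x = np.linspace(-1, 1, 101)
T = [np.ones_like(x), x.copy()]  # T_0(x) = 1, T_1(x) = x
for k in range(2, 8):
    T.append(2 * x * T[k - 1] - T[k - 2])  # T_k = 2x T_{k-1} - T_{k-2}

# On [-1, 1] the recurrence matches the closed form T_k(x) = cos(k arccos x)
closed_form = [np.cos(k * np.arccos(x)) for k in range(8)]
```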
# +
# x = np.arange(-1,1,0.001)
# order = 20
# c = np.random.randn(order,100)
# f = []
# for coeffs in c.T:
# s = 0*x
# for o, coeff in enumerate(coeffs):
# s += coeff*(x**o)
# f.append(s)
# f = np.array(f)
# ax = plot.plot_with_std(x, f)
# ax.set_title('Monomial - order {}'.format(order));
# +
# x = np.arange(-1,1,0.001)
# order = 20
# c = np.random.randn(order,100)
# f = []
# p = []
# p.append(x**0)
# p.append(x**1)
# for o in range(2, order):
# p.append(2*x*p[o-1]-p[o-2])
# for coeffs in c.T:
# s = x**0
# for o, coeff in enumerate(coeffs):
# s += coeff*p[o]
# f.append(s)
# f = np.array(f)
# ax = plot.plot_with_std(x, f)
# ax.set_title('Chebyshev - order {}'.format(order));
# +
x = np.arange(-1,1,0.001)
order = 20
p = []
p.append(x**0)
p.append(x**1)
for o in range(2,order):
p.append(2*x*p[o-1]-p[o-2])
for o in range(order):
plt.plot(x, p[o])
# -
for o in range(5,12):
plt.plot(x, np.sum(np.array(p[0:o])**2/(o+0.5)*2,axis=0))
plt.plot(x, x**0)
o = 10
plt.plot(x, np.sum(np.array(p[0:o])**2,axis=0))
plt.plot(x, (o+0.5)/2*(x**0))
| figures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.11 64-bit
# language: python
# name: python3
# ---
# # Rule Controller example
# First we import all the required libraries. Remember to always import `sinergym` even if it appears unused, because it is needed to register the environments.
from typing import List, Any, Sequence
from sinergym.utils.common import get_season_comfort_range
from datetime import datetime
import gym
import numpy as np
import sinergym
# Now we can define the environment we want to use; in our case, the Eplus demo.
# + pycharm={"name": "#%%\n"}
env = gym.make('Eplus-demo-v1')
# -
# For the rule-based controller, have a look at the already defined controllers; there is one for each building. Since the demo is based on the 5Zone building, we extend that controller and define the action function we desire. Feel free to play with the function to define your own actions.
# + pycharm={"name": "#%%\n"}
from sinergym.utils.controllers import RBC5Zone
class MyRuleBasedController(RBC5Zone):
def act(self, observation: List[Any]) -> Sequence[Any]:
"""Select action based on outdoor air drybulb temperature and daytime.
Args:
observation (List[Any]): Perceived observation.
Returns:
Sequence[Any]: Action chosen.
"""
obs_dict = dict(zip(self.variables['observation'], observation))
out_temp = obs_dict['Site Outdoor Air Drybulb Temperature(Environment)']
day = int(obs_dict['day'])
month = int(obs_dict['month'])
hour = int(obs_dict['hour'])
year = int(obs_dict['year'])
summer_start_date = datetime(year, 6, 1)
summer_final_date = datetime(year, 9, 30)
current_dt = datetime(year, month, day)
# Get season comfort range
        if summer_start_date <= current_dt <= summer_final_date:
            season_comfort_range = self.setpoints_summer
        else:
            season_comfort_range = self.setpoints_winter
season_comfort_range = get_season_comfort_range(1991,month, day)
# Update setpoints
in_temp = obs_dict['Zone Air Temperature(SPACE1-1)']
current_heat_setpoint = obs_dict[
'Zone Thermostat Heating Setpoint Temperature(SPACE1-1)']
current_cool_setpoint = obs_dict[
'Zone Thermostat Cooling Setpoint Temperature(SPACE1-1)']
new_heat_setpoint = current_heat_setpoint
new_cool_setpoint = current_cool_setpoint
if in_temp < season_comfort_range[0]:
new_heat_setpoint = current_heat_setpoint + 1
new_cool_setpoint = current_cool_setpoint + 1
elif in_temp > season_comfort_range[1]:
new_cool_setpoint = current_cool_setpoint - 1
new_heat_setpoint = current_heat_setpoint - 1
action = (new_heat_setpoint, new_cool_setpoint)
        if current_dt.weekday() >= 5 or hour >= 22 or hour < 6:
#weekend or night
action = (18.33, 23.33)
return action
# + [markdown] pycharm={"name": "#%% md\n"}
# Now that we have our controller ready we can use it:
# + pycharm={"name": "#%%\n"}
# create rule-based controller
agent = MyRuleBasedController(env)
for i in range(1):
obs = env.reset()
rewards = []
done = False
current_month = 0
while not done:
action = agent.act(obs)
obs, reward, done, info = env.step(action)
rewards.append(reward)
if info['month'] != current_month: # display results every month
current_month = info['month']
print('Reward: ', sum(rewards), info)
print(
'Episode ',
i,
'Mean reward: ',
np.mean(rewards),
'Cumulative reward: ',
sum(rewards))
# -
# Always remember to close the environment:
# + pycharm={"name": "#%%\n"}
env.close()
# -
# .. note:: For more information about our defined controllers and how to create a new one, please visit our [Controller Documentation](https://jajimer.github.io/sinergym/compilation/html/pages/controllers.html)
| docs/compilation/html/pages/notebooks/rule_controller_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Online Tracking
#
# Given a list of images, we want to track players and the ball and gather their trajectories. Our model initializes several tracklets based on the detected boxes in the first image. In the following ones, the model links the boxes to the existing tracklets according to:
#
# 1. their distance measured by the embedding model,
# 2. their distance measured by bounding-box IoU (intersection over union)
#
# When the entire list of images is processed, we compute a homography for each image. We then apply each homography to the players' coordinates.
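# As a rough illustration of the second criterion, the IoU of two boxes in `(x1, y1, x2, y2)` format can be computed as follows (a minimal sketch for intuition, not the tracker's actual implementation):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

# Identical boxes give an IoU of 1.0, disjoint boxes 0.0; a tracker keeps a link only when the overlap is high enough.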
# !pip3 install --user tensorflow==2.2.0
# !pip3 install --user tensorflow-probability==0.11.0
# !pip3 install --user dm-sonnet==2.0.0
# ## Inputs
#
# Let's start by gathering a list of images:
# +
import mxnet as mx
import os
import numpy as np
import torch
import torch.nn.functional as F
import cv2
from matplotlib import pyplot as plt
from narya.utils.vizualization import visualize
from narya.tracker.full_tracker import FootballTracker
# -
# First, we initialize a 2d Field template:
template = cv2.imread('world_cup_template.png')
template = cv2.cvtColor(template, cv2.COLOR_BGR2RGB)
template = cv2.resize(template, (1280,720))
template = template/255.
# and then we create our list of images:
# +
"""
Images are ordered from 0 to 50:
"""
imgs_ordered = []
for i in range(0,51):
path = 'test_img/img_fullLeicester 0 - [3] Liverpool.mp4_frame_full_' + str(i) + '.jpg'
imgs_ordered.append(path)
img_list = []
for path in imgs_ordered:
if path.endswith('.jpg'):
image = cv2.imread(path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
img_list.append(image)
# -
# We can visualize the first image of our list:
image_0,image_25,image_50 = img_list[0],img_list[25],img_list[50]
print("Image shape: {}".format(image_0.shape))
visualize(image_0=image_0,
image_25=image_25,
image_50=image_50)
# ## Football Tracker
#
# We first need to create our tracker. This object gathers each and every one of our models:
#
# ```python3
# class FootballTracker:
# """Class for the full Football Tracker. Given a list of images, it allows to track and id each player as well as the ball.
# It also computes the homography at each given frame, and apply it to each player coordinates.
#
# Arguments:
# pretrained: Boolean, if the homography and tracking models should be pretrained with our weights or not.
# weights_homo: Path to weight for the homography model
# weights_keypoints: Path to weight for the keypoints model
# shape_in: Shape of the input image
#         shape_out: Shape of the output image
#         conf_tresh: Confidence threshold to keep tracked bounding boxes
#         track_buffer: Number of frames to keep in memory for tracking re-identification
#         K: Number of boxes to keep at each frame
# frame_rate: -
# Call arguments:
# imgs: List of np.array (images) to track
# split_size: if None, apply the tracking model to the full image. If its an int, the image shape must be divisible by this int.
# We then split the image to create n smaller images of shape (split_size,split_size), and apply the model
# to those.
# We then reconstruct the full images and the full predictions.
# results: list of previous results, to resume tracking
# begin_frame: int, starting frame, if you want to resume tracking
# verbose: Boolean, to display tracking at each frame or not
#         save_tracking_folder: Folder to save the tracking images
# template: Football field, to warp it with the computed homographies on to the saved images
# skip_homo: List of int. e.g.: [4,10] will not compute homography for frame 4 and 10, and reuse the computed homography
# at frame 3 and 9.
# """
#
# def __init__(
# self,
# pretrained=True,
# weights_homo=None,
# weights_keypoints=None,
# shape_in=512.0,
# shape_out=320.0,
# conf_tresh=0.5,
# track_buffer=30,
# K=100,
# frame_rate=30,
# ):
#
# self.player_ball_tracker = PlayerBallTracker(
# conf_tresh=conf_tresh, track_buffer=track_buffer, K=K, frame_rate=frame_rate
# )
#
# self.homo_estimator = HomographyEstimator(
# pretrained=pretrained,
# weights_homo=weights_homo,
# weights_keypoints=weights_keypoints,
# shape_in=shape_in,
# shape_out=shape_out,
# )
# ```
tracker = FootballTracker(frame_rate=24.7,track_buffer = 60)
# We now only have to call it on our list of images. We manually skip some failed homography estimations, at frames $\in \{25,...,30\}$, by adding ```skip_homo = [25,26,27,28,29,30]``` to our call.
trajectories = tracker(img_list,split_size = 512, save_tracking_folder = 'test_outputs/',
template = template,skip_homo = [25,26,27,28,29,30])
# Let's check the same images as before, but with the tracking information:
# +
imgs_ordered = []
for i in range(0,51):
path = 'test_outputs/test_' + '{:05d}'.format(i) + '.jpg'
imgs_ordered.append(path)
img_list = []
for path in imgs_ordered:
if path.endswith('.jpg'):
image = cv2.imread(path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
img_list.append(image)
image_0,image_25,image_50 = img_list[0],img_list[25],img_list[50]
print("Image shape: {}".format(image_0.shape))
visualize(image_0=image_0,
image_25=image_25,
image_50=image_50)
# -
# You can also easily create a movie of the tracking data, and display it:
import imageio
import progressbar
with imageio.get_writer('test_outputs/movie.mp4', mode='I',fps=20) as writer:
for i in progressbar.progressbar(range(0,51)):
filename = 'test_outputs/test_{:05d}.jpg'.format(i)
image = imageio.imread(filename)
writer.append_data(image)
# %%HTML
<div align="middle">
<video width="80%" controls>
<source src="test_outputs/movie.mp4" type="video/mp4">
</video></div>
# ## Process trajectories
#
# We now have raw trajectories that we need to process.
# First, you can perform several operations to ensure that the trajectories are good:
#
# * Delete an id at a certain frame
# * Delete an id from every frame
# * Merge two ids
# * Add an id at a given frame
#
# These operations are easy to perform with some of our functions from ```narya.utils.tracker```:
#
# ```python3
# def _remove_coords(traj, ids, frame):
# """Remove the x,y coordinates of an id at a given frame
# Arguments:
# traj: Dict mapping each id to a list of trajectory
# ids: the id to target
# frame: int, the frame we want to remove
# Returns:
# traj: Dict mapping each id to a list of trajectory
# Raises:
# """
#
# def _remove_ids(traj, list_ids):
# """Remove ids from a trajectory
# Arguments:
# traj: Dict mapping each id to a list of trajectory
# list_ids: List of id
# Returns:
# traj: Dict mapping each id to a list of trajectory
# Raises:
# """
#
# def add_entity(traj, entity_id, entity_traj):
# """Adds a new id with a trajectory
# Arguments:
# traj: Dict mapping each id to a list of trajectory
# entity_id: the id to add
# entity_traj: the trajectory linked to entity_id we want to add
# Returns:
# traj: Dict mapping each id to a list of trajectory
# Raises:
# """
#
# def add_entity_coords(traj, entity_id, entity_traj, max_frame):
# """Add some coordinates to the trajectory of a given id
# Arguments:
# traj: Dict mapping each id to a list of trajectory
# entity_id: the id to target
# entity_traj: List of (x,y,frame) to add to the trajectory of entity_id
# max_frame: int, the maximum number of frame in trajectories
# Returns:
# traj: Dict mapping each id to a list of trajectory
# Raises:
# """
#
#
# def merge_id(traj, list_ids_frame):
# """Merge trajectories of different ids.
# e.g.: (10,0,110),(12,110,300) will merge the trajectory of 10 between frame 0 and 110 to the
# trajectory of 12 between frame 110 and 300.
# Arguments:
# traj: Dict mapping each id to a list of trajectory
# list_ids_frame: List of (id,frame_start,frame_end)
# Returns:
# traj: Dict mapping each id to a list of trajectory
# Raises:
# """
#
# def merge_2_trajectories(traj1, traj2, id_mapper, max_frame_traj1):
# """Merge 2 dict of trajectories, if you want to merge the results of 2 tracking
# Arguments:
# traj1: Dict mapping each id to a list of trajectory
# traj2: Dict mapping each id to a list of trajectory
# id_mapper: A dict mapping each id in traj1 to id in traj2
# max_frame_traj1: Maximum number of frame in the first trajectory
# Returns:
# traj1: Dict mapping each id to a list of trajectory
# Raises:
# """
# ```
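# As a toy illustration of the merge semantics described above, here is a hypothetical helper (assuming each trajectory is a list of `(x, y, frame)` tuples; this is not the actual `narya` implementation):

```python
def merge_segments(traj, segments, new_id):
    """Concatenate frame ranges taken from different ids into one trajectory.

    e.g. segments = [(10, 0, 110), (12, 110, 300)] takes id 10 on frames
    [0, 110) and id 12 on frames [110, 300).
    """
    merged = []
    for entity_id, start, end in segments:
        merged += [p for p in traj[entity_id] if start <= p[2] < end]
    traj[new_id] = sorted(merged, key=lambda p: p[2])
    return traj
```

# This is useful when two ids actually correspond to the same player whose track was broken at some frame.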
# Here, let's assume we don't have to perform any operations, and directly process our trajectories into a pandas DataFrame.
#
# First, we can save our raw trajectory with
#
# ```python3
# import json
#
# with open('trajectories.json', 'w') as fp:
# json.dump(trajectories, fp)
# ```
#
# Let's start by padding our trajectories with np.nan and building a dict for our DataFrame:
# +
import json
with open('trajectories.json') as json_file:
trajectories = json.load(json_file)
# +
from narya.utils.tracker import build_df_per_id
df_per_id = build_df_per_id(trajectories)
# -
# We now fill the missing values, and apply a filter to smooth the trajectories:
# +
from narya.utils.tracker import fill_nan_trajectories
df_per_id = fill_nan_trajectories(df_per_id,5)
# +
from narya.utils.tracker import get_full_results
df = get_full_results(df_per_id)
# -
# ## Trajectory Dataframe
#
# We now have access to a DataFrame containing, for each id and each frame, the 2D coordinates of the entity.
df.head()
# Finally, you can save this DataFrame using ```df.to_csv('results_df.csv')```
| full-tracking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
# +
# KC-390 Millennium aircraft, from the Brazilian Air Force (FAB).
# -
# # Title
# +
# Course - DIO - Cognizant Cloud Data Engineer
# Fundamentals of ETL with Python
# -
# # Head
# +
# Data analysis using a FAB dataset
# CENIPA - Aeronautical Occurrences in Brazilian Civil Aviation
# https://www2.fab.mil.br/cenipa/index.php/ultimas-noticias/1157-cenipa-dados-abertos-sao-atualizados-em-setembro
# by geanclm on 05/03/2022 at 14:52
# -
# # Local files
# !dir
# # Import libs
import pandas as pd
import pandera as pa
# # Load dataset
# source: https://dados.gov.br/dataset/ocorrencias-aeronauticas-da-aviacao-civil-brasileira
valores_ausentes = ['***','****','*****','###!','####','NULL']
df = pd.read_csv('ocorrencia.csv', sep=';', encoding='utf-8',
parse_dates=['ocorrencia_dia'], dayfirst=True,
usecols=[0,2,5,8,9,11,12,13,19],
na_values=valores_ausentes)
df.columns
# # Data validation
# +
# 1 - check that the date attribute has the correct format
# 2 - it is worth reading up on regular expressions
# https://blog.dp6.com.br/express%C3%B5es-regulares-a-z-como-elas-podem-melhorar-a-sua-vida-a2700cef6f15
# -
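# For example, the time-format pattern used in the pandera schema below accepts `HH:MM` or `HH:MM:SS`, with hours 0-23 and minutes/seconds 0-59:

```python
import re

# Same pattern as in the schema below for ocorrencia_hora
hora_re = re.compile(r'^([0-1]?[0-9]|[2][0-3]):([0-5][0-9])(:[0-5][0-9])?$')

print(bool(hora_re.match('23:59:59')))  # True  (valid time)
print(bool(hora_re.match('24:00')))     # False (invalid hour)
```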
df.info()
# +
# df['ocorrencia_dia'] = pd.to_datetime(df['ocorrencia_dia'], format ='%d/%m/%Y')
# df['ocorrencia_hora'] = pd.to_datetime(df['ocorrencia_hora'], format ='%H:%M:%S')
# df['ocorrencia_hora'] = df['ocorrencia_hora'].dt.strftime('%H:%M:%S')
# -
df.tail()
# validate the data type of each column
schema = pa.DataFrameSchema(
columns = {
'codigo_ocorrencia':pa.Column(pa.Int, required=True),
'ocorrencia_uf':pa.Column(pa.String, pa.Check.str_length(2,3), nullable=True),
'ocorrencia_aerodromo':pa.Column(pa.String, nullable=True),
'ocorrencia_hora':pa.Column(pa.String, pa.Check.str_matches(r'^([0-1]?[0-9]|[2][0-3]):([0-5][0-9])(:[0-5][0-9])?$'), nullable=True)
}
)
schema.validate(df)
# df[df['ocorrencia_uf']=='***']
df.loc[4728]
df[df.index==4728]
# +
# df[df.ocorrencia_dia.dt.year==2021]
# df[df.ocorrencia_dia.dt.month==2]
# -
df.head()
# # Data cleaning
# +
# 1 - it is important to talk to the responsible team in the company about what can be done with
# records that have missing data
# -
df.head(5)
df.loc[1,'ocorrencia_cidade']
df.loc[1:3]
df.loc[[10,40]]
df.loc[:,'ocorrencia_cidade']
# # DataFrame manipulations
df.codigo_ocorrencia.is_unique
# +
# set a column as the DataFrame index
# df.set_index('codigo_ocorrencia', inplace=True)
# -
df
df.loc[80467]
# +
# df.reset_index(drop=True, inplace=True)
# -
df.tail()
# +
# changing the value of a specific field
# df.loc[0,'ocorrencia_aerodromo'] = ''
# -
df.head(1)
# +
# df.loc[df.ocorrencia_uf == 'SP', ['ocorrencia_classificacao']] = 'GRAVE'
# -
df.tail()
df[df['ocorrencia_uf'] == 'SP']
# +
# Going back to the original dataset read with pd.read_csv
# -
df
# # Data cleaning
# +
# ocorrencia_uf
# ***
# ocorrencia_aerodromo
# ****
# *****
# ###!
# ####
# ocorrencia_hora
# ****
# *****
# NULL
# +
# df.loc[df.ocorrencia_aerodromo == '****',['ocorrencia_aerodromo']] = '<NA>'
# +
# replacing values across the whole DataFrame
# this replacement can also be done when reading the data
# df.replace(['***','****','*****','###!','####','NULL'],pd.NA, inplace=True)
# -
df.tail()
# counting the number of missing values in the DataFrame
df.isna().sum()
df.isnull().sum()
df[df.ocorrencia_uf.isnull()]
# +
# fill null values with 0
# df.fillna(0, inplace=True)
# +
# create a column in the DataFrame
# df['bkp'] = df['ocorrencia_uf']
# +
# remove a column from the DataFrame
# df.drop(['bkp'], axis=1, inplace=True)
# -
df.dropna()
df.shape
# drop records with at least one missing value in the selected column
df.dropna(subset=['ocorrencia_hora'])
# +
# df.drop_duplicates()
# -
# # Transformation
df.dtypes
df.loc[1]
df.iloc[-1]
df.loc[10:15]
df.iloc[10:15]
df.loc[0:5,'ocorrencia_uf']
df['ocorrencia_uf'].head(6)
df.isna().sum()
df.loc[df.ocorrencia_uf.isnull()]
df.loc[df.ocorrencia_aerodromo.isnull()]
df.loc[df.ocorrencia_hora.isnull()]
# the count function does not consider null values
df.count()
# return the occurrences with more than ten recommendations
# df[df['total_recomendacoes']>10]
df.loc[df.total_recomendacoes>10]
# cities with more than 10 recommendations
df.loc[df['total_recomendacoes']>10,['ocorrencia_cidade','total_recomendacoes']]
# filter occurrences classified as INCIDENTE GRAVE
df.loc[df['ocorrencia_classificacao']=='INCIDENTE GRAVE',['ocorrencia_classificacao','ocorrencia_cidade']]
# INCIDENTE GRAVE in SP
df.loc[(df['ocorrencia_uf'] == 'SP') & (df['ocorrencia_classificacao'] =='INCIDENTE GRAVE')]
# INCIDENTE GRAVE or INCIDENTE in the state of São Paulo
df[((df['ocorrencia_classificacao'] =='INCIDENTE GRAVE') |
(df['ocorrencia_classificacao'] =='INCIDENTE')) & (df['ocorrencia_uf']=='SP')]
# INCIDENTE GRAVE or INCIDENTE in the state of São Paulo
# using isin
df[df['ocorrencia_classificacao'].isin(['INCIDENTE','INCIDENTE GRAVE']) &
(df['ocorrencia_uf']=='SP')]
# occurrences in cities whose name starts with the letter 'C'
df[df['ocorrencia_cidade'].str[0] == 'C']
# occurrences in cities whose name ends with the letter 'A'
df[df['ocorrencia_cidade'].str[-1] == 'A']
# occurrences in cities whose name ends with 'MA'
df[df['ocorrencia_cidade'].str[-2:] == 'MA']
# occurrences in cities whose name contains the sequence 'MA' or 'AL'
df[df['ocorrencia_cidade'].str.contains('MA|AL')]
# occurrences from December 2015, with day >= 3 and <= 8
df[(df['ocorrencia_dia'].dt.year == 2015) &
(df['ocorrencia_dia'].dt.month == 12) &
(df['ocorrencia_dia'].dt.day >= 3) &
(df['ocorrencia_dia'].dt.day <= 8)]
# join the date and time fields to make querying easier
df['dia_hora'] = pd.to_datetime(df['ocorrencia_dia'].astype(str) + " " + df['ocorrencia_hora'])
df.tail()
df.dtypes
# occurrences from 2015,
# December,
# days between 3 and 8,
# and between 11:00 and 13:00
df[(df['dia_hora'].dt.year == 2015) &
(df['dia_hora'].dt.month == 12) &
(df['dia_hora'].dt.day >= 3) &
(df['dia_hora'].dt.day <= 8) &
(df['dia_hora'].dt.hour >= 11) &
(df['dia_hora'].dt.hour <= 13)]
df.loc[(df['dia_hora'] >='2015-12-03 11:00:00') &
(df['dia_hora'] <='2015-12-08 13:00:00')]
# create a new DataFrame with data selected from March 2015
df_2015_03 = df.loc[(df['dia_hora'].dt.year == 2015) & (df['dia_hora'].dt.month == 3)]
df_2015_03.tail()
# Note: the ocorrencia_aerodromo column has missing data
df_2015_03.count()
# Note: the ocorrencia_aerodromo column has missing data
df_2015_03.groupby(['ocorrencia_classificacao','ocorrencia_uf','ocorrencia_cidade']).count()
# Note: the ocorrencia_aerodromo column has missing data
# the size option counts the number of records in each group
# and is an alternative that also counts groups with missing data
df_2015_03.groupby(['ocorrencia_classificacao',
'ocorrencia_uf',
'ocorrencia_cidade',
'ocorrencia_aerodromo']).size()
# list the number of records per 'ocorrencia_classificacao' in descending order
df_2015_03.groupby(['ocorrencia_classificacao']).size().sort_values(ascending=False)
df_2015_03.groupby(['ocorrencia_classificacao']).size().sort_values(ascending=False).plot.bar();
# Occurrences in the Southeast region in 2012
df_2012_Sudeste = df[(df['ocorrencia_uf'].isin(['SP','MG','ES','RJ'])) &
(df['dia_hora'].dt.year==2012)]
df_2012_Sudeste.groupby('ocorrencia_classificacao').size().sort_values(ascending=False)
df_2012_Sudeste.count()
df_2012_Sudeste.groupby('ocorrencia_classificacao').size().sort_values(ascending=False).plot.bar();
df_2012_Sudeste.groupby(['ocorrencia_classificacao',
'ocorrencia_uf',
'ocorrencia_cidade']).count()
df_2012_Sudeste.groupby(['ocorrencia_classificacao',
'ocorrencia_uf',
'ocorrencia_cidade',
'ocorrencia_aerodromo']).size()
# total recommendations for Rio de Janeiro in 2012
df_2012_Sudeste.loc[df['ocorrencia_cidade']=='RIO DE JANEIRO'].total_recomendacoes.sum()
# confirming total_recomendacoes
df_2012_Sudeste.loc[(df['ocorrencia_cidade']=='RIO DE JANEIRO') &
(df['total_recomendacoes'] > 0)]
# Number of recommendations per aerodrome
# Note: the ocorrencia_aerodromo column has missing data
df_2012_Sudeste.groupby(['ocorrencia_aerodromo'], dropna=False).total_recomendacoes.sum()
# Number of recommendations per city
# Only cities with recommendations
df_2012_Sudeste.loc[df_2012_Sudeste['total_recomendacoes'] >0 ].groupby(['ocorrencia_cidade']).total_recomendacoes.sum().sort_values(ascending=False)
df_2012_Sudeste.loc[df_2012_Sudeste['total_recomendacoes'] >0 ].groupby(['ocorrencia_cidade']).total_recomendacoes.sum().sort_values(ascending=False).plot.bar();
# Total recommendations per month for each city
df_2012_Sudeste.loc[df_2012_Sudeste['total_recomendacoes'] >0 ].groupby(['ocorrencia_cidade',
df_2012_Sudeste.dia_hora.dt.month]).total_recomendacoes.sum()
# confirming total_recomendacoes for the city of São Paulo
df_2012_Sudeste.loc[(df['ocorrencia_cidade']=='SÃO PAULO') &
(df['total_recomendacoes'] > 0)]
# +
# Loading the data into a new repository
# depends on each company's context
| Python/Fundamentos de ETL com Python/FAB_projeto_ETL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import torch
import numpy as np
import matplotlib.pyplot as plt
from torch import nn
from torchvision import datasets, transforms
# -
# # Prepare data
# +
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),  # MNIST images have a single channel
                               ])
# Download and load the training data
trainset = datasets.MNIST("MNIST_data/", download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# -
# # Define network
# +
# Define a feed-forward network
model = nn.Sequential(
nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10) # logits, instead of output of the softmax
)
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# +
# forward pass to get the logits
logits = model(images)
# pass the logits to criterion to get the loss
loss = criterion(logits, labels)
print(loss)
# -
# # Use `nn.LogSoftmax` and `nn.NLLLoss`
# It is more convenient to build the model with a log-softmax output (instead of raw logits from the final Linear layer) and use the negative log-likelihood loss.
#
# You can then recover the actual probabilities with torch.exp(output) rather than applying a softmax.
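A quick numpy sketch (not part of the notebook) of the identity this relies on: cross-entropy computed from raw logits equals the negative log-likelihood of the log-softmax output, and exponentiating log-softmax recovers probabilities that sum to 1.

```python
import numpy as np

def log_softmax(logits):
    # subtract the row max for numerical stability before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def cross_entropy(logits, labels):
    # naive definition: -log of the softmax probability of the true class
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
labels = np.array([3, 1, 0, 7])

# NLLLoss applied to a log-softmax output gives the same number
nll = -log_softmax(logits)[np.arange(4), labels].mean()
assert np.isclose(cross_entropy(logits, labels), nll)

# exponentiating log-softmax recovers a valid probability distribution per row
probs = np.exp(log_softmax(logits))
assert np.allclose(probs.sum(axis=1), 1.0)
```

This is why swapping `nn.CrossEntropyLoss` on logits for `nn.LogSoftmax` + `nn.NLLLoss` leaves the training objective unchanged.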
# +
# Define a feed-forward network
model = nn.Sequential(
nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
    nn.LogSoftmax(dim=1)  # dim=1: normalize over the class dimension, so each row sums to 1
)
# Define the loss
criterion = nn.NLLLoss()
# Get data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# +
# forward pass to get the logits
log_probs = model(images)
# pass the logits to criterion to get the loss
loss = criterion(log_probs, labels)
print(loss)
# -
# # Use Autograd to perform backpropagation
x = torch.randn(2, 2, requires_grad=True)
print(x)
y = x ** 2
print(y)
print(y.grad_fn)
z = y.mean()
print(z)
print(x.grad)
# calculate the gradients
z.backward()
print("grad: ", x.grad)
print("x:", x)
print("x/2: ", x / 2) # equal to gradients mathamatically = x / 2
# ## Try to perform backward pass and get the gradients
# +
print("Before backward pass: \n", model[0].weight.grad)
loss.backward()
print("After backward pass: \n", model[0].weight.grad)
# -
# # Training the network
#
# Use an optimizer from the PyTorch `optim` package to update the weights with the gradients. For example, stochastic gradient descent is `optim.SGD`
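The update `optim.SGD` performs for each parameter is simply w ← w − lr·∇w; a tiny numpy sketch of one step (toy values, not from the notebook):

```python
import numpy as np

lr = 0.01
w = np.array([0.5, -0.3])      # a toy parameter vector
grad = np.array([0.2, -0.1])   # stand-in for a gradient from loss.backward()

# one SGD step: move against the gradient, scaled by the learning rate
w_new = w - lr * grad
```

`optimizer.step()` below does exactly this for every tensor registered via `model.parameters()`.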
# +
from torch import optim
# Optimizers require the parameters to optimize and the learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
# -
# ## Try the optimizer to update weights
# The general process with PyTorch:
# - Make a forward pass
# - Calculate loss
# - Perform backward pass with loss.backward()
# - Take a step with optimizer to update the weights
# +
print("Initial weights: \n", model[0].weight)
images, labels = next(iter(trainloader))
images = images.view(64, 784)
# # !Important: Clear the gradients. otherwise, the gradients will be accumulated
optimizer.zero_grad()
# Forward pass
output = model(images)
loss = criterion(output, labels)
# Backward pass
loss.backward()
print("Gradient: \n", model[0].weight.grad)
# Take an update step with the optimizer
optimizer.step()
print("Updated weights: \n", model[0].weight)
# -
# ## Actually training
# +
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten
images = images.view(images.shape[0], -1)
# # !Important: Clear the gradients. otherwise, the gradients will be accumulated
optimizer.zero_grad()
# Forward pass
        output = model(images)
loss = criterion(output, labels)
# Backward pass
loss.backward()
        # Take an update step with the optimizer
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
# -
# # Get the predictions
# +
images, labels = next(iter(trainloader))
img = images[0].view(1, -1)
# turn off gradients to speed up
with torch.no_grad():
    probs = model(img)
output = torch.exp(probs)
# -
plt.imshow(img.view(1, 28, 28).squeeze(), cmap='gray')
print(output.numpy())
print("predict:", np.argmax(output))
| libs/pytorch/01_introduction/train neural networks with backpropagation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ArulselvanMadhavan/Artist_Recognition_from_Audio_Features/blob/master/HW0.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="77OeiwmaUy5R"
import numpy as np
# + id="nS3sKvSJU8Zv"
sample = np.array([i for i in range(100)])
# + colab={"base_uri": "https://localhost:8080/"} id="bm-qALFBVJ-e" outputId="c0c0bd47-25b0-4572-fa10-8dfca6b18ff6"
np.sum(sample)
# + id="9XTiYLF_VPDW"
| HW0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="w-pyfWT3a6S4"
# 
#
# [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/ocr/NATURAL_SCENE.ipynb)
# + [markdown] id="NWU8Tl9Fa3eT"
# # Recognize text in natural scenes
# + [markdown] id="8DtUIwnIa3eU"
# To run this yourself, you will need to upload your **Spark OCR** license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
# + [markdown] id="FAbRkMvtz-Gl"
# For more in-depth tutorials: https://github.com/JohnSnowLabs/spark-ocr-workshop/tree/master/jupyter
# + [markdown] id="xdROQW5ScYKA"
# ## 1. Colab Setup
# + [markdown] id="GeMJ4AoPkoMc"
# Read licence key
# + id="fUeMlVj1a3eV"
import json
import os
from google.colab import files
license_keys = files.upload()
os.rename(list(license_keys.keys())[0], 'spark_ocr.json')
with open('spark_ocr.json') as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# + [markdown] id="74VqnE1reEFC"
# Install Dependencies
# + id="ErndQ8oea3eY" colab={"base_uri": "https://localhost:8080/", "height": 834} outputId="879d886d-1ddc-4538-e085-73c13f738a4c"
# Installing pyspark and spark-nlp
# ! pip install --upgrade -q pyspark==3.0.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark OCR
# ! pip install spark-ocr==$OCR_VERSION\+spark30 --extra-index-url=https://pypi.johnsnowlabs.com/$SPARK_OCR_SECRET --upgrade
# + [markdown] id="eVt8BQaaeGp5"
# Importing Libraries
# + id="Xagvrd2D_AEg"
import json, os
with open("spark_ocr.json", 'r') as f:
license_keys = json.load(f)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# + id="cn4in75Ha3eb"
import pandas as pd
import numpy as np
import os
#Pyspark Imports
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel
from pyspark.sql import functions as F
# Necessary imports from Spark OCR library
import sparkocr
from sparkocr import start
from sparkocr.transformers import *
from sparkocr.enums import *
from sparkocr.utils import display_image, to_pil_image
from sparkocr.metrics import score
import pkg_resources
# + [markdown] id="pmAnLt26eMHu"
# Start Spark Session
# + id="BXtlGP0aa3ee" colab={"base_uri": "https://localhost:8080/"} outputId="b8fe3d82-197a-4f03-9a65-cd638da6c849"
spark = sparkocr.start(secret=SPARK_OCR_SECRET,
nlp_version=PUBLIC_VERSION
)
# + [markdown] id="x6caZBlia3et"
# ## 2. Load image
# + id="ApsEfUpfa3eu" colab={"base_uri": "https://localhost:8080/"} outputId="ad1ec677-acad-4950-d094-cfd65e100196"
imagePath = pkg_resources.resource_filename('sparkocr', 'resources/ocr/images/natural_scene.jpeg')
image_df = spark.read.format("binaryFile").load(imagePath).cache()
image_df.show()
# + [markdown] id="mxZlmX-ieyqE"
# ## 3. Construct the OCR pipeline
# + id="f9vo_4vua3er"
# Read binary as image
binary_to_image = BinaryToImage()
binary_to_image.setInputCol("content")
binary_to_image.setOutputCol("image")
# Scale image
scaler = ImageScaler()
scaler.setInputCol("image")
scaler.setOutputCol("scaled_image")
scaler.setScaleFactor(2.0)
# Binarize using adaptive thresholding
binarizer = ImageAdaptiveThresholding()
binarizer.setInputCol("scaled_image")
binarizer.setOutputCol("binarized_image")
binarizer.setBlockSize(71)
binarizer.setOffset(65)
remove_objects = ImageRemoveObjects()
remove_objects.setInputCol("binarized_image")
remove_objects.setOutputCol("cleared_image")
remove_objects.setMinSizeObject(400)
remove_objects.setMaxSizeObject(4000)
# Apply morphology opening
morpholy_operation = ImageMorphologyOperation()
morpholy_operation.setKernelShape(KernelShape.DISK)
morpholy_operation.setKernelSize(5)
morpholy_operation.setOperation("closing")
morpholy_operation.setInputCol("cleared_image")
morpholy_operation.setOutputCol("corrected_image")
# Run OCR
ocr = ImageToText()
ocr.setInputCol("corrected_image")
ocr.setOutputCol("text")
ocr.setConfidenceThreshold(50)
ocr.setIgnoreResolution(False)
# OCR pipeline
pipeline = PipelineModel(stages=[
binary_to_image,
scaler,
binarizer,
remove_objects,
morpholy_operation,
ocr
])
# + [markdown] id="vd74DK4ha3ew"
# ## 4. Run OCR pipeline
# + id="o5Wh9ysOa3ew"
result = pipeline.transform(image_df).cache()
# + [markdown] id="_uC_egLWa3ez"
# ## 5. Visualize Results
# + [markdown] id="wyIew0rVgnPM"
# Display result dataframe
# + id="dCOLYbB5a3ez" colab={"base_uri": "https://localhost:8080/"} outputId="5de73b9e-d322-4bdc-d8e9-97e521d4f1fa"
result.select("text", "confidence").show()
# + [markdown] id="2C_VFRVIa3e6"
# Display text and images
# + id="hJZGPMJna3e6" colab={"base_uri": "https://localhost:8080/", "height": 670} outputId="5fbbc942-aa4b-41e3-9701-9c7894022d72"
for r in result.distinct().collect():
display_image(r.image)
    print('\n', 'Text: ', r.text)
# + [markdown] id="CSz6OPxn3d-r"
# Showing and saving intermediate processing results as jpg
# + id="aZYtShk6m00P" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="c0c16c3a-101e-4ca7-a2f3-193c57d079ee"
for r in result.distinct().collect():
print("Original: %s" % r.path)
display_image(r.image)
print("Binarized")
display_image(r.binarized_image)
print("Removing objetcts")
display_image(r.cleared_image)
print("Morphology closing")
display_image(r.corrected_image)
img = to_pil_image(r.binarized_image, r.binarized_image.mode)
img.save('img_binarized.jpg')
| tutorials/streamlit_notebooks/ocr/NATURAL_SCENE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Fundamentals of Data Science: Homework 4
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.transforms import Bbox
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report, zero_one_loss
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression, LinearRegression
# %matplotlib inline
# # Experiment 1: Effect of decision tree depth on model performance
# +
n_f = 30 # number of features
inf_f = int(0.6 * n_f) # 60% informative features
red_f = int(0.1 * n_f) # 10% redundant features (linear combinations of informative ones)
rep_f = int(0.1 * n_f) # 10% repeated features (drawn at random from the informative and redundant features)
# Create the classification dataset
X,y = make_classification(
    n_samples=1000, # number of instances
    flip_y=0.2, # randomly flip the labels of 20% of instances to introduce noise
n_features=n_f,
n_informative=inf_f,
n_redundant=red_f,
n_repeated=rep_f,
random_state=7)
# Split into training and test sets
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state=9)
# -
# Examine the effect of tree depth
single_train = []
single_test = []
for i in range(1,21):
    # single decision tree model
model = DecisionTreeClassifier(max_depth=i,min_samples_leaf=1)
model.fit(X_train,y_train)
    # evaluate the single model on the training and test sets
y_pred = model.predict(X_train)
single_train.append(1-zero_one_loss(y_train,y_pred))
y_pred = model.predict(X_test)
single_test.append(1-zero_one_loss(y_test,y_pred))
plt.figure(figsize=(10,4))
x_axis=range(1,21)
plt.plot(x_axis,single_train,marker='.',markersize=10)
plt.plot(x_axis,single_test,marker='.',markersize=10)
plt.plot([5,5],[0.6,1.05],c='r',linestyle='--')
plt.ylim([0.6,1.05])
plt.xticks(x_axis)
plt.xlabel('Depth')
plt.ylabel('Accuracy')
plt.legend(['train','test'])
plt.savefig('./figure/depth.pdf')
plt.show()
# # Experiment 2: Single models vs. ensemble models
# +
# Comparison of the three models
# single decision tree model
model = DecisionTreeClassifier(max_depth=5,min_samples_leaf=1)
model.fit(X_train,y_train)
# boosting ensemble
boosting = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=10,
algorithm="SAMME",
random_state=9)
boosting.fit(X_train,y_train)
# bagging ensemble
bagging = BaggingClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=10,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
# single model predictions on the training and test sets
y_pred = model.predict(X_train)
print classification_report(y_train,y_pred)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_train,y_pred)*100),"%"
y_pred = model.predict(X_test)
print classification_report(y_test,y_pred)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
# boosting ensemble predictions on the training and test sets
y_pred = boosting.predict(X_train)
print classification_report(y_train,y_pred)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_train,y_pred)*100),"%"
y_pred = boosting.predict(X_test)
print classification_report(y_test,y_pred)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
# bagging ensemble predictions on the training and test sets
y_pred = bagging.predict(X_train)
print classification_report(y_train,y_pred)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_train,y_pred)*100),"%"
y_pred = bagging.predict(X_test)
print classification_report(y_test,y_pred)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
# -
# # Experiment 3: Effect of the number of estimators on ensemble performance
# +
boosting_acc = []
bagging_acc = []
for i in range(10,301,10):
print(i)
    # boosting ensemble
boosting = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=i,
algorithm="SAMME",
random_state=9)
boosting.fit(X_train,y_train)
    # bagging ensemble
bagging = BaggingClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=i,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
y_pred = boosting.predict(X_test)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
boosting_acc.append(1-zero_one_loss(y_test,y_pred))
y_pred = bagging.predict(X_test)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
bagging_acc.append(1-zero_one_loss(y_test,y_pred))
# -
plt.figure(figsize=(10,4))
x_axis = range(10,301,10)
plt.plot(x_axis,boosting_acc,marker='.',markersize=10)
plt.plot(x_axis,bagging_acc,marker='.',markersize=10)
plt.ylim([0.8,0.86])
plt.xlabel('#Estimator')
plt.ylabel('Accuracy')
plt.legend(['Boosting','Bagging'])
plt.savefig('./figure/nEstimator.pdf')
plt.show()
# # Experiment 4: Effect of dataset noise rate on ensemble performance
single_acc2 = []
boosting_acc2 = []
bagging_acc2 = []
for i in range(51):
print(i)
    # create the classification dataset
X,y = make_classification(
n_samples=1000,
flip_y=0.01*i,
n_features=n_f,
n_informative=inf_f,
n_redundant=red_f,
n_repeated=rep_f,
random_state=7)
    # split into training and test sets
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state=9)
    # single model
model = DecisionTreeClassifier(max_depth=5,min_samples_leaf=1)
model.fit(X_train,y_train)
    # boosting ensemble
boosting = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=50,
algorithm="SAMME",
random_state=9)
boosting.fit(X_train,y_train)
    # bagging ensemble
bagging = BaggingClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=50,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
y_pred = model.predict(X_test)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
single_acc2.append(1-zero_one_loss(y_test,y_pred))
y_pred = boosting.predict(X_test)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
boosting_acc2.append(1-zero_one_loss(y_test,y_pred))
y_pred = bagging.predict(X_test)
print "Fraction of misclassification = %0.2f" % (zero_one_loss(y_test,y_pred)*100),"%"
bagging_acc2.append(1-zero_one_loss(y_test,y_pred))
# +
# Effect of the flip (label-noise) ratio
# Compare the mean and standard deviation of accuracy, then plot
boo_acc2 = np.array(boosting_acc2)
bag_acc2 = np.array(bagging_acc2)
sin_acc2 = np.array(single_acc2)
print(boo_acc2.mean(),boo_acc2.std())
print(bag_acc2.mean(),bag_acc2.std())
print(sin_acc2.mean(),sin_acc2.std())
plt.figure(figsize=(10,5))
x_axis = np.arange(0,51)*0.01
plt.plot(x_axis,boosting_acc2,marker='.',markersize=10)
plt.plot(x_axis,bagging_acc2,marker='.',markersize=10)
# plt.ylim([0.8,0.86])
plt.xlabel('Flip%')
plt.ylabel('Accuracy')
plt.legend(['Boosting','Bagging'])
plt.savefig('./figure/flip.pdf')
plt.show()
# -
# # Experiment 5: Overall model comparison
# Create the classification dataset
X,y = make_classification(
n_samples=1000,
flip_y=0.2,
n_features=n_f,
n_informative=inf_f,
n_redundant=red_f,
n_repeated=rep_f,
random_state=7)
# Split into training and test sets
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state=9)
# +
# Decision tree
# single model
model = DecisionTreeClassifier(max_depth=5,min_samples_leaf=1)
model.fit(X_train,y_train)
# boosting ensemble
boosting = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=50,
algorithm="SAMME",
random_state=9)
boosting.fit(X_train,y_train)
# bagging ensemble
bagging = BaggingClassifier(
DecisionTreeClassifier(max_depth=5,min_samples_leaf=1),
n_estimators=50,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
# single model predictions on the test set
y_pred = model.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# boosting ensemble predictions on the test set
y_pred = boosting.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# bagging ensemble predictions on the test set
y_pred = bagging.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# +
# KNN
# single model
model = KNeighborsClassifier()
model.fit(X_train,y_train)
# bagging ensemble
bagging = BaggingClassifier(
KNeighborsClassifier(),
n_estimators=50,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
# single model predictions on the test set
y_pred = model.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# bagging ensemble predictions on the test set
y_pred = bagging.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# +
# SVM
# single model
model = SVC()
model.fit(X_train,y_train)
# boosting ensemble
boosting = AdaBoostClassifier(
SVC(),
n_estimators=50,
algorithm="SAMME",
random_state=9)
boosting.fit(X_train,y_train)
# bagging ensemble
bagging = BaggingClassifier(
SVC(),
n_estimators=50,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
# single model predictions on the test set
y_pred = model.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# boosting ensemble predictions on the test set
y_pred = boosting.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# bagging ensemble predictions on the test set
y_pred = bagging.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# +
# Logistic regression
# single model
model = LogisticRegression()
model.fit(X_train,y_train)
# boosting ensemble
boosting = AdaBoostClassifier(
LogisticRegression(),
n_estimators=50,
algorithm="SAMME",
random_state=9)
boosting.fit(X_train,y_train)
# bagging ensemble
bagging = BaggingClassifier(
LogisticRegression(),
n_estimators=50,
bootstrap=True,
max_samples=1.0,
bootstrap_features=True,
max_features=0.7,
random_state=9)
bagging.fit(X_train,y_train)
# single model predictions on the test set
y_pred = model.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# boosting ensemble predictions on the test set
y_pred = boosting.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# bagging ensemble predictions on the test set
y_pred = bagging.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# +
# Random forest
model = RandomForestClassifier(n_estimators=50)
model.fit(X_train,y_train)
# predictions on the test set
y_pred = model.predict(X_test)
print classification_report(y_test,y_pred)
print "Accuracy = %0.4f" % (1-zero_one_loss(y_test,y_pred))
# +
plt.figure(figsize=(6,6))
# KNN
plt.scatter(0.808,0.81,marker='.',s=400,c='r')
plt.scatter(0.852,0.85,marker='x',s=100,c='r')
# Decision tree
plt.scatter(0.722,0.72,marker='.',s=400,c='b')
plt.scatter(0.794,0.79,marker='*',s=200,c='b')
plt.scatter(0.814,0.81,marker='x',s=100,c='b')
# SVM
plt.scatter(0.752,0.74,marker='.',s=400,c='g')
plt.scatter(0.554,0.4,marker='*',s=200,c='g')
plt.scatter(0.79,0.78,marker='x',s=100,c='g')
# Logistic regression
plt.scatter(0.734,0.73,marker='.',s=400,c='y')
plt.scatter(0.76,0.76,marker='*',s=200,c='y')
plt.scatter(0.788,0.79,marker='x',s=100,c='y')
# Random forest
plt.scatter(0.82,0.82,marker='.',s=400,c='m')
plt.legend(['KNN','KNN(Bagging)','DecisionTree','DecisionTree(Boosting)','DecisionTree(Bagging)','SVM','SVM(Boosting)','SVM(Bagging)','LogisticRegression','LogisticRegression(Boosting)','LogisticRegression(Bagging)','RandomForest'],
loc='center left',bbox_to_anchor=(1.3, 0.5),fontsize=18)
plt.xlim([0.7,0.86])
plt.ylim([0.7,0.86])
plt.xlabel('Accuracy',fontsize=18)
plt.ylabel('F1-Score',fontsize=18)
plt.xticks([0.70,0.75,0.80,0.85],[0.70,0.75,0.80,0.85],fontsize=18)
plt.yticks([0.75,0.80,0.85],[0.75,0.80,0.85],fontsize=18)
plt.savefig('./figure/models.pdf',bbox_inches=Bbox([[-0.5,0],[12,6]]))
plt.show()
| homework_4/code/hw4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
# !{sys.executable} -m pip install imagesc
from pylab import *
L = 20
p = 0.5
z = rand(L,L)
m = z<p
imshow(m, origin='lower')
show()
p = linspace(0.35,1.0,100)
nx = len(p)
Pi = zeros(nx)
N=100
from scipy.ndimage import measurements
lw,num = measurements.label(m)  # label the occupied-site array, not the raw random field
perc_x = intersect1d(lw[0,:],lw[-1,:])
perc = perc_x[where(perc_x>0)]
# +
from pylab import *
p = linspace(0.4,1.0,100)
nx = len(p)
Ni = zeros(nx)
P = zeros(nx)
N = 1000
L = 100
for i in range(N):
z = rand(L,L)
for ip in range(nx):
m = z<p[ip]
lw, num = measurements.label(m)
perc_x = intersect1d(lw[0,:],lw[-1,:])
perc = perc_x[where(perc_x>0)]
if (len(perc)>0):
Ni[ip] = Ni[ip] + 1
Pi = Ni/N
plot(p,Pi)
xlabel('$p$')
ylabel(r'$\Pi$')
# -
from pylab import *
from scipy.ndimage import measurements
p = linspace(0.4,1.0,100)
nx = len(p)
Ni = zeros(nx)
P = zeros(nx)
N = 1000
L = 100
for i in range(N):
z = rand(L,L)
for ip in range(nx):
m = z<p[ip]
lw, num = measurements.label(m)
perc_x = intersect1d(lw[0,:],lw[-1,:])
perc = perc_x[where(perc_x>0)]
if (len(perc)>0):
Ni[ip] = Ni[ip] + 1
area = measurements.sum(m, lw, perc[0])
P[ip] = P[ip] + area
Pi = Ni/N
P = P/(N*L*L)
subplot(2,1,1)
plot(p,Pi)
subplot(2,1,2)
plot(p,P)
# +
import numpy as np
def test():
pList = np.arange(50) / 50.0
N = 16
trials = 1000
a = lattice(16)
results = []
for p in pList:
a.p = p
percolating = 0
for t in range(trials):
a.generate()
a.analyze()
if len(a.percolators) > 0: percolating += 1
results.append(percolating)
return (pList, results)
class lattice(object):
def __init__(self, N=16, p=0.5):
self.N = N
self.clusters = np.zeros((N, N), int)
self.numclusters = 0
self.p = p
self.percolators = []
self.sizes = []
def generate(self):
N = self.N
self.clusters[:,:] = 0
clusters = self.clusters
clusteruid = int(0)
self.uids = []
uids = self.uids
rightbonds = np.random.rand(N, N) < self.p
downbonds = np.random.rand(N, N) < self.p
# for index, thiscluster in np.ndenumerate(self.clusters):
# if thiscluster == 0:
# clustercount += 1
# thiscluster = clustercount
# self.clusters[index] = thiscluster
# if index[0] < N - 1 and down[index]:
# self.clusters[index[0] + 1, index[1]] = thiscluster
# if index[1] < N - 1 and right[index]:
# self.clusters[index[0], index[1] + 1] = thiscluster
for row in range(N):
for col in range(N):
right = (row, col + 1)
down = (row + 1, col)
clusterID = clusters[row, col]
if clusterID == 0:
## new cluster
clusteruid += 1
clusterID = clusteruid
clusters[row,col] = clusterID
uids.append(clusterID)
if col < N - 1 and rightbonds[row,col]:
if clusters[right] == 0:
## nothing to the right
clusters[right] = clusterID
elif clusterID != clusters[right]:
## different cluster found to right
existingcluster = clusters[right]
clusters[clusters == clusterID] = existingcluster
uids.remove(clusterID)
clusterID = existingcluster
if row < N - 1 and downbonds[row, col]:
self.clusters[down] = clusterID
self.numclusters = len(uids)
self.analyze()
def analyze(self):
self.sizes, null = np.histogram(self.clusters,
bins=range(self.numclusters))
north = self.clusters[0, :]
south = self.clusters[self.N - 1, :]
west = self.clusters[:, 0]
east = self.clusters[:, self.N - 1]
self.percolators = []
for cluster in self.uids:
if ((cluster in north and cluster in south)
or (cluster in west and cluster in east)):
self.percolators.append(cluster)
# -
import percolate
grid = percolate.spanning_2d_grid(3)
| .ipynb_checkpoints/pypercolate_example-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classification - Doc2Vec
#
# This notebook discusses multi-label classification methods for the [academia.stackexchange.com](https://academia.stackexchange.com/) data dump in [Doc2Vec](https://radimrehurek.com/gensim_3.8.3/models/doc2vec.html) representation.
#
# ## Table of Contents
# * [Data import](#data_import)
# * [Data preparation](#data_preparation)
# * [Methods](#methods)
# * [Evaluation](#evaluation)
# +
import matplotlib.pyplot as plt
import numpy as np
import warnings
import re
from joblib import load
from pathlib import Path
from sklearn.metrics import classification_report
from academia_tag_recommender.experiments.experimental_classifier import available_classifier_paths
warnings.filterwarnings('ignore')
plt.style.use('plotstyle.mplstyle')
plt.rcParams.update({
'axes.grid.axis': 'y',
'figure.figsize': (16, 8),
'font.size': 18,
'lines.linestyle': '-',
'lines.linewidth': 0.7,
})
RANDOM_STATE = 0
# -
# <a id='data_import'/>
# ## Data import
# +
from academia_tag_recommender.experiments.data import ExperimentalData
ed = ExperimentalData.load()
X_train, X_test, y_train, y_test = ed.get_train_test_set()
# +
from academia_tag_recommender.experiments.transformer import Doc2VecTransformer
from academia_tag_recommender.experiments.experimental_classifier import ExperimentalClassifier
transformer = Doc2VecTransformer.load('doc2vec')
train = transformer.fit(X_train)
# -
test = transformer.transform(X_test)
# <a id='data_preparation'/>
# ## Data Preparation
def create_classifier(classifier, name):
experimental_classifier = ExperimentalClassifier.load(transformer, classifier, name)
#experimental_classifier.train(train, y_train)
#experimental_classifier.score(test, y_test)
print('Training: {}s'.format(experimental_classifier.training_time))
print('Test: {}s'.format(experimental_classifier.test_time))
experimental_classifier.evaluation.print_stats()
# <a id='methods'/>
# ## Methods
#
# * [Problem Transformation](#problem_transformation)
# * [Algorithm Adaptation](#algorithm_adaption)
# * [Ensembles](#ensembles)
# <a id='problem_transformation'/>
# ### Problem Transformation
#
# - [DecisionTreeClassifier](#decisiontree)
# - [KNeighborsClassifier](#kNN)
# - [MLPClassifier](#mlp)
# - [MultioutputClassifier](#multioutput)
# - [Classwise Classifier](#classwise)
# - [Classifier Chain](#chain)
# - [Label Powerset](#label_powerset)
# <a id='decisiontree'/>
# **DecisionTreeClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier)
# +
from sklearn.tree import DecisionTreeClassifier
create_classifier(DecisionTreeClassifier(random_state=RANDOM_STATE), 'DecisionTreeClassifier')
# -
# <a id='kNN'/>
# **KNeighborsClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier)
# +
from sklearn.neighbors import KNeighborsClassifier
create_classifier(KNeighborsClassifier(), 'KNeighborsClassifier')
# -
# <a id='mlp'/>
# **MLPClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier)
# +
from sklearn.neural_network import MLPClassifier
create_classifier(MLPClassifier(random_state=RANDOM_STATE), 'MLPClassifier')
# -
# <a id='multioutput'/>
# **MultioutputClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html#sklearn.multioutput.MultiOutputClassifier)
#
# MultiOutputClassifier wraps an sklearn classifier into a Binary Relevance model: one independent binary classifier is trained per label.
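The Binary Relevance idea can be sketched without sklearn: the toy `MeanThreshold` class below is an invented stand-in for any base estimator, used only to show the fit-one-classifier-per-label-column loop.

```python
import numpy as np

class MeanThreshold:
    """Toy binary classifier (illustrative only): predict 1 when an
    instance's feature mean reaches the mean seen for the positive class."""
    def fit(self, X, y):
        self.t = X[y == 1].mean() if (y == 1).any() else np.inf
        return self
    def predict(self, X):
        return (X.mean(axis=1) >= self.t).astype(int)

def binary_relevance_fit(X, Y, make_clf):
    # one independent classifier per label column of the indicator matrix Y
    return [make_clf().fit(X, Y[:, j]) for j in range(Y.shape[1])]

def binary_relevance_predict(clfs, X):
    # stack the per-label predictions back into an indicator matrix
    return np.column_stack([c.predict(X) for c in clfs])

X = np.array([[0.1, 0.2], [0.8, 0.9], [0.7, 0.6]])
Y = np.array([[0, 1], [1, 0], [1, 0]])
clfs = binary_relevance_fit(X, Y, MeanThreshold)
pred = binary_relevance_predict(clfs, X)
```

`MultiOutputClassifier(LinearSVC())` does the same looping, with a real estimator cloned per label.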
# +
from sklearn.multioutput import MultiOutputClassifier
from sklearn.svm import LinearSVC
create_classifier(MultiOutputClassifier(LinearSVC(random_state=RANDOM_STATE)), 'MultioutputClassifier(LinearSVC)')
# +
from sklearn.linear_model import LogisticRegression
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
# -
# <a id='chain'/>
# **Classifier Chain** [source](http://scikit.ml/api/skmultilearn.problem_transform.cc.html#skmultilearn.problem_transform.ClassifierChain)
#
# <cite>[Read et al., 2011][1]</cite>
#
# [1]: https://doi.org/10.1007/s10994-011-5256-5
# +
from skmultilearn.problem_transform import ClassifierChain
create_classifier(ClassifierChain(classifier=LinearSVC(random_state=RANDOM_STATE)), 'ClassifierChain(LinearSVC)')
# -
create_classifier(ClassifierChain(classifier=LogisticRegression(random_state=RANDOM_STATE)), 'ClassifierChain(LogisticRegression)')
# <a id='label_powerset'/>
# **Label Powerset** [source](http://scikit.ml/api/skmultilearn.problem_transform.lp.html#skmultilearn.problem_transform.LabelPowerset)
# +
from skmultilearn.problem_transform import LabelPowerset
create_classifier(LabelPowerset(classifier=LinearSVC(random_state=RANDOM_STATE)), 'LabelPowerset(LinearSVC)')
# -
from skmultilearn.problem_transform import LabelPowerset
from sklearn.linear_model import LogisticRegression
create_classifier(LabelPowerset(classifier=LogisticRegression(random_state=RANDOM_STATE)), 'LabelPowerset(LogisticRegression)')
# <a id='algorithm_adaption'/>
# ### Algorithm Adaptation
#
# - [MLkNN](#mlknn)
# - [MLARAM](#mlaram)
# <a id='mlknn'/>
# **MLkNN** [source](http://scikit.ml/api/skmultilearn.adapt.mlknn.html#multilabel-k-nearest-neighbours)
#
# > Firstly, for each test instance, its k nearest neighbors in the training set are identified. Then, according to statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the test instance.
# <cite>[<NAME>, 2007][1]</cite>
#
# [1]: https://doi.org/10.1016/j.patcog.2006.12.019
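The two steps in the quote, find k nearest neighbours and then decide each label from neighbour statistics, can be sketched directly. This is only the neighbour-counting idea with a plain majority vote standing in for the full MAP estimate MLkNN actually uses.

```python
import numpy as np

def knn_label_vote(X_train, Y_train, x, k=3):
    # step 1: the k nearest training instances by Euclidean distance
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    # step 2: count neighbours carrying each label; assign by majority
    counts = Y_train[nn].sum(axis=0)
    return (counts > k / 2).astype(int)

X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
Y_train = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
pred = knn_label_vote(X_train, Y_train, np.array([0.05, 0.05]), k=3)  # -> [1, 0]
```

MLkNN replaces the raw majority with per-label prior and conditional probabilities estimated from the training set, which handles label imbalance better.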
# +
from skmultilearn.adapt import MLkNN
create_classifier(MLkNN(), 'MLkNN')
# -
# <a id='mlaram'/>
# **MLARAM** [source](http://scikit.ml/api/skmultilearn.adapt.mlaram.html#skmultilearn.adapt.MLARAM)
#
# > an extension of fuzzy Adaptive Resonance Associative Map (ARAM) – an Adaptive Resonance Theory (ART)based neural network. It aims at speeding up the classification process in the presence of very large data.
# <cite>[<NAME> & <NAME>, 2015][7]</cite>
#
# [7]: https://doi.org/10.1109/ICDMW.2015.14
# +
from skmultilearn.adapt import MLARAM
create_classifier(MLARAM(), 'MLARAM')
# -
# <a id='ensembles'/>
# ### Ensembles
#
# - [RAkELo](#rakelo)
# - [RAkELd](#rakeld)
# - [MajorityVotingClassifier](#majority_voting)
# - [LabelSpacePartitioningClassifier](#label_space)
# <a id='rakelo'/>
# **RAkELo** [source](http://scikit.ml/api/skmultilearn.ensemble.rakelo.html#skmultilearn.ensemble.RakelO)
#
# > Rakel: randomly breaking the initial set of labels into a number of small-sized labelsets, and employing [Label powerset] to train a corresponding multilabel classifier.
# <cite>[Tsoumakas et al., 2011][1]</cite>
#
#
# > Divides the label space in to m subsets of size k, trains a Label Powerset classifier for each subset and assign a label to an instance if more than half of all classifiers (majority) from clusters that contain the label assigned the label to the instance.
# <cite>[skmultilearn][2]</cite>
#
#
# [1]: https://doi.org/10.1109/TKDE.2010.164
# [2]: http://scikit.ml/api/skmultilearn.ensemble.rakelo.html#skmultilearn.ensemble.RakelO
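# The "random k-labelsets" sampling described above can be sketched as follows (a toy illustration, not the skmultilearn implementation):

```python
import random

def random_labelsets(n_labels, k, m, seed=0):
    """Draw m (possibly overlapping) random labelsets of size k."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_labels), k)) for _ in range(m)]

# RakelO would then train one Label Powerset classifier per subset and
# combine their per-label votes by majority.
subsets = random_labelsets(n_labels=6, k=3, m=4)
print(subsets)
```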
# +
from skmultilearn.ensemble import RakelO
create_classifier(RakelO(
base_classifier=LinearSVC(random_state=RANDOM_STATE),
model_count=y_train.shape[1]
), 'RakelO(LinearSVC)')
# -
create_classifier(RakelO(
base_classifier=LogisticRegression(random_state=RANDOM_STATE),
model_count=y_train.shape[1]
), 'RakelO(LogisticRegression)')
# <a id='rakeld'/>
# **RAkELd** [source](http://scikit.ml/api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelD)
#
# >Divides the label space in to equal partitions of size k, trains a Label Powerset classifier per partition and predicts by summing the result of all trained classifiers.
# <cite>[skmultilearn][3]</cite>
#
# [3]: http://scikit.ml/api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelD
# +
from skmultilearn.ensemble import RakelD
create_classifier(RakelD(base_classifier=LinearSVC(random_state=RANDOM_STATE)), 'RakelD(LinearSVC)')
# -
create_classifier(RakelD(base_classifier=LogisticRegression(random_state=RANDOM_STATE)), 'RakelD(LogisticRegression)')
# ***Clustering***
# +
from skmultilearn.cluster import LabelCooccurrenceGraphBuilder
def get_graph_builder():
graph_builder = LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False)
edge_map = graph_builder.transform(y_train)
return graph_builder
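# What the weighted co-occurrence builder computes can be reproduced on toy targets: the edge weight between two labels is the number of instances in which they co-occur (diagonal dropped, matching include_self_edges=False).

```python
import numpy as np

# Toy label indicator matrix: rows are instances, columns are labels.
y_toy = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [0, 0, 1]])

cooc = y_toy.T @ y_toy      # symmetric label co-occurrence counts
np.fill_diagonal(cooc, 0)   # drop self-edges
print(cooc)
```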
# +
from skmultilearn.cluster import IGraphLabelGraphClusterer
import igraph as ig
def get_clusterer():
graph_builder = get_graph_builder()
clusterer_igraph = IGraphLabelGraphClusterer(graph_builder=graph_builder, method='walktrap')
partition = clusterer_igraph.fit_predict(X_train, y_train)
return clusterer_igraph
# -
clusterer_igraph = get_clusterer()
# <a id='majority_voting'/>
# **MajorityVotingClassifier** [source](http://scikit.ml/api/skmultilearn.ensemble.voting.html#skmultilearn.ensemble.MajorityVotingClassifier)
# +
from skmultilearn.ensemble.voting import MajorityVotingClassifier
create_classifier(MajorityVotingClassifier(
classifier=ClassifierChain(classifier=LinearSVC(random_state=RANDOM_STATE)),
clusterer=clusterer_igraph
), 'MajorityVotingClassifier(ClassifierChain(LinearSVC))')
# -
create_classifier(MajorityVotingClassifier(
classifier=ClassifierChain(classifier=LogisticRegression(random_state=RANDOM_STATE)),
clusterer=clusterer_igraph
), 'MajorityVotingClassifier(ClassifierChain(LogisticRegression))')
# <a id='evaluation'/>
# ## Evaluation
paths = available_classifier_paths('doc2vec', 'size=100.')
paths = [path for path in paths if '-' not in path.name]
evals = []
for path in paths:
clf = load(path)
evaluation = clf.evaluation
evals.append([str(clf), evaluation])
from matplotlib.ticker import MultipleLocator
x_ = ['Precision', 'F1', 'Recall']
fig, axes = plt.subplots(1, 3, sharey=True)
axes[0].set_title('Sample')
axes[1].set_title('Macro')
axes[2].set_title('Micro')
for ax in axes:
ax.set_xticklabels(x_, rotation=45, ha='right')
ax.yaxis.set_major_locator(MultipleLocator(0.1))
ax.yaxis.set_minor_locator(MultipleLocator(0.05))
ax.set_ylim(-0.05, 1.05)
for eval_ in evals:
evaluator = eval_[1]
axes[0].plot(x_, [evaluator.precision_samples, evaluator.f1_samples, evaluator.recall_samples], label=eval_[0])
axes[1].plot(x_, [evaluator.precision_macro, evaluator.f1_macro, evaluator.recall_macro])
axes[2].plot(x_, [evaluator.precision_micro, evaluator.f1_micro, evaluator.recall_micro])
axes[0].set_ylabel('Score')
fig.legend(bbox_to_anchor=(1,0.5), loc='center left')
plt.show()
top_3 = sorted(paths, key=lambda x: load(x).evaluation.recall_macro, reverse=True)[:3]
def per_label_accuracy(orig, prediction):
if not isinstance(prediction, np.ndarray):
prediction = prediction.toarray()
l = 1 - np.absolute(orig - prediction)
return np.average(l, axis=0)
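# A quick self-contained check of the logic above (the helper is redefined here so the snippet runs on its own, with toy arrays):

```python
import numpy as np

def per_label_accuracy(orig, prediction):
    # 1 minus the mean absolute difference, per label column.
    if not isinstance(prediction, np.ndarray):
        prediction = prediction.toarray()
    return np.average(1 - np.absolute(orig - prediction), axis=0)

orig = np.array([[1, 0, 1], [0, 1, 1]])
pred = np.array([[1, 1, 1], [0, 1, 0]])
acc = per_label_accuracy(orig, pred)
print(acc)  # label 0 matches twice, labels 1 and 2 match once each
```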
from sklearn.metrics import classification_report
classwise_results = []
for clf_path in top_3:
clf = load(clf_path)
prediction = clf.predict(test)
label_accuracies = per_label_accuracy(y_test, prediction)
report = classification_report(y_test, prediction, output_dict=True, zero_division=0)
classwise_report = {}
for i, result in enumerate(report):
if i < len(label_accuracies):
classwise_report[result] = report[result]
classwise_report[result]['accuracy'] = label_accuracies[int(result)]
classwise_results.append((clf, classwise_report))
x_ = np.arange(0, len(y_test[0]))
fig, axes = plt.subplots(3, 1, figsize=(16, 12))
for i, classwise_result in enumerate(classwise_results):
name, results = classwise_result
sorted_results = sorted(results, key=lambda x: results[x]['support'], reverse=True)
axes[i].set_title(name, fontsize=18)
axes[i].plot(x_, [results[result]['precision'] for result in sorted_results][0:len(x_)], label='Precision')
axes[i].plot(x_, [results[result]['recall'] for result in sorted_results][0:len(x_)], label='Recall')
axes[i].plot(x_, [results[result]['f1-score'] for result in sorted_results][0:len(x_)], label='F1')
axes[i].plot(x_, [results[result]['accuracy'] for result in sorted_results][0:len(x_)], label="Accuracy")
axes[i].set_ylabel('Score')
axes[i].yaxis.set_major_locator(MultipleLocator(0.5))
axes[i].yaxis.set_minor_locator(MultipleLocator(0.25))
axes[i].set_ylim(-0.05, 1.05)
axes[2].set_xlabel('Label (sorted by support)')
lines, labels = fig.axes[-1].get_legend_handles_labels()
fig.legend(lines, labels, loc='right')
plt.subplots_adjust(hspace=0.5, right=0.8)
plt.show()
# **Impact of vector size**
transformer = Doc2VecTransformer.load('doc2vec', 100)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
transformer = Doc2VecTransformer.load('doc2vec', 200)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
transformer = Doc2VecTransformer.load('doc2vec', 500)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
transformer = Doc2VecTransformer.load('doc2vec', 1000)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
import re
paths = available_classifier_paths('MultioutputClassifier(LogisticRegression)', 'doc2vec', 'size')
evals = []
for path in paths:
clf = load(path)
evaluation = clf.evaluation
matches = re.findall(r'=([\w,\d]*)', str(path))
_, _, size = matches
evals.append([int(size), evaluation])
fig, ax = plt.subplots()
ax.set_title('Impact of Vector Size')
evals = sorted(evals, key=lambda x: x[0])
x_ = [eval[0] for eval in evals]
ax.plot(x_, [eval[1].accuracy for eval in evals], marker="o", label='Accuracy')
ax.plot(x_, [eval[1].f1_macro for eval in evals], marker="o", label='F1 Macro')
ax.plot(x_, [eval[1].recall_macro for eval in evals], marker="o", label='Recall Macro')
ax.plot(x_, [eval[1].precision_macro for eval in evals], marker="o", label='Precision Macro')
ax.set_xlabel('Vector size')
ax.set_ylabel('Score')
ax.xaxis.set_major_locator(MultipleLocator(100))
fig.legend(bbox_to_anchor=(1,0.5), loc='center left')
plt.show()
| notebooks/4.4-me-classification-doc2vec.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.preprocessing import normalize
import numpy as np
import json
from XMeans import XMeansTraining
import ipyvolume as ipv
from mpl_toolkits.mplot3d import Axes3D
import random
from sklearn.manifold import TSNE
from matplotlib import colors as mcolors
import matplotlib.pyplot as plt
# %matplotlib inline
# +
# Read data
titleList = []
DocVecs = []
with open("./sampleData.json",'r') as f:
for line in f:
data = json.loads(line)
titleList.append(data["title"])
DocVecs.append(data["docvec"])
data = np.array(DocVecs)
# +
# Train XMeans
maxBranching = 5
Centroids,Labels = XMeansTraining(data,maxBranching)
print("Number of clusters = {}".format(len(Centroids)))
# +
# Save Cluster titles at ./Clusters
for clID in range(len(Centroids)):
indices = np.where(Labels == clID)[0].tolist()
Titles = [titleList[i] for i in indices]
with open("./Clusters/"+str(clID)+".txt", 'a') as file:
for title in Titles:
file.write("%s\n" % title)
# -
# ### Visualize cluster samples
# +
# Dimensionality reduction using tSNE
tSNE = TSNE(n_components=3,metric='cosine')
tsneModel = tSNE.fit(data)
# +
# 3D scatter plot of samples from clusters
noSamples = 200
tSNEData = tsneModel.embedding_
colors = [col for col in mcolors.CSS4_COLORS.keys()]
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
for k in np.arange(len(Centroids)):
ind = np.where(Labels == k)[0].tolist()
indices = random.sample(ind, noSamples)
ax.scatter(tSNEData[indices,0], tSNEData[indices,1], tSNEData[indices,2], marker="o",color=colors[k])
ax.set_xlabel('')
ax.set_ylabel('')
ax.set_zlabel('')
plt.show()
# +
# 3D interactive scatter plot of samples from clusters
size = 0.75
for k in np.arange(len(Centroids)):
ind = np.where(Labels == k)[0].tolist()
indices = random.sample(ind, noSamples)
ipv.pylab.scatter(tSNEData[indices,0], tSNEData[indices,1], tSNEData[indices,2], size=size, marker="sphere",color=colors[k+2])
ipv.pylab.show()
ipv.pylab.xlabel("")
ipv.pylab.ylabel("")
ipv.pylab.zlabel("")
ipv.pylab.xyzlim(-20)
# -
| Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Time-varying frame
#
# <NAME>
# <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Cartesian coordinate system
#
# As we perceive the surrounding space as three-dimensional, a convenient coordinate system is the [Cartesian coordinate system](http://en.wikipedia.org/wiki/Cartesian_coordinate_system) in the [Euclidean space](http://en.wikipedia.org/wiki/Euclidean_space) with three orthogonal axes as shown below. The axes directions are commonly defined by the [right-hand rule](http://en.wikipedia.org/wiki/Right-hand_rule) and attributed the letters X, Y, Z. The orthogonality of the Cartesian coordinate system is convenient for its use in classical mechanics; most of the time the structure of space is assumed to have [Euclidean geometry](http://en.wikipedia.org/wiki/Euclidean_geometry) and, as a consequence, motions in different directions are independent of each other.
#
# <figure><img src="https://raw.githubusercontent.com/demotu/BMC/master/images/CCS.png" width=350/></figure>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Determination of a coordinate system
#
# In Biomechanics, we may use different coordinate systems for convenience and refer to them as global, laboratory, local, anatomical, or technical reference frames or coordinate systems.
#
# As we perceive the surrounding space as three-dimensional, a convenient coordinate system to use is the [Cartesian coordinate system](http://en.wikipedia.org/wiki/Cartesian_coordinate_system) with three orthogonal axes in the [Euclidean space](http://en.wikipedia.org/wiki/Euclidean_space). From [linear algebra](http://en.wikipedia.org/wiki/Linear_algebra), a set of linearly independent unit vectors (orthogonal in the Euclidean space and each with norm (length) equal to one) that can represent any vector via [linear combination](http://en.wikipedia.org/wiki/Linear_combination) is called a <a href="http://en.wikipedia.org/wiki/Basis_(linear_algebra)">basis</a> (or **orthonormal basis**). The figure below shows a point and its position vector in the Cartesian coordinate system and the corresponding versors (**unit vectors**) of the basis for this coordinate system. See the notebook [Scalar and vector](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ScalarVector.ipynb) for a description of vectors.
#
# <img src="https://raw.githubusercontent.com/demotu/BMC/master/images/vector3Dijk.png" width=350/>
# + [markdown] slideshow={"slide_type": "slide"}
# One can see that the versors of the basis shown in the figure above have the following coordinates in the Cartesian coordinate system:
#
# $$ \hat{\mathbf{i}} = \begin{bmatrix}1\\0\\0 \end{bmatrix}, \quad \hat{\mathbf{j}} = \begin{bmatrix}0\\1\\0 \end{bmatrix}, \quad \hat{\mathbf{k}} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
#
# Using the notation described in the figure above, the position vector $\overrightarrow{\mathbf{r}}$ can be expressed as:
#
# $$ \overrightarrow{\mathbf{r}} = x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}} $$
#
# However, using a fixed basis can lead to very complex expressions.
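# The linear combination above can be checked numerically (a trivial sketch with the standard basis):

```python
import numpy as np

# Versors of the Cartesian basis.
i_hat, j_hat, k_hat = np.eye(3)

# r = x i + y j + z k recovers the coordinates directly.
x, y, z = 2.0, -1.0, 3.0
r = x*i_hat + y*j_hat + z*k_hat
print(r)
```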
# + [markdown] slideshow={"slide_type": "slide"}
# ## Time varying basis
# + [markdown] slideshow={"slide_type": "slide"}
# Consider that we have the position vector of a particle, moving in the path described by the parametric curve $s(t)$, described in a fixed reference frame as:
#
# $${\bf\vec{r}}(t) = {x}{\bf\hat{i}}+{y}{\bf\hat{j}} + {z}{\bf\hat{k}}$$
#
# <img src="../images/velRefFrame.png" width=500/>
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Tangential versor
#
# Often we describe all the kinematic variables in this fixed reference frame. However, it is often useful to define a time-varying basis, attached to some point of interest. In this case, what is usually done is to choose as one of the basis vectors a unit vector in the direction of the velocity of the particle. Defining this vector as:
#
# $${\bf\hat{e}_t} = \frac{{\bf\vec{v}}}{\Vert{\bf\vec{v}}\Vert}$$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Normal versor
#
# For the second vector of the basis, we first define a vector of curvature of the path (the meaning of this curvature vector will be seen in another notebook):
#
# $$ {\bf\vec{C}} = \frac{d{\bf\hat{e}_t}}{ds}$$
#
# Note that $\bf\hat{e}_t$ is a function of the path $s(t)$. So, by the chain rule:
#
# $$ \frac{d{\bf\hat{e}_t}}{dt} = \frac{d{\bf\hat{e}_t}}{ds}\frac{ds}{dt} \longrightarrow \frac{d{\bf\hat{e}_t}}{ds} = \frac{\frac{d{\bf\hat{e}_t}}{dt}}{\frac{ds}{dt}} \longrightarrow {\bf\vec{C}} = \frac{\frac{d{\bf\hat{e}_t}}{dt}}{\frac{ds}{dt}}\longrightarrow {\bf\vec{C}} = \frac{\frac{d{\bf\hat{e}_t}}{dt}}{\Vert{\bf\vec{v}}\Vert}$$
#
# Now we can define the second vector of the basis, ${\bf\hat{e}_n}$:
#
# $${\bf\hat{e}_n} = \frac{{\bf\vec{C}}}{\Vert{\bf\vec{C}}\Vert}$$
#
# <img src="../images/velRefFrameeten.png" width=500/>
# + [markdown] slideshow={"slide_type": "slide"}
# ### Binormal versor
#
# The third vector of the basis is obtained by the cross product between ${\bf\hat{e}_n}$ and ${\bf\hat{e}_t}$.
#
# $${\bf\hat{e}_b} = {\bf\hat{e}_t} \times {\bf\hat{e}_n} $$
#
# Note that the vectors ${\bf\hat{e}_t}$, ${\bf\hat{e}_n}$ and ${\bf\hat{e}_b}$ vary together with the particle movement.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Velocity and Acceleration in a time-varying frame
# + [markdown] slideshow={"slide_type": "slide"}
# ### Velocity
#
# Given the expression of $r(t)$ in a fixed frame, we can write the velocity ${\bf\vec{v}(t)}$ as a function of the fixed frame of reference ${\bf\hat{i}}$, ${\bf\hat{j}}$ and ${\bf\hat{k}}$ (see http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsParticle.ipynb).
#
# $${\bf\vec{v}}(t) = \dot{x}{\bf\hat{i}}+\dot{y}{\bf\hat{j}}+\dot{z}{\bf\hat{k}}$$
#
# However, this can lead to very complex functions, so it is useful to use the basis found previously: ${\bf\hat{e}_t}$, ${\bf\hat{e}_n}$ and ${\bf\hat{e}_b}$.
#
# The velocity ${\bf\vec{v}}$ of the particle is, by the definition of ${\bf\hat{e}_t}$, in the direction of ${\bf\hat{e}_t}$:
#
# $${\bf\vec{v}}={\Vert\bf\vec{v}\Vert}.{\bf\hat{e}_t}$$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Acceleration
#
# The acceleration can be written in the fixed frame of reference as:
#
# $${\bf\vec{a}}(t) = \ddot{x}{\bf\hat{i}}+\ddot{y}{\bf\hat{j}}+\ddot{z}{\bf\hat{k}}$$
#
# But, for the same reasons as for the velocity vector, it is useful to describe the acceleration vector in the time-varying basis. We know that the acceleration is the time derivative of the velocity:
#
# $${\bf\vec{a}} = \frac{{d\bf\vec{v}}}{dt}=\frac{{d({\Vert\bf\vec{v}\Vert}{\bf\hat{e}_t}})}{dt}=\dot{\Vert\bf\vec{v}\Vert}{\bf\hat{e}_t}+{\Vert\bf\vec{v}\Vert}\dot{{\bf\hat{e}_t}}= \dot{\Vert\bf\vec{v}\Vert}{\bf\hat{e}_t}+{\Vert\bf\vec{v}\Vert}\frac{d{\bf\hat{e}_t}}{ds}\frac{ds}{dt}=\dot{\Vert\bf\vec{v}\Vert}{\bf\hat{e}_t}+{\Vert\bf\vec{v}\Vert}^2\frac{d{\bf\hat{e}_t}}{ds}$$
#
# $${\bf\vec{a}}=\dot{\Vert\bf\vec{v}\Vert}{\bf\hat{e}_t}+{\Vert\bf\vec{v}\Vert}^2\Vert{\bf\vec{C}} \Vert {\bf\hat{e}_n}$$
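# The decomposition above can be verified numerically with finite differences (a sketch using the projectile path of the example below, whose true acceleration is $(0, -9.81)$, so the reconstructed magnitude should be 9.81 everywhere):

```python
import numpy as np

# r(t) = (10 t + 100) i + (-9.81/2 t^2 + 50 t + 100) j
t = np.linspace(0, 10, 2001)
dt = t[1] - t[0]
r = np.stack([10*t + 100, -9.81/2*t**2 + 50*t + 100], axis=1)

v = np.gradient(r, dt, axis=0)
speed = np.linalg.norm(v, axis=1)
et = v / speed[:, None]                             # tangential versor
C = np.gradient(et, dt, axis=0) / speed[:, None]    # curvature vector

a_t = np.gradient(speed, dt)                        # d||v||/dt
a_n = speed**2 * np.linalg.norm(C, axis=1)          # ||v||^2 ||C||
a_mag = np.sqrt(a_t**2 + a_n**2)

# Interior points only: finite differences are less accurate at the ends.
print(np.allclose(a_mag[5:-5], 9.81, atol=1e-2))
```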
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example
# For example, consider that a particle follows the path described by the parametric curve below:
#
# $$\vec{r}(t) = (10t+100){\bf{\hat{i}}} + \left(-\frac{9.81}{2}t^2+50t+100\right){\bf{\hat{j}}}$$
#
# This curve could be, for example, from a projectile motion. See http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ProjectileMotion.ipynb for an explanation on projectile motion.
# + slideshow={"slide_type": "slide"}
import numpy as np
import sympy as sym
from sympy.vector import CoordSys3D
import matplotlib.pyplot as plt
sym.init_printing()
from sympy.plotting import plot_parametric
from sympy.physics.mechanics import ReferenceFrame, Vector, dot
# + [markdown] slideshow={"slide_type": "slide"}
# ### Solving numerically
#
# Now we will obtain the time-varying basis numerically. This method is useful when a mathematical expression of the path is not available, which often happens with data collected experimentally (most cases in Biomechanics).
#
# First, data will be obtained from the expression of $r(t)$. This is done to replicate the example above. You could use data collected experimentally, for example.
# + slideshow={"slide_type": "slide"}
t = np.linspace(0, 10, 30)
r = np.transpose(np.array([10*t + 100, -9.81/2*t**2 + 50*t + 100]))
# + [markdown] slideshow={"slide_type": "slide"}
# Now, to obtain the $\bf{\hat{e_t}}$ versor, we can use Equation (4).
# + slideshow={"slide_type": "slide"}
Ts = t[1]
v = np.diff(r,1,0)/Ts
vNorm = np.sqrt(v[:,[0]]**2+v[:,[1]]**2)
et = v/vNorm
# + [markdown] slideshow={"slide_type": "slide"}
# And to obtain the versor $\bf{\hat{e_n}}$, we can use Equation (8).
# + slideshow={"slide_type": "slide"}
C = np.diff(et,1,0)/Ts
C = C/vNorm[1:]
CNorm = np.sqrt(C[:,[0]]**2+C[:,[1]]**2)
en = C/CNorm
# + slideshow={"slide_type": "slide"}
from matplotlib.patches import FancyArrowPatch
fig = plt.figure()
plt.plot(r[:,0],r[:,1],'.')
ax = fig.add_axes([0,0,1,1])
time = np.linspace(0,10,10)
for i in np.arange(len(t)-2):
vec1 = FancyArrowPatch(r[i,:],r[i,:]+10*et[i,:],mutation_scale=20,color='r')
vec2 = FancyArrowPatch(r[i,:],r[i,:]+10*en[i,:],mutation_scale=20,color='g')
ax.add_artist(vec1)
ax.add_artist(vec2)
plt.xlim((80,250))
plt.ylim((80,250))
plt.show()
# + slideshow={"slide_type": "slide"}
v = vNorm*et
vNormDot = np.diff(vNorm,1,0)/Ts
a = vNormDot*et[1:,:] + vNorm[1:]**2*CNorm*en
# + slideshow={"slide_type": "slide"}
from matplotlib.patches import FancyArrowPatch
# %matplotlib inline
plt.rcParams['figure.figsize']=10,10
fig = plt.figure()
plt.plot(r[:,0],r[:,1],'.')
ax = fig.add_axes([0,0,1,1])
for i in range(0,len(t)-2,3):
vec1 = FancyArrowPatch(r[i,:],r[i,:]+v[i,:],mutation_scale=10,color='r')
vec2 = FancyArrowPatch(r[i,:],r[i,:]+a[i,:],mutation_scale=10,color='g')
ax.add_artist(vec1)
ax.add_artist(vec2)
plt.xlim((80,250))
plt.ylim((80,250))
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Symbolic solution (extra reading)
# + [markdown] slideshow={"slide_type": "slide"}
# The computation here will be performed symbolically, with Sympy, the symbolic math package of Python. Below, a reference frame called O and a variable for time (t) are defined.
# + slideshow={"slide_type": "slide"}
O = sym.vector.CoordSys3D(' ')
t = sym.symbols('t')
# + [markdown] slideshow={"slide_type": "slide"}
# Below the vector $r(t)$ is defined symbolically.
# + slideshow={"slide_type": "slide"}
r = (10*t+100)*O.i + (-9.81/2*t**2+50*t+100)*O.j+0*O.k
r
# + slideshow={"slide_type": "slide"}
plot_parametric(r.dot(O.i),r.dot(O.j), (t,0,10))
# + slideshow={"slide_type": "slide"}
v = sym.diff(r)
v
# + slideshow={"slide_type": "slide"}
et = v/sym.sqrt(v.dot(v))
et
# + slideshow={"slide_type": "slide"}
C = sym.diff(et)/sym.sqrt(v.dot(v))
C
# + slideshow={"slide_type": "slide"}
en = C/(sym.sqrt(C.dot(C)))
sym.simplify(en)
# + slideshow={"slide_type": "slide"}
from matplotlib.patches import FancyArrowPatch
plt.rcParams['figure.figsize'] = 10, 10
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.axis("on")
time = np.linspace(0,10,30)
for instant in time:
vt = FancyArrowPatch([float(r.dot(O.i).subs(t,instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t,instant))+10*float(et.dot(O.i).subs(t,instant)), float(r.dot(O.j).subs(t, instant))+10*float(et.dot(O.j).subs(t,instant))],
mutation_scale=20,
arrowstyle="->",color="r",label='${\hat{e_t}}$')
vn = FancyArrowPatch([float(r.dot(O.i).subs(t, instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t, instant))+10*float(en.dot(O.i).subs(t, instant)), float(r.dot(O.j).subs(t, instant))+10*float(en.dot(O.j).subs(t, instant))],
mutation_scale=20,
arrowstyle="->",color="g",label='${\hat{e_n}}$')
ax.add_artist(vn)
ax.add_artist(vt)
plt.xlim((90,250))
plt.ylim((90,250))
plt.xlabel('x')
plt.legend(handles=[vt,vn],fontsize=20)
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# Now we can find the vectors ${\bf\vec{v}}$ and ${\bf\vec{a}}$ described in the time varying frame.
# + slideshow={"slide_type": "slide"}
v = sym.sqrt(v.dot(v))*et
# + slideshow={"slide_type": "slide"}
a = sym.diff(sym.sqrt(v.dot(v)))*et+v.dot(v)*sym.sqrt(C.dot(C))*en
sym.simplify(sym.simplify(a))
# + slideshow={"slide_type": "slide"}
from matplotlib.patches import FancyArrowPatch
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.axis("on")
time = np.linspace(0,10,10)
for instant in time:
vt = FancyArrowPatch([float(r.dot(O.i).subs(t,instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t,instant))+float(v.dot(O.i).subs(t,instant)), float(r.dot(O.j).subs(t, instant))+float(v.dot(O.j).subs(t,instant))],
mutation_scale=20,
arrowstyle="->",color="r",label='${{v}}$')
vn = FancyArrowPatch([float(r.dot(O.i).subs(t, instant)),float(r.dot(O.j).subs(t,instant))],
[float(r.dot(O.i).subs(t, instant))+float(a.dot(O.i).subs(t, instant)), float(r.dot(O.j).subs(t, instant))+float(a.dot(O.j).subs(t, instant))],
mutation_scale=20,
arrowstyle="->",color="g",label='${{a}}$')
ax.add_artist(vn)
ax.add_artist(vt)
plt.xlim((60,250))
plt.ylim((60,250))
plt.legend(handles=[vt,vn],fontsize=20)
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Problems
#
# 1. Obtain the vectors $\hat{e_n}$ and $\hat{e_t}$ for the problem 17.1.1 from Ruina and Rudra's book.
# 2. Solve the problem 17.1.9 from Ruina and Rudra's book.
# 3. Write a Python program to solve the problem 17.1.10 (only the part of $\hat{e_n}$ and $\hat{e_t}$).
# + [markdown] slideshow={"slide_type": "slide"}
# ## References
#
# - <NAME>, <NAME> (2015) Introduction to Statics and Dynamics. Oxford University Press. http://ruina.tam.cornell.edu/Book/RuinaPratap-Jan-20-2015.pdf
| notebooks/.ipynb_checkpoints/Time-varying frames-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="rK4xPOI93BjD"
#First we import and load all basic libraries
from google.colab import drive #For linking colab to Google Drive
import pandas as pd #For dataframe handling
import numpy as np #For matrix and list computations
import matplotlib.pyplot as plt
import seaborn as sns #For advanced graphs
import scipy.stats as stats
# + id="ySB_tPVC3I1R" colab={"base_uri": "https://localhost:8080/"} outputId="ffbbecf6-3422-413f-b6d2-afa8b9559733"
drive.mount('/content/mydrive') #Bridge to Google Drive
# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="9x5MWnFa3ePo" outputId="1c3bf4db-2cc2-4a98-bf9a-0b8150793ea0"
heart = pd.read_csv ('/content/mydrive/MyDrive/EDEM/heart.csv', sep=',')
heart
# + colab={"base_uri": "https://localhost:8080/"} id="4OyZfHgE9yFm" outputId="80edb377-0d05-4b2a-f5fc-258eaaca750d"
Age = heart.Age.describe()
print(heart.Age.describe())
m_age=Age[1]
sd_age=Age[2]
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="y58IxScFBZAL" outputId="1db43ab6-f44a-4d17-df34-77e85fcaa5e0"
x=heart['Age']
plt.hist(x,edgecolor='black',bins=20)
plt.xticks(np.arange(25,85, step=5))
plt.title("Figura 1. Edades")
plt.ylabel('Frequency')
plt.xlabel('Age')
plt.axvline(x=m_age, linewidth=1, linestyle= 'solid', color="red", label='Mean')
plt.axvline(x=m_age-sd_age, linewidth=1, linestyle= 'dashed', color="green", label='- 1 S.D.')
plt.axvline(x=m_age + sd_age, linewidth=1, linestyle= 'dashed', color="green", label='+ 1 S.D.')
# + colab={"base_uri": "https://localhost:8080/"} id="OM0B4OovApJH" outputId="3cd40e21-90bb-4c18-cdab-a03719c627c2"
mytable = heart.groupby(['Sex']).size()
print(mytable)
# + id="rAgkdbXLA9u4"
#Excursus to Operators
# Subset of female patients
heart_female = heart[heart.Sex == 'F']
# Subset of male patients
heart_male = heart[heart.Sex == 'M']
# + colab={"base_uri": "https://localhost:8080/", "height": 143} id="kUcqnRFsSYTR" outputId="7383051c-8d5b-441e-d8c5-2abe9097b529"
# Recoding HeartDisease into a string variable HeartDisease_cat
heart.loc[(heart['HeartDisease']==0),"HeartDisease_cat"]= "no_enfermo"
heart.loc[(heart['HeartDisease']==1),"HeartDisease_cat"]= "enfermo"
# Quality control
pd.crosstab(heart.HeartDisease, heart.HeartDisease_cat)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="wwVaFrjkajZy" outputId="70675fb7-e172-4e3d-d2a2-db30f3fd6125"
# Recoding Age into a string variable Age_cat2
heart.loc[(heart['Age']<40),"Age_cat2"]= "menores_de_40"
heart.loc[((heart['Age']>=40) & (heart['Age']<60)),"Age_cat2"]= "menores_de_60"
heart.loc[(heart['Age']>=60),"Age_cat2"]= "mayores_de_60"
##### Quality control?
plt.scatter( heart.Age, heart.Age_cat2, s=1)
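# An equivalent and less error-prone way to build such age bins is pandas.cut; a small self-contained sketch with hypothetical ages:

```python
import pandas as pd

ages = pd.Series([30, 45, 59, 60, 75])
age_cat = pd.cut(ages,
                 bins=[0, 40, 60, 200],    # intervals [0,40), [40,60), [60,200)
                 right=False,
                 labels=['menores_de_40', 'menores_de_60', 'mayores_de_60'])
print(age_cat.tolist())
```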
# + colab={"base_uri": "https://localhost:8080/", "height": 318} id="FJ4A0hMTomHc" outputId="98755002-a1b3-40bf-c25f-e1295b973d71"
# Recode Age into three groups using the mean ± 1 SD as cutting points
#Compute & store the cutting points
res = heart['Age'].describe()
# Store parameters as numbers
m = res[1]
sd = res[2]
n = res[0]
### Recode 2
heart.loc[ (heart['Age']<(m-sd)) ,"Age_cat2"]= "menores_de_40"
heart.loc[ ((heart['Age']>(m-sd)) & (heart['Age']<(m+sd))) ,"Age_cat2"]= "menores_de_60"
heart.loc[ (heart['Age']>(m+sd)) ,"Age_cat2"]= "mayores_de_60"
heart.Age_cat2.describe()
plt.hist(heart.Age_cat2, edgecolor='black')
# + colab={"base_uri": "https://localhost:8080/", "height": 334} id="mv6RRKhXwJKq" outputId="f80c938b-589d-413a-848f-50f934bd8f37"
#Descriptive comparison, Cholesterol and Sex:
#1. Describe the two variables involved in hypothesis
#CHOLESTEROL
heart.Cholesterol.describe()
plt.hist(heart.Cholesterol, edgecolor='black')
# + colab={"base_uri": "https://localhost:8080/", "height": 421} id="UwJ-Jrbux1Nw" outputId="35ec906a-1679-47ad-91f1-769dcf175fce"
#SEX
mytablesex = heart.groupby(['Sex']).size()
print(mytablesex)
n=mytablesex.sum()
mytablesex2 = (mytablesex/n)*100
print(mytablesex2)
n=mytablesex.sum()
bar_list = ['Female', 'Male']
plt.bar(bar_list, mytablesex2, edgecolor='black')
# + colab={"base_uri": "https://localhost:8080/"} id="bh5n3UWg9dYF" outputId="707a2b27-a271-4e58-9d03-c9aaae7a0688"
#2. Perform the numeric test: t.test
#Descriptive comparison:
Cholesterol = heart.Cholesterol.describe()
m_cho = Cholesterol[1]
print(m_cho)
#heart.groupby('Cholesterol_cat').Cholesterol.mean()
# + colab={"base_uri": "https://localhost:8080/", "height": 469} id="QRpEvj2d_0IF" outputId="de0291b3-9004-43b7-bd7f-6a9343124981"
################################THE EXERCISE TO SUBMIT STARTS HERE###############################################################################
#WE CHOOSE SEX AND CHOLESTEROL AS THE STUDY VARIABLES
#Descriptive comparison:
print(heart.groupby('Sex').Cholesterol.mean())
#Statistical comparison:
#Extract the two sub samples and store them in two objects
Cholesterol_female=heart.loc[heart.Sex=='F', "Cholesterol"]
Cholesterol_male=heart.loc[heart.Sex=='M', "Cholesterol"]
res = stats.f_oneway(Cholesterol_female,Cholesterol_male)
print(res)
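With only two groups, the one-way ANOVA used here is equivalent to an independent two-sample t-test: the F statistic equals the squared t statistic and the p-values coincide. A quick check on toy data (random numbers, not the heart dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(200, 30, size=50)   # toy stand-in for one group's cholesterol
b = rng.normal(210, 30, size=60)   # toy stand-in for the other group

f_res = stats.f_oneway(a, b)
t_res = stats.ttest_ind(a, b)      # equal-variance t-test, matching ANOVA's assumption

print(f_res.statistic, t_res.statistic ** 2)  # F equals t squared
print(f_res.pvalue, t_res.pvalue)             # identical p-values
```

So for the two-level Sex variable, `f_oneway` and `ttest_ind` lead to the same conclusion.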
#CI meanplot
#Graphic comparison: confidence intervals for the means
plt.figure(figsize=(5,5))
ax = sns.pointplot(x="Sex", y="Cholesterol", data=heart,capsize=0.05, ci=95, join=0, order=['F', 'M'])
ax.set_ylabel('Cholesterol')
plt.yticks(np.arange(150, 280, step=25))
plt.ylim(150,280)
plt.axhline(y=heart.Cholesterol.mean(),linewidth=1,linestyle= 'dashed',color="green")
props = dict(boxstyle='round', facecolor='white', lw=0.5)
# plt.text(1.5, 5000, 'Mean: 4504.3''\n''n: 731' '\n' 'F: 40.06''\n' 'Pval.: 0.000', bbox=props)
plt.text(0.35,258,'Mean:198.8''\n''n:918''\n' 'Pval.:9.58362487285248e-10', bbox=props)
plt.xlabel('Female and Male')
plt.title('Figure 1. Average Cholesterol by Sex.''\n')
# WE OBSERVE THAT THE P-VALUE IS VERY CLOSE TO 0, SO WE REJECT THE NULL HYPOTHESIS
# WE CAN THEREFORE SAY THAT CHOLESTEROL DOES DIFFER BY SEX
# + colab={"base_uri": "https://localhost:8080/", "height": 504} id="0MBDE1MznqZy" outputId="e3e5bca3-b74a-4318-be39-e6cf11ec663e"
# WE CHOOSE CHEST PAIN TYPE AND AGE AS THE SUBJECT OF STUDY
#Descriptive comparison:
print(heart.groupby('ChestPainType').Age.mean())
#Statistical comparison:
#Extract the two sub samples and store them in two objects
Age_ASY=heart.loc[heart.ChestPainType=='ASY', "Age"]
Age_ATA=heart.loc[heart.ChestPainType=='ATA', "Age"]
Age_NAP=heart.loc[heart.ChestPainType=='NAP', "Age"]
Age_TA=heart.loc[heart.ChestPainType=='TA', "Age"]
res = stats.f_oneway(Age_ASY,Age_ATA,Age_NAP,Age_TA)
print(res)
#CI meanplot
#Graphic comparison: confidence intervals for the means
plt.figure(figsize=(5,5))
ax = sns.pointplot(x="ChestPainType", y="Age", data=heart,capsize=0.05, ci=95, join=0, order=['ASY', 'ATA','NAP','TA'])
ax.set_ylabel('Age')
plt.yticks(np.arange(44, 60, step=2))
plt.ylim(44,60)
plt.axhline(y=heart.Age.mean(),linewidth=1,linestyle= 'dashed',color="green")
props = dict(boxstyle='round', facecolor='white', lw=0.5)
# plt.text(1.5, 5000, 'Mean: 4504.3''\n''n: 731' '\n' 'F: 40.06''\n' 'Pval.: 0.000', bbox=props)
plt.text(-0.4,57.5,'Mean:53.51''\n''n:918''\n' 'Pval.:1.136820472395362e-10', bbox=props)
plt.xlabel('ChestPainType')
plt.title('Figure 2. Average Age by ChestPainType.''\n')
# WE OBSERVE THAT THE P-VALUE IS VERY CLOSE TO 0, SO WE REJECT THE NULL HYPOTHESIS
# WE CAN THEREFORE SAY THAT AVERAGE AGE DOES DIFFER BY CHEST PAIN TYPE
# + id="SRcMINCwyPtF"
| 1. FUNDAMENTOS/3. PROGRAMACION ESTADISTICA CON PYTHON/3. my project/Part 2 - Google Colab/1. heart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cateto/python4NLP/blob/main/colab/stemming_lemmatization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="oDOjtpgLFAV5" outputId="f96f6954-de17-4d07-c774-7cb3cf3ddb14"
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
n=WordNetLemmatizer()
words=['policy', 'doing', 'organization', 'have', 'going', 'love', 'lives', 'fly', 'dies', 'watched', 'has', 'starting']
print([n.lemmatize(w) for w in words])
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="xE1t7g77FtkN" outputId="6286510b-65d8-41c7-93e5-09cb509f2768"
n.lemmatize('dies', 'v')
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="8-LxHujEGfj-" outputId="5abcfab8-15cb-4a75-abcb-63bf8182dc81"
n.lemmatize('watched', 'v')
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="FYYXefH_GhGm" outputId="3d45b00b-b65a-490d-92ce-71717788ab5d"
n.lemmatize('has', 'v')
# + colab={"base_uri": "https://localhost:8080/"} id="uZvzvAdmGiH2" outputId="d69f08df-9340-41b8-d77a-9c3626f8c443"
print([n.lemmatize(w) for w in words])
# + colab={"base_uri": "https://localhost:8080/"} id="6owf1yWsGkVW" outputId="4b92f2b7-bef4-485b-a6aa-fc280d07591a"
import nltk
nltk.download('punkt')
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
s = PorterStemmer()
text="This was not the map we found in Billy Bones's chest, but an accurate copy, complete in all things--names and heights and soundings--with the single exception of the red crosses and the written notes."
words=word_tokenize(text)
print(words)
# + colab={"base_uri": "https://localhost:8080/"} id="Lgvrwx2PG6MO" outputId="83b826a9-626e-4c91-8332-a648ef0ba0b2"
print([s.stem(w) for w in words])
# + colab={"base_uri": "https://localhost:8080/"} id="ns2p20rWG_7-" outputId="776d4154-8f36-45e1-b844-f0844d847012"
words=['formalize', 'allowance', 'electricical']
print([s.stem(w) for w in words])
# + colab={"base_uri": "https://localhost:8080/"} id="w9tbp6qpHJY-" outputId="53cee92b-b283-4275-c255-f5224e82876e"
from nltk.stem import PorterStemmer
s=PorterStemmer()
words=['policy', 'doing', 'organization', 'have', 'going', 'love', 'lives', 'fly', 'dies', 'watched', 'has', 'starting']
print([s.stem(w) for w in words])
# + colab={"base_uri": "https://localhost:8080/"} id="malTOD01Ha6W" outputId="1aa44639-75c9-46d8-e4a6-49cc15483ed4"
from nltk.stem import LancasterStemmer
l=LancasterStemmer()
words=['policy', 'doing', 'organization', 'have', 'going', 'love', 'lives', 'fly', 'dies', 'watched', 'has', 'starting']
print(type(l.stem(w) for w in words))  # a bare generator expression, so this prints <class 'generator'>
print([l.stem(w) for w in words])
# + colab={"base_uri": "https://localhost:8080/"} id="7cneZlprHo8O" outputId="fc0e3363-3632-436f-e305-89a9e0d1ba4f"
# version
import sys
print(sys.version)
# + colab={"base_uri": "https://localhost:8080/"} id="rM_OIEfrJfn2" outputId="440deb01-8c46-42b7-c947-c3fcf1a7976b"
# generator test
def gen():
yield 1
yield 2
yield 3
g = gen()
print([x for x in g])
# + [markdown] id="oIQuqrfVK3KY"
# ### Stemming
# - am → am
# - the going → the go
# - having → hav
#
# ### Lemmatization
# - am → be
# - the going → the going
# - having → have
# + id="D8nW0RfVLMSm"
| colab/stemming_lemmatization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from ipyaladin import Aladin
a = Aladin(target='M 51', fov=1.2)
a
a.add_overlay_from_stcs('Polygon ICRS 202.63748 47.24951 202.46382 47.32391 202.46379 47.32391 202.45459 47.32391 202.34527 47.20597 202.34527 47.20596 202.34529 47.19710 202.51870 47.12286 202.52789 47.12286 202.52791 47.12286 202.63746 47.24063 202.63749 47.24949\nPolygon J2000 202.74977 47.36958 202.57592 47.44415 202.57585 47.44416 202.56666 47.44416 202.45683 47.32632 202.45683 47.31746 202.63051 47.24302 202.63970 47.24302 202.74978 47.36069 202.74982 47.36955\nPolygon J2000 202.52540 47.12904 202.35192 47.20325 202.34273 47.20325 202.23391 47.08518 202.23395 47.07633 202.23398 47.07630 202.40715 47.00227 202.40721 47.00226 202.41640 47.00226 202.52539 47.12018', {'color': 'red'})
a.add_overlay_from_stcs('Circle ICRS 202.4656816 +47.1999842 0.04', {'color': '#4488ee'})
| examples/9_Footprints_from_STCS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def add(x,y):
return x + y
# +
data = [1,2,3,4,5,6]
def jest_parzysta(x):
if x % 2 == 0:
return True
else:
return False
# filter applies the function to each element and keeps those for which it returns True
a = filter(jest_parzysta, data)
list(a)
# lambda <parameters>: <return value>
b = filter(lambda x: x % 2 == 0, data)
list(b)
#list(map(float, data))
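The same results can be written as list comprehensions, which many style guides prefer over `filter`/`map` with a `lambda` (an equivalent sketch):

```python
data = [1, 2, 3, 4, 5, 6]

# filter(f, data) is equivalent to a comprehension with an if clause
evens = [x for x in data if x % 2 == 0]

# map(float, data) is equivalent to a comprehension applying the function
floats = [float(x) for x in data]

print(evens)   # [2, 4, 6]
print(floats)  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Both forms are lazy-vs-eager trade-offs: `filter`/`map` return iterators, while the comprehensions build lists immediately.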
| intermediate/Lambda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.7 64-bit (''tsfresh'': conda)'
# language: python
# name: python37764bittsfreshcondae79e6058ffaf4c17bd6ddb4945106d0a
# ---
# # Import (Libraries & Functions)
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
import itertools
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
y = np.repeat(np.arange(0,len(classes)),15)
plt.xlim(-0.5, len(np.unique(y))-0.5)
plt.ylim(len(np.unique(y))-0.5, -0.5)
plt.tight_layout()
# # One patient
# +
path = "data/S1/S1.pkl"
with open(path, "rb") as f:
data = pickle.load(f, encoding="latin-1")
# -
data.keys()
data["questionnaire"]
data["subject"]
# ## Activity
activity = pd.DataFrame(data["activity"]).astype(int)
activity.columns = ["Activity"]
print(activity.shape)
activity.head()
activity["Activity"].value_counts()
# - __Sitting (ID: 1)__: Sitting still while reading. The aim of this activity was to generate a motion-artefact-free baseline.
# - __Ascending and descending stairs (ID: 2)__: Climbing six floors up and going down again,
# repeating this twice. This activity was carried out in the main building at our research
# campus. Note: for subjects S1 and S2, going down was performed only once.
# - __Table soccer (ID: 3)__: Playing table soccer, 1 vs. 1 with the supervisor of the data collection.
# - __Cycling (ID: 4)__: Performed outdoors, around our research campus, following a defined route
# of about 2km length with varying road conditions (gravel, paved).
# - __Driving a car (ID: 5)__: This activity started at the parking ground of our research campus and
# was carried out within the area nearby. Subjects followed a defined route which took about
# 15 minutes to complete. The route included driving on different streets in a small city as well
# as driving on country roads.
# - __Lunch break (ID: 6)__: This activity was carried out at the canteen of our research campus. The
# activity included queuing and fetching food, eating, and talking at the table.
# - __Walking (ID: 7)__: This activity was carried out within the premises of our research campus,
# walking back from the canteen to the office, with some detour.
# - __Working (ID: 8)__: Subjects returned to their desk and worked as if not participating in this
# study. For each subject, work mainly consisted of working on a computer
dic_activity = {1: "Sitting", 2: "Stairs", 3: "Soccer", 4: "Cycling", 5: "Driving", 6: "Lunch", 7: "Walking", 8: "Working"}
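The dictionary above can turn activity IDs into readable labels, for example via pandas' `Series.map` (a sketch on dummy IDs, not the real activity column):

```python
import pandas as pd

dic_activity = {1: "Sitting", 2: "Stairs", 3: "Soccer", 4: "Cycling",
                5: "Driving", 6: "Lunch", 7: "Walking", 8: "Working"}

ids = pd.Series([1, 4, 7, 8])       # hypothetical activity IDs
labels = ids.map(dic_activity)      # IDs not in the dict would map to NaN
print(labels.tolist())  # ['Sitting', 'Cycling', 'Walking', 'Working']
```

This is handy for labelling plot legends without touching the numeric column used for modelling.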
# ## Label
label = pd.DataFrame(data["label"])
label
label = pd.DataFrame(np.repeat(label.values,8,axis=0))
label.columns = ["Label"]
# Pad the label frame with its mean until it matches the activity length
# (DataFrame.append was removed in pandas 2.0, so use pd.concat)
if len(label) < len(activity):
    mean_row = label.mean().to_frame().T
    while len(label) < len(activity):
        label = pd.concat([label, mean_row], ignore_index=True)
label
# ## Signal Data
signal = pd.DataFrame(data["signal"])
signal
# ### Chest
# The modalities ‘EDA’, ‘EMG’ and ‘Temp’ only include dummy data and should thus be ignored.
ACC = pd.DataFrame(signal["chest"].ACC)
ACC = ACC.iloc[::175, :]
ACC.columns = ["ACC_x", "ACC_y", "ACC_z"]
ACC.reset_index(drop = True, inplace=True)
ACC
ECG = pd.DataFrame(signal["chest"].ECG)
ECG = ECG.iloc[::175, :]
ECG.reset_index(drop = True, inplace=True)
ECG
Resp = pd.DataFrame(signal["chest"].Resp)
Resp = Resp.iloc[::175, :]
Resp.columns = ["Resp"]
Resp.reset_index(drop = True, inplace=True)
Resp
chest = pd.concat([ACC], sort=False)
chest["Resp"] = Resp
chest["ECG"] = ECG
chest.reset_index(drop=True, inplace=True)
chest = chest.add_prefix('chest_')
chest
# ### Wrist
ACC = pd.DataFrame(signal["wrist"].ACC)
ACC = ACC.iloc[::8, :]
ACC.columns = ["ACC_x", "ACC_y", "ACC_z"]
ACC.reset_index(drop = True, inplace=True)
ACC
EDA = pd.DataFrame(signal["wrist"].EDA)
EDA.columns = ["EDA"]
EDA
BVP = pd.DataFrame(signal["wrist"].BVP)
BVP = BVP.iloc[::16, :]
BVP.columns = ["BVP"]
BVP.reset_index(drop = True, inplace=True)
BVP
TEMP = pd.DataFrame(signal["wrist"].TEMP)
TEMP.columns = ["TEMP"]
TEMP
wrist = pd.concat([ACC], sort=False)
wrist["BVP"] = BVP
wrist["TEMP"] = TEMP
wrist.reset_index(drop = True, inplace=True)
wrist = wrist.add_prefix('wrist_')
wrist
# ### Fusing both
signals = chest.join(wrist)
signals
for k,v in data["questionnaire"].items() :
signals[k] = v
signals
# ### Counting Rpeaks
# We will count, for each 175-sample window (0.25 s, since 700 samples = 1 s), the number of R-peaks during that period.
rpeaks = data['rpeaks']
rpeaks
# +
counted_rpeaks = []
index = 0 # index of rpeak element
time = 175 # time portion
count = 0 # number of rpeaks
while(index < len(rpeaks)):
rpeak = rpeaks[index]
if(rpeak > time): # Rpeak appears after the time portion
counted_rpeaks.append(count)
count = 0
time += 175
else:
count += 1
index += 1
# The rpeaks will probably end before the time portion so we need to fill the last portions with 0
if(len(counted_rpeaks) < 36848):
while(len(counted_rpeaks) < 36848):
counted_rpeaks.append(0)
# -
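The counting loop above can be replaced by a vectorized numpy expression: integer-divide each R-peak position by the window length and count occurrences per window (a sketch assuming `rpeaks` holds sample indices; window boundaries are treated half-open, which can differ from the loop at exact multiples of 175):

```python
import numpy as np

def count_rpeaks(rpeaks, window=175, n_windows=36848):
    """Count R-peaks falling in each consecutive `window`-sample bin."""
    idx = (np.asarray(rpeaks) // window).astype(int)
    idx = idx[idx < n_windows]                 # drop peaks past the last window
    return np.bincount(idx, minlength=n_windows)  # zeros fill empty windows automatically

# tiny example: peaks at samples 10, 100, 200, 400 with window=175
print(count_rpeaks([10, 100, 200, 400], window=175, n_windows=4))  # [2 1 1 0]
```

`np.bincount` with `minlength` also covers the tail-filling step, since windows with no peaks are already zero.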
peaks = pd.DataFrame(counted_rpeaks)
peaks.columns = ["Rpeaks"]
peaks
signals = signals.join(peaks)
signals
# ### Fusing with Activity & Label
signals = signals.join(activity)
signals = signals.join(label)
signals
signals['Subject'] = data["subject"]
# ## Visualisation
# +
fig, ax = plt.subplots(figsize=(20, 4))
style = dict(size=10, color='k')
i = 0
x_start = 0
x_end = 0
while(i < len(signals.loc[:, 'Activity'])):
sport_index = signals.loc[i, 'Activity']
if(sport_index != 0):
x_start = i
while(i < len(signals.loc[:, 'Activity'])):
if(signals.loc[i, 'Activity'] != sport_index):
break
else:
i += 1
x_end = i-1
sport = dic_activity[sport_index]
plt.axvspan(xmin=x_start + 60, xmax=x_end, color='#ffb3b3')
ax.text((x_start+x_end)//2 - 500, 35, sport, **style)
x_start = 0
x_end = 0
else :
i += 1
ax.set_ylim(top = 36, bottom = 28)
signals.loc[:, 'wrist_TEMP'].plot(ax=ax)
plt.xlabel("Time", fontsize=15)
plt.ylabel("Temperature", fontsize=15)
# +
fig, ax = plt.subplots(figsize=(20, 4))
style = dict(size=10, color='k')
i = 0
x_start = 0
x_end = 0
while(i < len(signals.loc[:, 'Activity'])):
sport_index = signals.loc[i, 'Activity']
if(sport_index != 0):
x_start = i
while(i < len(signals.loc[:, 'Activity'])):
if(signals.loc[i, 'Activity'] != sport_index):
break
else:
i += 1
x_end = i-1
sport = dic_activity[sport_index]
plt.axvspan(xmin=x_start + 60, xmax=x_end, color='#ffb3b3')
ax.text((x_start+x_end)//2 - 500, 145, sport, **style)
x_start = 0
x_end = 0
else :
i += 1
signals.loc[:, 'Label'].plot(ax=ax)
plt.xlabel("Time", fontsize=15)
plt.ylabel("Heartrate", fontsize=15)
# -
# ## Features
# Since we're working with only one subject, their personal attributes carry no information for the prediction, so we drop them.
remove = ["Subject", "WEIGHT", "Gender", "AGE", "HEIGHT", "SKIN", "SPORT", "Activity"]
features = [column for column in list(signals.columns) if column not in remove]
features
# ## Train-test split
X = signals[features].values
y = signals.Activity
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state = 42, stratify = y)
# ## Model
# ### Decision Tree
# +
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)
y_pred = tree.predict(X_test)
# -
score = tree.score(X_test, y_test)
print(score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
plt.figure(figsize = (10,5))
plot_confusion_matrix(cm, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8], normalize=True)
feat_importances = pd.Series(tree.feature_importances_, index=features)
feat_importances.nlargest(12).plot(kind='barh')
# ### Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators = 65, random_state = 42)
rf.fit(X_train, y_train);
predictions = rf.predict(X_test)
score = rf.score(X_test, y_test)
print(score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predictions)  # use the random forest predictions, not the tree's y_pred
np.set_printoptions(precision=2)
plt.figure(figsize = (10,5))
plot_confusion_matrix(cm, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8], normalize=True)
feat_importances = pd.Series(rf.feature_importances_, index=features)
feat_importances.nlargest(12).plot(kind='barh')
# # All patients: data aggregation
# Now let us join the data of multiple patients to train our model.
def load_data(path):
with open(path, "rb") as f:
data = pickle.load(f, encoding="latin-1")
signal = pd.DataFrame(data["signal"])
ACC = pd.DataFrame(signal["chest"].ACC)
ACC = ACC.iloc[::175, :]
ACC.columns = ["ACC_x", "ACC_y", "ACC_z"]
ACC.reset_index(drop = True, inplace=True)
ECG = pd.DataFrame(signal["chest"].ECG)
ECG = ECG.iloc[::175, :]
ECG.reset_index(drop = True, inplace=True)
Resp = pd.DataFrame(signal["chest"].Resp)
Resp = Resp.iloc[::175, :]
Resp.columns = ["Resp"]
Resp.reset_index(drop = True, inplace=True)
chest = pd.concat([ACC], sort=False)
chest["Resp"] = Resp
chest["ECG"] = ECG
chest.reset_index(drop=True, inplace=True)
chest = chest.add_prefix('chest_')
ACC = pd.DataFrame(signal["wrist"].ACC)
ACC = ACC.iloc[::8, :]
ACC.columns = ["ACC_x", "ACC_y", "ACC_z"]
ACC.reset_index(drop = True, inplace=True)
EDA = pd.DataFrame(signal["wrist"].EDA)
EDA.columns = ["EDA"]
BVP = pd.DataFrame(signal["wrist"].BVP)
BVP = BVP.iloc[::16, :]
BVP.columns = ["BVP"]
BVP.reset_index(drop = True, inplace=True)
TEMP = pd.DataFrame(signal["wrist"].TEMP)
TEMP.columns = ["TEMP"]
wrist = pd.concat([ACC], sort=False)
wrist["BVP"] = BVP
wrist["TEMP"] = TEMP
wrist.reset_index(drop = True, inplace=True)
wrist = wrist.add_prefix('wrist_')
signals = chest.join(wrist)
for k,v in data["questionnaire"].items() :
signals[k] = v
rpeaks = data['rpeaks']
counted_rpeaks = []
index = 0 # index of rpeak element
time = 175 # time portion
count = 0 # number of rpeaks
while(index < len(rpeaks)):
rpeak = rpeaks[index]
if(rpeak > time): # Rpeak appears after the time portion
counted_rpeaks.append(count)
count = 0
time += 175
else:
count += 1
index += 1
# The rpeaks will probably end before the time portion so we need to fill the last portions with 0
if(len(counted_rpeaks) < np.size(signals, axis = 0)):
while(len(counted_rpeaks) < np.size(signals, axis = 0)):
counted_rpeaks.append(0)
peaks = pd.DataFrame(counted_rpeaks)
peaks.columns = ["Rpeaks"]
signals = signals.join(peaks)
activity = pd.DataFrame(data["activity"]).astype(int)
activity.columns = ["Activity"]
signals = signals.join(activity)
label = pd.DataFrame(data["label"])
label = pd.DataFrame(np.repeat(label.values,8,axis=0))
label.columns = ["Label"]
    # Pad the label frame with its mean until it matches the activity length
    # (DataFrame.append was removed in pandas 2.0, so use pd.concat)
    if len(label) < len(activity):
        mean_row = label.mean().to_frame().T
        while len(label) < len(activity):
            label = pd.concat([label, mean_row], ignore_index=True)
signals = signals.join(label)
signals['Subject'] = data["subject"]
return signals
dataframes = {"d1" : signals}
for i in range(2,16):
dataframes["d" + str(i)] = load_data("data/S" + str(i) + "/S" + str(i) + ".pkl")
dataframes.keys()
# DataFrame.append was removed in pandas 2.0; concatenate all subjects in one call
df = pd.concat([dataframes["d" + str(i)] for i in range(1, 16)])
df.shape
# ## Visualisation of our patients
attributes = ["Subject","WEIGHT", "Gender", "AGE", "HEIGHT", "SKIN", "SPORT"]
patients = pd.concat(
    [dataframes["d" + str(i)].loc[[0], attributes] for i in range(1, 16)],
    ignore_index=True
)
patients
sns.set(font_scale=1.4)
patients["Gender"].value_counts().plot(kind='bar', figsize=(7, 6), rot=0)
plt.xlabel("Gender", labelpad=14)
plt.ylabel("Count of People", labelpad=14)
sns.set(font_scale=1.4)
patients["SPORT"].value_counts(sort = False).plot(kind='bar', figsize=(7, 6), rot=0)
plt.xlabel("Sport", labelpad=14)
plt.ylabel("Count of People", labelpad=14)
patients["AGE"].hist(figsize=(7, 6))
plt.xlabel("Age", labelpad=14)
plt.ylabel("Count of People", labelpad=14)
# ## Features
remove = ["Subject", "Activity"]
features = [column for column in list(df.columns) if column not in remove]
features
# note the leading space in the raw values (' f', ' m')
df['Gender'].replace(' f', 0, inplace=True)
df['Gender'].replace(' m', 1, inplace=True)
# ## Train-test split
X = df[features].values
y = df.Activity
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state = 42, stratify = y)
# ## Model
# ### Decision Tree
# +
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)
y_pred = tree.predict(X_test)
# -
score = tree.score(X_test, y_test)
print(score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
plt.figure(figsize = (10,5))
plot_confusion_matrix(cm, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8], normalize=True)
feat_importances = pd.Series(tree.feature_importances_, index=features)
feat_importances.nlargest(12).plot(kind='barh')
# ### Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators = 65, random_state = 42)
rf.fit(X_train, y_train);
predictions = rf.predict(X_test)
score = rf.score(X_test, y_test)
print(score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predictions)  # use the random forest predictions, not the tree's y_pred
np.set_printoptions(precision=2)
plt.figure(figsize = (10,5))
plot_confusion_matrix(cm, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8], normalize=True)
feat_importances = pd.Series(rf.feature_importances_, index=features)
feat_importances.nlargest(12).plot(kind='barh')
# # Better train/test split
# Fearing that our models overfit, since we have very few patients, we will train the model on the first 12 patients and then see how it scores on the remaining held-out patients it has never seen.
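One way to make this subject-wise evaluation systematic is a grouped split, where every row of a subject lands entirely in train or entirely in test (a minimal numpy sketch; `subjects` stands in for the `Subject` column):

```python
import numpy as np

def group_split(subjects, test_subjects):
    """Boolean masks for a subject-wise train/test split:
    all rows of a subject go to the same side."""
    subjects = np.asarray(subjects)
    test_mask = np.isin(subjects, test_subjects)
    return ~test_mask, test_mask

subjects = np.array(["S1", "S1", "S2", "S3", "S3", "S3"])
train_mask, test_mask = group_split(subjects, ["S3"])
print(subjects[train_mask].tolist(), subjects[test_mask].tolist())
# ['S1', 'S1', 'S2'] ['S3', 'S3', 'S3']
```

Scikit-learn's grouped cross-validation utilities generalize the same idea to repeated folds.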
# ## Features
remove = ["Subject", "Activity"]
features = [column for column in list(df.columns) if column not in remove]
features
# ## Train-test split
df.reset_index(drop = True, inplace=True)
test_start = np.where(df['Subject'] == 'S13')[0].tolist()[0]  # first row of the held-out patients
X = df.loc[:test_start-1, features].values
y = df.loc[:test_start-1, "Activity"].values
X_test = df.loc[test_start:, features].values
y_test = df.loc[test_start:, "Activity"].values
# +
#from sklearn.model_selection import train_test_split
#X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state = 42, stratify = y)
# -
# ## Model
# ### Decision Tree
# +
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X, y)
y_pred = tree.predict(X_test)
# -
score = tree.score(X_test, y_test)
print(score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
plt.figure(figsize = (10,5))
plot_confusion_matrix(cm, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8], normalize=True)
feat_importances = pd.Series(tree.feature_importances_, index=features)
feat_importances.nlargest(12).plot(kind='barh')
# ### Random Forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators = 65, random_state = 42, max_depth=15)
rf.fit(X, y);
predictions = rf.predict(X_test)
score = rf.score(X_test, y_test)
print(score)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predictions)  # use the random forest predictions, not the tree's y_pred
np.set_printoptions(precision=2)
plt.figure(figsize = (10,5))
plot_confusion_matrix(cm, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8], normalize=True)
feat_importances = pd.Series(rf.feature_importances_, index=features)
feat_importances.nlargest(12).plot(kind='barh')
# We can see that using different patients for the test makes the score drop drastically. The model has a hard time generalizing, and the previous models surely overfitted, since they were tested on the same patients they were trained on. With only 15 patients, it is very difficult to build a model that performs well on unseen data.
# ## Saving model
import pickle
filename = 'final_model.sav'
pickle.dump(rf, open(filename, 'wb'))
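Loading the model back follows the same pattern in reverse; a minimal round-trip sketch with a plain stand-in object and an in-memory buffer (the same `dump`/`load` calls work with the saved file and the fitted `rf`):

```python
import io
import pickle

# stand-in object; a fitted sklearn model pickles the same way
obj = {"n_estimators": 65, "max_depth": 15}

buf = io.BytesIO()        # in-memory stand-in for the .sav file
pickle.dump(obj, buf)
buf.seek(0)               # rewind before reading back
loaded = pickle.load(buf)

print(loaded == obj)  # True
```

Note that unpickling executes arbitrary code, so model files should only be loaded from trusted sources.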
| notebooks/Sample-data-analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Geocode London Stations
#
# Source: https://data.london.gov.uk/dataset/london-underground-performance-reports
# +
import pandas
from cartoframes.auth import set_default_credentials
from cartoframes.data.services import Geocoding
set_default_credentials('creds.json')
# -
file_path = 'https://libs.cartocdn.com/cartoframes/samples/london_stations.xls'
df = pandas.read_excel(file_path, header=6, sheet_name=1)
df.head()
# +
df = df.rename(columns={
"Saturday.1": "saturday_exit",
"Sunday.1": "sunday_exit",
"Weekday.1": "weekday_exit",
"Saturday": "saturday_entry",
"Sunday": "sunday_entry",
"Weekday": "weekday_entry"
})
df.head()
# +
gc = Geocoding()
london_stations_gdf, london_stations_metadata = gc.geocode(
df,
street='Borough',
city={'value': 'London'},
country={'value': 'United Kingdom'}
)
# -
london_stations_gdf.head()
# +
from cartoframes.viz import Map, Layer
Map(Layer(london_stations_gdf), viewport={'zoom': 10, 'lat': 51.53, 'lng': -0.09})
| docs/examples/use_cases/geocoding_london_stations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 (fastai)
# language: python
# name: fastai
# ---
# https://www.youtube.com/watch?v=OGzPmgsI-pQ (no code)
# +
def insertion_sort(arr):
    """
    Sorts an array in place in O(n^2) time complexity.
    """
    for i in range(1, len(arr)):
        j = i - 1
        # check j >= 0 BEFORE indexing arr[j], otherwise arr[-1] (the last
        # element) is compared once j runs off the front of the array
        while j >= 0 and arr[i] < arr[j]:
            arr[i], arr[j] = arr[j], arr[i]
            i -= 1
            j -= 1
# -
arr = [4,5,2,28,1,8]
insertion_sort(arr)
arr
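A quick property check of insertion sort against Python's built-in `sorted` (a self-contained sketch; note the `j >= 0` guard must be evaluated before indexing `arr[j]`):

```python
import random

def insertion_sort(arr):
    """Sort `arr` in place; O(n^2) worst case."""
    for i in range(1, len(arr)):
        j = i - 1
        # bubble the new element left while it is smaller than its neighbour
        while j >= 0 and arr[j + 1] < arr[j]:
            arr[j + 1], arr[j] = arr[j], arr[j + 1]
            j -= 1

random.seed(0)
for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    expected = sorted(xs)
    insertion_sort(xs)
    assert xs == expected
print("all checks passed")
```

Randomized comparison against a trusted oracle like `sorted` is a cheap way to catch boundary bugs such as the negative-index comparison.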
| insertion sort.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="6dfJPT-2XMTB"
# # Install
# + colab={"base_uri": "https://localhost:8080/"} id="a193aGJWVaqb" outputId="8bcd479a-90f3-47ac-876d-6f50e0e9b243"
# !pip install sentencepiece
# + [markdown] id="JHkHg6XAXoyK"
# # Env
# + id="WkYXFwcBXJDG"
import os
import random
import shutil
import json
import zipfile
import matplotlib.pyplot as plt
import numpy as np
import sentencepiece as spm
import tensorflow as tf
import tensorflow.keras.backend as K
from tqdm.notebook import tqdm
# + id="nvjyruUlXtlR"
# random seed initialize
random_seed = 1234
random.seed(random_seed)
np.random.seed(random_seed)
tf.random.set_seed(random_seed)
# + colab={"base_uri": "https://localhost:8080/"} id="BC3fXkhdYcYt" outputId="148acb09-8745-4524-8413-75a01a13d87d"
# !nvidia-smi
# + colab={"base_uri": "https://localhost:8080/"} id="xVRdxYReYeQj" outputId="905040c7-eee3-4636-8841-5035520550e9"
# google drive mount
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="byCIiLJBbFHh" outputId="7e8861dc-5c4d-4ccb-91ed-c4131f27ed94"
# data dir
data_dir = '/content/drive/MyDrive/Data/nlp'
os.listdir(data_dir)
# + id="2H0BLydCb7lg"
| colab_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Heat transport
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section1'></a>
#
# ## 1. Spatial patterns of insolation and surface temperature
# ____________
#
# Let's take a look at seasonal and spatial pattern of insolation and compare this to the zonal average surface temperatures.
# + slideshow={"slide_type": "skip"}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import climlab
from climlab import constants as const
# + slideshow={"slide_type": "slide"}
# Calculate daily average insolation as function of latitude and time of year
lat = np.linspace( -90., 90., 500 )
days = np.linspace(0, const.days_per_year, 365 )
Q = climlab.solar.insolation.daily_insolation( lat, days )
# + slideshow={"slide_type": "-"}
## daily surface temperature from NCEP reanalysis
ncep_url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/'
ncep_temp = xr.open_dataset( ncep_url + "surface_gauss/skt.sfc.day.1981-2010.ltm.nc", decode_times=False)
ncep_temp_zon = ncep_temp.skt.mean(dim='lon')
# + slideshow={"slide_type": "skip"}
def make_nice_axes(axs):
for ax in axs:
ax.set_xlabel('Days since January 1', fontsize=16 )
ax.set_ylabel('Latitude', fontsize=16 )
ax.set_yticks([-90,-60,-30,0,30,60,90])
ax.grid()
# + slideshow={"slide_type": "slide"}
fig = plt.figure(figsize=(12,6))
ax1 = fig.add_subplot(121)
CS = ax1.contour( days, lat, Q , levels = np.arange(0., 600., 50.) )
ax1.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10)
ax1.set_title('Daily average insolation', fontsize=18 )
ax1.contourf ( days, lat, Q, levels=[-100., 0.], colors='k' )
ax2 = fig.add_subplot(122)
CS = ax2.contour( (ncep_temp.time - ncep_temp.time[0])/const.hours_per_day, ncep_temp.lat,
ncep_temp_zon.T, levels=np.arange(210., 310., 10. ) )
ax2.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10)
ax2.set_title('Observed zonal average surface temperature', fontsize=18 )
make_nice_axes([ax1,ax2])
# + [markdown] slideshow={"slide_type": "slide"}
# This figure reveals something fairly obvious, but still worth thinking about:
#
# **Warm temperatures are correlated with high insolation**. It's warm where the sun shines.
#
# More specifically, we can see a few interesting details here:
#
# - The seasonal cycle is weakest in the tropics and strongest in the high latitudes.
# - The warmest temperatures occur slightly NORTH of the equator
# - The highest insolation occurs at the poles at summer solstice.
# + [markdown] slideshow={"slide_type": "slide"}
# The local surface temperature does not correlate perfectly with local insolation for two reasons:
#
# - the climate system has heat capacity, which buffers some of the seasonal variations
# - the climate system moves energy around in space!
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section2'></a>
#
# ## 2. Calculating Radiative-Convective Equilibrium as a function of latitude
# ____________
#
# As a first step to understanding the effects of **heat transport by fluid motions** in the atmosphere and ocean, we can calculate **what the surface temperature would be without any motion**.
#
# Let's calculate a **radiative-convective equilibrium** state for every latitude band.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Putting realistic insolation into an RCM
#
# This code demonstrates how to create a model with both latitude and vertical dimensions.
# + slideshow={"slide_type": "slide"}
# A two-dimensional domain
state = climlab.column_state(num_lev=30, num_lat=40, water_depth=10.)
# Specified relative humidity distribution
h2o = climlab.radiation.ManabeWaterVapor(name='Fixed Relative Humidity', state=state)
# Hard convective adjustment
conv = climlab.convection.ConvectiveAdjustment(name='Convective Adjustment', state=state, adj_lapse_rate=6.5)
# Daily insolation as a function of latitude and time of year
sun = climlab.radiation.DailyInsolation(name='Insolation', domains=state['Ts'].domain)
# Couple the radiation to insolation and water vapor processes
rad = climlab.radiation.RRTMG(name='Radiation',
state=state,
specific_humidity=h2o.q,
albedo=0.125,
insolation=sun.insolation,
coszen=sun.coszen)
model = climlab.couple([rad,sun,h2o,conv], name='RCM')
print(model)
# + slideshow={"slide_type": "slide"}
model.compute_diagnostics()
# + slideshow={"slide_type": "-"}
fig, ax = plt.subplots()
ax.plot(model.lat, model.insolation)
ax.set_xlabel('Latitude')
ax.set_ylabel('Insolation (W/m2)');
# + [markdown] slideshow={"slide_type": "slide"}
# This new insolation process uses the same code we've already been working with to compute realistic distributions of insolation. Here we are using
# ```
# climlab.radiation.DailyInsolation
# ```
# but there is also
#
# ```
# climlab.radiation.AnnualMeanInsolation
# ```
# for models in which you prefer to suppress the seasonal cycle and prescribe a time-invariant insolation.
# + [markdown] slideshow={"slide_type": "slide"}
# The following code will just integrate the model forward in four steps in order to get snapshots of insolation at the solstices and equinoxes.
# +
# model is initialized on Jan. 1
# integrate forward just under 1/4 year... should get about to the NH spring equinox
model.integrate_days(31+28+22)
Q_spring = model.insolation.copy()
# Then forward to NH summer solstice
model.integrate_days(31+30+31)
Q_summer = model.insolation.copy()
# and on to autumnal equinox
model.integrate_days(30+31+33)
Q_fall = model.insolation.copy()
# and finally to NH winter solstice
model.integrate_days(30+31+30)
Q_winter = model.insolation.copy()
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot(model.lat, Q_spring, label='Spring')
ax.plot(model.lat, Q_summer, label='Summer')
ax.plot(model.lat, Q_fall, label='Fall')
ax.plot(model.lat, Q_winter, label='Winter')
ax.legend()
ax.set_xlabel('Latitude')
ax.set_ylabel('Insolation (W/m2)');
# + [markdown] slideshow={"slide_type": "fragment"}
# This just serves to demonstrate that the `DailyInsolation` process is doing something sensible.
# + [markdown] slideshow={"slide_type": "slide"}
# Note that we could also pass different orbital parameters to this subprocess. They default to present-day values, which is what we are using here.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Find the steady seasonal cycle of temperature in radiative-convective equilibrium
# -
model.integrate_years(4.)
model.integrate_years(1.)
# + [markdown] slideshow={"slide_type": "slide"}
# All climlab `Process` objects have an attribute called `timeave`.
#
# This is a dictionary of time-averaged diagnostics, which are automatically calculated during the most recent call to `integrate_years()` or `integrate_days()`.
# -
model.timeave.keys()
# + [markdown] slideshow={"slide_type": "slide"}
# Here we use the `timeave['insolation']` to plot the annual mean insolation.
#
# (We know it is the *annual* average because the last call to `model.integrate_years` was for exactly 1 year)
# -
fig, ax = plt.subplots()
ax.plot(model.lat, model.timeave['insolation'])
ax.set_xlabel('Latitude')
ax.set_ylabel('Insolation (W/m2)')
# + [markdown] slideshow={"slide_type": "slide"}
# ### Compare annual average temperature in RCE to the zonal-, annual mean observations.
# -
# Plot annual mean surface temperature in the model,
# compare to observed annual mean surface temperatures
fig, ax = plt.subplots()
ax.plot(model.lat, model.timeave['Ts'], label='RCE')
ax.plot(ncep_temp_zon.lat, ncep_temp_zon.mean(dim='time'), label='obs')
ax.set_xticks(range(-90,100,30))
ax.grid(); ax.legend();
# + [markdown] slideshow={"slide_type": "fragment"}
# Our modeled RCE state is **far too warm in the tropics**, and **too cold in the mid- to high latitudes.**
# + [markdown] slideshow={"slide_type": "slide"}
# ### Vertical structure of temperature: comparing RCE to observations
# -
# Observed air temperature from NCEP reanalysis
ncep_air = xr.open_dataset( ncep_url + 'pressure/air.mon.1981-2010.ltm.nc', decode_times=False)
level_ncep_air = ncep_air.level
lat_ncep_air = ncep_air.lat
Tzon = ncep_air.air.mean(dim=('time','lon'))
# + slideshow={"slide_type": "skip"}
def make_nice_vaxis(axs):
for ax in axs:
ax.invert_yaxis()
ax.set_xlim(-90,90)
ax.set_xticks([-90, -60, -30, 0, 30, 60, 90])
# + slideshow={"slide_type": "slide"}
# Compare temperature profiles in RCE and observations
contours = np.arange(180., 350., 15.)
fig = plt.figure(figsize=(12,4))
ax1 = fig.add_subplot(1,2,1)
cax1 = ax1.contourf(lat_ncep_air, level_ncep_air, Tzon+const.tempCtoK, levels=contours)
fig.colorbar(cax1)
ax1.set_title('Observed temperature (K)')
ax2 = fig.add_subplot(1,2,2)
field = model.timeave['Tatm'].transpose()
cax2 = ax2.contourf(model.lat, model.lev, field, levels=contours)
fig.colorbar(cax2)
ax2.set_title('RCE temperature (K)')
make_nice_vaxis([ax1,ax2])
# + [markdown] slideshow={"slide_type": "slide"}
# Again, this plot reveals temperatures that are too warm in the tropics, too cold at the poles throughout the troposphere.
#
# Note however that the **vertical temperature gradients** are largely dictated by the convective adjustment in our model. We have parameterized this gradient, and so we can change it by changing our parameter for the adjustment.
#
# We have (as yet) no parameterization for the **horizontal** redistribution of energy in the climate system.
# + [markdown] slideshow={"slide_type": "slide"}
# ### TOA energy budget in RCE equilibrium
#
# Because there is no horizontal energy transport in this model, the TOA radiation budget should be closed (net flux is zero) at all latitudes.
#
# Let's check this by plotting time-averaged shortwave and longwave radiation:
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot(model.lat, model.timeave['ASR'], label='ASR')
ax.plot(model.lat, model.timeave['OLR'], label='OLR')
ax.set_xlabel('Latitude')
ax.set_ylabel('W/m2')
ax.legend(); ax.grid()
# -
# Indeed, the budget is (very nearly) closed everywhere. Each latitude is in energy balance, independent of every other column.
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
# <a id='section3'></a>
#
# ## 3. Observed and modeled TOA radiation budget
# ____________
#
#
# We are going to look at the (time average) TOA budget as a function of latitude to see how it differs from the RCE state we just plotted.
#
# Ideally we would look at actual satellite observations of SW and LW fluxes. Instead, here we will use the NCEP Reanalysis for convenience.
#
# But bear in mind that the radiative fluxes in the reanalysis are a model-generated product; they are not really observations.
# + [markdown] slideshow={"slide_type": "slide"}
# ### TOA budget from NCEP Reanalysis
# -
# Get TOA radiative flux data from NCEP reanalysis
# downwelling SW
dswrf = xr.open_dataset(ncep_url + '/other_gauss/dswrf.ntat.mon.1981-2010.ltm.nc', decode_times=False)
#dswrf = xr.open_dataset(url + 'other_gauss/dswrf')
# upwelling SW
uswrf = xr.open_dataset(ncep_url + '/other_gauss/uswrf.ntat.mon.1981-2010.ltm.nc', decode_times=False)
#uswrf = xr.open_dataset(url + 'other_gauss/uswrf')
# upwelling LW
ulwrf = xr.open_dataset(ncep_url + '/other_gauss/ulwrf.ntat.mon.1981-2010.ltm.nc', decode_times=False)
#ulwrf = xr.open_dataset(url + 'other_gauss/ulwrf')
# + slideshow={"slide_type": "slide"}
ASR = dswrf.dswrf - uswrf.uswrf
OLR = ulwrf.ulwrf
# -
ASRzon = ASR.mean(dim=('time','lon'))
OLRzon = OLR.mean(dim=('time','lon'))
# + slideshow={"slide_type": "slide"}
ticks = [-90, -60, -30, 0, 30, 60, 90]
fig, ax = plt.subplots()
ax.plot(ASRzon.lat, ASRzon, label='ASR')
ax.plot(OLRzon.lat, OLRzon, label='OLR')
ax.set_ylabel('W/m2')
ax.set_xlabel('Latitude')
ax.set_xlim(-90,90); ax.set_ylim(50,310)
ax.set_xticks(ticks);
ax.set_title('Observed annual mean radiation at TOA')
ax.legend(); ax.grid();
# + [markdown] slideshow={"slide_type": "slide"}
# We find that ASR does NOT balance OLR in most locations.
#
# Across the tropics the absorbed solar radiation exceeds the longwave emission to space. The tropics have a **net gain of energy by radiation**.
#
# The opposite is true in mid- to high latitudes: **the Earth is losing energy by net radiation to space** at these latitudes.
# + [markdown] slideshow={"slide_type": "slide"}
# ### TOA budget from the control CESM simulation
#
# Load data from the fully coupled CESM control simulation that we've used before.
# +
casenames = {'cpl_control': 'cpl_1850_f19',
'cpl_CO2ramp': 'cpl_CO2ramp_f19',
'som_control': 'som_1850_f19',
'som_2xCO2': 'som_1850_2xCO2',
}
# The path to the THREDDS server, should work from anywhere
#basepath = 'http://thredds.atmos.albany.edu:8080/thredds/dodsC/CESMA/'
# For better performance if you can access the filesystem (e.g. from JupyterHub)
basepath = '../Data/CESMA/'
casepaths = {}
for name in casenames:
casepaths[name] = basepath + casenames[name] + '/concatenated/'
# make a dictionary of all the CAM atmosphere output
atm = {}
for name in casenames:
path = casepaths[name] + casenames[name] + '.cam.h0.nc'
print('Attempting to open the dataset ', path)
atm[name] = xr.open_dataset(path)
# + slideshow={"slide_type": "slide"}
lat_cesm = atm['cpl_control'].lat
ASR_cesm = atm['cpl_control'].FSNT
OLR_cesm = atm['cpl_control'].FLNT
# +
# extract the last 10 years from the slab ocean control simulation
# and the last 20 years from the coupled control
nyears_slab = 10
nyears_cpl = 20
clim_slice_slab = slice(-(nyears_slab*12),None)
clim_slice_cpl = slice(-(nyears_cpl*12),None)
# For now we're just working with the coupled control simulation
# Take the time and zonal average
ASR_cesm_zon = ASR_cesm.isel(time=clim_slice_slab).mean(dim=('lon','time'))
OLR_cesm_zon = OLR_cesm.isel(time=clim_slice_slab).mean(dim=('lon','time'))
# -
# Now we can make the same plot of ASR and OLR that we made for the observations above.
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot(lat_cesm, ASR_cesm_zon, label='ASR')
ax.plot(lat_cesm, OLR_cesm_zon, label='OLR')
ax.set_ylabel('W/m2')
ax.set_xlabel('Latitude')
ax.set_xlim(-90,90); ax.set_ylim(50,310)
ax.set_xticks(ticks);
ax.set_title('CESM control simulation: Annual mean radiation at TOA')
ax.legend(); ax.grid();
# + [markdown] slideshow={"slide_type": "slide"}
# Essentially the same story as the reanalysis data: there is a **surplus of energy across the tropics** and a net **energy deficit in mid- to high latitudes**.
#
# There are two locations where ASR = OLR: near 35º in each hemisphere.
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
#
# ## 4. The energy budget for a zonal band
# ____________
# + [markdown] slideshow={"slide_type": "slide"}
# ### The basic idea
#
# Through most of the previous notes we have been thinking about **global averages**.
#
# We've been working with an energy budget that looks something like this:
#
# <img src='../images/column_sketch.png' width=200>
# + [markdown] slideshow={"slide_type": "slide"}
# When we start thinking about regional climates, we need to modify our budget to account for the **additional heating or cooling** due to **transport** in and out of the column:
#
# <img src='../images/column_sketch2.png' width=200>
# + [markdown] slideshow={"slide_type": "slide"}
# Conceptually, the additional energy source is the difference between what's coming in and what's going out:
#
# $$ h = \mathcal{H}_{in} - \mathcal{H}_{out} $$
#
# where $h$ is a **dynamic heating rate** in W m$^{-2}$.
# + [markdown] slideshow={"slide_type": "slide"}
# ### A more careful budget
#
# Let’s now consider a thin band of the climate system, of width $\delta \phi$ , and write down a careful energy budget for it.
# -
# <img src='../images/ZonalEnergyBudget_sketch.png' width=400>
# + [markdown] slideshow={"slide_type": "slide"}
# Let $\mathcal{H}(\phi)$ be the total rate of northward energy transport across the latitude line $\phi$, measured in Watts (usually PW).
#
# So the transport into the band is $\mathcal{H}(\phi)$, and the transport out is just $\mathcal{H}(\phi + \delta \phi)$
#
# The dynamic heating rate looks like
#
# $$ h = \frac{\text{transport in} - \text{transport out}}{\text{area of band}} $$
# + [markdown] slideshow={"slide_type": "slide"}
# The surface area of the latitude band is
#
# $$ A = \text{Circumference} ~\times ~ \text{north-south width} $$
#
# $$ A = 2 \pi a \cos \phi ~ \times ~ a \delta \phi $$
#
# $$ A = 2 \pi a^2 \cos\phi ~ \delta\phi $$
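# As a quick numerical check of this area formula (a sketch using only NumPy; the 1º band width is an arbitrary choice), the areas of all the latitude bands should sum to the surface area of the sphere, $4\pi a^2$:

```python
import numpy as np

a = 6.373e6  # Earth radius in meters
dphi = np.deg2rad(1.0)                          # width of each band: 1 degree
phi = np.deg2rad(np.arange(-89.5, 90.0, 1.0))   # band-center latitudes
A_band = 2 * np.pi * a**2 * np.cos(phi) * dphi  # area of each band, m2
print(A_band.sum() / (4 * np.pi * a**2))        # very close to 1
```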
# + [markdown] slideshow={"slide_type": "slide"}
# So we can write the heating rate as
#
# \begin{align*}
# h &= \frac{\mathcal{H}(\phi) - \mathcal{H}(\phi+\delta\phi)}{2 \pi a^2 \cos\phi ~ \delta\phi} \\
# &= -\frac{1}{2 \pi a^2 \cos\phi} \left( \frac{\mathcal{H}(\phi+\delta\phi) - \mathcal{H}(\phi)}{\delta\phi} \right)
# \end{align*}
# -
# Writing it this way, we can see that if the width of the band $\delta \phi$ becomes very small, then the quantity in parentheses is simply the **derivative** $d\mathcal{H}/d\phi$.
# + [markdown] slideshow={"slide_type": "slide"}
# The **dynamical heating rate** in W m$^{-2}$ is thus
#
# $$ h = - \frac{1}{2 \pi a^2 \cos\phi } \frac{\partial \mathcal{H}}{\partial \phi} $$
#
# which is the **convergence of energy transport** into this latitude band: the difference between what's coming in and what's going out.
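# A short numerical check of this formula (a sketch with NumPy; the 5 PW $\cos^2\phi$ transport profile is a made-up toy, not data): for a transport that vanishes at the poles, the heating rate computed by finite differences matches the analytic convergence.

```python
import numpy as np

a = 6.373e6  # Earth radius (m)
phi = np.deg2rad(np.linspace(-89.5, 89.5, 1001))  # avoid cos(phi)=0 at the poles
# A made-up transport profile that vanishes at both poles: H = 5 PW * cos^2(phi)
H = 5e15 * np.cos(phi)**2
# h = -(dH/dphi) / (2 pi a^2 cos(phi)), in W/m2
h_num = -np.gradient(H, phi, edge_order=2) / (2 * np.pi * a**2 * np.cos(phi))
# For this particular profile the derivative can be done by hand:
# h = 5e15 * sin(phi) / (pi a^2), roughly +/- 39 W/m2 near the poles
h_exact = 5e15 * np.sin(phi) / (np.pi * a**2)
print(np.max(np.abs(h_num - h_exact)))  # small compared to the ~39 W/m2 peak
```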
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
#
# ## 5. Calculating heat transport from the steady-state energy budget
# ____________
#
# If we can **assume that the budget is balanced**, i.e. assume that the system is at equilibrium and there is negligible heat storage, then we can use the energy budget to infer $\mathcal{H}$ from a measured (or modeled) TOA radiation imbalance.
# + [markdown] slideshow={"slide_type": "slide"}
# The balanced budget is
#
# $$ ASR + h = OLR $$
#
# (i.e. the **sources** balance the **sinks**)
# + [markdown] slideshow={"slide_type": "fragment"}
# which we can substitute in for $h$ and rearrange to write as
#
# $$ \frac{\partial \mathcal{H}}{\partial \phi} = 2 \pi ~a^2 \cos\phi ~ \left( \text{ASR} - \text{OLR} \right) = 2 \pi ~a^2 \cos\phi ~ R_{TOA} $$
#
# where for convenience we write $R_{TOA} = ASR - OLR$, the net downward flux at the top of atmosphere.
# + [markdown] slideshow={"slide_type": "slide"}
# Now integrate from the South Pole ($\phi = -\pi/2$):
#
# $$ \int_{-\pi/2}^{\phi} \frac{\partial \mathcal{H}}{\partial \phi^\prime} d\phi^\prime = 2 \pi ~a^2 \int_{-\pi/2}^{\phi} \cos\phi^\prime ~ R_{TOA} d\phi^\prime $$
#
# $$ \mathcal{H}(\phi) - \mathcal{H}(-\pi/2) = 2 \pi ~a^2 \int_{-\pi/2}^{\phi} \cos\phi^\prime ~ R_{TOA} d\phi^\prime $$
# + [markdown] slideshow={"slide_type": "slide"}
# Our boundary condition is that the transport must go to zero at the pole. We therefore have a formula for calculating the heat transport at any latitude, by integrating the imbalance from the South Pole:
#
# $$ \mathcal{H}(\phi) = 2 \pi ~a^2 \int_{-\pi/2}^{\phi} \cos\phi^\prime ~ R_{TOA} d\phi^\prime $$
# + [markdown] slideshow={"slide_type": "slide"}
# What about the boundary condition at the other pole? We must have $\mathcal{H}(\pi/2) = 0$ as well, because a non-zero transport at the pole is not physically meaningful.
#
# Notice that if we apply the above formula and integrate all the way to the other pole, we then have
#
# $$ \mathcal{H}(\pi/2) = 2 \pi ~a^2 \int_{-\pi/2}^{\pi/2} \cos\phi^\prime ~ R_{TOA} d\phi^\prime $$
# + [markdown] slideshow={"slide_type": "slide"}
# This is an integral of the radiation imbalance weighted by cosine of latitude. In other words, this is **proportional to the area-weighted global average energy imbalance**.
#
# We started by assuming that this imbalance is zero.
#
# If the **global budget is balanced**, then the physical boundary condition of no-flux at the poles is satisfied.
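# To see the whole chain in action (a self-contained sketch; the imbalance profile $R_{TOA} = -100\, P_2(\sin\phi)$ W m$^{-2}$ is a made-up toy chosen to have zero area-weighted global mean, not observations), we can integrate a balanced TOA imbalance from pole to pole and check that the implied transport vanishes at both poles and peaks near 35º:

```python
import numpy as np

a = 6.373e6  # Earth radius (m)
phi = np.deg2rad(np.linspace(-90., 90., 721))
# Toy imbalance: positive in the tropics, negative at high latitudes,
# with zero area-weighted global mean by construction
R_toa = -100. * (3 * np.sin(phi)**2 - 1) / 2
# H(phi) = 2 pi a^2 * integral from the South Pole of cos(phi') R_TOA dphi'
integrand = np.cos(phi) * R_toa
steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi)  # trapezoid rule
H = 2 * np.pi * a**2 * np.concatenate(([0.], np.cumsum(steps)))  # Watts
print(H[-1] / 1e15)                   # ~0 PW at the North Pole (balanced budget)
print(np.rad2deg(phi[np.argmax(H)]))  # peak northward transport near 35 degrees
print(H.max() / 1e15)                 # ~5 PW
```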
# + [markdown] slideshow={"slide_type": "slide"}
# ____________
#
# ## 6. Poleward heat transport in the CESM
# ____________
#
# + [markdown] slideshow={"slide_type": "-"}
# Here we will code up a function that performs the above integration.
# + slideshow={"slide_type": "slide"}
def inferred_heat_transport(energy_in, lat=None, latax=None):
'''
Compute heat transport as integral of local energy imbalance.
Required input:
energy_in: energy imbalance in W/m2, positive in to domain
As either numpy array or xarray.DataArray
If using plain numpy, need to supply these arguments:
lat: latitude in degrees
latax: axis number corresponding to latitude in the data
(axis over which to integrate)
returns the heat transport in PW.
Will attempt to return data in xarray.DataArray if possible.
'''
from scipy import integrate
from climlab import constants as const
    if lat is None:
        try:
            lat = energy_in.lat
        except AttributeError:
            raise ValueError('Need to supply latitude array if input data is not self-describing.')
lat_rad = np.deg2rad(lat)
coslat = np.cos(lat_rad)
field = coslat*energy_in
    if latax is None:
        try:
            latax = field.get_axis_num('lat')
        except AttributeError:
            raise ValueError('Need to supply axis number for integral over latitude.')
# result as plain numpy array
    integral = integrate.cumulative_trapezoid(field, x=lat_rad, initial=0., axis=latax)
    result = (1E-15 * 2 * np.pi * const.a**2 * integral)
if isinstance(field, xr.DataArray):
result_xarray = field.copy()
result_xarray.values = result
return result_xarray
else:
return result
# + [markdown] slideshow={"slide_type": "slide"}
# Let's now use this to calculate the total northward heat transport from our control simulation with the CESM:
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot(lat_cesm, inferred_heat_transport(ASR_cesm_zon - OLR_cesm_zon))
ax.set_ylabel('PW')
ax.set_xticks(ticks)
ax.grid()
ax.set_title('Total northward heat transport inferred from CESM control simulation')
# + [markdown] slideshow={"slide_type": "slide"}
# The total heat transport is very nearly symmetric about the equator, with poleward transport of about 5 to 6 PW in both hemispheres.
#
# The transport peaks in magnitude near 35º latitude, the same latitude where we found that ASR = OLR. This is no coincidence!
#
# Equatorward of 35º (across the tropics) there is **net heating by radiation** and **net cooling by dynamics**. The opposite is true poleward of 35º.
# + [markdown] slideshow={"slide_type": "slide"}
# ### An example of a recently published observational estimate of meridional heat transport
# -
# <img src='../images/Fasullo_Trenberth_2008b_Fig7.jpg'>
# + [markdown] slideshow={"slide_type": "-"}
# > The ERBE period zonal mean annual cycle of the meridional energy transport in PW by (a) the atmosphere and ocean as inferred from ERBE $R_T$, NRA $\partial A_E/\partial t$, and GODAS $\partial O_E/\partial t$; (b) the atmosphere based on NRA; and (c) by the ocean as implied by ERBE + NRA $F_S$ and GODAS $\partial O_E/\partial t$. Stippling and hatching in (a)–(c) represent regions and times of year in which the standard deviation of the monthly mean values among estimates, some of which include the CERES period (see text), exceeds 0.5 and 1.0 PW, respectively. (d) The median annual mean transport by latitude for the total (gray), atmosphere (red), and ocean (blue) accompanied with the associated $\pm2\sigma$ range (shaded).
#
# This is a reproduction of Figure 7 from Fasullo and Trenberth (2008), "The Annual Cycle of the Energy Budget. Part II: Meridional Structures and Poleward Transports", J. Climate 21, doi:10.1175/2007JCLI1936.1
# + [markdown] slideshow={"slide_type": "slide"}
# This figure shows the breakdown of the heat transport by **season** as well as the **partition between the atmosphere and ocean**.
#
# Focusing just on the total, annual transport in panel (d) (black curve), we see that it is quite consistent with what we computed from the CESM simulation.
# + [markdown] slideshow={"slide_type": "skip"}
# ____________
#
# ## Credits
#
# This notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook), an open-source textbook developed and maintained by [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany. It has been modified by <NAME>, UC Santa Cruz.
#
# It is licensed for free and open consumption under the
# [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
#
# Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
# ____________
# + slideshow={"slide_type": "skip"}
# content/courseware/heat-transport.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing Data File
# +
# import dependencies and setup
import pandas as pd
import os
school_data_to_load = os.path.join(".", "Resources", "schools_complete.csv")
student_data_to_load = os.path.join(".", "Resources", "students_complete.csv")
# -
school_data_df = pd.read_csv(school_data_to_load)
student_data_df = pd.read_csv(student_data_to_load)
school_data_df.head()
student_data_df.head()
# # Cleaning Data
student_data_df.count()
# +
# Step 2: clean student names by removing prefixes and suffixes (substring replacement)
prefixes_suffixes = ["Dr.", "Mr.", "Ms.", "Mrs.", "Miss", "MD", "DDS", "DMv", "Phd"]
for word in prefixes_suffixes:
    student_data_df["student_name"] = student_data_df["student_name"].str.replace(word, "", regex=False)
# -
student_data_df.head(25)
# # Challenge: replace the reading and math scores
# One solution is to drop the rows for 9th graders at Thomas High School, based on their indices
#
# +
thomas_high_school_student_data_df = student_data_df[(student_data_df["school_name"] == "Thomas High School") & (student_data_df["grade"] == "9th")].index
thomas_high_school_student_data_df
# -
student_data_df.loc[
    (student_data_df["school_name"] == "Thomas High School") &
    (student_data_df["grade"] == "9th") &
    (student_data_df["reading_score"] > 0)
]
# drop those rows (by index) to get the cleaned data
cleaned_df = student_data_df.drop(thomas_high_school_student_data_df)
cleaned_df.count()
# # A solution: replace the math and reading scores of 9th graders at Thomas High School with NaN
import numpy as np
student_data_df.loc[
    (student_data_df["school_name"] == "Thomas High School")
    & (student_data_df["grade"] == "9th") & (student_data_df["reading_score"] > 0),
    "reading_score"] = np.nan
student_data_df
# +
student_data_df.loc[
    (student_data_df["school_name"] == "Thomas High School")
    & (student_data_df["grade"] == "9th") & (student_data_df["math_score"] > 0),
    "math_score"] = np.nan
# -
student_data_df
school_data_complete_df = pd.merge(student_data_df, school_data_df, how="left", on="school_name")
school_data_complete_df.head()
school_count = len(school_data_complete_df["school_name"].unique())
student_count =school_data_complete_df["Student ID"].count()
total_budget = school_data_df["budget"].sum()
average_reading_score = school_data_complete_df["reading_score"].mean()
average_math_score = school_data_complete_df["math_score"].mean()
# +
passing_math_count = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)].count()["student_name"]
passing_reading_count = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)].count()["student_name"]
student_count = school_data_complete_df["Student ID"].count()
passing_math_percentage = passing_math_count / float(student_count) * 100
passing_reading_percentage = passing_reading_count / float(student_count) * 100
# -
passing_math_reading = school_data_complete_df[(school_data_complete_df["math_score"] >=70)
& (school_data_complete_df["reading_score"] >= 70)]
overall_passing_math_reading_count = passing_math_reading["student_name"].count()
overall_passing_percentage = overall_passing_math_reading_count / student_count * 100
district_summary_df = pd.DataFrame(
{
"Total Schools": [school_count],
"Total Students": [student_count],
"Total Budget": [total_budget],
"Average Math Score": [average_math_score],
"Average Reading Score": [average_reading_score],
"% Passing Math": [passing_math_percentage],
"% Passing Reading": [passing_reading_percentage],
"% Overall Passing": [overall_passing_percentage]
}
)
district_summary_df
# Format
district_summary_df["Total Students"] = district_summary_df["Total Students"].map("{:,}".format)
district_summary_df["Total Budget"] = district_summary_df["Total Budget"].map("${:,.2f}".format)
district_summary_df["Average Math Score"] = district_summary_df["Average Math Score"].map("{:.1f}".format)
district_summary_df["Average Reading Score"] = district_summary_df["Average Reading Score"].map("{:.1f}".format)
district_summary_df["% Passing Math"] = district_summary_df["% Passing Math"].map("{:.1f}".format)
district_summary_df["% Passing Reading"] = district_summary_df["% Passing Reading"].map("{:.1f}".format)
district_summary_df["% Overall Passing"] = district_summary_df["% Overall Passing"].map("{:.1f}".format)
district_summary_df
# +
# District Summary after the 9th grade math and reading scores for Thomas HS are replaced with NaN:
# Average Math Score drops by 0.1
# Average Reading Scores do not change
# % Passing Math drops by 1.1 percentage points
school_data_complete_df
# -
# # School Summary
per_school_types = school_data_df.set_index(["school_name"])["type"]
per_school_counts = school_data_complete_df["school_name"].value_counts()
per_school_budget = school_data_complete_df.groupby(["school_name"]).mean()["budget"]
per_school_capital = per_school_budget / per_school_counts
per_school_math = school_data_complete_df.groupby(["school_name"]).mean()["math_score"]
per_school_reading = school_data_complete_df.groupby(["school_name"]).mean()["reading_score"]
# +
per_school_passing_math = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)]
per_school_passing_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)]
# -
per_school_passing_reading
per_school_passing_math = per_school_passing_math.groupby(["school_name"]).count()["student_name"]
per_school_passing_reading = per_school_passing_reading.groupby(["school_name"]).count()["student_name"]
per_school_passing_math = per_school_passing_math / per_school_counts * 100
per_school_passing_reading = per_school_passing_reading / per_school_counts * 100
per_passing_math_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >=70)
& (school_data_complete_df["math_score"] >= 70)]
per_passing_math_reading = per_passing_math_reading.groupby(["school_name"]).count()["student_name"]
per_overall_passing_percentage = per_passing_math_reading/per_school_counts * 100
per_school_summary_df = pd.DataFrame({
"School Type": per_school_types,
"Total Students": per_school_counts,
"Total School Budget": per_school_budget,
"Per Student Budget":per_school_capital,
"Average math Score":per_school_math,
"Average Reading Score": per_school_reading,
"% Passing Math": per_school_passing_math,
"% Passing Reading": per_school_passing_reading,
"% Overall Passing": per_overall_passing_percentage
})
per_school_summary_df
per_school_summary_df["Total School Budget"] = per_school_summary_df["Total School Budget"].map("${:,.2f}".format)
per_school_summary_df["Per Student Budget"] = per_school_summary_df["Per Student Budget"].map("${:,.2f}".format)
per_school_summary_df
# # Highest- and Lowest-Performing Schools
top_schools = per_school_summary_df.sort_values(["% Overall Passing"], ascending=False)
top_schools.head(10)
bottom_schools = per_school_summary_df.sort_values(["% Overall Passing"], ascending=True)
bottom_schools
# # Math and Reading Scores by Grade
# +
ninth_graders = school_data_complete_df[(school_data_complete_df["grade"] =="9th")]
tenth_graders = school_data_complete_df[(school_data_complete_df["grade"] =="10th")]
eleventh_graders = school_data_complete_df[(school_data_complete_df["grade"] =="11th")]
twelfth_graders = school_data_complete_df[(school_data_complete_df["grade"] =="12th")]
ninth_graders_math_scores =ninth_graders.groupby(["school_name"]).mean()["math_score"]
tenth_graders_math_scores =tenth_graders.groupby(["school_name"]).mean()["math_score"]
eleventh_graders_math_scores =eleventh_graders.groupby(["school_name"]).mean()["math_score"]
twelfth_graders_math_scores =twelfth_graders.groupby(["school_name"]).mean()["math_score"]
# -
ninth_graders = ninth_graders.dropna()
# +
math_scores_by_grade = pd.DataFrame({
"9th": ninth_graders_math_scores,
"10th": tenth_graders_math_scores,
"11th": eleventh_graders_math_scores,
"12th": twelfth_graders_math_scores
})
math_scores_by_grade
# -
ninth_graders_reading_scores =ninth_graders.groupby(["school_name"]).mean()["reading_score"]
tenth_graders_reading_scores =tenth_graders.groupby(["school_name"]).mean()["reading_score"]
eleventh_graders_reading_scores =eleventh_graders.groupby(["school_name"]).mean()["reading_score"]
twelfth_graders_reading_scores =twelfth_graders.groupby(["school_name"]).mean()["reading_score"]
ninth_graders.dropna()
# +
reading_scores_by_grade = pd.DataFrame({
"9th": ninth_graders_reading_scores,
"10th": tenth_graders_reading_scores,
"11th": eleventh_graders_reading_scores,
"12th": twelfth_graders_reading_scores
})
reading_scores_by_grade
# -
reading_scores_by_grade = reading_scores_by_grade[["9th", "10th", "11th", "12th"]]
reading_scores_by_grade.index.name = None
reading_scores_by_grade
# Spending bins
spending_bins = [0,585, 630, 645, 675]
group_names = ["<$584", "$585-$629", "$630-$644", "$645-$675"]
per_school_summary_df["Spending Ranges (Per Student)"] = pd.cut(per_school_capital, spending_bins, labels=group_names)
per_school_summary_df
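# To see how `pd.cut` assigns these labels, here is a minimal sketch on a small made-up series of per-student budgets (bins are open on the left and closed on the right):

```python
import pandas as pd

spending_bins = [0, 585, 630, 645, 675]
group_names = ["<$584", "$585-$629", "$630-$644", "$645-$675"]
budgets = pd.Series([580.0, 600.0, 638.0, 650.0])
labels = pd.cut(budgets, spending_bins, labels=group_names)
print(list(labels))  # ['<$584', '$585-$629', '$630-$644', '$645-$675']
```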
# +
spending_math_scores = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average math Score"]
spending_reading_scores = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Reading Score"]
spending_passing_math = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Math"]
spending_passing_reading = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Reading"]
overall_passing_spending = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Overall Passing"]
# +
spending_summary_df = pd.DataFrame({
    "Average Math Score": spending_math_scores,
    "Average Reading Score": spending_reading_scores,
    "% Passing Math": spending_passing_math,
    "% Passing Reading": spending_passing_reading,
    "% Overall Passing": overall_passing_spending
})
spending_summary_df
# +
# Scores by School Sizes
size_bins = [0,1000,2000,5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
per_school_summary_df["School Size"] = pd.cut(per_school_summary_df["Total Students"], size_bins, labels=group_names)
per_school_summary_df.head()
# +
size_math_scores = per_school_summary_df.groupby(["School Size"]).mean()["Average math Score"]
size_reading_scores = per_school_summary_df.groupby(["School Size"]).mean()["Average Reading Score"]
size_passing_math = per_school_summary_df.groupby(["School Size"]).mean()["% Passing Math"]
size_passing_reading = per_school_summary_df.groupby(["School Size"]).mean()["% Passing Reading"]
size_overall_passing = per_school_summary_df.groupby(["School Size"]).mean()["% Overall Passing"]
# +
size_summary_df = pd.DataFrame({
"Average math Score": size_math_scores,
"Average Reading Score": size_reading_scores,
"% Passing Math": size_passing_math,
"% Passing Reading": size_passing_reading,
"% Overall Passing": size_overall_passing
})
size_summary_df
# -
# # Scores by School Type
# +
type_math_scores = per_school_summary_df.groupby(["School Type"]).mean()["Average math Score"]
type_reading_scores = per_school_summary_df.groupby(["School Type"]).mean()["Average Reading Score"]
type_passing_math = per_school_summary_df.groupby(["School Type"]).mean()["% Passing Math"]
type_passing_reading = per_school_summary_df.groupby(["School Type"]).mean()["% Passing Reading"]
type_overall_passing = per_school_summary_df.groupby(["School Type"]).mean()["% Overall Passing"]
# +
type_summary_df = pd.DataFrame({
"Average math Score": type_math_scores,
"Average Reading Score": type_reading_scores,
"% Passing Math": type_passing_math,
"% Passing Reading": type_passing_reading,
"% Overall Passing": type_overall_passing
})
type_summary_df
# -
| PyCitySchools_Challenge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Docs
# +
# #!pip3 install tkseem
# -
# ### Frequency Tokenizer
import tkseem as tk
# Read, preprocess then train
tokenizer = tk.WordTokenizer()
tokenizer.train('samples/data.txt')
print(tokenizer)
# Tokenize
tokenizer.tokenize("السلام عليكم")
# Encode as ids
encoded = tokenizer.encode("السلام عليكم")
print(encoded)
# Decode back to tokens
decoded = tokenizer.decode(encoded)
print(decoded)
detokenized = tokenizer.detokenize(decoded)
print(detokenized)
# ### SentencePiece Tokenizer
# Read, preprocess then train
tokenizer = tk.SentencePieceTokenizer()
tokenizer.train('samples/data.txt')
# Tokenize
tokenizer.tokenize("صباح الخير يا أصدقاء")
# Encode as ids
encoded = tokenizer.encode("السلام عليكم")
print(encoded)
# Decode back to tokens
decoded = tokenizer.decode(encoded)
print(decoded)
detokenized = tokenizer.detokenize(decoded)
print(detokenized)
# ### Morphological Tokenizer
# Read, preprocess then train
tokenizer = tk.MorphologicalTokenizer()
tokenizer.train()
# Tokenize
tokenizer.tokenize("السلام عليكم")
# Encode as ids
encoded = tokenizer.encode("السلام عليكم")
print(encoded)
# Decode back to tokens
decoded = tokenizer.decode(encoded)
print(decoded)
# ### Random Tokenizer
tokenizer = tk.RandomTokenizer()
tokenizer.train('samples/data.txt')
tokenizer.tokenize("السلام عليكم أيها الأصدقاء")
# ### Disjoint Letter Tokenizer
tokenizer = tk.DisjointLetterTokenizer()
tokenizer.train('samples/data.txt')
print(tokenizer.tokenize("السلام عليكم أيها الأصدقاء"))
# ### Character Tokenizer
tokenizer = tk.CharacterTokenizer()
tokenizer.train('samples/data.txt')
tokenizer.tokenize("السلام عليكم")
# ### BruteForce Tokenizer
tokenizer = tk.BruteForceTokenizer()
tokenizer.train('samples/data.txt')
tokenizer.tokenize("السلام عليكم")
# ### Compression Factor
import tkseem as tk
tokenizer = tk.WordTokenizer()
tokenizer.train('samples/data.txt')
tokenizer.tokenize('السلام عليكم')
tokenizer.calculate_compression_factor('السلام عليكم')
# +
import seaborn as sns
import pandas as pd
import time
def calc_comp(fun):
tokenizer = fun()
# morph tokenizer doesn't take arguments
if str(tokenizer) == 'MorphologicalTokenizer':
tokenizer.train()
else:
tokenizer.train('samples/data.txt')
text = open('samples/data.txt', 'r').read()
return tokenizer.calculate_compression_factor(text)
factors = {}
factors['Word'] = calc_comp(tk.WordTokenizer)
factors['SP'] = calc_comp(tk.SentencePieceTokenizer)
factors['Random'] = calc_comp(tk.RandomTokenizer)
factors['Disjoint'] = calc_comp(tk.DisjointLetterTokenizer)
factors['Character'] = calc_comp(tk.CharacterTokenizer)
factors['Morph'] = calc_comp(tk.MorphologicalTokenizer)
plt = sns.barplot(data = pd.DataFrame.from_dict([factors]))
# -
# ### Export Models
# Models can be saved for deployment and reloading.
tokenizer = tk.WordTokenizer()
tokenizer.train('samples/data.txt')
tokenizer.save_model('freq.pl')
# load model without pretraining
tokenizer = tk.WordTokenizer()
tokenizer.load_model('freq.pl')
tokenizer.tokenize('السلام عليكم')
# ### Benchmarking
# Comparing tokenizers in terms of training time
# +
import seaborn as sns
import pandas as pd
import time
def calc_time(fun):
tokenizer = fun()
start_time = time.time()
# morph tokenizer doesn't take arguments
if str(tokenizer) == 'MorphologicalTokenizer':
tokenizer.train()
else:
tokenizer.train('samples/data.txt')
return time.time() - start_time
running_times = {}
running_times['Word'] = calc_time(tk.WordTokenizer)
running_times['SP'] = calc_time(tk.SentencePieceTokenizer)
running_times['Random'] = calc_time(tk.RandomTokenizer)
running_times['Disjoint'] = calc_time(tk.DisjointLetterTokenizer)
running_times['Character'] = calc_time(tk.CharacterTokenizer)
running_times['Morph'] = calc_time(tk.MorphologicalTokenizer)
plt = sns.barplot(data = pd.DataFrame.from_dict([running_times]))
# -
# Comparing tokenizers in terms of tokenization time
# +
import seaborn as sns
import pandas as pd
import time
def calc_time(fun):
tokenizer = fun()
# morph tokenizer doesn't take arguments
if str(tokenizer) == 'MorphologicalTokenizer':
tokenizer.train()
else:
tokenizer.train('samples/data.txt')
start_time = time.time()
tokenizer.tokenize(open('samples/data.txt', 'r').read())
return time.time() - start_time
running_times = {}
running_times['Word'] = calc_time(tk.WordTokenizer)
running_times['SP'] = calc_time(tk.SentencePieceTokenizer)
running_times['Random'] = calc_time(tk.RandomTokenizer)
running_times['Disjoint'] = calc_time(tk.DisjointLetterTokenizer)
running_times['Character'] = calc_time(tk.CharacterTokenizer)
running_times['Morph'] = calc_time(tk.MorphologicalTokenizer)
plt = sns.barplot(data = pd.DataFrame.from_dict([running_times]))
# -
# ### Caching
# Caching is used for speeding up the tokenization process.
import tkseem as tk
tokenizer = tk.MorphologicalTokenizer()
tokenizer.train()
# %%timeit
out = tokenizer.tokenize(open('samples/data.txt', 'r').read(), use_cache = False)
# %%timeit
out = tokenizer.tokenize(open('samples/data.txt', 'r').read(), use_cache = True, max_cache_size = 10000)
| tasks/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Ds0TKcwEL7Zw" colab_type="text"
# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 2*
#
# ---
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Regression 2
#
# ## Assignment
#
# You'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.
#
# - [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
# - [ ] Engineer at least two new features. (See below for explanation & ideas.)
# - [ ] Fit a linear regression model with at least two features.
# - [ ] Get the model's coefficients and intercept.
# - [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.
# - [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!
# - [ ] As always, commit your notebook to your fork of the GitHub repo.
#
#
# #### [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
#
# > "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — <NAME>, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)
#
# > "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — <NAME>, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf)
#
# > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work.
#
# #### Feature Ideas
# - Does the apartment have a description?
# - How long is the description?
# - How many total perks does each apartment have?
# - Are cats _or_ dogs allowed?
# - Are cats _and_ dogs allowed?
# - Total number of rooms (beds + baths)
# - Ratio of beds to baths
# - What's the neighborhood, based on address or latitude & longitude?
#
# ## Stretch Goals
# - [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression
# - [ ] If you want more introduction, watch [<NAME>, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)
# (20 minutes, over 1 million views)
# - [ ] Add your own stretch goal(s) !
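A minimal sketch of a few of the feature ideas above on a toy frame — the column names (`description`, `cats_allowed`, `dogs_allowed`, `bedrooms`, `bathrooms`) are assumptions and may differ from the actual renthop columns:

```python
import pandas as pd

toy = pd.DataFrame({
    'description': ['Sunny 2BR near park', ''],
    'cats_allowed': [1, 0],
    'dogs_allowed': [1, 1],
    'bedrooms': [2, 1],
    'bathrooms': [1.0, 1.0],
})

# Feature: length of the listing description (0 for an empty description)
toy['description_length'] = toy['description'].str.len()

# Feature: are cats AND dogs allowed?
toy['cats_and_dogs'] = toy['cats_allowed'] & toy['dogs_allowed']

# Feature: total number of rooms (beds + baths)
toy['total_rooms'] = toy['bedrooms'] + toy['bathrooms']

print(toy[['description_length', 'cats_and_dogs', 'total_rooms']])
```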
# + colab_type="code" id="o9eSnDYhUGD7" colab={}
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + colab_type="code" id="cvrw-T3bZOuW" colab={}
import numpy as np
import pandas as pd
# Read New York City apartment rental listing data
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# + id="YNTMMUKAMeWK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="2d4bd667-0e6b-450b-c4fb-6c2daf93e771"
df
# + [markdown] id="3xV4ufFOEiI8" colab_type="text"
# ## Train/test Split
# + id="CjOzfF5SNXc4" colab_type="code" colab={}
df['created'] = pd.to_datetime(df['created'])
# + id="0yANlrhLN2V1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1b671328-5c12-436e-d313-e371616b3de1"
mask = (df['created'] >= '2016-04-01') & (df['created'] < '2016-06-01')
train = df.loc[mask]
train.shape
# + id="r5qAEw4hEucf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cd34717b-963a-4d91-9094-05adfdcf21c4"
# Use data from June 2016 to test.
maskt = (df['created'] >= '2016-06-01')
test = df.loc[maskt]
train.shape, test.shape
# + id="aOvGQJ99QxI-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="78fed64a-dfc2-43fa-a847-1b9bd2a565e1"
train['bedrooms'].mean()
# + id="Z2O9WmztTHWU" colab_type="code" colab={}
target = 'bedrooms'
y_train = train[target]
y_test = test[target]
# + id="PnYXQOnYTXKc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="c424448a-6547-4e41-bbd3-5f703bebdea1"
y_train
# + id="c7cJ6OucT_Ab" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="18cf7e52-d131-4e63-95fc-8e1c7d282b1b"
y_test
# + id="kC7jwID2UEsR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5bfdda85-d011-4eb1-9cc6-fdc018b85516"
guess = y_train.mean()
guess
# + id="IRJWMWbRUUQe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="6cd728d3-f5fd-4fc1-aebe-d08b38340719"
def test_error(X, header):
from sklearn.metrics import mean_absolute_error
y_pred = [guess] * len(X)
mae = mean_absolute_error(X, y_pred)
print(f'{header} is {mae:.2f}')
test_error(y_train, "Train error")
test_error(y_test, "Test error")
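The assignment also asks for RMSE, MAE, and $R^2$; a minimal scikit-learn sketch on toy arrays (not the apartment data):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Toy ground truth and predictions
y_true = np.array([3.0, 5.0, 7.0])
y_hat = np.array([2.5, 5.0, 8.0])

rmse = np.sqrt(mean_squared_error(y_true, y_hat))  # RMSE = sqrt(MSE)
mae = mean_absolute_error(y_true, y_hat)           # mean of |error|
r2 = r2_score(y_true, y_hat)                       # 1 - SS_res / SS_tot
print(rmse, mae, r2)
```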
# + [markdown] id="i92CVK8cMY2-" colab_type="text"
# ## Used the function from the previous lesson
# + id="OIgunSOrMDru" colab_type="code" colab={}
# Built a function to pass arbitrary values, X, Y and input value
def sk_predict(df, X, Y, in_value):
from sklearn.linear_model import LinearRegression
# Linear Regression
model = LinearRegression()
# features and target take string parameters
features = [X]
target = [Y]
x_Train = df[features]
y_Train = df[target]
# This fits the model
model.fit(x_Train, y_Train)
print("The model coefficient adds $", model.coef_[0], "for every ", X)
# Input value
input_value = in_value
x_test = [[input_value]]
y_pred = model.predict(x_test)
# return our desired prediction
return y_pred
def plotly(x1,y1,x2,y2):
import plotly.graph_objects as go
fig = go.Figure()
# Add traces
fig.add_trace(go.Scatter(x=x1, y=y1,
mode='markers',
name='Ground Truth'))
fig.add_trace(go.Scatter(x=x2, y=y2,
mode='lines+markers',
name='Prediction'))
return fig.show()
# chart prediction calls nested sk_predict function and plotly function
def chart_prediction(df, X, Y, range1, range2):
stored_data = []
x2 = []
for i in range(range1,range2):
stored_data.append(sk_predict(df,X, Y, i))
x2.append(i)
pretty_data = []
# flatten the nested prediction arrays
for pred in stored_data:
pretty_data.append(pred[0][0])
X = df[X]
Y = df[Y]
return plotly(X,Y,x2, pretty_data)
chart_prediction(df, 'bedrooms','price', 0, 9)
| module2-regression-2/LS_DS_212_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.dummy import DummyClassifier
from sklearn.metrics import classification_report
import numpy as np
# # Import Data
train_data= "./data/glove_ids_train.csv.zip"
val_data= "./data/glove_ids_val.csv.zip"
test_data= "./data/glove_ids_test.csv.zip"
from src.MyDataset import MyDataset, _batch_to_tensor
data_train = MyDataset(path_data=train_data,
max_seq_len=250)
data_val = MyDataset(path_data=val_data,
max_seq_len=250)
data_test = MyDataset(path_data=test_data,
max_seq_len=250)
# # Fit Baseline
baseline = DummyClassifier(strategy="uniform")
baseline.fit(data_train.X, data_train.y_id)
# # Classification Report with Baseline
# ## Train Data
np.random.seed(1234)
pred = baseline.predict(data_train.X)
print(classification_report(data_train.y_id, pred))
# ## Validation Data
pred = baseline.predict(data_val.X)
print(classification_report(data_val.y_id, pred))
# # Test Data
pred = baseline.predict(data_test.X)
print(classification_report(data_test.y_id, pred))
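As a self-contained illustration of the `uniform` strategy used above (toy labels and features, not the GloVe dataset):

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X_toy = np.arange(20).reshape(-1, 1)   # features are ignored by the dummy
y_toy = np.array([0] * 15 + [1] * 5)   # imbalanced labels

# Uniform strategy predicts each class with equal probability,
# regardless of the class balance in the training labels.
baseline = DummyClassifier(strategy="uniform", random_state=0)
baseline.fit(X_toy, y_toy)
pred = baseline.predict(X_toy)
print(pred)
```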
| 3. Baseline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # DeLaN Network Inference
# The following script demonstrates how to obtain a DeLaN rearrangement sequence from random pick-and-place points in a table-top environment.
# +
import matplotlib.pyplot as plt
import numpy as np
import time
import read_data_network
#from ortools import constraint_solver
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
from dataset_custom_z import TrajectoryDataset_customScipy
from torch.utils.data import DataLoader
import sys
import pickle
def merge_nodes(starts, goals, depot=None):
nodes = []
if depot:
nodes.append(depot)
for start in starts:
nodes.append(start)
for goal in goals:
nodes.append(goal)
return nodes
def create_data_model_euclidean(N, nodes):
"""Stores the data for the problem."""
data = {}
from scipy.spatial import distance_matrix
dm = distance_matrix(nodes, nodes)
#read_data_network.pprint(dm)
data['distance_matrix'] = dm
data['pickups_deliveries'] = []
data['demands'] = [0]
for i in range(N):
data['pickups_deliveries'].append([i+1, i+1+N])
for i in range(N):
data['demands'].append(1)
for i in range(N):
data['demands'].append(-1)
data['num_vehicles'] = 1
data['depot'] = 0
data['vehicle_capacities'] = [1]
# print("cost_eucledian inside")
# print(data)
return data
def create_data_model_joint(N, nodes, network='delan'):
"""Stores the data for the problem."""
data = {}
dm = read_data_network.get_joint_distance_matrix(nodes, network)
data['distance_matrix'] = dm
data['pickups_deliveries'] = []
data['demands'] = [0]
for i in range(N):
data['pickups_deliveries'].append([i+1, i+1+N])
for i in range(N):
data['demands'].append(1)
for i in range(N):
data['demands'].append(-1)
data['num_vehicles'] = 1
data['depot'] = 0
data['vehicle_capacities'] = [1]
return data
def print_solution(data, manager, routing, solution):
"""Prints solution on console."""
import time
t = time.time()
total_distance = 0
sol = []
for vehicle_id in range(data['num_vehicles']):
index = routing.Start(vehicle_id)
plan_output = 'Picking Sequence : \n'
route_distance = 0
odd = 0
while not routing.IsEnd(index):
s = manager.IndexToNode(index)
if odd !=0 and odd %2 == 1:
plan_output += ' {} -> '.format(s)
sol.append(s)
previous_index = index
index = solution.Value(routing.NextVar(index))
route_distance += routing.GetArcCostForVehicle(
previous_index, index, vehicle_id)
odd += 1
s = manager.IndexToNode(index)
sol.append(s)
# plan_output += '{}\n'.format(s)
#plan_output += 'Distance of the route: {}m\n'.format(route_distance)
print(plan_output)
total_distance += route_distance
print("total time taken:", time.time()-t)
#print('Total Distance of all routes: {}m'.format(total_distance))
return sol
def solve(data):
# Create the routing index manager.
manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
data['num_vehicles'], data['depot'])
# Create Routing Model.
routing = pywrapcp.RoutingModel(manager)
def demand_callback(from_index):
"""Returns the demand of the node."""
# Convert from routing variable Index to demands NodeIndex.
from_node = manager.IndexToNode(from_index)
return data['demands'][from_node]
# Define cost of each arc.
def distance_callback(from_index, to_index):
"""Returns the manhattan distance between the two nodes."""
# Convert from routing variable Index to distance matrix NodeIndex.
from_node = manager.IndexToNode(from_index)
to_node = manager.IndexToNode(to_index)
return data['distance_matrix'][from_node][to_node]
demand_callback_index = routing.RegisterUnaryTransitCallback(
demand_callback)
routing.AddDimensionWithVehicleCapacity(
demand_callback_index,
2, # capacity slack
data['vehicle_capacities'], # vehicle maximum capacities
True, # start cumul to zero
'Capacity')
transit_callback_index = routing.RegisterTransitCallback(distance_callback)
routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
# Add Distance constraint.
dimension_name = 'Distance'
routing.AddDimension(
transit_callback_index,
0, # no slack
10000, # vehicle maximum travel distance
True, # start cumul to zero
dimension_name)
distance_dimension = routing.GetDimensionOrDie(dimension_name)
distance_dimension.SetGlobalSpanCostCoefficient(100)
# Define Transportation Requests.
for request in data['pickups_deliveries']:
pickup_index = manager.NodeToIndex(request[0])
delivery_index = manager.NodeToIndex(request[1])
routing.AddPickupAndDelivery(pickup_index, delivery_index)
routing.solver().Add(
routing.VehicleVar(pickup_index) == routing.VehicleVar(
delivery_index))
routing.solver().Add(
distance_dimension.CumulVar(pickup_index) <=
distance_dimension.CumulVar(delivery_index))
# Setting first solution heuristic.
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
# search_parameters.first_solution_strategy = (
# routing_enums_pb2.FirstSolutionStrategy.PARALLEL_CHEAPEST_INSERTION)
# Solve the problem.
solution = routing.SolveWithParameters(search_parameters)
sol = None
# Print solution on console.
if solution:
sol = print_solution(data, manager, routing, solution)
return sol
def sample_data(N=6):
start_xs = np.linspace(0.23, 0.73, num=50).tolist()
start_ys = np.linspace(-0.3, 0.3, num=50).tolist()
goal_xs = np.linspace(0.23, 0.73, num=50).tolist()
goal_ys = np.linspace(-0.3, 0.3, num=50).tolist()
start_zs = np.linspace(0.1, 0.3, num=50).tolist()
goal_zs = np.linspace(0.1, 0.3, num=50).tolist()
from random import sample
depot = (0.24605024, -0.22180356, 0.41969074)
# N = int(sys.argv[1])
start_x = sample(start_xs, N)
start_y = sample(start_ys, N)
goal_x = sample(goal_xs, N)
goal_y = sample(goal_ys, N)
start_z = sample(start_zs,N)
goal_z = sample(goal_zs, N)
starts = []
goals = []
for i in range(N):
starts.append((start_x[i], start_y[i],0.1))
for i in range(N):
goals.append((goal_x[i], goal_y[i], 0.1))
return depot, starts, goals
def sample_data_training(N=6):
TRAJ_train = TrajectoryDataset_customScipy()
trainloader = DataLoader(TRAJ_train, batch_size=1, drop_last=True, shuffle=True)
depot = (0.24605024, -0.22180356, 0.41969074)
# N = int(sys.argv[1])
i = 0
starts = []
goals = []
for x, y, net_input, cost, start_joint, end_joint in trainloader:
traj = []
device='cpu'
x = x.to(device).numpy()
y = y.to(device).numpy()
print("X = ", x)
starts.append((x[0][0], x[0][1], x[0][2]))
goals.append((y[0][0],y[0][1], y[0][2]))
i+=1
if i>=N:
break
return depot, starts, goals
# +
def run(n):
# N represents the number of objects to rearrange
N =n
depot, starts, goals = sample_data(N=N)
z = 0.0
# N = len(starts)
print("Number of Objects : " + str(N))
print("Random Start Points: ")
print(starts)
print()
print("Random Goal Points: ")
print(goals)
nodes = merge_nodes(starts, goals, depot)
data_e = create_data_model_euclidean(N, nodes)
data_j = create_data_model_joint(N, nodes, network='delan')
data_j_nn = create_data_model_joint(N, nodes, network='fnn')
print()
print("Solving in Euclidean Space")
start_time = time.time()
sol_e = solve(data_e)
total_time = time.time() - start_time
points = [depot] + starts + goals
route_index = []
for so in sol_e:
route_index.append(list(points[int(so)]))
print("Solving in DeLAN Space")
start_time = time.time()
sol_j = solve(data_j)
total_time = time.time()-start_time
print("Solving in NN Space")
start_time = time.time()
sol_j_nn = solve(data_j_nn)
total_time = time.time() - start_time
route_index = []
for so in sol_j_nn:
route_index.append(list(points[int(so)]))
run(10)
# -
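For reference, `scipy.spatial.distance_matrix` (used in `create_data_model_euclidean` above) returns the full pairwise Euclidean matrix; a minimal standalone check on three points:

```python
from scipy.spatial import distance_matrix

pts = [(0.0, 0.0), (3.0, 4.0), (0.0, 4.0)]
dm = distance_matrix(pts, pts)
# Zero diagonal, symmetric; dm[0][1] is the 3-4-5 hypotenuse.
print(dm)
```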
| notebooks/Sequence_Compare.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
from pomegranate import *
import itertools
# ### Trying bayes net in pomegranate
#generate example data
n = 10
h = BernoulliDistribution(0.3).sample(n)
w = BernoulliDistribution(0.6).sample(n)
l = BernoulliDistribution(0.7).sample(n)
# +
# 1. Initialize parameters (uniform)
height = DiscreteDistribution({'short':0.5, 'tall':0.5})
weight = DiscreteDistribution({'low':0.5, 'high':0.5})
sex = ConditionalProbabilityTable([
['short', 'low', 'm', 0.5],
['short', 'low', 'f', 0.5],
['short', 'high', 'm', 0.5],
['short', 'high', 'f', 0.5],
['tall', 'low', 'm', 0.5],
['tall', 'low', 'f', 0.5],
['tall', 'high', 'm', 0.5],
['tall', 'high', 'f', 0.5]], [height, weight] )
hair = ConditionalProbabilityTable([
['m', 'sh', 0.5],
['m', 'long', 0.5],
['f', 'sh', 0.5],
['f', 'long', 0.5]], [sex] )
# +
# create state objects (nodes)
s_h = State(height, name='height')
s_w = State(weight, name='weight')
s_s = State(sex, name='sex')
s_l = State(hair, name='hair')
# +
# create the model
model = BayesianNetwork("Hair Length")
# Add the three states to the network
model.add_states(s_h, s_w, s_s, s_l)
# +
# build the DAG
model.add_transition(s_h, s_s)
model.add_transition(s_w, s_s)
model.add_transition(s_s, s_l)
model.bake()
# -
model.predict_proba({'height':'short','weight':'low'})
[state.name for state in model.states]
s_s.distribution.column_idxs
s_l.distribution.column_idxs
# ### A simple model for trying things
A = DiscreteDistribution({1:0.45,0:0.55})
B = ConditionalProbabilityTable([
[1, 0, 0.3],
[1, 1, 0.7],
[0, 0, 0.4],
[0, 1, 0.6]], [A] )
s_A = State(A, name='A')
s_B = State(B, name='B')
test_model = BayesianNetwork("test")
test_model.add_states(s_A, s_B)
test_model.add_transition(s_A, s_B)
test_model.bake()
name_list = [s.name for s in test_model.states]
test_model.states[0].distribution.parameters[0][0]=0.99
test_model.states[0].distribution.parameters
test_model.states
# +
# getting the conditional probability
prob_B = test_model.predict_proba({'A':1})[1]
# -
prob_B
prob_B.probability(1)
data = np.asarray([[0,1],[0,0],[1,1],[1,1],[0,0],[1,0],[0,0]])
s_A.distribution.marginal()
test_model.fit(data)
test_model.predict_proba()
type(test_model)
# ### Model for testing 2
A = DiscreteDistribution({'1':0.2, '0':0.8})
H = ConditionalProbabilityTable([
['0', '0', 0.4],
['0', '1', 0.6],
['1', '0', 0.3],
['1', '1', 0.7]], [A])
B = ConditionalProbabilityTable([
['0', '0', 0.6],
['0', '1', 0.4],
['1', '0', 0.5],
['1', '1', 0.5]], [H])
s_A = State(A, name='A')
s_H = State(H, name='H')
s_B = State(B, name='B')
test_model2 = BayesianNetwork("test2")
test_model2.add_states(s_A, s_H, s_B)
test_model2.add_transition(s_A, s_H)
test_model2.add_transition(s_H, s_B)
test_model2.bake()
# Data:
data2 = np.array([[0, np.nan, 0],
[0, np.nan, 1],
[1, np.nan, 0],
[1, np.nan, 1],
[1, np.nan, 0],
[0, np.nan, 1],
[1, np.nan, 1],
[1, np.nan, 0]], dtype='str')
data2[:, [0,2]]
# Test:
mb2 = MarkovBlanket(1)
mb2.populate(test_model2)
mb2.calculate_prob(test_model2)
mb2.prob_table
expected_counts2 = ExpectedCounts(test_model2, mb2)
expected_counts2.counts
expected_counts2.update(test_model2, mb2)
expected_counts2.counts
mb2.parents
# ## Building the EM
# Steps:
# 1. Initialize parameters with near uniform values.
# 2. Build the model with the initialized parameters.
# 3. Expectation: Calculate the expected counts for the hidden variables. Update the data for the hidden variables.
# 4. Maximization: Calculate MLE for all parameters by summing over expected counts.
# 5. Update the model with the new parameter values.
# 6. Repeat steps 3-5 until convergence.
#
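The steps above can be sketched on a toy problem first; a minimal NumPy EM for a two-coin (binomial mixture) model, assuming equal mixture weights — illustrative of the E/M alternation only, not the Bayes-net implementation in this notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 100 sessions of 10 flips; each session uses coin A (p=0.8)
# or coin B (p=0.3) with equal probability. We observe heads per session.
n_flips = 10
which = rng.integers(0, 2, 100)
true_p = np.array([0.8, 0.3])
heads = rng.binomial(n_flips, true_p[which])

# 1-2. Initialize parameters near-uniform (equal mixture weights assumed,
# so the constant prior cancels in the responsibilities)
p = np.array([0.6, 0.4])

for _ in range(100):
    # 3. E-step: responsibility of each coin for each session (binomial lik.)
    lik = p ** heads[:, None] * (1 - p) ** (n_flips - heads[:, None])
    resp = lik / lik.sum(axis=1, keepdims=True)
    # 4-5. M-step: MLE of each coin's bias from expected head / flip counts
    p = (resp * heads[:, None]).sum(axis=0) / (resp.sum(axis=0) * n_flips)

print(p.round(2))  # should land near the true biases (0.8, 0.3)
```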
def search_hidden(data):
"""Returns the column index of the hidden node if only one column of NaN.
Only works if data is numeric.
Parameters
----------
data : An ndarray (n_sample, n_nodes)
Returns
-------
ind_h : the index of the hidden node column
"""
is_col_nan = np.all(np.isnan(data), axis=0)
ind = np.where(is_col_nan)
if np.size(ind)==1:
ind_h = ind[0][0]
else:
raise ValueError('Data contains more than one hidden nodes or no hidden node')
return ind_h
class MarkovBlanket():
"""
An object for storing info on nodes within the markov blanket of the hidden node
Parameters
----------
ind_h : int
index of the hidden node within the model
Attributes
----------
hidden : int
index of the hidden node
parents : list of int
a list of indices of the parent nodes
children : list of int
a list of indices of the children nodes
coparents : list of int
a list of indices of the coparent nodes
prob_table : dict
a dict of probabilities table of nodes within the Markov blanket
"""
def __init__(self, ind_h):
self.hidden = ind_h
self.parents = []
self.children = []
self.coparents = []
self.prob_table = {}
def populate(self, model):
"""populate the parents, children, and coparents nodes
"""
state_indices = {state.name : i for i, state in enumerate(model.states)}
edges_list = [(parent.name, child.name) for parent, child in model.edges]
edges_list = [(state_indices[parent],state_indices[child])
for parent, child in edges_list]
self.children = list(set([child for parent, child in edges_list if parent==self.hidden]))
self.parents = list(set([parent for parent, child in edges_list if child==self.hidden]))
self.coparents = list(set([parent for parent, child in edges_list if child in self.children]))
try:
self.coparents.remove(self.hidden)
except ValueError:
pass
def calculate_prob(self, model):
"""Create the probability table from nodes
"""
for ind_state in [self.hidden]+self.children:
distribution = model.states[ind_state].distribution
if isinstance(distribution, ConditionalProbabilityTable):
table = list(distribution.parameters[0]) # make a copy
self.prob_table[ind_state] = {
tuple(row[:-1]) : row[-1] for row in table}
else:
self.prob_table[ind_state] = dict(distribution.parameters[0]) # make a copy
def update_prob(self, model, expected_counts, ct):
"""Update the probability table using expected counts
"""
ind = {x : i for i, x in enumerate([self.hidden] + self.parents + self.children + self.coparents)}
mb_keys = expected_counts.counts.keys()
for ind_state in [self.hidden] + self.children:
distribution = model.states[ind_state].distribution
if isinstance(distribution, ConditionalProbabilityTable):
idxs = distribution.column_idxs
table = self.prob_table[ind_state] # dict
# calculate the new parameter for this key
for key in table.keys():
num = 0
denom = 0
# marginal counts
for mb_key in mb_keys:
# marginal counts of node + parents
if tuple([mb_key[ind[x]] for x in idxs]) == key:
num += ct.table[mb_key[1:]]*expected_counts.counts[mb_key]
# marginal counts of parents
if tuple([mb_key[ind[x]] for x in idxs[:-1]]) == key[:-1]:
denom += ct.table[mb_key[1:]]*expected_counts.counts[mb_key]
try:
prob = num/denom
except ZeroDivisionError:
prob = 0
# update the parameter
table[key] = prob
else: # DiscreteProb
table = self.prob_table[ind_state] # dict
# calculate the new parameter for this key
for key in table.keys():
prob = 0
for mb_key in mb_keys:
if mb_key[ind[ind_state]] == key:
prob += ct.table[mb_key[1:]]*expected_counts.counts[mb_key]
# update the parameter
table[key] = prob/ct.size
class ExpectedCounts():
"""Calculate the expected counts using the model parameters
Parameters
----------
model : a BayesianNetwork object
mb : a MarkovBlanket object
Attributes
----------
counts : dict
a dict of expected counts for nodes in the Markov blanket
"""
def __init__(self, model, mb):
self.counts = {}
self.populate(model, mb)
def populate(self, model, mb):
#create combinations of keys
keys_list = [model.states[mb.hidden].distribution.keys()]
for ind in mb.parents + mb.children + mb.coparents:
keys_list.append(model.states[ind].distribution.keys())
self.counts = {p:0 for p in itertools.product(*keys_list)}
def update(self, model, mb):
ind = {x : i for i, x in enumerate([mb.hidden] + mb.parents + mb.children + mb.coparents)}
marginal_prob = {}
# calculate joint probability and marginal probability
for i, key in enumerate(self.counts.keys()):
prob = 1
for j, ind_state in enumerate([mb.hidden] + mb.children):
distribution = model.states[ind_state].distribution
if isinstance(distribution, ConditionalProbabilityTable):
idxs = distribution.column_idxs
state_key = tuple([key[ind[x]] for x in idxs])
else:
state_key = key[ind[ind_state]]
prob = prob*mb.prob_table[ind_state][state_key]
self.counts[key] = prob
try:
marginal_prob[key[1:]] += prob
except KeyError:
marginal_prob[key[1:]] = prob
# divide the joint prob by the marginal prob to get the conditional
for i, key in enumerate(self.counts.keys()):
try:
self.counts[key] = self.counts[key]/marginal_prob[key[1:]]
except ZeroDivisionError:
self.counts[key] = 0
class CountTable():
"""Counting the data"""
def __init__(self, model, mb, items):
"""
Parameters
----------
model : BayesianNetwork object
mb : MarkovBlanket object
items : ndarray
columns are data for parents, children, coparents
"""
self.table ={}
self.ind = {}
self.size = items.shape[0]
self.populate(model, mb, items)
def populate(self, model, mb, items):
keys_list = []
for ind in mb.parents + mb.children + mb.coparents:
keys_list.append(model.states[ind].distribution.keys())
# init
self.table = {p:0 for p in itertools.product(*keys_list)}
self.ind = {p:[] for p in itertools.product(*keys_list)}
# count
for i, row in enumerate(items):
try:
self.table[tuple(row)] += 1
self.ind[tuple(row)].append(i)
except KeyError:
                print('Items in row', i, 'do not match the set of keys.')
                raise
def em_bayesnet(model, data, ind_h, max_iter = 50, criteria = 0.005, verbose=False):
"""Returns the data array with the hidden node filled in.
(model is not modified.)
Parameters
----------
model : a BayesianNetwork object
an already baked BayesianNetwork object with initialized parameters
data : an ndarray
each column is the data for the node in the same order as the nodes in the model
the hidden node should be a column of NaNs
ind_h : int
index of the hidden node
max_iter : int
maximum number of iterations
criteria : float between 0 and 1
the change in probability in consecutive iterations, below this value counts as convergence
verbose : boolean
if True then the function prints an update with each iteration
Returns
-------
data : an ndarray
        the same data array with the hidden node column filled in
"""
# create the Markov blanket object for the hidden node
mb = MarkovBlanket(ind_h)
mb.populate(model)
mb.calculate_prob(model)
# create the count table from data
items = data[:, mb.parents + mb.children + mb.coparents]
ct = CountTable(model, mb, items)
# create expected counts
expected_counts = ExpectedCounts(model, mb)
expected_counts.update(model, mb)
# ---- iterate over the E-M steps
i = 0
    previous_params = np.array(list(mb.prob_table[mb.hidden].values()))
convergence = False
while (not convergence) and (i < max_iter):
mb.update_prob(model, expected_counts, ct)
expected_counts.update(model, mb)
        if verbose: print('Iteration', i, mb.prob_table, '\n')
# convergence criteria
        hidden_params = np.array(list(mb.prob_table[mb.hidden].values()))
change = abs(hidden_params - previous_params)
convergence = max(change) < criteria
        previous_params = hidden_params
i += 1
if i == max_iter:
        print('Maximum iterations reached.')
# ---- fill in the hidden node data by sampling the distribution
labels = {}
for key, prob in expected_counts.counts.items():
try:
labels[key[1:]].append((key[0], prob))
        except KeyError:
labels[key[1:]] = [(key[0], prob)]
new_data = data.copy()
for key, counts in ct.table.items():
label, prob = zip(*labels[key])
prob = tuple(round(p,5) for p in prob)
if not all(p == 0 for p in prob):
samples = np.random.choice(label, size=counts, p=prob)
new_data[ct.ind[key], ind_h] = samples
return new_data
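# The final fill-in step above (sampling the hidden label for each observed pattern with `np.random.choice`) can be sketched standalone; the labels and probabilities below are hypothetical:

```python
import numpy as np

labels = ['h0', 'h1']                               # hypothetical hidden values
cond = {'obs_a': (0.3, 0.7), 'obs_b': (0.5, 0.5)}   # P(hidden | pattern)
pattern_counts = {'obs_a': 4, 'obs_b': 2}           # rows matching each pattern

np.random.seed(0)  # fixed seed for a reproducible draw
filled = {}
for pattern, n in pattern_counts.items():
    # draw one hidden label per matching row, weighted by the conditional
    filled[pattern] = np.random.choice(labels, size=n, p=cond[pattern])
```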
a=np.array([[1,2,3],[4,5,6],[7,8,9]])
a
a[:,1] = [33,44,55]
b = a.copy()
b[0] = 99
b
a[[1,2]][:,1] = [99,99]
a[[0,1],1] = [98,99]
em_bayesnet(test_model2, data2, 1)
data2
mb2 = MarkovBlanket(1)
mb2.populate(test_model2)
mb2.calculate_prob(test_model2)
ct = CountTable(test_model2, mb2, data2[:,[0,2]])
ct.table
expected_counts = ExpectedCounts(test_model2, mb2)
expected_counts.update(test_model2, mb2)
expected_counts.counts
mb2.prob_table
mb.prob_table[3]
expected_counts
model.states[2].distribution.parameters[0]
# Functional tests
# Requirements:
# 1. data with NaN outside of the hidden node should raise exception
# 2. correct output for input data of type str, float/int
import unittest
# +
class TestDataType(unittest.TestCase):
def test_int_input(self):
data = np.array([[0,np.nan,0,0],
[0,np.nan,0,0],
[0,np.nan,0,1],
[0,np.nan,1,1],
[1,np.nan,0,0],
[1,np.nan,1,1],
[1,np.nan,1,1],
[1,np.nan,1,1]])
cloudy = DiscreteDistribution({0:0.4,1:0.6})
sprinkler = ConditionalProbabilityTable([
[1, 0, 0.7],
[1, 1, 0.23],
[0, 0, 0.4],
[0, 1, 0.6]], [cloudy] )
rain = ConditionalProbabilityTable([
[1, 0, 0.2],
[1, 1, 0.8],
[0, 0, 0.7],
[0, 1, 0.3]], [cloudy] )
wetgrass = ConditionalProbabilityTable([
[0, 0, 0, 0.8],
[0, 0, 1, 0.2],
[0, 1, 0, 0.1],
[0, 1, 1, 0.9],
[1, 0, 0, 0.1],
[1, 0, 1, 0.9],
[1, 1, 0, 0.1],
[1, 1, 1, 0.9]], [sprinkler, rain] )
s_c = State(cloudy, 'cloudy')
s_s = State(sprinkler, 'sprinkler')
s_r = State(rain, 'rain')
s_w = State(wetgrass, 'wetgrass')
model = BayesianNetwork('wet')
model.add_states(s_c, s_s, s_r, s_w)
model.add_transition(s_c, s_s)
model.add_transition(s_c, s_r)
model.add_transition(s_s, s_w)
model.add_transition(s_r, s_w)
model.bake()
new_data = em_bayesnet(model, data, 1)
        print(new_data)
#if __name__ == "__main__":
# unittest.main()
# +
class TestDataType(unittest.TestCase):
def test_int_input(self):
data = np.array([[0,np.nan],
[0,np.nan],
[1,np.nan],
[1,np.nan]])
A = DiscreteDistribution({0:0.4, 1:0.6})
B = ConditionalProbabilityTable([
[0, 0, 1.0],
[0, 1, 0.0],
[1, 0, 0.0],
[1, 1, 1.0]], [A] )
s_A = State(A, 'A')
s_B = State(B, 'B')
model = BayesianNetwork('copy')
model.add_states(s_A, s_B)
model.add_transition(s_A, s_B)
model.bake()
new_data = em_bayesnet(model, data, 1)
correct_output = np.array([
[0, 0],
[0, 0],
[1, 1],
[1, 1]])
self.assertEqual(new_data.tolist(), correct_output.tolist())
def test_str_input(self):
data = np.array([['0',np.nan],
['0',np.nan],
['1',np.nan],
['1',np.nan]])
A = DiscreteDistribution({'0':0.4, '1':0.6})
B = ConditionalProbabilityTable([
['0', '0', 1.0],
['0', '1', 0.0],
['1', '0', 0.0],
['1', '1', 1.0]], [A] )
s_A = State(A, 'A')
s_B = State(B, 'B')
model = BayesianNetwork('copy')
model.add_states(s_A, s_B)
model.add_transition(s_A, s_B)
model.bake()
new_data = em_bayesnet(model, data, 1)
correct_output = np.array([
['0', '0'],
['0', '0'],
['1', '1'],
['1', '1']])
self.assertEqual(new_data.tolist(), correct_output.tolist())
if __name__ == "__main__":
#unittest.main()
unittest.main(argv=['first-arg-is-ignored'], exit=False)
# -
test = RainTest()
test.test_sprinkler()
np.random.choice([1,0],size=10,p=(0.024,0.976))
(x for x in [1,2,3])
new_data = em_bayesnet(test_model2, data2, 1)
new_data[:,1].dtype
new_data.dtype
np.array([1,0,1]).dtype
np.issubdtype(np.array([1]),np.integer)
np.typecodes['AllInteger']
np.array([1,0,1]).dtype.kind
data = np.array([[np.nan, 'yellow', 'sweet', 'long'],
[np.nan, 'green', 'sour', 'round'],
[np.nan, 'green', 'sour', 'round'],
[np.nan, 'yellow', 'sweet', 'long'],
[np.nan, 'yellow', 'sweet', 'long'],
[np.nan, 'green', 'sour', 'round'],
[np.nan, 'green', 'sweet', 'long'],
[np.nan, 'green', 'sweet', 'round']])
Fruit = DiscreteDistribution({'banana':0.4, 'apple':0.6})
Color = ConditionalProbabilityTable([['banana', 'yellow', 0.6],
['banana', 'green', 0.4],
['apple', 'yellow', 0.6],
['apple', 'green', 0.4]], [Fruit] )
Taste = ConditionalProbabilityTable([['banana', 'sweet', 0.6],
['banana', 'sour', 0.4],
['apple', 'sweet', 0.4],
['apple', 'sour', 0.6]], [Fruit])
Shape = ConditionalProbabilityTable([['banana', 'long', 0.6],
['banana', 'round', 0.4],
['apple', 'long', 0.4],
['apple', 'round', 0.6]], [Fruit])
s_fruit = State(Fruit, 'fruit')
s_color = State(Color, 'color')
s_taste = State(Taste, 'taste')
s_shape = State(Shape, 'shape')
model = BayesianNetwork('fruit')
model.add_states(s_fruit, s_color, s_taste, s_shape)
model.add_transition(s_fruit, s_color)
model.add_transition(s_fruit, s_taste)
model.add_transition(s_fruit, s_shape)
model.bake()
new_data = em_bayesnet(model, data, 0)
new_data
model.states[0].distribution.parameters[0]
new_model = model.fit(new_data)
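# The naive Bayes factorization P(F,C,T,S) = P(F)\*P(C|F)\*P(T|F)\*P(S|F) can be checked by hand with plain dicts; the numbers below are the initial CPT values from the cells above, not the fitted parameters:

```python
# Hand computation of the naive Bayes factorization using the initial
# CPT values defined above (plain dicts, no pomegranate required).
p_fruit = {'banana': 0.4, 'apple': 0.6}
p_color = {('banana', 'yellow'): 0.6, ('banana', 'green'): 0.4,
           ('apple', 'yellow'): 0.6, ('apple', 'green'): 0.4}
p_taste = {('banana', 'sweet'): 0.6, ('banana', 'sour'): 0.4,
           ('apple', 'sweet'): 0.4, ('apple', 'sour'): 0.6}
p_shape = {('banana', 'long'): 0.6, ('banana', 'round'): 0.4,
           ('apple', 'long'): 0.4, ('apple', 'round'): 0.6}

def joint(f, c, t, s):
    # P(F, C, T, S) = P(F) * P(C|F) * P(T|F) * P(S|F)
    return p_fruit[f] * p_color[(f, c)] * p_taste[(f, t)] * p_shape[(f, s)]

# e.g. P(apple, green, sweet, round) = 0.6 * 0.4 * 0.4 * 0.6 = 0.0576
joint('apple', 'green', 'sweet', 'round')
```

# Because each CPT column sums to one per fruit, the joint sums to one over all sixteen outcomes, which is a quick sanity check on the tables.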
# ### README
# # Bayesian Network with Hidden Variables
#
# bayesnet_em predicts values for a hidden variable in a Bayesian network by implementing the expectation-maximization algorithm. It works as an extension to the Bayesian network implementation in pomegranate.
#
# ## Installation
#
# ### Dependencies
# > - numpy
# > - pomegranate
#
# ### Installing
# If you have Git installed:
# > `pip install git+https://github.com/nnvutisa/EM_BayesNet.git`
#
# ## Usage
#
# A Bayesian network is a probabilistic graphical model that represents relationships between random variables as a directed acyclic graph. Each node in the network represents a random variable, whereas each edge represents a conditional dependency. Bayesian networks provide an efficient way to construct a full joint probability distribution over the variables. The random variables can either be observed variables or unobserved variables, in which case they are called hidden (or latent) variables.
#
# pomegranate currently supports discrete Bayesian networks. Each node represents a categorical variable, which means it can take on a discrete number of values. The model parameters can be learned from data. However (at least as of now), it does not support networks with hidden variables. The purpose of bayesnet_em is to work with the pomegranate Bayesian network model to predict values of the hidden variables.
#
# bayesnet_em takes an already constructed and initialized `BayesianNetwork` object, a data array, and the index of the hidden node, and returns a complete data set.
#
# ### Example
# Let's use a simple example. Suppose there is a bag of fruits that contains apples and bananas. The observed variables for each sample taken from the bag are color, taste, and shape, while the label of the type of fruit was not recorded. We would like to predict the type of fruit for each sample along with the full probability distribution. This can be represented by a Bayesian network as:
#
# This simple relationship describes a naive Bayes model. The full joint probability distribution is
# > P(F,C,T,S) = P(F)\*P(C|F)\*P(T|F)\*P(S|F)
#
# Start by building a Bayesian network model using pomegranate.
#
# ```
# import numpy as np
# from pomegranate import *
# ```
#
# Here's our data. bayesnet_em supports data of type int and string.
#
# ```
# data = np.array([[np.nan, 'yellow', 'sweet', 'long'],
# [np.nan, 'green', 'sour', 'round'],
# [np.nan, 'green', 'sour', 'round'],
# [np.nan, 'yellow', 'sweet', 'long'],
# [np.nan, 'yellow', 'sweet', 'long'],
# [np.nan, 'green', 'sour', 'round'],
# [np.nan, 'green', 'sweet', 'long'],
# [np.nan, 'green', 'sweet', 'round']])
# ```
#
# The columns represent the nodes in a specified order (fruit, color, taste, shape). The order of the columns has to match the order of the nodes (states) when constructing the Bayesian network. The first column with the `nan` values is the hidden node. Next, create the distributions of all the nodes and initialize the probabilities to some non-uniform values. The first node is just P(F). The other three nodes are described by conditional probabilities.
#
# ```
# Fruit = DiscreteDistribution({'banana':0.4, 'apple':0.6})
# Color = ConditionalProbabilityTable([['banana', 'yellow', 0.6],
# ['banana', 'green', 0.4],
# ['apple', 'yellow', 0.6],
# ['apple', 'green', 0.4]], [Fruit] )
# Taste = ConditionalProbabilityTable([['banana', 'sweet', 0.6],
# ['banana', 'sour', 0.4],
# ['apple', 'sweet', 0.4],
# ['apple', 'sour', 0.6]], [Fruit])
# Shape = ConditionalProbabilityTable([['banana', 'long', 0.6],
# ['banana', 'round', 0.4],
# ['apple', 'long', 0.4],
# ['apple', 'round', 0.6]], [Fruit])
# ```
# Now, create the state (node) objects
#
# ```
# s_fruit = State(Fruit, 'fruit')
# s_color = State(Color, 'color')
# s_taste = State(Taste, 'taste')
# s_shape = State(Shape, 'shape')
# ```
#
# and the `BayesianNetwork` object.
#
# ```
# model = BayesianNetwork('fruit')
# ```
#
# Add states and edges to the network.
#
# ```
# model.add_states(s_fruit, s_color, s_taste, s_shape)
# model.add_transition(s_fruit, s_color)
# model.add_transition(s_fruit, s_taste)
# model.add_transition(s_fruit, s_shape)
# model.bake()
# ```
#
# Now that we have the initialized model, we want to fill in the first column.
#
# ```
# from bayesnet_em import *
# ```
#
# Call the `em_bayesnet` function. Our hidden node index is 0 since it is the first node.
#
# ```
# hidden_node_index = 0
# new_data = em_bayesnet(model, data, hidden_node_index)
# ```
#
# The returned array is the filled in data. Note that the function does not modify the model or the input data; it only returns a new data array.
#
# ```
# >>> new_data
# array([['banana', 'yellow', 'sweet', 'long'],
# ['apple', 'green', 'sour', 'round'],
# ['apple', 'green', 'sour', 'round'],
# ['banana', 'yellow', 'sweet', 'long'],
# ['banana', 'yellow', 'sweet', 'long'],
# ['apple', 'green', 'sour', 'round'],
# ['banana', 'green', 'sweet', 'long'],
# ['apple', 'green', 'sweet', 'round']],
# dtype='|S32')
# ```
#
# Of course the function does not actually know the meaning of the labels. In real use, it would be more appropriate to use integer labels or something like 'group1', 'group2', etc. For the sake of this example, I just assigned the correct label names.
#
# If you want to further use the Bayesian network model (to see the distributions or make predictions, for example), you can fit the model to the complete data set.
#
# ```
# new_model = model.fit(new_data)
# ```
#
# The model can be used to answer questions like, "what is the probability that a banana is yellow?"
#
# ```
# >>> new_model.predict_proba({'fruit':'banana'})[1].parameters
# [{'green': 0.25000000000000017, 'yellow': 0.74999999999999989}]
# ```
#
# What is the probability that a sample would be a sweet, round, and green apple?
#
# ```
# >>> new_model.probability(['apple', 'green', 'sweet', 'round'])
# 0.12500000000000003
# ```
#
# ## License
#
# This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
#
import matplotlib.pyplot as plt
# %matplotlib inline
new_model.marginal()[0]
new_model.predict_proba({'fruit':'apple'})
new_model.probability(['apple', 'green', 'sweet', 'round'])
new_model.predict_proba({'fruit':'banana'})[1].parameters
fig, ax = plt.subplots()
ax.bar(range(2), list(a1.values()), align='center')
plt.xticks(range(2), list(a1.keys()))
plt.show()
# file: em.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def a():
for row in range(8):
for col in range(7):
if (row%3==0 and (col>0 and col<5)) or (col==5 and (row>0 and row<7)) or ((row==1 or row==4 or row==5) and col==0) or (row+col==13):
print("*", end=" ")
else:
print(" ", end=" ")
print()
a()
def b():
for row in range(7):
for col in range(6):
if (col==0 ) or ((row==3 or row==6) and col<5) or (col==5 and (row>3 and row<6)):
print("*", end=" ")
else:
print(" ", end=" ")
print()
b()
def c():
for row in range(7):
for col in range(5):
if (col==0 and (row!=0 and row!=6)) or ((row==0 or row==6) and (col>0)):
print("*",end=" ")
else:
print(end=" ")
print()
c()
def d():
for row in range(7):
for col in range(6):
if (col==5 ) or ((row==3 or row==6) and col>0) or (col==0 and (row>3 and row<6)):
print("*", end=" ")
else:
print(" ", end=" ")
print()
d()
def e():
for row in range(7):
for col in range(5):
if (col==0 and (row!=0 and row!=6)) or ((row%3==0) and (col>0 and col<4)) or (col==4 and (row>0 and row<=2)):
print("*",end=" ")
else:
print(end=" ")
print()
e()
def f():
for row in range(8):
for col in range(7):
if (col==3 and row>0) or (row==5 and col>0 and col<6) or (row==0 and col>3 and col<6) or (col==6 and row==1):
print("*",end=" ")
else:
print(end=" ")
print()
f()
def g():
for row in range(7):
for col in range(5):
if (row%3==0) and (col>0 and col<4) or (col==0 and (row%3!=0 and row!=4)) or (col==4 and row<6):
print("*",end=" ")
else:
print(end=" ")
print()
g()
def h():
for row in range(7):
for col in range(6):
if (col==0 ) or ((row==3) and col<5) or (col==5 and (row>3)):
print("*", end=" ")
else:
print(" ", end=" ")
print()
h()
def i():
for row in range(9):
for col in range(7):
if (col==3 or row==8) and row>1 or (row>2 and row+col==3) or row==0 and col==3 :
print("*",end="")
else:
print(end=" ")
print()
i()
def j():
for row in range(7):
for col in range(5):
if (col==3 and row!=1 and row!=6) or (row==6 and col>0 and col<3) or (col==0 and row==5) or (row==col==2):
print("*",end=" ")
else:
print(end=" ")
print()
j()
def k():
for row in range(7):
for col in range(5):
if (col==0) or (row>1 and (row+col==5 or row-col==3)):
print("*",end=" ")
else:
print(end=" ")
print()
k()
def l():
for row in range(7):
for col in range(5):
if (col==0) or (row==6 and col<3):
print("*",end=" ")
else:
print(end=" ")
print()
l()
def m():
for row in range(5):
for col in range(7):
if row==col==0 or (col%3==0 and row>0) or (row==0 and col%3!=0):
print("*",end=" ")
else:
print(end=" ")
print()
m()
def n():
for row in range(5):
for col in range(4):
if row==col==0 or (col%3==0 and row>0) or (row==0 and col%3!=0):
print("*",end=" ")
else:
print(end=" ")
print()
n()
def n():
for row in range(5):
for col in range(4):
if row==0 and col<3 or (col%3==0 and row>0) :
print("*",end=" ")
else:
print(end=" ")
print()
n()
def o():
for row in range(7):
for col in range(5):
if ((col==0 or col==4) and (row!=0 and row!=6)) or (col>0 and col<4) and (row==0 or row==6):
print("*",end=" ")
else:
print(end=" ")
print()
o()
def p():
for row in range(7):
for col in range(5):
if col==0 or (col==4 and (row==1 or row==2)) or ((row==0 or row==3) and (col>0 and col<4)):
print("*",end=" ")
else:
print(end=" ")
print()
p()
def q():
for row in range(7):
for col in range(6):
if col==4 or (col==0 and (row==1 or row==2)) or ((row==0 or row==3) and (col>0 and col<4)) or (row==5 and col==5):
print("*",end=" ")
else:
print(end=" ")
print()
q()
def r():
for row in range(7):
for col in range(6):
if (col==2 and row>0) or (row<1 and col>2) or row==col==0:
print("*",end=" ")
else:
print(end=" ")
print()
r()
def s():
for row in range(7):
for col in range(5):
if (row%3==0) and (col>0 and col<4) or (col==0 and (row>0 and row<3)) or (col==4 and (row>3 and row<6)):
print("*",end=" ")
else:
print(end=" ")
print()
s()
def t():
for row in range(7):
for col in range(4):
if (row==2 and col<4) or (col==1 and row<6) or (row==6 and col>1):
print("*",end=" ")
else:
print(end=" ")
print()
t()
def u():
for row in range(7):
for col in range(6):
if ((col==0 or col==4) and row<5) or (row==5 and col>0 and col<4) or (row+col==10) and row!=6:
print("*",end=" ")
else:
print(end=" ")
print()
u()
def v():
for row in range(8):
for col in range(16):
if (row-col==0 or row+col==14):
print("*",end="")
else:
print(end=" ")
print()
v()
def w():
for row in range(6):
for col in range(7):
if (col%3==0 and row<5) or (row==5 and col%3!=0):
print("*",end=" ")
else:
print(end=" ")
print()
w()
def x():
for row in range(5):
for col in range(5):
if (row-col==0 or row+col==4):
print("*",end=" ")
else:
print(end=" ")
print()
x()
def y():
for row in range(7):
for col in range(5):
if (row%3==0 and row!=0) and (col>0 and col<4) or (col==0 and (row>0 and row<3)) or (col==4 and (row>0 and row<6)) or (row==5 and col==0):
print("*",end=" ")
else:
print(end=" ")
print()
y()
def z():
for row in range(5):
for col in range(5):
if row==0 or row==4 or row+col==4 :
print("*",end=" ")
else:
print(end=" ")
print()
z()
# file: small_alpha.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Movie Dialogue Crawler
import urllib.request as ur
from pythonopensubtitles.opensubtitles import OpenSubtitles
import fnmatch
import os
import time
import zipfile
import re
import pickle
from opencc import OpenCC
#path = '/Users/Wan-Ting/Google Drive/NCTU/NLP/lab2/crawler'
#os.chdir(path)
def t2s(text):
openCC = OpenCC('tw2sp') # Traditional Chinese (Taiwan standard) to Simplified Chinese (with phrases)
to_convert = text
converted = openCC.convert(to_convert)
return converted
def s2t(text):
openCC = OpenCC('s2t') # convert from Simplified Chinese to Traditional Chinese
# can also set conversion by calling set_conversion
# openCC.set_conversion('s2tw')
to_convert = text
converted = openCC.convert(to_convert)
return converted
def download(url,directory_to_extract_to='/Users/Wan-Ting/Downloads/lines'):
req = ur.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
html = ur.urlopen(req).read()
    # save the zip archive locally
with open("script.zip", 'wb') as f:
f.write(html)
with zipfile.ZipFile(open('script.zip', 'rb')) as f:
FileName = fnmatch.filter(f.namelist(),"*.srt")+fnmatch.filter(f.namelist(),"*.ass")
if len(FileName)!=0:
            # read the extracted files and collect the dialogue lines into new_lines
for file_name in FileName:
                # extract the file
f.extract(member=file_name, path=directory_to_extract_to)
else:
print("no .srt script")
#print(FileName,"downloaded!")
return FileName
def search(movietitle):
ost = OpenSubtitles()
token = ost.login("doctest", 'doctest')
movie = ost.search_subtitles([{'query' :movietitle, 'sublanguageid' : 'zht'}])
return movie
# +
def is_time_stamp(l):
if l[:2].isnumeric() and l[2] == ':':
return True
return False
def has_no_text(line):
    # strip the newline separator
l = line.strip('\n')
if not len(l):
return True
if l.isnumeric():
return True
if is_time_stamp(l):
return True
if l[0] == '(' and l[-1] == ')':
return True
return False
def is_lowercase_letter_or_comma(letter):
if letter.isalpha() and letter.lower() == letter:
return True
if letter == ',':
return True
return False
def clean_up(lines):
new_lines = []
for line in lines[:]:
#line = lines[6]
if has_no_text(line):
continue
else:
#append line
new_lines.append(line)
return new_lines
# -
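# A self-contained demo of the filtering rules above on a few sample SRT lines (the helpers are re-implemented inline so the cell runs standalone):

```python
def _is_time_stamp(l):
    # e.g. "00:00:01,000 --> 00:00:03,500"
    return l[:2].isnumeric() and len(l) > 2 and l[2] == ':'

def _has_no_text(line):
    l = line.strip('\n')
    if not len(l):
        return True
    if l.isnumeric():                    # subtitle index lines like "12"
        return True
    if _is_time_stamp(l):
        return True
    if l[0] == '(' and l[-1] == ')':     # stage directions
        return True
    return False

sample = ['1\n', '00:00:01,000 --> 00:00:03,500\n',
          'Hello there.\n', '(door slams)\n', '\n', 'General Kenobi!\n']
kept = [l for l in sample if not _has_no_text(l)]
# kept == ['Hello there.\n', 'General Kenobi!\n']
```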
# read in the movie list (.txt file)
file = open('/Users/Wan-Ting/Downloads/movie.txt', 'r', encoding = 'utf-8')
Data = file.read()
Data = Data.splitlines()
file.close()
# lowercase the movie titles, drop apostrophes, and replace non-alphanumeric characters with spaces
#List = [x.lower() for x in Data]
#List = [x.replace("'" ,'') for x in List]
#movietitle_ls = [re.sub(r"[^a-zA-Z0-9]+", ' ', k) for k in List]
#print(List)
movietitle_ls = [line.split('\t') for line in Data[1:]]
# +
directory_to_extract_to='/Users/Wan-Ting/Downloads/lines'
t = open("0452627_黃婉婷_task1.txt","w")
for mt in movietitle_ls:
    movietitle = mt[0]
    print(movietitle)
ost = OpenSubtitles()
token = ost.login("doctest", 'doctest')
movies = ost.search_subtitles([{'query' :movietitle, 'sublanguageid' : 'zht'}])
value = 0
ls = []
for movie in movies:
if movie['IDMovieImdb'] == mt[1]:
if value < int(movie['SubDownloadsCnt']):
value = int(movie['SubDownloadsCnt'])
ls.append(movie)
else:
pass
#print('imdbid does not match')
if len(ls):
high = ls[-1]
srt_ls = download(high['ZipDownloadLink'])
#print('downloaded!')
#print(srt_ls[0],high['SubEncoding'])
if high['SubEncoding']=='':
encoding = 'utf-8'
else:
encoding = high['SubEncoding']
with open(os.path.join(directory_to_extract_to,srt_ls[0]),encoding = encoding, errors="ignore") as f: #
l = f.readlines()
num = len(clean_up(l))
t.write(high['MovieReleaseName']+'\t'+high['SubDownloadLink']+'\t'+str(num)+'\n')
else:
print("ls is empty.")
t.close()
print("done!")
# file: crawl_movie_script.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %run -n main.py
# # datasets
# +
# path = join_path(DATA_DIR, DATASET)
# # !mkdir -p {path}
# +
# for name in DATASETS:
# paths = (
# join_path(CORUS_DATA_DIR, _)
# for _ in CORUS_FILES[name]
# )
# records = (
# record
# for path in paths
# for record in load_dataset(path)
# )
# records = log_progress(records, desc=name)
# records = sample(records, 1000)
# path = join_path(DATA_DIR, DATASET, name + JL + GZ)
# items = as_jsons(records)
# lines = format_jl(items)
# dump_gz_lines(lines, path)
# -
datasets = {}
for name in DATASETS:
path = join_path(DATA_DIR, DATASET, name + JL + GZ)
lines = load_gz_lines(path)
items = parse_jl(lines)
datasets[name] = list(from_jsons(items, Markup))
# # models
# +
# for name in MODELS:
# path = join_path(DATA_DIR, name)
# # !mkdir -p {path}
# -
# ## cpu
docker = docker_client()
model_name = SPACY
model = MODELS[model_name]()
model.start(docker)
model.wait()
for dataset_name in DATASETS:
records = model.map(_.words for _ in datasets[dataset_name])
records = log_progress(records, desc=dataset_name)
path = join_path(DATA_DIR, model_name, dataset_name + JL + GZ)
items = as_jsons(records)
lines = format_jl(items)
dump_gz_lines(lines, path)
model.stop(docker)
# ## gpu
# +
# # !vast search offers | grep '1 x GTX 1080 Ti'
# +
# model = DeeppavlovBERTModel()
# model = SlovnetBERTModel()
# model = StanzaModel()
# +
# # !vast create instance 498795 --image {model.image} --disk 30
# +
# # !vast show instances
# +
# # !ssh ssh4.vast.ai -p 20908 -l root -Nf -L {model.port}:localhost:{model.container_port}
# +
# for dataset_name in DATASETS:
# records = datasets[dataset_name]
# records = log_progress(records, desc=dataset_name)
# records = model.map(_.words for _ in records)
# path = join_path(DATA_DIR, model.name, dataset_name + JL + GZ)
# items = as_jsons(records)
# lines = format_jl(items)
# dump_gz_lines(lines, path)
# +
# # !vast destroy instance 500908
# -
# # score
dataset_models = {}
for dataset in DATASETS:
for model in MODELS:
path = join_path(DATA_DIR, model, dataset + JL + GZ)
lines = load_gz_lines(path)
items = parse_jl(lines)
dataset_models[dataset, model] = list(from_jsons(items, Markup))
scores = {}
for dataset, model in log_progress(dataset_models):
preds = dataset_models[dataset, model]
targets = datasets[dataset]
scores[dataset, model] = score_markups(preds, targets)
# # report
scores_table = scores_report_table(scores, DATASETS, MODELS)
html = format_scores_report(scores_table)
patch_readme(SYNTAX1, html, README)
patch_readme(SYNTAX1, html, SLOVNET_README)
HTML(html)
# +
BENCH = [
Bench(
UDPIPE,
init=6.91,
disk=45 * MB,
ram=242 * MB,
speed=56.2,
),
Bench(
SPACY,
init=9,
disk=140 * MB,
ram=579 * MB,
speed=41,
),
Bench(
DEEPPAVLOV_BERT,
init=34,
disk=(706 + 721) * MB, # BERT + model
ram=8.5 * GB,
speed=75,
device=GPU
),
Bench(
SLOVNET_BERT,
init=5,
disk=504 * MB,
ram=3427 * MB,
speed=200,
device=GPU
),
Bench(
SLOVNET,
init=1,
disk=27 * MB,
ram=125 * MB,
speed=450,
),
Bench(
STANZA,
init=3,
disk=591 * MB,
ram=890 * MB,
speed=12,
),
]
bench_table = bench_report_table(BENCH, MODELS)
html = format_bench_report(bench_table)
patch_readme(SYNTAX2, html, README)
patch_readme(SYNTAX2, html, SLOVNET_README)
HTML(html)
# -
# file: scripts/05_syntax/main.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # New LRISr Mark4 detector
# +
# imports
import os, glob
import numpy as np
from matplotlib import pyplot as plt
from astropy.io import fits
from pypeit.core import parse
from pypeit.display import display
from pypeit import flatfield
# -
# # Load data
dpath = '/scratch/REDUX/Keck/LRIS/new_LRISr'
rpath = os.path.join(dpath, 'Raw')
dfile = os.path.join(rpath, 'r211004_00003.fits')
hdul = fits.open(dfile)
hdul.info()
raw_data = hdul[0].data.astype(float)
raw_data.shape
# # Load pixel file
pix_file = os.path.join(dpath, 'pixelsL2U2L1U1_1_1.fits')
hdu_pix = fits.open(pix_file)
hdu_pix.info()
for ii in range(4):
print(hdu_pix[0].header[f'DSEC{ii}'])
# # Overwrite header
head0 = hdul[0].header
for card, value in hdu_pix[0].header.items():
head0[card] = value
# ## Write
hdul[0].header['BSEC4']
hdul.writeto(dfile.replace('.fits', '_upd.fits'))
# # Loop me
rfiles = glob.glob(rpath+'/*.fits')
rfiles
for rfile in rfiles:
hdul = fits.open(rfile)
head0 = hdul[0].header
for card, value in hdu_pix[0].header.items():
head0[card] = value
# Write
hdul.writeto(rfile.replace('.fits', '_upd.fits'))
# ## Moved them to their own folder..
# ----
# # Reduction
# ## Test
#
# ## pypeit_view_fits keck_lris_red_mark4 r211004_00003_upd.fits
#
# ### Looks good!
# ## pypeit_view_fits keck_lris_red_mark4 r211004_00003_upd.fits --proc
# +
## Setup
##
# -
# ----
# # Extract me
final_img = np.zeros((2064*2, 2057*2))
# ## Amp L1, aka AMPID3, aka readout sequence 2
parse.load_sections(head0['DSEC3'], fmt_iraf=True)
slice3 = parse.sec2slice(head0['DSEC3'], one_indexed=True, include_end=True)
slice3
slice3b = (slice3[1], slice3[0])
slice3b
amp_l1 = raw_data[slice3b]
amp_l1.shape
display.show_image(amp_l1)
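# `parse.sec2slice` converts an IRAF-style section string such as "[1:2048,1:2064]" into Python slices; below is a rough stdlib sketch of the convention (1-indexed, inclusive ends, with IRAF's [cols, rows] order swapped into numpy's [rows, cols], which is why the cells above swap the returned pair). Illustrative only; the real pypeit routine handles more options.

```python
import re

def iraf_sec_to_slices(sec):
    # "[c1:c2,r1:r2]" is 1-indexed with inclusive ends, so the Python
    # slice runs from start-1 to stop; swap into numpy's [rows, cols].
    c1, c2, r1, r2 = map(int, re.findall(r'\d+', sec))
    return (slice(r1 - 1, r2), slice(c1 - 1, c2))

iraf_sec_to_slices('[1:2048,1:2064]')
# (slice(0, 2064, None), slice(0, 2048, None))
```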
# ## Amp L2
slice2 = parse.sec2slice(head0['DSEC2'], one_indexed=True, include_end=True)
slice2
slice2b = (slice2[1], slice2[0])
amp_l2 = raw_data[slice2b]
amp_l2.shape
display.show_image(amp_l2)
# ----
# # Checking the gain
# ## The following was for an original set of gain values.
#
# ## Use chk_lris_mark4_gain.py in the DevSuite repo (dev_algorithms/lris) for further analysis
# ## Load a 4 amp flat
rdx_path = '/scratch/REDUX/Keck/LRIS/new_LRISr/keck_lris_red_mark4_A'
master_file = os.path.join(rdx_path, 'Masters', 'MasterFlat_A_1_01.fits')
# Load
flatImages = flatfield.FlatImages.from_file(master_file)
flatImages.pixelflat_raw.shape
# ## Spectral cut
spat_mid = flatImages.pixelflat_raw.shape[1]//2
spat_mid
left_spec = np.median(flatImages.pixelflat_raw[:,spat_mid-3:spat_mid], axis=1)
left_spec.size
right_spec = np.median(flatImages.pixelflat_raw[:,spat_mid:spat_mid+3], axis=1)
# ### Plot
plt.clf()
ax = plt.gca()
ax.plot(left_spec, label='left')
ax.plot(right_spec, label='right')
ax.legend()
ax.set_xlabel('spec')
plt.show()
rtio_spec = left_spec/right_spec
plt.clf()
ax = plt.gca()
ax.plot(rtio_spec, label='ratio', color='k')
#ax.plot(left_spec, label='right')
ax.set_xlabel('spec')
ax.legend()
plt.savefig('ratio_spec.png', dpi=300)
plt.show()
# ## Spatial cuts
spec_mid = flatImages.pixelflat_raw.shape[0]//2
spec_mid
bot_spat = np.median(flatImages.pixelflat_raw[spec_mid-3:spec_mid, :], axis=0)
top_spat = np.median(flatImages.pixelflat_raw[spec_mid:spec_mid+3, :], axis=0)
rtio_spat = top_spat/bot_spat
# +
plt.clf()
ax = plt.gca()
ax.plot(rtio_spat, label='ratio', color='r')
#ax.plot(left_spec, label='right')
ax.set_xlabel('spat')
ax.set_xlim(1400, 2600)
ax.set_ylim(0.9, 1.1)
ax.legend()
plt.savefig('ratio_spat.png', dpi=300)
plt.show()
# -
# file: doc/nb/LRIS_red_mark4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
from scipy import stats
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
output_data_file
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# -
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
citipy = []  # note: this list shadows the imported citipy module; all nearest_city calls happen above
clouds = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
api_key = weather_api_key
# +
#Set counters
records = 0
sets = 1
max_calls = 51
# displays API call header
print("-----------------------------------------------------------")
print("Beginning Data Retrieval")
print("-----------------------------------------------------------")
base_url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
# Iterate over the cities list
for city in cities:
    # Pause between batches to respect the API rate limit
    if records == max_calls:
        time.sleep(15)
        sets += 1
        records = 0
    # Build query string
    query_url = base_url + "appid=" + api_key + "&q=" + city + "&units=" + units
    # Send the request and parse the JSON response
    weather_response = requests.get(query_url).json()
    # Skip cities with missing data
    try:
        citipy.append(weather_response['name'])
        clouds.append(weather_response['clouds']['all'])
        country.append(weather_response['sys']['country'])
        date.append(weather_response['dt'])
        humidity.append(weather_response['main']['humidity'])
        lat.append(weather_response['coord']['lat'])
        lng.append(weather_response['coord']['lon'])
        max_temp.append(weather_response['main']['temp_max'])
        wind_speed.append(weather_response['wind']['speed'])
        # Display record status
        print(f"Processing Record {records} of Set {sets} | {city}.")
        records += 1
    except KeyError:
        print('City not found. Skipping...')
print("-----------------------------------------------------------")
print("Data Retrieval Complete")
print("-----------------------------------------------------------")
# -
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
# Build the collected lists into the cities_df DataFrame
cities_df = pd.DataFrame({'City': citipy,
'Cloudiness': clouds,
'Country': country,
'Date': date,
'Humidity': humidity,
'Lat': lat,
'Lng': lng,
'Max Temp': max_temp,
'Wind Speed': wind_speed})
cities_df.count()
cities_df.head()
# Save as a csv
import os
path = r'C:\Users\fkokr\Desktop\Rutgers Coding Bootcamp HW\python-api-challenge\output_data'
cities_df.to_csv(os.path.join(path,r'Weatherpy.csv'))
# ### Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# #### Latitude vs. Temperature Plot
# +
# Incorporate the other graph properties
plt.figure(figsize = (10, 5))
plt.title("City Latitude vs. Max Temperature (1/4/2019)")
plt.xlabel("Latitude")
plt.ylabel("Maximum Temperature (F)")
plt.grid()
# Build a scatter plot for each data type
plt.scatter(cities_df['Lat'], cities_df['Max Temp'], marker ="o", color = "blue", edgecolor = 'k')
#Save figure to directory
plt.savefig(r'C:\Users\fkokr\Desktop\Rutgers Coding Bootcamp HW\python-api-challenge\Images\CityLat_v_MaxTemp.png')
# Show plot
plt.show()
# -
# #### Latitude vs. Humidity Plot
# +
# Incorporate the other graph properties
plt.figure(figsize = (10, 5))
plt.title("City Latitude vs. Humidity (1/4/2019)")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.ylim(0, 105)  # humidity is a percentage
plt.grid()
# Build a scatter plot for each data type
plt.scatter(cities_df['Lat'], cities_df['Humidity'], marker ="o", color = "b", edgecolor = 'k')
#Save figure to directory
plt.savefig(r'C:\Users\fkokr\Desktop\Rutgers Coding Bootcamp HW\python-api-challenge\Images\CityLat_v_Humidity.png')
# Show plot
plt.show()
# -
# #### Latitude vs. Cloudiness Plot
# +
# Incorporate the other graph properties
plt.figure(figsize = (10, 5))
plt.title("City Latitude vs. Cloudiness (1/4/2019)")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.grid()
# Build a scatter plot for each data type
plt.scatter(cities_df['Lat'], cities_df['Cloudiness'], marker ="o", color = "b", edgecolor = 'k')
#Save figure to directory
plt.savefig(r'C:\Users\fkokr\Desktop\Rutgers Coding Bootcamp HW\python-api-challenge\Images\CityLat_v_Cloudiness.png')
# Show plot
plt.show()
# -
# #### Latitude vs. Wind Speed Plot
# +
# Incorporate the other graph properties
plt.figure(figsize = (10, 5))
plt.title("City Latitude vs. Wind Speed (1/4/2019)")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.grid(True)
# Build a scatter plot for each data type
plt.scatter(cities_df['Lat'], cities_df['Wind Speed'], marker ="o", color = "b", edgecolor = 'k')
#Save figure to directory
plt.savefig(r'C:\Users\fkokr\Desktop\Rutgers Coding Bootcamp HW\python-api-challenge\Images\CityLat_v_WindSpeed.png')
# Show plot
plt.show()
# -
# ## Linear Regression
# +
# OPTIONAL: Create a function to create Linear Regression plots
# Function accepts two lists (x data and y data) as arguments
def linRegPlot(x_data, y_data):
    # Linear regression
    slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
    fit = slope * x_data + y_int
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_xlabel('Latitude')
    # Print r-squared (linregress returns the correlation coefficient r)
    print(f'R-Squared: {r**2}')
    # Show figure
    plt.show()
# -
# Create Northern and Southern Hemisphere DataFrames
northern_hemisphere = cities_df.loc[cities_df['Lat'] >= 0]
southern_hemisphere = cities_df.loc[cities_df['Lat'] < 0]
# # Charts Using Functions - No Labels
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
linRegPlot(northern_hemisphere['Lat'], northern_hemisphere['Max Temp'])
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
linRegPlot(southern_hemisphere['Lat'], southern_hemisphere['Max Temp'])
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
linRegPlot(northern_hemisphere['Lat'], northern_hemisphere['Humidity'])
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
linRegPlot(southern_hemisphere['Lat'], southern_hemisphere['Humidity'])
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
linRegPlot(northern_hemisphere['Lat'], northern_hemisphere['Cloudiness'])
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
linRegPlot(southern_hemisphere['Lat'], southern_hemisphere['Cloudiness'])
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
linRegPlot(northern_hemisphere['Lat'], northern_hemisphere['Wind Speed'])
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
linRegPlot(southern_hemisphere['Lat'], southern_hemisphere['Wind Speed'])
# # Same Charts with Labels
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x_data = northern_hemisphere['Lat']
y_data = northern_hemisphere['Max Temp']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
y_int = format(y_int)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Northern Hemisphere - Max Temp vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Max Temp')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(0, -30, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x_data = southern_hemisphere['Lat']
y_data = southern_hemisphere['Max Temp']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Southern Hemisphere - Max Temp vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Max Temp')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(-30, 60, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
x_data = northern_hemisphere['Lat']
y_data = northern_hemisphere['Humidity']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Northern Hemisphere - Humidity (%) vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Humidity (%)')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(40, 55, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
x_data = southern_hemisphere['Lat']
y_data = southern_hemisphere['Humidity']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Southern Hemisphere - Humidity (%) vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Humidity (%)')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(-30, 55, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
x_data = northern_hemisphere['Lat']
y_data = northern_hemisphere['Cloudiness']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Northern Hemisphere - Cloudiness (%) vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Cloudiness (%)')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(40, 45, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
x_data = southern_hemisphere['Lat']
y_data = southern_hemisphere['Cloudiness']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Southern Hemisphere - Cloudiness (%) vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Cloudiness (%)')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(-30, 40, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
x_data = northern_hemisphere['Lat']
y_data = northern_hemisphere['Wind Speed']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Northern Hemisphere - Wind Speed (mph) vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Wind Speed (mph)')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(0, 20, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
x_data = southern_hemisphere['Lat']
y_data = southern_hemisphere['Wind Speed']
#linear regression
slope, y_int, r, p, std_err = stats.linregress(x_data, y_data)
fit = slope * x_data + y_int
slope = format(slope)
#Plot figure
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(x_data, y_data, linewidth=0, marker='o')
ax.plot(x_data, fit, 'r--')
#figure attributes
ax.set_title('Southern Hemisphere - Wind Speed (mph) vs. Latitude')
ax.set_xlabel('Latitude')
ax.set_ylabel('Wind Speed (mph)')
#Format Variable
slope1 = '{:,.2f}'.format(float(slope))
y_int1 = '{:,.2f}'.format(float(str(y_int)))
#Make Annotation
ax.text(-55, 20, 'y =' + str(slope1)+'x' + ' + ' + str(y_int1), color='r', fontsize=15)
# Print r-squared
print(f'R-Squared: {r**2}')
# Show figure
plt.show()
# -
| WaetherPy/WeatherPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from copy import deepcopy
from Udep2Mono.util import btree2list
from Udep2Mono.dependency_parse import dependency_parse
from Udep2Mono.binarization import BinaryDependencyTree
from Udep2Mono.polarization import PolarizationPipeline
det_type_words = {
"det:univ": ["all", "every", "each", "any", "all-of-the"],
    "det:exist": ["a", "an", "some", "double", "triple", "some-of-the", "at-least", "more-than"],
"det:limit": ["such", "both", "the", "this", "that",
"those", "these", "my", "his", "her",
"its", "either", "both", "another"],
"det:negation": ["no", "neither", "never", "none", "none-of-the", "less-than", "at-most", "few"]
}
# +
import os
from nltk.tree import Tree
from nltk.draw import TreeWidget
from nltk.draw.util import CanvasFrame
from IPython.display import Image, display
def jupyter_draw_nltk_tree(tree):
cf = CanvasFrame()
tc = TreeWidget(cf.canvas(), tree)
tc['node_font'] = 'arial 14 bold'
tc['leaf_font'] = 'arial 14'
tc['node_color'] = '#005990'
tc['leaf_color'] = '#3F8F57'
tc['line_color'] = '#175252'
cf.add_widget(tc, 20, 20)
os.system('rm -rf ../data/tree.png')
os.system('rm -rf ../data/tree.ps')
cf.print_to_file('../data/tree.ps')
cf.destroy()
os.system('magick convert ../data/tree.ps ../data/tree.png')
display(Image(filename='../data/tree.png'))
# +
from pattern.en import pluralize, singularize
from copy import copy
class ContradictionGenerator:
def __init__(self):
self.deptree = None
self.annotated = None
self.original = None
self.kb = {}
self.tree_log = []
self.sent_log = []
def deptree_negation_generate(self, tree, annotated, original):
self.tree_log = []
self.sent_log = []
self.deptree = tree
self.original = original
self.annotated = deepcopy(annotated)
        pop_list = list(annotated.popkeys())
        unzipped = list(zip(*pop_list))
        self.sentence = list(unzipped[0])
        self.word_ids = unzipped[3]
        self.pos_tags = unzipped[1]
        self.polarities = unzipped[2]
self.beVerb = False
self.nots = 1
self.nsubjs = 0
self.notDet = 1
self.expl = False
self.generate_not(self.deptree)
self.negate_det(self.deptree)
def rollback(self, tree, backup):
tree.val = backup.val
tree.left = deepcopy(backup.left)
tree.right = deepcopy(backup.right)
tree.mark = backup.mark
tree.pos = backup.pos
tree.negates = backup.negates
tree.id = backup.id
tree.is_tree = backup.is_tree
tree.is_root = backup.is_root
def negate_det(self, tree):
if tree.val == "nsubj":
self.nsubjs += 1
orig = self.nsubjs
self.negate_det(tree.left)
self.nsubjs = orig
self.negate_det(tree.right)
return
if tree.val == "det" and self.nsubjs == 1:
target = self.down_right(tree.left)
sentence = deepcopy(self.sentence)
if(target.val.lower() in det_type_words["det:exist"]):
sentence[target.id-1] = "no"
self.sent_log.append((' ').join(sentence))
elif(target.val.lower() in det_type_words["det:negation"]):
sentence[target.id-1] = "some"
self.sent_log.append((' ').join(sentence))
elif(target.val.lower() in det_type_words["det:univ"] and self.notDet == 1):
sentence.insert(target.id-1, "not")
self.notDet -= 1
self.sent_log.append((' ').join(sentence))
# "det:limit"
if tree.is_tree:
self.negate_det(tree.left)
self.negate_det(tree.right)
def down_right(self, tree):
if(tree.right != None):
return self.down_right(tree.right)
return tree
def add_not(self, tree, modifier):
if(self.nots < 1):
return
if self.beVerb:
index = self.down_right(tree.left).id
elif self.expl:
index = self.down_right(tree).id
else:
index = self.down_right(tree).id-1
sentence = deepcopy(self.sentence)
sentence.insert(index, modifier)
if("not" in modifier):
self.sent_log.append(' '.join(sentence))
self.nots -= 1
def generate_not(self, tree):
'''if tree.pos is not None:
if "VB" in tree.pos:
self.add_modifier_lexical(tree, "not", tree.val, tree.id)
self.add_modifier_lexical(tree, "not", tree.val, tree.id, 1)'''
if tree.val == "expl":
self.expl = True
if tree.val in ["aux", "cop"]:
self.beVerb = True
self.add_not(tree, "not")
elif tree.val in ["obj", "obl", "xcomp"] and not self.beVerb:
self.add_not(tree, "do not")
elif self.expl and tree.val == "nsubj":
self.add_not(tree, "not")
self.expl = False
if tree.is_tree:
self.generate_not(tree.left)
self.generate_not(tree.right)
def save_tree(self, isTree):
if isTree:
leaves = self.deptree.sorted_leaves().popkeys()
sentence = ' '.join([x[0] for x in leaves])
self.tree_log.append(self.deptree.copy())
#polarized = pipeline.postprocess(self.deptree, {})
#btreeViz = Tree.fromstring(polarized.replace('[', '(').replace(']', ')'))
#jupyter_draw_nltk_tree(btreeViz)
#leaves = copy(self.deptree).sorted_leaves().popkeys()
#sentence = ' '.join([x[0] for x in leaves])
else:
annotated_cp = deepcopy(self.annotated)
self.sent_log.append(
' '.join([word[0] for word in list(annotated_cp.popkeys())]))
def buildTree(self, config):
left = BinaryDependencyTree(
config['mod'], "N", "N", 1024,
wid=config['lid'], pos="JJ")
right = BinaryDependencyTree(
config['head'], "N", "N", 1024,
wid=config['rid'], pos="NN")
tree = BinaryDependencyTree(config['rel'], left, right, 1025)
left.mark = config['mark']
right.mark = config['mark']
tree.mark = config['mark']
return tree
# -
# +
sentences = ["There is an apple","Every people like music", "I am doing homework", "He likes studying", "No apple is blue"]#, "There is a sweet apple","There is not a sweet apple"]
pipeline = PolarizationPipeline(verbose = 0)
pg = ContradictionGenerator()
for sentence in sentences:
annotation = pipeline.single_polarization(sentence)
tree1 = pipeline.postprocess(annotation["polarized_tree"])
btree = Tree.fromstring(tree1.replace('[', '(').replace(']', ')'))
pg.deptree_negation_generate(annotation["polarized_tree"],annotation["annotated"],annotation["original"])
#tree_visuals.append(svgling.draw_tree(tree1))
jupyter_draw_nltk_tree(btree)
print(pg.sent_log)
# +
sentences = ["Some red flowers need light",
"Some red and beautiful flowers need light",
"All flowers need light and water",
"No flowers need bright or warm light",
"John can sing and dance",
"John ate an apple and finished his homework",
"John finished his homework and did not eat an apple"]
upward = ["Some students sing to celebrate their graduation",
"An Irishman won the nobel prize for literature.",
          "A big poison spider was spinning a web",
"A Californian special policeman pulled a car over and spoke to the driver",
"A woman is dancing in a cage",
"A woman is dancing beautifully in a cage",
"People are riding and paddling a raft",
"Some delegates finished the survey on time"]
sick_upward = ["A brown dog is attacking another animal in front of the tall man in pants",
"A skilled person is riding a bicycle on one wheel",
"Two children are lying in the snow and are drawing angels"]
downward = ["No spider was spinning a web",
"No student finished homework",
"I've never flown in an airplane"]
hypothesis = ["No poison spider was spinning a web",
              "No student at school finished homework completely",
              "I've never flown in an airplane because I'm afraid."]
# +
MED_upward = []
MED_upward_hypo = []
MED_downward = []
MED_downward_hypo = []
with open("../data/MED/upward.txt") as upward_med:
lines = upward_med.readlines()
for i in range(len(lines) // 4):
MED_upward.append(lines[i*4+1])
MED_upward_hypo.append(lines[i*4+2])
with open("../data/MED/downward.txt") as downward_med:
    lines = downward_med.readlines()
for i in range(len(lines) // 4):
MED_downward.append(lines[i*4+1])
MED_downward_hypo.append(lines[i*4+2])
# + tags=[]
from tqdm import tqdm
annotations = []
with open("./generation_log_downward.txt", 'w') as generate_log:
    phrasalGenerator = PhrasalGenerator()  # assumes PhrasalGenerator is defined/imported elsewhere
pipeline = PolarizationPipeline(verbose=0)
for i in tqdm(range(len(MED_downward))):
h_parsed, _ = dependency_parse(MED_downward_hypo[i], parser="stanza")
h_tree, _ = pipeline.run_binarization(h_parsed, MED_downward_hypo[i], {})
nn_phrases = dict()
vb_phrases = dict()
collect_modifiers(h_tree, nn_phrases, mod_type="NN")
collect_modifiers(h_tree, vb_phrases, mod_type="VB")
annotation = pipeline.single_polarization(MED_downward[i])
annotation['polarized_tree'].negates = h_tree.negates
#print("\n====================================")
generate_log.write("\n====================================")
phrasalGenerator.kb = merge(nn_phrases, vb_phrases)
#print(phrasalGenerator.kb)
#print("\nInit Premise: " + annotation['original'])
generate_log.write("\nInit Premise: " + annotation['original'])
generate_log.write("\nHypothesis: " + MED_downward_hypo[i])
#polarized = pipeline.postprocess(annotation['polarized_tree'], {})
#btreeViz = Tree.fromstring(polarized.replace('[', '(').replace(']', ')'))
#jupyter_draw_nltk_tree(btreeViz)
print(annotation['polarized_tree'].negates)
phrasalGenerator.deptree_generate(
annotation['polarized_tree'],
annotation['annotated'],
annotation['original'])
for gen_tree in phrasalGenerator.tree_log:
leaves = gen_tree.sorted_leaves().popkeys()
sentence = ' '.join([x[0] for x in leaves])
#print("\nNext Premise: " + sentence)
generate_log.write("\nNext Premise: " + sentence)
for gen_sent in set(phrasalGenerator.sent_log):
#print("\nNext Premise: " + gen_sent)
generate_log.write("\nNext Premise: " + gen_sent)
# +
up = ["few female committee members are from southern Europe."]
up_h = ["Not few female committee members are from southern Europe."]
annotations = []
phrasalGenerator = PhrasalGenerator()
pipeline = PolarizationPipeline(verbose=0)
for i in range(len(up)):
h_parsed, _ = dependency_parse(up_h[i], parser="stanza")
h_tree, _ = pipeline.run_binarization(h_parsed, up_h[i], {})
nn_phrases = {}
vb_phrases = {}
negates = collect_modifiers(h_tree, nn_phrases, mod_type="NN") + collect_modifiers(h_tree, vb_phrases, mod_type="VB")
annotation = pipeline.single_polarization(up[i])
annotation['polarized_tree'].negates = negates
print("\n====================================")
phrasalGenerator.kb = merge(nn_phrases, vb_phrases)
print(phrasalGenerator.kb)
print("\nInit Premise: " + annotation['original'])
polarized = pipeline.postprocess(annotation['polarized_tree'], {})
btree = Tree.fromstring(polarized.replace('[', '(').replace(']', ')'))
jupyter_draw_nltk_tree(btree)
phrasalGenerator.deptree_generate(
annotation['polarized_tree'],
annotation['annotated'],
annotation['original'])
for gen_tree in phrasalGenerator.tree_log:
leaves = gen_tree.sorted_leaves().popkeys()
sentence = ' '.join([x[0] for x in leaves])
print("\nNext Premise: " + sentence)
for gen_sent in set(phrasalGenerator.sent_log):
print("\nNext Premise: " + gen_sent)
# -
polarized = "[nsubj (NOUN researchers) (ccomp (mark (SCONJ that) (nsubj (acl (advmod (ADV first) (obl (case (ADP in) (PROPN Britain)) (VERB identified))) (acl:relcl (nsubj:pass (PRON which) (aux:pass (AUX is) (xcomp (mark (PART to) (advmod (ADV more) (cop (AUX be) (ADJ infectious)))) (VERB believed)))) (det (DET the) (NOUN variant)))) (aux (AUX could) (xcomp (nmod (case (ADP of) (det (DET the) (NOUN virus))) (nmod (case (ADP in) (det (DET this) (NOUN country))) (det (DET the) (amod (ADJ dominant) (NOUN form))))) (obl (case (ADP by) (PROPN March)) (VERB become)))))) (aux (AUX have) (obl (case (ADP In) (compound (PROPN United) (PROPN States))) (VERB warned))))]"
btreeViz = Tree.fromstring(polarized.replace('[', '(').replace(']', ')'))
jupyter_draw_nltk_tree(btreeViz)
| src/con_infer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Create necessary Images on S3
# +
import os
from PIL import Image
from io import BytesIO
import numpy as np
import pandas as pd
import s3fs
import boto3
os.getcwd()
# +
import sys
sys.path.append("/home/ec2-user/SageMaker/classify-streetview/mini-crops")
import mini_utils
# -
# # Generate a list of Images from S3
fs = s3fs.S3FileSystem()
# +
s3_image_bucket = 's3://streetview-w210'
gsv_images_dir = os.path.join(s3_image_bucket, 'gsv')
# See what is in the folder
gsv_images_paths = [filename for filename in fs.ls(gsv_images_dir) if '.jpg' in filename]
# -
len(gsv_images_paths)
gsv_images_paths[0:10]
# Extract just the imgid and heading
gsv_imgid_heading = [os.path.splitext(os.path.basename(filename))[0] for filename in gsv_images_paths]
gsv_imgid_heading[0:10]
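# A quick aside (hypothetical filename; standard-library behavior only): `str.strip('.jpg')` removes any of the *characters* `.`, `j`, `p`, `g` from both ends rather than the suffix, so it can silently eat parts of a name, while `os.path.splitext` drops the extension safely:

```python
import os

# strip() treats its argument as a set of characters, not a suffix
assert 'pig.jpg'.strip('.jpg') == 'i'            # p, g (and j, .) are stripped from the name too
assert os.path.splitext('pig.jpg')[0] == 'pig'   # extension removed correctly

# Applied to a (hypothetical) S3 key like the ones listed above
name = os.path.basename('s3://streetview-w210/gsv/12345_90.jpg')
assert os.path.splitext(name)[0] == '12345_90'
```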
# # Execute Cropping script on 70K images
mini_utils.make_sliding_window_crops(gsv_imgid_heading)
# # Find which images have already been made
# +
s3_image_bucket = 's3://gsv-crops2'
mini_crops_dir = os.path.join(s3_image_bucket, 'mini-crops')
# See what is in the folder
mini_crops_paths = [filename for filename in fs.ls(mini_crops_dir) if '.jpg' in filename]
# -
len(mini_crops_paths)
| inference-pipeline/2020-04-11-PrepareImages.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,py:light
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Two Asset HANK Model [<cite data-cite="6202365/ECL3ZAR7"></cite>](https://cepr.org/active/publications/discussion_papers/dp.php?dpno=13071)
#
# [](https://mybinder.org/v2/gh/econ-ark/HARK/BayerLuetticke/notebooks?filepath=HARK%2FBayerLuetticke%2FTwoAsset.ipynb)
#
# - Adapted from original slides by <NAME> and <NAME> (Henceforth, 'BL')
# - Jupyter notebook originally by <NAME>
# - Further edits by <NAME>, <NAME>, <NAME>
# ### Overview
#
# BL propose a method for solving Heterogeneous Agent DSGE models that uses fast tools originally employed for image and video compression to speed up a variant of the solution methods proposed by Michael Reiter. <cite data-cite="undefined"></cite>
#
# The Bayer-Luetticke method has the following broad features:
# * The model is formulated and solved in discrete time (in contrast with some other recent approaches <cite data-cite="6202365/WN76AW6Q"></cite>)
# * Solution begins by calculation of the steady-state equilibrium (StE) with no aggregate shocks
# * Both the representation of the consumer's problem and the description of the distribution are subjected to a form of "dimensionality reduction"
# * This means finding a way to represent them efficiently using fewer points
# * "Dimensionality reduction" of the consumer's decision problem is performed before any further analysis is done
# * This involves finding a representation of the policy functions using some class of basis functions
# * Dimensionality reduction of the joint distribution is accomplished using a "copula"
# * See the companion notebook for description of the copula
# * The method approximates the business-cycle-induced _deviations_ of the individual policy functions from those that characterize the riskless StE
# * This is done using the same basis functions originally optimized to match the StE individual policy function
# * The method of capturing dynamic deviations from a reference frame is akin to video compression
# ### Setup
#
# #### The Recursive Dynamic Planning Problem
#
# BL describe their problem a generic way; here, we will illustrate the meaning of their derivations and notation using the familiar example of the Krusell-Smith model, henceforth KS. <cite data-cite="6202365/VPUXICUR"></cite>
#
# Consider a household problem in presence of aggregate and idiosyncratic risk
# * $S_t$ is an (exogenous) aggregate state (e.g., levels of productivity and unemployment)
# * $s_{it}$ is a partly endogenous idiosyncratic state (e.g., wealth)
# * $\mu_t$ is the distribution over $s$ at date $t$ (e.g., the wealth distribution)
# * $P_{t}$ is the pricing kernel
# * It captures the info about the aggregate state that the consumer needs to know in order to behave optimally
# * e.g., KS showed that for their problem, a good _approximation_ to $P_{t}$ could be constructed using only $S_{t}$ and the aggregate capital stock $K_{t}$
# * $\Gamma$ defines the budget set
# * This delimits the set of feasible choices $x$ that the agent can make
#
# The Bellman equation is:
#
# \begin{equation}
# v(s_{it},S_t,\mu_t) = \max\limits_{x \in \Gamma(s_{it},P_t)} u(s_{it},x) + \beta \mathbb{E}_{t} v(s_{it+1}(x,s_{it}),S_{t+1},\mu_{t+1})
# \end{equation}
#
# which, for many types of problems, implies an Euler equation: <!-- Question: Why isn't R a t+1 dated variable (and inside the expectations operator? -->
# \begin{equation}
# u^{\prime}\left(x(s_{it},S_t,\mu_t)\right) = \beta R(S_t,\mu_t) \mathbb{E}_{t} u^{\prime}\left(x(s_{it+1},S_{t+1},\mu_{t+1})\right)
# \end{equation}
#
# #### Solving for the StE
#
# The steady-state equilibrium is the one that will come about if there are no aggregate risks (and consumers know this)
#
# The first step is to solve for the steady-state:
# * Discretize the state space
# * Representing the nodes of the discretization in a set of vectors
# * Such vectors will be represented by an overbar
# * e.g. $\bar{m}$ is the nodes of cash-on-hand $m$
# * The optimal policy $\newcommand{\policy}{c}\newcommand{\Policy}{C}\policy(s_{i};P)$ induces flow utility $u_{\policy}$ whose discretization is a vector $\bar{u}_{\bar{\policy}}$
# * Idiosyncratic dynamics are captured by a transition probability matrix $\Pi_{\bar{\policy}}$
# * $\Pi$ is like an expectations operator
# * It depends on the vectorization of the policy function $\bar{\policy}$
# * $P$ is constant because in StE aggregate prices are constant
# * e.g., in the KS problem, $P$ would contain the (constant) wage and interest rates
# * In StE, the discretized Bellman equation implies
# \begin{equation}
# \bar{v} = \bar{u}_{\bar{\policy}} + \beta \Pi_{\bar{\policy}}\bar{v}
# \end{equation}
# holds for the optimal policy
# * A linear interpolator is used to represent the value function
# * For the distribution, which (by the definition of steady state) is constant:
#
# \begin{eqnarray}
# \bar{\mu} & = & \bar{\mu} \Pi_{\bar{\policy}} \\
# d\bar{\mu} & = & d\bar{\mu} \Pi_{\bar{\policy}}
# \end{eqnarray}
# where we differentiate in the second line because we will be representing the distribution as a histogram, which counts the _extra_ population obtained by moving up <!-- Is this right? $\mu$ vs $d \mu$ is a bit confusing. The d is wrt the state, not time, right? -->
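# A minimal numerical sketch of this steady-state condition (toy 3-state transition matrix, not taken from BL's code): the stationary histogram is the left unit-eigenvector of $\Pi$, and forward iteration of $d\mu \mapsto d\mu \, \Pi$ converges to the same vector:

```python
import numpy as np

# Toy transition matrix (rows sum to 1); states could be wealth bins
Pi = np.array([[0.9, 0.1, 0.0],
               [0.2, 0.6, 0.2],
               [0.0, 0.3, 0.7]])

# Left eigenvector with eigenvalue 1: solve mu = mu @ Pi via eig of Pi^T
vals, vecs = np.linalg.eig(Pi.T)
mu = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
mu = mu / mu.sum()                      # normalize to a probability vector

# Cross-check by pushing an arbitrary histogram forward until it settles
mu_iter = np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    mu_iter = mu_iter @ Pi
assert np.allclose(mu, mu_iter, atol=1e-8)
assert np.allclose(mu @ Pi, mu)
```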
#
# We will define an approximate equilibrium in which:
# * $\bar{\policy}$ is the vector that defines a linear interpolating policy function $\policy$ at the state nodes
# * given $P$ and $v$
# * $v$ is a linear interpolation of $\bar{v}$
# * $\bar{v}$ is value at the discretized nodes
# * $\bar{v}$ and $d\bar{\mu}$ solve the approximated Bellman equation
# * subject to the steady-state constraint
# * Markets clear ($\exists$ joint requirement on $\bar{\policy}$, $\mu$, and $P$; denoted as $\Phi(\bar{\policy}, \mu, P) = 0$) <!-- Question: Why is this not $\bar{\mu}$ -->
#
# This can be solved by:
# 1. Given $P$ (prices like the wage and interest rate),
#     1. Find $d\bar{\mu}$ as the unit-eigenvector of $\Pi_{\bar{\policy}}$
#     2. Use standard solution techniques to solve the micro decision problem
# 2. Use a root-finder to solve for $P$
#     * This basically iterates the other two steps until it finds values where they are consistent
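# The micro-decision step can be made concrete: once a policy is fixed, the discretized StE Bellman equation $\bar{v} = \bar{u} + \beta \Pi \bar{v}$ is linear in $\bar{v}$, so it can be solved in one linear solve (toy numbers below; $\bar{u}$ and $\Pi$ are assumed to come from the given policy):

```python
import numpy as np

beta = 0.95
u_bar = np.array([1.0, 0.5, 0.2])      # toy flow utilities at the grid nodes
Pi = np.array([[0.9, 0.1, 0.0],        # toy transition matrix under the policy
               [0.2, 0.6, 0.2],
               [0.0, 0.3, 0.7]])

# v = u + beta * Pi v   <=>   (I - beta * Pi) v = u
v_bar = np.linalg.solve(np.eye(len(u_bar)) - beta * Pi, u_bar)
assert np.allclose(v_bar, u_bar + beta * Pi @ v_bar)
```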
# #### Introducing aggregate risk
#
# With aggregate risk
# * Prices $P$ and the distribution $\mu$ change over time
#
# Yet, for the household:
# * Only prices and continuation values matter
# * The distribution does not influence decisions directly
# #### Redefining equilibrium (Reiter, 2002)
# A sequential equilibrium with recursive individual planning <cite data-cite="6202365/UKUXJHCN"></cite> is:
# * A sequence of discretized Bellman equations, such that
# \begin{equation}
# v_t = \bar{u}_{P_t} + \beta \Pi_{\policy_t} v_{t+1}
# \end{equation}
# holds for policy $\policy_t$ which optimizes with respect to $v_{t+1}$ and $P_t$
# * and a sequence of "histograms" (discretized distributions), such that
# \begin{equation}
# d\mu_{t+1} = d\mu_t \Pi_{\policy_t}
# \end{equation}
# holds given the policy $h_{t}$, that is optimal given $P_t$, $v_{t+1}$
# * Prices, distribution, and policies lead to market clearing
# + code_folding=[0, 6, 17]
from __future__ import print_function
# This is a jupytext paired notebook that autogenerates a corresponding .py file
# which can be executed from a terminal command line via "ipython [name].py"
# But a terminal does not permit inline figures, so we need to test jupyter vs terminal
# Google "how can I check if code is executed in the ipython notebook"
def in_ipynb():
try:
if str(type(get_ipython())) == "<class 'ipykernel.zmqshell.ZMQInteractiveShell'>":
return True
else:
return False
except NameError:
return False
# Determine whether to make the figures inline (for spyder or jupyter)
# vs whatever is the automatic setting that will apply if run from the terminal
if in_ipynb():
# # %matplotlib inline generates a syntax error when run from the shell
# so do this instead
get_ipython().run_line_magic('matplotlib', 'inline')
else:
get_ipython().run_line_magic('matplotlib', 'auto')
# The tools for navigating the filesystem
import sys
import os
# Find pathname to this file:
my_file_path = os.path.dirname(os.path.abspath("TwoAsset.ipynb"))
# Relative directory for pickled code
code_dir = os.path.join(my_file_path, "../Assets/Two")
sys.path.insert(0, code_dir)
sys.path.insert(0, my_file_path)
# + code_folding=[0]
## Load Stationary equilibrium (StE) object EX3SS_20
import pickle
os.chdir(code_dir) # Go to the directory with pickled code
## EX3SS_20.p contains the stationary equilibrium (20: the number of illiquid and liquid wealth grid points)
EX3SS=pickle.load(open("EX3SS_20.p", "rb"))
## WangTao: Find the code that generates this
# -
# #### Compact notation
#
# It will be convenient to rewrite the problem using a compact notation proposed by Schmitt-Grohé and Uribe (2004)
#
# The equilibrium conditions can be represented as a non-linear difference equation
# * Controls: $Y_t = [v_t \ P_t \ Z_t^Y]$ and States: $X_t=[\mu_t \ S_t \ Z_t^X]$
# * where $Z_t$ are purely aggregate states/controls
# * Define <!-- Q: What is $\epsilon$ here? Why is it not encompassed in S_{t+1}? -->
# \begin{align}
# F(d\mu_t, S_t, d\mu_{t+1}, S_{t+1}, v_t, P_t, v_{t+1}, P_{t+1}, \epsilon_{t+1})
# &= \begin{bmatrix}
# d\mu_{t+1} - d\mu_t\Pi_{\policy_t} \\
# v_t - (\bar{u}_{\policy_t} + \beta \Pi_{\policy_t}v_{t+1}) \\
# S_{t+1} - \Policy(S_t,d\mu_t,\epsilon_{t+1}) \\
# \Phi(\policy_t,d\mu_t,P_t,S_t) \\
# \epsilon_{t+1}
# \end{bmatrix}
# \end{align}
# s.t. <!-- Q: Why are S_{t+1} and \epsilon_{t+1} not arguments of v_{t+1} below? -->
# \begin{equation}
# \policy_t(s_{t}) = \arg \max\limits_{x \in \Gamma(s,P_t)} u(s,x) + \beta \mathop{\mathbb{E}_{t}} v_{t+1}(s_{t+1})
# \end{equation}
# * The solution is a function-valued difference equation:
# \begin{equation}
# \mathop{\mathbb{E}_{t}}F(X_t,X_{t+1},Y_t,Y_{t+1},\epsilon_{t+1}) = 0
# \end{equation}
# where $\mathop{\mathbb{E}}$ is the expectation over aggregate states
# * It becomes real-valued when we replace the functions by their discretized counterparts
# * Standard techniques can solve the discretized version
# #### So, is all solved?
# The dimensionality of the system F is a big problem
# * With high dimensional idiosyncratic states, discretized value functions and distributions become large objects
# * For example:
# * 4 income states $\times$ 100 illiquid capital states $\times$ 100 liquid capital states $\rightarrow$ $\geq$ 40,000 values in $F$
# ### Bayer-Luetticke method
# #### Idea:
# 1. Use compression techniques as in video encoding
# * Apply a discrete cosine transformation (DCT) to all value/policy functions
# * DCT is used because it is the default in the video encoding literature
# * Choice of cosine is unimportant; linear basis functions might work just as well
# * Represent fluctuations as differences from this reference frame
# * Assume all coefficients of the DCT from the StE that are close to zero do not change when there is an aggregate shock (small things stay small)
#
# 2. Assume no changes in the rank correlation structure of $\mu$
# * Calculate the Copula, $\bar{C}$ of $\mu$ in the StE
# * Perturb only the marginal distributions
# * This assumes that the rank correlations remain the same
# * See the companion notebook for more discussion of this
# * Use fixed Copula to calculate an approximate joint distribution from marginals
#
#
# The approach follows the insight of KS in that it uses the fact that some moments of the distribution do not matter for aggregate dynamics
# #### Details
# 1) Compression techniques from video encoding
# * Let $\bar{\Theta} = dct(\bar{v})$ be the coefficients obtained from the DCT of the value function in StE
# * Define an index set $\mathop{I}$ that contains the x percent largest (i.e. most important) elements from $\bar{\Theta}$
# * Let $\theta$ be a sparse vector with non-zero entries only for elements $i \in \mathop{I}$
# * Define
# \begin{equation}
# \tilde{\Theta}(\theta_t)=\left\{
# \begin{array}{@{}ll@{}}
# \bar{\Theta}(i)+\theta_t(i), & i \in \mathop{I} \\
# \bar{\Theta}(i), & \text{else}
# \end{array}\right.
# \end{equation}
# * This assumes that the basis functions that contribute least to the representation of the function in levels make no contribution at all to its changes over time
# 2) Decoding
# * Now we reconstruct $\tilde{v}(\theta_t)=dct^{-1}(\tilde{\Theta}(\theta_{t}))$
# * idct=$dct^{-1}$ is the inverse dct that goes from the $\theta$ vector to the corresponding values
# * This means that in the StE the reduction step adds no additional approximation error:
# * Remember that $\tilde{v}(0)=\bar{v}$ by construction
# * But it allows us to reduce the number of derivatives that need to be calculated from the outset.
# * We only calculate derivatives for those basis functions that make an important contribution to the representation of the function
#
# 3) The histogram is recovered as follows
# * $\mu_t$ is approximated as $\bar{C}(\bar{\mu_t}^1,...,\bar{\mu_t}^n)$, where $n$ is the dimensionality of the idiosyncratic states <!-- Question: Why is there no time subscript on $\bar{C}$? I thought the copula was allowed to vary over time ... --> <!-- Question: is $\mu_{t}$ linearly interpolated between gridpoints? ... -->
# * $\mu_t^{i}$ are the marginal distributions <!-- Question: These are cumulatives, right? They are not in the same units as $\mu$ -->
# * The StE distribution is obtained when $\mu = \bar{C}(\bar{\mu}^1,...,\bar{\mu}^n)$
# * Typically prices are only influenced through the marginal distributions
# * The approach ensures that changes in the mass of one state (say, wealth) are distributed in a sensible way across the other dimensions
# * Where "sensible" means "like in StE" <!-- Question: Right? -->
# * The implied distributions look "similar" to the StE one (in contrast to Reiter, 2009)
#
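# A rough sketch of the recombination idea in step 3): freeze a dependence weight from the StE joint histogram and reapply it to perturbed marginals. The actual method interpolates the copula (a CDF) at the new marginal quantiles; this product-weight toy only conveys holding dependence fixed while the marginals move:

```python
import numpy as np

# Toy 2-D joint histogram standing in for the StE distribution
joint = np.array([[0.20, 0.05],
                  [0.10, 0.30],
                  [0.05, 0.30]])
m1_bar = joint.sum(axis=1)          # StE marginal over dimension 1
m2_bar = joint.sum(axis=0)          # StE marginal over dimension 2

# Freeze the dependence structure as a weight relative to independence
C_weights = joint / np.outer(m1_bar, m2_bar)

# Perturb the marginals (as an aggregate shock would) and recombine
m1_new = np.array([0.30, 0.45, 0.25])
m2_new = np.array([0.40, 0.60])
joint_new = C_weights * np.outer(m1_new, m2_new)
joint_new /= joint_new.sum()        # renormalize to a distribution
```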
# 4) The large system above is now transformed into a much smaller system:
# \begin{align}
# F(\{d\mu_t^1,...,d\mu_t^n\}, S_t, \{d\mu_{t+1}^1,...,d\mu_{t+1}^n\}, S_{t+1}, \theta_t, P_t, \theta_{t+1}, P_{t+1})
# &= \begin{bmatrix}
# d\bar{C}(\bar{\mu}_{t+1}^1,...,\bar{\mu}_{t+1}^n) - d\bar{C}(\bar{\mu}_t^1,...,\bar{\mu}_t^n)\Pi_{\policy_t} \\
# dct\left[idct(\tilde{\Theta}(\theta_t)) - \left(\bar{u}_{\policy_t} + \beta \Pi_{\policy_t}idct(\tilde{\Theta}(\theta_{t+1}))\right)\right] \\
# S_{t+1} - \Policy(S_t,d\mu_t) \\
# \Phi(\policy_t,d\mu_t,P_t,S_t) \\
# \end{bmatrix}
# \end{align}
#
# ### The two-asset HANK model
#
# We illustrate the algorithm in a two-asset HANK model described below
#
#
# #### Households
# - Maximizing discounted felicity
# - Consumption $c$
# - CRRA coefficient: $\xi$
# - EOS of CES consumption bundle: $\eta$
# - Disutility from work in GHH form:
# - Frisch elasticity $\gamma$
# - Two assets:
# - Liquid nominal bonds $b$, greater than lower bound $\underline b$
# - Borrowing constraint due to a wedge between borrowing and saving rate: $R^b(b<0)=R^B(b>0)+\bar R$
# - Illiquid assets capital $k$ nonnegative
# - Trading of illiquid assets is subject to a friction governed by $v$, the fraction of agents who can trade
# - If not traded, capital pays dividend $r$ and depreciates at rate $\tau$
# - Idiosyncratic labor productivity $h$:
# - $h = 0$ for entrepreneurs, who receive only profits $\Pi$
# - $h > 0$ for workers; it evolves according to an autoregressive process,
# - $\rho_h$ persistence parameter
# - $\epsilon^h$: idiosyncratic risk
#
# #### Production
# - Intermediate good producer
# - CRS production with TFP $Z$
# - Wage $W$
# - Cost of capital $r+\delta$
# - Reseller
# - Rotemberg price setting: quadratic adjustment cost scaled by $\frac{\eta}{2\kappa}$
# - Constant discount factor $\beta$
# - Investment subject to Tobin's q adjustment cost $\phi$
# - Aggregate risks $\Omega$ include
# - TFP $Z$, AR(1) process with persistence of $\rho^Z$ and shock $\epsilon^Z$
# - Uncertainty
# - Monetary policy
# - Central bank
# - Taylor rule on nominal saving rate $R^B$: reacts to deviation of inflation from target by $\theta_R$
# - $\rho_R$: policy inertia
# - $\epsilon^R$: monetary policy shocks
# - Government (fiscal rule)
# - Government spending $G$
# - Tax $T$
# - $\rho_G$: intensity of repaying government debt: $\rho_G=1$ implies roll-over
#
# #### Taking stock
#
# - Individual state variables: $\newcommand{\liquid}{m}\liquid$, $k$ and $h$, the joint distribution of individual states $\Theta$
# - Individual control variables: $c$, $n$, $\liquid'$, $k'$
# - Optimal policy for adjusters and nonadjusters are $c^*_a$, $n^*_a$ $k^*_a$ and $\liquid^*_a$ and $c^*_n$, $n^*_n$ and $\liquid^*_n$, respectively
#
# +
import time
from HARK.BayerLuetticke.Assets.Two.FluctuationsTwoAsset import FluctuationsTwoAsset, SGU_solver, plot_IRF
start_time = time.perf_counter()
## Choose an aggregate shock to perturb (one of three shocks: MP, TFP, Uncertainty)
# EX3SS['par']['aggrshock'] = 'MP'
# EX3SS['par']['rhoS'] = 0.0 # Persistence of variance
# EX3SS['par']['sigmaS'] = 0.001 # STD of variance shocks
#EX3SS['par']['aggrshock'] = 'TFP'
#EX3SS['par']['rhoS'] = 0.95
#EX3SS['par']['sigmaS'] = 0.0075
EX3SS['par']['aggrshock'] = 'Uncertainty'
EX3SS['par']['rhoS'] = 0.001 # Persistence of variance
EX3SS['par']['sigmaS'] = 0.02 # STD of variance shocks
## Choose an accuracy of approximation with DCT
### Determines number of basis functions chosen -- enough to match this accuracy
### EX3SS is precomputed steady-state pulled in above
EX3SS['par']['accuracy'] = 0.99999
## Implement state reduction and DCT
### Do state reduction on steady state
EX3SR = FluctuationsTwoAsset(**EX3SS)
SR = EX3SR.StateReduc()
print('SGU_solver')
SGUresult = SGU_solver(SR['Xss'],SR['Yss'],SR['Gamma_state'],SR['indexMUdct'],SR['indexVKdct'],SR['par'],SR['mpar'],SR['grid'],SR['targets'],SR['Copula'],SR['P_H'],SR['aggrshock'])
print('plot_IRF')
plot_IRF(SR['mpar'],SR['par'],SGUresult['gx'],SGUresult['hx'],SR['joint_distr'],
SR['Gamma_state'],SR['grid'],SR['targets'],SR['Output'])
end_time = time.perf_counter()
print('Elapsed time is ', (end_time-start_time), ' seconds.')
# -
| HARK/BayerLuetticke/notebooks/TwoAsset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py3-TF2.0]
# language: python
# name: conda-env-py3-TF2.0-py
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Input, GlobalMaxPooling1D, LSTM, Embedding
from tensorflow.keras.models import Model
df = pd.read_csv('spam.csv',encoding = 'ISO-8859-1')
df.head()
df.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'],axis=1, inplace=True)
df.head()
df.columns = [ 'labels', 'data']
df.head()
df['b_labels'] = df['labels'].map({'ham':0, 'spam': 1})
Y = df['b_labels'].values
df_train, df_test, Ytrain, Ytest = train_test_split(df['data'],Y,test_size=0.33)
MAX_VOCAB_SIZE = 20000
tokenizer = Tokenizer(num_words = MAX_VOCAB_SIZE)
tokenizer.fit_on_texts(df_train)
sequences_train = tokenizer.texts_to_sequences(df_train)
sequences_test = tokenizer.texts_to_sequences(df_test)
word2idx = tokenizer.word_index
V = len(word2idx)
print('# of tokens:', V)
data_train = pad_sequences(sequences_train)
T = data_train.shape[1]
data_train.shape
data_test = pad_sequences(sequences_test, maxlen =T)
data_test.shape
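# What `pad_sequences` does can be mimicked in plain Python — a hand-rolled sketch of its defaults (padding='pre', truncating='pre'), not the Keras implementation:

```python
def pad_like_keras(seqs, maxlen=None, value=0):
    """Left-pad (and left-truncate) integer sequences to a common length,
    mimicking pad_sequences' default 'pre' padding and truncation."""
    if maxlen is None:
        maxlen = max(len(s) for s in seqs)
    out = []
    for s in seqs:
        s = list(s)[-maxlen:]                       # 'pre' truncation keeps the tail
        out.append([value] * (maxlen - len(s)) + s) # 'pre' padding adds zeros in front
    return out

padded = pad_like_keras([[5, 2], [7, 1, 3, 9], [4]])
# each row now has length 4, with zeros added on the left
```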
# +
D = 20
M = 15
i = Input(shape=(T,))
x = Embedding(V+1,D)(i)
x = LSTM(M, return_sequences=True)(x)
x = GlobalMaxPooling1D()(x)
x = Dense(1, activation = 'sigmoid')(x)
model = Model(i,x)
# -
model.compile(
loss = 'binary_crossentropy',
optimizer= 'adam',
metrics = ['accuracy']
)
r = model.fit(
data_train, Ytrain,
epochs=10,
validation_data = (data_test, Ytest)
)
plt.plot(r.history['loss'],label='loss')
plt.plot(r.history['val_loss'],label='val_loss')
plt.legend()
plt.plot(r.history['accuracy'],label='acc')
plt.plot(r.history['val_accuracy'],label='val_acc')
plt.legend()
| TensorFlow2.0/NLP/NLP - LSTM - Spam Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/joannamickamedina/OOP-58001/blob/main/Fundamentals_of_Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ONLF-WKxj8Xu"
# #Fundamentals of Python
# + [markdown] id="gTh2Ujr1kYKF"
# Python Variables
# + colab={"base_uri": "https://localhost:8080/"} id="mq1mJAbgkAaf" outputId="a472a810-6f85-451f-c8f4-4d3c82960c4d"
x = float(1)
a,b,c="Sally","John","Ana"
a, b, c = 0,-1, 2
print('This is a sample')
print(a)
print(c)
# + [markdown] id="2o5r-Ql3mG2c"
# Casting
# + colab={"base_uri": "https://localhost:8080/"} id="HM12IPiJmSKI" outputId="6b11a364-2838-4382-e19b-f0f44ac84c9c"
print(x)
# + [markdown] id="5sYrQ52vmp9U"
# Type() Function
# + colab={"base_uri": "https://localhost:8080/"} id="aFbdprapmujE" outputId="71a242e0-5f29-423a-8a10-a8a427ad5d7b"
y ="Johnny"
print(type(y))
print(type(x))
# + [markdown] id="oOfBzndvnmmU"
# Double quotes and Single quote
# + colab={"base_uri": "https://localhost:8080/"} id="_Mhyyw9pnsRz" outputId="e26142fa-8aa6-42a1-c12c-209ea009deda"
#h = "Maria"
h = 'Maria'
v=1
print(h)
# + [markdown] id="rTz18M7IpBCu"
# Multiple Variables
# + colab={"base_uri": "https://localhost:8080/"} id="ovbsryG1pDOb" outputId="73ebbd5d-4416-41b5-80d6-cea9c4758ff1"
x ,y,z = "one","two",'three'
print(x)
print(y)
print(z)
print(x,y,z)
# + [markdown] id="cf-7wwEkqmP0"
# One Value to Multiple Variables
# + colab={"base_uri": "https://localhost:8080/"} id="CqSwfOlWqo8F" outputId="b19a063f-ddac-4606-f42a-ceee6814eed4"
x = y = z ="Stella"
print(x,y,z)
print(x)
print(y)
print(z)
# + [markdown] id="i2UHpPhSrUyp"
# Output Variables
# + colab={"base_uri": "https://localhost:8080/"} id="NzjA8CCzrW8N" outputId="9e6139af-baf7-4766-d0a0-1455aee9faff"
x ="enjoying"
print("Python is" + " "+x)
# + colab={"base_uri": "https://localhost:8080/"} id="TXpTtLA3rmIq" outputId="f9482fd4-763f-4323-f6d0-4e463170f15c"
x ="enjoying"
y ='Python is'
print(y+" "+x)
# + [markdown] id="VVIKbsthtTj6"
# Arithmetic Operations
# + colab={"base_uri": "https://localhost:8080/"} id="PhgN5m0VtWpI" outputId="9458bd58-41ad-454a-913e-294384e0df79"
f = 1
g = 2
i = 6
print(f+g)       # addition: 3
print(f-g)       # subtraction: -1
print(f*g)       # multiplication: 2
print(int(i/g))  # division cast to int: 3
print(3/g)       # true division: 1.5
print(3%g)       # modulo (remainder): 1
print(3//g)      # floor division: 1
print(3**2)      # exponentiation: 9
print(2^6)       # bitwise XOR, not a power: 4
print(2**6)      # exponentiation: 64
print(2|6)       # bitwise OR: 6
# + [markdown] id="ujz5lve7xUud"
# Boolean Operators
# + colab={"base_uri": "https://localhost:8080/"} id="o5Et0J6TxXir" outputId="3f467e5e-daa6-4cec-c19f-9f615f6212f9"
k = 2
l = 1
#k+=3 #Same as k= k+3
#print(k)
#print(k+=2)
print(k>>2) #shift right twice
print(k<<2) #shift left twice
# + [markdown] id="SsutC5Fgzzpf"
# Assignment Operators
# + colab={"base_uri": "https://localhost:8080/"} id="_Z34tjIDz3LC" outputId="ce8d08a5-842d-465c-d524-fc3dba35e757"
s = 2
s*=5 #Same as s=s*5
print(s)
# + [markdown] id="U8232Hq20QSX"
# Relational Operators
# + colab={"base_uri": "https://localhost:8080/"} id="DF1kXZ-t0SfX" outputId="a486723d-cb8d-434a-e8f8-8092f6c78fbd"
print(v>s) #v=1, s=2
print(v==s)
# + [markdown] id="uAo29MPy1BVE"
# Logical Operators
# + colab={"base_uri": "https://localhost:8080/"} id="nyy-DGww1EV7" outputId="adc14193-2e81-4482-f206-fb72fd40e74b"
print(v<s and s==s)
print(v<s or s==v)
print(not(v<s or s==v))
# + [markdown] id="5lNREGlj2AsP"
# Identity Operators
# + colab={"base_uri": "https://localhost:8080/"} id="_PvQ6DMc2DFK" outputId="ee647187-0fa6-407e-9677-92bfac7bf24b"
print(v is s)
print(v is not s)
| Fundamentals_of_Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:sympy]
# language: python
# name: conda-env-sympy-py
# ---
# +
import sympy as sym
from sympy.polys.multivariate_resultants import MacaulayResultant
sym.init_printing()
# -
# Macaulay Resultant
# ------------------
# The Macaulay resultant is a multivariate resultant. It is used for calculating the resultant of $n$ polynomials
# in $n$ variables. The Macaulay resultant is calculated as the determinant of two matrices,
#
# $$R = \frac{\text{det}(A)}{\text{det}(M)}.$$
# Matrix $A$
# -----------
# There are a number of steps needed to construct matrix $A$. Let us consider an example from https://dl.acm.org/citation.cfm?id=550525 to
# show the construction.
x, y, z = sym.symbols('x, y, z')
a_1_1, a_1_2, a_1_3, a_2_2, a_2_3, a_3_3 = sym.symbols('a_1_1, a_1_2, a_1_3, a_2_2, a_2_3, a_3_3')
b_1_1, b_1_2, b_1_3, b_2_2, b_2_3, b_3_3 = sym.symbols('b_1_1, b_1_2, b_1_3, b_2_2, b_2_3, b_3_3')
c_1, c_2, c_3 = sym.symbols('c_1, c_2, c_3')
variables = [x, y, z]
f_1 = a_1_1 * x ** 2 + a_1_2 * x * y + a_1_3 * x * z + a_2_2 * y ** 2 + a_2_3 * y * z + a_3_3 * z ** 2
f_2 = b_1_1 * x ** 2 + b_1_2 * x * y + b_1_3 * x * z + b_2_2 * y ** 2 + b_2_3 * y * z + b_3_3 * z ** 2
f_3 = c_1 * x + c_2 * y + c_3 * z
polynomials = [f_1, f_2, f_3]
mac = MacaulayResultant(polynomials, variables)
# **Step 1.** Calculate $d_i$ for $i = 1, \ldots, n$.
mac.degrees
# **Step 2.** Get $d_M$.
mac.degree_m
# **Step 3.** Get all monomials of degree $d_M$ and the size of that set.
mac.get_monomials_set()
mac.monomial_set
mac.monomials_size
# These are the columns of matrix $A$.
# **Step 4.** Get the rows and fill the matrix.
mac.get_row_coefficients()
# Each list is multiplied by the polynomials $f_1$, $f_2$ and $f_3$, respectively. We then fill the matrix
# with the coefficients of the monomials in each column.
matrix = mac.get_matrix()
matrix
# Matrix $M$
# -----------
# Columns that are not reduced are kept. Rows that contain one of the $a_i$s are dropped.
# $a_i$s are the coefficients of $x_i ^ {d_i}$.
mac.get_submatrix(matrix)
# Second example
# -----------------
# This is from: http://isc.tamu.edu/resources/preprints/1996/1996-02.pdf
x, y, z = sym.symbols('x, y, z')
a_0, a_1, a_2 = sym.symbols('a_0, a_1, a_2')
b_0, b_1, b_2 = sym.symbols('b_0, b_1, b_2')
c_0, c_1, c_2,c_3, c_4 = sym.symbols('c_0, c_1, c_2, c_3, c_4')
f = a_0 * y - a_1 * x + a_2 * z
g = b_1 * x ** 2 + b_0 * y ** 2 - b_2 * z ** 2
h = c_0 * y - c_1 * x ** 3 + c_2 * x ** 2 * z - c_3 * x * z ** 2 + c_4 * z ** 3
polynomials = [f, g, h]
mac = MacaulayResultant(polynomials, variables=[x, y, z])
mac.degrees
mac.degree_m
mac.get_monomials_set()
mac.get_size()
mac.monomial_set
mac.get_row_coefficients()
matrix = mac.get_matrix()
matrix
matrix.shape
mac.get_submatrix(mac.get_matrix())
| examples/notebooks/Macaulay_resultant.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Tutorial 7: Using PyBEAM to fit a model to experimental data.
#
# In this tutorial, we demonstrate how to use PyBEAM to fit model parameters to an experiment's data set. Specifically, we use data collected by Evans et al. (2020) in their publication "The Role of Passing Time in Decision Making". In this notebook, we fit data from the first experiment listed in their paper.
#
# In this experiment, participants were given a random-dot coherence task with four coherence conditions: 0%, 5%, 10%, and 40%. Reward rate was emphasized, with participants instructed to maximize the number of correct decisions. They had a fixed time of 5 seconds in which to make each choice.
#
# First, we import PyBEAM's default module. Additionally, we import a python script which sorts the Evans data (parse_Evans2020) and numpy in order to load data files.
#
# +
# import PyBEAM's default module
import pybeam.default as pbd
# import script to parse Evans data
from parse_Evans2020 import subject_rt
# import numpy to load Evans data
import numpy as np
# -
# The next block of code imports and sorts the Evans experiment 1 data set. We first load the data and subject labels using numpy and place them in arrays. Data contains the participant number, choice made (correct or incorrect), coherence (0%, 5%, 10% or 40%), and reaction time. Subjects contains the list of participants.
#
# To sort out the data for a single participant, we call the subject_rt function. It takes as inputs the data and subjects arrays, the subject number we would like to fit, a minimum rt value (data below this value are filtered out, as done in Evans), and a maximum rt value (data above this value are filtered out, as done in Evans).
#
# The function outputs a dictionary containing four keys: 'rt0', 'rt5', 'rt10' and 'rt40'. These contain dictionaries holding the reaction time data for each of the four coherence conditions. Each sub-dictionary contains two entries: 'rt_upper' and 'rt_lower'. 'rt_upper' contains an array of rt values corresponding to correct choices, and 'rt_lower' contains the rt values corresponding to errors. It outputs this particular structure because it is the data structure required by PyBEAM.
#
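# The structure described above can be sketched as follows (the rt values below are made up for illustration, not Evans et al. data):

```python
# Illustrative shape of the dictionary returned by subject_rt:
# one sub-dictionary per coherence, each with correct ('rt_upper')
# and error ('rt_lower') reaction times in seconds
rt_example = {
    'rt0':  {'rt_upper': [0.61, 0.84], 'rt_lower': [0.95]},
    'rt5':  {'rt_upper': [0.55, 0.90], 'rt_lower': [0.72, 1.10]},
    'rt10': {'rt_upper': [0.48, 0.50], 'rt_lower': [1.30]},
    'rt40': {'rt_upper': [0.39, 0.41], 'rt_lower': [0.66]},
}
```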
# +
# import rt data from Evans2020 Experiment 1
data = np.asarray(np.loadtxt(open("Exp1_data.csv", "rb"), delimiter=",", skiprows=1))
# import subject labels from Evans2020
subjects = np.asarray(np.loadtxt(open("Exp1_subjects.csv", "rb"), delimiter=",", skiprows=1))
# subject to fit model to (from list subjects)
subject_number = 31727
# filter data below min_rt and above max_rt
min_rt = 0.25
max_rt = 5.0
# file name to save posteriors to
file_name = 'subject_' + str(int(subject_number))
# return rt for desired subject number
rt = subject_rt(subject_number = subject_number,
data = data,
subjects = subjects,
min_rt = min_rt,
max_rt = max_rt)
rt
# -
# Now that we have imported our data, we define our model. Following Evans, we fit a base model with a Weibull threshold and uniform contamination. The dictionary required for this is below.
#
# +
# define model
model = {'type' : 'base', # model type ('base' or 'ugm')
'sigma' : 1.0, # sets sigma, the noise parameter
'threshold' : 'weibull', # sets threshold type (fixed, linear, exponential, or weibull)
'leakage' : False, # if True, drift rate has leaky integration
'delay' : False, # if True, decision threshold motion is delayed (only for non-fixed thresholds)
'contamination' : True} # if True, uniform contamination added to the model
# outputs which parameters your model uses
pbd.parse_model(model)
# -
# Now that we have defined our model, we can define our priors and conditions and run the inference program. Since we have four coherences, we will have four model conditions, each with its own drift rate prior. Each model condition has its own data set, corresponding to one of the four coherences. Following Evans, we assume that the threshold collapses completely from its initial location, so we set the collapse parameter, c, to -1 in the priors dictionary. We also assume that the uniform contamination can occur at any time between the min and max rt values, so we set the lower and upper contamination bounds to these values.
#
# +
# define priors
p = {'pt_nd' : 'Uniform("t_nd", lower = 0.0, upper = 0.75)', # prior for non-decision time
'pw' : 'Uniform("w", lower = 0.3, upper = 0.7)', # prior for relative start
'pmu0' : 'Uniform("mu0", lower = -5.0, upper = 5.0 )', # drift rate prior for coherence 0%
'pmu5' : 'Uniform("mu5", lower = -5.0, upper = 5.0)', # drift rate prior for choherence 5%
'pmu10' : 'Uniform("mu10", lower = -5.0, upper = 5.0)', # drift rate prior for coherence 10%
'pmu40' : 'Uniform("mu40", lower = -5.0, upper = 5.0)', # drift rate prior for coherence 40%
'pa' : 'Uniform("a", lower = 0.25, upper = 2.0)', # prior for threshold location
'plamb' : 'Uniform("lamb", lower = -1.0, upper = 2.0)', # prior for scale parameter
'pkappa' : 'Uniform("kappa", lower = -1.0, upper = 2.0)', # prior for shape parameter
'c' : -1.0, # collapse parameter value
'pg' : 'Uniform("g", lower = 0.0, upper = 0.75)', # prior for contamination strength
'gl' : min_rt, # uniform contamination lower bound
'gu' : max_rt} # uniform contamination upper bound
# model condition for coherence 0%
c0 = {'rt' : rt['rt0'],
't_nd' : 'pt_nd',
'w' : 'pw',
'mu' : 'pmu0',
'a' : 'pa',
'lamb' : 'plamb',
'kappa' : 'pkappa',
'c' : 'c',
'g' : 'pg',
'gl' : 'gl',
'gu' : 'gu'}
# model condition for coherence 5%
c5 = {'rt' : rt['rt5'],
't_nd' : 'pt_nd',
'w' : 'pw',
'mu' : 'pmu5',
'a' : 'pa',
'lamb' : 'plamb',
'kappa' : 'pkappa',
'c' : 'c',
'g' : 'pg',
'gl' : 'gl',
'gu' : 'gu'}
# model condition for coherence 10%
c10 = {'rt' : rt['rt10'],
't_nd' : 'pt_nd',
'w' : 'pw',
'mu' : 'pmu10',
'a' : 'pa',
'lamb' : 'plamb',
'kappa' : 'pkappa',
'c' : 'c',
'g' : 'pg',
'gl' : 'gl',
'gu' : 'gu'}
# model condition for coherence 40%
c40 = {'rt' : rt['rt40'],
't_nd' : 'pt_nd',
'w' : 'pw',
'mu' : 'pmu40',
'a' : 'pa',
'lamb' : 'plamb',
'kappa' : 'pkappa',
'c' : 'c',
'g' : 'pg',
'gl' : 'gl',
'gu' : 'gu'}
# load conditions into dictionary
cond = {0 : c0, 1 : c5, 2 : c10, 3 : c40}
# run parameter inference
trace = pbd.inference(model = model,
priors = p,
conditions = cond,
samples = 50000,
chains = 3,
cores = 3,
file_name = file_name)
# -
# plot posteriors
pbd.plot_trace(file_name = file_name, burnin = 25000);
# posterior summary
pbd.summary(file_name = file_name, burnin = 25000);
| default_tutorials/Default_Tutorial7_fitting_experimental_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="XKxVy4gGazpG" slideshow={"slide_type": "slide"}
# [](http://rpi.analyticsdojo.com)
# <center><h1>Boston Housing - Feature Selection and Importance</h1></center>
# <center><h3><a href = 'http://rpi.analyticsdojo.com'>rpi.analyticsdojo.com</a></h3></center>
# + [markdown] colab_type="text" id="omdwevWrazpZ" slideshow={"slide_type": "subslide"}
# ## Overview
# - Getting the Data
# - Reviewing Data
# - Modeling
# - Model Evaluation
# - Using Model
# - Storing Model
#
# + [markdown] colab_type="text" id="-CIppnzMazpd" slideshow={"slide_type": "subslide"}
# ## Getting Data
# - Available in the [sklearn package](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html) as a Bunch object (dictionary).
# - From FAQ: ["Don’t make a bunch object! They are not part of the scikit-learn API. Bunch objects are just a way to package some numpy arrays. As a scikit-learn user you only ever need numpy arrays to feed your model with data."](http://scikit-learn.org/stable/faq.html)
# - Available in the UCI data repository.
# - Better to convert to Pandas dataframe.
# + colab={"base_uri": "https://localhost:8080/", "height": 56} colab_type="code" id="RuwyJId-azpn" outputId="553005c0-f7e1-4992-ddcc-8644e6e7d2ab"
#From sklearn tutorial.
from sklearn.datasets import load_boston
boston = load_boston()
print( "Type of boston dataset:", type(boston))
# + colab={"base_uri": "https://localhost:8080/", "height": 56} colab_type="code" id="z4pN4rd4azpr" outputId="72d618df-c861-46ce-b97b-2e7cf3a188a5"
#A bunch, as you remember, is a dictionary-based dataset. Dictionaries are addressed by keys.
#Let's look at the keys.
print(boston.keys())
# + colab={"base_uri": "https://localhost:8080/", "height": 944} colab_type="code" id="ciThYFN4azpw" outputId="fa226cc7-8709-4d10-84b2-2606421f20fd" slideshow={"slide_type": "subslide"}
#DESCR sounds like it could be useful. Let's print the description.
print(boston['DESCR'])
# + colab={"base_uri": "https://localhost:8080/", "height": 228} colab_type="code" id="ZqlGCsXvazp0" outputId="7c0b0a62-77e4-4ed1-e0b1-75d2ab646878" slideshow={"slide_type": "subslide"}
# Let's change the data to a Panda's Dataframe
import pandas as pd
boston_df = pd.DataFrame(boston['data'] )
boston_df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 228} colab_type="code" id="BRAwfOYIazp6" outputId="a4721b96-384d-4627-8b8d-2a46153069e8" slideshow={"slide_type": "subslide"}
#Now add the column names.
boston_df.columns = boston['feature_names']
boston_df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 228} colab_type="code" id="3nZ4U7h-azqA" outputId="db166550-c7a1-4416-865f-28a4d62d6f47" slideshow={"slide_type": "subslide"}
#Add the target as PRICE.
boston_df['PRICE']= boston['target']
boston_df.head()
# + [markdown] colab_type="text" id="tUMBF8RSazqI" slideshow={"slide_type": "subslide"}
# ## What type of data are there?
# - First let's focus on the dependent variable, as the nature of the DV is critical to selection of model.
# - *Median value of owner-occupied homes in $1000's* is the Dependent Variable (continuous variable).
# - It is relevant to look at the distribution of the dependent variable, so let's do that first.
# - Here there is a normal distribution for the most part, with some at the top end of the distribution we could explore later.
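# The distribution plot described above is not shown in this excerpt; a hedged sketch of what such a histogram cell could look like (synthetic prices stand in for `boston_df['PRICE']`, which the real cell would use):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')                      # render without a display
import matplotlib.pyplot as plt

# Synthetic stand-in for boston_df['PRICE'] (506 observations, $1000s)
rng = np.random.default_rng(0)
prices = rng.normal(loc=22.5, scale=9.0, size=506).clip(5, 50)

fig, ax = plt.subplots()
counts, bin_edges, _ = ax.hist(prices, bins=30)
ax.set_xlabel('Median home value ($1000s)')
ax.set_ylabel('Count')
fig.savefig('price_hist.png')
```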
# + [markdown] colab_type="text" id="6VuBbfQgazqR" slideshow={"slide_type": "subslide"}
# ## Preparing to Model
# - It is common to separate `y` as the dependent variable and `X` as the matrix of independent variables.
# - Here we are using `train_test_split` to split the test and train.
# - This creates 4 subsets, with IV and DV separted: `X_train, X_test, y_train, y_test`
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 56} colab_type="code" id="ruTHiLY2azqS" outputId="1957e80f-4cb8-4966-88f0-32c7031c15b0"
#This will throw an error at import if you haven't upgraded.
# from sklearn.cross_validation import train_test_split
from sklearn.model_selection import train_test_split
#y is the dependent variable.
y = boston_df['PRICE']
#As we know, iloc is used to slice the array by index number. Here this is the matrix of
#independent variables.
X = boston_df.iloc[:,0:13]
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# + [markdown] colab_type="text" id="d3IVc6FPazqU" slideshow={"slide_type": "subslide"}
# ## Modeling
# - First import the package: `from sklearn.linear_model import LinearRegression`
# - Then create the model object.
# - Then fit the data.
# - This creates a trained model (an object) of class regression.
# - The variety of methods and attributes available for regression are shown [here](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).
#
# + colab={"base_uri": "https://localhost:8080/", "height": 72} colab_type="code" id="G46LxRigazqU" outputId="16247324-9447-4922-e8cc-cc0615a3e16b"
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit( X_train, y_train )
# + [markdown] colab_type="text" id="F2SnbBNJazqX" slideshow={"slide_type": "subslide"}
# ## Evaluating the Model Results
# - You have fit a model.
# - You can now store this model, save the object to disk, or evaluate it with different outcomes.
# - Trained regression objects have coefficients (`coef_`) and intercepts (`intercept_`) as attributes.
# - R-Squared is determined from the `score` method of the regression object.
# - For Regression, we are going to use the coefficient of determination as our way of evaluating the results, [also referred to as R-Squared](https://en.wikipedia.org/wiki/Coefficient_of_determination)
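To make the `score` values below concrete, here is a hedged pure-Python sketch of what R-squared computes: one minus the ratio of residual to total sum of squares. The `r_squared` helper is illustrative only; in practice you would use `lm.score` or `sklearn.metrics.r2_score`.

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# A perfect fit scores 1.0; predicting the mean everywhere scores 0.0.
print(r_squared([1, 2, 3], [1, 2, 3]))  # 1.0
print(r_squared([1, 2, 3], [2, 2, 2]))  # 0.0
```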
# + colab={"base_uri": "https://localhost:8080/", "height": 267} colab_type="code" id="i9HESM0CazqY" outputId="88fa40eb-9c3c-43ae-9cf4-d07d9e93da31"
print('labels\n',X.columns)
print('Coefficients: \n', lm.coef_)
print('Intercept: \n', lm.intercept_)
print('R2 for Train:', lm.score(X_train, y_train))
print('R2 for Test (cross validation)', lm.score(X_test, y_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 480} colab_type="code" id="vhC8TsHWazqb" outputId="9c14170d-6ea2-41a0-949d-154f4fd2d129" slideshow={"slide_type": "subslide"}
#Alternatively, we can show the results in a dataframe using the zip command.
pd.DataFrame( list(zip(X.columns, lm.coef_)),
columns=['features', 'estimatedCoeffs'])
# -
# # L2 Regularized (Ridge) Regression
# Note that `Ridge` applies an L2 penalty (L1 would be `Lasso`). By increasing the alpha, we shrink the coefficients and can zero in on the variables which are more important in the analysis.
#
#
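To see the shrinkage effect of alpha without sklearn, here is a hedged pure-Python sketch using the closed-form ridge solution for a single centered feature with no intercept, `w = sum(x*y) / (sum(x^2) + alpha)`. The `ridge_coef_1d` name and the toy data are illustrative assumptions, not part of the notebook.

```python
def ridge_coef_1d(x, y, alpha):
    # Closed-form ridge coefficient for one centered feature, no intercept.
    # Larger alpha inflates the denominator and shrinks w toward zero.
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + alpha)

x = [-2, -1, 0, 1, 2]
y = [-4, -2, 0, 2, 4]              # true slope is 2
print(ridge_coef_1d(x, y, 0))      # 2.0 (ordinary least squares)
print(ridge_coef_1d(x, y, 10))     # 1.0 (shrunk toward zero)
```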
from sklearn import linear_model
reg = linear_model.Ridge(alpha=5000)
reg.fit(X_train, y_train )
print('R2 for Train:', reg.score(X_train, y_train))
print('R2 for Test (cross validation)', reg.score(X_test, y_test))
#Alternatively, we can show the results in a dataframe using the zip command.
pd.DataFrame( list(zip(X.columns, reg.coef_)),
columns=['features', 'estimatedCoeffs'])
# ## Feature Importance With Random Forest Regression
#
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(random_state=99)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
print('R2 for Train:', forest.score(X_train, y_train))
print('R2 for Test (cross validation)', forest.score(X_test, y_test))
# ## Feature Selection
# "SelectFromModel is a meta-transformer that can be used along with any estimator that has a coef_ or feature_importances_ attribute after fitting. The features are considered unimportant and removed, if the corresponding coef_ or feature_importances_ values are below the provided threshold parameter. Apart from specifying the threshold numerically, there are built-in heuristics for finding a threshold using a string argument."
#
#
#
#
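The quoted logic can be sketched in plain Python: keep the features whose importance clears a threshold (defaulting to the mean of all importances, mirroring `SelectFromModel`'s "mean" heuristic), optionally capped at `max_features`. The helper name and toy importances below are illustrative assumptions.

```python
def select_by_importance(names, importances, threshold=None, max_features=None):
    # Default threshold: the mean importance, like SelectFromModel's "mean".
    if threshold is None:
        threshold = sum(importances) / len(importances)
    kept = [(n, imp) for n, imp in zip(names, importances) if imp >= threshold]
    kept.sort(key=lambda t: t[1], reverse=True)   # most important first
    if max_features is not None:
        kept = kept[:max_features]
    return [n for n, _ in kept]

names = ['CRIM', 'RM', 'LSTAT', 'TAX']
importances = [0.05, 0.45, 0.40, 0.10]            # hypothetical values
print(select_by_importance(names, importances, max_features=3))  # ['RM', 'LSTAT']
```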
from sklearn.feature_selection import SelectFromModel
model = SelectFromModel(forest, prefit=True, max_features=3)
feature_idx = model.get_support()
feature_names = X.columns[feature_idx]
X_NEW = model.transform(X)
pd.DataFrame(X_NEW, columns= feature_names)
# +
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X_NEW, y, test_size=0.3, random_state=0)
lm = LinearRegression()
lm.fit( X_train, y_train )
print('R2 for Train:', lm.score(X_train, y_train))
print('R2 for Test (cross validation)', lm.score(X_test, y_test))
# -
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit( X_train, y_train )
# + colab={"base_uri": "https://localhost:8080/", "height": 56} colab_type="code" id="QuR_RL73azqk" outputId="61b92363-b4b6-4500-9839-1bbe39bd5ef8"
from sklearn.metrics import r2_score
r2_train_reg = r2_score(y_train, lm.predict(X_train))
r2_test_reg = r2_score(y_test, lm.predict(X_test))
print(r2_train_reg,r2_test_reg )
# + [markdown] colab_type="text" id="r5H57cCKazrC"
#
# Copyright [AnalyticsDojo](http://rpi.analyticsdojo.com) 2016.
# This work is licensed under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license agreement.
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using H5Web in the notebook
# ## Display a simple HDF5 file
# +
import numpy as np
import h5py
with h5py.File("simple.h5", "w") as h5file:
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
Xg, Yg = np.meshgrid(X, Y)
h5file['threeD'] = [np.sin(2*np.pi*f*np.sqrt(Xg**2 + Yg**2)) for f in np.arange(0.1, 1.1, 0.1)]
h5file['twoD'] = np.sin(np.sqrt(Xg**2 + Yg**2))
h5file['oneD'] = X
h5file['scalar'] = 42
# +
from jupyterlab_h5web import H5Web
H5Web('simple.h5')
# -
# ## Display a NeXus file
# +
import numpy as np
import h5py
with h5py.File("nexus.nx", "w") as h5file:
root_group = h5file
root_group.attrs["NX_class"] = "NXroot"
root_group.attrs["default"] = "entry"
entry = root_group.create_group("entry")
entry.attrs["NX_class"] = "NXentry"
entry.attrs["default"] = "process/spectrum"
process = entry.create_group("process")
process.attrs["NX_class"] = "NXprocess"
process.attrs["default"] = "spectrum"
spectrum = process.create_group("spectrum")
spectrum.attrs["NX_class"] = "NXdata"
spectrum.attrs["signal"] = "data"
spectrum.attrs["auxiliary_signals"] = ["aux1", "aux2"]
data = np.array([np.linspace(-x, x, 10) for x in range(1, 6)])
spectrum["data"] = data ** 2
spectrum["aux1"] = -(data ** 2)
spectrum["aux2"] = -data
spectrum["data"].attrs["interpretation"] = "spectrum"
image = process.create_group("image")
image.attrs["NX_class"] = "NXdata"
image.attrs["signal"] = "data"
x = np.linspace(-5, 5, 50)
x0 = np.linspace(10, 100, 10)
image["data"] = [a*x**2 for a in x0]
image["X"] = np.linspace(-2, 2, 50, endpoint=False)
image["X"].attrs["units"] = u"µm"
image["Y"] = np.linspace(0, 0.1, 10, endpoint=False)
image["Y"].attrs["units"] = "s"
image.attrs["axes"] = ["X"]
image.attrs["axes"] = ["Y", "X"]
# +
from jupyterlab_h5web import H5Web
H5Web('nexus.nx')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="PFFRXs81RePx"
# This notebook was created by the team KépfelismeréSCH for the course "Deep Learning a gyakorlatban" (VITMAV45) in the fall semester 2020. The team consists of <NAME> and <NAME>, as our 3rd teammate left us during the semester.
#
# The notebook describes a solution to the Facial Emotion Recognition (FER) task. We load and preprocess the public FER 2013 dataset, mainly building on the following source: https://www.kaggle.com/phamvanlinh143/fer-2013-notebook
#
# Then we build different models using Keras, based on relevant studies cited in our documentation. Our goal was to have some models with different architectures, to later create an ensemble of these models. The standalone models achieve overall accuracies between 60-66%, which is really close to the level humans are able to recognize facial expressions (65%).
#
# The ensemble building is a common way to improve the overall accuracy in FER solutions by combining multiple different networks. We consider decision-level ensembles, and experiment with multiple architectures, like simple and accuracy-weighted averaging and a categorical-confidence weighted approach. In the latter case, we weight every output channel by the rate of the true positives achieved on the test data. We were able to achieve overall accuracies over 69%, a notable improvement considering that our ensembles consist of only a few networks.
#
# Note that the running of ensembles requires the pretrained models to be in the same directory as the source, however, we tested our solutions only in Colab.
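The decision-level fusion described above can be sketched in a few lines of pure Python: weight each model's class probabilities, sum per class, and take the argmax. The `weighted_ensemble` helper and the two hypothetical model outputs below are illustrative only; the actual ensembles later in this notebook derive the weights from accuracy or per-class true-positive rates.

```python
def weighted_ensemble(prob_lists, weights):
    # Decision-level fusion: weighted sum of per-class probabilities, then argmax.
    n_classes = len(prob_lists[0])
    fused = [0.0] * n_classes
    for probs, w in zip(prob_lists, weights):
        for c in range(n_classes):
            fused[c] += w * probs[c]
    return max(range(n_classes), key=lambda c: fused[c])

# Two hypothetical models disagreeing on a 3-class problem.
m1 = [0.6, 0.3, 0.1]
m2 = [0.2, 0.7, 0.1]
print(weighted_ensemble([m1, m2], [0.5, 0.5]))   # 1 (simple average favors class 1)
print(weighted_ensemble([m1, m2], [0.9, 0.1]))   # 0 (trusting model 1 flips the call)
```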
# + colab={"base_uri": "https://localhost:8080/"} id="jGopRJ1RQK7l" outputId="cd68ffbf-c307-47bd-808f-657f882a1477"
#access to pretrained models stored on drive
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="u96v_JYdU6D0" outputId="3344afc0-d8fe-4fa5-a509-6bc435641610"
#install database handling package
# !sudo apt-get install python3-dev default-libmysqlclient-dev
# !pip install mysqlclient
# + id="J7ZVw5vzu2IM"
#import modules
import cv2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sqlalchemy
import MySQLdb
from math import sqrt
from google.colab import files
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input, Dense, Flatten, Dropout, BatchNormalization, Softmax, Activation
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.optimizers import Adam, SGD
from keras.regularizers import l1, l2
from sklearn.metrics import confusion_matrix
import itertools
# + id="5CX1SCVTw0ho"
#read data from mysql
engine = sqlalchemy.create_engine('mysql://colab:<EMAIL>:8888/deeplearning',connect_args={'connect_timeout': 10})
connection = engine.connect()
query = 'SELECT * FROM data;'
data = pd.read_sql_query(query,connection)
connection.close()
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="oT0Khl_g6inR" outputId="5e29b688-3753-4f73-c26f-4847867bb2d2"
#plot data structure
data
# + id="MZ-6XLPm2f0e" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="a2b7ca16-2e2c-4a2a-f54a-2417f2f492c3"
#plot number of images in each class
emotions = ["Happy", "Neutral", "Sad", "Fear", "Angry", "Surprise", "Disgust"]
summary=data.emotion.value_counts().to_frame()
summary['emotions']=emotions
summary.columns=['count','emotions']
summary
# + id="zldGTS1O9C3M" colab={"base_uri": "https://localhost:8080/"} outputId="471a9bdf-8ee6-4c8b-dadb-16997c4509ac"
#number of train, valid and test images
data.usages.value_counts()
# + id="ugC8gup39Edz"
#image parameters
num_classes = 7
depth = 1
height = int(sqrt(len(data.pixels[0].split()))) #48
width = int(height)
emotion_labels = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
classes = np.array(("Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"))
# + id="8UTqIOvuD7K2" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="93048c6d-14d8-4435-f789-e745c8ddc2a5"
#plot the first 10 images
for i in range(10):
img = np.mat(data.pixels[i]).reshape(height, width)
plt.figure(i)
plt.title(emotion_labels[data.emotion[i]])
plt.imshow(img,cmap='gray')
# + id="LgyQvX8SGkLs"
#make the train-valid-test data split
train_set = data[(data.usages == 'Training')]
val_set = data[(data.usages == 'PublicTest')]
test_set = data[(data.usages == 'PrivateTest')]
#convert strings to arrays
X_train = np.array(list(map(str.split, train_set.pixels)), np.float32)
X_val = np.array(list(map(str.split, val_set.pixels)), np.float32)
X_test = np.array(list(map(str.split, test_set.pixels)), np.float32)
#conversion to 2D
X_train = X_train.reshape(X_train.shape[0], 48, 48, 1)
X_val = X_val.reshape(X_val.shape[0], 48, 48, 1)
X_test = X_test.reshape(X_test.shape[0], 48, 48, 1)
num_train = X_train.shape[0]
num_val = X_val.shape[0]
num_test = X_test.shape[0]
#make train-valid-test targets split
y_train = train_set.emotion
y_train = np_utils.to_categorical(y_train, num_classes)
y_val = val_set.emotion
y_val = np_utils.to_categorical(y_val, num_classes)
y_test = test_set.emotion
y_test = np_utils.to_categorical(y_test, num_classes)
# + id="PbXMgD-zVFQO"
def upscale(data):
data_upscaled = np.ndarray((data.shape[0], 64, 64, 1))
for i in range(data.shape[0]):
im = data[i,:,:,0]
imsc = cv2.resize(im, dsize=(64, 64))
data_upscaled[i,:,:,0] = imsc
return data_upscaled
# + id="ck9JExz2VQKD"
#one of the chosen architectures uses input images with a size of 64x64 pixels
X_train_upscaled = upscale(X_train)
X_val_upscaled = upscale(X_val)
X_test_upscaled = upscale(X_test)
# + id="utttX-NPYVSl"
#split test data for ensemble weighting
X_test1,X_test2 = np.array_split(X_test,2)
X_testU1,X_testU2 = np.array_split(X_test_upscaled,2)
y_test1,y_test2 = np.array_split(y_test,2)
X_test_list = [X_test1, X_test2]
X_testU_list = [X_testU1, X_testU2]
# + colab={"base_uri": "https://localhost:8080/"} id="IdJid9FzY7mf" outputId="f43c125f-958c-46f9-8c2f-26000ec37033"
print(X_test1.shape, X_test2.shape, X_testU1.shape, X_testU2.shape, y_test1.shape, y_test2.shape)
# + id="aYoeIx_XtwrW"
#make batches, and scale the images
#data augmentation with horizontal flipping
datagen = ImageDataGenerator(
rescale=1./255,
rotation_range = 10,
horizontal_flip = True,
width_shift_range=0.1,
height_shift_range=0.1,
fill_mode = 'nearest')
testgen = ImageDataGenerator(
rescale=1./255
)
datagen.fit(X_train)
datagen.fit(X_train_upscaled)
batch_size = 128
train_flow = datagen.flow(X_train, y_train, batch_size=batch_size)
val_flow = testgen.flow(X_val, y_val, batch_size=batch_size)
test_flow = testgen.flow(X_test, y_test, batch_size=batch_size)
train_flow_upscaled = datagen.flow(X_train_upscaled, y_train, batch_size=batch_size)
val_flow_upscaled = testgen.flow(X_val_upscaled, y_val, batch_size=batch_size)
test_flow_upscaled = testgen.flow(X_test_upscaled, y_test, batch_size=batch_size)
#test_flows for the ensemble
test_flow1 = testgen.flow(X_test1, y_test1, batch_size=batch_size)
test_flow2 = testgen.flow(X_test2, y_test2, batch_size=batch_size)
test_flow_upscaled1 = testgen.flow(X_testU1, y_test1, batch_size=batch_size)
test_flow_upscaled2 = testgen.flow(X_testU2, y_test2, batch_size=batch_size)
test_flow_list = [test_flow1, test_flow2]
test_flowU_list = [test_flow_upscaled1, test_flow_upscaled2]
# + id="F9GUrrP1uYov"
def FER_Model1(input_shape=(48,48,1)):
#define model (4 CNN and 2 FC layers)
inp = Input(shape=input_shape, name='input')
num_classes = 7
#the 1-st block
conv1 = Conv2D(64, kernel_size=3, activation='relu', padding='same', name = 'conv1')(inp)
conv1 = BatchNormalization()(conv1)
pool1 = MaxPooling2D(pool_size=(2,2), name = 'pool1')(conv1)
drop1 = Dropout(0.3, name = 'drop1')(pool1)
#the 2-nd block
conv2 = Conv2D(128, kernel_size=5, activation='relu', padding='same', name = 'conv2')(drop1)
conv2 = BatchNormalization()(conv2)
pool2 = MaxPooling2D(pool_size=(2,2), name = 'pool2')(conv2)
drop2 = Dropout(0.3, name = 'drop2')(pool2)
#the 3-rd block
conv3 = Conv2D(512, kernel_size=3, activation='relu', padding='same', name = 'conv3')(drop2)
conv3 = BatchNormalization()(conv3)
pool3 = MaxPooling2D(pool_size=(2,2), name = 'pool3')(conv3)
drop3 = Dropout(0.3, name = 'drop3')(pool3)
#the 4-th block
conv4 = Conv2D(512, kernel_size=3, activation='relu', padding='same', name = 'conv4')(drop3)
conv4 = BatchNormalization()(conv4)
pool4 = MaxPooling2D(pool_size=(2,2), name = 'pool4')(conv4)
drop4 = Dropout(0.3, name = 'drop4')(pool4)
#Flatten and FC layers
flatten = Flatten(name = 'flatten')(drop4)
fc1 = Dense(256, activation='relu', name = 'fc1')(flatten)
fc1 = BatchNormalization()(fc1)
dropfc1 = Dropout(0.3, name = 'dropfc1')(fc1)
fc2 = Dense(512, activation='relu', name = 'fc2')(dropfc1)
fc2 = BatchNormalization()(fc2)
dropfc2 = Dropout(0.3, name = 'dropfc2')(fc2)
output = Dense(num_classes, activation='softmax', name = 'output')(dropfc2)
# create model
model = Model(inputs = inp, outputs = output)
# summary layers
#print(model.summary())
return model
# + colab={"base_uri": "https://localhost:8080/"} id="w2pQ7dlScbfd" outputId="61ccd3f7-d1d7-475a-80f7-f4517496b896"
opt = Adam(lr=0.001, decay=1e-7)
model1 = FER_Model1()
model1.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
#model1.load_weights('/content/drive/MyDrive/BME/FER2020/fer1/1/Fer1_deep_1.hdf5')#add path
# + id="qxlSf5CDq19K"
def FER_Model2(input_shape=(48,48,1)):
#define model (2 CNN and 1 FC layers)
inp = Input(shape=input_shape, name='input')
num_classes = 7
#the 1-st block
conv1 = Conv2D(64, kernel_size=3, activation='relu', padding='same', name = 'conv1')(inp)
conv1 = BatchNormalization()(conv1)
pool1 = MaxPooling2D(pool_size=(2,2), name = 'pool1')(conv1)
drop1 = Dropout(0, name = 'drop1')(pool1)
#the 2-nd block
conv2 = Conv2D(128, kernel_size=3, activation='relu', padding='same', name = 'conv2')(drop1)
conv2 = BatchNormalization()(conv2)
pool2 = MaxPooling2D(pool_size=(2,2), name = 'pool2')(conv2)
drop2 = Dropout(0, name = 'drop2')(pool2)
#Flatten and FC layers
flatten = Flatten(name = 'flatten')(drop2)
fc1 = Dense(512, activation='relu', name = 'fc1')(flatten)
fc1 = BatchNormalization()(fc1)
dropfc1 = Dropout(0, name = 'dropfc1')(fc1)
output = Dense(num_classes, activation='softmax', name = 'output')(dropfc1)
# create model
model = Model(inputs = inp, outputs = output)
# summary layers
#print(model.summary())
return model
# + colab={"base_uri": "https://localhost:8080/"} id="hyAMHaFsthBq" outputId="f96f100e-5899-4cd0-cb3d-a2e8e9dc4e2c"
model2_print = FER_Model2()
# + id="9kw_qVkvUpbz"
def FER_Model3(input_shape=(64,64,1)):
inp = Input(shape=input_shape, name='input')
num_classes = 7
  #the 1st constituent layer
conv1 = Conv2D(32, kernel_size=8, activation=None, padding='same', name = 'conv1')(inp)
conv1 = BatchNormalization()(conv1)
conv1_5 = Conv2D(32, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv1_5')(conv1)
  #the 2nd constituent layer
conv2 = Conv2D(32, kernel_size=8, activation=None, padding='same', name = 'conv2')(conv1_5)
conv2 = BatchNormalization()(conv2)
conv2_5 = Conv2D(32, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv2_5')(conv2)
  #the 3rd constituent layer
conv3 = Conv2D(32, kernel_size=8, activation=None, padding='same', name = 'conv3')(conv2_5)
conv3 = BatchNormalization()(conv3)
conv3_5 = Conv2D(32, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv3_5')(conv3)
  #the 4th constituent layer
conv4 = Conv2D(16, kernel_size=8, activation=None, padding='same', name = 'conv4')(conv3_5)
conv4 = BatchNormalization()(conv4)
conv4_5 = Conv2D(16, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv4_5')(conv4)
  #the 5th constituent layer
conv5 = Conv2D(16, kernel_size=8, activation=None, padding='same', name = 'conv5')(conv4_5)
conv5 = BatchNormalization()(conv5)
conv5_5 = Conv2D(16, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv5_5')(conv5)
  #the 6th constituent layer
conv6 = Conv2D(16, kernel_size=8, activation=None, padding='same', name = 'conv6')(conv5_5)
conv6 = BatchNormalization()(conv6)
conv6_5 = Conv2D(16, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv6_5')(conv6)
  #the 7th constituent layer
conv7 = Conv2D(8, kernel_size=8, activation=None, padding='same', name = 'conv7')(conv6_5)
conv7 = BatchNormalization()(conv7)
conv7_5 = Conv2D(8, kernel_size=8, activation='relu', strides=(2, 2), padding='same', name = 'conv7_5')(conv7)
#final layers
conv8 = Conv2D(8, kernel_size=8, activation=None, padding='same', name = 'conv8')(conv7_5)
conv8 = BatchNormalization()(conv8)
conv9 = Conv2D(7, kernel_size = 7, activation = 'relu', strides=(1, 1), padding = 'same', name = 'conv9')(conv8)
output = Activation('softmax', name = 'output')(conv9)[:,0,0,:] #solution if a dimensionality error in model.fit, output would be [None 1 1 7 ] (or similar) instead of (None 7)
# create model
model = Model(inputs = inp, outputs = output)
# summary layers - COMMENT OUT for model summary
#print(model.summary())
return model
# + colab={"base_uri": "https://localhost:8080/"} id="Z58i5zk7ucu9" outputId="ef32381d-e1fc-4cd5-e4ee-f8b4c4e4d0b4"
#setting the model parameters
model = FER_Model2()
opt = Adam(lr=0.001, decay=1e-6, amsgrad=True)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="L4hDfHyvWXuF" outputId="bba9b713-0572-42ce-f7be-2b8c99b552cb"
#create model3
model = FER_Model3()
opt = Adam(lr=0.001, decay=1e-7, amsgrad=True)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# + id="Cqd923jiuhKd"
#adding callbacks: early stopping to avoid overfitting, saving the best model,
# reducing learning rate when the training does not improve the model anymore
from keras.callbacks import EarlyStopping
patience=15
early_stopping=EarlyStopping(patience=patience, verbose=1)
from keras.callbacks import ModelCheckpoint
filepath="weights_min_loss_3_ES.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_loss', verbose=1, factor=0.1, patience=10, min_lr=10e-5)
callbacks_list = [checkpoint, early_stopping, reduce_lr] #modify if needed
# + colab={"base_uri": "https://localhost:8080/"} id="tyW8erIaWRjs" outputId="f4f66596-2abc-448b-e8f0-a97e4f222f21"
#training model3
num_epochs = 120
history = model.fit(train_flow_upscaled,
epochs=num_epochs,
steps_per_epoch=len(X_train_upscaled) / batch_size,
verbose=2,
callbacks=callbacks_list,
validation_data=val_flow_upscaled,
validation_steps=len(X_val_upscaled) / batch_size)
# + colab={"base_uri": "https://localhost:8080/", "height": 610} id="nZLyDA5YutYU" outputId="2563bcf8-eaa2-4a77-9472-9fba5ed64fcc"
# visualizing losses and accuracy
# %matplotlib inline
train_loss=history.history['loss']
val_loss=history.history['val_loss']
train_acc=history.history['accuracy']
val_acc=history.history['val_accuracy']
epochs = range(len(train_acc))
plt.plot(epochs,train_loss,'r', label='train_loss')
plt.plot(epochs,val_loss,'b', label='val_loss')
plt.title('train_loss vs val_loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.figure()
plt.plot(epochs,train_acc,'r', label='train_acc')
plt.plot(epochs,val_acc,'b', label='val_acc')
plt.title('train_acc vs val_acc')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.figure()
# + colab={"base_uri": "https://localhost:8080/"} id="uwtZzoBAuuAt" outputId="e7c5424e-ba62-47bb-e633-0ec71de5a142"
#loss and accuracy on the test data
#pay attention which weights are loaded to which model
#model.load_weights('weights_min_loss_3_ES.hdf5')
loss = model.evaluate_generator(test_flow_upscaled, steps=len(X_test) / batch_size)
print("Test Loss " + str(loss[0]))
print("Test Acc: " + str(loss[1]))
# + id="oStx6qX5vIAs"
def plot_confusion_matrix(y_test, y_pred, classes,
normalize=False,
title='Unnormalized confusion matrix',
cmap=plt.cm.Blues):
cm = confusion_matrix(y_test, y_pred)
if normalize:
cm = np.round(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], 2)
np.set_printoptions(precision=2)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.min() + (cm.max() - cm.min()) / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True expression')
plt.xlabel('Predicted expression')
plt.show()
# + id="kv8O48ijvLDq" colab={"base_uri": "https://localhost:8080/"} outputId="04b827a9-6b73-41cd-a28e-72b26c773609"
#compute predictions on the test data
#pay attention whether upscaled images are needed
y_pred_ = model.predict(X_test_upscaled/255., verbose=1)
y_pred = np.argmax(y_pred_, axis=1)
t_te = np.argmax(y_test, axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 311} id="V9wlHnDgvR0e" outputId="a1acc0fa-2263-4884-ccf9-0b0cd1de38ef"
#compare target data with predicted data
fig = plot_confusion_matrix(y_test=t_te, y_pred=y_pred,
classes=classes,
normalize=True,
cmap=plt.cm.Greys,
title='Average accuracy: ' + str(np.sum(y_pred == t_te)/len(t_te)) + '\n')
# + id="Cenza3XzZkqz"
#saving model
model.save('Fer3_2_amsgradON.hdf5')
# + id="jkXsmF92IVmS"
def model_ensemble_avg(models, upscaled_models, X_test):
X_test_upscaled = upscale(X_test)
y_temp = np.zeros((len(X_test),7))
for i in range(len(models)):
y_pred_ = models[i].predict(X_test/255.)
y_temp = y_temp + y_pred_
for i in range(len(upscaled_models)):
y_pred_ = upscaled_models[i].predict(X_test_upscaled/255.)
y_temp = y_temp + y_pred_
return np.argmax(y_temp, axis=1)
# + id="ueIb_Qof-f6K"
def model_ensemble_wavg(models, upscaled_models, X_test, val_flow, val_flow_upscaled):
X_test_upscaled = upscale(X_test)
y_temp = np.zeros((len(X_test),7))
for i in range(len(models)):
y_pred_ = models[i].predict(X_test/255.)
loss = models[i].evaluate_generator(val_flow, steps=len(X_test1) / batch_size)
y_temp = y_temp + y_pred_ * loss[1];
for i in range(len(upscaled_models)):
y_pred_ = upscaled_models[i].predict(X_test_upscaled/255.)
loss = upscaled_models[i].evaluate_generator(val_flow_upscaled, steps=len(X_test1) / batch_size)
y_temp = y_temp + y_pred_ * loss[1];
return np.argmax(y_temp, axis=1)
# + id="Olp_iMXlPkc6"
def model_ensemble_wavg_exp(models, upscaled_models, X_test, val_flow, val_flow_upscaled):
X_test_upscaled = upscale(X_test)
y_temp = np.zeros((len(X_test),7))
for i in range(len(models)):
y_pred_ = models[i].predict(X_test/255.)
loss = models[i].evaluate_generator(val_flow, steps=len(X_test1) / batch_size)
y_temp = y_temp + y_pred_ * np.exp(loss[1]);
for i in range(len(upscaled_models)):
y_pred_ = upscaled_models[i].predict(X_test_upscaled/255.)
loss = upscaled_models[i].evaluate_generator(val_flow_upscaled, steps=len(X_test1) / batch_size)
y_temp = y_temp + y_pred_ * np.exp(loss[1]);
return np.argmax(y_temp, axis=1)
# + id="5SbkC0EUFovK"
def weigh_output(y_val_pred, y_test_pred, y_val):
y_val_pred = np.argmax(y_val_pred, axis=1)
cm = confusion_matrix(y_val, y_val_pred)
cm = np.round(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], 2)
weights = np.diagonal(cm)
for col in range(y_test_pred.shape[1]):
y_test_pred[:,col] = y_test_pred[:,col]*weights[col]
return y_test_pred
# + id="hbLZ_awyFF5J"
def model_ensemble_cc(models, upscaled_models, X_test, X_val, y_val):
t_te_val = np.argmax(y_val, axis=1)
X_test_upscaled = upscale(X_test)
X_val_upscaled = upscale(X_val)
y_temp = np.zeros((len(X_test),7))
for i in range(len(models)):
y_pred_val = models[i].predict(X_val/255.)
y_pred_test = models[i].predict(X_test/255.)
y_pred_ = weigh_output(y_pred_val, y_pred_test, t_te_val)
y_temp = y_temp + y_pred_
for i in range(len(upscaled_models)):
y_pred_val = upscaled_models[i].predict(X_val_upscaled/255.)
y_pred_test = upscaled_models[i].predict(X_test_upscaled/255.)
y_pred_ = weigh_output(y_pred_val, y_pred_test, t_te_val)
y_temp = y_temp + y_pred_
return np.argmax(y_temp, axis=1)
# + id="1zAq0OFGkrC9" colab={"base_uri": "https://localhost:8080/"} outputId="59f28f55-c554-44ca-807b-48a9e22fd7bd"
opt = Adam(lr=0.001, decay=1e-7)
model1_1 = FER_Model1()
model1_1.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model1_1.load_weights('/content/drive/MyDrive/BME/FER2020/fer1/1/Fer1_deep_1.hdf5')# acc 66.48%
model1_2 = FER_Model1()
model1_2.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model1_2.load_weights('/content/drive/MyDrive/BME/FER2020/fer1/2/Fer1_deep_2.hdf5')# acc 65.5%
model1_3 = FER_Model1()
model1_3.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model1_3.load_weights('/content/drive/MyDrive/BME/FER2020/fer1/3/Fer1_deep_3.hdf5')# acc 63.78%
model2_1 = FER_Model2()
model2_1.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model2_1.load_weights('/content/drive/MyDrive/BME/FER2020/fer2/1/Fer2_deep_1.hdf5')# acc 60.63%
model2_2 = FER_Model2()
model2_2.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model2_2.load_weights('/content/drive/MyDrive/BME/FER2020/fer2/2/Fer2_deep_2.hdf5')#acc 46.67% Left out
model2_3 = FER_Model2()
model2_3.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model2_3.load_weights('/content/drive/MyDrive/BME/FER2020/fer2/3/Fer2_deep_3.hdf5')#acc 60.12%
model2_4 = FER_Model2()
model2_4.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model2_4.load_weights('/content/drive/MyDrive/BME/FER2020/fer2/4/Fer2_deep_4.hdf5')#acc 57.13% Left out
models = [model1_1, model1_2, model1_3, model2_1, model2_3, model2_4 ]
model_u_1 = FER_Model3()
model_u_1.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model_u_1.load_weights('/content/drive/MyDrive/BME/FER2020/fer_upsc/1/fer3_amsgradOFF.hdf5') # acc 63.36%
model_u_2 = FER_Model3()
model_u_2.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model_u_2.load_weights('/content/drive/MyDrive/BME/FER2020/fer_upsc/2/Fer3_2_amsgradON.hdf5') # acc 63.38%
models_u = [model_u_1, model_u_2]
# + colab={"base_uri": "https://localhost:8080/", "height": 311} id="hzXT9gBEHlb7" outputId="29386752-a780-461d-941d-b0b8909994be"
y_pred_wavg = model_ensemble_wavg(models, models_u, X_test, val_flow, val_flow_upscaled)
fig = plot_confusion_matrix(y_test=t_te, y_pred=y_pred_wavg,
classes=classes,
normalize=True,
cmap=plt.cm.Greys,
title='Average accuracy: ' + str(np.sum(y_pred_wavg == t_te)/len(t_te)) + '\n')
# + colab={"base_uri": "https://localhost:8080/", "height": 311} id="ys9XIIhlPvbr" outputId="33838160-e86f-42e1-e3e2-78e4445fe167"
y_pred_wavg_exp = model_ensemble_wavg_exp(models, models_u, X_test, val_flow, val_flow_upscaled)
fig = plot_confusion_matrix(y_test=t_te, y_pred=y_pred_wavg_exp,
classes=classes,
normalize=True,
cmap=plt.cm.Greys,
title='Average accuracy: ' + str(np.sum(y_pred_wavg_exp == t_te)/len(t_te)) + '\n')
# + colab={"base_uri": "https://localhost:8080/", "height": 311} id="at0zNDKWG433" outputId="f2aea82d-92af-44f8-dd33-6806fbda78ed"
y_pred_e_cc = model_ensemble_cc(models, models_u, X_test, X_val, y_val)
fig = plot_confusion_matrix(y_test=t_te, y_pred=y_pred_e_cc,
classes=classes,
normalize=True,
cmap=plt.cm.Greys,
title='Average accuracy: ' + str(np.sum(y_pred_e_cc == t_te)/len(t_te)) + '\n')
# + colab={"base_uri": "https://localhost:8080/", "height": 311} id="kGRCU2geIkDB" outputId="043ca3a5-17cd-4739-946e-02a136e4ec7d"
y_pred_e_avg = model_ensemble_avg(models, models_u, X_test)
fig = plot_confusion_matrix(y_test=t_te, y_pred=y_pred_e_avg,
classes=classes,
normalize=True,
cmap=plt.cm.Greys,
title='Average accuracy: ' + str(np.sum(y_pred_e_avg == t_te)/len(t_te)) + '\n')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib
# Read the second dataset, which contains a number of indicators regarding women's education. It includes things like female population and enrolment rates at the primary school level.
data2 = pd.read_csv('Dataset2.csv')
# Filter the dataset by choosing relevant columns.
data2 = data2[['Country Name', 'Series', '1990 [YR1990]', '1991 [YR1991]', '1992 [YR1992]', '1993 [YR1993]', '1994 [YR1994]', '1995 [YR1995]', '1996 [YR1996]', '1997 [YR1997]', '1998 [YR1998]', '1999 [YR1999]', '2000 [YR2000]', '2001 [YR2001]', '2002 [YR2002]', '2003 [YR2003]', '2004 [YR2004]', '2005 [YR2005]', '2006 [YR2006]', '2007 [YR2007]', '2008 [YR2008]', '2009 [YR2009]', '2010 [YR2010]', '2011 [YR2011]', '2012 [YR2012]', '2013 [YR2013]', '2014 [YR2014]', '2015 [YR2015]', '2016 [YR2016]', '2017 [YR2017]', '2018 [YR2018]']]
# Drop rows in which every column value is missing
data2 = data2.dropna(how='all')
# Rename selected columns to shorter names
data2.rename(columns={'Country Name':'country', '2018 [YR2018]':'2018', '2017 [YR2017]':'2017', '2016 [YR2016]':'2016', '2015 [YR2015]':'2015', '2014 [YR2014]':'2014', '2013 [YR2013]':'2013', '2012 [YR2012]':'2012', '2011 [YR2011]':'2011', '2010 [YR2010]':'2010', '2009 [YR2009]':'2009', '2008 [YR2008]':'2008', '2007 [YR2007]':'2007', '2006 [YR2006]':'2006', '2005 [YR2005]':'2005', '2004 [YR2004]':'2004', '2003 [YR2003]':'2003', '2002 [YR2002]':'2002', '2001 [YR2001]':'2001', '2000 [YR2000]':'2000', '1999 [YR1999]':'1999', '1998 [YR1998]':'1998', '1997 [YR1997]':'1997', '1996 [YR1996]':'1996', '1995 [YR1995]':'1995', '1994 [YR1994]':'1994', '1993 [YR1993]':'1993', '1992 [YR1992]':'1992', '1991 [YR1991]':'1991', '1990 [YR1990]':'1990'}, inplace=True)
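The verbose rename map above can also be built programmatically; a minimal sketch, assuming the source columns all follow the `YYYY [YRYYYY]` pattern (the name `rename_map` is illustrative, not from the original notebook):

```python
# build {'1990 [YR1990]': '1990', ..., '2018 [YR2018]': '2018'} programmatically
rename_map = {f"{y} [YR{y}]": str(y) for y in range(1990, 2019)}
rename_map['Country Name'] = 'country'
print(len(rename_map))  # 30
```

This produces the same mapping as the literal dictionary and is easier to extend when new year columns appear.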
data2.dropna(how='all', inplace=True)
print(data2)
# Set the country column as the index of the dataset (assign back, since set_index is not in-place by default)
data2 = data2.set_index('country')
| {{ cookiecutter.repo_name }}/notebooks/Dataset2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting to Know Phonemes
#
#
# In today's world there are many areas where speech analytics is actively used:
#
# Indexing and recommending music according to genre and similar content.
#
# Similarity search on audio files - Shazam (an Apple subsidiary) is a wonderful example.
#
# Speech processing and generating artificial voice.
#
# Surveillance.
#
# In upcoming recipes, we will see how to practically implement and use text-to-speech and speech-to-text in your project pipelines. Before getting into the practical stuff, let's see how to extract features from speech signals so we can insert them into the pipeline later.
#
# Audio has a three-dimensional representation, the three dimensions being time, amplitude and frequency.
#
# 
#
# Figure: The three components of a sound wave
#
#
# The amplitude represents the power in the sound wave: a high-amplitude sound will be loud and a low-amplitude sound will be quiet. A low-frequency sound might sound like a low rumble, while a high-frequency sound might sound more like a sizzle.
#
# We have a lot of options for reading and manipulating speech data. We will be using librosa for analyzing audio and extracting its features, and PyAudio for playing audio in the IPython notebook. librosa can be installed simply with `pip install librosa`; alternatively, if that fails, you may install it into Anaconda using `conda install -c conda-forge librosa`. PyAudio can be installed with `pip install PyAudio`.
import IPython.display as ipd
import librosa
import librosa.display
import matplotlib.pyplot as plt
import sklearn
# %matplotlib inline
# **Loading an audio file:** An audio file can be loaded with the `load` function. When a file is loaded, it is decoded into a 1-dimensional time series. By default the sampling rate is 22,050 Hz (22.05 kHz).
audio_path = 'data/toing.mp3'
ipd.Audio(audio_path)
# Resampling can be disabled by passing the parameter `sr=None`. In this case the signal is decoded at its native sampling rate, which can make it very lengthy to process in a machine learning pipeline.
time_series, sampling_rate = librosa.load(audio_path)
print(type(time_series), type(sampling_rate))
plt.plot(time_series)
plt.show()
# ## Visualizing
# To display or visualize the audio signal one may use the `librosa.display` module, which supports visualization as a wave plot, a spectrogram, and a colormap. The spectrogram is an important plot that shows the amplitude and frequency of the audio at a given time; amplitude and frequency are important features of the audio signal. The wave plot shows amplitude versus time, where the Y-axis is amplitude and the X-axis is time. A spectrogram and a wave plot for an example audio file are given below.
# ### Plotting wave plot
# A wave plot looks as given below:
plt.figure(figsize=(14, 5))
librosa.display.waveplot(time_series, sampling_rate)
# ### Display Spectrogram
# A spectrogram is a graphical representation of the spectrum of frequencies of a sound. Here you can easily see how the frequencies change with respect to time.
time_series_shift = librosa.stft(time_series)
Xdb = librosa.amplitude_to_db(abs(time_series_shift))
plt.figure(figsize=(14, 5))
librosa.display.specshow(Xdb, sr=sampling_rate, x_axis='time', y_axis='hz')
plt.colorbar()
time_series, sampling_rate = librosa.load(audio_path)
#Plot the signal:
plt.figure(figsize=(14, 5))
librosa.display.waveplot(time_series, sampling_rate)
# ## Feature Extraction
# ### Spectral Centroid
# This is about finding the center of mass of the frequency distribution in the given audio. If the audio ends with higher frequencies, the spectral centroid shifts towards the end; if the audio is uniform throughout, the spectral centroid stays towards the center. The spectral centroid can be calculated as given below.
#
spectral_centroids = librosa.feature.spectral_centroid(time_series, sampling_rate)[0]
spectral_centroids.shape
# Computing the time variable for visualization
frames = range(len(spectral_centroids))
time_frame = librosa.frames_to_time(frames)
# Normalising the spectral centroid for visualisation
def normalize(time_series, axis=0):
return sklearn.preprocessing.minmax_scale(time_series, axis=axis)
#Plotting the Spectral Centroid along the waveform
librosa.display.waveplot(time_series, sr=sampling_rate, alpha=0.4)
plt.plot(time_frame, normalize(spectral_centroids), color='green')
# ### Spectral Rolloff
# The spectral rolloff is the frequency below which a given percentage of the total spectral energy lies. For example, if we set the cutoff to 85%, the rolloff is the frequency below which 85% of the spectral energy is contained.
spectral_rolloff = librosa.feature.spectral_rolloff(time_series, sampling_rate)[0]
frames = range(len(spectral_rolloff))
time_frame = librosa.frames_to_time(frames)
librosa.display.waveplot(time_series, sr=sampling_rate, alpha=0.4)
plt.plot(time_frame, normalize(spectral_rolloff), color='red')
# ### MFCC — Mel-Frequency Cepstral Coefficients
# MFCC is the most important feature while working with audio data. The MFCCs of a signal are a small set of features which concisely describe the overall shape of the spectrum. Using librosa, you can calculate MFCCs as given below.
# In the MFCC array, the first dimension is the number of coefficients and the second dimension is the number of frames.
mfccs = librosa.feature.mfcc(time_series, sampling_rate)
print("MFCC Shape :",mfccs.shape)
#Displaying the MFCCs:
librosa.display.specshow(mfccs, sr=sampling_rate, x_axis='time')
| Chapter10/story_of_phonemes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import time
PATH = r''  # path of the file whose timestamps should be changed
to_subtract = 6 * 60 * 60 + 123  # seconds to shift back (a bit over 6 hours)
EPOCH = time.time() - to_subtract
os.utime(PATH, (EPOCH, EPOCH))  # set both access and modification times
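The effect can be verified with `os.stat`; a self-contained sketch using a throwaway temporary file instead of the empty `PATH` above (the names `path`, `past` and `mtime` are illustrative):

```python
import os
import tempfile
import time

# create a scratch file so the sketch is runnable on its own;
# in the notebook above, PATH points at a real file instead
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

past = time.time() - 6 * 60 * 60   # six hours ago
os.utime(path, (past, past))       # set (atime, mtime)
mtime = os.stat(path).st_mtime     # read the modification time back
os.remove(path)
```

Reading `st_mtime` back confirms the modification time was shifted as requested.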
| change-file-mod-date.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# day6 of 30-Days-Of-Jupyter
# Today we will load data from csv file in pandas
# Reading and Writing csv file using pandas
# -
import pandas as pd
csv_path = '/30-Days-Of-Jupyter/Data-Science-With-Python/data/TechCrunchcontinentalUSA.csv'
df = pd.read_csv(csv_path)
# head() shows the first five rows of df.
df.head()
# tail() shows the last five rows of df
df.tail()
df
# the unique() method shows the distinct values of a column in the csv data
df['raisedAmt'].unique()
df['company'].unique()
# +
# we have new data and dataframe
data = { 'Name' : ['Amazon','IBM','FB'],
'Date' : ['2000','2012','2010'],
'Shares': [100,90,60],
'Price' : [12,23,13]
}
df1 = pd.DataFrame(data)
# -
# to_csv() saves the data to a csv file; it returns None when given a path
df1.to_csv('data.csv')
# df3 is a new dataframe to read the new csv file using read_csv() method
df3 = pd.read_csv('data.csv')
df3
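When `to_csv` is given a file path it returns `None`, and by default it also writes the row index, which comes back as an extra `Unnamed: 0` column on re-read. A self-contained sketch of a clean round-trip (using an in-memory buffer so no file is needed):

```python
import io

import pandas as pd

df1 = pd.DataFrame({'Name': ['Amazon', 'IBM', 'FB'],
                    'Shares': [100, 90, 60]})
buf = io.StringIO()
df1.to_csv(buf, index=False)   # index=False: don't write the row index
buf.seek(0)
df3 = pd.read_csv(buf)         # no stray 'Unnamed: 0' column on re-read
print(list(df3.columns))       # ['Name', 'Shares']
```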
| day6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: kloppy-venv
# language: python
# name: kloppy-venv
# ---
# # Broadcast Tracking Data
#
# Unlike tracking data from permanently installed camera systems in a stadium, broadcast tracking captures player locations directly from video feeds.
# As these broadcast feeds usually show only part of the pitch, this tracking data contains only partial player information for most frames and may be missing some frames entirely due to replays or close-up camera views.
# For more information about this data please go to https://github.com/SkillCorner/opendata.
#
# A note from Skillcorner:
# "if you use the data, we kindly ask that you credit SkillCorner and hope you'll notify us on Twitter so we can follow the great work being done with this data."
#
# Available Matches in the Skillcorner Opendata Repository:
#
# "ID: 4039 - Manchester City vs Liverpool on 2020-07-02"
# "ID: 3749 - Dortmund vs Bayern Munchen on 2020-05-26"
# "ID: 3518 - Juventus vs Inter on 2020-03-08"
# "ID: 3442 - Real Madrid vs FC Barcelona on 2020-03-01"
# "ID: 2841 - FC Barcelona vs Real Madrid on 2019-12-18"
# "ID: 2440 - Liverpool vs Manchester City on 2019-11-10"
# "ID: 2417 - Bayern Munchen vs Dortmund on 2019-11-09"
# "ID: 2269 - Paris vs Marseille on 2019-10-27"
# "ID: 2068 - Inter vs Juventus on 2019-10-06"
#
# Metadata is available for this data.
#
# ## Loading Skillcorner data
#
# +
from kloppy import skillcorner
# there is one example match for testing purposes in kloppy that we use here
# for other matches change the filenames to the location of your downloaded skillcorner opendata files
matchdata_file = '../../kloppy/tests/files/skillcorner_match_data.json'
tracking_file = '../../kloppy/tests/files/skillcorner_structured_data.json'
dataset = skillcorner.load(meta_data=matchdata_file,
raw_data=tracking_file,
limit=100)
df = dataset.to_pandas()
# -
# ## Exploring the data
#
# When you want to show the name of a player you are advised to use `str(player)`. This will call the magic `__str__` method that handles fallbacks for missing data. By default it will return `full_name`, and fallback to 1) `first_name last_name` 2) `player_id`.
# +
metadata = dataset.metadata
home_team, away_team = metadata.teams
[f"{player} ({player.jersey_no})" for player in home_team.players]
# -
print(f"{home_team.ground} - {home_team}")
print(f"{away_team.ground} - {away_team}")
# ## Working with tracking data
#
# The actual tracking data is available at `dataset.frames`. This list holds all frames. Each frame has a `players_coordinates` dictionary that is indexed by `Player` entities and has values of the `Point` type.
#
# Identities of players are not always specified. In that case only the team affiliation is known and a track_id that is part of the player_id is used to identify the same (unknown) player across multiple frames.
# +
first_frame = dataset.frames[88]
print(f"Number of players in the frame: {len(first_frame.players_coordinates)}")
from pprint import pprint
print("List home team players coordinates")
pprint([
(player.player_id, player_coordinates)
for player, player_coordinates
in first_frame.players_coordinates.items()
if player.team == home_team
])
# -
df.head()
| docs/examples/broadcast_tracking_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # **CRISP-DM OPERATION ON AIRBNB DATASET**
#
# THIS NOTEBOOK SEEKS TO PERFORM A CRISP-DM OPERATION ON AIRBNB DATA TO DEMONSTRATE HOW DATA SCIENCE TECHNIQUES CAN BE USED TO GAIN INSIGHTS INTO DATA AND ARRIVE AT A LOGICAL CONCLUSION
#
# CRISP-DM (Cross-Industry Standard Process for Data Mining)
#
# * Business Understanding
#
# * Data Understanding
#
# * Data Preparation
#
# * Modeling
#
# * Evaluation
#
# * Deployment
# ## **BUSINESS UNDERSTANDING**
#
# *Airbnb, Inc. is a vacation rental online marketplace company that offers arrangements for lodging, primarily homestays, and tourism.
# The Airbnb revenue model runs on listings and stays. Airbnb offers a platform where these listings and bookings are made, and this is where Airbnb earns its revenue*.
#
# Two major sources of revenue of AirBnB
#
# **Commission from hosts:**
# Every time someone chooses a host's property and makes a payment, Airbnb takes 10% of the payment amount as commission. This is one of the components of the Airbnb fee structure.
#
# **Transaction fee from travelers:**
# When travelers make a payment for a stay, they are charged a 3% transaction fee. This amount adds to Airbnb's revenue.
#
# ref:https://appinventiv.com/blog/airbnbs-business-model-and-revenue-source/#:~:text=Airbnb%20offers%20a%20platform%20where,the%20payment%20amount%20as%20commission
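As a quick worked example of the two revenue streams described above (all figures are hypothetical):

```python
# hypothetical booking to illustrate Airbnb's two revenue streams
booking_total = 200.00                  # what the traveler pays for the stay
host_commission = 0.10 * booking_total  # 10% commission taken from the host
traveler_fee = 0.03 * booking_total     # 3% transaction fee charged to the traveler
airbnb_revenue = host_commission + traveler_fee
print(airbnb_revenue)  # 26.0
```

So on a $200 booking, Airbnb would collect about 13% in total between the two fees.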
#
#
#
# ### **So how can a new host have a good start on Airbnb?**
#
# The more often someone chooses a host's property and pays for it, the more income/revenue for the host
#
# 1. What property types are most people interested in?
#
# 2. What are the top 10 characteristics of a listing that influence its price?
#
# 3. When is the best time to host a listing for booking?
#
#
#
#
#
#
#
#
# ## **DATA UNDERSTANDING**
#
#
# * **listings**: including full descriptions and average review score
#
# * **calendar**: including listing id and the price and availability for that day
#
# * **reviews** : including unique id for each reviewer and detailed comments
#
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# -
# ## GATHER
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
listing_df = pd.read_csv('/kaggle/input/seattle/listings.csv')
calender_df = pd.read_csv('/kaggle/input/seattle/calendar.csv')
reviews_df = pd.read_csv('/kaggle/input/seattle/reviews.csv')
# -
# Now let's take a look at each data
# Listing Dataset
listing_df.head()
# ## ASSESS
listing_df.columns
# the shape of listing dataset : 3818 rows and 92 columns
listing_df.shape
# check columns with no null values (these are the features that are present for every listing)
listing_df.columns[listing_df.isnull().sum() == 0]
print("listing dataset has {} columns with non null values".format(len(listing_df.columns[listing_df.isnull().sum() == 0])))
# +
# check the dtype of the columns which are objects and floats
no_object_columns=len(listing_df.select_dtypes(exclude ='float64').columns)
no_float_columns=len(listing_df.select_dtypes(exclude ='object').columns)
print("there are {} number of float columns as well as {} number of object dtype columns"\
.format(no_float_columns,no_object_columns))
# -
# Questions to answer on this dataset?
#
# * Which features contribute to higher or more positive reviews? This can give us an idea of what people really like and what hosts should consider before listing.
# There are so many columns in this dataset, but we only want to consider the features of the listings
# +
# features to consider
features = ['property_type', 'room_type', 'accommodates',
'bathrooms', 'bedrooms', 'beds', 'bed_type', 'amenities','guests_included', 'minimum_nights',
'maximum_nights', 'has_availability',
'availability_30', 'availability_60', 'availability_90',
'availability_365']
label = ['review_scores_rating']
# -
listing_df[['review_scores_rating','review_scores_accuracy']].describe().reset_index()
# The minimum rating here is 20, and it looks like most listings got good ratings from the 25th to the 75th percentile.
#
# the review score rating is consistent with the review score accuracy
#
# so we can use review score rating as a label for finding features in a listings that had very good rating
#
# The next question we want to ask of this dataset is: what type of listing generates the highest income?
#
# To get this, we have to get all listings that have been rated, which means a traveler has booked the property and paid the host for it.
#
# Then query the price for that listing and get the listings with highest income generated
#
# Note that Airbnb charges the host a 10% listing commission and the traveler a 3% transaction fee, but in this analysis we will not be adding that.
#
# **Our focus now is to get listings that people or travelers really booked and how much they paid for it and get the highest**
#
#
# +
# we would be using the listing dataset together with the reviews dataset
# first select listing id's from the reviews dataset that had a review( meaning a traveler booked the listing)
# get booked listings, i.e. listings that have been reviewed
reviwed_listings = listing_df[listing_df.id.isin(reviews_df.listing_id)]
# how many listings were actually reviewed
reviwed_listings.shape[0]
# +
# to confirm the number, let's look at the listings that weren't reviewed and check their review score
non_reviwed_listings = listing_df[~listing_df.id.isin(reviews_df.listing_id)]
non_reviwed_listings.review_scores_rating.unique()[0]
# -
# Non-reviewed listings have no review score
#
# Now lets focus on reviewed listings
# +
# check how many unique values we have for each column
reviwed_listings.describe()
for col in reviwed_listings.columns:
unique = len(reviwed_listings[col].unique())
print(col, "unique({})".format(unique) )
# +
# get rid of columns with only one unique value, as they will not help us in our analysis
unique_columns = reviwed_listings.columns[reviwed_listings.nunique() > 1]
reviwed_listings = reviwed_listings[unique_columns]
reviwed_listings.columns
# -
# We will not be considering all the columns in this dataset
#
# We just want to pick the main features of a listing and see how it relates to review and pricing
# +
feature_columns = ['city', 'zipcode', 'smart_location',
'latitude', 'longitude', 'is_location_exact', 'property_type',
'room_type', 'accommodates', 'bathrooms', 'bedrooms', 'beds',
'bed_type', 'amenities', 'square_feet', 'price', 'weekly_price',
'monthly_price', 'security_deposit', 'cleaning_fee', 'guests_included',
'extra_people', 'minimum_nights', 'maximum_nights', 'calendar_updated',
'availability_30', 'availability_60', 'availability_90',
'availability_365', 'number_of_reviews', 'first_review', 'last_review',
'review_scores_rating', 'review_scores_accuracy' ]
reviwed_listings = reviwed_listings[feature_columns]
# -
# Let's see the correlation of these features
# ## VISUALIZE
# +
f, ax = plt.subplots(figsize=(10, 8))
sns.heatmap(reviwed_listings.corr(), annot=True, fmt=".1f");
# -
# Numerical Features with strong correlation:
# * bedrooms
# * bathrooms
# * accommodates
# * beds
# * review_score_rating
# * review_score_accuracy
# * square_feet
# * guest_included
#
#
# +
num_features = ['accommodates', 'bathrooms', 'bedrooms', 'beds', 'square_feet',
'guests_included', 'availability_30', 'availability_60', 'availability_90',
'availability_365','review_scores_rating', 'review_scores_accuracy']
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
# select numeric columns we are interested in
numberic_df = reviwed_listings[num_features]
# select categorical data
categorical_df = reviwed_listings[reviwed_listings.select_dtypes(exclude=numerics).columns ]
# add them together
new_listing_df = pd.concat([categorical_df,numberic_df], axis=1)
new_listing_df.head()
# -
# Lets now get more insight into the data
#
#
# ## ANALYZE
plt_data = new_listing_df.groupby(['accommodates', 'bathrooms', 'bedrooms', 'beds'])['review_scores_rating'].sum().reset_index()
plt_data
# ## VISUALIZE
f, ax = plt.subplots(figsize=(8, 6))
sns.lineplot( y="review_scores_rating", x= "accommodates", data=plt_data);
# The lower the number of people a listing accommodates, the higher the total review score
#
# We can also see that there are NaN values in the data
f, ax = plt.subplots(figsize=(8, 6))
sns.lineplot( y="review_scores_rating", x= "beds", data=plt_data);
# The same is true here: the lower the listing's bed count, the higher the review score
#
# Let's take a look at the bedrooms
# +
f, ax = plt.subplots(figsize=(8, 6))
sns.lineplot( y="review_scores_rating", x= "bedrooms", data=plt_data);
# -
# Same here. It looks like most people are interested in listings with fewer bedrooms, beds, bathrooms and a smaller accommodation number.
#
# This shows that most people book homes for only themselves or just two or three people.
#
#
# Now lets see which property type had the highest sum of review score
#
#
# ## ANALYZE
prop_data = new_listing_df.groupby(['property_type'], sort=True)['review_scores_rating'].sum().reset_index()
prop_data
# ## VISUALIZE
# +
f, ax = plt.subplots(figsize=(10, 8))
sns.barplot(data=prop_data, x="review_scores_rating", y="property_type", orient="h");
plt.title('What Property Type are Most Travelers Interested In?')
plt.xlabel('People')
plt.ylabel('Property Type')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# -
# House and Apartment were the property types with the highest sums of review scores
#
# This tells us that people are interested in both the house and apartment property types
# Next we want to find out which listing types generated the most income, i.e. which ones most people paid for
new_listing_df.price = new_listing_df.price.replace(r'[\$,]', '', regex=True).astype(float)
# ## ANALYZE
# +
price_df = new_listing_df.groupby(['property_type'],
sort=True)['price'].sum().reset_index()
price_df
# -
# ## VISUALIZE
# +
f, ax = plt.subplots(figsize=(8, 6))
sns.barplot(data=price_df, x="price", y="property_type", orient="h");
# -
# The interest drives income as we can see from the chart
# +
# lets take a look at the distribution of price
new_listing_df.price.hist(bins=10)
# +
# add log filter to the price and visualize
def plot_loghist(x, bins):
'''
INPUT:
x: the array list or data to plot
bins: an integer value indicating the number of bins for the histogram
OUTPUT:
Return a logged scaled histogram
'''
hist, bins = np.histogram(x, bins=bins)
logbins = np.logspace(np.log10(bins[0]),np.log10(bins[-1]),len(bins))
plt.hist(x, bins=logbins)
plt.xscale('log')
plot_loghist(new_listing_df.price, 10)
# -
# the distribution looks right skewed
# ## ANALYZE
# +
# the mean price for all listings
new_listing_df.price.mean()
# -
#
#
# Checking out the calendar dataset
# ## ASSESS
calender_df.head()
# the shape of calender dataset : 1393570 rows and 4 columns
calender_df.shape
# check columns with not null values
calender_df.columns[calender_df.isnull().sum() == 0]
# +
# check the dtypes
calender_df.dtypes
# -
# Questions to ask on this dataset
#
# what do the *f and t* values in the available column stand for, and how do they affect other columns?
#
# which dates or times have the highest prices?
#
# what is the trend in pricing?
# perhaps t stands for true and f for false
calender_df['available'].unique()
# how does the available column relate to the price column?
#
# as we have seen, the available column does not contain any null values, whereas the price column does have some null values
# +
# select rows with f[false] availability values and check their label for price
calender_df[calender_df['available'] == 'f']['price'].unique()
# -
# select rows with t[true] availability values and check their label for price
calender_df[calender_df['available'] == 't']['price'].unique()
# we see that listings with availability set to t[True] are the ones with a price specified; all the rest with availability set to f[False] have NaN values in the price column.
#
# This obviously indicates the dates on which people can book and the price to pay for the accommodation
# What is the seasonality in the pricing?
#
#
#
# ## ANALYZE
# +
# get data for listings which are available for booking
# (.copy() avoids SettingWithCopyWarning on the column assignments below)
available_for_listing = calender_df[calender_df['available'] == 't'].copy()
# convert date object to datetime for plot
available_for_listing['date'] = available_for_listing['date'].apply(pd.to_datetime)
# convert price column to int
available_for_listing['price']=available_for_listing['price'].apply(lambda x: float(str(x).replace('$','').replace(',','') ))
# -
available_for_listing.head()
# ## VISUALIZE
#Plot the responses for different events and regions
plt.title('Plot of price against time')
sns.lineplot(data= available_for_listing, x="date", y="price");
# We see there is a trend in the price of listings for booking for the years 2016 to 2017.
#
# But there is a lot of noise in the data, so let's clean or smooth it
#
# Now let's aggregate and smooth out the line on the chart
# ## ANALYZE
# +
# aggregate all price in a month
grouper = pd.Grouper(key='date', freq='M')
result = available_for_listing.groupby(grouper)['price'].sum().reset_index()
result
# -
# ## VISUALIZE
plt.title('Monthly Plot of price against time')
sns.lineplot(data= result, x="date", y="price");
# This looks better.
# As we can see, prices of listings begin to rise from January to May, barely level off, then start to climb from September and peak in November/December,
# before falling very low in January 2017.
#
# The peak in November/December indicates that most hosts put their property up for booking then, because a lot of people travel during this time
#
# ## ANALYZE
# let's see if this is true
travel_date=[]
tavel_price=[]
for year in result.date.dt.year.unique():
year_value= result[result.date.dt.year == year]
max_value = year_value.price.max()
year_date =result[result.price == max_value].date.dt.date.values[0]
travel_date.append(year_date)
tavel_price.append(max_value)
print(year, year_date, max_value)
# December and January saw the highest total prices that hosts put on their listings for the year 2016
# Review dataset
reviews_df.head()
# the shape of reviews dataset : 84849 rows and 6 columns
reviews_df.shape
# check columns with not null values
reviews_df.columns[reviews_df.isnull().sum() == 0]
# Questions to ask about the reviews data
#
#
# when do most people review listings? This can give us an indication of when people travel a lot
#
#
# ## VISUALIZE
# +
reviews_df.date= pd.to_datetime(reviews_df.date)
grouper = pd.Grouper(key='date', freq='M')
review_new_df = reviews_df.groupby(grouper)['id'].count().reset_index()
# plot the number of reviews over time
plt.figure(figsize=(15, 8))
plt.plot(review_new_df.date, review_new_df.id, color='b', linewidth=1.9)
plt.title("What is the pattern of travel from 2010 to 2016?")
plt.xlabel('date')
plt.ylabel('People')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# -
# The number of people who leave reviews seems to climb from June, but we are not so sure
#
# we have to take a deeper look into that
# ## ANALYZE
review_new_df.date.dt.year.unique()
# +
# get the dates each year with the highest number of reviews
travel_date=[]
tavel_value=[]
for year in review_new_df.date.dt.year.unique():
year_value= review_new_df[review_new_df.date.dt.year == year]
max_value = year_value.id.max()
year_date =review_new_df[review_new_df.id == max_value].date.dt.date.values[0]
travel_date.append(year_date)
tavel_value.append(max_value)
print(year, year_date, max_value)
# -
# ## VISUALIZE
# +
f, ax = plt.subplots(figsize=(10, 8))
sns.barplot( x=travel_date, y=tavel_value);
plt.title("When do most people travel?")
plt.xlabel('date')
plt.ylabel('People')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# -
# This looks good: September seems to receive the most reviews, which indicates when most people travel during the year.
#
# So our question is answered
#
# **When do most people travel?** During the month of September
# # DATA PREPARATION
#
# **PROCESSES INVOLVED**
#
# * removing or imputing null values
# * checking and removing features with high correlation
# * choosing which features to use in our model
# * feature engineering
# * converting categorical variables to numerical (one-hot encoding)
#
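The last preparation step, one-hot encoding, is typically done with `pd.get_dummies`; a minimal sketch with hypothetical listing data (column names mirror the notebook's schema, but the values are made up):

```python
import pandas as pd

# hypothetical listing data to show how one categorical column expands into dummies
df = pd.DataFrame({'room_type': ['Entire home', 'Private room', 'Entire home'],
                   'price': [120.0, 60.0, 150.0]})
dummies = pd.get_dummies(df, columns=['room_type'], dummy_na=False)
print(sorted(dummies.columns))  # ['price', 'room_type_Entire home', 'room_type_Private room']
```

Each categorical column is replaced by one indicator column per category; numeric columns such as `price` pass through unchanged.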
# ## ASSESS
# +
# count the percentage of null values in each column
for col in new_listing_df.columns:
percen_null = new_listing_df[col].isnull().sum()/ new_listing_df.shape[0]
print(col, "percent_null({})".format(percen_null) )
# -
# ## CLEAN
# remove square_feet column as this has almost all its rows null
new_listing_df=new_listing_df.drop(columns=['square_feet'])
# +
# drop columns that will not be needed
drop_column_list=['city','zipcode','smart_location','is_location_exact']
new_listing_df=new_listing_df.drop(columns=drop_column_list)
new_listing_df.head()
# -
# ## VISUALIZE
#
# we want to see which central tendency measure is best for filling the null values in the bedroom and bathroom columns
# +
sns.boxplot(x=new_listing_df.bedrooms,orient="h")
# -
# For bedrooms, we see that a few listings have a very high number of bedrooms. These can be outliers, and hence using the mean to fill missing values would not help our analysis.
#
# the bedrooms data is right-skewed, and we see that a great number of values fall between 1 and 2.
#
# so the mode is the best pick for filling missing values
sns.boxplot(x=new_listing_df.bathrooms,orient="v")
plt.hist(new_listing_df.bathrooms)
plt.ylabel('count');
plt.xlabel('bathrooms');
# from the plots for the bathrooms column, we can see that the most frequent value is 1 and that very few listings have a high number of bathrooms. Therefore the mode is also a good pick for filling the missing values
# ## CLEAN
# +
# bathrooms (replace null values with the mode)
# bedrooms (replace null values with the mode)
# note: Series.mode() returns a Series, so take its first element to fill with a scalar,
# and assign the result back so that the fill actually persists
fill_mode = lambda col: col.fillna(col.mode()[0])
new_listing_df[['bathrooms', 'bedrooms']] = new_listing_df[['bathrooms', 'bedrooms']].apply(fill_mode, axis=0)
# -
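A caveat worth illustrating: `Series.mode()` returns a Series, and `fillna` with a Series aligns on the index, so passing the mode directly usually fills almost nothing; indexing its first element gives the scalar fill value that is actually wanted. A toy sketch with made-up data:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 1.0, 2.0, np.nan])
# s.mode() is a Series ([1.0]); index its first element to get a scalar fill value
filled = s.fillna(s.mode()[0])
print(filled.tolist())  # [1.0, 1.0, 1.0, 2.0, 1.0]
```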
new_listing_df.head()
# +
# remove dollar sign from prices
dollar_column = ['weekly_price','monthly_price','security_deposit','cleaning_fee','extra_people']
remove_dollar = lambda col: col.replace(r'[\$,]', '', regex=True).astype(float)
new_listing_df[dollar_column] = new_listing_df[dollar_column].apply(remove_dollar)
# -
# ## VISUALIZE
# We want to consider the distribution of the dollar columns and decide which central tendency measure to fill na values
sns.boxplot(x=new_listing_df.weekly_price)
sns.boxplot(x=new_listing_df.monthly_price)
sns.boxplot(x=new_listing_df.security_deposit)
sns.boxplot(x=new_listing_df.cleaning_fee)
sns.boxplot(x=new_listing_df.extra_people)
# From the distributions, all price columns are right skewed.
#
# All of them contain outliers, that is, very few listings have large price values.
#
# Hence using the mean would not help our analysis.
#
# We can use the mode to fill null values.
# fill price columns with the mode (first mode value)
fill_mode_price = lambda col: col.fillna(col.mode()[0])
new_listing_df[dollar_column] = new_listing_df[dollar_column].apply(fill_mode_price)
new_listing_df.head()
# +
# check how each dollar column correlates with price
sns.pairplot(new_listing_df[dollar_column +['price']])
# -
# weekly and monthly price correlate positively with price
# ## CLEAN
# remove all other price columns: their strong positive correlation with price
# would leak target information into the model, so dropping them ensures the model
# fits the right column
new_listing_df=new_listing_df.drop(columns=['weekly_price','monthly_price','security_deposit','extra_people','cleaning_fee'])
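# The leakage argument can be illustrated on a hypothetical frame: a column that is a near-deterministic function of the target shows a correlation close to 1, which flags it for removal.

```python
import pandas as pd

# hypothetical data: weekly_price is an exact multiple of price,
# so keeping it would leak the target into the features
df = pd.DataFrame({'price': [100, 200, 300, 400],
                   'weekly_price': [650, 1300, 1950, 2600],
                   'bedrooms': [1, 3, 2, 4]})

# Pearson correlation of every other column with the target
corr_with_price = df.corr()['price'].drop('price')
print(corr_with_price)
```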
# Get categorical columns and convert them
# select categorical data
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
categorical_columns = new_listing_df.select_dtypes(exclude=numerics).columns
categorical_columns
# drop date and availability columns
new_listing_df=new_listing_df.drop(columns=['calendar_updated','first_review','last_review',
'availability_30', 'availability_60', 'availability_90','availability_365'])
def create_dummy_df(df, cat_cols, dummy_na):
'''
INPUT:
df - pandas dataframe with categorical variables you want to dummy
cat_cols - list of strings that are associated with names of the categorical columns
dummy_na - Bool holding whether you want to dummy NA vals of categorical columns or not
OUTPUT:
df - a new dataframe that has the following characteristics:
1. contains all columns that were not specified as categorical
2. removes all the original columns in cat_cols
3. dummy columns for each of the categorical columns in cat_cols
4. if dummy_na is True - it also contains dummy columns for the NaN values
5. Use a prefix of the column name with an underscore (_) for separating
'''
for col in cat_cols:
try:
# for each cat add dummy var, drop original column
df = pd.concat([df.drop(col, axis=1), pd.get_dummies(df[col], prefix=col, prefix_sep='_', drop_first=True, dummy_na=dummy_na)], axis=1)
        except Exception:
continue
return df
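# The behaviour of the `pd.get_dummies` call inside `create_dummy_df` can be seen on a toy frame with hypothetical values; with `drop_first=True` the alphabetically first category becomes the implicit baseline.

```python
import pandas as pd

# hypothetical toy frame with one categorical column
toy = pd.DataFrame({'room_type': ['Private room', 'Shared room', 'Entire home'],
                    'price': [50, 30, 120]})

# same call pattern as create_dummy_df: dummy the column, drop the original
dummies = pd.get_dummies(toy['room_type'], prefix='room_type',
                         prefix_sep='_', drop_first=True)
out = pd.concat([toy.drop('room_type', axis=1), dummies], axis=1)

print(list(out.columns))
```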
# +
col_list= ['is_location_exact', 'property_type', 'room_type', 'bed_type']
df_new = create_dummy_df(new_listing_df,col_list, dummy_na=False)
print(df_new.shape)
# -
# create categorical variable for the amenity column
#
# +
# create categorical columns for amenities
# convert amenities into list for each row
def get_amen_unique(df):
'''
INPUT:
    df - a dataframe holding an amenities column with values
OUTPUT:
list_amenities - a list of unique amenities in the amenities column of the dataframe
'''
list_amenities =[]
    for row in df.amenities.items():  # use the df parameter; iteritems() was removed in pandas 2.0
item = row[1].replace('{', '').replace('}', '').replace('"','')
for amen in item.split(','):
list_amenities.append(amen)
return set(list_amenities)
unique_amen=get_amen_unique(df_new)
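# The string surgery inside `get_amen_unique` is easier to see on a single hypothetical raw cell in the column's '{...}' format:

```python
# hypothetical raw cell in the amenities column's '{...}' format
raw = '{TV,"Wireless Internet",Kitchen}'

# strip braces and quotes, then split on commas, as in get_amen_unique
item = raw.replace('{', '').replace('}', '').replace('"', '')
amenities = set(item.split(','))

print(amenities)
```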
# +
def amen_categorical(df,col_list):
'''
INPUT:
df - the dataframe containing amenities column to convert to categorical
col_list - the list of unique amenities in the amenities column of the dataframe
OUTPUT:
df - a new dataframe with categorical one-hot-encoded columns from the amenities column
'''
    # create a column for each amenity type; use the col_list parameter
    # rather than the global unique_amen
    for new_col in col_list:
        if new_col != '':
            df['amen_' + new_col] = None
            for row in df.amenities.items():  # iteritems() was removed in pandas 2.0
                item = row[1].replace('{', '').replace('}', '').replace('"', '')
                amenities = item.split(',')
                index = row[0]
                if new_col in amenities:
                    df.at[index, 'amen_' + new_col] = 1
                else:
                    df.at[index, 'amen_' + new_col] = 0
# remove original amenities column
df = df.drop(columns=['amenities'])
return df
# -
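# As a design note, the row loop above can be replaced by a vectorized sketch using `Series.str.get_dummies`, which splits on a separator and one-hot encodes in a single call (shown here on hypothetical data):

```python
import pandas as pd

# hypothetical amenities column in the same '{...}' format
df = pd.DataFrame({'amenities': ['{TV,"Wireless Internet"}', '{Kitchen,TV}']})

# strip braces and quotes, then one-hot encode the comma-separated tags
amen = (df['amenities']
        .str.strip('{}')
        .str.replace('"', '', regex=False)
        .str.get_dummies(sep=',')
        .add_prefix('amen_'))

print(list(amen.columns))
```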
df_new=amen_categorical(df_new,unique_amen)
# drop review score columns that will not be needed
df_new=df_new.drop(columns=['review_scores_rating', 'review_scores_accuracy'])
print(df_new.shape)
df_new.head()
# # MODELLING
# We need to know which characteristics or features influence the pricing of listings.
#
# These characteristics range from amenities to bed types, etc., in the dataset.
#
# To find out, we fit a model on our features to predict the price column.
#
# We then evaluate the model to find out which features it thinks influence price the most.
#
# This can then answer the question: what are the top 10 characteristics of a listing that influence the price of listings?
# **PROCEDURE**
#
# * We choose our response variable (target), which is the price column
#
# * Split data into training and test sets
#
# * We fit our data using a LinearRegression model
#
# * We use the r2_score for assessing our model
#
# * We track the training and test scores
#
#
#
# +
def clean_fit_linear_mod(df, response_col,test_size=.3, rand_state=42):
'''
INPUT:
df - a dataframe holding all the variables of interest
response_col - a string holding the name of the column
test_size - a float between [0,1] about what proportion of data should be in the test dataset
rand_state - an int that is provided as the random state for splitting the data into training and test
OUTPUT:
test_score - float - r2 score on the test data
    train_score - float - r2 score on the training data
lm_model - model object from sklearn
X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
'''
#Drop the rows with missing response values
df = df.dropna(subset=[response_col], axis=0)
#Drop columns with all NaN values
df = df.dropna(how='all', axis=1)
# Mean function
fill_mean = lambda col: col.fillna(col.mean())
# Fill the mean
df = df.apply(fill_mean, axis=0)
# shuffle data
df = shuffle(df)
#Split into explanatory and response variables
X = df.drop(response_col, axis=1)
y = df[response_col]
#Split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=rand_state)
    # the `normalize` argument was removed in scikit-learn 1.2;
    # scale features beforehand (e.g. with StandardScaler) if normalization is needed
    lm_model = LinearRegression() # Instantiate
lm_model.fit(X_train, y_train) #Fit
#Predict using your model
y_test_preds = lm_model.predict(X_test)
y_train_preds = lm_model.predict(X_train)
#Score using your model
test_score = r2_score(y_test, y_test_preds)
train_score = r2_score(y_train, y_train_preds)
return test_score, train_score, lm_model, X_train, X_test, y_train, y_test
#Test your function with the above dataset
test_score, train_score, lm_model, X_train, X_test, y_train, y_test = clean_fit_linear_mod(df_new, 'price')
# -
#Print training and testing score
print("The rsquared on the training data was {}. The rsquared on the test data was {}.".format(train_score, test_score))
# # EVALUATION
# Here, we want to evaluate our model on what it thinks pushes the price of a listing in either direction.
#
# By examining the model coefficients, we can see how much each feature in our model affects pricing.
#
# +
def coef_weights(coefficients, X_train):
'''
INPUT:
coefficients - the coefficients of the linear model
X_train - the training data, so the column names can be used
OUTPUT:
coefs_df - a dataframe holding the coefficient, estimate, and abs(estimate)
Provides a dataframe that can be used to understand the most influential coefficients
in a linear model by providing the coefficient estimates along with the name of the
variable attached to the coefficient.
'''
coefs_df = pd.DataFrame()
coefs_df['est_int'] = X_train.columns
    # use the coefficients parameter rather than the global lm_model
    coefs_df['coefs'] = coefficients
    coefs_df['abs_coefs'] = np.abs(coefficients)
coefs_df = coefs_df.sort_values('abs_coefs', ascending=False)
return coefs_df
#Use the function
coef_df = coef_weights(lm_model.coef_, X_train)
#A quick look at the top results
coef_df.head(10)
# -
# ## VISUALIZE
# +
f, ax = plt.subplots(figsize=(10, 8))
# which property type influenced price the most
property_type = coef_df[coef_df.est_int.str.startswith('property_type')].copy()  # copy avoids SettingWithCopyWarning
property_type['est_int'] = property_type['est_int'].str.replace('property_type_','')
sns.barplot(data=property_type, x="abs_coefs", y="est_int", orient="h");
plt.title('What Property Type Had a High Influence on the Price of a Listing?')
plt.xlabel('Absolute Coefficient')
plt.ylabel('Property Type')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# -
# As we can see, Boat has a very high influence on price compared with the other property types
def show_values_on_bars(axs):
'''
INPUT:
    axs - the axis (or array of axes) on which to draw value labels
OUTPUT:
bar plot with text on bar
'''
def _show_on_single_plot(ax):
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
value = '{:.2f}'.format(p.get_height())
ax.text(_x, _y, value, ha="center")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
# +
f, ax = plt.subplots(figsize=(8, 6))
# which room type influenced price the most
room_type = coef_df[coef_df.est_int.str.startswith('room_type')].copy()  # copy avoids SettingWithCopyWarning
room_type['est_int'] = room_type['est_int'].str.replace('room_type_','')
g=sns.barplot(data=room_type, x="est_int", y="abs_coefs");
plt.title('What Room Type Had a High Influence on the Price of a Listing?')
plt.xlabel('Room Type')
plt.ylabel('Absolute Coefficient')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
show_values_on_bars(ax)
# -
# Shared room has a high influence on the price
# +
# which amenity had a high influence on the price
f, ax = plt.subplots(figsize=(14, 10))
amenity_type = coef_df[coef_df.est_int.str.startswith('amen')].copy()  # copy avoids SettingWithCopyWarning
amenity_type['est_int'] = amenity_type['est_int'].str.replace('amen_','')
sns.barplot(data=amenity_type, x="abs_coefs", y="est_int", orient="h");
plt.title('What Amenity Type Had a High Influence on the Price of a Listing?')
plt.xlabel('Absolute Coefficient')
plt.ylabel('Amenity Type')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# -
# Doorman and Elevator had a high influence on price
# +
# which bed type had a high influence on the price
f, ax = plt.subplots(figsize=(10, 6))
bed_type = coef_df[coef_df.est_int.str.startswith('bed_type')].copy()  # copy avoids SettingWithCopyWarning
bed_type['est_int'] = bed_type['est_int'].str.replace('bed_type_','')
sns.barplot(data=bed_type, x="est_int", y="abs_coefs");
plt.title('What Bed Type Had a High Influence on the Price of a Listing?')
plt.xlabel('Bed Type')
plt.ylabel('Absolute Coefficient')
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
show_values_on_bars(ax)
# -
# Futon looks to be the strongest influence on price, while Couch ranks lowest
# # Summary
#
# This notebook uses Airbnb data from the Seattle area, analyzed to answer the following questions.
#
#
# * Which property type are most people interested in?
#
# From the histogram showing property type against the number of reviews, House and Apartment are what most people are interested in.
#
# * What are the top 10 characteristics of listing that influence the price of listings?
#
# From the statistical inference of the model, the property types Boat, Dorm, Loft, and Treehouse greatly influenced the price of listings. Again, the Shared room type carried a higher price tag than Private room. Then, the Doorman and Washer amenities really spiked the prices of listings. We also saw that a higher number of bedrooms increases the price of a listing.
#
#
#
# * When is the best time to host a listing for booking?
#
# There is a busy season, and the timeline shows a definite year-over-year increase in the number of people who travel. We found out that most of these travelers like to travel in September.
# So the best time to host a listing for booking is September.
#
#
#
#