| code | repo_path |
|---|---|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="1SgrstLXNbG_"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" id="k7gifg92NbG9"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="dCMqzy7BNbG9"
# # DeepDream
# + [markdown] id="2yqCPS8SNbG8"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/generative/deepdream"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />TensorFlow.org์์ ๋ณด๊ธฐ</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/generative/deepdream.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />๊ตฌ๊ธ ์ฝ๋ฉ(Colab)์์ ์คํํ๊ธฐ</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/generative/deepdream.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />๊นํ๋ธ(GitHub) ์์ค ๋ณด๊ธฐ</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/generative/deepdream.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="yJ-t14sot8iX"
# Note: This document was translated by the TensorFlow community. Community translations are best-effort, so there is
# no guarantee that they exactly match the contents of the [official English documentation](https://www.tensorflow.org/?hl=en)
# or reflect its latest changes.
# If you have suggestions to improve this translation, please send a pull request to the
# [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n) GitHub repository.
# To volunteer to write or review documentation translations, please send an email to
# [<EMAIL>](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
# + [markdown] id="XPDKhwPcNbG7"
# This tutorial contains a minimal implementation of DeepDream, as described in this [blog post](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) by <NAME>.
#
# DeepDream is an experiment that visualizes the patterns learned by a neural network. Similar to when a child watches clouds and tries to interpret random shapes, DeepDream over-interprets and enhances the patterns it sees in a given image.
#
# It does so by forwarding an image through the network, then calculating the gradient of the image with respect to the activations of a particular layer. The DeepDream algorithm then modifies the image to maximize those activations, making the network over-interpret the patterns it sees, which produces a dream-like image based on the input. This process was dubbed "Inceptionism", a reference to the [InceptionNet](https://arxiv.org/pdf/1409.4842.pdf) model and the [movie](https://en.wikipedia.org/wiki/Inception) Inception.
#
# Let's see how you can make a neural network "dream" and enhance the surreal patterns it sees in an image.
#
# 
# + id="Sc5Yq_Rgxreb"
import tensorflow as tf
# + id="g_Qp173_NbG5"
import numpy as np
import matplotlib as mpl
import IPython.display as display
import PIL.Image
from tensorflow.keras.preprocessing import image
# + [markdown] id="wgeIJg82NbG4"
# ## Choose an image to dream-ify
# + [markdown] id="yt6zam_9NbG4"
# For this tutorial, let's use an image of a [labrador](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg).
# + id="0lclzk9sNbG2"
url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'
# + id="Y5BPgc8NNbG0"
# Download an image and read it into a NumPy array.
def download(url, max_dim=None):
  name = url.split('/')[-1]
  image_path = tf.keras.utils.get_file(name, origin=url)
  img = PIL.Image.open(image_path)
  if max_dim:
    img.thumbnail((max_dim, max_dim))
  return np.array(img)
# Normalize an image so it can be displayed.
def deprocess(img):
  img = 255*(img + 1.0)/2.0
  return tf.cast(img, tf.uint8)
# Display an image.
def show(img):
  display.display(PIL.Image.fromarray(np.array(img)))
# Downsizing the image makes it easier to work with.
original_img = download(url, max_dim=500)
show(original_img)
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
# + [markdown] id="F4RBFfIWNbG0"
# ## Prepare the feature extraction model
# + [markdown] id="cruNQmMDNbGz"
# Download and prepare a pre-trained image classification model. This tutorial uses [InceptionV3](https://keras.io/applications/#inceptionv3), which is similar to the model originally used in DeepDream. Note that any pre-trained model will work, although you will have to adjust the layer names below if you change it.
# + id="GlLi48GKNbGy"
base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
# + [markdown] id="Bujb0jPNNbGx"
# The idea in DeepDream is to choose one or more layers and maximize a "loss" in a way that the image increasingly "excites" those layers. The complexity of the features that emerge depends on the layers you choose: lower layers enhance strokes or simple patterns, while deeper layers can produce sophisticated patterns in the image, or even the shapes of whole objects.
# + [markdown] id="qOVmDO4LNbGv"
# The InceptionV3 architecture is quite large (a graph of the model architecture is available in TensorFlow's [research repo](https://github.com/tensorflow/models/tree/master/research/inception)). Among the model's many layers, the ones of interest for DeepDream are those where the convolutions are concatenated. There are 11 such layers in InceptionV3, named 'mixed0' through 'mixed10'. Which layers you choose determines the look of the dream image: deeper layers respond to higher-level features such as eyes and faces, while earlier layers respond to lower-level features such as edges, shapes, and textures. Feel free to experiment with the layers selected below, but keep in mind that deeper layers (those with a higher index) take longer to compute gradients for.
# + id="08KB502ONbGt"
# Maximize the activations of these layers
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]
# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
# + [markdown] id="sb7u31B4NbGt"
# ## Calculate loss
#
# The loss is the sum of the activations in the chosen layers. The loss is normalized at each layer so the contribution from larger layers does not outweigh smaller layers. Normally, loss is a quantity you would wish to minimize via gradient descent. In DeepDream, you will instead maximize this loss via gradient ascent.
# + id="8MhfSweXXiuq"
def calc_loss(img, model):
  # Pass forward the image through the model to retrieve the activations.
  # Converts the image into a batch of size 1.
  img_batch = tf.expand_dims(img, axis=0)
  layer_activations = model(img_batch)
  if len(layer_activations) == 1:
    layer_activations = [layer_activations]
  losses = []
  for act in layer_activations:
    loss = tf.math.reduce_mean(act)
    losses.append(loss)
  return tf.reduce_sum(losses)
# + [markdown] id="k4TCNsAUO9kI"
# ## Gradient ascent
#
# Once you have calculated the loss for the chosen layers, all that is left is to calculate the gradients with respect to the input image, and add them to the original image.
#
# Adding the gradients to the image enhances the patterns seen by the network. At each step, you will have created an image that increasingly excites the activations of the chosen layers in the network.
#
# The method that does this, below, is wrapped in a [`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function) for performance. It uses an `input_signature` to ensure that the function is not retraced for different image sizes or `steps`/`step_size` values. See the [Concrete functions guide](https://www.tensorflow.org/guide/concrete_function) for details.
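# + [markdown]
# Concretely, each gradient-ascent step applied below is
#
# $$\text{img} \leftarrow \mathrm{clip}\Big(\text{img} + \eta \cdot \frac{g}{\mathrm{std}(g) + \varepsilon},\; -1,\; 1\Big), \qquad g = \nabla_{\text{img}}\,\text{loss}(\text{img})$$
#
# where $\eta$ is `step_size` and $\varepsilon$ is a small constant for numerical stability.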
# + id="qRScWg_VNqvj"
class DeepDream(tf.Module):
  def __init__(self, model):
    self.model = model
  @tf.function(
      input_signature=(
        tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.int32),
        tf.TensorSpec(shape=[], dtype=tf.float32),)
  )
  def __call__(self, img, steps, step_size):
    print("Tracing")
    loss = tf.constant(0.0)
    for n in tf.range(steps):
      with tf.GradientTape() as tape:
        # This needs gradients relative to `img`.
        # `GradientTape` only watches `tf.Variable`s by default.
        tape.watch(img)
        loss = calc_loss(img, self.model)
      # Calculate the gradient of the loss with respect to the pixels of the input image.
      gradients = tape.gradient(loss, img)
      # Normalize the gradients.
      gradients /= tf.math.reduce_std(gradients) + 1e-8
      # In gradient ascent, the "loss" is maximized so that the input image increasingly "excites" the chosen layers.
      # Because the gradients have the same shape as the image, they can be added to it directly.
      img = img + gradients*step_size
      img = tf.clip_by_value(img, -1, 1)
    return loss, img
# + id="yB9pTqn6xfuK"
deepdream = DeepDream(dream_model)
# + [markdown] id="XLArRTVHZFAi"
# ## Main loop
# + id="9vHEcy7dTysi"
def run_deep_dream_simple(img, steps=100, step_size=0.01):
  # Convert the image from uint8 to the range expected by the model.
  img = tf.keras.applications.inception_v3.preprocess_input(img)
  img = tf.convert_to_tensor(img)
  step_size = tf.convert_to_tensor(step_size)
  steps_remaining = steps
  step = 0
  while steps_remaining:
    if steps_remaining > 100:
      run_steps = tf.constant(100)
    else:
      run_steps = tf.constant(steps_remaining)
    steps_remaining -= run_steps
    step += run_steps
    loss, img = deepdream(img, run_steps, tf.constant(step_size))
    display.clear_output(wait=True)
    show(deprocess(img))
    print("Step {}, loss {}".format(step, loss))
  result = deprocess(img)
  display.clear_output(wait=True)
  show(result)
  return result
# + id="tEfd00rr0j8Z"
dream_img = run_deep_dream_simple(img=original_img,
steps=100, step_size=0.01)
# + [markdown] id="2PbfXEVFNbGp"
# ## Taking it up an octave
#
# Pretty good, but the result of this first attempt has a few issues:
#
#   1. The output is noisy (this could be addressed with a [`tf.image.total_variation`](https://www.tensorflow.org/api_docs/python/tf/image/total_variation) loss; an optional sketch of that idea appears after the octave code below).
#   1. The image is low resolution.
#   1. The patterns appear like they are all happening at the same granularity.
#
# One approach that addresses all these problems is applying gradient ascent at different scales. This allows patterns generated at smaller scales to be incorporated into patterns at larger scales and filled in with additional detail.
#
# To do this you can perform the previous gradient ascent approach, then increase the size of the image (which is referred to as an octave), and repeat this process for multiple octaves.
#
# + id="0eGDSdatLT-8"
import time
start = time.time()
OCTAVE_SCALE = 1.30
img = tf.constant(np.array(original_img))
base_shape = tf.shape(img)[:-1]
float_base_shape = tf.cast(base_shape, tf.float32)
for n in range(-2, 3):
  new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32)
  img = tf.image.resize(img, new_shape).numpy()
  img = run_deep_dream_simple(img=img, steps=50, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)
end = time.time()
end-start
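# + [markdown]
# Optional aside (not part of the original tutorial): a minimal sketch of how the noise issue mentioned above could be tackled by folding a total-variation penalty into the loss. The weight `tv_weight` is an illustrative value you would need to tune; everything else reuses `calc_loss` as defined earlier.
# +
def calc_loss_with_tv(img, model, tv_weight=1e-4):
  # `tf.image.total_variation` expects a batch dimension.
  tv = tf.image.total_variation(tf.expand_dims(img, axis=0))[0]
  # Subtracting the penalty means gradient *ascent* on this loss discourages noisy, high-frequency pixels.
  return calc_loss(img, model) - tv_weight * tv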
# + [markdown] id="s9xqyeuwLZFy"
# ## Optional: Scaling up with tiles
#
# One thing to consider is that as the image increases in size, so do the time and memory necessary to perform the gradient calculation. The octave implementation above is therefore not suitable for many octaves or for high-resolution input images.
#
# You can avoid this issue by splitting the input image into tiles and computing the gradient for each tile.
#
# Applying a random shift to the image before each tiled gradient computation prevents tile seams from appearing.
#
# Start by implementing the random shift:
# + id="oGgLHk7o80ac"
def random_roll(img, maxroll):
  # Randomly shift the image to avoid tiled boundaries.
  shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32)
  shift_down, shift_right = shift[0], shift[1]
  img_rolled = tf.roll(tf.roll(img, shift_right, axis=1), shift_down, axis=0)
  return shift_down, shift_right, img_rolled
# + id="sKsiqWfA9H41"
shift_down, shift_right, img_rolled = random_roll(np.array(original_img), 512)
show(img_rolled)
# + [markdown] id="tGIjA3UhhAt8"
# Now add tiling support to the `deepdream` function defined earlier:
# + id="x__TZ0uqNbGm"
class TiledGradients(tf.Module):
  def __init__(self, model):
    self.model = model
  @tf.function(
      input_signature=(
        tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.int32),)
  )
  def __call__(self, img, tile_size=512):
    shift_down, shift_right, img_rolled = random_roll(img, tile_size)
    # Initialize the image gradients to zero.
    gradients = tf.zeros_like(img_rolled)
    # Skip the last tile, unless there is only one tile.
    xs = tf.range(0, img_rolled.shape[0], tile_size)[:-1]
    if not tf.cast(len(xs), bool):
      xs = tf.constant([0])
    ys = tf.range(0, img_rolled.shape[1], tile_size)[:-1]
    if not tf.cast(len(ys), bool):
      ys = tf.constant([0])
    for x in xs:
      for y in ys:
        # Calculate the gradients for this tile.
        with tf.GradientTape() as tape:
          # This needs gradients relative to `img_rolled`.
          # `GradientTape` only watches `tf.Variable`s by default.
          tape.watch(img_rolled)
          # Extract a tile out of the image.
          img_tile = img_rolled[x:x+tile_size, y:y+tile_size]
          loss = calc_loss(img_tile, self.model)
        # Update the image gradients for this tile.
        gradients = gradients + tape.gradient(loss, img_rolled)
    # Undo the random shift applied to the image and its gradients.
    gradients = tf.roll(tf.roll(gradients, -shift_right, axis=1), -shift_down, axis=0)
    # Normalize the gradients.
    gradients /= tf.math.reduce_std(gradients) + 1e-8
    return gradients
# + id="Vcq4GubA2e5J"
get_tiled_gradients = TiledGradients(dream_model)
# + [markdown] id="hYnTTs_qiaND"
# Putting this all together gives a scalable, octave-aware DeepDream implementation:
# + id="gA-15DM4NbGk"
def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01,
                                octaves=range(-2,3), octave_scale=1.3):
  base_shape = tf.shape(img)
  img = tf.keras.preprocessing.image.img_to_array(img)
  img = tf.keras.applications.inception_v3.preprocess_input(img)
  initial_shape = img.shape[:-1]
  img = tf.image.resize(img, initial_shape)
  for octave in octaves:
    # Scale the image based on the octave.
    new_size = tf.cast(tf.convert_to_tensor(base_shape[:-1]), tf.float32)*(octave_scale**octave)
    img = tf.image.resize(img, tf.cast(new_size, tf.int32))
    for step in range(steps_per_octave):
      gradients = get_tiled_gradients(img)
      img = img + gradients*step_size
      img = tf.clip_by_value(img, -1, 1)
      if step % 10 == 0:
        display.clear_output(wait=True)
        show(deprocess(img))
        print("Octave {}, Step {}".format(octave, step))
  result = deprocess(img)
  return result
# + id="T7PbRLV74RrU"
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)
# + [markdown] id="0Og0-qLwNbGg"
# Much more interesting than the earlier results! Now it's time to experiment with the number of octaves, the octave scale, and the activated layers, and see what kinds of DeepDream images you can create.
#
# If you are interested in more ways to interpret and visualize what a neural network learns beyond the techniques introduced in this tutorial, check out [TensorFlow Lucid](https://github.com/tensorflow/lucid).
| site/ko/tutorials/generative/deepdream.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # selenium & webdriver - 2
#
# - file uploading through selenium
# - use API
import time
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('http://cloud.google.com/vision')
# ### change iframe
# - upload an image on the Google Vision API test page and check the result
# - the upload area lives inside an `iframe`, so we'll move the focus to the `iframe`
iframe = driver.find_element_by_css_selector("#vision_demo_section iframe")
driver.switch_to_frame(iframe)
# ### file upload
# check current path
# path = !pwd
path
file_path = path[0] + "/practice1.png"
driver.find_element_by_css_selector("#input").send_keys(file_path)
driver.find_element_by_css_selector("#labelAnnotations").click()
driver.find_elements_by_css_selector('.style-scope .vs-labels .name')[0].text
# ### running all the code in one attempt
# +
driver = webdriver.Chrome()
driver.get('https://cloud.google.com/vision/')
iframe = driver.find_element_by_css_selector("#vision_demo_section iframe")
driver.switch_to_frame(iframe)
file_path = path[0] + "/practice1.png"
driver.find_element_by_css_selector("#input").send_keys(file_path)
time.sleep(10) # time allowance for uploading and analyzing the image
driver.find_element_by_css_selector("#webDetection").click()
result = driver.find_elements_by_css_selector('.style-scope .vs-web .name')[0].text
print(result)
driver.close()
# -
# ### Check element while running
# - apply try and except
def check_element(driver, selector):
    try:
        driver.find_element_by_css_selector(selector)
        return True
    except:
        return False
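# + [markdown]
# Note: Selenium also ships an explicit-wait helper (`WebDriverWait`) that can replace the manual polling loop below. A minimal sketch, assuming the same `driver` and result selector:
#
# ```python
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
#
# element = WebDriverWait(driver, 10).until(
#     EC.presence_of_element_located((By.CSS_SELECTOR, '.vs-web .name')))
# ```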
# +
driver = webdriver.Chrome()
driver.get('https://cloud.google.com/vision/')
iframe = driver.find_element_by_css_selector("#vision_demo_section iframe")
driver.switch_to_frame(iframe)
file_path = path[0] + "/Drake.png"
driver.find_element_by_css_selector("#input").send_keys(file_path)
selector = '.vs-web .name'
sec, limit_sec = 0, 3
while True:
    sec += 1
    print("{}sec".format(sec))
    time.sleep(1)
    # check whether the result element is present yet
    if check_element(driver, selector):
        driver.find_element_by_css_selector("#webDetection").click()
        result = driver.find_elements_by_css_selector(selector)[0].text
        print(result)
        driver.close()
        break
    # if limit_sec is exceeded, give up and report an error
    if sec + 1 > limit_sec:
        print("error")
        driver.close()
        break
# -
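# + [markdown]
# Compatibility note: the `find_element_by_css_selector`, `find_elements_by_css_selector`, and `switch_to_frame` helpers used in this notebook have been deprecated and removed in recent Selenium 4 releases. If you run this with a current Selenium, the equivalent calls look roughly like this (same selectors assumed):
#
# ```python
# from selenium.webdriver.common.by import By
#
# iframe = driver.find_element(By.CSS_SELECTOR, "#vision_demo_section iframe")
# driver.switch_to.frame(iframe)
# driver.find_element(By.CSS_SELECTOR, "#input").send_keys(file_path)
# ```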
| 2_python_Packages_Libraries/6_selenium/01_reviewNote_selenium/02_selenium_webdriver_file_uploading.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ocandataviz
# language: python
# name: ocandataviz
# ---
# # Canada Consumer Prices
# +
# %load_ext autoreload
# %autoreload 2
# %run relativepath.py
# %run commonimports.py
# %run displayoptions.py
# -
cpi_dataset = StatscanZip('https://www150.statcan.gc.ca/n1/tbl/csv/18100004-eng.zip')
cpi_data_alltime = cpi_dataset.get_data()
cpi_data_alltime.Date = pd.to_datetime(cpi_data_alltime.Date)
cpi_data_alltime = cpi_data_alltime.set_index('Date')
# ## Inflation since 1998
cpi_data = cpi_data_alltime[(cpi_data_alltime.index > '1997')
& (cpi_data_alltime.Geo =='Canada')
& ((cpi_data_alltime.index.month==1)|(cpi_data_alltime.index== '2018-12-01')) ]
# ## Summarize
cheapest = cpi_data.tail(1).T
cheapest = cheapest.iloc[1:]
cheapest.columns=['PriceIndex']
cheapest.sort_values('PriceIndex')
# +
CATEGORIES = {'Housing (1986 definition)':'Housing',
'Tuition fees' :'Tuition',
'Food':'Food',
'Health care': 'Healthcare',
#'Household furnishings and equipment':'Household Furnishings'
"Home entertainment equipment, parts and services": "Home entertainment",
'Purchase and leasing of passenger vehicles':'Cars',
'Child care services':'Childcare',
'Toys, games (excluding video games) and hobby supplies':'Toys',
'Digital computing equipment and devices': 'Computers',
}
cpi_categories = cpi_data[list(CATEGORIES.keys())].rename(columns=CATEGORIES)
# +
def scale_to_100(col: pd.Series):
    # Percent change relative to the first observation (so the first value becomes 0).
    rescaled = ((col - col[0])/col[0])*100
    return rescaled.round(0)
for col in cpi_categories:
    cpi_categories[col] = scale_to_100(cpi_categories[col])
# -
cpi_categories
# +
import matplotlib.style as style
import matplotlib.pyplot as plt
style.use('fivethirtyeight')
def add_text(ax, category, offset=0):
    ax.text(y=cpi_categories[category][-1] + offset, x=737110, s=category, fontsize=16)
# %matplotlib inline
tick_labels = [ '' for year in cpi_categories.index.year]
ax = cpi_categories.plot(figsize=(12,10),legend=False)
ax.set_title('Canada Price Changes 1998 to 2018', fontsize=24, fontweight='bold')
ax.yaxis.set_tick_params(labelsize=16)
ax.xaxis.set_tick_params(labelsize=14)
ax.set_ylabel('%', fontsize=18, fontweight='bold')
ax.grid(False, axis='x')
ax.set_xlim((729000, 739400))
ax.set_xlabel('')
ax.set_xticklabels([])
add_text(ax, 'Tuition')
add_text(ax, 'Housing')
add_text(ax, 'Food')
add_text(ax, 'Healthcare', -4)
add_text(ax, 'Cars')
add_text(ax, 'Toys')
add_text(ax, 'Computers')
add_text(ax, 'Home entertainment')
ax.text(s='More Affordable', y=-90, x=729200, fontsize=23, color='#07357f')
ax.text(s='More Expensive', y=90, x=729200, fontsize=23, color='#ce1400');
# X Labels
ax.text(s='1998', y=-120, x=729200, fontsize=18);
ax.text(s='2008', y=-120, x=732500, fontsize=18);
ax.text(s='2018', y=-120, x=736100, fontsize=18);
# -
| notebooks/ConsumerPrices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # 02 - Generate a frequency response curve
#
# After running the previous notebook, [01 - Performing a test](01%20-%20Performing%20a%20test.ipynb), we now have two audio files. One is a baseline signal that we got from running a sweep test without the DUT connected to the audio interface. The other is the signal we obtained from the DUT. We want to compare the two signals to find out the frequency response of the DUT. `freqbench` has a simple interface for this. We're also going to use `matplotlib` to create a plot.
# imports
import freqbench
import matplotlib.pyplot as plt
import matplotlib
# Let's load up the data we generated with the last notebook.
base_signal, fr0 = freqbench.load('test_data/base_signal.wav')
dut_signal, fr1 = freqbench.load('test_data/dut_signal.wav')
assert fr0 == fr1
frame_rate = fr0
# We can calculate the frequency response between the two signals with `freqbench.analysis.freqresp()`. This returns two 1-D arrays of equal size. One contains a range of frequencies, in Hz, and the other tells us the response of the DUT at the corresponding frequency. Response is measured in decibels between the amplitudes of the two signals.
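# + [markdown]
# For reference, "decibels between the amplitudes of the two signals" here means the usual amplitude-ratio definition (an assumption about `freqbench`'s convention, but it matches standard practice), so a response of $+6\,\mathrm{dB}$ corresponds to roughly double the amplitude:
#
# $$\text{response}(f) = 20 \log_{10}\frac{A_\text{DUT}(f)}{A_\text{base}(f)}$$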
freqs, response = freqbench.analysis.freqresp(
base_signal, dut_signal, frame_rate)
# Finally, let's plot the frequency response.
plt.figure(figsize=(10, 4))
plt.xscale('log')
plt.xlim([20, 20_000])
plt.ylim([-10, 10])
plt.ylabel('Response, dB')
plt.xlabel('Frequency, Hz')
plt.plot(freqs, response)
plt.show()
# And there's the frequency response! There will be some noise in the plot because we're using discrete, finite-length sampled signals. We can smooth that out a bit with `freqbench.analysis.smooth()`.
# +
response_smooth = freqbench.analysis.smooth(response, 100)
plt.figure(figsize=(10, 4))
plt.xscale('log')
plt.xlim([20, 20_000])
plt.ylim([-10, 10])
plt.ylabel('Response, dB')
plt.xlabel('Frequency, Hz')
plt.plot(freqs, response_smooth)
plt.show()
# -
| notebooks/02 - Generate a frequency response curve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-b07abb410d0b03e8", "locked": true, "schema_version": 3, "solution": false, "task": false}
#
# # BLU14 - Exercise Notebook
#
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-6029358aaa125687", "locked": true, "schema_version": 3, "solution": false, "task": false}
import os
import base64
import joblib
import pandas as pd
import numpy as np
import category_encoders
import json
import joblib
import pickle
import math
import requests
from copy import deepcopy
import seaborn as sns
from uuid import uuid4
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, Pipeline
from sklearn.model_selection import cross_val_score
from category_encoders import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, roc_auc_score
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# %matplotlib inline
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-1e4de53acc49a080", "locked": true, "schema_version": 3, "solution": false, "task": false}
# After the police, you nailed another big client. A bank hired you to assess whether or not a person is potentially a good client. For that purpose, they want you to design a system that tries to predict if a given individual earns more than 50K a year. You're no expert in the financial field, but you decide to take on the challenge. They provide you with a dataset of existing clients they have and their earnings.
#
# <img src="media/banks.png" width=400 />
#
#
# They also provide you with the following data description:
#
# #### Attribute Information
#
# 1) age - client's age
# 2) workclass - type of work performed by the client (eg. `Private`)
# 3) fnlwgt - final weight assigned by the Census Bureau: if two samples have the same (or similar) fnlwgt they have similar characteristics, demographically speaking
# 4) education - level of education of client (eg. `Bachelors`)
# 5) education-num -
# 6) marital-status - client's marital status (eg `Widowed`)
# 7) occupation - type of job held by the client (eg. `Craft-repair`)
# 8) relationship -
# 9) race - client's race
# 10) sex - "male"/"female"
# 11) capital-gain - total capital gain in the previous year
# 12) capital-loss - total capital loss in previous year
# 13) hours-per-week - number of hours the client works per week
# 14) native-country - client's original nationality (eg. `Portugal`)
#
#
# **Note**: even if the dataset has values outside of the data dictionary, you should for these exercises consider the data dictionary as the source of truth.
#
# Load the dataset below and check out its format:
#
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-38fc8b0a01b6c17d", "locked": true, "schema_version": 3, "solution": false, "task": false}
def load_data():
    df = pd.read_csv(os.path.join("data", "bank.csv"))
    return df
df = load_data()
df.head()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-b1f0716dd0e185a8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Let's split our data into train and test:
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-3079f4bf5c6b51a5", "locked": true, "schema_version": 3, "solution": false, "task": false}
df = load_data()
df_train, df_test = train_test_split(df, test_size=0.3, random_state=42)
df_test.salary.value_counts().plot(kind="bar");
plt.xlabel('Target value');
plt.ylabel('Target value counts');
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-b5a701873eb4a857", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q1)
#
# ### Q1.a) Train a baseline model
#
# Build a baseline model for this problem (don't worry about performance for now) and serialize it. Use the following features:
#
# 1) age
# 2) workclass - type of work performed by the client (eg. `Private`)
# 4) education - level of education of client (eg. `Bachelors`)
# 6) marital-status - client's marital status (eg `Widowed`)
# 9) race - client's race
# 10) sex - "male"/"female"
# 11) capital-gain - total capital gain in the previous year
# 12) capital-loss - total capital loss in the previous year
# 13) hours-per-week - number of hours the client works per week
#
# Make sure to change the target so that it has a binary value - True or False - instead of the original values. In particular:
#
# * False: client has a salary of less than 50K
# * True: client has a salary higher than or equal to 50K
#
# **Note**: As we already provided the split, use the `df_train` to train your model.
#
# **Note 2**: If you use models or functions that have a random component, ensure that you pass a random state so that there are no surprises when you submit.
# + deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-c0f8bd2fd8a95779", "locked": true, "schema_version": 3, "solution": false, "task": false}
# This is a temporary directory where your serialized files will be saved. Make sure you use this as
# the target folder when you serialize your files
TMP_DIR = '/tmp'
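# + [markdown]
# For orientation, a minimal sketch of one possible approach is shown below. It is illustrative only (not necessarily the graded solution), and the exact label encoding in `bank.csv` is an assumption here, so verify it against `df_train.salary.unique()` before relying on it.
#
# ```python
# columns = ["age", "workclass", "education", "marital-status", "race",
#            "sex", "capital-gain", "capital-loss", "hours-per-week"]
# X_train = df_train[columns]
# # Assumption: salary is encoded like the classic Adult dataset, e.g. '<=50K' / '>50K'.
# y_train = df_train["salary"].str.contains(">50K")
#
# categorical = ["workclass", "education", "marital-status", "race", "sex"]
# numerical = ["age", "capital-gain", "capital-loss", "hours-per-week"]
# preprocess = ColumnTransformer([
#     # sklearn's OneHotEncoder (the one imported last above) supports handle_unknown="ignore"
#     ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
#     ("num", StandardScaler(), numerical),
# ])
# pipeline = Pipeline([
#     ("preprocess", preprocess),
#     ("clf", LogisticRegression(max_iter=1000, random_state=42)),
# ])
# pipeline.fit(X_train, y_train)
#
# with open(os.path.join(TMP_DIR, "columns.json"), "w") as fh:
#     json.dump(columns, fh)
# with open(os.path.join(TMP_DIR, "dtypes.pickle"), "wb") as fh:
#     pickle.dump(X_train.dtypes, fh)
# joblib.dump(pipeline, os.path.join(TMP_DIR, "pipeline.pickle"))
# ```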
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-6a8b3cf903220496", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Write code to train and serialize a model in the block below
#
# Outputs expected: `columns.json`, `dtypes.pickle` and `pipeline.pickle`
#
# Your pipeline should be able to receive a dataframe with the columns we've requested you to use
# in the form `pipeline.predict(test_df)`
#
# YOUR CODE HERE
raise NotImplementedError()
# -
# Test your procedure is correct by running the asserts below:
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-02ebbb63fbd9f79a", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
with open(os.path.join(TMP_DIR, 'columns.json')) as fh:
columns = json.load(fh)
assert columns == ["age", "workclass", "education", "marital-status", "race", "sex", "capital-gain", "capital-loss", "hours-per-week"]
with open(os.path.join(TMP_DIR, 'dtypes.pickle'), 'rb') as fh:
dtypes = pickle.load(fh)
assert dtypes.apply(lambda x: str(x)).isin(["int64", "int32", "object"]).all()
with open(os.path.join(TMP_DIR, 'pipeline.pickle'), 'rb') as fh:
pipeline = joblib.load(fh)
assert isinstance(pipeline, Pipeline)
assert pipeline.predict(pd.DataFrame([{
"age": 23,
"workclass": "Private",
"education": "Bachelors",
"marital-status": "Widowed",
"race": "White",
"sex": "male",
"capital-gain": 0,
"capital-loss": 0,
"hours-per-week": 40}
], columns=columns).astype(dtypes)) in [0, 1]
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-cec5bed529e037b7", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Q1.b) Client requirements
#
#
# Now, the client asks you one more thing. They want to make sure your model is as good at retrieving male cases of high salary as it is at retrieving female cases.
#
# For example, if we have a pool of clients where 100 male clients earn more than 50K and we retrieve 80 of those, while 100 female clients also earn more than 50K but we only retrieve 20 of those, then the model is discriminating and that's not ok. A similar proportion, such as 75 out of the 100 higher-earning women, is expected.
#
# Build a small function to verify this. In particular, make sure that the difference in percentage points is not higher than 10:
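# + [markdown]
# For illustration only (not necessarily the graded solution), the core of such a check is simply the recall computed separately per `sex` group, something along these lines, assuming `X_test` carries a `sex` column with values like `Male`/`Female`:
#
# ```python
# is_male = X_test["sex"] == "Male"
# recall_male = recall_score(y_true[is_male], y_test[is_male])
# recall_female = recall_score(y_true[~is_male], y_test[~is_male])
# rate_difference = abs(recall_male - recall_female)
# success = rate_difference <= 0.10  # no more than 10 percentage points apart
# ```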
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-0ca5bf28d4e81180", "locked": false, "schema_version": 3, "solution": true, "task": false}
def verify_retrieve_rates(X_test, y_true, y_test):
"""
Verify retrieval rates for different `sex` instances are
not different by more than 10 percentage points
Inputs:
X_test: features for the test cases
y_true: true labels for the test cases [0, 1]
y_test: predictions for the test cases [0, 1]
Returns: tuple of (success, rate_difference)
success: True if the condition is satisfied, otherwise False
rate_difference: difference between each class retrieval rates (as an absolute value)
"""
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-8c29ef35a9d73ebd", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Verify your function is working on a couple of models.
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-d445e07c99ff6fff", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
model_1 = pd.read_csv(os.path.join('data', 'data_model_1.csv'))
X_test = model_1.copy().drop(columns=['target', 'prediction'])
y_test = model_1.target
y_pred = model_1.prediction
success, rate_diff = verify_retrieve_rates(X_test, y_test, y_pred)
assert success is False
assert math.isclose(rate_diff, 0.55)
model_2 = pd.read_csv(os.path.join('data', 'data_model_2.csv'))
X_test = model_2.copy().drop(columns=['target', 'prediction'])
y_test = model_2.target
y_pred = model_2.prediction
success, rate_diff = verify_retrieve_rates(X_test, y_test, y_pred)
assert success is True
assert math.isclose(rate_diff, 0.050000000000000044)
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f60c10e28e9ad450", "locked": true, "schema_version": 3, "solution": false, "task": false}
# If you passed the asserts, you've defused this task. Move forward to the next one.
#
# <img src="media/client-specs.png" width=400 />
#
#
#
#
# <br>
#
# ### Q2) Prepare the model to be served
#
#
# Now, use the model that you built for Q1 and build a predict function around it that will parse the request and return the respective prediction. Split your code into initialization and prediction code as you've learned. Additionally, instead of returning 0 or 1, return True or False. Do not worry about potential bad inputs at this point, we'll get to it later on.
#
#
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-41c50b60ab6b94be", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Initialization code
# YOUR CODE HERE
raise NotImplementedError()
def predict(request):
"""
Produce prediction for request.
Inputs:
request: dictionary with format described below
```
{
"observation_id": <id-as-a-string>,
"data": {
"age": <value>,
"sex": <value>,
"race": <value>,
"workclass": <value>,
"education": <value>,
"marital-status": <value>,
"capital-gain": <value>,
"capital-loss": <value>,
"hours-per-week": <value>,
}
}
```
Returns:
response: A dictionary echoing the request and its data with the addition of the prediction and probability
```
{
"observation_id": <id-of-request>,
"age": <value-of-request>,
"sex": <value-of-request>,
"race": <value-of-request>,
"workclass": <value-of-request>,
"education": <value-of-request>,
"marital-status": <value-of-request>,
"capital-gain": <value-of-request>,
"capital-loss": <value-of-request>,
"hours-per-week": <value-of-request>,
"prediction": <True|False>,
"probability": <probability generated by model>
}
```
"""
# YOUR CODE HERE
raise NotImplementedError()
return response
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-053d4ccfe2e0d5cc", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Test your function on the code below:
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-50f083e22995d498", "locked": true, "points": 4, "schema_version": 3, "solution": false, "task": false}
request = {
"observation_id": "1",
"data":
{
"age": 30,
"workclass": "Private",
"sex": "Female",
"race": "Amer-Indian-Eskimo",
"education": "Masters",
"marital-status": "Never-married",
"capital-gain": 0,
"capital-loss": 0,
"hours-per-week": 45,
}
}
response = predict(request)
assert sorted(response.keys()) == \
sorted(["observation_id", "age", "sex", "race", "education", "marital-status", "workclass",
"capital-gain", "capital-loss", "hours-per-week", "prediction", "probability"])
assert response["observation_id"] == "1"
assert response["age"] == 30
assert response["hours-per-week"] == 45
assert response["prediction"] in [True, False]
probability_1 = response["probability"]
request = {
"observation_id": "2",
"data":
{
"age": 44,
"workclass": "Private",
"sex": "Male",
"race": "White",
"education": "Some-college",
"marital-status": "Married-civ-spouse",
"capital-gain": 0,
"capital-loss": 0,
"hours-per-week": 40,
}
}
response = predict(request)
assert sorted(response.keys()) == \
sorted(["observation_id", "age", "sex", "race", "education", "marital-status", "workclass",
"capital-gain", "capital-loss", "hours-per-week", "prediction", "probability"])
assert response["observation_id"] == "2"
assert response["education"] == "Some-college"
assert response["hours-per-week"] == 40
assert response["prediction"] in [True, False]
probability_2 = response["probability"]
assert probability_1 != probability_2
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-405026099770218f", "locked": true, "schema_version": 3, "solution": false, "task": false}
#
# Hurray! It passed the tests.
#
#
# <br>
#
# ### Q3) Protecting our server
#
# Let's be a bit more thorough with our server. To avoid issues and ensure we have full control around what is going on, we need to reason about which values we expect to receive and in what format.
#
#
#
# <img src="media/darth-vader-validation.jpg" width=400 />
#
#
#
# #### Q3.1) Categorical values
#
# First, we'll reason about categorical values. As the name indicates, these are values restricted to a set of potential choices. So logically, when one of these arrives at our server, what we want to verify is that it belongs to the valid set of values.
#
# Create a function that given a column and a dataframe, obtains the list of possible values for it:
#
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-fb68f2a2c5114348", "locked": false, "schema_version": 3, "solution": true, "task": false}
def get_valid_categories(df, column):
"""
Obtain list of available categories for column
Inputs:
df (pandas.DataFrame): dataframe from which to extract column values
column (str): target column for which to extract values
Returns:
categories: A list of potential values for column
"""
# YOUR CODE HERE
raise NotImplementedError()
return categories
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-923dcdf794f295e4", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Test your function below:
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-ccd1c2b06618669a", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
df = load_data()
# Test dataframe categorical values
assert get_valid_categories(df, "sex") == ["Male", "Female"]
assert get_valid_categories(df, "race") == ["White", "Black", "Asian-Pac-Islander", "Amer-Indian-Eskimo", "Other"]
assert len(get_valid_categories(df, "workclass")) == 9
assert len(get_valid_categories(df, "education")) == 16
assert len(get_valid_categories(df, "marital-status")) == 7
# Test dataframe numerical values - notice the amount of potential different values
assert len(get_valid_categories(df, "age")) == 71
assert len(get_valid_categories(df, "capital-loss")) == 83
assert len(get_valid_categories(df, "hours-per-week")) == 90
assert len(get_valid_categories(df, "capital-gain")) == 116
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-ae852ee3861c9cad", "locked": true, "schema_version": 3, "solution": false, "task": false}
#
# #### Q3.2) Numerical values
#
# Now we'll look into numerical values. Even though you used the categorical approach to assess them in the last cell, it should be obvious why that's not the best idea for validation. First, the number of distinct values can easily explode, depending on the scenario. And second, we don't really want to exclude new values that the model could still interpret.
#
# For these values, it's better to reason about sensible ranges and set proper expectations so that the model still behaves. Sometimes these can be set intuitively:
#
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-39ead3f22ab339de", "locked": true, "schema_version": 3, "solution": false, "task": false}
# **Q3.2.i)** For example, which age range do you think is most adequate?
#
# - A) -100 to 100
# - B) 0 to 100
# - C) 20 to 1000
#
# Enter your answer below wrapped by quotes, for example:
#
# ```
# answer_q32i = "A"
# ```
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-181303646d6f5459", "locked": false, "schema_version": 3, "solution": true, "task": false}
# answer_q32i = 'A' or 'B' or 'C'
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-a75cd03660bb8883", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert base64.b64encode(answer_q32i.encode()) == b'Qg=='
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-e3de61a69d97750e", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Not all features have ranges that are as obvious to reason about, though. However, there are some strategies to work around this.
#
# **Q3.2.ii)** Take for example capital gain, what do you think is most appropriate?
#
# - A) Taking the minimum and maximum of the observed values and using them as a range
# - B) Setting fixed values - eg. 0 to 10000
# - C) Leaving the range of allowed values completely free of any specification
#
# Enter your answer below wrapped by quotes, for example:
#
# ```
# answer_q32ii = "A"
# ```
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-09665c3fce6194a2", "locked": false, "schema_version": 3, "solution": true, "task": false}
# answer_q32ii = 'A' or 'B' or 'C'
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-8c8fa9d734f8df94", "locked": true, "points": 0.5, "schema_version": 3, "solution": false, "task": false}
assert base64.b64encode(answer_q32ii.encode()) == b'QQ=='
# -
# #### Q3.3) Putting everything together
#
# Now put everything together. Create a function similar to the one above and protect it against unexpected inputs.
#
# If everything is well with your request return an answer like this:
#
# ```json
# {
# "observation_id": "id1234",
# "prediction": True,
# "probability": 0.4
# }
# ```
#
# However, if there is a problem with the initial data, such as missing fields or invalid values, we want to return a different response:
#
# ```json
# {
# "observation_id": "id1234",
# "error": "Some error occured",
# }
# ```
#
#
# #### Hints
#
# - Hint 1: If the `observation_id` is not present, set it to None
# - Hint 2: Check out the tests to see what we expect from the error cases and error messages
# - Hint 3: Even though we mentioned better strategies above for values such as capital-gain and capital-loss, it's enough here to protect against what the tests show
#
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-ba9eb7b46c393f65", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Initialization code
# YOUR CODE HERE
raise NotImplementedError()
def attempt_predict(request):
"""
Produce prediction for request.
Inputs:
request: dictionary with format described below
```
{
"observation_id": <id-as-a-string>,
"data": {
"age": <value>,
"sex": <value>,
"race": <value>,
"workclass": <value>,
"education": <value>,
"marital-status": <value>,
"capital-gain": <value>,
"capital-loss": <value>,
"hours-per-week": <value>,
}
}
```
Returns: A dictionary with predictions or an error, the two potential values:
if the request is OK and was properly parsed and predicted:
```
{
"observation_id": <id-of-request>,
"prediction": <True|False>,
"probability": <probability generated by model>
}
```
otherwise:
```
{
"observation_id": <id-of-request>,
"error": "some error message"
}
```
"""
# YOUR CODE HERE
raise NotImplementedError()
return response
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-159253e9dbf17db6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Run the tests below to validate your function is protected against some simple cases:
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-5fdd682e3b4da185", "locked": true, "points": 3, "schema_version": 3, "solution": false, "task": false}
################################################
# Test with good payload
################################################
base_request = {
"observation_id": "1",
"data":
{
"age": 30,
"workclass": "Private",
"sex": "Female",
"race": "Amer-Indian-Eskimo",
"education": "Masters",
"marital-status": "Never-married",
"capital-gain": 0,
"capital-loss": 0,
"hours-per-week": 45,
}
}
response = attempt_predict(base_request)
assert 'prediction' in response, response
assert 'probability' in response, response
assert 'observation_id' in response, response
assert response["observation_id"] == "1", response["observation_id"]
assert response["prediction"] in [True, False], response["prediction"]
assert response["probability"] <= 1.0, response["probability"]
assert response["probability"] >= 0.0, response["probability"]
################################################
# Test missing `observation_id` produces an error
################################################
bad_request_1 = deepcopy(base_request)
bad_request_1['random_field'] = bad_request_1.pop('observation_id')
response = attempt_predict(bad_request_1)
assert 'error' in response, response
assert 'observation_id' in response['error']
################################################
# Test missing `data` produces an error
################################################
bad_request_2 = deepcopy(base_request)
bad_request_2['data_field_name'] = bad_request_2.pop('data')
response = attempt_predict(bad_request_2)
assert 'error' in response, response
assert 'data' in response['error']
################################################
# Test missing columns produce an error
################################################
bad_request_3 = deepcopy(base_request)
bad_request_3['data'].pop('age')
response = attempt_predict(bad_request_3)
assert 'error' in response, response
assert 'age' in response['error'], response['error']
################################################
# Test extra columns produce an error
################################################
bad_request_4 = deepcopy(base_request)
bad_request_4['data']['relationship'] = "Wife"
response = attempt_predict(bad_request_4)
assert 'error' in response, response
assert 'relationship' in response['error'], response['error']
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f7c6084210b7712f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Run a couple more tests to make sure your server is bulletproof:
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-001653d2c8007a1a", "locked": true, "points": 3, "schema_version": 3, "solution": false, "task": false}
####################################################
# Test invalid values for categorical features - sex
####################################################
bad_request_5 = deepcopy(base_request)
bad_request_5['data']['sex'] = "Engineeer"
response = attempt_predict(bad_request_5)
assert 'error' in response, response
assert 'sex' in response['error'], response['error']
assert 'Engineeer' in response['error'], response['error']
###########################################################################
# Test invalid values for categorical features - race
###########################################################################
bad_request_6 = deepcopy(base_request)
bad_request_6['data']['race'] = 'Male'
response = attempt_predict(bad_request_6)
assert 'error' in response, response
assert 'race' in response['error'], response['error']
assert 'Male' in response['error'], response['error']
####################################################
# Test invalid values for numerical features - age
####################################################
bad_request_7 = deepcopy(base_request)
bad_request_7['data']['age'] = -12
response = attempt_predict(bad_request_7)
assert 'error' in response, response
assert 'age' in response['error'], response['error']
assert '-12' in response['error'], response['error']
bad_request_8 = deepcopy(base_request)
bad_request_8['data']['age'] = 1200
response = attempt_predict(bad_request_8)
assert 'error' in response, response
assert 'age' in response['error'], response['error']
assert '1200' in response['error'], response['error']
####################################################
# Test invalid values for numerical features - capital gain and loss
####################################################
bad_request_9 = deepcopy(base_request)
bad_request_9['data']['capital-gain'] = -10
response = attempt_predict(bad_request_9)
assert 'error' in response, response
assert 'capital-gain' in response['error'], response['error']
assert '-10' in response['error'], response['error']
bad_request_10 = deepcopy(base_request)
bad_request_10['data']['capital-loss'] = -500
response = attempt_predict(bad_request_10)
assert 'error' in response, response
assert 'capital-loss' in response['error'], response['error']
assert '-500' in response['error'], response['error']
####################################################
# Test invalid values for numerical features - hours per week
####################################################
bad_request_11 = deepcopy(base_request)
bad_request_11['data']['hours-per-week'] = -10
response = attempt_predict(bad_request_11)
assert 'error' in response, response
assert 'hours-per-week' in response['error'], response['error']
assert '-10' in response['error'], response['error']
bad_request_12 = deepcopy(base_request)
bad_request_12['data']['hours-per-week'] = 400
response = attempt_predict(bad_request_12)
assert 'error' in response, response
assert 'hours-per-week' in response['error'], response['error']
assert '400' in response['error'], response['error']
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-27c74c3d31a3055f", "locked": true, "schema_version": 3, "solution": false, "task": false}
#
# Ufff. That was tough. But now your app is a bit safer to deploy! At least from all the cases we could think of.
#
# <img src="media/code-passes-tests.png" width=500 />
#
#
# <br>
#
# ### Q4) Put everything together
#
# Finally, build a server with your model and a predict endpoint protected from all the cases before. Deploy it and set
# the name of your app below:
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-fd9977536ad6270d", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Assign the variable APP_NAME to the name of your heroku app
# APP_NAME = ...
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-07135d75c857276c", "locked": true, "schema_version": 3, "solution": false, "task": false}
#
# Test that your server is bulletproof:
# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-ac0c1a590d9aba53", "locked": true, "points": 4, "schema_version": 3, "solution": false, "task": false}
# Test locally
# url = f"http://localhost:5000/predict"
# Testing the predict/update endpoint
url = "https://{}.herokuapp.com/predict".format(APP_NAME)
################################################
# Test with good payload
################################################
payload = {
"observation_id": str(uuid4()),
"data":
{
"age": 30,
"workclass": "Private",
"sex": "Female",
"race": "Amer-Indian-Eskimo",
"education": "Masters",
"marital-status": "Never-married",
"capital-gain": 0,
"capital-loss": 0,
"hours-per-week": 45,
}
}
r = requests.post(url, json=payload)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'prediction' in response, response
assert 'probability' in response, response
assert response["prediction"] in [True, False]
assert isinstance(response["probability"], float)
assert 0 <= response["probability"] <= 1
################################################
# Test missing `observation_id` produces an error
################################################
bad_payload_1 = deepcopy(payload)
bad_payload_1['random_field'] = bad_payload_1.pop('observation_id')
r = requests.post(url, json=bad_payload_1)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'error' in response, response
assert 'observation_id' in response['error'], response['error']
################################################
# Test missing `data` produces an error
################################################
bad_payload_2 = deepcopy(payload)
bad_payload_2['observation_id'] = str(uuid4())
bad_payload_2['random_field'] = bad_payload_2.pop('data')
r = requests.post(url, json=bad_payload_2)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'error' in response, response
assert 'data' in response['error'], response['error']
################################################
# Test missing columns produce an error
################################################
bad_payload_3 = deepcopy(payload)
bad_payload_3['observation_id'] = str(uuid4())
bad_payload_3['data'].pop('age')
r = requests.post(url, json=bad_payload_3)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'error' in response, response
assert 'age' in response['error'], response['error']
################################################
# Test extra columns produce an error
################################################
bad_payload_4 = deepcopy(payload)
bad_payload_4['observation_id'] = str(uuid4())
bad_payload_4['data']['relationship'] = "Wife"
r = requests.post(url, json=bad_payload_4)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'error' in response, response
assert 'relationship' in response['error'], response['error']
###########################################################################
# Test invalid values for categorical features - race
###########################################################################
bad_payload_5 = deepcopy(payload)
bad_payload_5['observation_id'] = str(uuid4())
bad_payload_5['data']['race'] = 'Engineer'
r = requests.post(url, json=bad_payload_5)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'error' in response, response
assert 'race' in response['error'], response['error']
assert 'Engineer' in response['error'], response['error']
####################################################
# Test invalid values for numerical features - age
####################################################
bad_payload_6 = deepcopy(payload)
bad_payload_6['observation_id'] = str(uuid4())
bad_payload_6['data']['age'] = -12
r = requests.post(url, json=bad_payload_6)
assert isinstance(r, requests.Response)
assert r.ok
response = r.json()
assert 'error' in response, response
assert 'age' in response['error'], response['error']
assert '-12' in response['error'], response['error']
# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-8088ad24d8aadbc8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# And... you're done. You have successfully built a model, assessed if it passed the client requirements, built an app and protected it from crappy input.
#
# It's time for a well-deserved rest, so go ahead and go be a couch potato.
#
# <br>
#
# <img src="media/lays.png" width=400 />
#
#
| S06 - DS in the Real World/BLU14 - Deployment in Real World/Exercise notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.5 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
# ## Efficient Optimization Algorithms
# Optuna adopts state-of-the-art algorithms for sampling hyperparameters and can prune unpromising trials efficiently.
# Optuna provides the following sampling algorithms:
#
# - Tree-structured Parzen Estimator algorithm implemented in :class:`optuna.samplers.TPESampler`
#
# - CMA-ES based algorithm implemented in :class:`optuna.samplers.CmaEsSampler`
#
# - Grid Search implemented in :class:`optuna.samplers.GridSampler`
#
# - Random Search implemented in :class:`optuna.samplers.RandomSampler`
#
# The default sampler is :class:`optuna.samplers.TPESampler`.
# ## Switching Samplers
import optuna
# The default sampler is TPESampler.
study = optuna.create_study()
print(f"Sampler is {study.sampler.__class__.__name__}")
# +
# To use a different sampler, pass it via the sampler option
study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
# -
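# `GridSampler` is the one sampler listed above that needs an explicit search space. A minimal sketch, assuming a single illustrative parameter `x` (the name and values are made up for this example):
#
# ```python
# search_space = {"x": [-50, 0, 50]}
# study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
# study.optimize(lambda trial: trial.suggest_float("x", -100, 100) ** 2, n_trials=3)
# ```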
# ## Pruning Algorithms
# ``Pruners`` automatically stop training early for unpromising trials (a.k.a. automated early-stopping).
# Optuna provides the following pruning algorithms:
#
# - Asynchronous Successive Halving algorithm implemented in :class:`optuna.pruners.SuccessiveHalvingPruner`
#
# - Hyperband algorithm implemented in :class:`optuna.pruners.HyperbandPruner`
#
# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`
#
# - Threshold pruning algorithm implemented in :class:`optuna.pruners.ThresholdPruner`
#
# We use :class:`optuna.pruners.MedianPruner` in most examples; its performance also compares well with the other pruning algorithms.
# ## Activating Pruners
#
# To enable pruning, ``report`` and ``should_prune`` must be called at each step during training.
# - ``optuna.trial.Trial.report``: monitors the intermediate objective values.
# - ``optuna.trial.Trial.should_prune``: ends the trial early when the predefined condition is not met.
#
# We would recommend using integration modules for major machine learning frameworks. [Github-Optuna](https://github.com/optuna/optuna-examples/)
# +
import logging
import sys
import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection
def objective(trial):
    iris = sklearn.datasets.load_iris()  # load the iris data
classes = list(set(iris.target))
train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
iris.data, iris.target, test_size = 0.25, random_state = 0
)
alpha = trial.suggest_float('alpha', 1e-5, 1e-1, log=True)
clf = sklearn.linear_model.SGDClassifier(alpha=alpha)
for step in range(100) :
clf.partial_fit(train_x, train_y, classes=classes)
# Report intermediate objective value
intermediate_value = 1.0 - clf.score(valid_x, valid_y)
trial.report(intermediate_value, step)
# Handle pruning based on the intermediate value
if trial.should_prune():
raise optuna.TrialPruned()
return 1.0 - clf.score(valid_x, valid_y)
# -
# Add stream handler of stdout to show the messages
# optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
# ## Which Sampler and Pruner Should be Used?
#
# - For `optuna.samplers.RandomSampler`, `optuna.pruners.MedianPruner` is the best.
# - For `optuna.samplers.TPESampler`, `optuna.pruners.HyperbandPruner` is the best (a short sketch of this combination follows the table below).
#
# However, note that this benchmark was not run on deep learning tasks.
#
# For deep learning tasks, consult the table below.
# This table is from the `Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020 <https://doi.org/10.14923/transinfj.2019JDR0003>`_ paper,
# which is written in Japanese.
#
# +---------------------------+-----------------------------------------+---------------------------------------------------------------+
# | Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms |
# +===========================+=========================================+===============================================================+
# | Limited | No | TPE. GP-EI if search space is low-dimensional and continuous. |
# + +-----------------------------------------+---------------------------------------------------------------+
# | | Yes | TPE. GP-EI if search space is low-dimensional and continuous |
# +---------------------------+-----------------------------------------+---------------------------------------------------------------+
# | Sufficient | No | CMA-ES, Random Search |
# + +-----------------------------------------+---------------------------------------------------------------+
# | | Yes | Random Search or Genetic Algorithm |
# +---------------------------+-----------------------------------------+---------------------------------------------------------------+
#
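# Based on the recommendation above, a minimal sketch of passing a sampler and a pruner together (illustrative only):
#
# ```python
# study = optuna.create_study(
#     sampler=optuna.samplers.TPESampler(),
#     pruner=optuna.pruners.HyperbandPruner(),
# )
# ```
#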
# ## Integration Modules for Pruning
# Optuna provides ``integration`` modules that make pruning simple to run.
#
# They can be used as follows
# -> see visualization.ipynb for the full example.
#
# ```python
# pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
# bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])
# ```
#
| optuna_tutorial/3_algorithm_with_pruning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import statements
from time import sleep
from json import dumps
from kafka import KafkaProducer
import random
import datetime as dt
# +
import csv
import json
csvfile = open('./assignment_data/hotspot_AQUA_streaming.csv', 'r')
fieldnames = ("lat","lon","confidence","surface_temp")
reader = csv.DictReader( csvfile, fieldnames)
rows = list(reader)
data = rows[1:]
# +
# import statements
from time import sleep
from json import dumps
from kafka import KafkaProducer
import random
import datetime as dt
random.seed(456)
def publish_message(producer_instance, topic_name, key, data):
try:
key_bytes = bytes(key, encoding='utf-8')
producer_instance.send(topic_name, key=key_bytes, value=data)
producer_instance.flush()
print('Message published successfully : ' + str(data))
except Exception as ex:
print('Exception in publishing message.')
print(str(ex))
def connect_kafka_producer():
_producer = None
try:
_producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
value_serializer=lambda x:dumps(x).encode('ascii'),
api_version=(0, 10))
except Exception as ex:
print('Exception while connecting Kafka.')
print(str(ex))
finally:
return _producer
if __name__ == '__main__':
topic = 'TaskC'
print('Publishing records..')
producer02 = connect_kafka_producer()
#for selecting random data without replacement
rd = random.sample(range(len(data)), len(data))
for e in range(len(data)):
        # use datetime in ISO format so it is readable in MongoDB
datetime = dt.datetime.now().replace(microsecond=0).isoformat()
stream_data = {'created_time': datetime, 'sender_id' : 2,'data' : data[rd[e]]}
publish_message(producer02, topic,'sender_2', stream_data)
interval = random.randrange(10,31)
# uncomment to see the interval
# print(interval)
sleep(interval) #stream every 10-30 seconds
# -
| Assignment_TaskC_Producer2.ipynb |
# # Convolutional Autoencoder
#
# This code trains a simple autoencoder neural network using convolutional
# layers. The contraction and expansion of the network are performed with
# convolutional layers only, so it does not rely on max-pooling or
# upsampling layers. Instead, strides are used to control the contraction
# and expansion of the network, and the decoder uses a deconvolutional
# (transposed convolution) process.
#
# For the latent space, a fully connected layer is used, followed by an
# additional fully connected layer that connects the latent space to the
# first convolutional layer of the decoder.
#
# The neural network architecture with the activation function is stated below.
# ## Preliminary steps
# Optional cell if the current path already has the data and the file utils.py
# %cd ..
# %cd ..
# +
# Import modules
import h5py
import keras.layers as layers
import numpy as np
import pandas as pd
import tensorflow as tf
from keras.callbacks import EarlyStopping
from keras.models import Model
from utils import plot_red_comp, slicer, split
# Path to store training data
save_path = "./tests/jupyter-notebooks/models/train_ae_conv_{}"
# -
# ### Data manipulation
# +
# Selecting data
dt_fl = "nn_data.h5"
dt_dst = "scaled_data"
# The percentage for the test is implicit
n_train = 0.8
n_valid = 0.1
# Select the variable to train
# 0: Temperature - 1: Pressure - 2: Velocity - None: all
var = 2
# -
# Load the selected data and then split it.
# +
# Open data file
f = h5py.File(dt_fl, "r")
dt = f[dt_dst]
# Split data file
idxs = split(dt.shape[0], n_train, n_valid)
slc_trn, slc_vld, slc_tst = slicer(dt.shape, idxs, var=var)
# Slice data
x_train = dt[slc_trn][:, :, :, np.newaxis]
x_val = dt[slc_vld][:, :, :, np.newaxis]
# Convert the var into a slice
if var:
slc = slice(var, var + 1)
else:
slc = slice(var)
# -
# ### Autoencoder settings
# +
# Activation function
act = "tanh" # Convolutional layers activation function
act_lt = "tanh" # Latent space layers activation function
# Number of filters of each layer
flt = [3, 9, 27]
# Filter size
flt_size = 5
# Strides of each layer
strd = [2, 2, 5]
# Latent space size
lt_sz = 50
# Training settings
opt = "adam" # Optimizer
loss = "mse"
epochs = 60
batch_size = 64
# -
# ### Build autoencoder
# +
# Build the autoencoder neural network
tf.keras.backend.clear_session()
flt_tp = (flt_size, flt_size)
conv_kwargs = dict(activation=act, padding="same")
# Encoder
inputs = layers.Input(shape=x_train.shape[1:])
e = layers.Conv2D(flt[0], flt_tp, strides=strd[0], **conv_kwargs)(inputs)
e = layers.Conv2D(flt[1], flt_tp, strides=strd[1], **conv_kwargs)(e)
e = layers.Conv2D(flt[2], flt_tp, strides=strd[2], **conv_kwargs)(e)
# Latent space
l = layers.Flatten()(e)
l = layers.Dense(lt_sz, activation=act_lt)(l)
# Latent to decoder
dn_flt = flt[-1]
d_shp = (x_train.shape[1:-1] / np.prod(strd)).astype(int)
d_sz = np.prod(d_shp) * dn_flt
d = layers.Dense(d_sz, activation=act_lt)(l)
d = layers.Reshape(np.hstack((d_shp, dn_flt)))(d)
# Decoder
d = layers.Conv2DTranspose(flt[-1], flt_tp, strides=strd[-1], **conv_kwargs)(d)
d = layers.Conv2DTranspose(flt[-2], flt_tp, strides=strd[-2], **conv_kwargs)(d)
d = layers.Conv2DTranspose(flt[-3], flt_tp, strides=strd[-3], **conv_kwargs)(d)
decoded = layers.Conv2DTranspose(
x_train.shape[-1], flt_tp, activation="linear", padding="same"
)(d)
# Mount the autoencoder
ae = Model(inputs, decoded, name="Convolutional Autoencoder")
# -
# Show the architecture
ae.summary()
# ## Callbacks
# Early stopping halts training when the validation loss starts to increase.
# The patience term is the number of epochs to wait before stopping. Also,
# 'restore_best_weights' is used to restore the weights of the model that
# performed best on the validation dataset. This is necessary because the
# best model on the validation dataset is not always the one from the last epoch.
# Callbacks
monitor = "val_loss"
patience = int(epochs * 0.3)
es = EarlyStopping(
monitor=monitor, mode="min", patience=patience, restore_best_weights=True
)
# ### Training
# Compile and train
ae.compile(optimizer=opt, loss=loss)
hist = ae.fit(
x_train,
x_train,
epochs=epochs,
batch_size=batch_size,
shuffle=True,
validation_data=(x_val, x_val),
callbacks=[es],
)
# ### Save trained model
# +
# Save the model
ae.save(save_path.format("model.h5"))
# Store the test dataset
x_test = dt[slc_tst][:, :, :, np.newaxis]
np.save(save_path.format("test.npy"), x_test)
# -
# ### Training process
# +
# Convert the history to a Pandas dataframe
hist_df = pd.DataFrame(hist.history)
hist_df.index.name = "Epochs"
# Save the training history
hist_df.to_hdf(save_path.format("hist.h5"), 'his')
# Plot training evolution
tit = "Training loss: {:.3f} - Validation loss: {:.3f}".format(*hist_df.min())
hist_df.plot(grid=True, title=tit)
# -
# ### Evaluate the trained model
# +
# Test the trained neural network against the test dataset
x_test = dt[slc_tst][:, :, :, np.newaxis]
loss = ae.evaluate(x_test, x_test)
print("Test dataset loss: {:.3f}".format(loss))
global_loss = ae.evaluate(dt[:, :, :, slc], dt[:, :, :, slc])
print("Entire dataset loss: {:.3f}".format(global_loss))
# +
# Comparing the input and output of the autoencoder neural network
data_index = 634
# Slice the data
dt_in = dt[data_index, :, :, slc]
# Get the neural network output
dt_out = ae.predict(dt_in[np.newaxis])
# Plot
alg = "Convolutional Autoencoder"
plot_red_comp(dt_in, dt_out[0], 0, lt_sz, global_loss, alg)
# -
| tests/jupyter-notebooks/train_ae_conv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import datapackage
# create a datapakage object
package = datapackage.Package()
package
# +
# add descriptors
package.descriptor['Title'] = 'Winemag reviews kaggle'
package.descriptor['name'] = 'winemag-raw'
package.descriptor
# -
package.infer('**/*.csv')
package.descriptor['resources'][0]
package.descriptor['resources'][0]['name'] = 'winemag'
package.descriptor
package.save('data/raw/datapackage.json')
package.save('data/raw/datapackage.zip')
| add_metadata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# python3 src/run_train.py --train annotated_images --label_names annotated_images/_labels.txt --iteration 10000 --val_iteration 1000 --loaderjob 4 --batchsize 30 --log_iteration 1000 --gpu 0
import numpy as np
if np.random.randint(2):
print('Yo')
np.random.randint(2)
a = []
if a:
print("A")
| 2018-01-12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practice Content
#
# Write the following two programs. Name the source files test1.py and test2.py.
#
# 1. Write code that implements a stack (Stack) class. A stack is a sequence into which data can be inserted and from which data can be deleted at one end only; it stores data in first-in, last-out order: data pushed first is placed at the bottom of the stack, the most recently pushed data sits on top, and when reading, data is popped starting from the top, so the last item pushed is the first one read out.
#
# ---
# Study the pickle and unittest modules and use both of them in your test1.py and test2.py programs (a short sketch is shown after section 1.3 below).
# Students who are interested can also learn the shelve module.
# ## 1. Stack class
# 
# ### 1.1 Basic concepts
# 1. The stack (Stack) and the queue (Queue) are two basic data structures, both container types. The difference between them is that a stack is last-in, first-out, while a queue is first-in, first-out.
#
# 2. Stacks and queues provide no operation for looking up the element at an arbitrary position, but their elements are stored in order.
#
# 3. In Python, a stack can be implemented with a list, because a list is a linear array and inserting or deleting an element at its end takes $O(1)$ time.
#
# 4. It can also be implemented with a linked list.
#
# ---
# ### 1.2 Basic operations
#
# 1. Initialize: init
# 2. Push onto the stack: push
# 3. Pop off the stack: pop
# 4. Check whether the stack is empty: is_empty
# 5. Get the top element of the stack: top
# 6. Get the size of the stack: size
# 7. Print the elements in the stack: elem
# 8. Clear the stack: empty
# 9. Destroy the stack: destroy
#
# ---
# ### 1.3 Implementing a stack with a list
class Stack:
def __init__(self):
self.stack = []
def push(self, value):
self.stack.append(value)
def pop(self):
if len(self.stack):
self.stack.pop()
else:
raise LookupError("Stack is empty now!")
def is_empty(self):
if len(self.stack):
return False
else:
return True
def get_top_elem(self):
return self.stack[0]
def get_size(self):
return len(self.stack)
def display_elem(self):
print(self.stack)
def empty(self):
self.stack = []
def destroy(self):
del self.stack
new_stack = Stack()
new_stack.push(1)
new_stack.push(2)
new_stack.push(3)
new_stack.pop()
new_stack.is_empty()
new_stack.get_top_elem()
new_stack.get_size()
new_stack.display_elem()
new_stack.empty()
new_stack.destroy()
new_stack
# After the stack has been destroyed, stack operations can no longer be performed
new_stack.push(1)
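# As mentioned at the top of this notebook, the assignment also asks to use the pickle and unittest modules. A minimal sketch using the Stack class defined above (the file name and test values are only illustrative):
# +
import pickle
import unittest

# Serialize a stack to a file with pickle and load it back
s = Stack()
s.push(1)
s.push(2)
with open('stack.pkl', 'wb') as f:
    pickle.dump(s, f)
with open('stack.pkl', 'rb') as f:
    loaded_stack = pickle.load(f)
loaded_stack.display_elem()

# A small unittest test case exercising push/pop and is_empty
class TestStack(unittest.TestCase):
    def test_push_and_pop(self):
        st = Stack()
        st.push(10)
        st.push(20)
        st.pop()
        self.assertEqual(st.get_size(), 1)

    def test_is_empty(self):
        self.assertTrue(Stack().is_empty())

unittest.main(argv=[''], exit=False)
# -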
# ### `->` marks the return function annotation
#
# These are function annotations covered in PEP 3107 (https://www.python.org/dev/peps/pep-3107/).
# +
def kinetic_energy(m:'in KG', v:'in M/S')->'Joules':
return 1/2*m*v**2
kinetic_energy.__annotations__
# +
def f(x: float) -> int:
return int(x)
f.__annotations__
# -
# ### 1.4 Implementing a min stack with a singly linked list
# - Reference: https://leetcode-cn.com/problems/min-stack/solution/python-lian-biao-shi-xian-zhan-by-fu-hao-tong/
# - https://leetcode-cn.com/problems/min-stack/solution/min-stack-fu-zhu-stackfa-by-jin407891080/
# +
class Node(object):
def __init__(self, value):
self.value = value
self.next = None
self.min_value = None
class MinStack:
def __init__(self):
self.value = None
self.next = None
def push(self, value: int) -> None:
p = Node(value)
p.next = self.next
self.next = p
        # when the list length is greater than 1
if p.next:
p.min_value = min(value, p.next.min_value)
else:
p.min_value = value
def pop(self) -> None:
tmp = self.next
if tmp != None:
self.next = tmp.next
return
def get_top_elem(self) -> int:
if self.next != None:
return self.next.value
return None
def get_min_elem(self) -> int:
p = self.next
return p.min_value
# -
new_stack = MinStack()
new_stack.push(1)
new_stack.push(2)
new_stack.push(3)
new_stack.pop()
new_stack.get_top_elem()
new_stack.get_min_elem()
# ### 1.5 Implementing a queue with a list
class Queue(object):
def __init__(self):
self.queue = []
def enqueue(self, value):
self.queue.append(value)
def dequeue(self):
self.queue.pop(0)
def is_empty(self):
if len(self.queue) == 0:
return True
else:
return False
def get_size(self):
return len(self.queue)
def display_elem(self):
print(self.queue)
def empty(self):
self.queue = []
def destroy(self):
del self.queue
new_queue = Queue()
new_queue.enqueue(1)
new_queue.enqueue(2)
new_queue.enqueue(3)
new_queue.dequeue()
new_queue.is_empty()
new_queue.get_size()
new_queue.display_elem()
new_queue.empty()
new_queue.destroy()
new_queue.enqueue(1)
# ### 1.6 Implementing a queue with a linked list
#
# Idea: define a singly linked list with a head object whose two pointers reference the first node and the last node of the list respectively, so that both enqueue and dequeue take $O(1)$ time. As shown in the figure:
#
# 
a1 = [5, 4, 3, 2, 1, 2, 3, 4, 5]
a1.sort()
a1
# sort() has no return value (it sorts in place)
b1 = a1.sort()
b1
a2 = [5, 4, 3, 2, 1, 2, 3, 4, 5]
sorted(a2)
# sorted() has a return value
b2 = sorted(a2)
b2
# +
class Head(object):
def __init__(self):
self.left = None
self.right = None
class Node(object):
def __init__(self, value):
self.value = value
self.next = None
class Queue(object):
def __init__(self):
        # initialize the head node
self.head = Head()
def enqueue(self, value):
        # create a node for the element being inserted
new_node = Node(value)
        # define the head pointer
p = self.head
        # if the node after the head is not None, the list already contains elements
if p.right:
temp = p.right
p.right = new_node
temp.next = new_node
        # if it is None, the list is empty and only the head node exists
else:
p.right = new_node
p.left = new_node
def dequeue(self):
p = self.head
        # only one element
if p.left and (p.left == p.right):
temp = p.left
p.left = p.right = None
return temp.value
        # more than one element
elif p.left and (p.left != p.right):
            # the element that was enqueued first
temp = p.left
p.left = temp.next
return temp.value
else:
raise LookupError('Queue is empty now!')
def is_empty(self):
if self.head.left:
return False
else:
return True
def top(self):
if self.head.left:
return self.head.left.value
else:
raise LookupError('Queue is empty now!')
# -
new_queue = Queue()
new_queue.enqueue(1)
new_queue.enqueue(2)
new_queue.enqueue(3)
new_queue.dequeue()
new_queue.is_empty()
| Homework/HW 03/Practice 03. Queue and Stack (LinkList).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploring Data with Python
#
# A significant part of a data scientist's role is to explore, analyze, and visualize data. There's a wide range of tools and programming languages that they can use to do this, and one of the most popular approaches is to use Jupyter notebooks (like this one) and Python.
#
# Python is a flexible programming language that is used in a wide range of scenarios; from web applications to device programming. It's extremely popular in the data science and machine learning community because of the many packages it supports for data analysis and visualization.
#
# In this notebook, we'll explore some of these packages, and apply basic techniques to analyze data. This is not intended to be a comprehensive Python programming exercise, or even a deep dive into data analysis. Rather, it's intended as a crash course in some of the common ways in which data scientists can use Python to work with data.
#
# > **Note**: If you've never used the Jupyter Notebooks environment before, there are a few things you should be aware of:
# >
# > - Notebooks are made up of *cells*. Some cells (like this one) contain *markdown* text, while others (like the one beneath this one) contain code.
# > - The notebook is connected to a Python *kernel* (you can see which one at the top right of the page - if you're running this notebook in an Azure Machine Learning compute instance it should be connected to the **Python 3.6 - AzureML** kernel). If you stop the kernel or disconnect from the server (for example, by closing and reopening the notebook, or ending and resuming your session), the output from cells that have been run will still be displayed; but any variables or functions defined in those cells will have been lost - you must rerun the cells before running any subsequent cells that depend on them.
# > - You can run each code cell by using the **► Run** button. The **◯** symbol next to the kernel name at the top right will briefly turn to **⚫** while the cell runs before turning back to **◯**.
# > - The output from each code cell will be displayed immediately below the cell.
# > - Even though the code cells can be run individually, some variables used in the code are global to the notebook. That means that you should run all of the code cells <u>**in order**</u>. There may be dependencies between code cells, so if you skip a cell, subsequent cells might not run correctly.
#
#
# ## Exploring data arrays with NumPy
#
# Let's start by looking at some simple data.
#
# Suppose a college takes a sample of student grades for a data science class.
#
# Run the code in the cell below by clicking the **► Run** button to see the data.
# + tags=[]
data = [50,50,47,97,49,3,53,42,26,74,82,62,37,15,70,27,36,35,48,52,63,64]
print(data)
# -
# The data has been loaded into a Python **list** structure, which is a good data type for general data manipulation, but not optimized for numeric analysis. For that, we're going to use the **NumPy** package, which includes specific data types and functions for working with *Num*bers in *Py*thon.
#
# Run the cell below to load the data into a NumPy **array**.
# + tags=[]
import numpy as np
grades = np.array(data)
print(grades)
# -
# Just in case you're wondering about the differences between a **list** and a NumPy **array**, let's compare how these data types behave when we use them in an expression that multiplies them by 2.
# + tags=[]
print (type(data),'x 2:', data * 2)
print('---')
print (type(grades),'x 2:', grades * 2)
# -
# Note that multiplying a list by 2 creates a new list of twice the length with the original sequence of list elements repeated. Multiplying a NumPy array on the other hand performs an element-wise calculation in which the array behaves like a *vector*, so we end up with an array of the same size in which each element has been multiplied by 2.
#
# The key takeaway from this is that NumPy arrays are specifically designed to support mathematical operations on numeric data - which makes them more useful for data analysis than a generic list.
#
# You might have spotted that the class type for the numpy array above is a **numpy.ndarray**. The **nd** indicates that this is a structure that can consist of multiple *dimensions* (it can have *n* dimensions). Our specific instance has a single dimension of student grades.
#
# Run the cell below to view the **shape** of the array.
grades.shape
# The shape confirms that this array has only one dimension, which contains 22 elements (there are 22 grades in the original list). You can access the individual elements in the array by their zero-based ordinal position. Let's get the first element (the one in position 0).
grades[0]
# Alright, now you know your way around a NumPy array, it's time to perform some analysis of the grades data.
#
# You can apply aggregations across the elements in the array, so let's find the simple average grade (in other words, the *mean* grade value).
grades.mean()
# So the mean grade is just around 50 - more or less in the middle of the possible range from 0 to 100.
#
# Let's add a second set of data for the same students, this time recording the typical number of hours per week they devoted to studying.
# +
# Define an array of study hours
study_hours = [10.0,11.5,9.0,16.0,9.25,1.0,11.5,9.0,8.5,14.5,15.5,
13.75,9.0,8.0,15.5,8.0,9.0,6.0,10.0,12.0,12.5,12.0]
# Create a 2D array (an array of arrays)
student_data = np.array([study_hours, grades])
# display the array
student_data
# -
# Now the data consists of a 2-dimensional array - an array of arrays. Let's look at its shape.
# Show shape of 2D array
student_data.shape
# The **student_data** array contains two elements, each of which is an array containing 22 elements.
#
# To navigate this structure, you need to specify the position of each element in the hierarchy. So to find the first value in the first array (which contains the study hours data), you can use the following code.
# Show the first element of the first element
student_data[0][0]
# Now you have a multidimensional array containing both the student's study time and grade information, which you can use to compare data. For example, how does the mean study time compare to the mean grade?
# + tags=[]
# Get the mean value of each sub-array
avg_study = student_data[0].mean()
avg_grade = student_data[1].mean()
print('Average study hours: {:.2f}\nAverage grade: {:.2f}'.format(avg_study, avg_grade))
# -
# ## Exploring tabular data with Pandas
#
# While NumPy provides a lot of the functionality you need to work with numbers, and specifically arrays of numeric values; when you start to deal with two-dimensional tables of data, the **Pandas** package offers a more convenient structure to work with - the **DataFrame**.
#
# Run the following cell to import the Pandas library and create a DataFrame with three columns. The first column is a list of student names, and the second and third columns are the NumPy arrays containing the study time and grade data.
# +
import pandas as pd
df_students = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie',
'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny',
'Jakeem','Helena','Ismat','Anila','Skye','Daniel','Aisha'],
'StudyHours':student_data[0],
'Grade':student_data[1]})
df_students
# -
# Note that in addition to the columns you specified, the DataFrame includes an *index* to uniquely identify each row. We could have specified the index explicitly, and assigned any kind of appropriate value (for example, an email address); but because we didn't specify an index, one has been created with a unique integer value for each row.
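# For example, a minimal sketch of constructing a DataFrame with an explicit index (the email addresses below are made up purely for illustration):
df_indexed = pd.DataFrame({'Grade': [50, 50, 47]},
                          index=['dan@contoso.com', 'joann@contoso.com', 'pedro@contoso.com'])
df_indexed.loc['dan@contoso.com']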
#
# ### Finding and filtering data in a DataFrame
#
# You can use the DataFrame's **loc** method to retrieve data for a specific index value, like this.
# + tags=[]
# Get the data for index value 5
df_students.loc[5]
# -
# You can also get the data at a range of index values, like this:
# Get the rows with index values from 0 to 5
df_students.loc[0:5]
# In addition to being able to use the **loc** method to find rows based on the index, you can use the **iloc** method to find rows based on their ordinal position in the DataFrame (regardless of the index):
# Get data in the first five rows
df_students.iloc[0:5]
# Look carefully at the `iloc[0:5]` results, and compare them to the `loc[0:5]` results you obtained previously. Can you spot the difference?
#
# The **loc** method returned rows with index *label* in the list of values from *0* to *5* - which includes *0*, *1*, *2*, *3*, *4*, and *5* (six rows). However, the **iloc** method returns the rows in the *positions* included in the range 0 to 5, and since integer ranges don't include the upper-bound value, this includes positions *0*, *1*, *2*, *3*, and *4* (five rows).
#
# **iloc** identifies data values in a DataFrame by *position*, which extends beyond rows to columns. So for example, you can use it to find the values for the columns in positions 1 and 2 in row 0, like this:
df_students.iloc[0,[1,2]]
# Let's return to the **loc** method, and see how it works with columns. Remember that **loc** is used to locate data items based on index values rather than positions. In the absence of an explicit index column, the rows in our dataframe are indexed as integer values, but the columns are identified by name:
df_students.loc[0,'Grade']
# Here's another useful trick. You can use the **loc** method to find indexed rows based on a filtering expression that references named columns other than the index, like this:
df_students.loc[df_students['Name']=='Aisha']
# Actually, you don't need to explicitly use the **loc** method to do this - you can simply apply a DataFrame filtering expression, like this:
df_students[df_students['Name']=='Aisha']
# And for good measure, you can achieve the same results by using the DataFrame's **query** method, like this:
df_students.query('Name=="Aisha"')
# The three previous examples underline an occasionally confusing truth about working with Pandas. Often, there are multiple ways to achieve the same results. Another example of this is the way you refer to a DataFrame column name. You can specify the column name as a named index value (as in the `df_students['Name']` examples we've seen so far), or you can use the column as a property of the DataFrame, like this:
df_students[df_students.Name == 'Aisha']
# ### Loading a DataFrame from a file
#
# We constructed the DataFrame from some existing arrays. However, in many real-world scenarios, data is loaded from sources such as files. Let's replace the student grades DataFrame with the contents of a text file.
df_students = pd.read_csv('data/grades.csv',delimiter=',',header='infer')
df_students.head()
# The DataFrame's **read_csv** method is used to load data from text files. As you can see in the example code, you can specify options such as the column delimiter and which row (if any) contains column headers (in this case, the delimiter is a comma and the first row contains the column names - these are the default settings, so the parameters could have been omitted).
#
#
# ### Handling missing values
#
# One of the most common issues data scientists need to deal with is incomplete or missing data. So how would we know that the DataFrame contains missing values? You can use the **isnull** method to identify which individual values are null, like this:
df_students.isnull()
# Of course, with a larger DataFrame, it would be inefficient to review all of the rows and columns individually; so we can get the sum of missing values for each column, like this:
df_students.isnull().sum()
# So now we know that there's one missing **StudyHours** value, and two missing **Grade** values.
#
# To see them in context, we can filter the dataframe to include only rows where any of the columns (axis 1 of the DataFrame) are null.
df_students[df_students.isnull().any(axis=1)]
# When the DataFrame is retrieved, the missing numeric values show up as **NaN** (*not a number*).
#
# So now that we've found the null values, what can we do about them?
#
# One common approach is to *impute* replacement values. For example, if the number of study hours is missing, we could just assume that the student studied for an average amount of time and replace the missing value with the mean study hours. To do this, we can use the **fillna** method, like this:
df_students.StudyHours = df_students.StudyHours.fillna(df_students.StudyHours.mean())
df_students
# Alternatively, it might be important to ensure that you only use data you know to be absolutely correct; so you can drop rows or columns that contains null values by using the **dropna** method. In this case, we'll remove rows (axis 0 of the DataFrame) where any of the columns contain null values.
df_students = df_students.dropna(axis=0, how='any')
df_students
# ### Explore data in the DataFrame
#
# Now that we've cleaned up the missing values, we're ready to explore the data in the DataFrame. Let's start by comparing the mean study hours and grades.
# + tags=[]
# Get the mean study hours using the column name as an index
mean_study = df_students['StudyHours'].mean()
# Get the mean grade using the column name as a property (just to make the point!)
mean_grade = df_students.Grade.mean()
# Print the mean study hours and mean grade
print('Average weekly study hours: {:.2f}\nAverage grade: {:.2f}'.format(mean_study, mean_grade))
# -
# OK, let's filter the DataFrame to find only the students who studied for more than the average amount of time.
# Get students who studied for the mean or more hours
df_students[df_students.StudyHours > mean_study]
# Note that the filtered result is itself a DataFrame, so you can work with its columns just like any other DataFrame.
#
# For example, let's find the average grade for students who undertook more than the average amount of study time.
# What was their mean grade?
df_students[df_students.StudyHours > mean_study].Grade.mean()
# Let's assume that the passing grade for the course is 60.
#
# We can use that information to add a new column to the DataFrame, indicating whether or not each student passed.
#
# First, we'll create a Pandas **Series** containing the pass/fail indicator (True or False), and then we'll concatenate that series as a new column (axis 1) in the DataFrame.
# +
passes = pd.Series(df_students['Grade'] >= 60)
df_students = pd.concat([df_students, passes.rename("Pass")], axis=1)
df_students
# -
# DataFrames are designed for tabular data, and you can use them to perform many of the kinds of data analytics operations you can do in a relational database; such as grouping and aggregating tables of data.
#
# For example, you can use the **groupby** method to group the student data into groups based on the **Pass** column you added previously, and count the number of names in each group - in other words, you can determine how many students passed and failed.
print(df_students.groupby(df_students.Pass).Name.count())
# You can aggregate multiple fields in a group using any available aggregation function. For example, you can find the mean study time and grade for the groups of students who passed and failed the course.
print(df_students.groupby(df_students.Pass)[['StudyHours', 'Grade']].mean())
# DataFrames are amazingly versatile, and make it easy to manipulate data. Many DataFrame operations return a new copy of the DataFrame; so if you want to modify a DataFrame but keep the existing variable, you need to assign the result of the operation to the existing variable. For example, the following code sorts the student data into descending order of Grade, and assigns the resulting sorted DataFrame to the original **df_students** variable.
# +
# Create a DataFrame with the data sorted by Grade (descending)
df_students = df_students.sort_values('Grade', ascending=False)
# Show the DataFrame
df_students
# -
# ## Visualizing data with Matplotlib
#
# DataFrames provide a great way to explore and analyze tabular data, but sometimes a picture is worth a thousand rows and columns. The **Matplotlib** library provides the foundation for plotting data visualizations that can greatly enhance your ability to analyze the data.
#
# Let's start with a simple bar chart that shows the grade of each student.
# +
# Ensure plots are displayed inline in the notebook
# %matplotlib inline
from matplotlib import pyplot as plt
# Create a bar plot of name vs grade
plt.bar(x=df_students.Name, height=df_students.Grade)
# Display the plot
plt.show()
# -
# Well, that worked; but the chart could use some improvements to make it clearer what we're looking at.
#
# Note that you used the **pyplot** class from Matplotlib to plot the chart. This class provides a whole bunch of ways to improve the visual elements of the plot. For example, the following code:
#
# - Specifies the color of the bar chart.
# - Adds a title to the chart (so we know what it represents)
# - Adds labels to the X and Y (so we know which axis shows which data)
# - Adds a grid (to make it easier to determine the values for the bars)
# - Rotates the X markers (so we can read them)
# +
# Create a bar plot of name vs grade
plt.bar(x=df_students.Name, height=df_students.Grade, color='orange')
# Customize the chart
plt.title('Student Grades')
plt.xlabel('Student')
plt.ylabel('Grade')
plt.grid(color='#95a5a6', linestyle='--', linewidth=2, axis='y', alpha=0.7)
plt.xticks(rotation=90)
# Display the plot
plt.show()
# -
# A plot is technically contained with a **Figure**. In the previous examples, the figure was created implicitly for you; but you can create it explicitly. For example, the following code creates a figure with a specific size.
# +
# Create a Figure
fig = plt.figure(figsize=(8,3))
# Create a bar plot of name vs grade
plt.bar(x=df_students.Name, height=df_students.Grade, color='orange')
# Customize the chart
plt.title('Student Grades')
plt.xlabel('Student')
plt.ylabel('Grade')
plt.grid(color='#95a5a6', linestyle='--', linewidth=2, axis='y', alpha=0.7)
plt.xticks(rotation=90)
# Show the figure
plt.show()
# -
# A figure can contain multiple subplots, each on its own *axis*.
#
# For example, the following code creates a figure with two subplots - one is a bar chart showing student grades, and the other is a pie chart comparing the number of passing grades to non-passing grades.
# +
# Create a figure for 2 subplots (1 row, 2 columns)
fig, ax = plt.subplots(1, 2, figsize = (10,4))
# Create a bar plot of name vs grade on the first axis
ax[0].bar(x=df_students.Name, height=df_students.Grade, color='orange')
ax[0].set_title('Grades')
ax[0].set_xticklabels(df_students.Name, rotation=90)
# Create a pie chart of pass counts on the second axis
pass_counts = df_students['Pass'].value_counts()
ax[1].pie(pass_counts, labels=pass_counts)
ax[1].set_title('Passing Grades')
ax[1].legend(pass_counts.keys().tolist())
# Add a title to the Figure
fig.suptitle('Student Data')
# Show the figure
fig.show()
# -
# Until now, you've used methods of the Matplotlib.pyplot object to plot charts. However, Matplotlib is so foundational to graphics in Python that many packages, including Pandas, provide methods that abstract the underlying Matplotlib functions and simplify plotting. For example, the DataFrame provides its own methods for plotting data, as shown in the following example to plot a bar chart of study hours.
df_students.plot.bar(x='Name', y='StudyHours', color='teal', figsize=(6,4))
# ## Getting started with statistical analysis
#
# Now that you know how to use Python to manipulate and visualize data, you can start analyzing it.
#
# A lot of data science is rooted in *statistics*, so we'll explore some basic statistical techniques.
#
# > **Note**: This is not intended to teach you statistics - that's much too big a topic for this notebook. It will however introduce you to some statistical concepts and techniques that data scientists use as they explore data in preparation for machine learning modeling.
#
# ### Descriptive statistics and data distribution
#
# When examining a *variable* (for example a sample of student grades), data scientists are particularly interested in its *distribution* (in other words, how are all the different grade values spread across the sample). The starting point for this exploration is often to visualize the data as a histogram, and see how frequently each value for the variable occurs.
#
#
#
#
#
# +
# Get the variable to examine
var_data = df_students['Grade']
# Create a Figure
fig = plt.figure(figsize=(10,4))
# Plot a histogram
plt.hist(var_data)
# Add titles and labels
plt.title('Data Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
# Show the figure
fig.show()
# -
# The histogram for grades is a symmetric shape, where the most frequently occurring grades tend to be in the middle of the range (around 50), with fewer grades at the extreme ends of the scale.
#
# #### Measures of central tendency
#
# To understand the distribution better, we can examine so-called *measures of central tendency*; which is a fancy way of describing statistics that represent the "middle" of the data. The goal of this is to try to find a "typical" value. Common ways to define the middle of the data include:
#
# - The *mean*: A simple average based on adding together all of the values in the sample set, and then dividing the total by the number of samples.
# - The *median*: The value in the middle of the range of all of the sample values.
# - The *mode*: The most commonly occuring value in the sample set<sup>\*</sup>.
#
# Let's calculate these values, along with the minimum and maximum values for comparison, and show them on the histogram.
#
# > <sup>\*</sup>Of course, in some sample sets, there may be a tie for the most common value - in which case the dataset is described as *bimodal* or even *multimodal*.
# + tags=[]
# Get the variable to examine
var = df_students['Grade']
# Get statistics
min_val = var.min()
max_val = var.max()
mean_val = var.mean()
med_val = var.median()
mod_val = var.mode()[0]
print('Minimum:{:.2f}\nMean:{:.2f}\nMedian:{:.2f}\nMode:{:.2f}\nMaximum:{:.2f}\n'.format(min_val,
mean_val,
med_val,
mod_val,
max_val))
# Create a Figure
fig = plt.figure(figsize=(10,4))
# Plot a histogram
plt.hist(var)
# Add lines for the statistics
plt.axvline(x=min_val, color = 'gray', linestyle='dashed', linewidth = 2)
plt.axvline(x=mean_val, color = 'cyan', linestyle='dashed', linewidth = 2)
plt.axvline(x=med_val, color = 'red', linestyle='dashed', linewidth = 2)
plt.axvline(x=mod_val, color = 'yellow', linestyle='dashed', linewidth = 2)
plt.axvline(x=max_val, color = 'gray', linestyle='dashed', linewidth = 2)
# Add titles and labels
plt.title('Data Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
# Show the figure
fig.show()
# -
# For the grade data, the mean, median, and mode all seem to be more or less in the middle of the minimum and maximum, at around 50.
#
# Another way to visualize the distribution of a variable is to use a *box* plot (sometimes called a *box-and-whiskers* plot). Let's create one for the grade data.
# +
# Get the variable to examine
var = df_students['Grade']
# Create a Figure
fig = plt.figure(figsize=(10,4))
# Plot a histogram
plt.boxplot(var)
# Add titles and labels
plt.title('Data Distribution')
# Show the figure
fig.show()
# -
# The box plot shows the distribution of the grade values in a different format to the histogram. The *box* part of the plot shows where the inner two *quartiles* of the data reside - so in this case, half of the grades are between approximately 36 and 63. The *whiskers* extending from the box show the outer two quartiles; so the other half of the grades in this case are between 0 and 36 or 63 and 100. The line in the box indicates the *median* value.
#
# It's often useful to combine histograms and box plots, with the box plot's orientation changed to align it with the histogram (in some ways, it can be helpful to think of the histogram as a "front elevation" view of the distribution, and the box plot as a "plan" view of the distribution from above.)
# + tags=[]
# Create a function that we can re-use
def show_distribution(var_data):
from matplotlib import pyplot as plt
# Get statistics
min_val = var_data.min()
max_val = var_data.max()
mean_val = var_data.mean()
med_val = var_data.median()
mod_val = var_data.mode()[0]
print('Minimum:{:.2f}\nMean:{:.2f}\nMedian:{:.2f}\nMode:{:.2f}\nMaximum:{:.2f}\n'.format(min_val,
mean_val,
med_val,
mod_val,
max_val))
# Create a figure for 2 subplots (2 rows, 1 column)
fig, ax = plt.subplots(2, 1, figsize = (10,4))
# Plot the histogram
ax[0].hist(var_data)
ax[0].set_ylabel('Frequency')
# Add lines for the mean, median, and mode
ax[0].axvline(x=min_val, color = 'gray', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=mean_val, color = 'cyan', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=med_val, color = 'red', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=mod_val, color = 'yellow', linestyle='dashed', linewidth = 2)
ax[0].axvline(x=max_val, color = 'gray', linestyle='dashed', linewidth = 2)
# Plot the boxplot
ax[1].boxplot(var_data, vert=False)
ax[1].set_xlabel('Value')
# Add a title to the Figure
fig.suptitle('Data Distribution')
# Show the figure
fig.show()
# Get the variable to examine
col = df_students['Grade']
# Call the function
show_distribution(col)
# -
# All of the measurements of central tendency are right in the middle of the data distribution, which is symmetric with values becoming progressively lower in both directions from the middle.
#
# To explore this distribution in more detail, you need to understand that statistics is fundamentally about taking *samples* of data and using probability functions to extrapolate information about the full *population* of data. For example, the student data consists of 22 samples, and for each sample there is a grade value. You can think of each sample grade as a variable that's been randomly selected from the set of all grades awarded for this course. With enough of these random variables, you can calculate something called a *probability density function*, which estimates the distribution of grades for the full population.
#
# The Pandas DataFrame class provides a helpful plot function to show this density.
# +
def show_density(var_data):
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(10,4))
# Plot density
var_data.plot.density()
# Add titles and labels
plt.title('Data Density')
# Show the mean, median, and mode
plt.axvline(x=var_data.mean(), color = 'cyan', linestyle='dashed', linewidth = 2)
plt.axvline(x=var_data.median(), color = 'red', linestyle='dashed', linewidth = 2)
plt.axvline(x=var_data.mode()[0], color = 'yellow', linestyle='dashed', linewidth = 2)
# Show the figure
plt.show()
# Get the density of Grade
col = df_students['Grade']
show_density(col)
# -
# As expected from the histogram of the sample, the density shows the characteristic "bell curve" of what statisticians call a *normal* distribution with the mean and mode at the center and symmetric tails.
#
# Now let's take a look at the distribution of the study hours data.
# + tags=[]
# Get the variable to examine
col = df_students['StudyHours']
# Call the function
show_distribution(col)
# -
# The distribution of the study time data is significantly different from that of the grades.
#
# Note that the whiskers of the box plot only extend to around 6.0, indicating that the vast majority of the first quarter of the data is above this value. The minimum is marked with an **o**, indicating that it is statistically an *outlier* - a value that lies significantly outside the range of the rest of the distribution.
#
# Outliers can occur for many reasons. Maybe a student meant to record "10" hours of study time, but entered "1" and missed the "0". Or maybe the student was abnormally lazy when it comes to studying! Either way, it's a statistical anomaly that doesn't represent a typical student. Let's see what the distribution looks like without it.
# + tags=[]
# Get the variable to examine
col = df_students[df_students.StudyHours>1]['StudyHours']
# Call the function
show_distribution(col)
# -
# In this example, the dataset is small enough to clearly see that the value **1** is an outlier for the **StudyHours** column, so you can exclude it explicitly. In most real-world cases, it's easier to consider outliers as being values that fall below or above percentiles within which most of the data lie. For example, the following code uses the Pandas **quantile** function to exclude observations below the 0.01th percentile (the value above which 99% of the data reside).
q01 = df_students.StudyHours.quantile(0.01)
# Get the variable to examine
col = df_students[df_students.StudyHours>q01]['StudyHours']
# Call the function
show_distribution(col)
# > **Tip**: You can also eliminate outliers at the upper end of the distribution by defining a threshold at a high percentile value - for example, you could use the **quantile** function to find the 0.99 percentile below which 99% of the data reside.
#
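# For instance, a minimal sketch of trimming at the upper end in the same way (shown here for illustration only; it isn't applied to the rest of the notebook):
q99 = df_students.StudyHours.quantile(0.99)
df_students[df_students.StudyHours < q99]['StudyHours'].describe()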
# With the outliers removed, the box plot shows all data within the four quartiles. Note that the distribution is not symmetric like it is for the grade data though - there are some students with very high study times of around 16 hours, but the bulk of the data is between 7 and 13 hours; the few extremely high values pull the mean towards the higher end of the scale.
#
# Let's look at the density for this distribution.
# Get the density of StudyHours
show_density(col)
# This kind of distribution is called *right skewed*. The mass of the data is on the left side of the distribution, creating a long tail to the right because of the values at the extreme high end; which pull the mean to the right.
#
# #### Measures of variance
#
# So now we have a good idea where the middle of the grade and study hours data distributions are. However, there's another aspect of the distributions we should examine: how much variability is there in the data?
#
# Typical statistics that measure variability in the data include:
#
# - **Range**: The difference between the maximum and minimum. There's no built-in function for this, but it's easy to calculate using the **min** and **max** functions.
# - **Variance**: The average of the squared difference from the mean. You can use the built-in **var** function to find this.
# - **Standard Deviation**: The square root of the variance. You can use the built-in **std** function to find this.
# + tags=[]
for col_name in ['Grade','StudyHours']:
col = df_students[col_name]
rng = col.max() - col.min()
var = col.var()
std = col.std()
print('\n{}:\n - Range: {:.2f}\n - Variance: {:.2f}\n - Std.Dev: {:.2f}'.format(col_name, rng, var, std))
# -
# Of these statistics, the standard deviation is generally the most useful. It provides a measure of variance in the data on the same scale as the data itself (so grade points for the Grade distribution and hours for the StudyHours distribution). The higher the standard deviation, the more variance there is when comparing values in the distribution to the distribution mean - in other words, the data is more spread out.
#
# When working with a *normal* distribution, the standard deviation works with the particular characteristics of a normal distribution to provide even greater insight. Run the cell below to see the relationship between standard deviations and the data in the normal distribution.
# +
import scipy.stats as stats
# Get the Grade column
col = df_students['Grade']
# get the density
density = stats.gaussian_kde(col)
# Plot the density
col.plot.density()
# Get the mean and standard deviation
s = col.std()
m = col.mean()
# Annotate 1 stdev
x1 = [m-s, m+s]
y1 = density(x1)
plt.plot(x1,y1, color='magenta')
plt.annotate('1 std (68.26%)', (x1[1],y1[1]))
# Annotate 2 stdevs
x2 = [m-(s*2), m+(s*2)]
y2 = density(x2)
plt.plot(x2,y2, color='green')
plt.annotate('2 std (95.45%)', (x2[1],y2[1]))
# Annotate 3 stdevs
x3 = [m-(s*3), m+(s*3)]
y3 = density(x3)
plt.plot(x3,y3, color='orange')
plt.annotate('3 std (99.73%)', (x3[1],y3[1]))
# Show the location of the mean
plt.axvline(col.mean(), color='cyan', linestyle='dashed', linewidth=1)
plt.axis('off')
plt.show()
# -
# The horizontal lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).
#
# In any normal distribution:
# - Approximately 68.26% of values fall within one standard deviation from the mean.
# - Approximately 95.45% of values fall within two standard deviations from the mean.
# - Approximately 99.73% of values fall within three standard deviations from the mean.
#
# So, since we know that the mean grade is 49.18, the standard deviation is 21.74, and distribution of grades is approximately normal; we can calculate that 68.26% of students should achieve a grade between 27.44 and 70.92.
#
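# A quick check of that range, computed directly from the Grade column in this notebook:
grade_col = df_students['Grade']
print('68.26% of grades should fall between {:.2f} and {:.2f}'.format(grade_col.mean() - grade_col.std(),
                                                                      grade_col.mean() + grade_col.std()))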
# The descriptive statistics we've used to understand the distribution of the student data variables are the basis of statistical analysis; and because they're such an important part of exploring your data, there's a built-in **Describe** method of the DataFrame object that returns the main descriptive statistics for all numeric columns.
df_students.describe()
# ## Comparing data
#
# Now that you know something about the statistical distribution of the data in your dataset, you're ready to examine your data to identify any apparent relationships between variables.
#
# First of all, let's get rid of any rows that contain outliers so that we have a sample that is representative of a typical class of students. We identified that the StudyHours column contains some outliers with extremely low values, so we'll remove those rows.
df_sample = df_students[df_students['StudyHours']>1]
df_sample
# ### Comparing numeric and categorical variables
#
# The data includes two *numeric* variables (**StudyHours** and **Grade**) and two *categorical* variables (**Name** and **Pass**). Let's start by comparing the numeric **StudyHours** column to the categorical **Pass** column to see if there's an apparent relationship between the number of hours studied and a passing grade.
#
# To make this comparison, let's create box plots showing the distribution of StudyHours for each possible Pass value (true and false).
df_sample.boxplot(column='StudyHours', by='Pass', figsize=(8,5))
# Comparing the StudyHours distributions, it's immediately apparent (if not particularly surprising) that students who passed the course tended to study for more hours than students who didn't. So if you wanted to predict whether or not a student is likely to pass the course, the amount of time they spend studying may be a good predictive feature.
#
# ### Comparing numeric variables
#
# Now let's compare two numeric variables. We'll start by creating a bar chart that shows both grade and study hours.
# Create a bar plot of name vs grade and study hours
df_sample.plot(x='Name', y=['Grade','StudyHours'], kind='bar', figsize=(8,5))
# The chart shows bars for both grade and study hours for each student; but it's not easy to compare because the values are on different scales. Grades are measured in grade points, and range from 3 to 97; while study time is measured in hours and ranges from 1 to 16.
#
# A common technique when dealing with numeric data in different scales is to *normalize* the data so that the values retain their proportional distribution, but are measured on the same scale. To accomplish this, we'll use a technique called *MinMax* scaling that distributes the values proportionally on a scale of 0 to 1. You could write the code to apply this transformation; but the **Scikit-Learn** library provides a scaler to do it for you.
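# As an aside, the MinMax transformation itself is just (x - min) / (max - min); a minimal sketch applied by hand to the Grade column, before using the Scikit-Learn scaler below:
grade = df_sample['Grade']
grade_minmax = (grade - grade.min()) / (grade.max() - grade.min())
grade_minmax.head()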
# +
from sklearn.preprocessing import MinMaxScaler
# Get a scaler object
scaler = MinMaxScaler()
# Create a new dataframe for the scaled values
df_normalized = df_sample[['Name', 'Grade', 'StudyHours']].copy()
# Normalize the numeric columns
df_normalized[['Grade','StudyHours']] = scaler.fit_transform(df_normalized[['Grade','StudyHours']])
# Plot the normalized values
df_normalized.plot(x='Name', y=['Grade','StudyHours'], kind='bar', figsize=(8,5))
# -
# With the data normalized, it's easier to see an apparent relationship between grade and study time. It's not an exact match, but it definitely seems like students with higher grades tend to have studied more.
#
# So there seems to be a correlation between study time and grade; and in fact, there's a statistical *correlation* measurement we can use to quantify the relationship between these columns.
df_normalized.Grade.corr(df_normalized.StudyHours)
# The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other). In this case, the correlation value is close to 1; showing a strongly positive correlation between study time and grade.
#
# > **Note**: Data scientists often quote the maxim "*correlation* is not *causation*". In other words, as tempting as it might be, you shouldn't interpret the statistical correlation as explaining *why* one of the values is high. In the case of the student data, the statistics demonstrates that students with high grades tend to also have high amounts of study time; but this is not the same as proving that they achieved high grades *because* they studied a lot. The statistic could equally be used as evidence to support the nonsensical conclusion that the students studied a lot *because* their grades were going to be high.
#
# Another way to visualise the apparent correlation between two numeric columns is to use a *scatter* plot.
# Create a scatter plot
df_sample.plot.scatter(title='Study Time vs Grade', x='StudyHours', y='Grade')
# Again, it looks like there's a discernible pattern in which the students who studied the most hours are also the students who got the highest grades.
#
# We can see this more clearly by adding a *regression* line (or a *line of best fit*) to the plot that shows the general trend in the data. To do this, we'll use a statistical technique called *least squares regression*.
#
# > **Warning - Math Ahead!**
# >
# > Cast your mind back to when you were learning how to solve linear equations in school, and recall that the *slope-intercept* form of a linear equation looks like this:
# >
# > \begin{equation}y = mx + b\end{equation}
# >
# > In this equation, *y* and *x* are the coordinate variables, *m* is the slope of the line, and *b* is the y-intercept (where the line goes through the Y-axis).
# >
# > In the case of our scatter plot for our student data, we already have our values for *x* (*StudyHours*) and *y* (*Grade*), so we just need to calculate the intercept and slope of the straight line that lies closest to those points. Then we can form a linear equation that calculates a new *y* value on that line for each of our *x* (*StudyHours*) values - to avoid confusion, we'll call this new *y* value *f(x)* (because it's the output from a linear equation ***f***unction based on *x*). The difference between the original *y* (*Grade*) value and the *f(x)* value is the *error* between our regression line and the actual *Grade* achieved by the student. Our goal is to calculate the slope and intercept for a line with the lowest overall error.
# >
# > Specifically, we define the overall error by taking the error for each point, squaring it, and adding all the squared errors together. The line of best fit is the line that gives us the lowest value for the sum of the squared errors - hence the name *least squares regression*.
#
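# As a minimal illustration of that definition, the sketch below computes the sum of squared errors for a handful of hypothetical points against a hypothetical candidate line (these numbers are made up for illustration, not taken from the student data):
# +
import numpy as np
x_pts = np.array([1, 4, 8, 12, 16])                 # hypothetical study hours
y_pts = np.array([20, 40, 60, 75, 95])              # hypothetical grades
m_c, b_c = 4.5, 15.0                                # hypothetical slope and intercept of a candidate line
f_x = m_c * x_pts + b_c                             # f(x) values on the candidate line
sse = np.sum((y_pts - f_x) ** 2)                    # sum of squared errors for this candidate line
print('Sum of squared errors: {:.2f}'.format(sse))
# -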
# Fortunately, you don't need to code the regression calculation yourself - the **SciPy** package includes a **stats** module that provides a **linregress** function to do the hard work for you. This returns (among other things) the coefficients you need for the slope-intercept equation - slope (*m*) and intercept (*b*) - based on a given pair of variable samples you want to compare.
# + tags=[]
from scipy import stats
#
df_regression = df_sample[['Grade', 'StudyHours']].copy()
# Get the regression slope and intercept
m, b, r, p, se = stats.linregress(df_regression['StudyHours'], df_regression['Grade'])
print('slope: {:.4f}\ny-intercept: {:.4f}'.format(m,b))
print('so...\n f(x) = {:.4f}x + {:.4f}'.format(m,b))
# Use the function (mx + b) to calculate f(x) for each x (StudyHours) value
df_regression['fx'] = (m * df_regression['StudyHours']) + b
# Calculate the error between f(x) and the actual y (Grade) value
df_regression['error'] = df_regression['fx'] - df_regression['Grade']
# Create a scatter plot of Grade vs StudyHours
df_regression.plot.scatter(x='StudyHours', y='Grade')
# Plot the regression line
plt.plot(df_regression['StudyHours'],df_regression['fx'], color='cyan')
# Display the plot
plt.show()
# -
# Note that this time, the code plotted two distinct things - the scatter plot of the sample study hours and grades is plotted as before, and then a line of best fit based on the least squares regression coefficients is plotted.
#
# The slope and intercept coefficients calculated for the regression line are shown above the plot.
#
# The line is based on the ***f*(x)** values calculated for each **StudyHours** value. Run the following cell to see a table that includes the following values:
#
# - The **StudyHours** for each student.
# - The **Grade** achieved by each student.
# - The ***f(x)*** value calculated using the regression line coefficients.
# - The *error* between the calculated ***f(x)*** value and the actual **Grade** value.
#
# Some of the errors, particularly at the extreme ends, are quite large (up to over 17.5 grade points); but in general, the line is pretty close to the actual grades.
# Show the original x,y values, the f(x) value, and the error
df_regression[['StudyHours', 'Grade', 'fx', 'error']]
# ### Using the regression coefficients for prediction
#
# Now that you have the regression coefficients for the study time and grade relationship, you can use them in a function to estimate the expected grade for a given amount of study.
# + tags=[]
# Define a function based on our regression coefficients
def f(x):
m = 6.3134
b = -17.9164
return m*x + b
study_time = 14
# Get f(x) for study time
prediction = f(study_time)
# Grade can't be less than 0 or more than 100
expected_grade = max(0,min(100,prediction))
#Print the estimated grade
print ('Studying for {} hours per week may result in a grade of {:.0f}'.format(study_time, expected_grade))
# -
# So by applying statistics to sample data, you've determined a relationship between study time and grade; and encapsulated that relationship in a general function that can be used to predict a grade for a given amount of study time.
#
# This technique is in fact the basic premise of machine learning. You can take a set of sample data that includes one or more *features* (in this case, the number of hours studied) and a known *label* value (in this case, the grade achieved) and use the sample data to derive a function that calculates predicted label values for any given set of features.
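# As a minimal sketch of that idea (assuming the **scikit-learn** library used earlier is available, and reusing the sample data), the same relationship can be learned by a regression estimator and then used to predict a label from a feature value:
# +
from sklearn.linear_model import LinearRegression
features = df_sample[['StudyHours']]                  # feature column(s)
labels = df_sample['Grade']                           # known label values
estimator = LinearRegression().fit(features, labels)  # derive the function from the sample data
new_features = pd.DataFrame({'StudyHours': [14]})
predicted = estimator.predict(new_features)[0]        # as before, a real prediction would be clipped to the 0-100 grade range
print('Predicted grade for 14 study hours: {:.0f}'.format(predicted))
# -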
# ## Further Reading
#
# To learn more about the Python packages you explored in this notebook, see the following documentation:
#
# - [NumPy](https://numpy.org/doc/stable/)
# - [Pandas](https://pandas.pydata.org/pandas-docs/stable/)
# - [Matplotlib](https://matplotlib.org/contents.html)
#
# ## Challenge: Analyze Flight Data
#
# If this notebook has inspired you to try exploring data for yourself, why not take on the challenge of a real-world dataset containing flight records from the US Department of Transportation? You'll find the challenge in the [/challenges/01 - Flights Challenge.ipynb](./challenges/01%20-%20Flights%20Challenge.ipynb) notebook!
#
# > **Note**: The time to complete this optional challenge is not included in the estimated time for this exercise - you can spend as little or as much time on it as you like!
| 01 - Data Exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Requirement:
# --
# We use the online SMS service from http://clickatell.com/ and first need an account to access it. Clickatell offers a test account with a limited number of free SMS messages. Other services exist, and your own mobile operator may offer one as well, but then the code below needs to be adapted to their API.
#
# After creating an account, you need to add a REST API in the interface, for which Clickatell will generate an Auth(entication) Token. This token has to be filled in below:
TOKEN = "****************************************************************"
DEST = "32475******"
from clickatell.rest import Rest
clickatell = Rest(TOKEN);
response = clickatell.sendMessage([DEST], "Raspi wants to be your BFF forever", extra={'from':'32477550561'})
# the extra['from'] parameters can be used to put any phone number, but it needs to be registered
# in the Clickatell administration interface
print response
for entry in response:
print('destination {}:'.format(entry['destination']))
for key in entry.keys():
print(" {}: {}".format(key, entry[key]))
| notebooks/en-gb/Communication - Send SMS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import dask.dataframe as dd
import sys
import time
import pandas as pd
import datetime
from fbprophet import Prophet
from matplotlib import pyplot as plt
sys.getdefaultencoding()
filename = '/Users/bluekidds/Downloads/tn_data/IISIRoadMOE.txt'
filename_Bus = '/Users/bluekidds/Downloads/tn_data/IISIBusData.txt'
# !tail -15 /Users/bluekidds/Downloads/tn_data/IISIRoadMOE.txt
# !head -15 /Users/bluekidds/Downloads/tn_data/IISIBusData.txt
df = dd.read_csv(filename, encoding="big5", header=None, sep=r"\s*", error_bad_lines=False, usecols=[1,3,4,7])
display(df.head())
df = dd.read_csv(filename, encoding="big5", header=None, sep='\t', error_bad_lines=False, usecols=[1,3,6])
display(df.head())
df_road = dd.read_csv(filename, encoding="big5", header=None, sep='\t', error_bad_lines=False, usecols=[1,3,6])
df_road = df_road.astype({1: 'str'})  # assign the result - astype returns a new dataframe rather than modifying in place
df_road.head()
df_road[1].value_counts()
df['time'] = df[[3,4]].apply(lambda x: ' '.join(x), axis = 1)
display(df.head())
df = df.drop([3,4], axis=1)
display(df.head())
# +
columns = ['RoadID','time', 'speed']
df.columns = columns
display(df.head())
# -
df['time'] = pd.to_datetime(df['time'])
display(df.head())
df1 = df[df['RoadID'] =='L03072']
df1 = df1.to_csv('data/*.csv')
df_refine = df[['RoadID','time', 'speed']]
df_refine.head()
df_refine1 = df_refine[df_refine['RoadID']=='L00620']
df_refine1 = df_refine1.compute()
df_refine.to_csv('refine_data.csv')
df1 = df[df['RoadName']=='ไธญ่ฏๅ่ทฏไบๆฎต(้่ฏ่ทฏไธๆฎต>ๅพท่่ทฏ)']
df1.head(10)
df2 = df1.compute()
df1 = df1[['time', 'speed']]
display(df1.tail(5))
df.RoadName.value_counts().compute()
# +
def blocks(files, size=65536):
while True:
b = files.read(size)
if not b: break
yield b
with open('/Users/bluekidds/Downloads/tn_data/IISIRoadMOE.txt',"r", errors='replace') as f:
print(sum(bl.count("\n") for bl in blocks(f)))
# -
# !pip install pyarrow
import pyarrow.parquet as pq
import pyarrow as pa
# +
input_file = open('/Users/bluekidds/Downloads/tn_data/IISIRoadMOE.txt', errors='replace')
#output_file = open('output.txt','w')
place_list = []
for lines in range(20):
line = input_file.readline()
line_list = line.split()
place_list.append(','.join([line_list[0], line_list[1],line_list[3]+' '+line_list[4], line_list[7]]))
#print(" ".join(line.split()))
#print(line.split(), len(line.split()))
#output_file.write(line)
small_df = pd.DataFrame(place_list)
table = pa.Table.from_pandas(small_df)
#with open('IISIRoadMOE_Processed.txt', 'w') as filehandle:
#filehandle.writelines("%s\n" % place for place in place_list)
pq.write_table(table, 'example.parquet')
#with open('/Users/bluekidds/Downloads/tn_data/IISIRoadMOE.txt', errors='replace') as f:
# full_lines = f.readlines()
# parse full lines to get the text to right of "-"
#sample_num = 10
input_file.close()
#for i in range(sample_num):
# print(" ".join(full_lines[i].split()))
# -
new_df = pd.read_parquet('example.parquet')
display(new_df.head())
def istimecheck(time):
timeformat = "%Y-%m-%d %H:%M:%S.%f"
try:
validtime = datetime.datetime.strptime(time, timeformat)
return True
#Do your logic with validtime, which is a valid format
except ValueError:
return False
#Do your logic for invalid format (maybe print some message?).
# +
import logging
#logging.basicConfig(level=logging.info, file='errors.log')
logging.basicConfig(filename='errors.log',level=logging.DEBUG)
# -
def ProcessLargeTextFile():
logging.basicConfig(filename='errors.log',level=logging.DEBUG)
bunchsize = 1000000 # Experiment with different sizes
bunch = []
start = time.time()
errors = 0
time_errors = 0
speed_errors = 0
with open('/Users/bluekidds/Downloads/tn_data/IISIRoadMOE.txt', errors='replace') as r, open("IISIRoadMOE_Processed.txt", "w") as w:
for line in r:
line_list = line.split()
if len(line_list) < 8:
print('Error occur...')
print(line_list)
break
my_time = line_list[3]+' '+line_list[4]
if not istimecheck(my_time):
#print('Error detected when extracting time')
errors += 1
time_errors += 1
logging.error("Time format error:" + my_time)
continue
if not line_list[5].replace('.','',1).isdigit():
if float(line_list[7]) == -1:
pass
else:
#print('Error detected when extracting speed')
errors += 1
speed_errors +=1
logging.error("Speed format error:" + line_list[5])
continue
bunch.append(','.join([line_list[0], line_list[1],line_list[3]+' '+line_list[4], line_list[5] + '\n']))
if len(bunch) == bunchsize:
print('Processed 1 million lines in: ' + str(time.time()-start))
print('Number of time errors accumulated: ' + str(time_errors))
                print('Number of speed errors accumulated: ' + str(speed_errors))
start = time.time()
w.writelines(bunch)
bunch = []
w.writelines(bunch)
print('Total errors: ' + str(errors) + ' and Speed variable errors ' + str(speed_errors) + ' and Time variable errors: ' + str(time_errors))
ProcessLargeTextFile()
'15.63157894�368419'.replace('.','',1).isdigit()
# ## Pipeline after processing data
# !tail -15 IISIRoadMOE_Processed.txt
# +
filename = 'IISIRoadMOE_Processed.txt'
df = dd.read_csv(filename, header=None)
display(df.head())
# -
df = dd.read_csv(filename, header=None, error_bad_lines=False, usecols=[1,2,3])
columns = ['RoadID','time', 'speed']
df.columns = columns
df['time']=dd.to_datetime(df['time'])
display(df.head(10))
road_list = df['RoadID'].value_counts().compute()
road_ID_set = set(df['RoadID'])
# For the roads listed above, if the number of values is greater than 50000, use the road.
#
# For each road, predict its last 2400 units using the past two months.
#
# 1. Set each df
# 2. Resample to 15 mins
# 3. df to fbpro ['2019-03-01 00:00:00' : '2019-05-02 00:00:00']
# 4. create_model(config)
# 5. future = model.make_future_dataframe(periods=2400, freq='15T')
# 6. forecast = model.predict(future)
# 7. forecast[y_hat] ['2019-05-02 00:00:00:2019-05-29 00:00:00']
def create_model():
model = Prophet(
daily_seasonality=False,
weekly_seasonality=False,
yearly_seasonality=False,
changepoint_prior_scale=0.02,
)
model.add_seasonality(
name='daily',
period=1,
fourier_order=16,
prior_scale=0.1,
)
model.add_seasonality(
name='weekly',
period=7,
fourier_order=2,
prior_scale=0.1,
)
return model
road_list_effective = list(road_list[road_list > 30000].keys())
print(road_list_effective)
def process_all_sections(df, road_list_effective):
logging.basicConfig(filename='process_road_data.log',level=logging.DEBUG)
start = time.time()
for road_ID in road_list_effective:
logging.info('Processing road: %s', road_ID)
df_local = df[df['RoadID']== road_ID][['time','speed']]
df_local= df_local.set_index('time')
#df_local = df_local.compute()
#if df_local.shape[0] < 30000:
# logging.error("Data size insufficient in road : " + road_ID + " " + str(df_local.shape[0]))
# now = time.time()
# print('Processed one road in: ' + str(now-start))
# start = now
# continue
df_resample = df_local.resample('15min').mean().compute()
fb_df = df_to_fbpro(df_resample['2019-03-01 05:30:00' : '2019-05-01 05:30:00'])
model = create_model()
model.fit(fb_df)
future = model.make_future_dataframe(periods=2400, freq='15T')
forecast = model.predict(future)
forecast[['ds', 'yhat']].to_json('forecast_output/' + '.'.join((road_ID, 'json')))
print('Finish ' + road_ID)
now = time.time()
print('Processed one road in: ' + str(now-start) + 's')
start = now
# +
# %%time
df_L00770 = df[df['RoadID']== 'L00770'][['time','speed']]
df_L00770= df_L00770.set_index('time')
df_resample = df_L00770.resample('15min').mean()
display(df_resample.head())
# -
process_all_sections(df, road_list_effective)
df.head(10)['time']
# +
#df1 = df[df['RoadID']=='L03072'][['time','speed']]
df4 = df[df['RoadID']=='L00466'][['time','speed']]
df4= df4.set_index('time')
#df1['time']=dd.to_datetime(df1['time'])
#display(df1.head(10))
# -
df4.c.shape
display(df1.loc['2019-03-15 00:00:00' : '2019-03-17 10:21:00'].compute())
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.graph_objs import Scatter, Figure, Layout
import plotly
import plotly.graph_objs as go
init_notebook_mode(connected=False)
#df1 = df1.compute()
df4 = df4.compute()
# +
#df3 = df1.resample('15min').mean()
df5 = df4.resample('15min').mean()
display(df5.head())
# +
trace_fitted = go.Scatter(
x=df5.index,
y=df5.speed,
name = 'actuals')
data = [trace_fitted]
layout = go.Layout(
title='Actual Time Series Data in L00466',
xaxis=dict(title='Date'))
fig = go.Figure(data=data, layout=layout)
iplot(fig)
# +
trace_fitted = go.Scatter(
x=df3.index,
y=df3.speed,
name = 'actuals')
data = [trace_fitted]
layout = go.Layout(
title='Actual Time Series Data in L03072',
xaxis=dict(title='Date'))
fig = go.Figure(data=data, layout=layout)
iplot(fig)
# -
def df_to_fbpro(df):
if len(df.columns) == 1:
df = df.reset_index()
df.columns = ['ds', 'y']
return df
fb_df = df_to_fbpro(df5['2019-03-01 05:30:00' : '2019-05-01 05:30:00'])
display(fb_df.head())
fb_df_test = df_to_fbpro(df5['2019-05-01 05:30:00' : '2019-06-01 05:30:00'])
display(fb_df_test.head())
df2.columns
m = Prophet()
m.fit(fb_df)
# +
model = Prophet(
daily_seasonality=False,
weekly_seasonality=False,
yearly_seasonality=False,
changepoint_prior_scale=0.02,
)
model.add_seasonality(
name='daily',
period=1,
fourier_order=16,
prior_scale=0.1,
)
model.add_seasonality(
name='weekly',
period=7,
fourier_order=2,
prior_scale=0.1,
)
model.fit(fb_df);
# -
fb_df.shape
# +
#future = m.make_future_dataframe(periods=600, freq='H')
#future.tail()
future = model.make_future_dataframe(periods=2400, freq='15T')
future.tail()
# -
fb_df_test.tail()
forecast = model.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig2 = model.plot_components(forecast)
fig1 = model.plot(forecast)
forecast
fig, ax = plt.subplots(figsize=(20, 5))
ax.plot(fb_df['ds'], fb_df['y'], c='black', marker='o', ms=3, linestyle='None', label='Train')
ax.plot(fb_df_test['ds'], fb_df_test['y'], c='r', marker='o', ms=3, linestyle='None', label='Test')
ax.plot(forecast['ds'], forecast['yhat'], c='b', marker='o', ms=3, linestyle='None', label='Forecast', alpha=0.5)
ax.legend()
ax.set_xlabel('Date')
ax.set_ylabel('Speed');
# ## Cross-Validation
from fbprophet.diagnostics import cross_validation
df_cv = cross_validation(model, initial='40 days', period='3 days', horizon = '10 days')
df_cv.head()
from fbprophet.diagnostics import performance_metrics
df_p = performance_metrics(df_cv)
df_p.head()
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='rmse')
# ## Add holiday effects (not supporting Taiwan)
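# The built-in country holidays don't cover Taiwan here, so no holiday effects are added above. A minimal workaround sketch is to pass a custom holiday DataFrame when constructing the model (the dates below are hypothetical examples, not a real holiday calendar):
# +
tw_holidays = pd.DataFrame({
    'holiday': 'lunar_new_year',
    'ds': pd.to_datetime(['2019-02-04', '2019-02-05', '2019-02-06']),  # hypothetical holiday dates
    'lower_window': 0,
    'upper_window': 1,
})
model_with_holidays = Prophet(holidays=tw_holidays)
# model_with_holidays.fit(fb_df) would then proceed exactly as in the fitting cells above
# -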
# +
import folium
import folium.plugins as plugins
import numpy as np
#m.fit(df)
# -
# !pip install folium
# +
import folium
import folium.plugins as plugins
import numpy as np
np.random.seed(3141592)
initial_data = (
np.random.normal(size=(100, 2)) * np.array([[1, 1]]) +
np.array([[48, 5]])
)
move_data = np.random.normal(size=(100, 2)) * 0.01
data = [(initial_data + move_data * i).tolist() for i in range(100)]
weight = 1 # default value
for time_entry in data:
for row in time_entry:
row.append(weight)
# +
m = folium.Map(
location=[35.68159659061569, 139.76451516151428],
zoom_start=16
)
# Lon, Lat order.
lines = [
{
'coordinates': [
[139.76451516151428, 35.68159659061569],
[139.75964426994324, 35.682590062684206],
],
'dates': [
'2017-06-02T00:00:00',
'2017-06-02T00:10:00'
],
'color': 'red'
},
{
'coordinates': [
[139.75964426994324, 35.682590062684206],
[139.7575843334198, 35.679505030038506],
],
'dates': [
'2017-06-02T00:10:00',
'2017-06-02T00:20:00'
],
'color': 'blue'
},
{
'coordinates': [
[139.7575843334198, 35.679505030038506],
[139.76337790489197, 35.678040905014065],
],
'dates': [
'2017-06-02T00:20:00',
'2017-06-02T00:30:00'
],
'color': 'green',
'weight': 15,
},
{
'coordinates': [
[139.76337790489197, 35.678040905014065],
[139.76451516151428, 35.68159659061569],
],
'dates': [
'2017-06-02T00:30:00',
'2017-06-02T00:40:00'
],
'color': '#FFFFFF',
},
]
features = [
{
'type': 'Feature',
'geometry': {
'type': 'LineString',
'coordinates': line['coordinates'],
},
'properties': {
'times': line['dates'],
'style': {
'color': line['color'],
'weight': line['weight'] if 'weight' in line else 5
}
}
}
for line in lines
]
plugins.TimestampedGeoJson({
'type': 'FeatureCollection',
'features': features,
}, period='PT1M', add_last_point=True).add_to(m)
m
# +
from datetime import datetime, timedelta
time_index = [
(datetime.now() + k * timedelta(1)).strftime('%Y-%m-%d') for
k in range(len(data))
]
# -
# * Time_index is a list of datetime objects
# * data is a list of list of coordinates
data[1][0]
display(len(data), len(data[0]))
# ## Visualize Traffic Data
# +
from glob import glob
file_paths = glob('forecast_output/*.json')
# +
forecast_df = pd.read_json(file_paths[0])
location_id = file_paths[0].split('/')[1].split('.')[0]
forecast_df.columns = ['time',location_id]
forecast_df['time'] = forecast_df['time'] / 1000
forecast_df.set_index('time', inplace=True)
for file_path in file_paths[1:]:
location_name = file_path.split('/')[1].split('.')[0]
local_df = pd.read_json(file_path)
local_df.columns = ['time',location_name]
local_df['time'] = local_df['time'] / 1000
local_df.set_index('time', inplace=True)
#display(local_df.head())
#local_df.index = pd.to_datetime(local_df.index)
forecast_df = pd.merge(forecast_df, local_df, how='outer', on='time')
#forecast_df = forecast_df.join(local_df, how='outer')
#print(location_name, file_path)
#file_paths[0].split('/')[1].split('.')[0]
display(forecast_df.head())
# -
forecast_df = pd.read_csv('forecast_result_tainan.csv', index_col='time')
display(forecast_df.head())
import datetime
date = datetime.datetime.fromtimestamp(1551420000)
print(date)
forecast_df.index = pd.to_datetime(forecast_df.index, unit='s')
display(forecast_df.head())
sec_info = pd.read_csv('SECINFO.csv')
#display(sec_info.head())
sec_info = sec_info.dropna()
# +
sec_cols = forecast_df.columns
sectid_ids_gt = list(sec_info['SECTORID'])
not_in_gt = 0
in_gt = 0
sec_cols_gt = list()
for sec in sec_cols:
if sec not in sectid_ids_gt:
not_in_gt += 1
#display('In sectodID: ' + sec)
else:
in_gt +=1
sec_cols_gt.append(sec)
#display(sec)
display('In forecast but not in ground_truth: ' + str(not_in_gt))
display('In forecast and ground truth: ' +str(in_gt))
# -
sec_info = sec_info.set_index('SECTORID')
sec_info.loc['L01707']
# Coordinates: latitude-longitude
# ### Save forecast results to csv
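# The merged forecast table can be persisted here. A minimal sketch of what this step presumably does (producing the forecast_result_tainan.csv file that is read back earlier in this section):
forecast_df.to_csv('forecast_result_tainan.csv')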
sec_info.loc['L01144']
time1_forecast['L01144'].values
m = folium.Map([23.0649, 120.219], zoom_start=14)
m
# +
m = folium.Map([23.0649, 120.219],zoom_start=12) #, tiles='stamentoner'
myIcon = folium.CustomIcon('https://raw.githubusercontent.com/HanInfinity/iron2018_FoliumAndLeaflet/master/dist/icon/marker.png',
icon_size = (30, 30),
icon_anchor = (15, 30))
for segment in time1_forecast.columns:
forecast_value = time1_forecast[segment].values
points1 = [sec_info.loc[segment]['START_Y'],sec_info.loc[segment]['START_X'] ]
points2 = [sec_info.loc[segment]['END_Y'],sec_info.loc[segment]['END_X'] ]
dist = distance(points1[0], points1[1], points2[0], points2[1])
if 0 in points1:
continue
if 0 in points2:
continue
if dist > 100:
continue
#folium.Marker(points1, icon= myIcon, popup='้ ๆธฌ้ๅบฆ: ' + str(forecast_value[0])).add_to(m)
folium.features.ColorLine(
[points1, points2],
colors=[0],
colormap=[section_color(forecast_value), section_color(forecast_value)],
weight=8, opacity=1).add_to(m)
# -
for a,b in timedict.items():
print(a,b)
def add_colorline(points1, points2, forecast_value, m):
folium.features.ColorLine(
[points1, points2],
colors=[0],
colormap=[section_color(forecast_value), section_color(forecast_value)],
weight=8, opacity=1).add_to(m)
return m
forecast_df_time = forecast_df.loc['2019-05-05 07:30:00'][sec_cols_gt].to_frame().transpose()
forecast_df_time
for name, time in timedict.items():
print('Processing time ', time)
m = folium.Map([23.017100, 120.201147],zoom_start=12) #, tiles='stamentoner'
forecast_df_time = forecast_df.loc[time][sec_cols_gt].to_frame().transpose()
for segment in forecast_df_time.columns:
forecast_value = forecast_df_time[segment].values
points1 = [sec_info.loc[segment]['START_Y'],sec_info.loc[segment]['START_X'] ]
points2 = [sec_info.loc[segment]['END_Y'],sec_info.loc[segment]['END_X'] ]
dist = distance(points1[0], points1[1], points2[0], points2[1])
if 0 in points1:
continue
if 0 in points2:
continue
if dist > 100:
continue
m = add_colorline(points1, points2, forecast_value, m)
m.save(name + '.html')
m.save('2019-05-05 12:00:00.html')
# START_X START_Y END_X END_Y
forecast_df['L01144'].describe()
# 15:43 ๅธฅๅฐ่ญฆๅฏ้็ฝฐๅฎ ๅฐๅณฐๆๅปๅฐ็ทฉๅ
# 15:43 ๅธฅๅฐ่ญฆๅฏ้็ฝฐๅฎ ๅนณๅธธไฝ้้ๆๅป
# 15:43 ๅธฅๅฐ่ญฆๅฏ้็ฝฐๅฎ ๆไธๅฐๅณฐๆๅปๅฐ็ทฉๅ
time1 = '2019-05-06 12:00:00'
time2 = '2019-05-06 15:00:00'
time3 = '2019-05-06 18:00:00'
time4 = '2019-05-06 21:00:00'
time730 = '2019-05-06 07:30:00'
time800 = '2019-05-06 08:00:00'
time830 = '2019-05-06 08:30:00'
time900 = '2019-05-06 09:00:00'
time1200 = '2019-05-06 12:00:00'
time1230 = '2019-05-06 12:30:00'
time1300 = '2019-05-06 13:00:00'
time1330 = '2019-05-06 13:30:00'
time1900 = '2019-05-06 19:00:00'
time1930 = '2019-05-06 19:30:00'
time2000 = '2019-05-06 20:00:00'
time2030 = '2019-05-06 20:30:00'
a = time1_forecast.index
time1_forecast = forecast_df.loc[time1][sec_cols_gt]
time2_forecast = forecast_df.loc[time2][sec_cols_gt]
time3_forecast = forecast_df.loc[time3][sec_cols_gt]
time4_forecast = forecast_df.loc[time4][sec_cols_gt]
time730_forecast = forecast_df.loc[time730][sec_cols_gt]
time800_forecast = forecast_df.loc[time800][sec_cols_gt]
time830_forecast = forecast_df.loc[time830][sec_cols_gt]
time900_forecast = forecast_df.loc[time900][sec_cols_gt]
time1200_forecast = forecast_df.loc[time1200][sec_cols_gt]
time1230_forecast = forecast_df.loc[time1230][sec_cols_gt]
time1300_forecast = forecast_df.loc[time1300][sec_cols_gt]
time1330_forecast = forecast_df.loc[time1330][sec_cols_gt]
time1900_forecast = forecast_df.loc[time1900][sec_cols_gt]
time1930_forecast = forecast_df.loc[time1930][sec_cols_gt]
time2000_forecast = forecast_df.loc[time2000][sec_cols_gt]
time2030_forecast = forecast_df.loc[time2030][sec_cols_gt]
# +
## Find statistics of forecast_df
forecast_df.describe()
# -
def section_color(value):
if value < 28:
return 'red'
elif value < 32:
return 'orange'
elif value <36:
return 'green'
else:
return 'blue'
# +
from math import radians, sin, cos, acos
def distance(slat, slon, elat, elon):
    # Great-circle distance in km; the coordinates are in degrees, so convert to radians before the trig calls
    slat, slon, elat, elon = map(radians, (slat, slon, elat, elon))
    return 6371.01 * acos(min(1.0, sin(slat)*sin(elat) + cos(slat)*cos(elat)*cos(slon - elon)))
# -
timedict = {'730': time730,
'800': time800,
'830': time830,
'900': time900,
'1200': time1200,
'1230': time1230,
'1300': time1300,
'1330': time1330,
'1900': time1900,
'1930': time1930,
'2000': time2000,
'2030': time2030}
timedict.keys()
| traffic_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ##### 1
# 
# ##### 2
# 
# ##### 3
# 
# ##### 4
# 
# ##### 5
# 
# ##### 6
# 
# ##### 7
# 
# Whether the data can actually be collected here refers to two things:
# 1. Whether correct data can be obtained at all - for example, a user's age is data that is very hard to obtain;
# 2. Some real-time data cannot necessarily be obtained.
# ##### 8
# 
# ##### 9
# 
# ##### 10
# 
# ##### 11
# 
# ##### 12
# 
# ##### 13
# 
# ##### 14
# 
# ##### 15
# 
# ##### 16
# 
# ##### 17
# 
# ##### 18
# 
# Discretization can introduce non-linearity.
# The lecture used the example of rescuing people after the Wenchuan earthquake:
# the elderly and children are more likely to be rescued, but if age is kept as a single numeric variable, logistic regression can only model it as positively or negatively correlated with the outcome. If age is instead split into intervals such as [0-15, 15-45, 45+], each interval gets its own weight, which introduces non-linearity, so both the elderly and children can come out as more likely to be rescued.
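# A minimal sketch of that bucketing idea (with hypothetical ages), using pandas to discretize age into those intervals and one-hot encode them, so that a linear model can learn a separate weight per interval:
# +
import pandas as pd
ages = pd.DataFrame({'age': [5, 30, 70]})                        # hypothetical samples
ages['age_bucket'] = pd.cut(ages['age'], bins=[0, 15, 45, 120],
                            labels=['0-15', '15-45', '45+'])     # discretize into the three intervals
age_features = pd.get_dummies(ages['age_bucket'])                # one indicator column per interval
print(age_features)
# -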
# ##### 19
# 
# ##### 20
# 
# ##### 21
# 
# ##### 22
# 
# ##### 23
# 
# ##### 24
# 
# ##### 25
# 
# ##### 26
# 
# ##### 27
# 
# ##### 28
# 
# ##### 29
# 
# ##### 30
# 
# ##### 31
# 
# ##### 32
# 
# ##### 33
# 
# ##### 34
# 
# ##### 35
# 
# ##### 36
# 
# ##### 37
# 
# ##### 38
# 
# ##### 39
# 
# ##### 40
# 
# ##### 41
# 
# ##### 42
# 
# ##### 43
# 
# ##### 44
# 
# ##### 45
# 
# ##### 46
# 
# ##### 47
# 
# ##### 48
# 
# ##### 49
# 
# ##### 50
# 
# ##### 51
# 
| ml-april/lec06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lonestar 5
#
# Supercomputers like Lonestar 5 take large problems and divide them into smaller pieces. Each fragment of the task is distributed to different *nodes* on the cluster to be computed simultaneously, in parallel. The problem with parallel computation is that each node you add incurs a time penalty.
#
# This loss of efficiency is called *overhead*. The more nodes you add, the more overhead you suffer. That means that distributing a task 20 ways doesn't result in a 20x speedup. In reality, it may result in a lower actual speedup due to this overhead. For example, you may have a 10 hour job that you want to divide across 2 nodes. Unfortunately, transferring data to each additional node past the first one may take 1 hour per node. Therefore, the real time for your task would be 6 hours. The *speedup*, in this case, was 1.7x (rounded to the tenths place.)
#
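# For the worked example above: real time = 10/2 + (2 - 1) * 1 = 6 hours, so speedup = 10/6 ≈ 1.7x. A minimal check of that arithmetic in Python:
hours, nodes, penalty = 10, 2, 1
real_time = hours / nodes + (nodes - 1) * penalty
print("{}x".format(round(hours / real_time, 1)))  # 1.7x
#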
# ### Input
#
# Your program must read input from a file named `lonestar5.dat`. There is one line of input with three integers. The first integer represents the number of hours a task will take. The second integer represents the number of nodes assigned to the task. The third integer is the additional time in hours it will take, per node, to distribute the task.
#
# ### Output
#
# Your program must output the *speedup* for this task, rounded to the tenths place.
#
# ### Sample Input File `lonestar5.dat`
#
# ```
# 4000 20 3
# ```
#
# ### Sample Output to Screen
#
# ```
# 15.6x
# ```
file = open("lonestar5.dat", "r")
ints = [ int(token.strip()) for token in file.readline().split()]
hours = ints[0]
nodes = ints[1]
penalty = ints[2]
time = hours / nodes + (nodes - 1) * penalty
print("{}x".format(round(hours / time, 1)))
| solutions/lonestar5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# SETS
mySet = set()
mySet.add(1)
mySet.add(2)
mySet.add(2)
mySet # {1,2}
# +
myList = [1,1,1,1,2,2,2,2,3,3,3,0]
mySet2 = set(myList)
mySet2 # {1, 2, 3}
s = set("paralel")
s.add('z')
s.add('b')
{'a', 'r'}.issubset(s)
#s # {'a', 'b', 'e', 'l', 'p', 'r', 'z'}
# +
myList = [1000,1,1,1,1,2000000,2,2,2,2,3,3,3,0]
mySet2 = set(myList)
mySet2.add(-1)
mySet2
# -
A = set('qwerty')
A.add('z')
print(A)
| Code/1.Basics/1.Data structures and Objects/7.set.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# | | learning_rate | epoch size | patience | activations |
# | --- | --- | --- | --- | --- |
# | 1 | 3e-05 | 200 | 20 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 2 | 3e-05 | 200 | 200 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 3 | 0.0001 | 200 | 20 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 4 | 0.0001 | 200 | 200 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 5 | 0.003 | 200 | 20 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 6 | 0.003 | 200 | 200 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 7 | 3e-05 | 1000 | 20 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 8 | 3e-05 | 1000 | 200 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 9 | 0.0001 | 1000 | 20 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 10 | 0.0001 | 1000 | 200 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 11 | 0.003 | 1000 | 20 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 12 | 0.003 | 1000 | 200 | {0: 'relu', 1: 'sigmoid', 2: 'sigmoid'} |
# | 13 | 3e-05 | 200 | 20 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 14 | 3e-05 | 200 | 200 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 15 | 0.0001 | 200 | 20 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 16 | 0.0001 | 200 | 200 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 17 | 0.003 | 200 | 20 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 18 | 0.003 | 200 | 200 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 19 | 3e-05 | 1000 | 20 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 20 | 3e-05 | 1000 | 200 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 21 | 0.0001 | 1000 | 20 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 22 | 0.0001 | 1000 | 200 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 23 | 0.003 | 1000 | 20 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
# | 24 | 0.003 | 1000 | 200 | {0: 'relu', 1: 'relu', 2: 'sigmoid'} |
#
# + deletable=true editable=true
mapper = {0: 'single_classification/45332624/',
1: 'single_regression/45333511/',
2: 'vanilla_lstm/45395626/'}
number_k = {0: 24,
1: 24,
2: 30}
modes = {0: 'train prec', 1: 'train roc', 2: 'train bedroc',
3: 'val prec', 4: 'val roc', 5: 'val bedroc',
6: 'test prec', 7: 'test roc', 8: 'test bedroc',
9: 'EF_2', 10: 'EF_1', 11: 'EF_015', 12: 'EF_01'}
# + deletable=true editable=true
# %matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (25.0, 5.0)
# + deletable=true editable=true
# %load_ext autoreload
# %autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from virtual_screening.function import plot_single_model_single_mode, plot_single_model_multi_mode
# + [markdown] deletable=true editable=true
# # Comparison on evaluation metrics among each data set
# + deletable=true editable=true
mode = 0
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[0])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[1])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[2])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[3])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[4])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[5])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[6])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[7])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[8])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[9])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[10])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[11])
# + deletable=true editable=true
plot_single_model_single_mode('../../output/{}'.format(mapper[mode]), number_k[mode], modes[12])
# + [markdown] deletable=true editable=true
# # Comparison on evaluation metrics among different data set
# + deletable=true editable=true
plot_single_model_multi_mode('../../output/{}'.format(mapper[mode]), number_k[mode],
mode_list=[modes[0], modes[3], modes[6]])
# + deletable=true editable=true
plot_single_model_multi_mode('../../output/{}'.format(mapper[mode]), number_k[mode],
mode_list=[modes[1], modes[4], modes[7]])
# + deletable=true editable=true
plot_single_model_multi_mode('../../output/{}'.format(mapper[mode]), number_k[mode],
mode_list=[modes[2], modes[5], modes[8]])
# + deletable=true editable=true
plot_single_model_multi_mode('../../output/{}'.format(mapper[mode]), number_k[mode],
mode_list=[modes[9], modes[10], modes[11], modes[12]])
| pria_lifechem/analysis/grid_search_single_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Forecasting Apartment Price in Gotham City
#
#
# ## 0. Introduction
# ## 1. Loading Packages and Data
# Load Packages
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import preprocessing
from plots import draw_corr_heatmap
import seaborn as sns
# +
# Variables
train_rate = .85
np.random.seed(0)
names = ['contract date', 'latitude', 'longitude', 'altitude', '1st region id', '2nd region id', 'road id',
'apartment id', 'floor', 'angle', 'area', 'parking lot limit', 'parking lot area', 'parking lot external',
'management fee', 'households', 'age of residents', 'builder id', 'completion date', 'built year',
'schools', 'bus stations', 'subway stations', 'price']
# +
# Read data
data = pd.read_csv('../data/data_train.csv',
names=names)
data.shape
# -
# ## 2. Analyze Data
# +
def draw_corr_heatmap(data):
corrmat = data.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, cmap="PiYG", center=0)
draw_corr_heatmap(data)
# -
# ## 3. Pre-processing Data
# ### 3-1. Angle (-> sin value)
# +
# Before processing,
print(f"Before: {data['angle'][0]}")
# Convert into sin value
data['angle'] = np.sin(data['angle'])
print(f"After: {data['angle'][0]}")
# -
# ### 3-2. Dates (-> number of seconds after built)
# +
# Before processing,
print(f"Before: {data['contract date'][0]}")
print(f"Before: {data['completion date'][0]}")
# Convert into number of seconds after build
data['contract date'] = pd.to_datetime(data['contract date'])
data['completion date'] = pd.to_numeric(data['contract date'] - pd.to_datetime(data['completion date']))
data['contract date'] = pd.to_numeric(data['contract date'] - data['contract date'].min())
print(f"After: {data['contract date'][0]}")
print(f"After: {data['completion date'][0]}")
# -
# ### 3-3. Remove useless columns
# +
print(data.columns)
drop_columns = ['1st region id', '2nd region id', 'road id', 'apartment id','builder id', 'built year']
data = data.drop(columns=drop_columns)
print(data.columns)
# -
# ### 3-4. Remove missing data
print(f"Before: {data.shape}")
data = data.dropna()
print(f"After: {data.shape}")
# ### 3-5. Normalize
print(data.iloc[0])
y = data['price']
def normalize(d):
min_max_scaler = preprocessing.MinMaxScaler()
d_scaled = min_max_scaler.fit_transform(d)
return pd.DataFrame(d_scaled, columns=[item for item in names if item not in drop_columns])
data = normalize(data)
print(data.iloc[0])
# ## 4. Get X, y
X = data.drop(columns=['price'])
# y already holds the un-normalized 'price' values captured before scaling
print(X.iloc[0])
print(y[0])
# ## References
# - [Predicting House Prices with Machine Learning](https://www.kaggle.com/erick5/predicting-house-prices-with-machine-learning)
| bin/main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Install library
# +
import os
import random
import numpy as np
import pandas as pd
import optuna
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score, classification_report
from sklearn.utils import class_weight
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.utils import plot_model, to_categorical
from tensorflow.keras.layers import Input, Dense, Conv2D, Activation
from tensorflow.keras.layers import MaxPooling2D, UpSampling2D, BatchNormalization, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# +
def set_randvalue(value):
# Set a seed value
seed_value= value
# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
os.environ['PYTHONHASHSEED']=str(seed_value)
# 2. Set `python` built-in pseudo-random generator at a fixed value
random.seed(seed_value)
# 3. Set `numpy` pseudo-random generator at a fixed value
np.random.seed(seed_value)
# 4. Set `tensorflow` pseudo-random generator at a fixed value
tf.random.set_seed(seed_value)
set_randvalue(42)
# -
# ## Dataset preprocessing and EDA
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() # load data
x_train,x_test = x_train.astype('float32')/255.0,x_test.astype('float32')/255.0 # normalization
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
y_train
# #### Limit three classes (bird, deer, truck) during preprocessing
# +
# Keras has no method to get a CIFAR-10 category name from its numeric label, so define the names manually
cifar10_labels = np.array([
'airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck'])
bird_num = np.where(cifar10_labels=='bird')
deer_num = np.where(cifar10_labels=='deer')
truck_num = np.where(cifar10_labels=='truck')
limit_num = 2500
# get limit label indexes
bird_indexes = [i for i, label in enumerate(y_train) if label == bird_num]
deer_indexes = [i for i, label in enumerate(y_train) if label == deer_num]
truck_indexes = [i for i, label in enumerate(y_train) if label == truck_num]
other_indexes = [i for i, label in enumerate(y_train) if label not in [bird_num, deer_num, truck_num]]
# limit
bird_indexes = bird_indexes[:limit_num]
deer_indexes = deer_indexes[:limit_num]
truck_indexes = truck_indexes[:limit_num]
print(f'Bird label num is {len(bird_indexes)}') # 2500
print(f'Deer label num is {len(deer_indexes)}') # 2500
print(f'Truck label num is {len(truck_indexes)}') # 2500
print(f'Other label num is {len(other_indexes)}') # 35000; 5000*7
# merge and sort
merge_indexes = np.concatenate([other_indexes, bird_indexes, deer_indexes, truck_indexes], 0)
merge_indexes.sort()
print(f'Train label num is {len(merge_indexes)}') # 42500
# create three labels removed train data
x_train_removed = np.zeros((len(merge_indexes), 32, 32, 3))
y_train_removed = np.zeros(len(merge_indexes))
for i, train_index in enumerate(merge_indexes):
x_train_removed[i] = x_train[train_index]
y_train_removed[i] = y_train[train_index]
print(x_train_removed.shape)
print(y_train_removed.shape)
# -
print(x_train_removed.shape)
print(y_train_removed.shape)
del x_train
del y_train
df = pd.DataFrame(y_train_removed.flatten())
print(df.value_counts())
del df
# +
import matplotlib.pyplot as plt
# plot data labels
plt.hist(y_train_removed.flatten())
# -
# train test split
# stratify y label
x_train_removed, x_valid_removed, y_train_removed, y_valid_removed = train_test_split(x_train_removed, y_train_removed,
test_size=0.3, random_state=42, stratify=y_train_removed)
print(x_train_removed.shape)
print(y_train_removed.shape)
print(x_valid_removed.shape)
print(y_valid_removed.shape)
df = pd.DataFrame(y_train_removed.flatten())
print(df.value_counts())
del df
df = pd.DataFrame(y_valid_removed.flatten())
print(df.value_counts())
del df
# ## AutoEncoder
# #### Load AE model weights
# +
# Batch Norm Model
def create_AE01_model(k_size):
input_img = Input(shape=(32, 32, 3)) # 0
conv1 = Conv2D(64, (k_size, k_size), padding='same', name="Dense_AE01_1")(input_img) # 1
conv1 = BatchNormalization(name="BN_AE01_1")(conv1) # 2
conv1 = Activation('relu', name="Relu_AE01_1")(conv1) # 3
decoded = Conv2D(3, (k_size, k_size), padding='same', name="Dense_AE01_2")(conv1) # 4
decoded = BatchNormalization(name="BN_AE01_2")(decoded) # 5
decoded = Activation('relu', name="Relu_AE01_2")(decoded) # 6
return Model(input_img, decoded)
class AE01():
def __init__(self, ksize, optimizer):
self.optimizer = optimizer
self.autoencoder = create_AE01_model(ksize)
self.encoder = None
def compile(self, optimizer='adam', loss='binary_crossentropy'):
self.autoencoder.compile(optimizer=self.optimizer, loss=loss)
def train(self, x_train=None, x_test=None, epochs=1, batch_size=32, shuffle=True):
es_cb = EarlyStopping(monitor='val_loss', patience=2, verbose=1, mode='auto')
ae_model_path = '../models/AE/AE01_AE_Best.hdf5'
cp_cb = ModelCheckpoint(filepath = ae_model_path, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
history = self.autoencoder.fit(x_train, x_train,
epochs=epochs,
batch_size=batch_size,
shuffle=shuffle,
callbacks=[es_cb, cp_cb],
validation_data=(x_test, x_test))
self.autoencoder.load_weights(ae_model_path)
self.encoder = Model(self.autoencoder.input, self.autoencoder.get_layer('Relu_AE01_1').output)
encode_model_path = '../models/AE/AE01_Encoder_Best.hdf5'
self.encoder.save(encode_model_path)
return history
def load_weights(self, ae_model_path, encode_model_path):
self.autoencoder.load_weights(ae_model_path)
self.encoder = Model(self.autoencoder.input, self.autoencoder.get_layer('Relu_AE01_1').output)
self.encoder.load_weights(encode_model_path)
# -
ae_ksize = 3
ae_optimizer = 'rmsprop'
stack01 = AE01(ae_ksize, ae_optimizer)
stack01.load_weights('../models/AE/AE01_AE_Best.hdf5', '../models/AE/AE01_Encoder_Best.hdf5')
stack01.encoder.trainable = False
stack01.encoder.summary()
# ## Train
# #### Create Model AE to CNN
# +
def create_StackedAE01_CNN01_model(encoder):
input_img = encoder.input
output = encoder.layers[-1].output # 32,32,64
x = Conv2D(64,(3,3),padding = "same",activation= "relu")(output)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x) # 16,16,64
x = Conv2D(128,(3,3),padding = "same",activation= "relu")(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(128,(3,3),padding = "same",activation= "relu")(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x) # 8,8,128
x = GlobalAveragePooling2D()(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
y = Dense(10,activation = "softmax")(x)
return Model(input_img, y)
model01 = create_StackedAE01_CNN01_model(stack01.encoder)
model01.summary()
# -
# #### Train without data augmentation (but with class weights)
# +
adam = Adam() # default
model01.compile(loss = "categorical_crossentropy", optimizer = adam, metrics = ["accuracy"])
# one hot encoding
nb_classes = 10
y_train_removed_onehot = to_categorical(y_train_removed, nb_classes)
y_valid_removed_onehot = to_categorical(y_valid_removed, nb_classes)
y_test_onehot = to_categorical(y_test, nb_classes)
# +
# %%time
# train
saveDir = "../models/CNN/"
# calculate class weights
class_weights = class_weight.compute_class_weight('balanced',
np.unique(y_train_removed),
y_train_removed)
class_weights = dict(enumerate(class_weights))
es_cb = EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode='auto')
chkpt = saveDir + 'Model_009_Best.hdf5'
cp_cb = ModelCheckpoint(filepath = chkpt, \
monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
model01_history = model01.fit(x_train_removed, y_train_removed_onehot,
batch_size=32,
epochs=400,
verbose=1,
validation_data=(x_valid_removed, y_valid_removed_onehot),
callbacks=[es_cb, cp_cb],
class_weight=class_weights,
shuffle=True)
# +
# plot training
model01_hist_df = pd.DataFrame(model01_history.history)
plt.figure()
model01_hist_df[['loss', 'val_loss']].plot()
plt.ylabel('loss')
plt.xlabel('epoch')
plt.figure()
model01_hist_df[['accuracy', 'val_accuracy']].plot()
plt.ylabel('accuracy')
plt.xlabel('epoch')
# -
model01.load_weights('../models/CNN/Model_009_Best.hdf5')
model01.evaluate(x_test, y_test_onehot)
y_pred = model01.predict(x_test)
y_pred = np.argmax(y_pred, axis=1)
print(classification_report(y_test, y_pred))
class_weights
# #### Train with data augmentation & class weights
# +
encoder = stack01.encoder
encoder.trainable = False
model02 = create_StackedAE01_CNN01_model(encoder) # transfer learning
adam = Adam() # default
model02.compile(loss = "categorical_crossentropy", optimizer = adam, metrics = ["accuracy"])
# +
# %%time
# one hot encoding
nb_classes = 10
y_train_removed_onehot = to_categorical(y_train_removed, nb_classes)
y_valid_removed_onehot = to_categorical(y_valid_removed, nb_classes)
y_test_onehot = to_categorical(y_test, nb_classes)
es_cb = EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode='auto')
chkpt = saveDir + 'Model_010_Best.hdf5'
cp_cb = ModelCheckpoint(filepath = chkpt, \
monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
# calculate class weights
class_weights = class_weight.compute_class_weight('balanced',
np.unique(y_train_removed),
y_train_removed)
class_weights = dict(enumerate(class_weights))
# create data generator
train_datagen = ImageDataGenerator(
# rescale=1./255,
# rotation_range=10,
# shear_range=0.2,
horizontal_flip=True,
# vertical_flip=True,
# width_shift_range=0.1,
# height_shift_range=0.1,
zoom_range=0.1
# channel_shift_range=0.2
)
batch_size = 32
train_datagenerator = train_datagen.flow(x_train_removed, y_train_removed_onehot, batch_size)
valid_datagenerator = ImageDataGenerator().flow(x_valid_removed, y_valid_removed_onehot, batch_size)
model02_history = model02.fit_generator(train_datagenerator,
steps_per_epoch=int(len(x_train_removed)//batch_size),
epochs=400,
validation_data=valid_datagenerator,
validation_steps=int(len(x_valid_removed)//batch_size),
verbose=1,
shuffle=True,
callbacks=[es_cb, cp_cb])
# plot training
model02_hist_df = pd.DataFrame(model02_history.history)
plt.figure()
model02_hist_df[['loss', 'val_loss']].plot()
plt.ylabel('loss')
plt.xlabel('epoch')
plt.figure()
model02_hist_df[['accuracy', 'val_accuracy']].plot()
plt.ylabel('accuracy')
plt.xlabel('epoch')
# -
model02.load_weights('../models/CNN/Model_010_Best.hdf5')
model02.evaluate(x_test, y_test_onehot)
y_pred = model02.predict(x_test)
y_pred = np.argmax(y_pred, axis=1)
print(classification_report(y_test, y_pred))
class_weights
| notebooks/AE_2_CNN_005.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (rnode2vec)
# language: python
# name: rnode2vec
# ---
# # Example 4
# In this example, we detect core-periphery structure in the airport networks
# # Packages
# +
# %load_ext autoreload
# %autoreload 2
import sys
import cpnet
import matplotlib as mpl
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
import seaborn as sns
# import utils
from scipy import sparse
# -
# # Data
#
# The worldwide airport network constructed from the openflight data.
#
# Data source:
# - http://opsahl.co.uk/tnet/datasets/openflights.txt
# - http://opsahl.co.uk/tnet/datasets/openflights_airports.txt
# - https://raw.githubusercontent.com/lukes/ISO-3166-Countries-with-Regional-Codes/master/all/all.csv
#
# Code to generate this network
# - https://github.com/skojaku/core-periphery-detection/blob/add-notebook/scripts/generate-airport-net.py
# +
# Node attributes
node_table = pd.read_csv(
"https://raw.githubusercontent.com/skojaku/core-periphery-detection/add-notebook/data/node-table-airport.csv?token=<KEY>"
)
# Edge table
edge_table = pd.read_csv(
"https://raw.githubusercontent.com/skojaku/core-periphery-detection/add-notebook/data/edge-table-airport.csv?token=<KEY>A"
)
# -
G = nx.from_pandas_edgelist(edge_table)
print(nx.info(G))
# # Detect core-periphery structure
# Detect core-periphery structure
kmconfig = cpnet.KM_config() # Instantiate the KM-config algorithm
kmconfig.detect(G) # Detect core-periphery structures
c = kmconfig.get_pair_id() # Get the group membership of nodes
x = kmconfig.get_coreness() # Get the coreness of nodes
# # Statistical test
sig_c, sig_x, significant, p_values = cpnet.qstest(
c, x, G, kmconfig, significance_level=0.05, num_of_rand_net=100, num_of_thread=16
)
# # Visualization
pos = nx.spring_layout(
G, scale = 2
) # The position can be calculated and passed to the drawing function
# +
fig = plt.figure(figsize=(12, 12))
ax = plt.gca()
draw_nodes_kwd = {"node_size": 30, "linewidths": 0.3}
ax, pos = cpnet.draw(
G,
sig_c,
sig_x,
ax,
draw_nodes_kwd=draw_nodes_kwd,
max_colored_group_num=5,
draw_edge=False,
layout_kwd = {"verbose":True, "iterations":500}
)
| notebooks/example4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#default_exp build_lib
# +
#export
from fastcore.utils import *
from fastcore.foundation import *
import pprint
# from json import loads
from jsonref import loads
from collections import namedtuple
# -
# # Internal - OpenAPI Parser
# This library leverages the [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification) to create a python client for the GitHub API. The OpenAPI specification contains metadata on all of the endpoints and how to access them properly. Using this metadata, we can construct a python client dynamically that updates automatically along with the OpenAPI Spec.
#export
GH_OPENAPI_URL = 'https://github.com/github/rest-api-description/raw/main/descriptions/api.github.com/api.github.com.json?raw=true'
_DOC_URL = 'https://docs.github.com/'
#hide
if 1:
s = urlread(GH_OPENAPI_URL)
js = loads(s)['paths']
t = js['/repos/{owner}/{repo}/hooks']
params = nested_idx(t, 'get','parameters')
[o['name'] for o in params if o['in']=='query']
# +
#export
_lu_type = dict(zip(
'NA string object array boolean integer'.split(),
map(PrettyString,'object str dict list bool int'.split())
))
def _detls(k,v):
res = [_lu_type[v.get('type', 'NA')]]
try: res.append(v['default'])
except KeyError: pass
return [k]+res
# -
#export
def build_funcs(nm='ghapi/metadata.py', url=GH_OPENAPI_URL, docurl=_DOC_URL):
"Build module metadata.py from an Open API spec and optionally filter by a path `pre`"
def _get_detls(o):
data = nested_idx(o, *'requestBody content application/json schema properties'.split()) or {}
url = o['externalDocs']['url'][len(docurl):]
params = o.get('parameters',None)
qparams = [p['name'] for p in params if p['in']=='query'] if params else []
d = [_detls(*o) for o in data.items()]
preview = nested_idx(o, 'x-github','previews',0,'name') or ''
return (o['operationId'], o['summary'], url, qparams, d, preview)
js = loads(urlread(url))
_funcs = [(path, verb) + _get_detls(detls)
for path,verbs in js['paths'].items() for verb,detls in verbs.items()
if 'externalDocs' in detls]
Path(nm).write_text("funcs = " + pprint.pformat(_funcs, width=360))
#hide
build_funcs()
# This module created by `build_funcs` contains a list of metadata for each endpoint, containing the path, verb, operation id, summary, documentation relative URL, and list of parameters (if any), e.g:
#export
GhMeta = namedtuple('GhMeta', 'path verb oper_id summary doc_url params data preview'.split())
from ghapi.metadata import funcs
GhMeta(*funcs[3])
# ## Export -
#hide
from nbdev.export import notebook2script
notebook2script()
| 90_build_lib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="Rq-8cdE6uSVj" executionInfo={"status": "ok", "timestamp": 1622965249509, "user_tz": -480, "elapsed": 46302, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}} outputId="18a90f47-3659-498b-f949-93821d1cbbfa"
# !apt-get install -y -qq software-properties-common python-software-properties module-init-tools
# !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
# !apt-get update -qq 2>&1 > /dev/null
# !apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
# !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
# !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
# + colab={"base_uri": "https://localhost:8080/"} id="PPU2-Bjaw3if" executionInfo={"status": "ok", "timestamp": 1622965254958, "user_tz": -480, "elapsed": 510, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}} outputId="2c10ddeb-6167-4f60-eeb0-d5a77c4bc379"
# gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
# + id="bWU91uoOxa0F" executionInfo={"status": "ok", "timestamp": 1622965255699, "user_tz": -480, "elapsed": 2, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}}
# !mkdir -p drive
# !google-drive-ocamlfuse -o nonempty drive
import os
import sys
os.chdir('drive')
# + id="keypfrnLhTp5" executionInfo={"status": "ok", "timestamp": 1622965260815, "user_tz": -480, "elapsed": 4334, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}}
import nltk
from functools import lru_cache
import re
import string
import numpy as np
import pandas as pd
import torch
from collections import Counter, defaultdict
from sklearn.preprocessing import OneHotEncoder
import torch.utils.data as Data
import torch.nn.functional as F
import torch
import torch.nn as nn
from torch.autograd import Variable
import os
import glob
from sklearn.utils import shuffle
import copy
# + colab={"base_uri": "https://localhost:8080/"} id="knN_KDWrqNHX" executionInfo={"status": "ok", "timestamp": 1622965260817, "user_tz": -480, "elapsed": 7, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}} outputId="7a551797-7fd0-4c68-94c8-5059378130f7"
if torch.cuda.is_available():
device = torch.device("cuda")
# print(f'There are {torch.cuda.device_count()} GPU(s) available.')
print(torch.cuda.device_count())
print('Device name:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
# + colab={"base_uri": "https://localhost:8080/"} id="mCiTJxaMqlt-" executionInfo={"status": "ok", "timestamp": 1622965260817, "user_tz": -480, "elapsed": 6, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}} outputId="27b3424d-9fef-42b5-f6a2-35a5a3ca223f"
# !/opt/bin/nvidia-smi
# + id="by-7uhU7n1cs" executionInfo={"status": "ok", "timestamp": 1622965260818, "user_tz": -480, "elapsed": 4, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}}
class Preprocessor:
def clean_text(self, text):
# Make text lowercase; remove text in square brackets, links, punctuation, and numbers.
# make text lowercase
text1 = text.lower()
# remove square brackets
text1 = re.sub('\[.*?\]', '', text1)
#remove <>
text1 = re.sub('<.*?>+', '', text1)
text1 = re.sub('\(.*?\)', ' ', text1)
text1 = re.sub('\{.*?\}', ' ', text1)
# remove links
text1 = re.sub('https?://\S+|www\.\S+', ' ', text1)
# remove punctuation
text1 = re.sub('[%s]' % re.escape(string.punctuation), '', text1)
# remove \n
# text = re.sub('\n', '', text)
# remove numbers
text1 = re.sub(r'\w*\d\w*', '', text1)
return text1
def __init__(self):
self.stem = lru_cache(maxsize=10000)(nltk.stem.SnowballStemmer('english').stem)
# self.stopwords = stopwords.words('english')
self.tokenize = nltk.tokenize.TreebankWordTokenizer().tokenize
def __call__(self, text):
text1 = self.clean_text(text)
tokens = self.tokenize(text1)
# tokens = [token for token in tokens if token not in self.stopwords]
# tokens = [self.stem(token) for token in tokens]
return tokens
# + id="jxP_rB3xoeLK" executionInfo={"status": "ok", "timestamp": 1622965260818, "user_tz": -480, "elapsed": 4, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}}
class DataLoader:
def __init__(self, data, batch_size, k, context_window = 1, preprocessor=Preprocessor(), enc_tokens = OneHotEncoder(), dev = 'cpu'):
self.df = data
self.dev = dev
self.batch_size = batch_size
self.context_window = context_window
self.padding_length = 75
self.fold_num = 10
# self.padding_length = 192
# self.first_sentence = 128
# self.second_sentence = self.padding_length - self.first_sentence
self.apply_preprocessor(preprocessor)
enc_tokens.fit(self.vocab)
self.split(0.9, 0.1)
self.k_fold_partition(fold_num = self.fold_num) # Divide ten folds
print("K Fold Partition Completed!")
train, validation = self.k_fold_split(k)
trainX, trainY = self.build_training_data(enc_tokens, train)
validationX, validationY = self.build_training_data(enc_tokens, validation)
testX, testY = self.build_training_data(enc_tokens, self.test)
print(f'Train Dataset Shape : {trainX.shape}')
print(f'Validation Dataset Shape : {validationX.shape}')
print(f'Test Dataset Shape : {testX.shape}')
self.train_dataset = self.get_batch(trainX, trainY)
self.validation_dataset = self.get_batch(validationX, validationY)
self.test_dataset = self.get_batch(testX, testY)
def get_batch(self, X, Y):
dataset = Data.TensorDataset(X, Y)
loader = Data.DataLoader(
dataset=dataset,
batch_size=self.batch_size,
shuffle=True,
# num_workers=2,
)
return loader
def remove_low_frequency(self, list):
new_list = []
for x in list:
if x in self.token_to_count.keys():
new_list.append(x)
return new_list
def apply_preprocessor(self, preprocessor):
self.df['tokens'] = [preprocessor(s) for s in self.df['sentence']]
self.df['tokens'] = [x[:self.padding_length] if len(x) > self.padding_length else x for x in self.df['tokens']]
# for index, row in self.df.iterrows():
# if len(row['tokens']) > self.max_length:
# row[toke]
self.token_to_count = Counter([x for l in self.df['tokens'] for x in l])
# tmp_token_to_count = self.token_to_count.copy()
# for index, value in tmp_token_to_count.items():
# if value <= self.vocab_frequency:
# self.token_to_count.pop(index)
# self.df['tokens'] = [self.remove_low_frequency(x) for x in self.df['tokens']]
self.max_length = self.get_max_length()
print(f'Max Length : {self.max_length}')
self.vocab = list([[term] for term in self.token_to_count.keys()])
print(f'Vocab Size : {len(self.vocab)}')
def get_max_length(self):
max_length = 0
for index, row in self.df.iterrows():
token_list = [x for x in row['tokens']]
tmp_length = len(token_list)
if tmp_length > max_length:
max_length = tmp_length
return max_length
def k_fold_partition(self, fold_num = 10):
# index = list(range(len(self.df))) # total index
batch_size = int(len(self.train) / fold_num) # the number of data for each fold
remain_num = len(self.train) - batch_size * fold_num # the remain data after partition
self.fold_data = []
fold_batch_list = [] # Average the remaining data to the folds
for fold in range(fold_num):
if remain_num > 0:
remain_num -= 1
fold_batch_list.append(batch_size + 1)
else:
fold_batch_list.append(batch_size)
fold_index = 0 # The starting position of each division data
for fold in range(fold_num):
fold_texts = [fold_index, fold_index + fold_batch_list[fold]]
self.fold_data.append(fold_texts)
fold_index = fold_index + fold_batch_list[fold]
def k_fold_split(self, k):
print(f'K : {k}, fold_data[k] : {self.fold_data[k]}')
validation = self.train.iloc[self.fold_data[k][0]: self.fold_data[k][1]]
train = self.train.iloc[0: self.fold_data[k][0]].append(self.train.iloc[self.fold_data[k][1]:])
train_num = len(self.train) - (self.fold_data[k][1] - self.fold_data[k][0])
if train_num % self.batch_size == 0:
self.no_batch = train_num / self.batch_size
else:
self.no_batch = int(train_num / self.batch_size) + 1
print(f' index : {self.no_batch}')
return train, validation
def split(self, train, test):
index = int(train * len(self.df))
self.train = self.df.iloc[0:index]
self.test = self.df.iloc[index:]
def build_training_data(self, enc, df):
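# One-hot encode each token with `enc` and zero-pad every sequence up to max_length before stacking into tensors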
X = []
Y = []
for index, row in df.iterrows():
tmp = [enc.transform([[t]]).toarray()[0] for t in row['tokens']]
if len(tmp) < self.max_length:
pad_length = self.max_length - len(tmp)
for i in range(pad_length):
tmp.append(np.zeros(len(self.vocab)))
X.append(tmp)
Y.append(row['label'])
X = np.array(X)
Y = np.array(Y)
X = torch.from_numpy(X).float()
Y = torch.from_numpy(Y).long()
print(f'Input Data Shape (sequence_num, sequence_len, vocab_size) : {X.shape}')
print(f'Input Label Shape : {Y.shape}')
return X, Y
# + id="gOBWpAvWrAdh" executionInfo={"status": "ok", "timestamp": 1622965469187, "user_tz": -480, "elapsed": 2, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}}
class RelexNet(nn.Module):
# Single-step RNN-style classifier.
# vocabulary_size is the one-hot input dimension, len_seq is the sequence length, and num_classes is the number of output categories.
def __init__(self, vocabulary_size, len_seq, num_classes):
super(RelexNet, self).__init__()
self.num_classes = num_classes
self.L = nn.Linear(vocabulary_size, num_classes)
self.dropout = nn.Dropout(p=0.5)
self.O = nn.Softmax(dim=1)
def forward(self, input, B_layer):
# X.shape = (batch, seq_len, vocab_size)
T = input.shape[1]
batch = input.shape[0]
predict_y = Variable(torch.zeros(batch, self.num_classes))
if B_layer is None:
B_layer = Variable(torch.zeros(batch, self.num_classes)).cuda()
for t in range(T):
tmp = input[:, t, :]
L_onestep = self.L(tmp)
L_onestep = self.dropout(L_onestep)
L_onestep = torch.sigmoid(L_onestep)
# L_onestep = F.relu6(L_onestep)
B_layer = torch.add(B_layer, L_onestep)
# print(B_layer)
if self.num_classes == 1:
O_layer = torch.sigmoid(B_layer)
else:
O_layer = self.O(B_layer)
return O_layer, B_layer
# + id="rE53K7W8sP4K" executionInfo={"status": "ok", "timestamp": 1622965469188, "user_tz": -480, "elapsed": 2, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}}
def train_model(dataset, model, optimizer, scheduler, num_epochs, dev):
losses = []
for epoch in range(num_epochs):
# training mode
# dataset.set_partition(dataset.train)
model.train()
total_train_loss = 0
total_train_correct = 0
count = 0
for x, y in dataset.train_dataset:
# for every batch in the training dataset perform one update step of the optimizer.
state = None
model.zero_grad()
y_h, state = model(x.to(dev), state)
loss = F.cross_entropy(y_h, y.to(dev))
optimizer.zero_grad()
# scheduler.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()
total_train_loss += loss.item()
total_train_correct += (y_h.argmax(-1) == y.cuda()).float().mean()
count += 1
average_train_loss = total_train_loss / count
average_train_accuracy = total_train_correct / count
losses.append(average_train_loss)
print('{} optim: {}'.format(epoch + 1, optimizer.param_groups[0]['lr']))
# print('{} optim: {}'.format(epoch, optimizer.param_groups[0]['lr']))
# print('{} scheduler: {}'.format(epoch, scheduler.get_lr()[0]))
# validation mode
model.eval()
total_valid_loss = 0
total_valid_correct = 0
count = 0
for x, y in dataset.test_dataset:
state = None
y_h, state = model(x.to(dev), state)
loss = F.cross_entropy(y_h, y.to(dev))
total_valid_loss += loss.item()
total_valid_correct += (y_h.argmax(-1) == y.cuda()).float().mean()
count += 1
average_valid_loss = total_valid_loss / count
losses.append((average_train_loss, average_valid_loss))
average_valid_accuracy = total_valid_correct / count
print(f'epoch {epoch + 1} accuracies: \t train: {average_train_accuracy}\t valid: {average_valid_accuracy} loss: {average_train_loss}\t')
# test mode
# dataset.set_partition(dataset.test)
model.eval()
total_test_correct = 0
count = 0
for x, y in dataset.test_dataset:
state = None
y_h, state = model(x.to(dev), state)
total_test_correct += (y_h.argmax(-1) == y.cuda()).float().mean()
count += 1
average_test_accuracy = total_test_correct / count
print(f'test accuracy {average_test_accuracy}')
return losses, (average_train_accuracy, average_valid_accuracy, average_test_accuracy)
# + colab={"base_uri": "https://localhost:8080/"} id="SbWvcYEurRyy" executionInfo={"status": "ok", "timestamp": 1622965608058, "user_tz": -480, "elapsed": 137467, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}} outputId="df1766ad-1ee8-4b53-d55d-c80cc1056c69"
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
k = 0
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
all_files = glob.glob("data/sentiment/*.csv")
data = pd.concat((pd.read_csv(f, header=None, index_col=None) for f in all_files))
data.columns = ['sentence', 'label']
data = data.sample(frac=1, random_state=1)
dataset = DataLoader(data, k=k, batch_size = batch_size, context_window=context_window, dev=dev)
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 2
model = RelexNet(vocabulary_size=vocabulary_size, len_seq=len_seq, num_classes=num_classes).cuda()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[8 * dataset.no_batch, 14 * dataset.no_batch],gamma = 0.1, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
# torch.save(model, os.path.join('classifier.pth'))
# + [markdown] id="bnFCCwTmWQF2"
# The code below was previously used for testing on the stance detection dataset
# and is no longer used.
# + colab={"base_uri": "https://localhost:8080/"} id="-21dIPGEMbM-" executionInfo={"elapsed": 546559, "status": "ok", "timestamp": 1613487138602, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}, "user_tz": -480} outputId="1d71f3ee-0276-454f-937e-f25a29f10e12"
# Abortion
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
# root_path = 'data/stance'
# files = ['abortion', 'gayRights', 'marijuana', 'obama']
# label = {'abortion' : 0, 'gayRights' : 1, 'marijuana' : 2, 'obama' : 3}
# list_sentence = []
# list_label = []
# for file in files:
# path = os.path.join('data', 'stance', file, '*.data')
# print(path)
# all_files = glob.glob(path)
# # print(all_files)
# for label_file in all_files:
# f = open(label_file, 'r', encoding='UTF-8')
# tmp = f.read()
# list_sentence.append(tmp)
# list_label.append(label[file])
# dict = {'sentence': list_sentence, 'label': list_label}
# data = pd.DataFrame(dict)
# data = data.sample(frac=1, random_state=1)
data = pd.read_csv("abortion.csv", index_col=None, error_bad_lines=False, encoding='UTF-8')
print(len(data))
# print(data)
data = data.sample(frac=1, random_state=256)
# data = shuffle(data)
dataset = DataLoader(data, k=0, batch_size = batch_size, context_window=context_window, dev=dev)  # k (fold index) is now required by DataLoader; 0 assumed
# print(f'Data Size: {dataset.df.size()}')
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 2
model = RelexNet(vocabulary_size=vocabulary_size, len_seq=len_seq, num_classes=num_classes).cuda()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.001,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[10 * dataset.no_batch, 15 * dataset.no_batch],gamma = 0.1, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
torch.save(model, os.path.join('saved_models', 'classifier_abortion.pth'))
# + colab={"base_uri": "https://localhost:8080/"} id="bXeaQG7IQlVF" executionInfo={"elapsed": 359231, "status": "ok", "timestamp": 1613487545549, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}, "user_tz": -480} outputId="c3e50c06-9b60-4b87-d4f2-83ec6d6ae3de"
# GayRights
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
data = pd.read_csv("gayRights.csv", index_col=None, error_bad_lines=False, encoding='UTF-8')
print(len(data))
# print(data)
data = data.sample(frac=1, random_state=1)
# data = shuffle(data)
dataset = DataLoader(data, k=0, batch_size = batch_size, context_window=context_window, dev=dev)  # k (fold index) is now required by DataLoader; 0 assumed
# print(f'Data Size: {dataset.df.size()}')
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 2
# model = FastText(len(dataset.token_to_id)+2, num_hidden, len(dataset.class_to_id)).to(dev)
model = RelexNet(vocabulary_size=vocabulary_size,len_seq=len_seq, num_classes=num_classes).cuda()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.001,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[10 * dataset.no_batch, 15 * dataset.no_batch],gamma = 0.1, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
torch.save(model, os.path.join('saved_models', 'classifier_gayRights.pth'))
# + colab={"base_uri": "https://localhost:8080/"} id="fZeMZ24XeFJd" executionInfo={"elapsed": 136282, "status": "ok", "timestamp": 1613487779476, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}, "user_tz": -480} outputId="45553b28-8e18-487a-f198-fe159ed5041a"
# Marijuana
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
data = pd.read_csv("marijuana.csv", index_col=None, error_bad_lines=False, encoding='UTF-8')
print(len(data))
# print(data)
data = data.sample(frac=1, random_state=1)
# data = shuffle(data)
dataset = DataLoader(data, k=0, batch_size = batch_size, context_window=context_window, dev=dev)  # k (fold index) is now required by DataLoader; 0 assumed
# print(f'Data Size: {dataset.df.size()}')
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 2
# model = FastText(len(dataset.token_to_id)+2, num_hidden, len(dataset.class_to_id)).to(dev)
model = RelexNet(vocabulary_size=vocabulary_size,len_seq=len_seq, num_classes=num_classes).cuda()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.001,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[10 * dataset.no_batch, 15 * dataset.no_batch],gamma = 0.1, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
torch.save(model, os.path.join('saved_models', 'classifier_marijuana.pth'))
# + colab={"base_uri": "https://localhost:8080/"} id="Ul7Qg4IYgS9C" executionInfo={"elapsed": 232177, "status": "ok", "timestamp": 1613488319259, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}, "user_tz": -480} outputId="bd1ea312-a28b-43c3-d48b-3e7fea2c1067"
# Obama
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
data = pd.read_csv("obama.csv", index_col=None, error_bad_lines=False, encoding='UTF-8')
print(len(data))
# print(data)
data = data.sample(frac=1, random_state=1)
# data = shuffle(data)
dataset = DataLoader(data, k=0, batch_size = batch_size, context_window=context_window, dev=dev)  # k (fold index) is now required by DataLoader; 0 assumed
# print(f'Data Size: {dataset.df.size()}')
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 2
# model = FastText(len(dataset.token_to_id)+2, num_hidden, len(dataset.class_to_id)).to(dev)
model = RelexNet(vocabulary_size=vocabulary_size, len_seq=len_seq, num_classes=num_classes).cuda()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.001,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[10 * dataset.no_batch, 15 * dataset.no_batch],gamma = 0.1, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
torch.save(model, os.path.join('saved_models', 'classifier_obama.pth'))
# + colab={"base_uri": "https://localhost:8080/", "height": 526} id="vnSy7vADC0kK" executionInfo={"elapsed": 11573, "status": "error", "timestamp": 1612420266396, "user": {"displayName": "\u5218\u4e16\u8431", "photoUrl": "", "userId": "15293057863655146379"}, "user_tz": -480} outputId="55951f28-644f-457c-8765-a8f3372b30ca"
num_epochs = 20
batch_size = 32 # Mini-batch size
context_window = 1
dev = 'cuda' if torch.cuda.is_available() else 'cpu' # If you have a GPU installed, use that, otherwise CPU
print(dev)
print('Loading data...')
# root_path = 'data/stance'
# files = ['abortion', 'gayRights', 'marijuana', 'obama']
# label = {'abortion' : 0, 'gayRights' : 1, 'marijuana' : 2, 'obama' : 3}
# list_sentence = []
# list_label = []
# for file in files:
# path = os.path.join('data', 'stance', file, '*.data')
# print(path)
# all_files = glob.glob(path)
# # print(all_files)
# for label_file in all_files:
# f = open(label_file, 'r', encoding='UTF-8')
# tmp = f.read()
# list_sentence.append(tmp)
# list_label.append(label[file])
# dict = {'sentence': list_sentence, 'label': list_label}
# data = pd.DataFrame(dict)
# data = data.sample(frac=1, random_state=1)
data = pd.read_csv("stance_dataset.csv", index_col=None, error_bad_lines=False, encoding='UTF-8')
print(len(data))
# print(data)
data = data.sample(frac=1, random_state=1)
# data = shuffle(data)
dataset = DataLoader(data, k=0, batch_size = batch_size, context_window=context_window, dev=dev)  # k (fold index) is now required by DataLoader; 0 assumed
# print(f'Data Size: {dataset.df.size()}')
print("Data Ready!")
vocabulary_size = len(dataset.vocab)
len_seq = dataset.max_length
num_classes = 4
# model = FastText(len(dataset.token_to_id)+2, num_hidden, len(dataset.class_to_id)).to(dev)
model = RelexNet(vocabulary_size=vocabulary_size, len_seq=len_seq, num_classes=num_classes).cuda()  # dropped `enc`/`modifier_size`, which the RelexNet defined above does not accept
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(),lr=0.005,betas=(0.9,0.99))
# optimizer = torch.optim.Adam(model.parameters(),lr=0.01,betas=(0.9,0.99), weight_decay=10)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,milestones=[3 * dataset.no_batch, 15 * dataset.no_batch],gamma = 0.9, last_epoch=-1)
losses, accuracies = train_model(dataset, model, optimizer, scheduler, num_epochs, dev=dev)
torch.save(model, os.path.join('saved_models', 'classifier2.pth'))
# + id="cgJa4WP1rkU4"
torch.save(model, os.path.join('classifier.pth'))
# + id="rdJa617xw7N3"
# !pip install kora
from kora import console
console.start() # and click link
# + id="xzzcdPQuRcyw"
| relexnet (without modifier).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Data processing for Cereals
# # Setup
# ## Library import
# We import all the required Python libraries
import numpy as np
import pandas as pd
import geopandas as gpd
# ## Read data
# **Comunidades autonomas**
comunidades = gpd.read_file(f'../../datasets/processed/comunidades.geojson')
comunidades.sort_values(['CO_CCAA'], inplace = True)
comunidades
# **Cereal yield (Rendimiento cereales)**
df = pd.read_excel('../../datasets/raw/crops/cereales/CEREALES _clean.xlsx')
df = pd.merge(comunidades[['CO_CCAA', 'DS_CCAA']], df, how='left', left_on='DS_CCAA', right_on='Comunidad autonoma')
df.drop(columns=['Comunidad autonoma', 'Codigo'], inplace=True)
df
columns = list(df.columns)
df_new = pd.DataFrame(columns=['CO_CCAA', 'dataset', 'indicator', 'scenario', 'value', 'cereal', 'unit'])
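# Reshape from wide to long: for each of the four cereals, column i holds the RCP4.5 value and column i+4 the RCP8.5 value (scaled by 100 to match the '%' unit)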
for i in np.arange(4)+2:
df_tmp = pd.concat([df[['CO_CCAA']], df[['CO_CCAA']]])
df_tmp['scenario'] = len(df) * ['rcp45'] + len(df) * ['rcp85']
df_tmp['value'] = list(df[columns[i]]*100) + list(df[columns[i+4]]*100)
df_tmp['cereal'] = columns[i].split(' ')[0]
df_tmp['unit'] = '%'
df_tmp['dataset'] = 'PESETA'
df_tmp['indicator'] = 'yield_change_per'
df_new = pd.concat([df_new, df_tmp])
df_new.reset_index(drop=True, inplace=True)
df_new
# **Save table**
df_new.to_csv(f'../../datasets/processed/cereal_indicators/cereal_indicators_comunidades_autonomas.csv', index=False)
| data/notebooks/Lab/14_Datos_Cereales.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# +
links = pd.read_csv("ml-youtube.csv")
links.set_index('movieId')
movies = pd.read_csv("movies.csv")
movies.set_index("movieId")
df = links.join(movies, lsuffix="links", rsuffix="")
df.drop(['movieIdlinks','titlelinks'], axis=1, inplace=True)
df.index.name = 'ID'
df = pd.concat(
[
df,
pd.DataFrame(
[[None]*len(genres)],
index=df.index,
columns=genres
)
], axis=1
)
for ind,row in df.iterrows():
for g in genres:
if g in row['genres']:
df.loc[ind,g] = 1
else:
df.loc[ind,g] = 0
print(df.head(10))
# -
df.head(10)
df.drop('genres',inplace=True, axis=1)
df.head(5)
# dropping things with no genre
df.drop(df[df['(no genres listed)'] == 1].index,inplace=True)
df.drop('(no genres listed)',axis=1,inplace=True)
print(df.columns)
df.drop('IMAX',axis=1,inplace=True)
print(df.columns)
df.to_csv('preprocessed_movies.csv')
| Preprocess.ipynb |
#!/usr/bin/env python
# ---
# jupyter:
# jupytext:
# cell_metadata_filter: -all
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tfrl-cookbook
# language: python
# name: tfrl-cookbook
# ---
# Temporal Difference (TD) learning
# Chapter 2, TensorFlow 2 Reinforcement Learning Cookbook | <NAME>
import numpy as np
from envs.gridworldv2 import GridworldV2Env
from value_function_utils import visualize_grid_state_values
def temporal_difference_learning(env, max_episodes):
grid_state_values = np.zeros((len(env.distinct_states), 1))
grid_state_values[env.goal_state] = 1
grid_state_values[env.bomb_state] = -1
# v: state-value function
v = grid_state_values
gamma = 0.99 # Discount factor
alpha = 0.01 # learning rate
for episode in range(max_episodes):
state = env.reset()
done = False
while not done:
action = env.action_space.sample() # random policy
next_state, reward, done = env.step(action)
# State-value function updates using TD(0)
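# v(s) <- v(s) + alpha * [reward + gamma * v(s') - v(s)]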
v[state] += alpha * (reward + gamma * v[next_state] - v[state])
state = next_state
visualize_grid_state_values(grid_state_values.reshape((3, 4)))
if __name__ == "__main__":
max_episodes = 4000
env = GridworldV2Env(step_cost=-0.1, max_ep_length=30)
temporal_difference_learning(env, max_episodes)
| Chapter02/3_temporal_difference_learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: dev
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# # Read the CSV and Perform Basic Data Cleaning
df = pd.read_csv("cumulative.csv")
df = df.drop(columns=["rowid", "kepid", "kepoi_name", "kepler_name", "koi_pdisposition", "koi_score", "koi_tce_delivname"])
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
# # Create a Train Test Split
#
# Use `koi_disposition` for the y values
# Define X and y values for the model
X = df.drop('koi_disposition', axis = 1)
y = df['koi_disposition']
print (X.shape, y.shape)
# +
# Import ML dependencies from sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from keras.utils import to_categorical
# Split data into training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)
# -
X_train.head()
# # Pre-processing
#
# Scale the data using the MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
print(scaler.fit(X_train))
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# # Train the Support Vector Machine
# Create the SVC model
from sklearn.svm import SVC
model2 = SVC(kernel='linear')
model2
# Fit the model and predict
model2.fit(X_train_scaled, y_train)
predictions = model2.predict(X_test_scaled)
print(f"Training Data Score: {model2.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {model2.score(X_test_scaled, y_test)}")
target_names = ['CANDIDATE', 'CONFIRMED', 'FALSE POSITIVE']  # alphabetical, matching the default label order used by classification_report
# +
from sklearn.metrics import classification_report
predictions = model2.predict(X_test_scaled)
# Calculate the classification report for the Grid Search model
print(classification_report(y_test, predictions,
target_names=target_names))
# +
# Label encode categorical variable
# Step 1: Label-encode data set
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
# Step 2: Convert encoded labels to one-hot-encoding
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
# +
# Use Deep Learning model
#Import dependencies
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
# first, create a normal neural network with 40 inputs, 100 hidden nodes, and 3 outputs
from keras.models import Sequential
from keras.layers import Dense
deep_model = Sequential()
deep_model.add(Dense(units=100, activation='relu', input_dim=40))
deep_model.add(Dense(units=3, activation='softmax'))
# -
# Compile the model
deep_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
deep_model.summary()
# Fit the model to the training data
deep_model.fit(
X_train_scaled,
y_train_categorical,
epochs=100,
shuffle=True,
verbose=2
)
# # Hyperparameter Tuning
#
# Use `GridSearchCV` to tune the `C` and `gamma` parameters
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1, 5, 10, 50],
'gamma': [0.0001, 0.0005, 0.001, 0.005]}
grid2 = GridSearchCV(model2, param_grid, verbose=3)
# Train the model with GridSearch
grid2.fit(X_train_scaled, y_train)
print(grid2.best_params_)
print(grid2.best_score_)
| starter_code/svm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extracting Emittance estimate from Measured Luminosity
# ~ <NAME>, 2018
#
# In the LHC, the two high luminosity experiments are measuring the delivered luminosity bunch-by-bunch. Given that the luminosity formula is defined as
#
#
#
# $\mathcal{L}_{\mathrm{experiment}} = \frac{N_{b1}N_{b2}\,n_{b}\,f_{rev}}{4\pi\,\sigma_X\sigma_{\parallel}}\cdot \frac{1}{\sqrt{1+\left(\frac{\sigma_z\cdot\phi}{2\sigma_X}\right)^2}}\,,$
#
# where
# - $X$ denotes the crossing plane for the experiment and $\parallel$ the separation plane,
# - $N_{bi}$ the bunch charge of beam $i$
# - $n_b$ the total number of bunches
# - $f_{rev}$ the revolution frequency of the LHC (i.e. 11.245 kHz)
# - $\sigma_{i} = \sqrt{\beta^{*}_{i} \cdot \frac{\varepsilon_{n,i}}{\gamma_{rel}}}$ where $i=X,\parallel$ the plane
# - $\sigma_{z}$ the longitudinal RMS beam size
# - $\phi$ the full crossing angle
#
#
# and the fact that the two experiments have their crossing planes rotated by $90^{\circ}$ we can solve the system of luminosity equations and extract from the measured luminosities of ATLAS and CMS a pair of emittance $(\varepsilon_{n,X}, \varepsilon_{n,\parallel})$ solutions.
import numpy as np
from scipy.constants import c
import glob
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import pickle
import gzip
# Using either Mathematica or SymPy one can find the closed-form solution to the system; a small symbolic sketch follows, and the resulting expressions are implemented below.
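# As a minimal symbolic sketch of that derivation (assuming SymPy is installed; the symbol names below are illustrative, and round optics, i.e. the same $\beta^{*}$ in both planes, is assumed):
# +
import sympy as sp

enx, eny = sp.symbols('epsilon_x epsilon_y', positive=True)
N1_, N2_, nb_, frev_, gamma_, beta_, sigz_, phi_, LAT_, LCMS_ = sp.symbols(
    'N1 N2 n_b f_rev gamma beta sigma_z phi L_AT L_CMS', positive=True)

sig_x = sp.sqrt(beta_ * enx / gamma_)   # transverse beam size in the crossing plane of ATLAS
sig_y = sp.sqrt(beta_ * eny / gamma_)   # transverse beam size in the crossing plane of CMS
head_on = N1_ * N2_ * nb_ * frev_ / (4 * sp.pi * sig_x * sig_y)

# The two experiments have their crossing planes rotated by 90 degrees
eq_atlas = sp.Eq(LAT_, head_on / sp.sqrt(1 + (sigz_ * phi_ / (2 * sig_x))**2))
eq_cms = sp.Eq(LCMS_, head_on / sp.sqrt(1 + (sigz_ * phi_ / (2 * sig_y))**2))

# sp.solve([eq_atlas, eq_cms], [enx, eny])  # yields the closed form used in getEmittancesFromLumi below
# -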
def getEmittancesFromLumi(LAT, LCMS, beta, bunch_length1, bunch_length2, xing, N1, N2, nb, frev, gamma):
sigz = (bunch_length1+bunch_length2)/2.0
p = np.pi
enx = (-16*gamma*p**2*sigz**4*xing**4*LAT**2*LCMS**2 + frev**2*gamma*N1**2*N2**2*nb**2*(-LAT**2+LCMS**2) + beta*np.sqrt((gamma**2*(64*frev**2*N1**2*N2**2*nb**2*p**2*sigz**4*xing**4*LAT**2*LCMS**4+(frev**2*N1**2*N2**2*nb**2*LCMS**2-LAT**2*(frev**2*N1**2*N2**2*nb**2+16*p**2*sigz**4*xing**4*LCMS**2))**2))/beta**2)) /(32*beta*p**2*sigz**2*xing**2*LAT**2*LCMS**2)
eny = (2*frev**2*gamma**2*N1**2*N2**2*nb**2*sigz**2*xing**2*LAT**2)/(beta*(16*gamma*p**2*sigz**4*xing**4*LAT**2*LCMS**2+frev**2*gamma*N1**2*N2**2*nb**2*(-LAT**2+LCMS**2)+beta*np.sqrt(gamma**2*(64*frev**2*N1**2*N2**2*nb**2*p**2*sigz**4*xing**4*LAT**2*LCMS**4+(frev**2*N1**2*N2**2*nb**2*LCMS**2-LAT**2*(frev**2*N1**2*N2**2*nb**2+16*p**2*sigz**4*xing**4*LCMS**2))**2)/beta**2)))
return enx,eny
# ## Example
LAT = 8.65010502e34 # atlas luminosity [Hz/cm^2]
LCMS = 8.63010502e34 # cms luminosity [Hz/cm^2]
nb = 1 # bunch (single here)
frev = 11245.5 # revolution frequency [Hz]
gamma = 6927.63 # relativistic factor
N1 = 1.147e11 # bunch charge of beam 1 [ppb]
N2 = 1.142e11 # bunch charge of beam 2 [ppb]
blen1 = 0.081224 # bunch length of beam 1 [m]
blen2 = 0.081224 # bunch length of beam 1 [m]
beta=0.3 # beta star at the IP (assuming round beams i.e. bx=b//)
xing=161.0e-6 # half-crossing angle
enx, eny = getEmittancesFromLumi(LAT, LCMS, beta, blen1, blen2, xing, N1, N2, nb,frev, gamma)
print("Enx = {:.4f} um".format(enx*1.0e6))
print("Eny = {:.4f} um".format(eny*1.0e6))
# ----
#
#
# To loop over all fills in Lumimod repository:
flist = [int(x.split('_')[-1]) for x in glob.glob('/eos/project/l/lhc-lumimod/LuminosityFollowUp/2018/procdata/'+"*")]
# +
gamma = 6927.63
frev = 11245.5
nb = 1
fills = []
enx_bsrt_mean = []
eny_bsrt_mean = []
enx_bsrt_std = []
eny_bsrt_std = []
enx_lumi_mean = []
eny_lumi_mean = []
enx_lumi_std = []
eny_lumi_std = []
filled_slots = []
for filln in flist:
try:
with gzip.open("/eos/project/l/lhc-lumimod/LuminosityFollowUp/2018/procdata/fill_{}/fill_{}_lumi_meas.pkl.gz".format(filln, filln), 'rb') as fid:
meas = pickle.load(fid)
with gzip.open("/eos/project/l/lhc-lumimod/LuminosityFollowUp/2018/procdata/fill_{}/fill_{}.pkl.gz".format(filln, filln), 'rb') as fid:
sb = pickle.load(fid)
except:
print('Skipping file: {}'.format(filln))
continue
print('Working on fill {}'.format(filln))
filled_slots.append(len(sb['slots_filled_coll'][1])+len(sb['slots_filled_noncoll'][1]))
intens_b1 = np.array(sb['b_inten_interp_coll'][1][0])
intens_b2 = np.array(sb['b_inten_interp_coll'][2][0])
blen_b1 = np.array(sb['bl_interp_m_coll'][1][0])
blen_b2 = np.array(sb['bl_interp_m_coll'][2][0])
en1h = np.array(sb['eh_interp_coll'][1][0])
en1v = np.array(sb['ev_interp_coll'][1][0])
en2h = np.array(sb['eh_interp_coll'][2][0])
en2v = np.array(sb['ev_interp_coll'][2][0])
beta = sb['betastar'][1][0]
xing_1 = sb['xing_angle'][1][0]
xing_5 = sb['xing_angle'][5][0]
xing = (xing_1+xing_5)/2.0
emit_x_conv_lumi = []
emit_y_conv_lumi = []
emit_x_conv_data = []
emit_y_conv_data = []
for i_slot in range(len(meas['ATLAS']['bunch_lumi'][0])):
LAT = meas['ATLAS']['bunch_lumi'][0][i_slot]
LCMS = meas['CMS']['bunch_lumi'][0][i_slot]
tmp_enx, tmp_eny = getEmittancesFromLumi(LAT, LCMS, beta/100., blen_b1[i_slot], blen_b2[i_slot], xing/2.0, intens_b1[i_slot], intens_b2[i_slot], nb , frev, gamma)
if i_slot == 1:
print(en1h[i_slot], en1v[i_slot], en2h[i_slot], en2v[i_slot], '|', LAT, LCMS, beta/100., blen_b1[i_slot], blen_b2[i_slot], xing/2.0, intens_b1[i_slot], intens_b2[i_slot], nb, frev, gamma, '==>', tmp_enx, tmp_eny)
emit_x_conv_lumi.append(tmp_enx)
emit_y_conv_lumi.append(tmp_eny)
conv_x = (en1h[i_slot] + en2h[i_slot])/2.0
conv_y = (en1v[i_slot] + en2v[i_slot])/2.0
emit_x_conv_data.append(conv_x)
emit_y_conv_data.append(conv_y)
fills.append(filln)
enx_bsrt_mean.append(np.nanmean(emit_x_conv_data))
eny_bsrt_mean.append(np.nanmean(emit_y_conv_data))
enx_bsrt_std.append(np.nanstd(emit_x_conv_data))
eny_bsrt_std.append(np.nanstd(emit_y_conv_data))
enx_lumi_mean.append(np.nanmean(emit_x_conv_lumi)*1.0e6)
eny_lumi_mean.append(np.nanmean(emit_y_conv_lumi)*1.0e6)
enx_lumi_std.append(np.nanstd(emit_x_conv_lumi)*1.0e6)
eny_lumi_std.append(np.nanstd(emit_y_conv_lumi)*1.0e6)
print('done')
fills = np.array(fills )
enx_bsrt_mean = np.array(enx_bsrt_mean)
eny_bsrt_mean = np.array(eny_bsrt_mean)
enx_bsrt_std = np.array(enx_bsrt_std )
eny_bsrt_std = np.array(eny_bsrt_std )
enx_lumi_mean = np.array(enx_lumi_mean)
eny_lumi_mean = np.array(eny_lumi_mean)
enx_lumi_std = np.array(enx_lumi_std )
eny_lumi_std = np.array(eny_lumi_std )
filled_slots = np.array(filled_slots )
# -
# ... and visualize the result
# +
fig = plt.figure(1, figsize=(12,9))
ax1 = plt.subplot(211)
ax2 = plt.subplot(212)
ax1.errorbar(fills, enx_bsrt_mean, yerr=enx_bsrt_std, c='#4C48FF', ls='None')
ax1.errorbar(fills, enx_lumi_mean, yerr=enx_lumi_std, c='#FF4948', ls='None')
ax1.scatter(fills, enx_bsrt_mean, c='#4C48FF', label='BSRT')
ax1.scatter(fills, enx_lumi_mean, c='#FF4948', label='Luminosity')
ax2.errorbar(fills, eny_bsrt_mean, yerr=eny_bsrt_std, c='#4C48FF', ls='None')
ax2.errorbar(fills, eny_lumi_mean, yerr=eny_lumi_std, c='#FF4948', ls='None')
ax2.scatter(fills, eny_bsrt_mean, c='#4C48FF', label='BSRT')
ax2.scatter(fills, eny_lumi_mean, c='#FF4948', label='Luminosity')
ax1.set_ylim(1.5, 3)
ax2.set_ylim(1.5, 3)
ax1.set_ylabel('Horizontal Emittance [$\mu$m]', fontsize=18)
ax2.set_ylabel('Vertical Emittance [$\mu$m]', fontsize=18)
ax1.set_title("Emittance Comparison from BSRT and Luminosity", fontsize=20, y=1.05)
leg = ax1.legend(loc='upper left', frameon=True, fancybox=True, ncol=2)
frame = leg.get_frame()
frame.set_color('white')
ax1.grid('on')
ax2.grid('on')
plt.setp(ax1.get_xticklabels(), visible=False, rotation=90);
plt.setp(ax1.get_yticklabels(), fontsize=16);
plt.setp(ax2.get_xticklabels(), fontsize=16, visible=True, rotation=90);
plt.setp(ax2.get_yticklabels(), fontsize=16, visible=True);
| emittances/extractEmittanceFromLuminosity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ''
# name: sagemath
# ---
# + language="html"
# <link href="http://mathbook.pugetsound.edu/beta/mathbook-content.css" rel="stylesheet" type="text/css" />
# <link href="https://aimath.org/mathbook/mathbook-add-on.css" rel="stylesheet" type="text/css" />
# <style>.subtitle {font-size:medium; display:block}</style>
# <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italic,600,600italic" rel="stylesheet" type="text/css" />
# <link href="https://fonts.googleapis.com/css?family=Inconsolata:400,700&subset=latin,latin-ext" rel="stylesheet" type="text/css" /><!-- Hide this cell. -->
# <script>
# var cell = $(".container .cell").eq(0), ia = cell.find(".input_area")
# if (cell.find(".toggle-button").length == 0) {
# ia.after(
# $('<button class="toggle-button">Toggle hidden code</button>').click(
# function (){ ia.toggle() }
# )
# )
# ia.hide()
# }
# </script>
#
# -
# **Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard.
# $\newcommand{\identity}{\mathrm{id}}
# \newcommand{\notdivide}{\nmid}
# \newcommand{\notsubset}{\not\subset}
# \newcommand{\lcm}{\operatorname{lcm}}
# \newcommand{\gf}{\operatorname{GF}}
# \newcommand{\inn}{\operatorname{Inn}}
# \newcommand{\aut}{\operatorname{Aut}}
# \newcommand{\Hom}{\operatorname{Hom}}
# \newcommand{\cis}{\operatorname{cis}}
# \newcommand{\chr}{\operatorname{char}}
# \newcommand{\Null}{\operatorname{Null}}
# \newcommand{\lt}{<}
# \newcommand{\gt}{>}
# \newcommand{\amp}{&}
# $
# <div class="mathbook-content"></div>
# <div class="mathbook-content"><p id="p-2172">We already know that the converse of Lagrange's Theorem is false. If $G$ is a group of order $m$ and $n$ divides $m\text{,}$ then $G$ does not necessarily possess a subgroup of order $n\text{.}$ For example, $A_4$ has order 12 but does not possess a subgroup of order 6. However, the Sylow Theorems do provide a partial converse for Lagrange's Theorem; in certain cases they guarantee us subgroups of specific orders. These theorems yield a powerful set of tools for the classification of all finite nonabelian groups.</p></div>
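# As a quick computational sketch (assuming the SageMath kernel this notebook declares; `AlternatingGroup`, `subgroups`, and `sylow_subgroup` are standard Sage group methods), the $A_4$ example above can be checked directly:
A4 = AlternatingGroup(4)
print(sorted(set(H.order() for H in A4.subgroups())))               # 6 does not appear among the subgroup orders
print(A4.sylow_subgroup(2).order(), A4.sylow_subgroup(3).order())   # Sylow subgroups of orders 4 and 3 do exist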
# <div class="mathbook-content"><nav class="summary-links"><li><a href="section-sylow-theorems.ipynb"><span class="codenumber">15.1</span><span class="title">The Sylow Theorems</span></a></li><li><a href="section-sylow-applications.ipynb"><span class="codenumber">15.2</span><span class="title">Examples and Applications</span></a></li><li><a href="exercises-sylow.ipynb"><span class="codenumber">15.3</span><span class="title">Exercises</span></a></li><li><a href="sylow-exercises-project.ipynb"><span class="codenumber">15.4</span><span class="title">A Project</span></a></li><li><a href="sylow-references.ipynb"><span class="codenumber">15.5</span><span class="title">References and Suggested Readings</span></a></li><li><a href="sylow-sage.ipynb"><span class="codenumber">15.6</span><span class="title">Sage</span></a></li><li><a href="sylow-sage-exercises.ipynb"><span class="codenumber">15.7</span><span class="title">Sage Exercises</span></a></li></nav></div>
| aata/sylow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# %config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import tensorflow as tf
session = tf.InteractiveSession()
# -
from exoplanet.tri_diag_solve import tri_diag_solve
from exoplanet.interp import cubic_op
# +
class CubicInterpolator(object):
def __init__(self, x, y, endpoints=None, dtype=tf.float32, name=None):
with tf.name_scope(name, "CubicInterpolator"):
x = tf.cast(x, dtype)
y = tf.cast(y, dtype)
# Compute the deltas
size = tf.shape(x)[-1]
axis = tf.rank(x) - 1
dx = tf.gather(x, tf.range(1, size), axis=axis) - tf.gather(x, tf.range(size-1), axis=axis)
dy = tf.gather(y, tf.range(1, size), axis=axis) - tf.gather(y, tf.range(size-1), axis=axis)
# Compute the slices
upper_inds = tf.range(1, size-1)
lower_inds = tf.range(size-2)
s_up = lambda a: tf.gather(a, upper_inds, axis=axis)
s_lo = lambda a: tf.gather(a, lower_inds, axis=axis)
dx_up = s_up(dx)
dx_lo = s_lo(dx)
dy_up = s_up(dy)
dy_lo = s_lo(dy)
first = lambda a: tf.gather(a, tf.zeros(1, dtype=tf.int64), axis=axis)
last = lambda a: tf.gather(a, [size-2], axis=axis)
diag = 2*tf.concat((first(dx), dx_up+dx_lo, last(dx)), axis)
upper = dx
lower = dx
Y = 3*tf.concat((first(dy)/first(dx),
dy_up/dx_up - dy_lo/dx_lo,
-last(dy)/last(dx)), axis)
# Solve the tri-diagonal system
c = tri_diag_solve(diag, upper, lower, Y)
c_up = tf.gather(c, tf.range(1, size), axis=axis)
c_lo = tf.gather(c, tf.range(size-1), axis=axis)
b = dy / dx - dx * (c_up + 2*c_lo) / 3
d = (c_up - c_lo) / (3*dx)
self.x = x
self.y = y
self.b = b
self.c = c_lo
self.d = d
def evaluate(self, t, name=None):
with tf.name_scope(name, "evaluate"):
res = cubic_op.cubic_gather(t, self.x, self.y, self.b, self.c, self.d)
tau = t - res.xk
return res.ak + res.bk * tau + res.ck * tau**2 + res.dk * tau**3
# inds = cubic_op.search_sorted(self.x, t)
# if self._endpoints == "natural":
# inds = tf.clip_by_value(inds-1,
# tf.constant(0, dtype=tf.int64),
# tf.cast(tf.shape(self.x)[-1], tf.int64) - 2)
# inds = tf.stack(tf.meshgrid(
# *[tf.range(s, dtype=tf.int64) for s in t.shape], indexing="ij")[:-1]
# + [inds], axis=-1)
# print(tf.gather_nd(self.y_ext, inds).eval())
# tau = t - tf.gather_nd(self.x_ext, inds)
# mod = tf.gather_nd(self.y_ext, inds)
# mod += tau * tf.gather_nd(self.b, inds)
# mod += tau**2 * tf.gather_nd(self.c, inds)
# mod += tau**3 * tf.gather_nd(self.d, inds)
# return mod
# +
T = tf.float64
np.random.seed(123)
x = np.sort(np.random.uniform(1, 9, (3, 8)))
# x = np.linspace(1, 9, 80)
y = np.sin(x)
t = np.linspace(0, 10, 500)
t = t + np.zeros((x.shape[0], len(t)))
x_t = tf.constant(x, dtype=T)
y_t = tf.constant(y, dtype=T)
t_t = tf.constant(t, dtype=T)
interp = CubicInterpolator(x_t, y_t, dtype=T)
model = interp.evaluate(t_t)
# -
interp.x.shape
tf.gradients(model, y_t)
res = cubic_op.cubic_gather(t_t, x_t, y_t, interp.b, interp.c, interp.d)
session.run(tf.gradients(res.ak, y_t))
session.run(res)
tau = t_t - res.xk
model2 = res.ak + res.bk * tau + res.ck * tau**2 + res.dk * tau**3
model.eval() - model2.eval()
# plt.plot(x.T, y.T, ".")
plt.plot(t.T, session.run(tf.gradients(model2, t_t))[0].T);
plt.plot(t.T, session.run(tf.gradients(model, t_t))[0].T, "--");
plt.plot(x.T, y.T, ".")
plt.plot(t.T, model.eval().T);
# +
# # http://banach.millersville.edu/~bob/math375/CubicSpline/main.pdf
# T = tf.float64
# np.random.seed(42)
# x = np.sort(np.random.uniform(1, 9, 8))
# # x = np.linspace(1, 9, 80)
# y = np.sin(x/2)
# t = np.linspace(0, 10, 500)
# pa = np.polyfit(x[:3], y[:3], 2)
# pb = np.polyfit(x[-3:], y[-3:], 2)
# fpa = np.polyval(np.polyder(pa), x[0])
# fpb = np.polyval(np.polyder(pb), x[-1])
# print(fpa, fpb)
# +
x_t = tf.constant(x, dtype=T)
y_t = tf.constant(y, dtype=T)
t_t = tf.constant(t, dtype=T)
dx = x_t[1:] - x_t[:-1]
dy = y_t[1:] - y_t[:-1]
fpa_t = tf.constant(0.0, dtype=T)
fpb_t = tf.constant(0.0, dtype=T)
diag = tf.concat((2*dx[:1], 2*(dx[1:]+dx[:-1]), 2*dx[-1:]), 0)
upper = dx
lower = dx
Y = tf.concat((3 * dy[:1]/dx[:1] - 3 * fpa_t,
3 * (dy[1:]/dx[1:] - dy[:-1]/dx[:-1]),
3 * fpb_t - 3 * dy[-1:]/dx[-1:]), 0)
# diag = tf.concat((tf.ones(1, dtype=T), 2*(dx[1:]+dx[:-1]), tf.ones(1, dtype=T)), 0)
# upper = tf.concat((tf.zeros(1, dtype=T), dx[1:]), 0)
# lower = tf.concat((dx[:-1], tf.zeros(1, dtype=T)), 0)
# Y = tf.concat((tf.zeros(1, dtype=T),
# 3 * (dy[1:]/dx[1:] - dy[:-1]/dx[:-1]),
# tf.zeros(1, dtype=T)), 0)
c = tri_diag_solve(diag, upper, lower, Y)
b = dy / dx - dx * (c[1:] + 2*c[:-1]) / 3
d = (c[1:] - c[:-1]) / (3*dx)
b_ext = tf.concat(([fpa_t], b, [fpb_t]), 0)
c_ext = tf.concat((tf.zeros(1, dtype=T), c[:-1], tf.zeros(1, dtype=T)), 0)
d_ext = tf.concat((tf.zeros(1, dtype=T), d, tf.zeros(1, dtype=T)), 0)
x_ext = tf.concat((x_t[:1], x_t), 0)
y_ext = tf.concat((y_t[:1], y_t), 0)
# b_ext = tf.concat((b[:1], b, b[-1:]), 0)
# c_ext = tf.concat((c[:1], c[:-1], c[-2:-1]), 0)
# d_ext = tf.concat((d[:1], d, d[-1:]), 0)
# x_ext = tf.concat((x_t[:1], x_t), 0)
# y_ext = tf.concat((y_t[:1], y_t), 0)
inds = cubic_op.search_sorted(x_t, t_t)  # search_sorted lives in cubic_op (imported above)
# inds = tf.clip_by_value(inds-1,
# tf.constant(0, dtype=tf.int64),
# tf.cast(tf.size(x_t), tf.int64) - 2)
# b_ext = b
# c_ext = c
# d_ext = d
# x_ext = x_t
# y_ext = y_t
tau = t_t - tf.gather(x_ext, inds)
mod = tf.gather(y_ext, inds)
mod += tau * tf.gather(b_ext, inds)
mod += tau**2 * tf.gather(c_ext, inds)
mod += tau**3 * tf.gather(d_ext, inds)
# -
plt.plot(t, tf.gather(b_ext, inds).eval())
plt.plot(t, mod.eval())
plt.plot(t, interp1d(x, y, kind="cubic", fill_value="extrapolate")(t))
plt.plot(x, y, ".")
plt.plot(t, session.run(tf.gradients(mod, t_t)[0]))
plt.axvline(x[0])
plt.axvline(x[-1])
# +
def step1(dx, dy):
n = len(dx)
np1 = n + 1
a = np.empty(np1)
a[0] = 3 * dy[0] / dx[0]
a[1:-1] = 3 * dy[1:] / dx[1:] - 3 * dy[:-1] / dx[:-1]
a[-1] = -3 * dy[-1] / dx[-1]
return a
def step1_rev(dx, dy, a, ba):
bdx = np.zeros_like(dx)
bdy = np.zeros_like(dy)
# a[0] = 3 * dy[0] / dx[0]
bdy[0] += 3 * ba[0] / dx[0]
bdx[0] += -a[0] * ba[0] / dx[0]
# a[1:-1] = 3 * dy[1:] / dx[1:] - 3 * dy[:-1] / dx[:-1]
bdy[1:] += 3 * ba[1:-1] / dx[1:]
bdy[:-1] += -3 * ba[1:-1] / dx[:-1]
bdx[1:] += -3 * dy[1:] * ba[1:-1] / dx[1:]**2
bdx[:-1] += 3 * dy[:-1] * ba[1:-1] / dx[:-1]**2
# a[-1] = -3 * dy[-1] / dx[-1]
bdy[-1] += -3 * ba[-1] / dx[-1]
bdx[-1] += -a[-1] * ba[-1] / dx[-1]
return bdx, bdy
def step2(dx, a):
n = len(dx)
np1 = n + 1
l = np.empty(np1)
u = np.empty(n)
z = np.empty(np1)
l[0] = 2*dx[0]
u[0] = 0.5
z[0] = a[0] / l[0]
for i in range(1, n):
l[i] = 2*dx[i] + dx[i-1] * (2 - u[i-1])
u[i] = dx[i] / l[i]
z[i] = (a[i] - dx[i-1] * z[i-1]) / l[i]
l[-1] = dx[-1] * (2 - u[-1])
z[-1] = (a[-1] - dx[-1] * z[-2]) / l[-1]
return u, l, z
def step2_rev(dx, a, u, l, z, bu, bl, bz):
n = len(u)
bu = np.array(bu)
bl = np.array(bl)
bz = np.array(bz)
ba = np.zeros_like(a)
bdx = np.zeros_like(dx)
# z[-1] = (a[-1] - dx[-1] * z[-2]) / l[-1]
ba[-1] += bz[-1] / l[-1]
bdx[-1] += -z[-2] * bz[-1] / l[-1]
bz[-2] += -dx[-1] * bz[-1] / l[-1]
bl[-1] += -z[-1] * bz[-1] / l[-1]
# l[-1] = dx[-1] * (2 - u[-1])
bdx[-1] += (2 - u[-1]) * bl[-1]
bu[-1] += -dx[-1] * bl[-1]
# for i in range(1, n):
for i in range(n-1, 0, -1):
# z[i] = (a[i] - dx[i-1] * z[i-1]) / l[i]
ba[i] += bz[i] / l[i]
bl[i] += -z[i]*bz[i]/l[i]
bdx[i-1] += -z[i-1] * bz[i] / l[i]
bz[i-1] += -bz[i] * dx[i-1] / l[i]
# u[i] = dx[i] / l[i]
bdx[i] += bu[i] / l[i]
bl[i] += -bu[i]*u[i]/l[i]
# l[i] = 2*dx[i] + dx[i-1] * (2 - u[i-1])
bdx[i] += 2*bl[i]
bdx[i-1] += (2-u[i-1])*bl[i]
bu[i-1] += -dx[i-1] * bl[i]
# z[0] = a[0] / l[0]
ba[0] += bz[0] / l[0]
bl[0] += -z[0] * bz[0] / l[0]
# l[0] = 2*dx[0]
bdx[0] += 2*bl[0]
return bdx, ba
def step3(z, u):
n = len(u)
c = np.empty_like(z)
c[-1] = z[-1]
for j in range(n-1, -1, -1):
c[j] = z[j] - u[j] * c[j+1]
return c
def step3_rev(z, u, c, bc):
n = len(u)
bc = np.array(bc)
bu = np.zeros_like(u)
bz = np.zeros_like(z)
# for j in range(n-1, -1, -1):
for j in range(n):
# c[j] = z[j] - u[j] * c[j+1]
bz[j] += bc[j]
bc[j+1] += -bc[j] * u[j]
bu[j] += -c[j+1] * bc[j]
# c[-1] = z[-1]
bz[-1] += bc[-1]
return bz, bu
def step4(dx, dy, c):
b = dy / dx - dx * (c[1:] + 2*c[:-1]) / 3
d = (c[1:] - c[:-1]) / (3*dx)
return b, d
def step4_rev(dx, dy, c, b, d, bb, bd):
bc = np.zeros_like(c)
# d = (c[1:] - c[:-1]) / (3*dx)
bdx = -d * bd / dx
bc[1:] += bd / (3*dx)
bc[:-1] += -bd / (3*dx)
# b = dy / dx - dx * (c[1:] + 2*c[:-1]) / 3
bdy = bb / dx
bdx += -(dy/dx**2 + (c[1:]+2*c[:-1])/3) * bb
bc[1:] += -dx * bb / 3
bc[:-1] += -2 * dx * bb / 3
return bdx, bdy, bc
def compute_polys(dx, dy):
n = len(dx)
np1 = n + 1
# Step 1
a = step1(dx, dy)
# Step 2
u, l, z = step2(dx, a)
# Step 3
c = step3(z, u)
# Step 4
b, d = step4(dx, dy, c)
return (np.vstack((
np.concatenate(([0.0], b, [0.0])),
np.concatenate(([0.0], c[:-1], [0.0])),
np.concatenate(([0.0], d, [0.0]))
)).T, a, z, u, l)
def compute_polys_rev(dx, dy, P, a, z, u, l, bP):
n = len(dx)
np1 = n + 1
b = P[1:-1, 0]
c = P[1:, 1]
d = P[1:-1, 2]
bb = np.array(bP[1:-1, 0])
bc = np.array(bP[1:, 1])
bd = np.array(bP[1:-1, 2])
bc[-1] = 0.0
# Step 4
bdx, bdy, bc0 = step4_rev(dx, dy, c, b, d, bb, bd)
bc += bc0
# Step 3
bz, bu = step3_rev(z, u, c, bc)
# Step 2
bl = np.zeros_like(l)
bdx0, ba = step2_rev(dx, a, u, l, z, bu, bl, bz)
bdx += bdx0
# Step 1
bdx0, bdy0 = step1_rev(dx, dy, a, ba)
bdx += bdx0
bdy += bdy0
return bdx, bdy
# -
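# Finite-difference check of a hand-coded reverse-mode gradient: perturb each element of `value`,
# recompute `f`, and compare the central-difference estimate with the corresponding entry of `grad`.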
def check_grad(value, grad, f, args=None, eps=1e-8, ind=None, factor=None):
if args is None:
args = (value,)
if factor is None:
factor = 1.0
for i in range(len(value)):
value[i] += eps
r = f(*args)
if ind is None:
vp = np.sum(factor*r)
else:
vp = np.sum(factor*r[ind])
value[i] -= 2*eps
r = f(*args)
if ind is None:
vm = np.sum(factor*r)
else:
vm = np.sum(factor*r[ind])
value[i] += eps
est = 0.5 * (vp - vm) / eps
print(est, grad[i], est - grad[i])
# +
n = 5
dx = np.random.rand(n)
dy = np.random.randn(n)
c = np.random.randn(n+1)
b, d = step4(dx, dy, c)
bb = np.random.randn(len(b))
bd = np.zeros_like(d)
bdx, bdy, bc = step4_rev(dx, dy, c, b, d, bb, bd)
print("b, dx:")
check_grad(dx, bdx, step4, args=(dx, dy, c), ind=0, factor=bb)
print("b, dy:")
check_grad(dy, bdy, step4, args=(dx, dy, c), ind=0, factor=bb)
print("b, c:")
check_grad(c, bc, step4, args=(dx, dy, c), ind=0, factor=bb)
bb = np.zeros_like(b)
bd = np.random.randn(len(d))
bdx, bdy, bc = step4_rev(dx, dy, c, b, d, bb, bd)
print("d, dx:")
check_grad(dx, bdx, step4, args=(dx, dy, c), ind=1, factor=bd)
print("d, dy:")
check_grad(dy, bdy, step4, args=(dx, dy, c), ind=1, factor=bd)
print("d, c:")
check_grad(c, bc, step4, args=(dx, dy, c), ind=1, factor=bd)
# +
n = 5
u = np.random.randn(n)
z = np.random.randn(n+1)
c = step3(z, u)
bc = np.random.randn(len(c))
bz, bu = step3_rev(z, u, c, bc)
print("u:")
check_grad(u, bu, step3, args=(z, u), factor=bc)
print("z:")
check_grad(z, bz, step3, args=(z, u), factor=bc)
# +
n = 5
dx = np.random.rand(n)
a = np.random.randn(n+1)
u, l, z = step2(dx, a)
bu = np.random.randn(len(u))
bl = np.zeros_like(l)
bz = np.zeros_like(z)
bdx, ba = step2_rev(dx, a, u, l, z, bu, bl, bz)
print("u, dx:")
check_grad(dx, bdx, step2, args=(dx, a), ind=0, factor=bu)
print("u, a:")
check_grad(a, ba, step2, args=(dx, a), ind=0, factor=bu)
bu = np.zeros_like(u)
bl = np.random.randn(len(l))
bz = np.zeros_like(z)
bdx, ba = step2_rev(dx, a, u, l, z, bu, bl, bz)
print("l, dx:")
check_grad(dx, bdx, step2, args=(dx, a), ind=1, factor=bl)
print("l, a:")
check_grad(a, ba, step2, args=(dx, a), ind=1, factor=bl)
bu = np.zeros_like(u)
bl = np.zeros_like(l)
bz = np.random.randn(len(z))
bdx, ba = step2_rev(dx, a, u, l, z, bu, bl, bz)
print("z, dx:")
check_grad(dx, bdx, step2, args=(dx, a), ind=2, factor=bz)
print("z, a:")
check_grad(a, ba, step2, args=(dx, a), ind=2, factor=bz)
# +
n = 5
dx = np.random.rand(n)
dy = np.random.randn(n)
a = step1(dx, dy)
ba = np.random.randn(len(a))
bdx, bdy = step1_rev(dx, dy, a, ba)
print("dx:")
check_grad(dx, bdx, step1, args=(dx, dy), factor=ba)
print("dy:")
check_grad(dy, bdy, step1, args=(dx, dy), factor=ba)
# -
bc
# +
np.random.seed(42)
x = np.sort(np.random.uniform(1, 9, 8))
# x = np.linspace(1, 9, 10)
y = np.sin(x)
dx = np.diff(x)
dy = np.diff(y)
P, a, z, u, l = compute_polys(dx, dy)
bP = np.zeros_like(P)
# inds = ([-3], [2])
inds = tuple(a.flatten() for a in np.indices(bP.shape))
bP[inds] = 1.0
# print(bP)
bx, by = compute_polys_rev(dx, dy, P, a, z, u, l, bP)
# print(bx)
# print(by)
# +
value = dx
grad = bx
eps = 1e-5
for i in range(len(value)):
value[i] += eps
r = compute_polys(dx, dy)
vp = np.sum(r[0][inds])
value[i] -= 2*eps
r = compute_polys(dx, dy)
vm = np.sum(r[0][inds])
value[i] += eps
est = 0.5 * (vp - vm) / eps
print(est, grad[i], est - grad[i])
# +
t = np.linspace(0, 10, 500)
m = np.searchsorted(x, t)
xp = np.concatenate((x[:1], x, x[-1:]))
yp = np.concatenate((y[:1], y, y[-1:]))
poly = P[m]
dd = t - xp[m]
value = yp[m] + poly[:, 0] * dd + poly[:, 1] * dd**2 + poly[:, 2] * dd**3
plt.plot(t, np.sin(t))
plt.plot(t, value)
plt.plot(x, y, ".")
# -
def get_system(dx, dy):
A = np.diag(np.concatenate((
2*dx[:1], 2*(dx[1:]+dx[:-1]), 2*dx[-1:]
)))
A += np.diag(dx, k=1)
A += np.diag(dx, k=-1)
Y = np.concatenate((
3 * dy[:1]/dx[:1],
3 * (dy[1:]/dx[1:] - dy[:-1]/dx[:-1]),
-3 * dy[-1:]/dx[-1:],
))
return A, Y
A, Y = get_system(dx, dy)
c = np.linalg.solve(A, Y)
c
bc = np.random.randn(len(c))
Ax = np.linalg.solve(A, bc)
bA = -Ax[:, None] * c[None, :]
bA
print(np.allclose(np.diag(bA), -Ax*c))
print(np.allclose(np.diag(bA, k=1), -Ax[:-1]*c[1:]))
print(np.allclose(np.diag(bA, k=-1), -Ax[1:]*c[:-1]))
# +
eps = 1e-5
A[1, 0] += eps
r = np.linalg.solve(A, Y)
vp = np.sum(r * bc)
A[1, 0] -= 2*eps
r = np.linalg.solve(A, Y)
vm = np.sum(r * bc)
A[1, 0] += eps
est = 0.5 * (vp - vm) / eps
print(est, bA[1, 0], est - bA[1, 0])
# -
| papers/exoplanet/notebooks/cubic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Support Vector Regression
# +
#load the libraries we have been using
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
# +
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# SVR is a regressor, so it is used directly in the pipeline
svm_est = Pipeline([('scaler', StandardScaler()), ('svr', SVR())])
Cs = [0.001, 0.01, 0.1, 1]
gammas = [0.001, 0.01, 0.1]
param_grid = dict(svr__gamma=gammas, svr__C=Cs)
# +
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import StratifiedShuffleSplit
rand_grid = RandomizedSearchCV(svm_est, param_distributions=param_grid, cv=5,n_iter=5,scoring='neg_mean_absolute_error')
rand_grid.fit(X_train, y_train)
# -
rand_grid.best_params_
rand_grid.best_score_
| Chapter08/Support Vector Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Support Vector Machines (SVMs)
#
# + A Support Vector Machine is a supervised algorithm that can classify cases by finding a separator.
# > + SVM works by first mapping data to a high-dimensional feature space so that data points can be categorized, even when the data are not otherwise linearly separable.
# > + Then the algorithm estimates a **separator** for the data.
# > + The data should be transformed in such a way that a separator could be drawn as a **hyperplane**.
# + Check the Image below:
# +
UserPath = "/home/cerbero/Documents/edX/IBM Data Science/IBM ML0101EN/"
InputPath = UserPath+"00/"
OutputPath = UserPath+"05/"
from IPython.display import Image
Image(filename=OutputPath+"Selection_009.png", retina=True)
# -
# The above image shows what a separator looks like (the squiggly red line) for two features (variables), namely "UnifSize" and "Clump", with the data classified as "malignant" (blue dots) or "benign" (yellow dots). *In a 3D space, this squiggly line would look like a plane.* Check the image below:
Image(OutputPath+"Selection_010.png", retina=True)
# + The SVM algorithm outputs an optimal **hyperplane** that categorizes new examples.
# > A hyperplane is simply a subspace of codimension one (that is, in n-space, it's a subspace of dimension n-1).
# > A hyperplane in 3-space is just a familiar two-dimensional plane, as we saw above. But a hyperplane in 4-space is a three-dimensional volume. To the human visual cortex, the two objects are drastically different, but both obey the same underlying definition.
# + Two challenging questions to consider:
#
# 1) How do we transfer data in such a way that a separator could be drawn as a hyperplane?
# > Map the data into a higher dimensional space => **_kernelling functions_**
#
# 2) How can we find the best/optimized hyperplane separator after transformation?
# > Maximize the _margin_ between the two sets by computing the correct **support vectors**.
Image(OutputPath+"Selection_011.png", retina=True)
for im in ["Selection_011.png","Selection_012.png"]:
display(Image(filename=OutputPath+im, retina=True))
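# A minimal sketch of these two ideas with scikit-learn, using a tiny synthetic dataset purely for illustration: the `kernel` argument selects the kernelling function, and the fitted `support_vectors_` are the data points that define the maximum-margin separator.
# +
from sklearn.datasets import make_blobs as _make_blobs
from sklearn.svm import SVC as _SVC

# two well-separated blobs, just to show the API
_X_demo, _y_demo = _make_blobs(n_samples=60, centers=2, random_state=1)
for _kernel in ["linear", "rbf"]:
    _clf = _SVC(kernel=_kernel).fit(_X_demo, _y_demo)
    print(_kernel, "-> number of support vectors:", len(_clf.support_vectors_))
# -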
# ## Support Vector Machines Lab
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
# %matplotlib inline
import matplotlib.pyplot as plt
cell_df = pd.read_csv(InputPath+"cell_samples.csv")
cell_df.head()
ax = cell_df[cell_df['Class'] == 4][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='DarkBlue', label='malignant');
cell_df[cell_df['Class'] == 2][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='Yellow', label='benign', ax=ax);
plt.show()
cell_df.dtypes
cell_df = cell_df[pd.to_numeric(cell_df['BareNuc'], errors='coerce').notnull()]
cell_df['BareNuc'] = cell_df['BareNuc'].astype('int')
cell_df.dtypes
feature_df = cell_df[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize', 'BareNuc', 'BlandChrom', 'NormNucl', 'Mit']]
X = np.asarray(feature_df)
X[0:5]
cell_df['Class'] = cell_df['Class'].astype('int')
y = np.asarray(cell_df['Class'])
y [0:5]
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
yhat = clf.predict(X_test)
yhat [0:5]
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# +
cnf_matrix = confusion_matrix(y_test, yhat, labels=[2,4])
np.set_printoptions(precision=2)
print (classification_report(y_test, yhat))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign(2)','Malignant(4)'],normalize= False, title='Confusion matrix')
# -
from sklearn.metrics import f1_score
f1_score(y_test, yhat, average='weighted')
from sklearn.metrics import jaccard_similarity_score, accuracy_score
jaccard_similarity_score(y_test, yhat), accuracy_score(y_test, yhat)
clf2 = svm.SVC(kernel='linear')
clf2.fit(X_train, y_train)
yhat2 = clf2.predict(X_test)
print("Avg F1-score: %.4f" % f1_score(y_test, yhat2, average='weighted'))
print("Jaccard score: %.4f" % accuracy_score(y_test, yhat2))
| 05 Module 3 Support Vector Machines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: optics
# language: python
# name: optics
# ---
# +
''' initialise development environment '''
# set auto reload imported modules tagged
# %load_ext autoreload
# %autoreload 2
# +
''' import optics package '''
# add custom python packages directory to path
import sys
sys.path.append('/home/brendan/dev/optics')
# %matplotlib widget
# import path tracing and image transformation engine
import optics
# +
''' Imports '''
# nd array manipulation
import numpy as np
# image manipulation
from scipy import ndimage
# plotting with matplotlib, interactive notebook, 3d toolkit
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# +
''' Generate Target Image '''
# set edge length; ensure odd
edge_len = 151
# generate pattern image (target)
test_image = optics.gen_image(edge_len)
# import image, scaled to edge length
#test_image = optics.import_image('../data/test-img-1.png', edge_len)
# initialise figure and axes, clean format
_w = 4; _h = 4
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
ax = fig.add_subplot(111)
ax.grid([]); ax.set_xticks([]); ax.set_yticks([])
_img = test_image
#_img = ndimage.gaussian_filter(test_image, sigma = 1.)
ax.imshow(_img, cmap = 'bone_r', vmin = 0, vmax = 1)
plt.tight_layout()
plt.show()
# +
''' Generate Initial Ray '''
# set height of target image (mm), and supersample factor
height = 5.
ss = 2.
# generate rays for image translation (list np.array[px, py, pz, vx, vy, vz] )
rays = optics.gen_img_rays(edge_len, height, test_image, ss)
# +
''' Define Standard Optics '''
# get standard optical parameters
opt_params = optics.std_opt_params()
# overwrite standard optical parameters
opt_params = { **opt_params,
#'eye_front': 300.,
#'cornea_sph': 1.,
#'cornea_axis': 0.,
#'cornea_pow': np.sqrt(0.5),
#'iris_dia': 4.,
#'focus': 1.,
#'lens_pow': np.sqrt(4.5),
#'retina_thick': 17.2,
}
# generate standard optics chain
opts = optics.gen_optics(opt_params)
# +
''' Calculate Ray Paths through Optics '''
# calculate ray paths through optics chain to retina
paths = optics.get_paths(rays, opts)
# +
''' Generate Reverse Rays '''
# generate reverse rays for back propagation through reverse optics chain
back_rays = optics.gen_rev_rays(paths, opt_params)
# +
''' Define Reverse Optics (Stigmatism) '''
# define stigmatism optics chain by optical parameters
rev_opt_params = {**opt_params,
'cornea_sph': opt_params['cornea_sph'] - 0.02,
'cornea_axis': opt_params['cornea_axis'] + 45.,
}
# generate standard optics chain, overwrite existing params
rev_opts = optics.gen_optics_rev(rev_opt_params)
# +
''' get ray paths through optics chain '''
# calculate reverse ray paths from retina, set initial refractive index
rev_paths = optics.get_paths(back_rays, rev_opts, n0 = 1.337)
# +
''' Resample Translated Rays as Image'''
# build image by resample return rays over area
grid = optics.translate_image(test_image, ss, paths, rev_paths, height, edge_len)
# +
# initialise figure and axes, clean format
_w = 7; _h = 4
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
plt.subplot(121)
plt.grid(''); plt.xticks([]); plt.yticks([])
plt.imshow(test_image, cmap = 'bone_r', vmin = 0., vmax = 1.)
plt.subplot(122)
plt.grid(''); plt.xticks([]); plt.yticks([])
#_img = ndimage.filters.median_filter(grid, size = 4)
_img = grid.copy()
plt.imshow(_img, cmap = 'bone_r', vmin = 0., vmax = 1.)
#plt.savefig('./images/output-0s95.png', dpi = 200)
plt.tight_layout()
plt.show()
# -
# +
''' initialise standard optics chain '''
# set edge length; ensure odd
edge_len = 151
# set height of target image (mm), and supersample factor
height = 4.
ss = 3.
# overwrite standard optical parameters
opt_params = {
#'eye_front': 300.,
#'cornea_sph': 1.,
#'cornea_axis': 0.,
#'cornea_pow': np.sqrt(0.5),
#'iris_dia': 4.,
#'focus': 1.,
#'lens_pow': np.sqrt(4.5),
#'retina_thick': 17.2,
}
# initialise standard optics chain
test_image, rays, opt_params, opts, paths, back_rays = optics.init_optics(edge_len, height, ss, opt_params)
# +
''' define stigmatism and batch image translate '''
# define stigmatism, each parameter range as delta (min, max, step)
stig = {
'param': 'cornea_sph',
#'range': [-.015, .015, .002]
'range': [-.009, .010, .001]
}
# perform batch image translation over stigmatism parameter range, return image set
images = optics.batch_translate(test_image, edge_len, height, ss, opt_params, paths, stig, back_rays)
# -
import pickle
with open('../data/subj-refr-images', 'wb') as file:
pickle.dump(images, file)
with open('../data/subj-refr-images', 'rb') as file:
images = pickle.load(file)
# +
# initialise figure and axes, clean format
_w = 4; _h = 4
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
plt.subplot(111)
plt.grid(''); plt.xticks([]); plt.yticks([])
img = images[15]['image']
plt.imshow(img, cmap = 'bone_r', vmin = 0., vmax = 1.)
plt.xlim(0, edge_len)
plt.ylim(0, edge_len)
#plt.tight_layout()
plt.subplots_adjust(left = .0, right = 1., top = 1., bottom = .0)
plt.show()
# -
# +
''' save all generated images'''
# define output path
out_path = '../data/subj-refr/'
# store all images to file
optics.store_images(images, out_path)
# -
import scipy.ndimage
# +
# iterate each image in batch
for image in images[:]:
# get current image data
_img = image['image']
# get image delta value
d = image['delta']
    # generate rotation angles
rotate = np.arange(-90., 90., 5.)
for r in rotate:
__img = ndimage.rotate(_img, r, reshape = False)
# initialise figure and axes, clean format
_w = 4; _h = 4
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
plt.subplot(111)
plt.grid(''); plt.xticks([]); plt.yticks([])
plt.imshow(__img, cmap = 'bone_r', vmin = 0., vmax = 1.)
#plt.xlim(0, edge_len)
#plt.ylim(0, edge_len)
plt.xlim(0, 301)
plt.ylim(0, 301)
#plt.tight_layout()
plt.subplots_adjust(left = .0, right = 1., top = 1., bottom = .0)
plt.savefig('../data/subj-refr-high/subj-refr_pow-{:.3f}_axs-{:.1f}.png'.format(d, r), dpi = 250)
plt.close()
# +
image = images[0]
# get current image data
_img = image['image']
# get image delta value
d = image['delta']
# generate rotation angles
rotate = np.arange(-90., 90., 10.)
r = rotate[3]
__img = ndimage.rotate(_img, r)
# initialise figure and axes, clean format
_w = 4; _h = 4
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
plt.subplot(111)
plt.grid(''); plt.xticks([]); plt.yticks([])
plt.imshow(__img, cmap = 'bone_r', vmin = 0., vmax = 1.)
plt.xlim(0, 151)
plt.ylim(0, 151)
#plt.tight_layout()
plt.subplots_adjust(left = .0, right = 1., top = 1., bottom = .0)
#plt.savefig('../data/subj-refr/subj-refr_pow-{:.3f}_axs-{:.1f}.png'.format(d, r), dpi = 200)
plt.show()
# -
# +
# initialise figure and axes, clean format
_w = 7; _h = 4
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
plt.subplot(121)
plt.grid(''); plt.xticks([]); plt.yticks([])
_img = [ images[i]['image'] for i in range(len(images)) if images[i]['delta'] > 0. ][0]
plt.imshow(_img, cmap = 'bone_r', vmin = 0., vmax = 1.)
plt.subplot(122)
plt.grid(''); plt.xticks([]); plt.yticks([])
_img = images[0]['image']
plt.imshow(_img, cmap = 'bone_r', vmin = 0., vmax = 1.)
#plt.tight_layout()
plt.subplots_adjust(left = .0, right = 1., top = 1., bottom = .0)
#plt.savefig('./images/output-0s95.png', dpi = 200)
plt.show()
# -
# +
## plot ray paths
# initialise 3d figure
_w = 7; _h = 6
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
ax = fig.add_subplot(111, projection='3d')
#ax.set_xlim(0., 250.)
#ax.set_ylim(-15., 15.)
#ax.set_zlim(-15., 15.)
# plot all optics
if True:
# iterate over each optic in chain
for optic in opts[:2]:
# get optic parameters
C = optic['centre']
r = optic['radius']
e = optic['scale']
n2 = optic['opt_den']
rev = optic['rev']
theta = optic['theta']
print(theta)
# get optic points in 3d for plotting
x, y, z = optics.plot_3d_ellipsoid(C, r, e, rev, theta)
# plot ellipsoid
ax.plot_wireframe(x, y, z, rstride = 2, cstride = 2, color = 'k', alpha = 0.5)
# iterate over each ray path
for i in range(len(paths))[::200]:
# check for any refraction of ray with optics
#if len(paths[i]) > 1:
# only ray that hit retina
if len(paths[i]) == 7:
path = paths[i]
# iterate ray path through optics
for j in range(len(path)-1)[1:]:
# plot path segment
ax.plot([path[j][0][0], path[j+1][0][0]],
[path[j][0][1], path[j+1][0][1]],
[path[j][0][2], path[j+1][0][2]],
color = 'r', alpha = 0.7)
# format and display figure
plt.show()
# +
# initialise 3d figure
_w = 7; _h = 6
fig = plt.figure(figsize = (_w, _h))
fig.canvas.layout.width = '{}in'.format(_w)
fig.canvas.layout.height= '{}in'.format(_h)
ax = fig.add_subplot(111, projection='3d')
#ax.set_xlim(0., 50.)
#ax.set_ylim(-15., 15.)
#ax.set_zlim(-15., 15.)
# plot all optics
if True:
# iterate over each optic in chain
for optic in rev_opts[-2:-1]:
# get optic parameters
C = optic['centre']
r = optic['radius']
e = optic['scale']
n2 = optic['opt_den']
rev = optic['rev']
theta = optic['theta']
# get optic points in 3d for plotting
x, y, z = optics.plot_3d_ellipsoid(C, r, e, rev, theta)
# plot ellipsoid
ax.plot_wireframe(x, y, z, rstride = 2, cstride = 2, color = 'k', alpha = 0.5)
# iterate over each ray path
for i in range(len(rev_paths))[::100]:
# check for any refraction of ray with optics
if len(paths[i]) > 1:
# only ray that hit retina
#if len(paths[i]) == 7:
path = rev_paths[i]
# iterate ray path through optics
for j in range(len(path)-1)[:-1]:
# plot path segment
ax.plot([path[j][0][0], path[j+1][0][0]],
[path[j][0][1], path[j+1][0][1]],
[path[j][0][2], path[j+1][0][2]],
color = 'r', alpha = 0.7)
# format and display figure
plt.show()
# -
| nbks/.ipynb_checkpoints/rebuild-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../images/aeropython_logo.png" alt="AeroPython" style="width: 300px;" />
# # Linear Algebra with NumPy
# _Now that we have seen basic array handling in Python with NumPy, it is time to move on to more interesting operations: those of Linear Algebra._
#
# _Dot products and matrix inversions are everywhere in scientific and engineering programs, so let's look at how they are done in Python._
# ## Linear algebra
# As we know, linear algebra operations appear very frequently when solving systems of partial differential equations and, in general, when linearizing problems of all kinds, and it is often necessary to solve systems with a huge number of equations and unknowns. Thanks to NumPy arrays we can tackle this kind of computation in Python, since all the functions are written in C or Fortran and we have the option of using libraries optimized to the limit.
# NumPy's linear algebra package is called `linalg`, so by importing NumPy with the usual convention we can access it by writing `np.linalg`. If we print the package help we see that it provides functions for:
#
# * Basic operations (vector norm, matrix inverse, determinant, trace)
# * Solving linear systems
# * Eigenvalues and eigenvectors
# * Matrix decompositions (QR, SVD)
# * Pseudoinverses
#
# A quick sketch of a few of these functions follows the import below.
# You may already know that the `SciPy` library also provides Linear Algebra functions. Which ones should you use? You can find the answer at this link: https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/linalg.html#scipy-linalg-vs-numpy-linalg
#
# Since so far we have only used `NumPy`, we will not import the `SciPy` functions here, although, as you can see, it is advisable to do so.
import numpy as np
help(np.linalg)
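# A minimal sketch of a few of the functions listed above (`norm`, `qr`, `svd` and `pinv`), applied to a small random matrix purely for illustration:
# +
demo = np.random.rand(3, 3)
print(np.linalg.norm(demo))                    # Frobenius norm of the matrix
Q, R = np.linalg.qr(demo)                      # QR decomposition
U, s, Vt = np.linalg.svd(demo)                 # singular value decomposition
print(np.abs(demo - np.dot(Q, R)).max())       # ~0: the QR factors reconstruct the matrix
print(np.abs(demo - np.dot(U * s, Vt)).max())  # ~0: the SVD factors reconstruct the matrix
print(np.linalg.pinv(demo).shape)              # Moore-Penrose pseudoinverse
# -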
# Remember that if we want to use a function from a package but do not want to write the full "path" every time, we can use the `from package import func` syntax:
from numpy.linalg import norm, det
norm
# The usual matrix product (not the element-wise one, but the linear algebra one) is computed with the same function as the matrix-vector product and the vector-vector dot product: the `dot` function, which is **not** in the `linalg` package but directly in `numpy`, so it does not need to be imported separately.
np.dot
# An important consideration to keep in mind is that in NumPy there is no need to be strict about handling vectors as if they were column matrices, as long as the operation is consistent. A vector is an array with a single dimension: that is why computing its transpose has no effect.
M = np.array([
[1, 2],
[3, 4]
])
v = np.array([1, -1])
v.T
u = np.dot(M, v)
u
# To compare floating-point arrays you can use the `np.allclose` and `np.isclose` functions. The first checks whether all the elements of the arrays are equal within a tolerance, and the second compares element by element and returns an array of `True` and `False` values.
u, v
np.allclose(u, v)
np.isclose(0.0, 1e-8, atol=1e-10)
# __Python 3.5 introduced a new `@` operator for performing matrix multiplications in a more readable way__
u = M @ v
u
# +
mat = np.array([[1, 5, 8, 5],
[0, 6, 4, 2],
[9, 3, 1, 6]])
vec1 = np.array([5, 6, 2])
vec1 @ mat
# -
# If you want to know more, you can read [this article](http://pybonacci.org/2016/02/22/el-producto-de-matrices-y-el-nuevo-operador/) on Pybonacci, written by _<NAME>_.
# ##### Exercises
# 1- Compute the product of these two matrices and its determinant:
#
# $$\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 & -1 \\ 0 & -2 & 1 \\ 0 & 0 & 3 \end{pmatrix}$$
from numpy.linalg import det
A = np.array([
[1, 0, 0],
[2, 1, 1],
[-1, 0, 1]
])
B = np.array([
[2, 3, -1],
[0, -2, 1],
[0, 0, 3]
])
print(A)
print(B)
C = A @ B
C
det(C)
# 2- Solve the following system:
#
# $$ \begin{pmatrix} 2 & 0 & 0 \\ -1 & 1 & 0 \\ 3 & 2 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -1 \\ 3 \\ 0 \end{pmatrix} $$
M = (np.array([[2, 0, 0],
[-1, 1, 0],
[3, 2, -1]])
@
np.array([[1, 1, 1],
[0, 1, 2],
[0, 0, 1]]))
M
x = np.linalg.solve(M, np.array([-1, 3, 0]))
x
np.allclose(M @ x, np.array([-1, 3, 0]))
# 3- Compute the inverse of the matrix $H$ and check that $H H^{-1} = I$ (remember the `np.eye` function)
# aeropython: preserve
A = np.arange(1, 37).reshape(6,6)
A[1, 1::2] = 0
A[3, ::2] = 1
A[4, :] += 30
B = (2 ** np.arange(36)).reshape((6,6))
H = A + B
print(H)
np.linalg.det(H)
Hinv = np.linalg.inv(H)
np.isclose(np.dot(Hinv, H), np.eye(6))
np.set_printoptions(precision=3)
print(np.dot(Hinv, H))
# <div class="alert alert-warning">ยกNo funciona! Y no solo eso sino que los resultados varรญan de un ordenador a otro.</div>
# 4- Comprobar el nรบmero de condiciรณn de la matriz $H$.
np.linalg.cond(H)
# <div class="alert alert-warning">La matriz estรก mal condicionada.</div>
# ### Autovalores y autovectores
# La cosa no queda ahรญ y tambiรฉn se pueden resolver problemas de autovalores y autovectores:
# +
A = np.array([
[1, 0, 0],
[2, 1, 1],
[-1, 0, 1]
])
np.linalg.eig(A)
# -
# _We have already learned to perform some useful operations with NumPy. We are now in a position to start writing more interesting programs, but the best is yet to come._
#
# * [Linear algebra in Python with NumPy, on Pybonacci](http://pybonacci.org/2012/06/07/algebra-lineal-en-python-con-numpy-i-operaciones-basicas/)
# ---
# <br/>
# #### <h4 align="right">ยกSรญguenos en Twitter!
# <br/>
# ###### <a href="https://twitter.com/AeroPython" class="twitter-follow-button" data-show-count="false">Follow @AeroPython</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
# <br/>
# ###### This notebook was created by: <NAME>, <NAME> and <NAME>
# <br/>
# ##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Licencia Creative Commons" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> por <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName"><NAME> y <NAME></span> se distribuye bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Licencia Creative Commons Atribuciรณn 4.0 Internacional</a>.
# ---
# _The following cells contain the Notebook configuration_
#
# _To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
#
# File > Trusted Notebook
# This cell sets the notebook style
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
| notebooks_completos/014-NumPy-AlgerbraLineal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Helper Functions
# +
import numpy as np
def create_box(center, hsize):
    """Return the 8 corner points of a box centered at `center` with half side lengths `hsize`."""
    d1 = np.array([hsize[0], 0.0, 0.0])
    d2 = np.array([0.0, hsize[1], 0.0])
    d3 = np.array([0.0, 0.0, hsize[2]])
oobb = []
oobb.append(center - d1 - d2 - d3)
oobb.append(center + d1 - d2 - d3)
oobb.append(center - d1 + d2 - d3)
oobb.append(center + d1 + d2 - d3)
oobb.append(center - d1 - d2 + d3)
oobb.append(center + d1 - d2 + d3)
oobb.append(center - d1 + d2 + d3)
oobb.append(center + d1 + d2 + d3)
return np.array(oobb)
# -
# # Good Feasibility: Box on Box
# +
# Imports
# %matplotlib inline
import matplotlib.pyplot as plt
from compute_moi import *
from compute_moi_util import *
options = {
"output_level": 3,
"max_iterations": 1000,
"dump_models": False,
"surface_area_tolerance": 0.003,
"print_surface_area_histogramm": True
}
# Create Scene
oobbs = []
oobbs.append(create_box(np.array([0.50, 0.75, 0.00]), np.array([0.125, 0.125, 0.25])))
oobbs.append(create_box(np.array([0.50, 0.75, 0.45]), np.array([0.125, 0.125, 0.25])))
# Calculate MoI
res = moi_from_bounding_boxes(oobbs, options)
print("Measure of Infeasibility: {}, includes a hover penalty of {}".format(str(res.moi), str(res.hover_penalty)))
# Draw Scene
fig = plt.figure(figsize=(9, 5))
extent = 0.7
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.set_xlim(-extent, extent)
ax.set_ylim(extent, -extent)
ax.set_zlim(-extent, extent)
ax.set_xlabel('x')
ax.set_ylabel('z')
ax.set_zlabel('y')
ax.set_proj_type('persp')
draw_oobbs(ax, res.oobbs[:-1])
draw_contact_surfaces(ax, res.contact_surfaces)
# -
# # Mild Infeasibility: Box Partially on Box
# +
# Imports
# %matplotlib inline
import matplotlib.pyplot as plt
from compute_moi import *
from compute_moi_util import *
options = {
"output_level": 3,
"max_iterations": 1000,
"dump_models": False,
"surface_area_tolerance": 0.003,
"print_surface_area_histogramm": True
}
# Create Scene
oobbs = []
oobbs.append(create_box(np.array([0.50, 0.75, 0.00]), np.array([0.125, 0.125, 0.25])))
oobbs.append(create_box(np.array([0.50 + 0.2, 0.75, 0.45]), np.array([0.125, 0.125, 0.25])))
# Calculate MoI
res = moi_from_bounding_boxes(oobbs, options)
print("Measure of Infeasibility: {}, includes a hover penalty of {}".format(str(res.moi), str(res.hover_penalty)))
# Draw Scene
fig = plt.figure(figsize=(9, 5))
extent = 0.7
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.set_xlim(-extent, extent)
ax.set_ylim(extent, -extent)
ax.set_zlim(-extent, extent)
ax.set_xlabel('x')
ax.set_ylabel('z')
ax.set_zlabel('y')
ax.set_proj_type('persp')
draw_oobbs(ax, res.oobbs[:-1])
draw_contact_surfaces(ax, res.contact_surfaces)
# -
# # Infeasible: Hovering Box
# +
# Imports
# %matplotlib inline
import matplotlib.pyplot as plt
from compute_moi import *
from compute_moi_util import *
options = {
"output_level": 3,
"max_iterations": 1000,
"dump_models": False,
"surface_area_tolerance": 0.003,
"print_surface_area_histogramm": True
}
# Create Scene
oobbs = []
oobbs.append(create_box(np.array([0.50, 0.75, 0.00]), np.array([0.125, 0.125, 0.25])))
oobbs.append(create_box(np.array([0.50, 0.75, 1.00]), np.array([0.125, 0.125, 0.25])))
oobbs.append(create_box(np.array([0.50, 0.75, 1.45]), np.array([0.125, 0.125, 0.25])))
# Calculate MoI
res = moi_from_bounding_boxes(oobbs, options)
print("Hovering meshes: {}".format(str(res.hover_meshes)))
print("Measure of Infeasibility: {}, includes a hover penalty of {}".format(str(res.moi), str(res.hover_penalty)))
# Draw Scene
fig = plt.figure(figsize=(9, 5))
extent = 0.7
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.set_xlim(-extent, extent)
ax.set_ylim(extent, -extent)
ax.set_zlim(-extent, extent)
ax.set_xlabel('x')
ax.set_ylabel('z')
ax.set_zlabel('y')
ax.set_proj_type('persp')
draw_oobbs(ax, res.oobbs[:-1])
draw_contact_surfaces(ax, res.contact_surfaces)
# -
# # Infeasible: Box attached to the bottom of a table
# +
# Imports
# %matplotlib inline
import matplotlib.pyplot as plt
from compute_moi import *
from compute_moi_util import *
options = {
"output_level": 3,
"max_iterations": 1000,
"dump_models": False,
"surface_area_tolerance": 0.003,
"print_surface_area_histogramm": True
}
# Create Scene
oobbs = []
oobbs.append(create_box(np.array([-0.2, -0.2, 0.00]), np.array([0.05, 0.05, 0.25])))
oobbs.append(create_box(np.array([ 0.7, -0.2, 0.00]), np.array([0.05, 0.05, 0.25])))
oobbs.append(create_box(np.array([ 0.7, 0.7, 0.15]), np.array([0.05, 0.05, 0.10])))
oobbs.append(create_box(np.array([-0.2, 0.7, 0.15]), np.array([0.05, 0.05, 0.10])))
oobbs.append(create_box(np.array([0.25, 0.25, 0.25]), np.array([0.5, 0.5, 0.05])))
# Calculate MoI
res = moi_from_bounding_boxes(oobbs, options)
print("Hovering meshes: {}".format(str(res.hover_meshes)))
print("Measure of Infeasibility: {}, includes a hover penalty of {}".format(str(res.moi), str(res.hover_penalty)))
# Draw Scene
fig = plt.figure(figsize=(9, 5))
extent = 0.7
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.set_xlim(-extent, extent)
ax.set_ylim(extent, -extent)
ax.set_zlim(-extent, extent)
ax.set_xlabel('x')
ax.set_ylabel('z')
ax.set_zlabel('y')
ax.set_proj_type('persp')
draw_oobbs(ax, res.oobbs[:-1])
draw_contact_surfaces(ax, res.contact_surfaces)
# -
| code/moi_examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training and Evaluating Machine Learning Models in cuML
# This notebook explores several basic machine learning estimators in cuML, demonstrating how to train them and evaluate them with built-in metrics functions. All of the models are trained on synthetic data, generated by cuML's dataset utilities.
#
# 1. Random Forest Classifier
# 2. UMAP
# 3. DBSCAN
# 4. Linear Regression
#
#
# [](https://colab.research.google.com/github/rapidsai/cuml/blob/branch-0.15/docs/source/estimator_intro.ipynb)
# ### Shared Library Imports
import cuml
from cupy import asnumpy
from joblib import dump, load
# ## 1. Classification
# ### Random Forest Classification and Accuracy metrics
#
# The Random Forest algorithm classification model builds several decision trees, and aggregates each of their outputs to make a prediction. For more information on cuML's implementation of the Random Forest Classification model please refer to :
# https://docs.rapids.ai/api/cuml/stable/api.html#cuml.ensemble.RandomForestClassifier
#
# Accuracy score is the ratio of correct predictions to the total number of predictions. It is used to measure the performance of classification models.
# For more information on the accuracy score metric please refer to: https://en.wikipedia.org/wiki/Accuracy_and_precision
#
# For more information on cuML's implementation of accuracy score metrics please refer to: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.accuracy.accuracy_score
#
# The cell below shows an end to end pipeline of the Random Forest Classification model. Here the dataset is generated with cuML's make_classification utility and used to train the model and run predictions. The random forest's performance is then evaluated by comparing the values obtained from the cuML and sklearn accuracy metrics.
# +
from cuml.datasets.classification import make_classification
from cuml.preprocessing.model_selection import train_test_split
from cuml.ensemble import RandomForestClassifier as cuRF
from sklearn.metrics import accuracy_score
# synthetic dataset dimensions
n_samples = 1000
n_features = 10
n_classes = 2
# random forest depth and size
n_estimators = 25
max_depth = 10
# generate synthetic data [ binary classification task ]
X, y = make_classification ( n_classes = n_classes,
n_features = n_features,
n_samples = n_samples,
random_state = 0 )
X_train, X_test, y_train, y_test = train_test_split( X, y, random_state = 0 )
model = cuRF( max_depth = max_depth,
n_estimators = n_estimators,
seed = 0 )
trained_RF = model.fit ( X_train, y_train )
predictions = model.predict ( X_test )
cu_score = cuml.metrics.accuracy_score( y_test, predictions )
sk_score = accuracy_score( asnumpy( y_test ), asnumpy( predictions ) )
print( " cuml accuracy: ", cu_score )
print( " sklearn accuracy : ", sk_score )
# save
dump( trained_RF, 'RF.model')
# to reload the model uncomment the line below
loaded_model = load('RF.model')
# -
# ## Clustering
# ### UMAP and Trustworthiness metrics
# UMAP is a dimensionality reduction algorithm which performs non-linear dimension reduction. It can also be used for visualization.
# For additional information on the UMAP model please refer to the documentation on https://docs.rapids.ai/api/cuml/stable/api.html#cuml.UMAP
#
# Trustworthiness is a measure of the extent to which the local structure is retained in the embedding produced by the model. Samples that are embedded in an unexpected region relative to their original nearest neighbors are penalized. For more information on the trustworthiness metric please refer to: https://scikit-learn.org/dev/modules/generated/sklearn.manifold.t_sne.trustworthiness.html
#
# The documentation for cuML's implementation of the trustworthiness metric is: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.trustworthiness.trustworthiness
#
# The cell below shows an end to end UMAP pipeline. Here, the blobs dataset is created with cuML's equivalent of the make_blobs function and used as the input. The output of UMAP's fit and transform is evaluated using the trustworthiness function. The values obtained by sklearn's and cuML's trustworthiness metrics are compared below.
#
# +
from cuml.datasets import make_blobs
from cuml.manifold.umap import UMAP as cuUMAP
from sklearn.manifold import trustworthiness
import numpy as np
n_samples = 1000
n_features = 100
cluster_std = 0.1
X_blobs, y_blobs = make_blobs( n_samples = n_samples,
cluster_std = cluster_std,
n_features = n_features,
random_state = 0,
dtype=np.float32 )
trained_UMAP = cuUMAP( n_neighbors = 10 ).fit( X_blobs )
X_embedded = trained_UMAP.transform( X_blobs )
cu_score = cuml.metrics.trustworthiness( X_blobs, X_embedded )
sk_score = trustworthiness( asnumpy( X_blobs ), asnumpy( X_embedded ) )
print(" cuml's trustworthiness score : ", cu_score )
print(" sklearn's trustworthiness score : ", sk_score )
# save
dump( trained_UMAP, 'UMAP.model')
# to reload the model uncomment the line below
# loaded_model = load('UMAP.model')
# -
# ### DBSCAN and Adjusted Random Index
# DBSCAN is a popular and a powerful clustering algorithm. For additional information on the DBSCAN model please refer to the documentation on https://docs.rapids.ai/api/cuml/stable/api.html#cuml.DBSCAN
#
# We create the blobs dataset using the cuml equivalent of make_blobs function.
#
# The adjusted Rand index is a metric used to measure the similarity between two clusterings, adjusted to take into consideration the chance grouping of elements.
# For more information on the adjusted Rand index please refer to: https://en.wikipedia.org/wiki/Rand_index
#
# The cell below shows an end to end DBSCAN model. The output of DBSCAN's fit_predict is evaluated using the adjusted Rand index function. The values obtained by sklearn's and cuML's adjusted Rand metrics are compared below.
# +
from cuml.datasets import make_blobs
from cuml import DBSCAN as cumlDBSCAN
from sklearn.metrics import adjusted_rand_score
import numpy as np
n_samples = 1000
n_features = 100
cluster_std = 0.1
X_blobs, y_blobs = make_blobs( n_samples = n_samples,
n_features = n_features,
cluster_std = cluster_std,
random_state = 0,
dtype=np.float32 )
cuml_dbscan = cumlDBSCAN( eps = 3,
min_samples = 2)
trained_DBSCAN = cuml_dbscan.fit( X_blobs )
cu_y_pred = trained_DBSCAN.fit_predict ( X_blobs )
cu_adjusted_rand_index = cuml.metrics.cluster.adjusted_rand_score( y_blobs, cu_y_pred )
sk_adjusted_rand_index = adjusted_rand_score( asnumpy(y_blobs), asnumpy(cu_y_pred) )
print(" cuml's adjusted random index score : ", cu_adjusted_rand_index)
print(" sklearn's adjusted random index score : ", sk_adjusted_rand_index)
# save and optionally reload
dump( trained_DBSCAN, 'DBSCAN.model')
# to reload the model uncomment the line below
# loaded_model = load('DBSCAN.model')
# -
# ## Regression
# ### Linear regression and R^2 score
# Linear Regression is a simple machine learning model where the response y is modelled by a linear combination of the predictors in X.
#
# R^2 score is also known as the coefficient of determination. It is used as a metric for scoring regression models. It scores the output of the model based on the proportion of total variation of the model.
# For more information on the R^2 score metrics please refer to: https://en.wikipedia.org/wiki/Coefficient_of_determination
#
# For more information on cuML's implementation of the r2 score metrics please refer to : https://docs.rapids.ai/api/cuml/stable/api.html#cuml.metrics.regression.r2_score
#
# The cell below uses the Linear Regression model to compare the results between the cuML and sklearn r2 score metrics. For more information on cuML's implementation of the Linear Regression model please refer to:
# https://docs.rapids.ai/api/cuml/stable/api.html#linear-regression
# +
from cuml.datasets import make_regression
from cuml.preprocessing.model_selection import train_test_split
from cuml.linear_model import LinearRegression as cuLR
from sklearn.metrics import r2_score
n_samples = 2**10
n_features = 100
n_info = 70
X_reg, y_reg = make_regression( n_samples = n_samples,
n_features = n_features,
n_informative = n_info,
random_state = 123 )
X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split( X_reg,
y_reg,
train_size = 0.8,
random_state = 10 )
cuml_reg_model = cuLR( fit_intercept = True,
normalize = True,
algorithm = 'eig' )
trained_LR = cuml_reg_model.fit( X_reg_train, y_reg_train )
cu_preds = trained_LR.predict( X_reg_test )
cu_r2 = cuml.metrics.r2_score( y_reg_test, cu_preds )
sk_r2 = r2_score( asnumpy( y_reg_test ), asnumpy( cu_preds ) )
print("cuml's r2 score : ", cu_r2)
print("sklearn's r2 score : ", sk_r2)
# save and reload
dump( trained_LR, 'LR.model')
# to reload the model uncomment the line below
# loaded_model = load('LR.model')
| docs/source/estimator_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
""" Plot a histogram"""
import pandas as pd
import os
import matplotlib.pyplot as plt
def symbol_to_path(symbol, base_dir="data"):
"""Return CSV file path given ticker symbol."""
return os.path.join(base_dir, "{}.csv".format(str(symbol)))
def get_data(symbols, dates):
"""Read stock data (adjusted close) for given symbols from CSV files."""
df = pd.DataFrame(index=dates)
if 'SPY' not in symbols: # add SPY for reference, if absent
symbols.insert(0, 'SPY')
for symbol in symbols:
df_temp = pd.read_csv(symbol_to_path(symbol),index_col='Date',parse_dates=True,usecols=['Date','Adj Close'],na_values=['nan'])
df_temp = df_temp.rename(columns={'Adj Close': symbol})
df = df.join(df_temp)
if symbol == 'SPY': # drop dates SPY did not trade
df = df.dropna(subset=["SPY"])
return df
def plot_data(df, title="Stock prices", xlabel="Date", ylabel="Price"):
"""Plot stock prices with a custom title and meaningful axis labels."""
ax = df.plot(title=title, fontsize=12)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.show()
def compute_daily_returns(df):
"""Compute and return the daily return values."""
# TODO: Your code here
daily_returns=(df/df.shift(1))-1 # much easier with pandas
daily_returns.iloc[0,:]=0# set daily returns for row 0
return daily_returns
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
plot_data(df)
# Compute daily returns
daily_returns = compute_daily_returns(df)
plot_data(daily_returns, title="Daily returns", ylabel="Daily returns")
# -
test_run()
# +
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
plot_data(df)
# Compute daily returns
daily_returns = compute_daily_returns(df)
plot_data(daily_returns, title="Daily returns", ylabel="Daily returns")
#plot Histogram
daily_returns.hist()
plt.show()
test_run()
# -
# +
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
plot_data(df)
# Compute daily returns
daily_returns = compute_daily_returns(df)
plot_data(daily_returns, title="Daily returns", ylabel="Daily returns")
#plot Histogram
daily_returns.hist(bins=20)
plt.show()
test_run()
# -
# +
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot Histogram
daily_returns.hist(bins=20)
# Get Mean and Standard Deviation
mean=daily_returns['SPY'].mean()
print("Mean:",mean)
std=daily_returns['SPY'].std()
print("STD:",std)
test_run()
# -
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot Histogram
daily_returns.hist(bins=20)
# Get Mean and Standard Deviation
mean=daily_returns['SPY'].mean()
print("Mean:",mean)
std=daily_returns['SPY'].std()
print("STD:",std)
plt.axvline(mean,color='w',linestyle='dashed',linewidth=2)
plt.show()
test_run()
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot Histogram
daily_returns.hist(bins=20)
# Get Mean and Standard Deviation
mean=daily_returns['SPY'].mean()
print("Mean:",mean)
std=daily_returns['SPY'].std()
print("STD:",std)
plt.axvline(mean,color='w',linestyle='dashed',linewidth=2)
plt.axvline(std,color='r',linestyle='dashed',linewidth=2)
plt.axvline(-std,color='r',linestyle='dashed',linewidth=2)
plt.show()
test_run()
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY']
df = get_data(symbols, dates)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot Histogram
daily_returns.hist(bins=20)
# Get Mean and Standard Deviation
mean=daily_returns['SPY'].mean()
print("Mean:",mean)
std=daily_returns['SPY'].std()
print("STD:",std)
plt.axvline(mean,color='w',linestyle='dashed',linewidth=2)
plt.axvline(std,color='r',linestyle='dashed',linewidth=2)
plt.axvline(-std,color='r',linestyle='dashed',linewidth=2)
plt.show()
#compute kurtosis
print(daily_returns.kurtosis())
test_run()
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY','MSFT']
df = get_data(symbols, dates)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot Histogram
daily_returns.hist(bins=20)
plt.show()
test_run()
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY','MSFT']
df = get_data(symbols, dates)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot Histogram
daily_returns['SPY'].hist(bins=20,label="SPY")
daily_returns['MSFT'].hist(bins=20,label="MSFT")
plt.legend(loc="upper right")
plt.show()
test_run()
# +
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY','MSFT','GOOG']
df = get_data(symbols, dates)
plot_data(df)
# Compute daily returns
daily_returns = compute_daily_returns(df)
plot_data(daily_returns, title="Daily returns", ylabel="Daily returns")
#plot SPY VS MSFT
daily_returns.plot(kind='scatter',x='SPY',y='MSFT')
plt.show()
#plot SPY VS GOOG
daily_returns.plot(kind='scatter',x='SPY',y='GOOG')
plt.show()
test_run()
# -
import numpy as np
# +
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY','MSFT','GOOG']
df = get_data(symbols, dates)
plot_data(df)
# Compute daily returns
daily_returns = compute_daily_returns(df)
plot_data(daily_returns, title="Daily returns", ylabel="Daily returns")
#plot SPY VS MSFT
daily_returns.plot(kind='scatter',x='SPY',y='MSFT')
beta_MSFT,alpha_MSFT=np.polyfit(daily_returns['SPY'],daily_returns['MSFT'],1)
plt.plot(daily_returns['SPY'],beta_MSFT*daily_returns['SPY'] +alpha_MSFT,'-',color='r')
print("beta MSFT:",beta_MSFT)
print("Aplha:",alpha_MSFT)
plt.show()
#plot SPY VS GOOG
daily_returns.plot(kind='scatter',x='SPY',y='GOOG')
beta_GOOG,alpha_GOOG=np.polyfit(daily_returns['SPY'],daily_returns['GOOG'],1)
plt.plot(daily_returns['SPY'],beta_GOOG*daily_returns['SPY'] +alpha_GOOG,'-',color='r')
print("beta GOOG:",beta_GOOG)
print("Aplha GOOG:",alpha_GOOG)
plt.show()
test_run()
# -
# +
def test_run():
dates = pd.date_range('2013-01-22', '2014-01-26') # one month only
symbols = ['SPY','MSFT','GOOG']
df = get_data(symbols, dates)
#plot_data(df)
# Compute daily returns
daily_returns = compute_daily_returns(df)
#plot_data(daily_returns, title="Daily returns", ylabel="Daily returns")
#plot SPY VS MSFT
daily_returns.plot(kind='scatter',x='SPY',y='MSFT')
beta_MSFT,alpha_MSFT=np.polyfit(daily_returns['SPY'],daily_returns['MSFT'],1)
plt.plot(daily_returns['SPY'],beta_MSFT*daily_returns['SPY'] +alpha_MSFT,'-',color='r')
print("beta MSFT:",beta_MSFT)
print("Aplha:",alpha_MSFT)
plt.show()
#plot SPY VS GOOG
daily_returns.plot(kind='scatter',x='SPY',y='GOOG')
beta_GOOG,alpha_GOOG=np.polyfit(daily_returns['SPY'],daily_returns['GOOG'],1)
plt.plot(daily_returns['SPY'],beta_GOOG*daily_returns['SPY'] +alpha_GOOG,'-',color='r')
print("beta GOOG:",beta_GOOG)
print("Aplha GOOG:",alpha_GOOG)
plt.show()
#CALCULATE CORRELATION COEFFICIENT
print(daily_returns.corr(method='pearson'))
test_run()
# -
| Untitled1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "Relative Strength Index - RSI"
# > "Relative Strength Index - RSI"
#
# - toc: true
# - branch: master
# - badges: true
# - comments: true
# - author: <NAME>
# - categories: [fastpages, jupyter, trading, finances, RSI, Strategy, algorithmicTrading, backtest]
# ### Relative strength index calculation.
#
# Credits to macroption providing a very straightforward <br>
# step by step approach to calculate the RSI which helped me a lot. <br>
# <br>
# <br> Link to macroption:
# [RSI Calculation](https://www.macroption.com/rsi-calculation/)
#
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import yfinance as yf
import datetime
pd.options.mode.chained_assignment = None
# +
tickers = pd.read_html('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')[0]
tickers = tickers.Symbol.to_list()
tickers = [i.replace('.','-') for i in tickers]
# The following one does not have enough data
tickers.remove('OGN')
#For quick test
tickers = ['EVRG',
'ES',
'RE',
'EXC',
'EXPE',
'EXPD',
'EXR',
'XOM',
'FFIV',
'FB',
'FAST',
'FRT',
'FDX',
'FIS',
'FITB',
'VTR',
'FRC']
#tickers
# -
def RSIcalc(asset):
df = yf.download(asset, start='2011-01-01')
df['MA200'] = df['Adj Close'].rolling(window=200).mean()
df['price change'] = df['Adj Close'].pct_change()
df['Upmove'] = df['price change'].apply(lambda x: x if x > 0 else 0)
df['Downmove'] = df['price change'].apply(lambda x: abs(x) if x < 0 else 0)
df['avg Up'] = df['Upmove'].ewm(span=19).mean()
df['avg Down'] = df['Downmove'].ewm(span=19).mean()
df = df.dropna()
df['RS'] = df['avg Up']/df['avg Down']
df['RSI'] = df['RS'].apply(lambda x: 100-(100/(x+1)))
df.loc[(df['Adj Close'] > df['MA200']) & (df['RSI'] < 30), 'Buy'] = 'Yes'
df.loc[(df['Adj Close'] < df['MA200']) | (df['RSI'] > 30), 'Buy'] = 'No'
return df
def getSignals(df):
Buying_dates = []
Selling_dates = []
if not df.empty and len(df) > 11:
for i in range(len(df) - 11):
if "Yes" in df['Buy'].iloc[i]:
Buying_dates.append(df.iloc[i+1].name)
for j in range(1, 11):
if df['RSI'].iloc[i + j] > 40:
Selling_dates.append(df.iloc[i+j+1].name)
break
elif j == 10:
Selling_dates.append(df.iloc[i+j+1].name)
return Buying_dates, Selling_dates
# Test the code and the strategy with only the first asset
frame = RSIcalc(tickers[0])
buy, sell = getSignals(frame)
# +
#frame
#frame.loc['2010-01-01':'2010-01-31', ['SPY','IBM']]
#TODO Define date range x amount of previous days
dfBuy = frame.loc['2021-01-01':'2021-08-24', ['Buy']]
toBuy = dfBuy[dfBuy['Buy']=='Yes']
toBuy['Ticker'] = tickers[0]
toBuy
#frame
# -
plt.figure(figsize=(12,5))
plt.scatter(frame.loc[buy].index, frame.loc[buy]['Adj Close'], marker='^', c='g')
plt.plot(frame['Adj Close'], alpha=0.7)
# +
Profits = (frame.loc[sell].Open.values - frame.loc[buy].Open.values)/frame.loc[buy].Open.values
# -
Profits
wins = [i for i in Profits if i > 0]
# wins over total of placements
len(wins)/len(Profits)
# +
len(Profits)
# -
# Only 40 trades in 10 years... but next,
# we are going to perform the previous steps for all the assets
# +
matrixsignals = []
matrixprofits = []
counter = 1
totalfields = len(tickers)
for i in range(totalfields):
frame = RSIcalc(tickers[i])
buy, sell = getSignals(frame)
Profits = (frame.loc[sell].Open.values - frame.loc[buy].Open.values)/frame.loc[buy].Open.values
matrixsignals.append(buy)
matrixprofits.append(Profits)
#Date range to check if there is a signal to buy the asset
dateTo = datetime.datetime.today()
dateFrom = dateTo + datetime.timedelta(days=-5)
dfBuy = frame.loc[dateFrom:dateTo, ['Buy']]
toBuy = dfBuy[dfBuy['Buy']=='Yes']
toBuy['Ticker'] = tickers[i]
print('Processed: ' + str(counter) + ' of ' + str(totalfields))
counter = counter + 1
#In case we have a signal to buy in the specified period, let's plot it
if not toBuy.empty:
print('Asset to buy: ')
plt.figure(figsize=(12,5))
plt.scatter(frame.loc[buy].index, frame.loc[buy]['Adj Close'], marker='^', c='g')
plt.plot(frame['Adj Close'], alpha=0.7)
plt.title(label=tickers[i])
print(toBuy)
matrixsignalsbyAsset = dict(zip(tickers, matrixsignals))
# -
#Commented due to bad visualization on mobile device
#matrixsignalsbyAsset
# +
allprofit = []
for i in matrixprofits:
for e in i:
allprofit.append(e)
# -
wins = [i for i in allprofit if i > 0]
len(wins)/len(allprofit)
# In the following histogram we can see that most of the profits are positive.
plt.hist(allprofit, bins=100)
plt.show()
for i in matrixsignals:
for e in i:
if e.year == 2021:
print(e)
# Next steps:
#
# - Place orders in trading portals (e.g. eToro) and test the results in "virtual" mode
#
#
| _notebooks/2021-08-24-Relative Strength Index - RSI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Reading Data
# + deletable=true editable=true
from __future__ import print_function
import tensorflow as tf
import numpy as np
import re
import matplotlib.pyplot as plt
# %matplotlib inline
# + deletable=true editable=true
from datetime import date
date.today()
# + deletable=true editable=true
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
# + deletable=true editable=true
tf.__version__
# + deletable=true editable=true
np.__version__
# + [markdown] deletable=true editable=true
# NOTE on notation
#
# _x, _y, _z, _X, _Y, _Z, ...: NumPy arrays
# x, y, z, X, Y, Z, ...: Tensors
#
# + [markdown] deletable=true editable=true
# ## Placeholder
# + deletable=true editable=true
# Make data and save to npz.
_x = np.zeros((100, 10), np.int32)
for i in range(100):
_x[i] = np.random.permutation(10)
_x, _y = _x[:, :-1], _x[:, -1]
import os
if not os.path.exists('example'): os.mkdir('example')
np.savez('example/example.npz', _x=_x, _y=_y)
# + deletable=true editable=true
# Load data
data = np.load('example/example.npz')
_x, _y = data["_x"], data["_y"]
#Q1. Make a placeholder for x such that it should be of dtype=int32, shape=(None, 9).
# Inputs and targets
x_pl = ...
y_hat = 45 - tf.reduce_sum(x_pl, axis=1) # We find a digit x_pl doesn't contain.
# Session
with tf.Session() as sess:
_y_hat = sess.run(y_hat, {x_pl: _x})
print("y_hat =", _y_hat[:30])
print("true y =", _y[:30])
# + [markdown] deletable=true editable=true
# ## TFRecord
# + deletable=true editable=true
tf.reset_default_graph()
# Load data
data = np.load('example/example.npz')
_x, _y = data["_x"], data["_y"]
# Serialize
with tf.python_io.TFRecordWriter("example/tfrecord") as fout:
for _xx, _yy in zip(_x, _y):
ex = tf.train.Example()
# Q2. Add each value to ex.
ex.features.feature['x']....
ex.features.feature['y']....
fout.write(ex.SerializeToString())
def read_and_decode_single_example(fname):
# Create a string queue
fname_q = tf.train.string_input_producer([fname], num_epochs=1, shuffle=True)
# Q3. Create a TFRecordReader
reader = ...
# Read the string queue
_, serialized_example = reader.read(fname_q)
# Q4. Describe parsing syntax
features = tf.parse_single_example(
serialized_example,
features={...
...}
)
# Output
x = features['x']
y = features['y']
return x, y
# Ops
x, y = read_and_decode_single_example('example/tfrecord')
y_hat = 45 - tf.reduce_sum(x)
# Session
with tf.Session() as sess:
#Q5. Initialize local variables
sess.run(...)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
while not coord.should_stop():
_y, _y_hat = sess.run([y, y_hat])
print(_y[0],"==", _y_hat, end="; ")
except tf.errors.OutOfRangeError:
print('Done training -- epoch limit reached')
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
# + [markdown] deletable=true editable=true
# ## Queue
# + deletable=true editable=true
tf.reset_default_graph()
# Load data
data = np.load('example/example.npz')
_x, _y = data["_x"], data["_y"]
# Hyperparams
batch_size = 10 # We will feed mini-batches of size 10.
num_epochs = 2 # We will feed data for two epochs.
# Convert to tensors
x = tf.convert_to_tensor(_x)
y = tf.convert_to_tensor(_y)
# Q6. Make slice queues
x_q, y_q = ...
# Batching
x_batch, y_batch = tf.train.batch([x_q, y_q], batch_size=batch_size)
# Targets
y_hat = 45 - tf.reduce_sum(x_batch, axis=1)
# Session
with tf.Session() as sess:
sess.run(tf.local_variables_initializer())
# Q7. Make a train.Coordinator and threads.
coord = ...
threads = ...
try:
while not coord.should_stop():
_y_hat, _y_batch = sess.run([y_hat, y_batch])
print(_y_hat, "==", _y_batch)
except tf.errors.OutOfRangeError:
print('Done training -- epoch limit reached')
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
# + [markdown] deletable=true editable=true
# ## Read csv files
# + deletable=true editable=true
tf.reset_default_graph()
# Load data
data = np.load('example/example.npz')
_x, _y = data["_x"], data["_y"]
_x = np.concatenate((_x, np.expand_dims(_y, axis=1)), 1)
# Write to a csv file
_x_str = np.array_str(_x)
_x_str = re.sub("[\[\]]", "", _x_str)
_x_str = re.sub("(?m)^ +", "", _x_str)
_x_str = re.sub("[ ]+", ",", _x_str)
with open('example/example.csv', 'w') as fout:
fout.write(_x_str)
# Hyperparams
batch_size = 10
# Create a string queue
fname_q = tf.train.string_input_producer(["example/example.csv"])
# Q8. Create a TextLineReader
reader = ...
# Read the string queue
_, value = reader.read(fname_q)
# Q9. Decode value
record_defaults = [[0]]*10
col1, col2, col3, col4, col5, col6, col7, col8, col9, col10 = tf.decode_csv(
...)
x = tf.stack([col1, col2, col3, col4, col5, col6, col7, col8, col9])
y = col10
# Batching
x_batch, y_batch = tf.train.shuffle_batch(
[x, y], batch_size=batch_size, capacity=200, min_after_dequeue=100)
# Ops
y_hat = 45 - tf.reduce_sum(x_batch, axis=1)
with tf.Session() as sess:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
for i in range(num_epochs*10):
_y_hat, _y_batch = sess.run([y_hat, y_batch])
print(_y_hat, "==", _y_batch)
coord.request_stop()
coord.join(threads)
# + [markdown] deletable=true editable=true
# ## Read image files
# + deletable=true editable=true
tf.reset_default_graph()
# Hyperparams
batch_size = 10
num_epochs = 1
# Make fake images and save
for i in range(100):
_x = np.random.randint(0, 256, size=(10, 10, 4))
plt.imsave("example/image_{}.jpg".format(i), _x)
# Import jpg files
images = tf.train.match_filenames_once('example/*.jpg')
# Create a string queue
fname_q = tf.train.string_input_producer(images, num_epochs=num_epochs, shuffle=True)
# Q10. Create a WholeFileReader
reader = tf.WholeFileReader()
# Read the string queue
_, value = reader.read(fname_q)
# Q11. Decode value
img = tf.image.decode_image(value)  # decodes the jpg/png bytes into a uint8 image tensor
# Batching
img_batch = tf.train.batch([img], shapes=([10, 10, 4]), batch_size=batch_size)
with tf.Session() as sess:
sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
num_samples = 0
try:
while not coord.should_stop():
sess.run(img_batch)
num_samples += batch_size
print(num_samples, "samples have been seen")
except tf.errors.OutOfRangeError:
print('Done training -- epoch limit reached')
finally:
coord.request_stop()
coord.join(threads)
# + deletable=true editable=true
| Reading_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Crocoddyl: Contact RObot COntrol by Differential DYnamic programming Library
#
#
# ## I. Welcome to crocoddyl
# Crocoddyl is an **optimal control library for robot control under contact sequence**. Its solver is based on an efficient Differential Dynamic Programming (DDP) algorithm. Crocoddyl computes optimal trajectories along with optimal feedback gains. It uses Pinocchio for fast computation of robot dynamics and its analytical derivatives.
#
# Crocoddyl is focused on the multi-contact optimal control problem (MCOP), which has the form:
#
# $$\mathbf{X}^*,\mathbf{U}^*=
# \begin{Bmatrix} \mathbf{x}^*_0,\cdots,\mathbf{x}^*_N \\
# \mathbf{u}^*_0,\cdots,\mathbf{u}^*_N
# \end{Bmatrix} =
# \arg\min_{\mathbf{X},\mathbf{U}} \sum_{k=1}^N \int_{t_k}^{t_k+\Delta t} l(\mathbf{x},\mathbf{u})dt$$
# subject to
# $$ \mathbf{\dot{x}} = \mathbf{f}(\mathbf{x},\mathbf{u}),$$
# $$ \mathbf{x}\in\mathcal{X}, \mathbf{u}\in\mathcal{U}, \boldsymbol{\lambda}\in\mathcal{K}.$$
# where
# - the state $\mathbf{x}=(\mathbf{q},\mathbf{v})$ lies in a manifold, e.g. Lie manifold $\mathbf{q}\in SE(3)\times \mathbb{R}^{n_j}$, $n_j$ being the number of degrees of freedom of the robot.
# - the system has underactuated dynamics, i.e. $\mathbf{u}=(\mathbf{0},\boldsymbol{\tau})$,
# - $\mathcal{X}$, $\mathcal{U}$ are the state and control admissible sets, and
# - $\mathcal{K}$ represents the contact constraints.
#
# Note that $\boldsymbol{\lambda}=\mathbf{g}(\mathbf{x},\mathbf{u})$ denotes the contact force, and is dependent on the state and control.
#
# Let's start by understanding the concept behind crocoddyl design.
# # II. Action models
#
# In crocoddyl, an action model combines dynamics and cost models. Each node, in our optimal control problem, is described through an action model. In order to describe a problem, we need to provide ways of computing the dynamics, the cost functions and their derivatives. All these are described inside the action model.
#
# To understand the mathematical aspects behind an action model, let's first get a locally linearized version of our optimal control problem:
#
# $$\mathbf{X}^*(\mathbf{x}_0),\mathbf{U}^*(\mathbf{x}_0)
# =
# \arg\min_{\mathbf{X},\mathbf{U}} cost_T(\delta\mathbf{x}_N) + \sum_{k=1}^N cost_t(\delta\mathbf{x}_k, \delta\mathbf{u}_k)$$
# subject to
# $$dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k)=\mathbf{0},$$
#
# where
# $$cost_T(\delta\mathbf{x}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top \\
# \mathbf{l_x} & \mathbf{l_{xx}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}
# $$
#
# $$cost_t(\delta\mathbf{x},\delta\mathbf{u}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top & \mathbf{l_u}^\top\\
# \mathbf{l_x} & \mathbf{l_{xx}} & \mathbf{l_{ux}}^\top\\
# \mathbf{l_u} & \mathbf{l_{ux}} & \mathbf{l_{uu}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}
# $$
#
# $$
# dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k) = \delta\mathbf{x}_{k+1} - (\mathbf{f_x}\delta\mathbf{x}_k + \mathbf{f_u}\delta\mathbf{u}_k)
# $$
#
# An action model defines one time interval (node) of this problem:
# - $actions = dynamics + cost$
#
# ### Important notes:
# - An action model describes the dynamics and cost functions for a node in our optimal control problem.
# - Action models lie in the discrete time space.
# - For debugging and prototyping, we have also implemented numerical differentiation (NumDiff) abstractions. These computations depend only on the definition of the dynamics equation and cost functions. However, for efficiency, crocoddyl uses **analytical derivatives** computed with Pinocchio (see the small NumDiff sketch below).
#
#
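# As a small, hedged sketch of the NumDiff abstraction mentioned above (assuming the Python
# bindings expose `ActionModelUnicycle` and `ActionModelNumDiff`, as recent crocoddyl releases
# do), you can wrap any action model and compare finite-difference derivatives against the
# analytical ones:
# +
import numpy as np
import crocoddyl
# A simple built-in action model (a unicycle) used purely for illustration.
model = crocoddyl.ActionModelNumDiff.__self__ if False else crocoddyl.ActionModelUnicycle()
model_nd = crocoddyl.ActionModelNumDiff(model)  # numerical-differentiation wrapper
data, data_nd = model.createData(), model_nd.createData()
x, u = model.state.rand(), np.zeros(model.nu)
model.calc(data, x, u)
model.calcDiff(data, x, u)        # analytical derivatives
model_nd.calc(data_nd, x, u)
model_nd.calcDiff(data_nd, x, u)  # finite-difference derivatives
print("max |Fx_analytic - Fx_numdiff| = %g" % np.max(np.abs(data.Fx - data_nd.Fx)))
# -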
# ## II.a Differential and Integrated Action Models
# Optimal control solvers require the time-discrete model of the cost and the dynamics. However, it's often convenient to implement them in continuous time (e.g. to combine with abstract integration rules). In crocoddyl, these continuous-time action models are called "Differential Action Models (DAMs)". Together with the predefined "Integrated Action Models (IAMs)", it is possible to retrieve the time-discrete action model (a one-line sketch follows the formulation below).
#
# At the moment, we have:
# - a symplectic Euler and
# - a Runge-Kutta 4 integration rule.
#
# An optimal control problem can be written from a set of DAMs as:
# $$\mathbf{X}^*(\mathbf{x}_0),\mathbf{U}^*(\mathbf{x}_0)
# =
# \arg\min_{\mathbf{X},\mathbf{U}} cost_T(\delta\mathbf{x}_N) + \sum_{k=1}^N \int_{t_k}^{t_k+\Delta t} cost_t(\delta\mathbf{x}_k, \delta\mathbf{u}_k) dt$$
# subject to
# $$dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k)=\mathbf{0},$$
#
# where
# $$cost_T(\delta\mathbf{x}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top \\
# \mathbf{l_x} & \mathbf{l_{xx}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}
# $$
#
# $$cost_t(\delta\mathbf{x},\delta\mathbf{u}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top & \mathbf{l_u}^\top\\
# \mathbf{l_x} & \mathbf{l_{xx}} & \mathbf{l_{ux}}^\top\\
# \mathbf{l_u} & \mathbf{l_{ux}} & \mathbf{l_{uu}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}
# $$
#
# $$
# dynamics(\delta\mathbf{\dot{x}},\delta\mathbf{x},\delta\mathbf{u}) = \delta\mathbf{\dot{x}} - (\mathbf{f_x}\delta\mathbf{x} + \mathbf{f_u}\delta\mathbf{u})
# $$
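# As a brief, hedged illustration of the DAM + IAM idea (the `DifferentialActionModelLQR`
# constructor used below is an assumption about the Python bindings; `IntegratedActionModelEuler`
# is the same class used later in this notebook): wrapping a continuous-time model with an
# integration rule and a time step yields the time-discrete action model.
# +
import crocoddyl
dam = crocoddyl.DifferentialActionModelLQR(3, 3)       # toy continuous-time (LQR-style) model
iam = crocoddyl.IntegratedActionModelEuler(dam, 1e-3)  # time-discrete action model, dt = 1 ms
print("discrete action model over a state of dimension %d" % iam.state.nx)
# -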
# ### Building a differential action model for robot forward dynamics
# #### Loading the robot
#
# Crocoddyl offers several robot models for benchmarking our optimal control solvers (e.g. manipulators, humanoids, quadrupeds, etc). The collection of Talos models can be downloaded in Ubuntu with the APT package *robotpkg-talos-data*.
#
# Let's load a single Talos arm (left one):
# +
import crocoddyl
import numpy as np
import example_robot_data
talos_arm = example_robot_data.load('talos_arm')
robot_model = talos_arm.model # getting the Pinocchio model
# Defining an initial state
q0 = np.array([0.173046, 1., -0.52366, 0., 0., 0.1, -0.005])
x0 = np.concatenate([q0, np.zeros(talos_arm.model.nv)])
# -
# ### calc and calcDiff
# Optimal control solvers often need to compute a quadratic approximation of the action model (as previously described); this provides a search direction (computeDirection). The solver then tries a step along this direction (tryStep).
#
# Typically, calc and calcDiff perform the precomputations required before computeDirection and tryStep, respectively (inside the solver). These functions update:
# - **calc**: update the next state and its cost value
# $$\delta\mathbf{\dot{x}}_{k+1} = \mathbf{f}(\delta\mathbf{x}_k,\mathbf{u}_k)$$
# - **calcDiff**: update the derivatives of the dynamics and cost (quadratic approximation)
# $$\mathbf{f_x}, \mathbf{f_u} \hspace{1em} (dynamics)$$
# $$\mathbf{l_x}, \mathbf{l_u}, \mathbf{l_{xx}}, \mathbf{l_{ux}}, \mathbf{l_{uu}} \hspace{1em} (cost)$$
#
# **Crocoddyl puts all the information inside data**, thus avoiding dynamic memory allocation.
# +
import pinocchio
class DifferentialFwdDynamics(crocoddyl.DifferentialActionModelAbstract):
def __init__(self, state, costModel):
crocoddyl.DifferentialActionModelAbstract.__init__(self, state, state.nv, costModel.nr)
self.costs = costModel
self.enable_force = True
self.armature = np.zeros(0)
def calc(self, data, x, u=None):
if u is None:
u = self.unone
q, v = x[:self.state.nq], x[-self.state.nv:]
# Computing the dynamics using ABA or manually for armature case
if self.enable_force:
data.xout = pinocchio.aba(self.state.pinocchio, data.pinocchio, q, v, u)
else:
pinocchio.computeAllTerms(self.state.pinocchio, data.pinocchio, q, v)
data.M = data.pinocchio.M
if self.armature.size == self.state.nv:
data.M[range(self.state.nv), range(self.state.nv)] += self.armature
data.Minv = np.linalg.inv(data.M)
data.xout = data.Minv * (u - data.pinocchio.nle)
# Computing the cost value and residuals
pinocchio.forwardKinematics(self.state.pinocchio, data.pinocchio, q, v)
pinocchio.updateFramePlacements(self.state.pinocchio, data.pinocchio)
self.costs.calc(data.costs, x, u)
data.cost = data.costs.cost
def calcDiff(self, data, x, u=None):
q, v = x[:self.state.nq], x[-self.state.nv:]
if u is None:
u = self.unone
if True:
self.calc(data, x, u)
# Computing the dynamics derivatives
if self.enable_force:
pinocchio.computeABADerivatives(self.state.pinocchio, data.pinocchio, q, v, u)
data.Fx = np.hstack([data.pinocchio.ddq_dq, data.pinocchio.ddq_dv])
data.Fu = data.pinocchio.Minv
else:
pinocchio.computeRNEADerivatives(self.state.pinocchio, data.pinocchio, q, v, data.xout)
data.Fx = -np.hstack([data.Minv * data.pinocchio.dtau_dq, data.Minv * data.pinocchio.dtau_dv])
data.Fu = data.Minv
# Computing the cost derivatives
self.costs.calcDiff(data.costs, x, u)
def set_armature(self, armature):
        if armature.size != self.state.nv:
print('The armature dimension is wrong, we cannot set it.')
else:
self.enable_force = False
self.armature = armature.T
def createData(self):
data = crocoddyl.DifferentialActionModelAbstract.createData(self)
data.pinocchio = pinocchio.Data(self.state.pinocchio)
data.costs = self.costs.createData(data.pinocchio)
data.costs.shareMemory(data) # this allows us to share the memory of cost-terms of action model
return data
# -
# ## II.b State and its integrate and difference rules
# Generally speaking, the system's state can lie in a manifold $M$ where the state rate of change lies in its tangent space $T_\mathbf{x}M$. There are a few operators that need to be defined for different routines inside our solvers:
# - $\mathbf{x}_{k+1} = integrate(\mathbf{x}_k,\delta\mathbf{x}_k) = \mathbf{x}_k \oplus \delta\mathbf{x}_k$
# - $\delta\mathbf{x}_k = difference(\mathbf{x}_{k+1},\mathbf{x}_k) = \mathbf{x}_{k+1} \ominus \mathbf{x}_k$
#
# where $\mathbf{x}\in M$ and $\delta\mathbf{x}\in T_\mathbf{x} M$.
#
#
# We also need to define the Jacobians of these operators with respect to the first and second arguments:
# - $\frac{\partial \mathbf{x}\oplus\delta\mathbf{x}}{\partial \mathbf{x}}, \frac{\partial \mathbf{x}\oplus\delta\mathbf{x}}{\partial\delta\mathbf{x}} = Jintegrate(\mathbf{x},\delta\mathbf{x})$
# - $\frac{\partial \mathbf{x}_2\ominus\mathbf{x}_1}{\partial \mathbf{x}_2}, \frac{\partial \mathbf{x}_2\ominus\mathbf{x}_1}{\partial\mathbf{x}_1} = Jdifference(\mathbf{x}_2,\mathbf{x}_1)$
#
# For instance, a state that lies in Euclidean space will have the typical operators:
# - $integrate(\mathbf{x},\delta\mathbf{x}) = \mathbf{x} + \delta\mathbf{x}$
# - $difference(\mathbf{x}_2,\mathbf{x}_1) = \mathbf{x}_2 - \mathbf{x}_1$
# - $Jintegrate(\cdot,\cdot) = Jdifference(\cdot,\cdot) = \mathbf{I}$
#
#
# These operations are encapsulated inside the State class (see the small sketch below). **For Pinocchio models, we have implemented the StateMultibody class (used later in this notebook), which can be used with any robot model**.
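# To make the $\oplus$ / $\ominus$ notation concrete, here is a small, hedged sketch (assuming
# the Python bindings expose `StateVector`, as recent releases do) of the integrate and diff
# operators for a simple Euclidean state:
# +
import numpy as np
import crocoddyl
state = crocoddyl.StateVector(3)  # a simple Euclidean state of dimension 3
x0 = state.rand()
dx = 0.1 * np.ones(state.ndx)
x1 = state.integrate(x0, dx)      # x1 = x0 "+" dx
dx_back = state.diff(x0, x1)      # dx_back = x1 "-" x0
print(np.allclose(dx, dx_back))   # True for a Euclidean state
# -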
# # III. Solving optimal control problems with DDP
#
# ## III.a ABA dynamics for reaching a goal with Talos arm
#
# Our optimal control solver interacts with a defined ShootingProblem. A shooting problem represents a stack of action models in which an action model defines a specific node along the OC problem.
#
# First we need to create an action model from DifferentialFwdDynamics. We use it for building the terminal and running action models. In this example, we employ a symplectic Euler integration rule.
#
# Next we define the set of cost functions for this problem. For this particular example, we formulate three running-cost functions:
#
# - a goal-tracking cost,
# - a state regularization cost, and
# - a control regularization cost;
#
# and one terminal cost:
#
# - a goal cost.
#
# First, let's create the common cost functions.
# +
# Create the cost functions
target = np.array([0.4, 0., .4])
Mref = crocoddyl.FrameTranslation(robot_model.getFrameId("gripper_left_joint"), target)
state = crocoddyl.StateMultibody(robot_model)
goalTrackingCost = crocoddyl.CostModelFrameTranslation(state, Mref)
xRegCost = crocoddyl.CostModelState(state)
uRegCost = crocoddyl.CostModelControl(state)
# Create cost model per each action model
runningCostModel = crocoddyl.CostModelSum(state)
terminalCostModel = crocoddyl.CostModelSum(state)
# Then let's add the running and terminal cost functions
runningCostModel.addCost("gripperPose", goalTrackingCost, 1e2)
runningCostModel.addCost("stateReg", xRegCost, 1e-4)
runningCostModel.addCost("ctrlReg", uRegCost, 1e-7)
terminalCostModel.addCost("gripperPose", goalTrackingCost, 1e5)
terminalCostModel.addCost("stateReg", xRegCost, 1e-4)
terminalCostModel.addCost("ctrlReg", uRegCost, 1e-7)
# Running and terminal action models
DT = 1e-3
actuationModel = crocoddyl.ActuationModelFull(state)
runningModel = crocoddyl.IntegratedActionModelEuler(
crocoddyl.DifferentialActionModelFreeFwdDynamics(state, actuationModel, runningCostModel), DT)
terminalModel = crocoddyl.IntegratedActionModelEuler(
crocoddyl.DifferentialActionModelFreeFwdDynamics(state, actuationModel, terminalCostModel), 0.)
# -
# We create a trajectory with 250 knots
# For this optimal control problem, we define 250 knots (or running action
# models) plus a terminal knot
T = 250
problem = crocoddyl.ShootingProblem(x0, [runningModel] * T, terminalModel)
# Once we have defined our shooting problem, we create a DDP solver object and pass some callback functions for analysing its performance.
#
# Please note that:
# - CallbackLogger: stores the solution information.
# - CallbackVerbose: prints messages during the iterations.
# - CallbackDisplay(display): displays the state trajectory with the chosen viewer (Meshcat or Gepetto).
# +
# Creating the DDP solver for this OC problem, defining a logger
ddp = crocoddyl.SolverDDP(problem)
log = crocoddyl.CallbackLogger()
# Using the Meshcat display here; you could enable the Gepetto viewer instead for a nicer view
# display = crocoddyl.GepettoDisplay(talos_arm, 4, 4)
display = crocoddyl.MeshcatDisplay(talos_arm, 4, 4, False)
ddp.setCallbacks([log,
crocoddyl.CallbackVerbose(),
crocoddyl.CallbackDisplay(display)])
# -
# Embed meshcat in this cell
display.robot.viewer.jupyter_cell()
# +
# Solving it with the DDP algorithm
ddp.solve()
# Printing the reached position
frame_idx = talos_arm.model.getFrameId("gripper_left_joint")
xT = ddp.xs[-1]
qT = xT[:talos_arm.model.nq]
print
print "The reached pose by the wrist is"
print talos_arm.framePlacement(qT, frame_idx)
# -
# Let's plot the results and display final trajectory
# +
# %matplotlib inline
# # Plotting the solution and the DDP convergence
crocoddyl.plotOCSolution(log.xs, log.us)
crocoddyl.plotConvergence(log.costs, log.u_regs, log.x_regs, log.grads, log.stops, log.steps)
# Visualizing the solution in gepetto-viewer
display.displayFromSolver(ddp)
# -
# ## III.b Multi-Contact dynamics for biped walking (Talos legs)
# In crocoddyl, we can describe the multi-contact dynamics through holonomic constraints for the support legs. From the Gauss principle, we have derived the model as:
# $$
# \left[\begin{matrix}
# \mathbf{M} & \mathbf{J}^{\top}_c \\
# {\mathbf{J}_{c}} & \mathbf{0} \\
# \end{matrix}\right]
# \left[\begin{matrix}
# \dot{\mathbf{v}} \\ -\boldsymbol{\lambda}
# \end{matrix}\right]
# =
# \left[\begin{matrix}
# \boldsymbol{\tau} - \mathbf{h} \\
# -\dot{\mathbf{J}}_c \mathbf{v} \\
# \end{matrix}\right]$$.
#
# This DAM is defined in "DifferentialActionModelFloatingInContact" class.
#
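# Numerically, that block matrix is just a linear system. As a toy, hedged sketch (random
# numpy data standing in for a real robot), solving it jointly yields the constrained
# acceleration and the contact forces:
# +
import numpy as np
np.random.seed(0)
nv, nc = 4, 2                                    # toy sizes: 4 velocities, 2 contact rows
L = np.random.rand(nv, nv)
M = L.dot(L.T) + nv * np.eye(nv)                 # symmetric positive-definite "mass matrix"
Jc = np.random.rand(nc, nv)                      # contact Jacobian
tau_minus_h = np.random.rand(nv)                 # torques minus bias forces
dJc_v = np.random.rand(nc)                       # \dot{J}_c v
# Assemble the KKT system  [[M, Jc^T], [Jc, 0]] [dv; -lambda] = [tau - h; -dJc*v]
KKT = np.vstack([np.hstack([M, Jc.T]), np.hstack([Jc, np.zeros((nc, nc))])])
rhs = np.concatenate([tau_minus_h, -dJc_v])
sol = np.linalg.solve(KKT, rhs)
dv, contact_force = sol[:nv], -sol[nv:]
print("acceleration: %s" % dv)
print("contact force: %s" % contact_force)
# -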
# Given a predefined contact sequence and timings, we build a specific multi-contact dynamics model for each phase. Indeed, we need to describe a multi-phase optimal control problem. One can formulate the multi-contact optimal control problem (MCOP) as follows:
#
#
# $$\mathbf{X}^*,\mathbf{U}^*=
# \begin{Bmatrix} \mathbf{x}^*_0,\cdots,\mathbf{x}^*_N \\
# \mathbf{u}^*_0,\cdots,\mathbf{u}^*_N
# \end{Bmatrix} =
# \arg\min_{\mathbf{X},\mathbf{U}} \sum_{p=0}^P \sum_{k=1}^{N(p)} \int_{t_k}^{t_k+\Delta t} l_p(\mathbf{x},\mathbf{u})dt$$
# subject to
# $$ \mathbf{\dot{x}} = \mathbf{f}_p(\mathbf{x},\mathbf{u}), \text{for } t \in [\tau_p,\tau_{p+1}]$$
#
# $$ \mathbf{g}(\mathbf{v}^{p+1},\mathbf{v}^p) = \mathbf{0}$$
#
# $$ \mathbf{x}\in\mathcal{X}_p, \mathbf{u}\in\mathcal{U}_p, \boldsymbol{\lambda}\in\mathcal{K}_p.$$
#
# where $\mathbf{g}(\cdot,\cdot)$ describes the impact dynamics, which act as terminal constraints in each walking phase. In this example we use the following impact model:
#
# $$\mathbf{M}(\mathbf{v}_{next}-\mathbf{v}) = \mathbf{J}_{impulse}^T \boldsymbol{\Lambda}_{impulse}$$
#
# $$\mathbf{J}_{impulse} \mathbf{v}_{next} = \mathbf{0}$$
#
# $$\mathbf{J}_{c} \mathbf{v}_{next} = \mathbf{J}_{c} \mathbf{v}$$
#
# ### Note:
# You can find an example of such kind of problems in bipedal_walking_from_foot_traj.ipynb.
# ## Reference
#
# The material presented in this notebook was previously presented at ICRA 2020. For more information, please read the following paper:
#
# <NAME> et al. Crocoddyl: An Efficient and Versatile Framework for Multi-Contact Optimal Control, 2020
# +
from IPython.display import HTML
# Youtube
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/nTeiHioEO4w" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
# -
#
#
# 
| examples/notebooks/introduction_to_crocoddyl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="_kNLvSwkeu7Z" outputId="3a3fc6b7-c543-49cf-be01-7f7fae62259b"
# !pip install simplejson
# !pip install pickle5
# !pip install nltk
import pickle5 as pickle
import nltk
import string
import re
import numpy as np
import pandas as pd
import simplejson
import json
#CNN Imports
import os
import sys
import keras
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.models import model_from_json
nltk.download('stopwords')
# + colab={"base_uri": "https://localhost:8080/"} id="qD97kYMEe4QV" outputId="7ce5b8ce-da58-4fb0-d3bc-66292cef8af2"
from google.colab import drive
drive.mount('/content/gdrive/')
# + id="ukFpNX8WfClo"
import sys
import os
prefix = "/content/gdrive/My Drive/NLP Assignments/"
sys.path.append(prefix)
# + id="wgZbXOxxfDcV"
json_filename = prefix+"bestmodel.json"
json_file = open(json_filename, "r")
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# + colab={"base_uri": "https://localhost:8080/"} id="cWC6EZzQf3TL" outputId="0ca391f0-ee92-4df5-ef29-4268b7aba186"
model_filename = prefix+"bestmodel.h5"
loaded_model.load_weights(model_filename)
print("Loaded model from disk")
# + id="qPL_mTCXe4Wj"
test_filename = prefix+"Test.csv"
test_reviews =pd.read_csv(test_filename)
# + colab={"base_uri": "https://localhost:8080/"} id="N1qqMoZue4Zi" outputId="a195f752-cbe7-4942-9631-33647b7fb3ee"
stopwords = nltk.corpus.stopwords.words('english')
ps = nltk.PorterStemmer()
MAX_WORDS = 2500
MAX_SEQUENCE_LENGTH = 734
def clean_text(text):
text = "".join([word.lower() for word in text if word not in string.punctuation])
tokens = re.split('\W+', text)
text = [ps.stem(word) for word in tokens if word not in stopwords]
return text
test_reviews['clean_text2'] = test_reviews['text'].apply(lambda x: clean_text(x))
X_test, y_test = test_reviews['clean_text2'], test_reviews['label']
# loading tokenizer
token_filename = prefix+"tokenizer.pickle"
with open(token_filename, 'rb') as handle:
tokenizer = pickle.load(handle)
X_test_seq = tokenizer.texts_to_sequences(X_test)
X_test_data = pad_sequences(X_test_seq, maxlen=MAX_SEQUENCE_LENGTH)
labels_test = to_categorical(np.asarray(y_test))
print('Shape of testing data tensor:', X_test_data.shape)
print('Shape of testing label tensor:', labels_test.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="VBRw_smOiy4T" outputId="1d90c29c-8b71-46f1-e1e0-bff808d6f7cc"
loaded_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
score = loaded_model.evaluate(X_test_data, y_test, verbose=0)
print ("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
# + colab={"base_uri": "https://localhost:8080/"} id="k_5Ey2mPizBI" outputId="f2bbf8c7-4a44-4119-e212-bbf417618c45"
from keras import backend as K
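# Batch-wise precision / recall / F1 helpers implemented with Keras backend ops.
# (Keras 2.0 removed these from its built-in metrics because per-batch values can be
# misleading; here they are only used as a rough indication on the test set.)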
def recall_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_m(y_true, y_pred):
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
# compile the model
loaded_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m])
loss_test, accuracy_test, f1_score_test, precision_test, recall_test = loaded_model.evaluate(X_test_data, y_test, verbose=0)
print('F1 score for test dataset :',f1_score_test*100)
| CNNmodel_Test_script.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="272e773f-bd3d-45e7-9685-776b6caa830d"
# # Python for Padawans
#
# This tutorial will go through the basic data wrangling workflow I'm sure you all love to hate, in Python!
# FYI: I come from an R background (aka I'm not a proper programmer) so if you see any formatting issues please cut me a bit of slack.
#
# **The aim for this post is to show people how to easily move their R workflows to Python (especially pandas/scikit)**
#
# One thing I especially like is how consistent all the functions are. You don't need to switch up style like you have to when you move from base R to dplyr etc.
#
# And also, it's apparently much easier to push code to production using Python than R. So there's that.
#
# ### 1. Reading in libraries
# + _cell_guid="00006747-c6be-43d1-9529-9e8b80a3233b"
# %matplotlib inline
import os
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import math
# + [markdown] _cell_guid="2983cccb-e53b-49e2-83b6-7b69ac8913cd"
# #### Don't forget that %matplotlib function. Otherwise your graphs will pop up in separate windows and stop the execution of further cells. And nobody got time for that.
#
# ### 2. Reading in data
# + _cell_guid="48f54d0a-2112-4abe-a714-4cfa55daa0c7"
data = pd.read_csv('../input/loan.csv', low_memory=False)
data.drop(['id', 'member_id', 'emp_title'], axis=1, inplace=True)
data.replace('n/a', np.nan,inplace=True)
data.emp_length.fillna(value=0,inplace=True)
data['emp_length'].replace(to_replace='[^0-9]+', value='', inplace=True, regex=True)
data['emp_length'] = data['emp_length'].astype(int)
data['term'] = data['term'].apply(lambda x: x.lstrip())
# + [markdown] _cell_guid="86be0d39-6122-4e4b-823f-40f9c55625e3"
# ### 3. Basic plotting using Seaborn
#
# Now let's make some pretty graphs. Coming from R I definitely prefer ggplot2 but the more I use Seaborn, the more I like it. If you kinda forget about adding "+" to your graphs and instead use the dot operator, it does essentially the same stuff.
#
# **And I've just found out that you can create your own style sheets to make life easier. Wahoo!**
#
# But anyway, below I'll show you how to format a decent looking Seaborn graph, as well as how to summarise a given dataframe.
# + _cell_guid="33b8c75f-9774-4f64-8d25-d201fea0228c"
import seaborn as sns
import matplotlib
s = pd.value_counts(data['emp_length']).to_frame().reset_index()
s.columns = ['type', 'count']
def emp_dur_graph(graph_title):
sns.set_style("whitegrid")
ax = sns.barplot(y = "count", x = 'type', data=s)
ax.set(xlabel = '', ylabel = '', title = graph_title)
ax.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
_ = ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
emp_dur_graph('Distribution of employment length for issued loans')
# + [markdown] _cell_guid="daa2a387-88ba-409a-8502-5d2851f40caf"
# ### 4. Using Seaborn stylesheets
#
# Now before we move on, we'll look at using style sheets to customize our graphs nice and quickly.
# + _cell_guid="f9215c66-3051-4034-b7fd-6a1e4859839e"
import seaborn as sns
import matplotlib
print (plt.style.available)
# + [markdown] _cell_guid="c9f76875-c24d-4b6f-a823-3c97b1b8e17c"
# Now you can see that we've got quite a few to play with. I'm going to focus on the following styles:
#
# - fivethirtyeight (because it's my fav website)
# - seaborn-notebook
# - ggplot
# - classic
# + _cell_guid="59ed93a4-1b68-428e-b5a8-f232baeab334"
import seaborn as sns
import matplotlib
plt.style.use('fivethirtyeight')
ax = emp_dur_graph('Fivethirty eight style')
# + _cell_guid="27842ef9-9b4b-47b1-8455-cb7bbd93b5bb"
plt.style.use('seaborn-notebook')
ax = emp_dur_graph('Seaborn-notebook style')
# + _cell_guid="fba70aa4-5a2a-4c77-b6bd-24436a8d8085"
plt.style.use('ggplot')
ax = emp_dur_graph('ggplot style')
# + _cell_guid="b2f3250f-65f7-469f-a31f-2e4bf102c856"
plt.style.use('classic')
ax = emp_dur_graph('classic style')
# + [markdown] _cell_guid="f35278bb-c8cd-4f54-bf07-3681cf12a05a"
# ### 5. Working with dates
#
# Now we want to look at datetimes. Dates can be quite difficult to manipulate, but it's worth the effort. Once they're formatted correctly, life becomes much easier.
# + _cell_guid="348b34d6-5e81-4e9d-87b0-dd349320e38a"
import datetime
data.issue_d.fillna(value=np.nan,inplace=True)
issue_d_todate = pd.to_datetime(data.issue_d)
data.issue_d = pd.Series(data.issue_d).str.replace('-2015', '')
data.emp_length.fillna(value=np.nan,inplace=True)
data.drop(['loan_status'],1, inplace=True)
data.drop(['pymnt_plan','url','desc','title' ],1, inplace=True)
data.earliest_cr_line = pd.to_datetime(data.earliest_cr_line)
import datetime as dt
data['earliest_cr_line_year'] = data['earliest_cr_line'].dt.year
# + [markdown] _cell_guid="5a9ca8bf-869d-4174-bfb4-eec7660c9ed1"
# ### 6. Making faceted graphs using Seaborn
#
# Now I'll show you how you can build on the above data frame summaries as well as make some facet graphs.
# + _cell_guid="daa5d719-1a94-4fa2-888c-e36b0e72ff36"
import seaborn as sns
import matplotlib.pyplot as plt
s = pd.value_counts(data['earliest_cr_line']).to_frame().reset_index()
s.columns = ['date', 'count']
s['year'] = s['date'].dt.year
s['month'] = s['date'].dt.month
d = s[s['year'] > 2008]
plt.rcParams.update(plt.rcParamsDefault)
sns.set_style("whitegrid")
g = sns.FacetGrid(d, col="year")
g = g.map(sns.pointplot, "month", "count")
g.set(xlabel = 'Month', ylabel = '')
axes = plt.gca()
_ = axes.set_ylim([0, d.year.max()])
plt.tight_layout()
# + [markdown] _cell_guid="6e5be6b8-0be0-425a-9e3f-39060562a93a"
# Now I want to show you how to easily drop columns that match a given pattern. Let's drop any column that includes "mths" in it.
# + _cell_guid="53b4824b-bcf4-4b3e-ac33-e7d56e2a7837"
mths = [s for s in data.columns.values if "mths" in s]
mths
data.drop(mths, axis=1, inplace=True)
# + [markdown] _cell_guid="6ad45286-59be-467a-9183-a89ffe221cf4"
# ### 7. Using groupby to create summary graphs
# + _cell_guid="6291b1e4-a10b-4c57-96ef-eaf56b227cb2"
group = data.groupby('grade').agg([np.mean])
loan_amt_mean = group['loan_amnt'].reset_index()
import seaborn as sns
import matplotlib
plt.style.use('fivethirtyeight')
sns.set_style("whitegrid")
ax = sns.barplot(y = "mean", x = 'grade', data=loan_amt_mean)
ax.set(xlabel = '', ylabel = '', title = 'Average amount loaned, by loan grade')
ax.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
_ = ax.set_xticklabels(ax.get_xticklabels(), rotation=0)
# + [markdown] _cell_guid="cb09d09b-b451-473e-9da7-761781e74c1a"
# ### 8. More advanced groupby statements visualised with faceted graphs
# + _cell_guid="2b918ec8-b0a0-406d-8b56-98e28163a101"
filtered = data[data['earliest_cr_line_year'] > 2008]
group = filtered.groupby(['grade', 'earliest_cr_line_year']).agg([np.mean])
graph_df = group['int_rate'].reset_index()
import seaborn as sns
import matplotlib
plt.style.use('fivethirtyeight')
plt.suptitle('bold figure suptitle', fontsize=14, fontweight='bold')
sns.set_style("whitegrid")
g = sns.FacetGrid(graph_df, col="grade", col_wrap = 2)
g = g.map(sns.pointplot, "earliest_cr_line_year", "mean")
g.set(xlabel = 'Year', ylabel = '')
axes = plt.gca()
axes.set_ylim([0, graph_df['mean'].max()])
_ = plt.tight_layout()
# + [markdown] _cell_guid="23150f2c-bbae-443d-ac9f-f5cda3058a34"
# ### 9. Treatment of missing values
# This section is a toughie because there really is no correct answer. A pure data science/mining approach would test each of the approaches here using a CV split and include the most accurate treatment in their modelling pipeline.
# Here I have included the code for the following treatments:
#
# - Mean imputation
# - Median imputation
# - Algorithmic imputation
#
# I spent a large amount of time looking at 3. because I couldn't find anyone else who has implemented it, so I built it myself. In R it's very easy to use supervised learning techniques to impute missing values for a given variable (as shown here: https://www.kaggle.com/mrisdal/shelter-animal-outcomes/quick-dirty-randomforest) but sadly I couldn't find it done in Python.
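# As a rough, hedged sketch of that "test each treatment with a CV split" idea (this assumes a
# newer scikit-learn with `SimpleImputer`, `Pipeline` and `cross_val_score`; the rest of this
# kernel was written against an older sklearn API), you could compare strategies like this on a
# toy numeric matrix:
# +
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
rng = np.random.RandomState(0)
X_toy = rng.normal(size=(200, 4))
y_toy = (X_toy[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
X_toy[rng.uniform(size=X_toy.shape) < 0.1] = np.nan   # knock out ~10% of the entries
for strategy in ["mean", "median"]:
    pipe = Pipeline([("impute", SimpleImputer(strategy=strategy)),
                     ("clf", LogisticRegression())])
    scores = cross_val_score(pipe, X_toy, y_toy, cv=5)
    print("%s imputation CV accuracy: %.3f" % (strategy, scores.mean()))
# -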
# + _cell_guid="d56c63f2-ac7a-463f-b9b1-f2721ea2c484"
#data['emp_length'].fillna(data['emp_length'].mean())
#data['emp_length'].fillna(data['emp_length'].median())
#data['emp_length'].fillna(data['earliest_cr_line_year'].median())
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(max_depth=5, n_estimators=100, max_features=1)
data['emp_length'].replace(to_replace=0, value=np.nan, inplace=True, regex=True)
cat_variables = ['term', 'purpose', 'grade']
columns = ['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'int_rate', 'grade', 'purpose', 'term']
def impute_missing_algo(df, target, cat_vars, cols, algo):
y = pd.DataFrame(df[target])
X = df[cols].copy()
X.drop(cat_vars, axis=1, inplace=True)
cat_vars = pd.get_dummies(df[cat_vars])
X = pd.concat([X, cat_vars], axis = 1)
    y['null'] = y[target].isnull()
X['null'] = y[target].isnull()
y_missing = y[y['null'] == True]
y_notmissing = y[y['null'] == False]
X_missing = X[X['null'] == True]
X_notmissing = X[X['null'] == False]
y_missing.loc[:, target] = ''
dfs = [y_missing, y_notmissing, X_missing, X_notmissing]
for df in dfs:
df.drop('null', inplace = True, axis = 1)
y_missing = y_missing.values.ravel(order='C')
y_notmissing = y_notmissing.values.ravel(order='C')
X_missing = X_missing.as_matrix()
X_notmissing = X_notmissing.as_matrix()
algo.fit(X_notmissing, y_notmissing)
y_missing = algo.predict(X_missing)
y.loc[(y['null'] == True), target] = y_missing
y.loc[(y['null'] == False), target] = y_notmissing
return(y[target])
data['emp_length'] = impute_missing_algo(data, 'emp_length', cat_variables, columns, rf)
data['earliest_cr_line_year'] = impute_missing_algo(data, 'earliest_cr_line_year', cat_variables, columns, rf)
# + [markdown] _cell_guid="1cea39ab-3b20-4812-ac1f-b7e795515072"
# ### 10. Running a simple classification model
# Here I take my cleaned variables (missing values have been imputed using random forests) and run a simple sklearn algo to classify the term of the loan.
# This step in the analytics pipeline does take longer in Python than in R (as R handles factor variables out of the box while sklearn only accepts numeric features) but it isn't that hard.
# This is just indicative though! A number of the variables are likely to introduce leakage to the prediction problem as they'll influence the term of the loan either directly or indirectly.
# + _cell_guid="049b66d7-5c8d-46e0-82ed-de042efbc93b"
y = data.term
cols = ['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'int_rate', 'grade', 'emp_length', 'purpose', 'earliest_cr_line_year']
X = pd.get_dummies(data[cols])
from sklearn import preprocessing
y = y.apply(lambda x: x.lstrip())
le = preprocessing.LabelEncoder()
le.fit(y)
y = le.transform(y)
X = X.as_matrix()
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
logistic.fit(X, y)
# + [markdown] _cell_guid="56f7e808-3a7e-4953-a094-3ea571c6088b"
# ### 11. Pipelining in sklearn
#
# In this section I'll go through how you can combine multiple techniques (supervised and unsupervised) in a pipeline.
# These can be useful for a number of reasons:
#
# - You can score the output of the whole pipeline
# - You can gridsearch for the whole pipeline making finding optimal parameters easier
#
# So next we'll combine a PCA (unsupervised) and a Random Forest (supervised) to create a pipeline for modelling the data.
#
# In addition to this I'll show you an easy way to grid search for the optimal hyper parameters.
# + _cell_guid="2838f7e7-15a7-414f-a333-f00998fc6acb"
from sklearn import linear_model, decomposition
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
rf = RandomForestClassifier(max_depth=5, max_features=1)
pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('rf', rf)])
n_comp = [3, 5]
n_est = [10, 20]
estimator = GridSearchCV(pipe,
dict(pca__n_components=n_comp,
rf__n_estimators=n_est))
estimator.fit(X, y)
| downloaded_kernels/loan_data/kernel_100.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Welcome to the Chemical Clocks Module
#
# ### A chemical clock is a type of chemical reaction that literally runs like clockwork. With many of these reactions, you can determine the chemical composition just by timing the reaction!
# - Here we will use Python to simulate the Belousov-Zhabotinsky (BZ) reaction, which can be classified as a chemical clock. There are many different BZ reactions using different chemicals, but they all follow the same behavior.
#
# - This interesting reaction is "oscillatory". In the video you are about to see you will notice the color of the reaction mixture changing from red to green and then back again.
#
# - The math we will be using was developed at the University of Oregon! If you're curious about it, open this link in a new tab: http://www.scholarpedia.org/article/Oregonator
#
# # <font color='red'>*WARNING*</font>
# # <font color='red'>The chemicals used in the reactions you're about to see are hazardous. Do not attempt to recreate these reactions without adult supervision and proper personal protective equipment (goggles, gloves, etc.)</font>
#
# ## Click the arrow on the left of the next cell TWICE to begin.
# ### (This is how you'll be running blocks of code. You can also hit Shift and Enter on your keyboard)
from IPython.display import IFrame
IFrame("https://www.youtube.com/embed/8xSqvlkL1hk", width=560, height=315)
# ## As you can see, the color change happens QUICKLY, but at consistent times.
# - This is why we can call the BZ reaction a Chemical Clock.
#
# ## Click the arrow on the next box to watch a video of a different BZ reaction.
#
# - You'll notice it takes longer for the color to change.
# - When they speed up this video to 400x you can't even see the color change!
IFrame("https://www.youtube.com/embed/07n2WGg4WTc", width=560, height=315)
# ## The next video shows how the first reaction is made.
# ## <font color='red'>__AGAIN, DO NOT TRY THIS AT HOME__</font>
# ## <font color='red'>You WILL get hurt if you touch these chemicals.</font>
IFrame("https://www.youtube.com/embed/kw9wF-GNjqs", width=560, height=315)
# ### This module will show you how you can write code to model interesting science like this! And the best part?
#
# # You don't have to be "good" at math to do this!!!
#
# ### If someone has given you the equations you need, all you need to do is write them into the code and the computer does all the work for you!
#
# ## Here are the equations we'll be using:
# # $r_x = (qy - xy +x(1 - x))/\epsilon$
# # $r_y = (-qy -xy +z)/\epsilon'$
# # $r_z = x - z$
# ### If you wanted to solve these yourself you would need to take an advanced college math class.
# ### Luckily, the computer can solve these for us!
# - The way it does this is like making a movie.
# - The computer takes a bunch of "pictures" really quickly and then plays them together to make it look like they're moving.
#
# ### You only need to understand a few important things:
# - $r_x$, $r_y$, and $r_z$ together tell us how fast the reaction is happening (basically how fast the color is changing). We call these the "rates" of reaction.
# - There are a bunch of chemicals floating around in that beaker, but the only chemicals that matter are chemical X, chemical Y and chemical Z.
# - The $x$, $y$, and $z$ tell us how much of each chemical is in the mixture.
# - $q$, $\epsilon$, and $\epsilon'$ are just numbers we get to choose.
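# To see what "taking a lot of pictures" means, here is a tiny, hedged example (a made-up,
# much simpler equation than the BZ ones above) solved with the same `odeint` tool this module
# uses later:
# +
# Toy example: solve dy/dt = -y, whose exact answer is y = e^(-t).
import numpy as np
from scipy.integrate import odeint
from matplotlib import pyplot as plt
def rate(y, t):
    return -y                  # how fast y is changing at each moment
t = np.linspace(0, 5, 200)     # 200 "pictures" between time 0 and time 5
y = odeint(rate, 1.0, t)       # start at y = 1 and let the computer solve the rest
plt.plot(t, y)
plt.xlabel('Time')
plt.ylabel('y')
plt.show()
# -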
# ## Let's get started! Click the arrows next to each block as you go through the module
#
# #### First, we need to tell Python where to look for the code we'll need. This code is stored in a so-called "library".
# #### To access the code in these libraries, we tell Python to "import" the code.
# #### I wrote in some comments if you're curious about what the libraries are for. You can learn more about them by searching them on Google (or your favorite search engine).
# # Click the arrow on the next block
# +
#######################This stuff is for all the fun buttons you'll be clicking#######################
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
###################These tell python where to find the math stuff it needs and how to make plots######
get_ipython().run_line_magic('matplotlib', 'inline')
import math
import random
from matplotlib import pyplot as plt
from scipy.integrate import odeint
import numpy as np
# -
# ## The next few blocks show the reactions we'll be thinking about, and how we record the parameters in the code.
# #### The parameters are just numbers the computer will need to solve the equations.
# #### The following table shows the steps in one kind of BZ reaction. The model we're using simplifies this.
# #### The k's on the right side of this table are called "rate constants". These tell us how fast each step will happen.
# ## Don't worry too much about understanding this.
#
#
# 
# # Click the arrow
# +
def initialize(pH, x, y, z, timeStep, T):
#These are the rate constants for the reaction.
#F indicates the forward reaction
#R indicates the reverse reaction
kF1 = 8e9 #M^-2*s^-1
kR1 = 110 #s^-1
kF2 = 3e6 #M^-2*s^-1
kR2 = 2e-5 #M^-1*s^-1
kF3 = 2 #M^-3*s^-1
kR3 = 3.2 #M^-1*s^-1
kF4 = 3e3 #M^-1*s^-1
kR4 = 1e-8 #M^-2*s^-1
kF5 = 42 #M^-2*s^-1
kR5 = 2.2e3 #s^-1
kF5a = 7.4e4 #s^-1
kR5a = 1.4e9 #M^-1*s^-1
kF6 = 8e4 #M^-2*s^-1
kR6 = 8.9e3 #M^-1*s^-1
#This is pretty messy right? The Oregonator model makes things WAY more simple!
#We're going to make things simple by introducing new rate constants!
H = 10**(-pH)
k1 = kF3*H**2
k2 = kF2*H
k3 = kF5*H
k4 = kF4*H
kc = 1.00
A = 0.06
B= 0.02
    #Here are those numbers q, epsilon, and eprime from the equations above:
q = 2*k1*k4/(k2*k3)
epsilon = kc*B/(k3*A)
eprime = 2*kc*k4*B/(k2*k3*A)
#Here are the scaling relationships for X, Y, and Z:
#X0M = (k3*A/(2*k4))*x
#Y0M = (k3*A/k2)*y
#Z0M = ((k3*A)**2/(kc*k4*B))*z
#Finally, here's where the model figures out how many scaled timesteps to run for:
N = math.floor(T/timeStep) #Floor because computers start counting at 0
params = {'q':q,
'epsilon' : epsilon,
'eprime' : eprime,
'x0' : x,
'y0' : y,
'z0' : z,
'T' : T,
'N' : N}
return params
def select(p):
return p
# -
# ## Run the next block of code.
# ## Those complicated equations from before don't seem so bad once we put them into our code, as you'll see in the next block.
#
# #### The first function in the next block is what our equations look like in the code.
# #### The second function tells the computer how to solve those equations.
# +
#The rRate function (short for "reaction rate") computes the rates for the system as an array
def rRate(u, t, q, epsilon, eprime):
x, y, z = u
rx = (q*y - x*y + x*(1 - x))/epsilon
ry = (-q*y - x*y + 1*z)/eprime #normally the last term would be f*z, but we are taking f to be 1 here.
rz = x - z
r = [rx, ry, rz]
return r
#The "concs" (short for "concentrations") function solves the equations
def concs(prms):
q = prms.get('q')
eprime = prms.get('eprime')
epsilon = prms.get('epsilon')
T = prms.get('T')
N = prms.get('N')
#We will have u0 hold the initial concentrations
u0 = [prms.get('x0'), prms.get('y0'), prms.get('z0')]
#Time points to solve at:
t = np.linspace(0, T, num = N)
#This is the step that solves the equations
sol = odeint(rRate, u0, t, args = (q, epsilon, eprime))
return sol, t
# -
# # Now we get to have some fun! Like real scientists, you're going to change the parameters.
#
# - The pH measures the acidity of the reaction solution. As you'll see, this needs to be VERY acidic (low pH). You don't want to touch this with your bare hands.
# - x, y, and z are the amounts of the three chemicals that make the color change happen.
#
# ## We need to tell the computer how many pictures to take.
# - This tells the computer how much time we want the reaction to run.
# - You can think about this like filming a movie. The camera takes a bunch of pictures really fast, and when you flip through the pictures it looks like they're moving.
# - Like filming a movie, this will only work if you take A LOT of pictures.
# # Run the next block, then click and drag the sliders to choose how much of each chemical we start with.
# - Set x and y to 1.00 (just drag the scroller all the way to the right).
# - Set z to 0 (we'll assume there's none in there initially).
#
# ### Now, let's take those numbers we chose and have the computer tell us what will happen.
#
# # <font color='red'>**MAKE SURE YOU ONLY RUN THIS NEXT BLOCK OF CODE ONCE. IT WILL RESET IF YOU CLICK THE BUTTON TO RUN IT AGAIN**</font>
# +
#This is to help you choose the parameters
def f(x, y, z):
return x, y, z
chosen_concs = interact(f, x = (0, 1.0, 0.1), y = (0, 1.0, 0.1), z = (0, 1.0, 0.1))
# +
x0 = chosen_concs.widget.result[0]
y0 = chosen_concs.widget.result[1]
z0 = chosen_concs.widget.result[2]
chosen_params = initialize(0.10, x0, y0, z0, 0.001, 30)
solution, time = concs(chosen_params)
plt.plot(time, solution[:, 0], 'b', label='x')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical X', fontsize = 15)
plt.grid()
plt.show()
##############################################
##############################################
plt.plot(time, solution[:, 1], 'g', label='y')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical Y', fontsize = 15)
plt.grid()
plt.show()
##############################################
##############################################
plt.plot(time, solution[:, 2], 'r', label='z')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical Z', fontsize = 15)
plt.grid()
plt.show()
# -
# ## These graphs show how the amounts of each chemical in the beaker change over time.
# - If you could zoom in on the pictures, you would see that the lines are actually 30,000 dots placed close together!
# - The Greek letter $\tau$ (tau) is used for time because this time is "scaled" (it's not in seconds or minutes).
# - Scaling just means we've multiplied it by something to make it easier for the computer to plot.
#
# ## Can you figure out which chemical (X, Y, or Z) is causing the color change?
# - The answer is Z, which in this case is Iron. The Iron (Fe$^{2+}$) in the Ferroin he adds is red. When it reacts to form Fe$^{3+}$, it turns blue.
# - Look at the bumps in the graphs. The bump happens in the blue graph (X), then red (Z), and then green (Y).
# - So what's the story here? How is this happening?
#
# ## Run the next block to plot the red and blue graphs together:
# +
plt.plot(time, solution[:, 1], 'g', label='y')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical Y', fontsize = 15)
plt.grid()
plt.show()
##############################################
##############################################
plt.plot(time, solution[:, 0], 'b', label='x')
plt.plot(time, solution[:, 2], 'r', label='z')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemicals', fontsize = 15)
plt.grid()
plt.show()
# -
# ## Chemical X _helps_ chemical Z form, and then chemical Y destroys both of them. (ouch...)
# - We call chemical X a _catalyst_ because it helps ("catalyzes") the reaction.
# - Chemical Y is called an _inhibitor_ because it slows down ("inhibits") the reaction that makes the color change.
#
# ## Let's see what happens when we change other parameters!
#
# ## What happens when we change the pH?
# - To answer this, we're going to keep the same amounts of X, Y, and Z as we had above.
# - I set the pH from before to be 0.10 (this is very acidic, and would hurt if it got on your skin).
# - Run the next block of code and set the pH to 1.00
# +
def pH_choice(pH):
return pH
chosen_pH = interact(pH_choice, pH = (-1, 1, 0.01))
# -
# ## Run this next block of code
# +
chosen_params = initialize((chosen_pH.widget.result), x0, y0, z0, 0.001, 30)
solution, time = concs(chosen_params)
plt.plot(time, solution[:, 1], 'g', label='y')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical', fontsize = 15)
plt.grid()
plt.show()
##############################################
##############################################
plt.plot(time, solution[:, 0], 'b', label='x')
plt.plot(time, solution[:, 2], 'r', label='z')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical', fontsize = 15)
plt.grid()
plt.show()
# -
# ## Notice that there's only one bump now? There were four when we first ran it.
#
# ## This means that the color change will happen slower!
# - This tells us that making the solution more acidic will make the reaction happen faster.
# - Go back and set the pH to -0.33 (drag the slider to the right).
# - This pH is so acidic it will burn through most gloves.
# ## Now let's see what happens when we let the reaction happen longer.
# - There are two things to do here. We need to decide how long to run the reaction, and how many pictures to take.
#
# ## When you run the next block of code:
# - Set timeStep = 0.0001 (click the dropdown menu and select the one at the top).
# - Set TotalTime = 100 (I set it to 30 before).
#
# ## Run the next block of code. It might take a while to make the graphs this time
# +
def getTime(timeStep, TotalTime):
return timeStep, TotalTime
chosen_times = interact(getTime, timeStep = [0.0001, 0.001, 0.01], TotalTime = (10, 100, 1))
# +
tstep = chosen_times.widget.result[0]
T = chosen_times.widget.result[1]
chosen_params = initialize(0.10, 1.00, 1.00, 0, tstep, T)
solution, time = concs(chosen_params)
plt.plot(time, solution[:, 1], 'g', label='y')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical', fontsize = 15)
plt.grid()
plt.show()
##############################################
##############################################
plt.plot(time, solution[:, 0], 'b', label='x')
plt.plot(time, solution[:, 2], 'r', label='z')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical', fontsize = 15)
plt.grid()
plt.show()
# -
# ## That took longer to run, right? That's because we told it to take more pictures!
# - The longer you tell the computer to run the code, the longer it will take to solve the equations we gave it.
# - A lot of scientific calculations need to run for MUCH longer than this. The longer you need to run these calculations, the more powerful you need your computer to be (a lot of scientists use supercomputers for this).
#
# # Here are all the parameters we talked about. Feel free to mess around with the parameters and see what interesting stuff you can get the graphs to do!
# Note: kc is fixed inside initialize(), so it is not exposed as a slider here.
final_params = interact(initialize,
                        pH = (-1,1,0.01),
                        x = (0,1,0.1),
                        y = (0,1,0.1),
                        z = (0,1,0.1),
                        timeStep = [0.0001, 0.001, 0.1],
                        T = (10, 50, 1))
# +
solution, time = concs(final_params.widget.result)
plt.plot(time, solution[:, 1], 'g', label='y')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical', fontsize = 15)
plt.grid()
plt.show()
##############################################
##############################################
plt.plot(time, solution[:, 0], 'b', label='x')
plt.plot(time, solution[:, 2], 'r', label='z')
plt.legend(loc='best', fontsize = 15)
plt.xlabel('Reaction Time', fontsize = 15)
plt.ylabel('Amount of Chemical', fontsize = 15)
plt.grid()
plt.show()
# -
# # That's all for now! Let us know if you'd be interested in learning about more types of chemical clocks. We might even be able to help you make one in a lab sometime!
| Chemistry_module/Chemical_Clocks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Reading a CSV file and converting it to list
import csv
f=open("guns.csv")
csvreader=csv.reader(f)
data=list(csvreader)
#print(data[0:5])
headers=data[:1]
data=data[1:len(data)]
print(headers)
print(data[:5])
years=[row[1] for row in data]
year_counts={}
for x in years:
if x not in year_counts:
year_counts[x]=1
else:
year_counts[x]+=1
print(year_counts)
import datetime
dates=[datetime.datetime(year=int(row[1]),month=int(row[2]),day=1) for row in data]
print(dates[0:5])
date_counts={}
for y in dates:
if y not in date_counts:
date_counts[y]=1
else:
date_counts[y]+=1
print(date_counts)
# +
sex=[row[5] for row in data]
sex_counts={}
for y1 in sex:
if y1 not in sex_counts:
sex_counts[y1]=1
else:
sex_counts[y1]+=1
print(sex_counts)
# -
races=[row[7] for row in data]
race_counts={}
for y2 in races:
if y2 not in race_counts:
race_counts[y2]=1
else:
race_counts[y2]+=1
print(race_counts)
# From the analysis so far, it seems that there are more male deaths than female deaths, and fewer deaths among minority races. But we can't judge anything from these raw counts alone, because we also have to take into account the population of each race. We should also check whether there is a correlation between the seasons and the deaths.
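# As a quick, hedged sketch of that seasonal check (reusing the `dates` list built above; this
# was not part of the original analysis), we can count deaths per calendar month:
# +
month_counts = {}
for d in dates:
    month_counts[d.month] = month_counts.get(d.month, 0) + 1
print(sorted(month_counts.items()))
# -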
import csv
f1=open("census.csv")
csvre=csv.reader(f1)
census=list(csvre)
print(census)
# +
mapping = {
"Asian/Pacific Islander": 15159516 + 674625,
"Native American/Native Alaskan": 3739506,
"Black": 40250635,
"Hispanic": 44618105,
"White": 197318956
}
race_per_hundredk={}
for a,b in race_counts.items():
    race_per_hundredk[a]=(b/mapping[a])*100000
# +
intents = [row[3] for row in data]
homicide_race_counts = {}
for i,race in enumerate(races):
if race not in homicide_race_counts:
homicide_race_counts[race] = 0
if intents[i] == "Homicide":
homicide_race_counts[race] += 1
race_per_hundredk = {}
for q,w in homicide_race_counts.items():
race_per_hundredk[q] = (w / mapping[q]) * 100000
race_per_hundredk
# -
| Death analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="tS6XqFjMJpSc"
# # How to train MNIST with FastAI
# + [markdown] id="jNd7lCE8KAmB"
# After reading chapter 4 of [Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD](https://www.amazon.com/Deep-Learning-Coders-fastai-PyTorch/dp/1492045527), AKA "Fastbook", I can train very easily with FastAI.
# + colab={"base_uri": "https://localhost:8080/"} id="u8fs6lQiJlW_" outputId="b89e3a50-4821-4527-ec12-1e549be63940"
# !pip install -Uqq fastbook
# + [markdown] id="QY3PsOE4KwDu"
# Although `import *` is discouraged in a typical Python programming environment, in a deep learning environment it is actually encouraged. Rather than importing libraries as needed one by one, it is easier to load everything needed before you start exploring. It is better to have it and not need it than to need it and not have it.
# + id="k-yX02TwJ-jv"
from fastai.vision.all import *
# + [markdown] id="tNXqKx-FLgGl"
# We are going to use the MNIST handwritten digits dataset. With FastAI, it is very easy to download the data into our path.
# + colab={"base_uri": "https://localhost:8080/", "height": 55} id="JIJAkkwjKuc5" outputId="b35e969c-ddf8-48f0-843c-465d69834163"
path = untar_data(URLs.MNIST)
path.ls()
# + [markdown] id="SqNp9sWGMtGZ"
# Now that we have data, we need a datablock, which is a template for how data should be processed.
#
# ---
#
# Here is how our template is made:
# - `blocks=(ImageBlock, CategoryBlock)` means inputs are images and labels are multiple categories.
# - `get_items=get_image_files` specifies it is taking image files.
# - `splitter=RandomSplitter(seed=42)` randomly sets aside 20 percent of the whole dataset for validation so that we can check for overfitting. Although the MNIST dataset already comes with its own validation split, we do not have to use it.
# - `get_y=parent_label` specifies how the labels are obtained. In this dataset, each image's parent directory tells us what kind of digit it is.
# + id="Mvz9fGx3LeGC"
digits = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(seed=42),
get_y=parent_label)
dls = digits.dataloaders(path)
# + [markdown] id="uuX6uGcvUt2C"
# Now that we have the dataloaders, we can take a look at the data with `dls.show_batch()`.
# + colab={"base_uri": "https://localhost:8080/", "height": 536} id="UDzgi9VUNU-N" outputId="159e3566-3bde-4d8b-e8e5-1cb4d8aba926"
dls.show_batch()
# + [markdown] id="ofsOCEAWViWz"
# It looks good. Each image has the correct label. It is time to train our model with the data. Instead of building our model from scratch, we will use a pretrained model because it saves time and resources. With `cnn_learner`, we use resnet18 and set our metric to error_rate. Then we `fine_tune` our model, which means we remove the last layer of resnet18 and replace it with a custom one that categorizes which digit it is. This last layer, also called the 'head', is trained first; `fine_tune` then briefly updates the rest of the network as well.
# + colab={"background_save": true, "base_uri": "https://localhost:8080/", "height": 280, "referenced_widgets": ["e5a7809f50504440ae6438b5c59757c4"]} id="ev61GWiCSZqs" outputId="9bc23373-b9d7-42e6-8f36-384d4a7930e5"
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(2)
# + [markdown] id="g0FqdBOsVSiF"
# FastAI will use a GPU automatically if one is available. On a Google Colab GPU server, it took about six minutes to train, with an error rate close to 1%.
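# + [markdown]
# The cell below is not part of the original walkthrough; it is a small sketch of how to confirm the point above. fastai runs on top of PyTorch, so PyTorch can report whether a CUDA GPU is visible to this runtime.
# +
import torch
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # the GPU that fastai will pick up automatically for training
    print("GPU:", torch.cuda.get_device_name(0))
# -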
# + [markdown] id="KaRt4_gxa2J4"
# It is very easy to get started with FastAI because everything is already tuned to best practices, so we do not have to work everything out ourselves at the beginning. When first training a model, this gives us a quick baseline to compare against. With this baseline, we can judge how well a more complex model is performing.
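# + [markdown]
# As a hedged illustration (not in the original post), the baseline can also be captured programmatically: `learn.validate()` re-runs the validation set and returns the validation loss followed by each metric, so the baseline error rate can be stored and compared against later models.
# +
baseline = learn.validate()
print("Baseline validation loss:", baseline[0])
print("Baseline error rate:", baseline[1])
# -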
| _notebooks/2021-09-13-MNIST-In-FastAI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## DeepExplain - Keras (TF backend) example
### MNIST with CNN
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tempfile, sys, os
sys.path.insert(0, os.path.abspath('..'))
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
# Import DeepExplain
from deepexplain.tensorflow import DeepExplain
# +
# Build and train a network.
batch_size = 128
num_classes = 10
epochs = 3
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
x_train = (x_train - 0.5) * 2
x_test = (x_test - 0.5) * 2
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# ^ IMPORTANT: notice that the final softmax must be in its own layer
# if we want to target pre-softmax units
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# -
# %%time
with DeepExplain(session=K.get_session()) as de: # <-- init DeepExplain context
# Need to reconstruct the graph in DeepExplain context, using the same weights.
# With Keras this is very easy:
# 1. Get the input tensor to the original model
input_tensor = model.layers[0].input
# 2. We now target the output of the last dense layer (pre-softmax)
# To do so, create a new model sharing the same layers until the last dense layer (index -2)
fModel = Model(inputs=input_tensor, outputs = model.layers[-2].output)
target_tensor = fModel(input_tensor)
xs = x_test[0:10]
ys = y_test[0:10]
attributions_gradin = de.explain('grad*input', target_tensor, input_tensor, xs, ys=ys)
#attributions_sal = de.explain('saliency', target_tensor, input_tensor, xs, ys=ys)
#attributions_ig = de.explain('intgrad', target_tensor, input_tensor, xs, ys=ys)
#attributions_dl = de.explain('deeplift', target_tensor, input_tensor, xs, ys=ys)
#attributions_elrp = de.explain('elrp', target_tensor, input_tensor, xs, ys=ys)
#attributions_occ = de.explain('occlusion', target_tensor, input_tensor, xs, ys=ys)
# Compare Gradient * Input with approximate Shapley Values
# Note1: Shapley Value sampling with 100 samples per feature (78400 runs) takes a couple of minutes on a GPU.
# Note2: 100 samples are not enough for convergence, the result might be affected by sampling variance
attributions_sv = de.explain('shapley_sampling', target_tensor, input_tensor, xs, ys=ys, samples=100)
# +
# %%time
from utils import plot, plt
# %matplotlib inline
n_cols = 6
n_rows = int(len(attributions_gradin) / 2)
fig, axes = plt.subplots(nrows=n_rows, ncols=n_cols, figsize=(3*n_cols, 3*n_rows))
for i, (a1, a2) in enumerate(zip(attributions_gradin, attributions_sv)):
row, col = divmod(i, 2)
plot(xs[i].reshape(28, 28), cmap='Greys', axis=axes[row, col*3]).set_title('Original')
plot(a1.reshape(28,28), xi = xs[i], axis=axes[row,col*3+1]).set_title('Grad*Input')
plot(a2.reshape(28,28), xi = xs[i], axis=axes[row,col*3+2]).set_title('Shapley Values')
# -
# ## Batch processing
# In this example, we generate explanations for the entire test set (10,000 images) using the fast Gradient*Input method.
# `DeepExplain.explain()` accepts the `batch_size` parameter if the data to process does not fit in memory.
# %%time
with DeepExplain(session=K.get_session()) as de: # <-- init DeepExplain context
# Need to reconstruct the graph in DeepExplain context, using the same weights.
# With Keras this is very easy:
# 1. Get the input tensor to the original model
input_tensor = model.layers[0].input
# 2. We now target the output of the last dense layer (pre-softmax)
# To do so, create a new model sharing the same layers until the last dense layer (index -2)
fModel = Model(inputs=input_tensor, outputs = model.layers[-2].output)
target_tensor = fModel(input_tensor)
xs = x_test
ys = y_test
attributions_gradin = de.explain('grad*input', target_tensor, input_tensor, xs, ys=ys, batch_size=128)
print ("Done")
| examples/mint_cnn_keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tesurf] *
# language: python
# name: conda-env-tesurf-py
# ---
# + pycharm={"is_executing": false}
import sys
sys.path.append('..')
# + pycharm={"is_executing": false}
from tesufr import Processor, TextProcessParams, SummarySize
from tesufr.cores import SummaCore, FallbackCore
from tesufr.cores.em_core import EmCoresWrapper
from tesufr.corpora.providers import BbcNewsProvider, Krapivin2009Provider, LimitedProvider
from tesufr.keysum_evaluator import evaluate_processor_on_corpus
from tesufr.corpora import SetType, CorpusDocument, CorpusPurpose
# + pycharm={"is_executing": false}
processor_baseline = Processor([FallbackCore()])
processor_summa = Processor([SummaCore()])
processor_em = Processor([EmCoresWrapper()])
# -
def process_and_report(text, process_params, processor):
doc = processor.process_text(text, process_params)
print('====================================')
print("Keywords: "+' | '.join([str(kw) for kw in doc.keywords]))
print()
print("Named entities:")
for ne in doc.entities:
print(f"{ne.lemma} ({ne.subkind})")
print()
print(f"Summary ({len(doc.summary)}):")
for s in doc.summary:
print("* "+s.lemma)
# https://www.theguardian.com/us-news/2019/may/02/why-we-are-addicted-to-conspiracy-theories
text_en = open('theguardian.txt', 'rt', encoding='utf-8').read()
print(text_en[:200])
tpp = TextProcessParams(SummarySize.new_relative(0.1), 10)
process_and_report(text_en, tpp, processor_baseline)
process_and_report(text_en, tpp, processor_summa)
process_and_report(text_en, tpp, processor_em)
# https://www.cicero.de/innenpolitik/grundgesetz-freiheit-demokratie-meinungsfreiheit-debattenkultur
text_de = open('cicero1.txt', encoding='utf-8').read()
print(text_de[:200])
tpp = TextProcessParams(SummarySize.new_relative(0.1), 10)
process_and_report(text_de, tpp, processor_em)
| notebooks/Cores.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bonus: Temperature Analysis I
import pandas as pd
from datetime import datetime as dt
# "tobs" is "temperature observations"
df = pd.read_csv('./Resources/hawaii_measurements.csv')
df.head()
# Convert the date column format from string to datetime
df["date"] = pd.to_datetime(df["date"])
df.dtypes
# +
# Set the date column as the DataFrame index
#df = df.set_index("date", inplace=True)
# A problem occurs when the date is set to the index this way: `inplace=True` returns None, so the assignment overwrites df with None (see the sketch below)
# -
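# The next cell is a short sketch (not part of the original starter) of why the commented-out line above fails: `set_index(..., inplace=True)` returns None, so assigning its result back to `df` would replace the DataFrame with None. Either form below works on its own; just don't combine them.
# +
df_indexed = df.set_index("date") # returns a new, date-indexed DataFrame
# df.set_index("date", inplace=True) # alternative: modifies df in place and returns None
df_indexed.head()
# -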
# Drop the date column
df.head()
# ### Compare June and December data across all years
from scipy import stats
# Filter data for desired months
june = df[df['date'].dt.month==6]
dic = df[df['date'].dt.month==12]
# Identify the average temperature for June
june['tobs'].mean()
# Identify the average temperature for December
dic['tobs'].mean()
# Create collections of temperature data
temp_june = june['tobs']
temp_dic = dic['tobs']
# Run an unpaired (independent) t-test; ttest_ind compares two independent samples
print(temp_june,temp_dic)
print(stats.ttest_ind(temp_june, temp_dic))
# ### Analysis
# The null hypothesis is rejected: the resulting p-value of 3.9025129038616655e-191 is far below the 0.05 significance threshold, so the difference between the June and December temperature means is statistically significant.
| temp_analysis_bonus_1_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Entanglement
#
# This part looks at some of the implications of quantum mechanics in 2D systems. The concept of "entanglement" that you will meet here was one of the most contentious parts of early quantum physics (Einstein famously denounced its apparent "spooky action at a distance") and is now one of the cornerstones of quantum computing.
# ## Initialisation code
#
# **You need to run all cells in this section first for anything to work.** Click "Runtime > Run all" in the top toolbar to automatically do this. You don't need to read or understand the code in this section, but feel free to ask about it if you're interested. [After running the cells, click here to jump to the exercises.](#start-here)
# +
import keyword
import functools
import numbers
import numpy as np
# %matplotlib inline
from matplotlib import pyplot
import matplotlib
matplotlib.rcParams['figure.dpi'] = 120
import IPython
import ipywidgets
# +
class Wavefunction:
"""
A wavefunction for a particle-in-an-arbitrarily-dimensioned-box. I assume
that all of the box's dimensions are the same.
"""
def __init__(self, dimensions, state_vector):
if isinstance(dimensions, str):
dimensions = (dimensions,)
if not isinstance(dimensions, (list, tuple)):
raise TypeError("'dimensions' must be a list of dimension names")
if not dimensions:
raise ValueError("can't have a 0D wavefunction")
if not all(_valid_identifier(dimension) for dimension in dimensions):
raise ValueError("all identifiers must have valid Python names")
# Sort the dimensions so that objects created out-of-order still end up
# looking the same.
self.dimensions = sorted(dimensions)
# I store each wavefunction as a vector, where each element is the
# coefficient of a particular eigenstate of the system. In real life
# the phases can be complex, but to make the plotting clearer in this
# notebook, I only use real coefficients.
self.vector = np.asarray(state_vector, dtype=np.float64)
if self.vector.ndim != len(self.dimensions):
raise ValueError("state vector shape does not match dimensions")
if 0 in self.vector.shape:
raise ValueError("a dimension has 0 length")
norm = np.linalg.norm(self.vector)
# With floating-point numbers you can't reliably use '==' because
# they're not 100% accurate, so you should compare within a tolerance.
self.is_normalised = abs(norm - 1) < 1e-10
def normalise(self):
"""Return a new Wavefunction which is normalised."""
norm = np.linalg.norm(self.vector)
if norm < 1e-10:
# norm is effectively 0; just return the 0 vector.
return Wavefunction(self.dimensions, np.zeros_like(self.vector))
# Fix the gauge, so the first element's coefficient is positive.
index = tuple(n[0] for n in np.nonzero(self.vector))
vector = self.vector * (np.sign(self.vector[index]) / norm)
return Wavefunction(self.dimensions.copy(), vector)
def __add__(self, other):
"""
Add this Wavefunction to another of the same dimension. This magic
method overloads the '+' operator.
"""
if not isinstance(other, Wavefunction):
return NotImplemented
if self.dimensions != other.dimensions:
raise ValueError("can't add wavefunctions of different dimensions")
new_shape = tuple(
np.max([self.vector.shape, other.vector.shape], axis=0)
)
new_vector = (
_pad_to_size(new_shape, self.vector)
+ _pad_to_size(new_shape, other.vector)
)
return Wavefunction(self.dimensions, new_vector)
def __sub__(self, other):
"""
Subtract another Wavefunction of the same dimension from this one.
This magic method overloads the binary '-' operator.
"""
if not isinstance(other, Wavefunction):
return NotImplemented
return self + (-other)
def __neg__(self):
"""
Magic method that finds the negative of this object. Overloads the
unary '-' operator.
"""
return -1*self
def __mul__(self, other):
"""
Multiply this object by something, with the other thing on the right.
Defined for numbers and other wavefunctions that don't share a
dimension with us. This magic method overloads the '*' operator.
"""
if isinstance(other, numbers.Real):
# Multiplication by a scalar. (Should be complex in real life, but
# we're only considering real numbers for easier visualisation.)
return Wavefunction(self.dimensions, other*self.vector)
if not isinstance(other, Wavefunction):
return NotImplemented
# In the abstract linear algebra sense this operation is actually
# called the "tensor product", not multiplication, but you'll see more
# of that in third- and fourth-year quantum information.
if set(self.dimensions) & set(other.dimensions):
raise ValueError("can't multiply wavefunctions in the same dimension")
# Because I let you create and combine vectors that aren't all in the
# same "Hilbert space" from the beginning, I have to do a little bit of
# convoluted logic here to make sure that the separate dimensions are
# in the same order, no matter how you made this object.
dimensions = self.dimensions + other.dimensions
order = np.argsort(dimensions)
vector = np.tensordot(self.vector, other.vector, axes=0).transpose(order)
return Wavefunction(dimensions, vector)
def __rmul__(self, other):
"""
Multiply ourselves by something, with ourselves as the right-hand operand.
Normally in quantum mechanics, multiplication is not commutative, but
with how I have defined things here, it's always safe to do so.
"""
return self * other
def __truediv__(self, other):
"""
Divide this object by a real number. Magic method overrides '/'.
"""
if not isinstance(other, numbers.Real):
return NotImplemented
return self * (1 / other)
def __call__(self, **dimensions):
"""
Magic method that overloads the "function-call" syntax in Python. You
access this by doing `u(x=0, y=1)` on a `Wavefunction` `u`.
This returns the value of the wavefunction evaluated at the given
values for the dimensions. If you pass a list or array to any of the
dimensions, you get back a full mesh of values evaluated at every
combination of the dimensions.
"""
if set(dimensions) != set(self.dimensions):
raise TypeError("must supply a value for each dimension")
scalar = all(np.isscalar(x) for x in dimensions.values())
dimensions = [
np.atleast_1d(dimensions[dimension])
for dimension in self.dimensions
]
out = np.zeros(
tuple(dimension.size for dimension in dimensions),
dtype=np.float64,
)
wavenumber_scale = np.pi / BOX_WIDTH
for ns in zip(*np.nonzero(self.vector)):
ks = (np.array(ns) + 1) * wavenumber_scale
# I recalculate the mesh each time because this minimises the
# number of calls to the expensive `sin`, and `meshgrid` creates
# very efficient arrays that don't copy as much data as you'd
# expect.
separate = np.meshgrid(*[
np.sin(k*dim) for k, dim in zip(ks, dimensions)
], indexing='ij')
out += self.vector[ns] * np.prod(separate, axis=0)
# Apply the normalisation of the eigenstates.
out *= np.sqrt(2 / BOX_WIDTH)**len(self.dimensions)
return out.flat[0] if scalar else out
def measure(self, **dimensions):
"""
Find the new wavefunction if a measurement is performed with a
given result (e.g. `u.measure(x=0.4)`).
"""
if set(dimensions) - set(self.dimensions):
raise ValueError("unknown dimensions in measurement")
if not (set(self.dimensions) - set(dimensions)):
raise ValueError("measuring all dimensions fully localises the wavefunction")
measure_i, keep_i = [], []
for i, dimension in enumerate(self.dimensions):
(measure_i if dimension in dimensions else keep_i).append(i)
shape = tuple(self.vector.shape[i] for i in keep_i)
vector = np.zeros(shape, dtype=np.float64)
k_scale = np.pi / BOX_WIDTH
for ns in zip(*np.nonzero(self.vector)):
out_i = tuple(ns[i] for i in keep_i)
sin_part = np.prod([
np.sin((ns[i]+1)*k_scale*dimensions[self.dimensions[i]])
for i in measure_i
])
vector[out_i] += self.vector[ns] * sin_part
return Wavefunction([self.dimensions[i] for i in keep_i], vector)
def __repr__(self):
"""
Magic method that is used to get the somewhat machine-readable
representation of the object in a normal Python session.
"""
return "".join([
self.__class__.__name__, "(",
repr(self.dimensions),
", ",
repr(self.vector),
")"
])
def _repr_latex_(self):
"""
Magic method that Jupyter/IPython uses to get a LaTeX representation of
the object.
"""
parts = []
for ns in zip(*np.nonzero(self.vector)):
basis = "".join(
rf"\lvert {{{n + 1}}}_{{{dimension}}}\rangle"
for dimension, n in zip(self.dimensions, ns)
)
coeff = self.vector[ns]
parts.append(f"{coeff:+.2f}{basis}")
if not parts:
return r"$$\text{[zero vector]}$$"
if parts[0].startswith("+"):
parts[0] = parts[0][1:]
return r"$$" + " ".join(parts) + r"$$"
BOX_WIDTH = 1
def _pad_to_size(shape, array):
"""
Helper function that adds zeros onto axes in an array to make it up to a
certain shape.
"""
pad_size = [(0, new - old) for new, old in zip(shape, array.shape)]
return np.pad(array, pad_size)
def _valid_identifier(name):
"""
Helper function to determine if a name is a valid Python identifier.
"""
if not isinstance(name, str):
return False
if not name.isidentifier():
return False
return not keyword.iskeyword(name)
# +
def x(n):
"""
Get the nth eigenstate of the particle-in-a-box in the x-direction. 'n'
must be a natural number.
"""
if not isinstance(n, numbers.Integral):
raise TypeError("'n' must be an integer")
if n <= 0:
raise ValueError("'n' must be a natural number")
return Wavefunction(['x'], [0]*(n - 1) + [1])
def y(n):
"""
Get the nth eigenstate of the particle-in-a-box in the y-direction. 'n'
must be a natural number.
"""
if not isinstance(n, numbers.Integral):
raise TypeError("'n' must be an integer")
if n <= 0:
raise ValueError("'n' must be a natural number")
return Wavefunction(['y'], [0]*(n - 1) + [1])
# +
def plot_wavefunction(u):
"""Plot the given 1D or 2D wavefunction."""
u = u.normalise()
IPython.display.display(u)
if len(u.dimensions) == 1:
return _plot_wavefunction_1d(u)
if len(u.dimensions) == 2:
return _plot_wavefunction_2d(u)
raise ValueError("can only plot 1D or 2D wavefunctions")
def plot_probability(u, _ylim=None):
"""
Plot the probability density function of the given 1D or 2D
wavefunction.
"""
u = u.normalise()
IPython.display.display(u)
if len(u.dimensions) == 1:
return _plot_probability_1d(u, ylim=_ylim)
if len(u.dimensions) == 2:
return _plot_probability_2d(u)
raise ValueError("can only plot 1D or 2D wavefunctions")
def plot_probability_after_measurement(state):
if len(state.dimensions) != 2:
raise ValueError("can only plot for a 2D state")
# Calculate the before-measurement bit first because it's always the same,
# and as a side-effect we can calculate some limits that will _always_ fit
# the entire plot in.
state = state.normalise()
points = [
np.linspace(0, BOX_WIDTH, _choose_points(state, i))
for i in range(len(state.dimensions))
]
values = np.abs(state(**dict(zip(state.dimensions, points))))**2
ylimits = np.min(values), np.max(values)
before_measurement = np.trapz(values, points[0], axis=0)
def interactive_bit(measurement_result):
plot_probability(
state.measure(**{state.dimensions[-1]: measurement_result}),
_ylim=ylimits,
)
pyplot.plot(
points[0], before_measurement,
dashes=(10, 10),
)
pyplot.legend(['After measurement', 'Before measurement'])
return interactive_bit
def _choose_points(u, dimension_index):
peaks = np.max(
np.nonzero(u.vector)[dimension_index],
# Set a sensible minimum value.
initial=2,
)
return 1 + 60*peaks
def _plot_wavefunction_1d(u):
points = np.linspace(0, BOX_WIDTH, _choose_points(u, 0))
values = u(**{u.dimensions[0]: points})
pyplot.plot(points, values)
pyplot.xlim((0, BOX_WIDTH))
limits = (np.min(values), np.max(values))
if limits[0] == limits[1]:
limits = (-1, 1)
pyplot.ylim(limits)
pyplot.xlabel(u.dimensions[0])
pyplot.ylabel("Wavefunction")
pyplot.axhline(0, color='black', linewidth=0.7, dashes=(15, 15))
def _plot_wavefunction_2d(u):
points = [
np.linspace(0, BOX_WIDTH, _choose_points(u, i))
for i in range(len(u.dimensions))
]
values = u(**dict(zip(u.dimensions, points)))
max_abs = np.max(np.abs(values))
pyplot.pcolormesh(
points[0], points[1], values.transpose(),
# If you're colorblind and can't distinguish the positive region from
# the negative, try changing the "PiYG" to one of
# "PRGn" (purple--green),
# "BrBG" (brown--blue/green),
# "PuOR" (purple--orange),
# "RdBu" (red--blue).
cmap=matplotlib.cm.PiYG,
vmin=-max_abs,
vmax=max_abs,
shading='auto',
)
pyplot.colorbar().set_label(
"Wavefunction",
rotation=90,
)
pyplot.xlim((0, BOX_WIDTH))
pyplot.ylim((0, BOX_WIDTH))
pyplot.xlabel(u.dimensions[0])
pyplot.ylabel(u.dimensions[1])
pyplot.gca().set_aspect('equal')
def _plot_probability_1d(u, ylim=None):
points = np.linspace(0, BOX_WIDTH, _choose_points(u, 0))
values = np.abs(u(**{u.dimensions[0]: points}))**2
pyplot.plot(points, values)
pyplot.xlim((0, BOX_WIDTH))
limits = ylim if ylim is not None else (np.min(values), np.max(values))
if limits[0] == limits[1]:
limits = (-1, 1)
pyplot.ylim(limits)
pyplot.xlabel(u.dimensions[0])
pyplot.ylabel("Probability density")
def _plot_probability_2d(u):
points = [
np.linspace(0, BOX_WIDTH, _choose_points(u, i))
for i in range(len(u.dimensions))
]
values = np.abs(u(**dict(zip(u.dimensions, points))))**2
pyplot.pcolormesh(
points[0], points[1], values.transpose(),
cmap=matplotlib.cm.magma_r,
shading='auto',
)
pyplot.colorbar().set_label(
"Probability density",
rotation=90,
)
pyplot.xlim((0, BOX_WIDTH))
pyplot.ylim((0, BOX_WIDTH))
pyplot.xlabel(u.dimensions[0])
pyplot.ylabel(u.dimensions[1])
pyplot.gca().set_aspect('equal')
# -
# <a name="start-here"></a>
# ## Introduction: How to use this notebook, and the 1D particle in a box
#
# Remember the position wavefunctions of the eigenstates of the 1D particle in a box are
# $$
# \psi_n(x) = \sqrt{\frac2L} \sin(k_n x), \quad\text{where $k_n = n\pi/L$.}
# $$
#
# In this notebook, you can create a representation of the $x$-direction wavefunction $\psi_n(x)$ by calling the function `x(n)`. You can multiply and divide these by real numbers (we're ignoring complex numbers to make plotting easier) and add together states to make superpositions, such as `(x(1) + 2*x(2)) / np.sqrt(5)`. You can normalise a wavefunction `psi` by calling `psi.normalise()`.
(x(1) + 2*x(2)).normalise()
# You can plot the wavefunction by calling `plot_wavefunction(psi)`, or plot the probability density by calling `plot_probability(psi)`. These functions will automatically normalise their inputs.
plot_wavefunction(x(3))
plot_probability(x(2) - x(4))
# You can use the form `psi.measure(y=0.3)` to collapse a 2D wavefunction `psi` down to 1D based on the given measurement of position (_not_ of the energy).
state_2d = ((x(1) + x(2)) * (y(1) - y(2))).normalise()
state_2d
state_2d.measure(y=0.75).normalise()
# ## Questions: Investigating 2D wavefunctions
# We are now considering a particle in a 2D box, where the widths of both dimensions are the same. The $y$-position wavefunctions look exactly like the $x$-position ones in form. In this notebook, you can make the state with wavefunction $\psi_{x,n}(x)\psi_{y,m}(y)$ by doing `x(n) * y(m)`. You can combine these in the same manner as above, and plot them using the same functions.
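# As a quick illustration (not one of the exercises), the cell below builds a simple separable 2D state with the helpers defined above and plots its probability density; the plotting functions normalise their input automatically.
plot_probability(x(2) * y(3))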
# ### Part 1: simple 2D wavefunctions
# Use the sliders below to plot some 2D wavefunctions $\lvert\psi\rangle=\lvert n_x\rangle\lvert n_y\rangle$ with varying $n_x$ and $n_y$.
# - What do you notice about the shapes of the graphs, for example symmetries?
# - Where are you most likely to measure the particle in the $y$-direction? Does that change if you measure its $x$-position first?
ipywidgets.interact(
lambda n_x, n_y: plot_wavefunction(x(n_x) * y(n_y)),
n_x=ipywidgets.IntSlider(value=1, min=1, max=5),
n_y=ipywidgets.IntSlider(value=1, min=1, max=5),
);
# ### Part 2: "separable" states (finish by 09:20/11:20)
#
# So far, the wavefunctions we have plotted could be written in the form
# $$
# \Psi(x, y) = \psi_x(x)\psi_y(y),
# $$
# where you can separate out the $x$ and $y$ components.
#
# The code starts out with the state
# $$\lvert\psi\rangle = \frac12\Bigl(\lvert 1_x\rangle\lvert 1_y\rangle - \lvert 1_x\rangle\lvert 2_y\rangle - \lvert 2_x\rangle\lvert 1_y\rangle + \lvert 2_x\rangle\lvert 2_y\rangle\Bigr).$$
# This is actually separable, because you could also write it as
# $$\lvert\psi\rangle = \frac12\bigl(\lvert 1_x\rangle - \lvert 2_x\rangle\bigr)\bigl(\lvert 1_y\rangle - \lvert 2_y\rangle\bigr).$$
#
# - In writing, can you come up with a quantum state which _cannot_ be written in this separated form?
# - Use the code cell below to plot such a state and see what the wavefunction looks like. Compare its shape to the plots you made in the previous part.
# Change the next line to represent your non-separable state. The state
# will be normalised automatically for you when it is plotted.
state = x(1)*y(1) - x(2)*y(1) - x(1)*y(2) + x(2)*y(2)
plot_wavefunction(state)
# ### Part 3: non-separable states (finish by 09:45/11:45)
#
# Consider a particle trapped in a 2D infinite square well in a state
#
# $$
# \lvert\theta\rangle =
# \sin(\theta)\lvert 1_x\rangle \lvert 2_y\rangle
# + \cos(\theta)\lvert 2_x\rangle\lvert 1_y\rangle.
# $$
#
# The plot below is the probability distribution over $x$ and $y$. Use the slider to change the angle $\theta$ as a fraction of $\pi$.
#
# - When is this state separable and when is it not?
# - On paper, what would the instantaneous wavefunction look like if we measured $y = \frac12$? What about if we measured $y=\frac14$?
# - Does the probability distribution of $x$ measurements depend on what you measure $y$ to be? What does this imply physically?
ipywidgets.interact(
lambda angle: plot_probability(
np.sin(np.pi*angle)*x(1)*y(2) + np.cos(np.pi*angle)*x(2)*y(1)
),
angle=ipywidgets.FloatSlider(min=0, max=1, step=0.05, continuous_update=False),
);
# The following cell compares the probability density of the $x$-position of the state $\lvert\pi/4\rangle$ before and after a measurement of $y$ (you can also change it to use a separable state to see the difference).
#
# - Check that a measurement in one dimension affects the probability distribution in the other for the non-separable state.
# - Check that the same measurement on a separable state doesn't affect the other probability distribution.
# +
nonseparable = (x(1)*y(2) + x(2)*y(1)) / np.sqrt(2)
separable = (x(1) - x(2)) * (y(1) - y(2)) / 2
ipywidgets.interactive(
lambda plotter, measurement_result: plotter(measurement_result),
plotter=ipywidgets.Dropdown(
options=[
('non-separable', plot_probability_after_measurement(nonseparable)),
('separable', plot_probability_after_measurement(separable)),
],
description='State type',
),
measurement_result=ipywidgets.FloatSlider(
value=BOX_WIDTH/2,
min=0, max=BOX_WIDTH, step=BOX_WIDTH/20,
description="y_0",
continuous_update=False,
),
)
# -
# ### Part 4: two entangled particles in boxes
#
# Now consider this system with two identical particles each in the ground state of a 1D infinite square well $\lvert 1_u\rangle\lvert 1_d\rangle$ ($u$ for "up" and $d$ for "down"). There is a photon source connected to the wells by fibre via a beam splitter, such that a horizontally polarised photon ($\lvert H\rangle$) goes to the bottom well, and vertically polarised photon ($\lvert V\rangle$) goes to the top.
#
# 
#
# A horizontally polarised photon reaching the bottom well will excite the particle from $\lvert 1_d\rangle$ to $\lvert 2_d\rangle$, and similar for a vertically polarised photon in the top well.
#
# One photon in a polarisation state $\bigl(\sin\theta\lvert H\rangle + \cos\theta\lvert V\rangle\bigr)$ is sent through the fibre.
#
# - What is the state of the two wells system after the photon has been absorbed?
# - Can you write down separate wavefunctions for each particle individually? What happens to the top particle if you measure the bottom one?
| Basic_Entanglement.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Create a simple solar system model.
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple
# ## Define a planet class.
class planet():
"A planet in our solar system"
def __init__(self,semimajor,eccentricity):
self.x = np.zeros(2) #x and y position
self.v = np.zeros(2) #x and y velocity
self.a_g = np.zeros(2) #x and y acceleration
self.t = 0.0 #current time
self.dt = 0.0 #current timestep
self.a = semimajor #semimajor axis of the orbit
self.e = eccentricity #eccentricity of the orbit
self.istep = 0 #current integer timestep
self.name = " " #name for the planet
# ## Define a dictionary with some constants.
solar_system = { "M_sun":1.0, "G":39.4784176043574320}
# ## Define some functions for setting circular velocity, and acceleration.
def SolarCircularVelocity(p):
G = solar_system["G"]
M = solar_system["M_sun"]
r = (p.x[0]**2 + p.x[1]**2)**0.5
#return the circular velocity
return (G*M/r)**0.5
# ## Write a function to compute the gravitational acceleration on each planet from the Sun.
def SolarGravitationalAcceleration(p):
G = solar_system["G"]
M = solar_system["M_sun"]
r = (p.x[0]**2 + p.x[1]**2)**0.5
#acceleration in AU/yr/yr
a_grav = -1.0*G*M/r**2
#find the angle at this position
if(p.x[0]==0.0):
if(p.x[1]>0.0):
theta = 0.5*np.pi
else:
theta = 1.5*np.pi
else:
theta = np.arctan2(p.x[1],p.x[0])
#set the x and y components of the velocity
#p.a_g[0] = a_grav * np.cos(theta)
#p.a_g[1] = a_grav * np.sin(theta)
return a_grav*np.cos(theta), a_grav*np.sin(theta)
# ## Compute the timestep.
def calc_dt(p):
#integration tolerance
ETA_TIME_STEP = 0.0004
#compute timestep
eta = ETA_TIME_STEP
v = (p.v[0]**2 + p.v[1]**2)**0.5
a = (p.a_g[0]**2 + p.a_g[1]**2)**0.5
dt = eta * np.fmin(1./np.fabs(v),1./np.fabs(a)**0.5)
return dt
# ## Define the initial conditions.
def SetPlanet(p,i):
AU_in_km = 1.495979e+8 #an AU in km
#circular velocity
v_c = 0.0 #circular velocity in AU/yr
v_e = 0.0 #velocity at perihelion in AU/yr
#planet-by planet initial conditions
#Mercury
if(i==0):
#semi-major axis in AU
p.a = 57909227.0/AU_in_km
#eccentricity
p.e = 0.20563593
#name
p.name = "Mercury"
#Venus
elif(i==1):
#semi-major axis in AU
p.a = 108209475.0/AU_in_km
#eccentricity
p.e = 0.00677672
#name
p.name = "Venus"
#Earth
elif(i==2):
#semi-major axis in AU
p.a = 1.0
#eccentricity
p.e = 0.01671123
#name
p.name = "Earth"
#set remaining properties
p.t = 0.0
p.x[0] = p.a*(1.0-p.e)
p.x[1] = 0.0
#get equiv circular velocity
v_c = SolarCircularVelocity(p)
#velocity at perihelion
v_e = v_c*(1 + p.e)**0.5
#set velocity
p.v[0] = 0.0 #no x velocity at perihelion
p.v[1] = v_e #y velocity at perihelion (counter clockwise)
#calculate gravitational acceleration from Sun
p.a_g = SolarGravitationalAcceleration(p)
#set timestep
p.dt = calc_dt(p)
# ## Write leapfrog integrator.
def x_first_step(x_i,v_i,a_i,dt):
#x_1/2 = x_0 + 1/2 v_0 Delta t + 1/4 a_0 Delta t^2
return x_i + 0.5*v_i*dt + 0.25*a_i*dt**2
def v_full_step(v_i,a_ipoh,dt):
#v_i+1 = v_i + a_(i+1/2) Delta t
return v_i + a_ipoh*dt;
def x_full_step(x_ipoh,v_ip1,a_ipoh,dt):
#x_3/2 = x_1/2 + v_i+1 Delta t
return x_ipoh + v_ip1*dt;
# ## Write a function to save the data to file.
def SaveSolarSystem(p,n_planets,t,dt,istep,ndim):
#loop over the number of planets
for i in range(n_planets):
#define a filename
fname = "planet.%s.txt" % p[i].name
if(istep==0):
#create the file on the first timestep
fp = open(fname,"w")
else:
#append the file on subsequent timesteps
fp = open(fname,"a")
#compute the drifted properties of the planet
v_drift = np.zeros(ndim)
for k in range(ndim):
v_drift[k] = p[i].v[k] + 0.5*p[i].a_g[k]*p[i].dt
#write the data to file
s = "%6d\t%6.5f\t%6.5f\t%6d\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\n" % \
(istep,t,dt,p[i].istep,p[i].t,p[i].dt,p[i].x[0],p[i].x[1],v_drift[0],v_drift[1], \
p[i].a_g[0],p[i].a_g[1])
fp.write(s)
#close the file
fp.close()
# ## Write a function to evolve the solar system.
def EvolveSolarSystem(p,n_planets,t_max):
#number of spatial dimensions
ndim = 2
#define the first timestep
dt = 0.5/365.25
#define the starting time
t = 0.0
#define the starting timestep
istep = 0
#save the initial conditions
SaveSolarSystem(p,n_planets,t,dt,istep,ndim)
#begin a loop over the global timescale
while(t<t_max):
#check to see if the next step exceeds the
#maximum time. If so, take a smaller step
if(t+dt>t_max):
dt = t_max - t #limit the step to align with t_max
#evolve each planet
for i in range(n_planets):
while(p[i].t<t+dt):
#special case for istep==0
if(p[i].istep==0):
#take the first step according to a verlet scheme
for k in range(ndim):
p[i].x[k] = x_first_step(p[i].x[k],p[i].v[k],p[i].a_g[k],p[i].dt)
#update the acceleration
p[i].a_g = SolarGravitationalAcceleration(p[i])
#update the time by 1/2dt
p[i].t += 0.5*p[i].dt
#update the timestep
p[i].dt = calc_dt(p[i])
#continue with a normal step
#limit to align with the global timestep
if(p[i].t + p[i].dt > t+dt):
p[i].dt = t+dt-p[i].t
#evolve the velocity
for k in range(ndim):
p[i].v[k] = v_full_step(p[i].v[k],p[i].a_g[k],p[i].dt)
#evolve the position
for k in range(ndim):
p[i].x[k] = x_full_step(p[i].x[k],p[i].v[k],p[i].a_g[k],p[i].dt)
#update the acceleration
p[i].a_g = SolarGravitationalAcceleration(p[i])
#update by dt
p[i].t += p[i].dt
#compute the new timestep
p[i].dt = calc_dt(p[i])
#update the planet's timestep
p[i].istep+=1
#now update the global system time
t+=dt
#update the global step number
istep += 1
#output the current state
SaveSolarSystem(p,n_planets,t,dt,istep,ndim)
#print the final steps and time
print("Time t = ",t)
print("Maximum t = ",t_max)
print("Maximum number of steps = ",istep)
#end of evolution
# ## Create a routine to read in the data.
def read_twelve_arrays(fname):
fp = open(fname,"r")
f1 = fp.readlines()
n =len(f1)
a = np.zeros(n)
b = np.zeros(n)
c = np.zeros(n)
d = np.zeros(n)
f = np.zeros(n)
g = np.zeros(n)
h = np.zeros(n)
j = np.zeros(n)
k = np.zeros(n)
l = np.zeros(n)
m = np.zeros(n)
p = np.zeros(n)
for i in range(n):
a[i] = float(f1[i].split()[0])
b[i] = float(f1[i].split()[1])
c[i] = float(f1[i].split()[2])
d[i] = float(f1[i].split()[3])
f[i] = float(f1[i].split()[4])
g[i] = float(f1[i].split()[5])
h[i] = float(f1[i].split()[6])
j[i] = float(f1[i].split()[7])
k[i] = float(f1[i].split()[8])
l[i] = float(f1[i].split()[9])
m[i] = float(f1[i].split()[10])
p[i] = float(f1[i].split()[11])
return a,b,c,d,f,g,h,j,k,l,m,p
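# The next cell is a hedged alternative to the reader above, not part of the original notebook: numpy can load the same twelve whitespace-separated columns written by SaveSolarSystem in a single call.
def read_twelve_arrays_np(fname):
    #unpack=True returns one array per column, in the same order as read_twelve_arrays
    return np.loadtxt(fname, unpack=True)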
# ## Perform the integration of the solar system.
# +
#set the number of planets
n_planets = 3
#set the maximum time of the simulation
t_max = 2.0
#create empty list of planets
p = []
#set the planets
for i in range(n_planets):
#create an empty planet
ptmp = planet(0.0,0.0)
#set the planet properties
SetPlanet(ptmp,i)
#remember the planet
p.append(ptmp)
#evolve the solar system
EvolveSolarSystem(p,n_planets,t_max)
# -
# ## Read the data back in for every planet.
fname = "planet.Mercury.txt"
istepMg,tMg,dtMg,istepM,tM,dtM,xM,yM,vxM,vyM,axM,ayM = read_twelve_arrays(fname)
fname = "planet.Earth.txt"
istepEg,tEg,dtEg,istepE,tE,dtE,xE,yE,vxE,vyE,axE,ayE = read_twelve_arrays(fname)
fname = "planet.Venus.txt"
istepVg,tVg,dtVg,istepV,tV,dtV,xV,yV,vxV,vyV,axV,ayV = read_twelve_arrays(fname)
# ## Plot the data.
# +
fig = plt.figure(figsize=(7,7))
xSun = [0.0]
ySun = [0.0]
plt.plot(xSun,ySun,'o',color="0.5",label="Sun")
plt.plot(xM,yM,color="red")
plt.plot(xM[-1],yM[-1],'o',color="red",label="Mercury")
plt.plot(xV,yV,color="green")
plt.plot(xV[-1],yV[-1],'o',color="green",label="Venus")
plt.plot(xE,yE,color="blue")
plt.plot(xE[-1],yE[-1],'o',color="blue",label="Earth")
plt.xlim([-1.25,1.25])
plt.ylim([-1.25,1.25])
plt.xlabel('x [AU]')
plt.ylabel('y [AU]')
plt.gca().set_aspect('equal')
plt.legend(frameon=False,loc=2)
# -
| simple_solar_system_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of DNA-MERFISH for CTP11
#
# by <NAME>
#
# 2022.02.15
#
# analysis for dataset:
#
# \\10.245.74.158\Chromatin_NAS_0\20220215-P_brain_CTP11-1000_CTP12_from0208
#
# This data is DNA of uncleared MERFISH RNA:
# \\10.245.74.158\Chromatin_NAS_0\20220208-P_brain_M1_nonclear
#
# +
# %run "..\..\Startup_py3.py"
sys.path.append(r"..\..\..\..\Documents")
import ImageAnalysis3 as ia
# %matplotlib notebook
from ImageAnalysis3 import *
print(os.getpid())
import h5py
from ImageAnalysis3.classes import _allowed_kwds
import ast
# -
# # 1. Pre-processing info
fov_param = {'data_folder':r'\\10.245.74.158\Chromatin_NAS_1\20220307-P_brain_CTP11_from_0303',
'save_folder':r'\\mendel\Mendel_SSD4\Pu_Temp\20220307-P_brain_CTP11_from_0303',
'experiment_type': 'DNA',
'num_threads': 44,
'correction_folder':r'\\10.245.74.158\Chromatin_NAS_0\Corrections\20210621-Corrections_lumencor_from_60_to_50',
'shared_parameters':{
'single_im_size':[50,2048,2048],
'distance_zxy': [250, 108, 108],
'corr_channels':['750','647','561'],
'num_empty_frames': 0,
'num_buffer_frames':0,
'corr_hot_pixel':True,
'corr_Z_shift':False,
'corr_bleed':True,
'min_num_seeds':10,
'max_num_seeds': 20000,
'spot_seeding_th': 1000,
'normalize_intensity_local':False,
'normalize_intensity_background':False,
'corr_gaussian_highpass':False,
},
}
# ## 1.1 define required floders
# +
save_folder = fov_param['save_folder']
save_filenames = [os.path.join(save_folder, _fl) for _fl in os.listdir(save_folder)
if _fl.split(os.extsep)[-1]=='hdf5']
# extract fov_id
save_fov_ids = [int(os.path.basename(_fl).split('.hdf5')[0].split('_')[-1]) for _fl in save_filenames]
debug = False
print(f"{len(save_filenames)} fovs detected")
segmentation_folder = os.path.join(save_folder, 'Segmentation')
if not os.path.exists(segmentation_folder):
os.makedirs(segmentation_folder)
print(f"Creating segmentation_folder: {segmentation_folder}")
else:
print(f"Use segmentation_folder: {segmentation_folder}")
cand_spot_folder = os.path.join(save_folder, 'CandSpots')
if not os.path.exists(cand_spot_folder):
os.makedirs(cand_spot_folder)
print(f"Creating cand_spot_folder: {cand_spot_folder}")
else:
print(f"Use cand_spot_folder: {cand_spot_folder}")
decoder_folder = cand_spot_folder.replace('CandSpots', 'Decoder')
if debug:
_version = 0
while os.path.exists(os.path.join(decoder_folder, f'v{_version}')):
_version += 1
decoder_folder = os.path.join(decoder_folder, f'v{_version}')
if not os.path.exists(decoder_folder):
os.makedirs(decoder_folder)
print(f"Creating decoder_folder: {decoder_folder}")
else:
print(f"Use decoder_folder: {decoder_folder}")
# -
pixel_sizes = np.array(fov_param['shared_parameters']['distance_zxy'])
single_im_size = np.array(fov_param['shared_parameters']['single_im_size'])
intensity_th = np.array(fov_param['shared_parameters']['spot_seeding_th'])
# + active=""
# save_fov_ids = save_fov_ids[:10]
# save_filenames = save_filenames[:10]
# -
# # 2. Partiton spots into cells
# ## 2.1 (For DNA-only) run segmentation
# + active=""
# %matplotlib inline
# from ImageAnalysis3.figure_tools.plot_segmentation import plot_segmentation
# from ImageAnalysis3.segmentation_tools.cell import Cellpose_Segmentation_3D
#
# for _fov_id, _save_filename in zip(save_fov_ids, save_filenames):
# print(_fov_id, _save_filename)
# with h5py.File(_save_filename, "r", libver='latest') as _f:
# _fov_name = _f.attrs['fov_name']
# # load PolyT if applicable
# if 'protein' in _f:
# polyt_im = _f['protein']['ims'][0]
# else:
# polyt_im = None
# # load DAPI
# dapi_im = _f.attrs['dapi_im']
#
# segmentation_filename = os.path.join(segmentation_folder,
# os.path.basename(_save_filename).replace('.hdf5', '_Segmentation.npy') )
#
# if os.path.exists(segmentation_filename):
# print(f"directly load segmentation from file: {segmentation_filename}")
# _mask = np.load(segmentation_filename)
#
# else:
#
# visual_tools.imshow_mark_3d_v2([dapi_im])
# seg_class = Cellpose_Segmentation_3D(dapi_im, polyt_im, pixel_sizes,
# save_filename=segmentation_filename,
# )
# _mask = seg_class.run()
# seg_class.save()
# seg_class.clear()
# ## Plot
# mask_savefig = os.path.join(segmentation_folder, 'Figures',
# os.path.basename(_save_filename).replace('.hdf5', '_SegmentationMask.png'))
# if not os.path.exists(os.path.dirname(mask_savefig)):
# os.makedirs(os.path.dirname(mask_savefig))
# ax = plot_segmentation(_mask, save_filename=mask_savefig)
# -
# ## (For DNA after dense MERFISH)
# MERFISH segmentation
merfish_segmentation_folder = r'\\mendel\Mendel_SSD3\MERFISH_Analysis\20220303-P_brain_M1_nonclear_adaptors\CellPoseSegment\features'
merfish_dapi_folder = r'\\10.245.74.158\Chromatin_NAS_0\20220303-P_brain_M1_nonclear_adaptors\Segmentation_Cellpose'
if not os.path.exists(merfish_dapi_folder):
os.makedirs(merfish_dapi_folder)
# generate alignment
rna_data_folder = r'\\10.245.74.158\Chromatin_NAS_0\20220303-P_brain_M1_nonclear_adaptors'
rna_alignment_file = os.path.join(rna_data_folder, 'Alignment', '10x_positions_before.txt')
dna_alignment_file = os.path.join(fov_param['data_folder'], 'Alignment', '10x_positions_after_new.txt')
print(rna_alignment_file, '\n', dna_alignment_file)
print(os.path.exists(rna_alignment_file), os.path.exists(dna_alignment_file))
R, t = ia.correction_tools.alignment.align_manual_points(rna_alignment_file, dna_alignment_file,
save_folder=save_folder)
# ## save DAPI image for RNA
from tqdm import tqdm
rna_fds, rna_fovs = ia.io_tools.data.get_folders(rna_data_folder)
ref_fd = rna_fds[0]
overwrite_dapi = False
for _fov_id in tqdm(save_fov_ids):
_dapi_savefile = os.path.join(merfish_dapi_folder, rna_fovs[_fov_id].replace('.dax', '_Dapi.npy'))
if overwrite_dapi or not os.path.exists(_dapi_savefile):
# load
_im = ia.visual_tools.DaxReader(os.path.join(ref_fd, rna_fovs[_fov_id])).loadAll()
_dapi_im = _im[4::5]
# save
np.save(_dapi_savefile.split('.npy')[0], _dapi_im)
import multiprocessing as mp
# savefile for segmentations
_total_seg_save_file = os.path.join(segmentation_folder, 'full_segmentation.hdf5')
# required parameters
microscope_file = r'\\mendel\pu_documents\Merfish_analysis\Merfish_Analysis_Scripts\merlin_parameters\microscope\storm6_microscope.json'
Zcoords = np.arange(0,12.5,0.25) # z-coordinates of all z-planes in this experiment
seg_align_params = {}
overwrite_segmentation = False
plot_segmentation = True
# initiate locks
_manager = mp.Manager()
# savefile lock
_segmentation_savefile_lock = _manager.RLock()
_seg_align_args = []
# prepare kwargs
for _fov_id, _save_filename in zip(save_fov_ids, save_filenames):
# segmentation filename
_segmentation_filename = os.path.join(segmentation_folder,
os.path.basename(_save_filename).replace('.hdf5', '_Segmentation.npy') )
_rna_feature_filename = os.path.join(merfish_segmentation_folder, f"feature_data_{_fov_id}.hdf5")
_rna_dapi_filename = os.path.join(merfish_dapi_folder,
os.path.basename(_save_filename).replace('.hdf5', '_Dapi.npy'))
_args = (_fov_id, Zcoords, _rna_feature_filename, _rna_dapi_filename,
_save_filename, microscope_file, R,
_total_seg_save_file, True, _segmentation_savefile_lock,
seg_align_params, plot_segmentation, overwrite_segmentation, False, False, True,
)
_seg_align_args.append(_args)
print(len(_seg_align_args))
# %%time
from ImageAnalysis3.segmentation_tools.cell import _batch_align_segmentation
# Multiprocessing
print(f"- Start multiprocessing segmentation alignment", end=' ')
_start_time = time.time()
with mp.Pool(12) as _seg_pool:
# start multiprocessing
_seg_pool.starmap(_batch_align_segmentation, _seg_align_args, chunksize=1)
# close multiprocessing
_seg_pool.close()
_seg_pool.join()
_seg_pool.terminate()
print(f"finish in {time.time()-_start_time:.3f}s. ")
# + active=""
# # non-parallel version
# reload(ia.segmentation_tools.cell)
# from ImageAnalysis3.segmentation_tools.cell import _batch_align_segmentation
# for _args in _seg_align_args:
# _batch_align_segmentation(*_args)
# -
# ## 2.2 Partition DNA-MERFISH spots
from ImageAnalysis3.classes.partition_spots import Spots_Partition
from ImageAnalysis3.classes.preprocess import Spots3D
from ImageAnalysis3.figure_tools import plot_partition
import pandas as pd
# +
# %%time
from ImageAnalysis3.segmentation_tools.cell import Align_Segmentation
from ImageAnalysis3.io_tools.spots import FovCell2Spots_2_DataFrame
reload(segmentation_tools.cell)
reload(io_tools.spots)
search_radius = 3
overwrite_cand_spots = False
_partition_args = []
for _fov_id, _save_filename in zip(save_fov_ids, save_filenames):
# savename
_cand_spot_filename = os.path.join(cand_spot_folder,
os.path.basename(_save_filename).replace('.hdf5', f'_CandSpots.csv') )
# load segmentation label matrix and uids
_align_seg = Align_Segmentation('', '', _save_filename, '', np.array([]))
_align_seg._load(_total_seg_save_file)
seg_label, fovcell_2_uid = _align_seg.dna_mask, _align_seg.fovcell_2_uid
# load cand_spots_list
with h5py.File(_save_filename, "r", libver='latest') as _f:
_grp = _f['combo']
combo_spots_list = [_spots[_spots[:,0]>0] for _spots in _grp['spots'][:]]
combo_bits = _grp['ids'][:]
combo_channels = [_ch.decode() for _ch in _grp['channels'][:]]
# partition args
_args = (
_fov_id, seg_label, fovcell_2_uid, combo_spots_list, combo_bits, combo_channels,
_cand_spot_filename, search_radius, pixel_sizes,
True, False, False, True,
)
_partition_args.append(_args)
print(len(_partition_args))
# -
# %%time
from ImageAnalysis3.classes.partition_spots import _batch_partition_spots
# Multiprocessing
print(f"- Start multiprocessing spot partitioning", end=' ')
_start_time = time.time()
with mp.Pool(12) as _partition_pool:
# start multiprocessing
_partition_pool.starmap(_batch_partition_spots, _partition_args, chunksize=1)
# close multiprocessing
_partition_pool.close()
_partition_pool.join()
_partition_pool.terminate()
print(f"finish in {time.time()-_start_time:.3f}s. ")
# # 3. Decoding of DNA-MERFISH
# ## 3.1 load codebook
import pandas as pd
codebook_filename = r'\\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-11_brain\Summary_tables\CTP11-mouse-genome-1000_codebook.csv'
#
codebook_df = pd.read_csv(codebook_filename, header=0)
codebook_df
# ## 3.2 load spot files
# +
with h5py.File(save_filenames[0], "r", libver='latest') as _f:
_grp = _f['combo']
combo_channels = [_ch.decode() for _ch in _grp['channels'][:]]
combo_ids = _grp['ids'][:]
bit_2_channel = {_b:_ch for _b,_ch in zip(combo_ids, combo_channels)}
# -
# ## 3.3 test decode one cell
# +
# %%time
from tqdm import tqdm
from ImageAnalysis3.classes import decode
reload(decode)
overwrite_decoder = False
return_decoder = True
load_exist = True
pair_search_radius = 300
decode_args = []
for _fov_id, _save_filename in tqdm(zip(save_fov_ids, save_filenames)):
#print(f"Prepare decoding args for fov: {_fov_id}")
if _fov_id != 20:
continue
# load fov_df
cand_spot_filename = os.path.join(cand_spot_folder,
os.path.basename(_save_filename).replace('.hdf5', f'_CandSpots.csv') )
if os.path.isfile(cand_spot_filename):
_fov_spots_df = pd.read_csv(cand_spot_filename)
else:
continue
for _cell_id in np.unique(_fov_spots_df['cell_id']):
# get decoder filename
_decoder_filename = os.path.join(decoder_folder, f'Fov-{_fov_id}_Cell-{_cell_id}_Decoder.hdf5')
#if os.path.exists(_decoder_filename):
# continue
# get cell_df
_cell_spots_df =_fov_spots_df[_fov_spots_df['cell_id']==_cell_id]
_args = (_cell_spots_df, codebook_df, _decoder_filename,
False, True, bit_2_channel,
pixel_sizes, 2, 0.1,
pair_search_radius, -1, 1, 5, 0, -25,
True, overwrite_decoder, return_decoder, False)
# append
decode_args.append(_args)
print(len(decode_args))
# -
# test run one cell
# %matplotlib inline
reload(decode)
_cell_ind = 11
decoder = decode.batch_decode_DNA(*decode_args[_cell_ind])
region_ids = []
region_coords = []
for _g in decoder.spot_groups:
region_ids.append(_g.tuple_id)
region_coords.append(_g.centroid_spot().to_positions()[0]/1000)
region_ids = np.array(region_ids)
region_coords = np.array(region_coords)
save_figure = True
#figure_folder = os.path.join(save_folder, 'Figures_final')
figure_folder = os.path.join(decoder_folder, 'Figures_final')
if not os.path.exists(figure_folder):
print(f"Create figure_folder: {figure_folder}")
os.makedirs(figure_folder)
else:
print(f"Use figure_folder: {figure_folder}")
# +
# %matplotlib notebook
plt.style.use('dark_background')
def rotate(angle):
ax.view_init(azim=angle)
from matplotlib import animation
from matplotlib.cm import Reds, Blues, Spectral
fig = plt.figure(dpi=150)
ax = fig.add_subplot(projection='3d')
ax.set_facecolor([0,0,0,0])
ax.scatter(region_coords[:,1], region_coords[:,2], region_coords[:,0],
cmap=Spectral,
c=region_ids,
alpha=0.95,
s=1, )
ax.grid(False)
ax.xaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax.yaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax.zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
angle = 3
ani = animation.FuncAnimation(fig, rotate, frames=np.arange(0, 360, angle), interval=50)
ani.save(os.path.join(figure_folder,
os.path.basename(decoder.savefile).replace('.hdf5', '_DecodeGroups.gif')),
writer=animation.PillowWriter(fps=20))
# -
# scratch cell left incomplete in the original; as written here it selects the groups
# belonging to a single (illustrative) region id
sel_inds = np.where(np.array(region_ids) == region_ids[0])[0]
sel_inds
# +
# %matplotlib notebook
plt.style.use('dark_background')
sel_reg = 266
sel_inds = np.where((np.array(region_ids)<=sel_reg) & (np.array(region_ids)>=208))[0]
sel_inds
def rotate(angle):
ax.view_init(azim=angle)
from matplotlib import animation
from matplotlib.cm import Reds, Blues, Spectral
fig = plt.figure(dpi=150)
ax = fig.add_subplot(projection='3d')
ax.set_facecolor([0,0,0,0])
ax.scatter(region_coords[sel_inds,1], region_coords[sel_inds,2], region_coords[sel_inds,0],
cmap='bwr',
c=region_ids[sel_inds],
alpha=0.95,
s=1, )
ax.grid(False)
ax.xaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax.yaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax.zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
angle = 3
ani = animation.FuncAnimation(fig, rotate, frames=np.arange(0, 360, angle), interval=50)
ani.save(os.path.join(figure_folder,
os.path.basename(decoder.savefile).replace('.hdf5', f'_DecodeGroups_reg-{sel_reg}.gif')),
writer=animation.PillowWriter(fps=20))
# +
# %matplotlib notebook
def rotate(angle):
ax.view_init(azim=angle)
from matplotlib import animation
from matplotlib.cm import Reds, Blues, Spectral
fig = plt.figure(dpi=150)
ax = fig.add_subplot(projection='3d')
_chr_name = '4'
_zxys_list = decoder.chr_2_zxys_list[_chr_name]/1000
for _ichr, _zxys in enumerate(_zxys_list):
ax.scatter(_zxys[:,1], _zxys[:,2], _zxys[:,0],
cmap=Spectral,
c=Spectral(_ichr/(len(_zxys_list)+1)),
alpha=0.7,
s=3)
ax.plot(_zxys[:,1], _zxys[:,2], _zxys[:,0], linewidth=0.5,
alpha=0.7,
color = Spectral( _ichr/(len(_zxys_list)+1) ) )
ax.grid(False)
ax.xaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax.yaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax.zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
angle = 3
ani = animation.FuncAnimation(fig, rotate, frames=np.arange(0, 360, angle), interval=50)
ani.save(os.path.join(figure_folder,
os.path.basename(decoder.savefile).replace('.hdf5', f'_Picked_chr-{_chr_name}.gif')),
writer=animation.PillowWriter(fps=20))
plt.show()
# -
# ### decoder
# ## visualize decoded spots
# +
if not os.path.exists(decoder_folder):
os.makedirs(decoder_folder)
print(f"Creating decoder_folder: {decoder_folder}")
else:
print(f"Use decoder_folder: {decoder_folder}")
decode_figure_folder = os.path.join(decoder_folder, 'Figures')
if not os.path.exists(decode_figure_folder):
os.makedirs(decode_figure_folder)
print(f"Creating decode_figure_folder: {decode_figure_folder}")
else:
print(f"Use decode_figure_folder: {decode_figure_folder}")
# +
# %matplotlib notebook
def rotate(angle):
ax.view_init(azim=angle)
from matplotlib import animation
from matplotlib.cm import Reds, Blues, Spectral
fig = plt.figure(dpi=100)
ax = fig.add_subplot(projection='3d')
_zxys_list = decoder.chr_2_zxys_list['2']
for _ichr, _zxys in enumerate(_zxys_list):
ax.scatter(_zxys[:,1], _zxys[:,2], _zxys[:,0],
#cmap=Spectral,
color=Spectral(_ichr/(len(_zxys_list)+1)),
#c=homolog_labels,
alpha=0.7,
s=3)
ax.plot(_zxys[:,1], _zxys[:,2], _zxys[:,0], linewidth=0.5,
alpha=0.7,
color = Spectral( _ichr/(len(_zxys_list)+1) ) )
fig.show()
# -
# ## 3.4 process all
# +
# %%time
# old version
import multiprocessing as mp
print(len(decode_args))
with mp.Pool(44) as decode_pool:
decode_results = decode_pool.starmap(decode.batch_decode_DNA, decode_args, chunksize=1)
decode_pool.close()
decode_pool.join()
decode_pool.terminate()
# -
# # 5. Summarize decoder
#
# Please goto the next jupyter:
#
# Chromatin_Analysis_Scripts/Tissue_DNA-FISH/CTP12_marker-gene/20220307-PostAnalysis_CellType.ipynb
| Tissue_DNA-FISH/CTP12_marker-gene/20220307-NewPostAnalysis_DNA_after_MERFISH-mendel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PIL (Python Imaging Library) - Pillow
# The most important class in the Python Imaging Library (PIL) is the Image class, defined in the module with the same name. You can create instances of this class in several ways; either by loading images from files, processing other images, or creating images from scratch.
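# As a small, hedged illustration of the "from scratch" case mentioned above (not part of the original tutorial), `Image.new()` builds a blank image from a mode, a size, and an optional fill colour.
# +
from PIL import Image
blank = Image.new("RGB", (64, 64), color=(255, 0, 0)) # a 64x64 solid red image
print(blank.size, blank.mode)
# -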
# %matplotlib inline
import os
from IPython.core.display import HTML
def load_style(directory = '../../', name='customMac.css'):
styles = open(os.path.join(directory, name), 'r').read()
return HTML(styles)
load_style()
# ## How to import Image module
from PIL import Image
# To load an image from a file, use the `open()` function in the Image module:
im = Image.open("img/lena.ppm")
# If successful, this function returns an Image object. You can now use instance attributes to examine the file contents:
print(im.format, im.size, im.mode) # output: PPM (512, 512) RGB
# The format attribute identifies the source of an image. If the image was not read from a file, it is set to None. The size attribute is a 2-tuple containing width and height (in pixels). The mode attribute defines the number and names of the bands in the image, and also the pixel type and depth. Common modes are "L" (luminance) for greyscale images, "RGB" for true color images, and "CMYK" for pre-press images.
#
# If the file cannot be opened, an IOError exception is raised.
#
# Once you have an instance of the Image class, you can use the methods defined by this class to process and manipulate the image. For example, let's display the image we just loaded:
im.show()
# The following sections provide an overview of the different functions provided in this library.
# ## Reading and writing images
# The Python Imaging Library supports a wide variety of image file formats. To read files from disk, use the open() function in the Image module. You don't have to know the file format to open a file. The library automatically determines the format based on the contents of the file.
#
# To save a file, use the save() method of the Image class. When saving files, the name becomes important. Unless you specify the format, the library uses the filename extension to discover which file storage format to use.
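# A brief sketch (not part of the original text): the format can also be passed to `save()` explicitly, in which case the filename extension is not used to choose the storage format.
# +
im.save("img/lena_copy.png") # format inferred from the ".png" extension
im.save("img/lena_copy.out", format="PNG") # format given explicitly, extension ignored
# -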
# ## Loading an image
# +
from __future__ import print_function
import os, sys
from PIL import Image
img_filename = "img/lena.ppm"
im = Image.open(img_filename)
im.show()
# -
if not os.path.exists('img'):
os.makedirs('img')
#%% Loading an image
cwd = os.getcwd()
img = Image.open(cwd+'/img/lena.ppm')
img.show()
# ## Saving an image in other format
img.save('img/lena.jpg')
# ## Splitting and merging bands
r, g, b = img.split()
r.show()
r.save('img/lenaRedChannel.jpg')
img = Image.merge("RGB", (r, g, b))
img.show()
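# As a further illustration of merge(), the bands can be recombined in a different order,
# for example swapping the red and blue channels (a small sketch, not in the original):
swapped = Image.merge("RGB", (b, g, r))
swapped.show()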
# +
| Pillow/PillowTutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Automate Model Retraining & Deployment Using the AWS Step Functions Data Science SDK
#
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Create Resources](#Create-Resources)
# 1. [Build a Machine Learning Workflow](#Build-a-Machine-Learning-Workflow)
# 1. [Run the Workflow](#Run-the-Workflow)
# 1. [Clean Up](#Clean-Up)
# ## Introduction
#
# This notebook describes how to use the AWS Step Functions Data Science SDK to create a machine learning model retraining workflow. The Step Functions SDK is an open source library that allows data scientists to easily create and execute machine learning workflows using AWS Step Functions and Amazon SageMaker. For more information, please see the following resources:
# * [AWS Step Functions](https://aws.amazon.com/step-functions/)
# * [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)
# * [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)
#
# In this notebook, we will use the SDK to create steps that capture and transform data using AWS Glue, incorporate this data into the training of a machine learning model, deploy the model to a SageMaker endpoint, link these steps together to create a workflow, and then execute the workflow in AWS Step Functions.
# ## Setup
#
# First, we'll need to install and load all the required modules. Then we'll create fine-grained IAM roles for the Lambda, Glue, and Step Functions resources that we will create. The IAM roles grant the services permissions within your AWS environment.
import sys
# !{sys.executable} -m pip install --upgrade stepfunctions
# ### Import the Required Modules
# +
import uuid
import logging
import stepfunctions
import boto3
import sagemaker
from sagemaker import image_uris
from sagemaker.inputs import TrainingInput
from sagemaker.s3 import S3Uploader
from stepfunctions import steps
from stepfunctions.steps import TrainingStep, ModelStep
from stepfunctions.inputs import ExecutionInput
from stepfunctions.workflow import Workflow
session = sagemaker.Session()
stepfunctions.set_stream_logger(level=logging.INFO)
region = boto3.Session().region_name
bucket = session.default_bucket()
id = uuid.uuid4().hex
#Create a unique name for the AWS Glue job to be created. If you change the
#default name, you may need to change the Step Functions execution role.
job_name = 'glue-customer-churn-etl-{}'.format(id)
#Create a unique name for the AWS Lambda function to be created. If you change
#the default name, you may need to change the Step Functions execution role.
function_name = 'query-training-status-{}'.format(id)
# -
# Next, we'll create fine-grained IAM roles for the Lambda, Glue, and Step Functions resources. The IAM roles grant the services permissions within your AWS environment.
#
# ### Add permissions to your notebook role in IAM
#
# The IAM role assumed by your notebook requires permission to create and run workflows in AWS Step Functions. If this notebook is running on a SageMaker notebook instance, do the following to provide IAM permissions to the notebook:
#
# 1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/).
# 2. Select **Notebook instances** and choose the name of your notebook instance.
# 3. Under **Permissions and encryption** select the role ARN to view the role on the IAM console.
# 4. Copy and save the IAM role ARN for later use.
# 5. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`.
# 6. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**.
#
# We also need to provide permissions that allow the notebook instance to create an AWS Lambda function and an AWS Glue job. We will edit the managed policy attached to our role directly to incorporate these specific permissions:
#
# 1. Under **Permissions policies** expand the AmazonSageMaker-ExecutionPolicy-******** policy and choose **Edit policy**.
# 2. Select **Add additional permissions**. Choose **IAM** for Service and **PassRole** for Actions.
# 3. Under Resources, choose **Specific**. Select **Add ARN** and enter `query_training_status-role` for **Role name with path*** and choose **Add**. You will create this role later on in this notebook.
# 4. Select **Add additional permissions** a second time. Choose **Lambda** for Service, **Write** for Access level, and **All resources** for Resources.
# 5. Select **Add additional permissions** a final time. Choose **Glue** for Service, **Write** for Access level, and **All resources** for Resources.
# 6. Choose **Review policy** and then **Save changes**.
#
# If you are running this notebook outside of SageMaker, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
# Next, let's create an execution role in IAM for Step Functions.
#
# ### Create an Execution Role for Step Functions
#
# Your Step Functions workflow requires an IAM role to interact with other services in your AWS environment.
#
# 1. Go to the [IAM console](https://console.aws.amazon.com/iam/).
# 2. Select **Roles** and then **Create role**.
# 3. Under **Choose the service that will use this role** select **Step Functions**.
# 4. Choose **Next** until you can enter a **Role name**.
# 5. Enter a name such as `AmazonSageMaker-StepFunctionsWorkflowExecutionRole` and then select **Create role**.
#
# Next, create and attach a policy to the role you created. As a best practice, the following steps will attach a policy that only provides access to the specific resources and actions needed for this solution.
#
# 1. Under the **Permissions** tab, click **Attach policies** and then **Create policy**.
# 2. Enter the following in the **JSON** tab:
#
# ```json
# {
# "Version": "2012-10-17",
# "Statement": [
# {
# "Effect": "Allow",
# "Action": "iam:PassRole",
# "Resource": "NOTEBOOK_ROLE_ARN",
# "Condition": {
# "StringEquals": {
# "iam:PassedToService": "sagemaker.amazonaws.com"
# }
# }
# },
# {
# "Effect": "Allow",
# "Action": [
# "sagemaker:CreateModel",
# "sagemaker:DeleteEndpointConfig",
# "sagemaker:DescribeTrainingJob",
# "sagemaker:CreateEndpoint",
# "sagemaker:StopTrainingJob",
# "sagemaker:CreateTrainingJob",
# "sagemaker:UpdateEndpoint",
# "sagemaker:CreateEndpointConfig",
# "sagemaker:DeleteEndpoint"
# ],
# "Resource": [
# "arn:aws:sagemaker:*:*:*"
# ]
# },
# {
# "Effect": "Allow",
# "Action": [
# "events:DescribeRule",
# "events:PutRule",
# "events:PutTargets"
# ],
# "Resource": [
# "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule"
# ]
# },
# {
# "Effect": "Allow",
# "Action": [
# "lambda:InvokeFunction"
# ],
# "Resource": [
# "arn:aws:lambda:*:*:function:query-training-status*"
# ]
# },
# {
# "Effect": "Allow",
# "Action": [
# "glue:StartJobRun",
# "glue:GetJobRun",
# "glue:BatchStopJobRun",
# "glue:GetJobRuns"
# ],
# "Resource": "arn:aws:glue:*:*:job/glue-customer-churn-etl*"
# }
# ]
# }
# ```
#
# 3. Replace **NOTEBOOK_ROLE_ARN** with the ARN for your notebook that you created in the previous step.
# 4. Choose **Review policy** and give the policy a name such as `AmazonSageMaker-StepFunctionsWorkflowExecutionPolicy`.
# 5. Choose **Create policy**.
# 6. Select **Roles** and search for your `AmazonSageMaker-StepFunctionsWorkflowExecutionRole` role.
# 7. Under the **Permissions** tab, click **Attach policies**.
# 8. Search for your newly created `AmazonSageMaker-StepFunctionsWorkflowExecutionPolicy` policy and select the check box next to it.
# 9. Choose **Attach policy**. You will then be redirected to the details page for the role.
# 10. Copy the AmazonSageMaker-StepFunctionsWorkflowExecutionRole **Role ARN** at the top of the Summary.
# ### Configure Execution Roles
# +
# paste the AmazonSageMaker-StepFunctionsWorkflowExecutionRole ARN from above
workflow_execution_role = ''
# SageMaker Execution Role
# You can use sagemaker.get_execution_role() if running inside sagemaker's notebook instance
sagemaker_execution_role = sagemaker.get_execution_role() #Replace with ARN if not in an AWS SageMaker notebook
# -
# #### Create a Glue IAM Role
# You need to create an IAM role so that you can create and execute an AWS Glue Job on your data in Amazon S3.
#
# 1. Go to the [IAM console](https://console.aws.amazon.com/iam/).
# 2. Select **Roles** and then **Create role**.
# 3. Under **Choose the service that will use this role** select **Glue**.
# 4. Choose **Next** until you can enter a **Role name**.
# 5. Enter a name such as `AWS-Glue-S3-Bucket-Access` and then select **Create role**.
#
# Next, create and attach a policy to the role you created. The following steps attach a managed policy that provides Glue access to the specific S3 bucket holding your data.
#
# 1. Under the **Permissions** tab, click **Attach policies** and then **Create policy**.
# 2. Enter the following in the **JSON** tab:
#
# ```json
# {
# "Version": "2012-10-17",
# "Statement": [
# {
# "Sid": "ListObjectsInBucket",
# "Effect": "Allow",
# "Action": ["s3:ListBucket"],
# "Resource": ["arn:aws:s3:::BUCKET-NAME"]
# },
# {
# "Sid": "AllObjectActions",
# "Effect": "Allow",
# "Action": "s3:*Object",
# "Resource": ["arn:aws:s3:::BUCKET-NAME/*"]
# }
# ]
# }
# ```
#
# 3. Run the next cell (below) to retrieve the specific **S3 bucket name** that we will grant permissions to.
session = sagemaker.Session()
bucket = session.default_bucket()
print(bucket)
# 4. Copy the output of the above cell and replace the **two occurrences** of **BUCKET-NAME** in the JSON text that you entered.
# 5. Choose **Review policy** and give the policy a name such as `S3BucketAccessPolicy`.
# 6. Choose **Create policy**.
# 7. Select **Roles**, then search for and select your `AWS-Glue-S3-Bucket-Access` role.
# 8. Under the **Permissions** tab, click **Attach policies**.
# 9. Search for your newly created `S3BucketAccessPolicy` policy and select the check box next to it.
# 10. Choose **Attach policy**. You will then be redirected to the details page for the role.
# 11. Copy the **Role ARN** at the top of the Summary tab.
# paste the AWS-Glue-S3-Bucket-Access role ARN from above
glue_role = ''
# #### Create a Lambda IAM Role
# You also need to create an IAM role so that you can create and execute an AWS Lambda function stored in Amazon S3.
#
# 1. Go to the [IAM console](https://console.aws.amazon.com/iam/).
# 2. Select **Roles** and then **Create role**.
# 3. Under **Choose the service that will use this role** select **Lambda**.
# 4. Choose **Next** until you can enter a **Role name**.
# 5. Enter a name such as `query_training_status-role` and then select **Create role**.
#
# Next, attach policies to the role you created. The following steps attach policies that provide the Lambda function basic execution permissions and read-only access to SageMaker.
#
# 1. Under the **Permissions** tab, click **Attach Policies**.
# 2. In the search box, type **SageMaker** and select **AmazonSageMakerReadOnly** from the populated list.
# 3. In the search box type **AWSLambda** and select **AWSLambdaBasicExecutionRole** from the populated list.
# 4. Choose **Attach policy**. You will then be redirected to the details page for the role.
# 5. Copy the **Role ARN** at the top of the **Summary**.
#
# paste the query_training_status-role role ARN from above
lambda_role = ''
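# Alternatively, if you used the role names suggested above, the three ARNs can be looked up
# programmatically instead of pasted by hand (a sketch; adjust the role names if you chose
# different ones):
# +
#iam_client = boto3.client('iam')
#workflow_execution_role = iam_client.get_role(RoleName='AmazonSageMaker-StepFunctionsWorkflowExecutionRole')['Role']['Arn']
#glue_role = iam_client.get_role(RoleName='AWS-Glue-S3-Bucket-Access')['Role']['Arn']
#lambda_role = iam_client.get_role(RoleName='query_training_status-role')['Role']['Arn']
# -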
# ### Prepare the Dataset
# This notebook uses the XGBoost algorithm to automate the classification of unhappy customers for telecommunication service providers. The goal is to identify customers who may cancel their service soon so that you can entice them to stay. This is known as customer churn prediction.
#
# The dataset we use is publicly available and was mentioned in the book [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by <NAME>. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets.
# +
project_name = 'ml_deploy'
data_source = S3Uploader.upload(local_path='./data/customer-churn.csv',
desired_s3_uri='s3://{}/{}'.format(bucket, project_name),
sagemaker_session=session)
train_prefix = 'train'
val_prefix = 'validation'
train_data = 's3://{}/{}/{}/'.format(bucket, project_name, train_prefix)
validation_data = 's3://{}/{}/{}/'.format(bucket, project_name, val_prefix)
# -
# ## Create Resources
# In the following steps we'll create the Glue job and Lambda function that are called from the Step Functions workflow.
# ### Create the AWS Glue Job
# +
glue_script_location = S3Uploader.upload(local_path='./code/glue_etl.py',
desired_s3_uri='s3://{}/{}'.format(bucket, project_name),
sagemaker_session=session)
glue_client = boto3.client('glue')
response = glue_client.create_job(
Name=job_name,
    Description='PySpark job to extract the data and split it into training and validation data sets',
Role=glue_role, # you can pass your existing AWS Glue role here if you have used Glue before
ExecutionProperty={
'MaxConcurrentRuns': 2
},
Command={
'Name': 'glueetl',
'ScriptLocation': glue_script_location,
'PythonVersion': '3'
},
DefaultArguments={
'--job-language': 'python'
},
GlueVersion='1.0',
WorkerType='Standard',
NumberOfWorkers=2,
Timeout=60
)
# -
# ### Create the AWS Lambda Function
# +
import zipfile
zip_name = 'query_training_status.zip'
lambda_source_code = './code/query_training_status.py'
zf = zipfile.ZipFile(zip_name, mode='w')
zf.write(lambda_source_code, arcname=lambda_source_code.split('/')[-1])
zf.close()
S3Uploader.upload(local_path=zip_name,
desired_s3_uri='s3://{}/{}'.format(bucket, project_name),
sagemaker_session=session)
# +
lambda_client = boto3.client('lambda')
response = lambda_client.create_function(
FunctionName=function_name,
Runtime='python3.7',
Role=lambda_role,
Handler='query_training_status.lambda_handler',
Code={
'S3Bucket': bucket,
'S3Key': '{}/{}'.format(project_name, zip_name)
},
    Description='Queries a SageMaker training job and returns the results.',
Timeout=15,
MemorySize=128
)
# -
# ### Configure the AWS SageMaker Estimator
# +
container = sagemaker.image_uris.retrieve('xgboost', region, '1.2-1')
xgb = sagemaker.estimator.Estimator(container,
                                    sagemaker_execution_role,
                                    instance_count=1,
                                    instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket, project_name))
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
objective='binary:logistic',
eval_metric='error',
num_round=100)
# -
#
# ## Build a Machine Learning Workflow
# You can use a state machine workflow to create a model retraining pipeline. The AWS Step Functions Data Science SDK provides several SageMaker workflow steps that you can use to construct an ML pipeline. In this tutorial you will create the following steps:
#
# * [**ETLStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/compute.html#stepfunctions.steps.compute.GlueStartJobRunStep) - Starts an AWS Glue job to extract the latest data from our source database and prepare our data.
# * [**TrainingStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep) - Creates the training step and passes the defined estimator.
# * [**ModelStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.ModelStep) - Creates a model in SageMaker using the artifacts created during the TrainingStep.
# * [**LambdaStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/compute.html#stepfunctions.steps.compute.LambdaStep) - Creates the task state step within our workflow that calls a Lambda function.
# * [**ChoiceStateStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Choice) - Creates the choice state step within our workflow.
# * [**EndpointConfigStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointConfigStep) - Creates the endpoint config step to define the new configuration for our endpoint.
# * [**EndpointStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointStep) - Creates the endpoint step to update our model endpoint.
# * [**FailStateStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Fail) - Creates fail state step within our workflow.
# SageMaker expects unique names for each job, model and endpoint.
# If these names are not unique the execution will fail.
execution_input = ExecutionInput(schema={
'TrainingJobName': str,
'GlueJobName': str,
'ModelName': str,
'EndpointName': str,
'LambdaFunctionName': str
})
# ### Create an ETL step with AWS Glue
# In the following cell, we create a Glue step that runs an AWS Glue job. The Glue job extracts the latest data from our source database, removes unnecessary columns, splits the data into training and validation sets, and saves the data to CSV format in S3. Glue is performing this extraction, transformation, and load (ETL) in a serverless fashion, so there are no compute resources to configure and manage. See the [GlueStartJobRunStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/compute.html#stepfunctions.steps.compute.GlueStartJobRunStep) Compute step in the AWS Step Functions Data Science SDK documentation.
etl_step = steps.GlueStartJobRunStep(
'Extract, Transform, Load',
parameters={"JobName": execution_input['GlueJobName'],
"Arguments":{
'--S3_SOURCE': data_source,
'--S3_DEST': 's3a://{}/{}/'.format(bucket, project_name),
'--TRAIN_KEY': train_prefix + '/',
'--VAL_KEY': val_prefix +'/'}
}
)
# ### Create a SageMaker Training Step
#
# In the following cell, we create the training step and pass the estimator we defined above. See [TrainingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep) in the AWS Step Functions Data Science SDK documentation to learn more.
training_step = steps.TrainingStep(
'Model Training',
estimator=xgb,
data={
'train': TrainingInput(train_data, content_type='text/csv'),
'validation': TrainingInput(validation_data, content_type='text/csv')
},
job_name=execution_input['TrainingJobName'],
wait_for_completion=True
)
# ### Create a Model Step
#
# In the following cell, we define a model step that will create a model in Amazon SageMaker using the artifacts created during the TrainingStep. See [ModelStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.ModelStep) in the AWS Step Functions Data Science SDK documentation to learn more.
#
# The model creation step typically follows the training step. The Step Functions SDK provides the [get_expected_model](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep.get_expected_model) method in the TrainingStep class to provide a reference for the trained model artifacts. Please note that this method is only useful when the ModelStep directly follows the TrainingStep.
model_step = steps.ModelStep(
'Save Model',
model=training_step.get_expected_model(),
model_name=execution_input['ModelName'],
result_path='$.ModelStepResults'
)
# ### Create a Lambda Step
# In the following cell, we define a lambda step that will invoke the previously created lambda function as part of our Step Function workflow. See [LambdaStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/compute.html#stepfunctions.steps.compute.LambdaStep) in the AWS Step Functions Data Science SDK documentation to learn more.
lambda_step = steps.compute.LambdaStep(
'Query Training Results',
parameters={
"FunctionName": execution_input['LambdaFunctionName'],
'Payload':{
"TrainingJobName.$": '$.TrainingJobName'
}
}
)
# ### Create a Choice State Step
# In the following cell, we create a choice step in order to build a dynamic workflow. This choice step branches based on the results of our SageMaker training step: did the training job fail, or should the model be saved and the endpoint updated? We will add specific rules to this choice step later in this notebook.
check_accuracy_step = steps.states.Choice(
'Accuracy > 90%'
)
# ### Create an Endpoint Configuration Step
# In the following cell we create an endpoint configuration step. See [EndpointConfigStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointConfigStep) in the AWS Step Functions Data Science SDK documentation to learn more.
endpoint_config_step = steps.EndpointConfigStep(
"Create Model Endpoint Config",
endpoint_config_name=execution_input['ModelName'],
model_name=execution_input['ModelName'],
initial_instance_count=1,
instance_type='ml.m4.xlarge'
)
# ### Update the Model Endpoint Step
# In the following cell, we create the Endpoint step to deploy the new model as a managed API endpoint, updating an existing SageMaker endpoint if our choice state is successful.
endpoint_step = steps.EndpointStep(
'Update Model Endpoint',
endpoint_name=execution_input['EndpointName'],
endpoint_config_name=execution_input['ModelName'],
update=False
)
# ### Create the Fail State Step
# In addition, we create a Fail step which proceeds from our choice state if the validation accuracy of our model is lower than the threshold we define. See [FailStateStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Fail) in the AWS Step Functions Data Science SDK documentation to learn more.
fail_step = steps.states.Fail(
'Model Accuracy Too Low',
comment='Validation accuracy lower than threshold'
)
# ### Add Rules to Choice State
# In the cells below, we add a threshold rule to our choice state: if the validation accuracy of our model is below 0.90, we move to the Fail State; if it is above 0.90, we proceed to the endpoint configuration and endpoint update steps. See [here](https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst) for more information on how XGBoost calculates classification error.
#
# For binary classification problems the XGBoost algorithm defines the model error as:
#
# \begin{equation*}
# \frac{incorrect\:predictions}{total\:number\:of\:predictions}
# \end{equation*}
#
# To achieve an accuracy of 90%, the error must be below 0.10.
# +
threshold_rule = steps.choice_rule.ChoiceRule.NumericLessThan(variable=lambda_step.output()['Payload']['trainingMetrics'][0]['Value'], value=.1)
check_accuracy_step.add_choice(rule=threshold_rule, next_step=endpoint_config_step)
check_accuracy_step.default_choice(next_step=fail_step)
# -
# ### Link all the Steps Together
# Finally, create your workflow definition by chaining all of the steps together that we've created. See [Chain](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.states.Chain) in the AWS Step Functions Data Science SDK documentation to learn more.
endpoint_config_step.next(endpoint_step)
workflow_definition = steps.Chain([
etl_step,
training_step,
model_step,
lambda_step,
check_accuracy_step
])
# ## Run the Workflow
# Create your workflow using the workflow definition above, and render the graph with [render_graph](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.render_graph):
workflow = Workflow(
name='MyInferenceRoutine_{}'.format(id),
definition=workflow_definition,
role=workflow_execution_role,
execution_input=execution_input
)
workflow.render_graph()
# Create the workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create):
workflow.create()
# Run the workflow with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute):
execution = workflow.execute(
inputs={
'TrainingJobName': 'regression-{}'.format(id), # Each Sagemaker Job requires a unique name,
'GlueJobName': job_name,
'ModelName': 'CustomerChurn-{}'.format(id), # Each Model requires a unique name,
'EndpointName': 'CustomerChurn', # Each Endpoint requires a unique name
'LambdaFunctionName': function_name
}
)
# Render the workflow progress with [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress). This generates a snapshot of the current state of your workflow as it executes. It is a static image, so you must run the cell again to check progress:
execution.render_progress()
# Use [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution:
execution.list_events(html=True)
# Use [list_executions](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.list_executions) to list all executions for a specific workflow:
workflow.list_executions(html=True)
# Use [list_workflows](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.list_workflows) to list all workflows in your AWS account:
Workflow.list_workflows(html=True)
# ## Clean Up
# When you are done, make sure to clean up your AWS account by deleting resources you won't be reusing. Uncomment the code below and run the cell to delete the Glue job, the Lambda function, and the Step Functions workflow.
# +
#lambda_client.delete_function(FunctionName=function_name)
#glue_client.delete_job(JobName=job_name)
#workflow.delete()
# -
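# The SageMaker endpoint and endpoint configuration created by the workflow also incur cost and
# are not removed by the cell above; a hedged sketch of deleting them as well (the names match
# the values passed to the execution input earlier in this notebook):
# +
#sagemaker_client = boto3.client('sagemaker')
#sagemaker_client.delete_endpoint(EndpointName='CustomerChurn')
#sagemaker_client.delete_endpoint_config(EndpointConfigName='CustomerChurn-{}'.format(id))
# -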
# ---
| step-functions-data-science-sdk/automate_model_retraining_workflow/automate_model_retraining_workflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import dame_flame
import numpy as np
import matplotlib.pyplot as plt
df,_ = dame_flame.utils.data.generate_binomial_decay_importance(50,50)
model = dame_flame.matching.FLAME(verbose=3, repeats=False)
model.fit(holdout_data=df)
result_flame = model.predict(df)
# +
# Get matches using DAME and FLAME
model_dame = dame_flame.matching.DAME(repeats=False)
model_dame.fit(holdout_data=df)
result_dame = model_dame.predict(df)
# replace all the '*'s with NAs so we can get a count of the NAs.
result_flame = result_flame.replace(to_replace='*', value=np.nan)
result_dame = result_dame.replace(to_replace='*', value=np.nan)
# rename columns for graph
X_columns = ["X" + col for col in result_flame.columns]
result_flame.columns = X_columns
result_dame.columns = X_columns
x = np.arange(len(result_flame.columns)) # the label locations
width = 0.35 # the width of the bars
f, ax = plt.subplots(figsize=(12,9))
rects1 = ax.bar(x - width/2, result_dame.count(axis=0), width, color="lightcoral", label = "DAME" ) #, stopping at {}% control units matched".format(percent), hatch="/")
rects2 = ax.bar(x + width/2, result_flame.count(axis=0), width, color = "darkorchid", label = "FLAME") #, stopping at {}% control units matched".format(percent), hatch = "\\")
ax.set_ylabel('Number of units matched on covariate', fontsize=16)
ax.set_xlabel('Covariate name', fontsize=16)
ax.set_title('Covariate Importance, measured by number of units matched on each covariate', fontsize=16)
ax.set_xticks(x)
ax.set_xticklabels(result_flame.columns)
ax.legend(fontsize=16)
def autolabel(rects):
"""Attach a text label above each bar in *rects*, displaying its height."""
for rect in rects:
height = rect.get_height()
ax.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset
textcoords="offset points",
ha='center', va='bottom')
autolabel(rects1)
autolabel(rects2)
f.tight_layout()
plt.savefig('interpretability.png')
# -
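# The matched groups above can also be used to estimate treatment effects; a minimal sketch,
# assuming the post-processing helpers shipped with recent dame_flame releases:
# +
ate_flame = dame_flame.utils.post_processing.ATE(matching_object=model)
ate_dame = dame_flame.utils.post_processing.ATE(matching_object=model_dame)
print("FLAME ATE:", ate_flame, "DAME ATE:", ate_dame)
# -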
| examples/interpretability.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Problem
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Record Linkage
# According to <NAME> & <NAME> in [A Theory for Record Linkage](https://amstat.tandfonline.com/doi/abs/10.1080/01621459.1969.10501049), the record linkage problem consists of:
# _"recognizing the records in two files which represent identical persons, objects or events"_
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Our context
#
# The data for the [Record Linkage Comparison Patterns](https://googleweblight.com/i?u=https://archive.ics.uci.edu/ml/datasets/record%2Blinkage%2Bcomparison%2Bpatterns&hl=pt-BR) problem are epidemiological records of cancer patients from a hospital in the state of North Rhine-Westphalia, Germany.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Data collection structure
# The data consist of the following columns
#
# 1. id_1: internal identifier of first record.
# 2. id_2: internal identifier of second record.
# 3. cmp_fname_c1: agreement of first name, first component
# 4. cmp_fname_c2: agreement of first name, second component
# 5. cmp_lname_c1: agreement of family name, first component
# 6. cmp_lname_c2: agreement of family name, second component
# 7. cmp_sex: agreement sex
# 8. cmp_bd: agreement of date of birth, day component
# 9. cmp_bm: agreement of date of birth, month component
# 10. cmp_by: agreement of date of birth, year component
# 11. cmp_plz: agreement of postal code
# 12. is_match: matching status (TRUE for matches, FALSE for non-matches)
#
#
# + slideshow={"slide_type": "slide"}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] slideshow={"slide_type": "notes"}
# #### Import the data for analysis
# To start analysing the data we first need to import it; the .csv files loaded below are the data made
# available [here](https://archive.ics.uci.edu/ml/datasets/record+linkage+comparison+patterns).
# After loading each .csv file we concatenate them into a single pandas frame.
# + slideshow={"slide_type": "slide"}
dataframe = pd.DataFrame()
for x in range(1,11):
dataframe_name = '../data/block_'+str(x)+'.csv'
new_dataframe = pd.read_csv(dataframe_name)
dataframe = pd.concat([dataframe,new_dataframe])
frame = dataframe
# -
# #### Display the frame
# Display the result of merging the frames.
# + slideshow={"slide_type": "subslide"}
frame.head()
# + slideshow={"slide_type": "skip"}
frame.info()
# + [markdown] slideshow={"slide_type": "notes"}
# Note that the numeric columns are stored as strings
# + [markdown] slideshow={"slide_type": "slide"}
# #### Fixing the data types
# + slideshow={"slide_type": "subslide"}
broken_columns = list(frame.columns[2:11])
for column in broken_columns:
frame[column] = frame[column].apply(lambda x: np.NaN if x == '?' else x)
frame[column] = frame[column].apply(lambda x: float(x) if type(x) == str else x)
# + slideshow={"slide_type": "slide"}
frame.info()
# + [markdown] slideshow={"slide_type": "slide"}
# #### Checking for null values
# + slideshow={"slide_type": "skip"}
imp_values = frame.drop(['id_1','id_2','is_match'],axis=1)
# + slideshow={"slide_type": "skip"}
melted_frame = pd.melt(imp_values.notnull())
# + slideshow={"slide_type": "subslide"}
plt.figure(figsize=(15,4))
sns.countplot(melted_frame['variable'],hue=melted_frame['value'])
plt.tight_layout()
# + [markdown] slideshow={"slide_type": "notes"}
# In the chart above, the blue bars represent the number of null values in each column
# + slideshow={"slide_type": "subslide"}
sns.heatmap(imp_values.isnull(),cbar=False,yticklabels=False)
# + [markdown] slideshow={"slide_type": "notes"}
# Above we see another chart that represents the number of null values in each column. In this case the light areas
# represent the missing values
# + [markdown] slideshow={"slide_type": "slide"}
# #### Removing empty columns
# From the charts above we can see that the cmp_fname_c2 and cmp_lname_c2 columns have many missing values, so they will be dropped before applying the methods; in the remaining columns, missing values will be filled with the column means.
# -
# Removing columns that are unnecessary for the model.
# + slideshow={"slide_type": "subslide"}
frame.drop(['id_1','id_2','cmp_fname_c2','cmp_lname_c2'],axis=1,inplace=True)
# + [markdown] slideshow={"slide_type": "slide"}
# Filling the remaining missing values in the frame with the means.
# + slideshow={"slide_type": "subslide"}
def preparer_data(frame):
frame["cmp_fname_c1"] = frame["cmp_fname_c1"].replace(np.NaN,0.000235404896421846)
frame["cmp_lname_c1"] = frame["cmp_lname_c1"].replace(np.NaN,2.68694413843136e-05)
frame["cmp_sex"] = frame["cmp_sex"].replace(np.NaN,0.5)
frame["cmp_bd"] = frame["cmp_bd"].replace(np.NaN,0.032258064516129)
frame["cmp_bm"] = frame["cmp_bm"].replace(np.NaN,0.0833333333333333)
frame["cmp_by"] = frame["cmp_by"].replace(np.NaN, 0.00943396226415094)
frame["cmp_plz"] = frame["cmp_plz"].replace(np.NaN, 0.000422654268808115)
return frame
frame = preparer_data(frame)
# -
# ## Checking for null values
frame.isnull().values.any()
# To check whether the remaining features should really be part of the model, we look at the correlation between them;
# if two features are very highly correlated, one of them should be dropped from the model, since they would essentially
# be bringing the same information to the model.
# + slideshow={"slide_type": "slide"}
plt.figure(figsize=(10,5))
sns.heatmap(frame.corr(),annot=True,cmap=sns.cm.rocket_r)
# -
# From the correlation table we can see that the features are well decoupled, so we can use them in our model.
# + slideshow={"slide_type": "skip"}
frame.head()
# + [markdown] slideshow={"slide_type": "skip"}
# After this data preparation, the features above are the ones that will go into the model.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Data distribution
# + slideshow={"slide_type": "subslide"}
frame['is_match'].value_counts()
# + [markdown] slideshow={"slide_type": "notes"}
# Note that the dataset is imbalanced: the number of observations in the `is_match = false` class is almost 274 times larger than in the `is_match = true` class.
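# + slideshow={"slide_type": "skip"}
# A quick check of the imbalance ratio quoted above (a small sketch, not part of the original):
class_counts = frame['is_match'].value_counts()
print('imbalance ratio: {:.1f}'.format(class_counts.max() / class_counts.min()))
# -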
# + [markdown] slideshow={"slide_type": "slide"}
# ## Splitting the data
# + slideshow={"slide_type": "subslide"}
X = frame.drop('is_match',axis=1)
y = frame['is_match']
# + [markdown] slideshow={"slide_type": "slide"}
# ## Balancing the data
# + slideshow={"slide_type": "subslide"}
from imblearn.under_sampling import NearMiss
from collections import Counter
# + slideshow={"slide_type": "subslide"}
nm = NearMiss(ratio='majority',version=1)
# + slideshow={"slide_type": "subslide"}
X_reshaped, y_reshaped = nm.fit_sample(X,y)
# + slideshow={"slide_type": "subslide"}
print('New data shape {}'.format(Counter(y_reshaped)))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Near Miss
# + [markdown] slideshow={"slide_type": "subslide"}
# Near Miss is an under-sampling method introduced in the paper [kNN approach to unbalanced data distributions: a case study involving information extraction](http://www.site.uottawa.ca/~nat/Workshop2003/jzhang.pdf), and it is used here through the [imbalanced-learn](http://contrib.scikit-learn.org/imbalanced-learn/stable/index.html) library. The method has three variants; here we use NearMiss-1 to balance the data.
#
# * ##### NearMiss-1
# In a setting where the observations belong to two classes, true and false, and the false observations make up the large majority of the data, NearMiss-1 computes the average distance from each false observation to the true observations and selects the false observations with the smallest values.
#
# (Figure: illustration of the NearMiss-1 selection, taken from the imbalanced-learn documentation.)
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## KNN
# + [markdown] slideshow={"slide_type": "notes"}
# KNN is a supervised machine learning method. Supervised methods are methods that need to be trained on observations that have already been labeled. Based on these observations, KNN finds the nearest neighbors of an unknown observation. From this set of neighbors, the conditional probability that the observation belongs to each class is computed. In sklearn, the class with the largest number of neighbors for that observation is assigned as the class of the unknown observation.
#
#
# + slideshow={"slide_type": "skip"}
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
# + slideshow={"slide_type": "skip"}
knn = KNeighborsClassifier()
# + slideshow={"slide_type": "skip"}
X_train, X_test, y_train, y_test = train_test_split(X_reshaped, y_reshaped, test_size=0.33, random_state=42)
# + slideshow={"slide_type": "skip"}
knn.fit(X_train,y_train)
# + slideshow={"slide_type": "skip"}
predictions = knn.predict(X_test)
# + slideshow={"slide_type": "subslide"}
print(classification_report(y_test,predictions))
# + slideshow={"slide_type": "subslide"}
print(confusion_matrix(y_test,predictions))
# + [markdown] slideshow={"slide_type": "slide"}
# ## K-fold
# + slideshow={"slide_type": "subslide"}
from sklearn.model_selection import cross_val_score
# + slideshow={"slide_type": "subslide"}
cv_result = cross_val_score(KNeighborsClassifier(),X_reshaped,y_reshaped,cv=5)
# + slideshow={"slide_type": "subslide"}
np.mean(cv_result)
# + [markdown] slideshow={"slide_type": "slide"}
# ## RandomUnderSampler
# + slideshow={"slide_type": "skip"}
from imblearn.under_sampling import RandomUnderSampler
# + slideshow={"slide_type": "skip"}
rus = RandomUnderSampler()
# + slideshow={"slide_type": "subslide"}
X_reshaped2, y_reshaped2 = rus.fit_sample(X,y)
print('New data shape {}'.format(Counter(y_reshaped2)))
# + [markdown] slideshow={"slide_type": "slide"}
# ## KNN
# + slideshow={"slide_type": "subslide"}
X_train, X_test, y_train, y_test = train_test_split(X_reshaped2, y_reshaped2, test_size=0.33, random_state=42)
knn = KNeighborsClassifier()
knn.fit(X_train,y_train)
predictions = knn.predict(X_test)
print(classification_report(y_test,predictions))
print(confusion_matrix(y_test,predictions))
# + [markdown] slideshow={"slide_type": "slide"}
# ## K-fold
# + slideshow={"slide_type": "subslide"}
cv_result2 = cross_val_score(KNeighborsClassifier(),X_reshaped2,y_reshaped2,cv=5)
# + slideshow={"slide_type": "subslide"}
np.mean(cv_result2)
# -
| k_nearest_neighbors/k_nearest_neighbors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
# +
seq_length=2
x_data_dim=4
batch_size=100
min_max_normalization_flag=True
data_dir = '../dataset'
fname = os.path.join(data_dir, 'data-02-stock_daily.csv')
df = pd.read_csv(fname)
dataset=df.copy()
ori_Y=dataset.pop("Close")
ori_X=dataset.copy()
# +
X_train, X_test, Y_train, Y_test = train_test_split(ori_X,ori_Y, test_size=0.2, shuffle=False)
X_train, X_val, Y_train, Y_val= train_test_split(X_train,Y_train, test_size=0.2, shuffle=False)
# +
## Compute the min, max, mean, and std of the data.
dataset_stats = X_train.describe()
dataset_stats = dataset_stats.transpose()
## data normalization
def min_max_norm(x):
return (x - dataset_stats['min']) / (dataset_stats['max'] - dataset_stats['min'])
def standard_norm(x):
return (x - dataset_stats['mean']) / dataset_stats['std']
if min_max_normalization_flag==True:
min_max_norm_train_data = min_max_norm(X_train)
min_max_norm_val_data = min_max_norm(X_val)
min_max_norm_test_data = min_max_norm(X_test)
data_gen_train=tf.keras.preprocessing.sequence.TimeseriesGenerator(min_max_norm_train_data.values.tolist(), Y_train.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_val=tf.keras.preprocessing.sequence.TimeseriesGenerator(min_max_norm_val_data.values.tolist(), Y_val.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_test=tf.keras.preprocessing.sequence.TimeseriesGenerator(min_max_norm_test_data.values.tolist(), Y_test.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
else:
data_gen_train = tf.keras.preprocessing.sequence.TimeseriesGenerator(X_train.values.tolist(),Y_train.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_val = tf.keras.preprocessing.sequence.TimeseriesGenerator(X_val.values.tolist(),Y_val.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_test = tf.keras.preprocessing.sequence.TimeseriesGenerator(X_test.values.tolist(),Y_test.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
# +
input_Layer = tf.keras.layers.Input(shape=(seq_length, x_data_dim))
x=tf.keras.layers.LSTM(20,activation='tanh')(input_Layer) ##LSTM
x=tf.keras.layers.Dense(20,activation='relu')(x)
x=tf.keras.layers.Dense(10,activation='relu')(x)
Out_Layer=tf.keras.layers.Dense(1,activation=None)(x)
model = tf.keras.Model(inputs=[input_Layer], outputs=[Out_Layer])
model.summary()
# +
loss_function=tf.keras.losses.mean_squared_error
optimize=tf.keras.optimizers.Adam(learning_rate=0.001)
metric=tf.keras.metrics.mean_absolute_error
model.compile(loss=loss_function,
optimizer=optimize,
metrics=[metric])
history = model.fit(
data_gen_train,
validation_data=data_gen_val,
    steps_per_epoch=len(X_train)//batch_size,
epochs=1000,
validation_freq=1,
)
print(model.evaluate(data_gen_test))
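# The fit above runs for a fixed 1000 epochs; a hedged sketch of stopping early on the
# validation loss instead (commented out, not part of the original notebook):
# +
#early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20,
#                                              restore_best_weights=True)
#history = model.fit(data_gen_train, validation_data=data_gen_val,
#                    epochs=1000, callbacks=[early_stop])
# -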
# +
test_data_X, test_data_Y=data_gen_test[0]
prediction_Y=model.predict(test_data_X).flatten()
Y_test=test_data_Y.flatten()
visual_y=[]
visual_pre_y=[]
for i in range(len(prediction_Y)):
label = Y_test[i]
prediction = prediction_Y[i]
    print("actual price: {:.3f}, predicted price: {:.3f}".format(label, prediction))
visual_y.append(label)
visual_pre_y.append(prediction)
time = range(1, len(visual_y) + 1)
plt.plot(time, visual_y, 'r', label='true')
plt.plot(time, visual_pre_y, 'b', label='prediction')
plt.title('stock prediction')
plt.xlabel('time')
plt.ylabel('value')
plt.legend()
plt.show()
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
| tensorflow/day6/practice/P_05_01_LSTM_stock_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="wNRNTbrsc53d"
# # If - elif - else statement
# + colab={} colab_type="code" id="MiPfTnBycxNa"
#W.A.P. which takes one number from 0 to 9 from the user and prints it in words. If the number is not from 0 to 9 then
#it should print that the number is outside of the range and the program should exit.
# For example:-
# input = 1
# output = one
num = {'0':'Zero','1':'One','2':'Two','3':'Three','4':'Four','5':'Five','6':'Six','7':'Seven','8':'Eight','9':'Nine'}
ent = input('Please enter a number between 0 to 9 \n')
if ent in num:
print(num[ent])
else:
print('The entered number is outside of the range')
# + colab={} colab_type="code" id="vEMNOv9zdA39"
#W.A.P. to implement a calculator where the operation to be done and two numbers are taken as input from the user:-
#Operation console should show below:-
# Please select any one operation from below:-
# * To add enter 1
# * to subtract enter 2
# * To multiply enter 3
# * To divide enter 4
# * To divide and find quotient enter 5
# * To divide and find remainder enter 6
# * To find num1 to the power of num2 enter 7
# * To Come out of the program enter 8
print('Welcome to the CALCULATOR')
num1 = int(input("Enter first number\t"))
num2 = int(input("Enter second number\t"))
print('To add enter 1')
print('To subtract enter 2')
print('To multiply enter 3')
print('To divide enter 4')
print('To divide and find quotient enter 5')
print('To divide and find remainder enter 6')
print('To find num1 to the power of num2 enter 7')
print('To come out of the program enter 8')
choice = int(input('Please enter your choice\t'))
if choice == 8:
exit()
elif choice == 1:
print('The sum of two numbers is:',num1+num2)
elif choice == 2:
print('The difference between the two numbers is:',abs(num1-num2))
elif choice == 3:
print('The product of two numbers is:',num1*num2)
elif choice == 4:
print('The division of first number by second number is:',num1/num2)
elif choice == 5:
print('The quotient is:',num1//num2)
elif choice == 6:
print('The remainder is:',num1%num2)
elif choice == 7:
print(f'{num1} to the power {num2} is:',num1**num2)
else:
print('Invalid input')
# + colab={} colab_type="code" id="1e2gwYLqdC1s"
#W.A.P. to check whether a year entered by the user is a leap year or not.
#Check with below input:-
#leap year:- 2012, 1968, 2004, 1200, 1600,2400
#Non-leap year:- 1971, 2006, 1700,1800,1900
year = int(input("Enter an year\t"))
if (year%4 == 0 and year%100 != 0) or (year%100 == 0 and year%400 == 0):
print(f'{year} is a leap year')
else:
print(f'{year} is not a leap year')
# + colab={} colab_type="code" id="_tpXv1EtdEre"
#W.A.P. which takes one number from the user and checks whether it is an even or odd number. If it is even then it prints
#that the number is an even number, else it prints that the number is an odd number.
num = int(input('Enter a number'))
if num%2 == 0:
print(f'{num} is an even number')
else:
print(f'{num} is an odd number')
# + colab={} colab_type="code" id="Q1HumymCdG6i"
#W A P which takes two numbers from the user and prints below output:-
# 1. num1 is greater than num2 if num1 is greater than num2
# 2. num1 is smaller than num2 if num1 is smaller than num2
# 3. num1 is equal to num2 if num1 and num2 are equal
#Note:- 1. Do this problem using if - else
# 2. Do this using ternary operator
num1 = int(input('Enter first number\t'))
num2 = int(input('Enter second number\t'))
# if num1>num2:
# print(f'{num1} is greater than {num2}')
# if num1<num2:
# print(f'{num1} is smaller than {num2}')
# if num1==num2:
# print(f'{num1} and {num2} are equal')
print(f'{num1} is greater than {num2}') if num1>num2 else print(f'{num1} is smaller than {num2}') if num1<num2 else print(f'{num1} and {num2} are equal')
# + colab={} colab_type="code" id="LR6TVB-DdOPf"
#W A P which takes three numbers from the user and prints below output:-
# 1. num1 is greater than num2 and num3 if num1 is greater than num2 and num3
# 2. num2 is greater than num1 and num3 if num2 is greater than num1 and num3
# 3. num3 is greater than num1 and num2 if num3 is greater than num1 and num2
#Note:- 1. Do this problem using if - elif - else
# 2. Do this using ternary operator
# a = a if a>b else b
# expr if cond1 else expr2 if cond2 else expr3
num1 = int(input('Enter first number\t'))
num2 = int(input('Enter second number\t'))
num3 = int(input('Enter third number\t'))
if num1 == num2 or num2 == num3 or num1 == num3:
print("Invalid input")
exit()
else:
print(f'{num1} is greater than {num2} and {num3}') if num1>num2 and num1>num3 else print(f'{num2} is greater than {num1} and {num3}') if num2>num3 else print(f'{num3} is greater than {num1} and {num2}')
# if num1>num2 and num1>num3:
# print(f'{num1} is greater than {num2} and {num3}')
# elif num2>num3:
# print(f'{num2} is greater than {num1} and {num3}')
# else:
# print(f'{num3} is greater than {num1} and {num2}')
# + [markdown] colab_type="text" id="IoOLQMUGdSox"
# # Loops - for loop, while loop
# + colab={} colab_type="code" id="9K23Uld8dQfA"
#Write a Python program to find the length of the my_str using loop:-
#Input:- 'Write a Python program to find the length of the my_str'
#Output:- 55
s = 'Write a Python program to find the length of the my_str'
l = 0
for x in s:
l+=1
print("The length of the string is:",l)
# + colab={} colab_type="code" id="bp6AcqTsdYxy"
#Write a Python program to find the total number of times the letter 'p' appears in the below string using a loop:-
#Input:- '<NAME> picked a peck of pickled peppers.\n'
#Output:- 9
s = '<NAME> picked a peck of pickled peppers.\n'
l = 0
for x in s:
if x=='p':
l+=1
print("number of times p is used in the string is:",l)
# + colab={} colab_type="code" id="xvxSBhTJdav1"
#Write a Python program to print all the indexes of all occurrences of the letter 'p' in the string using a loop:-
#Input:- '<NAME> picked a peck of pickled peppers.'
#Output:-
# 0
# 6
# 8
# 12
# 21
# 29
# 37
# 39
# 40
s = '<NAME> picked a peck of pickled peppers.'
l=0
for x in s:
if x=='p':
print(l)
l+=1
# + colab={} colab_type="code" id="79L_feMadbmw"
#Write a python program to find below output using loop:-
#Input:- '<NAME> picked a peck of pickled peppers.'
#Output:- ['peter', 'piper', 'picked', 'a', 'peck', 'of', 'pickled', 'peppers']
s = '<NAME> picked a peck of pickled peppers.'
ch = ''
lst = []
for x in s:
if x != ' ' and x != '.':
ch += x
else:
lst.append(ch)
ch = ''
print(lst)
# + colab={} colab_type="code" id="vzs5AJ53deVS"
#Write a python program to find below output using loop:-
#Input:- '<NAME> picked a peck of pickled peppers.'
#Output:- 'peppers pickled of peck a picked <NAME>'
s = '<NAME> picked a peck of pickled peppers.'
ch = ''
lst = []
for x in s:
if x != ' ' and x != '.':
ch += x
elif x==' ' or x=='.':
lst.append(ch)
ch = ''
print(lst)
# print(lst[::-1])
lst2 = []
for i in range(1,len(lst)+1):
lst2.append(lst[-i])
print(lst2)
# + colab={} colab_type="code" id="i-HP7DrCdhwS"
#Write a python program to find below output using loop:-
#Input:- '<NAME> picked a peck of pickled peppers.'
#Output:- '.sreppep delkcip fo kcep a dekcip repip retep'
s = '<NAME> picked a peck of pickled peppers.'
s1 = ''
for i in range (1,len(s)+1):
s1 += s[-i]
print(s1)
# + colab={} colab_type="code" id="3rt6p0ytdkq0"
#Write a python program to find below output using loop:-
#Input:- '<NAME> picked a peck of pickled peppers.'
#Output:- 'retep repip dekcip a kcep fo delkcip sreppep'
s = '<NAME> picked a peck of pickled peppers.'
s1 = ''
for i in range (1,len(s)+1):
if s[-i] == '.':
continue
else:
s1 += s[-i]
print(s1)
ch = ''
lst = []
for x in s1:
if x == ' ' or x == '.':
lst.append(ch)
ch = ''
else:
ch+=x
lst.append(ch)
print(lst)
# + colab={} colab_type="code" id="HdlAWgT2dnKO"
#Write a python program to find below output using loop:-
#Input:- '<NAME> picked a peck of pickled peppers.'
#Output:- '<NAME> Picked A Peck Of Pickled Peppers'
s = '<NAME> picked a peck of pickled peppers.'
lst = []
ch = ''
for x in s:
if x == ' ' or x == '.':
ch = chr(ord(ch[0])-32) + ch[1:]
lst.append(ch)
ch = ''
else:
ch += x
print(' '.join(lst))
# + colab={} colab_type="code" id="OthUuacodrNl"
#Write a python program to find below output using loop:-
#Input:- '<NAME> Picked A Peck Of Pickled Peppers.'
#Output:- '<NAME> picked a peck of pickled peppers'
s = '<NAME> picked a peck of pickled peppers.'
s1 = ''
ch = chr(ord(s[0])-32)
s1 += ch
for i in range (1,len(s)):
s1 += s[i]
print(s1)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="scmDJE-ldsI9" outputId="4b1d1979-9f4a-4bc6-c505-b88b1bbd4d5d"
#Write a python program to implement index method using loop. If sub_str is found in my_str then it will print the index
# of first occurrence of first character of matching string in my_str:-
#Input:- my_str = '<NAME> Picked A Peck Of Pickled Peppers.', sub_str = 'Pickl'
#Output:- 29
my_str = '<NAME> Picked A Peck Of Pickled Peppers.'
sub_str = 'Pickl'
l2 = len(sub_str)
for i in range(len(my_str)):
if my_str[i] == sub_str[0]:
if my_str[i:(i+l2)] == sub_str:
print(i)
else:
continue
# + colab={} colab_type="code" id="Kq_fwdb_dvYE"
#Write a python program to implement the replace method using a loop. If sub_str is found in my_str then it will replace the first
#occurrence of sub_str with new_str, else it will print sub_str not found:-
#Input:- my_str = '<NAME> Picked A Peck Of Pickled Peppers.', sub_str = 'Peck', new_str = 'Pack'
#Output:- '<NAME> Picked A Pack Of Pickled Peppers.'
my_str = '<NAME> Picked A Peck Of Pickled Peppers.'
sub_str = 'Peck'
new_str = 'Pack'
l2 = len(sub_str)
a = -1
for i in range(len(my_str)):
    if my_str[i:(i+l2)] == sub_str:
        a = i
        break  # stop at the first occurrence
if a == -1:
    print('sub_str not found')
else:
    s1 = my_str[0:a] + new_str + my_str[a+l2:]
    print(s1)
# + colab={} colab_type="code" id="4Qes5D0cdyd1"
#Write a python program to find below output (implements rjust and ljust) using loop:-
#Input:- '<NAME> Picked A Peck Of Pickled Peppers.', sub_str = 'Peck',
#Output:- '*********************Peck********************'
my_str = '<NAME> Picked A Peck Of Pickled Peppers.'
sub_str = 'Peck'
l2 = len(sub_str)
for i in range(len(my_str)):
if my_str[i] == sub_str[0]:
if my_str[i:(i+l2)] == sub_str:
a=i
else:
continue
s1,s2 = '*'*a,'*'*(len(my_str)-l2-a)
print(s1+sub_str+s2)
# + colab={} colab_type="code" id="DsOrb07Od0lR"
#Write a python program to find below output using loop:-
#Input:- 'This is Python class', sep = 'is',
#Output:- ['This', 'is', 'Python class']
s = 'This is Python class'
sep = ' is'
l2 = len(sep)
for i in range(len(s)):
if s[i] == sep[0]:
if s[i:(i+l2)] == sep:
a = i
else:
continue
s1 = s[:a]
s2 = s[a+l2:]
lst = [s1,sep,s2]
print(lst)
# + colab={} colab_type="code" id="jRnBVufmd2Ay"
#Write a python program which takes one input string from user and encode it in below format:-
# 1. #Input:- 'Python'
#Output:- 'R{vjqp'
# 2. #Input:- 'Python'
#Output:- 'Rwvfql'
# 3. #Input:- 'Python'
#Output:- 'R{vkfml'
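# The three encodings above are left as an exercise in the original; a minimal sketch for the
# first output format, assuming it is produced by shifting every character code by +2
# (so 'Python' becomes 'R{vjqp'):
word = input('Enter a string to encode\t')
encoded = ''
for ch in word:
    encoded += chr(ord(ch) + 2)
print(encoded)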
# +
tpl = 1,2,3,4,5
type(tpl)
# -
| Amit/Amit_Conditional_and_loop_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,py:percent
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %% [markdown]
# # Description
# %% [markdown]
# This notebook performs EDA on the crypto prices and returns.
# %% [markdown]
# # Imports
# %%
# # %load_ext autoreload
# # %autoreload 2
# # %matplotlib inline
# %%
# TODO(Grisha): move to `core/dataflow_model/notebooks` in #205.
import logging
import os
import pandas as pd
import pytz
import core.config.config_ as cconconf
import core.explore as coexplor
import core.plotting as coplotti
import helpers.hdatetime as hdateti
import helpers.hdbg as hdbg
import helpers.henv as henv
import helpers.hprint as hprint
import helpers.hs3 as hs3
import im_v2.ccxt.data.client as imvcdcli
# %%
hdbg.init_logger(verbosity=logging.INFO)
_LOG = logging.getLogger(__name__)
_LOG.info("%s", henv.get_system_signature()[0])
hprint.config_notebook()
# %% [markdown]
# # Config
# %%
def get_eda_config() -> cconconf.Config:
"""
Get config that controls EDA parameters.
"""
config = cconconf.Config()
# Load parameters.
config.add_subconfig("load")
config["load"]["aws_profile"] = "am"
config["load"]["data_dir"] = os.path.join(hs3.get_path(), "data")
# Data parameters.
config.add_subconfig("data")
config["data"]["close_price_col_name"] = "close"
config["data"]["data_type"] = "OHLCV"
config["data"]["frequency"] = "T"
# TODO(Grisha): use `hdateti.get_ET_tz()` once it is fixed.
config["data"]["timezone"] = pytz.timezone("US/Eastern")
# Statistics parameters.
config.add_subconfig("stats")
config["stats"]["z_score_boundary"] = 3
config["stats"]["z_score_window"] = "D"
return config
config = get_eda_config()
print(config)
# %% [markdown]
# # Load data
# %%
# TODO(Grisha): allow loading multiple assets/exchanges/currencies #219.
# %%
# TODO(Grisha): potentially read data from the db.
ccxt_loader = imvcdcli.CcxtCsvFileSystemClient(
data_type=config["data"]["data_type"],
root_dir=config["load"]["data_dir"],
aws_profile=config["load"]["aws_profile"],
)
ccxt_data = ccxt_loader.read_data("binance::BTC_USDT")
_LOG.info("shape=%s", ccxt_data.shape[0])
ccxt_data.head(3)
# %%
# Check the timezone info.
hdbg.dassert_eq(
ccxt_data.index.tzinfo,
config["data"]["timezone"],
)
# %%
# TODO(Grisha): change tz in `CcxtLoader` #217.
ccxt_data.index = ccxt_data.index.tz_convert(config["data"]["timezone"])
ccxt_data.index.tzinfo
# %% [markdown]
# # Select subset
# %%
ccxt_data_subset = ccxt_data[[config["data"]["close_price_col_name"]]]
ccxt_data_subset.head(3)
# %% [markdown]
# # Resample index
# %%
# TODO(Grisha): do we want to merge it with `core.pandas_helpers.resample_index`?
# The problem with `resample_index` in `pandas_helpers` is that it does not
# generate empty rows for missing timestamps.
def resample_index(index: pd.DatetimeIndex, frequency: str) -> pd.DatetimeIndex:
"""
Resample `DatetimeIndex`.
:param index: `DatetimeIndex` to resample
:param frequency: frequency from `pd.date_range()` to resample to
:return: resampled `DatetimeIndex`
"""
hdbg.dassert_isinstance(index, pd.DatetimeIndex)
min_date = index.min()
max_date = index.max()
resampled_index = pd.date_range(
start=min_date,
end=max_date,
freq=frequency,
)
return resampled_index
resampled_index = resample_index(
ccxt_data_subset.index, config["data"]["frequency"]
)
ccxt_data_reindex = ccxt_data_subset.reindex(resampled_index)
_LOG.info("shape=%s", ccxt_data_reindex.shape[0])
ccxt_data_reindex.head(3)
# %% [markdown]
# # Filter data
# %%
# TODO(Grisha): add support for filtering by exchange, currency, asset class.
# %%
# Get the inputs.
# TODO(Grisha): pass tz to `hdateti.to_datetime` once it is fixed.
lower_bound = hdateti.to_datetime("2019-01-01")
lower_bound_ET = config["data"]["timezone"].localize(lower_bound)
upper_bound = hdateti.to_datetime("2020-01-01")
upper_bound_ET = config["data"]["timezone"].localize(upper_bound)
# Filter data.
ccxt_data_filtered = coexplor.filter_by_time(
df=ccxt_data_reindex,
lower_bound=lower_bound_ET,
upper_bound=upper_bound_ET,
inclusive="left",
ts_col_name=None,
log_level=logging.INFO,
)
ccxt_data_filtered.head(3)
# %% [markdown]
# # Statistics
# %% [markdown]
# ## Plot timeseries
# %%
# TODO(Grisha): replace with a function that does the plotting.
ccxt_data_filtered[config["data"]["close_price_col_name"]].plot()
# %% [markdown]
# ## Plot timeseries distribution
# %%
# TODO(Grisha): fix the function behavior in #204.
coplotti.plot_timeseries_distribution(
ccxt_data_filtered[config["data"]["close_price_col_name"]],
datetime_types=["hour"],
)
# %% [markdown]
# ## NaN statistics
# %%
nan_stats_df = coexplor.report_zero_nan_inf_stats(ccxt_data_filtered)
nan_stats_df
# %%
# TODO(Grisha): prettify the function: add assertions, logging.
# TODO(Grisha): add support for zeros, infinities.
# TODO(Grisha): also count NaNs by exchange, currency, asset class.
def count_nans_by_period(
df: pd.DataFrame,
config: cconconf.Config,
period: str,
top_n: int = 10,
) -> pd.DataFrame:
"""
Count NaNs by period.
:param df: data
:param period: time period, e.g. "D" - to group by day
:param top_n: display top N counts
:return: table with NaN counts by period
"""
# Select only NaNs.
nan_data = df[df[config["data"]["close_price_col_name"]].isna()]
# Group by specified period.
nan_grouped = nan_data.groupby(pd.Grouper(freq=period))
# Count NaNs.
nan_grouped_counts = nan_grouped.apply(lambda x: x.isnull().sum())
nan_grouped_counts.columns = ["nan_count"]
nan_grouped_counts_sorted = nan_grouped_counts.sort_values(
by=["nan_count"], ascending=False
)
return nan_grouped_counts_sorted.head(top_n)
nan_counts = count_nans_by_period(
ccxt_data_filtered,
config,
"D",
)
nan_counts
# %% [markdown]
# ## Detect outliers
# %%
# TODO(Grisha): add support for other approaches, e.g. IQR-based approach.
def detect_outliers(df: pd.DataFrame, config: cconconf.Config) -> pd.DataFrame:
"""
Detect outliers in a rolling fashion using z-score.
If an observation has abs(z-score) > `z_score_boundary` it is considered
an outlier. To compute a `z-score` rolling mean and rolling std are used.
:param df: data
:return: outliers
"""
df_copy = df.copy()
roll = df_copy[config["data"]["close_price_col_name"]].rolling(
window=config["stats"]["z_score_window"]
)
# Compute z-score for a rolling window.
df_copy["z-score"] = (
df_copy[config["data"]["close_price_col_name"]] - roll.mean()
) / roll.std()
# Select outliers based on the z-score.
df_outliers = df_copy[
abs(df_copy["z-score"]) > config["stats"]["z_score_boundary"]
]
return df_outliers
outliers = detect_outliers(ccxt_data_filtered, config)
_LOG.info("shape=%s", outliers.shape[0])
outliers.head(3)
| research_amp/cc/notebooks/Master_crypto_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A real machine learning pipeline
#
# In this notebook we will demonstrate how scikit-learn can be used to assemble a complete machine learning pipeline to predict house prices on the Boston housing dataset. By using the [`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html)-class we can stitch together pre-processing and model fitting in a very convenient way.
#
# Using pipelines we also avoid data leakage. Probably the most common cause of data leakage is that preprocessing steps are accidentally fitted on both train and test data. If we, for instance, use standard scaling as preprocessing, such a mistake would mean that the variables are scaled using parameters calculated partly on the test set. The model then gets access to "knowledge" leaked from the test set into the training, which may give an overly optimistic evaluation (see the small sketch after this introduction).
#
# We don't want optimistic evaluations, we want honest ones so we can report honest results back to our managers.
#
# **Feel free to explore other regression models in your own cells along the way.**
#
# The scikit-learn user guide has plenty of information on different [regression](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning) and [preprocessing](https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing) methods.
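#
# As a small illustration of the leakage problem (a hedged sketch, not part of the original analysis): fitting a scaler on the full dataset lets test-set statistics influence the training features, whereas fitting it on the training data only (which is what cross-validating a `Pipeline` does for us) keeps the test set untouched.

# +
from sklearn.preprocessing import StandardScaler
import numpy as np

X_demo = np.arange(10, dtype=float).reshape(-1, 1)
X_train_demo, X_test_demo = X_demo[:7], X_demo[7:]

# Leaky: scaling parameters are computed on train + test together.
leaky = StandardScaler().fit(X_demo).transform(X_train_demo)
# Correct: scaling parameters are computed on the training part only.
clean = StandardScaler().fit(X_train_demo).transform(X_train_demo)
print(leaky.mean(), clean.mean())  # only the "clean" version is centred on the training data
# -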
# +
import pickle
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.metrics import mean_squared_error, make_scorer
from sklearn.linear_model import LinearRegression, RidgeCV
# -
# First we read our data and split out response to separate series.
# +
df = pd.read_csv('boston-housing.csv')
response = df['target']
df.drop('target', axis=1, inplace=True)
# -
# We split out 30 \% of the data to use for validating our model.
#
# We are not aware of any dependencies in our data, so we make the split randomly
# +
np.random.seed(1337)
x_train, x_test, y_train, y_test = train_test_split(df, response, test_size=.3)
# -
# Now we define a convenience-function that we will use to cross-validate our pipelines
def crossvalidated_mean_squared_error(regressor, x, y):
scoring_fn = make_scorer(mean_squared_error)
fold_scores = cross_val_score(regressor, x, y, cv=5, scoring=scoring_fn)
return fold_scores.mean()
# # 1st attempt - Linear regression
# We start simple, with a linear regression using all variables.
# +
lr_pipeline = Pipeline([
('scaler', StandardScaler()),
('model', LinearRegression())
])
mse_cv = crossvalidated_mean_squared_error(lr_pipeline, x_train, y_train)
print(f'Mean Square Error averaged across cross-validation: {mse_cv}')
# -
# # 2nd attempt - Polynomial regression
#
# To see if we can improve our results, we may include polynomial terms as well (square terms and interactions).
# +
poly_lr_pipeline = Pipeline([
('polynomial', PolynomialFeatures()),
('scaler', StandardScaler()),
('model', LinearRegression())
])
mse_cv = crossvalidated_mean_squared_error(poly_lr_pipeline, x_train, y_train)
print(f'Mean Square Error averaged across cross-validation: {mse_cv}')
# -
# Whoops. The error is twice as large when we simply add polynomial features to the linear regression.
# # 3rd attempt - Polynomial features with Ridge regression
#
# Our naive attempt at polynomial features probably failed due to overfitting. We try Ridge regression, which is a popular method to combat overfitting.
# +
poly_ridge_pipeline = Pipeline([
('polynomial', PolynomialFeatures()),
('scaler', StandardScaler()),
('model', RidgeCV(cv=3))
])
mse_cv = crossvalidated_mean_squared_error(poly_ridge_pipeline, x_train, y_train)
print(f'Mean Square Error averaged across cross-validation: {mse_cv}')
# -
# # Results
#
# According to cross-validation, our best model is to use polynomial features combined with ridge regression.
#
# We therefore fit our final model on all training data and report test-set performance.
# +
poly_ridge_pipeline.fit(x_train, y_train)
test_mean_squared_error = mean_squared_error(y_test, poly_ridge_pipeline.predict(x_test))
print(f'Test-set Mean Square Error: {test_mean_squared_error}')
# -
# We see that the test-set error is quite a bit higher than the validation error. This is common, and it is the most honest estimate you get.
#
# **Don't look at test-set error before deciding which model to use!**
# You want to know roughly how well the model will work in the future. The model is supposed to be the one you actually think works best. NOT the one that just happens to perform best on this particular train-test split.
#
# Lastly, we persist our model on disk so we can use it in the future.
with open('boston-housing_polynomial-ridge.pickle', 'wb') as f:
pickle.dump(poly_ridge_pipeline, f)
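# As a quick sanity check (an illustrative addition, not part of the original write-up), the persisted pipeline can be loaded back later and used for prediction:

# +
with open('boston-housing_polynomial-ridge.pickle', 'rb') as f:
    reloaded_pipeline = pickle.load(f)

# The reloaded pipeline predicts exactly like the in-memory one.
print(reloaded_pipeline.predict(x_test[:5]))
# -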
# # Your attempts
#
# Follow the structure of the examples above and try to implement your own pipeline.
| 01 - Boston Housing Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="k5oF-OdeJnQV"
# ### Refactor Clinic #1
#
# In this post, we're going to refactor a function that returns a start and end date, or some defaults that are deemed "sensible" in the context of the application.
#
# The function is adapted from a real-life example in the wild! We don't necessarily believe this logic actually requires a function, but let's roll with it...
#
# We'll demonstrate how to make this function cleaner, more concise and "Pythonic".
# + id="lS2XDt47FEUb"
from datetime import date, timedelta
def getStartEndDate(q_start_date, q_end_date):
""" Return start date (or default) and end date (or default) """
if q_start_date:
start_date = q_start_date
else:
start_date = str(date.today() - timedelta(days=7))
if q_end_date:
end_date = q_end_date
else:
end_date = str(date.today())
return start_date, end_date
# + [markdown] id="Zbtu4hOFF_Lq"
# This function takes in two arguments - a start date and an end date. These can be assumed to come from query parameters in a URL, as below.
#
# For example: `https://mysite.com/comments?start_date=2021-08-01&end_date=2021-08-10` would fetch all comments between the start and end date.
#
# However, these query parameters are optional - they may be set to None, therefore we might want to apply a default start and end date.
#
# We can call this function as below, and get our start and end dates.
# + colab={"base_uri": "https://localhost:8080/"} id="kByLgDXNGCKd" executionInfo={"status": "ok", "timestamp": 1630060195443, "user_tz": -60, "elapsed": 211, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08485893699705210861"}} outputId="fa97e70f-d1ac-4ae5-df92-666fe64825dd"
# call function with two dates
start_date, end_date = getStartEndDate('2020-08-29', '2020-08-31')
# call function with both dates set to None
start_date2, end_date2 = getStartEndDate(None, None)
# print out returned values
print(start_date, end_date)
print(start_date2, end_date2)
# + [markdown] id="BaNioUaGCegJ"
# This code calls the function twice - firstly, with a start and end date, and secondly with no start or end date (both set to `None`). In the first case, the dates passed in are returned, and we see these printed. In the second case, we get back the defaults, since the `start_date` and `end_date` arguments are both set to `None`.
#
# There are ways we can improve this to make it more concise, and to understand the goal of the function better through code. Let's move on to refactoring!
# + [markdown] id="7t5KB78eH92x"
# ### Refactored Function
#
# The refactoring steps we'll perform are as follows:
#
# - Change the function name - it is conventional to use underscores to separate words in Python, rather than camel-case.
# - Refactor the if/else conditions to a single-line conditional expression for both `start_date` and `end_date`.
# - We make explicit that the arguments are `None` by default, and should be strings if not `None`. We use the `Optional` construct from the typing module to provide type-hints for our arguments.
# - We make explicit that this function should return a tuple of strings.
#
# The goal is for our functionality to remain the same, but the code to be cleaner.
# + id="iKDMKZRqIAVX"
from typing import Optional
def get_start_and_end_date(
q_start_date: Optional[str] = None,
q_end_date: Optional[str] = None
) -> (str, str):
""" Return start date (or default) and end date (or default) """
last_week = date.today() - timedelta(days=7)
start_date = q_start_date if q_start_date else str(last_week)
end_date = q_end_date if q_end_date else str(date.today())
return start_date, end_date
# + [markdown] id="dYqwAfsnIc9j"
# The calling code will change, because the function name has changed.
# + colab={"base_uri": "https://localhost:8080/"} id="ONedAAAcIiaj" executionInfo={"status": "ok", "timestamp": 1630060245827, "user_tz": -60, "elapsed": 193, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08485893699705210861"}} outputId="ced86f1e-a751-4de7-9b3d-c1c294a6d381"
# call function with two dates
start_date, end_date = get_start_and_end_date('2020-08-29', '2020-08-31')
# call function with no arguments (better than passing None, now we have defaults!)
start_date2, end_date2 = get_start_and_end_date()
# print out returned values
print(start_date, end_date)
print(start_date2, end_date2)
# + [markdown] id="MJBsdUG3MCzk"
# We can actually clean this code up even more, using the `or` operator as shown below.
#
# Since the code is of the form `start_date = q_start_date if q_start_date else str(last_week)`
#
# We can use the code: `q_start_date or str(last_week)` instead.
#
# If `q_start_date` evaluates to False, then `str(last_week)` will be used.
# + id="Vfp7vf-3Kj2B"
def get_start_and_end_date(
q_start_date: Optional[str] = None,
q_end_date: Optional[str] = None
) -> (str, str):
""" Return start date (or default) and end date (or default) """
last_week = date.today() - timedelta(days=7)
start_date = q_start_date or str(last_week)
end_date = q_end_date or str(date.today())
return start_date, end_date
# + colab={"base_uri": "https://localhost:8080/"} id="G9Kd2FfvKn6i" executionInfo={"status": "ok", "timestamp": 1630060277759, "user_tz": -60, "elapsed": 243, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08485893699705210861"}} outputId="e8bb5a6e-00f9-447d-a862-0d40fce665bf"
# call function with two dates
start_date, end_date = get_start_and_end_date('2020-08-29', '2020-08-31')
# call function with no arguments (better than passing None, now we have defaults!)
start_date2, end_date2 = get_start_and_end_date()
# print out returned values
print(start_date, end_date)
print(start_date2, end_date2)
# + id="MY_vJaSFx27-"
| refactor-clinic/Refactor Clinic #1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # %load helpers.py
from pyspark.ml.classification import LogisticRegression, NaiveBayes, DecisionTreeClassifier, GBTClassifier, \
RandomForestClassifier
from pyspark.ml.feature import VectorAssembler
from pyspark.mllib.evaluation import BinaryClassificationMetrics, MulticlassMetrics
from pyspark.sql.functions import current_date, expr, datediff, to_date
from pyspark.sql.functions import length, regexp_replace
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
import re
def get_kv_pairs(row, exclusions=[]):
# get the text from the row entry
text = str(row.review_body).lower()
# create blacklist of words
blacklist = set(stopwords.words('english'))
# add explicit words
[blacklist.add(i) for i in exclusions]
# extract all words
    words = re.findall(r'\w+', text)  # the original pattern r'([^\w+])' matched single non-word characters rather than words
# for each word, send back a count of 1
# send a list of lists
return [[w, 1] for w in words if w not in blacklist]
def get_word_counts(texts, exclusions=[]):
mapped_rdd = texts.rdd.flatMap(lambda row: get_kv_pairs(row, exclusions))
counts_rdd = mapped_rdd.reduceByKey(lambda a, b: a + b).sortByKey(True, 1)
return counts_rdd.collect()
def convert_str_to_int(df, col='verified_purchase', type_='int'):
return df.select((df[col] == 'Y').cast(type_))
def get_review_age(df):
return df.select(datediff(current_date(), to_date(df['review_date'])))
def prepare_features(df):
    df = df.withColumn('exclam', length('review_body') - length(regexp_replace('review_body', r'\!', '')))
df = df.withColumn('age', datediff(current_date(), to_date(df['review_date'])))
df = df.withColumn('review_length', length(df['review_body']))
df = df.withColumn('helfulness', df['helpful_votes'] / df['total_votes'])
df = df.withColumn('label', expr("CAST(verified_purchase='Y' As INT)"))
select_cols = df.select(['star_rating', 'helfulness', 'age', 'review_length', 'label']).na.fill(0)
return select_cols
def split_data(df, rate=.9):
training = df.sampleBy("label", fractions={0: rate, 1: rate}, seed=12)
return training, df.subtract(training)
def get_auc_roc(classifier, training, test):
model = classifier.fit(training)
out = model.transform(test) \
.select("prediction", "label") \
.rdd.map(lambda x: (float(x[0]), float(x[1])))
metrics = MulticlassMetrics(out)
# print("Model: {1}. Area under ROC: {0:2f}".format(metrics.areaUnderROC, clf.__class__))
return model, out, metrics
def get_vectorized_features(df, cols=['star_rating']):
va = VectorAssembler().setInputCols(cols).setOutputCol(
'features')
return va.transform(df)
# +
# # %load environment.py
from pyspark.ml.classification import LogisticRegression, NaiveBayes, DecisionTreeClassifier, GBTClassifier, \
RandomForestClassifier
import pyspark as ps
from pyspark.sql.types import StructField, StructType, StringType, IntegerType
DATA_FILE = '../../amazon_reviews_us_Camera_v1_00.tsv.gz'
APP_NAME = 'Prediction'
FEATURES = ['star_rating', 'review_body', 'helpful_votes', 'total_votes', 'verified_purchase', 'review_date']
SAMPLE_SIZE = 10000
review_schema = StructType(
[StructField('marketplace', StringType(), True),
StructField('customer_id', StringType(), True),
StructField('review_id', StringType(), True),
StructField('product_id', StringType(), True),
StructField('product_parent', StringType(), True),
StructField('product_title', StringType(), True),
StructField('product_category', StringType(), True),
StructField('star_rating', IntegerType(), True),
StructField('helpful_votes', IntegerType(), True),
StructField('total_votes', IntegerType(), True),
StructField('vine', StringType(), True),
StructField('verified_purchase', StringType(), True),
StructField('review_headline', StringType(), True),
StructField('review_body', StringType(), True),
StructField('review_date', StringType(), True)])
spark = (ps.sql.SparkSession.builder
.master("local[1]")
.appName(APP_NAME)
.getOrCreate()
)
sc = spark.sparkContext
df = spark.read.format("csv") \
.option("header", "true") \
.option("sep", "\t") \
.schema(review_schema) \
.load(DATA_FILE)
review_all = df.select(FEATURES)
review_sample = df.select(FEATURES).limit(SAMPLE_SIZE).cache()
# -
classifiers = [LogisticRegression(), NaiveBayes(), DecisionTreeClassifier(), RandomForestClassifier(),
GBTClassifier()]
results = []
# # 10000 sample dataset
select_cols = prepare_features(review_sample)
features = get_vectorized_features(select_cols, cols=['star_rating', 'helfulness', 'age', 'review_length'])
training = features.sampleBy("label", fractions={0: 0.92, 1: 0.08}, seed=12)
training.groupBy("label").count().orderBy("label").show()
test = features.subtract(training)
for clf in classifiers:
model, out, metrics = get_auc_roc(clf, training, test)
results.append([model, out, metrics])
for m,o,metrics in results[-5:]:
print("Weighted recall = %s" % metrics.weightedRecall)
print("Weighted precision = %s" % metrics.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics.weightedFalsePositiveRate)
m,o,metrics = results[1]
ys=list(zip(*o.collect()))
# +
import scikitplot as skplt
import matplotlib.pyplot as plt
y_true = ys[0]# ground truth labels
y_probas = ys[1]# predicted probabilities generated by sklearn classifier
skplt.metrics.plot_roc_curve(y_true, y_probas)
plt.show()
# -
# # entire dataset
select_cols = prepare_features(review_all)
features = get_vectorized_features(select_cols, cols=['star_rating', 'helfulness', 'age', 'review_length'])
features.groupBy("label").count().orderBy("label").show()
training = features.sampleBy("label", fractions={0: 0.80, 1: 0.165}, seed=24)
training.groupBy("label").count().orderBy("label").show()
test = features.subtract(training)
for clf in classifiers:
model, out, metrics = get_auc_roc(clf, training, test)
results.append([model, out, metrics])
df.columns
| Capstone 2/src/prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Understanding Dataset Structure
#
# [▲ Overview](0.0-Overview.ipynb)
#
# [▶ Loading and Decoding Dataset](2.0-Loading-dataset.ipynb)
#
# The dataset obtained from the [Australian Bureau of Statistics](http://stat.data.abs.gov.au) is provided as a [SDMX structure](https://sdmx.org/) in JSON. In this exercise, only a minimal subset of SDMX is required. In particular, multidimensional data is not needed.
import json
from australian_housing import paths
with open(paths.manager.raw_data_file) as f:
data = json.load(f)
data.keys()
# `header` shows some metadata and does not need to be parsed. `dataSets` contains the actual data and `structure` explains what dimensions are to be expected. While `dataSets` may contain multiple enties, here only a single one is required.
data['header']
# `structure.attributes.observation` could be used for generic parsing and understanding data types. For this exercise only a single datetime column exists which can be identified by its id.
data['structure']['attributes']['observation']
# `structure.dimensions.observation` contains encoding information for the dimensions of the dataset. This is a key information which has be used for parsing!
observations = data['structure']['dimensions']['observation']
observations
[len(obs['values']) for obs in observations]
# The whole dataset contains 2920 entries of which only 73 will be relevant for this exercise. There is only a single entry in `dataSet`.
len(data['dataSets'][0]['observations'])
len(data['dataSets'])
# `dataSets.0.observations` is an object where all data dimensions are encoded in the key as colon-separated integers. The values of this object are lists of which only the first entry contains any information (the actual measure).
#
# For decoding the dataset, the keys in `dataSets.0.observations` need to be split by `:` and then decoded using the meta information in `structure.dimensions.observation`. This is implemented in `australian_housing.data.extract_dataframe.AustralianHousingLoader`.
data['dataSets'][0]['observations']
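# A minimal decoding sketch along these lines (illustrative only; the exact shape of each dimension value entry, e.g. an `id`/`name` field, is an assumption about this SDMX-JSON payload rather than something verified here):
dims = data['structure']['dimensions']['observation']
decoded = []
for key, value in data['dataSets'][0]['observations'].items():
    idxs = [int(i) for i in key.split(':')]  # keys encode one index per dimension
    row = {dim['id']: dim['values'][idx].get('name') for dim, idx in zip(dims, idxs)}
    row['value'] = value[0]  # only the first list entry carries the actual measure
    decoded.append(row)
decoded[:2]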
# [▲ Overview](0.0-Overview.ipynb)
#
# [▶ Loading and Decoding Dataset](2.0-Loading-dataset.ipynb)
| notebooks/1.1-Exploring-dataset-structure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parameter Graph Search Query Examples
# This tutorial goes over the `NsearchgoeQuery.py` and `MGsearchthroughPG.py` queries using a simple testing network. The query `NsearchgoeQuery.py` can be used to find edges in the parameter graph where the starting and ending nodes have the desired fixed points. The `MGsearchthroughPG.py` query then uses these edges to find paths through the parameter graph where each node in the path has the desired fixed point(s).
# +
import DSGRN
from DSGRN import *
import sys
sys.path.insert(0,'/home/elizabeth/Desktop/GIT/dsgrn_acdc/src')
from PGDraw import *
from MGsearchthroughPG import *
from NsearchgoeQuery import *
# -
database = Database("/home/elizabeth/Desktop/ACDC/TFP_test.db")
network = Network("/home/elizabeth/Desktop/ACDC/TFP_test")
DrawGraph(network)
PGGraph(database, network)
database.DrawMorseGraph(0)
database.DrawMorseGraph(1)
database.DrawMorseGraph(2)
# Notice that there are three different morse graph indexes in this example. The `NFixedPointQuery.py` query can find any morse graph indexes with any number N of fixed points, as long as they contain our fixed point(s) of interest. Say I am interested in finding any morse graph indexes that have fixed points {0,0} and {2,1}; I can use this query by first setting up the 'bounds' to be searched. Then I can pass them directly into the query. We already know from drawing all of the morse graphs above that this search should return {0,1,2}. Note: this will throw an error if the bounds are overlapping.
bounds00 = {"X":0, "Y":0} #FP {0,0}
bounds21 = {"X":2, "Y":1} #FP {2,1}
NFP = NFixedPointQuery(database, bounds00, bounds21).matches()
NFP
# The `NsearchgoeQuery.py` query has some additional setup. If we want to search for any edges that take us from morse graph indexes containing ONLY fixed points {0,0} and {2,1} to morse graph indexes containing ONLY {0,0}, {1,0} and {2,1}, then we need to make separate lists containing these fixed points.
# +
bounds10 = {"X":1, "Y":0} #FP {1,0}
MGI0 = [bounds00, bounds21] #We do already know this is MGI 0
MGI1 = [bounds00, bounds21, bounds10] #and this is MGI 1
# -
# Since we want only these fixed points in the morse graphs, we use '=' as the second and third input to the query.
NsearchgoeQuery(database, '=', '=', MGI0, MGI1).stability_of_matches()
# The output is a set of tuples. Each of these tuples is a different edge in the parameter graph. Looking at the second item in the output from the above search, (2, 3, 1): 2 is the parameter graph index of the starting node, 3 is the ending node and 1 is the morse graph index of the ending node. If we want to allow the morse graphs to contain fixed points other than the ones we specify, we can use '<' in place of '='. In the next search I am looking for any edges that take us from any morse graph indexes containing fixed points {0,0} and {2,1} to morse graph indexes containing ONLY {0,0}, {1,0} and {2,1}.
NsearchgoeQuery(database, '<', '=', MGI0, MGI1).stability_of_matches()
# The `MGsearchthroughPG.py` query is similar to the `NsearchgoeQuery.py` query, except we can now enter any number of fixed point sets and it will give us a path through them rather than just a single edge. In this next example, I am looking for paths through the parameter graph where we start on nodes containing ONLY fixed points {0,0} and {2,1}. This query allows this set of fixed points to be repeated as many times as needed before moving on to the next set of fixed points, as long as the parameter graph nodes are not repeated. The next set of fixed points I am interested in is ONLY {0,0}, {1,0} and {2,1}; then I want to end on a parameter graph node with ONLY fixed points {0,0}, {1,1} and {2,1}. Once a node with the last set of fixed points is found, the path is ended (so the last set of fixed points should never be repeated).
# +
bounds11 = {"X":1, "Y":1} #FP {1,1}
MGI2 = [bounds00, bounds21, bounds11] #This is MGI 2
Path_of_interest = [MGI0, MGI1, MGI2]
M = MGsearchthroughPG(database,'=', '=', Path_of_interest).allpaths()
# -
M
# The output is a list of lists, where the sublists are our paths and the items in each sublist are the parameter graph indexes we have passed through. For example, the sublist [2, 3, 10] is a path through the parameter graph that starts on parameter graph node 2, then moves on to 3 and ends on 10.
| notebooks/MGsearchthoughPG_and_NsearchgoeQuery_Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FloPy
#
# ### Quick demo on how to use FloPy to save array data as a binary file
# +
import os
import sys
import shutil
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
# +
nlay,nrow,ncol = 1,20,10
model_ws = os.path.join("data","binary_data")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
precision = 'single' # or 'double'
dtype = np.float32 # or np.float64
mf = flopy.modflow.Modflow(model_ws=model_ws)
dis = flopy.modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol, delr=20, delc=10)
# -
# Create a linear data array
# +
# create the first row of data
b = np.linspace(10, 1, num=ncol, dtype=dtype).reshape(1, ncol)
# extend data to every row
b = np.repeat(b, nrow, axis=0)
# print the shape and type of the data
print(b.shape)
# -
# Plot the data array
pmv = flopy.plot.PlotMapView(model=mf)
v = pmv.plot_array(b)
pmv.plot_grid()
plt.colorbar(v, shrink=0.5);
# Write the linear data array to a binary file
# +
text = 'head'
# write a binary data file
pertim = dtype(1.0)
header = flopy.utils.BinaryHeader.create(bintype=text, precision=precision,
text=text, nrow=nrow, ncol=ncol,
ilay=1, pertim=pertim,
totim=pertim, kstp=1, kper=1)
pth = os.path.join(model_ws, 'bottom.bin')
flopy.utils.Util2d.write_bin(b.shape, pth, b, header_data=header)
# -
# Read the binary data file
bo = flopy.utils.HeadFile(pth, precision=precision)
br = bo.get_data(idx=0)
# Plot the data that was read from the binary file
pmv= flopy.plot.PlotMapView(model=mf)
v = pmv.plot_array(br)
pmv.plot_grid()
plt.colorbar(v, shrink=0.5);
# Plot the difference in the two values
pmv = flopy.plot.PlotMapView(model=mf)
v = pmv.plot_array(b-br)
pmv.plot_grid()
plt.colorbar(v, shrink=0.5);
| examples/Notebooks/flopy3_save_binary_data_file.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import tweepy
import csv
import json
ACCESS_TOKEN = ''
ACCESS_SECRET = ''
CONSUMER_KEY = ''
CONSUMER_SECRET =''
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth,wait_on_rate_limit=True)
c = tweepy.Cursor(api.search, q='coronavirus', lang='en')
count=201
page_needed=int(count/15) + 3
c.pages(page_needed)
id_tweets = []
date = []
full_text_tweet =[]
for tweet in c.items():
try:
tweet = api.get_status(tweet.id, count=200, tweet_mode="extended")
full_text_tweet.append(tweet.full_text)
id_tweets.append(tweet.id)
except:
pass
if len(full_text_tweet)==count:
break
print("Total Data length", len(full_text_tweet))
# -
tweets = pd.Series(full_text_tweet)
from sklearn.externals import joblib
import string
import re
from keras.preprocessing.sequence import pad_sequences
import pickle
import pandas as pd
CNN = joblib.load('CNN_MODEL.pkl')
# loading
with open('tokenizer.pickle', 'rb') as handle:
tokenizer = pickle.load(handle)
maxlen = 104
# +
happyemoticon = r" ([xX;:]-?[dD)]|:-?[\)]|[;:][pP]) "
sademoticon = r" :'?[/|\(] "
def preprocess_text(sen):
text = remove_tags(sen)
text = re.sub('['+ string.punctuation +']',' ',text) #remove punctuation
text = re.sub(r"\s+[a-zA-Z]\s+", ' ', text) #Single character
text = re.sub(r'\s+', ' ', text) #Removing multiple
text = re.sub(r"([xX;:]-?[dD)]|:-?[\)]|[;:][pP])",happyemoticon,text)
text = re.sub(r" :'?[/|\(] ",sademoticon,text)
text = re.sub(r"(.)\1+", r"\1\1",text)
text = re.sub(r"&\w+;", "",text)
text = re.sub(r"https?://\S*", "",text)
text = re.sub(r"https?://\S*", "",text)
text = re.sub(r"&\w+;", "",text)
return text
TAG_RE = re.compile(r'<[^>]+>')
def remove_tags(text):
return TAG_RE.sub('', text)
# +
filtered_text = []
for i in tweets:
filtered_text.append(preprocess_text(str(i)))
# -
pos_count=0
neg_count=0
neutral_count=0
# +
sentiment = []
for i in range(len(filtered_text)):
example = filtered_text[i]
instance = tokenizer.texts_to_sequences([example])
flat_list = []
for sublist in instance:
for item in sublist:
flat_list.append(item)
flat_list = [flat_list]
text_predict = pad_sequences(flat_list, padding='post', maxlen=maxlen)
predicted_score = CNN.predict(text_predict)
if predicted_score > 0.60:
        sentiment.append("🙂")  # positive
pos_count = pos_count+1
elif predicted_score < 0.60 and predicted_score > 0.40:
        sentiment.append("😐")  # neutral
neutral_count = neutral_count+1
else:
        sentiment.append("🙁")  # negative
neg_count = neg_count+1
# +
sentiments = pd.Series(sentiment, name="Sentiment")
full_text_tweet = pd.Series(full_text_tweet, name="Tweet")
result_dataframe = pd.concat([full_text_tweet, sentiments], axis=1)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_colwidth', -1)
# -
result_dataframe
from matplotlib import pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.axis('equal')
langs = ['positive', 'neutral', 'negative']
prediction = [pos_count,neutral_count,neg_count]
ax.pie(prediction, labels = langs,autopct='%1.2f%%')
plt.show()
| Prediction/CNN_PRED.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.012237, "end_time": "2020-03-20T08:02:51.583261", "exception": false, "start_time": "2020-03-20T08:02:51.571024", "status": "completed"} tags=[]
# # COVID-19 Growth By State (US)
# > Growth of COVID-19 for the US by State.
#
# - comments: true
# - author: <NAME>
# - categories: [growth, US, states]
# - image: images/covid-growth-states.png
# - permalink: /growth-us-states/
# + papermill={"duration": 0.803156, "end_time": "2020-03-20T08:02:52.391795", "exception": false, "start_time": "2020-03-20T08:02:51.588639", "status": "completed"} tags=[]
#hide
# %matplotlib inline
import math
import requests
import pandas as pd
import numpy as np
from cycler import cycler
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter
SMALL_SIZE = 16
MEDIUM_SIZE = 18
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
states_url = "https://covidtracking.com/api/states/daily"
us_url = "https://covidtracking.com/api/us/daily"
case_threshold = 100
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple',
'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan']
default_cycler = cycler(linestyle=['-', '--', ':', '-.']) * cycler(color=colors)
r = requests.get(states_url)
states_df = pd.DataFrame(r.json())
states_df['date'] = pd.to_datetime(states_df.date, format="%Y%m%d")
states_df = states_df[['date', 'state', 'positive']].sort_values('date')
cols = {}
for state in states_df.state.unique():
cases = states_df[(states_df.state == state) & (states_df.positive > case_threshold)]
cases = cases.reset_index().positive.reset_index(drop=True)
if len(cases):
cols[state] = cases
r = requests.get(us_url)
us_df = pd.DataFrame(r.json())
us_df['date'] = pd.to_datetime(us_df.date, format="%Y%m%d")
us_df = us_df[['date', 'positive']].sort_values('date')
cols['US'] = us_df.positive.reset_index(drop=True)
# + papermill={"duration": 0.484157, "end_time": "2020-03-20T08:02:52.880909", "exception": false, "start_time": "2020-03-20T08:02:52.396752", "status": "completed"} tags=[]
#collapse-hide
fig = plt.figure(figsize=(16, 9))
ax = plt.axes()
pd.concat(cols, axis=1).plot(ax=ax, marker='o')
plt.rc('axes', prop_cycle=default_cycler)
plt.title('COVID19 Growth in US as a whole and by state', fontsize=BIGGER_SIZE)
plt.ylabel('Cumulative confirmed cases')
plt.xlabel(f'Number of days since {case_threshold}th case')
plt.annotate('Based on COVID Data Repository by the COVID Tracking Project\n'
f'Latest data from {states_df.date.max().strftime("%Y-%m-%d")}, varies by state\n'
'Chart by <NAME>, @avyfain',
(0.07, 0.01), xycoords='figure fraction', fontsize=10);
plt.legend(loc="upper left")
x = np.linspace(0, plt.xlim()[1])
plt.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth', linewidth=1)
plt.annotate('33% Daily Growth',
(0.805, 0.5), xycoords='figure fraction', fontsize=SMALL_SIZE);
formatter = ScalarFormatter()
formatter.set_scientific(False)
ax.yaxis.set_major_formatter(formatter)
ax.yaxis.set_minor_formatter(formatter)
# + [markdown] papermill={"duration": 0.006606, "end_time": "2020-03-20T08:02:52.894833", "exception": false, "start_time": "2020-03-20T08:02:52.888227", "status": "completed"} tags=[]
#
# ## Same Chart, Y-Axis On A Log Scale
# + papermill={"duration": 0.559361, "end_time": "2020-03-20T08:02:53.460877", "exception": false, "start_time": "2020-03-20T08:02:52.901516", "status": "completed"} tags=[]
#collapse-hide
fig = plt.figure(figsize=(16, 8))
ax = plt.axes()
pd.concat(cols, axis=1).plot(ax=ax, marker='o', logy=True)
plt.rc('axes', prop_cycle=default_cycler)
plt.title('COVID19 Growth in US as a whole and by state', fontsize=BIGGER_SIZE)
plt.ylabel('Cumulative confirmed cases (log scale)')
plt.xlabel(f'Number of days since {case_threshold}th case')
plt.annotate('Based on COVID Data Repository by The COVID Tracking Project\n'
f'Latest data from {states_df.date.max().strftime("%Y-%m-%d")}, varies by state\n'
'Chart by <NAME>, @avyfain',
(0.07, .01), xycoords='figure fraction', fontsize=10);
plt.legend(loc="upper left")
plt.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
plt.annotate('33% Daily Growth',
(0.8, 0.75), xycoords='figure fraction', fontsize=SMALL_SIZE);
ax.yaxis.set_major_formatter(formatter)
# + papermill={"duration": 0.244749, "end_time": "2020-03-20T08:02:53.715729", "exception": false, "start_time": "2020-03-20T08:02:53.470980", "status": "completed"} tags=[]
#hide
fig.savefig('../images/covid-growth-states.png')
# + [markdown] papermill={"duration": 0.009773, "end_time": "2020-03-20T08:02:53.735294", "exception": false, "start_time": "2020-03-20T08:02:53.725521", "status": "completed"} tags=[]
# This visualization was made by [<NAME>](https://twitter.com/avyfain)[^1].
#
# [^1]: Data sourced from ["The COVID Tracking Project"](https://covidtracking.com/). Link to [original notebook](https://github.com/avyfain/covid19/blob/master/covid19.ipynb). Updated hourly by [GitHub Actions](https://github.com/features/actions).
| _notebooks/2020-03-12-covid19-us-states.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="aXZ_HklUCeO7" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="0187ac0e-5bba-46f0-f71f-bf5af5596ba7"
# !wget 'https://raw.githubusercontent.com/RG2806/Recommendation-Engine-Movies-/master/movie_dataset.csv'
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
data = pd.read_csv('/content/movie_dataset.csv')
features = ['keywords','director','genres','cast']
for feature in features:
data[feature] = data[feature].fillna('')
data['combined_features'] = data['keywords']+" "+data['director']+" "+data['genres']+" "+data['cast']
cv = CountVectorizer()
count_matrix = cv.fit_transform(data["combined_features"])
cosine_sim = cosine_similarity(count_matrix)
def get_title_from_index(index):
return data[data.index == index]["title"].values[0]
def get_index_from_title(title):
return data[data.title == title]["index"].values[0]
movie_user_likes = input("Enter the movie you liked")
movie_index = get_index_from_title(movie_user_likes)
similar = list(enumerate(cosine_sim[movie_index]))
sorted_similar = sorted(similar,key=lambda x:x[1], reverse=True)[1:8]
i = 0
print("Some movies similiar to " + movie_user_likes + " are:")
for element in sorted_similar:
print(get_title_from_index(element[0]))
i = i+1
if i > 7:
        break
# + [markdown] id="gYCOOVPiecIg"
# # **OUTPUT BELOW**
# + id="cNi4b0oHeWXP" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="7f1d1927-fce8-49e1-f7e9-e1011e17c5e7"
| MovieRecommendationEngine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# default_exp analysis
# -
# # Analysis
#
# > This contains fastai Learner extensions useful to perform prediction analysis.
#export
import sklearn.metrics as skm
from sklearn.model_selection import train_test_split
from fastai.learner import *
from fastai.interpret import *
from tsai.imports import *
from tsai.utils import *
from tsai.data.preprocessing import *
from tsai.data.core import *
from tsai.inference import *
#export
@patch
@delegates(subplots)
def show_probas(self:Learner, figsize=(6,6), ds_idx=1, dl=None, one_batch=False, max_n=None, **kwargs):
recorder = copy(self.recorder) # This is to avoid loss of recorded values while generating preds
if dl is None: dl = self.dls[ds_idx]
if one_batch: dl = [dl.one_batch()]
probas, targets = self.get_preds(dl=dl)
if probas.ndim == 2 and probas.min() < 0 or probas.max() > 1: probas = nn.Softmax(-1)(probas)
if not isinstance(targets[0].item(), Integral): return
targets = targets.flatten()
if max_n is not None:
idxs = np.random.choice(len(probas), max_n, False)
probas, targets = probas[idxs], targets[idxs]
if isinstance(probas, torch.Tensor): probas = probas.detach().cpu().numpy()
if isinstance(targets, torch.Tensor): targets = targets.detach().cpu().numpy()
fig = plt.figure(figsize=figsize, **kwargs)
classes = np.unique(targets)
nclasses = len(classes)
vals = np.linspace(.5, .5 + nclasses - 1, nclasses)[::-1]
plt.vlines(.5, min(vals) - 1, max(vals), color='black', linewidth=.5)
cm = plt.get_cmap('gist_rainbow')
color = [cm(1.* c/nclasses) for c in range(1, nclasses + 1)][::-1]
# class_probas = np.array([probas[i,t] for i,t in enumerate(targets)])
class_probas = np.array([probas[i][t] for i,t in enumerate(targets)])
for i, c in enumerate(classes):
plt.scatter(class_probas[targets == c] if nclasses > 2 or i > 0 else 1 - class_probas[targets == c],
targets[targets == c] + .5 * (np.random.rand((targets == c).sum()) - .5), color=color[i], edgecolor='black', alpha=.2, s=100)
if nclasses > 2: plt.vlines((targets == c).mean(), i - .5, i + .5, color='r', linewidth=.5)
plt.hlines(vals, 0, 1)
plt.ylim(min(vals) - 1, max(vals))
plt.xlim(0,1)
plt.xticks(np.linspace(0,1,11), fontsize=12)
plt.yticks(classes, [self.dls.vocab[x] for x in classes], fontsize=12)
plt.title('Predicted proba per true class' if nclasses > 2 else 'Predicted class 1 proba per true class', fontsize=14)
plt.xlabel('Probability', fontsize=12)
plt.ylabel('True class', fontsize=12)
plt.grid(axis='x', color='gainsboro', linewidth=.2)
plt.show()
self.recorder = recorder
#export
@patch
def plot_confusion_matrix(self:Learner, ds_idx=1, dl=None, thr=.5, normalize=False, title='Confusion matrix', cmap="Blues", norm_dec=2, figsize=(6,6),
title_fontsize=16, fontsize=12, plot_txt=True, **kwargs):
"Plot the confusion matrix, with `title` and using `cmap`."
# This function is mainly copied from the sklearn docs
if dl is None: dl = self.dls[ds_idx]
assert dl.cat
if dl.c == 2: # binary classification
probas, preds = self.get_preds(dl=dl)
y_pred = (probas[:, 1] > thr).numpy().astype(int)
y_test = preds.numpy()
if normalize: skm_normalize = 'true'
else: skm_normalize = None
cm = skm.confusion_matrix(y_test, y_pred, normalize=skm_normalize)
else:
cm = ClassificationInterpretation.from_learner(self).confusion_matrix()
if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
fig = plt.figure(figsize=figsize, **kwargs)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
if self.dls.c == 2:
plt.title(f"{title} (threshold: {thr})", fontsize=title_fontsize)
else:
plt.title(title, fontsize=title_fontsize)
tick_marks = np.arange(len(self.dls.vocab))
plt.xticks(tick_marks, self.dls.vocab, rotation=90, fontsize=fontsize)
plt.yticks(tick_marks, self.dls.vocab, rotation=0, fontsize=fontsize)
if plot_txt:
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
coeff = f'{cm[i, j]:.{norm_dec}f}' if normalize else f'{cm[i, j]}'
plt.text(j, i, coeff, horizontalalignment="center", verticalalignment="center", color="white" if cm[i, j] > thresh else "black", fontsize=fontsize)
ax = fig.gca()
ax.set_ylim(len(self.dls.vocab)-.5,-.5)
plt.tight_layout()
plt.ylabel('Actual', fontsize=fontsize)
plt.xlabel('Predicted', fontsize=fontsize)
plt.grid(False)
# +
#export
@patch
def top_losses(self:Learner,
X, # array-like object representing the independent variables
y, # array-like object representing the target
k:int=9, # Optional. #items to plot
largest=True, # Flag to show largest or smallest losses
bs:int=64, # batch size
):
*_, losses = self.get_X_preds(X, y, bs=bs, with_loss=True)
top_losses, idxs = losses.topk(ifnone(k, len(losses)), largest=largest)
idxs = idxs.tolist()
return top_losses, idxs
@patch
def plot_top_losses(self:Learner,
X, # array-like object representing the independent variables
y, # array-like object representing the target
k:int=9, # Optional. #items to plot
largest=True, # Flag to show largest or smallest losses
bs:int=64, # batch size
**kwargs, # show_batch kwargs
):
*_, losses = self.get_X_preds(X, y, bs=bs, with_loss=True)
idxs = losses.topk(ifnone(k, len(losses)), largest=largest)[1].tolist()
dl = self.dls.valid.new_dl(X[idxs], y=y[idxs], bs=k)
b = dl.one_batch()
dl.show_batch(b, max_n=k, **kwargs)
# -
# ## Permutation importance
# We've also introduced 2 methods to help you better understand how important certain features or certain steps are for your model. Both methods use permutation importance.
#
# ⚠️ **The permutation feature or step importance is defined as the decrease in a model score when a single feature or step value is randomly shuffled.**
#
# So if you are using accuracy (higher is better), the most important features or steps will be those with a *lower* value on the chart (as randomly shuffling them reduces performance).
#
# The opposite occurs for metrics like mean squared error (lower is better). In this case, the most important features or steps will be those with a *higher* value on the chart.
#
# There are 2 issues with step importance:
#
# * there may be many steps and the analysis could take very long
# * steps will likely have a high autocorrelation
#
# For those reasons, we've introduced an argument (n_steps) to group steps. In this way you'll be able to know which part of the time series is the most important.
#
# Feature importance has been adapted from https://www.kaggle.com/cdeotte/lstm-feature-importance by <NAME> (Kaggle GrandMaster).
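# A minimal, framework-agnostic sketch of the idea (illustrative only; it assumes a generic 2D feature matrix and a fitted `model` exposing a "higher is better" `score(X, y)`, which is not how the Learner methods below are invoked):
# +
import numpy as np

def permutation_importance_sketch(model, X, y, n_repeats=3, random_state=23):
    rng = np.random.default_rng(random_state)
    baseline = model.score(X, y)               # score with intact features
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])          # shuffle a single feature across samples
            drops.append(baseline - model.score(X_perm, y))
        importances.append(np.mean(drops))     # mean score drop = importance of feature j
    return np.array(importances)
# -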
#export
@patch
def feature_importance(self:Learner,
X=None, # array-like object containing the time series. If None, all data in the validation set will be used.
y=None, # array-like object containing the targets. If None, all targets in the validation set will be used.
partial_n:(int, float)=None, # # (int) or % (float) of used to measure feature importance. If None, all data will be used.
method:str='permutation', # Method used to invalidate feature. Use 'permutation' for shuffling or 'ablation' for setting values to np.nan.
feature_names:list=None, # Optional list of feature names that will be displayed if available. Otherwise var_0, var_1, etc.
sel_classes:(str, list)=None, # classes for which the analysis will be made
key_metric_idx:int=0, # Optional position of the metric used. If None or no metric is available, the loss will be used.
show_chart:bool=True, # Flag to indicate if a chart showing permutation feature importance will be plotted.
figsize:tuple=(10, 5), # Size of the chart.
title:str=None, # Optional string that will be used as the chart title. If None 'Permutation Feature Importance'.
return_df:bool=True, # Flag to indicate if the dataframe with feature importance will be returned.
save_df_path:Path=None, # Path where dataframe containing the permutation feature importance results will be saved.
random_state:int=23, # Optional int that controls the shuffling applied to the data.
verbose:bool=True, # Flag that controls verbosity.
):
r"""Calculates feature importance as the drop in the model's validation loss or metric when a feature value is randomly shuffled"""
assert method in ['permutation', 'ablation']
# X, y
if X is None:
X = self.dls.valid.dataset.tls[0].items
if hasattr(self.dls.valid.dataset.tls[0], '_splits'): X = X[self.dls.valid.dataset.tls[0]._splits]
if y is None:
y = self.dls.valid.dataset.tls[1].items
if partial_n is not None:
_, rand_idxs, *_ = train_test_split(np.arange(len(y)), y, test_size=partial_n, random_state=random_state, stratify=y)
X = X.oindex[rand_idxs] if hasattr(X, 'oindex') else X[rand_idxs]
y = y.oindex[rand_idxs] if hasattr(y, 'oindex') else y[rand_idxs]
else:
X, y = X[:], y[:]
if sel_classes is not None:
filt = np.isin(y, listify(sel_classes))
X, y = X[filt], y[filt]
pv(f'X.shape: {X.shape}', verbose)
pv(f'y.shape: {y.shape}', verbose)
# Metrics
metrics = [mn for mn in self.recorder.metric_names if mn not in ['epoch', 'train_loss', 'valid_loss', 'time']]
if len(metrics) == 0 or key_metric_idx is None:
metric_name = self.loss_func.__class__.__name__
key_metric_idx = None
else:
metric_name = metrics[key_metric_idx]
metric = self.recorder.metrics[key_metric_idx].func
metric_name = metric_name.replace("train_", "").replace("valid_", "")
pv(f'Selected metric: {metric_name}', verbose)
# Selected vars & feature names
sel_vars = not(isinstance(self.dls.sel_vars, slice) and self.dls.sel_vars == slice(None, None, None))
if feature_names is None:
feature_names = L([f"var_{i}" for i in range(X.shape[1])])
if sel_vars:
feature_names = feature_names[self.dls.sel_vars]
else:
feature_names = listify(feature_names)
if sel_vars:
assert len(feature_names) == len(self.dls.sel_vars)
else:
assert len(feature_names) == X.shape[1]
sel_var_idxs = L(np.arange(X.shape[1]).tolist())
if sel_vars:
sel_var_idxs = sel_var_idxs[self.dls.sel_vars]
assert len(feature_names) == len(sel_var_idxs)
g = list(zip(np.arange(len(sel_var_idxs)+2), [0] + sel_var_idxs))
# Loop
COLS = ['BASELINE'] + list(feature_names)
results = []
pv(f'Computing feature importance ({method} method)...', verbose)
try:
if method == 'ablation':
fs = self.dls.valid.after_batch.fs
self.dls.valid.after_batch.fs = fs + [TSNan2Value()]
for i,k in progress_bar(g):
if i > 0:
if k not in sel_var_idxs: continue
save_feat = X[:, k].copy()
if method == 'permutation':
# shuffle along samples & steps
X[:, k] = random_shuffle(X[:, k].flatten(), random_state=random_state).reshape(X[:, k].shape)
elif method == 'ablation':
X[:, k] = np.nan
if key_metric_idx is None:
value = self.get_X_preds(X, y, with_loss=True)[-1].mean().item()
else:
output = self.get_X_preds(X, y)
if self.dls.c == 2:
try: value = metric(output[1], output[0][:, 1]).item()
except: value = metric(output[0], output[1]).item()
else:
value = metric(output[0], output[1]).item()
del output
pv(f"{k:3} feature: {COLS[i]:20} {metric_name}: {value:8.6f}", verbose)
results.append([COLS[i], value])
del value; gc.collect()
if i > 0:
X[:, k] = save_feat
del save_feat; gc.collect()
if method == 'ablation':
self.dls.valid.after_batch.fs = fs
except KeyboardInterrupt:
if i > 0:
X[:, k] = save_feat
del save_feat; gc.collect()
if method == 'ablation':
self.dls.valid.after_batch.fs = fs
# DataFrame
df = pd.DataFrame(results, columns=["Feature", metric_name])
df[f'{metric_name}_change'] = df[metric_name] - df.loc[0, metric_name]
sign = np.sign(df[f'{metric_name}_change'].mean())
if sign == 0: sign = 1
df[f'{metric_name}_change'] = df[f'{metric_name}_change'] * sign
# Display feature importance
if show_chart:
print()
value_change = df.loc[1:, f'{metric_name}_change'].values
pos_value_change = value_change.copy()
neg_value_change = value_change.copy()
pos_value_change[pos_value_change < 0] = 0
neg_value_change[neg_value_change > 0] = 0
plt.figure(figsize=(10, .5*len(value_change)))
plt.barh(np.arange(len(value_change))[::-1], pos_value_change, color='lime', edgecolor='black')
plt.barh(np.arange(len(value_change))[::-1], neg_value_change, color='red', edgecolor='black')
plt.axvline(0, color='black')
plt.yticks(np.arange(len(value_change))[::-1], df.loc[1:, "Feature"].values)
if title is None: title = f'Feature Importance ({method} method)'
plt.title(title, size=16)
text = 'increase' if sign == 1 else 'decrease'
plt.xlabel(f"{metric_name} {text} when feature is removed")
plt.ylim((-1,len(value_change)))
plt.show()
# Save feature importance
df = df.sort_values(metric_name, ascending=sign < 0).reset_index(drop=True)
if save_df_path:
if save_df_path.split('.')[-1] != 'csv': save_df_path = f'{save_df_path}.csv'
df.to_csv(f'{save_df_path}', index=False)
pv(f'Feature importance df saved to {save_df_path}', verbose)
if return_df:
return df
#export
@patch
def step_importance(
self:Learner,
X=None, # array-like object containing the time series. If None, all data in the validation set will be used.
y=None, # array-like object containing the targets. If None, all targets in the validation set will be used.
partial_n:(int, float)=None, # # (int) or % (float) of used to measure feature importance. If None, all data will be used.
method:str='permutation', # Method used to invalidate feature. Use 'permutation' for shuffling or 'ablation' for setting values to np.nan.
step_names:list=None, # Optional list of step names that will be displayed if available. Otherwise 0, 1, 2, etc.
sel_classes:(str, list)=None, # classes for which the analysis will be made
n_steps:int=1, # # of steps that will be analyzed at a time. Default is 1.
key_metric_idx:int=0, # Optional position of the metric used. If None or no metric is available, the loss will be used.
show_chart:bool=True, # Flag to indicate if a chart showing permutation feature importance will be plotted.
figsize:tuple=(10, 5), # Size of the chart.
title:str=None, # Optional string that will be used as the chart title. If None 'Permutation Feature Importance'.
xlabel=None, # Optional string that will be used as the chart xlabel. If None 'steps'.
return_df:bool=True, # Flag to indicate if the dataframe with feature importance will be returned.
save_df_path:Path=None, # Path where dataframe containing the permutation feature importance results will be saved.
random_state:int=23, # Optional int that controls the shuffling applied to the data.
verbose:bool=True, # Flag that controls verbosity.
):
r"""Calculates step importance as the drop in the model's validation loss or metric when a step/s value/s is/are randomly shuffled"""
assert method in ['permutation', 'ablation']
# X, y
if X is None:
X = self.dls.valid.dataset.tls[0].items
if hasattr(self.dls.valid.dataset.tls[0], '_splits'): X = X[self.dls.valid.dataset.tls[0]._splits]
if y is None:
y = self.dls.valid.dataset.tls[1].items
if partial_n is not None:
_, rand_idxs, *_ = train_test_split(np.arange(len(y)), y, test_size=partial_n, random_state=random_state, stratify=y)
X = X.oindex[rand_idxs] if hasattr(X, 'oindex') else X[rand_idxs]
y = y.oindex[rand_idxs] if hasattr(y, 'oindex') else y[rand_idxs]
else:
X, y = X[:], y[:]
if sel_classes is not None:
filt = np.isin(y, listify(sel_classes))
X, y = X[filt], y[filt]
pv(f'X.shape: {X.shape}', verbose)
pv(f'y.shape: {y.shape}', verbose)
# Metrics
metrics = [mn for mn in self.recorder.metric_names if mn not in ['epoch', 'train_loss', 'valid_loss', 'time']]
if len(metrics) == 0 or key_metric_idx is None:
metric_name = self.loss_func.__class__.__name__
key_metric_idx = None
else:
metric_name = metrics[key_metric_idx]
metric = self.recorder.metrics[key_metric_idx].func
metric_name = metric_name.replace("train_", "").replace("valid_", "")
pv(f'Selected metric: {metric_name}', verbose)
# Selected steps
sel_step_idxs = L(np.arange(X.shape[-1]).tolist())[self.dls.sel_steps]
if n_steps != 1:
sel_step_idxs = [listify(sel_step_idxs[::-1][n:n+n_steps][::-1]) for n in range(0, len(sel_step_idxs), n_steps)][::-1]
g = list(zip(np.arange(len(sel_step_idxs)+2), [0] + sel_step_idxs))
# Loop
COLS = ['BASELINE'] + sel_step_idxs
results = []
_step_names = []
pv('Computing step importance...', verbose)
try:
if method == 'ablation':
fs = self.dls.valid.after_batch.fs
self.dls.valid.after_batch.fs = fs + [TSNan2Value()]
for i,k in progress_bar(g):
if i > 0:
if k not in sel_step_idxs: continue
save_feat = X[..., k].copy()
if method == 'permutation':
# shuffle along samples
X[..., k] = shuffle_along_axis(X[..., k], axis=0, random_state=random_state)
elif method == 'ablation':
X[..., k] = np.nan
if key_metric_idx is None:
value = self.get_X_preds(X, y, with_loss=True)[-1].mean().item()
else:
output = self.get_X_preds(X, y)
if self.dls.c == 2:
try: value = metric(output[1], output[0][:, 1]).item()
except: value = metric(output[0], output[1]).item()
else:
value = metric(output[0], output[1]).item()
del output
# Step names
if i == 0 or step_names is None:
if i > 0 and n_steps != 1:
step_name = f"{str(COLS[i][0])} to {str(COLS[i][-1])}"
else: step_name = str(COLS[i])
else:
step_name = step_names[i - 1]
if i > 0: _step_names.append(step_name)
pv(f"{i:3} step: {step_name:20} {metric_name}: {value:8.6f}", verbose)
results.append([step_name, value])
del value; gc.collect()
if i > 0:
X[..., k] = save_feat
del save_feat; gc.collect()
if method == 'ablation':
self.dls.valid.after_batch.fs = fs
except KeyboardInterrupt:
if i > 0:
X[..., k] = save_feat
del save_feat; gc.collect()
if method == 'ablation':
self.dls.valid.after_batch.fs = fs
# DataFrame
df = pd.DataFrame(results, columns=["Step", metric_name])
df[f'{metric_name}_change'] = df[metric_name] - df.loc[0, metric_name]
sign = np.sign(df[f'{metric_name}_change'].mean())
if sign == 0: sign = 1
df[f'{metric_name}_change'] = df[f'{metric_name}_change'] * sign
# Display step importance
if show_chart:
print()
value_change = df.loc[1:, f'{metric_name}_change'].values
pos_value_change = value_change.copy()
neg_value_change = value_change.copy()
pos_value_change[pos_value_change < 0] = 0
neg_value_change[neg_value_change > 0] = 0
plt.figure(figsize=(10, .5*len(value_change)))
plt.bar(np.arange(len(value_change)), pos_value_change, color='lime', edgecolor='black')
plt.bar(np.arange(len(value_change)), neg_value_change, color='red', edgecolor='black')
plt.axhline(0, color='black')
plt.xticks(np.arange(len(value_change)), _step_names, rotation=90)
if title is None: title = f'Step Importance ({method} method)'
plt.title(title, size=16)
text = 'increase' if sign == 1 else 'decrease'
if xlabel is None: xlabel = 'steps'
plt.xlabel(xlabel)
plt.ylabel(f"{metric_name} {text} when removed")
plt.xlim((-1,len(value_change)))
plt.show()
# Save step importance
df = df.sort_values(metric_name, ascending=sign < 0).reset_index(drop=True)
if save_df_path:
if save_df_path.split('.')[-1] != 'csv': save_df_path = f'{save_df_path}.csv'
df.to_csv(f'{save_df_path}', index=False)
pv(f'Step importance df saved to {save_df_path}', verbose)
if return_df:
return df
from tsai.data.external import get_UCR_data
from tsai.data.core import get_ts_dls, TSClassification
from tsai.data.preprocessing import TSRobustScale, TSStandardize
from tsai.learner import ts_learner
from tsai.models.FCNPlus import FCNPlus
from tsai.metrics import accuracy
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
tfms = [None, [TSClassification()]]
batch_tfms = TSRobustScale()
batch_tfms = TSStandardize()  # note: this overrides the TSRobustScale assignment above, so only TSStandardize is applied
dls = get_ts_dls(X, y, splits=splits, sel_vars=[0, 3, 5, 8, 10], sel_steps=slice(-30, None), tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, FCNPlus, metrics=accuracy, train_metrics=True)
learn.fit_one_cycle(2)
learn.plot_metrics()
learn.show_probas()
learn.plot_confusion_matrix()
learn.plot_top_losses(X[splits[1]], y[splits[1]], largest=True)
learn.top_losses(X[splits[1]], y[splits[1]], largest=True)
learn.feature_importance()
learn.step_importance(n_steps=5);
# You may pass an X and y if you want to analyze a particular group of samples:
#
# ```python
# learn.feature_importance(X=X[splits[1]], y=y[splits[1]])
# ```
# If you have a large validation dataset, you may also use the partial_n argument to select a fixed number of samples (integer) or a percentage of the validation dataset (float):
#
# ```python
# learn.feature_importance(partial_n=.1)
# ```
#
# ```python
# learn.feature_importance(partial_n=100)
# ```
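# The `partial_n` argument is also available on `step_importance` (see its signature above); a minimal,
# illustrative example:
#
# ```python
# learn.step_importance(partial_n=.1, n_steps=5)
# ```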
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
| nbs/052b_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbpresent={"id": "eff3a02c-cf13-4c08-b2b3-df48c676159c"}
# <head>
# <meta content="text/html; charset=ISO-8859-1"
# http-equiv="content-type">
# <title></title>
# </head>
# <body>
# <p style="text-align: center;" class="MsoNormal"><span
# style="font-weight: bold; text-decoration: underline;">Hackathon 2018 - chart-digitizer</span></p>
#
# <p class="MsoNormal"><br>
# <o:p></o:p></p>
# <p style="text-align: center;" class="MsoNormal"><o:p></o:p></p>
# <p class="MsoNormal"><o:p> </o:p></p>
# <p style="text-align: center;" class="MsoNormal"><span
# style="font-style: italic;"><NAME></span><o:p></o:p></p>
# <p style="text-align: center;" class="MsoNormal"><span
# style="font-style: italic;"><NAME></span><o:p></o:p></p>
# <p style="text-align: center;" class="MsoNormal"><span
# style="font-style: italic;"><NAME></span><o:p></o:p></p>
# <p style="text-align: center;" class="MsoNormal"><span
# style="font-style: italic;"><NAME></span><o:p></o:p></p>
# <br>
# <p class="MsoNormal"><o:p></o:p></p>
# </body>
#
# </div><!--/.row-->
# <p class="clearfix"></p>
# <br />
# <div class="row">
# + [markdown] nbpresent={"id": "15f3fbd2-0e7f-4ceb-93c1-8936cdebed7b"}
# <head>
# <meta content="text/html; charset=ISO-8859-1"
# http-equiv="content-type">
# <title></title>
# </head>
# <body>
# <p style="text-align: center;" class="MsoNormal"><img
# style="width: 50px; height: 15px;" alt="Python"
# src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/260px-Python_logo_and_wordmark.svg.png"><br>
# <o:p></o:p></p>
# <p style="text-align: center;" class="MsoNormal"><img alt="OpenCV"
# src="https://a.fsdn.com/allura/p/opencvlibrary/icon" /><br>
# <span style="font-weight: bold; text-decoration: underline;"></span><o:p></o:p></p><p class="MsoNormal"><img style="width: 50px; height: 15px;"
# alt="Anaconda"
# src="https://upload.wikimedia.org/wikipedia/en/thumb/c/cd/Anaconda_Logo.png/200px-Anaconda_Logo.png"><br>
# <o:p></o:p></p>
# </body>
# + [markdown] nbpresent={"id": "7fb97c5e-4092-4e4e-8023-b795f49791da"}
#
# - Random chart generator
# - Model training using object detection
# - Result analysis
#
# + nbpresent={"id": "e0937ba9-ac52-4500-9355-f809320da00f"}
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib
# + [markdown] nbpresent={"id": "912cb702-4e13-45e3-be96-8ca5bb075111"}
# Random bar chart generator
# +
data = pd.DataFrame(data=np.random.rand(5,1), index=range(1,6), columns=['Fred'])
m,n = np.shape(data)
plt.clf()
plt.bar(x=data.index.values, height=data.values.ravel(), color='k') # figsize=(10, 6))
# Options for later from https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html
# bar_width = 0.35
# alpha = .3
fig=plt.gcf()
fig.set_size_inches(3, 2)
plt.axis('off')
fig.tight_layout()
fig.canvas.draw()
# grab the pixel buffer and dump it into a numpy array
pixels = np.array(fig.canvas.renderer._renderer)
plt.plot();
# -
# Display generated chart
# + nbpresent={"id": "b6b7891d-3c45-408b-b1e0-9ff049e33104"}
print(pixels);
print(data);
# + nbpresent={"id": "ae7b3af1-e0c2-4011-8fb6-ae6794c5dfa9"}
y, X = img_gen_bar()
print(y)
#for neural net
X=X/255
#for DNN only
#X=X.reshape(1,-1,3)
#data={}
#for i in range(1000) :
# data[i] = (generate_bar_chart() )
# + [markdown] nbpresent={"id": "45a597e1-1a3d-4e04-b570-92fb942bbbc8"}
# For historical reasons, OpenCV defaults to BGR channel order instead of the usual RGB,
# so let's convert OpenCV images to RGB consistently.
#
# The Lab color space has three components:
#
# - L – Lightness (intensity)
# - a – color component ranging from Green to Magenta
# - b – color component ranging from Blue to Yellow
#
# The Lab color space is quite different from the RGB color space. In RGB the color information is split across three channels, but those same three channels also encode brightness. In Lab, the L channel encodes brightness only and is independent of the color information, which lives in the other two channels (a small channel-split check follows the conversion cell below).
# + nbpresent={"id": "dfa4b5ff-7f56-4a23-84b4-af0ac0e890ca"}
cvimrgb = cv2.cvtColor(cvim2disp,cv2.COLOR_BGR2RGB)
#or
#imbgr = cv2.cvtColor(im2disp,cv2.COLOR_RGB2BGR)
figure()
imshow(cvimrgb)
cvimlab = cv2.cvtColor(cvim2disp,cv2.COLOR_BGR2LAB)
#or
#imbgr = cv2.cvtColor(im2disp,cv2.COLOR_RGB2BGR)
figure()
imshow(cvimlab)
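# A quick check of the claim above (minimal sketch): split the Lab image computed in this cell and
# display only the L channel, which should look like a grayscale version of the photo.
L_chan, a_chan, b_chan = cv2.split(cvimlab)
figure()
imshow(L_chan, cmap='gray')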
# + [markdown] nbpresent={"id": "5e5ef64d-692e-4935-bcf8-8aefff52c384"}
# Useful utility function
# + nbpresent={"id": "71e23afe-a5a6-4888-8fbb-574af66c9ed1"}
img = cv2.imread('sample-1.png', 0)
img = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY)[1] # ensure binary
ret, labels = cv2.connectedComponents(img)
# Map component labels to hue val
label_hue = np.uint8(179*labels/np.max(labels))
blank_ch = 255*np.ones_like(label_hue)
labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
# cvt to BGR for display
labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
# set bg label to black
labeled_img[label_hue==0] = 0
figure()
imshow( labeled_img)
# -
# Simple filtering example
# +
im2disp = imread('sample-1.png')
blurred = cv2.GaussianBlur(im2disp,(19,19),0)
figure()
imshow(blurred)
#more general method
kernel = np.ones((5,5),np.float32)/25
blurred2 = cv2.filter2D(im2disp,-1,kernel)
figure()
imshow(blurred2)
# + [markdown] nbpresent={"id": "0b2b252f-dc8d-46b3-865c-d91e5a67ed6a"}
# Saving images to disk
# + nbpresent={"id": "b580a43e-03cd-4fdc-8f0d-e69653969251"}
cv2.imwrite('data/mycvimage.png', cvim2disp)
#or
imsave('data/myimage.png',im2disp)
# -
x=2
# %whos
# + [markdown] nbpresent={"id": "1501269c-2107-40a8-a8ab-d46443c5c133"}
# #1 numpy gotcha for people coming from Matlab
# + nbpresent={"id": "1aee9993-9a16-4053-bfc8-e97ea1fabbdb"}
x = zeros(5)
y = x
y[1] = 1
#uncomment next line and run
print(x)
# + [markdown] nbpresent={"id": "0f618461-e7ba-41e3-9416-e70fd18e03d5"}
# What happened? Why did modifying y change x?
# <br><br>A: Python copies arrays and other mutable data types by reference by default
# <br><br><br>Here's what you probably want:
#
# + nbpresent={"id": "285b7d68-b953-4b9e-8095-9e28a454adee"}
x=zeros(5)
y=x.copy()
y[1] = 1
print(x)
# + [markdown] nbpresent={"id": "fdf28f4d-e257-49f5-a7c3-a4050163da8b"}
# Let's run some of the included OpenCV examples
#
# + nbpresent={"id": "48cf29b7-5967-4798-acf8-12fcb8c08928"}
# %run inpaint.py
# + nbpresent={"id": "c2b0183e-eb4e-43fe-ad48-4023b9bb7409"}
# %run deconvolution.py
# + nbpresent={"id": "755b4867-41e4-4c84-94bc-c91926f4de4e"}
# %run find_obj.py
# + nbpresent={"id": "5525c670-90ca-440a-8fdf-3e4583db70d7"}
# %run peopledetect.py
# -
# cd python
| learning/.ipynb_checkpoints/bar-chart-digitizer-ML-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

wthr = pd.read_csv("weather3_180703.csv")
wthr['date'] = pd.to_datetime(wthr['date'])
wthr.tail()
# +
dic = {}
for i in wthr.columns:
count_null = wthr[i].isna().sum()
print(i, ":", count_null, "(",round((count_null / len(wthr) * 100),2),"%",")")
dic[i] = count_null
plt.figure(figsize=(25,5))
sns.barplot(x=list(dic.keys()), y=list(dic.values()))
plt.show()
# -
wthr.daytime.unique()
rain_text = ['FC', 'TS', 'GR', 'RA', 'DZ', 'SN', 'SG', 'GS', 'PL', 'IC', 'FG', 'BR', 'UP', 'FG+']
other_text = ['HZ', 'FU', 'VA', 'DU', 'DS', 'PO', 'SA', 'SS', 'PY', 'SQ', 'DR', 'SH', 'FZ', 'MI', 'PR', 'BC', 'BL', 'VC' ]
wthr['rainY'] = 0
idx_ls = []
for txt in rain_text:
for idx in wthr.index.values:
if txt in wthr.loc[idx, "codesum"]:
idx_ls.append(idx)
wthr.loc[idx_ls, "rainY"] = 1
wthr['otherY'] = 0
idx_ls = []
for txt in other_text:
for idx in wthr.index.values:
if txt in wthr.loc[idx, "codesum"]:
idx_ls.append(idx)
wthr.loc[idx_ls, "otherY"] = 1
# +
wthr['nothing'] = 0
idx_ls = []
for idx in wthr.index.values:
if "MD" in wthr.loc[idx, "codesum"]:
idx_ls.append(idx)
# -
wthr.loc[idx_ls, "nothing"] = 1
wthr.tail()
wthr.to_csv("weather3_180703.csv", index=False)
wthr.plot.scatter('dewpoint','preciptotal')
plt.show()
plt.scatter((wthr['dewpoint']) ,np.log1p(wthr['preciptotal']))
plt.show()
# +
from statsmodels.stats.outliers_influence import variance_inflation_factor

# The original column selection below was left incomplete, so compute VIF on the numeric,
# NaN-free part of the frame instead (variance_inflation_factor needs a numeric matrix),
# assuming the remaining numeric columns parsed as numbers.
# cols = ['tmax','tmin','tavg','']
# wthr2 = wthr.loc[:,[]]
wthr2 = wthr.select_dtypes(include=np.number).dropna()

vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(wthr2.values, i) for i in range(wthr2.shape[1])]
vif["features"] = wthr2.columns
vif
# -
| DataScience_Project1_Predict_products_sales_in_Walmart/weather3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Basic trading
#
# - entry: do I use only the data for that stock (endogenous), or also include the data from other stocks (exogenous)?
# - entry and exit
# - exit: take profit, stop loss, time out (a minimal sketch of such an exit rule follows below)
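# The strategies below only use EMA crossings and Bollinger-band signals, so here is a minimal,
# illustrative sketch of the three exit conditions listed above; the thresholds and the bar counter
# are assumptions, not tuned values.

# +
def should_exit(entry_price, current_price, bars_held,
                take_profit=0.03, stop_loss=0.02, max_bars=50):
    """Return True when take profit, stop loss or time out triggers."""
    change = current_price / entry_price - 1
    if change >= take_profit:   # take profit
        return True
    if change <= -stop_loss:    # stop loss
        return True
    if bars_held >= max_bars:   # time out
        return True
    return False

# Example: bought at 100, price is now 103.5 after 10 bars -> the take-profit exit triggers
print(should_exit(100, 103.5, 10))
# -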
import pandas as pd
import numpy as np
import datetime as dt
import yfinance as yf # for data
from pandas_datareader import data as pdr
yf.pdr_override()
stock = input('Enter stock ticker symbol:')
stock
# +
start_year = 2018
start_month = 1
start_day = 1
start = dt.datetime(start_year,start_month,start_day)
now = dt.datetime.now()
df = pdr.get_data_yahoo(stock,period = "35d",
# fetch data by interval (including intraday if period < 60 days)
# valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
# (optional, default is '1d')
interval = "5m",)
emasUsed = [26,50]
for ema in emasUsed:
df['Ema_' + str(ema)] = round(df['Adj Close'].ewm(span = ema, adjust = False).mean(),2)
df['Middle Band'] =df['Adj Close'].rolling(window=20).mean()
df['Upper Band'] = df['Middle Band'] + 1.96*df['Close'].rolling(window=20).std()
df['Lower Band'] = df['Middle Band'] - 1.96*df['Close'].rolling(window=20).std()
df['status_lower'] = np.where(df['Close'] < df['Lower Band'],'below_ballinger','normal')
df['status_upper'] = np.where(df['Close'] > df['Upper Band'],'above_ballinger','normal')
df = df.iloc[20:,:]
# +
#### 1. STRATEGY CROSSING
# +
pos = 0
num = 0
percentchange = []
for i in df.index:
cmin = df['Ema_26'][i]
cmax = df['Ema_50'][i]
status_lower = df['status_lower'][i]
close = df['Adj Close'][i]
#if(status_lower=='below_ballinger'):
# if (cmin>cmax) and (status_lower=='below_ballinger'):
if (cmin>cmax):
# print('red white blue')
if pos ==0:
bp =close
pos=1
#print('Buying now at'+ str(bp))
#print(i)
elif(cmin<cmax):
#print('blue white red')
if pos ==1:
pos = 0
sp = close
#print('Selling now at'+ str(sp))
#print(i)
pc = (sp/bp-1)*100
percentchange.append(pc)
if num ==df['Adj Close'].count()-1 and pos==1:
pos = 0
sp = close
print('Selling now at'+ str(sp))
pc = (sp/bp-1)*100
percentchange.append(pc)
num +=1
# +
#### 2. BALLINGER
# +
pos = 0
num = 0
percentchange = []
for i in df.index:
cmin = df['Ema_26'][i]
cmax = df['Ema_50'][i]
status_upper = df['status_upper'][i]
status_lower = df['status_lower'][i]
close = df['Adj Close'][i]
if(status_lower=='below_ballinger'):
# print('red white blue')
if pos ==0:
bp =close
pos=1
#print('Buying now at'+ str(bp))
#print(i)
    elif(status_upper=='above_ballinger'):  # the status_upper column holds 'above_ballinger'
#print('blue white red')
if pos ==1:
pos = 0
sp = close
#print('Selling now at'+ str(sp))
#print(i)
pc = (sp/bp-1)*100
percentchange.append(pc)
if num ==df['Adj Close'].count()-1 and pos==1:
pos = 0
sp = close
#print('Selling now at'+ str(sp))
pc = (sp/bp-1)*100
percentchange.append(pc)
num +=1
#print(percentchange)
# +
#### 3. STRATEGY BUY CROSSING, SELL BALLINGER
# +
pos = 0
num = 0
percentchange = []
for i in df.index:
cmin = df['Ema_26'][i]
cmax = df['Ema_50'][i]
status_upper = df['status_upper'][i]
close = df['Adj Close'][i]
#if(status_lower=='below_ballinger'):
# if (cmin>cmax) and (status_lower=='below_ballinger'):
if (cmin>cmax):
# print('red white blue')
if pos ==0:
bp =close
pos=1
#print('Buying now at'+ str(bp))
#print(i)
    elif(status_upper=='above_ballinger'):  # the status_upper column holds 'above_ballinger'
#print('blue white red')
if pos ==1:
pos = 0
sp = close
#print('Selling now at'+ str(sp))
#print(i)
pc = (sp/bp-1)*100
percentchange.append(pc)
if num ==df['Adj Close'].count()-1 and pos==1:
pos = 0
sp = close
#print('Selling now at'+ str(sp))
pc = (sp/bp-1)*100
percentchange.append(pc)
num +=1
#print(percentchange)
# +
#### 4. STRATEGY BUY BALLINGER, SELL CROSSING
# +
pos = 0
num = 0
percentchange = []
for i in df.index:
cmin = df['Ema_26'][i]
cmax = df['Ema_50'][i]
status_lower = df['status_lower'][i]
close = df['Adj Close'][i]
if(status_lower=='below_ballinger'):
# if (cmin>cmax) and (status_lower=='below_ballinger'):
# print('red white blue')
if pos ==0:
bp =close
pos=1
#print('Buying now at'+ str(bp))
#print(i)
elif(cmin<cmax):
#print('blue white red')
if pos ==1:
pos = 0
sp = close
#print('Selling now at'+ str(sp))
#print(i)
pc = (sp/bp-1)*100
percentchange.append(pc)
if num ==df['Adj Close'].count()-1 and pos==1:
pos = 0
sp = close
print('Selling now at'+ str(sp))
pc = (sp/bp-1)*100
percentchange.append(pc)
num +=1
#print(percentchange)
# +
## print results
gains = 0
ng = 0
losses = 0
nl = 0
totallR = 1
for i in percentchange:
if i >0:
gains +=i
ng +=1
else:
losses +=i
nl +=1
totallR = totallR*((i/100)+1)
totallR = round((totallR-1)*100)
if ng > 0:
avgGain = gains/ng
maxR = str(max(percentchange))
else:
avgGain = 0
maxR = 'undefined'
if nl>0:
avgLoss = losses/nl
maxL = str(min(percentchange))
ratio = str(-(avgGain/avgLoss))
else:
avgLoss = 0
maxL = 'undefined'
ratio = 'inf'
if ng >0 and nl >0:
    bettingAvg = ng/(ng+nl)
else:
bettingAvg = 0
print()
print('Result for '+stock+" going back to "+str(df.index[0])+". Sample size: "+str(ng+nl)+" trades")
print('EMAs used : ',str(emasUsed))
print('Batting Avg : '+str(bettingAvg))
print('Gain/loss ratio: ' + ratio)
print('Avg gain: ' + str(avgGain))
print('Avg loss: '+ str(avgLoss))
print('Max return: '+ str(maxR))
print('Max loss: ' + str(maxL))
print('Total return over '+str(ng+nl)+' trades: '+str(totallR)+'%')
print()
# -
x = {}  # results dict populated in the next cell
# +
x['UR'] = {}
x['UR']['return'] =8
x['UR']['gain'] = 1
# -
x
import pandas as pd
pd.DataFrame(x)
x['a']=2
x
from collections import Counter
df = pd.DataFrame(x).T  # dicts have no .T; build the frame first, then transpose
df['UBER'].value_counts()
df = df.transpose()
df['return'].value_counts().idxmax()
df
| testing_strategy_macd_60_5/strtegy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # T-DAB Challenge: Marine Electronics Company
# ## Part II - A ) Modeling Trees - Learning from my mistakes...
# ### Your second task is to build a model that will alert sailors of tacking event happening in the future.
# Your supervisor told you that on top of whatever you come up with, what you should definitely do is "tack prediction".
#
# ```“A tack is a specific maneuver in sailing and alerting the sailor of the necessity to tack in the near future would bring some advantage to them compared to other sailors, who would have to keep an eye out on the conditions all the time to decide when to tack”``` he writes in his email. The supervisor, who has some experience in sailing, labels the tacks in the data from the client (added as `Tacking` column in the data).
#
# <b>[Wikipedia](https://en.wikipedia.org/wiki/Tacking_(sailing)#:~:text=Tacking%20is%20a%20sailing%20maneuver,progress%20in%20the%20desired%20direction.)<b>
# ```Tacking is a sailing maneuver by which a sailing vessel, whose desired course is into the wind, turns its bow toward the wind so that the direction from which the wind blows changes from one side to the other, allowing progress in the desired direction.```
# Importing relevant libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# import datetime as dt
# Set seaborn style
sns.set(style="darkgrid")
sns.set(font_scale=1.5)
# Read cleaned data
df = pd.read_csv('./data/clean_data.csv',header = 0)
df['DateTime'] = pd.to_datetime(df['DateTime'])
df.set_index('DateTime', inplace = True)
print(df.info())
df.head(5).transpose()
# +
def get_summary(input_data):
# Get a whole bunch of stats
output_data = input_data.describe().transpose()
# Count NANs
output_data['number_nan'] = input_data.shape[0] - output_data['count']
# Count unique values
output_data['number_distinct'] = input_data.apply(lambda x: len(pd.unique(x)), axis=0)
# Print DateTime information
try:
print(input_data['DateTime'].describe(datetime_is_numeric=True))
except:
pass
return output_data
get_summary(df)
# -
# ## Tree Based Modeling (Decision Tree / Random Forest / XGBoost with Tree Stumps)
#
# - I can start my analysis with Tree Based Models because they are more flexible in terms of data pre-processing requirements.
# - The scale of the features will not negatively impact the models as they would in Distance Based or Linear Classifiers
# - They are good to inform the feature selection process
# +
# Import all relevant scikit-learn modules
# Model Selection
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
# Metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
# ROC-AUC
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
# Tree Models + Ensembles
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
import xgboost as xgb
# Set SEED
SEED = 42
# -
# ### Feature Reference
# Importing features' descriptions
var_dict = pd.read_json('./data/data_dict.json')
var_dict.transpose().sort_values('units')
# Yaw description
var_dict['Route'].description
# +
# Grouping our (27) features by similar categories / units
# Speed Group 1 [knots]
wind_sp_cols = ['TWS', 'AWS', 'WSoG']
# Speed Group 2 [knots]
ship_sp_cols = ['SoG', 'SoS', 'AvgSoS', 'VMG']
# Direction Group 1 [degrees]
direction_cols = ['TWA', 'AWA', 'TWD']
# Direction Group 2 [degrees]
heading_cols = ['HeadingMag', 'HoG', 'HeadingTrue','Yaw']
# Current Group
current_cols = ['CurrentDir', 'CurrentSpeed']
# Axes Group 1 [degrees]
axes_cols = ['Roll', 'Pitch']
# Axes Group 2 [degrees] - Rudder (Timon)
angle_cols = ['RudderAng', 'Leeway']
# Pilote Group [volts]
voltgage_cols = ['VoltageDrawn']
# GeoLoc Group [degrees]
geo_cols = ['Longitude', 'Latitude']
# Temperature Group [Celcius]
temperature_col = ['AirTemp']
# DateTime index
datetime_col = 'DateTime'
# Mode Pilote
mp_col = 'ModePilote'
# Target Variable
target = 'Tacking'
# -
# ### Feature Selection
#
# Feature selection is an iterative process. To start my analysis I did the following:
#
# - I discarded variables that were highly correlated (see EDA pair plots and Correlation Matrix). Additionally, I tried to keep (when possible) those variables that appeared Normally distributed (see EDA histograms) in part I.
# - I paid attention to Normally distributed features because after trying Tree Based models I wanted to try models such as Logistic Regression and LinearSVM. Those models will require me to standardize the variables of interest.
# - I also dropped unique identifiers such as: `Date`, `Latitude` and `Longitude`. These last two were very granular variables, increasing monotonically, and seemed to give little information on the target variable.
# - I also discarded `ModePilote` variable because I did not have information on how that variable was generated and I wanted to avoid potential Data Leakage.
#
# Variable selection:
#
# - I chose Wind Speed Over Ground (`WSoG`) over the highly correlated True Wind Speed (`TWS`) and Apparent Wind Speed (`AWS`)
# - I chose Speed Over Ground (`SoG`) over the highly correlated Speed Over Surface (`SoS`) (also linked to `VMG`)
# - I am also interested in keeping the Velocity Made Good (`VMG`) signal
# - Eric Note: Depending on wind speed, there will be an optimum wind angle to sail in order to have the best velocity to the point we are trying to get to. VMG may also be better on one tack or the other depending on shifts in wind direction. It is a key indicator for making decisions like sail choice, tacking/gybing, and wind angle to sail.
# - I could try adding and removing Average Speed Over Surface (`AvgSoS`). Notice its distribution is far from Normal.
# - I chose True Wind Angle (`TWA`) over the highly correlated Apparent Wind Angle (`AWA`)
# - I also kept the "well behaved" True Wind Direction (`TWD`)
# - I also kept Magnetic Heading (`HeadingMag`).
# - I also included `Yaw` = True Heading - Heading Over Ground, a derived feature that combines Heading Over Ground (`HoG`) and True Heading (`HeadingTrue`).
# - I kept `Pitch` over the correlated `Roll`
# - I kept both `RudderAngle` and `Leeway`
# - `VoltageDrawn`, `AirTemp`, `CurrentDir` and `CurrentSpeed` also seem to be independent variables that fluctuate on a daily basis, I could try adding and removing them from my models and then decide if they help or not in the `Tacking` prediction tast.
#
# Summary of variables to add/remove in Feature Selection: `SoG`, `Pitch`, `RudderAngle`, `Leeway`, `VoltageDrawn`, `Temperature`
# +
# Read SEEN data
df = pd.read_csv('./data/seen_data.csv',header = 0)
df['DateTime'] = pd.to_datetime(df['DateTime'])
# Create a list of column names to drop
to_drop = ['TWS', 'AWS'] + \
['SoS', 'AvgSoS'] + \
['AWA'] + \
['HoG', 'HeadingTrue' ] + \
[] + \
['Roll'] + \
[] + \
['VoltageDrawn'] + \
['Longitude', 'Latitude'] + \
[] + \
['DateTime'] + \
['ModePilote']
keep = ['WSoG'] + \
['SoG','VMG'] + \
['TWA', 'TWD'] + \
['HeadingMag', 'Yaw'] + \
['CurrentDir', 'CurrentSpeed'] + \
['Pitch'] + \
['RudderAng', 'Leeway'] + \
[] + \
[] + \
['AirTemp'] + \
[] + \
[]
# Assert Number of Variables
assert len(to_drop) + len(keep) == 26
# Drop those columns from the dataset
df = df.drop(to_drop, axis=1)
#############################################################################
# selection = ['DateTime', 'CurrentSpeed', 'CurrentDir', 'TWA', 'AWS', 'AWA',
# 'Roll', 'Pitch', 'HeadingMag', 'HoG', 'HeadingTrue', 'AirTemp',
# 'Longitude', 'Latitude', 'SoS', 'AvgSoS', 'VMG', 'RudderAng',
# 'Leeway', 'TWD', 'WSoG', 'VoltageDrawn', 'ModePilote']
# df = df.drop(selection, axis=1)
# Print remaning columns
df.columns
# -
# #### Note: SoS might also be informative. Check!
# ### Model Selection
#
# - Keep in mind that we are dealing with an imbalanced data set
# - Sample data with `stratify` following our target variable
# - Maybe try Under-sampling & Over-sampling techniques
# +
# Create a data with all columns except target
X_trees = df.drop("Tacking", axis=1)
# Create a labels column
y_trees = df[["Tacking"]]
# Use stratified sampling to split up the dataset according to the target labels
X_train, X_val, y_train, y_val = train_test_split(X_trees, y_trees, test_size=0.3,stratify=y_trees)
# Print out the target count proportions on the training y test sets
print("Train props:\n",round(y_train["Tacking"].value_counts() / len(y_train["Tacking"]),4))
print("Test props:\n",round(y_val["Tacking"].value_counts() / len(y_val["Tacking"]),4))
# -
X_train.head()
# ### Random Undersampling (Downsampling)
def data_under_sample(df, SEED = 42):
# Class count
count_class_0, count_class_1 = df['Tacking'].value_counts()
# Divide by class
df_class_0 = df[df['Tacking'] == 0]
df_class_1 = df[df['Tacking'] == 1]
# Random under-sampling
df_class_0_under = df_class_0.sample(count_class_1)
df_under = pd.concat([df_class_0_under, df_class_1], axis=0)
print('Random under-sampling:')
print(df_under['Tacking'].value_counts())
# Checking new distribution
df_under['Tacking'].value_counts().plot(kind='bar', title='Count (target)')
plt.show()
return df_under
# +
# Obtain under-sample Dataset
df_under = data_under_sample(df)
# Create an undersampled data with all columns except target
X_under = df.drop("Tacking", axis=1)
# Create a labels column
y_under = df[["Tacking"]]
# Examine Under-sample data
df_under.head()
# -
# ### Random Over-sampling (Upsampling)
def data_over_sample(df, SEED = 42):
# Class count
count_class_0, count_class_1 = df['Tacking'].value_counts()
# Divide by class
df_class_0 = df[df['Tacking'] == 0]
df_class_1 = df[df['Tacking'] == 1]
# Random under-sampling
df_class_1_over = df_class_1.sample(count_class_0, replace=True)
df_over = pd.concat([df_class_0, df_class_1_over], axis=0)
print('Random over-sampling:')
print(df_over['Tacking'].value_counts())
# Checking new distribution
df_over['Tacking'].value_counts().plot(kind='bar', title='Count (target)')
plt.show()
return df_over
# +
# Obtain under-sample Dataset
df_over = data_over_sample(df)
# Create an oversampled data with all columns except target
X_over = df.drop("Tacking", axis=1)
# Create a labels column
y_over = df[["Tacking"]]
# Examine Under-sample data
df_over.head()
# -
# ### Decision Tree Classifier
# +
# Instantiate a DecisionTreeClassifier
# Better recall model
# dt = DecisionTreeClassifier(max_depth = 7 , min_samples_leaf = 100 ,class_weight = 'balanced', random_state = SEED)
# Better precision model
dt = DecisionTreeClassifier(max_depth = 7 , min_samples_leaf = 100, random_state = SEED)
# Fit dt to the training set
dt.fit(X_train, y_train)
# Predict test set labels
y_pred = dt.predict(X_val)
# Get parameters from classifier
dt.get_params()
# +
# Compute confusion matrix
conf_mat = confusion_matrix(y_val, y_pred)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_val, y_pred)
print("Report: \n",report)
# +
# Create a Series to visualize features importances
importances = pd.Series(data=dt.feature_importances_,
index= X_train.columns)
# Sort importances
importances_sorted = importances.sort_values()
# Draw a horizontal barplot of importances_sorted
importances_sorted.plot(kind='barh', color='lightgreen')
plt.title('Features Importances')
plt.show()
# + active=""
# # Compute roc-auc score
# rocauc = roc_auc_score(y_val, y_pred)
# print("ROC-AUC score: \n",rocauc)
#
# # Generate the probabilities
# y_pred_prob = dt.predict_proba(X_val)[:, 1]
#
# # Calculate the roc metrics
# fpr, tpr, thresholds = roc_curve(y_val, y_pred_prob)
#
# # Plot the ROC curve
# plt.plot(fpr, tpr)
#
# # Add labels and diagonal line
# plt.xlabel("False Positive Rate")
# plt.ylabel("True Positive Rate")
# plt.plot([0, 1], [0, 1], "k--")
# plt.show()
# +
# Decision Tree Classifier Grid search
# Parameter Grid for Decision Tree
params_dt = {
'max_depth': [5, 7, 9],
'min_samples_leaf': [2, 100, 500],
'min_samples_split': [2, 200, 1000],
# 'max_features' : [None,'log2','sqrt'],
'class_weight' : [None, 'balanced']}
# Setting Grid Search
grid_dt = GridSearchCV(estimator=dt,
param_grid=params_dt,
scoring='precision',
cv=5,
verbose = 1,
n_jobs=-1)
# Fit RF to the UNDER-SAMPLED set
grid_dt.fit(X_under, np.ravel(y_under))
# Fit RF to the OVER-SAMPLED set
# grid_dt.fit(X_over, np.ravel(y_over))
# Extract the best estimator
dt_best_model = grid_dt.best_estimator_
# Print models best params
dt_best_model.get_params()
# -
# #### DT Stratified KFold CV
# +
def build_kf_dt_gsCV_model(X , y):
# Decision Tree Classifier with Stratified K Fold
sKF = StratifiedKFold(n_splits=5)
index_iterator = sKF.split(X, np.ravel(y))
params_dt = {
'max_depth': [3,5,7],
'min_samples_leaf': [2, 10, 100],
'min_samples_split': [2, 500, 1000],
# 'max_features' : [None,'log2','sqrt'],
# 'class_weight': ['balanced'],
'criterion': ['entropy','gini']}
# Instantiate GridSearchCV with index_iterator
skf_grid_dt = GridSearchCV(estimator = DecisionTreeClassifier(), param_grid=params_dt, scoring='roc_auc', cv = index_iterator,
verbose=1, n_jobs = -1)
# Fit DT to the training-validation set
skf_grid_dt.fit(X,np.ravel(y))
# Extract the best estimator
dt_kf_best_model = skf_grid_dt.best_estimator_
# Print models best params
print(dt_kf_best_model.get_params())
return dt_kf_best_model, skf_grid_dt
# Run sKF for Decision Trees
# dt_kf_best_model = build_kf_dt_gsCV_model(X_trees , y_trees)[0]
# dt_kf_best_model = build_kf_dt_gsCV_model(X_train , y_train)[0]
# -
# ### Random Forest Classifier
# +
# Instantiate a RandomForesClassifier
rf = RandomForestClassifier(random_state = SEED)
# Get parameters from classifier
rf.get_params()
# -
# #### RF Grid Search CV
# +
def build_rf_gsCV_model(X = X_under, y = y_under, downsample = True):
# Parameter Grid for Random Forest
params_rf = {'n_estimators': [10,25,50],
'max_depth': [3,5,7],
'min_samples_leaf': [2, 10, 100],
'min_samples_split': [2, 100, 700],
# 'class_weight': ['balanced'],
'max_features' : [None,'log2','sqrt'],
}
# Setting Grid Search
grid_rf = GridSearchCV(estimator=rf,
param_grid=params_rf,
scoring='precision',
cv=5,
verbose = 1,
n_jobs=-1)
if downsample:
# Fit RF to the UNDER-SAMPLED set
grid_rf.fit(X_under, np.ravel(y_under))
else:
# Fit RF to the OVER-SAMPLED set
grid_rf.fit(X_over, np.ravel(y_over))
# Extract the best estimator
rf_best_model = grid_rf.best_estimator_
# Print models best params
print(rf_best_model.get_params())
return rf_best_model, grid_rf
# Run GSCV for RandomForest
rf_best_model = build_rf_gsCV_model(X = X_under, y = y_under, downsample = True)[0]
# -
# #### RF Stratified KFold CV
# +
def build_kf_rf_gsCV_model(X , y):
# Random Forest Classifier with Stratified K Fold
sKF = StratifiedKFold(n_splits=5)
index_iterator = sKF.split(X, np.ravel(y))
# Setting Grid Search
params_rf = {'n_estimators': [10,50,100],
'max_depth': [3,5,7],
'min_samples_leaf': [2, 100, 1000],
'min_samples_split': [2, 300, 1000],
'max_features' : [None,'log2','sqrt'],
'class_weight': ['balanced']}
# Instantiate GridSearchCV with index_iterator
skf_grid_rf = GridSearchCV(estimator = rf, param_grid=params_rf, scoring='recall', cv = index_iterator,
verbose=1, n_jobs = -1)
# Fit RF to the training-validation set
skf_grid_rf.fit(X,np.ravel(y))
# Extract the best estimator
rf_kf_best_model = skf_grid_rf.best_estimator_
# Print models best params
print(rf_kf_best_model.get_params())
return rf_kf_best_model, skf_grid_rf
# Run sKF for RandomForest
# rf_kf_best_model = build_kf_rf_gsCV_model(X = X_trees, y = y_trees)[0]
# -
# ### Model Metrics and Performance
# +
# Read unseen data
df_unseen = pd.read_csv('./data/unseen_data.csv',header = 0)
df_unseen['DateTime'] = pd.to_datetime(df_unseen['DateTime'])
df_unseen.set_index('DateTime',inplace=True)
print(df_unseen.info())
df_unseen.reset_index(inplace=True)
# Drop those columns from the dataset
df_unseen = df_unseen.drop(to_drop, axis=1)
##############################################
#df_unseen = df_unseen.drop(selection, axis=1)
# Print remaning columns
print(df_unseen.columns)
# Read unseen data
df_unseen.head(5).transpose()
# +
# Create data with all columns except target
X_test = df_unseen.drop("Tacking", axis=1)
# Create a labels column
y_test = df_unseen[["Tacking"]]
# -
# #### Decision Tree Model Performance
# Check current DT model
dt.get_params()
# +
# Predict on unseen dataset
y_pred_dt = dt.predict(X_test)
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, y_pred_dt)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, y_pred_dt)
print("Report: \n",report)
# +
# Tuning Decision Threshold for X_test
sub_dt = pd.DataFrame()
sub_dt['probas'] = dt.predict_proba(X_test)[:,1]
# Get Predictions
threshold = 0.5
sub_dt.loc[sub_dt['probas'] < threshold , 'predict'] = 0
sub_dt.loc[sub_dt['probas'] >= threshold , 'predict'] = 1
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, sub_dt['predict'])
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, sub_dt['predict'])
print("Report: \n",report)
# +
# Tuning Decision Threshold for X_val
sub_dt = pd.DataFrame()
sub_dt['probas'] = dt.predict_proba(X_val)[:,1]
# Get Predictions
threshold = 0.5
sub_dt.loc[sub_dt['probas'] < threshold , 'predict'] = 0
sub_dt.loc[sub_dt['probas'] >= threshold , 'predict'] = 1
# Compute confusion matrix
conf_mat = confusion_matrix(y_val, sub_dt['predict'])
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_val, sub_dt['predict'])
print("Report: \n",report)
# -
# Check current SKFolds dt model
dt_best_model.get_params()
# +
# Predict on unseen dataset
y_pred_dt = dt_best_model.predict(X_test)
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, y_pred_dt)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, y_pred_dt)
print("Report: \n",report)
# +
# Tuning Decision Threshold for X_test
sub_dt = pd.DataFrame()
sub_dt['probas'] = dt_best_model.predict_proba(X_test)[:,1]
# Get Predictions
threshold = 0.5
sub_dt.loc[sub_dt['probas'] < threshold , 'predict'] = 0
sub_dt.loc[sub_dt['probas'] >= threshold , 'predict'] = 1
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, sub_dt['predict'])
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, sub_dt['predict'])
print("Report: \n",report)
# +
try:
# Check current SKFolds dt model
print(dt_kf_best_model.get_params())
# Predict on unseen dataset
y_pred_dt = dt_kf_best_model.predict(X_val)
# Compute confusion matrix
conf_mat = confusion_matrix(y_val, y_pred_dt)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_val, y_pred_dt)
print("Report: \n",report)
except:
pass
# +
try:
# Tuning Decision Threshold for X_val
sub_dt = pd.DataFrame()
sub_dt['probas'] = dt_kf_best_model.predict_proba(X_val)[:,1]
# Get Predictions
threshold = 0.5
sub_dt.loc[sub_dt['probas'] < threshold , 'predict'] = 0
sub_dt.loc[sub_dt['probas'] >= threshold, 'predict'] = 1
# Compute confusion matrix
conf_mat = confusion_matrix(y_val, sub_dt['predict'])
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_val, sub_dt['predict'])
print("Report: \n",report)
except:
pass
# +
try:
# Check current SKFolds dt model
print(dt_kf_best_model.get_params())
# Predict on unseen dataset
y_pred_dt = dt_kf_best_model.predict(X_test)
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, y_pred_dt)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, y_pred_dt)
print("Report: \n",report)
except:
pass
# -
# #### Random Forest Model Performance
# +
try:
# Check current Grid Search RF model
print(rf_best_model.get_params())
# Predict on unseen dataset
y_pred_rf = rf_best_model.predict(X_test)
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, y_pred_rf)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, y_pred_rf)
print("Report: \n",report)
except:
pass
# -
# ### XGBoost Logic
# +
from scipy import stats
from xgboost import XGBClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import KFold
def build_kf_xgb_rs_model(X = X_under, y = y_under, imbalance = True):
if imbalance:
spw = [1,5,10,20]
else:
        spw = [1]  # param_distributions entries must be lists or distributions
clf_xgb = XGBClassifier(objective = 'binary:logistic')
param_dist = {'n_estimators': stats.randint(150, 500),
'learning_rate': stats.uniform(0.01, 0.07),
'subsample': stats.uniform(0.3, 0.7),
'max_depth': [3, 5, 7, 9],
'colsample_bytree': stats.uniform(0.5, 0.45),
'scale_pos_weight': spw,
'min_child_weight': [1, 2, 3]
}
clf = RandomizedSearchCV(clf_xgb, param_distributions = param_dist, n_iter = 5, scoring = 'precision', error_score = 0, verbose = 3, n_jobs = -1)
numFolds = 5
folds = KFold(n_splits = numFolds, shuffle = True)
estimators = []
results = np.zeros(len(X))
score = 0.0
for train_index, test_index in folds.split(X):
X_train, X_test = X.iloc[train_index,:], X.iloc[test_index,:]
y_train, y_test = y.iloc[train_index].values.ravel(), y.iloc[test_index].values.ravel()
clf.fit(X_train, y_train)
estimators.append(clf.best_estimator_)
results[test_index] = clf.predict(X_test)
score += precision_score(y_test, results[test_index])
score /= numFolds
return estimators, results, score
estimators, results, score = build_kf_xgb_rs_model(X = X_trees, y = y_trees)
# -
try:
# Best XGBoost
best_xgb = estimators[3]
print('Mean score:',score)
best_xgb.get_params()
except:
pass
# +
try:
# Predict on unseen dataset
y_pred_xgb = best_xgb.predict(X_val)
# Compute confusion matrix
conf_mat = confusion_matrix(y_val, y_pred_xgb)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_val, y_pred_xgb)
print("Report: \n",report)
except:
pass
# +
try:
# Tuning Decision Threshold for X_test
sub_dt = pd.DataFrame()
sub_dt['probas'] = best_xgb.predict_proba(X_val)[:,1]
# Get Predictions
threshold = 0.80
sub_dt.loc[sub_dt['probas'] < threshold , 'predict'] = 0
sub_dt.loc[sub_dt['probas'] >= threshold , 'predict'] = 1
# Compute confusion matrix
conf_mat = confusion_matrix(y_val, sub_dt['predict'])
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_val, sub_dt['predict'])
print("Report: \n",report)
except:
pass
# +
try:
# Predict on unseen dataset
y_pred_xgb = best_xgb.predict(X_test)
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, y_pred_xgb)
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, y_pred_xgb)
print("Report: \n",report)
except:
pass
# +
try:
# Tuning Decision Threshold for X_test
sub_dt = pd.DataFrame()
sub_dt['probas'] = best_xgb.predict_proba(X_test)[:,1]
# Get Predictions
threshold = 0.5
sub_dt.loc[sub_dt['probas'] < threshold , 'predict'] = 0
sub_dt.loc[sub_dt['probas'] >= threshold , 'predict'] = 1
# Compute confusion matrix
conf_mat = confusion_matrix(y_test, sub_dt['predict'])
print("Confusion Matrix: \n",conf_mat)
# Compute classification report
report = classification_report(y_test, sub_dt['predict'])
print("Report: \n",report)
except:
pass
# -
# ## Conclusion
# - #### My models are most likely overfitting my data and not able to generalize. After doing further research on the problem, I found that classical approaches involve hand-crafting features from the time series data based on fixed-size windows and training machine learning models, such as ensembles of decision trees (see the sketch below). My next step is going to be diving into some Feature Engineering and then checking how my models perform after that.
# - #### Another possible approach I found involves using Long Short Term Memory (LSTM) Recurrent Neural Networks, so I am also going to try that route.
# - #### The aforementioned approach could include predictions based on several time windows of data.
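# A minimal sketch of the fixed-size-window feature engineering mentioned above; the window length and the
# column names in the usage comment are illustrative assumptions, not a final design.

# +
def add_window_features(frame, cols, window=30):
    """Append rolling mean/std and lagged-difference features computed over a fixed-size window."""
    out = frame.copy()
    for col in cols:
        out[f'{col}_roll_mean_{window}'] = out[col].rolling(window).mean()
        out[f'{col}_roll_std_{window}'] = out[col].rolling(window).std()
        out[f'{col}_diff_{window}'] = out[col].diff(window)
    # Drop the first rows, whose windows are incomplete
    return out.dropna()

# Example usage (hypothetical): df_feat = add_window_features(df, ['WSoG', 'SoG', 'TWA'], window=30)
# -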
| tdab-modeling-trees.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/2series/DataScienceHackathons/blob/main/building_detection/open_buildings_spatial_analysis_examples.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="8UYNvAExBmKR"
# ##### Copyright 2021 Google LLC. Licensed under the Apache License, Version 2.0 (the "License");
#
# # Open Buildings - spatial analysis examples
#
# This notebook demonstrates some analysis methods with [Open Buildings](https://sites.research.google/open-buildings/) data:
#
# * Generating heatmaps of building density and size.
# * A simple analysis of accessibility to health facilities.
# + id="0eXL156ae-iT"
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
# + id="4y9Xfp9Z1S-g"
# + [markdown] id="LbGgdE4mj1hd"
# ### Download buildings data for a region in Africa [takes up to 15 minutes for large countries]
# + cellView="form" id="qP6ADuzRdZTF"
#@markdown Select a region from either the [Natural Earth low res](https://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-countries/) (fastest), [Natural Earth high res](https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/) or [World Bank high res](https://datacatalog.worldbank.org/dataset/world-bank-official-boundaries) shapefiles:
region_border_source = 'Natural Earth (Low Res 110m)' #@param ["Natural Earth (Low Res 110m)", "Natural Earth (High Res 10m)", "World Bank (High Res 10m)"]
region = 'GHA (Ghana)' #@param ["", "AGO (Angola)", "BDI (Burundi)", "BEN (Benin)", "BFA (Burkina Faso)", "BWA (Botswana)", "CAF (Central African Republic)", "CIV (Côte d'Ivoire)", "COD (Democratic Republic of the Congo)", "COG (Republic of the Congo)", "DJI (Djibouti)", "DZA (Algeria)", "EGY (Egypt)", "ERI (Eritrea)", "ETH (Ethiopia)", "GAB (Gabon)", "GHA (Ghana)", "GIN (Guinea)", "GMB (The Gambia)", "GNB (Guinea-Bissau)", "GNQ (Equatorial Guinea)", "KEN (Kenya)", "LBR (Liberia)", "LSO (Lesotho)", "MDG (Madagascar)", "MOZ (Mozambique)", "MRT (Mauritania)", "MWI (Malawi)", "NAM (Namibia)", "NER (Niger)", "NGA (Nigeria)", "RWA (Rwanda)", "SDN (Sudan)", "SEN (Senegal)", "SLE (Sierra Leone)", "SOM (Somalia)", "SWZ (eSwatini)", "TGO (Togo)", "TUN (Tunisia)", "TZA (Tanzania)", "UGA (Uganda)", "ZAF (South Africa)", "ZMB (Zambia)", "ZWE (Zimbabwe)"]
# @markdown Alternatively, specify an area of interest in [WKT format](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) (assumes crs='EPSG:4326'); this [tool](https://arthur-e.github.io/Wicket/sandbox-gmaps3.html) might be useful.
your_own_wkt_polygon = '' #@param {type:"string"}
# !pip install s2geometry pygeos geopandas
import functools
import glob
import gzip
import multiprocessing
import os
import shutil
import tempfile
from typing import List, Optional, Tuple
import gdal
import geopandas as gpd
from google.colab import files
from IPython import display
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import s2geometry as s2
import shapely
import tensorflow as tf
import tqdm.notebook
BUILDING_DOWNLOAD_PATH = ('gs://open-buildings-data/v1/'
'polygons_s2_level_6_gzip_no_header')
def get_filename_and_region_dataframe(
region_border_source: str, region: str,
your_own_wkt_polygon: str) -> Tuple[str, gpd.geodataframe.GeoDataFrame]:
"""Returns output filename and a geopandas dataframe with one region row."""
if your_own_wkt_polygon:
filename = 'open_buildings_v1_polygons_your_own_wkt_polygon.csv.gz'
region_df = gpd.GeoDataFrame(
geometry=gpd.GeoSeries.from_wkt([your_own_wkt_polygon]),
crs='EPSG:4326')
if not isinstance(region_df.iloc[0].geometry,
shapely.geometry.polygon.Polygon) and not isinstance(
region_df.iloc[0].geometry,
shapely.geometry.multipolygon.MultiPolygon):
raise ValueError("`your_own_wkt_polygon` must be a POLYGON or "
"MULTIPOLYGON.")
print(f'Preparing your_own_wkt_polygon.')
return filename, region_df
if not region:
raise ValueError('Please select a region or set your_own_wkt_polygon.')
if region_border_source == 'Natural Earth (Low Res 110m)':
url = ('https://www.naturalearthdata.com/http//www.naturalearthdata.com/'
'download/110m/cultural/ne_110m_admin_0_countries.zip')
# !wget -N {url}
display.clear_output()
region_shapefile_path = os.path.basename(url)
source_name = 'ne_110m'
elif region_border_source == 'Natural Earth (High Res 10m)':
url = ('https://www.naturalearthdata.com/http//www.naturalearthdata.com/'
'download/10m/cultural/ne_10m_admin_0_countries.zip')
# !wget -N {url}
display.clear_output()
region_shapefile_path = os.path.basename(url)
source_name = 'ne_10m'
elif region_border_source == 'World Bank (High Res 10m)':
url = ('https://development-data-hub-s3-public.s3.amazonaws.com/ddhfiles/'
'779551/wb_countries_admin0_10m.zip')
# !wget -N {url}
# !unzip -o {os.path.basename(url)}
display.clear_output()
region_shapefile_path = 'WB_countries_Admin0_10m'
source_name = 'wb_10m'
region_iso_a3 = region.split(' ')[0]
filename = f'open_buildings_v1_polygons_{source_name}_{region_iso_a3}.csv.gz'
region_df = gpd.read_file(region_shapefile_path).query(
f'ISO_A3 == "{region_iso_a3}"').dissolve(by='ISO_A3')[['geometry']]
print(f'Preparing {region} from {region_border_source}.')
return filename, region_df
def get_bounding_box_s2_covering_tokens(
region_geometry: shapely.geometry.base.BaseGeometry) -> List[str]:
region_bounds = region_geometry.bounds
s2_lat_lng_rect = s2.S2LatLngRect_FromPointPair(
s2.S2LatLng_FromDegrees(region_bounds[1], region_bounds[0]),
s2.S2LatLng_FromDegrees(region_bounds[3], region_bounds[2]))
coverer = s2.S2RegionCoverer()
# NOTE: Should be kept in-sync with s2 level in BUILDING_DOWNLOAD_PATH.
coverer.set_fixed_level(6)
coverer.set_max_cells(1000000)
return [cell.ToToken() for cell in coverer.GetCovering(s2_lat_lng_rect)]
def s2_token_to_shapely_polygon(
s2_token: str) -> shapely.geometry.polygon.Polygon:
s2_cell = s2.S2Cell(s2.S2CellId_FromToken(s2_token, len(s2_token)))
coords = []
for i in range(4):
s2_lat_lng = s2.S2LatLng(s2_cell.GetVertex(i))
coords.append((s2_lat_lng.lng().degrees(), s2_lat_lng.lat().degrees()))
return shapely.geometry.Polygon(coords)
def download_s2_token(
s2_token: str, region_df: gpd.geodataframe.GeoDataFrame) -> Optional[str]:
"""Downloads the matching CSV file with polygons for the `s2_token`.
NOTE: Only polygons inside the region are kept.
NOTE: Passing output via a temporary file to reduce memory usage.
Args:
s2_token: S2 token for which to download the CSV file with building
polygons. The S2 token should be at the same level as the files in
BUILDING_DOWNLOAD_PATH.
region_df: A geopandas dataframe with only one row that contains the region
for which to keep polygons.
Returns:
Either filepath which contains a gzipped CSV without header for the
`s2_token` subfiltered to only contain building polygons inside the region
or None which means that there were no polygons inside the region for this
`s2_token`.
"""
s2_cell_geometry = s2_token_to_shapely_polygon(s2_token)
region_geometry = region_df.iloc[0].geometry
prepared_region_geometry = shapely.prepared.prep(region_geometry)
# If the s2 cell doesn't intersect the country geometry at all then we can
# know that all rows would be dropped so instead we can just return early.
if not prepared_region_geometry.intersects(s2_cell_geometry):
return None
try:
# Using tf.io.gfile.GFile gives better performance than passing the GCS path
# directly to pd.read_csv.
with tf.io.gfile.GFile(
os.path.join(BUILDING_DOWNLOAD_PATH, f'{s2_token}_buildings.csv.gz'),
'rb') as gf:
# If the s2 cell is fully covered by country geometry then can skip
# filtering as we need all rows.
if prepared_region_geometry.covers(s2_cell_geometry):
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tmp_f:
shutil.copyfileobj(gf, tmp_f)
return tmp_f.name
# Else take the slow path.
# NOTE: We read in chunks to save memory.
csv_chunks = pd.read_csv(
gf, chunksize=2000000, dtype=object, compression='gzip', header=None)
tmp_f = tempfile.NamedTemporaryFile(mode='w+b', delete=False)
tmp_f.close()
for csv_chunk in csv_chunks:
points = gpd.GeoDataFrame(
geometry=gpd.points_from_xy(csv_chunk[1], csv_chunk[0]),
crs='EPSG:4326')
# sjoin 'within' was faster than using shapely's 'within' directly.
points = gpd.sjoin(points, region_df, predicate='within')
csv_chunk = csv_chunk.iloc[points.index]
csv_chunk.to_csv(
tmp_f.name,
mode='ab',
index=False,
header=False,
compression={
'method': 'gzip',
'compresslevel': 1
})
return tmp_f.name
except tf.errors.NotFoundError:
return None
# Clear output after pip install.
display.clear_output()
filename, region_df = get_filename_and_region_dataframe(region_border_source,
region,
your_own_wkt_polygon)
# Remove any old outputs to not run out of disk.
for f in glob.glob('/tmp/open_buildings_*'):
os.remove(f)
# Write header to the compressed CSV file.
with gzip.open(f'/tmp/{filename}', 'wt') as merged:
merged.write(','.join([
'latitude', 'longitude', 'area_in_meters', 'confidence', 'geometry',
'full_plus_code'
]) + '\n')
download_s2_token_fn = functools.partial(download_s2_token, region_df=region_df)
s2_tokens = get_bounding_box_s2_covering_tokens(region_df.iloc[0].geometry)
# Downloads CSV files for relevant S2 tokens and after filtering appends them
# to the compressed output CSV file. Relies on the fact that concatenating
# gzipped files produces a valid gzip file.
# NOTE: Uses a pool to speed up output preparation.
with open(f'/tmp/{filename}', 'ab') as merged:
with multiprocessing.Pool(4) as e:
for fname in tqdm.notebook.tqdm(
e.imap_unordered(download_s2_token_fn, s2_tokens),
total=len(s2_tokens)):
if fname:
with open(fname, 'rb') as tmp_f:
shutil.copyfileobj(tmp_f, merged)
os.unlink(fname)
# + [markdown] id="UY9Ba1Cxpcb4"
# # Visualise the data
#
# First we convert the CSV file into a GeoDataFrame. The CSV files can be quite large because they include the polygon outline of every building. For this example we only need longitude and latitude, so we only process those columns to save memory.
# + id="hSpb1JKVjuYj"
buildings = pd.read_csv(
f"/tmp/{filename}", engine="c",
usecols=['latitude', 'longitude', 'area_in_meters', 'confidence'])
print(f"Read {len(buildings):,} records.")
# + [markdown] id="7yi3IAq2fk-6"
# For some countries there can be tens of millions of buildings, so we also take a random sample for doing plots.
# + cellView="form" id="J4YmtqUyf4sU"
sample_size = 200000 #@param
# + id="dRGBn87afzRu"
buildings_sample = (buildings.sample(sample_size)
if len(buildings) > sample_size else buildings)
# + id="Ge4EbCVTj5Vk"
plt.plot(buildings_sample.longitude, buildings_sample.latitude, 'k.',
alpha=0.25, markersize=0.5)
plt.gcf().set_size_inches(10, 10)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.axis('equal');
# + [markdown] id="0Vpk1lJteYyx"
# # Prepare the data for mapping building statistics
#
# Set up a grid, which we will use to calculate statistics about buildings.
#
# We also want to select the examples most likely to be buildings, using a threshold on the confidence score.
# + cellView="form" id="I8yao05wYTba"
max_grid_dimension = 1000 #@param
confidence_threshold = 0.75 #@param
# + id="tb6AM8KQCZX-"
buildings = buildings.query(f"confidence > {confidence_threshold}")
# + id="24_FjZ8yrNjj"
# Create a grid covering the dataset bounds
min_lon = buildings.longitude.min()
max_lon = buildings.longitude.max()
min_lat = buildings.latitude.min()
max_lat = buildings.latitude.max()
grid_density_degrees = (max(max_lon - min_lon, max_lat - min_lat)
/ max_grid_dimension)
bounds = [min_lon, min_lat, max_lon, max_lat]
xcoords = np.arange(min_lon, max_lon, grid_density_degrees)
ycoords = np.arange(max_lat, min_lat, -grid_density_degrees)
xv, yv = np.meshgrid(xcoords, ycoords)
xy = np.stack([xv.ravel(), yv.ravel()]).transpose()
print(f"Calculated grid of size {xv.shape[0]} x {xv.shape[1]}.")
# + [markdown] id="ophg9IrprQ0Y"
# To calculate statistics, we need a function to convert between (longitude, latitude) coordinates in the world and (x, y) coordinates in the grid.
# + id="qE_5xKBpbMcs"
geotransform = (min_lon, grid_density_degrees, 0,
max_lat, 0, -grid_density_degrees)
def lonlat_to_xy(lon, lat, geotransform):
x = int((lon - geotransform[0])/geotransform[1])
y = int((lat - geotransform[3])/geotransform[5])
return x,y
# + [markdown] id="R0-QQNjZneNH"
# Now we can count how many buildings there are on each cell of the grid.
# + id="czf5TZicWeGL"
counts = np.zeros(xv.shape)
area_totals = np.zeros(xv.shape)
for lat, lon, area in tqdm.notebook.tqdm(
zip(buildings.latitude, buildings.longitude, buildings.area_in_meters)):
x, y = lonlat_to_xy(lon, lat, geotransform)
if x >= 0 and y >= 0 and x < len(xcoords) and y < len(ycoords):
counts[y, x] += 1
area_totals[y, x] += area
area_totals[counts == 0] = np.nan
counts[counts == 0] = np.nan
mean_area = area_totals / counts
# + [markdown] id="_gZeJwyhm7lM"
# # Plot the counts of buildings
#
# Knowing the counts of buildings is useful for example in planning service delivery, estimating population or designing census enumeration areas.
# + id="VrMupotmWeBV"
plt.imshow(np.log10(np.nan_to_num(counts) + 1.), cmap="viridis")
plt.gcf().set_size_inches(15, 15)
cbar = plt.colorbar(shrink=0.5)
cbar.ax.set_yticklabels([f'{x:.0f}' for x in 10 ** cbar.ax.get_yticks()])
plt.title("Building counts per grid cell");
# + [markdown] id="cFMRgXr93k0G"
# ## [optional] Export a GeoTIFF file
#
# This can be useful to carry out further analysis with software such as [QGIS](https://qgis.org/).
# + id="jmTlPdc73vzB"
def save_geotiff(filename, values, geotransform):
driver = gdal.GetDriverByName("GTiff")
dataset = driver.Create(filename, values.shape[1], values.shape[0], 1,
gdal.GDT_Float32)
dataset.SetGeoTransform(geotransform)
band = dataset.GetRasterBand(1)
band.WriteArray(values)
band.SetNoDataValue(-1)
dataset.FlushCache()
filename = "building_counts.tiff"
save_geotiff(filename, counts, geotransform)
files.download(filename)
# + [markdown] id="O-0pi0uxm-_c"
# # Generate a map of building sizes
#
# Knowing average building sizes is useful too -- it is linked, for example, to how much economic activity there is in each area.
# + id="BiZGeEqPnBPo"
# Only calculate the mean building size for grid locations with at
# least a few buildings, so that we get more reliable averages.
mean_area_filtered = mean_area.copy()
mean_area_filtered[counts < 10] = 0
# Set a maximum value for the colour scale, to make the plot brighter.
plt.imshow(np.nan_to_num(mean_area_filtered), vmax=250, cmap="viridis")
plt.title("Mean building size (m$^2$)")
plt.colorbar(shrink=0.5, extend="max")
plt.gcf().set_size_inches(15, 15)
# + [markdown] id="G-jTJGounB3S"
# # Health facility accessibility
#
# We can combine different types of geospatial data to get various insights. If we have information on the locations of clinics and hospitals across Ghana, for example, then one interesting analysis is how accessible health services are in different places.
#
# In this example, we'll look at the average distance to the nearest health facility.
#
# We use this [data](https://data.humdata.org/m/dataset/ghana-healthsites) made available by [Global Healthsites Mapping Project](https://healthsites.io/).
#
# + id="PeOLKjCaNSl2"
health_sites = pd.read_csv("https://data.humdata.org/dataset/364c5aca-7cd7-4248-b394-335113293c7a/"
"resource/b7e55f34-9e3b-417f-b329-841cff6a9554/download/ghana.csv")
health_sites = gpd.GeoDataFrame(
health_sites, geometry=gpd.points_from_xy(health_sites.X, health_sites.Y))
health_sites.head()
# + [markdown] id="d0r4lJCwhtCv"
# We drop all columns not relevant to the computation of mean distance from health facilities. We also exclude all rows with empty or NaN values, keep only amenities recorded as health facilities (hospitals, clinics, health posts and doctors), and choose values within the range of our area of interest.
# + id="VlU_elaUJ1lT"
health_sites = health_sites[['X', 'Y', 'amenity', 'name', 'geometry']]
health_sites.dropna(axis=0, inplace=True)
health_sites = health_sites[health_sites['amenity'].isin(['hospital','clinic','health_post', 'doctors'])]
health_sites = health_sites.query(
f'Y > {min_lat} and Y < {max_lat}'
f'and X > {min_lon} and X < {max_lon}')
health_sites.head()
# + [markdown] id="AgEFFmmbiA5D"
# Have a look at the locations of health facilities compared to the locations of buildings.
#
# *Note: this data may not be complete.*
# + id="fdn1pXT1XAia"
plt.plot(buildings_sample.longitude,
buildings_sample.latitude,
'k.', alpha=0.25, markersize=0.5)
plt.plot(health_sites.X, health_sites.Y,
marker='$\\oplus$', color= 'red', alpha = 0.8,
markersize=10, linestyle='None')
plt.gcf().set_size_inches(10, 10)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.legend(['Building', 'Health center'])
plt.axis('equal');
# + [markdown] id="7leA8WZBhZX6"
# Next we calculate, for each building, the distance to the nearest health facility. We use the sample of the buildings data that we took earlier, so that the computations don't take too long.
# + id="CJRNAPpMgnqJ"
buildings_sample = gpd.GeoDataFrame(buildings_sample,
geometry=gpd.points_from_xy(buildings_sample.longitude,
buildings_sample.latitude))
buildings_sample["distance_to_nearest_health_facility"] = buildings_sample.geometry.apply(
lambda g: health_sites.distance(g).min())
# + id="MF6-elrV0OTl"
buildings_sample.head()
# + [markdown] id="QQDNRlYWzdQt"
# That has computed the distance in degrees (longitude and latitude), which is not very intuitive. We can convert this approximately to kilometers by multiplying with the distance spanned by one degree at the equator.
# + id="EAM8m7A4zM2T"
buildings_sample["distance_to_nearest_health_facility"] *= 111.32
# + [markdown] id="D6D9K9LriYrt"
# Now we can find the mean distance to the nearest health facility by administrative area. First, we load data on the shapes of administrative areas.
#
# We use this [data](https://data.humdata.org/m/dataset/ghana-administrative-boundaries) made available by [OCHA ROWCA](https://www.unocha.org/rowca) - United Nations Office for the Coordination of Humanitarian Affairs for West and Central Africa.
# + id="OF2BN9Cqgcn3"
# !wget https://data.humdata.org/dataset/dc4c17cf-59d9-478c-b2b7-acd889241194/resource/4443ddba-eeaf-4367-9457-7820ea482f7f/download/gha_admbnda_gss_20210308_shp.zip
# !unzip gha_admbnda_gss_20210308_shp.zip
display.clear_output()
admin_areas = gpd.read_file("gha_admbnda_gss_20210308_SHP/gha_admbnda_adm2_gss_20210308.shp")
# + [markdown] id="IwTTyxd5nK5z"
# Next, find the average distance to the nearest health facility within each area.
# + id="8VA0NeSznKHF"
# Both data frames have the same coordinate system.
buildings_sample.crs = admin_areas.crs
# Spatial join to find out which administrative area every building is in.
points_polys = gpd.sjoin(buildings_sample, admin_areas, how="left")
# Aggregate by admin area to get the average distance to nearest health facility.
stats = points_polys.groupby("index_right")["distance_to_nearest_health_facility"].agg(["mean"])
admin_areas_with_distances = gpd.GeoDataFrame(stats.join(admin_areas))
# + id="3XR7hfFJpn_Q"
admin_areas_with_distances.plot(
column="mean", legend=True, legend_kwds={"shrink": 0.5})
plt.title("Average distance to the nearest health facility (km)")
plt.gcf().set_size_inches(15, 15)
| building_detection/open_buildings_spatial_analysis_examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Display HTML
from IPython.display import Image
from IPython.core.display import HTML
# Validation
from sklearn.model_selection import train_test_split
#
from sklearn.feature_selection import mutual_info_classif
# +
from sklearn.datasets import make_classification, load_breast_cancer
X, y = load_breast_cancer(return_X_y = True, as_frame=True)
X.head()
y
# +
# from sklearn.datasets import load_boston
# # load data
# boston = load_boston()
# X = pd.DataFrame(boston.data, columns=boston.feature_names)
# X.drop('CHAS', axis=1, inplace=True)
# y = pd.Series(boston.target, name='MEDV')
# # inspect data
# X.head()
# -
# Split into train & test
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.2,
stratify=y,
random_state=11)
Image(
url="https://machinelearningmastery.com/wp-content/uploads/2019/11/Overview-of-Feature-Selection-Techniques3.png"
)
# +
# Image(
# url="https://machinelearningmastery.com/wp-content/uploads/2020/06/Overview-of-Data-Variable-Types2.png"
# )
# -
Image(
url="https://machinelearningmastery.com/wp-content/uploads/2019/11/How-to-Choose-Feature-Selection-Methods-For-Machine-Learning.png"
)
Image(
url="https://miro.medium.com/max/1250/1*b645U4bvSqa2L3m88hkEVQ.png"
)
Image(
url="https://miro.medium.com/max/1290/0*TD6Tf326AV9N9dCY.png"
)
# # Statistical Tests for Feature Information (Filter-Based Feature Selection)
# example of chi squared feature selection for categorical data
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif, chi2, mutual_info_classif
from matplotlib import pyplot
# +
# placeholder for all functions
# -
# ## Gather Feature Information
# ## Select K-best
# ### Regression Feature Selection: (Numerical Input, Numerical Output)
# Pearson's correlation coefficient (linear).
# Spearman's rank coefficient (nonlinear).
# Pearson's Correlation Coefficient: f_regression()
# Mutual Information: mutual_info_regression()
# (see the sketch below)
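# A minimal sketch (not from the original notebook) of the two regression scores listed
# above, run on a synthetic dataset from make_regression so it is self-contained.
# +
from sklearn.datasets import make_regression
from sklearn.feature_selection import f_regression, mutual_info_regression

X_reg, y_reg = make_regression(n_samples=200, n_features=5, n_informative=2, random_state=0)
f_scores, f_pvalues = f_regression(X_reg, y_reg)                   # linear (Pearson-based) F-test
mi_scores = mutual_info_regression(X_reg, y_reg, random_state=0)   # also captures nonlinear links
print("F-test scores:", f_scores.round(2))
print("Mutual information:", mi_scores.round(3))
# -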
# ### Classification Feature Selection: (Numerical Input, Categorical Output)
# #### ANOVA correlation coefficient (linear): numerical feature to categorical target
# example of anova f-test feature selection for numerical data
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from matplotlib import pyplot
# +
# load the dataset
def load_dataset(filename):
# load the dataset as a pandas DataFrame
data = read_csv(filename, header=None)
# retrieve numpy array
dataset = data.values
# split into input (X) and output (y) variables
X = dataset[:, :-1]
y = dataset[:,-1]
return X, y
# feature selection
def select_anova_features(X_train, y_train, X_test):
# configure to select all features
fs = SelectKBest(score_func=f_classif, k='all')
# learn relationship from training data
fs.fit(X_train, y_train)
# transform train input data
X_train_fs = fs.transform(X_train)
# transform test input data
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
# -
filename = 'data/pima-indians-diabetes.csv'
# +
# load the dataset
X, y = load_dataset(filename)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# +
# feature selection
X_train_fs, X_test_fs, fs = select_anova_features(X_train, y_train, X_test)
# what are scores for the features
for i in range(len(fs.scores_)):
print('Feature %d: %f' % (i, fs.scores_[i]))
# plot the scores
pyplot.barh([i for i in range(len(fs.scores_))], fs.scores_)
pyplot.gca().invert_yaxis()
# +
# feature selection
X_train_fs, X_test_fs, fs = select_anova_features(X_train, y_train, X_test)
# what are the p-values for the features
for i in range(len(fs.scores_)):
print('Feature %d: %f' % (i, fs.pvalues_[i]))
# plot the scores
pyplot.barh([i for i in range(len(fs.pvalues_))], fs.pvalues_)
pyplot.gca().invert_yaxis()
# -
# +
# load the dataset
def load_anova_dataset(filename, target, cols_to_use_for_anova=None):
# load the dataset as a pandas DataFrame
data = read_csv(filename)
# split into input (X) and output (y) variables
X_df = data.drop(target, axis=1)
y = data[target]
# retrieve numpy array
X = X_df.values
y = y.values
return X, y
# prepare target
def prepare_targets(y_train, y_test):
le = LabelEncoder()
le.fit(y_train)
y_train_enc = le.transform(y_train)
y_test_enc = le.transform(y_test)
return y_train_enc, y_test_enc
# feature selection
def select_anova_features(X_train, y_train, X_test):
# configure to select all features
fs = SelectKBest(score_func=f_classif, k='all')
# learn relationship from training data
fs.fit(X_train, y_train)
# transform train input data
X_train_fs = fs.transform(X_train)
# transform test input data
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
# -
filename = "data/breast-cancer/breast-cancer_numeric.csv"
target = "diagnosis"
# cols_to_use_for_anova = ['']
# +
X_y_df = pd.read_csv(filename)
X_y_df.head()
X_df = X_y_df.drop(target, axis=1)
y = X_y_df[target]
X_df.head()
# -
X_df.columns
# +
f_statistic, p_values = f_classif(X_df, y)
# plot the scores
pyplot.barh(X_df.columns, f_statistic)
pyplot.gca().invert_yaxis()
# +
dfscores = pd.DataFrame(f_statistic)
dfcolumns = pd.DataFrame(X_df.columns)
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['feature','f_statistic']
featureScores
# +
f_statistic, p_values = f_classif(X_df, y)
list(zip(X_df.columns, p_values))
# plot the scores
pyplot.barh(X_df.columns, p_values)
pyplot.gca().invert_yaxis()
# -
# +
X, y = load_anova_dataset(filename, target=target)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# prepare target data
y_train_enc, y_test_enc = prepare_targets(y_train, y_test)
# +
# feature selection
X_train_fs, X_test_fs, fs = select_anova_features(X_train, y_train_enc, X_test)
print("ANOVA scores")
print("(xxxx):", "\n")
for col, fs_score in list(zip(X_df.columns, fs.scores_)):
print('%s: %f' % (col, fs_score))
# plot the scores
pyplot.barh(X_df.columns, fs.scores_)
pyplot.gca().invert_yaxis()
# +
# feature selection
X_train_fs, X_test_fs, fs = select_anova_features(X_train, y_train_enc, X_test)
print("p-values")
print("(below 0.05 means can reject null hypthesis of no relationship with target; therefore can keep feature for model):", "\n")
for col, fs_pvalue in list(zip(X_df.columns, fs.pvalues_)):
print('%s: %f' % (col, fs_pvalue))
# plot the scores
pyplot.barh(X_df.columns, fs.pvalues_)
pyplot.gca().invert_yaxis()
# -
# ### Kendall's rank coefficient (nonlinear): numerical feature to categorical target
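# A minimal sketch (not part of the original notebook): Kendall's tau between each
# numerical feature and the binary target, assuming X_df and y from the cells above
# are still in scope.
# +
from scipy.stats import kendalltau

kendall_tau = {col: kendalltau(X_df[col], y).correlation for col in X_df.columns}
pd.Series(kendall_tau).sort_values(ascending=False).head(10)
# -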
# ### Mutual Information (MI): numerical feature to categorical target
# +
from sklearn.datasets import make_classification, load_breast_cancer
X, y = load_breast_cancer(return_X_y = True, as_frame=True)
X.head()
# -
# Split into train & test
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.2,
stratify=y,
random_state=11)
# +
# Mutual Information: mutual_info_classif()
###
data = X_train.copy()
data['RANDOM_FEATURE'] = np.random.randint(1, 5, size=len(data))  # one random value per row, as a noise baseline
target = y_train
###
mi_score = mutual_info_classif(
data,
target,
n_neighbors=10,
random_state=22
)
sorted_idx = np.argsort(mi_score)
mi_scoredf = pd.DataFrame(
mi_score[sorted_idx[::-1]],
index=data.columns[sorted_idx[::-1]],
columns=['mi_score'])
plt.figure(figsize=(10, 10))
plt.barh(
data.columns[sorted_idx],
mi_score[sorted_idx]
)
plt.xlabel("Mutual Information Score");
# -
# ## Classification Feature Selection: (Categorical Input, Categorical Output)
# ### Chi-Squared test (contingency tables): categorical feature to categorical target
# +
# load the dataset
def load_chi2_dataset(filename, target, cols_to_use_for_chi2=None):
# load the dataset as a pandas DataFrame
data = read_csv(filename)
# split into input (X) and output (y) variables
X_df = data.drop(target, axis=1)
y = data[target]
# retrieve numpy array
dataset = data.values
# retrieve numpy array
X = X_df.values
y = y.values
# format all fields as string
X = X.astype(str)
return X, y
# prepare input data
def prepare_inputs(X_train, X_test):
oe = OrdinalEncoder()
oe.fit(X_train)
X_train_enc = oe.transform(X_train)
X_test_enc = oe.transform(X_test)
return X_train_enc, X_test_enc
# prepare target
def prepare_targets(y_train, y_test):
le = LabelEncoder()
le.fit(y_train)
y_train_enc = le.transform(y_train)
y_test_enc = le.transform(y_test)
return y_train_enc, y_test_enc
# feature selection
def select_chi2_features(X_train, y_train, X_test):
fs = SelectKBest(score_func=chi2, k='all')
fs.fit(X_train, y_train)
X_train_fs = fs.transform(X_train)
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
# -
filename = "data/breast-cancer/breast-cancer_categorical_new.csv"
target = "Class"
# +
X_y_df = pd.read_csv(filename)
X_df = X_y_df.drop(target, axis=1)
y = X_y_df[target]
X_y_df.head()
# X_df.head()
# X.columns
# +
#Getting all the categorical columns except the target
categorical_columns = (
X_y_df.select_dtypes(exclude = 'number')
.drop(target, axis = 1)
.columns
)
categorical_columns
# +
# load the dataset
X, y = load_chi2_dataset(filename, target)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# prepare input data
X_train_enc, X_test_enc = prepare_inputs(X_train, X_test)
# prepare target data
y_train_enc, y_test_enc = prepare_targets(y_train, y_test)
# +
# feature selection
X_train_fs, X_test_fs, fs = select_chi2_features(X_train_enc, y_train_enc, X_test_enc)
print("chi2 scores")
print("(higher means more of a relationship with target; therefore can keep feature for model):", "\n")
for col, fs_score in list(zip(X_df.columns, fs.scores_)):
print('%s: %f' % (col, fs_score))
# plot the scores
pyplot.barh(X_df.columns, fs.scores_)
pyplot.gca().invert_yaxis()
# +
# feature selection
X_train_fs, X_test_fs, fs = select_chi2_features(X_train_enc, y_train_enc, X_test_enc)
print("p-values")
print("(below 0.05 means can reject null hypthesis of no relationship with target; therefore can keep feature for model):", "\n")
for col, fs_pvalue in list(zip(X_df.columns, fs.pvalues_)):
print('%s: %f' % (col, fs_pvalue))
# plot the scores
pyplot.barh(X_df.columns, fs.pvalues_)
pyplot.gca().invert_yaxis()
# -
# Import the function
from scipy.stats import chi2_contingency
X_y_df.head()
categorical_columns
# +
chi2_check = []
for i in categorical_columns:
if chi2_contingency(pd.crosstab(X_y_df[target], X_y_df[i]))[1] < 0.05:
chi2_check.append('Reject Null Hypothesis')
else:
chi2_check.append('Fail to Reject Null Hypothesis')
res = pd.DataFrame(data = [categorical_columns, chi2_check]).T
res.columns = ['Column', 'Hypothesis']
print(res)
# +
check = {}
for i in res[res['Hypothesis'] == 'Reject Null Hypothesis']['Column']:
dummies = pd.get_dummies(X_y_df[i])
bon_p_value = 0.05/X_y_df[i].nunique()
for series in dummies:
if chi2_contingency(pd.crosstab(X_y_df[target], dummies[series]))[1] < bon_p_value:
check['{}-{}'.format(i, series)] = 'Reject Null Hypothesis'
else:
check['{}-{}'.format(i, series)] = 'Fail to Reject Null Hypothesis'
res_chi_ph = pd.DataFrame(data = [check.keys(), check.values()]).T
res_chi_ph.columns = ['Pair', 'Hypothesis']
res_chi_ph
# -
chi2_keep_cols_mask = res['Hypothesis'] == 'Reject Null Hypothesis'
chi2_keep_cols = list(res[chi2_keep_cols_mask]['Column'])
chi2_keep_cols
# +
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
import pandas as pd
import numpy as np
# Load iris data
iris = load_iris()
iris.feature_names
# Create features and target
X = iris.data
y = iris.target
# # Convert to categorical data by converting data to integers
X = X.astype(int)
# Select the three features with the highest chi-squared statistics
chi2_selector = SelectKBest(chi2, k=3)
chi2_selector.fit(X, y)
# Look at scores returned from the selector for each feature
chi2_scores = pd.DataFrame(list(zip(iris.feature_names, chi2_selector.scores_, chi2_selector.pvalues_)), columns=['ftr', 'score', 'pval'])
chi2_scores.head()
# # you can see that the k best features returned from SelectKBest
# # were the three features with the _highest_ scores
kbest = np.asarray(iris.feature_names)[chi2_selector.get_support()]
kbest
# -
# ### Mutual Information: categorical feature to categorical target
# +
# Results changed on each run: mutual_info_classif is stochastic, so fix random_state (and use discrete_features=True for categorical inputs) to get reproducible scores.
# +
# mutual_info_keep_cols = []
# -
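# A minimal sketch (not from the original notebook): mutual information for categorical
# inputs, reusing X_train_enc / y_train_enc from the chi-squared cells above. Passing
# discrete_features=True and a fixed random_state makes the scores reproducible.
# +
mi_cat = mutual_info_classif(X_train_enc, y_train_enc, discrete_features=True, random_state=42)
pd.Series(mi_cat, index=X_df.columns).sort_values(ascending=False)
# -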
# +
##########
##########
| notebooks/machine_learning_algorithms/6a-Feature-Selection-Statistical-Tests-for-Feature-Information.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python-3.7.1
# language: python
# name: python-3.7.1
# ---
# +
### If you want to use my data (detected_colours.npy) skip to the next cell ###
# Run this cell only if you want to process your own images.
# Each directory with images should contain _detected_objects.txt that contains the terminal output of darknet
# Set debug = True in the settings section below to preview images
# Uncomment
# #with open('detected_colours.npy','wb') as destination: np.save(destination, rgb_raw)
# at the bottom of the cell if you want to save the results
##########################
######## SETTINGS ########
# %matplotlib inline
# high dpi images in notebook
# %config InlineBackend.figure_format = 'retina'
# file with detected objects
detected_objs_file = 'detected_objects.txt'
# debug flag: set to True to show objects and print size and colour info
debug = False
# how many images to process while debugging
debug_max_counter = 50
##########################
##########################
import os
import re
import json
import random
import numpy as np
import cv2
from matplotlib import pyplot as plt
from sklearn.cluster import KMeans, MeanShift
from itertools import groupby
path = ''
counter = 0
prev_file = ''
detected_colours = []
with open(os.path.expanduser(detected_objs_file),'r') as source:
for line in source.readlines():
m = re.search(r"{.+}", line)
if m:
car_obj = json.loads(m[0])
if 'path' in car_obj:
path = car_obj['path']
elif car_obj['object']=='car':
car_obj['path'] = path
if debug:
print (counter, ': ', car_obj['path'])
else:
print ('\r', counter, ': ', car_obj['path'], end='')
if debug and (counter == debug_max_counter): break # for debugging
if prev_file != car_obj['path']: #
img = cv2.cvtColor(cv2.imread(car_obj['path']), cv2.COLOR_BGR2RGB) # read and convert BGR to RGB
prev_file = car_obj['path'] # keep track of previous file opened
car_img = img[int(car_obj['y0']):int(car_obj['y1']), int(car_obj['x0']):int(car_obj['x1'])]
height, width, _ = car_img.shape
if debug:
print ('Dimensions: {}x{}; h/w = {:.2f}; w/h = {:.2f}'.format(width, height,
height/width, width/height))
# skip oddly shaped objects
if height/width > 1 or height/width <0.3:
if debug: print("Skipped because of aspect ratio")
continue
# subset a horizontal line of pixels (30% from the top) and flatten:
# (height, width, col_chnls) -> (width, col_chnls)
line_scan = car_img[int(height*0.3):int(height*0.3+1), 0:width].reshape(-1, 3)
# Apply K-means to line scan
kmeans = KMeans(n_clusters = 5).fit(line_scan)
# sort cluster labels, calculate frequency and pick the most common colour
cluster_freq = [len(list(group)) for key, group in groupby(sorted(kmeans.labels_))]
most_freq_label = cluster_freq.index(max(cluster_freq))
car_colour = np.rint(kmeans.cluster_centers_[most_freq_label])
if debug:
print ('Detected colour RGB-8bit: ', car_colour)
#print ('label frequency list', cluster_freq, 'most frequent label ', most_freq_label)
# display image and colour scan line
if debug:
cv2.line(car_img, (0, int(height*0.3)), (width, int(height*0.3)), car_colour, thickness=4)
plt.imshow(car_img, interpolation = 'bicubic')
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.show()
detected_colours.append(car_colour)
counter = counter + 1
rgb_raw = np.array(detected_colours, np.float32) # convert to numpy array
print("\nReady")
# +
# Save the dataset of detected files
##########################
######## SETTINGS ########
# where to save the dataset of detected colours
detected_cols_file = 'detected_colours.npy'
##########################
with open(detected_cols_file,'wb') as destination: np.save(destination, rgb_raw)
print("\nDectected colours saved to", detected_cols_file)
| 2__outdated__-extract-colours.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Workshop 1
# Your very first notebook
# <br>
# Connecting to <b> mongoDB </b>
# ---
# # Installing pymongo
# !pip install pymongo
# ---
# # Connecting to db
# +
import pymongo
HOST = '172.16.58.3'
PORT = 3360
USER = 'emad'
PASSWORD = '<PASSWORD>'
DB_NAME = 'daricheh_gorge_crawler'
client = pymongo.MongoClient(f"mongodb://{USER}:{PASSWORD}@{HOST}:{PORT}")
db = client[DB_NAME]
db
# +
cursor = db['posts'].find({"id": {"$in": [1, 10, 13]}})
posts = list(cursor)
print("posts:")
print(posts)
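# A small follow-up sketch (not part of the workshop): inserting a document and counting
# the collection. The 'posts' collection matches the query above; the field names here
# are illustrative assumptions.
# +
new_post = {"id": 99, "title": "hello from the workshop"}
inserted = db['posts'].insert_one(new_post)
print("inserted id:", inserted.inserted_id)
print("total posts:", db['posts'].count_documents({}))
# -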
| Workshops/03. mongo_db.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="vZe8oEJj2EZR"
#
# + colab={} colab_type="code" id="6Ks9_kkrCIQi"
import numpy as np
import pandas as pd
# +
# load data
dataX=pd.read_csv("trainX.csv")
dataA=pd.read_csv("trainA.csv")
dataY=pd.read_csv("trainY.csv")
# remove index
dataX=dataX.values[:,1:]
dataY=dataY.values[:,1:-1]
dataA=dataA.values[:,1:]
# + colab={} colab_type="code" id="F77bYonkF8AE"
# split cure or not cure
์น๋ฃํํ์๋ฐ์ดํฐ=pd.read_csv("2๋ฒ ์ด์ ์ ๊ฑฐํ ์น๋ฃํ ํ์ ๋ฐ์ดํฐ.csv")
์น๋ฃํ์ง์์ํ์๋ฐ์ดํฐ=pd.read_csv("8๋ฒ ์ด์ ์ ๊ฑฐํ ์น๋ฃํ์ง ์์ ํ์ ๋ฐ์ดํฐ.csv")
# -
์น๋ฃํํ์๋ฐ์ดํฐ
์น๋ฃํ์ง์์ํ์๋ฐ์ดํฐ
data_CY=์น๋ฃํํ์๋ฐ์ดํฐ['time']
dataNCY=์น๋ฃํ์ง์์ํ์๋ฐ์ดํฐ['time']
data_CX = ์น๋ฃํํ์๋ฐ์ดํฐ.loc[:,['X0','X1','X3','X4','X5','X6','X7','X8','X9','X10','X11','X12','X13','X14','X15','X16']]
dataNCX = ์น๋ฃํ์ง์์ํ์๋ฐ์ดํฐ.loc[:,['X0','X1','X2','X3','X4','X5','X6','X7','X9','X10','X11','X12','X13','X14','X15','X16']]
data_CX
dataNCX
# + colab={} colab_type="code" id="8bTfqtUGPvsq"
from keras import models
from keras import layers
import tensorflow as tf
# +
def build_dnn_swish_adam():
# 2. Build the model architecture using BatchNormalization layers.
X = tf.keras.layers.Input(shape=[16])
H = tf.keras.layers.Dense(2048)(X)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 1
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 2
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 3
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 4
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 5
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 6
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 7
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 8
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 9
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 10
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 11
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 12
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 13
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 14
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 15
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 16
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 17
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 18
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 19
H = tf.keras.layers.Dense(2048)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H) # Hidden layer 20
Y = tf.keras.layers.Dense(1)(H)
model = tf.keras.models.Model(X, Y)
model.compile(loss='mse', optimizer='adam', metrics=['mean_absolute_error'])
return model
def build_model_NC_100_stepper():
# 2. Build the model architecture using BatchNormalization layers.
X = tf.keras.layers.Input(shape=[16])
H = tf.keras.layers.Dense(2048)(X)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H)
H = tf.keras.layers.Dense(1024)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H)
H = tf.keras.layers.Dense(512)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H)
H = tf.keras.layers.Dense(256)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H)
H = tf.keras.layers.Dense(128)(H)
H = tf.keras.layers.BatchNormalization()(H)
H = tf.keras.layers.Activation('swish')(H)
Y = tf.keras.layers.Dense(1)(H)
model = tf.keras.models.Model(X, Y)
model.compile(loss='mse', optimizer='adam', metrics=['mean_absolute_error'])
return model
# -
# Cured (treated) patient data
num_epochs = 10000
all_mae_histories_cured = []
k = 4
num_val_samples = len(data_CX) // k
for i in range(k):
print('Processing fold #', i)
# Prepare the validation data: the k-th partition
val_data = data_CX[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = data_CY[i * num_val_samples: (i + 1) * num_val_samples]
# Prepare the training data: all remaining partitions
partial_train_data = np.concatenate(
[data_CX[:i * num_val_samples],
data_CX[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[data_CY[:i * num_val_samples],
data_CY[(i + 1) * num_val_samples:]],
axis=0)
# Build the Keras model (including compilation)
model = build_dnn_swish_adam()
# Train the model (verbose=0, so training progress is not printed)
history = model.fit(partial_train_data, partial_train_targets,
validation_data=(val_data, val_targets),
epochs=num_epochs, batch_size=1, verbose=0)
mae_history_cured = history.history['val_mean_absolute_error']
all_mae_histories_cured.append(mae_history_cured)
# Non-cured (untreated) patient data
num_epochs = 10000
all_mae_histories_non_cured = []
k = 4
num_val_samples = len(dataNCX) // k
for i in range(k):
print('Processing fold #', i)
# Prepare the validation data: the k-th partition
val_data = dataNCX[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = dataNCY[i * num_val_samples: (i + 1) * num_val_samples]
# Prepare the training data: all remaining partitions
partial_train_data = np.concatenate(
[dataNCX[:i * num_val_samples],
dataNCX[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[dataNCY[:i * num_val_samples],
dataNCY[(i + 1) * num_val_samples:]],
axis=0)
# Build the Keras model (including compilation)
model = build_dnn_swish_adam()
# Train the model (verbose=0, so training progress is not printed)
history = model.fit(partial_train_data, partial_train_targets,
validation_data=(val_data, val_targets),
epochs=num_epochs, batch_size=1, verbose=0)
mae_history_non_cured = history.history['val_mean_absolute_error']
all_mae_histories_non_cured.append(mae_history_non_cured)
# Compute the average MAE history over folds for the treated-decision model and the untreated-decision model.
average_mae_history_cured = [
np.mean([x[i] for x in all_mae_histories_cured]) for i in range(num_epochs)]
average_mae_history_non_cured = [
np.mean([x[i] for x in all_mae_histories_non_cured]) for i in range(num_epochs)]
import matplotlib.pyplot as plt
# Plot the validation MAE history of the treated-patient model.
plt.plot(range(1, len(average_mae_history_cured) + 1), average_mae_history_cured)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
# Plot the validation MAE history of the untreated-patient model.
plt.plot(range(1, len(average_mae_history_non_cured) + 1), average_mae_history_non_cured)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
def smooth_curve(points, factor=0.9):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
# +
smooth_mae_history_cured = smooth_curve(average_mae_history_cured[10:])
plt.plot(range(1, len(smooth_mae_history_cured) + 1), smooth_mae_history_cured)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
# +
smooth_mae_history_non_cured = smooth_curve(average_mae_history_non_cured[10:])
plt.plot(range(1, len(smooth_mae_history_non_cured) + 1), smooth_mae_history_non_cured)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
# + colab={} colab_type="code" id="0oeHzl4DXr4P"
# 3. Fit the model on the cured-patient data.
model = build_dnn_swish_adam() # minimum loss value: 0.0492
model.fit(data_CX, data_CY, epochs=30000, batch_size=16, verbose=0)
model.fit(data_CX, data_CY, epochs=1, batch_size=16, verbose=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 386} colab_type="code" id="kzsyJB09b3t6" outputId="801dfd98-ec7d-4f5d-97cb-fefbf54ef5b4"
# 3. Fit a separate model on data from patients who were not treated and survived 100 time units or less.
model_NC = build_dnn_swish_adam() # minimum loss value: 0.0159
model_NC.fit(dataNCX, dataNCY, epochs=30000, batch_size=16, verbose=0)
model_NC.fit(dataNCX, dataNCY, epochs=1, batch_size=16, verbose=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="B6OBWoPGHuWw" outputId="a53a20ec-daaa-497f-d427-9bde00b08329"
์น๋ฃ_ํ์_ํ์ผ๊ฒฝ๋ก = '2๋ฒ ์ด์ ์ ๊ฑฐํ ์น๋ฃํ ํ์ ํ
์คํธ ๋ฐ์ดํฐ.csv'
๋น์น๋ฃ_ํ์_ํ์ผ๊ฒฝ๋ก = '8๋ฒ ์ด์ ์ ๊ฑฐํ ์น๋ฃํ์ง ์์ ํ์ ํ
์คํธ ๋ฐ์ดํฐ.csv'
์น๋ฃ_ํ์_๋ฐ์ดํฐ = pd.read_csv(์น๋ฃ_ํ์_ํ์ผ๊ฒฝ๋ก)
๋น์น๋ฃ_ํ์_๋ฐ์ดํฐ = pd.read_csv(๋น์น๋ฃ_ํ์_ํ์ผ๊ฒฝ๋ก)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iEzT2ja2IWrd" outputId="699e3367-7e90-4bc5-c92a-3c1339be05d0"
# Predict survival time if the patient is treated.
cured = model.predict(์น๋ฃ_ํ์_๋ฐ์ดํฐ)
# Predict survival time if the patient is not treated.
non_cured = model_NC.predict(๋น์น๋ฃ_ํ์_๋ฐ์ดํฐ)
# 1 if predicted survival time when treated > survival time when not treated,
# 0 otherwise.
result = np.where(cured > non_cured, 1, 0)
print(result)
# + colab={} colab_type="code" id="xdWBw8y0NZMB"
import csv
# result is a two-dimensional list; write it to CSV row by row
with open('result_20200902_2.csv','w', newline='') as f:
makewrite = csv.writer(f)
for value in result:
makewrite.writerow(value)
# + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="TQ9WCCfehqVH" outputId="909e27c3-fb09-4021-e84b-69715a0c87c0"
"""
from openpyxl import Workbook
write_wb = Workbook()
#์ด๋ฆ์ด ์๋ ์ํธ๋ฅผ ์์ฑ
#write_ws = write_wb.create_sheet('์์ฑ์ํธ')
#Sheet1์๋ค ์
๋ ฅ
write_ws = write_wb.active
write_ws['A1'] = 'Title'
write_ws['B1'] = 'action'
#ํ ๋จ์๋ก ์ถ๊ฐ
# write_ws.append([1,2,3])
#์
๋จ์๋ก ์ถ๊ฐ
i = 1
for value in result:
write_ws.cell(i,2, value)
i= i+1
write_wb.save('/content/drive/My Drive/traindata/result_20200824_3.csv')
"""
# + colab={} colab_type="code" id="RAlZ120_AXbj"
| DHH_127page_20200901.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Normal Distribution: Density
# The _Normal_ or _Gaussian_ distribution, also called the _Bell Curve_, has probability density function
#
# $f(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi} \sigma} \exp(-\frac{1}{2} \frac{(x-\mu)^2}{\sigma^2})$
# The _Standard_ Normal has $\mu=0, \sigma=1$, which gives the usual form
#
# $\phi(x) = \frac{1}{\sqrt{2\pi}} \exp(-\frac{1}{2} x^2)$
# The variable $\phi$ (phi) is, in some contexts, reserved for use as the standard normal pdf.
import numpy as np
import matplotlib.pyplot as plt
# +
def stdnormpdf(x):
return 1/np.sqrt(2*np.pi) * np.exp(-.5 * x**2)
plt.figure()
plt.plot(np.linspace(-4,4, num=1000), stdnormpdf(np.linspace(-4,4, num=1000)))
plt.show()
# -
# The mode of the standard normal occurs at 0 (with density equal to $\frac{1}{\sqrt {2\pi} }\approx$
stdnormpdf(0)
# The standard normal has inflection points at 1 and -1, and has "thin tails": moving leftwards two units at a time decreases the density by factors
print('0 to -2:', stdnormpdf(0)/stdnormpdf(-2))
print('-2 to -4:', stdnormpdf(-2)/stdnormpdf(-4))
print('-4 to -6:', stdnormpdf(-4)/stdnormpdf(-6))
# so events further than $4\sigma$ away from 0 are incredibly unlikely, which is not true of all distributions.
# # The Normal Distribution: CDF
# There is no known closed-form expression for the CDF of a normal distribution. The CDF of a standard normal is equal to
#
# $\Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}} \exp(-\tfrac{1}{2} z^2)\,dz$
#
# As in the case of the pdf, $\Phi$ (capital Phi) is often reserved for the standard normal cdf.
# Because of its ubiquity in mathematics, most programming languages offer at least a routine to quickly calculate the _Error Function_,
#
# $\textrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp(-z^2)\,dz$
# If this routine is available, the relationship
#
# $\Phi(x) = \frac{1}{2} + \frac{1}{2} \textrm{erf}(\frac{x}{\sqrt{2}})$
#
# can be used to approximate the standard normal CDF even if no specialized routines are available. Usually there are those routines, however, e.g.
from scipy.stats import norm
norm.cdf(1.1)
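# A quick check of the identity above (a sketch): Phi(x) = 1/2 + 1/2*erf(x/sqrt(2))
# computed with the standard-library math.erf should agree with scipy's norm.cdf.
# +
import math

def phi_from_erf(x):
    # Standard normal CDF via the error function.
    return 0.5 + 0.5 * math.erf(x / math.sqrt(2))

phi_from_erf(1.1), norm.cdf(1.1)
# -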
plt.figure()
plt.plot(np.linspace(-4,4,num=500), norm.cdf(np.linspace(-4,4,num=500)))
plt.show()
# The CDF is of course symmetric around 0 and continuous everywhere.
# There are an incredible number of algorithms to calculate the normal CDF or error function, we will not investigate these deeply but see https://en.wikipedia.org/wiki/Normal_distribution#Numerical_approximations_for_the_normal_CDF for more discussion. One simple example taken from the link is the approximation
#
# $\Phi(x) \approx 1 - \phi(x)(b_1 t + b_2 t^2 + b_3 t^3 + b_4 t^4 + b_5 t^5)$
#
# for $\phi$ the standard normal pdf, $t=\frac{1}{1+b_0 x}$ and $b_0 = 0.2316419, b_1 = 0.319381530, b_2 = -0.356563782, b_3 = 1.781477937, b_4 = -1.821255978, b_5 = 1.330274429.$
#
# Again, in practice we would hardly ever be required to directly calculate these approximations.
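# For completeness, a sketch of the polynomial approximation quoted above, valid for
# x >= 0 (use the symmetry Phi(-x) = 1 - Phi(x) for negative arguments).
# +
def approx_phi(x):
    b0 = 0.2316419
    b = [0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429]
    t = 1.0 / (1.0 + b0 * x)
    poly = sum(bi * t ** (i + 1) for i, bi in enumerate(b))
    return 1.0 - stdnormpdf(x) * poly

approx_phi(1.1), norm.cdf(1.1)  # should agree to several decimal places
# -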
# # The Multivariate Normal
# We say the rv _X_ is _Multivariate Normal_ if $X\in\mathbb{R}^n$ and the density of _X_ is
#
# $f(x;\mu,\Sigma) = \left((2\pi)^n \vert\Sigma\vert\right)^{-1/2} \exp(-\frac{1}{2} (x-\mu)^\prime\Sigma^{-1}(x-\mu))$
# source: https://matplotlib.org/gallery/images_contours_and_fields/image_annotated_heatmap.html
# Not scaled correctly
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make data.
X0 = np.arange(-4, 4, 0.15)
Y0 = np.arange(-4, 4, 0.15)
X, Y = np.meshgrid(X0, Y0)
Z = np.exp(-.5*(X**2 + 2 * .6 * X * Y + Y**2))
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0)
ax.set_zlim(0, 1.01)
plt.figure()
plt.imshow(Z)
plt.xticks([])
plt.yticks([])
plt.show()
# The matrix $\Sigma$, a positive semi-definite matrix, is called the _variance/covariance_ matrix of X, and summarizes both the variance of X and the degree to which realizations of the different components of X are correlated with each other. If $\Sigma=I$ and $\mu=0$ we of course have the _standard_ multivariate normal.
# Generating from the multivariate normal with any covariance matrix is easy due to the relationship
#
# $\textrm{Var}(AX) = A \textrm{Var}(X) A^\prime$
#
# for a fixed matrix A and a random vector X.
#
# To generate $X\sim\textrm{N}(\mu,\Sigma)$, we use the _Cholesky decomposition_ of $\Sigma$: any positive definite matrix $A$ can be written uniquely as
# $A = C C^\prime$ with $C$ lower triangular. With access to a univariate normal generator:
#
# 1. Generate $n$ independent draws of a standard normal from the univariate normal generator; call this vector $Z$.
# 2. Calculate the Cholesky decomposition of $\Sigma$, a standard feature of numerical libraries; call this $C$.
# 3. Premultiply the vector $Z$ by $C$ and add the vector $\mu$ to get $X = \mu + CZ$ (see the sketch below).
#
# The variance of $X = \mu + CZ$ is $\textrm{Var}(X) = C\textrm{Var(Z)}C^\prime=CIC^\prime=\Sigma$ as desired.
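# A minimal numpy sketch of the three steps above; the mean vector and covariance
# matrix are illustrative choices, not taken from the text.
# +
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
C = np.linalg.cholesky(Sigma)             # Sigma = C @ C.T
Z = np.random.standard_normal((2, 100000))
X = mu[:, None] + C @ Z                   # each column is one draw from N(mu, Sigma)
np.cov(X)                                 # sample covariance, close to Sigma
# -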
| The Normal Distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## introduction to pandas
# - enviroment setting:
# pd.set_option('precision', 1)
# pd.options.display.float_format = '{:.2f}'.format
#
# - index: [], ., loc, iloc, ix.
# set_index() and & or | not ~
#
# - attribution:
# columns, index, name, values, shape, size, dtype
#
# - method:
# head(), describe()
#
# - plot(kind = "bar")
# hist()
# pie()
# bar()
# kde()
#
# - sort_values(by = columns_name, inplace = True)
# sort_index()
#
# - idxmax() = argmax()
# .ravel()
# unique()
# value_counts()
# duplicated()
# drop_duplicates()
# isnull()
# fillna(what)
# map()
# apply()
#
#
# - str.upper()
# str.lower()
# str.len()
#
# - mutilindex
# pivot_table(values, index, columns)
# stack(level = (0,1))
# unstack(level = 2)
# transpose()
# multi-index idx
#
# - merge()
# concat()
#
# - group
# groupby()
# .get_group()
# .groups.keys()
#
#
#
#
#
#
#
#
# ## Introduction to Matlibplot.pyplot
# fig, ax = plt.subplot(row, columns, figsize = (12,12))
#
# plot(x,y, color, linewidth, label, lines)
# ax.legend
# ax.label
# ax.xlim
# ax.ylim
#
# ``` python
# import numpy as np
# import matplotlib.pyplot as plt
# from mpl_toolkits.mplot3d.axes3d import Axes3D
# f = lambda x, y: x**2 + y**2
# x,y = np.linspace(-1,1,100), np.linspace(-1,1,100)
# x,y = np.meshgrid(x,y)
#
# fig = plt.figure()
# ax = fig.gca( projection='3d')
# ax.plot_surface(x,y,f(x,y), color = "yellow")
# plt.show()
# ```
#
#
#
#
# ## Introduction of scipy
# - scipy.optimize
# brentq
# minimize(function, initial, method)
# bisect(function, initial)
# newton(function, initial)
# fsolve: multi-dimensional root finding; don't use it unless necessary
# fixed_point(function, initial)
# fminbound(function, a, b) minimize function with boundary
#
#
# - scipy.integrate
# quad: numerical integration via adaptive (polynomial) approximation
#
#
# - scipy.stats
# pdf, cdf, ppf, rvs
# .norm(loc, scale) Y = c + dX
# gradient, intercept, r_value, p_value, std_err = linregress(x,y)
#
# - numpy.linalg
#
#
# ## Introduction of optimization
# 1. Bisection
# require boundary point value at opposite sign.
#
# 2. Newton-Raphson
# using derivative information, expand the function around $x_n$ (first-order Taylor expansion)
# $$ f(x_{n+1}) \approx f(x_n) + f'(x_n)\times(x_{n+1} - x_n) $$
# We suppose $f(x_{n+1}) = 0$ so:
# $$x_{n+1} = x_n -\frac{f(x_n)}{f'(x_n)} $$
#
# 3. Nelder-Mead
# sort, reflect, expand, contract (inside), shrink
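# A short sketch of the Newton-Raphson update above; the target function
# f(x) = x**2 - 2 and the starting point are illustrative assumptions.
# +
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step                      # x_{n+1} = x_n - f(x_n) / f'(x_n)
        if abs(step) < tol:
            break
    return x

newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)  # converges to sqrt(2)
# -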
import numpy as np
a = np.linspace(0,3,4)
b = np.linspace(0,4,5)
a,b = np.meshgrid(a,b)
print((b-a).sum())
| Lesson_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import os
sys.path.append(os.path.abspath("/Users/mouginot/work/helpmetric"))
import cymetrichelper as cyh
import pandahelper as pdh
import cymetric as cym
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (40,24)
from cymetric import graphs as cgr
from cymetric import timeseries as tm
# %matplotlib inline
# #%pylab inline
#pylab.rcParams['figure.figsize'] = (40, 24)
SMALL_SIZE = 25
MEDIUM_SIZE = SMALL_SIZE+2
BIGGER_SIZE = MEDIUM_SIZE +2
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
# +
f_ref = "cycle_ref/main.h5"
f_civ = "cycle_civ/main.h5"
f_a = "cycle_a/main.h5"
f_b = "cycle_b/main.h5"
db_ref = cym.dbopen(f_ref)
db_civ = cym.dbopen(f_civ)
db_a = cym.dbopen(f_a)
db_b = cym.dbopen(f_b)
ev_ref = cym.Evaluator(db=db_ref, write=False)
ev_civ = cym.Evaluator(db=db_civ, write=False)
ev_a = cym.Evaluator(db=db_a, write=False)
ev_b = cym.Evaluator(db=db_b, write=False)
# -
ev_ref.eval('AgentEntry')
from cymetric.tools import format_nucs, reduce, merge, add_missing_time_step
import warnings
def SWU(evaler, facilities=()):
"""
Shape the reduced SWU Data Frame, applying facility selection when required.
Parameters
----------
evaler : cymetric Evaluator
facilities : list of facility prototype names to select
"""
# Get inventory table
df = evaler.eval('TimeSeriesEnrichmentSWU')
agents = evaler.eval('AgentEntry')
rdc_table = [] # because we want to get rid of the nuclide asap
if len(facilities) != 0:
agents = agents[agents['Prototype'].isin(facilities)]
rdc_table.append(['AgentId', agents['AgentId'].tolist()])
else:
wng_msg = "no faciity provided"
warnings.warn(wng_msg, UserWarning)
df = reduce(df, rdc_table)
base_col = ['SimId', 'AgentId']
added_col = base_col + ['Prototype']
df = merge(df, base_col, agents, added_col)
df = df[['Time', 'Value']].groupby(['Time']).sum()
df.reset_index(inplace=True)
return df
civ_SWU_ref = cyh.month2year(SWU(ev_ref, facilities=['civ_enrichment']), mode=2, division=12, column='Value')
civ_SWU_civ = cyh.month2year(SWU(ev_civ, facilities=['civ_enrichment']), mode=2, division=12, column='Value')
civ_SWU_a = cyh.month2year(SWU(ev_a, facilities=['civ_enrichment']), mode=2, division=12, column='Value')
civ_SWU_b = cyh.month2year(SWU(ev_b, facilities=['civ_enrichment']), mode=2, division=12, column='Value')
ax = civ_SWU_ref.plot(x='Time', y='Value', label='Ref', figsize=(18, 12))
civ_SWU_civ.plot(x='Time', y='Value', ax=ax, label='Civ')
civ_SWU_a[civ_SWU_a['Value'] < 1e7].plot(x='Time', y='Value', ax=ax, label='A', style='-')
civ_SWU_b.plot(x='Time', y='Value', ax=ax, label='B', style=':')
civ_SWU_civ.plot(x='Time', y='Value', ax=ax, kind='scatter', color='blue', label='Civ', s = 0.9)
civ_SWU_a[civ_SWU_a['Value'] < 1e7].plot(x='Time', y='Value', ax=ax, kind='scatter', color='green', label='A', s = 0.7)
civ_SWU_b.plot(x='Time', y='Value', ax=ax, kind='scatter', color='black', label='B', s=0.3)
#ax.legend(["ref", "civ", "a", "b"]);
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="CnuY2D2ua1mQ"
# <img width=150 src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1a/NumPy_logo.svg/200px-NumPy_logo.svg.png"></img>
#
# # Part.2-1-01 Basic operations on NumPy arrays
# + [markdown] id="M28AhyiWa1mZ"
# # 0. Installing and importing NumPy
#
# NumPy can be installed with `pip install numpy` or `pip3 install numpy`; in an Anaconda environment, run `conda install numpy` instead.
# + id="gIPwS9Hwa1ma"
# !pip3 install numpy
# + [markdown] id="PYXeWm6ba1mb"
# Import NumPy
# + id="d6Y8_EG_a1mc"
import numpy as np
# + [markdown] id="WaOQK8Vda1mc"
# The currently installed NumPy version
# + id="1z5wd2CHa1mc" outputId="fbfbaebe-fc94-4eae-85e2-9b0ee51be8f3"
np.__version__
# + [markdown] id="LqM-FONEa1me"
# ## 1. Creating NumPy arrays
#
# ### 1.1 The `array()` function
#
# The `array()` function builds a NumPy array from the values of a Python list or tuple.
# + id="620SyM9da1mf"
# Using a list
a = np.array([1, 2, 3, 4, 5])
# + id="Nam1Hq2xa1mf" outputId="cb81788e-88bb-4e53-c6cd-c6cd4db10956"
a
# Using a tuple
็ต
b = np.array((6, 7, 8))
# + [markdown] id="S9Uz0w49a1mg"
# You can see that the created arrays have type numpy.ndarray
# + id="Z3DP52Bsa1mg" outputId="404ffe3d-37e2-4d99-c36b-ee69d2567239"
type(a), type(b)
# + [markdown] id="EsBBJrbfa1mh"
# The following syntax will cause an error
# + id="JsnFo17Ra1mh" outputId="f8fbd124-f251-48bb-f3be-b67a0fb91ac3"
np.array(1, 2, 3)
# + [markdown] id="ebd-8ykOa1mi"
# ### 1.2 Generating arithmetic 1-D arrays with `arange()` and `linspace()`
#
# To generate an array of sequential numbers, use the `arange()` function. Its arguments are shown below; the stop value is required, while the start and step values are optional. The generated sequence includes the start value but not the stop value, i.e. the range is `[start, stop)`.
#
# ```python
# numpy.arange([start, ]stop, [step, ]dtype=None)
# ```
#
# The generated elements form an arithmetic sequence based on the start, stop, and step values.
#
# NumPy arrays support the data types (dtype) in the table below:
#
# ||Types|
# |---|---|
# |signed integers|int8, int16, int32, int64|
# |unsigned integers|uint8, uint16, uint32, uint64|
# |floating point|float16, float32, float64, float128|
# |boolean|bool|
# + id="wJXxiEW2a1mi" outputId="a0bae189-15bd-4f22-b3a2-d7414795654f"
# Generate the ten numbers 0 - 9
np.arange(10)
# + id="eVNcIv4pa1mi" outputId="62f45bb5-81d7-43c4-fe0b-dd9f4ed7d4c2"
# Generate the numbers 2, 4, 6, 8
np.arange(2, 10, 2)
# + [markdown] id="Fymcu3ZJa1mj"
# The following is an example of a floating-point sequence.
# + id="g1Mfufuqa1mj" outputId="a1cbc24e-39bc-4292-d8b1-e6c8fc41020a"
np.arange(1.0, 3.0, 0.5, dtype='float64')
# + [markdown] id="vCE1mi0Ua1mj"
# When `arange()` is used with a non-integer step, the results can be inconsistent: in the example below, the stop value is sometimes included and sometimes not. In such cases, consider using the `linspace()` function instead.
# + id="yhPNav7Ba1mj" outputId="397f2ad6-60f4-487b-97aa-3c30d2de8ba9"
a = np.arange(0.13, 0.16, step=0.01)
print("Stop value not included:", a)
b = np.arange(0.12, 0.16, step=0.01)
print("Stop value included:", b)
# + [markdown] id="0lEziAnJa1mk"
# Calling `linspace()` is similar to `arange()`: its arguments include the start value, stop value, and data type `dtype`.
#
# Unlike `arange()`, both the start and stop values are required, and the function generates evenly spaced values within that range. It also gives more control over the generated elements:
# - `num`: number of elements to generate
# - `endpoint`: whether to include the stop value
# - `retstep`: whether to also return the step size, which is computed from the start value, stop value, and number of elements
# - `axis`: the axis along which to generate values
#
# Definition of the `linspace()` function:
#
# ```python
# numpy.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0)
# ```
#
# The `linspace()` example below produces exactly the same output as the earlier `arange()` example:
# ```python
# np.arange(2, 10, 2)
# ```
# + id="e00xqppia1mk" outputId="33f47154-521a-42ad-ffd5-7d44753ace67"
np.linspace(2, 10, num=4, endpoint=False)
# + id="ckt_IDNRa1mk" outputId="3c27922b-3ec0-4eec-dec6-21f6ffabd541"
# Also return the step size
np.linspace(2.0, 3.0, num=5, retstep=True)
# + [markdown] id="d3u6OMFNa1ml"
# ### 1.3 Creating multi-dimensional arrays
#
# A simple way to think about a multi-dimensional array is that its elements are themselves arrays. When calling `array()`, just pass in a list whose elements are also lists. The example below creates a two-dimensional array.
# + id="XPRG30Nta1ml"
a = np.array([[1, 2, 3], [4, 5, 6]])
# + [markdown] id="XcEpWHEea1ml"
# The array can be printed with the `print()` function
# + id="JYDf9RKWa1ml" outputId="65f1a146-2e8b-48a9-db8e-446805ce4872"
print(a)
# + [markdown] id="fmN07tqZa1mm"
# The `shape` attribute shows the shape of a multi-dimensional array. In the example below, array a is a 2 $\times$ 3 two-dimensional array.
# + id="m2It_4AXa1mm" outputId="05db6869-fc01-4463-f063-3729300c81d9"
a.shape
# + [markdown] id="k4ir1WMda1mm"
# The following example creates a three-dimensional array, which can be understood as two 4 $\times$ 3 two-dimensional arrays stacked together.
# + id="XqWROFUJa1mm"
b = np.array([[[1, 2, 3], [4, 5, 6],
[7, 8, 9], [10, 11, 12]],
[[1, 2, 3], [4, 5, 6],
[7, 8, 9], [10, 11, 12]]])
# + id="FxieQNsda1mn" outputId="9fcedc12-e5ea-4d23-b1ec-fef479988938"
print(b)
# + id="35aUA125a1mn" outputId="22c0aed2-ad6e-47b0-bdb9-5a5f66dd3489"
b.shape
# + [markdown] id="ASm0lYK_a1mn"
# To check the number of dimensions of a multi-dimensional array, use its `ndim` attribute.
# + id="91zU5HRna1mo" outputId="ec2abd02-7c79-413b-a17f-754dab572346"
b.ndim
# + [markdown] id="P0OdbUpga1mo"
# When building a multi-dimensional array, make sure the dimensions are consistent; otherwise, as in the example below, the result may not be what you expect.
# + id="VxapZbZAa1mp" outputId="5df4c3a5-08a0-4402-90df-3290f53f6020"
np.array([[[1, 2, 3], [4, 5, 6],
[7, 8, 9], [10, 11, 12]],
[[1, 2, 3], [4, 5, 6]]])
# + [markdown] id="iSg28uRUa1mp"
# ### 1.4 `zeros()`, `ones()`, `empty()`
#
# Calling the `zeros()` or `ones()` functions creates an array of the given shape whose elements are all 0 or all 1, respectively.
#
# `empty()` does not require an initial value; it creates an array of the given shape whose element values are arbitrary (uninitialized).
# + id="1MK0_U5Pa1mq" outputId="38e67e4d-dd56-4db8-fce1-7ec3673abb7e"
np.zeros((5, 3))
# + id="wyOg1ySAa1mq" outputId="8dc453fd-54d6-4901-91df-e5530633f6c9"
np.ones([2, 3])
# + id="OhxhkdBQa1mr" outputId="cad6b8b6-27ab-473f-bcbc-3ae92bf10758"
np.empty((2, 2, 2))
# + [markdown] id="baEI3o5pa1mr"
# ### 1.5 Generating array elements with random functions
#
# Commonly used functions are summarized below:
#
# |Function|Description|Value range|Data type|Distribution|
# |---|---|---|---|---|
# |rand()|random array of the given shape|[0, 1)|float|continuous uniform|
# |randn()|random array of the given shape|unbounded|float|standard normal|
# |randint(low[, high, size, dtype])|random integers in the given range|[low, high)|integer|discrete uniform|
# |random_sample([size])|random 1-D array of the given size|[0.0, 1.0)|float|continuous uniform|
# |random([size])|random 1-D array of the given size|[0.0, 1.0)|float|continuous uniform|
# |randf([size])|random 1-D array of the given size|[0.0, 1.0)|float|continuous uniform|
# |sample([size])|random 1-D array of the given size|[0.0, 1.0)|float|continuous uniform|
# + id="U9MavDN-a1mr"
# Set the random seed
np.random.seed(0)
# + [markdown] id="W_VbRTqna1ms"
# #### 1.5.1 Randomly generate an array with a given shape
# + id="yDvcnEUPa1ms" outputId="63f2c1ce-cfeb-47de-8edf-c73a01a94735"
np.random.rand(2, 3)
# + id="sXr332tRa1ms" outputId="fe6aceb4-09e0-419f-83cf-5207f3ba28b2"
np.random.randn(2, 3)
# + [markdown] id="BtkdDtYQa1mt"
# #### 1.5.2 Randomly generate elements of a one-dimensional array
# + id="U1pWkQJ4a1mt" outputId="fd48bf61-206c-410d-fb50-82ef51ddfd45"
np.random.random(10)
# + id="xQJg1OLga1mt" outputId="2d7fabd9-dc59-424e-bdcf-a1fb16d9165f"
np.random.randint(1, 10, 10)
# + id="M-GjAbpwa1mu" outputId="7953f9d5-a5ea-4b06-c80f-35c95fd00fca"
np.random.random_sample(10)
# + id="fCw9M4rca1mu" outputId="a0aa1500-4d5f-4b69-9b31-e661417aa4a6"
np.random.choice(100, 10)
# + [markdown] id="G6yqkIMBa1mu"
# ### 1.6 Random array elements from different distributions
#
# Random arrays can also be drawn from different probability distributions; NumPy provides a very rich set of distribution functions. For the full list and detailed descriptions, see the official documentation: [Random sampling - Distributions](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.random.html#distributions).
#
# The Gamma distribution is used as an example below.
#
# Note: if Matplotlib and SciPy are not installed yet, please install them before running the example below.
# + id="CGVMDwh6a1mv"
shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
s = np.random.gamma(shape, scale, 1000)
# + id="L8OJrQtwa1mv" outputId="4eec52d4-d428-4529-e268-a70c6b209c25"
# %matplotlib inline
import matplotlib.pyplot as plt
import scipy.special as sps
count, bins, ignored = plt.hist(s, 50, density=True)
y = bins**(shape-1)*(np.exp(-bins/scale) /
(sps.gamma(shape)*scale**shape))
plt.plot(bins, y, linewidth=2, color='r')
plt.show()
# + [markdown] id="5ydSXZXSa1mv"
# ## 2. Indexing and Slicing NumPy Arrays
#
# Array elements can be accessed by index, or sliced with the [start:stop:step] syntax to take the elements in a range. Note that the start-stop range is half-open, so the returned elements do not include the element at the stop index.
# + id="VvFsP3_Ea1mv" outputId="5874fd78-8385-4aa6-b81b-a1dacd859fb1"
a = np.arange(20)
a
# + id="p1ptHcaja1mw" outputId="f9ea2c5b-44fe-4cd5-ca13-fe0d582f5ef5"
a[3]
# + id="oLCO-_mua1mw" outputId="ff391cd1-e0c0-4917-9b89-309dfd16c6ec"
a[5:10]
# + id="M8WkkTkHa1mw" outputId="f086730c-2096-43d8-a797-dfc700c829db"
a[5:10:2]
# + [markdown] id="h9mbGZZwa1mx"
# Index -1 refers to the last element.
# + id="Ysfy6RwHa1mx" outputId="22d6af21-e474-439d-a126-114989c815a9"
a[-1]
# + [markdown] id="1FQpHSpta1mx"
# If a slice specifies only a step of -1, the elements are returned in reverse order, starting from the last one.
# + id="IeQYRs_Ba1mx" outputId="e65351a0-daaa-4495-bffc-c933ca48660f"
a[::-1]
# + [markdown] id="O8Artq_pa1my"
# Multi-dimensional arrays are indexed by giving an index value or a range for each dimension.
# + id="GUD8H239a1my" outputId="7b84e7d1-73f5-4e8e-da8f-5a0bb009fa05"
b = np.array([[1, 2, 3], [4, 5, 6]])
b
# + id="kINkgFE0a1my" outputId="91f65720-4cf5-4f26-ed55-77c136a50ad0"
b[0, 2]
# + [markdown] id="-ZAO0Ozla1my"
# If start or stop is omitted, the slice takes all elements before or after that index, respectively. If both are omitted, all elements are taken.
# + id="3yjel0Cua1mz" outputId="7da2645e-e4cc-4f31-f458-6f2633441886"
b[:, 1:]
# + [markdown] id="vL7xwe0Da1mz"
# ## 3. Common NumPy Array Attributes
#
# |Attribute|Description|
# |---|---|
# |shape|shape of the array|
# |ndim|number of dimensions, i.e. the number of axes|
# |dtype|data type of the array elements|
# |size|number of elements in the array|
# |flat|1-D iterator over the array|
# |T|transpose of the array|
# |real|real part of complex-number elements|
# |imag|imaginary part of complex-number elements|
# |data|buffer object pointing to the start of the array's data|
# |itemsize|memory used by each element, in bytes|
# |nbytes|total memory used by all elements, in bytes|
# |strides|number of bytes to step to the neighbouring element along each axis|
# + [markdown] id="mqA3dptNa1m0"
# Array `a` is two-dimensional; its shape, number of dimensions, and number of elements can be read from its attributes.
# + id="suguCpLEa1m0"
a = np.array([[1, 2, 3, 4, 5],
[4, 5, 6, 7, 8]])
# + id="bJtrUn9Pa1m0" outputId="d72602e1-d890-4659-defc-644cabe1220e"
a.shape
# + id="iRVRyVOsa1m0" outputId="fa7bf9c2-92b0-4038-9927-94125eb7309d"
a.ndim
# + id="KKkc-h9Ya1m1" outputId="a5e262c2-2fb0-4a02-f357-ecb1da6df70e"
a.dtype
# + id="9JUnJafia1m1" outputId="4fba6c03-c1a7-4700-db9a-28a7ce41b838"
a.size
# + [markdown] id="mZCr8ZLDa1m2"
# Viewing `a` through its flat (1-D) iterator, the element at index 6 is 5.
# + id="HwrfPbnla1m2" outputId="eb72c8e2-dc64-4338-eb77-582cb459a0e1"
a.flat[6]
# + [markdown] id="fMcTy52ha1m2"
# Transposing an array will be covered in more detail in the later linear algebra unit.
# + id="dsZIjjHra1m2" outputId="c1ebc2ea-cad0-4b8b-80d7-f511ccaa991d"
a.T
# + [markdown] id="ajL7J3bya1m3"
# The elements of array `x` are complex numbers; the `real` and `imag` attributes show the real and imaginary parts of the numbers, respectively.
# + id="4rp939Kua1m3" outputId="8fc75558-4d88-4bbf-ea89-020e709800f9"
x = np.array([1+0j, 0+1j])
x
# + id="WiJ98FUta1m3" outputId="41ee48c5-2e6f-4b75-d23d-a3216e43db32"
x.real
# + id="JlU38MYOa1m4" outputId="8c49909c-3683-4e37-bd25-db5b63caa073"
x.imag
# + id="9isWHxO5a1m4" outputId="abdd167e-457a-4e4f-af13-a98119b26737"
# Show the buffer object pointing to the start of the array's data
x.data
# + [markdown] id="Tpk_95Z2a1m4"
# Array `b` holds `int64` values (64-bit integers, i.e. 8 bytes each) and has 3 elements, so its `dtype`, `itemsize`, `nbytes`, and `strides` come out as shown below.
# + id="zcXiOV2xa1m4"
b = np.array([1, 2, 3])
# + id="6yt7T_1aa1m5" outputId="07c3fbe3-b55c-450c-b270-21a6a6c1b6f0"
b.dtype
# + id="lfutxvI4a1m5" outputId="e576105a-76db-4814-e2a0-3ad7b96521b7"
b.itemsize
# + id="cwTplS5oa1m5" outputId="cc1ce6ba-8e02-455a-f3c7-cff043371f7d"
b.nbytes
# + id="TrJSC2mBa1m5" outputId="7649cfd3-f9d5-4846-b23a-22eaf7d06d7f"
b.strides
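# + [markdown]
# As an extra example (not in the original notebook): for a two-dimensional array, `strides` has one entry per axis — the number of bytes to step to the next row and to the next column.
# +
c = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int64)
c.strides  # expected (24, 8): 3 columns x 8 bytes to move one row, 8 bytes to move one column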
# + id="Q7qAYN-2a1m6"
| Sample Code/Day_01_SampleCode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # bf_qt_scraping
#
# This notebook describes how hotel data can be scraped using PyQT.
#
# The items we want to extract are:
# - the hotels for a given city
# - links to each hotel page
# - text hotel summary
# - text hotel description
#
# Once the links for each hotel are determined, I then want to extract the following items pertaining to each review:
# - title
# - author
# - text
# - rating
#
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import *
from lxml import html
class Render(QWebPage):
def __init__(self, url):
self.app = QApplication(sys.argv)
QWebPage.__init__(self)
self.loadFinished.connect(self._loadFinished)
self.mainFrame().load(QUrl(url))
self.app.exec_()
def _loadFinished(self, result):
self.frame = self.mainFrame()
self.app.quit()
def update_url(self, url):
self.mainFrame().load(QUrl(url))
self.app.exec_()
url = 'http://www.bringfido.com/lodging/city/new_haven_ct_us'
# This does the magic: it loads everything
r = Render(url)
#result is a QString.
result = r.frame.toHtml()
# +
# result
# -
#QString should be converted to a string before being processed by lxml
formatted_result = str(result.toAscii())
#Next build lxml tree from formatted_result
tree = html.fromstring(formatted_result)
tree.text_content
#Now using correct Xpath we are fetching URL of archives
archive_links = tree.xpath('//*[@id="results_list"]/div')
print archive_links
# +
url = 'http://pycoders.com/archive/'
r = Render(url)
result = r.frame.toHtml()
#QString should be converted to a string before being processed by lxml
formatted_result = str(result.toAscii())
tree = html.fromstring(formatted_result)
# +
#Now using correct Xpath we are fetching URL of archives
archive_links = tree.xpath('//*[@class="campaign"]/a/@href')
# for lnk in archive_links:
# print(lnk)
# -
# ## Now the Hotels
# +
url = 'http://www.bringfido.com/lodging/city/new_haven_ct_us'
r = Render(url)
result = r.frame.toHtml()
#QString should be converted to a string before being processed by lxml
formatted_result = str(result.toAscii())
tree = html.fromstring(formatted_result)
# +
#Now using correct Xpath we are fetching URL of archives
archive_links = tree.xpath('//*[@id="results_list"]/div')
print(archive_links)
print('')
for lnk in archive_links:
print(lnk.xpath('div[2]/h1/a/text()')[0])
print(lnk.text_content())
print('*'*25)
# -
# ### Now Get the Links
links = []
for lnk in archive_links:
print(lnk.xpath('div/h1/a/@href')[0])
links.append(lnk.xpath('div/h1/a/@href')[0])
print('*'*25)
lnk.xpath('//*/div/h1/a/@href')[0]
links
# ## Loading Reviews
#
# Next, we want to step through each page, and scrape the reviews for each hotel.
# +
url_base = 'http://www.bringfido.com'
r.update_url(url_base+links[0])
result = r.frame.toHtml()
#QString should be converted to a string before being processed by lxml
formatted_result = str(result.toAscii())
tree = html.fromstring(formatted_result)
# +
hotel_description = tree.xpath('//*[@class="body"]/text()')
details = tree.xpath('//*[@class="address"]/text()')
address = details[0]
csczip = details[1]
phone = details[2]
#Now using correct Xpath we are fetching URL of archives
reviews = tree.xpath('//*[@class="review_container"]')
texts = []
titles = []
authors = []
ratings = []
print(reviews)
print('')
for rev in reviews:
titles.append(rev.xpath('div/div[1]/text()')[0])
authors.append(rev.xpath('div/div[2]/text()')[0])
texts.append(rev.xpath('div/div[3]/text()')[0])
ratings.append(rev.xpath('div[2]/img/@src')[0].split('/')[-1][0:1])
print(rev.xpath('div[2]/img/@src')[0].split('/')[-1][0:1])
# -
titles
authors
texts
ratings
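# ## Collecting the Results
#
# A possible next step (not in the original notebook): gather the per-review fields into a
# single pandas DataFrame for easier inspection and export.
# +
import pandas as pd
reviews_df = pd.DataFrame({'title': titles,
                           'author': authors,
                           'text': texts,
                           'rating': ratings})
print(reviews_df.head())
# -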
| code/bf_qt_scraping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Junhojuno/DeepLearning-Beginning/blob/master/code_review/1.MLP_mnist_tensorflow_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="uMrL1jnP6oV-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="9f9e2372-6589-40a7-fded-b5fe98021c48"
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# + id="xByUItre6oWM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3497d988-9238-472a-84e2-865be8900301"
# parameter setting
n_input = 784 # 28 x 28 image, input layer nodes
n_hidden1 = 256 # the number of 1st hidden layer nodes
n_hidden2 = 128 # the number of 2nd hidden layer nodes
n_classes = 10 # 0~9 classification , output layer nodes
# placeholder x, y that we input
x = tf.placeholder(dtype=tf.float32, shape=[None, n_input]) # ?(batch size) x 784
y = tf.placeholder(dtype=tf.float32, shape=[None, n_classes]) # ?(batch size) x 10
# network parameter setting : weights & biases
# The weight shapes can be understood by thinking of them as inner products (matrix multiplications).
# The bias vectors have one entry per node in each layer.
# input : ? x 784
weights = {
'h1' : tf.Variable(initial_value=tf.random_normal(shape=[n_input, n_hidden1], stddev=0.1)), # 784 x 256
'h2' : tf.Variable(initial_value=tf.random_normal(shape=[n_hidden1, n_hidden2], stddev=0.1)), # 256 x 128
'out' : tf.Variable(initial_value=tf.random_normal(shape=[n_hidden2, n_classes], stddev=0.1)) # 128 x 10
}
biases = {
'h1' : tf.Variable(initial_value=tf.random_normal(shape=[n_hidden1])), # 256
'h2' : tf.Variable(initial_value=tf.random_normal(shape=[n_hidden2])), # 128
'out' : tf.Variable(initial_value=tf.random_normal(shape=[n_classes])) # 10
}
print("parameter setting is completed")
# + id="h5bul93k6oWR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7cb6beb4-b962-4e1a-86f6-cea98c3bdf6d"
# define graph
# Build the operations and the connections between them.
# model
# Using the variables defined above, define the actual computation network.
# Take into account the matrix products, adding the biases, and the activation functions.
def mlp(_x, _w, _b):
    # tf.sigmoid = tf.nn.sigmoid : the two are said to be identical.
hidden_layer1 = tf.nn.sigmoid(tf.add(tf.matmul(_x, _w['h1']),_b['h1']))
hidden_layer2 = tf.nn.sigmoid(tf.add(tf.matmul(hidden_layer1, _w['h2']),_b['h2']))
out = tf.add(tf.matmul(hidden_layer2, _w['out']), _b['out'])
    return out # return the logits
# prediction
prediction = mlp(x, weights, biases)
# cost(loss) function, optimizer
# loss : cross entropy
# up to reduce_sum: the loss for a single data point
# with reduce_mean: the average loss over the whole batch
# cost = -tf.reduce_mean(tf.reduce_sum(y * tf.log(prediction),axis=1))
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
# accuracy check
# tf.argmax extracts the index of the largest value
# tf.equal gives 1 when the prediction matches the label, 0 otherwise
# cast the booleans to float and take the mean to get the accuracy (boolean type --> float type)
correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct,dtype='float'))
print("Graph ready..")
# + id="8nXHGA1R6oWY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="1cef3902-2f33-4bb8-ec44-25f9bb1813ef"
# Training
training_epochs = 20
batch_size = 100
display_step = 4
# initialize
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# optimizing
for epoch in range(training_epochs):
avg_cost = 0
total_batch = int(mnist.train.num_examples / batch_size)
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size=batch_size)
feed_dict = {x:batch_xs, y:batch_ys}
sess.run(optimizer, feed_dict=feed_dict)
avg_cost += sess.run(cost, feed_dict=feed_dict)
    avg_cost = avg_cost / total_batch # average cost for this epoch
# Display
if (epoch+1) % display_step == 0:
print("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
feed_dict = {x: batch_xs, y: batch_ys}
train_acc = sess.run(accuracy, feed_dict=feed_dict)
print("Train Accuracy: %.3f" % (train_acc))
feed_dict = {x: mnist.test.images, y: mnist.test.labels}
test_acc = sess.run(accuracy, feed_dict=feed_dict)
print("Test Accuracy: %.3f" % (test_acc))
print("Optimizing Finished!!")
| code_review/1.MLP_mnist_tensorflow_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.0
# language: julia
# name: julia-1.6
# ---
# + [markdown] toc="true"
# # Table of Contents
# <p><div class="lev1 toc-item"><a href="#Orthogonal-projections" data-toc-modified-id="Orthogonal-projections-1"><span class="toc-item-num">1 </span>Orthogonal projections</a></div><div class="lev2 toc-item"><a href="#Ortho-complementary-subspaces" data-toc-modified-id="Ortho-complementary-subspaces-11"><span class="toc-item-num">1.1 </span>Ortho-complementary subspaces</a></div><div class="lev2 toc-item"><a href="#The-fundamental-theorem-of-linear-algebra" data-toc-modified-id="The-fundamental-theorem-of-linear-algebra-12"><span class="toc-item-num">1.2 </span>The fundamental theorem of linear algebra</a></div><div class="lev2 toc-item"><a href="#Orthogonal-projections" data-toc-modified-id="Orthogonal-projections-13"><span class="toc-item-num">1.3 </span>Orthogonal projections</a></div><div class="lev2 toc-item"><a href="#Orthogonal-projection-into-a-column-space" data-toc-modified-id="Orthogonal-projection-into-a-column-space-14"><span class="toc-item-num">1.4 </span>Orthogonal projection into a column space</a></div>
# -
# # Orthogonal projections
#
# **Highlights** In this lecture, we'll show (1) the **fundamental theorem of linear algebra**:
#
# <img src="../04-vecsp/four_fundamental_subspaces.png" width=400 align="center"/>
#
# and (2) any symmetric, idempotent matrix $\mathbf{P}$ is the orthogonal projector onto $\mathcal{C}(\mathbf{P})$ (along $\mathcal{C}(\mathbf{P})^\perp = \mathcal{N}(\mathbf{P}')$).
# ## Ortho-complementary subspaces
#
# - An **orthocomplement set** of a set $\mathcal{X}$ (not necessarily a subspace) in a vector space $\mathcal{V} \subseteq \mathbb{R}^m$ is defined as
# $$
# \mathcal{X}^\perp = \{ \mathbf{u} \in \mathcal{V}: \langle \mathbf{x}, \mathbf{u} \rangle = 0 \text{ for all } \mathbf{x} \in \mathcal{X}\}.
# $$
# In midterm Q8, we showed that $\mathcal{X}^\perp$ is always a vector space regardless of whether $\mathcal{X}$ is a vector space or not.
#
# - TODO: visualize $\mathbb{R}^3 = \text{a plane} \oplus \text{plane}^\perp$.
#
# - **Direct sum theorem for orthocomplementary subspaces.** Let $\mathcal{S}$ be a subspace of a vector space $\mathcal{V}$ with $\text{dim}(\mathcal{V}) = m$. Then the following statements are true.
# 1. $\mathcal{V} = \mathcal{S} + \mathcal{S}^\perp$. That is every vector $\mathbf{y} \in \mathcal{V}$ can be expressed as $\mathbf{y} = \mathbf{u} + \mathbf{v}$, where $\mathbf{u} \in \mathcal{S}$ and $\mathbf{v} \in \mathcal{S}^\perp$.
# 2. $\mathcal{S} \cap \mathcal{S}^\perp = \{\mathbf{0}\}$ (essentially disjoint).
# 3. $\mathcal{V} = \mathcal{S} \oplus \mathcal{S}^\perp$.
# 4. $m = \text{dim}(\mathcal{S}) + \text{dim}(\mathcal{S}^\perp)$.
# By the uniqueness of decomposition for direct sum, we know the expression of $\mathbf{y} = \mathbf{u} + \mathbf{v}$ is also **unique**.
#
# Proof of 1: Let $\{\mathbf{z}_1, \ldots, \mathbf{z}_r\}$ be an orthonormal basis of $\mathcal{S}$ and extend it to an orthonormal basis $\{\mathbf{z}_1, \ldots, \mathbf{z}_r, \mathbf{z}_{r+1}, \ldots, \mathbf{z}_m\}$ of $\mathcal{V}$. Then any $\mathbf{y} \in \mathcal{V}$ can be expanded as
# $$
# \mathbf{y} = (\alpha_1 \mathbf{z}_1 + \cdots + \alpha_r \mathbf{z}_r) + (\alpha_{r+1} \mathbf{z}_{r+1} + \cdots + \alpha_m \mathbf{z}_m),
# $$
# where the first sum belongs to $\mathcal{S}$ and the second to $\mathcal{S}^\perp$.
# Proof of 2: Suppose $\mathbf{x} \in \mathcal{S} \cap \mathcal{S}^\perp$, then $\mathbf{x} \perp \mathbf{x}$, i.e., $\langle \mathbf{x}, \mathbf{x} \rangle = 0$. Therefore $\mathbf{x} = \mathbf{0}$.
# Proof of 3: Statement 1 says $\mathcal{V} = \mathcal{S} + \mathcal{S}^\perp$. Statement 2 says $\mathcal{S}$ and $\mathcal{S}^\perp$ are essentially disjoint. Thus $\mathcal{V} = \mathcal{S} \oplus \mathcal{S}^\perp$.
# Proof of 4: Follows from essential disjointness between $\mathcal{S}$ and $\mathcal{S}^\perp$.
#
# - Some facts (optional):
# 1. For a set $\mathcal{X}$ in a vector space $\mathcal{V}$, $\mathcal{X}^\perp$ is always a subspace, whether or not $\mathcal{X}$ is a subspace.
# 2. If $\mathcal{S}$ is a subspace of $\mathcal{V}$, then $(\mathcal{S}^\perp)^\perp = \mathcal{S}$.
# 3. If $\mathcal{S}_1 \subseteq \mathcal{S}_2$ are subsets of $\mathcal{V}$, then $\mathcal{S}_1^\perp \supseteq \mathcal{S}_2^\perp$.
# 4. If $\mathcal{S}_1 = \mathcal{S}_2$ are two subsets of $\mathcal{V}$, then $\mathcal{S}_1^\perp = \mathcal{S}_2^\perp$.
# 5. If $\mathcal{S}_1$ and $\mathcal{S}_2$ are two subspaces in $\mathcal{V}$, then $(\mathcal{S}_1 + \mathcal{S}_2)^\perp = \mathcal{S}_1^\perp \cap \mathcal{S}_2^\perp$ and $(\mathcal{S}_1 \cap \mathcal{S}_2)^\perp = \mathcal{S}_1^\perp + \mathcal{S}_2^\perp$.
# ## The fundamental theorem of linear algebra
#
# - Let $\mathbf{A} \in \mathbb{R}^{m \times n}$. Then
# 1. $\mathcal{C}(\mathbf{A})^\perp = \mathcal{N}(\mathbf{A}')$ and $\mathbb{R}^m = \mathcal{C}(\mathbf{A}) \oplus \mathcal{N}(\mathbf{A}')$.
# 2. $\mathcal{C}(\mathbf{A}) = \mathcal{N}(\mathbf{A}')^\perp$.
# 3. $\mathcal{N}(\mathbf{A})^\perp = \mathcal{C}(\mathbf{A}')$ and $\mathbb{R}^n = \mathcal{N}(\mathbf{A}) \oplus \mathcal{C}(\mathbf{A}')$.
#
# Proof of 1: To show $\mathcal{C}(\mathbf{A})^\perp = \mathcal{N}(\mathbf{A}')$,
# \begin{eqnarray*}
# & & \mathbf{x} \in \mathcal{N}(\mathbf{A}') \\
# &\Leftrightarrow& \mathbf{A}' \mathbf{x} = \mathbf{0} \\
# &\Leftrightarrow& \mathbf{x} \text{ is orthogonal to columns of } \mathbf{A} \\
# &\Leftrightarrow& \mathbf{x} \in \mathcal{C}(\mathbf{A})^\perp.
# \end{eqnarray*}
# Then, $\mathbb{R}^m = \mathcal{C}(\mathbf{A}) \oplus \mathcal{C}(\mathbf{A})^\perp = \mathcal{C}(\mathbf{A}) \oplus \mathcal{N}(\mathbf{A}')$.
#
# Proof of 2: Since $\mathcal{C}(\mathbf{A})$ is a subspace, $(\mathcal{C}(\mathbf{A})^\perp)^\perp = \mathcal{N}(\mathbf{A}')^\perp$.
#
# Proof of 3: Applying part 2 to $\mathbf{A}'$, we have
# $$
# \mathcal{C}(\mathbf{A}') = \mathcal{N}((\mathbf{A}')')^\perp = \mathcal{N}(\mathbf{A})^\perp
# $$
# and
# $$
# \mathbb{R}^n = \mathcal{N}(\mathbf{A}) \oplus \mathcal{N}(\mathbf{A})^\perp = \mathcal{N}(\mathbf{A}) \oplus \mathcal{C}(\mathbf{A}').
# $$
# ## Orthogonal projections
#
# <img src="../06-matinv/three_projs.png" width=600 align="center"/>
#
# - If $\mathcal{S}$ is a subspace of some vector space $\mathcal{V}$ and $\mathbf{y} \in \mathcal{V}$, then the projection of $\mathbf{y}$ into $\mathcal{S}$ along $\mathcal{S}^\perp$ is called the **orthogonal projection** of $\mathbf{y}$ into $\mathcal{S}$.
#
# - **The closest point theorem.** Let $\mathcal{S}$ be a subspace of some vector space $\mathcal{V}$ and $\mathbf{y} \in \mathcal{V}$. The orthogonal projection of $\mathbf{y}$ into $\mathcal{S}$ is the **unique** point in $\mathcal{S}$ that is closest to $\mathbf{y}$. In other words, if $\mathbf{u}$ is the orthogonal projection of $\mathbf{y}$ into $\mathcal{S}$, then
# $$
# \|\mathbf{y} - \mathbf{u}\|^2 \le \|\mathbf{y} - \mathbf{w}\|^2 \text{ for all } \mathbf{w} \in \mathcal{S},
# $$
# with equality holding only when $\mathbf{w} = \mathbf{u}$.
#
# Proof: Picture.
# \begin{eqnarray*}
# & & \|\mathbf{y} - \mathbf{w}\|^2 \\
# &=& \|\mathbf{y} - \mathbf{u} + \mathbf{u} - \mathbf{w}\|^2 \\
# &=& \|\mathbf{y} - \mathbf{u}\|^2 + 2(\mathbf{y} - \mathbf{u})'(\mathbf{u} - \mathbf{w}) + \|\mathbf{u} - \mathbf{w}\|^2 \\
# &=& \|\mathbf{y} - \mathbf{u}\|^2 + \|\mathbf{u} - \mathbf{w}\|^2 \\
# &\ge& \|\mathbf{y} - \mathbf{u}\|^2.
# \end{eqnarray*}
#
# - Let $\mathbb{R}^n = \mathcal{S} \oplus \mathcal{S}^\perp$. A square matrix $\mathbf{P}_{\mathcal{S}}$ is called the **orthogonal projector** into $\mathcal{S}$ if, for every $\mathbf{y} \in \mathbb{R}^n$, $\mathbf{P}_{\mathcal{S}} \mathbf{y}$ is the projection of $\mathbf{y}$ into $\mathcal{S}$ along $\mathcal{S}^\perp$.
# ## Orthogonal projection into a column space
#
# - For a matrix $\mathbf{X}$, the orthogonal projector onto $\mathcal{C}(\mathbf{X})$ is written as $\mathbf{P}_{\mathbf{X}}$.
#
# <img src="./ls_projection.png" width=400 align="center"/>
#
# - Let $\mathbf{y} \in \mathbb{R}^n$ and $\mathbf{X} \in \mathbb{R}^{n \times p}$.
# 1. The orthogonal projection of $\mathbf{y}$ into $\mathcal{C}(\mathbf{X})$ is given by $\mathbf{u} = \mathbf{X} \boldsymbol{\beta}$, where $\boldsymbol{\beta}$ satisfies the **normal equation**
# $$
# \mathbf{X}' \mathbf{X} \boldsymbol{\beta} = \mathbf{X}' \mathbf{y}.
# $$
# Normal equation always has solution(s) (why?) and any generalized inverse $(\mathbf{X}'\mathbf{X})^-$ yields a solution $\widehat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^- \mathbf{X}' \mathbf{y}$. Therefore,
# $$
# \mathbf{P}_{\mathbf{X}} = \mathbf{X} (\mathbf{X}' \mathbf{X})^- \mathbf{X}'.
# $$
# for any generalized inverse $(\mathbf{X}' \mathbf{X})^-$.
# 2. If $\mathbf{X}$ has full column rank, then the orthogonal projector into $\mathcal{C}(\mathbf{X})$ is given by
# $$
# \mathbf{P}_{\mathbf{X}} = \mathbf{X} (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}'.
# $$
#
# Proof of 1: Since the orthogonal projection of $\mathbf{y}$ into $\mathcal{C}(\mathbf{X})$ lives in $\mathcal{C}(\mathbf{X})$, it can be written as $\mathbf{u} = \mathbf{X} \boldsymbol{\beta}$ for some $\boldsymbol{\beta} \in \mathbb{R}^p$. Furthermore, $\mathbf{v} = \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \in \mathcal{C}(\mathbf{X})^\perp$ is orthogonal to all vectors in $\mathcal{C}(\mathbf{X})$, including the columns of $\mathbf{X}$. Thus
# $$
# \mathbf{X}' (\mathbf{y} - \mathbf{X} \boldsymbol{\beta}) = \mathbf{0},
# $$
# or equivalently,
# $$
# \mathbf{X}' \mathbf{X} \boldsymbol{\beta} = \mathbf{X}' \mathbf{y}.
# $$
#
# Proof of 2: If $\mathbf{X}$ has full column rank, $\mathbf{X}' \mathbf{X}$ is non-singular and the solution to the normal equation is uniquely determined by $\boldsymbol{\beta} = (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}' \mathbf{y}$, and the orthogonal projection is $\mathbf{u} = \mathbf{X} \boldsymbol{\beta} = \mathbf{X} (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}' \mathbf{y}$.
#
# - Example: HW5 Q1.3.
#
# - **Uniqueness of orthogonal projector.** Let $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times p}$, both of full column rank and $\mathcal{C}(\mathbf{A}) = \mathcal{C}(\mathbf{B})$. Then $\mathbf{P}_{\mathbf{A}} = \mathbf{P}_{\mathbf{B}}$.
#
# Proof: Since $\mathcal{C}(\mathbf{A}) = \mathcal{C}(\mathbf{B})$, there exists a non-singular $\mathbf{C} \in \mathbb{R}^{p \times p}$ such that $\mathbf{A} = \mathbf{B} \mathbf{C}$. Then
# \begin{eqnarray*}
# \mathbf{P}_{\mathbf{A}} &=& \mathbf{A} (\mathbf{A}' \mathbf{A})^{-1} \mathbf{A}' \\
# &=& \mathbf{B} \mathbf{C} (\mathbf{C}' \mathbf{B}' \mathbf{B} \mathbf{C})^{-1} \mathbf{C}' \mathbf{B}' \\
# &=& \mathbf{B} \mathbf{C} \mathbf{C}^{-1} (\mathbf{B}' \mathbf{B})^{-1} (\mathbf{C}')^{-1} \mathbf{C}' \mathbf{B}' \\
# &=& \mathbf{B} (\mathbf{B}' \mathbf{B})^{-1} \mathbf{B}' \\
# &=& \mathbf{P}_{\mathbf{B}}.
# \end{eqnarray*}
#
# - Let $\mathbf{P}_\mathbf{X}$ be the orthogonal projector into $\mathcal{C}(\mathbf{X})$, where $\mathbf{X} \in \mathbb{R}^{n \times p}$ has full column rank. The following statements are true.
# 1. $\mathbf{P}_\mathbf{X}$ and $\mathbf{I} - \mathbf{P}_\mathbf{X}$ are both symmetric and idempotent.
# 2. $\mathbf{P}_\mathbf{X} \mathbf{X} = \mathbf{X}$ and $(\mathbf{I} - \mathbf{P}_\mathbf{X}) \mathbf{X} = \mathbf{O}$.
# 3. $\mathcal{C}(\mathbf{X}) = \mathcal{C}(\mathbf{P}_\mathbf{X})$ and $\text{rank}(\mathbf{P}_\mathbf{X}) = \text{rank}(\mathbf{X}) = p$.
# 4. $\mathcal{C}(\mathbf{I} - \mathbf{P}_\mathbf{X}) = \mathcal{N}(\mathbf{X}') = \mathcal{C}(\mathbf{X})^\perp$.
# 5. $\mathbf{I} - \mathbf{P}_\mathbf{X}$ is the orthogonal projector into $\mathcal{N}(\mathbf{X}')$ (or $\mathcal{C}(\mathbf{X})^\perp)$.
#
# Proof of 1: Check directly using $\mathbf{P}_{\mathbf{X}} = \mathbf{X} (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}'$.
#
# Proof of 2: Check directly using $\mathbf{P}_{\mathbf{X}} = \mathbf{X} (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}'$.
#
# Proof of 3: Since $\mathbf{P}_{\mathbf{X}} = \mathbf{X} (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}'$,
# $$
# \mathcal{C}(\mathbf{P}_\mathbf{X}) \subseteq \mathcal{C}(\mathbf{X}) = \mathcal{C}(\mathbf{P}_\mathbf{X} \mathbf{X}) \subseteq \mathcal{C}(\mathbf{P}_\mathbf{X}).
# $$
#
# Proof of 4 (optional): The second equality is simply the fundamental theorem of linear algebra. For the first equality, first we show $\mathcal{C}(\mathbf{I} - \mathbf{P}_\mathbf{X}) \subseteq \mathcal{N}(\mathbf{X}')$:
# \begin{eqnarray*}
# & & \mathbf{u} \in \mathcal{C}(\mathbf{I} - \mathbf{P}_\mathbf{X}) \\
# &\Rightarrow& \mathbf{u} = (\mathbf{I} - \mathbf{P}_\mathbf{X}) \mathbf{v} \text{ for some } \mathbf{v} \\
# &\Rightarrow& \mathbf{X}' \mathbf{u} = [(\mathbf{I} - \mathbf{P}_\mathbf{X}) \mathbf{X}]' \mathbf{v} = \mathbf{O} \mathbf{v} = \mathbf{0} \\
# &\Rightarrow& \mathbf{u} \in \mathcal{N}(\mathbf{X}').
# \end{eqnarray*}
# To show the other direction $\mathcal{C}(\mathbf{I} - \mathbf{P}_\mathbf{X}) \supseteq \mathcal{N}(\mathbf{X}')$,
# \begin{eqnarray*}
# & & \mathbf{u} \in \mathcal{N}(\mathbf{X}') \\
# &\Rightarrow& \mathbf{X}' \mathbf{u} = \mathbf{0} \\
# &\Rightarrow& \mathbf{P}_\mathbf{X} \mathbf{u} = \mathbf{X} (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}' \mathbf{u} = \mathbf{0} \\
# &\Rightarrow& \mathbf{u} = \mathbf{u} - \mathbf{P}_\mathbf{X} \mathbf{u} = (\mathbf{I} - \mathbf{P}_\mathbf{X}) \mathbf{u} \\
# &\Rightarrow& \mathbf{u} \in \mathcal{C}(\mathbf{I} - \mathbf{P}_\mathbf{X}).
# \end{eqnarray*}
#
# Proof of 5: For any $\mathbf{y} \in \mathbb{R}^n$, write $\mathbf{y} = \mathbf{P}_\mathbf{X} \mathbf{y} + (\mathbf{I} - \mathbf{P}_\mathbf{X}) \mathbf{y}$.
#
# - Let $\mathbf{P} \in \mathbb{R}^{n \times n}$ be symmetric with rank $r$ and $\mathbf{P} = \mathbf{U} \mathbf{U}'$ be a rank factorization of $\mathbf{P}$. Then $\mathbf{P}$ is an orthogonal projector in $\mathcal{C}(\mathbf{U})$ if and only if $\mathbf{U}' \mathbf{U} = \mathbf{I}_r$.
#
# - The above result gives another way to construct an orthogonal projector into $\mathcal{C}(\mathbf{A})$, where $\mathbf{A}$ might not have full column rank. Obtain an orthonormal basis $\mathbf{Q} \in \mathbb{R}^{n \times r}$ of $\mathcal{C}(\mathbf{A})$, e.g., by Gram-Schmidt, and let $\mathbf{P} = \mathbf{Q} \mathbf{Q}'$.
#
# - Example: HW5 Q1.3, Q7.
| slides/08-orthproj/08-orthproj.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Missing Completely at Random (MCAR)
# What does MCAR mean?
#
# ANS: When working with a dataset that has missing values, such as this Titanic dataset, at most three columns have missing data: Age, Cabin, and Embarked. MCAR here means that a value being missing (for example in Embarked) has no relationship with Age, Cabin, or any other variable — so this is an example of data missing completely at random.
import pandas as pd
df = pd.read_csv('train.csv')
df.head()
df.info()
df.tail()
df.isnull().sum()
df['Embarked']
df[df['Embarked'].isnull()]
# ##### 2. Not missing at random (NMAR): systematic missing values
#
# Here there is a systematic relationship between a value being missing and the other values in the data.
import numpy as np
# +
# where ever there is nan values replacing it by 1 and if there are not any then value=0
df['cabin_null']=np.where(df['Cabin'].isnull(),1,0)
#mean is just for %
df['cabin_null'].mean()
df['cabin_null'].head()
# -
df.columns
df.groupby(['Survived'])[('cabin_null')].mean()
# ### 3. Missing at random (MAR)
# # All the techniques for handling missing values
#
# 1. Mean/median/mode replacement
# 2. Random sample imputation
# 3. Capturing NaN values with a new feature
# 4. End-of-distribution imputation
# 5. Arbitrary value imputation
# 6. Frequent category imputation
#
# 1. Mean/median imputation
#
# When should we apply it?
#
# It rests on the assumption that the data are missing completely at random (MCAR).
#
# We handle the missing values by replacing the NaNs with the median (or mean) of the variable.
#
df = pd.read_csv('train.csv', usecols=['Age','Fare','Survived'])
df.head()
## The percentage of missing values in each column
df.isnull().mean()
def impute_nan(df,variable,median):
df[variable+"_median"] = df[variable].fillna(median)
median = df.Age.median()
median
impute_nan(df,'Age',median)
df.head()
print(df['Age'].std())
print(df['Age_median'].std())
import matplotlib.pyplot as plt
# %matplotlib inline
fig = plt.figure()
ax = fig.add_subplot(111)
df['Age'].plot(kind='kde', ax=ax)
df.Age_median.plot(kind='kde',ax=ax,color='red')
lines,labels = ax.get_legend_handles_labels()
ax.legend(lines, labels, loc='best')
# # Random Sample Imputation
import pandas as pd
df = pd.read_csv('train.csv', usecols =['Age','Fare','Survived'])
df.head()
df.isnull().sum()
df.isnull().mean()
df['Age'].isnull().sum()
df['Age'].dropna().sample(df['Age'].isnull().sum(),random_state=0)
df[df['Age'].isnull()].index
def impute_nan(df,variable,median):
df[variable+"_median"] = df[variable].fillna(median)
df[variable+"_random"] = df[variable]
#it will have random sample to fill na
random_sample = df[variable].dropna().sample(df[variable].isnull().sum(),random_state=0)
# pandas need to have same index in order to merge the dataset
random_sample.index = df[df[variable].isnull()].index
df.loc[df[variable].isnull(),variable+'_random'] = random_sample
median = df.Age.median()
median
impute_nan(df,"Age",median)
df.head()
import matplotlib.pyplot as plt
# %matplotlib inline
fig = plt.figure()
ax = fig.add_subplot(111)
df['Age'].plot(kind='kde', ax=ax)
df.Age_random.plot(kind='kde',ax=ax,color='green')
df.Age_median.plot(kind='kde',ax=ax,color='red')
lines,labels = ax.get_legend_handles_labels()
ax.legend(lines, labels, loc='best')
import numpy as np
df = pd.read_csv('train.csv', usecols =['Age','Fare','Survived'])
df.head()
df['Age_NAN']=np.where(df['Age'].isnull(),1,0)
df.Age.median()
df["Age"].fillna(df.Age.median(),inplace=True)
df.head(10)
# # End of Distribution Imputation
df = pd.read_csv('train.csv', usecols =['Age','Fare','Survived'])
df.head()
import matplotlib.pyplot as plt
# %matplotlib inline
df.Age.hist(bins=50)
extreme = df.Age.mean()+3*df.Age.std()
import seaborn as sns
sns.boxplot('Age',data=df)
def impute_nan(df,variable,median,extreme):
df[variable+"_end_distribution"] = df[variable].fillna(extreme)
df[variable].fillna(median,inplace=True)
impute_nan(df,'Age',df.Age.median(),extreme)
df.head()
df['Age'].hist(bins=50)
df['Age_end_distribution'].hist(bins=50)
sns.boxplot('Age_end_distribution',data=df)
# # Arbitrary Value Imputation
#
# This technique comes from Kaggle competitions. It consists of replacing the NaN values with an arbitrary value (for example 0 or 100).
import pandas as pd
df = pd.read_csv('train.csv', usecols =['Age','Fare','Survived'])
df.head()
def impute_nan(df,variable):
df[variable+"_Zero"]=df[variable].fillna(0)
df[variable+"_Hundred"]=df[variable].fillna(100)
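# The helper above is defined but never called in the original notebook; a minimal usage
# sketch (assuming we impute the Age column as before):
impute_nan(df, 'Age')
df.head()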
| c. Feature Engineering/How To Handle Missing Data- All Techniques.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
import seaborn as sns
import time
import matplotlib.pyplot as plt
# %matplotlib inline
# Any results you write to the current directory are saved as output.
# -
# In this kernel, I implement a vectorized PDF calculation (without for loops) to get their correlation matrix. This is helpful for studying feature grouping.
# credits to @sibmike https://www.kaggle.com/sibmike/are-vars-mixed-up-time-intervals
# **Functions**
# + _kg_hide-input=false
def logloss(y,yp):
yp = np.clip(yp,1e-5,1-1e-5)
return -y*np.log(yp)-(1-y)*np.log(1-yp)
def reverse(tr,te):
reverse_list = [0,1,2,3,4,5,6,7,8,11,15,16,18,19,
22,24,25,26,27,41,29,
32,35,37,40,48,49,47,
55,51,52,53,60,61,62,103,65,66,67,69,
70,71,74,78,79,
82,84,89,90,91,94,95,96,97,99,
105,106,110,111,112,118,119,125,128,
130,133,134,135,137,138,
140,144,145,147,151,155,157,159,
161,162,163,164,167,168,
170,171,173,175,176,179,
180,181,184,185,187,189,
190,191,195,196,199]
reverse_list = ['var_%d'%i for i in reverse_list]
for col in reverse_list:
tr[col] = tr[col]*(-1)
te[col] = te[col]*(-1)
return tr,te
def scale(tr,te):
for col in tr.columns:
if col.startswith('var_'):
mean,std = tr[col].mean(),tr[col].std()
tr[col] = (tr[col]-mean)/std
te[col] = (te[col]-mean)/std
return tr,te
def getp_vec_sum(x,x_sort,y,std,c=0.5):
# x is sorted
left = x - std/c
right = x + std/c
p_left = np.searchsorted(x_sort,left)
p_right = np.searchsorted(x_sort,right)
p_right[p_right>=y.shape[0]] = y.shape[0]-1
p_left[p_left>=y.shape[0]] = y.shape[0]-1
return (y[p_right]-y[p_left])
def get_pdf(tr,col,x_query=None,smooth=3):
std = tr[col].std()
df = tr.groupby(col).agg({'target':['sum','count']})
cols = ['sum_y','count_y']
df.columns = cols
df = df.reset_index()
df = df.sort_values(col)
y,c = cols
df[y] = df[y].cumsum()
df[c] = df[c].cumsum()
if x_query is None:
rmin,rmax,res = -5.0, 5.0, 501
x_query = np.linspace(rmin,rmax,res)
dg = pd.DataFrame()
tm = getp_vec_sum(x_query,df[col].values,df[y].values,std,c=smooth)
cm = getp_vec_sum(x_query,df[col].values,df[c].values,std,c=smooth)+1
dg['res'] = tm/cm
dg.loc[cm<500,'res'] = 0.1
return dg['res'].values
def get_pdfs(tr):
y = []
for i in range(200):
name = 'var_%d'%i
res = get_pdf(tr,name)
y.append(res)
return np.vstack(y)
def print_corr(corr_mat,col,bar=0.97):
print(col)
cols = corr_mat.loc[corr_mat[col]>bar,col].index.values
cols_ = ['var_%s'%(i.split('_')[-1]) for i in cols]
print(cols)
return cols
# -
# **load data & group vars**
# %%time
path = '../input/'
tr = pd.read_csv('%s/train.csv'%path)
te = pd.read_csv('%s/test.csv'%path)
# %%time
tr,te = reverse(tr,te)
tr,te = scale(tr,te)
# %%time
prob = get_pdf(tr,'var_0')
plt.plot(prob)
# %%time
pdfs = get_pdfs(tr)
# %%time
df_pdf = pd.DataFrame(pdfs.T,columns=['var_prob_%d'%i for i in range(200)])
corr_mat = df_pdf.corr(method='pearson')
corr_mat.head()
plt.figure(figsize=(15,10))
sns.heatmap(corr_mat, cmap='RdBu_r', center=0.0)
plt.title('PDF Correlations',fontsize=16)
plt.show()
# **We can group features using this correlation matrix. For example, the PDFs of var_0 and var_2 are correlated above 0.97. We can confirm it using the figure below.**
plt.figure(figsize=(10,5))
plt.plot(pdfs[0],color='b',label='var_0')
plt.plot(pdfs[2],color='r',label='var_2')
plt.legend(loc='upper right')
# **We can find the group of a var using the following functions.**
cols = print_corr(corr_mat,'var_prob_12')
corr_mat.loc[cols,cols]
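# A possible extension (not in the original kernel): sweep all 200 probability columns and
# collect the groups whose PDFs are correlated above the same 0.97 threshold.
groups = {}
for i in range(200):
    col = 'var_prob_%d' % i
    members = corr_mat.loc[corr_mat[col] > 0.97, col].index.tolist()
    if len(members) > 1:  # keep only variables that correlate with at least one other PDF
        groups[col] = members
print(len(groups), 'variables belong to a correlated group')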
#
| 12 customer prediction/fast-pdf-calculation-with-correlation-matrix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import re
import spacy
import time
from empath import Empath
# -
# [`re`](https://docs.python.org/3/library/re.html) is a library for using regular expressions (patterns of characters used to match more varied strings)
df = pd.read_json("rjobs_2020_raw.json")
# +
# df.head()
# -
df.shape
# ### Keeping posts that have selftext and meet other criteria
df = df[(df["locked"]==False) & (df["selftext"] != "[removed]") & (df["selftext"] != "[deleted]")
& (~df["selftext"].isnull()) & (df["is_self"]==True) & (df["is_video"]==False) & (df["pinned"]==False)
& (df["stickied"]==False)]
df.shape
# ### Converting "created_utc" to the date and then creating different time variables
df["date"] = pd.to_datetime(df["created_utc"],unit="s") # "created_utc" is in seconds
df["dayofyear"] = df["date"].dt.dayofyear
df["hour"] = df["date"].dt.hour
df["dayofmonth"] = df["date"].dt.day
df["month"] = df["date"].dt.month
df["dayofweek"] = df["date"].dt.dayofweek
df["week"] = df["date"].dt.week
df["day_name"] = df["date"].dt.day_name()
df["month_name"] = df["date"].dt.month_name()
# ### Randomly shuffling the data and then creating anonymized author IDs
#
# I already anonymized these (the "author_pseudo" field) but this shows the process.
# +
df = df.sample(frac=1.0) # this random "sample" is 100% of the data points, just in a different order
authors = set(df["author_pseudo"]) # set of all the unique author usernames
author_ids = np.random.randint(0,1000000, len(authors)) # random numbers between 0 and 1,000,000
author_ids = author_ids.tolist() # convert it from an array to a list
while len(set(author_ids)) < len(authors): # because the numbers are random, some may be duplicates
author_ids = list(set(author_ids)) # cast it to set to nix duplicates, then back to list
    new = np.random.randint(0,1000000) # get one new random number in the same range as before
new_idx = np.random.randint(0,len(author_ids)-1) # get one random index
author_ids.insert(new_idx, new) # insert the new random number at the random index
# the while loop will keep going until there are the same number of unique IDs as there are unique authors
author_id = [f"{id_:0>6}" for id_ in author_ids] # convert them to strings with leading zeros, e.g. '000001'
author_id_dict = {author:id_str for author, id_str in zip(authors, author_id)} # dict mapping authors to their padded string IDs
df["author_id"] = df["author_pseudo"].apply(lambda x: author_id_dict[x]) # create new variable, anonymizing authors
# we drop the author column later
# -
assert len(set(authors)) == len(set(author_ids))
# ### Creating the "text" field by merging the title and selftext
df["text"] = df.apply(lambda row: row["title"] + "\n " + row["selftext"], axis = 1)
# ### Preprocessing
# +
def preprocess_post(post: str) -> str:
"""
Tokenize, lemmatize, remove stop words,
remove non-alphabetic characters.
"""
post = " ".join([word.lemma_ for word in nlp(post) if not word.is_stop])
post = re.sub("[^a-z]", " ", post.lower())
return re.sub("\s+", " ", post).strip()
nlp = spacy.load("en_core_web_sm", disable=["ner"])
# +
example = df.sample(1)["text"].values[0]
# df.sample(1) takes 1 row from the data frame at random
# ["text"] selects the "text" field (which is the combination of the "title" and "selftext" fields created above)
# .values turns it into a vector of strings, in this case just the one
# [0] takes the first element in the vector (which is also the only element)
# now we have the string for the text instead of a dataframe or some kind of vector
print(example)
# -
for word in nlp(example):
print(f"word: {word.text} | lemma: {word.lemma_} | part of speech: {word.pos_}")
print(preprocess_post(example))
# "<tt>nlp</tt>" is a language model from [spaCy](https://spacy.io/). It does part-of-speech tagging, named entity recognition, and more. `disable=["ner"]` tells it not to perform named entity recognition. Turning things off might speed it up
#
# The function <tt>preprocess_post</tt> is equivalent to the following:
#
# ```python
# def preprocess_post(post: str) -> str:
# """
# Tokenizes and returns the lowercase lemmas of
# tokens that are not stop words, minus any
# non-alphabetic characters
# """
# words = []
# for word in nlp(post): # each "word" in nlp(post) has been part-of-speech tagged, etc.
# if not word.is_stop: # ".is_stop" checks whether spacy has determined it's a stop word
# words.append(word.lemma_) # adding the lemma of the word, not the word itself, to the list
# post = " ".join(words) # converting the list of words to a string variable separated by spaces
# post = post.lower() # make everything lowercase
# post = re.sub("[^a-z]", " ", post) # now we replace non-alphabetic chars with spaces
# post = re.sub("\s+", " ", post) # now we replace long stretches of whitespace with a single space
# post = post.strip() # now we strip whitespace from the edges
# return post
# ```
# +
start_time = time.time()
df["preprocessed"] = df["text"].apply(preprocess_post)
print(f"Finished preprocessing {df.shape[0]} posts in {(time.time()-start_time)/60:.1f} minutes")
# -
# (This took about twice as long in Windows, for what that's worth)
# ### Calculating scores for each dictionary from Empath for each post in corpus
#
# This calls lexicon.analyze() on each preprocessed post. lexicon.analyze() returns a dictionary with lexical categories as keys and a post's score as the value for each. This creates a column (variable) for each key and populates it with each post's score.
# +
start_time = time.time()
lexicon = Empath()
df[list(lexicon.cats)] = df["preprocessed"].apply(lambda x: pd.Series(lexicon.analyze(x, normalize=True)))
print(f"Analyzed all posts in {(time.time()-start_time)/60:.1f} minutes")
# -
# A bit of Googling suggests .apply() is slow. Other methods also give warnings, but I'll look into alternatives.
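# One possible alternative (a sketch; the speed-up is an assumption, not benchmarked here):
# compute all of the Empath score dictionaries in a single list comprehension and build the
# columns with one DataFrame constructor instead of creating a pd.Series per row inside .apply().
# +
scores = pd.DataFrame([lexicon.analyze(text, normalize=True) for text in df["preprocessed"]],
                      index=df.index)
df[list(scores.columns)] = scores  # overwrites the same category columns with identical values
# -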
# ### That's it!
#
# Now we just get the subset of columns we want and export the whole dataframe to a JSON file.
df["id"] = [f"{i:0>5}" for i in range(df.shape[0])] # create an ID as a string, based on order
cols = ["id", "author_id", "score", "num_comments", "title", "selftext", "text", "preprocessed", "date", "dayofyear", "hour", "dayofmonth", "month", "dayofweek", "week", "day_name", "month_name"]
cols += sorted(list(lexicon.cats.keys())) # we add the categories from Empath so we keep the columns from that, too
df = df[cols]
df.head()
df.to_json("rjobs_2020_cleaned.json")
| notebooks/soc128d_preprocessing_corpus_for_notebook4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Support Vector Machines
"""
Construiremos un clasificador de mรกquinas de vectores de soporte
para predecir el nivel de ingresos de una persona determinada
en funciรณn de 14 atributos. Nuestro objetivo es ver
dรณnde el ingreso es mayor o menor que $ 50,000 por aรฑo
https://archive.ics.uci.edu/ml/datasets/Census+Income.
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsOneClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
# Data
input_file = 'income_data.txt'
X = []
y = []
count_class1 = 0
count_class2 = 0
max_datapoints = 25000
# +
# Open the file and read it line by line
with open(input_file, 'r') as f:
for line in f.readlines():
if count_class1 >= max_datapoints and count_class2 >= max_datapoints:
break
if '?' in line:
continue
data = line[:-1].split(', ')
if data[-1] == '<=50K' and count_class1 < max_datapoints:
X.append(data)
count_class1 += 1
if data[-1] == '>50K' and count_class2 < max_datapoints:
X.append(data)
count_class2 += 1
# Convert to a NumPy array so it can be processed by sklearn
X = np.array(X)
# -
# Encode the string fields as numbers
label_encoder = []
X_encoded = np.empty(X.shape)
for i, item in enumerate(X[0]):
if item.isdigit():
X_encoded[:,i] = X[:, i]
else:
label_encoder.append(preprocessing.LabelEncoder())
X_encoded[:, i] = label_encoder[-1].fit_transform(X[:, i])
X = X_encoded[:, :-1].astype(int)
y = X_encoded[:, -1].astype(int)
# Create the SVM with a linear kernel
classifier = OneVsOneClassifier(LinearSVC(random_state=0),n_jobs=-1)
# Training the classifier
classifier.fit(X,y)
# Train/test split for validation
X_train, X_test, y_train, y_test = train_test_split(X,y,
test_size=0.2, random_state=5)
classifier = OneVsOneClassifier(LinearSVC(random_state=0),n_jobs=-1)
classifier.fit(X_train, y_train)
y_test_pred = classifier.predict(X_test)
# Compute the classifier's F1 score
f1 = cross_val_score(classifier,X,y,
scoring='f1_weighted',cv=3)
print("F1 score: " + str(round(100*f1.mean(),2)) + "%")
input_data = ['37', 'Private', '215646', 'HS-grad', '9', 'Never-married', 'Handlers-cleaners', 'Not-in-family', 'White', 'Male', '0', '0', '40', 'United-States']
"""
Uso el ultimo de los datos reales disponibles
que no se uso para el entrenamient de la IA
para probar la predicciรณn que segun la evaluaciรณn es
71.35% acertada
"""
input_data_real = ["52", "Self-emp-inc", "287927", "HS-grad", "9", "Married-civ-spouse", "Exec-managerial", "Wife", "White", "Female", "15024", "0", "40", "United-States"]
# Encode test datapoint
input_data_encoded = [[-1] * len(input_data),[-1] * len(input_data)]
count = 0
for i, item in enumerate(input_data):
if item.isdigit():
input_data_encoded[0][i] = int(input_data[i])
input_data_encoded[1][i] = int(input_data_real[i])
else:
#print(label_encoder[count])
#print([input_data[i]])
#print(label_encoder[count].transform([np.array(input_data[i])]))
input_data_encoded[0][i] = int(label_encoder[count].transform([input_data[i]]))
input_data_encoded[1][i] = int(label_encoder[count].transform([input_data_real[i]]))
count += 1
input_data_encoded = np.array(input_data_encoded)
# Prediction: position 0 is the sample data point, position 1 is the real one
predicted_class = classifier.predict(input_data_encoded)
print(label_encoder[-1].inverse_transform(predicted_class))
# Testing the performance of the new classification
accuracy = 100.0 * (y_test == y_test_pred).sum() / X_test.shape[0]
print("Desempeรฑo de la nueva clasificaciรณn =", round(accuracy, 2), "%")
| supervised/SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.append('..')
import deeplabcut
import tensorflow as tf
print(tf.__version__)
import glob, os
base_path = '/media/ssd_storage/work_repos/MGH/bootstrap_round3_2019_08_14_videos/'
work_dir = '/media/ssd_storage/work_repos/MGH/deeplabcut_mgh_pose/experiments/'
# +
videos_bw46 = glob.glob(os.path.join(base_path, 'BW46', '*.avi'))
videos_mg51b = glob.glob(os.path.join(base_path, 'MG51b', '*.avi'))
videos_mg117 = glob.glob(os.path.join(base_path, 'MG117', '*.avi'))
videos_mg118 = glob.glob(os.path.join(base_path, 'MG118', '*.avi'))
videos_mg120b = glob.glob(os.path.join(base_path, 'MG120b', '*.avi'))
config_path_bw46 = deeplabcut.create_new_project(
project='mgh_pose_dlc_BW46_3',
experimenter='Kalpit',
videos=videos_bw46,
working_directory=work_dir,
copy_videos=True
)
config_path_mg51b = deeplabcut.create_new_project(
project='mgh_pose_dlc_MG51b_3',
experimenter='Kalpit',
videos=videos_mg51b,
working_directory=work_dir,
copy_videos=True
)
config_path_mg117 = deeplabcut.create_new_project(
project='mgh_pose_dlc_MG117_3',
experimenter='Kalpit',
videos=videos_mg117,
working_directory=work_dir,
copy_videos=True
)
config_path_mg118 = deeplabcut.create_new_project(
project='mgh_pose_dlc_MG118_3',
experimenter='Kalpit',
videos=videos_mg118,
working_directory=work_dir,
copy_videos=True
)
config_path_mg120b = deeplabcut.create_new_project(
project='mgh_pose_dlc_MG120b_3',
experimenter='Kalpit',
videos=videos_mg120b,
working_directory=work_dir,
copy_videos=True
)
# +
# Extract frames from the videos mentioned in the YAML at config_path and check if they are labeled (I generated
# labels by hand as we had already annotated them).
config_path_bw46 = os.path.join(work_dir, 'mgh_pose_dlc_BW46_2-Kalpit-2019-08-14', 'config.yaml')
config_path_mg51b = os.path.join(work_dir, 'mgh_pose_dlc_MG51b_2-Kalpit-2019-08-14', 'config.yaml')
config_path_mg117 = os.path.join(work_dir, 'mgh_pose_dlc_MG117_2-Kalpit-2019-08-14', 'config.yaml')
config_path_mg118 = os.path.join(work_dir, 'mgh_pose_dlc_MG118_2-Kalpit-2019-08-14', 'config.yaml')
config_path_mg120b = os.path.join(work_dir, 'mgh_pose_dlc_MG120b_2-Kalpit-2019-08-14', 'config.yaml')
config_path = config_path_bw46
deeplabcut.extract_frames(config_path, 'automatic', 'uniform', crop=False)
# deeplabcut.check_labels(config_path)
# -
# Run this function to create the training dataset for different subjects
deeplabcut.create_training_dataset(config_path_bw46)
#deeplabcut.create_training_dataset(config_path_mg51b)
#deeplabcut.create_training_dataset(config_path_mg117)
#deeplabcut.create_training_dataset(config_path_mg118)
#deeplabcut.create_training_dataset(config_path_mg120b)
# Train the network for the different subjects one by one (the parameters passed to train_network are reasonable, I think;
# you can add more options by referring to the deeplabcut readme if you want to)
config_path = config_path_bw46 # Change this value for different subject training
deeplabcut.train_network(config_path,
shuffle=1,
trainingsetindex=0,
max_snapshots_to_keep=5,
autotune=False,
displayiters=100,
saveiters=20000,
maxiters=800000)
config_path = config_path_bw46
# Analyze videos (generate predictions) and plot the predictions on the videos (this takes a long time)
# Note that, config_path is set in the previous cell already so it will use that subject here
vpath = '/media/data_cifs/MGH/videos/'
#videos = glob.glob(os.path.join(vpath, 'BW46', '*.avi'))
videos = glob.glob(os.path.join(vpath, 'BW46', '*.mp4'))
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
#deeplabcut.create_labeled_video(config_path, videos, save_frames=True)
# +
config_path = config_path_bw46
vpath = '/media/data_cifs/lakshmi/MGH/mgh-pose/test_videos/v3_DLC/BW46'
videos = glob.glob(os.path.join(vpath, '*.avi'))
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
#deeplabcut.create_labeled_video(config_path, videos, save_frames=True)
config_path = config_path_mg51b
vpath = '/media/data_cifs/lakshmi/MGH/mgh-pose/test_videos/v3_DLC/MG51b'
videos = glob.glob(os.path.join(vpath, '*.avi'))
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
config_path = config_path_mg117
vpath = '/media/data_cifs/lakshmi/MGH/mgh-pose/test_videos/v3_DLC/MG117'
videos = glob.glob(os.path.join(vpath, '*.avi'))
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
config_path = config_path_mg118
vpath = '/media/data_cifs/lakshmi/MGH/mgh-pose/test_videos/v3_DLC/MG118'
videos = glob.glob(os.path.join(vpath, '*.avi'))
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
config_path = config_path_mg120b
vpath = '/media/data_cifs/lakshmi/MGH/mgh-pose/test_videos/v3_DLC/MG120b'
videos = glob.glob(os.path.join(vpath, '*.avi'))
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
# -
| experiments/DLC_train_test_MGH.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # """Multiple Linear Regression model using Scikit-Learn"""
# ### 1. Importing the libs
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv('50_Startups.csv')
data
X = data.iloc[:,:-1].values
y = data.iloc[:,4].values
# ### 2. Encoding the categorical data.
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelEncoder = LabelEncoder()
X[:,3] = labelEncoder.fit_transform(X[:,3])
X[:,3]
oneHotEncoder = OneHotEncoder(categorical_features=[3])
X = oneHotEncoder.fit_transform(X).toarray()
# ### 3. Avoiding Dummy variable trap
X = X[:,1:]
# ### 4. Splitting the data into training and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# ### 5. Fitting the model to training set using scikit-learn.
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
# ### 6. Predicting the test set results.
y_pred = regressor.predict(X_test)
y_pred
y_test
# ### 7. Building an optimal model using backward Elimination.
import statsmodels.formula.api as sm
X = np.append(np.ones((50,1),dtype=int), X, axis=1)
X
X_opt = X[:,[0,1,2,3,4,5]]
regressor_OLS = sm.OLS(y, X_opt).fit()
regressor_OLS.summary()
X_opt = X[:,[0,2,3,4,5]]
regressor_OLS = sm.OLS(y,X_opt).fit()
regressor_OLS.summary()
X_opt = X[:,[0,3,4,5]]
regressor_OLS = sm.OLS(y,X_opt).fit()
regressor_OLS.summary()
X_opt = X[:,[0,3,5]]
regressor_OLS = sm.OLS(y,X_opt).fit()
regressor_OLS.summary()
X_opt = X[:,[0,3]]
regressor_OLS = sm.OLS(y,X_opt).fit()
regressor_OLS.summary()
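# ### 8. (Optional) Automating backward elimination
#
# The manual steps above can be wrapped in a loop. The sketch below is not part of the
# original notebook; it repeatedly drops the predictor with the largest p-value until all
# remaining p-values are below the chosen significance level.
def backward_elimination(X_all, y_all, significance_level=0.05):
    cols = list(range(X_all.shape[1]))          # start with every column (including the constant)
    model = sm.OLS(y_all, X_all[:, cols]).fit()
    while True:
        pvalues = np.asarray(model.pvalues)
        worst = int(np.argmax(pvalues))
        if pvalues[worst] <= significance_level:
            break
        cols.pop(worst)                         # drop the least significant predictor
        model = sm.OLS(y_all, X_all[:, cols]).fit()
    return cols, model
selected_cols, final_model = backward_elimination(X, y)
print(selected_cols)
final_model.summary()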
| Multiple Linear Regression with backward elemination/multiple_linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Lecture 4: MATH6005 Introduction to Python
#
# ### Introduction to scientific programming using NumPy (2)
#
# Dr <NAME>.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Topics covered
#
# * Recap previous lecture
# * Creating multi-dimensional arrays
# * Accessing multi-dimensional arrays (slicing continued)
# * Array operations
# * Numpy array functions
# - Min and Max
# - Basic statistics
# * Basic plotting
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Recap from previous lecture
#
# To use numpy, we need to import it first:
# + slideshow={"slide_type": "fragment"}
import numpy as np
# + [markdown] slideshow={"slide_type": "subslide"}
# Arrays can be created explicitly:
# + slideshow={"slide_type": "fragment"}
a = np.array([[1, 2], [3, 4]])
print(a)
# + [markdown] slideshow={"slide_type": "subslide"}
# They can also be created pre-populated with specific functions:
# + slideshow={"slide_type": "fragment"}
b = np.zeros((2, 2))
print(b)
# + [markdown] slideshow={"slide_type": "subslide"}
# Arrays have a handful of data types (e.g.: integer, float):
# + slideshow={"slide_type": "fragment"}
print(a)
print(type(a[0,0]))
# + slideshow={"slide_type": "fragment"}
print(b)
print(type(b[0,0]))
# + [markdown] slideshow={"slide_type": "slide"}
# And we need to be careful with these!
#
#
# We can use `a[0,0] = 1` as an index for an array, because it is an integer.
# + slideshow={"slide_type": "fragment"}
c = np.arange(5)
print(c)
# + slideshow={"slide_type": "fragment"}
print(a[0,0])
c[a[0,0]]
# -
# But we cannot use `b[0,0] = 0.0` as an index for `c`, because it is a **float**
print(b[0,0])
c[b[0,0]]
# + [markdown] slideshow={"slide_type": "slide"}
# To avoid errors, it is a good practice to specify them, using `dtype=...`:
# +
d = np.zeros((2,2), dtype=np.int32)
print(d)
# -
print(type(d[0,0]))
print(d[0,0])
c[d[0,0]]
# + [markdown] slideshow={"slide_type": "slide"}
# ## Creating multi-dimensional arrays
#
# There are a few ways of creating multi-dimensional arrays in numpy:
# * Providing the data explicitly
# * Reading a file (eg. csv)
# * Built-in functions
# * `np.zeros`
# * `np.ones`
# * `np.full`
# * `np.random`
#
# + [markdown] slideshow={"slide_type": "slide"}
# ### Providing the data explicitly
# This is what we have done before, converting a list of lists into an array.
# With three dimensions:
# +
td = np.array([
[[11,12], [13,14]],
[[21,22], [23,24]],
[[31,32], [33,34]]
])
print(td)
# -
# Since this is a 3D matrix, we can access its contents with **three** indices, eg. `[0,1,1]`
td[0,1,1]
# + [markdown] slideshow={"slide_type": "slide"}
# ### Reading a file
# It is done exactly as in the one-dimensional case. Let us consider the following CSV file:
# 
# + slideshow={"slide_type": "fragment"}
file_name = 'data/salaries.csv'
salary_data = np.genfromtxt(file_name, skip_header=1, delimiter=',')
# + [markdown] slideshow={"slide_type": "subslide"}
# **Tip:** The `shape` property tells you the shape of the data you have just read:
# + slideshow={"slide_type": "fragment"}
salary_data.shape
# + slideshow={"slide_type": "fragment"}
print(salary_data)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Skipping rows and columns
#
# We have seen how to skip rows at the top (skip_header). We can also skip columns.
#
# For example, imagine that we have a file called `salaries_extended.csv` that looks like the one below, and we are only interested in `Age` and `Salary`:
#
# 
#
# + [markdown] slideshow={"slide_type": "subslide"}
#
# We can use the option `usecols` and specify a tuple with the columns we want to read. In this case, we want only columns `2` and `4`.
# **Tip:** The index of the first column is `0`
# -
ext_salary_data = np.genfromtxt('data/salaries_extended.csv', skip_header=1, delimiter=',', dtype=np.int32)
print(ext_salary_data)
ext_salary_data = np.genfromtxt('data/salaries_extended.csv', skip_header=1, usecols=(2,4), delimiter=',', dtype=np.int32)
print(ext_salary_data)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Built-in functions
# * `np.zeros` Creates an array full of zeros
# * `np.ones` Creates an array full of ones
# * `np.full` Creates an array full of any value you specify
#
#
# Their syntax is similar: specify the shape with a tuple (eg. `(2,2)` for a 2 by 2 matrix) and, optionally, a type with `dtype`:
# -
np.zeros((2,2), dtype=np.int32)
np.ones((2,2,2), dtype=np.float64)
# + [markdown] slideshow={"slide_type": "subslide"}
# With `np.full`, you need to specify what it should be full of as well:
# + slideshow={"slide_type": "fragment"}
np.full((3,5), -7, dtype=np.int32)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Generating random numbers
#
# There are a few functions that allow you to fill arrays with random numbers. They are stored in `np.random`
# * `np.random.uniform` -> Uniform distribution
# * `np.random.normal` -> Normal distribution
# * `np.random.randint` -> Random integers
# * Many others available, check the documentation!
# -
# Each needs to be provided with a shape and its own parameters:
# For example, `randint` requires a "low" and a "high" integer. `low` is included in the possible values, but `high` is not:
# + slideshow={"slide_type": "fragment"}
np.random.randint(size=(3,3), low=3, high=10)
# + [markdown] slideshow={"slide_type": "subslide"}
# In the `normal` distribution, `loc` is the keyword for the mean and `scale` is the keyword for the standard deviation:
# -
np.random.normal(size=(2, 3, 3), loc=5, scale=3)
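# + [markdown] slideshow={"slide_type": "fragment"}
# `np.random.uniform` (the third function listed above) works the same way; `low` and `high` bound the interval the
# values are drawn from. A small sketch with arbitrary parameters:
# + slideshow={"slide_type": "fragment"}
np.random.uniform(size=(2, 2), low=0, high=1)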
# + [markdown] slideshow={"slide_type": "slide"}
# ## Accessing multi-dimensional arrays
#
# A single element of a multi-dimensional array can be accessed simply by stating the position in square brackets,
# eg. the first element of a matrix can be accessed with `[0,0]`
# + slideshow={"slide_type": "fragment"}
a = np.array([[1, 2], [3, 4]], dtype=np.float64)
print(a)
a[0,0]
# + [markdown] slideshow={"slide_type": "subslide"}
# If we only use one index, we would get a full row:
# -
a[0]
# + [markdown] slideshow={"slide_type": "fragment"}
# This can also be done using slicing.
#
#
# **Remember:** A slice is defined as `[start:end:step]`. Since we want a full row, we can use just `:`
# -
a[0,:]
# + [markdown] slideshow={"slide_type": "slide"}
# Slices can also be used on multiple indices simultaneously.
#
# Recall the 3D matrix example:
# + slideshow={"slide_type": "fragment"}
td = np.array([
[[11,12], [13,14]],
[[21,22], [23,24]],
[[31,32], [33,34]]
])
# + slideshow={"slide_type": "slide"}
# Meetoo question!
print(td)
slice = td[:,1,:]
# + slideshow={"slide_type": "fragment"}
# Meetoo question!
# print('\nSlice [:,1,:] is:\n')
# print(slice)
# print('\nShape is {0}:'.format(slice.shape))
# + [markdown] slideshow={"slide_type": "slide"}
# **Remember:** A slice is only a *view* of part of an array, not an independent copy.
#
# If we assign a slice to a variable and modify it, we are modifying the original array.
# + slideshow={"slide_type": "subslide"}
original = np.array([[1, 2], [3, 4], [5, 6]])
print('Original matrix:')
print(original)
# + slideshow={"slide_type": "fragment"}
middle_row = original[1,:]
print('\nMiddle row: {0}'.format(middle_row))
# + slideshow={"slide_type": "fragment"}
middle_row[0] = 0
print('Middle row modified: {0}'.format(middle_row))
# + slideshow={"slide_type": "fragment"}
# This changed original!!!
print('\nAfter modifying the slice the original matrix becomes:')
print(original)
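# + [markdown] slideshow={"slide_type": "fragment"}
# Aside: if an independent copy is needed, `copy()` can be used instead of a plain slice. The variable name
# `middle_row_copy` below is only for illustration.
# + slideshow={"slide_type": "fragment"}
middle_row_copy = original[1,:].copy()
middle_row_copy[0] = 99
print(original)  # unchanged, because we modified a copy rather than a view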
# + [markdown] slideshow={"slide_type": "slide"}
# This behaviour **also** occurs inside functions:
# -
def zero_zero_to_seven(input_array):
input_array[0,0] = 7
# +
test_array = np.ones((5,5))
zero_zero_to_seven(test_array[1:2,:])
zero_zero_to_seven(test_array[3:,3:])
zero_zero_to_seven(test_array)
test_array
# + [markdown] slideshow={"slide_type": "slide"}
# **Warning:** this does not occur if the function **reassigns** the argument name to a new array (the original is then left untouched):
# -
def array_times_seven(input_array):
input_array = input_array*7
# +
test_array = np.ones((5,5))
array_times_seven(test_array[3:,3:])
test_array
# +
def array_times_seven_view(input_array):
input_array[:,:] = input_array*7
test_array = np.ones((5,5))
array_times_seven_view(test_array[3:,3:])
test_array
# + [markdown] slideshow={"slide_type": "slide"}
# ### Meetoo question
# + slideshow={"slide_type": "fragment"}
# Meetoo question!
def change_slice(arr):
arr = arr*3
arr[3:] = 0
arr = np.ones(5)
change_slice(arr)
# print(arr)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Testing conditions on arrays
# + [markdown] slideshow={"slide_type": "-"}
# We can also test conditions on arrays.
# For example, let's check which salaries from the `salary_data` array are above 30k (recall the salary value is in the 4th column, index 3)
# + slideshow={"slide_type": "subslide"}
print(salary_data)
# -
upper_salaries = salary_data[:,3] >= 30000
print(upper_salaries)
# + [markdown] slideshow={"slide_type": "subslide"}
# This tells us which rows satisfy our condition. Such a boolean array can itself be used as an index.
#
# Let's print now all the entries for these rows:
# -
salary_data[upper_salaries]
# + slideshow={"slide_type": "fragment"}
print('IDs of employees with salary over 30k:')
salary_data[upper_salaries][:,0]
# + [markdown] slideshow={"slide_type": "subslide"}
# We can use this to calculate, for example, the average of these salaries:
# -
np.mean(salary_data[upper_salaries][:,3])
# + [markdown] slideshow={"slide_type": "fragment"}
# If we want to know the mean of the salaries below 30k, we can just negate `upper_salaries` with the `~` operator:
# -
print(upper_salaries)
print(~upper_salaries)
np.mean(salary_data[~upper_salaries][:,3])
# + [markdown] slideshow={"slide_type": "slide"}
# ## Array operations
# These operators are designed to work with `bool` arrays, i.e. the ones containing `True` or `False` values.
#
# Let's take a look at the following logical functions and operators:
# * Negation operator `~`
# * And operator `&`
# * Or operator `|`
# * `np.all()` function
# * `np.any()` function
# + slideshow={"slide_type": "subslide"}
# Let's create two boolean arrays, b1 and b2:
b1 = np.random.random((2, 4)) > 0.5
print('b1 =\n {0}'.format(b1))
b2 = np.random.random((2, 4)) > 0.5
print('\nb2 =\n {0}'.format(b2))
# + slideshow={"slide_type": "fragment"}
print('\nAnd operator: b1 & b2 =\n {0}'.format(b1 & b2))
# + slideshow={"slide_type": "fragment"}
print('\nOr operator: b1 | b2 =\n {0}'.format(b1 | b2))
# + [markdown] slideshow={"slide_type": "slide"}
# The functions `np.all()` and `np.any()` can be used to test whether all or any of the elements of a boolean array are `True`:
# -
np.all(b1)
np.all(np.array([True, True, True]))
np.any(b1)
np.any(np.array([False, False, False]))
# + [markdown] slideshow={"slide_type": "slide"}
# ### Arithmetic
# When talking about array arithmetic operations, we need to make a distinction. We say that operations are **element-wise**
# when they involve single elements and that they are **matrix-wise** when they involve the array treated
# as a matrix.
# -
# When we write the usual operators `*`, `/`, `+`, `-` they will perform **element-wise** operations:
# + slideshow={"slide_type": "fragment"}
print(a)
# + slideshow={"slide_type": "fragment"}
a*a
# + slideshow={"slide_type": "fragment"}
a-a
# + [markdown] slideshow={"slide_type": "subslide"}
# These operations can also be performed with a scalar (a single number):
# + slideshow={"slide_type": "fragment"}
print(a)
# + slideshow={"slide_type": "fragment"}
a*4
# + slideshow={"slide_type": "fragment"}
1/a
# + [markdown] slideshow={"slide_type": "subslide"}
# Matrix multiplication has its own operator, `@`
# -
a @ a
# + [markdown] slideshow={"slide_type": "subslide"}
# Similarly, dot product can be done with either `@` or `np.dot`
# -
np.ones(2) @ np.ones(2)
np.dot(np.ones(2), np.ones(2))
# + [markdown] slideshow={"slide_type": "slide"}
# Numpy has functions for almost any kind of matrix manipulation, for example:
# * Transposing a matrix: `np.transpose`
# * Inverse of a matrix: `np.linalg.inv`
# * Determinant of a matrix: `np.linalg.det`
# + slideshow={"slide_type": "fragment"}
print('Original matrix =\n{0}'.format(a))
print('\nTransposed =\n{0}'.format(np.transpose(a)))
print('\nInverse =\n{0}'.format(np.linalg.inv(a)))
print('\nDeterminant = {0:.2f}'.format(np.linalg.det(a)))
# + [markdown] slideshow={"slide_type": "slide"}
# ## Other numpy functions
# -
# Numpy offers a range of useful functions to deal with arrays. In Lecture 3 we saw `np.sum`.
# This will also work for 2D arrays:
# + slideshow={"slide_type": "fragment"}
lsize = 100
large_array = np.random.random((lsize, lsize))
np.sum(large_array)
# + [markdown] slideshow={"slide_type": "subslide"}
# **Tip:** Remember that using `np.sum` (or any numpy function) is normally much faster than coding the same function yourself.
# + slideshow={"slide_type": "fragment"}
def my_sum(large_array):
total_sum = 0
for i in range(lsize):
for j in range(lsize):
total_sum += large_array[i,j]
return(total_sum)
my_sum(large_array)
# + slideshow={"slide_type": "subslide"}
print('Numpy time:')
# t1 = %timeit -o np.sum(large_array)
print('\nOur own time:')
# t2 = %timeit -o my_sum(large_array)
print('\nThe average time of our function is {0:.0f} times slower than the average time of Numpy.'.format(t2.average/t1.average))
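# + [markdown] slideshow={"slide_type": "subslide"}
# Note that the `%timeit` magics above are commented out in this exported form, so `t1` and `t2` are undefined when
# the cell runs as a plain script. As a rough sketch of the same comparison using only the standard `time` module:
# + slideshow={"slide_type": "fragment"}
import time
start = time.perf_counter()
np.sum(large_array)
numpy_time = time.perf_counter() - start
start = time.perf_counter()
my_sum(large_array)
own_time = time.perf_counter() - start
print('Our own function took roughly {0:.0f} times longer than np.sum.'.format(own_time/numpy_time))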
# + [markdown] slideshow={"slide_type": "subslide"}
# Other useful functions are the following:
# * `np.max` get the maximum value of the array
# * `np.min` get the minimum value of the array
# +
print('The maximum value of large_array is {0:.8f}'.format(np.max(large_array)))
print('The minimum value of large_array is {0:.8f}'.format(np.min(large_array)))
# + [markdown] slideshow={"slide_type": "slide"}
# Need something we have not mentioned? Numpy is likely to have it!
# <img src="im/gsearch.png">
# + [markdown] slideshow={"slide_type": "slide"}
# ## Basic plotting
# There are many alternatives for plotting in python, but the most popular to use with numpy is `matplotlib.pyplot`.
#
# Let's import it in the usual way, calling it `plt`:
#
# + slideshow={"slide_type": "fragment"}
import matplotlib.pyplot as plt
# + [markdown] slideshow={"slide_type": "fragment"}
# Following with the salaries example, let's plot age against salary:
# + [markdown] slideshow={"slide_type": "slide"}
# Recall the salary data:
# -
print(salary_data)
# + slideshow={"slide_type": "subslide"}
plt.plot(salary_data[:,1], salary_data[:,3], 'ro')
# + [markdown] slideshow={"slide_type": "fragment"}
# **Tip:** In this example, `'ro'` defines the style, the `r` stands for red and the `o` for circles
# + [markdown] slideshow={"slide_type": "subslide"}
# Equally simple would be to create a histogram of the salaries, using `hist`:
# + slideshow={"slide_type": "fragment"}
plt.hist(salary_data[:,3])
# + [markdown] slideshow={"slide_type": "subslide"}
# **Tip:** Here, a lot of features (for example the number of bins to use) have been defined automatically, but they can be tweaked. As usual, we can check this in the function help:
# + slideshow={"slide_type": "fragment"}
help(plt.hist)
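# + [markdown] slideshow={"slide_type": "fragment"}
# For example, the number of bins can be set explicitly (a small illustration; the value 5 is arbitrary):
# + slideshow={"slide_type": "fragment"}
plt.hist(salary_data[:,3], bins=5)
plt.show()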
# + [markdown] slideshow={"slide_type": "slide"}
# **Line plots** If you want to plot a function, you can use the `plot` function. Let us plot `3x^2 + 4` with x values between 0 and 100
# -
number_of_points = 100
x = np.linspace(0, 100, number_of_points)
y = 3*x**2 + 4
plt.plot(x,y, 'g-')
# + [markdown] slideshow={"slide_type": "slide"}
# It is fairly easy to make plots "pretty" in matplotlib. We can add labels, title, legend...
# + slideshow={"slide_type": "fragment"}
plt.title('The title')
plt.xlabel('This is what I have to say about x')
plt.ylabel('Same for y')
plt.plot(x,2*x**2 + 40, 'r--', label='Red line')
plt.plot(x,y, 'g-', label='Green line')
plt.legend()
plt.show()
# -
# **Tip:** in Jupyter notebooks the plots will just appear automatically,
# but in regular scripts you need to call `plt.show()` to display the image, once you have added everything you want.
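# + [markdown] slideshow={"slide_type": "fragment"}
# As a minimal sketch, the same kind of plot in a standalone script would end with `plt.show()`; the cell below also
# runs in the notebook (the plotted function and variable name are arbitrary):
# + slideshow={"slide_type": "fragment"}
script_x = np.linspace(0, 10, 50)
plt.plot(script_x, np.sin(script_x), 'b-')
plt.show()  # in a plain .py script, this call is what actually displays the figure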
# + [markdown] slideshow={"slide_type": "slide"}
# ## Moving forward
# * The possibilities of what you can do with matplotlib are endless, so the official documentation, books (like the one suggested in the course) and Stack Overflow are your friends
# * **Remember all labs are tomorrow!** There is no lab on Friday, and no Drop-In session this week
# * If you have a chance, try and have a go at the lab material before the class
# * Assignment 2 deadline is around the corner!
# * Next lecture we will walk you through writing a complete program to solve a problem
#
| Lectures/Lecture4/Lecture4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import logging
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
import os
import cmocean
import matplotlib
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pathlib import Path
import matplotlib.gridspec as gridspec
import matplotlib.colors as colors
import datetime
from mpl_toolkits.axes_grid1 import AxesGrid
from cartopy.mpl.geoaxes import GeoAxes
from matplotlib import ticker, cm
# + pycharm={"name": "#%%\n"}
def set_extent_and_map_axes(ax):
# Hardangerfjord
ldom = np.array([[58.5, 62.0], [3, 7.5]])
# ax.set_extent([3,7.5,58.5,62], crs=ccrs.PlateCarree())
ax.set_xticks([3,4,5,7], crs=ccrs.PlateCarree())
ax.set_yticks([58,59,60], crs=ccrs.PlateCarree())
ax.xaxis.set_major_formatter(LongitudeFormatter(zero_direction_label=True))
ax.yaxis.set_major_formatter(LatitudeFormatter())
return ax
def add_colorbar(cs, levels, ax, fig):
steps = 4
ticks = [float("{:.1f}".format(levels[i])) for i in range(0, len(levels), steps)]
divider = make_axes_locatable(ax)
ax_cb = divider.new_horizontal(size="3%", pad=0.1, axes_class=plt.Axes)
fig.add_axes(ax_cb)
clb = plt.colorbar(cs,
fraction=0.01,
orientation="vertical",
ticks=ticks,
cax=ax_cb)
clb.ax.set_title("Depth (m)", fontsize=8)
def create_map_grid(ds, depths, levels):
fig = plt.figure(figsize=(12, 12))
ax = plt.subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.xaxis.set_major_formatter(LongitudeFormatter(zero_direction_label=True))
ax.yaxis.set_major_formatter(LatitudeFormatter())
ax = set_extent_and_map_axes(ax)
cs1 = ax.contourf(ds.lon_rho.values, ds.lat_rho.values, depths.values,
cmap=level_colormap(levels, cmap=plt.cm.get_cmap("RdYlBu_r")),
levels=levels,
zorder=2,
alpha=1.0,
extend="min",
transform=ccrs.PlateCarree())
# ax.scatter(ds.lon_rho.values, ds.lat_rho.values, c="r", s=1, zorder=10, transform=ccrs.PlateCarree())
# ax.coastlines(zorder=20, linewidth=2)
# ax.add_feature(cfeature.LAND, color="lightgrey", zorder=20)
# ax.coastlines(resolution="10m", linewidth=2, color="black", alpha=1.0, zorder=4)
ax.add_feature(cfeature.GSHHSFeature('high', edgecolor='black'), zorder=20)
plt.grid(True, zorder=0, alpha=0.5)
add_colorbar(cs1, levels, ax, fig)
plotfile = "Examples/Hardangerfjorden_160m_grid.png"
print("[CMIP6_plot] Created plot {}".format(plotfile))
plt.savefig(plotfile, dpi=200)
plt.show()
# -------------
# Colormap
# -------------
# Colormap, compare with <NAME>
def level_colormap(levels, cmap=None):
"""Make a colormap based on an increasing sequence of levels"""
# Start with an existing colormap
    if cmap is None:
        cmap = plt.get_cmap()  # use the current default colormap ("pl" was not defined anywhere)
# Spread the colours maximally
nlev = len(levels)
S = np.arange(nlev, dtype='float') / (nlev - 1)
A = cmap(S)
# Normalize the levels to interval [0,1]
levels = np.array(levels, dtype='float')
L = (levels - levels[0]) / (levels[-1] - levels[0])
# Make the colour dictionary
R = [(L[i], A[i, 0], A[i, 0]) for i in range(nlev)]
G = [(L[i], A[i, 1], A[i, 1]) for i in range(nlev)]
B = [(L[i], A[i, 2], A[i, 2]) for i in range(nlev)]
cdict = dict(red=tuple(R), green=tuple(G), blue=tuple(B))
# Use
return matplotlib.colors.LinearSegmentedColormap(
'%s_levels' % cmap.name, cdict, 256)
# + pycharm={"name": "#%%\n"}
ds=xr.open_dataset("/Users/trondkr/Dropbox/NIVA/NAUTILOS/Hardanger/norfjords_160m_grid.nc_A04.nc")
depths = -ds.h
levels = [ -1500, -1000, -900,-800,-700,-600,-500, -400,-300,
-200, -175, -150, -125, -100, -75, -50, -40, -30, -25,
-20, -15, -10, -5,0]
create_map_grid(ds, depths, levels)
| plot_grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import datetime as dt  # needed for the dt.date(...) calls below

from gs_quant.markets.index import Index
from gs_quant.session import Environment, GsSession
# Note: the PriceType enum used below must also be imported; its exact module depends on the gs_quant version.
# ## Pre Requisites
# To use below functionality on **STS Indices**, your application needs to have access to the following datasets:
# 1. [STSLEVELS](https://marquee.gs.com/s/developer/datasets/STSLEVELS) - Official Values of STS Indices
# 2. [STS_INDICATIVE_LEVELS](https://marquee.gs.com/s/developer/datasets/STS_INDICATIVE_LEVELS) - Indicative Values of STS Indices
#
# You can request access by going to the Dataset Catalog Page linked above.
#
# Note - Please skip this if you are an internal user
# external users should substitute their client id and secret; please skip this step if using internal jupyterhub
GsSession.use(Environment.PROD, client_id=None, client_secret=None, scopes=('read_product_data',))
index = Index.get('GSXXXXXX') # substitute input with any identifier for an index
# #### Close price functions support the following price types
# You may choose one of the following price types:
#
# - **Official Price:** PriceType.OFFICIAL_CLOSE_PRICE
# - **Indicative Price:** PriceType.INDICATIVE_CLOSE_PRICE - Currently supported for STS indices only.
#
# Default returns the official close price
index.get_latest_close_price(price_type=[PriceType.OFFICIAL_CLOSE_PRICE]) # returns latest official levels for the index.
index.get_close_price_for_date(dt.date(2021, 1, 7), price_type=[PriceType.OFFICIAL_CLOSE_PRICE]) # returns official levels for a given date.
index.get_close_prices(start=dt.date(2021, 1, 7), end=dt.date(2021, 3, 27), price_type=[PriceType.OFFICIAL_CLOSE_PRICE]) # returns official levels for a given date range.
index.get_close_prices(price_type=[PriceType.OFFICIAL_CLOSE_PRICE]) # returns all the official levels of the index.
# #### STS indices can use PriceType.INDICATIVE_CLOSE_PRICE as well to get the indicative levels
index.get_latest_close_price(price_type=[PriceType.OFFICIAL_CLOSE_PRICE, PriceType.INDICATIVE_CLOSE_PRICE]) # returns latest indicative and official levels of the index.
index.get_close_price_for_date(dt.date(2021, 1, 7), price_type=[PriceType.OFFICIAL_CLOSE_PRICE, PriceType.INDICATIVE_CLOSE_PRICE]) # returns both indicative and official levels of the index for a given date.
index.get_close_prices(start=dt.date(2021, 1, 7), end=dt.date(2021, 3, 27), price_type=[PriceType.OFFICIAL_CLOSE_PRICE, PriceType.INDICATIVE_CLOSE_PRICE]) # returns both indicative and official levels of the index for a given date range.
index.get_close_prices(price_type=[PriceType.OFFICIAL_CLOSE_PRICE, PriceType.INDICATIVE_CLOSE_PRICE]) # returns all the indicative and official levels of the index.
# *Have any other questions? Reach out to the [Marquee STS team](mailto:<EMAIL>)!*
| gs_quant/documentation/07_index/examples/0001_get_index_close_prices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # `MaxwellVacuumID`: An Einstein Toolkit thorn for generating initial data for Maxwell's equations
#
# ## Authors: <NAME>, <NAME>, & <NAME>
# ### Formatting improvements courtesy <NAME>
#
# ### NRPy+ Source Code for this module: [Maxwell/InitialData.py](../edit/Maxwell/InitialData.py) [\[**tutorial**\]](Tutorial-VacuumMaxwell_InitialData.ipynb) Constructs the SymPy expressions for toroidal dipole field initial data
#
# ## Introduction:
# In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for two formulations of Maxwell's equations. In a [previous tutorial notebook](Tutorial-VacuumMaxwell_InitialData.ipynb), we used NRPy+ to construct the SymPy expressions for toroidal dipole initial data. This thorn is largely based on and should function similarly to the NRPy+ generated [`IDScalarWaveNRPy`](Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb) thorn.
#
# We will construct this thorn in two steps.
#
# 1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.
# 1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module.
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows
#
# 1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules
# 1. [Step 2](#toroidal_id): NRPy+-generated C code kernels for toroidal dipole field initial data
# 1. [Step 3](#cclfiles): CCL files - Define how this module interacts and interfaces with the wider Einstein Toolkit infrastructure
# 1. [Step 3.a](#paramccl): `param.ccl`: specify free parameters within `MaxwellVacuumID`
# 1. [Step 3.b](#interfaceccl): `interface.ccl`: define needed gridfunctions; provide keywords denoting what this thorn provides and what it should inherit from other thorns
# 1. [Step 3.c](#scheduleccl): `schedule.ccl`: schedule all functions used within `MaxwellVacuumID`, specify data dependencies within said functions, and allocate memory for gridfunctions
# 1. [Step 4](#cdrivers): C driver functions for ETK registration & NRPy+-generated kernels
# 1. [Step 4.a](#etkfunctions): Initial data function
# 1. [Step 4.b](#makecodedefn): `make.code.defn`: List of all C driver functions needed to compile `MaxwellVacuumID`
# 1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='initializenrpy'></a>
#
# # Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](#toc)\]
#
# $$\label{initializenrpy}$$
# +
# Step 1: Import needed core NRPy+ modules
from outputC import lhrh # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import os, sys # Standard Python modules for multiplatform OS-level functions
# Step 1a: Create directories for the thorn if they don't exist.
# Create directory for MaxwellVacuumID thorn & subdirectories in case they don't exist.
outrootdir = "MaxwellVacuumID/"
cmd.mkdir(os.path.join(outrootdir))
outdir = os.path.join(outrootdir,"src") # Main C code output directory
cmd.mkdir(outdir)
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# -
# <a id='toroidal_id'></a>
#
# # Step 2: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](#toc)\]
# $$\label{toroidal_id}$$
#
# Using sympy, we construct the exact expressions for toroidal dipole field initial data currently supported in NRPy, documented in [Tutorial-VacuumMaxwell_InitialData.ipynb](Tutorial-VacuumMaxwell_InitialData.ipynb). We write the generated C code into separate C files, corresponding to the type of initial data the user may want to choose at run time. Note that the code below can easily be extended to include other types of initial data.
# +
import Maxwell.InitialData as mwid
# Set coordinate system. ETK only supports cartesian coordinates
CoordSystem = "Cartesian"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
# set up ID sympy expressions - System I
mwid.InitialData()
# x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
AIU = ixp.register_gridfunctions_for_single_rank1("EVOL","AIU")
EIU = ixp.register_gridfunctions_for_single_rank1("EVOL","EIU")
psiI = gri.register_gridfunctions("EVOL","psiI")
# Set which system to use, which are defined in Maxwell/VacuumMaxwell_Flat_Cartesian_ID.py
par.set_parval_from_str("Maxwell.InitialData::System_to_use","System_II")
# set up ID sympy expressions - System II
mwid.InitialData()
AIIU = ixp.register_gridfunctions_for_single_rank1("EVOL","AIIU")
EIIU = ixp.register_gridfunctions_for_single_rank1("EVOL","EIIU")
psiII = gri.register_gridfunctions("EVOL","psiII")
GammaII = gri.register_gridfunctions("EVOL","GammaII")
Maxwell_ID_SymbExpressions = [\
lhrh(lhs=gri.gfaccess("out_gfs","AIU0"),rhs=mwid.AidU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AIU1"),rhs=mwid.AidU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AIU2"),rhs=mwid.AidU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","EIU0"),rhs=mwid.EidU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","EIU1"),rhs=mwid.EidU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","EIU2"),rhs=mwid.EidU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","psiI"),rhs=mwid.psi_ID),\
lhrh(lhs=gri.gfaccess("out_gfs","AIIU0"),rhs=mwid.AidU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AIIU1"),rhs=mwid.AidU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AIIU2"),rhs=mwid.AidU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","EIIU0"),rhs=mwid.EidU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","EIIU1"),rhs=mwid.EidU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","EIIU2"),rhs=mwid.EidU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","psiII"),rhs=mwid.psi_ID),\
lhrh(lhs=gri.gfaccess("out_gfs","GammaII"),rhs=mwid.Gamma_ID)]
declare_string = """
const double x = xGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)];
const double y = yGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)];
const double z = zGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)];
"""
Maxwell_ID_CcodeKernel = fin.FD_outputC("returnstring",
Maxwell_ID_SymbExpressions,\
params="outCverbose=True")
Maxwell_ID_looped = lp.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
declare_string+Maxwell_ID_CcodeKernel).replace("time","cctk_time")\
.replace("xx0", "x")\
.replace("xx1", "y")\
.replace("xx2", "z")
# Step 4: Write the C code kernel to file.
with open(os.path.join(outdir,"Maxwell_ID.h"), "w") as file:
file.write(str(Maxwell_ID_looped))
# -
# <a id='cclfiles'></a>
#
# # Step 3: ETK `ccl` file generation \[Back to [top](#toc)\]
# $$\label{cclfiles}$$
#
# <a id='paramccl'></a>
#
# ## Step 3.a: `param.ccl`: specify free parameters within `MaxwellVacuumID` \[Back to [top](#toc)\]
# $$\label{paramccl}$$
#
# All parameters necessary for the computation of the initial data expressions are registered within NRPy+; we use this information to automatically generate `param.ccl`. NRPy+ also specifies default values for each parameter.
#
# More information on `param.ccl` syntax can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-184000D2.3).
# +
def keep_param__return_type(paramtuple):
keep_param = True # We'll not set some parameters in param.ccl;
# e.g., those that should be #define'd like M_PI.
typestring = ""
# Separate thorns within the ETK take care of grid/coordinate parameters;
# thus we ignore NRPy+ grid/coordinate parameters:
if paramtuple.module == "grid" or paramtuple.module == "reference_metric":
keep_param = False
partype = paramtuple.type
if partype == "bool":
typestring += "BOOLEAN "
elif partype == "REAL":
if paramtuple.defaultval != 1e300: # 1e300 is a magic value indicating that the C parameter should be mutable
typestring += "CCTK_REAL "
else:
keep_param = False
elif partype == "int":
typestring += "CCTK_INT "
elif partype == "#define":
keep_param = False
elif partype == "char":
print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+
" has unsupported type: \""+ paramtuple.type + "\"")
sys.exit(1)
else:
print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+
" has unsupported type: \""+ paramtuple.type + "\"")
sys.exit(1)
return keep_param, typestring
with open(os.path.join(outrootdir,"param.ccl"), "w") as file:
file.write("""
# This param.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
shares: grid
USES KEYWORD type
CCTK_KEYWORD initial_data "Type of initial data"
{
"toroid" :: "Toroidal Dipole field"
} "toroid"
restricted:
""")
paramccl_str = ""
for i in range(len(par.glb_Cparams_list)):
# keep_param is a boolean indicating whether we should accept or reject
# the parameter. singleparstring will contain the string indicating
# the variable type.
keep_param, singleparstring = keep_param__return_type(par.glb_Cparams_list[i])
if keep_param:
parname = par.glb_Cparams_list[i].parname
partype = par.glb_Cparams_list[i].type
singleparstring += parname + " \""+ parname +" (see NRPy+ for parameter definition)\"\n"
singleparstring += "{\n"
if partype != "bool":
singleparstring += " *:* :: \"All values accepted. NRPy+ does not restrict the allowed ranges of parameters yet.\"\n"
singleparstring += "} "+str(par.glb_Cparams_list[i].defaultval)+"\n\n"
paramccl_str += singleparstring
file.write(paramccl_str)
# -
# <a id='interfaceccl'></a>
#
# ## Step 3.b: `interface.ccl`: define needed gridfunctions; provide keywords denoting what this thorn provides and what it should inherit from other thorns \[Back to [top](#toc)\]
# $$\label{interfaceccl}$$
#
# `interface.ccl` declares all gridfunctions and determines how `MaxwellVacuumID` interacts with other Einstein Toolkit thorns.
#
# The [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-179000D2.2) defines what must/should be included in an `interface.ccl` file.
# +
evol_gfs_list = []
for i in range(len(gri.glb_gridfcs_list)):
if gri.glb_gridfcs_list[i].gftype == "EVOL":
evol_gfs_list.append( gri.glb_gridfcs_list[i].name+"GF")
# NRPy+'s finite-difference code generator assumes gridfunctions
# are alphabetized; not sorting may result in unnecessary
# cache misses.
evol_gfs_list.sort()
with open(os.path.join(outrootdir,"interface.ccl"), "w") as file:
file.write("""
# With "implements", we give our thorn its unique name.
implements: MaxwellVacuumID
# By "inheriting" other thorns, we tell the Toolkit that we
# will rely on variables/function that exist within those
# functions.
inherits: MaxwellVacuum grid
""")
# -
# <a id='scheduleccl'></a>
#
# ## Step 3.c: `schedule.ccl`: schedule all functions used within `MaxwellVacuumID`, specify data dependencies within said functions, and allocate memory for gridfunctions \[Back to [top](#toc)\]
# $$\label{scheduleccl}$$
#
# Official documentation on constructing ETK `schedule.ccl` files is found [here](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-187000D2.4).
with open(os.path.join(outrootdir,"schedule.ccl"), "w") as file:
file.write("""
# This schedule.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
schedule Maxwell_InitialData at CCTK_INITIAL as Maxwell_InitialData
{
STORAGE: MaxwellVacuum::evol_variables[3]
LANG: C
} "Initial data for Maxwell's equations"
""")
# <a id='cdrivers'></a>
#
# # Step 4: C driver functions for ETK registration & NRPy+-generated kernels \[Back to [top](#toc)\]
# $$\label{cdrivers}$$
#
# Now that we have constructed the basic C code kernels and the needed Einstein Toolkit `ccl` files, we next write the driver functions for registering `MaxwellVacuumID` within the Toolkit and the C code kernels. Each of these driver functions is called directly from [`schedule.ccl`](#scheduleccl).
make_code_defn_list = []
def append_to_make_code_defn_list(filename):
if filename not in make_code_defn_list:
make_code_defn_list.append(filename)
return os.path.join(outdir,filename)
# <a id='etkfunctions'></a>
#
# ## Step 4.a: Initial data function \[Back to [top](#toc)\]
# $$\label{etkfunctions}$$
#
# Here we define the initial data function, and how it's to be called in the `schedule.ccl` file by ETK.
with open(append_to_make_code_defn_list("InitialData.c"),"w") as file:
file.write("""
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void Maxwell_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "Maxwell_ID.h"
}
""")
# <a id='makecodedefn'></a>
#
# ## Step 4.b: `make.code.defn`: List of all C driver functions needed to compile `MaxwellVacuumID` \[Back to [top](#toc)\]
# $$\label{makecodedefn}$$
#
# When constructing each C code driver function above, we called the `append_to_make_code_defn_list()` function, which built a list of each C code driver file. We'll now add each of those files to the `make.code.defn` file, used by the Einstein Toolkit's build system.
with open(os.path.join(outdir,"make.code.defn"), "w") as file:
file.write("""
# Main make.code.defn file for thorn MaxwellVacuumID
# Source files in this directory
SRCS =""")
filestring = ""
for i in range(len(make_code_defn_list)):
filestring += " "+make_code_defn_list[i]
if i != len(make_code_defn_list)-1:
filestring += " \\\n"
else:
filestring += "\n"
file.write(filestring)
# <a id='latex_pdf_output'></a>
#
# # Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-ETK_thorn-MaxwellVacuumID.pdf](Tutorial-ETK_thorn-MaxwellVacuumID.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-MaxwellVacuumID")
| Tutorial-ETK_thorn-MaxwellVacuumID.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 1: Python basics
# Student I: Karde799 (<NAME>)
#
# Student II: Sresa472 (<NAME>)
# ### A word of caution
#
# There are currently two versions of Python in common use, Python 2 and Python 3, which are not 100% compatible. Python 2 is slowly being phased out but has a large enough install base to still be relevant. This course uses the more modern Python 3 but while searching for help online it is not uncommon to find help for Python 2. Especially older posts on sources such as Stack Exchange might refer to Python 2 as simply "Python". This should not cause any serious problems but keep it in mind whenever googling. With regards to this lab, the largest differences are how `print` works and the best practice recommendations for string formatting.
# ### References to R
#
# Most students taking this course who are not already familiar with Python will probably have some experience of the R programming language. For this reason, there will be intermittent references to R throughout this lab. For those of you with a background in R (or MATLAB/Octave, or Julia) the most important thing to remember is that indexing starts at 0, not at 1.
# ### Recommended Reading
#
# This course is not built on any specific source and no specific literature is required. However, for those who prefer to have a printed reference book, we recommend the books by <NAME>:
#
# * Learning Python by <NAME>, 5th edition, O'Reilly. Recommended for those who have no experience of Python. This book is called LP in the text below.
#
# * Programming Python by <NAME>, 4th edition, O'Reilly. Recommended for those who have some experience with Python, it generally covers more advanced topics than what is included in this course but gives you a chance to dig a bit deeper if you're already comfortable with the basics. This book is called PP in the text.
#
# For the student interested in Python as a language, it is worth mentioning
# * Fluent Python by <NAME> (also O'Reilly). Note that it is - at the time of writing - still in its first edition, from 2015. Thus newer features will be missing.
# ### A note about notebooks
#
# When using this notebook, you can enter python code in the empty cells, then press ctrl-enter. The code in the cell is executed and if any output occurs it will be displayed below the square. Code executed in this manner will use the same environment regardless of where in the notebook document it is placed. This means that variables and functions assigned values in one cell will thereafter be accessible from all other cells in your notebook session.
#
# Note that the programming environments described in section 1 of LP is not applicable when you run python in this notebook.
# ### A note about the structure of this lab
#
# This lab will contain tasks of varying difficulty. There might be cases when the solution seems too simple to be true (in retrospect), and cases where you have seen similar material elsewhere in the course. Don't be fooled by this. In many cases, the task might just serve to remind us of things that are worthwhile to check out, or to find out how to use a specific method.
#
# We will be returning to, and using, several of the concepts in this lab.
# ### 1. Strings and string handling
#
# The primary datatype for storing raw text in Python is the string. Note that there is no character datatype, only strings of length 1. This can be compared to how there are no atomic numbers in R, only vectors of length 1. A reference to the string datatype can be found __[here](https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str)__.
#
# [Literature: LP: Part II, especially Chapter 4, 7.]
# a) Define the variable `parrot` as the string containing the sentence _It is dead, that is what is wrong with it. This is an ex-"Parrot"!_.
#
# [Note: If you have been programming in a language such as C or Java, you might be a bit confused about the term "define". Different languages use different terms when creating variables, such as "define", "declare", "initialize", etc. with slightly different meanings. In statically typed languages such as C or Java, declaring a variable creates a name connected to a container which can contain data of a specific type, but does not put a value in that container. Initialization is then the act of putting an initial value in such a container. Defining a variable is often used as a synonym to declaring a variable in statically typed languages but as Python is dynamically typed, i.e. variables can contain values of any type, there is no need to declare variables before initializing them. Thus, defining a variable in python entails simply assigning a value to a new name, at which point the variable is both declared and initialized. This works exactly as in R.]
parrot = "it is dead, that is what is wrong with it. This is an ex-ยจparrotยจ!"
print(parrot)
# b) What methods does the string now called `parrot` (or indeed any string) seem to support? Write Python commands below to find out.
dir(parrot)
# c) Count the number of characters (letters, blank space, commas, periods
# etc) in the sentence.
len(parrot)
# d) If we type `parrot + parrot`, should it change the string itself, or merely produce a new string? How would you test your intuition? Write expressions below.
parrot + parrot
# It produces a new string by concatenating the two; `parrot` itself is unchanged, since strings are immutable.
# e) Separate the sentence into a list of words (possibly including separators) using a built-in method. Call the list `parrot_words`.
parrot_words = parrot.split()
print(parrot_words)
# f) Merge (concatenate) `parrot_words` into a string again.
merged_parrot = ' '.join(parrot_words)
print(merged_parrot)
# g) Create a string `parrot_info` which consists of "The length of parrot_info is 66." (the length of the string should be calculated automatically, and you may not write any numbers in the string). Use f-string syntax!
parrot_info = f"The Length of parrot_info is {len(parrot)}."
print(parrot_info)
# ### 2. Iteration, sequences and string formatting
#
# Loops are not as painfully slow in Python as they are in R and thus, not as critical to avoid. However, for many use cases, _comprehensions_, like _list comprehensions_ or _dict comprehensions_ are faster. In this assignment we will see both traditional loop constructs and comprehensions. For an introduction to comprehensions, __[this](https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html)__ might be a good place to start.
#
# It should also be noted that what Python calls lists are unnamed sequences. As in R, a Python list can contain elements of many types, however, these can only be accessed by indexing or sequence, not by name as in R.
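# +
# A small illustration of the two styles mentioned above (not tied to any task): squaring the numbers 0..4 with a
# loop and with a list comprehension gives the same result. The variable names are only illustrative.
squares_loop = []
for i in range(5):
    squares_loop.append(i**2)
squares_comp = [i**2 for i in range(5)]
print(squares_loop, squares_comp)
# -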
# a) Write a `for`-loop that produces the following output on the screen:<br>
# > `The next number in the loop is 5`<br>
# > `The next number in the loop is 6`<br>
# > ...<br>
# > `The next number in the loop is 10`<br>
#
# [Hint: the `range` function has more than one argument.]<br>
# [Literature: For the range construct see LP part II chapter 4 (p.112).]
for n in range(5,11,1):
print(f"The next number in the loop is {n}.")
# b) Write a `for`-loop that for a given`n` sets `first_n_squared` to the sum of squares of the first `n` numbers (0..n-1). Call the iteration variable `i`.
# +
n = 100 # If we change this and run the code, the value of first_n_squared should change afterwards!
first_n_squared = 0
for i in range(n):
first_n_squared = first_n_squared + i**2
print(f"The square of {n-1} factorial is {first_n_squared}")
#first_n_squared should return 0^2 + 1^2 + ... + 99^2 = 328350 if n = 100
# -
# Hint (not mandatory): iteration is often about a gradual procedure of updating or computing. Write out, on paper, how you would compute $0^2$, $0^2 + 1^2$, $0^2 + 1^2 + 2^2$, and consider what kinds of gradual updates you might want to perform.
# c) It is often worth considering what a piece of code actually contributes. Think about a single loop iteration (when we go through the body of the loop). What should the variable `first_n_squared` contain _before_ a loop iteration? What should the loop iteration contribute? What does it contain _after_ ? A sentence or two for each is enough. Write this as a code comment in the box below:
"""
Before a loop iteration: The first_n_squared should contain value as zero before the for-loop starts and after each iteration it should have value with sum from 0 to i-1.
After: After each time, the square value of the i value is added to the first_n_square variable which contains
the summation of square values of (i-1).
Contributed:Each iteration the square of the i value is added to the summation value.
"""
# Hint:
# * Your answer might involve the iteration variable `i` (informally: the current number we're looking at in the loop).
# * After all the loop iterations are done (and your iteration variable has reached _n - 1_ ), it should contain the sum $0^2 + 1^2 + ... + (n-1)^2$. Does your explanation suggest that this should be the case?
# [Tangent: this form of reasoning can form the basis of a mathematical correctness proof for algorithms, that enables us to formally check that code does what it should. This is quite beyond the scope of this course, but the (CS-)interested reader might want to consider reading up on eg [loop invariants](https://en.wikipedia.org/wiki/Loop_invariant), We only go into it at the level of detail that actually forces us to think about what our (simple) code does.]
# d) Write a code snippet that counts the number of __letters__ (alphabetic characters) in `parrot` (as defined above). Use a `for` loop.
def countLetters(x):
alphabets = 0
for letters in x:
if(letters.isalpha()):
alphabets += 1
print("The total number of letters are {0}".format(alphabets))
countLetters(parrot)
# e) Explain your letter-counting code in the same terms as above (before, after, contributed).
"""
Before a loop iteration: Two variable is created with value zero before the iteration and for each iteration it should have the count of character of i-1 position.
After: After each iteration there will be a value added to the alphabet variable.
Contributed: Each iteration the number of each available character is contributed towards the output.
"""
# f) Write a for-loop that iterates over the list `names` below and presents them on the screen in the following fashion:
#
# > `The name Tesco is nice`<br>
# > ...<br>
# > `The name Zeno is nice`<br>
#
# Use Python's string formatting capabilities (the `format` function in the string class) to solve the problem.
#
# [Warning: The best practices for how to do string formatting differs from Python 2 and 3, make sure you use the Python 3 approach.]<br>
# [Literature: String formatting is covered in LP part II chapter 7.]
names = ['Tesco', 'Forex', 'Alonzo', 'Zeno']
for name in names:
print("The name {0} is nice".format(name))
# g) Write a for-loop that iterates over the list `names` and produces the list `n_letters` (`[5,5,6,4]`) with the length of each name.
n_letters = list()
for i in range(len(names)):
n_letters.append(len(names[i]))
print(f"The length of each name is {n_letters}")
# h) How would you - in a Python interpreter/REPL or in this Notebook - retrieve the help for the built-in function `max`?
help(max)
# i) Show an example of how `max` can be used with an iterable of your choice.
set_number = [5,10,25,12,13,11,14,18]
print(f"The largest number in the set is {max(set_number)}")
# j) Use a comprehension (or generator) to calculate the sum 0^2 + ... + (n-1)^2 as above.
n = 100
first_n_square = sum([x**2 for x in range(n)])
print(f"The sum of square untill 99 is {first_n_square}")
# k) Solve the previous task using a list comprehension.
#
# [Literature: Comprehensions are covered in LP part II chapter 4.]
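# +
# A possible answer sketch: the sum can be written with a list comprehension (task k) and, for comparison, with a
# generator expression (which avoids building the intermediate list).
n = 100
first_n_squared_list = sum([x**2 for x in range(n)])  # list comprehension
first_n_squared_gen = sum(x**2 for x in range(n))     # generator expression
print(first_n_squared_list, first_n_squared_gen)
# -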
# l) Use a list comprehension to produce a list `short_long` that indicates if the name (in the list `names`) has more than four letters. The answer should be `['long', 'long', 'long', 'short']`.
short_long = ["long" if len(names[name]) > 4 else "short" for name in range(4)]
print(short_long)
# m) Use a comprehension to count the number of letters in `parrot`. You may not use a `for`-loop. (The comprehension will contain the word `for`, but it isn't a `for ... in ...:`-statement.)
parrot_number = sum([1 for letter in parrot if letter.isalpha()])
print(parrot_number)
# [Note: this is fairly similar to the long/short task, but note how we access member functions of the values.]
# n) Below we have the string `datadump`. Retrieve the substring string starting at character 27 (that is "o") and ending at character 34 ("l") by means of slicing.
datadump = "The name of the game is <b>old html</b>. That is <b>so cool</b>."
datadump[27:35]
# o) Write a loop that uses indices to __simultaneously__ loop over the lists `names` and `short_long` to write the following to the screen:
#
# > `The name Tesco is a long name`<br>
# > ...<br>
# > `The name Zeno is a short name`<br>
z = [f"The name {names[i]} is a {short_long[i]} name" for i in range(len(names))]
print(z)
c = [print(z[i]) for i in range(len(z))]
"""Here the type of each z element are of type string but the type of c is NoneType. If a second comprehension for printing the z components then it will print exactly the way it is asked in question or it will print the way it will prit a list """
# Note: this is a common programming pattern, though not particularly Pythonic in this use case. We do however need to know how to use indices in lists to work properly with Python.
# p) Do the task above once more, but this time without the use of indices.
for name,length in zip(names,short_long):
print(f"The name {name} is a {length} name")
# [Hint: Use the `zip` function.]<br>
# [Literature: zip usage with dictionary is found in LP part II chapter 8 and dictionary comprehensions in the same place.]
# q) Among the built-in datatypes, it is also worth mentioning the tuple. Construct two tuples, `one` containing the number one and `two` containing the number 1 and the number 2. What happens if you add them? Name some method that a list with similar content (such as `two_list` below) would support, that `two` doesn't and explain why this makes sense.
one = (1,) # Change this.
two = (1,2) # Change this
print(one+two)
two_list = [1, 2]
"""It will merge the one and two tuple. There is add function in tuples.Tuple can contains all types so adding a string and int will not make sense. append method is there for list but not for tuples. It is because Tuples is immutable object so append can not append any element to the existing tuple."""
# ### 3. Conditionals, logic and while loops
# a) Below we have an integer called `n`. Write code that prints "It's even!" if it is even, and "It's odd!" if it's not.
n = 5 # Change this to other values and run your code to test.
def odd_even(x):
    if (x % 2) == 0:
        print("It's even!")
    else:
        print("It's odd!")
odd_even(n)
# b) Below we have the list `options`. Write code (including an `if` statement) that ensures that the boolean variable `OPTIMIZE` is True _if and only if_ the list contains the string `--optimize` (exactly like that).
OPTIMIZE = None # Or some value which we are unsure of.
options = ['--print-results','--optimize', '-x']# This might have been generated by a GUI or command line option
if('--optimize' in options):
OPTIMIZE =True
else:
OPTIMIZE = False
OPTIMIZE
# Note: It might be tempting to use a `for` loop. In this case, we will not be needing this, and you may _not_ use it. Python has some useful built-ins to test for membership.
#
# You may use an `else`-free `if` statement if you like.
# c) Redo the task above, but now consider the case where the boolean `OPTIMIZE` is True _if and only if_ the `options` list contains either `--optimize` or `-o` (or both). **You may only use one if-statement**.
# +
OPTIMIZE = None # Or some value which we are unsure of.
options = ['--print-results', '-o', '-x'] # This might have been generated by a GUI or command line option
requirements = ['-o', '--optimize']
if(any([i in options for i in requirements])):
OPTIMIZE = True
else:
OPTIMIZE = False
print(OPTIMIZE)
# -
# [Hint: Don't forget to test your code with different versions of the options list!
#
# If you find something that seems strange, you might want to check what the value of the _condition itself_ is.]
#
# [Note: This extension of the task is included as it includes a common source of hard-to-spot bugs.]
# d) Sometimes we can avoid using an `if` statement altogether. The task above is a prime example of this (and was introduced to get some practice with the `if` statement). Solve the task above in a one-liner without resorting to an `if` statement. (You may use an `if` expression, but you don't have to.)
# +
options = ['--print-results', '-o', '-x'] # This might have been generated by a GUI or command line option
OPTIMIZE = None # Replace None with your single line of code.
# Here OPTIMIZE should be True if and only if we found '--optimize' or '-o' in the list.
requirements = ['-o', '--optimize']
OPTIMIZE = any([i in options for i in requirements])
print(OPTIMIZE)
# -
# [Hint: What should the value of the condition be when you enter the then-branch of the `if`? When you enter the else-branch?]
# e) Write a `while`-loop that repeatedly generates a random number from a uniform distribution over the interval [0,1], and prints the sentence 'The random number is smaller than 0.9' on the screen until the generated random number is greater than 0.9.
#
# [Hint: Python has a `random` module with basic random number generators.]<br/>
#
# [Literature: Introduction to the Random module can be found in LP part III chapter 5 (Numeric Types). Importing modules is introduced in part I chapter 3 and covered in depth in part IV.]
import random
v = random.uniform(0, 1)  # draw a first value so the loop condition is defined
while v < 0.9:
    print('The random number is smaller than 0.9')
    v = random.uniform(0, 1)
# ### 4. Dictionaries
#
# Dictionaries are association tables, or maps, connecting a key to a value. For instance a name represented by a string as key with a number representing some attribute as a value. Dictionaries can themselves be values in other dictionaries, creating nested or hierarchical data structures. This is similar to named lists in R but keys in Python dictionaries can be more complex than just strings.
#
# [Literature: Dictionaries are found in LP section II chapter 4.]
# a) Make a dictionary named `amadeus` containing the information that the student Amadeus is a male, scored 8 on the Algebra exam and 13 on the History exam. The dictionary should NOT include a name entry.
Amadeus = {'Gender':'male','Algebra':8,'History':13}
# b) Make three more dictionaries, one for each of the students: Rosa, Mona and Ludwig, from the information in the following table:
#
# | Name | Gender | Algebra | History |
# | :-----------: | :-----------: |:-------------:| :------:|
# | Rosa | Female | 19 | 22 |
# | Mona | Female | 6 | 27 |
# | Ludwig | Other | 12 | 18 |
Rosa = {'Gender':'Female','Algebra':19,'History':22}
Mona = {'Gender':'Female','Algebra':6,'History':27}
Ludwig = {'Gender':'other','Algebra':12,'History':18}
# + active=""
# # c) Combine the four students in a dictionary named `students` such that a user of your dictionary can type `students['Amadeus']['History']` to retrieve Amadeus' score on the history test.
#
# [HINT: The values in a dictionary can be dictionaries.]
# -
students = {'Amadeus':Amadeus,'Rosa':Rosa,'Mona':Mona,'Ludwig':Ludwig}
students['Amadeus']['History']
# d) Add the new male student Karl to the dictionary `students`. Karl scored 14 on the Algebra exam and 10 on the History exam.
students['Karl'] = {'Gender':'male','Algebra':14,'History':10}
# e) Use a `for`-loop to print out the names and scores of all students on the screen. The output should look like something this (the order of the students doesn't matter):
#
# > `Student Amadeus scored 8 on the Algebra exam and 13 on the History exam`<br>
# > `Student Rosa scored 19 on the Algebra exam and 22 on the History exam`<br>
# > ...
#
# [Hint: Dictionaries are iterables, also, check out the `items` function for dictionaries.]
for name, score in students.items():  # Change the names of iteration variables to something more suitable than k, v.
print(f"Student {name} scored {score['Algebra']} on the Algebra exam and {score['History']} on the History exam")
# f) Use a dict comprehension and the lists `names` and `short_long` from assignment 2 to create a dictionary of names and wether they are short or long. The result should be a dictionary equivalent to {'Forex':'long', 'Tesco':'long', ...}.
zipp = zip(names,short_long)
sl_dict = {name:size for (name,size) in zipp}
print(sl_dict)
# ### 5. Introductory file I/O
#
# File I/O in Python is a bit more general than what most R programmers are used to. In R, reading and writing files are usually performed using file type specific functions such as `read.csv` while in Python we usually start with reading standard text files. However, there are lots of specialized functions for different file types in Python as well, especially when using the __[pandas](http://pandas.pydata.org/)__ library which is built around a datatype similar to R DataFrames. Pandas will not be covered in this course though.
#
# [Literature: Files are introduced in LP part II chapter 4 and chapter 9.]
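# For comparison only (pandas itself is not covered in this course): the tab-separated file used in
# the tasks below could also be read in a single call with pandas, much like R's `read.csv`.
import pandas as pd
pd.read_csv('students.tsv', sep='\t', header=None)  # assuming the file has no header row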
# The file `students.tsv` contains tab separated values corresponding to the students in the previous assignments.
#
# a) Iterate over the file, line by line, and print each line. The result should be something like this:
#
# > `Amadeus Male 8 13`<br>
# > `Rosa Female 19 22`<br>
# > ...
#
# The file should be closed when reading is complete.
#
# [Hint: Files are iterable in Python.]
student_file = 'students.tsv'
P1 = open(student_file, 'r')
for line in P1:
    print(line.rstrip('\n'))
P1.close()  # close the file once reading is complete, as the task requires
# b) Working with many files can be problematic, especially when you forget to close files or errors interrupt programs before files are closed. Python thus has a special `with` statement which automatically closes files for you, even if an error occurs. Redo the assignment above using the `with` statement.
#
# [Literature: With is introduced in LP part II chapter 9 page 294.]
with open(student_file) as P2:
for line in P2:
print(line)
# c) If you are going to open text files that might have different character encodings, a useful habit might be to use the [`codecs`](https://docs.python.org/3/library/codecs.html) module. Redo the task above, but using `codecs.open`. You might want to find out the character encoding of the file first (for instance by inspecting it in a text editor).
import codecs
with codecs.open(student_file) as P3:
for line in P3:
print(line)
# d) Recreate the dictionary from the previous assignment by reading the data from the file. Using a dedicated csv reader is not permitted.
new_student = {}
with open('students.tsv') as h:
    for line in h:
        name, gender, algebra, history = line.split()
        # Cast the scores to int so the recreated dictionary matches the one built by hand in task 4.
        new_student[name] = {'Gender': gender, 'Algebra': int(algebra), 'History': int(history)}
print(new_student)
# e) Using the dictionary above, write sentences from task 4e above to a new file, called `students.txt`.
student_txt = 'students.txt'
with open(student_txt, 'w') as h:
    for name, score in new_student.items():
        h.write(f"Student {name} scored {score['Algebra']} on the Algebra exam "
                f"and {score['History']} on the History exam\n")
# No explicit close() is needed: the with statement closes the file automatically.
| 732a74-la1-2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv('TS-Remove-Duplicates-Wide-Long.csv')
# ### TS-Remove-Duplicates-Wide-Long Summary
# This notebook is a template for reshaping data from wide to long format; it also removes duplicate rows where the account name is the same but the `Business_Date` value is repeated.
#
# This is very common when reshaping AMI interval data.
df.head(15)
# Format codes reference: http://strftime.org/
# The format string must match how dates are written in this dataset.
df['Business_Date'] = pd.to_datetime(df['Business_Date'], format='%m/%d/%y')
df.head()
# Does not work as intended: each per-account slice is deduplicated and printed, but the pieces are never reassembled into a single DataFrame (see the sketch after this loop).
for each in df.Account.unique().tolist():
output = df[df['Account'] == each].drop_duplicates(subset='Business_Date', keep='first')
print(output)
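# If the per-account loop above were kept, the deduplicated pieces would have to be collected and
# reassembled, e.g. like this (illustrative sketch; the single drop_duplicates call below is the
# simpler route this notebook actually takes):
pieces = [df[df['Account'] == acct].drop_duplicates(subset='Business_Date', keep='first')
          for acct in df['Account'].unique()]
df_dedup = pd.concat(pieces, ignore_index=True)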
#https://stackoverflow.com/questions/44481768/remove-duplicate-rows-from-pandas-dataframe-where-only-some-columns-have-the-sam
df.drop_duplicates(subset=['Account', 'Business_Date'], keep='first', inplace=True)
df.reset_index(drop=True, inplace=True)
df
df_melt = pd.melt(df, id_vars=['Account', 'Business_Date'], value_vars=['Reading_001', 'Reading_002'])
df_melt.columns.tolist()
df_melt.rename(columns={'value': 'THM_VALUE', 'variable': 'TIME_PST', 'Account': 'ACCOUNT', 'Business_Date': 'BUSINESS_DATE'}, inplace=True)
# The melt above converted the data from wide to long format; now sort it and keep the result.
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html
df_melt = df_melt.sort_values(by=['ACCOUNT', 'BUSINESS_DATE'], ascending=True).reset_index(drop=True)
df_melt
| TS-Remove-Duplicates-Wide-Long.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
sqlContext.newSession()
from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler,StandardScaler,RFormula
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder, TrainValidationSplit
from pyspark.ml.linalg import VectorUDT,Vectors
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.sql import Window
from pyspark.ml import Pipeline
from pyspark.ml.regression import GeneralizedLinearRegression
import pandas as pd
import re
from tabulate import tabulate
import random
import sys
import matplotlib.pyplot as plt
import numpy as np
# +
# Import the raw data and rename the badly named rank_* columns to vaerdiSlope_*.
df = sqlContext.read.parquet("/home/svanhmic/workspace/Python/Erhvervs/data/cdata/featureDataCvr")

# Exclude some of the variables (the medArb_* employee columns are not needed) and cast the rest to double.
excludeCols = ["medArb_" + str(i) for i in range(1, 16)]
includeCols = [i for i in df.columns if i not in excludeCols]
rankCols = [re.sub(pattern="rank_", repl="vaerdiSlope_", string=i) for i in includeCols]
finalCols = ([F.col(i) for i in includeCols[:2]] + ["kortBeskrivelse"]
             + [F.col(i).cast("double") for i in includeCols[2:] if i not in ["kortBeskrivelse"]])

renamedDf = (df
             .select(*finalCols)
             .select([F.col(val).alias(rankCols[idx]) for idx, val in enumerate(includeCols)])
             .filter((F.col("kortBeskrivelse") == "APS") | (F.col("kortBeskrivelse") == "AS"))
             )
renamedDf.show()
# +
windowSpecRank =(Window.partitionBy(F.col("cvrNummer"))).orderBy(F.col("periode_gyldigFra").desc())
groupCols = ["cvrNummer","vaerdi"]
companyNameDf = (sqlContext
.read
.parquet("/home/svanhmic/workspace/Python/Erhvervs/data/cdata/companyCvrData")
.withColumn(colName="rank",col=F.rank().over(windowSpecRank))
.filter((F.col("rank")==1) & (F.col("sekvensnr")==0))
.select([F.col(i) for i in groupCols])
.withColumnRenamed(existing="vaerdi",new="navn")
.orderBy(F.col("cvrNummer"))
.cache()
)
# +
labelCols = ["navn","cvrNummer","label","status","kortBeskrivelse"]
featCols = [i for i in companyNameDf.columns+renamedDf.columns if i not in labelCols]
#get minimum values from each column
minCols = [F.min(i).alias(i) for i in featCols]
minValsRdd = renamedDf.groupby().agg(*minCols).rdd
broadcastedmin = sc.broadcast(minValsRdd.first().asDict())
# Column expressions that shift each numeric column by subtracting its minimum value.
logColsSelected = [F.col(i).alias(i) for i in labelCols]+[(F.col(i)-F.lit(broadcastedmin.value[i])).alias(i) for i in featCols]
# Join the company names on, keep the shifted numeric columns (the log1p transform is left
# commented out below) and fill missing values with 0.0.
logDf = (renamedDf
.join(companyNameDf,(companyNameDf["cvrNummer"]==renamedDf["cvrNummer"]),"inner")
.drop(companyNameDf["cvrNummer"])
.select(*logColsSelected)
#.select([F.col(i).alias(i) for i in labelCols]+[F.log1p(F.col(i)).alias(i) for i in featCols])
.distinct()
.na
.fill(0.0,featCols)
.cache()
)
logDf.show(4)
# +
strs = ""
excludedCols = ["cvrNummer","label","status","navn","kortBeskrivelse"]
for i in logDf.columns:
if i not in excludedCols:
strs += i+" + "
#excludedCols
imputedDf = logDf.fillna(value=0.0)
formula = RFormula(formula="label ~ "+strs[:-3],labelCol="label")
glr = GeneralizedLinearRegression(family="binomial", link="logit", maxIter=10, regParam=0.3)
standardScale = StandardScaler(withMean=True,withStd=True,inputCol=glr.getFeaturesCol(),outputCol="scaledFeatures")
pipeline = Pipeline(stages=[formula,standardScale,glr])
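# Note that glr reads its default "features" column (produced by the RFormula stage), so the
# "scaledFeatures" column produced by standardScale is not consumed by the regression itself.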
# Parameter grid for the GeneralizedLinearRegression stage (glr) used in the pipeline above.
grid = (ParamGridBuilder()
        .baseOn({glr.predictionCol: "prediction"})
        .baseOn({glr.labelCol: "label"})
        .baseOn({glr.featuresCol: "features"})
        .addGrid(param=glr.regParam, values=[0.1, 1.0])
        .addGrid(param=glr.maxIter, values=[10])
        .build()
        )
evaluate = BinaryClassificationEvaluator()
trainEvalModel = TrainValidationSplit(estimator=pipeline,estimatorParamMaps=grid,evaluator=evaluate,trainRatio=0.8)
# +
cols = [i for i in logDf.columns if i not in excludedCols]+["label"]
model = pipeline.fit(imputedDf.filter(F.col("label") <= 1).select(*cols))
# -
predict = model.transform(imputedDf.select(*cols).filter(F.col("label") <= 1))
coef = model.stages[-1]
# +
p = model.stages[-1].summary
print("Coefficient Standard Errors: " + str(p.coefficientStandardErrors))
print("T Values: " + str(p.tValues))
print("P Values: " + str(p.pValues))
print("Dispersion: " + str(p.dispersion))
print("Null Deviance: " + str(p.nullDeviance))
print("Residual Degree Of Freedom Null: " + str(p.residualDegreeOfFreedomNull))
print("Deviance: " + str(p.deviance))
print("Residual Degree Of Freedom: " + str(p.residualDegreeOfFreedom))
print("AIC: " + str(p.aic))
print("Deviance Residuals: ")
p.residuals().show()
# +
print(len(cols))
print(type(coef.coefficients.toArray()))
print()
summary = {"Labels":cols[:-1]+["intercept"],"Coefficients":np.insert(coef.coefficients.toArray(),0,coef.intercept),"coefficient Std Err":p.coefficientStandardErrors,"T Values":p.tValues,"P Values":p.pValues}
# +
pd.options.display.float_format = '{:,.4f}'.format
df = pd.DataFrame(summary,columns=["Labels","Coefficients","coefficient Std Err","T Values","P Values"])
import subprocess
HEADER = '''
<html>
<head>
<style>
.df tbody
</style>
</head>
<body>
'''
FOOTER = '''
</body>
</html>
'''
#df = pd.DataFrame({'a': np.arange(10), 'b': np.random.randn(10)})
with open('test.html', 'w') as f:
f.write(HEADER)
f.write(df.to_html(classes='df'))
f.write(FOOTER)
# -
#check mean and stddev
df
#descriptionCVR.filter((F.col("summary") =="mean") | (F.col("summary") =="stddev")).show()
# +
windowSpecRank =(Window.partitionBy(F.col("cvrNummer"))).orderBy(F.col("periode_gyldigFra").desc())
groupCols = ["cvrNummer","vaerdi"]
companyNameDf = (sqlContext
.read
.parquet("/home/svanhmic/workspace/Python/Erhvervs/data/cdata/"+"companyCvrData")
.withColumn(colName="rank",col=F.rank().over(windowSpecRank))
.filter((F.col("rank")==1) & (F.col("sekvensnr")==0))
.select([F.col(i) for i in groupCols])
.withColumnRenamed(existing="vaerdi",new="navn")
.orderBy(F.col("cvrNummer"))
.cache()
)
companyNameDf.show(2)
# -
logDf.show()
# +
# Assemble the feature columns into a single dense vector column
# (the ln(x+1) transform mentioned earlier is not applied here; the features are standardized below instead).
toDenseUDf = F.udf(lambda x: Vectors.dense(x.toArray()), VectorUDT())
labelCols = ["cvrNummer", "label", "status", "kortBeskrivelse"]
logFeatCols = [i for i in logDf.columns if i not in labelCols]
vectorizer = VectorAssembler(inputCols=logFeatCols, outputCol="features")
#print(logFeatCols)
rawVectorDataDf = (vectorizer.transform(logDf)
                   .select(["navn"] + labelCols + [toDenseUDf(vectorizer.getOutputCol()).alias(vectorizer.getOutputCol())])
                   )
standardScale = StandardScaler(withMean=True,withStd=True,inputCol=vectorizer.getOutputCol(),outputCol="scaledFeatures")
standardScaleModel = standardScale.fit(rawVectorDataDf)
scaledFeaturesDf = (standardScaleModel
.transform(rawVectorDataDf)
.drop("features")
.withColumnRenamed(existing="scaledFeatures",new="features")
)
scaledFeaturesDf.show()
# +
# Sample 20% of each label as the test set; the remaining companies form the training set.
vectorizedTestDf = scaledFeaturesDf.filter(F.col("label") <= 1).sampleBy("label", fractions={0: 0.2, 1: 0.2}, seed=42)
vectorizedTestDf.groupBy("label").count().show()
scaledCvrDf = scaledFeaturesDf.select(F.col("cvrNummer"))
cvrTestDf = vectorizedTestDf.select("cvrNummer")
cvrTrainDf = scaledCvrDf.subtract(cvrTestDf) #take the other partion as training set
vectorizedTrainDf = (scaledFeaturesDf
.filter(F.col("label") <= 1)
.join(cvrTrainDf,(scaledFeaturesDf["cvrNummer"] == cvrTrainDf["cvrNummer"]),"inner")
.drop(cvrTrainDf["cvrNummer"])
)
vectorizedTrainDf.groupBy("label").count().show()
print("Number of data points: "+str(scaledFeaturesDf.count()))
print("Number of data points train: "+str(vectorizedTrainDf.select("cvrNummer").count()))
print("Number of data points test: "+str(vectorizedTestDf.select("cvrNummer").count()))
#vectorizedTrainDf.printSchema()
#print(vectorizedTrainDf.first())
# -
vectorizedTrainDf.show()
# +
#Train the logistic regression model
lr = LogisticRegression()
grid = (ParamGridBuilder()
.baseOn({lr.predictionCol:"prediction"})
.baseOn({lr.rawPredictionCol:"rawPrediction"})
.baseOn({lr.probabilityCol:"probability"})
.baseOn({lr.labelCol:"label"})
.baseOn({lr.featuresCol:"features"})
.addGrid(param=lr.elasticNetParam,values=[0.1,1.0])
.addGrid(param=lr.maxIter,values=[10])
.build()
)
evaluate = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
crossVal = CrossValidator(estimator=lr,estimatorParamMaps=grid,evaluator=evaluate,numFolds=10)
crossValModel = crossVal.fit(dataset=vectorizedTrainDf)
evaluate.evaluate(crossValModel.transform(vectorizedTestDf))
#coef = lrModel.coefficients
# -
bestModel = crossValModel.bestModel
#test the values
result = bestModel.transform(vectorizedTestDf)
# +
#
# +
#result.orderBy("prediction").show(100)
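# Classify each test-set prediction via difference = label - prediction: 0 with label 1 is a true
# positive (TP), 0 with label 0 a true negative (TN), +1 a false negative (FN), -1 a false positive (FP).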
confCols = [F.col(i) for i in ["TP","TN","FP","FN"]]
csCols = [F.when((F.col("label")==1) & (F.col("difference") == 0),"TP")
,F.when((F.col("label")==0) & (F.col("difference") == 0),"TN")
,F.when(F.col("difference") == 1,"FN")
,F.when(F.col("difference") == -1,"FP")
]
confusionDf = result.select(F.col("label"),F.col("prediction"),(F.col("label")-F.col("prediction")).alias("difference"))
(confusionDf
.select(F.coalesce(*csCols).alias("cases")
#,.otherwise(0).alias("FP")
#,.otherwise(0).alias("FN")
)
.groupBy("cases").count()
).show()
# -
# Accuracy = (TP + TN) / total, computed from the confusion cases derived above.
caseCounts = dict(confusionDf.select(F.coalesce(*csCols).alias("cases")).groupBy("cases").count().collect())
acc = (caseCounts.get("TP", 0) + caseCounts.get("TN", 0)) / sum(caseCounts.values())
print(acc)
crossValModel.bestModel.hasSummary
summary = crossValModel.bestModel.summary
# +
summary.areaUnderROC
summary.fMeasureByThreshold.show()
summary.pr.show()
summary.precisionByThreshold.show()
summary.roc.show()
summary.recallByThreshold.show()
summary.totalIterations
#for s in summary:
# print(s + str(summary[s])+"\n")
# -
summary.predictions.show()
| notebooks/konkurs/BancruptcyDectect.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3 research env
# language: python
# name: py3_research
# ---
# ## Lab 2. Simple text processing and gradient boosting.
# This lab assignment consists of two parts:
#
# 1. Simple text classification using Bag of Words and TF-IDF.
# 2. Human activity classification using gradient boosting.
#
# These tasks are independent.
#
# _We recommend keeping the datasets on your computer because they will be used in Lab 3 as well._
#
# Deadline: May 5th, 23:59
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ### Part I: Prohibited Comment Classification (2 points)
# This part of the assignment is fully based on the YSDA NLP_course homework. Special thanks to the YSDA team for making it available on github.
#
# 
#
# __In this part__ you will build an algorithm that classifies social media comments into normal or toxic.
# Like in many real-world cases, you only have a small (10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical NLP methods and an embedding-based approach.
# +
import pandas as pd
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
# -
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
# __Note:__ it is generally a good idea to split data into train/test before anything is done to them.
#
# It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat during evaluation.
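# +
# A small illustration of the leakage-safe pattern (the feature here is hypothetical and not part of
# the assignment): any statistic used to build features, e.g. a set of words seen in banned comments,
# must be collected from the training texts only and then merely applied to the test texts.
banned_words_train = {w for t, label in zip(texts_train, y_train) if label == 1 for w in t.split()}
has_banned_word_test = [any(w in banned_words_train for w in t.split()) for t in texts_test]
# -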
# ### Preprocessing and tokenization
#
# Comments contain raw text with punctuation, upper/lowercase letters and even newline symbols.
#
# To simplify all further steps, we'll split text into space-separated tokens using one of nltk tokenizers.
#
# In general, the `nltk` library ([link](https://www.nltk.org)) is widely used in NLP. It is not strictly necessary here, but it is mentioned to introduce it to you.
# +
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# +
# task: preprocess each comment in train and test
texts_train = #<YOUR CODE>
texts_test = #<YOUR CODE>
# -
# Small check that everything is done properly
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
# ### Solving it: bag of words
#
# 
#
# One traditional approach to such a problem is to use bag-of-words features (a toy illustration follows this list):
# 1. build a vocabulary of frequent words (use train data only)
# 2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).
# 3. consider this count a feature for some classifier
#
# __Note:__ in practice, you can compute such features using sklearn. __Please don't do that in the current assignment, though.__
# * `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
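# +
# A toy illustration of the three steps above on a tiny made-up corpus (not the assignment solution):
toy_corpus = ['the cat sat', 'the dog sat on the cat']
toy_vocab = sorted({w for doc in toy_corpus for w in doc.split()})                 # step 1: vocabulary
toy_counts = [[doc.split().count(w) for w in toy_vocab] for doc in toy_corpus]     # step 2: word counts
print(toy_vocab)   # ['cat', 'dog', 'on', 'sat', 'the']
print(toy_counts)  # [[1, 0, 0, 1, 1], [1, 1, 1, 1, 2]]  -> step 3: use these as classifier features
# -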
# +
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurrences (highest first)
k = min(10000, len(set(' '.join(texts_train).split())))
#<YOUR CODE>
bow_vocabulary = #<YOUR CODE>
print('example features:', sorted(bow_vocabulary)[::100])
# -
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
#<YOUR CODE>
return np.array(<...>, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
# Machine learning stuff: fit, predict, evaluate. You know the drill.
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
# +
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
# -
# Try to vary the number of tokens `k` and check how the model performance changes. Show it on a plot.
# +
# Your beautiful code here
# -
# #### Task: implement TF-IDF features
#
# Not all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency / inverse document frequency__ and means exactly that:
#
# $$ feature_i = \mathrm{Count}(word_i \in x) \times \log{\frac{N}{\mathrm{Count}(word_i \in D) + \alpha}}, $$
#
#
# where $x$ is a single text, $D$ is your dataset (a collection of texts), $N$ is the total number of documents and $\alpha$ is a smoothing hyperparameter (typically 1).
# $\mathrm{Count}(word_i \in D)$ is the number of documents in which $word_i$ appears.
#
# It may also be a good idea to normalize each data sample after computing tf-idf features.
#
# __Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.
#
# __Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :)__ You can still use 'em for debugging though.
# Blog post about implementing the TF-IDF features from scratch: https://triton.ml/blog/tf-idf-from-scratch
# +
# Your beautiful code here
# -
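# One possible reference sketch of the tf-idf features described above (with alpha = 1 and L2
# normalization of each sample); it reuses `bow_vocabulary`, `texts_train` and `texts_test` from the
# cells above and is meant as an illustration rather than the required solution.
# +
def build_idf(texts, vocabulary, alpha=1.0):
    """log(N / (number of documents containing the word + alpha)) for every vocabulary word."""
    doc_sets = [set(text.split()) for text in texts]
    n_docs = len(texts)
    return np.array([np.log(n_docs / (sum(word in doc for doc in doc_sets) + alpha))
                     for word in vocabulary], dtype='float32')


def text_to_tfidf(text, vocabulary, idf):
    """Term counts of a single text, reweighted by idf and L2-normalized."""
    counts = np.array([text.split().count(word) for word in vocabulary], dtype='float32')
    vec = counts * idf
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


idf = build_idf(texts_train, bow_vocabulary)
X_train_tfidf = np.stack([text_to_tfidf(t, bow_vocabulary, idf) for t in texts_train])
X_test_tfidf = np.stack([text_to_tfidf(t, bow_vocabulary, idf) for t in texts_test])
tfidf_model = LogisticRegression().fit(X_train_tfidf, y_train)
# -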
# ### Part 2: gradient boosting (4 points)
#
# Here we will work with the widely known Human Activity Recognition (HAR) dataset. The data is available at the [UCI repository](https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones). Download it and place it in a `data/` folder in the same directory as this notebook. Both raw and preprocessed versions of the dataset are available; this time we will use the preprocessed one.
#
# First you are required to choose one of the main gradient boosting frameworks:
# 1. LightGBM by Microsoft. [Link to github](https://github.com/Microsoft/LightGBM). One of the most popular frameworks these days, showing both great quality and performance.
# 2. xgboost by dmlc. [Link to github](https://github.com/dmlc/xgboost). The most famous framework, which became very popular on Kaggle.
# 3. Catboost by Yandex. [Link to github](https://github.com/catboost/catboost). A newer framework from Yandex, tuned to handle categorical features well. It's quite new, but if you wish to use it, you are welcome to.
#
# Some simple preprocessing is done for you.
#
# Your __ultimate target is to get familiar with one of the frameworks above__ and achieve at least 85% accuracy on the test dataset.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# +
X_train = np.genfromtxt('data/train/X_train.txt')
y_train = np.genfromtxt('data/train/y_train.txt')
X_test = np.genfromtxt('data/test/X_test.txt')
y_test = np.genfromtxt('data/test/y_test.txt')
with open('data/activity_labels.txt', 'r') as iofile:
activity_labels = iofile.readlines()
activity_labels = [x.replace('\n', '').split(' ') for x in activity_labels]
activity_labels = dict([(int(x[0]), x[1]) for x in activity_labels])
# -
activity_labels
# +
print(X_train.shape)
data_mean = X_train.mean(axis=0)
data_std = X_train.std(axis=0)
X_train = (X_train - data_mean)/data_std
X_test = (X_test - data_mean)/data_std
# -
# The dataset has some duplicated features. The file `unique_columns.txt` stores the indices of the unique ones.
unique_columns = np.genfromtxt('unique_columns.txt', delimiter=',').astype(int)
X_train_unique = X_train[:, unique_columns]
X_test_unique = X_test[:, unique_columns]
# PCA could be useful in this case. E.g.
pca = PCA(0.99)
X_train_pca = pca.fit_transform(X_train_unique)
X_test_pca = pca.transform(X_test_unique)
X_train_pca.shape
X_test_pca.shape
plt.scatter(X_train_pca[:1000, 0], X_train_pca[:1000, 1], c=y_train[:1000])
plt.grid()
plt.xlabel('Principal component 1')
plt.ylabel('Principal component 2')
plt.scatter(X_train_pca[:1000, 3], X_train_pca[:1000, 4], c=y_train[:1000])
plt.grid()
plt.xlabel('Principal component 4')
plt.ylabel('Principal component 5')
# Although optimal parameters (e.g. for xgboost) can be found on the web, we still want you to approximate them yourself using grid/random search (or any other approach).
# +
# Your code here.
### Example: https://rpubs.com/burakh/har_xgb
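# A minimal, untuned sketch (illustrative only, not a reference solution): xgboost is just one of
# the allowed frameworks; here its sklearn API is combined with a small randomized search over the
# PCA-reduced features computed above. Labels are 1..6 in the files, so they are shifted to 0..5.
from sklearn.metrics import accuracy_score
from sklearn.model_selection import RandomizedSearchCV
import xgboost as xgb

param_dist = {
    'max_depth': [3, 5, 7],
    'learning_rate': [0.05, 0.1, 0.3],
    'n_estimators': [100, 300],
    'subsample': [0.7, 1.0],
}
search = RandomizedSearchCV(xgb.XGBClassifier(random_state=42), param_dist,
                            n_iter=10, cv=3, scoring='accuracy', random_state=42)
search.fit(X_train_pca, (y_train - 1).astype(int))
print('best parameters:', search.best_params_)
print('test accuracy:', accuracy_score((y_test - 1).astype(int),
                                       search.best_estimator_.predict(X_test_pca)))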
| homeworks/Lab2_Texts_and_Boosting/Lab2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # View dataset statistics for each language
# 1. load datasets
# 2. plot/view statistics
import pandas as pd
import numpy as np
from parallelspaper.config.paths import DATA_DIR, FIGURE_DIR
from parallelspaper.speech_datasets import LCOL_DICT
from parallelspaper.utils import save_fig
# +
german_stats = pd.read_pickle(DATA_DIR/'stats_df/GECO_stats_df.pickle')
german_stats['Language'] = 'German'
italian_stats = pd.read_pickle(DATA_DIR/'stats_df/AsiCA_stats_df.pickle')
italian_stats['Language'] = 'Italian'
english_stats = pd.read_pickle(DATA_DIR/'stats_df/BUCKEYE_stats_df.pickle')
english_stats['Language'] = 'English'
japanese_stats = pd.read_pickle(DATA_DIR/'stats_df/CSJ_stats_df.pickle')
japanese_stats['Language'] = 'Japanese'
stats_df = pd.concat([german_stats, italian_stats, english_stats, japanese_stats])
# -
stats_df
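# Fraction of words consisting of a single phone, per language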
for idx, row in stats_df.iterrows():
print(row.Language)
print(np.sum(np.array(row.word_length_phones) == 1)/len(row.word_length_phones))
from matplotlib import gridspec
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
LCOL_DICT
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
# +
bw = 0.25
yoff = -0.20
fig, axs = plt.subplots(ncols=2, figsize=(10, 4))
kwk = {"lw": 6, "bw": bw}
for ix, (idx, row) in enumerate(stats_df[stats_df.Language.isin(['English', 'Japanese'])].sort_values(by="Language").iterrows()):
ax = axs.flatten()[ix]
ax.annotate(letters[ix], xy=(-0.05, 1.05), xycoords="axes fraction", size=20, fontweight='bold', fontfamily='Arial')
ax.hist(
np.array(row.utterance_length_phones),
density=True,
bins=np.arange(0, 100, 2),
color=LCOL_DICT[row.Language.lower()],
)
ax.set_xlim([0, 100])
#ax.set_yscale("log")
ax.set_xlabel("Utterance length (phones)", fontsize=18)
ax.tick_params(axis="both", labelsize=14, pad=15)
for axis in ["top", "bottom", "left", "right"]:
ax.spines[axis].set_linewidth(3)
ax.spines[axis].set_color("k")
ax.grid(False)
ax.tick_params(which="both", direction="in", labelsize=14, pad=10)
ax.tick_params(which="major", length=10, width=3)
ax.tick_params(which="minor", length=5, width=2)
axs[0].set_ylabel("Prob. Density", labelpad=5, fontsize=18)
axs[0].yaxis.set_label_coords(yoff, 0.5)
save_fig(FIGURE_DIR / "utt_len_phones")
# -
| notebooks/language/04.0-Language-dataset-stats.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
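# Compare a modified ODT document against a reference version using the local `odt_parse` and
# `odt_diff` helper modules, then inspect each document's style information.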
from odt_parse import OdtData
from odt_diff import odt_compare
ref = OdtData('libro_predefinidos.odt')
doc = OdtData('libro_modificado.odt')
print(odt_compare(ref, doc))
ref.style
doc.style
| ODT Diff.ipynb |