# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: py35-paddle1.2.0
# ---
# # Captcha Recognition with OCR
#
# **Author:** [GT_老张](https://github.com/GT-ZhangAcer)
#
# **Date:** 2021.12
#
# **Abstract:** This tutorial shows how to build a simple CRNN+CTC OCR model on a custom dataset with PaddlePaddle. The data is the OCR portion of [CaptchaDataset](https://github.com/GT-ZhangAcer/CaptchaDataset), 9,453 images in total; the first 8,453 images serve as the training set and the remaining 1,000 as the test set.
# For more complex scenarios, [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR) is recommended for production-grade models, which are both lighter and considerably more accurate.
# PaddleOCR can also be used directly through [PaddleHub](https://www.paddlepaddle.org.cn/hubdetail?name=chinese_ocr_db_crnn_mobile&en_category=TextRecognition).
# ## 1. Environment Setup
#
# This tutorial is written for Paddle 2.2. If your environment differs, please follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) to install PaddlePaddle 2.2 first.
import paddle
print(paddle.__version__)
# ## 2. Custom Dataset Reader
#
# In real-world development you rarely receive data in a standard format, but a custom Reader lets you load whatever data you have, however you like.
#
# A well-designed Reader often brings better performance. One-off work, such as reading the label file and building the image file list, belongs in the `__init__` method so that it is loaded into memory once when the `Reader` is instantiated, avoiding the overhead of re-reading on every access. Per-sample operations, such as image augmentation and normalization, go in `__getitem__`, so their memory can be released as soon as each sample is produced.
# Note that if you cannot guarantee your data is perfectly clean, you can wrap the read in `try`/`except` to catch exceptions and report which sample failed. You can also adopt a policy that lets training continue normally after a read error.
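The skip-on-error policy just mentioned can be sketched as follows (the `Reader` class below raises instead). `load_sample` here is a hypothetical loader that raises on corrupt data; on failure we fall back to a random other index:

```python
import random

def robust_getitem(load_sample, index, length, max_retries=3):
    """Try to load sample `index`; on failure retry with a random other index.

    `load_sample` is a hypothetical callable standing in for the real
    image-loading logic; `length` is the dataset size.
    """
    for _ in range(max_retries):
        try:
            return load_sample(index)
        except Exception:
            # Pick another sample and retry instead of aborting training
            index = random.randrange(length)
    raise RuntimeError("too many corrupt samples in a row")
```

A real `Dataset` would call this from `__getitem__`; the trade-off is that silently substituted samples can mask data problems, so logging the failing index is usually worthwhile.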
# ### 2.1 A Look at the Data
# <div align="center">
# <img src=https://ai-studio-static-online.cdn.bcebos.com/57d6c77aa5194cdca5c7edc533cc57e4d5070de95f6a4454b3cd1ca1e0eebe98 width="500px">
# </div>
#
# Click here to [get this section's dataset](https://aistudio.baidu.com/aistudio/datasetdetail/57285). Once the download finishes, unpack it with `!unzip OCR_Dataset.zip -d data/` or your favorite archive tool, then set `DATA_PATH = <path to the unpacked dataset>` in the "Training Preparation" section below.
# Download the dataset (URL quoted so the shell does not split it at '&')
# !wget -O OCR_Dataset.zip "https://bj.bcebos.com/v1/ai-studio-online/c91f50ef72de43b090298a38281e9c59a2d741eadd334f1cba7c710c5496e342?responseContentDisposition=attachment%3B%20filename%3DOCR_Dataset.zip&authorization=bce-auth-v1%2F0ef6765c1e494918bc0d4c3ca3e5c6d1%2F2020-10-27T09%3A50%3A21Z%2F-1%2F%2Fddc4aebed803af6c57dac46abba42d207961b78e7bc81744e8388395979b66fa"
# Unzip the dataset
# !unzip OCR_Dataset.zip -d data/
# +
import os
import PIL.Image as Image
import numpy as np
from paddle.io import Dataset

# Image info - channels, height, width
IMAGE_SHAPE_C = 3
IMAGE_SHAPE_H = 30
IMAGE_SHAPE_W = 70
# Maximum label length in this dataset - every image contains exactly 4 characters, so 4 suffices
LABEL_MAX_LEN = 4
class Reader(Dataset):
    def __init__(self, data_path: str, is_val: bool = False):
        """
        Data-reading Reader
        :param data_path: dataset path
        :param is_val: whether this is the validation split
        """
        super().__init__()
        self.data_path = data_path
        # Read the label dict
        with open(os.path.join(self.data_path, "label_dict.txt"), "r", encoding="utf-8") as f:
            self.info = eval(f.read())
        # Build the list of file names
        self.img_paths = [img_name for img_name in self.info]
        # Use the last 1024 images as the validation set; when is_val is True, img_paths switches to those last 1024
        self.img_paths = self.img_paths[-1024:] if is_val else self.img_paths[:-1024]

    def __getitem__(self, index):
        # Get the file name and full path of the index-th sample
        file_name = self.img_paths[index]
        file_path = os.path.join(self.data_path, file_name)
        # Catch exceptions - abort training when one occurs
        try:
            # Read the image with Pillow
            img = Image.open(file_path)
            # Convert to a NumPy array and divide by 255 to normalize
            img = np.array(img, dtype="float32").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255
        except Exception as e:
            raise Exception(file_name + "\tFailed to open; check that the path is correct and the image file is intact. Error:\n" + str(e))
        # Read and process the label string for this image
        label = self.info[file_name]
        label = list(label)
        # Convert the label to a NumPy array
        label = np.array(label, dtype="int32")
        return img, label

    def __len__(self):
        # Number of images per epoch
        return len(self.img_paths)
# -
# ## 3. Model Configuration
# ### 3.1 Define the Network Structure and Model Input
#
# The model is a simple CRNN-CTC structure: a CHW-shaped image passes through CNN -> Flatten -> Linear -> RNN -> Linear and outputs the character probabilities at each position of the image. Because a CTC decoder can fail to align correctly when the number of characters varies or adjacent characters repeat, an extra class acting as a "blank" separator is added to mitigate this.
#
# CTC paper: [Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks](http://people.idsia.ch/~santiago/papers/icml2006.pdf)
#
# <div align="center">
# <img src=https://ai-studio-static-online.cdn.bcebos.com/f9458cedbb4441d682f15fefd3f3cae5e49d499bcf0a4bbdb976dfdff5a2e656 width="500px">
# </div>
#
# As for the network itself: the dataset used here is simple and the images are small, so a deep network is not a good fit. When modeling larger images, consider a deeper network or attention mechanisms, or first locate the text with an object detector and then build the OCR model on the detected regions.
#
# <div align="center">
# <img src=https://ai-studio-static-online.cdn.bcebos.com/19ddf6107e7f47ee9b3b84ee0c12de1e15f7ab8b88f04eed95232440c92fe0d7 width="500px">
# </div>
#
# <a href="https://github.com/PaddlePaddle/PaddleOCR">PaddleOCR demo</a>
# +
import paddle

# Number of classes - the data contains the 10 digits 0~9 plus the separator, so this is an 11-way classification task
CLASSIFY_NUM = 11

# Define the input spec; a -1 in dim 0 of shape lets the batch size vary freely at prediction time
input_define = paddle.static.InputSpec(shape=[-1, IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W],
                                       dtype="float32",
                                       name="img")
# Define the network structure
class Net(paddle.nn.Layer):
    def __init__(self, is_infer: bool = False):
        super().__init__()
        self.is_infer = is_infer

        # A 3x3 conv + BatchNorm
        self.conv1 = paddle.nn.Conv2D(in_channels=IMAGE_SHAPE_C,
                                      out_channels=32,
                                      kernel_size=3)
        self.bn1 = paddle.nn.BatchNorm2D(32)
        # A 3x3 conv with stride 2 for downsampling + BatchNorm
        self.conv2 = paddle.nn.Conv2D(in_channels=32,
                                      out_channels=64,
                                      kernel_size=3,
                                      stride=2)
        self.bn2 = paddle.nn.BatchNorm2D(64)
        # A 1x1 conv to compress the channel count; an output slightly larger than LABEL_MAX_LEN tends to work better, though LABEL_MAX_LEN itself also works
        self.conv3 = paddle.nn.Conv2D(in_channels=64,
                                      out_channels=LABEL_MAX_LEN + 4,
                                      kernel_size=1)
        # A fully connected layer to compress and extract features (optional)
        self.linear = paddle.nn.Linear(in_features=429,
                                       out_features=128)
        # An RNN layer for better sequence features; a bidirectional LSTM outputs 2 x hidden_size - try swapping in a GRU or another RNN structure
        self.lstm = paddle.nn.LSTM(input_size=128,
                                   hidden_size=64,
                                   direction="bidirectional")
        # Output layer, sized to the number of classes
        self.linear2 = paddle.nn.Linear(in_features=64 * 2,
                                        out_features=CLASSIFY_NUM)

    def forward(self, ipt):
        # Conv + ReLU + BN
        x = self.conv1(ipt)
        x = paddle.nn.functional.relu(x)
        x = self.bn1(x)
        # Conv + ReLU + BN
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.bn2(x)
        # Conv + ReLU
        x = self.conv3(x)
        x = paddle.nn.functional.relu(x)
        # Collapse the 3-D features to 2-D - reshape would also work here
        x = paddle.tensor.flatten(x, 2)
        # Linear + ReLU
        x = self.linear(x)
        x = paddle.nn.functional.relu(x)
        # Bidirectional LSTM - [0] is the bidirectional output, [1][0] the forward result, [1][1] the backward result; search 'LSTM' in the official docs for details
        x = self.lstm(x)[0]
        # Output layer - shape = (batch size, max label len, signal)
        x = self.linear2(x)

        # ctc_loss applies softmax itself during training, so in inference mode an extra softmax is needed to obtain label probabilities
        if self.is_infer:
            # Output layer - shape = (batch size, max label len, prob)
            x = paddle.nn.functional.softmax(x)
            # Convert to labels
            x = paddle.argmax(x, axis=-1)
        return x
# -
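The `in_features=429` and the `LABEL_MAX_LEN + 4` sequence length both follow from the convolution arithmetic; a quick check with the 3x30x70 input defined earlier:

```python
def conv_out(size, kernel, stride=1):
    # Output size of a valid (no-padding) convolution
    return (size - kernel) // stride + 1

h, w = 30, 70                                 # IMAGE_SHAPE_H, IMAGE_SHAPE_W
h, w = conv_out(h, 3), conv_out(w, 3)         # conv1 (3x3, stride 1): 28 x 68
h, w = conv_out(h, 3, 2), conv_out(w, 3, 2)   # conv2 (3x3, stride 2): 13 x 33
# conv3 is 1x1, so the spatial size is unchanged; flatten(x, 2) merges H and W
print(h * w)  # 429 -> the Linear layer's in_features
```

The `LABEL_MAX_LEN + 4 = 8` output channels of conv3 become the time dimension fed to the LSTM, which is also why `input_lengths` in the CTC loss below is filled with `LABEL_MAX_LEN + 4`.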
# ## 4. Training Preparation
# ### 4.1 Define the Label Input and Hyperparameters
# Supervised training requires a label definition; prediction does not.

# +
# Dataset path
DATA_PATH = "./data/OCR_Dataset"
# Number of training epochs
EPOCH = 10
# Batch size
BATCH_SIZE = 16

label_define = paddle.static.InputSpec(shape=[-1, LABEL_MAX_LEN],
                                       dtype="int32",
                                       name="label")
# -
# ### 4.2 Define the CTC Loss
#
# Knowing how the CTC decoder behaves, we want training to push the model's output toward that form, which means defining a CTC loss to compute the model's error. No need to worry: Paddle ships with many built-in losses, so no manual re-implementation is needed.
#
# Docs: [CTCLoss](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-beta/api/paddle/nn/functional/loss/ctc_loss_cn.html#ctc-loss)

class CTCLoss(paddle.nn.Layer):
    def __init__(self):
        """
        Define the CTC loss
        """
        super().__init__()

    def forward(self, ipt, label):
        input_lengths = paddle.full(shape=[BATCH_SIZE], fill_value=LABEL_MAX_LEN + 4, dtype="int64")
        label_lengths = paddle.full(shape=[BATCH_SIZE], fill_value=LABEL_MAX_LEN, dtype="int64")
        # Transpose the dims into the order the docs require
        ipt = paddle.tensor.transpose(ipt, [1, 0, 2])
        # Compute the loss
        loss = paddle.nn.functional.ctc_loss(ipt, label, input_lengths, label_lengths, blank=10)
        return loss
# ### 4.3 Instantiate the Model and Configure the Optimization Strategy

# Instantiate the model
model = paddle.Model(Net(), inputs=input_define, labels=label_define)

# +
# Define the optimizer
optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())
# Prepare the model's runtime environment and attach the optimization strategy
model.prepare(optimizer=optimizer,
              loss=CTCLoss())
# -
# ## 5. Training
#
# Run training
model.fit(train_data=Reader(DATA_PATH),
          eval_data=Reader(DATA_PATH, is_val=True),
          batch_size=BATCH_SIZE,
          epochs=EPOCH,
          save_dir="output/",
          save_freq=1,
          verbose=1,
          drop_last=True)
# ## 6. Preparing for Prediction
# ### 6.1 Define a Prediction Reader, Just Like the Training Reader
# Similar to training, but without labels

class InferReader(Dataset):
    def __init__(self, dir_path=None, img_path=None):
        """
        Data-reading Reader (prediction)
        :param dir_path: directory to predict (choose one)
        :param img_path: single image to predict (choose one)
        """
        super().__init__()
        if dir_path:
            # Collect all image paths in the directory
            self.img_names = [i for i in os.listdir(dir_path) if os.path.splitext(i)[1] == ".jpg"]
            self.img_paths = [os.path.join(dir_path, i) for i in self.img_names]
        elif img_path:
            self.img_names = [os.path.split(img_path)[1]]
            self.img_paths = [img_path]
        else:
            raise Exception("Please specify a directory or an image path to predict")

    def get_names(self):
        """
        File names in prediction order
        """
        return self.img_names

    def __getitem__(self, index):
        # Get the image path
        file_path = self.img_paths[index]
        # Read the image with Pillow and convert to NumPy
        img = Image.open(file_path)
        img = np.array(img, dtype="float32").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255
        return img

    def __len__(self):
        return len(self.img_paths)
# ### 6.2 Parameter Settings
# Directory to predict - e.g. pick 3 images from the test set and place them here for inference
INFER_DATA_PATH = "./sample_img"
# Checkpoint path - "final" is the model saved at the end of training
CHECKPOINT_PATH = "./output/final.pdparams"
# Batch size for inference
BATCH_SIZE = 32
# ### 6.3 Preview the Data to Predict

# +
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
for img_id, img_name in enumerate(os.listdir(INFER_DATA_PATH)):
    plt.subplot(1, 3, img_id + 1)
    plt.xticks([])
    plt.yticks([])
    im = Image.open(os.path.join(INFER_DATA_PATH, img_name))
    plt.imshow(im, cmap=plt.cm.binary)
    plt.xlabel("Img name: " + img_name)
plt.show()
# -
# ## 7. Prediction
# > The Paddle 2.2 CTC decoder APIs are being migrated, so this section uses a simple hand-written decoder for now.

# +
# A simple decoder
def ctc_decode(text, blank=10):
    """
    Simple CTC decoder
    :param text: data to decode
    :param blank: index of the separator (blank) token
    :return: decoded data
    """
    result = []
    cache_idx = -1
    for char in text:
        if char != blank and char != cache_idx:
            result.append(char)
        cache_idx = char
    return result
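A quick sanity check of the collapse rule (using a self-contained copy of the decoder above): repeated characters merge unless a blank sits between them, and blanks are dropped.

```python
def ctc_decode(text, blank=10):
    # Collapse repeats and drop blanks, as in the decoder above
    result = []
    cache_idx = -1
    for char in text:
        if char != blank and char != cache_idx:
            result.append(char)
        cache_idx = char
    return result

print(ctc_decode([1, 1, 10, 1, 2, 2, 3]))  # [1, 1, 2, 3]: the blank keeps the repeated 1s apart
print(ctc_decode([10, 4, 4, 10, 10, 4]))   # [4, 4]
```

This is exactly why the extra blank class was added in Section 3.1: without it, a genuine "11" in the image would collapse to a single "1".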
# Instantiate the inference model
model = paddle.Model(Net(is_infer=True), inputs=input_define)
# Load the trained parameters
model.load(CHECKPOINT_PATH)
# Set up the runtime environment
model.prepare()

# Load the prediction Reader
infer_reader = InferReader(INFER_DATA_PATH)
img_names = infer_reader.get_names()
results = model.predict(infer_reader, batch_size=BATCH_SIZE)
index = 0
for text_batch in results[0]:
    for prob in text_batch:
        out = ctc_decode(prob, blank=10)
        print(f"File: {img_names[index]}, prediction: {out}")
        index += 1
# -
# (source notebook: docs/practices/cv/image_ocr.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TEST38
# language: python
# name: test38
# ---
import numpy as np
import os
import matplotlib.pyplot as plt
import pandas as pd
from importlib import reload
import sys
sys.path.append('../') # add path where the following python modules live
import load_windgps_data_to_pandas
import process_windgps_data
# # Parameters
# where your binary data files are
data_directory = '0000001/'
correct_year = 2020
out = '.' # in this directory
# # Load data
df = load_windgps_data_to_pandas.load_data_from_directory(data_directory)
# # Fix millis - if necessary
plt.plot(df.millis)
# +
# seems like not necessary anymore
# df = process_windgps_data.fix_millis_errors(df)
# plt.plot(df.millis)
# -
# # Fix GPS Date
# ### Look at raw data
plt.plot(df.gps_date)
# ### Fix it
df = process_windgps_data.fix_gps_date(df, correct_year=correct_year)
# ### Check the results
plt.plot(df['year'])
plt.plot(df.month)
plt.plot(df.day)
# # Interpolate epoch time
plt.plot(df.gps_time)
df = process_windgps_data.calc_interpolated_epoch_time(df)
plt.plot(df.millis, df.time_epoch)
# # Parse wind data
df = process_windgps_data.parse_and_save_several_wind_strings(df, wind_strings=['S2', 'D'])
plt.plot(df.S2)
plt.plot(df.D, '.')
# # Save it
df.year.iloc[0]
# zero-pad month/day/hour/minute/second so file names are unambiguous and sortable
datestr = (f"{int(df.year.iloc[0]):04d}{int(df.month.iloc[0]):02d}{int(df.day.iloc[0]):02d}"
           f"_{int(df.hour.iloc[0]):02d}{int(df.minute.iloc[0]):02d}{int(df.second.iloc[0]):02d}")
fname = datestr + '_' + 'windgps_data.hdf'
full_fname = os.path.join(out, fname)
df.to_hdf(full_fname, key='windgps')
# # Read it
df_read = pd.read_hdf(full_fname)
plt.plot(df_read.millis, df_read.time_epoch)
df_read.keys()
# (source notebook: example_20201006/.ipynb_checkpoints/process_windgps_data_notebook-checkpoint.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import fastbook
fastbook.setup_book()
from fastbook import *
from IPython.display import display,HTML
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Project specific imports
# +
# import sys
# sys.path.append('../')
# from utils import datasets, get_discharge_summaries, concat_and_split
# -
# Start by creating a data folder (in my case it is called data2) with the following files:
#
# data2
# |- NOTEEVENTS.csv
# |- DIAGNOSES_ICD.csv
# |- PROCEDURES_ICD.csv
# |- *_hadm_ids.csv
# |- D_ICD_DIAGNOSES.csv
# |- D_ICD_PROCEDURES.csv
# |- ICD9_descriptions
#
# Along the way we will create some processed files and place them inside a subdirectory of data2 called data2/processed.
#
#
# When we are done with this notebook, our dataset will be ready in '/home/ubuntu/codemimic/data2/processed/notes_labelled.csv'
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Setting base path and data path:
# +
path = Path.cwd()
path_data = path/'data'
path_data.ls()
# !tree -rtD {path_data}
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # Mimic-III Data Extraction
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Data Formatting
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ### Combine diagnoses and procedure codes and reformat them
# -
# The codes in MIMIC-III are given in separate files for procedures and diagnoses, and the codes are given without periods. So let's add periods in the right place.
# We begin by taking a peek at the procedure and diagnoses codes.
df_proc = pd.read_csv(path_data/'PROCEDURES_ICD.csv')
df_diag = pd.read_csv(path_data/'DIAGNOSES_ICD.csv')
df_proc.head()
df_diag.head()
# (Replace later!) Let's now add periods at the right places.
# +
# # datasets.reformat??
# -
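`datasets.reformat` comes from the CAML-MIMIC `utils` package and is not reproduced in this notebook; its period-insertion logic can be sketched roughly as follows (an assumption based on standard ICD9 formatting: diagnosis codes starting with 'E' take the period after the fourth character, other diagnosis codes after the third, and procedure codes after the second):

```python
def reformat_sketch(code, is_diag):
    """Insert the period into a bare ICD9 code string.

    Hypothetical stand-in for datasets.reformat, not the actual implementation.
    """
    code = ''.join(code.split('.'))  # normalize: strip any existing period
    if is_diag:
        if code.startswith('E'):
            if len(code) > 4:
                code = code[:4] + '.' + code[4:]
        elif len(code) > 3:
            code = code[:3] + '.' + code[3:]
    else:  # procedure codes
        code = code[:2] + '.' + code[2:]
    return code

print(reformat_sketch('0389', True))   # 038.9
print(reformat_sketch('E8497', True))  # E849.7
print(reformat_sketch('9671', False))  # 96.71
```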
df_proc['absolute_code'] = df_proc.apply(lambda row: str(datasets.reformat(str(row[4]), False)), axis=1)
df_proc.head()
df_diag['absolute_code'] = df_diag.apply(lambda row: str(datasets.reformat(str(row[4]), True)), axis=1)
df_diag.head()
# Okay, this looks good! Let's now combine the diagnoses and procedures codes.
df_codes = pd.concat([df_diag, df_proc])
df_codes.head()
# It's time to smash the save button!
path_processed = path_data/'processed'
path_processed.mkdir(parents=True, exist_ok=True)
df_codes.to_csv(path_processed/'ALL_CODES.csv', index=False,
columns=['ROW_ID', 'SUBJECT_ID', 'HADM_ID', 'SEQ_NUM', 'absolute_code'],
header=['row_id', 'subject_id', 'hadm_id', 'seq_num', 'ICD9_code'])
# Note that we saved the 'absolute_code' column as 'ICD9_code'. This is what we will use from here on. Okay, let's now read it back in and check how many codes we have.
path_processed.ls()
df_codes = pd.read_csv(path_processed/'ALL_CODES.csv', dtype={"ICD9_code": str})
df_codes.head()
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### How many codes (procedure + diagnoses) do we have?
# -
df_codes['ICD9_code'].nunique()
# So we have 8894 codes in total.
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Get the Raw Text
# -
# From the NOTEEVENTS.csv we will get the raw text (under the column 'text') only when the corresponding 'category' is 'Discharge summary'.
df_notes = pd.read_csv(path_data/'NOTEEVENTS.csv')
# First, let's change the column names to lower case:
df_notes.rename(columns = {col: col.lower() for col in df_notes.columns}, inplace=True)
df_notes.head(3)
# Let's convert the columns `text` and `category` into string:
df_notes[['text', 'category']] = df_notes[['text', 'category']].astype('str')
# Let's now read all the notes and select only the ones with 'category' equal to "Discharge Summary".
# And also let's remove any columns other than `subject_id`, `hadm_id`, `charttime` and `text`.
df_notes_filter = df_notes[df_notes.category == 'Discharge summary']
relevant_cols = ['subject_id', 'hadm_id', 'charttime', 'text']
df_notes_filter = df_notes_filter[relevant_cols]
# Let's convert the datatype of `hadm_id` from `float64` to `int64` (because that's what it is in our `ALL_CODES.csv` file)
df_notes_filter.hadm_id.dtype
df_notes_filter['hadm_id'] = df_notes_filter['hadm_id'].astype('int')
df_notes_filter.head()
# Before moving on let's smash that save button again:
df_notes_filter.to_csv(path_processed/'disch_full.csv', index=False)
# Let's now read the file we just created to check:
df_notes_filter = pd.read_csv(path_processed/'disch_full.csv')
df_notes_filter.head()
df_notes_filter.hadm_id.dtype
# Let's check the effect of filtering:
len(df_notes), len(df_notes_filter)
# To do a cross-check let's count the number of rows in the `df_notes` dataframe that have 'category == Discharge summary', and see if it's indeed `len(df_notes_filter)`
(df_notes['category'] == 'Discharge summary').sum()
# Great, we got a match!
# Let's now sort the `df_notes_filter` dataframe by `subject_id` and `hadm_id`
df_notes_filter.sort_values(['subject_id', 'hadm_id'], inplace=True)
# Let's also save this `df_notes_filter` dataframe:
df_notes_filter.to_csv(path_processed/'disch_full.csv', index=False)
df_notes_filter.head()
# Let's also sort the `df_codes` dataframe by `subject_id` and `hadm_id`:
df_codes = pd.read_csv(path_processed/'ALL_CODES.csv', dtype={"ICD9_code": str})
df_codes.sort_values(['subject_id', 'hadm_id'], inplace=True)
df_codes.head()
# Note that at this point we have two dataframes, namely,
# 1. `df_codes`: contains the `ICD9_code` corresponding to `subject_id` and `hadm_id` (corresponding file: ALL_CODES.csv)
# 2. `df_notes_filter`: contains the `text` (which is the discharge summary) corresponding to the `subject_id` and `hadm_id` (corresponding file: disch_full.csv)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Consolidate the Raw Text with (Multi-)Labels
# -
# Note that in our case a data point is uniquely identified by `hadm_id`. So let's calculate the unique `hadm_id`s from the two dataframes: `df_codes` and `df_notes_filter`
df_notes_filter['hadm_id'].nunique(), df_codes['hadm_id'].nunique()
# Okay, hold on a second! It turns out there are 58976 unique hospital admission ids that were assigned ICD9_codes but only 52726 of those unique hospital admission ids had associated discharge summaries. So we need to weed out hospital admission ids that did not have discharge summaries.
# (Replace later)
import csv  # needed below; not provided by the fastbook star import

hadm_ids = set(df_notes_filter['hadm_id'])
with open(path_processed/'ALL_CODES.csv', 'r') as lf:
    with open(path_processed/'ALL_CODES_filtered.csv', 'w') as of:
        w = csv.writer(of)
        w.writerow(['subject_id', 'hadm_id', 'ICD9_code', 'admittime', 'dischtime'])
        r = csv.reader(lf)
        # skip the header row
        next(r)
        for i, row in enumerate(r):
            hadm_id = int(row[2])
            if hadm_id in hadm_ids:
                w.writerow(row[1:3] + [row[-1], '', ''])
df_codes_filter = pd.read_csv(path_processed/'ALL_CODES_filtered.csv', index_col=None)
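The same filtering can also be done directly in pandas with `isin`, which is usually simpler than the csv loop above (a sketch with toy data; the loop above additionally reorders columns and blanks the admit/discharge times):

```python
import pandas as pd

codes = pd.DataFrame({'subject_id': [1, 1, 2],
                      'hadm_id': [100, 101, 102],
                      'ICD9_code': ['038.9', '401.9', '96.71']})
kept_ids = {100, 102}  # admissions that actually have a discharge summary
filtered = codes[codes['hadm_id'].isin(kept_ids)].reset_index(drop=True)
print(list(filtered['ICD9_code']))  # ['038.9', '96.71']
```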
# Okay, let's now sort the dataframe `df_codes_filter` by `subject_id` and `hadm_id` and rewrite the file `ALL_CODES_filtered.csv` on disk.
df_codes_filter.sort_values(['subject_id', 'hadm_id'], inplace=True)
df_codes_filter.to_csv(path_processed/'ALL_CODES_filtered.csv', index=False)
df_codes_filter.head()
df_notes_filter['hadm_id'].nunique(), df_codes_filter['hadm_id'].nunique()
# Awesome! now we have both the dataframes, `df_notes_filter` and `df_codes_filter`, with the same number of unique hospital admission ids.
# Note that at this point we have two dataframes, namely,
# 1. `df_codes_filter`: contains the `ICD9_code` corresponding to `subject_id` and `hadm_id` (corresponding file: ALL_CODES_filtered.csv)
# 2. `df_notes_filter`: contains the `text` (which is the discharge summary) corresponding to the `subject_id` and `hadm_id` (corresponding file: disch_full.csv)
# Finally, we are ready to create a single file with the raw texts and the corresponding labels, by appending the labels (i.e., the ICD9_code) to the raw texts (the discharge summaries, also referred to as notes).
#
# (Replace later) Here is a small script that does this. It generates a file called `notes_labelled.csv` in the `processed` subdirectory.
# +
# # concat_and_split.concat_data??
# -
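`concat_and_split.concat_data` itself is not shown here; its core idea can be sketched as a groupby-join-merge (a simplification that assumes both frames share `subject_id`/`hadm_id` columns and that codes are joined with ';'):

```python
import pandas as pd

def concat_data_sketch(df_codes, df_notes):
    # One ';'-joined label string per admission, attached to that admission's note
    labels = (df_codes.groupby(['subject_id', 'hadm_id'])['ICD9_code']
              .apply(lambda s: ';'.join(s.astype(str)))
              .reset_index(name='labels'))
    return df_notes.merge(labels, on=['subject_id', 'hadm_id'])

codes = pd.DataFrame({'subject_id': [1, 1], 'hadm_id': [100, 100],
                      'ICD9_code': ['038.9', '401.9']})
notes = pd.DataFrame({'subject_id': [1], 'hadm_id': [100], 'text': ['discharge note']})
print(concat_data_sketch(codes, notes)['labels'][0])  # 038.9;401.9
```

The real helper also streams the two sorted csv files in parallel rather than holding everything in memory, but the output shape is the same: one row per admission with a multi-label string.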
labelsfile = path_processed/'ALL_CODES_filtered.csv'
notesfile = path_processed/'disch_full.csv'
outfile = path_processed/'notes_labelled.csv'
labelled_data = concat_and_split.concat_data(labelsfile, notesfile, outfile)
# Let's read in the data and check:
df_labelled_data = pd.read_csv(outfile)
# Let's convert the column names to lower case
df_labelled_data.rename(columns = {col: col.lower() for col in df_labelled_data.columns}, inplace=True)
# Let's take a look:
df_labelled_data.head(4)
# Let's calculate the number of unique `hadm_id`s and see if it is still 52726:
df_labelled_data['hadm_id'].nunique()
# Yes, it is! Let's save it again:
df_labelled_data.to_csv(outfile, index=False)
# **So the final file which contains our labelled data is `notes_labelled.csv`.**
# +
# for row in df_labelled_data.itertuples():
# print(row.SUBJECT_ID, "|", row.LABELS)
# break
# -
# Let's now read the hadm_id splits (into train/dev/test) created by CAML_MIMIC paper.
# !find {path_data} -type f -name '*full_hadm*'
train_hadm = pd.read_csv(path_data/'train_full_hadm_ids.csv', header=None)
dev_hadm = pd.read_csv(path_data/'dev_full_hadm_ids.csv', header=None)
test_hadm = pd.read_csv(path_data/'test_full_hadm_ids.csv', header=None)
len(train_hadm), len(dev_hadm), len(test_hadm)
train_hadm = train_hadm.rename(columns={0: 'hadm_id'}); train_hadm['is_valid'] = False
dev_hadm = dev_hadm.rename(columns={0: 'hadm_id'}); dev_hadm['is_valid'] = False
test_hadm = test_hadm.rename(columns={0: 'hadm_id'}); test_hadm['is_valid'] = True
train_hadm
dev_hadm
test_hadm
# Let's now concatenate the three hadm_ids so that we can merge with the dataframe `df_labelled_data` based on the column `hadm_id`:
hadm = pd.concat([train_hadm, dev_hadm, test_hadm]); hadm
# Let's now merge `df_labelled_data` and `hadm`:
df_labelled_data.columns, hadm.columns
df_labelled_data = df_labelled_data.merge(hadm)
df_labelled_data
# Smash it one final time!
df_labelled_data.to_csv(outfile, index=False)
# - **So at this point we have our dataset in the file `/home/ubuntu/codemimic/data2/processed/notes_labelled.csv`**
# - **Also, in the next section we create a sample (maintaining the useful representative statistics) of the dataset. It will be saved in `/home/ubuntu/codemimic/data2/processed/notes_labelled_sample.csv`**
# - **Also the descriptions of the code are stored in the file `/home/ubuntu/codemimic/data2/processed/codes_descriptions.csv`**
# ---
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# # Exploratory Data Analysis
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Load up `notes_labelled.csv`:
# -
df = pd.read_csv(path_data/'processed'/'notes_labelled.csv',
                 dtype={'text': str, 'labels': str, 'subject_id': np.int64, 'hadm_id': np.int64})
df[['text', 'labels']] = df[['text', 'labels']].astype('str')
# + tags=[]
df.head(2)
# -
# Let's sort the lengths of the texts in the raw mimic iii dataset and see what it looks like (how big are those texts)
df.apply(lambda row: len(row.text), axis=1).sort_values()
# Let's make a list of the hadm_ids in the training set and the validation set:
my_train, my_valid = df.hadm_id[~df.is_valid], df.hadm_id[df.is_valid]
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### If you want to use `caml_notes_labelled.csv`:
# +
# all_files = path_data.glob("*full.csv")
# +
# L(all_files)
# +
# all_files = list(all_files)
# all_files
# +
# all_files.remove(Path(path_data/'disch_full.csv'))
# all_files
# +
# li = []
# for file in all_files:
# df_caml = pd.read_csv(file, header=0, names=['subject_id', 'hadm_id', 'text', 'labels', 'length'])
# if file.name == 'test_full.csv':
# df_caml['is_valid'] = True
# else:
# df_caml['is_valid'] = False
# li.append(df_caml)
# +
# df_caml = pd.concat(li, axis=0)
# +
# df_caml.to_csv(path_data/'caml_notes_labelled.csv', index=False)
# -
df_caml = pd.read_csv(path_data/'caml_notes_labelled.csv', dtype={'text': str, 'labels': str})
df_caml.head(2)
len(df_caml), df_caml.subject_id.nunique(), df_caml.hadm_id.nunique()
df_caml.length
# +
# len((df_caml.iloc[-1, 2]).split(' '))
# -
caml_train, caml_valid = df_caml.hadm_id[~df_caml.is_valid], df_caml.hadm_id[df_caml.is_valid]
assert set(my_train) == set(caml_train)
assert set(my_valid) == set(caml_valid)
# +
# set(my_train).symmetric_difference(set(caml_train))
# +
# set(my_valid).symmetric_difference(set(caml_valid))
# -
# Check the length of some random notes in both the datsets (i.e., the full raw version as well as the caml truncated/preprocessed version):
ind = random.choice(range(len(df)))
hadmid = df.iloc[ind].hadm_id
ind, hadmid
df[df.hadm_id == hadmid]
# + tags=[]
print(df.iloc[ind].text)
# + tags=[]
len((df.iloc[ind].text).split(' '))
# -
ind_caml = df_caml[df_caml.hadm_id == hadmid].index[0]
ind_caml
df_caml[df_caml.hadm_id == hadmid]
print(df_caml.iloc[ind_caml].text)
len((df_caml.iloc[ind_caml].text).split(' '))
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Plotting the # of labels distribution:
# -
df.dtypes
label_freq = Counter()
for row in df.itertuples():
    labels = row.labels.split(';')
    for label in labels:
        label_freq[label] += 1
# So the total number of labels are:
len(label_freq)
# Let's sort the labels according to frequencies:
labels_sorted = sorted(label_freq.items(), key=lambda item: item[1], reverse=True)
labels_top50 = dict(labels_sorted[:50])
labels_top50
# An alternative way of doing the same using inbuilt `most_common` method of the `Counter` container:
label_freq.most_common(20)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Computing the average number of instances per labels:
# -
array(list(label_freq.values())).mean()
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Computing the average number of labels per instance:
# -
# We will add an extra column with the number of labels per instance so that we can take the mean of that column.
label_count = []
for labels in df.labels:
    label_count.append(len(labels.split(';')))
df_copy = df.copy()
df_copy['label_count'] = label_count
df_copy
# The average number of labels per instance is:
df_copy.label_count.mean()
df_copy.label_count.std()
df_copy.label_count.hist(bins=100);
df_copy.label_count.plot.density();
import seaborn as sns
sns.histplot(df_copy.label_count, bins=100, color='b', kde=True);  # distplot is deprecated in recent seaborn
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Sampling to create a small dataset:
# -
len(df)
df_sample = df.sample(frac=0.1, random_state=42, ignore_index=True)
df_sample = pd.read_csv(path_data/'processed'/'notes_labelled_sample.csv',
dtype={'text': str, 'labels': str, 'subject_id': np.int64, 'hadm_id': np.int64 })
df_sample[['text', 'labels']] = df_sample[['text', 'labels']].astype('str')
# + jupyter={"outputs_hidden": true} tags=[]
df_sample
# -
len(df_sample)
# + jupyter={"outputs_hidden": true} tags=[]
print(df_sample.iloc[4567,2])
# -
# ---
# + jupyter={"outputs_hidden": true, "source_hidden": true}
df_caml_train = pd.read_csv(path_data/'train_sample.csv')
df_caml_test = pd.read_csv(path_data/'test_sample.csv')
df_caml_dev = pd.read_csv(path_data/'dev_sample.csv')
# + jupyter={"outputs_hidden": true, "source_hidden": true}
hadms_train = df_caml_train.HADM_ID
hadms_test = df_caml_test.HADM_ID
hadms_dev = df_caml_dev.HADM_ID
# + jupyter={"outputs_hidden": true, "source_hidden": true}
len(hadms_train), len(hadms_test), len(hadms_dev)
# + jupyter={"outputs_hidden": true, "source_hidden": true}
hadms_train = list(pd.concat([hadms_train, hadms_dev], axis=0))
len(hadms_train)
# + jupyter={"outputs_hidden": true, "source_hidden": true}
train = df_caml[df_caml.hadm_id.isin(hadms_train)].reset_index(drop=True)
train.head(3)
# + jupyter={"outputs_hidden": true, "source_hidden": true}
array(train.is_valid, dtype=float).sum()
# + jupyter={"outputs_hidden": true, "source_hidden": true}
valid = df_caml[df_caml.hadm_id.isin(hadms_test)].reset_index(drop=True)
valid.head(3)
# + jupyter={"outputs_hidden": true, "source_hidden": true}
array(valid.is_valid, dtype=float).sum()
# + jupyter={"outputs_hidden": true, "source_hidden": true}
df_caml_sample = pd.concat([train, valid], axis=0)
df_caml_sample.head(3)
# + jupyter={"outputs_hidden": true, "source_hidden": true}
len(df_caml_sample)
# + jupyter={"outputs_hidden": true, "source_hidden": true}
df_caml_sample.to_csv(path_data/'caml_notes_labelled_sample_10percent.csv', index=False)
# -
# ---
# Let's check how we are doing w.r.t our training and validation split:
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### **Data Statistics Check #1: Number of instances**:
# -
# - Original: we had 52,726 data instances and train = 49,354 and valid = 3372.
# - After sampling: we have 15,818 data instances and train = 14839, and valid = 979.
train, valid = df_sample.index[~df_sample['is_valid']], df_sample.index[df_sample['is_valid']]
len(train), len(valid)
# Okay, this does not look that bad. Let's now check the remaining three statistics:
# - Total number of labels
# - average number of labels per instance, and
# - average number of instances per labels
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### **Data Statistics Check #2: Number of labels**:
# -
# - Original: we had 8922 labels
# - After Sampling: we have 6594 labels
label_count = []
for labels in df_sample.labels:
    label_count.append(len(labels.split(';')))
df_sample_copy = df_sample.copy()
# + jupyter={"outputs_hidden": true} tags=[]
df_sample_copy['label_count'] = label_count
df_sample_copy
# -
import seaborn as sns
sns.histplot(df_sample_copy.label_count, bins=100, color='b', kde=True);  # distplot is deprecated in recent seaborn
sample_label_freq = Counter()
for row in df_sample.itertuples():
    labels = row.labels.split(';')
    sample_label_freq.update(labels)
len(sample_label_freq)
labels_sorted = sorted(sample_label_freq.items(), key=lambda item: item[1], reverse=True)
labels_sorted[:20]
ranked_labels = L(labels_sorted).itemgot(0)
ranked_freqs = L(labels_sorted).itemgot(1)
ranked_labels, ranked_freqs
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(ranked_freqs)
ax.set_xlabel('Code rank by frequency')
ax.set_ylabel('Note count')
ax.set_yscale('log');
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Data Statistics Check #3: Computing the min label freq for each text:
# -
df_sample_copy = df_sample.copy()
df_sample_copy.head(3)
# + jupyter={"outputs_hidden": true} tags=[]
df_sample_copy['min_code_freq'] = df_sample_copy.apply(
lambda row: min([sample_label_freq[label] for label in row.labels.split(';')]), axis=1)
df_sample_copy['max_code_freq'] = df_sample_copy.apply(
lambda row: max([sample_label_freq[label] for label in row.labels.split(';')]), axis=1)
# Note: despite the column name, this computes the 90th percentile of the label frequencies, not the median
df_sample_copy['median_code_freq'] = df_sample_copy.apply(
    lambda row: np.percentile(np.array([sample_label_freq[label] for label in row.labels.split(';')]), 90), axis=1)
df_sample_copy
# +
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(20,20))
for freq, axis in zip(['min_code_freq', 'max_code_freq', 'median_code_freq'], axes):
    df_sample_copy[freq].hist(ax=axis[0], bins=25)
    axis[0].set_xlabel(freq)
    axis[0].set_ylabel('# of notes')
    df_sample_copy[freq].plot.density(ax=axis[1])
    axis[1].set_xlabel(freq)
# -
min_code_freq = Counter(df_sample_copy.min_code_freq)
max_code_freq = Counter(df_sample_copy.max_code_freq)
median_code_freq = Counter(df_sample_copy.median_code_freq)
total_notes = L(min_code_freq.values()).sum()
total_notes
# +
for kmin in min_code_freq:
    min_code_freq[kmin] = (min_code_freq[kmin]/total_notes) * 100
for kmax in max_code_freq:
    max_code_freq[kmax] = (max_code_freq[kmax]/total_notes) * 100
for kmedian in median_code_freq:
    median_code_freq[kmedian] = (median_code_freq[kmedian]/total_notes) * 100
# -
min_code_freq = dict(sorted(min_code_freq.items(), key=lambda item: item[0]))
max_code_freq = dict(sorted(max_code_freq.items(), key=lambda item: item[0]))
median_code_freq = dict(sorted(median_code_freq.items(), key=lambda item: item[0]))
# +
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(20,20))
for axis, freq_dict, label in zip(axes, (min_code_freq, max_code_freq, median_code_freq), ('min', 'max', 'median')):
axis[0].plot(freq_dict.keys(), freq_dict.values())
axis[0].set_xlabel(f'{label} code freq (f)')
axis[0].set_ylabel('% of notes ( P[f] )');
axis[1].plot(freq_dict.keys(), np.cumsum(list(freq_dict.values())))
axis[1].set_xlabel(f'{label} code freq (f)')
axis[1].set_ylabel('P[f<=t]');
# -
# The most frequent labels previously:
label_freq.most_common(20)
# The most frequent labels after sampling:
sample_label_freq.most_common(20)
# Simply Incredible!
np.array(list(sample_label_freq.values())).mean()
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### **Data Statistics Check #3 (Avg number of instances per label)**:
# -
# - Original: ~98
# - After Sampling: ~38
# Let's now compute the average number of labels per instance:
labels_per_instance = df_sample.apply(lambda row: len(row.labels.split(';')), axis=1)
labels_per_instance
labels_per_instance.mean()
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### **Data Statistics Check #4 (Avg number of labels per instance)**:
# -
# - Original: ~16
# - After Sampling: ~16
# Fantastic! Previously (in the original dataset) it was also ~16.
# Let's save this sample dataset we just created:
# !tree {path_data}
df_sample.to_csv(path_data/'notes_labelled_sample_10percent.csv', index=False)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ## Creating a Dataframe with ICD9 codes and the corresponding descriptions:
# -
file_descrip = path_data/'ICD9_descriptions'
file_descrip
lines = L()
with open(file_descrip, 'r') as f: lines += L(*f.readlines())
# !wc -l {path_data/'ICD9_descriptions'}
lines = [l.strip().split('\t') for l in lines]
len(lines)
# Take a look at an example line:
lines[7]
# Convert this into a dataframe:
df_descrip = pd.DataFrame(lines, columns=['ICD9_code', 'description'])
df_descrip
# Now we will save this dataframe into a csv file:
df_descrip.to_csv(path_data/'processed'/'code_descriptions.csv', index=False)
|
nbs/examples/mimic/mimic3_data_extraction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Welcome to the Noisebridge Python Class!
# + [markdown] slideshow={"slide_type": "slide"}
# ## Raise your hand if you have seen this code before
# + slideshow={"slide_type": "-"}
a = 'hi'
b = 'mom'
print(a + b)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Raise your hand if you have seen this code before
# + slideshow={"slide_type": "-"}
l = ['a', 'b', 'c']
print(l[2])
# + [markdown] slideshow={"slide_type": "slide"}
# ## Raise your hand if you have seen this code before
# + slideshow={"slide_type": "-"}
for i in range(5):
print(i)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Raise your hand if you have seen this code before
# + slideshow={"slide_type": "-"}
def func(arg1, arg2):
"add two numbers."
return arg1 + arg2
# + [markdown] slideshow={"slide_type": "slide"}
# ### Functions have 5 important parts
# * Syntax for creating a new function:
# `def ____(____, ____):`
# * Name: `func`
# * Doc string: `"add two numbers."`
# * Arguments / Parameters: `arg1`, `arg2`
# * Return statement: `return arg1 + arg2`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Docstrings are important!
#
# ### They get carried around wherever the function goes.
# -
help(func)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Look through the functions attached to strings with dir.
#
# ### Use these string functions to create a new function that checks if a character is in the first 10 characters of a string.
# +
def character_near_beginning(string, char):
pass
dir('hi mom')
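# One possible sketch of the exercise above (the 10-character cutoff comes from the prompt; a plain membership test on a slice is my assumption about what's intended):

```python
def character_near_beginning(string, char):
    "Check if char is in the first 10 characters of string."
    # Slicing never raises, even for strings shorter than 10 characters.
    return char in string[:10]

print(character_near_beginning('hi mom', 'm'))
print(character_near_beginning('hi mom there', 'r'))
```

The second call is False because `'r'` only appears after the first 10 characters.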
# + [markdown] slideshow={"slide_type": "slide"}
# ## It is sometimes hard to read code when arguments are passed by position.
#
# ### Try passing arguments by keyword
# -
print(character_near_beginning(string='abc', char='c'))
print(character_near_beginning(char='c', string='abc'))
# + [markdown] slideshow={"slide_type": "slide"}
# ## What do you think prints when the following code runs.
# ## Does your neighbor agree? Check to see if you are right.
# +
def add_to_global_var(x):
return global_var + x
x = 5
print(add_to_global_var(4))
# + [markdown] slideshow={"slide_type": "slide"}
# #### What do you think prints when the following code runs?
#
# #### Does your neighbor agree? Check to see if you are right.
# +
x = 0
y = 0
result = 0
def inside_versus_outside(x, y):
result = x + y
return result
print(inside_versus_outside(1, 2))
print('x:', x)
print('y:', y)
print('result:', result)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Functions provide
# * Structure
# * Documentation
# * Isolation
# + [markdown] slideshow={"slide_type": "slide"}
# #### Isolation is provided by the stack.
#
# | function | vars |
# |:---------------------:|:------------------------ |
# | __main__ | x = 0 |
# | ... | y = 0 |
# | ... | result = 0 |
# | inside_versus_outside | x = 1 |
# | ... | y = 2 |
# | ... | result = 3 |
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Trick question! What is one divided by zero?
# -
1 / 0
# + [markdown] slideshow={"slide_type": "slide"}
# ## The stack also gives you tracebacks
#
# ### Tracebacks show the execution order when things go wrong
# + slideshow={"slide_type": "-"}
def everything_is_fine():
uh_oh()
def uh_oh():
return 1 / 0
everything_is_fine()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Remember character_near_beginning?
#
# ### Let's raise an error if char is not a single character
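# A minimal sketch of the check suggested above (choosing ValueError is my assumption; any exception type would illustrate the point):

```python
def character_near_beginning(string, char):
    "Check if char is in the first 10 characters of string."
    if len(char) != 1:
        # Fail loudly instead of silently searching for a multi-character substring.
        raise ValueError('char must be a single character, got %r' % char)
    return char in string[:10]

print(character_near_beginning('hi mom', 'm'))
```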
# + [markdown] slideshow={"slide_type": "slide"}
# ## Did you notice something strange about the `print` function?
#
# ### It can take any number of arguments
# -
print('one')
print('one', 'two')
print('one', 'two', 'three')
# + [markdown] slideshow={"slide_type": "slide"}
# ## We can accept any number of arguments by using the unpack operators
# +
def unpack(*args, **kwargs):
print('args:', args)
print('kwargs:', kwargs)
unpack()
unpack(1, 2, arg1='a', arg2='b')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Remember character_near_beginning?
#
#
# ### Rewrite the function so that it checks any number of characters with an unpack operator
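# One way to sketch the rewrite with the unpack operator (requiring *every* given character to appear is my reading of the exercise; "any" would be equally defensible):

```python
def character_near_beginning(string, *chars):
    "Check if every char in chars is in the first 10 characters of string."
    return all(char in string[:10] for char in chars)

print(character_near_beginning('hi mom', 'h', 'm'))
print(character_near_beginning('hi mom', 'h', 'z'))
```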
# + [markdown] slideshow={"slide_type": "slide"}
# ### Notice that using `*args` forces the function caller to pass any additional arguments through the keyword
# + [markdown] slideshow={"slide_type": "slide"}
# ## What do you think the arguments for this function accomplish?
#
# ### Check with your neighbor, then try it out
# -
def F(arg1, *, flag):
...
# + [markdown] slideshow={"slide_type": "slide"}
# ## Let's take one last look at the print() function.
#
# #### What does `sep=' '` mean?
# -
help(print)
# + [markdown] slideshow={"slide_type": "slide"}
# ## `sep` is a parameter with a default.
#
# #### If you don't set it explicitly, it will equal `' '`
# -
print('hello', 'world')
print('hello', 'world', sep='!')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Defaults are defined with an equal sign
# +
def my_print(*strings, sep=' '):
print(sep.join(strings))
my_print('hello', 'world')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Remember character_near_beginning?
#
# ### Let's make 10 a default instead of a hardcoded value
# -
def character_near_beginning(string, char):
...
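# A sketch of the defaulted version (the parameter name `n` is my choice, not from the slides):

```python
def character_near_beginning(string, char, n=10):
    "Check if char is in the first n characters of string."
    return char in string[:n]

print(character_near_beginning('hi mom', 'm'))       # uses the default n=10
print(character_near_beginning('hi mom', 'm', n=2))  # only looks at 'hi'
```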
# + [markdown] slideshow={"slide_type": "slide"}
# ## Challenge Questions!
# + [markdown] slideshow={"slide_type": "slide"}
# ## The Old Switcheroo
#
# #### Taken from Software Carpentry
# http://swcarpentry.github.io/python-novice-inflammation/06-func/index.html
# +
a = 3
b = 7
def swap(a, b):
temp = a
a = b
b = temp
swap(a, b)
print(a, b) # What does this print?
# -
# ## Mixing Default and Non Default parameters
#
# #### Taken from Software Carpentry
#
# http://swcarpentry.github.io/python-novice-inflammation/06-func/index.html
# +
# What does this code do, and why?
def numbers(one, two=2, three, four=4):
n = str(one) + str(two) + str(three) + str(four)
return n
print(numbers(1, three=3))
|
course/functions/Functions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# In this notebook, I will demonstrate how to use existing SDSS software to create a commissioning design using a carton that already exists in targetdb. Specifically, we will be creating a design of all-skies.
# # Creating Commissioning Designs
# For this example, I will demonstrate how to create a design for a carton that is already in targetdb, specifically an all-sky design. First, we have to select all skies near some field center in targetdb.
# +
from sdssdb.peewee.sdss5db import targetdb
# connect to targetdb
targetdb.database.connect_from_parameters(user='sdss_user',
host='localhost',
port=7502)
# +
from peewee import *
import numpy as np
# set search radius based on field size for APO or LCO
observatory = 'APO'
if observatory == 'APO':
r_search = 1.49
else:
    r_search = 0.95
# specify the field center and position angle
racen = 20.
deccen = 20.
position_angle=24.188576
# specify the commissioning cartons we want to consider
cartons = ['ops_sky_boss', 'ops_sky_apogee']
# get all of the targets in the commissioning carton near the field center
all_skies = (targetdb.Target.select(targetdb.Target.catalogid,
targetdb.Target.ra,
targetdb.Target.dec,
targetdb.Target.pk,
targetdb.CartonToTarget.priority,
targetdb.CartonToTarget.value,
targetdb.Cadence.label,
targetdb.Carton.carton,
targetdb.CartonToTarget.pk,
targetdb.Magnitude.g,
targetdb.Magnitude.r,
targetdb.Magnitude.i,
targetdb.Magnitude.bp,
targetdb.Magnitude.gaia_g,
targetdb.Magnitude.rp,
targetdb.Magnitude.h,
targetdb.Category.label,
targetdb.Carton.program,
targetdb.Instrument.label)
.join(targetdb.CartonToTarget)
.join(targetdb.Cadence, JOIN.LEFT_OUTER)
.switch(targetdb.CartonToTarget)
.join(targetdb.Carton)
.join(targetdb.Category)
.switch(targetdb.CartonToTarget)
.join(targetdb.Magnitude, JOIN.LEFT_OUTER)
.switch(targetdb.CartonToTarget)
.join(targetdb.Instrument)
.where((targetdb.Carton.carton.in_(cartons)) &
(targetdb.Target.cone_search(racen, deccen, r_search))))
# grab the results
catalogid, ra, dec, target_pk, priority, value, cadences, carton, carton_to_target_pk, g, r, i, bp, gaia_g, rp, h, category, program, instrument = map(list, zip(*list(all_skies.tuples())))
catalogid = np.array(catalogid, dtype=np.int64)
ra = np.array(ra)
dec = np.array(dec)
target_pk = np.array(target_pk, dtype=np.int64)
priority = np.array(priority)
value = np.array(value)
carton = np.array(carton)
carton_to_target_pk = np.array(carton_to_target_pk)
category = np.array(category)
program = np.array(program)
instrument = np.char.upper(instrument)
magnitudes = np.zeros((len(g),7))
magnitudes[:, 0] = g
magnitudes[:, 1] = r
magnitudes[:, 2] = i
magnitudes[:, 3] = bp
magnitudes[:, 4] = gaia_g
magnitudes[:, 5] = rp
magnitudes[:, 6] = h
# -
# Next, we will want to create the design using Robostrategy. First, though, let's convert ra,dec to FPS x,y to visualize the possible targets. To do this, we will also need to specify the observation time of the design in Julian Days.
# +
from coordio.utils import radec2wokxy
import robostrategy.obstime as obstime
import coordio.time
# specify observation time
ot = obstime.ObsTime(observatory=observatory.lower())
obsTime = coordio.time.Time(ot.nominal(lst=racen)).jd
# convert to x,y
x, y, fieldWarn, HA, PA_coordio = radec2wokxy(ra=ra,
dec=dec,
coordEpoch=np.array([2457174] * len(ra)),
waveName=np.array(list(map(lambda x:x.title(), instrument))),
raCen=racen,
decCen=deccen,
obsAngle=position_angle,
obsSite=observatory,
obsTime=obsTime)
# -
# To visualize our possible targets for the design, below is a plot of the possible targets in the FPS focal plane:
# +
import matplotlib.pylab as plt
# %matplotlib inline
plt.rcParams.update({'font.size': 18})
plt.figure(figsize=(10,10))
plt.scatter(x[instrument == 'APOGEE'], y[instrument == 'APOGEE'],
c='k', marker='.', label='APOGEE Sky')
plt.scatter(x[instrument == 'BOSS'], y[instrument == 'BOSS'],
c='r', marker='.', label='BOSS Sky')
plt.grid()
plt.legend()
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()
# -
# Now we can finally build the design. This requires that we create a specially formatted array for Robostrategy to build the design. This is demonstrated below.
# +
import robostrategy.field as field
import roboscheduler.cadence as cadence
from sdssdb.peewee.sdss5db import targetdb
# connect to targetdb
targetdb.database.connect_from_parameters(user='sdss_user',
host='localhost',
port=7502)
# need to load cadences before building designs
cadence.CadenceList().fromdb(version='v1')
# cadencelist = cadence.CadenceList()
# cadencelist.fromfits(filename='rsCadences-test-designmode-2-apo.fits', unpickle=False)
# set cadence. must be in list of loaded cadences
# set cadence here because NONEs currently for sky carton in targetdb
cad = 'bright_1x1'
# create the field object
f = field.Field(racen=racen, deccen=deccen, pa=position_angle,
field_cadence=cad, observatory=observatory.lower())
# set the required skies, in this case all fibers
f.required_calibrations['sky_boss'] = [375]
f.required_calibrations['sky_apogee'] = [125]
f.required_calibrations['standard_boss'] = [0]
f.required_calibrations['standard_apogee'] = [0]
# create array for RS field
N = len(ra)
# these are datatypes from robostrategy.Field
targets_dtype = np.dtype([('ra', np.float64),
('dec', np.float64),
('epoch', np.float32),
('pmra', np.float32),
('pmdec', np.float32),
('parallax', np.float32),
('lambda_eff', np.float32),
('delta_ra', np.float64),
('delta_dec', np.float64),
('magnitude', np.float32, 7),
('x', np.float64),
('y', np.float64),
('within', np.int32),
('incadence', np.int32),
('priority', np.int32),
('value', np.float32),
('program', np.unicode_, 30),
('carton', np.unicode_, 50),
('category', np.unicode_, 30),
('cadence', np.unicode_, 30),
('fiberType', np.unicode_, 10),
('catalogid', np.int64),
('carton_to_target_pk', np.int64),
('rsid', np.int64),
('target_pk', np.int64),
('rsassign', np.int32)])
# create an empty array
targs = np.zeros(N, dtype=targets_dtype)
# fill in the relevant columns
targs['ra'] = ra
targs['dec'] = dec
targs['epoch'] = np.zeros(N, dtype=np.float32) + 2015.5
targs['x'] = x
targs['y'] = y
targs['within'] = np.zeros(N, dtype=np.int32) + 1
targs['incadence'] = np.zeros(N, dtype=np.int32) + 1
targs['priority'] = priority
targs['value'] = value
targs['program'] = program
targs['carton'] = carton
targs['category'] = category
targs['cadence'] = np.array([cad] * N, dtype='<U30')
targs['fiberType'] = instrument
targs['lambda_eff'] = np.zeros(N, dtype=np.float32)
targs['lambda_eff'][targs['fiberType'] == 'APOGEE'] = 16000.
targs['lambda_eff'][targs['fiberType'] == 'BOSS'] = 5400.
targs['catalogid'] = catalogid
targs['carton_to_target_pk'] = carton_to_target_pk
targs['rsid'] = np.arange(N, dtype=np.int64) + 1
targs['target_pk'] = target_pk
targs['magnitude'] = magnitudes
targs['rsassign'] = np.zeros(N, dtype=np.int32) + 1
# assign targets
f.targets_fromarray(targs)
f.assign()
# -
# You can print out the results of the assignments for the design to see how close Robostrategy got to the desired design.
print(f.assess())
# You can also plot the resulting assignments:
f.plot(iexp=0)
# To save the design, you can then export it as a fits file.
f.tofits(filename='comm_all_skies_example.fits')
# # Checking Commissioning Design with Mugatu
# Finally, if you have created a design to your liking, you can validate the design once more using Mugatu. To do this, we will use the results of the above design to create a mugatu.FPSDesign object. Designs can be easily loaded directly from the fits file you created above.
# +
from mugatu.fpsdesign import FPSDesign
# create a mugatu.FPSDesign object that is specified as a manual design
fps_design = FPSDesign(design_pk=-1,
obsTime=obsTime,
design_file='comm_all_skies_example.fits',
manual_design=True,
exp=0)
# -
# Once loaded you can validate the design. If no errors come up, your design is valid and passed all deadlock and collision tests!
fps_design.validate_design()
# If any warnings come up (i.e. any assignments were removed due to deadlocks or collisions), you can check which targets were removed by calling the attributes below.
fps_design.targets_unassigned
fps_design.targets_collided
|
examples/commissioning_design_example_all_skies.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Forward modeling tutorial using mosaic images
# ## Extract BeamCutout
#
# Here I will show you how to extract BeamCutouts. Saving these (fits) files before modeling will make the entire process quicker. The BeamCutouts contain the orientation information, which is necessary for better-fitting models. Here's what Gabe says about this from his grizli notebooks:
#
# >To interact more closely with an individual object, its information can be extracted from the full exposure with the BeamCutout class. This object will contain the high-level GrismDisperser object useful for generating the model spectra and it will also have tools for analyzing and fitting the observed spectra.
#
# >It also makes detailed cutouts of the parent direct and grism images preserving the native WCS information.
# +
from grizli import model
from grizli import multifit
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from shutil import copy
from astropy.table import Table
from astropy import wcs
from astropy.io import fits
from glob import glob
import os
## Seaborn is used to make plots look nicer.
## If you don't have it, you can comment it out and it won't affect the rest of the code
import seaborn as sea
sea.set(style='white')
sea.set(style='ticks')
sea.set_style({'xtick.direction': 'in', 'xtick.top': True, 'xtick.minor.visible': True,
               'ytick.direction': 'in', 'ytick.right': True, 'ytick.minor.visible': True})
# %config InlineBackend.figure_format = 'retina'
# %matplotlib inline
# -
# ## Set files and target
#
# For this example I'll be using one of my quiescent galaxies from GOODS North.
# +
Grism_flts = glob('/Volumes/Vince_CLEAR/Data/Grism_fields/ERSPRIME/*GrismFLT.fits')
grp = multifit.GroupFLT(grism_files = Grism_flts, verbose=False)
# -
# ## Use Grizli to extract beam
#
# First you'll need to create a GrismFLT object.
#
# Next run blot_catalog to create the catalog of objects in the field.
# Another routine (photutils_detection) is used if you're not using mosaic images and segmentation maps,
# but since we have them you should do it this way.
beams = grp.get_beams(39170)
pa = -1
for BEAM in beams:
if pa != BEAM.get_dispersion_PA():
print('Instrument : {0}, ORIENT : {1}'.format(BEAM.grism.filter,BEAM.get_dispersion_PA()))
pa = BEAM.get_dispersion_PA()
# +
# save out G102 - 345
BEAM = beams[16]
BEAM.write_fits(root='98', clobber=True)
fits.setval('98_39170.g102.A.fits', 'EXPTIME', ext=0,
value=fits.open('98_39170.g102.A.fits')[1].header['EXPTIME'])
# save out G102 - 78
BEAM = beams[4]
BEAM.write_fits(root='78', clobber=True)
fits.setval('78_39170.g102.A.fits', 'EXPTIME', ext=0,
value=fits.open('78_39170.g102.A.fits')[1].header['EXPTIME'])
# save out G102 - 48
BEAM = beams[8]
BEAM.write_fits(root='48', clobber=True)
fits.setval('48_39170.g102.A.fits', 'EXPTIME', ext=0,
value=fits.open('48_39170.g102.A.fits')[1].header['EXPTIME'])
# save out G141 - 345
BEAM = beams[0]
BEAM.write_fits(root='345', clobber=True)
fits.setval('345_39170.g141.A.fits', 'EXPTIME', ext=0,
value=fits.open('345_39170.g141.A.fits')[1].header['EXPTIME'])
# +
## G102 cutouts
for i in glob('*.g102*'):
g102_beam = model.BeamCutout(fits_file=i)
plt.figure()
plt.imshow(g102_beam.beam.direct)
plt.xticks([])
plt.yticks([])
plt.title(i)
## G141 cutout
g141_beam = model.BeamCutout(fits_file='345_39170.g141.A.fits')
plt.figure()
plt.imshow(g141_beam.beam.direct)
plt.xticks([])
plt.yticks([])
plt.title('345_39170.g141.A.fits')
# +
## G102 cutouts
for i in glob('*.g102*'):
g102_beam = model.BeamCutout(fits_file=i)
plt.figure()
plt.imshow(g102_beam.grism.data['SCI']- g102_beam.contam, vmin = -0.1, vmax=0.5)
plt.xticks([])
plt.yticks([])
plt.title(i)
## G141 cutout
g141_beam = model.BeamCutout(fits_file='345_39170.g141.A.fits')
plt.figure()
plt.imshow(g141_beam.grism.data['SCI']- g141_beam.contam, vmin = -0.1, vmax=0.5)
plt.xticks([])
plt.yticks([])
plt.title('345_39170.g141.A.fits')
# -
|
notebooks/forward_modeling/Extract_beam.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import libraries
from IPython.display import Image, display
import numpy as np
import os
import tensorflow as tf
from os.path import join
from PIL import ImageFile
import pandas as pd
from matplotlib import cm
import seaborn as sns
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D
from keras.applications.resnet50 import ResNet50
from keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
# from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.metrics import mean_squared_error, mean_absolute_error, roc_auc_score, classification_report, confusion_matrix
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn import svm
from sklearn.mixture import GaussianMixture
from sklearn.isotonic import IsotonicRegression
import re
ImageFile.LOAD_TRUNCATED_IMAGES = True
plt.style.use('fivethirtyeight')
# %matplotlib inline
# -
train_img_dir_n = "../../dataset/archive/natural_images/teeth"
train_img_paths_n = [join(train_img_dir_n,filename) for filename in os.listdir(train_img_dir_n)]
len(train_img_paths_n)
# split teeth data into train, test, and val
train_img_paths, test_img_paths_teeth = train_test_split(train_img_paths_n, test_size=0.2, random_state=42)
train_img_paths, val_img_paths_teeth = train_test_split(train_img_paths, test_size=0.2, random_state=42)
# +
# import ~teeth images
natural_images_path = "../../dataset/archive/natural_images/"
test_img_paths_no_teeth = []
# print(os.listdir("../../dataset/archive/natural_images"))
for d in [d for d in os.listdir("../../dataset/archive/natural_images") if d not in ["car", "airplane", "motorbike"]]:
test_img_dir_na = natural_images_path + d
test_img_paths_no_teeth.append([join(test_img_dir_na,filename) for filename in os.listdir(test_img_dir_na)])
test_img_paths_no_teeth_flat = [item for sublist in test_img_paths_no_teeth for item in sublist]
test_img_paths_no_teeth, val_img_paths_no_teeth = train_test_split(test_img_paths_no_teeth_flat, test_size = 0.2, random_state = 42)
# -
def natural_img_dir(image_path):
path_regex = r"natural_images\/(\w*)"
if 'natural_images' in image_path:
return re.findall(path_regex,image_path,re.MULTILINE)[0].strip()
else:
return 'teeth'
# create test dataframe
all_test_paths = test_img_paths_teeth + test_img_paths_no_teeth
test_path_df = pd.DataFrame({
'path': all_test_paths,
'is_teeth': [1 if path in test_img_paths_teeth else 0 for path in all_test_paths]
})
test_path_df = shuffle(test_path_df,random_state = 0).reset_index(drop = True)
test_path_df['image_type'] = test_path_df['path'].apply(lambda x: natural_img_dir(x))
all_test_paths = test_path_df['path'].tolist()
print('Distribution of Image Types in Test Set')
print(test_path_df['image_type'].value_counts())
test_path_df
# create val dataframe
all_val_paths = val_img_paths_teeth + val_img_paths_no_teeth
val_path_df = pd.DataFrame({
'path': all_val_paths,
'is_teeth': [1 if path in val_img_paths_teeth else 0 for path in all_val_paths]
})
val_path_df = shuffle(val_path_df,random_state = 0).reset_index(drop = True)
val_path_df['image_type'] = val_path_df['path'].apply(lambda x: natural_img_dir(x))
all_val_paths = val_path_df['path'].tolist()
print('Distribution of Image Types in Validation Set')
print(val_path_df['image_type'].value_counts())
# +
# prepare images for resnet50
image_size = 224
def read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
#output = img_array
output = preprocess_input(img_array)
return(output)
X_train = read_and_prep_images(train_img_paths)
X_test = read_and_prep_images(all_test_paths)
X_val = read_and_prep_images(all_val_paths)
# +
# get features from resnet50
resnet_weights_path = './resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
# X : images numpy array
resnet_model = ResNet50(input_shape=(image_size, image_size, 3), weights=resnet_weights_path, include_top=False, pooling='avg') # Since top layer is the fc layer used for predictions
X_train = resnet_model.predict(X_train)
X_test = resnet_model.predict(X_test)
X_val = resnet_model.predict(X_val)
# +
# Apply standard scaler to output from resnet50
ss = StandardScaler()
ss.fit(X_train)
X_train = ss.transform(X_train)
X_test = ss.transform(X_test)
X_val = ss.transform(X_val)
# Take PCA to reduce feature space dimensionality
pca = PCA(n_components=120, whiten=True)
pca = pca.fit(X_train)
print('Explained variance percentage = %0.2f' % sum(pca.explained_variance_ratio_))
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)
X_val = pca.transform(X_val)
# +
# Train classifier and obtain predictions for OC-SVM
oc_svm_clf = svm.OneClassSVM(gamma=0.001, kernel='rbf', nu=0.08) # Obtained using grid search
if_clf = IsolationForest(contamination=0.08, max_features=1.0, max_samples=1.0, n_estimators=40) # Obtained using grid search
oc_svm_clf.fit(X_train)
if_clf.fit(X_train)
oc_svm_preds = oc_svm_clf.predict(X_test)
if_preds = if_clf.predict(X_test)
# +
svm_if_results = pd.DataFrame({
    'path': all_test_paths,
    'oc_svm_preds': [0 if x == -1 else 1 for x in oc_svm_preds],
    'if_preds': [0 if x == -1 else 1 for x in if_preds]
})
svm_if_results = svm_if_results.merge(test_path_df)
svm_if_results[svm_if_results.is_teeth == 1]
# -
print('roc auc score: if_preds')
if_preds=svm_if_results['if_preds']
actual=svm_if_results['is_teeth']
print(roc_auc_score(actual, if_preds))
print(classification_report(actual, if_preds))
sns.heatmap(confusion_matrix(actual, if_preds),annot=True,fmt='2.0f')
plt.show()
# +
import joblib
filename_svm = 'oc_svm_preds.sav'
filename_clf = 'if_clf.sav'
joblib.dump(oc_svm_clf, filename_svm)
joblib.dump(if_clf, filename_clf)
# -
|
Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/HamzahSarmad/Y3_CO3093_Big_Data_And_Predictive_Analytics/blob/main/Week2LAB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="73BU3u-ru5jK"
# # Question 1 Preparing and Describing the Data
# + id="V7B3GHaAHmMc"
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
# + id="RfZI7KXdH8LN"
baseUrl = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series'
confirmed = baseUrl + '/time_series_covid19_confirmed_global.csv'
deaths = baseUrl + '/time_series_covid19_deaths_global.csv'
recovered = baseUrl + '/time_series_covid19_recovered_global.csv'
# + id="X77CLDKmJS1E"
# Reading all of the input files
confi = pd.read_csv(confirmed)
death = pd.read_csv(deaths)
rec = pd.read_csv(recovered)
# + [markdown] id="NII-1xtTvZDz"
# # Question 2 Data Handling and Visualisation
# + colab={"base_uri": "https://localhost:8080/", "height": 514} id="E-77N9p4vnS2" outputId="f1762541-b560-4262-de0c-0e2f12ddd70b"
grouped_conf = confi.groupby(by='Country/Region').sum()
sorted_grouped_conf = grouped_conf.sort_values(by=grouped_conf.columns[-1], ascending=False)
last_col = confi.iloc[-1]
last_day = last_col.index[-1]
plt.figure(figsize=(12, 8))
plt.title('Top 10 countries with highest cases', fontsize=14)
plt.barh(sorted_grouped_conf[last_day].index[:10], \
sorted_grouped_conf[last_day].head(10))
plt.xlabel('Total cases by '+last_day)
plt.grid()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 514} id="-6AxHRwS0LKy" outputId="86b7d21d-d0e5-44ec-c814-5698b58e642a"
grouped_deaths = death.groupby(by='Country/Region').sum()
sorted_grouped_deaths = grouped_deaths.sort_values(by=grouped_deaths.columns[-1], ascending=False)
last_col = confi.iloc[-1]
last_day = last_col.index[-1]
plt.figure(figsize=(12, 8))
plt.title('Top 10 countries with highest deaths', fontsize=14)
plt.barh(sorted_grouped_deaths[last_day].index[:10], \
sorted_grouped_deaths[last_day].head(10))
plt.xlabel('Total deaths by '+last_day)
plt.grid()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 514} id="PZculK7S1fSJ" outputId="157fa1e9-4e85-4fb7-cf4d-a32299eea9ed"
grouped_rec = rec.groupby(by='Country/Region').sum()
sorted_grouped_rec = grouped_rec.sort_values(by=grouped_rec.columns[-1], ascending=False)
last_col = confi.iloc[-1]
last_day = last_col.index[-1]
plt.figure(figsize=(12, 8))
plt.title('Top 10 countries with highest recovered', fontsize=14)
plt.barh(sorted_grouped_rec[last_day].index[:10], \
sorted_grouped_rec[last_day].head(10))
plt.xlabel('Total recovered by '+last_day)
plt.grid()
plt.show()
# + [markdown] id="snxR8zLS2Fv3"
# The data set shows no records since December 2021, hence the graphs show 0 for all countries
# + [markdown] id="c4LiyBFo2PAa"
# # Question 3
# + id="vN3063ZT2SXw"
def get_total_confirmed_world():
total = confi.iloc[:, 4: ].apply(sum, axis=0)
total.index = pd.to_datetime(total.index)
return total
# + id="pgkdGooK2_Cl"
#Question 2a
def get_total_confirmed_ofcountry(country):
df_country = confi['Country/Region']==country
total = confi[df_country].iloc[:,4:].apply(sum, axis=0)
total.index = pd.to_datetime(total.index)
return total
#get_total_confirmed_ofcountry('United Kingdom')
# + id="B97uYany35Fu" colab={"base_uri": "https://localhost:8080/", "height": 500} outputId="fcb28044-31e8-4981-92e5-78e5c4204b75"
def line_plot_ofcountry(name, col):
data = get_total_confirmed_ofcountry(name)
plt.figure(figsize=(12, 8))
plt.title(name.upper()+': Total cases reported', fontsize=14)
plt.plot(data.index, data, color=col, lw=5)
plt.ylabel('Total cases')
plt.grid()
plt.show()
line_plot_ofcountry("US", "red")
# + colab={"base_uri": "https://localhost:8080/", "height": 513} id="s0tEnOqo3_BC" outputId="c3093980-7adb-48f9-8707-cea6c3e275b1"
#Question 2b
def hist_total_confirmed_ofcountry(country):
data = get_total_confirmed_ofcountry(country)
plt.figure(figsize=(12, 8))
plt.title('Histogram for total confirmed cases of '+country, fontsize=14)
plt.hist(data, bins=50)
plt.ylabel("%s's confirmed cases" % country)
plt.grid()
plt.show()
hist_total_confirmed_ofcountry('US')
# + colab={"base_uri": "https://localhost:8080/", "height": 500} id="VSA02C4tRzya" outputId="bef4d31f-2e2c-4b1a-ed82-d89570c12475"
def bar_total_confirmed_ofcountry(country):
data = get_total_confirmed_ofcountry(country)
plt.figure(figsize=(12, 8))
    plt.title('Bar chart of total confirmed cases of '+country, fontsize=14)
plt.bar(data.index, data)
plt.ylabel("%s's confirmed cases" % country)
plt.grid()
plt.show()
bar_total_confirmed_ofcountry('US')
# + colab={"base_uri": "https://localhost:8080/", "height": 494} id="c9poDxvpwFDE" outputId="c2e63332-55b6-46dc-a9b9-0b56ce715d47"
def line_plot_ofcountries(names, cols):
    plt.figure(figsize=(12, 8))
    for i in range(len(names)):
        data = get_total_confirmed_ofcountry(names[i])
        plt.plot(data.index, data, color=cols[i], lw=5, label=names[i])
    plt.ylabel("Total Cases")
    plt.legend()
    plt.grid()
    plt.show()
names=['US','Pakistan','United Kingdom']
cols=['red','blue','green']
line_plot_ofcountries(names, cols)
# + [markdown] id="J6MqRxG0w4YS"
# # Question 4
# + id="GIa-Wfhyw2O7"
def get_daily_confirmed_country(name):
df_country = confi['Country/Region']==name
cases = confi[df_country].iloc[:, 4: ].apply(lambda x: x.sum())
dates = pd.to_datetime(cases.index)
frame = {'Dates':dates, 'Cases':cases}
df = pd.DataFrame(frame)
df['Lag'] = df.Cases.shift(1).fillna(0)
df['Daily Cases'] = df.Cases - df.Lag
return df[['Dates', 'Daily Cases']]
def moving_averages(country, wn=7):
    df = get_daily_confirmed_country(country)
    df['SMA_1'] = df['Daily Cases'].rolling(window=wn).mean()
    # Manual equivalent of the rolling mean; use wn, not a hard-coded window of 7
    a = np.zeros(df.shape[0])
    for i in range(0, df.shape[0] - wn + 1):
        a[i + wn - 1] = df['Daily Cases'][i:i + wn].mean()
    df['SMA_2'] = np.array(a)
    return df
def plot_daily_and_avg_country(name):
df = get_daily_confirmed_country(name)
df['SMA_1'] = df['Daily Cases'].rolling(window=7).mean()
plt.figure(figsize=(12,8))
ax = df['SMA_1'].fillna(0).plot.line(color='red', lw=3)
df['Daily Cases'].plot.bar(ax=ax, color='blue')
ax.set_title(name.upper()+': Daily cases reported', fontsize=14)
ax.set_ylabel('Daily cases')
x = 0
for xlabel in ax.xaxis.get_ticklabels():
if x % 20 != 0:
xlabel.set_visible(False)
x = x+1
#plt.grid()
plt.show()
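A quick, self-contained sanity check (synthetic data) that a manual sliding-window loop like the one in `moving_averages` agrees with pandas' built-in rolling mean:

```python
import numpy as np
import pandas as pd

# Synthetic "daily cases" series: 0, 1, ..., 19
cases = pd.Series(np.arange(20, dtype=float))
wn = 7

sma_pandas = cases.rolling(window=wn).mean()

# Manual equivalent: mean of each full wn-wide window
manual = np.full(len(cases), np.nan)
for i in range(len(cases) - wn + 1):
    manual[i + wn - 1] = cases[i:i + wn].mean()

# The two agree wherever a full window exists
assert np.allclose(sma_pandas[wn - 1:], manual[wn - 1:])
```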
# + colab={"base_uri": "https://localhost:8080/", "height": 112} id="6QEgIaSmxCm9" outputId="aeaddea4-69a1-40bf-f54e-41530ffcf2bb"
data = get_daily_confirmed_country('United Kingdom')
data.tail(2)
# + colab={"base_uri": "https://localhost:8080/", "height": 535} id="GOAx15eBxITv" outputId="01a61cae-b139-45bb-90a3-2da61bc4c64e"
plot_daily_and_avg_country('US')
# + colab={"base_uri": "https://localhost:8080/", "height": 535} id="v2VTssqpxL8_" outputId="031f4adb-5953-4855-bf25-98ed2c70b718"
plot_daily_and_avg_country('United Kingdom')
# + colab={"base_uri": "https://localhost:8080/", "height": 535} id="t5qoXBWpxOVz" outputId="ab73de43-1791-4c73-d3ae-4e57ad153468"
plot_daily_and_avg_country('Pakistan')
# + [markdown] id="8IKhan62xSDk"
# # Question 5
# + id="VIww_dxxxTrz"
def model0(x, p0, p1):
y = p0+p1*np.power(x,1)
return y
def model1(x, p1, p2, p3, p4):
y = p1*np.power(x, 2)+p2*np.power(x,3)+p3*np.power(x,4)+p4*np.power(x,5)
return y
def model2(x, p1, p2, p3):
y = p1*np.power(x,1)+p2*np.exp(p3*x)
return y
def model_cases_ofcountry(name):
df = get_total_confirmed_ofcountry(name)
df = df.reset_index(drop = True)
pars1, cov1 = curve_fit(f=model1, xdata=df.index, ydata=df, p0=[0, 0, 0, 0], bounds=(-np.inf, np.inf))
pars0, cov0 = curve_fit(f=model0, xdata=df.index, ydata=df, p0=[0, 0], bounds=(-np.inf, np.inf))
stdevs = np.sqrt(np.diag(cov1))
pred1 = model1(df.index, *pars1)
pred0 = model0(df.index, *pars0)
plt.figure(figsize=(12, 8))
plt.title(name.upper()+': Total cases reported', fontsize=14)
g1, = plt.plot(df.index, df, 'o', lw=3, label = 'actual')
g3, = plt.plot(df.index, pred1, color='red', lw=4, label = 'predicted')
plt.legend(handles=[g1, g3], loc='upper center')
plt.grid()
plt.show()
return stdevs
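The `curve_fit` call above can be illustrated in isolation on noise-free synthetic data, where the recovered parameters should match the true ones (the model and data here are illustrative, not from the COVID dataset):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit on exact synthetic data: y = 2 + 3x
def linear(x, p0, p1):
    return p0 + p1 * x

x = np.arange(10, dtype=float)
y = 2.0 + 3.0 * x
pars, cov = curve_fit(f=linear, xdata=x, ydata=y, p0=[0, 0])
stdevs = np.sqrt(np.diag(cov))  # one-sigma uncertainty of each parameter
print(pars)  # approximately [2., 3.]
```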
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="iV2BFqqxxZPW" outputId="a5e02599-0d0e-4b0f-e8b3-11ee656eccb7"
model_cases_ofcountry('United Kingdom')
|
Week2LAB.ipynb
|
# +
# Copyright 2010-2018 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Code sample that solves a model and displays a small number of solutions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from ortools.sat.python import cp_model
class VarArraySolutionPrinterWithLimit(cp_model.CpSolverSolutionCallback):
"""Print intermediate solutions."""
def __init__(self, variables, limit):
cp_model.CpSolverSolutionCallback.__init__(self)
self.__variables = variables
self.__solution_count = 0
self.__solution_limit = limit
def on_solution_callback(self):
self.__solution_count += 1
for v in self.__variables:
print('%s=%i' % (v, self.Value(v)), end=' ')
print()
if self.__solution_count >= self.__solution_limit:
print('Stop search after %i solutions' % self.__solution_limit)
self.StopSearch()
def solution_count(self):
return self.__solution_count
def StopAfterNSolutionsSampleSat():
"""Showcases calling the solver to search for small number of solutions."""
# Creates the model.
model = cp_model.CpModel()
# Creates the variables.
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
# Create a solver and solve.
solver = cp_model.CpSolver()
solution_printer = VarArraySolutionPrinterWithLimit([x, y, z], 5)
status = solver.SearchForAllSolutions(model, solution_printer)
print('Status = %s' % solver.StatusName(status))
print('Number of solutions found: %i' % solution_printer.solution_count())
assert solution_printer.solution_count() == 5
StopAfterNSolutionsSampleSat()
|
examples/notebook/sat/stop_after_n_solutions_sample_sat.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# +
domviol=pd.read_csv("domviol.csv")
domviol.head() #this looks weird, single column, needs to be separated into 8 columns
# +
domviol.dataframeName = 'domviol.csv'
nRow, nCol = domviol.shape
if nCol>1:
plural1="s"
else:
plural1=""
if nRow>1:
plural2="s"
else:
plural2=""
print("This dataset has: {} column{} and {} row{}.".format(nCol,plural1, nRow,plural2))
# +
# different values are separated by ";"
#Task
#split the column into 8 columns, use ";" as a separator, str.split method, n=0
new = domviol["kpiId;KpiName;value;DataType;period;StartDate;EndDate;CollectionFrequency"].str.split(";", n = 0, expand = True)
# making separate columns from new data frame
domviol["kpiId"]= new[0]
domviol["KpiName"]= new[1]
domviol["value"]=new[2]
domviol["DataType"]=new[3]
domviol["period"]=new[4]
domviol["StartDate"]=new[5]
domviol["EndDate"]=new[6]
domviol["CollectionFrequency"]=new[7]
# Dropping old Name columns
domviol.drop(columns =["kpiId;KpiName;value;DataType;period;StartDate;EndDate;CollectionFrequency"], inplace = True)
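The `str.split(..., expand=True)` pattern used above, shown on a minimal frame (column names here are illustrative):

```python
import pandas as pd

# Minimal sketch: split one ';'-delimited column into separate columns
df = pd.DataFrame({'raw': ['a;b;c', 'd;e;f']})
parts = df['raw'].str.split(';', expand=True)
parts.columns = ['col1', 'col2', 'col3']
df = df.join(parts).drop(columns=['raw'])
print(df)
```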
# +
# I want to remove the word "Number" from the value column.
# Note: str.rstrip strips a *set of characters*, not a substring, so use replace instead
domviol['value'] = domviol['value'].str.replace('Number', '', regex=False).str.strip()
domviol.head()
# +
# now I would like to sort out the period variable- only select quarters or month by month
#datset with yearly figures
year_list=["2011/2012", "2012/2013" , "2013/2014" , "2014/2015" , "2015/2016" , "2016/2017" , "2017/2018", "2018/2019"]
domviol_yearly=domviol[domviol.period.isin(year_list)]
domviol_yearly[['period', 'value']]
# +
#now do similar for monthly data
# -
|
domestic_violence.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/joaogomes95/Dino_Google/blob/main/Aula_02_Codelab_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="n_igYcmdk6U9"
# # **Exercises to practice:**
# + [markdown] id="rRyZR6ZAlEHU"
# 1. **Sentence on screen** - Implement a program that writes the sentence "O primeiro programa a gente nunca esquece!" on the screen.
#
# 2. **Label** - Write a program that prints your full name on the first line, your address on the second, and your ZIP code and phone number on the third.
#
# 3. **Song lyrics** - Write a program that shows on the screen the lyrics of a song you like (lyrics by <NAME> are forbidden).
#
# 4. **Grade table** - You were hired by a school to build the students' report-card system. As a first step, write a program that produces the following output:
# ```
# ALUNO (A) NOTA
# ========= ====
# ALINE 9.0
# MÁRIO DEZ
# SÉRGIO 4.5
# SHIRLEY 7.0
# ```
#
# 5. **Menu** - Write a program that shows the following menu on the screen:
# ```
# Cadastro de Clientes
# 0 - Fim
# 1 - Inclui
# 2 - Altera
# 3 - Exclui
# 4 - Consulta
# Digite uma opção:
# ```
# After a value is typed for the option, the program shows which option was chosen.
# ```
# Você escolheu a opção '0'.
# ```
#
# 6. **Damage Calculator** - Write a program that receives two values typed by the user:
# - The monster's amount of health (between 10 and 50);
# - The player's attack value per turn (between 5 and 10);
# - Based on the typed values, show how many turns the player will take to defeat the monster.
# - ```
# O jogador irá derrotar o monstro em 8 turnos.
# ```
# + [markdown] id="HqgnPaCGqMPB"
# # Exercises
# + [markdown] id="U-ltrv0jqPBq"
# ## #01 - What about the waiter's 10%?
#
# - Define a variable for the price of a meal that cost R$ 42.54;
#
# - Define a variable for the service fee, which is 10%;
#
# - Define a variable that computes the total of the bill and print it to the console with this formatting: R$ XXXX.XX.
# + id="jbcSv4NFkB1C"
# + [markdown] id="JR3-KgsTqR41"
# ## #02 - What is the change?
#
# * Define a variable for the price of a purchase that cost R$ 100.98;
#
# * Define a variable for the amount the customer paid, R$ 150.00;
#
# * Define a variable that computes the change and print it to the console with the final value rounded.
# + id="hBf3kE_pksAI"
# + [markdown] id="2Uu84Uqtkunt"
# ## #03 - Are you in the prime of life?
#
# * Define a variable for the year of birth;
# * Define a variable for the current year;
# * Define a variable that computes the person's final age;
# * Show a final message with the person's age and the sentence "Você está na flor da idade".
# + id="kyXxJV3nk14Z"
# + [markdown] id="w1T7s2q0p-pt"
# # Mini projects
# + [markdown] id="meUw7PTwqC-Y"
# ## #01 - Currency converter
#
# Create a program that asks the user for an amount in Brazilian real and converts that amount to:
#
# - US DOLLAR,
# - EURO,
# - POUND STERLING,
# - <NAME>,
# - <NAME>,
# - <NAME>.
#
# For this exercise you will need to look up the exchange rate of each currency against the real. Show the result in the format $ XXXX.XX
# + id="q481snSClrXe"
# + [markdown] id="KK9PIcJ4qIF9"
# ## #02 - Rent-increase calculator
#
# Let's build a program that calculates the yearly increase of your rent, in two parts:
#
# ### Part 1
# Your calculator receives the `rent amount` and calculates the increase based on an `IGPM of 31%`. The calculator must show the adjusted rent in the format `R$ XXXX.XX`
#
# **Example:**
# ```
# Valor do aluguel = 1000
# Valor do aluguel reajustado = R$ 1310,00
# ```
# + id="VdjHa9Csl0Lg"
# + [markdown] id="lKTwnuNMl8rP"
# ### Part 2
# Now change your calculator to receive, besides the `rent amount`, the adjustment percentage in the format `XX%`.
#
# **Tip:** Find a way to turn the received percentage into a number in order to do the calculation.
#
# **Example:**
# ```
# Valor do aluguel = 1000
# Percentual do reajuste = 31%
# Valor do aluguel reajustado = R$ 1310,00
# ```
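One possible way to turn the `XX%` string into a multiplier (a sketch; the function name is illustrative):

```python
# Sketch for Part 2: parse a percentage string like "31%" into a number
def reajustar_aluguel(valor, percentual_str):
    percentual = float(percentual_str.rstrip('%')) / 100
    return valor * (1 + percentual)

print(f'R$ {reajustar_aluguel(1000, "31%"):.2f}')  # R$ 1310.00
```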
# + id="aw2Zk_9KmA85" colab={"base_uri": "https://localhost:8080/"} outputId="143cc13a-3060-4a50-8ce3-767e0947180c"
print ("O primeiro programa a gente nunca esquece!")
# + id="hDMHy2bO-YLU"
# + colab={"base_uri": "https://localhost:8080/"} id="fKz2tbf37Iaz" outputId="b381f24c-00bf-47f9-a2b8-9e1c51546bfa"
nome = '<NAME>'
endereco = 'Rua Judite Leite Chaves Carvalho'
cep = 2991010
print(nome)
print(endereco)
print(cep)
# + [markdown] id="Mbq9Fr6vARbS"
# Tabela de notas - Você foi contratado ou contratada por uma escola pra fazer o sistema de boletim dos alunos. Como primeiro passo, escreva um programa que produza a seguinte saída:
#
# ALUNO (A) NOTA
# ========= ====
# ALINE 9.0
# MÁRIO DEZ
# SÉRGIO 4.5
# SHIRLEY 7.0
# + id="9bt8Rb31-aQX"
"""Atirei o pau no gato
Mas o gato não morreu
<NAME> admirou-se
Do berro, do berro que o gato deu
Miau!
Atirei o pau no gato
Mas o gato não morreu
<NAME> admirou-se
Do berro, do berro que o gato deu
Miau!
Miau miau miau
Miau miau miau
Miau miau miau
Miau miau miau
Miau miau miau
Miau miau miau
Miau miau miau
Atirei o pau no gato
Mas o gato não morreu
<NAME> admirou-se
Do berro, do berro que o gato deu
Miau!
Não atire o pau no gato
Por que isso
Não se faz
O gatinho é nosso amigo
Não devemos maltratar os animais
Jamais!"""
# + [markdown] id="8RivvvPdAU5_"
# Tabela de notas - Você foi contratado ou contratada por uma escola pra fazer o sistema de boletim dos alunos. Como primeiro passo, escreva um programa que produza a seguinte saída:
#
# ALUNO (A) NOTA
# ========= ====
# ALINE 9.0
# MÁRIO DEZ
# SÉRGIO 4.5
# SHIRLEY 7.0
# + colab={"base_uri": "https://localhost:8080/"} id="6jfrUkKAAKb-" outputId="a80b98d9-f015-4dac-fca3-54fad4d4009b"
aluno1 = input('Digite o nome do aluno: ')
nota1 = input('Digite a nota do aluno: ')
aluno2 = input('Digite o nome do aluno: ')
nota2 = input('Digite a nota do aluno: ')
aluno3 = input('Digite o nome do aluno: ')
nota3 = input('Digite a nota do aluno: ')
aluno4 = input('Digite o nome do aluno: ')
nota4 = input('Digite a nota do aluno: ')
print('ALUNO (A) NOTA \n ========= ====')
print(f'{aluno1}\t{nota1}\n{aluno2}\t{nota2}\n{aluno3}\t{nota3}\n{aluno4}\t{nota4}')
# + id="OP8Jc634IMt4"
"""Menu - Write a program that shows the following menu on the screen:
Cadastro de Clientes
0 - Fim
1 - Inclui
2 - Altera
3 - Exclui
4 - Consulta
Digite uma opção: """
# + colab={"base_uri": "https://localhost:8080/"} id="mAxkRI41IB0N" outputId="a35ac3f9-8834-45f9-f567-1cb9f804cb9a"
menu = ("""Cadastro de Clientes
0 - Fim
1 - Inclui
2 - Altera
3 - Exclui
4 - Consulta""")
print(menu)
opcao = input('Digite uma opção: ')
print(f"Você escolheu a opção '{opcao}'.")
# + id="93vWp_P-PG30"
"""Damage Calculator - Write a program that receives two values typed by the user:
The monster's amount of health (between 10 and 50);
The player's attack value per turn (between 5 and 10);
Based on the typed values, show how many turns the player will take to defeat the monster.
O jogador irá derrotar o monstro em 8 turnos."""
# + colab={"base_uri": "https://localhost:8080/"} id="g22gy803PHaE" outputId="5731e04e-902a-4d80-c24b-e2044c097fda"
import math

vidamonstro = int(input('Vida do Monstro: '))
valorataque = int(input('Valor do ataque: '))
rodadas = math.ceil(vidamonstro / valorataque)  # a partial hit still costs a full turn
print(f'O jogador irá derrotar o monstro em {rodadas} turnos.')
# + id="S3W8A0ybT-72"
"""#01 - What about the waiter's 10%?
Define a variable for the price of a meal that cost R$ 42.54;
Define a variable for the service fee, which is 10%;
Define a variable that computes the total of the bill and print it to the console with this formatting: R$ XXXX.XX."""
# + colab={"base_uri": "https://localhost:8080/"} id="qpsqYtPzUBC-" outputId="04ee80c9-ad35-49d2-a088-c0aca086e2b2"
valorref = 42.54
taxaservico = 1.1
total = valorref * taxaservico
print(f'R${total:.2f}')
# + id="iKUl1cM5WazI"
"""#02 - What is the change?
Define a variable for the price of a purchase that cost R$ 100.98;
Define a variable for the amount the customer paid, R$ 150.00;
Define a variable that computes the change and print it to the console with the final value rounded."""
# + colab={"base_uri": "https://localhost:8080/"} id="F5nN_X9lWcSG" outputId="8d009817-d58d-4ce2-a4e8-2833e58f3716"
valorcompra = 100.98
valorpago = 150
troco = round(valorpago - valorcompra)
print(troco)
# + id="-gjxC3Z4Yctw"
"""#03 - Are you in the prime of life?
Define a variable for the year of birth;
Define a variable for the current year;
Define a variable that computes the person's final age;
Show a final message with the person's age and the sentence "Você está na flor da idade"."""
# + colab={"base_uri": "https://localhost:8080/"} id="q3oi4RRuYeGz" outputId="f265bb49-e29f-42a6-a28a-f89553cca0d1"
anonasc = int(input('Digite seu ano de nascimento: '))
anoatual = 2021
idade = anoatual - anonasc
print(f'Você tem {idade} anos, e está na flor da idade!')
# + id="RDxynwNcZwA-"
"""#01 - Currency converter
Create a program that asks the user for an amount in Brazilian real and converts that amount to:
US DOLLAR,
EURO,
POUND STERLING,
CANADIAN DOLLAR,
ARGENTINE PESO,
CHILEAN PESO.
For this exercise you will need to look up the exchange rate of each currency against the real. Show the result in the format $ XXXX.XX"""
# + colab={"base_uri": "https://localhost:8080/"} id="CUJD8YnnZxn_" outputId="b14909d4-f5d8-45df-a8e8-73d66032f384"
real = float(input('Digite o valor em real: '))
# illustrative rates; look up current quotes for real use
dolar = real * 0.20
euro = real * 0.16
libra = real * 0.14
dolarcanadense = real * 0.24
pesoargentino = real * 18.85
pesochileno = real * 142.50
print(f'Dolar: $ {dolar:.2f}')
print(f'Euro: $ {euro:.2f}')
print(f'Libra: $ {libra:.2f}')
print(f'Dolar Canadense: $ {dolarcanadense:.2f}')
print(f'Peso Argentino: $ {pesoargentino:.2f}')
print(f'Peso Chileno: $ {pesochileno:.2f}')
# + id="cOMhv9qsc6oC"
|
Aula_02_Codelab_1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Accessing Landsat 8 data on Azure
#
# This notebook demonstrates basic access to Landsat 8 data on Azure, using the NASA CMR API to query for tiles, then accessing the tiles on Azure blob storage. Because Landsat data are in preview, the user needs to provide storage credentials.
#
# Landsat data are stored in the West Europe Azure region, so this notebook will run most efficiently on Azure compute located in West Europe. We recommend that substantial computation depending on Landsat data also be situated in West Europe. You don't want to download hundreds of terabytes to your laptop! If you are using Landsat data for environmental science applications, consider applying for an [AI for Earth grant](http://aka.ms/ai4egrants) to support your compute requirements.
# ### Imports
# +
import re
import os
import datetime
import progressbar
import tempfile
import urllib
import shutil
import numpy as np
import matplotlib.pyplot as plt
import rasterio
from osgeo import gdal

import rasterio.features
from azure.storage.blob import ContainerClient
from shapely.geometry import Point, Polygon
from cmr import GranuleQuery
# -
# ### Constants
# +
# Let's take a look at Seattle, which is at 47.6062° N, 122.3321° W
query_lon = -122.3321
query_lat = 47.6062
# Summer 2020... because cloudy images don't look good in demos, and
# it's cloudy in Seattle in the Winter
query_start_date = datetime.datetime(2020, 7, 1, 0, 0, 0)
query_end_date = datetime.datetime(2020, 9, 1, 0, 0, 0)
# This should be a text file with a SAS token for the Landsat container on the first line
sas_file = r'c:\git\ai4edev\datamanagement\test_data\landsat_ro_sas.txt'
temp_dir = os.path.join(tempfile.gettempdir(),'landsat')
os.makedirs(temp_dir,exist_ok=True)
# Maps instrument names as they appear in CMR results to the short names used in filenames
instrument_mapping = {
'Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) Collection 1 V1':
'oli-tirs'}
# Normalization value for rendering
norm_value = 35000
# Select a thumbnail for thumbnail rendering (six are available for Landsat C2 files)
thumbnail_index = -3
# When rendering whole images, how much should we downscale?
dsfactor = 5
# Sensor to query the CMR for
#
# Relevant values are:
#
# Landsat1-5_MSS_C1
# Landsat4-5_TM_C1
# Landsat7_ETM_Plus_C1
# Landsat_8_OLI_TIRS_C1
#
query_short_name = 'Landsat_8_OLI_TIRS_C1'
# -
# ### Azure storage constants
# +
assert os.path.isfile(sas_file)
lines = []
with open(sas_file,'r') as f:
lines = f.readlines()
assert len(lines) >= 1
sas_token = lines[0].strip()
storage_account_name = 'landsateuwest'
container_name = 'landsat-c2'
storage_account_url = 'https://' + storage_account_name + '.blob.core.windows.net/'
container_client = ContainerClient(account_url=storage_account_url,
container_name=container_name,
credential=sas_token)
# -
# ### Search for granules
# +
api = GranuleQuery()
granules = api.short_name(query_short_name).point(query_lon,query_lat).temporal(query_start_date, query_end_date).get()
print('Found {} granules:'.format(len(granules)))
for granule in granules:
print(granule['title'])
# -
# ### Grab the first granule
# +
granule = granules[0]
# E.g. 'LC08_L1TP_047027_20200103_20200113_01_T1'
granule_id = granule['title']
print('Accessing tile {}'.format(granule_id))
# -
# ### Map this to our Azure blob paths
# +
level = 'level-2'
category = 'standard'
sensor = instrument_mapping[granule['dataset_id']]
# E.g., 2020-01-03T19:01:46.557Z
date = granule['time_start']
year = date[0:4]
month = date[5:7]
day = date[8:10]
path = granule_id[10:13]
row = granule_id[13:16]
row_folder = '/'.join([level,category,sensor,year,path,row])
# E.g. 01152004
granule_date_string = granule_id[11:19]
granule_month = granule_date_string[0:2]
granule_day = granule_date_string[2:4]
granule_year = granule_date_string[4:8]
# E.g. LC08_L1TP_047027_20200103
scene_prefix = granule_id[0:25]
# E.g. LC08_L2SP_047027_20200103
scene_prefix = scene_prefix[0:5] + 'L2SP' + scene_prefix[9:]
azure_scene_prefix = row_folder + '/' + scene_prefix
# -
# ### Enumerate image files
# +
generator = container_client.list_blobs(name_starts_with=azure_scene_prefix)
image_paths = [blob.name for blob in generator if blob.name.endswith('.TIF')]
print('Found {} images:'.format(len(image_paths)))
for fn in image_paths:
print(fn.split('/')[-1])
# -
# ### Convert to Azure URLs
azure_urls = [storage_account_url + container_name + '/' + p + sas_token for p in image_paths]
# ### Make GDAL happy about SAS URLs
# GDAL gets a little unhappy with SSL access to Azure via SAS URLs in some situations;
# this fixes this issue.
#
# From:
#
# http://gpsinfo.org/qgis-opening-remote-files-with-gdal-over-https-fails/
gdal.SetConfigOption("GDAL_HTTP_UNSAFESSL", "YES")
gdal.VSICurlClearCache()
# ### Choose bands for an RGB composite
# From https://www.usgs.gov/media/images/common-landsat-band-rgb-composites
rgb_bands = ['B4','B3','B2']
rgb_urls = []
for band_name in rgb_bands:
rgb_urls.append([s for s in azure_urls if band_name + '.TIF' in s][0])
# ### Render previews without reading the whole file
# +
thumbnail_data = []
# url = rgb_urls[0]
for url in rgb_urls:
# From:
#
# https://automating-gis-processes.github.io/CSC/notebooks/L5/read-cogs.html
with rasterio.open(url) as raster:
# List of overviews from biggest to smallest
oviews = raster.overviews(1)
# Retrieve a small-ish thumbnail (six are available for Landsat files)
decimation_level = oviews[thumbnail_index]
h = int(raster.height/decimation_level)
w = int(raster.width/decimation_level)
thumbnail_channel = raster.read(1, out_shape=(1, h, w)) / norm_value
thumbnail_data.append(thumbnail_channel)
rgb = np.dstack((thumbnail_data[0],thumbnail_data[1],thumbnail_data[2]))
np.clip(rgb,0,1,rgb)
plt.imshow(rgb)
# -
# ### Download support functions
# +
max_path_len = 255
class DownloadProgressBar():
"""
https://stackoverflow.com/questions/37748105/how-to-use-progressbar-module-with-urlretrieve
"""
def __init__(self):
self.pbar = None
def __call__(self, block_num, block_size, total_size):
if not self.pbar:
self.pbar = progressbar.ProgressBar(max_value=total_size)
self.pbar.start()
downloaded = block_num * block_size
if downloaded < total_size:
self.pbar.update(downloaded)
else:
self.pbar.finish()
def download_url(url, destination_filename=None, progress_updater=None, force_download=False):
"""
Download a URL to a temporary file
"""
# This is not intended to guarantee uniqueness, we just know it happens to guarantee
# uniqueness for this application.
if destination_filename is None:
# url_as_filename = url.replace('://', '_').replace('/', '_').replace('?','_').replace('&',
url_without_sas = url.split('?', 1)[0]
url_as_filename = re.sub(r'\W+', '', url_without_sas)
n_folder_chars = len(temp_dir)
if len(url_as_filename) + n_folder_chars > max_path_len:
print('Warning: truncating filename target to {} characters'.format(max_path_len))
url_as_filename = url_as_filename[-1*(max_path_len-n_folder_chars):]
destination_filename = \
os.path.join(temp_dir,url_as_filename)
if (not force_download) and (os.path.isfile(destination_filename)):
url_no_sas = url.split('?')[0]
print('Bypassing download of already-downloaded file {}'.format(os.path.basename(url_no_sas)))
return destination_filename
print('Downloading file {} to {}'.format(os.path.basename(url),destination_filename),end='')
urllib.request.urlretrieve(url, destination_filename, progress_updater)
assert(os.path.isfile(destination_filename))
nBytes = os.path.getsize(destination_filename)
print('...done, {} bytes.'.format(nBytes))
return destination_filename
# -
# ### Download our three (R,G,B) tiles
# +
filenames = []
for image_url in rgb_urls:
fn = download_url(url=image_url,destination_filename=None,progress_updater=DownloadProgressBar())
filenames.append(fn)
# -
# ### Render composite image
# +
image_data = []
# fn = filenames[0]
for fn in filenames:
with rasterio.open(fn,'r') as raster:
h = int(raster.height/dsfactor)
w = int(raster.width/dsfactor)
print('Resampling to {},{}'.format(h,w))
band_array = raster.read(1, out_shape=(1, h, w))
raster.close()
band_array = band_array / norm_value
image_data.append(band_array)
rgb = np.dstack((image_data[0],image_data[1],image_data[2]))
np.clip(rgb,0,1,rgb)
plt.imshow(rgb)
# -
# ### Clean up temporary files
shutil.rmtree(temp_dir)
|
data/landsat-8.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as ureq
from selenium import webdriver
import time
import re
url = "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/e0c252b08cb179f9852573620061603e!OpenDocument#General"
uClient = ureq(url)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
# # Scrape core courses
# ## 1. Collect all course codes and names
mech_start = page_soup.find("font", text="Bachelor of Applied Science in Mechanical Engineering")
mech_start.text
mech_start.findNext().findNext().findNext().findNext().findNext().text
# +
#get all course codes and names for MechEng core courses
container = mech_start
course_names = []
course_codes = []
counter = 0
while container.text != "Bachelor of Applied Science in Mechanical Engineering with Aerospace Option":
    if bool(re.match(r"[A-Z]{4}-[0-9]{4}\. [A-Za-z -]+", container.text)):
course_names.append(container.text.split(". ")[1])
course_codes.append(container.text.split(". ")[0])
counter += 1
container = container.findNext()
print(counter)
# -
course_codes
course_names
url_dict = {
"MATH": "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/c4d12a22771cfa81852585310007c4bd!OpenDocument",
"MECH": "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/9b72ca039c14a10585257364004c63b8!OpenDocument",
"GENG": "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/9879b7f56ea6fde68525776d00585601!OpenDocument",
"CHEM": "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/523d8435ae5f886785257364004a511c!OpenDocument",
"PHYS": "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/fac10bc71a59714285257364004e518b!OpenDocument",
"INDE": "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/9b72ca039c14a10585257364004c63b8!OpenDocument"
}
# ## 2. Test run - try to find the first course and scrape it
# +
uClient = ureq(url_dict["MATH"])
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
containers = page_soup.findAll("font")
for container in containers:
if container.text.split(".")[0].strip() == "MATH-1720":
break
container
# -
container.findNext().findNext().findNext().findNext()
containers.index(container)
# ## 3. Find the starting point of each course description, then compile (some descriptions are broken over several containers...) and extract
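The course-heading pattern the loop below relies on can be checked in isolation; a raw string avoids escaping surprises (the sample heading here is illustrative):

```python
import re

# A course heading looks like "MATH-1720. Some Course Name"
pattern = r"[A-Z]{4}-[0-9]{4}\. [A-Za-z -]+"
text = "MATH-1720. Differential Calculus"
assert re.match(pattern, text) is not None

# The code/name split used in the scraper
code, name = text.split(". ", 1)
print(code, '|', name)
```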
# +
#get the course descriptions for all MechEng Core courses
course_descs = []
counter = 0
for code in course_codes:
subject = code.split("-")[0]
uClient = ureq(url_dict[subject])
time.sleep(1)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
containers = page_soup.findAll("font")
for container in containers:
if container.text.split(".")[0].strip() == code:
course_desc = ""
i = containers.index(container) + 1
while i < len(containers) and bool(re.match("[A-Z]{4}-[0-9]{4}", containers[i].text.strip())) == False:
if (course_names[course_codes.index(code)] not in containers[i].text or len(containers[i].text)>len(course_names[course_codes.index(code)])+15) and len(containers[i].text) > 0:
course_desc += containers[i].text.strip()
i += 1
course_descs.append(course_desc)
print("scraped ", code)
counter += 1
break
counter
# -
# ## 4. Inspect, clean, and store in dataframe
course_descs
course_descs[21] = course_descs[21][20:]
# +
import pandas as pd
df = pd.DataFrame({
"Course Number": course_codes,
"Course Name": course_names,
"Course Description": course_descs
})
df
# -
df = df.drop([36], axis = 0) #duplicate course
df
# # Scrape elective courses
# +
#scrape the rest of mech courses as they are elective courses
url = "http://web4.uwindsor.ca/units/registrar/calendars/undergraduate/fall2020.nsf/982f0e5f06b5c9a285256d6e006cff78/9b72ca039c14a10585257364004c63b8!OpenDocument"
uClient = ureq(url)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
# -
# ## 1. scrape all mech courses that haven't been scraped yet
# +
course_names2 = []
course_codes2 = []
course_descs2 = []
counter = 0
look_for_desc = False
containers = page_soup.findAll("font")
for container in containers:
    if not look_for_desc and bool(re.match(r"MECH-[24683]{2}[0-9]{2}\.\D", container.text)):
current_code = container.text.split(".")[0].strip()
if current_code not in course_codes:
course_codes2.append(current_code)
course_names2.append(container.text.split(".")[1].strip())
look_for_desc = True
print(course_codes2[-1])
elif look_for_desc:
course_desc2 = ""
i = containers.index(container)
while i < len(containers) and bool(re.match("MECH-[0-9]{4}", containers[i].text.strip())) == False and containers[i].text not in "MECHANICAL: APPROVED COURSES TO FULFILL NON-SPECIFIED ENGINEERING COURSE REQUIREMENTS AEROSPACE ENGINEERING AUTOMOTIVE ENGINEERING ENGINEERING MATERIALS INDUSTRIAL AND MANUFACTURING SYSTEMS ENGINEERING":
if len(containers[i].text) > 0:
course_desc2 += containers[i].text.strip()
i += 1
course_descs2.append(course_desc2)
counter += 1
look_for_desc = False
counter
# -
course_descs2
len(course_descs2)
# ## 2. Clean and store in dataframe
# +
df2 = pd.DataFrame({
"Course Number": course_codes2,
"Course Name": course_names2,
"Course Description": course_descs2
})
df2
# -
df2["Course Description"][7] = df2["Course Description"][7][41:]
df2["Course Name"][7] = "Heating, Ventilation, and Air Conditioning"
df2
# ## 3. Scrape some missing courses
# +
#scrape the 36xx and 46xx courses - they are coded differently than the other elective courses
course_names3 = []
course_codes3 = []
course_descs3 = []
counter = 0
look_for_desc = False
containers = page_soup.findAll("font")
for container in containers:
if not look_for_desc and bool(re.match("MECH-[346]{2}[0-9]{2}", container.text)):
current_code = container.text.strip()
if current_code not in course_codes and current_code not in course_codes2:
course_codes3.append(current_code)
try:
course_names3.append(container.findNext("font").text.split(".")[1].strip())
except IndexError:
course_names3.append("")
look_for_desc = True
print(course_codes3[-1])
elif look_for_desc:
course_desc3 = ""
i = containers.index(container) + 1
while i < len(containers) and bool(re.match("MECH-[0-9]{4}", containers[i].text.strip())) == False and containers[i].text not in "MECHANICAL: APPROVED COURSES TO FULFILL NON-SPECIFIED ENGINEERING COURSE REQUIREMENTS AEROSPACE ENGINEERING AUTOMOTIVE ENGINEERING ENGINEERING MATERIALS INDUSTRIAL AND MANUFACTURING SYSTEMS ENGINEERING":
if len(containers[i].text) > 0:
course_desc3 += containers[i].text.strip()
i += 1
course_descs3.append(course_desc3)
counter += 1
look_for_desc = False
counter
# -
# ## 4. Clean and append all dataframes and write to CSV
# Only the first 5 and the last one haven't been scraped yet; keep just those
course_names3 = course_names3[:5] + [course_names3[-1]]
course_codes3 = course_codes3[:5] + [course_codes3[-1]]
course_descs3 = course_descs3[:5] + [course_descs3[-1]]
# +
df3 = pd.DataFrame({
"Course Number": course_codes3,
"Course Name": course_names3,
"Course Description": course_descs3
})
df3
# -
df = pd.concat([df, df2, df3], ignore_index=True)  # DataFrame.append is removed in pandas >= 2.0
df
df.to_csv('NEW_Windsor_MechEng_Core_and_Electives_(AllYears)_Courses.csv', index = False)
# +
#some manual fix: mech 4259 edit out the MECHANICAL: APPROVED COURSES... part
#geng 2200 remove course name at beginning of course description
|
Web-Scraping Scripts and Data/Accredited Canadian English Undergrad MechEng Programs/Windsor/WS_Windsor_MechEng_Core_and_Electives_(AllYears).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## The Basics
# Supervised learning consists of getting <span style='color:red'>**inputs** to **match** a certain **output**</span>. To achieve this, **we give** the artificial neural network various inputs and compare its output to our expected output, updating the weights according to how closely the two match.
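# As a minimal, made-up illustration of that loop, here is a single linear weight trained by gradient descent on invented (input, expected output) pairs:

```python
# Learn y = 2x from a few (input, expected output) pairs with one weight.
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # initial weight
lr = 0.05  # learning rate (arbitrary choice)

for _ in range(200):
    for x, target in pairs:
        output = w * x            # forward pass
        error = output - target   # compare to the expected output
        w -= lr * error * x       # update the weight against the error

print(round(w, 3))  # converges toward 2.0
```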
|
ANN_Basics/.ipynb_checkpoints/Supervised_Learning_Intro-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "skip"}
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import interact
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
plt.style.use('seaborn-talk')
# + [markdown] slideshow={"slide_type": "slide"}
# _Connect code and reports with_
# <br>
#
# <img src="https://jupyter.readthedocs.io/en/latest/_static/_images/jupyter.svg" alt="title" style="float: left;">
# + [markdown] slideshow={"slide_type": "slide"}
# ## Typical guidelines for keeping a notebook of wet-lab work
# + [markdown] slideshow={"slide_type": "fragment"}
# <font size="5">
# <ol>
# <li>Record everything you do in the lab, even if you are following a published procedure.</li>
# <li>If you make a mistake, put a line through the mistake and write the new information next to it.</li>
# <li>Use a ball point pen so that marks will not smear nor will they be erasable.</li>
# <li>Use a bound notebook so that torn-out pages would be visible.</li>
# <li>When you finish a page, put a corner-to-corner line through any blank parts that could still be used for data entry.</li>
# <li>All pages must be pre-numbered.</li>
# <li>Write a title for each and every new set of entries.</li>
# <li>It is critical that you enter all procedures and data directly into your notebook in a timely manner.</li>
# <li>Properly introduce and summarize each experiment.</li>
# <li>The investigator and supervisor must sign each page.</li>
# <br>
# etc...
# </ol>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Typical guidelines for keeping a notebook of dry-lab work
# + [markdown] slideshow={"slide_type": "fragment"}
# 
# + [markdown] slideshow={"slide_type": "slide"}
# <font size="5">
#
# **Literate programming**
#
# >Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do. - _<NAME> (1984)_
#
# **Literate computing**
#
# >A literate computing environment is one that allows users not only to execute commands interactively, but also to store in a literate document the results of these commands along with figures and free-form text. - _<NAME> and <NAME> (2014)_
# + [markdown] slideshow={"slide_type": "subslide"}
# **Wolfram Mathematica notebook (1987)**
#
# <img src="https://content.wolfram.com/uploads/sites/17/2019/08/1987_early_programs_2.png" alt="title" width="500" style="float:left;">
# + [markdown] slideshow={"slide_type": "slide"}
# ## The Jupyter notebook
# + [markdown] slideshow={"slide_type": "fragment"}
# The Jupyter Notebook is a web application for **interactive** data science and scientific computing.
# + [markdown] slideshow={"slide_type": "fragment"}
# In-browser editing for code, with automatic syntax highlighting, indentation, and tab completion/introspection.
# + [markdown] slideshow={"slide_type": "subslide"}
# Document your work in Markdown
# + [markdown] slideshow={"slide_type": "fragment"}
# # Penguin data analysis
# Here we will investigate the [Penguin dataset](https://github.com/allisonhorst/palmerpenguins).
#
# -----
# The species included in this set are:
# - _Adelie_
# - _Chinstrap_
# - _Gentoo_
# + [markdown] slideshow={"slide_type": "subslide"}
# Execute code directly from the browser, with results attached to the code which generated them
# + slideshow={"slide_type": "fragment"}
data = sns.load_dataset("penguins")
data.groupby("species").mean()
# + [markdown] slideshow={"slide_type": "subslide"}
# Generate plots directly in the browser and/or save to file.
# + slideshow={"slide_type": "skip"}
sns.set_context("paper", rc={"axes.labelsize":6})
# + slideshow={"slide_type": "fragment"}
ax = sns.pairplot(data, hue="species", height=1,
plot_kws=dict(s=20, linewidth=0.5),
diag_kws=dict(linewidth=0.5))
# + slideshow={"slide_type": "skip"}
# %load_ext rpy2.ipython
# + [markdown] slideshow={"slide_type": "subslide"}
# Mix and match languages in addition to `python` (_e.g._ `R`, `bash`, `ruby`)
# + slideshow={"slide_type": "fragment"} language="R"
# x <- 1:12
# sample(x, replace = TRUE)
# + slideshow={"slide_type": "fragment"} language="bash"
# uname -v
# + [markdown] slideshow={"slide_type": "subslide"}
# Create interactive widgets
# + slideshow={"slide_type": "fragment"}
def f(palette, x, y):
plt.figure(1, figsize=(3,3))
ax = sns.scatterplot(data=data, x=x, y=y, hue="species", palette=palette)
ax.legend(bbox_to_anchor=(1,1))
_ = interact(f, palette=["Set1","Set2","Dark2","Paired"],
y=["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"],
x=["bill_depth_mm", "bill_length_mm", "flipper_length_mm", "body_mass_g"])
# + [markdown] slideshow={"slide_type": "slide"}
# ### Notebook basics
# + [markdown] slideshow={"slide_type": "fragment"}
# - Runs as a local web server
# - Load/save/manage notebooks from the menu
#
# <img src="https://github.com/NBISweden/workshop-reproducible-research/raw/main/pages/images/jupyter_basic_update.png" alt="title" width="600" style="float:left;padding:5px;">
# + [markdown] slideshow={"slide_type": "subslide"}
# The notebook itself is a JSON file
# + slideshow={"slide_type": "fragment"}
# !head -20 jupyter.ipynb
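# A minimal sketch of that structure (a hand-built stand-in, not the actual file): the notebook is one JSON object with a "cells" list plus format metadata.

```python
import json

# Hypothetical two-cell notebook, mirroring the on-disk layout of an .ipynb file.
nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# A title"]},
        {"cell_type": "code", "source": ["print('hi')"], "outputs": []},
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}

text = json.dumps(nb)  # what actually sits in the .ipynb file
cell_types = [c["cell_type"] for c in json.loads(text)["cells"]]
print(cell_types)  # ['markdown', 'code']
```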
# + [markdown] slideshow={"slide_type": "slide"}
# ## Sharing is caring
# + [markdown] slideshow={"slide_type": "fragment"}
# - Put the notebook on GitHub/Bitbucket and it will be rendered there...
# + [markdown] slideshow={"slide_type": "fragment"}
# - ... or export to one of many different formats, _e.g._ HTML, PDF, code, **slides** etc. (this presentation is a [Jupyter notebook](https://github.com/NBISweden/workshop-reproducible-research/blob/main/lectures/jupyter/jupyter.ipynb))
# + [markdown] slideshow={"slide_type": "subslide"}
# Or paste a link to any Jupyter notebook at [nbviewer.jupyter.org](https://nbviewer.jupyter.org) and it will be rendered for you.
# + slideshow={"slide_type": "fragment"} language="html"
# <!-- MRSA Notebook that you'll work on in the tutorial -->
# <!-- https://github.com/NBISweden/workshop-reproducible-research/blob/main/tutorials/jupyter/supplementary_material.ipynb -->
# <iframe src="https://nbviewer.jupyter.org/" height="800" width="800"></iframe>
# + [markdown] slideshow={"slide_type": "subslide"}
# Or generate interactive notebooks using [Binder](https://mybinder.org)
# + slideshow={"slide_type": "subslide"}
# %%HTML
<iframe src="https://mybinder.org" height="800" width="800"></iframe>
# + [markdown] slideshow={"slide_type": "subslide"}
# Binder can generate a _'Binder badge'_ for your repo. Clicking the badge launches an interactive version of your repository or notebook.
#
# [](https://mybinder.org/v2/gh/NBISweden/workshop-reproducible-research/binder?filepath=lectures%2Fjupyter%2Fjupyter.ipynb)
# + [markdown] slideshow={"slide_type": "slide"}
# # Jupyter Lab
# ---
# _JupyterLab is the next-generation web-based user interface for Project Jupyter._
# + [markdown] slideshow={"slide_type": "subslide"}
# <img src="https://jupyterlab.readthedocs.io/en/stable/_images/jupyterlab.png" width="75%">
#
# <font size="5">
#
# - full-fledged IDE, similar to e.g. Rstudio.
# - Tab views, Code consoles, Show output in a separate tab, Live rendering of edits
#
# `conda install -c conda-forge jupyterlab`
# + [markdown] slideshow={"slide_type": "slide"}
# # <img src="https://jupyterbook.org/_static/logo-wide.svg" style="float: left;" width="240">
#
# + [markdown] slideshow={"slide_type": "fragment"}
# _lets you build an online book using a collection of Jupyter Notebooks and Markdown files_
#
# - Interactivity
# - Citations
# - Build and host it online with GitHub/GitHub Pages...
# - or locally on your own laptop
# + [markdown] slideshow={"slide_type": "slide"}
# # For the tutorial:
#
# ---
#
# use `jupyter notebook` or `jupyter lab`
# + [markdown] slideshow={"slide_type": "slide"}
# # Questions?
|
lectures/jupyter/jupyter.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import shutil
print(os.getcwd())
base = 'C:\\Users\\AliHaider\\Desktop\\ML_master\\Github code\\Dog Species Classification\\images\\Images'
os.chdir(base)
print(os.getcwd())
directories = os.listdir()
print(directories)
for directory in directories:
    class_name = directory.split('-')[-1]  # e.g. "n02085620-Chihuahua" -> "Chihuahua"
    files = os.listdir(os.path.join(base, directory))
    for index, file in enumerate(files):
        # Move each image up into the base folder, renamed to <class>_<index>.jpg
        shutil.move(os.path.join(base, directory, file),
                    os.path.join(base, class_name + '_' + str(index) + '.jpg'))
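# The same flatten-and-rename step can also be sketched with pathlib, relative to a base directory (the commented example path is the hypothetical dataset location):

```python
from pathlib import Path
import shutil

def flatten_images(base: Path) -> None:
    """Move files out of per-breed subfolders into `base`, renamed to <breed>_<index>.jpg."""
    for directory in sorted(p for p in base.iterdir() if p.is_dir()):
        class_name = directory.name.split("-")[-1]  # e.g. "n02085620-Chihuahua" -> "Chihuahua"
        for index, file in enumerate(sorted(directory.iterdir())):
            shutil.move(str(file), str(base / f"{class_name}_{index}.jpg"))

# flatten_images(Path("images/Images"))  # hypothetical dataset location
```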
|
Directory_structure_change.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--
# # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# #
# # Licensed under the Apache License, Version 2.0 (the "License").
# # You may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# #
# # http://www.apache.org/licenses/LICENSE-2.0
# #
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.
# -->
#
# # Sample notebook to build podsettings for GPU images from the lake user teamspace.
#
# ## Content
# 1. [Configuration](#Configuration)
# 2. [Build Podsetting](#Building-the-PodSetting-for-the-Image)
#
#
# ### Configuration
image_name = 'gpu-jupyter-user'
folder_name = 'aws-orbit_jupyter-user'
# Get our orbit env and team names
# env_name = %env AWS_ORBIT_ENV
# team_name = %env AWS_ORBIT_TEAM_SPACE
# user_name = %env USERNAME
# namespace = %env AWS_ORBIT_USER_SPACE
# account = %env ACCOUNT_ID
# region=%env AWS_DEFAULT_REGION
(env_name,team_name, user_name, namespace, account, region)
# Repository has an image (see below). Users are only able to manipulate ECR repos that start with 'orbit-{env_name}/users/'
image = f"{account}.dkr.ecr.{region}.amazonaws.com/orbit-regression/users/{image_name}"
image
# ### Building the PodSetting for the Image
# +
import json
customnameGPU = "orbit-gpu-image-sample-ps-"+team_name
description= " Machine Learning Image GPU - 2 CPU + 4G MEM + 1 GPU "
podsetting={
"name": customnameGPU,
"description": description,
"image": image,
"resources":{
"limits":{
"cpu": "2.0",
"memory": "4Gi",
"nvidia.com/gpu": "1"
},
"requests":{
"cpu": "2.0",
"memory": "4Gi",
"nvidia.com/gpu": "1"
}
},
"node-group":"primary-gpu",
"env":[
{
"name":"source",
"value":"regressiontests"
}
]
}
with open("podsetting_data_gpu.json", 'w') as f:
json.dump(podsetting, f)
### NOTE: "node-group":"primary-gpu" can be replaced with "instance-type":"g4dn.xlarge"
# -
# !orbit build podsetting --help
# Create the podsetting
# !orbit build podsetting --debug -e $env_name -t $team_name podsetting_data_gpu.json
import time
time.sleep(3)
# !kubectl get podsettings -n$team_name|grep $customnameGPU
# !kubectl get poddefault -n$team_name|grep $customnameGPU
# !orbit delete podsetting --help
# !orbit delete podsetting -n $customnameGPU -t $team_name --debug
import time
time.sleep(3)
# !kubectl get podsettings -n$team_name|grep $customnameGPU
# !kubectl get poddefault -n$team_name|grep $customnameGPU
|
samples/notebooks/I-Image/Example-6-gpu-podsetting.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # QCoDeS Example with Tektronix Keithley 7510 Multimeter
#
# In this example we will show how to use a few basic functions of the Keithley 7510 DMM. We attached a 1 kOhm resistor to the front terminals, with no source current or voltage.
#
# For more detail about the 7510 DMM, please see the User's Manual: https://www.tek.com/digital-multimeter/high-resolution-digital-multimeters-manual/model-dmm7510-75-digit-graphical-sam-0, or Reference Manual: https://www.tek.com/digital-multimeter/high-resolution-digital-multimeters-manual-9
from qcodes.instrument_drivers.tektronix.keithley_7510 import Keithley7510
dmm = Keithley7510("dmm_7510", 'USB0::0x05E6::0x7510::04450363::INSTR')
# # To reset the system to default settings:
dmm.reset()
# # To perform measurement with different sense functions:
# When first turned on, the default sense function is for DC voltage
dmm.sense.function()
# to perform the measurement:
dmm.sense.voltage()
# There'll be an error if we try to call other functions, such as current:
try:
dmm.sense.current()
except AttributeError as err:
print(err)
# To switch between functions, do the following:
dmm.sense.function('current')
dmm.sense.function()
dmm.sense.current()
# And of course, once the sense function is changed to 'current', the user can't make voltage measurements:
try:
dmm.sense.voltage()
except AttributeError as err:
print(err)
# The available functions in the driver now are 'voltage', 'current', 'Avoltage', 'Acurrent', 'resistance', and 'Fresistance', where 'A' means 'AC', and 'F' means 'Four-wire'
try:
dmm.sense.function('ac current')
except ValueError as err:
print(err)
# # To set measurement range (positive full-scale measure range):
# By default, the auto range is on
dmm.sense.auto_range()
# We can change it to 'off' as following
dmm.sense.auto_range(0)
dmm.sense.auto_range()
# Note: this auto range setting applies only to the currently selected sense function, which is 'current'
dmm.sense.function()
# If we switch to another function, its auto range is still on by default
dmm.sense.function('voltage')
dmm.sense.function()
dmm.sense.auto_range()
# to change the range, use the following
dmm.sense.range(10)
dmm.sense.range()
# This will also automatically turn off the auto range:
dmm.sense.auto_range()
# The allowed range (upper limit) values form a discrete set, for example 100 mV, 1 V, 10 V, 100 V, and 1000 V. If any other value is input, the system will just use the "closest" allowed one:
dmm.sense.range(150)
dmm.sense.range()
dmm.sense.range(105)
dmm.sense.range()
# The driver will not give any error messages for the example above, but if the value is too large or too small, there'll be an error message:
try:
dmm.sense.range(0.0001)
except ValueError as err:
print(err)
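# The "closest allowed range" behaviour can be sketched as follows; the range list is illustrative for the DC voltage function, and the real instrument/driver may instead round up to the next range that covers the value:

```python
# Illustrative DC voltage full-scale ranges, in volts.
ALLOWED_RANGES = [0.1, 1, 10, 100, 1000]

def snap_range(requested: float) -> float:
    """Return the allowed range value closest to the requested one."""
    return min(ALLOWED_RANGES, key=lambda r: abs(r - requested))

print(snap_range(150))  # -> 100
print(snap_range(105))  # -> 100
```

# Values far outside the set are a different case: as shown above, the driver's validator rejects them with a ValueError rather than snapping.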
# # To set the NPLC (Number of Power Line Cycles) value for measurements:
# By default, the NPLC is 1 for each sense function
dmm.sense.nplc()
# To set the NPLC value:
dmm.sense.nplc(.1)
dmm.sense.nplc()
# Same as the 'range' variable, each sense function has its own NPLC value:
dmm.sense.function('resistance')
dmm.sense.function()
dmm.sense.nplc()
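# NPLC sets the A/D integration (aperture) time in units of one power-line cycle, so aperture = NPLC / line frequency. A quick sketch (50 Hz and 60 Hz are simply the two common mains frequencies):

```python
def aperture_time(nplc: float, line_freq_hz: float = 60.0) -> float:
    """Integration time in seconds for a given NPLC setting."""
    return nplc / line_freq_hz

print(aperture_time(1))        # ~16.7 ms on a 60 Hz line
print(aperture_time(0.1, 50))  # 2 ms on a 50 Hz line
```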
# # To set the delay:
# By default, the auto delay is enabled. According to the guide, "When this is enabled, a delay is added after a range or function change to allow the instrument to settle." But it's unclear how much the delay is.
dmm.sense.auto_delay()
# To turn off the auto delay:
dmm.sense.auto_delay(0)
dmm.sense.auto_delay()
# To turn the auto delay back on:
dmm.sense.auto_delay(1)
dmm.sense.auto_delay()
# There is also an "user_delay", but it is designed for rigger model, please see the user guide for detail.
# To set the user delay time:
#
# First, set a user number to associate the delay time with (the default user number is empty, so a user number has to be set before setting the delay time):
dmm.sense.user_number(1)
dmm.sense.user_number()
# By default, the user delay is 0s:
dmm.sense.user_delay()
# Then to set the user delay as following:
dmm.sense.user_delay(0.1)
dmm.sense.user_delay()
# The user delay is tied to user number:
dmm.sense.user_number(2)
dmm.sense.user_number()
dmm.sense.user_delay()
# For the record, the auto delay here is still on:
dmm.sense.auto_delay()
# # To set auto zero (automatic updates to the internal reference measurements):
# By default, the auto zero is on
dmm.sense.auto_zero()
# To turn off auto zero:
dmm.sense.auto_zero(0)
dmm.sense.auto_zero()
# The auto zero setting is also tied to each function, not universal:
dmm.sense.function('current')
dmm.sense.function()
dmm.sense.auto_zero()
# There is a way to ask the system to do auto zero once:
dmm.sense.auto_zero_once()
# See p. 487 of the Reference Manual for how to use auto zero ONCE. Note: it's not function-dependent.
# # To set averaging filter for measurements, including average count, and filter type:
# By default, averaging is off:
dmm.sense.average()
# To turn it on:
dmm.sense.average(1)
dmm.sense.average()
# Default average count value is 10, **remember to turn average on**, or it will not work:
dmm.sense.average_count()
# To change the average count:
dmm.sense.average_count(23)
dmm.sense.average_count()
# The range for average count is 1 to 100:
try:
dmm.sense.average_count(200)
except ValueError as err:
print(err)
# There are two average types, repeating (default) or moving filter:
dmm.sense.average_type()
# To make changes:
dmm.sense.average_type('MOV')
dmm.sense.average_type()
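# Conceptually, the two filter types differ in when a result is produced: a repeating filter collects a full window of fresh readings for each result, while a moving filter slides the window one reading at a time. A small sketch (pure Python, not the instrument's implementation):

```python
def repeating_filter(readings, count):
    """One averaged result per `count` fresh readings."""
    return [sum(readings[i:i + count]) / count
            for i in range(0, len(readings) - count + 1, count)]

def moving_filter(readings, count):
    """One averaged result per reading once the window is full."""
    return [sum(readings[i:i + count]) / count
            for i in range(len(readings) - count + 1)]

data = [1, 2, 3, 4, 5, 6]
print(repeating_filter(data, 3))  # [2.0, 5.0]
print(moving_filter(data, 3))     # [2.0, 3.0, 4.0, 5.0]
```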
|
docs/examples/driver_examples/Qcodes example with Keithley 7510.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import numpy as np
vanilla = pd.read_csv('per_moa_performance/level5_vanilla_moa_performance.csv').set_index('MOA')['zscore']
vanilla_leaveOut = pd.read_csv('per_moa_performance/level5_vanilla_leaveOut_moa_performance.csv').set_index('MOA')['zscore']
beta = pd.read_csv('per_moa_performance/level5_beta_moa_performance.csv').set_index('MOA')['zscore']
beta_leaveOut = pd.read_csv('per_moa_performance/level5_beta_leaveOut_moa_performance.csv').set_index('MOA')['zscore']
mmd = pd.read_csv('per_moa_performance/level5_mmd_moa_performance.csv').set_index('MOA')['zscore']
mmd_leaveOut = pd.read_csv('per_moa_performance/level5_mmd_leaveOut_moa_performance.csv').set_index('MOA')['zscore']
vanilla_df = pd.concat([vanilla, vanilla_leaveOut], axis = 1)
beta_df = pd.concat([beta, beta_leaveOut], axis = 1)
mmd_df = pd.concat([mmd, mmd_leaveOut], axis = 1)
vanilla_df = pd.DataFrame(- np.log(stats.norm.sf(-(vanilla_df)))).reset_index().assign(moaType = 'Not left out')
beta_df = pd.DataFrame(- np.log(stats.norm.sf(-(beta_df)))).reset_index().assign(moaType = 'Not left out')
mmd_df = pd.DataFrame(- np.log(stats.norm.sf(-(mmd_df)))).reset_index().assign(moaType = 'Not left out')
vanilla_df.loc[vanilla_df.index < 5, 'moaType'] = 'Left out'
beta_df.loc[beta_df.index < 5, 'moaType'] = 'Left out'
mmd_df.loc[mmd_df.index < 5, 'moaType'] = 'Left out'
vanilla_df = vanilla_df.rename(columns = {'moaType': ''})
beta_df = beta_df.rename(columns = {'moaType': ''})
mmd_df = mmd_df.rename(columns = {'moaType': ''})
sns.set_theme()
sns.set(font_scale=.5)
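# The `- np.log(stats.norm.sf(-z))` transform above maps each z-score to the negative log of a one-sided p-value; with only the stdlib it can be sketched via the error function (the z value below is illustrative):

```python
import math

def neg_log_pvalue(z: float) -> float:
    """-log(p) with p = P(N(0, 1) > -z), matching -np.log(stats.norm.sf(-z))."""
    sf = 0.5 * math.erfc(-z / math.sqrt(2))  # standard-normal survival function at -z
    return -math.log(sf)

print(neg_log_pvalue(0.0))    # log(2) ~ 0.693: z = 0 corresponds to a p-value of 0.5
print(neg_log_pvalue(-1.64))  # strongly negative z-scores map to large values
```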
# +
fig, (ax1, ax2, ax3) = plt.subplots(3,1, figsize = (8,15), dpi=400)
sns.scatterplot(data = vanilla_df, x = 0, y = 1, hue = '', palette = ['red', 'dodgerblue'], ax = ax1)
sns.scatterplot(data = beta_df, x = 0, y = 1, hue = '', palette = ['red', 'dodgerblue'], ax = ax2)
sns.scatterplot(data = mmd_df, x = 0, y = 1, hue = '', palette = ['red', 'dodgerblue'], ax = ax3)
ax1.axis('square')
ax2.axis('square')
ax3.axis('square')
x = np.arange(0,8)
ax1.plot(x,x,':')
ax2.plot(x,x,':')
ax3.plot(x,x,':')
ax1.set_xlabel('Vanilla VAE -log pvalue')
ax1.set_ylabel('Vanilla VAE leave out -log pvalue')
ax2.set_xlabel('β-VAE -log pvalue')
ax2.set_ylabel('β-VAE leave out -log pvalue')
ax3.set_xlabel('MMD-VAE -log pvalue')
ax3.set_ylabel('MMD-VAE leave out -log pvalue')
|
cell-painting/3.application/leave 5 out analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# # dense_1_Multiply_50_embeddings_4_epochs_dropout
#
# # Deep recommender on top of Amazon's Clean Clothing Shoes and Jewelry explicit rating dataset
#
# Frame the recommendation system as a rating-prediction machine learning problem and create a hybrid architecture that mixes the collaborative and content-based filtering approaches:
# - Collaborative part: predict item ratings in order to recommend to the user items that they are likely to rate highly.
# - Content-based part: use metadata inputs (such as price and title) about items to find similar items to recommend.
#
# ### - Create 2 explicit recommendation engine models based on 2 machine learning architectures using Keras:
# 1. a matrix factorization model
# 2. a deep neural network model.
#
#
# ### Compare the results of the different models and configurations to find the "best" predicting model
#
# ### Used the best model for recommending items to users
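# As a toy illustration of the matrix-factorization idea (NumPy only, with an invented 3x4 ratings matrix; the actual models in this notebook are built with Keras):

```python
import numpy as np

# Tiny invented ratings matrix: 3 users x 4 items, 0 = unrated.
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0]])
mask = R > 0

rng = np.random.default_rng(0)
k = 2                        # embedding size
U = rng.normal(size=(3, k))  # user embeddings
V = rng.normal(size=(4, k))  # item embeddings

for _ in range(5000):        # plain gradient descent on observed entries only
    err = (U @ V.T - R) * mask
    U -= 0.01 * err @ V
    V -= 0.01 * err.T @ U

# Squared error on the observed cells shrinks toward zero as U @ V.T fits R.
print(float((((U @ V.T - R) * mask) ** 2).sum()))
```

# The deep-network variant replaces this dot product of embeddings with dense layers applied to the concatenated embeddings.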
# +
### name of model
modname = 'dense_1_Multiply_50_embeddings_4_epochs_dropout'
### size of embedding
embedding_size = 50
### number of epochs
num_epochs = 4
# +
# import sys
# # !{sys.executable} -m pip install --upgrade pip
# # !{sys.executable} -m pip install sagemaker-experiments
# # !{sys.executable} -m pip install pandas
# # !{sys.executable} -m pip install numpy
# # !{sys.executable} -m pip install matplotlib
# # !{sys.executable} -m pip install boto3
# # !{sys.executable} -m pip install sagemaker
# # !{sys.executable} -m pip install pyspark
# # !{sys.executable} -m pip install ipython-autotime
# # !{sys.executable} -m pip install surprise
# # !{sys.executable} -m pip install smart_open
# # !{sys.executable} -m pip install pyarrow
# # !{sys.executable} -m pip install fastparquet
# +
# Check Jave version
# # !sudo yum -y update
# +
# # Need to use Java 1.8.0
# # !sudo yum remove jre-1.7.0-openjdk -y
# -
# !java -version
# +
# # !sudo update-alternatives --config java
# +
# # !pip install pyarrow fastparquet
# # !pip install ipython-autotime
# # !pip install tqdm pydot pydotplus pydot_ng
# +
#### To measure all running time
# https://github.com/cpcloud/ipython-autotime
# %load_ext autotime
# +
# %pylab inline
import warnings
warnings.filterwarnings("ignore")
# %matplotlib inline
import re
import seaborn as sbn
import nltk
import tqdm as tqdm
import sqlite3
import pandas as pd
import numpy as np
from pandas import DataFrame
import string
import pydot
import pydotplus
import pydot_ng
import pickle
import time
import gzip
import os
os.getcwd()
import matplotlib.pyplot as plt
from math import floor,ceil
#from nltk.corpus import stopwords
#stop = stopwords.words("english")
from nltk.stem.porter import PorterStemmer
english_stemmer=nltk.stem.SnowballStemmer('english')
from nltk.tokenize import word_tokenize
from sklearn.metrics import accuracy_score, confusion_matrix,roc_curve, auc,classification_report, mean_squared_error, mean_absolute_error
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.svm import LinearSVC
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression
from sklearn import neighbors
from scipy.spatial.distance import cosine
from sklearn.feature_selection import SelectKBest
from IPython.display import SVG
# Tensorflow
import tensorflow as tf
#Keras
from keras.models import Sequential, Model, load_model, save_model
from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D, Embedding
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
from keras.layers.core import Reshape, Dropout, Dense
from keras.layers.merge import Multiply, Dot, Concatenate
from keras.layers.embeddings import Embedding
from keras import optimizers
from keras.callbacks import ModelCheckpoint
from keras.utils.vis_utils import model_to_dot
# -
# ### Set and Check GPUs
# +
#Session
from keras import backend as K
def set_check_gpu():
cfg = K.tf.ConfigProto()
cfg.gpu_options.per_process_gpu_memory_fraction =1 # allow all of the GPU memory to be allocated
# for 8 GPUs
# cfg.gpu_options.visible_device_list = "0,1,2,3,4,5,6,7" # "0,1"
# for 1 GPU
cfg.gpu_options.visible_device_list = "0"
#cfg.gpu_options.allow_growth = True # # Don't pre-allocate memory; dynamically allocate the memory used on the GPU as-needed
#cfg.log_device_placement = True # to log device placement (on which device the operation ran)
sess = K.tf.Session(config=cfg)
K.set_session(sess) # set this TensorFlow session as the default session for Keras
print("* TF version: ", [tf.__version__, tf.test.is_gpu_available()])
print("* List of GPU(s): ", tf.config.experimental.list_physical_devices() )
print("* Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID";
# set for 8 GPUs
# os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5,6,7";
# set for 1 GPU
os.environ["CUDA_VISIBLE_DEVICES"] = "0";
# Tf debugging option
tf.debugging.set_log_device_placement(True)
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
# print(tf.config.list_logical_devices('GPU'))
print(tf.config.experimental.list_physical_devices('GPU'))
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# -
set_check_gpu()
# reset GPU memory& Keras Session
def reset_keras():
try:
del classifier
del model
except:
pass
K.clear_session()
K.get_session().close()
# sess = K.get_session()
cfg = K.tf.ConfigProto()
cfg.gpu_options.per_process_gpu_memory_fraction = 1  # allow the full GPU memory to be allocated
# cfg.gpu_options.visible_device_list = "0,1,2,3,4,5,6,7" # "0,1"
cfg.gpu_options.visible_device_list = "0" # "0,1"
cfg.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
sess = K.tf.Session(config=cfg)
K.set_session(sess) # set this TensorFlow session as the default session for Keras
# ## Load dataset and analysis using Spark
# ## Download and prepare Data:
# #### 1. Read the data:
# #### Read the data from the reviews dataset of amazon.
# #### Use the dataset in which all users and items have at least 5 reviews.
#
# ### Location of dataset: https://nijianmo.github.io/amazon/index.html
# +
import pandas as pd
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session
from sagemaker.analytics import ExperimentAnalytics
import gzip
import json
from pyspark.ml import Pipeline
from pyspark.sql.types import StructField, StructType, StringType, DoubleType
from pyspark.ml.feature import StringIndexer, VectorIndexer, OneHotEncoder, VectorAssembler
from pyspark.sql.functions import *
# spark imports
from pyspark.sql import SparkSession
from pyspark.sql.functions import UserDefinedFunction, explode, desc
from pyspark.sql.types import StringType, ArrayType
from pyspark.ml.evaluation import RegressionEvaluator
import os
import pandas as pd
import pyarrow
import fastparquet
# from pandas_profiling import ProfileReport
# +
# # !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/Clean_Clothing_Shoes_and_Jewelry_5_clean.parquet ./data/
# -
# !ls -alh ./data
# ### Read cleaned dataset from parquet files
review_data = pd.read_parquet("./data/Clean_Clothing_Shoes_and_Jewelry_5_clean.parquet")
review_data[:3]
review_data.shape
# ### 2. Arrange and clean the data
# Rearrange the columns by relevance and rename column names
review_data.columns
# +
review_data = review_data[['asin', 'image', 'summary', 'reviewText', 'overall', 'reviewerID', 'reviewerName', 'reviewTime']]
review_data.rename(columns={ 'overall': 'score','reviewerID': 'user_id', 'reviewerName': 'user_name'}, inplace=True)
# the variable names after renaming in the modified data frame
list(review_data)
# -
# # Add Metadata
#
# ### Metadata includes descriptions, price, sales-rank, brand info, and co-purchasing links
# - asin - ID of the product, e.g. 0000031852
# - title - name of the product
# - price - price in US dollars (at time of crawl)
# - imUrl - url of the product image
# - related - related products (also bought, also viewed, bought together, buy after viewing)
# - salesRank - sales rank information
# - brand - brand name
# - categories - list of categories the product belongs to
# +
# # !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/Cleaned_meta_Clothing_Shoes_and_Jewelry.parquet ./data/
# -
all_info = pd.read_parquet("./data/Cleaned_meta_Clothing_Shoes_and_Jewelry.parquet")
all_info.head(n=5)
# ### Arrange and clean the data
# - Cleaning, handling missing data, normalization, etc.
# - For the Keras model to work, remap all item_ids and user_ids to integers between 0 and the total number of items or users, respectively
all_info.columns
items = all_info.asin.unique()
item_map = {i:val for i,val in enumerate(items)}
inverse_item_map = {val:i for i,val in enumerate(items)}
all_info["old_item_id"] = all_info["asin"] # copying for join with metadata
all_info["item_id"] = all_info["asin"].map(inverse_item_map)
items = all_info.item_id.unique()
print ("We have %d unique items in metadata "%items.shape[0])
all_info['description'] = all_info['description'].fillna(all_info['title'].fillna('no_data'))
all_info['title'] = all_info['title'].fillna(all_info['description'].fillna('no_data').apply(str).str[:20])
all_info['image'] = all_info['image'].fillna('no_data')
all_info['price'] = pd.to_numeric(all_info['price'],errors="coerce")
all_info['price'] = all_info['price'].fillna(all_info['price'].median())
# +
users = review_data.user_id.unique()
user_map = {i:val for i,val in enumerate(users)}
inverse_user_map = {val:i for i,val in enumerate(users)}
review_data["old_user_id"] = review_data["user_id"]
review_data["user_id"] = review_data["user_id"].map(inverse_user_map)
items_reviewed = review_data.asin.unique()
review_data["old_item_id"] = review_data["asin"] # copying for join with metadata
review_data["item_id"] = review_data["asin"].map(inverse_item_map)
items_reviewed = review_data.item_id.unique()
users = review_data.user_id.unique()
# -
print ("We have %d unique users"%users.shape[0])
print ("We have %d unique items reviewed"%items_reviewed.shape[0])
# We have 192403 unique users in the "small" dataset
# We have 63001 unique items reviewed in the "small" dataset
review_data.head(3)
# ## Adding the review count and average to the metadata
#items_nb = review_data['old_item_id'].value_counts().reset_index()
items_avg = review_data.drop(['summary','reviewText','user_id','asin','user_name','reviewTime','old_user_id','item_id'],axis=1).groupby('old_item_id').agg(['count','mean']).reset_index()
items_avg.columns= ['old_item_id','num_ratings','avg_rating']
#items_avg.head(5)
items_avg['num_ratings'].describe()
all_info = pd.merge(all_info,items_avg,how='left',left_on='asin',right_on='old_item_id')
pd.set_option('display.max_colwidth', 100)
all_info.head(2)
# # Explicit feedback (Reviewed Dataset) Recommender System
# ### Explicit feedback is when users gives voluntarily the rating information on what they like and dislike.
#
# - In this case, I have explicit item ratings ranging from one to five.
# - Framed the recommendation system as a rating prediction machine learning problem:
# - Predict an item's ratings in order to be able to recommend to a user an item that he is likely to rate highly if he buys it.
#
# ### To evaluate the model, I randomly separate the data into a training and test set.
from sklearn.model_selection import train_test_split

ratings_train, ratings_test = train_test_split(review_data, test_size=0.1, random_state=0)
ratings_train.shape
ratings_test.shape
# ## Adding Metadata to the train set
# Create an architecture that mixes the collaborative and content-based filtering approaches:
# ```
# - Collaborative Part: Predict items ratings to recommend to the user items which he is likely to rate high according to learnt item & user embeddings (learn similarity from interactions).
# - Content based part: Use metadata inputs (such as price and title) about items to recommend to the user contents similar to those he rated high (learn similarity of item attributes).
# ```
#
# #### Adding the title and price - Add the metadata of the items in the training and test datasets.
# +
# # creating metadata mappings
# titles = all_info['title'].unique()
# titles_map = {i:val for i,val in enumerate(titles)}
# inverse_titles_map = {val:i for i,val in enumerate(titles)}
# price = all_info['price'].unique()
# price_map = {i:val for i,val in enumerate(price)}
# inverse_price_map = {val:i for i,val in enumerate(price)}
# print ("We have %d prices" %price.shape)
# print ("We have %d titles" %titles.shape)
# all_info['price_id'] = all_info['price'].map(inverse_price_map)
# all_info['title_id'] = all_info['title'].map(inverse_titles_map)
# # creating dict from
# item2prices = {}
# for val in all_info[['item_id','price_id']].dropna().drop_duplicates().iterrows():
# item2prices[val[1]["item_id"]] = val[1]["price_id"]
# item2titles = {}
# for val in all_info[['item_id','title_id']].dropna().drop_duplicates().iterrows():
# item2titles[val[1]["item_id"]] = val[1]["title_id"]
# # populating the rating dataset with item metadata info
# ratings_train["price_id"] = ratings_train["item_id"].map(lambda x : item2prices[x])
# ratings_train["title_id"] = ratings_train["item_id"].map(lambda x : item2titles[x])
# # populating the test dataset with item metadata info
# ratings_test["price_id"] = ratings_test["item_id"].map(lambda x : item2prices[x])
# ratings_test["title_id"] = ratings_test["item_id"].map(lambda x : item2titles[x])
# -
# ## create rating train/test dataset and upload into S3
# +
# # !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/ratings_test.parquet ./data/
# # !aws s3 cp s3://dse-cohort5-group1/2-Keras-DeepRecommender/dataset/ratings_train.parquet ./data/
# -
ratings_test = pd.read_parquet('./data/ratings_test.parquet')
ratings_train = pd.read_parquet('./data/ratings_train.parquet')
ratings_train[:3]
ratings_train.shape
# # Define embeddings
# ### The $\underline{embeddings}$ are low-dimensional hidden representations of users and items,
# ### i.e. for each item I can find its properties, and for each user I can encode how much they like those properties, so the attitudes or preferences of users are determined by a small number of hidden factors.
#
# ### Throughout the training, I learn two new low-dimensional dense representations: one embedding for the users and another one for the items.
#
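# As a toy illustration (an addition, not part of the original notebook): an embedding layer is just a table lookup, and the classic matrix-factorization score is the dot product of the two looked-up rows. All sizes below are made up; the notebook trains e.g. 50-dim embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3                  # toy sizes
user_emb = rng.normal(size=(n_users, k))       # one row of k hidden factors per user
item_emb = rng.normal(size=(n_items, k))       # one row of k hidden factors per item

# An embedding lookup: row u describes user u, row i describes item i.
u, i = 2, 3
score = float(user_emb[u] @ item_emb[i])       # matrix-factorization rating prediction
print(user_emb[u].shape, round(score, 3))
```

# The deep recommender below replaces this fixed dot product with a neural network over the same looked-up rows.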
price = all_info['price'].unique()
titles = all_info['title'].unique()
# +
# declare input embeddings to the model
# User input
user_id_input = Input(shape=[1], name='user')
# Item Input
item_id_input = Input(shape=[1], name='item')
price_id_input = Input(shape=[1], name='price')
title_id_input = Input(shape=[1], name='title')
# define the size of embeddings as a parameter
# Check 5, 10, 15, 20, 50
embedding_size = 50  # assumed value; this notebook's variant uses 50-dim embeddings
user_embedding_size = embedding_size
item_embedding_size = embedding_size
price_embedding_size = embedding_size
title_embedding_size = embedding_size
# apply an embedding layer to all inputs
user_embedding = Embedding(output_dim=user_embedding_size, input_dim=users.shape[0],
input_length=1, name='user_embedding')(user_id_input)
item_embedding = Embedding(output_dim=item_embedding_size, input_dim=items_reviewed.shape[0],
input_length=1, name='item_embedding')(item_id_input)
price_embedding = Embedding(output_dim=price_embedding_size, input_dim=price.shape[0],
input_length=1, name='price_embedding')(price_id_input)
title_embedding = Embedding(output_dim=title_embedding_size, input_dim=titles.shape[0],
input_length=1, name='title_embedding')(title_id_input)
# reshape from shape (batch_size, input_length,embedding_size) to (batch_size, embedding_size).
user_vecs = Reshape([user_embedding_size])(user_embedding)
user_vecs = Dropout(0.8)(user_vecs)
item_vecs = Reshape([item_embedding_size])(item_embedding)
item_vecs = Dropout(0.8)(item_vecs)
price_vecs = Reshape([price_embedding_size])(price_embedding)
price_vecs = Dropout(0.8)(price_vecs)
title_vecs = Reshape([title_embedding_size])(title_embedding)
title_vecs = Dropout(0.8)(title_vecs)
# -
# # 2. Deep Recommender
#
# ### Instead of taking a dot product of the user and the item embedding, concatenate or multiply them and use them as features for a neural network.
# ### Thus, we are not constrained to the dot product way of combining the embeddings, and can learn complex non-linear relationships.
#
# 
#
#
#
#
#
# !mkdir -p ./models
# Try add dense layers on top of the embeddings before merging (Comment to drop this idea.)
user_vecs = Dense(64, activation='relu')(user_vecs)
user_vecs = Dropout(0.4)(user_vecs)
item_vecs = Dense(64, activation='relu')(item_vecs)
item_vecs = Dropout(0.4)(item_vecs)
# price_vecs = Dense(64, activation='relu')(price_vecs)
# title_vecs = Dense(64, activation='relu')(title_vecs)
# +
# Concatenate the item embeddings :
# item_vecs_complete = Concatenate()([item_vecs, price_vecs,title_vecs])
# Concatenate user and item embeddings and use them as features for the neural network:
# input_vecs = Concatenate()([user_vecs, item_vecs_complete]) # can be changed by Multiply
#input_vecs = Concatenate()([user_vecs, item_vecs]) # can be changed by Multiply
# Multiply user and item embeddings and use them as features for the neural network:
input_vecs = Multiply()([user_vecs, item_vecs]) # can be changed by concat
# Dropout is a technique where randomly selected neurons are ignored during training to prevent overfitting
input_vecs = Dropout(0.6)(input_vecs)
# Check one dense 128 or two dense layers (128,128) or (128,64) or three denses layers (128,64,32))
# First layer
# Dense(128) is a fully-connected layer with 128 hidden units.
# Use rectified linear units (ReLU) f(x)=max(0,x) as an activation function.
x = Dense(128, activation='relu')(input_vecs)
x = Dropout(0.6)(x) # Add dropout or not # To improve the performance
# Next Layers
# x = Dense(128, activation='relu')(x) # Add dense again or not
x = Dropout(0.1)(x) # Add dropout or not # To improve the performance
# x = Dense(128, activation='relu')(x) # Add dense again or not
x = Dropout(0.1)(x) # Add dropout or not # To improve the performance
# x = Dense(32, activation='relu')(x) # Add dense again or not #
x = Dropout(0.1)(x) # Add dropout or not # To improve the performance
# The output
y = Dense(1)(x)
# +
# create model
model = Model(inputs=
[
user_id_input,
item_id_input
],
outputs=y)
# compile model
model.compile(loss='mse',
optimizer="adam" )
# set save location for model
save_path = "./models"
# `modname` and `num_epochs` are assumed to be defined in an earlier cell, e.g.:
# modname = 'dense_1_Multiply_50_embeddings_4_epochs_dropout'; num_epochs = 4
thename = save_path + '/' + modname + '.h5'
mcheck = ModelCheckpoint(thename, monitor='val_loss', save_best_only=True)
# fit model - increate batch_size to 512
history = model.fit([ratings_train["user_id"]
, ratings_train["item_id"]
]
, ratings_train["score"]
, batch_size=512
, epochs=num_epochs
, validation_split=0.1
, callbacks=[mcheck]
, shuffle=True)
# +
# Save the fitted model history to a file
import os
import pickle

os.makedirs('./histories', exist_ok=True)
with open('./histories/' + modname + '.pkl', 'wb') as file_pi:
    pickle.dump(history.history, file_pi)
print("Saved history in", './histories/' + modname + '.pkl')
# +
def disp_model(path,file,suffix):
model = load_model(path+file+suffix)
## Summarise the model
model.summary()
# Extract the learnt user and item embeddings: tables with one row per user/item and one column per dimension of the trained embedding.
# In our case, the embeddings correspond exactly to the weights of the model:
weights = model.get_weights()
print ("embeddings / weights shapes", [w.shape for w in weights])
return model
model_path = "./models/"
# +
def plt_pickle(path,file,suffix):
with open(path+file+suffix , 'rb') as file_pi:
thepickle= pickle.load(file_pi)
plot(thepickle["loss"],label ='Train Error ' + file,linestyle="--")
plot(thepickle["val_loss"],label='Validation Error ' + file)
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Error")
##plt.ylim(0, 0.1)
return pd.DataFrame(thepickle,columns =['loss','val_loss'])
hist_path = "./histories/"
# -
print(model_path)
print(modname)
model=disp_model(model_path, modname, '.h5')
# Display the model using keras
SVG(model_to_dot(model).create(prog='dot', format='svg'))
x=plt_pickle(hist_path , modname , '.pkl')
x.head(20).transpose()
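# The held-out `ratings_test` split created earlier is not scored in this notebook. A minimal RMSE helper (the metric implied by the MSE training loss) could look like this; the commented lines show how it would be applied to the trained `model`, which is assumed to exist:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error between two rating vectors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# With the trained model from above this would be (not run here):
#   preds = model.predict([ratings_test["user_id"], ratings_test["item_id"]]).ravel()
#   print(rmse(ratings_test["score"], preds))
print(rmse([5, 3, 4], [4.5, 3.5, 4.0]))
```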
# Source notebook: Keras-DeepRecommender-Clothing-Shoes-Jewelry/2_Modeling/*dense_1_Multiply_50_embeddings_4_epochs_dropout.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # 13-validate-csv
# > Validating a csv file
#
# This notebook validates the format of csv files.
# # Helpful packages
# +
#all_no_test
# -
#export text_preprocessing
import pandas as pd
import re
import os.path
# # Set the file path
# You can change the 'base_prefix' variable below according to your computer environment. In this example, Soyeon's local file path was used.
#base_prefix = os.path.expanduser('~/Box Sync/DSI Documents/')
base_prefix = '/data/p_dsi/wise/data/'
file_directory = base_prefix + 'cleaned_data/csv_files/megadata.csv'
df = pd.read_csv(file_directory)
df.head()
# # Function #1: Checking the labels
# This function checks whether any labels have typos. If the labels used are the defaults ('PRS', 'REP', 'OTR', 'NEU'), keep the `accepted_labels` variable as None. If you want to change the labels, assign a list of new labels to `accepted_labels`.
#export text_preprocessing
def check_label(dataframe, accepted_labels = None):
"""
Validate labels
Parameters
----------
dataframe : dataframe
a dataframe that you want to check labels
accepted_labels: either None or a list of new labels
None: When you want to use the default labels ('PRS', 'REP', 'OTR', 'NEU')
a list of new labels: When you want to change the labels (ex. ['OTR_academic, 'OTR_behavior])
Returns
-------
dataframe
rows of which label is not an element of the list we input
"""
    if accepted_labels is None:
        accepted_labels = ['PRS', 'REP', 'OTR', 'NEU']
    return dataframe[~dataframe['label'].isin(accepted_labels)]
# ## Test 1
# The result below shows a row whose label has a typo: "NUE", which should be corrected to "NEU".
check_label(df)
# ## Test 2
# This time, `accepted_labels` is not None: I assigned a new list of labels accepting only 'NEU', 'PRS', and 'OTR'. Therefore, the rows whose `label` is "REP" are returned as invalid.
check_label(df, accepted_labels = ['NEU', 'PRS', 'OTR']).head()
# ## Function #2: Checking the format of start/end_timestamp
# This function validates `start_timestamp` and `end_timestamp`. Some timestamps are in an incorrect format, such as "00:00:04 .44" and "00:04:53:33". It finds which timestamps are in the wrong format and asks the user whether to change them. If the user wants to change one, he/she can enter the correct timestamp and save it.
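# Before the full interactive checker, the timestamp pattern's behavior can be illustrated in isolation on the malformed examples mentioned in this notebook (this demo cell is an addition, not part of the original file):

```python
import re

# Valid timestamps look like HH:MM:SS.ff, e.g. "00:04:53.33".
TS_PATTERN = re.compile(r"\d\d:\d\d:\d\d\.\d\d")

# Malformed examples from this notebook, plus one valid timestamp.
for ts in ["00:04:53.33", "00:07:45:15", "00:04:32-00", "00:00:04 .44"]:
    print(ts, "->", bool(TS_PATTERN.match(ts)))
```

# `Series.str.match` in the function below anchors the same pattern at the start of each string, so it accepts and rejects exactly these cases.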
#export text_preprocessing
def check_ts(dataframe, outfile_name = 'megadata_final', interactive=False):
"""
Find start/end_timestamp in wrong format
If a user wants to change a wrong start_timestamp or end_timestamp, he/she can change it and save it.
Parameters
----------
dataframe: dataframe
A dataframe you want to validate in terms of start_timestamp and end_timestamp
outfile_name: string (default 'megadata_final')
String of the filename you want the new file saved to
interactive: boolean (default False)
Determines whether you want to interactively change the values or just see the errors.
Returns
-------
dataframe
Rows having start/end_timestamp in wrong format
It also asks users if he/she wants to change the wrong timestamp.
If the user wants to change it, he/she can type a correct timestamp and save it.
"""
    # Find the rows having start/end_timestamp in the wrong format
    ts_pattern = r'\d\d:\d\d:\d\d\.\d\d'
    df_ts_typo = dataframe[(~dataframe.start_timestamp.str.match(ts_pattern)) | (~dataframe.end_timestamp.str.match(ts_pattern))]
    display(df_ts_typo)
    # Get the list of timestamps in the wrong format
    df_start_ts_typo = dataframe[~dataframe.start_timestamp.str.match(ts_pattern)]
    df_end_ts_typo = dataframe[~dataframe.end_timestamp.str.match(ts_pattern)]
total_typos = len(df_start_ts_typo) + len(df_end_ts_typo)
# Get unique timestamps
start_ts_list = df_start_ts_typo['start_timestamp'].unique().tolist()
end_ts_list = df_end_ts_typo['end_timestamp'].unique().tolist()
combine = start_ts_list + end_ts_list
combine = list(set(combine))
# Show the number of timestamp in wrong format
print('There are', len(combine), 'unique timestamps in wrong format with a total of', total_typos, 'incorrect rows.')
if interactive:
# Check all the timestamp in incorrect format
for i in combine:
user_answer1 = input("Do you want to change {}? y/n \n".format(i))
if user_answer1 == "y":
user_answer2 = input("What is the correct value?\n")
dataframe.loc[dataframe.start_timestamp == i, 'start_timestamp'] = user_answer2
dataframe.loc[dataframe.end_timestamp == i, 'end_timestamp'] = user_answer2
print("The new value has been updated.")
else:
print("Please go to the original docx file and correct it.")
# Ask a user if he/she wants to save the change
user_answer3 = input("Do you want to save the changes? y/n \n")
if user_answer3 == "y":
output_filepath = base_prefix + 'cleaned_data/csv_files/' + outfile_name + '.csv'
dataframe.to_csv(output_filepath, index=False)
print("You just saved your updated changes. The updated file is: ", output_filepath, '.')
else:
print("You did not save the change.")
return
# ## Test 1
# There are 108 rows in the megadata dataframe with start/end_timestamp in the wrong format; their unique values are "00:07:45:15" and "00:04:32-00". The first is incorrect because the millisecond separator should be a dot (.), not a colon (:). The second has a hyphen (-) between the digits. After showing all rows with an incorrect start/end_timestamp, the function summarizes the result. It also asks the user whether to change each timestamp; if the user types 'y', the function asks for the correct value. Finally, the user can decide whether to save the changes.
check_ts(df)
# ## Test 2
# After using the validation function, we can tell that there is no more timestamp in wrong format.
check_ts(df, outfile_name = 'megadata_final', interactive=True)
check_ts(df)
# Source notebook: 13-validate-csv.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="fSIfBsgi8dNK" colab_type="code" colab={}
#@title Copyright 2020 Google LLC. { display-mode: "form" }
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="aV1xZ1CPi3Nw" colab_type="text"
# <table class="ee-notebook-buttons" align="left"><td>
# <a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a>
# </td><td>
# <a target="_blank" href="https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td></table>
# + [markdown] id="AC8adBmw-5m3" colab_type="text"
# # Introduction
#
# This is an Earth Engine <> TensorFlow demonstration notebook. Specifically, this notebook shows:
#
# 1. Exporting training/testing data from Earth Engine in TFRecord format.
# 2. Preparing the data for use in a TensorFlow model.
# 3. Training and validating a simple model (Keras `Sequential` neural network) in TensorFlow.
# 4. Making predictions on image data exported from Earth Engine in TFRecord format.
# 5. Ingesting classified image data to Earth Engine in TFRecord format.
#
# This is intended to demonstrate a complete i/o pipeline. For a workflow that uses a [Google AI Platform](https://cloud.google.com/ai-platform) hosted model making predictions interactively, see [this example notebook](http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb).
# + [markdown] id="KiTyR3FNlv-O" colab_type="text"
# # Setup software libraries
#
# Import software libraries and/or authenticate as necessary.
# + [markdown] id="dEM3FP4YakJg" colab_type="text"
# ## Authenticate to Colab and Cloud
#
# To read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). *This should be the same account you use to login to Earth Engine*. When you run the code below, it will display a link in the output to an authentication page in your browser. Follow the link to a page that will let you grant permission to the Cloud SDK to access your resources. Copy the code from the permissions page back into this notebook and press return to complete the process.
#
# (You may need to run this again if you get a credentials error later.)
# + id="sYyTIPLsvMWl" colab_type="code" cellView="code" colab={}
from google.colab import auth
auth.authenticate_user()
# + [markdown] id="Ejxa1MQjEGv9" colab_type="text"
# ## Authenticate to Earth Engine
#
# Authenticate to Earth Engine the same way you did to the Colab notebook. Specifically, run the code to display a link to a permissions page. This gives you access to your Earth Engine account. *This should be the same account you used to login to Cloud previously*. Copy the code from the Earth Engine permissions page back into the notebook and press return to complete the process.
# + id="HzwiVqbcmJIX" colab_type="code" cellView="code" colab={}
import ee
ee.Authenticate()
ee.Initialize()
# + [markdown] id="iJ70EsoWND_0" colab_type="text"
# ## Test the TensorFlow installation
#
# Before any operations from the TensorFlow API are used, import TensorFlow and enable eager execution. This provides an imperative interface that can help with debugging. See the [TensorFlow eager execution guide](https://www.tensorflow.org/guide/eager) or the [`tf.enable_eager_execution()` docs](https://www.tensorflow.org/api_docs/python/tf/enable_eager_execution) for details.
# + id="i1PrYRLaVw_g" colab_type="code" cellView="code" colab={}
import tensorflow as tf
tf.enable_eager_execution()
print(tf.__version__)
# + [markdown] id="b8Xcvjp6cLOL" colab_type="text"
# ## Test the Folium installation
#
# We will use the Folium library for visualization. Import the library and check the version.
# + id="YiVgOXzBZJSn" colab_type="code" colab={}
import folium
print(folium.__version__)
# + [markdown] id="DrXLkJC2QJdP" colab_type="text"
# # Define variables
#
# This set of global variables will be used throughout. For this demo, you must have a Cloud Storage bucket into which you can write files. ([learn more about creating Cloud Storage buckets](https://cloud.google.com/storage/docs/creating-buckets)). You'll also need to specify your Earth Engine username, i.e. `users/USER_NAME` on the [Code Editor](https://code.earthengine.google.com/) Assets tab.
# + id="GHTOc5YLQZ5B" colab_type="code" colab={}
# Your Earth Engine username. This is used to import a classified image
# into your Earth Engine assets folder.
USER_NAME = 'username'
# Cloud Storage bucket into which training, testing and prediction
# datasets will be written. You must be able to write into this bucket.
OUTPUT_BUCKET = 'your-bucket'
# Use Landsat 8 surface reflectance data for predictors.
L8SR = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
# Use these bands for prediction.
BANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
# This is a training/testing dataset of points with known land cover labels.
LABEL_DATA = ee.FeatureCollection('projects/google/demo_landcover_labels')
# The labels, consecutive integer indices starting from zero, are stored in
# this property, set on each point.
LABEL = 'landcover'
# Number of label values, i.e. number of classes in the classification.
N_CLASSES = 3
# These names are used to specify properties in the export of
# training/testing data and to define the mapping between names and data
# when reading into TensorFlow datasets.
FEATURE_NAMES = list(BANDS)
FEATURE_NAMES.append(LABEL)
# File names for the training and testing datasets. These TFRecord files
# will be exported from Earth Engine into the Cloud Storage bucket.
TRAIN_FILE_PREFIX = 'Training_demo'
TEST_FILE_PREFIX = 'Testing_demo'
file_extension = '.tfrecord.gz'
TRAIN_FILE_PATH = 'gs://' + OUTPUT_BUCKET + '/' + TRAIN_FILE_PREFIX + file_extension
TEST_FILE_PATH = 'gs://' + OUTPUT_BUCKET + '/' + TEST_FILE_PREFIX + file_extension
# File name for the prediction (image) dataset. The trained model will read
# this dataset and make predictions in each pixel.
IMAGE_FILE_PREFIX = 'Image_pixel_demo_'
# The output path for the classified image (i.e. predictions) TFRecord file.
OUTPUT_IMAGE_FILE = 'gs://' + OUTPUT_BUCKET + '/Classified_pixel_demo.TFRecord'
# Export imagery in this region.
EXPORT_REGION = ee.Geometry.Rectangle([-122.7, 37.3, -121.8, 38.00])
# The name of the Earth Engine asset to be created by importing
# the classified image from the TFRecord file in Cloud Storage.
OUTPUT_ASSET_ID = 'users/' + USER_NAME + '/Classified_pixel_demo'
# + [markdown] id="ZcjQnHH8zT4q" colab_type="text"
# # Get Training and Testing data from Earth Engine
#
# To get data for a classification model of three classes (bare, vegetation, water), we need labels and the value of predictor variables for each labeled example. We've already generated some labels in Earth Engine. Specifically, these are visually interpreted points labeled "bare," "vegetation," or "water" for a very simple classification demo ([example script](https://code.earthengine.google.com/?scriptPath=Examples%3ADemos%2FClassification)). For predictor variables, we'll use [Landsat 8 surface reflectance imagery](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR), bands 2-7.
# + [markdown] id="0EJfjgelSOpN" colab_type="text"
# ## Prepare Landsat 8 imagery
#
# First, make a cloud-masked median composite of Landsat 8 surface reflectance imagery from 2018. Check the composite by visualizing with folium.
# + id="DJYucYe3SPPr" colab_type="code" colab={}
# Cloud masking function.
def maskL8sr(image):
cloudShadowBitMask = ee.Number(2).pow(3).int()
cloudsBitMask = ee.Number(2).pow(5).int()
qa = image.select('pixel_qa')
mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(
qa.bitwiseAnd(cloudsBitMask).eq(0))
return image.updateMask(mask).select(BANDS).divide(10000)
# The image input data is a 2018 cloud-masked median composite.
image = L8SR.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()
# Use folium to visualize the imagery.
mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})
map = folium.Map(location=[38., -122.5])
folium.TileLayer(
tiles=mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='median composite',
).add_to(map)
map.add_child(folium.LayerControl())
map
# + [markdown] id="UEeyPf3zSPct" colab_type="text"
# ## Add pixel values of the composite to labeled points
#
# Some training labels have already been collected for you. Load the labeled points from an existing Earth Engine asset. Each point in this table has a property called `landcover` that stores the label, encoded as an integer. Here we overlay the points on imagery to get predictor variables along with labels.
# + id="iOedOKyRExHE" colab_type="code" colab={}
# Sample the image at the points and add a random column.
sample = image.sampleRegions(
collection=LABEL_DATA, properties=[LABEL], scale=30).randomColumn()
# Partition the sample approximately 70-30.
training = sample.filter(ee.Filter.lt('random', 0.7))
testing = sample.filter(ee.Filter.gte('random', 0.7))
from pprint import pprint
# Print the first couple points to verify.
pprint({'training': training.first().getInfo()})
pprint({'testing': testing.first().getInfo()})
# + [markdown] id="uNc7a2nRR4MI" colab_type="text"
# ## Export the training and testing data
#
# Now that there's training and testing data in Earth Engine and you've inspected a couple examples to ensure that the information you need is present, it's time to materialize the datasets in a place where the TensorFlow model has access to them. You can do that by exporting the training and testing datasets to tables in TFRecord format ([learn more about TFRecord format](https://www.tensorflow.org/tutorials/load_data/tf-records)) in your Cloud Storage bucket.
# + id="Pb-aPvQc0Xvp" colab_type="code" colab={}
# Make sure you can see the output bucket. You must have write access.
print('Found Cloud Storage bucket.' if tf.gfile.Exists('gs://' + OUTPUT_BUCKET)
else 'Can not find output Cloud Storage bucket.')
# + [markdown] id="Wtoqj0Db1TmJ" colab_type="text"
# Once you've verified the existence of the intended output bucket, run the exports.
# + id="TfVNQzg8R6Wy" colab_type="code" colab={}
# Create the tasks.
training_task = ee.batch.Export.table.toCloudStorage(
collection=training,
description='Training Export',
fileNamePrefix=TRAIN_FILE_PREFIX,
bucket=OUTPUT_BUCKET,
fileFormat='TFRecord',
selectors=FEATURE_NAMES)
testing_task = ee.batch.Export.table.toCloudStorage(
collection=testing,
description='Testing Export',
fileNamePrefix=TEST_FILE_PREFIX,
bucket=OUTPUT_BUCKET,
fileFormat='TFRecord',
selectors=FEATURE_NAMES)
# + id="QF4WGIekaS2s" colab_type="code" colab={}
# Start the tasks.
training_task.start()
testing_task.start()
# + [markdown] id="q7nFLuySISeC" colab_type="text"
# ### Monitor task progress
#
# You can see all your Earth Engine tasks by listing them. Make sure the training and testing tasks are completed before continuing.
# + id="oEWvS5ekcEq0" colab_type="code" colab={}
# Print all tasks.
pprint(ee.batch.Task.list())
# + [markdown] id="43-c0JNFI_m6" colab_type="text"
# ### Check existence of the exported files
#
# If you've seen the status of the export tasks change to `COMPLETED`, then check for the existence of the files in the output Cloud Storage bucket.
# + id="YDZfNl6yc0Kj" colab_type="code" colab={}
print('Found training file.' if tf.gfile.Exists(TRAIN_FILE_PATH)
else 'No training file found.')
print('Found testing file.' if tf.gfile.Exists(TEST_FILE_PATH)
else 'No testing file found.')
# + [markdown] id="NA8QA8oQVo8V" colab_type="text"
# ## Export the imagery
#
# You can also export imagery using TFRecord format. Specifically, export whatever imagery you want to be classified by the trained model into the output Cloud Storage bucket.
# + id="tVNhJYacVpEw" colab_type="code" colab={}
# Specify patch and file dimensions.
image_export_options = {
'patchDimensions': [256, 256],
'maxFileSize': 104857600,
'compressed': True
}
# Setup the task.
image_task = ee.batch.Export.image.toCloudStorage(
image=image,
description='Image Export',
fileNamePrefix=IMAGE_FILE_PREFIX,
bucket=OUTPUT_BUCKET,
scale=30,
fileFormat='TFRecord',
region=EXPORT_REGION.toGeoJSON()['coordinates'],
formatOptions=image_export_options,
)
# + id="6SweCkHDaNE3" colab_type="code" colab={}
# Start the task.
image_task.start()
# + [markdown] id="JC8C53MRTG_E" colab_type="text"
# ### Monitor task progress
# + id="BmPHb779KOXm" colab_type="code" colab={}
# Print all tasks.
pprint(ee.batch.Task.list())
# + [markdown] id="SrUhA1JKLONj" colab_type="text"
# It's also possible to monitor an individual task. Here we poll the task until it's done. If you do this, please put a `sleep()` in the loop to avoid making too many requests. Note that this will block until complete (you can always halt the execution of this cell).
# + id="rKZeZswloP11" colab_type="code" colab={}
import time
while image_task.active():
print('Polling for task (id: {}).'.format(image_task.id))
time.sleep(30)
print('Done with image export.')
# + [markdown] id="9vWdH_wlZCEk" colab_type="text"
# # Data preparation and pre-processing
#
# Read data from the TFRecord file into a `tf.data.Dataset`. Pre-process the dataset to get it into a suitable format for input to the model.
# + [markdown] id="LS4jGTrEfz-1" colab_type="text"
# ## Read into a `tf.data.Dataset`
#
# Here we are going to read a file in Cloud Storage into a `tf.data.Dataset`. ([these TensorFlow docs](https://www.tensorflow.org/guide/data) explain more about reading data into a `Dataset`). Check that you can read examples from the file. The purpose here is to ensure that we can read from the file without an error. The actual content is not necessarily human readable.
#
#
# + id="T3PKyDQW8Vpx" colab_type="code" cellView="code" colab={}
# Create a dataset from the TFRecord file in Cloud Storage.
train_dataset = tf.data.TFRecordDataset(TRAIN_FILE_PATH, compression_type='GZIP')
# Print the first record to check.
print(iter(train_dataset).next())
# + [markdown] id="BrDYm-ibKR6t" colab_type="text"
# ## Define the structure of your data
#
# For parsing the exported TFRecord files, `features_dict` is a mapping between feature names (recall that `FEATURE_NAMES` contains the band and label names) and `float32` [`tf.io.FixedLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature) objects. This mapping is necessary for telling TensorFlow how to read data in a TFRecord file into tensors. Specifically, **all numeric data exported from Earth Engine is exported as `float32`**.
#
# (Note: *features* in the TensorFlow context (i.e. [`tf.train.Feature`](https://www.tensorflow.org/api_docs/python/tf/train/Feature)) are not to be confused with Earth Engine features (i.e. [`ee.Feature`](https://developers.google.com/earth-engine/api_docs#eefeature)), where the former is a protocol message type for serialized data input to the model and the latter is a geometry-based geographic data structure.)
# + id="-6JVQV5HKHMZ" colab_type="code" cellView="code" colab={}
# List of fixed-length features, all of which are float32.
columns = [
tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in FEATURE_NAMES
]
# Dictionary with names as keys, features as values.
features_dict = dict(zip(FEATURE_NAMES, columns))
pprint(features_dict)
# + [markdown] id="QNfaUPbcjuCO" colab_type="text"
# ## Parse the dataset
#
# Now we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized [`Example` proto](https://www.tensorflow.org/api_docs/python/tf/train/Example) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. ([These TensorFlow docs](https://www.tensorflow.org/tutorials/load_data/tfrecord) explain more about reading `Example` protos from TFRecord files).
# + id="x2Q0g3fBj2kD" colab_type="code" cellView="code" colab={}
def parse_tfrecord(example_proto):
"""The parsing function.
  Read a serialized example into the structure defined by features_dict.
Args:
example_proto: a serialized Example.
Returns:
A tuple of the predictors dictionary and the label, cast to an `int32`.
"""
parsed_features = tf.io.parse_single_example(example_proto, features_dict)
labels = parsed_features.pop(LABEL)
return parsed_features, tf.cast(labels, tf.int32)
# Map the function over the dataset.
parsed_dataset = train_dataset.map(parse_tfrecord, num_parallel_calls=5)
# Print the first parsed record to check.
pprint(iter(parsed_dataset).next())
# + [markdown] id="Nb8EyNT4Xnhb" colab_type="text"
# Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands for keys and the numeric value of the bands for values. The second element of the tuple is a class label.
# + [markdown] id="xLCsxWOuEBmE" colab_type="text"
# ## Create additional features
#
# Another thing we might want to do as part of the input process is to create new features, for example NDVI, a vegetation index computed from reflectance in two spectral bands. Here are some helper functions for that.
# + id="lT6v2RM_EB1E" colab_type="code" cellView="code" colab={}
def normalized_difference(a, b):
"""Compute normalized difference of two inputs.
  Compute (a - b) / (a + b). If the denominator is zero, add a small delta.
Args:
a: an input tensor with shape=[1]
b: an input tensor with shape=[1]
Returns:
The normalized difference as a tensor.
"""
nd = (a - b) / (a + b)
nd_inf = (a - b) / (a + b + 0.000001)
return tf.where(tf.is_finite(nd), nd, nd_inf)
def add_NDVI(features, label):
"""Add NDVI to the dataset.
Args:
features: a dictionary of input tensors keyed by feature name.
label: the target label
Returns:
A tuple of the input dictionary with an NDVI tensor added and the label.
"""
features['NDVI'] = normalized_difference(features['B5'], features['B4'])
return features, label
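# The `normalized_difference` helper above guards against a zero denominator by
# falling back to a small delta. A quick sanity check of that logic in pure NumPy,
# independent of the TensorFlow graph (the 1e-6 delta matches the helper's fallback):

```python
import numpy as np

def safe_normalized_difference(a, b, delta=1e-6):
    """Compute (a - b) / (a + b); where a + b == 0, add a small delta instead."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    # Substitute the delta-padded denominator only where the plain one is zero.
    denom = np.where(a + b == 0, a + b + delta, a + b)
    return (a - b) / denom

print(safe_normalized_difference(0.5, 0.3))  # ~0.25 (floating point)
print(safe_normalized_difference(0.0, 0.0))  # 0.0
```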
# + [markdown] id="nEx1RAXOZQkS" colab_type="text"
# # Model setup
#
# The basic workflow for classification in TensorFlow is:
#
# 1. Create the model.
# 2. Train the model (i.e. `fit()`).
# 3. Use the trained model for inference (i.e. `predict()`).
#
# Here we'll create a `Sequential` neural network model using Keras. This simple model is inspired by examples in:
#
# * [The TensorFlow Get Started tutorial](https://www.tensorflow.org/tutorials/)
# * [The TensorFlow Keras guide](https://www.tensorflow.org/guide/keras#build_a_simple_model)
# * [The Keras `Sequential` model examples](https://keras.io/getting-started/sequential-model-guide/#multilayer-perceptron-mlp-for-multi-class-softmax-classification)
#
# Note that the model used here is purely for demonstration purposes and hasn't gone through any performance tuning.
# + [markdown] id="t9pWa54oG-xl" colab_type="text"
# ## Create the Keras model
#
# Before we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See [the Keras loss function docs](https://keras.io/losses/), [the TensorFlow categorical identity docs](https://www.tensorflow.org/guide/feature_columns#categorical_identity_column) and [the `tf.one_hot` docs](https://www.tensorflow.org/api_docs/python/tf/one_hot) for details).
#
# Here we will use a simple neural network model with a 64 node hidden layer, a dropout layer and an output layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See [the Keras `Sequential` model guide](https://keras.io/getting-started/sequential-model-guide/) for more details.
# + id="OCZq3VNpG--G" colab_type="code" cellView="code" colab={}
from tensorflow import keras
# Add NDVI.
input_dataset = parsed_dataset.map(add_NDVI)
# Keras requires inputs as a tuple. Note that the inputs must be in the
# right shape. Also note that to use the categorical_crossentropy loss,
# the label needs to be turned into a one-hot vector.
def to_tuple(inputs, label):
  # Avoid shadowing the builtin `dict`; `inputs` is the parsed feature dictionary.
  return tf.transpose(list(inputs.values())), tf.one_hot(indices=label, depth=N_CLASSES)
# Map the to_tuple function, shuffle and batch.
shuffle_size = training.size().getInfo()
input_dataset = input_dataset.map(to_tuple).shuffle(shuffle_size).batch(8)
# Define the layers in the model.
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(64, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(N_CLASSES, activation=tf.nn.softmax)
])
# Compile the model with the specified loss function.
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Fit the model to the training data.
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(x=input_dataset, epochs=10)
# + [markdown] id="Pa4ex_4eKiyb" colab_type="text"
# ## Check model accuracy on the test set
#
# Now that we have a trained model, we can evaluate it using the test dataset. To do that, read and prepare the test dataset in the same way as the training dataset. Here we specify a batch size of 1 so that each example in the test set is used exactly once to compute model accuracy. For model steps, just specify a number larger than the test dataset size (ignore the warning).
# + id="tE6d7FsrMa1p" colab_type="code" cellView="code" colab={}
test_dataset = (
tf.data.TFRecordDataset(TEST_FILE_PATH, compression_type='GZIP')
.map(parse_tfrecord, num_parallel_calls=5)
.map(add_NDVI)
.map(to_tuple)
.batch(1))
model.evaluate(test_dataset)
# + [markdown] id="nhHrnv3VR0DU" colab_type="text"
# # Use the trained model to classify an image from Earth Engine
#
# Now it's time to classify the image that was exported from Earth Engine. If the exported image is large, it will be split into multiple TFRecord files in its destination folder. There will also be a JSON sidecar file called "the mixer" that describes the format and georeferencing of the image. Here we will find the image files and the mixer file, getting some info out of the mixer that will be useful during model inference.
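# For orientation, the mixer is a small JSON object. The dictionary below is a
# hand-written stand-in (the coordinate and patch-count values are made up for
# illustration, not taken from a real export) showing the fields used later:
# `patchDimensions`, `totalPatches`, and the georeferencing under `projection`.

```python
# Hypothetical mixer contents; field values are illustrative only.
example_mixer = {
    'patchDimensions': [256, 256],   # width, height of each exported patch
    'patchesPerRow': 8,              # how patches tile the export region
    'totalPatches': 40,              # number of patches across all files
    'projection': {
        'crs': 'EPSG:4326',
        'affine': {'doubleMatrix': [0.00026, 0.0, -122.5, 0.0, -0.00026, 38.0]},
    },
}

patch_width, patch_height = example_mixer['patchDimensions']
print(patch_width * patch_height)  # pixels per patch
```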
# + [markdown] id="nmTayDitZgQ5" colab_type="text"
# ## Find the image files and JSON mixer file in Cloud Storage
#
# Use `gsutil` to locate the files of interest in the output Cloud Storage bucket. Check to make sure your image export task finished before running the following.
# + id="oUv9WMpcVp8E" colab_type="code" colab={}
# Get a list of all the files in the output bucket.
# files_list = !gsutil ls 'gs://'{OUTPUT_BUCKET}
# Get only the files generated by the image export.
exported_files_list = [s for s in files_list if IMAGE_FILE_PREFIX in s]
# Get the list of image files and the JSON mixer file.
image_files_list = []
json_file = None
for f in exported_files_list:
if f.endswith('.tfrecord.gz'):
image_files_list.append(f)
elif f.endswith('.json'):
json_file = f
# Make sure the files are in the right order.
image_files_list.sort()
pprint(image_files_list)
print(json_file)
# + [markdown] id="RcjYG9fk53xL" colab_type="text"
# ## Read the JSON mixer file
#
# The mixer contains metadata and georeferencing information for the exported patches, each of which is in a different file. Read the mixer to get some information needed for prediction.
# + id="Gn7Dr0AAd93_" colab_type="code" colab={}
import json
# Load the contents of the mixer file to a JSON object.
# json_text = !gsutil cat {json_file}
# Get a single string w/ newlines from the IPython.utils.text.SList
mixer = json.loads(json_text.nlstr)
pprint(mixer)
# + [markdown] id="6xyzyPPJwpVI" colab_type="text"
# ## Read the image files into a dataset
#
# You can feed the list of files (`image_files_list`) directly to the `TFRecordDataset` constructor to make a combined dataset on which to perform inference. The input needs to be preprocessed differently than the training and testing data. Mainly, this is because the pixels are written into records as patches; we need to read the patches in as one big tensor (one patch for each band), then flatten them into lots of little tensors.
# + id="tn8Kj3VfwpiJ" colab_type="code" cellView="code" colab={}
# Get relevant info from the JSON mixer file.
patch_width = mixer['patchDimensions'][0]
patch_height = mixer['patchDimensions'][1]
patches = mixer['totalPatches']
patch_dimensions_flat = [patch_width * patch_height, 1]
# Note that the tensors are in the shape of a patch, one patch for each band.
image_columns = [
tf.FixedLenFeature(shape=patch_dimensions_flat, dtype=tf.float32)
for k in BANDS
]
# Parsing dictionary.
image_features_dict = dict(zip(BANDS, image_columns))
# Note that you can make one dataset from many files by specifying a list.
image_dataset = tf.data.TFRecordDataset(image_files_list, compression_type='GZIP')
# Parsing function.
def parse_image(example_proto):
return tf.parse_single_example(example_proto, image_features_dict)
# Parse the data into tensors, one long tensor per patch.
image_dataset = image_dataset.map(parse_image, num_parallel_calls=5)
# Break our long tensors into many little ones.
image_dataset = image_dataset.flat_map(
lambda features: tf.data.Dataset.from_tensor_slices(features)
)
# Add additional features (NDVI).
image_dataset = image_dataset.map(
# Add NDVI to a feature that doesn't have a label.
lambda features: add_NDVI(features, None)[0]
)
# Turn the dictionary in each record into a tuple without a label.
image_dataset = image_dataset.map(
lambda data_dict: (tf.transpose(list(data_dict.values())), )
)
# Turn each patch into a batch.
image_dataset = image_dataset.batch(patch_width * patch_height)
# + [markdown] id="_2sfRemRRDkV" colab_type="text"
# ## Generate predictions for the image pixels
#
# To get predictions in each pixel, run the image dataset through the trained model using `model.predict()`. Print the first prediction to see that the output is a list of the three class probabilities for each pixel. Running all predictions might take a while.
# + id="8VGhmiP_REBP" colab_type="code" colab={}
# Run prediction in batches, with as many steps as there are patches.
predictions = model.predict(image_dataset, steps=patches, verbose=1)
# Note that the predictions come as a numpy array. Check the first one.
print(predictions[0])
# + [markdown] id="bPU2VlPOikAy" colab_type="text"
# ## Write the predictions to a TFRecord file
#
# Now that there's a list of class probabilities in `predictions`, it's time to write them back into a file, optionally including a class label which is simply the index of the maximum probability. We'll write directly from TensorFlow to a file in the output Cloud Storage bucket.
#
# Iterate over the list, compute class label and write the class and the probabilities in patches. Specifically, we need to write the pixels into the file as patches in the same order they came out. The records are written as serialized `tf.train.Example` protos. This might take a while.
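# The per-pixel class label written below is just the index of the maximum
# probability. A minimal NumPy sketch of that step, using made-up probability
# vectors in the (bare, vegetation, water) order of this example:

```python
import numpy as np

# Three hypothetical per-pixel probability vectors (bare, vegetation, water).
probabilities = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
])

# Class label per pixel = index of the maximum probability.
labels = np.argmax(probabilities, axis=1)
print(labels)  # [0 1 2]
```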
# + id="AkorbsEHepzJ" colab_type="code" colab={}
print('Writing to file ' + OUTPUT_IMAGE_FILE)
# + id="kATMknHc0qeR" colab_type="code" cellView="code" colab={}
# Instantiate the writer.
writer = tf.python_io.TFRecordWriter(OUTPUT_IMAGE_FILE)
# Every patch-worth of predictions we'll dump an example into the output
# file with a single feature that holds our predictions. Since our predictions
# are already in the order of the exported data, the patches we create here
# will also be in the right order.
patch = [[], [], [], []]
cur_patch = 1
import numpy as np

for prediction in predictions:
  # Each row of `predictions` is a 1-D array of the three class probabilities.
  # The class label is the index of the max probability.
  patch[0].append(int(np.argmax(prediction)))
  patch[1].append(float(prediction[0]))
  patch[2].append(float(prediction[1]))
  patch[3].append(float(prediction[2]))
# Once we've seen a patches-worth of class_ids...
if (len(patch[0]) == patch_width * patch_height):
print('Done with patch ' + str(cur_patch) + ' of ' + str(patches) + '...')
# Create an example
example = tf.train.Example(
features=tf.train.Features(
feature={
'prediction': tf.train.Feature(
int64_list=tf.train.Int64List(
value=patch[0])),
'bareProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[1])),
'vegProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[2])),
'waterProb': tf.train.Feature(
float_list=tf.train.FloatList(
value=patch[3])),
}
)
)
# Write the example to the file and clear our patch array so it's ready for
# another batch of class ids
writer.write(example.SerializeToString())
patch = [[], [], [], []]
cur_patch += 1
writer.close()
# + [markdown] id="1K_1hKs0aBdA" colab_type="text"
# # Upload the classifications to an Earth Engine asset
# + [markdown] id="M6sNZXWOSa82" colab_type="text"
# ## Verify the existence of the predictions file
#
# At this stage, there should be a predictions TFRecord file sitting in the output Cloud Storage bucket. Use the `gsutil` command to verify that the predictions image (and associated mixer JSON) exist and have non-zero size.
# + id="6ZVWDPefUCgA" colab_type="code" colab={}
# !gsutil ls -l {OUTPUT_IMAGE_FILE}
# + [markdown] id="2ZyCo297Clcx" colab_type="text"
# ## Upload the classified image to Earth Engine
#
# Upload the image to Earth Engine directly from the Cloud Storage bucket with the [`earthengine` command](https://developers.google.com/earth-engine/command_line#upload). Provide both the image TFRecord file and the JSON file as arguments to `earthengine upload`.
# + id="NXulMNl9lTDv" colab_type="code" cellView="code" colab={}
print('Uploading to ' + OUTPUT_ASSET_ID)
# + id="V64tcVxsO5h6" colab_type="code" colab={}
# Start the upload.
# !earthengine upload image --asset_id={OUTPUT_ASSET_ID} {OUTPUT_IMAGE_FILE} {json_file}
# + [markdown] id="Yt4HyhUU_Bal" colab_type="text"
# ## Check the status of the asset ingestion
#
# You can also use the Earth Engine API to check the status of your asset upload. It might take a while. The upload of the image is an asset ingestion task.
# + id="_vB-gwGhl_3C" colab_type="code" cellView="code" colab={}
ee.batch.Task.list()
# + [markdown] id="vvXvy9GDhM-p" colab_type="text"
# ## View the ingested asset
#
# Display the vector of class probabilities as an RGB image with colors corresponding to the probability of bare, vegetation, water in a pixel. Also display the winning class using the same color palette.
# + id="kEkVxIyJiFd4" colab_type="code" colab={}
predictions_image = ee.Image(OUTPUT_ASSET_ID)
prediction_vis = {
'bands': 'prediction',
'min': 0,
'max': 2,
'palette': ['red', 'green', 'blue']
}
probability_vis = {'bands': ['bareProb', 'vegProb', 'waterProb'], 'max': 0.5}
prediction_map_id = predictions_image.getMapId(prediction_vis)
probability_map_id = predictions_image.getMapId(probability_vis)
map = folium.Map(location=[37.6413, -122.2582])
folium.TileLayer(
tiles=prediction_map_id['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='prediction',
).add_to(map)
folium.TileLayer(
tiles=probability_map_id['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='probability',
).add_to(map)
map.add_child(folium.LayerControl())
map
# File: python/examples/ipynb/TF_demo1_keras.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# language: python
# name: pyspark
# ---
# +
# # If you haven't already, install the following packages:
# # !pip install cmake
# # !pip install xgboost
# # !pip install sklearn
# # !pip install sklearn-deap
# # (needed to use EvolutionarySearch >> https://github.com/rsteca/sklearn-deap)
# # !pip install scikit-plot
# +
import multiprocessing
print(multiprocessing.cpu_count())
# +
import pandas as pd
import pyspark.sql.functions as F
from datetime import datetime
from pyspark.sql.types import *
from pyspark import StorageLevel
import numpy as np
pd.set_option("display.max_rows", 1000)
pd.set_option("display.max_columns", 1000)
pd.set_option("mode.chained_assignment", None)
# +
import xgboost as xgb #XGBClassifier
from sklearn.model_selection import train_test_split, StratifiedKFold
import matplotlib.pylab as plt
from sklearn import metrics
from evolutionary_search import EvolutionaryAlgorithmSearchCV
import scikitplot as skplt
# -
import sklearn
import scikitplot as skplt
from sklearn.metrics import classification_report, confusion_matrix, precision_score
# <hr />
# <hr />
# <hr />
# +
# undersamp_col = ['02-KMODES', '03-STRSAMP-AG', '04-STRSAMP-EW']
# dfs = ['ds-1', 'ds-2', 'ds-3']
# cols_sets = ['cols_set_1', 'cols_set_2', 'cols_set_3']
undersamp_col = ['02-KMODES']
dfs = ['ds-1']
cols_sets = ['cols_set_2']
# +
# lists of params
model_MaxEstimators = [50]
model_maxDepth = [10]
list_of_param_dicts = []
for maxIter in model_MaxEstimators:
for maxDepth in model_maxDepth:
params_dict = {}
params_dict['MaxEstimators'] = maxIter
params_dict['maxDepth'] = maxDepth
list_of_param_dicts.append(params_dict)
print("There are {} sets of params.".format(len(list_of_param_dicts)))
# list_of_param_dicts
# -
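# The nested loops above enumerate the Cartesian product of the parameter lists.
# The same grid can be built more compactly with `itertools.product`; a sketch
# over the same single-element lists, so it yields one dict here:

```python
from itertools import product

model_MaxEstimators = [50]
model_maxDepth = [10]

# One dict per (n_estimators, max_depth) combination.
param_grid = [
    {'MaxEstimators': n, 'maxDepth': d}
    for n, d in product(model_MaxEstimators, model_maxDepth)
]
print(param_grid)  # [{'MaxEstimators': 50, 'maxDepth': 10}]
```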
prefix = 'gs://ai-covid19-datalake/trusted/experiment_map/'
# <hr />
# <hr />
# <hr />
# +
# filename = 'gs://ai-covid19-datalake/trusted/experiment_map/02-KMODES/ds-1/cols_set_1/experiment0.parquet'
# df = spark.read.parquet(filename).sample(0.3)
# +
# df = df.toPandas()
# +
# params_dict = {'MaxEstimators': 10,
# 'maxDepth': 3}
# cols = 'cols_set_1'
# experiment_filter = 'ds-1'
# undersampling_method = '03-STRSAMP-AG',
# experiment_id = 0
# +
# model = run_xgboost(df, params_dict, cols, filename, experiment_filter, undersampling_method, experiment_id)
# +
# print('finished')
# +
# model['model_time_exec']
# +
# model['model_AUC_PR']
# -
# <hr />
# <hr />
# <hr />
# +
# Ref: https://stackoverflow.com/questions/37292872/how-can-i-one-hot-encode-in-python
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
def run_xgboost(exp_df, params_dict, cols, filename, experiment_filter,
undersampling_method, experiment_id):
import time
start_time = time.time()
n_covid = len(exp_df[exp_df['CLASSI_FIN']==1.0])
n_not_covid = len(exp_df[exp_df['CLASSI_FIN']==0.0])
id_cols = ['NU_NOTIFIC', 'CLASSI_FIN']
for column in exp_df.columns:
exp_df[column] = exp_df[column].astype("category")
if column != "CLASSI_FIN":
exp_df = encode_and_bind(exp_df, column)
    # Select the variables to feed to the model
x = exp_df.drop("CLASSI_FIN", axis=1)
y = exp_df.CLASSI_FIN
X_train, X_test, y_train, y_test = train_test_split(x,
y,
test_size=0.3,
random_state=2021)
    # Build the model
model = xgb.XGBClassifier(objective="binary:logistic",
n_jobs = 30,
colsample_bytree = 0.3,
learning_rate=0.1,
max_depth= params_dict['maxDepth'],
n_estimators = params_dict['MaxEstimators'],
min_child_weight = 1,
subsample=0.5,
scale_pos_weight=2,
eval_metric="error",
booster='dart')
model.fit(X_train,y_train)
    # Predict on the test split
pred = model.predict(X_test)
# validation
fpr, tpr, thresholds_auc_roc = metrics.roc_curve(y_test, pred)
auc_ROC = metrics.auc(fpr, tpr)
precision, recall, thresholds = metrics.precision_recall_curve(y_test, pred)
aupr_ROC = metrics.auc(recall, precision)
de_para = {1.0: 'covid', 0.0: 'nao_covid'}
y_test = y_test.replace(de_para)
pred = pd.Series(pred).replace(de_para)
report = metrics.classification_report(y_test,pred,output_dict=True)
conf_matrix = metrics.confusion_matrix(y_test, pred)
    # Build the result metadata
result_dict = {}
result_dict['experiment_filter'] = experiment_filter
result_dict['undersampling_method'] = undersampling_method
result_dict['filename'] = filename
result_dict['experiment_id'] = experiment_id
result_dict['n_covid'] = n_covid
result_dict['n_not_covid'] = n_not_covid
result_dict['model_name'] = 'XGBoost'
result_dict['params'] = params_dict
result_dict['model_AUC_ROC'] = auc_ROC
result_dict['model_AUC_PR'] = aupr_ROC
result_dict['model_covid_precision'] = report['covid']['precision']
result_dict['model_covid_recall'] = report['covid']['recall']
result_dict['model_covid_f1'] = report['covid']['f1-score']
result_dict['model_not_covid_precision'] = report['nao_covid']['precision']
result_dict['model_not_covid_recall'] = report['nao_covid']['recall']
result_dict['model_not_covid_f1'] = report['nao_covid']['f1-score']
result_dict['model_avg_precision'] = report['macro avg']['precision']
result_dict['model_avg_recall'] = report['macro avg']['recall']
result_dict['model_avg_f1'] = report['macro avg']['f1-score']
result_dict['model_avg_acc'] = report['accuracy']
result_dict['model_TP'] = conf_matrix[0][0]
result_dict['model_TN'] = conf_matrix[1][1]
result_dict['model_FN'] = conf_matrix[0][1]
result_dict['model_FP'] = conf_matrix[1][0]
result_dict['model_time_exec'] = time.time() - start_time
result_dict['model_col_set'] = cols
return result_dict
# -
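# The `model_TP`/`model_TN`/`model_FN`/`model_FP` fields above rely on sklearn's
# `confusion_matrix` ordering rows and columns by sorted label, so with the labels
# 'covid' and 'nao_covid', row 0 and column 0 correspond to 'covid'. A pure-Python
# check of that counting convention on toy labels (no sklearn needed):

```python
# Toy true/predicted labels; 'covid' is treated as the positive class.
y_true = ['covid', 'covid', 'nao_covid', 'nao_covid', 'covid']
y_pred = ['covid', 'nao_covid', 'nao_covid', 'covid', 'covid']

labels = sorted(set(y_true))           # ['covid', 'nao_covid'], sklearn's ordering
index = {lab: i for i, lab in enumerate(labels)}

# matrix[i][j] = count of samples with true label i predicted as label j.
matrix = [[0, 0], [0, 0]]
for t, p in zip(y_true, y_pred):
    matrix[index[t]][index[p]] += 1

tp, fn = matrix[0][0], matrix[0][1]    # true covid: predicted covid / nao_covid
fp, tn = matrix[1][0], matrix[1][1]    # true nao_covid: predicted covid / nao_covid
print(tp, tn, fn, fp)  # 2 1 1 1
```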
# <hr />
# <hr />
# <hr />
experiments = []
# ### Datasets:
for uc in undersamp_col:
for ds in dfs:
for col_set in cols_sets:
for params_dict in list_of_param_dicts:
for id_exp in range(5):
filename = prefix + uc + '/' + ds + '/' + col_set + '/' + 'experiment' + str(id_exp) + '.parquet'
exp_dataframe = spark.read.parquet(filename)
exp_dataframe = exp_dataframe.toPandas()
print('read {}'.format(filename))
undersampling_method = uc
experiment_filter = ds
experiment_id = id_exp
try:
model = run_xgboost(exp_dataframe, params_dict, col_set, filename, experiment_filter, undersampling_method, experiment_id)
experiments.append(model)
print("Parameters ==> {}\n Results: \n AUC_PR: {} \n Precision: {} \n Time: {}".format(str(params_dict), str(model['model_AUC_PR']), str(model['model_avg_precision']), str(model['model_time_exec'])))
print('=========================== \n')
                    except Exception as e:
                        print('=========== W A R N I N G =========== \n')
                        print('Something went wrong with the exp: {}, {}, {} ({})'.format(filename, params_dict, col_set, e))
for i in range(len(experiments)):
for d in list(experiments[i].keys()):
experiments[i][d] = str(experiments[i][d])
# +
# experiments
# -
cols = ['experiment_filter', 'undersampling_method', 'filename', 'experiment_id', 'n_covid', 'n_not_covid', 'model_name', 'params', 'model_AUC_ROC', 'model_AUC_PR', 'model_covid_precision', 'model_covid_recall', 'model_covid_f1', 'model_not_covid_precision', 'model_not_covid_recall', 'model_not_covid_f1', 'model_avg_precision', 'model_avg_recall', 'model_avg_f1', 'model_avg_acc', 'model_TP', 'model_TN', 'model_FN', 'model_FP', 'model_time_exec', 'model_col_set']
intermed_results = spark.createDataFrame(data=experiments).select(cols)
intermed_results.toPandas()
intermed_results.write.parquet('gs://ai-covid19-datalake/trusted/intermed_results/KMODES/XGBOOST_experiments-kmodes-ds1-cs2.parquet', mode='overwrite')
print('finished')
intermed_results.show()
# File: 05-models/02-experiment-design/07.3-xgboost_model_kmodes-ds1-cs2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Exploring the Movie Dataset
#
# In this project, you will apply what you have learned, using functions from the `NumPy`, `Pandas`, `matplotlib`, and `seaborn` libraries to explore a movie dataset.
#
# Download the dataset:
# [TMDb movie data](https://s3.cn-north-1.amazonaws.com.cn/static-documents/nd101/explore+dataset/tmdb-movies.csv)
#
#
# Meaning of each column in the dataset:
# <table>
# <thead><tr><th>Column</th><th>id</th><th>imdb_id</th><th>popularity</th><th>budget</th><th>revenue</th><th>original_title</th><th>cast</th><th>homepage</th><th>director</th><th>tagline</th><th>keywords</th><th>overview</th><th>runtime</th><th>genres</th><th>production_companies</th><th>release_date</th><th>vote_count</th><th>vote_average</th><th>release_year</th><th>budget_adj</th><th>revenue_adj</th></tr></thead><tbody>
# <tr><td>Meaning</td><td>ID</td><td>IMDB ID</td><td>popularity</td><td>budget</td><td>revenue</td><td>title</td><td>cast</td><td>homepage</td><td>director</td><td>tagline</td><td>keywords</td><td>overview</td><td>runtime</td><td>genres</td><td>production companies</td><td>release date</td><td>vote count</td><td>vote average</td><td>release year</td><td>budget (adjusted)</td><td>revenue (adjusted)</td></tr>
# </tbody></table>
#
# **Note: you need to submit the `.html`, `.ipynb`, and `.py` files exported from this report.**
#
#
# ---
#
# ---
#
# ## Section 1: Importing and Processing the Data
#
# In this part, you need to write code that uses Pandas to read the data and preprocess it.
#
# **Task 1.1:** Import the libraries and the data
#
# 1. Load the required libraries: `NumPy`, `Pandas`, `matplotlib`, `seaborn`.
# 2. Use `Pandas` to read the data in `tmdb-movies.csv` and save it as `movie_data`.
#
# Hint: remember to use the notebook magic command `%matplotlib inline`, otherwise you will not be able to display the plots below.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
movie_data = pd.read_csv('tmdb-movies.csv')
# ---
#
# **Task 1.2:** Get to know the data
#
# You will encounter all kinds of data tables, so after reading one in, it is worth using a few simple methods to understand what it looks like.
#
# 1. Get the number of rows and columns of the table and print them.
# 2. Use the `.head()`, `.tail()`, and `.sample()` methods to inspect the table.
# 3. Use the `.dtypes` attribute to check the data type of each column.
# 4. Use `isnull()` together with methods such as `.any()` to check whether each column contains missing values.
# 5. Use the `.describe()` method to see how the numeric columns are distributed.
#
#
print('movie data has {} rows and {} columns'.format(*movie_data.shape))
movie_data.sample()
movie_data.head()
movie_data.tail()
movie_data.dtypes
print('NaNs in data')
movie_data.isnull().any()
movie_data.describe()
# ---
#
# **Task 1.3:** Clean the data
#
# In real-world work, data processing is often the most time-consuming and laborious part. Fortunately, the TMDb dataset we provide is quite "clean" and does not require much cleaning or processing. In this step, your main job is to handle the missing values in the table. You can use `.fillna()` to fill them in, or `.dropna()` to drop the rows or columns that contain them.
#
# Task: clean the missing values with an appropriate method and save the result.
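# The trade-off between `.fillna()` and `.dropna()` can be seen on a toy frame
# (a small pandas sketch with made-up values, independent of the movie data):

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': ['x', 'y', None]})

# .fillna() keeps every row, substituting a placeholder for the holes...
filled = toy.fillna({'a': 0.0, 'b': 'missing'})
print(filled.isnull().sum().sum())  # 0

# ...while .dropna() discards any row that still contains a hole.
dropped = toy.dropna()
print(len(dropped))  # 1
```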
print('missing values of each column')
fields = movie_data.isnull().sum().sort_values(ascending=False)
missing_fields = fields[fields > 0]
base_color = sns.color_palette()[0]
sns.barplot(missing_fields, missing_fields.index.values, color=base_color)
# As the plot above shows, the homepage column has the most missing values, followed by tagline and then keywords. Since homepage and imdb_id are of little use for the later analysis, we can drop those two columns; for the other potentially useful columns, the missing values are simply marked as 'missing'.
# homepage has the most missing entries and is not useful later; neither is imdb_id, so drop both columns
movie_data_cleaned = movie_data.drop(['imdb_id', 'homepage'], axis=1)
# Mark missing tagline, keywords, and production_companies entries as 'missing'
value = {
'tagline': 'missing',
'keywords': 'missing',
'production_companies': 'missing',
}
movie_data_cleaned.fillna(value=value, inplace=True)
# Drop the remaining few rows with missing values so they do not affect the later analysis
movie_data_cleaned.dropna(inplace=True)
print('movie data after cleaning has {} rows and {} columns'.format(*movie_data_cleaned.shape))
movie_data_cleaned.isnull().sum()
# ---
#
# ---
#
# ## Section 2: Reading Data by Specific Requirements
#
#
# Compared with data-analysis software such as Excel, one of Pandas' great strengths is that it can easily select the right data based on complex logic. Knowing how to retrieve exactly the data you need from a table is therefore one of the most important Pandas skills, and the focus of this section.
#
#
# ---
#
# **Task 2.1:** Simple selection
#
# 1. Read the columns named `id`, `popularity`, `budget`, `runtime`, and `vote_average`.
# 2. Read rows 1-20 plus rows 48 and 49.
# 3. Read the `popularity` column for rows 50-60.
#
# Requirement: implement each statement in a single line of code.
movie_data_cleaned[['id', 'popularity', 'budget', 'runtime', 'vote_average']].head()
movie_data_cleaned.iloc[list(range(20)) + [48, 49]]
movie_data_cleaned.iloc[list(range(50, 61))]['popularity']
# ---
#
# **Task 2.2:** Logical indexing
#
# 1. Read all rows where **`popularity` is greater than 5**.
# 2. Read all rows where **`popularity` is greater than 5** and the **release year is after 1996**.
#
# Hint: Pandas' logical operators `&` and `|` stand for `and` and `or` respectively.
#
# Requirement: implement this with logical indexing.
movie_data_cleaned[movie_data_cleaned['popularity'] > 5]
movie_data_cleaned[(movie_data_cleaned['popularity'] > 5) & (movie_data_cleaned['release_year'] > 1996)]
# ---
#
# **Task 2.3:** Grouped selection
#
# 1. Group by `release_year` and use [`.agg`](http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) to compute the mean of `revenue`.
# 2. Group by `director` and use [`.agg`](http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) to compute the mean of `popularity`, sorted from high to low.
#
# Requirement: implement this with `groupby`.
movie_data_cleaned.groupby('release_year')['revenue'].agg(['mean'])
movie_data_cleaned.groupby('director')['popularity'].agg('mean').sort_values(ascending=False)
# ---
#
# ---
#
# ## Section 3: Plotting and Visualization
#
# Next, try plotting and visualizing your data. The key skill in this section is picking a chart that suits a specific visualization goal: the information or change you hope to observe, such as how box office evolves over time or which director is the most popular.
#
# <table>
# <thead><tr><th>Visualization goal</th><th>Suitable chart types</th></tr></thead><tbody>
# <tr><td>Show the distribution of one attribute</td><td>pie chart, histogram, scatter plot</td></tr>
# <tr><td>Show how one attribute changes with some variable</td><td>bar chart, line chart, heatmap</td></tr>
# <tr><td>Compare relationships among several attributes</td><td>scatter plot, violin plot, stacked bar chart, stacked line chart</td></tr>
# </tbody></table>
#
# In this part, choose an appropriate chart for each question, draw it, and analyze the result. The optional tasks are somewhat harder; feel free to take on the challenge.
# **Task 3.1:** plot the `popularity` values of the 20 most popular movies.
movies_top_20_pop = movie_data_cleaned.sort_values(by=['popularity'], ascending=False).head(20)
base_color = sns.color_palette()[0]
sns.barplot(data=movies_top_20_pop, x='popularity', y='original_title', color=base_color)
plt.xlabel('Popularity')
plt.ylabel('Title');
# A bar chart is used here to show the 20 most popular movies: movie titles are a nominal rather than an ordinal scale, so the bars are sorted by popularity instead. The most popular movie is Jurassic World, followed by Mad Max, with Interstellar in third place.
# ---
# **Task 3.2:** analyze how movie net profit (revenue minus budget) changes over the years, and comment briefly.
movie_data_cleaned['net_profit'] = movie_data_cleaned['revenue'] - movie_data_cleaned['budget']
cnt_movies_by_year = movie_data_cleaned['original_title'].groupby(movie_data_cleaned['release_year']).count()
cnt_movies_by_year.plot()
plt.xlabel('Year')
plt.ylabel('Movies Released');
total_profit_by_year = movie_data_cleaned['net_profit'].groupby(movie_data_cleaned['release_year']).sum()
total_profit_by_year.plot()
plt.xlabel('Year')
plt.ylabel('Total Profit');
net_profit_by_year = movie_data_cleaned.groupby(['release_year'])['net_profit'].agg('mean')
net_profit_by_year.plot(kind='line')
plt.xlabel('Year')
plt.ylabel('Avg Profit');
# Both the number of releases and the total profit climb steadily year over year, especially after 2000. The average profit, however, dips in some years: when few movies are released in a year, each one sways the average profit more; when many are released, each one sways it less.
# ---
#
# **[Optional] Task 3.3:** pick the 10 most prolific directors (those with the most movies), plot the box office of each one's top 3 movies, and analyze briefly.
# +
movie_data_split = movie_data_cleaned['director'].str.split('|', expand=True).stack()\
.reset_index(level=0).set_index('level_0').rename(columns={0:'director'})\
.join(movie_data_cleaned[['revenue', 'original_title']])
top_10_directors = movie_data_split['original_title'].groupby(movie_data_split['director'])\
.count().sort_values(ascending=False)[:10].index
top_director_movies = movie_data_split[movie_data_split['director'].isin(top_10_directors)]
top_3_movies = top_director_movies.sort_values(by='revenue', ascending=False).groupby(['director']).head(3)
# -
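# The split/stack pattern above turns a pipe-delimited multi-director field into one row per director. A minimal, self-contained sketch of the same idea on toy data (not the movie dataset):

```python
import pandas as pd

# Toy frame with a pipe-delimited multi-value column
df = pd.DataFrame({'director': ['A|B', 'C'], 'revenue': [10, 20]})
exploded = df['director'].str.split('|', expand=True).stack()\
    .reset_index(level=1, drop=True).rename('director').to_frame()\
    .join(df[['revenue']])
# Each director now has its own row, with the film's revenue repeated
print(exploded)
```

Joining back on the original index is what duplicates `revenue` across the split-out director rows.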
def plot_top_3_movies(data, directors):
plt.figure(figsize=(20, 40))
for index, director in enumerate(directors):
plt.subplot(len(directors), 1, index+1)
dd = data[data['director'] == director]
plt.bar(x=dd['original_title'], height=dd['revenue'])
plt.ylabel(director)
plot_top_3_movies(top_3_movies, top_10_directors.values)
# Among the 10 most prolific directors, the top three are <NAME>, <NAME> and <NAME>. For most directors the box-office gap between their top 3 movies is fairly large: the higher the rank, the higher the movie's revenue.
# ---
#
# **[Optional] Task 3.4:** analyze how the number of movies released each June changed from 1968 to 2015.
after1968 = movie_data_cleaned['release_year'] >= 1968
before2015 = movie_data_cleaned['release_year'] <= 2015
inJune = movie_data_cleaned['release_date'].str.startswith('6/')
movies_in_year = movie_data_cleaned[(after1968) & (before2015) & (inJune)]
# movies_in_year.groupby(['release_year'])['original_title'].count().plot(kind='bar')
plt.figure(figsize=(10, 8))
sns.countplot(data=movies_in_year, y='release_year', color=base_color)
# The chart shows that the number of movies released each June trended upward overall from 1968 to 2015, with a small dip in the 1990s and a larger surge after the millennium.
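# The `str.startswith('6/')` filter above relies on the dates being M/D/YYYY strings; parsing the dates is more robust if the format ever varies. A sketch of the alternative on toy dates (not the real column):

```python
import pandas as pd

# Parsing beats prefix matching when date strings are not uniform
dates = pd.Series(['6/12/1998', '12/6/2001', '6/1/2015'])
months = pd.to_datetime(dates, format='%m/%d/%Y').dt.month
in_june = months == 6
print(in_june.sum())
```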
# ---
#
# **[Optional] Task 3.5:** analyze how the June release counts of `Comedy` and `Drama` movies changed from 1968 to 2015.
comedy = movies_in_year['genres'].str.contains('Comedy')
drama = movies_in_year['genres'].str.contains('Drama')
movies_drama = movies_in_year[drama]
movies_comedy = movies_in_year[comedy]
plt.figure(figsize=(10, 8))
sns.countplot(data=movies_comedy, y='release_year', color=base_color)
plt.xlabel('Comedy Movies');
# June releases of comedies trended upward overall between 1968 and 2015, with a small burst in the 1980s; after 2000, more than 10 comedies came out every June.
plt.figure(figsize=(10, 8))
sns.countplot(data=movies_drama, y='release_year', color=base_color)
plt.xlabel('Drama Movies');
# June releases of dramas also trended upward overall between 1968 and 2015; from 1999 on, the June drama count grew substantially, mostly exceeding 10 per year.
# > Note: once you have finished all the code and answered all the questions, export this iPython Notebook as an HTML file via the menu: **File -> Download as -> HTML (.html), Python (.py)**, then submit the exported HTML and Python files together with this iPython notebook to the reviewer.
|
P2_ExploreMovieDataset/.ipynb_checkpoints/Explore Movie Dataset-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: firstEnv
# language: python
# name: firstenv
# ---
# # Visualising a few heatmaps
# +
from torch.utils.data import DataLoader
from dataloaders.datasets import surfrider
from dataloaders.utils import decode_segmap
import matplotlib.pyplot as plt
from modeling.deeplab import *
import numpy as np
import argparse
import torch
from matplotlib.pyplot import figure
class Args:
def __init__(self, **kwds):
self.__dict__.update(kwds)
# parser = argparse.ArgumentParser()
args = Args(base_size = 513,
crop_size = 513,
out_stride = 16,
sync_bn = True,
freeze_bn = False,
resume = '/home/mathis/Documents/repos/pytorch-deeplab-xception/weights/best_trained_surfrider_only.pth.tar',
cuda = True)
# -
# ### Loading trained model
model = DeepLab(num_classes=4,
backbone='mobilenet',
output_stride=args.out_stride,
sync_bn=args.sync_bn,
freeze_bn=args.freeze_bn)
checkpoint = torch.load(args.resume)
model.load_state_dict(checkpoint['state_dict'])
# ## From TACO directly
# ### Creating dataloader
val_set = surfrider.SURFSegmentation(args, split='val')
val_loader = DataLoader(val_set, batch_size=2, shuffle=False, num_workers=0)
# ### Plotting for a few images
# +
for ii, sample in enumerate(val_loader):
img = sample['image']
gt = sample['label'].numpy()
with torch.no_grad():
output = model(img)
img = img.numpy()
for jj in range(sample["image"].size()[0]):
output_jj = output[jj,:,:,:]
output_jj = torch.nn.functional.softmax(output_jj,dim=0).numpy()
output_jj = np.sum(output_jj[1:,:,:],axis=0)
tmp = np.array(gt[jj]).astype(np.uint8)
segmap = decode_segmap(tmp, dataset='surfrider')
img_tmp = np.transpose(img[jj], axes=[1, 2, 0])
img_tmp *= (0.234, 0.220, 0.220)
img_tmp += (0.498, 0.470, 0.415)
img_tmp *= 255.0
img_tmp = img_tmp.astype(np.uint8)
plt.figure(figsize=(10,10))
plt.title('display')
plt.subplot(131)
plt.imshow(img_tmp)
plt.subplot(132)
plt.imshow(segmap)
plt.subplot(133)
plt.imshow(output_jj,vmin=0,vmax=1)
if ii == 1:
break
plt.show(block=True)
# -
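# The heatmap above is the summed softmax probability of all non-background classes. The same collapse in plain NumPy, on toy logits rather than model output:

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax along one axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8, 8))   # (classes, H, W); class 0 = background
probs = softmax(logits, axis=0)
heatmap = probs[1:].sum(axis=0)       # P(any non-background class) per pixel
# Foreground and background probabilities sum to 1 at every pixel
assert np.allclose(heatmap + probs[0], 1.0)
```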
# ## From video with overlayed TACO trash bottle
from PIL import Image
from torchvision import transforms
from torchvision.datasets import ImageFolder
# +
import os

def read_folder(input_path):
    # for now, read directly from images in folder; later from json outputs
    return [os.path.join(input_path, file) for file in sorted(os.listdir(input_path))]
preprocess = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.498, 0.470, 0.415], std=[0.234, 0.220, 0.220]),
])
dataset = ImageFolder('/home/mathis/Documents/repos/mot/notebooks/2nd_video_2fps_extracted/', transform=preprocess)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=False)
count=0
for ii, sample in enumerate(dataloader):
img = sample[0]
gt = sample[1].numpy()
with torch.no_grad():
output = model(img)
img = img.numpy()
for jj in range(sample[0].size()[0]):
output_jj = output[jj,:,:,:]
output_jj = torch.nn.functional.softmax(output_jj,dim=0).numpy()
output_jj = np.sum(output_jj[1:,:,:],axis=0)
img_tmp = np.transpose(img[jj], axes=[1, 2, 0])
img_tmp *= (0.234, 0.220, 0.220)
img_tmp += (0.498, 0.470, 0.415)
img_tmp *= 255.0
img_tmp = img_tmp.astype(np.uint8)
plt.figure(figsize=(15,15))
plt.title('display')
plt.subplot(121)
plt.imshow(img_tmp)
plt.subplot(122)
plt.imshow(output_jj,vmin=0,vmax=1)
plt.savefig('image_{}'.format(count))
count+=1
plt.show(block=True)
|
visualize_results_surfrider.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Regression
# +
import tensorflow as tf
import numpy as np
import math
import matplotlib.pyplot as plt
print(tf.__version__)
# -
# ## Generate Demo Data
# +
# Generate some house sizes between 1000 and 3500 (typical sq ft of a house)
num_house = 160
np.random.seed(58)
house_size = np.random.randint(low=1000, high=3500, size=num_house )
# Generate house prices from house size with a random noise added.
np.random.seed(58)
house_price = house_size * 100.0 + np.random.randint(low=20000, high=70000, size=num_house)
# Plot generated house and size
plt.plot(house_size, house_price, "bx") # bx = blue x
plt.ylabel("Price")
plt.xlabel("Size")
plt.show()
# -
# ## Split Data Into Training and Test
# +
# you need to normalize values to prevent under/overflows.
def normalize(array):
return (array - array.mean()) / array.std()
# define number of training samples, 0.7 = 70%. We can take the first 70% since the values are randomized
num_train_samples = int(math.floor(num_house * 0.7))
# define training data
train_house_size = np.asarray(house_size[:num_train_samples])
train_price = np.asarray(house_price[:num_train_samples])
train_house_size_norm = normalize(train_house_size)
train_price_norm = normalize(train_price)
# define test data
test_house_size = np.array(house_size[num_train_samples:])
test_house_price = np.array(house_price[num_train_samples:])
test_house_size_norm = normalize(test_house_size)
test_house_price_norm = normalize(test_house_price)
# Plot the graph
plt.rcParams["figure.figsize"] = (10,8)
plt.figure()
plt.ylabel("Price")
plt.xlabel("Size (sq.ft)")
plt.plot(train_house_size, train_price, 'go', label='Training data')
plt.plot(test_house_size, test_house_price, 'mo', label='Testing data')
plt.legend(loc='upper left')
plt.show()
# -
# ## Define Model
# +
# Set up the TensorFlow placeholders that get updated as we descend down the gradient
tf_house_size = tf.placeholder("float", name="house_size") # x / features
tf_price = tf.placeholder("float", name="price") # y_ / labels
# Define the variables holding the size_factor and price we set during training.
# We initialize them to some random values based on the normal distribution.
tf_size_factor = tf.Variable(np.random.randn(), name="size_factor") # weights
tf_price_offset = tf.Variable(np.random.randn(), name="price_offset") # bias
# -
# ### Hypothesis
# Define the operation for predicting values:
#
# $$price\_pred = size\_factor \times house\_size + price\_offset$$
#
# Notice the use of the TensorFlow add and multiply functions. These add the operations to the computation graph, AND the TensorFlow methods understand how to deal with Tensors. Therefore do not try to use numpy or other library methods.
tf_price_pred = tf_price_offset + tf_house_size * tf_size_factor
# Or: tf_price_pred = tf.add(tf.multiply(tf_size_factor, tf_house_size), tf_price_offset)
# ### Loss Function
# Define the loss function (how much error): mean squared error over the $m$ training samples
#
# $$loss = \frac{1}{2m}\sum_{i=1}^{m}\left(price\_pred_i - price_i\right)^2$$
loss = tf.reduce_sum(tf.pow(tf_price_pred - tf_price, 2)) / (2*num_train_samples)
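# The same loss written in plain NumPy, to make the 1/(2m) scaling concrete (toy values, not the housing data):

```python
import numpy as np

def half_mse(pred, y):
    # sum of squared errors divided by twice the sample count
    return np.sum((pred - y) ** 2) / (2 * len(y))

# errors are 0 and 2, so the loss is (0 + 4) / 4 = 1.0
assert half_mse(np.array([1.0, 2.0]), np.array([1.0, 0.0])) == 1.0
```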
# ### Gradient Descent
# Each training step moves the parameters a small step down the gradient of the loss:
#
# $$\theta \leftarrow \theta - \alpha \,\nabla_{\theta}\, loss$$
#
# where $\alpha$ is the learning rate.
# +
# Optimizer learning rate. The size of the steps down the gradient
learning_rate = 0.1
# Define a gradient descent optimizer that will minimize the loss defined above.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train = optimizer.minimize(loss)
# -
# ## Train Model
# +
# Initializing the variables
init = tf.global_variables_initializer()
size_factor = None
price_offset = None
# Launch the graph in the session
with tf.Session() as sess:
sess.run(init)
# set how often to display training progress and number of training iterations
display_every = 2
num_training_iter = 50
# keep iterating the training data
for iteration in range(num_training_iter):
# Fit all training data
for (x, y) in zip(train_house_size_norm, train_price_norm):
sess.run(train, feed_dict={tf_house_size: x, tf_price: y})
# Display current status
if (iteration + 1) % display_every == 0:
c = sess.run(loss, feed_dict={tf_house_size: train_house_size_norm, tf_price:train_price_norm})
print("iteration #:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(c), \
"size_factor=", sess.run(tf_size_factor), "price_offset=", sess.run(tf_price_offset))
print("Optimization Finished!")
training_cost = sess.run(loss, feed_dict={tf_house_size: train_house_size_norm, tf_price: train_price_norm})
size_factor = sess.run(tf_size_factor)
price_offset = sess.run(tf_price_offset)
print("Trained cost=", training_cost, "size_factor=", size_factor, "price_offset=", price_offset, '\n')
# Plot of training and test data, and learned regression
# get values used to normalized data so we can denormalize data back to its original scale
train_house_size_mean = train_house_size.mean()
train_house_size_std = train_house_size.std()
train_price_mean = train_price.mean()
train_price_std = train_price.std()
# Plot the graph
plt.rcParams["figure.figsize"] = (10,8)
plt.figure()
plt.ylabel("Price")
plt.xlabel("Size (sq.ft)")
plt.plot(train_house_size, train_price, 'go', label='Training data')
plt.plot(test_house_size, test_house_price, 'mo', label='Testing data')
plt.plot(train_house_size_norm * train_house_size_std + train_house_size_mean,
(size_factor * train_house_size_norm + price_offset) * train_price_std + train_price_mean,
label='Learned Regression')
plt.legend(loc='upper left')
plt.show()
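# Gradient descent should land near the closed-form least-squares fit. A quick cross-check on synthetic normalized data, using `np.polyfit` (separate from the TF graph, not the housing data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)                  # already roughly normalized feature
y = 0.9 * x + rng.normal(scale=0.1, size=200)
slope, intercept = np.polyfit(x, y, 1)    # closed-form degree-1 fit
# the recovered slope should sit close to the true 0.9, intercept near 0
assert abs(slope - 0.9) < 0.05
```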
|
solutions/notebooks/03 Linear Regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/john-s-butler-dit/CaseStudy_PredatorPrey/blob/master/NA3_Rabbit%20Foxes%20Myxomatosis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="4hga2Dl_WrDu"
# # Numerical Assignment 3
# Name and Student ID
# + [markdown] id="KFTOV4WJWrDy"
# ## Problem 1
# The code in Rabbit Foxes.ipynb solves a predator prey model for foxes (F) and rabbits (R) and plots the output.
#
# The system of differential equations is described by
#
# \begin{equation}
# \begin{array}{cl}
# \frac{d R}{dt}=a_{Birth} R-b_{Con}FR,\\
# \frac{d F}{dt}=-c_{Death}F+d_{Food} FR,\\
# \end{array}
# \end{equation}
#
# where
# * $ a_{Birth} = 1 $
# * $ b_{Con} = 0.1 $
# * $c_{Death} = 1.5$
# * $d_{Food} = 0.075$
#
#
#
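# A quick sanity check in plain Python, assuming the parameterization used in the code below (where the fox equation multiplies $d$ by $b$): the non-trivial equilibrium works out to $(c/(d\,b),\ a/b) = (200, 10)$, which is exactly initial condition i).

```python
# Parameters from the problem statement
a, b, c, d = 1.0, 0.1, 1.5, 0.075
# Non-trivial fixed point of the system as coded (note d is multiplied by b)
R_star, F_star = c / (d * b), a / b
# Both growth rates should vanish at the fixed point
dR = a * R_star - b * R_star * F_star
dF = -c * F_star + d * b * R_star * F_star
assert abs(dR) < 1e-9 and abs(dF) < 1e-9
```

This explains why trajectory i) stays put while the others orbit the fixed point.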
# + [markdown] id="rX39otV8bd76"
# ## Problem 1 code
# + id="DprAh29nWrDz"
# #!python
from numpy import *
import pylab as p
def plot_rabbit_fox(X0):
# Definition of parameters
a = 1
b = 0.1
c = 1.5
d = 0.075
def dX_dt(X, t=0):
""" Return the growth rate of fox and rabbit populations. """
return array([ a*X[0] - b*X[0]*X[1] ,
-c*X[1] + d*b*X[0]*X[1] ])
X_f0 = array([ 0. , 0.])
X_f1 = array([ c/(d*b), a/b])
# #!python
from scipy import integrate
t = linspace(0, 15, 1000) # time
X, infodict = integrate.odeint(dX_dt, X0, t, full_output=True)
infodict['message'] # >>> 'Integration successful.'
# #!python
rabbits, foxes = X.T
values = linspace(0.3, 0.9, 5) # position of X0 between X_f0 and X_f1
vcolors = p.cm.autumn_r(linspace(0.3, 1., len(values))) # colors for each trajectory
p.xkcd()
f1 = p.figure(figsize=(14,4))
p.subplot(121)
p.plot(t, rabbits, 'r-', label='Rabbits')
p.plot(t, foxes , 'b-', label='Foxes')
p.grid()
p.legend(loc='best')
p.xlabel('time')
p.ylabel('population')
p.title('Evolution of fox and rabbit populations')
p.subplot(122)
v=1
#-------------------------------------------------------
# plot trajectories
#for v, col in zip(values, vcolors):
# X0 = v * X_f1 # starting point
X = integrate.odeint( dX_dt, X0, t) # we don't need infodict here
p.plot( X[:,0], X[:,1], lw=3.5*v, color=vcolors[v,:], label='IC=(%.f, %.f)' % ( X0[0], X0[1]) )
#-------------------------------------------------------
# define a grid and compute direction at each point
ymax = p.ylim(ymin=0)[1] # get axis limits
xmax = p.xlim(xmin=0)[1]
nb_points = 20
x = linspace(0, xmax, nb_points)
y = linspace(0, ymax, nb_points)
X1 , Y1 = meshgrid(x, y) # create a grid
DX1, DY1 = dX_dt([X1, Y1]) # compute growth rate on the grid
M = (hypot(DX1, DY1)) # Norm of the growth rate
M[ M == 0] = 1. # Avoid zero division errors
DX1 /= M # Normalize each arrow
DY1 /= M
#-------------------------------------------------------
# Draw direction fields using matplotlib's quiver function.
# Normalized arrows are plotted, with colors conveying the growth speed.
p.title('Trajectories and direction fields')
Q = p.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=p.cm.jet)
p.xlabel('Number of rabbits')
p.ylabel('Number of foxes')
p.legend(bbox_to_anchor=(1.2, 1.0))
p.grid()
p.xlim(0, xmax)
p.ylim(0, ymax)
p.tight_layout()
# + [markdown] id="fWNS5WIhWrD0"
# ## Problem 1 Question
# From the output of the code write about the relationships between the foxes and rabbits for the different initial conditions.
#
# Run the code for the three initial conditions
#
# i) R(0)=200, F(0)=10
# + id="txJc_44KWrD1" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="aa241655-4413-4d47-fcdb-8d7d6e04a8b9"
INITIAL_CONDITION = array([200, 10])
plot_rabbit_fox(INITIAL_CONDITION)
# + [markdown] id="AqHwO9FqWrD2"
# ii) R(0)=80,F(0)=12,
#
# + id="FSpXn7c1WrD2"
## INSERT CODE FOR FOR ii) R(0)=80,F(0)=12
# + [markdown] id="3Ew1eaPTWrD2"
#
# iii) R(0)=20, F(0)=20,
#
# + id="ruMNtKF4WrD3"
## INSERT CODE FOR FOR iii) R(0)=20, F(0)=20.
# + [markdown] id="c_xzJveUWrD3"
# Describe the different plots for the three different initial conditions.
# + [markdown] id="hYd2z2iIWrD3"
# ## Problem 2
# The plot below shows the simulation of a predator prey model for foxes (F) and rabbits (R) in Ireland from 1950 to 1980. In 1954 the Irish government introduced myxomatosis (M) as a method of reducing the rabbit population. The following system of equations describes this relationship:
# \begin{equation}
# \begin{array}{cl}
# \frac{d R}{dt}= R-0.1FR-0.1R(M-1),\\
# \frac{d F}{dt}=-1.5F+0.075 FR,\\
# \frac{d M}{dt}=-M+0.1 MR.
# \end{array}
# \end{equation}
# with the initial conditions R(1950)=25 , F(1950)=5, M(1950)=0.
# + [markdown] id="PltddrlAdA0d"
# ## Problem 2 Code
# + id="3ZXh798dcg8u"
def myxomatosis_code():
# DEFINITION OF PARAMETERS
a = 1
b = 0.1
c = 1.5
d = 0.075
## TIME
N=10000
t_start=1950.0
t_end=1980.0
t = linspace(t_start, t_end, N) # time
# INITIAL CONDITIONS
rabbits=zeros(N)
foxes=zeros(N)
myxomatosis=ones(N)
rabbits[0]=25
foxes[0]=5 # F(1950)=5 per the problem statement
# EULER METHOD
h=(t_end-t_start)/N
for i in range (1,N):
rabbits[i]=rabbits[i-1]+h*rabbits[i-1]*(a-b*(foxes[i-1]+(myxomatosis[i-1]-1)))
foxes[i]=foxes[i-1]+h*foxes[i-1]*((-c+d*rabbits[i-1]))
if t[i]>1954:
myxomatosis[i]=myxomatosis[i-1]+h*myxomatosis[i-1]*(-1+0.1*rabbits[i-1])
p.xkcd()
f1 = p.figure(figsize=(14,4))
p.plot(t, rabbits, 'r-', label='Rabbits')
p.plot(t, foxes , 'b-', label='Foxes')
p.plot(t, myxomatosis, 'g', label='myxomatosis')
p.grid()
p.legend(loc='best')
p.ylim(-10, 50)
p.xlabel('time')
p.ylabel('population')
p.title('Evolution of fox and rabbit populations in Ireland')
p.show()
# + [markdown] id="joEAyPVxWrD3"
# ## Problem 2 Question
#
#
#
# + id="y4r7FVFqdSxW" colab={"base_uri": "https://localhost:8080/", "height": 303} outputId="e7fd2358-eccd-4f9a-a655-f75c42d7344f"
myxomatosis_code()
# + [markdown] id="DxifWfCAdYCg"
# i) From the plot and equations describe the relationship between rabbits, foxes and myxomatosis.
#
# ii) From this model do you think that the introduction did what it was intended to do?
# + [markdown] id="YtkoQW9_WrD4"
# ## Reference
# Wikipedia contributors. (2021, January 14). Myxomatosis. In Wikipedia, The Free Encyclopedia. Retrieved 15:21, February 22, 2021, from https://en.wikipedia.org/w/index.php?title=Myxomatosis&oldid=1000214621
|
NA3_Rabbit Foxes Myxomatosis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Scilab
# language: scilab
# name: scilab
# ---
# Jupyter Scilab Kernel
# ============
#
# Interact with Scilab in the Notebook. All commands are interpreted by Scilab. Since this is a [MetaKernel](https://github.com/Calysto/metakernel), a standard set of magics are available. Help on commands is available using the `%help` magic or using `?` with a command.
t = linspace(0,6*%pi,100);
plot(sin(t))
plot(cos(t), 'r')
b = 10*cos(t)+30; plot(b);
a = [1,2,3]
b = a + 3;
disp(b)
%lsmagic
|
scilab_kernel.ipynb
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Scala (2.13)
// language: scala
// name: scala213
// ---
// <p style="float: left;"><a href="tour-of-scala.ipynb" target="_blank">Previous</a></p>
// <p style="float: right;"><a href="unified-types.ipynb" target="_blank">Next</a></p>
// <p style="text-align:center;">Tour of Scala</p>
// <div style="clear: both;"></div>
//
// # Basics
//
// In this page, we will cover basics of Scala.
//
// ## Trying Scala interactively in a Jupyter Notebook
//
// If you want to execute the code examples in this tour interactively, the easiest way to get started is to [run the tour as Jupyter notebooks on Binder](https://mybinder.org/v2/gh/sbrunk/almond-examples/master?urlpath=lab%2Ftree%2Fscala-tour%2Fbasics.ipynb). That way, you can play with the examples and try new things in your browser without having to install anything locally.
//
// To execute a cell of code, just click on it and press control + enter or use the menu.
//
// For more information how this works or how to run it locally see the [README](https://github.com/sbrunk/almond-examples/blob/master/README.md) of this project or the [Almond documentation](http://almond-sh.github.io/almond/stable/docs/intro).
//
// All content is derived from https://docs.scala-lang.org/tour/tour-of-scala.html
//
// ## Expressions
//
// Expressions are computable statements.
// + attributes={"classes": ["tut"], "id": ""}
1 + 1
// -
// You can output results of expressions using `println`.
// + attributes={"classes": ["tut"], "id": ""}
println(1) // 1
println(1 + 1) // 2
println("Hello!") // Hello!
println("Hello," + " world!") // Hello, world!
// -
// ### Values
//
// You can name results of expressions with the `val` keyword.
// + attributes={"classes": ["tut"], "id": ""}
val x = 1 + 1
println(x) // 2
// -
// Named results, such as `x` here, are called values. Referencing
// a value does not re-compute it.
//
// Values cannot be re-assigned.
// + attributes={"classes": ["tut"], "id": ""}
x = 3 // This does not compile.
// -
// Types of values can be inferred, but you can also explicitly state the type, like this:
// + attributes={"classes": ["tut"], "id": ""}
val x: Int = 1 + 1
// -
// Notice how the type declaration `Int` comes after the identifier `x`. You also need a `:`.
//
// ### Variables
//
// Variables are like values, except you can re-assign them. You can define a variable with the `var` keyword.
// + attributes={"classes": ["tut"], "id": ""}
var x = 1 + 1
x = 3 // This compiles because "x" is declared with the "var" keyword.
println(x * x) // 9
// -
// As with values, you can explicitly state the type if you want:
// + attributes={"classes": ["tut"], "id": ""}
var x: Int = 1 + 1
// -
// ## Blocks
//
// You can combine expressions by surrounding them with `{}`. We call this a block.
//
// The result of the last expression in the block is the result of the overall block, too.
// + attributes={"classes": ["tut"], "id": ""}
println({
val x = 1 + 1
x + 1
}) // 3
// -
// ## Functions
//
// Functions are expressions that take parameters.
//
// You can define an anonymous function (i.e. no name) that returns a given integer plus one:
// + attributes={"classes": ["tut"], "id": ""}
(x: Int) => x + 1
// -
// On the left of `=>` is a list of parameters. On the right is an expression involving the parameters.
//
// You can also name functions.
// + attributes={"classes": ["tut"], "id": ""}
val addOne = (x: Int) => x + 1
println(addOne(1)) // 2
// -
// Functions may take multiple parameters.
// + attributes={"classes": ["tut"], "id": ""}
val add = (x: Int, y: Int) => x + y
println(add(1, 2)) // 3
// -
// Or it can take no parameters.
// + attributes={"classes": ["tut"], "id": ""}
val getTheAnswer = () => 42
println(getTheAnswer()) // 42
// -
// ## Methods
//
// Methods look and behave very similar to functions, but there are a few key differences between them.
//
// Methods are defined with the `def` keyword. `def` is followed by a name, parameter lists, a return type, and a body.
// + attributes={"classes": ["tut"], "id": ""}
def add(x: Int, y: Int): Int = x + y
println(add(1, 2)) // 3
// -
// Notice how the return type is declared _after_ the parameter list and a colon `: Int`.
//
// Methods can take multiple parameter lists.
// + attributes={"classes": ["tut"], "id": ""}
def addThenMultiply(x: Int, y: Int)(multiplier: Int): Int = (x + y) * multiplier
println(addThenMultiply(1, 2)(3)) // 9
// -
// Or no parameter lists at all.
// + attributes={"classes": ["tut"], "id": ""}
def name: String = System.getProperty("user.name")
println("Hello, " + name + "!")
// -
// There are some other differences, but for now, you can think of them as something similar to functions.
//
// Methods can have multi-line expressions as well.
// + attributes={"classes": ["tut"], "id": ""}
def getSquareString(input: Double): String = {
val square = input * input
square.toString
}
// -
// The last expression in the body is the method's return value. (Scala does have a `return` keyword, but it's rarely used.)
//
// ## Classes
//
// You can define classes with the `class` keyword followed by its name and constructor parameters.
// + attributes={"classes": ["tut"], "id": ""}
class Greeter(prefix: String, suffix: String) {
def greet(name: String): Unit =
println(prefix + name + suffix)
}
// -
// The return type of the method `greet` is `Unit`, which says there's nothing meaningful to return. It's used similarly to `void` in Java and C. (A difference is that because every Scala expression must have some value, there is actually a singleton value of type Unit, written (). It carries no information.)
//
// You can make an instance of a class with the `new` keyword.
// + attributes={"classes": ["tut"], "id": ""}
val greeter = new Greeter("Hello, ", "!")
greeter.greet("Scala developer") // Hello, Scala developer!
// -
// We will cover classes in depth [later](classes.ipynb).
//
// ## Case Classes
//
// Scala has a special type of class called a "case" class. By default, case classes are immutable and compared by value. You can define case classes with the `case class` keywords.
// + attributes={"classes": ["tut"], "id": ""}
case class Point(x: Int, y: Int)
// -
// You can instantiate case classes without `new` keyword.
// + attributes={"classes": ["tut"], "id": ""}
val point = Point(1, 2)
val anotherPoint = Point(1, 2)
val yetAnotherPoint = Point(2, 2)
// -
// And they are compared by value.
// + attributes={"classes": ["tut"], "id": ""}
if (point == anotherPoint) {
println(point + " and " + anotherPoint + " are the same.")
} else {
println(point + " and " + anotherPoint + " are different.")
} // Point(1,2) and Point(1,2) are the same.
if (point == yetAnotherPoint) {
println(point + " and " + yetAnotherPoint + " are the same.")
} else {
println(point + " and " + yetAnotherPoint + " are different.")
} // Point(1,2) and Point(2,2) are different.
// -
// There is a lot more to case classes that we'd like to introduce, and we are convinced you will fall in love with them! We will cover them in depth [later](case-classes.ipynb).
//
// ## Objects
//
// Objects are single instances of their own definitions. You can think of them as singletons of their own classes.
//
// You can define objects with the `object` keyword.
// + attributes={"classes": ["tut"], "id": ""}
object IdFactory {
private var counter = 0
def create(): Int = {
counter += 1
counter
}
}
// -
// You can access an object by referring to its name.
// + attributes={"classes": ["tut"], "id": ""}
val newId: Int = IdFactory.create()
println(newId) // 1
val newerId: Int = IdFactory.create()
println(newerId) // 2
// -
// We will cover objects in depth [later](singleton-objects.ipynb).
//
// ## Traits
//
// Traits are types containing certain fields and methods. Multiple traits can be combined.
//
// You can define traits with `trait` keyword.
// + attributes={"classes": ["tut"], "id": ""}
trait Greeter {
def greet(name: String): Unit
}
// -
// Traits can also have default implementations.
// + attributes={"classes": ["tut"], "id": ""}
trait Greeter {
def greet(name: String): Unit =
println("Hello, " + name + "!")
}
// -
// You can extend traits with the `extends` keyword and override an implementation with the `override` keyword.
// + attributes={"classes": ["tut"], "id": ""}
class DefaultGreeter extends Greeter
class CustomizableGreeter(prefix: String, postfix: String) extends Greeter {
override def greet(name: String): Unit = {
println(prefix + name + postfix)
}
}
val greeter = new DefaultGreeter()
greeter.greet("Scala developer") // Hello, Scala developer!
val customGreeter = new CustomizableGreeter("How are you, ", "?")
customGreeter.greet("Scala developer") // How are you, Scala developer?
// -
// Here, `DefaultGreeter` extends only a single trait, but it could extend multiple traits.
//
// We will cover traits in depth [later](traits.ipynb).
//
// ## Main Method
//
// The main method is an entry point of a program. The Java Virtual
// Machine requires a main method to be named `main` and take one
// argument, an array of strings.
//
// Using an object, you can define a main method as follows:
// + attributes={"classes": ["tut"], "id": ""}
object Main {
def main(args: Array[String]): Unit =
println("Hello, Scala developer!")
}
// -
// <p style="float: left;"><a href="tour-of-scala.ipynb" target="_blank">Previous</a></p>
// <p style="float: right;"><a href="unified-types.ipynb" target="_blank">Next</a></p>
// <p style="text-align:center;">Tour of Scala</p>
// <div style="clear: both;"></div>
|
notebooks/scala-tour/basics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Restructure our table
#
# We flatten the table of offensive words so that it can be used with technologies such as Spark that don't support MultiIndex structures.
import pandas as pd
word_table = pd.read_pickle("../pickles/word_table_cleaned.pickle")
word_table.head()
word_table["descriptor"] = word_table[["category", "strength", "target"]].apply(tuple, axis=1)
word_table.head()
word_table_filtered = word_table["descriptor"]
word_table_filtered.head()
word_table_filtered.to_csv("../datasets/word_table.csv", header=True)
|
Project/4_restructuring_word_rating.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Time Modifiers Compared
# +
from scripts.imports import *
df_ph = pd.read_csv(paths['phrase_dataset'], index_col='node', low_memory=False)
# selection criteria
df_sgph = df_ph[
(df_ph.n_heads == 1) # keep to simple phrases
& (df_ph.n_phatoms == 1)
    & (df_ph.function != 'Spec') # ignore because N=2
& (df_ph.heads_POS != 'PREP') # ignore prep-only phrases
& (~df_ph.heads_etcbc.str.match('.*\[')) # ignore verbs
].copy()
out = Exporter(
paths['outdir'],
'time_modis'
)
# -
out.number(
df_sgph.shape[0],
'N_modexp_af'
)
df_sgph.columns
# ## Identify Nominalization Threshold
# +
lex_hasnom_ct = pivot_ct(
df_sgph,
['heads_etcbc','heads_utf8'],
['unmodified'],
)
lex_hasnom_ct = lex_hasnom_ct.set_axis(['yes', 'no'], axis=1)
lex_hasnom_ct.columns.name = 'noun-modified'
lex_hasnom_pr = lex_hasnom_ct.div(lex_hasnom_ct.sum(1), 0)
# -
out.table(
lex_hasnom_ct.droplevel('heads_etcbc',0)\
.rename_axis(index='head lexeme')\
.head(15).T,
'all_nounmod_ct',
caption='Lexeme Noun-modification Frequencies',
adjustbox=True,
hebaxis=1,
)
out.number(
lex_hasnom_ct.shape[0],
'all_N_lexs'
)
out.table(
lex_hasnom_pr.droplevel('heads_etcbc',0)\
.rename_axis(index='head lexeme')\
.head(15).T\
.mul(100).round().astype(int).astype(str) + '%',
'all_nounmod_pc',
caption='Lexeme Noun-modification Percentages',
adjustbox=True,
hebaxis=1,
)
fig, ax = plt.subplots()
lex_hasnom_pr['yes'].plot(kind='hist', edgecolor='black')
ax.set_xlabel('Proportion of Lexeme Occurrences with Noun Modifier'.lower())
ax.set_ylabel('number of lexemes')
out.plot(
'hist_all_nounmod'
)
# **Threshold >= 0.5**
# ## Apply ΔP Modifier Tests on selected set
threshold = lex_hasnom_pr[lex_hasnom_pr['yes'] >= 0.5].index.droplevel('heads_utf8')
lex_hasnom_ct
# +
th_df = df_sgph[df_sgph.heads_etcbc.isin(threshold)] # threshold DF
out.number(
th_df.heads_etcbc.unique().shape[0],
'all_N_thresh',
)
# -
out.number(
th_df.shape[0],
'all_ph_thresh_N'
)
# +
mod_ct = pivot_ct(
th_df,
'function',
'modtag2',
)
out.table(
mod_ct.iloc[:, :10].rename_axis(columns='modifiers'),
'functmod_ct_10',
caption='Modifier Counts by Frequency (top 10 modifiers)',
adjustbox=True,
)
# +
# ts.show(
# th_df[
# (th_df.function == 'Subj')
# & (th_df.modtag2 == 'Ø')
# ],
# extra=['text', 'heads_etcbc', 'ADJV']
# )
# -
mod_ct.sum()
# N unique mod tags
out.number(
mod_ct.shape[1],
'N_unique_modtag'
)
# N unique mod tags
out.number(
mod_ct.sum().sum(),
'N_observed_phs'
)
# +
top_time_mods = mod_ct.loc['Time'].sort_values(ascending=False)
top_time_mods.head(10)
# -
fig, ax = plt.subplots()
top_time_mods.plot(ax=ax)
ax.set_xticklabels(ax.get_xticklabels(), rotation=30)
ax.set_ylabel('frequency')
ax.set_xlabel('modifier tag')
# +
mod_dp = sig.apply_deltaP(mod_ct, 0, 1)
mod_dp
# +
# narrow down to top time mods
mod_dp2 = mod_dp[top_time_mods.index]
mod_dp2.head()
# +
fig, axs = plt.subplots(2, 1, figsize=(8, 10))
titles = [
'Most Frequent Function × Modifier ΔP Scores',
'Most Frequent Time × Modifier ΔP Scores',
]
for i, dpdata in enumerate([mod_dp, mod_dp2]):
ax = axs[i]
heatmap(dpdata.round(2).iloc[:,:10], ax=ax, square=False, robust=True, annot=True)
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_xticklabels(ax.get_xticklabels(), rotation=30)
ax.set_xlabel('modifiers')
ax.set_title(titles[i])
fig.tight_layout()
out.plot(
'heat_dp_fmodall'
)
# +
fig, ax = plt.subplots(figsize=(10, 6))
heatmap(mod_dp2.round(2).iloc[:,:10], ax=ax, square=False, robust=True, annot=True)
ax.set_yticklabels(ax.get_yticklabels(), rotation=0)
ax.set_xticklabels(ax.get_xticklabels(), rotation=30)
ax.set_xlabel('modifiers')
out.plot(
'heat_dp_fmodtime'
)
# -
# ## Pull out numbers
out.number(
mod_dp['DEMON']['Time'] * 100,
'dp_time_demon',
)
# +
time_ex = df_sg[
(df_sg.times_etcbc.isin(threshold))
& (df_sg.cl_kind == 'VC')
& (~df_sg.cl_type2.isin(['WayH', 'WQtH']))
]
out.examples(
time_ex[time_ex.modtag2 == 'DEMON'].sample(5, random_state=42),
'ex_demon_time',
)
# -
out.number(
mod_dp['DEF']['Time'] * 100,
'dp_time_def',
)
out.examples(
time_ex[time_ex.modtag2 == 'DEF'].sample(5, random_state=69),
'ex_def_time',
)
out.number(
mod_dp['ORDN']['Time'] * 100,
'dp_time_ordn',
)
out.examples(
time_ex[time_ex.modtag2 == 'ORDN'].sample(5, random_state=69),
'ex_ordn_time',
)
out.number(
mod_dp['NUM']['Time'] * 100,
'dp_time_num',
)
out.examples(
time_ex[
(time_ex.modtag2 == 'NUM')
& (time_ex.function == 'atelic_ext')
].sample(3, random_state=42069),
'ex_numatel_time',
)
out.examples(
df[
(df.modtag2 == 'NUM')
& (df.function == 'simultaneous_calendar')
].sample(3, random_state=69),
'ex_numsim_time',
)
out.number(
mod_dp['NUM+PL']['Time'] * 100,
'dp_time_numpl',
)
out.examples(
time_ex[
(time_ex.modtag2 == 'NUM+PL')
].sample(3, random_state=69),
'ex_plnum_time',
)
# +
# count heads with functs to see if ELOHIM is
# responsible for higher PL score with Subj
headfunct_ct = pivot_ct(
th_df,
'function',
'heads_utf8',
)
headfunct_pr = headfunct_ct.div(headfunct_ct.sum(1), 0)
# -
headfunct_pr.loc['Subj'].sort_values(ascending=False).head(20)
out.number(
headfunct_pr.loc['Subj']['אֱלֹהִים']*100,
'subj_elohim_pc',
)
# + jupyter={"outputs_hidden": true}
ts.show(
df_sgph[
(df_sgph.heads_etcbc.isin(threshold))
& (df_sgph.function == 'Subj')
& (df_sgph.modtag2 == 'PL')
]
, extra=['text'])
# -
ts.show(
df_sgph[
(df_sgph.heads_etcbc.isin(threshold))
& (df_sgph.function == 'Cmpl')
& (df_sgph.modtag2 == 'DEF')
]
, extra=['text'])
mod_dp.loc['Time'][mod_dp.loc['Time'].index.str.match('DEF')]
# ## By Main Genre
# +
# lex_nomgen_ct = pivot_ct(
# df_sgph[df_sgph.heads_etcbc.isin(threshold)],
# ['function', 'main_genre'],
# ['modtag2'],
# )
# lex_nomgen_ct
|
workflow/notebooks/analysis/time_modifiers_compared.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python, numpy and pandas basics | Logistic regression case study using scikit-learn
# ### Basics of python
a = 15
b = 20
a < b
l = [1,2,3,4,5,6,10]
set(l)
# Power
2**3
# Slicing
l[1:2]
l[-1]
# String Manipulation
s = "Hello"
s.upper()
s
# Strings are immutable
s.lower()
# Hence the functions don't change the value of the string
l.append("ten")
l
l + ['10']
# Create a matrix
l_1 = [1,2,3,4,5]
l_2 = [5,6,7,8,9]
l_3 = [1,3,5,7,9]
mat = [l_1,l_2,l_3]
mat
# Taking input
inp = input("Give input")
inp
# User defined functions
def greet(stri):
    print("Hello", stri)

greet("Upgrad")
a = 23000.2
b = 23000
if b > a:
    print("b is greater than a")
elif a==b:
print("a is equal to b")
else:
print("a is greater than b")
# Try and except
astr = "123"
try:
istr = int(astr)
except:
print("astr is not convertible to integer")
istr
astr = "Hello upgrad"
try:
istr = int(astr)
except:
print("astr is not convertible to integer")
# #### while loop
i = 1
while i<10:
print(i)
i+=1
friends = ['a','b','c','d','e']
len(friends)
for i in range(len(friends)-1,-1,-1):
print(friends[i])
for i in range(0,len(friends)-1,2):
print(friends[i])
# # numpy
import numpy as np
# Convert a list to an array
mylist = [1,2,3]
np.array(mylist)
# range function in numpy
np.arange(0,11)
np.arange(0,11,2)
# Create an array of zeroes
np.zeros(10)
# Create a matrix of ones
np.ones((2,2))
# Identity matrix
np.eye(4)
# Get a random uniform distribution
np.random.rand(5)
# Normal (Gaussian) distribution
np.random.randn(3,3)
# Generate a random number within range
np.random.randint(1,100)
# Generate n random integers between a range
np.random.randint(1,100,3)
# Convert an array into matrix
arr = np.arange(0,16,1)
arr
# Reshape the array to a 4X4 matrix
mat = arr.reshape(4,4)
mat
# Get the shape of the matrix
mat.shape
# arrays are mutable
arr[1] = 10
arr
# Slicing
arr[0:3]
arr[5:]
# Print all the numbers till the 5th index (5 exclusive)
arr[:5]
# Replace a range in the array with a number
arr[0:4] = 100
arr
# Deepcopy vs shallowcopy
# Shallow copy
a1 = np.arange(1,16)
a1
b1 = a1
b1[:] = 10
a1
# Deep copy
a2 = np.arange(1,16)
a2
b2 = a2.copy()
b2[:] = 100
a2
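The shallow/deep distinction above can also be checked programmatically: `np.shares_memory` reports whether two arrays view the same underlying buffer. A small sketch (not part of the original lesson):

```python
import numpy as np

a = np.arange(1, 16)
shallow = a        # same buffer: writes through `shallow` change `a`
deep = a.copy()    # independent buffer: writes do not propagate

print(np.shares_memory(a, shallow))  # True
print(np.shares_memory(a, deep))     # False
```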
# Accessing elements in an array
arr_2d = np.arange(0,16).reshape(4,4)
arr_2d
arr_2d[2,2]
arr_2d[1]
# Mathematical operations on arrays
a1 = np.arange(0,16).reshape(4,4)
a2 = np.arange(16,32).reshape(4,4)
a1
a2
a1 + a2
a1 * a2
np.sqrt(a1)
np.sin(a1)
np.max(a1)
np.min(a1)
# Natural log (base e)
np.log(a1)
# log(0) is -infinity. A runtime warning will be thrown, but no error is raised
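If that warning is unwanted, it can be silenced locally with `np.errstate` (a minimal sketch, not part of the original lesson):

```python
import numpy as np

# Suppress only the divide-by-zero warning that np.log(0) triggers
with np.errstate(divide='ignore'):
    vals = np.log(np.array([0.0, 1.0, np.e]))
print(vals)  # [-inf   0.   1.]
```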
np.cos(a1)
# ### Questions
# Create an array of size 10; points should be linearly spaced
np.linspace(0,5,10)
# # Pandas
import pandas as pd
# #### Series
labels = ['a','b','c','d']
labels
data = [10,30,50,70]
ar = np.array(data)
ar
# Create a dictionary
d={'a':10,'b':30,'c':50,'d':70}
d
# Create a series with data and labels
pd.Series(data=data,index=labels)
pd.Series(data,labels)
series1 = pd.Series([1,2,3,4],['India','Afghanistan','China','Pakistan'])
series1
series2 = pd.Series([1,2,7,4],['India','Afghanistan','Bangladesh','Pakistan'])
series2
# Accessing an element
series1['India']
series3 = pd.Series(data=labels)
series3[0]
# DataFrame - Pandas series object with an index
np.random.seed(1000)
df=pd.DataFrame(np.random.randn(5,3),['A','B','C','D','E'],['W','X','Y'])
df
# Select rows
df['W']
type(df['W'])
df[['W','X']]
# Create a new column
df['new'] = df['W']+df['Y']
df
# Drop the newly create column
df.drop('new',axis=1,inplace=True) # axis = 0 means the row and axis = 1 means column, inplace = True - for changes to occur
df
# Drop a row
df.drop('E')
df.shape
# 5 denotes the number of rows and 3 denotes the number of columns
# Access rows
df.loc['A']
df.iloc[0]
df.loc['A','Y']
df.loc[['A','B'],['W','X']]
# Missing values
# +
d = ({'A':[1,2,np.nan],'B':[6,np.nan,np.nan],'C':[1,4,5]})
df = pd.DataFrame(d)
df
# -
# Drop rows with NA values
df.dropna() # For Rows
# Drop columns with NA values
df.dropna(axis=1) # For Columns
# Keep only rows that have at least 2 non-NA values (thresh is a minimum count of non-NA values per row)
df.dropna(thresh=2)
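To make the `thresh` semantics concrete, here is a self-contained check on a frame that mirrors the one above: row 0 has 3 non-NA values, row 1 has 2, row 2 has 1, so `thresh=2` keeps the first two rows only.

```python
import numpy as np
import pandas as pd

demo = pd.DataFrame({'A': [1, 2, np.nan],
                     'B': [6, np.nan, np.nan],
                     'C': [1, 4, 5]})
# thresh=2 means "require at least 2 non-NA values to keep the row"
kept = demo.dropna(thresh=2)
print(kept.index.tolist())  # [0, 1]
```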
# Imputing missing values with mean
# imputing the missing values in column 'A' with it's mean
df['A'].fillna(value=df['A'].mean())
# ##### Groupby function
# +
data = {'Company':['Goog','MSFT','ABINB','DMART','TCS','TCS'],
'Person':['Ra','Ma','Ta','Ka','Ga','Fa'],
'Sales': [200,1000,100001,232,445,567]}
df = pd.DataFrame(data)
df
# -
# groupby is the function to group values in pandas
# Group by the 'Company' column
r=df.groupby('Company')
r.mean()
r.sum()
r.count()
# Oneliner for the above operation
df.groupby("Company").count()
df.groupby('Company').max()
df.groupby('Company').describe()
# Merging and concatenating
# +
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
# -
df1
df2
df3
# concatenate all the rows from all the data frames
# Similar to Rbind in R
pd.concat([df1,df2,df3])
# Concatenate dataframes column wise <br>
# similar to cbind in R
pd.concat([df1,df2,df3],axis=1)
# Merge
# +
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
# -
left
right
pd.merge(left,right,how='inner',on='key')
pd.merge(left, right, how='outer', on='key')
# Read input from the CSV file
training = pd.read_csv("data/train.csv")
training
training.describe()
# ### Logistic regression
from sklearn.linear_model import LogisticRegression
# +
Xtrain= training.iloc[:,1:16]
Ytrain = training.iloc[:,16]
# -
Xtrain.describe()
Xtrain
Xtrain.isnull().sum()
Xtrain.isnull().any().any()
Ytrain.isnull().any().any()
Xtrain.B.describe()
Xtrain.G.describe()
# Missing value imputation <br>
# 'A', 'D', 'E', 'F', 'G' are categorical values
# Replacing with mode for categorical values
for column in ['A', 'D', 'E', 'F', 'G']:
Xtrain[column].fillna(Xtrain[column].mode()[0], inplace=True)
# For numeric values, we would replace the missing values with median
# Instead of replacing the missing values directly with the overall median, we group the continuous columns by the target variable (column "P"), compute the median within each group, and fill the NA values with the median of the respective category
# +
Xtrain1= Xtrain[['B','N']]
Xtrain1 = pd.concat([Xtrain1,Ytrain],axis=1)
Xtrain1=Xtrain1.groupby('P', group_keys=False).apply(lambda x: x.fillna(x.median()).astype(float))
# -
# Removing the actual columns and replacing the newly created columns in the actual data
# +
Xtrain = Xtrain.drop(['B','N'],axis=1)
Xtrain1= Xtrain1.drop('P',axis=1)
Xtrain = pd.concat([Xtrain,Xtrain1],axis=1)
# -
Xtrain.isnull().sum()
# Onehot encoding for the categorical values ( Creating dummy variables)
# +
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
labelencoder_X_1 = LabelEncoder()
Xtrain.iloc[:,0] = labelencoder_X_1.fit_transform(Xtrain.iloc[:,0])
labelencoder_X_2 = LabelEncoder()
Xtrain.iloc[:,2] = labelencoder_X_2.fit_transform(Xtrain.iloc[:,2])
labelencoder_X_3 = LabelEncoder()
Xtrain.iloc[:,3] = labelencoder_X_3.fit_transform(Xtrain.iloc[:,3])
labelencoder_X_4 = LabelEncoder()
Xtrain.iloc[:,4] = labelencoder_X_4.fit_transform(Xtrain.iloc[:,4])
labelencoder_X_5 = LabelEncoder()
Xtrain.iloc[:,5] = labelencoder_X_5.fit_transform(Xtrain.iloc[:,5])
labelencoder_X_7 = LabelEncoder()
Xtrain.iloc[:,7] = labelencoder_X_7.fit_transform(Xtrain.iloc[:,7])
labelencoder_X_8 = LabelEncoder()
Xtrain.iloc[:,8] = labelencoder_X_8.fit_transform(Xtrain.iloc[:,8])
labelencoder_X_10 = LabelEncoder()
Xtrain.iloc[:,10] = labelencoder_X_10.fit_transform(Xtrain.iloc[:,10])
labelencoder_X_11 = LabelEncoder()
Xtrain.iloc[:,11] = labelencoder_X_11.fit_transform(Xtrain.iloc[:,11])
# -
# Creating new variables based on the frequency
# +
Xtrain['A_freq'] = Xtrain.groupby('A')['A'].transform('count')
Xtrain['D_freq'] = Xtrain.groupby('D')['D'].transform('count')
Xtrain['E_freq'] = Xtrain.groupby('E')['E'].transform('count')
Xtrain['F_freq'] = Xtrain.groupby('F')['F'].transform('count')
Xtrain['G_freq'] = Xtrain.groupby('G')['G'].transform('count')
Xtrain['I_freq'] = Xtrain.groupby('I')['I'].transform('count')
Xtrain['J_freq'] = Xtrain.groupby('J')['J'].transform('count')
Xtrain['K_freq'] = Xtrain.groupby('K')['K'].transform('count')
Xtrain['L_freq'] = Xtrain.groupby('L')['L'].transform('count')
Xtrain['M_freq'] = Xtrain.groupby('M')['M'].transform('count')
# -
# Fitting a logistic regression model
logmodel = LogisticRegression()
logmodel.fit(Xtrain,Ytrain)
|
Basecamp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
"""
Practice on network analysis
Explore Gephi
"""
# +
import re, itertools
import networkx as nx
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from collections import Counter
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
# -
# %load_ext autoreload
# %autoreload 2
import sys
sys.path.append('/Users/katiehuang/Desktop/metis/projects/onl_ds5_project_4/py')
from word_cloud import *
import importlib
topic_df = pd.read_pickle('../dump/topic_df')
speech_df = pd.read_pickle('../dump/speech_clean_lemma')
dtm = pd.read_pickle('../dump/data_dtm_lemma.pkl')
data = dtm.transpose()
# ## For Gephi
data.head()
# +
# Find the top 30 words said in each speech
top_dict = {}
for c in data.columns:
top = data[c].sort_values(ascending=False).head(30)
top_dict[c]= list(zip(top.index, top.values))
words = []
for speech in data.columns:
top = [word for (word, count) in top_dict[speech]]
for t in top:
words.append(t)
boring_words = ['say','like','just','dont','don','im',
'ive','youll','youve','things','thing','youre','right','really','lot',
'make','know','people','way',
'come','thats','graduate']
# -
len(words)
most_common = pd.DataFrame(Counter([word for word in words if word not in boring_words]).most_common()[1:6000:20],
columns=['word','freq'])
len(most_common),most_common
dtm.head()
columns_drop = [word for word in data.index.tolist() if word not in most_common.word.tolist()]
len(columns_drop)
data.head()
data_slim = dtm.drop(columns=columns_drop)
data_slim
# Create co-occurrence matrix
df = data_slim.transpose().dot(data_slim)
np.fill_diagonal(df.values, 0)  # zero out self co-occurrence on the diagonal
df
m = min(min(df.to_numpy().tolist()))
df = df/10
df
df.to_csv('../dump/to_gephi.csv',sep=',')
# Create co-occurrence matrix
df_fat = data.transpose().dot(data)
df_fat.to_csv('../dump/to_gephi_fat.csv',sep=',')
# ### Topic & terms for Gephi
topic_df
topic_df['words'] = topic_df.apply(lambda x: " ".join(x.astype('str')),axis=1)
topic_word = topic_df[['words']]
topic_word
data_dtm = cv_dtm(topic_word,'words')
data_dtm
# Create topic-topic df to append
topic_topic = pd.DataFrame(columns = data_dtm.index,index=data_dtm.index).fillna(0)
topic_topic
concat_1 = pd.concat([topic_topic,data_dtm],axis=1)
concat_1
# Create word-word df to append
word_word = pd.DataFrame(columns = concat_1.columns,index=data_dtm.columns).fillna(0)
word_word
concat_2 = pd.concat([concat_1,word_word],axis=0)
concat_2
concat_2.to_csv('../dump/to_gephi_topic_words.csv',sep=',')
concat_2.to_pickle('../dump/to_nx_topic_words')
|
notebook/network_3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Challenge 3
# ***
# +
# Load the packages required to run this notebook
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import numpy.random as nr
import math
from sklearn import preprocessing
import sklearn.model_selection as ms
from sklearn import linear_model
import sklearn.metrics as sklm
import scipy.stats as ss
# %matplotlib inline
# -
# Load the datasets
adventure_works = pd.read_csv('../adventure_works.csv', parse_dates = ['BirthDate'])
adventure_works.columns
# Drop columns with minimal predictive power and 'BikeBuyer' since we don't have that at the time of evaluation
adventure_works.drop(['CustomerID', 'FirstName', 'LastName', 'AddressLine1', 'PostalCode', 'PhoneNumber',
'BirthDate', 'BikeBuyer'], axis=1, inplace=True)
adventure_works.head()
# Transform numeric features to make their distributions symmetric
adventure_works['SqrtYearlyIncome'] = np.sqrt(adventure_works['YearlyIncome'])
adventure_works['LogAveMonthSpend'] = np.log(adventure_works['AveMonthSpend'])
# ## Prepare data for scikit-learn model
# Create numpy array of label values
Labels = np.array(adventure_works['LogAveMonthSpend'])
Labels = Labels.reshape(Labels.shape[0],)
# +
import warnings
warnings.filterwarnings('ignore')
# Create model matrix
def encode_string(cat_features):
## First encode the strings to numeric categories
enc = preprocessing.LabelEncoder()
enc.fit(cat_features)
enc_cat_features = enc.transform(cat_features)
## Now, apply one hot encoding
ohe = preprocessing.OneHotEncoder()
encoded = ohe.fit(enc_cat_features.reshape(-1,1))
return encoded.transform(enc_cat_features.reshape(-1,1)).toarray()
categorical_columns = ['Occupation', 'Gender', 'MaritalStatus', 'HomeOwnerFlag', 'AgeGroup',
'NumberCarsOwned', 'NumberChildrenAtHome', 'TotalChildren']
Features = encode_string(adventure_works['Education'])
for col in categorical_columns:
temp = encode_string(adventure_works[col])
Features = np.concatenate([Features, temp], axis = 1)
print(Features.shape)
print(Features[:2, :])
# -
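As an aside: on scikit-learn 0.20 and later, `OneHotEncoder` accepts string columns directly, so the `LabelEncoder` round-trip in `encode_string` can be skipped. A minimal sketch under that version assumption (the column names here are illustrative, not from the dataset):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'Occupation': ['Clerical', 'Manual', 'Clerical'],
                   'Gender': ['F', 'M', 'F']})
# handle_unknown='ignore' avoids errors if the test set
# contains categories never seen during fit
ohe = OneHotEncoder(handle_unknown='ignore')
features = ohe.fit_transform(df).toarray()
print(features.shape)  # (3, 4): two Occupation levels + two Gender levels
```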
# Concatenate numeric features to model matrix
Features = np.concatenate([Features, np.array(adventure_works[['SqrtYearlyIncome']])], axis = 1)
print(Features.shape)
print(Features[:2, :])
print(Features.shape)
print(Labels.shape)
# # `Linear Regression Model`
## Randomly sample cases to create independent training and test data
nr.seed(9988)
indx = range(Features.shape[0])
indx = ms.train_test_split(indx, test_size = 0.2)
X_train = Features[indx[0],:]
y_train = np.ravel(Labels[indx[0]])
X_test = Features[indx[1],:]
y_test = np.ravel(Labels[indx[1]])
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# Scale numeric features
scaler = preprocessing.StandardScaler().fit(X_train[:,37:])
X_train[:,37:] = scaler.transform(X_train[:,37:])
X_test[:,37:] = scaler.transform(X_test[:,37:])
X_train[:2,]
## define and fit the linear regression model
lin_mod = linear_model.LinearRegression()
lin_mod.fit(X_train, y_train)
# Print model coefficients
print(lin_mod.intercept_)
print(lin_mod.coef_)
# +
# Print several useful metrics for regression
def print_metrics(y_true, y_predicted, n_parameters):
## First compute R^2 and the adjusted R^2
r2 = sklm.r2_score(y_true, y_predicted)
r2_adj = r2 - (n_parameters - 1)/(y_true.shape[0] - n_parameters) * (1 - r2)
## Print the usual metrics and the R^2 values
print('Mean Square Error = ' + str(sklm.mean_squared_error(y_true, y_predicted)))
print('Root Mean Square Error = ' + str(math.sqrt(sklm.mean_squared_error(y_true, y_predicted))))
print('Mean Absolute Error = ' + str(sklm.mean_absolute_error(y_true, y_predicted)))
print('Median Absolute Error = ' + str(sklm.median_absolute_error(y_true, y_predicted)))
print('R^2 = ' + str(r2))
print('Adjusted R^2 = ' + str(r2_adj))
y_score = lin_mod.predict(X_test)
print_metrics(y_test, y_score, 38)
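The adjusted R² computed in `print_metrics` is algebraically the textbook formula, with `n_parameters` ($p$) counting the intercept and $n$ the number of observations:

```latex
R^2_{\text{adj}} \;=\; R^2 - \frac{p-1}{n-p}\,(1 - R^2) \;=\; 1 - (1 - R^2)\,\frac{n-1}{n-p}
```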
# +
# Plot histogram of residuals
def hist_resids(y_test, y_score):
## first compute vector of residuals.
resids = np.subtract(y_test.reshape(-1,1), y_score.reshape(-1,1))
## now make the residual plots
sns.distplot(resids)
plt.title('Histogram of residuals')
plt.xlabel('Residual value')
plt.ylabel('count')
hist_resids(y_test, y_score)
# +
# Display Q-Q Normal plot
def resid_qq(y_test, y_score):
## first compute vector of residuals.
resids = np.subtract(y_test.reshape(-1,1), y_score.reshape(-1,1))
## now make the residual plots
ss.probplot(resids.flatten(), plot = plt)
plt.title('Residuals vs. predicted values')
plt.xlabel('Predicted values')
plt.ylabel('Residual')
resid_qq(y_test, y_score)
# +
# Plot of residuals vs predicted values
def resid_plot(y_test, y_score):
## first compute vector of residuals.
resids = np.subtract(y_test.reshape(-1,1), y_score.reshape(-1,1))
## now make the residual plots
sns.regplot(y_score, resids, fit_reg=False)
plt.title('Residuals vs. predicted values')
plt.xlabel('Predicted values')
plt.ylabel('Residual')
resid_plot(y_test, y_score)
# -
# Plot of residuals vs exp(predicted values) (Note: predictions are LogAvgMonthSpend)
y_score_untransform = np.exp(y_score)
y_test_untransform = np.exp(y_test)
resid_plot(y_test_untransform, y_score_untransform)
# ### Evaluating the linear regression model on AW_test.csv
evaluation = pd.read_csv('../Resources/AW_test.csv', parse_dates = ['BirthDate'])
evaluation.columns
# +
# Prepare evaluation data for scikit-learn model
evaluation['Age'] = (pd.to_datetime("1998-01-01") - evaluation['BirthDate']) / np.timedelta64(1,'Y')
evaluation['Age'] = evaluation['Age'].astype('int64')
# Categorize customers in specific age groups
def age_group(row):
    if row['Age'] < 25:
        return "Under 25 years"
    elif row['Age'] < 45:
        return "Between 25 and 45 years"
    elif row['Age'] <= 55:
        return "Between 45 and 55 years"
    else:
        return "Over 55 years"
evaluation['AgeGroup'] = evaluation.apply(lambda row: age_group(row), axis=1)
evaluation['AgeGroup'].unique()
# Sqrt transform YearlyIncome values
evaluation['SqrtYearlyIncome'] = np.sqrt(evaluation['YearlyIncome'])
# -
evaluation.drop(['CustomerID', 'Title', 'FirstName', 'MiddleName', 'LastName', 'Suffix', 'AddressLine1',
'AddressLine2', 'City', 'StateProvinceName', 'CountryRegionName', 'PostalCode', 'PhoneNumber',
'BirthDate', 'Age'], axis=1, inplace=True)
evaluation.tail()
# +
# Create model matrix for final evaluation on AW_test
categorical_columns = ['Occupation', 'Gender', 'MaritalStatus', 'HomeOwnerFlag', 'AgeGroup',
'NumberCarsOwned', 'NumberChildrenAtHome', 'TotalChildren']
AW_test = encode_string(evaluation['Education'])
for col in categorical_columns:
temp = encode_string(evaluation[col])
AW_test = np.concatenate([AW_test, temp], axis = 1)
# Concatenate numeric features to model matrix
AW_test = np.concatenate([AW_test, np.array(evaluation[['SqrtYearlyIncome']])], axis = 1)
# Scale numeric features using same scalar object for train data
AW_test[:,37:] = scaler.transform(AW_test[:,37:])
# +
# Compute predictions on AW_test and exponentiate predicted LogAvgMonthSpend values to bring back to original scale
predictions = lin_mod.predict(AW_test)
predictions = np.exp(predictions)
print(pd.DataFrame(predictions))
# pd.DataFrame(predictions).to_csv('challenge3_predictions.csv', sep = '\n', index = False, header = False)
# Note: 3.117595749 RMSE on AW_test
# -
|
Challenge3/challenge3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This code provides a donut plot with 3 groups and several **subgroups** for each group. You can set the position of the 2 circle levels using the **radius** and **width** options. Then, the idea is to attribute a color palette for each group. Note that the code for this graphic is far from optimal. It would be great to create a more general function. Do not hesitate to leave a comment if you have a better way to do it!
# +
# Libraries
import matplotlib.pyplot as plt
# Make data: I have 3 groups and 7 subgroups
group_names=['groupA', 'groupB', 'groupC']
group_size=[12,11,30]
subgroup_names=['A.1', 'A.2', 'A.3', 'B.1', 'B.2', 'C.1', 'C.2', 'C.3', 'C.4', 'C.5']
subgroup_size=[4,3,5,6,5,10,5,5,4,6]
# Create colors
a, b, c=[plt.cm.Blues, plt.cm.Reds, plt.cm.Greens]
# First Ring (outside)
fig, ax = plt.subplots()
ax.axis('equal')
mypie, _ = ax.pie(group_size, radius=1.3, labels=group_names, colors=[a(0.6), b(0.6), c(0.6)] )
plt.setp( mypie, width=0.3, edgecolor='white')
# Second Ring (Inside)
mypie2, _ = ax.pie(subgroup_size, radius=1.3-0.3, labels=subgroup_names, labeldistance=0.7, colors=[a(0.5), a(0.4), a(0.3), b(0.5), b(0.4), c(0.6), c(0.5), c(0.4), c(0.3), c(0.2)])
plt.setp( mypie2, width=0.4, edgecolor='white')
plt.margins(0,0)
# show it
plt.show()
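The note above wishes for a more general function. One possible sketch, under the assumption that the input is a mapping of group name to subgroup sizes (the function name and signature are invented here, not an established API):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

def nested_donut(groups, cmaps=None, radius=1.3, width=0.3):
    """Two-level donut: `groups` maps group name -> {subgroup name: size}."""
    cmaps = cmaps or [plt.cm.Blues, plt.cm.Reds, plt.cm.Greens, plt.cm.Purples]
    group_names = list(groups)
    group_sizes = [sum(subs.values()) for subs in groups.values()]
    outer_colors, sub_names, sub_sizes, sub_colors = [], [], [], []
    for i, subs in enumerate(groups.values()):
        cmap = cmaps[i % len(cmaps)]
        outer_colors.append(cmap(0.6))
        for j, (name, size) in enumerate(subs.items()):
            sub_names.append(name)
            sub_sizes.append(size)
            sub_colors.append(cmap(0.5 - 0.1 * j))  # shade subgroups within the group
    fig, ax = plt.subplots()
    ax.axis('equal')
    outer, _ = ax.pie(group_sizes, radius=radius, labels=group_names,
                      colors=outer_colors)
    plt.setp(outer, width=width, edgecolor='white')
    inner, _ = ax.pie(sub_sizes, radius=radius - width, labels=sub_names,
                      labeldistance=0.7, colors=sub_colors)
    plt.setp(inner, width=width, edgecolor='white')
    return fig, ax

# Same data as the hard-coded example above
fig, ax = nested_donut({
    'groupA': {'A.1': 4, 'A.2': 3, 'A.3': 5},
    'groupB': {'B.1': 6, 'B.2': 5},
    'groupC': {'C.1': 10, 'C.2': 5, 'C.3': 5, 'C.4': 4, 'C.5': 6},
})
```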
|
src/notebooks/163-donut-plot-with-subgroups.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="etP7C3tQfXCR"
# ## Setting the environment for Colab
# + colab={"base_uri": "https://localhost:8080/"} id="xOwXpNsEfORS" executionInfo={"status": "ok", "timestamp": 1606511189507, "user_tz": 360, "elapsed": 291, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="9ac027ca-f031-4c7f-d9e5-ac73c0110347"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="uMUr8T8pfYmG" executionInfo={"status": "ok", "timestamp": 1606511189642, "user_tz": 360, "elapsed": 419, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="05c626cb-8e12-4988-860a-153644f608c5"
# %cd "/content/drive/My Drive/Colab Notebooks/w266_final/project_re"
# + colab={"base_uri": "https://localhost:8080/"} id="64OOH-uAhapD" executionInfo={"status": "ok", "timestamp": 1606511192832, "user_tz": 360, "elapsed": 3602, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="ecaee690-5a76-41bb-c323-1044cfc1a15c"
# !pip install transformers
# + id="NPJ1N3yvfhCm" executionInfo={"status": "ok", "timestamp": 1606511196930, "user_tz": 360, "elapsed": 7695, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
# %reload_ext autoreload
# %matplotlib inline
import logging
import time
from platform import python_version
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
import torch
import torch.nn as nn
import torch.nn.functional as F
import transformers
from sklearn.metrics import roc_auc_score
from torch.autograd import Variable
# + id="qopPKo2yKCfz" executionInfo={"status": "ok", "timestamp": 1606511196935, "user_tz": 360, "elapsed": 7696, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# + colab={"base_uri": "https://localhost:8080/"} id="BVY4z2wghkE5" executionInfo={"status": "ok", "timestamp": 1606511196936, "user_tz": 360, "elapsed": 7691, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="3dbf1445-27f1-4a09-941a-831d48f20a09"
df_train = pd.read_csv('data_divided/train.tsv', sep ="\t", header=None, skiprows=100, nrows=1000)
df_train = df_train.rename(columns={0: "id", 1: "relation_code", 2: "alpha", 3:"string"})
df_train = df_train[['id', 'string', 'relation_code' ]]
df_train = df_train.dropna(subset=['string'])
df_train = pd.get_dummies(df_train, columns = ['relation_code'])
df_train = df_train.rename(columns={"relation_code_0": "reason",
"relation_code_1": "route",
"relation_code_2": "strength",
"relation_code_3": "frequency",
"relation_code_4": "duration",
"relation_code_5": "form",
"relation_code_6": "dosage",
"relation_code_7": "ade",
"relation_code_8": "no_relation"})
print(len(df_train))
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="f2oneYzViABX" executionInfo={"status": "ok", "timestamp": 1606511196936, "user_tz": 360, "elapsed": 7683, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="770a3073-3f62-48a8-ffa8-f6eab28f4bc6"
df_train.head()
# + colab={"base_uri": "https://localhost:8080/"} id="MPcAbjpriFpk" executionInfo={"status": "ok", "timestamp": 1606511196937, "user_tz": 360, "elapsed": 7675, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="21725a4f-22eb-4be8-d50f-6b28d3a79df3"
df_val = pd.read_csv('data_divided/dev.tsv', sep ="\t", header=None,skiprows=100, nrows=500)
df_val = df_val.rename(columns={0: "id", 1: "relation_code", 2: "alpha", 3:"string"})
df_val = df_val[['id', 'string', 'relation_code' ]]
df_val = df_val.dropna(subset=['string'])
df_val = pd.get_dummies(df_val, columns = ['relation_code'])
df_val = df_val.rename(columns={"relation_code_0": "reason",
"relation_code_1": "route",
"relation_code_2": "strength",
"relation_code_3": "frequency",
"relation_code_4": "duration",
"relation_code_5": "form",
"relation_code_6": "dosage",
"relation_code_7": "ade",
"relation_code_8": "no_relation"})
print(len(df_val))
# + id="2TsszA9_m7eD" executionInfo={"status": "ok", "timestamp": 1606511196938, "user_tz": 360, "elapsed": 7670, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
#df_val.head()
# + colab={"base_uri": "https://localhost:8080/"} id="PIEb-KIpm9Nq" executionInfo={"status": "ok", "timestamp": 1606511196941, "user_tz": 360, "elapsed": 7668, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="5dda249d-64af-4952-cf01-7f6afbf4c72b"
df_test = pd.read_csv('data_divided/test.tsv', sep="\t", header=None, skiprows=100, nrows=500)
len(df_test)
# + colab={"base_uri": "https://localhost:8080/"} id="n4cm8Zc8sEV0" executionInfo={"status": "ok", "timestamp": 1606511196941, "user_tz": 360, "elapsed": 7660, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="88d2a518-2ec9-4409-cbe1-6ac69a72be48"
df_test = df_test.rename(columns={0: "id", 1: "relation_code", 2: "alpha", 3:"string"})
df_test = df_test[['id', 'string', 'relation_code' ]]
df_test = df_test.dropna(subset=['string'])
df_test = pd.get_dummies(df_test, columns = ['relation_code'])
df_test = df_test.rename(columns={"relation_code_0": "reason",
"relation_code_1": "route",
"relation_code_2": "strength",
"relation_code_3": "frequency",
"relation_code_4": "duration",
"relation_code_5": "form",
"relation_code_6": "dosage",
"relation_code_7": "ade",
"relation_code_8": "no_relation"})
print(len(df_test))
# + id="qaNhrUX-sOoh" executionInfo={"status": "ok", "timestamp": 1606511196942, "user_tz": 360, "elapsed": 7656, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
#df_test.head()
# + id="FyCLOgwwtsw5" executionInfo={"status": "ok", "timestamp": 1606511196944, "user_tz": 360, "elapsed": 7654, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
target_columns = ["reason", "route", "strength", "frequency", "duration", "form", "dosage", "ade", "no_relation"]
# + [markdown] id="1MX-kBJ_pTsg"
# ## BERT
# + id="Y18NHNW7otpO" executionInfo={"status": "ok", "timestamp": 1606511203597, "user_tz": 360, "elapsed": 14303, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
model_class = transformers.BertModel
tokenizer_class = transformers.BertTokenizer
pretrained_weights='gsarti/biobert-nli'
# Load pretrained model/tokenizer
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
bert_model = model_class.from_pretrained(pretrained_weights).to(device)
#bert_model = model_class.from_pretrained(pretrained_weights)
# + id="NKtjr4L8paZ7" executionInfo={"status": "ok", "timestamp": 1606511203599, "user_tz": 360, "elapsed": 14301, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
max_seq = 256
def tokenize_text(df, max_seq):
return [
tokenizer.encode(text, add_special_tokens=True)[:max_seq] for text in df.string.values
]
def pad_text(tokenized_text, max_seq):
    # Right-pad each token list with zeros up to max_seq, then build the tensor once.
    padded = np.array([el + [0] * (max_seq - len(el)) for el in tokenized_text])
    return torch.tensor(padded, dtype=torch.long, device=device)
def tokenize_and_pad_text(df, max_seq):
    tokenized_text = tokenize_text(df, max_seq)
    # pad_text already returns a LongTensor on the right device; wrapping it in
    # torch.tensor() again would copy it and raise a warning.
    return pad_text(tokenized_text, max_seq)
def targets_to_tensor(df, target_columns):
return torch.tensor(df[target_columns].values, dtype=torch.float32)
# + colab={"base_uri": "https://localhost:8080/"} id="5thVKbi5pq-4" executionInfo={"status": "ok", "timestamp": 1606511206267, "user_tz": 360, "elapsed": 16964, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="510bbeb4-9775-4cbc-a6a7-db8b40ad073f"
train_indices = tokenize_and_pad_text(df_train, max_seq)
print(type(train_indices))
val_indices = tokenize_and_pad_text(df_val, max_seq)
test_indices = tokenize_and_pad_text(df_test, max_seq)
with torch.no_grad():
x_train = bert_model(train_indices)[0]
x_val = bert_model(val_indices)[0]
x_test = bert_model(test_indices)[0]
y_train = targets_to_tensor(df_train, target_columns)
y_val = targets_to_tensor(df_val, target_columns)
y_test = targets_to_tensor(df_test, target_columns)
# + id="n5ri4rMV5p9C" executionInfo={"status": "ok", "timestamp": 1606511206269, "user_tz": 360, "elapsed": 16958, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
#df_train['string'].isnull().values.sum()
# + colab={"base_uri": "https://localhost:8080/"} id="7zGg0mp2tg3I" executionInfo={"status": "ok", "timestamp": 1606511220096, "user_tz": 360, "elapsed": 30780, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="50ec97fc-d10b-41d7-c971-57299b85ba71"
x_train[0]
# + colab={"base_uri": "https://localhost:8080/"} id="zxQO1hF51xRH" executionInfo={"status": "ok", "timestamp": 1606511220097, "user_tz": 360, "elapsed": 30778, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="0ca06048-46b0-49d1-e4f5-789cd0507829"
y_train[0]
# + id="ppRSRX091zQ3" executionInfo={"status": "ok", "timestamp": 1606511220097, "user_tz": 360, "elapsed": 30772, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/<KEY>6=s64", "userId": "13066601150058162597"}}
class KimCNN(nn.Module):
def __init__(self, embed_num, embed_dim, class_num, kernel_num, kernel_sizes, dropout, static):
super(KimCNN, self).__init__()
V = embed_num
D = embed_dim
C = class_num
Co = kernel_num
Ks = kernel_sizes
self.static = static
self.embed = nn.Embedding(V, D)
self.convs1 = nn.ModuleList([nn.Conv2d(1, Co, (K, D)) for K in Ks])
self.dropout = nn.Dropout(dropout)
self.fc1 = nn.Linear(len(Ks) * Co, C)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
        if self.static:
            x = x.detach()  # freeze the pretrained embeddings (torch.autograd.Variable is deprecated)
x = x.unsqueeze(1) # (N, Ci, W, D)
x = [F.relu(conv(x)).squeeze(3).to(device) for conv in self.convs1] # [(N, Co, W), ...]*len(Ks)
x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x] # [(N, Co), ...]*len(Ks)
x = torch.cat(x, 1)
x = self.dropout(x) # (N, len(Ks)*Co)
logit = self.fc1(x) # (N, C)
output = self.sigmoid(logit)
return output
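The shape bookkeeping in `forward` can be checked arithmetically without torch: for an input of shape `(N, W, D)`, each `Conv2d` with kernel `(K, D)` yields `(N, Co, W - K + 1)` after `squeeze(3)`, max-pooling over the remaining axis gives `(N, Co)`, and concatenation gives `(N, len(Ks) * Co)` going into the linear layer:

```python
# Shape arithmetic for the KimCNN forward pass (values match the config used
# below: batch 12, max_seq 256, BERT hidden size 768, kernels of sizes 2/3/4,
# kernel_num 3, 9 relation classes).
N, W, D = 12, 256, 768
Co, Ks, C = 3, [2, 3, 4], 9

conv_shapes = [(N, Co, W - K + 1) for K in Ks]  # after conv + squeeze(3)
pooled_shapes = [(N, Co) for _ in Ks]           # after max_pool1d + squeeze(2)
cat_shape = (N, len(Ks) * Co)                   # after torch.cat(x, 1)
fc_out = (N, C)                                 # after fc1 / sigmoid
print(conv_shapes, cat_shape, fc_out)
```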
# + id="NK5oJfMz12fj" executionInfo={"status": "ok", "timestamp": 1606511220098, "user_tz": 360, "elapsed": 30768, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
embed_num = x_train.shape[1]
embed_dim = x_train.shape[2]
class_num = y_train.shape[1]
kernel_num = 3
kernel_sizes = [2, 3, 4]
dropout = 0.5
static = True
model = KimCNN(
embed_num=embed_num,
embed_dim=embed_dim,
class_num=class_num,
kernel_num=kernel_num,
kernel_sizes=kernel_sizes,
dropout=dropout,
static=static,
)
# + id="0ccGoHC816hc" executionInfo={"status": "ok", "timestamp": 1606511220098, "user_tz": 360, "elapsed": 30764, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
n_epochs = 5
batch_size = 12
lr = 0.00001
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
loss_fn = nn.BCELoss()
# + id="BmtJ-4m21_hr" executionInfo={"status": "ok", "timestamp": 1606511220099, "user_tz": 360, "elapsed": 30762, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
def generate_batch_data(x, y, batch_size):
i, batch = 0, 0
for batch, i in enumerate(range(0, len(x) - batch_size, batch_size), 1):
x_batch = x[i : i + batch_size]
y_batch = y[i : i + batch_size]
yield x_batch.to(device), y_batch.to(device), batch
if i + batch_size < len(x):
yield x[i + batch_size :].to(device), y[i + batch_size :].to(device), batch + 1
if batch == 0:
yield x.to(device), y.to(device), 1
#generate_batch_data
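Since `generate_batch_data` has a few edge cases (the tail batch, and datasets smaller than `batch_size`), the same logic can be exercised on plain lists:

```python
def generate_batches(x, y, batch_size):
    # List-based copy of generate_batch_data above, minus the .to(device) calls.
    i, batch = 0, 0
    for batch, i in enumerate(range(0, len(x) - batch_size, batch_size), 1):
        yield x[i:i + batch_size], y[i:i + batch_size], batch
    if i + batch_size < len(x):
        yield x[i + batch_size:], y[i + batch_size:], batch + 1
    if batch == 0:
        yield x, y, 1

x = list(range(10))
y = list(range(10))
batches = list(generate_batches(x, y, batch_size=4))
print([len(xb) for xb, _, _ in batches])  # [4, 4, 2] -- all 10 samples covered
```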
# + colab={"base_uri": "https://localhost:8080/"} id="QIQdibk52M0k" executionInfo={"status": "ok", "timestamp": 1606511221934, "user_tz": 360, "elapsed": 32594, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}} outputId="c5474c9b-ee48-4853-fd37-aeb4d6a25d6e"
train_losses, val_losses = [], []
for epoch in range(n_epochs):
start_time = time.time()
train_loss = 0
    model.to(device)
    model.train()  # enable dropout during training
    for x_batch, y_batch, batch in generate_batch_data(x_train, y_train, batch_size):
        y_pred = model(x_batch)  # model and batches are already on device
        optimizer.zero_grad()
        loss = loss_fn(y_pred, y_batch)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
train_loss /= batch
train_losses.append(train_loss)
elapsed = time.time() - start_time
model.eval() # disable dropout for deterministic output
# deactivate autograd engine to reduce memory usage and speed up computations
with torch.no_grad():
val_loss, batch = 0, 1
for x_batch, y_batch, batch in generate_batch_data(x_val, y_val, batch_size):
y_pred = model(x_batch)
            loss = loss_fn(y_pred, y_batch)
val_loss += loss.item()
val_loss /= batch
val_losses.append(val_loss)
print(
"Epoch %d Train loss: %.2f. Validation loss: %.2f. Elapsed time: %.2fs."
% (epoch + 1, train_losses[-1], val_losses[-1], elapsed)
)
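After training, the sigmoid outputs have to be thresholded to obtain multi-label predictions; a common (assumed) choice is 0.5. With hypothetical probabilities for three samples over the nine relation classes:

```python
import numpy as np

# Made-up sigmoid outputs, one row per sample, one column per relation class.
probs = np.array([
    [0.9, 0.1, 0.2, 0.05, 0.3, 0.1, 0.7, 0.2, 0.1],   # reason + dosage
    [0.2, 0.6, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],    # route only
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.95],   # no_relation
])
preds = (probs >= 0.5).astype(int)
print(preds.sum(axis=1))  # number of predicted labels per sample
```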
# + id="6aRuH6nx2PGG" executionInfo={"status": "ok", "timestamp": 1606511221936, "user_tz": 360, "elapsed": 32589, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhEI3USGoSv9o6JShJGhR2V47o7KYZh-Ya1FgZ6=s64", "userId": "13066601150058162597"}}
|
project_re/model/new_BERT_CNN.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: widgets-tutorial
# language: python
# name: widgets-tutorial
# ---
# # Widget List
import ipywidgets as widgets
# + [markdown] slideshow={"slide_type": "slide"}
# ## Numeric widgets
# -
# There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing `Float` with `Int` in the widget name, you can find the Integer equivalent.
# ### IntSlider
import ipywidgets as widgets
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### FloatSlider
# -
import ipywidgets as widgets
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
# Sliders can also be **displayed vertically**.
import ipywidgets as widgets
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.1f',
)
# ### FloatLogSlider
# The `FloatLogSlider` has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The `min` and `max` refer to the minimum and maximum exponents of the `base`, and the `value` refers to the actual value of the slider.
import ipywidgets as widgets
widgets.FloatLogSlider(
value=10,
base=10,
    min=-10, # min exponent of base
    max=10, # max exponent of base
step=0.2, # exponent step
description='Log Slider'
)
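In other words, the slider position is an exponent and the reported value is `base ** exponent` (a plain-Python check, not using the widget):

```python
base = 10
exponent = 0.5            # a hypothetical slider position between min and max
value = base ** exponent  # the value the FloatLogSlider would report
print(round(value, 3))    # 3.162
```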
# ### IntRangeSlider
import ipywidgets as widgets
widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
# ### FloatRangeSlider
import ipywidgets as widgets
widgets.FloatRangeSlider(
value=[5, 7.5],
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
# ### IntProgress
import ipywidgets as widgets
widgets.IntProgress(
value=7,
min=0,
max=10,
step=1,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### FloatProgress
# -
import ipywidgets as widgets
widgets.FloatProgress(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Loading:',
bar_style='info',
orientation='horizontal'
)
# The numerical text boxes that impose some limit on the data (range, integer-only) enforce that restriction when the user presses enter.
#
# ### BoundedIntText
import ipywidgets as widgets
widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### BoundedFloatText
# -
import ipywidgets as widgets
widgets.BoundedFloatText(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Text:',
disabled=False
)
# ### IntText
import ipywidgets as widgets
widgets.IntText(
    value=7,
description='Any:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### FloatText
# -
import ipywidgets as widgets
widgets.FloatText(
value=7.5,
description='Any:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Boolean widgets
# -
# There are three widgets that are designed to display a boolean value.
# ### ToggleButton
import ipywidgets as widgets
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check'
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Checkbox
# -
import ipywidgets as widgets
widgets.Checkbox(
value=False,
description='Check me',
disabled=False,
indent=True
)
# ### Valid
#
# The valid widget provides a read-only indicator.
import ipywidgets as widgets
widgets.Valid(
value=True,
description='Valid!',
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Selection widgets
# -
# There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the **enumeration of selectable options by passing a list** (options are either (label, value) pairs, or simply values for which the labels are derived by calling `str`).
# + [markdown] slideshow={"slide_type": "slide"}
# ### Dropdown
# -
import ipywidgets as widgets
widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
disabled=False,
)
# The following is also valid, displaying the words `'One', 'Two', 'Three'` as the dropdown choices but returning the values `1, 2, 3`.
import ipywidgets as widgets
widgets.Dropdown(
options=[('One', 1), ('Two', 2), ('Three', 3)],
value=2,
description='Number:',
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### RadioButtons
#
# Note that the label for this widget is truncated; we will return later to how to allow longer labels.
# -
import ipywidgets as widgets
widgets.RadioButtons(
options=['pepperoni', 'pineapple', 'anchovies'],
# value='pineapple',
description='Pizza topping:',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Select
# -
import ipywidgets as widgets
widgets.Select(
options=['Linux', 'Windows', 'OSX'],
value='OSX',
# rows=10,
description='OS:',
disabled=False
)
# ### SelectionSlider
import ipywidgets as widgets
widgets.SelectionSlider(
options=['scrambled', 'sunny side up', 'poached', 'over easy'],
value='sunny side up',
description='I like my eggs ...',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
# ### SelectionRangeSlider
#
# The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty.
import ipywidgets as widgets
import datetime
dates = [datetime.date(2015,i,1) for i in range(1,13)]
options = [(i.strftime('%b'), i) for i in dates]
widgets.SelectionRangeSlider(
options=options,
index=(0,11),
description='Months (2015)',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### ToggleButtons
# -
import ipywidgets as widgets
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
# icons=['check'] * 3
)
# ### SelectMultiple
# Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
import ipywidgets as widgets
widgets.SelectMultiple(
options=['Apples', 'Oranges', 'Pears'],
value=['Oranges'],
#rows=10,
description='Fruits',
disabled=False
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## String widgets
# -
# There are several widgets that can be used to display a string value. The `Text` and `Textarea` widgets accept input. The `Password` widget is a special `Text` widget that hides its input. The `HTML` and `HTMLMath` widgets display a string as HTML (`HTMLMath` also renders math). The `Label` widget can be used to construct a custom control label.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Text
# -
import ipywidgets as widgets
widgets.Text(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
# ### Textarea
import ipywidgets as widgets
widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
# ### Password
import ipywidgets as widgets
widgets.Password(
value='password',
placeholder='Enter password',
description='Password:',
disabled=False
)
# ## Combobox
import ipywidgets as widgets
widgets.Combobox(
options=['One', 'Two', 'Three'],
description='Select or type',
placeholder='Type here',
)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Label
#
# The `Label` widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions.
# -
import ipywidgets as widgets
widgets.HBox([widgets.Label(value="The $m$ in $E=mc^2$:"), widgets.FloatSlider()])
# ### HTML
import ipywidgets as widgets
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
)
# ### HTMLMath
import ipywidgets as widgets
widgets.HTMLMath(
value=r"Some math and <i>HTML</i>: \(x^2\) and $$\frac{x+1}{x-1}$$",
placeholder='Some HTML',
description='Some HTML',
)
# ## Image
import ipywidgets as widgets
file = open("../images/WidgetArch.png", "rb")
image = file.read()
widgets.Image(
value=image,
format='png',
width=300,
height=400,
)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Button
# -
import ipywidgets as widgets
widgets.Button(
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check'
)
# ## Output
#
# The `Output` widget can capture and display stdout, stderr and [rich output generated by IPython](http://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#module-IPython.display). After the widget is created, direct output to it using a context manager.
import ipywidgets as widgets
out = widgets.Output()
out
# You can print text to the output area as shown below.
with out:
for i in range(10):
print(i, 'Hello world!')
# Rich material can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the `Output` widget.
from IPython.display import YouTubeVideo
with out:
display(YouTubeVideo('eWzY2nGfkXk'))
# ## Play
# ### An animation widget
# The `Play` widget is useful to perform animations by iterating on a sequence of integers with a certain speed. The value of the slider below is linked to the player.
import ipywidgets as widgets
play = widgets.Play(
# interval=10,
value=50,
min=0,
max=100,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
# ## Video
#
# The `value` of this widget accepts a byte string. The byte string is the
# raw video data that you want the browser to display. You can explicitly
# define the format of the byte string using the `format` trait (which
# defaults to "mp4").
#
# #### Displaying YouTube videos
#
# Though it is possible to stream a YouTube video in the `Video` widget, there is an easier way using the `Output` widget and the IPython `YouTubeVideo` display.
import ipywidgets as widgets
f = open('../Big.Buck.Bunny.mp4', 'rb')
widgets.Video(
value=f.read(),
format='mp4'
)
f.close()
# ## Audio
#
# The `value` of this widget accepts a byte string. The byte string is the
# raw audio data that you want the browser to display. You can explicitly
# define the format of the byte string using the `format` trait (which
# defaults to "mp3").
#
# If you pass `"url"` to the `"format"` trait, `value` will be interpreted
# as a URL as bytes encoded in UTF-8.
import ipywidgets as widgets
f = open('../invalid_keypress.mp3', 'rb')
widgets.Audio(
value=f.read(),
)
f.close()
# ## DatePicker
#
# The date picker widget works in Chrome, Firefox and IE Edge, but does not currently work in Safari because it does not support the HTML date input field.
import ipywidgets as widgets
widgets.DatePicker(
description='Pick a Date',
disabled=False
)
# ## ColorPicker
import ipywidgets as widgets
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
# ## FileUpload
#
# The `FileUpload` widget allows uploading one or more files of any type as bytes.
import ipywidgets as widgets
a = widgets.FileUpload(
accept='', # Accepted file extension e.g. '.txt', '.pdf', 'image/*', 'image/*,.pdf'
multiple=False, # True to accept multiple files upload else False
)
a
# The file contents are available in `a.value` for a single upload. For multiple uploads `a.value` is a dictionary where the keys are the file names and the values are the file contents.
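For reference, in ipywidgets 7.x `a.value` for an upload is shaped roughly like the dictionary below (this is a hand-written mock, not output from the widget; ipywidgets 8 changed `value` to a tuple of per-file dicts):

```python
# Mocked-up shape of FileUpload.value (ipywidgets 7.x style, assumption).
uploaded = {
    'notes.txt': {
        'metadata': {'name': 'notes.txt', 'size': 5, 'type': 'text/plain'},
        'content': b'hello',
    },
}
for name, item in uploaded.items():
    data = item['content']   # raw bytes of the uploaded file
    print(name, len(data))
```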
# ## Controller
#
# The `Controller` allows a game controller to be used as an input device.
import ipywidgets as widgets
widgets.Controller(
index=0,
)
# ## Container/Layout widgets
#
# These widgets are used to hold other widgets, called children. Each has a `children` property that may be set either when the widget is created or later.
# ### Box
import ipywidgets as widgets
items = [widgets.Label(str(i)) for i in range(4)]
widgets.Box(items)
# ### HBox
import ipywidgets as widgets
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
# ### VBox
import ipywidgets as widgets
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
# ### GridBox
#
# This box uses the CSS Grid Layout specification to lay out its children in a two-dimensional grid. The example below lays out its 8 items in 3 columns and as many rows as needed to accommodate them.
import ipywidgets as widgets
items = [widgets.Label(str(i)) for i in range(8)]
widgets.GridBox(items, layout=widgets.Layout(grid_template_columns="repeat(3, 100px)"))
# ### Accordion
import ipywidgets as widgets
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
# ### Tab
#
# In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for `Accordion`.
import ipywidgets as widgets
tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
for i in range(len(children)):
tab.set_title(i, str(i))
tab
# ### Accordion and Tab use `selected_index`, not value
#
# Unlike the rest of the widgets discussed earlier, the container widgets `Accordion` and `Tab` update their `selected_index` attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing *and* programmatically set what the user sees by setting the value of `selected_index`.
#
# Setting `selected_index = None` closes all of the accordions or deselects all tabs.
# In the cells below try displaying or setting the `selected_index` of the `tab` and/or `accordion`.
tab.selected_index = 3
accordion.selected_index = None
# ### Nesting tabs and accordions
#
# Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion.
#
# The example below makes a couple of tabs with an accordion children in one of them
import ipywidgets as widgets
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.set_title(0, 'An accordion')
tab_nest.set_title(1, 'Copy of the accordion')
tab_nest
# ## TwoByTwoLayout
# You can easily create a layout with 4 widgets arranged in a 2x2 matrix using the `TwoByTwoLayout` widget:
# +
from ipywidgets import TwoByTwoLayout, Button, Layout
TwoByTwoLayout(top_left=Button(description="Top left"),
top_right=Button(description="Top right"),
bottom_left=Button(description="Bottom left"),
bottom_right=Button(description="Bottom right"))
# -
# ## AppLayout
# `AppLayout` is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane:
from ipywidgets import AppLayout, Button, Layout
# +
header = Button(description="Header",
layout=Layout(width="auto", height="auto"))
left_sidebar = Button(description="Left Sidebar",
layout=Layout(width="auto", height="auto"))
center = Button(description="Center",
layout=Layout(width="auto", height="auto"))
right_sidebar = Button(description="Right Sidebar",
layout=Layout(width="auto", height="auto"))
footer = Button(description="Footer",
layout=Layout(width="auto", height="auto"))
AppLayout(header=header,
left_sidebar=left_sidebar,
center=center,
right_sidebar=right_sidebar,
footer=footer)
# -
# ## GridspecLayout
# `GridspecLayout` is an N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's [GridSpec](https://matplotlib.org/tutorials/intermediate/gridspec.html#sphx-glr-tutorials-intermediate-gridspec-py).
#
# You can use `GridspecLayout` to define a simple regularly-spaced grid. For example, to create a 4x3 layout:
# +
from ipywidgets import GridspecLayout, Button, Layout
grid = GridspecLayout(4, 3)
for i in range(4):
for j in range(3):
grid[i, j] = Button(layout=Layout(width='auto', height='auto'))
grid
|
notebooks/reference_guides/complete-ipywidgets-widget-list.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Implementing local gauge invariance with atomic mixtures.
#
# This notebook is based on the following [paper](https://science.sciencemag.org/content/367/6482/1128), whose experiment was performed on the NaLi machine at SynQS. The paper demonstrates a new scalable analog quantum simulator of a U(1) gauge theory.
#
# By using interspecies spin-changing collisions between particles, a gauge-invariant interaction between matter and gauge-field is achieved. In this case an atomic mixture of sodium and lithium is used.
#
# We will model the system with two qudits of slightly different length. The first qudit is the matter field and the second qudit the gauge field.
#
#
# +
import pennylane as qml
import numpy as np
import matplotlib.pyplot as plt
# -
# Make sure that you followed the necessary steps for obtaining the credentials as described in the [introduction](https://synqs.github.io/pennylane-ls/intro.html).
from pennylane_ls import *
from heroku_credentials import username, password
# # Rotating the matter field
#
# We first have to rotate the matter field to initialize the dynamics in the system.
NaLiDevice = qml.device(
"synqs.mqs", wires=2, shots=500, username=username, password=password
)
@qml.qnode(NaLiDevice)
def matterpreparation(alpha=0):
MultiQuditOps.load(2, wires=[0])
MultiQuditOps.load(20, wires=[1])
MultiQuditOps.rLx(alpha, wires=[1])
obs = MultiQuditOps.Lz(0) @ MultiQuditOps.Lz(1)
return qml.expval(obs)
# visualize the circuit
matterpreparation(np.pi/2)
print(matterpreparation.draw())
NaLiDevice.job_id
# now reproduce figure 1
alphas = np.linspace(0, np.pi, 15)
means = np.zeros((len(alphas), 2))
for i in range(len(alphas)):
if i % 10 == 0:
print("step", i)
# Calculate the resulting states after each rotation
means[i, :] = matterpreparation(alphas[i])
f, ax = plt.subplots()
ax.plot(alphas, means[:, 0], "o", label="gauge field")
ax.plot(alphas, means[:, 1], "o", label="matter field")
ax.set_ylabel(r"$\eta$")
ax.set_xlabel(r"$\varphi$")
ax.legend()
# ## Time evolution
# And now the time evolution.
@qml.qnode(NaLiDevice)
def t_evolv(alpha=0, beta=0, gamma=0, delta=0, NLi=1, NNa=10, Ntrott=1):
"""Circuit that describes the time evolution.
alpha ... Initial angle of rotation of the matter field
beta ... Angle of rotation for the matter field
gamma ... Angle of rotation on the squeezing term.
delta ... Angle of rotation of the flip flop term.
"""
# preparation step
MultiQuditOps.load(NLi, wires=[0])
MultiQuditOps.load(NNa, wires=[1])
MultiQuditOps.rLx(alpha, wires=[1])
# time evolution
for ii in range(Ntrott):
MultiQuditOps.LxLy(delta / Ntrott, wires=[0, 1])
MultiQuditOps.rLz(beta / Ntrott, wires=[0])
MultiQuditOps.rLz2(gamma / Ntrott, wires=[1])
obs = MultiQuditOps.Lz(0)
return qml.expval(obs)
t_evolv(alpha=np.pi / 2, beta=0.1, gamma=25, delta=0.2)
print(t_evolv.draw())
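The `Ntrott` loop in `t_evolv` is a first-order Trotterization: `exp(-i(A+B)t)` is approximated by `N` alternating applications of `exp(-iAt/N)` and `exp(-iBt/N)`, with an error that shrinks roughly as `1/N`. A numpy-only check on two toy non-commuting Hermitian terms (stand-ins for illustration, not the actual NaLi Hamiltonian):

```python
import numpy as np

def expm_h(H, t):
    """exp(-i t H) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

A = np.array([[1.0, 0.0], [0.0, -1.0]])  # sigma_z
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x
t = 1.0

exact = expm_h(A + B, t)

def trotter(N):
    # N alternating steps of exp(-iAt/N) exp(-iBt/N)
    step = expm_h(A, t / N) @ expm_h(B, t / N)
    return np.linalg.matrix_power(step, N)

err4 = np.linalg.norm(trotter(4) - exact)
err64 = np.linalg.norm(trotter(64) - exact)
print(err64 < err4)  # more Trotter steps, smaller error
```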
# parameters of the experiment
Delta = -2 * np.pi * 500
chiT = 2.0 * np.pi * 0.01 * 300e3
lamT = 2.0 * np.pi * 2e-4 * 300e3  # alternative: lamT = 2.0 * np.pi * 2e-5 * 300e3
Ntrott = 12
NLi = 5
NNa = 50
alpha = np.pi / 2
chi = chiT / NNa
lam = lamT / NNa
times = np.linspace(0, 10.0, 10) * 1e-3
means = np.zeros(len(times))
for i in range(len(times)):
means[i] = t_evolv(
alpha, Delta * times[i], chi * times[i], lam * times[i], NLi, NNa, Ntrott=Ntrott
)
f, ax = plt.subplots()
ax.plot(times * 1e3, means, "o", label="matter field")
ax.set_ylabel(r"$\eta$")
ax.set_xlabel("time (ms)")
ax.legend()
|
examples/Gauge_Theory.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/wintercameearly/Plate-Number-Classification/blob/master/plate_number_classification_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="uahyHtnoNtOv" colab_type="code" colab={}
import numpy as np
import pandas as pd
from keras.preprocessing.image import ImageDataGenerator, load_img
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
import os, cv2, itertools
import PIL
from PIL import Image
from matplotlib import image
from numpy import asarray
from PIL import ImageFile
# %matplotlib inline
# + id="B13q1Ze_0nIv" colab_type="code" colab={}
from google.colab import drive
drive.mount('/gdrive')
# %cd /gdrive
# + id="Ww2x0OALE58y" colab_type="code" colab={}
#plate_number_dir = '/gdrive/My Drive/Plate-Number-Classification/Plate_number/'
#negative_images_dir = '/gdrive/My Drive/Plate-Number-Classification/negative_images/'
plate_number_dir = '/gdrive/My Drive/MachineLearning/plate_number/'
negative_images_dir = '/gdrive/My Drive/MachineLearning/negative_images/'
ROWS = 64
COLS = 64
CHANNELS = 3
# + id="_Abl_upuGJ2M" colab_type="code" colab={}
plate_numbers_images = [plate_number_dir+i for i in os.listdir(plate_number_dir)]
negative_images_images = [negative_images_dir+i for i in os.listdir(negative_images_dir)]
# + colab_type="code" id="NX4TxsrLJYl1" colab={}
#def read_image(file_path):
# img = Image.open(file_path)
# ImageFile.LOAD_TRUNCATED_IMAGES = True
# img = img.convert("RGB")
# img_resized = img.resize((ROWS,COLS))
# image_resized_array = asarray(img)
# resized_img = cv2.resize(image_resized_array, (ROWS, COLS), interpolation = cv2.INTER_AREA)
# return resized_img
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR)
resized_img = cv2.resize(img, (ROWS, COLS), interpolation = cv2.INTER_AREA)
return resized_img
# + id="Mkid7rYxFm04" colab_type="code" colab={}
def prepare_data(images):
m = len(images)
n_x = ROWS*COLS*CHANNELS
    X = np.empty((n_x, m), dtype=np.uint8)  # uninitialised buffer, filled column by column below
y = np.zeros((1,m))
print("X.shape is {}".format(X.shape))
for i,image_file in enumerate(images):
image = read_image(image_file)
X[:,i] = np.squeeze(image.reshape((n_x,1)))
if '-' in image_file.lower():
y[0,i] = 1
elif '00' in image_file.lower():
y[0,i] = 0
if i%100 == 0 :
            print("Processed {} of {}".format(i, m))
print(image.shape)
return X,y
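As an aside (not part of the original notebook), the column-flattening done in `prepare_data` is exactly inverted by the reshape used later in `show_images`. A minimal self-contained check of that round trip:

```python
import numpy as np

ROWS, COLS, CHANNELS = 64, 64, 3
img = np.arange(ROWS * COLS * CHANNELS, dtype=np.int32).reshape(ROWS, COLS, CHANNELS)

# Flatten into one column of the (n_x, m) design matrix, as prepare_data does
col = np.squeeze(img.reshape((ROWS * COLS * CHANNELS, 1)))

# The original image is recovered exactly by the reshape used in show_images
recovered = col.reshape((ROWS, COLS, CHANNELS))
assert np.array_equal(recovered, img)
```

Because both operations use the same C-order memory layout, no pixel is moved out of place.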
# + id="vf6sVVgCFyNX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="8e789f16-2c5d-4b4b-afa2-eb73c346b1bd"
# Note: prepare_data returns the flattened image matrix X and the label vector y
plate_img, negative_img = prepare_data(plate_numbers_images + negative_images_images)
# + id="3TJ20V6zauV0" colab_type="code" colab={}
classes = {0: 'Negative_Image',
1: 'Plate_Number'}
# + id="8TYsBJ5Ia0Fl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="3e5381c8-5643-4e56-c847-1e2e6d0b14a5"
def show_images(X, y, idx) :
image = X[idx]
image = image.reshape((ROWS, COLS, CHANNELS))
plt.figure(figsize=(4,2))
    plt.imshow(image)
plt.title(classes[y[idx,0]])
plt.show()
show_images(plate_img.T, negative_img.T, 0)
# + id="EHrxzY7Xe58c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2b9f1eea-027b-4ed3-a5f4-c62cb9b88f9c"
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
plate_img_lr, neg_img_lr = plate_img.T, negative_img.T.ravel()
knn.fit(plate_img_lr, neg_img_lr)
print("Training accuracy (scored on the training set): {:.2f}%".format(knn.score(plate_img_lr, neg_img_lr)*100))
# + id="Q-KIEUPSfe2S" colab_type="code" colab={}
def show_image_prediction(X, idx, model) :
image = X[idx].reshape(1,-1)
image_class = classes[model.predict(image).item()]
image = image.reshape((ROWS, COLS, CHANNELS))
plt.figure(figsize = (4,2))
plt.imshow(image)
plt.title("Test {} : I think this is {}".format(idx, image_class))
plt.show()
# + id="5AHBNwK-fn1W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 797} outputId="9cc06fe3-065b-42aa-8d38-c5dd8a6f369f"
plate_img_lr, neg_img_lr = plate_img.T, negative_img.T.ravel()
for i in np.random.randint(0, len(plate_img_lr), 5) :
show_image_prediction(plate_img_lr, i, knn)
# + id="649uSbV7A5x-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="240b40b2-22a2-427c-d932-76c26645e468"
from sklearn.neighbors import RadiusNeighborsClassifier
rnc = RadiusNeighborsClassifier()
rnc.fit(plate_img_lr, neg_img_lr)
print("Training accuracy (scored on the training set): {:.2f}%".format(rnc.score(plate_img_lr, neg_img_lr)*100))
# + id="cH4G2RBUBCcb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 797} outputId="5af6cfcd-ab7e-472e-db59-18a7a53bef0c"
plate_img_lr, neg_img_lr = plate_img.T, negative_img.T
for i in np.random.randint(0, len(plate_img_lr), 5) :
show_image_prediction(plate_img_lr, i, rnc)
# + [markdown] id="AptcXfFlBCLn" colab_type="text"
#
|
plate_number_classification_v2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.decomposition import PCA
import math
import scipy.linalg as la
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.spatial import distance
import numba
from numba import jit, int32, int64, float32, float64
import timeit
import time
import pstats
from sklearn.preprocessing import StandardScaler
# %load_ext cython
# +
#iris = sns.load_dataset('iris')
# matrix data
#X = np.array(iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']])
#specs = np.array(iris['species'])
# -
X = np.loadtxt("Data/mnist2500_X.txt")
labels = np.loadtxt("Data/mnist2500_labels.txt")
small = X[:1000]
small.shape
# ## Original Code
# +
def squared_euc_dist(X):
"""Calculate squared euclidean distance for all pairs in a data matrix X with d dimensions and n rows.
Output is a pairwise distance matrix D that is nxn.
"""
D = distance.squareform(distance.pdist(X, 'sqeuclidean'))
return D
def p_cond(d_matrix, sigmas):
    """Convert a distance matrix to a matrix of conditional probabilities."""
sig_2 = np.square(sigmas.reshape((-1, 1)))
P_cond = np.exp((d_matrix / (2 * sig_2)) - np.max((d_matrix / (2 * sig_2)), axis=1).reshape([-1, 1]))
# set p_i|i = 0
np.fill_diagonal(P_cond, 0.)
P_cond = (P_cond + 1e-10) / (P_cond + 1e-10).sum(axis=1).reshape([-1, 1])
return P_cond
def binary_search(eval_fn, target, tol=1e-10, max_iter=10000,
lower=1e-20, upper=1000.):
"""Perform a binary search over input values to eval_fn.
# Arguments
eval_fn: Function that we are optimising over.
target: Target value we want the function to output.
tol: Float, once our guess is this close to target, stop.
max_iter: Integer, maximum num. iterations to search for.
lower: Float, lower bound of search range.
upper: Float, upper bound of search range.
# Returns:
Float, best input value to function found during search.
"""
for i in range(max_iter):
mid = (lower + upper) / 2.
val = eval_fn(mid)
if val > target:
upper = mid
else:
lower = mid
if np.abs(val - target) <= tol:
break
return mid
def perp(d_matrix, sigmas):
"""calculate perplexity from distance matrix, sigmas, and conditional probability matrix."""
P = p_cond(d_matrix, sigmas)
entropy = -np.sum(P * np.log2(P), axis=1)
perplexity = 2 ** entropy
return perplexity
def find_optimal_sigmas(d_matrix, target_perplexity):
"""For each row of distances matrix, find sigma that results
    in target perplexity for that row."""
sigmas = []
# For each row of the matrix (each point in our dataset)
for i in range(d_matrix.shape[0]):
# Make fn that returns perplexity of this row given sigma
eval_fn = lambda sigma: \
perp(d_matrix[i:i + 1, :], np.array(sigma))
# Binary search over sigmas to achieve target perplexity
correct_sigma = binary_search(eval_fn, target_perplexity)
# Append the resulting sigma to our output array
sigmas.append(correct_sigma)
return np.array(sigmas)
def q_ij(Y):
"""Calculate joint probabilities over all points given Y, the low-dimensional map of data points. (pg. 2585)"""
numerator = np.power(1. + (squared_euc_dist(Y)), -1)
Q = numerator / np.sum(numerator)
# q_i|i = 0
np.fill_diagonal(Q, 0.)
return Q
def p_ij(X, target_perplexity):
"""Calculate joint probabilities in the high dimensional space given data matrix X
and a target perplexity to find optimal sigmas (pg. 2584).
"""
d_matrix = -squared_euc_dist(X)
# optimal sigma for each row of distance matrix
sigmas = find_optimal_sigmas(d_matrix, target_perplexity)
# conditional p matrix from optimal sigmas
p_conditional = p_cond(d_matrix, sigmas)
# convert conditional P to joint P matrix (pg. 2584)
n = p_conditional.shape[0]
p_joint = (p_conditional + p_conditional.T) / (2. * n)
return p_joint
def grad_C(P, Q, Y):
"""Calculate gradient of cost function (KL) with respect to lower dimensional map points Y (pg. 2586)"""
pq_diff = (P - Q)[:, :, np.newaxis]
y_diff = Y[:, np.newaxis, :] - Y[np.newaxis, :, :]
y_dist = (np.power(1. + (squared_euc_dist(Y)), -1))[:, :, np.newaxis]
grad = 4. * (pq_diff * y_diff * y_dist).sum(axis=1)
return grad
def tsne(X, num_iters=1000, perplexity=30, alpha=10, momentum=0.9):
"""Calculate Y, the optimal low-dimensional representation of data matrix X using optimized TSNE.
Inputs:
X: data matrix
num_iters: number of iterations
perplexity: target perplexity for calculating optimal sigmas for P probability matrix
alpha: learning rate
momentum: momentum to speed up gradient descent algorithm
"""
# Initialize Y
np.random.seed(0)
Y = np.random.normal(0, 0.0001, size=(X.shape[0], 2))
P = p_ij(X, perplexity)
# Initialise past y_t-1 and y_t-2 values (used for momentum)
Y_tmin2 = Y
Y_tmin1 = Y
# gradient descent with momentum
for i in range(num_iters):
Q = q_ij(Y)
grad = grad_C(P, Q, Y)
# Update Y using momentum (pg. 2587)
Y = (Y - alpha * grad) + (momentum * (Y_tmin1 - Y_tmin2))
# update values of y_t-1 and y_t-2
Y_tmin2 = Y_tmin1
Y_tmin1 = Y
return Y
# -
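The binary search in `find_optimal_sigmas` works because row perplexity is a monotonically increasing function of sigma, bounded between 1 and n - 1. A self-contained sketch verifying this on random data (a standalone re-derivation with the standard p_{j|i} definition, not the notebook's exact `p_cond`, which receives negated distances):

```python
import numpy as np
from scipy.spatial import distance

def cond_probs(D, sigma):
    """Gaussian conditional probabilities p_{j|i} from squared distances D."""
    E = -D / (2.0 * sigma ** 2)
    np.fill_diagonal(E, -np.inf)            # enforce p_{i|i} = 0
    E = E - E.max(axis=1, keepdims=True)    # stabilise before exponentiating
    P = np.exp(E)
    return P / P.sum(axis=1, keepdims=True)

def perplexity(P):
    """Row-wise perplexity 2**H(P_i), with H the Shannon entropy in bits."""
    logP = np.log2(np.clip(P, 1e-300, None))
    return 2.0 ** (-np.sum(P * logP, axis=1))

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
D = distance.squareform(distance.pdist(X, 'sqeuclidean'))

p_small = perplexity(cond_probs(D, 0.2))   # narrow kernel: few effective neighbours
p_large = perplexity(cond_probs(D, 5.0))   # wide kernel: close to uniform
assert np.all(p_small >= 1.0 - 1e-9)
assert np.all(p_large <= 19.0 + 1e-9)      # at most n - 1 = 19 effective neighbours
assert np.all(p_large > p_small)           # perplexity grows with sigma
```

Monotonicity is what guarantees the bisection converges to the target perplexity.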
# normal = %timeit -o -r3 -n3 tsne(X[:1000, ])
# %prun -q -D tsne.prof tsne(X[:1000, ])
p = pstats.Stats('tsne.prof')
p.print_stats()
pass
# ## Reformed and added numba JIT
# +
@jit(nopython = True)
def p_ij(d_matrix, perplexity = 40.0, tol = 1e-6):
"""
Finds P_ij matrix using binary search to find value of sigma_i
Inputs: d_matrix- np.array of pairwise distance matrix, with a fixed perplexity
Output: P-ij matrix
"""
(n, d) = d_matrix.shape
P = np.zeros((n, d), dtype=np.float64)
prec_sum = 0.0
# precision = 1/2sigma^2
for i in range(n):
prec_min = -np.inf
prec_max = np.inf
prec = 1.0
# implement binary search for optimal sigmas
for j in range(10): # 10 binary search steps
sum_p = 0.0
for k in range(d):
if k != i:
P[i, k] = np.exp(-d_matrix[i, k] * prec)
sum_p += P[i, k]
sum_p_distribution = 0.0
for k in range(d):
P[i, k] /= (sum_p + 1e-8)
sum_p_distribution += d_matrix[i, k] * P[i, k]
# Calculate entropy, H matrix
H = np.log(sum_p) + prec * sum_p_distribution
H_diff = H - np.log(perplexity)
# check if entropy is within tolerance
if np.fabs(H_diff) <= tol:
break
if H_diff > 0.0:
prec_min = prec
if prec_max == np.inf:
prec *= 2.0
else:
prec = (prec + prec_max) / 2.0
else:
prec_max = prec
if prec_min == -np.inf:
prec /= 2.0
else:
prec = (prec + prec_min) / 2.0
prec_sum += prec
return P
@jit(nopython = True)
def squared_euc_dist(X):
"""Calculate squared euclidean distance for all pairs in a data matrix X with d dimensions and n rows.
Output is a pairwise distance matrix D that is nxn.
"""
norms = np.power(X, 2).sum(axis = 1)
D = np.add(np.add(-2 * np.dot(X, X.T), norms).T, norms)
return D
@jit(nopython = True)
def q_ij(Y):
"""Calculate joint probabilities over all points given Y, the low-dimensional map of data points. (pg. 2585)"""
numerator = np.power(1. + (squared_euc_dist(Y)), -1)
Q = numerator / np.sum(numerator)
# q_i|i = 0
np.fill_diagonal(Q, 0.)
return Q
@jit(nopython = True)
def grad_C(P, Q, Y):
"""Estimate the gradient of t-SNE cost with respect to Y."""
pq_diff = np.expand_dims((P - Q), 2)
y_diff = np.expand_dims(Y, 1) - np.expand_dims(Y, 0)
y_dist = np.expand_dims(np.power(1 + squared_euc_dist(Y), -1), 2)
grad = 4. * (pq_diff * y_diff * y_dist).sum(axis = 1)
return grad
@jit(nopython = True)
def tsne_opt(X, num_iters = 1000, perplexity = 40, alpha = 100, momentum = 0.8):
"""Calculate Y, the optimal low-dimensional representation of data matrix X using optimized TSNE.
Inputs:
X: data matrix
num_iters: number of iterations
perplexity: target perplexity for calculating optimal sigmas for P probability matrix
alpha: learning rate
momentum: momentum to speed up gradient descent algorithm
"""
# Initialize Y
np.random.seed(0)
Y = np.random.normal(0, 0.0001, size=(X.shape[0], 2))
D = squared_euc_dist(X)
P = p_ij(D)
P = P + np.transpose(P)
P = P / np.sum(P)
# Initialise past y_t-1 and y_t-2 values (used for momentum)
Y_tmin2 = Y
Y_tmin1 = Y
# gradient descent with momentum
for i in range(num_iters):
Q = q_ij(Y)
grad = grad_C(P, Q, Y)
# Update Y using momentum (pg. 2587)
Y = (Y - (alpha * grad)) + (momentum * (Y_tmin1 - Y_tmin2))
# update values of y_t-1 and y_t-2
Y_tmin2 = Y_tmin1
Y_tmin1 = Y
return Y
# -
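As a quick self-contained check (not part of the original benchmark), the vectorised identity used in the numba `squared_euc_dist` above, ||x_i - x_j||^2 = ||x_i||^2 - 2 x_i . x_j + ||x_j||^2, can be verified against scipy:

```python
import numpy as np
from scipy.spatial import distance

rng = np.random.RandomState(1)
X = rng.randn(10, 4)

# Vectorised form: broadcasting adds norms[j] along columns, then the
# transpose-and-add adds norms[i] along rows
norms = np.power(X, 2).sum(axis=1)
D_vec = np.add(np.add(-2 * np.dot(X, X.T), norms).T, norms)

# Reference result from scipy
D_ref = distance.squareform(distance.pdist(X, 'sqeuclidean'))
assert np.allclose(D_vec, D_ref)
```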
# numb = %timeit -o -r3 -n3 tsne_opt(X[:1000, ])
# ## Cythonize
# %load_ext cython
# + magic_args="-a" language="cython"
#
# from libc cimport math
# cimport cython
# import numpy as np
# cimport numpy as np
# from numpy cimport ndarray
# from scipy.spatial import distance
#
# cdef extern from "numpy/npy_math.h":
# float NPY_INFINITY
#
# @cython.boundscheck(False)
# @cython.wraparound(False)
# cdef p_ij_cy(double[:,:] d_matrix, float perplexity = 40.0, float tol = 1e-5):
# """
# Finds P_ij matrix using binary search to find value of sigmas
#
# Inputs: X- np.array of pairwise distance matrix, fixed perplexity
#
# Output: P-ij matrix
# """
# cdef int s = 10
#
# cdef int n = d_matrix.shape[0], d = d_matrix.shape[1]
#
# cdef np.ndarray[np.float64_t, ndim=2] P = np.zeros(
# (n,d), dtype=np.float64)
#
# cdef float prec_sum = 0.0
#
# # precision = 1/2sigma^2
# for i in range(n):
# prec_min = -NPY_INFINITY
# prec_max = NPY_INFINITY
# prec = 1.0
#
# # implement binary search for optimal sigmas
# for j in range(s):
# sum_p = 0.0
# for k in range(d):
# if k != i:
# P[i, k] = math.exp(-d_matrix[i, k] * prec)
# sum_p += P[i, k]
#
# sum_p_distribution = 0.0
#
# for k in range(d):
# P[i, k] /= sum_p
# sum_p_distribution += d_matrix[i, k] * P[i, k]
#
# # Calculate entropy, H matrix
# H = np.log(sum_p) + prec * sum_p_distribution
# H_diff = H - np.log(perplexity)
#
# if math.fabs(H_diff) <= tol:
# break
#
# if H_diff > 0.0:
# prec_min = prec
# if prec_max == NPY_INFINITY:
# prec *= 2.0
# else:
# prec = (prec + prec_max) / 2.0
# else:
# prec_max = prec
# if prec_min == -NPY_INFINITY:
# prec /= 2.0
# else:
# prec = (prec + prec_min) / 2.0
#
# prec_sum += prec
#
# return P
#
#
# @cython.boundscheck(False)
# @cython.wraparound(False)
# cdef squared_euc_dist(double[:,:] X):
# """Calculate squared euclidean distance for all pairs in a data matrix X with d dimensions and n rows.
# Output is a pairwise distance matrix D that is nxn.
# """
# cdef int n = X.shape[0]
# cdef int d = X.shape[1]
# cdef double diff
# cdef double dist
# cdef double[:, ::1] D = np.empty((n, n), dtype=np.float64)
#
# for i in range(n):
# for j in range(n):
# dist = 0.0
# for k in range(d):
# diff = X[i, k] - X[j, k]
# dist += diff * diff
# D[i, j] = dist
# return np.asarray(D)
#
# @cython.boundscheck(False)
# @cython.wraparound(False)
# cdef q_ij(double[:,::1] Y):
# """Calculate joint probabilities over all points given Y, the low-dimensional map of data points. (pg. 2585)"""
#
#
# cdef int n = Y.shape[0]
# cdef np.ndarray[np.float64_t, ndim=2] Q = np.empty((n, n), dtype=np.float64)
# cdef double[:, ::1] numerator = np.empty((n, n), dtype=np.float64)
#
# numerator = 1/(1. + (squared_euc_dist(Y)))
# Q = numerator / (np.sum(numerator))
# cdef int m = Q.shape[0]
# cdef int d = Q.shape[1]
#
# # q_i|i = 0
# for i in range(m):
# for j in range(d):
# if i==j:
# Q[i,j] = 0
#
# return Q
#
# @cython.boundscheck(False)
# @cython.wraparound(False)
# cdef grad_C(np.ndarray[np.float64_t, ndim=2] P, np.ndarray[np.float64_t, ndim=2] Q, double[:,:] Y):
# """Estimate the gradient of t-SNE cost with respect to Y."""
#
# pq_diff = np.expand_dims((P - Q), 2)
#
# y_diff = np.expand_dims(Y, 1) - np.expand_dims(Y, 0)
#
# y_dist = np.expand_dims(np.power(1 + squared_euc_dist(Y), -1),2)
#
# return 4. * (pq_diff * y_diff * y_dist).sum(axis = 1)
#
# @cython.boundscheck(False)
# @cython.wraparound(False)
# def tsne_opt_cy(double[:,:] X, int num_iters = 1000, int perplexity = 40, int alpha = 100, float momentum = 0.8):
# """Calculate Y, the optimal low-dimensional representation of data matrix X using optimized TSNE.
#
# Inputs:
# X: data matrix
# num_iters: number of iterations
# perplexity: target perplexity for calculating optimal sigmas for P probability matrix
# alpha: learning rate
# momentum: momentum to speed up gradient descent algorithm
# """
#
# # Initialize Y
# np.random.seed(0)
# cdef int n = X.shape[0]
# Y = np.random.normal(0, 0.0001, size=(n, 2))
# D = squared_euc_dist(X)
# cdef np.ndarray[np.float64_t, ndim=2] P = p_ij_cy(D)
# cdef double[:, :] Pt = P.T
# P = P + P.T
# P = P / np.sum(P)
#
# # Initialise past y_t-1 and y_t-2 values (used for momentum)
# Y_tmin2 = Y
# Y_tmin1 = Y
#
# # gradient descent with momentum
# for i in range(num_iters):
#
# Q = q_ij(Y)
# grad = grad_C(P, Q, Y)
#
# # Update Y using momentum (pg. 2587)
# Y = (Y - (alpha * grad)) + (momentum * (Y_tmin1 - Y_tmin2))
#
# # update values of y_t-1 and y_t-2
# Y_tmin2 = Y_tmin1
# Y_tmin1 = Y
#
# return Y
# -
# cy = %timeit -o -r3 -n3 tsne_opt_cy(X[:1000, ])
# ### Initialize JIT with PCA
# initialize X by reducing to 50 dimensions using PCA
train = StandardScaler().fit_transform(X[:1000, ])
X_reduce = PCA(n_components=50).fit_transform(train)
X_reduce.shape
# numb_pca = %timeit -o -r3 -n3 tsne_opt(X_reduce)
# ### JIT with loop distance function
# +
@jit(nopython = True)
def p_ij(d_matrix, perplexity = 40.0, tol = 1e-6):
"""
Finds P_ij matrix using binary search to find value of sigma_i
Inputs: d_matrix- np.array of pairwise distance matrix, with a fixed perplexity
Output: P-ij matrix
"""
(n, d) = d_matrix.shape
P = np.zeros((n, d), dtype=np.float64)
prec_sum = 0.0
# precision = 1/2sigma^2
for i in range(n):
prec_min = -np.inf
prec_max = np.inf
prec = 1.0
# implement binary search for optimal sigmas
for j in range(10): # 10 binary search steps
sum_p = 0.0
for k in range(d):
if k != i:
P[i, k] = np.exp(-d_matrix[i, k] * prec)
sum_p += P[i, k]
sum_p_distribution = 0.0
for k in range(d):
P[i, k] /= (sum_p + 1e-8)
sum_p_distribution += d_matrix[i, k] * P[i, k]
# Calculate entropy, H matrix
H = np.log(sum_p) + prec * sum_p_distribution
H_diff = H - np.log(perplexity)
# check if entropy is within tolerance
if np.fabs(H_diff) <= tol:
break
if H_diff > 0.0:
prec_min = prec
if prec_max == np.inf:
prec *= 2.0
else:
prec = (prec + prec_max) / 2.0
else:
prec_max = prec
if prec_min == -np.inf:
prec /= 2.0
else:
prec = (prec + prec_min) / 2.0
prec_sum += prec
return P
@jit(nopython = True)
def squared_euc_dist(X):
"""Calculate squared euclidean distance for all pairs in a data matrix X with d dimensions and n rows.
Output is a pairwise distance matrix D that is nxn.
"""
n = X.shape[0]
d = X.shape[1]
D = np.empty((n, n), dtype=np.float64)
for i in range(n):
for j in range(n):
dist = 0.0
for k in range(d):
diff = X[i, k] - X[j, k]
dist += diff * diff
D[i, j] = dist
return np.asarray(D)
@jit(nopython = True)
def q_ij(Y):
"""Calculate joint probabilities over all points given Y, the low-dimensional map of data points. (pg. 2585)"""
numerator = np.power(1. + (squared_euc_dist(Y)), -1)
Q = numerator / np.sum(numerator)
# q_i|i = 0
np.fill_diagonal(Q, 0.)
return Q
@jit(nopython = True)
def grad_C(P, Q, Y):
"""Estimate the gradient of t-SNE cost with respect to Y."""
pq_diff = np.expand_dims((P - Q), 2)
y_diff = np.expand_dims(Y, 1) - np.expand_dims(Y, 0)
y_dist = np.expand_dims(np.power(1 + squared_euc_dist(Y), -1), 2)
grad = 4. * (pq_diff * y_diff * y_dist).sum(axis = 1)
return grad
@jit(nopython = True)
def tsne_opt(X, num_iters = 1000, perplexity = 30, alpha = 10, momentum = 0.9):
"""Calculate Y, the optimal low-dimensional representation of data matrix X using optimized TSNE.
Inputs:
X: data matrix
num_iters: number of iterations
perplexity: target perplexity for calculating optimal sigmas for P probability matrix
alpha: learning rate
momentum: momentum to speed up gradient descent algorithm
"""
# Initialize Y
np.random.seed(0)
Y = np.random.normal(0, 0.0001, size=(X.shape[0], 2))
D = squared_euc_dist(X)
P = p_ij(D)
P = P + np.transpose(P)
P = P / np.sum(P)
# Initialise past y_t-1 and y_t-2 values (used for momentum)
Y_tmin2 = Y
Y_tmin1 = Y
# gradient descent with momentum
for i in range(num_iters):
Q = q_ij(Y)
grad = grad_C(P, Q, Y)
# Update Y using momentum (pg. 2587)
Y = (Y - (alpha * grad)) + (momentum * (Y_tmin1 - Y_tmin2))
# update values of y_t-1 and y_t-2
Y_tmin2 = Y_tmin1
Y_tmin1 = Y
return Y
# -
# numb_loop = %timeit -o -r3 -n3 tsne_opt(X[:1000, ])
# ### Try Numba with Looped Distance and Initial PCA
# initialize X by reducing to 50 dimensions using PCA
train = StandardScaler().fit_transform(X[:1000, ])
X_reduce = PCA(n_components=50).fit_transform(train)
# +
@jit(nopython = True)
def p_ij(d_matrix, perplexity = 40.0, tol = 1e-6):
"""
Finds P_ij matrix using binary search to find value of sigma_i
Inputs: d_matrix- np.array of pairwise distance matrix, with a fixed perplexity
Output: P-ij matrix
"""
(n, d) = d_matrix.shape
P = np.zeros((n, d), dtype=np.float64)
prec_sum = 0.0
# precision = 1/2sigma^2
for i in range(n):
prec_min = -np.inf
prec_max = np.inf
prec = 1.0
# implement binary search for optimal sigmas
for j in range(10): # 10 binary search steps
sum_p = 0.0
for k in range(d):
if k != i:
P[i, k] = np.exp(-d_matrix[i, k] * prec)
sum_p += P[i, k]
sum_p_distribution = 0.0
for k in range(d):
P[i, k] /= (sum_p + 1e-8)
sum_p_distribution += d_matrix[i, k] * P[i, k]
# Calculate entropy, H matrix
H = np.log(sum_p) + prec * sum_p_distribution
H_diff = H - np.log(perplexity)
# check if entropy is within tolerance
if np.fabs(H_diff) <= tol:
break
if H_diff > 0.0:
prec_min = prec
if prec_max == np.inf:
prec *= 2.0
else:
prec = (prec + prec_max) / 2.0
else:
prec_max = prec
if prec_min == -np.inf:
prec /= 2.0
else:
prec = (prec + prec_min) / 2.0
prec_sum += prec
return P
@jit(nopython = True)
def squared_euc_dist(X):
"""Calculate squared euclidean distance for all pairs in a data matrix X with d dimensions and n rows.
Output is a pairwise distance matrix D that is nxn.
"""
n = X.shape[0]
d = X.shape[1]
D = np.empty((n, n), dtype=np.float64)
for i in range(n):
for j in range(n):
dist = 0.0
for k in range(d):
diff = X[i, k] - X[j, k]
dist += diff * diff
D[i, j] = dist
return np.asarray(D)
@jit(nopython = True)
def q_ij(Y):
"""Calculate joint probabilities over all points given Y, the low-dimensional map of data points. (pg. 2585)"""
numerator = np.power(1. + (squared_euc_dist(Y)), -1)
Q = numerator / np.sum(numerator)
# q_i|i = 0
np.fill_diagonal(Q, 0.)
return Q
@jit(nopython = True)
def grad_C(P, Q, Y):
"""Estimate the gradient of t-SNE cost with respect to Y."""
pq_diff = np.expand_dims((P - Q), 2)
y_diff = np.expand_dims(Y, 1) - np.expand_dims(Y, 0)
y_dist = np.expand_dims(np.power(1 + squared_euc_dist(Y), -1), 2)
grad = 4. * (pq_diff * y_diff * y_dist).sum(axis = 1)
return grad
@jit(nopython = True)
def tsne_opt(X, num_iters = 1000, perplexity = 40, alpha = 100, momentum = 0.8):
"""Calculate Y, the optimal low-dimensional representation of data matrix X using optimized TSNE.
Inputs:
X: data matrix
num_iters: number of iterations
perplexity: target perplexity for calculating optimal sigmas for P probability matrix
alpha: learning rate
momentum: momentum to speed up gradient descent algorithm
"""
# Initialize Y
np.random.seed(0)
Y = np.random.normal(0, 0.0001, size=(X.shape[0], 2))
D = squared_euc_dist(X)
P = p_ij(D)
P = P + np.transpose(P)
P = P / np.sum(P)
# Initialise past y_t-1 and y_t-2 values (used for momentum)
Y_tmin2 = Y
Y_tmin1 = Y
# gradient descent with momentum
for i in range(num_iters):
Q = q_ij(Y)
grad = grad_C(P, Q, Y)
# Update Y using momentum (pg. 2587)
Y = (Y - (alpha * grad)) + (momentum * (Y_tmin1 - Y_tmin2))
# update values of y_t-1 and y_t-2
Y_tmin2 = Y_tmin1
Y_tmin1 = Y
return Y
# -
# numb_loop_pca = %timeit -o -r3 -n3 tsne_opt(X[:1000, ])
# +
# speed up multiplier
types = [numb, cy, numb_pca, numb_loop, numb_loop_pca]
mult = list(map(lambda x: (normal.average/x.average), types))
import pandas as pd
speed_table_final = pd.DataFrame(mult, index = ['Numba', 'Cython', 'PCA Initialized Numba', 'Numba with Looped Distance', 'PCA Initialized Numba with Looped Distance'], columns = ["Speed-up Multiplier"])
speed_table_final
# -
speed_table_final.to_csv("speed_table_final.csv")
|
Testing, Optimization, Comparisons/Optimization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
X = np.array([[0,0], [0,1], [1,0], [1,1]])
Y = np.array([[0,0,0,1]]).T
X.shape, Y.shape
def sig(z):
return 1/(1 + np.exp(-z))
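A quick self-contained sanity check (illustrative only) of the sigmoid defined above, including the derivative identity sig'(z) = sig(z)(1 - sig(z)) that makes it convenient for backpropagation:

```python
import numpy as np

def sig(z):
    return 1 / (1 + np.exp(-z))

# Basic properties: sig(0) = 0.5 and sig(-z) = 1 - sig(z)
assert sig(0) == 0.5
assert abs(sig(2.0) + sig(-2.0) - 1.0) < 1e-12

# Numerical check of the derivative identity sig'(z) = sig(z) * (1 - sig(z))
z, h = 0.7, 1e-6
numeric = (sig(z + h) - sig(z - h)) / (2 * h)
assert abs(numeric - sig(z) * (1 - sig(z))) < 1e-8
```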
# no hidden layer weights
weights = 2* np.random.random((2, 1)) - 1
bias = 2 * np.random.random(1) - 1
weights, bias
# forward propagation without any hidden layer
output0 = X
output = sig(np.dot(output0, weights) + bias)
output
# one hidden layer weights
wh = 2* np.random.random((2, 2)) - 1
bh = 2* np.random.random((1, 2)) - 1
wo = 2 * np.random.random((2, 1)) - 1
bo = 2 * np.random.random((1,1)) - 1
# forward propagation with one hidden layer
output0 = X
outputHidden = sig(np.dot(output0, wh) + bh)
output = sig(np.dot(outputHidden, wo) + bo)
output
|
Lecture 24 Neural Networks-2/Forward Propagation -1/2.ForwardPropagation-One Hidden layer.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# This notebook accompanies the blog post https://engineering.taboola.com/think-your-data-different.
import pandas as pd
import numpy as np
import itertools
from sklearn.cluster import KMeans
import pprint
# ## 1. Prepare input for node2vec
# We'll use a CSV file where each row represents a single recommendable item: it contains a comma-separated list of the named entities that appear in the item's title.
named_entities_df = pd.read_csv('named_entities.csv')
named_entities_df.head()
# First, we'll have to tokenize the named entities, since `node2vec` expects integers.
tokenizer = dict()
named_entities_df['named_entities'] = named_entities_df['named_entities'].apply(
lambda named_entities: [tokenizer.setdefault(named_entitie, len(tokenizer))
for named_entitie in named_entities.split(',')])
named_entities_df.head()
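A minimal, self-contained illustration (with made-up entity names) of how the `setdefault`-based tokenizer above assigns ids: each previously unseen entity receives the next free integer, and repeated entities reuse their id.

```python
# Toy rows standing in for the 'named_entities' column (hypothetical data)
toy_tokenizer = {}
rows = ['Obama,USA', 'USA,China']
tokens = [[toy_tokenizer.setdefault(e, len(toy_tokenizer)) for e in row.split(',')]
          for row in rows]

assert tokens == [[0, 1], [1, 2]]
assert toy_tokenizer == {'Obama': 0, 'USA': 1, 'China': 2}
```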
pprint.pprint(dict(tokenizer.items()[:5]))
# In order to construct the graph on which we'll run node2vec, we first need to understand which named entities appear together.
pairs_df = named_entities_df['named_entities'].apply(lambda named_entities: list(itertools.combinations(named_entities, 2)))
pairs_df = pairs_df[pairs_df.apply(len) > 0]
pairs_df = pd.DataFrame(np.concatenate(pairs_df.values), columns=['named_entity_1', 'named_entity_2'])
pairs_df.head()
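The pair construction above relies on `itertools.combinations`, which yields every unordered pair once. A tiny self-contained illustration:

```python
import itertools

# For a document whose tokenized entities are [0, 1, 2], the candidate
# co-occurrence edges are all unordered pairs
pairs = list(itertools.combinations([0, 1, 2], 2))
assert pairs == [(0, 1), (0, 2), (1, 2)]

# A document with fewer than two entities contributes no pairs, which is
# why rows with empty pair lists are filtered out above
assert list(itertools.combinations([0], 2)) == []
```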
# Now we can construct the graph. The weight of an edge connecting two named entities will be the number of times these named entities appear together in our dataset.
# +
NAMED_ENTITIES_CO_OCCURENCE_THRESHOLD = 25
edges_df = pairs_df.groupby(['named_entity_1', 'named_entity_2']).size().reset_index(name='weight')
edges_df = edges_df[edges_df['weight'] > NAMED_ENTITIES_CO_OCCURENCE_THRESHOLD]
edges_df[['named_entity_1', 'named_entity_2', 'weight']].to_csv('edges.csv', header=False, index=False, sep=' ')
edges_df.head()
# -
# Next, we'll run `node2vec`, which will output the result embeddings in a file called `emb`.
# We'll use the open source implementation developed by [Stanford](https://github.com/snap-stanford/snap/tree/master/examples/node2vec).
# !python node2vec/src/main.py --input edges.csv --output emb --weighted
# ## 2. Read embedding and run KMeans clustering:
emb_df = pd.read_csv('emb', sep=' ', skiprows=[0], header=None)
emb_df.set_index(0, inplace=True)
emb_df.index.name = 'named_entity'
emb_df.head()
# Each column is a dimension in the embedding space. Each row contains the dimensions of the embedding of one named entity.
# We'll now cluster the embeddings using a simple clustering algorithm such as k-means.
# +
NUM_CLUSTERS = 10
kmeans = KMeans(n_clusters=NUM_CLUSTERS)
kmeans.fit(emb_df)
labels = kmeans.predict(emb_df)
emb_df['cluster'] = labels
clusters_df = emb_df.reset_index()[['named_entity','cluster']]
clusters_df.head()
# -
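A minimal self-contained sketch (toy data, not the real embeddings) of the k-means step used above: points that are close in the embedding space end up with the same cluster label.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated toy "embeddings" should land in two distinct clusters
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

assert labels[0] == labels[1]
assert labels[2] == labels[3]
assert labels[0] != labels[2]
```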
# ## 3. Prepare input for Gephi:
# [Gephi](https://gephi.org) is a nice visualization tool for graphical data.
# We'll output our data into a format recognizable by Gephi.
# +
id_to_named_entity = {named_entity_id: named_entity
for named_entity, named_entity_id in tokenizer.items()}
with open('clusters.gdf', 'w') as f:
f.write('nodedef>name VARCHAR,cluster_id VARCHAR,label VARCHAR\n')
for index, row in clusters_df.iterrows():
f.write('{},{},{}\n'.format(row['named_entity'], row['cluster'], id_to_named_entity[row['named_entity']]))
f.write('edgedef>node1 VARCHAR,node2 VARCHAR, weight DOUBLE\n')
for index, row in edges_df.iterrows():
f.write('{},{},{}\n'.format(row['named_entity_1'], row['named_entity_2'], row['weight']))
# -
# Finally, we can open `clusters.gdf` using Gephi in order to inspect the clusters.
|
node2vec.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Run MCFOST and examine its outputs
# +
import glob, getpass, sys, os
sys.path.append('../library/')
import numpy as np
import matplotlib.pyplot as plt
from mcfost_dust import Dust as McDust
from mcfost_polimg import ImgPol as McImg
# -
wave = 0.5  # default; this value is overwritten if MCFOST is executed
RUN_MC = False
# ## Execute MCFOST
# MCFOST terminal messages are printed in the terminal where this notebook is executed
if RUN_MC:
para = '../para_files/test_21_10_2019.para'
    wave = float(input('Introduce wavelength [microns]: '))
text = f'Executing MCFOST for wavelength {wave} [microns]'
print('=' * len(text))
print(text)
print('=' * len(text))
os.system(f'sh doit.sh {para} {wave}')
# By default the code examines the latest directory
mcdirs = glob.glob('mcfostout_*')
mcdirs.sort()
newest_run = mcdirs[-1] + '/'
# ## Examine DUST Phase Function & Polarisation properties
# +
# Read MCFOST OUTPUT: DUST PROPERTIES ===================
# mcfost_out = McDust(newest_run)
# mcfost_out.read_all()
# mcfost_out.plot_scat()
# -
# ## Examine Polarised Images
imgs = McImg(newest_run, wave)
imgs.read_fitscube()
imgs.get_dims()
imgs.get_imgs()
imgs.make_PSF(gauss_kernel = 2)
imgs.convolve()
imgs.add_noise(SN = 3)
imgs.make_pol()
imgs.plot_pol_imgs() # From Left to right: Disc in Intensity (total scattered light), Stokes Q, Stokes U, P_I = sqrt(Q*Q + U*U)
# +
# TO DO (after Francois explains to me/us how to load the experimental matrices into MCFOST):
# 1) Add image labels, create option to save images in pdf, remove/change axes, etc etc >>>>> DONE
# 2) Convolve with Gaussian to emulate real observations >>>>> DONE
# 3) Add photon/gaussian noise to emulate real observations >>>>> DONE
# 4) Zoom in on images (I zoom in afterwards so as not to lose spatial resolution when MCFOST computes the grid)
|
notebooks/run_mcfost_01.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [](http://colab.research.google.com/github/Yquetzal/tnetwork/blob/master/demo_visu.ipynb)
# # Visualization
# In this notebook, we will introduce the different types of visualization available in tnetwork.
#
# There are two types: visualization of graphs at a particular time (e.g., a particular snapshot), and visualization of the evolution of the community structure (longitudinal visualization)
# If the tnetwork library is not installed, you need to install it, for instance using the following command
# +
# #%%capture #avoid printing output
# #!pip install --upgrade git+https://github.com/Yquetzal/tnetwork.git
# -
import tnetwork as tn
import seaborn as sns
import pandas as pd
import networkx as nx
import numpy as np
# Let's start with a toy example generated using tnetwork generator (see the corresponding documentation for details)
# +
my_scenario = tn.ComScenario()
[com1,com2] = my_scenario.INITIALIZE([6,6],["c1","c2"])
(com2,com3)=my_scenario.THESEUS(com2,delay=20)
my_scenario.DEATH(com2,delay=10)
(generated_network_IG,generated_comunities_IG) = my_scenario.run()
# -
# ## Cross-section visualization
# One way to see a dynamic graph is to plot it as a series of standard static graphs.
# We can start by plotting a single graph at a single time.
#
# There are two libraries that can be used to render the plot: networkx (using matplotlib) or bokeh. matplotlib has the advantage of being more standard, while bokeh provides interactive graphs. This is especially useful for checking which node or community a given element corresponds to in real datasets.
#
# But Bokeh also has weaknesses:
# * It can alter the responsiveness of the notebook if large visualizations are embedded in it
# * In some online notebooks e.g., google colab, embedding bokeh pictures in the notebook does not work well.
#
# As a consequence, it is recommended to embed bokeh visualizations in notebooks only for small graphs, and to open them in new windows for larger ones.
#
# Let's start by plotting the networks in timestep 1 (ts=1).
# First, using matplotlib, the default option.
tn.plot_as_graph(generated_network_IG,ts=1,width=300,height=200)
# Then, using bokeh and the `auto_show` option. It won't work in google colab, see a solution below.
tn.plot_as_graph(generated_network_IG,ts=1,width=600,height=300,bokeh=True,auto_show=True)
# One can plot in a new window (and/or to a file) by omitting the auto_show option and instead receiving a figure, which we can manipulate as usual with bokeh
from bokeh.plotting import figure, output_file, show
fig = tn.plot_as_graph(generated_network_IG,ts=1,width=600,height=300,bokeh=True)
output_file("fig.html")
show(fig)
# Instead of plotting a single graph, we can plot several in a single call. Note that in this case, the position of nodes is common to all plots and is decided based on the cumulative network
from bokeh.plotting import figure, output_file, show
fig = tn.plot_as_graph(generated_network_IG,ts=[1,30,60,80,generated_network_IG.end()-1],width=200,height=300)
# If we have dynamic communities associated with this dynamic graph, we can plot them too. Note that the same function accepts snapshots and interval graphs, but both the graph and the community structure must have the same format (SN or IG)
from bokeh.plotting import figure, output_file, show
fig = tn.plot_as_graph(generated_network_IG,generated_comunities_IG,ts=[1,30,60,80,generated_network_IG.end()-1],auto_show=True,width=200,height=300)
# ### Longitudinal Visualization
# The second type of visualization plots only nodes and not edges.
#
# Time corresponds to the x axis, while each node has a fixed position on the y axis.
#
# It is possible to plot only a dynamic graph, without communities. White means that the node is not present or has no edges
plot = tn.plot_longitudinal(generated_network_IG,height=300)
# Or only communities, without a graph:
plot = tn.plot_longitudinal(communities=generated_comunities_IG,height=300)
# Or both on the same graph. The grey color always corresponds to nodes without communities. Other colors correspond to communities
plot = tn.plot_longitudinal(generated_network_IG,communities=generated_comunities_IG,height=300)
# It is possible to plot only a subset of nodes, and/or to plot them in a particular order
plot = tn.plot_longitudinal(generated_network_IG,communities=generated_comunities_IG,height=300,nodes=["n_t_0000_0008","n_t_0000_0002"])
# ### Timestamps
# It is common, when manipulating real data, to have dates in the form of timestamps. There is an option to automatically transform timestamps to dates on the x axis : `to_datetime`
#
# We give an example using the sociopatterns dataset
sociopatterns = tn.graph_socioPatterns2012(format=tn.DynGraphSN)
#It takes a few seconds
to_plot_SN = tn.plot_longitudinal(sociopatterns,height=500,to_datetime=True)
# ### Snapshot duration
# By default, snapshots last until the next snapshot. If snapshots have a fixed duration, there is a parameter to indicate this duration: `sn_duration`
#in sociopatterns, there is an observed snapshot every 20 seconds.
to_plot_SN = tn.plot_longitudinal(sociopatterns,height=500,to_datetime=True,sn_duration=20)
# ### Bokeh longitudinal plots
# Longitudinal plots can also use bokeh. Interactive plots are clearly useful for zooming in on details or checking the name of communities or nodes. However, bokeh plots with a large number of elements can quickly become unresponsive, which is why they are not used by default.
#
# By adding the parameter `bokeh=True`, you can obtain a bokeh plot exactly like for the cross-section graphs, with or without the `auto_show` option.
tn.plot_longitudinal(generated_network_IG,communities=generated_comunities_IG,height=300,bokeh=True,auto_show=True)
from bokeh.plotting import figure, output_file, show
fig = tn.plot_longitudinal(sociopatterns,bokeh=True)
output_file("fig.html")
show(fig)
|
demo_visu.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lesson 14.4 - Plotting with Seaborn: Visualizing linear relationships
#
# *Facsimile of [Seaborn tutorial](http://stanford.edu/~mwaskom/software/seaborn/tutorial.html).*
#
# Many datasets contain multiple quantitative variables, and the goal of an analysis is often to relate those variables to each other. We previously discussed (in the distributions tutorial) functions that can accomplish this by showing the joint distribution of two variables. It can be very helpful, though, to use statistical models to estimate a simple relationship between two noisy sets of observations. The functions discussed in this chapter will do so through the common framework of linear regression.
#
# In the spirit of Tukey, the regression plots in seaborn are primarily intended to add a visual guide that helps to emphasize patterns in a dataset during exploratory data analyses. That is to say that seaborn is not itself a package for statistical analysis. To obtain quantitative measures related to the fit of regression models, you should use [statsmodels](http://statsmodels.sourceforge.net/). The goal of seaborn, however, is to make exploring a dataset through visualization quick and easy, as doing so is just as (if not more) important than exploring a dataset through tables of statistics.
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
np.random.seed(sum(map(ord, "regression")))
tips = sns.load_dataset("tips")
tips.head()
# Functions to draw linear regression models
# ------------------------------------------
#
# Two main functions in seaborn are used to visualize a linear relationship as determined through regression. These functions, `regplot` and `lmplot`, are closely related and share much of their core functionality. It is important to understand the ways they differ, however, so that you can quickly choose the correct tool for a particular job.
#
# * `sns.regplot()` -- plot data and a linear regression model fit
# * `sns.lmplot()` -- plot data and regression model fits across a `FacetGrid`
#
# The [`FacetGrid`](https://seaborn.pydata.org/generated/seaborn.FacetGrid.html) is an object that links a Pandas DataFrame to a matplotlib figure with a particular structure.
#
# In particular, `FacetGrid` is used to draw plots with multiple Axes where each Axes shows the same relationship conditioned on different levels of some variable. It’s possible to condition on up to three variables by assigning variables to the rows and columns of the grid and using different colors for the plot elements.
#
# In the simplest invocation, both `regplot` and `lmplot` draw a scatterplot of two variables, ``x`` and ``y``, and then fit the regression model ``y ~ x`` and plot the resulting regression line and a 95% confidence interval for that regression:
sns.regplot(x="total_bill", y="tip", data=tips)
sns.lmplot(x="total_bill", y="tip", data=tips)
# You should note that the resulting plots are identical, except that the figure shapes are different. We will explain why this is shortly. For now, the other main difference to know about is that `regplot` accepts the ``x`` and ``y`` variables in a variety of formats including simple numpy arrays, pandas ``Series`` objects, or as references to variables in a pandas ``DataFrame`` object passed to ``data``. In contrast, `lmplot` has ``data`` as a required parameter and the ``x`` and ``y`` variables must be specified as strings. This data format is called "long-form" or ["tidy"](http://vita.had.co.nz/papers/tidy-data.pdf) data. Other than this input flexibility, `regplot` possesses a subset of `lmplot`'s features, so we will demonstrate them using the latter.
#
# It's possible to fit a linear regression when one of the variables takes discrete values, however, the simple scatterplot produced by this kind of dataset is often not optimal:
sns.lmplot(x="size", y="tip", data=tips)
# One option is to add some random noise ("jitter") to the discrete values to make the distribution of those values more clear. Note that jitter is applied only to the scatterplot data and does not influence the regression line fit itself:
sns.lmplot(x="size", y="tip", data=tips, x_jitter=.05)
# A second option is to collapse over the observations in each discrete bin to plot an estimate of central tendency along with a confidence interval:
sns.lmplot(x="size", y="tip", data=tips, x_estimator=np.mean)
# Fitting different kinds of models
# ---------------------------------
#
# The simple linear regression model used above is very simple to fit, however, it is not appropriate for some kinds of datasets. The [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet) dataset shows a few examples where simple linear regression yields an identical estimate of a relationship even though simple visual inspection clearly shows differences. For example, in the first case, the linear regression is a good model:
anscombe = sns.load_dataset("anscombe")
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'I'"),
ci=None, scatter_kws={"s": 80})
# The linear relationship in the second dataset is the same, but the plot clearly shows that this is not a good model:
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
ci=None, scatter_kws={"s": 80})
# In the presence of these kinds of higher-order relationships, `lmplot` and `regplot` can fit a polynomial regression model to explore simple kinds of nonlinear trends in the dataset:
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
order=2, ci=None, scatter_kws={"s": 80})
# A different problem is posed by "outlier" observations that deviate for some reason other than the main relationship under study:
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
ci=None, scatter_kws={"s": 80})
# In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals:
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
robust=True, ci=None, scatter_kws={"s": 80})
# When the ``y`` variable is binary, simple linear regression also "works" but provides implausible predictions:
tips["big_tip"] = (tips.tip / tips.total_bill) > .15
sns.lmplot(x="total_bill", y="big_tip", data=tips,
y_jitter=.03)
# The solution in this case is to fit a logistic regression, such that the regression line shows the estimated probability of ``y = 1`` for a given value of ``x``:
sns.lmplot(x="total_bill", y="big_tip", data=tips,
logistic=True, y_jitter=.03)
# Note that the logistic regression estimate is considerably more computationally intensive than simple regression (this is true of robust regression as well), and as the confidence interval around the regression line is computed using a bootstrap procedure, you may wish to turn this off for faster iteration (using ``ci=None``).
#
# An altogether different approach is to fit a nonparametric regression using a [lowess smoother](https://en.wikipedia.org/wiki/Local_regression). This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all:
sns.lmplot(x="total_bill", y="tip", data=tips,
lowess=True)
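# The note above about disabling the bootstrapped confidence interval can be shown with a tiny self-contained example on synthetic data; `ci=None` is the only change from a default `regplot` call:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; safe in headless environments
import numpy as np
import pandas as pd
import seaborn as sns

# synthetic linear data with a little noise
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": np.linspace(0, 10, 100)})
df["y"] = 2.0 * df.x + rng.normal(0, 1.0, 100)

# ci=None skips the bootstrap used for the confidence band,
# which makes the plot render noticeably faster
ax = sns.regplot(x="x", y="y", data=df, ci=None)
```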
# The `residplot` function can be a useful tool for checking whether the simple regression model is appropriate for a dataset. It fits and removes a simple linear regression and then plots the residual values for each observation. Ideally, these values should be randomly scattered around ``y = 0``:
sns.residplot(x="x", y="y", data=anscombe.query("dataset == 'I'"),
scatter_kws={"s": 80})
# If there is structure in the residuals, it suggests that simple linear regression is not appropriate:
sns.residplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
scatter_kws={"s": 80})
# Conditioning on other variables
# -------------------------------
#
# The plots above show many ways to explore the relationship between a pair of variables. Often, however, a more interesting question is "how does the relationship between these two variables change as a function of a third variable?" This is where the difference between `regplot` and `lmplot` appears. While `regplot` always shows a single relationship, `lmplot` combines `regplot` with `FacetGrid` to provide an easy interface to show a linear regression on "faceted" plots that allow you to explore interactions with up to three additional categorical variables.
#
# The best way to separate out a relationship is to plot both levels on the same axes and to use color to distinguish them:
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips)
# In addition to color, it's possible to use different scatterplot markers to make plots that reproduce better in black and white. You also have full control over the colors used:
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips,
markers=["o", "x"], palette="Set1")
# To add another variable, you can draw multiple "facets", with each level of the variable appearing in the rows or columns of the grid:
sns.lmplot(x="total_bill", y="tip", data=tips, hue="smoker", col="time")
sns.lmplot(x="total_bill", y="tip", hue="smoker",
col="time", row="sex", data=tips)
# Controlling the size and shape of the plot
# ------------------------------------------
#
# Earlier we noted that the default plots made by `regplot` and `lmplot` look the same but are drawn on axes that have a different size and shape. This is because `regplot` is an "axes-level" function that draws onto a specific axes. This means that you can make multi-panel figures yourself and control exactly where the regression plot goes. If no axes is provided, it simply uses the "currently active" axes, which is why the default plot has the same size and shape as most other matplotlib functions. To control the size, you need to create a figure object yourself.
f, ax = plt.subplots(figsize=(5, 6))
sns.regplot(x="total_bill", y="tip", data=tips, ax=ax)
# In contrast, the size and shape of the `lmplot` figure is controlled through the `FacetGrid` interface using the ``size`` and ``aspect`` parameters, which apply to each *facet* in the plot, not to the overall figure itself:
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
col_wrap=2, size=3)
sns.lmplot(x="total_bill", y="tip", col="day", data=tips,
aspect=.5)
# Plotting a regression in other contexts
# ---------------------------------------
#
# A few other seaborn functions use `regplot` in the context of a larger, more complex plot. The first is the `jointplot` function that we introduced in the distributions tutorial. In addition to the plot styles previously discussed, `jointplot` can use `regplot` to show the linear regression fit on the joint axes by passing ``kind="reg"``:
sns.jointplot(x="total_bill", y="tip", data=tips, kind="reg")
# Using the `pairplot` function with ``kind="reg"`` combines `regplot` and `PairGrid` to show the linear relationship between variables in a dataset. Take care to note how this is different from `lmplot`. In the figure below, the two axes don't show the same relationship conditioned on two levels of a third variable; rather, `PairGrid` is used to show multiple relationships between different pairings of the variables in a dataset:
sns.pairplot(tips, x_vars=["total_bill", "size"], y_vars=["tip"],
size=5, aspect=.8, kind="reg")
# Like `lmplot`, but unlike `jointplot`, conditioning on an additional categorical variable is built into `pairplot` using the ``hue`` parameter:
sns.pairplot(tips, x_vars=["total_bill", "size"], y_vars=["tip"],
hue="smoker", size=5, aspect=.8, kind="reg")
|
lessons/lesson14.4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Keras plus TensorFlow
# ### Installation Instructions
# * TensorFlow (not GPU) https://www.tensorflow.org/install/install_windows
# * Keras https://keras.io/#installation
# ### Task: below is a diabetes dataset example. Do the same with the Iris dataset or your project data
#
# ### Codebook of the diabetes dataset
# 1. Number of times pregnant
# 2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test
# 3. Diastolic blood pressure (mm Hg)
# 4. Triceps skin fold thickness (mm)
# 5. 2-Hour serum insulin (mu U/ml)
# 6. Body mass index (weight in kg/(height in m)^2)
# 7. Diabetes pedigree function
# 8. Age (years)
# 9. Class variable (0 or 1)
#
# Class Distribution: (class value 1 is interpreted as "tested positive for
# diabetes")
#
# Class Value Number of instances
# 0 500
# 1 268
# Create first network with Keras
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.data", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=2)
# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
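# The rounding step above amounts to thresholding the predicted probabilities at 0.5 and comparing them to the labels, which is what `metrics=['accuracy']` reports during training. A plain-numpy sketch of that computation (the toy inputs below are hypothetical):

```python
import numpy as np

def binary_accuracy(y_true, y_prob, threshold=0.5):
    """Fraction of thresholded predictions that match the labels."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return float(np.mean(y_pred == np.asarray(y_true)))

# toy check with hypothetical probabilities: predictions [0, 1, 0, 0]
# against labels [0, 1, 1, 0] give 3 matches out of 4
acc = binary_accuracy([0, 1, 1, 0], [0.2, 0.8, 0.4, 0.1])  # -> 0.75
```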
# +
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils import np_utils
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
# %matplotlib inline
dataset = pd.read_csv('iris.csv')
#change string value to numeric
dataset.loc[dataset['species']=='setosa', 'species'] = 0
dataset.loc[dataset['species']=='versicolor', 'species'] = 1
dataset.loc[dataset['species']=='virginica', 'species'] = 2
dataset = dataset.apply(pd.to_numeric)
#change dataframe to array
dataset_array = dataset.values
#split x and y (feature and target)
X = dataset_array[:,:4]
Y =dataset_array[:,4]
Y = np_utils.to_categorical(Y)
print(Y.shape)
# model = Sequential()#model is a linear stack of layers.
# model.add(Dense(10, input_dim=4, init='uniform', activation='relu'))
# model.add(Dense(4, init='uniform', activation='relu'))
# model.add(Dense(3, init='uniform', activation='sigmoid'))
# # Compile model
# #Before training a model, you need to configure the learning process
# #A loss function. This is the objective that the model will try to minimize
# #model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.compile(loss='mean_squared_error', optimizer='adam',metrics=['accuracy'])
# #get prediction
# # Fit the model
# model.fit(X, Y, epochs=150, batch_size=10)
# # calculate predictions
# predictions = model.predict(X)#Generates output predictions for the input samples
# # round predictions
# rounded = [[round(x[0]),round(x[1])]for x in predictions]
# #print (predictions)
# print(rounded)
# print ("Target :")
# print (np.asarray(Y,dtype="int32"))
# dataset.plot(kind="scatter", x="sepal_length", y="sepal_width")
# sns.FacetGrid(dataset, hue="species", size=10) \
# .map(plt.scatter, "sepal_length", "sepal_width") \
# .add_legend()
# +
# Dependencies
import tensorflow as tf
import pandas as pd
import numpy as np
# Make results reproducible
seed = 1234
np.random.seed(seed)
tf.set_random_seed(seed)
# Loading the dataset
dataset = pd.read_csv('Iris_Dataset.csv')
dataset = pd.get_dummies(dataset, columns=['Species']) # One Hot Encoding
values = list(dataset.columns.values)
y = dataset[values[-3:]]
y = np.array(y, dtype='float32')
X = dataset[values[1:-3]]
X = np.array(X, dtype='float32')
# Shuffle Data
indices = np.random.choice(len(X), len(X), replace=False)
X_values = X[indices]
y_values = y[indices]
# Creating a Train and a Test Dataset
test_size = 10
X_test = X_values[-test_size:]
X_train = X_values[:-test_size]
y_test = y_values[-test_size:]
y_train = y_values[:-test_size]
# Session
sess = tf.Session()
# Interval / Epochs
interval =1
epoch = 150
# Initialize placeholders
X_data = tf.placeholder(shape=[None, 4], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 3], dtype=tf.float32)
# Input neurons : 4
# Hidden neurons : 8
# Output neurons : 3
hidden_layer_nodes = 8
# Create variables for Neural Network layers
w1 = tf.Variable(tf.random_normal(shape=[4,hidden_layer_nodes])) # Inputs -> Hidden Layer
b1 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes])) # First Bias
w2 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes,3])) # Hidden layer -> Outputs
b2 = tf.Variable(tf.random_normal(shape=[3])) # Second Bias
# Operations
hidden_output = tf.nn.relu(tf.add(tf.matmul(X_data, w1), b1))
final_output = tf.nn.softmax(tf.add(tf.matmul(hidden_output, w2), b2))
# Cost Function
loss = tf.reduce_mean(-tf.reduce_sum(y_target * tf.log(final_output), axis=1))  # sum over classes, average over the batch
# Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training
print('Training the model...')
for i in range(1, (epoch + 1)):
sess.run(optimizer, feed_dict={X_data: X_train, y_target: y_train})
if i % interval == 0:
print('Epoch', i, '|', 'Loss:', sess.run(loss, feed_dict={X_data: X_train, y_target: y_train}))
# Prediction
print()
for i in range(len(X_test)):
print('Actual:', y_test[i], 'Predicted:', np.rint(sess.run(final_output, feed_dict={X_data: [X_test[i]]})))
# -
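# The cost function in the graph above pairs a softmax output with a categorical cross-entropy. The same two operations in plain numpy look like this (the `1e-12` floor is an assumption added here to keep the log finite, not part of the graph above):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_prob):
    # sum over classes (axis=1), then average over the batch
    return float(np.mean(-np.sum(y_true * np.log(y_prob + 1e-12), axis=1)))

# two samples, three classes
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.2]])
probs = softmax(logits)
y_onehot = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
loss = cross_entropy(y_onehot, probs)
```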
|
lab10/lab10_new.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: DESI 19.12
# language: python
# name: desi-19.12
# ---
# # determine nominal BGS exposure time from spectral simulations
# In this notebook, we will use BGS spectral simulations to determine the nominal exposure time. We define nominal exposure time as the exposure time required to achieve 95% redshift success for objects brighter than r<19.5 during **nominal dark conditions**.
#
# This notebook is meant to be run on [jupyter.nersc.gov](https://jupyter.nersc.gov/) using the `master` DESI jupyter kernel. See [wiki page](https://desi.lbl.gov/trac/wiki/Computing/JupyterAtNERSC) for details on using jupyter on NERSC and how to set up a DESI jupyter kernel.
#
# If you have any issues with running this notebook, please let me know on the DESI slack channel or by e-mail at <EMAIL>
# First, let's install `feasibgs`, a python package for the BGS spectral simulations
# !pip install git+https://github.com/desi-bgs/feasiBGS.git --upgrade
import os, sys
import numpy as np
# add the directory where feasibgs was installed to the python path
sys.path.append(os.path.join(os.environ['HOME'], '.local/lib/python3.8/site-packages'))
import fitsio
# --- feasibgs ---
from feasibgs import util as UT
from feasibgs import cmx as BGS_cmx
from feasibgs import spectral_sims as BGS_spec_sim
from feasibgs import forwardmodel as FM
# for making pretty plots
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
# # load nominal dark sky
# I've implemented a function in `feasibgs.spectral_sims` to quickly read in the nominal dark sky surface brightness.
Idark = BGS_spec_sim.nominal_dark_sky()
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(Idark[0].value, Idark[1].value)
sub.set_xlabel('wavelength', fontsize=20)
sub.set_xlim(3.6e3, 9.8e3)
sub.set_ylabel('sky surface brightness', fontsize=20)
sub.set_ylim(0., 10)
# # generate BGS spectral simulations
# We've read in the sky brightness for the exposure we want to simulate. Next we'll use the BGS spectral simulation to simulate spectra for this exposure.
#
# We begin by reading in noiseless BGS source spectra then run it through the `feasibgs` forward model. For details on how the BGS source spectra are constructed see [`feasibgs/spectral_sims.py`](https://github.com/desi-bgs/feasiBGS/blob/63975b1e60f6f93f3b5020ee51ca565f325b918d/feasibgs/spectral_sims.py#L38)
# +
# function for reading in BGS source spectra
# BGS_spec_sim.simulated_GAMA_source_spectra?
# -
# read in source wavelength, flux, and galaxy properties.
wave_s, flux_s, prop = BGS_spec_sim.simulated_GAMA_source_spectra(emlines=True)
# extract true redshift and r-band magnitude for the simulated galaxies
ztrue = prop['zred'] # redshift
r_mag = prop['r_mag'] # Legacy Survey r mag
# ## BGS forward model
# Now we can run these source spectra through the BGS forward model with nominal dark sky and different exposure times
fdesi = FM.fakeDESIspec()
for exptime in [160, 170, 180, 200, 210, 240, 270]:
fspec = 'bgs_spectral_sim.nominal_dark.texp%.fs.fits' % exptime
if not os.path.isfile(fspec):
bgs = fdesi.simExposure(
wave_s,
flux_s,
exptime=exptime,
airmass=1.1,
Isky=[Idark[0].value, Idark[1].value],
filename=fspec
)
else:
from desispec.io.spectra import read_spectra
bgs = read_spectra(fspec)
print(bgs.ivar['b'][0])
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
for band in ['b', 'r', 'z']:
sub.plot(bgs.wave[band], bgs.flux[band][1], label={'b': 'BGS spectrum for this exposure', 'r':None, 'z':None}[band])
sub.plot(wave_s, flux_s[1], c='k', ls='--', label='Source Spectra')
sub.legend(loc='upper left', fontsize=20)
sub.set_xlabel('wavelength', fontsize=20)
sub.set_xlim(3.6e3, 9.8e3)
sub.set_ylabel('flux', fontsize=20)
sub.set_ylim(-5, 10)
# # run `redrock` on spectral sims
# Now that we have simulated BGS spectra, let's run them through `redrock`. This will take a while, so go grab a coffee ... or twenty...
for exptime in [160, 170, 180, 200, 210, 240, 270]:
f_spec = 'bgs_spectral_sim.nominal_dark.texp%.fs.fits' % exptime
f_rr_h5 = 'redrock.bgs_spectral_sim.nominal_dark.texp%.fs.h5' % exptime
f_rr = 'zbest.bgs_spectral_sim.nominal_dark.texp%.fs.fits' % exptime
if os.path.isfile(f_rr): continue
print(f_rr)
# !rrdesi -o $f_rr_h5 -z $f_rr $f_spec
# # calculate and compare redshift success rates
# We can now use outputs from `redrock` to estimate the BGS redshift success rate for the different exposure times and determine which meets the 95% redshift success rate requirement for r < 19.5
#
# We'll be using a number of convenience functions in `feasibgs`.
zsuccesses, rlims = [], []
for exptime in [160, 170, 180, 200, 210, 240, 270]:
# read redrock output
frr = 'zbest.bgs_spectral_sim.nominal_dark.texp%.fs.fits' % exptime
rr = fitsio.read(frr)
# redshift success defined as |z_redrock - z_true|/(1+z_true) < 0.003 and ZWARN flag = 0
zsuccess = UT.zsuccess(rr['Z'], ztrue, rr['ZWARN'], deltachi2=rr['DELTACHI2'], min_deltachi2=40)
zsuccess[r_mag < 18.2] = True # this is to account for the unusual bright z failures, which we think will eventually be ironed out
zsuccesses.append(zsuccess)
# determine r magnitude limit where we reach 95% redshift completeness
for _r in np.linspace(17, 20, 31):
if (np.sum(zsuccess[r_mag <= _r]) / np.sum(r_mag <= _r)) >= 0.95: rlim = _r
rlims.append(rlim)
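# The success criterion used above (|z_redrock - z_true| / (1 + z_true) < 0.003 with ZWARN = 0) is easy to state in plain numpy. This is a sketch of the criterion only — the actual `UT.zsuccess` also applies the delta-chi2 cut:

```python
import numpy as np

def z_success(z_rr, z_true, zwarn, dz_tol=0.003):
    """True where redrock's redshift matches the truth within tolerance
    and no warning flags were raised."""
    z_rr, z_true = np.asarray(z_rr), np.asarray(z_true)
    dz = np.abs(z_rr - z_true) / (1.0 + z_true)
    return (dz < dz_tol) & (np.asarray(zwarn) == 0)

# hypothetical values: a good fit, a catastrophic failure, and a flagged fit
ok = z_success(z_rr=[0.1000, 0.2500, 0.1000],
               z_true=[0.1001, 0.1000, 0.1000],
               zwarn=[0, 0, 4])  # -> [True, False, False]
```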
# +
fig = plt.figure(figsize=(8,6))
sub = fig.add_subplot(111)
sub.plot([16, 21], [1, 1], c='k', ls=':')
for i, exptime in enumerate([160, 170, 180, 200, 210, 240, 270]):
# calculate redshift success rate
wmean, rate, err_rate = UT.zsuccess_rate(r_mag, zsuccesses[i], range=[15, 22], nbins=28, bin_min=10)
sub.errorbar(wmean, rate, err_rate, fmt='.C%i' % i, label=r'$t_{\rm exp} = %.fs, r_{\rm lim} = %.1f$' % (exptime, rlims[i]))
sub.plot(wmean, rate, c='C%i' % i)
sub.legend(loc='lower left', handletextpad=0., fontsize=15)
sub.set_xlabel('$r$ magnitude', fontsize=20)
sub.set_xlim(16, 20.5)
sub.set_ylabel('redrock $z$ success rate', fontsize=20)
sub.set_ylim(0.6, 1.1)
sub.set_yticks([0.6, 0.7, 0.8, 0.9, 1.])
# -
# Based on the redshift success rate the nominal exposure time should be somewhere between 180-190s
zsuccesses, rlims = [], []
for exptime in [160, 170, 180, 200, 210, 240, 270]:
# read redrock output
frr = 'zbest.bgs_spectral_sim.nominal_dark.texp%.fs.fits' % exptime
rr = fitsio.read(frr)
# redshift success defined as |z_redrock - z_true|/(1+z_true) < 0.003 and ZWARN flag = 0
zsuccess = UT.zsuccess(rr['Z'], ztrue, rr['ZWARN'], deltachi2=rr['DELTACHI2'], min_deltachi2=100)
zsuccess[r_mag < 18.2] = True # this is to account for the unusual bright z failures, which we think will eventually be ironed out
zsuccesses.append(zsuccess)
# determine r magnitude limit where we reach 95% redshift completeness
for _r in np.linspace(17, 20, 61):
if (np.sum(zsuccess[r_mag <= _r]) / np.sum(r_mag <= _r)) >= 0.95: rlim = _r
rlims.append(rlim)
# +
fig = plt.figure(figsize=(8,6))
sub = fig.add_subplot(111)
sub.plot([16, 21], [1, 1], c='k', ls=':')
for i, exptime in enumerate([160, 170, 180, 200, 210, 240, 270]):
# calculate redshift success rate
wmean, rate, err_rate = UT.zsuccess_rate(r_mag, zsuccesses[i], range=[15, 22], nbins=28, bin_min=10)
sub.errorbar(wmean, rate, err_rate, fmt='.C%i' % i, label=r'$t_{\rm exp} = %.fs, r_{\rm lim} = %.2f$' % (exptime, rlims[i]))
sub.plot(wmean, rate, c='C%i' % i)
sub.legend(loc='lower left', handletextpad=0., fontsize=15)
sub.set_xlabel('$r$ magnitude', fontsize=20)
sub.set_xlim(16, 20.5)
sub.set_ylabel('redrock $z$ success rate', fontsize=20)
sub.set_ylim(0.6, 1.1)
sub.set_yticks([0.6, 0.7, 0.8, 0.9, 1.])
# -
|
doc/nb/nominal_exposure_time.ipynb
|
# ---
# title: "Reindexing DataFrame from a list"
# date: 2020-04-12T14:41:32+02:00
# author: "<NAME>"
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd

months = ['Apr', 'Jul', 'Jan', 'Oct']
temp = [61.956044, 68.934783, 32.133333, 43.434783]
mytemp = {'Mean TemperatureF': temp}
weather1 = pd.DataFrame(mytemp)
weather1.index=months
print(weather1)
year = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
# +
# Import pandas
import pandas as pd
# Reindex weather1 using the list year: weather2
weather2 = weather1.reindex(year)
# Print weather2
print(weather2)
# Reindex weather1 using the list year with forward-fill: weather3
weather3 = weather1.reindex(year).ffill()
# Print weather3
print(weather3)
# -
|
courses/datacamp/notes/python/pandas/reindexfromlist.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# ## Reflect Tables into SQLALchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///hawaii.sqlite")
# +
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# -
# View all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# ## Bonus Challenge Assignment: Temperature Analysis II
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, maximum, and average temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
    TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# For example
print(calc_temps('2012-02-28', '2012-03-05'))
# +
# Use the function `calc_temps` to calculate the tmin, tavg, and tmax
# for a year in the data set
# These are the dates for the trip
start_date = '2017-08-01'
end_date = '2017-08-07'
# Calling the function saving and printing the results
year = calc_temps(start_date, end_date)
print(year)
# +
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for bar height (y value)
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
plt.title('Trip Avg Temp')
plt.ylabel('Temperature (°F)')
plt.xticks([0,1,2], "")
plt.margins(x=1, y=0.2)
plt.bar(1, year[0][1])
plt.errorbar(1, year[0][1], yerr=year[0][2]-year[0][0], ecolor='red')
plt.show()
# -
# ### Daily Rainfall Average
# +
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's
# matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Changing dates to the previous year's values
trip_start = str(int(start_date[0:4]) - 1) + start_date[4:10]
trip_end = str(int(end_date[0:4]) - 1) + end_date[4:10]
trip_stats = session.query(Measurement.station, Measurement.date, func.sum(Measurement.prcp))\
    .filter(Measurement.date >= trip_start,
            Measurement.date <= trip_end)\
    .group_by(Measurement.station,
              Measurement.date)\
    .order_by(func.sum(Measurement.prcp).desc())
for _ in trip_stats:
print(_)
# +
# Use this function to calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
# For example
daily_normals("01-01")
# +
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
start_date = '2017-08-01'
end_date = '2017-08-07'
# Use the start and end date to create a range of dates
# Changing strings to timedate
start_date = dt.datetime.strptime(start_date, '%Y-%m-%d')
end_date = dt.datetime.strptime(end_date, '%Y-%m-%d')
# Creating the date range
delta = dt.timedelta(days=1)
date_range = []
while start_date <= end_date:
date_range.append(start_date)
start_date += delta
# Strip off the year and save a list of strings in the format %m-%d
trip_dates = []
for _ in date_range:
trip_dates.append(dt.datetime.strftime(_, '%m-%d'))
# Use the `daily_normals` function to calculate the normals for each date string
# and append the results to a list called `normals`.
normals = []
for _ in trip_dates:
normals.append(daily_normals(_))
normals
# +
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Initializing lists
min_temp = []
avg_temp = []
max_temp = []
# Unpacking the results
for _ in normals:
min_temp.append(_[0][0])
avg_temp.append(_[0][1])
max_temp.append(_[0][2])
# Creating dictionaries for the dataframe
trip_df = pd.DataFrame({'date': trip_dates, 'min': min_temp, 'avg': avg_temp, 'max': max_temp}).set_index('date')
trip_df
# -
# Plot the daily normals as an area plot with `stacked=False`
trip_df.plot.area(stacked=False, title = 'Normal Temperatures for date range')
# ## Close Session
session.close()
|
temp_analysis_bonus_2_starter.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp bayesian_logistic_regression
# -
# # bayesian_logistic_regression
#
# > API details.
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
import matplotlib.pyplot as plt
from scipy.optimize import fmin
import seaborn as sns
sns.set_context('talk')
sns.set_style('white')
RANDOM_SEED = 20090425
# -
# the very low birthweight infants dataset
vlbw = pd.read_csv('data/vlbw.csv', index_col=0).dropna(axis=0, subset=['ivh', 'pneumo'])
ivh = vlbw.ivh.isin(['definite', 'possible']).astype(int).values
pneumo = vlbw.pneumo.values
print(vlbw.groupby('pneumo').bwt.mean())
bwt_kg = vlbw.bwt.values/1000
bwt_kg.shape
# +
jitter = np.random.normal(scale=0.02, size=len(vlbw))
plt.scatter(bwt_kg, ivh + jitter, alpha=0.3)
plt.yticks([0,1])
plt.ylabel("IVH")
plt.xlabel("Birth weight")
# +
sum_of_squares = lambda θ, x, y: np.sum((y - θ[0] - θ[1]*x) ** 2)
betas_vlbw = fmin(sum_of_squares, [1,1], args=(bwt_kg,ivh))
# -
betas_vlbw
plt.scatter(bwt_kg, ivh + jitter, alpha=0.3)
plt.yticks([0,1])
plt.ylabel("IVH")
plt.xlabel("Birth weight")
plt.plot([0,2.5], [betas_vlbw[0] + betas_vlbw[1]*0, betas_vlbw[0] + betas_vlbw[1]*2.5], 'r-')
# ### Stochastic model
# $$\text{logit}(p) = \log\left[\frac{p}{1-p}\right] = x$$
logit = lambda p: np.log(p/(1.-p))
unit_interval = np.linspace(0,1)
plt.plot(unit_interval/(1-unit_interval), unit_interval)
plt.xlabel(r'$p/(1-p)$')
plt.ylabel('p');
plt.plot(logit(unit_interval), unit_interval)
plt.xlabel('logit(p)')
plt.ylabel('p');
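The logit above maps probabilities onto the whole real line; its inverse is the logistic (sigmoid) function. A quick standalone check of the round trip (the local `expit` helper here mirrors `scipy.special.expit` and is defined inline only for illustration):

```python
import numpy as np

logit = lambda p: np.log(p / (1. - p))
expit = lambda x: 1. / (1. + np.exp(-x))  # inverse of the logit

p = np.linspace(0.01, 0.99, 9)
print(np.allclose(expit(logit(p)), p))  # True
```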
|
04_bayesian_logistic_regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Margin Clearance
#
# This is an example of how to extract tumour margins from the segmentations. It is not an automated script for all images, but outputs the margins for the 10x test set.
# +
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle as BoundingBox
import skimage.io as io
from skimage.measure import label, regionprops
from skimage.morphology import closing, square, disk
from skimage.morphology import binary_closing
from skimage.transform import rotate
from skimage.filters import median
from sklearn.linear_model import LinearRegression
import keras
import keras.backend as K
from keras.layers import Input, Conv2D, GlobalMaxPool2D, Dense, Flatten
from keras.models import Model
from keras.optimizers import Adam
# +
# Create color palette
color_dict = {
"EPI": [73, 0, 106],
"GLD": [108, 0, 115],
"INF": [145, 1, 122],
"RET": [181, 9, 130],
"FOL": [216, 47, 148],
"PAP": [236, 85, 157],
"HYP": [254, 246, 242],
"KER": [248, 123, 168],
"BKG": [0, 0, 0],
"BCC": [127, 255, 255],
"SCC": [127, 255, 142],
"IEC": [255, 127, 127]
}
LUT = {
0 : "EPI",
1 : "GLD",
2 : "INF",
3 : "RET",
4 : "FOL",
5 : "PAP",
6 : "HYP",
7 : "KER",
8 : "BKG",
9 : "BCC",
10 : "SCC",
11: "IEC"
}
# +
def radians2degress(radians):
return radians * 180/np.pi
def degrees2radians(degrees):
return degrees * (np.pi / 180)
# +
def convert_RGB_to_8bit(image):
""" returns the 8 bit encoding of the image based on the LUT and color_dict order"""
segmentation_8bit = np.zeros((image.shape[0], image.shape[1]), dtype="uint8")
for i in range(12):
segmentation_8bit[np.all(image == color_dict[LUT[i]], axis=-1)] = i
return segmentation_8bit
def convert_8bit_to_RGB(image):
""" returns the rgb encoding of the 8-bit image based on the LUT and color_dict order"""
segmentation_rgb = np.zeros((image.shape[0], image.shape[1], 3), dtype="uint8")
for i in range(12):
segmentation_rgb[image == i] = color_dict[LUT[i]]
return segmentation_rgb
# -
def pad_image(image, value):
"""
Pads the image to make a square the size of the hypotenuse
"""
# Find largest axis
rows, cols = image.shape[0], image.shape[1]
# Find hypotenuse
hyp = int(np.sqrt(rows**2 + cols**2))
# Calculate size to pad
pad_width = [[],[],(0, 0)]
diff = hyp - rows
extra = diff % 2
size = diff // 2
pad_width[0] = [size, size+extra]
diff = hyp - cols
extra = diff % 2
size = diff // 2
pad_width[1] = [size, size+extra]
return np.pad(image, pad_width, mode="constant", constant_values=(value, value))
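Because `pad_image` pads both spatial axes up to the hypotenuse, the result is always a square large enough that a later rotation can never clip the tissue. A minimal sanity check of that padding arithmetic, reproduced standalone with `np.pad` (the 30x40 toy image is just an assumption for illustration):

```python
import numpy as np

rows, cols = 30, 40
hyp = int(np.sqrt(rows**2 + cols**2))  # 50 for a 30x40 image

# pad each spatial axis symmetrically up to the hypotenuse length
pad = []
for dim in (rows, cols):
    diff = hyp - dim
    pad.append((diff // 2, diff // 2 + diff % 2))
pad.append((0, 0))  # leave the channel axis untouched

padded = np.pad(np.zeros((rows, cols, 3), dtype="uint8"), pad,
                mode="constant", constant_values=255)
print(padded.shape)  # (50, 50, 3)
```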
def get_orientation(segmantation_mask):
# Get EPI mask
epi = np.all(segmantation_mask == color_dict["PAP"], axis=-1).astype("int")
# Get points of EPI
points = np.where(epi)
# Reshape coords
X = points[0].reshape(-1, 1)
y = points[1].reshape(-1, 1)
# Fit line
reg = LinearRegression().fit(X, y)
# Get predict points
    y1, x1 = np.min(X), int(reg.predict(np.array([[np.min(X)]]))[0, 0])
    y2, x2 = np.max(X), int(reg.predict(np.array([[np.max(X)]]))[0, 0])
# Get lengths of triangle
opp = y1 - y2
adj = x2 - x1
# Set base angle based on line lengths
base = 0
if opp < 0 and adj < 0:
base = 180
# Calculate angle change
rads = np.arctan(opp / adj)
theta = degrees_to_rotate = base - radians2degress(rads)
radians = degrees2radians(degrees_to_rotate)
# Check if orientation was already upwards
# whole = np.all(segmantation_mask != color_dict["BKG"], axis=-1).astype("int")
# x_centroid, y_centroid = regionprops(whole)[0].centroid
# y_center = opp / 2
# if y_centroid > y_center: # Already facing up
# theta += 180
# radians == np.pi
return theta, radians, (x1, y1), (x2, y2)
def rotate_image(image, theta, fill=255, median_filter=False):
"""
Rotates and resizes the image based on theta (in degrees). Value
is a greyscale color between 0-1.
To Implement:
resize by padding the image to the largest axis, that way it won't have to rescale
"""
temp = pad_image(image, fill)
# Convert to 8bit to apply median filters and then back again
if median_filter:
temp = convert_8bit_to_RGB(median(convert_RGB_to_8bit(temp), disk(6)))
temp = rotate(temp, theta, resize=False, cval=fill/255., order=0) # order = 'nearest neighbor'
return (temp*255.).astype("uint8")
def get_cancer_margins(rotated_segmentation):
    """
    Finds the North, South, East and West extremities of the cancer classes.
    Returns the points as (x,y) in the order [N, S, E, W]
    """
    # Get Cancer
    cancer = np.logical_or(
        np.all(rotated_segmentation == color_dict["BCC"], axis=-1),
        np.logical_or(np.all(rotated_segmentation == color_dict["SCC"], axis=-1),
                      np.all(rotated_segmentation == color_dict["IEC"], axis=-1)
                      )
    ).astype("int")
# Measure region
region = regionprops(cancer)[0]
# Get bounding box coords
minr, minc, maxr, maxc = region.bbox
# Get coords for extremeties of cancer
yN, xN = minr, minc+np.median(np.where(region.convex_image[0, :]))
yS, xS = minr+(maxr-minr), minc+np.median(np.where(region.convex_image[-1, :]))
yE, xE = minr+np.median(np.where(region.convex_image[:, -1])), minc+(maxc-minc)
yW, xW = minr+np.median(np.where(region.convex_image[:, 0])), minc
return (xN, yN), (xS, yS), (xE, yE), (xW, yW)
def get_tissue_margins(rotated_segmentation):
"""
Find the East and West extremities of the top tissue layers.
Returns the points as (x, y) in the order [E, W]
"""
# Get Cancer
top = np.logical_or(
np.all(rotated_segmentation == color_dict["EPI"], axis=-1),
np.all(rotated_segmentation == color_dict["PAP"], axis=-1)
).astype("int")
# Measure region
region = regionprops(top)[0]
# Get bounding box coords
minr, minc, maxr, maxc = region.bbox
# Get coords for extremeties of tissue
yE_tissue, xE_tissue = minr+np.median(np.where(region.convex_image[:, -1])), minc+(maxc-minc)
yW_tissue, xW_tissue = minr+np.median(np.where(region.convex_image[:, 0])), minc
return (xE_tissue, yE_tissue), (xW_tissue, yW_tissue)
# +
# Directions stored as x, y
directions_forward = [[0, 1], [1, 1], [1, 0], [1, -1], [0, -1], [-1,-1], [-1,0], [-1, 1]]
directions_backward = [[0, 1], [-1,1], [-1,0], [-1,-1], [0, -1], [1, -1], [1, 0], [ 1, 1]]
def inverse(direction):
x, y = direction
return [x*-1, y*-1]
def nextPixel(state, sequence, binary_mask):
x,y,d = state
directions = sequence[d::] + sequence[0:d]
for i, nextdir in enumerate(directions):
# assume binary_mask[y, x] (rows, cols) is true if pixel x,y is foreground
if binary_mask[y+nextdir[1], x+nextdir[0]]:
break
# Find the position
inv = inverse(nextdir)
pos = sequence.index(inv)
# Start from the previous location + 1 (no immediate repeat)
d = (pos + 1) % 8
return([ x+nextdir[0], y+nextdir[1], d ])
def exitCondition(state, color_mask, dist=20, show=False):
neighborhood = color_mask[state[1]-dist:state[1]+dist, state[0]-dist:state[0]+dist]
if show:
fig2, ax2 = plt.subplots()
ax2.imshow(rot_segmentation[state[1]-dist:state[1]+dist, state[0]-dist:state[0]+dist])
fig.show()
# Model input
X = neighborhood / 255.
p = model.predict(np.expand_dims(X, axis=0))[0]
if p > 0.9999:
print(p)
return True
return False
def get_crawl_start_position(rotated_RGB_segmentation):
"""
Returns the starting point (x, y) for crawl.
"""
# Get non-BKG mask
mask = np.all(rotated_RGB_segmentation != color_dict["BKG"], axis=-1).astype("int")
# Characterise and get centroid
region = regionprops(mask)[0]
x, y1 = region.centroid
# Get last position of true values
y2 = np.where(mask[:, int(x)])[0][-1]
return (x, y2), mask
# -
def get_tissue_margin(rotated_segmentation, east=True):
"""
return (x, y) position of east/west tissue margin. East is
forward pass, west is backward.
"""
# Get starting position
(x, y), mask = get_crawl_start_position(rotated_segmentation)
# Set state - 1 for forward sequence : 0 for backward sequence
state = [int(x), int(y), 0]
# Get 8-bit version
#color_mask = convert_RGB_to_8bit(rotated_segmentation)
color_mask = rotated_segmentation
# Set direction
if east:
sequence = directions_forward
else:
sequence = directions_backward
# Crawl!
    count = 0
    while count < 2000:  # limit the crawl to 2000 steps
        state = nextPixel(state, sequence, mask)
        count += 1
        if count % 20 == 0:
            if exitCondition(state, color_mask, dist=20):
                break
return (state[0], state[1])
# ### Run Code
#
# This is a demo of how to orient and measure the surgical margins
#seg_dir = "/home/simon/Documents/PhD/Data/Histo_Segmentation/Datasets_n290/10x/Masks/"
seg_dir = "/home/simon/Desktop/10x_Experiments_Over_Aug/ALL_IMAGES/"
image_dir = "/home/simon/Documents/PhD/Data/Histo_Segmentation/Datasets_n290/10x/Images/"
out_dir = "/home/simon/Desktop/Margins/"
images = [
"IEC_45",
"SCC_23",
"SCC_32",
"BCC_131",
"BCC_135",
"SCC_52",
"SCC_20",
"BCC_4",
"SCC_9",
"BCC_48",
"BCC_80",
"SCC_38",
"BCC_95",
"BCC_86",
"BCC_133",
"IEC_75",
"BCC_51",
"IEC_41",
"BCC_90",
"BCC_74",
"BCC_22",
"SCC_7",
"SCC_24",
"IEC_22",
"IEC_23",
"BCC_60",
"BCC_61",
"IEC_34",
"IEC_35",
"BCC_23",
"BCC_24"
]
ppmm = 0.00067
green = (15/255., 1, 16/255.)
pos = 10
for step, fname in enumerate(images[pos::]):
try:
print(step+1, "of", len(images), "-", fname)
segmentation = io.imread(os.path.join(seg_dir, fname) + ".png")
image = io.imread(os.path.join(image_dir, fname) + ".tif")
# Get orientation
print("getting orientation...")
theta, _, p1, p2 = get_orientation(segmentation)
# Rotate images
print("rotating...")
rot_segmentation = rotate_image(segmentation, theta, 0, median_filter=True)
rot_image = rotate_image(image, theta, 255, median_filter=False)
# Get Surgical Margins
print("finding cancer margins...")
(xN, yN), (xS, yS), (xE, yE), (xW, yW) = get_cancer_margins(rot_segmentation)
#(xE_tissue, yE_tissue), (xW_tissue, yW_tissue) = get_tissue_margins(rot_segmentation)
print("Finding East margin...")
(xE_tissue, yE_tissue) = get_tissue_margin(rot_segmentation, east=True)
print("Finding West margin...")
(xW_tissue, yW_tissue) = get_tissue_margin(rot_segmentation, east=False)
# Show results
fig = plt.figure(figsize=(24, 36))
gs = mpl.gridspec.GridSpec(2, 2, wspace=0.25, hspace=0.25) # 2x2 grid
ax1 = fig.add_subplot(gs[0, 0]) # first row, first col
ax2 = fig.add_subplot(gs[0, 1]) # first row, second col
ax3 = fig.add_subplot(gs[1, :]) # full second row
# Plot original segmentation
ax1.imshow(segmentation)
ax1.scatter(p1[0], p1[1], s=30, color="red")
ax1.scatter(p2[0], p2[1], s=30, color="red")
ax1.plot([p1[0], p2[0]], [p1[1], p2[1]], lw=2, color="red")
ax1.set_title("Original Segmentation")
ax1.axis("off")
# Plot rotated segmentation
ax2.imshow(rot_segmentation)
ax2.set_title("Rotate: -{0:.0f} degrees".format(abs(theta)))
ax2.axis("off")
# Plot cancer margins on input image
ax3.imshow(rot_image)
# Add cancer contour
cancer = np.logical_or(
np.all(rot_segmentation == color_dict["BCC"], axis=-1),
np.logical_or(np.all(rot_segmentation == color_dict["SCC"], axis=-1),
np.all(rot_segmentation == color_dict["IEC"], axis=-1)
)
)
ax3.contour(cancer, linewidths=1, colors=[green], linestyles="dashed")
# Plot Cancer Margins
ax3.scatter(xN, yN, s=100, color="red", marker="$+$", label="Superficial Margin")
ax3.scatter(xS, yS, s=100, color="red", marker="$-$", label="Deep Margin")
ax3.scatter(xE, yE, s=30, color=green, )#label="Cancer E")
ax3.scatter(xW, yW, s=30, color=green, )#label="Cancer W")
# Plot tissue margins
ax3.scatter(xE_tissue, yE_tissue, s=30, color=green) #, label="Tissue E")
ax3.scatter(xW_tissue, yW_tissue, s=30, color=green) #, label="Tissue E")
factor=10
east_dist = np.round(np.sqrt( (yE_tissue-yE)**2 + (xE_tissue-xE)**2)*ppmm*factor, 1)
west_dist = np.round(np.sqrt( (yW_tissue-yW)**2 + (xW_tissue-xW)**2)*ppmm*factor, 1)
# Draw measurement lines
ax3.plot([xE, xE_tissue],[yE, yE_tissue], lw=1, color=green, label="East Margin: {0}mm".format(east_dist))
ax3.plot([xW, xW_tissue],[yW, yW_tissue], lw=1, color=green, label="West Margin: {0}mm".format(west_dist))
ax3.axis("off")
ax3.legend(loc='center left', bbox_to_anchor=(1, 0.5))
# Save
plt.savefig(os.path.join(out_dir, fname + ".png"), dpi=300)
plt.close()
#plt.show()
except Exception as e:
print("FAILED...", e)
continue
plt.imshow(rot_segmentation)
plt.imshow(rot_image)
exit_out = "/home/simon/Desktop/exit_out/" + fname +"/"
cmd = "mkdir -p " + exit_out
os.system(cmd)
# +
(x, y), mask = get_crawl_start_position(rot_segmentation)
state = [int(x), int(y), 0]
print("Starting at", state)
fig, ax = plt.subplots()
ax.matshow(mask)
dist = 20
count = 1
# Show start
ax.scatter(state[0], state[1], color="green")
x, y = state[0], state[1]
count = 0
while True:
    state = nextPixel(state, directions_backward, mask)
    ax.scatter(state[0], state[1], color="red", marker="+")
    count += 1
    if count % 10 == 0:
        if exitCondition(state, rot_segmentation, dist=20, show=False):
            break
# # Show stop
ax.scatter(state[0], state[1], color="red")
dim = 50
ax.set_title("Count:" + str(count))
ax.set_xlim(state[0]-dim, state[0]+dim)
ax.set_ylim(state[1]+dim , state[1]-dim)
ax.grid(True)
plt.show()
# -
# ```python
# directions_forward = [[0, 1], [1, 1], [1, 0], [1, -1], [0, -1], [-1,-1], [-1,0], [-1, 1]]
# directions_backward = [[0, 1], [-1,1], [-1,0], [-1,-1], [0, -1], [1, -1], [1, 0], [ 1, 1]]
# ```
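The two sequences list the 8 Moore-neighbourhood directions in opposite rotational orders; after each step the crawl resumes scanning one slot past the direction it came from, which is what keeps it hugging the tissue boundary. A standalone sketch of that bookkeeping, with no image needed:

```python
directions_forward = [[0, 1], [1, 1], [1, 0], [1, -1], [0, -1], [-1, -1], [-1, 0], [-1, 1]]

def inverse(direction):
    x, y = direction
    return [x * -1, y * -1]

step = [1, 0]                    # the crawl just moved east
back = inverse(step)             # [-1, 0]: the direction pointing back west
d = (directions_forward.index(back) + 1) % 8
print(d, directions_forward[d])  # 7 [-1, 1]: scanning resumes just past "back"
```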
# +
# ---------------------------------- #
# exit_out = "/home/simon/Desktop/exit_out/" + fname +"/"
# cmd = "mkdir -p " + exit_out
# os.system(cmd)
# (x, y), mask = get_crawl_start_position(rot_segmentation)
# state = [int(x), int(y), 0]
# # print("Starting at", state)
# # fig, ax = plt.subplots()
# # ax.matshow(mask)
# dist = 20
# count = 1
# # Show start
# # ax.scatter(state[0], state[1], color="green")
# x, y = state[0], state[1]
# while True:
# state = nextPixel(state, directions_forward, mask)
# # ax.scatter(state[0], state[1], color="red", marker="+")
# # Returned to start
# if state[0] == int(x) and state[1] == int(y):
# print("Breaking - count = ", count)
# break
# if count % 10 == 0:
# neighborhood = rot_segmentation[state[1]-dist:state[1]+dist, state[0]-dist:state[0]+dist, :]
# io.imsave(os.path.join(exit_out, "{0}_{1}.png".format(fname, count)), neighborhood)
# count += 1
# # ---------------------------------- #
# -
|
Measure_Margins.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.4 64-bit
# metadata:
# interpreter:
# hash: 5c4d2f1fdcd3716c7a5eea90ad07be30193490dd4e63617705244f5fd89ea793
# name: python3
# ---
# https://towardsdatascience.com/logistic-regression-using-python-sklearn-numpy-mnist-handwriting-recognition-matplotlib-a6b31e2b166a
# ## Sigmoid
import numpy as np
import matplotlib.pyplot as plt
# +
def sigmoid(z):
return 1 / (1 + np.exp(-z))
sigmoid(757)
# -
from sklearn.datasets import load_digits
digits = load_digits() # 8x8 = 64 pixels -- Very clean Dataset
# #### Now that you have the dataset loaded you can use the commands below
digits.keys()
# + tags=[]
# Print to show there are 1797 images (8 by 8 images for a dimensionality of 64)
print("Image Data Shape" , digits.data.shape)
# Print to show there are 1797 labels (integers from 0–9)
print("Label Data Shape", digits.target.shape)
# -
digits.data.shape
digits.keys()
# +
import pandas as pd
df = pd.DataFrame(data= np.c_[digits['data'], digits['target']])
df
# -
cuatro = df.iloc[4]
k = np.reshape(cuatro[:64].values, (8,8))
plt.imshow(k, cmap=plt.cm.gray)
digits.target
# +
#y = a + b1X1 + b2X2 + b... + b64*X64
# -
digits.target[0:50]
# + tags=[]
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(20,2))
for index, (image, label) in enumerate(zip(digits.data[0:5], digits.target[0:5])):
plt.subplot(1, 5, index + 1)
plt.imshow(np.reshape(image, (8,8)), cmap=plt.cm.gray)
plt.title('Training: ' + str(label), fontsize = 20)
# -
# ### Splitting Data into Training and Test Sets (Digits Dataset)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.15, random_state=0)
x_train.shape
# + tags=[]
from sklearn.linear_model import LogisticRegression
# all parameters not specified are set to their defaults
logisticRegr = LogisticRegression()
logisticRegr.fit(x_train, y_train)
# + tags=[]
print(y_train[:5])
# -
# ### To predict
logisticRegr.predict(x_train[:5])
x_train[:5].shape
x_test.shape
y_test[:5]
plt.figure(figsize=(20,2))
for index, (image, label) in enumerate(zip(x_test[0:5], y_test[0:5])):
plt.subplot(1, 5, index + 1)
plt.imshow(np.reshape(image, (8,8)), cmap=plt.cm.gray)
plt.title('Test: ' + str(label), fontsize = 20)
plt.rcParams['figure.figsize'] = 20 , 2
first_test_image = x_test[0]
plt.imshow(np.reshape(first_test_image, (8,8)), cmap=plt.cm.gray)
x_test[0].shape
# Returns a NumPy Array
# Predict for One Observation (image)
logisticRegr.predict(x_test[0].reshape(1, -1))
max(logisticRegr.predict_proba(x_test[0].reshape(1, -1))[0])
y_test[0:10]
logisticRegr.predict(x_test[0:10])
# ### Probabilities
x_test[:1].shape
logisticRegr.predict_proba(x_test[:1])
sum(logisticRegr.predict_proba(x_test[0:1])[0])
logisticRegr.classes_
max(logisticRegr.predict_proba(x_test[0:1])[0])
# ### Measuring Model Performance (Digits Dataset)
# Use score method to get accuracy of model
score = logisticRegr.score(x_train, y_train)
score
# + tags=[]
# Use score method to get accuracy of model
score = logisticRegr.score(x_test, y_test)
print(score * 100, "%")
# -
x_test.shape
10 / 1
1000 / 10
# ### Confusion matrix
# Rows: actual labels; columns: predicted labels.
#
# Off-diagonal entries count the misclassifications.
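A small standalone example of how to read the matrix, assuming scikit-learn is available: with `metrics.confusion_matrix`, rows are the actual classes and columns are the predicted classes, so off-diagonal counts are the errors:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[1 1 0]     row 0: one true 0 was predicted as 1
#  [0 2 0]     row 1: both true 1s were correct
#  [1 0 1]]    row 2: one true 2 was predicted as 0
```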
# + tags=[]
import sklearn.metrics as metrics
predictions = logisticRegr.predict(x_test)
cm = metrics.confusion_matrix(y_test, predictions)
print(cm)
# +
import seaborn as sns
plt.figure(figsize=(8,8))
sns.heatmap(cm, annot=True, fmt=".1f", linewidths=.5, square = True, cmap = 'Blues_r')
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 15)
# -
df2 = pd.DataFrame(data=digits['data'])
df2.shape
y = df[64]
y
y.values.reshape(-1, 1).shape
y_pred.shape
digits.target.shape
# +
# I am happy with this model now,
# so I retrain it on all the data I have
# Train the model on the full dataset
logisticRegr.fit(df2, y.values)
# Predict on the full dataset
y_pred = logisticRegr.predict(df2)
# Measure how well the model fits the data it was trained on
logisticRegr.score(df2, y.values)
# -
|
week8_ML_lr_knn_encoder/day4_logistic_regresion_confusion_matrix/theory/logistic_regression_digits.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Strings
#
# In Python we can work not only with numbers but also with sequences of characters, so-called **strings**. A string is any sequence of characters enclosed in quotation marks. For example, _"Hallo"_ is a string and so is _"Hal<NAME>"_, but _"123.2"_ or _"!Achtung!"_ are strings too.
# ### General
#
# Strings, too, can be printed with the print() command.
print("<NAME>")
# this one does not work (« » are not valid quotation marks in Python):
# print(«Hallo Welt»)
# single quotes, on the other hand, work just like double quotes:
print('Hallo Welt')
# You can store strings in variables, just like numbers.
name = "Max"
print(name)
# ### Joining strings
#
# You can join two or more strings with **+**.
print("Ich bin: " + "Max")
print("Ich bin: " + name + ". Und wer bist du?")
# However, you get an error if you try to add numbers and strings:
print("Ich bin: " + 4)
# ### Converting a number into a string
# You can fix this error by converting the number into a string. There are two ways to do this:
#
# 1.) Put quotation marks around the number, turning it into a string:
print("Ich bin: " + "4")
# 2.) Convert the number into a string with the **str()** command:
age = 22
print("Ich bin: " + str(age))
# **Note that you can no longer do arithmetic with "4" or str(age)!**
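As an alternative to `str()` and `+`, f-strings (available since Python 3.6) insert values directly into the text and handle the conversion for you:

```python
age = 22
# the {age} placeholder is converted to a string automatically
print(f"Ich bin: {age}")  # Ich bin: 22
```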
# ## Exercise
# * Look up how many inhabitants Zurich has. [here](https://www.stadt-zuerich.ch/prd/de/index/statistik/themen/bevoelkerung.html)
# * Store this number in x.
# * Store the number of inhabitants 10 years ago in y.
# * Compute z.
# * Print "Zurich has x inhabitants. 10 years ago it had y inhabitants. That is an increase of z percent."
# +
# ## Exercise
# - Ask your neighbour for their email address.
# - If the email address is _<EMAIL>_, you should print _Max-Mustermann_; if the email address is _<EMAIL>_, you should print _KlaraKlarnamen_.
# -
email = '<EMAIL>'
domain = email.split('@')[1]
x = 425812
y = 380499
z = x * 100 / y - 100
print(z)
round(z, 2)
z_gerundet = round(z, 2)
print("Zurich has " + str(x) + " inhabitants. 10 years ago it had " + str(y) + " inhabitants. That is an increase of " + str(z_gerundet) + " percent.")
|
03 Python Teil 1/03 Strings.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 4.1 NumPy ndarray
import numpy as np
data = np.random.randn(2, 3)
data
data * 10
data.shape
data.dtype
# 4.1.1 ndarray
data1 = [6, 7.5, 8, 0, 1]
arr1 = np.array(data1)
arr1
data2 = [[1, 2, 3, 4], [5, 6, 7, 8]]
arr2 = np.array(data2)
arr2
arr2.ndim
arr2.shape
arr1.dtype
arr2.dtype
np.zeros(10)
np.empty((2, 3, 2))
np.arange(15)
# 4.1.2 ndarray
arr1 = np.array([1, 2, 3], dtype=np.float64)
arr2 = np.array([1, 2, 3], dtype=np.int32)
arr1.dtype
arr2.dtype
arr = np.array([1, 2, 3, 4, 5])
arr.dtype
float_arr = arr.astype(np.float64)
float_arr.dtype
arr = np.array([3.7, -1.2, -2.6, 0.5, 12.9, 10.1])
arr
arr.astype(np.int32)
numeric_strings = np.array(['1.25', '-9.6', '42'], dtype=np.string_)
numeric_strings.astype(float)
int_array = np.arange(10)
int_array
calibers = np.array([.22, .270, .357, .380, .44, .50], dtype=np.float64)
calibers
# 4.1.3 Operations between arrays and scalars
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
arr
arr * arr
arr - arr
# 4.1.4 Basic indexing and slicing
arr = np.arange(10)
arr
arr[5]
arr[5:8]
arr
arr_slice = arr[5:8]
arr_slice[1] = 12345
arr_slice
arr
arr_slice[:] = 64
arr
arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr2d
arr2d[2]
arr2d[0][2]
arr2d[0, 2]
arr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
arr3d
arr3d[0]
old_values = arr3d[0].copy()
arr3d[0] = 42
arr3d
arr3d[0] = old_values
arr3d
arr3d[1, 0]
arr[1:6]
arr2d
arr2d[:2]
arr2d[:2, 1:]
arr2d[1, :2]
arr2d[2, :1]
arr2d[:, :1]
arr2d[:2, 1:] = 0
arr2d
names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])
data = np.random.randn(7, 4)
names
data
names == 'Bob'
data[names == 'Bob']
data[names == 'Bob', 2:]
data[names == 'Bob', 3]
names != 'Bob'
data[~(names == 'Bob')]
mask = (names == 'Bob') | (names == 'Will')
mask
data[mask]
data[data < 0] = 0
data
# 4.1.6 Fancy indexing
arr = np.empty((8, 4))
for i in range(8):
arr[i] = i
arr
arr[[4, 3, 0, 6]]
arr[[-3, -5, -1]]
arr = np.arange(32).reshape((8, 4))
arr
arr[[1, 5, 7, 2], [0, 3, 1, 2]]
arr[[1, 5, 7, 2]]
arr
arr[np.ix_([1, 5, 7, 2], [0, 3, 1, 2])]
# 4.1.7 Transposing arrays and swapping axes
arr = np.arange(15).reshape((3, 5))
arr
arr.T
arr = np.random.randn(6, 3)
arr
np.dot(arr.T, arr)
# 4.2 Universal functions
#
# Functions that perform elementwise operations on the data in an ndarray.
#
# Fast vectorized wrappers for simple functions that take one or more scalar values and return one or more scalar results.
arr = np.arange(10)
arr
np.sqrt(arr)
np.exp(arr)
x = np.random.randn(8)
y = np.random.randn(8)
x
y
np.maximum(x, y)
arr = np.random.randn(7) * 5
np.modf(arr)
# 4.3 Data processing using arrays
#
# NumPy arrays let you express many kinds of data processing tasks as concise array operations, without writing loops.
#
# Vectorized array arithmetic is generally two to three times faster than pure Python, and sometimes tens or hundreds of times faster.
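A rough, machine-dependent sketch of that speedup claim, comparing a pure-Python loop with the vectorized equivalent (the exact ratio varies by machine, so only the results are compared for equality):

```python
import time
import numpy as np

n = 1_000_000
xs = list(range(n))
arr = np.arange(n, dtype=np.int64)

t0 = time.perf_counter()
loop_sum = sum(x * 2 for x in xs)  # pure Python
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_sum = int((arr * 2).sum())     # vectorized NumPy
t_vec = time.perf_counter() - t0

print(loop_sum == vec_sum)  # True: identical result
print(t_loop / t_vec)       # typically well above 1 on most machines
```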
points = np.arange(-5, 5, 0.01)
xs, ys = np.meshgrid(points, points)
ys
import matplotlib.pyplot as plt
z = np.sqrt(xs ** 2 + ys ** 2)
z
plt.imshow(z, cmap=plt.cm.gray)
plt.colorbar()
plt.title(r"Image plot of $\sqrt{x^2 + y^2}$ for a grid of values")
plt.show()
# 4.3.1 Expressing conditional logic as array operations
xarr = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
xarr
yarr = np.array([2.1, 2.2, 2.3, 2.4, 2.5])
yarr
cond = np.array([True, False, True, True, False])
result = [(x if c else y)
for x, y, c in zip(xarr, yarr, cond)]
result
result = np.where(cond, xarr, yarr)
result
arr = np.random.randn(4, 4)
arr
np.where(arr > 0, 2, -2)
np.where(arr > 0, 2, arr)
# given two boolean arrays cond1 and cond2 of length n:
result = []
for i in range(n):
    if cond1[i] and cond2[i]:
        result.append(0)
    elif cond1[i]:
        result.append(1)
    elif cond2[i]:
        result.append(2)
    else:
        result.append(3)
np.where(cond1 & cond2, 0,
         np.where(cond1, 1,
                  np.where(cond2, 2, 3)))
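The nested np.where above can also be written with np.select, which evaluates the condition/choice pairs in priority order and falls back to a default; a small sketch with made-up boolean arrays:

```python
import numpy as np

cond1 = np.array([True, True, False, False])
cond2 = np.array([True, False, True, False])

nested = np.where(cond1 & cond2, 0,
                  np.where(cond1, 1,
                           np.where(cond2, 2, 3)))

# np.select checks the conditions in order; `default` covers the remainder
selected = np.select([cond1 & cond2, cond1, cond2], [0, 1, 2], default=3)

print(nested.tolist())    # [0, 1, 2, 3]
print(selected.tolist())  # [0, 1, 2, 3]
```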
# 4.3.2 Mathematical and statistical methods
arr = np.random.randn(5, 4)
arr
arr.mean()
np.mean(arr)
arr.sum()
arr.mean(axis=1)
arr = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
arr
arr.cumsum(0)
arr.cumsum(1)
arr.cumprod(1)
# arr.cumprod(2) would raise AxisError here: axis 2 is out of bounds for this 2-D array
# 4.3.3 Methods for boolean arrays
arr = np.random.randn(100)
# 4.5 Linear algebra
x = np.array([[1., 2., 3.], [4., 5., 6.]])
y = np.array([[6., 23.], [-1, 7], [8, 9]])
x
y
x.dot(y)
np.dot(x, np.ones(3))
from numpy.linalg import inv, qr
# numpy.linalg contains matrix decompositions, inverses, determinants, and the like. It is implemented using the standard Fortran libraries BLAS, LAPACK, or Intel MKL, the same ones used by languages such as MATLAB and R.
X = np.random.randn(5, 5)
mat = X.T.dot(X)
inv(mat)
q, r = qr(mat)
r
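As a quick illustration of a few more numpy.linalg routines mentioned above (a standalone sketch, not from the original session):

```python
import numpy as np
from numpy.linalg import det, inv, solve

a = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])

x = solve(a, b)       # solves a @ x = b without forming the inverse
print(x)              # [2. 3.]
print(det(a))         # 3*2 - 1*1, i.e. approximately 5.0
print(np.allclose(inv(a) @ a, np.eye(2)))  # True
```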
# 4.6 Pseudorandom number generation
samples = np.random.normal(size=(4, 4))
samples
from random import normalvariate
N = 1000000
# %timeit samples = [normalvariate(0, 1) for _ in range(N)]
# %timeit np.random.normal(size=N)
import random
position = 0
walk = [position]
steps = 1000
for i in range(steps):
    step = 1 if random.randint(0, 1) else -1
    position += step
    walk.append(position)
nsteps = 1000
draws = np.random.randint(0, 2, size=nsteps)
steps = np.where(draws > 0, 1, -1)
walk = steps.cumsum()
walk.min()
walk.max()
(np.abs(walk) >= 10).argmax()
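The argmax trick above works because booleans are treated as 0/1, so argmax returns the index of the first True; a tiny sketch:

```python
import numpy as np

walk = np.array([0, 3, -5, 11, -12, 2])
# booleans are 0/1, so argmax returns the index of the first True
first_cross = (np.abs(walk) >= 10).argmax()
print(first_cross)  # 3  (walk[3] == 11 is the first value with |value| >= 10)
```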
# 4.7.1 Simulating many random walks at once
nwalks = 5000
nsteps = 1000
draws = np.random.randint(0, 2, size=(nwalks, nsteps))
steps = np.where(draws > 0, 1, -1)
walks = steps.cumsum(1)
walks
walks.max()
walks.min()
hits30 = (np.abs(walks) >= 30).any(1)
hits30
hits30.sum()
crossing_times = (np.abs(walks[hits30]) >= 30).argmax(1)
crossing_times.mean()
steps = np.random.normal(loc=0, scale=0.25, size=(nwalks, nsteps))
steps
# Ch4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# +
# %matplotlib inline
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import os
import json
from sagemaker.predictor import json_deserializer
# -
# <h1>FM Cloud Prediction Invocation Template</h1>
# <h4>Invoke SageMaker Prediction Service</h4>
import boto3
import re
from sagemaker import get_execution_role
import sagemaker
# +
# Acquire a realtime endpoint
endpoint_name = 'fm-movie-v2'
predictor_sparse = sagemaker.predictor.RealTimePredictor(endpoint=endpoint_name)
dim_movie = 9737
# Movie Ratings: 9737 columns
# UserID: first 671 columns
# MovieID: next 9066 columns
# -
def fm_sparse_serializer(data):
    js = {'instances': []}
    for row in data:
        column_list = row.tolist()
        value_list = np.ones(len(column_list), dtype=int).tolist()
        js['instances'].append({'data': {'features': {'keys': column_list, 'shape': [dim_movie], 'values': value_list}}})
    return json.dumps(js)
# Testing
print(fm_sparse_serializer([np.array([341,1416]),np.array([209,2640]),np.array([164,1346])]))
# +
# Initialize Predictor with correct configuration
# -
predictor_sparse.content_type = 'application/json'
predictor_sparse.serializer = fm_sparse_serializer
predictor_sparse.deserializer = json_deserializer
# Test libSVM
# predictor_sparse.predict(['5 341:1 1416:1', '2.5 209:1 2640:1','2.5 164:1 1346:1'])
# Test Data Frame Matrix Predictor
predictor_sparse.predict([np.array([341,1416]),np.array([209,2640]),np.array([164,1346])])
# Load the test file in svm format. '5 341:1 1416:1'
test_file = r'ml-latest-small/user_movie_test.svm'
df_test = pd.read_csv(test_file, sep=' ', names=['rating','user_index','movie_index'])
df_test.head()
# update column to contain only the one hot encoded index
df_test.user_index = df_test.user_index.map(lambda value: int(value.split(':')[0]))
df_test.movie_index = df_test.movie_index.map(lambda value: int(value.split(':')[0]))
df_test.head()
df_test.shape
# For a large number of predictions, split the input data and
# query the prediction service in batches.
# array_split makes it convenient to specify how many splits are needed
def get_predictions(predictor, arr_features):
    predictions = []
    for arr in np.array_split(arr_features, 100):
        if arr.shape[0] > 0:
            print(arr.shape, end=' ')
            result = predictor.predict(arr)
            predictions += [values['score'] for values in result['predictions']]
    return predictions
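The helper relies on np.array_split, which (unlike np.split) happily produces unequal chunks when the array length is not divisible by the number of splits; a minimal sketch:

```python
import numpy as np

arr = np.arange(10)
chunks = np.array_split(arr, 3)  # sizes 4, 3, 3 -- uneven splits are allowed
print([c.tolist() for c in chunks])  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```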
# %time predictions = get_predictions(predictor_sparse, df_test[['user_index','movie_index']].values)
df_test.shape[0]/25
df_test['predictions'] = predictions
df_test.head()
import sklearn.metrics as metrics
print('RMSE: ', metrics.mean_squared_error(df_test.rating, df_test.predictions)**.5)
# +
# Training Data Residuals
residuals = (df_test.predictions - df_test.rating)
plt.hist(residuals)
plt.grid(True)
plt.xlabel('(Predicted - Actual)')
plt.ylabel('Count')
plt.title('Residuals Distribution')
plt.axvline(color='g')
# -
# ## Get Prediction for a single user and all movies
# Load the one hot coded index values in svm format
test_file = r'ml-latest-small/one_hot_enc_movies.svm'
df_one_user_test = pd.read_csv(test_file,sep=' ',names=['movieId','user_index','movie_index'])
df_one_user_test.user_index = df_one_user_test.user_index.map(lambda value: int(value.split(':')[0]))
df_one_user_test.movie_index = df_one_user_test.movie_index.map(lambda value: int(value.split(':')[0]))
df_one_user_test.head()
df_one_user_test.shape[0]
# %time predictions = get_predictions(predictor_sparse, df_one_user_test[['user_index','movie_index']].values)
df_one_user_test['rating_predicted'] = predictions
df_one_user_test.head()
df_movies = pd.read_csv(r'ml-latest-small/movies_genre.csv')
df_movies.head()
df_one_user_test = df_one_user_test.merge(df_movies, on='movieId')
df_one_user_test.head()
df_one_user_test.sort_values(['rating_predicted'], ascending=False)[['title','rating_predicted','genres']].head(10)
# Any Action Movies?
df_one_user_test[df_one_user_test.Action == 1].sort_values(['rating_predicted'], ascending=False)[['title','rating_predicted','genres']].head(10)
# What about comedy?
df_one_user_test[df_one_user_test.Comedy == 1].sort_values(['rating_predicted'], ascending=False)[['title','rating_predicted','genres']].head(10)
# And Drama
df_one_user_test[df_one_user_test.Drama == 1].sort_values(['rating_predicted'], ascending=False)[['title','rating_predicted','genres']].head(10)
df_one_user_test.user_index = 333
predictions = get_predictions(predictor_sparse, df_one_user_test[['user_index','movie_index']].values)
df_one_user_test['rating_predicted'] = predictions
df_one_user_test.sort_values(['rating_predicted'], ascending=False)[['title','rating_predicted','genres']].head(10)
df_one_user_test.user_index = 209
predictions = get_predictions(predictor_sparse, df_one_user_test[['user_index','movie_index']].values)
df_one_user_test['rating_predicted'] = predictions
df_one_user_test.sort_values(['rating_predicted'], ascending=False)[['title','rating_predicted','genres']].head(10)
# fm-pca-xgboost/fm/fm_cloud_prediction_template.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:djenv] *
# language: python
# name: conda-env-djenv-py
# ---
# +
# GENERAL THINGS FOR COMPUTING AND PLOTTING
import pandas as pd
import numpy as np
import os, sys, time
from datetime import datetime
from datetime import timedelta
import scipy as sp
# visualisation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="ticks", context="talk")
# ibl specific things
import datajoint as dj
from ibl_pipeline import reference, subject, action, acquisition, data, behavior
from ibl_pipeline.analyses import behavior as behavioral_analyses
# -
# all mice that started training
# leave out mice in the brain-wide project that got too old before the ephys setup was ready...
all_mice = (subject.Subject & 'subject_birth_date > "2019-03-01"') \
* (subject.SubjectProject & 'subject_project = "ibl_neuropixel_brainwide_01"')
#* (subject.SubjectLab & 'lab_name = "churchlandlab"')
all_mice = all_mice.fetch(format='frame').reset_index()
print('# of animals in brainwide project:')
all_mice.subject_nickname.nunique()
# +
# all mice that made it to ready4ephys
all_ephys_sess = (subject.Subject & 'subject_birth_date > "2019-03-01"') \
* (subject.SubjectProject & 'subject_project = "ibl_neuropixel_brainwide_01"') \
* (acquisition.Session & 'task_protocol LIKE "%ephysChoice%"')
# * (subject.SubjectLab & 'lab_name = "churchlandlab"') \
all_ephys_sess = all_ephys_sess.fetch(format='frame').reset_index()
print('# of animals with ephys sessions:')
all_ephys_sess.subject_nickname.nunique()
# -
animals_noephys = list(set(all_mice.subject_nickname.unique()) - set(all_ephys_sess.subject_nickname.unique()))
print('animals without any ephys data:')
sorted(animals_noephys)
# #### animals that never made it to ephys
#
# * CSHL048: issue during headbar implant surgery, never started training
# * CSHL056: experimental well from tip of centrifuge tube, infection on skull
# * CSHL057: experimental well from tip of centrifuge tube, infection on skull
#
# * CSHL046: has not reached ready4ephysRig, now > 7 months
# * CSHL058: now on biasedCW, will hopefully still record
#
# +
### for those mice with ephys data, how many sessions?
print('average number of sessions per mouse:')
print(all_ephys_sess.groupby(['subject_nickname'])['session_start_time'].nunique().mean())
all_ephys_sess.groupby(['subject_nickname'])['session_start_time'].nunique().reset_index().sort_values(by='subject_nickname')
# -
# ### cull/ephys end reasons
#
# ##### bad animals
# * CSHL054: session 1 great. session 2 terrible behavior, mouse looks like it's in pain
# * CSHL055 (honeycomb): session 1 great, session 2 awful. mouse lost weight, lethargic, died 4 days after craniotomy surgery
# * CSHL051: when attempting to do second cranio surgery with punch, skull broke. emergency perfusion.
#
# ##### good animals
# * CSHL059: no more sites to go into the brain
# * CSHL045:
# * CSHL047:
# * CSHL052: honeycomb; quite a lot of blood in cranio but behavior great.
# * CSHL053:
# * CSHL049:
# * CSHL060: still ingesting (has 6 sessions)
#
# +
# how many probes per ephys session?
ephys = dj.create_virtual_module('ephys', 'ibl_ephys')
all_ephys_sess = (subject.Subject & 'subject_birth_date > "2019-03-01"') \
* (subject.SubjectProject & 'subject_project = "ibl_neuropixel_brainwide_01"') \
* (acquisition.Session & 'task_protocol LIKE "%ephysChoice%"') \
* ephys.ProbeInsertion * (ephys.ProbeTrajectory & 'insertion_data_source = "Micro-manipulator"') \
* subject.SubjectLab()
all_ephys_sess = all_ephys_sess.fetch(format='frame').reset_index()
# +
### for those mice with ephys data, how many sessions?
yield_permouse = all_ephys_sess.groupby(['lab_name', 'subject_nickname'])[['session_start_time', 'probe_trajectory_uuid']].nunique().reset_index()
yield_permouse.rename(columns={'session_start_time':'num_sessions', 'probe_trajectory_uuid':'num_penetrations'}, inplace=True)
assert (yield_permouse['num_sessions'] * 2 >= yield_permouse['num_penetrations']).all()
print('average number of penetrations per mouse:')
print(yield_permouse['num_penetrations'].mean())
yield_permouse.sort_values(by='subject_nickname')
# -
yield_permouse.groupby(['lab_name'])['num_penetrations'].describe()
# python/.ipynb_checkpoints/ephys_yield_anneu-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + deletable=true editable=true active=""
# Baseline
# MAX_LEN = 40
# EMBEDDING_DIM = 300
# BATCH_SIZE = 128
# VALID_SPLIT = 0.05
# RE_WEIGHT = True
#
# LSTM(256)
# parameters of LSTM: 570368
# parameters of Dense: 513
# weights.001-0.3333.hdf5
# 475s - loss: 0.3051 - acc: 0.7766 - val_loss: 0.3333 - val_acc: 0.7743
#
# LSTM(128)
# parameters of LSTM: 219648
# parameters of Dense: 257
# weights.001-0.3358.hdf5
# 389s - loss: 0.3218 - acc: 0.7598 - val_loss: 0.3358 - val_acc: 0.7611
#
# LSTM(64)
# parameters of LSTM: 93440
# parameters of Dense: 129
# weights.002-0.3393.hdf5
# 356s - loss: 0.3138 - acc: 0.7675 - val_loss: 0.3393 - val_acc: 0.7641
#
# LSTM(64)*2
# parameters of LSTM: 126464 = 93440+33024
# parameters of Dense: 129
# weights.003-0.3385.hdf5
# 400s - loss: 0.2892 - acc: 0.7953 - val_loss: 0.3385 - val_acc: 0.7695
#
# LSTM(64)*2
# BATCH_SIZE: 256 ==> 128
# parameters of LSTM: 126464 = 93440+33024
# parameters of Dense: 129
# weights.002-0.3367.hdf5
# 767s - loss: 0.2986 - acc: 0.7842 - val_loss: 0.3367 - val_acc: 0.7680
#
# LSTM(64)*3
# parameters of LSTM: 159,488 = 93440+33024+33024
# parameters of Dense: 129
# weights.002-0.3395.hdf5
# 604s - loss: 0.3081 - acc: 0.7750 - val_loss: 0.3395 - val_acc: 0.7612
#
# BIDIRECT(LSTM(64))
# parameters of LSTM: 186,880
# parameters of Dense: 257
# weights.006-0.3417.hdf5
# 193s - loss: 0.3007 - acc: 0.7826 - val_loss: 0.3417 - val_acc: 0.7639
#
# LSTM(64)
# BatchNormalization()
# parameters of LSTM: 93440
# parameters of Dense: 129
# weights.003-0.3443.hdf5
# 205s - loss: 0.2958 - acc: 0.7875 - val_loss: 0.3443 - val_acc: 0.7702
#
# LSTM(64,dropout=0.5)
# Dropout(0.5)
# parameters of LSTM: 93440
# parameters of Dense: 129
# weights.004-0.3344.hdf5
# 198s - loss: 0.3097 - acc: 0.7715 - val_loss: 0.3344 - val_acc: 0.7508
#
# LSTM(64)
# DENSE(128)
# parameters of LSTM: 93440
# parameters of Dense: 8256+129
# weights.002-0.2749.hdf5
# 199s - loss: 0.2354 - acc: 0.8314 - val_loss: 0.2749 - val_acc: 0.8093
#
# LSTM(64)
# DENSE(64)
# parameters of LSTM: 93440
# parameters of Dense: 8256+65
# weights.002-0.2769.hdf5
# 196s - loss: 0.2383 - acc: 0.8285 - val_loss: 0.2769 - val_acc: 0.8106
#
# LSTM(64)
# DENSE(64)
# Dropout(0.5)
# weights.008-0.3096.hdf5
# 200s - loss: 0.2776 - acc: 0.7945 - val_loss: 0.3096 - val_acc: 0.7730
#
# LSTM(64,dropout=0.5)
# DENSE(64)
# weights.002-0.2788.hdf5
# 197s - loss: 0.2391 - acc: 0.8271 - val_loss: 0.2788 - val_acc: 0.8087
#
# LSTM(64)
# DENSE(64)*2
# parameters of LSTM: 93440
# parameters of Dense: 8256+4160+65
# weights.002-0.2733.hdf5
# 198s - loss: 0.2340 - acc: 0.8354 - val_loss: 0.2733 - val_acc: 0.8190
#
# LSTM(64)
# DENSE(64)*3
# parameters of LSTM: 93440
# parameters of Dense: 8256+4160+4160+65
# weights.002-0.2723.hdf5
# 203s - loss: 0.2364 - acc: 0.8329 - val_loss: 0.2723 - val_acc: 0.8171
#
# LSTM(64)
# DENSE(64)
# lr 1e-3 ==>1e-2
# weights.005-0.2899.hdf5
# 198s - loss: 0.2458 - acc: 0.8240 - val_loss: 0.2899 - val_acc: 0.7940
#
# LSTM(64)
# DENSE(64)
# lr 1e-3 ==>1e-4
# weights.021-0.2894.hdf5
# 194s - loss: 0.2383 - acc: 0.8289 - val_loss: 0.2894 - val_acc: 0.7917
#
# LSTM(128)*3
# DENSE(128)*3
# weights.001-0.2713.hdf5
# 709s - loss: 0.2539 - acc: 0.8190 - val_loss: 0.2713 - val_acc: 0.8094
#
# LSTM(64)
# DENSE(64)
# EMBEDDING_TRAINABLE = True
# weights.000-0.2829.hdf5
# 345s - loss: 0.2980 - acc: 0.7705 - val_loss: 0.2829 - val_acc: 0.8035
# + deletable=true editable=true
from __future__ import division
from __future__ import print_function
import pandas as pd
import numpy as np
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
import re
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import datetime, time, json, os, math, pickle, sys
import glob
from string import punctuation
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential, Model, load_model
from keras.layers import concatenate, Embedding, Dense, Input, Dropout, Bidirectional, LSTM, BatchNormalization, TimeDistributed
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau, TensorBoard
from keras import backend as K
# + deletable=true editable=true
DATA_DIR = '../data/'
MODEL = 'Baseline'
if os.getcwd().split('/')[-1] != MODEL:
    print('WRONG MODEL DIR!!!')
CHECKPOINT_DIR = './checkpoint/'
if not os.path.exists(CHECKPOINT_DIR):
    os.mkdir(CHECKPOINT_DIR)
LOG_DIR = './log/'
if not os.path.exists(LOG_DIR):
    os.mkdir(LOG_DIR)
OUTPUT_DIR = './output/'
if not os.path.exists(OUTPUT_DIR):
    os.mkdir(OUTPUT_DIR)
MAX_LEN = 40
EMBEDDING_DIM = 300
BATCH_SIZE = 256
VALID_SPLIT = 0.05
RE_WEIGHT = True # whether to re-weight classes to fit the 17.5% share in test set
# VOCAB_SIZE = 10000
def get_best_model(checkpoint_dir=CHECKPOINT_DIR):
    files = glob.glob(checkpoint_dir + '*')
    val_losses = [float(f.split('-')[-1][:-5]) for f in files]
    index = val_losses.index(min(val_losses))
    print('Loading model from checkpoint file ' + files[index])
    model = load_model(files[index])
    model_name = files[index].split('/')[-1]
    print('Loading model Done!')
    return (model, model_name)
# + deletable=true editable=true
trainval_df = pd.read_csv(DATA_DIR+"train.csv")
test_df = pd.read_csv(DATA_DIR+"test.csv")
print(trainval_df.shape)
print(test_df.shape)
# + deletable=true editable=true active=""
# # Check for any null values
# # inds = pd.isnull(trainval_df).any(1).nonzero()[0]
# # trainval_df.loc[inds]
# # inds = pd.isnull(test_df).any(1).nonzero()[0]
# # test_df.loc[inds]
#
# # # Add the string 'empty' to empty strings
# # trainval_df = trainval_df.fillna('empty')
# # test_df = test_df.fillna('empty')
# + deletable=true editable=true
# data cleaning
abbr_dict={
"i'm":"i am",
"'re":" are",
"'s":" is",
"'ve":" have",
"'ll":" will",
"n't":" not",
}
_WORD_SPLIT = re.compile(b"([.,!?\"':;)(])")
# stop_words = ['the','a','an','and','but','if','or','because','as','what','which','this','that','these','those','then',
# 'just','so','than','such','both','through','about','for','is','of','while','during','to','What','Which',
# 'Is','If','While','This']
# print('stop_words:', len(stop_words))
# # nltk.download("stopwords")
# stop_words = stopwords.words('english')
# print('stop_words:', len(stop_words))
def text_to_wordlist(text, abbr_dict=None, remove_stop_words=False, stem_words=False):
    if isinstance(text, float):
        # turn nan to empty string
        text = ""
    else:
        # Convert words to lower case and split them
        # text = text.lower()
        # # abbreviation replace
        # # Create a regular expression from the dictionary keys
        # regex = re.compile("(%s)" % "|".join(map(re.escape, abbr_dict.keys())))
        # # For each match, look-up corresponding value in dictionary
        # text = regex.sub(lambda mo: abbr_dict[mo.string[mo.start():mo.end()]], text)
        words = []
        for space_separated_fragment in text.strip().split():
            words.extend(_WORD_SPLIT.split(space_separated_fragment))
        text = [w for w in words if w]
        text = " ".join(text)
        # Remove punctuation from text
        # text = ''.join([c for c in text if c not in punctuation])
        # Optionally, remove stop words
        if remove_stop_words:
            text = text.split()
            text = [w for w in text if not w in stop_words]
            text = " ".join(text)
        # Optionally, shorten words to their stems
        if stem_words:
            text = text.split()
            stemmer = SnowballStemmer('english')
            stemmed_words = [stemmer.stem(word) for word in text]
            text = " ".join(stemmed_words)
    # Return the cleaned text as a single string
    return text
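A quick standalone sanity check of the function's default path (this sketch re-declares the split regex as a str pattern, since the notebook's bytes pattern is a Python 2 detail, and uses a hypothetical helper name so it runs on its own):

```python
import re

_WORD_SPLIT = re.compile(r"([.,!?\"':;)(])")

def clean_default(text):
    # minimal re-implementation of text_to_wordlist's default path above
    if isinstance(text, float):  # NaN from pandas arrives as a float
        return ""
    words = []
    for fragment in text.strip().split():
        words.extend(_WORD_SPLIT.split(fragment))
    return " ".join(w for w in words if w)

print(clean_default("What's this, a test?"))  # What ' s this , a test ?
print(repr(clean_default(float("nan"))))      # ''
```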
# + deletable=true editable=true active=""
# trainval_df['len1'] = trainval_df.apply(lambda row: len(row['question1_WL'].split()), axis=1)
# trainval_df['len2'] = trainval_df.apply(lambda row: len(row['question2_WL'].split()), axis=1)
#
# test_df['len1'] = test_df.apply(lambda row: len(row['question1_WL'].split()), axis=1)
# test_df['len2'] = test_df.apply(lambda row: len(row['question2_WL'].split()), axis=1)
#
# lengths = pd.concat([trainval_df['len1'],trainval_df['len2']], axis=0)
# print(lengths.describe())
# print(np.percentile(lengths, 99.0))
# print(np.percentile(lengths, 99.4))
# print(np.percentile(lengths, 99.5))
# print(np.percentile(lengths, 99.9))
# + deletable=true editable=true
# question to word list by data cleaning
file_name = 'trainval_df.pickle'
if os.path.exists(OUTPUT_DIR + file_name):
    print('Loading from file ' + file_name)
    trainval_df = pd.read_pickle(OUTPUT_DIR + file_name)
else:
    print('Generating file ' + file_name)
    trainval_df['question1_WL'] = trainval_df.apply(lambda row: text_to_wordlist(row['question1']), axis=1)
    trainval_df['question2_WL'] = trainval_df.apply(lambda row: text_to_wordlist(row['question2']), axis=1)
    trainval_df.to_pickle(OUTPUT_DIR + file_name)
file_name = 'test_df.pickle'
if os.path.exists(OUTPUT_DIR + file_name):
    print('Loading from file ' + file_name)
    test_df = pd.read_pickle(OUTPUT_DIR + file_name)
else:
    print('Generating file ' + file_name)
    test_df['question1_WL'] = test_df.apply(lambda row: text_to_wordlist(row['question1']), axis=1)
    test_df['question2_WL'] = test_df.apply(lambda row: text_to_wordlist(row['question2']), axis=1)
    test_df.to_pickle(OUTPUT_DIR + file_name)
test_size = trainval_df.shape[0]-int(math.ceil(trainval_df.shape[0]*(1-VALID_SPLIT)/1024)*1024)
train_df, valid_df = train_test_split(trainval_df, test_size=test_size, random_state=1986, stratify=trainval_df['is_duplicate'])
# + deletable=true editable=true
# tokenize and pad
all_questions = pd.concat([trainval_df['question1_WL'],trainval_df['question2_WL'],test_df['question1_WL'],test_df['question2_WL']], axis=0)
tokenizer = Tokenizer(num_words=None, lower=True)
tokenizer.fit_on_texts(all_questions)
word_index = tokenizer.word_index
nb_words = len(word_index)
print("Words in index: %d" % nb_words) #126355
train_q1 = pad_sequences(tokenizer.texts_to_sequences(train_df['question1_WL']), maxlen = MAX_LEN)
train_q2 = pad_sequences(tokenizer.texts_to_sequences(train_df['question2_WL']), maxlen = MAX_LEN)
valid_q1 = pad_sequences(tokenizer.texts_to_sequences(valid_df['question1_WL']), maxlen = MAX_LEN)
valid_q2 = pad_sequences(tokenizer.texts_to_sequences(valid_df['question2_WL']), maxlen = MAX_LEN)
y_train = train_df.is_duplicate.values
y_valid = valid_df.is_duplicate.values
train_q1_Double = np.vstack((train_q1, train_q2))
train_q2_Double = np.vstack((train_q2, train_q1))
valid_q1_Double = np.vstack((valid_q1, valid_q2))
valid_q2_Double = np.vstack((valid_q2, valid_q1))
y_train_Double = np.hstack((y_train, y_train))
y_valid_Double = np.hstack((y_valid, y_valid))
val_sample_weights = np.ones(len(y_valid_Double))
if RE_WEIGHT:
    class_weight = {0: 1.309028344, 1: 0.472001959}
    val_sample_weights *= 0.472001959
    val_sample_weights[y_valid_Double == 0] = 1.309028344
else:
    class_weight = None
    val_sample_weights = None
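The two magic constants above can be reproduced by rescaling the training-set class shares toward the assumed test-set positive share of roughly 17.5%; the share values in this sketch are approximate, commonly quoted figures for this dataset, not taken from the notebook:

```python
# Rescale so the weighted positive share matches the assumed test-set share.
p_train = 0.369198   # approximate positive share of is_duplicate in train
p_test = 0.174264    # assumed positive share in the test set (approximate)

w_pos = p_test / p_train              # weight for class 1
w_neg = (1 - p_test) / (1 - p_train)  # weight for class 0
print(w_pos, w_neg)  # roughly 0.472 and 1.309, matching the constants above
```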
# + deletable=true editable=true
# load word_embedding_matrix
W2V = 'glove.840B.300d.txt'
file_name = W2V + '.word_embedding_matrix.pickle'
if os.path.exists(OUTPUT_DIR + file_name):
    print('Loading from file ' + file_name)
    with open(OUTPUT_DIR + file_name, 'rb') as f:
        word_embedding_matrix = pickle.load(f)
else:
    print('Generating file ' + file_name)
    # Load GloVe to use pretrained vectors
    embeddings_index = {}
    with open(DATA_DIR + '/WordEmbedding/' + W2V) as f:
        for line in f:
            values = line.split(' ')
            word = values[0]
            embedding = np.asarray(values[1:], dtype='float32')
            embeddings_index[word] = embedding
    print('Word embeddings:', len(embeddings_index))  # 1,505,774
    # Need to use EMBEDDING_DIM for embedding dimensions to match GloVe's vectors.
    nb_words = len(word_index)
    null_embedding_words = []
    word_embedding_matrix = np.zeros((nb_words + 1, EMBEDDING_DIM))
    for word, i in word_index.items():
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            word_embedding_matrix[i] = embedding_vector
        else:
            # words not found in the embedding index stay all-zeros
            null_embedding_words.append(word)
    print('Null word embeddings: %d' % len(null_embedding_words))  # 43,229
    with open(OUTPUT_DIR + file_name, 'wb') as f:
        pickle.dump(word_embedding_matrix, f)
# + deletable=true editable=true active=""
# word_counts = tokenizer.word_counts
# null_embedding_word_counts = { word: word_counts[word] for word in null_embedding_words }
# print(sum(null_embedding_word_counts.values())) #454210
#
# word_docs = tokenizer.word_docs
# null_embedding_word_docs = { word: word_docs[word] for word in null_embedding_words }
# print(sum(null_embedding_word_docs.values())) #446584
# # 446584/(404290+2345796)/2 = 0.08119
# + deletable=true editable=true
EMBEDDING_TRAINABLE = False
RNNCELL_SIZE = 64
RNNCELL_LAYERS = 1
RNNCELL_DROPOUT = 0
RNNCELL_RECURRENT_DROPOUT = 0
RNNCELL_BIDIRECT = False
DENSE_SIZE = 64
DENSE_LAYERS = 1
DENSE_DROPOUT = 0
# + deletable=true editable=true
encode_model = Sequential()
encode_model.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length=MAX_LEN, trainable=EMBEDDING_TRAINABLE))
if RNNCELL_BIDIRECT:
    for i in range(RNNCELL_LAYERS - 1):
        encode_model.add(Bidirectional(LSTM(RNNCELL_SIZE, dropout=RNNCELL_DROPOUT, recurrent_dropout=RNNCELL_RECURRENT_DROPOUT,
                                            unroll=True, implementation=2, return_sequences=True)))
    encode_model.add(Bidirectional(LSTM(RNNCELL_SIZE, dropout=RNNCELL_DROPOUT, recurrent_dropout=RNNCELL_RECURRENT_DROPOUT,
                                        unroll=True, implementation=2)))
else:
    for i in range(RNNCELL_LAYERS - 1):
        encode_model.add(LSTM(RNNCELL_SIZE, dropout=RNNCELL_DROPOUT, recurrent_dropout=RNNCELL_RECURRENT_DROPOUT,
                              unroll=True, implementation=2, return_sequences=True))
    encode_model.add(LSTM(RNNCELL_SIZE, dropout=RNNCELL_DROPOUT, recurrent_dropout=RNNCELL_RECURRENT_DROPOUT,
                          unroll=True, implementation=2))
sequence1_input = Input(shape=(MAX_LEN,), name='q1')
sequence2_input = Input(shape=(MAX_LEN,), name='q2')
encoded_1 = encode_model(sequence1_input)
encoded_2 = encode_model(sequence2_input)
merged = concatenate([encoded_1, encoded_2], axis=-1)
merged = Dropout(DENSE_DROPOUT)(merged)
# merged = BatchNormalization()(merged)
for i in range(DENSE_LAYERS):
    merged = Dense(DENSE_SIZE, activation='relu', kernel_initializer='he_normal')(merged)
    merged = Dropout(DENSE_DROPOUT)(merged)
predictions = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[sequence1_input, sequence2_input], outputs=predictions)
# + deletable=true editable=true
encode_model.summary()
# + deletable=true editable=true
model.summary()
# + deletable=true editable=true
optimizer = Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
callbacks = [ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, verbose=1),
EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1),
ModelCheckpoint(filepath=CHECKPOINT_DIR+'weights.{epoch:03d}-{val_loss:.4f}.hdf5', monitor='val_loss', verbose=1, save_best_only=True),
TensorBoard(log_dir=LOG_DIR, histogram_freq=0, write_graph=False, write_images=True)]
print('BATCH_SIZE:', BATCH_SIZE)
model.fit({'q1': train_q1_Double, 'q2': train_q2_Double}, y_train_Double,
batch_size=BATCH_SIZE, epochs=100, verbose=2, callbacks=callbacks,
validation_data=({'q1': valid_q1_Double, 'q2': valid_q2_Double}, y_valid_Double, val_sample_weights),
shuffle=True, class_weight=class_weight, initial_epoch=0)
# + deletable=true editable=true active=""
# #resume training
#
# model, model_name = get_best_model()
# # model = load_model(CHECKPOINT_DIR + 'weights.025-0.4508.hdf5')
# # model_name = 'weights.025-0.4508.hdf5'
# # print('model_name', model_name)
#
# # #try increasing learningrate
# # optimizer = Adam(lr=1e-4)
# # model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
#
# # callbacks = [ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, verbose=1),
# # EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1),
# # ModelCheckpoint(filepath=CHECKPOINT_DIR+'weights.{epoch:03d}-{val_loss:.4f}.hdf5', monitor='val_loss', verbose=1, save_best_only=True),
# # TensorBoard(log_dir=LOG_DIR, histogram_freq=0, write_graph=False, write_images=True)]
#
# print('BATCH_SIZE:', BATCH_SIZE)
# model.fit({'q1': train_q1_Double, 'q2': train_q2_Double}, y_train_Double,
# batch_size=BATCH_SIZE, epochs=100, verbose=2, callbacks=callbacks,
# validation_data=({'q1': valid_q1_Double, 'q2': valid_q2_Double}, y_valid_Double, val_sample_weights),
# shuffle=True, class_weight=class_weight, initial_epoch=)
# + deletable=true editable=true
model = load_model(CHECKPOINT_DIR + 'weights.002-0.2769.hdf5')
model_name = 'weights.002-0.2769.hdf5'
print('model_name', model_name)
val_loss = model.evaluate({'q1': valid_q1_Double, 'q2': valid_q2_Double}, y_valid_Double, sample_weight=val_sample_weights, batch_size=BATCH_SIZE, verbose=2)
val_loss
# + deletable=true editable=true
#Create submission
test_q1 = pad_sequences(tokenizer.texts_to_sequences(test_df['question1_WL']), maxlen = MAX_LEN)
test_q2 = pad_sequences(tokenizer.texts_to_sequences(test_df['question2_WL']), maxlen = MAX_LEN)
predictions = model.predict({'q1': test_q1, 'q2': test_q2}, batch_size=BATCH_SIZE, verbose=2)
predictions += model.predict({'q1': test_q2, 'q2': test_q1}, batch_size=BATCH_SIZE, verbose=2)
predictions /= 2
submission = pd.DataFrame(predictions, columns=['is_duplicate'])
submission.insert(0, 'test_id', test_df.test_id)
file_name = MODEL+'_'+model_name+'_LSTM{:d}*{:d}_DENSE{:d}*{:d}_valloss{:.4f}.csv' \
.format(RNNCELL_SIZE,RNNCELL_LAYERS,DENSE_SIZE,DENSE_LAYERS,val_loss[0])
submission.to_csv(OUTPUT_DIR+file_name, index=False)
print(file_name)
# + deletable=true editable=true active=""
# sys.stdout = open(OUTPUT_DIR+'training_output.txt', 'a')
# history = model.fit({'q1': train_q1, 'q2': train_q2}, y_train, batch_size=BATCH_SIZE, epochs=3, verbose=2, callbacks=callbacks,
# validation_data=({'q1': valid_q1, 'q2': valid_q2}, y_valid), shuffle=True, initial_epoch=0)
# sys.stdout = sys.__stdout__
# + deletable=true editable=true active=""
# summary_stats = pd.DataFrame({'epoch': [ i + 1 for i in history.epoch ],
# 'train_acc': history.history['acc'],
# 'valid_acc': history.history['val_acc'],
# 'train_loss': history.history['loss'],
# 'valid_loss': history.history['val_loss']})
# summary_stats
#
# plt.plot(summary_stats.train_loss) # blue
# plt.plot(summary_stats.valid_loss) # green
# plt.show()
# + deletable=true editable=true active=""
# units = 128 # Number of nodes in the Dense layers
# dropout = 0.25 # Percentage of nodes to drop
# nb_filter = 32 # Number of filters to use in Convolution1D
# filter_length = 3 # Length of filter for Convolution1D
# # Initialize weights and biases for the Dense layers
# weights = initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=2)
# bias = bias_initializer='zeros'
#
# model1 = Sequential()
# model1.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length = MAX_LEN, trainable = False))
# model1.add(Convolution1D(filters=nb_filter, kernel_size=filter_length, padding='same'))
# model1.add(BatchNormalization())
# model1.add(Activation('relu'))
# model1.add(Dropout(dropout))
# model1.add(Convolution1D(filters=nb_filter, kernel_size=filter_length, padding='same'))
# model1.add(BatchNormalization())
# model1.add(Activation('relu'))
# model1.add(Dropout(dropout))
# model1.add(Flatten())
#
#
# model2 = Sequential()
# model2.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length = MAX_LEN, trainable = False))
# model2.add(Convolution1D(filters=nb_filter, kernel_size=filter_length, padding='same'))
# model2.add(BatchNormalization())
# model2.add(Activation('relu'))
# model2.add(Dropout(dropout))
# model2.add(Convolution1D(filters=nb_filter, kernel_size=filter_length, padding='same'))
# model2.add(BatchNormalization())
# model2.add(Activation('relu'))
# model2.add(Dropout(dropout))
# model2.add(Flatten())
#
#
# model3 = Sequential()
# model3.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length = MAX_LEN, trainable = False))
# model3.add(TimeDistributed(Dense(EMBEDDING_DIM)))
# model3.add(BatchNormalization())
# model3.add(Activation('relu'))
# model3.add(Dropout(dropout))
# model3.add(Lambda(lambda x: K.max(x, axis=1), output_shape=(EMBEDDING_DIM, )))
#
#
# model4 = Sequential()
# model4.add(Embedding(nb_words + 1, EMBEDDING_DIM, weights=[word_embedding_matrix], input_length = MAX_LEN, trainable = False))
# model4.add(TimeDistributed(Dense(EMBEDDING_DIM)))
# model4.add(BatchNormalization())
# model4.add(Activation('relu'))
# model4.add(Dropout(dropout))
# model4.add(Lambda(lambda x: K.max(x, axis=1), output_shape=(EMBEDDING_DIM, )))
#
#
# modela = Sequential()
# modela.add(Merge([model1, model2], mode='concat'))
# modela.add(Dense(units*2, kernel_initializer=weights, bias_initializer=bias))
# modela.add(BatchNormalization())
# modela.add(Activation('relu'))
# modela.add(Dropout(dropout))
# modela.add(Dense(units, kernel_initializer=weights, bias_initializer=bias))
# modela.add(BatchNormalization())
# modela.add(Activation('relu'))
# modela.add(Dropout(dropout))
#
#
# modelb = Sequential()
# modelb.add(Merge([model3, model4], mode='concat'))
# modelb.add(Dense(units*2, kernel_initializer=weights, bias_initializer=bias))
# modelb.add(BatchNormalization())
# modelb.add(Activation('relu'))
# modelb.add(Dropout(dropout))
# modelb.add(Dense(units, kernel_initializer=weights, bias_initializer=bias))
# modelb.add(BatchNormalization())
# modelb.add(Activation('relu'))
# modelb.add(Dropout(dropout))
#
#
# model = Sequential()
# model.add(Merge([modela, modelb], mode='concat'))
# model.add(Dense(units*2, kernel_initializer=weights, bias_initializer=bias))
# model.add(BatchNormalization())
# model.add(Activation('relu'))
# model.add(Dropout(dropout))
# model.add(Dense(units, kernel_initializer=weights, bias_initializer=bias))
# model.add(BatchNormalization())
# model.add(Activation('relu'))
# model.add(Dropout(dropout))
# model.add(Dense(units, kernel_initializer=weights, bias_initializer=bias))
# model.add(BatchNormalization())
# model.add(Activation('relu'))
# model.add(Dropout(dropout))
# model.add(Dense(1, kernel_initializer=weights, bias_initializer=bias))
# model.add(BatchNormalization())
# model.add(Activation('sigmoid'))
|
Baseline/Baseline.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tensorflow Adversarial Embedding MNIST Demo
#
# This demo will cover the adversarial backdoor embedding attack on an MNIST-LeNet model.
# The following two sections cover experiment setup and are similar across all demos.
# To access demo results in MlFlow, please follow the general experiment setup steps outlined in `basic-mlflow-demo`.
# ## Setup: Experiment Name and MNIST Dataset
#
# Here we will import the necessary Python modules and ensure the proper environment variables are set so that all the code blocks will work as expected.
#
# **Important: Users will need to verify or update the following parameters:**
#
# - Ensure that the `USERNAME` parameter is set to your own name.
# - Ensure that the `DATASET_DIR` parameter is set to the location of the MNIST dataset directory. Currently set to `/nfs/data/Mnist` as the default location.
# - (Optional) Set the `EXPERIMENT_NAME` parameter to your own preferred experiment name.
#
# Other parameters can be modified to alter the RESTful API and MLFlow tracking addresses.
# +
# Import packages from the Python standard library
import os
import pprint
import time
import warnings
from pathlib import Path
from typing import Tuple
# Filter out warning messages
warnings.filterwarnings("ignore")
# Please enter custom username here.
USERNAME = "howard"
# Ensure that the dataset location is properly set here.
DATASET_DIR = "/nfs/data/Mnist"
# Experiment name (note the username_ prefix convention)
EXPERIMENT_NAME = f"{USERNAME}_mnist_poison_model"
# Address for connecting the docker container to exposed ports on the host device
HOST_DOCKER_INTERNAL = "host.docker.internal"
# HOST_DOCKER_INTERNAL = "172.17.0.1"
# Testbed API ports
RESTAPI_PORT = "30080"
MLFLOW_TRACKING_PORT = "35000"
# Default address for accessing the RESTful API service
RESTAPI_ADDRESS = (
f"http://{HOST_DOCKER_INTERNAL}:{RESTAPI_PORT}"
if os.getenv("IS_JUPYTER_SERVICE")
else f"http://localhost:{RESTAPI_PORT}"
)
# Override the AI_RESTAPI_URI variable, used to connect to RESTful API service
os.environ["AI_RESTAPI_URI"] = RESTAPI_ADDRESS
# Default address for accessing the MLFlow Tracking server
MLFLOW_TRACKING_URI = (
f"http://{HOST_DOCKER_INTERNAL}:{MLFLOW_TRACKING_PORT}"
if os.getenv("IS_JUPYTER_SERVICE")
else f"http://localhost:{MLFLOW_TRACKING_PORT}"
)
# Path to custom task plugins archives
CUSTOM_PLUGINS_POISONING_TAR_GZ = Path("custom-plugins-poisoning.tar.gz")
# Override the MLFLOW_TRACKING_URI variable, used to connect to MLFlow Tracking service
os.environ["MLFLOW_TRACKING_URI"] = MLFLOW_TRACKING_URI
# Base API address
RESTAPI_API_BASE = f"{RESTAPI_ADDRESS}/api"
# Path to workflows archive
WORKFLOWS_TAR_GZ = Path("workflows.tar.gz")
# Import third-party Python packages
import numpy as np
import requests
from mlflow.tracking import MlflowClient
# Import utils.py file
import utils
# Create random number generator
rng = np.random.default_rng(54399264723942495723666216079516778448)
# -
# ## Submit and run jobs
# The entrypoints that we will be running in this example are implemented in the Python source files under `src/` and the `MLproject` file.
# To run these entrypoints within the testbed architecture, we need to package those files up into an archive and submit it to the Testbed RESTful API to create a new job.
# For convenience, the `Makefile` provides a rule for creating the archive file for this example, just run `make workflows`,
# + language="bash"
#
# # Create the workflows.tar.gz file
# make workflows
# -
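# For reference, the Makefile rule essentially bundles the entrypoint sources into a gzipped tar archive. A rough Python equivalent using only the standard library (the member names here are illustrative, not the actual Makefile contents):

```python
import tarfile
from pathlib import Path

def make_workflows_archive(archive_path="workflows.tar.gz",
                           members=("MLproject", "src")):
    """Bundle the given files/directories into a gzipped tar archive."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for member in members:
            if Path(member).exists():  # skip anything missing rather than fail
                tar.add(member)
    return archive_path
```

# Running `make workflows` remains the supported path; this sketch only shows what the archive step amounts to.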
# To connect with the endpoint, we will use a client class defined in the `utils.py` file that is able to connect with the Testbed RESTful API using the HTTP protocol.
# We connect using the client below, which uses the environment variable `AI_RESTAPI_URI` to figure out how to connect to the Testbed RESTful API,
restapi_client = utils.SecuringAIClient()
# We need to register an experiment under which to collect our job runs.
# The code below checks if the relevant experiment exists.
# If it does, then it just returns info about the experiment, if it doesn't, it then registers the new experiment.
# +
response_experiment = restapi_client.get_experiment_by_name(name=EXPERIMENT_NAME)
if response_experiment is None or "Not Found" in response_experiment.get("message", []):
response_experiment = restapi_client.register_experiment(name=EXPERIMENT_NAME)
response_experiment
# -
# We should also check which queues are available for running our jobs to make sure that the resources that we need are available.
# The code below queries the Testbed API and returns a list of active queues.
restapi_client.list_queues()
# This example also makes use of the `custom_poisoning_plugins` package stored locally under the `task-plugins/securingai_custom/custom_poisoning_plugins` directory.
# To register these custom task plugins, we first need to package them up into an archive.
# For convenience, the `Makefile` provides a rule for creating the custom task plugins archive file, just run `make custom-plugins`,
# + language="bash"
#
# # Create the custom-plugins-poisoning.tar.gz file
# make custom-plugins
# -
# Now that the custom task plugin package is packaged into an archive file, next we register it by uploading the file to the REST API.
# Note that we need to provide the name to use for custom task plugin package, this name must be unique under the custom task plugins namespace.
# For a full list of the registered custom task plugins, use `restapi_client.list_custom_task_plugins()`.
# +
# Delete any previously registered copy so the latest plugin files are uploaded below.
restapi_client.delete_custom_task_plugin(name="custom_poisoning_plugins")
response_custom_plugins = restapi_client.get_custom_task_plugin(name="custom_poisoning_plugins")
if response_custom_plugins is None or "Not Found" in response_custom_plugins.get("message", []):
response_custom_plugins = restapi_client.upload_custom_plugin_package(
custom_plugin_name="custom_poisoning_plugins",
custom_plugin_file=CUSTOM_PLUGINS_POISONING_TAR_GZ,
)
response_custom_plugins
# -
# If at any point you need to update one or more files within the `custom_poisoning_plugins` package, you will need to unregister/delete the custom task plugin first using the REST API.
# This can be done as follows,
# ```python
# # Delete the 'custom_poisoning_plugins' package
# restapi_client.delete_custom_task_plugin(name="custom_poisoning_plugins")
# ```
# The following helper functions will recheck the job responses until the job is completed or a run ID is available.
# The run ID is needed to link dependencies between jobs.
# +
def mlflow_run_id_is_not_known(job_response):
return job_response["mlflowRunId"] is None and job_response["status"] not in [
"failed",
"finished",
]
def get_run_id(job_response):
while mlflow_run_id_is_not_known(job_response):
time.sleep(1)
job_response = restapi_client.get_job_by_id(job_response["jobId"])
return job_response
def wait_until_finished(job_response):
    # First make sure the job has started.
    job_response = get_run_id(job_response)
    # Then re-check the job until it has stopped running.
    while job_response["status"] not in ["failed", "finished"]:
        time.sleep(1)
        job_response = restapi_client.get_job_by_id(job_response["jobId"])
    return job_response
# Helper function for viewing MLflow results.
def get_mlflow_results(job_response):
    mlflow_client = MlflowClient()
    job_response = wait_until_finished(job_response)
    if job_response["status"] == "failed":
        return {}
    run = mlflow_client.get_run(job_response["mlflowRunId"])
    while len(run.data.metrics) == 0:
        time.sleep(1)
        run = mlflow_client.get_run(job_response["mlflowRunId"])
    return run
def print_mlflow_results(response):
    results = get_mlflow_results(response)
    if not results:  # an empty dict means the job failed and logged no metrics
        print("Job failed; no metrics to display.")
        return
    pprint.pprint(results.data.metrics)
# -
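# The helpers above poll forever, so if a job hangs, the notebook hangs with it. A hedged sketch of a generic timeout-guarded polling loop — the `fetch_status` callable is hypothetical, standing in for a closure over `restapi_client.get_job_by_id`:

```python
import time

def poll_until(fetch_status, done_states=("failed", "finished"),
               interval=1.0, timeout=3600.0):
    """Poll fetch_status() until its "status" is terminal or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        response = fetch_status()
        if response["status"] in done_states:
            return response
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"job still {response['status']!r} after {timeout}s")
        time.sleep(interval)
```

# For example, `poll_until(lambda: restapi_client.get_job_by_id(job_id))` would behave like `wait_until_finished` but give up after an hour instead of blocking indefinitely.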
# ## MNIST Training: Baseline Model
# Next, we need to train our baseline model that will serve as a reference point for the effectiveness of our attacks.
# We will be submitting our job to the `"tensorflow_gpu"` queue.
# +
response_le_net_train = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="train",
entry_point_kwargs=" ".join([
"-P batch_size=256",
f"-P register_model_name={EXPERIMENT_NAME}_le_net",
"-P model_architecture=le_net",
"-P epochs=30",
f"-P data_dir_training={DATASET_DIR}/training",
f"-P data_dir_testing={DATASET_DIR}/testing",
]),
queue="tensorflow_gpu",
timeout="1h",
)
print("Training job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_le_net_train)
response_le_net_train = get_run_id(response_le_net_train)
print_mlflow_results(response_le_net_train)
# -
# ### Generating Poisoned Images
# Now we will create our set of poisoned images.
# Start by submitting the poison generation job below.
# +
## Create poisoned test images.
response_gen_poison_le_net_test = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="gen_poison_data",
entry_point_kwargs=" ".join(
[
f"-P data_dir={DATASET_DIR}/testing",
"-P batch_size=100",
"-P target_class=1",
"-P poison_fraction=1",
"-P label_type=test"
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_train["jobId"],
)
print("Backdoor poison attack (LeNet-5 architecture) job submitted")
print("")
pprint.pprint(response_gen_poison_le_net_test)
print("")
response_gen_poison_le_net_test = get_run_id(response_gen_poison_le_net_test)
# +
## Create poisoned training images.
response_gen_poison_le_net_train = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="gen_poison_data",
entry_point_kwargs=" ".join(
[
f"-P data_dir={DATASET_DIR}/training",
"-P batch_size=100",
"-P target_class=1",
"-P poison_fraction=0.1",
"-P label_type=train"
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_train["jobId"],
)
print("Backdoor poison attack (LeNet-5 architecture) job submitted")
print("")
pprint.pprint(response_gen_poison_le_net_train)
print("")
response_gen_poison_le_net_train = get_run_id(response_gen_poison_le_net_train)
# -
# ## MNIST Training: Poisoned Model using the Adversarial Backdoor Embedding technique
# Next we will train our poisoned model using the Adversarial Backdoor Embedding technique.
# +
# Train poisoned model
response_le_net_poison_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="gen_poison_model",
entry_point_kwargs=" ".join([
"-P batch_size=256",
f"-P register_model_name={EXPERIMENT_NAME}_poisoned_emb_le_net",
"-P model_architecture=le_net",
"-P epochs=30",
f"-P data_dir_training={DATASET_DIR}/training",
f"-P data_dir_testing={DATASET_DIR}/testing",
"-P target_class_id=1",
"-P feature_layer_index=6",
"-P discriminator_layer_1_size=256",
"-P discriminator_layer_2_size=128",
"-P poison_fraction=0.15",
]),
queue="tensorflow_gpu",
timeout="1h",
)
print("Training job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_le_net_poison_model)
response_le_net_poison_model = get_run_id(response_le_net_poison_model)
print_mlflow_results(response_le_net_poison_model)
# -
# ## Model Evaluation: Poisoned vs Regular Models on Backdoor-Poisoned Images.
# Below we will compare the results of the regular model vs poisoned-backdoor model on backdoor test images.
# +
# Inference: Regular model on poisoned test images
response_infer_reg_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_gen_poison_le_net_test['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_le_net",
f"-P model_version=none",
"-P batch_size=512",
"-P adv_tar_name=adversarial_poison.tar.gz",
"-P adv_data_dir=adv_poison_data",
]
),
queue="tensorflow_gpu",
depends_on=response_gen_poison_le_net_train["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_reg_model)
print_mlflow_results(response_infer_reg_model)
# +
# Inference: Poisoned model on poisoned test images
response_infer_pos_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_gen_poison_le_net_test['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_poisoned_emb_le_net",
f"-P model_version=none",
"-P batch_size=512",
"-P adv_tar_name=adversarial_poison.tar.gz",
"-P adv_data_dir=adv_poison_data",
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_poison_model["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_pos_model)
print_mlflow_results(response_infer_pos_model)
# -
# ## Defending against the adversarial backdoor poisoning attack
# Now we will explore available defenses on the adversarial backdoor poisoning attack.
# The following three jobs will run a selected defense (spatial smoothing, gaussian augmentation, or jpeg compression) and evaluate the defense on the baseline and backdoor trained models.
#
# - The first job uses the selected defense entrypoint to apply a preprocessing defense over the poisoned test images.
# - The second job runs the defended images against the poisoned backdoor model.
# - The final job runs the defended images against the baseline model.
#
# Ideally the defense will not impact the baseline model accuracy, while improving the backdoor model accuracy scores.
# +
defenses = ["gaussian_augmentation", "spatial_smoothing", "jpeg_compression"]
defense = defenses[1]
response_poison_def = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point=defense,
entry_point_kwargs=" ".join(
[
f"-P data_dir={DATASET_DIR}/testing",
"-P batch_size=20",
"-P load_dataset_from_mlruns=true",
f"-P dataset_run_id={response_gen_poison_le_net_test['mlflowRunId']}",
"-P dataset_tar_name=adversarial_poison.tar.gz",
"-P dataset_name=adv_poison_data",
]
),
queue="tensorflow_gpu",
depends_on=response_gen_poison_le_net_test["jobId"],
)
print(f"{defense} defense (LeNet-5 architecture) job submitted")
print("")
pprint.pprint(response_poison_def)
print("")
response_poison_def = get_run_id(response_poison_def)
# +
# Inference: Regular model on poisoned test images.
response_infer_reg_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_poison_def['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_le_net",
f"-P model_version=none",
"-P batch_size=512",
f"-P adv_tar_name={defense}_dataset.tar.gz",
"-P adv_data_dir=adv_testing",
]
),
queue="tensorflow_gpu",
depends_on=response_poison_def["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_reg_model)
print_mlflow_results(response_infer_reg_model)
# +
# Inference: Poisoned model on poisoned test images.
response_infer_pos_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_poison_def['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_poisoned_emb_le_net",
f"-P model_version=none",
"-P batch_size=512",
f"-P adv_tar_name={defense}_dataset.tar.gz",
"-P adv_data_dir=adv_testing",
]
),
queue="tensorflow_gpu",
depends_on=response_poison_def["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_pos_model)
print_mlflow_results(response_infer_pos_model)
# -
# <a id='querying_cell'></a>
# ## Querying the MLFlow Tracking Service
# Currently the lab API can only be used to register experiments and start jobs, so if users wish to extract their results programmatically, they can use the `MlflowClient()` class from the `mlflow` Python package to connect and query their results.
# Since we captured the run ids generated by MLFlow, we can easily retrieve the data logged about one of our jobs and inspect the results.
# To start the client, we simply need to run,
mlflow_client = MlflowClient()
# The client uses the environment variable `MLFLOW_TRACKING_URI` to figure out how to connect to the MLFlow Tracking Service, which we configured near the top of this notebook.
# To query the results of one of our runs, we just need to pass the run id to the client's `get_run()` method.
# As an example, let's query the run results for the baseline training job for the LeNet-5 architecture,
run_le_net = mlflow_client.get_run(response_le_net_train["mlflowRunId"])
# If the request completed successfully, we should now be able to query data collected during the run.
# For example, to review the collected metrics, we just use,
pprint.pprint(run_le_net.data.metrics)
# To review the run's parameters, we use,
pprint.pprint(run_le_net.data.params)
# To review the run's tags, we use,
pprint.pprint(run_le_net.data.tags)
# There are many things you can query using the MLFlow client.
# [The MLFlow documentation gives a full overview of the methods that are available](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient).
|
examples/tensorflow-backdoor-poisoning/demo-mnist-poison-backdoor-adv-embedding.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.008384, "end_time": "2020-08-29T07:32:52.851757", "exception": false, "start_time": "2020-08-29T07:32:52.843373", "status": "completed"} tags=[]
# # K-Means Clustering
# ---
# In this notebook we will take a look at another clustering algorithm known as K-Means Clustering. In clustering, the aim is to divide unstructured data into different groups based on patterns in the data. Generally, there is no target class or target variable to be predicted. We simply look at the data and try to club together data points based upon similarity among the observations, thus forming different groups. Hence it is an unsupervised learning problem.
#
# So now that we have some idea about what the algorithm is let's take a real-world problem and try and apply this algorithm.
#
#
# ## Importing Project Dependencies
# ---
# Let's begin by importing the necessary PyData modules.
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.561517, "end_time": "2020-08-29T07:32:54.433140", "exception": false, "start_time": "2020-08-29T07:32:52.871623", "status": "completed"} tags=[]
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from mpl_toolkits.mplot3d import Axes3D
# + [markdown] papermill={"duration": 0.006698, "end_time": "2020-08-29T07:32:54.448589", "exception": false, "start_time": "2020-08-29T07:32:54.441891", "status": "completed"} tags=[]
# ## Importing the Data
# ---
#
# Let's say you are a data analyst for a mall. You have data on the customers who visited the mall, and after analysis you have to hand it to the marketing team to help improve the business. Before starting, let's understand the columns in our data:-
#
# * CustomerID - Contains the unique ID given to each customer
# * Gender - Gender of the customer
# * Age - Age of the Customer
# * Annual Income - Annual income of the customer
# * Spending score - Score assigned by the mall based on customer behavior and spending nature.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" papermill={"duration": 0.043497, "end_time": "2020-08-29T07:32:54.499584", "exception": false, "start_time": "2020-08-29T07:32:54.456087", "status": "completed"} tags=[]
df = pd.read_csv('https://raw.githubusercontent.com/OneStep-elecTRON/ContentSection/main/Datasets/mall_customers.csv')
df.head()
# + papermill={"duration": 0.030331, "end_time": "2020-08-29T07:32:54.537191", "exception": false, "start_time": "2020-08-29T07:32:54.506860", "status": "completed"} tags=[]
df.info() #checking the types and columns of the data
# -
# There are 200 values in our dataset. Now, let us check if there are any null values.
# + papermill={"duration": 0.019141, "end_time": "2020-08-29T07:32:54.563796", "exception": false, "start_time": "2020-08-29T07:32:54.544655", "status": "completed"} tags=[]
df.isnull().sum() #checking for the null values
# -
# As we can see, there are no null values within our dataset. Let us create the feature set then. For training, we will be using the __Annual Income__ and __Spending Score__ columns as features.
# + papermill={"duration": 0.018618, "end_time": "2020-08-29T07:32:54.590010", "exception": false, "start_time": "2020-08-29T07:32:54.571392", "status": "completed"} tags=[]
# creating a two dimensional matrix using Annual Income and Spending Score
x = df.iloc[:,[3,4]].values
# -
# ## Modeling
# ---
# Now that we have the data ready, we will work on creating and training our K-Means clustering model. Let's see step by step how it is done. The following is the syntax for defining a k-means model using scikit-learn.
#
# > **KMeans(n_clusters, init, max_iter, n_init, random_state):-**
# * n_clusters = The number of clusters to form as well as the number of centroids to generate.
# * init = Method for initialization of the centroids.
# * max_iter = Maximum number of iterations of the k-means algorithm for a single run.
# * n_init = Number of times the k-means algorithm will be run with different centroid seeds.
# * random_state = Determines random number generation for centroid initialization.
# +
# Step 1- Importing k-means model class from sklearn
from sklearn.cluster import KMeans
# Step 2- Creating model object (For now, we will go with default hyperparameter values)
model = KMeans()
# Step 3- Training the model and generating predictions
y_cluster = model.fit_predict(x)
# -
# We have successfully trained our model and generated the clustering predictions on our data. Now let us evaluate the initial performance of our model. Mind you, this model was trained using default hyperparameter values, so we will have to optimize it at a later stage.
# The evaluation metric we will be using here is WCSS (Within-Cluster Sum of Squares).
#
# Let us understand what the term WCSS actually means.
#
# > **WCSS:-** It is the sum of squared distances from each data point to the centroid of its cluster. The idea is to minimize this sum. Suppose there are 'n' observations in a given dataset and we specify 'n' clusters (i.e., k = n); then WCSS becomes zero, since the data points themselves act as centroids and every distance is zero. Ideally this forms a perfect clustering, but it doesn't make any sense, as we have as many clusters as observations. Thus, there exists a threshold value for K that we can find using the __*elbow point graph*__.
#
# Sklearn lets us check the WCSS of a fitted model via the `model.inertia_` attribute.
model.inertia_
# As we can see, we got an initial WCSS of around 7.5e+4. In the next section, we will learn how to get an optimal k-value so as to minimize WCSS.
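# As a sanity check on what `inertia_` measures, WCSS can be recomputed by hand from the points, their cluster labels, and the centroids. A minimal NumPy sketch on toy data (not the mall dataset):

```python
import numpy as np

def wcss(points, labels, centroids):
    """Sum of squared distances from each point to its assigned centroid."""
    diffs = points - centroids[labels]          # (n, d) point-to-centroid offsets
    return float(np.sum(diffs ** 2))

# Two obvious clusters around (0, 1) and (10, 11).
pts = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 10.0], [10.0, 12.0]])
lbl = np.array([0, 0, 1, 1])
cts = np.array([[0.0, 1.0], [10.0, 11.0]])      # per-cluster means
print(wcss(pts, lbl, cts))                      # each point is 1 away -> 4.0
```

# On a fitted sklearn model, `wcss(x, model.labels_, model.cluster_centers_)` should agree with `model.inertia_` up to floating-point error.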
# + [markdown] papermill={"duration": 0.007298, "end_time": "2020-08-29T07:32:54.604938", "exception": false, "start_time": "2020-08-29T07:32:54.597640", "status": "completed"} tags=[]
# ## Finding the Optimal Number of Clusters
# ---
# The very first step in K-Means clustering is to find the optimal number of clusters, i.e., the number of groups to divide our data into. There are a few methods to do this, and in this notebook we will be using the __Elbow Method__.
#
# > **Elbow Method:-** We fit the K-Means algorithm for a range of K values and plot each K value against the WCSS it produces.
# + papermill={"duration": 0.726963, "end_time": "2020-08-29T07:32:55.339485", "exception": false, "start_time": "2020-08-29T07:32:54.612522", "status": "completed"} tags=[]
# find the optimal number of clusters using elbow method
WCSS = []
for i in range(1,11):
model = KMeans(n_clusters = i,init = 'k-means++')
model.fit(x)
WCSS.append(model.inertia_)
fig = plt.figure(figsize = (7,7))
plt.plot(range(1,11),WCSS, linewidth=4, markersize=12,marker='o',color = 'red')
plt.xticks(np.arange(11))
plt.xlabel("Number of clusters")
plt.ylabel("WCSS")
plt.show()
# + [markdown] papermill={"duration": 0.007794, "end_time": "2020-08-29T07:32:55.355447", "exception": false, "start_time": "2020-08-29T07:32:55.347653", "status": "completed"} tags=[]
# With the increase in the number of clusters, the value of WCSS decreases. We select the value of K on the basis of the rate of decrease in WCSS. In the above graph, we see that up until k = 5 the value of WCSS decreases rapidly (the slope is steep), but after k = 5 the rate of decrease becomes comparatively small, and hence we choose 5 as the optimal value. Let's use this to build our model.
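# Reading the elbow off the plot is subjective. One simple programmatic heuristic (a rough sketch, not the only way) picks the k where the drop in WCSS slows the most, i.e. the largest second difference of the curve:

```python
import numpy as np

def elbow_k(wcss_values, k_values):
    """Pick k at the largest second difference (sharpest bend) of the WCSS curve."""
    curve = np.asarray(wcss_values, dtype=float)
    second_diff = curve[:-2] - 2 * curve[1:-1] + curve[2:]   # discrete curvature
    return k_values[int(np.argmax(second_diff)) + 1]         # +1: diff is centered

# A curve that drops steeply until k = 3, then flattens out.
print(elbow_k([1000, 700, 150, 120, 110, 105], [1, 2, 3, 4, 5, 6]))  # prints 3
```

# Applied to the `WCSS` list computed above, `elbow_k(WCSS, list(range(1, 11)))` gives an automated second opinion on the visually chosen k.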
# + papermill={"duration": 0.057252, "end_time": "2020-08-29T07:32:55.435987", "exception": false, "start_time": "2020-08-29T07:32:55.378735", "status": "completed"} tags=[]
model = KMeans(n_clusters = 5, init = "k-means++", max_iter = 300, n_init = 10, random_state = 0)
y_clusters = model.fit_predict(x)
# -
# Let's see the amount of data points clustered into each group.
# + papermill={"duration": 0.163347, "end_time": "2020-08-29T07:32:55.607523", "exception": false, "start_time": "2020-08-29T07:32:55.444176", "status": "completed"} tags=[]
sns.countplot(x=y_clusters)  # newer seaborn versions require keyword arguments
# + papermill={"duration": 0.400039, "end_time": "2020-08-29T07:32:56.016142", "exception": false, "start_time": "2020-08-29T07:32:55.616103", "status": "completed"} tags=[]
plt.figure(figsize = (8,8))
plt.scatter(x[y_clusters == 0,0],x[y_clusters == 0,1],s = 40, c = 'green', label = "High income - Less spending")
plt.scatter(x[y_clusters == 1,0],x[y_clusters == 1,1],s = 40, c = 'blue', label = "medium income - medium spending")
plt.scatter(x[y_clusters == 2,0],x[y_clusters == 2,1],s = 40, c = 'black', label = "High income - high spending")
plt.scatter(x[y_clusters == 3,0],x[y_clusters == 3,1],s = 40, c = 'red', label = "Less income - high spending")
plt.scatter(x[y_clusters == 4,0],x[y_clusters == 4,1],s = 40, c = 'pink', label = "Less income and less spending")
plt.scatter(model.cluster_centers_[:,0],model.cluster_centers_[:,1], s = 100, c = "yellow", label = "centroids")
plt.xlabel("Annual income")
plt.ylabel("Spending score")
plt.legend()
plt.show()
# + [markdown] papermill={"duration": 0.008595, "end_time": "2020-08-29T07:32:56.034010", "exception": false, "start_time": "2020-08-29T07:32:56.025415", "status": "completed"} tags=[]
# Now that we have this 2D plot, let's see if we can convert this into a 3D plot by adding another feature. We will also include Age now and repeat the same process that we did for the first plot.
# + papermill={"duration": 0.019013, "end_time": "2020-08-29T07:32:56.061851", "exception": false, "start_time": "2020-08-29T07:32:56.042838", "status": "completed"} tags=[]
# creating the feature set
x = df[['Age','Annual Income (k$)','Spending Score (1-100)']].values
# + papermill={"duration": 0.620494, "end_time": "2020-08-29T07:32:56.691434", "exception": false, "start_time": "2020-08-29T07:32:56.070940", "status": "completed"} tags=[]
# getting best k-value via elbow method
WCSS = []
for i in range(1,11):
model = KMeans(n_clusters = i,init = 'k-means++')
model.fit(x)
WCSS.append(model.inertia_)
fig = plt.figure(figsize = (7,7))
plt.plot(range(1,11),WCSS, linewidth=4, markersize=12,marker='o',color = 'blue')
plt.xticks(np.arange(11))
plt.xlabel("Number of clusters")
plt.ylabel("WCSS")
plt.show()
# + [markdown] papermill={"duration": 0.009661, "end_time": "2020-08-29T07:32:56.710911", "exception": false, "start_time": "2020-08-29T07:32:56.701250", "status": "completed"} tags=[]
# From the above graph, we find that 6 is the optimum number of clusters. Now we will train our final model with k-value as 6.
# + papermill={"duration": 0.065973, "end_time": "2020-08-29T07:32:56.786503", "exception": false, "start_time": "2020-08-29T07:32:56.720530", "status": "completed"} tags=[]
# creating and training a model with k-value 6
model = KMeans(n_clusters = 6, init = "k-means++", max_iter = 300, n_init = 10, random_state = 0)
y_clusters1 = model.fit_predict(x)
# + papermill={"duration": 0.193496, "end_time": "2020-08-29T07:32:56.993610", "exception": false, "start_time": "2020-08-29T07:32:56.800114", "status": "completed"} tags=[]
sns.countplot(x=y_clusters1)  # newer seaborn versions require keyword arguments
# -
# Finally, let us plot our results on a 3D plot.
# + papermill={"duration": 0.307637, "end_time": "2020-08-29T07:32:57.312199", "exception": false, "start_time": "2020-08-29T07:32:57.004562", "status": "completed"} tags=[]
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[y_clusters1 == 0,0],x[y_clusters1 == 0,1],x[y_clusters1 == 0,2], s = 40 , color = 'blue', label = "cluster 0")
ax.scatter(x[y_clusters1 == 1,0],x[y_clusters1 == 1,1],x[y_clusters1 == 1,2], s = 40 , color = 'orange', label = "cluster 1")
ax.scatter(x[y_clusters1 == 2,0],x[y_clusters1 == 2,1],x[y_clusters1 == 2,2], s = 40 , color = 'green', label = "cluster 2")
ax.scatter(x[y_clusters1 == 3,0],x[y_clusters1 == 3,1],x[y_clusters1 == 3,2], s = 40 , color = '#D12B60', label = "cluster 3")
ax.scatter(x[y_clusters1 == 4,0],x[y_clusters1 == 4,1],x[y_clusters1 == 4,2], s = 40 , color = 'purple', label = "cluster 4")
ax.scatter(x[y_clusters1 == 5,0],x[y_clusters1 == 5,1],x[y_clusters1 == 5,2], s = 40 , color = 'red', label = "cluster 5")  # the model has k = 6, so cluster 5 must be plotted too
ax.set_xlabel('Age of a customer')
ax.set_ylabel('Annual Income')
ax.set_zlabel('Spending Score')
ax.legend()
plt.show()
# + [markdown] papermill={"duration": 0.011507, "end_time": "2020-08-29T07:32:57.335975", "exception": false, "start_time": "2020-08-29T07:32:57.324468", "status": "completed"} tags=[]
# And with this, we come to the end of our notebook. You just performed your first unsupervised machine learning task today! Congratulations! Now go through the notebook again to make sure you actually understand all the concepts properly.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Algorithmic Fairness: Considering Different Definitions
#
# Approximate notebook time: 2 hours
# ## Introduction
#
# Decision making within the United States criminal justice system relies heavily on risk assessment, which determines the potential risk that a released defendant will fail to appear in court or cause harm to the public. Judges use these assessments to decide if bail can be set or if a defendant should be detained before trial. While this is not new in the legal system, the use of risk scores determined by an algorithm are gaining prevalence and support. Proponents promote the use of risk scores to guide judges in their decision making, arguing that machine learning could lead to greater efficiency, accountability, and less biased decisions compared with human judgment ([Henry](https://theappeal.org/risk-assessment-explained/)). On the other hand, critical voices raise the concern that biases can creep into these algorithms at any point in the process, and that algorithms are often applied to the wrong situations ([Henry](https://theappeal.org/risk-assessment-explained/)). Further, they exacerbate the racism embedded deep within the criminal justice system by perpetuating inequalities found in historical data ([Henry](https://theappeal.org/risk-assessment-explained/)).
#
# In the debate about the use of risk assessment algorithms, people have used data analysis to determine the extent to which these algorithms are fair to different groups of people. In this homework, **you will explore some of the many definitions and metrics (different ways of operationalizing data to quantify those definitions) of fairness that can be applied to the risk assessment tool COMPAS**. In doing so, you will understand and provide evidence for or against the presence of bias within the algorithm. You will examine the arguments and analyses made by the company that created COMPAS and the critics of this risk assessment tool to gain a deeper understanding of the technical and societal interpretations and implications of fairness.
#
# **NOTE**: When we discuss bias in this module, we define it most generally as prejudice or an inclination in favor of one person, thing, or group compared to another. In the context of machine learning, bias is a “phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process” ([Rouse](https://searchenterpriseai.techtarget.com/definition/machine-learning-bias-algorithm-bias-or-AI-bias#:~:text=Machine%20learning%20bias%2C%20also%20sometimes,in%20the%20machine%20learning%20process)).
# ## Table of Contents:
# * [Part 0. COMPAS](#part-zero)
# * [Part 1. ProPublica's Perspective](#part-one)
# * [Part 2. Northpointe's Perspective](#part-two)
# * [Part 3. Yet Another Definition of Fairness](#part-three)
# * [Part 4. Conclusion](#part-four)
# ## Setup
#
# Let's begin by importing the packages we need.
# + tags=["hide-output"]
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt
import seaborn as sns
# !pip install aif360
# !pip install BlackBoxAuditing
from aif360.algorithms.preprocessing import DisparateImpactRemover
from aif360.datasets import BinaryLabelDataset
# -
# # Part 0. COMPAS: Why it was created and how it exists in the court system <a id="part-zero"></a>
#
# COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a commercial tool produced by the for-profit company Northpointe (acquired by equivant) known as a recidivism risk assessment system. **Tools like COMPAS are used to predict the risk of future crimes for an individual who has entered the US criminal justice system by outputting a risk score from 1-10**. While COMPAS was initially intended to aid decisions made by probation officers on treatment and supervision of those who are incarcerated, Northpointe has since emphasized the scalability of the tool to “fit the needs of many different decision points” including pre-screening assessments, pretrial release decisions (whether or not to hold an arrested individual in jail until their trial), and post-trial next steps for the defendant ([Northpointe](http://www.northpointeinc.com/files/downloads/FAQ_Document.pdf)). These algorithms are believed by many to provide the ability to make the court system more just by removing or correcting for bias of criminal justice officials.
# ### Question 0a
# Explain 3 parties that are impacted by the COMPAS tool. In what ways are they impacted? (Can you think of impacts beyond those in the courtroom for at least one of your examples?)
# *Student Written Answer Here*
# ### Question 0b
# Based on your initial reading, what is one problem of the criminal justice system that the COMPAS tool could potentially alleviate? What is one potential problem that using the COMPAS algorithm could introduce?
# *Student Written Answer Here*
# ## Dataset Setup
#
# We will be using the data that was obtained and used by ProPublica in their own analysis of the COMPAS tool from Broward County public records of people who were scored between 2013 and 2014 ([ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm)). In order to replicate ProPublica's analysis, we remove any cases where the charge was not within 30 days of the score (ProPublica did this in order to match the COMPAS score with the correct criminal case). We are left with 6172 rows in the dataset.
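# The ±30-day filter can be sketched on a few made-up rows before applying it to the real dataset (same column name, hypothetical values):

```python
import pandas as pd

# Toy rows standing in for the Broward County data; only offsets within
# [-30, 30] days of the COMPAS screening should survive the filter.
toy = pd.DataFrame({"days_b_screening_arrest": [-45, -30, 0, 12, 31]})
kept = toy.query('days_b_screening_arrest <= 30 & days_b_screening_arrest >= -30')
```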
data = pd.read_csv('compas-scores-two-years.csv')
data = data.query('days_b_screening_arrest <= 30 & days_b_screening_arrest >= -30')
data
# We are also able to filter out any information that was not used by ProPublica and select fields for severity of charge, number of priors, demographics, age, sex, compas scores, and whether each person was accused of a crime within two years.
select_data = data[["age", "c_charge_degree", "race", "age_cat", "score_text", "sex", "priors_count",
"days_b_screening_arrest", "decile_score", "is_recid", "two_year_recid", "c_jail_in", "c_jail_out"]]
# ### Question 0c
# Explore the dataset. What is the granularity of this dataset?
# *Student Written Answer Here*
# ***Sensitive features*** are attributes within a dataset that are given special consideration and treatment for potential legal, social, or ethical reasons. Often, these features are recognized and protected by antidiscrimination or privacy laws. One example of a sensitive feature is age.
# ### Question 0d
# Identify 2 sensitive features in the dataset that we have not already mentioned.
# *Student Written Answer Here*
# ### Question 0e
# Pick one of the sensitive features you have identified. Identify at least 2 features in the dataset that are proxies for that sensitive feature.
# *Student Written Answer Here*
# ### Question 0f
# As a data scientist, why is it important to give special consideration to these kinds of features?
# *Student Written Answer Here*
# # Part 1. ProPublica’s Perspective <a id="part-one"></a>
#
# ### Who is ProPublica?
#
# ProPublica is a nonprofit organization that “produces investigative journalism with moral force” ([ProPublica](https://www.propublica.org/about/)). ProPublica was founded as a nonpartisan newsroom aiming to expose and question abuses of power, justice, and public trust, often by systems and institutions deeply ingrained in the US.
#
# In 2016, ProPublica investigated the COMPAS algorithm to assess the accuracy of and potential racial bias within the tool, as it became more popular within the United States court system nationwide. In their analysis, ProPublica used data from defendants with risk scores from Broward County, FL from 2013 to 2014 to test for statistical differences in outcomes for Black and white defendants, which ultimately highlighted racial disparities that exist within the algorithm. ProPublica came to the conclusion that COMPAS utilizes data from a criminal justice system with a history of racial injustices, thus continuing to disproportionately target and arrest Black people in comparison to their white counterparts. While the COMPAS algorithm treats unequal groups alike, which may appear neutral, ProPublica’s data analysis and reporting emphasized the bias against Black defendants and their communities that COMPAS produced from this line of thinking, a claim that Northpointe has disputed (as we will see later).
#
# Let's retrace ProPublica's statistical analysis in order to better understand ProPublica's argument and engage with the metric of fairness that it uses.
# ## Question 1. Logistic Regression: What are the odds of getting a high risk score?
#
# ProPublica’s first attempt at understanding the disparity in risk scores from the COMPAS tool was through logistic regression to model the chance of getting a “higher” (i.e. more "risky") score. COMPAS labels scores 1-4 as low, 5-7 as medium, and 8-10 as high scores. For the purposes of their analysis, ProPublica labeled any score above a low score as high.
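# The mapping above can be sketched as a tiny helper (the name `binarize_score` is just for illustration; the notebook itself does this with `np.where` in the next cell):

```python
# ProPublica's binarization: COMPAS deciles 1-4 count as "low";
# everything above (medium 5-7 and high 8-10) counts as "high".
def binarize_score(decile):
    """Return 1 for a 'high' score (decile > 4), else 0."""
    return 1 if decile > 4 else 0

labels = [binarize_score(d) for d in range(1, 11)]
```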
# ### Question 1a (i)
# Create a logistic regression model to predict the score of defendants based on their sex, age, race, previous arrests, seriousness of the crime, and future criminal behavior.
# +
# Create the dependent (target) variable from the decile score: 1 for a "high" score, 0 for a "low" score.
y = np.where(select_data.decile_score>4, 1, 0)
# Collect the independent variables: binarize categorical variables and take numerical variables from select_data.
# Numerical Variables
X = pd.DataFrame(select_data[["priors_count", "two_year_recid"]])
# Binarize sex, age categories, race, and charge degree
X["sex"] = pd.get_dummies(select_data["sex"])["Female"]
X[["Greater than 45", "Less than 25"]] = pd.get_dummies(select_data["age_cat"])[["Greater than 45", "Less than 25"]]
X[["African-American", "Asian", "Hispanic", "Native American", "Other"]] = pd.get_dummies(select_data["race"])[["African-American", "Asian", "Hispanic", "Native American", "Other"]]
X["misdemeanor_charge"] = pd.get_dummies(select_data["c_charge_degree"])["M"]
# Create the model
model = LogisticRegression()
model.fit(X, y)
# -
# ### Question 1a (ii)
# Print out the coefficients paired with the corresponding feature names.
# Pair the coefficients with feature names
features = list(X.columns)
print(list(zip(features, model.coef_[0])))
# ### Question 1b
# What features are most predictive?
# *Student Written Answer Here*
# ### Question 1c
# Are Black defendants more likely to get a high risk score compared with white defendants? If so, by how much? Show your calculations.
intercept = model.intercept_[0]
control = np.exp(intercept)/(1 + np.exp(intercept))
black_coef = model.coef_[0][5]
np.exp(black_coef)/(1 - control + (control * np.exp(black_coef)))
# *Student Written Answer Here*
# ## Question 2. FPR and FNR: Does COMPAS overpredict or underpredict across groups?
#
# In order to answer this question and understand the ways in which bias is present in the risk scores, ProPublica used the False Positive Rate (FPR) and False Negative Rate (FNR) as their metrics to understand and quantify fairness.
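# As a quick sanity check on these definitions before implementing them, here they are worked by hand on a toy confusion matrix (the counts are made up for illustration): FPR = FP / (FP + TN) and FNR = FN / (FN + TP).

```python
# Hypothetical counts for one group of defendants.
tp, fp, fn, tn = 30, 20, 10, 40  # true/false positives and negatives

fpr_toy = fp / (fp + tn)  # share of non-recidivists flagged as high risk
fnr_toy = fn / (fn + tp)  # share of recidivists scored as low risk
```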
# ### Question 2a
# Complete the following functions to calculate the FPR and FNR. Afterwards, apply these functions to each racial subgroup: Black defendants and white defendants. Keep in mind that ProPublica defines a high score as anything above 4, and therefore a false positive would be a defendant with a high score who did not recidivate.
# +
def fpr(race_feature, data):
# Return the False Positive Rate of scores for the specified race_feature
subgroup = data[data["race"] == race_feature]
did_not_recidivate = subgroup[subgroup["two_year_recid"] == 0]
fp = did_not_recidivate[did_not_recidivate["decile_score"] > 4].shape[0]
tn = did_not_recidivate[did_not_recidivate["decile_score"] <= 4].shape[0]
return fp / (fp + tn)
def fnr(race_feature, data):
# Return the False Negative Rate of scores for the specified race_feature
subgroup = data[data["race"] == race_feature]
recidivated = subgroup[subgroup["two_year_recid"] == 1]
fn = recidivated[recidivated["decile_score"] <= 4].shape[0]
tp = recidivated[recidivated["decile_score"] > 4].shape[0]
return fn / (fn + tp)
# -
# Apply the metrics to the dataset
print("FPR for Black defendants:", fpr("African-American", select_data))
print("FPR for white defendants:", fpr("Caucasian", select_data))
print("FNR for Black defendants:", fnr("African-American", select_data))
print("FNR for white defendants:", fnr("Caucasian", select_data))
# ### Question 2b
# What can you conclude from these metrics about the overprediction of risk scores for Black and white defendants? By how much is the tool overpredicting? (Hint: Look at your calculations for the FPR.)
# Calculation of ratio for FPR
fpr("African-American", select_data) / fpr("Caucasian", select_data)
# *Student Written Answer Here*
# ### Question 2c
# What can you conclude from these metrics about the underprediction of risk scores for Black and white defendants? By how much is the tool underpredicting? (Hint: Look at your calculations for the FNR.)
# Calculation of ratio for FNR
fnr("Caucasian", select_data) / fnr("African-American", select_data)
# *Student Written Answer Here*
# ### Question 2d
# What is the importance of overprediction and underprediction in regard to ProPublica’s analysis? How might these observations have real impacts on the defendants who receive scores?
# *Student Written Answer Here*
# ## Question 3.
# ### Question 3a (i)
# Utilizing your answers from 1b and 2b, what problems does ProPublica highlight in the COMPAS algorithm?
# *Student Written Answer Here*
# ### Question 3a (ii)
# How would you describe ProPublica’s definition of fairness, after learning and utilizing the metrics they used?
# *Student Written Answer Here*
# ### Question 3b
# Why did ProPublica choose to investigate bias between races rather than a different sensitive feature? (Hint: think about how ProPublica’s conclusions reflect the racial disparities in our current criminal justice system.)
# *Student Written Answer Here*
# ### Question 3c
# What is ProPublica’s agenda as an investigative journalism organization? How do we see this in their analysis and conclusions?
# *Student Written Answer Here*
# We mentioned earlier that Northpointe disagreed with ProPublica's argument that the COMPAS algorithm is racially biased. Now that we’ve analyzed ProPublica’s perspective and seen the way in which they define and operationalize the concept of fairness, let’s move on to Northpointe’s.
# # Part 2. Northpointe's Perspective <a id="part-two"></a>
#
# ### Who is Northpointe?
#
# Northpointe (merged with two other companies to create *equivant* in 2017) is a for-profit computer software company that aims to advance justice by informing and instilling confidence in decision makers at every stage of the criminal justice system ([equivant](https://www.equivant.com/)). In addition to operating and continuing to develop COMPAS, *equivant* has developed a variety of technologies for use in court case management, attorney case management, inmate classification, and risk/needs assessment strategies.
#
# In the wake of criticism from ProPublica and other researchers alike, Northpointe produced a [detailed response](http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf) to ProPublica’s allegations, claiming that these critiques of their tool utilized the wrong type of classification statistics in their analysis and portrayed the tool incorrectly. The company provided their own analysis of the COMPAS algorithm by using different statistical methods and responding individually to each of ProPublica’s claims of racial bias against Black defendants.
#
# Upon examining their tool’s fairness through accuracy equity and predictive parity (which are metrics that were left out of ProPublica’s analysis), as well as the fact that the model was not trained with a race feature, Northpointe concluded that their algorithm treats all citizens and specified groups equally, and therefore does not exhibit signs of bias or inequality for specified groups. Now, let’s take a look at how Northpointe supported this argument.
# ## Question 4. Accuracy Equity: Is each group being discriminated against equally?
#
# Instead of analyzing and comparing the model errors FNR and FPR, Northpointe utilized the complement of FNR, known as the TPR (or what is often referred to as *Sensitivity*), paired with the FPR to prove what they refer to as ***Accuracy Equity*** through the use of a *ROC Curve*. Accuracy equity, according to [Northpointe](http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf), is exhibited in the model “if it can discriminate recidivists and nonrecidivists equally well for two different groups such as blacks and whites.” Recall that we use ROC curves and the *Area Under the Curve* to understand how much a model is capable of distinguishing between classes.
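# Since the COMPAS scores are binarized here, the resulting ROC "curve" has only a single interior point; a minimal sketch on toy labels (not the real data) shows how the trapezoidal AUC then reduces to (TPR + 1 - FPR) / 2:

```python
# Made-up true outcomes and thresholded predictions for illustration.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

tpr_pt = tp / (tp + fn)           # sensitivity at the single threshold
fpr_pt = fp / (fp + tn)           # 1 - specificity at the single threshold
auc_est = (tpr_pt + (1 - fpr_pt)) / 2  # area under the two-segment ROC curve
```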
# ### Question 4a
# Utilize the sklearn metrics package to calculate TPR and FPR, visualize the ROC curve for both white and Black defendants, and calculate the AUC for each curve.
# +
# Calculate FPR and FNR from metrics package - Black defendants
black_def = select_data[select_data["race"] == "African-American"]
# True binary values
y1 = black_def['two_year_recid']
# Predicted values from COMPAS tool
pred1 = black_def["decile_score"].replace([1, 2, 3, 4], 0).replace([5, 6, 7, 8, 9, 10], 1)
fpr_black, tpr_black, threshold = roc_curve(y1, pred1)
# Calculate FPR and FNR from metrics package - White defendants
white_def = select_data[select_data["race"] == "Caucasian"]
# True binary values
y2 = white_def['two_year_recid']
# Predicted values from COMPAS tool
pred2 = white_def["decile_score"].replace([1, 2, 3, 4], 0).replace([5, 6, 7, 8, 9, 10], 1)
fpr_white, tpr_white, threshold = roc_curve(y2, pred2)
# Plot the ROC
plt.subplots(1, figsize=(10,10))
plt.title('ROC Curves for Black and White Defendants')
plt.plot(fpr_black, tpr_black, label='Black defendants')
plt.plot(fpr_white, tpr_white, label='White defendants')
plt.plot([0, 1], ls="--")
plt.legend()
plt.ylabel('Sensitivity')
plt.xlabel('1 - Specificity')
plt.show()
# -
# Calculate the AUC
print("AUC for Black defendants:", roc_auc_score(y1, pred1))
print("AUC for white defendants:", roc_auc_score(y2, pred2))
# ### Question 4b (i)
# What do you notice from the ROC curve and the AUC calculation? List at least two general observations.
# *Student Written Answer Here*
# ### Question 4b (ii)
# What could Northpointe take away from this visualization to prove their point? Is accuracy equity being represented here? (Hint: Is each racial group being discriminated against equally?)
# *Student Written Answer Here*
# ## Question 5. Predictive Parity: Is the likelihood of recidivism equal across groups?
#
# In addition to the metric outlined above, Northpointe also utilized positive predictive values to explore the likelihood of defendants to reoffend, and to therefore prove that ***Predictive Parity*** is achieved. Predictive parity, according to [Northpointe](http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf), is exhibited in a model “if the classifier obtains similar predictive values for two different groups such as blacks and whites, for example, the probability of recidivating, given a high risk score, is similar for blacks and whites.” Let’s explore how they analyzed this.
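# Before filling in the functions, the two predictive values can be checked on made-up counts: PPV = TP / (TP + FP) is the probability of recidivating given a high score, and NPV = TN / (TN + FN) is the probability of not recidivating given a low score.

```python
# Hypothetical counts for one group of defendants.
tp, fp, fn, tn = 30, 20, 10, 40

ppv_toy = tp / (tp + fp)  # P(recidivated | scored high)
npv_toy = tn / (tn + fn)  # P(did not recidivate | scored low)
```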
# ### Question 5a
#
# Complete the following functions to calculate the positive predictive values and negative predictive values. Afterwards, apply these functions to the data of white defendants and the data of Black defendants.
# +
def ppv(race_feature, data):
# Return the Positive Predictive Value of scores for the specified race_feature
subgroup = data[data["race"] == race_feature]
recidivated = subgroup[subgroup["two_year_recid"] == 1]
did_not_recidivate = subgroup[subgroup["two_year_recid"] == 0]
fp = did_not_recidivate[did_not_recidivate["decile_score"] > 4].shape[0]
tp = recidivated[recidivated["decile_score"] > 4].shape[0]
return tp / (tp + fp)
def npv(race_feature, data):
# Return the Negative Predictive Value of scores for the specified race_feature
subgroup = data[data["race"] == race_feature]
recidivated = subgroup[subgroup["two_year_recid"] == 1]
did_not_recidivate = subgroup[subgroup["two_year_recid"] == 0]
fn = recidivated[recidivated["decile_score"] <= 4].shape[0]
tn = did_not_recidivate[did_not_recidivate["decile_score"] <= 4].shape[0]
return tn / (tn + fn)
# -
# Apply metrics to the dataset
print("PPV for Black defendants:", ppv("African-American", select_data))
print("NPV for Black defendants:", npv("African-American", select_data))
print("PPV for white defendants:", ppv("Caucasian", select_data))
print("NPV for white defendants:", npv("Caucasian", select_data))
# ### Question 5b
# Use the metrics you calculated above to fill in the table below.
# | | White | Black |
# |---:|:-------------|:-----------|
# | Labeled higher risk, but didn't reoffend | 41% | 35% |
# | Labeled lower risk, but did reoffend | 29% | 35% |
# High risk but did not re-offend - white
1 - ppv("Caucasian", select_data)
# Low risk but did re-offend - white
1 - npv("Caucasian", select_data)
# High risk but did not re-offend - Black
1 - ppv("African-American", select_data)
# Low risk but did re-offend - Black
1 - npv("African-American", select_data)
# ### Question 5c (i)
# What do you notice about the positive predictive values for each group? List at least one general observation.
# *Student Written Answer Here*
# ### Question 5c (ii)
# What could Northpointe conclude from these findings? Is predictive parity represented here? (Hint: Is the likelihood of recidivism relatively equal for each racial group?)
# *Student Written Answer Here*
# ## Question 6.
# ### Question 6a
# How would you describe Northpointe’s definition of fairness, after learning and utilizing the metrics they used? How is this different from your description of ProPublica’s definition from Q3aii?
# *Student Written Answer Here*
# ### Question 6b
#
# If anything, what are ProPublica and Northpointe each not considering in their definitions? (Hint: Think about other goodness metrics in ML, as well as your knowledge of the historical context of policing data)
# *Student Written Answer Here*
# So far, we’ve investigated ProPublica’s and Northpointe’s definitions of fairness. In the world of machine learning there are [many more](https://www.google.com/url?q=https://fairmlbook.org/tutorial2.html&sa=D&ust=1606727018134000&usg=AOvVaw06zU_fm8h7xp71d8igA8KI), so in the next section we will take a look at a third definition.
# # Part 3. Yet Another Definition of Fairness <a id="part-three"></a>
#
# In this section, you will go through yet another metric and definition used to evaluate fairness in machine learning: **disparate impact**. Disparate impact is a legal doctrine that determines if there is unintended discrimination towards a protected class ([Society for Human Resource Management](https://www.shrm.org/resourcesandtools/tools-and-samples/hr-qa/pages/disparateimpactdisparatetreatment.aspx)). In machine learning, disparate impact is a metric to evaluate fairness in a model. It is a form of bias within an algorithm that reflects systemic discrimination when a model’s outputs are dependent on a ***sensitive feature*** (the protected class). This is often considered unintentional (like the legal doctrine) due to the fact that the sensitive feature is omitted from the model, though it is still correlated with the output through proxy variables ([Wang et al.](https://arxiv.org/pdf/1801.05398.pdf#:~:text=Abstract%E2%80%94In%20the%20context%20of,e.g.%2C%20race%20or%20gender)).
#
# Not only will you evaluate the fairness of the tool (as Northpointe and ProPublica did) by measuring the bias reflected in the outputs of the model, but you will remove it to actually change those outputs and therefore eliminate the dependencies between the risk scores and the race feature. In order to computationally remove the disparate impact that we quantify, we can use tools like aif360’s [Disparate Impact Remover](https://aif360.readthedocs.io/en/latest/modules/generated/aif360.algorithms.preprocessing.DisparateImpactRemover.html). aif360 is a package created by IBM’s AI Research team, which contains a variety of tools to “help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle” ([AI Fairness 360](https://aif360.mybluemix.net/)).
# ## Question 7. Disparate Impact: Quantification and Removal
#
# First, let’s visualize the disparity that we would like to remove from the dataset. In order to do that we need to distinguish between a privileged group and an unprivileged group. In technical terms, the privileged group receives higher scores from a trained model, so the Black defendants will be considered "privileged" and the white defendants will be considered "unprivileged" in this case.
# ### Question 7a
#
# Use a histogram to plot the scores for Black defendants and the scores for white defendants. Visualize both histograms on the same plot.
unpriv = select_data[select_data["race"] == "Caucasian"]
priv = select_data[select_data["race"] == "African-American"]
sns.histplot(priv["decile_score"], kde=True, stat="density")
sns.histplot(unpriv["decile_score"], kde=True, stat="density")
# ### Question 7b
#
# What do you notice from the plot? (Hint: how do the distributions differ across racial groups with respect to mean, shape of distribution, etc.)
# *Student Written Answer Here*
# Now, we need to quantify the disparate impact we are seeing in the plot. In machine learning, we can understand disparate impact as the proportion of individuals that get positive outcomes (did they get a high score) for the two groups described above:
# \begin{align}
# \frac{\Pr(Y=1 \mid D=\text{Unprivileged})}{\Pr(Y=1 \mid D=\text{Privileged})}
# \end{align}
# In this equation Y is 1 if the defendant received a high score and 0 if they received a low score.
# ### Question 7c (i)
#
# Create a function to calculate the proportion of individuals from a specified racial group that get positive outcomes.
def proportion(data, group):
    # Returns the proportion of individuals in data from the specified group who received a positive outcome (a high score)
race_group = data[data["race"] == group]
positive_outcomes = race_group[race_group["decile_score"] > 4]
return len(positive_outcomes) / len(race_group)
# ### Question 7c (ii)
#
# Use this function to calculate the disparate impact, using the equation from above.
prob_priv = proportion(select_data, "African-American")
prob_unpriv = proportion(select_data, "Caucasian")
prob_unpriv / prob_priv
# If the proportion of unprivileged individuals receiving positive outcomes to privileged individuals receiving positive outcomes is less than 80%, there is a disparate impact violation. In order to stop a trained model from replicating these biases in its output, we can now use aif360’s Disparate Impact Remover to remove the bias we just calculated.
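# As a toy illustration of the four-fifths (80%) rule with made-up counts, suppose 35 of 100 unprivileged individuals and 55 of 100 privileged individuals receive the positive outcome:

```python
# Hypothetical positive-outcome rates for the two groups.
p_unpriv = 35 / 100
p_priv = 55 / 100

di = p_unpriv / p_priv        # disparate impact ratio, roughly 0.64
violates_80_rule = di < 0.8   # True: below the four-fifths threshold
```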
# ### Question 7d
#
# Create a Disparate Impact Remover type object and use the function fit_transform on our data. In order to do this you will need to first create a BinaryLabelDataset from our dataset to use for fit_transform. Check out the documentation [here](https://aif360.readthedocs.io/en/latest/modules/generated/aif360.datasets.BinaryLabelDataset.html#aif360.datasets.BinaryLabelDataset) to see how to implement this.
# +
# Create new DataFrame with just the necessary columns - Only numeric values for race, decile score, and two year recid
# The BinaryLabelDataset requires decile_score to be continuous, so add Gaussian noise
# to the decile_score column: noise = np.random.normal(0, 0.1, race_data.shape[0])
race_data = select_data[(select_data["race"] == "Caucasian") | (select_data["race"] == "African-American")]
race_col = pd.get_dummies(race_data, "race")["race_Caucasian"]
noise = np.random.normal(0, 0.1, race_data.shape[0])
decile_col = race_data["decile_score"] + noise
recid_col = race_data["two_year_recid"]
new_df = pd.DataFrame({"race": race_col, "decile_score": decile_col, "two_year_recid": recid_col})
# Create BinaryLabelDataset
BLD = BinaryLabelDataset(favorable_label=1, # Positive Outcome
unfavorable_label=0, # Negative Outcome
df=new_df,
label_names=["two_year_recid"],
protected_attribute_names=["race"],
unprivileged_protected_attributes=[1])
# -
remover = DisparateImpactRemover(repair_level=1.0, sensitive_attribute="race")
transformed_data = remover.fit_transform(BLD)
# ### Question 7e
#
# Similar to part a, use a histogram to plot the scores on the modified dataset. Afterwards, use the proportion function created above to calculate the disparate impact of the transformed dataset.
# +
# Transform output from DIRemover into usable DataFrame
transformed_df = pd.DataFrame(np.hstack([transformed_data.features, transformed_data.labels]),
columns=["race","decile_score","two_year_recid"])
unpriv_t = transformed_df[transformed_df["race"] == 1]
priv_t = transformed_df[transformed_df["race"] == 0]
sns.histplot(priv_t["decile_score"], kde=True, stat="density")
sns.histplot(unpriv_t["decile_score"], kde=True, stat="density")
# -
# Calculate disparate impact
prob_priv_t = proportion(transformed_df, 0)
prob_unpriv_t = proportion(transformed_df, 1)
prob_unpriv_t / prob_priv_t
# ### Question 7f
#
# What has changed from our original histogram? Please explain why this change has happened.
# *Student Written Answer Here*
# ### Question 7g
#
# How would you describe this third definition of fairness, after learning and utilizing these new metrics?
# *Student Written Answer Here*
# ### Question 7h
# How does this definition of fairness differ from ProPublica’s and Northpointe’s?
# *Student Written Answer Here*
# ## Question 8. Considering Expertise Outside of Data Science
#
# Just now, you used your technical data science skills to computationally remove bias from the data set. By removing bias, we’ve made the outputs of the algorithm statistically fair in regards to one definition of fairness. However, it is important to consider many types of knowledge and experiences beyond data science expertise when analyzing and creating an algorithm like COMPAS. As such, you will think through issues of expertise and fairness in the next set of questions.
# ### Question 8a
#
# Look back to your answer from Q0a. Now that you’ve gone through several definitions of fairness, how would you add to or revise your answer: Explain 3 parties that are impacted by the COMPAS tool. In what ways are they impacted?
# *Student Written Answer Here*
# ### Question 8b
#
# What expertise and lived experiences are necessary to understand and critically think about the issues produced by COMPAS?
# *Student Written Answer Here*
# ### Question 8c
#
# Why is this third definition of fairness still inadequate as a measurement of justice in the court system? (Hint: look at the previous two questions and answers).
# *Student Written Answer Here*
# # Part 4. Conclusion <a id="part-four"></a>
# ## Question 9. Which Definition Is Fair? And Who Decides?
#
# We’ve now gone through three definitions of fairness, each one with a different idea of how to operationalize fairness and to judge whether or not an algorithm is fair. As a data scientist, you may encounter situations where you will need to make decisions that affect real-world outcomes and people! Let’s try to do this for COMPAS.
# ### Question 9a
# If you had to decide between the three definitions of fairness above, which definition do you think would make “fair” decisions for everyone who goes through the court system? What values did you consider as you made this decision? If you cannot come to a decision, what challenges did you come across when considering this?
# *Student Written Answer Here*
# ### Question 9b
# Take a step back and think about how different actors who created, utilize, and are affected by COMPAS would consider which definition is most fair. Name two relevant actors, and discuss what they would value in *their own* definitions of fairness. Of the three definitions you have explored, which would they decide is most fair from the perspective of that actor? If you don’t think they’d choose any of them, explain why. (Examples of actors, which you’re welcome to use: judges, defendants, police, policy makers, community members)
# *Student Written Answer Here*
# Choosing one definition of fairness can be incredibly difficult when you need to consider all the actors at play. Throughout this module we have examined where and how the COMPAS algorithm is appropriate to use. It is also important to recognize the problems that are not solvable by an algorithm and think through how we can make the ecosystem that COMPAS is in (which includes but is not limited to the legal system, affected communities, the tech industry, etc.) more equitable.
# ### Question 9c
# What issues that are relevant to the COMPAS ecosystem but outside of the algorithm itself need to be addressed to be able to create a more equitable system, with or without the algorithm?
# *Student Written Answer Here*
# You’ve now begun to think through the very complex systems in which the COMPAS algorithm functions. **Congratulations!** Through considering a few of the differing definitions of fairness connected to COMPAS, hopefully you can begin to understand some of the human contexts of creating algorithms that intentionally affect people and their decision-making.
|
COMPAS/.ipynb_checkpoints/COMPAS Project-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1> Ordinary Least Squares Regression </h1>
#
# In this notebook we apply an OLS model to USD-EUR exchange rate data.
# +
import datetime as dt
import numpy as np
import pandas as pd
from pylab import mpl, plt
import pandas_datareader as pdr
from sklearn.linear_model import LinearRegression
# -
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
# %matplotlib inline
# %matplotlib notebook
# <h2> Data Preparation </h2>
# These are the parameters for our analysis. We define the start and end dates of the data we want to download, and the variable train_test_divide is the datetime that separates our training data (before this date) from our testing data (after this date). The parameters for our OLS model will be the returns from the two previous days, indicated by setting lags to 2.
start = dt.datetime(2010, 1, 4)
train_test_divide = dt.datetime(2017, 6, 29)
end = dt.datetime(2018, 6, 29)
lags = 2
# Here we load the data, calculate the returns, and also the direction (whether the returns are positive or negative on a given day).
data = pdr.data.DataReader('DEXUSEU', 'fred', start, end)
data.columns = ['EUR']
data['returns'] = np.log(data / data.shift(1))
data.dropna(inplace=True)
data['direction'] = np.sign(data.returns).astype(int)
print(data.head())
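# A brief aside on why log returns (rather than simple returns) are used here: they are additive across time, so the exponential of their sum recovers the gross return. This is what makes the `cumsum().apply(np.exp)` pattern used later in this notebook valid. A minimal sketch with made-up prices (not the FRED data):

```python
import numpy as np

# Made-up price path for illustration only.
prices = np.array([100.0, 102.0, 99.0, 101.0])
log_rets = np.log(prices[1:] / prices[:-1])

gross_from_logs = np.exp(log_rets.sum())  # exp of summed log returns
gross_direct = prices[-1] / prices[0]     # 101 / 100 = 1.01
print(np.isclose(gross_from_logs, gross_direct))  # True
```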
# Generate the lag data.
# +
cols = []
for lag in range(1, lags + 1):
col = f"lag_{lag}"
data[col] = data["returns"].shift(lag)
cols.append(col)
data.dropna(inplace=True)
print(data.head())
# -
# Plotting the one and two day lags against each other shows no particular pattern.
plt.figure(figsize=(10, 6))
plt.scatter(x=data[cols[0]], y=data[cols[1]], c=data['returns'], cmap="coolwarm")
plt.xlabel(cols[0])
plt.ylabel(cols[1])
# <h2> Regression </h2>
#
# Here we divide our data into training and testing samples.
train = data.loc[data.index < train_test_divide].copy()
test = data.loc[data.index >= train_test_divide].copy()
# We build one model to predict actual returns, and the other to predict the direction data.
returns_model = LinearRegression()
direction_model = LinearRegression()
returns_model.fit(train[cols], train['returns'])
direction_model.fit(train[cols], train['direction'])
# Make predictions with each model. We predict on the training data as a sanity check, and on the test data to actually verify that the model works.
# +
train['pos_ols_1'] = returns_model.predict(train[cols])
train['pos_ols_2'] = direction_model.predict(train[cols])
test['pos_ols_1'] = returns_model.predict(test[cols])
test['pos_ols_2'] = direction_model.predict(test[cols])
test.head()
# -
# Turn the model output into direction data.
train[['pos_ols_1', 'pos_ols_2']] = train[['pos_ols_1', 'pos_ols_2']].apply(np.sign).astype(int)
test[['pos_ols_1', 'pos_ols_2']] = test[['pos_ols_1', 'pos_ols_2']].apply(np.sign).astype(int)
test.head()
train['returns_ols_1'] = train['pos_ols_1'] * train['returns']
train['returns_ols_2'] = train['pos_ols_2'] * train['returns']
test['returns_ols_1'] = test['pos_ols_1'] * test['returns']
test['returns_ols_2'] = test['pos_ols_2'] * test['returns']
# Sum the returns for each model. On the training data both models outperform the long position, with ols_1 performing the best. But on the test data, both models are outperformed by simply holding the long position.
train[['returns', 'returns_ols_1', 'returns_ols_2']].sum().apply(np.exp)
test[['returns', 'returns_ols_1', 'returns_ols_2']].sum().apply(np.exp)
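# A complementary sanity check is the hit ratio: the fraction of days on which a model calls the direction of the return correctly. The sketch below runs on synthetic data (the `demo` frame and `hit_ratio` helper are illustrative names, not part of the original notebook), but the same function applies directly to the `train` and `test` frames built above:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the `test` frame: realized returns plus the
# +1/-1 positions produced by two hypothetical models.
rng = np.random.default_rng(0)
demo = pd.DataFrame({'returns': rng.normal(0, 0.005, 250)})
demo['pos_ols_1'] = rng.choice([-1, 1], size=250)          # a coin-flip model
demo['pos_ols_2'] = np.sign(demo['returns']).astype(int)   # a perfect predictor

def hit_ratio(df, pos_col):
    """Fraction of days where the position matches the sign of the return."""
    return (np.sign(df['returns']) == df[pos_col]).mean()

print(hit_ratio(demo, 'pos_ols_1'))  # close to 0.5 for the coin flip
print(hit_ratio(demo, 'pos_ols_2'))  # 1.0 by construction
```

# A hit ratio near 0.5 on the test set is consistent with the summed-returns result above: the fitted models carry little predictive signal out of sample.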
# View the cumulative returns of our model on the train and test set.
train[['returns', 'returns_ols_1', 'returns_ols_2']].cumsum().apply(np.exp).plot(figsize=(10, 6))
test[['returns', 'returns_ols_1', 'returns_ols_2']].cumsum().apply(np.exp).plot(figsize=(10, 6))
|
trading_strategies/OLS.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from dbfread import DBF
import pandas as pd
from pandas import *
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# import plotly.plotly as py
# import plotly.graph_objs as go
# # Notebook 4
# **In this notebook, we are going to explore some of the non-human factors that lead to accidents and fatalities.**
# Load 10 years of accident data from the data saved in Notebook 2
accidents = pd.read_hdf('results/accidents.h5', 'accidents')
vehicles = pd.read_hdf('results/vehicles.h5', 'vehicles')
person = pd.read_hdf('results/person.h5','person')
fatal_crashs = pd.merge(vehicles, accidents, left_index=True, right_index=True, how='inner',on=('STATE', 'YEAR','CASE_NUM'))
fatal_crashs_all = pd.merge(fatal_crashs, person, how='inner',on=('STATE', 'YEAR','CASE_NUM'))
fatal_crashs_all.head()
# # Weather
# **One of the most important contributing factors to accidents is weather.**
# **We want to explore the proportion of accidents that occur under each weather condition.**
#
accident_weather = fatal_crashs.groupby("WEATHER")["WEATHER"].count().reset_index(name="count").sort_values("count", ascending = False )
accident_weather = accident_weather.reset_index(drop=True)
accident_weather = accident_weather.rename(columns={"count" : "num of accident"})
accident_weather
# Let's generate a bar plot to visualize how many accidents happen under each weather condition
sns.barplot(y = 'WEATHER', x = 'num of accident', data = accident_weather)
plt.title("Number of Weather Related Accident", fontsize=18)
plt.xlabel("count")
plt.savefig("fig/Number of Weather Related Accident.png")  # save before show, or the saved figure is blank
plt.show()
weather_death = fatal_crashs_all.groupby(["WEATHER"])["DEATHS"].count().reset_index(name="count").sort_values("count", ascending = False )
weather_death = weather_death.reset_index(drop=True)
weather_death = weather_death.rename(columns={"count" : "num of death"})
weather_death
sns.barplot(y = 'WEATHER', x = 'num of death', data = weather_death)
plt.title("Number of Death in Weather Related Accident", fontsize=18)
plt.xlabel("count")
plt.savefig("fig/Number of Death in Weather Related Accident.png")  # save before show
plt.show()
# **As you can see from the previous two graphs, most accidents happen on clear days. My initial guess was that most accidents would happen on cloudy or rainy days, when visibility is worse, but this graph proved me wrong. In hindsight it makes sense, since in most places clear days are the most common. Counts for some of the other weather conditions are far too small to display on the graph; we will deal with this in the following part.**
#
# **Although this graph gives us some information, it doesn't address the focus of this project. We need further data exploration to analyze the relationship between deaths and weather-related accidents.**
#
weather_per_death = accident_weather.merge(weather_death,left_on='WEATHER', right_on='WEATHER', how='outer')
weather_per_death["death per accident ratio"] = weather_death["num of death"]/weather_per_death["num of accident"]
weather_per_death
sns.barplot(y = 'WEATHER', x = "death per accident ratio", data = weather_per_death)
plt.title("Number of Death Per Accident Ratio ", fontsize=18)
plt.xlabel("count")
plt.savefig("fig/Number of Death Per Accident Ratio.png")
plt.show()
# ***Now, this part answers the question raised above. As you can see from the graph, deaths per accident are highest in snow conditions, with blowing snow second highest. A snowy day usually brings the worst driving conditions: limited visibility, snow blindness, and loss of traction.***
#
# ***The third highest ratio in the graph is the fog, smog, and smoke condition. For obvious reasons, we often focus on the deadly potential of excessive rain, cold, or even wind, so it comes as a shock how deadly fog can be on the roads each year.***
#
# ***To relate this to daily life, fog, smog, and smoke conditions occur much more often than we think: driving on the Golden Gate Bridge or the Bay Bridge, the recent Napa and Los Angeles forest fires, and even driving home to north Berkeley at night. As a side note, it is best to take extra care when driving in any of these extreme weather conditions.***
#
# +
# a function hide any percentage that is less than 1 percent
def my_autopct(pct):
return ('%1.1f%%' % pct) if pct > 1 else ''
# hide any label with percentage that is less than 1 percent
label = []
summ = sum(accident_weather['num of accident'])
length = len(accident_weather['num of accident'])
for i in range(length):
if accident_weather['num of accident'][i]/summ >= 0.01:
label.append(accident_weather['WEATHER'][i])
else:
label.append('')
# -
# ***I created this function to hide any percentage below 1 percent. Otherwise, the plot would be crowded with numbers and it would be hard to extract useful information from it.***
fig = plt.figure(figsize=[10, 10])
ax = fig.add_subplot(111)
ax.pie(accident_weather['num of accident'],labels = label,autopct= my_autopct)
plt.title("Proportion of Weather Related Accidents", fontsize=18)
plt.savefig("fig/Proportion of Weather Related Accidents.png")
plt.show()
# ***This pie chart serves as a final visualization of weather-related accidents.***
# # Light Condition
# ***We also explore the relationship between light conditions and traffic accidents, as this is another non-human factor that causes accidents.***
light_accident = fatal_crashs_all.groupby("LIGHT_CONDITION")["LIGHT_CONDITION"].count().reset_index(name="count").sort_values("count", ascending = False )
light_accident = light_accident.reset_index(drop=True)
light_accident.head()
sns.barplot(x = 'LIGHT_CONDITION', y = 'count', data = light_accident)
plt.title("Accident under Light Condition", fontsize=18)
plt.xlabel("Light Condition")
plt.show()
# ***As you can see from the graph, daylight has the highest accident count, since in general people are mostly out during the day. Dark conditions have the second highest count.***
light_death = fatal_crashs_all.groupby(["LIGHT_CONDITION"])["DEATHS"].count().reset_index(name="count").sort_values("count", ascending = False )
light_death = light_death.reset_index(drop=True)
light_death = light_death.rename(columns={"count" : "num of death"})
light_death
light_per_death = light_accident.merge(light_death,left_on='LIGHT_CONDITION', right_on='LIGHT_CONDITION', how='outer')
light_per_death
#weather_per_death["death per accident ratio"] = weather_death["num of death"]/weather_per_death["num of accident"]
#weather_per_death
# # Intersection
# **We will explore how different intersection types lead to traffic accidents.**
#
# **Since this data has only been recorded since 2009, we need to create a new data frame covering 2010 to 2016.**
intersection=[]
for i in range(2010,2017):
intersection.append(DataFrame(iter(DBF('data/accident/accident{}.dbf'.format(i)))))
accidents_intersect =pd.concat(intersection, axis=0,join='outer')
accidents_intersect['TYP_INT'] = accidents_intersect['TYP_INT'].map({1:'Not an Intersection',2:'Four-Way Intersection',3:'T-Intersection',4:'Y-Intersection',5:'Traffic Circle',6:'Roundabout',7:"Five-Point, or More",10:"L-Intersection",98:"Not Reported",99:"Unknown"})
accidents_intersect.head()
accidents_intersect_count = accidents_intersect.groupby("TYP_INT")["TYP_INT"].count().reset_index(name="count").sort_values("count", ascending = False )
accidents_intersect_count = accidents_intersect_count.reset_index(drop=True)
accidents_intersect_count.head()
sns.barplot(data = accidents_intersect_count, y = 'TYP_INT', x = 'count')
plt.xlabel("count")
plt.ylabel("Type of Intersection")
plt.title('Type of Intersection vs. Accidents, 2010-2016', fontsize = 20)
plt.savefig("fig/Type of Intersection vs. Accidents, 2010-2016.png")
plt.show()
# **Conclusion**
#
# One of some potential further questions we could explore would be to taking all these factors into consideration and then explore the relationship with respect to death, accident, and time of accidents.
#
# This notebook examines a few non human factors that would lead to accidents. The finding and results would be later put into the final report.
|
Notebook4_Non_Human_factor_Accidents.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
import yaml
secrets = yaml.safe_load(open("secrets.yaml"))
headers = {"Authorization": "Bearer %s" % secrets["token"]}
r = requests.get("https://canvas.ubc.ca/api/v1/courses/", headers=headers)
# -
r.json()
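# The Canvas API paginates list endpoints such as `/api/v1/courses`: each response holds one page, and further pages are advertised through an RFC 5988 `Link` response header with `rel="next"` (the pagination behavior is a documented Canvas convention; the `next_page_url` helper and the example header below are our own sketch, not part of the Canvas client libraries):

```python
import re

def next_page_url(link_header):
    """Extract the rel="next" URL from an RFC 5988 Link header, or None.

    Minimal sketch, not a full Link-header parser; assumes URLs contain
    no commas, which holds for Canvas pagination links.
    """
    if not link_header:
        return None
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None

# Example header of the shape Canvas returns:
hdr = ('<https://canvas.ubc.ca/api/v1/courses?page=2&per_page=10>; rel="next", '
       '<https://canvas.ubc.ca/api/v1/courses?page=1&per_page=10>; rel="current"')
print(next_page_url(hdr))  # https://canvas.ubc.ca/api/v1/courses?page=2&per_page=10
```

# A fetch loop would then start from the first page URL, call `requests.get(url, headers=headers)`, accumulate `r.json()`, and set `url = next_page_url(r.headers.get("Link"))` until it returns `None`.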
|
Untitled1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Define libraries
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv1D, MaxPooling1D, BatchNormalization, Flatten
from sklearn.model_selection import KFold
from keras.utils import multi_gpu_model
#from sklearn.cross_validation import StratifiedKFold
from contextlib import redirect_stdout
from keras.utils import plot_model
from IPython.display import Image
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import auc
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.utils.vis_utils import plot_model
from IPython.display import SVG
import datetime
from keras.utils.vis_utils import model_to_dot
from keras.callbacks import EarlyStopping, ModelCheckpoint
gpu_options = tf.GPUOptions(allow_growth=True)
sess =tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
tf.keras.backend.set_session(sess)
NBname='_12fPANet'
# %matplotlib inline
# =======
# 441PANet2
# np.random.seed(100)
# kernel_len 25
# half (3,6, 9, 12, 15)
# decay=0.0000125
# dropout 0.25
# # # diff b/w 441PANet2 & 10p121PANet2
# FC 2x12
# patience 10
# epochs 50
# lr=0.00000625
# # # diff b/w 10p121PANet2 & 5m_12FC
# lr=0.00000625*5 (0.00003125)
# =======
# +
SMALL_SIZE = 10
MEDIUM_SIZE = 15
BIGGER_SIZE = 18
# font = {'family' : 'monospace',
# 'weight' : 'bold',
# 'size' : 'larger'}
#plt.rc('font', **font) # pass in the font dict as kwargs
plt.rc('font', size=MEDIUM_SIZE,family='normal',weight='normal') # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE,) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE,) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE,titleweight='bold') # fontsize of the figure title
#plt.rc('xtick', labelsize=15)
#plt.rc('ytick', labelsize=15)
# -
print(str(datetime.datetime.now()))
def plot_perform1(mod, metric, last,ttl):
plt.figure(figsize=(11,11))
name='final'
plt.plot(mod.epoch, mod.history[metric], label=name.title()+'_Train',linewidth=1.5)
plt.xlabel('Epochs')
    plt.ylabel(metric.replace('_',' ').title())
plt.title(ttl)
plt.legend(loc='best')
plt.xlim([0,max(mod.epoch)])
figname=metric+last+'.png'
plt.savefig(figname,dpi=500)
# def save_models(mod, last):
# for i in range(len(mod)):
# name=str(i+1)+last
# mod[i].model.save(name)
# +
# def plot_perform2(mod, metric, last,ttl):
# #plt.figure(figsize=(13,13))
# plt.figure(figsize=(11,11))
# for i in range(len(mod)):
# name=str(i+1)
# val = plt.plot(mod[i].epoch, mod[i].history['val_'+metric],
# '--', label=name.title()+'_Val',linewidth=1.5)
# plt.plot(mod[i].epoch, mod[i].history[metric],
# color=val[0].get_color(), label=name.title()+'_Train',linewidth=1.2)
# plt.xlabel('Epochs')
# plt.ylabel(metric.replace('_',' ').title())
# plt.ylabel(metric.title())
# plt.title(ttl)
# plt.legend(loc='best')
# plt.xlim([0,max(mod[i].epoch)])
# figname=metric+last+'.png'
# plt.savefig(figname,dpi=500)
# +
def create_model0(shape1):
model0 = Sequential()
model0.add(Conv1D(3, 25, strides=1,padding='same',activation='relu', batch_input_shape=(None,shape1,1)))
model0.add(BatchNormalization())
model0.add(Conv1D(3, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(6, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(6, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(9, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(9, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(12, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(12, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Conv1D(15, 25, strides=1,padding='same',activation='relu'))
model0.add(BatchNormalization())
model0.add(Conv1D(15, 25, strides=1,padding='same',activation='relu'))
model0.add(MaxPooling1D(2))
model0.add(Flatten())
model0.add(Dense(12, activation='relu'))
model0.add(Dense(12, activation='relu'))
#model0.add(Dense(8, activation='relu'))
model0.add(Dropout(0.25))
model0.add(Dense(2, activation='softmax'))
return model0
# +
# %%time
batch_size = 10
N_epochs = 12
N_folds=4
np.random.seed(100)
kf = KFold(n_splits=N_folds, shuffle=False)
# fmd='train_x.npy'
# fld='train_y.npy'
# data=np.load(os.path.abspath(fmd))
# dlabels=np.load(os.path.abspath(fld))
rm='res_x.npy'
rl='res_y.npy'
rdata=np.load(os.path.abspath(rm))
rlabels=np.load(os.path.abspath(rl))
sm='sen_x.npy'
sl='sen_y.npy'
sdata=np.load(os.path.abspath(sm))
slabels=np.load(os.path.abspath(sl))
fmtim='testim_x.npy'
fltim='testim_y.npy'
testim=np.load(os.path.abspath(fmtim))
tlabelsim=np.load(os.path.abspath(fltim))
fmtb='testb_x.npy'
fltb='testb_y.npy'
testb=np.load(os.path.abspath(fmtb))
tlabelsb=np.load(os.path.abspath(fltb))
# =================
# Do once!
# =================
sen_batch = np.random.RandomState(seed=45).permutation(sdata.shape[0])
bins = np.linspace(0, 200, 41)
digitized = np.digitize(sen_batch, bins,right=False)
# ================
# ===============================
# # FINAL TRAIN
# ===============================
train_idx_k=np.random.permutation(rdata.shape[0])
s_x=sdata[np.isin(digitized,train_idx_k+1)]
s_y=slabels[np.isin(digitized,train_idx_k+1)]
r_x=np.concatenate((rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k]))
r_y=np.concatenate((rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k]))
f_train_x, f_train_y = np.concatenate((s_x,r_x)), np.concatenate((s_y,r_y))
train_shuf_idx = np.random.permutation(f_train_x.shape[0])
x_train, y_train = f_train_x[train_shuf_idx], f_train_y[train_shuf_idx]
model0 = create_model0(rdata.shape[1])
model0.compile(optimizer=keras.optimizers.Adamax(lr=0.00003125, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0000125),
loss='categorical_crossentropy',
metrics=['accuracy','categorical_crossentropy'])
fmodel=model0.fit(x_train, y_train, epochs=N_epochs, batch_size=batch_size, verbose=2)
# =======================
# # ONLY FOR CROSS-VAL
# =======================
# i=0
# adamax=[]
# callbacks = [EarlyStopping(monitor='val_loss', patience=10),
# ModelCheckpoint(filepath='best_model'+NBname+'.h5', monitor='val_loss', save_best_only=True)]
# for train_idx_k, val_idx_k in kf.split(rdata):
# print ("Running Fold", i+1, "/", N_folds)
# # ===============================
# # select train
# # ===============================
# s_train_x=sdata[np.isin(digitized,train_idx_k+1)]
# s_train_y=slabels[np.isin(digitized,train_idx_k+1)]
# r_train_x=np.concatenate((rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k],rdata[train_idx_k]))
# r_train_y=np.concatenate((rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k],rlabels[train_idx_k]))
# # ===============================
# # select val
# # ===============================
# s_val_x=sdata[np.isin(digitized,val_idx_k+1)]
# s_val_y=slabels[np.isin(digitized,val_idx_k+1)]
# r_val_x=np.concatenate((rdata[val_idx_k],rdata[val_idx_k],rdata[val_idx_k],rdata[val_idx_k],rdata[val_idx_k]))
# r_val_y=np.concatenate((rlabels[val_idx_k],rlabels[val_idx_k],rlabels[val_idx_k],rlabels[val_idx_k],rlabels[val_idx_k]))
# # ===============================
# # concatenate F_train/val_x/y
# # ===============================
# f_train_x, f_train_y = np.concatenate((s_train_x,r_train_x)), np.concatenate((s_train_y,r_train_y))
# # train_shuf_idx = np.random.permutation(f_train_x.shape[0])
# # F_train_x, F_train_y = f_train_x[train_shuf_idx], f_train_y[train_shuf_idx]
# f_val_x, f_val_y = np.concatenate((s_val_x,r_val_x)), np.concatenate((s_val_y,r_val_y))
# # val_shuf_idx = np.random.permutation(f_val_x.shape[0])
# # F_val_x, F_val_y = f_val_x[val_shuf_idx], f_val_y[val_shuf_idx]
# # ===============================
# # shuffle just because we can?
# # ===============================
# train_shuf_idx = np.random.permutation(f_train_x.shape[0])
# x_train_CV, y_train_CV = f_train_x[train_shuf_idx], f_train_y[train_shuf_idx]
# val_shuf_idx = np.random.permutation(f_val_x.shape[0])
# x_val_CV, y_val_CV = f_val_x[val_shuf_idx], f_val_y[val_shuf_idx]
# # ===============================
# # clear and create empty model
# # ===============================
# model0 = None # Clearing the NN.
# model0 = create_model0(rdata.shape[1])
# # x_train_CV, y_train_CV, = data[train_idx_k], dlabels[train_idx_k]
# # x_val_CV, y_val_CV, = data[val_idx_k], dlabels[val_idx_k]
# # parallel_model = None
# # parallel_model = multi_gpu_model(model0, gpus=2)
# # #default
# # #parallel_model.compile(optimizer=keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0),
# # parallel_model.compile(optimizer=keras.optimizers.Adamax(lr=0.004, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.005),
# # loss='categorical_crossentropy',
# # metrics=['accuracy','categorical_crossentropy'])
# # model0_adamax = parallel_model.fit(x_train_CV, y_train_CV,
# # epochs=N_epochs,
# # batch_size=batch_size,
# # validation_data=(x_val_CV,y_val_CV),
# # verbose=1)
# #default
# #parallel_model.compile(optimizer=keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0),
# model0.compile(optimizer=keras.optimizers.Adamax(lr=0.00003125, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0000125),
# loss='categorical_crossentropy',
# metrics=['accuracy','categorical_crossentropy'])
# model0_adamax = model0.fit(x_train_CV, y_train_CV,
# epochs=N_epochs,
# batch_size=batch_size,
# validation_data=(x_val_CV,y_val_CV),
# verbose=2,callbacks=callbacks)
# adamax.append(model0_adamax)
# i=i+1
# -
mname='final'+NBname+'.h5'
fmodel.model.save(mname)
plot_perform1(fmodel,'acc',NBname,'CV:Performance-I')
plot_perform1(fmodel,'loss',NBname,'CV:Performance-II')
with open('summary'+NBname+'.txt', 'w') as f:
with redirect_stdout(f):
fmodel.model.summary()
fmodel.model.summary()
print(str(datetime.datetime.now()))
# +
# testim=np.load(os.path.abspath(fmtim))
# tlabelsim=np.load(os.path.abspath(fltim))
# testb=np.load(os.path.abspath(fmtb))
# tlabelsb=np.load(os.path.abspath(fltb))
# +
# ==========================================================================
# # DO NOT UNCOMMENT UNTIL THE END; DECLARES FUNCTION FOR AN UNBIASED TEST
# ==========================================================================
def plot_auc(aucies,fprs,tprs, last):
#plt.figure(figsize=(13,13))
plt.figure(figsize=(11,11))
plt.plot([0, 1], [0, 1], 'k--')
for i in range(len(aucies)):
st='CV_'+str(i+1)+' '
if i==0:
st='Balanced'
else:
st='Imbalanced'
plt.plot(fprs[i], tprs[i], label='{} (AUC= {:.3f})'.format(st,aucies[i]),linewidth=1.5)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
figname='ROC'+last+'.png'
plt.savefig(figname,dpi=500)
# +
# ==========================================================================
# # THIS IS THE FINAL UNBIASED TEST; DO NOT UNCOMMENT UNTIL THE END
# ==========================================================================
fpr_x=[]
tpr_x=[]
thresholds_x=[]
auc_x=[]
pre_S=[]
rec_S=[]
f1_S=[]
kap_S=[]
acc_S=[]
mat_S=[]
# +
NBname='_12fPANetb'
y_predb = fmodel.model.predict(testb)#.ravel()
fpr_0, tpr_0, thresholds_0 = roc_curve(tlabelsb[:,1], y_predb[:,1])
fpr_x.append(fpr_0)
tpr_x.append(tpr_0)
thresholds_x.append(thresholds_0)
auc_x.append(auc(fpr_0, tpr_0))
# predict probabilities for testb set
yhat_probs = fmodel.model.predict(testb, verbose=0)
# predict crisp classes for testb set
yhat_classes = fmodel.model.predict_classes(testb, verbose=0)
# reduce to 1d array
testby=tlabelsb[:,1]
#testby1=tlabels[:,1]
#yhat_probs = yhat_probs[:, 0]
#yhat_classes = yhat_classes[:, 0]
# accuracy: (tp + tn) / (p + n)
acc_S.append(accuracy_score(testby, yhat_classes))
#print('Accuracy: %f' % accuracy_score(testby, yhat_classes))
#precision tp / (tp + fp)
pre_S.append(precision_score(testby, yhat_classes))
#print('Precision: %f' % precision_score(testby, yhat_classes))
#recall: tp / (tp + fn)
rec_S.append(recall_score(testby, yhat_classes))
#print('Recall: %f' % recall_score(testby, yhat_classes))
# f1: 2 tp / (2 tp + fp + fn)
f1_S.append(f1_score(testby, yhat_classes))
#print('F1 score: %f' % f1_score(testby, yhat_classes))
# kappa
kap_S.append(cohen_kappa_score(testby, yhat_classes))
#print('Cohens kappa: %f' % cohen_kappa_score(testby, yhat_classes))
# confusion matrix
mat_S.append(confusion_matrix(testby, yhat_classes))
#print(confusion_matrix(testby, yhat_classes))
with open('perform'+NBname+'.txt', "w") as f:
f.writelines("AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
for x in range(len(fpr_x)):
f.writelines(map("{}\n".format, mat_S[x]))
f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# ==========================================================================
# # THIS IS THE FINAL UNBIASED testb EVALUATION; DO NOT UNCOMMENT UNTIL THE END
# ==========================================================================
plot_auc(auc_x,fpr_x,tpr_x,NBname)
# -
# +
NBname='_12fPANetim'
y_pred = fmodel.model.predict(testim)#.ravel()
fpr_0, tpr_0, thresholds_0 = roc_curve(tlabelsim[:,1], y_pred[:,1])
fpr_x.append(fpr_0)
tpr_x.append(tpr_0)
thresholds_x.append(thresholds_0)
auc_x.append(auc(fpr_0, tpr_0))
# predict probabilities for testim set
yhat_probs = fmodel.model.predict(testim, verbose=0)
# predict crisp classes for testim set
yhat_classes = fmodel.model.predict_classes(testim, verbose=0)
# reduce to 1d array
testimy=tlabelsim[:,1]
#testimy1=tlabels[:,1]
#yhat_probs = yhat_probs[:, 0]
#yhat_classes = yhat_classes[:, 0]
# accuracy: (tp + tn) / (p + n)
acc_S.append(accuracy_score(testimy, yhat_classes))
#print('Accuracy: %f' % accuracy_score(testimy, yhat_classes))
#precision tp / (tp + fp)
pre_S.append(precision_score(testimy, yhat_classes))
#print('Precision: %f' % precision_score(testimy, yhat_classes))
#recall: tp / (tp + fn)
rec_S.append(recall_score(testimy, yhat_classes))
#print('Recall: %f' % recall_score(testimy, yhat_classes))
# f1: 2 tp / (2 tp + fp + fn)
f1_S.append(f1_score(testimy, yhat_classes))
#print('F1 score: %f' % f1_score(testimy, yhat_classes))
# kappa
kap_S.append(cohen_kappa_score(testimy, yhat_classes))
#print('Cohens kappa: %f' % cohen_kappa_score(testimy, yhat_classes))
# confusion matrix
mat_S.append(confusion_matrix(testimy, yhat_classes))
#print(confusion_matrix(testimy, yhat_classes))
with open('perform'+NBname+'.txt', "w") as f:
f.writelines("AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
for x in range(len(fpr_x)):
f.writelines(map("{}\n".format, mat_S[x]))
f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# ==========================================================================
# # THIS IS THE FINAL UNBIASED testim EVALUATION; DO NOT UNCOMMENT UNTIL THE END
# ==========================================================================
plot_auc(auc_x,fpr_x,tpr_x,NBname)
# -
# +
#model = load_model('final_fPANet.h5')
# +
# produces an extremely tall PNG that doesn't really fit on a screen
# plot_model(model0, to_file='model'+NBname+'.png', show_shapes=True,show_layer_names=False)
# +
# produces a low-quality SVG object; leave commented unless desperate
# SVG(model_to_dot(model0, show_shapes=True,show_layer_names=False).create(prog='dot', format='svg'))
# -
# +
# # =====================================
# # Legacy block, life saver truly
# # =====================================
# # sdata.shape
# # (200, 1152012, 1)
# print('\n')
# sen_batch = np.random.RandomState(seed=45).permutation(sdata.shape[0])
# print(sen_batch)
# print('\n')
# bins = np.linspace(0, 200, 41)
# print(bins.shape)
# print(bins)
# print('\n')
# digitized = np.digitize(sen_batch, bins,right=False)
# print(digitized.shape)
# print(digitized)
# # #instead of 10, run counter
# # print(np.where(digitized==10))
# # print(sdata[np.where(digitized==10)].shape)
# # # (array([ 0, 96, 101, 159, 183]),)
# # # (5, 1152012, 1)
# # dig_sort=digitized
# # dig_sort.sort()
# # # print(dig_sort)
# # # [ 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5
# # # 5 6 6 6 6 6 7 7 7 7 7 8 8 8 8 8 9 9 9 9 9 10 10 10
# # # 10 10 11 11 11 11 11 12 12 12 12 12 13 13 13 13 13 14 14 14 14 14 15 15
# # # 15 15 15 16 16 16 16 16 17 17 17 17 17 18 18 18 18 18 19 19 19 19 19 20
# # # 20 20 20 20 21 21 21 21 21 22 22 22 22 22 23 23 23 23 23 24 24 24 24 24
# # # 25 25 25 25 25 26 26 26 26 26 27 27 27 27 27 28 28 28 28 28 29 29 29 29
# # # 29 30 30 30 30 30 31 31 31 31 31 32 32 32 32 32 33 33 33 33 33 34 34 34
# # # 34 34 35 35 35 35 35 36 36 36 36 36 37 37 37 37 37 38 38 38 38 38 39 39
# # # 39 39 39 40 40 40 40 40]
# # print(val_idx_k)
# # # array([ 2, 3, 8, 10, 14, 15, 23, 24, 30, 32])
# # print(val_idx_k+1)
# # # array([ 3, 4, 9, 11, 15, 16, 24, 25, 31, 33])
# # print('\n')
# # print(sdata[np.isin(digitized,train_idx_k+1)].shape)
# # # (150, 1152012, 1)
# # print(sdata[np.isin(digitized,val_idx_k+1)].shape)
# # # (50, 1152012, 1)
# -
# plt.figure(figsize=(16,10))
# plt.plot([0, 1], [0, 1], 'k--')
# plt.plot(fpr_x[0], tpr_x[0], label='CV1 (area= {:.3f})'.format(auc_x[0]))
# plt.plot(fpr_x[1], tpr_x[1], label='CV2 (area= {:.3f})'.format(auc_x[1]))
# plt.plot(fpr_x[2], tpr_x[2], label='CV3 (area= {:.3f})'.format(auc_x[2]))
# plt.xlabel('False positive rate')
# plt.ylabel('True positive rate')
# plt.title('ROC curve')
# plt.legend(loc='best')
# figname='model0_011GWAS'+'_ROC.png'
# plt.savefig(figname,dpi=400)
# As indexing starts from 0, this is changed from the general form
# [(M*(k-i)):(M*k-1)]
for train_idx,val_idx in kf.split(rdata):
print(train_idx)
print(5*train_idx)
print(5*train_idx+4)
print('\n')
print(val_idx)
print(5*val_idx)
print(5*val_idx+4)
print('\n \n')
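The loop above depends on `kf` and `rdata` defined in earlier cells; a self-contained sketch of the same index arithmetic (sizes made up, assuming `kf` behaves like scikit-learn's `KFold` without shuffling):

```python
def kfold_indices(n, n_splits):
    """Yield (train_idx, val_idx) over range(n), like an unshuffled KFold."""
    fold = n // n_splits
    for k in range(n_splits):
        val = list(range(k * fold, (k + 1) * fold))
        train = [i for i in range(n) if i not in val]
        yield train, val

# each sample index i covers raw rows 5*i .. 5*i+4 (0-indexed),
# which is what the 5*idx and 5*idx+4 prints above compute
for train_idx, val_idx in kfold_indices(8, 4):
    print(val_idx, [(5 * i, 5 * i + 4) for i in val_idx])
```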
# +
# plot_perform([#('1_nadam', nadam[0]),
# ('1_adamax', adamax[0]),
# #('2_nadam', nadam[1]),
# ('2_adamax', adamax[1]),
# #('3_nadam', nadam[2]),
# ('3_adamax', adamax[2])],
# #('3_nadam', nadam[2]),
# #('4_adamax', adamax[3]),
# #('3_nadam', nadam[2]),
# #('5_adamax', adamax[4])],
# 'acc','model0_011GWAS')
# +
# plot_perform([#('1_nadam', nadam[0]),
# ('1_adamax', adamax[0]),
# #('2_nadam', nadam[1]),
# ('2_adamax', adamax[1]),
# #('3_nadam', nadam[2]),
# ('3_adamax', adamax[2])],
# #('3_nadam', nadam[2]),
# #('4_adamax', adamax[3]),
# #('3_nadam', nadam[2]),
# #('5_adamax', adamax[4])],
# 'loss','model0_011GWAS')
# +
# adamax[0].model.save('adamax_1_011GWAS')
# adamax[1].model.save('adamax_2_011GWAS')
# adamax[2].model.save('adamax_3_011GWAS')
# # adamax[3].model.save('adamax_4_011GWAS')
# # adamax[4].model.save('adamax_5_011GWAS')
# +
# # plot_perform([#('1_nadam', nadam[0]),
# # ('1_adamax', adamax[0]),
# # #('2_nadam', nadam[1]),
# # ('2_adamax', adamax[1]),
# # #('3_nadam', nadam[2]),
# # ('3_adamax', adamax[2])],
# # #('3_nadam', nadam[2]),
# # #('4_adamax', adamax[3]),
# # #('3_nadam', nadam[2]),
# # #('5_adamax', adamax[4])],
# # 'acc','model0_011GWAS')
# def plot_perform(histories, metric,initial):
# plt.figure(figsize=(16,10))
# for name, history in histories:
# val = plt.plot(history.epoch, history.history['val_'+metric],
# '--', label=name.title()+' Val')
# #print(val) [<matplotlib.lines.Line2D object at 0x7fbb1899a940>]
# #print(val[0]) Line2D(Baseline Val)
# #print(val[0].get_color()) #1f77b4
# plt.plot(history.epoch, history.history[metric],
# color=val[0].get_color(), label=name.title()+' Train')
# plt.xlabel('Epochs')
# plt.ylabel(metric.replace('_',' ').title())
# plt.ylabel(metric.title())
# plt.legend()
# plt.xlim([0,max(history.epoch)])
# figname=initial+"_"+metric+".png"
# plt.savefig(figname,dpi=400)
|
11Oct/12fPANet.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sympy import *
from IPython.display import display
# %matplotlib inline
init_printing(use_latex=True)
# # Rayleigh Quotient MarkII
#
# We want to mix the last two functions we saw in the exercise, the shape associated with a load applied to the tip and the shape associated with a uniform distributed load.
#
# We start by defining a number of variables that point to `Symbol` objects,
z, h , r0, dr, t, E, rho, zeta = symbols('z H r_0 Delta t E rho zeta')
# We define the tip-load function starting from the expression of the bending moment, just a linear function that is 0 for $z=H$... we integrate twice and get the displacements, bar the constants of integration, which both happen to be zero because of the clamped end at $z=0$, implying that $\psi_1(0)=0$ and $\psi'_1(0)=0$
f12 = h-z
f11 = integrate(f12,z)
f10 = integrate(f11,z)
# No scaling is in place yet... we scale our function correctly by evaluating it at $z=H$
scale_factor = f10.subs(z,h)
# Dividing our shape function (and its derivatives) by this particular scale factor we have, of course, a unit value of the tip displacement.
f10 /= scale_factor
f11 /= scale_factor
f12 /= scale_factor
f10, f11, f12, f10.subs(z,h)
# We repeat the same procedure to compute the shape function for a constant distributed load, here the constraint on the bending moment is that both the moment and the shear are zero for $z=H$, so the non-normalized expression for $M_b\propto \psi_2''$ is
f22 = h*h/2 - h*z + z*z/2
# The rest of the derivation is the same
f21 = integrate(f22,z)
f20 = integrate(f21,z)
scale_factor = f20.subs(z,h)
f20 /= scale_factor
f21 /= scale_factor
f22 /= scale_factor
f20, f21, f22, f20.subs(z,h)
# To combine the two shapes in the _right_ way we write
#
# $$\psi = \alpha\,\psi_1+(1-\alpha)\,\psi_2$$
#
# so that $\psi(H)=1$, note that the shape function depends on one parameter, $\alpha$, and we can minimize the Rayleigh Quotient with respect to $\alpha$.
a = symbols('alpha')
f0 = a*f10 + (1-a)*f20
f2 = diff(f0,z,2)
f0.expand().collect(z), f2.expand().collect(z), f0.subs(z,h)
# Working with symbols we don't need to formally define a Python function, it suffices to bind a name to a symbolic expression. That's done for the different variable quantities that model our problem and using these named expressions we can compute the denominator and the numerator of the Rayleigh Quotient.
re = r0 - dr * z/h
ri = re - t
A = pi*(re**2-ri**2)
J = pi*(re**4-ri**4)/4
fm = rho*A*f0**2
fs = E*J*f2**2
mstar = 80000+integrate(fm,(z,0,h))
kstar = integrate(fs,(z,0,h))
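For reference, the quantity minimized below is the Rayleigh Quotient assembled from `kstar` and `mstar`, reading the constant `80000` in `mstar` as a lumped tip mass (an assumption):

```latex
\omega^2 \le R(\alpha) = \frac{k^\star}{m^\star}
  = \frac{\int_0^H E\,J(z)\,\psi''(z;\alpha)^2\,\mathrm{d}z}
         {M_\mathrm{tip} + \int_0^H \rho\,A(z)\,\psi(z;\alpha)^2\,\mathrm{d}z},
  \qquad M_\mathrm{tip} = 80000 .
```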
# Our problem is characterized by a set of numerical values for the different basic variables:
# +
values = {E:30000000000,
h:32,
rho:2500,
t:Rational(1,4),
r0:Rational(18,10),
dr:Rational(6,10)}
values
# -
# We can substitute these values in the numerator and denominator of the RQ
display(mstar.subs(values))
display(kstar.subs(values))
# Let's look at the RQ as a function of $\alpha$, with successive refinements
rq = (kstar/mstar).subs(values)
plot(rq, (a,-3,3));
plot(rq, (a,1,3));
plot(rq, (a,1.5,2.0));
# Here we do the following:
#
# 1. Derive the RQ and obtain a numerical function (rather than a symbolic expression) using the `lambdify` function.
# 2. Using a root finder function (here `bisect` from the `scipy.optimize` collection) we find the location of the minimum of RQ.
# 3. Display the location of the minimum.
# 4. Display the shape function as a function of $\zeta=z/H$.
# 5. Display the minimum value of RQ.
#
# Note that the eigenvalue we have previously found, for $\psi\propto1-\cos\zeta\pi/2$ was $\omega^2= 66.259\,(\text{rad/s})^2$
rqdiff = lambdify(a, rq.diff(a))
from scipy.optimize import bisect
a_0 = bisect(rqdiff, 1.6, 1.9)
display(a_0)
display(f0.expand().subs(a,a_0).subs(z,zeta*h))
rq.subs(a,a_0).evalf()
# Oh, we have (re)discovered the Ritz method! And we have the best solution so far...
# usual incantation
from IPython.display import HTML
HTML(open('00_custom.css').read())
|
dati_2015/ha03/04_Rayleigh_MkII.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2D Advection-Diffusion equation
# In this notebook we provide a simple example of the DeepMoD algorithm, applied to the 2D advection-diffusion equation.
# +
# General imports
import numpy as np
import torch
import matplotlib.pylab as plt
# DeepMoD functions
from deepymod import DeepMoD
from deepymod.model.func_approx import NN, Siren
from deepymod.model.library import Library2D_third
from deepymod.model.constraint import LeastSquares
from deepymod.model.sparse_estimators import Threshold,PDEFIND
from deepymod.training import train
from deepymod.training.sparsity_scheduler import TrainTestPeriodic
from scipy.io import loadmat
# Settings for reproducibility
np.random.seed(42)
torch.manual_seed(0)
if torch.cuda.is_available():
device = 'cuda'
else:
device = 'cpu'
# %load_ext autoreload
# %autoreload 2
# -
# ## Prepare the data
# Next, we prepare the dataset.
data_pre = np.load('diffusion_advection_29longb.npy').T
data= data_pre[120:-60,:,10:30]
down_data= np.take(np.take(np.take(data,np.arange(0,data.shape[0],5),axis=0),np.arange(0,data.shape[1],5),axis=1),np.arange(0,data.shape[2],1),axis=2)
down_data.shape
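The nested `np.take` calls above select every 5th index along the first two axes (and everything along the third); an equivalent, more readable slicing sketch on a stand-in array:

```python
import numpy as np

data = np.arange(20 * 10 * 20).reshape(20, 10, 20)  # stand-in for the cropped data
down = data[::5, ::5, :]  # every 5th row, every 5th column, all time steps
print(down.shape)  # (4, 2, 20)
```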
steps = down_data.shape[2]
width = down_data.shape[0]
width_2 = down_data.shape[1]
plt.plot(np.sum(np.sum(data_pre,axis=1),axis=0))
x_arr = np.arange(0,width)
y_arr = np.arange(0,width_2)
t_arr = np.arange(0,steps)
x_grid, y_grid, t_grid = np.meshgrid(x_arr, y_arr, t_arr, indexing='ij')
X = np.transpose((t_grid.flatten(), x_grid.flatten(), y_grid.flatten()))
plt.imshow(down_data[:,:,1])
# Next we plot the dataset for three different time-points
# We flatten it to give it the right dimensions for feeding it to the network:
# +
X = np.transpose((t_grid.flatten()/10, x_grid.flatten()/np.max(y_grid), y_grid.flatten()/np.max(y_grid)))
#X = np.transpose((t_grid.flatten(), x_grid.flatten(), y_grid.flatten()))
y = np.float32(down_data.reshape((down_data.size, 1)))
y = 4*y/np.max(y)
# -
len(y)
# +
number_of_samples = 5000
idx = np.random.permutation(y.shape[0])
X_train = torch.tensor(X[idx, :][:number_of_samples], dtype=torch.float32, requires_grad=True).to(device)
y_train = torch.tensor(y[idx, :][:number_of_samples], dtype=torch.float32).to(device)
# -
# ## Configuration of DeepMoD
# Configuration of the function approximator: Here the first argument is the number of inputs and the last argument the number of outputs; the list in between gives the hidden-layer sizes.
network = NN(3, [30, 30, 30, 30], 1)
# Configuration of the library function: We select the library with a 2D spatial input. Note that the max differential order has been pre-determined here out of convenience. So, for poly_order 1 the library contains the following 12 terms:
# * [$1, u_x, u_y, u_{xx}, u_{yy}, u_{xy}, u, u u_x, u u_y, u u_{xx}, u u_{yy}, u u_{xy}$]
library = Library2D_third(poly_order=0)
# Configuration of the sparsity estimator and sparsity scheduler used. In this case we use the most basic threshold-based Lasso estimator and a scheduler that assesses the validation loss after a given patience. If that value is smaller than 1e-5, the algorithm is considered converged.
estimator = PDEFIND()
estimator = Threshold(0.05)
sparsity_scheduler = TrainTestPeriodic(periodicity=50, patience=25, delta=1e-5)
#
# Configuration of the constraint:
constraint = LeastSquares()
# Now we instantiate the model and select the optimizer
# +
model = DeepMoD(network, library, estimator, constraint).to(device)
# Defining optimizer
optimizer = torch.optim.Adam(model.parameters(), betas=(0.99, 0.99), amsgrad=True, lr=1e-3)
# -
# ## Run DeepMoD
# We can now run DeepMoD using all the options we have set and the training data:
# * The directory where the tensorboard file is written (log_dir)
# * The ratio of train/test set used (split)
# * The maximum number of iterations performed (max_iterations)
# * The absolute change in L1 norm considered converged (delta)
# * The amount of epochs over which the absolute change in L1 norm is calculated (patience)
train(model, X_train, y_train, optimizer,sparsity_scheduler, log_dir='runs/news10/', split=0.8, max_iterations=100000, delta=1e-7, patience=200)
# Sparsity masks provide the active and non-active terms in the PDE:
sol = model(torch.tensor(X, dtype=torch.float32))[0].reshape((width,width_2,steps)).detach().numpy()
ux = model(torch.tensor(X, dtype=torch.float32))[2][0][:,1].reshape((width,width_2,steps)).detach().numpy()
uy = model(torch.tensor(X, dtype=torch.float32))[2][0][:,2].reshape((width,width_2,steps)).detach().numpy()
uxx = model(torch.tensor(X, dtype=torch.float32))[2][0][:,3].reshape((width,width_2,steps)).detach().numpy()
uyy = model(torch.tensor(X, dtype=torch.float32))[2][0][:,4].reshape((width,width_2,steps)).detach().numpy()
import pysindy as ps
fd_spline = ps.SINDyDerivative(kind='spline', s=1e-2)
fd_spectral = ps.SINDyDerivative(kind='spectral')
fd_sg = ps.SINDyDerivative(kind='savitzky_golay', left=0.5, right=0.5, order=3)
y = down_data[2,:,19]
x = x_arr
plt.plot(x,y, 'b--')
plt.plot(x,sol[2,:,19]*np.max(down_data),'b', label='x = 1')
y = down_data[5,:,19]
x = x_arr
plt.plot(x,y, 'g--')
plt.plot(x,sol[5,:,19]*np.max(down_data),'g', label='x = 5')
y = down_data[11,:,19]
x = x_arr
plt.plot(x,y, 'r--')
plt.plot(x,sol[11,:,19]*np.max(down_data),'r', label='x = 10')
plt.legend()
y = down_data[1,:,1]
x = x_arr
plt.plot(x,y, 'b--')
plt.plot(x,sol[1,:,1]*np.max(down_data),'b', label='x = 1')
y = down_data[5,:,1]
x = x_arr
plt.plot(x,y, 'g--')
plt.plot(x,sol[5,:,1]*np.max(down_data),'g', label='x = 5')
y = down_data[11,:,1]
x = x_arr
plt.plot(x,y, 'r--')
plt.plot(x,sol[11,:,1]*np.max(down_data),'r', label='x = 10')
plt.legend()
np.max(down_data)/100
plt.plot(x,fd_sg(y,x), 'ro')
y = down_data[1,:,19]
x = x_arr
plt.plot(x,fd_sg(y,x), 'b--')
plt.plot(x,uy[1,:,19]*np.max(down_data)/100,'b', label='x = 1')
y = down_data[5,:,19]
x = x_arr
plt.plot(x,fd_sg(y,x), 'g--')
plt.plot(x,uy[5,:,19]*np.max(down_data)/100,'g', label='x = 5')
y = down_data[10,:,19]
x = x_arr
plt.plot(x,fd_sg(y,x), 'r--')
plt.plot(x,uy[10,:,19]*np.max(down_data)/100,'r', label='x = 10')
plt.legend()
y = down_data[2,:,19]
x = x_arr
plt.plot(x,fd_sg(fd_sg(y,x)), 'b--')
plt.plot(x,uyy[2,:,19]*np.max(down_data)/(100*100),'b')
y = down_data[5,:,19]
x = x_arr
plt.plot(x,fd_sg(fd_sg(y,x)), 'g--')
plt.plot(x,uyy[5,:,19]*np.max(down_data)/(100*100),'g')
y = down_data[11,:,19]
x = x_arr
plt.plot(x,fd_sg(fd_sg(y,x)), 'r--')
plt.plot(x,uyy[11,:,19]*np.max(down_data)/(100*100),'r')
# +
fig = plt.figure(figsize=(15,5))
plt.subplot(1,3, 1)
y = down_data[2,:,2]
x = x_arr
plt.plot(x,y)
plt.plot(x,sol[2,:,2]*np.max(down_data))
plt.legend()
plt.subplot(1,3, 2)
y = down_data[2,:,2]
x = x_arr
plt.plot(x,y)
plt.plot(x,sol[2,:,2]*np.max(down_data))
plt.subplot(1,3, 3)
y = down_data[2,:,2]
x = x_arr
plt.plot(x,y)
plt.plot(x,sol[2,:,2]*np.max(down_data))
plt.legend()
plt.show()
# +
fig = plt.figure(figsize=(15,5))
plt.subplot(1,3, 1)
plt.imshow(sol[:,:,1], aspect=0.5)
plt.subplot(1,3, 2)
plt.imshow(sol[:,:,19], aspect=0.5)
plt.subplot(1,3, 3)
plt.imshow(sol[:,:,39], aspect=0.5)
plt.savefig('reconstruction.pdf')
# +
fig = plt.figure(figsize=(15,5))
plt.subplot(1,3, 1)
plt.imshow(down_data[:,:,1], aspect=0.5)
plt.subplot(1,3, 2)
plt.imshow(down_data[:,:,19], aspect=0.5)
plt.subplot(1,3, 3)
plt.imshow(down_data[:,:,39], aspect=0.5)
plt.savefig('original_20_20_40.pdf')
# -
np.max(down_data)
plt.plot(x,sol[5,:,10]*np.max(down_data))
noise_level = 0.025
y_noisy = y + noise_level * np.std(y) * np.random.randn(y.size)
plt.plot(x,uy[25,:,10])
plt.plot(x,ux[25,:,10])
# +
fig = plt.figure(figsize=(15,5))
plt.subplot(1,3, 1)
plt.plot(fd_spline(y.reshape(-1,1),x), label='Ground truth',linewidth=3)
plt.plot(fd_spline(y_noisy.reshape(-1,1),x), label='Spline',linewidth=3)
plt.legend()
plt.subplot(1,3, 2)
plt.plot(fd_spline(y.reshape(-1,1),x), label='Ground truth',linewidth=3)
plt.plot(fd_sg(y_noisy.reshape(-1,1),x), label='Savitzky Golay',linewidth=3)
plt.legend()
plt.subplot(1,3, 3)
plt.plot(fd_spline(y.reshape(-1,1),x), label='Ground truth',linewidth=3)
plt.plot(uy[25,:,10],linewidth=3, label='DeepMoD')
plt.legend()
plt.show()
# -
plt.plot(ux[10,:,5])
ax = plt.subplot(1,1,1)
ax.plot(fd(y.reshape(-1,1),x), label='Ground truth')
ax.plot(fd_spline(y_noisy.reshape(-1,1),x), label='Spline')
ax.plot(fd_sg(y_noisy.reshape(-1,1),x), label='Savitzky Golay')
ax.legend()
plt.plot(model(torch.tensor(X, dtype=torch.float32))[2][0].detach().numpy())
sol = model(torch.tensor(X, dtype=torch.float32))[0]
plt.imshow(sol[:,:,4].detach().numpy())
plt.plot(sol[10,:,6].detach().numpy())
plt.plot(down_data[10,:,6]/np.max(down_data))
x = np.arange(0,len(y))
import pysindy as ps
diffs = [
('PySINDy Finite Difference', ps.FiniteDifference()),
('Smoothed Finite Difference', ps.SmoothedFiniteDifference()),
('Savitzky Golay', ps.SINDyDerivative(kind='savitzky_golay', left=0.5, right=0.5, order=3)),
('Spline', ps.SINDyDerivative(kind='spline', s=1e-2)),
('Trend Filtered', ps.SINDyDerivative(kind='trend_filtered', order=0, alpha=1e-2)),
('Spectral', ps.SINDyDerivative(kind='spectral')),
]
fd = ps.SINDyDerivative(kind='spline', s=1e-2)
y = down_data[:,10,9]/np.max(down_data)
x = np.arange(0,len(y))
t = np.linspace(0,1,5)
X = np.vstack((np.sin(t),np.cos(t))).T
plt.plot(y)
plt.plot(fd(y.reshape(-1,1),x))
y.shape
plt.plot(fd._differentiate(y.reshape(-1,1),x))
plt.plot(ux[:,10,6])
plt.plot(sol[:,10,6].detach().numpy())
plt.plot(down_data[:,10,6]/np.max(down_data))
model.sparsity_masks
# estimator_coeffs gives the magnitude of the active terms:
print(model.estimator_coeffs())
plt.contourf(ux[:,:,10])
plt.plot(ux[25,:,2])
ax = plt.subplot(1,1,1)
ax.plot(fd(y.reshape(-1,1),x), label='Ground truth')
ax.plot(fd_spline(y_noisy.reshape(-1,1),x), label='Spline')
ax.plot(fd_sg(y_noisy.reshape(-1,1),x), label='Savitzky Golay')
ax.legend()
import pysindy as ps
fd_spline = ps.SINDyDerivative(kind='spline', s=1e-2)
fd_spectral = ps.SINDyDerivative(kind='spectral')
fd_sg = ps.SINDyDerivative(kind='savitzky_golay', left=0.5, right=0.5, order=3)
y = u_v[25,:,2]
x = y_v[25,:,2]
plt.scatter(x,y)
y.shape
noise_level = 0.025
y_noisy = y + noise_level * np.std(y) * np.random.randn(y.size)
ax = plt.subplot(1,1,1)
ax.plot(x,y_noisy, label="line 1")
ax.plot(x,y, label="line 2")
ax.legend()
ax = plt.subplot(1,1,1)
ax.plot(fd(y.reshape(-1,1),x), label='Ground truth')
ax.plot(fd_spline(y_noisy.reshape(-1,1),x), label='Spline')
ax.plot(fd_sg(y_noisy.reshape(-1,1),x), label='Savitzky Golay')
ax.legend()
|
paper/Advection_diffusion/Old/AD_sensor_density_set2_working_tak5-3-1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + colab_type="code" id="BOwsuGQQY9OL" colab={}
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
import tensorflow.keras.utils as ku
import numpy as np
# + colab_type="code" id="PRnDnCW-Z7qv" colab={}
tokenizer = Tokenizer()
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sonnets.txt \
# -O /tmp/sonnets.txt
data = open('/tmp/sonnets.txt').read()
corpus = data.lower().split("\n")
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1
# create input sequences using list of tokens
input_sequences = []
for line in corpus:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
input_sequences.append(n_gram_sequence)
# pad sequences
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))
# create predictors and label
predictors, label = input_sequences[:,:-1],input_sequences[:,-1]
label = ku.to_categorical(label, num_classes=total_words)
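For intuition, the cell above does two things to each tokenized line: it expands it into all prefixes of length ≥ 2, then left-pads them to a common length. A minimal pure-Python sketch of both steps (token IDs are made up, and `pad_pre` is a hypothetical stand-in for keras's `pad_sequences(..., padding='pre')`):

```python
token_list = [4, 2, 66, 8]  # hypothetical token IDs for one line of text

# n-gram expansion: every prefix of length >= 2
sequences = [token_list[:i + 1] for i in range(1, len(token_list))]
print(sequences)  # [[4, 2], [4, 2, 66], [4, 2, 66, 8]]

def pad_pre(seqs, maxlen):
    # left-pad with zeros, truncating from the front if too long ('pre' behaviour)
    return [[0] * (maxlen - len(s)) + s[-maxlen:] for s in seqs]

print(pad_pre(sequences, 4))  # [[0, 0, 4, 2], [0, 4, 2, 66], [4, 2, 66, 8]]
```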
# + colab_type="code" id="w9vH8Y59ajYL" colab={}
model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(150, return_sequences = True)))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dense(total_words//2, activation='relu', kernel_regularizer=regularizers.l2(0.01)))  # integer division: layer sizes must be ints
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
# + colab_type="code" id="AIg2f1HBxqof" colab={}
history = model.fit(predictors, label, epochs=100, verbose=1)
# + colab_type="code" id="1fXTEO3GJ282" colab={}
import matplotlib.pyplot as plt
acc = history.history['accuracy']
loss = history.history['loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b', label='Training accuracy')
plt.title('Training accuracy')
plt.figure()
plt.plot(epochs, loss, 'b', label='Training Loss')
plt.title('Training loss')
plt.legend()
plt.show()
# + colab_type="code" id="6Vc6PHgxa6Hm" colab={}
seed_text = "Help me Obi Wan Kenobi, you're my only hope"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted = model.predict_classes(token_list, verbose=0)
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
|
TensorFlow in Practice Specialization/Course 3 - Natural Language Processing in TensorFlow/Week 4 - Sequence models and literature/Exercise/NLP_Week4_Exercise_Shakespeare_Answer.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia (4 threads) 1.6.4
# language: julia
# name: julia-(4-threads)-1.6
# ---
# # AlgorithmicThinking: PeakFinding
#
# <a href="https://www.youtube.com/watch?v=HtSuA80QTyo&list=PLUl4u3cNGP61Oq3tWYp6V_F-5jb5L2iHb&index=2" title="1. Algorithmic Thinking, Peak Finding"><img src="http://i3.ytimg.com/vi/HtSuA80QTyo/hqdefault.jpg" alt="1. Algorithmic Thinking, Peak Finding" /></a>
#
using BenchmarkTools
# # 1-D Peak Finding
#
# Given an array of 'n' numbers (1-indexed), a number nums[i] is defined as a peak if nums[i-1] <= nums[i] >= nums[i+1]
#
# Can assume nums[0] = nums[n+1] = -inf
#
# Return a peak if it exists
#
# For example: position 2 is a peak if and only if b ≥ a and b ≥ c; position 9 is a peak if i ≥ h.
#
# `a,b,c,d,e,f,g,h,i`
#
# `1,2,3,4,5,6,7,8,9`
#
# Find a peak if it exists.
#
# 1. Straightforward stepping algorithm that checks each number from the left
#     - Looks at n/2 elements on average, and could look at all n elements in the worst case: $\Theta(n)$, which gives both lower and upper bounds (O() is just the upper bound). The algorithm's complexity is linear
#     - How can we lower this complexity?
#
# What if we start in the middle and look at n/2 elements
#
# 2. Recursive algorithm (divide & conquer)
#     - if a[n/2] < a[n/2-1] then only look at the left half,
#     - else if a[n/2] < a[n/2+1] then only look at the right half
# - else n/2 position is a peak
# - work algorithm does on input of size n : $T(n) = T(n/2) + \Theta(1)$ with base case $T(1) =\Theta(1)$
# - where, $\Theta(1)$ is time to compare a [n/2] neighbors
# - $T(n) = \underset{log_2 n \text{ times}}{\Theta(1) + \dots + \Theta(1)} = \Theta(log_2 n)$
#
#
# +
"""Test if the index is the peak"""
function isPeak(arr, idx)
idx == 1 && return arr[idx] >= arr[idx+1]
idx == length(arr) && return arr[idx] >= arr[idx-1]
return arr[idx-1] <= arr[idx] >= arr[idx+1]
end
arr = [1,2,3,4,7,3,5,2,6,7,9,7,2]
isPeak(arr, 1), isPeak(arr,5), isPeak(arr,13)
# -
using Random
Random.seed!(0)
numArr = rand(-1000:1000, 10000);
# +
"""Start at the left and look at all elements until the condition is fulfilled"""
function naivePeakSearch(arr)::Int32
    for i in eachindex(arr)  # same as 1:length(arr)
        isPeak(arr, i) && return i
    end
    return -1  # unreachable for non-empty arrays: a peak always exists
end
@benchmark naivePeakSearch(numArr) setup=(numArr=rand(-10000:10000, 1_000_000)) # values in ns, should be higher
# -
# In the worst case, the peak might be at the right end and we might have to look at all elements; the worst-case complexity is $\Theta(n)$.
#
# But arrays of numbers are generally correlated, hence following the increasing or decreasing trend can be a good way to find peaks; if there's only one peak we can find it recursively as follows.
using Distributions
genRandPeak(n) = rand(TriangularDist(-n,n),n)
genRandPeak(5)'
# +
"""Recursive Peak Search: Divide and Conquer
Look at n/2,
If nums[n/2]<nums[n/2+1], look right and recurse,
If nums[n/2]<nums[n/2-1], look left and recurse
return nums[n/2]
"""
function recursivePeakSearch(arr, lo, hi)::Int32
    mid = (lo + hi) ÷ 2
    mid > 1 && arr[mid] < arr[mid-1] && return recursivePeakSearch(arr, lo, mid-1)
    mid < length(arr) && arr[mid] < arr[mid+1] && return recursivePeakSearch(arr, mid+1, hi)
    return mid
end
@benchmark recursivePeakSearch(numArr, 1, length(numArr)) setup=(numArr=genRandPeak(100_000))
# -
# ## 2-D Peak Finding
#
# Given a 2-D array Matrix, an element Matrix[i][j] is a hill (or peak) iff
#
# - Matrix[i-1, j] <= Matrix[i, j] >= Matrix[i+1, j]
# - Matrix[i, j-1] <= Matrix[i. j] >= Matrix[i, j+1]
arr2d = [1 2 3 4;
14 13 12 10;
15 9 11 18;
16 17 19 20]
# +
"""Returns neighbours of a cell in a matrix"""
function neighbors1(arr, i, j)
r, c = size(arr)
neighbors = []
for idx in [(i-1,j), (i+1,j),(i,j-1),(i,j+1)]
if !(idx[1] in [0,r+1]) && !(idx[2] in [0,c+1])
push!(neighbors, idx)
end
end
return neighbors
end
neighbors1(arr2d,4,4)
# +
"""Returns neighbours of a cell in a matrix"""
function neighbors(arr, idx)
r, c = size(arr)
neighborIdx = [CartesianIndex(idx)] .+ CartesianIndex.([(1,0),(0,1),(-1,0),(0,-1)])
validIdx = [checkbounds(Bool,arr,idx) for idx in neighborIdx]
# return arr[neighborIdx[validIdx]]
return neighborIdx[validIdx]
end
neighbors(arr2d, (2,2))
# +
"""Greedy Ascent follows direction of largest increase to find a peak, has Θ(nm) complexity
Strategy: Pick arbitrary midpoint, return if peak else choose highest neighbor and repeat
"""
function greedyAscent(arr)
idx = CartesianIndex(size(arr).÷2) # start at midpoint
while true
neighborIdx = neighbors(arr,idx)
all(arr[idx] .> arr[neighborIdx]) && return idx # current index is peak
idx = neighborIdx[argmax(arr[neighborIdx])]
end
end
greedyAscent(arr2d)
# -
Random.seed!(10)
mat = rand(0:50, 10,10)
greedyAscent(mat) # There are other larger peaks as well, which this algo misses
# +
""" Divide and Conquer
1. Pick the middle column j = m/2, find a 1-D peak at (i, j) in that column, then find a 1-D peak along row i. But a 2D peak may not exist on row i, as in arr2d below
2. Divide and conquer with the global maximum
    1. Pick the middle column j = m/2
    2. Find the global maximum on column j, at (i, j)
    3. Compare (i, j-1), (i, j), (i, j+1) and recurse toward the larger neighbour; if (i, j) is the largest, it is a 2D peak
    4. Solve the new problem with half the number of columns
    5. When you have a single column, find the global maximum and stop
Complexity:
    T(n,m) = T(n,m/2) + Θ(n)   (finding the column max)
    T(n,m) = Θ(n log m)        (log m rounds of Θ(n) work)
"""
function recursive2dpeak(arr, colLo, colHi)
    col = (colLo + colHi) ÷ 2
    row = argmax(arr[:, col])          # global maximum on column `col`
    colLo == colHi && return row, col
    col > 1 && arr[row, col-1] > arr[row, col] && return recursive2dpeak(arr, colLo, col-1)
    col < size(arr, 2) && arr[row, col+1] > arr[row, col] && return recursive2dpeak(arr, col+1, colHi)
    return row, col
end
recursive2dpeak(arr2d, 1, size(arr2d)[2])
# -
recursive2dpeak(mat, 1, size(mat)[2])
|
L1 Peak Finding.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mask R-CNN Demo
#
# A quick intro to using the pre-trained model to detect and segment objects.
# +
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
import coco
# %matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
# -
# ## Configurations
#
# We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.
#
# For inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
# +
class InferenceConfig(coco.CocoConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
# -
# ## Create Model and Load Trained Weights
# +
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
# -
# ## Class Names
#
# The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.
#
# To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.
#
# To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.
# ```
# # Load COCO dataset
# dataset = coco.CocoDataset()
# dataset.load_coco(COCO_DIR, "train")
# dataset.prepare()
#
# # Print class names
# print(dataset.class_names)
# ```
#
# We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, etc.)
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush']
# ## Run Object Detection
# +
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
class_names, r['scores'])
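# Each element of `results` is a dict whose keys are the ones used above (`rois`, `masks`, `class_ids`, `scores`). A hedged sketch of iterating over the detections, with dummy data standing in for a real model run (real values are numpy arrays):

```python
# Dummy stand-in for one element of `model.detect(...)`'s return value.
r_demo = {'class_ids': [1, 78], 'scores': [0.99, 0.87]}
names_demo = {1: 'person', 78: 'teddy bear'}  # subset of class_names, for illustration

# Pair each detected class with its confidence score.
for cid, score in zip(r_demo['class_ids'], r_demo['scores']):
    print(f"{names_demo[cid]}: {score:.2f}")
```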
# File: samples/demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Exploring Dask
# Dask provides a pandas-like DataFrame API that runs in parallel and scales to datasets that do not fit in memory
import dask
import numpy as np
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client
# Import plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.dates as mdates
sns.set(rc={'figure.figsize':(20, 10)})
import statsmodels as sm
from statsmodels.tsa.stattools import adfuller
from numpy import log
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
# Creating distributed client
client = Client()
df = dd.read_csv('june*')
print (df)
df.columns=['te','td','sa','da','sp','dp','pr','flg','fwd','stos','pkt','byt','label']
df.head(10)
# Keeping a copy
data = df
# Changing column data type for te
data['te'] = dd.to_datetime(df['te'])
# Function to mark traffic as anomalous when windowing
normal_traffic_type = ['background', 'blacklist']
def isAnomolus(x):
data = ~x.isin(normal_traffic_type)
if data.any():
return 1
else:
return 0
# Function to mark traffic as anomalous when windowing
normal_traffic_type = ['background', 'blacklist']
def isAnomolus_new(x):
return x
isAnomolus_agg = dd.Aggregation(
name = 'isAnomolus',
chunk = lambda x: x.apply(lambda a: (~a.isin(normal_traffic_type)).any()),
agg = lambda y: y.apply(lambda z: 1 if z.any() == True else 0)
)
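# A custom `dd.Aggregation` works in two stages: `chunk` runs on each partition and `agg` combines the per-partition results. The same two-stage idea in a stdlib-only sketch (function names and data are ours; no Dask needed):

```python
# Two-stage aggregation mimicking dd.Aggregation's chunk/agg split.
normal_types = {'background', 'blacklist'}

def chunk(partition_labels):
    # Stage 1: per partition, does any row fall outside the normal types?
    return any(label not in normal_types for label in partition_labels)

def agg(partition_flags):
    # Stage 2: combine per-partition flags into one 0/1 window label.
    return 1 if any(partition_flags) else 0

partitions = [['background', 'background'], ['blacklist', 'dos-attack']]
print(agg([chunk(p) for p in partitions]))  # 1
```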
# Window data into 5-minute buckets by summing pkt and byt; label is 1 if any anomalous traffic exists in the time window
data_win = data.groupby(pd.Grouper(key='te', freq='5T')).agg({
"pkt": "sum",
"byt": "sum",
"label": isAnomolus_agg
}).compute()
data_win
data_win_anomaly = data_win[data_win['label'] == 1]
data_win_anomaly
# ### Taking only normal traffic(including blacklisted)
# The idea is to forecast the normal traffic and compare it with the anomalous traffic to spot deviations. Hence we window the normal traffic into 5-minute buckets, which will be used for forecasting.
# Filter normal traffic
data_normal_traffic = data[data['label'].isin(normal_traffic_type)]
# Window normal traffic for 5min window
data_normal_traffic_win = data_normal_traffic.groupby(pd.Grouper(key='te', freq='5T')).agg({
"pkt": "sum",
"byt": "sum"
}).compute()
plt.figure()
plt.plot(data_normal_traffic_win.index, data_normal_traffic_win.byt)
plt.show()
plt.close()
merged_data = pd.merge(data_win, data_normal_traffic_win, on=['te'], how='inner')
merged_data['byt_diff'] = merged_data.byt_x - merged_data.byt_y
plt.rc('font', size=12)
fig, ax = plt.subplots(figsize=(20, 6))
myFmt = mdates.DateFormatter('%Y-%m-%d')  # %m/%d are month/day; %M/%D would be minutes/locale date
ax.xaxis.set_major_formatter(myFmt)
ax.plot(merged_data.byt_diff, label='Difference')
plt.show()
plt.close()
# ### EDA for windowed data
plt.hist(data_win.label)
plt.show()
plt.close()
plt.scatter(merged_data.label, merged_data.byt_diff)
plt.show()
plt.close()
merged_data[(merged_data.byt_diff == 0) & (merged_data.label == 1)].count()
merged_data[merged_data.byt_diff !=0].byt_diff.nsmallest(40)
data['anomaly'] = data['label'].apply(lambda x: 0 if x in(normal_traffic_type) else 1)
protocols = ['TCP', 'UDP', 'ICMP', 'GRE', 'ESP', 'RSVP', 'IPv6', 'IPIP', '255','nan']
data['protocol'] = data['pr'].apply(lambda x: protocols.index(x) if x in protocols else -1)
# +
#data['pr'].unique().compute()
# +
#data['flg'].unique().compute()
# -
plt.close()
# ## ARIMA model forecasting
result = adfuller(data_normal_traffic_win.byt, autolag='AIC')
print(f'ADF Statistic: {result[0]}')
print(f'n_lags: {result[2]}')
print(f'p-value: {result[1]}')
print('Critical Values:')
for key, value in result[4].items():
    print(f'  {key}: {value}')
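# The usual decision rule: if the p-value is below the chosen significance level, reject the unit-root null hypothesis and treat the series as stationary. A minimal sketch (the 0.05 threshold is our assumption):

```python
def is_stationary(p_value, alpha=0.05):
    """ADF null hypothesis: the series has a unit root (is non-stationary)."""
    return p_value < alpha

print(is_stationary(0.003))  # True: no differencing strictly required
print(is_stationary(0.42))   # False: difference the series before fitting ARIMA
```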
# +
# Original Series and taking byt as value
fig1, axes1 = plt.subplots(3, 2, sharex=True)
axes1[0, 0].plot(data_normal_traffic_win.byt); axes1[0, 0].set_title('Original Series')
plot_acf(data_normal_traffic_win.byt.squeeze(), lags=np.arange(len(data_normal_traffic_win)) , ax=axes1[0, 1])
# 1st Differencing
axes1[1, 0].plot(data_normal_traffic_win.byt.diff()); axes1[1, 0].set_title('1st Order Differencing')
#plot_acf(data_normal_traffic_win.byt.diff().dropna(), lags=np.arange(len(data_normal_traffic_win) -2), ax=axes1[1, 1])
# 2nd Differencing
axes1[2, 0].plot(data_normal_traffic_win.byt.diff().diff()); axes1[2, 0].set_title('2nd Order Differencing')
#plot_acf(data_normal_traffic_win.byt.diff().diff().dropna(), lags=np.arange(len(data_normal_traffic_win) - 3), ax=axes1[2, 1])
plt.show()
# -
print(data_normal_traffic_win.index.freq)
# File: dask.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
import numpy as np
from TideRec import Rec
from TideRec import CrossCor
import matplotlib.pyplot as plt
BoxSize = 250.0
NMesh = 256
MyRec5 = Rec(BoxSize = BoxSize, NMesh = NMesh)
MyRec5.Read(mode = 'binary', path = '../exam/exam_data/Ng256_Density.bin')
MyRec5.AutoRec(mode ='5term', sigma = 1)
MyRec2 = Rec(BoxSize = BoxSize, NMesh = NMesh)
MyRec2.Read(mode = 'binary', path = '../exam/exam_data/Ng256_Density.bin')
MyRec2.AutoRec(mode ='2term', sigma = 1)
MyCor5 = CrossCor(Rec = MyRec5, path = '../exam/exam_data/Ng256_Density.bin')
MyCor2 = CrossCor(Rec = MyRec2, path = '../exam/exam_data/Ng256_Density.bin')
n, k, rk = MyCor5.Compute()
if MyCor5.rank == 0:
plt.semilogx(k, rk, label = '5term')
plt.xlim([0.03, 1])
plt.ylim([0, 1])
n, k, rk = MyCor2.Compute()
if MyCor2.rank == 0:
plt.semilogx(k, rk, label = '2term')
plt.xlim([0.03, 1])
plt.ylim([0, 1])
plt.legend()
plt.show()
MyCor5.setbin(style = 'log', binnum = 10)
MyCor2.setbin(style = 'log', binnum = 10)
n, k, rk = MyCor5.Compute()
if MyCor5.rank == 0:
plt.semilogx(k, rk, label = '5term')
plt.xlim([0.03, 1])
plt.ylim([0, 1])
n, k, rk = MyCor2.Compute()
if MyCor2.rank == 0:
plt.semilogx(k, rk, label = '2term')
plt.xlim([0.03, 1])
plt.ylim([0, 1])
plt.legend()
plt.show()
# File: examples/CrossCorExample.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# To run the script, you need to start Pathway Tools from the command line
# using the -lisp -python options. Example (from the Pathway Tools github repository):
import os
# os.system('nohup /opt/pathway-tools/pathway-tools -lisp -python &')
os.system('nohup /opt/pathway-tools/pathway-tools -lisp -python-local-only &') # added cybersecurity
os.system('nohup /shared/D1/opt/pathway-tools/pathway-tools -lisp -python-local-only &') # added cybersecurity
# +
# modify sys.path to recognize local pythoncyc
import os
import sys
module_path = os.path.abspath(os.path.join('./PythonCyc/'))
sys.path = [module_path]
sys.path
# +
# remove pyc files
# !rm ./PythonCyc/pythoncyc/*pyc
import pythoncyc
all_orgids = pythoncyc.all_orgids()
print all_orgids
meta = pythoncyc.select_organism(u'|META|')
ecoli = pythoncyc.select_organism(u'|ECOLI|')
compounds = [x.frameid for x in ecoli.compounds.instances][0:2]
reactions = [x.frameid for x in ecoli.reactions.instances][0:2]
enzymes = ecoli.all_enzymes(type_of_reactions = ':chemical-change')[0:2]
# +
"""
cpd
A child of class |Compounds|.
non-specific-too?
Keyword, Optional If non-nil, returns all generic reactions where cpd, or a
parent of cpd, appears as a substrate.
transport-only?
Keyword, Optional If non-nil, return only transport reactions.
compartment
Keyword, Optional If non-nil, return only reactions within the specified
compartment.
enzymatic?
Keyword, Optional If non-nil, return only enzymatic reactions.
"""
lst = []
for compound in compounds:
lst.append(ecoli.reactions_of_compound(compound, enzymatic = True))
print(lst[-10:])
# -
try:
lst = []
for compound in compounds:
for reaction in reactions:
lst.append(ecoli.substrate_of_generic_rxn(compound, reaction))
print(lst[-10:])
except:
pass
# +
"""
cpd
An instance of class |Compounds|.
non-specific-too?
Keyword, Optional If non-nil, returns all generic reactions where cpd, or a
parent of cpd, appears as a substrate.
modulators?
Keyword, Optional If non-nil, returns pathways where cpd appears as a regulator
as well.
phys-relevant?
Keyword, Optional If true, then only return inhibitors that are associated with
|Regulation| instances that have the 'physiologically‑relevant? slot set to
non-nil.
include-rxns?
Keyword, Optional If non-nil, then return a list of reaction-pathway pairs.
"""
lst = []
for compound in compounds:
lst.append(ecoli.pathways_of_compound(compound))#, modulators = True, include_rxns = True))
print(lst[-10:])
# +
"""
cpds
An instance or list of instances of class |Compounds|.
mode
Keyword, Optional Represents the type of regulation. Can take on the values of
“+”, “-”, or 'nil.
mechanisms
Keyword, Optional Keywords from the 'mechanism slot of the corresponding
sub-class of the class |Regulation|. If non-nil, only regulation objects with
mechanisms in this list will be explored for regulated objects.
phys-relevant?
Keyword, Optional If true, then only return inhibitors that are associated with
|Regulation| instances that have the 'physiologically‑relevant? slot set to
non-nil.
slots
Keyword A list of enzymatic reaction slots to
"""
try:
lst = []
for compound in compounds:
lst.append(ecoli.activated_or_inhibited_by_compound(compound))
print(lst[-10:])
except:
pass
# -
lst = []
for compound in compounds:
lst.append(ecoli.tfs_bound_to_compound(compound, include_inactive = True))
print(lst[-10:])
# +
# Object Name Manipulation Operations
# lst = []
# for reaction in reactions:
# lst.append(ecoli.get_name_string(reaction))
# print lst[-10:]
# print ''
lst = []
for reaction in reactions:
lst.append(ecoli.full_enzyme_name(reaction))
print lst[-10:]
print ''
lst = []
for enzyme in enzymes:
lst.append(ecoli.enzyme_activity_name(enzyme))
print lst[-10:]
# -
# File: ptools API - Operations on Compounds.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <font size="+5">#03 | Model Selection. Decision Tree vs Support Vector Machines vs Logistic Regression</font>
# - Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)
# - Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄
# # Discipline to Search Solutions in Google
# > Apply the following steps when **looking for solutions in Google**:
# >
# > 1. **Necessity**: How to load an Excel in Python?
# > 2. **Search in Google**: by keywords
# > - `load excel python`
# > - ~~how to load excel in python~~
# > 3. **Solution**: What's the `function()` that loads an Excel in Python?
# > - A function is to programming what the atom is to physics.
# > - Every time you want to do something in programming
# > - **You will need a `function()`** to make it
# > - Therefore, you must **detect parentheses `()`**
# > - Out of all the words that you see on a website
# > - Because they indicate the presence of a `function()`.
# + [markdown] tags=[]
# # Load the Data
# -
# Load the dataset from [CIS](https://www.cis.es/cis/opencms/ES/index.html) executing the lines of code below:
# > - The goal of this dataset is
# > - To predict `internet_usage` of **people** (rows)
# > - Based on their **socio-demographical characteristics** (columns)
# +
import pandas as pd
url = 'https://raw.githubusercontent.com/py-thocrates/data/main/internet_usage_spain.csv'
df = pd.read_csv(url)
df.head()
# -
# # Build & Compare Models
# + [markdown] tags=[]
# ## `DecisionTreeClassifier()` Model in Python
# -
# %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VeUPuFGJHk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# > - Build the model `model.fit()`
# > - And see how good it is `model.score()`
# + [markdown] tags=[]
# ## `SVC()` Model in Python
# -
# %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/efR1C6CvhmE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# > - Build the model `model.fit()`
# > - And see how good it is `model.score()`
# + [markdown] tags=[]
# ## `LogisticRegression()` Model in Python
# -
# %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/yIYKR4sgzI8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# > - Build the model `model.fit()`
# > - And see how good it is `model.score()`
# # Function to Automate Lines of Code
# > - We repeated all the time the same code:
#
# ```python
# model.fit()
# model.score()
# ```
#
# > - Why not turn the lines into a `function()`
# > - To automate the process?
# > - In a way that you would just need
#
# ```python
# calculate_accuracy(model=dt)
#
# calculate_accuracy(model=svm)
#
# calculate_accuracy(model = lr)
# ```
#
# > - To calculate the `accuracy`
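# A minimal sketch of such a helper, assuming any estimator with scikit-learn-style `fit()`/`score()` methods; the toy `MajorityClassifier` below is ours, purely for illustration:

```python
def calculate_accuracy(model, X, y):
    """Fit the model on (X, y) and return its accuracy on the same data."""
    model.fit(X, y)           # build the model
    return model.score(X, y)  # fraction of correct predictions

# Toy estimator with the same fit/score interface, for illustration only:
class MajorityClassifier:
    def fit(self, X, y):
        self.majority = max(set(y), key=y.count)
    def score(self, X, y):
        return sum(label == self.majority for label in y) / len(y)

print(calculate_accuracy(MajorityClassifier(), [[0], [1], [2], [3]], [1, 1, 1, 0]))  # 0.75
```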
# ## Make a Procedure Sample for `DecisionTreeClassifier()`
# ## Automate the Procedure into a `function()`
# **Code Thinking**
#
# > 1. Think of the functions `result`
# > 2. Store that `object` to a variable
# > 3. `return` the `result` at the end
# > 4. **Indent the body** of the function to the right
# > 5. `def`ine the `function():`
# > 6. Think of what's gonna change when you execute the function with `different models`
# > 7. Locate the **`variable` that you will change**
# > 8. Turn it into the `parameter` of the `function()`
# ## `DecisionTreeClassifier()` Accuracy
# ## `SVC()` Accuracy
# ## `LogisticRegression()` Accuracy
# # Which is the Best Model?
# > Which model has the **highest accuracy**?
# ## University Access Exams Analogy
# > Let's **imagine**:
# >
# > 1. You have a `math exam` on Saturday
# > 2. Today is Monday
# > 3. You want to **calculate if you need to study more** for the math exam
# > 4. How do you calibrate your `math level`?
# > 5. Well, you've got **100 questions `X` with 100 solutions `y`** from past years exams
# > 6. You may study the 100 questions with 100 solutions `fit(questions, solutions)`
# > 7. Then, you may do a `mock exam` with the 100 questions `predict(questions)`
# > 8. And compare `your_solutions` with the `real_solutions`
# > 9. You've got **90/100 correct answers** `accuracy` in the mock exam
# > 10. You think you are **prepared for the maths exam**
# > 11. And when you do **the real exam on Saturday, the mark is 40/100**
# > 12. Why? How could we have prevented this?
# > 13. **Solution**: separate the 100 questions in
# > - `70 train` to study & `30 test` for the mock exam.
# # `train_test_split()` the Data
# > 1. **`fit()` the model with `Train Data`**
# >
# > - `model.fit(70%questions, 70%solutions)`
# > 2. **`.predict()` answers with `Test Data` (mock exam)**
# >
# > - `your_solutions = model.predict(30%questions)`
# > **3. Compare `your_solutions` with `correct answers` from mock exam**
# >
# > - `your_solutions == real_solutions`?
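# The split itself can be sketched with the standard library; in real projects you would use `sklearn.model_selection.train_test_split`. The 70/30 ratio follows the analogy above (the function name is ours):

```python
import random

def simple_train_test_split(questions, solutions, test_size=0.3, seed=42):
    """Shuffle indices, then cut into train (first 70%) and test (last 30%)."""
    idx = list(range(len(questions)))
    random.Random(seed).shuffle(idx)
    cut = round(len(idx) * (1 - test_size))
    train, test = idx[:cut], idx[cut:]
    return ([questions[i] for i in train], [questions[i] for i in test],
            [solutions[i] for i in train], [solutions[i] for i in test])

X_train, X_test, y_train, y_test = simple_train_test_split(list(range(100)), list(range(100)))
print(len(X_train), len(X_test))  # 70 30
```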
# # Optimize All Models & Compare Again
# ## Make a Procedure Sample for `DecisionTreeClassifier()`
# ## Automate the Procedure into a `function()`
# **Code Thinking**
#
# > 1. Think of the functions `result`
# > 2. Store that `object` to a variable
# > 3. `return` the `result` at the end
# > 4. **Indent the body** of the function to the right
# > 5. `def`ine the `function():`
# > 6. Think of what's gonna change when you execute the function with `different models`
# > 7. Locate the **`variable` that you will change**
# > 8. Turn it into the `parameter` of the `function()`
# ## `DecisionTreeClassifier()` Accuracy
# ## `SVC()` Accuracy
# ## `LogisticRegression()` Accuracy
# # Which is the Best Model with `train_test_split()`?
# > Which model has the **highest accuracy**?
# # Reflect
#
# > - Banks deploy models to predict the **probability for a customer to pay the loan**
# > - If the Bank used the `DecisionTreeClassifier()` instead of the `LogisticRegression()`
# > - What would have happened?
# > - Is `train_test_split()` always required to compare models?
# File: II Machine Learning & Deep Learning/04_Model Selection. Decision Tree vs Support Vector Machines vs Logistic Regression/04session_model-selection.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [Py-OO] Class 01
#
# ## Introduction to Object Orientation in Python
# What will you learn in this class?
#
# By the end of the class you will have learned:
#
# - Objects in Python
# - How they work
# - Typing
# - Mutability
# - How assignment and variables work
# - Classes
# - Basic syntax, creating instances, instance methods, class methods and static methods
# ### Review of Object-Oriented concepts
#
# Let's start with a quick review of Object-Oriented concepts and how they are applied in Python.
#
# The object-oriented programming paradigm aims to provide an abstraction of the real world and apply it to programming.
#
# *Objects* are software components that include *data* and *behaviors*. For example, dogs have *state* (name, breed and color) and behaviors (barking, wagging the tail and fetching objects), while bicycles have other *state* (model, current gear and current speed) and *behaviors* (changing gear, braking).
#
# Another important concept is that of a *class*. A class represents the structure of an object: for example, a cake recipe. In this example the recipe would be the class, containing the instructions for creating the object as well as information about the instance (the cake).
#
# In Python, objects have *attributes*, which can be either **methods** (functions bound to the object) or **data attributes** of the object. The latter is usually just called an *attribute*.
#
# It is important to know that in Python instances of classes are called exactly that: **instances**. In Java/C++ it is common to call "instances of classes" "objects of classes". That is not done in Python, because in this language everything is an object, so calling an instance of a class an object is redundant.
#
# Languages that implement the object-oriented paradigm must provide four basic concepts:
# - Abstraction: the ability to model characteristics of the real world
# - Encapsulation: allowing data to be protected while internal operations still have access to it.
# - Inheritance: a mechanism that enables new objects to be created by modifying something that already exists, linking the new object to the original.
# - Polymorphism: the ability of a single unit to take on several forms.
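# The last two concepts, inheritance and polymorphism, can be sketched in a few lines: a subclass reuses and overrides behavior from its parent, and callers do not need to know which concrete class they hold (class names are ours, for illustration):

```python
class Animal:
    def speak(self):
        return '...'

class Dog(Animal):       # inheritance: Dog extends Animal
    def speak(self):     # polymorphism: same method name, different behavior
        return 'Woof!'

class Cat(Animal):
    def speak(self):
        return 'Meow!'

# Callers work with any Animal without knowing the concrete class:
for animal in (Dog(), Cat(), Animal()):
    print(animal.speak())
```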
# ### Objects in Python
#
# As said before, everything in Python is an object. Let's start by examining a `dict` object:
notas = {'bia': 10, 'pedro': 0, 'ana': 7}
notas
# Dictionaries have several methods that we use to modify objects:
notas.keys()
notas.pop('bia')
notas
# We can use the `dir()` function to inspect the methods and attributes of the `dict` notas:
dir(notas)
# Here we see several methods whose names have underscores at the beginning and end, such as `__len__`, `__getitem__`, `__setitem__`. These are called special methods and are part of Python's data model. They are invoked by the interpreter when special syntax is triggered. For example, when we access a dictionary item by its key, the interpreter invokes `dict.__getitem__()`:
notas
notas.__getitem__('ana')
notas['ana']
notas.__getitem__('joselito')
notas['joselito']
# The `dict` also has special data attributes such as `__class__`, which references the object's class, and `__doc__`, which holds the object's docstring:
notas.__class__
notas.__doc__
# To see the docstring formatted for output, use the `print()` function:
print(notas.__doc__)
# Numbers are objects:
3 + 4
# They have methods and attributes:
print(3 .__doc__)
3 .__add__(4)
3 .__sub__(4)
# Just remember that special methods should not be called directly; the previous examples only illustrate how these methods work and that they exist. If you want to consult an object's documentation, use the `help()` function:
help(3)
# As explained in [py-intro] Class 05, functions are also objects. In the terminology used in the literature this means that, in Python, functions are first-class objects (or first-class citizens).
def soma(a, b):
""" retorna a + b """
soma = a + b
return soma
soma(1, 2)
soma
# We can assign functions to variables:
adição = soma
adição
# Access attributes:
adição.__name__
adição.__doc__
# We can see the bytecode the function executes using the `dis` (disassembly) module, passing the `soma()` function as an argument to `dis.dis()`:
import dis
dis.dis(soma)
# ### Object typing
#
# #### Strong typing
#
# Objects in Python are strongly typed, which means that implicit type conversions are *rarely* performed when carrying out operations. Let's look at some examples that illustrate this concept:
"1" + 10
# We tried to concatenate the number 10 to the string "1", but a `TypeError` exception was raised saying that an `int` object could not be implicitly converted to `str`.
#
# In Javascript and PHP, weakly typed languages, no exception would be raised and the interpreter would convert one of the types. In Javascript (1.5) the result would be the string "110" and in PHP (5.6) the number 11.
#
# Here we see that the operation `"1" + 10` could produce two results: a string or a number. As the Zen of Python says: "In the face of ambiguity, refuse the temptation to guess", and that is exactly what Python does: the language refuses to guess the type of the result and raises an exception.
#
# To make this example work we need to convert the types explicitly:
"1" + str(10)
int("1") + 10
# #### Dynamic typing
#
# We say a language is dynamically typed when it is not necessary to explicitly declare variable types. Objects have types, but variables can reference objects of any type. Type checks are performed at runtime rather than at compile time.
#
# When we define a function `dobra(x)` that returns the received value multiplied by 2, it can accept an argument of any type:
def dobra(x):
return x * 2
# We can double an `int`:
dobra(2)
# Double a `float`:
dobra(1.15)
# `string`s:
dobra('bo')
# sequences:
dobra([1, 2, 3])
dobra((4, 5, 6))
# Types that do not support multiplication by an integer will raise an exception at runtime:
dobra(None)
# The `type()` function lets us check the types of objects:
type(1)
type([1, 2, 3])
type((1, 2, 3))
type({})
type('lalala')
type(False)
# ### Mutability
#
# Python has both mutable and immutable objects, as we have seen in several examples throughout the course. The state (attributes) of mutable objects can be changed, whereas immutable objects cannot be changed in any way.
#
# The table below shows the mutability of Python's built-in types:
#
# <table>
# <thead>
# <th>Immutable</th>
# <th>Mutable</th>
# </thead>
# <tbody>
# <tr>
# <td>tuple</td>
# <td>list</td>
# </tr>
# <tr>
# <td>numbers (int, float, complex)</td>
# <td>dict</td>
# </tr>
# <tr>
# <td>frozenset</td>
# <td>set</td>
# </tr>
# <tr>
# <td>str, bytes</td>
# <td>objects that allow attribute changes via direct access, setters or methods</td>
# </tr>
# </tbody>
# </table>
#
# Let's look at some examples that demonstrate this:
a = 10
a
# Every Python object has an identity, a unique number that distinguishes that object. We can access an object's identity using the `id()` function:
id(a)
# This means the identity of the object `a` is 10894368.
# Now let's try to change the value of a:
b = 3
b
a += b
a
id(a)
# The identity changed, which means the variable `a` now references another object that was **created** when we executed `a += b`.
# Now let's look at an example of a mutable object:
lista = [1, 2, 3, 4]
lista
# Let's check this list's identity:
id(lista)
lista.append(10)
lista.remove(2)
lista += [-4, -3]
lista
id(lista)
# Even though we modified the list by inserting and removing values, its identity remains the same.
#
# Strings are also immutable:
s = 'abcd'
id(s)
s[0] = 'z'
# As we saw in class two of the introduction module, strings are immutable, and to change their value we need to use `slicing`:
s = 'z' + s[1:]
s
id(s)
# Comparing the identity of `s` before and after the change, we see that these are different objects.
# ### Variables
#
# Variables are just references to objects, as in Java. Variables are merely labels (or post-its) attached to objects. Unlike in C or Pascal, variables in Python are **not** boxes that store objects or values.
#
# For example:
a = [1, 2, 3]
a
b = a
a.append(4)
b
# The variables `a` and `b` hold references to the same list rather than copies.
# It is important to note that objects are created before assignment. The operation on the right-hand side of an assignment happens before the assignment itself:
c = 1 / 0
# Since the number could not be created, because division by zero is an invalid operation in the language, the variable c was not bound to any object:
c
# Since variables are just labels, the correct way to talk about assignment is "the variable `x` was assigned to the lamp (instance)" and not "the lamp was assigned to the variable `x`". It is as if we stuck a post-it `x` on an object, rather than storing that object in a box `x`.
#
# Because they are labels, we can attach several labels to the same object. This creates aliases:
josé = {'nome': '<NAME>', 'idade': 10}
zé = josé
zé is josé
id(zé), id(josé)
zé['ano_nascimento'] = 2006
josé
# Let's suppose there is an impostor, João, who holds the same credentials as José Silva. Their credentials are identical, but João is not José:
joão = {'nome': '<NAME>', 'idade': 10, 'ano_nascimento': 2006}
joão == josé
# The values of their data (or credentials) are equal, but they are not the same object:
joão is josé
# In this example we saw *aliasing*. `josé` and `zé` are aliases: two variables bound to the same object. On the other hand, `joão` is not an alias of `josé`: these variables are bound to distinct objects. `joão` and `josé` have the same **value**, which is what `==` compares, but different identities.
# The `==` operator compares the values of objects (the data they hold), while `is` compares their identities. Comparing values is more common than comparing identities, which is why `==` appears more often than `is` in Python code. One case where `is` is widely used is comparison with `None`:
a = 10
a is None
b = None
b is None
# ### Classes
#
# Let's look at the syntax for defining classes in Python through a few examples.
#
# We will start by creating a class that represents a dog. We will store: its name, the number of paws, whether the dog is carnivorous, and whether it is nervous.
class Cão:
qtd_patas = 4
carnívoro = True
nervoso = False
def __init__(self, nome):
self.nome = nome
# On the first line we define a class named `Cão`.
#
# From the second to the fourth line we define the class attributes `qtd_patas`, `carnívoro` and `nervoso`. Class attributes hold data shared by all instances of the class.
#
# On the sixth line we define the initializer (which may also be called the constructor), which must receive the name of the `Cão`.
#
# On the last line we create the instance attribute `nome` and bind it to the string passed to the constructor.
#
# Now let's create an instance of Cão:
rex = Cão('Rex')
type(rex)
# Let's check its attributes:
rex.qtd_patas
rex.carnívoro
rex.nervoso
rex.nome
# We can also change these attributes:
rex.nervoso = True
rex.nervoso
# We changed only the `nervoso` attribute of the `rex` instance. The value of `Cão.nervoso` remains the same:
Cão.nervoso
# We can also create attributes dynamically on our `rex` instance:
rex.sujo = True
rex.sujo
rex.idade = 5
rex.idade
# Remember once again that these changes happen only on the instance, not on the class:
Cão.sujo
Cão.idade
# Classes are objects too, and we can access their attributes:
Cão.__name__
Cão.qtd_patas
Cão.nervoso
Cão.carnívoro
Cão.nome
# We cannot access `nome`, because `nome` is an attribute bound only to *instances* of the class.
fido = Cão('Fido')
fido.nome
# Class attributes are used to provide default values for data shared by all "dogs", such as the number of paws.
#
# Now let's create methods (functions bound to classes) for the `Cão` class:
class Cão:
qtd_patas = 4
carnívoro = True
nervoso = False
def __init__(self, nome):
self.nome = nome
def latir(self, vezes=1):
""" Latir do cão. Quanto mais nervoso mais late. """
vezes += self.nervoso * vezes
latido = 'Au! ' * vezes
print('{}: {}'.format(self.nome, latido))
rex = Cão('Rex')
rex.latir()
rex.nervoso = True
rex.latir()
rex.latir(10)
# Let's play with `Cão` a bit more and implement even more methods:
class Cão:
qtd_patas = 4
carnívoro = True
nervoso = False
def __init__(self, nome, truques=None):
self.nome = nome
if not truques:
self.truques = []
else:
self.truques = list(truques)
def latir(self, vezes=1):
""" Latir do cão. Quanto mais nervoso mais late. """
vezes += self.nervoso * vezes
latido = 'Au! ' * vezes
print('{}: {}'.format(self.nome, latido))
def ensina_truque(self, truque):
if truque not in self.truques:
self.truques.append(truque)
fido = Cão('Fido', truques=['Pegar'])
fido.truques
fido.ensina_truque('Rolar')
fido.truques
fido.ensina_truque('Pegar')
fido.truques
# #### Instance, class and static methods
#
# By default the methods of a class are **instance methods**, which means they necessarily receive an instance of the class.
#
# For example:
class ExemploInstancia:
def metodo_instancia(self):
print('Recebi {}'.format(self))
# We cannot call an instance method with just the class:
ExemploInstancia.metodo_instancia()
# We need to create an instance in order to use it:
inst = ExemploInstancia()
inst.metodo_instancia()
# **Class methods**, on the other hand, are methods that refer to the class as a whole and receive, instead of an instance, the class object itself.
#
# To turn a method into a *classmethod* we use the `@classmethod` decorator. Decorators are used to "decorate" (or mark) functions and modify their behavior in some way. In Class 05 of this module (object orientation in Python) we will talk more about decorators.
#
# Class methods are defined and used like this:
class ExemploClasse:
@classmethod
def metodo_classe(cls):
print("Recebi {}".format(cls))
# We can call the method using the class object `ExemploClasse`:
ExemploClasse.metodo_classe()
# We can also call the method from an instance of that class. Being a *classmethod*, the method will still receive the class object as its argument, not the instance:
inst = ExemploClasse()
inst.metodo_classe()
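# A common use of *classmethods* is as alternative constructors; a small sketch (the `Data` class below is illustrative, not part of the lesson's code):

```python
class Data:
    def __init__(self, dia, mes, ano):
        self.dia, self.mes, self.ano = dia, mes, ano

    @classmethod
    def de_string(cls, texto):
        # receives the class object, so subclasses inherit this constructor too
        dia, mes, ano = (int(parte) for parte in texto.split('-'))
        return cls(dia, mes, ano)

d = Data.de_string('07-04-2021')
assert (d.dia, d.mes, d.ano) == (7, 4, 2021)
```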
# Finally, we also have **static methods**, which work as plain functions attached to objects or classes. They do not receive any arguments automatically:
class Exemplo:
@staticmethod
def metodo_estático():
        print('I am static and receive nothing automatically')
Exemplo.metodo_estático()
# We can also call the static method from an instance:
inst = Exemplo()
inst.metodo_estático()
# If you are going to create a static method, think carefully about whether that method **really** needs to be associated with that class. Often, instead of using a *staticmethod*, we can create a function and leave it associated with the module.
#
# For example, in the Django web framework, user authentication is done with functions rather than static methods of the user class:
#
# ```py
# from django.contrib.auth import authenticate, login, logout
#
# def exemplo_login_view(request):
# user = authenticate('usuario', '<PASSWORD>ha')
# login(request, user) # the user is logged into the system
# ...
#
# def exemplo_logout_view(request):
# logout(request.user) # the user tied to that request is logged out of the system
# ...
# ```
# #### Exercises (homework):
#
# Implement other characteristics (attributes) and behaviors (methods) for the `Cão` class.
#
# ##### Extra:
#
# Implement unit tests for the modified Cão class. *Bonus: reimplement the `Cão` class using TDD.*
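# A minimal sketch of such unit tests; the inline `Cão` here is a trimmed copy of the class above so the cell is self-contained:

```python
import unittest

class Cão:
    def __init__(self, nome, truques=None):
        self.nome = nome
        self.truques = list(truques) if truques else []

    def ensina_truque(self, truque):
        if truque not in self.truques:
            self.truques.append(truque)

class TestCão(unittest.TestCase):
    def test_starts_without_tricks(self):
        self.assertEqual(Cão('Rex').truques, [])

    def test_ensina_truque(self):
        c = Cão('Fido', truques=['Pegar'])
        c.ensina_truque('Rolar')
        c.ensina_truque('Pegar')          # duplicates are ignored
        self.assertEqual(c.truques, ['Pegar', 'Rolar'])

result = unittest.main(argv=[''], exit=False).result   # run inside a notebook cell
```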
# File: 02-python-oo/aula-01/Aula 01.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/johndaguio/OOP---1-1/blob/main/OOP_Concepts_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="8-Gm7NZAMXUZ"
# Classes with Multiple Objects
# + colab={"base_uri": "https://localhost:8080/"} id="WmAV96w3Mb6J" outputId="3ee7df57-41fa-4150-97d2-2111bc91273d"
class Birds:
def __init__(self,birds_name):
self.birds_name = birds_name
def flying_birds(self):
print(f"{self.birds_name} flies above the sky")
def non_flying_birds(self):
print(f"{self.birds_name} is the national bird of Australia")
vulture = Birds("Griffon Vulture")
crane = Birds("Common Crane")
emu = Birds("Emu")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
# + [markdown] id="ZU010xhNPkmz"
# Encapsulation (mangling with double underscore)
# + colab={"base_uri": "https://localhost:8080/"} id="Df2hAaOAPm6N" outputId="d7f4bd0a-2955-4507-8311-310e4a4d078d"
class foo:
def __init__(self,a,b):
self.__a = a
self.__b = b
def add(self):
return self.__a+ self.__b
object_foo = foo(3,4)
object_foo.add()
object_foo.__a = 6
object_foo.__b = 7
object_foo.add()
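# The assignments above do not change the result of `add()` because of name mangling: inside the class, `self.__a` is actually stored as `_foo__a`. A quick self-contained illustration:

```python
class foo:
    def __init__(self, a, b):
        self.__a = a
        self.__b = b

    def add(self):
        return self.__a + self.__b

obj = foo(3, 4)
obj.__a = 6                      # creates a NEW attribute; does not touch _foo__a
assert obj.add() == 7            # add() still reads the mangled _foo__a
obj._foo__a = 6                  # the mangled name is what the class actually uses
assert obj.add() == 10
```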
# + colab={"base_uri": "https://localhost:8080/"} id="foiLUU8FSXXR" outputId="67ed835a-1c10-47c0-f217-04d198abae11"
class Counter:
def __init__(self):
self.current = 0
    def increment(self):
        self.current += 1
    def value(self):
        return self.current
def reset(self):
self.current = 0
number = Counter()
number.increment()
print(number.value())
# + colab={"base_uri": "https://localhost:8080/"} outputId="e7d849f2-743a-4710-c81e-bd2fca869629" id="OMtBM9JITLKn"
class Counter:
def __init__(self):
self.__current = 0
def increment(self):
self.__current+=1
def value(self):
return self.__current
def reset(self):
self.__current = 0
number = Counter()
number.__current = 1
number.increment()
number.increment()
number.increment()
print(number.value())
# + [markdown] id="GVTF5Ba0VAol"
# Inheritance
# + colab={"base_uri": "https://localhost:8080/"} id="t3Z8J6VgVC7J" outputId="81e9548d-9baf-4af1-dd26-06627ba192e9"
class Person:
def __init__(self,firstname,surname):
self.firstname = firstname
self.surname = surname
def fullname(self):
print(self.firstname,self.surname)
person = Person("John", "Daguio")
person.fullname()
class Teacher(Person):
pass
person2 = Teacher("Maam","Maria")
person2.fullname()
class Student(Person):
pass
person3 = Student("Lance","Alcala")
person3.fullname()
# + [markdown] id="655jWkiIXczh"
# Polymorphism
# + colab={"base_uri": "https://localhost:8080/"} id="rfE6yJq5Xe-i" outputId="3ec8c2f4-cac0-43b7-bf99-f31516792709"
class RegularPolygon:
def __init__(self,side):
self.side = side
class Square(RegularPolygon):
def area(self):
return self.side * self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side * self.side * 0.433
obj1 = Square(4)
print(obj1.area())
obj2 = EquilateralTriangle(3)
print(obj2.area())
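# Polymorphism lets the calling code ignore which concrete class it holds; a small self-contained sketch of the same shapes, using the exact √3/4 factor that 0.433 approximates:

```python
import math

class RegularPolygon:
    def __init__(self, side):
        self.side = side

class Square(RegularPolygon):
    def area(self):
        return self.side * self.side

class EquilateralTriangle(RegularPolygon):
    def area(self):
        return self.side * self.side * math.sqrt(3) / 4  # exact form of 0.433

# the loop calls .area() without caring about the concrete type
shapes = [Square(4), EquilateralTriangle(3)]
areas = [shape.area() for shape in shapes]
assert areas[0] == 16
```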
# + [markdown] id="HYWIaEYEGe52"
# Application 1
#
# 1. Create a Python program that displays the names of three students (Student 1, Student 2, and Student 3) and their term grades.
# 2. Create a class named Person with attributes - std1, std2, std3, pre, mid, fin
# 3. Compute the average of each term grade using the Grade() method
# 4. Information about students' grades must be hidden from others
# + colab={"base_uri": "https://localhost:8080/"} id="pA9usy43HUU-" outputId="87671064-e6f3-4c00-cfb4-91d22c83dfa4"
class Person:
def __init__(self,std1, pre):
self.__std1 = std1
self.__pre = pre
def student_name(self):
print("The name of student =",f"{self.__std1}")
def Grade(self):
print("Grade =",f"{self.__pre*0.7 + self.__pre*0.3}")
person1 = Person("Student 1", 70) #the grade 70 represents the total score for both activities and exam
person1.student_name()
person1.Grade()
class Mid(Person):
pass
person12 = Mid("Student 1", 90) #the grade 90 represents the total score for both activities and exam
person12.student_name()
person12.Grade()
class Fin(Person):
pass
person13 = Fin("Student 1", 80) #the grade 80 represents the total score for both activities and exam
person13.student_name()
person13.Grade()
class Student2(Person):
pass
person2 = Student2("Student 2", 76) #the grade 76 represents the total score for both activities and exam
person2.student_name()
person2.Grade()
class Mid2(Person):
pass
person22 = Mid2("Student 2", 85) #the grade 85 represents the total score for both activities and exam
person22.student_name()
person22.Grade()
class Fin2(Person):
pass
person23 = Fin2("Student 2", 94) #the grade 94 represents the total score for both activities and exam
person23.student_name()
person23.Grade()
class Student3(Person):
pass
person3 = Student3("Student 3", 87) #the grade 87 represents the total score for both activities and exam
person3.student_name()
person3.Grade()
class Mid3(Person):
pass
person32 = Mid3("Student 3", 79) #the grade 79 represents the total score for both activities and exam
person32.student_name()
person32.Grade()
class Fin3(Person):
pass
person33 = Fin3("Student 3", 96) #the grade 96 represents the total score for both activities and exam
person33.student_name()
person33.Grade()
# File: OOP_Concepts_2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import time
import math
label = 'cz' # data label
save_file = 'data/faceimg/'
length = 0
gap_time = 1
video_type = './data/facevideo/3.mp4' # video source: 0 uses the default camera, a path plays that video file
recognize_type = 0 # classification mode, 0: KNN on CNN features, 1: KNN directly without CNN, 2: classify directly with lightCnn
dis_type = 'cosine' # distance function
kernel_type = 0 # kernel function, 0: linear kernel, 1: RBF kernel
# Capture webcam image
import numpy as np
import cv2
from light_cnn_test import shift_img_with_eyes
from read_face_data import read_face_data
from knn_10_20 import get_Kernels, KNN  # get_d2
from hyperparams import Hyperparams
hp = Hyperparams()
# Read the list of names
name = ['error']
name.extend(read_face_data('data/faceimg/', 0))
name.append('error')
print(name)
# +
# Get the lightCnn net
from test_light_cnn_in_numpy import get_feature, get_net
net = get_net("data/lightCNN_pretrain0.npy")
print("Finish")
# -
# Build the training set for KNN
tem_trains,labels = read_face_data('data/faceimg/',2)
trains = []
for num, train in enumerate(tem_trains):
tem = train[np.newaxis, :, :, :]
tem = get_feature(net, tem).reshape([1,-1])
trains.append(tem)
trains = np.concatenate(trains, axis=0)
print(trains.shape)
print(labels)
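# The `KNN` helper comes from `knn_10_20` and is not shown in this notebook; with `dis_type = 'cosine'`, a minimal 1-nearest-neighbour classifier over row-vector features might look like the sketch below (an assumption-based illustration, not the project's actual implementation):

```python
import numpy as np

def cosine_knn(trains, labels, query, k=1):
    # cosine distance = 1 - cosine similarity between the query and each row
    t = trains / np.linalg.norm(trains, axis=1, keepdims=True)
    q = query.ravel() / np.linalg.norm(query)
    dists = 1.0 - t.dot(q)
    nearest = np.argsort(dists)[:k]
    # majority vote among the k nearest rows
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

trains = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = [0, 1, 1]
assert cosine_knn(trains, labels, np.array([0.9, 0.1])) == 0
```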
face_cascade = cv2.CascadeClassifier(
'/home/xilinx/jupyter_notebooks/base/video/data/'
'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(
'/home/xilinx/jupyter_notebooks/base/video/data/'
'haarcascade_eye.xml')
aa = {'asdsa':'ahello'}
np.save("data//tttt.npy",aa)
# Hardware initialization
from pynq import Overlay
overlay = Overlay('pwm_1.bit')
from motor import *
from steer import *
motor = Motor(overlay)
steer = Steer(overlay)
motor.set_speed(0, 0)
#steer.setangle(0,0)
# Camera initialization
from pynq.lib.video import *
# camera (input) configuration
frame_in_w = 640
frame_in_h = 480
# +
# initialize camera from OpenCV
import cv2
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("Capture device is open: " + str(videoIn.isOpened()))
print(cv2.CAP_PROP_FRAME_WIDTH)
# -
steer.set_angle(0, 0)
# +
steer.set_angle(0, 0)
import math
import time
TURN_LEFT = 0
TURN_RIGHT = 1
length = 0
cheak_times = 0
steer.set_angle(0, 0)
steer_flag = 0
if_have_face = False
patrol_w = 5
patrol_h = 8
runspeed = [0,0]
no_faces_times = 0 # counts consecutive loops in which the target face has not appeared
while(True) :
while videoIn.isOpened() :
if_have_face = False
        ret, Vshow = videoIn.read() # read repeatedly so we always process the latest frame
np_frame = Vshow
cheak_times += 1
gray = cv2.cvtColor(np_frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
print(faces)
        if len(faces) == 0 and no_faces_times > 20:
save_file_name = save_file + 'pre_result/' + str(cheak_times) + "_000" + ".jpg"
cv2.imwrite(save_file_name, Vshow)
print("patrol")
if steer_flag == TURN_LEFT :
if steer.horAngle > -75:
steer.adj_angle(-patrol_w,0)
else:
if steer.verAngle - patrol_h*2 < -85:
steer.set_angle(-80,0)
else:
steer.adj_angle(0,-patrol_h*2)
steer_flag = TURN_RIGHT
elif steer_flag == TURN_RIGHT :
if steer.horAngle < 75:
steer.adj_angle(patrol_w,0)
else:
if steer.verAngle + patrol_h*2 > 45:
steer.set_angle(86,0)
else:
steer.adj_angle(0,patrol_h)
steer_flag = TURN_LEFT
time.sleep(0.800)
print("v:%f h:%f"%(steer.verAngle, steer.horAngle))
print(steer_flag)
for (x,y,w,h) in faces:
steer_flag = 0
if_have_face = True
#cv2.rectangle(np_frame,(x,y),(x+w,y+h),(255,0,0),2)
#roi_gray = gray[y:y+h, x:x+w]
#roi_color = np_frame[y:y+h, x:x+w]
#eyes = eye_cascade.detectMultiScale(roi_gray)
#for (ex,ey,ew,eh) in eyes:
#cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
face = Vshow[y:y+h,x:x+w]
face = shift_img_with_eyes(face)
# face = cv2.cvtColor(face,cv2.COLOR_BGR2GRAY) # [3]
# face = np.mean(face, 2)
# now_face = face.reshape((-1,now_face.shape[0]*now_face.shape[1]))
# now_face = np.transpose(face,(2,0,1))
# now_face = now_face.reshape((-1,1,now_face.shape[1]*now_face.shape[2]))
# now_face = now_face.astype(np.float32)
now_face = face
print("first",str(now_face.shape))
now_face = now_face[np.newaxis, :, :, :]
now_face = np.transpose(now_face, (0, 3, 1, 2))
now_face = get_feature(net, now_face).reshape([1,-1])
print("second",str(now_face.shape))
print(now_face.shape)
predicted = KNN(trains,labels,now_face)
print("predicted:" + str(predicted) + name[predicted])
text = name[predicted]
cv2.putText(Vshow, text, (x, y),
cv2.FONT_HERSHEY_DUPLEX, 0.5, (255, 255, 255))
cv2.rectangle(Vshow, (x,y),(x+w,y+h),(255,0,0),2)
save_file_name = save_file + 'pre_result/' + str(length) + "_" + str(predicted) + ".jpg"
length += 1
print(save_file_name)
cv2.imwrite(save_file_name, Vshow)
if predicted == 1 :
faceSizePct = w/frame_in_w
widthOffset = frame_in_w/2 - (x + w/2)
heightOffset = frame_in_w/2 - (y + w/2)
                deltaThetaX = math.degrees(math.atan(2 * widthOffset * math.tan(0.675) / L)) # L is a lens constant defined elsewhere
                deltaThetaY = math.degrees(math.atan(2 * heightOffset * math.tan(0.675) / L))
# deltaThetaX = widthOffset / frame_in_w * 68
# deltaThetaY = - heightOffset / frame_in_w * 68
if runspeed[0] == 0 and faceSizePct - 0.6 < 0 :
baseSpeed = (abs(faceSizePct-0.6) ** 2) * 250
runspeed[0] = runspeed[1] = baseSpeed;
print(baseSpeed)
if widthOffset > 0:
runspeed[1] += min(deltaThetaX / 9 ,10)
else:
runspeed[0] += min(deltaThetaX / 9,10)
#runspeed[0] = min(baseSpeed - (frame_in_w/2 - (x + w/2)) * 0.2,100)
#runspeed[1] = min(baseSpeed + (frame_in_w/2 - (x + w/2)) * 0.2,100)
print(runspeed)
#elif widthOffset <= 0.08:
#runspeed[0] = 0
#runspeed[1] = 0
if abs(deltaThetaX) < 1.5:
x_angle = 0
if abs(deltaThetaY) < 1.5:
y_angle = 0
steer.adj_angle(deltaThetaX, deltaThetaY)
print("adj angle: %d,%d speed: %d,%d"%(deltaThetaX, deltaThetaY, runspeed[0], runspeed[1]))
no_faces_times = 0
else:
no_faces_times += 1
            if no_faces_times == 3: # stop if the target has been missed for three consecutive loops (frames)
runspeed[0] = 0
runspeed[1] = 0
print("runspeed: " + str(runspeed))
motor.set_speed(runspeed[0],runspeed[1])
if cheak_times % 15 == 0:
save_file_name = save_file + 'cheak/' + str(cheak_times) + ".jpg"
print(save_file_name)
cv2.imwrite(save_file_name, Vshow)
'''
if if_have_face:
for i in range(3):
ret, Vshow = videoIn.read()
'''
#if length > 10:
# break
videoIn.release()
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("Capture device is open: " + str(videoIn.isOpened()))
# -
videoIn.release()
# File: knn/test_knn - 1020.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
# +
# compute numerically integrals of general functions, using Interpolatory quadrature rules
# -
# $$\int_a^b f(x)\, dx \sim \sum_{i=0}^{n-1} f(q_i) w_i = \int_a^b I^{n-1}(f)\, dx = \int_a^b\sum_{i=0}^{n-1}f(q_i)\, l_i(x)\, dx = \sum_{i=0}^{n-1}f(q_i) \int_a^b l_i(x)\, dx$$
#
# $$
# w_i := \int_a^b l_i(x) dx
# $$
# where $l_i$ are the Lagrange basis functions associated to $\{q_i\}_{i=0}^{n-1}$.
#
# The definition of $l_i$:
#
# $$
# l_i := \prod_{i\neq j, j=0}^{n-1} \frac{(x-q_j)}{(q_i-q_j)}
# $$
N = 101
x = linspace(0,1, N)
def lagrange(x, q, i):
assert i < len(q)
li = ones_like(x)
for j in range(len(q)):
if i != j:
li = li*(x-q[j])/(q[i]-q[j])
return li
# +
Nq = 5
q = linspace(0,1, Nq)
[plot(x, lagrange(x, q, i)) for i in range(5)]
# -
# With the monomial basis $\{v_i\}_{i=0}^{n-1} := \{x^i\}_{i=0}^{n-1}$, we can rewrite the lagrange basis as
#
# $$
# l_i = V^{ij} v_j
# $$
#
# where $V_{ij} = v_j(q_i)$, and $V^{ij}$ is the $ij$ index of the inverse matrix of $V_{ij}$
def monomial(x, i):
return x**i
V = array([monomial(q,i) for i in range(Nq)]).T
Vinv = linalg.inv(V)
# $$
# M_{\alpha i} := v_i(x_\alpha) = pow(x_\alpha,i)
# $$
#
# $$
# L_{\alpha j}:= M_{\alpha i} V^{ij} := l_j(x_\alpha)
# $$
M = array([monomial(x,i) for i in range(Nq)]).T
# +
L = M.dot(Vinv)
plot(x, L)
# -
# express basis with index 1 in power basis
plot(x, M.dot(Vinv[:,1]))
Vinv[:,1]
# If we have the monomial coefficients, we can write the integral of the monomials explicitly:
#
# $$
# \int_a^b x^i \, dx = \left.\frac{x^{i+1}}{i+1} \right|^{b}_a
# $$
#
# Lets define the vector
# $$
# \{W_i\}_{i=0}^{n-1} := \left\{\int_a^b x^i\right\}_{i=0}^{n-1}
# $$
# then the integral of a polynomial can be written as:
# $$
# \int_a^b \sum_i p_i x^{i}\, dx = \sum_i p_i W_i
# $$
def integral_of_monomials(a,b, i):
assert i >= 0
return (b**(i+1)-a**(i+1))/(i+1)
W = array([integral_of_monomials(0,1,i) for i in range(Nq)])
# Now compute w_i, which is the integral of l_i:
w = W.dot(Vinv)
w
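# Since an interpolatory rule with n nodes is exact on polynomials of degree ≤ n−1, the weights must sum to b−a (exact integration of the constant 1). A standalone sanity check on [0, 1], mirroring the construction above:

```python
import numpy as np

Nq = 5
q = np.linspace(0, 1, Nq)
V = np.array([q**i for i in range(Nq)]).T          # Vandermonde matrix, V[i, j] = q_i**j
W = np.array([1.0 / (i + 1) for i in range(Nq)])   # exact integrals of x**i on [0, 1]
w = W.dot(np.linalg.inv(V))                        # interpolatory quadrature weights
assert abs(w.sum() - 1.0) < 1e-12                  # integral of the constant 1
assert abs(w.dot(q) - 0.5) < 1e-12                 # integral of x
```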
def weights_of_quadrature_rule(q,a,b):
Nq = len(q)
V = array([q**i for i in range(Nq)]).T
Vinv = linalg.inv(V)
W = array([integral_of_monomials(a,b,i) for i in range(Nq)])
w = W.dot(Vinv)
return w
q = linspace(-1,1, Nq)
w = weights_of_quadrature_rule(q, -1, 1)
# Let's check basic identities:
#
# $$
# \int_a^b p = \sum_{i=0}^{n-1} p(q_i) w_i \qquad \forall p \in P^{k}(a,b), \quad k
# \le n-1
# $$
# +
def error_on_monomial(q,w,a,b,i):
exact = integral_of_monomials(a,b,i)
computed = (q**i).dot(w)
return abs(exact-computed)
error = array([error_on_monomial(q,w,-1,1,i) for i in range(2*Nq)])
# -
semilogy(error)
# Gauss quadrature rules are the "optimal" ones: they are exact up to $k=2n-1$
qg, wg = numpy.polynomial.legendre.leggauss(Nq)
errorg = array([error_on_monomial(qg,wg,-1,1,i) for i in range(2*Nq+2)])
semilogy(error)
semilogy(errorg)
# +
a = 0
b = 1
Nq = 5
# Rescale gauss quadrature between a and b
qg, wg = numpy.polynomial.legendre.leggauss(Nq)
qg = (qg+1)*(b-a)/2+a
wg = wg/2*(b-a)
q = linspace(a,b, Nq)
w = weights_of_quadrature_rule(q, a, b)
# +
def myfun(x):
return sin(pi*x)
def myintegral():
return 2/pi
# -
def difference_between_formulas(N,a,b,myfun,myintegral):
assert N >=1
errors = zeros((N-1, 4))
for Nq in range(1,N):
# Rescale gauss quadrature between a and b
qg, wg = numpy.polynomial.legendre.leggauss(Nq)
qg = (qg+1)*(b-a)/2+a
wg = wg/2*(b-a)
# Chebyshev points
qc, wc = numpy.polynomial.chebyshev.chebgauss(Nq)
qc = (qc+1)*(b-a)/2+a
wc = weights_of_quadrature_rule(qc, a, b)
# Equispaced points
q = linspace(a,b, Nq)
w = weights_of_quadrature_rule(q, a, b)
# "Averaged" mid point, or iterated trapezoidal
wa = ones_like(q)/(len(q))*(b-a)
index = Nq-1
errors[index, 0] = abs(myfun(q).dot(w)-myintegral())
errors[index, 1] = abs(myfun(qc).dot(wc)-myintegral())
errors[index, 2] = abs(myfun(qg).dot(wg)-myintegral())
errors[index, 3] = abs(myfun(q).dot(wa)-myintegral())
return errors
errors = difference_between_formulas(30, 0, 1, myfun, myintegral)
semilogy(errors)
# +
def absfun(x):
return 2*abs(x-.5)
def absintegral():
return 0.5
# -
plot(x, absfun(x))
errors = difference_between_formulas(30, 0, 1, absfun, absintegral)
semilogy(errors)
# File: slides/Lecture 09 - LH - LAB - Numerical Integration.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: maatpy
# language: python
# name: maatpy
# ---
# **Step 1**: Load dataset
from sklearn.model_selection import StratifiedShuffleSplit
from maatpy.dataset import Dataset
seed = 0
yeast = Dataset()
yeast.load_from_csv('datasets/yeast_data.csv', output_column='Class', ignore='Sequence Name')
X = yeast.data
y = yeast.target
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=seed)
sss.get_n_splits(X, y)
for train_index, test_index in sss.split(X, y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
yeast
# **Step 2**: Simulating dataset
from maatpy.dataset import simulate_dataset
from collections import Counter
sim_data = Dataset()
sim_data = simulate_dataset()
Xs = sim_data.data
Ys = sim_data.target
Counter(Ys)
# **Step 3**: Make imbalance
sim_data.make_imbalance(ratio=[0.8, 0.2], random_state=seed)
Xi = sim_data.data
Yi = sim_data.target
Counter(Yi)
# **Step 4**: Samplers, Pipeline and Plots
# Class #1 has only 5 samples, which, once split into training and test sets, are not enough to run the k-nearest-neighbours step of SMOTE even with k_neighbours dropped to 3.
# To avoid this error, the subsequent evaluations use the following modified dataset, which removes class 1.
X_mod= X[y!=1]
y_mod = y[y!=1]
print(X_mod.shape)
print(Counter(y_mod))
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=seed)
sss.get_n_splits(X_mod, y_mod)
for train_index, test_index in sss.split(X_mod, y_mod):
Xm_train, Xm_test = X_mod[train_index], X_mod[test_index]
ym_train, ym_test = y_mod[train_index], y_mod[test_index]
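# `StratifiedShuffleSplit` keeps the class proportions of `y` in both splits; a tiny self-contained illustration on synthetic data (not the yeast set):

```python
import numpy as np
from collections import Counter
from sklearn.model_selection import StratifiedShuffleSplit

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 7 + [1] * 3)       # 70/30 class imbalance
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(sss.split(X, y))
# both splits preserve (approximately) the 70/30 ratio
print(Counter(y[train_idx]), Counter(y[test_idx]))
```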
# *plot_resampling*
# +
from maatpy.plots import plot_resampling
from maatpy.samplers import SMOTEENN, SMOTETomek
import matplotlib.pyplot as plt
from sklearn.svm import SVC
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
ax_arr = (ax1, ax2)
for ax, sampler in zip(ax_arr, [SMOTEENN(random_state=seed), SMOTETomek(random_state=seed)]):
plot_resampling(Xm_train, ym_train, sampler, ax)
ax.set_title('{}'.format(
sampler.__class__.__name__), fontsize=16)
plt.plot()
# -
# *plot_decision_function*
# +
from maatpy.pipeline import make_pipeline
from maatpy.plots import plot_decision_function
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
ax_arr = (ax1, ax2)
#plot decision function only works for 2-features datasets
for ax, sampler in zip(ax_arr, [SMOTEENN(random_state=seed), SMOTETomek(random_state=seed)]):
clf = make_pipeline(sampler, SVC(kernel="linear", random_state=seed))
clf.fit(Xi, Yi)
plot_decision_function(Xi, Yi, clf, ax)
ax.set_title('{}'.format(
sampler.__class__.__name__), fontsize=16)
plt.plot()
# -
# *plot_confusion_matrix*
# +
from maatpy.plots import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
sampler = SMOTEENN(random_state=seed)
clf = make_pipeline(sampler, SVC(kernel="linear", random_state=seed))
clf.fit(Xm_train, ym_train)
y_pred = clf.predict(Xm_test)
conf_matrix = confusion_matrix(ym_test,y_pred)
plot_confusion_matrix(conf_matrix, classes=Counter(ym_test).keys(), title='{}'.format(
sampler.__class__.__name__))
plt.plot()
# -
sampler = SMOTETomek(random_state=seed)
clf = make_pipeline(sampler, SVC(kernel="linear", random_state=seed))
clf.fit(Xm_train, ym_train)
y_pred = clf.predict(Xm_test)
conf_matrix = confusion_matrix(ym_test,y_pred)
plot_confusion_matrix(conf_matrix, classes=Counter(ym_test).keys(), title='{}'.format(
sampler.__class__.__name__))
plt.plot()
# **Step 5**: Classifiers
# +
from maatpy.classifiers import AdaCost
from sklearn.metrics import cohen_kappa_score
import pandas as pd
results = {}
algorithm = ['adacost', 'adac1', 'adac2', 'adac3']
for alg in algorithm:
clf = AdaCost(algorithm=alg, random_state=seed)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
kappa = cohen_kappa_score(y_test, y_pred)
results[alg] = kappa
df1 = pd.DataFrame.from_dict(results, orient='index')
df1.columns = ['yeast']
df1
# -
from maatpy.classifiers import (BalancedRandomForestClassifier,
SMOTEBoost,
SMOTEBagging)
results = {}
for clf in [BalancedRandomForestClassifier(random_state=seed),
SMOTEBoost(random_state=seed),
SMOTEBagging(random_state=seed)]:
clf.fit(Xm_train, ym_train)
y_pred = clf.predict(Xm_test)
kappa = cohen_kappa_score(ym_test, y_pred)
results[clf.__class__.__name__] = kappa
df1 = pd.DataFrame.from_dict(results, orient='index')
df1.columns = ['yeast']
df1
# File: development/Final demonstration.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dipbanik/AIML-Training/blob/develop/Visualisation_Examples.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="hZtOuAplEOAY" colab_type="code" outputId="d31a6821-8965-4703-8426-e2c7c0711905" colab={}
print('hello world')
# + id="bAKsStkHEOAe" colab_type="code" colab={}
a = 2
# + id="PI7x-a05EOAi" colab_type="code" outputId="8640f864-a5f1-40d1-8cf2-96817390e090" colab={}
print(a)
# + [markdown] id="tEgT1pUbEOAn" colab_type="text"
# # Header 1
#
# ## Header 2
#
# This is a markdown cell.
#
# + id="JdjQOY7hEOAo" colab_type="code" colab={}
from matplotlib import pyplot as plt
# + id="D6hdeYqbEOAs" colab_type="code" outputId="fa302ec7-935d-4961-8f9a-ed06104ad8ae" colab={}
plt.plot([1,2,3],[1,4,9])
plt.plot([1,2,3],[10,20,30])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Title')
plt.legend(['Data Set 1','Data Set 2'])
plt.savefig('exported_image')
plt.show()
# + id="2YydaQDxEOAx" colab_type="code" colab={}
from matplotlib import pyplot as plt
import pandas as pd
data = {'year' : [2008,2012,2016],
'attendees' : [112,321,729],
'avg age' : [24, 43, 31]}
df = pd.DataFrame(data)
# + id="_FqzCeYtEOA2" colab_type="code" outputId="58ff2ca1-885b-4ab8-e8e0-236eeae3480c" colab={}
df
# + id="gHyC4O_DEOA9" colab_type="code" outputId="d691ad56-3207-4917-8e28-e3df2cd11760" colab={}
df['year']
# + id="Fy3Tu_V4EOBD" colab_type="code" outputId="476e8733-d022-4d69-def8-0dcee4a6e202" colab={}
type(df['year'])
# + id="H8dFGSceEOBK" colab_type="code" outputId="906c078e-96d0-4af8-da34-9cba13092439" colab={}
df['year'] <2013
# + id="as7W9RAqEOBQ" colab_type="code" colab={}
earlier_than_2013 = df['year'] <2013
# + id="DSCnxsAgEOBY" colab_type="code" outputId="2e623e93-ee3f-44fb-ee13-7f20b1642809" colab={}
df[earlier_than_2013]
# + id="XmhDL6tUEOBf" colab_type="code" outputId="f0c7e1f0-dfb1-45a6-9146-7ecac70356cf" colab={}
plt.plot(df['year'], df['attendees'])
plt.plot(df['year'], df['avg age'])
plt.legend(['attendees','avg age'], loc = 'best')
plt.show()
# + id="-H-onlecEOBv" colab_type="code" colab={}
from matplotlib import pyplot as plt
import pandas as pd
# + id="qCRTWY4BEOB0" colab_type="code" outputId="f9a7c8fe-75cd-4535-fc9e-c00ddb2e0911" colab={}
data = pd.read_csv('countries.csv')
data.head()
#data['country']
data.country
afg = data[data.country == 'Afghanistan']
plt.plot(afg.year, afg.gdpPerCapita)
plt.title("Afghanistan's GDP Per Capita")
plt.show()
# + id="-ihepwhNEOB5" colab_type="code" outputId="1fe6d76f-1ad0-4517-ba1e-2e4e9e94cd57" colab={}
type(data)
# + id="8ncxVKFJEOB-" colab_type="code" colab={}
import pandas as pd
from matplotlib import pyplot as plt
# + id="AxJhEEorEOCD" colab_type="code" outputId="4610902d-1256-41a7-9d65-a9b201bae280" colab={}
data = pd.read_csv("countries.csv")
data.head()
# + id="ULyZ2vDMEOCI" colab_type="code" outputId="1981b494-7673-4af2-beba-1d9804855294" colab={}
set(data.continent)
# + id="tYRhaUx_EOCO" colab_type="code" colab={}
data_2007 = data[data.year == 2007]
# + id="4g0f7FGgEOCV" colab_type="code" colab={}
asia_2007 = data_2007[data_2007.continent == 'Asia']
europe_2007 = data_2007[data_2007.continent == 'Europe']
# + id="e9iYS2hOEOCc" colab_type="code" outputId="3e18f58d-9a98-4964-f5ed-ae2a5a003ef9" colab={}
print(len(set(asia_2007.country)))
print(len(set(europe_2007.country)))
# + id="M6dqSeDjEOCk" colab_type="code" outputId="f0f17247-5aa1-476f-9e7c-b85039bf793d" colab={}
print('Mean GDP Per Capita in Asia:')
print(asia_2007.gdpPerCapita.mean())
print('Median GDP Per Capita in Asia:')
print(asia_2007.gdpPerCapita.median())
print('Mean GDP Per Capita in Europe:')
print(europe_2007.gdpPerCapita.mean())
print('Median GDP Per Capita in Europe:')
print(europe_2007.gdpPerCapita.median())
# + id="BbxnVyKMEOCo" colab_type="code" outputId="4f5a5e3a-82aa-441f-b534-5b1977e78746" colab={}
plt.subplot(2, 1, 1)
plt.title('Distribution of GDP per Capita')
plt.hist(asia_2007.gdpPerCapita, 20, range =(0,50000), edgecolor='black')
plt.ylabel('Asia')
plt.subplot(2, 1, 2)
plt.hist(europe_2007.gdpPerCapita, 20, range =(0,50000), edgecolor='black')
plt.ylabel('Europe')
plt.show()
# + id="wTX6A3xXEOCs" colab_type="code" outputId="b55ae29f-8c17-4183-934b-7b006a1e34d7" colab={}
data_1997 = data[data.year == 1997]
europe_1997 = data_1997[data_1997.continent == 'Europe' ]
europe_1997.head()
# + id="FgceKPtPEOCx" colab_type="code" outputId="637c1444-a903-41be-9bfa-d924b7a59651" colab={}
americas_1997 = data_1997[data_1997.continent == 'Americas']
americas_1997.head()
# + id="2UlXqXXMEOC3" colab_type="code" outputId="3e847ea4-3613-4c00-889a-eae2c061525d" colab={}
bins = 20
plt.title('Life Expectancy in the Americas and Europe')
plt.subplot(2,1,1)
plt.ylabel('Americas')
plt.hist(americas_1997.lifeExpectancy, bins, range=(55,85), edgecolor='black')
plt.subplot(2,1,2)
plt.ylabel('Europe')
plt.hist(europe_1997.lifeExpectancy, bins, range=(55,85), edgecolor='black')
plt.show()
# + id="fe_lM2h3EOC8" colab_type="code" outputId="3a0c65c6-0e60-4dfe-81b4-c0463deb2ae9" colab={}
import pandas as pd
from matplotlib import pyplot as plt
data = pd.read_csv('countries.csv')
data.head()
# + id="UNM1cdLGEODM" colab_type="code" outputId="2ff2e3cc-3333-424f-881f-c8b3167801b4" colab={}
us = data[data.country == 'United States']
us.head()
# + id="ATMZrwolEODR" colab_type="code" outputId="940e1754-0f63-4e0a-b0a4-0ff12d5f6840" colab={}
plt.plot(us.year, us.gdpPerCapita)
plt.xlabel('year')
plt.ylabel('GDP Per Capita')
plt.show()
# + id="iQfwV0dQEODX" colab_type="code" outputId="ee60c5db-6708-43d0-e383-fff89f98537a" colab={}
china = data[data.country == 'China']
china.head()
# + id="pFdo_b57EODj" colab_type="code" outputId="89ed1a86-2ead-4566-cbad-a06ce2443e1b" colab={}
plt.plot(us.year, us.gdpPerCapita)
plt.plot(china.year, china.gdpPerCapita)
plt.xlabel('year')
plt.ylabel('GDP Per Capita')
plt.legend(['United States', 'China'], loc='best')
plt.show()
# + id="3Ms5xoCMEODo" colab_type="code" outputId="dcf1ccec-1a2a-44a6-f6f8-64cb733f88d8" colab={}
us_growth = us.gdpPerCapita / us.gdpPerCapita.iloc[0] * 100
china_growth = china.gdpPerCapita / china.gdpPerCapita.iloc[0] * 100
plt.plot(us.year, us_growth)
plt.plot(china.year, china_growth)
plt.show()
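# The `gdp / gdp.iloc[0] * 100` normalization used above indexes every year to the first year (which becomes 100); a standalone sketch with made-up numbers:

```python
import pandas as pd

gdp = pd.Series([1000.0, 1500.0, 3000.0], index=[1990, 2000, 2010])
growth = gdp / gdp.iloc[0] * 100   # first year becomes 100, later years are relative to it
assert list(growth) == [100.0, 150.0, 300.0]
```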
# + id="pocgsJWuEODx" colab_type="code" outputId="06e5dbc7-6724-4ecb-ae95-d64bb320eac6" colab={}
import pandas as pd
from matplotlib import pyplot as plt
data = pd.read_csv('countries.csv')
china = data[data.country == 'China']
us = data[data.country == 'United States']
print(len(china.year))
print(len(us.year))
us.head()
# + id="0PeXOPrSEOD3" colab_type="code" outputId="459852bb-4f49-4465-bcfc-8a9bf5d07adc" colab={}
plt.plot(us.year, us.population / 10**6)
plt.plot(china.year, china.population / 10**6)
plt.legend(['United States', 'China'], loc = 'best')
plt.title('Population Growth in US and China (in millions) - Absolute terms')
plt.ylabel('Population')
plt.xlabel('Year')
plt.show()
# + id="KFjrQhpGEOEL" colab_type="code" outputId="9c203b60-48bd-4acd-fa38-5efe2a5faf4f" colab={}
us_growth = us.population / us.population.iloc[0] * 100
china_growth = china.population / china.population.iloc[0] * 100
plt.plot(us.year, us_growth)
plt.plot(china.year, china_growth)
plt.legend(['United States','China'], loc = 'best')
plt.title('Relative Population Growth in US and China')
plt.ylabel('Population Growth')
plt.xlabel('Year')
plt.show()
# + id="p9dnNP_HEOES" colab_type="code" outputId="49f3ca1c-cf37-4e21-95d9-42a6450cbc9f" colab={}
#Scatter plot
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
data = pd.read_csv('countries.csv')
data.head()
# + id="P7_oz0fOEOEZ" colab_type="code" outputId="de2a4bfd-0c96-4c86-e585-1b9314345845" colab={}
data_2007 = data[data.year == 2007]
data_2007.head()
# + id="tCSpmi9YEOEk" colab_type="code" outputId="a3dac177-ee95-44e4-f5c3-9544f67b69d5" colab={}
plt.scatter(data_2007.gdpPerCapita, data_2007.lifeExpectancy, 7)
plt.title('GDP per Capita and Life Expectancy in 2007')
plt.xlabel('GDP per Capita ($)')
plt.ylabel('Life Expectancy')
plt.show()
# + id="AW1MScfYEOEu" colab_type="code" outputId="887a0feb-4b12-4753-8b37-3b0651685fcd" colab={}
data_2007.gdpPerCapita.corr(data_2007.lifeExpectancy)
# + id="MW_dpPPlEOE2" colab_type="code" outputId="23ed343f-7a27-46e5-90e9-e82cd995b7ef" colab={}
np.log10(data_2007.gdpPerCapita).corr(data_2007.lifeExpectancy)
# + id="EpUT78RmEOE5" colab_type="code" outputId="463ed30c-72e7-487d-9f74-b171209d0365" colab={}
data_2007.lifeExpectancy.corr(np.log10(data_2007.gdpPerCapita))
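# The jump in correlation after applying `np.log10` is what we expect whenever y is roughly linear in log(x); a synthetic check (illustrative data, not the countries set):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = pd.Series(rng.uniform(1, 100, 500))
y = 10 * np.log10(x) + rng.normal(0, 1, 500)   # y is linear in log10(x) plus noise
raw = x.corr(y)
logged = np.log10(x).corr(y)
assert logged > raw   # the log transform linearizes the relationship
```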
# + id="9-yPKURNEOE9" colab_type="code" colab={}
years_sorted = sorted(set(data.year))
# + id="nEYRNF74EOE_" colab_type="code" outputId="c9ec3352-d652-4226-fb5c-179519b00c10" colab={}
for given_year in years_sorted:
data_year = data[data.year == given_year]
plt.scatter(data_year.gdpPerCapita, data_year.lifeExpectancy, 6)
plt.title(given_year)
plt.xlabel('GDP per Capita')
plt.ylabel('Life Expectancy')
plt.xlim(0,60000)
plt.ylim(25,85)
plt.show()
# + id="2c6JJ09YEOFC" colab_type="code" outputId="72aaf28b-5a19-444b-9e19-d3a55088b6d8" colab={}
data[data.gdpPerCapita > 60000]
# + id="OWM9bhnwEOFG" colab_type="code" outputId="1902884d-ff54-4b93-f18b-cfc683696931" colab={}
data['gdp'] = data.gdpPerCapita * data.population
# + id="mKP6qR5mEOFL" colab_type="code" colab={}
log_gdp = np.log10(data.gdp)
# + id="ALgHOnP0EOFP" colab_type="code" outputId="07365743-5df9-4337-e36f-cd631483ec4d" colab={}
data_2007 = data[data.year == 2007]
data_2007.head()
# + id="JNWpLBWGEOFT" colab_type="code" colab={}
top10 = data_2007.sort_values('population',ascending = False).head(10)
# + id="G04x8HgxEOFd" colab_type="code" outputId="2e43b024-8991-4893-ae80-aff26ea0f2f3" colab={}
plt.clf()
x = range(10)
plt.bar(x, top10.population / 10**6, align = 'center')
plt.xticks(x, top10.country, rotation='vertical')
plt.title('10 most populous countries')
plt.ylabel('Population in millions')
plt.show()
# + id="_6_L8GsAEOFo" colab_type="code" outputId="64d11be2-adc0-4c8e-fac9-dab1b0375066" colab={}
gdp = top10.population * top10.gdpPerCapita / 10**9
plt.clf()
x = range(10)
plt.subplot(2,1,1)
plt.title('10 most populous countries')
plt.bar(x, top10.population / 10**6, align = 'center')
plt.xticks([],[])
plt.ylabel('Population in millions')
plt.subplot(2,1,2)
plt.bar(x, gdp, align = 'center')
plt.xticks(x, top10.country, rotation='vertical')
plt.ylabel('GDP')
plt.show()
# + id="pO6wE6mNEOFu" colab_type="code" outputId="f5b35e07-2945-46ac-b235-ffd310a767db" colab={}
import numpy as np # We're going to import np for np.arange().
#np.arange(10) is similar to range(10), and it allows us to shift
# each value in it by the bar width as you can see below.
x = np.arange(10)
# We need to create subplots in order to overlay two bar plots
# with proper axes on the left hand side and the right hand side.
fig, ax1 = plt.subplots()
width = 0.3 # This is the width of each bar in the bar plot.
plt.xticks(x, top10.country, rotation='vertical')
population = ax1.bar(x, top10.population / 10**6, width)
plt.ylabel('Population')
# ax1.twinx() gives us the same x-axis with the y-axis on the right.
ax2 = ax1.twinx()
gdp = ax2.bar(x + width, top10.gdpPerCapita * top10.population / 10**9,
width, color='orange')
plt.ylabel('GDP')
plt.legend([population, gdp],
['Population in Millions', 'GDP in Billions'])
figure = plt.gcf() # get current figure
plt.show()
# + id="sv2IpSIhEOFy" colab_type="code" outputId="1d52cca5-8fdc-4ec3-882a-74b66b67e83e" colab={}
data = pd.read_csv('obama.csv', parse_dates=['year_month'])
data.head()
# + id="NQUjIYGvEOF1" colab_type="code" outputId="73d81a28-93e8-4627-9881-abb0157fb4d7" colab={}
plt.plot(data.year_month, data.approve_percent, 'o', markersize=2, alpha = 0.3)
plt.show()
# + id="1TfA2To2EOF6" colab_type="code" colab={}
data_mean = data.groupby('year_month').mean()
data_median = data.groupby('year_month').median()
data25 = data.groupby('year_month').quantile(0.25)
data75 = data.groupby('year_month').quantile(0.75)
# + id="3EIR4SDVEOF9" colab_type="code" outputId="d62b33bc-16aa-44ef-b25d-b37f29ac078d" colab={}
plt.plot(data_mean.index, data_mean.approve_percent, 'red')
plt.plot(data_median.index, data_median.approve_percent, 'green')
plt.plot(data25.index, data25.approve_percent, 'blue')
plt.plot(data75.index, data75.approve_percent, 'orange')
plt.plot(data.year_month, data.approve_percent, 'o', markersize=2, alpha = 0.3)
plt.legend(['mean', 'median', '25th Percentile', '75th Percentile'])
plt.show()
# + id="_88cSLuSEOF_" colab_type="code" colab={}
|
Visualisation_Examples.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
boston = datasets.load_boston()  # note: load_boston was removed in scikit-learn 1.2
X = boston.data
y = boston.target
# remove noise
X = X[y < 50]
y = y[y < 50]
X.shape
from model_selection import train_test_split  # local, hand-rolled module (note the custom seed argument), not sklearn's
X_train, X_test, y_train, y_test = train_test_split(X, y, seed=666)
from LinearRegression import LinearRegression  # local implementation, not sklearn's
lin_reg = LinearRegression()
lin_reg.fit_normal(X_train, y_train)
lin_reg.coef_
lin_reg.intercept_
lin_reg.score(X_test, y_test)
|
ml/linearRegression/LinearRegression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
#     name: python3
# ---
# + id="7SDpjhWtMNUc" executionInfo={"status": "ok", "timestamp": 1603998900953, "user_tz": 240, "elapsed": 19058, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiDrvQNUz8aa7ZRcyK0-tgLj-JKq6T-Y_V39JmEzQ=s64", "userId": "01161724679039743723"}} outputId="6ce85126-5b18-456c-9ed7-f7663447f25f" colab={"base_uri": "https://localhost:8080/"}
from google.colab import drive
drive.mount('/content/drive')
# + id="FOjRm_kuMYVO" executionInfo={"status": "ok", "timestamp": 1603998945350, "user_tz": 240, "elapsed": 217, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiDrvQNUz8aa7ZRcyK0-tgLj-JKq6T-Y_V39JmEzQ=s64", "userId": "01161724679039743723"}} outputId="5ef9286e-3178-4be3-afdd-07847f3d6476" colab={"base_uri": "https://localhost:8080/"}
# #%cd '/content/drive/My Drive/Colab Notebooks/OSU/CS467_shared'
# %cd '/content/drive/My Drive'
# + id="pB0fDUa7Mt6g"
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from skimage import color
# + id="nbjJZuXwrhL1"
fpath = '/content/drive/My Drive/fma_large/146/'
fname = '146698.mp3'
# create mel scale spectrogram from input .mp3 file
y, sr = librosa.load(fpath + fname)
mel_spect = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=1024)
mel_spect = librosa.power_to_db(mel_spect, ref=np.max)
# normalize image between min and max
img = 255 * ((mel_spect - mel_spect.min()) / (mel_spect.max() - mel_spect.min()))
# convert pixel values to 8 bit ints
img = img.astype(np.uint8)
# flip and invert image
img = np.flip(img, axis=0)
img = 255-img
# + id="lRAA9REq1iY6" executionInfo={"status": "ok", "timestamp": 1603999628517, "user_tz": 240, "elapsed": 806, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiDrvQNUz8aa7ZRcyK0-tgLj-JKq6T-Y_V39JmEzQ=s64", "userId": "01161724679039743723"}} outputId="4ad1a664-2b1d-4e99-96d8-a482ae9b19d5" colab={"base_uri": "https://localhost:8080/", "height": 319}
fig = plt.figure(frameon=False)
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
# img = color.rgb2gray(img)
ax.imshow(img, aspect='auto', cmap='Greys')
fig.savefig('spec1.png')  # cmap belongs on imshow above, not on savefig
# + id="rjTn8htT4ez3" executionInfo={"status": "ok", "timestamp": 1602639917957, "user_tz": 300, "elapsed": 27410, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "06102582458453001776"}} outputId="a076a2d6-ec02-4f06-cb8e-66ed17058449" colab={"base_uri": "https://localhost:8080/", "height": 279}
image = tf.keras.preprocessing.image.load_img('./spec1.png')
image = image.resize((400, 250))
plt.imshow(image)
plt.show()
print(image.size)
|
src/Data/Mp3_to_Spectrogram.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Regression Example
#
# Let's walk through the steps of the official documentation example. Doing this will help your ability to read from the documentation, understand it, and then apply it to your own problems (the upcoming Consulting Project).
import findspark
findspark.init('/home/pushya/spark-2.1.0-bin-hadoop2.7')
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('lr_example').getOrCreate()
from pyspark.ml.regression import LinearRegression
# Load training data
training = spark.read.format("libsvm").load("sample_linear_regression_data.txt")
# Interesting! We haven't seen libsvm formats before. In fact, they aren't very popular when working with datasets in Python, but the Spark Documentation makes use of them a lot because of their formatting. Let's see what the training data looks like:
training.show()
# This is the format that Spark expects. Two columns with the names "label" and "features".
#
# The "label" column then needs to have the numerical label, either a regression numerical value, or a numerical value that matches to a classification grouping. Later on we will talk about unsupervised learning algorithms that by their nature do not use or require a label.
#
# The feature column has inside of it a vector of all the features that belong to that row. Usually what we end up doing is combining the various feature columns we have into a single 'features' column using the data transformations we've learned about.
#
# Let's continue working through this simple example!
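# In actual Spark code this column-combining step is typically done with `pyspark.ml.feature.VectorAssembler(inputCols=..., outputCol='features')`; here is a minimal pandas sketch of the same idea (the toy column names are hypothetical):

```python
import pandas as pd

# Toy frame with separate feature columns plus a label
df_toy = pd.DataFrame({'age': [25, 32],
                       'income': [40.0, 55.0],
                       'label': [0.0, 1.0]})

# Pack the chosen columns into a single 'features' vector column,
# leaving 'label' alongside it -- the two-column layout Spark ML expects
feature_cols = ['age', 'income']
df_toy['features'] = df_toy[feature_cols].values.tolist()
print(df_toy[['label', 'features']])
```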
# +
# These are the default values for the featuresCol, labelCol, predictionCol
lr = LinearRegression(featuresCol='features', labelCol='label', predictionCol='prediction')
# You could also pass in additional parameters for regularization, do the reading
# in ISLR to fully understand that, after that its just some simple parameter calls.
# Check the documentation with Shift+Tab for more info!
# -
# Fit the model
lrModel = lr.fit(training)
# Print the coefficients and intercept for linear regression
print("Coefficients: {}".format(str(lrModel.coefficients))) # For each feature...
print('\n')
print("Intercept: {}".format(str(lrModel.intercept)))
# There is a summary attribute that contains even more info!
# Summarize the model over the training set and print out some metrics
trainingSummary = lrModel.summary
# Lots of info, here are a few examples:
trainingSummary.residuals.show()
print("RMSE: {}".format(trainingSummary.rootMeanSquaredError))
print("r2: {}".format(trainingSummary.r2))
# ## Train/Test Splits
#
# But wait! We've committed a big mistake: we never separated our data set into a training and test set. Instead we trained on ALL of the data, something we generally want to avoid doing. Read ISLR and check out the theory lecture for more info on this, but remember we won't get a fair evaluation of our model by judging how well it does against the same data it was trained on!
#
# Luckily Spark DataFrames have an almost too convenient method of splitting the data! Let's see it:
all_data = spark.read.format("libsvm").load("sample_linear_regression_data.txt")
# Pass in the split between training/test as a list.
# There is no single correct split, but generally 70/30 or 60/40 splits are used.
# Depending on how much data you have and how unbalanced it is.
train_data,test_data = all_data.randomSplit([0.7,0.3])
train_data.show()
test_data.show()
unlabeled_data = test_data.select('features')
unlabeled_data.show()
# Now we only train on the train_data
correct_model = lr.fit(train_data)
# Now we can directly get a .summary object using the evaluate method:
test_results = correct_model.evaluate(test_data)
test_results.residuals.show()
print("RMSE: {}".format(test_results.rootMeanSquaredError))
# Well that is nice, but realistically we will eventually want to test this model against unlabeled data, after all, that is the whole point of building the model in the first place. We can again do this with a convenient method call, in this case, transform(). Which was actually being called within the evaluate() method. Let's see it in action:
predictions = correct_model.transform(unlabeled_data)
predictions.show()
# Okay, so this data is a bit meaningless, so let's explore this same process with some data that actually makes a little more intuitive sense!
|
Linear_Regression_Example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as pp
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
# %matplotlib inline
df = pd.read_csv('avocado.REGIONS.csv')
df.head()
df.groupby('Season')['AveragePrice'].mean()
df.mean()
df.head(5)
df = df.drop(['4046','4225','4770','Large Bags','Small Bags','XLarge Bags','Total Volume'],axis=1)
NE = df.loc[(df['region']) == 'Northeast']
NE.head(5)
NE.mean()
NE.groupby('Season')['AveragePrice'].mean()
NNE = df.loc[(df['region']) == 'NorthernNewEngland']
NNE.head(5)
NNE.mean()
NNE.groupby('Season')['AveragePrice'].mean()
SC = df.loc[(df['region']) == 'SouthCentral']
SC.head(5)
SC.mean()
SC.groupby('Season')['AveragePrice'].mean()
SE = df.loc[(df['region']) == 'Southeast']
SE.head(5)
SE.mean()
SE.groupby('Season')['AveragePrice'].mean()
P = df.loc[(df['region']) == 'Plains']
P.head(5)
P.mean()
P.groupby('Season')['AveragePrice'].mean()
GL = df.loc[(df['region']) == 'GreatLakes']
GL.head(5)
GL.mean()
GL.groupby('Season')['AveragePrice'].mean()
MS = df.loc[(df['region']) == 'Midsouth']
MS.head(5)
# +
MS.mean()
# -
MS.groupby('Season')['AveragePrice'].mean()
# + active=""
#
# -
NE['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
MS['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
GL['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
P['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
W = df.loc[(df['region']) == 'West']  # W was never defined above; 'West' is the assumed region name
W['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
SE['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
SC['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
NNE['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
df['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)
# Season # 1= Spring #2= Summer #3= Fall #4= Winter
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=df)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=SC)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=GL)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=P)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=SE)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=MS)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=NNE)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=NE)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# +
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(x='Season#', y='AveragePrice', data=W)
# Label the axes
_ = plt.xlabel('Season')
_ = plt.ylabel('Average Price')
# Show the plot
plt.show()
# -
# +
import plotly.plotly as py  # in plotly >= 4 this module moved to the separate chart_studio package
import plotly.graph_objs as go
labels = ['Oxygen','Hydrogen','Carbon_Dioxide','Nitrogen']
values = [4500,2500,1053,500]
trace = go.Pie(labels=labels, values=values)
py.iplot([trace], filename='basic_pie_chart')
# -
pricemap = {}
regions = set (df['region'].values)
seasons = set (df['Season'].values)
for region in regions:
    for season in seasons:
        prediction = df[(df['region'] == region) & (df['Season'] == season)].mean()['AveragePrice']
        pricemap[(region, season)] = prediction  # actually store the value; pricemap was never filled
        print(region, season)
        print(prediction)
|
ava-kaggle-regions.hist.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_amazonei_mxnet_p36
# language: python
# name: conda_amazonei_mxnet_p36
# ---
# # Time Series Forecasting
#
# A time series is data collected periodically, over time. Time series forecasting is the task of predicting future data points, given some historical data. It is commonly used in a variety of tasks from weather forecasting, retail and sales forecasting, stock market prediction, and in behavior prediction (such as predicting the flow of car traffic over a day). There is a lot of time series data out there, and recognizing patterns in that data is an active area of machine learning research!
#
# <img src='notebook_ims/time_series_examples.png' width=80% />
#
# In this notebook, we'll focus on one method for finding time-based patterns: using SageMaker's supervised learning model, [DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html).
#
#
# ### DeepAR
#
# DeepAR utilizes a recurrent neural network (RNN), which is designed to accept some sequence of data points as historical input and produce a predicted sequence of points. So, how does this model learn?
#
# During training, you'll provide a training dataset (made of several time series) to a DeepAR estimator. The estimator looks at *all* the training time series and tries to identify similarities across them. It trains by randomly sampling **training examples** from the training time series.
# * Each training example consists of a pair of adjacent **context** and **prediction** windows of fixed, predefined lengths.
# * The `context_length` parameter controls how far in the *past* the model can see.
# * The `prediction_length` parameter controls how far in the *future* predictions can be made.
# * You can find more details, in [this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).
#
# <img src='notebook_ims/context_prediction_windows.png' width=50% />
#
# > Since DeepAR trains on several time series, it is well suited for data that exhibit **recurring patterns**.
#
# In any forecasting task, you should choose the context window to provide enough, **relevant** information to a model so that it can produce accurate predictions. In general, data closest to the prediction time frame will contain the information that is most influential in defining that prediction. In many forecasting applications, like forecasting sales month-to-month, the context and prediction windows will be the same size, but sometimes it will be useful to have a larger context window to notice longer-term patterns in data.
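# The context/prediction window sampling described above can be sketched in plain Python (the window lengths here are illustrative, not DeepAR defaults):

```python
import numpy as np

def sample_training_example(series, context_length, prediction_length, rng=None):
    """Randomly draw one adjacent (context, prediction) window pair from a
    1-D array, mimicking how DeepAR samples training examples."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = context_length + prediction_length
    # latest start index where a full context+prediction window still fits
    start = rng.integers(0, len(series) - total + 1)
    context = series[start:start + context_length]
    prediction = series[start + context_length:start + total]
    return context, prediction

series = np.arange(100, dtype=float)  # toy "time series"
ctx, pred = sample_training_example(series, context_length=30, prediction_length=10)
print(len(ctx), len(pred))  # 30 10
```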
#
# ### Energy Consumption Data
#
# The data we'll be working with in this notebook is data about household electric power consumption, over the globe. The dataset is originally taken from [Kaggle](https://www.kaggle.com/uciml/electric-power-consumption-data-set), and represents power consumption collected over several years from 2006 to 2010. With such a large dataset, we can aim to predict over long periods of time, over days, weeks or months of time. Predicting energy consumption can be a useful task for a variety of reasons including determining seasonal prices for power consumption and efficiently delivering power to people, according to their predicted usage.
#
# **Interesting read**: An inversely-related project, recently done by Google and DeepMind, uses machine learning to predict the *generation* of power by wind turbines and efficiently deliver power to the grid. You can read about that research, [in this post](https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/).
#
# ### Machine Learning Workflow
#
# This notebook approaches time series forecasting in a number of steps:
# * Loading and exploring the data
# * Creating training and test sets of time series
# * Formatting data as JSON files and uploading to S3
# * Instantiating and training a DeepAR estimator
# * Deploying a model and creating a predictor
# * Evaluating the predictor
#
# ---
#
# Let's start by loading in the usual resources.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# # Load and Explore the Data
#
# We'll be loading in some data about global energy consumption, collected over a few years. The below cell downloads and unzips this data, giving you one text file of data, `household_power_consumption.txt`.
# ! wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/March/5c88a3f1_household-electric-power-consumption/household-electric-power-consumption.zip
# ! unzip household-electric-power-consumption
# ### Read in the `.txt` File
#
# The next cell displays the first few lines in the text file, so we can see how it is formatted.
# +
# display first ten lines of text data
n_lines = 10
with open('household_power_consumption.txt') as file:
head = [next(file) for line in range(n_lines)]
display(head)
# -
# ## Pre-Process the Data
#
# The 'household_power_consumption.txt' file has the following attributes:
# * Each data point has a date and time (hour:minute:second) of recording
# * The various data features are separated by semicolons (;)
# * Some values are 'nan' or '?', and we'll treat these both as `NaN` values
#
# ### Managing `NaN` values
#
# This DataFrame does include some data points that have missing values. So far, we've mainly been dropping these values, but there are other ways to handle `NaN` values, as well. One technique is to just fill the missing column values with the **mean** value from that column; this way the added value is likely to be realistic.
#
# I've provided some helper functions in `txt_preprocessing.py` that will help to load in the original text file as a DataFrame *and* fill in any `NaN` values, per column, with the mean feature value. This technique will be fine for long-term forecasting; if I wanted to do an hourly analysis and prediction, I'd consider dropping the `NaN` values or taking an average over a small, sliding window rather than an entire column of data.
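# The mean-filling idea is simple enough to sketch directly; `fill_nan_with_mean` below is a hypothetical stand-in for the helper in `txt_preprocessing.py`, shown on a toy DataFrame:

```python
import numpy as np
import pandas as pd

def fill_nan_with_mean(df):
    """Replace NaNs in each numeric column with that column's mean,
    so the imputed values stay realistic."""
    return df.fillna(df.mean(numeric_only=True))

toy = pd.DataFrame({'power': [1.0, np.nan, 3.0],
                    'voltage': [230.0, 231.0, np.nan]})
filled = fill_nan_with_mean(toy)
print(filled['power'].tolist())  # [1.0, 2.0, 3.0]
```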
#
# **Below, I'm reading the file in as a DataFrame and filling `NaN` values with feature-level averages.**
# +
import txt_preprocessing as pprocess
# create df from text file
initial_df = pprocess.create_df('household_power_consumption.txt', sep=';')
# fill NaN column values with *average* column value
df = pprocess.fill_nan_with_mean(initial_df)
# print some stats about the data
print('Data shape: ', df.shape)
df.head()
# -
# ## Global Active Power
#
# In this example, we'll want to predict the global active power, which is the household minute-averaged active power (kilowatt), measured across the globe. So, below, I am getting just that column of data and displaying the resultant plot.
# Select Global active power data
power_df = df['Global_active_power'].copy()
print(power_df.shape)
# display the data
plt.figure(figsize=(12,6))
# all data points
power_df.plot(title='Global active power', color='blue')
plt.show()
# Since the data is recorded each minute, the above plot contains *a lot* of values. So, I'm also showing just a slice of data, below.
# +
# can plot a slice of hourly data
end_mins = 1440 # 1440 mins = 1 day
plt.figure(figsize=(12,6))
power_df[0:end_mins].plot(title='Global active power, over one day', color='blue')
plt.show()
# -
# ### Hourly vs Daily
#
# There is a lot of data, collected every minute, and so I could go one of two ways with my analysis:
# 1. Create many, short time series, say a week or so long, in which I record energy consumption every hour, and try to predict the energy consumption over the following hours or days.
# 2. Create fewer, long time series with data recorded daily that I could use to predict usage in the following weeks or months.
#
# Both tasks are interesting! It depends on whether you want to predict time patterns over a day/week or over a longer time period, like a month. With the amount of data I have, I think it would be interesting to see longer, *recurring* trends that happen over several months or over a year. So, I will resample the 'Global active power' values, recording **daily** data points as averages over 24-hr periods.
#
# > I can resample according to a specified frequency, by utilizing pandas [time series tools](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html), which allow me to sample at points like every hour ('H') or day ('D'), etc.
#
#
# +
# resample over day (D)
freq = 'D'
# calculate the mean active power for a day
mean_power_df = power_df.resample(freq).mean()
# display the mean values
plt.figure(figsize=(15,8))
mean_power_df.plot(title='Global active power, mean per day', color='blue')
plt.tight_layout()
plt.show()
# -
# In this plot, we can see that there are some interesting trends that occur over each year. It seems that there are spikes of energy consumption around the end/beginning of each year, which correspond with heat and light usage being higher in winter months. We also see a dip in usage around August, when global temperatures are typically higher.
#
# The data is still not very smooth, but it shows noticeable trends, and so, makes for a good use case for machine learning models that may be able to recognize these patterns.
# ---
# ## Create Time Series
#
# My goal will be to take full years of data, from 2007-2009, and see if I can use it to accurately predict the average Global active power usage for the next several months in 2010!
#
# Next, let's make one time series for each complete year of data. This is just a design decision, and I am deciding to use full years of data, starting in January of 2007 because there are not that many data points in 2006 and this split will make it easier to handle leap years; I could have also decided to construct time series starting at the first collected data point, just by changing `t_start` and `t_end` in the function below.
#
# The function `make_time_series` will create pandas `Series` for each of the passed in list of years `['2007', '2008', '2009']`.
# * All of the time series will start at the same time point `t_start` (or t0).
# * When preparing data, it's important to use a consistent start point for each time series; DeepAR uses this time-point as a frame of reference, which enables it to learn recurrent patterns e.g. that weekdays behave differently from weekends or that Summer is different than Winter.
# * You can change the start and end indices to define any time series you create.
# * We should account for leap years, like 2008, in the creation of time series.
# * Generally, we create `Series` by getting the relevant global consumption data (from the DataFrame) and date indices.
#
# ```
# # get global consumption data
# data = mean_power_df[start_idx:end_idx]
#
# # create time series for the year
# index = pd.DatetimeIndex(start=t_start, end=t_end, freq='D')
# time_series.append(pd.Series(data=data, index=index))
# ```
def make_time_series(mean_power_df, years, freq='D', start_idx=16):
'''Creates as many time series as there are complete years. This code
accounts for the leap year, 2008.
:param mean_power_df: A dataframe of global power consumption, averaged by day.
This dataframe should also be indexed by a datetime.
:param years: A list of years to make time series out of, ex. ['2007', '2008'].
:param freq: The frequency of data recording (D = daily)
:param start_idx: The starting dataframe index of the first point in the first time series.
        The default, 16, points to '2007-01-01'.
:return: A list of pd.Series(), time series data.
'''
# store time series
time_series = []
# store leap year in this dataset
leap = '2008'
# create time series for each year in years
for i in range(len(years)):
year = years[i]
if(year == leap):
end_idx = start_idx+366
else:
end_idx = start_idx+365
# create start and end datetimes
t_start = year + '-01-01' # Jan 1st of each year = t_start
t_end = year + '-12-31' # Dec 31st = t_end
# get global consumption data
data = mean_power_df[start_idx:end_idx]
# create time series for the year
index = pd.date_range(start=t_start, end=t_end, freq=freq)
time_series.append(pd.Series(data=data, index=index))
start_idx = end_idx
# return list of time series
return time_series
# ## Test the results
#
# Below, let's construct one time series for each complete year of data, and display the results.
# +
# test out the code above
# yearly time series for our three complete years
full_years = ['2007', '2008', '2009']
freq='D' # daily recordings
# make time series
time_series = make_time_series(mean_power_df, full_years, freq=freq)
# +
# display first time series
time_series_idx = 0
plt.figure(figsize=(12,6))
time_series[time_series_idx].plot()
plt.show()
# -
# ---
# # Splitting in Time
#
# We'll evaluate our model on a test set of data. For machine learning tasks like classification, we typically create train/test data by randomly splitting examples into different sets. For forecasting it's important to do this train/test split in **time** rather than by individual data points.
# > In general, we can create training data by taking each of our *complete* time series and leaving off the last `prediction_length` data points to create *training* time series.
#
# ### EXERCISE: Create training time series
#
# Complete the `create_training_series` function, which should take in our list of complete time series data and return a list of truncated, training time series.
#
# * In this example, we want to predict about a month's worth of data, and we'll set `prediction_length` to 30 (days).
# * To create a training set of data, we'll leave out the last 30 points of *each* of the time series we just generated, so we'll use only the first part as training data.
# * The **test set contains the complete range** of each time series.
#
# create truncated, training time series
def create_training_series(complete_time_series, prediction_length):
'''Given a complete list of time series data, create training time series.
:param complete_time_series: A list of all complete time series.
:param prediction_length: The number of points we want to predict.
:return: A list of training time series.
'''
# your code here
training_series = []
for time_series in complete_time_series:
training_series.append(time_series.iloc[:-prediction_length])
return training_series
# +
# test your code!
# set prediction length
prediction_length = 30 # 30 days ~ a month
time_series_training = create_training_series(time_series, prediction_length)
# -
# ### Training and Test Series
#
# We can visualize what these series look like, by plotting the train/test series on the same axis. We should see that the test series contains all of our data in a year, and a training series contains all but the last `prediction_length` points.
# +
# display train/test time series
time_series_idx = 0
plt.figure(figsize=(15,8))
# test data is the whole time series
time_series[time_series_idx].plot(label='test', lw=3)
# train data is all but the last prediction pts
time_series_training[time_series_idx].plot(label='train', ls=':', lw=3)
plt.legend()
plt.show()
# -
# ## Convert to JSON
#
# According to the [DeepAR documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html), DeepAR expects to see input training data in a JSON format, with the following fields:
#
# * **start**: A string that defines the starting date of the time series, with the format 'YYYY-MM-DD HH:MM:SS'.
# * **target**: An array of numerical values that represent the time series.
# * **cat** (optional): A numerical array of categorical features that can be used to encode the groups that the record belongs to. This is useful for finding models per class of item, such as in retail sales, where you might have {'shoes', 'jackets', 'pants'} encoded as categories {0, 1, 2}.
#
# The input data should be formatted with one time series per line in a JSON file. Each line looks a bit like a dictionary, for example:
# ```
# {"start":'2007-01-01 00:00:00', "target": [2.54, 6.3, ...], "cat": [1]}
# {"start": "2012-01-30 00:00:00", "target": [1.0, -5.0, ...], "cat": [0]}
# ...
# ```
# In the above example, each time series has one associated categorical feature and one target time series.
#
# ### EXERCISE: Formatting Energy Consumption Data
#
# For our data:
# * The starting date, "start," will be the index of the first row in a time series, Jan. 1st of that year.
# * The "target" will be all of the energy consumption values that our time series holds.
# * We will not use the optional "cat" field.
#
# Complete the following utility function, which should convert `pandas.Series` objects into the appropriate JSON strings that DeepAR can consume.
def series_to_json_obj(ts):
'''Returns a dictionary of values in DeepAR, JSON format.
:param ts: A single time series.
:return: A dictionary of values with "start" and "target" keys.
'''
# your code here
start = str(ts.index[0].to_pydatetime())
target = ts.values.tolist()
return {'start': start, 'target': target}
# +
# test out the code
ts = time_series[0]
json_obj = series_to_json_obj(ts)
print(json_obj)
# -
# ### Saving Data, Locally
#
# The next helper function will write one series to a single JSON line, using the new line character '\n'. The data is also encoded and written to a filename that we specify.
# +
# import json for formatting data
import json
import os # and os for saving
def write_json_dataset(time_series, filename):
with open(filename, 'wb') as f:
# for each of our times series, there is one JSON line
for ts in time_series:
json_line = json.dumps(series_to_json_obj(ts)) + '\n'
json_line = json_line.encode('utf-8')
f.write(json_line)
print(filename + ' saved.')
# +
# save this data to a local directory
data_dir = 'json_energy_data'
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# +
# directories to save train/test data
train_key = os.path.join(data_dir, 'train.json')
test_key = os.path.join(data_dir, 'test.json')
# write train/test JSON files
write_json_dataset(time_series_training, train_key)
write_json_dataset(time_series, test_key)
# -
# ---
# ## Uploading Data to S3
#
# Next, to make this data accessible to an estimator, I'll upload it to S3.
#
# ### SageMaker resources
#
# Let's start by specifying:
# * The sagemaker role and session for training a model.
# * A default S3 bucket where we can save our training, test, and model data.
import boto3
import sagemaker
from sagemaker import get_execution_role
# +
# session, role, bucket
sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()
# -
# ### EXERCISE: Upload *both* training and test JSON files to S3
#
# Specify *unique* train and test prefixes that define the location of that data in S3.
# * Upload training data to a location in S3, and save that location to `train_path`
# * Upload test data to a location in S3, and save that location to `test_path`
# +
# suggested that you set prefixes for directories in S3
prefix = 'energy'
# upload data to S3, and save unique locations
train_prefix = '{}/{}'.format(prefix, 'train')
test_prefix = '{}/{}'.format(prefix, 'test')
train_path = sagemaker_session.upload_data(train_key, bucket=bucket, key_prefix=train_prefix)
test_path = sagemaker_session.upload_data(test_key, bucket=bucket, key_prefix=test_prefix)
# -
# check locations
print('Training data is stored in: '+ train_path)
print('Test data is stored in: '+ test_path)
# ---
# # Training a DeepAR Estimator
#
# Some estimators have specific SageMaker constructors, but not all. Instead, you can create a base `Estimator` and pass in the specific image (or container) that holds a particular model.
#
# Next, we configure the container image to be used for the region that we are running in.
# +
from sagemaker.amazon.amazon_estimator import get_image_uri
image_name = get_image_uri(boto3.Session().region_name, # get the region
'forecasting-deepar') # specify image
# -
# ### EXERCISE: Instantiate an Estimator
#
# You can now define the estimator that will launch the training job. A generic Estimator will be defined by the usual constructor arguments and an `image_name`.
# > You can take a look at the [estimator source code](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/estimator.py#L595) to view specifics.
#
# +
from sagemaker.estimator import Estimator
output_path = 's3://{}/{}/output'.format(bucket, prefix)
# instantiate a DeepAR estimator
estimator = Estimator(role=role,
sagemaker_session=sagemaker_session,
output_path=output_path,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
image_name=image_name)
# -
# ## Setting Hyperparameters
#
# Next, we need to define some DeepAR hyperparameters that define the model size and training behavior. Values for the epochs, frequency, prediction length, and context length are required.
#
# * **epochs**: The maximum number of times to pass over the data when training.
# * **time_freq**: The granularity of the time series in the dataset ('D' for daily).
# * **prediction_length**: The number of time steps (based on the unit of frequency) that the model is trained to predict; its value is passed as a string in the hyperparameter dictionary.
# * **context_length**: The number of time points that the model gets to see *before* making a prediction.
#
# ### Context Length
#
# Typically, it is recommended that you start with a `context_length`=`prediction_length`. This is because a DeepAR model also receives "lagged" inputs from the target time series, which allow the model to capture long-term dependencies. For example, a daily time series can have yearly seasonality and DeepAR automatically includes a lag of one year. So, the context length can be shorter than a year, and the model will still be able to capture this seasonality.
#
# The lag values that the model picks depend on the frequency of the time series. For example, lag values for daily frequency are the previous week, 2 weeks, 3 weeks, 4 weeks, and year. You can read more about this in the [DeepAR "how it works" documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).
#
# ### Optional Hyperparameters
#
# You can also configure optional hyperparameters to further tune your model. These include parameters like the number of layers in our RNN model, the number of cells per layer, the likelihood function, and the training options, such as batch size and learning rate.
#
# For an exhaustive list of all the different DeepAR hyperparameters you can refer to the DeepAR [hyperparameter documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html).
# +
freq='D'
context_length=30 # same as prediction_length
hyperparameters = {
"epochs": "50",
"time_freq": freq,
"prediction_length": str(prediction_length),
"context_length": str(context_length),
"num_cells": "50",
"num_layers": "2",
"mini_batch_size": "128",
"learning_rate": "0.001",
"early_stopping_patience": "10"
}
# -
# set the hyperparams
estimator.set_hyperparameters(**hyperparameters)
# ## Training Job
#
# Now, we are ready to launch the training job! SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.
#
# If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test data set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing this to the *actual* value of the time series. The computed error metrics will be included as part of the log output.
#
# The next cell may take a few minutes to complete, depending on data size, model complexity, and training options.
# +
# %%time
# train and test channels
data_channels = {
"train": train_path,
"test": test_path
}
# fit the estimator
estimator.fit(inputs=data_channels)
# -
# ## Deploy and Create a Predictor
#
# Now that we have trained a model, we can use it to perform predictions by deploying it to a predictor endpoint.
#
# Remember to **delete the endpoint** at the end of this notebook. A cell is provided at the very bottom of this notebook for this, but it is always good to keep it front-of-mind.
# +
# %%time
# create a predictor
predictor = estimator.deploy(
initial_instance_count=1,
instance_type='ml.t2.medium',
content_type="application/json" # specify that it will accept/produce JSON
)
# -
# ---
# # Generating Predictions
#
# According to the [inference format](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-in-formats.html) for DeepAR, the `predictor` expects to see input data in a JSON format, with the following keys:
# * **instances**: A list of JSON-formatted time series that should be forecast by the model.
# * **configuration** (optional): A dictionary of configuration information for the type of response desired by the request.
#
# Within configuration the following keys can be configured:
# * **num_samples**: An integer specifying the number of samples that the model generates when making a probabilistic prediction.
# * **output_types**: A list specifying the type of response. We'll ask for **quantiles**, which look at the `num_samples` forecasts generated by the model and produce [quantile estimates](https://en.wikipedia.org/wiki/Quantile) for each time point based on these values.
# * **quantiles**: A list that specifies which quantile estimates are generated and returned in the response.
#
#
# Below is an example of what a JSON query to a DeepAR model endpoint might look like.
#
# ```
# {
# "instances": [
# { "start": "2009-11-01 00:00:00", "target": [4.0, 10.0, 50.0, 100.0, 113.0] },
# { "start": "1999-01-30", "target": [2.0, 1.0] }
# ],
# "configuration": {
# "num_samples": 50,
# "output_types": ["quantiles"],
# "quantiles": ["0.5", "0.9"]
# }
# }
# ```
#
#
# ## JSON Prediction Request
#
# The code below accepts a **list** of time series as input and some configuration parameters. It then formats that series into a JSON instance and converts the input into an appropriately formatted JSON_input.
def json_predictor_input(input_ts, num_samples=50, quantiles=['0.1', '0.5', '0.9']):
'''Accepts a list of input time series and produces a formatted input.
    :param input_ts: A list of input time series.
    :param num_samples: The number of samples to calculate metrics with.
    :param quantiles: A list of quantiles to return in the predicted output.
:return: The JSON-formatted input.
'''
# request data is made of JSON objects (instances)
# and an output configuration that details the type of data/quantiles we want
instances = []
for k in range(len(input_ts)):
# get JSON objects for input time series
instances.append(series_to_json_obj(input_ts[k]))
# specify the output quantiles and samples
configuration = {"num_samples": num_samples,
"output_types": ["quantiles"],
"quantiles": quantiles}
request_data = {"instances": instances,
"configuration": configuration}
json_request = json.dumps(request_data).encode('utf-8')
return json_request
# ### Get a Prediction
#
# We can then use this function to get a prediction for a formatted time series!
#
# In the next cell, I'm getting an input time series and known target, and passing the formatted input into the predictor endpoint to get a resultant prediction.
# +
# get all input and target (test) time series
input_ts = time_series_training
target_ts = time_series
# get formatted input time series
json_input_ts = json_predictor_input(input_ts)
# get the prediction from the predictor
json_prediction = predictor.predict(json_input_ts)
print(json_prediction)
# -
# ## Decoding Predictions
#
# The predictor returns a JSON-formatted prediction, so we need to extract the predictions and quantile data that we want for visualizing the result. The function below reads in a JSON-formatted prediction and produces a list of predictions for each quantile.
# helper function to decode JSON prediction
def decode_prediction(prediction, encoding='utf-8'):
'''Accepts a JSON prediction and returns a list of prediction data.
'''
prediction_data = json.loads(prediction.decode(encoding))
prediction_list = []
for k in range(len(prediction_data['predictions'])):
prediction_list.append(pd.DataFrame(data=prediction_data['predictions'][k]['quantiles']))
return prediction_list
# +
# get quantiles/predictions
prediction_list = decode_prediction(json_prediction)
# should get one DataFrame of predictions per input series,
# each with 30 rows of corresponding quantile values
print(prediction_list[0])
# -
# ## Display the Results!
#
# The quantile data will give us all we need to see the results of our prediction.
# * Quantiles 0.1 and 0.9 represent higher and lower bounds for the predicted values.
# * Quantile 0.5 represents the median of all sample predictions.
#
# display the prediction median against the actual data
def display_quantiles(prediction_list, target_ts=None):
# show predictions for all input ts
for k in range(len(prediction_list)):
plt.figure(figsize=(12,6))
# get the target month of data
if target_ts is not None:
target = target_ts[k][-prediction_length:]
plt.plot(range(len(target)), target, label='target')
# get the quantile values at 10 and 90%
p10 = prediction_list[k]['0.1']
p90 = prediction_list[k]['0.9']
# fill the 80% confidence interval
plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval')
# plot the median prediction line
prediction_list[k]['0.5'].plot(label='prediction median')
plt.legend()
plt.show()
# display predictions
display_quantiles(prediction_list, target_ts)
# ## Predicting the Future
#
# Recall that we did not give our model any data about 2010, but let's see if it can predict the energy consumption given **no target**, only a known start date!
#
# ### EXERCISE: Format a request for a "future" prediction
#
# Create a formatted input to send to the deployed `predictor` passing in my usual parameters for "configuration". The "instances" will, in this case, just be one instance, defined by the following:
# * **start**: The start time will be time stamp that you specify. To predict the first 30 days of 2010, start on Jan. 1st, '2010-01-01'.
# * **target**: The target will be an empty list, because this year has no complete, associated time series; we specifically withheld that information from our model for testing purposes.
# ```
# {"start": start_time, "target": []} # empty target
# ```
# +
# Starting my prediction at the beginning of 2010
start_date = '2010-01-01'
timestamp = '00:00:00'
# formatting start_date
start_time = start_date +' '+ timestamp
# format the request_data
# with "instances" and "configuration"
instances = [{'start': start_time, 'target': []}]
num_samples=50
quantiles=['0.1', '0.5', '0.9']
configuration = {"num_samples": num_samples,
"output_types": ["quantiles"],
"quantiles": quantiles}
request_data = {"instances": instances,
"configuration": configuration}
# create JSON input
json_input = json.dumps(request_data).encode('utf-8')
print('Requesting prediction for '+start_time)
# -
# Then get and decode the prediction response, as usual.
# +
# get prediction response
json_prediction = predictor.predict(json_input)
prediction_2010 = decode_prediction(json_prediction)
# -
# Finally, I'll compare the predictions to a known target sequence. This target will come from a time series for the 2010 data, which I'm creating below.
# +
# create 2010 time series
ts_2010 = []
# get global consumption data
# index 1112 is where the 2010 data starts
data_2010 = mean_power_df.values[1112:]
index = pd.date_range(start=start_date, periods=len(data_2010), freq='D')
ts_2010.append(pd.Series(data=data_2010, index=index))
# +
# range of actual data to compare
start_idx=0 # days since Jan 1st 2010
end_idx=start_idx+prediction_length
# get target data
target_2010_ts = [ts_2010[0][start_idx:end_idx]]
# display predictions
display_quantiles(prediction_2010, target_2010_ts)
# -
# ## Delete the Endpoint
#
# Try your code out on different time series. You may want to tweak your DeepAR hyperparameters and see if you can improve the performance of this predictor.
#
# When you're done with evaluating the predictor (any predictor), make sure to delete the endpoint.
## TODO: delete the endpoint
predictor.delete_endpoint()
# ## Conclusion
#
# Now you've seen one complex but far-reaching method for time series forecasting. You should have the skills you need to apply the DeepAR model to data that interests you!
|
Time_Series_Forecasting/Energy_Consumption_Exercise.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# This tutorial introduces the basic Auto-PyTorch API together with the classes for featurized and image data.
# So far, Auto-PyTorch covers classification and regression on featurized data as well as classification on image data.
# For installing Auto-PyTorch, please refer to the github page.
#
# **Disclaimer**: In this notebook, data will be downloaded from the openml project for featurized tasks and CIFAR10 will be downloaded for image classification. Hence, an internet connection is required.
# # API
#
# There are classes for featurized tasks (classification, multi-label classification, regression) and image tasks (classification). You can import them via:
from autoPyTorch import (AutoNetClassification,
AutoNetMultilabel,
AutoNetRegression,
AutoNetImageClassification,
AutoNetImageClassificationMultipleDatasets)
# Other imports for later usage
import pandas as pd
import numpy as np
import os as os
import openml
import json
# Upon initialization of a class, you can specify its configuration. Later, you can override its configuration in each fit call. The *config_preset* allows to constrain the search space to one of *tiny_cs, medium_cs* or *full_cs*. These presets can be seen in *core/presets/*.
autonet = AutoNetClassification(config_preset="tiny_cs", result_logger_dir="logs/")
# Here are some useful methods provided by the API:
# +
# Get the current configuration as dict
current_configuration = autonet.get_current_autonet_config()
# Get the ConfigSpace object with all hyperparameters, conditions, default values and default ranges
hyperparameter_search_space = autonet.get_hyperparameter_search_space()
# Print all possible configuration options
#autonet.print_help()
# -
# The most important methods for using Auto-PyTorch are ***fit***, ***refit***, ***score*** and ***predict***.
#
# First, we get some data:
# Get data from the openml task "Supervised Classification on credit-g (https://www.openml.org/t/31)"
task = openml.tasks.get_task(task_id=31)
X, y = task.get_X_and_y()
ind_train, ind_test = task.get_train_test_split_indices()
X_train, Y_train = X[ind_train], y[ind_train]
X_test, Y_test = X[ind_test], y[ind_test]
# ***fit*** is used to search for a good configuration by fitting configurations chosen by the algorithm (by default BOHB). The incumbent configuration is then returned and stored in the class.
#
# We recommend having a look at the possible configuration options first. Some of the most important options allow you to set the budget type (epochs or time), run id and task id for cluster usage, tensorboard logging, seed and more.
#
# Here we search for a configuration for 300 seconds with 60-100 s time for fitting each individual configuration.
# Use the *validation_split* parameter to specify a split size. You can also pass your own validation set
# via *X_val* and *Y_val*. Use *log_level="info"* or *log_level="debug"* for more detailed output.
# +
autonet = AutoNetClassification(config_preset="tiny_cs", result_logger_dir="logs/")
# Fit (note that the settings are for demonstration, you might need larger budgets)
results_fit = autonet.fit(X_train=X_train,
Y_train=Y_train,
validation_split=0.3,
max_runtime=300,
min_budget=60,
max_budget=100,
refit=True)
# Save fit results as json
with open("logs/results_fit.json", "w") as file:
json.dump(results_fit, file)
# -
# ***refit*** allows you to fit a configuration of your choice for a defined time. By default, the incumbent configuration is refitted during a *fit* call using the *max_budget*. However, *refit* might be useful if you want to fit on the full dataset or even another dataset or if you just want to fit a model without searching.
#
# You can specify a hyperparameter configuration to fit (if you do not specify a configuration the incumbent configuration from the last fit call will be used):
# +
# Create an autonet
autonet_config = {
"result_logger_dir" : "logs/",
"budget_type" : "epochs",
"log_level" : "info",
"use_tensorboard_logger" : True,
"validation_split" : 0.0
}
autonet = AutoNetClassification(**autonet_config)
# Sample a random hyperparameter configuration as an example
hyperparameter_config = autonet.get_hyperparameter_search_space().sample_configuration().get_dictionary()
# Refit with sampled hyperparameter config for 120 s. This time on the full dataset.
results_refit = autonet.refit(X_train=X_train,
Y_train=Y_train,
X_valid=None,
Y_valid=None,
hyperparameter_config=hyperparameter_config,
autonet_config=autonet.get_current_autonet_config(),
budget=50)
# Save json
with open("logs/results_refit.json", "w") as file:
json.dump(results_refit, file)
# -
# ***predict*** returns the predictions of the incumbent model. ***score*** can be used to evaluate the model on a test set.
# +
# See how the random configuration performs (often it just predicts 0)
score = autonet.score(X_test=X_test, Y_test=Y_test)
pred = autonet.predict(X=X_test)
print("Model prediction:", pred[0:10])
print("Accuracy score", score)
# -
# Finally, you can also get the incumbent model as PyTorch Sequential model via
pytorch_model = autonet.get_pytorch_model()
print(pytorch_model)
# # Featurized Data
#
# All classes for featurized data (*AutoNetClassification*, *AutoNetMultilabel*, *AutoNetRegression*) can be used as in the example above. The only difference is the type of labels they accept.
# # Image Data
#
# Auto-PyTorch provides two classes for image data. *autonet_image_classification* can be used for image classification. The *autonet_multi_image_classification* class allows searching for configurations for image classification across multiple datasets, meaning Auto-PyTorch will try to choose a configuration that works well on all given datasets.
# Load classes
autonet_image_classification = AutoNetImageClassification(config_preset="full_cs", result_logger_dir="logs/")
autonet_multi_image_classification = AutoNetImageClassificationMultipleDatasets(config_preset="tiny_cs", result_logger_dir="logs/")
# For passing your image data, you have two options (note that arrays are expected):
#
# I) Via a path to a comma-separated value file, which in turn contains the paths to the images and the image labels (note header is assumed to be None):
# +
csv_dir = os.path.abspath("../../datasets/example.csv")
X_train = np.array([csv_dir])
Y_train = np.array([0])
# -
# II) directly passing the paths to the images and the labels
df = pd.read_csv(csv_dir, header=None)
X_train = df.values[:,0]
Y_train = df.values[:,1]
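# As an illustration of what such a label file might contain (the file name and rows below are made up, not the actual `example.csv`), option I's CSV holds one image path and one integer label per row, with no header:

```python
import csv
import io

# Hypothetical contents of a label CSV like example.csv (no header row)
csv_text = "images/img_0001.png,0\nimages/img_0002.png,1\n"

rows = list(csv.reader(io.StringIO(csv_text)))
paths = [row[0] for row in rows]        # image paths (column 0)
labels = [int(row[1]) for row in rows]  # integer class labels (column 1)
print(paths, labels)
```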
# Make sure you specify *image_root_folders* if the paths to the images are not specified from your current working directory. You can also specify *images_shape* to up- or downscale images.
#
# Using the flag *save_checkpoints=True* will save checkpoints to the result directory:
autonet_image_classification.fit(X_train=X_train,
Y_train=Y_train,
images_shape=[3,32,32],
min_budget=200,
max_budget=400,
max_runtime=600,
save_checkpoints=True,
images_root_folders=[os.path.abspath("../../datasets/example_images")])
# Auto-PyTorch also supports some common datasets. By passing a comma-separated value file with just one line, e.g. "CIFAR10, 0" and specifying *default_dataset_download_dir* it will automatically download the data and use it for searching. Supported datasets are CIFAR10, CIFAR100, SVHN and MNIST.
# +
path_to_cifar_csv = os.path.abspath("../../datasets/CIFAR10.csv")
autonet_image_classification.fit(X_train=np.array([path_to_cifar_csv]),
Y_train=np.array([0]),
min_budget=600,
max_budget=900,
max_runtime=1800,
default_dataset_download_dir="./datasets",
images_root_folders=["./datasets"])
# -
# For searching across multiple datasets, pass multiple csv files to the corresponding Auto-PyTorch class. Make sure your specify *images_root_folders* for each of them.
autonet_multi_image_classification.fit(X_train=np.array([path_to_cifar_csv, csv_dir]),
Y_train=np.array([0]),
min_budget=1500,
max_budget=2000,
max_runtime=4000,
default_dataset_download_dir="./datasets",
images_root_folders=["./datasets", "./datasets/example_images"])
|
examples/basics/Auto-PyTorch Tutorial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Linear Regression
# Linear regression is used for finding linear relationship between one dependent variable and one independent variable. simple form of Regression equation is defined by y=c+b*x,where y is estimated dependent variable and c is constant,b is regression cofficient ,x is score on the independent variable.
# It is basic and commonly used type of predictive analyis.
# import libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn import metrics
from sklearn.metrics import r2_score
# importing the dataset
dataset = pd.read_csv('/home/webtunix/Desktop/Regression/random.csv')
print(len(dataset))
# Splitting data into two sets x and y
x = dataset.iloc[:,1:4].values
y = dataset.iloc[:,4].values
# Splitting the training and testing sets
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)
# Apply Linear Regression
model = LinearRegression()
model.fit(X_train,y_train)
model.score(X_train,y_train)
pred = model.predict(X_test)
print(pred)
# R^2 score of the model (coefficient of determination)
print("R^2 score:", r2_score(y_test, pred))
# Plotting the scatter graph of actual values and predicting values
# +
colors = np.random.rand(72)
#plot target and predicted values
plt.scatter(colors,y_test, c='blue',label='target')
plt.scatter(colors,pred, c='green',label='predicted')
#plot x and y lables
plt.xlabel('x')
plt.ylabel('y')
#plot title
plt.title('Linear Regression')
plt.legend()
plt.show()
# -
# # Research Infinite Solutions LLP
# by Research Infinite Solutions (https://www.ris-ai.com//)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
Regression_models/implement_linearreg.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Problem:
# Given an input string s and a pattern p, implement regular expression matching with support for '.' and '*'.
# 1. '.' matches any single character.
# 2. '*' matches zero or more of the preceding element (it repeats one preceding character, not several different ones).
#
# The match should cover the entire input string (not partial).
# Note:
# 1. s may be empty and contains only lowercase letters a-z.
# 2. p may be empty and contains only lowercase letters a-z, '.', and '*'.
#
# Example 1:
# Input:
# s = "aa"
# p = "a"
# Output: false
# Explanation: "a" does not match the entire string "aa".
#
# Example 2:
# Input:
# s = "aa"
# p = "a*"
# Output: true
# Explanation: '*' means zero or more of the preceding element, 'a'. Therefore, by repeating 'a' once, it becomes "aa".
#
# Example 3:
# Input:
# s = "ab"
# p = ".*"
# Output: true
# Explanation: ".*" means "zero or more (*) of any character (.)".
#
# Example 4:
# Input:
# s = "aab"
# p = "c*a*b"
# Output: true
# Explanation: c can be repeated 0 times, a can be repeated 1 time. Therefore, it matches "aab".
#
# Example 5:
# Input:
# s = "mississippi"
# p = "mis*is*p*."
# Output: false
# -
class Solution:
    def isMatch(self, s: str, p: str) -> bool:
        # Recursive approach: handle one pattern unit at a time.
        if not p:
            return not s  # an empty pattern matches only the empty string
        # does the first character match (same letter, or '.')?
        first = bool(s) and p[0] in (s[0], '.')
        if len(p) >= 2 and p[1] == '*':
            # either skip the "x*" unit entirely (zero matches),
            # or consume one matching character and try "x*" again
            return self.isMatch(s, p[2:]) or (first and self.isMatch(s[1:], p))
        return first and self.isMatch(s[1:], p[1:])
class Solution:
def isMatch(self, s: str, p: str) -> bool:
s, p = ' ' + s, ' ' + p
lenS, lenP = len(s), len(p)
dp = [[False] * lenP for _ in range(lenS)]
dp[0][0] = True
        # When s is empty but p is not:
        # s = ""
        # p = "a*"   -> dp[0][j=2] = dp[0][j-2] = dp[0][0]
        # p = "a*b*" -> dp[0][j=4] = dp[0][j-2]
        for j in range(1, lenP):
            if p[j] == '*':  # here '*' can only act as an empty match
                dp[0][j] = dp[0][j-2]  # p[j-1] is the char before '*'; p up to j-2 must already match the empty s
        for i in range(1, lenS):
            for j in range(1, lenP):
                if p[j] == s[i] or p[j] == '.':
                    dp[i][j] = dp[i-1][j-1]  # current chars match, so the strings match iff their prefixes match
                elif p[j] == '*':
                    # zero, one, or many matches
                    if p[j-1] != s[i] and p[j-1] != '.':  # '*' has to match zero occurrences
                        dp[i][j] = dp[i][j-2]
                    else:
                        # zero, one, or many occurrences
                        dp[i][j] = dp[i][j-2] or dp[i][j-1] or dp[i-1][j]
print(dp)
return dp[-1][-1]
s_ = 'aaac'
p_ = 'a*c'
solution = Solution()
solution.isMatch(s_, p_)
# Expected dp table for s_='aaac', p_='a*c' (rows are prefixes of ' '+s; columns are p = ' ', 'a', '*', 'c'):
[[True, False, True, False],   # s='':     matches p='' and p='a*' (zero a's)
 [False, True, True, False],   # s='a':    matches p='a' and p='a*'
 [False, False, True, False],  # s='aa':   matches p='a*'
 [False, False, True, False],  # s='aaa':  matches p='a*'
 [False, False, False, True]]  # s='aaac': matches p='a*c'
|
Back Tracking/0907/10. Regular Expression Matching.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### Exercice 1:
# Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square.
class square:
    # define your methods
    def __init__(self, lenght):
        self.lenght = lenght

    def area(self):
        return self.lenght ** 2

    def perimeter(self):
        return self.lenght * 4
# ### Exercise 2:
# Write a Python class `rectangle` that inherits from the `square` class.
class rectangle(square):
    # __init__ method only
    def __init__(self, lenght, width):
        super().__init__(lenght)
        self.width = width
r=rectangle(7, 2)
print(r.lenght)
# ### Exercise 3:
# Use Python decorators to make the following code work
# +
class SampleClass:
def __init__(self, a):
# private variable in Python
self.__a = a
def get_a(self):
print(self.__a)
x = SampleClass(3)
x.get_a()
x.a = 23
print(x.a)
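A sketch of how this exercise is usually solved, assuming the intent is the built-in `property` decorator, so that `x.a` reads and writes the name-mangled private attribute:

```python
class SampleClass:
    def __init__(self, a):
        self.__a = a          # name-mangled "private" attribute

    @property
    def a(self):              # read access: x.a
        return self.__a

    @a.setter
    def a(self, value):       # write access: x.a = value
        self.__a = value

x = SampleClass(3)
print(x.a)   # 3
x.a = 23
print(x.a)   # 23
```

Without the setter, `x.a = 23` would raise `AttributeError`, which is how the property protects the private state.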
|
exercices/part4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:talent-env]
# language: python
# name: conda-env-talent-env-py
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Evidence calculation for EFT expansions
# + [markdown] slideshow={"slide_type": "fragment"}
# <div style="text-align: center !important;"><img src="fitting_an_elephant_quote.png"></div>
# + [markdown] slideshow={"slide_type": "subslide"}
# ## The toy model
#
# Here we continue to explore aspects of Bayesian statistical analysis using toy models for effective field theories (EFTs), namely Taylor series of some specified functions. In this notebook we are exploring the evidence for how many coefficients in the EFT expansion are determined by the given data.
#
# Let's first review the function we are using as a toy model, taken from [*Bayesian parameter estimation for effective field theories*](https://arxiv.org/abs/1511.03618):
#
# $$
# g(x) = \left(\frac12 + \tan\left(\frac{\pi}{2}x\right)\right)^2
# $$
#
# represents the true, underlying theory. It has a Taylor expansion
#
# $$
# g(x) = 0.25 + 1.57x + 2.47x^2 + 1.29 x^3 + \cdots
# $$
#
# Our model for an EFT for this "theory" is
#
# $$
# g_{\rm th}(x) \equiv \sum_{i=0}^k a_i x^i \;.
# $$
#
# In mini-project I, our general task was to fit 1, 2, 3, ... of the parameters $a_i$ and to analyze the results.
#
# $% Some LaTeX definitions we'll use.
# \newcommand{\pr}{{p}} %\newcommand{\pr}{\textrm{p}}
# \newcommand{\abar}{\bar a}
# \newcommand{\avec}{{\bf a}}
# \newcommand{\kmax}{k_{\rm max}}
# $
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## The statistical model (recap)
#
#
# Here we are given data with simple error bars, which imply that the probability for any *single* data point is a normal distribution about the true value. That is,
#
# $$
# y_i \sim \mathcal{N}(y_M(x_i;\theta), \varepsilon_i)
# $$
#
# or, in other words,
#
# $$
# \pr(y_i\mid x_i, \theta) = \frac{1}{\sqrt{2\pi\varepsilon_i^2}} \exp\left(\frac{-\left[y_i - y_M(x_i;\theta)\right]^2}{2\varepsilon_i^2}\right)
# $$
#
# where $\varepsilon_i$ are the (known) measurement errors indicated by the error bars.
#
#
# Assuming all the points are independent, we can find the full likelihood by multiplying the individual likelihoods together:
#
# $$
# \pr(D\mid\theta) = \prod_{i=1}^N \pr(y_i \mid x_i, \theta)
# $$
#
# For convenience and numerical accuracy, this is usually expressed in terms of the log-likelihood:
#
# $$
# \log \pr(D\mid\theta) = -\frac{1}{2}\sum_{i=1}^N\left(\log(2\pi\varepsilon_i^2) + \frac{\left[y_i - y_M(x_i;\theta)\right]^2}{\varepsilon_i^2}\right)
# $$
#
#
# We consider two priors for the coefficients. The first is a Gaussian prior that encodes naturalness through the parameter $\abar$:
#
# $$
# \pr(\avec\mid \abar, I) = \left(\frac{1}{\sqrt{2\pi}\abar}\right)^{k+1} \exp{\left(-\frac{\avec^2}{2\abar^2}\right)}
# $$
#
# with $\abar$ taken to be fixed (at $\abar_{\rm fix} = 5$ usually). That is, the prior pdf for $\abar$ is
#
# $$
# \pr(\abar) = \delta(\abar - \abar_{\rm fix}) \;.
# $$
#
# (In more recent work, we have used a conjugate prior for $\abar$ that simplifies the calculations.)
#
# The second is an uninformative uniform prior that we take to be a constant (cutting it off only at very high values, which
# may not even be needed).
# Given likelihood and prior, the posterior pdf by Bayes' Theorem is
#
# $$
# \pr(\avec\mid D, k, \kmax, I) = \frac{\pr(D\mid \avec, k, \kmax, I)\; \pr(\avec\mid I)}{\pr(D \mid k, \kmax, I)}
# $$
#
# We have focused previously on calculating this posterior to find distributions for the coefficients $\theta = \{a_0, a_1, \cdots, a_k\}$.
# Furthermore, up to now we have ignored the denominator, which is the *evidence*, because we didn't need to calculate it independently. Now we will calculate it.
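Before turning to the evidence, note that the log-likelihood above translates directly into numpy. A self-contained sketch (with illustrative data generated exactly from a truncated polynomial model, not the notebook's data):

```python
import numpy as np

def log_likelihood(theta, x_pts, y_pts, dy_pts):
    """Gaussian log-likelihood for independent points with known errors dy_pts."""
    y_th = np.polynomial.polynomial.polyval(x_pts, theta)  # sum_i theta[i] * x**i
    return -0.5 * np.sum(np.log(2. * np.pi * dy_pts**2)
                         + (y_pts - y_th)**2 / dy_pts**2)

# Quick check: data generated exactly from the model
x = np.linspace(0.1, 0.3, 10)
theta = np.array([0.25, 1.5708, 2.4674])
y = np.polynomial.polynomial.polyval(x, theta)
dy = 0.05 * y
# the true parameters should score higher than a perturbed parameter set
assert log_likelihood(theta, x, y, dy) > log_likelihood(theta + 0.5, x, y, dy)
```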
# + slideshow={"slide_type": "slide"}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn; seaborn.set("talk") # for plot formatting
import scipy.stats as stats
from scipy import linalg
from cycler import cycler
from matplotlib.cm import get_cmap
# + [markdown] slideshow={"slide_type": "slide"}
# ## The Data and the true result
#
# Let's start by defining the exact function and the data for the toy model.
# + slideshow={"slide_type": "fragment"}
def g_fun(x):
"""
Toy function to model an EFT expansion. It has a Taylor expansion about
x=0 with a radius of convergence of 1.
"""
return (0.5 + np.tan(np.pi * x / 2.))**2
def y_model(x_pts, theta, orders=None):
"""
Returns the evaluation of the theoretical model at all x values in the
numpy array x_pts, using orders coefficients from theta (defaults to all).
"""
if orders is None: # allow for not using the full theta vector
orders = len(theta)
return np.array( [ np.sum(
[theta[i] * x**i for i in range(orders)]
) for x in x_pts ] )
# + slideshow={"slide_type": "subslide"}
theta_true = np.array([0.25, 1.5707963, 2.4674011, 1.2919282, 4.0587121,
1.275082, 5.67486677])
# Generate data points as described in the paper; remember these are relative
# errors, so multiply the percent by the data at each x.
x_max = 1./np.pi # we'll eventually test sensitivity to x_max
x_data_pts = np.linspace(x_max/10., x_max, 10) # don't start at x=0
eta = 0.05 # specified relative uncertainty is 5%
# Here we generate new (different) data points with every run
y_data_pts = g_fun(x_data_pts) * \
(1. + stats.norm.rvs(0., eta, size=len(x_data_pts)) )
# *** The following is the exact data with errors from the paper ***
y_data_pts = np.array([0.31694, 0.33844, 0.42142, 0.57709, 0.56218, \
0.68851, 0.73625, 0.87270, 1.0015, 1.0684])
dy_data = eta * y_data_pts
# Always make a figure to check your data!
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1)
ax.errorbar(x_data_pts, y_data_pts, dy_data, fmt='o')
ax.set_xlabel(r'x')
ax.set_ylabel(r'g(x)')
ax.set_xlim(0, 0.5)
ax.set_ylim(0, 1.5)
x_pts_all = np.arange(0., 1., .01)
ax.plot(x_pts_all, g_fun(x_pts_all), color='red', alpha=0.5, label='exact')
ax.set_title('Toy function, data, and first terms in expansion')
n_dim = 3
colors = ['b', 'g', 'c', 'm', 'k']
for order in range(n_dim):
ax.plot(x_pts_all, y_model(x_pts_all, theta_true[:n_dim], order+1),
label=f'order {order:d}', color=colors[order], alpha=0.8)
ax.legend()
fig.tight_layout()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Evidence calculation
#
# Now we seek to reproduce and understand Figure 8 in the paper [*Bayesian parameter estimation for effective field theories*](https://arxiv.org/abs/1511.03618), which shows that the evidence for the model expansion up to order $\kmax$ *saturates* (i.e., increases up to a maximum and then flattens out close to that value). This is in contrast to the more typical expectation from evidence calculations that lead to a definite peak.
#
# The evidence can be expressed by marginalization as an integral over *all possible* $\avec$. (The notation with $k$ and $\kmax$ is for consistency with the paper; for our purposes today consider this as the evidence for an expansion up to order $k$.)
#
# $$
# \begin{align}
# \pr(D \mid k \leq \kmax, \kmax, I) &= \int d\abar \int d\avec \, \pr(D \mid \avec, k=\kmax, \kmax, I) \;
# \pr(\avec\mid\abar, I)\; \pr(\abar\mid I)
# \end{align}
# $$
#
# *If you don't see how this equation comes about, please ask!*
#
# The first term in the integrand is the likelihood, which we saw above is a multivariate Gaussian and, in the present case with independent points, it is very simple, just the product of one-dimensional Gaussians. If we take the case of a Gaussian prior for $\avec$ and the fixed (delta function) prior for $\abar$, we can do the $\abar$ integral for free and the remaining integral for the evidence can be done analytically.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Evidence using linear algebra and Gaussian integrals
#
# If we write the multivariate Gaussians in the evidence in matrix form, we can use the basic formula for integration:
#
# $$
# \int e^{-\frac12 x^T A x + B^T x}\, d^nx = \sqrt{\det (2\pi A^{-1})} \; e^{\frac12 B^T A^{-1} B}
# $$
#
# where $x$ and $B$ are n-dimensional vectors and $A$ is an $n\times n$ matrix, with $n$ the number of data points. The $x_i$ integrations are from $-\infty$ to $+\infty$.
# -
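Since the evidence calculation below rests on this identity, it is worth a numerical spot-check in two dimensions. A sketch, where the symmetric positive-definite matrix `A` and vector `B` are arbitrary illustrative choices:

```python
import numpy as np
from scipy import integrate, linalg

A = np.array([[2.0, 0.3], [0.3, 1.5]])   # symmetric positive definite
B = np.array([0.4, -0.7])

# closed form: sqrt(det(2*pi*A^{-1})) * exp(B^T A^{-1} B / 2)
closed = np.sqrt(linalg.det(2. * np.pi * linalg.inv(A))) \
         * np.exp(0.5 * B @ linalg.inv(A) @ B)

# brute-force quadrature of exp(-x^T A x / 2 + B^T x) over a wide box
integrand = lambda y, x: np.exp(-0.5 * np.array([x, y]) @ A @ np.array([x, y])
                                + B @ np.array([x, y]))
numeric, _ = integrate.dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)
assert abs(numeric - closed) < 1e-6 * closed
```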
# ### Evidence using conjugate prior
#
# The usefulness of a conjugate prior is in carrying out a Bayesian update without having to do any calculation. Recall yet again how Bayes theorem tells us how to update (the information $I$ will be implicit in the following):
#
# $$
# \pr(\theta\mid D) = \frac{\pr(D\mid\theta)\; \pr(\theta)}{\pr(D)}
# $$
#
# If $\pr(\theta)$ is a conjugate prior to the likelihood, the updating consists solely of changing the parameters that specify the prior pdf.
#
# The most complete table of conjugate priors out in the wild seems to be the Wikipedia webpage [Conjugate Prior](https://en.wikipedia.org/wiki/Conjugate_prior#Table_of_conjugate_distributions). Take a look!
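As a minimal concrete illustration that a conjugate update is just parameter arithmetic, here is the Beta–Bernoulli pair from that table (not the Gaussian case used in this notebook):

```python
# A Beta(alpha, beta) prior is conjugate to a Bernoulli likelihood:
# observing h successes and t failures updates
# (alpha, beta) -> (alpha + h, beta + t) -- no integration required.
alpha, beta = 1.0, 1.0          # flat prior on the success probability
data = [1, 0, 1, 1, 0, 1, 1]    # seven Bernoulli draws
h, t = sum(data), len(data) - sum(data)
alpha_post, beta_post = alpha + h, beta + t
posterior_mean = alpha_post / (alpha_post + beta_post)
print(alpha_post, beta_post, posterior_mean)  # 6.0 3.0 0.666...
```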
# + slideshow={"slide_type": "fragment"}
def make_matrices(x_pts, y_pts, dy_pts, k_order, a_bar):
"""
Construct and return the matrices we'll need to calculate the evidence.
We have only one observable for now, so d is omitted.
"""
m = k_order + 1 # number of coefficients is one more than the order
A_mat = np.array( [[x**i for x in x_pts] for i in range(m)] ).T
Sigma_mat = np.diag(dy_pts**2)
Vp_mat = a_bar**2 * np.eye(m)
y_vec = y_pts
return A_mat, Sigma_mat, Vp_mat, y_vec
def gaussian_norm(cov_mat):
"""Return the normalization factor for Gaussians.
You can decide whether to use a covariance matrix or its inverse."""
return 1. / np.sqrt(linalg.det(2. * np.pi * cov_mat))
# + slideshow={"slide_type": "subslide"}
# step through the orders
k_max = 10
k_orders = range(k_max)
evidence = np.zeros(k_max)
for k_order in k_orders:
a_bar = 5.
A_mat, Sigma_mat, Vp_mat, y_vec = make_matrices(x_data_pts, y_data_pts,
dy_data, k_order, a_bar)
Sigma_mat_inv = linalg.inv(Sigma_mat)
Lambda_mat = A_mat.T @ Sigma_mat_inv @ A_mat + linalg.inv(Vp_mat)
Lambda_mat_inv = linalg.inv(Lambda_mat)
Vp_mat_inv = linalg.inv(Vp_mat)
a_hat = Lambda_mat_inv @ A_mat.T @ Sigma_mat_inv @ y_vec
chisq_min = (y_vec - A_mat @ a_hat).T @ Sigma_mat_inv @ \
(y_vec - A_mat @ a_hat)
evidence[k_order] = np.sqrt(linalg.det(2.*np.pi*Lambda_mat_inv)) \
* gaussian_norm(Sigma_mat) * np.exp(-chisq_min / 2.) \
* gaussian_norm(Vp_mat) \
* np.exp(- a_hat.T @ Vp_mat_inv @ a_hat / 2.)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(k_orders[1:], evidence[1:], color='blue', marker='o',
linestyle='solid', linewidth=1, markersize=12)
ax.set_title('Evidence [Fig. 8 in J Phys. G 43, 074001]')
ax.set_xlabel(r'$k$')
ax.set_ylabel(r'$p(D1_{5\%} \mid k, k_{\rm max}=k)$')
fig.tight_layout()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Things to try:
# * What do you expect to happen if you increase the range of data (set by `x_max` at the upper end)?
# * What do you expect to happen if you change (first decrease, then increase) the relative error at each point?
# * What happens if you comment out the definition of `y_data_pts` that uses the exact data from the paper and instead generate the noise randomly? Does the pattern of the evidence change? Does the magnitude of the evidence change?
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Notes
# * The simple expression for $\hat a$ here, which minimizes $\chi^2$ (or, equivalently, maximizes the likelihood), analytically reproduces the results we worked hard to get earlier by sampling. The point of that exercise was to illustrate, in a checkable problem, how to do sampling, not because sampling was required in this case.
# -
|
topics/model-selection/Evidence_for_model_EFT_coefficients.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
library("readxl")
library("lubridate")
library('tidyr')
library('ggplot2')
search()
source("helpers.R")
path <- "../data/data.xlsx"
df <- Dataframing(path)
df <- df[-c(210),]
df$Date <- as.Date(df$Date)
firstYields <- df[df$Date == "2022-02-11",]
firstYields
# + [markdown] tags=[]
# #### We assume that:
# - $\alpha_1$ represents the forward rate between 0 and 3 months,
# - $\alpha_2$ the rate between 3 months and 1 year,
# - $\alpha_3$ the rate between 1 and 3 years,
# - $\alpha_4$ the rate between 3 and 5 years,
# - $\alpha_5$ the rate between 5 and 10 years.
# -
# Forward rates:
today <- firstYields$Date
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# #### Moreover, to discount to present value using a forward curve, the discount factor at horizon $T$ is:
# -
# ### $$ e^{-\int_{0}^{T} f(t) \,dt}$$
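With a piecewise-constant forward curve, the integral in the exponent reduces to a sum of rate × year-fraction over the curve buckets. A sketch of that computation (in Python for illustration; the bucket boundaries mirror the $\alpha_1,\dots,\alpha_5$ buckets above, the rates are placeholder values, and the notebook's own `DiscountFactor` helper in `helpers.R` plays this role):

```python
import math

# bucket boundaries in months and the forward rate in each bucket
boundaries = [0, 3, 12, 36, 60, 120]
forwards = [0.10, 0.11, 0.12, 0.13, 0.14]

def discount_factor(t_months):
    """exp(-integral of f(t) dt from 0 to T) with piecewise-constant forwards."""
    integral = 0.0
    for lo, hi, f in zip(boundaries, boundaries[1:], forwards):
        if t_months <= lo:
            break
        integral += f * (min(t_months, hi) - lo) / 12.0  # rate * year fraction
    return math.exp(-integral)

print(discount_factor(3))    # 3 months at 10%
print(discount_factor(12))   # 3 months at 10% plus 9 months at 11%
```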
# #### Let's define some useful objects. How many coupons does each bond pay? How do we adjust the rate for each payment?
alpha <- c(0.1,0.11,0.12,0.13,0.14)
alpha
xx <- c(1:120)
aa <- alphacuts(alpha)
plot(xx,aa, type= "p",
pch=22,
bg = "navy",
col = "navy",
main= "Alphas iniciales",
xlab = "Meses",
ylab = "Alphas")
today = firstYields$Date
# +
threeMonthBond <- 100 + RateConverter(firstYields[["3M"]],3,today)*DiscountFactor(alpha, 3)
threeMonthBond
RateConverter(firstYields[["3M"]],3,today)
RateConverter(firstYields[["1Y"]],12,today)
# -
sixMonthBond <- 100 + RateConverter(firstYields[["6M"]], 6, today)*DiscountFactor(alpha, 6)
sixMonthBond
oneYearBond <- 100 + RateConverter(firstYields[["1Y"]], 6, today) *DiscountFactor(alpha, 6) + RateConverter(firstYields[["1Y"]], 12, today)*DiscountFactor(alpha, 12)
oneYearBond
maturity<- 36
periods <- seq(6, maturity, 6)
threeMonthBond
# Recall that each payment is given by:
#
# $$\$100\times R \times \frac{ACT}{360}$$
singleBondPayment <- function(maturity, yield, alpha, today){
return (RateConverter(yield, maturity,today)*DiscountFactor(alpha, maturity))
}
singleBondPayment(120,firstYields[["10Y"]], alpha, today)
firstYields[["10Y"]]
yieldList <- list(firstYields)
nombres <- colnames(firstYields)
nombres <- nombres[-1]
nombres
nombres <- as.list(nombres)
names(nombres) <- c(120,84,60,36,24,12,6,3)
# +
BondValue <- function (today, maturity, yieldlist,alpha ){
if (maturity == 3){
return (100*DiscountFactor(alpha, maturity) + singleBondPayment(3,yieldlist[["3M"]], alpha, today))
}
setter <- as.character(maturity)
periods <- seq(6, maturity, 6)
bondSum <- sum(unlist(lapply(periods, function(x) singleBondPayment(x, yield = yieldlist[[nombres[[setter]]]], alpha = alpha, today = today))))
return(bondSum +100*DiscountFactor(alpha, maturity))
}
# -
BondValue(today, 120, firstYields, alpha)
maturities <- c(120,84,60,36,24,12,6,3)
alpha <- c(0.00343, 0.0105, 0.0817, 0.1598, 0.2744)
bondValues <- list()
for (i in maturities){
bondValues <- append(bondValues, BondValue(today, i, firstYields, alpha))
}
bondValues
# +
error <-function(alpha){
maturities <- c(3,6,12,24,36,60,84,120)
bondValues <- list()
for (i in maturities){
bondValues <- append(bondValues, BondValue(today, i, firstYields, alpha))
}
error <- sum((unlist(bondValues)-100)**2)
return (error)
}
error(alpha)
# -
opt <- optim(alpha, error, lower = 0, upper= 1, method = "L-BFGS-B")
alpha <- opt$par
alpha
xx <- c(1:120)
aa <- alphacuts(alpha)
plot(xx,aa, type= "p",
pch=22,
bg = "maroon",
col = "maroon",
main= "Alphas óptimos",
xlab = "Meses",
ylab = "Alphas")
# Valuation of the 7-year bond
# +
alpha <- opt$par
pvnotional <- 100 - DiscountFactor(alpha[4], 84)*100
pvnotional
couponsum <- function (c){
pvcoupon <- DiscountFactor(alpha[2], 12)*RateConverter(c, 12,today) + DiscountFactor(alpha[2], 24)*RateConverter(c,24,today) +
DiscountFactor(alpha[3], 36)*RateConverter(c, 36,today) +DiscountFactor(alpha[3], 48)*RateConverter(c, 48,today) +
DiscountFactor(alpha[4], 60)*RateConverter(c, 60,today) + DiscountFactor(alpha[4], 72)*RateConverter(c, 72,today) +
DiscountFactor(alpha[4], 84)*RateConverter(c, 84,today) # the month-84 coupon is counted once
result <-pvnotional- pvcoupon
if (result <0){
return (500)
}
else {
return (pvnotional- pvcoupon)
}
}
opti <- optim(par = 1, fn = couponsum, lower = 0, upper = 10, method = "Brent") # Brent needs a numeric starting value (only lower/upper are used)
opti
# -
# Matrix of curves over the past year
# +
matAlphas <- list()
df <- df[(nrow(df)-52):nrow(df),]
for ( r in 1:nrow(df)){
firstYields <- df[r,]
today <- firstYields$Date
alpha <- opt$par
opt <- optim(alpha, error, lower = 0, upper= 1, method = "L-BFGS-B")
matAlphas <- append(matAlphas, list(opt$par))
}
# -
MatofAlphas <- matrix(unlist(matAlphas), ncol = 5, byrow = TRUE)
MatofAlphas
write.csv(MatofAlphas,"../data/sim.csv", row.names = FALSE)
MatofAlphas <- read.csv("../data/sim.csv")
i_alpha <- MatofAlphas[-nrow(MatofAlphas),]
f_alpha <- MatofAlphas[-1,]
delta_alpha <- f_alpha - i_alpha
cov_delta_a <- cov(delta_alpha)
MatofAlphas
cov_delta_a
eigen_delta <- eigen(cov_delta_a)
eigen(cov_delta_a)
A <- eigen_delta$vectors
sqrt <- sqrt(eigen_delta$values)
A
sqrt
B <- t(t(A)*sqrt)
B
Bbase <- as.data.frame(B)
Bbase$N <- c(1,2,3,4,5)
Bbase <- pivot_longer(Bbase, cols=1:5,names_to= "component", values_to='Values')
ggplot(Bbase, aes(x = N, y = Values, color = component)) +
geom_line() +
geom_point()
# project onto the eigenvector basis: matrix product, not elementwise '*'
Y <- as.matrix(MatofAlphas) %*% A
Y
W <- as.matrix(MatofAlphas) %*% B
W
best_components <- W[,1:2]
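The eigen-decomposition and scaled-loadings steps above amount to PCA on the weekly curve moves. A sketch of the same computation (in Python for illustration, with synthetic curve moves standing in for `delta_alpha`):

```python
import numpy as np

rng = np.random.default_rng(42)
# 52 weekly moves of a 5-bucket curve, with decreasing variance per bucket
delta_alpha = rng.normal(size=(52, 5)) @ np.diag([3., 2., 1., .5, .2])

cov = np.cov(delta_alpha, rowvar=False)      # 5x5 covariance of the moves
eigvals, A = np.linalg.eigh(cov)             # eigh returns ascending order
eigvals, A = eigvals[::-1], A[:, ::-1]       # reorder: largest first
B = A * np.sqrt(eigvals)                     # scale each column by sqrt(lambda)

# project the moves onto the components: note the matrix product,
# not an elementwise '*'
scores = delta_alpha @ A
# the variance of each score column reproduces the corresponding eigenvalue
assert np.allclose(np.var(scores, axis=0, ddof=1), eigvals)
```

Keeping only the first two score columns is the analogue of `best_components` above.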
librerias <- c("forecast","xts","rugarch","timeSeries","ggplot2","astsa","scales","lubridate","reshape2","quantmod","xtable","tseries")
if(length(setdiff(librerias, rownames(installed.packages()))) > 0){
install.packages(setdiff(librerias, rownames(installed.packages())))}
invisible(sapply(librerias, require, character.only = TRUE,quietly = TRUE))
Arima_1 <- auto.arima(best_components[,1],stepwise = T,approximation = F)
Box.test(Arima_1$residuals)
Arima_2 <- auto.arima(best_components[,2],stepwise = T,approximation = F)
Box.test(Arima_2$residuals)
# Simulations into the future
# +
future <- 52
simulations <- 1000
sim_prueb <- simulate(object = Arima_1,nsim = future)
plot(sim_prueb)
# use simulate() so that each replication is a distinct stochastic path
# (forecast() is deterministic and would repeat the same values)
Sim_component_1 <- replicate(n = simulations, simulate(object = Arima_1, nsim = future))
Sim_component_2 <- replicate(n = simulations, simulate(object = Arima_2, nsim = future))
Bt1 <- t(B[,1])
Bt2 <- t(B[,2])
delta <- matrix(NA, nrow = future, ncol = simulations)
for (i in 1:future){
for (j in 1:simulations){
delta[i,j] <- sum(Sim_component_1[i,j]*Bt1)+sum(Sim_component_2[i,j]*Bt2)
}
}
# +
forwards <- list()
alpha <- c( 0.000947582290432353,0.00474077085158422,0.0310741501203435,0.0572666472357455,0.108575410857186)
for (i in 1:simulations){
newalph <- alpha
for (j in 1:future){
newalph <- newalph + delta[j,i]
if (j==52){
forwards <- append(forwards, newalph)
}
}
}
matforwards <- matrix(forwards, ncol=5,byrow=T)
# -
matforwards
# +
couponsim <- list()
c <- 3.93957166428661
for (i in 1:simulations){
alpha <- lapply(as.vector(matforwards[i,]), as.numeric)
pvnotional <- DiscountFactor(as.numeric(alpha[4]), 84 )*100
pvnotional
pvcoupon <- DiscountFactor(as.numeric(alpha[2]), 12)*RateConverter(c, 12,today) + DiscountFactor(as.numeric(alpha[2]), 24)*RateConverter(c,24,today) +
DiscountFactor(as.numeric(alpha[3]), 36)*RateConverter(c, 36,today) +DiscountFactor(as.numeric(alpha[3]), 48)*RateConverter(c, 48,today) +
DiscountFactor(as.numeric(alpha[4]), 60)*RateConverter(c, 60,today) + DiscountFactor(as.numeric(alpha[4]), 72)*RateConverter(c, 72,today) +
DiscountFactor(as.numeric(alpha[4]), 84)*RateConverter(c, 84,today) # the month-84 coupon is counted once
result <-pvnotional + pvcoupon
couponsim <- append(couponsim,list(result))
}
matcoupon <- matrix(couponsim, ncol=1)
# +
matcoupon <- as.data.frame(matcoupon)
bondvaluesim <- as.numeric(matcoupon$V1)
# +
hist(bondvaluesim)
summary(bondvaluesim)
# -
train <- best_components[1:40,1]
test <- best_components[40:nrow(best_components),1]
Arima_1 <- auto.arima(train,stepwise = T,approximation = F)
forcast_1 <- forecast(Arima_1,test, h=52)
|
Taller 1/scripts/first.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "Weeknotes: Question answering with 🤗 transformers, mock interviews"
#
# - comments: false
# - categories: [weeknotes,nlp,huggingface,transformers]
# - badges: false
# During my PhD and postdoc, I kept detailed research notes that I would often revisit to reproduce a lengthy calculation or simply take stock of the progress I'd made on my projects.
#
# 
#
# For various reasons, I dropped this habit when I switched to industry{% fn 1 %} and nowadays find myself digging out code snippets or techniques from a tangle of Google Docs, Git repositories, and Markdown files that I've built up over the years.
#
# To break this anti-pattern, I've decided to "work in public" as much as possible this year, mostly in the form of [TILs](https://www.urbandictionary.com/define.php?term=TIL) and weeknotes. Here, I am drawing inspiration from the prolific [<NAME>](https://twitter.com/simonw?s=20), whose [blog](https://simonwillison.net/) meticulously documents the development of his open-source projects.{% fn 2 %}
#
# To that end, here's the first weeknotes of the year - hopefully they're not the last!
# ## Question answering
# This week I've been doing a deep dive into extractive question answering as part of a book chapter I'm writing on compression methods for Transformers. Although I built a question answering PoC with BERT in the dark ages of 2019, I was curious to see how the implementation could be done in the `transformers` library, specifically with a custom `Trainer` class and running everything inside Jupyter notebooks.
#
# Fortunately, [<NAME>](https://twitter.com/GuggerSylvain?s=20) at HuggingFace had already implemented
#
# * A [tutorial](https://github.com/huggingface/notebooks/blob/master/examples/question_answering.ipynb) on fine-tuning language models for question answering, but without a custom `Trainer`
# * A custom [`QuestionAnsweringTrainer`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/trainer_qa.py) as part of the [question answering scripts](https://github.com/huggingface/transformers/tree/master/examples/question-answering) in `transformers`
#
# so my warm-up task this week was to simply merge the two in a single notebook and fine-tune `bert-base-uncased` on SQuAD v1.
#
# I implemented a _very_ scrappy version that achieves this in my [`transformerlab`](https://github.com/lewtun/transformerlab) repository, and the main lesson I learnt is that
#
# > Dealing with context size is tricky for long documents
#
# Transformer models can only process a finite number of input tokens, a property usually referred to as the maximum context size. As described in Sylvain's tutorial, naive truncation of documents for question answering is problematic because
#
# > removing part of the context might result in losing the answer we are looking for.
#
# The solution is to apply a _sliding window_{% fn 3 %} to the input context, so that long contexts are split into _multiple_ features. An example from the tutorial shows how this works by introducing two new hyperparameters `max_length` and `doc_stride` that control the degree of overlap (bold shows the overlapping region):
# > [CLS] how many wins does the notre dame men's basketball team have? [SEP] the men's basketball team has over 1, 600 wins, one of only 12 schools who have reached that mark, and have appeared in 28 ncaa tournaments. former player <NAME> holds the record for most points scored in a single game of the tournament with 61. although the team has never won the ncaa tournament, they were named by the helms athletic foundation as national champions twice. the team has orchestrated a number of upsets of number one ranked teams, the most notable of which was ending ucla's record 88 - game winning streak in 1974. the team has beaten an additional eight number - one teams, and those nine wins rank second, to ucla's 10, all - time in wins against the top team. the team plays in newly renovated purcell pavilion ( within the edmund p. joyce center ), which reopened for the beginning of the 2009 – 2010 season. the team is coached by <NAME>, who, as of the 2014 – 15 season, his fifteenth at notre dame, has achieved a 332 - 165 record. in 2009 they were invited to the nit, where they advanced to the semifinals but were beaten by penn state who went on and beat baylor in the _**championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey's fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were**_ [SEP]
# >
# > [CLS] how many wins does the notre dame men's basketball team have? [SEP] championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey's fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were the most by the fighting irish team since 1908 - 09. [SEP]
# Remarkably, `transformers` supports this preprocessing logic out of the box, so one just has to specify a few arguments in the tokenizer:
#
# ```python
# tokenized_example = tokenizer(
# example["question"],
# example["context"],
# max_length=max_length,
# truncation="only_second",
# return_overflowing_tokens=True,
# return_offsets_mapping=True,
# stride=doc_stride)
# ```
#
# One drawback from this approach is that it introduces significant complexity into the data preparation step:
#
# * With multiple features per example, one needs to do some heavy wrangling to pick out the start and end positions of each answer. For example, the `postprocess_qa_predictions` function in Sylvain's tutorial is about 80 lines long, and breaking this down for readers is likely to distract from the main focus on compression methods.
# * We need slightly different logic for preprocessing the training and validation sets (see the `prepare_train_features` and `prepare_validation_features` functions)
#
# Instead, I may opt for the simpler, but less rigorous, approach of truncating the long examples. As shown in the `transformers` [docs](https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0), we'd only need to define a custom dataset
#
# ```python
# import torch
#
# class SquadDataset(torch.utils.data.Dataset):
# def __init__(self, encodings):
# self.encodings = encodings
#
# def __getitem__(self, idx):
# return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
#
# def __len__(self):
# return len(self.encodings.input_ids)
# ```
#
# and then pass the encoding for the training and validation sets as follows:
#
# ```python
# train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
# val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)
#
# train_dataset = SquadDataset(train_encodings)
# val_dataset = SquadDataset(val_encodings)
# ```
#
# From here we can just use the native `Trainer` in `transformers`, together with the `squad` metric from `datasets`. By looking at the distribution of question and context lengths, we can see that this simplification will only fail in a very small number of examples:
#
# 
#
# Another alternative would be to adopt the "retriever-reader" architecture that I used in my PoC (where I split long documents into smaller paragraphs), but that introduces its own set of complexity that I'd like to avoid.
# ## Running a mock interview
#
# A friend of mine is applying for a research scientist position and we thought it would be fun to run a couple of mock interviews together. Since the position is likely to involve Transformers, I asked my friend a few GPT-related questions (e.g. how does the architecture differ from BERT and what is the difference between GPT / GPT-2 and GPT-3?), followed by a coding session to see how fast one could implement GPT from scratch. The goal was to approach a skeleton of [<NAME>'s](https://karpathy.ai/) excellent [minGPT](https://github.com/karpathy/minGPT) implementation
# > twitter: https://twitter.com/karpathy/status/1295410274095095810?s=20
# and the experience taught me a few lessons:
#
# * There's a significant difference between being a power-user of a library like `transformers` versus deeply knowing how every layer, activation function, etc in a deep neural architecture is put together. Running the interview reminded me that I should aim to block some time per week to hone the foundations of my machine learning knowledge.
# * Open-ended coding interviews like this are way more fun to conduct than the usual LeetCode / HackerRank problems one usually encounters in industry. To me, they resemble a pair-programming interaction that gives the interviewer a pretty good feel for what it would be like to work closely with the candidate. Something to remember the next time I'm interviewing people for a real job!
# ## Papers this week
#
# This week I've been mostly reading papers on compressing Transformers and how to improve few-shot learning _without_ resorting to massive scaling:
#
# * [_DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter_](https://arxiv.org/abs/1910.01108) by <NAME>, <NAME>, <NAME>, and <NAME> (2019)
# * [_FastFormers: Highly Efficient Transformer Models for Natural Language Understanding_](https://arxiv.org/abs/2010.13382) by <NAME> and <NAME> (2020)
# * [_Uncertainty-aware Self-training for Text Classification with Few Labels_](https://arxiv.org/abs/2006.15315) by <NAME> and <NAME> (2020)
#
# This week also coincided with the release of OpenAI's [DALL-E](https://openai.com/blog/dall-e/) which, although light on implementation details, provided a fun interface to see how far you can push the limits of text-to-image generation:
# 
# ## TIL this week
#
# * [Polling a web service with bash and jq](https://lewtun.github.io/blog/til/2021/01/07/til-poll-api-with-bash.html)
# #hide
#
# ## Footnotes
# {{ 'Mostly due to playing an insane game of "data science catch-up" at an early-stage startup.' | fndetail: 1 }}
#
# {{ 'Even down to the level of reviewing his own [pull requests](https://github.com/simonw/datasette/pull/1117)!' | fndetail: 2 }}
#
# {{ 'We want a _sliding_ window instead of a _tumbling_ one because the answer might appear across the boundary of the two windows.' | fndetail: 3 }}
|
_notebooks/2021-01-10-wknotes-question-answering.ipynb
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Apache Toree - Scala
// language: scala
// name: apache_toree_scala
// ---
// # Mask a GeoTiff
//
// In this notebook the user loads two GeoTiffs, extracts a Tile from the first GeoTiff, and masks it with the second GeoTiff.
// ## Dependencies
// +
import sys.process._
import geotrellis.proj4.CRS
import geotrellis.raster.io.geotiff.writer.GeoTiffWriter
import geotrellis.raster.io.geotiff.{SinglebandGeoTiff, _}
import geotrellis.raster.{CellType, DoubleArrayTile}
import geotrellis.spark.io.hadoop._
import geotrellis.vector.{Extent, ProjectedExtent}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
//Spire is a numeric library for Scala which is intended to be generic, fast, and precise.
import spire.syntax.cfor._
// -
sc.getConf.getAll
// ## Read a GeoTiff file
// +
var geo_projected_extent = new ProjectedExtent(new Extent(0,0,0,0), CRS.fromName("EPSG:3857"))
var geo_num_cols_rows :(Int, Int) = (0, 0)
val geo_path = "hdfs:///user/pheno/modis/usa_mask.tif"
val geo_tiles_RDD = sc.hadoopGeoTiffRDD(geo_path).values
val geo_extents_withIndex = sc.hadoopMultibandGeoTiffRDD(geo_path).keys.zipWithIndex().map{case (e,v) => (v,e)}
geo_projected_extent = (geo_extents_withIndex.filter(m => m._1 == 0).values.collect())(0)
val geo_tiles_withIndex = geo_tiles_RDD.zipWithIndex().map{case (e,v) => (v,e)}
val geo_tile0 = (geo_tiles_withIndex.filter(m => m._1==0).values.collect())(0)
geo_num_cols_rows = (geo_tile0.cols, geo_tile0.rows)
val geo_cellT = geo_tile0.cellType
// -
// ## Read Mask
val mask_path = "hdfs:///user/hadoop/modis/usa_mask.tif"
val mask_tiles_RDD = sc.hadoopGeoTiffRDD(mask_path).values
val mask_tiles_withIndex = mask_tiles_RDD.zipWithIndex().map{case (e,v) => (v,e)}
val mask_tile0 = (mask_tiles_withIndex.filter(m => m._1==0).values.collect())(0)
// ## Mask GeoTiff
val res_tile = geo_tile0.localInverseMask(mask_tile0, 1, 0).toArrayDouble()
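// As a rough illustration of what the masking step computes, here is a small numpy sketch (Python, to show the semantics outside of Spark/GeoTrellis). The assumed behavior of `localInverseMask(mask, readMask, writeValue)` is that cells where the mask equals `readMask` are kept and all other cells are overwritten with `writeValue`; this is my reading of the operation, not a statement of the GeoTrellis API.

```python
import numpy as np

def local_inverse_mask(tile, mask, read_mask, write_value):
    """Keep tile cells where mask == read_mask; write write_value elsewhere
    (assumed semantics of GeoTrellis localInverseMask)."""
    return np.where(mask == read_mask, tile, write_value)

tile = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[1, 0], [1, 1]])
masked = local_inverse_mask(tile, mask, 1, 0)  # [[1., 0.], [3., 4.]]
```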
// ## Save the new GeoTiff file
// +
val clone_tile = DoubleArrayTile(res_tile, geo_num_cols_rows._1, geo_num_cols_rows._2)
val cloned = geotrellis.raster.DoubleArrayTile.empty(geo_num_cols_rows._1, geo_num_cols_rows._2)
cfor(0)(_ < geo_num_cols_rows._1, _ + 1) { col =>
cfor(0)(_ < geo_num_cols_rows._2, _ + 1) { row =>
val v = clone_tile.getDouble(col, row)
cloned.setDouble(col, row, v)
}
}
val geoTif = new SinglebandGeoTiff(cloned, geo_projected_extent.extent, geo_projected_extent.crs, Tags.empty, GeoTiffOptions(compression.DeflateCompression))
//Save GeoTiff to /tmp
val output = "/user/pheno/modis/modis_usa_mask.tif"
val tmp_output = "/tmp/modis_usa_mask.tif"
GeoTiffWriter.write(geoTif, tmp_output)
//Upload to HDFS
var cmd = "hadoop dfs -copyFromLocal -f " + tmp_output + " " + output
Process(cmd).!
cmd = "rm -fr " + tmp_output
Process(cmd).!
// -
|
applications/notebooks/examples/scala/mask_geotiff.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cosmo
# language: python
# name: cosmo
# ---
# In this notebook, we aim to recover the convergence map $\kappa$ from the observed shear $\gamma$.
#
# The shear map can be obtained by the Kaiser-Squires transformation: $\gamma = \mathrm{TFP}^* \kappa$, where
#
# <ul>
# <li>$\mathrm{T}$ is a Non-equispaced Discrete Fourier Transform, so not necessarily invertible,</li>
# <li>$\mathrm{P}$ is a diagonal operator, implementing the transformation from convergence to shear in Fourier space,</li>
# <li>$\mathrm{F}$ is the Fourier matrix.</li>
# </ul>
#
# Since this transformation is not always invertible, we recover the convergence map $\kappa$ by maximum likelihood:
#
# $\begin{aligned}
# \kappa &= \underset{\kappa}{\operatorname{argmax}} p(\gamma|\kappa)
# \end{aligned}$
#
# Here, we suppose that we observe the shear $\gamma$ as the Kaiser-Squires transformation of the convergence $\kappa$ contaminated by additive Gaussian noise $n\sim \mathcal{N}(0, \bf{I})$. Then, $\gamma \sim \mathcal{N}(\mathrm{TFP}^*\kappa, \bf{I})$ and
#
# $\begin{aligned}
# \kappa &= \underset{\kappa}{\operatorname{argmax}} \dfrac{1}{(2\pi)^{N/2}} \exp \left(-\frac{1}{2}\|\gamma - \mathrm{TFP}^* \kappa\|_2^2 \right) \\
# &= \underset{\kappa}{\operatorname{argmin}} \|\gamma - \mathrm{TFP}^* \kappa\|_2^2
# \end{aligned}$.
#
# We can also add a regularizer to this likelihood, such as the $\ell_2$-norm of the convergence (corresponding to a Gaussian prior) or the $\ell_1$-norm to enforce sparsity.
#
# Finally, the convergence maps are obtained by minimizing these criteria with stochastic gradient descent.
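# Before turning to the real operators, this maximum-likelihood recovery can be sanity-checked on a toy problem. The sketch below is plain numpy; the operator `p` and every name in it are illustrative assumptions, not the `jax_lensing` API. It minimizes $\|\gamma - A\kappa\|_2^2$ by gradient descent for an invertible, Fourier-diagonal $A$, in which case the iterates recover $\kappa$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
k = np.arange(n)
# toy stand-in for the operator TFP^*: diagonal in Fourier space,
# with |p| bounded away from zero so the problem is well conditioned
p = (1.0 + 0.5 * np.cos(2 * np.pi * k / n)) * np.exp(1j * 2 * np.pi * k / n)

def forward(kappa):
    # gamma = F^{-1} diag(p) F kappa
    return np.fft.ifft(p * np.fft.fft(kappa))

def adjoint(res):
    # adjoint of `forward` under the standard complex inner product
    return np.fft.ifft(np.conj(p) * np.fft.fft(res))

kappa_true = rng.normal(size=n)
gamma = forward(kappa_true)

# gradient descent on 1/2 * ||gamma - A kappa||^2; the gradient is A^*(A kappa - gamma)
kappa = np.zeros(n, dtype=complex)
lr = 1.0 / np.max(np.abs(p)) ** 2  # safe step size: 1 / largest eigenvalue of A^*A
for _ in range(300):
    kappa = kappa - lr * adjoint(forward(kappa) - gamma)
```

# With a well-conditioned $A$ the iterates converge geometrically to `kappa_true`; the non-invertible case in this notebook is exactly where the regularizers above become necessary.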
# +
import matplotlib.pyplot as plt
from lenspack.utils import bin2d
from lenspack.peaks import find_peaks2d
import jax_lensing
from jax_lensing.inversion import ks93
from jax_lensing.inverse_problem import gamma2kappa, square_norm, square_norm_smooth, square_norm_sparse
from astropy.table import Table
# -
def plot_convergence(kappaE, x, y, title=''):
# Plot peak positions over the convergence
fig, ax = plt.subplots(1, 1, figsize=(7, 5.5))
mappable = ax.imshow(kappaE, origin='lower', cmap='bone')
ax.scatter(y, x, s=10, c='orange') # reverse x and y due to array indexing
ax.set_axis_off()
fig.colorbar(mappable)
plt.title(title)
# +
# Import the galaxy catalog
cat = Table.read('../data/gal_cat.fits')
# Bin ellipticity components based on galaxy position into a 128 x 128 map
e1map, e2map = bin2d(cat['ra'], cat['dec'], v=(-cat['gamma1'], -cat['gamma2']), npix=128)
# -
# **Recover the convergence via Kaiser-Squires inversion**
# +
# Recover convergence via Kaiser-Squires inversion
kappaE, kappaB = ks93(e1map, e2map)
x, y, h = find_peaks2d(kappaE, threshold=0.03, include_border=True)
# Plot peak positions over the convergence
plot_convergence(kappaE, x, y, title='$\kappa$ via Kaiser-Squires inversion')
plt.show()
# -
# **Recover the convergence via minimizing the following objective function:**
# $$\|\gamma - \mathrm{TFP}^* \kappa\|_2^2$$
# where $\mathrm{TFP}^*$ corresponds to the Kaiser-Squires transformation.
# +
# Recover convergence minimizing the squared error
kEhat, kBhat = gamma2kappa(e1map, e2map, obj=square_norm, kappa_shape=kappaE.shape)
x, y, h = find_peaks2d(kEhat, threshold=0.03, include_border=True)
plot_convergence(kEhat, x, y, title='$\kappa$ via squared error minimization')
# -
# **With a *smooth* regularisation:**
# $$\|\gamma - \mathrm{TFP}^* \kappa\|_2^2 + \lambda \|\kappa\|_2^2$$
# where $\mathrm{TFP}^*$ corresponds to the Kaiser-Squires transformation.
# +
# Recover convergence minimizing the squared error with smooth regularization
kEhat, kBhat = gamma2kappa(e1map, e2map, obj=square_norm_smooth, kappa_shape=kappaE.shape)
x, y, h = find_peaks2d(kEhat, threshold=0.03, include_border=True)
# Plot peak positions over the convergence
plot_convergence(kEhat, x, y, title='$\kappa$ via squared error minimization (smooth reg.)')
# -
# **With a *sparse* regularisation:**
# $$\|\gamma - \mathrm{TFP}^* \kappa\|_2^2 + \lambda \|\kappa\|_1$$
# where $\mathrm{TFP}^*$ corresponds to the Kaiser-Squires transformation.
# +
# Recover convergence minimizing the squared error with sparse regularization
kEhat, kBhat = gamma2kappa(e1map, e2map, obj=square_norm_sparse, kappa_shape=kappaE.shape)
x, y, h = find_peaks2d(kEhat, threshold=0.03, include_border=True)
# Plot peak positions over the convergence
plot_convergence(kEhat, x, y, title='$\kappa$ via squared error minimization (sparse reg.)')
|
notebooks/point-estimate.ipynb
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .fs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: .NET (F#)
// language: F#
// name: .net-fsharp
// ---
// <h2>--- Day 10: Syntax Scoring ---</h2>
// [](https://mybinder.org/v2/gh/oddrationale/AdventOfCode2021FSharp/main?urlpath=lab%2Ftree%2FDay10.ipynb)
// <p>You ask the submarine to determine the best route out of the deep-sea cave, but it only replies:</p>
// <pre><code>Syntax error in navigation subsystem on line: <span title="Some days, that's just how it is.">all of them</span></code></pre>
// <p><em>All of them?!</em> The damage is worse than you thought. You bring up a copy of the navigation subsystem (your puzzle input).</p>
// <p>The navigation subsystem syntax is made of several lines containing <em>chunks</em>. There are one or more chunks on each line, and chunks contain zero or more other chunks. Adjacent chunks are not separated by any delimiter; if one chunk stops, the next chunk (if any) can immediately start. Every chunk must <em>open</em> and <em>close</em> with one of four legal pairs of matching characters:</p>
// <ul>
// <li>If a chunk opens with <code>(</code>, it must close with <code>)</code>.</li>
// <li>If a chunk opens with <code>[</code>, it must close with <code>]</code>.</li>
// <li>If a chunk opens with <code>{</code>, it must close with <code>}</code>.</li>
// <li>If a chunk opens with <code><</code>, it must close with <code>></code>.</li>
// </ul>
// <p>So, <code>()</code> is a legal chunk that contains no other chunks, as is <code>[]</code>. More complex but valid chunks include <code>([])</code>, <code>{()()()}</code>, <code><([{}])></code>, <code>[<>({}){}[([])<>]]</code>, and even <code>(((((((((())))))))))</code>.</p>
// <p>Some lines are <em>incomplete</em>, but others are <em>corrupted</em>. Find and discard the corrupted lines first.</p>
// <p>A corrupted line is one where a chunk <em>closes with the wrong character</em> - that is, where the characters it opens and closes with do not form one of the four legal pairs listed above.</p>
// <p>Examples of corrupted chunks include <code>(]</code>, <code>{()()()></code>, <code>(((()))}</code>, and <code><([]){()}[{}])</code>. Such a chunk can appear anywhere within a line, and its presence causes the whole line to be considered corrupted.</p>
// <p>For example, consider the following navigation subsystem:</p>
// <pre><code>[({(<(())[]>[[{[]{<()<>>
// [(()[<>])]({[<{<<[]>>(
// {([(<{}[<>[]}>{[]{[(<()>
// (((({<>}<{<{<>}{[]{[]{}
// [[<[([]))<([[{}[[()]]]
// [{[{({}]{}}([{[{{{}}([]
// {<[[]]>}<{[{[{[]{()[[[]
// [<(<(<(<{}))><([]([]()
// <{([([[(<>()){}]>(<<{{
// <{([{{}}[<[[[<>{}]]]>[]]
// </code></pre>
// <p>Some of the lines aren't corrupted, just incomplete; you can ignore these lines for now. The remaining five lines are corrupted:</p>
// <ul>
// <li><code>{([(<{}[<>[]}>{[]{[(<()></code> - Expected <code>]</code>, but found <code>}</code> instead.</li>
// <li><code>[[<[([]))<([[{}[[()]]]</code> - Expected <code>]</code>, but found <code>)</code> instead.</li>
// <li><code>[{[{({}]{}}([{[{{{}}([]</code> - Expected <code>)</code>, but found <code>]</code> instead.</li>
// <li><code>[<(<(<(<{}))><([]([]()</code> - Expected <code>></code>, but found <code>)</code> instead.</li>
// <li><code><{([([[(<>()){}]>(<<{{</code> - Expected <code>]</code>, but found <code>></code> instead.</li>
// </ul>
// <p>Stop at the first incorrect closing character on each corrupted line.</p>
// <p>Did you know that syntax checkers actually have contests to see who can get the high score for syntax errors in a file? It's true! To calculate the syntax error score for a line, take the <em>first illegal character</em> on the line and look it up in the following table:</p>
// <ul>
// <li><code>)</code>: <code>3</code> points.</li>
// <li><code>]</code>: <code>57</code> points.</li>
// <li><code>}</code>: <code>1197</code> points.</li>
// <li><code>></code>: <code>25137</code> points.</li>
// </ul>
// <p>In the above example, an illegal <code>)</code> was found twice (<code>2*3 = <em>6</em></code> points), an illegal <code>]</code> was found once (<code><em>57</em></code> points), an illegal <code>}</code> was found once (<code><em>1197</em></code> points), and an illegal <code>></code> was found once (<code><em>25137</em></code> points). So, the total syntax error score for this file is <code>6+57+1197+25137 = <em>26397</em></code> points!</p>
// <p>Find the first illegal character in each corrupted line of the navigation subsystem. <em>What is the total syntax error score for those errors?</em></p>
// + dotnet_interactive={"language": "fsharp"}
let input = File.ReadAllLines @"input/10.txt"
// -
let invert =
function
| '(' -> ')'
| '<' -> '>'
| '[' -> ']'
| '{' -> '}'
| ')' -> '('
| '>' -> '<'
| ']' -> '['
| '}' -> '{'
| _ -> raise <| new Exception("Invalid character.")
let findUnexpectedChar (line: string) =
let rec loop stack remainder =
match remainder with
| [] -> None
| curr::remainder' ->
match curr with
| '(' | '<' | '[' | '{' ->
loop ([curr] @ stack) remainder'
| _ ->
match stack with
| [] -> loop stack remainder'
| pop::tail ->
if invert curr = pop then
loop tail remainder'
else
Some curr
loop List.empty (line |> List.ofSeq)
let syntaxCheckerPoints =
function
| ')' -> 3
| ']' -> 57
| '}' -> 1197
| '>' -> 25137
| _ -> 0
#!time
input
|> Seq.map findUnexpectedChar
|> Seq.choose id
|> Seq.map syntaxCheckerPoints
|> Seq.sum
// <h2 id="part2">--- Part Two ---</h2>
// <p>Now, discard the corrupted lines. The remaining lines are <em>incomplete</em>.</p>
// <p>Incomplete lines don't have any incorrect characters - instead, they're missing some closing characters at the end of the line. To repair the navigation subsystem, you just need to figure out <em>the sequence of closing characters</em> that complete all open chunks in the line.</p>
// <p>You can only use closing characters (<code>)</code>, <code>]</code>, <code>}</code>, or <code>></code>), and you must add them in the correct order so that only legal pairs are formed and all chunks end up closed.</p>
// <p>In the example above, there are five incomplete lines:</p>
// <ul>
// <li><code>[({(<(())[]>[[{[]{<()<>></code> - Complete by adding <code>}}]])})]</code>.</li>
// <li><code>[(()[<>])]({[<{<<[]>>(</code> - Complete by adding <code>)}>]})</code>.</li>
// <li><code>(((({<>}<{<{<>}{[]{[]{}</code> - Complete by adding <code>}}>}>))))</code>.</li>
// <li><code>{<[[]]>}<{[{[{[]{()[[[]</code> - Complete by adding <code>]]}}]}]}></code>.</li>
// <li><code><{([{{}}[<[[[<>{}]]]>[]]</code> - Complete by adding <code>])}></code>.</li>
// </ul>
// <p>Did you know that autocomplete tools <em>also</em> have contests? It's true! The score is determined by considering the completion string character-by-character. Start with a total score of <code>0</code>. Then, for each character, multiply the total score by 5 and then increase the total score by the point value given for the character in the following table:</p>
// <ul>
// <li><code>)</code>: <code>1</code> point.</li>
// <li><code>]</code>: <code>2</code> points.</li>
// <li><code>}</code>: <code>3</code> points.</li>
// <li><code>></code>: <code>4</code> points.</li>
// </ul>
// <p>So, the last completion string above - <code>])}></code> - would be scored as follows:</p>
// <ul>
// <li>Start with a total score of <code>0</code>.</li>
// <li>Multiply the total score by 5 to get <code>0</code>, then add the value of <code>]</code> (2) to get a new total score of <code>2</code>.</li>
// <li>Multiply the total score by 5 to get <code>10</code>, then add the value of <code>)</code> (1) to get a new total score of <code>11</code>.</li>
// <li>Multiply the total score by 5 to get <code>55</code>, then add the value of <code>}</code> (3) to get a new total score of <code>58</code>.</li>
// <li>Multiply the total score by 5 to get <code>290</code>, then add the value of <code>></code> (4) to get a new total score of <code>294</code>.</li>
// </ul>
// <p>The five lines' completion strings have total scores as follows:</p>
// <ul>
// <li><code>}}]])})]</code> - <code>288957</code> total points.</li>
// <li><code>)}>]})</code> - <code>5566</code> total points.</li>
// <li><code>}}>}>))))</code> - <code>1480781</code> total points.</li>
// <li><code>]]}}]}]}></code> - <code>995444</code> total points.</li>
// <li><code>])}></code> - <code>294</code> total points.</li>
// </ul>
// <p>Autocomplete tools are an odd bunch: the winner is found by <em>sorting</em> all of the scores and then taking the <em>middle</em> score. (There will always be an odd number of scores to consider.) In this example, the middle score is <code><em>288957</em></code> because there are the same number of scores smaller and larger than it.</p>
// <p>Find the completion string for each incomplete line, score the completion strings, and sort the scores. <em>What is the middle score?</em></p>
let complete (line: string) =
let rec loop stack remainder =
match remainder with
| [] -> stack |> Seq.map invert |> String.Concat
| curr::remainder' ->
match curr with
| '(' | '<' | '[' | '{' ->
loop ([curr] @ stack) remainder'
| _ ->
match stack with
| [] -> loop stack remainder'
| pop::tail -> loop tail remainder'
loop List.empty (line |> List.ofSeq)
// + dotnet_interactive={"language": "fsharp"}
let autocompletePoints =
function
| ')' -> 1L
| ']' -> 2L
| '}' -> 3L
| '>' -> 4L
| _ -> 0L
// -
let score str =
str
|> Seq.fold (fun acc c -> acc * 5L + (c |> autocompletePoints)) 0L
#!time
input
|> Seq.filter (fun line -> line |> findUnexpectedChar |> Option.isNone)
|> Seq.map complete
|> Seq.map score
|> Seq.sort
|> fun scores ->
scores |> Seq.item ((scores |> Seq.length) / 2)
// [Prev](Day09.ipynb) | [Next](Day11.ipynb)
|
Day10.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# author=cxf
# date=2020-8-8
# extracts features of the RNCU distribution via FFT
import pandas as pd
import numpy as np
import numpy.fft as nf
import matplotlib.pyplot as mp
import scipy.interpolate as si
# get RNCU distribution ->x
distribution_1_value = open('../0.prepare_processing/run1/machine_X_values.txt')
distribution_2_value = open('../0.prepare_processing/run2/machine_X_values.txt')
distribution_1_index = open('../0.prepare_processing/run1/machine_X_index.txt')
distribution_2_index = open('../0.prepare_processing/run2/machine_X_index.txt')
sample_list = []
total_list = []
# top-ten frequencies and amplitudes, one list per rank
freq_lists = [[] for _ in range(10)]
pow_lists = [[] for _ in range(10)]

def extract_features(value_file, index_file):
    """Extract the ten strongest FFT components of each sample's RNCU distribution."""
    for value in value_file:
        sample = value[:-1].split(',')[0]
        sample_list.append(sample)
        y = [int(i) for i in value[:-1].split(',')[1:]]
        total = sum(y)
        y = [i / total for i in y]
        total_list.append(total)
        index = index_file.readline()
        x = [int(i) for i in index[:-1].split(',')[1:]]
        # interpolate onto an equispaced grid so the FFT is well defined
        linear = si.interp1d(x, y, kind='linear')
        x = list(range(1, int(x[-1]) + 1))
        y = linear(x)
        complex_ary = nf.fft(y)
        # get frequency of all sinusoids
        freqs = nf.fftfreq(complex_ary.size, 1)
        # get amplitude of all sinusoids
        pows = np.abs(complex_ary)
        # get the ten sinusoids with the largest amplitude and record their frequency and amplitude
        freq_top_ten = list(freqs[freqs > 0][np.argsort(-pows[freqs > 0])][:10])
        pow_top_ten = list(-np.sort(-pows[freqs > 0])[:10])
        # pad with zeros when fewer than ten positive frequencies exist
        for _ in range(10 - len(pow_top_ten)):
            pow_top_ten.append(0)
            freq_top_ten.append(0)
        for rank in range(10):
            freq_lists[rank].append(freq_top_ten[rank])
            pow_lists[rank].append(pow_top_ten[rank])
        # optional per-sample inspection of the extracted features:
        # mp.figure(figsize=(10, 5))
        # mp.subplot(121); mp.grid(ls=':')
        # mp.bar(range(1, 11), pow_top_ten, label='amplitude'); mp.legend()
        # mp.subplot(122); mp.grid(ls=':')
        # mp.bar(range(1, 11), freq_top_ten, label='frequency'); mp.legend()
        # mp.show()

for value_file, index_file in ((distribution_1_value, distribution_1_index),
                               (distribution_2_value, distribution_2_index)):
    extract_features(value_file, index_file)

distribution = pd.DataFrame()
distribution['sample'] = sample_list
for rank in range(10):
    distribution['freq%d' % (rank + 1)] = freq_lists[rank]
for rank in range(10):
    distribution['pow%d' % (rank + 1)] = pow_lists[rank]
distribution['total']=total_list
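# The top-ten extraction can be sanity-checked on a synthetic signal whose spectrum is known: for a sum of two sinusoids on exact FFT bins, the two strongest positive-frequency components should come out at exactly their frequencies. This is a standalone sketch, unrelated to the notebook's data files.

```python
import numpy as np
import numpy.fft as nf

t = np.arange(256)
# two sinusoids on exact FFT bins: 32/256 = 0.125 and 64/256 = 0.25
y = np.sin(2 * np.pi * 0.125 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)
complex_ary = nf.fft(y)
freqs = nf.fftfreq(y.size, 1)
pows = np.abs(complex_ary)
# same selection as above: positive frequencies, sorted by descending amplitude
top_freqs = freqs[freqs > 0][np.argsort(-pows[freqs > 0])][:2]
# strongest component first: 0.125, then 0.25
```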
# +
# From the RNCU distribution of each sample, we consider there to be two main distributions:
# the first, centered at 1 RNCU, is caused by hopping, and
# the second is the normal distribution of sequencing.
# We consider the RNCU distance between the centers of these two distributions important for setting the cutoff.
# The center of the former is always at 1 RNCU,
# so here we add the position of the center of the latter, although it may already be captured by the sinusoids.
df_input=pd.read_csv('train_data_run1_run2.csv')
df_output=df_input[['sample','max_cutoff']]
df_input=df_input[['sample','precise']]
df_input=pd.merge(df_input,distribution,on='sample',how='left')
# write to files
df_input.to_csv('input_run1_run2.csv',header=True,index=0)
df_output.to_csv('output_run1_run2.csv',header=True,index=0)
# -
|
model_training/2.extract_features/2.extract_sinusoidal_with _top_ten_powers.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Geo_env
# language: python
# name: geo_env
# ---
# ### Making Geojson_File from Korean Energy District Classification
#
# The regional classification in the Korean Building Energy Conservation Design Criteria (건축물의 에너지절약설계기준) defines four zones in total: Central 1, Central 2, Southern, and Jeju. A different building-envelope thermal transmittance (U-value) limit applies in each zone. The zones mostly follow the north-to-south province (si/do) boundaries, but some si/gun/gu belong to a different zone than the rest of their province.
# <span style='color:red'>(e.g. 강원도 is split between Central Zone 1 (철원군, 화천군, 양구군, 춘천시, 인제군, 홍천군, 횡성군, 평창군, 원주시, 영월군, 정선군, 태백시) and Central Zone 2 (고성군, 속초시, 양양군, 강릉시, 동해시, 삼척시))</span><br>
#
# The classification covers 161 si/gun/gu in total, while the publicly available geodata (shp or geojson) comes in about 20 province-level units or about 250 si/gun/gu-level units, so neither granularity matches one-to-one. Using the public si/gun/gu geodata, we edit the geometry of the districts that need consolidation, as in the example above, and finally generate a GeoJSON file for the energy conservation design criteria.
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from shapely.geometry import *
from shapely import wkt
import geopandas as gpd
from geopandas import GeoSeries
# ### Regional classification in the Building Energy Conservation Design Criteria (건축물의 에너지절약설계기준, No. 2017-881)
# Central Zone 1<br>
# - 강원도 (철원군, 화천군, 양구군, 춘천시, 인제군, 홍천군, 횡성군, 평창군, 원주시, 영월군, 정선군, 태백시)
# - 경기도 (가평군, 남양주시, 동두천시, 양주시, 연천군, 의정부시, 파주시, 포천시)
# - 충청북도 (제천시)
# - 경상북도 (봉화군, 청송군)
#
# Central Zone 2<br>
# - 서울특별시 (종로구, 중구, 용산구, 성동구, 광진구, 동대문구, 중랑구, 성북구, 강북구, 도봉구, 노원구, 은평구, 서대문구, 마포구, 양천구, 강서구, 구로구, 금천구, 영등포구, 동작구, 관악구, 서초구, 강남구, 송파구, 강동구)
# - 대전광역시 (동구, 중구, 서구, 유성구, 대덕구)
# - 세종특별시
# - 인천광역시 (중구, 동구, 미추홀구, 연수구, 남동구, 부평구, 계양구, 서구, 강화군, 옹진군)
# - 강원도 (고성군, 속초시, 양양군, 강릉시, 동해시, 삼척시)
# - 경기도 (고양시, 과천시, 광명시, 광주시, 구리시, 군포시, 김포시, 부천시, 성남시, 수원시, 시흥시, 안산시, 안성시, 안양시, 양평군, 여주시, 오산시, 용인시, 의왕시, 이천시, 평택시, 하남시, 화성시)
# - 충청북도 (청주시, 충주시, 괴산군, 단양군, 보은군, 영동군, 옥천군, 음성군, 증평군, 진천군)
# - 충청남도 (천안시, 공주시, 보령시, 아산시, 서산시, 논산시, 계룡시, 당진시, 금산군, 부여군, 서천군, 청양군, 홍성군, 예산군, 태안군)
# - 경상북도 (김천시, 안동시, 구미시, 영주시, 영천시, 상주시, 문경시, 군위군, 의성군, 영양군, 고령군, 성주군, 칠곡군, 예천군, 울릉군)
# - 전라북도 (전주시, 익산시, 군산시, 정읍시, 김제시, 남원시, 완주군, 고창군, 부안군, 임실군, 순창군, 진안군, 무주군, 장수군)
# - 경상남도 (거창군, 함양군)
#
# Southern Zone<br>
# - 부산광역시 (중구, 서구, 동구, 영도구, 부산진구, 동래구, 남구, 북구, 해운대구, 사하구, 금정구, 강서구, 연제구, 수영구, 사상구, 기장군)
# - 대구광역시 (중구, 동구, 서구, 남구, 북구, 수성구, 달서구, 달성군)
# - 울산광역시 (중구, 남구, 동구, 북구, 울주군)
# - 광주광역시 (동구, 서구, 남구, 북구, 광산구)
# - 전라남도 (목포시, 여수시, 순천시, 나주시, 광양시, 담양군, 곡성군, 구례군, 고흥군, 보성군, 화순군, 장흥군, 강진군, 해남군, 영암군, 무안군, 함평군, 영광군, 장성군, 완도군, 진도군, 신안군)
# - 경상북도 (울진군, 영덕군, 포항시, 경주시, 청도군, 경산시)
# - 경상남도 (창원시, 김해시, 진주시, 양산시, 거제시, 통영시, 사천시, 밀양시, 함안군, 창녕군, 고성군, 하동군, 합천군, 남해군, 산청군, 의령군)
#
# Jeju Island (제주도)
# ### Si/gun/gu that require polygon merging
#
# - 수원시 (장안구, 권선구, 팔달구, 영통구) (41111, 41113, 41115, 41117)
#
# - 성남시 (수정구, 중원구, 분당구) (41131, 41133, 41135)
#
# - 안양시 (만안구, 동안구) (41171, 41173)
#
# - 안산시 (상록구, 단원구) (41271, 41273)
#
# - 고양시 (덕양구, 일산동구, 일산서구) (41281, 41285, 41287)
#
# - 용인시 (처인구, 기흥구, 수지구) (41461, 41463, 41465)
#
# - 창원시 (의창구, 성산구, 마산합포구, 마산회원구, 진해구) (48121, 48123, 48125, 48127, 48129)
#
# - 포항시 (남구, 북구) (47111, 47113)
#
# - 광주광역시 (동구, 서구, 남구, 북구, 광산구) 29110, 29140, 29155, 29170, 29200
#
# - 대구광역시 (중구, 동구, 서구, 남구, 북구, 수성구, 달서구, 달성군) (27110, 27140, 27170, 27200, 27230, 27260, 27290, 27710)
#
# - 대전광역시 (동구, 중구, 서구, 유성구, 대덕구) (30110, 30140, 30170, 30200, 30230)
#
# - 부산광역시 (중구, 서구, 동구, 영도구, 부산진구, 동래구, 남구, 북구, 해운대구, 사하구, 금정구, 강서구, 연제구, 수영구, 사상구, 기장군) (26110, 26140, 26170, 26200, 26230, 26260, 26290, 26320, 26350, 26380, 26410, 26440, 26470, 26500, 26530, 26710)
#
# - 서울특별시 (종로구, 중구, 용산구, 성동구, 광진구, 동대문구, 중랑구, 성북구, 강북구, 도봉구, 노원구, 은평구, 서대문구, 마포구, 양천구, 강서구, 구로구, 금천구, 영등포구, 동작구, 관악구, 서초구, 강남구, 송파구, 강동구) (11110, 11140, 11170, 11200, 11215, 11230, 11260, 11290, 11305, 11320, 11350, 11380, 11410, 11440, 11470, 11500, 11530, 11545, 11560, 11590, 11620, 11650, 11680, 11710, 11740)
#
# - 울산광역시 (중구, 남구, 동구, 북구, 울주군) (31110, 31140, 31170, 31200, 31710)
#
# - 인천광역시 (중구, 동구, 미추홀구, 연수구, 남동구, 부평구, 계양구, 서구, 강화군, 옹진군) (28110, 28140, 28177, 28185, 28200, 28237, 28245, 28260, 28710, 28720)
#
# - 전주시 (완산구, 덕진구) (45111, 45113)
#
# - 제주특별자치도 (제주시, 서귀포시) (50110, 50130)
#
# - 천안시 (동남구, 서북구) (44131, 44133)
#
# - 청주시 (상당구, 서원구, 흥덕구, 청원구) (43111, 43112, 43113, 43114)
korean_districts = gpd.read_file('C:/Users/ilove/Desktop/DATA/district in Korea/TL_SCCO_SIG.json')
korean_districts.head(5)
def polygon_merge(dists):
    global korean_districts
    merge_polygon = None
    dist_nums = '|'.join(map(str, dists))
    polygons = korean_districts.loc[korean_districts['SIG_CD'].str.contains(dist_nums, na=False)]['geometry']
    delete_index = korean_districts.loc[korean_districts['SIG_CD'].str.contains(dist_nums, na=False)].index
    # idx is tracked so that any error can be traced to the offending polygon
    for idx, p in enumerate(polygons):
        polygon = Polygon(p)
        # the merge can fail on invalid input, e.g.
        # TopologyException: Input geom 1 is invalid: Hole lies outside shell at 128.46837904224051 35.899470834165882
        # buffer(0) repairs the invalid geometry and avoids the error
        polygon = polygon.buffer(0)
        if merge_polygon is None:
            merge_polygon = polygon
        else:
            try:
                merge_polygon = merge_polygon.union(polygon)
            except:
                print('error', idx)
    return merge_polygon, delete_index
# +
# Round each representative code at one place above its lowest nonzero digit.
# Since the codes of the split districts all have a 1 in that digit (e.g. 11110 or 45111),
# round() gives the same result as rounding down.
def new_district_number(num):
    for i in range(5):
        if num % (10**i) != 0:
            new_num = round(num, -i)
            return new_num
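# To see what this rounding produces, here is a quick standalone check that restates the function above, using two codes from the merge list:

```python
def new_district_number(num):
    # find the lowest nonzero digit, then round one place above it
    for i in range(5):
        if num % (10**i) != 0:
            return round(num, -i)

print(new_district_number(41111))  # 41110 (수원시: 41111, 41113, ...)
print(new_district_number(11110))  # 11100 (서울특별시: 11110, 11140, ...)
```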
# +
edit_districts_list = [['수원시', 'Suwon-si', [41111, 41113, 41115, 41117]],
['성남시', 'Seongnam-si', [41131, 41133, 41135]],
['안양시', 'Anyang-si', [41171, 41173]],
['안산시', 'Ansan-si', [41271, 41273]],
['고양시', 'Goyang-si', [41281, 41285, 41287]],
['용인시', 'Yongin-si', [41461, 41463, 41465]],
['창원시', 'Changwon-si', [48121, 48123, 48125, 48127, 48129]],
['포항시', 'Pohang-si', [47111, 47113]],
['광주광역시', 'Gwangju-si', [29110, 29140, 29155, 29170, 29200]],
['대구광역시', 'Daegu-si', [27110, 27140, 27170, 27200, 27230, 27260, 27290, 27710]],
['대전광역시', 'Daejeon-si', [30110, 30140, 30170, 30200, 30230]],
['부산광역시', 'Busan-si', [26110, 26140, 26170, 26200, 26230, 26260, 26290, 26320, 26350, 26380, 26410, 26440, 26470, 26500, 26530, 26710]],
['서울특별시', 'Seoul-si', [11110, 11140, 11170, 11200, 11215, 11230, 11260, 11290, 11305, 11320, 11350, 11380, 11410, 11440, 11470, 11500, 11530, 11545, 11560, 11590, 11620, 11650, 11680, 11710, 11740]],
['울산광역시', 'Ulsan-si', [31110, 31140, 31170, 31200, 31710]],
['인천광역시', 'Incheon-si', [28110, 28140, 28177, 28185, 28200, 28237, 28245, 28260, 28710, 28720]],
['전주시', 'Jeonju-si', [45111, 45113]],
['제주특별자치도', 'Jeju-si', [50110, 50130]],
['천안시', 'Cheonan-si', [44131, 44133]],
['청주시', 'Cheongju-si', [43111, 43112, 43113, 43114]]]
n = len(korean_districts)
count = 0
for dist in edit_districts_list:
name_kor = dist[0]
name_eng = dist[1]
merge_nums = dist[2]
new_num = new_district_number(merge_nums[0])
edit_polygon, delete_index = polygon_merge(merge_nums)
# append the row for the edited & merged district
korean_districts = korean_districts.append(pd.DataFrame([[str(new_num), name_eng, name_kor, edit_polygon]], columns=['SIG_CD', 'SIG_ENG_NM', 'SIG_KOR_NM', 'geometry']))
# drop the rows belonging to dist_nums
korean_districts.drop(delete_index, inplace=True)
count += 1
# reset index
korean_districts = korean_districts.reset_index(drop=True)
# -
# ### Checking Result of Merge
#
# Missing districts: 대구 달서군, 부산 강서구, 인천 강화군, 안산시
#
# Some items whose geom_type is Polygon actually contain disconnected parts (e.g. Polygon((coords), (coords))); such items were found, and parts of them were dropped during the merge.<br>
# In addition, when converting shp files to geojson on mapshaper.org, the simplification of lines leaves gaps in some segments.<br>
# If the simplification is relaxed (made more detailed), many districts' geom_type turns into MultiPolygon, for reasons that are unclear.
# +
# fix the missing 부산 강서구 polygon
busan = korean_districts.loc[korean_districts['SIG_CD'] == '26100']
kangseogu = wkt.loads('POLYGON ((128.9973048459866 35.22464418306168, 128.9970899849073 35.22353206822895, 128.9911941775027 35.20442068023949, 128.9877300254159 35.20145511866447, 128.9770696029243 35.19332833266679, 128.9754604641508 35.19266411778098, 128.9648296975699 35.18602980095815, 128.9636180266806 35.18450990376363, 128.9632910506188 35.18364154746703, 128.962219042033 35.16119451088441, 128.9606596659281 35.15432330703933, 128.9604371510883 35.15084180714722, 128.9604042269063 35.15058116880843, 128.960054811157 35.14006960287914, 128.959640574434 35.13664648078658, 128.9596393102 35.13664146120078, 128.9594113011431 35.13662922162182, 128.9352202056099 35.11018258380043, 128.9301070059982 35.10514053419802, 128.895378367089 35.07903726959671, 128.8376072739296 35.08339068569149, 128.8217153475088 35.09774299300431, 128.8370305072757 35.10333412709255, 128.8374829513471 35.10352002376145, 128.8290864244583 35.12832206016792, 128.8280851869485 35.12827971068544, 128.8265718039466 35.12952660303733, 128.8258064261979 35.12999224668981, 128.8073159307006 35.13947956152156, 128.8076694331804 35.14044294697322, 128.807151709658 35.14214929547722, 128.8066853688821 35.14250378032979, 128.8051568981764 35.14211493920995, 128.8045656196743 35.14193547559952, 128.8035597425355 35.14186061122918, 128.7936655022466 35.15716242869193, 128.8253416432042 35.1559654117045, 128.8257722310348 35.15571475588612, 128.8270573468191 35.15560175261344, 128.8275265037942 35.15558862722684, 128.8339540796184 35.15759332515164, 128.8349959800773 35.15823965287132, 128.8429330911119 35.15798870499948, 128.8435447311445 35.15846971802387, 128.8436033956949 35.15866821854317, 128.8436459851266 35.15880795717648, 128.8441831167619 35.15990783830183, 128.8441698725588 35.15997450327137, 128.8440739045199 35.16046302962047, 128.8445414639476 35.16285563020531, 128.8449291015606 35.163103751383, 128.8459997090588 35.1635417104711, 128.8531496421986 35.16636530049093, 128.8541361768501 
35.16620799155735, 128.856379821424 35.1669037490641, 128.8576337498311 35.16693734112474, 128.8618947591963 35.16772471638492, 128.863343644323 35.16783556333509, 128.867583356211 35.15895533976767, 128.8675045366242 35.15909229404599, 128.8655440276345 35.15947477329713, 128.8656070556902 35.15943142576011, 128.8656044981843 35.15920045987004, 128.8655478082097 35.15913721104234, 128.8659734578807 35.1572328012554, 128.8660973166353 35.15708544200077, 128.8672188464915 35.15586268589249, 128.8673825048197 35.15559656178385, 128.8673864567501 35.15558931926304, 128.8674760210298 35.15545740588411, 128.8679702778192 35.15486219176877, 128.8685097862035 35.1542072653803, 128.8739852125211 35.15103264425661, 128.8740171656991 35.151035525734, 128.8740188775295 35.15103564471029, 128.874111343647 35.15092592974987, 128.874362787309 35.15063127012521, 128.8752770823503 35.1511462956522, 128.8756769616073 35.15073077166956, 128.8765884195095 35.15124161049369, 128.875292718938 35.15288985792525, 128.8750350408392 35.15319687044273, 128.8792102186047 35.15947969460008, 128.8792977661159 35.15940948625484, 128.879311197822 35.15936703698713, 128.879846466607 35.15946983399421, 128.8814329251682 35.16127689145357, 128.8813936956747 35.16133044711008, 128.8816730845016 35.16174318325657, 128.8807742763705 35.16204228011525, 128.8775341700745 35.16848941121081, 128.8770943358145 35.16876629854445, 128.8740167038434 35.17391677322698, 128.8759451992977 35.17312007946061, 128.8804342608239 35.17152975743161, 128.8805909935833 35.17149972793587, 128.8801415572369 35.17554810795689, 128.8801159673222 35.17758068485961, 128.8800443780015 35.17886668865872, 128.8801467817665 35.17973938958004, 128.8802711780151 35.18030083893971, 128.8806754939309 35.18090723988468, 128.8814171494901 35.18281581289992, 128.8807560362646 35.18357082142852, 128.8794132116051 35.18487331592355, 128.8791920684931 35.18514682384479, 128.8784546709598 35.18592763862774, 128.8782127608583 
35.18622287345028, 128.8759158329272 35.19113810014902, 128.8760886503323 35.19180652168439, 128.8759593871645 35.19316819160954, 128.875591257753 35.19363025590732, 128.8737293947778 35.20415196454872, 128.8739415526795 35.20426161171375, 128.8827369720663 35.21155099542734, 128.8828635513814 35.21163829769814, 128.8849055085696 35.21380275460146, 128.8853132455225 35.2140287443124, 128.895466813498 35.21337829818424, 128.8966122542355 35.21314239437329, 128.9010985534415 35.21433372885344, 128.9030792686569 35.21487895564113, 128.9046679925106 35.21530132744913, 128.9069158178895 35.21510192204623, 128.9075028302966 35.215111122648, 128.9085724748748 35.21527366666682, 128.9089230399164 35.21532615905225, 128.9091630630745 35.21597797953697, 128.9089520189319 35.21618308208575, 128.9081049742616 35.21701397011784, 128.9077574112549 35.2173547067904, 128.9051108823082 35.22013002801491, 128.9059694205733 35.22069890728505, 128.9060105664342 35.22072670116565, 128.9064327418615 35.22100857814301, 128.9075987421145 35.22179494629919, 128.9096106554875 35.22303392795703, 128.9102376741355 35.22291629017576, 128.9102851214942 35.22289076323912, 128.9107860571069 35.22279767034716, 128.9108649835181 35.22278290249412, 128.9117196426568 35.22262692045813, 128.9142173933236 35.22014190651694, 128.9166529890983 35.21696081298033, 128.9457098069331 35.2272537262351, 128.9458528012373 35.2271434044051, 128.9482703532233 35.22533402180147, 128.9485347461366 35.22524204706867, 128.949918458722 35.22523977759157, 128.9506783292838 35.22524091882361, 128.9527547424873 35.22525814025075, 128.9577864469182 35.22541966029614, 128.9596157700999 35.2256944953223, 128.9606807276693 35.22585186387759, 128.9746943588537 35.22761574644951, 128.9749787552835 35.22768122856393, 128.9780852878363 35.22839609343686, 128.9784210630793 35.22847199690519, 128.986087269229 35.23052718711877, 128.9862745064245 35.23060240141214, 128.9967601001117 35.23608051244329, 128.9973048459866 
35.22464418306168))')
busan_edit_polygon = busan.union(kangseogu)
korean_districts.loc[korean_districts['SIG_CD'] == '26100', 'geometry'] = busan_edit_polygon
# Fix omitted Dalseo District area in Daegu
daegu = korean_districts.loc[korean_districts['SIG_CD'] == '27100']
dalsugun = wkt.loads('POLYGON((128.4683790422405 35.89947083416588, 128.4690983779744 35.89901352273121, 128.4784767982152 35.89682429582408, 128.4791650183944 35.89661456382811, 128.504788286117 35.89137522049469, 128.5049293804274 35.89136948619967, 128.5256992451401 35.88799448446839, 128.5190300767411 35.86953728907918, 128.5187949203539 35.86932511241663, 128.518459668696 35.86878105757415, 128.5121339402516 35.8682068947116, 128.5115248482964 35.86839787213857, 128.5111928163664 35.86858735477895, 128.5109120259484 35.8688127888113, 128.5099075606522 35.86922810130885, 128.5094163183193 35.86959529140682, 128.5086847783223 35.86710377752946, 128.5086713928417 35.86707324276215, 128.5083915834989 35.86676643091245, 128.5082211037854 35.86635037761391, 128.5082007085653 35.86629684145739, 128.5081830137526 35.86627566181598, 128.5065471190038 35.86604863566041, 128.5064497741731 35.86604141226118, 128.5063605327457 35.866005923304, 128.5062277041004 35.86595210369322, 128.5045133629387 35.86569533713642, 128.5035710029484 35.86571303835309, 128.5034820236587 35.86569077852059, 128.5032653038793 35.8655186088267, 128.502866857711 35.86532223452902, 128.5027885828509 35.86528097544809, 128.5024542754722 35.86505508006031, 128.5017662853876 35.86403438953581, 128.5014549448369 35.86370513374719, 128.5011749383015 35.86353290416223, 128.5010290709888 35.86344799489098, 128.5005341065532 35.86347162661166, 128.4998303996389 35.86370907789068, 128.4985489118479 35.86405651037096, 128.4977688515459 35.8641718275925, 128.4972638957888 35.86417784782117, 128.4917347160565 35.86570953091688, 128.491414310192 35.86578108125875, 128.4910304406809 35.86586272536982, 128.4908925583137 35.86587175571305, 128.4907756763034 35.86591235538096, 128.4906420929612 35.86598300327739, 128.4902011106808 35.86623036472835, 128.4876679061862 35.86763768187816, 128.4871724447099 35.86791529693332, 128.4866489676982 35.86809861063286, 128.4860701983814 35.86810153664374, 128.4849676976276 
35.86799857628949, 128.4846631605287 35.86802993653794, 128.4838262112975 35.86778710020261, 128.483769678727 35.86777990808601, 128.4829806723552 35.86767727956315, 128.4816640702127 35.86824518257852, 128.4816822420235 35.86803121751667, 128.4817242819531 35.86781780631726, 128.4811696634333 35.86737012279693, 128.4805607721181 35.86703630699005, 128.4799379667756 35.86641879949838, 128.4764669171471 35.8605048517955, 128.4762964878748 35.86019406329414, 128.4754876338466 35.85922061343775, 128.4740401579477 35.8581138843847, 128.4739642261672 35.85810663857028, 128.4734008562263 35.858000350452, 128.4731151764025 35.8561508451226, 128.4724014934691 35.85371492080537, 128.4716565199909 35.84982526357125, 128.4712195209906 35.84743860366326, 128.4689920535007 35.83963096777291, 128.3865417576958 35.85130538822979, 128.3989766866902 35.89992652584503, 128.4541552739237 35.94314943866124, 128.4764188660141 35.93443933745593, 128.4683790422405 35.89947083416588))')
daegu_edit_polygon = daegu.union(dalsugun)
korean_districts.loc[korean_districts['SIG_CD'] == '27100', 'geometry'] = daegu_edit_polygon
# Fix omitted area in Ansan
ansan = korean_districts.loc[korean_districts['SIG_CD'] == '41270']
ansan_omission = wkt.loads('POLYGON ((126.829298252301 37.36317482313436, 126.8295244790377 37.36296791797485, 126.8414678460737 37.36278645570483, 126.834509852309 37.30362465658693, 126.8244022469778 37.30416860058266, 126.8168024063352 37.29804341965559, 126.7893646211123 37.29340677267749, 126.7246846409233 37.31536633530663, 126.7485639111491 37.32968297806942, 126.7606117392704 37.34113308508078, 126.7606381910905 37.34113313893575, 126.7671390857075 37.34271613413495, 126.7672762241343 37.34272325347646, 126.7903085333907 37.35000603221026, 126.7902882063461 37.35027207616381, 126.8115043871005 37.35561506092767, 126.8118982575286 37.35608422304917, 126.8120931061889 37.35644916877121, 126.8130472299681 37.35701410010054, 126.8173215370879 37.35848950292772, 126.8175806229748 37.35870615065416, 126.8202371128109 37.3617737157715, 126.8204627550473 37.36182811345061, 126.8240658510097 37.36568994134219, 126.8242798376575 37.36590650651723, 126.829298252301 37.36317482313436))')
ansan_edit_polygon = ansan.union(ansan_omission)
korean_districts.loc[korean_districts['SIG_CD'] == '41270', 'geometry'] = ansan_edit_polygon
# +
# Fix omitted Incheon areas (the island districts caused a merge error, so this is handled by appending to a MultiPolygon list instead)
# Error encountered:
# ValueError: Must have equal len keys and value when setting with an iterable
# The district consists of islands, so its geometry is a MultiPolygon rather than a single Polygon; when assigned it is treated as one value per contained polygon, not a single value
# Solved with gpd.GeoSeries(incheon_edit_polygon).values
incheon_polygons = korean_districts.loc[korean_districts['SIG_CD'] == '28100']['geometry']
polygons = []
for i in incheon_polygons:
    # iterate member polygons explicitly via .geoms (bare iteration over a
    # MultiPolygon is removed in Shapely 2.0)
    for j in i.geoms:
        polygons.append(j)
incheon_omission_1 = wkt.loads('POLYGON ((126.264516695344 37.8177859737137, 126.2977329085618 37.80219917730991, 126.3157703786009 37.77398875321612, 126.29081542865 37.76294271832154, 126.2163462949183 37.77815929451813, 126.2232200753593 37.80503683597274, 126.264516695344 37.8177859737137))')
incheon_omission_2 = wkt.loads('POLYGON ((126.4312165907913 37.8298695751518, 126.5068972291791 37.78234812041452, 126.5264833865245 37.74732662885508, 126.5137214511587 37.72503488488039, 126.5225125218826 37.65188793485908, 126.5427448413968 37.61780959555252, 126.5106283935224 37.59662425451864, 126.4031678699724 37.59428304311225, 126.3767630962724 37.63629361017008, 126.4126314616412 37.65640571273303, 126.392366592362 37.69419447943856, 126.3555904415652 37.70667750178803, 126.3505795955799 37.78956556000363, 126.4312165907913 37.8298695751518))')
incheon_omission_3 = wkt.loads('POLYGON ((126.5156343284461 37.53422681554858, 126.5640050083955 37.51475494968334, 126.5825749134498 37.49064668407699, 126.5076612203656 37.46618111953222, 126.4430874578446 37.42152576820278, 126.3800536217994 37.43990800177242, 126.35578472578 37.46756812347854, 126.4169123779343 37.49607068061442, 126.4713484659584 37.49847664623879, 126.5156343284461 37.53422681554858))')
polygons.append(incheon_omission_1)
polygons.append(incheon_omission_2)
polygons.append(incheon_omission_3)
incheon_edit_polygon = MultiPolygon(polygons)
incheon_edit_polygon
# incheon_edit_polygon = Polygon(polygons)
# a = Polygon(incheon_edit_polygon)
korean_districts.loc[korean_districts['SIG_CD'] == '28100', 'geometry'] = gpd.GeoSeries(incheon_edit_polygon).values
# -
# ### Add a column for the region classification
#
# 중부1지역 : [철원군, 화천군, 양구군, 춘천시, 인제군, 홍천군, 횡성군, 평창군, 원주시, 영월군, 정선군, 태백시, 가평군, 남양주시, 동두천시, 양주시, 연천군, 의정부시, 파주시, 포천시, 제천시, 봉화군, 청송군]
#
#
# 중부2지역 : [서울특별시, 대전광역시, 인천광역시, 세종특별자치시, 고성군, 속초시, 양양군, 강릉시, 동해시, 삼척시, 고양시, 과천시, 광명시, 광주시, 구리시, 군포시, 김포시, 부천시, 성남시, 수원시, 시흥시, 안산시, 안성시, 안양시, 양평군, 여주시, 오산시, 용인시, 의왕시, 이천시, 평택시, 하남시, 화성시, 청주시, 충주시, 괴산군, 단양군, 보은군, 영동군, 옥천군, 음성군, 증평군, 진천군, 천안시, 공주시, 보령시, 아산시, 서산시, 논산시, 계룡시, 당진시, 금산군, 부여군, 서천군, 청양군, 홍성군, 예산군, 태안군, 김천시, 안동시, 구미시, 영주시, 영천시, 상주시, 문경시, 군위군, 의성군, 영양군, 고령군, 성주군, 칠곡군, 예천군, 울릉군, 전주시, 익산시, 군산시, 정읍시, 김제시, 남원시, 완주군, 고창군, 부안군, 임실군, 순창군, 진안군, 무주군, 장수군, 거창군, 함양군]
#
# 남부지역 : [부산광역시, 대구광역시, 울산광역시, 광주광역시, 목포시, 여수시, 순천시, 나주시, 광양시, 담양군, 곡성군, 구례군, 고흥군, 보성군, 화순군, 장흥군, 강진군, 해남군, 영암군, 무안군, 함평군, 영광군, 장성군, 완도군, 진도군, 신안군, 울진군, 영덕군, 포항시, 경주시, 청도군, 경산시, 창원시, 김해시, 진주시, 양산시, 거제시, 통영시, 사천시, 밀양시, 함안군, 창녕군, 고성군, 하동군, 합천군, 남해군, 산청군, 의령군]
#
# 제주도 : [제주특별자치도]
# +
energy_class = {'중부1지역' : ['철원군', '화천군', '양구군', '춘천시', '인제군', '홍천군', '횡성군', '평창군', '원주시', '영월군', '정선군', '태백시', '가평군', '남양주시', '동두천시', '양주시', '연천군', '의정부시', '파주시', '포천시', '제천시', '봉화군', '청송군'],
'중부2지역' : ['서울특별시', '대전광역시', '인천광역시', '세종특별자치시', '고성군(중부2)', '속초시', '양양군', '강릉시', '동해시', '삼척시', '고양시', '과천시', '광명시', '광주시', '구리시', '군포시', '김포시', '부천시', '성남시', '수원시', '시흥시', '안산시', '안성시', '안양시', '양평군', '여주시', '오산시', '용인시', '의왕시', '이천시', '평택시', '하남시', '화성시', '청주시', '충주시', '괴산군', '단양군', '보은군', '영동군', '옥천군', '음성군', '증평군', '진천군', '천안시', '공주시', '보령시', '아산시', '서산시', '논산시', '계룡시', '당진시', '금산군', '부여군', '서천군', '청양군', '홍성군', '예산군', '태안군', '김천시', '안동시', '구미시', '영주시', '영천시', '상주시', '문경시', '군위군', '의성군', '영양군', '고령군', '성주군', '칠곡군', '예천군', '울릉군', '전주시', '익산시', '군산시', '정읍시', '김제시', '남원시', '완주군', '고창군', '부안군', '임실군', '순창군', '진안군', '무주군', '장수군', '거창군', '함양군'],
'남부지역' : ['부산광역시', '대구광역시', '울산광역시', '광주광역시', '목포시', '여수시', '순천시', '나주시', '광양시', '담양군', '곡성군', '구례군', '고흥군', '보성군', '화순군', '장흥군', '강진군', '해남군', '영암군', '무안군', '함평군', '영광군', '장성군', '완도군', '진도군', '신안군', '울진군', '영덕군', '포항시', '경주시', '청도군', '경산시', '창원시', '김해시', '진주시', '양산시', '거제시', '통영시', '사천시', '밀양시', '함안군', '창녕군', '고성군(남부)', '하동군', '합천군', '남해군', '산청군', '의령군'],
'제주도' : ['제주특별자치도']}
def classify_func(x):
    # return the region class that contains district x; None if unmatched
    # (the original returned an unbound variable, raising NameError on a miss)
    for c in energy_class:
        if x in energy_class[c]:
            return c
    return None
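Since `classify_func` scans every region list on each call, a hedged alternative (the mapping below is a tiny illustrative stand-in, not the real `energy_class`) is to invert the mapping once into a district-to-region dict, making each lookup a single dict access:

```python
# Illustrative sketch only: a tiny stand-in mapping, not the notebook's data.
_sample_mapping = {'중부1지역': ['철원군', '화천군'], '남부지역': ['부산광역시']}

# Invert region -> [districts] into district -> region for O(1) lookups.
district_to_region = {d: r for r, ds in _sample_mapping.items() for d in ds}

print(district_to_region.get('부산광역시'))  # 남부지역
print(district_to_region.get('unknown'))    # None
```

With ~160 districts the speed difference is negligible, but the inverted dict also makes unmatched names explicit via `.get`.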
# +
# len(korean_districts) : 161
# len(korean_districts['SIG_KOR_NM'].unique()) : 160
# 'Goseong-gun' (고성군) appears twice, so make a copy of SIG_KOR_NM for classification and disambiguate the two
# the Goseong-gun in Central Region 2 becomes '고성군(중부2)'; the one in the southern region becomes '고성군(남부)'
# Goseong-gun #1 SIG_CD : 42820, Goseong-gun #2 SIG_CD : 48820
korean_districts['SIG_KOR_NM_copy'] = korean_districts['SIG_KOR_NM']
korean_districts.loc[korean_districts['SIG_CD'] == '42820', 'SIG_KOR_NM_copy'] = '고성군(중부2)'
korean_districts.loc[korean_districts['SIG_CD'] == '48820', 'SIG_KOR_NM_copy'] = '고성군(남부)'
korean_districts['energy_district_class'] = korean_districts.apply(lambda x: classify_func(x['SIG_KOR_NM_copy']), axis=1)
# -
korean_areas = gpd.read_file('C:/Users/ilove/Desktop/DATA/district in Korea/TL_SCCO_CTPRVN.json')
plt.rc('font', family='Malgun Gothic')
ax = korean_districts.plot(column = 'energy_district_class', legend = True, figsize=(18, 18), cmap='winter', alpha=0.5, edgecolor='blue')
# Add province (si/do) boundary lines
korean_areas.plot(ax = ax, color='white', edgecolor='navy', alpha=0.5, linewidth=2)
plt.rc('axes', unicode_minus=False)
plt.title('건축물의에너지절약설계기준의 지역분류', fontsize=20)  # "Regional classification under the Building Energy Saving Design Criteria"
plt.show()
pd.options.display.max_rows = 999
korean_districts
korean_districts.to_file('C:/Users/ilove/Desktop/DATA/district in Korea/korean_districts.geojson', driver='GeoJSON')
|
Making_Energy_Districts_Class_Geojson.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ACDH-CH Digital Prosopography Summer School 2020 | [website](https://acdh-oeaw.gitlab.io/summerschool2020/)
#
# # Analysis III: Statistics, Plotting, Unsupervised ML
#
# 2020-07-09, Vienna / Zoom
#
# <NAME> | Data Analyst | ACDH-CH | ÖAW | [web](https://www.oeaw.ac.at/acdh/team/current-team/sabine-laszakovits/) | [email](mailto:<EMAIL>)
# + [markdown] slideshow={"slide_type": "slide"}
# ## What we'll do
#
# * Basic statistics
# * Exploratory data analysis (EDA)
# * Characteristic numbers
# * Variables
# * Plotting in Python
# * Find clusters in the data
# * K-Means algorithm
# + [markdown] slideshow={"slide_type": "slide"}
# ## The dataset
#
# https://en.wikipedia.org/wiki/Iris_flower_data_set
#
# We have a list of 150 flowers from 3 species of irises with measurements of their leaves.
#
# For now, we want to see what we can observe about these measurements.
#
# Ultimately, we will want to figure out from the measurements, which species each flower belongs to.
# + [markdown] slideshow={"slide_type": "slide"}
# | Iris setosa | Iris versicolor | Iris virginica |
# | :-: | :-: | :-: |
# |  |  |  |
# | source: https://alchetron.com/Iris-setosa | source: https://commons.wikimedia.org/wiki/File:Iris_versicolor_3.jpg | source: https://commons.wikimedia.org/wiki/File:Iris_virginica.jpg |
# + slideshow={"slide_type": "slide"}
import pandas
data_complete = pandas.read_csv('iris.csv')
data = data_complete.drop(columns=['species']) # that would be cheating
# + slideshow={"slide_type": "fragment"}
data
# + [markdown] slideshow={"slide_type": "slide"}
# What is a "petal" \[ˈpɛtəl\] and what is a "sepal" \[ˈsɛpəl\]?
#
# 
# Source: https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exploratory data analysis
#
# * How many variables are there?
# * What is the range of each variable?
# * What is the distribution of each variable?
# * Are the variables independent or dependent?
# + [markdown] slideshow={"slide_type": "slide"}
# ### How many variables are there?
# + slideshow={"slide_type": "-"}
data.head()
# + [markdown] slideshow={"slide_type": "slide"}
# ### What is the range of each variable?
# + slideshow={"slide_type": "-"}
data.describe()
# + slideshow={"slide_type": "fragment"}
data['sepal_length'].plot.hist(bins=100)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Minimum, maximum
#
# The smallest and largest value in the set.
# + slideshow={"slide_type": "-"}
min = data['sepal_length'].min()  # note: shadows the built-in min (reused below)
max = data['sepal_length'].max()  # note: shadows the built-in max
data['sepal_length'].plot.hist(bins=100)
import matplotlib.pyplot
matplotlib.pyplot.axvline(min, color='k', linestyle='dashed', linewidth=1)
matplotlib.pyplot.axvline(max, color='k', linestyle='dashed', linewidth=1)
# + [markdown] slideshow={"slide_type": "slide"}
# ### What is the distribution of each variable?
#
# * Mean
# * Quantiles
# * Standard deviation
# + [markdown] slideshow={"slide_type": "slide"}
# #### Mean
#
# The arithmetic mean, also known as average.
#
# $\bar{x} = \frac{x_1+x_2+\ldots+x_n}{n} = \frac{\sum_{i=1}^{n}x_i}{n}$
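A quick numeric check of the formula (toy numbers, not the iris data):

```python
values = [2.0, 4.0, 9.0]
# arithmetic mean: sum of the values divided by their count
mean = sum(values) / len(values)
print(mean)  # 5.0
```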
# + slideshow={"slide_type": "fragment"}
data['sepal_length'].plot.hist(bins=100)
matplotlib.pyplot.axvline(data['sepal_length'].mean(), color='k', linestyle='dashed', linewidth=1)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Standard deviation (SD, std)
#
# Measures the dispersion of a dataset relative to its mean.
#
# $\sigma = \sqrt{\frac{\sum_{i=1}^n \left(x_i - \bar{x}\right)^2}{n}}$
#
# (Note: this is the *population* form. pandas's `.std()` defaults to the *sample* standard deviation, which divides by $n-1$ instead of $n$.)
#
# [Why square?](https://www.mathsisfun.com/data/standard-deviation.html#WhySquare)
#
# The SD is also used to express the confidence in statistical conclusions and to calculate the margin of error.
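A small check of the formula with toy numbers, comparing the population form (divide by $n$) against the sample form (divide by $n-1$, which pandas's `.std()` uses by default):

```python
values = [4.0, 6.0, 8.0, 10.0]   # toy numbers, not the iris data
n = len(values)
mean = sum(values) / n           # 7.0

# population SD: divide by n (the slide's formula)
pop_sd = (sum((x - mean) ** 2 for x in values) / n) ** 0.5
# sample SD: divide by n - 1 (pandas .std() default)
samp_sd = (sum((x - mean) ** 2 for x in values) / (n - 1)) ** 0.5

print(round(pop_sd, 3), round(samp_sd, 3))  # 2.236 2.582
```

The sample form is always slightly larger; the gap shrinks as $n$ grows.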
# + slideshow={"slide_type": "slide"}
data['sepal_length'].plot.hist(bins=100)
mean = data['sepal_length'].mean()
std = data['sepal_length'].std()
matplotlib.pyplot.axvline(mean, color='k', linestyle='dashed', linewidth=1)
matplotlib.pyplot.fill_between([mean-std, mean+std], 0, 11, facecolor='green', alpha=0.3)
# + [markdown] slideshow={"slide_type": "slide"}
# ##### The effect of the standard deviation
#
# The 2 distributions have the same mean but different standard deviations.
#
# 
# Source: https://commons.wikimedia.org/wiki/File:Comparison_standard_deviations.svg
# + [markdown] slideshow={"slide_type": "slide"}
# #### Quantiles
#
# Sort the sample ascending.
# Partition the sample into $q$ groups with an equal number of datapoints in each group.
#
# Groups have equal probabilities.
#
# Some common quantiles:
# * quartile = partition into 4 groups = 4-quantile
# * percentile = partition into 100 groups = 100-quantile
# * median = 2-quantile
# + [markdown] slideshow={"slide_type": "slide"}
# ##### Example
# 25th percentile = the value of the datapoint that is ranked at position $\frac{25}{100}n$
#
# = 1st quartile
# + slideshow={"slide_type": "fragment"}
# pandas's calculation
q25 = data['sepal_length'].quantile(0.25)
print(f"Calculated value = {q25}")
# -
sorted_values = list(data['sepal_length'].sort_values())
q25_index = int(len(sorted_values)*0.25)
q25_value = sorted_values[q25_index]
print(f"Value at index {q25_index} = {q25_value}")
# + [markdown] slideshow={"slide_type": "slide"}
# ##### Partitions span across different intervals
# + slideshow={"slide_type": "-"}
q50 = data['sepal_length'].quantile(0.50)
q75 = data['sepal_length'].quantile(0.75)
data['sepal_length'].plot.hist(bins=100)
matplotlib.pyplot.fill_between([min, q25], 0, 11, facecolor='red', alpha=0.2)
matplotlib.pyplot.fill_between([q25, q50], 0, 11, facecolor='red', alpha=0.4)
matplotlib.pyplot.fill_between([q50, q75], 0, 11, facecolor='red', alpha=0.6)
matplotlib.pyplot.fill_between([q75, max], 0, 11, facecolor='red', alpha=0.85)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Dependencies
#
# We have 4 variables:
# * sepal_length
# * sepal_width
# * petal_length
# * petal_width
#
# Are there dependencies between these variables?
# -
# Example:
# * If sepal_length is a large value, sepal_width is also a large value.
# * petal_width is always about twice as large as sepal_width.
# + [markdown] slideshow={"slide_type": "slide"}
# #### Dependency between sepal length & sepal width
# + slideshow={"slide_type": "-"}
data.plot.scatter(x='sepal_length', y='sepal_width')
# + slideshow={"slide_type": "slide"}
import seaborn
seaborn.set()
seaborn.pairplot(data, height=1.9)
# + [markdown] slideshow={"slide_type": "slide"}
# It looks like:
# * There are dependencies between the 4 variables.
# * The dependencies cannot be expressed between pairs of variables.
# * We suspect that our sample of flowers consists of multiple clusters. Each cluster refers to a different species of flower.
# * The dependencies are best described within each cluster.
# + slideshow={"slide_type": "slide"}
seaborn.pairplot(data_complete, hue="species", height=1.9)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Clustering
#
# We apply a machine learning technique to find the clusters of datapoints.
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Machine Learning
#
# ML = An algorithm looks at data and draws generalizations from it
#
# **Supervised ML** = The algorithm gets a bunch of correctly labeled examples and learns the connection data <=> label.
#
# **Unsupervised ML** = The algorithm gets unlabeled data and finds structure within the data.
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Model
#
# Algorithm + learned generalizations = **model**
#
# We apply a model to new data, where it makes **predictions** by assigning a label.
# + [markdown] slideshow={"slide_type": "slide"}
# ### K-means algorithm
#
# This is an algorithm that finds $k$ clusters in the given dataset. (You need to specify $k$.)
#
# #### How it works:
# 1. **Initialize**: Declare at random $k$ datapoints in the dataset. These are the "centers".
# 2. **Assign**: Assign each datapoint to its closest "center". Now you have temporary clusters.
# 3. **Update**: For each cluster, calculate the mean. Shift the cluster's center to this mean value.
# 4. **Iterate**: Keep doing the Assignment step and the Update step until the center points don't move anymore ("converge"). If you've done this for a while and they still don't converge, start over with different random initial centers.
#
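The four steps above can be sketched in a few lines. This is a toy 1-D example with made-up points, not the scikit-learn implementation used later:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = np.array([1.0, 1.2, 0.8, 8.0, 8.3, 7.9])    # two obvious groups
k = 2

centers = rng.choice(pts, size=k, replace=False)  # 1. initialize at random datapoints
for _ in range(20):                               # 4. iterate until convergence
    # 2. assign each point to its closest center
    labels = np.argmin(np.abs(pts[:, None] - centers[None, :]), axis=1)
    # 3. shift each center to the mean of its assigned cluster
    new_centers = np.array([pts[labels == j].mean() for j in range(k)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print(np.sort(np.round(centers, 2)))  # the two cluster means
```

On this data any random initialization converges to the two group means (1.0 and about 8.07).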
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# Source: https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# Source: https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# Source: https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# Source: https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm
# + [markdown] slideshow={"slide_type": "subslide"}
# 
#
# Source: https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm
# + [markdown] slideshow={"slide_type": "subslide"}
# 
#
# Source: https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm
# + [markdown] slideshow={"slide_type": "slide"}
# #### Points of variation:
# * the value $k$
# * the randomness of the initialization
# * the definition of "closest"
# * the definition of "mean"
# * the number of iterations you wait for convergence
# + [markdown] slideshow={"slide_type": "slide"}
# #### Running the algorithm
#
# Library: https://scikit-learn.org/
# + slideshow={"slide_type": "-"}
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3, n_init=1, random_state=245234) # declare k
km = km.fit(data) # run the algorithm on the data
# -
km.labels_
km.cluster_centers_
# + [markdown] slideshow={"slide_type": "slide"}
# #### Evaluate the result
# + slideshow={"slide_type": "-"}
evaluation = {
    'Iris-setosa': {'0': 0, '1': 0, '2': 0},
    'Iris-versicolor': {'0': 0, '1': 0, '2': 0},
    'Iris-virginica': {'0': 0, '1': 0, '2': 0}
}
for i in range(len(data_complete)):
    evaluation[data_complete['species'][i]][str(km.labels_[i])] += 1
# + slideshow={"slide_type": "-"}
evaluation
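The counting loop above can equivalently be done in one call with `pandas.crosstab`. The sketch below uses toy stand-ins for `data_complete['species']` and `km.labels_` so it runs on its own:

```python
import pandas

# stand-ins for the true species column and the predicted cluster labels
species = pandas.Series(['setosa', 'setosa', 'virginica', 'virginica', 'virginica'])
labels = pandas.Series([0, 0, 1, 1, 0])

# rows = true species, columns = predicted cluster id
table = pandas.crosstab(species, labels)
print(table.loc['virginica', 1])  # 2
```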
# + [markdown] slideshow={"slide_type": "slide"}
# #### Run the algorithm again
#
# Let's see whether we get a different result with different random initial values.
# -
km2 = KMeans(n_clusters=3, n_init=1, random_state=123657651)
km2 = km2.fit(data)
# + slideshow={"slide_type": "-"}
evaluation2 = {
    'Iris-setosa': {'0': 0, '1': 0, '2': 0},
    'Iris-versicolor': {'0': 0, '1': 0, '2': 0},
    'Iris-virginica': {'0': 0, '1': 0, '2': 0}
}
for i in range(150):
    evaluation2[data_complete['species'][i]][str(km2.labels_[i])] += 1
# + slideshow={"slide_type": "slide"}
evaluation, evaluation2
# + slideshow={"slide_type": "fragment"}
km.cluster_centers_, km2.cluster_centers_
# + slideshow={"slide_type": "fragment"}
km.n_iter_, km2.n_iter_
# + [markdown] slideshow={"slide_type": "slide"}
# #### Plotting the centers
# + slideshow={"slide_type": "-"}
col_names = [ d for d in data ]
col_idx_name = [ (i, col_names[i]) for i in range(len(col_names)) ]
centers = [ { v: c[i] for (i,v) in col_idx_name } for c in km.cluster_centers_ ]
# -
centers
# + [markdown] slideshow={"slide_type": "slide"}
# #### Predicted species
# + slideshow={"slide_type": "-"}
c = {0: 'g', 1: 'b', 2: 'brown'}
seaborn.scatterplot(x='sepal_length', y='sepal_width', data=data,
                    hue=km.labels_, palette=c)
matplotlib.pyplot.plot('sepal_length', 'sepal_width', 'gx', data=centers[0])
matplotlib.pyplot.plot('sepal_length', 'sepal_width', 'bx', data=centers[1])
# match the center marker to cluster 2's palette color (brown, not red)
matplotlib.pyplot.plot('sepal_length', 'sepal_width', 'x', color='brown', data=centers[2])
# + [markdown] slideshow={"slide_type": "slide"}
# #### Correct species
# + slideshow={"slide_type": "-"}
c = {'Iris-setosa': 'b', 'Iris-versicolor': 'brown', 'Iris-virginica': 'g'}
seaborn.scatterplot(x='sepal_length', y='sepal_width', data=data_complete,
                    hue='species', palette=c)
matplotlib.pyplot.plot('sepal_length', 'sepal_width', 'gx', data=centers[0])
matplotlib.pyplot.plot('sepal_length', 'sepal_width', 'bx', data=centers[1])
# use brown for the third center so it matches the palette above
matplotlib.pyplot.plot('sepal_length', 'sepal_width', 'x', color='brown', data=centers[2])
# + [markdown] slideshow={"slide_type": "slide"}
# ## Questions?
|
session_4-4_Stats_KMeans/Session 4-4 Statistics (Thursday 9-7-2020, 14 pm).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import missingno
df=pd.read_csv('../datasets/Customer-Data.csv')
df1=pd.read_csv('../datasets/Products.csv')
df.head()
df1.head()
# use raw strings for regex patterns to avoid invalid-escape warnings
df1['Front Camera'] = df1['Front Camera'].str.extract(r'(\d+)')
df1['Rear Camera'] = df1['Rear Camera'].str.extract(r'(\d+)')
df1['RAM'] = df1['RAM'].str.extract(r'(\d+)')
df1['Storage'] = df1['Storage'].str.extract(r'(\d+)')
df1['Battery'] = df1['Battery'].str.extract(r'(\d+)')
df1['Display']=df1['Display'].str.replace('inches','').astype('float')
df1.head()
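Note that `str.extract` returns strings even when the captured group is all digits. A hedged sketch (toy series with made-up values, not the product data) of converting the result for later arithmetic:

```python
import pandas as pd

s = pd.Series(['12 MP', '8 MP', '48 MP'])
extracted = s.str.extract(r'(\d+)')[0]  # dtype is object (strings)
numeric = pd.to_numeric(extracted)      # convert so comparisons/aggregations work
print(int(numeric.max()))  # 48
```

Without the conversion, `max()` would compare lexicographically ('8' > '48').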
x=df.merge(df1)
x=x.drop(columns=['Name','OS','Display'])
x.State = x.State.fillna(x.Country)
x.to_csv('Dataset.csv',index=False)
x.shape
|
python-notebook/DataPreProcessing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import os
import shutil
from pommerman.envs.v0 import Pomme
from pommerman.agents import SimpleAgent, BaseAgent
from pommerman.configs import ffa_v0_env
from pommerman.constants import BOARD_SIZE, GameType
from tensorforce.agents import PPOAgent
from tensorforce.execution import Runner
from tensorforce.contrib.openai_gym import OpenAIGym
# -
num_episodes = 30000
batching_capacity = 1000
save_seconds = 300
main_dir = './ppo/'
log_path = main_dir + 'logs/'
model_path = main_dir + 'model'
if not os.path.isdir(main_dir):
    os.mkdir(main_dir)
if os.path.isdir(log_path):
    shutil.rmtree(log_path, ignore_errors=True)
os.mkdir(log_path)
# +
# Instantiate the environment
config = ffa_v0_env()
env = Pomme(**config["env_kwargs"])
env.seed(0)
# Create a Proximal Policy Optimization agent
network = dict(type='pomm_network.PommNetwork')
states = {
    "board": dict(shape=(BOARD_SIZE, BOARD_SIZE, 3), type='float'),
    "state": dict(shape=(3,), type='float')
}
saver = {
    "directory": model_path,
    "seconds": save_seconds,
    "load": os.path.isdir(model_path)
}
agent = PPOAgent(
    states=states,
    actions=dict(type='int', num_actions=env.action_space.n),
    network=network,
    batching_capacity=batching_capacity,
    step_optimizer=dict(
        type='adam',
        learning_rate=1e-4
    ),
    saver=saver
)
# +
class TensorforceAgent(BaseAgent):
    def act(self, obs, action_space):
        pass
# Add 3 SimpleAgents (rule-based baseline opponents)
agents = []
for agent_id in range(3):
    agents.append(SimpleAgent(config["agent"](agent_id, config["game_type"])))
# Add TensorforceAgent
agent_id += 1
agents.append(TensorforceAgent(config["agent"](agent_id, config["game_type"])))
env.set_agents(agents)
env.set_training_agent(agents[-1].agent_id)
env.set_init_game_state(None)
# -
class WrappedEnv(OpenAIGym):
    def __init__(self, gym, visualize=False):
        self.gym = gym
        self.visualize = visualize
    def execute(self, actions):
        if self.visualize:
            self.gym.render()
        obs = self.gym.get_observations()
        all_actions = self.gym.act(obs)
        all_actions.insert(self.gym.training_agent, actions)
        state, reward, terminal, _ = self.gym.step(all_actions)
        agent_state = WrappedEnv.featurize(state[self.gym.training_agent])
        agent_reward = reward[self.gym.training_agent]
        # If nobody dies, use a shaped ("smart") reward instead of the raw 0
        if agent_reward == 0:
            agent_reward = self.gym.train_reward
        return agent_state, terminal, agent_reward
    def reset(self):
        obs = self.gym.reset()
        # featurize the training agent's observation (index set above)
        agent_obs = WrappedEnv.featurize(obs[self.gym.training_agent])
        return agent_obs
    @staticmethod
    def featurize(obs):
        def get_matrix(d, key):
            res = d[key]
            return res.reshape(res.shape[0], res.shape[1], 1).astype(np.float32)
        board = get_matrix(obs, 'board')
        teammate_position = None
        teammate = obs["teammate"]
        if teammate is not None:
            teammate = teammate.value
            if teammate > 10 and teammate < 15:
                teammate_position = np.argwhere(board == teammate)[0]
            else:
                teammate = None
        # Board agent ids: myself = 11, teammate = 12, enemy = 13
        # First mark every agent as an enemy
        board[(board > 10) & (board < 15)] = 13
        # I'm not an enemy
        my_position = obs['position']
        board[my_position[0], my_position[1], 0] = 11
        # Set teammate
        if teammate_position is not None:
            board[teammate_position[0], teammate_position[1], teammate_position[2]] = 12
        bomb_blast_strength = get_matrix(obs, 'bomb_blast_strength')
        bomb_life = get_matrix(obs, 'bomb_life')
        conv_inp = np.concatenate([board, bomb_blast_strength, bomb_life], axis=2)
        state = np.array([obs["ammo"], obs["blast_strength"], obs["can_kick"]]).astype(np.float32)
        return dict(board=conv_inp, state=state)
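The agent-id remapping inside `featurize` can be seen on a toy board (the values below are illustrative, not a real Pommerman observation):

```python
import numpy as np

board = np.array([11, 12, 13, 14, 3, 0])  # toy flattened board cells
# collapse every agent id (11..14) to "enemy" = 13, as featurize does,
# before re-marking one's own position (11) and the teammate (12)
board[(board > 10) & (board < 15)] = 13
print(board.tolist())  # [13, 13, 13, 13, 3, 0]
```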
def episode_finished(r):
    if r.episode % 10 == 0:
        print("Finished episode {ep} after {ts} timesteps".format(ep=r.episode + 1, ts=r.timestep + 1))
        print("Episode reward: {}".format(r.episode_rewards[-1]))
        # take the last 10 rewards ([-10:]), not everything after the first 10
        print("Average of last 10 rewards: {}".format(np.mean(r.episode_rewards[-10:])))
    return True
# +
# Instantiate the wrapped environment and run it for num_episodes episodes.
wrapped_env = WrappedEnv(env, False)
runner = Runner(agent=agent, environment=wrapped_env)
runner.run(num_episodes=num_episodes, episode_finished=episode_finished, max_episode_timesteps=env._max_steps)
print("Stats: ", runner.episode_rewards, runner.episode_timesteps, runner.episode_times)
try:
runner.close()
except AttributeError as e:
pass
# -
|
rl_agent/ppo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Study Case : Wisconsin Breast Cancer
#
# #### <NAME>
# As part of a Study Case Assignment in the Make AI Bootcamp
# ##### June 2018
# Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The separating plane in the 3-dimensional space is that described in: [<NAME> and <NAME>: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
#
# This database is also available through the UW CS ftp server: ftp ftp.cs.wisc.edu cd math-prog/cpo-dataset/machine-learn/WDBC/
#
# Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
#
# Attribute Information:
#
# 1) ID number 2) Diagnosis (M = malignant, B = benign) 3-32)
#
# Ten real-valued features are computed for each cell nucleus:
#
# a) radius (mean of distances from center to points on the perimeter) <br>b) texture (standard deviation of gray-scale values) <br>c) perimeter <br>d) area <br>e) smoothness (local variation in radius lengths) <br>f) compactness (perimeter^2 / area - 1.0) <br>g) concavity (severity of concave portions of the contour) <br>h) concave points (number of concave portions of the contour) <br>i) symmetry <br>j) fractal dimension ("coastline approximation" - 1)
#
# The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.
#
# All feature values are recoded with four significant digits.
#
# Missing attribute values: none
#
# Class distribution: 357 benign, 212 malignant
#
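# The 30 feature columns follow the layout described above: each of the ten base measurements appears as a mean, a standard error, and a "worst" value. A sketch generating hypothetical column names in that order (the actual CSV headers may be spelled slightly differently, e.g. 'radius_mean'):

```python
# Ten base nucleus measurements from the dataset description.
base_features = ['radius', 'texture', 'perimeter', 'area', 'smoothness',
                 'compactness', 'concavity', 'concave points', 'symmetry',
                 'fractal dimension']
# Each appears as mean, standard error (SE), and "worst" -> 3 x 10 = 30 columns.
columns = ['{} {}'.format(stat, feat)
           for stat in ('mean', 'se', 'worst')
           for feat in base_features]
print(len(columns), columns[0], columns[10])
```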
# ### Import Library and Dataset
import pandas as pd
import pandas_profiling
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
plt.style.use('bmh')
df=pd.read_csv('Dataset/data.csv')
print("Dataset size : ",df.shape)
df=df.drop(columns=['id','Unnamed: 32'])
df.head()
pandas_profile=pandas_profiling.ProfileReport(df)
pandas_profile.to_file(outputfile='Pandas_ProfilingOutput.html')
#pandas_profile
# ## [Detail HTML Pandas Profiling](Pandas_ProfilingOutput.html)
# ### Explore the Values
#
# Explore Distribution values from the dataset using describe statistic and histogram
df.describe()
df.hist(figsize=(16,25),bins=50,xlabelsize=8,ylabelsize=8);
#
# ## Training Dataset Preparation
#
# Since most machine learning algorithms only accept array-like inputs, we need to convert the dataframe into X and y arrays before running any machine learning algorithm
X=np.array(df.drop(columns=['diagnosis']))
y=df['diagnosis'].values
print ("X dataset shape : ",X.shape)
print ("y dataset shape : ",y.shape)
# The dataset is split into X (the feature parameters) and y (the classification labels)
# # Machine Learning Model
#
# ### Import Machine Learning Library from Scikit-Learn
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
# ### Using 5 Machine Learning Model
#
# The models used are classification models, since the purpose of this study case is to classify a diagnosis as "Malignant" (M) or "Benign" (B) breast cancer
#
# * Model 1 : Using Simple Logistic Regression
# * Model 2 : Using Support Vector Classifier
# * Model 3 : Using Decision Tree Classifier
# * Model 4 : Using Random Forest Classifier
# * Model 5 : Using Gradient Boosting Classifier
model_1 = LogisticRegression()
model_2 = SVC()
model_3 = DecisionTreeClassifier()
model_4 = RandomForestClassifier()
model_5 = GradientBoostingClassifier()
# # Model Fitting
#
# Since we need to fit the dataset to each algorithm, a proper split into a training set and a test set is required
#
# ## Method 1. Train test split
#
# Using Scikit-Learn's built-in tool to split the data into a training set and a test set to check each model's score <br>
# train_test_split is configured to use 20% of the data for testing and 80% for training, with random_state 45.
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=45)
print ("Train size : ",X_train.shape)
print ("Test size : ",X_test.shape)
# -
# ### Fitting train dataset into model
model_1.fit(X_train,y_train)
model_2.fit(X_train,y_train)
model_3.fit(X_train,y_train)
model_4.fit(X_train,y_train)
model_5.fit(X_train,y_train)
# ### Predict and show Score and F1 Score prediction using test data
# +
# Predict data
y_pred1=model_1.predict(X_test)
y_pred2=model_2.predict(X_test)
y_pred3=model_3.predict(X_test)
y_pred4=model_4.predict(X_test)
y_pred5=model_5.predict(X_test)
#Show F1 Score
from sklearn.metrics import f1_score
f1_model1=f1_score(y_test,y_pred1,average='weighted',labels=np.unique(y_pred1))
f1_model2=f1_score(y_test,y_pred2,average='weighted',labels=np.unique(y_pred2))
f1_model3=f1_score(y_test,y_pred3,average='weighted',labels=np.unique(y_pred3))
f1_model4=f1_score(y_test,y_pred4,average='weighted',labels=np.unique(y_pred4))
f1_model5=f1_score(y_test,y_pred5,average='weighted',labels=np.unique(y_pred5))
print("F1 score Model 1 : ",f1_model1)
print("F1 score Model 2 : ",f1_model2)
print("F1 score Model 3 : ",f1_model3)
print("F1 score Model 4 : ",f1_model4)
print("F1 score Model 5 : ",f1_model5)
# -
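# A small illustration of what `average='weighted'` computes: the per-class F1 scores are averaged with weights equal to each class's support. The labels below are hypothetical toy values, not taken from the dataset.

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy labels: 4 benign (B) and 1 malignant (M) ground-truth samples.
y_true = np.array(['B', 'B', 'B', 'B', 'M'])
y_pred = np.array(['B', 'B', 'B', 'M', 'M'])

f1_b = f1_score(y_true, y_pred, pos_label='B', average='binary')  # F1 for class B
f1_m = f1_score(y_true, y_pred, pos_label='M', average='binary')  # F1 for class M
manual = (4 * f1_b + 1 * f1_m) / 5        # support-weighted mean (4 B's, 1 M)
weighted = f1_score(y_true, y_pred, average='weighted')
print(manual, weighted)                    # the two values agree
```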
# ## Method 2. Cross validation method
#
# Using cross validation gives a more reliable estimate of the model's performance <br>
# in this case using StratifiedKFold from Scikit-Learn, with n_splits = 10 and shuffle = True
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=10, shuffle=True)
skf.get_n_splits(X,y)
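# The point of the *stratified* variant is that every fold keeps the class ratio of the full dataset. A hypothetical miniature example (not the breast cancer data) makes this visible:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# 8 'B' and 4 'M' samples with 4 folds: each test fold gets exactly 2 B and 1 M.
y_demo = np.array(['B'] * 8 + ['M'] * 4)
X_demo = np.zeros((12, 1))
skf_demo = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for _, test_idx in skf_demo.split(X_demo, y_demo):
    fold = y_demo[test_idx]
    print((fold == 'B').sum(), (fold == 'M').sum())  # prints "2 1" for every fold
```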
# Set Container to gather the cross validation result of the model
score_list_model1,score_list_model2,score_list_model3,score_list_model4,score_list_model5 = [],[],[],[],[]
for train_index, test_index in skf.split(X,y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
model_1.fit(X_train, y_train)
model_2.fit(X_train, y_train)
model_3.fit(X_train, y_train)
model_4.fit(X_train, y_train)
model_5.fit(X_train, y_train)
y_pred1=model_1.predict(X_test)
y_pred2=model_2.predict(X_test)
y_pred3=model_3.predict(X_test)
y_pred4=model_4.predict(X_test)
y_pred5=model_5.predict(X_test)
score_list_model1.append(f1_score(y_test,y_pred1,average='weighted',labels=np.unique(y_pred1)))
score_list_model2.append(f1_score(y_test,y_pred2,average='weighted',labels=np.unique(y_pred2)))
score_list_model3.append(f1_score(y_test,y_pred3,average='weighted',labels=np.unique(y_pred3)))
score_list_model4.append(f1_score(y_test,y_pred4,average='weighted',labels=np.unique(y_pred4)))
score_list_model5.append(f1_score(y_test,y_pred5,average='weighted',labels=np.unique(y_pred5)))
score_table = pd.DataFrame({"F1 Score model 1" :score_list_model1,
"F1 Score model 2" :score_list_model2,
"F1 Score model 3" :score_list_model3,
"F1 Score model 4" :score_list_model4,
"F1 Score model 5" :score_list_model5})
score_table
# +
final_1=np.mean(score_list_model1)
final_2=np.mean(score_list_model2)
final_3=np.mean(score_list_model3)
final_4=np.mean(score_list_model4)
final_5=np.mean(score_list_model5)
print("F1 Score Average Model_1",final_1)
print("F1 Score Average Model_2",final_2)
print("F1 Score Average Model_3",final_3)
print("F1 Score Average Model_4",final_4)
print("F1 Score Average Model_5",final_5)
# -
# ## Hyperparameter Search
# ### Purpose is to Optimize Model 5 (Gradient Boosting model)
# #### 1. Get Current Params
model_5.get_params()
# #### 2. Optimization of _'max_depth'_ , _'min_samples_leaf'_ and _'min_samples_split'_
# Using GridSearch CV
# +
from sklearn.model_selection import GridSearchCV
gb_tuned_params = {'max_depth' : [1, 2, 3, 4],
'min_samples_leaf': [1, 3, 5],
'min_samples_split' : [2, 3, 5]}
GridGB = GridSearchCV(GradientBoostingClassifier(),gb_tuned_params, cv=5)
GridGB.fit(X,y)
print("Best Params : ",GridGB.best_params_)
print()
means = GridGB.cv_results_['mean_test_score']
stds = GridGB.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, GridGB.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
# -
# #### 3. Fit a GradientBoostingClassifier with the best hyperparameters and compute its F1 score
# +
Optimized_model=GradientBoostingClassifier(max_depth=3,min_samples_leaf=5,min_samples_split=5)
score_list_optimized=[]
for train_index, test_index in skf.split(X,y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
Optimized_model.fit(X_train, y_train)
y_pred=Optimized_model.predict(X_test)
score_list_optimized.append(
f1_score(y_test,y_pred,average='weighted',labels=np.unique(y_pred)))
print()
print("F1 Score Optimized model : ",np.mean(score_list_optimized))
# -
#
# # Conclusion
# After testing 5 machine learning classifiers with both the train/test split and the cross validation method, we conclude that __Model 5__, __Gradient Boosting__, wins with a cross-validation F1 score of __0.969__ and optimized parameters: 'max_depth': 1, 'min_samples_leaf': 5, 'min_samples_split': 2
|
.ipynb_checkpoints/Study Case #2 Breast Cancer_Benedict Aryo-checkpoint.ipynb
|