# Taylor problem 3.23
last revised: 04-Jan-2020 by Dick Furnstahl [furnstahl.1@osu.edu]
**This notebook is almost ready to go, except that the initial conditions and $\Delta v$ are different from the problem statement and there is no statement to print the figure. Fix these and you're done!**
This is a conservation of momentum problem, which in the end lets us determine the trajectories of the two masses before and after the explosion. How should we visualize that the center-of-mass of the pieces continues to follow the original parabolic path?
Plan:
1. Plot the original trajectory, also continued past the explosion time.
2. Plot the two trajectories after the explosion.
3. For some specified times of the latter two trajectories, connect the points and indicate the center of mass.
The implementation here could certainly be improved! Please make suggestions (and develop improved versions).
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
First define some functions we think we will need. The formulas are based on our paper-and-pencil work.
The trajectory starting from $t=0$ is:
$
\begin{align}
x(t) &= x_0 + v_{x0} t \\
y(t) &= y_0 + v_{y0} t - \frac{1}{2} g t^2
\end{align}
$
```
def trajectory(x0, y0, vx0, vy0, t_pts, g=9.8):
    """Calculate the x(t) and y(t) trajectories for an array of times,
       which must start with t=0.
    """
    return x0 + vx0*t_pts, y0 + vy0*t_pts - g*t_pts**2/2.
```
The velocity at the final time $t_f$ is:
$
\begin{align}
v_{x}(t) &= v_{x0} \\
v_{y}(t) &= v_{y0} - g t_f
\end{align}
$
```
def final_velocity(vx0, vy0, t_pts, g=9.8):
    """Calculate vx and vy at the final time of the array t_pts."""
    return vx0, vy0 - g*t_pts[-1]  # -1 gives the last element
```
The center of mass of two particles at $(x_1, y_1)$ and $(x_2, y_2)$ is:
$
\begin{align}
x_{cm} &= \frac{1}{2}(x_1 + x_2) \\
y_{cm} &= \frac{1}{2}(y_1 + y_2)
\end{align}
$
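The equal-weight average used here is the equal-mass special case ($m_1 = m_2$) of the general center-of-mass formula:
$
\begin{align}
x_{cm} &= \frac{m_1 x_1 + m_2 x_2}{m_1 + m_2} \\
y_{cm} &= \frac{m_1 y_1 + m_2 y_2}{m_1 + m_2}
\end{align}
$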
```
def com_position(x1, y1, x2, y2):
    """Find the center-of-mass (com) position of two equal masses
       at positions (x1, y1) and (x2, y2)."""
    return (x1 + x2)/2., (y1 + y2)/2.
```
**1. Calculate and plot the original trajectory up to the explosion.**
```
# initial conditions
x0_before, y0_before = [0., 0.] # put the origin at the starting point
vx0_before, vy0_before = [6., 3.] # given in the problem statement
g = 1. # as recommended
# Array of times to calculate the trajectory up to the explosion at t=4
t_pts_before = np.array([0., 1., 2., 3., 4.])
x_before, y_before = trajectory(x0_before, y0_before,
vx0_before, vy0_before,
t_pts_before, g)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(x_before, y_before, 'ro-')
ax.set_xlabel('x')
ax.set_ylabel('y')
```
Does it make sense so far? Note that we could use more intermediate points to make a smoother curve (rather than piecewise straight segments), but this is fine at least for a first pass.
**2. Calculate and plot the two trajectories after the explosion.**
For the second part of the trajectory, we reset our clock to $t=0$ because that is how our trajectory function is constructed. We'll need initial positions and velocities of the pieces just after the explosion. These are the final position of the combined piece before the explosion and the final velocity plus and minus $\Delta \mathbf{v}$. We are told $\Delta \mathbf{v}$. We have to figure out the final velocity before the explosion.
```
delta_v = np.array([2., 1.])  # change in velocity of one piece
# reset time to 0 for calculating trajectories
t_pts_after = np.array([0., 1., 2., 3., 4., 5.])
# Also could have used np.arange(0.,6.,1.)
x0_after = x_before[-1] # -1 here means the last element of the array
y0_after = y_before[-1]
vxcm0_after, vycm0_after = final_velocity(vx0_before, vy0_before,
t_pts_before, g)
# The _1 and _2 suffixes refer to the two pieces after the explosion
vx0_after_1 = vxcm0_after + delta_v[0]
vy0_after_1 = vycm0_after + delta_v[1]
vx0_after_2 = vxcm0_after - delta_v[0]
vy0_after_2 = vycm0_after - delta_v[1]
# Given the initial conditions after the explosion, we calculate trajectories
x_after_1, y_after_1 = trajectory(x0_after, y0_after,
vx0_after_1, vy0_after_1,
t_pts_after, g)
x_after_2, y_after_2 = trajectory(x0_after, y0_after,
vx0_after_2, vy0_after_2,
t_pts_after, g)
# This is the center-of-mass trajectory
xcm_after, ycm_after = trajectory(x0_after, y0_after,
vxcm0_after, vycm0_after,
t_pts_after, g)
# These are calculated points of the center-of-mass
xcm_pts, ycm_pts = com_position(x_after_1, y_after_1, x_after_2, y_after_2)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(x_before, y_before, 'ro-', label='before explosion')
ax.plot(x_after_1, y_after_1, 'go-', label='piece 1 after')
ax.plot(x_after_2, y_after_2, 'bo-', label='piece 2 after')
ax.plot(xcm_after, ycm_after, 'r--', label='original trajectory')
ax.plot(xcm_pts, ycm_pts, 'o', color='black', label='center-of-mass of 1 and 2')
for i in range(len(t_pts_after)):
    ax.plot([x_after_1[i], x_after_2[i]],
            [y_after_1[i], y_after_2[i]],
            'k--')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend();
```
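As a sanity check of the conservation argument (the notebook's functions are re-defined here so the cell is self-contained), the average of two trajectories whose initial velocities differ by $\pm\Delta\mathbf{v}$ reproduces the unperturbed trajectory exactly:

```
import numpy as np

def trajectory(x0, y0, vx0, vy0, t_pts, g=9.8):
    return x0 + vx0*t_pts, y0 + vy0*t_pts - g*t_pts**2/2.

def com_position(x1, y1, x2, y2):
    return (x1 + x2)/2., (y1 + y2)/2.

t = np.arange(0., 5., 1.)
# piece 1 gets +delta_v = (2, 1), piece 2 gets -delta_v
x1, y1 = trajectory(0., 0., 6.+2., 3.+1., t, g=1.)
x2, y2 = trajectory(0., 0., 6.-2., 3.-1., t, g=1.)
xcm, ycm = com_position(x1, y1, x2, y2)
# the unperturbed trajectory with the original initial velocity (6, 3)
x0, y0 = trajectory(0., 0., 6., 3., t, g=1.)
assert np.allclose(xcm, x0) and np.allclose(ycm, y0)
```

The $\pm\Delta\mathbf{v}$ contributions cancel in the average, so the center of mass follows the original parabola, which is exactly what the final plot shows.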
```
import pandas as pd
import numpy as np
```
# Distances
```
def corr_distances(activator):
    """Normalized correlation-norm distances between each pair of datasets
    for one activation function, over models K0..K9."""
    mnist_vs_hands, hands_vs_fashn, fashn_vs_mnist = [], [], []
    for i in range(10):
        df_mnist = pd.read_csv("mnist/results/" + activator + "/cnn_K" + str(i) + ".csv")
        df_hands = pd.read_csv("handsign_mnist/results/" + activator + "/cnn_K" + str(i) + ".csv")
        df_fashn = pd.read_csv("fashion_mnist/results/" + activator + "/cnn_K" + str(i) + ".csv")
        # Self-correlation is 1 per column, so this norm is sqrt(#columns);
        # it normalizes the cross-dataset norms below
        max_norm = np.linalg.norm(df_mnist.corrwith(df_mnist))
        mnist_vs_hands.append(np.linalg.norm(df_mnist.corrwith(df_hands)) / max_norm)
        hands_vs_fashn.append(np.linalg.norm(df_hands.corrwith(df_fashn)) / max_norm)
        fashn_vs_mnist.append(np.linalg.norm(df_fashn.corrwith(df_mnist)) / max_norm)
    return mnist_vs_hands, hands_vs_fashn, fashn_vs_mnist

mnist_sgmd, hands_sgmd, fashn_sgmd = corr_distances("sgmd")
mnist_tanh, hands_tanh, fashn_tanh = corr_distances("tanh")
mnist_relu, hands_relu, fashn_relu = corr_distances("relu")
```
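For reference, `DataFrame.corrwith` returns the column-by-column Pearson correlation between two frames, so the self-correlation norm used for normalization is just the square root of the number of columns. A toy illustration:

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("wxyz"))
# Each column correlates perfectly with itself, giving a Series of 1.0s,
# whose Euclidean norm is sqrt(4) = 2
self_norm = np.linalg.norm(a.corrwith(a))
assert np.isclose(self_norm, 2.0)
```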
```
import os
from pprint import pprint
import torch
import torch.nn as nn
from transformers import BertForTokenClassification, BertTokenizer
from transformers import AdamW
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from sklearn.model_selection import train_test_split
import numpy as np
from tqdm.notebook import tqdm
```
## Load the MSRA named-entity-recognition dataset
```
file = "../datasets/dh_msra.txt"
```
## Check the GPU
```
# GPU check
print("CUDA Available: ", torch.cuda.is_available())
n_gpu = torch.cuda.device_count()
if torch.cuda.is_available():
    print("GPU numbers: ", n_gpu)
    print("device_name: ", torch.cuda.get_device_name(0))
    device = torch.device("cuda:0")  # choose the device index as appropriate
    torch.cuda.set_device(0)
    print(f"Current device: {torch.cuda.current_device()}")
else:
    device = torch.device("cpu")
    print(f"Current device: {device}")
```
## Configuration parameters
Centralize the configuration parameters so they are easy to use and modify.
```
class Config(object):
    """Configuration parameters"""
    def __init__(self):
        self.model_name = 'Bert_NER.bin'
        self.bert_path = './bert-chinese/'
        self.ner_file = '../datasets/dh_msra.txt'
        self.num_classes = 10             # number of classes (adjust as needed); 10 tag types here
        self.hidden_size = 768            # hidden-layer output dimension
        self.hidden_dropout_prob = 0.1    # dropout probability
        self.batch_size = 128             # mini-batch size
        self.max_len = 103                # maximum padded sentence length
        self.epochs = 3                   # number of epochs
        self.learning_rate = 2e-5         # learning rate
        self.save_path = './saved_model/' # where to save the trained model
        # self.fp16 = False
        # self.fp16_opt_level = 'O1'
        # self.gradient_accumulation_steps = 1
        # self.warmup_ratio = 0.06
        # self.warmup_steps = 0
        # self.max_grad_norm = 1.0
        # self.adam_epsilon = 1e-8
        # self.class_list = class_list    # list of class names
        # self.require_improvement = 1000 # stop early if no improvement after 1000 batches
config = Config()
all_sentences_separate = []
all_letter_labels = []
label_set = set()
with open(config.ner_file, encoding="utf-8") as f:
    single_sentence = []
    single_sentence_labels = []
    for s in f.readlines():
        if s != "\n":
            word, label = s.split("\t")
            label = label.strip("\n")
            single_sentence.append(word)
            single_sentence_labels.append(label)
            label_set.add(label)
        else:
            all_sentences_separate.append(single_sentence)
            all_letter_labels.append(single_sentence_labels)
            single_sentence = []
            single_sentence_labels = []
print(all_sentences_separate[0:2])
print(all_letter_labels[0:2])
print(f"\nAll labels: {label_set}")
# Build the tag-to-index dictionary
tag_to_ix = {"B-LOC": 0,
"I-LOC": 1,
"B-ORG": 2,
"I-ORG": 3,
"B-PER": 4,
"I-PER": 5,
"O": 6,
"[CLS]":7,
"[SEP]":8,
"[PAD]":9}
ix_to_tag = {0:"B-LOC",
1:"I-LOC",
2:"B-ORG",
3:"I-ORG",
4:"B-PER",
5:"I-PER",
6:"O",
7:"[CLS]",
8:"[SEP]",
9:"[PAD]"}
```
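Rather than writing both dictionaries by hand, the inverse map can be derived from the forward map, which keeps the two from drifting apart:

```
tag_to_ix = {"B-LOC": 0, "I-LOC": 1, "B-ORG": 2, "I-ORG": 3,
             "B-PER": 4, "I-PER": 5, "O": 6,
             "[CLS]": 7, "[SEP]": 8, "[PAD]": 9}
# Invert the mapping in one comprehension
ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}
assert ix_to_tag[6] == "O"
```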
## Data examples
Here we take a quick look at some examples from the data; many of the label values are the number 6.
The number 6 corresponds to the O (non-entity) tag.
```
all_sentences = []  # sentences as strings
for one_sentence in all_sentences_separate:
    sentence = "".join(one_sentence)
    all_sentences.append(sentence)
print(all_sentences[0:2])
all_labels = []  # label-index sequences
for letter_labels in all_letter_labels:
    labels = [tag_to_ix[t] for t in letter_labels]
    all_labels.append(labels)
print(all_labels[0:2])
print(len(all_labels[0]))
print(len(all_labels))
```
### Prepare the model inputs
```
# word -> token ids
tokenizer = BertTokenizer.from_pretrained('./bert-chinese/', do_lower_case=True)
# Newer API: tokenize, pad, and truncate in a single call
encoding = tokenizer(all_sentences,
                     return_tensors='pt',   # 'pt' for PyTorch, 'tf' for TensorFlow
                     padding='max_length',  # pad every sentence to max_length
                     truncation=True,       # enable and control truncation
                     max_length=config.max_len)
input_ids = encoding['input_ids']
# input_ids of the first sentence
print(f"First sentence before tokenizing:\n{all_sentences[0]}\n")
print(f"First sentence after tokenizing + padding: \n{input_ids[0]}")
attention_masks = encoding['attention_mask']
token_type_ids = encoding['token_type_ids']
# attention mask of the first sentence
print(attention_masks[0])
```
## Prepare the labels
Because our input_ids include `[CLS]` and `[SEP]` tokens, the labels have to account for them as well.
```
# Label indices: 7 = [CLS], 8 = [SEP], 9 = [PAD]
for label in all_labels:
    label.insert(len(label), 8)  # append [SEP]
    label.insert(0, 7)           # prepend [CLS]
    if config.max_len > len(label) - 1:
        # pad up to max_len; len(label) already includes the added [CLS] and [SEP]
        for i in range(config.max_len - len(label)):
            label.append(9)  # [PAD]
print(len(all_labels[0]))
print(all_labels[0])
# Find the longest label sequence and the longest tokenized input
max_len_label = 0
max_len_text = 0
for label in all_labels:
    if len(label) > max_len_label:  # fixed: originally compared against max_len_text
        max_len_label = len(label)
print(max_len_label)
for one_input in input_ids:
    if len(one_input) > max_len_text:
        max_len_text = len(one_input)
print(max_len_text)
```
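The label-padding logic above can be sketched in isolation (a hypothetical tiny example with `max_len=6`; the tag indices follow the notebook's `tag_to_ix`):

```
CLS, SEP, PAD = 7, 8, 9
label = [4, 5, 6]  # B-PER, I-PER, O
max_len = 6
label = [CLS] + label + [SEP]            # add the special tokens
label += [PAD] * (max_len - len(label))  # pad to max_len
print(label)  # [7, 4, 5, 6, 8, 9]
```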
## Split into training and test sets
```
# train-test-split
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(input_ids,
all_labels,
random_state=2021,
test_size=0.1)
train_masks, validation_masks, _, _ = train_test_split(attention_masks,
input_ids,
random_state=2021,
test_size=0.1)
print(len(train_inputs))
print(len(validation_inputs))
print(train_inputs[0])
print(validation_inputs[0])
```
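Since `train_test_split` accepts any number of aligned arrays, the two calls above could be collapsed into one, which also guarantees the masks stay aligned with the inputs (a sketch with toy data):

```
from sklearn.model_selection import train_test_split

X = list(range(10))
y = [i * 10 for i in range(10)]
m = [i * 100 for i in range(10)]
X_tr, X_va, y_tr, y_va, m_tr, m_va = train_test_split(
    X, y, m, random_state=2021, test_size=0.2)
# Rows remain aligned across all three splits
assert all(y_ == x_ * 10 and m_ == x_ * 100
           for x_, y_, m_ in zip(X_tr, y_tr, m_tr))
```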
Here we convert the input labels to tensors.
```
train_labels = torch.tensor(train_labels).clone().detach()
validation_labels = torch.tensor(validation_labels).clone().detach()
print(train_labels[0])
print(len(train_labels))
print(len(train_inputs))
# DataLoaders
# Build the training dataset
train_data = TensorDataset(train_inputs, train_masks, train_labels)
# Random sampling
train_sampler = RandomSampler(train_data)
# Batched loading
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=config.batch_size)
# Build the validation dataset
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
# Sequential sampling
validation_sampler = SequentialSampler(validation_data)
# Batched loading
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=config.batch_size)
model = BertForTokenClassification.from_pretrained(config.bert_path, num_labels=config.num_classes)
model.cuda()
# Note:
# Newer versions of Transformers emit a warning here because the pre-trained
# weights do not include the final classification-layer weights.
# That is expected, since we are about to fine-tune that layer anyway.
# BERT fine-tuning parameters
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.weight']
# Apply weight decay to all parameters except biases and LayerNorm weights
optimizer_grouped_parameters = [
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.01},
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0}]
# Optimizer
optimizer = AdamW(optimizer_grouped_parameters,
                  lr=5e-5)
# Track the training loss
train_loss_set = []
# BERT training loop
for epoch in range(config.epochs):
    ## Training
    print(f"Current epoch: {epoch}")
    # Switch to training mode
    model.train()
    tr_loss = 0  # running training loss
    nb_tr_examples, nb_tr_steps = 0, 0
    # Train the data for one epoch
    for step, batch in tqdm(enumerate(train_dataloader)):
        # Move the batch to the GPU
        batch = tuple(t.to(device) for t in batch)
        # Unpack the batch
        b_input_ids, b_input_mask, b_labels = batch
        # Zero the gradients
        optimizer.zero_grad()
        # Forward pass; the model returns the loss when labels are supplied
        output = model(input_ids=b_input_ids,
                       attention_mask=b_input_mask,
                       labels=b_labels)
        loss = output[0]
        # Backward pass
        loss.backward()
        # Update parameters using the computed gradients
        optimizer.step()
        # Update tracking variables
        tr_loss += loss.item()
        nb_tr_examples += b_input_ids.size(0)
        nb_tr_steps += 1
    print(f"Train loss for this epoch: {tr_loss/nb_tr_steps}")
    ## Validation
    model.eval()
    # Tracking variables
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0
    # Evaluate for one epoch; the validation set is also read in batches
    for batch in tqdm(validation_dataloader):
        # Move the batch to the GPU
        batch = tuple(t.to(device) for t in batch)
        # Unpack the batch
        b_input_ids, b_input_mask, b_labels = batch
        # Predict
        with torch.no_grad():
            # token_type_ids (segment embeddings) default to all zeros, i.e. a single sentence
            # position_ids default to [0, sentence_length - 1]
            outputs = model(input_ids=b_input_ids,
                            attention_mask=b_input_mask,
                            token_type_ids=None,
                            position_ids=None)
        # Move scores and labels to the CPU
        scores = outputs[0].detach().cpu().numpy()  # per-token label scores
        pred_flat = np.argmax(scores[0], axis=1).flatten()
        label_ids = b_labels.to('cpu').numpy()  # true labels
# Save the model; it can then be reloaded using `from_pretrained()`
# Create the save directory if it does not exist
if not os.path.exists(config.save_path):
    os.makedirs(config.save_path)
    print("Save directory did not exist; created it.")
output_dir = config.save_path
model_to_save = model.module if hasattr(model, 'module') else model  # Take care of distributed/parallel training
# Good practice: save your training arguments together with the trained model
torch.save(model_to_save.state_dict(), os.path.join(output_dir, config.model_name))
# save_pretrained() also writes the config and vocabulary files that
# from_pretrained() below expects (a bare state_dict is not enough)
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
# Load the trained model and vocabulary that you have fine-tuned
output_dir = config.save_path
model = BertForTokenClassification.from_pretrained(output_dir)
tokenizer = BertTokenizer.from_pretrained(output_dir)
model.to(device)
# Single-sentence test
# test_sententce = "在北京市朝阳区的一家网吧,我亲眼看见卢本伟和孙笑川一起开挂。"
test_sententce = "史源源的房子租在滨江区南环路税友大厦附近。"
# Build the tag-to-index dictionary
tag_to_ix = {"B-LOC": 0,
"I-LOC": 1,
"B-ORG": 2,
"I-ORG": 3,
"B-PER": 4,
"I-PER": 5,
"O": 6,
"[CLS]":7,
"[SEP]":8,
"[PAD]":9}
ix_to_tag = {0:"B-LOC",
1:"I-LOC",
2:"B-ORG",
3:"I-ORG",
4:"B-PER",
5:"I-PER",
6:"O",
7:"[CLS]",
8:"[SEP]",
9:"[PAD]"}
encoding = tokenizer(test_sententce,
                     return_tensors='pt',  # 'pt' for PyTorch, 'tf' for TensorFlow
                     padding=True,         # pad to the longest sentence in the batch
                     truncation=True,      # enable and control truncation
                     max_length=50)
test_input_ids = encoding['input_ids']
# Attention masks
test_attention_masks = encoding['attention_mask']
# Build the test dataset
# For generality we still go through a DataLoader
test_data = TensorDataset(test_input_ids, test_attention_masks)
# Sequential sampling
test_sampler = SequentialSampler(test_data)
# Batched loading
test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=config.batch_size)
# Evaluation mode
model.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Evaluate the test data, also in batches
for batch in tqdm(test_dataloader):
    # Move the batch to the GPU
    batch = tuple(t.to(device) for t in batch)
    # Unpack the batch
    b_input_ids, b_input_mask = batch
    # Predict
    with torch.no_grad():
        # token_type_ids (segment embeddings) default to all zeros, i.e. a single sentence
        # position_ids default to [0, sentence_length - 1]
        outputs = model(input_ids=b_input_ids,
                        attention_mask=None,
                        token_type_ids=None,
                        position_ids=None)
    # Move the logits to the CPU
    scores = outputs[0].detach().cpu().numpy()  # per-token label scores
    pred_flat = np.argmax(scores[0], axis=1).flatten()
print(pred_flat)  # predicted label indices
pre_labels = [ix_to_tag[n] for n in pred_flat]
print(f"Test sentence: {test_sententce}")
print(len(test_sententce))
print(pre_labels)
pre_labels_cut = pre_labels[0:len(test_sententce)+2]
pre_labels_cut
pre_labels_cut = pre_labels[0:len(test_sententce)+2]
pre_labels_cut
person = []  # temporary stack
persons = []
location = []
locations = []
for i in range(len(pre_labels_cut) - 1):
    # Person
    # single-character entity (fixed: originally tested len(location) here)
    if pre_labels[i] == 'B-PER' and pre_labels[i+1] != 'I-PER' and len(person) == 0:
        person.append(i)
        persons.append(person)
        person = []  # clear the stack
        continue
    # multi-character entity
    # a PER entity is already on the stack: flush it, then start the new one
    if pre_labels[i] == 'B-PER' and pre_labels[i+1] == 'I-PER' and len(person) != 0:
        persons.append(person)
        person = []  # clear the stack
        person.append(i)  # start the new B-PER
    # no PER entity on the stack yet (fixed: originally tested len(location) here)
    elif pre_labels[i] == 'B-PER' and pre_labels[i+1] == 'I-PER' and len(person) == 0:
        person.append(i)  # start the new B-PER
    elif pre_labels[i] != 'I-PER' and len(person) != 0:
        persons.append(person)  # move the stack contents to the results
        person = []  # clear the stack
    elif pre_labels[i] == 'I-PER' and len(person) != 0:
        person.append(i)
    else:  # very rarely a sequence starts with I-PER; ignore it
        pass
    # Location
    # single-character entity
    if pre_labels[i] == 'B-LOC' and pre_labels[i+1] != 'I-LOC' and len(location) == 0:
        location.append(i)
        locations.append(location)
        location = []  # clear the stack
        continue
    # multi-character entity
    # a LOC entity is already on the stack: flush it, then start the new one
    if pre_labels[i] == 'B-LOC' and pre_labels[i+1] == 'I-LOC' and len(location) != 0:
        locations.append(location)
        location = []  # clear the stack
        location.append(i)  # start the new B-LOC
    # no LOC entity on the stack yet
    elif pre_labels[i] == 'B-LOC' and pre_labels[i+1] == 'I-LOC' and len(location) == 0:
        location.append(i)  # start the new B-LOC
    elif pre_labels[i] == 'I-LOC' and len(location) != 0:
        location.append(i)
    # end of an entity
    elif pre_labels[i] != 'I-LOC' and len(location) != 0:
        locations.append(location)  # move the stack contents to the results
        location = []  # clear the stack
    else:  # very rarely a sequence starts with I-LOC; ignore it
        pass
print(persons)
print(locations)
# Extract the entity strings from the sentence
# Persons
NER_PER = []
for word_idx in persons:
    ONE_PER = []
    for letter_idx in word_idx:
        # shift by 1 to undo the [CLS] offset
        ONE_PER.append(test_sententce[letter_idx - 1])
    NER_PER.append(ONE_PER)
NER_PER_COMBINE = []
for w in NER_PER:
    PER = "".join(w)
    NER_PER_COMBINE.append(PER)
# Locations
NER_LOC = []
for word_idx in locations:
    ONE_LOC = []
    for letter_idx in word_idx:
        ONE_LOC.append(test_sententce[letter_idx - 1])
    NER_LOC.append(ONE_LOC)
NER_LOC_COMBINE = []
for w in NER_LOC:
    LOC = "".join(w)
    NER_LOC_COMBINE.append(LOC)
print(f"Sentence: {test_sententce}\n")
print(f"Persons: {NER_PER_COMBINE}\n")
print(f"Locations: {NER_LOC_COMBINE}\n")
```
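As an aside, the stack-based extraction above can be written as one generic BIO-decoding helper (a sketch, not the method used in this notebook) that works for any entity type:

```
def decode_spans(tags, prefix):
    """Collect index spans for entities of one type from BIO-style tags."""
    spans, current = [], []
    for i, t in enumerate(tags):
        if t == 'B-' + prefix:
            if current:            # flush any entity already in progress
                spans.append(current)
            current = [i]
        elif t == 'I-' + prefix and current:
            current.append(i)      # extend the entity in progress
        else:
            if current:            # any other tag ends the entity
                spans.append(current)
            current = []
    if current:                    # flush a trailing entity
        spans.append(current)
    return spans

tags = ['[CLS]', 'B-PER', 'I-PER', 'O', 'B-LOC', 'I-LOC', 'I-LOC', '[SEP]']
print(decode_spans(tags, 'PER'), decode_spans(tags, 'LOC'))
# [[1, 2]] [[4, 5, 6]]
```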
<a id='StartingPoint'></a>
# ONNX classification example
Sharing DL models between frameworks or programming languages is possible with Open Neural Network Exchange (ONNX for short).
This notebook starts from an onnx model exported from MATLAB and uses it in Python.
In MATLAB, a GoogleNet model pre-trained on ImageNet was loaded and saved to the ONNX file format through the one-line command [exportONNXNetwork(net,filename)](https://www.mathworks.com/help/deeplearning/ref/exportonnxnetwork.html).
The model is then loaded here, as well as the data to evaluate (some images retrieved from google).
Images are preprocessed into the required format, an np.array with shape (batchSize, numChannels, width, height), and the model is applied to classify the images and obtain the classification probabilities.
## Input Variables
The purpose of each variable is explained [below](#ModelDef), near the model definition.
```
%%time
# model vars
modelPath = 'D:/onnxStartingCode/model/preTrainedImageNetGooglenet.onnx'
labelsPath = 'D:/onnxStartingCode/model/labels.csv'
hasHeader = 1
#data vars
image_folder = 'D:/onnxStartingCode/ImageFolder/'
EXT = ("jfif","jpg")
```
## Import Modules
Let's start by importing all the needed modules
[back to top](#StartingPoint)
```
%%time
import onnx
import numpy as np
from PIL import Image
import os as os
import matplotlib.pyplot as plt
from onnxruntime import InferenceSession
import csv
```
<a id='ModelDef'></a>
## Define Model and Data functions
We need to define functions to retrieve the classifier and the data array.
To load the model we need the path to the file that stores it and the path to the file that stores the labels. Finally, the parameter hasHeader determines how the first row of the labels file is treated: as a header or as a label.
The labelsPath is required here because the model used does not contain label information, so an external CSV file needs to be read.
[back to top](#StartingPoint)
```
%%time
def loadmodel(modelPath, labelsPath, hasHeader):
    # Check the model, then load it into an inference session
    onnx.checker.check_model(modelPath)
    sess = InferenceSession(modelPath)
    # Determine the names of the input and output layers
    inname = [input.name for input in sess.get_inputs()]
    outname = [output.name for output in sess.get_outputs()]
    # Auxiliary function to load the labels file
    def extractLabels(filename, hasHeader):
        file = open(filename)
        csvreader = csv.reader(file)
        if (hasHeader > 0):
            header = next(csvreader)
            #print(header)
        rows = []
        for row in csvreader:
            rows.append(row)
        #print(rows)
        file.close()
        return rows
    # Get the labels
    labels = extractLabels(labelsPath, hasHeader)
    # Extract the inputSize = (width, height) and numChannels = 3 (RGB) or 1 (grayscale)
    for inp in sess.get_inputs():
        inputSize = inp.shape
        numChannels = inputSize[1]
        inputSize = inputSize[2:4]
    return sess, inname, outname, numChannels, inputSize, labels
def getData(image_folder, EXT, inputSize):
    def getImagesFromFolder(EXT):
        imageList = os.listdir(image_folder)
        # Accept a single extension as well as a list/tuple of extensions
        # (fixed: the original assigned to an unused variable `ext` here)
        if not (isinstance(EXT, list) or isinstance(EXT, tuple)):
            EXT = [EXT]
        fullFilePath = [os.path.join(image_folder, f)
                        for ext in EXT for f in imageList
                        if os.path.isfile(os.path.join(image_folder, f)) and f.endswith(ext)]
        return fullFilePath
    def imFile2npArray(imFile, inputSize):
        # (fixed: the original iterated over the enclosing fullFilePath
        # instead of the imFile parameter)
        data = np.array([
            np.array(
                Image.open(fname).resize(inputSize),
                dtype=np.int64)
            for fname in imFile
        ])
        # reorder to (batch, channels, width, height) and cast for the network
        X = data.transpose(0, 3, 1, 2)
        X = X.astype(np.float32)
        return X, data
    fullFilePath = getImagesFromFolder(EXT)
    X, data = imFile2npArray(fullFilePath, inputSize)
    return X, data, fullFilePath
```
## Run loading functions to get model and data
* get the full filename of every file in a given directory that ends with a given extension (EXT may be a list of extensions)
* load the data into numpy arrays for future use:
  * to plot, data must have shape = (x,y,3)
  * the model presented here requires data with shape (3,x,y)
  * so two data arrays are exported: data for plotting and X for classification
[back to top](#StartingPoint)
```
%%time
# run code
sess,inname,outname,numChannels,inputSize,labels = loadmodel(modelPath,labelsPath,hasHeader)
X,data,fullFilePath = getData(image_folder,EXT,inputSize)
print("inputSize: " + str(inputSize))
print("numChannels: " + str(numChannels))
print("inputName: ", inname[0])
print("outputName: ", outname[0])
```
## Classification
[back to top](#StartingPoint)
```
%%time
#data_output = sess.run(outname, {inname: X[0]})
out = sess.run(None, {inname[0]: X})
out=np.asarray(out[0])
print(out.shape)
IND = []
PROB= []
for i in range(out.shape[0]):
    ind = np.where(out[i] == np.amax(out[i]))
    IND.append(ind[0][0])
    PROB.append(out[i, ind[0][0]])
l = [labels[ind] for ind in IND]
print([labels[ind] for ind in IND])
print(IND)
print(PROB)
```
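As an aside, the per-image loop above can be expressed more compactly with vectorized NumPy calls (a sketch; it assumes `out` has shape `(nImages, nClasses)`):

```
import numpy as np

# toy scores standing in for the network output
out = np.array([[0.1, 0.7, 0.2],
                [0.6, 0.3, 0.1]])
IND = np.argmax(out, axis=1)                  # best class per image
PROB = out[np.arange(out.shape[0]), IND]      # its score per image
print(IND.tolist(), PROB.tolist())  # [1, 0] [0.7, 0.6]
```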
## Plot some examples
[back to top](#StartingPoint)
```
%%time
plt.figure(figsize=(10,10))
if data.shape[0] >= 6:
    nPlots = 6
    subArray = [2, 3]
else:
    nPlots = data.shape[0]
    subArray = [1, nPlots]
for i in range(nPlots):
    plt.subplot(subArray[0], subArray[1], i+1)
    plt.imshow(data[i])
    plt.axis('off')
    plt.title(l[i][0] + ' --- ' + str(round(100*PROB[i])) + '%')
plt.show()
```
[back to top](#StartingPoint)
# Flux.pl
The `Flux.pl` Perl script takes four input parameters:
`Flux.pl [input file] [output file] [bin width (s)] [geometry base directory]`
or, as invoked from the command line,
`$ perl ./perl/Flux.pl [input file] [output file] [bin width (s)] [geometry directory]`
## Input Parameters
* `[input file]`
`Flux.pl` expects the first non-comment line of the input file to begin with a string of the form `<DAQ ID>.<channel>`. This is satisfied by threshold and wire delay files, as well as the outputs of data transformation scripts like `Sort.pl` and `Combine.pl` if their inputs are of the appropriate form.
If the input file doesn't meet this condition, `Flux.pl` -- specifically, the `all_geo_info{}` subroutine of `CommonSubs.pl` -- won't be able to load the appropriate geometry files and execution will fail (see the `[geometry directory]` parameter below).
* `[output file]`
This is what the output file will be named.
* `[binWidth]`
In physical terms, cosmic ray _flux_ is the number of incident rays per unit area per unit time. The `[binWidth]` parameter determines the "per unit time" portion of this quantity. `Flux.pl` will sort the events in its input data into bins of the given time interval, returning the number of events per unit area recorded within each bin.
* `[geometry directory]`
With `[binWidth]` handling the "per unit time" portion of the flux calculation, the geometry file associated with each detector handles the "per unit area".
`Flux.pl` expects geometry files to be stored in a directory structure of the form
```
geo/
├── 6119/
│ └── 6119.geo
├── 6148/
│ └── 6148.geo
└── 6203/
└── 6203.geo
```
where each DAQ has its own subdirectory whose name is the DAQ ID, and each such subdirectory has a geometry file whose name is given by the DAQ ID with the `.geo` extension. The command-line argument in this case is `geo/`, the parent directory. With this as the base directory, `Flux.pl` determines what geometry file to load by looking for the DAQ ID in the first line of data. This is why, as noted above, the first non-comment line of `[input file]` must begin with `<DAQ ID>.<channel>`.
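A directory tree like the one above can be created programmatically (a sketch using the example DAQ IDs; the `.geo` files here are created empty, whereas real ones hold the detector geometry):

```
import os

for daq in ("6119", "6148", "6203"):
    os.makedirs(os.path.join("geo", daq), exist_ok=True)
    # create an empty placeholder geometry file
    open(os.path.join("geo", daq, daq + ".geo"), "a").close()
```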
## Flux Input Files
As we mentioned above, threshold files have the appropriate first-line structure to allow `Flux.pl` to access geometry data for them. So what does `Flux.pl` do when acting on a threshold file?
We'll test it using the threshold files `files/6148.2016.0109.0.thresh` and `files/6119.2016.0104.1.thresh` as input. First, take a look at the files themselves so we know what the input looks like:
```
!head -10 files/6148.2016.0109.0.thresh
!wc -l files/6148.2016.0109.0.thresh
!head -10 files/6119.2016.0104.1.thresh
!wc -l files/6119.2016.0104.1.thresh
```
(remember, `wc -l` returns a count of the number of lines in the file). These look like fairly standard threshold files. Now we'll see what `Flux.pl` does with them.
## The Parsl Flux App
For convenience, we'll wrap the UNIX command-line invocation of the `Flux.pl` script in a Parsl App, which will make it easier to work with from within the Jupyter Notebook environment.
```
# The prep work:
import parsl
from parsl.config import Config
from parsl.executors.threads import ThreadPoolExecutor
from parsl.app.app import bash_app,python_app
from parsl import File
config = Config(
executors=[ThreadPoolExecutor()],
lazy_errors=True
)
parsl.load(config)
# The App:
@bash_app
def Flux(inputs=[], outputs=[], binWidth='600', geoDir='geo/', stdout='stdout.txt', stderr='stderr.txt'):
    return 'perl ./perl/Flux.pl %s %s %s %s' % (inputs[0], outputs[0], binWidth, geoDir)
```
_Edit stuff below to use the App_
## Flux Output
Below is the output generated by `Flux.pl` using the threshold files `6148.2016.0109.0.thresh` and `6119.2016.0104.1.thresh` (separately) as input:
```
$ perl ./perl/Flux.pl files/6148.2016.0109.0.thresh outputs/ThreshFluxOut6148_01 600 geo/
$ head -15 outputs/ThreshFluxOut6148_01
#cf12d07ed2dfe4e4c0d52eb663dd9956
#md5_hex(1536259294 1530469616 files/6148.2016.0109.0.thresh outputs/ThreshFluxOut6148_01 600 geo/)
01/09/2016 00:06:00 59.416172 8.760437
01/09/2016 00:16:00 63.291139 9.041591
01/09/2016 00:26:00 71.041075 9.579177
01/09/2016 00:36:00 50.374580 8.066389
01/09/2016 00:46:00 55.541204 8.469954
01/09/2016 00:56:00 73.624386 9.751788
01/09/2016 01:06:00 42.624645 7.419998
01/09/2016 01:16:00 54.249548 8.370887
01/09/2016 01:26:00 45.207957 7.641539
01/09/2016 01:36:00 42.624645 7.419998
01/09/2016 01:46:00 65.874451 9.224268
01/09/2016 01:56:00 59.416172 8.760437
01/09/2016 02:06:00 94.290881 11.035913
```
```
$ perl ./perl/Flux.pl files/6119.2016.0104.1.thresh outputs/ThreshFluxOut6119_01 600 geo/
$ head -15 outputs/ThreshFluxOut6119_01
#84d0f02f26edb8f59da2d4011a27389d
#md5_hex(1536259294 1528996902 files/6119.2016.0104.1.thresh outputs/ThreshFluxOut6119_01 600 geo/)
01/04/2016 21:00:56 12496.770860 127.049313
01/04/2016 21:10:56 12580.728494 127.475379
01/04/2016 21:20:56 12929.475588 129.230157
01/04/2016 21:30:56 12620.769827 127.678079
01/04/2016 21:40:56 12893.309222 129.049289
01/04/2016 21:50:56 12859.726169 128.881113
01/04/2016 22:00:56 12782.226815 128.492174
01/04/2016 22:10:56 12520.020666 127.167443
01/04/2016 22:20:56 12779.643503 128.479189
01/04/2016 22:30:56 12746.060449 128.310265
01/04/2016 22:40:56 12609.144924 127.619264
01/04/2016 22:50:56 12372.771894 126.417419
01/04/2016 23:00:56 12698.269181 128.069490
```
`Flux.pl` seems to give reasonable output with a threshold file as input, provided the DAQ has a geometry file that's up to standards. Can we interpret the output? Despite the lack of a header line, some reasonable inferences will make it clear.
The first column is clearly the date that the data was taken, and in both cases it agrees with the date indicated by the threshold file's filename.
The second column is clearly time-of-day values, but what do they mean? We might be tempted to think of them as the full-second portion of cosmic ray event times, but we note in both cases that they occur in a regular pattern of exactly every ten minutes. Of course, that happens to be exactly what we selected as the `binWidth` parameter, 600s = 10min. These are the time bins into which the cosmic ray event data is organized.
Since we're calculating flux -- muon strikes per unit area per unit time -- we expect the flux count itself to be included in the data, and in fact this is what the third column is, in units of $events/m^2/min$. Note that the "$/min$" part is *always* a part of the units of the third column, no matter what the size of the time bins we selected.
Finally, when doing science, having a measurement means having uncertainty. The fourth column is the obligatory statistical uncertainty in the flux.
## An exercise in statistical uncertainty
The general formula for flux $\Phi$ is
$$\Phi = \frac{N}{AT}$$
where $N$ is the number of incident events, $A$ is the cross-sectional area over which the flux is measured, and $T$ is the time interval over which the flux is measured.
By the rule of adding uncertainties in quadrature,
$$\frac{\delta \Phi}{\Phi} = \sqrt{\left(\frac{\delta N}{N}\right)^2 + \left(\frac{\delta A}{A}\right)^2 + \left(\frac{\delta T}{T}\right)^2}$$
Here, $N$ is the raw count of muon hits in the detector, an integer with a standard statistical uncertainty of $\sqrt{N}$.
In our present analysis, errors in the bin width and detector area are negligible compared to the statistical fluctuation of cosmic ray muons. Thus, we'll take $\delta A \approx \delta T \approx 0$ to leave
$$\delta \Phi \approx \frac{\delta N}{N} \Phi = \frac{\Phi}{\sqrt{N}}$$
Rearranging this a bit, we find that we should be able to calculate the exact number of muon strikes for each time bin as
$$N \approx \left(\frac{\Phi}{\delta\Phi}\right)^2.$$
Let's see what happens when we apply this to the data output from `Flux.pl`. For the 6148 data with `binWidth=600`, we find
```
Date Time Phi dPhi (Phi/dPhi)^2
01/09/16 12:06:00 AM 59.416172 8.760437 45.999996082
01/09/16 12:16:00 AM 63.291139 9.041591 49.0000030968
01/09/16 12:26:00 AM 71.041075 9.579177 54.9999953935
01/09/16 12:36:00 AM 50.37458 8.066389 38.9999951081
01/09/16 12:46:00 AM 55.541204 8.469954 43.0000020769
01/09/16 12:56:00 AM 73.624386 9.751788 57.000001784
01/09/16 01:06:00 AM 42.624645 7.419998 33.0000025577
01/09/16 01:16:00 AM 54.249548 8.370887 41.999999903
01/09/16 01:26:00 AM 45.207957 7.641539 35.0000040418
01/09/16 01:36:00 AM 42.624645 7.419998 33.0000025577
01/09/16 01:46:00 AM 65.874451 9.224268 51.00000197
01/09/16 01:56:00 AM 59.416172 8.760437 45.999996082
01/09/16 02:06:00 AM 94.290881 11.035913 72.9999984439
```
The numbers we come up with are in fact integers to an excellent approximation!
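This round trip from $(\Phi, \delta\Phi)$ back to the raw count is easy to check in a few lines of Python; the numbers below are taken from the first row of the table above:

```python
# Recover the raw event count N from a flux value and its uncertainty
# via N ~ (Phi / dPhi)^2, and confirm it is an integer to high precision.
def raw_count(flux, dflux):
    return (flux / dflux) ** 2

n = raw_count(59.416172, 8.760437)   # first row of the table
assert abs(n - round(n)) < 1e-4      # integer to an excellent approximation
print(round(n))  # -> 46
```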
---
### Exercise 1
**A)** Using the data table above, round the `(Phi/dPhi)^2` column to the nearest integer, calling it `N`. With $\delta N = \sqrt{N}$, calculate $\frac{\delta N}{N}$ for each row in the data.
**B)** Using your knowledge of the cosmic ray muon detector, estimate the uncertainty $\delta A$ in the detector area $A$ and the uncertainty $\delta T$ in the time bin $T$ given as the input `binWidth` parameter. Calculate $\frac{\delta A}{A}$ and $\frac{\delta T}{T}$ for this analysis.
**C)** Considering the results of **A)** and **B)**, do you think our previous assumption that $\frac{\delta A}{A} \approx 0$ and $\frac{\delta T}{T} \approx 0$ compared to $\frac{\delta N}{N}$ is justified?
---
### Additional Exercises
* Do the number of counts $N$ in one `binWidth=600s` bin match the sum of counts in the ten corresponding `binWidth=60s` bins?
* Considering raw counts, do you think the "zero" bins in the above analyses are natural fluctuations in cosmic ray muon strikes?
* Do the flux values shown above reasonably agree with the known average CR muon flux at sea level? If "no," what effects do you think might account for the difference?
---
We can dig more information out of the `Flux.pl` output by returning to the definition of flux
$$\Phi = \frac{N}{AT}.$$
Now that we know $N$ for each data point, and given that we know the bin width $T$ because we set it for the entire analysis, we should be able to calculate the area of the detector as
$$A = \frac{N}{\Phi T}$$
One important comment: `Flux.pl` gives flux values in units of `events/m^2/min` - note the use of minutes instead of seconds. When substituting a numerical value for $T$, we must convert the command line parameter `binWidth=600` from $600s$ to $10min$.
When we perform this calculation, we find consistent values for $A$:
```
Date Time Phi dPhi N=(Phi/dPhi)^2 A=N/Phi T
01/09/16 12:06:00 AM 59.416172 8.760437 45.999996082 0.0774199928
01/09/16 12:16:00 AM 63.291139 9.041591 49.0000030968 0.0774200052
01/09/16 12:26:00 AM 71.041075 9.579177 54.9999953935 0.0774199931
01/09/16 12:36:00 AM 50.37458 8.066389 38.9999951081 0.0774199906
01/09/16 12:46:00 AM 55.541204 8.469954 43.0000020769 0.0774200035
01/09/16 12:56:00 AM 73.624386 9.751788 57.000001784 0.0774200029
01/09/16 01:06:00 AM 42.624645 7.419998 33.0000025577 0.0774200056
01/09/16 01:16:00 AM 54.249548 8.370887 41.999999903 0.0774199997
01/09/16 01:26:00 AM 45.207957 7.641539 35.0000040418 0.0774200083
01/09/16 01:36:00 AM 42.624645 7.419998 33.0000025577 0.0774200056
01/09/16 01:46:00 AM 65.874451 9.224268 51.00000197 0.077420003
01/09/16 01:56:00 AM 59.416172 8.760437 45.999996082 0.0774199928
01/09/16 02:06:00 AM 94.290881 11.035913 72.9999984439 0.0774199983
```
In fact, the area of one standard 6000-series QuarkNet CRMD detector panel is $0.07742m^2$.
It's important to note that we're reversing only the calculations, not the physics! That is, we find $A=0.07742m^2$ because that's the value stored in the `6148.geo` file, not because we're able to determine the actual area of the detector panel from the `Flux.pl` output data using physical principles.
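As a sanity check, the whole chain — count recovery plus unit conversion — fits in one small function. The function name is ours; the only subtlety is converting `binWidth` from seconds to minutes to match the flux units:

```python
# Recover the detector area from one row of Flux.pl output:
# A = N / (Phi * T), with Phi in events/m^2/min, so binWidth
# must be converted from seconds to minutes first.
def detector_area(flux, dflux, bin_width_s):
    n = round((flux / dflux) ** 2)   # raw event count for this bin
    t_min = bin_width_s / 60.0       # e.g. 600 s -> 10 min
    return n / (flux * t_min)

area = detector_area(59.416172, 8.760437, 600)
print(area)  # ~0.07742 m^2
```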
## Testing binWidth
To verify that the third-column flux values behave as expected, we can run a quick check by manipulating the `binWidth` parameter. We'll run `Flux.pl` on the above two threshold files again, but this time we'll reduce `binWidth` by a factor of 10:
```
$ perl ./perl/Flux.pl files/6148.2016.0109.0.thresh outputs/ThreshFluxOut6148_02 60 geo/
```
```
!head -15 outputs/ThreshFluxOut6148_02
```
```
$ perl ./perl/Flux.pl files/6119.2016.0104.1.thresh outputs/ThreshFluxOut6119_02 60 geo/
```
```
!head -15 outputs/ThreshFluxOut6119_02
```
In the case of the 6148 data, our new fine-grained binning reveals some sparsity in the first several minutes of the data, as all of the bins between the `2:30` bin and the `13:30` bin are empty of muon events (and therefore not reported). What happened here? It's difficult to say -- under normal statistical variations, it's possible that there were simply no recorded events during these bins. It's also possible that the experimenter adjusted the level of physical shielding around the detector during these times, or had a cable unplugged while troubleshooting.
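One quick way to confirm which bins are genuinely absent is to reindex the reported bin times onto a complete 60-second grid. The DataFrame below is a stand-in for parsed `Flux.pl` output with made-up values; only the reindexing idea is the point:

```python
# Find missing (unreported) time bins by comparing the reported bin
# timestamps against a complete 60 s grid between the first and last bin.
import pandas as pd

df = pd.DataFrame({
    "time": pd.to_datetime(["2016-01-09 00:01:30", "2016-01-09 00:02:30",
                            "2016-01-09 00:13:30", "2016-01-09 00:14:30"]),
    "flux": [59.4, 63.3, 71.0, 50.4],
}).set_index("time")

full_grid = pd.date_range(df.index.min(), df.index.max(), freq="60s")
missing = full_grid.difference(df.index)
print(len(missing))  # number of empty one-minute bins between 2:30 and 13:30
```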
# Character-level recurrent sequence-to-sequence model
**Author:** [fchollet](https://twitter.com/fchollet)<br>
**Date created:** 2017/09/29<br>
**Last modified:** 2020/04/26<br>
**Description:** Character-level recurrent sequence-to-sequence model.
## Introduction
This example demonstrates how to implement a basic character-level
recurrent sequence-to-sequence model. We apply it to translating
short English sentences into short French sentences,
character-by-character. Note that it is fairly unusual to
do character-level machine translation, as word-level
models are more common in this domain.
**Summary of the algorithm**
- We start with input sequences from a domain (e.g. English sentences)
and corresponding target sequences from another domain
(e.g. French sentences).
- An encoder LSTM turns input sequences to 2 state vectors
(we keep the last LSTM state and discard the outputs).
- A decoder LSTM is trained to turn the target sequences into
the same sequence but offset by one timestep in the future,
a training process called "teacher forcing" in this context.
It uses as initial state the state vectors from the encoder.
Effectively, the decoder learns to generate `targets[t+1...]`
given `targets[...t]`, conditioned on the input sequence.
- In inference mode, when we want to decode unknown input sequences, we:
- Encode the input sequence into state vectors
- Start with a target sequence of size 1
(just the start-of-sequence character)
- Feed the state vectors and 1-char target sequence
to the decoder to produce predictions for the next character
- Sample the next character using these predictions
(we simply use argmax).
- Append the sampled character to the target sequence
- Repeat until we generate the end-of-sequence character or we
hit the character limit.
## Setup
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
```
## Download the data
```
!!curl -O http://www.manythings.org/anki/fra-eng.zip
!!unzip fra-eng.zip
```
## Configuration
```
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = "fra.txt"
```
## Prepare the data
```
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, "r", encoding="utf-8") as f:
lines = f.read().split("\n")
for line in lines[: min(num_samples, len(lines) - 1)]:
input_text, target_text, _ = line.split("\t")
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = "\t" + target_text + "\n"
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print("Number of samples:", len(input_texts))
print("Number of unique input tokens:", num_encoder_tokens)
print("Number of unique output tokens:", num_decoder_tokens)
print("Max sequence length for inputs:", max_encoder_seq_length)
print("Max sequence length for outputs:", max_decoder_seq_length)
input_token_index = dict([(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict([(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype="float32"
)
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.0
encoder_input_data[i, t + 1 :, input_token_index[" "]] = 1.0
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.0
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.0
decoder_input_data[i, t + 1 :, target_token_index[" "]] = 1.0
decoder_target_data[i, t:, target_token_index[" "]] = 1.0
```
## Build the model
```
# Define an input sequence and process it.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
encoder = keras.layers.LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = keras.layers.Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
```
## Train the model
```
model.compile(
optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.fit(
[encoder_input_data, decoder_input_data],
decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2,
)
# Save model
model.save("s2s")
```
## Run inference (sampling)
1. encode input and retrieve initial decoder state
2. run one step of decoder with this initial state
and a "start of sequence" token as target.
Output will be the next target token.
3. Repeat with the current target token and current states
```
# Define sampling models
# Restore the model and construct the encoder and decoder.
model = keras.models.load_model("s2s")
encoder_inputs = model.input[0] # input_1
encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output # lstm_1
encoder_states = [state_h_enc, state_c_enc]
encoder_model = keras.Model(encoder_inputs, encoder_states)
decoder_inputs = model.input[1] # input_2
decoder_state_input_h = keras.Input(shape=(latent_dim,))
decoder_state_input_c = keras.Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_lstm = model.layers[3]
decoder_outputs, state_h_dec, state_c_dec = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs
)
decoder_states = [state_h_dec, state_c_dec]
decoder_dense = model.layers[4]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = keras.Model(
[decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states
)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict((i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict((i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index["\t"]] = 1.0
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ""
while not stop_condition:
output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if sampled_char == "\n" or len(decoded_sentence) > max_decoder_seq_length:
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.0
# Update states
states_value = [h, c]
return decoded_sentence
```
You can now generate decoded sentences as follows:
```
for seq_index in range(20):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index : seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print("-")
print("Input sentence:", input_texts[seq_index])
print("Decoded sentence:", decoded_sentence)
```
```
import numpy as np
import tensorflow as tf
import os
import random
from collections import defaultdict
import pandas as pd
import time
def load_data_train():
user_movie = defaultdict(set)
data=pd.read_csv('BRP_datas\\BRP_common_user_book\\common_user_book_19_1VS2.csv')
num_user=len(pd.unique(data['user_id']))
num_book=len(pd.unique(data['book_id']))
print('Training-set borrowing records: {}'.format(data.shape[0]))
for row,val in data.iterrows():
u = int(val['user_id'])
i = int(val['book_id'])
user_movie[u].add(i) #
print("num_user:", num_user)
print("num_book", num_book)
return num_user, num_book, user_movie
def load_data_test():
user_movie = defaultdict(set)
data=pd.read_csv('BRP_datas\\BRP_common_user_book\\common_user_book_19_2VS1.csv')
num_user=len(pd.unique(data['user_id']))
num_book=len(pd.unique(data['book_id']))
print('Test-set borrowing records: {}'.format(data.shape[0]))
for row,val in data.iterrows():
u = int(val['user_id'])
i = int(val['book_id'])
user_movie[u].add(i)
print("num_user:", num_user)
print("num_book", num_book)
return num_user, num_book, user_movie
def generate_test(user_movie_pair_test):
"""
对每一个用户u,在user_movie_pair_test中随机找到他借阅过的一本书,保存在user_ratings_test,
后面构造训练集和测试集需要用到。
"""
user_test = dict()
for u,i_list in user_movie_pair_test.items():
user_test[u] = random.sample(user_movie_pair_test[u],1)[0]
return user_test
def generate_train_batch(user_movie_pair_train,item_count,batch_size=50):
t = []
for b in range(batch_size):
u = random.sample(user_movie_pair_train.keys(),1)[0]
i = random.sample(user_movie_pair_train[u],1)[0]
j = random.randint(0,item_count)
while j in user_movie_pair_train[u]:
j = random.randint(0,item_count)
t.append([u,i,j])
return np.asarray(t)
def generate_test_batch(user_ratings_test,user_movie_pair_test,item_count):
"""
对于每个用户u,它的评分图书i是我们在user_ratings_test中随机抽取的,它的j是用户u所有没有借阅过的图书集合,
比如用户u有1000本书没有借阅,那么这里该用户的测试集样本就有1000个
"""
for u in user_movie_pair_test.keys():
t = []
i = user_ratings_test[u]
for j in range(0,item_count):
if not(j in user_movie_pair_test[u]):
t.append([u,i,j])
yield np.asarray(t)
def bpr_mf(user_count,item_count,hidden_dim):
u = tf.placeholder(tf.int32,[None])
i = tf.placeholder(tf.int32,[None])
j = tf.placeholder(tf.int32,[None])
user_emb_w = tf.get_variable("user_emb_w", [user_count+1, hidden_dim],
initializer=tf.random_normal_initializer(0, 0.1))
item_emb_w = tf.get_variable("item_emb_w", [item_count+1, hidden_dim],
initializer=tf.random_normal_initializer(0, 0.1))
u_emb = tf.nn.embedding_lookup(user_emb_w, u)
i_emb = tf.nn.embedding_lookup(item_emb_w, i)
j_emb = tf.nn.embedding_lookup(item_emb_w, j)
x = tf.reduce_sum(tf.multiply(u_emb,(i_emb-j_emb)),1,keep_dims=True)
mf_auc = tf.reduce_mean(tf.to_float(x>0))
l2_norm = tf.add_n([
tf.reduce_sum(tf.multiply(u_emb, u_emb)),
tf.reduce_sum(tf.multiply(i_emb, i_emb)),
tf.reduce_sum(tf.multiply(j_emb, j_emb))
])
regulation_rate = 0.0001
bprloss = regulation_rate * l2_norm - tf.reduce_mean(tf.log(tf.sigmoid(x)))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(bprloss)
return u, i, j, mf_auc, bprloss, train_op
start=time.clock()
user_count,item_count,user_movie_pair_train = load_data_train()
test_user_count,test_item_count,user_movie_pair_test = load_data_test()
user_ratings_test = generate_test(user_movie_pair_test)
print('user_ratings_test values: {}'.format(user_ratings_test))
with tf.Session() as sess:
u,i,j,mf_auc,bprloss,train_op = bpr_mf(user_count,item_count,20)
sess.run(tf.global_variables_initializer())
for epoch in range(1,6):
print('epoch value: {}'.format(epoch))
_batch_bprloss = 0
for k in range(1,5000):
uij = generate_train_batch(user_movie_pair_train,item_count)
_bprloss,_train_op = sess.run([bprloss,train_op],
feed_dict={u:uij[:,0],i:uij[:,1],j:uij[:,2]})
_batch_bprloss += _bprloss
print("epoch:",epoch)
print("bpr_loss:",_batch_bprloss / k)
print("_train_op")
user_count = 0
_auc_sum = 0.0
for t_uij in generate_test_batch(user_ratings_test,user_movie_pair_test,item_count):
_auc, _test_bprloss = sess.run([mf_auc, bprloss],
feed_dict={u: t_uij[:, 0], i: t_uij[:, 1], j: t_uij[:, 2]}
)
user_count += 1
_auc_sum += _auc
print("test_loss: ", _test_bprloss, "test_auc: ", _auc_sum / user_count)
print("")
variable_names = [v.name for v in tf.trainable_variables()]
values = sess.run(variable_names)
for k, v in zip(variable_names, values):
print("Variable: ", k)
print("Shape: ", v.shape)
print(v)
session1 = tf.Session()
u1_all = tf.matmul(values[0], values[1],transpose_b=True)
result_1 = session1.run(u1_all)
print (result_1)
p = np.squeeze(result_1)
# np.argsort(p) sorts values in ascending order and returns their indices;
# finding the index means finding the book
ind = np.argsort(p)[:,-5:]
print('Indices of the top-5 books: {}'.format(ind))
num=0
all_num_user_item=0
for ii in range(len(user_movie_pair_test)):
num_user_item=0
for jj in user_movie_pair_test[ii]:
num_user_item+=1
if jj in (ind[ii]):
num+=1
all_num_user_item+=num_user_item
print('Number of hits (num): {}'.format(num))
print('Number of users: {}'.format(len(user_movie_pair_test)))
print('Number of items the users like: {}'.format(all_num_user_item))
print('Recall: {}'.format(num/all_num_user_item))
print('Precision: {}'.format(num/(len(user_movie_pair_test)*5)))
duration=time.clock()-start
print('Elapsed time: {}'.format(duration))
```
**Note**: There are multiple ways to solve these problems in SQL. Your solution may be quite different from mine and still be correct.
**1**. Connect to the SQLite3 database at `data/faculty.db` in the `notebooks` folder using the `sqlite` package or `ipython-sql` magic functions. Inspect the `sql` creation statement for each tables so you know their structure.
```
%load_ext sql
%sql sqlite:///../notebooks/data/faculty.db
%%sql
SELECT sql FROM sqlite_master WHERE type='table';
```
**2**. Find the youngest and oldest faculty member(s) of each gender.
```
%%sql
SELECT min(age), max(age) FROM person
%%sql
-- Correlated subqueries compute the min/max age *per gender*,
-- so both the youngest and oldest of each gender are returned.
SELECT first, last, age, gender
FROM person
INNER JOIN gender
ON person.gender_id = gender.gender_id
WHERE age = (SELECT min(p2.age) FROM person p2 WHERE p2.gender_id = person.gender_id)
   OR age = (SELECT max(p2.age) FROM person p2 WHERE p2.gender_id = person.gender_id)
ORDER BY gender, age
```
**3**. Find the median age of the faculty members who know Python.
As SQLite3 does not provide a median function, you can create a user-defined aggregate to compute it. See the [documentation](https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.create_aggregate).
```
import statistics
class Median:
def __init__(self):
self.acc = []
def step(self, value):
self.acc.append(value)
def finalize(self):
return statistics.median(self.acc)
import sqlite3
con = sqlite3.connect('../notebooks/data/faculty.db')
con.create_aggregate("Median", 1, Median)
cr = con.cursor()
cr.execute("""
SELECT median(age) FROM person
INNER JOIN person_language ON person.person_id = person_language.person_id
INNER JOIN language ON person_language.language_id = language.language_id
WHERE language_name = 'Python'
""")
cr.fetchall()
```
**4**. Arrange countries by the average age of faculty in descending order. Countries are only included in the table if there are at least 3 faculty members from that country.
```
%%sql
SELECT country, count(country), avg(age)
FROM person
INNER JOIN country
ON person.country_id = country.country_id
GROUP BY country
HAVING count(*) >= 3
ORDER BY avg(age) DESC
```
**5**. Which country has the highest average body mass index (BMI) among the faculty? Recall that BMI is weight (kg) / (height (m))^2.
```
%%sql
SELECT country, avg(weight / (height*height)) as avg_bmi
FROM person
INNER JOIN country
ON person.country_id = country.country_id
GROUP BY country
ORDER BY avg_bmi DESC
LIMIT 3
```
**6**. Do obese faculty (BMI > 30) know more languages on average than non-obese faculty?
```
%%sql
SELECT is_obese, avg(language)
FROM (
SELECT
weight / (height*height) > 30 AS is_obese,
count(language_name) AS language
FROM person
INNER JOIN person_language
ON person.person_id = person_language.person_id
INNER JOIN language
ON person_language.language_id = language.language_id
GROUP BY person.person_id
)
GROUP BY is_obese
```
# Frequent opiate prescriber
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import preprocessors as pp
sns.set(style="darkgrid")
data = pd.read_csv('../data/prescriber-info.csv')
data.head()
```
## Variable Separation
```
uniq_cols = ['NPI']
cat_cols = list(data.columns[1:5])
cat_cols
num_cols = list(data.columns[5:-1])
# print(num_cols)
target = [data.columns[-1]]
target
```
## Categorical Variable Analysis & EDA
### Missing values
```
# checking for missing values
data[cat_cols].isnull().sum()
# checking for missing value percentage
data[cat_cols].isnull().sum()/data.shape[0] *100
# checking for null value in drugs column
data[num_cols].isnull().sum().sum()
data['NPI'].nunique()
```
Remarks:
1. We don't need the `NPI` column: all of its values are unique.
2. The `Credentials` column is missing values in ~3% of rows.
<!-- 3. All the `med_cols` are sparse in nature -->
### Basic plots
```
data[num_cols].iloc[:,2].value_counts()
cat_cols
for item in cat_cols[1:]:
print('-'*25)
print(data[item].value_counts())
cat_cols
# Gender analysis
plt.figure(figsize=(7,5))
sns.countplot(data=data,x='Gender')
plt.title('Count plot of Gender column')
plt.show()
# State column
plt.figure(figsize=(15,5))
sns.countplot(data=data,x='State')
plt.title('Count plot of State column')
plt.show()
# lets check out `Speciality` column
data['Specialty'].nunique()
plt.figure(figsize=(20,5))
sns.countplot(data=data,x='Specialty')
plt.title('Count plot of Specialty column')
plt.xticks(rotation=90)
plt.show()
data['Specialty'].value_counts()[:20]
# filling missing values with mean
```
There is a lot more cleanup to do in `Credentials`:
1. The column can hold multiple occupations in the same row.
2. \[PHD, MD\] and \[MD, PHD\] are treated as different values.
3. P,A, is treated as different from P.A and PA.
4. MD also appears as M.D. , M.D, M D, and MD\`.
5. This column is a mess.
```
cat_cols
```
Remarks:
1. We don't need the `Credentials` column, which is a real mess; the `Specialty` column carries the same information.
2. Cat features to remove - `NPI`, `Credentials`
3. Cat features to keep - `Gender`, `State`, `Specialty`
4. Cat encoder pipeline -
    1. Gender - simple 1/0 encoding using category_encoders
    2. State - frequency encoding using category_encoders
    3. Specialty - frequency encoding
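A minimal sketch of that encoder pipeline, using plain pandas for the frequency encoding (the `category_encoders` package offers the same via its `CountEncoder`, but the version below keeps the example dependency-free; the demo data is made up):

```python
import pandas as pd

def frequency_encode(series):
    """Map each category to its relative frequency in the column."""
    freqs = series.value_counts(normalize=True)
    return series.map(freqs)

demo = pd.DataFrame({
    "Gender": ["M", "F", "M", "M"],
    "State": ["NY", "CA", "NY", "TX"],
})
demo["Gender_enc"] = (demo["Gender"] == "M").astype(int)   # simple 1/0 encoding
demo["State_enc"] = frequency_encode(demo["State"])        # frequency encoding
print(demo)
```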
### Numerical Variable Analysis & Engineering
```
for item in num_cols:
print('-'*25)
print(f'frequency - {data[item].nunique()}')
print(f'Min \t Average \t Max \t Prob>0')
for item in num_cols:
print('-'*40)
prob = sum(data[item] > 0) / data[item].shape[0]
print(f'{data[item].min()}\t{data[item].mean(): .4f} \t{data[item].max()} \t {prob:.4f}')
print(f'Maximum of all maxes - {data[num_cols].max().max()}')
print(f'Average of all maxes - {data[num_cols].max().mean()}')
print(f'Minimum of all maxes - {data[num_cols].max().min()}')
print(f'Maximum of all mins - {data[num_cols].min().max()}')
print(f'Minimum of all mins - {data[num_cols].min().min()}')
sns.distplot(data[num_cols[0]]);
sns.boxplot(data = data, x = num_cols[0],orient="v");
```
Problem:
1. All of the continuous columns contain a large number of zeros, and the non-zero values are counts.
2. The approaches I stumbled across for this are two-part models (`twopm`), hurdle models, and zero-inflated Poisson models (ZIP).
3. Those models assume the *target* variable has many zeros while the non-zero values are continuous counts (100, 120, 234, 898, etc.) rather than just 1s; if the data were only 0s and 1s we could use a classification model.
4. In our case, however, it is the *feature* variables that have lots of zeros.
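Short of fitting a full hurdle or ZIP model, one pragmatic feature-side workaround is to split each zero-heavy column into a nonzero indicator plus a log-scaled magnitude. This is a sketch of that idea, not something the notebook commits to:

```python
import numpy as np
import pandas as pd

def split_zero_inflated(series):
    """Split a zero-heavy count column into (nonzero indicator, log magnitude)."""
    nonzero = (series > 0).astype(int)   # the "hurdle": any prescriptions at all?
    magnitude = np.log1p(series)         # log(1 + x); zeros stay zero
    return nonzero, magnitude

s = pd.Series([0, 0, 12, 0, 340])
ind, mag = split_zero_inflated(s)
print(ind.tolist())  # -> [0, 0, 1, 0, 1]
```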
```
data[data[num_cols[0]] > 0][num_cols[0]]
temp = 245
sns.distplot(data[data[num_cols[temp]] > 0][num_cols[temp]]);
temp = 5
sns.distplot(np.log(data[data[num_cols[temp]] > 0][num_cols[temp]]));
from sklearn.preprocessing import power_transform
temp = 5
# data_without_0 = data[data[num_cols[temp]] > 0][num_cols[temp]]
data_without_0 = data[num_cols[temp]]
data_0 = np.array(data_without_0).reshape(-1,1)
data_0_trans = power_transform(data_0, method='yeo-johnson')
sns.distplot(data_0_trans);
temp = 5
# data_without_0 = data[data[num_cols[temp]] > 0][num_cols[temp]]
data_without_0 = data[num_cols[temp]]
data_0 = np.array(data_without_0).reshape(-1,1)
data_0_trans = power_transform(data_0+1, method='box-cox')
# data_0_trans = np.log(data_0 + 1 )
# data_0
sns.distplot(data_0_trans);
from sklearn.decomposition import PCA
pca = PCA(n_components=0.8,svd_solver='full')
# pca = PCA(n_components='mle',svd_solver='full')
pca.fit(data[num_cols])
pca_var_ratio = pca.explained_variance_ratio_
pca_var_ratio
len(pca_var_ratio)
plt.plot(pca_var_ratio[:],'-*');
sum(pca_var_ratio[:10])
data[num_cols].sample(2)
pca.transform(data[num_cols].sample(1))
pca2 = pp.PCATransformer(cols=num_cols,n_components=0.8)
pca2.fit(data)
pca2.transform(data[num_cols].sample(1))
```
### Train test split and data saving
```
# train test split
from sklearn.model_selection import train_test_split
X = data.drop(target,axis=1)
y = data[target]
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.20, random_state=1)
pd.concat([X_train,y_train],axis=1).to_csv('../data/train.csv',index=False)
pd.concat([X_test,y_test],axis=1).to_csv('../data/test.csv',index=False)
```
## Data Engineering
```
from sklearn.preprocessing import LabelBinarizer
lbin = LabelBinarizer()
lbin.fit(X_train['Gender'])
gen_tra = lbin.transform(X_train['Gender'])
gen_tra
X_train[num_cols[:5]].info();
```
# Environment setup
```
!pip install transformers
!pip install sentencepiece
import csv
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel
import pandas as pd
from google.colab import drive
import transformers
import json
from tqdm import tqdm
from torch.utils.data import Dataset, DataLoader
RANDOM_SEED = 42
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
# Helper classes and functions
```
class SentimentClassifier(nn.Module):
def __init__(self, n_classes):
super(SentimentClassifier, self).__init__()
self.model = AutoModel.from_pretrained('EMBEDDIA/sloberta')
self.pre_classifier = torch.nn.Linear(768, 768)
self.dropout = torch.nn.Dropout(0.2)
self.classifier = nn.Linear(self.model.config.hidden_size, n_classes)
def forward(self, input_ids, attention_mask):
output = self.model(
input_ids=input_ids,
attention_mask=attention_mask
)
last_hidden_state = output[0]
pooler = last_hidden_state[:, 0, :]
pooler = self.dropout(pooler)
pooler = self.pre_classifier(pooler)
pooler = torch.nn.ReLU()(pooler)
pooler = self.dropout(pooler)
output = self.classifier(pooler)
return output
class ArticleTestDataset(torch.utils.data.Dataset):
def __init__(self, dataframe, tokenizer, max_len):
self.tokenizer = tokenizer
self.df = dataframe
self.text = dataframe.body
self.max_len = max_len
def __getitem__(self, idx):
text = str(self.text[idx])
inputs = tokenizer.encode_plus(
text,
None,
add_special_tokens=True,
padding='max_length',
truncation=True,
max_length=self.max_len,
return_attention_mask=True,
return_token_type_ids=True
)
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
return {
'text': text,
'input_ids': torch.tensor(input_ids, dtype=torch.long),
'attention_mask': torch.tensor(attention_mask, dtype=torch.long),
}
def __len__(self):
return len(self.text)
def get_predictions(model, data_loader):
model = model.eval()
predictions = []
data_iterator = tqdm(data_loader, desc="Iteration")
with torch.no_grad():
for step, d in enumerate(data_iterator):
input_ids = d["input_ids"].to(device)
attention_mask = d["attention_mask"].to(device)
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask
)
_, preds = torch.max(outputs, dim=1)
predictions.extend(preds)
predictions = torch.stack(predictions).cpu()
return predictions
```
# MAIN
```
model_path = '/content/drive/MyDrive/Diploma/best_model_state_latest.bin'
MAX_LEN = 512
BATCH_SIZE = 8
test_params = {'batch_size': BATCH_SIZE,
'shuffle': False,
'num_workers': 0
}
tokenizer = AutoTokenizer.from_pretrained('EMBEDDIA/sloberta', use_fast=False)
model = SentimentClassifier(3)
model.load_state_dict(torch.load(model_path))
model = model.to(device)
# You can change the value on the next line. The possible values are:
# "2019_slovenija_sentiment",
# "2019_svet_sentiment",
# "2020_korona_sentiment",
# "2020_svet_sentiment",
# "2020_slovenska_politika_sentiment",
file_name = '2019_slovenija_sentiment'
filepath = f'/content/drive/MyDrive/Diploma/data/{file_name}.pkl'
data = pd.read_pickle(filepath)
dataloader = DataLoader(ArticleTestDataset(data, tokenizer, MAX_LEN), **test_params)
preds = get_predictions(model, dataloader)
data['sentiment'] = preds
# data.to_pickle(filepath)
```
```
import os
import numpy as np
import pandas as pd
import gc
```
# To build the ensemble I used submissions from 8 public notebooks:
* LB: 0.0225 - https://www.kaggle.com/lunapandachan/h-m-trending-products-weekly-add-test/notebook
* LB: 0.0217 - https://www.kaggle.com/tarique7/hnm-exponential-decay-with-alternate-items/notebook
* LB: 0.0221 - https://www.kaggle.com/astrung/lstm-sequential-modelwith-item-features-tutorial
* LB: 0.0224 - https://www.kaggle.com/code/hirotakanogami/h-m-eda-customer-clustering-by-kmeans
* LB: 0.0220 - https://www.kaggle.com/code/hengzheng/time-is-our-best-friend-v2/notebook
* LB: 0.0227 - https://www.kaggle.com/code/hechtjp/h-m-eda-rule-base-by-customer-age
* LB: 0.0231 - https://www.kaggle.com/code/ebn7amdi/trending/notebook?scriptVersionId=90980162
* LB: 0.0225 - https://www.kaggle.com/code/mayukh18/svd-model-reranking-implicit-to-explicit-feedback
```
sub0 = pd.read_csv('../input/hm-00231-solution/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0231
sub1 = pd.read_csv('../input/handmbestperforming/h-m-trending-products-weekly-add-test.csv').sort_values('customer_id').reset_index(drop=True) # 0.0225
sub2 = pd.read_csv('../input/handmbestperforming/hnm-exponential-decay-with-alternate-items.csv').sort_values('customer_id').reset_index(drop=True) # 0.0217
sub3 = pd.read_csv('../input/handmbestperforming/lstm-sequential-modelwith-item-features-tutorial.csv').sort_values('customer_id').reset_index(drop=True) # 0.0221
sub4 = pd.read_csv('../input/hm-00224-solution/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0224
sub5 = pd.read_csv('../input/handmbestperforming/time-is-our-best-friend-v2.csv').sort_values('customer_id').reset_index(drop=True) # 0.0220
sub6 = pd.read_csv('../input/handmbestperforming/rule-based-by-customer-age.csv').sort_values('customer_id').reset_index(drop=True) # 0.0227
sub7 = pd.read_csv('../input/h-m-faster-trending-products-weekly/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0231
sub8 = pd.read_csv('../input/h-m-framework-for-partitioned-validation/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0225
sub0.columns = ['customer_id', 'prediction0']
sub0['prediction1'] = sub1['prediction']
sub0['prediction2'] = sub2['prediction']
sub0['prediction3'] = sub3['prediction']
sub0['prediction4'] = sub4['prediction']
sub0['prediction5'] = sub5['prediction']
sub0['prediction6'] = sub6['prediction']
sub0['prediction7'] = sub7['prediction'].astype(str)
del sub1, sub2, sub3, sub4, sub5, sub6, sub7
gc.collect()
sub0.head()
def cust_blend(dt, W = [1,1,1,1,1,1,1,1]):
    # Create a list of all model predictions
    REC = []
    REC.append(dt['prediction0'].split())
    REC.append(dt['prediction1'].split())
    REC.append(dt['prediction2'].split())
    REC.append(dt['prediction3'].split())
    REC.append(dt['prediction4'].split())
    REC.append(dt['prediction5'].split())
    REC.append(dt['prediction6'].split())
    REC.append(dt['prediction7'].split())
    # Create a dictionary of items recommended.
    # Assign a weight according to the order of appearance and multiply by global weights
    res = {}
    for M in range(len(REC)):
        for n, v in enumerate(REC[M]):
            if v in res:
                res[v] += (W[M]/(n+1))
            else:
                res[v] = (W[M]/(n+1))
    # Sort dictionary by item weights
    res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())
    # Return the top 12 items only
    return ' '.join(res[:12])
sub0['prediction'] = sub0.apply(cust_blend, W = [1.05, 0.78, 0.86, 0.85, 0.68, 0.64, 0.70, 0.24], axis=1)
sub0.head()
```
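To make the ranking logic in `cust_blend` concrete, here is a minimal standalone version run on two toy prediction strings (the item ids are made up for illustration): each item receives weight `W[M]/(rank+1)` from every model that recommends it, and the summed weights decide the final order.

```python
def blend(pred_strings, W):
    """Weighted reciprocal-rank blend of space-separated prediction lists."""
    res = {}
    for w, preds in zip(W, pred_strings):
        for n, item in enumerate(preds.split()):
            # Earlier-ranked items get a larger share of this model's weight
            res[item] = res.get(item, 0.0) + w / (n + 1)
    # Sort items by descending total weight and keep the top 12
    ranked = sorted(res, key=lambda k: -res[k])
    return ' '.join(ranked[:12])

# 'a' scores 1.0 + 0.5/2 = 1.25; 'b' scores 1/2 + 0.5 = 1.0; 'c' scores 1/3
print(blend(["a b c", "b a"], W=[1.0, 0.5]))  # -> a b c
```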
# Make a submission
```
del sub0['prediction0']
del sub0['prediction1']
del sub0['prediction2']
del sub0['prediction3']
del sub0['prediction4']
del sub0['prediction5']
del sub0['prediction6']
del sub0['prediction7']
gc.collect()
sub1 = pd.read_csv('../input/h-m-framework-for-partitioned-validation/submission.csv').sort_values('customer_id').reset_index(drop=True)
sub1['prediction'] = sub1['prediction'].astype(str)
sub0.columns = ['customer_id', 'prediction0']
sub0['prediction1'] = sub1['prediction']
del sub1
gc.collect()
def cust_blend(dt, W = [1,1]):
    # Global ensemble weights
    # W = [1.15,0.95,0.85]
    # Create a list of all model predictions
    REC = []
    REC.append(dt['prediction0'].split())
    REC.append(dt['prediction1'].split())
    # Create a dictionary of items recommended.
    # Assign a weight according to the order of appearance and multiply by global weights
    res = {}
    for M in range(len(REC)):
        for n, v in enumerate(REC[M]):
            if v in res:
                res[v] += (W[M]/(n+1))
            else:
                res[v] = (W[M]/(n+1))
    # Sort dictionary by item weights
    res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())
    # Return the top 12 items only
    return ' '.join(res[:12])
sub0['prediction'] = sub0.apply(cust_blend, W = [1.20, 0.85], axis=1)
del sub0['prediction0']
del sub0['prediction1']
sub0.head()
sub0.to_csv('submission.csv', index=False)
```
# The Shared Library with GCC
When your program is linked against a shared library, only a small table is created in the executable. Before the executable starts running, **the operating system loads the machine code needed for the external functions** - a process known as **dynamic linking.**
* Dynamic linking makes executable files smaller and saves disk space, because `one` copy of a **library** can be **shared** between `multiple` programs.
* Furthermore, most operating systems allow one copy of a shared library in memory to be used by all running programs, thus saving memory.
* The shared library codes can be upgraded without the need to recompile your program.
A **shared library** has a file extension of
* **`.so`** (shared object) on `Linux (Unixes)`
* **`.dll`** (dynamic link library) on `Windows`.
## 1: Building the shared library
The shared library we will build consists of a single source file: `SumArray.c/h`
We will compile the C file with `Position Independent Code (PIC)` into a shared library.
GCC assumes that all libraries
* `start` with `lib`
* `end` with `.dll` (Windows) or `.so` (Linux),
so we should name our shared library with the `lib` prefix and the `.so`/`.dll` extension:
* libSumArray.dll (Windows)
* libSumArray.so (Linux)
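The notebook compiles `SumArray.c` but never shows it. A minimal version consistent with the later `mainSum.c` (which expects `sum(a1, 5)` to print 22) might look like the sketch below; treat it as an assumed reconstruction, not the original file.

```c
/* SumArray.h would declare:  int sum(int *a, int n); */

/* Return the sum of the first n elements of a. */
int sum(int *a, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}
```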
#### Under Windows
```
!gcc -c -O3 -Wall -fPIC -o ./demo/bin/SumArray.o ./demo/src/SumArray.c
!gcc -shared -o ./demo/bin/libSumArray.dll ./demo/bin/SumArray.o
!dir .\demo\bin\libSumArray.*
```
#### Under Linux
```
!gcc -c -O3 -Wall -fPIC -o ./demo/obj/SumArray.o ./demo/src/SumArray.c
!gcc -shared -o ./demo/bin/libSumArray.so ./demo/obj/SumArray.o
!ls ./demo/bin/libSumArray.*
```
* `-c`: compile into an object file.
By default, the object file has the same name as the source file, with the extension `.o` (here, `SumArray.o`)
* `-O3`: Optimize yet more.
turns on all optimizations specified by -O2 and also turns on the -finline-functions, -fweb, -frename-registers and -funswitch-loops options
* `-Wall`: prints all of the compiler's warning messages.
This option should always be used, in order to generate better code.
* **`-fPIC`**: stands for `Position Independent Code`
the generated machine code is `not dependent` on being located at a `specific address` in order to `work`.
Position-independent code can be `executed` at `any memory address`
* **`-shared`:** create a shared library
```
%%file ./demo/makefile-SumArray-dll
CC=gcc
CFLAGS=-O3 -Wall -fPIC
SRCDIR= ./demo/src/
OBJDIR= ./demo/obj/
BINDIR= ./demo/bin/
all: libdll
libdll: obj
	$(CC) -shared -o $(BINDIR)libSumArray.dll $(OBJDIR)SumArray.o
	del .\demo\obj\SumArray.o
obj: ./demo/src/SumArray.c
	$(CC) -c $(CFLAGS) -o $(OBJDIR)SumArray.o $(SRCDIR)SumArray.c
clean:
	del .\demo\bin\libSumArray.dll
!make -f ./demo/makefile-SumArray-dll
!dir .\demo\bin\libSum*.dll
```
#### Under Linux
```
%%file ./demo/makefile-SumArray-so
CC=gcc
CFLAGS=-O3 -Wall -fPIC
SRCDIR= ./demo/src/
OBJDIR= ./demo/obj/
BINDIR= ./demo/bin/
all: libso
libso: obj
	$(CC) -shared -o $(BINDIR)libSumArray.so $(OBJDIR)SumArray.o
	rm -f ./demo/obj/SumArray.o
obj: ./demo/src/SumArray.c
	$(CC) -c $(CFLAGS) -o $(OBJDIR)SumArray.o $(SRCDIR)SumArray.c
clean:
	rm -f ./demo/bin/libSumArray.so
!make -f ./demo/makefile-SumArray-so
!ls ./demo/bin/libSum*.so
```
## 2 Building a client executable
### Header Files and Libraries
* `Header File`: When compiling the program, the **compiler** needs the **header** files to compile the source codes;
* `libraries`: the **linker** needs the **libraries** to resolve external references from other object files or libraries.
The `compiler` and `linker` will not find the `headers/libraries` unless you set **the appropriate options**
* **1 Searching for Header Files**
**`-Idir`:** The include-paths are specified via **-Idir** option (`uppercase` 'I' followed by the directory path or environment variable **CPATH**).
* **2 Searching for libraries Files**
**`-Ldir`**: The library-path is specified via **-Ldir** option (`uppercase` 'L' followed by the directory path(or environment variable **LIBRARY_PATH**).
* **3 Linking the library**
**`-llibname`**: Link with the library name **without** the `lib` prefix and the `.so/.dll` extensions.
Windows
```bash
-I./demo/src/ -L./demo/bin/ -lSumArray
```
Linux
```bash
-I./demo/src/ -L./demo/bin/ -lSumArray -Wl,-rpath=./demo/bin/
```
* **`-Wl,option`**
Pass option as an option to the **linker**. If option contains `commas`, it is split into multiple options at the commas. You can use this syntax to pass an argument to the option. For example, `-Wl,-Map,output.map` passes `-Map output.map` to the linker. When using the GNU linker, you can also get the same effect with `-Wl,-Map=output.map`.
* **`-rpath=dir`**
**Add a directory to the runtime library search path**. This is used when linking an ELF executable with shared objects. All -rpath arguments are concatenated and passed to the runtime linker, which uses them to locate shared objects at runtime. The -rpath option is also used when locating shared objects which are needed by shared objects explicitly included in the link;
---
The following source code `"mainSum.c"` demonstrates calling the DLL's functions:
**NOTE:** mainSum.c is the same code in multi-source example
```
%%file ./demo/src/mainSum.c
#include <stdio.h>
#include "SumArray.h"
int main() {
    int a1[] = {8, 4, 5, 3, 2};
    printf("sum is %d\n", sum(a1, 5)); // sum is 22
    return 0;
}
```
#### Windows
```
!gcc -c -o ./demo/obj/mainSum.o ./demo/src/mainSum.c
!gcc -o ./demo/bin/mainSum ./demo/obj/mainSum.o -I./demo/src/ -L./demo/bin/ -lSumArray
!.\demo\bin\mainSum
```
#### Linux
```
!gcc -c -o ./demo/obj/mainSum.o ./demo/src/mainSum.c
!gcc -o ./demo/bin/mainSum ./demo/obj/mainSum.o -I./demo/src/ -L./demo/bin/ -lSumArray -Wl,-rpath=./demo/bin/
!ldd ./demo/bin/mainSum
!./demo/bin/mainSum
```
#### Under Windows
```
%%file ./demo/makefile-call-dll
SRCDIR= ./demo/src/
OBJDIR= ./demo/obj/
BINDIR= ./demo/bin/
all: mainexe
clean:
	del .\demo\bin\mainSum.exe
mainexe: sumobj $(SRCDIR)SumArray.h
	gcc -o $(BINDIR)mainSum.exe $(OBJDIR)mainSum.o -I$(SRCDIR) -L$(BINDIR) -lSumArray
	del .\demo\obj\mainSum.o
sumobj: $(SRCDIR)mainSum.c
	gcc -c -o $(OBJDIR)mainSum.o $(SRCDIR)mainSum.c
!make -f ./demo/makefile-call-dll
!.\demo\bin\mainSum
```
#### Under Linux
```
%%file ./demo/makefile-call-so
SRCDIR= ./demo/src/
OBJDIR= ./demo/obj/
BINDIR= ./demo/bin/
all: main
clean:
	rm -f ./demo/bin/mainSum
main: sumobj $(SRCDIR)SumArray.h
	gcc -o $(BINDIR)mainSum $(OBJDIR)mainSum.o -I$(SRCDIR) -L$(BINDIR) -lSumArray -Wl,-rpath=./demo/bin/
	rm -f ./demo/obj/mainSum.o
sumobj: $(SRCDIR)mainSum.c
	gcc -c -o $(OBJDIR)mainSum.o $(SRCDIR)mainSum.c
!make -f ./demo/makefile-call-so
!./demo/bin/mainSum
```
## 3 Building a `shared library` with `multi-source` files
The shared library we will build consists of multiple source files:
* funs.c/h
* SumArray.c/h
```
%%file ./demo/src/funs.h
#ifndef FUNS_H
#define FUNS_H
double dprod(double *x, int n);
int factorial(int n);
#endif
%%file ./demo/src/funs.c
#include "funs.h"
// x[0]*x[1]*...*x[n-1]
double dprod(double *x, int n)
{
    double y = 1.0;
    for (int i = 0; i < n; i++)
    {
        y *= x[i];
    }
    return y;
}
// The factorial of a positive integer n, denoted by n!, is the product of all positive integers less than or equal to n.
// For example, 5!=5*4*3*2*1=120
// The value of 0! is 1
int factorial(int n)
{
    if (n == 0) {
        return 1;
    }
    else
    {
        return n * factorial(n - 1);
    }
}
```
#### Building `funs.c` and `SumArray.c` into libmultifuns.dll
```
!gcc -c -O3 -Wall -fPIC -o ./demo/obj/funs.o ./demo/src/funs.c
!gcc -c -O3 -Wall -fPIC -o ./demo/obj/SumArray.o ./demo/src/SumArray.c
!gcc -shared -o ./demo/bin/libmultifuns.dll ./demo/obj/funs.o ./demo/obj/SumArray.o
!dir .\demo\bin\libmulti*.dll
```
#### Building with makefile
```
%%file ./demo/makefile-libmultifun
CC=gcc
CFLAGS=-O3 -Wall -fPIC
SRCDIR= ./demo/src/
OBJDIR= ./demo/obj/
BINDIR= ./demo/bin/
all: libmultifuns.dll
libmultifuns.dll: multifunsobj
	$(CC) -shared -o $(BINDIR)libmultifuns.dll $(OBJDIR)funs.o $(OBJDIR)SumArray.o
	del .\demo\obj\funs.o .\demo\obj\SumArray.o
multifunsobj: $(SRCDIR)funs.c $(SRCDIR)SumArray.c
	$(CC) -c $(CFLAGS) -o $(OBJDIR)SumArray.o $(SRCDIR)SumArray.c
	$(CC) -c $(CFLAGS) -o $(OBJDIR)funs.o $(SRCDIR)funs.c
clean:
	del .\demo\bin\libmultifuns.dll
!make -f ./demo/makefile-libmultifun
```
The result is a compiled shared library **`libmultifuns.dll`**
##### makefile-libmultifun - more vars
```
%%file ./code/makefile-libmultifun
CC=gcc
CFLAGS=-O3 -Wall -fPIC
SRCDIR= ./demo/src/
OBJDIR= ./demo/obj/
BINDIR= ./demo/bin/
INC = -I$(SRCDIR)
SRCS= $(SRCDIR)funs.c \
      $(SRCDIR)SumArray.c
all: libmultifuns.dll
libmultifuns.dll: multifunsobj
	$(CC) -shared -o $(BINDIR)libmultifuns.dll funs.o SumArray.o
	del funs.o SumArray.o
multifunsobj:
	$(CC) -c $(CFLAGS) $(INC) $(SRCS)
clean:
	del .\demo\bin\libmultifuns.dll
!make -f ./code/makefile-libmultifun
```
##### Building a client executable
The following source code `"mainMultifuns.c"` demonstrates calling the DLL's functions:
```
%%file ./demo/src/mainMultifuns.c
#include <stdio.h>
#include "SumArray.h"
#include "funs.h"
int main() {
    int a1[] = {8, 4, 5, 3, 2};
    printf("sum is %d\n", sum(a1, 5)); // sum is 22
    double a2[] = {8.0, 4.0, 5.0, 3.0, 2.0};
    printf("dprod is %f\n", dprod(a2, 5)); // dprod is 960
    int n = 5;
    printf("the factorial of %d is %d\n", n, factorial(n)); // 5!=120
    return 0;
}
!gcc -c -o ./demo/obj/mainMultifuns.o ./demo/src/mainMultifuns.c
!gcc -o ./demo/bin/mainMultifuns ./demo/obj/mainMultifuns.o -I./demo/src/ -L./demo/bin/ -lmultifuns
!.\demo\bin\mainMultifuns
```
## Reference
* GCC (GNU compilers) http://gcc.gnu.org
* GCC Manual http://gcc.gnu.org/onlinedocs
* An Introduction to GCC http://www.network-theory.co.uk/docs/gccintro/index.html.
* GCC and Make:Compiling, Linking and Building C/C++ Applications http://www3.ntu.edu.sg/home/ehchua/programming/cpp/gcc_make.html
* MinGW-W64 (GCC) Compiler Suite: http://www.mingw-w64.org/doku.php
* C/C++ for VS Code https://code.visualstudio.com/docs/languages/cpp
* C/C++ Preprocessor Directives http://www.cplusplus.com/doc/tutorial/preprocessor/
* What is a DLL and How Do I Create or Use One? http://www.mingw.org/wiki/DLL
# Searching the UniProt database and saving fastas:
This notebook is really just to demonstrate how Andrew finds the sequences for the datasets. <br>
If you do call it from within our github repository, you'll probably want to add the fastas to the `.gitignore` file.
```
# Import bioservices module, to run remote UniProt queries
# (will probably need to pip install this to use)
from bioservices import UniProt
```
## Connecting to UniProt using bioservices:
```
service = UniProt()
fasta_path = 'refined_query_fastas/' #optional file organization param
```
## Query with signal_peptide
```
def data_saving_function_with_SP(organism, save_path=''):
    secreted_query = f'(((organism:{organism} OR host:{organism}) annotation:("signal peptide") keyword:secreted) NOT annotation:(type:transmem)) AND reviewed:yes'
    secreted_result = service.search(secreted_query, frmt="fasta")
    secreted_outfile = f'{save_path}{organism}_secreted_SP_new.fasta'
    with open(secreted_outfile, 'a') as ofh:
        ofh.write(secreted_result)
    cytoplasm_query = f'(((organism:{organism} OR host:{organism}) locations:(location:cytoplasm)) NOT (annotation:(type:transmem) OR annotation:("signal peptide"))) AND reviewed:yes'
    cytoplasm_result = service.search(cytoplasm_query, frmt="fasta")
    cytoplasm_outfile = f'{save_path}{organism}_cytoplasm_SP_new.fasta'
    with open(cytoplasm_outfile, 'a') as ofh:
        ofh.write(cytoplasm_result)
    membrane_query = f'(((organism:{organism} OR host:{organism}) annotation:(type:transmem)) annotation:("signal peptide")) AND reviewed:yes'
    membrane_result = service.search(membrane_query, frmt="fasta")
    membrane_outfile = f'{save_path}{organism}_membrane_SP_new.fasta'
    with open(membrane_outfile, 'a') as ofh:
        ofh.write(membrane_result)
data_saving_function_with_SP('human',fasta_path)
data_saving_function_with_SP('escherichia',fasta_path)
```
## Query without signal_peptide
```
def data_saving_function_without_SP(organism, save_path=''):
    # maybe new:
    secreted_query = f'(((organism:{organism} OR host:{organism}) AND (keyword:secreted OR goa:("extracellular region [5576]"))) NOT (annotation:(type:transmem) OR goa:("membrane [16020]") OR locations:(location:cytoplasm) OR goa:("cytoplasm [5737]") )) AND reviewed:yes'
    secreted_result = service.search(secreted_query, frmt="fasta")
    secreted_outfile = f'{save_path}{organism}_secreted_noSP_new_new.fasta'
    with open(secreted_outfile, 'a') as ofh:
        ofh.write(secreted_result)
    cytoplasm_query = f'(((organism:{organism} OR host:{organism}) AND (locations:(location:cytoplasm) OR goa:("cytoplasm [5737]")) ) NOT (annotation:(type:transmem) OR goa:("membrane [16020]") OR keyword:secreted OR goa:("extracellular region [5576]") )) AND reviewed:yes'
    cytoplasm_result = service.search(cytoplasm_query, frmt="fasta")
    cytoplasm_outfile = f'{save_path}{organism}_cytoplasm_noSP_new_new.fasta'
    with open(cytoplasm_outfile, 'a') as ofh:
        ofh.write(cytoplasm_result)
    membrane_query = f'(((organism:{organism} OR host:{organism}) AND ( annotation:(type:transmem) OR goa:("membrane [16020]") )) NOT ( keyword:secreted OR goa:("extracellular region [5576]") OR locations:(location:cytoplasm) OR goa:("cytoplasm [5737]") )) AND reviewed:yes'
    membrane_result = service.search(membrane_query, frmt="fasta")
    membrane_outfile = f'{save_path}{organism}_membrane_noSP_new_new.fasta'
    with open(membrane_outfile, 'a') as ofh:
        ofh.write(membrane_result)
data_saving_function_without_SP('human',fasta_path)
data_saving_function_without_SP('yeast',fasta_path)
data_saving_function_without_SP('escherichia',fasta_path)
```
## Query the full UniProt database (warning: do not do this unless you have lots of free time and computer memory)
```
def data_saving_function_without_SP_full_uniprot(save_path=''):
    # maybe new:
    secreted_query = f'((keyword:secreted OR goa:("extracellular region [5576]")) NOT (annotation:(type:transmem) OR goa:("membrane [16020]") OR locations:(location:cytoplasm) OR goa:("cytoplasm [5737]") )) AND reviewed:yes'
    secreted_result = service.search(secreted_query, frmt="fasta")
    secreted_outfile = f'{save_path}all_secreted_noSP_new_new.fasta'
    with open(secreted_outfile, 'a') as ofh:
        ofh.write(secreted_result)
    cytoplasm_query = f'(( locations:(location:cytoplasm) OR goa:("cytoplasm [5737]") ) NOT (annotation:(type:transmem) OR goa:("membrane [16020]") OR keyword:secreted OR goa:("extracellular region [5576]") )) AND reviewed:yes'
    cytoplasm_result = service.search(cytoplasm_query, frmt="fasta")
    cytoplasm_outfile = f'{save_path}all_cytoplasm_noSP_new_new.fasta'
    with open(cytoplasm_outfile, 'a') as ofh:
        ofh.write(cytoplasm_result)
    membrane_query = f'(( annotation:(type:transmem) OR goa:("membrane [16020]") ) NOT ( keyword:secreted OR goa:("extracellular region [5576]") OR locations:(location:cytoplasm) OR goa:("cytoplasm [5737]") )) AND reviewed:yes'
    membrane_result = service.search(membrane_query, frmt="fasta")
    membrane_outfile = f'{save_path}all_membrane_noSP_new_new.fasta'
    with open(membrane_outfile, 'a') as ofh:
        ofh.write(membrane_result)
data_saving_function_without_SP_full_uniprot(fasta_path)
```
# Validation report for dmu26_XID+PACS_COSMOS_20170303
The data product dmu26_XID+PACS_COSMOS_20170303, contains three files:
1. dmu26_XID+PACS_COSMOS_20170303.fits: The catalogue file
2. dmu26_XID+PACS_COSMOS_20170303_Bayes_pval_PACS100.fits: The Bayesian pvalue map
3. dmu26_XID+PACS_COSMOS_20170303_Bayes_pval_PACS160.fits: The Bayesian pvalue map
## Catalogue Validation
Validation of the catalogue should cover the following as a minimum:
* Compare XID+ Fluxes with previous catalogues
* Check for sources with poor convergence (i.e. $\hat{R}$ >1.2 and $n_{eff}$ <40)
* Check for sources with strange errors (i.e. a small upper limit and a large lower limit, which would indicate the prior is limiting the flux)
* Check for sources that return prior (i.e. probably very large flux and large error)
* Check background estimate is similar across neighbouring tiles (will vary depending on depth of prior list)
```
from astropy.table import Table
import numpy as np
import pylab as plt
%matplotlib inline
table=Table.read('/Users/williamp/validation/cosmos/PACS/dmu26_XID+PACS_COSMOS_20170303.fits', format='fits')
table[:10].show_in_notebook()
import seaborn as sns
```
### Comparison to previous catalogues
Using COSMOS2015 catalogue and matching to closest PACS objects within 1''
```
#table.sort('help_id')
COSMOS2015 = Table.read('/Users/williamp/validation/cosmos/PACS/COSMOS2015_Laigle+v1.1_wHELPids_PACS_red.fits')
COSMOS2015.sort('HELP_ID')
plt.scatter(np.log10(COSMOS2015['F_PACS_160']), np.log10(COSMOS2015['FLUX_160']))
plt.xlabel('$\log_{10}S_{160 \mathrm{\mu m}, XID+}$')
plt.ylabel('$\log_{10}S_{160 \mathrm{\mu m}, COSMOS2015}$')
plt.show()
plot=sns.jointplot(x=np.log10(COSMOS2015['F_PACS_160']), y=np.log10(COSMOS2015['FLUX_160']), xlim=(-1,3), kind='hex')
plot.set_axis_labels('$\log_{10}S_{160 \mathrm{\mu m}, XID+}$', '$\log_{10}S_{160 \mathrm{\mu m}, COSMOS2015}$')
plt.show()
```
Agreement is reasonably good. Lower-flux objects can be given a lower flux density by XID+, as would be expected. High flux density objects converge between XID+ and COSMOS2015, as would be expected.
### Convergence Statistics
e.g. How many of the objects satisfy critera?
(note: some of the $\hat{R}$ values are NaN. This is a PyStan bug; they are most likely 1.0.)
```
plt.figure(figsize=(20,10))
plt.subplot(1,2,1)
Rhat=plt.hist(table['Rhat_PACS_160'][np.isfinite(table['Rhat_PACS_160'])], bins=np.arange(0.9,1.2,0.01))
plt.xlabel(r'$\hat{R}$')
plt.subplot(1,2,2)
neff=plt.hist(table['n_eff_PACS_160'])
plt.yscale('log')
plt.xlabel(r'$n_{eff.}$')
numRhat = 0
numNeff = 0
for i in range(0, len(table)):
    if table['Rhat_PACS_160'][i] > 1.2 and np.isfinite(table['Rhat_PACS_160'][i]):
        numRhat += 1
    if table['n_eff_PACS_160'][i] < 40:
        numNeff += 1
print(str(numRhat)+' objects have $\hat{R}$ > 1.2')
print(str(numNeff)+' objects have n$_{eff}$ < 40')
```
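The counting loop above can also be done in one pass with boolean masks. A vectorized equivalent is sketched below on toy arrays, since the catalogue itself is not distributed with this report; with the real data you would substitute `table['Rhat_PACS_160']` and `table['n_eff_PACS_160']`.

```python
import numpy as np

# Toy stand-ins for table['Rhat_PACS_160'] and table['n_eff_PACS_160']
rhat = np.array([1.0, 1.25, np.nan, 1.05])
n_eff = np.array([500, 30, 200, 45])

# NaN comparisons evaluate to False, so the finite check is nearly implicit,
# but we keep it explicit to mirror the loop above.
num_rhat = np.sum(np.isfinite(rhat) & (rhat > 1.2))
num_neff = np.sum(n_eff < 40)
print(num_rhat, num_neff)  # -> 1 1
```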
All objects have good $\hat{R}$ and n$_{eff}$ values
### Skewness
```
plot=sns.jointplot(x=np.log10(table['F_PACS_160']), y=(table['FErr_PACS_160_u']-table['F_PACS_160'])/(table['F_PACS_160']-table['FErr_PACS_160_l']*1000.0), xlim=(-1,2), kind='hex')
plot.set_axis_labels(r'$\log_{10}S_{160 \mathrm{\mu m}}$ ', r'$(84^{th}-50^{th})/(50^{th}-16^{th})$ percentiles')
```
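The plotted ratio $(84^{th}-50^{th})/(50^{th}-16^{th})$ is a simple asymmetry diagnostic for the flux posterior: a value near 1 means a roughly symmetric posterior, while values far from 1 flag skewed (or prior-dominated) sources. A toy illustration with `np.percentile`:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=2.0, size=100_000)  # symmetric posterior

p16, p50, p84 = np.percentile(samples, [16, 50, 84])
skew_ratio = (p84 - p50) / (p50 - p16)
print(round(skew_ratio, 2))  # close to 1.0 for a symmetric distribution
```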
### Sources where posterior=prior
Suggest looking at size of errors to diagnose. How many appear to be returning prior? Where are they on Bayesian P value map? Does it make sense why?
The lower errors appear to be very, very small. The catalogue appears to have them multiplied by 1000, and they are still 10,000 times smaller than the upper errors.
### Background value
Are all the background values similar? For those that aren't is it obvious why? (e.g. edge of map, extended source not fitted well etc)
```
plt.hist(table['Bkg_PACS_160'])
plt.xlabel(r'Background (MJy/sr)')
plt.show()
```
The background seems to have quite a large scatter but roughly consistent around ~-1.0
-------------
## Bayesian P value map
The Bayesian P value map can be thought of as a more robust residual map. It gives the probability that our model would obtain each pixel's data value, having inferred our model on the data. The probability is expressed in terms of a sigma value.
* a value of < -2 indicates that our model cannot explain why there is so little flux in the map
* a value of > 2 indicates our model cannot explain why there is so much flux in the map
* a value of ~ 0 indicates a good fit
Plotting the distribution of the Bayesian P value map can indicate whether the map has been fit well in general. If the distribution is centered on 0 and roughly symmetric then it is a good fit.
Suggested validation checks:
* Check distribution is reasonable
* Check for strange areas in map
```
import aplpy
from astropy.io import fits
hdulist=fits.open('/Users/williamp/validation/cosmos/PACS/dmu26_XID+PACS_COSMOS_20170303_Bayes_pval_PACS160.fits')
plt.figure()
Bayes_hist=plt.hist(hdulist[1].data[np.isfinite(hdulist[1].data)], bins=np.arange(-6,6.1,0.05))
plt.xlabel(r'$\sigma$ value')
plt.show()
```
# Checking the Positions
Using the PACS map to check where the high-flux objects and the objects with a high-ish background are.
```
hdulist = fits.open('/Users/williamp/dev/XID_plus/input/cosmosKs/pep_COSMOS_red_Map.DR1.fits')
hdulist[1].header['CTYPE1'] = 'RA'
hdulist[1].header['CTYPE2'] = 'DEC'
```
PACS_red map is in greyscale, object positions are in blue, objects with fluxes > 1e3 are in red, objects with background values > 1.25 MJy/sr are in yellow, objects with P-values > 0.5 are in green.
```
vmin=0.0001
vmax=0.02
fig = plt.figure(figsize=(30,10))
pltut = aplpy.FITSFigure(hdulist[1], figure=fig)
pltut.show_colorscale(vmin=vmin,vmax=vmax,cmap='Greys',stretch='log')
pltut.show_circles(table['RA'],table['Dec'], radius=0.0025, color='b')
bRA = []
bDec = []
for i in range(0, len(table)):
    if table['F_PACS_160'][i] > 1e3:
        bRA.append(table['RA'][i])
        bDec.append(table['Dec'][i])
if len(bRA) > 0:
    pltut.show_circles(bRA, bDec, radius=0.005, color='r')
bgRA = []
bgDec = []
for i in range(0, len(table)):
    if table['Bkg_PACS_160'][i] > 1.25:
        bgRA.append(table['RA'][i])
        bgDec.append(table['Dec'][i])
if len(bgRA) > 0:
    pltut.show_circles(bgRA, bgDec, radius=0.005, color='y')
pRA = []
pDec = []
for i in range(0, len(table)):
    if table['Pval_res_160'][i] > 0.5:
        pRA.append(table['RA'][i])
        pDec.append(table['Dec'][i])
if len(pRA) > 0:
    pltut.show_circles(pRA, pDec, radius=0.005, color='g')
plt.show()
```
The high flux value objects (red) appear to mainly cluster around the edges of the masked regions, so they may just be artefacts of bad masking.
The high background objects (yellow) appear to be a property of individual tiles as they form squares on the image above. These tiles are on the edge of the map and the high background is probably a result of this.
The high P-value objects (green) appear around brighter objects (but not all bright objects), which makes sense; bright blended objects are harder to deblend than non-blended objects. They also often coincide with the high background tiles.
# RNA World Hypothesis
RNA is a simpler cousin of DNA. As you may know, RNA is widely thought to be the first self replicating life-form to arise perhaps around 4 billion years ago. One of the strongest arguments for this theory is that RNA is able to carry information in its nucleotides like DNA, and like protein, it is able to adopt higher order structures to catalyze reactions, such as self replication. So it is likely, and there is growing evidence that this is the case, that the first form of replicating life was RNA. And because of this dual property of RNA as a hereditary information vessel as well as a structural/functional element we can use RNA molecules to build very nice population models.
So in this section, we'll be walking you through building genetic populations, simulating their evolution, and using statistics and other mathematical tools for understanding key properties of populations.
```
from IPython.display import HTML
# Youtube
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/K1xnYFCZ9Yg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
#HTML('<iframe width="784" height="441" src="https://scaleofuniverse.com/" /iframe>')
```
## Population Evolution in *an* RNA World
In order to study the evolution of a population, we first need a model of a population. And even before that, we need to define what we mean by *population*. Populations can be defined on many levels and with many different criteria. For our purposes, we will simply say that a population is a set of individuals sharing a common environment. And because this is population *genetics*, we can think of individuals as entities comprising specific genes or chromosomes.
So where do we get a population from? As you may have discussed in previous workshops, there are very large datasets containing sequencing information from different populations. So we could download one of these datasets and perform some analysis on it. But I find this can be dry and tedious. So why download data when we can simply create our own?
In this section we're going to be creating and studying our own "artificial" populations to illustrate some important population genetics concepts and methodologies. Not only will this help you learn population genetics, but you will get a lot more programming practice than if we were to simply parse data files and go from there.
More specifically, we're going to build our own RNA world.
### Building an RNA population
As we saw earlier, RNA has the nice property of possessing a strong mapping between information carrying (sequence) and function (structure). This is analogous to what is known in evolutionary terms as a genotype and a phenotype. With these properties, we have everything we need to model a population and simulate its evolution.
#### RNA sequence-structure
We can think of the genotype as a sequence $s$ consisting of letters/nucleotides from the alphabet $\{U,A,C,G\}$. The corresponding phenotype $\omega$ is the secondary structure of $s$ which can be thought of as a pairing between nucleotides in the primary sequence that give rise to a 2D architecture. Because it has been shown that the function of many biomolecules, including RNA, is driven by structure this gives us a good proxy for phenotype.
Below is an example of what an RNA secondary structure, or pairing, looks like.
```
### 1
from IPython.display import Image
#This will load an image of an RNA secondary structure
Image(url='https://viennarna.github.io/forgi/_images/1y26_ss.png')
from IPython.display import HTML
# Youtube
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/JQByjprj_mA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
#import matplotlib.pyplot as plt
#import forgi.visual.mplotlib as fvm
#import forgi
#cg = forgi.load_rna("1y26.fx", allow_many=False)
#fvm.plot_rna(cg, text_kwargs={"fontweight":"black"}, lighten=0.7,
# backbone_kwargs={"linewidth":3})
#plt.show()
```
As you can see, unpaired positions form loop-like structures, and paired positions form stem-like structures. It is this spatial arrangement of nucleotides that drives RNA's function. Therefore, another sequence that adopts a similar shape is likely to behave in a similar manner. Another thing to notice is that here we only allow canonical pairs between $\{C,G\}$ and $\{A, U\}$ nucleotides; in reality this is often not the case, and most modern approaches allow for non-canonical pairings, so you will find some examples of them in the structure above.
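The canonical-pairing rule mentioned above is easy to encode. A predicate like the one below (a convention of our own here, restricted to Watson-Crick pairs) is the kind of check a Nussinov-style algorithm consults when deciding whether two positions may pair:

```python
# Canonical Watson-Crick pairs over the RNA alphabet {A, U, C, G}
CANONICAL_PAIRS = {('A', 'U'), ('U', 'A'), ('C', 'G'), ('G', 'C')}

def can_pair(x, y):
    """Return True if nucleotides x and y form a canonical pair."""
    return (x, y) in CANONICAL_PAIRS

print(can_pair('G', 'C'), can_pair('G', 'A'))  # -> True False
```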
*How do we go from a sequence to a structure?*
So a secondary structure is just a list of pairings between positions. How do we get the optimal pairing?
The algorithm we're going to use in our simulations is known as the Nussinov algorithm, one of the first and simplest attempts at predicting RNA structure. Because base pairs tend to stabilize RNA, the algorithm tries to maximize the number of pairs in the structure and returns that as its solution. Current approaches achieve more accurate solutions by using energy models based on experimental values to obtain a structure that minimizes free energy. But since we're not really concerned with the accuracy of our predictions, Nussinov is a good entry point. Furthermore, the main algorithmic concepts are shared between Nussinov and state-of-the-art RNA structure prediction algorithms. I implemented the algorithm in a separate file called `fold.py` whose functions we can import and use. I'm not going to go into full detail on how the algorithm works because it is beyond the scope of this workshop, but there is a bonus exercise at the end if you're curious.
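For the curious, the core idea can still be sketched in a few lines. Below is a minimal, illustrative version of the Nussinov dynamic program (the function name `nussinov_sketch`, the minimum hairpin-loop size of 3, and the restriction to canonical pairs are my assumptions here; the `fold.py` implementation we actually import is the one used in the simulations):

```python
def nussinov_sketch(seq, min_loop=3):
    """Return one maximal list of pairing indices (i, j) for seq."""
    n = len(seq)
    allowed = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}
    # dp[i][j] = maximum number of pairs achievable in seq[i..j]
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # case 1: j stays unpaired
            for k in range(i, j - min_loop):      # case 2: j pairs with some k
                if (seq[k], seq[j]) in allowed:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best

    def traceback(i, j):
        """Recover one optimal set of pairs from the filled dp table."""
        if j - i <= min_loop:
            return []
        if dp[i][j] == dp[i][j - 1]:
            return traceback(i, j - 1)
        for k in range(i, j - min_loop):
            if (seq[k], seq[j]) in allowed:
                left = dp[i][k - 1] if k > i else 0
                if left + 1 + dp[k + 1][j - 1] == dp[i][j]:
                    return traceback(i, k - 1) + [(k, j)] + traceback(k + 1, j - 1)

    return traceback(0, n - 1)

print(nussinov_sketch("GGGAAACCC"))  # [(0, 8), (1, 7), (2, 6)]
```

Running it on `"GGGAAACCC"` recovers the three stacked G-C pairs, exactly the "maximize the number of pairs" behaviour described above.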
You can predict a secondary structure by calling `nussinov()` with a sequence string and it will return a tuple in the form `(structure, pairs)`.
```
### 2
import numpy as np
from fold import nussinov #Codes by Carlos G. Oliver (https://github.com/cgoliver)
sequence_to_fold = "ACCCGAUGUUAUAUAUACCU"
struc = nussinov(sequence_to_fold)
print(">test") #creates the structure of a .fx file for "forna"
print(sequence_to_fold)
print(struc[0])
#Check the molecule at: http://rna.tbi.univie.ac.at/forna/
# Paste the text below in the webpage and see the structure
#>test
#ACCCGAUGUUAUAUAUACCU
#(...(..(((....).))))
```
You will see a funny dot-bracket string in the output. This is a representation of the structure of an RNA. Quite simply, a matching pair of parentheses (open and close) corresponds to the nucleotides at those positions being paired, whereas a dot means that position is unpaired in the structure. Feel free to play around with the input sequence to get a better understanding of the notation.
If you want to visually check the sequence, go to [forna](http://rna.tbi.univie.ac.at/forna/forna.html) and paste the text above (the sequence and its structure) using the **Add Molecule** button. The webpage is embedded below.
So that's enough about RNA structure prediction. Let's move on to building our populations.
```
HTML('<iframe width="784" height="441" src="http://rna.tbi.univie.ac.at/forna/forna.html"></iframe>')
```
### Fitness of a sequence: Target Structure
Now that we have a good way of getting a phenotype (secondary structure), we need a way to evaluate the fitness of that phenotype. In real-life terms, fitness is the ability of a genotype to replicate into the next generation. If you have a gene carrying a mutation that causes some kind of disease, your fitness is decreased and you have a lower chance of contributing offspring to the next generation. On a molecular level the same concept applies. A molecule needs to accomplish a certain function, i.e. bind to some other molecule or send some kind of signal. And as we've seen before, the most important factor determining how well it can carry out this function is its structure. So we can imagine that a certain structure, which we'll call the 'target' structure, is required in order to accomplish a certain function. A sequence that folds correctly into the target structure is then seen as having a greater fitness than one that does not. Since we've encoded structures as simple dot-bracket strings, we can easily compare a given structure to the target, or 'correct', structure and thus evaluate its fitness.
There are many ways to compare structures $w_{1}$ and $w_{2}$, but we're going to use one of the simplest ways, which is base-pair distance. This is just the number of pairs in $w_{1}$ that are not in $w_{2}$. Again, this is beyond the scope of this workshop so I'll just give you the code for it and if you would like to know more you can ask me.
```
### 3
#ss_to_bp() and bp_distance() by Vladimir Reinharz.
def ss_to_bp(ss):
bps = set()
l = []
for i, x in enumerate(ss):
if x == '(':
l.append(i)
elif x == ')':
bps.add((l.pop(), i))
return bps
def bp_distance(w1, w2):
"""
return base pair distance between structures w1 and w2.
w1 and w2 are collections of tuples representing pairing indices.
"""
return len(set(w1).symmetric_difference(set(w2)))
#let's fold two sequences
w1 = nussinov("CCAAAAGG")
w2 = nussinov("ACAAAAGA")
print(w1)
print(w2)
#give the list of pairs to bp_distance and see what the distance is.
print(bp_distance(w1[-1], w2[-1]))
```
## Defining a cell
The cell we will define here is a simple organism with two copies of an RNA gene, each with its own structure. This simple definition of a cell will help us create populations to play around in our evolutionary reactor.
```
### 4
class Cell:
def __init__(self, seq_1, struc_1, seq_2, struc_2):
self.sequence_1 = seq_1
self.sequence_2 = seq_2
self.structure_1 = struc_1
self.structure_2 = struc_2
#for now just try initializing a Cell with made up sequences and structures
cell = Cell("AACCCCUU", "((....))", "GGAAAACA", "(....)..")
print(cell.sequence_1, cell.structure_1, cell.sequence_2, cell.structure_2)
```
## Populations of Cells
Now we've defined a 'Cell'. Since a population is a collection of individuals our populations will naturally consist of **lists** of 'Cell' objects, each with their own sequences. Here we initialize all the Cells with random sequences and add them to the 'population' list.
```
### 5
import random
def populate(target, pop_size):
'''Creates a population of cells (pop_size) with a number of random RNA nucleotides (AUCG)
matching the length of the target structure'''
population = []
for i in range(pop_size):
#get a random sequence to start with
sequence = "".join([random.choice("AUCG") for _ in range(len(target))])
#use nussinov to get the secondary structure for the sequence
structure = nussinov(sequence)
#add a new Cell object to the population list
new_cell = Cell(sequence, structure, sequence, structure)
new_cell.id = i
new_cell.parent = i
population.append(new_cell)
return population
```
Try creating a new population and printing the first 10 sequences and structures (in dot-bracket) on the first chromosome!
```
### 6
target = "(((....)))"
pop = populate(target, pop_size=300)
for p in pop[:10]:
print(p.id, p.sequence_1, p.structure_1[0], p.sequence_2, p.structure_2[0])
for p in pop[-10:]:
print(p.id, p.sequence_1, p.structure_1[0], p.sequence_2, p.structure_2[0])
```
## The Fitness of a Cell
Once we are able to create populations of cells, we need a way of assessing their individual fitness. In our model, a *Cell* is an object that contains two sequences of RNA, analogous to having two copies of a gene, one on each chromosome.
We could simply loop through each *Cell* in the population and check the base pair distance to the target structure we defined. However, this approach of using base-pair distance is not the best for defining fitness. There are two reasons for this:
1. We want fitness to represent a *probability* that a cell will reproduce (pass its genes to the next generation), and base pair distance is, in our case, an integer value.
2. We want this probability to be a *relative* measure of fitness. That is, we want the fitness to be proportional to how good a cell is with respect to all others in the population. This touches on an important principle in evolution: we only need to be 'better' than the rest of the population (the competition), not good by some absolute measure. For instance, if you and I are being chased by a predator, then in order to survive I only need to be faster than you, not faster than some absolute threshold.
In order to get a probability (number between 0 and 1) we use the following equation to define the fitness of a structure $\omega$ on a target structure $T$:
$$P(\omega, T) = \frac{1}{N} \exp\left(\beta\, \frac{\texttt{dist}(\omega, T)}{\texttt{len}(\omega)}\right), \qquad \beta < 0$$
$$N = \sum_{i \in Pop} \exp\left(\beta\, \frac{\texttt{dist}(\omega_i, T)}{\texttt{len}(\omega_i)}\right)$$
Here, $N$ is what gives us the 'relative' measure: we divide the raw fitness of each Cell by the sum of the raw fitnesses of every Cell in the population, so the values sum to 1. (Note that $\beta$ is negative, matching the `beta=-3` used in the code.)
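To make the normalization concrete, here is a tiny hand-rolled example (the distances and population size are made up) showing how raw fitness values become a probability distribution:

```python
import math

# Hypothetical base-pair distances of a 5-cell population to a length-10 target,
# using beta = -3 as in the code below.
beta, length = -3, 10
distances = [0, 1, 2, 5, 10]

raw = [math.exp(beta * d / length) for d in distances]
N = sum(raw)                       # the normalization factor N from the equation above
fitnesses = [r / N for r in raw]

# The normalized fitnesses sum to 1, so they can be used as sampling probabilities;
# the cell closest to the target gets the largest share.
print([round(f, 3) for f in fitnesses])
```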
Let's take a quick look at how this function behaves if we plot different base pair distance values.
What is the effect of the parameter $\beta$? Try plotting the same function but with different values of $\beta$.
```
%matplotlib inline
import matplotlib.pyplot as plt
import math
#import seaborn as sns
target_length = 50
beta = -3
plt.plot([math.exp(beta * (bp_dist / float(target_length))) for bp_dist in range(target_length)])
plt.xlabel("Base pair distance to target structure")
plt.ylabel("P(w, T)")
```
As you can see, it's a very simple function that evaluates to 1 (highest raw fitness) when the base pair distance is 0 and decreases as the structure gets further and further from the target. I didn't include the $N$ in the plot as it would be more annoying to compute, but it is simply a scaling factor, so the shape and the main idea don't change.
Now we can use this function to get a fitness value for each Cell in our population.
```
### 7
def compute_fitness(population, target, beta=-3):
"""
Assigns a fitness and bp_distance value to each cell in the population.
"""
#store the fitness values of each cell
tot = []
#iterate through each cell
for cell in population:
#calculate the bp_distance of each chromosome using the cell's structure
bp_distance_1 = bp_distance(cell.structure_1[-1], ss_to_bp(target))
bp_distance_2 = bp_distance(cell.structure_2[-1], ss_to_bp(target))
#use the bp_distances and the above fitness equation to calculate the fitness of each chromosome
fitness_1 = math.exp((beta * bp_distance_1 / float(len(cell.sequence_1))))
fitness_2 = math.exp((beta * bp_distance_2 / float(len(cell.sequence_2))))
#get the fitness of the whole cell by multiplying the fitnesses of each chromosome
cell.fitness = fitness_1 * fitness_2
#store the bp_distance of each chromosome.
cell.bp_distance_1 = bp_distance_1
cell.bp_distance_2 = bp_distance_2
#add the cell's fitness value to the list of all fitness values (used for normalization later)
tot.append(cell.fitness)
#normalization factor is sum of all fitness values in population
norm = np.sum(tot)
#divide all fitness values by the normalization factor.
for cell in population:
cell.fitness = cell.fitness / norm
return None
compute_fitness(pop, target)
for cell in pop[:10]:
print(cell.fitness, cell.bp_distance_1, cell.bp_distance_2)
```
## Introducing diversity: Mutations
Evolution would go nowhere without random mutations. While mutations are technically just random errors in the copying of genetic material, they are essential to the process of evolution, because they introduce novel diversity to populations, which at low frequency can be beneficial. And when a beneficial mutation arises (i.e. a mutation that increases fitness, or replication probability), it quickly takes over the population, and the population as a whole ends up with a higher fitness.
Implementing mutations in our model will be quite straightforward. Since mutations happen at the genotype/sequence level, we simply have to iterate through our strings of nucleotides (sequences) and randomly introduce changes.
```
def mutate(sequence, mutation_rate=0.001):
"""Takes a sequence and mutates bases with probability mutation_rate"""
#start an empty string to store the mutated sequence
new_sequence = ""
#boolean storing whether or not the sequence got mutated
mutated = False
#go through every bp in the sequence
for bp in sequence:
#generate a random number between 0 and 1
r = random.random()
#if r is below mutation rate, introduce a mutation
if r < mutation_rate:
#add a randomly sampled nucleotide to the new sequence
new_sequence = new_sequence + random.choice("AUCG")
mutated = True
else:
#if the mutation condition did not get met, copy the current bp to the new sequence
new_sequence = new_sequence + bp
return (new_sequence, mutated)
sequence_to_mutate = 'AAAAGGAGUGUGUAUGU'
print(sequence_to_mutate)
print(mutate(sequence_to_mutate, mutation_rate=0.5))
```
## Selection
The final process in this evolution model is selection. Once you have populations with a diverse range of fitnesses, we need to select the fittest individuals and let them replicate and contribute offspring to the next generation. In real populations this is just the process of reproduction. If you're fit enough you will be likely to reproduce more than another individual who is not as well suited to the environment.
In order to represent this process in our model, we will use the fitness values that we assigned to each Cell earlier and use that to select replicating Cells. This is equivalent to sampling from a population with the sampling being weighted by the fitness of each Cell. Thankfully, `numpy.random.choice` comes to the rescue here. Once we have sampled enough Cells to build our next generation, we introduce mutations and compute the fitness values of the new generation.
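Since `compute_fitness()` normalizes fitnesses to sum to 1, they can be fed directly to `numpy.random.choice` as the `p` argument. Here is a quick standalone sketch of that sampling step (the fitness values are made up, and I use NumPy's `default_rng` only to make the draw reproducible):

```python
import numpy as np

# A hypothetical population of 4 cells, identified by index 0..3,
# with already-normalized fitness values.
fitnesses = [0.5, 0.3, 0.15, 0.05]

rng = np.random.default_rng(seed=0)
# Sample 1000 parents, each chosen with probability proportional to its fitness.
parents = rng.choice(4, size=1000, p=fitnesses, replace=True)

counts = np.bincount(parents, minlength=4)
print(counts)  # cell 0 should be picked roughly half the time
```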
```
def selection(population, target, mutation_rate=0.001, beta=-3):
"""
Returns a new population with offspring of the input population
"""
#select the sequences that will be 'parents' and contribute to the next generation
parents = np.random.choice(population, len(population), p=[rna.fitness for rna in population], replace=True)
#build the next generation using the parents list
next_generation = []
for i, p in enumerate(parents):
new_cell = Cell(p.sequence_1, p.structure_1, p.sequence_2, p.structure_2)
new_cell.id = i
new_cell.parent = p.id
next_generation.append(new_cell)
#introduce mutations in next_generation sequences and re-fold when a mutation occurs
for rna in next_generation:
mutated_sequence_1, mutated_1 = mutate(rna.sequence_1, mutation_rate=mutation_rate)
mutated_sequence_2, mutated_2 = mutate(rna.sequence_2, mutation_rate=mutation_rate)
if mutated_1:
rna.sequence_1 = mutated_sequence_1
rna.structure_1 = nussinov(mutated_sequence_1)
if mutated_2:
    rna.sequence_2 = mutated_sequence_2
    rna.structure_2 = nussinov(mutated_sequence_2)
#update fitness values for the new generation
compute_fitness(next_generation, target, beta=beta)
return next_generation
next_gen = selection(pop, target)
for cell in next_gen[:10]:
print(cell.sequence_1)
```
## Gathering information on our populations
Here we simply store some statistics (in a dictionary) on the population at each generation such as the average base pair distance and the average fitness of the populations. No coding to do here, it's not a very interesting function but feel free to give it a look.
```
def record_stats(pop, population_stats):
"""
Takes a population list and a dictionary and updates it with stats on the population.
"""
generation_bp_distance_1 = [rna.bp_distance_1 for rna in pop]
generation_bp_distance_2 = [rna.bp_distance_2 for rna in pop]
mean_bp_distance_1 = np.mean(generation_bp_distance_1)
mean_bp_distance_2 = np.mean(generation_bp_distance_2)
mean_fitness = np.mean([rna.fitness for rna in pop])
population_stats.setdefault('mean_bp_distance_1', []).append(mean_bp_distance_1)
population_stats.setdefault('mean_bp_distance_2', []).append(mean_bp_distance_2)
population_stats.setdefault('mean_fitness', []).append(mean_fitness)
return None
```
## And finally.... evolution
We can put all the above parts together in a simple function that does the following:
1. start a new population and compute its fitness
2. repeat the following for the desired number of generations:
1. record statistics on population
2. perform selection+mutation
3. store new population
And that's it! We have an evolutionary reactor!
```
def evolve(target, generations=20, pop_size=100, mutation_rate=0.001, beta=-2):
"""
Takes target structure and sets up initial population, performs selection and iterates for desired generations.
"""
#store list of all populations throughout generations [[cells from generation 1], [cells from gen. 2]...]
populations = []
#start a dictionary that will hold some stats on the populations.
population_stats = {}
#get a starting population
initial_population = populate(target, pop_size=pop_size)
#compute fitness of initial population
compute_fitness(initial_population, target)
#set current_generation to initial population.
current_generation = initial_population
#iterate the selection process over the desired number of generations
for i in range(generations):
#let's get some stats on the structures in the populations
record_stats(current_generation, population_stats)
#add the current generation to our list of populations.
populations.append(current_generation)
#select the next generation
new_gen = selection(current_generation, target, mutation_rate=mutation_rate, beta=beta)
#set current generation to be the generation we just obtained.
current_generation = new_gen
return (populations, population_stats)
```
Try a run of the `evolve()` function.
```
target = "(((....)))"
pops, pops_stats = evolve(target, generations=20, pop_size=300, mutation_rate=0.001, beta=-2)
#Print the first 10 cells of the final generation
for p in pops[-1][:10]:
    print(p.id, p.sequence_1, p.structure_1[0], p.sequence_2, p.structure_2[0])
#Print the last 10 cells of the final generation
for p in pops[-1][-10:]:
    print(p.id, p.sequence_1, p.structure_1[0], p.sequence_2, p.structure_2[0])
```
Let's see if it actually worked by plotting the average base pair distance as a function of generations for both genes in each cell. We should expect a gradual decrease as the populations get closer to the target structure.
```
def evo_plot(pops_stats):
"""
Plot base pair distance for each chromosome over generations.
"""
plt.figure('Mean base pair Distance',figsize=(10,5))
for m in ['mean_bp_distance_1', 'mean_bp_distance_2']:
plt.plot(pops_stats[m], label=m)
plt.legend()
plt.xlabel("Generations")
plt.ylabel("Mean Base Pair Distance")
evo_plot(pops_stats)
```
Let's look at the structure of random cells from each generation. Run the code below and paste the output into the RNA folding webpage. Compare the base-pair distance plot with the structures.
Questions:
- Do you notice some similarities from a particular generation onwards? Compare your observations to the Mean Base Pair Distance plot.
- What could trigger another evolutionary jump?

```
from IPython.display import HTML
HTML('<iframe width="784" height="441" src="http://rna.tbi.univie.ac.at/forna/forna.html"></iframe>')
#Select a random RNA sequence from each generation to check its folding structure
from random import randrange
#print(randrange(999))
generations=20
pop_size=300
#Print some random cells from each generation
#pops[generation][cell in that generation].{quality to retrieve}
for gen in range(generations):
    cid = randrange(pop_size)
    print('>Gen' + str(gen+1) + '_Cell_' + str(pops[gen][cid].id))
    print(pops[gen][cid].sequence_1)
    print(str(pops[gen][cid].structure_1[0]) + '\n')
```
You should see a nice drop in base pair distance! Another way of visualizing this is by plotting a histogram of the base pair distance of all Cells in the initial population versus the final population.
```
import seaborn as sns

def bp_distance_distributions(pops):
"""
Plots histograms of base pair distance in initial and final populations.
"""
#plot bp_distance_1 for rnas in first population
g = sns.distplot([rna.bp_distance_1 for rna in pops[0]], label='initial population')
#plot bp_distance_1 for rnas in the final population
g = sns.distplot([rna.bp_distance_1 for rna in pops[-1]], label='final population')
g.set(xlabel='Mean Base Pair Distance')
g.legend()
bp_distance_distributions(pops)
```
## Introducing mating to the model
The populations we generated evolved asexually. This means that individuals do not mate or exchange genetic information. So to make our simulation a bit more interesting let's let the Cells mate. This is going to require a few small changes in the `selection()` function. Previously, when we selected sequences to go into the next generation we just let them provide one offspring which was a copy of itself and introduced mutations. Now instead of choosing one Cell at a time, we will randomly choose two 'parents' that will mate. When they mate, each parent will contribute one of its chromosomes to the child. We'll repeat this process until we have filled the next generation.
```
def selection_with_mating(population, target, mutation_rate=0.001, beta=-2):
next_generation = []
counter = 0
while len(next_generation) < len(population):
#select two parents based on their fitness
parents_pair = np.random.choice(population, 2, p=[rna.fitness for rna in population], replace=False)
#take the sequence and structure from the first parent's first chromosome and give it to the child
child_chrom_1 = (parents_pair[0].sequence_1, parents_pair[0].structure_1)
#do the same for the child's second chromosome and the second parent.
child_chrom_2 = (parents_pair[1].sequence_2, parents_pair[1].structure_2)
#initialize the new child Cell with the new chromosomes.
child_cell = Cell(child_chrom_1[0], child_chrom_1[1], child_chrom_2[0], child_chrom_2[1])
#give the child and id and store who its parents are
child_cell.id = counter
child_cell.parent_1 = parents_pair[0].id
child_cell.parent_2 = parents_pair[1].id
#add the child to the new generation
next_generation.append(child_cell)
counter = counter + 1
#introduce mutations in next_generation sequences and re-fold when a mutation occurs (same as before)
for rna in next_generation:
mutated_sequence_1, mutated_1 = mutate(rna.sequence_1, mutation_rate=mutation_rate)
mutated_sequence_2, mutated_2 = mutate(rna.sequence_2, mutation_rate=mutation_rate)
if mutated_1:
rna.sequence_1 = mutated_sequence_1
rna.structure_1 = nussinov(mutated_sequence_1)
if mutated_2:
    rna.sequence_2 = mutated_sequence_2
    rna.structure_2 = nussinov(mutated_sequence_2)
#update fitness values for the new generation
compute_fitness(next_generation, target, beta=beta)
return next_generation
#run a small test to make sure it works
next_gen = selection_with_mating(pop, target)
for cell in next_gen[:10]:
print(cell.sequence_1)
```
Now we just have to write a version of our `evolve()` function that calls the new `selection_with_mating()` function.
```
def evolve_with_mating(target, generations=10, pop_size=100, mutation_rate=0.001, beta=-2):
populations = []
population_stats = {}
initial_population = populate(target, pop_size=pop_size)
compute_fitness(initial_population, target)
current_generation = initial_population
#iterate the selection process over the desired number of generations
for i in range(generations):
#let's get some stats on the structures in the populations
record_stats(current_generation, population_stats)
#add the current generation to our list of populations.
populations.append(current_generation)
#select the next generation, but this time with mutations
new_gen = selection_with_mating(current_generation, target, mutation_rate=mutation_rate, beta=beta)
current_generation = new_gen
return (populations, population_stats)
```
Try out the new evolution model!
```
pops_mating, pops_stats_mating = evolve_with_mating("(((....)))", generations=20, pop_size=1000, beta=-2)
evo_plot(pops_stats_mating)
#Select a random RNA sequence from each generation to check its folding structure
from random import randrange
#print(randrange(999))
generations=20
pop_size=1000
#Print some random cells from each generation
#pops[generation][cell in that generation].{quality to retrieve}
for gen in range(generations):
    cid = randrange(pop_size)
    print('>Gen' + str(gen+1) + '_Cell_' + str(pops_mating[gen][cid].id))
    print(pops_mating[gen][cid].sequence_1)
    print(str(pops_mating[gen][cid].structure_1[0]) + '\n')
```
# Acknowledgements
The computational codes of this notebook were originally created by [Carlos G. Oliver](https://github.com/cgoliver), and adapted by Evert Durán for ASTRO200.
# Introduction
In this post, we will talk about some of the most important papers that have been published over the last 5 years and discuss why they're so important. We will go through different CNN architectures (LeNet to DenseNet), showcasing the advancements in general network architecture that made these architectures top the ILSVRC results.
# What is ImageNet
[ImageNet](http://www.image-net.org/)
ImageNet is formally a project aimed at (manually) labeling and categorizing images into almost 22,000 separate object categories for the purpose of computer vision research.
However, when we hear the term “ImageNet” in the context of deep learning and Convolutional Neural Networks, we are likely referring to the ImageNet Large Scale Visual Recognition Challenge, or ILSVRC for short.
The ImageNet project runs an annual software contest, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where software programs compete to correctly classify and detect objects and scenes.
The goal of this image classification challenge is to train a model that can correctly classify an input image into 1,000 separate object categories.
Models are trained on ~1.2 million training images with another 50,000 images for validation and 100,000 images for testing.
These 1,000 image categories represent object classes that we encounter in our day-to-day lives, such as species of dogs, cats, various household objects, vehicle types, and much more. You can find the full list of object categories in the ILSVRC challenge
When it comes to image classification, the **ImageNet** challenge is the de facto benchmark for computer vision classification algorithms — and the leaderboard for this challenge has been dominated by Convolutional Neural Networks and deep learning techniques since 2012.
# LeNet-5(1998)
[Gradient Based Learning Applied to Document Recognition](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf)
1. A pioneering 7-level convolutional network by LeCun et al. that classifies digits.
2. It found application at several banks for recognising hand-written numbers on checks (cheques).
3. These numbers were digitized into 32x32-pixel greyscale images, which served as the network's input.
4. Processing higher-resolution images requires larger and more numerous convolutional layers, so the technique was constrained by the availability of computing resources.
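To see how the 32x32 input flows through the network, here is a back-of-the-envelope walk through the classic LeNet-5 feature-map sizes (valid 5x5 convolutions and 2x2 stride-2 subsampling, as in the paper; the helper function is my own sketch):

```python
def conv_out(size, kernel, stride=1):
    """Output width/height of a valid convolution or pooling layer."""
    return (size - kernel) // stride + 1

size = 32                    # input image
size = conv_out(size, 5)     # C1: 6 feature maps of 28x28
size = conv_out(size, 2, 2)  # S2: subsample to 14x14
size = conv_out(size, 5)     # C3: 16 feature maps of 10x10
size = conv_out(size, 2, 2)  # S4: subsample to 5x5
print(size)  # 5 -> the 16*5*5 values feed the fully connected layers
```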

# AlexNet(2012)
[ImageNet Classification with Deep Convolutional Networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
1. One of the most influential publications in the field by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton that started the revolution of CNN in Computer Vision.This was the first time a model performed so well on a historically difficult ImageNet dataset.
2. The network consisted of 11x11, 5x5, and 3x3 convolutions and was made up of 5 conv layers, max-pooling layers, dropout layers, and 3 fully connected layers.
3. Used ReLU for the nonlinearity functions (Found to decrease training time as ReLUs are several times faster than the conventional tanh function) and used SGD with momentum for training.
4. Used data augmentation techniques that consisted of image translations, horizontal reflections, and patch extractions.
5. Implemented dropout layers in order to combat the problem of overfitting to the training data.
6. Trained the model using batch stochastic gradient descent, with specific values for momentum and weight decay.
7. AlexNet was trained for 6 days simultaneously on two Nvidia Geforce GTX 580 GPUs which is the reason for why their network is split into two pipelines.
8. AlexNet significantly outperformed all the prior competitors and won the challenge by reducing the top-5 error from 26% to 15.3%

# ZFNet(2013)
[Visualizing and Understanding Convolutional Neural Networks](https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)
<br>
This architecture was more of a fine-tuning of the previous AlexNet structure, tweaking AlexNet's hyper-parameters while maintaining the same overall structure, but it still developed some very key ideas about improving performance. A few of the modifications were the following:
1. AlexNet is often described as training on 15 million images (the size of the full ImageNet database), while ZF Net trained on only 1.3 million images.
2. Instead of using 11x11 sized filters in the first layer (which is what AlexNet implemented), ZF Net used filters of size 7x7 and a decreased stride value. The reasoning behind this modification is that a smaller filter size in the first conv layer helps retain a lot of original pixel information in the input volume. A filter of size 11x11 proved to skip a lot of relevant information, especially as this is the first conv layer.
3. As the network grows, we also see a rise in the number of filters used.
4. Used ReLUs for their activation functions, cross-entropy loss for the error function, and trained using batch stochastic gradient descent.
5. Trained on a GTX 580 GPU for twelve days.
6. Developed a visualization technique named **Deconvolutional Network**, which helps to examine different feature activations and their relation to the input space. Called **deconvnet** because it maps features to pixels (the opposite of what a convolutional layer does).
7. It achieved a top-5 error rate of 14.8%

# VggNet(2014)
[VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION](https://arxiv.org/pdf/1409.1556v6.pdf)
This architecture is well known for its **simplicity and depth**. VGGNet is very appealing because of its very uniform architecture. The authors proposed 6 different variations of VGGNet; however, the 16-layer variant with all 3x3 convolutions produced the best result.
Few things to note:
1. The use of only 3x3 sized filters is quite different from AlexNet’s 11x11 filters in the first layer and ZF Net’s 7x7 filters. The authors’ reasoning is that the combination of two 3x3 conv layers has an effective receptive field of 5x5. This in turn simulates a larger filter while keeping the benefits of smaller filter sizes. One of the benefits is a decrease in the number of parameters. Also, with two conv layers, we’re able to use two ReLU layers instead of one.
2. 3 conv layers back to back have an effective receptive field of 7x7.
3. As the spatial size of the input volumes at each layer decrease (result of the conv and pool layers), the depth of the volumes increase due to the increased number of filters as you go down the network.
4. Interesting to notice that the number of filters doubles after each maxpool layer. This reinforces the idea of shrinking spatial dimensions, but growing depth.
5. Worked well on both image classification and localization tasks. The authors used a form of localization as regression (see page 10 of the paper for all details).
6. Built model with the Caffe toolbox.
7. Used scale jittering as one data augmentation technique during training.
8. Used ReLU layers after each conv layer and trained with batch gradient descent.
9. Trained on 4 Nvidia Titan Black GPUs for two to three weeks.
10. It achieved a top-5 error rate of 7.3%
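The receptive-field and parameter claims in points 1 and 2 above are easy to check numerically. Here is a small sketch (the helper function and the channel count are my own illustration):

```python
def receptive_field(kernel_sizes, strides=None):
    """Effective receptive field of a stack of conv layers (stride 1 by default)."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf

print(receptive_field([3, 3]))     # two 3x3 layers -> 5
print(receptive_field([3, 3, 3]))  # three 3x3 layers -> 7

# Parameter comparison for C input and C output channels (ignoring biases):
C = 64
print(2 * 3 * 3 * C * C)  # two stacked 3x3 convs: 73728 weights
print(5 * 5 * C * C)      # one 5x5 conv, same receptive field: 102400 weights
```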


# Inception Network (GoogLeNet)(2014)
[Going Deeper with Convolutions](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf)
Prior to this, most popular CNNs just stacked convolution layers deeper and deeper, hoping to get better performance. The **Inception Network**, however, was one of the first CNN architectures that really strayed from the general approach of simply stacking conv and pooling layers on top of each other in a sequential structure, introducing the **Inception Module** instead. The Inception network was complex; it used a lot of tricks to push performance, both in terms of speed and accuracy. Its constant evolution led to the creation of several versions of the network. The popular versions are as follows:
1. Inception v1.
2. Inception v2 and Inception v3.
3. Inception v4 and Inception-ResNet.
<br>
Each version is an iterative improvement over the previous one. Let us go ahead and explore them one by one.

## Inception V1
[Inception v1](https://arxiv.org/pdf/1409.4842v1.pdf)

**Problems this network tried to solve:**
1. **What is the right kernel size for convolution**
<br>
A larger kernel is preferred for information that is distributed more globally, and a smaller kernel is preferred for information that is distributed more locally.
<br>
**Ans-** Use filters with multiple sizes. The network essentially gets a bit "wider" rather than "deeper".
<br>
<br>
2. **How to stack convolutions so that they are less computationally expensive**
<br>
Stacking them naively is computationally expensive.
<br>
**Ans-** Limit the number of input channels by adding an extra 1x1 convolution before the 3x3 and 5x5 convolutions.
<br>
<br>
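The saving from the 1x1 bottleneck can be sketched with back-of-the-envelope multiply counts. The numbers below (a 28x28x192 input, 32 output filters, and a 16-channel reduction) are illustrative assumptions, not taken from the paper:

```python
# Multiplications for a 5x5 convolution on an H x W x C_in input producing
# C_out feature maps, with and without a 1x1 bottleneck down to C_mid channels.
H, W, C_in, C_out, C_mid = 28, 28, 192, 32, 16

naive = H * W * C_out * 5 * 5 * C_in                # direct 5x5 convolution
bottleneck = (H * W * C_mid * 1 * 1 * C_in          # 1x1 reduction first
              + H * W * C_out * 5 * 5 * C_mid)      # then the 5x5 convolution

print(naive, bottleneck)  # 120422400 12443648
```

Roughly a 10x reduction in compute for this configuration, at the cost of a (usually harmless) dimensionality reduction.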
3. **How to avoid overfitting in a very deep network**
<br>
Very deep networks are prone to overfitting. It is also hard to pass gradient updates through the entire network.
<br>
**Ans-** Introduce two auxiliary classifiers (the purple boxes in the image). They essentially apply softmax to the outputs of two of the inception modules and compute an auxiliary loss over the same labels. The total loss function is a weighted sum of the auxiliary losses and the real loss.
The total loss used by the inception net during training.
<br>
**total_loss = real_loss + 0.3 * aux_loss_1 + 0.3 * aux_loss_2**
<br>
<br>

**Points to note**
1. Used 9 Inception modules in the whole architecture, with over 100 layers in total! Now that is deep…
2. No use of fully connected layers! They use an average pool instead, to go from a 7x7x1024 volume to a 1x1x1024 volume. This saves a huge number of parameters.
3. Uses 12x fewer parameters than AlexNet.
4. Trained on “a few high-end GPUs within a week”.
5. It achieved a top-5 error rate of 6.67%
## Inception V2
[Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/pdf/1512.00567v3.pdf)
Upgrades were targeted towards:
1. Reducing the representational bottleneck by replacing each 5x5 convolution with two stacked 3x3 convolutions, which also improves computational speed.
<br>
The intuition was that neural networks perform better when convolutions do not alter the dimensions of the input drastically. Reducing the dimensions too much may cause a loss of information, known as a **"representational bottleneck"**.
<br>

2. Using a smart factorization method that factorizes an nxn convolution into a combination of 1xn and nx1 convolutions.
<br>
For example, a 3x3 convolution is equivalent to first performing a 1x3 convolution and then a 3x1 convolution on its output. They found this method to be 33% cheaper than a single 3x3 convolution.
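The 33% figure follows directly from parameter counts: an nxn kernel has n² weights per channel pair, while the 1xn plus nx1 pair has 2n. A quick check (the channel count of 64 is an arbitrary assumption):

```python
n, C = 3, 64  # kernel size and an illustrative channel count

params_nxn = n * n * C * C          # single 3x3: 9 weights per channel pair
params_factored = (n + n) * C * C   # 1x3 followed by 3x1: 6 weights per pair

saving = 1 - params_factored / params_nxn
print(round(saving * 100))  # 33
```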

# ResNet(2015)
[Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf)

**In ResNet, identity mapping is proposed to promote gradient propagation. Element-wise addition is used. It can be viewed as an algorithm with a state passed from one ResNet module to the next.**
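The element-wise addition can be sketched in a few lines of NumPy. This toy `residual_block` illustrates the identity-shortcut idea only; it is not the paper's implementation:

```python
import numpy as np

def residual_block(x, residual_fn):
    """Output y = F(x) + x: the residual branch plus the identity shortcut.
    During backprop the shortcut passes gradients through unchanged."""
    return residual_fn(x) + x

x = np.ones(4)
y = residual_block(x, lambda v: 2.0 * v)  # toy residual branch F(x) = 2x
print(y)  # [3. 3. 3. 3.]
```

If F(x) collapses toward zero, the block degrades gracefully to the identity, which is what makes very deep stacks trainable.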


# ResNet-Wide

Left: a building block of [2]; right: a building block of ResNeXt with cardinality = 32.
# DenseNet(2017)
[Densely Connected Convolutional Networks](https://arxiv.org/pdf/1608.06993v3.pdf)
<br>
It is a logical extension to ResNet.
**From the paper:**
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
**DenseNet Architecture**

Let us explore the different components of the network
<br>
<br>
**1. Dense Block**
<br>
Feature map sizes are the same within the dense block so that they can be concatenated together easily.

**In DenseNet, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. Concatenation is used. Each layer is receiving a “collective knowledge” from all preceding layers.**

Since each layer receives feature maps from all preceding layers, the network can be thinner and more compact, i.e. the number of channels can be smaller. The growth rate k is the number of additional channels for each layer.
The paper proposed different ways to implement DenseNet (with/without B/C) by adding some variations to the dense block to further reduce the complexity and size and to bring more compression into the architecture.
1. Dense Block (DenseNet)
- Batch Norm (BN)
- ReLU
- 3×3 Convolution
2. Dense Block(DenseNet B)
- Batch Norm (BN)
- ReLU
- 1×1 Convolution
- Batch Norm (BN)
- ReLU
- 3×3 Convolution
3. Dense Block (DenseNet C)
- If a dense block contains m feature-maps, the transition layer generates $\lfloor \theta m \rfloor$ output feature-maps, where $0 < \theta \leq 1$ is referred to as the compression factor.
- $\theta = 0.5$ was used in the experiments, which reduced the number of feature maps by 50%.
4. Dense Block(DenseNet BC)
- Combination of Densenet B and Densenet C
<br>
**2. Transition Layer**
<br>
The layers between two adjacent blocks are referred to as transition layers where the following operations are done to change feature-map sizes:
- 1×1 Convolution
- 2×2 Average pooling
**Points to Note:**
1. It requires fewer parameters than traditional convolutional networks.
2. Traditional convolutional networks with L layers have L connections (one between each layer and its subsequent layer), whereas DenseNet has L(L+1)/2 direct connections.
3. Improved flow of information and gradients throughout the network, which makes it easy to train.
4. It alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
5. Concatenating feature maps learned by different layers, instead of summing them, increases variation in the input of subsequent layers and improves efficiency. This constitutes a major difference between DenseNets and ResNets.
6. It achieved a top-5 error rate of 6.66%.
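Two of the points above (the L(L+1)/2 connection count and the growth rate k) can be sketched numerically; k0 = 64 initial channels and k = 32 are illustrative values, not the paper's configuration:

```python
# Direct connections in a dense block with L layers: each layer connects to
# every later layer, giving L*(L+1)/2 connections in total.
L = 5
connections = L * (L + 1) // 2
print(connections)  # 15

# Channels entering each layer: k0 initial channels plus k new ones per layer,
# since every layer's output is concatenated onto the running feature stack.
k0, k = 64, 32
in_channels = [k0 + i * k for i in range(4)]
print(in_channels)  # [64, 96, 128, 160]
```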
# MobileNet
## Spatial Separable Convolution

**Divides a kernel into two, smaller kernels**

**Instead of doing one convolution with 9 multiplications (parameters), we do two convolutions with 3 multiplications (parameters) each (6 in total) to achieve the same effect.**

**With fewer multiplications, computational complexity goes down, and the network runs faster.**
This was used in an architecture called [Effnet](https://arxiv.org/pdf/1801.06434v1.pdf) showing promising results.
The main issue with spatial separable convolutions is that not all kernels can be "separated" into two smaller kernels. This becomes particularly bothersome during training, since of all the possible kernels the network could adopt, it can only use the tiny portion that can be separated into two smaller kernels.
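A classic example of a kernel that does separate exactly is the Sobel edge-detection kernel, which factors into a 3x1 column times a 1x3 row (6 parameters instead of 9):

```python
import numpy as np

sobel = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])

col = np.array([[1], [2], [1]])  # 3x1 kernel
row = np.array([[-1, 0, 1]])     # 1x3 kernel

# The outer product of the two small kernels rebuilds the 3x3 kernel exactly
assert np.array_equal(col @ row, sobel)
```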
## Depthwise Convolution

Say we need to increase the number of channels from 16 to 32 using a 3x3 kernel.
<br>
**Normal Convolution**
<br>
Total No of Parameters = 3 x 3 x 16 x 32 = 4608

**Depthwise Convolution**
1. Depthwise convolution: 16 x [3 x 3 x 1] = 144 parameters
2. Pointwise convolution: 32 x [1 x 1 x 16] = 512 parameters
Total number of parameters = 144 + 512 = 656
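These counts can be verified directly:

```python
# Parameter counts for a 3x3 convolution taking 16 channels to 32 channels
k, C_in, C_out = 3, 16, 32

normal = k * k * C_in * C_out       # standard convolution: 4608
depthwise = C_in * (k * k * 1)      # one 3x3x1 filter per input channel: 144
pointwise = C_out * (1 * 1 * C_in)  # 32 1x1x16 filters: 512

print(normal, depthwise + pointwise)  # 4608 656
```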
**MobileNet uses depthwise separable convolutions to reduce the number of parameters**
# References
[Stanford CS231n Lecture Notes](http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture9.pdf)
<br>
[The 9 Deep Learning Papers You Need To Know About](https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html)
<br>
[CNN Architectures](https://medium.com/@sidereal/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5)
<br>
[Lets Keep It Simple](https://arxiv.org/pdf/1608.06037.pdf)
<br>
[CNN Architectures Keras](https://www.pyimagesearch.com/2017/03/20/imagenet-vggnet-resnet-inception-xception-keras/)
<br>
[Inception Versions](https://towardsdatascience.com/a-simple-guide-to-the-versions-of-the-inception-network-7fc52b863202)
<br>
[DenseNet Review](https://towardsdatascience.com/review-densenet-image-classification-b6631a8ef803)
<br>
[DenseNet](https://towardsdatascience.com/densenet-2810936aeebb)
<br>
[ResNet](http://teleported.in/posts/decoding-resnet-architecture/)
<br>
[ResNet Versions](https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035)
<br>
[Depthwise Convolution](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)
| github_jupyter |
## Probalistic Confirmed COVID19 Cases- Denmark
**Jorge: remember to reexecute the cell with the photo.**
### Table of contents
[Initialization](#Initialization)
[Data Importing and Processing](#Data-Importing-and-Processing)
1. [Kalman Filter Modeling: Case of Denmark Data](#1.-Kalman-Filter-Modeling:-Case-of-Denmark-Data)
1.1. [Model with the vector c fixed as [0, 1]](#1.1.-Kalman-Filter-Model-vector-c-fixed-as-[0,-1])
1.2. [Model with the vector c as a random variable with prior](#1.2.-Kalman-Filter-with-the-vector-c-as-a-random-variable-with-prior)
1.3. [Model without input (2 hidden variables)](#1.3.-Kalman-Filter-without-Input)
2. [Kalman Filter Modeling: Case of Norway Data](#2.-Kalman-Filter-Modeling:-Case-of-Norway-Data)
2.1. [Model with the vector c fixed as [0, 1]](#2.1.-Kalman-Filter-Model-vector-c-fixed-as-[0,-1])
2.2. [Model with the vector c as a random variable with prior](#2.2.-Kalman-Filter-with-the-vector-c-as-a-random-variable-with-prior)
2.3. [Model without input (2 hidden variables)](#2.3.-Kalman-Filter-without-Input)
3. [Kalman Filter Modeling: Case of Sweden Data](#Kalman-Filter-Modeling:-Case-of-Sweden-Data)
3.1. [Model with the vector c fixed as [0, 1]](#3.1.-Kalman-Filter-Model-vector-c-fixed-as-[0,-1])
3.2. [Model with the vector c as a random variable with prior](#3.2.-Kalman-Filter-with-the-vector-c-as-a-random-variable-with-prior)
3.3. [Model without input (2 hidden variables)](#3.3.-Kalman-Filter-without-Input)
## Initialization
```
from os.path import join, pardir
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import numpyro
import numpyro.distributions as dist
import pandas as pd
import seaborn as sns
from jax import lax, random, vmap
from jax.scipy.special import logsumexp
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
from sklearn.preprocessing import StandardScaler
np.random.seed(2103)
ROOT = pardir
DATA = join(ROOT, "data", "raw")
# random seed
np.random.seed(42)
#plot style
plt.style.use('ggplot')
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 10)
```
## Data Importing and Processing
The data in this case are the confirmed COVID-19 cases and the mobility data (from Google) for three specific countries: Denmark, Sweden and Norway.
```
adress = join(ROOT, "data", "processed")
data = pd.read_csv(join(adress, 'data_three_mob_cov.csv'),parse_dates=['Date'])
data.info()
data.head(5)
```
Handy functions to split the data, train the models and plot the results.
```
def split_forecast(df, n_train=65):
    """Split dataframe `df` into mobility inputs, training targets and test targets."""
    # take the first 4 mobility features as inputs
    X = df.iloc[:, 3:7].values.astype(np.float_)
    # confirmed cases are the target series
    y = df.iloc[:, 2].values.astype(np.float_)
    y_train = y[:n_train]
    y_test = y[n_train:]
    return X, y_train, y_test
def train_kf(model, data, n_train, n_test, num_samples=9000, num_warmup=3000, **kwargs):
"""Train a Kalman Filter model."""
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
nuts_kernel = NUTS(model=model)
# burn-in is still too much in comparison with the samples
mcmc = MCMC(
nuts_kernel, num_samples=num_samples, num_warmup=num_warmup, num_chains=1
)
mcmc.run(rng_key_, T=n_train, T_forecast=n_test, obs=data, **kwargs)
return mcmc
def get_samples(mcmc):
"""Get samples from variables in MCMC."""
return {k: v for k, v in mcmc.get_samples().items()}
def plot_samples(hmc_samples, nodes, dist=True):
"""Plot samples from the variables in `nodes`."""
for node in nodes:
if len(hmc_samples[node].shape) > 1:
n_vars = hmc_samples[node].shape[1]
for i in range(n_vars):
plt.figure(figsize=(4, 3))
if dist:
sns.distplot(hmc_samples[node][:, i], label=node + "%d" % i)
else:
plt.plot(hmc_samples[node][:, i], label=node + "%d" % i)
plt.legend()
plt.show()
else:
plt.figure(figsize=(4, 3))
if dist:
sns.distplot(hmc_samples[node], label=node)
else:
plt.plot(hmc_samples[node], label=node)
plt.legend()
plt.show()
def plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test):
"""Plot the results of forecasting (dimension are different)."""
y_hat = hmc_samples["y_pred"].mean(axis=0)
y_std = hmc_samples["y_pred"].std(axis=0)
y_pred_025 = y_hat - 1.96 * y_std
y_pred_975 = y_hat + 1.96 * y_std
plt.plot(idx_train, y_train, "b-")
plt.plot(idx_test, y_test, "bx")
plt.plot(idx_test[:-1], y_hat, "r-")
plt.plot(idx_test[:-1], y_pred_025, "r--")
plt.plot(idx_test[:-1], y_pred_975, "r--")
plt.fill_between(idx_test[:-1], y_pred_025, y_pred_975, alpha=0.3)
plt.legend(
[
"true (train)",
"true (test)",
"forecast",
"forecast + stddev",
"forecast - stddev",
]
)
plt.show()
n_train = 65 # number of points to train
n_test = 20 # number of points to forecast
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, n_train+n_test)]
```
## 1. Kalman Filter Modeling: Case of Denmark Data
```
data_dk=data[data['Country'] == "Denmark"]
data_dk.head(5)
print(f"The length of the full dataset for Denmark is: {len(data_dk)}")
```
Prepare input of the models (we are using numpyro so the inputs are numpy arrays).
```
X, y_train, y_test = split_forecast(data_dk)
```
### 1.1. Kalman Filter Model vector c fixed as [0, 1]
First model: the observation vector $c$ is fixed to $[0, 1]$ instead of being sampled.
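Written out, the state-space model implemented by `f` and `model_wo_c` below is (as I read the code):

$$
z_t = \beta \odot z_{t-1} + W x_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0,\, L L^\top), \qquad y_t \sim \mathcal{N}(c^\top z_t,\, \sigma)
$$

where $\odot$ is element-wise multiplication, $L$ is the lower Cholesky factor built from `tau` and the LKJ prior, and fixing $c = [0, 1]^\top$ means the observed series reads off the second hidden state.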
```
def f(carry, input_t):
x_t, noise_t = input_t
W, beta, z_prev, tau = carry
z_t = beta * z_prev + W @ x_t + noise_t
z_prev = z_t
return (W, beta, z_prev, tau), z_t
def model_wo_c(T, T_forecast, x, obs=None):
"""Define KF with inputs and fixed sampling dist."""
# Define priors over beta, tau, sigma, z_1
W = numpyro.sample(
name="W", fn=dist.Normal(loc=jnp.zeros((2, 4)), scale=jnp.ones((2, 4)))
)
beta = numpyro.sample(
name="beta", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
tau = numpyro.sample(name="tau", fn=dist.HalfCauchy(scale=jnp.ones(2)))
sigma = numpyro.sample(name="sigma", fn=dist.HalfCauchy(scale=0.1))
z_prev = numpyro.sample(
name="z_1", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
# Define LKJ prior
L_Omega = numpyro.sample("L_Omega", dist.LKJCholesky(2, 10.0))
Sigma_lower = jnp.matmul(
jnp.diag(jnp.sqrt(tau)), L_Omega
) # lower cholesky factor of the covariance matrix
noises = numpyro.sample(
"noises",
fn=dist.MultivariateNormal(loc=jnp.zeros(2), scale_tril=Sigma_lower),
sample_shape=(T + T_forecast - 2,),
)
# Propagate the dynamics forward using jax.lax.scan
carry = (W, beta, z_prev, tau)
z_collection = [z_prev]
carry, zs_exp = lax.scan(f, carry, (x, noises), T + T_forecast - 2)
z_collection = jnp.concatenate((jnp.array(z_collection), zs_exp), axis=0)
obs_mean = z_collection[:T, 1]
pred_mean = z_collection[T:, 1]
# Sample the observed y (y_obs)
numpyro.sample(name="y_obs", fn=dist.Normal(loc=obs_mean, scale=sigma), obs=obs)
numpyro.sample(name="y_pred", fn=dist.Normal(loc=pred_mean, scale=sigma), obs=None)
mcmc = train_kf(model_wo_c, y_train, n_train, n_test, x=X[2:])
```
Plots of the distribution of the samples for each variable.
```
hmc_samples = get_samples(mcmc)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
```
Forecast results: all the data points in the test set fall within the confidence interval.
```
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 1.2. Kalman Filter with the vector c as a random variable with prior
Second model: the observation vector $c$ is a random variable with a Normal prior.
```
def model_w_c(T, T_forecast, x, obs=None):
# Define priors over beta, tau, sigma, z_1 (keep the shapes in mind)
W = numpyro.sample(
name="W", fn=dist.Normal(loc=jnp.zeros((2, 4)), scale=jnp.ones((2, 4)))
)
beta = numpyro.sample(
name="beta", fn=dist.Normal(loc=jnp.array([0.0, 0.0]), scale=jnp.ones(2))
)
tau = numpyro.sample(name="tau", fn=dist.HalfCauchy(scale=jnp.array([2,2])))
sigma = numpyro.sample(name="sigma", fn=dist.HalfCauchy(scale=1))
z_prev = numpyro.sample(
name="z_1", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
# Define LKJ prior
L_Omega = numpyro.sample("L_Omega", dist.LKJCholesky(2, 10.0))
Sigma_lower = jnp.matmul(
jnp.diag(jnp.sqrt(tau)), L_Omega
) # lower cholesky factor of the covariance matrix
noises = numpyro.sample(
"noises",
fn=dist.MultivariateNormal(loc=jnp.zeros(2), scale_tril=Sigma_lower),
sample_shape=(T + T_forecast - 2,),
)
# Propagate the dynamics forward using jax.lax.scan
carry = (W, beta, z_prev, tau)
z_collection = [z_prev]
carry, zs_exp = lax.scan(f, carry, (x, noises), T + T_forecast - 2)
z_collection = jnp.concatenate((jnp.array(z_collection), zs_exp), axis=0)
c = numpyro.sample(
name="c", fn=dist.Normal(loc=jnp.array([[0.0], [0.0]]), scale=jnp.ones((2, 1)))
)
obs_mean = jnp.dot(z_collection[:T, :], c).squeeze()
pred_mean = jnp.dot(z_collection[T:, :], c).squeeze()
# Sample the observed y (y_obs)
numpyro.sample(name="y_obs", fn=dist.Normal(loc=obs_mean, scale=sigma), obs=obs)
numpyro.sample(name="y_pred", fn=dist.Normal(loc=pred_mean, scale=sigma), obs=None)
mcmc2 = train_kf(model_w_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc2)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 1.3. Kalman Filter without Input
Third model: no input mobility data, **two** hidden states.
```
def f_s(carry, noise_t):
"""Propagate forward the time series."""
beta, z_prev, tau = carry
z_t = beta * z_prev + noise_t
z_prev = z_t
return (beta, z_prev, tau), z_t
def twoh_c_kf(T, T_forecast, obs=None):
"""Define Kalman Filter with two hidden variates."""
# Define priors over beta, tau, sigma, z_1
# W = numpyro.sample(name="W", fn=dist.Normal(loc=jnp.zeros((2,4)), scale=jnp.ones((2,4))))
beta = numpyro.sample(
name="beta", fn=dist.Normal(loc=jnp.array([0.0, 0.0]), scale=jnp.ones(2))
)
tau = numpyro.sample(name="tau", fn=dist.HalfCauchy(scale=jnp.array([10,10])))
sigma = numpyro.sample(name="sigma", fn=dist.HalfCauchy(scale=5))
z_prev = numpyro.sample(
name="z_1", fn=dist.Normal(loc=jnp.zeros(2), scale=jnp.ones(2))
)
# Define LKJ prior
L_Omega = numpyro.sample("L_Omega", dist.LKJCholesky(2, 10.0))
Sigma_lower = jnp.matmul(
jnp.diag(jnp.sqrt(tau)), L_Omega
) # lower cholesky factor of the covariance matrix
noises = numpyro.sample(
"noises",
fn=dist.MultivariateNormal(loc=jnp.zeros(2), scale_tril=Sigma_lower),
sample_shape=(T + T_forecast - 2,),
)
# Propagate the dynamics forward using jax.lax.scan
carry = (beta, z_prev, tau)
z_collection = [z_prev]
carry, zs_exp = lax.scan(f_s, carry, noises, T + T_forecast - 2)
z_collection = jnp.concatenate((jnp.array(z_collection), zs_exp), axis=0)
c = numpyro.sample(
name="c", fn=dist.Normal(loc=jnp.array([[0.0], [0.0]]), scale=jnp.ones((2, 1)))
)
obs_mean = jnp.dot(z_collection[:T, :], c).squeeze()
pred_mean = jnp.dot(z_collection[T:, :], c).squeeze()
# Sample the observed y (y_obs)
numpyro.sample(name="y_obs", fn=dist.Normal(loc=obs_mean, scale=sigma), obs=obs)
numpyro.sample(name="y_pred", fn=dist.Normal(loc=pred_mean, scale=sigma), obs=None)
mcmc3 = train_kf(twoh_c_kf, y_train, n_train, n_test, num_samples=12000, num_warmup=5000)
hmc_samples = get_samples(mcmc3)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
## 2. Kalman Filter Modeling: Case of Norway Data
```
data_no=data[data['Country'] == "Norway"]
data_no.head(5)
print(f"The length of the full dataset for Norway is: {len(data_no)}")
n_train = 66 # number of points to train
n_test = 20 # number of points to forecast
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, n_train+n_test)]
X, y_train, y_test = split_forecast(data_no, n_train)
```
### 2.1. Kalman Filter Model vector c fixed as [0, 1]
```
mcmc_no = train_kf(model_wo_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc_no)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 2.2. Kalman Filter with the vector c as a random variable with prior
```
mcmc2_no = train_kf(model_w_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc2_no)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 2.3. Kalman Filter without Input
```
mcmc3_no = train_kf(twoh_c_kf, y_train, n_train, n_test)
hmc_samples = get_samples(mcmc3_no)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
## 3. Kalman Filter Modeling: Case of Sweden Data
```
data_sw=data[data['Country'] == "Sweden"]
data_sw.head(5)
print(f"The length of the full dataset for Sweden is: {len(data_sw)}")
n_train = 75 # number of points to train
n_test = 22 # number of points to forecast
idx_train = [*range(0,n_train)]
idx_test = [*range(n_train, n_train+n_test)]
X, y_train, y_test = split_forecast(data_sw, n_train)
```
### 3.1. Kalman Filter Model vector c fixed as [0, 1]
```
mcmc_sw = train_kf(model_wo_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc_sw)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 3.2. Kalman Filter with the vector c as a random variable with prior
```
mcmc2_sw = train_kf(model_w_c, y_train, n_train, n_test, x=X[:-2])
hmc_samples = get_samples(mcmc2_sw)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
### 3.3. Kalman Filter without Input
```
mcmc3_sw = train_kf(twoh_c_kf, y_train, n_train, n_test)
hmc_samples = get_samples(mcmc3_sw)
plot_samples(hmc_samples, ["beta", "tau", "sigma"])
plot_forecast(hmc_samples, idx_train, idx_test, y_train, y_test)
```
Save results to rerun the plotting functions.
```
import pickle
MODELS = join(ROOT, "models")
for i, mc in enumerate([mcmc3_no, mcmc_sw, mcmc2_sw, mcmc3_sw]):
with open(join(MODELS, f"hmc_ok_{i}.pickle"), "wb") as f:
pickle.dump(get_samples(mc),f)
```
## Gaussian Process
| github_jupyter |
<a href="https://colab.research.google.com/github/Janani-harshu/Machine_Learning_Projects/blob/main/Covid19_death_prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Covid-19 is one of the deadliest viruses you have ever heard of. Mutations in covid-19 make it either more deadly or more infectious, and we have seen a large number of deaths from covid-19 during the higher waves of cases. We can use historical data on covid-19 cases and deaths to predict the number of deaths in the future.
In this notebook, I will take you through the task of Covid-19 deaths prediction with machine learning using Python.
## Covid-19 Deaths Prediction (Case Study)
You are given a dataset of Covid-19 in India from 30 January 2020 to 18 January 2022. The dataset contains information about the daily confirmed cases and deaths. Below are all the columns of the dataset:
- Date: contains the date of the record
- Date_YMD: contains the date in Year-Month-Day format
- Daily Confirmed: contains the daily confirmed cases of Covid-19
- Daily Deceased: contains the daily deaths due to Covid-19

You need to use this historical data of covid-19 cases and deaths to predict the number of deaths for the next 30 days.
```
# Importing the libraries
import pandas as pd
import numpy as np
data = pd.read_csv("COVID19 data for overall INDIA.csv")
print(data.head())
data.isnull().sum()
data = data.drop("Date", axis=1)
# Daily confirmed cases of Covid-19
import plotly.express as px
fig = px.box(data, x='Date_YMD', y='Daily Confirmed')
fig.show()
```
## Covid-19 Death Rate Analysis
Now let’s visualize the death rate due to Covid-19:
```
cases = data["Daily Confirmed"].sum()
deceased = data["Daily Deceased"].sum()
labels = ["Confirmed", "Deceased"]
values = [cases, deceased]
fig = px.funnel_area(data, values=values,
names=labels,
title='Daily Confirmed Cases vs Daily Deaths')
fig.show()
# calculate the death rate of Covid-19:
death_rate = (data["Daily Deceased"].sum() / data["Daily Confirmed"].sum()) * 100
print(death_rate)
# daily deaths of covid-19:
import plotly.express as px
fig = px.box(data, x='Date_YMD', y='Daily Deceased')
fig.show()
!pip install AutoTS
```
## Covid-19 Deaths Prediction Model
Now let’s move to the task of covid-19 deaths prediction for the next 30 days. Here I will be using the AutoTS library, which is one of the best Automatic Machine Learning libraries for Time Series Analysis.
```
from autots import AutoTS
model = AutoTS(forecast_length=30, frequency='infer', ensemble='simple')
model = model.fit(data, date_col="Date_YMD", value_col='Daily Deceased', id_col=None)
# Predict covid-19 deaths with machine learning for the next 30 days:
prediction = model.predict()
forecast = prediction.forecast
print(forecast)
```
So this is how we can predict covid-19 deaths with machine learning using the Python programming language. We can use the historical data of covid-19 cases and deaths to predict the number of deaths in the future. You can apply the same method to the latest dataset to predict covid-19 deaths and waves.
| github_jupyter |
## Dependencies
```
import json, warnings, shutil, glob
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
pd.set_option('display.float_format', lambda x: '%.4f' % x)
```
## TPU configuration
```
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Load data
```
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-192-ratio-1-clean-polish/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
usecols=['comment_text', 'toxic', 'lang'])
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())
base_data_path = 'fold_1/'
fold_n = 1
# Unzip files
!tar -xf /kaggle/input/jigsaw-data-split-roberta-192-ratio-1-clean-polish/fold_1.tar.gz
```
# Model parameters
```
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 128,
"EPOCHS": 3,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": None,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
```
## Learning rate schedule
```
lr_min = 1e-7
lr_start = 0
lr_max = config['LEARNING_RATE']
step_size = (len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) * 2) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 1
decay = .9998
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps, hold_max_steps,
lr_start, lr_max, lr_min, decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
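`exponential_schedule_with_warmup` is imported from the external `scripts_step_lr_schedulers` module, so its body is not shown here. A minimal sketch of what such a schedule typically computes (an assumption about the actual implementation, not a copy of it) is:

```python
def exp_schedule_with_warmup(step, warmup_steps, hold_max_steps,
                             lr_start, lr_max, lr_min, decay):
    """Sketch: linear warm-up to lr_max, optional hold, then exponential
    decay floored at lr_min. Assumed shape, not the imported implementation."""
    if step < warmup_steps:
        # linear ramp from lr_start up to lr_max
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    if step < warmup_steps + hold_max_steps:
        # hold at the peak learning rate
        return lr_max
    # exponential decay after warm-up and hold
    return max(lr_min, lr_max * decay ** (step - warmup_steps - hold_max_steps))
```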
# Model
```
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
cls_token = last_hidden_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=[input_ids, attention_mask], outputs=output)
return model
```
# Train
```
# Load data
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train_int.npy').reshape(x_train.shape[1], 1).astype(np.float32)
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid_int.npy').reshape(x_valid.shape[1], 1).astype(np.float32)
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32)
#################### ADD TAIL ####################
x_train_tail = np.load(base_data_path + 'x_train_tail.npy')
y_train_tail = np.load(base_data_path + 'y_train_int_tail.npy').reshape(x_train_tail.shape[1], 1).astype(np.float32)
x_train = np.hstack([x_train, x_train_tail])
y_train = np.vstack([y_train, y_train_tail])
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE']
valid_2_step_size = x_valid.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
valid_2_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)
valid_2_data_iter = iter(valid_2_dist_ds)
# Step functions
@tf.function
def train_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(step_size):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_auc.update_state(y, probabilities)
valid_loss.update_state(loss)
for _ in tf.range(valid_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
@tf.function
def valid_2_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_2_auc.update_state(y, probabilities)
valid_2_loss.update_state(loss)
for _ in tf.range(valid_2_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
# Train model
with strategy.scope():
model = model_fn(config['MAX_LEN'])
lr = lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, decay=decay)
optimizer = optimizers.Adam(learning_rate=lr)
loss_fn = losses.binary_crossentropy
train_auc = metrics.AUC()
valid_auc = metrics.AUC()
valid_2_auc = metrics.AUC()
train_loss = metrics.Sum()
valid_loss = metrics.Sum()
valid_2_loss = metrics.Sum()
metrics_dict = {'loss': train_loss, 'auc': train_auc,
'val_loss': valid_loss, 'val_auc': valid_auc,
'val_2_loss': valid_2_loss, 'val_2_auc': valid_2_auc}
history = custom_fit_2(model, metrics_dict, train_step, valid_step, valid_2_step, train_data_iter,
valid_data_iter, valid_2_data_iter, step_size, valid_step_size, valid_2_step_size,
config['BATCH_SIZE'], config['EPOCHS'], config['ES_PATIENCE'], save_last=False)
# model.save_weights('model.h5')
# Make predictions
# x_train = np.load(base_data_path + 'x_train.npy')
# x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy')
# train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
# valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
# k_fold.loc[k_fold[f'fold_{fold_n}'] == 'train', f'pred_{fold_n}'] = np.round(train_preds)
# k_fold.loc[k_fold[f'fold_{fold_n}'] == 'validation', f'pred_{fold_n}'] = np.round(valid_preds)
valid_df[f'pred_{fold_n}'] = valid_ml_preds
# Fine-tune on validation set
#################### ADD TAIL ####################
x_valid_ml_tail = np.vstack([x_valid_ml, np.load(database_base_path + 'x_valid_tail.npy')])  # stack tail-tokenized copies as extra rows
y_valid_ml_tail = np.vstack([y_valid_ml, y_valid_ml])  # labels repeat for the tail copies
valid_step_size_tail = x_valid_ml_tail.shape[0] // config['BATCH_SIZE']
# Build TF datasets
train_ml_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_valid_ml_tail, y_valid_ml_tail, config['BATCH_SIZE'], AUTO, seed=SEED))
train_ml_data_iter = iter(train_ml_dist_ds)
# Step functions
@tf.function
def train_ml_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(valid_step_size_tail):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
# Fine-tune on validation set
history_ml = custom_fit_2(model, metrics_dict, train_ml_step, valid_step, valid_2_step, train_ml_data_iter,
valid_data_iter, valid_2_data_iter, valid_step_size_tail, valid_step_size, valid_2_step_size,
config['BATCH_SIZE'], 2, config['ES_PATIENCE'], save_last=False)
# Join history
for key in history_ml.keys():
history[key] += history_ml[key]
model.save_weights('model.h5')
# Make predictions
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
valid_df[f'pred_ml_{fold_n}'] = valid_ml_preds
### Delete data dir
shutil.rmtree(base_data_path)
```
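The learning-rate lambda above delegates to `exponential_schedule_with_warmup`, which is defined earlier in the notebook and not shown here. As a rough, hypothetical sketch of what such a warmup-then-decay schedule typically computes (the parameter names mirror the call above, but the exact formula is an assumption, not the notebook's implementation):

```python
def warmup_then_exponential(step, warmup_steps, lr_start, lr_max, decay):
    """Illustrative warmup + exponential-decay schedule.

    Assumption: linear warmup from lr_start to lr_max over warmup_steps,
    then multiply lr_max by `decay` for every step past the warmup.
    """
    if step < warmup_steps:
        # Linearly interpolate between lr_start and lr_max during warmup
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    # After warmup, decay exponentially from lr_max
    return lr_max * decay ** (step - warmup_steps)

# The schedule rises during warmup and falls afterwards
lrs = [warmup_then_exponential(s, warmup_steps=10, lr_start=1e-6,
                               lr_max=1e-4, decay=0.9) for s in range(30)]
print(lrs[0], lrs[10], lrs[20])
```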
## Model loss graph
```
plot_metrics_2(history)
```
# Model evaluation
```
# display(evaluate_model_single_fold(k_fold, fold_n, label_col='toxic_int').style.applymap(color_map))
```
# Confusion matrix
```
# train_set = k_fold[k_fold[f'fold_{fold_n}'] == 'train']
# validation_set = k_fold[k_fold[f'fold_{fold_n}'] == 'validation']
# plot_confusion_matrix(train_set['toxic_int'], train_set[f'pred_{fold_n}'],
# validation_set['toxic_int'], validation_set[f'pred_{fold_n}'])
```
# Model evaluation by language
```
display(evaluate_model_single_fold_lang(valid_df, fold_n).style.applymap(color_map))
# ML fine-tuned preds
display(evaluate_model_single_fold_lang(valid_df, fold_n, pred_col='pred_ml').style.applymap(color_map))
```
# Visualize predictions
```
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
```
# Test set predictions
```
x_test = np.load(database_base_path + 'x_test.npy')
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
display(submission.describe())
display(submission.head(10))
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Point, Polygon
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
import zipfile
import requests
import os
import shutil
from downloading_funcs import addr_shape, down_extract_zip
from supp_funcs import *
import lnks
import warnings #DANGER: I triggered a ton of warnings.
warnings.filterwarnings('ignore')
import geopandas as gpd
%matplotlib inline
#Load the BBL list
BBL12_17CSV = ['https://hub.arcgis.com/datasets/82ab09c9541b4eb8ba4b537e131998ce_22.csv', 'https://hub.arcgis.com/datasets/4c4d6b4defdf4561b737a594b6f2b0dd_23.csv', 'https://hub.arcgis.com/datasets/d7aa6d3a3fdc42c4b354b9e90da443b7_1.csv', 'https://hub.arcgis.com/datasets/a8434614d90e416b80fbdfe2cb2901d8_2.csv', 'https://hub.arcgis.com/datasets/714d5f8b06914b8596b34b181439e702_36.csv', 'https://hub.arcgis.com/datasets/c4368a66ce65455595a211d530facc54_3.csv',]
def data_pipeline(shapetype, bbl_links, supplement=None,
dex=None, ts_lst_range=None):
#A pipeline for group_e dataframe operations
#Test inputs --------------------------------------------------------------
if supplement:
assert isinstance(supplement, list)
assert isinstance(bbl_links, list)
if ts_lst_range:
assert isinstance(ts_lst_range, list)
assert len(ts_lst_range) == 2 #Must be list of format [start-yr, end-yr]
#We'll need our addresspoints and our shapefile
if not dex:
dex = addr_shape(shapetype)
#We need a list of time_unit_of_analysis
if ts_lst_range:
ts_lst = [x+(i/100) for i in range(1,13,1) for x in range(1980, 2025)]
ts_lst = [x for x in ts_lst if
x >= ts_lst_range[0] and x <= ts_lst_range[1]]
ts_lst = sorted(ts_lst)
if not ts_lst_range:
ts_lst = [x+(i/100) for i in range(1,13,1) for x in range(2012, 2017)]
ts_lst = sorted(ts_lst)
#Now we need to stack our BBL data ----------------------------------------
#Begin by forming an empty DF
bbl_df = pd.DataFrame()
    for i in list(range(2012, 2018)):
        bblpth = './data/bbls/Basic_Business_License_in_'+str(i)+'.csv' #Messy hack
        #TODO: generalize bblpth above
        bbl = pd.read_csv(bblpth, low_memory=False)
        if i == 2012:
            col_len = len(bbl.columns) #Column count of the first year, for the check below
        if len(bbl.columns) != col_len:
            print('Column Mismatch!')
        bbl_df = bbl_df.append(bbl)
        del bbl
bbl_df.LICENSE_START_DATE = pd.to_datetime(
bbl_df.LICENSE_START_DATE)
bbl_df.LICENSE_EXPIRATION_DATE = pd.to_datetime(
bbl_df.LICENSE_EXPIRATION_DATE)
bbl_df.LICENSE_ISSUE_DATE = pd.to_datetime(
bbl_df.LICENSE_ISSUE_DATE)
    bbl_df = bbl_df.sort_values('LICENSE_START_DATE')
#Set up our time unit of analysis
bbl_df['month'] = 0
bbl_df['endMonth'] = 0
bbl_df['issueMonth'] = 0
bbl_df['month'] = bbl_df['LICENSE_START_DATE'].dt.year + (
bbl_df['LICENSE_START_DATE'].dt.month/100
)
bbl_df['endMonth'] = bbl_df['LICENSE_EXPIRATION_DATE'].dt.year + (
bbl_df['LICENSE_EXPIRATION_DATE'].dt.month/100
)
bbl_df['issueMonth'] = bbl_df['LICENSE_ISSUE_DATE'].dt.year + (
bbl_df['LICENSE_ISSUE_DATE'].dt.month/100
)
    bbl_df.endMonth = bbl_df.endMonth.fillna(max(ts_lst))
    bbl_df.loc[bbl_df['endMonth'] > max(ts_lst), 'endMonth'] = max(ts_lst)
#Sort on month
bbl_df = bbl_df.dropna(subset=['month'])
bbl_df = bbl_df.set_index(['MARADDRESSREPOSITORYID','month'])
bbl_df = bbl_df.sort_index(ascending=True)
bbl_df.reset_index(inplace=True)
bbl_df = bbl_df[bbl_df['MARADDRESSREPOSITORYID'] >= 0]
bbl_df = bbl_df.dropna(subset=['LICENSESTATUS', 'issueMonth', 'endMonth',
'MARADDRESSREPOSITORYID','month',
'LONGITUDE', 'LATITUDE'
])
#Now that we have the BBL data, let's create our flag and points data -----
#This is the addresspoints, passed from the dex param
addr_df = dex[0]
#Zip the latlongs
addr_df['geometry'] = [
Point(xy) for xy in zip(
addr_df.LONGITUDE.apply(float), addr_df.LATITUDE.apply(float)
)
]
addr_df['Points'] = addr_df['geometry'] #Duplicate, so raw retains points
addr_df['dummy_counter'] = 1 #Always one, always dropped before export
crs='EPSG:4326' #Convenience assignment of crs
#Now we're stacking for each month ----------------------------------------
out_gdf = pd.DataFrame() #Empty storage df
for i in ts_lst: #iterate through the list of months
print('Month '+ str(i))
strmfile_pth = str(
'./data/strm_file/' + str(i) +'_' + shapetype + '.csv')
if os.path.exists(strmfile_pth):
print('Skipping, ' + str(i) + ' stream file path already exists:')
print(strmfile_pth)
continue
#dex[1] is the designated shapefile passed from the dex param,
#and should match the shapetype defined in that param
#Copy of the dex[1] shapefile
shp_gdf = dex[1]
#Active BBL in month i
        bbl_df['inRange'] = 0
        bbl_df.loc[(bbl_df.endMonth > i) & (bbl_df.month <= i), 'inRange'] = 1
        #Issued BBL in month i
        bbl_df['isuFlag'] = 0
        bbl_df.loc[bbl_df.issueMonth == i, 'isuFlag'] = 1
#Merge BBL and MAR datasets -------------------------------------------
addr = pd.merge(addr_df, bbl_df, how='left',
left_on='ADDRESS_ID', right_on='MARADDRESSREPOSITORYID')
addr = gpd.GeoDataFrame(addr, crs=crs, geometry=addr.geometry)
shp_gdf.crs = addr.crs
raw = gpd.sjoin(shp_gdf, addr, how='left', op='intersects')
#A simple percent of buildings with active flags per shape,
#and call it a 'utilization index'
        sums = raw.groupby('NAME').sum()
        numer = sums.inRange
        denom = sums.dummy_counter
        issue = sums.isuFlag
flags = []
utl_inx = pd.DataFrame(numer/denom)
utl_inx.columns = [
'Util_Indx_BBL'
]
flags.append(utl_inx)
#This is number of buildings with an active BBL in month i
bbl_count = pd.DataFrame(numer)
bbl_count.columns = [
'countBBL'
]
flags.append(bbl_count)
#This is number of buildings that were issued a BBL in month i
isu_count = pd.DataFrame(issue)
isu_count.columns = [
'countIssued'
]
flags.append(isu_count)
for flag in flags:
flag.crs = shp_gdf.crs
shp_gdf = shp_gdf.merge(flag,
how="left", left_on='NAME', right_index=True)
shp_gdf['month'] = i
#Head will be the list of retained columns
head = ['NAME', 'Util_Indx_BBL',
'countBBL', 'countIssued',
'month', 'geometry']
shp_gdf = shp_gdf[head]
print('Merging...')
if supplement: #this is where your code will be fed into the pipeline.
#To include time unit of analysis, pass 'i=i' as the last
#item in your args list over on lnks.py, and the for-loop
#will catch that. Else, it will pass your last item as an arg.
#Ping CDL if you need to pass a func with more args and we
#can extend this.
for supp_func in supplement:
if len(supp_func) == 2:
if supp_func[1] == 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, i=i)
if supp_func[1] != 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1])
if len(supp_func) == 3:
if supp_func[2] == 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1], i=i)
if supp_func[2] != 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2])
if len(supp_func) == 4:
if supp_func[3] == 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2], i=i)
if supp_func[3] != 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2], supp_func[3])
print(str(supp_func[0]) + ' is done.')
if not os.path.exists(strmfile_pth):
shp_gdf = shp_gdf.drop('geometry', axis=1)
#Save, also verify re-read works
shp_gdf.to_csv(strmfile_pth, encoding='utf-8', index=False)
shp_gdf = pd.read_csv(strmfile_pth, encoding='utf-8',
engine='python')
del shp_gdf, addr, utl_inx, numer, denom, issue, raw #Save me some memory please!
#if i != 2016.12:
# del raw
print('Merged month:', i)
print()
#Done iterating through months here....
pth = './data/strm_file/' #path of the streamfiles
for file in os.listdir(pth):
try:
filepth = str(os.path.join(pth, file))
print([os.path.getsize(filepth), filepth])
fl = pd.read_csv(filepth, encoding='utf-8', engine='python') #read the stream file
out_gdf = out_gdf.append(fl) #This does the stacking
del fl
except IsADirectoryError:
continue
out_gdf.to_csv('./data/' + shapetype + '_out.csv') #Save
#shutil.rmtree('./data/strm_file/')
print('Done!')
return [bbl_df, addr_df, out_gdf] #Remove this later, for testing now
dex = addr_shape('anc')
sets = data_pipeline('anc', BBL12_17CSV, supplement=lnks.supplm, dex=dex, ts_lst_range=None)
sets[2].columns #Our number of rows equals our number of shapes * number of months
```
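The pipeline above encodes a month as `year + month/100` (so March 2014 becomes 2014.03), which makes the month codes sort chronologically as plain floats. A minimal standalone sketch of that encoding and of building the month list:

```python
def month_code(year, month):
    # Encode e.g. (2014, 3) as 2014.03 so codes sort chronologically
    return year + month / 100

def month_range(start_year, end_year):
    # All month codes for the half-open year range [start_year, end_year)
    return sorted(month_code(y, m)
                  for y in range(start_year, end_year)
                  for m in range(1, 13))

codes = month_range(2012, 2017)
print(len(codes), codes[0], codes[-1])
```

One caveat: the pipeline compares these float codes with `==` (e.g. `issueMonth == i`). That works in practice because both sides are built with the same `year + month/100` arithmetic, but an integer code such as `year*100 + month` would sidestep float comparison entirely.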
# Important installation
This notebook requires unusual packages: `LightGBM`, `SHAP` and `LIME`.
For installation, do:
`conda install lightgbm lime shap`
## Initial classical imports
```
import os
import numpy as np
import pandas as pd
import warnings
warnings.simplefilter(action='ignore', category=Warning)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
# Read in data
The stock data first:
```
dfs = {}
for ticker in ['BLK', 'GS', 'MS']:
dfs[ticker] = pd.read_pickle('stock_{}.pickle'.format(ticker)).set_index('Date')
```
The media data:
```
df_media = pd.read_csv('MediaAttention_Mini.csv', parse_dates=[0], index_col='Time')
df_media.info()
import glob
df_media_long = pd.DataFrame()
for fle in glob.glob('MediaAttentionLong*.csv'):
dummy = pd.read_csv(fle, parse_dates=[0], index_col='Time')
df_media_long = pd.concat([df_media_long,dummy])
df_media_long = df_media_long['2007':'2013']
df_media_long = df_media_long.resample('1D').sum()
df_media_long.shape
#df_media_long= pd.read_csv('MediaAttentionLong_2008.csv', parse_dates=[0], index_col='Time')
df_media_long.columns = ['{}_Long'.format(c) for c in df_media_long.columns]
df_media_long.loc['2008',:].info()
for ticker in dfs:
dfs[ticker] = dfs[ticker].merge(df_media, how='inner', left_index=True, right_index=True)
dfs[ticker] = dfs[ticker].merge(df_media_long, how='inner', left_index=True, right_index=True)
dfs['BLK'].head(10)
```
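The merges above join each ticker's price frame with the media frames on their date indexes using `how='inner'`, so only dates present in both frames survive. A toy illustration of that alignment (invented data, not the notebook's files):

```python
import pandas as pd

prices = pd.DataFrame({'Close': [10.0, 11.0, 12.0]},
                      index=pd.to_datetime(['2008-01-02', '2008-01-03', '2008-01-04']))
media = pd.DataFrame({'mentions': [5, 7]},
                     index=pd.to_datetime(['2008-01-03', '2008-01-04']))

# Inner merge on the indexes keeps only the dates present in both frames
merged = prices.merge(media, how='inner', left_index=True, right_index=True)
print(merged.shape)
```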
# Model
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score, make_scorer
from sklearn.model_selection import cross_val_score, KFold
model = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=314, n_jobs=4)
for ticker in dfs:
X = dfs[ticker].drop(['Label', 'Close'], axis=1).fillna(-1)
y = dfs[ticker].loc[:,'Label']
scores = cross_val_score(model, X, y,
scoring=make_scorer(accuracy_score),
cv=KFold(10, shuffle=True, random_state=314),
n_jobs=1
)
print('{} prediction performance in accuracy = {:.3f}+-{:.3f}'.format(ticker,
np.mean(scores),
np.std(scores)
))
plt.figure(figsize=(15,12))
sns.heatmap(dfs['BLK'].corr(), vmin=-0.2, vmax=0.2)
```
### Train a model on all data
```
df = pd.concat(list(dfs.values()), axis=0)
X = df.drop(['Label', 'Close'], axis=1).fillna(-1)
y = df.loc[:,'Label']
scores = cross_val_score(model, X, y,
scoring=make_scorer(accuracy_score),
cv=KFold(10, shuffle=True, random_state=314),
n_jobs=1
)
print('{} prediction performance in accuracy = {:.3f}+-{:.3f}'.format('ALL',
np.mean(scores),
np.std(scores)
))
scores
mdl = model.fit(X,y)
import shap
shap.initjs()
explainer=shap.TreeExplainer(mdl)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")
```
## LightGBM
```
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
from sklearn.base import clone, ClassifierMixin, RegressorMixin
import lightgbm as lgb
def train_single_model(clf_, X_, y_, random_state_=314, opt_parameters_={}, fit_params_={}):
'''
A wrapper to train a model with particular parameters
'''
c = clone(clf_)
c.set_params(**opt_parameters_)
c.set_params(random_state=random_state_)
return c.fit(X_, y_, **fit_params_)
def train_model_in_CV(model, X, y, metric, metric_args={},
model_name='xmodel',
seed=31416, n=5,
opt_parameters_={}, fit_params_={},
verbose=True,
groups=None, y_eval=None):
    # the list of classifiers for voting ensemble
clfs = []
# performance
perf_eval = {'score_i_oof': 0,
'score_i_ave': 0,
'score_i_std': 0,
'score_i': []
}
# full-sample oof prediction
y_full_oof = pd.Series(np.zeros(shape=(y.shape[0],)),
index=y.index)
sample_weight=None
if 'sample_weight' in metric_args:
sample_weight=metric_args['sample_weight']
index_weight=None
if 'index_weight' in metric_args:
index_weight=metric_args['index_weight']
del metric_args['index_weight']
doSqrt=False
if 'sqrt' in metric_args:
doSqrt=True
del metric_args['sqrt']
if groups is None:
cv = KFold(n, shuffle=True, random_state=seed) #Stratified
else:
cv = GroupKFold(n)
# The out-of-fold (oof) prediction for the k-1 sample in the outer CV loop
y_oof = pd.Series(np.zeros(shape=(X.shape[0],)),
index=X.index)
scores = []
clfs = []
for n_fold, (trn_idx, val_idx) in enumerate(cv.split(X, (y!=0).astype(np.int8), groups=groups)):
X_trn, y_trn = X.iloc[trn_idx], y.iloc[trn_idx]
X_val, y_val = X.iloc[val_idx], y.iloc[val_idx]
if 'LGBMRanker' in type(model).__name__ and groups is not None:
G_trn, G_val = groups.iloc[trn_idx], groups.iloc[val_idx]
if fit_params_:
# use _stp data for early stopping
fit_params_["eval_set"] = [(X_trn,y_trn), (X_val,y_val)]
fit_params_['verbose'] = verbose
if index_weight is not None:
fit_params_["sample_weight"] = y_trn.index.map(index_weight).values
fit_params_["eval_sample_weight"] = [None, y_val.index.map(index_weight).values]
if 'LGBMRanker' in type(model).__name__ and groups is not None:
fit_params_['group'] = G_trn.groupby(G_trn, sort=False).count()
fit_params_['eval_group'] = [G_trn.groupby(G_trn, sort=False).count(),
G_val.groupby(G_val, sort=False).count()]
#display(y_trn.head())
clf = train_single_model(model, X_trn, y_trn, 314+n_fold, opt_parameters_, fit_params_)
clfs.append(('{}{}'.format(model_name,n_fold), clf))
# oof predictions
if isinstance(clf, RegressorMixin):
y_oof.iloc[val_idx] = clf.predict(X_val)
elif isinstance(clf, ClassifierMixin) and metric.__name__=='roc_auc_score':
y_oof.iloc[val_idx] = clf.predict_proba(X_val)[:,1]
else:
y_oof.iloc[val_idx] = clf.predict(X_val)
# prepare weights for evaluation
if sample_weight is not None:
metric_args['sample_weight'] = y_val.map(sample_weight)
elif index_weight is not None:
metric_args['sample_weight'] = y_val.index.map(index_weight).values
# prepare target values
        y_true_tmp = y_val if 'LGBMRanker' not in type(model).__name__ and y_eval is None else y_eval.iloc[val_idx]
        y_pred_tmp = y_oof.iloc[val_idx]
#store evaluated metric
scores.append(metric(y_true_tmp, y_pred_tmp, **metric_args))
#cleanup
del X_trn, y_trn, X_val, y_val, y_true_tmp, y_pred_tmp
# Store performance info for this CV
if sample_weight is not None:
metric_args['sample_weight'] = y_oof.map(sample_weight)
elif index_weight is not None:
metric_args['sample_weight'] = y_oof.index.map(index_weight).values
perf_eval['score_i_oof'] = metric(y, y_oof, **metric_args)
perf_eval['score_i'] = scores
if doSqrt:
for k in perf_eval.keys():
if 'score' in k:
perf_eval[k] = np.sqrt(perf_eval[k])
scores = np.sqrt(scores)
perf_eval['score_i_ave'] = np.mean(scores)
perf_eval['score_i_std'] = np.std(scores)
return clfs, perf_eval, y_oof
def print_perf_clf(name, perf_eval):
print('Performance of the model:')
print('Mean(Val) score inner {} Classifier: {:.4f}+-{:.4f}'.format(name,
perf_eval['score_i_ave'],
perf_eval['score_i_std']
))
print('Min/max scores on folds: {:.4f} / {:.4f}'.format(np.min(perf_eval['score_i']),
np.max(perf_eval['score_i'])))
print('OOF score inner {} Classifier: {:.4f}'.format(name, perf_eval['score_i_oof']))
print('Scores in individual folds: {}'.format(perf_eval['score_i']))
mdl_inputs = {
'lgbm1_reg': (lgb.LGBMClassifier(max_depth=-1, min_child_samples=400, random_state=314, silent=True, metric='None',
n_jobs=4, n_estimators=1000, learning_rate=0.1),
{'colsample_bytree': 0.9, 'min_child_weight': 10.0, 'num_leaves': 20, 'reg_alpha': 1, 'subsample': 0.9},
{"early_stopping_rounds":20,
"eval_metric" : 'binary_logloss',
'eval_names': ['train', 'early_stop'],
'verbose': False,
#'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_decay_power)],
},#'categorical_feature': 'auto'},
y,
None
),
'rf1': (
RandomForestClassifier(n_estimators=100, max_depth=4, random_state=314, n_jobs=4),
{},
{},
y,
None
)
}
%%time
mdls = {}
results = {}
y_oofs = {}
for name, (mdl, mdl_pars, fit_pars, y_, g_) in mdl_inputs.items():
print('--------------- {} -----------'.format(name))
mdl_, perf_eval_, y_oof_ = train_model_in_CV(mdl, X, y_, accuracy_score,
metric_args={},
model_name=name,
opt_parameters_=mdl_pars,
fit_params_=fit_pars,
n=10, seed=3146,
verbose=500,
groups=g_)
results[name] = perf_eval_
mdls[name] = mdl_
y_oofs[name] = y_oof_
print_perf_clf(name, perf_eval_)
```
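`train_model_in_CV` accumulates *out-of-fold* (OOF) predictions: each sample is predicted exactly once, by the fold model that never trained on it. Stripped of the modeling details, the bookkeeping looks like this (NumPy-only sketch, with a stand-in "model" that simply predicts the training-fold mean):

```python
import numpy as np

def kfold_indices(n_samples, n_folds, seed=314):
    # Shuffle indices, then split into n_folds roughly equal chunks
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_folds)

y = np.arange(20, dtype=float)
oof = np.full_like(y, np.nan)
for val_idx in kfold_indices(len(y), n_folds=4):
    trn_idx = np.setdiff1d(np.arange(len(y)), val_idx)
    # Stand-in "model": predict the training-fold mean for every val sample
    oof[val_idx] = y[trn_idx].mean()

# Every sample received exactly one out-of-fold prediction
assert not np.isnan(oof).any()
```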
# Train LGBM model on a simple train/val/test split (70/15/15)
```
from sklearn.model_selection import train_test_split
X_1, X_tst, y_1, y_tst = train_test_split(X, y, test_size=0.15, shuffle=True, random_state=314)
X_trn, X_val, y_trn, y_val = train_test_split(X_1, y_1, test_size=0.15, shuffle=True, random_state=31)
mdl = lgb.LGBMClassifier(max_depth=-1, min_child_samples=400, random_state=314,
silent=True, metric='None',
n_jobs=4, n_estimators=1000, learning_rate=0.1,
**{'colsample_bytree': 0.9, 'min_child_weight': 10.0, 'num_leaves': 20, 'reg_alpha': 1, 'subsample': 0.9}
)
mdl.fit(X_trn, y_trn,
**{"early_stopping_rounds":20,
"eval_metric" : 'binary_logloss',
'eval_set': [(X_trn, y_trn), (X_val, y_val)],
'eval_names': ['train', 'early_stop'],
'verbose': 100
})
print('Accuracy score on train/validation/test samples is: {:.3f}/{:.3f}/{:.3f}'.format(accuracy_score(y_trn, mdl.predict(X_trn)),
accuracy_score(y_val, mdl.predict(X_val)),
accuracy_score(y_tst, mdl.predict(X_tst))
))
```
## LGBM model explanation
### SHAP
```
explainer=shap.TreeExplainer(mdl)
shap_values = explainer.shap_values(X_tst)
shap.summary_plot(shap_values, X_tst, plot_type="bar")
```
_To understand how a single feature affects the output of the model, we can plot **the SHAP value of that feature vs. the value of the feature** for all the examples in a dataset. Since SHAP values represent a feature's responsibility for a change in the model output, **the plot below represents the change in predicted label as either of the chosen variables changes**._
```
for f in ['negative_frac', 'BlackRock_count_Long', 'positive_count', 'diff_7d']:
shap.dependence_plot(f, shap_values, X_tst)
```
### LIME
```
import lime
from lime.lime_tabular import LimeTabularExplainer
explainer = LimeTabularExplainer(X_trn.values,
feature_names=X_trn.columns,
class_names=['Down','Up'],
verbose=False,
mode='classification')
exp= []
for i in range(5):
e = explainer.explain_instance(X_trn.iloc[i,:].values, mdl.predict_proba)
_ = e.as_pyplot_figure(label=1)
#exp.append(e)
import pickle
with open('model_lightgbm.pkl', 'wb') as fout:
pickle.dump(mdl, fout)
```
# Advanced RNNs
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
In this notebook we're going to cover some advanced topics related to RNNs.
1. Conditioned hidden state
2. Char-level embeddings
3. Encoder and decoder
4. Attentional mechanisms
5. Implementation
# Set up
```
# Load PyTorch library
!pip3 install torch
import os
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import torch
# Set Numpy and PyTorch seeds
def set_seeds(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
# Creating directories
def create_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
# Arguments
args = Namespace(
seed=1234,
cuda=True,
batch_size=4,
condition_vocab_size=3, # vocabulary for condition possibilities
embedding_dim=100,
rnn_hidden_dim=100,
hidden_dim=100,
num_layers=1,
bidirectional=False,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
```
# Conditioned RNNs
Conditioning an RNN means adding extra information that is helpful for the prediction. We can encode (embed) this information and feed it along with the sequential input into our model. For example, suppose in our document classification example from the previous notebook we knew the publisher of each news article (NYTimes, ESPN, etc.). We could have encoded that information to help with the prediction. There are several different ways of creating a conditioned RNN.
**Note**: If the conditioning information is novel for each input in the sequence, just concatenate it along with each time step's input.
1. Make the initial hidden state the encoded information instead of using the usual zeroed hidden state. Make sure that the size of the encoded information is the same as the hidden state of the RNN.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn1.png" width=400>
```
import torch.nn as nn
import torch.nn.functional as F
# Condition
condition = torch.LongTensor([0, 2, 1, 2]) # batch size of 4 with a vocab size of 3
condition_embeddings = nn.Embedding(
embedding_dim=args.embedding_dim, # should be same as RNN hidden dim
num_embeddings=args.condition_vocab_size) # of unique conditions
# Initialize hidden state
num_directions = 1
if args.bidirectional:
num_directions = 2
# If using multiple layers and directions, the hidden state needs to match that size
hidden_t = condition_embeddings(condition).unsqueeze(0).repeat(
args.num_layers * num_directions, 1, 1).to(args.device) # initial state to RNN
print (hidden_t.size())
# Feed into RNN
# y_out, _ = self.rnn(x_embedded, hidden_t)
```
2. Concatenate the encoded information with the hidden state at each time step. Do not replace the hidden state because the RNN needs that to learn.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/conditioned_rnn2.png" width=400>
```
# Initialize hidden state
hidden_t = torch.zeros((args.num_layers * num_directions, args.batch_size, args.rnn_hidden_dim))
print (hidden_t.size())
def concat_condition(condition_embeddings, condition, hidden_t, num_layers, num_directions):
condition_t = condition_embeddings(condition).unsqueeze(0).repeat(
num_layers * num_directions, 1, 1)
hidden_t = torch.cat([hidden_t, condition_t], 2)
return hidden_t
# Loop through the inputs time steps
hiddens = []
seq_size = 1
for t in range(seq_size):
hidden_t = concat_condition(condition_embeddings, condition, hidden_t,
args.num_layers, num_directions).to(args.device)
print (hidden_t.size())
# Feed into RNN
# hidden_t = rnn_cell(x_in[t], hidden_t)
...
```
# Char-level embeddings
Our conv operations take as input the words in a sentence represented at the character level, $\in \mathbb{R}^{N \times S \times W \times E}$, and output an embedding for each word (based on convolutions applied at the character level).
**Word embeddings**: capture the temporal correlations among
adjacent tokens so that similar words have similar representations. Ex. "New Jersey" is close to "NJ" is close to "Garden State", etc.
**Char embeddings**: create representations that map words at a character level. Ex. "toy" and "toys" will be close to each other.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/char_embeddings.png" width=450>
```
# Arguments
args = Namespace(
seed=1234,
cuda=False,
shuffle=True,
batch_size=64,
vocab_size=20, # vocabulary
seq_size=10, # max length of each sentence
word_size=15, # max length of each word
embedding_dim=100,
num_filters=100, # filters per size
)
class Model(nn.Module):
def __init__(self, embedding_dim, num_embeddings, num_input_channels,
num_output_channels, padding_idx):
super(Model, self).__init__()
# Char-level embedding
self.embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_embeddings,
padding_idx=padding_idx)
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_output_channels,
kernel_size=f) for f in [2,3,4]])
def forward(self, x, channel_first=False, apply_softmax=False):
# x: (N, seq_len, word_len)
input_shape = x.size()
batch_size, seq_len, word_len = input_shape
x = x.view(-1, word_len) # (N*seq_len, word_len)
# Embedding
x = self.embeddings(x) # (N*seq_len, word_len, embedding_dim)
# Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)
if not channel_first:
x = x.transpose(1, 2)
# Convolution
z = [F.relu(conv(x)) for conv in self.conv]
# Pooling
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)
# Concat to get char-level embeddings
z = torch.cat(z, 2) # join conv outputs
return z
# Input
input_size = (args.batch_size, args.seq_size, args.word_size)
x_in = torch.randint(low=0, high=args.vocab_size, size=input_size).long()
print (x_in.size())
# Initial char-level embedding model
model = Model(embedding_dim=args.embedding_dim,
num_embeddings=args.vocab_size,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
padding_idx=0)
print (model.named_modules)
# Forward pass to get char-level embeddings
z = model(x_in)
print (z.size())
```
There are several different ways you can use these char-level embeddings:
1. Concatenate the char-level embeddings with word-level embeddings (we now have an embedding for each word at the character level) and feed the result into an RNN.
2. Feed the char-level embeddings into an RNN to process them.
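Option 1 amounts to a concatenation along the feature axis, token by token. A shape-level NumPy sketch (the dimensions are illustrative, chosen to match the conv model above with three filter sizes of 100 filters each):

```python
import numpy as np

N, S = 64, 10                   # batch size, sequence length
word_dim, char_dim = 100, 300   # word embedding dim; char conv output (3 filter sizes x 100)

word_emb = np.random.randn(N, S, word_dim)  # stand-in for nn.Embedding on word ids
char_emb = np.random.randn(N, S, char_dim)  # stand-in for the char-level conv output

# Concatenate along the feature axis; the RNN then consumes 400-dim token vectors
combined = np.concatenate([word_emb, char_emb], axis=-1)
print(combined.shape)
```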
# Encoder and decoder
So far we've used RNNs to `encode` a sequential input and generate hidden states. We use these hidden states to `decode` the predictions. So far, the encoder was an RNN and the decoder was just a few fully connected layers followed by a softmax layer (for classification). But the encoder and decoder can assume other architectures as well. For example, the decoder could be an RNN that processes the hidden state outputs from the encoder RNN.
```
# Arguments
args = Namespace(
batch_size=64,
embedding_dim=100,
rnn_hidden_dim=100,
hidden_dim=100,
num_layers=1,
bidirectional=False,
dropout=0.1,
)
class Encoder(nn.Module):
def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim,
num_layers, bidirectional, padding_idx=0):
super(Encoder, self).__init__()
# Embeddings
self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_embeddings,
padding_idx=padding_idx)
# GRU weights
self.gru = nn.GRU(input_size=embedding_dim, hidden_size=rnn_hidden_dim,
num_layers=num_layers, batch_first=True,
bidirectional=bidirectional)
def forward(self, x_in, x_lengths):
# Word level embeddings
z_word = self.word_embeddings(x_in)
# Feed into RNN
        out, h_n = self.gru(z_word)
# Gather the last relevant hidden state
out = gather_last_relevant_hidden(out, x_lengths)
return out
class Decoder(nn.Module):
def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):
super(Decoder, self).__init__()
# FC weights
self.dropout = nn.Dropout(dropout_p)
self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, encoder_output, apply_softmax=False):
# FC layers
z = self.dropout(encoder_output)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
class Model(nn.Module):
def __init__(self, embedding_dim, num_embeddings, rnn_hidden_dim,
hidden_dim, num_layers, bidirectional, output_dim, dropout_p,
padding_idx=0):
super(Model, self).__init__()
self.encoder = Encoder(embedding_dim, num_embeddings, rnn_hidden_dim,
num_layers, bidirectional, padding_idx=0)
self.decoder = Decoder(rnn_hidden_dim, hidden_dim, output_dim, dropout_p)
def forward(self, x_in, x_lengths, apply_softmax=False):
encoder_outputs = self.encoder(x_in, x_lengths)
y_pred = self.decoder(encoder_outputs, apply_softmax)
return y_pred
model = Model(embedding_dim=args.embedding_dim, num_embeddings=1000,
rnn_hidden_dim=args.rnn_hidden_dim, hidden_dim=args.hidden_dim,
num_layers=args.num_layers, bidirectional=args.bidirectional,
output_dim=4, dropout_p=args.dropout, padding_idx=0)
print (model.named_parameters)
```
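The encoder above calls `gather_last_relevant_hidden`, defined earlier in the notebook, to pull out the hidden state at each sequence's true (pre-padding) final time step. As a reminder of what it does, here is a NumPy sketch of the same indexing (the notebook's version does the equivalent with torch ops):

```python
import numpy as np

def gather_last_relevant_hidden_np(out, x_lengths):
    """Pick the hidden state at the last real (non-padded) time step
    of each sequence. out: (N, T, H), x_lengths: (N,)."""
    batch_indices = np.arange(out.shape[0])
    return out[batch_indices, np.asarray(x_lengths) - 1]

# Toy check: batch of 2 sequences with lengths 2 and 3, T=3, H=4
out = np.arange(2 * 3 * 4).reshape(2, 3, 4).astype(np.float32)
last = gather_last_relevant_hidden_np(out, [2, 3])
print(last.shape)  # (2, 4)
```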
# Attentional mechanisms
When processing an input sequence with an RNN, recall that at each time step we process the input together with the hidden state at that time step. For many use cases, it's advantageous to have access to the inputs at all time steps and to pay selective attention to them at each step. For example, in machine translation it helps to have access to all the words when translating to another language, because translations aren't necessarily word for word.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention1.jpg" width=650>
Attention can sound a bit confusing, so let's see what happens at each time step. At time step $j$, the model has processed inputs $x_0, x_1, x_2, ..., x_j$ and has generated hidden states $h_0, h_1, h_2, ..., h_j$. The idea is to use all of the processed hidden states to make the prediction, not just the most recent one. There are several approaches to doing this.
With **soft attention**, we learn a vector of floating points (probabilities) to multiply with the hidden states to create the context vector.
Ex. [0.1, 0.3, 0.1, 0.4, 0.1]
With **hard attention**, we can learn a binary vector to multiply with the hidden states to create the context vector.
Ex. [0, 0, 0, 1, 0]
We're going to focus on soft attention because it's more widely used, and because we can visualize how much each hidden state contributes to the prediction, which is great for interpretability.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/attention2.jpg" width=650>
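As a concrete sketch of soft attention, the context vector is just the attention-weighted sum of the hidden states (NumPy here for clarity; the model below does the same thing with batched torch ops):

```python
import numpy as np

# Five hidden states (T=5, H=3) and a soft attention distribution over them
hidden_states = np.random.rand(5, 3)
attn_weights = np.array([0.1, 0.3, 0.1, 0.4, 0.1])  # sums to 1

# Context vector: weighted sum over the time dimension
context = (attn_weights[:, None] * hidden_states).sum(axis=0)
print(context.shape)  # (3,)
```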
We're going to implement attention in the document classification task below.
# Document classification with RNNs
We're going to implement the same document classification task as in the previous notebook but we're going to use an attentional interface for interpretability.
**Why not machine translation?** Machine translation is the usual go-to example for demonstrating attention, but it's not all that practical here: relatively few applications require generating one sequence from another. Instead, we're going to apply attention to our document classification example to see which input tokens are most influential in predicting the genre.
## Set up
```
from argparse import Namespace
import collections
import copy
import json
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import torch
def set_seeds(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
# Creating directories
def create_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
args = Namespace(
seed=1234,
cuda=True,
shuffle=True,
data_file="news.csv",
split_data_file="split_news.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="news",
train_size=0.7,
val_size=0.15,
test_size=0.15,
pretrained_embeddings=None,
cutoff=25,
num_epochs=5,
early_stopping_criteria=5,
learning_rate=1e-3,
batch_size=128,
embedding_dim=100,
kernels=[3,5],
num_filters=100,
rnn_hidden_dim=128,
hidden_dim=200,
num_layers=1,
bidirectional=False,
dropout_p=0.25,
)
# Set seeds
set_seeds(seed=args.seed, cuda=args.cuda)
# Create save dir
create_dirs(args.save_dir)
# Expand filepaths
args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir, args.model_state_file)
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
```
## Data
```
import urllib.request
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/news.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(args.data_file, 'wb') as fp:
fp.write(html)
df = pd.read_csv(args.data_file, header=0)
df.head()
by_category = collections.defaultdict(list)
for _, row in df.iterrows():
by_category[row.category].append(row.to_dict())
for category in by_category:
print ("{0}: {1}".format(category, len(by_category[category])))
final_list = []
for _, item_list in sorted(by_category.items()):
if args.shuffle:
np.random.shuffle(item_list)
n = len(item_list)
n_train = int(args.train_size*n)
n_val = int(args.val_size*n)
n_test = int(args.test_size*n)
# Give data point a split attribute
for item in item_list[:n_train]:
item['split'] = 'train'
for item in item_list[n_train:n_train+n_val]:
item['split'] = 'val'
for item in item_list[n_train+n_val:]:
item['split'] = 'test'
# Add to final list
final_list.extend(item_list)
split_df = pd.DataFrame(final_list)
split_df["split"].value_counts()
def preprocess_text(text):
text = ' '.join(word.lower() for word in text.split(" "))
text = re.sub(r"([.,!?])", r" \1 ", text)
text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)
text = text.strip()
return text
split_df.title = split_df.title.apply(preprocess_text)
split_df.to_csv(args.split_data_file, index=False)
split_df.head()
```
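To see exactly what `preprocess_text` does to a title, here is a self-contained rerun of the same regex steps on a sample string:

```python
import re

def preprocess_text(text):
    text = ' '.join(word.lower() for word in text.split(" "))
    text = re.sub(r"([.,!?])", r" \1 ", text)     # pad punctuation with spaces
    text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)  # drop all other characters
    return text.strip()

print(preprocess_text("Roger Federer wins Wimbledon!"))  # roger federer wins wimbledon !
```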
## Vocabulary
```
class Vocabulary(object):
def __init__(self, token_to_idx=None):
# Token to index
if token_to_idx is None:
token_to_idx = {}
self.token_to_idx = token_to_idx
# Index to token
self.idx_to_token = {idx: token \
for token, idx in self.token_to_idx.items()}
def to_serializable(self):
return {'token_to_idx': self.token_to_idx}
@classmethod
def from_serializable(cls, contents):
return cls(**contents)
def add_token(self, token):
if token in self.token_to_idx:
index = self.token_to_idx[token]
else:
index = len(self.token_to_idx)
self.token_to_idx[token] = index
self.idx_to_token[index] = token
return index
def add_tokens(self, tokens):
return [self.add_token(token) for token in tokens]
def lookup_token(self, token):
return self.token_to_idx[token]
def lookup_index(self, index):
if index not in self.idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self.idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self.token_to_idx)
# Vocabulary instance
category_vocab = Vocabulary()
for index, row in df.iterrows():
category_vocab.add_token(row.category)
print (category_vocab) # __str__
print (len(category_vocab)) # __len__
index = category_vocab.lookup_token("Business")
print (index)
print (category_vocab.lookup_index(index))
```
## Sequence vocabulary
Next, we're going to create our SequenceVocabulary class for the article's title, which is a sequence of words.
```
from collections import Counter
import string
class SequenceVocabulary(Vocabulary):
def __init__(self, token_to_idx=None, unk_token="<UNK>",
mask_token="<MASK>", begin_seq_token="<BEGIN>",
end_seq_token="<END>"):
super(SequenceVocabulary, self).__init__(token_to_idx)
self.mask_token = mask_token
self.unk_token = unk_token
self.begin_seq_token = begin_seq_token
self.end_seq_token = end_seq_token
self.mask_index = self.add_token(self.mask_token)
self.unk_index = self.add_token(self.unk_token)
self.begin_seq_index = self.add_token(self.begin_seq_token)
self.end_seq_index = self.add_token(self.end_seq_token)
# Index to token
self.idx_to_token = {idx: token \
for token, idx in self.token_to_idx.items()}
def to_serializable(self):
contents = super(SequenceVocabulary, self).to_serializable()
contents.update({'unk_token': self.unk_token,
'mask_token': self.mask_token,
'begin_seq_token': self.begin_seq_token,
'end_seq_token': self.end_seq_token})
return contents
def lookup_token(self, token):
return self.token_to_idx.get(token, self.unk_index)
def lookup_index(self, index):
if index not in self.idx_to_token:
raise KeyError("the index (%d) is not in the SequenceVocabulary" % index)
return self.idx_to_token[index]
def __str__(self):
return "<SequenceVocabulary(size=%d)>" % len(self.token_to_idx)
def __len__(self):
return len(self.token_to_idx)
# Get word counts
word_counts = Counter()
for title in split_df.title:
for token in title.split(" "):
if token not in string.punctuation:
word_counts[token] += 1
# Create SequenceVocabulary instance
title_word_vocab = SequenceVocabulary()
for word, word_count in word_counts.items():
if word_count >= args.cutoff:
title_word_vocab.add_token(word)
print (title_word_vocab) # __str__
print (len(title_word_vocab)) # __len__
index = title_word_vocab.lookup_token("general")
print (index)
print (title_word_vocab.lookup_index(index))
```
We're also going to create an instance of SequenceVocabulary that processes the input at the character level.
```
# Create SequenceVocabulary instance
title_char_vocab = SequenceVocabulary()
for title in split_df.title:
for token in title:
title_char_vocab.add_token(token)
print (title_char_vocab) # __str__
print (len(title_char_vocab)) # __len__
index = title_char_vocab.lookup_token("g")
print (index)
print (title_char_vocab.lookup_index(index))
```
## Vectorizer
Something new that we introduce in this Vectorizer is calculating the length of our input sequence. We will use this later on to extract the last relevant hidden state for each input sequence.
```
class NewsVectorizer(object):
def __init__(self, title_word_vocab, title_char_vocab, category_vocab):
self.title_word_vocab = title_word_vocab
self.title_char_vocab = title_char_vocab
self.category_vocab = category_vocab
def vectorize(self, title):
# Word-level vectorization
word_indices = [self.title_word_vocab.lookup_token(token) for token in title.split(" ")]
word_indices = [self.title_word_vocab.begin_seq_index] + word_indices + \
[self.title_word_vocab.end_seq_index]
title_length = len(word_indices)
word_vector = np.zeros(title_length, dtype=np.int64)
word_vector[:len(word_indices)] = word_indices
# Char-level vectorization
word_length = max([len(word) for word in title.split(" ")])
char_vector = np.zeros((len(word_vector), word_length), dtype=np.int64)
char_vector[0, :] = self.title_char_vocab.mask_index # <BEGIN>
char_vector[-1, :] = self.title_char_vocab.mask_index # <END>
for i, word in enumerate(title.split(" ")):
char_vector[i+1,:len(word)] = [self.title_char_vocab.lookup_token(char) \
for char in word] # i+1 b/c of <BEGIN> token
return word_vector, char_vector, len(word_indices)
def unvectorize_word_vector(self, word_vector):
tokens = [self.title_word_vocab.lookup_index(index) for index in word_vector]
title = " ".join(token for token in tokens)
return title
def unvectorize_char_vector(self, char_vector):
title = ""
for word_vector in char_vector:
for index in word_vector:
if index == self.title_char_vocab.mask_index:
break
title += self.title_char_vocab.lookup_index(index)
title += " "
return title
@classmethod
def from_dataframe(cls, df, cutoff):
# Create class vocab
category_vocab = Vocabulary()
for category in sorted(set(df.category)):
category_vocab.add_token(category)
# Get word counts
word_counts = Counter()
for title in df.title:
for token in title.split(" "):
word_counts[token] += 1
# Create title vocab (word level)
title_word_vocab = SequenceVocabulary()
for word, word_count in word_counts.items():
if word_count >= cutoff:
title_word_vocab.add_token(word)
# Create title vocab (char level)
title_char_vocab = SequenceVocabulary()
for title in df.title:
for token in title:
title_char_vocab.add_token(token)
return cls(title_word_vocab, title_char_vocab, category_vocab)
@classmethod
def from_serializable(cls, contents):
title_word_vocab = SequenceVocabulary.from_serializable(contents['title_word_vocab'])
title_char_vocab = SequenceVocabulary.from_serializable(contents['title_char_vocab'])
category_vocab = Vocabulary.from_serializable(contents['category_vocab'])
return cls(title_word_vocab=title_word_vocab,
title_char_vocab=title_char_vocab,
category_vocab=category_vocab)
def to_serializable(self):
return {'title_word_vocab': self.title_word_vocab.to_serializable(),
'title_char_vocab': self.title_char_vocab.to_serializable(),
'category_vocab': self.category_vocab.to_serializable()}
# Vectorizer instance
vectorizer = NewsVectorizer.from_dataframe(split_df, cutoff=args.cutoff)
print (vectorizer.title_word_vocab)
print (vectorizer.title_char_vocab)
print (vectorizer.category_vocab)
word_vector, char_vector, title_length = vectorizer.vectorize(preprocess_text(
"Roger Federer wins the Wimbledon tennis tournament."))
print ("word_vector:", np.shape(word_vector))
print ("char_vector:", np.shape(char_vector))
print ("title_length:", title_length)
print (word_vector)
print (char_vector)
print (vectorizer.unvectorize_word_vector(word_vector))
print (vectorizer.unvectorize_char_vector(char_vector))
```
## Dataset
```
from torch.utils.data import Dataset, DataLoader
class NewsDataset(Dataset):
def __init__(self, df, vectorizer):
self.df = df
self.vectorizer = vectorizer
# Data splits
self.train_df = self.df[self.df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.df[self.df.split=='val']
self.val_size = len(self.val_df)
self.test_df = self.df[self.df.split=='test']
self.test_size = len(self.test_df)
self.lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.val_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
# Class weights (for imbalances)
class_counts = df.category.value_counts().to_dict()
def sort_key(item):
return self.vectorizer.category_vocab.lookup_token(item[0])
sorted_counts = sorted(class_counts.items(), key=sort_key)
frequencies = [count for _, count in sorted_counts]
self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)
@classmethod
def load_dataset_and_make_vectorizer(cls, split_data_file, cutoff):
df = pd.read_csv(split_data_file, header=0)
train_df = df[df.split=='train']
return cls(df, NewsVectorizer.from_dataframe(train_df, cutoff))
@classmethod
def load_dataset_and_load_vectorizer(cls, split_data_file, vectorizer_filepath):
df = pd.read_csv(split_data_file, header=0)
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(df, vectorizer)
@staticmethod
def load_vectorizer_only(vectorizer_filepath):
with open(vectorizer_filepath) as fp:
return NewsVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
with open(vectorizer_filepath, "w") as fp:
json.dump(self.vectorizer.to_serializable(), fp)
def set_split(self, split="train"):
self.target_split = split
self.target_df, self.target_size = self.lookup_dict[split]
def __str__(self):
return "<Dataset(split={0}, size={1})".format(
self.target_split, self.target_size)
def __len__(self):
return self.target_size
def __getitem__(self, index):
row = self.target_df.iloc[index]
title_word_vector, title_char_vector, title_length = \
self.vectorizer.vectorize(row.title)
category_index = self.vectorizer.category_vocab.lookup_token(row.category)
return {'title_word_vector': title_word_vector,
'title_char_vector': title_char_vector,
'title_length': title_length,
'category': category_index}
def get_num_batches(self, batch_size):
return len(self) // batch_size
def generate_batches(self, batch_size, collate_fn, shuffle=True,
drop_last=False, device="cpu"):
dataloader = DataLoader(dataset=self, batch_size=batch_size,
collate_fn=collate_fn, shuffle=shuffle,
drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
# Dataset instance
dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,
args.cutoff)
print (dataset) # __str__
input_ = dataset[10] # __getitem__
print (input_['title_word_vector'])
print (input_['title_char_vector'])
print (input_['title_length'])
print (input_['category'])
print (dataset.vectorizer.unvectorize_word_vector(input_['title_word_vector']))
print (dataset.vectorizer.unvectorize_char_vector(input_['title_char_vector']))
print (dataset.class_weights)
```
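The inverse-frequency class weights computed above (and used later by the loss function) can be sketched in isolation; the notebook builds the same values as a torch tensor. The counts below are hypothetical:

```python
import numpy as np

# Hypothetical class counts, ordered by category index
frequencies = np.array([400, 100, 250, 250], dtype=np.float32)
class_weights = 1.0 / frequencies

# Rarer classes get proportionally larger weights
print(class_weights[1] > class_weights[0])  # True
```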
## Model
embed → encoder → attend → predict
```
import torch.nn as nn
import torch.nn.functional as F
class NewsEncoder(nn.Module):
def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,
kernels, num_input_channels, num_output_channels,
rnn_hidden_dim, num_layers, bidirectional,
word_padding_idx=0, char_padding_idx=0):
super(NewsEncoder, self).__init__()
self.num_layers = num_layers
self.bidirectional = bidirectional
# Embeddings
self.word_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_word_embeddings,
padding_idx=word_padding_idx)
self.char_embeddings = nn.Embedding(embedding_dim=embedding_dim,
num_embeddings=num_char_embeddings,
padding_idx=char_padding_idx)
# Conv weights
self.conv = nn.ModuleList([nn.Conv1d(num_input_channels,
num_output_channels,
kernel_size=f) for f in kernels])
# GRU weights
self.gru = nn.GRU(input_size=embedding_dim*(len(kernels)+1),
hidden_size=rnn_hidden_dim, num_layers=num_layers,
batch_first=True, bidirectional=bidirectional)
def initialize_hidden_state(self, batch_size, rnn_hidden_dim, device):
"""Modify this to condition the RNN."""
num_directions = 1
if self.bidirectional:
num_directions = 2
hidden_t = torch.zeros(self.num_layers * num_directions,
batch_size, rnn_hidden_dim).to(device)
return hidden_t
def get_char_level_embeddings(self, x):
# x: (N, seq_len, word_len)
input_shape = x.size()
batch_size, seq_len, word_len = input_shape
x = x.view(-1, word_len) # (N*seq_len, word_len)
# Embedding
x = self.char_embeddings(x) # (N*seq_len, word_len, embedding_dim)
# Rearrange input so num_input_channels is in dim 1 (N, embedding_dim, word_len)
x = x.transpose(1, 2)
# Convolution
z = [F.relu(conv(x)) for conv in self.conv]
# Pooling
z = [F.max_pool1d(zz, zz.size(2)).squeeze(2) for zz in z]
z = [zz.view(batch_size, seq_len, -1) for zz in z] # (N, seq_len, embedding_dim)
# Concat to get char-level embeddings
z = torch.cat(z, 2) # join conv outputs
return z
def forward(self, x_word, x_char, x_lengths, device):
"""
x_word: word level representation (N, seq_size)
x_char: char level representation (N, seq_size, word_len)
"""
# Word level embeddings
z_word = self.word_embeddings(x_word)
# Char level embeddings
z_char = self.get_char_level_embeddings(x=x_char)
# Concatenate
z = torch.cat([z_word, z_char], 2)
# Feed into RNN
initial_h = self.initialize_hidden_state(
batch_size=z.size(0), rnn_hidden_dim=self.gru.hidden_size,
device=device)
out, h_n = self.gru(z, initial_h)
return out
class NewsDecoder(nn.Module):
def __init__(self, rnn_hidden_dim, hidden_dim, output_dim, dropout_p):
super(NewsDecoder, self).__init__()
# Attention FC layer
self.fc_attn = nn.Linear(rnn_hidden_dim, rnn_hidden_dim)
self.v = nn.Parameter(torch.rand(rnn_hidden_dim))
# FC weights
self.dropout = nn.Dropout(dropout_p)
self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, encoder_outputs, apply_softmax=False):
# Attention
z = torch.tanh(self.fc_attn(encoder_outputs))
z = z.transpose(2,1) # [B*H*T]
v = self.v.repeat(encoder_outputs.size(0),1).unsqueeze(1) #[B*1*H]
z = torch.bmm(v,z).squeeze(1) # [B*T]
attn_scores = F.softmax(z, dim=1)
context = torch.matmul(encoder_outputs.transpose(-2, -1),
attn_scores.unsqueeze(dim=2)).squeeze()
if len(context.size()) == 1:
context = context.unsqueeze(0)
# FC layers
z = self.dropout(context)
z = self.fc1(z)
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return attn_scores, y_pred
class NewsModel(nn.Module):
def __init__(self, embedding_dim, num_word_embeddings, num_char_embeddings,
kernels, num_input_channels, num_output_channels,
rnn_hidden_dim, hidden_dim, output_dim, num_layers,
bidirectional, dropout_p, word_padding_idx, char_padding_idx):
super(NewsModel, self).__init__()
self.encoder = NewsEncoder(embedding_dim, num_word_embeddings,
num_char_embeddings, kernels,
num_input_channels, num_output_channels,
rnn_hidden_dim, num_layers, bidirectional,
word_padding_idx, char_padding_idx)
self.decoder = NewsDecoder(rnn_hidden_dim, hidden_dim, output_dim,
dropout_p)
def forward(self, x_word, x_char, x_lengths, device, apply_softmax=False):
encoder_outputs = self.encoder(x_word, x_char, x_lengths, device)
y_pred = self.decoder(encoder_outputs, apply_softmax)
return y_pred
```
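The decoder's attention above is an additive score (a tanh-squashed projection dotted with the learned vector `v`) followed by a softmax over time steps. The shape bookkeeping can be sketched without learned weights (the real decoder also passes `encoder_outputs` through `fc_attn` before the tanh):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

B, T, H = 2, 5, 4                      # batch, time steps, hidden dim
encoder_outputs = np.random.rand(B, T, H)
v = np.random.rand(H)                  # stands in for the learned self.v

scores = np.tanh(encoder_outputs) @ v  # (B, T): one scalar score per time step
attn_scores = softmax(scores)          # each row sums to 1
context = np.einsum('bt,bth->bh', attn_scores, encoder_outputs)  # (B, H)
print(attn_scores.shape, context.shape)
```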
## Training
```
import torch.optim as optim
class Trainer(object):
def __init__(self, dataset, model, model_state_file, save_dir, device,
shuffle, num_epochs, batch_size, learning_rate,
early_stopping_criteria):
self.dataset = dataset
self.class_weights = dataset.class_weights.to(device)
self.device = device
self.model = model.to(device)
self.save_dir = save_dir
self.device = device
self.shuffle = shuffle
self.num_epochs = num_epochs
self.batch_size = batch_size
self.loss_func = nn.CrossEntropyLoss(self.class_weights)
self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)
self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
optimizer=self.optimizer, mode='min', factor=0.5, patience=1)
self.train_state = {
'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'early_stopping_criteria': early_stopping_criteria,
'learning_rate': learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': model_state_file}
def update_train_state(self):
# Verbose
print ("[EPOCH]: {0:02d} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format(
self.train_state['epoch_index'], self.train_state['learning_rate'],
self.train_state['train_loss'][-1], self.train_state['train_acc'][-1],
self.train_state['val_loss'][-1], self.train_state['val_acc'][-1]))
# Save one model at least
if self.train_state['epoch_index'] == 0:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
self.train_state['stop_early'] = False
# Save model if performance improved
elif self.train_state['epoch_index'] >= 1:
loss_tm1, loss_t = self.train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= self.train_state['early_stopping_best_val']:
# Update step
self.train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < self.train_state['early_stopping_best_val']:
torch.save(self.model.state_dict(), self.train_state['model_filename'])
# Reset early stopping step
self.train_state['early_stopping_step'] = 0
# Stop early ?
self.train_state['stop_early'] = self.train_state['early_stopping_step'] \
>= self.train_state['early_stopping_criteria']
return self.train_state
def compute_accuracy(self, y_pred, y_target):
_, y_pred_indices = y_pred.max(dim=1)
n_correct = torch.eq(y_pred_indices, y_target).sum().item()
return n_correct / len(y_pred_indices) * 100
def pad_word_seq(self, seq, length):
vector = np.zeros(length, dtype=np.int64)
vector[:len(seq)] = seq
vector[len(seq):] = self.dataset.vectorizer.title_word_vocab.mask_index
return vector
def pad_char_seq(self, seq, seq_length, word_length):
vector = np.zeros((seq_length, word_length), dtype=np.int64)
vector.fill(self.dataset.vectorizer.title_char_vocab.mask_index)
for i in range(len(seq)):
char_padding = np.zeros(word_length-len(seq[i]), dtype=np.int64)
vector[i] = np.concatenate((seq[i], char_padding), axis=None)
return vector
def collate_fn(self, batch):
# Make a deep copy
batch_copy = copy.deepcopy(batch)
processed_batch = {"title_word_vector": [], "title_char_vector": [],
"title_length": [], "category": []}
# Max lengths
get_seq_length = lambda sample: len(sample["title_word_vector"])
get_word_length = lambda sample: len(sample["title_char_vector"][0])
max_seq_length = max(map(get_seq_length, batch))
max_word_length = max(map(get_word_length, batch))
# Pad
for i, sample in enumerate(batch_copy):
padded_word_seq = self.pad_word_seq(
sample["title_word_vector"], max_seq_length)
padded_char_seq = self.pad_char_seq(
sample["title_char_vector"], max_seq_length, max_word_length)
processed_batch["title_word_vector"].append(padded_word_seq)
processed_batch["title_char_vector"].append(padded_char_seq)
processed_batch["title_length"].append(sample["title_length"])
processed_batch["category"].append(sample["category"])
# Convert to appropriate tensor types
processed_batch["title_word_vector"] = torch.LongTensor(
processed_batch["title_word_vector"])
processed_batch["title_char_vector"] = torch.LongTensor(
processed_batch["title_char_vector"])
processed_batch["title_length"] = torch.LongTensor(
processed_batch["title_length"])
processed_batch["category"] = torch.LongTensor(
processed_batch["category"])
return processed_batch
def run_train_loop(self):
for epoch_index in range(self.num_epochs):
self.train_state['epoch_index'] = epoch_index
# Iterate over train dataset
# initialize batch generator, set loss and acc to 0, set train mode on
self.dataset.set_split('train')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# zero the gradients
self.optimizer.zero_grad()
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute gradients using loss
loss.backward()
# use optimizer to take a gradient step
self.optimizer.step()
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['train_loss'].append(running_loss)
self.train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# initialize batch generator, set loss and acc to 0, set eval mode on
self.dataset.set_split('val')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.
running_acc = 0.
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.to("cpu").item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['val_loss'].append(running_loss)
self.train_state['val_acc'].append(running_acc)
self.train_state = self.update_train_state()
self.scheduler.step(self.train_state['val_loss'][-1])
if self.train_state['stop_early']:
break
def run_test_loop(self):
# initialize batch generator, set loss and acc to 0, set eval mode on
self.dataset.set_split('test')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, collate_fn=self.collate_fn,
shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
_, y_pred = self.model(x_word=batch_dict['title_word_vector'],
x_char=batch_dict['title_char_vector'],
x_lengths=batch_dict['title_length'],
device=self.device)
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = self.compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['test_loss'] = running_loss
self.train_state['test_acc'] = running_acc
def plot_performance(self):
# Figure size
plt.figure(figsize=(15,5))
# Plot Loss
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(trainer.train_state["train_loss"], label="train")
plt.plot(trainer.train_state["val_loss"], label="val")
plt.legend(loc='upper right')
# Plot Accuracy
plt.subplot(1, 2, 2)
plt.title("Accuracy")
plt.plot(trainer.train_state["train_acc"], label="train")
plt.plot(trainer.train_state["val_acc"], label="val")
plt.legend(loc='lower right')
# Save figure
plt.savefig(os.path.join(self.save_dir, "performance.png"))
# Show plots
plt.show()
def save_train_state(self):
with open(os.path.join(self.save_dir, "train_state.json"), "w") as fp:
json.dump(self.train_state, fp)
# Initialization
dataset = NewsDataset.load_dataset_and_make_vectorizer(args.split_data_file,
args.cutoff)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.vectorizer
model = NewsModel(embedding_dim=args.embedding_dim,
num_word_embeddings=len(vectorizer.title_word_vocab),
num_char_embeddings=len(vectorizer.title_char_vocab),
kernels=args.kernels,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
rnn_hidden_dim=args.rnn_hidden_dim,
hidden_dim=args.hidden_dim,
output_dim=len(vectorizer.category_vocab),
num_layers=args.num_layers,
bidirectional=args.bidirectional,
dropout_p=args.dropout_p,
word_padding_idx=vectorizer.title_word_vocab.mask_index,
char_padding_idx=vectorizer.title_char_vocab.mask_index)
print (model.named_modules)
# Train
trainer = Trainer(dataset=dataset, model=model,
model_state_file=args.model_state_file,
save_dir=args.save_dir, device=args.device,
shuffle=args.shuffle, num_epochs=args.num_epochs,
batch_size=args.batch_size, learning_rate=args.learning_rate,
early_stopping_criteria=args.early_stopping_criteria)
trainer.run_train_loop()
# Plot performance
trainer.plot_performance()
# Test performance
trainer.run_test_loop()
print("Test loss: {0:.2f}".format(trainer.train_state['test_loss']))
print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc']))
# Save all results
trainer.save_train_state()
```
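The train, val, and test loops above track loss and accuracy with an incremental mean, `running += (x - running) / (n + 1)`, which avoids storing per-batch values. A quick check that it matches the plain mean:

```python
losses = [0.9, 0.7, 0.6, 0.55]
running_loss = 0.0
for batch_index, loss_t in enumerate(losses):
    running_loss += (loss_t - running_loss) / (batch_index + 1)

print(abs(running_loss - sum(losses) / len(losses)) < 1e-9)  # True
```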
## Inference
```
class Inference(object):
def __init__(self, model, vectorizer):
self.model = model
self.vectorizer = vectorizer
def predict_category(self, title):
# Vectorize
word_vector, char_vector, title_length = self.vectorizer.vectorize(title)
title_word_vector = torch.tensor(word_vector).unsqueeze(0)
title_char_vector = torch.tensor(char_vector).unsqueeze(0)
title_length = torch.tensor([title_length]).long()
# Forward pass
self.model.eval()
attn_scores, y_pred = self.model(x_word=title_word_vector,
x_char=title_char_vector,
x_lengths=title_length,
device="cpu",
apply_softmax=True)
# Top category
y_prob, indices = y_pred.max(dim=1)
index = indices.item()
# Predicted category
category = self.vectorizer.category_vocab.lookup_index(index)
probability = y_prob.item()
return {'category': category, 'probability': probability,
'attn_scores': attn_scores}
def predict_top_k(self, title, k):
# Vectorize
word_vector, char_vector, title_length = self.vectorizer.vectorize(title)
title_word_vector = torch.tensor(word_vector).unsqueeze(0)
title_char_vector = torch.tensor(char_vector).unsqueeze(0)
title_length = torch.tensor([title_length]).long()
# Forward pass
self.model.eval()
_, y_pred = self.model(x_word=title_word_vector,
x_char=title_char_vector,
x_lengths=title_length,
device="cpu",
apply_softmax=True)
# Top k categories
y_prob, indices = torch.topk(y_pred, k=k)
probabilities = y_prob.detach().numpy()[0]
indices = indices.detach().numpy()[0]
# Results
results = []
for probability, index in zip(probabilities, indices):
category = self.vectorizer.category_vocab.lookup_index(index)
results.append({'category': category, 'probability': probability})
return results
# Load the model
dataset = NewsDataset.load_dataset_and_load_vectorizer(
args.split_data_file, args.vectorizer_file)
vectorizer = dataset.vectorizer
model = NewsModel(embedding_dim=args.embedding_dim,
num_word_embeddings=len(vectorizer.title_word_vocab),
num_char_embeddings=len(vectorizer.title_char_vocab),
kernels=args.kernels,
num_input_channels=args.embedding_dim,
num_output_channels=args.num_filters,
rnn_hidden_dim=args.rnn_hidden_dim,
hidden_dim=args.hidden_dim,
output_dim=len(vectorizer.category_vocab),
num_layers=args.num_layers,
bidirectional=args.bidirectional,
dropout_p=args.dropout_p,
word_padding_idx=vectorizer.title_word_vocab.mask_index,
char_padding_idx=vectorizer.title_char_vocab.mask_index)
model.load_state_dict(torch.load(args.model_state_file))
model = model.to("cpu")
print (model.named_modules)
# Inference
inference = Inference(model=model, vectorizer=vectorizer)
title = input("Enter a title to classify: ")
prediction = inference.predict_category(preprocess_text(title))
print("{} → {} (p={:0.2f})".format(title, prediction['category'],
prediction['probability']))
# Top-k inference
top_k = inference.predict_top_k(preprocess_text(title), k=len(vectorizer.category_vocab))
print ("{}: ".format(title))
for result in top_k:
print ("{} (p={:0.2f})".format(result['category'],
result['probability']))
```
# Interpretability
We can inspect the attention scores (a probability vector over the input tokens) to visualize how much each token's hidden state contributed to the model's prediction.
```
import seaborn as sns
import matplotlib.pyplot as plt
attn_matrix = prediction['attn_scores'].detach().numpy()
ax = sns.heatmap(attn_matrix, linewidths=2, square=True)
tokens = ["<BEGIN>"]+preprocess_text(title).split(" ")+["<END>"]
ax.set_xticklabels(tokens, rotation=45)
ax.set_xlabel("Token")
ax.set_ylabel("Importance\n")
plt.show()
```
# TODO
- attention visualization isn't always great; interpretability is hit-or-miss
- BLEU score
- n-gram overlap
- perplexity
- beam search
- hierarchical softmax
- hierarchical attention
- Transformer networks
### Supervised Machine Learning Models for Cross Species comparison of supporting cells
```
import numpy as np
import pandas as pd
import scanpy as sc
import matplotlib.pyplot as plt
import os
import sys
import anndata
def MovePlots(plotpattern, subplotdir):
os.system('mkdir -p '+str(sc.settings.figdir)+'/'+subplotdir)
os.system('mv '+str(sc.settings.figdir)+'/*'+plotpattern+'** '+str(sc.settings.figdir)+'/'+subplotdir)
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.figdir = '/home/jovyan/Gonads/Flat_SupportVectorMachine_Fetal/SVM/training/'
sc.logging.print_versions()
sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures
sys.executable
```
**Load our fetal samples**
```
human = sc.read('/nfs/team292/lg18/with_valentina/FCA-M5-annotatedCluster4Seurat.h5ad')
human = human[[i in ['female'] for i in human.obs['sex']]]
human.obs['stage'].value_counts()
```
**Take fine grained annotations from Luz on supporting cells**
```
supporting = pd.read_csv('/nfs/team292/lg18/with_valentina/supporting_nocycling_annotation.csv', index_col = 0)
print(supporting['annotated_clusters'].value_counts())
supporting = supporting[supporting['annotated_clusters'].isin(['coelEpi', 'sLGR5', 'sPAX8b', 'preGC_III_Notch', 'preGC_II',
'preGC_II_hypoxia', 'preGC_I_OSR1', 'sKITLG',
'ovarianSurf'])]
mapping = supporting['annotated_clusters'].to_dict()
human.obs['supporting_clusters'] = human.obs_names.map(mapping)
# Remove doublets as well as NaNs corresponding to cells from enriched samples
human.obs['supporting_clusters'] = human.obs['supporting_clusters'].astype(str)
human = human[[i not in ['nan'] for i in human.obs['supporting_clusters']]]
human.obs['supporting_clusters'].value_counts(dropna = False)
### Join sub-states of preGC_II and preGC_III
joined = {'coelEpi' : 'coelEpi', 'sLGR5' : 'sLGR5', 'sPAX8b' : 'sPAX8b', 'preGC_III_Notch' : 'preGC_III', 'preGC_II' : 'preGC_II',
'preGC_II_hypoxia' : 'preGC_II', 'preGC_I_OSR1' : 'preGC_I_OSR1', 'sKITLG' : 'sKITLG',
'ovarianSurf' : 'ovarianSurf'}
human.obs['supporting_clusters'] = human.obs['supporting_clusters'].map(joined)
human.obs['supporting_clusters'].value_counts(dropna = False)
```
**Intersect genes present in all fetal gonads scRNAseq datasets of human and mouse**
Mouse ovary
```
mouse = sc.read("/nfs/team292/vl6/Mouse_Niu2020/supporting_mesothelial.h5ad")
mouse = anndata.AnnData(X= mouse.raw.X, var=mouse.raw.var, obs=mouse.obs)
mouse
```
Extract the genes from all datasets
```
human_genes = human.var_names.to_list()
mouse_genes = mouse.var_names.to_list()
from functools import reduce
inters = reduce(np.intersect1d, (human_genes, mouse_genes))
len(inters)
cell_cycle_genes = [x.strip() for x in open(file='/nfs/users/nfs_v/vl6/regev_lab_cell_cycle_genes.txt')]
cell_cycle_genes = [x for x in cell_cycle_genes if x in list(inters)]
inters = [x for x in list(inters) if x not in cell_cycle_genes]
len(inters)
```
**Subset fetal data to keep only these genes**
```
human = human[:, list(inters)]
human
```
**Downsample more frequent classes**
```
myindex = human.obs['supporting_clusters'].value_counts().index
myvalues = human.obs['supporting_clusters'].value_counts().values
clusters = pd.Series(myvalues, index = myindex)
clusters.values
import random
from itertools import chain
# Find clusters with > n cells
n = 1500
cl2downsample = clusters.index[ clusters.values > n ]
# save all barcode ids from small clusters
holder = []
holder.append( human.obs_names[[ i not in cl2downsample for i in human.obs['supporting_clusters'] ]] )
# randomly sample n cells in the cl2downsample
for cl in cl2downsample:
print(cl)
cl_sample = human[[ i == cl for i in human.obs['supporting_clusters'] ]].obs_names
# n = int(round(len(cl_sample)/2, 0))
cl_downsample = random.sample(set(cl_sample), n )
holder.append(cl_downsample)
# samples to include
samples = list(chain(*holder))
# Filter adata_count
human = human[[ i in samples for i in human.obs_names ]]
human.X.shape
```
**Preprocess the data**
```
# Per cell normalization
sc.pp.normalize_per_cell(human, counts_per_cell_after=1e4)
# Log transformation
sc.pp.log1p(human)
# Filter HVGs --> Select top 300 highly variable genes that will serve as features to the machine learning models
sc.pp.highly_variable_genes(human, n_top_genes = 300)
highly_variable_genes = human.var["highly_variable"]
human = human[:, highly_variable_genes]
# Scale
sc.pp.scale(human, max_value=10)
print('Total number of cells: {:d}'.format(human.n_obs))
print('Total number of genes: {:d}'.format(human.n_vars))
```
**Import libraries**
```
# Required libraries regardless of the model you choose
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
# Library for Logistic Regression
from sklearn.linear_model import LogisticRegression
# Library for Random Forest
from sklearn.ensemble import RandomForestClassifier
# Library for Support Vector Machine
from sklearn.svm import SVC
print("Loading data")
X = np.array(human.X) # Fetching the count matrix to use as input to the model
print(type(X), X.shape)
# Choose output variable, meaning the labels you want to predict
y = list(human.obs.supporting_clusters.astype('str'))
# Split the training dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.25, # This can be changed, though it makes sense to use 25-30% of the data for test
random_state=1234,
)
```
**Option 1: Logistic Regression classifier**
```
# Instantiate a Logistic Regression Classifier and specify L2 regularization
lr = LogisticRegression(penalty='l2', multi_class="multinomial", max_iter = 2000)
# Instantiate a PCA object
pca = PCA()
# Create pipeline object
pipe = Pipeline(steps=[('pca', pca), ('LogReg', lr)])
print('Hyperparameter tuning with randomized search over a grid')
# Choose a grid of hyperparameter values (arbitrary but reasonable; reference values taken from the documentation)
params_lr = {'LogReg__C' : [0.001, 0.01, 0.1, 1, 10, 100], 'LogReg__solver' : ["lbfgs", 'newton-cg', 'sag'],
             'pca__n_components' : [0.7, 0.8, 0.9]}
# Use randomized search with 5-fold cross-validation to sample the hyperparameter space and pick the best setting
grid_lr = RandomizedSearchCV(estimator = pipe, param_distributions = params_lr, cv = 5, n_jobs = -1)
# Fit the model to the training set of the training data
grid_lr.fit(X_train, y_train)
# Report the best parameters
print("Best CV params", grid_lr.best_params_)
# Report the best hyperparameters and the corresponding score
print("Softmax training accuracy:", grid_lr.score(X_train, y_train))
print("Softmax test accuracy:", grid_lr.score(X_test, y_test))
```
**Option 2: Support Vector Machine classifier**
```
# Instantiate an RBF Support Vector Machine
svm = SVC(kernel = "rbf", probability = True)
# Instantiate a PCA
pca = PCA()
# Create pipeline object
pipe = Pipeline(steps=[('pca', pca), ('SVC', svm)])
print('Hyperparameter tuning with randomized search over a grid')
# Choose a grid of hyperparameter values (arbitrary but reasonable; reference values taken from the documentation)
params_svm = {'SVC__C':[0.1, 1, 10, 100], 'SVC__gamma':[0.001, 0.01, 0.1], 'pca__n_components': [0.7, 0.8, 0.9]}
# Use randomized search with 5-fold cross-validation to sample the hyperparameter space and pick the best setting
grid_svm = RandomizedSearchCV(pipe, param_distributions = params_svm, cv=5, verbose =1, n_jobs = -1)
# Fit the model to the training set of the training data
grid_svm.fit(X_train, y_train)
# Report the best hyperparameters and the corresponding score
print("Best CV params", grid_svm.best_params_)
print("Best CV accuracy", grid_svm.best_score_)
```
**Option 3: Random Forest classifier**
```
# Instantiate a Random Forest Classifier
SEED = 123
rf = RandomForestClassifier(random_state = SEED) # set a seed to ensure reproducibility of results
print(rf.get_params()) # Look at the hyperparameters that can be tuned
# Instantiate a PCA object
pca = PCA()
# Create pipeline object
pipe = Pipeline(steps=[('pca', pca), ('RF', rf)])
print('Hyperparameter tuning with randomized search over a grid')
# Choose a grid of hyperparameter values (arbitrary but reasonable; reference values taken from the documentation)
params_rf = {"RF__n_estimators": [50, 100, 200, 300], 'RF__min_samples_leaf': [1, 5], 'RF__min_samples_split': [2, 5, 10],
             'pca__n_components' : [0.7, 0.8, 0.9]}
# Use randomized search with 5-fold cross-validation to sample the hyperparameter space and pick the best setting
grid_rf = RandomizedSearchCV(estimator = pipe, param_distributions = params_rf, cv = 5, n_jobs = -1)
# Fit the model to the training set of the training data
grid_rf.fit(X_train, y_train)
# Report the best hyperparameters and the corresponding score
print("Best CV params", grid_rf.best_params_)
print("Best CV accuracy", grid_rf.best_score_)
```
All 3 models return an object (which I called *grid_lr*, *grid_svm*, and *grid_rf*, respectively) that has an attribute called **.best_estimator_**, which holds the model with the best hyperparameters found during randomized-search cross-validation. This is the model you will use to make predictions.
**Evaluating the model's performance on the test set of the training data**
```
predicted_labels = grid_svm.best_estimator_.predict(X_test) # Here as an example I am using the support vector machine model
report_svm = classification_report(y_test, predicted_labels)
print(report_svm)
print("Accuracy:", accuracy_score(y_test, predicted_labels))
cnf_matrix = confusion_matrix(y_test, predicted_labels)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
class_names = grid_svm.best_estimator_.classes_  # class labels from the fitted model
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names, rotation=45)
plt.yticks(tick_marks, class_names)
# create heatmap
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print("Accuracy:", accuracy_score(y_test, predicted_labels))
grid_svm.best_estimator_.feature_names = list(human.var_names)
```
**Predict cell types in the mouse data**
```
def process_and_subset_data(adata, genes):
# save the log transformed counts as raw
adata.raw = adata.copy()
# Per cell normalization
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
# Log transformation
sc.pp.log1p(adata)
# Subset data
adata = adata[:, list(genes)]
# Scale
sc.pp.scale(adata, max_value=10)
return adata
def process_data(adata):
# Per cell normalization
sc.pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
# Log transformation
sc.pp.log1p(adata)
# Scale
sc.pp.scale(adata, max_value=10)
import scipy
def make_single_predictions(adata, classifier):
#if scipy.sparse.issparse(adata.X):
#adata.X = adata.X.toarray()
adata_X = np.array(adata.X)
print(type(adata_X), adata_X.shape)
adata_preds = classifier.predict(adata_X)
adata.obs['human_classifier_supporting'] = adata_preds
print(adata.obs.human_classifier_supporting.value_counts(dropna = False))
def make_correspondence(classifier):
corr = {}
for i in range(0,len(classifier.classes_)):
corr[i] = classifier.classes_[i]
return corr
def make_probability_predictions(adata, classifier):
adata_X = np.array(adata.X)
print(type(adata_X), adata_X.shape)
proba_preds = classifier.predict_proba(adata_X)
df_probs = pd.DataFrame(proba_preds)  # one column per class, ordered as classifier.classes_
corr = make_correspondence(classifier)
for index in df_probs.columns.values:
celltype = corr[index]
adata.obs['prob_'+celltype] = df_probs[index].to_list()
```
Mouse ovary (Niu et al., 2020)
```
mouse = process_and_subset_data(mouse, grid_svm.best_estimator_.feature_names)
make_single_predictions(mouse, grid_svm.best_estimator_)
make_probability_predictions(mouse, grid_svm.best_estimator_)
mouse
mouse.write('/nfs/team292/vl6/Mouse_Niu2020/supporting_cells_with_human_preds.h5ad')
mouse = sc.read('/nfs/team292/vl6/Mouse_Niu2020/supporting_cells_with_human_preds.h5ad')
mouse
mouse_predictions = mouse.obs[['prob_coelEpi', 'prob_ovarianSurf', 'prob_preGC_II', 'prob_preGC_III', 'prob_preGC_I_OSR1', 'prob_sKITLG', 'prob_sLGR5', 'prob_sPAX8b']]
mouse_predictions.columns = ['prob_coelEpi', 'prob_ovarianSurf', 'prob_preGC-II', 'prob_preGC-II-late', 'prob_preGC-I',
'prob_sKITLG', 'prob_sLGR5', 'prob_sPAX8b']
mouse_predictions.head()
mouse_predictions.to_csv('/nfs/team292/vl6/Mouse_Niu2020/mouse_Niu2020_supporting_predictions.csv')
```
# Nodejs MNIST model Deployment
* Wrap a nodejs tensorflow model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
## Dependencies
* ```pip install seldon-core```
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
* node (version>=8.11.0)
* npm
## Train locally using npm commands
This model takes in MNIST images of size 28x28x1 as input and outputs an array of size 10 with a prediction for each digit from 0-9
```
!make train && make clean_build
```
Training creates a `model.json` file and a `weights.bin` file, which are used for prediction
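To make the input/output contract concrete: the wrapped model maps one 28x28x1 image to a length-10 vector of per-digit scores, and the predicted digit is the argmax. A minimal Python sketch with a made-up output vector (the actual request/response format is defined by `contract.json`):

```python
import numpy as np

# Hypothetical length-10 output for one 28x28x1 input image;
# index i holds the model's score/probability for digit i.
probs = np.array([0.01, 0.02, 0.05, 0.02, 0.03, 0.70, 0.05, 0.04, 0.05, 0.03])

digit = int(np.argmax(probs))  # most likely digit
print(digit)                   # -> 5
```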
## Prediction using REST API on the docker container
```
!s2i build . seldonio/seldon-core-s2i-nodejs:0.2-SNAPSHOT node-s2i-mnist-model:0.1
!docker run --name "nodejs_mnist_predictor" -d --rm -p 5000:5000 node-s2i-mnist-model:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p -t
!docker rm nodejs_mnist_predictor --force
```
## Prediction using GRPC API on the docker container
```
!s2i build -E ./.s2i/environment_grpc . seldonio/seldon-core-s2i-nodejs:0.2-SNAPSHOT node-s2i-mnist-model:0.2
!docker run --name "nodejs_mnist_predictor" -d --rm -p 5000:5000 node-s2i-mnist-model:0.2
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p -t --grpc
!docker rm nodejs_mnist_predictor --force
```
## Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
!helm init
!kubectl rollout status deploy/tiller-deploy -n kube-system
!helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
!kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system
```
## Setup Ingress
Please note: There are reported gRPC issues with ambassador (see https://github.com/SeldonIO/seldon-core/issues/473).
```
!helm install stable/ambassador --name ambassador --set crds.keep=false
!kubectl rollout status deployment.apps/ambassador
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-nodejs:0.2-SNAPSHOT node-s2i-mnist-model:0.1
!kubectl create -f nodejs_mnist_deployment.json
!kubectl rollout status deploy/nodejs-mnist-deployment-nodejs-mnist-predictor-5aa9edd
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace seldon -p
!minikube delete
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 3
# Dimensionality Reduction and Reconstruction
__Content creators:__ Alex Cayco Gajic, John Murray
__Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom
---
# Tutorial Objectives
In this notebook we'll learn to apply PCA for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.
Overview:
- Perform PCA on MNIST
- Calculate the variance explained
- Reconstruct data with different numbers of PCs
- (Bonus) Examine denoising using PCA
You can learn more about MNIST dataset [here](https://en.wikipedia.org/wiki/MNIST_database).
```
# @title Video 1: PCA for dimensionality reduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="oO0bbInoO_0", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
---
# Setup
Run these cells to get the tutorial started.
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper Functions
def plot_variance_explained(variance_explained):
"""
Plots eigenvalues.
Args:
variance_explained (numpy array of floats) : Vector of variance explained
for each PC
Returns:
Nothing.
"""
plt.figure()
plt.plot(np.arange(1, len(variance_explained) + 1), variance_explained,
'--k')
plt.xlabel('Number of components')
plt.ylabel('Variance explained')
plt.show()
def plot_MNIST_reconstruction(X, X_reconstructed):
"""
Plots 9 images in the MNIST dataset side-by-side with the reconstructed
images.
Args:
X (numpy array of floats) : Data matrix each column
corresponds to a different
random variable
X_reconstructed (numpy array of floats) : Data matrix each column
corresponds to a different
random variable
Returns:
Nothing.
"""
plt.figure()
ax = plt.subplot(121)
k = 0
for k1 in range(3):
for k2 in range(3):
k = k + 1
plt.imshow(np.reshape(X[k, :], (28, 28)),
extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28],
vmin=0, vmax=255)
plt.xlim((3 * 28, 0))
plt.ylim((3 * 28, 0))
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
ax.set_xticks([])
ax.set_yticks([])
plt.title('Data')
plt.clim([0, 250])
ax = plt.subplot(122)
k = 0
for k1 in range(3):
for k2 in range(3):
k = k + 1
plt.imshow(np.reshape(np.real(X_reconstructed[k, :]), (28, 28)),
extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28],
vmin=0, vmax=255)
plt.xlim((3 * 28, 0))
plt.ylim((3 * 28, 0))
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
ax.set_xticks([])
ax.set_yticks([])
plt.clim([0, 250])
plt.title('Reconstructed')
plt.tight_layout()
def plot_MNIST_sample(X):
"""
Plots 9 images in the MNIST dataset.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
"""
fig, ax = plt.subplots()
k = 0
for k1 in range(3):
for k2 in range(3):
k = k + 1
plt.imshow(np.reshape(X[k, :], (28, 28)),
extent=[(k1 + 1) * 28, k1 * 28, (k2+1) * 28, k2 * 28],
vmin=0, vmax=255)
plt.xlim((3 * 28, 0))
plt.ylim((3 * 28, 0))
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
plt.clim([0, 250])
ax.set_xticks([])
ax.set_yticks([])
plt.show()
def plot_MNIST_weights(weights):
"""
Visualize PCA basis vector weights for MNIST. Red = positive weights,
blue = negative weights, white = zero weight.
Args:
weights (numpy array of floats) : PCA basis vector
Returns:
Nothing.
"""
fig, ax = plt.subplots()
cmap = plt.cm.get_cmap('seismic')
plt.imshow(np.real(np.reshape(weights, (28, 28))), cmap=cmap)
plt.tick_params(axis='both', which='both', bottom=False, top=False,
labelbottom=False)
plt.clim(-.15, .15)
plt.colorbar(ticks=[-.15, -.1, -.05, 0, .05, .1, .15])
ax.set_xticks([])
ax.set_yticks([])
plt.show()
def add_noise(X, frac_noisy_pixels):
"""
Randomly corrupts a fraction of the pixels by setting them to random values.
Args:
X (numpy array of floats) : Data matrix
frac_noisy_pixels (scalar) : Fraction of noisy pixels
Returns:
(numpy array of floats) : Data matrix + noise
"""
X_noisy = np.reshape(X, (X.shape[0] * X.shape[1]))
N_noise_ixs = int(X_noisy.shape[0] * frac_noisy_pixels)
noise_ixs = np.random.choice(X_noisy.shape[0], size=N_noise_ixs,
replace=False)
X_noisy[noise_ixs] = np.random.uniform(0, 255, noise_ixs.shape)
X_noisy = np.reshape(X_noisy, (X.shape[0], X.shape[1]))
return X_noisy
def change_of_basis(X, W):
"""
Projects data onto a new basis.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
Y = np.matmul(X, W)
return Y
def get_sample_cov_matrix(X):
"""
Returns the sample covariance matrix of data X.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Covariance matrix
"""
X = X - np.mean(X, 0)
cov_matrix = 1 / X.shape[0] * np.matmul(X.T, X)
return cov_matrix
def sort_evals_descending(evals, evectors):
"""
Sorts eigenvalues and eigenvectors in decreasing order. Also aligns first two
eigenvectors to be in first two quadrants (if 2D).
Args:
evals (numpy array of floats) : Vector of eigenvalues
evectors (numpy array of floats) : Corresponding matrix of eigenvectors
each column corresponds to a different
eigenvalue
Returns:
(numpy array of floats) : Vector of eigenvalues after sorting
(numpy array of floats) : Matrix of eigenvectors after sorting
"""
index = np.flip(np.argsort(evals))
evals = evals[index]
evectors = evectors[:, index]
if evals.shape[0] == 2:
if np.arccos(np.matmul(evectors[:, 0],
1 / np.sqrt(2) * np.array([1, 1]))) > np.pi / 2:
evectors[:, 0] = -evectors[:, 0]
if np.arccos(np.matmul(evectors[:, 1],
1 / np.sqrt(2)*np.array([-1, 1]))) > np.pi / 2:
evectors[:, 1] = -evectors[:, 1]
return evals, evectors
def pca(X):
"""
Performs PCA on multivariate data. Eigenvalues are sorted in decreasing order
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
"""
X = X - np.mean(X, 0)
cov_matrix = get_sample_cov_matrix(X)
evals, evectors = np.linalg.eigh(cov_matrix)
evals, evectors = sort_evals_descending(evals, evectors)
score = change_of_basis(X, evectors)
return score, evectors, evals
def plot_eigenvalues(evals, limit=True):
"""
Plots eigenvalues.
Args:
(numpy array of floats) : Vector of eigenvalues
Returns:
Nothing.
"""
plt.figure()
plt.plot(np.arange(1, len(evals) + 1), evals, 'o-k')
plt.xlabel('Component')
plt.ylabel('Eigenvalue')
plt.title('Scree plot')
if limit:
plt.show()
```
---
# Section 1: Perform PCA on MNIST
The MNIST dataset consists of 70,000 images of individual handwritten digits. Each image is a 28x28 pixel grayscale image. For convenience, each 28x28 pixel image is often unravelled into a single 784 (=28*28) element vector, so that the whole dataset is represented as a 70,000 x 784 matrix. Each row represents a different image, and each column represents a different pixel.
Run the following cell to load the MNIST dataset and plot the first nine images.
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml(name='mnist_784')
X = mnist.data
plot_MNIST_sample(X)
```
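The 28x28 → 784 unravelling described above is just a reshape; a minimal NumPy sketch on a dummy image (not the MNIST data):

```python
import numpy as np

image = np.arange(28 * 28).reshape(28, 28)  # dummy 28x28 "image"
flat = image.reshape(-1)                    # unravel into a 784-element vector
print(flat.shape)                           # -> (784,)

# Reshaping back recovers the original image exactly
assert np.array_equal(flat.reshape(28, 28), image)
```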
The MNIST dataset has an extrinsic dimensionality of 784, much higher than the 2-dimensional examples used in the previous tutorials! To make sense of this data, we'll use dimensionality reduction. But first, we need to determine the intrinsic dimensionality $K$ of the data. One way to do this is to look for an "elbow" in the scree plot, to determine which eigenvalues are significant.
## Exercise 1: Scree plot of MNIST
In this exercise you will examine the scree plot in the MNIST dataset.
**Steps:**
- Perform PCA on the dataset and examine the scree plot.
- When do the eigenvalues appear (by eye) to reach zero? (**Hint:** use `plt.xlim` to zoom into a section of the plot).
```
help(pca)
help(plot_eigenvalues)
#################################################
## TO DO for students: perform PCA and plot the eigenvalues
#################################################
# perform PCA
# score, evectors, evals = ...
# plot the eigenvalues
# plot_eigenvalues(evals, limit=False)
# plt.xlim(...) # limit x-axis up to 100 for zooming
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_a876e927.py)
*Example output:*
<img alt='Solution hint' align='left' width=558 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_a876e927_0.png>
---
# Section 2: Calculate the variance explained
The scree plot suggests that most of the eigenvalues are near zero, with fewer than 100 having large values. Another common way to determine the intrinsic dimensionality is by considering the variance explained. This can be examined with a cumulative plot of the fraction of the total variance explained by the top $K$ components, i.e.,
\begin{equation}
\text{var explained} = \frac{\sum_{i=1}^K \lambda_i}{\sum_{i=1}^N \lambda_i}
\end{equation}
The intrinsic dimensionality is often quantified by the $K$ necessary to explain a large proportion of the total variance of the data (often a defined threshold, e.g., 90%).
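As a sketch of this formula on a toy eigenvalue vector (not the MNIST eigenvalues): the cumulative sum of the eigenvalues, normalized by their total, gives the fraction of variance explained by the top $K$ components.

```python
import numpy as np

evals = np.array([5.0, 3.0, 1.0, 0.5, 0.5])   # toy eigenvalues, sorted descending
variance_explained = np.cumsum(evals) / np.sum(evals)
print(variance_explained)                     # cumulative fractions 0.5, 0.8, 0.9, 0.95, 1.0

# Smallest K whose cumulative variance explained reaches the 90% threshold
K = int(np.argmax(variance_explained >= 0.9)) + 1
print(K)                                      # -> 3
```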
## Exercise 2: Plot the explained variance
In this exercise you will plot the explained variance.
**Steps:**
- Fill in the function below to calculate the fraction of variance explained as a function of the number of principal components. **Hint:** use `np.cumsum`.
- Plot the variance explained using `plot_variance_explained`.
**Questions:**
- How many principal components are required to explain 90% of the variance?
- How does the intrinsic dimensionality of this dataset compare to its extrinsic dimensionality?
```
help(plot_variance_explained)
def get_variance_explained(evals):
"""
Calculates variance explained from the eigenvalues.
Args:
evals (numpy array of floats) : Vector of eigenvalues
Returns:
(numpy array of floats) : Vector of variance explained
"""
#################################################
## TO DO for students: calculate the explained variance using the equation
## from Section 2.
# Comment once you've filled in the function
raise NotImplementedError("Student exercise: calculate explained variance!")
#################################################
# cumulatively sum the eigenvalues
csum = ...
# normalize by the sum of eigenvalues
variance_explained = ...
return variance_explained
#################################################
## TO DO for students: call the function and plot the variance explained
#################################################
# calculate the variance explained
variance_explained = ...
# Uncomment to plot the variance explained
# plot_variance_explained(variance_explained)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_0f5f51b9.py)
*Example output:*
<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_0f5f51b9_0.png>
---
# Section 3: Reconstruct data with different numbers of PCs
```
# @title Video 2: Data Reconstruction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ZCUhW26AdBQ", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Now we have seen that the top 100 or so principal components of the data can explain most of the variance. We can use this fact to perform *dimensionality reduction*, i.e., by storing the data using only 100 components rather than all 784 pixel values of each sample. Remarkably, we will be able to reconstruct much of the structure of the data using only the top 100 components. To see this, recall that to perform PCA we projected the data $\bf X$ onto the eigenvectors of the covariance matrix:
\begin{equation}
\bf S = X W
\end{equation}
Since $\bf W$ is an orthogonal matrix, ${\bf W}^{-1} = {\bf W}^T$. So by multiplying by ${\bf W}^T$ on each side we can rewrite this equation as
\begin{equation}
{\bf X = S W}^T.
\end{equation}
This gives us a way to reconstruct the data matrix from the scores and loadings. To reconstruct a low-dimensional approximation of the data, we simply truncate these matrices: let ${\bf S}_{1:K}$ and ${\bf W}_{1:K}$ denote the matrices formed by keeping only the first $K$ columns of $\bf S$ and $\bf W$, respectively. Then our reconstruction is:
\begin{equation}
{\bf \hat X = S}_{1:K} ({\bf W}_{1:K})^T.
\end{equation}
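These reconstruction equations can be checked on toy 2-D data (independent of the MNIST exercise below); this sketch builds the PCA basis with `np.linalg.eigh` and verifies that keeping all components reconstructs the data exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])  # toy data, N=100, D=2
X_mean = X.mean(axis=0)
Xc = X - X_mean                              # centered data

cov = Xc.T @ Xc / Xc.shape[0]                # sample covariance matrix
evals, W = np.linalg.eigh(cov)               # columns of W are eigenvectors (ascending order)
order = np.argsort(evals)[::-1]              # re-sort by decreasing eigenvalue
evals, W = evals[order], W[:, order]

S = Xc @ W                                   # scores: S = X W
X_hat_full = S @ W.T + X_mean                # all K=D components: exact reconstruction
X_hat_1 = S[:, :1] @ W[:, :1].T + X_mean     # truncate to the top component only

assert np.allclose(X_hat_full, X)            # W is orthogonal, so S W^T recovers X
print(np.mean((X - X_hat_1) ** 2))           # small residual from the dropped component
```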
## Exercise 3: Data reconstruction
Fill in the function below to reconstruct the data using different numbers of principal components.
**Steps:**
* Fill in the following function to reconstruct the data based on the weights and scores. Don't forget to add the mean!
* Make sure your function works by reconstructing the data with all $K=784$ components. The two images should look identical.
```
help(plot_MNIST_reconstruction)
def reconstruct_data(score, evectors, X_mean, K):
"""
Reconstruct the data based on the top K components.
Args:
score (numpy array of floats) : Score matrix
evectors (numpy array of floats) : Matrix of eigenvectors
X_mean (numpy array of floats) : Vector corresponding to data mean
K (scalar) : Number of components to include
Returns:
(numpy array of floats) : Matrix of reconstructed data
"""
#################################################
## TO DO for students: Reconstruct the original data in X_reconstructed
# Comment once you've filled in the function
raise NotImplementedError("Student exercise: reconstructing data function!")
#################################################
# Reconstruct the data from the score and eigenvectors
# Don't forget to add the mean!!
X_reconstructed = ...
return X_reconstructed
K = 784
#################################################
## TO DO for students: Calculate the mean and call the function, then plot
## the original and the reconstructed data
#################################################
# Reconstruct the data based on all components
X_mean = ...
X_reconstructed = ...
# Plot the data and reconstruction
# plot_MNIST_reconstruction(X, X_reconstructed)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_e3395916.py)
*Example output:*
<img alt='Solution hint' align='left' width=557 height=289 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_e3395916_0.png>
## Interactive Demo: Reconstruct the data matrix using different numbers of PCs
Now run the code below and experiment with the slider to reconstruct the data matrix using different numbers of principal components.
**Steps**
* How many principal components are necessary to reconstruct the numbers (by eye)? How does this relate to the intrinsic dimensionality of the data?
* Do you see any information in the data with only a single principal component?
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
def refresh(K=100):
    X_reconstructed = reconstruct_data(score, evectors, X_mean, K)
    plot_MNIST_reconstruction(X, X_reconstructed)
    plt.title('Reconstructed, K={}'.format(K))
_ = widgets.interact(refresh, K=(1, 784, 10))
```
## Exercise 4: Visualization of the weights
Next, let's take a closer look at the first principal component by visualizing its corresponding weights.
**Steps:**
* Enter `plot_MNIST_weights` to visualize the weights of the first basis vector.
* What structure do you see? Which pixels have a strong positive weighting? Which have a strong negative weighting? What kinds of images would this basis vector differentiate?
* Try visualizing the second and third basis vectors. Do you see any structure? What about the 100th basis vector? 500th? 700th?
```
help(plot_MNIST_weights)
#################################################
## TO DO for students: plot the weights calling the plot_MNIST_weights function
#################################################
# Plot the weights of the first principal component
# plot_MNIST_weights(...)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_f358e413.py)
*Example output:*
<img alt='Solution hint' align='left' width=499 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_f358e413_0.png>
---
# Summary
* In this tutorial, we learned how to use PCA for dimensionality reduction by selecting the top principal components. This can be useful as the intrinsic dimensionality ($K$) is often less than the extrinsic dimensionality ($N$) in neural data. $K$ can be inferred by choosing the number of eigenvalues necessary to capture some fraction of the variance.
* We also learned how to reconstruct an approximation of the original data using the top $K$ principal components. In fact, an alternate formulation of PCA is to find the $K$ dimensional space that minimizes the reconstruction error.
* Noise tends to inflate the apparent intrinsic dimensionality; however, the higher components reflect noise rather than new structure in the data. PCA can be used to denoise data by removing these noisy higher components.
* In MNIST, the weights corresponding to the first principal component appear to discriminate between a 0 and 1. We will discuss the implications of this for data visualization in the following tutorial.
---
# Bonus: Examine denoising using PCA
In this lecture, we saw that PCA finds an optimal low-dimensional basis to minimize the reconstruction error. Because of this property, PCA can be useful for denoising corrupted samples of the data.
## Exercise 5: Add noise to the data
In this exercise you will add salt-and-pepper noise to the original data and see how that affects the eigenvalues.
**Steps:**
- Use the function `add_noise` to add noise to 20% of the pixels.
- Then, perform PCA and plot the variance explained. How many principal components are required to explain 90% of the variance? How does this compare to the original data?
```
help(add_noise)
###################################################################
# Insert your code here to:
# Add noise to the data
# Plot noise-corrupted data
# Perform PCA on the noisy data
# Calculate and plot the variance explained
###################################################################
np.random.seed(2020) # set random seed
X_noisy = ...
# score_noisy, evectors_noisy, evals_noisy = ...
# variance_explained_noisy = ...
# plot_MNIST_sample(X_noisy)
# plot_variance_explained(variance_explained_noisy)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_d4a41b8c.py)
*Example output:*
<img alt='Solution hint' align='left' width=424 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_d4a41b8c_0.png>
<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_d4a41b8c_1.png>
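The `add_noise` helper used above is defined in the tutorial's hidden setup, so its exact behavior isn't shown here. As a rough illustration, salt-and-pepper corruption might look like the following hypothetical stand-in (the real helper may differ in how the corrupted fraction and pixel values are chosen):

```python
import numpy as np

def add_noise_sketch(X, frac_corrupted=0.2, seed=2020):
    """Hypothetical salt-and-pepper noise: corrupt a random fraction of
    pixels, setting each corrupted pixel to either 0 ('pepper') or the
    data maximum ('salt'). A sketch, not the tutorial's `add_noise`."""
    rng = np.random.default_rng(seed)
    X_noisy = X.astype(float)              # astype returns a copy
    corrupt = rng.random(X.shape) < frac_corrupted
    salt = rng.random(X.shape) < 0.5
    X_noisy[corrupt & salt] = X.max()
    X_noisy[corrupt & ~salt] = 0.0
    return X_noisy
```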
## Exercise 6: Denoising
Next, use PCA to perform denoising by projecting the noise-corrupted data onto the basis vectors found from the original dataset. By taking the top K components of this projection, we can reduce noise in dimensions orthogonal to the K-dimensional latent space.
**Steps:**
- Subtract the mean of the noise-corrupted data.
- Project the data onto the basis found with the original dataset (`evectors`, not `evectors_noisy`) and take the top $K$ components.
- Reconstruct the data as normal, using the top 50 components.
- Play around with the amount of noise and K to build intuition.
```
###################################################################
# Insert your code here to:
# Subtract the mean of the noise-corrupted data
# Project onto the original basis vectors evectors
# Reconstruct the data using the top 50 components
# Plot the result
###################################################################
X_noisy_mean = ...
projX_noisy = ...
X_reconstructed = ...
# plot_MNIST_reconstruction(X_noisy, X_reconstructed)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5_DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_e3ee8262.py)
*Example output:*
<img alt='Solution hint' align='left' width=557 height=289 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D5_DimensionalityReduction/static/W1D5_Tutorial3_Solution_e3ee8262_0.png>
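To build intuition for why this works, here is a self-contained sketch of the same denoising steps on toy low-rank data (an illustration only, independent of the tutorial's helpers): because the clean signal lives in a $K$-dimensional subspace, projecting the noisy data onto the clean basis discards the noise in the remaining directions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank 'clean' data: 100 samples in a 3-dimensional subspace of 8
X_clean = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 8))
X_noisy = X_clean + 0.5 * rng.normal(size=X_clean.shape)

# Basis vectors found from the *clean* data, sorted by variance
evals, evectors = np.linalg.eigh(np.cov(X_clean - X_clean.mean(0), rowvar=False))
evectors = evectors[:, np.argsort(evals)[::-1]]

# Subtract the noisy mean, project onto the top K clean components, reconstruct
K = 3
X_noisy_mean = X_noisy.mean(axis=0)
projX_noisy = (X_noisy - X_noisy_mean) @ evectors[:, :K]
X_denoised = projX_noisy @ evectors[:, :K].T + X_noisy_mean

# Noise in the 8 - K discarded directions is removed, so the denoised
# reconstruction should land closer to the clean data than X_noisy does
err_noisy = np.linalg.norm(X_noisy - X_clean)
err_denoised = np.linalg.norm(X_denoised - X_clean)
```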
# Projection, Joining, and Sorting
## Setup
```
import ibis
import os
hdfs_port = os.environ.get('IBIS_WEBHDFS_PORT', 50070)
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
                          hdfs_client=hdfs)
print('Hello!')
```
## Projections: adding/selecting columns
Projections are the general way for adding new columns to tables, or selecting or removing existing ones.
```
table = con.table('functional_alltypes')
table.limit(5)
```
First, the basics: selecting columns:
```
proj = table['bool_col', 'int_col', 'double_col']
proj.limit(5)
```
You can make a list of columns you want, too, and pass that:
```
to_select = ['bool_col', 'int_col']
table[to_select].limit(5)
```
You can also use the explicit `projection` or `select` functions
```
table.select(['int_col', 'double_col']).limit(5)
```
We can add new columns by using named column expressions
```
bigger_expr = (table.int_col * 2).name('bigger_ints')
proj2 = table['int_col', bigger_expr]
proj2.limit(5)
```
Adding columns is a shortcut for projection. In Ibis, adding columns always produces a new table reference
```
table2 = table.add_column(bigger_expr)
table2.limit(5)
```
In more complicated projections involving joins, we may need to refer to all of the columns in a table at once. This is how `add_column` works. We just pass the whole table in the projection:
```
table.select([table, bigger_expr]).limit(5)
```
To use constants in projections, we have to use a special `ibis.literal` function
```
foo_constant = ibis.literal(5).name('foo')
table.select([table.bigint_col, foo_constant]).limit(5)
```
## Joins
Ibis attempts to provide good support for all the standard relational joins supported by Impala, Hive, and other relational databases.
- inner, outer, left, right joins
- semi and anti-joins
To illustrate the joins we'll use the TPC-H tables for now
```
region = con.table('tpch_region')
nation = con.table('tpch_nation')
customer = con.table('tpch_customer')
lineitem = con.table('tpch_lineitem')
```
`region` and `nation` are connected by their respective `regionkey` columns
```
join_expr = region.r_regionkey == nation.n_regionkey
joined = region.inner_join(nation, join_expr)
```
If you have multiple join conditions, either compose them yourself (like filters) or pass a list to the join function
    join_exprs = [cond1, cond2, cond3]
    joined = table1.inner_join(table2, join_exprs)
Once you've joined tables, you don't necessarily have anything yet. I'll put it in big letters
### Joins are declarations of intent
After calling the join function (which validates the join condition, of course), you may perform any number of other operations:
- Aggregation
- Projection
- Filtering
and so forth. Most importantly, depending on your schemas, the joined tables may include overlapping column names that could create a conflict if not addressed directly. Some other systems, like pandas, handle this by applying suffixes to the overlapping column names and computing the fully joined tables immediately. We don't do this.
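For contrast, here is a small pandas example of the suffixing behavior described above (pandas' default suffixes are `_x` and `_y`):

```python
import pandas as pd

a = pd.DataFrame({'key': [1, 2], 'value': [10, 20]})
b = pd.DataFrame({'key': [1, 2], 'value': [30, 40]})

# pandas resolves the overlapping 'value' column by suffixing, and
# computes the fully joined frame immediately -- the behavior ibis avoids
merged = a.merge(b, on='key')
print(list(merged.columns))   # ['key', 'value_x', 'value_y']
```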
So, with the above data, suppose we just want the region name and all the nation table data. We can then make a projection on the joined reference:
```
table_ref = joined[nation, region.r_name.name('region')]
table_ref.columns
table_ref.limit(5)
agged = table_ref.aggregate([table_ref.n_name.count().name('nrows')], by=['region'])
agged
```
Things like `group_by` work with unmaterialized joins, too, as you would hope.
```
joined.group_by(region.r_name).size()
```
### Explicit join materialization
If you're lucky enough to have two table schemas with no overlapping column names (lucky you!), the join can be *materialized* without having to perform some other relational algebra operation:
    joined = a.inner_join(b, join_expr).materialize()

Note that this is equivalent to doing

    joined = a.join(b, join_expr)[a, b]

i.e., joining and then selecting all columns from both joined tables. If there is a name overlap, just like with the equivalent projection, there will be an immediate error.
### Writing down join keys
In addition to having explicit comparison expressions as join keys, you can also write down column names, or use expressions referencing the joined tables, e.g.:
    joined = a.join(b, [('a_key1', 'b_key2')])
    joined2 = a.join(b, [(left_expr, right_expr)])
    joined3 = a.join(b, ['common_key'])
These will be compared for equality when performing the join; if you want non-equality conditions in the join, you will have to form those yourself.
### Join referential nuances
There's nothing to stop you from doing many joins in succession, and, in fact, with complex schemas it will be to your advantage to build the joined table references for your analysis first, then reuse the objects as you go:
    joined_ref = (a.join(b, a.key1 == b.key2)
                   .join(c, [a.key3 == c.key4, b.key5 == c.key6]))
Note that, at least right now, you need to provide explicit comparison expressions (or tuples of column references) referencing the joined tables.
### Aggregating joined table with metrics involving more than one base reference
Let's consider the case similar to the SQL query
    SELECT a.key, sum(a.foo - b.bar) AS metric
    FROM a
      JOIN b
        ON a.key = b.key
    GROUP BY 1
I'll use a somewhat contrived example using the data we already have to show you what this looks like. Take the `functional_alltypes` table, and suppose we want to compute the **mean absolute deviation (MAD) from the hourly mean of the double_col**. Silly, I know, but bear with me.
First, the hourly mean:
```
table = con.table('functional_alltypes')
hour_dim = table.timestamp_col.hour().name('hour')
hourly_mean = (table.group_by(hour_dim)
                    .aggregate([table.double_col.mean().name('avg_double')]))
hourly_mean
```
Okay, great, now how about the MAD? The only trick here is that we can form an aggregate metric from the two tables, and we then have to join it later. Ibis **will not** figure out how to join the tables automatically for us.
```
mad = (table.double_col - hourly_mean.avg_double).abs().mean().name('MAD')
```
This metric is only valid if used in the context of `table` joined with `hourly_mean`, so let's do that. Writing down the join condition is simply a matter of writing:
```
join_expr = hour_dim == hourly_mean.hour
```
Now let's compute the MAD grouped by `string_col`
```
result = (table.inner_join(hourly_mean, join_expr)
               .group_by(table.string_col)
               .aggregate([mad]))
result
```
## Sorting
Sorting tables works similarly to the SQL `ORDER BY` clause. We use the `sort_by` function and pass one of the following:
- Column names
- Column expressions
- One of these, with a False (descending order) or True (ascending order) qualifier
So, to sort by `total` in ascending order we write:

    table.sort_by('total')

or by `key` then by `total` in descending order:

    table.sort_by(['key', ('total', False)])

For descending sort order, there is a convenience function `desc` which can wrap sort keys:

    from ibis import desc
    table.sort_by(['key', desc(table.total)])
Here's a concrete example involving filters, custom grouping dimension, and sorting
```
table = con.table('functional_alltypes')
keys = ['string_col', (table.bigint_col > 40).ifelse('high', 'low').name('bigint_tier')]
metrics = [table.double_col.sum().name('total')]
agged = (table
         .filter(table.int_col < 8)
         .group_by(keys)
         .aggregate(metrics))
sorted_agged = agged.sort_by(['bigint_tier', ('total', False)])
sorted_agged
```
For sorting in descending order, you can use the special `ibis.desc` function:
```
agged.sort_by(ibis.desc('total'))
```
# Predicting Boston Housing Prices
## Using XGBoost in SageMaker (Hyperparameter Tuning)
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's High Level Python API for hyperparameter tuning, we will look again at the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.
The documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/)
## General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we are only interested in creating a tuned model and testing its performance.
## Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
```
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
```
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.predictor import csv_serializer
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
```
## Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
```
boston = load_boston()
```
## Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
```
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
## Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
### Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
```
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
```
prefix = 'boston-xgboost-tuning-HL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
## Step 4: Train the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. Unlike in the previous notebooks, instead of training a single model, we will use SageMaker's hyperparameter tuning functionality to train multiple models and use the one that performs the best on the validation set.
To begin with, as in the previous approaches, we will need to construct an estimator object.
```
# As stated above, we use this utility method to construct the image name for the training container.
container = get_image_uri(session.boto_region_name, 'xgboost')
# Now that we know which container to use, we can construct the estimator object.
xgb = sagemaker.estimator.Estimator(container, # The name of the training container
                                    role,      # The IAM role to use (our current role in this case)
                                    train_instance_count=1, # The number of instances to use for training
                                    train_instance_type='ml.m4.xlarge', # The type of instance to use for training
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                                # Where to save the output (the model artifacts)
                                    sagemaker_session=session) # The current SageMaker session
```
Before beginning the hyperparameter tuning, we should make sure to set any model specific hyperparameters that we wish to have default values. There are quite a few that can be set when using the XGBoost algorithm, below are just a few of them. If you would like to change the hyperparameters below or modify additional ones you can find additional information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html)
```
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        objective='reg:linear',
                        early_stopping_rounds=10,
                        num_round=200)
```
Now that we have our estimator object completely set up, it is time to create the hyperparameter tuner. To do this we need to construct a new object which contains each of the parameters we want SageMaker to tune. In this case, we wish to find the best values for the `max_depth`, `eta`, `min_child_weight`, `subsample`, and `gamma` parameters. Note that for each parameter that we want SageMaker to tune we need to specify both the *type* of the parameter and the *range* of values that parameter may take on.
In addition, we specify the *number* of models to construct (`max_jobs`) and the number of those that can be trained in parallel (`max_parallel_jobs`). In the cell below we have chosen to train `20` models, of which we ask that SageMaker train `3` at a time in parallel. Note that this results in a total of `20` training jobs being executed which can take some time, in this case almost a half hour. With more complicated models this can take even longer so be aware!
```
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
                                               objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
                                               objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
                                               max_jobs = 20, # The total number of models to train
                                               max_parallel_jobs = 3, # The number of models to train in parallel
                                               hyperparameter_ranges = {
                                                    'max_depth': IntegerParameter(3, 12),
                                                    'eta': ContinuousParameter(0.05, 0.5),
                                                    'min_child_weight': IntegerParameter(2, 8),
                                                    'subsample': ContinuousParameter(0.5, 0.9),
                                                    'gamma': ContinuousParameter(0, 10),
                                               })
```
Now that we have our hyperparameter tuner object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.
```
# This is a wrapper around the location of our train and validation data, to make sure that SageMaker
# knows our data is in csv format.
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
As in many of the examples we have seen so far, the `fit()` method takes care of setting up and fitting a number of different models, each with different hyperparameters. If we wish to wait for this process to finish, we can call the `wait()` method.
```
xgb_hyperparameter_tuner.wait()
```
Once the hyperparameter tuner has finished, we can retrieve information about the best performing model.
```
xgb_hyperparameter_tuner.best_training_job()
```
In addition, since we'd like to set up a batch transform job to test the best model, we can construct a new estimator object from the results of the best training job. The `xgb_attached` object below can now be used as though we constructed an estimator with the best performing hyperparameters and then fit it to our training data.
```
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
```
## Step 5: Test the model
Now that we have our best performing model, we can test it. To do this we will use the batch transform functionality. To start with, we need to build a transformer object from our fit model.
```
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once.
Note that when we ask SageMaker to do this it will execute the batch transform job in the background. Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong.
```
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
```
Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
```
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
```
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
```
**Instructions:**
1. **For all questions after the 10th, please only use the data specified in the note given just below the question.**
2. **You need to add your answers in the same file, i.e. 'PDS_UberDriveProject_Questions.ipynb', and rename that file as 'Name_Date.ipynb'. You can mention the date on which you will be uploading/submitting the file. For example, if you plan to submit your assignment on 31-March, you can rename the file 'STUDENTNAME_31-Mar-2020'.**
# Load the necessary libraries. Import and load the dataset with a name uber_drives .
```
import pandas as pd
import numpy as np
# Get the Data
data_uber_driver = pd.read_csv('uberdrive-1.csv')
```
## Q1. Show the last 10 records of the dataset. (2 point)
```
data_uber_driver.tail(10)
```
## Q2. Show the first 10 records of the dataset. (2 points)
```
data_uber_driver.head(10)
```
## Q3. Show the dimension(number of rows and columns) of the dataset. (2 points)
```
data_uber_driver.shape
```
## Q4. Show the size (Total number of elements) of the dataset. (2 points)
```
data_uber_driver.size
```
## Q5. Print the information about all the variables of the data set. (2 points)
```
data_uber_driver.info()
```
## Q6. Check for missing values. (2 points) - Note: Output should be boolean only.
```
data_uber_driver.isna()
```
## Q7. How many missing values are present? (2 points)
```
data_uber_driver.isna().sum().sum()
```
## Q8. Get the summary of the original data. (2 points). Hint:Outcome will contain only numerical column.
```
data_uber_driver.describe()
```
## Q9. Drop the missing values and store the data in a new dataframe (name it"df") (2-points)
### Note: Dataframe "df" will not contain any missing value
```
df = data_uber_driver.dropna()
```
## Q10. Check the information of the dataframe(df). (2 points)
```
df.info()
```
## Q11. Get the unique start destinations. (2 points)
### Note: This question is based on the dataframe with no 'NA' values
### Hint- You need to print the unique destination place names in this and not the count.
```
df['START*'].unique()
```
## Q12. What is the total number of unique start destinations? (2 points)
### Note: Use the original dataframe without dropping 'NA' values
```
data_uber_driver['START*'].nunique()
```
## Q13. Print the total number of unique stop destinations. (2 points)
### Note: Use the original dataframe without dropping 'NA' values.
```
data_uber_driver['STOP*'].unique().size
```
## Q14. Print all the Uber trips that has the starting point of San Francisco. (2 points)
### Note: Use the original dataframe without dropping the 'NA' values.
### Hint: Use the loc function
```
data_uber_driver[data_uber_driver['START*']=='San Francisco']
```
## Q15. What is the most popular starting point for the Uber drivers? (2 points)
### Note: Use the original dataframe without dropping the 'NA' values.
### Hint:Popular means the place that is visited the most
```
data_uber_driver['START*'].value_counts().idxmax()
```
## Q16. What is the most popular dropping point for the Uber drivers? (2 points)
### Note: Use the original dataframe without dropping the 'NA' values.
### Hint: Popular means the place that is visited the most
```
data_uber_driver['STOP*'].value_counts().idxmax()
```
## Q17. List the most frequent route taken by Uber drivers. (3 points)
### Note: This question is based on the new dataframe with no 'na' values.
### Hint-Print the most frequent route taken by Uber drivers (Route= combination of START & END points present in the Data set).
```
# Count trips per (START, STOP) pair and keep the most frequent route
df.groupby(['START*', 'STOP*']).size().sort_values(ascending=False).head(1)
```
## Q18. Print all types of purposes for the trip in an array. (3 points)
### Note: This question is based on the new dataframe with no 'NA' values.
```
df['PURPOSE*'].unique()
```
## Q19. Plot a bar graph of Purpose vs Miles(Distance). (3 points)
### Note: Use the original dataframe without dropping the 'NA' values.
### Hint: You have to plot total/sum miles per purpose
```
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(19,5))
ax = fig.add_axes([0,0,1,1])
# replace missing 'PURPOSE' values with a placeholder label
data_uber_driver["PURPOSE*"].fillna("NO_PURPOSE_PROVIDED", inplace=True)
# replace missing 'MILES' values with 0 so the column stays numeric
data_uber_driver["MILES*"].fillna(0, inplace=True)
# plot the total miles per purpose, as the hint asks
miles_per_purpose = data_uber_driver.groupby("PURPOSE*")["MILES*"].sum()
ax.bar(miles_per_purpose.index, miles_per_purpose.values)
plt.show()
```
## Q20. Print a dataframe of Purposes and the distance travelled for that particular Purpose. (3 points)
### Note: Use the original dataframe without dropping "NA" values
```
data_uber_driver.groupby(by=["PURPOSE*"])["MILES*"].sum().to_frame()
```
## Q21. Plot number of trips vs Category of trips. (3 points)
### Note: Use the original dataframe without dropping the 'NA' values.
### Hint : You can make a countplot or barplot.
```
# import seaborn as sns
# sns.countplot(x='CATEGORY*',data=data_uber_driver)
data_uber_driver['CATEGORY*'].value_counts().plot(kind='bar',figsize=(19,7),color='red');
```
## Q22. What is proportion of trips that is Business and what is the proportion of trips that is Personal? (3 points)
### Note:Use the original dataframe without dropping the 'NA' values. The proportion calculation is with respect to the 'miles' variable.
### Hint: Out of the category of trips, you need to find, percentage-wise, how many are business and how many are personal on the basis of miles per category.
```
data_uber_driver.groupby('CATEGORY*')['MILES*'].sum() / data_uber_driver['MILES*'].sum() * 100
```
```
import process_output
from PIL import Image, ImageEnhance, ImageFilter
import requests
from io import BytesIO
import imgkit
import json
def get_unsplash_url(client_id, query, orientation):
root = 'https://api.unsplash.com/'
path = 'photos/random/?client_id={}&query={}&orientation={}'
search_url = root + path.format(client_id, query, orientation)
api_response = requests.get(search_url)
data = api_response.json()
api_response.close()
#print(json.dumps(data, indent=4, sort_keys=True))
return data['urls']['regular']
client_id = 'L-CxZwGQjlKToJ1xdSiBCnj1gAyUJ0nBLKYqaQOXOAg'
query = 'nature dark'
orientation = 'landscape'
image_url = get_unsplash_url(client_id, query, orientation)
quote_text = process_output.get_quote()
image_response = requests.get(image_url)
img = Image.open(BytesIO(image_response.content))
image_response.close()
# resize down until either a desired width or height is achieved, then crop the other dimension
# to achieve a non-distorted version of the image with desired dimensions
def resize_crop(im, desired_width=800, desired_height=600):
width, height = im.size
if width/height > desired_width/desired_height:
im.thumbnail((width, desired_height))
else:
im.thumbnail((desired_width, height))
width, height = im.size
box = [0, 0, width, height] # left, upper, right, lower
if width > desired_width:
box[0] = width/2 - desired_width/2
box[2] = width/2 + desired_width/2
if height > desired_height:
box[1] = height/2 - desired_height/2
box[3] = height/2 + desired_height/2
im = im.crop(box=box)
return im
def reduce_color(im, desired_color=0.5):
converter = ImageEnhance.Color(im)
im = converter.enhance(desired_color)
return im
def gaussian_blur(im, radius=2):
im = im.filter(ImageFilter.GaussianBlur(radius=radius))
return im
img = resize_crop(img)
img = reduce_color(img)
#img = gaussian_blur(img)
img.save('backdrop.jpg')
html_doc = None
with open('image_template.html', 'r') as f:
html_doc = f.read()
html_doc = html_doc.replace('dynamictext', quote_text)
#print(len(quote_text))
def get_font_size(text):
size = len(text)
if size < 40:
return '44'
if size < 75:
return '36'
return '30'
html_doc = html_doc.replace('dynamicfontsize', get_font_size(quote_text))
with open('image_out.html', 'w') as f:
f.write(html_doc)
imgkit.from_file('image_out.html', 'image_out.jpg', options={'width' : 800,
'height' : 600,
'quality' : 100,
'encoding' : 'utf-8'
})
```
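The centering arithmetic inside `resize_crop` can be sanity-checked in isolation. A minimal sketch, where `crop_box` is a hypothetical helper introduced here only to mirror the box computation above:

```python
def crop_box(width, height, desired_width=800, desired_height=600):
    """Mirror of the box computation in resize_crop: center-crop to the desired size."""
    box = [0, 0, width, height]  # left, upper, right, lower
    if width > desired_width:
        box[0] = width / 2 - desired_width / 2
        box[2] = width / 2 + desired_width / 2
    if height > desired_height:
        box[1] = height / 2 - desired_height / 2
        box[3] = height / 2 + desired_height / 2
    return box

# a 1067x600 thumbnail (from a 16:9 source) is cropped symmetrically to 800 wide
print(crop_box(1067, 600))  # [133.5, 0, 933.5, 600]
```

Note that the box is centered, so equal amounts are trimmed from each side.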
# Robust Scaler - Experiment
This is a component that scales features using statistics that are robust to outliers. This scaler removes the median and scales the data according to a quantile range (the default is the interquartile range). The interquartile range is the range between the 1st quartile (25th percentile) and the 3rd quartile (75th percentile). It uses the [Scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html) implementation. <br>
Scikit-learn is an open-source machine learning library that supports supervised and unsupervised learning. It also provides several tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities.
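Independently of the pipeline below, the transform itself is easy to state: subtract the per-feature median and divide by the interquartile range. A minimal sketch checking this against scikit-learn (toy data; the default quantile range of 25–75 is assumed):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

x = np.arange(1.0, 10.0).reshape(-1, 1)       # values 1..9 as a single feature
q1, med, q3 = np.percentile(x, [25, 50, 75])  # 3.0, 5.0, 7.0
manual = (x - med) / (q3 - q1)                # (x - median) / IQR

scaled = RobustScaler().fit_transform(x)
print(np.allclose(scaled, manual))            # True
```

The largest value, 9, maps to (9 − 5)/4 = 1.0, illustrating that the output is unitless and centered on the median.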
## Declaring parameters and hyperparameters
Declare parâmetros com o botão <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsMIwnXL7c0AAACDUlEQVQ4y92UP4gTQRTGf29zJxhJZ2NxbMBKziYWlmJ/ile44Nlkd+dIYWFzItiNgoIEtFaTzF5Ac/inE/urtLWxsMqmUOwCEpt1Zmw2xxKi53XitPO9H9978+aDf/3IUQvSNG0450Yi0jXG7C/eB0cFeu9viciGiDyNoqh2KFBrHSilWstgnU7nFLBTgl+ur6/7PwK11kGe5z3n3Hul1MaiuCgKDZwALHA7z/Oe1jpYCtRaB+PxuA8kQM1aW68Kt7e3zwBp6a5b1ibj8bhfhQYVZwMRiQHrvW9nWfaqCrTWPgRWvPdvsiy7IyLXgEJE4slk8nw+T5nDgDbwE9gyxryuwpRSF5xz+0BhrT07HA4/AyRJchUYASvAbhiGaRVWLIMBYq3tAojIszkMoNRulbXtPM8HwV/sXSQi54HvQRDcO0wfhGGYArvAKjAq2wAgiqJj3vsHpbtur9f7Vi2utLx60LLW2hljEuBJOYu9OI6vAzQajRvAaeBLURSPlsBelA+VhWGYaq3dwaZvbm6+m06noYicE5ErrVbrK3AXqHvvd4bD4Ye5No7jSERGwKr3Pms2m0pr7Rb30DWbTQWYcnFvAieBT7PZbFB1V6vVfpQaU4UtDQetdTCZTC557/eA48BlY8zbRZ1SqrW2tvaxCvtt2iRJ0i9/xb4x5uJRwmNlaaaJ3AfqIvKY/+78Av++6uiSZhYMAAAAAElFTkSuQmCC" /> na barra de ferramentas.<br>
The `dataset` variable holds the path for reading the files imported in the "Data upload" task.<br>
Você também pode importar arquivos com o botão <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsOBy6ASTeXAAAC/0lEQVQ4y5WUT2gcdRTHP29m99B23Uiq6dZisgoWCxVJW0oL9dqLfyhCvGWY2YUBI95MsXgwFISirQcLhS5hfgk5CF3wJIhFI7aHNsL2VFZFik1jS1qkiZKdTTKZ3/MyDWuz0fQLc/m99/vMvDfv+4RMlUrlkKqeAAaBAWAP8DSgwJ/AXRG5rao/WWsvTU5O3qKLBMD3fSMiPluXFZEPoyj67PGAMzw83PeEMABHVT/oGpiamnoAmCcEWhH5tFsgF4bh9oWFhfeKxeJ5a+0JVT0oImWgBPQCKfAQuAvcBq67rltX1b+6ApMkKRcKhe9V9QLwbavV+qRer692Sx4ZGSnEcXw0TdP3gSrQswGYz+d/S5IkVtXTwOlCoZAGQXAfmAdagAvsAErtdnuXiDy6+023l7qNRsMODg5+CawBzwB9wFPA7mx8ns/KL2Tl3xCRz5eWlkabzebahrHxPG+v4zgnc7ncufHx8Z+Hhoa29fT0lNM03Q30ikiqqg+ttX/EcTy3WTvWgdVqtddaOw/kgXvADHBHROZVNRaRvKruUNU+EdkPfGWM+WJTYOaSt1T1LPDS/4zLWWPMaLVaPWytrYvIaBRFl/4F9H2/JCKvGmMu+76/X0QOqGoZKDmOs1NV28AicMsYc97zvFdc1/0hG6kEeNsY83UnsCwivwM3VfU7YEZE7lhr74tIK8tbnJiYWPY8b6/ruleAXR0ftQy8boyZXi85CIIICDYpc2ZgYODY3NzcHmvt1eyvP64lETkeRdE1yZyixWLx5U2c8q4x5mIQBE1g33/0d3FlZeXFR06ZttZesNZejuO4q1NE5CPgWVV9E3ij47wB1IDlJEn+ljAM86urq7+KyAtZTgqsO0VV247jnOnv7/9xbGzMViqVMVX9uANYj6LonfVtU6vVkjRNj6jqGeCXzGrPAQeA10TkuKpOz87ONrayhnIA2Qo7BZwKw3B7kiRloKSqO13Xja21C47jPNgysFO1Wi0GmtmzQap6DWgD24A1Vb3SGf8Hfstmz1CuXEIAAAAASUVORK5CYII=" /> na barra de ferramentas.
```
# parameters
dataset = "/tmp/data/iris.csv" #@param {type:"string"}
target = None #@param {type:"feature", label:"Target attribute", description: "This value is used to ensure the target is not removed."}
with_centering = True #@param {type:"boolean", label:"Centering", description:"Center the data before scaling. Raises an exception when used with sparse matrices"}
with_scaling = True #@param {type:"boolean", label:"Scaling", description:"Scale the data to an interquartile range"}
```
## Dataset access
The dataset used in this step is the same one uploaded through the platform.<br>
The type of the returned variable depends on the source file:
- [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) for CSV and compressed CSV: .csv .csv.zip .csv.gz .csv.bz2 .csv.xz
- [Binary IO stream](https://docs.python.org/3/library/io.html#binary-i-o) for other file types: .jpg .wav .zip .h5 .parquet etc
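A minimal sketch of the CSV case, where an in-memory buffer stands in for the uploaded file:

```python
import io
import pandas as pd

# a tiny CSV source; reading it back yields a pandas.DataFrame
csv_buffer = io.StringIO("sepal_length,species\n5.1,setosa\n6.2,virginica\n")
sample_df = pd.read_csv(csv_buffer)
print(sample_df.shape)  # (2, 2)
```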
```
import pandas as pd
df = pd.read_csv(dataset)
has_target = True if target is not None and target in df.columns else False
X = df.copy()
if has_target:
X = df.drop(target, axis=1)
y = df[target]
```
## Dataset metadata access
Uses the `stat_dataset` function from the [PlatIAgro SDK](https://platiagro.github.io/sdk/) to load metadata. <br>
For example, CSV files have `metadata['featuretypes']` for each column in the dataset (e.g. categorical, numerical, or datetime).
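The bookkeeping applied below — dropping the target entry from the parallel `columns` and `featuretypes` arrays — can be illustrated standalone (the column names and featuretype values here are made-up stand-ins):

```python
import numpy as np

columns = np.array(["sepal_length", "sepal_width", "species"])
featuretypes = np.array(["Numerical", "Numerical", "Categorical"])

# locate the target column and delete it from both arrays in lockstep
target_index = np.argwhere(columns == "species")
columns = np.delete(columns, target_index)
featuretypes = np.delete(featuretypes, target_index)
print(list(columns))  # ['sepal_length', 'sepal_width']
```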
```
import numpy as np
from platiagro import stat_dataset
metadata = stat_dataset(name=dataset)
featuretypes = metadata["featuretypes"]
columns = df.columns.to_numpy()
featuretypes = np.array(featuretypes)
if has_target:
target_index = np.argwhere(columns == target)
columns = np.delete(columns, target_index)
featuretypes = np.delete(featuretypes, target_index)
```
## Attribute configuration
```
from platiagro.featuretypes import NUMERICAL
# Selects the indexes of numerical
numerical_indexes = np.where(featuretypes == NUMERICAL)[0]
non_numerical_indexes = np.where(~(featuretypes == NUMERICAL))[0]
# After the make_column_transformer step,
# numerical features are grouped at the beginning of the array
numerical_indexes_after_first_step = np.arange(len(numerical_indexes))
```
## Train a model using sklearn.preprocessing.RobustScaler
```
from sklearn.compose import make_column_transformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
pipeline = Pipeline(
steps=[
(
"imputer",
make_column_transformer(
(SimpleImputer(), numerical_indexes), remainder="passthrough"
),
),
(
"robust_scaler",
make_column_transformer(
(
RobustScaler(
with_centering=with_centering, with_scaling=with_scaling
),
numerical_indexes_after_first_step,
),
remainder="passthrough",
),
),
]
)
# Train model and transform dataset
X = pipeline.fit_transform(X)
# Put numerical features in the lowest indexes
features_after_pipeline = np.concatenate(
(columns[numerical_indexes], columns[non_numerical_indexes])
)
# Put data back in a pandas.DataFrame
df = pd.DataFrame(data=X, columns=features_after_pipeline)
if has_target:
df[target] = y
```
## Create a visualization of the result
Creates a visualization of the result as a data table.
```
import matplotlib.pyplot as plt
from platiagro.plotting import plot_data_table
ax = plot_data_table(df)
plt.show()
```
## Save changes to the dataset
The dataset will be saved (and overwritten with the respective changes) locally, in the experiment container, using the `pandas.DataFrame.to_csv` function.<br>
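A minimal round-trip sketch of this save step (the temporary path is illustrative, standing in for the dataset path):

```python
import os
import tempfile
import pandas as pd

df_out = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})
path = os.path.join(tempfile.mkdtemp(), "dataset.csv")
df_out.to_csv(path, index=False)   # index=False keeps the row index out of the file
df_back = pd.read_csv(path)
print(df_back.equals(df_out))      # True
```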
```
# save dataset changes
df.to_csv(dataset, index=False)
```
## Save task results
The platform keeps the contents of `/tmp/data/` for subsequent tasks.
```
from joblib import dump
artifacts = {
"pipeline": pipeline,
"columns": columns,
"features_after_pipeline": features_after_pipeline,
}
dump(artifacts, "/tmp/data/robust-scaler.joblib")
```
<a href="https://colab.research.google.com/github/ralsouza/python_fundamentos/blob/master/src/05_desafio/05_missao05.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## **Mission: Analyze Consumer Purchasing Behavior.**
### Difficulty Level: High
You have been tasked with analyzing purchase data from a website! The data is in JSON format and is available along with this notebook.
On the site, each user logs in with a personal account and can purchase products while browsing the list of products offered. Each product has a sale price. Age and gender data for each user were collected and are provided in the JSON file.
Your job is to deliver an analysis of consumer purchasing behavior. This is a common type of activity performed by data scientists, and the result of this work can be used, for example, to feed a machine learning model and make predictions about future behavior.
In this mission you will analyze consumer purchasing behavior using the Pandas package in Python, and your final report must include each of the following items:
**Consumer Count**
* Total number of consumers
**Overall Purchase Analysis**
* Number of unique items
* Average purchase price
* Total number of purchases
* Total revenue (Total Value)
**Demographic Information by Gender**
* Percentage and count of male buyers
* Percentage and count of female buyers
* Percentage and count of other / undisclosed
**Purchase Analysis by Gender**
* Number of purchases
* Average purchase price
* Total purchase value
* Purchases by age group
**Identify the top 5 buyers by total purchase value and then list (in a table):**
* Login
* Number of purchases
* Average purchase price
* Total purchase value
* Most popular items
**Identify the 5 most popular items by purchase count and then list (in a table):**
* Item ID
* Item name
* Number of purchases
* Average item price
* Total purchase value
* Most profitable items
**Identify the 5 most profitable items by total purchase value and then list (in a table):**
* Item ID
* Item name
* Number of purchases
* Average item price
* Total purchase value
**Final considerations:**
* Your script must work for the provided dataset.
* You must use the Pandas library and Jupyter Notebook.
```
# Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Load file from Drive
from google.colab import drive
drive.mount('/content/drive')
# Load file to Dataframe
load_file = "/content/drive/My Drive/dados_compras.json"
purchase_file = pd.read_json(load_file, orient = "records")
```
## **1. Exploratory Analysis**
### **1.1 Checking the first rows**
```
# Note that the logins repeat.
purchase_file.sort_values('Login')
```
### **1.2 Checking the data types**
```
purchase_file.dtypes
```
### **1.3 Checking for null values**
```
purchase_file.isnull().sum().sort_values(ascending = False)
```
### **1.4 Checking for zero values**
```
(purchase_file == 0).sum()
```
### **1.5 Age distribution**
The most representative audience in this sample is between 19 and 26 years of age.
```
plt.hist(purchase_file['Idade'], histtype='bar', rwidth=0.8)
plt.title('Sales distribution by age')
plt.xlabel('Age')
plt.ylabel('Number of buyers')
plt.show()
```
### **1.6 Value distribution**
Most sales are of products priced at `R$ 2.30`, `R$ 3.40`, and `R$ 4.20`.
```
plt.hist(purchase_file['Valor'], histtype='bar', rwidth=0.8)
plt.title('Distribution by value')
plt.xlabel('Price (R$)')
plt.ylabel('Number of sales')
plt.show()
```
## **2. Consumer Information**
* Total number of consumers
```
# Count the number of logins, dropping duplicated rows.
total_consumidores = purchase_file['Login'].drop_duplicates().count()
print('The total number of consumers in the sample is: {}'.format(total_consumidores))
```
## **3. Overall Purchase Analysis**
* Number of unique items
* Average purchase price
* Total number of purchases
* Total revenue (Total Value)
```
# Number of unique items
itens_exclusivos = purchase_file['Item ID'].drop_duplicates().count()
preco_medio = np.average(purchase_file['Valor'])
total_compras = purchase_file['Nome do Item'].count()
valor_total = np.sum(purchase_file['Valor'])
analise_geral = pd.DataFrame({
    'Unique Items':[itens_exclusivos],
    'Average Price (R$)':[np.round(preco_medio, decimals=2)],
    'Purchase Count':[total_compras],
    'Total Value (R$)':[valor_total]
})
analise_geral
```
## **4. Demographic Analysis by Gender**
* Percentage and count of male buyers
* Percentage and count of female buyers
* Percentage and count of other / undisclosed
```
# Select the unique buyer data for deduplication
info_compradores = purchase_file.loc[:,['Login','Sexo','Idade']]
# Deduplicate the data
info_compradores = info_compradores.drop_duplicates()
# Number of buyers by gender
qtd_compradores = info_compradores['Sexo'].value_counts()
# Percentage of buyers by gender
perc_compradores = round(info_compradores['Sexo'].value_counts(normalize=True) * 100, 2)
# Store the data in a DataFrame
analise_demografica = pd.DataFrame(
    {'Percentage':perc_compradores,
     'Buyer Count':qtd_compradores
    }
)
# Print the table
analise_demografica
plot = analise_demografica['Percentage'].plot(kind='pie',
                                              title='Purchase Percentage by Gender',
                                              autopct='%.2f')
plot = analise_demografica['Buyer Count'].plot(kind='barh',
                                               title='Buyer Count by Gender')
# Add labels
for i in plot.patches:
plot.text(i.get_width()+.1, i.get_y()+.31, \
str(round((i.get_width()), 2)), fontsize=10)
```
## **5. Purchase Analysis by Gender**
* Number of purchases
* Average purchase price
* Total purchase value
* Purchases by age group
```
# Number of purchases by gender
nro_compras_gen = purchase_file['Sexo'].value_counts()
# Average purchase price by gender
media_compras_gen = round(purchase_file.groupby('Sexo')['Valor'].mean(), 2)
# Total purchases by gender
total_compras_gen = purchase_file.groupby('Sexo')['Valor'].sum()
analise_compras = pd.DataFrame(
    {'Purchase Count':nro_compras_gen,
     'Average Price (R$)':media_compras_gen,
     'Total Purchases (R$)':total_compras_gen}
)
# Print the table
analise_compras
# Use the deduplicated dataframe
info_compradores
# Purchases by age group
age_bins = [0, 9.99, 14.99, 19.99, 24.99, 29.99, 34.99, 39.99, 999]
seg_idade = ['Under 10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', 'Over 39']
info_compradores['Intervalo Idades'] = pd.cut(info_compradores['Idade'], age_bins, labels=seg_idade)
df_hist_compras = pd.DataFrame(info_compradores['Intervalo Idades'].value_counts(), index=seg_idade)
hist = df_hist_compras.plot(kind='bar', legend=False)
hist.set_title('Purchases by age group', fontsize=15)
hist.set_ylabel('Frequency')
hist.set_xlabel('Age ranges')
```
## **6. Most Popular Consumers (Top 5)**
Identify the top 5 buyers by total purchase value and then list (in a table):
* Login
* Number of purchases
* Average purchase price
* Total purchase value
* Most popular items
```
consumidores_populares = purchase_file[['Login','Nome do Item','Valor']]
consumidores_populares.head(5)
top_por_compras = consumidores_populares.groupby(['Login']).count()['Nome do Item']
top_por_valor_medio = round(consumidores_populares.groupby('Login').mean()['Valor'], 2)
top_por_valor_total = consumidores_populares.groupby('Login').sum()['Valor']
top_consumidores = pd.DataFrame({'Number of Purchases': top_por_compras,
                                 'Average Price (R$)': top_por_valor_medio,
                                 'Total Value (R$)': top_por_valor_total}) \
                     .sort_values(by=['Total Value (R$)'], ascending=False) \
                     .head(5)
top_itens = consumidores_populares['Nome do Item'].value_counts().head(5)
top_consumidores
itens_populares = pd.DataFrame(consumidores_populares['Nome do Item'].value_counts().head(5))
itens_populares
```
## **7. Most Popular Items**
Identify the 5 most popular items **by purchase count** and then list (in a table):
* Item ID
* Item name
* Number of purchases
* Average item price
* Total purchase value
* Most profitable items
```
itens_populares = purchase_file[['Item ID','Nome do Item','Valor']]
num_compras = itens_populares.groupby('Nome do Item').count()['Item ID']
media_preco = round(itens_populares.groupby('Nome do Item').mean()['Valor'], 2)
total_preco = itens_populares.groupby('Nome do Item').sum()['Valor']
df_itens_populares = pd.DataFrame({
    'Number of Purchases': num_compras,
    'Average Item Price': media_preco,
    'Total Purchase Value': total_preco})
df_itens_populares.sort_values(by=['Number of Purchases'], ascending=False).head(5)
```
## **8. Most Profitable Items**
Identify the 5 most profitable items by **total purchase value** and then list (in a table):
* Item ID
* Item name
* Number of purchases
* Average item price
* Total purchase value
```
itens_lucrativos = purchase_file[['Item ID','Nome do Item','Valor']]
itens_lucrativos.head(5)
qtd_compras = itens_lucrativos.groupby(['Nome do Item']).count()['Valor']
avg_compras = itens_lucrativos.groupby(['Nome do Item']).mean()['Valor']
sum_compras = itens_lucrativos.groupby(['Nome do Item']).sum()['Valor']
df_itens_lucrativos = pd.DataFrame({
    'Number of Purchases': qtd_compras,
    'Average Item Price (R$)': round(avg_compras, 2),
    'Total Purchase Value (R$)': sum_compras
})
df_itens_lucrativos.sort_values(by='Total Purchase Value (R$)', ascending=False).head(5)
itens_lucrativos.sort_values('Nome do Item')
```
WINE CLASSIFIER
```
# Imports
from io import StringIO
import pandas as pd
import spacy
from cytoolz import *
import numpy as np
from IPython.display import display
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import chi2
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import *
from sklearn.linear_model import *
from sklearn.dummy import *
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import *
from sklearn.metrics import *
from sklearn.decomposition import *
from sklearn import metrics
%precision 4
%matplotlib inline
nlp = spacy.load('en', disable=['tagger', 'ner', 'parser'])
#1. Prepare Data
# note: pd.read_msgpack was removed in pandas 1.0, so this cell requires an older pandas version
df = pd.read_msgpack('http://bulba.sdsu.edu/wine.dat')
#df.head()
#about 40,000 rows in full msgpack
#sample created to increase speed, but remove sample definition for increased accuracy!
df = df.sample(4000)
df = df[pd.notnull(df['review_text'])]
df = df[pd.notnull(df['wine_variant'])]
df.info()
#Create 'category_id' column for LinearSVC use
df['category_id'] = df['wine_variant'].factorize()[0]
category_id_df = df[['wine_variant', 'category_id']].drop_duplicates().sort_values('category_id')
category_to_id = dict(category_id_df.values)
id_to_category = dict(category_id_df[['category_id', 'wine_variant']].values)
#Create a tokenized column for Logistical Regression use
def tokenize(text):
return [tok.orth_ for tok in nlp.tokenizer(text)]
df['tokens'] = df['review_text'].apply(tokenize)
df.head()
#Check the sample sizes for each variant, ensuring result accuracy
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,8))
df.groupby('wine_variant').review_text.count().plot.bar(ylim=0)
plt.show()
#2. BASELINE
folds = StratifiedKFold(shuffle = True,
n_splits = 10,
random_state = 10)
sum(df['wine_variant'] == True), len(df)
baseline = make_pipeline(CountVectorizer(analyzer = identity),
DummyClassifier('most_frequent'))
base_score = cross_val_score(baseline,
df['tokens'],
df['wine_variant'],
cv=folds,
n_jobs = -1)
base_score.mean(), base_score.std()
#3. SIMPLE LOGISTIC REGRESSION CLASSIFIER
lr = make_pipeline(CountVectorizer(analyzer = identity),
LogisticRegression())
params = {'logisticregression__C': [0.01, 0.1, 1.0],
'countvectorizer__min_df': [1, 2],
'countvectorizer__max_df': [0.25, 0.5]}
grid_search = GridSearchCV(lr,
params,
n_jobs = -1,
verbose = 1,
return_train_score = True)
grid_search.fit(df['tokens'], df['wine_variant'])
grid_search.best_params_
lr.set_params(**grid_search.best_params_)
lr_score = cross_val_score(lr,
df['tokens'],
df['wine_variant'],
cv = folds,
n_jobs = -1)
lr_score.mean(), lr_score.std()
grid = pd.DataFrame(grid_search.cv_results_, dtype = float)
grid.plot.line('param_countvectorizer__max_df', 'mean_test_score')
#4. BEST CLASSIFIER -- found through n_gram correlation
best = make_pipeline(CountVectorizer(analyzer = identity),
TfidfTransformer(),
LinearSVC())
params_best = {'tfidftransformer__norm': ['l2', None],
'tfidftransformer__use_idf': [True, False],
'tfidftransformer__sublinear_tf': [True, False],
'linearsvc__penalty': ['l2'],
'linearsvc__C': [0.01, 0.1, 1.0],
'countvectorizer__min_df': [1, 2, 3],
'countvectorizer__max_df': [0.1, 0.5, 1.0]}
best_grid_search = GridSearchCV(best,
params_best,
n_jobs = -1,
verbose = 1,
return_train_score = True)
best_grid_search.fit(df['tokens'], df['wine_variant'])
best_grid_search.best_params_
#Set hyperparameters for best model
best.set_params(**best_grid_search.best_params_)
best_score = cross_val_score(best,
df['tokens'],
df['wine_variant'],
cv = folds,
n_jobs = -1)
best_score.mean(), best_score.std()
#Result score is slightly higher than using LR model, and std is slightly less
best_grid = pd.DataFrame(best_grid_search.cv_results_, dtype = float)
best_grid.plot.line('param_countvectorizer__max_df', 'mean_test_score')
#5. Error Analysis & Discussion
#Inspect features
tfidf = TfidfVectorizer(sublinear_tf = True,
min_df = 1,
norm = 'l2',
encoding = 'latin-1',
ngram_range = (1, 3),
stop_words = 'english')
features = tfidf.fit_transform(df.review_text).toarray()
labels = df.category_id
features.shape
# Display the n_grams with highest correlation for each variant
N = 5
for wine_variant, category_id in sorted(category_to_id.items()):
# chi squared determines the correlation of each ngram to each variant, taking into account sample size
features_chi2 = chi2(features,
labels == category_id)
indices = np.argsort(features_chi2[0])
feature_names = np.array(tfidf.get_feature_names())[indices]
unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
trigrams = [v for v in feature_names if len(v.split(' ')) == 3]
print("# '{}':".format(wine_variant))
print(" . Most correlated unigrams:\n . {}".format('\n . '.join(unigrams[-N:])))
print(" . Most correlated bigrams:\n . {}".format('\n . '.join(bigrams[-N:])))
print(" . Most correlated trigrams:\n . {}".format('\n . '.join(trigrams[-N:])))
#The ngrams below appear more accurate and unique to each of the different variants.
# Heatmap & Confusion Matrix to display accuracies of predictions with LinearSVC
model = LinearSVC()
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features,
labels,
df.index,
test_size = 0.33,
random_state = 10)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize = (10,8))
sns.heatmap(conf_mat,
annot = True,
fmt = 'd',
xticklabels = category_id_df.wine_variant.values,
yticklabels = category_id_df.wine_variant.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
#WRONG RESULT EXAMPLES FOR LINEAR SVC CLASSIFIER
for predicted in category_id_df.category_id:
for actual in category_id_df.category_id:
if predicted != actual and conf_mat[actual, predicted] >= 6:
print("'{}' predicted as '{}' : {} examples.".format(id_to_category[actual],
id_to_category[predicted],
conf_mat[actual, predicted]))
display(df.loc[indices_test[(y_test == actual) & (y_pred == predicted)]][['wine_variant', 'review_text']])
print('')
model.fit(features, labels)
N = 5
for wine_variant, category_id in sorted(category_to_id.items()):
indices = np.argsort(model.coef_[category_id])
feature_names = np.array(tfidf.get_feature_names())[indices]
unigrams = [v for v in reversed(feature_names) if len(v.split(' ')) == 1][:N]
bigrams = [v for v in reversed(feature_names) if len(v.split(' ')) == 2][:N]
trigrams = [v for v in reversed(feature_names) if len(v.split(' ')) == 3][:N]
print("# '{}':".format(wine_variant))
print(" . Top unigrams:\n . {}".format('\n . '.join(unigrams)))
print(" . Top bigrams:\n . {}".format('\n . '.join(bigrams)))
print(" . Top trigrams:\n . {}".format('\n . '.join(trigrams)))
# Scores for each variant using a LinearSVC classifier with Tfid
print(metrics.classification_report(y_test,
y_pred,
target_names=df['wine_variant'].unique()))
```
My best classifier (LinearSVC + tf-idf) struggles with reviews in which the reviewer uses key words that the actual wine variant lacks, whether because the reviewer is uneducated or because they mention the identifying phrase in a negative (or lacking) sense. Generic reviews that do not include significantly unique characteristics also cause trouble, mainly because wines share many characteristics across variants; certain characteristics are merely more heavily present (on average) in specific variants.
I have learned that classification can be done from string comparisons, statistics (logs), indexes, and various other measurable variables/attributes. The best way to ensure classification succeeds is to combine the various models, depending on the situation. This wine prediction task strongly supports the idea of vector and phrase classification for multi-class problems, which requires identifying the distinguishing qualities of each class. Even though there are many variables and similarities between the variants, I found the initial predictions to be quite easy and efficient. The main errors came from trying to distinguish merlot from cabernet, which makes sense since they are the most closely related wines in terms of shared features. As more data is collected, the dictionary of unique words grows, along with the certainty of making a correct prediction. I believe a score better than 90% is achievable once subpar and uneducated reviews are removed, tokenization further cleans remaining punctuation errors, and additional training data is supplied to increase the correlation certainties. To increase accuracy further, it would also be advantageous to treat phrases with a negating or quantifying modifier separately from the plain usage count. By recognizing negating and quantifying modifiers as an un-splittable part of the phrase they modify, we ensure that certain features, mainly in the unigram category, aren't wrongly weighted for a variant. An example of this dilemma can be seen when comparing 'red' and 'not red,' which obviously mean distinctly different things; but if 'not' is allowed to separate from 'red,' non-red wine variants could put too much weight on the word 'red,' creating a higher chance of inaccuracies.
# Criminology in Portugal (2011)
## Introduction
> In this _study case_, it will be analysed the **_crimes occurred_** in **_Portugal_**, during the civil year of **_2011_**. It will analysed all the _categories_ or _natures_ of this **_crimes_**, _building some statistics and making some filtering of data related to them_.
> It will be applied some _filtering_ and made some _analysis_ on the data related to **_Portugal_** as _country_, like the following:
* _Crimes by **Nature/Category**_
* _Crimes by **Geographical Zone**_
* _Crimes by **Region/City** (only the 5 most populated **regions/cities** in **Portugal** are considered)_
* _Conclusions_
> Some _filtering_ and _analysis_ will also be applied to the data related to the **_5 biggest/most populated regions/cities_** (_Metropolitan Area of Lisbon_, _North_, _Center_, _Metropolitan Area of Porto_, and _Algarve_) of **_Portugal_**, such as the following:
* **_Metropolitan Area of Lisbon_**
* _Crimes by **Nature/Category**_
* _Crimes by **Locality/Village**_
* _Conclusions_
* **_North_**
* _Crimes by **Nature/Category**_
* _Crimes by **Locality/Village**_
* _Conclusions_
* **_Center_**
* _Crimes by **Nature/Category**_
* _Crimes by **Locality/Village**_
* _Conclusions_
* **_Metropolitan Area of Porto_**
* _Crimes by **Nature/Category**_
* _Crimes by **Locality/Village**_
* _Conclusions_
* **_Algarve_**
* _Crimes by **Nature/Category**_
* _Crimes by **Locality/Village**_
* _Conclusions_
```
# Importing pandas library
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
crimes_by_geozone_2011 = pd.read_csv("datasets/ine.pt/2011/dataset-crimes-portugal-2011-by-geozone-2.csv" , header=1)
crimes_by_geozone_2011 = crimes_by_geozone_2011.rename(columns={'Unnamed: 0': 'Zona Geográfica'})
crimes_by_geozone_2011 = crimes_by_geozone_2011.set_index("Zona Geográfica", drop = True)
```
#### Data Available in the Dataset
> All the data available and used for this _study case_ can be found in the following _hyperlink_:
* [dataset-crimes-portugal-2011-by-geozone-2.csv](datasets/ine.pt/2011/dataset-crimes-portugal-2011-by-geozone-2.csv)
##### Note:
> If you want to see all the data available and used for this _study case_, uncomment the following line.
```
# Just for debug
#crimes_by_geozone_2011
```
## Starting the Study Case
### Criminology in **_Metropolitan Area of Lisbon_** (**_2011_**)
#### Analysing the **_crimes that occurred_** in **_Metropolitan Area of Lisbon_**, during **_2011_**
* The total of **_crime occurrences_** in **_Metropolitan Area of Lisbon_**, during **_2011_**, filtered by _category or nature of the crime_:
```
crimes_lisbon_2011 = crimes_by_geozone_2011.loc["170: Área Metropolitana de Lisboa", : ]
crimes_lisbon_2011
```
* The total number of **_crime occurrences_** in **_Metropolitan Area of Lisbon_**, during **_2011_**, filtered by _category or nature of the crime_ (_organised as a Data Frame_):
```
crimes_lisbon_2011 = pd.DataFrame(crimes_lisbon_2011).T
crimes_lisbon_2011 = crimes_lisbon_2011.iloc[:,1:8]
crimes_lisbon_2011
# Just for debug
#crimes_lisbon_2011.columns
crimes_lisbon_2011.values[0,3] = 0
crimes_lisbon_2011.values[0,6] = 0
crimes_lisbon_2011.values[0,0] = int(crimes_lisbon_2011.values[0,0])
crimes_lisbon_2011.values[0,1] = int(float(crimes_lisbon_2011.values[0,1]))
crimes_lisbon_2011.values[0,2] = int(crimes_lisbon_2011.values[0,2])
crimes_lisbon_2011.values[0,4] = int(float(crimes_lisbon_2011.values[0,4]))
crimes_lisbon_2011.values[0,5] = int(float(crimes_lisbon_2011.values[0,5]))
# Just for debug
#crimes_lisbon_2011.values
# Just for debug
#crimes_lisbon_2011
```
* The total number of **_crime occurrences_** in **_Metropolitan Area of Lisbon_**, during **_2011_**, filtered by _category or nature of the crime_ (_organised as a Data Frame_ and excluding some _redundant fields and data_):
```
del crimes_lisbon_2011['Crimes contra a identidade cultural e integridade pessoal']
del crimes_lisbon_2011['Crimes contra animais de companhia']
crimes_lisbon_2011
crimes_lisbon_2011_categories = crimes_lisbon_2011.columns.tolist()
# Just for debug
#crimes_lisbon_2011_categories
crimes_lisbon_2011_values = crimes_lisbon_2011.values[0].tolist()
# Just for debug
#crimes_lisbon_2011_values
```
* A _plot_ of a representation of the total of **_crime occurrences_** in **_Metropolitan Area of Lisbon_**, during **_2011_**, filtered by _category or nature of the crime_ (_organised as a Data Frame_ and excluding some _redundant fields and data_):
```
plt.bar(crimes_lisbon_2011_categories, crimes_lisbon_2011_values)
plt.xticks(crimes_lisbon_2011_categories, rotation='vertical')
plt.xlabel('\nCrime Category/Nature\n')
plt.ylabel('\nNum. Occurrences\n')
plt.title('Crimes in Metropolitan Area of Lisbon, during 2011 (by Crime Category/Nature) - Bars Chart\n')
print('\n')
plt.show()
plt.pie(crimes_lisbon_2011_values, labels=crimes_lisbon_2011_categories, autopct='%.2f%%')
plt.title('Crimes in Metropolitan Area of Lisbon, during 2011 (by Crime Category/Nature) - Pie Chart\n\n')
plt.axis('equal')
print('\n')
plt.show()
```
* The total number of **_crime occurrences_** in all the **_localities/villages_** of the **_Metropolitan Area of Lisbon_**, during **_2011_**, filtered by _category or nature of the crime_ (_organised as a Data Frame_):
```
crimes_lisbon_2011_by_locality = crimes_by_geozone_2011.loc["1701502: Alcochete":"1701114: Vila Franca de Xira", : ]
crimes_lisbon_2011_by_locality
```
* The total number of **_crime occurrences_** in all the **_localities/villages_** of the **_Metropolitan Area of Lisbon_**, during **_2011_**, filtered by _category or nature of the crime_ (_organised as a Data Frame_ and excluding some _redundant fields and data_):
```
del crimes_lisbon_2011_by_locality['Crimes de homicídio voluntário consumado']
del crimes_lisbon_2011_by_locality['Crimes contra a identidade cultural e integridade pessoal']
del crimes_lisbon_2011_by_locality['Crimes contra animais de companhia']
crimes_lisbon_2011_by_locality
top_6_crimes_lisbon_2011_by_locality = crimes_lisbon_2011_by_locality.sort_values(by='Total', ascending=False).head(6)
top_6_crimes_lisbon_2011_by_locality
top_6_crimes_lisbon_2011_by_locality_total = top_6_crimes_lisbon_2011_by_locality.loc[:,"Total"]
top_6_crimes_lisbon_2011_by_locality_total
top_6_crimes_lisbon_2011_by_locality_total = pd.DataFrame(top_6_crimes_lisbon_2011_by_locality_total).T
top_6_crimes_lisbon_2011_by_locality_total = top_6_crimes_lisbon_2011_by_locality_total.iloc[:,0:6]
top_6_crimes_lisbon_2011_by_locality_total
top_6_crimes_lisbon_2011_by_locality_total_localities = top_6_crimes_lisbon_2011_by_locality_total.columns.tolist()
# Just for debug
#top_6_crimes_lisbon_2011_by_locality_total_localities
top_6_crimes_lisbon_2011_by_locality_total_values = top_6_crimes_lisbon_2011_by_locality_total.values[0].tolist()
# Just for debug
#top_6_crimes_lisbon_2011_by_locality_total_values
plt.bar(top_6_crimes_lisbon_2011_by_locality_total_localities, top_6_crimes_lisbon_2011_by_locality_total_values)
plt.xticks(top_6_crimes_lisbon_2011_by_locality_total_localities, rotation='vertical')
plt.xlabel('\nLocality/Village')
plt.ylabel('\nNum. Occurrences\n')
plt.title('Crimes in Metropolitan Area of Lisbon, during 2011 (by Locality/Village in Top 6) - Bars Chart\n')
print('\n')
plt.show()
plt.pie(top_6_crimes_lisbon_2011_by_locality_total_values, labels=top_6_crimes_lisbon_2011_by_locality_total_localities, autopct='%.2f%%')
plt.title('Crimes in Metropolitan Area of Lisbon, during 2011 (by Locality/Village in Top 6) - Pie Chart\n\n')
plt.axis('equal')
print('\n')
plt.show()
```
#### Conclusions on the **_crimes that occurred_** in **_Metropolitan Area of Lisbon_**, during **_2011_**
* After studying all the perspectives on the **_crimes that occurred_** in **_Metropolitan Area of Lisbon_**, during **_2011_**, it's possible to conclude the following:
    * a) Most of the **_crimes_** committed were against:
> 1) The **_country's patrimony_** (**68.52%**)
> 2) The **_people_**, at general (**20.35%**)
> 3) The **_life in society_** (**9.32%**)
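A minimal sketch of the arithmetic behind such shares, normalizing category totals to percentages. The totals below are illustrative numbers chosen only so the resulting shares match the ones quoted above, not the real 2011 counts:

```python
# Hypothetical category totals (chosen for illustration, not the real 2011 data)
totals = {'patrimony': 6852, 'people': 2035, 'society': 932, 'other': 181}

grand_total = sum(totals.values())  # 10000 in this toy example
# Normalize each category total to a percentage of the grand total
shares = {k: round(100 * v / grand_total, 2) for k, v in totals.items()}
# shares['patrimony'] == 68.52, shares['people'] == 20.35, shares['society'] == 9.32
```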
Thank you, and I hope you enjoy it!
Sincerely,
> Rúben André Barreiro.
| github_jupyter |
```
!pip install torch torchvision
!pip install wavio
!pip install sounddevice
from google.colab import drive
drive.mount('/content/drive')
!ls "/content/drive/My Drive/IMT Atlantique/Projet 3A /master/kitchen20"
%cd /content/drive/My Drive/IMT Atlantique/Projet 3A /master/kitchen20
from envnet import EnvNet
from kitchen20 import Kitchen20
from torch.utils.data import DataLoader
import torch.nn as nn
import utils as U
import torch
# Model
model = EnvNet(20, True)
model.cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
# Dataset
batchSize = 32
inputLength = 48000
transforms = []
transforms += [U.random_scale(1.25)] # Strong augment
transforms += [U.padding(inputLength // 2)] # Padding
transforms += [U.random_crop(inputLength)] # Random crop
transforms += [U.normalize(float(2 ** 16 / 2))] # 16 bit signed
transforms += [U.random_flip()] # Random +-
trainData = Kitchen20(root='../',
transforms=transforms,
folds=[1,2,3,4,5,6,7,8],
overwrite=False,
audio_rate=44100,
use_bc_learning=False)
trainIter = DataLoader(trainData, batch_size=batchSize,
shuffle=True, num_workers=2)
inputLength = 64000
transforms = []
transforms += [U.padding(inputLength // 2)] # Padding
transforms += [U.random_crop(inputLength)] # Random crop
transforms += [U.normalize(float(2 ** 16 / 2))] # 16 bit signed
transforms += [U.random_flip()] # Random +-
valData = Kitchen20(root='../',
transforms=transforms,
folds=[9,],
audio_rate=44100,
overwrite=False,
use_bc_learning=False)
valIter = DataLoader(valData, batch_size=batchSize,
shuffle=True, num_workers=2)
for epoch in range(600):
    tAcc = tLoss = 0
    vAcc = vLoss = 0
    for x, y in trainIter:  # Train epoch
        model.train()
        x = x[:, None, None, :]
        x = x.to('cuda:0')
        y = y.to('cuda:0')

        # Forward pass: Compute predicted y by passing x to the model
        y_pred = model(x)
        y_pred = y_pred[:, :, 0, 0]

        # Compute and print loss
        loss = criterion(y_pred, y.long())
        acc = (y_pred.argmax(dim=1).long() == y.long()).sum()

        # Zero gradients, perform a backward pass, and update the weights.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        tLoss += loss.item()
        tAcc += acc.item()/len(trainData)

    for x, y in valIter:  # Test epoch
        model.eval()
        x = x[:, None, None, :]
        x = x.to('cuda:0')
        y = y.to('cuda:0')

        # Forward pass: Compute predicted y by passing x to the model
        y_pred = model(x)
        y_pred = y_pred[:, :, 0, 0]
        loss = criterion(y_pred, y.long())
        acc = (y_pred.argmax(dim=1).long() == y.long()).sum()
        vLoss += loss.item()
        vAcc += acc.item()/len(valData)

    # loss = loss / len(dataset)
    # acc = acc / float(len(dataset))
    print('epoch {} -- train: {}/{} -- val:{}/{}'.format(
        epoch, tAcc, tLoss, vAcc, vLoss))
testData = Kitchen20(root='../',
transforms=transforms,
folds=[10,],
audio_rate=44100,
overwrite=False,
use_bc_learning=False)
testIter = DataLoader(testData, batch_size=1,
shuffle=True, num_workers=2)
testAcc = 0
for x, y in testIter:  # Test epoch
    model.eval()
    x = x[:, None, None, :]
    x = x.to('cuda:0')
    y = y.to('cuda:0')

    # Forward pass: Compute predicted y by passing x to the model
    y_test = model(x)
    y_test = y_test[:, :, 0, 0]
    print(y_test)
    print(y_test.argmax(dim=1))
    #loss = criterion(y_pred, y.long())
    acc = (y_test.argmax(dim=1).long() == y.long()).sum()
    #vLoss += loss.item()
    testAcc += acc.item()/len(testData)
testAcc
len(testData)
y
y_pred.argmax(dim=1).long()
import numpy as np
data = np.load('../audio/44100.npz', allow_pickle=True)
lst = data.files
for item in lst:
    print(item)
    print(data[item])
len(trainData)
acc.item()
for i in range(10):
    print(len(data[data.files[i]].item()['sounds']))
```
| github_jupyter |
```
from sklearn import *
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
AdaBoostClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
%pylab
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns #for graphics and figure styling
import pandas as pd
from matplotlib.colors import ListedColormap
data = pd.read_csv('E:/Stony Brook/AMS560/Data/FlightDelay2018.csv')
data=data[1:50000]
data['DepDelayMinutes'] = data.DepDelayMinutes.fillna(1)  # assign the result; fillna does not modify in place
data.loc[data.DepDelayMinutes != 0, 'DepDelayMinutes'] = 1  # .loc avoids chained-assignment issues
from collections import defaultdict
a=0
b=0
missing=defaultdict(int)
for col in data:
    for i in data[col].isnull():
        if i:
            a += 1
        b += 1
    #print('Missing data in',col,'is',a/b*100,'%')
    missing[col] = a/b*100
    a = 0
    b = 0
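# (Editor sketch) The nested loop above can be replaced by a single
# vectorized pandas expression; demo on a toy frame so it is self-contained:
_demo = pd.DataFrame({'a': [1, None, 3], 'b': [None, None, 6]})
_demo_missing = (_demo.isnull().mean() * 100).to_dict()
# _demo_missing gives the per-column missing percentage, like `missing` above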
missing['Year']
for col in data:
    if missing[col] > 5:
        data = data.drop(col, axis=1)
data=data.drop('TailNum',axis=1)
enc = LabelEncoder()
data = data.apply(LabelEncoder().fit_transform)
depDelayColumn = data.DepDelayMinutes
data = data.drop('DepDelayMinutes', axis=1)
data = data.drop('DepDelay', axis=1)
data = data.drop(['CRSDepTime','DepTime','DepartureDelayGroups'], axis=1)
data_train, data_test, y_train, y_test = train_test_split(data, depDelayColumn, test_size=.3)
scaler = StandardScaler().fit(data)
standard_data_test = scaler.transform(data_test)
scaler = StandardScaler().fit(data_train)
standard_data = scaler.transform(data_train)
#Using the Random Forest Classifier on our Data, with depth 3.
depth=3;
n_features=5;
censusIDM = RandomForestClassifier(max_depth=depth, random_state=0)
frfe = RFE(censusIDM, n_features_to_select=n_features)
frfe.fit(data_train, y_train)
print(frfe.ranking_)
frfe.score(data_test, y_test)
feature_to_select=[0]*n_features
j=0
for i in range(len(frfe.ranking_)):
    if frfe.ranking_[i] == 1:
        feature_to_select[j] = i
        j = j+1
print(feature_to_select)
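# (Editor sketch) The selection loop above is equivalent to a one-line
# comprehension over the RFE ranking; toy demo so it is self-contained:
_demo_ranking = [3, 1, 2, 1, 1]
_demo_selected = [i for i, r in enumerate(_demo_ranking) if r == 1]
# _demo_selected holds the indices of the features RFE kept (rank 1)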
data.columns[36]
# Parameters
n_classes = 2
n_estimators = 30
cmap = plt.cm.RdYlBu
plot_step = 0.02 # fine step width for decision surface contours
plot_step_coarser = 0.5 # step widths for coarse classifier guesses
RANDOM_SEED = 13 # fix the seed on each iteration
fig=plt.figure(figsize=[15,5])
plt.subplot(1,3, 1)
f1=[36,28]
f2=[5,36]
f3=[5,28]
#X=standardized_test_data[:,[0,4]];
X=standard_data[:,f1];
y=y_train
frfe.fit(X, y)
print(frfe.score(standard_data_test[:,f1], y_test))
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = frfe.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
xx_coarser, yy_coarser = np.meshgrid(
np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser))
Z_points_coarser = frfe.predict(np.c_[xx_coarser.ravel(),yy_coarser.ravel()]).reshape(xx_coarser.shape)
cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,c=Z_points_coarser, cmap=cmap, edgecolors="none")
# Plot the training points, these are clustered together and have a
# black outline
plt.scatter(X[:, 0], X[:, 1], c=y,
cmap=ListedColormap(['r', 'y', 'b']),
edgecolor='k', s=20)
xlabel('ArrDelay')
ylabel('DepDel15')
plt.subplot(1,3,2)
X=standard_data[:,f2];
frfe.fit(X, y)
print(frfe.score(standard_data_test[:,f2], y_test))
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = frfe.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
xx_coarser, yy_coarser = np.meshgrid(
np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser))
Z_points_coarser = frfe.predict(np.c_[xx_coarser.ravel(),yy_coarser.ravel()]).reshape(xx_coarser.shape)
cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,c=Z_points_coarser, cmap=cmap, edgecolors="none")
# Plot the training points, these are clustered together and have a
# black outline
plt.scatter(X[:, 0], X[:, 1], c=y,
cmap=ListedColormap(['r', 'y', 'b']),
edgecolor='k', s=20)
xlabel('DayOfWeek')
ylabel('ArrDelay')
plt.subplot(1,3,3)
X=standard_data[:,f3];
frfe.fit(X, y)
print(frfe.score(standard_data_test[:,f3], y_test))
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = frfe.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
xx_coarser, yy_coarser = np.meshgrid(
np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser))
Z_points_coarser = frfe.predict(np.c_[xx_coarser.ravel(),yy_coarser.ravel()]).reshape(xx_coarser.shape)
cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,c=Z_points_coarser, cmap=cmap, edgecolors="none")
# Plot the training points, these are clustered together and have a
# black outline
plt.scatter(X[:, 0], X[:, 1], c=y,
cmap=ListedColormap(['r', 'y', 'b']),
edgecolor='k', s=20)
xlabel('DayOfWeek')
ylabel('DepDel15')
plt.suptitle('RandomForestTree model on feature subsets ');
#fig.savefig('RandomForest.pdf',dpi=200)
```
| github_jupyter |
##### Copyright 2021 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/qcvv/xeb_theory"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/qcvv/xeb_theory.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/qcvv/xeb_theory.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/qcvv/xeb_theory.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    print("installed cirq.")
```
# Cross Entropy Benchmarking Theory
Cross entropy benchmarking uses the properties of random quantum programs to determine the fidelity of a wide variety of circuits. When applied to circuits with many qubits, XEB can characterize the performance of a large device. When applied to deep, two-qubit circuits, it can be used to accurately characterize a two-qubit interaction, potentially leading to better calibration.
```
# Standard imports
import numpy as np
import cirq
from cirq.contrib.svg import SVGCircuit
```
## The action of random circuits with noise
An XEB experiment collects data from the execution of random circuits
subject to noise. The effect of applying a random circuit with unitary $U$ is
modeled as $U$ followed by a depolarizing channel. The result is that the
initial state $|𝜓⟩$ is mapped to a density matrix $ρ_U$ as follows:
$$
|𝜓⟩ → ρ_U = f |𝜓_U⟩⟨𝜓_U| + (1 - f) I / D
$$
where $|𝜓_U⟩ = U|𝜓⟩$, $D$ is the dimension of the Hilbert space, $I / D$ is the
maximally mixed state, and $f$ is the fidelity with which the circuit is
applied.
For this model to be accurate, we require $U$ to be a random circuit that scrambles errors. In practice, we use a particular circuit ansatz consisting of random single-qubit rotations interleaved with entangling gates.
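As a quick numerical sanity check of the depolarizing model above (a sketch: the fidelity $f$ and the state are made up, and `O` is a diagonal observable of the kind defined later in this notebook):

```python
import numpy as np

# Build rho_U = f |psi><psi| + (1 - f) I/D for a made-up fidelity f
D = 4
f = 0.9
rng = np.random.default_rng(0)
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)
rho = f * np.outer(psi, psi.conj()) + (1 - f) * np.eye(D) / D

# Diagonal observable O|x> = p(x)|x>, as used later in the notebook
O = np.diag(np.abs(psi) ** 2)
e_u = np.sum(np.abs(psi) ** 4)   # <psi|O|psi>
u_u = np.trace(O).real / D       # Tr(O/D)
m_u = np.trace(rho @ O).real     # Tr(rho O)
assert np.isclose(m_u, f * e_u + (1 - f) * u_u)
```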
### Possible single-qubit rotations
These 8 × 8 = 64 possible rotations are chosen randomly when constructing the circuit.
Geometrically, we choose one of 8 axes in the XY plane to perform a quarter-turn (pi/2 rotation) around. This is followed by a rotation around the Z axis of one of 8 different magnitudes.
```
exponents = np.linspace(0, 7/4, 8)
exponents
import itertools
SINGLE_QUBIT_GATES = [
cirq.PhasedXZGate(x_exponent=0.5, z_exponent=z, axis_phase_exponent=a)
for a, z in itertools.product(exponents, repeat=2)
]
SINGLE_QUBIT_GATES[:10], '...'
```
### Random circuit
We use `random_rotations_between_two_qubit_circuit` to generate a random two-qubit circuit. Note that we provide the possible single-qubit rotations from above and declare that our two-qubit operation is the $\sqrt{i\mathrm{SWAP}}$ gate.
```
import cirq_google as cg
from cirq.experiments import random_quantum_circuit_generation as rqcg
q0, q1 = cirq.LineQubit.range(2)
circuit = rqcg.random_rotations_between_two_qubit_circuit(
q0, q1,
depth=4,
two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
single_qubit_gates=SINGLE_QUBIT_GATES
)
SVGCircuit(circuit)
```
## Estimating fidelity
Let $O_U$ be an observable that is diagonal in the computational
basis. Then the expectation value of $O_U$ on $ρ_U$ is given by
$$
Tr(ρ_U O_U) = f ⟨𝜓_U|O_U|𝜓_U⟩ + (1 - f) Tr(O_U / D).
$$
This equation shows how $f$ can be estimated, since $Tr(ρ_U O_U)$ can be
estimated from experimental data, and $⟨𝜓_U|O_U|𝜓_U⟩$ and $Tr(O_U / D)$ can be
computed.
Let $e_U = ⟨𝜓_U|O_U|𝜓_U⟩$, $u_U = Tr(O_U / D)$, and $m_U$ denote the experimental
estimate of $Tr(ρ_U O_U)$. We can write the following linear equation (equivalent to the
expression above):
$$
m_U = f e_U + (1-f) u_U \\
m_U - u_U = f (e_U - u_U)
$$
```
# Make long circuits (which we will truncate)
MAX_DEPTH = 100
N_CIRCUITS = 10
circuits = [
rqcg.random_rotations_between_two_qubit_circuit(
q0, q1,
depth=MAX_DEPTH,
two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
single_qubit_gates=SINGLE_QUBIT_GATES)
for _ in range(N_CIRCUITS)
]
# We will truncate to these lengths
cycle_depths = np.arange(1, MAX_DEPTH + 1, 9)
cycle_depths
```
### Execute circuits
Cross entropy benchmarking requires sampled bitstrings from the device being benchmarked *as well as* the true probabilities from a noiseless simulation. We find these quantities for all `(cycle_depth, circuit)` permutations.
```
pure_sim = cirq.Simulator()
# Pauli Error. If there is an error, it is either X, Y, or Z
# with probability E_PAULI / 3
E_PAULI = 5e-3
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(E_PAULI))
# These two qubit circuits have 2^2 = 4 probabilities
DIM = 4
records = []
for cycle_depth in cycle_depths:
    for circuit_i, circuit in enumerate(circuits):

        # Truncate the long circuit to the requested cycle_depth
        circuit_depth = cycle_depth * 2 + 1
        assert circuit_depth <= len(circuit)
        trunc_circuit = circuit[:circuit_depth]

        # Pure-state simulation
        psi = pure_sim.simulate(trunc_circuit)
        psi = psi.final_state_vector
        pure_probs = np.abs(psi)**2

        # Noisy execution
        meas_circuit = trunc_circuit + cirq.measure(q0, q1)
        sampled_inds = noisy_sim.sample(meas_circuit, repetitions=10_000).values[:, 0]
        sampled_probs = np.bincount(sampled_inds, minlength=DIM) / len(sampled_inds)

        # Save the results
        records += [{
            'circuit_i': circuit_i,
            'cycle_depth': cycle_depth,
            'circuit_depth': circuit_depth,
            'pure_probs': pure_probs,
            'sampled_probs': sampled_probs,
        }]
        print('.', end='', flush=True)
```
## What's the observable?
What is $O_U$? Let's define it to be the observable that gives the sum of all probabilities, i.e.
$$
O_U |x \rangle = p(x) |x \rangle
$$
for any bitstring $x$. We can use this to derive expressions for our quantities of interest.
$$
e_U = \langle \psi_U | O_U | \psi_U \rangle \\
= \sum_x a_x^* \langle x | O_U | x \rangle a_x \\
= \sum_x p(x) \langle x | O_U | x \rangle \\
= \sum_x p(x) p(x)
$$
$e_U$ is simply the sum of squared ideal probabilities. $u_U$ is a normalizing factor that depends only on the operator. Since this operator has the true probabilities in its definition, they show up here anyway.
$$
u_U = \mathrm{Tr}[O_U / D] \\
= 1/D \sum_x \langle x | O_U | x \rangle \\
= 1/D \sum_x p(x)
$$
For the measured values, we use the definition of an expectation value
$$
\langle f(x) \rangle_\rho = \sum_x p(x) f(x)
$$
It becomes notationally confusing because remember: our operator on basis states returns the ideal probability of that basis state $p(x)$. The probability of observing a measured basis state is estimated from samples and denoted $p_\mathrm{est}(x)$ here.
$$
m_U = \mathrm{Tr}[\rho_U O_U] \\
= \langle O_U \rangle_{\rho_U} = \sum_{x} p_\mathrm{est}(x) p(x)
$$
```
for record in records:
    e_u = np.sum(record['pure_probs']**2)
    u_u = np.sum(record['pure_probs']) / DIM
    m_u = np.sum(record['pure_probs'] * record['sampled_probs'])
    record.update(
        e_u=e_u,
        u_u=u_u,
        m_u=m_u,
    )
```
Remember:
$$
m_U - u_U = f (e_U - u_U)
$$
We estimate f by performing least squares
minimization of the sum of squared residuals
$$
\sum_U \left(f (e_U - u_U) - (m_U - u_U)\right)^2
$$
over different random circuits. The solution to the
least squares problem is given by
$$
f = \frac{\sum_U (m_U - u_U)(e_U - u_U)}{\sum_U (e_U - u_U)^2}
$$
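Before applying this to the experimental records, here is a small self-contained check of the estimator on synthetic $(e_U - u_U,\, m_U - u_U)$ pairs (the true fidelity and the noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
f_true = 0.95
x = rng.uniform(0.1, 1.0, size=200)              # stands in for e_U - u_U
y = f_true * x + rng.normal(0, 0.01, size=200)   # stands in for m_U - u_U, with noise
f_hat = np.sum(x * y) / np.sum(x ** 2)           # least-squares slope through the origin
```

With this many points and little noise, `f_hat` recovers the true fidelity closely.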
```
import pandas as pd
df = pd.DataFrame(records)
df['y'] = df['m_u'] - df['u_u']
df['x'] = df['e_u'] - df['u_u']
df['numerator'] = df['x'] * df['y']
df['denominator'] = df['x'] ** 2
df.head()
```
### Fit
We'll plot the linear relationship and least-squares fit while we transform the raw DataFrame into one containing fidelities.
```
%matplotlib inline
from matplotlib import pyplot as plt
# Color by cycle depth
import seaborn as sns
colors = sns.cubehelix_palette(n_colors=len(cycle_depths))
colors = {k: colors[i] for i, k in enumerate(cycle_depths)}
_lines = []
def per_cycle_depth(df):
fid_lsq = df['numerator'].sum() / df['denominator'].sum()
cycle_depth = df.name
xx = np.linspace(0, df['x'].max())
l, = plt.plot(xx, fid_lsq*xx, color=colors[cycle_depth])
plt.scatter(df['x'], df['y'], color=colors[cycle_depth])
global _lines
_lines += [l] # for legend
return pd.Series({'fidelity': fid_lsq})
fids = df.groupby('cycle_depth').apply(per_cycle_depth).reset_index()
plt.xlabel(r'$e_U - u_U$', fontsize=18)
plt.ylabel(r'$m_U - u_U$', fontsize=18)
_lines = np.asarray(_lines)
plt.legend(_lines[[0,-1]], cycle_depths[[0,-1]], loc='best', title='Cycle depth')
plt.tight_layout()
```
### Fidelities
```
plt.plot(
fids['cycle_depth'],
fids['fidelity'],
marker='o',
label='Least Squares')
xx = np.linspace(0, fids['cycle_depth'].max())
# In XEB, we extract the depolarizing fidelity, which is
# related to (but not equal to) the Pauli error.
# For the latter, an error involves doing X, Y, or Z with E_PAULI/3
# but for the former, an error involves doing I, X, Y, or Z with e_depol/4
e_depol = E_PAULI / (1 - 1/DIM**2)
# The additional factor of four in the exponent is because each layer
# involves two moments of two qubits (so each layer has four applications
# of a single-qubit single-moment depolarizing channel).
plt.plot(xx, (1-e_depol)**(4*xx), label=r'$(1-\mathrm{e\_depol})^{4d}$')
plt.ylabel('Circuit fidelity', fontsize=18)
plt.xlabel('Cycle Depth $d$', fontsize=18)
plt.legend(loc='best')
plt.yscale('log')
plt.tight_layout()
from cirq.experiments.xeb_fitting import fit_exponential_decays
# Ordinarily, we'd use this function to fit curves for multiple pairs.
# We add our qubit pair as a column.
fids['pair'] = [(q0, q1)] * len(fids)
fit_df = fit_exponential_decays(fids)
fit_row = fit_df.iloc[0]
print(f"Noise model fidelity: {(1-e_depol)**4:.3e}")
print(f"XEB layer fidelity: {fit_row['layer_fid']:.3e} +- {fit_row['layer_fid_std']:.2e}")
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Shubham0Rajput/Feature-Detection-with-AKAZE/blob/master/AKAZE_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#IMPORT FILES
import matplotlib.pyplot as plt
import cv2
#matplotlib inline
#MOUNTIING DRIVE
from google.colab import drive
drive.mount('/content/drive')
from __future__ import print_function
import cv2 as cv
import numpy as np
import argparse
from math import sqrt
import matplotlib.pyplot as plt
imge1 = cv.imread('/content/drive/My Drive/e2.jpg')
img1 = cv.cvtColor(imge1, cv.COLOR_BGR2GRAY) # queryImage
imge2 = cv.imread('/content/drive/My Drive/e1.jpg')
img2 = cv.cvtColor(imge2, cv.COLOR_BGR2GRAY) # trainImage
if img1 is None or img2 is None:
    print('Could not open or find the images!')
    exit(0)
fs = cv.FileStorage('/content/drive/My Drive/H1to3p.xml', cv.FILE_STORAGE_READ)
homography = fs.getFirstTopLevelNode().mat()
## [AKAZE]
akaze = cv.AKAZE_create()
kpts1, desc1 = akaze.detectAndCompute(img1, None)
kpts2, desc2 = akaze.detectAndCompute(img2, None)
## [AKAZE]
## [2-nn matching]
matcher = cv.DescriptorMatcher_create(cv.DescriptorMatcher_BRUTEFORCE_HAMMING)
nn_matches = matcher.knnMatch(desc1, desc2, 2)
## [2-nn matching]
## [ratio test filtering]
matched1 = []
matched2 = []
nn_match_ratio = 0.8 # Nearest neighbor matching ratio
for m, n in nn_matches:
    if m.distance < nn_match_ratio * n.distance:
        matched1.append(kpts1[m.queryIdx])
        matched2.append(kpts2[m.trainIdx])
## [homography check]
inliers1 = []
inliers2 = []
good_matches = []
inlier_threshold = 2.5 # Distance threshold to identify inliers with homography check
for i, m in enumerate(matched1):
    col = np.ones((3, 1), dtype=np.float64)
    col[0:2, 0] = m.pt
    col = np.dot(homography, col)
    col /= col[2, 0]
    dist = sqrt(pow(col[0, 0] - matched2[i].pt[0], 2) +
                pow(col[1, 0] - matched2[i].pt[1], 2))

    # A match is an inlier if the reprojected point lands within the threshold
    if dist < inlier_threshold:
        good_matches.append(cv.DMatch(len(inliers1), len(inliers2), 0))
        inliers1.append(matched1[i])
        inliers2.append(matched2[i])
## [homography check]
## [draw final matches]
res = np.empty((max(img1.shape[0], img2.shape[0]), img1.shape[1]+img2.shape[1], 3), dtype=np.uint8)
img0 = cv.drawMatches(img1, inliers1, img2, inliers2, good_matches, res)
#img0 = cv.drawMatchesKnn(img1,inliers1,img2,inliers2,res,None,flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite("akaze_result.png", res)
inlier_ratio = len(inliers1) / float(len(matched1))
print('A-KAZE Matching Results')
print('*******************************')
print('# Keypoints 1: \t', len(kpts1))
print('# Keypoints 2: \t', len(kpts2))
print('# Matches: \t', len(matched1))
print('# Inliers: \t', len(inliers1))
print('# Inliers Ratio: \t', inlier_ratio)
print('# Dist: \t', dist)
plt.imshow(img0),plt.show()
## [draw final matches]
```
| github_jupyter |
## Programming for Data Analysis Project 2018
### Patrick McDonald G00281051
#### Problem statement
For this project you must create a data set by simulating a real-world phenomenon of your choosing. You may pick any phenomenon you wish – you might pick one that is of interest to you in your personal or professional life. Then, rather than collect data related to the phenomenon, you should model and synthesise such data using Python. We suggest you use the numpy.random package for this purpose.
Specifically, in this project you should:
1. Choose a real-world phenomenon that can be measured and for which you could collect at least one-hundred data points across at least four different variables.
2. Investigate the types of variables involved, their likely distributions, and their relationships with each other.
3. Synthesise/simulate a data set as closely matching their properties as possible.
4. Detail your research and implement the simulation in a Jupyter notebook – the data set itself can simply be displayed in an output cell within the notebook.
### 1. Choose a real-world phenomenon that can be measured and for which you could collect at least one-hundred data points across at least four different variables.
For the purpose of this project, I shall extract some wave buoy data from the [M6 weather buoy](http://www.marine.ie/Home/site-area/data-services/real-time-observations/irish-weather-buoy-network) off the west coast of Ireland. I surf occasionally, and many surfers, like myself, use weather buoy data in order to predict when there will be decent waves to surf. There are many online resources that provide such information, but I thought this may be an enjoyable exploration of the raw data that is used every day, worldwide.
```
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Downloaded hly62095.csv from https://data.gov.ie/dataset/hourly-data-for-buoy-m6
# Opened dataset in VSCode. It contains the label legend, so I have skipped these rows.
# I also only want to utilise 4 relevant columns of data, so I'll use the 'usecols' argument:
# https://realpython.com/python-data-cleaning-numpy-pandas/#dropping-columns-in-a-dataframe
df = pd.read_csv("hly62095.csv", skiprows = 19, low_memory = False, usecols= ['date', 'dir', 'per', 'wavht'])
# Change the date column to a Pythonic datetime -
# reference: https://github.com/ianmcloughlin/jupyter-teaching-notebooks/raw/master/time-series.ipynb
df['datetime'] = pd.to_datetime(df['date'])
```
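As a minimal illustration of the `pd.to_datetime` step above, on two made-up rows in the same format as the buoy file's date column:

```python
import pandas as pd

# Two hypothetical rows in the same format as the buoy CSV's date column.
toy = pd.DataFrame({'date': ['25-sep-2006 09:00', '25-sep-2006 10:00']})
toy['datetime'] = pd.to_datetime(toy['date'])
print(toy['datetime'].dt.year.tolist())  # [2006, 2006]
```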
Downloaded hly62095.csv from https://data.gov.ie/dataset/hourly-data-for-buoy-m6. Opened the dataset in VSCode. It contains the label legend, so I have skipped rows 1-19:
### Label legend
```
1. Station Name: M6
2. Station Height: 0 M
3. Latitude:52.990 ,Longitude: -15.870
4.
5.
6. date: - Date and Time (utc)
7. temp: - Air Temperature (C)
8. rhum: - Relative Humidity (%)
9. windsp: - Mean Wind Speed (kt)
10. dir: - Mean Wind Direction (degrees)
11. gust: - Maximum Gust (kt)
12. msl: - Mean Sea Level Pressure (hPa)
13. seatp: - Sea Temperature (C)
14. per: - Significant Wave Period (seconds)
15. wavht: - Significant Wave Height (m)
16. mxwav: - Individual Maximum Wave Height(m)
17. wvdir: - Wave Direction (degrees)
18. ind: - Indicator
19.
20. date,temp,rhum,wdsp,dir,gust,msl,seatp,per,wavht,mxwave,wvdir
21. 25-sep-2006 09:00,15.2, ,8.000,240.000, ,1007.2,15.4,6.000,1.5, ,
22. 25-sep-2006 10:00,15.2, ,8.000,220.000, ,1008.0,15.4,6.000,1.5, ,.........
```
```
# View DataFrame
df
```
There are a significant number of missing datapoints, and it's a large sample, with 94248 rows. I'm going to explore this further and extract the relevant data for September 2018. This will give me enough data to explore and simulate for this project.
First, I'll describe the datatypes in the set.
```
df.describe()
```
I want to view the data for September 2018. So I'll extract the relevant datapoints from this dataset.
```
# Create a datetime index for a data frame.
# Adapted from: https://pandas.pydata.org/pandas-docs/stable/timeseries.html
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html
rng = pd.date_range(start='1-sep-2018', periods=30, freq='D')
rng
```
I'm using 4 variables from the dataset. These are:
1. date: - Date and Time (utc)
2. dir: - Mean Wind Direction (degrees)
3. per: - Significant Wave Period (seconds) - This is important for quality waves!
4. wavht: - Significant Wave Height (m)
```
df.head(10)
```
Next, I'll display the data columns from 1st September 2018 onwards. Since I've already removed rows 1-19, the row label numbers have been modified by pandas; in this case, I worked backwards to find the right label number.
I'm going to name this smaller dataframe 'wavedata'.
```
wavedata = df.loc['93530':]
wavedata
```
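As an alternative to working backwards to a positional row label, the parsed datetime column could be filtered directly; a sketch on a tiny made-up frame:

```python
import pandas as pd

toy = pd.DataFrame({
    'datetime': pd.to_datetime(
        ['2018-08-31 23:00', '2018-09-01 00:00', '2018-09-15 12:00']),
    'wavht': [1.2, 2.5, 3.1]})

# Boolean mask: keep only rows from September 2018 onwards.
sept = toy[toy['datetime'] >= '2018-09-01']
print(len(sept))  # 2
```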
I'm now going to check for Null values in the data.
```
# checking for null values
# There are no null values in this dataframe, according to pandas!
wavedata.isnull().sum()
```
### 2. Investigate the types of variables involved, their likely distributions, and their relationships with each other.
Now for some exploratory data analysis.
```
# Check datatypes
df.dtypes
```
#### Explore distributions
```
%matplotlib inline
plt.figure(figsize=(26, 10))
plot = sns.scatterplot(x="datetime", y="wavht", hue='per', data=wavedata)
```
It looks like I've a problem with this data! It's mostly objects, not floats.
# Enumerated types (enums)
## 1. Basic features
```
enum Color
{
White, // 0
Red, // 1
Green, // 2
Blue, // 3
Orange, // 4
}
Color white = Color.White;
Console.WriteLine(white); // White
Color red = (Color)1; // Casting an integer to the enum type works like this
Console.WriteLine(red); // Red
Color unknown = (Color)42; // No error!
Console.WriteLine(unknown); // 42
Color green = Enum.Parse<Color>("Green");
green.ToString()
Enum.TryParse<Color>("Blue", out Color blue);
blue.ToString()
// Let's see which types can be used as the underlying type of an enum
enum Dummy : object {}
```
## 2. Casting enum types
```
enum Fruit
{
Melon, // 0
Tomato, // 1
Apple, // 2
Blueberry, // 3
Orange, // 4
}
Fruit orange = Color.Orange; // Type safety -> compile error
Fruit tomato = (Fruit)Color.Red; // But an explicit cast works
Console.WriteLine(tomato);
Color unknownColor = (Color)42;
Fruit unknownFruit = (Fruit)unknownColor;
Console.WriteLine(unknownFruit);
// Every enum has the following inheritance chain: MyEnum <- System.Enum <- System.ValueType <- System.Object
Enum enumEnum = Color.Blue;
ValueType enumValueType = Color.Blue;
object enumObj = Color.Blue; // BOXING
Console.WriteLine($"{enumEnum}, {enumValueType}, {enumObj}");
```
## 3. Using one integer value for several enum members
```
public enum Subject
{
Programming = 0,
DiscreteMath = 1,
Algebra = 2,
Calculus = 3,
Economics = 4,
MostDifficultSubject = Algebra,
MostUsefulSubject = Programming,
// MostHatefulSubject = Programming
}
Console.WriteLine(Subject.Programming);
Console.WriteLine(Subject.MostUsefulSubject);
Console.WriteLine((Subject)0);
Console.WriteLine(Subject.Programming == Subject.MostUsefulSubject)
Console.WriteLine(Subject.Algebra);
Console.WriteLine(Subject.MostDifficultSubject);
Console.WriteLine((Subject)2);
Console.WriteLine(Subject.Algebra == Subject.MostDifficultSubject)
```
## 4. Reflection over enum types
The static method Enum.GetUnderlyingType returns the underlying integer type of an enum
```
Enum.GetUnderlyingType(typeof(Subject))
```
The System.Type class also has a GetEnumUnderlyingType method
```
typeof(Subject).GetEnumUnderlyingType()
```
which works only on Type objects that represent enums
```
typeof(short).GetEnumUnderlyingType()
```
You can get all the values of an enum with Enum.GetValues(Type)
```
var enumValues = Enum.GetValues(typeof(Subject)); // Equivalent: typeof(Subject).GetEnumValues();
foreach(var value in enumValues){
Console.WriteLine(value);
}
Enum.GetNames(typeof(Subject)) // Equivalent: typeof(Subject).GetEnumNames()
```
Checking whether the enum defines a given value.
```
Enum.IsDefined(typeof(Subject), 3)
Enum.IsDefined(typeof(Subject), 42)
```
## 5. Bit flags
```
[Flags]
enum FilePermission : byte
{
None = 0b00000000,
Read = 0b00000001,
Write = 0b00000010,
Execute = 0b00000100,
Rename = 0b00001000,
Move = 0b00010000,
Delete = 0b00100000,
User = Read | Execute,
ReadWrite = Read | Write,
Admin = Read | Write | Execute | Rename | Move | Delete
}
```
[About FlagsAttribute](https://docs.microsoft.com/ru-ru/dotnet/api/system.flagsattribute?view=net-5.0)
```
FilePermission permission = FilePermission.User;
permission.HasFlag(FilePermission.Read)
```
Usage example:
```
void RenameFile(File file, User user)
{
if (!user.Permission.HasFlag(FilePermission.Rename)) {
        throw new SomeException("you can't.");
}
...
}
```
```
for (int i = 0; i <= 16; ++i) {
FilePermission fp = (FilePermission)i;
Console.WriteLine(fp.ToString("G"));
}
```
An example from the standard library: System.AttributeTargets
```
[Flags, Serializable]
public enum AttributeTargets {
Assembly = 0x0001,
Module = 0x0002,
Class = 0x0004,
Struct = 0x0008,
Enum = 0x0010,
Constructor = 0x0020,
Method = 0x0040,
Property = 0x0080,
Field = 0x0100,
Event = 0x0200,
Interface = 0x0400,
Parameter = 0x0800,
Delegate = 0x1000,
ReturnValue = 0x2000,
GenericParameter = 0x4000,
All = Assembly | Module | Class | Struct | Enum |
Constructor | Method | Property | Field | Event |
Interface | Parameter | Delegate | ReturnValue |
GenericParameter
}
```
## 6. Extension methods for enums
You can "add functionality" to enums with extension methods
```
//public static class EnumExtentions
//{
public static int GetMark(this Subject subject)
{
return subject switch
{
Subject.Programming => 8,
Subject.DiscreteMath => 10,
Subject.Algebra => 5,
Subject.Calculus => 7,
Subject.Economics => 6,
_ => 0,
};
}
//}
Subject prog = Subject.Programming;
prog.GetMark()
```
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Space X Falcon 9 First Stage Landing Prediction**
## Web scraping Falcon 9 and Falcon Heavy Launches Records from Wikipedia
Estimated time needed: **40** minutes
In this lab, you will be performing web scraping to collect Falcon 9 historical launch records from a Wikipedia page titled `List of Falcon 9 and Falcon Heavy launches`
[https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches](https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01)
Falcon 9 first stage will land successfully
Several examples of an unsuccessful landing are shown here:
More specifically, the launch records are stored in a HTML table shown below:
## Objectives
Web scrape Falcon 9 launch records with `BeautifulSoup`:
* Extract a Falcon 9 launch records HTML table from Wikipedia
* Parse the table and convert it into a Pandas data frame
First let's import required packages for this lab
```
!pip3 install beautifulsoup4
!pip3 install requests
import sys
import requests
from bs4 import BeautifulSoup
import re
import unicodedata
import pandas as pd
```
and we will provide some helper functions for you to process the web-scraped HTML table
```
def date_time(table_cells):
"""
This function returns the data and time from the HTML table cell
Input: the element of a table data cell extracts extra row
"""
return [data_time.strip() for data_time in list(table_cells.strings)][0:2]
def booster_version(table_cells):
"""
This function returns the booster version from the HTML table cell
Input: the element of a table data cell extracts extra row
"""
out=''.join([booster_version for i,booster_version in enumerate( table_cells.strings) if i%2==0][0:-1])
return out
def landing_status(table_cells):
"""
This function returns the landing status from the HTML table cell
Input: the element of a table data cell extracts extra row
"""
out=[i for i in table_cells.strings][0]
return out
def get_mass(table_cells):
mass=unicodedata.normalize("NFKD", table_cells.text).strip()
if mass:
mass.find("kg")
new_mass=mass[0:mass.find("kg")+2]
else:
new_mass=0
return new_mass
def extract_column_from_header(row):
"""
This function returns the column name from the HTML table header cell
Input: the element of a table data cell extracts extra row
"""
    if (row.br):
        row.br.extract()
    if row.a:
        row.a.extract()
    if row.sup:
        row.sup.extract()
    column_name = ' '.join(row.contents)
    # Filter out digit and empty names
    if not(column_name.strip().isdigit()):
        column_name = column_name.strip()
        return column_name
```
To keep the lab tasks consistent, you will be asked to scrape the data from a snapshot of the `List of Falcon 9 and Falcon Heavy launches` Wikipage updated on
`9th June 2021`
```
static_url = "https://en.wikipedia.org/w/index.php?title=List_of_Falcon_9_and_Falcon_Heavy_launches&oldid=1027686922"
```
Next, request the HTML page from the above URL and get a `response` object
### TASK 1: Request the Falcon9 Launch Wiki page from its URL
First, let's perform an HTTP GET method to request the Falcon9 Launch HTML page, as an HTTP response.
```
# use requests.get() method with the provided static_url
# assign the response to a object
response = requests.get(static_url)
```
Create a `BeautifulSoup` object from the HTML `response`
```
# Use BeautifulSoup() to create a BeautifulSoup object from a response text content
soup = BeautifulSoup(response.text, 'html.parser')
```
Print the page title to verify if the `BeautifulSoup` object was created properly
```
# Use soup.title attribute
print(soup.title)
```
### TASK 2: Extract all column/variable names from the HTML table header
Next, we want to collect all relevant column names from the HTML table header
Let's try to find all tables on the wiki page first. If you need to refresh your memory about `BeautifulSoup`, please check the external reference link towards the end of this lab
```
# Use the find_all function in the BeautifulSoup object, with element type `table`
# Assign the result to a list called `html_tables`
html_tables = soup.find_all('table')
```
The third table is our target table; it contains the actual launch records.
```
# Let's print the third table and check its content
first_launch_table = html_tables[2]
print(first_launch_table)
```
You should be able to see the column names embedded in the table header elements `<th>` as follows:
```
<tr>
<th scope="col">Flight No.
</th>
<th scope="col">Date and<br/>time (<a href="/wiki/Coordinated_Universal_Time" title="Coordinated Universal Time">UTC</a>)
</th>
<th scope="col"><a href="/wiki/List_of_Falcon_9_first-stage_boosters" title="List of Falcon 9 first-stage boosters">Version,<br/>Booster</a> <sup class="reference" id="cite_ref-booster_11-0"><a href="#cite_note-booster-11">[b]</a></sup>
</th>
<th scope="col">Launch site
</th>
<th scope="col">Payload<sup class="reference" id="cite_ref-Dragon_12-0"><a href="#cite_note-Dragon-12">[c]</a></sup>
</th>
<th scope="col">Payload mass
</th>
<th scope="col">Orbit
</th>
<th scope="col">Customer
</th>
<th scope="col">Launch<br/>outcome
</th>
<th scope="col"><a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">Booster<br/>landing</a>
</th></tr>
```
Next, we just need to iterate through the `<th>` elements and apply the provided `extract_column_from_header()` to extract column name one by one
```
column_names = []
th = first_launch_table.find_all('th')
for name in th:
name = extract_column_from_header(name)
if name is not None and len(name) > 0:
column_names.append(name)
# Apply find_all() function with `th` element on first_launch_table
# Iterate each th element and apply the provided extract_column_from_header() to get a column name
# Append the Non-empty column name (`if name is not None and len(name) > 0`) into a list called column_names
```
Check the extracted column names
```
print(column_names)
```
## TASK 3: Create a data frame by parsing the launch HTML tables
We will create an empty dictionary with keys from the extracted column names in the previous task. Later, this dictionary will be converted into a Pandas dataframe
```
launch_dict= dict.fromkeys(column_names)
# Remove an irrelevant column
del launch_dict['Date and time ( )']
# Let's initialize the launch_dict with each value set to an empty list
launch_dict['Flight No.'] = []
launch_dict['Launch site'] = []
launch_dict['Payload'] = []
launch_dict['Payload mass'] = []
launch_dict['Orbit'] = []
launch_dict['Customer'] = []
launch_dict['Launch outcome'] = []
# Added some new columns
launch_dict['Version Booster']=[]
launch_dict['Booster landing']=[]
launch_dict['Date']=[]
launch_dict['Time']=[]
```
Next, we just need to fill up the `launch_dict` with launch records extracted from table rows.
Usually, HTML tables in Wiki pages are likely to contain unexpected annotations and other types of noise, such as reference links `B0004.1[8]`, missing values `N/A [e]`, inconsistent formatting, etc.
To simplify the parsing process, we have provided a code snippet below to help you fill up the `launch_dict`; you can also choose to write your own logic to parse all launch tables:
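For instance, the bracketed reference annotations could be stripped with a small regex; an illustrative sketch, not part of the lab's provided helpers:

```python
import re

def strip_refs(text):
    """Remove wiki-style reference annotations like '[8]' or '[e]'."""
    return re.sub(r'\[[a-z0-9]+\]', '', text).strip()

print(strip_refs('B0004.1[8]'))   # B0004.1
print(strip_refs('N/A [e]'))      # N/A
```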
```
extracted_row = 0
# Extract each table
for table_number, table in enumerate(soup.find_all('table', "wikitable plainrowheaders collapsible")):
    # get table rows
    for rows in table.find_all("tr"):
        # check whether the first table heading is a number corresponding to a launch number
        if rows.th:
            if rows.th.string:
                flight_number = rows.th.string.strip()
                flag = flight_number.isdigit()
        else:
            flag = False
        # get table elements
        row = rows.find_all('td')
        # if it is a number, save the cells into the dictionary
        if flag:
            extracted_row += 1
            # Flight Number value
            launch_dict['Flight No.'].append(flight_number)
            datatimelist = date_time(row[0])
            # Date value
            date = datatimelist[0].strip(',')
            launch_dict['Date'].append(date)
            # Time value
            time = datatimelist[1]
            launch_dict['Time'].append(time)
            # Booster version
            bv = booster_version(row[1])
            if not(bv):
                bv = row[1].a.string
            launch_dict['Version Booster'].append(bv)
            # Launch Site
            launch_site = row[2].a.string
            launch_dict['Launch site'].append(launch_site)
            # Payload
            payload = row[3].a.string
            launch_dict['Payload'].append(payload)
            # Payload Mass
            payload_mass = get_mass(row[4])
            launch_dict['Payload mass'].append(payload_mass)
            # Orbit
            orbit = row[5].a.string
            launch_dict['Orbit'].append(orbit)
            # Customer
            customer = row[6].a.string
            launch_dict['Customer'].append(customer)
            # Launch outcome
            launch_outcome = list(row[7].strings)[0]
            launch_dict['Launch outcome'].append(launch_outcome)
            # Booster landing
            booster_landing = landing_status(row[8])
            launch_dict['Booster landing'].append(booster_landing)
```
After you have filled in the parsed launch record values into `launch_dict`, you can create a dataframe from it.
```
df=pd.DataFrame(launch_dict)
```
We can now export it to a <b>CSV</b> for the next section. To keep the answers consistent, and in case you have difficulties finishing this lab, the following labs will use a provided dataset so that each lab is independent.
<code>df.to_csv('spacex_web_scraped.csv', index=False)</code>
## Authors
<a href="https://www.linkedin.com/in/yan-luo-96288783/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Yan Luo</a>
<a href="https://www.linkedin.com/in/nayefaboutayoun/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Nayef Abou Tayoun</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | --------------------------- |
| 2021-06-09 | 1.0 | Yan Luo | Tasks updates |
| 2020-11-10 | 1.0 | Nayef | Created the initial version |
Copyright © 2021 IBM Corporation. All rights reserved.
# Entities Recognition
<div class="alert alert-info">
This tutorial is available as an IPython notebook at [Malaya/example/entities](https://github.com/huseinzol05/Malaya/tree/master/example/entities).
</div>
<div class="alert alert-warning">
This module was only trained on standard language structure, so it is not safe to use on colloquial, local language structures.
</div>
```
%%time
import malaya
```
### Models accuracy
We use `sklearn.metrics.classification_report` for accuracy reporting, check at https://malaya.readthedocs.io/en/latest/models-accuracy.html#entities-recognition and https://malaya.readthedocs.io/en/latest/models-accuracy.html#entities-recognition-ontonotes5
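As a reminder of what such a report computes, the precision/recall arithmetic for a single class can be sketched in plain Python (toy labels, not real model output):

```python
# Toy binary predictions for one entity class (1 = entity, 0 = other).
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 0.75 0.75 0.75
```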
### Describe supported entities
```
import pandas as pd
pd.set_option('display.max_colwidth', -1)
malaya.entity.describe()
```
### Describe supported Ontonotes 5 entities
```
malaya.entity.describe_ontonotes5()
```
### List available Transformer NER models
```
malaya.entity.available_transformer()
```
### List available Transformer NER Ontonotes 5 models
```
malaya.entity.available_transformer_ontonotes5()
string = 'KUALA LUMPUR: Sempena sambutan Aidilfitri minggu depan, Perdana Menteri Tun Dr Mahathir Mohamad dan Menteri Pengangkutan Anthony Loke Siew Fook menitipkan pesanan khas kepada orang ramai yang mahu pulang ke kampung halaman masing-masing. Dalam video pendek terbitan Jabatan Keselamatan Jalan Raya (JKJR) itu, Dr Mahathir menasihati mereka supaya berhenti berehat dan tidur sebentar sekiranya mengantuk ketika memandu.'
string1 = 'memperkenalkan Husein, dia sangat comel, berumur 25 tahun, bangsa melayu, agama islam, tinggal di cyberjaya malaysia, bercakap bahasa melayu, semua membaca buku undang-undang kewangan, dengar laju Siti Nurhaliza - Seluruh Cinta sambil makan ayam goreng KFC'
```
### Load Transformer model
```python
def transformer(model: str = 'xlnet', quantized: bool = False, **kwargs):
"""
Load Transformer Entity Tagging model trained on Malaya Entity, transfer learning Transformer + CRF.
Parameters
----------
model : str, optional (default='bert')
Model architecture supported. Allowed values:
* ``'bert'`` - Google BERT BASE parameters.
* ``'tiny-bert'`` - Google BERT TINY parameters.
* ``'albert'`` - Google ALBERT BASE parameters.
* ``'tiny-albert'`` - Google ALBERT TINY parameters.
* ``'xlnet'`` - Google XLNET BASE parameters.
* ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
* ``'fastformer'`` - FastFormer BASE parameters.
* ``'tiny-fastformer'`` - FastFormer TINY parameters.
quantized : bool, optional (default=False)
if True, will load 8-bit quantized model.
Quantized model not necessary faster, totally depends on the machine.
Returns
-------
result: model
List of model classes:
* if `bert` in model, will return `malaya.model.bert.TaggingBERT`.
* if `xlnet` in model, will return `malaya.model.xlnet.TaggingXLNET`.
* if `fastformer` in model, will return `malaya.model.fastformer.TaggingFastFormer`.
"""
```
```
model = malaya.entity.transformer(model = 'alxlnet')
```
#### Load Quantized model
To load the 8-bit quantized model, simply pass `quantized = True`; the default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.
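The general idea behind 8-bit quantization can be sketched with plain numpy; this is illustrative only, not what Malaya does internally:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)

# Affine quantization: map floats onto 256 unsigned 8-bit levels, then back.
scale = (weights.max() - weights.min()) / 255.0
zero = weights.min()
q = np.round((weights - zero) / scale).astype(np.uint8)
dequantized = q.astype(np.float32) * scale + zero

# The round-trip error is bounded by one quantization step.
max_err = np.abs(weights - dequantized).max()
print(max_err <= scale)  # True
```

Storage drops 4x, at the cost of this bounded rounding error in every weight.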
```
quantized_model = malaya.entity.transformer(model = 'alxlnet', quantized = True)
```
#### Predict
```python
def predict(self, string: str):
"""
Tag a string.
Parameters
----------
string : str
Returns
-------
result: Tuple[str, str]
"""
```
```
model.predict(string)
model.predict(string1)
quantized_model.predict(string)
quantized_model.predict(string1)
```
#### Group similar tags
```python
def analyze(self, string: str):
"""
Analyze a string.
Parameters
----------
string : str
Returns
-------
result: {'words': List[str], 'tags': [{'text': 'text', 'type': 'location', 'score': 1.0, 'beginOffset': 0, 'endOffset': 1}]}
"""
```
```
model.analyze(string)
model.analyze(string1)
```
#### Vectorize
Let say you want to visualize word level in lower dimension, you can use `model.vectorize`,
```python
def vectorize(self, string: str):
"""
vectorize a string.
Parameters
----------
string: List[str]
Returns
-------
result: np.array
"""
```
```
strings = [string,
'Husein baca buku Perlembagaan yang berharga 3k ringgit dekat kfc sungai petani minggu lepas, 2 ptg 2 oktober 2019 , suhu 32 celcius, sambil makan ayam goreng dan milo o ais',
'contact Husein at husein.zol05@gmail.com',
'tolong tempahkan meja makan makan nasi dagang dan jus apple, milo tarik esok dekat Restoran Sebulek']
r = [quantized_model.vectorize(string) for string in strings]
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
Pretty good: the model is able to cluster similar entities.
### Load Transformer Ontonotes 5 model
```python
def transformer_ontonotes5(
model: str = 'xlnet', quantized: bool = False, **kwargs
):
"""
Load Transformer Entity Tagging model trained on Ontonotes 5 Bahasa, transfer learning Transformer + CRF.
Parameters
----------
model : str, optional (default='bert')
Model architecture supported. Allowed values:
* ``'bert'`` - Google BERT BASE parameters.
* ``'tiny-bert'`` - Google BERT TINY parameters.
* ``'albert'`` - Google ALBERT BASE parameters.
* ``'tiny-albert'`` - Google ALBERT TINY parameters.
* ``'xlnet'`` - Google XLNET BASE parameters.
* ``'alxlnet'`` - Malaya ALXLNET BASE parameters.
* ``'fastformer'`` - FastFormer BASE parameters.
* ``'tiny-fastformer'`` - FastFormer TINY parameters.
quantized : bool, optional (default=False)
if True, will load 8-bit quantized model.
Quantized model not necessary faster, totally depends on the machine.
Returns
-------
result: model
List of model classes:
* if `bert` in model, will return `malaya.model.bert.TaggingBERT`.
* if `xlnet` in model, will return `malaya.model.xlnet.TaggingXLNET`.
* if `fastformer` in model, will return `malaya.model.fastformer.TaggingFastFormer`.
"""
```
```
albert = malaya.entity.transformer_ontonotes5(model = 'albert')
alxlnet = malaya.entity.transformer_ontonotes5(model = 'alxlnet')
```
#### Load Quantized model
To load the 8-bit quantized model, simply pass `quantized = True`; the default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.
```
quantized_albert = malaya.entity.transformer_ontonotes5(model = 'albert', quantized = True)
quantized_alxlnet = malaya.entity.transformer_ontonotes5(model = 'alxlnet', quantized = True)
```
#### Predict
```python
def predict(self, string: str):
"""
Tag a string.
Parameters
----------
string : str
Returns
-------
result: Tuple[str, str]
"""
```
```
albert.predict(string)
alxlnet.predict(string)
albert.predict(string1)
alxlnet.predict(string1)
quantized_albert.predict(string)
quantized_alxlnet.predict(string1)
```
#### Group similar tags
```python
def analyze(self, string: str):
"""
Analyze a string.
Parameters
----------
string : str
Returns
-------
result: {'words': List[str], 'tags': [{'text': 'text', 'type': 'location', 'score': 1.0, 'beginOffset': 0, 'endOffset': 1}]}
"""
```
```
alxlnet.analyze(string1)
```
#### Vectorize
Let say you want to visualize word level in lower dimension, you can use `model.vectorize`,
```python
def vectorize(self, string: str):
"""
vectorize a string.
Parameters
----------
string: List[str]
Returns
-------
result: np.array
"""
```
```
strings = [string, string1]
r = [quantized_model.vectorize(string) for string in strings]
x, y = [], []
for row in r:
x.extend([i[0] for i in row])
y.extend([i[1] for i in row])
tsne = TSNE().fit_transform(y)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = x
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
Pretty good: the model is able to cluster similar entities.
### Load general Malaya entity model
This model able to classify,
1. date
2. money
3. temperature
4. distance
5. volume
6. duration
7. phone
8. email
9. url
10. time
11. datetime
12. local and generic foods, can check available rules in malaya.texts._food
13. local and generic drinks, can check available rules in malaya.texts._food
We can insert BERT or any deep learning model by passing `malaya.entity.general_entity(model = model)`, as long as the model has a `predict` method that returns `[(string, label), (string, label)]`. This is optional.
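Any object exposing a `predict` with that return shape should work; a minimal hypothetical stand-in:

```python
class DummyTagger:
    """Hypothetical model exposing the predict interface general_entity expects."""

    def predict(self, string):
        # Tag every token as OTHER; a real model would return proper labels.
        return [(word, 'OTHER') for word in string.split()]

model_stub = DummyTagger()
print(model_stub.predict('makan ayam goreng'))
# [('makan', 'OTHER'), ('ayam', 'OTHER'), ('goreng', 'OTHER')]
```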
```
entity = malaya.entity.general_entity(model = model)
entity.predict('Husein baca buku Perlembagaan yang berharga 3k ringgit dekat kfc sungai petani minggu lepas, 2 ptg 2 oktober 2019 , suhu 32 celcius, sambil makan ayam goreng dan milo o ais')
entity.predict('contact Husein at husein.zol05@gmail.com')
entity.predict('tolong tempahkan meja makan makan nasi dagang dan jus apple, milo tarik esok dekat Restoran Sebulek')
```
### Voting stack model
```
malaya.stack.voting_stack([albert, alxlnet, alxlnet], string1)
```
<a href="https://colab.research.google.com/github/Phantom-Ren/PR_TH/blob/master/FCM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<center>
# Pattern Recognition, Assignment 7: Fuzzy Clustering (Fuzzy C-Means)
#### 纪泽西 17375338
#### Last Modified: 26 April 2020
</center>
<table align="center">
<td align="center"><a target="_blank" href="https://colab.research.google.com/github/Phantom-Ren/PR_TH/blob/master/FCM.ipynb">
<img src="http://introtodeeplearning.com/images/colab/colab.png?v2.0" style="padding-bottom:5px;" /><br>Run in Google Colab</a></td>
</table>
## Part 1: Import libraries and the dataset
#### To run in another environment, change the dataset path accordingly
```
!pip install -U scikit-fuzzy
%tensorflow_version 2.x
import tensorflow as tf
import sklearn
from sklearn.metrics import confusion_matrix
from skfuzzy.cluster import cmeans
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
import glob
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from time import *
import os
import scipy.io as sio
%cd /content/drive/My Drive/Pattern Recognition/Dataset/cell_dataset
x_train = np.load("x_train.npy")
y_train = np.load("y_train.npy")
x_test = np.load("x_test.npy")
y_test = np.load("y_test.npy")
print(x_train.shape,x_test.shape)
print(np.unique(y_test))
print(np.bincount(y_test.astype(int)))
```
## Part 2: Data preprocessing
```
x_train = x_train.reshape(x_train.shape[0],-1)
x_test = x_test.reshape(x_test.shape[0],-1)
x_train = x_train/255.0
x_test = x_test/255.0
print(x_train.shape,x_test.shape)
```
## Part 3: Model building
Since the skfuzzy module notes that cmeans clustering can run into problems with high-dimensional feature data, we use the AutoEncoder from [Assignment 5: Cell Clustering](https://colab.research.google.com/github/Phantom-Ren/PR_TH/blob/master/细胞聚类.ipynb) for dimensionality reduction.
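For reference, the membership/center updates that `cmeans` iterates can be sketched in plain numpy on toy 2-D data; this mirrors the textbook FCM update, not skfuzzy's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 50))   # skfuzzy layout: (features, samples)
c, m = 3, 2.0                  # number of clusters, fuzzifier

# Random memberships, normalized so each sample's memberships sum to 1.
u = rng.random((c, X.shape[1]))
u /= u.sum(axis=0)

for _ in range(20):
    um = u ** m
    # Weighted cluster centers: (c, features).
    centers = um @ X.T / um.sum(axis=1, keepdims=True)
    # Distances from each center to each sample: (c, n).
    d = np.linalg.norm(X.T[None, :, :] - centers[:, None, :], axis=2)
    d = np.fmax(d, 1e-10)
    # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
    u = d ** (-2.0 / (m - 1))
    u /= u.sum(axis=0)

print(np.allclose(u.sum(axis=0), 1.0))  # True
```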
```
encoding_dim = 10
encoder = tf.keras.models.Sequential([
tf.keras.layers.Dense(128,activation='relu') ,
tf.keras.layers.Dense(32,activation='relu') ,
tf.keras.layers.Dense(8,activation='relu') ,
tf.keras.layers.Dense(encoding_dim)
])
decoder = tf.keras.models.Sequential([
tf.keras.layers.Dense(8,activation='relu') ,
tf.keras.layers.Dense(32,activation='relu') ,
tf.keras.layers.Dense(128,activation='relu') ,
tf.keras.layers.Dense(2601,activation='sigmoid')
])
AE = tf.keras.models.Sequential([
encoder,
decoder
])
AE.compile(optimizer='adam',loss='binary_crossentropy')
AE.fit(x_train,x_train,epochs=10,batch_size=256)
x_encoded = encoder.predict(x_train)
x_encoded_test = encoder.predict(x_test)
x_encoded_t = x_encoded.T
print(x_encoded_t.shape)
st=time()
center, u, u0, d, jm, p, fpc = cmeans(x_encoded_t, m=2, c=8, error=0.0005, maxiter=1000)
et=time()
print('Time Usage:',et-st,'s')
print('Numbers of iterations used:',p)
for i in u:
yhat = np.argmax(u, axis=0)
print(center)
print(center.shape)
from sklearn.metrics import fowlkes_mallows_score
def draw_confusionmatrix(ytest, yhat):
plt.figure(figsize=(10,7))
cm = confusion_matrix(ytest, yhat)
ax = sns.heatmap(cm, annot=True, fmt="d")
plt.ylabel('True label')
plt.xlabel('Predicted label')
acc = accuracy_score(ytest, yhat)
score_f=fowlkes_mallows_score(ytest,yhat)
print(f"Sum Axis-1 as Classification accuracy: {acc}")
print('F-Score:',score_f)
draw_confusionmatrix(y_train,yhat)
temp=[2,2,2,2,1,0,1,1]
y_hat1=np.zeros(14536)
for i in range(0,14536):
y_hat1[i] = temp[yhat[i]]
draw_confusionmatrix(y_train,y_hat1)
```
Comparing the result with K-means clustering, there is a substantial improvement (61% -> 67%). However, relative to supervised learning methods, the result is still unsatisfactory.
<img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/horizontal-primary-light.png" alt="he-black-box" width="600"/>
# Homomorphic Encryption using Duet: Data Owner
## Tutorial 2: Encrypted image evaluation
Welcome!
This tutorial shows how to evaluate encrypted images using Duet and TenSEAL. This notebook illustrates the Data Owner's view of the operations.
We recommend going through Tutorial 0 and 1 before trying this one.
### Setup
All modules are imported here; make sure everything is installed by running the cell below.
```
import os
import requests
import syft as sy
import tenseal as ts
from torchvision import transforms
from random import randint
import numpy as np
from PIL import Image
from matplotlib.pyplot import imshow
import torch
from syft.grid.client.client import connect
from syft.grid.client.grid_connection import GridHTTPConnection
from syft.core.node.domain.client import DomainClient
sy.load_lib("tenseal")
```
## Connect to PyGrid
Connect to PyGrid Domain server.
```
client = connect(
url="http://localhost:5000", # Domain Address
credentials={"email":"admin@email.com", "password":"pwd123"},
conn_type= GridHTTPConnection, # HTTP Connection Protocol
client_type=DomainClient) # Domain Client type
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 1 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### Data Owner helpers
```
# Create the TenSEAL security context
def create_ctx():
"""Helper for creating the CKKS context.
CKKS params:
- Polynomial degree: 8192.
- Coefficient modulus size: [40, 21, 21, 21, 21, 21, 21, 40].
- Scale: 2 ** 21.
- The setup requires the Galois keys for evaluating the convolutions.
"""
poly_mod_degree = 8192
coeff_mod_bit_sizes = [40, 21, 21, 21, 21, 21, 21, 40]
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_mod_degree, -1, coeff_mod_bit_sizes)
ctx.global_scale = 2 ** 21
ctx.generate_galois_keys()
return ctx
def download_images():
    # Create the target directory (and parents) if it does not exist yet
    os.makedirs("data/mnist-samples", exist_ok=True)
url = "https://raw.githubusercontent.com/OpenMined/TenSEAL/master/tutorials/data/mnist-samples/img_{}.jpg"
path = "data/mnist-samples/img_{}.jpg"
for idx in range(6):
img_url = url.format(idx)
img_path = path.format(idx)
r = requests.get(img_url)
with open(img_path, 'wb') as f:
f.write(r.content)
# Sample an image
def load_input():
download_images()
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
    idx = randint(0, 5)  # sample any of the six downloaded images
img_name = "data/mnist-samples/img_{}.jpg".format(idx)
img = Image.open(img_name)
return transform(img).view(28, 28).tolist(), img
# Helper for encoding the image
def prepare_input(ctx, plain_input):
enc_input, windows_nb = ts.im2col_encoding(ctx, plain_input, 7, 7, 3)
assert windows_nb == 64
return enc_input
```
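The `assert windows_nb == 64` in `prepare_input` follows from the geometry: sliding a 7×7 kernel over a 28×28 image with stride 3 gives 8 positions per axis, hence 64 windows. A quick check of that arithmetic:

```python
def n_windows(img_size, kernel_size, stride):
    """Number of convolution windows for a square image with no padding."""
    per_axis = (img_size - kernel_size) // stride + 1
    return per_axis ** 2

windows = n_windows(28, 7, 3)  # (28 - 7) // 3 + 1 = 8 positions per axis -> 64
```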
### Prepare the context
```
context = create_ctx()
```
### Sample and encrypt an image
```
image, orig = load_input()
encrypted_image = prepare_input(context, image)
print("Encrypted image ", encrypted_image)
print("Original image ")
imshow(np.asarray(orig))
ctx_ptr = context.send(client, searchable=True, tags=["context"])
enc_image_ptr = encrypted_image.send(client, searchable=True, tags=["enc_image"])
client.store.pandas
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 2 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### Approve the requests
```
client.requests.pandas
client.requests[0].accept()
client.requests[0].accept()  # the list re-indexes after each accept, so this accepts the second request
client.requests.pandas
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 3 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### Retrieve and decrypt the evaluation result
```
result = client.store["result"].get(delete_obj=False)
result.link_context(context)
result = result.decrypt()
```
### Run the activation and retrieve the label
```
probs = torch.softmax(torch.tensor(result), 0)
label_max = torch.argmax(probs)
print("Maximum probability for label {}".format(label_max))
```
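The same activation step can be reproduced without torch, which is handy as a sanity check on decrypted scores (the logits below are made up, not a real decrypted result):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating, for numerical stability.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.1, 2.3, -1.0, 0.5]   # hypothetical scores, one per label
probs = softmax(logits)
label = max(range(len(probs)), key=probs.__getitem__)  # index of the largest probability
```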
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 4 : Well done!
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft and TenSEAL on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
- [Star TenSEAL](https://github.com/OpenMined/TenSEAL)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org). #lib_tenseal and #code_tenseal are the main channels for the TenSEAL project.
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
# Shallow regression for vector data
This script reads the zip code data produced by **vectorDataPreparations** and trains several machine learning models to predict the average income of a zip code from population and spatial variables.
It assesses model accuracy on a test dataset, and also predicts the income for all zip codes and writes the result to a GeoPackage
for closer inspection.
# 1. Read the data
```
import time
import geopandas as gpd
import pandas as pd
from math import sqrt
import os
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, BaggingRegressor,ExtraTreesRegressor, AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error,r2_score
```
### 1.1 Input and output file paths
```
paavo_data = "../data/paavo"
### Relative path to the zip code geopackage file that was prepared by vectorDataPreparations.py
input_geopackage_path = os.path.join(paavo_data,"zip_code_data_after_preparation.gpkg")
### Output file. You can change the name to identify different regression models
output_geopackage_path = os.path.join(paavo_data,"median_income_per_zipcode_shallow_model.gpkg")
```
### 1.2 Read the input data to a Geopandas dataframe
```
original_gdf = gpd.read_file(input_geopackage_path)
original_gdf.head()
```
# 2. Train the model
Here we try training different models. We encourage you to dive into the documentation of different models a bit and try different parameters.
Which one is the best model? Can you figure out how to improve it even more?
### 2.1 Split the dataset to train and test datasets
```
### Split the gdf to x (the predictor attributes) and y (the attribute to be predicted)
y = original_gdf['hr_mtu'] # Average income
### Remove geometry and textual fields
x = original_gdf.drop(['geometry','postinumer','nimi','hr_mtu'],axis=1)
### Split the both datasets to train (80%) and test (20%) datasets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.2, random_state=42)
```
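`train_test_split` with `random_state=42` makes the 80/20 split reproducible. A minimal stdlib sketch of the same idea (illustrative only, not the scikit-learn implementation):

```python
import random

def split_indices(n, test_size=0.2, seed=42):
    """Shuffle row indices with a fixed seed, then cut into train/test parts."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * (1 - test_size))
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_indices(10)  # 8 training rows, 2 test rows
```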
### 2.2 These are the functions used for training, estimating and predicting.
```
def trainModel(x_train, y_train, model):
start_time = time.time()
print(model)
model.fit(x_train,y_train)
print('Model training took: ', round((time.time() - start_time), 2), ' seconds')
return model
def estimateModel(x_test,y_test, model):
    ### Predict the average income for the test dataset
prediction = model.predict(x_test)
### Assess the accuracy of the model with root mean squared error, mean absolute error and coefficient of determination r2
rmse = sqrt(mean_squared_error(y_test, prediction))
mae = mean_absolute_error(y_test, prediction)
r2 = r2_score(y_test, prediction)
print(f"\nMODEL ACCURACY METRICS WITH TEST DATASET: \n" +
f"\t Root mean squared error: {round(rmse)} \n" +
f"\t Mean absolute error: {round(mae)} \n" +
f"\t Coefficient of determination: {round(r2,4)} \n")
```
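As a sanity check on `estimateModel`, the three metrics can be computed directly from their definitions (pure Python, toy numbers):

```python
from math import sqrt

def regression_metrics(y_true, y_pred):
    """RMSE, MAE and R^2 computed from first principles."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    ss_res = sum(e * e for e in errors)
    mae = sum(abs(e) for e in errors) / n
    mean_y = sum(y_true) / n
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return sqrt(ss_res / n), mae, r2

rmse, mae, r2 = regression_metrics([3.0, 5.0, 7.0], [2.0, 5.0, 9.0])
# errors are 1, 0, -2 -> MAE = 1.0, R^2 = 1 - 5/8 = 0.375
```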
### 2.3 Run different models
### Gradient Boosting Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#regression
```
model = GradientBoostingRegressor(n_estimators=30, learning_rate=0.1,verbose=1)
model_name = "Gradient Boosting Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### Random Forest Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#forest
```
model = RandomForestRegressor(n_estimators=30,verbose=1)
model_name = "Random Forest Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### Extra Trees Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html
```
model = ExtraTreesRegressor(n_estimators=30,verbose=1)
model_name = "Extra Trees Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### Bagging Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#bagging
```
model = BaggingRegressor(n_estimators=30,verbose=1)
model_name = "Bagging Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
### AdaBoost Regressor
* https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostRegressor.html
* https://scikit-learn.org/stable/modules/ensemble.html#adaboost
```
model = AdaBoostRegressor(n_estimators=30)
model_name = "AdaBoost Regressor"
trainModel(x_train, y_train,model)
estimateModel(x_test,y_test, model)
```
# 3. Predict average income to all zip codes
Here we predict the average income for the whole dataset. Prediction uses the model currently stored in the `model` variable, i.e. the one you ran last.
```
### Print chosen model (the one you ran last)
print(model)
### Drop the not-used columns from original_gdf as done before model training.
x = original_gdf.drop(['geometry','postinumer','nimi','hr_mtu'],axis=1)
### Predict the median income with already trained model
prediction = model.predict(x)
### Join the predictions to the original geodataframe and pick only interesting columns for results
original_gdf['predicted_hr_mtu'] = prediction.round(0)
original_gdf['difference'] = original_gdf['predicted_hr_mtu'] - original_gdf['hr_mtu']
resulting_gdf = original_gdf[['postinumer','nimi','hr_mtu','predicted_hr_mtu','difference','geometry']]
fig, ax = plt.subplots(figsize=(20, 10))
ax.set_title("Predicted average income by zip code " + model_name, fontsize=25)
ax.set_axis_off()
resulting_gdf.plot(column='predicted_hr_mtu', ax=ax, legend=True, cmap="magma")
```
# 4. EXERCISE: Calculate the difference between real and predicted incomes
Calculate the difference of real and predicted income amounts by zip code level and plot a map of it
* **original_gdf** is the original dataframe
* **resulting_gdf** is the predicted one
# RNN Sentiment Classifier
In this notebook, we use an RNN to classify IMDB movie reviews by their sentiment.
[](https://colab.research.google.com/github/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/rnn_sentiment_classifier.ipynb)
#### Load dependencies
```
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, SpatialDropout1D
from keras.layers import SimpleRNN # new!
from keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Set hyperparameters
```
# output directory name:
output_dir = 'model_output/rnn'
# training:
epochs = 16 # way more!
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 10000
max_review_length = 100 # lowered due to vanishing gradient over time
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# RNN layer architecture:
n_rnn = 256
drop_rnn = 0.2
# dense layer architecture:
# n_dense = 256
# dropout = 0.2
```
#### Load data
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words) # removed n_words_to_skip
```
#### Preprocess data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
```
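Pre-padding and pre-truncation (the `pad_type = trunc_type = 'pre'` setting above) keep the end of each review, on the idea that the final tokens matter most to the RNN's last hidden state. A hypothetical pure-Python sketch of the behavior, not the Keras implementation:

```python
def pad_pre(seq, maxlen, value=0):
    """Left-pad with `value`, or truncate from the left, to exactly maxlen tokens."""
    if len(seq) >= maxlen:
        return seq[-maxlen:]                      # 'pre' truncation keeps the tail
    return [value] * (maxlen - len(seq)) + seq    # 'pre' padding fills the front

short = pad_pre([5, 6, 7], 5)           # -> [0, 0, 5, 6, 7]
longer = pad_pre([1, 2, 3, 4, 5, 6], 4) # -> [3, 4, 5, 6]
```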
#### Design neural network architecture
```
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
model.add(SimpleRNN(n_rnn, dropout=drop_rnn))
# model.add(Dense(n_dense, activation='relu')) # a top dense layer is less common in NLP than in vision models
# model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
#### Configure model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
```
#### Train!
```
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+"/weights.07.hdf5")
y_hat = model.predict_proba(x_valid)
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
"{:0.2f}".format(roc_auc_score(y_valid, y_hat)*100.0)
```
# Intro to Jupyter Notebooks
### `Jupyter` is a project for developing open-source software
### `Jupyter Notebooks` is a `web` application to create scripts
### `Jupyter Lab` is the new generation of web user interface for Jupyter
### But it is more than that
#### It lets you insert and save text, equations & visualizations ... in the same page!

***
# Notebook dashboard
When you launch the Jupyter notebook server on your computer, you will see a dashboard like this:

# Saving your own script
All scripts we are showing here today run online, and we will make changes throughout the workshop. To keep your modified script for further reference, you will need to save a copy on your own computer at the end.
<div class="alert alert-block alert-info">
<b>Try it out! </b>
<br><br>
Go to <b>File</b> in the top menu -> Download As -> Notebook </div>
<br>
Any changes made online, even if saved (but not downloaded), will be lost once the Binder connection is closed.
***
## Two types of cells
### `Code` Cells: execute code
### `Markdown` Cells: show formated text
There are two ways to change the type of a cell:
- Clicking on the drop-down menu at the top
- Using the shortcut `Esc-y` for Code and `Esc-m` for Markdown
<br>
<div class="alert alert-block alert-info"><b>Try it out! </b>
<bR>
<br>- Click on the next cell
<br>- Change the type using the drop-down menu & select <b>Code</b>
<br>- Change it back to <b>Markdown</b>
</div>
## This is a simple operation
y = 4 + 6
print(y)
## <i>Note the change in format of the first line & the text color in the second line</i>
<div class="alert alert-block alert-info"><b>Try it out!</b>
<br><br>In the next cell:
<br>- Double-Click on the next cell
<br>- Press <b>Esc</b> (note the blue color of the left border)
<br>- Type <b>y</b> to change it to <b>Code</b> type
<br>- Use <b>m</b> to change it back to <b>Markdown</b> type
</div>
```
# This is a simple operation
y = 4 + 6
print(y)
```
***
# To execute commands
## - `Shift-Enter` : executes cell & advance to next
## - `Control-enter` : executes cell & stay in the same cell
<div class="alert alert-block alert-info"><b>Try it out!</b>
<br>
<br>In the previous cell:
<br>- Double-Click on the previous cell
<br>- Use <b>Shift-Enter</b> to execute
<br>- Double-Click on the in the previous cell again
<br>- This time use <b>Control-Enter</b> to execute
<br>
<br>- Now change the type to <b>Code</b> & execute the cell
</div>
## You can also execute a cell with the `Run` button in the top menu
## Or even the entire script from the `Cell` menu at the top
***
## Other commands
### From the icon menu:
### Save, Add Cell, Cut Cell, Copy Cell, Paste Cell, Move Cell Up, Move Cell Down

### or the drop down menu 'command palette'
<div class="alert alert-block alert-info"><b>Try them out!</b></div>
## Now, the keyboard shortcuts
#### First press `Esc`, then:
- `s` : save changes
<br>
- `a`, `b` : create cell above and below
<br>
- `dd` : delete cell
<br>
- `x`, `c`, `v` : cut, copy and paste cell
<br>
- `z` undo last change
<div class="alert alert-block alert-info">
<b> Let's practice!</b>
<br>
<br>- Create a cell below with <b>Esc-b</b>, and click on it
<br>- Type print('Hello world!') and execute it using <b>Control-Enter</b>
<br>- Copy-paste the cell to make a duplicate by typing <b>Esc-c</b> & <b>Esc-v</b>
<br>- Cut the first cell using <b>Esc-x</b>
</div>
## And the last one: adding line numbers
- `Esc-l` : in Jupyter Notebooks
- `Esc-Shift-l`: in Jupyter Lab
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>
- Try it in a code cell
<br>- And now try it in the markdown cell
</div>
```
y = 5
print(y + 4)
x = 8
print(y*x)
```
***
## Last note about the `Kernel`
#### That little program that runs in the background & lets you run your notebook
<div class="alert alert-block alert-danger">
Once in a while the <b>kernel</b> will die or your program will get stuck, and, like everything else in the computer world, you'll have to restart it.
</div>
### You can do this by going to the `Kernel` menu -> Restart; then you'll have to re-run all your cells, or at least the ones above the one you're working on (use `Cell` menu -> Run All Above).

# Qiskit Aqua: Vehicle Routing
## Introduction
Logistics is a major industry, with some estimates valuing it at USD 8183 billion globally in 2015. Most service providers operate a number of vehicles (e.g., trucks and container ships), a number of depots, where the vehicles are based overnight, and serve a number of client locations with each vehicle during each day. There are many optimization and control problems that consider these parameters. Computationally, the key challenge is how to design routes from depots to a number of client locations and back to the depot, so as to minimize vehicle-miles traveled, time spent, or similar objective functions. In this notebook we formalize an idealized version of the problem and showcase its solution using the quantum approximate optimization approach of Farhi, Goldstone, and Gutmann (2014).
The overall workflow we demonstrate comprises:
1. Establish the client locations. Normally, these would be available ahead of the day of deliveries from a database. In our use case, we generate these randomly.
2. Compute the pair-wise distances, travel times, or similar. In our case, we consider the Euclidean distance, "as the crow flies", which is perhaps the simplest possible.
3. Compute the actual routes. This step is run twice: first, we obtain a reference value by a run of a classical solver (IBM CPLEX) on the classical computer; second, we run an alternative, hybrid algorithm partly on the quantum computer.
4. Visualize the results. In our case, this is again a simplistic plot.
In the following, we first explain the model, before we proceed with the installation of the pre-requisites and the data loading.
## The Model
Mathematically speaking, the vehicle routing problem (VRP) is a combinatorial problem, wherein the best routes from a depot to a number of clients and back to the depot are sought, given a number of available vehicles. There are a number of formulations possible, extending a number of formulations of the traveling salesman problem [Applegate et al, 2006]. Here, we present a formulation known as MTZ [Miller, Tucker, Zemlin, 1960].
Let $n$ be the number of clients (indexed as $1,\dots,n$), and $K$ be the number of available vehicles. Let $x_{ij} \in \{0,1\}$ be the binary decision variable which, if it is $1$, activates the segment from node $i$ to node $j$. The node index runs from $0$ to $n$, where $0$ is (by convention) the depot. There are twice as many distinct decision variables as edges. For example, in a fully connected graph, there are $n(n+1)$ binary decision variables.
If two nodes $i$ and $j$ have a link from $i$ to $j$, we write $i \sim j$. We also denote with $\delta(i)^+$ the set of nodes to which $i$ has a link, i.e., $j \in \delta(i)^+$ if and only if $i \sim j$. Similarly, we denote with
$\delta(i)^-$ the set of nodes which are connected to $i$, in the sense that $j \in \delta(i)^-$ if and only if $j \sim i$.
In addition, we consider continuous variables, for all nodes $i = 1,\dots, n$, denoted $u_i$. These variables are needed in the MTZ formulation of the problem to eliminate sub-tours between clients.
The VRP can be formulated as:
$$
(VRP) \quad f = \min_{\{x_{ij}\}_{i\sim j}\in \{0,1\}, \{u_i\}_{i=1,\dots,n}\in \mathbb{R}} \quad \sum_{i \sim j} w_{ij} x_{ij}
$$
subject to the node-visiting constraint:
$$
\sum_{j \in \delta(i)^+} x_{ij} = 1, \,\sum_{j \in \delta(i)^-} x_{ji} = 1,\, \forall i \in \{1,\dots,n\},
$$
the depot-visiting constraints:
$$
\sum_{i \in \delta(0)^+} x_{0i} = K, \, \sum_{j \in \delta(0)^-} x_{j0} = K,
$$
and the sub-tour elimination constraints:
$$
u_i - u_j + Q x_{ij} \leq Q-q_j, \, \forall i \sim j, \,i ,j \neq 0, \quad q_i \leq u_i \leq Q,\, \forall i, i \neq 0.
$$
In particular,
- The cost function is linear in the decision variables and weighs each arc by a positive weight $w_{ij}>0$ (typically the distance between node $i$ and node $j$);
- The first set of constraints enforce that from and to every client, only one link is allowed;
- The second set of constraints enforce that from and to the depot, exactly $K$ links are allowed;
- The third set of constraints enforce the sub-tour elimination constraints and are bounds on $u_i$, with $Q>q_j>0$, and $Q,q_i \in \mathbb{R}$.
## Classical solution
We can solve the VRP classically, e.g., by using CPLEX. CPLEX uses a branch-and-bound-and-cut method to find an approximate solution of the VRP, which, in this formulation, is a mixed-integer linear program (MILP). For the sake of notation, we pack the decision variables in one vector as
$$
{\bf z} = [x_{01},x_{02},\ldots,x_{10}, x_{12},\ldots,x_{n(n-1)}]^T,
$$
wherein ${\bf z} \in \{0,1\}^N$, with $N = n (n+1)$. So the dimension of the problem scales quadratically with the number of nodes. Let us denote the optimal solution by ${\bf z}^*$, and the associated optimal cost $f^*$.
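Since the number of binary variables is $N = n(n+1)$, the qubit requirement in the quantum mapping below grows quadratically with the number of clients, which is worth tabulating before attempting larger instances:

```python
def n_binary_vars(n_clients):
    """Decision variables x_ij on a fully connected graph of n clients plus one depot."""
    return n_clients * (n_clients + 1)

# -> {2: 6, 3: 12, 4: 20, 5: 30}
sizes = {n: n_binary_vars(n) for n in (2, 3, 4, 5)}
```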
## Quantum solution
Here, we demonstrate an approach that combines classical and quantum computing steps, following the quantum approximate optimization approach of Farhi, Goldstone, and Gutmann (2014). In particular, we use the variational quantum eigensolver (VQE). We stress that given the limited depth of the quantum circuits employed (variational forms), it is hard to discuss the speed-up of the algorithm, as the solution obtained is heuristic in nature. At the same time, due to the nature and importance of the target problems, it is worth investigating heuristic approaches, which may be worthwhile for some problem classes.
Following [5], the algorithm can be summarized as follows:
- Preparation steps:
- Transform the combinatorial problem into a binary polynomial optimization problem with equality constraints only;
- Map the resulting problem into an Ising Hamiltonian ($H$) for variables ${\bf z}$ and basis $Z$, via penalty methods if necessary;
- Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.
- Choose a set of controls $\theta$ and make a trial function $\big|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$.
- Algorithm steps:
- Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)\big|H\big|\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen.
- Use a classical optimizer to choose a new set of controls.
- Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.
- Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $\Big|\langle z_i\big|\psi(\boldsymbol\theta)\rangle\Big|^2\;\forall i$ to obtain the answer.
There are many parameters throughout, notably the choice of the trial wavefunction. Below, we consider:
$$
\big|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m \big|+\rangle
$$
where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully-entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^N Y(\theta_{i})$, where $N$ is the number of qubits and $m$ is the depth of the quantum circuit.
### Construct the Ising Hamiltonian
From $VRP$ one can construct a binary polynomial optimization with equality constraints only by considering cases in which $K=n-1$. In these cases the sub-tour elimination constraints are not necessary and the problem is only on the variable ${\bf z}$. In particular, we can write an augmented Lagrangian as
$$
(IH) \quad H = \sum_{i \sim j} w_{ij} x_{ij} + A \sum_{i \in \{1,\dots,n\}} \Big(\sum_{j \in \delta(i)^+} x_{ij} - 1\Big)^2 + A \sum_{i \in \{1,\dots,n\}}\Big(\sum_{j \in \delta(i)^-} x_{ji} - 1\Big)^2 + A \Big(\sum_{i \in \delta(0)^+} x_{0i} - K\Big)^2 + A\Big(\sum_{j \in \delta(0)^-} x_{j0} - K\Big)^2
$$
where $A$ is a big enough parameter.
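The text only requires $A$ to be "big enough". One common heuristic, an assumption here rather than part of this tutorial, is to choose $A$ strictly larger than the maximum attainable value of the unpenalized objective, e.g. one plus the sum of all edge weights:

```python
def penalty_from_weights(w):
    """Hypothetical heuristic: penalty strictly larger than the total edge weight."""
    return 1 + sum(sum(row) for row in w)

# Toy symmetric weight matrix for 3 nodes
w = [[0, 4, 2],
     [4, 0, 3],
     [2, 3, 0]]
A = penalty_from_weights(w)  # -> 19
```

With such an $A$, violating any constraint costs more than any feasible tour could save.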
### From Hamiltonian to QP formulation
In the vector ${\bf z}$, and for a complete graph ($\delta(i)^+ = \delta(i)^- = \{0,1,\dots,i-1,i+1,\dots,n\}$), $H$ can be written as follows.
$$
\min_{{\bf z}\in \{0,1\}^{n(n+1)}} {\bf w}^T {\bf z} + A \sum_{i \in \{1,\dots,n\}} \Big({\bf e}_i \otimes {\bf 1}_n^T {\bf z} - 1\Big)^2 + A \sum_{i \in \{1,\dots,n\}}\Big({\bf v}_i^T {\bf z} - 1\Big)^2 + A \Big(({\bf e}_0 \otimes {\bf 1}_n)^T{\bf z} - K\Big)^2 + A\Big({\bf v}_0^T{\bf z} - K\Big)^2.
$$
That is:
$$
\min_{\bf z\in \{0,1\}^{n(n+1)}} \bf z^T {\bf Q} \bf z + {\bf g}^T \bf z + c,
$$
where the quadratic term is:
$$
{\bf Q} = A \sum_{i \in \{0,1,\dots,n\}} \Big[({\bf e}_i \otimes {\bf 1}_n)({\bf e}_i \otimes {\bf 1}_n)^T + {\bf v}_i{\bf v}_i^T \Big]
$$
the linear term is:
$$
{\bf g} = {\bf w} -2 A \sum_{i \in \{1,\dots,n\}} \Big[({\bf e}_i \otimes {\bf 1}_n) + {\bf v}_i \Big] -2 A K \Big[({\bf e}_0 \otimes {\bf 1}_n) + {\bf v}_0 \Big]
$$
and the constant term is:
$$
c = 2An +2AK^2.
$$
The QP formulation of the Ising Hamiltonian is ready for the use of VQE.
## References
[1] E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028, 2014
[2] https://github.com/Qiskit/qiskit-tutorial/blob/master/qiskit/aqua/optimization/maxcut_and_tsp.ipynb
[3] C. E. Miller, A. W. Tucker, and R. A. Zemlin (1960). "Integer Programming Formulations and Travelling Salesman Problems". J. ACM. 7: 326–329. doi:10.1145/321043.321046.
[4] D. L. Applegate, R. M. Bixby, V. Chvátal, and W. J. Cook (2006). The Traveling Salesman Problem. Princeton University Press, ISBN 978-0-691-12993-8.
## Initialization
First of all we load all the packages that we need:
- Python 3.6 or greater is required;
- CPLEX 12.8 or greater is required for the classical computations;
- Latest Qiskit is required for the quantum computations.
```
# Load the packages that are required
import numpy as np
import operator
import matplotlib.pyplot as plt
import sys
if sys.version_info < (3, 6):
raise Exception('Please use Python version 3.6 or greater.')
try:
import cplex
from cplex.exceptions import CplexError
except:
print("Warning: Cplex not found.")
import math
# Qiskit packages
from qiskit.quantum_info import Pauli
from qiskit.aqua.input import EnergyInput
from qiskit.aqua import run_algorithm
from qiskit.aqua.operators import WeightedPauliOperator
# setup aqua logging
import logging
from qiskit.aqua._logging import set_logging_config, build_logging_config
#set_logging_config(build_logging_config(logging.DEBUG)) # choose INFO, DEBUG to see the log
```
We then initialize the variables
```
# Initialize the problem by defining the parameters
n = 3 # number of nodes + depot (n+1)
K = 2 # number of vehicles
```
We define an initializer class that randomly places the nodes in a 2-D plane and computes the distance between them.
```
# Get the data
class Initializer():
def __init__(self, n):
self.n = n
def generate_instance(self):
n = self.n
# np.random.seed(33)
np.random.seed(1543)
xc = (np.random.rand(n) - 0.5) * 10
yc = (np.random.rand(n) - 0.5) * 10
        instance = np.zeros([n, n])
        for ii in range(0, n):
            for jj in range(ii + 1, n):
                # Note: this stores the squared Euclidean distance between nodes
                instance[ii, jj] = (xc[ii] - xc[jj]) ** 2 + (yc[ii] - yc[jj]) ** 2
                instance[jj, ii] = instance[ii, jj]
        return xc, yc, instance
# Initialize the problem by randomly generating the instance
initializer = Initializer(n)
xc,yc,instance = initializer.generate_instance()
```
## Classical solution using IBM ILOG CPLEX
For a classical solution, we use IBM ILOG CPLEX. CPLEX is able to find the exact solution of this problem. We first define a ClassicalOptimizer class that encodes the problem in a way that CPLEX can solve, and then instantiate the class and solve it.
```
class ClassicalOptimizer:
def __init__(self, instance,n,K):
self.instance = instance
self.n = n # number of nodes
self.K = K # number of vehicles
def compute_allowed_combinations(self):
f = math.factorial
return f(self.n) / f(self.K) / f(self.n-self.K)
def cplex_solution(self):
# refactoring
instance = self.instance
n = self.n
K = self.K
my_obj = list(instance.reshape(1, n**2)[0])+[0. for x in range(0,n-1)]
my_ub = [1 for x in range(0,n**2+n-1)]
my_lb = [0 for x in range(0,n**2)] + [0.1 for x in range(0,n-1)]
my_ctype = "".join(['I' for x in range(0,n**2)]) + "".join(['C' for x in range(0,n-1)])
my_rhs = 2*([K] + [1 for x in range(0,n-1)]) + [1-0.1 for x in range(0,(n-1)**2-(n-1))] + [0 for x in range(0,n)]
my_sense = "".join(['E' for x in range(0,2*n)]) + "".join(['L' for x in range(0,(n-1)**2-(n-1))])+"".join(['E' for x in range(0,n)])
try:
my_prob = cplex.Cplex()
self.populatebyrow(my_prob,my_obj,my_ub,my_lb,my_ctype,my_sense,my_rhs)
my_prob.solve()
except CplexError as exc:
print(exc)
return
x = my_prob.solution.get_values()
x = np.array(x)
cost = my_prob.solution.get_objective_value()
return x,cost
def populatebyrow(self,prob,my_obj,my_ub,my_lb,my_ctype,my_sense,my_rhs):
n = self.n
prob.objective.set_sense(prob.objective.sense.minimize)
prob.variables.add(obj = my_obj, lb = my_lb, ub = my_ub, types = my_ctype)
prob.set_log_stream(None)
prob.set_error_stream(None)
prob.set_warning_stream(None)
prob.set_results_stream(None)
rows = []
for ii in range(0,n):
col = [x for x in range(0+n*ii,n+n*ii)]
coef = [1 for x in range(0,n)]
rows.append([col, coef])
for ii in range(0,n):
col = [x for x in range(0+ii,n**2,n)]
coef = [1 for x in range(0,n)]
rows.append([col, coef])
# Sub-tour elimination constraints:
for ii in range(0, n):
for jj in range(0,n):
if (ii != jj)and(ii*jj>0):
col = [ii+(jj*n), n**2+ii-1, n**2+jj-1]
coef = [1, 1, -1]
rows.append([col, coef])
for ii in range(0,n):
col = [(ii)*(n+1)]
coef = [1]
rows.append([col, coef])
prob.linear_constraints.add(lin_expr=rows, senses=my_sense, rhs=my_rhs)
# Instantiate the classical optimizer class
classical_optimizer = ClassicalOptimizer(instance,n,K)
# Print number of feasible solutions
print('Number of feasible solutions = ' + str(classical_optimizer.compute_allowed_combinations()))
# Solve the problem in a classical fashion via CPLEX
x = None
z = None
try:
x,classical_cost = classical_optimizer.cplex_solution()
# Put the solution in the z variable
z = [x[ii] for ii in range(n**2) if ii//n != ii%n]
# Print the solution
print(z)
except Exception:
print("CPLEX may be missing.")
# Visualize the solution
def visualize_solution(xc, yc, x, C, n, K, title_str):
plt.figure()
plt.scatter(xc, yc, s=200)
for i in range(len(xc)):
plt.annotate(i, (xc[i] + 0.15, yc[i]), size=16, color='r')
plt.plot(xc[0], yc[0], 'r*', ms=20)
plt.grid()
for ii in range(0, n ** 2):
if x[ii] > 0:
ix = ii // n
iy = ii % n
plt.arrow(xc[ix], yc[ix], xc[iy] - xc[ix], yc[iy] - yc[ix], length_includes_head=True, head_width=.25)
plt.title(title_str+' cost = ' + str(int(C * 100) / 100.))
plt.show()
if x: visualize_solution(xc, yc, x, classical_cost, n, K, 'Classical')
```
If you have CPLEX, the solution shows the depot with a star and the selected routes for the vehicles with arrows.
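As an aside, the count reported by `compute_allowed_combinations` is just the binomial coefficient $\binom{n}{K}$. A minimal standalone sketch (illustrative, not part of the notebook's code — the values used below are hypothetical):

```python
import math

def allowed_combinations(n, K):
    # Binomial coefficient n! / (K! * (n - K)!), mirroring compute_allowed_combinations
    f = math.factorial
    return f(n) // f(K) // f(n - K)

print(allowed_combinations(4, 2))  # with n=4 nodes and K=2 vehicles -> 6
```

On Python 3.8+ the same value is available directly as `math.comb(n, K)`.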
## Quantum solution from the ground up
For the quantum solution, we use Qiskit.
First, we derive the solution from the ground up, using a class QuantumOptimizer that encodes the quantum approach to the problem; we then instantiate it and solve it. We define the following methods inside the class:
- `binary_representation` : encodes the problem $(M)$ into the QUBO form of the Ising Hamiltonian (essentially linear algebra);
- `construct_hamiltonian` : constructs the Ising Hamiltonian in terms of the $Z$ basis;
- `check_hamiltonian` : makes sure that the Ising Hamiltonian is correctly encoded in the $Z$ basis: to do this, it solves an eigenvalue-eigenvector problem for a symmetric matrix of dimension $2^N \times 2^N$; for the problem at hand with $n=3$, $N = 12$ seems to be the limit;
- `vqe_solution` : solves the problem $(M)$ via VQE by using the SPSA solver (with default parameters);
- `_q_solution` : internal routine to represent the solution in a usable format.
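Before diving into the class, it may help to see the shape of the cost function that `binary_representation` builds: a QUBO of the form $x^T Q x + g^T x + c$ over a binary vector $x$. A minimal, self-contained evaluation sketch in plain Python (the numbers below are a toy example, not the VRP instance; the class itself uses NumPy):

```python
def qubo_cost(x, Q, g, c):
    """Evaluate x^T Q x + g . x + c for a binary vector x."""
    n = len(x)
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))
    lin = sum(g[i] * x[i] for i in range(n))
    return quad + lin + c

# Toy 2-variable example (hypothetical coefficients)
Q = [[0, 2], [2, 0]]
g = [1, -1]
print(qubo_cost([1, 1], Q, g, 3))  # 4 + 0 + 3 = 7
```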
```
class QuantumOptimizer:
def __init__(self, instance, n, K, max_trials=1000):
self.instance = instance
self.n = n
self.K = K
self.max_trials = max_trials
def binary_representation(self,x_sol=0):
instance = self.instance
n = self.n
K = self.K
A = np.max(instance) * 100 # A parameter of cost function
# Determine the weights w
instance_vec = instance.reshape(n ** 2)
w_list = [instance_vec[x] for x in range(n ** 2) if instance_vec[x] > 0]
w = np.zeros(n * (n - 1))
for ii in range(len(w_list)):
w[ii] = w_list[ii]
# Some variables I will use
Id_n = np.eye(n)
Im_n_1 = np.ones([n - 1, n - 1])
Iv_n_1 = np.ones(n)
Iv_n_1[0] = 0
Iv_n = np.ones(n-1)
neg_Iv_n_1 = np.ones(n) - Iv_n_1
v = np.zeros([n, n*(n-1)])
for ii in range(n):
count = ii-1
for jj in range(n*(n-1)):
if jj//(n-1) == ii:
count = ii
if jj//(n-1) != ii and jj%(n-1) == count:
v[ii][jj] = 1.
vn = np.sum(v[1:], axis=0)
# Q defines the interactions between variables
Q = A*(np.kron(Id_n, Im_n_1) + np.dot(v.T, v))
# g defines the contribution from the individual variables
g = w - 2 * A * (np.kron(Iv_n_1,Iv_n) + vn.T) - \
2 * A * K * (np.kron(neg_Iv_n_1, Iv_n) + v[0].T)
# c is the constant offset
c = 2 * A * (n-1) + 2 * A * (K ** 2)
try:
max(x_sol)
# Evaluates the cost distance from a binary representation of a path
fun = lambda x: np.dot(np.around(x), np.dot(Q, np.around(x))) + np.dot(g, np.around(x)) + c
cost = fun(x_sol)
except:
cost = 0
return Q,g,c,cost
def construct_hamiltonian(self):
instance = self.instance
n = self.n
K = self.K
N = (n - 1) * n # number of qubits
Q,g,c,_ = self.binary_representation()
# Defining the new matrices in the Z-basis
Iv = np.ones(N)
Qz = (Q / 4)
gz = (-g / 2 - np.dot(Iv, Q / 4) - np.dot(Q / 4, Iv))
cz = (c + np.dot(g / 2, Iv) + np.dot(Iv, np.dot(Q / 4, Iv)))
cz = cz + np.trace(Qz)
Qz = Qz - np.diag(np.diag(Qz))
# Getting the Hamiltonian in the form of a list of Pauli terms
pauli_list = []
for i in range(N):
if gz[i] != 0:
wp = np.zeros(N)
vp = np.zeros(N)
vp[i] = 1
pauli_list.append((gz[i], Pauli(vp, wp)))
for i in range(N):
for j in range(i):
if Qz[i, j] != 0:
wp = np.zeros(N)
vp = np.zeros(N)
vp[i] = 1
vp[j] = 1
pauli_list.append((2 * Qz[i, j], Pauli(vp, wp)))
pauli_list.append((cz, Pauli(np.zeros(N), np.zeros(N))))
return cz, pauli_list
def check_hamiltonian(self):
cz, op = self.construct_hamiltonian()
Op = WeightedPauliOperator(paulis=op)
qubitOp, offset = Op, 0
algo_input = EnergyInput(qubitOp)
# Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'problem': {'name': 'ising'},
'algorithm': algorithm_cfg
}
result = run_algorithm(params, algo_input)
quantum_solution = self._q_solution(result['eigvecs'][0],self.n*(self.n+1))
ground_level = result['energy'] + offset
return quantum_solution, ground_level
def vqe_solution(self):
cz, op = self.construct_hamiltonian()
Op = WeightedPauliOperator(paulis=op)
qubitOp, offset = Op, cz
algo_input = EnergyInput(qubitOp)
algorithm_cfg = {
'name': 'VQE'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': self.max_trials
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'name': 'qasm_simulator'
}
}
result = run_algorithm(params, algo_input)
#quantum_solution = self._q_solution(result['eigvecs'][0], self.n * (self.n + 1))
quantum_solution_dict = result['eigvecs'][0]
q_s = max(quantum_solution_dict.items(), key=operator.itemgetter(1))[0]
quantum_solution= [int(chars) for chars in q_s]
quantum_solution = np.flip(quantum_solution, axis=0)
_,_,_,level = self.binary_representation(x_sol=quantum_solution)
return quantum_solution_dict, quantum_solution, level
def _q_solution(self, v, N):
index_value = [x for x in range(len(v)) if v[x] == max(v)][0]
string_value = "{0:b}".format(index_value)
while len(string_value)<N:
string_value = '0'+string_value
sol = list()
for elements in string_value:
if elements == '0':
sol.append(0)
else:
sol.append(1)
sol = np.flip(sol, axis=0)
return sol
```
### Step 1
Instantiate the quantum optimizer class with parameters:
- the instance;
- the number of nodes and vehicles `n` and `K`;
- the number of iterations for SPSA in VQE (default 1000)
```
# Instantiate the quantum optimizer class with parameters:
quantum_optimizer = QuantumOptimizer(instance, n, K, 100)
```
### Step 2
Encode the problem as a binary formulation (IH-QP).
Sanity check: make sure that the binary formulation in the quantum optimizer is correct (i.e., yields the same cost given the same solution).
```
# Check if the binary representation is correct
try:
if z:
Q,g,c,binary_cost = quantum_optimizer.binary_representation(x_sol = z)
print(binary_cost,classical_cost)
if np.abs(binary_cost - classical_cost)<0.01:
print('Binary formulation is correct')
else: print('Error in the binary formulation')
else:
print('Could not verify the correctness, due to CPLEX solution being unavailable.')
Q,g,c,binary_cost = quantum_optimizer.binary_representation()
except NameError as e:
print("Warning: Please run the cells above first.")
print(e)
```
### Step 3
Encode the problem as an Ising Hamiltonian in the Z basis.
Sanity check: make sure that the formulation is correct (i.e., yields the same cost given the same solution)
```
ground_state, ground_level = quantum_optimizer.check_hamiltonian()
print(ground_state)
if z:
if np.abs(ground_level - classical_cost)<0.01:
print('Ising Hamiltonian in Z basis is correct')
else: print('Error in the Ising Hamiltonian formulation')
```
### Step 4
Solve the problem via VQE. N.B. Depending on the number of qubits, the state-vector simulation can take a while; for example, with 12 qubits it takes more than 12 hours. Logging is useful to see what the program is doing.
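For instance, verbose logging can be enabled with Python's standard `logging` module (a generic sketch; the logger name `'qiskit.aqua'` is an assumption about the Aqua version in use — adjust it for your installation):

```python
import logging

# Print INFO-level messages to the console and raise the verbosity
# of Aqua's logger (logger name 'qiskit.aqua' is an assumption).
logging.basicConfig(level=logging.INFO)
logging.getLogger('qiskit.aqua').setLevel(logging.INFO)
```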
```
quantum_dictionary, quantum_solution, quantum_cost = quantum_optimizer.vqe_solution()
print(quantum_solution, quantum_cost)
```
### Step 5
Visualize the solution
```
# Put the solution in a way that is compatible with the classical variables
x_quantum = np.zeros(n**2)
kk = 0
for ii in range(n ** 2):
if ii // n != ii % n:
x_quantum[ii] = quantum_solution[kk]
kk += 1
# visualize the solution
visualize_solution(xc, yc, x_quantum, quantum_cost, n, K, 'Quantum')
# and visualize the classical for comparison
if x: visualize_solution(xc, yc, x, classical_cost, n, K, 'Classical')
```
The plots present the depot with a star and the selected routes for the vehicles with arrows. Note that in this particular case, we can find the optimal solution of the QP formulation, which happens to coincide with the optimal solution of the ILP.
Keep in mind, though, that VQE is a heuristic working on the QP formulation of the Ising Hamiltonian. For suitable choices of $A$, local optima of the QP formulation are feasible solutions to the ILP. While for some small instances, as above, we can find optimal solutions of the QP formulation that coincide with optima of the ILP, in general finding an optimal solution of the ILP is harder than finding a local optimum of the QP formulation, which in turn is harder than finding a feasible solution of the ILP. Even within VQE, stronger guarantees can be provided for specific variational forms (trial wave functions).
Last but not least, you may be pleased to learn that the above has been packaged in Qiskit Aqua.
```
from qiskit import Aer
from qiskit.aqua import QuantumInstance
from qiskit.aqua import run_algorithm
from qiskit.aqua.input import EnergyInput
from qiskit.aqua.algorithms import VQE, QAOA, ExactEigensolver
from qiskit.aqua.components.optimizers import COBYLA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua.translators.ising.vehicle_routing import *
qubitOp = get_vehiclerouting_qubitops(instance, n, K)
backend = Aer.get_backend('statevector_simulator')
seed = 50
cobyla = COBYLA()
cobyla.set_options(maxiter=250)
ry = RY(qubitOp.num_qubits, depth=3, entanglement='full')
vqe = VQE(qubitOp, ry, cobyla)
vqe.random_seed = seed
quantum_instance = QuantumInstance(backend=backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
# print(result)
x_quantum2 = get_vehiclerouting_solution(instance, n, K, result)
print(x_quantum2)
quantum_cost2 = get_vehiclerouting_cost(instance, n, K, x_quantum2)
print(quantum_cost2)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
<a href="https://colab.research.google.com/github/JoseAugustoVital/Decision-Score-MarketPlace/blob/main/decision_score.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ***UNIVERSIDADE FEDERAL DO MATO GROSSO DO SUL***
# Data analysis to increase the satisfaction level of a marketplace's customers using a decision tree.
**ASSIGNMENT 3 - ARTIFICIAL INTELLIGENCE 2021/1**
__________________________________________________
**Student:**
Name: **José Augusto Lajo Vieira Vital**
```
# START OF THE STUDY
```
**Importing the operating system module**
```
import os
```
**Setting up access to the working directory**
```
from google.colab import drive
drive.mount('/content/drive')  # mount Drive first, then change into the dataset folder
os.chdir('/content/drive/MyDrive/datasets')
!pwd
!ls
```
**Importing the Pandas and NumPy libraries**
```
import pandas as pd
import numpy as np
```
**Reading the .csv tables**
```
tabela_cliente = pd.read_csv('olist_customers_dataset.csv')
tabela_localizacao = pd.read_csv('olist_geolocation_dataset.csv')
tabela_pedido = pd.read_csv('olist_order_items_dataset.csv')
tabela_pagamento = pd.read_csv('olist_order_payments_dataset.csv')
tabela_review = pd.read_csv('olist_order_reviews_dataset.csv')
tabela_entrega_pedido = pd.read_csv('olist_orders_dataset.csv')
tabela_descricao_produto = pd.read_csv('olist_products_dataset.csv')
tabela_vendedor = pd.read_csv('olist_sellers_dataset.csv')
tabela_categoria_traduzido = pd.read_csv('product_category_name_translation.csv')
```
**Checking the first 5 rows of each table**
```
tabela_cliente.head()
tabela_localizacao.head()
tabela_pedido.head()
tabela_pagamento.head()
tabela_review.head()
tabela_entrega_pedido.head()
tabela_descricao_produto.head()
tabela_vendedor.head()
tabela_categoria_traduzido.head()
```
**Start of the process of merging the 9 provided tables, with the goal of producing a resulting table that contains the elements most relevant for determining the review_score. In the first merge, we join the customers table with the corresponding order deliveries, using each customer's unique id as the key.**
```
pd.merge(tabela_cliente, tabela_entrega_pedido, on=["customer_id"], how="inner")
```
**Merging with the remaining provided tables**
**1 - (Customers, Deliveries)**
**2 - (1, Orders)**
**3 - (2, Payments)**
**4 - (3, Reviews)**
**5 - (4, Sellers)**
```
test = pd.merge(tabela_cliente, tabela_entrega_pedido, on=["customer_id"], how="inner")
test = pd.merge(test, tabela_pedido, on=["order_id"], how="inner")
test = pd.merge(test, tabela_pagamento, on=["order_id"], how="inner")
test = pd.merge(test, tabela_review, on=["order_id"], how="inner")
test = pd.merge(test, tabela_vendedor, on=["seller_id"], how="inner")
```
**Resulting table**
**Rows: 118315**
**Columns: 31**
```
test
```
**The second filtering step removes elements that have no relation to the review_score variable**
```
#test = test.drop(columns=["customer_unique_id"],axis=1)
#test = test.drop(columns=["customer_city"],axis=1)
#test = test.drop(columns=["customer_state"],axis=1)
#test = test.drop(columns=["order_status"],axis=1)
#test = test.drop(columns=["order_purchase_timestamp"],axis=1)
#test = test.drop(columns=["order_approved_at"],axis=1)
#test = test.drop(columns=["order_delivered_carrier_date"],axis=1)
#test = test.drop(columns=["order_delivered_customer_date"],axis=1)
#test = test.drop(columns=["order_estimated_delivery_date"],axis=1)
#test = test.drop(columns=["shipping_limit_date"],axis=1)
#test = test.drop(columns=["review_creation_date"],axis=1)
#test = test.drop(columns=["review_answer_timestamp"],axis=1)
#test = test.drop(columns=["seller_city"],axis=1)
#test = test.drop(columns=["seller_state"],axis=1)
#test = test.drop(columns=["review_comment_title"],axis=1)
#test = test.drop(columns=["review_comment_message"],axis=1)
```
**Resulting table after removing attributes that are not priorities for customer satisfaction**
**Rows: 118315**
**Columns: 15**
```
test
```
**Placing each attribute of the resulting table into an array for easier data manipulation**
```
vetor_cliente = np.array(test.customer_id)
vetor_cepcliente = np.array(test.customer_zip_code_prefix)
vetor_pedido = np.array(test.order_id)
vetor_idpedido = np.array(test.order_item_id)
vetor_produto = np.array(test.product_id)
vetor_vendedor = np.array(test.seller_id)
vetor_preco_produto = np.array(test.price)
vetor_frete = np.array(test.freight_value)
vetor_parcela = np.array(test.payment_sequential)
vetor_tipopagamento = np.array(test.payment_type)
vetor_pay = np.array(test.payment_installments)
vetor_valorfinal = np.array(test.payment_value)
vetor_review = np.array(test.review_id)
vetor_score = np.array(test.review_score)
vetor_cepvendedor = np.array(test.seller_zip_code_prefix)
```
**Defining a new empty dataframe**
```
df = pd.DataFrame()
df
```
**Defining the columns of the new dataframe and assigning to each column its corresponding data array recorded earlier.**
```
COLUNAS = [
'Cliente',
'CEP_Cliente',
'Pedido',
'id_Pedido',
'Produto',
'Vendedor',
'Preco_produto',
'Frete',
'Parcela',
'Tipo_pagamento',
'Installments',
'Valor_total',
'ID_Review',
'CEP_Vendedor',
'Score'
]
df = pd.DataFrame(columns =COLUNAS)
df.Cliente = vetor_cliente
df.CEP_Cliente = vetor_cepcliente
df.Pedido = vetor_pedido
df.id_Pedido = vetor_idpedido
df.Produto = vetor_produto
df.Vendedor = vetor_vendedor
df.Preco_produto = vetor_preco_produto
df.Frete = vetor_frete
df.Parcela = vetor_parcela
df.Tipo_pagamento = vetor_tipopagamento
df.Installments = vetor_pay
df.Valor_total = vetor_valorfinal
df.ID_Review = vetor_review
df.CEP_Vendedor = vetor_cepvendedor
df.Score = vetor_score
df
```
**Printing the customer column.**
```
df.Cliente
#for index, row in df.iterrows():
# if row['Score'] == 1:
# df.loc[index,'Classe'] = 'Pessimo'
# if row['Score'] == 2:
# df.loc[index,'Classe'] = 'Ruim'
# if row['Score'] == 3:
# df.loc[index,'Classe'] = 'Mediano'
# if row['Score'] == 4:
# df.loc[index,'Classe'] = 'Bom'
# if row['Score'] == 5:
# df.loc[index,'Classe'] = 'Otimo'
```
**Dataframe information**
**Attributes, non-null counts, and column data types**
```
df.info()
```
**Grouping the dataframe rows by customer**
```
df.groupby(by='Cliente').size()
```
**Importing the decision tree methods**
```
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
from sklearn import metrics
```
**To simplify the data and avoid creating dummies for this whole dataframe, we remove all non-numeric characters so that the model is able to run. This was the solution found to convert the "object"-type attributes into numeric types.**
```
# regex=True makes the pattern explicit (and future-proof for newer pandas)
df['Cliente'] = df['Cliente'].str.replace(r'\D', '', regex=True)
df['Pedido'] = df['Pedido'].str.replace(r'\D', '', regex=True)
df['Produto'] = df['Produto'].str.replace(r'\D', '', regex=True)
df['Vendedor'] = df['Vendedor'].str.replace(r'\D', '', regex=True)
df['ID_Review'] = df['ID_Review'].str.replace(r'\D', '', regex=True)
df
```
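The replacement pattern `\D` matches any non-digit character; a standalone sketch of the same idea with the standard `re` module (illustrative input string only):

```python
import re

def keep_digits(s):
    # Remove every non-digit character, mirroring str.replace(r'\D', '') above
    return re.sub(r'\D', '', s)

print(keep_digits('a1b2c3'))  # -> '123'
```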
**We applied the non-numeric character removal to all object-type columns except the payment type, since the payment type takes only a few values. Therefore, we use the get_dummies function only for the payment type.**
**The Tipo_pagamento column is thus split into four boolean columns. The new columns are: Tipo_pagamento_boleto, Tipo_pagamento_credit_card, Tipo_pagamento_debit_card, Tipo_pagamento_voucher**
```
result_df = pd.get_dummies(df, columns=["Tipo_pagamento"])
```
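`get_dummies` performs one-hot encoding: for each category, an indicator column is created. A minimal sketch of the idea in plain Python (illustrative only — not the pandas implementation, and the sample values are just two of the payment types):

```python
def one_hot(values):
    """Map each value to a dict of 0/1 indicator columns, one per category."""
    categories = sorted(set(values))
    return [{f"Tipo_pagamento_{c}": int(v == c) for c in categories} for v in values]

rows = one_hot(['boleto', 'credit_card', 'boleto'])
print(rows[0])  # {'Tipo_pagamento_boleto': 1, 'Tipo_pagamento_credit_card': 0}
```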
**Final resulting dataframe**
```
result_df
```
**Creating a backup dataframe for later conclusions**
```
reserva = result_df.copy()  # .copy() so later modifications do not alias result_df
reserva
```
**Removing all rows with satisfaction level 4 or 5. This gives us one dataframe with all the data, and one containing only rows rated 3, 2, or 1, i.e., mediocre, bad, or terrible (the dissatisfied cases that are most interesting to analyze)**
```
reserva = reserva.drop(reserva[reserva.Score > 3].index)
reserva
```
**Train/test split**
**Established proportions:**
**70% train**
**30% test**
```
X_train, X_test, y_train, y_test = train_test_split(result_df.drop('Score', axis=1), result_df['Score'], test_size=0.3)
```
**Number of samples for each set**
```
X_train.shape, X_test.shape
```
**Number of targets for each set**
```
y_train.shape, y_test.shape
```
**Creating the classifier**
```
cls = DecisionTreeClassifier()
```
**Training**
```
cls = cls.fit(X_train, y_train)
```
**Array with the importance of each attribute for determining the review_score**
```
cls.feature_importances_
df.head()
```
**To make the model easier to interpret, we loop over the attributes and print each one's weight in determining the score**
```
for feature, importancia in zip(result_df.columns, cls.feature_importances_):
print("{}:{:.1f}%".format(feature,((importancia*100))))
```
**Prediction array to check the learning**
```
result = cls.predict(X_test)
result
result_df.Score[118310]
```
**Model precision metrics and averages**
```
from sklearn import metrics
print(metrics.classification_report(y_test,result))
```
**Overall accuracy**
```
from sklearn.model_selection import cross_val_score
allScores = cross_val_score(cls, X_train, y_train , cv=10)
allScores.mean()
```
**Training with the backup dataframe (only the below-average satisfaction levels, score <= 3)**
**Splitting the dataframe into train and test sets (70% and 30%, respectively)**
```
X_train, X_test, y_train, y_test = train_test_split(reserva.drop('Score', axis=1), reserva['Score'], test_size=0.3)
```
**Number of training samples**
```
X_train.shape, X_test.shape
```
**Classifier**
```
clf = DecisionTreeClassifier()
```
**Training**
```
clf = clf.fit(X_train, y_train)
```
**Importance of each attribute for determining the customers' satisfaction level**
```
clf.feature_importances_
for feature, importancia in zip(reserva.columns, clf.feature_importances_):
print("{}:{:.1f}%".format(feature,((importancia*100))))
```
# ANALYSIS OF THE PROCESSED DATA
**Aiming to increase this marketplace's review_score, we used the decision tree algorithm to find the elements that directly influence customer acceptance and satisfaction, from the product dispatch process to the quality of the service and of the final product. We applied filtering strategies: first we removed the elements that clearly had no influence on the rating given by the customer; then we simplified the data types to make the dataframe easier for the algorithm to process. After preparing the data for learning, we recorded the following values:**
**Considering all ratings (great, good, mediocre, bad, terrible):**
Cliente:10.4%
CEP_Cliente:11.0%
Pedido:10.4%
id_Pedido:0.8%
Produto:9.4%
Vendedor:8.0%
Preco_produto:8.0%
Frete:8.1%
Parcela:0.3%
Installments:3.7%
Valor_total:8.6%
ID_Review:11.4%
CEP_Vendedor:7.7%
Score:0.8%
Tipo_pagamento_boleto:0.8%
Tipo_pagamento_credit_card:0.2%
Tipo_pagamento_debit_card:0.3%
------------------------------------
**Considering only unsatisfactory ratings (mediocre, bad, terrible):**
Cliente:10.0%
CEP_Cliente:11.1%
Pedido:10.9%
id_Pedido:0.8%
Produto:9.5%
Vendedor:8.3%
Preco_produto:7.4%
Frete:8.3%
Parcela:0.2%
Installments:3.6%
Valor_total:8.7%
ID_Review:11.6%
CEP_Vendedor:7.6%
Score:0.7%
Tipo_pagamento_boleto:0.7%
Tipo_pagamento_credit_card:0.2%
Tipo_pagamento_debit_card:0.3%
---------------------
**Therefore, based on the results obtained, the marketplace owner should pay attention to the following aspects of the logistics:**
**1 - Relationship between CEP_Cliente, Frete and CEP_Vendedor**
**These attributes had a direct impact on the rating given by the customer, so some possible causes should be considered:**
**- Problems with the delivery.**
**- Delivery quality (delivery time, damage to the product during transport).**
**- High shipping cost for certain regions.**
-----------------------------
**2 - Product**
**The product attribute had a direct impact on customer dissatisfaction. Therefore, one should consider:**
**- Poor quality of certain products.**
**- The delivered product not matching the advertised one.**
**- Delivery of wrong products; a logistics problem.**
---------------------------------
**3 - Seller**
**The seller attribute had a direct impact on customer dissatisfaction. Therefore, one should consider:**
**- Questionable quality of the service provided by the seller.**
**- A seller error at some specific step of the process.**
--------------------------
# **CONCLUSION**
**We can therefore conclude that, to increase its score, this marketplace should analyze logistics problems in the transport process, product aspects, and the sellers: these are the most important factors for understanding what is happening in the company and for solving the problem that is generating dissatisfaction among this e-commerce's customers.**
# Introduction to geospatial vector data in Python
```
%matplotlib inline
import pandas as pd
import geopandas
pd.options.display.max_rows = 10
```
## Importing geospatial data
Geospatial data is often available from specific GIS file formats or data stores, like ESRI shapefiles, GeoJSON files, geopackage files, PostGIS (PostgreSQL) database, ...
We can use the GeoPandas library to read many of those GIS file formats (relying on the `fiona` library under the hood, which is an interface to GDAL/OGR), using the `geopandas.read_file` function.
For example, let's start by reading a shapefile with all the countries of the world (adapted from http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-countries/, zip file is available in the `/data` directory), and inspect the data:
```
countries = geopandas.read_file("zip://./data/ne_110m_admin_0_countries.zip")
# or if the archive is unpacked:
# countries = geopandas.read_file("data/ne_110m_admin_0_countries/ne_110m_admin_0_countries.shp")
countries.head()
countries.plot()
```
What can we observe:
- Using `.head()` we can see the first rows of the dataset, just like we can do with Pandas.
- There is a 'geometry' column and the different countries are represented as polygons
- We can use the `.plot()` method to quickly get a *basic* visualization of the data
## What's a GeoDataFrame?
We used the GeoPandas library to read in the geospatial data, and this returned us a `GeoDataFrame`:
```
type(countries)
```
A GeoDataFrame contains a tabular, geospatial dataset:
* It has a **'geometry' column** that holds the geometry information (or features in GeoJSON).
* The other columns are the **attributes** (or properties in GeoJSON) that describe each of the geometries
Such a `GeoDataFrame` is just like a pandas `DataFrame`, but with some additional functionality for working with geospatial data:
* A `.geometry` attribute that always returns the column with the geometry information (returning a GeoSeries). The column name itself does not necessarily need to be 'geometry', but it will always be accessible as the `.geometry` attribute.
* It has some extra methods for working with spatial data (area, distance, buffer, intersection, ...), which we will see in later notebooks
```
countries.geometry
type(countries.geometry)
countries.geometry.area
```
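For intuition, the planar area that `.area` returns for a simple polygon can be computed with the shoelace formula. A minimal pure-Python sketch (illustrative only; GeoPandas delegates the real computation to shapely/GEOS):

```python
def shoelace_area(coords):
    """Area of a simple polygon given its exterior ring as (x, y) tuples."""
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]  # wrap around to close the ring
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```

Note that for the lat/lon data above the result is in square degrees, which is not a meaningful physical area; this is one reason re-projection (discussed below) matters.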
**It's still a DataFrame**, so we have all the pandas functionality available to use on the geospatial dataset, and to do data manipulations with the attributes and geometry information together.
For example, we can calculate average population number over all countries (by accessing the 'pop_est' column, and calling the `mean` method on it):
```
countries['pop_est'].mean()
```
Or, we can use boolean filtering to select a subset of the dataframe based on a condition:
```
africa = countries[countries['continent'] == 'Africa']
africa.plot()
```
---
**Exercise**: create a plot of South America
<!--
countries[countries['continent'] == 'South America'].plot()
-->
---
```
countries.head()
```
---
The rest of the tutorial is going to assume you already know some pandas basics, but we will try to give hints for that part for those that are not familiar.
A few resources in case you want to learn more about pandas:
- Pandas docs: https://pandas.pydata.org/pandas-docs/stable/10min.html
- Other tutorials: chapter from pandas in https://jakevdp.github.io/PythonDataScienceHandbook/, https://github.com/jorisvandenbossche/pandas-tutorial, https://github.com/TomAugspurger/pandas-head-to-tail, ...
<div class="alert alert-info" style="font-size:120%">
<b>REMEMBER</b>: <br>
<ul>
<li>A `GeoDataFrame` allows you to perform typical tabular data analysis together with spatial operations</li>
<li>A `GeoDataFrame` (or *Feature Collection*) consists of:
<ul>
<li>**Geometries** or **features**: the spatial objects</li>
<li>**Attributes** or **properties**: columns with information about each spatial object</li>
</ul>
</li>
</ul>
</div>
## Geometries: Points, Linestrings and Polygons
Spatial **vector** data can consist of different types, and the 3 fundamental types are:
* **Point** data: represents a single point in space.
* **Line** data ("LineString"): represents a sequence of points that form a line.
* **Polygon** data: represents a filled area.
And each of them can also be combined in multi-part geometries (See https://shapely.readthedocs.io/en/stable/manual.html#geometric-objects for extensive overview).
For the example we have seen up to now, the individual geometry objects are Polygons:
```
print(countries.geometry[2])
```
Let's import some other datasets with different types of geometry objects.
A dataset of cities in the world (adapted from http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-populated-places/, zip file is available in the `/data` directory), consisting of Point data:
```
cities = geopandas.read_file("zip://./data/ne_110m_populated_places.zip")
print(cities.geometry[0])
```
And a dataset of rivers in the world (from http://www.naturalearthdata.com/downloads/50m-physical-vectors/50m-rivers-lake-centerlines/, zip file is available in the `/data` directory) where each river is a (multi-)line:
```
rivers = geopandas.read_file("zip://./data/ne_50m_rivers_lake_centerlines.zip")
print(rivers.geometry[0])
```
### The `shapely` library
The individual geometry objects are provided by the [`shapely`](https://shapely.readthedocs.io/en/stable/) library
```
type(countries.geometry[0])
```
To construct one ourselves:
```
from shapely.geometry import Point, Polygon, LineString
p = Point(1, 1)
print(p)
polygon = Polygon([(1, 1), (2,2), (2, 1)])
```
<div class="alert alert-info" style="font-size:120%">
<b>REMEMBER</b>: <br><br>
Single geometries are represented by `shapely` objects:
<ul>
<li>If you access a single geometry of a GeoDataFrame, you get a shapely geometry object</li>
<li>Those objects have similar functionality as geopandas objects (GeoDataFrame/GeoSeries). For example:
<ul>
<li>`single_shapely_object.distance(other_point)` -> distance between two points</li>
<li>`geodataframe.distance(other_point)` -> distance for each point in the geodataframe to the other point</li>
</ul>
</li>
</ul>
</div>
## Coordinate reference systems
A **coordinate reference system (CRS)** determines how the two-dimensional (planar) coordinates of the geometry objects should be related to actual places on the (non-planar) earth.
For a nice in-depth explanation, see https://docs.qgis.org/2.8/en/docs/gentle_gis_introduction/coordinate_reference_systems.html
A GeoDataFrame or GeoSeries has a `.crs` attribute which holds (optionally) a description of the coordinate reference system of the geometries:
```
countries.crs
```
For the `countries` dataframe, it indicates that it used the EPSG 4326 / WGS84 lon/lat reference system, which is one of the most used.
It uses coordinates as latitude and longitude in degrees, as can you be seen from the x/y labels on the plot:
```
countries.plot()
```
The `.crs` attribute is given as a dictionary. In this case, it only indicates the EPSG code, but it can also contain the full "proj4" string (in dictionary form).
Under the hood, GeoPandas uses the `pyproj` / `proj4` libraries to deal with the re-projections.
For more information, see also http://geopandas.readthedocs.io/en/latest/projections.html.
---
There are sometimes good reasons to change the coordinate reference system of your dataset, for example:
- different sources with different crs -> need to convert to the same crs
- distance-based operations -> you need a crs with meter units (not degrees)
- plotting in a certain crs (e.g. to preserve area)
We can convert a GeoDataFrame to another reference system using the `to_crs` function.
For example, let's convert the countries to the World Mercator projection (http://epsg.io/3395):
```
# remove Antarctica, as the Mercator projection cannot deal with the poles
countries = countries[(countries['name'] != "Antarctica")]
countries_mercator = countries.to_crs(epsg=3395) # or .to_crs({'init': 'epsg:3395'})
countries_mercator.plot()
```
Note the different scale of x and y.
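Numerically, a re-projection like this is just a coordinate transform applied to every vertex. As a sketch (not what pyproj does internally), the spherical Mercator forward mapping — the basis of the Web Mercator projection, EPSG 3857 — converts lon/lat degrees to meters like this:

```python
import math

R = 6378137.0  # WGS84 semi-major axis in meters

def mercator_forward(lon_deg, lat_deg):
    """Spherical Mercator forward mapping: lon/lat degrees -> x/y meters.
    A simplification; real projections (EPSG 3395) use the ellipsoid."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# The equator maps to y = 0, and x grows linearly with longitude;
# y diverges toward the poles, which is why Antarctica was dropped above.
x, y = mercator_forward(180, 0)
```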
---
**Exercise**: project the countries to [Web Mercator](http://epsg.io/3857), the CRS used by Google Maps, OpenStreetMap and most web providers.
<!--
countries.to_crs(epsg=3857)
-->
---
## Plotting our different layers together
```
ax = countries.plot(edgecolor='k', facecolor='none', figsize=(15, 10))
rivers.plot(ax=ax)
cities.plot(ax=ax, color='red')
ax.set(xlim=(-20, 60), ylim=(-40, 40))
```
See the [04-more-on-visualization.ipynb](04-more-on-visualization.ipynb) notebook for more details on visualizing geospatial datasets.
---
**Exercise**: replicate the figure above by coloring the countries in black and cities in yellow.
<!--
ax = countries.plot(edgecolor='w', facecolor='k', figsize=(15, 10))
rivers.plot(ax=ax)
cities.plot(ax=ax, color='yellow')
ax.set(xlim=(-20, 60), ylim=(-40, 40))
-->
---
## A bit more on importing and creating GeoDataFrames
### Note on `fiona`
Under the hood, GeoPandas uses the [Fiona library](http://toblerity.org/fiona/) (pythonic interface to GDAL/OGR) to read and write data. GeoPandas provides a more user-friendly wrapper, which is sufficient for most use cases. But sometimes you want more control, and in that case, to read a file with fiona you can do the following:
```
import fiona
from shapely.geometry import shape
with fiona.drivers():
with fiona.open("data/ne_110m_admin_0_countries/ne_110m_admin_0_countries.shp") as collection:
for feature in collection:
# ... do something with geometry
geom = shape(feature['geometry'])
# ... do something with properties
print(feature['properties']['name'])
```
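Each `feature` yielded by fiona is a GeoJSON-like mapping with `geometry` and `properties` keys. A stdlib sketch of the record structure the loop above iterates over (the values here are hypothetical):

```python
# A fiona feature record, as a plain dict (hypothetical example values)
feature = {
    "geometry": {"type": "Point", "coordinates": [4.9, 52.4]},
    "properties": {"name": "Example"},
}

# The loop above pulls these two parts out of every record
geom_type = feature["geometry"]["type"]
name = feature["properties"]["name"]
print(geom_type, name)  # Point Example
```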
### Constructing a GeoDataFrame manually
```
geopandas.GeoDataFrame({
'geometry': [Point(1, 1), Point(2, 2)],
'attribute1': [1, 2],
'attribute2': [0.1, 0.2]})
```
### Creating a GeoDataFrame from an existing dataframe
For example, if you have lat/lon coordinates in two columns:
```
df = pd.DataFrame(
{'City': ['Buenos Aires', 'Brasilia', 'Santiago', 'Bogota', 'Caracas'],
'Country': ['Argentina', 'Brazil', 'Chile', 'Colombia', 'Venezuela'],
'Latitude': [-34.58, -15.78, -33.45, 4.60, 10.48],
'Longitude': [-58.66, -47.91, -70.66, -74.08, -66.86]})
df['Coordinates'] = list(zip(df.Longitude, df.Latitude))
df['Coordinates'] = df['Coordinates'].apply(Point)
gdf = geopandas.GeoDataFrame(df, geometry='Coordinates')
gdf
```
See http://geopandas.readthedocs.io/en/latest/gallery/create_geopandas_from_pandas.html#sphx-glr-gallery-create-geopandas-from-pandas-py for a full example
---
**Exercise**: use [geojson.io](http://geojson.io) to mark five points, and create a `GeoDataFrame` with it. Note that coordinates will be expressed in longitude and latitude, so you'll have to set the CRS accordingly.
<!--
df = pd.DataFrame(
{'Name': ['Hotel', 'Capitol', 'Barton Springs'],
'Latitude': [30.28195889019179, 30.274782936992608, 30.263728440902543],
'Longitude': [-97.74006128311157, -97.74038314819336, -97.77013421058655]})
df['Coordinates'] = list(zip(df.Longitude, df.Latitude))
df['Coordinates'] = df['Coordinates'].apply(Point)
gdf = geopandas.GeoDataFrame(df, geometry='Coordinates', crs={'init': 'epsg:4326'})
-->
---
```
import os, json, sys, time, random
import numpy as np
import torch
from easydict import EasyDict
from math import floor
from steves_utils.vanilla_train_eval_test_jig import Vanilla_Train_Eval_Test_Jig
from steves_utils.torch_utils import get_dataset_metrics, independent_accuracy_assesment
from steves_models.configurable_vanilla import Configurable_Vanilla
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.lazy_map import Lazy_Map
from steves_utils.sequence_aggregator import Sequence_Aggregator
from steves_utils.stratified_dataset.traditional_accessor import Traditional_Accessor_Factory
from steves_utils.cnn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.torch_utils import (
confusion_by_domain_over_dataloader,
independent_accuracy_assesment
)
from steves_utils.utils_v2 import (
per_domain_accuracy_from_confusion,
get_datasets_base_path
)
# from steves_utils.ptn_do_report import TBD
required_parameters = {
"experiment_name",
"lr",
"device",
"dataset_seed",
"seed",
"labels",
"domains_target",
"domains_source",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"batch_size",
"n_epoch",
"patience",
"criteria_for_best",
"normalize_source",
"normalize_target",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"pickle_name_source",
"pickle_name_target",
"torch_default_dtype",
}
from steves_utils.ORACLE.utils_v2 import (
ALL_SERIAL_NUMBERS,
ALL_DISTANCES_FEET_NARROWED,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "MANUAL CORES CNN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["seed"] = 1337
standalone_parameters["labels"] = ALL_SERIAL_NUMBERS
standalone_parameters["domains_source"] = [8,32,50]
standalone_parameters["domains_target"] = [14,20,26,38,44,]
standalone_parameters["num_examples_per_domain_per_label_source"]=-1
standalone_parameters["num_examples_per_domain_per_label_target"]=-1
standalone_parameters["pickle_name_source"] = "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"
standalone_parameters["pickle_name_target"] = "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["batch_size"]=128
standalone_parameters["n_epoch"] = 3
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["normalize_source"] = False
standalone_parameters["normalize_target"] = False
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": len(standalone_parameters["labels"])}},
]
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "cnn_2:wisig",
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains_source": [1, 2, 3, 4],
"domains_target": [1, 2, 3, 4],
"pickle_name_target": "wisig.node3-19.stratified_ds.2022A.pkl",
"pickle_name_source": "wisig.node3-19.stratified_ds.2022A.pkl",
"device": "cuda",
"lr": 0.0001,
"batch_size": 128,
"normalize_source": False,
"normalize_target": False,
"num_examples_per_domain_per_label_source": -1,
"num_examples_per_domain_per_label_target": -1,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 130}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"dataset_seed": 7,
"seed": 7,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
def wrap_in_dataloader(p, ds):
return torch.utils.data.DataLoader(
ds,
batch_size=p.batch_size,
shuffle=True,
num_workers=1,
persistent_workers=True,
prefetch_factor=50,
pin_memory=True
)
taf_source = Traditional_Accessor_Factory(
labels=p.labels,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_source),
seed=p.dataset_seed
)
train_original_source, val_original_source, test_original_source = \
taf_source.get_train(), taf_source.get_val(), taf_source.get_test()
taf_target = Traditional_Accessor_Factory(
labels=p.labels,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_target),
seed=p.dataset_seed
)
train_original_target, val_original_target, test_original_target = \
taf_target.get_train(), taf_target.get_val(), taf_target.get_test()
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Map. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[:2] # Strip the tuple to just (x,y)
train_processed_source = wrap_in_dataloader(
p,
Lazy_Map(train_original_source, transform_lambda)
)
val_processed_source = wrap_in_dataloader(
p,
Lazy_Map(val_original_source, transform_lambda)
)
test_processed_source = wrap_in_dataloader(
p,
Lazy_Map(test_original_source, transform_lambda)
)
train_processed_target = wrap_in_dataloader(
p,
Lazy_Map(train_original_target, transform_lambda)
)
val_processed_target = wrap_in_dataloader(
p,
Lazy_Map(val_original_target, transform_lambda)
)
test_processed_target = wrap_in_dataloader(
p,
Lazy_Map(test_original_target, transform_lambda)
)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
ep = next(iter(test_processed_target))
ep[0].dtype
model = Configurable_Vanilla(
x_net=x_net,
label_loss_object=torch.nn.NLLLoss(),
learning_rate=p.lr
)
jig = Vanilla_Train_Eval_Test_Jig(
model=model,
path_to_best_model=p.BEST_MODEL_PATH,
device=p.device,
label_loss_object=torch.nn.NLLLoss(),
)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
patience=p.patience,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
criteria_for_best=p.criteria_for_best
)
total_experiment_time_secs = time.time() - start_time_secs
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = wrap_in_dataloader(p, Sequence_Aggregator((datasets.source.original.val, datasets.target.original.val)))
confusion = confusion_by_domain_over_dataloader(model, p.device, val_dl, forward_uses_domain=False)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
###################################
# Write out the results
###################################
experiment = {
"experiment_name": p.experiment_name,
"parameters": p,
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "cnn"),
}
get_loss_curve(experiment)
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
## Computer Vision Learner
[`vision.learner`](/vision.learner.html#vision.learner) is the module that defines the [`cnn_learner`](/vision.learner.html#cnn_learner) method, to easily get a model suitable for transfer learning.
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
```
## Transfer learning
Transfer learning is a technique where you use a model trained on a very large dataset (usually [ImageNet](http://image-net.org/) in computer vision) and then adapt it to your own dataset. The idea is that it has learned to recognize many features on all of this data, and that you will benefit from this knowledge, especially if your dataset is small, compared to starting from a randomly initialized model. It has been shown in [this article](https://arxiv.org/abs/1805.08974) on a wide range of tasks that transfer learning nearly always gives better results.
In practice, you need to change the last part of your model to adapt it to your own number of classes. Most convolutional models end with a few linear layers (a part we will call the head). The last convolutional layer will have analyzed features in the image that went through the model, and the job of the head is to convert those into predictions for each of our classes. In transfer learning we keep all the convolutional layers (called the body or the backbone of the model) with their weights pretrained on ImageNet, but define a new head initialized randomly.
Then we will train the model we obtain in two phases: first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data), then we unfreeze the layers of the backbone (gradually if necessary) and fine-tune the whole model (possibly using differential learning rates).
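That two-phase (optionally gradual) schedule can be sketched abstractly in plain Python — the group names and the `train_last_groups` helper below are illustrative stand-ins, not fastai's API:

```python
# Layer groups, ordered from pretrained backbone to randomly initialized head.
groups = ["backbone_early", "backbone_late", "head"]
trainable = {g: False for g in groups}

def train_last_groups(n):
    """Mark only the last n groups as trainable (illustrative stand-in)."""
    for i, g in enumerate(groups):
        trainable[g] = i >= len(groups) - n

train_last_groups(1)            # phase 1: train the head only
phase1 = dict(trainable)
train_last_groups(len(groups))  # phase 2: unfreeze everything and fine-tune
phase2 = dict(trainable)
```

Gradual unfreezing is just the intermediate steps between these two calls, enabling one more group at a time.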
The [`cnn_learner`](/vision.learner.html#cnn_learner) factory method helps you to automatically get a pretrained model from a given architecture with a custom head that is suitable for your data.
```
show_doc(cnn_learner)
```
This method creates a [`Learner`](/basic_train.html#Learner) object from the [`data`](/vision.data.html#vision.data) object and model inferred from it with the backbone given in `arch`. Specifically, it will cut the model defined by `arch` (randomly initialized if `pretrained` is False) at the last convolutional layer by default (or as defined in `cut`, see below) and add:
- an [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) layer,
- a [`Flatten`](/layers.html#Flatten) layer,
- blocks of \[[`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)\] layers.
The blocks are defined by the `lin_ftrs` and `ps` arguments. Specifically, the first block will have a number of inputs inferred from the backbone `arch`, the last one will have a number of outputs equal to `data.c` (which contains the number of classes of the data), and the intermediate blocks have a number of inputs/outputs determined by `lin_ftrs` (each block's number of inputs equals the number of outputs of the previous block). The default is to have an intermediate hidden size of 512 (which makes two blocks `model_activation` -> 512 -> `n_classes`). If you pass a float for `ps` then the final dropout layer will have that value and the remaining ones will have `ps/2`. If you pass a list, the values are used as the dropout probabilities directly.
Note that the very last block doesn't have a [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU) activation, to allow you to use any final activation you want (generally included in the loss function in pytorch). Also, the backbone will be frozen if you choose `pretrained=True` (so only the head will train if you call [`fit`](/basic_train.html#fit)) so that you can immediately start phase one of training as described above.
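The sizing rules just described can be sketched in a few lines (a simplification of the head construction, not fastai's `create_head`; the function name and signature are illustrative):

```python
def head_spec(nf, nc, lin_ftrs=None, ps=0.5):
    """Layer sizes and dropout probabilities for the head, per the rules above:
    nf inputs -> lin_ftrs hidden sizes (default [512]) -> nc outputs,
    with a float ps expanded to [ps/2, ..., ps/2, ps]."""
    sizes = [nf] + (lin_ftrs if lin_ftrs is not None else [512]) + [nc]
    if isinstance(ps, float):
        ps = [ps / 2] * (len(sizes) - 2) + [ps]
    return sizes, ps

print(head_spec(1024, 10))  # ([1024, 512, 10], [0.25, 0.5])
```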
Alternatively, you can define your own `custom_head` to put on top of the backbone. If you want to specify where to split `arch`, you should do so in the argument `cut`, which can either be the index of a specific layer (the result will not include that layer) or a function that, when passed the model, will return the backbone you want.
The final model obtained by stacking the backbone and the head (custom or defined as we saw) is then separated in groups for gradual unfreezing or differential learning rates. You can specify how to split the backbone in groups with the optional argument `split_on` (should be a function that returns those groups when given the backbone).
The `kwargs` will be passed on to [`Learner`](/basic_train.html#Learner), so you can put here anything that [`Learner`](/basic_train.html#Learner) will accept ([`metrics`](/metrics.html#metrics), `loss_func`, `opt_func`...)
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learner = cnn_learner(data, models.resnet18, metrics=[accuracy])
learner.fit_one_cycle(1,1e-3)
learner.save('one_epoch')
show_doc(unet_learner)
```
This time the model will be a [`DynamicUnet`](/vision.models.unet.html#DynamicUnet) with an encoder based on `arch` (maybe `pretrained`) that is cut depending on `split_on`. `blur_final`, `norm_type`, `blur`, `self_attention`, `y_range`, `last_cross` and `bottle` are passed to unet constructor, the `kwargs` are passed to the initialization of the [`Learner`](/basic_train.html#Learner).
```
jekyll_warn("The models created with this function won't work with pytorch `nn.DataParallel`, you have to use distributed training instead!")
```
### Get predictions
Once you've actually trained your model, you may want to use it on a single image. This is done by using the following method.
```
show_doc(Learner.predict)
img = learner.data.train_ds[0][0]
learner.predict(img)
```
Here the predicted class for our image is '3', which corresponds to a label of 0. The probabilities the model found for each class are 99.65% and 0.35% respectively, so its confidence is pretty high.
Note that if you want to load your trained model and use it on inference mode with the previous function, you should export your [`Learner`](/basic_train.html#Learner).
```
learner.export()
```
And then you can load it with an empty data object that has the same internal state, like this:
```
learn = load_learner(path)
```
### Customize your model
You can customize [`cnn_learner`](/vision.learner.html#cnn_learner) for your own model's default `cut` and `split_on` functions by adding them to the dictionary `model_meta`. The key should be your model and the value should be a dictionary with the keys `cut` and `split_on` (see the source code for examples). The constructor will call [`create_body`](/vision.learner.html#create_body) and [`create_head`](/vision.learner.html#create_head) for you based on `cut`; you can also call them yourself, which is particularly useful for testing.
```
show_doc(create_body)
show_doc(create_head, doc_string=False)
```
Model head that takes `nf` features, runs through `lin_ftrs`, and ends with `nc` classes. `ps` is the probability of the dropouts, as documented above in [`cnn_learner`](/vision.learner.html#cnn_learner).
```
show_doc(ClassificationInterpretation, title_level=3)
```
This provides a confusion matrix and visualization of the most incorrect images. Pass in your [`data`](/vision.data.html#vision.data), calculated `preds`, actual `y`, and your `losses`, and then use the methods below to view the model interpretation results. For instance:
```
learn = cnn_learner(data, models.resnet18)
learn.fit(1)
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
```
The following factory method gives a more convenient way to create an instance of this class:
```
show_doc(ClassificationInterpretation.from_learner, full_name='from_learner')
```
You can also use a shortcut `learn.interpret()` to do the same.
```
show_doc(Learner.interpret, full_name='interpret')
```
Note that this shortcut is a [`Learner`](/basic_train.html#Learner) object/class method that can be called as: `learn.interpret()`.
```
show_doc(ClassificationInterpretation.plot_top_losses, full_name='plot_top_losses')
```
The `k` items are arranged as a square, so it will look best if `k` is a square number (4, 9, 16, etc.). The title of each image shows: prediction, actual, loss, probability of actual class. When `heatmap` is True (the default), Grad-CAM heatmaps (http://openaccess.thecvf.com/content_ICCV_2017/papers/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf) are overlaid on each image. `plot_top_losses` should be used with single-labeled datasets. See `plot_multi_top_losses` below for a version capable of handling multi-labeled datasets.
```
interp.plot_top_losses(9, figsize=(7,7))
show_doc(ClassificationInterpretation.top_losses)
```
Returns a tuple of *(losses, indices)*.
```
interp.top_losses(9)
show_doc(ClassificationInterpretation.plot_multi_top_losses, full_name='plot_multi_top_losses')
```
Similar to `plot_top_losses()` but aimed at multi-labeled datasets. It plots misclassified samples sorted by their respective loss.
Since you can have multiple labels for a single sample, they can easily overlap in a grid plot. So it plots just one sample per row.
Note that you can pass `save_misclassified=True` (by default it's `False`). In such case, the method will return a list containing the misclassified images which you can use to debug your model and/or tune its hyperparameters.
```
show_doc(ClassificationInterpretation.plot_confusion_matrix)
```
If [`normalize`](/vision.data.html#normalize), plots the percentages with `norm_dec` digits. `slice_size` can be used to avoid an out-of-memory error if your set is too big. `kwargs` are passed to `plt.figure`.
```
interp.plot_confusion_matrix()
show_doc(ClassificationInterpretation.confusion_matrix)
interp.confusion_matrix()
show_doc(ClassificationInterpretation.most_confused)
```
#### Working with large datasets
When working with large datasets, memory problems can arise when computing the confusion matrix. For example, an error can look like this:
RuntimeError: $ Torch: not enough memory: you tried to allocate 64GB. Buy new RAM!
In this case it is possible to force [`ClassificationInterpretation`](/train.html#ClassificationInterpretation) to compute the confusion matrix on slices of the data and then aggregate the results by specifying the `slice_size` parameter.
```
interp.confusion_matrix(slice_size=10)
interp.plot_confusion_matrix(slice_size=10)
interp.most_confused(slice_size=10)
```
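Slicing works because a confusion matrix aggregates by simple element-wise addition, so per-slice matrices can be summed into the full matrix. A stdlib sketch with toy counts (not the fastai implementation):

```python
def confusion(preds, targets, n_classes):
    """Count matrix m[actual][predicted] for a list of predictions."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for p, t in zip(preds, targets):
        m[t][p] += 1
    return m

def add(m1, m2):
    """Element-wise sum of two confusion matrices."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

# Computing on two slices and summing equals computing on the whole set.
preds, targets = [0, 1, 1, 0], [0, 1, 0, 0]
whole = confusion(preds, targets, 2)
sliced = add(confusion(preds[:2], targets[:2], 2),
             confusion(preds[2:], targets[2:], 2))
assert whole == sliced  # [[2, 1], [0, 1]]
```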
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## New Methods - Please document or move to the undocumented section
```
import pandas as pd
```
## Ways to create a list
### Using spaces and the split function
```
data = ['1 2 3 4'.split(),
'5 6 7 8 '.split(),
'9 10 11 12'.split(),
'13 14 15 16'.split()]
data
```
## Converting each element to int
### With the map function
```
# map takes the parameters (function without parentheses, iterable)
for i, l in enumerate(data):
data[i] = list(map(int, l))
data
```
### With for and range
```
for l in range(len(data)):
for i in range(len(data[l])):
data[l][i] = int(data[l][i])
data
```
### With for and enumerate
```
for i, l in enumerate(data):
for p, n in enumerate(data[i]):
data[i][p] = int(n)
data
```
## Creating a DataFrame with split
* to create the index and the columns you can pass a string plus the split function
* split uses a space as its default separator
* if another separator is used, it must be passed as a parameter
```
data = [(1, 2, 3, 4),
(5, 6, 7, 8),
(9, 10, 11, 12),
(13, 14, 15, 16)]
df = pd.DataFrame(data, 'l1 l2 l3 l4'.split(), 'c1 c2 c3 c4'.split())
df
```
## Types of selection
### Selecting a column
* in other words, selecting a Series
```
df['c1']
type(df['c1'])
```
### More than one column
```
df[['c3', 'c1']]
# the columns are shown in the order they were passed
type(df[['c3', 'c1']])
```
### Selecting by row
```
# row selection follows the string-slicing pattern
# the index is not used on its own; use ':'
# and an extra ':' if you want a step in addition to the interval
# e.g. [::2] -> selects everything in steps of 2
df[:]
```
### Selecting rows and columns
```
# select from the row at index 1 onwards
# and, on top of that, columns c3 and c1, in that order
df[1:][['c3', 'c1']]
```
### Selecting rows with loc
* loc selects rows by their label
```
df
# takes one row and turns it into a Series
df.loc['l1']
# the same format applies when selecting more than one row
df.loc[['l3', 'l2']]
# to select a single element, use matrix-style notation
df.loc['l3', 'c1']
```
### Selecting with iloc
* iloc does the same job as loc, but uses the row's integer position rather than its label
```
# selecting the same element with iloc
df.iloc[2, 0]
```
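The label-versus-position distinction between `loc` and `iloc` can be mimicked with plain Python structures (a conceptual sketch, not pandas internals):

```python
labels = ['l1', 'l2', 'l3', 'l4']   # row labels, as in df above
values = [1, 5, 9, 13]              # toy data standing in for column c1

def by_label(label):
    """loc-style lookup: find the label, then read its value."""
    return values[labels.index(label)]

def by_position(i):
    """iloc-style lookup: read directly by integer position."""
    return values[i]

assert by_label('l3') == by_position(2) == 9
```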
### Selecting multiple rows and columns with loc and iloc
```
df
# using loc
df.loc[['l3', 'l1'], ['c4', 'c1']]
# using iloc
df.iloc[[2, 0], [3, 0]]
```
## Exercises
### Create a DataFrame containing only the failing students, keeping just the Nome, Sexo, and Idade columns, in that order.
```
alunos = pd.DataFrame({'Nome': ['Ary', 'Cátia', 'Denis', 'Beto', 'Bruna', 'Dara', 'Carlos', 'Alice'],
'Sexo': ['M', 'F', 'M', 'M', 'F', 'F', 'M', 'F'],
'Idade': [15, 27, 56, 32, 42, 21, 19, 35],
'Notas': [7.5, 2.5, 5.0, 10, 8.2, 7, 6, 5.6],
'Aprovado': [True, False, False, True, True, True, False, False]},
columns = ['Nome', 'Idade', 'Sexo', 'Notas', 'Aprovado'])
alunos
```
### Exercise answer
```
selecao = alunos['Aprovado'] == False
reprovados = alunos[['Nome', 'Sexo', 'Idade']][selecao]
reprovados
```
### Answer sorted by sex and then alphabetically, with the index rebuilt
```
reprovados = alunos['Aprovado'] == False
# creates a boolean mask where the Aprovado column is False
alunos_reprovados = alunos[reprovados]
# creates a DataFrame containing only rows with Aprovado == False
alunos_reprovados = alunos_reprovados[['Nome', 'Sexo', 'Idade']].sort_values(by=['Sexo', 'Nome'])
# overwrites the DataFrame with only the desired columns, sorted by Sexo and Nome
alunos_reprovados.index = range(alunos_reprovados.shape[0])
# rebuilds the index as a range the size of this DataFrame's row count
alunos_reprovados
```
### Create a view with the three youngest students
```
alunos
alunos.sort_values(by='Idade', inplace=True)
alunos.iloc[:3]
```
# Neural networks with PyTorch
Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module, `nn`, that provides an efficient way to build large neural networks.
```
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
## Solution
def activation(x):
return 1/(1+torch.exp(-x))
# Flatten the input images
inputs = images.view(images.shape[0], -1)
# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)
h = activation(torch.mm(inputs, w1) + b1)
out = torch.mm(h, w2) + b2
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This represents an untrained network; it hasn't seen any data yet, so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
```
## Solution
def softmax(x):
return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```
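One caveat worth knowing: `torch.exp` overflows for large inputs. A common remedy, shown here as a sketch, is to subtract each row's maximum first; since softmax is invariant to adding a constant to its inputs, the result is unchanged:

```python
import torch

def stable_softmax(x):
    # Subtracting the row-wise max leaves the softmax output unchanged
    # but keeps exp() from overflowing to inf
    x = x - x.max(dim=1, keepdim=True).values
    return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1)

logits = torch.tensor([[1000.0, 1001.0]])
print(stable_softmax(logits))  # finite probabilities; the naive version returns nan here
```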
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
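A toy tensor makes the `dim` argument concrete:

```python
import torch
from torch import nn

t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# dim=1: normalize across the columns, so each row sums to 1
# (one probability distribution per example -- what we want)
print(nn.Softmax(dim=1)(t).sum(dim=1))  # tensor([1., 1.])

# dim=0: normalize across the rows, so each column sums to 1 instead
print(nn.Softmax(dim=0)(t).sum(dim=0))  # tensor([1., 1.])
```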
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
```
### Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
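ReLU is also about as simple as an activation gets: it just clamps negative values to zero. A quick sketch:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
```

Besides being cheap to compute, ReLU avoids the vanishing gradients that saturating functions like sigmoid suffer from, which is a big part of why it dominates in practice.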
### Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
```
## Solution
class Network(nn.Module):
def __init__(self):
super().__init__()
# Defining the layers, 128, 64, 10 units each
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
''' Forward pass through the network, returns the output logits '''
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.softmax(x, dim=1)
return x
model = Network()
model
```
### Initializing weights and biases
The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
```
print(model.fc1.weight)
print(model.fc1.bias)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
The operations are available by indexing with the appropriate integer. For example, to get the first Linear operation and look at the weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer or the name
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
| github_jupyter |
# GA4GH Variation Representation Schema
This notebook demonstrates the use of the VR schema to represent variation in APOE. Objects created in this notebook are saved at the end and used by other notebooks to demonstrate other features of the VR specification.
## APOE Variation
The APOE ε-alleles are defined by the combination of two SNPs:

- rs7412: NC_000019.10:g.44908822 (NM_000041.3:c.526), alleles C/T
- rs429358: NC_000019.10:g.44908684 (NM_000041.3:c.388), alleles C/T

| rs429358 \ rs7412 | C | T |
|---|---|---|
| C | APOE-ε4 | APOE-ε1 |
| T | APOE-ε3 | APOE-ε2 |
Note: The example currently uses only rs7412:T. Future versions of the schema will support haplotypes and genotypes, and these examples will be extended appropriately.
## Using the VR Reference Implementation
See https://github.com/ga4gh/vr-python for information about installing the reference implementation.
```
from ga4gh.vrs import __version__, models
__version__
```
## Schema Overview
<img src="images/schema-current.png" width="75%" alt="Current Schema"/>
## Sequences
The VR Specification expects the existence of a repository of biological sequences. At a minimum, these sequences must be indexed using whatever accessions are available. Implementations that wish to use the computed identifier mechanism should also have precomputed ga4gh sequence accessions. Either way, sequences must be referred to using [W3C Compact URIs (CURIEs)](https://w3.org/TR/curie/). In the examples below, we'll use "refseq:NC_000019.10" to refer to chromosome 19 from GRCh38.
## Locations
A Location is an *abstract* object that refers to a contiguous region of a biological sequence.
In the initial release of VR, the only Location is a SequenceLocation, which represents a precise interval (`SimpleInterval`) on a sequence. GA4GH VR uses interbase coordinates exclusively; therefore the 1-based residue position 44908822 is referred to using the 0-based interbase interval <44908821, 44908822>.
Future Location subclasses will provide for approximate coordinates, gene symbols, and cytogenetic bands.
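The residue-to-interbase conversion trips people up often enough that it's worth spelling out. A tiny helper sketch (not part of the VR library, purely illustrative):

```python
def residue_to_interbase(pos):
    """Convert a 1-based residue position to a 0-based interbase interval.

    The single residue at 1-based position p occupies the interbase
    interval <p-1, p>.
    """
    return pos - 1, pos

# The rs7412 position used below
print(residue_to_interbase(44908822))  # (44908821, 44908822)
```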
#### SequenceLocation
```
location = models.SequenceLocation(
sequence_id="refseq:NC_000019.10",
interval=models.SimpleInterval(start=44908821, end=44908822))
location.as_dict()
```
## Variation
### Text Variation
The `Text` class represents variation descriptions that cannot be parsed, or cannot be parsed yet. The primary use for this class is to allow unparsed variation to be represented within the VR framework and be associated with annotations.
```
variation = models.Text(definition="APO loss")
variation.as_dict()
```
### Alleles
An Allele is an assertion of the state of a biological sequence at a Location. In the first version of the VR Schema, the only State subclass is SequenceState, which represents the replacement of sequence. Future versions of State will enable representations of copy number variation.
### "Simple" sequence replacements
This case covers any "ref-alt" style variation, which includes SNVs, MNVs, del, ins, and delins.
```
allele = models.Allele(location=location,
state=models.SequenceState(sequence="A"))
allele.as_dict()
```
----
## Saving the objects
Objects created in this notebook will be saved as a json file and loaded by subsequent notebooks.
```
import json
filename = "objects.json"
data = {
"alleles": [allele.as_dict()],
"locations": [location.as_dict()]
}
json.dump(data, open(filename, "w"))
```
| github_jupyter |
```
import numpy as np
import cv2 as cv
import json
"""缩小图像,方便看效果
resize会损失像素,造成边缘像素模糊,不要再用于计算的原图上使用
"""
def resizeImg(src):
height, width = src.shape[:2]
size = (int(width * 0.3), int(height * 0.3))
img = cv.resize(src, size, interpolation=cv.INTER_AREA)
return img
"""找出ROI,用于分割原图
原图有四块区域,一个是地块区域,一个是颜色示例区域,一个距离标尺区域,一个南北方向区域
理论上倒排后的最大轮廓的是地块区域
"""
def findROIContours(src):
copy = src.copy()
gray = cv.cvtColor(copy, cv.COLOR_BGR2GRAY)
# cv.imshow("gray", gray)
# pixels below thresh become black; maxval is the value used by THRESH_BINARY
# white background: 254, 255; black background: 0, 255
threshold = cv.threshold(gray, 0, 255, cv.THRESH_BINARY)[1]
# cv.imshow("threshold", threshold)
contours, hierarchy = cv.findContours(threshold, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
sortedCnts = sorted(contours, key = cv.contourArea, reverse=True)
# cv.drawContours(copy, [maxCnt], -1, (255, 0, 0), 2)
# cv.imshow("roi contours", copy)
return sortedCnts
"""按照mask,截取roi
"""
def getROIByContour(src, cnt):
copy = src.copy()
# black background
mask = np.zeros(copy.shape[:2], np.uint8)
mask = cv.fillConvexPoly(mask, cnt, (255,255,255))
# cv.imshow("mask", resizeImg(mask))
# print(mask.shape)
# print(copy.dtype)
roi = cv.bitwise_and(copy, copy, mask=mask)
# cv.imshow("roi", roi)
# white background for the non-ROI area, then fill the ROI's surroundings
mask = cv.bitwise_not(mask)
whiteBg = np.full(copy.shape[:2], 255, dtype=np.uint8)
whiteBg = cv.bitwise_and(whiteBg, whiteBg, mask=mask)
whiteBg = cv.merge((whiteBg,whiteBg,whiteBg))
# cv.imshow("whiteBg", resizeImg(whiteBg))
roiWithAllWhite = cv.bitwise_or(roi, whiteBg)
return roiWithAllWhite
"""找出所有的地块轮廓
"""
def findAllBlockContours(src):
copy = src.copy()
contours = findExternalContours(copy)
return contours
"""找出颜色示例里的颜色BGR
根据限定长,高,比例来过滤出实例颜色区域
"""
def findBGRColors(cnts):
W_RANGE = [170,180]
H_RANGE = [75, 85]
RATIO_RANGE = [0.40, 0.50]
colors = []
# TODO: if the number of legend swatches is known, we could count them and break out of the loop early
for cnt in cnts:
x,y,w,h = cv.boundingRect(cnt)
ratio = round(h/w, 2)
if ratio > RATIO_RANGE[0] and ratio < RATIO_RANGE[1] \
and w > W_RANGE[0] and w < W_RANGE[1] \
and h > H_RANGE[0] and h < H_RANGE[1]:
# print(ratio,x,y,w,h)
# The edges of each swatch rectangle (and of the masked color regions) have blurred gradient lines,
# so cv.mean(colorRegion, meanMask) cannot recover the true color.
# Sampling the color at the rectangle's center point is the most accurate approach.
cx,cy = round(x+w/2), round(y+h/2)
bgr = img_white_bg[cy, cx]
# print(bgr)
colors.append(bgr)
return colors
def drawnForTest(img, contours, rect=False):
img = img.copy()
for i in contours:
if rect:
x, y, w, h = cv.boundingRect(i)
cv.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv.putText(img, 'Area' + str(cv.contourArea(i)), (x+5,y+15), cv.FONT_HERSHEY_PLAIN, 1,(255,0,0), 1, cv.LINE_AA)
else:
cv.drawContours(img, [i], -1, (0, 255, 0), 2)
cv.imshow("detect", resizeImg(img))
cv.waitKey(0)
"""Find Original Contours
Find Original Contours from source image, we only need external contour.
Args:
src: source image
Returns:
Original contours
"""
def findExternalContours(src):
# the image must have a white background; if the background is black, convert black to white
# src[np.where((src == [0,0,0]).all(axis = 2))] = [255,255,255]
# preprocess: remove noise (there is a lot of noise on the roads)
gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
# TODO: test whether Gaussian blur is actually necessary here
# blur = cv.GaussianBlur(gray, (3,3), 0)
thresVal = 254
maxVal = 255
ret,thresh1 = cv.threshold(gray, thresVal, maxVal, cv.THRESH_BINARY)
kernel = np.ones((7,7),np.uint8)
morph = cv.morphologyEx(thresh1, cv.MORPH_CLOSE, kernel)
# ?? how should the Canny thresholds be chosen?
edges = cv.Canny(morph,100,200)
# the detected edges are jagged and would produce many contour points, so apply a Laplacian to sharpen them
edges = cv.Laplacian(edges, -1, (3,3))
contours, hierarchy = cv.findContours(edges, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
# contours, hierarchy = cv.findContours(edges, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
# cv.imshow('gray', resizeImg(gray))
# cv.imshow('thresh1', resizeImg(thresh1))
# cv.imshow('edges', resizeImg(edges))
# cv.imshow('opening', opening)
# cv.imshow('opening', resizeImg(opening))
return contours, hierarchy
"""根据找出的轮廓和层级关系计算地块和色块的父子关系
"""
def getBlockColorTree(copy, blockCnts, hierarchy):
# print(hierarchy)
# hierarchy [Next, Previous, First_Child, Parent]
currentRootIndex = -1
rootRegions = {}
for i,cnt in enumerate(blockCnts):
x,y,w,h = cv.boundingRect(cnt)
cntArea = cv.contourArea(cnt)
if cntArea > 1000:
continue
cv.putText(copy, str(i), (x+5,y+15), cv.FONT_HERSHEY_PLAIN, 1,(255,0,0), 1, cv.LINE_AA)
print(i, hierarchy[0][i])
if hierarchy[0][i][3] == -1:
# root region
currentRootIndex = i
if currentRootIndex == len(blockCnts):
break
rootRegion = {'index': i, 'contour': cv.contourArea(blockCnts[currentRootIndex]), 'childRegion': []}
rootRegions[currentRootIndex] = rootRegion
elif hierarchy[0][i][3] == currentRootIndex:
rootRegions[currentRootIndex]['childRegion'].append({'index': i, 'contour': cv.contourArea(cnt)})
cv.imshow("blockCnts with debug info", resizeImg(copy))
print(rootRegions)
data2 = json.dumps(rootRegions, sort_keys=True, indent=4, separators=(',', ': '))
print(data2)
"""使用颜色来分块,并返回所有地块和色块父子关系
debug 只演示前三个地块的识别过程,可以通过debugFrom:debugLen来调整debug开始位置和长度
"""
def findColorRegionsForAllBlocks(img_white_bg, blockCnts, debug=False, debugFrom=0, debugLen=3):
filteredBlockCnts = [cnt for cnt in blockCnts if cv.contourArea(cnt) > 100]
if debug:
filteredBlockCnts = filteredBlockCnts[debugFrom:debugLen]
for blockCnt in filteredBlockCnts:
findColorRegionsForBlock(img_white_bg, blockCnt, debug)
"""根据threshold重新计算BGR的值
"""
def bgrWithThreshold(bgr, threshold):
newBgr = []
for x in bgr.tolist():
if x + threshold < 0:
newBgr.append(0)
elif x + threshold > 255:
newBgr.append(255)
else:
newBgr.append(x + threshold )
return newBgr
"""使用颜色来找出单个地块内的色块
"""
def findColorRegionsForBlock(img_white_bg, blockCnt, debug=False):
blockWithColorsDict = {'area': cv.contourArea(blockCnt), 'points':[] , 'children': []}
blockRegion = getROIByContour(img_white_bg, blockCnt)
if debug:
cv.imshow("blockRegions", np.hstack([resizeImg(img_white_bg), resizeImg(blockRegion)]))
colorCnts = []
for bgr in bgrColors:
# colors in the image may not exactly match the legend colors, so allow a small threshold to tolerate color shift
threshold = 5
lower = np.array(bgrWithThreshold(bgr, -threshold), dtype="uint8")
upper = np.array(bgrWithThreshold(bgr, threshold), dtype="uint8")
# find the regions matching this color within the threshold: white blobs on a black background
mask = cv.inRange(blockRegion, lower, upper)
# cv.imshow("mask", resizeImg(mask))
# keep only the regions that match this color
nonZeroCount = cv.countNonZero(mask)
# print('none zero count', nonZeroCount)
if nonZeroCount == 0:
continue
contours, hierarchy = cv.findContours(mask.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
# print('external', len(contours))
# print(hierarchy)
colorCnts.extend(contours)
# contours, hierarchy = cv.findContours(mask.copy(), cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
# print('tree', len(contours))
# print(hierarchy)
if debug:
# invert black and white
mask_inv = 255 - mask
# cv.imshow("mask_inv", resizeImg(mask_inv))
# display the images
output = cv.bitwise_and(blockRegion, blockRegion, mask=mask_inv)
# cv.drawContours(output, contours, -1, (0, 0, 255), 3)
cv.imshow("images", np.hstack([resizeImg(blockRegion), resizeImg(output)]))
cv.waitKey(0)
# A parcel may contain multiple color regions, and a color region may nest further color regions.
# TODO: handle nesting
# Recurse over colorCnts: until a leaf is reached, treat every color region as a parcel and keep processing; this resolves the nesting.
# To do that, first build a three-level data model:
# level 1: a list holding all parcels
# level 2: color-region nodes; a region with no nesting is a leaf, otherwise it is promoted to a parcel and searched recursively until nothing is nested
# level 3: color regions as leaf nodes
colorDicts = []
for colorCnt in contours:
colorDict = {'area': cv.contourArea(colorCnt), 'points':[], 'color': bgr.tolist()}
colorDicts.append(colorDict)
blockWithColorsDict['children'].extend(colorDicts)
# print(blockWithColorsDict)
jsonData = json.dumps(blockWithColorsDict, sort_keys=True, indent=4, separators=(',', ': '))
print(jsonData)
return colorCnts
# used for finding the ROIs
img = cv.imread('data_hierarchy2.png')
# used for the real calculation
# 1. hue-range error: source image 4 recognizes best; image 3 is mediocre and needs the threshold raised to 5
# 2. gap/border error: the summed color-region areas differ from the parcel area by about 3000 px, probably because the line pixels are not counted
# 3.
img_white_bg = cv.imread('data_hierarchy4.png')
# sort the contours by area; the largest is the overall parcel
# segment the image using the scale and real distances in the source image, following the approach in findBGRColors
sortedCnts = findROIContours(img)
# print(len(sortedCnts[2:]))
# drawnForTest(img_white_bg, sortedCnts[3], rect=True)
# print(sortedCnts[3])
print(cv.boundingRect(sortedCnts[3]))
print(img_white_bg.shape)
# 2401 * 3151
# 670
px_km_scale = 670/1000
area_px = (2401*3151)
area_km2 = (2401*px_km_scale*3151*px_km_scale)
print(area_px/area_km2)
print(1/(px_km_scale*px_km_scale))
# extract the overall parcel region
rootRegion = getROIByContour(img_white_bg, sortedCnts[0])
# cv.imshow("rootRegion", resizeImg(rootRegion))
# find the legend colors
bgrColors = findBGRColors(sortedCnts[1:])
# print(bgrColors)
# find the parcels
copy = rootRegion.copy()
blockCnts, hierarchy = findAllBlockContours(copy)
# print(len(blockCnts))
# drawnForTest(img_white_bg, blockCnts, rect=False)
# detect the color regions inside each parcel by color
# findColorRegionsForAllBlocks(img_white_bg, blockCnts, debug=True, debugLen=1)
# findColorRegionsForAllBlocks(img_white_bg, blockCnts)
# the contour-based parcel detection first needs the hierarchy converted to parent-child relations; many small noise items remain to be resolved
# getBlockColorTree(copy, blockCnts, hierarchy)
cv.waitKey(0)
cv.destroyAllWindows()
a = [1]
b = [2,3]
a.extend(b)
a
c=1
-c
```
| github_jupyter |
```
"""
Estimating the causal effect of sodium on blood pressure in a simulated example
adapted from Luque-Fernandez et al. (2018):
https://academic.oup.com/ije/article/48/2/640/5248195
"""
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
def generate_data(n=1000, seed=0, beta1=1.05, alpha1=0.4, alpha2=0.3, binary_treatment=True, binary_cutoff=3.5):
np.random.seed(seed)
age = np.random.normal(65, 5, n)
sodium = age / 18 + np.random.normal(size=n)
if binary_treatment:
if binary_cutoff is None:
binary_cutoff = sodium.mean()
sodium = (sodium > binary_cutoff).astype(int)
blood_pressure = beta1 * sodium + 2 * age + np.random.normal(size=n)
proteinuria = alpha1 * sodium + alpha2 * blood_pressure + np.random.normal(size=n)
hypertension = (blood_pressure >= 140).astype(int) # not used, but could be used for binary outcomes
return pd.DataFrame({'blood_pressure': blood_pressure, 'sodium': sodium,
'age': age, 'proteinuria': proteinuria})
def estimate_causal_effect(Xt, y, model=LinearRegression(), treatment_idx=0, regression_coef=False):
model.fit(Xt, y)
if regression_coef:
return model.coef_[treatment_idx]
else:
Xt1 = pd.DataFrame.copy(Xt)
Xt1[Xt.columns[treatment_idx]] = 1
Xt0 = pd.DataFrame.copy(Xt)
Xt0[Xt.columns[treatment_idx]] = 0
return (model.predict(Xt1) - model.predict(Xt0)).mean()
binary_t_df = generate_data(beta1=1.05, alpha1=.4, alpha2=.3, binary_treatment=True, n=10000000)
continuous_t_df = generate_data(beta1=1.05, alpha1=.4, alpha2=.3, binary_treatment=False, n=10000000)
ate_est_naive = None
ate_est_adjust_all = None
ate_est_adjust_age = None
for df, name in zip([binary_t_df, continuous_t_df],
['Binary Treatment Data', 'Continuous Treatment Data']):
print()
print('### {} ###'.format(name))
print()
# Adjustment formula estimates
ate_est_naive = estimate_causal_effect(df[['sodium']], df['blood_pressure'], treatment_idx=0)
ate_est_adjust_all = estimate_causal_effect(df[['sodium', 'age', 'proteinuria']],
df['blood_pressure'], treatment_idx=0)
ate_est_adjust_age = estimate_causal_effect(df[['sodium', 'age']], df['blood_pressure'])
print('# Adjustment Formula Estimates #')
print('Naive ATE estimate:\t\t\t\t\t\t\t', ate_est_naive)
print('ATE estimate adjusting for all covariates:\t', ate_est_adjust_all)
print('ATE estimate adjusting for age:\t\t\t\t', ate_est_adjust_age)
print()
# Linear regression coefficient estimates
ate_est_naive = estimate_causal_effect(df[['sodium']], df['blood_pressure'], treatment_idx=0,
regression_coef=True)
ate_est_adjust_all = estimate_causal_effect(df[['sodium', 'age', 'proteinuria']],
df['blood_pressure'], treatment_idx=0,
regression_coef=True)
ate_est_adjust_age = estimate_causal_effect(df[['sodium', 'age']], df['blood_pressure'],
regression_coef=True)
print('# Regression Coefficient Estimates #')
print('Naive ATE estimate:\t\t\t\t\t\t\t', ate_est_naive)
print('ATE estimate adjusting for all covariates:\t', ate_est_adjust_all)
print('ATE estimate adjusting for age:\t\t\t\t', ate_est_adjust_age)
print()
```
| github_jupyter |
# LDA topic modeling
```
doc_a = "Brocolli is good to eat. My brother likes to eat good brocolli, but not my mother."
doc_b = "My mother spends a lot of time driving my brother around to baseball practice."
doc_c = "Some health experts suggest that driving may cause increased tension and blood pressure."
doc_d = "I often feel pressure to perform well at school, but my mother never seems to drive my brother to do better."
doc_e = "Health professionals say that brocolli is good for your health."
# compile sample documents into a list
doc_set = [doc_a, doc_b, doc_c, doc_d, doc_e]
```
# Tokenization
```
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
raw = doc_a.lower()
tokens = tokenizer.tokenize(raw)
print(tokens)
```
# Stop words
```
from nltk.corpus import stopwords
# create English stop words list
en_stop = stopwords.words('english')
# remove stop words from tokens
stopped_tokens = [i for i in tokens if not i in en_stop]
print(stopped_tokens)
```
# Stemming
```
from nltk.stem.porter import PorterStemmer
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
# stem token
texts = [p_stemmer.stem(i) for i in stopped_tokens]
print(texts)
```
# Constructing a document-term matrix
```
#!pip install gensim
a = []
a.append(texts)
a
from gensim import corpora, models
dictionary = corpora.Dictionary(a)
corpus = [dictionary.doc2bow(text) for text in a]
print(corpus[0])
```
# Applying the LDA model
```
ldamodel = models.ldamodel.LdaModel(corpus, num_topics=3, id2word = dictionary, passes=20)
```
# Examining the result
```
print(ldamodel.print_topics(num_topics=3, num_words=3))
ldamodel = models.ldamodel.LdaModel(corpus, num_topics=2, id2word = dictionary, passes=20)
b = ldamodel.print_topics(num_topics=2, num_words=4)
len(b)
for i in b:
print(i)
```
-----
# My Example
```
doc_f = "The decision to ban lawmaker Eddie Chu Hoi-dick from running in a rural representative election was based on a shaky argument that could be struck down in court, according to leading legal scholars, who also called on Hong Kong’s courts to clarify the vagueness in election laws. Johannes Chan Man-mun, the former law dean of the University of Hong Kong, was speaking on Sunday after Chu was told he would not be allowed to run for a post as a local village’s representative. Returning officer Enoch Yuen Ka-lok pointed to Chu’s stance on Hong Kong independence and said the lawmaker had dodged his questions on his political beliefs. Yuen took this to imply that Chu supported the possibility of Hong Kong breaking with Beijing in the future. Chan, however, said Chu’s responses to the returning officer were open to interpretation. The legal scholar did not believe they met the standard of giving the election officer “cogent, clear and compelling” evidence as required by the precedent set in the case of Andy Chan Ho-tin. Andy Chan was barred from standing in a Legislative Council by-election in New Territories West in 2016 because of his political beliefs. According to Section 24 of the Rural Representative Election Ordinance, candidates are required to declare their allegiance to the Hong Kong Special Administrative Region and to state they will uphold the Basic Law, Hong Kong’s mini-constitution, when filing their application. The allegiance requirement was written into law in 2003, mirroring clauses in the rules for the Legco and district council elections, but it had never been applied by an election officer. The situation changed after separatist Andy Chan lost his election appeal in February this year, with the courts saying returning officers could ban candidates who held political views that ran contrary to the Basic Law. 
While the landmark ruling was concerned only with Legco elections, Johannes Chan said, after Chu’s case, returning officers for other elections could have similar powers to ban candidates from running, including in the district council elections next year. Gladys Li, the lawyer who represented Andy Chan, said the ruling would be binding on returning officers for other elections. Eric Cheung Tat-ming, another legal scholar at HKU, said Yuen had provided weak reasons for disqualifying Chu. He agreed that there will be room for Chu to launch an appeal. “The logic has become – if your interpretation of the Basic Law is different from the government’s, it means you have no intention of upholding the Basic Law,” Cheung said. He also said Hong Kong courts must clarify the vagueness in election laws and process such appeals more quickly. Stephen Fisher, the former deputy home affairs secretary who led the government’s effort to formalise rural representative elections under the ordinance, said it was “common sense” that rural representatives had to uphold allegiance to Hong Kong. “The village representatives are also elected by people, and they are empowered to identify who the indigenous villagers are,” Fisher said before Chu’s disqualification. “So it’s normal that the legal drafting [of the ordinance] follows the law on Legislative Council and district council elections.” Fisher, who would not comment on Chu’s case, said it would have been “unthinkable” for anyone back then to have imagined a candidate being disqualified for their political views. “The requirement was written there, but it was never contentious,” Fisher said. Chu was disqualified by Yuen because he had “defended independence as an option to Hongkongers” in a statement in 2016. Pressed twice by the returning officer to clarify his position, Chu would say only that he did not support Hong Kong’s independence, but added that he would support another’s right to peacefully advocate it. 
Johannes Chan said Chu’s political stance was open to interpretation, and the election officer could hardly fulfil the criteria for providing “cogent, clear and compelling” evidence to disqualify him. “At best, we could argue Chu’s reply to the officer was vague about self-determination – even the returning officer himself confessed Chu was only ‘implicitly’ confirming independence as an option,” he said. “But we can’t take a candidate’s silence as his stance. That would have jumped many, many steps.” The decision on Sunday would also create a “conflicting” situation over Chu's political allegiance, Chan added, since the lawmaker remained in office but was disqualified in a separate election. Both Chan and Li said how the returning officer had come to the disqualification might require clarification in any future court ruling. “It was as if they [government officials] could read your mind,” Li said. “The court still has not clarified how far back election officials can look – such as in this case, could we go back to statements Chu made two years ago?” Chan asked."
```
# Tokenization
```
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
my_raw = doc_f.lower()
my_tokens = tokenizer.tokenize(my_raw)
#print(my_tokens)
```
# Stop words
```
from nltk.corpus import stopwords
# create English stop words list
eng_stop = stopwords.words('english')
# remove stop words from tokens
my_stopped_tokens = [i for i in my_tokens if not i in eng_stop]
#print(my_stopped_tokens)
```
# Stemming
```
from nltk.stem.porter import PorterStemmer
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
# stem token
my_texts = [p_stemmer.stem(i) for i in my_stopped_tokens]
#print(my_texts)
my_texts_list = []
# note: the unstemmed tokens are appended here; append my_texts instead to train on the stemmed tokens
my_texts_list.append(my_stopped_tokens)
#my_texts_list
from gensim import corpora, models
my_dictionary = corpora.Dictionary(my_texts_list)
my_corpus = [my_dictionary.doc2bow(text) for text in my_texts_list]
my_corpus[0]
```
# Applying the LDA model
```
my_ldamodel = models.ldamodel.LdaModel(my_corpus, num_topics=3, id2word = my_dictionary, passes=20)
result = my_ldamodel.print_topics(num_topics=3, num_words=3)
result
```
# McKinsey Data Scientist Hackathon
link: https://datahack.analyticsvidhya.com/contest/mckinsey-analytics-online-hackathon-recommendation/?utm_source=sendinblue&utm_campaign=Download_The_Dataset_McKinsey_Analytics_Online_Hackathon__Recommendation_Design_is_now_Live&utm_medium=email
slack:https://analyticsvidhya.slack.com/messages/C8X88UJ5P/
## Problem Statement ##
Your client is a fast-growing mobile platform, for hosting coding challenges. They have a unique business model, where they crowdsource problems from various creators(authors). These authors create the problem and release it on the client's platform. The users then select the challenges they want to solve. The authors make money based on the level of difficulty of their problems and how many users take up their challenge.
The client, on the other hand makes money when the users can find challenges of their interest and continue to stay on the platform. Till date, the client has relied on its domain expertise, user interface and experience with user behaviour to suggest the problems a user might be interested in. You have now been appointed as the data scientist who needs to come up with the algorithm to keep the users engaged on the platform.
The client has provided you with history of last 10 challenges the user has solved, and you need to predict which might be the next 3 challenges the user might be interested to solve. Apply your data science skills to help the client make a big mark in their user engagements/revenue.
### Data Relationships
Client: problem platform maintainer
Creators: problem contributors
Users: people who solve these problems
Question? Given the 10 challenges the user solved, what might be the next 3 challenges user want to solve?
## Now let's first look at some raw data
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn
import pandas
import seaborn
x_data = pandas.read_csv('./train_mddNHeX/train.csv')
y_data = pandas.read_csv('./train_mddNHeX/challenge_data.csv')
x_test = pandas.read_csv('./test.csv')
y_sub_temp = pandas.read_csv('./sample_submission_J0OjXLi_DDt3uQN.csv')
print('shape of submission data = {}, number of users = {}'.format(y_sub_temp.shape, y_sub_temp.shape[0]/13))
y_sub_temp.head()
print('shape of user data = {}, number of users = {}'.format(x_data.shape, x_data.shape[0]/13))
#x_data.sort_values('user_id')
x_data[0:20]
print('shape of user test data = {}, number of users = {}'.format(x_test.shape, x_test.shape[0]/10))
#x_test[0:15]
x_test.head(15)
#x_test.sort_values('user_id').head(15)
print('shape of challenge data = {}'.format(y_data.shape))
y_data[0:10]#.tail()
#print(y_data.loc[:,['challenge_ID','challenge_series_ID']])
#print(y_data.groupby('challenge_series_ID'))
```
## Dirty try
1. Need to find a feature vector for a given challenge
- This is associated with [prog_lang, challenge_series, total submission, publish_time, auth_id, auth_org, categ]
2. Create a preference vector for each user
- This will be randomly initialized
3. Use the first 10 samples from each user as ground truth for training the feature vectors and the preference vectors
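The idea in the plan above can be sketched in miniature. This is a hypothetical pure-Python example with made-up feature values (the real features come from the challenge table): score each challenge by the dot product of its feature vector with the user's preference vector, then recommend the top scorers.

```python
# toy feature vectors for three hypothetical challenges
challenges = {'CI201': [0.9, 0.1], 'CI202': [0.2, 0.8], 'CI203': [0.6, 0.6]}
user_pref = [0.3, 0.7]   # randomly initialised in practice, then trained

def score(features, pref):
    # dot product of challenge features with the user's preference vector
    return sum(f * p for f, p in zip(features, pref))

ranked = sorted(challenges, key=lambda c: score(challenges[c], user_pref), reverse=True)
print(ranked)   # ['CI202', 'CI203', 'CI201']
```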
## Prepare training data
Let's prepare the challenge id as a lookup table to construct the training data
```
def str2ascii(astr):
    """
    input:
        astr: a string
    output:
        val: a number combining the ASCII-code sum of the non-digit
             characters with the digits parsed as an integer.
    """
    val = 0
    real = 0
    count_val, count_real = 0, 0
    for i in list(astr):
        num = ord(i)
        if 48 <= num and num <= 57:   # the character is a digit
            real = real*10 + int(i)
            count_real += 1
        else:
            val += num
            count_val += 1
    val = val*10**count_real + real
    return val
# Retain the original copy of the y_data
ch_table = y_data
orig_y_data = y_data.copy()
print(ch_table.columns)
## Fill NaN with some values
values = {'challenge_series_ID':'SI0000','author_ID':'AI000000','author_gender':'I'
,'author_org_ID':'AOI000000', 'category_id':0.0
,'programming_language':0,'total_submissions':0, 'publish_date':'00-00-0000'}
ch_table = y_data.fillna(value = values)
print(y_data.head(), ch_table.head())
ch_table.iloc[3996]
## Change strings to some encoded values
columns = ['challenge_series_ID','author_ID','author_gender','author_org_ID','publish_date']
#print(ch_table[0:10])
for col in columns:
    print(col)
    #ch_table[col] = ch_table.apply(lambda x: str2ascii(x[col]), axis=1)
    ch_table[col] = ch_table[col].apply(lambda x: str2ascii(x))
ch_table[0:10]
y_data['programming_language'].describe()
```
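As a quick sanity check of the `str2ascii` encoding, here is a small standalone sketch (the function is restated so the snippet runs on its own): non-digit characters contribute the sum of their ASCII codes, the digits are parsed as one integer, and the two parts are concatenated positionally.

```python
def str2ascii(astr):
    """Encode a string as a number: the ASCII-code sum of the non-digit
    characters, shifted left by the digit count, plus the digits as an integer."""
    val, real, count_real = 0, 0, 0
    for ch in astr:
        if ch.isdigit():
            real = real * 10 + int(ch)
            count_real += 1
        else:
            val += ord(ch)
    return val * 10 ** count_real + real

# 'S' (83) + 'I' (73) = 156; digits '0000' give real = 0 over 4 positions
print(str2ascii('SI0000'))   # 1560000
```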
### Now, we need to normalize the table
```
## using normalizer
from sklearn import preprocessing
normalizer = preprocessing.Normalizer()
min_max_scaler = preprocessing.MinMaxScaler()
## Decrease the variance between data points in each columns
columns = ch_table.columns
#print(columns[1:],ch_table.loc[:,columns[1:]])
ch_table.loc[:,columns[1:]].head()
minmax_ch_table = min_max_scaler.fit_transform(ch_table.loc[:,columns[1:]])
norm_ch_table = preprocessing.normalize(ch_table.loc[:,columns[1:]],norm='l2')
#ch_table.loc[:,columns[1:]] = norm_ch_table
#ch_table.head()
print(pandas.DataFrame(minmax_ch_table, columns=columns[1:]).head(2))
print(pandas.DataFrame(norm_ch_table, columns=columns[1:]).head(2))
## Finally put the scaled data back
ch_table[columns[1:]] = minmax_ch_table
ch_table.head(10)
```
## Great!, now we have feature vectors for every challenges
Next let's prepare the ground truth matrix for the users.
Shape of y = (n_u, n_c), where
1. n_u: the number of users
2. n_c: the number of challenges
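In miniature, the label row for a single user looks like this (toy challenge ids, purely illustrative):

```python
n_c = 5  # pretend the catalogue has only 5 challenges
lookup = {'CI101': 0, 'CI102': 1, 'CI103': 2, 'CI104': 3, 'CI105': 4}

# the 3 challenges the user went on to solve become a multi-hot label row
y_row = [0] * n_c
for cid in ['CI102', 'CI105', 'CI101']:
    y_row[lookup[cid]] = 1

print(y_row)   # [1, 1, 0, 0, 1]
```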
```
## The ch_features contains
ch_features = ch_table.sort_values('challenge_ID')
ch_features = ch_features.loc[:,columns[1:]].values
#ch_features.head(10)
print('Shape of feature (n_c, n_f) = {}'.format(ch_features.shape))
## Setting up the lookup table
ch_lookup = {}
tmp = ch_table['challenge_ID'].to_dict()
ch_id_lookup=tmp
for key in tmp.keys():
    #print(key, tmp[key])
    ch_lookup[tmp[key]] = key
#ch_lookup
## now lets set up a training y array with shape = (n_c, n_u)
def findChallengeFeatures(challenge_id, table, lookup):
    """
    input:
        challenge_id: a string of the challenge_id
        table: pandas dataframe lookup table
        lookup: dict mapping a challenge_id to its row index in table
    output:
        features: pandas Series of features
    """
    columns = table.columns
    return table.loc[lookup[challenge_id], columns[1:]]
%%time
ch_table.head()
featureVec = findChallengeFeatures(x_data.loc[0,'challenge'],ch_table, ch_lookup)
print(featureVec.shape)
%%time
from operator import itemgetter
#myvalues = itemgetter(*mykeys)(mydict)
columns = ch_table.columns.values
usr_table = x_data
print(columns[1:])
for i in columns[1:]:
    usr_table[i] = np.nan
nSamples = x_data.shape[0]
## Finding indices
indices = np.array([ch_lookup[i] for i in x_data.loc[:nSamples-1,'challenge']])
print(indices.shape)
usr_table.loc[:nSamples-1, columns[1:]] = ch_table.loc[indices, columns[1:]].values
#print(ch_table.loc[indices,columns[1:3]])
#print(usr_table.loc[:nSamples-1, columns[1:]].shape)
usr_table.head(15)
usr_table.to_csv('train_withFeatureVec_allsamples.csv')
ch_table.to_csv('challenge_featureVecTable_allsamples.csv')
```
## Let's prepare the labels
First, we need an empty array to hold the challenges
```
ch_emptyVec = np.zeros((ch_table.shape[0]))
ch_emptyVec.shape
x_data.head(13)
## constructing a (n_u, n_c) array for nSamples
%%time
columns = ch_table.columns
nUsers = x_data.shape[0] // 13   # the training file has 13 rows per user
x_train = np.zeros((nUsers, 10, len(ch_table.columns)-1)) ## (m, n_i, n_f)
y_train = np.zeros((nUsers, ch_table.shape[0])) ## (m, n_c)
for i in range(nUsers):
    curpt = i*13
    # rows 0-9 of the user's block: the 10 solved challenges (feature vectors)
    x_train[i] = x_data.loc[curpt:(curpt+9), columns[1:]]
    # rows 10-12: the 3 challenges to predict, multi-hot encoded as the label
    indices = [int(ch_lookup[j]) for j in x_data.loc[(curpt+10):(curpt+12), 'challenge']]
    y_train[i, indices] = 1
print('x_train shape = {}, y_train shape = {}'.format(x_train.shape, y_train.shape))
## Flatten the array
x_train = x_train.reshape((nUsers, -1))
```
## Finally let's dump it into a classifier
```
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
gnb = GaussianNB()
clf = tree.DecisionTreeClassifier()
clf.fit(x_train, y_train)
## A simple NN
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(64, input_dim=80))
model.add(Activation('relu'))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dense(y_train.shape[1]))
model.add(Activation('softmax'))
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
model.save_weights('simpleNN.h5')
```
## Running out of time just gonna plug it in and submit
```
%%time
nSamples = x_test.shape[0]
columns = ch_table.columns
test_table = x_test
print(columns[1:])
for i in columns[1:]:
test_table[i] = np.nan
indices = np.array([ch_lookup[i] for i in x_test.loc[:nSamples-1,'challenge']])
test_table.loc[:nSamples-1, columns[1:]] = ch_table.loc[indices, columns[1:]].values
print(indices.shape)
%%time
test_table.to_csv('prepared_test_table_for_prediction.csv')
x_submit = np.zeros((nSamples // 10, 10, len(ch_table.columns)-1)) ## (m, n_i, n_f)
y_submit = pandas.DataFrame(columns=['user_sequence','challenge'],
                            data=np.empty((x_test.shape[0] // 10 * 3, 2), dtype=str))
#y_submit['user_sequence']
#y_submit.head(15)
%%time
for i in range(nSamples // 10):
    curpt = i*10
    # the user's 10 solved challenges
    x_submit[i] = x_test.loc[curpt:(curpt+9), columns[1:]]
    pred = model.predict(x_submit[i].reshape((1, 80)))
    ids = np.argsort(pred.reshape(-1))[-3:]   # indices of the 3 highest-scoring challenges
    outpt = i*3
    user_id = x_test.loc[curpt, 'user_id']
    y_submit.iloc[outpt:outpt+3, :] = [[str(user_id)+'_11', ch_id_lookup[ids[0]]],
                                       [str(user_id)+'_12', ch_id_lookup[ids[1]]],
                                       [str(user_id)+'_13', ch_id_lookup[ids[2]]]]
y_submit.head()
y_submit.head(15)
y_submit.to_csv('ftl_submission.csv')
```
# Trax : Ungraded Lecture Notebook
In this notebook you'll get to know about the Trax framework and learn about some of its basic building blocks.
## Background
### Why Trax and not TensorFlow or PyTorch?
TensorFlow and PyTorch are both extensive frameworks that can do almost anything in deep learning. They offer a lot of flexibility, but that often means verbosity of syntax and extra time to code.
Trax is much more concise. It runs on a TensorFlow backend but allows you to train models with one-line commands. Trax also runs end to end, allowing you to get data, build a model, and train it all with a few terse statements. This means you can focus on learning, instead of spending hours on the idiosyncrasies of a big framework implementation.
### Why not Keras then?
Keras is now part of TensorFlow itself from 2.0 onwards. However, Trax is well suited to implementing new state-of-the-art algorithms like Transformers, Reformers, and BERT, because it is actively maintained by the Google Brain team for advanced deep learning tasks. It also runs smoothly on CPUs, GPUs, and TPUs with comparatively few modifications to your code.
### How to Code in Trax
Building models in Trax relies on two key concepts: **layers** and **combinators**.
Trax layers are simple objects that process data and perform computations. They can be chained together into composite layers using Trax combinators, allowing you to build layers and models of any complexity.
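Conceptually, a combinator is just function composition over layers. Here is a plain-Python analogy (not actual Trax code) of what chaining layers serially amounts to:

```python
def serial(*layers):
    """Chain layers so the output of each feeds the next."""
    def combined(x):
        for layer in layers:
            x = layer(x)
        return x
    return combined

relu = lambda x: max(x, 0)     # a "layer" that zeroes negatives
times_two = lambda x: x * 2    # a "layer" that doubles its input

model = serial(relu, times_two)
print(model(-3), model(4))     # 0 8
```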
### Trax, JAX, TensorFlow and Tensor2Tensor
You already know that Trax uses TensorFlow as a backend, and it also uses the JAX library to speed up computation. You can view JAX as an enhanced and optimized version of numpy.
**Watch out for assignments which import `import trax.fastmath.numpy as np`. If you see this line, remember that when calling `np` you are really calling Trax’s version of numpy that is compatible with JAX.**
As a result of this, where you used to encounter the type `numpy.ndarray` now you will find the type `jax.interpreters.xla.DeviceArray`.
Tensor2Tensor is another name you might have heard. It started as an end to end solution much like how Trax is designed, but it grew unwieldy and complicated. So you can view Trax as the new improved version that operates much faster and simpler.
### Resources
- Trax source code can be found on Github: [Trax](https://github.com/google/trax)
- JAX library: [JAX](https://jax.readthedocs.io/en/latest/index.html)
## Installing Trax
Trax depends on JAX and some related libraries that are not yet supported on [Windows](https://github.com/google/jax/blob/1bc5896ee4eab5d7bb4ec6f161d8b2abb30557be/README.md#installation) but work well on Ubuntu and MacOS. If you are working on Windows, we suggest installing Trax on WSL2.
Official maintained documentation - [trax-ml](https://trax-ml.readthedocs.io/en/latest/) not to be confused with this [TraX](https://trax.readthedocs.io/en/latest/index.html)
```
#!pip install trax==1.3.1  # use this version for this notebook
```
## Imports
```
import numpy as np # regular ol' numpy
from trax import layers as tl # core building block
from trax import shapes # data signatures: dimensionality and type
from trax import fastmath # uses jax, offers numpy on steroids
# Trax version 1.3.1 or better
!pip list | grep trax
```
## Layers
Layers are the core building blocks in Trax or as mentioned in the lectures, they are the base classes.
They take inputs, compute functions/custom calculations and return outputs.
You can also inspect layer properties. Let me show you some examples.
### Relu Layer
First I'll show you how to build a relu activation function as a layer. A layer like this is one of the simplest types. Notice there is no object initialization so it works just like a math function.
**Note: Activation functions are also layers in Trax, which might look odd if you have been using other frameworks for a longer time.**
```
# Layers
# Create a relu trax layer
relu = tl.Relu()
# Inspect properties
print("-- Properties --")
print("name :", relu.name)
print("expected inputs :", relu.n_in)
print("promised outputs :", relu.n_out, "\n")
# Inputs
x = np.array([-2, -1, 0, 1, 2])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = relu(x)
print("-- Outputs --")
print("y :", y)
```
### Concatenate Layer
Now I'll show you how to build a layer that takes 2 inputs. Notice the change in the expected inputs property from 1 to 2.
```
# Create a concatenate trax layer
concat = tl.Concatenate()
print("-- Properties --")
print("name :", concat.name)
print("expected inputs :", concat.n_in)
print("promised outputs :", concat.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2, "\n")
# Outputs
y = concat([x1, x2])
print("-- Outputs --")
print("y :", y)
```
## Layers are Configurable
You can change the default settings of layers. For example, you can change the expected inputs for a concatenate layer from 2 to 3 using the optional parameter `n_items`.
```
# Configure a concatenate layer
concat_3 = tl.Concatenate(n_items=3) # configure the layer's expected inputs
print("-- Properties --")
print("name :", concat_3.name)
print("expected inputs :", concat_3.n_in)
print("promised outputs :", concat_3.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
x3 = x2 * 0.99
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2)
print("x3 :", x3, "\n")
# Outputs
y = concat_3([x1, x2, x3])
print("-- Outputs --")
print("y :", y)
```
**Note: At any point, if you want to refer to a function's help, look up the [documentation](https://trax-ml.readthedocs.io/en/latest/) or use the built-in `help` function.**
```
#help(tl.Concatenate) #Uncomment this to see the function docstring with explaination
```
## Layers can have Weights
Some layer types include mutable weights and biases that are used in computation and training. Layers of this type require initialization before use.
For example the `LayerNorm` layer calculates normalized data, that is also scaled by weights and biases. During initialization you pass the data shape and data type of the inputs, so the layer can initialize compatible arrays of weights and biases.
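The core of what `LayerNorm` computes — normalize to zero mean and unit variance, then apply a trainable scale and shift — can be sketched in plain NumPy (illustrative only; not the actual Trax internals):

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-6):
    # normalize, then apply the trainable scale (gamma) and shift (beta)
    return gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

x = np.array([0., 1., 2., 3.])
y = layer_norm(x)
print(y.mean().round(6), y.std().round(3))   # ~0.0, ~1.0
```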
```
# Uncomment any of them to see information regarding the function
# help(tl.LayerNorm)
# help(shapes.signature)
# Layer initialization
norm = tl.LayerNorm()
# You first must know what the input data will look like
x = np.array([0, 1, 2, 3], dtype="float")
# Use the input data signature to get shape and type for initializing weights and biases
norm.init(shapes.signature(x)) # We need to convert the input datatype from usual tuple to trax ShapeDtype
print("Normal shape:",x.shape, "Data Type:",type(x.shape))
print("Shapes Trax:",shapes.signature(x),"Data Type:",type(shapes.signature(x)))
# Inspect properties
print("-- Properties --")
print("name :", norm.name)
print("expected inputs :", norm.n_in)
print("promised outputs :", norm.n_out)
# Weights and biases
print("weights :", norm.weights[0])
print("biases :", norm.weights[1], "\n")
# Inputs
print("-- Inputs --")
print("x :", x)
# Outputs
y = norm(x)
print("-- Outputs --")
print("y :", y)
```
## Custom Layers
This is where things start getting more interesting!
You can create your own custom layers too and define custom functions for computations by using `tl.Fn`. Let me show you how.
```
help(tl.Fn)
# Define a custom layer
# In this example you will create a layer to calculate the input times 2
def TimesTwo():
    layer_name = "TimesTwo"  # don't forget to give your custom layer a name to identify it
    # Custom function for the custom layer
    def func(x):
        return x * 2
    return tl.Fn(layer_name, func)
# Test it
times_two = TimesTwo()
# Inspect properties
print("-- Properties --")
print("name :", times_two.name)
print("expected inputs :", times_two.n_in)
print("promised outputs :", times_two.n_out, "\n")
# Inputs
x = np.array([1, 2, 3])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = times_two(x)
print("-- Outputs --")
print("y :", y)
```
## Combinators
You can combine layers to build more complex layers. Trax provides a set of objects named combinator layers to make this happen. Combinators are themselves layers, so they can be used anywhere an ordinary layer can.
### Serial Combinator
This is the most common and easiest to use. For example, you could build a simple neural network by combining layers into a single layer using the `Serial` combinator. This new layer then acts just like a single layer, so you can inspect inputs, outputs and weights. You can even combine it into another layer! Combinators can then be used as trainable models. _Try adding more layers._
**Note: As you may have guessed, if there is a Serial combinator, there is a Parallel combinator as well. Do explore combinators and other layers in the Trax documentation, and look at the repo to understand how these layers are written.**
```
# help(tl.Serial)
# help(tl.Parallel)
# Serial combinator
serial = tl.Serial(
    tl.LayerNorm(),  # normalize input
    tl.Relu(),       # convert negative values to zero
    times_two,       # the custom layer you created above; multiplies the input received from the previous layer by 2
    ### START CODE HERE
    # tl.Dense(n_units=2),  # try adding more layers. eg uncomment these lines
    # tl.Dense(n_units=1),  # Binary classification, maybe? uncomment at your own peril
    # tl.LogSoftmax()       # Yes, LogSoftmax is also a layer
    ### END CODE HERE
)
# Initialization
x = np.array([-2, -1, 0, 1, 2]) #input
serial.init(shapes.signature(x)) #initialising serial instance
print("-- Serial Model --")
print(serial,"\n")
print("-- Properties --")
print("name :", serial.name)
print("sublayers :", serial.sublayers)
print("expected inputs :", serial.n_in)
print("promised outputs :", serial.n_out)
print("weights & biases:", serial.weights, "\n")
# Inputs
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = serial(x)
print("-- Outputs --")
print("y :", y)
```
## JAX
Just remember to lookout for which numpy you are using, the regular ol' numpy or Trax's JAX compatible numpy. Both tend to use the alias np so watch those import blocks.
**Note: There are certain things which are possible in regular numpy but still not possible in fastmath.numpy, so in the assignments we will switch between them to get our work done.**
```
# Numpy vs fastmath.numpy have different data types
# Regular ol' numpy
x_numpy = np.array([1, 2, 3])
print("good old numpy : ", type(x_numpy), "\n")
# Fastmath and jax numpy
x_jax = fastmath.numpy.array([1, 2, 3])
print("jax trax numpy : ", type(x_jax))
```
## Summary
Trax is a concise framework, built on TensorFlow, for end-to-end machine learning. The key building blocks are layers and combinators. This notebook is just a taste, but it sets you up with some key intuitions to take forward into the rest of the course and the assignments, where you will build end-to-end models.
<h1>Data Exploration</h1>
<p>In this notebook we will perform a broad data exploration on the <code>Hitters</code> data set. Note that the aim of this exploration is not to be completely thorough; instead we would like to gain quick insights to help develop a first prototype. Upon analyzing the output of the prototype, we can analyze the data further to gain more insight.</p>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%run ../../customModules/DataQualityReports.ipynb
# https://stackoverflow.com/questions/34398054/ipython-notebook-cell-multiple-outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
<p>We first read the comma-separated values (csv) <code>Hitters</code> file into a pandas DataFrame. To get a feeling for the data we display the top five rows of the DataFrame using the <code>head()</code> method and we show how many rows and columns the DataFrame has by using the <code>shape</code> attribute. We also show the <code>dtypes</code> attribute, which returns a pandas Series with the data type of each column.</p>
```
df = pd.read_csv("Hitters.csv", index_col = 0)
df.head()
df.shape
df.dtypes
```
<p>It appears that all the columns have the data type we would expect. We can perform another check to see if any values are missing in the DataFrame using its <code>isnull</code> method.</p>
```
df.reset_index()[df.reset_index().isnull().any(axis=1)]
df[df.isnull().any(axis=1)].shape
```
<p>This shows that there are $59$ rows with missing values in total, which seem pretty randomly distributed across the $322$ total rows. So the next step, to be able to produce the data quality reports with our custom <code>createDataQualityReports</code> function, is to organize our DataFrame by quantitative and categorical variables using hierarchical indexing.</p>
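<p>A minimal illustration of the column <code>MultiIndex</code> we are about to build, with just two toy columns:</p>

```python
import pandas as pd

df_demo = pd.DataFrame({'Hits': [81, 130], 'League': ['N', 'A']})
df_demo.columns = pd.MultiIndex.from_tuples(
    [('quantitative', 'Hits'), ('categorical', 'League')],
    names=['type of variable', 'variable'])

# select all quantitative columns in one shot
print(df_demo.xs('quantitative', axis=1))
```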
```
df.columns = pd.MultiIndex.from_tuples([('quantitative', 'AtBat'), ('quantitative', 'Hits'),
('quantitative', 'HmRun'), ('quantitative', 'Runs'),
('quantitative', 'RBI'), ('quantitative', 'Walks'),
('quantitative', 'Years'), ('quantitative', 'CAtBat'),
('quantitative', 'CHits'), ('quantitative', 'CHmRun'),
('quantitative', 'CRuns'), ('quantitative', 'CRBI'),
('quantitative', 'CWalks'), ('categorical', 'League'),
('categorical', 'Division'), ('quantitative', 'PutOuts'),
('quantitative', 'Assists'), ('quantitative', 'Errors'),
('quantitative', 'Salary'), ('categorical', 'NewLeague')],
names=['type of variable', 'variable'])
df.sort_index(axis=1, level='type of variable', inplace=True)
df.head()
```
<p>We are now in the position to use our own <code>createDataQualityReports</code> function to create a data quality report for both the categorical and the quantitative variables.</p>
```
df_qr_quantitative, df_qr_categorical = createDataQualityReports(df)
df_qr_quantitative.name + ':'
df_qr_quantitative.round(2)
df_qr_categorical.name + ':'
df_qr_categorical.round(2)
```
<p>To further gain insight into the data, we use the <code>plotQuantitativeVariables</code> and <code>plotCategoricalVariables</code> functions to produce the frequency plots for each quantitative and categorical variable.</p>
```
plotQuantitativeVariables(df.xs('quantitative', axis=1), height=3, width=7)
plotCategoricalVariables(df.xs('categorical', axis=1), height=3, width=7)
```
<p>We also compute the correlation matrix of the variables.</p>
```
corr = df.corr()
corr.style.background_gradient(cmap='coolwarm').set_precision(2)
```
<a href="https://colab.research.google.com/github/choderalab/pinot/blob/master/scripts/adlala_mol_graph.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# import
```
! rm -rf pinot
! git clone https://github.com/choderalab/pinot.git
! pip install dgl
! wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
! chmod +x Miniconda3-latest-Linux-x86_64.sh
! time bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
! time conda install -q -y -c conda-forge rdkit
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
sys.path.append('/content/pinot/')
```
# data
```
import pinot
dir(pinot)
ds = pinot.data.esol()
ds = pinot.data.utils.batch(ds, 32)
ds_tr, ds_te = pinot.data.utils.split(ds, [4, 1])
```
# network
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
```
# Adam
```
import torch
import numpy as np
opt = torch.optim.Adam(net.parameters(), 1e-3)
loss_fn = torch.nn.functional.mse_loss
rmse_tr = []
rmse_te = []
for _ in range(100):
    for g, y in ds_tr:
        opt.zero_grad()
        y_hat = net(g)
        loss = loss_fn(y, y_hat)
        loss.backward()
        opt.step()
    rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
    rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel('$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# Langevin
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
opt = pinot.inference.adlala.AdLaLa(net.parameters(), partition='La', h=1e-3)
rmse_tr = []
rmse_te = []
for _ in range(100):
    for g, y in ds_tr:
        def l():
            opt.zero_grad()
            y_hat = net(g)
            loss = loss_fn(y, y_hat)
            loss.backward()
            print(loss)
            return loss
        opt.step(l)
    rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
    rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel('$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# Adaptive Langevin
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
opt = pinot.inference.adlala.AdLaLa(net.parameters(), partition='AdLa', h=1e-3)
rmse_tr = []
rmse_te = []
for _ in range(100):
    for g, y in ds_tr:
        def l():
            opt.zero_grad()
            y_hat = net(g)
            loss = loss_fn(y, y_hat)
            loss.backward()
            print(loss)
            return loss
        opt.step(l)
    rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
    rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel(r'$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
# AdLaLa: AdLa for GN, La for last layer
```
net = pinot.representation.Sequential(
lambda in_feat, out_feat: pinot.representation.dgl_legacy.GN(in_feat, out_feat, 'SAGEConv'),
[32, 'tanh', 32, 'tanh', 32, 'tanh', 1])
net
opt = pinot.inference.adlala.AdLaLa(
[
{'params': list(net.f_in.parameters())\
+ list(net.d0.parameters())\
+ list(net.d2.parameters())\
+ list(net.d4.parameters()), 'partition': 'AdLa', 'h': torch.tensor(1e-3)},
{
'params': list(net.d6.parameters()) + list(net.f_out.parameters()),
'partition': 'La', 'h': torch.tensor(1e-3)
}
])
rmse_tr = []
rmse_te = []
for _ in range(100):
for g, y in ds_tr:
def l():
opt.zero_grad()
y_hat = net(g)
loss = loss_fn(y, y_hat)
loss.backward()
print(loss)
return loss
opt.step(l)
rmse_tr.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_tr]))
rmse_te.append(np.mean([np.sqrt(loss_fn(y, net(g)).detach().numpy()) for g, y in ds_te]))
import matplotlib
from matplotlib import pyplot as plt
plt.rc('font', size=16)
plt.plot(rmse_tr, label='training $RMSE$', linewidth=5, alpha=0.8)
plt.plot(rmse_te, label='test $RMSE$', linewidth=5, alpha=0.8)
plt.xlabel('epochs')
plt.ylabel(r'$RMSE (\log (\mathtt{mol/L}))$')
plt.legend()
```
#### Libraries
```
%%javascript
utils.load_extension('collapsible_headings/main')
utils.load_extension('hide_input/main')
utils.load_extension('autosavetime/main')
utils.load_extension('execute_time/ExecuteTime')
utils.load_extension('code_prettify/code_prettify')
utils.load_extension('scroll_down/main')
utils.load_extension('jupyter-js-widgets/extension')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
from sklearn.metrics import roc_auc_score
def plot_feature_importance(
columnas, model_features, columns_ploted=10, model_name="Catboost"
):
"""
This method is yet non-tested
This function receives a set of columns feeded to a model, and the importance of each of feature.
Returns a graphical visualization
Call it fot catboost pipe example:
plot_feature_importance(pipe_best_estimator[:-1].transform(X_tr).columns,pipe_best_estimator.named_steps['cb'].get_feature_importance(),20)
Call it for lasso pipe example:
plot_feature_importance(pipe_best_estimator[:-1].transform(X_tr).columns,np.array(pipe_best_estimator.named_steps['clf'].coef_.squeeze()),20)
"""
feature_importance = pd.Series(index=columnas, data=np.abs(model_features))
n_selected_features = (feature_importance > 0).sum()
print(
"{0:d} features, reduction of {1:2.2f}%".format(
n_selected_features,
(1 - n_selected_features / len(feature_importance)) * 100,
)
)
plt.figure()
feature_importance.sort_values().tail(columns_ploted).plot(
kind="bar", figsize=(18, 6)
)
plt.title("Feature Importance for {}".format(model_name))
plt.show()
!ls
```
## Joins
### Generic
```
generic = pd.read_csv('gx_num_generics.csv').drop(columns='Unnamed: 0')
generic.head(1)
```
### Package
```
package = pd.read_csv('gx_package.csv').drop(columns='Unnamed: 0')
package.head()
package.presentation.unique()
package.country.nunique()
package.brand.nunique()
package.brand.value_counts()
```
### Panel
```
panel = pd.read_csv('gx_panel.csv').drop(columns='Unnamed: 0')
panel.head(2)
panel.brand.nunique()
panel.channel.unique()
```
### Therapeutic
```
therapeutic_area = pd.read_csv('gx_therapeutic_area.csv').drop(columns='Unnamed: 0')
therapeutic_area.head(1)
therapeutic_area.therapeutic_area.nunique()
```
### Volume
```
volume = pd.read_csv('gx_volume.csv').drop(columns='Unnamed: 0')
volume.head(1)
volume[(volume.country=='country_1') & (volume.brand=='brand_3')]
```
### Subm
```
subm = pd.read_csv('submission_template.csv')
subm
pd.merge(volume, subm,left_on=['country','brand','month_num'], right_on = ['country','brand','month_num'])
594/4584
```
## Full
```
volume
generic
a = pd.merge(volume, generic,how='left',left_on=['country','brand'], right_on = ['country','brand'])
full = pd.merge(volume, generic,how='left',left_on=['country','brand'], right_on = ['country','brand'])
# package
full = pd.merge(full, package,how='left',left_on=['country','brand'], right_on = ['country','brand'])
full
panel
panel.groupby(['country', 'brand','channel'], as_index=False).agg(['min', 'max','sum','mean','median'])
full
# generic
full = pd.merge(volume, generic,how='left',left_on=['country','brand'], right_on = ['country','brand'])
# package
full = pd.merge(full, package,how='left',left_on=['country','brand'], right_on = ['country','brand'])
# panel
full = pd.merge(full, panel, how='left',left_on=['country','brand'], right_on = ['country','brand'])
full.shape
# generic
full = pd.merge(volume, generic,how='left',left_on=['country','brand'], right_on = ['country','brand'])
# package
full = pd.merge(full, package,how='left',left_on=['country','brand'], right_on = ['country','brand'])
# panel
full = pd.merge(full, panel, how='left',left_on=['country','brand'], right_on = ['country','brand'])
# therapeutic
full = pd.merge(full, therapeutic_area,how='left',left_on=['brand'], right_on = ['brand'])
full.head(1)
full.shape
```
## Adversarial Training
```
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from category_encoders.m_estimate import MEstimateEncoder
adv = pd.read_csv('data/gx_merged.csv')
adv = adv.drop(columns=['month_name','volume',
#'brand','B','C','D','num_generics'
])
adv['random'] = np.random.random(adv.shape[0])
me = MEstimateEncoder()
X = adv.drop(columns=['test'])
y = adv.test
X = me.fit_transform(X,y)
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.33, random_state=42)
cb = CatBoostClassifier(iterations=100,verbose=0)
cb.fit(X_train,y_train)
plot_feature_importance(X.columns,cb.get_feature_importance())
roc_auc_score(y_test,cb.predict(X_test))
X.columns
adv
```
## Splitting
```
df = pd.read_csv('data/gx_merged.csv')
# Take out test
df = df[df.test==0]
# Create our unique index variable
df['count_brand'] = df["country"].astype(str) + '-'+ df["brand"]
# Unique index
lista = df['count_brand'].unique()
df['count_brand'].nunique()
# Get the ones that have not 24months
a = pd.DataFrame(df.groupby(["country", "brand"]).month_num.max()).reset_index()
a = a[a.month_num < 23]
a["count_brand"] = a["country"].astype(str) + "-" + a["brand"]
deformed = a.count_brand.unique()
buenos = list(set(lista) - set(list(deformed)))
split = int(len(buenos)*0.75)
split_train_list = buenos[:split]
split_valid_list = buenos[split:]
len(split_train_list)
len(split_valid_list)
train_split = df[df['count_brand'].isin(split_train_list)]
valid_split = df[df['count_brand'].isin(split_valid_list)]
train_split = train_split[['country','brand']]
valid_split = valid_split[['country','brand']]
train_split.shape
train_split.drop_duplicates().to_csv('data/train_split_noerror.csv',index=False)
valid_split.drop_duplicates().to_csv('data/valid_split.csv',index=False)
split_train_split_deformed = list(set((split_train_list + list(deformed))))
train_split = df[df['count_brand'].isin(split_train_split_deformed)]
train_split = train_split[['country','brand']]
train_split.drop_duplicates().to_csv('data/train_split.csv',index=False)
576/768
len(buenos)
pd.read_csv('data/train_split.csv').shape
pd.read_csv('data/valid_split.csv').shape
pd.read_csv('data/train_split_noerror.csv').shape
```
### Split test
```
df = pd.read_csv('data/gx_merged.csv')
# Take out test
df = df[df.test==1]
# Create our unique index variable
df['count_brand'] = df["country"].astype(str) + '-'+ df["brand"]
# Unique index
lista = df['count_brand'].unique()
df['count_brand'].nunique()
split_test_list = lista
test_split = df[df['count_brand'].isin(split_test_list)]
test_split = test_split[['country','brand']]
test_split.drop_duplicates().to_csv('data/test_split.csv',index=False)
```
<img src="images/Callysto_Notebook-Banner_Top_06.06.18.jpg">
```
%%html
<script src="https://cdn.geogebra.org/apps/deployggb.js"></script>
```
# Reflections of Graphs
<img src="images/cat_fight.jpg" width=960 height=640>
## Introduction
In the photo above, a kitten is looking at its reflection in a mirror.
There are a few obvious but important observations to make about the kitten and its reflection.
Firstly, the kitten and its reflection appear to be on opposite sides of the mirror.
Secondly, the kitten and its reflection appear to be equally as far from the mirror's surface.
Thirdly, the kitten can see its reflection because it is looking at the mirror straight on --
the photographer can't see her reflection in the mirror because she's looking at it at an angle.
Now let's see how the reflection of a point across a line is just like a reflection in a mirror.
The applet below shows a point $P$ and its reflection $P'$ across a line.
Try moving $P$ and the line and see how $P'$ changes.
```
%%html
<div id="ggb-point"></div>
<script>
var ggbApp = new GGBApplet({
"height": 400,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableRightClick": false,
"enableShiftDragZoom": true,
"useBrowserForJS": false,
"filename": "geogebra/reflection-point.ggb"
}, 'ggb-point');
ggbApp.inject();
</script>
```
Whichever side of the line $P$ is on, its reflection $P'$ is on the opposite side.
The point $P$ is as far from the line as $P'$ is.
If we were to draw a line from $P$ to $P'$, this line would intersect the line we are reflecting across at a right angle; to "see" $P'$, $P$ has to look at the line straight on.
The applet shows the reflection of a point across any line.
In this notebook, we will learn how to reflect a point across three particular lines:
the $x$-axis, the $y$-axis, and the line $y = x$.
We will also learn how to reflect functions and graphs of functions across these lines.
Reflecting across other lines will be outside the scope of this notebook.
## Reflections across the $x$-axis
### Points
The easiest line to reflect across is the $x$-axis.
After toying with the above applet,
you might already have an idea of where the reflection of a point across the $x$-axis ought to be.
If not, you may want to try playing with the applet above some more.
In particular, try making the line horizontal, then try dragging the point $P$ around.
So let's test your intuition.
In the applet below, there are three blue points, $A$, $B$, and $C$.
There are three more red points, $A'$, $B'$, and $C'$ that are supposed to be their reflections,
but they are in the wrong place.
Try moving $A'$, $B'$, and $C'$ to where you think they belong.
You will see a message if you got it right.
You can also keep reading and come back to this exercise later.
```
%%html
<div id="ggb-exercise1"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-exercise1.ggb"
}, 'ggb-exercise1');
ggbApp.inject();
</script>
```
If you were able to solve the exercise, you might already have guessed the following facts:
* A point and its reflection across the $x$-axis have the same $x$-coordinate.
This is because the line from a point to its reflection across the $x$-axis
has to intersect the $x$-axis at a right angle.
Since the $x$-axis is perfectly horizontal,
this line from point to point has to be perfectly vertical,
which means the points are directly above one another, so they have the same $x$-coordinate.
* A point and its reflection across the $x$-axis have equal but opposite $y$-coordinates.
What is meant by that is if a point has a $y$-coordinate of, say, 17,
its reflection has the $y$-coordinate -17.
More generally, if a point has a $y$-coordinate of $a$, its reflection has the $y$-coordinate $-a$.
This follows from the fact that the two points are on opposite sides of the $x$-axis
(so one is positive and the other negative, unless the points are *on* the $x$-axis)
and the fact that the two points are equally distant from the $x$-axis.
Putting these two facts together, we get the following rule:
**Rule:** For an arbitrary point $(x, y)$, its reflection across the $x$-axis is the point $(x, -y)$.
**Example:** Consider the point $P = (1, 3)$.
Let's call its reflection $P'$.
Then $P'$ has the same $x$-coordinate as $P$, but its $y$-coordinate is the negative of $P$'s.
This means $P' = (1, -3)$.
**Example:** Suppose we have instead the point $P = (2, 0)$.
This point is *on* the $x$-axis.
Since $-0 = 0$, its reflection is $P' = (2, 0)$.
If we have a point on the $x$-axis and we reflect it across the $x$-axis, we get the same point back.
It is its own reflection.
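The rule above is easy to express in code. Here is a minimal sketch (the helper name `reflect_x` is ours, not from the notebook), checked against the two examples just worked out:

```python
def reflect_x(point):
    """Reflect a point (x, y) across the x-axis: keep x, negate y."""
    x, y = point
    return (x, -y)

print(reflect_x((1, 3)))   # (1, -3)
print(reflect_x((2, 0)))   # (2, 0) -- a point on the x-axis is its own reflection
```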
### Graphs
In the previous exercise, not only did we reflect three points,
but we also plotted the reflection of a triangle.
We can reflect points, triangles, and many other shapes and objects.
Now we will see how to reflect the graph of a function.
The graph of a function is just a bunch of points --
so many points packed closely together that it looks like a single curve.
To reflect the graph, we just have to reflect all of the points!
Below, in blue, is the graph of some function $y = f(x)$
and a few of the points making up its graph.
The reflection of these points is in red.
Use the slider to see what happens when we take and reflect more and more points on the graph.
```
%%html
<div id="ggb-slider1"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableRightClick": false,
"enableShiftDragZoom": true,
"useBrowserForJS": false,
"filename": "geogebra/reflection-slider1.ggb"
}, 'ggb-slider1');
ggbApp.inject();
</script>
```
If we only reflect a few points, the red dots don't look like much,
but as we reflect more and more points, the red dots start to resemble the blue curve but flipped upside-down.
This is the reflection of the graph across the $x$-axis.
Or more accurately, if we had the time to sample and reflect infinitely many points, we would get the reflection of the graph.
Usually, it will suffice to sample and reflect a few points and connect the dots with a curve.
(Even computer programs that graph functions typically just plot a bunch of points and connect them by straight lines, but they plot so many points that it looks accurate.)
**Example:** Let's reflect the graph of $y = \log_2(x)$ across the $x$-axis.
We start by identifying a few points on the graph of $y = \log_2(x)$.
We know, for example, that $(1,0)$, $(2,1)$, $(4,2)$, and $(8,3)$ are points on the graph.
Their reflections are $(1, 0)$, $(2,-1)$, $(4,-2)$, and $(8,-3)$, respectively.
Then we connect these points by a curve.
```
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from math import log
def draw():
def g(x):
return log(x,2)
f = np.vectorize(g)
xmin, xmax = 0.01, 10
nsamples = 100 #2*(xmax - xmin) - 1
x = np.linspace(xmin, xmax, nsamples)
plt.axhline(color="black", linewidth=1)
plt.axvline(color="black", linewidth=1)
plt.ylim(-5, 5)
plt.plot(x, f(x), label=r"$y = \log_2(x)$")
plt.plot(x, -f(x), label=r"Reflection of $y = \log_2(x)$", color="red", sketch_params=0.8)
pts = [(1,0), (2,1), (4,2), (8,3), (2,-1), (4,-2), (8,-3)]
fmts = ["mo", "bo", "bo", "bo", "ro", "ro", "ro"]
for i in range(0, len(pts)) :
plt.plot(pts[i][0], pts[i][1], fmts[i])
plt.annotate("$({0},{1})$".format(pts[i][0], pts[i][1]), xy = (pts[i][0], pts[i][1]), xytext = (4, 4), textcoords = "offset points")
plt.legend(loc='upper center', bbox_to_anchor=(1.45, 0.8))
draw()
```
### Functions
We have reflected points and graphs across the $x$-axis.
These were both geometric ideas.
Now we will reflect functions themselves,
and we start with the observation that the reflection of the graph $y = \log_2(x)$ in the last example
is precisely the graph of the function $y = -\log_2(x)$.
If we have an arbitrary function $y = f(x)$,
its graph is all of the points of the form $(x, f(x))$.
To reflect these points across the $x$-axis, we negate their $y$-coordinates,
so the reflection of the graph is all of the points of the form $(x, -f(x))$.
But this is just the graph of the function $y = -f(x)$!
**Rule:** The reflection of a function $y = f(x)$ across the $x$-axis is the function $y = -f(x)$.
**Example:** Suppose we have the function $y = x^2$ and we want to reflect it across the $x$-axis.
All we have to do is negate the right hand side of the equation.
The reflection is simply
$$ y = -(x^2) = -x^2. $$
**Example:** Suppose we have the function $y = \sin(x) - x$ instead.
Again, we just negate the right hand side of the equation,
but make sure to negate *all* of the terms and be careful with double negatives.
This time, the reflection across the $x$-axis is
$$ y = -(\sin(x) - x) = -\sin(x) + x. $$
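As a quick sanity check of that negation, we can compare the two expressions numerically at a spread of sample points (a throwaway check, not part of the original notebook):

```python
import numpy as np

# Negating the whole right-hand side of y = sin(x) - x
# should agree with -sin(x) + x everywhere.
x = np.linspace(-5, 5, 101)
assert np.allclose(-(np.sin(x) - x), -np.sin(x) + x)
```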
The interactive graph below allows you to enter an arbitrary function $f(x)$ and see its graph and its reflection across the $x$-axis.
```
%%html
<div id="ggb-interactive1"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": true,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-interactive1.ggb"
}, 'ggb-interactive1');
ggbApp.inject();
</script>
```
## Reflections across the $y$-axis
### Points
Now that we know how to reflect points, graphs, and functions across the $x$-axis,
we will see how to reflect them across the $y$-axis instead.
Geometrically, the idea is the same as before.
A point and its reflection are on opposite sides of the $y$-axis,
and both are the same distance away from the $y$-axis.
The line between them intersects the $y$-axis at a right angle.
Since the $y$-axis is vertical, this line intersecting it is horizontal,
so the point's reflection must be directly to the left or right of it.
Therefore the point and its reflection have the same $y$-coordinate.
Before seeing the "rule" for reflecting a point across the $y$-axis,
try this exercise to test your intuition and understanding.
Click and drag the points $A'$, $B'$, and $C'$ so that they are the reflections across the $y$-axis
of $A$, $B$, and $C$, respectively.
```
%%html
<div id="ggb-exercise2"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-exercise2.ggb"
}, 'ggb-exercise2');
ggbApp.inject();
</script>
```
**Rule:** For an arbitrary point $(x, y)$, its reflection across the $y$-axis is the point $(-x, y)$.
**Example:** Consider the point $P = (1, 3)$ and its reflection across the $y$-axis, $P'$.
The points $P$ and $P'$ have the same $y$-coordinates, but their $x$-coordinates are negatives of one another.
So $P' = (-1, 3)$.
**Example:** Suppose we have instead the point $P = (0, 2)$.
This point is on the $y$-axis.
Since $-0 = 0$, its reflection is $P' = (0, 2)$.
If we have a point on the $y$-axis and we reflect it across the $y$-axis, we get the same point back.
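As with the $x$-axis, this rule takes only a couple of lines of code. A minimal sketch (again, the helper name `reflect_y` is ours), checked against the two examples above:

```python
def reflect_y(point):
    """Reflect a point (x, y) across the y-axis: negate x, keep y."""
    x, y = point
    return (-x, y)

print(reflect_y((1, 3)))   # (-1, 3)
print(reflect_y((0, 2)))   # (0, 2) -- a point on the y-axis is its own reflection
```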
### Graphs
To reflect a graph across the $y$-axis, we do just like before.
In theory, we think of the graph as consisting of infinitely many points,
and we draw the reflection across the $y$-axis of each of these points to get the graph's reflection.
In practice, because life is short, we reflect a few points and connect them by a curve.
The more points we reflect, the more accurately we can draw the reflection of the graph.
```
%%html
<div id="ggb-slider2"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-slider2.ggb"
}, 'ggb-slider2');
ggbApp.inject();
</script>
```
### Functions
In the same way that we reflected the function $y = f(x)$ across the $x$-axis to get $y = -f(x)$,
we can also reflect the function across the $y$-axis.
An arbitrary point on the graph of $y = f(x)$ has the form $(x, f(x))$.
Using the rule above, the reflection of an arbitrary point on the graph is of the form $(-x, f(x))$.
But $(-x, f(x))$ has the same "form" as $(x, f(-x))$, and points of *this* form make up the graph of $y = f(-x)$.
**Rule:** The reflection of a function $y = f(x)$ across the $y$-axis is $y = f(-x)$.
This means we just have to replace $x$ with $-x$ everywhere in our function.
Let's do a couple of examples.
We reflected the following functions across the $x$-axis before.
Let's reflect them across the $y$-axis instead.
**Example:** Let's reflect $y = x^2$ across the $y$-axis.
According to our rule, we just replace $x$ with $-x$,
but we should be careful and put parentheses around it, like so: $(-x)$.
The reflection is simply
$$ y = (-x)^2 = x^2. $$
In this case, the reflection across the $y$-axis is the same as the original function.
**Example:** Let's reflect the function $y = \sin(x) - x$ across the $y$-axis.
We replace every $x$ with $-x$ to get
$$ y = \sin(-x) - (-x) = \sin(-x) + x. $$
This is a perfectly acceptable answer.
You might have learned that $\sin(-x) = -\sin(x)$,
so we could also rewrite this as
$$ y = -\sin(x) + x $$
if we prefer.
You might also notice that this is the same answer that we got when we reflected it across the $x$-axis.
Whether we reflect $y = \sin(x) - x$ across the $x$-axis or the $y$-axis, we get the same result.
Use the interactive graph below to plot any function with its reflection across the $y$-axis.
As usual, the function will be in blue and its reflection in red.
```
%%html
<div id="ggb-interactive2"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": true,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-interactive2.ggb"
}, 'ggb-interactive2');
ggbApp.inject();
</script>
```
## Reflections across both axes
If we reflect a point across the $x$-axis and then reflect it again, the twice-reflected point is in the same position as the original point.
The same thing happens when we reflect a point twice across the $y$-axis.
But what happens when we reflect a point across the $x$-axis and then across the $y$-axis?
Let's work with an example.
Suppose we start with the point $(1, 2)$.
Its reflection across the $x$-axis is $(1, -2)$.
The reflection of *that* across the $y$-axis is $(-1, -2)$.
What if we work with an arbitrary point whose coordinates we don't know?
We start with the point $(x, y)$, then reflect it across the $x$-axis to get $(x, -y)$.
The reflection of the new point across the $y$-axis is $(-x, -y)$.
To reflect a point across the $x$-axis followed by the $y$-axis, we just negate both of the point's coordinates.
Now check for yourself that if we reflected $(x, y)$ across the axes in the other order --
the $y$-axis and then the $x$-axis --
we still get the point $(-x, -y)$.
The order we do the reflections in does not matter!
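We can verify that order-independence by composing the two single-axis reflections in both orders (the helper names are ours, for illustration only):

```python
def reflect_x(point):
    """Reflect across the x-axis: (x, y) -> (x, -y)."""
    x, y = point
    return (x, -y)

def reflect_y(point):
    """Reflect across the y-axis: (x, y) -> (-x, y)."""
    x, y = point
    return (-x, y)

p = (1, 2)
# x-axis first, then y-axis:
assert reflect_y(reflect_x(p)) == (-1, -2)
# y-axis first, then x-axis -- same result:
assert reflect_x(reflect_y(p)) == (-1, -2)
```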
Something interesting happens when we look at what happens graphically.
Try playing with the following applet.
Click and drag the point $P$ in blue.
The red point $P'$ is the result of reflecting $P$ across both axes.
```
%%html
<div id="ggb-point2"></div>
<script>
var ggbApp = new GGBApplet({
"height": 400,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-point2.ggb"
}, 'ggb-point2');
ggbApp.inject();
</script>
```
Can you see what is happening?
There are a couple of ways of thinking about the relationship between $P$ and $P'$ in this applet.
One way is that $P'$ is the result of rotating $P$ 180 degrees around the origin.
Another way is that $P'$ is the reflection of $P$ across the origin -- a line from $P$ to $P'$ passes through the origin and both points are equally distant from the origin.
## Even and odd functions
Functions that are their own reflections across the $y$-axis have a special name.
These are called **even** functions.
Geometrically, this means the graph of the function is the same after we reflect it across the $y$-axis.
What does this mean algebraically?
An arbitrary point on the graph of the function looks like $(x, f(x))$.
When we reflect it across the $y$-axis, we get the point $(-x, f(x))$.
But because the graph is its own reflection, this has to be the same as the point $(-x, f(-x))$.
If $(-x, f(x))$ and $(-x, f(-x))$ are the same point, that means $f(x) = f(-x)$.
An even function is a function for which $f(x) = f(-x)$.
Some example of even functions are
* $f(x) = c$, where $c$ is any constant;
* $f(x) = |x|$;
* $f(x) = x^2$;
* $f(x) = x^a$ where $a$ is any even power; and
* $f(x) = \cos(x)$.
Earlier in this notebook, there was an interactive graph allowing you to enter a function and see its reflection across the $y$-axis.
Try entering these functions and see how the functions and their reflections overlap.
Functions that are their own 180-degree rotations around the origin also have a special name.
They are called **odd** functions.
Geometrically, this means if we graph the function and the rotate the graph 180 degrees about the origin,
we get the same image.
Like before, let's consider what this means algebraically.
We start with an arbitrary point on the graph of the function, $(x, f(x))$.
We rotate it 180 degrees (or equivalently, we reflect across the $x$-axis and then the $y$-axis) and get the point $(-x, -f(x))$.
But since the graph is its own rotation, this is the same point as $(-x, f(-x))$.
This means that $f(-x) = -f(x)$.
So an odd function is a function for which $f(-x) = -f(x)$.
Some examples of odd functions are
* $f(x) = 0$;
* $f(x) = x$;
* $f(x) = x^a$ where $a$ is any odd power; and
* $f(x) = \sin(x)$.
We don't have a special name for functions that are their own reflections across the $x$-axis.
Why not?
Because other than the function $f(x) = 0$, there is no such thing as a function that is its own reflection across the $x$-axis!
Such a "function" would not pass the vertical line test.
The only function that is both even and odd at the same time is $y = 0$.
We are used to calling integers even and odd.
It is strange to call functions even and odd,
but there is a relationship between the two.
* The product and quotient of two even functions is even, just like the sum and difference of even numbers is even.
* The product and quotient of two odd functions is even, just like the sum and difference of odd numbers is even.
* The product and quotient of an even and odd function is odd, just like the sum and difference of an even and odd number is odd.
Be careful though, because the sum of two odd functions is odd, whereas the sum of two odd numbers is even.
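The algebraic tests $f(x) = f(-x)$ (even) and $f(-x) = -f(x)$ (odd) are easy to check numerically at a spread of sample points. A quick sketch of such a check (our own, not from the notebook):

```python
import numpy as np

x = np.linspace(-5, 5, 101)

# cos is even: f(x) == f(-x) everywhere.
assert np.allclose(np.cos(x), np.cos(-x))

# sin is odd: f(-x) == -f(x) everywhere.
assert np.allclose(np.sin(-x), -np.sin(x))

# The product of two odd functions (sin(x) and x) is even.
g = lambda t: np.sin(t) * t
assert np.allclose(g(x), g(-x))
```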
## Reflections across the line $y = x$
### Points
The "rule" for reflecting across the line $y = x$ is not hard to use,
but understanding why the rule works is harder to explain than for the previous two reflections.
Let's state the rule first:
**Rule:** The reflection of an arbitrary point $(x,y)$ across the line $y = x$ is the point $(y,x)$.
We just swap the point's $x$- and $y$-coordinates.
Try the following exercise.
Move the points $A'$, $B'$, $C'$, and $D'$ so that they are the reflections of $A$, $B$, $C$, and $D$, respectively.
```
%%html
<div id="ggb-exercise3"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-exercise3.ggb"
}, 'ggb-exercise3');
ggbApp.inject();
</script>
```
Here is an explanation of why this rule works.
Let's start by drawing a picture of the line $y = x$, a point $P$, and its reflection $P'$.
The points $P$ and $P'$ are on opposite sides of the line $y = x$,
and the line connecting them intersects $y = x$ at a right angle.
Let's call this point of intersection $Q$.
The points $P$ and $P'$ are the same distance from $Q$.
This is our picture so far:
<img src="images/ggb1.png" width=300 height=300>
Draw a line segment from $P$ directly left until it intersects the line $y = x$.
Draw another line segment from $P'$ directly down until it too intersects the line $y = x$.
These two line segments and the line $y = x$ all meet at the same point.
Let's call this point $R$.
<img src="images/ggb2.png" width=300 height=300>
Since the line segment $PR$ is horizontal and $P'R$ is vertical,
then angle $\angle PRP'$ is a right angle.
The other angles $\angle RPP'$ and $\angle RP'P$ are each 45 degrees, so this makes the triangle $\triangle PRP'$ an isosceles triangle.
Now suppose $P$ has the coordinates $(a, b)$.
Let's try to find the coordinates of $P'$.
The triangle $\triangle PRP'$ is an isosceles triangle, so the lengths $\overline{PR}$ and $\overline{P'R}$ are equal.
What's more, $R$ and $P$ have the same $y$-coordinate (we only moved in the $x$ direction to get to $R$)
and $R$ and $P'$ have the same $x$-coordinate.
We just need to determine how far $R$ is from $P$.
To get from $P$ to $P'$, we subtract that distance from the $x$-coordinate and add it to the $y$-coordinate.
Since $P = (a, b)$ is to the lower-right of $y = x$, its $x$-coordinate is larger than the $y$-coordinate, so $a > b$.
Since $R$ is on the line $y = x$, its $x$- and $y$-coordinates are equal.
But we know $R$ has the same $y$-coordinate as $P$, so $R = (b, b)$.
The distance from $P$ to $R$ is just the difference between $a$ and $b$, so $\overline{PR} = a - b$.
Now
\begin{align*}
P'
&= (a - \overline{PR}, b + \overline{PR}) \\
&= (a - (a - b), b + (a - b)) \\
&= (a - a + b, b + a - b) \\
&= (b, a).
\end{align*}
This argument depended on $P$ being to the lower-right of the line $y = x$.
As an exercise, figure out how this argument has to change if $P$ is to the upper-left instead.
Now let's reflect the graph of a function across the line $y = x$.
As always, we sample a bunch of points on the curve, and reflect them each across the line $y = x$.
See what happens when we sample more and more points on the graph of $y = x^2$.
```
%%html
<div id="ggb-slider3"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": false,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-slider3.ggb"
}, 'ggb-slider3');
ggbApp.inject();
</script>
```
We started with a parabola opening upwards.
Its reflection is a parabola opening sideways.
This has an important implication:
*the reflection of a function across $y = x$ might not pass the vertical line test!*
Keep this in mind while we see how to reflect a function across $y = x$.
Points on the graph of an arbitrary function $y = f(x)$ have the form $(x, f(x))$.
Their reflections have the form $(f(x), x)$, but this is the same "form" as $(f(y), y)$.
Points of that form are just the points on the graph of $x = f(y)$.
**Rule:** The reflection of a function $y = f(x)$ across the line $y = x$ is the function $x = f(y)$.
This rule tells us that we only need to swap the $x$'s and $y$'s in our function to get its reflection across $y = x$.
**Example:** Let's reflect the function $y = x^2$ across the line $y = x$.
According to our rule, we just swap the $x$'s and $y$'s.
So the reflection is
$$ x = y^2. $$
The problem is solved at this point, but suppose we try to solve for $y$.
We get either
$$ y = \sqrt x $$
or
$$ y = -\sqrt x. $$
How do the graphs of $x = y^2$, $y = \sqrt x$, and $y = -\sqrt x$ compare to one another?
The graph of $x = y^2$ is a sideways parabola and does not pass the vertical line test.
The graph of $y = \sqrt x$ is just the top half of that parabola and therefore *does* pass the vertical line test.
The graph of $y = -\sqrt x$ is the bottom half of the parabola and also passes the vertical line test.
**Example:** Now let's reflect the straight line $y = \frac 1 2 x + 1$ across the line $y = x$.
Our rule says we just need to swap the $x$'s and $y$'s, so we get
$$ x = \frac 1 2 y + 1. $$
Once again, the problem is solved at this point, but let's try solving for $y$:
$$ y = 2 x - 2. $$
The reflection of the line $y = \frac 1 2 x + 1$ across the line $y = x$ is the line $y = 2 x - 2$.
This is a case of a function whose reflection actually does pass the vertical line test
and can be written in the form $y = f(x)$.
Use the interactive graph below to plot functions and their reflections across $y = x$.
```
%%html
<div id="ggb-interactive3"></div>
<script>
var ggbApp = new GGBApplet({
"height": 600,
"showToolBar": false,
"showMenuBar": false,
"showAlgebraInput": false,
"showResetIcon": true,
"enableLabelDrags": true,
"enableShiftDragZoom": true,
"enableRightClick": false,
"useBrowserForJS": false,
"filename": "geogebra/reflection-interactive3.ggb"
}, 'ggb-interactive3');
ggbApp.inject();
</script>
```
## Reflections combined with function operations
We might be asked to reflect a function after performing some function operations.
For example, we might be asked to add two functions and then reflect their sum across the $x$-axis,
or reflect the composition of two functions across the $y$-axis.
Well, the sum of two functions, for instance, is a function and we know how to reflect functions now,
so we ought to be able to do this.
**Problem:**
Reflect $f(x) + g(x)$ across the $x$-axis, where
$$ f(x) = x^2 + 1 \text{ and } g(x) = x^3 + 1. $$
**Solution:**
To do this, let's add $f(x)$ and $g(x)$ and call their sum $h(x)$. So
\begin{align*}
h(x)
&= f(x) + g(x) \\
&= (x^2 + 1) + (x^3 + 1) \\
&= x^3 + x^2 + 2.
\end{align*}
Now we just need to reflect $h(x)$ across the $x$-axis.
Its reflection is $y = -h(x)$, so
\begin{align*}
y
&= -h(x) \\
&= -(x^3 + x^2 + 2) \\
&= -x^3 - x^2 - 2.
\end{align*}
**Problem:**
Reflect $f(g(x))$ across the $y$-axis, where
$$ f(x) = \sin(x) \text{ and } g(x) = x^2. $$
**Solution:**
Like before, let's compose $f(x)$ and $g(x)$ and call their composite $h(x)$. So
\begin{align*}
h(x)
&= f(g(x)) \\
&= f(x^2) \\
&= \sin(x^2).
\end{align*}
Now we reflect $h(x)$ across the $y$-axis.
The reflection of $h(x)$ is $y = h(-x)$, so
\begin{align*}
y
&= h(-x) \\
&= \sin((-x)^2) \\
&= \sin(x^2).
\end{align*}
## Reflections combined with translations
We can apply the same idea above to reflect translations of graphs.
**Problem:**
Translate the function $y = f(x)$ left by three units, then reflect across the $y$-axis,
where $f(x) = |2x|$.
**Solution:**
Let's label by $g(x)$ the translation of $f(x)$ left by three units.
Then we just have to reflect $g(x)$ across the $y$-axis.
So
\begin{align*}
g(x)
&= f(x+3) \\
&= |2(x+3)| \\
&= |2x + 6|.
\end{align*}
Its reflection across the $y$-axis is
\begin{align*}
y
&= g(-x) \\
&= |2(-x) + 6| \\
&= |-2x + 6|.
\end{align*}
## Conclusion
In this notebook, we saw how to reflect across the $x$-axis, the $y$-axis, and the line $y = x$, and how to combine reflections with other operations.
We've also made a distinction between reflecting a point, reflecting the graph of a function, and reflecting the function itself.
Reflections of points and graphs are geometric ideas, manipulating plots;
reflections of functions are about manipulating equations,
although both points of view describe the same underlying symmetry.
The studies of statistics and of the sciences depend heavily on the skills learned in later math courses, especially Calculus courses.
A lot of time is spent in Calculus courses analyzing functions, and to do this it is usually helpful to have a clear image in one's mind of the function being analyzed.
Understanding reflections of graphs and functions is one step towards forming such a clear image.
For instance, you might already know what the graph of $y = 10^x$ looks like,
but using the techniques covered in this notebook,
you can also tell what $y = -10^x$, $y = 10^{-x}$, $y = -10^{-x}$, and $x = 10^y$ look like.
Recognizing when a function is "even" or "odd" also has its uses.
A typical question in a Calculus class is to find the area underneath a curve and above the $x$-axis.
If the curve is given by an even function, then the area to the right of the $y$-axis is the same as the area to the left.
That means we can get away with only calculating half the area and then doubling the result.
Doubling a number is easier than most things in math, so this can be a time saver.
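In symbols: if $f$ is even, the area to the left of the $y$-axis matches the area to the right, so

$$ \int_{-a}^{a} f(x)\, dx = 2 \int_{0}^{a} f(x)\, dx. $$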
## Exercises
* Plot the point $(1, 2)$ as well as its reflections across the $x$-axis, the $y$-axis, both axes, and the line $y = x$.
* Graph the function $y = x^2 - 2x$. Plot its reflection across the $x$-axis and the $y$-axis. What are the equations of these reflections?
* Graph the function $y = e^x$. Plot its reflection across the line $y = x$. Does the reflected function pass the vertical line test? What is the equation of its reflection (in the form $x = f(y)$ and in the form $y = f(x)$)?
<img src="images/Callysto_Notebook-Banners_Bottom_06.06.18.jpg">
```
from sklearn.datasets import load_iris, fetch_openml
from sklearn.preprocessing import MinMaxScaler, normalize
from sklearn.model_selection import train_test_split
from scipy.spatial.distance import minkowski, cosine
from sklearn.metrics import accuracy_score
from collections import Counter
import numpy as np
import math
import random
X, Y = load_iris(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
class Neuron:
def __init__(self, size, x, y):
        self.weight = np.array([random.uniform(-1, 1) for i in range(size)])  # 1-D weight vector, as scipy's cosine() expects
self.x = x
self.y = y
self.label = None
self.wins = Counter()
self.active = True
    def predict(self, data):
        # cosine() returns a distance; convert it to a similarity so the
        # np.argmax calls below select the closest (winning) neuron
        return 1 - cosine(data, np.ravel(self.weight))
class SOM:
def __init__(self, rows, columns, size):
self.network = list()
for i in range(rows):
for j in range(columns):
self.network.append(Neuron(size=size, x=i, y=j))
def fit(self, X, epochs, radius, alpha0):
alpha = alpha0
for t in range(epochs):
D = np.copy(X)
np.random.shuffle(D)
for data in D:
l = map(lambda x: x.predict(data), self.network)
l = list(l)
winner = self.network[np.argmax(l)]
for neuron in self.network:
if winner.x-radius < neuron.x < winner.x+radius and winner.y-radius < neuron.y < winner.y+radius:
#p = neuron.weight+alpha*data
#neuron.weight = p/np.linalg.norm(p)
#neuron.weight += normalize(alpha*(data-neuron.weight), norm="max")
neuron.weight += alpha*(data-neuron.weight)
radius -= 1
            if radius == -1:
                radius = 0
alpha = alpha0 / (1+(t/len(D)))
def neuron_labeling(self, X, Y):
for neuron in self.network:
l = map(neuron.predict, X)
l = list(l)
neuron.label = Y[np.argmax(l)]
    def mode_labeling(self, X, Y):
        for i, instance in enumerate(X):
            # index into the same filtered list we take the argmax over
            active = [n for n in self.network if n.active]
            l = [n.predict(instance) for n in active]
            winner = active[np.argmax(l)]
            winner.wins[Y[i]] += 1
            winner.label = winner.wins.most_common()[0][0]
            if len(winner.wins.keys()) > 1:
                winner.active = True
    def predict(self, X):
        output = np.zeros((X.shape[0],))
        for i, instance in enumerate(X):
            # index into the same filtered list we take the argmax over
            active = [n for n in self.network if n.active]
            l = [n.predict(instance) for n in active]
            output[i] = active[np.argmax(l)].label
        return output
X_train, X_test, Y_train, Y_test= train_test_split(X, Y, test_size=0.33, random_state=0, stratify=Y)
som = SOM(12, 8, 4)
som.fit(X_train, 100, 4, 0.5)
som.mode_labeling(X_train, Y_train)
Y_predict = som.predict(X_test)
np.sum(Y_predict == Y_test)/Y_test.shape[0]
# MNIST
X, Y = fetch_openml("mnist_784", return_X_y=True)
X = MinMaxScaler().fit_transform(X)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=10000, random_state=0, stratify=Y)
som = SOM(12, 8, 784)
som.fit(X_train, 10, 4, 0.5)
som.mode_labeling(X_train, Y_train)
Y_predict = som.predict(X_test)
print(accuracy_score(Y_predict, Y_test, normalize=True))
som = SOM(12, 8, 784)
som.fit(X_train, 10, 4, 0.5)
som.neuron_labeling(X_train, Y_train)
Y_predict = som.predict(X_test)
print(accuracy_score(Y_predict, Y_test, normalize=True))
```
The results with Iris give about 25% accuracy.
# Finding the Data
We need to install the [newsapi-python](https://github.com/mattlisiv/newsapi-python) package. We can do this by entering `!` at the beginning of a cell to access the system terminal directly. Using an exclamation mark is an easy way to reach the system terminal, install required packages, and undertake other work such as finding paths for the working directory or other files.
```
!pip install newsapi-python
```
After installing the package, we can start to send queries to retrieve data. First we need to import NewsApiClient from the _newsapi_ module.
```
from newsapi import NewsApiClient
```
We need to use the key from the News API application that we created earlier. In order not to expose your key and secret, it is good practice to save them in a separate Python file. Then we can import that file here and attach the values to variables to prevent exposure. I saved mine in a file called __'nws_token.py'__. Using the code below, I __import__ the key and secret string objects __from__ the nws_token module that I created.
In Python there are various ways to __import__ a module; here are some examples.
```Python
import module #method 1
from module import something #method 2
from module import * #method 3 imports all
```
If you use the first method, you then need to use the syntax below, calling the module name first and the function/variable name after it:
```Python
x = module.function_name() #if you use the first method
```
Otherwise, you can just call the method/variable from that module by its name. Here, we use the second method to import a variable from a module since there will not be any other variables with the same name that might cause bugs.
After importing the key, we will create an instance of the NewsApiClient object by passing our individual key as a parameter.
```
from nws_token import key
api = NewsApiClient(api_key=key)
```
Since we have created an instance of the _NewsApiClient_ object, we are now ready to find the data we are looking for. It is always good practice to refer to the official documentation to find out what parameters we can pass and what kind of data we can retrieve. You can reach the official documentation of News API [here!](https://newsapi.org/docs) After reading through the documentation, we have a better understanding of the parameters we want to use.
Now, let's try to retrieve the 100 most recent news articles mentioning the 2020 Taiwan presidential elections and save them all into a __dictionary__ object called _articles_.
```
articles = {}
for i in range(1,6):
articles.update({'page'+str(i): (api.get_everything(q='Taiwan AND Elections',
language= 'en', page = i))})
```
All the article information is now saved in our dictionary object called _articles_. It has a nested data structure: the iteration above saved 20 articles per page. As it stands, _articles_ does not have much use for us. It is a complex, hard-to-read data object with numerous fields for each article (i.e. date posted, author, source, abstract, full content). If you want to take a look, just run this code in an empty cell:
```Python
print(articles)
```
Looks complex and hard to read! As an example, let's take a look at the data on one article.
```
print(articles['page1']['articles'][0])
```
__It is still complicated but gives a better view of the available data. Given that we have 100 such records, we need to manipulate and filter this information into a more useful form.__
News API does not provide the full content of the articles. We need to use web scraping to retrieve the full content of each article. For now, we can use a function to parse the results so that we only save the fields we need: title, source, publication date, description, and the URL.
#### Functions in Python
Functions are fundamental programming tools that let us wrap several statements together and produce the values we desire. They make code easy to reuse and recycle. For this workshop, it is sufficient just to grasp the basics of functions in Python. A function definition basically looks like this:
```Python
def func_name(args):
statement
return result
```
You can also use 'yield' instead of 'return' if your function is a generator, but that is a more advanced technique that we will not use in this workshop. After you define a function, you call it by its name, and if needed you can bind the returned object to a variable.
```Python
func_name() ## calls the function
x = func_name() ## binds the returned object to a variable called x
```
Functions are a rich and powerful feature of Python, and I recommend you read more about them.
We will now use a function to grab the information we need from the articles.
Let's __```import```__ the __```datetime```__ and __```dateutil.parser```__ modules for reformatting the existing publication date into a more readable format.
```
from dateutil.parser import parse
from datetime import datetime
```
Let's first create a helper function to make the publication date more readable.
```
def reformat_date(date):
'''takes a string and returns a reformatted string'''
newdate = parse(date)
return newdate.strftime("%d-%B-%Y")
```
Now, we will create another helper function to prevent duplicate articles appearing in our dataset.
```
def check_duplicate(dataset,title):
'''
takes a list of dictionaries and a title string
to check for duplication of same articles
'''
for i in dataset:
if i['title'] == title:
return True
```
Since News API does not provide the full text of the articles, we need a web-scraping function to retrieve the full text of each article. We need to import the __```requests```__ and __```BeautifulSoup```__ packages.
```
import requests
from bs4 import BeautifulSoup
```
Now we can write another helper function to retrieve the full text of the articles. Since we might face errors and exceptions while retrieving the full text from a website, it is important to catch possible exceptions and handle them to prevent our application from breaking. We can do this using this syntax:
```Python
try:
some_code()
except:
some_exception_handling()
```
Below, we use __"```Exception as e```"__ expression so that we can print the properties of the error to be able to handle it better next time.
```
def get_fulltext(url):
'''
Takes the URL and returns
article full text.
'''
HEADERS = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)'
               ' AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36'}
try:
page = requests.get(url,headers = HEADERS)
soup = BeautifulSoup(page.content, 'html.parser')
texts = soup.find_all('p')
article = ''
for i in texts:
article += str(i.get_text())
return article
except Exception as e:
print(e)
return None
```
We can now create a function to extract the information we want in a readable way.
```
def article_extract(articles):
'''
takes a dictionary object returned from News API and
returns a list of dictionary with the required fields
'''
news_data = []
for i in articles.keys():
for n in range(0,len(articles[i]['articles'])):
if not check_duplicate(news_data,articles[i]['articles'][n]['title']):
news_data.append({'title':articles[i]['articles'][n]['title'],
'source': articles[i]['articles'][n]['source']['name'],
'URL': articles[i]['articles'][n]['url'],
'description': articles[i]['articles'][n]['description'],
'date': reformat_date(articles[i]['articles'][n]['publishedAt']),
'fulltext': get_fulltext(articles[i]['articles'][n]['url'])})
return news_data
```
__Now our function is ready for operation. Let's call it and see the first item of the dataset created by our function. It should be more readable, with only the required fields.__
```
data_set = article_extract(articles)
print(data_set[0])
```
***
It seems from the results that we managed to create our dataset. Now we can save it in a comma-separated values file to start our analysis. For this, we need to import the __```csv```__ module.
```
import csv
with open("tw_dataset.csv", 'w') as file:
tw_dt= csv.DictWriter(file,data_set[0].keys())
tw_dt.writeheader()
tw_dt.writerows(data_set)
```
__Our Data Set is saved in our working directory and now ready for exploration and analysis!__
<img src="images/dataset.gif" style="width: 650px" align="middle" />
- __[Previous: Setting the Scene](0 - Setting the Scene.ipynb)__
- __[Next: Data Exploration](2 - Data Exploration.ipynb)__
```
from sacred import Experiment
import tensorflow as tf
import threading
import numpy as np
import os
import Datasets
from Input import Input as Input
from Input import batchgenerators as batchgen
import Models.WGAN_Critic
import Models.Unet
import Utils
import Test
import pickle
dsd_train, dsd_test = Datasets.getDSDFilelist("DSD100.xml")
dataset = dict()
dataset["train_sup"] = dsd_train # 50 training tracks from DSD100 as supervised dataset
dataset["train_unsup"] = [] # Initialise unsupervised dataset structure (fill up later)
dataset["valid"] = [dsd_test[0][:25], dsd_test[1][:25], dsd_test[2][:25]] # Validation and test contains 25 songs of DSD each, plus more (added later)
dataset["test"] = [dsd_test[0][25:], dsd_test[1][25:], dsd_test[2][25:]]
#Zip up all paired dataset partitions so we have (mixture, accompaniment, drums) tuples
dataset["train_sup"] = list(zip(dataset["train_sup"][0], dataset["train_sup"][1], dataset["train_sup"][2]))
dataset["valid"] = list(zip(dataset["valid"][0], dataset["valid"][1], dataset["valid"][2]))
dataset["test"] = list(zip(dataset["test"][0], dataset["test"][1], dataset["test"][2]))
from Sample import Sample
import glob
import os.path
dataset['train_unsup'] = [] #list of tuples of sample objects (mix, acc, drums)
unsup_mix = []
for i in range(1, 40):
unsup_mix.append(Sample.from_path('/home/ubuntu/AAS/data/unsup/mix/' + str(i) + '.mp3'))
#load drums
unsup_drums = []
for i in range(1, 178):
unsup_drums.append(Sample.from_path('/home/ubuntu/AAS/data/unsup/drums/' + str(i) + '.wav'))
import librosa
def add_audio(audio_list, path_postfix):
'''
Reads in a list of audio files, sums their signals, and saves them in new audio file which is named after the first audio file plus a given postfix string
:param audio_list: List of audio file paths
:param path_postfix: Name to append to the first given audio file path in audio_list which is then used as save destination
:return: Audio file path where the sum signal was saved
'''
save_path = audio_list[0] + "_" + path_postfix + ".wav"
if not os.path.exists(save_path):
for idx, instrument in enumerate(audio_list):
instrument_audio, sr = librosa.load(instrument, sr=None)
if idx == 0:
audio = instrument_audio
else:
audio += instrument_audio
if np.min(audio) < -1.0 or np.max(audio) > 1.0:
print("WARNING: Mixing tracks together caused the result to have sample values outside of [-1,1]. Clipping those values")
audio = np.minimum(np.maximum(audio, -1.0), 1.0)
librosa.output.write_wav(save_path, audio, sr)
return save_path
unsup_acc = []
#TODO: wrap this in a loop, check for num of folders in unsup/acc, check for num of files for inner loop
path = '/home/ubuntu/AAS/data/unsup/acc/5/'
audio_list = []
for i in range(1,6):
audio_list.append(path + str(i) + '.wav') #TODO: check for filetype
summed_path = add_audio(audio_list, "a")
unsup_acc.append(Sample.from_path(summed_path))
unsup_acc.append(Sample.from_path('/home/ubuntu/AAS/data/unsup/acc/3/1.mp3'))
unsup_acc
dataset['train_unsup'] = [unsup_mix, unsup_acc, unsup_drums]
len(dataset)
dataset['train_unsup'][1][2].path
with open('dataset.pkl', 'wb') as file:
pickle.dump(dataset, file)
print("Created dataset structure")
# unsup_drums = unsup_mix[39:]
# unsup_mix = unsup_mix[:39]
unsup_drums[-1].path
#replaced
# root = '/home/ubuntu/AAS'
# if dataset == 'train_unsup':
# mix_list = glob.glob(root+dataset+'/*.wav')
# voice_list = list()
# else:
# mix_list = glob.glob(root+dataset+'/Mixed/*.wav')
# voice_list = glob.glob(root+dataset+'/Drums/*.wav')
# mix = list()
# voice = list()
# for item in mix_list:
# mix.append(Sample.from_path(item))
# for item in voice_list:
# voice.append(Sample.from_path(item))
# return mix, voice
for i in range(len(dataset['test'])):
for samp in dataset['test'][i]:
samp.path = '/home/ubuntu/AAS/' + samp.path
for tup in dataset['train_sup']:
for samp in tup[0]:
print(samp.path)
dataset['test'][0][0].path
from pysndfile import formatinfo, sndfile
from pysndfile import supported_format, supported_endianness, supported_encoding, PyaudioException, PyaudioIOError
with open('dataset.pkl', 'rb') as file:
dataset = pickle.load(file)
print("Loaded dataset from pickle!")
```
# Hyperparameter Tuning with Amazon SageMaker and MXNet
_**Creating a Hyperparameter Tuning Job for an MXNet Network**_
---
---
## Contents
1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Code](#Code)
1. [Tune](#Tune)
1. [Wrap-up](#Wrap-up)
---
## Background
This example notebook focuses on how to create a convolutional neural network model to train the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using MXNet distributed training. It leverages SageMaker's hyperparameter tuning to kick off multiple training jobs with different hyperparameter combinations, to find the set with best model performance. This is an important step in the machine learning process as hyperparameter settings can have a large impact on model accuracy. In this example, we'll use the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) to create a hyperparameter tuning job for an MXNet estimator.
---
## Setup
_This notebook was created and tested on an ml.m4.xlarge notebook instance._
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the notebook instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/using-identity-based-policies.html) for more details on creating these. Note: if using a role not associated with the current notebook instance, or if more than one role is required for training and/or hosting, please replace `sagemaker.get_execution_role()` with the appropriate full IAM role arn string(s).
```
import sagemaker
role = sagemaker.get_execution_role()
```
Now we'll import the Python libraries we'll need.
```
import sagemaker
import boto3
from sagemaker.mxnet import MXNet
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
```
---
## Data
The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). See [here](http://yann.lecun.com/exdb/mnist/) for more details on MNIST.
For this example notebook we'll use a version of the dataset that's already been published in the desired format to a shared S3 bucket. Let's specify that location now.
```
region = boto3.Session().region_name
train_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/train'.format(region)
test_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/test'.format(region)
```
---
## Code
To use SageMaker's pre-built MXNet containers, we need to pass in an MXNet script for the container to run. For our example, we'll define several functions, including:
- `load_data()` and `find_file()` which help bring in our MNIST dataset as NumPy arrays
- `build_graph()` which defines our neural network structure
- `train()` which is the main function that is run during each training job and calls the other functions in order to read in the dataset, create a neural network, and train it.
There are also several functions for hosting which we won't define, like `input_fn()`, `output_fn()`, and `predict_fn()`. These will take on their default values as described [here](https://github.com/aws/sagemaker-python-sdk#model-serving), and are not important for the purpose of showcasing SageMaker's hyperparameter tuning.
```
!cat mnist.py
```
Once we've specified and tested our training script to ensure it works, we can start our tuning job. Testing can be done in either local mode or using SageMaker training. Please see the [MXNet MNIST example notebooks](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb) for more detail.
---
## Tune
Similar to training a single MXNet job in SageMaker, we define our MXNet estimator passing in the MXNet script, IAM role, (per job) hardware configuration, and any hyperparameters we're not tuning.
```
estimator = MXNet(entry_point='mnist.py',
role=role,
instance_count=1,
instance_type='ml.m4.xlarge',
sagemaker_session=sagemaker.Session(),
py_version='py3',
framework_version='1.4.1',
base_job_name='DEMO-hpo-mxnet',
hyperparameters={'batch_size': 100})
```
Once we've defined our estimator we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters.
- Categorical parameters need to take one value from a discrete set. We define this by passing the list of possible values to `CategoricalParameter(list)`
- Continuous parameters can take any real number value between the minimum and maximum value, defined by `ContinuousParameter(min, max)`
- Integer parameters can take any integer value between the minimum and maximum value, defined by `IntegerParameter(min, max)`
*Note, if possible, it's almost always best to specify a value as the least restrictive type. For example, tuning `thresh` as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with possible values of 0.01, 0.1, 0.15, or 0.2.*
```
hyperparameter_ranges = {'optimizer': CategoricalParameter(['sgd', 'Adam']),
'learning_rate': ContinuousParameter(0.01, 0.2),
'num_epoch': IntegerParameter(1, 5)}
```
Next we'll specify the objective metric that we'd like to tune and its definition. This includes the regular expression (Regex) needed to extract that metric from the CloudWatch logs of our training job.
```
objective_metric_name = 'Validation-accuracy'
metric_definitions = [{'Name': 'Validation-accuracy',
'Regex': 'Validation-accuracy=([0-9\\.]+)'}]
```
Now, we'll create a `HyperparameterTuner` object, which we pass:
- The MXNet estimator we created above
- Our hyperparameter ranges
- Objective metric name and definition
- Number of training jobs to run in total and how many training jobs should be run simultaneously. More parallel jobs will finish tuning sooner, but may sacrifice accuracy. We recommend you set the parallel jobs value to less than 10% of the total number of training jobs (we'll set it higher just for this example to keep it short).
- Whether we should maximize or minimize our objective metric (we haven't specified here since it defaults to 'Maximize', which is what we want for validation accuracy)
```
tuner = HyperparameterTuner(estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions,
max_jobs=9,
max_parallel_jobs=3)
```
And finally, we can start our tuning job by calling `.fit()` and passing in the S3 paths to our train and test datasets.
```
tuner.fit({'train': train_data_location, 'test': test_data_location})
```
Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully and is `InProgress`.
```
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
```
---
## Wrap-up
Now that we've started our hyperparameter tuning job, it will run in the background and we can close this notebook. Once finished, we can use the [HPO Analysis notebook](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/hyperparameter_tuning/analyze_results/HPO_Analyze_TuningJob_Results.ipynb) to determine which set of hyperparameters worked best.
For more detail on Amazon SageMaker's Hyperparameter Tuning, please refer to the AWS documentation.
# Data Loading Tutorial
```
cd ../..
save_path = 'data/'
from scvi.dataset import LoomDataset, CsvDataset, Dataset10X, AnnDataset
import urllib.request
import os
from scvi.dataset import BrainLargeDataset, CortexDataset, PbmcDataset, RetinaDataset, HematoDataset, CbmcDataset, BrainSmallDataset, SmfishDataset
```
## Generic Datasets
`scvi v0.1.3` supports dataset loading for the following three generic file formats:
* `.loom` files
* `.csv` files
* `.h5ad` files
* datasets from `10x` website
Most of the dataset loading instances implemented in scvi use a positional argument `filename` and an optional argument `save_path` (value by default: `data/`). Files will be downloaded or searched for at the location `os.path.join(save_path, filename)`, make sure this path is valid when you specify the arguments.
### Loading a `.loom` file
Any `.loom` file can be loaded with initializing `LoomDataset` with `filename`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `url`: url of the dataset if the file needs to be downloaded from the web
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
* `subset_genes`: a list of gene names for subsampling
```
# Loading a remote dataset
remote_loom_dataset = LoomDataset("osmFISH_SScortex_mouse_all_cell.loom",
save_path=save_path,
url='http://linnarssonlab.org/osmFISH/osmFISH_SScortex_mouse_all_cells.loom')
# Loading a local dataset
local_loom_dataset = LoomDataset("osmFISH_SScortex_mouse_all_cell.loom",
save_path=save_path)
```
### Loading a `.csv` file
Any `.csv` file can be loaded with initializing `CsvDataset` with `filename`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `url`: url of the dataset if the file needs to be downloaded from the web
* `compression`: set `compression` as `.gz`, `.bz2`, `.zip`, or `.xz` to load a zipped `csv` file
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
* `subset_genes`: a list of gene names for subsampling
Note: `CsvDataset` currently only supports `.csv` files that are genes by cells.
If the dataset has already been downloaded at the location `save_path`, it will not be downloaded again.
```
# Loading a remote dataset
remote_csv_dataset = CsvDataset("GSE100866_CBMC_8K_13AB_10X-RNA_umi.csv.gz",
save_path=save_path,
compression='gzip',
url = "https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE100866&format=file&file=GSE100866%5FCBMC%5F8K%5F13AB%5F10X%2DRNA%5Fumi%2Ecsv%2Egz")
# Loading a local dataset
local_csv_dataset = CsvDataset("GSE100866_CBMC_8K_13AB_10X-RNA_umi.csv.gz",
save_path=save_path,
compression='gzip')
```
### Loading a `.h5ad` file
[AnnData](http://anndata.readthedocs.io/en/latest/) objects can be stored in `.h5ad` format. Any `.h5ad` file can be loaded with initializing `AnnDataset` with `filename`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `url`: url of the dataset if the file needs to be downloaded from the web
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
* `subset_genes`: a list of gene names for subsampling
```
# Loading a local dataset
local_ann_dataset = AnnDataset("TM_droplet_mat.h5ad",
save_path = save_path)
```
### Loading a file from `10x` website
If the dataset has already been downloaded at the location `save_path`, it will not be downloaded again.
`10x` has published several datasets on their [website](https://www.10xgenomics.com).
Initialize `Dataset10X` by passing in the dataset name of one of the following datasets that `scvi` currently supports: `frozen_pbmc_donor_a`, `frozen_pbmc_donor_b`, `frozen_pbmc_donor_c`, `pbmc8k`, `pbmc4k`, `t_3k`, `t_4k`, and `neuron_9k`.
Optional parameters:
* `save_path`: save path (default to be `data/`) of the file
* `type`: set `type` (default to be `filtered`) to be `filtered` or `raw` to choose between the two versions of the dataset available on `10X`
* `new_n_genes`: the number of subsampling genes - set it to be `False` to turn off subsampling
```
tenX_dataset = Dataset10X("neuron_9k", save_path=save_path)
```
### Loading local `10x` data
It is also possible to create a `Dataset` object from `10x` data saved locally. Initialize `Dataset10X` by passing in the optional argument `remote=False` to specify that you're loading local data, along with the name of the directory that contains the gene expression matrix and gene names, and the path to this directory.
For example, if your data (the `genes.tsv` and `matrix.mtx` files) is located inside the directory `mm10`, which is located at `data/10X/neuron_9k/filtered_gene_bc_matrices/`, then `filename` should have the value `'mm10'` and `save_path` should be the path to the directory containing `mm10`.
```
local_10X_dataset = Dataset10X('mm10', save_path=os.path.join(save_path, '10X/neuron_9k/filtered_gene_bc_matrices/'),
remote=False)
```
## Built-In Datasets
We've also implemented ten built-in datasets to make it easier to reproduce results from the scVI paper.
* **PBMC**: 12,039 human peripheral blood mononuclear cells profiled with 10x;
* **RETINA**: 27,499 mouse retinal bipolar neurons, profiled in two batches using the Drop-Seq technology;
* **HEMATO**: 4,016 cells from two batches that were profiled using in-drop;
* **CBMC**: 8,617 cord blood mononuclear cells profiled using 10x along with, for each cell, 13 well-characterized mononuclear antibodies;
* **BRAIN SMALL**: 9,128 mouse brain cells profiled using 10x;
* **BRAIN LARGE**: 1.3 million mouse brain cells profiled using 10x;
* **CORTEX**: 3,005 mouse cortex cells profiled using the Smart-seq2 protocol, with the addition of UMI;
* **SMFISH**: 4,462 mouse cortex cells profiled using the osmFISH protocol;
* **DROPSEQ**: 71,639 mouse cortex cells profiled using the Drop-Seq technology;
* **STARMAP**: 3,722 mouse cortex cells profiled using the STARmap technology.
### Loading `STARMAP` dataset
`StarmapDataset` consists of 3,722 cells profiled in 3 batches. The cells come with spatial coordinates of their location inside the tissue from which they were extracted and cell type labels retrieved by the authors of the original publication.
Reference: X. Wang et al., Science 10.1126/science.aat5691 (2018)
### Loading `DROPSEQ` dataset
`DropseqDataset` consists of 71,639 mouse cortex cells profiled using the Drop-Seq technology. To facilitate comparison with other methods, we use a random filtered set of 15,000 cells and then keep only a filtered set of 6,000 highly variable genes. Cells have cell type annotations, and even sub-cell type annotations, inferred by the authors of the original publication.
Reference: https://www.biorxiv.org/content/biorxiv/early/2018/04/10/299081.full.pdf
### Loading `SMFISH` dataset
`SmfishDataset` consists of 4,462 mouse cortex cells profiled using the OsmFISH protocol. The cells come with spatial coordinates of their location inside the tissue from which they were extracted and cell type labels retrieved by the authors of the original publication.
Reference: Simone Codeluppi, Lars E Borm, Amit Zeisel, Gioele La Manno, Josina A van Lunteren, Camilla I Svensson, and Sten Linnarsson. Spatial organization of the somatosensory cortex revealed by cyclic smFISH. bioRxiv, 2018.
```
smfish_dataset = SmfishDataset(save_path=save_path)
```
### Loading `BRAIN-LARGE` dataset
<font color='red'>Loading BRAIN-LARGE requires at least 32 GB memory!</font>
`BrainLargeDataset` consists of 1.3 million mouse brain cells, spanning the cortex, hippocampus and subventricular zone, and profiled with 10x Chromium. We use this dataset to demonstrate the scalability of scVI.
Reference: 10x genomics (2017). URL https://support.10xgenomics.com/single-cell-gene-expression/datasets.
```
brain_large_dataset = BrainLargeDataset(save_path=save_path)
```
### Loading `CORTEX` dataset
`CortexDataset` consists of 3,005 mouse cortex cells profiled with the Smart-seq2 protocol, with the addition of UMI. To facilitate comparison with other methods, we use a filtered set of 558 highly variable genes. The `CortexDataset` exhibits a clear high-level subpopulation structure, which has been inferred by the authors of the original publication using computational tools and annotated by inspection of specific genes or transcriptional programs. Similar levels of annotation are provided with the `PbmcDataset` and `RetinaDataset`.
Reference: Zeisel, A. et al. Cell types in the mouse cortex and hippocampus revealed by single-cell rna-seq. Science 347, 1138–1142 (2015).
```
cortex_dataset = CortexDataset(save_path=save_path)
```
### Loading `PBMC` dataset
`PbmcDataset` consists of 12,039 human peripheral blood mononuclear cells profiled with 10x.
Reference: Zheng, G. X. Y. et al. Massively parallel digital transcriptional profiling of single cells. Nature Communications 8, 14049 (2017).
```
pbmc_dataset = PbmcDataset(save_path=save_path)
```
### Loading `RETINA` dataset
`RetinaDataset` includes 27,499 mouse retinal bipolar neurons, profiled in two batches using the Drop-Seq technology.
Reference: Shekhar, K. et al. Comprehensive classification of retinal bipolar neurons by single-cell transcriptomics. Cell 166, 1308–1323.e30 (2017).
```
retina_dataset = RetinaDataset(save_path=save_path)
```
### Loading `HEMATO` dataset
`HematoDataset` includes 4,016 cells from two batches that were profiled using in-drop. This data provides a snapshot of hematopoietic progenitor cells differentiating into various lineages. We use this dataset as an example for cases where gene expression varies in a continuous fashion (along pseudo-temporal axes) rather than forming discrete subpopulations.
Reference: Tusi, B. K. et al. Population snapshots predict early haematopoietic and erythroid hierarchies. Nature 555, 54–60 (2018).
```
hemato_dataset = HematoDataset(save_path=os.path.join(save_path, 'HEMATO/'))
```
### Loading `CBMC` dataset
`CbmcDataset` includes 8,617 cord blood mononuclear cells profiled using 10x along with, for each cell, 13 well-characterized mononuclear antibodies. We used this dataset to analyze how the latent spaces inferred by dimensionality-reduction algorithms summarize protein marker abundance.
Reference: Stoeckius, M. et al. Simultaneous epitope and transcriptome measurement in single cells. Nature Methods 14, 865–868 (2017).
```
cbmc_dataset = CbmcDataset(save_path=os.path.join(save_path, "citeSeq/"))
```
### Loading `BRAIN-SMALL` dataset
`BrainSmallDataset` consists of 9,128 mouse brain cells profiled using 10x. This dataset is used as a complement to PBMC for our study of zero abundance and quality control metrics correlation with our generative posterior parameters.
Reference:
```
brain_small_dataset = BrainSmallDataset(save_path=save_path)
def allow_notebook_for_test():
print("Testing the data loading notebook")
```
<h1> Explore and create ML datasets </h1>
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
<div id="toc"></div>
Let's start off with the Python imports that we need.
```
import google.datalab.bigquery as bq
import seaborn as sns
import pandas as pd
import numpy as np
import shutil
```
<h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Write a SQL query to pick up the following fields
<pre>
pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
</pre>
from the dataset and explore a small part of the data. Make sure to pick a repeatable subset of the data so that if someone reruns this notebook, they will get the same results.
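The query below achieves repeatability by hashing the pickup timestamp. The same idea can be sketched locally in plain Python — md5 here is a hypothetical stand-in for BigQuery's `FARM_FINGERPRINT`, but the property is the same: a given key always lands in the same bucket, so reruns select exactly the same rows.

```python
import hashlib

# Repeatable sampling sketch: hash the key, keep the row only if its
# bucket equals 1. Deterministic, so every rerun picks the same rows.
def in_sample(key, every_n=100000):
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % every_n
    return bucket == 1
```

A random-number-based sampler would return different rows on every run; a hash-based one never does.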
```
rawdata = """
SELECT
pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),EVERY_N) = 1
"""
query = rawdata.replace("EVERY_N", "100000")
print(query)
trips = bq.Query(query).execute().result().to_dataframe()
print("Total dataset is {} taxi rides".format(len(trips)))
trips[:10]
```
<h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
```
ax = sns.regplot(x = "trip_distance", y = "fare_amount", ci = None, truncate = True, data = trips)
```
Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
What's up with the streaks at \$45 and \$50? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's examine whether the toll amount is captured in the total amount.
```
tollrides = trips[trips['tolls_amount'] > 0]
tollrides[tollrides['pickup_datetime'] == '2014-05-20 23:09:00']
```
Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
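Two hypothetical rides make the point concrete: physically identical trips can have different `total_amount` values purely because a cash tip went unrecorded, while `fare_amount + tolls_amount` stays consistent.

```python
# Hypothetical mini-rows (illustrative values, not real data):
# identical rides, but the cash rider's tip was never recorded.
rows = [
    {"fare_amount": 6.5, "tolls_amount": 0.0, "tip_amount": 1.3, "total_amount": 7.8},  # card
    {"fare_amount": 6.5, "tolls_amount": 0.0, "tip_amount": 0.0, "total_amount": 6.5},  # cash
]
# The prediction target is stable across payment types:
targets = [r["fare_amount"] + r["tolls_amount"] for r in rows]
```

Training on `total_amount` would force the model to predict unknowable tips; the fare-plus-tolls target removes that noise.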
Let's also look at the distribution of values within the columns.
```
trips.describe()
```
Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips.
```
def showrides(df, numlines):
import matplotlib.pyplot as plt
lats = []
lons = []
goodrows = df[df['pickup_longitude'] < -70]
for iter, row in goodrows[:numlines].iterrows():
lons.append(row['pickup_longitude'])
lons.append(row['dropoff_longitude'])
lons.append(None)
lats.append(row['pickup_latitude'])
lats.append(row['dropoff_latitude'])
lats.append(None)
sns.set_style("darkgrid")
plt.plot(lons, lats)
showrides(trips, 10)
showrides(tollrides, 10)
```
As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data:
<ol>
<li>New York city longitudes are around -74 and latitudes are around 41.</li>
<li>We shouldn't have zero passengers.</li>
<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>
<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>
<li>Discard the timestamp</li>
</ol>
Let's change the BigQuery query appropriately. In production, we'll have to carry out the same preprocessing on the real-time input data.
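The hash-based 70/15/15 split used in the query below can be sketched locally — md5 is a hypothetical stand-in for `FARM_FINGERPRINT` (which only exists in BigQuery), but the bucketing logic is the same: hash into `EVERY_N * 100` buckets, then carve the bucket range into [0, 70), [70, 85), and [85, 100) in units of `EVERY_N`.

```python
import hashlib

# Deterministic train/valid/test assignment (md5 standing in for
# FARM_FINGERPRINT): a key always maps to the same phase.
def assign_phase(key, every_n=100):
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % (every_n * 100)
    frac = bucket // every_n   # integer in [0, 100)
    if frac < 70:
        return "train"
    elif frac < 85:
        return "valid"
    return "test"
```

Because the assignment depends only on the key, rerunning the pipeline never leaks rows between phases.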
```
def sample_between(a, b):
basequery = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
sampler = "AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), EVERY_N) = 1"
sampler2 = "AND {0} >= {1}\n AND {0} < {2}".format(
"MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), EVERY_N * 100)",
"(EVERY_N * {})".format(a), "(EVERY_N * {})".format(b)
)
return "{}\n{}\n{}".format(basequery, sampler, sampler2)
def create_query(phase, EVERY_N):
"""Phase: train (70%) valid (15%) or test (15%)"""
query = ""
if phase == 'train':
# Training
query = sample_between(0, 70)
elif phase == 'valid':
# Validation
query = sample_between(70, 85)
else:
# Test
query = sample_between(85, 100)
return query.replace("EVERY_N", str(EVERY_N))
print(create_query('train', 100000))
def to_csv(df, filename):
outdf = df.copy(deep = False)
outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key
# Reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove('fare_amount')
cols.insert(0, 'fare_amount')
  print(cols)  # new order of columns
outdf = outdf[cols]
outdf.to_csv(filename, header = False, index_label = False, index = False)
  print("Wrote {} to {}".format(len(outdf), filename))
for phase in ['train', 'valid', 'test']:
query = create_query(phase, 100000)
df = bq.Query(query).execute().result().to_dataframe()
to_csv(df, 'taxi-{}.csv'.format(phase))
```
<h3> Verify that datasets exist </h3>
```
!ls -l *.csv
```
We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data.
```
%bash
head taxi-train.csv
```
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
```
import datalab.bigquery as bq
import pandas as pd
import numpy as np
import shutil
def distance_between(lat1, lon1, lat2, lon2):
# Haversine formula to compute distance "as the crow flies". Taxis can't fly of course.
dist = np.degrees(np.arccos(np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1)))) * 60 * 1.515 * 1.609344
return dist
def estimate_distance(df):
return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon'])
def compute_rmse(actual, predicted):
return np.sqrt(np.mean((actual - predicted)**2))
def print_rmse(df, rate, name):
  print("{1} RMSE = {0}".format(compute_rmse(df['fare_amount'], rate * estimate_distance(df)), name))
FEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers']
TARGET = 'fare_amount'
columns = list([TARGET])
columns.extend(FEATURES) # in CSV, target is the first column, after the features
columns.append('key')
df_train = pd.read_csv('taxi-train.csv', header = None, names = columns)
df_valid = pd.read_csv('taxi-valid.csv', header = None, names = columns)
df_test = pd.read_csv('taxi-test.csv', header = None, names = columns)
rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean()
print("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, 'Train')
print_rmse(df_valid, rate, 'Valid')
print_rmse(df_test, rate, 'Test')
```
The simple distance-based rule gives us an RMSE of <b>$9.35</b> on the validation dataset. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat. You don't want to set a goal on the test dataset, because you will want to change the architecture of the network etc. to get the best validation error. Then, you can evaluate ONCE on the test data.
## Challenge Exercise
Let's say that you want to predict whether a Stackoverflow question will be acceptably answered. Using this [public dataset of questions](https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_questions), create a machine learning dataset that you can use for classification.
<p>
What is a reasonable benchmark for this problem?
What features might be useful?
<p>
If you got the above easily, try this harder problem: you want to predict whether a question will be acceptably answered within 2 days. How would you create the dataset?
<p>
Hint (highlight to see):
<p style='color:white' linkstyle='color:white'>
You will need to do a SQL join with the table of [answers]( https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_answers) to determine whether the answer was within 2 days.
</p>
Copyright 2018 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Wind Statistics
### Introduction:
The data have been modified to contain some missing values, identified by NaN.
Using pandas should make this exercise
easier, in particular for the bonus question.
You should be able to perform all of these operations without using
a for loop or other looping construct.
1. The data in 'wind.data' has the following format:
```
"""
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
"""
```
The first three columns are year, month and day. The
remaining 12 columns are average windspeeds in knots at 12
locations in Ireland on that day.
For more information about the dataset, go [here](wind.desc).
### Step 1. Import the necessary libraries
### Step 2. Import the dataset from this [address](https://github.com/guipsamora/pandas_exercises/blob/master/06_Stats/Wind_Stats/wind.data)
### Step 3. Assign it to a variable called data and replace the first 3 columns by a proper datetime index.
### Step 4. Year 2061? Do we really have data from this year? Create a function to fix it and apply it.
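A minimal sketch of the usual fix, assuming the record spans 1961–1978: two-digit years like "61" get parsed into the 2000s, so any parsed year later than the last true year must be pushed back one century.

```python
# Sketch of the two-digit-year fix (assumes the data cover 1961-1978):
# a parsed year like 2061 really means 1961.
def fix_century(year):
    return year - 100 if year > 1978 else year
```

In pandas this would typically be applied to the parsed dates with `.apply()` or a vectorized equivalent.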
### Step 5. Set the right dates as the index. Pay attention at the data type, it should be datetime64[ns].
### Step 6. Compute how many values are missing for each location over the entire record.
#### They should be ignored in all calculations below.
### Step 7. Compute how many non-missing values there are in total.
### Step 8. Calculate the mean windspeeds of the windspeeds over all the locations and all the times.
#### A single number for the entire dataset.
### Step 9. Create a DataFrame called loc_stats and calculate the min, max and mean windspeeds and standard deviations of the windspeeds at each location over all the days
#### A different set of numbers for each location.
### Step 10. Create a DataFrame called day_stats and calculate the min, max and mean windspeed and standard deviations of the windspeeds across all the locations at each day.
#### A different set of numbers for each day.
### Step 11. Find the average windspeed in January for each location.
#### Treat January 1961 and January 1962 both as January.
### Step 12. Downsample the record to a yearly frequency for each location.
### Step 13. Downsample the record to a monthly frequency for each location.
### Step 14. Downsample the record to a weekly frequency for each location.
### Step 15. Calculate the min, max and mean windspeeds and standard deviations of the windspeeds across all locations for each week (assume that the first week starts on January 2 1961) for the first 52 weeks.
**[Pandas Micro-Course Home Page](https://www.kaggle.com/learn/pandas)**
---
# Introduction
Maps allow us to transform data in a `DataFrame` or `Series` one value at a time for an entire column. However, often we want to group our data, and then do something specific to the group the data is in. We do this with the `groupby` operation.
In these exercises we'll apply groupwise analysis to our dataset.
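Before using pandas, it helps to see what `groupby` does under the hood. This plain-Python sketch (toy scores, not the wine data) spells out the split-apply-combine pattern: split rows by key, apply an aggregation per group, combine the results.

```python
from collections import defaultdict

# Split-apply-combine in plain Python (toy data for illustration):
rows = [("gold", 90), ("silver", 80), ("gold", 94), ("silver", 70)]
groups = defaultdict(list)
for key, points in rows:                                 # split
    groups[key].append(points)
means = {k: sum(v) / len(v) for k, v in groups.items()}  # apply + combine
```

`reviews.groupby('country').points.mean()` performs exactly these three steps, just vectorized over the DataFrame.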
# Relevant Resources
- [**Grouping Reference and Examples**](https://www.kaggle.com/residentmario/grouping-and-sorting-reference)
- [Pandas cheat sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
# Set Up
Run the code cell below to load the data before running the exercises.
```
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
#pd.set_option("display.max_rows", 5)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.grouping_and_sorting import *
print("Setup complete.")
```
# Exercises
## 1.
Who are the most common wine reviewers in the dataset? Create a `Series` whose index is the `taster_twitter_handle` category from the dataset, and whose values count how many reviews each person wrote.
```
reviews.head()
# Your code here
reviews_written = reviews.groupby('taster_twitter_handle').size()
# Equivalent result: reviews.taster_twitter_handle.value_counts(),
# but the checker expects the groupby ordering.
q1.check()
#q1.hint()
#q1.solution()
```
## 2.
What is the best wine I can buy for a given amount of money? Create a `Series` whose index is wine prices and whose values is the maximum number of points a wine costing that much was given in a review. Sort the values by price, ascending (so that `4.0` dollars is at the top and `3300.0` dollars is at the bottom).
```
reviews.head()
wine = reviews.groupby('price').points.max()
wine.head()
best_rating_per_price = wine
q2.check()
#q2.hint()
#q2.solution()
```
## 3.
What are the minimum and maximum prices for each `variety` of wine? Create a `DataFrame` whose index is the `variety` category from the dataset and whose values are the `min` and `max` values thereof.
```
reviews.head()
price_extremes = reviews.groupby("variety").price.agg([min, max])
q3.check()
#q3.hint()
#q3.solution()
```
## 4.
What are the most expensive wine varieties? Create a variable `sorted_varieties` containing a copy of the dataframe from the previous question where varieties are sorted in descending order based on minimum price, then on maximum price (to break ties).
```
sorted_varieties = price_extremes.sort_values(by = ["min", "max"], ascending = False)
q4.check()
#q4.hint()
#q4.solution()
```
## 5.
Create a `Series` whose index is reviewers and whose values is the average review score given out by that reviewer. Hint: you will need the `taster_name` and `points` columns.
```
reviews.head()
reviewer_mean_ratings = reviews.groupby("taster_name").points.mean()
q5.check()
#q5.hint()
#q5.solution()
```
Are there significant differences in the average scores assigned by the various reviewers? Run the cell below to use the `describe()` method to see a summary of the range of values.
```
reviewer_mean_ratings.describe()
```
## 6.
What combination of countries and varieties are most common? Create a `Series` whose index is a `MultiIndex`of `{country, variety}` pairs. For example, a pinot noir produced in the US should map to `{"US", "Pinot Noir"}`. Sort the values in the `Series` in descending order based on wine count.
```
country_variety_counts = reviews.groupby(["country", "variety"]).size().sort_values(ascending= False)
# .count()
country_variety_counts.head()
q6.check()
#q6.hint()
#q6.solution()
```
# Keep Going
Move on to the [**Data types and missing data workbook**](https://www.kaggle.com/kernels/fork/598827).
---
**[Pandas Micro-Course Home Page](https://www.kaggle.com/learn/pandas)**
<a href="https://colab.research.google.com/github/jana0601/AA_Summer-school-LMMS/blob/main/Lab_Session_ToyModels.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.linalg as scl
```
In this notebook, we will apply the basic EDMD algorithm to analyze data from the linear stochastic differential equation:
$$ \mathrm{d}X_t = -X_t \mathrm{d}t + \sigma(X_t) \mathrm{d}W_t $$
### Simulation and Evolution of Densities
Let us first define a numerical integrator (i.e. the machinery to produce data), and then have a look at the evolution of probability distributions with time.
```
# This function realizes the standard Euler scheme
# for a linear stochastic differential equation:
def Euler_Scheme(x0, sigma, dt, m):
# Prepare output:
y = np.zeros(m)
y[0] = x0
# Initialize at x0:
x = x0
# Integrate:
for kk in range(1, m):
# Update:
xn = x - dt * x + sigma * np.sqrt(dt)*np.random.randn()
# Update current state:
y[kk] = xn
x = xn
return y
```
First, use the above function to produce 1000 simulations, each comprising 1000 discrete steps, at integration time step 1e-2, starting at $x_0 = 2$. Produce a histogram of the data after [10, 20, 50, 100, 200, 500] steps.
Then, repeat the experiment, but draw the initial condition from a normal distribution with mean zero, and standard deviation 0.5.
```
# Settings:
m = 1000
dt = 1e-2
ntraj = 1000
sigma = 1.0
# Generate data:
X = np.zeros((ntraj, m+1))
for l in range(ntraj):
x0 = np.random.randn(1)
X[l,:] = Euler_Scheme(x0,sigma,dt,m+1)
plt.plot(X[:5,:].T)
# Time instances to be used for histogramming:
t_vec = np.array([10, 20, 50, 100, 200, 500])
# Bins for histogram:
xe = np.arange(-2.5, 3.51, 0.1)
xc = 0.5 * (xe[1:] + xe[:-1])
# Histogram the data at different time instances:
plt.figure()
qq = 0
for t in t_vec:
h, _ = np.histogram(X[:,t],bins=xe,density=True)
plt.plot(xc,h,label="%d"%t_vec[qq])
qq += 1
plt.plot(xc, (1.0/np.sqrt(2*np.pi *0.5))*np.exp(-xc**2), "k--")
plt.xlabel("x", fontsize=12)
plt.tick_params(labelsize=12)
plt.ylim([-.5, 1.5])
plt.legend(loc=2)
```
### Estimating the Koopman Operator
First, write a function to compute a matrix approximation for the Koopman operator. Inputs should the raw data, the time shifted raw data, a callable function to realize the basis set, and the number of basis functions:
```
def koopman_matrix(X, Y, psi, n):
    # Get info on data:
    m = X.shape[0]
    # Evaluate basis set on full data (m x n matrices):
    PsiX = psi(X, n)
    PsiY = psi(Y, n)
    # Compute Koopman matrix as the least-squares solution of PsiX @ K = PsiY:
    K = np.linalg.lstsq(PsiX, PsiY, rcond=None)[0]
    return K
```
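A quick sanity check of this least-squares estimator (a sketch assuming numpy, independent of the SDE): for the deterministic linear map $y = 0.5x$ and the monomial basis $\{1, x\}$, the relation $\Psi(y) = \Psi(x)K$ is solved exactly by $K = \begin{pmatrix}1 & 0\\ 0 & 0.5\end{pmatrix}$.

```python
import numpy as np

# Sanity check on a deterministic toy map y = 0.5 * x with basis {1, x}:
# the estimated Koopman matrix must be [[1, 0], [0, 0.5]] up to round-off.
def monomials(X, n):
    return np.vander(X, n, increasing=True)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x
PsiX, PsiY = monomials(x, 2), monomials(y, 2)
K_check = np.linalg.lstsq(PsiX, PsiY, rcond=None)[0]
```

Recovering a known answer on a toy problem is a cheap way to catch transposition or ordering bugs before moving to stochastic data.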
Produce 10,000 pairs $(x_l, y_l)$ by drawing $x_l$ from the invariant measure of our linear SDE. Compute each $y_l$ by running the dynamics over time $t = 0.1$ (10 discrete time steps). Then, estimate the Koopman matrix for the monomial basis of degree 10.
```
# Produce the data:
m = 10000
nsteps = 10
x = np.sqrt(0.5) * np.random.randn(m)
y = np.zeros(m)
for l in range(m):
    y[l] = Euler_Scheme(x[l], sigma, dt, nsteps + 1)[-1]
# Define basis set (monomials 1, x, ..., x^(n-1)):
n = 5
psi = lambda data, nn: np.vander(data, nn, increasing=True)
# Compute Koopman matrix:
K = koopman_matrix(x, y, psi, n)
```
### Koopman-based Prediction
Diagonalize the Koopman matrix. Use the spectral mapping theorem to predict the eigenvalues at times $[0.1, 0.2, 0.3, ..., 2.0]$. Compare to the analytical values: the $k$-th eigenvalue at lag time $t$ is given by $\exp(-k \cdot t)$.
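The spectral mapping step fits in one line: if $\lambda_\tau$ is a Koopman eigenvalue at lag $\tau$, its value at lag $t$ is $\lambda_\tau^{t/\tau}$. A stdlib-only sketch using the analytic first nontrivial eigenvalue:

```python
import math

# Spectral mapping: eigenvalue at lag t equals (eigenvalue at lag tau)**(t/tau).
tau = 0.1
lam_tau = math.exp(-1.0 * tau)        # analytic first nontrivial eigenvalue at lag tau
lam_at_half = lam_tau ** (0.5 / tau)  # predicted eigenvalue at t = 0.5
```

The prediction reproduces the analytic value $\exp(-0.5)$ exactly, which is the comparison made in the plot below.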
```
# Diagonalize K:
d, V = np.linalg.eig(K)
# Sort eigenvalues and eigenvectors by decreasing modulus:
idx = np.argsort(np.abs(d))[::-1]
d, V = d[idx], V[:, idx]
# Plot eigenvalues at multiple lag times:
lags = nsteps * np.arange(1, 21)
plt.figure()
for k in range(1, 4):
    # Spectral mapping: the eigenvalue at lag q*tau is (eigenvalue at tau)**q
    plt.plot(dt*lags, np.real(d[k])**np.arange(1, 21), "o", label="k = %d" % k)
    plt.plot(dt*lags, np.exp(- k * dt* lags), "x")
plt.legend(loc=1)
```
Use the Koopman matrix to predict the variance of the process at times $[0.1, 0.2, 0.3, ..., 2.0]$, if started at $x$, as a function of $x$. The variance is
$\mathbb{E}^x[(X_t)^2]$, which equals the Koopman operator applied to the function $x^2$. Remember this function is contained in your basis set.
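For cross-checking the Koopman prediction, the conditional second moment has a closed form (assuming constant $\sigma$): $\mathbb{E}^x[X_t^2] = x^2 e^{-2t} + \tfrac{\sigma^2}{2}(1 - e^{-2t})$, i.e. a combination of the basis functions $1$ and $x^2$ only.

```python
import math

# Analytic conditional second moment of the OU process (constant sigma):
# E^x[X_t^2] = x^2 * exp(-2t) + (sigma^2 / 2) * (1 - exp(-2t)).
def second_moment(x, t, sigma=1.0):
    return x**2 * math.exp(-2.0 * t) + 0.5 * sigma**2 * (1.0 - math.exp(-2.0 * t))
```

At $t = 0$ this returns $x^2$, and for large $t$ it relaxes to the invariant variance $\sigma^2/2$ — the coefficient curves produced below should interpolate between these limits.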
```
# Coefficient vector of x**2 with respect to monomial basis:
b = np.eye(n)[:, 2]
# Prepare output:
lag_coeffs = np.zeros((n, lags.shape[0]))
# Repeatedly apply the Koopman matrix to the coefficient vector:
c = b.copy()
for q in range(lags.shape[0]):
    c = K @ c
    lag_coeffs[:, q] = c
# Plot coefficients of the variance as a function of t:
for ii in range(n):
plt.plot(dt*lags, lag_coeffs[ii, :], "o--", label=r"$x^{%d}$"%ii)
plt.legend(loc=1)
```
| github_jupyter |
# Indexed Expressions: Representing and manipulating tensors, pseudotensors, etc. in NRPy+
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
### NRPy+ Source Code for this module: [indexedexp.py](../edit/indexedexp.py)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules
1. [Step 2](#idx1): Rank-1 Indexed Expressions
1. [Step 2.a](#dot): Performing a Dot Product
1. [Step 3](#idx2): Rank-2 and Higher Indexed Expressions
1. [Step 3.a](#con): Creating C Code for the contraction variable
1. [Step 3.b](#simd): Enable SIMD support
1. [Step 4](#exc): Exercise
1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize core NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from NRPy+ for dealing with indexed expressions and outputting C code.
```
# The NRPy_param_funcs module sets up global structures that manage free parameters within NRPy+
import NRPy_param_funcs as par # NRPy+: Parameter interface
# The indexedexp module defines various functions for defining and managing indexed quantities like tensors and pseudotensors
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
# The grid module defines various parameters related to a numerical grid or the dimensionality of indexed expressions
# For example, it declares the parameter DIM, which specifies the dimensionality of the indexed expression
import grid as gri # NRPy+: Functions having to do with numerical grids
from outputC import outputC # NRPy+: Basic C code output functionality
```
<a id='idx1'></a>
# Step 2: Rank-1 Indexed Expressions \[Back to [top](#toc)\]
$$\label{idx1}$$
Indexed expressions of rank 1 are stored as [Python lists](https://www.tutorialspoint.com/python/python_lists.htm).
There are two standard ways to declare indexed expressions:
+ **Initialize indexed expression to zero:**
+ **zerorank1(DIM=-1)** $\leftarrow$ As we will see below, initializing to zero is useful if the indexed expression depends entirely on some other indexed or non-indexed expressions.
+ **DIM** is an *optional* parameter that, if set to -1, will default to the dimension as set in the **grid** module: `par.parval_from_str("grid::DIM")`. Otherwise the rank-1 indexed expression will have dimension **DIM**.
+ **Initialize indexed expression symbolically:**
+ **declarerank1(symbol, DIM=-1)**.
  + As in `zerorank1()`, **DIM** is an *optional* parameter that, if set to -1, will default to the dimension as set in the **grid** module: `par.parval_from_str("grid::DIM")`. Otherwise the rank-1 indexed expression will have dimension **DIM**.
`zerorank1()` and `declarerank1()` are both wrapper functions for the more general function `declare_indexedexp()`.
+ **declare_indexedexp(rank, symbol=None, symmetry=None, dimension=None)**.
+ The following are optional parameters: **symbol**, **symmetry**, and **dimension**. If **symbol** is not specified, then `declare_indexedexp()` will initialize an indexed expression to zero. If **symmetry** is not specified or has value "nosym", then an indexed expression will not be symmetrized, which has no relevance for an indexed expression of rank 1. If **dimension** is not specified or has value -1, then **dimension** will default to the dimension as set in the **grid** module: `par.parval_from_str("grid::DIM")`.
For example, the 3-vector $\beta^i$ (upper index denotes contravariant) can be initialized to zero as follows:
```
# Declare rank-1 contravariant ("U") vector
betaU = ixp.zerorank1()
# Print the result. It's a list of zeros!
print(betaU)
```
Next, set $\beta^i = \sum_{j=0}^i j = \{0,1,3\}$:
```
# Get the dimension we just set, so we know how many indices to loop over
DIM = par.parval_from_str("grid::DIM")
for i in range(DIM): # sum i from 0 to DIM-1, inclusive
for j in range(i+1): # sum j from 0 to i, inclusive
betaU[i] += j
print("The 3-vector betaU is now set to: "+str(betaU))
```
Alternatively, the 3-vector $\beta^i$ can be initialized **symbolically** as follows:
```
# Set the dimension to 3
par.set_parval_from_str("grid::DIM",3)
# Declare rank-1 contravariant ("U") vector
betaU = ixp.declarerank1("betaU")
# Print the result. It's a list!
print(betaU)
```
Declaring $\beta^i$ symbolically is standard in case `betaU0`, `betaU1`, and `betaU2` are defined elsewhere (e.g., read in from main memory as a gridfunction).
As can be seen, NRPy+'s standard naming convention for indexed rank-1 expressions is
+ **\[base variable name\]+\["U" for contravariant (up index) or "D" for covariant (down index)\]**
*Caution*: After declaring the vector, `betaU0`, `betaU1`, and `betaU2` can only be accessed or manipulated through list access; i.e., via `betaU[0]`, `betaU[1]`, and `betaU[2]`, respectively. Attempts to access `betaU0` directly will fail.
Knowing this, let's multiply `betaU1` by 2:
```
betaU[1] *= 2
print("The 3-vector betaU is now set to "+str(betaU))
print("The component betaU[1] is now set to "+str(betaU[1]))
```
<a id='dot'></a>
## Step 2.a: Performing a Dot Product \[Back to [top](#toc)\]
$$\label{dot}$$
Next, let's declare the variable $\beta_j$ and perform the dot product $\beta^i \beta_i$:
```
# First set betaU back to its initial value
betaU = ixp.declarerank1("betaU")
# Declare beta_j:
betaD = ixp.declarerank1("betaD")
# Get the dimension we just set, so we know how many indices to loop over
DIM = par.parval_from_str("grid::DIM")
# Initialize dot product to zero
dotprod = 0
# Perform dot product beta^i beta_i
for i in range(DIM):
dotprod += betaU[i]*betaD[i]
# Print result!
print(dotprod)
```
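For three dimensions, the loop above builds the expected three-term sum. The same index pattern rendered with plain Python strings (an added illustration, not actual NRPy+/SymPy output):

```python
# Build the symbolic-looking sum beta^i beta_i term by term
DIM = 3
terms = ["betaU%d*betaD%d" % (i, i) for i in range(DIM)]
print(" + ".join(terms))  # betaU0*betaD0 + betaU1*betaD1 + betaU2*betaD2
```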
<a id='idx2'></a>
# Step 3: Rank-2 and Higher Indexed Expressions \[Back to [top](#toc)\]
$$\label{idx2}$$
Moving to higher ranks, rank-2 indexed expressions are stored as lists of lists, rank-3 indexed expressions as lists of lists of lists, etc. For example
+ the covariant rank-2 tensor $g_{ij}$ is declared as `gDD[i][j]` in NRPy+, so that e.g., `gDD[0][2]` is stored with name `gDD02` and
+ the rank-2 tensor $T^{\mu}{}_{\nu}$ is declared as `TUD[m][n]` in NRPy+ (index names are of course arbitrary).
*Caveat*: Note that it is currently up to the user to determine whether the combination of indexed expressions makes sense; NRPy+ does not track whether up and down indices are written consistently.
NRPy+ supports symmetries in indexed expressions (above rank 1), so that if $h_{ij} = h_{ji}$, then declaring `hDD[i][j]` to be symmetric in NRPy+ will result in both `hDD[0][2]` and `hDD[2][0]` mapping to the *single* SymPy variable `hDD02`.
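The symmetric-storage naming scheme can be sketched in plain Python (an added illustration of the convention, not NRPy+ internals): the canonical name always uses the sorted index pair, so both index orders land on the same variable.

```python
# Canonical name sorts the index pair, so [i][j] and [j][i] coincide
DIM = 3
names = [["hDD%d%d" % (min(i, j), max(i, j)) for j in range(DIM)]
         for i in range(DIM)]
assert names[0][2] == names[2][0] == "hDD02"
# Only 6 of the 9 entries are distinct for a symmetric rank-2 object in 3D
assert len({n for row in names for n in row}) == 6
```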
To see how this works in NRPy+, let's define in NRPy+ a symmetric, rank-2 tensor $h_{ij}$ in three dimensions, and then compute the contraction, which should be given by $$con = h^{ij}h_{ij} = h_{00} h^{00} + h_{11} h^{11} + h_{22} h^{22} + 2 (h_{01} h^{01} + h_{02} h^{02} + h_{12} h^{12}).$$
```
# Get the dimension we just set (should be set to 3).
DIM = par.parval_from_str("grid::DIM")
# Declare h_{ij}=hDD[i][j] and h^{ij}=hUU[i][j]
hUU = ixp.declarerank2("hUU","sym01")
hDD = ixp.declarerank2("hDD","sym01")
# Perform sum h^{ij} h_{ij}, initializing contraction result to zero
con = 0
for i in range(DIM):
for j in range(DIM):
con += hUU[i][j]*hDD[i][j]
# Print result
print(con)
```
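As a cross-check of the factor-of-2 off-diagonal terms in the contraction formula above, here is a plain-Python count over the symmetric canonical names (an added illustration):

```python
from collections import Counter

# Canonical (sorted-index) name, mirroring the "sym01" convention
def name(base, i, j):
    return "%s%d%d" % (base, min(i, j), max(i, j))

DIM = 3
counts = Counter(name("hUU", i, j) + "*" + name("hDD", i, j)
                 for i in range(DIM) for j in range(DIM))
assert counts["hUU00*hDD00"] == 1   # diagonal terms appear once
assert counts["hUU01*hDD01"] == 2   # off-diagonal terms appear twice
```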
<a id='con'></a>
## Step 3.a: Creating C Code for the contraction variable $\text{con}$ \[Back to [top](#toc)\]
$$\label{con}$$
Next, let's create the C code for the contraction variable $\text{con}$, without CSE (common subexpression elimination):
```
outputC(con,"con")
```
<a id='simd'></a>
## Step 3.b: Enable SIMD support \[Back to [top](#toc)\]
$$\label{simd}$$
Finally, let's see how the output looks with SIMD support enabled:
```
outputC(con,"con",params="enable_SIMD=True")
```
<a id='exc'></a>
# Step 4: Exercise \[Back to [top](#toc)\]
$$\label{exc}$$
Setting $\beta^i$ via `declarerank1()`, write the NRPy+ code required to generate the needed C code for the lowering operation $g_{ij} \beta^i$, and store the result in the C variables `betaD0out`, `betaD1out`, and `betaD2out` [solution](Tutorial-Indexed_Expressions_soln.ipynb). *Hint: You will want to use the `zerorank1()` function.*
**To complete this exercise, you must first reset all variables in the notebook:**
```
# *Uncomment* the below %reset command and then press <Shift>+<Enter>.
# Respond with "y" in the dialog box to reset all variables.
# %reset
```
**Write your solution below:**
<a id='latex_pdf_output'></a>
# Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Indexed_Expressions.pdf](Tutorial-Indexed_Expressions.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Indexed_Expressions")
```
<a href="https://colab.research.google.com/github/arfild/dw_matrix/blob/master/Day5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install hyperopt
import pandas as pd
import numpy as np
import os
import datetime
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
from skimage import color, exposure
from sklearn.metrics import accuracy_score
from hyperopt import hp, STATUS_OK, tpe, Trials, fmin
%load_ext tensorboard
cd '/content/drive/My Drive/Colab Notebook/matrix/matrix_tree/dw_matrix_road_sign/data'
train = pd.read_pickle('train.p')
test = pd.read_pickle('test.p')
[X_train, y_train] = train['features'], train['labels']
[X_test, y_test] = test['features'], test['labels']
df=pd.read_csv('signnames.csv')
labels = df.to_dict()['b']
if y_train.ndim == 1: y_train = to_categorical(y_train)
if y_test.ndim == 1: y_test = to_categorical(y_test)
input_shape = X_train.shape[1:]
num_classes = y_train.shape[1]
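# Added sanity note (illustration, not part of the original notebook):
# to_categorical yields one-hot rows, so num_classes equals the number of
# distinct labels. A pure-numpy analogue of the same encoding:
_lbl = np.array([0, 2, 1])
_onehot = np.eye(_lbl.max() + 1)[_lbl]
assert _onehot.shape == (3, 3) and _onehot[1, 2] == 1.0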
def train_model(model, X_train, y_train, params_fit={}):
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime('%Y%m%d-%H%M%S'))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
model.fit(
X_train,
y_train,
batch_size=params_fit.get('batch_size', 128), # batch size
epochs=params_fit.get('epochs', 5), # number of epochs
verbose=params_fit.get('verbose', 1), # verbosity level
validation_data=params_fit.get('validation_data', (X_train, y_train)), # for learning curves
callbacks=[tensorboard_callback] # for TensorBoard visualization
)
return model
def predict(model_trained, X_test, y_test, scoring=accuracy_score):
y_test_norm = np.argmax(y_test, axis=1)
y_pred_prob = model_trained.predict(X_test)
y_pred = np.argmax(y_pred_prob, axis=1)
return scoring(y_test_norm, y_pred)
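# Added illustration of the decoding in predict(): argmax inverts probability
# (or one-hot) rows back to integer class indices before scoring.
_prob = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
assert list(np.argmax(_prob, axis=1)) == [1, 0]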
def get_cnn_v5(input_shape, num_classes):
return Sequential([
Conv2D(filters=32, kernel_size=(3,3), activation='relu', input_shape=input_shape),
Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same'),
MaxPool2D(),
Dropout(0.3),
Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
Conv2D(filters=64, kernel_size=(3,3), activation='relu'),
MaxPool2D(),
Dropout(0.3),
Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
Conv2D(filters=64, kernel_size=(3,3), activation='relu'),
MaxPool2D(),
Dropout(0.3),
Flatten(),
Dense(1024, activation='relu'),
Dropout(0.3),
Dense(1024, activation='relu'),
Dropout(0.3),
Dense(num_classes, activation='softmax')
])
model = get_cnn_v5(input_shape, num_classes)
model_trained = train_model(model, X_train, y_train)
predict(model_trained, X_test, y_test)
model_trained.evaluate(X_test, y_test)
def get_model(params):
return Sequential([
Conv2D(filters=32, kernel_size=(3,3), activation='relu', input_shape=input_shape),
Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same'),
MaxPool2D(),
Dropout(params['dropout_cnn_block_1']),
Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
Conv2D(filters=64, kernel_size=(3,3), activation='relu'),
MaxPool2D(),
Dropout(params['dropout_cnn_block_2']),
Conv2D(filters=128, kernel_size=(3,3), activation='relu', padding='same'),
Conv2D(filters=128, kernel_size=(3,3), activation='relu'),
MaxPool2D(),
Dropout(params['dropout_cnn_block_3']),
Flatten(),
Dense(1024, activation='relu'),
Dropout(params['dropout_dense_block_1']),
Dense(1024, activation='relu'),
Dropout(params['dropout_dense_block_2']),
Dense(num_classes, activation='softmax')
])
model_trained.evaluate(X_test, y_test)
def func_obj(params):
model = get_model(params)
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
model.fit(
X_train,
y_train,
batch_size=int(params.get('batch_size', 128)), # batch size (hp.quniform yields floats, hence int())
epochs=7, # number of epochs
verbose=0, # verbosity level
)
score = model.evaluate(X_test, y_test,verbose=0)
accuracy = score[1]
print(params, 'accuracy={}'.format(accuracy))
return {'loss': -accuracy, 'status': STATUS_OK, 'model': model}
space = {
'batch_size': hp.quniform('batch_size', 50, 200, 20),
'dropout_cnn_block_1': hp.uniform('dropout_cnn_block_1', 0.3, 0.5),
'dropout_cnn_block_2': hp.uniform('dropout_cnn_block_2', 0.3, 0.5),
'dropout_cnn_block_3': hp.uniform('dropout_cnn_block_3', 0.3, 0.5),
'dropout_dense_block_1': hp.uniform('dropout_dense_block_1', 0.3, 0.7),
'dropout_dense_block_2': hp.uniform('dropout_dense_block_2', 0.3, 0.7),
}
best = fmin(func_obj, space, algo=tpe.suggest, max_evals=30, trials=Trials())
```
```
import pandas as pd
import numpy as np
import matplotlib.backends.backend_tkagg
import matplotlib.pylab as plt
from astropy.io import fits
from astropy import units as units
import astropy.io.fits as pyfits
from astropy.convolution import Gaussian1DKernel, convolve
from extinction import calzetti00, apply, ccm89
from scipy import optimize
import sys
import time
import emcee
import corner
from multiprocessing import Pool,cpu_count
import warnings
import glob, os
import math
warnings.filterwarnings('ignore')
%matplotlib inline
ncpu = cpu_count()
print("{0} CPUs".format(ncpu))
emcee.__version__
plt.tight_layout()
plt.rc('lines', linewidth=1, markersize=2)
plt.rc('font', size=12, family='serif')
plt.rc('mathtext', fontset='stix')
plt.rc('axes', linewidth=2)
plt.rc('xtick.major', width=1.5, size=4)
plt.rc('ytick.major', width=1.5, size=4)
plt.tick_params(axis='both', which='major', labelsize=18)
plt.tick_params(axis='both', which='minor', labelsize=18)
plt.subplots_adjust(bottom=0.2, left=0.2)
tik = time.perf_counter()  # time.clock() was removed in Python 3.8
df_cat=pd.read_csv('/Volumes/My Passport/uds_3dhst_v4.1.5_catalogs/uds_3dhst.v4.1.5.zbest.rf', delim_whitespace=True,header=None,comment='#',index_col=False)
df_cat.columns=["id", "z_best", "z_type", "z_spec", "DM", "L153", "nfilt153", "L154","nfilt154", "L155", "nfilt155", "L161", "nfilt161", "L162", "nfilt162", \
"L163", "nfilt163", "L156", "nfilt156", "L157", "nfilt157", "L158", "nfilt158", "L159", "nfilt159", "L160", "nfilt160", "L135", "nfilt135", "L136", "nfilt136",\
"L137", "nfilt137", "L138", "nfilt138", "L139", "nfilt139", "L270", "nfilt270", "L271", "nfilt271", "L272", "nfilt272", "L273", "nfilt273", "L274", "nfilt274", "L275", "nfilt275"]
# df = pd.read_csv('/Volumes/My Passport/GV_CMD_fn_table_20180904/matching_galaxies_uds_20180823_GV.csv', sep=',')
# df = pd.read_csv('/Volumes/My Passport/TPAGB/database/matching_galaxies_uds_20200206_PSB.csv', sep=',')
df = pd.read_csv('/Volumes/My Passport/TPAGB/database/matching_galaxies_uds_20200301_PSB.csv', sep=',')
df = pd.read_csv('/Volumes/My Passport/TPAGB/database/matching_galaxies_uds_20200303_PSB.csv', sep=',')
df.columns=['detector','ID','region','filename','chip']
df_photometry=pd.read_csv('/Volumes/My Passport/uds_3dhst.v4.2.cats/Catalog/uds_3dhst.v4.2.cat', delim_whitespace=True,header=None,comment='#',index_col=False)
df_photometry.columns=["id", "x", "y", "ra", "dec", "faper_F160W", "eaper_F160W","faper_F140W", "eaper_F140W", "f_F160W", "e_F160W", "w_F160W", \
"f_u", "e_u", "w_u","f_B", "e_B", "w_B","f_V", "e_V", "w_V", "f_F606W", "e_F606W","w_F606W",\
"f_R", "e_R", "w_R", "f_i", "e_i", "w_i", "f_F814W", "e_F814W", "w_F814W", "f_z", "e_z", "w_z",\
"f_F125W", "e_F125W", "w_F125W","f_J", "e_J", "w_J", "f_F140W", "e_F140W", "w_F140W",\
"f_H", "e_H", "w_H","f_K", "e_K", "w_K", "f_IRAC1", "e_IRAC1", "w_IRAC1", "f_IRAC2", "e_IRAC2", "w_IRAC2",\
"f_IRAC3", "e_IRAC3", "w_IRAC3", "f_IRAC4", "e_IRAC4", "w_IRAC4","tot_cor", "wmin_ground", "wmin_hst","wmin_wfc3",\
"wmin_irac", "z_spec", "star_flag", "kron_radius", "a_image", "b_image", "theta_J2000", "class_star", "flux_radius", "fwhm_image",\
"flags", "IRAC1_contam", "IRAC2_contam", "IRAC3_contam", "IRAC4_contam", "contam_flag","f140w_flag", "use_phot", "near_star", "nexp_f125w", "nexp_f140w", "nexp_f160w"]
df_fast = pd.read_csv('/Volumes/My Passport/uds_3dhst.v4.2.cats/Fast/uds_3dhst.v4.2.fout', delim_whitespace=True,header=None,comment='#',index_col=False)
df_fast.columns = ['id', 'z', 'ltau', 'metal','lage','Av','lmass','lsfr','lssfr','la2t','chi2']
tok = time.perf_counter()
print('Time to read the catalogues:'+str(tok-tik))
df_zfit = pd.read_csv('/Volumes/My Passport/uds_3dhst_v4.1.5_catalogs/uds_3dhst.v4.1.5.zfit.concat.dat',delim_whitespace=True,header=None,comment='#',index_col=False)
df_zfit.columns=['phot_id','grism_id','jh_mag','z_spec','z_peak_phot','z_phot_l95',\
'z_phot_l68','z_phot_u68','z_phot_u95','z_max_grism','z_peak_grism',\
'l95','l68','u68','u95','f_cover','f_flagged','max_contam','int_contam',\
'f_negative','flag1','flag2']
# ### Ma05
tik2 = time.perf_counter()
norm_wavelength= 5500.0
df_Ma = pd.read_csv('/Volumes/My Passport/M09_ssp_pickles.sed', delim_whitespace=True, header=None, comment='#', index_col=False)# only solar metallicity is contained in this catalogue
df_Ma.columns = ['Age','ZH','l','Flambda']
age = df_Ma.Age
metallicity = df_Ma.ZH
wavelength = df_Ma.l
Flux = df_Ma.Flambda
age_1Gyr_index = np.where(age==1.0)[0]
age_1Gyr = age[age_1Gyr_index]
metallicity_1Gyr = metallicity[age_1Gyr_index]
wavelength_1Gyr = wavelength[age_1Gyr_index]
Flux_1Gyr = Flux[age_1Gyr_index]
F_5500_1Gyr_index=np.where(wavelength_1Gyr==norm_wavelength)[0]
F_5500_1Gyr = Flux_1Gyr[wavelength_1Gyr==norm_wavelength].values # this is the band to be normalized
df_M13 = pd.read_csv('/Volumes/My Passport/M13_models/sed_M13.ssz002',delim_whitespace=True,header=None,comment='#',index_col=False)
df_M13.columns = ['Age','ZH','l','Flambda']
age_M13 = df_M13.Age
metallicity_M13 = df_M13.ZH
wavelength_M13 = df_M13.l
Flux_M13 = df_M13.Flambda
age_1Gyr_index_M13 = np.where(age_M13==1.0)[0]#[0]
age_1Gyr_M13 = age_M13[age_1Gyr_index_M13]
metallicity_1Gyr_M13 = metallicity_M13[age_1Gyr_index_M13]
wavelength_1Gyr_M13 = wavelength_M13[age_1Gyr_index_M13]
Flux_1Gyr_M13 = Flux_M13[age_1Gyr_index_M13]
F_5500_1Gyr_index_M13=np.where(abs(wavelength_1Gyr_M13-norm_wavelength)<15)[0]
F_5500_1Gyr_M13 = 0.5*(Flux_1Gyr_M13.loc[62271+F_5500_1Gyr_index_M13[0]]+Flux_1Gyr_M13.loc[62271+F_5500_1Gyr_index_M13[1]])
# ### BC03
df_BC = pd.read_csv('/Volumes/My Passport/ssp_900Myr_z02.spec',delim_whitespace=True,header=None,comment='#',index_col=False)
df_BC.columns=['Lambda','Flux']
wavelength_BC = df_BC.Lambda
Flux_BC = df_BC.Flux
F_5500_BC_index=np.where(wavelength_BC==norm_wavelength)[0]
Flux_BC_norm = Flux_BC[F_5500_BC_index]
### Read in the BC03 models High-resolution, with Stelib library, Salpeter IMF, solar metallicity
BC03_fn='/Volumes/My Passport/bc03/models/Stelib_Atlas/Salpeter_IMF/bc2003_hr_stelib_m62_salp_ssp.ised_ASCII'
BC03_file = open(BC03_fn,"r")
BC03_X = []
for line in BC03_file:
BC03_X.append(line)
BC03_SSP_m62 = np.array(BC03_X)
BC03_age_list = np.array(BC03_SSP_m62[0].split()[1:])
BC03_age_list_num = BC03_age_list.astype(float)/1.0e9 # unit is Gyr
BC03_wave_list = np.array(BC03_SSP_m62[6].split()[1:])
BC03_wave_list_num = BC03_wave_list.astype(float)
BC03_flux_list = np.array(BC03_SSP_m62[7:-12])
BC03_flux_array = np.zeros((221,7178))
for i in range(221):
BC03_flux_array[i,:] = BC03_flux_list[i].split()[1:]
BC03_flux_array[i,:] = BC03_flux_array[i,:]/BC03_flux_array[i,2556]# Normalize the flux
## Prepare the M05 models and store in the right place
M05_model = []
M05_model_list=[]
for i in range(30):
age_index = i
age_prior = df_Ma.Age.unique()[age_index]
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
fn1 = '/Volumes/My Passport/SSP_models/new/M05_age_'+'0_'+split_galaxy_age_string[1]+'_Av_00_z002.csv'
M05_model = np.loadtxt(fn1)
M05_model_list.append(M05_model)
fn1 = '/Volumes/My Passport/SSP_models/new/M05_age_1_Av_00_z002.csv'
fn2 = '/Volumes/My Passport/SSP_models/new/M05_age_1_5_Av_00_z002.csv'
M05_model = np.loadtxt(fn1)
M05_model_list.append(M05_model)
M05_model = np.loadtxt(fn2)
M05_model_list.append(M05_model)
for i in range(32,46):
age_index = i
age_prior = df_Ma.Age.unique()[age_index]
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
fn2 = '/Volumes/My Passport/SSP_models/new/M05_age_'+split_galaxy_age_string[0]+'_Av_00_z002.csv'
M05_model = np.loadtxt(fn2)
M05_model_list.append(M05_model)
## Prepare the M13 models and store in the right place
M13_model = []
M13_model_list=[]
fn1 = '/Volumes/My Passport/SSP_models/new/M13_age_1e-06_Av_00_z002.csv'
fn2 = '/Volumes/My Passport/SSP_models/new/M13_age_0_0001_Av_00_z002.csv'
M13_model = np.genfromtxt(fn1)
M13_model_list.append(M13_model)
M13_model = np.genfromtxt(fn2)
M13_model_list.append(M13_model)
for i in range(2,51):
age_index = i
age_prior = df_M13.Age.unique()[age_index]
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
fn1 = '/Volumes/My Passport/SSP_models/new/M13_age_'+'0_'+split_galaxy_age_string[1]+'_Av_00_z002.csv'
M13_model = np.loadtxt(fn1)
M13_model_list.append(M13_model)
fn1 = '/Volumes/My Passport/SSP_models/new/M13_age_1_Av_00_z002.csv'
fn2 = '/Volumes/My Passport/SSP_models/new/M13_age_1_5_Av_00_z002.csv'
M13_model = np.loadtxt(fn1)
M13_model_list.append(M13_model)
M13_model = np.loadtxt(fn2)
M13_model_list.append(M13_model)
for i in range(53,67):
age_index = i
age_prior = df_M13.Age.unique()[age_index]
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
fn2 = '/Volumes/My Passport/SSP_models/new/M13_age_'+split_galaxy_age_string[0]+'_Av_00_z002.csv'
M13_model = np.loadtxt(fn2)
M13_model_list.append(M13_model)
def read_spectra(row):
"""
region: default 1 means the first region mentioned in the area, otherwise, the second region/third region
"""
detector=df.detector[row]
region = df.region[row]
chip = df.chip[row]
ID = df.ID[row]
redshift_1=df_cat.loc[ID-1].z_best
mag = -2.5*np.log10(df_cat.loc[ID-1].L161)+25#+0.02
#print mag
#WFC3 is using the infrared low-resolution grism, and here we are using the z band
if detector == 'WFC3':
filename="/Volumes/My Passport/UDS_WFC3_V4.1.5/uds-"+"{0:02d}".format(region)+"/1D/ASCII/uds-"+"{0:02d}".format(region)+"-G141_"+"{0:05d}".format(ID)+".1D.ascii"
OneD_1 = np.loadtxt(filename,skiprows=1)
if detector =="ACS":
filename="/Volumes/My Passport/UDS_ACS_V4.1.5/acs-uds-"+"{0:02d}".format(region)+"/1D/FITS/"+df.filename[row]
OneD_1 = fits.getdata(filename, ext=1)
return ID, OneD_1,redshift_1, mag
def Lick_index_ratio(wave, flux, band=3):
if band == 3:
blue_min = 1.06e4 # 1.072e4#
blue_max = 1.08e4 # 1.08e4#
red_min = 1.12e4 # 1.097e4#
red_max = 1.14e4 # 1.106e4#
band_min = blue_max
band_max = red_min
# Blue
blue_mask = (wave >= blue_min) & (wave <= blue_max)
blue_wave = wave[blue_mask]
blue_flux = flux[blue_mask]
# Red
red_mask = (wave >= red_min) & (wave <= red_max)
red_wave = wave[red_mask]
red_flux = flux[red_mask]
band_mask = (wave >= band_min) & (wave <= band_max)
band_wave = wave[band_mask]
band_flux = flux[band_mask]
if len(blue_wave) == len(red_wave) and len(blue_wave) != 0:
ratio = np.mean(blue_flux) / np.mean(red_flux)
elif len(red_wave) == 0: # comparing a numpy array to [] does not test emptiness
ratio = np.mean(blue_flux) / np.mean(red_flux)
elif len(blue_wave) != 0 and len(red_wave) != 0:
ratio = np.mean(blue_flux) / np.mean(red_flux)
# ratio_err = np.sqrt(np.sum(1/red_flux**2*blue_flux_err**2)+np.sum((blue_flux/red_flux**2*red_flux_err)**2))
return ratio # , ratio_err
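# Added sanity check on synthetic data: blue-window mean 3.0 over red-window
# mean 1.5 should give an index ratio of 2.0.
_wave = np.array([1.07e4, 1.075e4, 1.13e4, 1.135e4])
_flux = np.array([2.0, 4.0, 1.0, 2.0])
assert Lick_index_ratio(_wave, _flux) == 2.0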
def binning_spec_keep_shape(wave,flux,bin_size):
wave_binned = wave
flux_binned = np.zeros(len(wave))
# flux_err_binned = np.zeros(len(wave))
for i in range((int(len(wave)/bin_size))):
flux_binned[bin_size*i:bin_size*(i+1)] = np.mean(flux[bin_size*i:bin_size*(i+1)])
#flux_err_binned[bin_size*i:bin_size*(i+1)] = np.mean(flux_err[bin_size*i:bin_size*(i+1)])
return wave_binned, flux_binned#, flux_err_binned
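# Added illustration: each bin_size-long chunk is replaced by its mean while
# the array length is kept unchanged.
_wb, _fb = binning_spec_keep_shape(np.arange(6.0), np.array([1., 3., 5., 7., 9., 11.]), 2)
assert list(_fb) == [2.0, 2.0, 6.0, 6.0, 10.0, 10.0]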
def derive_1D_spectra_Av_corrected(OneD_1,redshift_1,rownumber,wave_list,band_list,photometric_flux,photometric_flux_err,photometric_flux_err_mod,A_v):
"""
OneD_1 is the oneD spectra
redshift_1 is the redshift of the spectra
rownumber is the row number in order to store the spectra
"""
region = df.region[rownumber]
ID = df.ID[rownumber]
n = len(OneD_1)
age=10**(df_fast.loc[ID-1].lage)/1e9 ## in Gyr
metal = df_fast.loc[ID-1].metal
sfr = 10**(df_fast.loc[ID-1].lsfr)
intrinsic_Av = df_fast.loc[ID-1].Av
norm_factor_BC = int((OneD_1[int(n/2+1)][0]-OneD_1[int(n/2)][0])/(1+redshift_1)/1)
norm_limit_BC = int(5930/norm_factor_BC)*norm_factor_BC+400
smooth_wavelength_BC_1 = wavelength_BC[400:norm_limit_BC].values.reshape(-1,norm_factor_BC).mean(axis=1)
smooth_wavelength_BC = np.hstack([smooth_wavelength_BC_1,wavelength_BC[norm_limit_BC:]])
smooth_Flux_BC_1 = Flux_BC[400:norm_limit_BC].values.reshape(-1,norm_factor_BC).mean(axis=1)
smooth_Flux_BC = np.hstack([smooth_Flux_BC_1,Flux_BC[norm_limit_BC:]])/Flux_BC_norm.values[0]
norm_factor_Ma = int((OneD_1[int(n/2+1)][0]-OneD_1[int(n/2)][0])/(1+redshift_1)/5)
norm_limit_Ma = int(4770/norm_factor_Ma)*norm_factor_Ma
smooth_wavelength_Ma = wavelength_1Gyr[:norm_limit_Ma].values.reshape(-1,norm_factor_Ma).mean(axis=1)
smooth_Flux_Ma_1Gyr = Flux_1Gyr[:norm_limit_Ma].values.reshape(-1,norm_factor_Ma).mean(axis=1)/F_5500_1Gyr
if redshift_1<=0.05:
i = 2
temp_norm_wave = wave_list[i]/(1+redshift_1)
index_wave_norm = find_nearest(smooth_wavelength_BC,temp_norm_wave)
norm_band = photometric_flux[i]
#plt.text(5000,0.55,'normalized at V: rest frame '+"{0:.2f}".format(temp_norm_wave),fontsize=16)
#plt.axvline(temp_norm_wave,linewidth=2,color='b')
elif redshift_1<=0.14:
i = 6
temp_norm_wave = wave_list[i]/(1+redshift_1)
index_wave_norm = find_nearest(smooth_wavelength_BC,temp_norm_wave)
norm_band = photometric_flux[i]
#plt.text(5000,0.55,'normalized at F606W: rest frame '+"{0:.2f}".format(temp_norm_wave),fontsize=16)
#plt.axvline(temp_norm_wave,linewidth=2,color='b')
elif redshift_1<=0.26:
i = 3
temp_norm_wave = wave_list[i]/(1+redshift_1)
index_wave_norm = find_nearest(smooth_wavelength_BC,temp_norm_wave)
norm_band = photometric_flux[i]
#plt.text(5000,0.55,'normalized at R: rest frame '+"{0:.2f}".format(temp_norm_wave),fontsize=16)
#plt.axvline(temp_norm_wave,linewidth=2,color='b')
elif redshift_1<=0.42:
i = 4
temp_norm_wave = wave_list[i]/(1+redshift_1)
index_wave_norm = find_nearest(smooth_wavelength_BC,temp_norm_wave)
norm_band = photometric_flux[i]
#plt.text(5000,0.55,'normalized at i: rest frame '+"{0:.2f}".format(temp_norm_wave),fontsize=16)
#plt.axvline(temp_norm_wave,linewidth=2,color='b')
elif redshift_1<=0.54:
i = 7
temp_norm_wave = wave_list[i]/(1+redshift_1)
index_wave_norm = find_nearest(smooth_wavelength_BC,temp_norm_wave)
norm_band = photometric_flux[i]
#plt.text(5000,0.55,'normalized at F814W: rest frame '+"{0:.2f}".format(temp_norm_wave),fontsize=16)
#plt.axvline(temp_norm_wave,linewidth=2,color='b')
else:
i = 5
temp_norm_wave = wave_list[i]/(1+redshift_1)
index_wave_norm = find_nearest(smooth_wavelength_BC,temp_norm_wave)
norm_band = photometric_flux[i]
#plt.text(5000,0.55,'normalized at z: rest frame '+"{0:.2f}".format(temp_norm_wave),fontsize=16)
#plt.axvline(temp_norm_wave,linewidth=2,color='b')
x = np.zeros(n)
y = np.zeros(n)
y_err = np.zeros(n)
sensitivity = np.zeros(n)
for i in range(0,n):
x[i] = OneD_1[i][0]#/(1+redshift_1)
print('wavelength range:',x[0],x[-1])
spectra_extinction = calzetti00(x, A_v, 4.05)
for i in range(n):
spectra_flux_correction = 10**(0.4*spectra_extinction[i])# from obs to obtain the true value: the absolute value
x[i] = x[i]/(1+redshift_1)
y[i] = (OneD_1[i][1]-OneD_1[i][3])/OneD_1[i][6]*spectra_flux_correction#/Flux_0 # (flux-contamination)/sensitivity
y_err[i] = OneD_1[i][2]/OneD_1[i][6]*spectra_flux_correction#/Flux_0
sensitivity[i] = OneD_1[i][6]
# end_index = np.argmin(np.diff(sensitivity[263:282],2)[1:],0)+263
# start_index = np.argmin(np.diff(sensitivity[40:50],2)[1:])+42
start_index = np.argmin(abs(x*(1+redshift_1)-11407.53))
end_index = np.argmin(abs(x*(1+redshift_1)-16428.61))
print('masking region:',x[start_index]*(1+redshift_1),x[end_index]*(1+redshift_1),start_index,end_index)
# plt.plot(x*(1+redshift_1),sensitivity,color='k')
# plt.plot(x[start_index:end_index]*(1+redshift_1),sensitivity[start_index:end_index],color='red')
print('before masking',len(x))
x = x[start_index:end_index]#[int(n*2/10):int(n*8/10)]
y = y[start_index:end_index]*1e-17/norm_band#[int(n*2/10):int(n*8/10)]*1e-17/norm_band
y_err = y_err[start_index:end_index]*1e-17/norm_band#[int(n*2/10):int(n*8/10)]*1e-17/norm_band
print('after masking',len(x))
# mask_non_neg_photo = np.where(photometric_flux>0)
# wave_list = wave_list[mask_non_neg_photo]
# band_list = band_list[mask_non_neg_photo]
# photometric_flux = photometric_flux[mask_non_neg_photo]
# photometric_flux_err_mod = photometric_flux_err_mod[mask_non_neg_photo]
return x, y, y_err, wave_list/(1+redshift_1), band_list/(1+redshift_1), photometric_flux/norm_band, photometric_flux_err/norm_band, photometric_flux_err_mod/norm_band
def binning_spec_keep_shape_x(wave,flux,flux_err,bin_size):
wave_binned = wave
flux_binned = np.zeros(len(wave))
flux_err_binned = np.zeros(len(wave))
for i in range((int(len(wave)/bin_size))+1):
flux_binned[bin_size*i:bin_size*(i+1)] = np.mean(flux[bin_size*i:bin_size*(i+1)])
flux_err_binned[bin_size*i:bin_size*(i+1)] = np.mean(flux_err[bin_size*i:bin_size*(i+1)])
return wave_binned, flux_binned, flux_err_binned
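# Added note: unlike binning_spec_keep_shape above, the range(... + 1) here
# also averages a partial tail bin when len(wave) is not a multiple of bin_size.
_, _fb2, _ = binning_spec_keep_shape_x(np.arange(5.0), np.array([2., 4., 6., 8., 10.]), np.ones(5), 2)
assert list(_fb2) == [3.0, 3.0, 7.0, 7.0, 10.0]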
def minimize_age_AV_vector_weighted(X):
galaxy_age= X[0]
intrinsic_Av = X[1]
# print('minimize process age av grid',X)
n=len(x)
age_index = find_nearest(df_Ma.Age.unique(), galaxy_age)
age_prior = df_Ma.Age.unique()[age_index]
AV_string = str(intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
# print(age_prior)
if age_prior < 1:
if galaxy_age < age_prior:
model1 = (M05_model_list[age_index]*(galaxy_age-df_Ma.Age.unique()[age_index-1]) \
+ M05_model_list[age_index-1]*(age_prior-galaxy_age))/(df_Ma.Age.unique()[age_index]-df_Ma.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model1 = (M05_model_list[age_index]*(df_Ma.Age.unique()[age_index+1]-galaxy_age) \
+ M05_model_list[age_index+1]*(galaxy_age-age_prior))/(df_Ma.Age.unique()[age_index+1]-df_Ma.Age.unique()[age_index])
elif galaxy_age == age_prior:
model1 = M05_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model1 = (3.0-galaxy_age)*M05_model_list[32] + (galaxy_age-2.0)*M05_model_list[33]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model1 = (4.0-galaxy_age)*M05_model_list[33] + (galaxy_age-3.0)*M05_model_list[34]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model1 = (5.0-galaxy_age)*M05_model_list[34] + (galaxy_age-4.0)*M05_model_list[35]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model1 = (6.0-galaxy_age)*M05_model_list[35] + (galaxy_age-5.0)*M05_model_list[36]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model1 = (7.0-galaxy_age)*M05_model_list[36] + (galaxy_age-6.0)*M05_model_list[37]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model1 = (8.0-galaxy_age)*M05_model_list[37] + (galaxy_age-7.0)*M05_model_list[38]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model1 = (9.0-galaxy_age)*M05_model_list[38] + (galaxy_age-8.0)*M05_model_list[39]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model1 = (10.0-galaxy_age)*M05_model_list[39] + (galaxy_age-9.0)*M05_model_list[40]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model1 = (11.0-galaxy_age)*M05_model_list[40] + (galaxy_age-10.0)*M05_model_list[41]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model1 = (12.0-galaxy_age)*M05_model_list[41] + (galaxy_age-11.0)*M05_model_list[42]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model1 = (13.0-galaxy_age)*M05_model_list[42] + (galaxy_age-12.0)*M05_model_list[43]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model1 = (14.0-galaxy_age)*M05_model_list[43] + (galaxy_age-13.0)*M05_model_list[44]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model1 = (15.0-galaxy_age)*M05_model_list[44] + (galaxy_age-14.0)*M05_model_list[45]
else:
model1 = M05_model_list[age_index]
spectra_extinction = calzetti00(model1[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M05_flux_center = model1[1,:]*spectra_flux_correction
F_M05_index=700#167
Flux_M05_norm_new = M05_flux_center[F_M05_index]
smooth_Flux_Ma_1Gyr_new = M05_flux_center/Flux_M05_norm_new
binning_index = find_nearest(model1[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index ==len(x):
binning_index = len(x)-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model1[0,binning_index]-model1[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model1[0,binning_index]-model1[0,binning_index-1]))
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model1[0,:], smooth_Flux_Ma_1Gyr_new,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
if np.isnan(x2):
print('spectra chi2 is nan,binning model',model_flux_binned)
print('spectra model wave', model1[0,:], model1[1,:], intrinsic_Av)
print('model flux before binning', spectra_extinction, spectra_flux_correction, M05_flux_center, Flux_M05_norm_new)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1, wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning model, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
else:
binning_size = int((model1[0,binning_index]-model1[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model1[0,:], smooth_Flux_Ma_1Gyr_new)
# print('binning data, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
x2_photo = chisquare_photo(model1[0,:], smooth_Flux_Ma_1Gyr_new, redshift_1, wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
x2_tot = 0.5*weight1*x2+0.5*weight2*x2_photo
else:
x2_tot = np.inf
except ValueError: # NaN value case
x2_tot = np.inf
print('ValueError', x2_tot)
# print('M05 x2 tot:',x2, x2_photo, x2_tot)
return x2_tot
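The long `elif` ladders in these functions all implement the same operation: piecewise-linear interpolation of a model spectrum between the two tabulated ages that bracket `galaxy_age`. A generic equivalent is sketched below; `interpolate_model_sketch` is a hypothetical helper (not part of this script), assuming a sorted 1-D age grid and models that support scalar multiplication (e.g. NumPy arrays).

```python
import numpy as np

def interpolate_model_sketch(age, age_grid, model_list):
    """Linearly interpolate between the two models bracketing `age`."""
    i = np.searchsorted(age_grid, age)       # first grid age >= `age`
    i = min(max(i, 1), len(age_grid) - 1)    # clamp so i-1 and i are valid
    t0, t1 = age_grid[i - 1], age_grid[i]
    w = (age - t0) / (t1 - t0)               # weight of the upper model
    return (1.0 - w) * model_list[i - 1] + w * model_list[i]
```

At a grid point the weight collapses to 0 or 1 and the tabulated model is returned exactly, which matches the `galaxy_age == age_prior` branches above.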
def lg_minimize_age_AV_vector_weighted(X):
"""Log-probability of an (age, Av) pair under the M05 models: -0.5*(0.5*chi2_spec + 0.5*chi2_photo)."""
galaxy_age = X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(df_Ma.Age.unique(), galaxy_age)
age_prior = df_Ma.Age.unique()[age_index]
AV_string = str(intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
if age_prior < 1:
if galaxy_age < age_prior:
model1 = (M05_model_list[age_index]*(galaxy_age-df_Ma.Age.unique()[age_index-1]) \
+ M05_model_list[age_index-1]*(age_prior-galaxy_age))/(df_Ma.Age.unique()[age_index]-df_Ma.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model1 = (M05_model_list[age_index]*(df_Ma.Age.unique()[age_index+1]-galaxy_age) \
+ M05_model_list[age_index+1]*(galaxy_age-age_prior))/(df_Ma.Age.unique()[age_index+1]-df_Ma.Age.unique()[age_index])
elif galaxy_age == age_prior:
model1 = M05_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model1 = (3.0-galaxy_age)*M05_model_list[32] + (galaxy_age-2.0)*M05_model_list[33]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model1 = (4.0-galaxy_age)*M05_model_list[33] + (galaxy_age-3.0)*M05_model_list[34]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model1 = (5.0-galaxy_age)*M05_model_list[34] + (galaxy_age-4.0)*M05_model_list[35]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model1 = (6.0-galaxy_age)*M05_model_list[35] + (galaxy_age-5.0)*M05_model_list[36]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model1 = (7.0-galaxy_age)*M05_model_list[36] + (galaxy_age-6.0)*M05_model_list[37]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model1 = (8.0-galaxy_age)*M05_model_list[37] + (galaxy_age-7.0)*M05_model_list[38]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model1 = (9.0-galaxy_age)*M05_model_list[38] + (galaxy_age-8.0)*M05_model_list[39]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model1 = (10.0-galaxy_age)*M05_model_list[39] + (galaxy_age-9.0)*M05_model_list[40]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model1 = (11.0-galaxy_age)*M05_model_list[40] + (galaxy_age-10.0)*M05_model_list[41]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model1 = (12.0-galaxy_age)*M05_model_list[41] + (galaxy_age-11.0)*M05_model_list[42]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model1 = (13.0-galaxy_age)*M05_model_list[42] + (galaxy_age-12.0)*M05_model_list[43]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model1 = (14.0-galaxy_age)*M05_model_list[43] + (galaxy_age-13.0)*M05_model_list[44]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model1 = (15.0-galaxy_age)*M05_model_list[44] + (galaxy_age-14.0)*M05_model_list[45]
else:
model1 = M05_model_list[age_index]
spectra_extinction = calzetti00(model1[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M05_flux_center = model1[1,:]*spectra_flux_correction
F_M05_index = 700  # 167
Flux_M05_norm_new = M05_flux_center[F_M05_index]
smooth_Flux_Ma_1Gyr_new = M05_flux_center/Flux_M05_norm_new
binning_index = find_nearest(model1[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index == len(model1[0,:]):  # clamp to the model wavelength grid
binning_index = len(model1[0,:])-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model1[0,binning_index]-model1[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model1[0,binning_index]-model1[0,binning_index-1]))
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model1[0,:], smooth_Flux_Ma_1Gyr_new,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning model, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
else:
binning_size = int((model1[0,binning_index]-model1[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model1[0,:], smooth_Flux_Ma_1Gyr_new)
x2_photo = chisquare_photo(model1[0,:], smooth_Flux_Ma_1Gyr_new,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning data, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# print('binning size, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, model1[0,:], smooth_Flux_Ma_1Gyr_new)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
lnprobval = -0.5*(0.5*x2+0.5*x2_photo)#np.log(np.exp(-0.5*(0.5*weight1*x2+0.5*weight2*x2_photo)))
if np.isnan(lnprobval):
lnprobval = -np.inf
else:
lnprobval = -np.inf
except ValueError: # NaN value case
lnprobval = -np.inf
print('ValueError', lnprobval)
if np.isinf(lnprobval):
print('lnprob:',lnprobval, x2, x2_photo,galaxy_age,intrinsic_Av)
return lnprobval
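The conversion at the end of the function above is the standard chi-square to log-likelihood mapping used by emcee-style samplers: halve the combined chi-square and negate it, returning `-inf` for invalid points so the walker rejects them. A minimal sketch of just that step (hypothetical helper, using the same equal 0.5/0.5 weights as the code above):

```python
import numpy as np

def lnprob_from_chi2_sketch(chi2_spec, chi2_photo):
    """Combine two chi^2 values into a log-probability; -inf marks invalid points."""
    total = 0.5 * chi2_spec + 0.5 * chi2_photo
    if np.isnan(total) or np.isinf(total):
        return -np.inf   # samplers treat -inf as zero probability
    return -0.5 * total
```

Returning `-inf` rather than raising keeps the MCMC loop running when a proposed (age, Av) falls outside the valid grid.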
def minimize_age_AV_vector_weighted_return_flux(X):
"""Return (total chi^2, model wavelengths, normalized model flux) for an (age, Av) pair (M05 models)."""
galaxy_age = X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(df_Ma.Age.unique(), galaxy_age)
age_prior = df_Ma.Age.unique()[age_index]
#print('galaxy age', galaxy_age, 'age prior:', age_prior)
AV_string = str(intrinsic_Av)
#print('intrinsic Av:', intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
if age_prior < 1:
if galaxy_age < age_prior:
model1 = (M05_model_list[age_index]*(galaxy_age-df_Ma.Age.unique()[age_index-1]) \
+ M05_model_list[age_index-1]*(age_prior-galaxy_age))/(df_Ma.Age.unique()[age_index]-df_Ma.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model1 = (M05_model_list[age_index]*(df_Ma.Age.unique()[age_index+1]-galaxy_age) \
+ M05_model_list[age_index+1]*(galaxy_age-age_prior))/(df_Ma.Age.unique()[age_index+1]-df_Ma.Age.unique()[age_index])
elif galaxy_age == age_prior:
model1 = M05_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model1 = (3.0-galaxy_age)*M05_model_list[32] + (galaxy_age-2.0)*M05_model_list[33]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model1 = (4.0-galaxy_age)*M05_model_list[33] + (galaxy_age-3.0)*M05_model_list[34]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model1 = (5.0-galaxy_age)*M05_model_list[34] + (galaxy_age-4.0)*M05_model_list[35]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model1 = (6.0-galaxy_age)*M05_model_list[35] + (galaxy_age-5.0)*M05_model_list[36]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model1 = (7.0-galaxy_age)*M05_model_list[36] + (galaxy_age-6.0)*M05_model_list[37]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model1 = (8.0-galaxy_age)*M05_model_list[37] + (galaxy_age-7.0)*M05_model_list[38]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model1 = (9.0-galaxy_age)*M05_model_list[38] + (galaxy_age-8.0)*M05_model_list[39]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model1 = (10.0-galaxy_age)*M05_model_list[39] + (galaxy_age-9.0)*M05_model_list[40]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model1 = (11.0-galaxy_age)*M05_model_list[40] + (galaxy_age-10.0)*M05_model_list[41]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model1 = (12.0-galaxy_age)*M05_model_list[41] + (galaxy_age-11.0)*M05_model_list[42]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model1 = (13.0-galaxy_age)*M05_model_list[42] + (galaxy_age-12.0)*M05_model_list[43]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model1 = (14.0-galaxy_age)*M05_model_list[43] + (galaxy_age-13.0)*M05_model_list[44]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model1 = (15.0-galaxy_age)*M05_model_list[44] + (galaxy_age-14.0)*M05_model_list[45]
else:
model1 = M05_model_list[age_index]
spectra_extinction = calzetti00(model1[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M05_flux_center = model1[1,:]*spectra_flux_correction
F_M05_index = 700  # 167
Flux_M05_norm_new = M05_flux_center[F_M05_index]
smooth_Flux_Ma_1Gyr_new = M05_flux_center/Flux_M05_norm_new
binning_index = find_nearest(model1[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index == len(model1[0,:]):  # clamp to the model wavelength grid
binning_index = len(model1[0,:])-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model1[0,binning_index]-model1[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model1[0,binning_index]-model1[0,binning_index-1]))
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model1[0,:], smooth_Flux_Ma_1Gyr_new,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning model, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
else:
binning_size = int((model1[0,binning_index]-model1[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model1[0,:], smooth_Flux_Ma_1Gyr_new)
x2_photo = chisquare_photo(model1[0,:], smooth_Flux_Ma_1Gyr_new, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning data, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, model1[0,:], smooth_Flux_Ma_1Gyr_new)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
x2_tot = 0.5*weight1*x2+0.5*weight2*x2_photo
else:
x2_tot = np.inf
except ValueError: # NaN value case
x2_tot = np.inf
print('ValueError', x2_tot)
# print('model wave range', model1[0,0], model1[0,-1])
return x2_tot, model1[0,:], smooth_Flux_Ma_1Gyr_new
def minimize_age_AV_vector_weighted_return_chi2_sep(X):
"""Return the spectroscopic and photometric chi^2 separately for an (age, Av) pair (M05 models)."""
galaxy_age = X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(df_Ma.Age.unique(), galaxy_age)
age_prior = df_Ma.Age.unique()[age_index]
#print('galaxy age', galaxy_age, 'age prior:', age_prior)
AV_string = str(intrinsic_Av)
#print('intrinsic Av:', intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
if age_prior < 1:
if galaxy_age < age_prior:
model1 = (M05_model_list[age_index]*(galaxy_age-df_Ma.Age.unique()[age_index-1]) \
+ M05_model_list[age_index-1]*(age_prior-galaxy_age))/(df_Ma.Age.unique()[age_index]-df_Ma.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model1 = (M05_model_list[age_index]*(df_Ma.Age.unique()[age_index+1]-galaxy_age) \
+ M05_model_list[age_index+1]*(galaxy_age-age_prior))/(df_Ma.Age.unique()[age_index+1]-df_Ma.Age.unique()[age_index])
elif galaxy_age == age_prior:
model1 = M05_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model1 = 2.*(1.5-galaxy_age)*M05_model_list[30] + 2.*(galaxy_age-1.0)*M05_model_list[31]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model1 = 2.*(2.0-galaxy_age)*M05_model_list[31] + 2.*(galaxy_age-1.5)*M05_model_list[32]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model1 = (3.0-galaxy_age)*M05_model_list[32] + (galaxy_age-2.0)*M05_model_list[33]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model1 = (4.0-galaxy_age)*M05_model_list[33] + (galaxy_age-3.0)*M05_model_list[34]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model1 = (5.0-galaxy_age)*M05_model_list[34] + (galaxy_age-4.0)*M05_model_list[35]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model1 = (6.0-galaxy_age)*M05_model_list[35] + (galaxy_age-5.0)*M05_model_list[36]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model1 = (7.0-galaxy_age)*M05_model_list[36] + (galaxy_age-6.0)*M05_model_list[37]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model1 = (8.0-galaxy_age)*M05_model_list[37] + (galaxy_age-7.0)*M05_model_list[38]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model1 = (9.0-galaxy_age)*M05_model_list[38] + (galaxy_age-8.0)*M05_model_list[39]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model1 = (10.0-galaxy_age)*M05_model_list[39] + (galaxy_age-9.0)*M05_model_list[40]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model1 = (11.0-galaxy_age)*M05_model_list[40] + (galaxy_age-10.0)*M05_model_list[41]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model1 = (12.0-galaxy_age)*M05_model_list[41] + (galaxy_age-11.0)*M05_model_list[42]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model1 = (13.0-galaxy_age)*M05_model_list[42] + (galaxy_age-12.0)*M05_model_list[43]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model1 = (14.0-galaxy_age)*M05_model_list[43] + (galaxy_age-13.0)*M05_model_list[44]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model1 = (15.0-galaxy_age)*M05_model_list[44] + (galaxy_age-14.0)*M05_model_list[45]
else:
model1 = M05_model_list[age_index]
spectra_extinction = calzetti00(model1[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M05_flux_center = model1[1,:]*spectra_flux_correction
F_M05_index = 700  # 167
Flux_M05_norm_new = M05_flux_center[F_M05_index]
smooth_Flux_Ma_1Gyr_new = M05_flux_center/Flux_M05_norm_new
binning_index = find_nearest(model1[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index == len(model1[0,:]):  # clamp to the model wavelength grid
binning_index = len(model1[0,:])-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model1[0,binning_index]-model1[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model1[0,binning_index]-model1[0,binning_index-1]))
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model1[0,:], smooth_Flux_Ma_1Gyr_new,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning model, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
else:
binning_size = int((model1[0,binning_index]-model1[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model1[0,:], smooth_Flux_Ma_1Gyr_new)
x2_photo = chisquare_photo(model1[0,:], smooth_Flux_Ma_1Gyr_new,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning data, model 1', n, (model1[0,binning_index]-model1[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, model1[0,:], smooth_Flux_Ma_1Gyr_new)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
pass
else:
x2 = np.inf
x2_photo = np.inf
except ValueError: # NaN value case
x2 = np.inf
x2_photo = np.inf
print('ValueError', x2)
return x2, x2_photo
def minimize_age_AV_vector_weighted_M13(X):
"""Return the weighted total chi^2 for an (age, Av) pair using the M13 models."""
galaxy_age = X[0]
intrinsic_Av = X[1]
# print('minimize process age av grid M13:',X)
n=len(x)
age_index = find_nearest(df_M13.Age.unique(), galaxy_age)
age_prior = df_M13.Age.unique()[age_index]
age_prior = float(age_prior)
AV_string = str(intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
if age_prior < 1e-5:
model2 = M13_model_list[0]
elif age_prior >= 1e-5 and age_prior < 1:
if galaxy_age < age_prior:
model2 = (M13_model_list[age_index]*(galaxy_age-df_M13.Age.unique()[age_index-1]) \
+ M13_model_list[age_index-1]*(age_prior-galaxy_age))/(df_M13.Age.unique()[age_index]-df_M13.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model2 = (M13_model_list[age_index]*(df_M13.Age.unique()[age_index+1]-galaxy_age) \
+ M13_model_list[age_index+1]*(galaxy_age-age_prior))/(df_M13.Age.unique()[age_index+1]-df_M13.Age.unique()[age_index])
elif galaxy_age == age_prior:
model2 = M13_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model2 = (3.0-galaxy_age)*M13_model_list[53] + (galaxy_age-2.0)*M13_model_list[54]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model2 = (4.0-galaxy_age)*M13_model_list[54] + (galaxy_age-3.0)*M13_model_list[55]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model2 = (5.0-galaxy_age)*M13_model_list[55] + (galaxy_age-4.0)*M13_model_list[56]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model2 = (6.0-galaxy_age)*M13_model_list[56] + (galaxy_age-5.0)*M13_model_list[57]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model2 = (7.0-galaxy_age)*M13_model_list[57] + (galaxy_age-6.0)*M13_model_list[58]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model2 = (8.0-galaxy_age)*M13_model_list[58] + (galaxy_age-7.0)*M13_model_list[59]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model2 = (9.0-galaxy_age)*M13_model_list[59] + (galaxy_age-8.0)*M13_model_list[60]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model2 = (10.0-galaxy_age)*M13_model_list[60] + (galaxy_age-9.0)*M13_model_list[61]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model2 = (11.0-galaxy_age)*M13_model_list[61] + (galaxy_age-10.0)*M13_model_list[62]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model2 = (12.0-galaxy_age)*M13_model_list[62] + (galaxy_age-11.0)*M13_model_list[63]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model2 = (13.0-galaxy_age)*M13_model_list[63] + (galaxy_age-12.0)*M13_model_list[64]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model2 = (14.0-galaxy_age)*M13_model_list[64] + (galaxy_age-13.0)*M13_model_list[65]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model2 = (15.0-galaxy_age)*M13_model_list[65] + (galaxy_age-14.0)*M13_model_list[66]
else:
model2 = M13_model_list[age_index]
spectra_extinction = calzetti00(model2[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M13_flux_center = model2[1,:]*spectra_flux_correction
F_M13_index = 326  # 126; alternative: np.where(abs(model2[0,:]-norm_wavelength)<10.5)[0][0]
Flux_M13_norm_new = M13_flux_center[F_M13_index]
smooth_Flux_M13_1Gyr_new = M13_flux_center/Flux_M13_norm_new
binning_index = find_nearest(model2[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index == len(model2[0,:]):  # clamp to the model wavelength grid
binning_index = len(model2[0,:])-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model2[0,binning_index]-model2[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model2[0,binning_index]-model2[0,binning_index-1]))
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model2[0,:], smooth_Flux_M13_1Gyr_new,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
if np.isnan(x2):
print('spectra chi2 is nan, binning model', model_flux_binned)
print('spectra model wave', model2[0,:],intrinsic_Av)
print('model flux before binning', spectra_extinction, spectra_flux_correction, M13_flux_center, Flux_M13_norm_new)
sys.exit()
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning model, model 2', n, (model2[0,binning_index]-model2[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]),binning_size)
else:
binning_size = int((model2[0,binning_index]-model2[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned = binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model2[0,:], smooth_Flux_M13_1Gyr_new)
if np.isnan(x2):
print('spectra chi2 is nan, binning data', x_binned)
print('spectra model wave', model2[0,:],intrinsic_Av)
print('model flux before binning', spectra_extinction, spectra_flux_correction, M13_flux_center, Flux_M13_norm_new)
sys.exit()
x2_photo = chisquare_photo(model2[0,:], smooth_Flux_M13_1Gyr_new,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
if np.isnan(x2_photo):
print('model 2 photo nan', x2_photo)
# print('binning data, model 2', n, (model2[0,binning_index]-model2[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]),binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, model2[0,:], smooth_Flux_M13_1Gyr_new)
# print(x2_photo)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
x2_tot = 0.5*weight1*x2+0.5*weight2*x2_photo
else:
x2_tot = np.inf
except ValueError: # NaN value case
x2_tot = np.inf
print('ValueError', x2_tot)
return x2_tot
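Every chi-square function above repeats the same resolution-matching step: compare the local wavelength spacing of the data with that of the model, then rebin whichever side is finer-sampled by an integer factor before computing chi-square. That decision logic, extracted as a sketch (`choose_binning_sketch` is a hypothetical helper; the actual rebinning is done by `binning_spec_keep_shape` / `binning_spec_keep_shape_x` elsewhere in this script):

```python
def choose_binning_sketch(data_dx, model_dx):
    """Decide which side to rebin and by what integer factor."""
    if data_dx > model_dx:
        # model is finer-sampled: degrade the model to the data spacing
        return 'model', int(data_dx / model_dx)
    # data is finer-sampled (or equally sampled): rebin the data instead
    return 'data', int(model_dx / data_dx)
```

Truncating to `int` mirrors the code above; the factor is at least 1, so the equal-spacing case degenerates to a no-op rebin of the data.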
def lg_minimize_age_AV_vector_weighted_M13(X):
"""Log-probability of an (age, Av) pair under the M13 models: -0.5*(0.5*chi2_spec + 0.5*chi2_photo)."""
tik = time.perf_counter()  # time.clock() was removed in Python 3.8
galaxy_age = X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(df_M13.Age.unique(), galaxy_age)
age_prior = df_M13.Age.unique()[age_index]
age_prior = float(age_prior)
AV_string = str(intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
model2 = np.zeros((2,762))
if age_prior < 1e-5:
model2 = M13_model_list[0]
elif age_prior >= 1e-5 and age_prior < 1:
if galaxy_age < age_prior:
model2 = (M13_model_list[age_index]*(galaxy_age-df_M13.Age.unique()[age_index-1]) \
+ M13_model_list[age_index-1]*(age_prior-galaxy_age))/(df_M13.Age.unique()[age_index]-df_M13.Age.unique()[age_index-1])
# print('age interval', (galaxy_age-df_M13.Age.unique()[age_index-1]), (age_prior-galaxy_age))
elif galaxy_age > age_prior:
model2 = (M13_model_list[age_index]*(df_M13.Age.unique()[age_index+1]-galaxy_age) \
+ M13_model_list[age_index+1]*(galaxy_age-age_prior))/(df_M13.Age.unique()[age_index+1]-df_M13.Age.unique()[age_index])
elif galaxy_age == age_prior:
model2 = M13_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model2 = (3.0-galaxy_age)*M13_model_list[53] + (galaxy_age-2.0)*M13_model_list[54]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model2 = (4.0-galaxy_age)*M13_model_list[54] + (galaxy_age-3.0)*M13_model_list[55]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model2 = (5.0-galaxy_age)*M13_model_list[55] + (galaxy_age-4.0)*M13_model_list[56]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model2 = (6.0-galaxy_age)*M13_model_list[56] + (galaxy_age-5.0)*M13_model_list[57]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model2 = (7.0-galaxy_age)*M13_model_list[57] + (galaxy_age-6.0)*M13_model_list[58]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model2 = (8.0-galaxy_age)*M13_model_list[58] + (galaxy_age-7.0)*M13_model_list[59]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model2 = (9.0-galaxy_age)*M13_model_list[59] + (galaxy_age-8.0)*M13_model_list[60]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model2 = (10.0-galaxy_age)*M13_model_list[60] + (galaxy_age-9.0)*M13_model_list[61]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model2 = (11.0-galaxy_age)*M13_model_list[61] + (galaxy_age-10.0)*M13_model_list[62]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model2 = (12.0-galaxy_age)*M13_model_list[62] + (galaxy_age-11.0)*M13_model_list[63]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model2 = (13.0-galaxy_age)*M13_model_list[63] + (galaxy_age-12.0)*M13_model_list[64]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model2 = (14.0-galaxy_age)*M13_model_list[64] + (galaxy_age-13.0)*M13_model_list[65]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model2 = (15.0-galaxy_age)*M13_model_list[65] + (galaxy_age-14.0)*M13_model_list[66]
else:
model2 = M13_model_list[age_index]
spectra_extinction = calzetti00(model2[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M13_flux_center = model2[1,:]*spectra_flux_correction
F_M13_index = 326  # 126; alternative: np.where(abs(model2[0,:]-norm_wavelength)<10.5)[0][0]
Flux_M13_norm_new = M13_flux_center[F_M13_index]
smooth_Flux_M13_1Gyr_new = M13_flux_center/Flux_M13_norm_new
binning_index = find_nearest(model2[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index == len(model2[0,:]):
binning_index = len(model2[0,:])-1
# print('binning index:',binning_index,len(model2[0,:]),len(x), model2[:,binning_index-2:binning_index])
# print('galaxy age:', galaxy_age, age_prior,age_index)
# print(x, n)
# print(len(model2),galaxy_age, age_prior, age_index, len(x), len(model2), np.median(x), np.min(model2[0,:]),np.max(model2[0,:]), binning_index)
if (x[int(n/2)]-x[int(n/2)-1]) > (model2[0,binning_index]-model2[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model2[0,binning_index]-model2[0,binning_index-1]))
# print('bin size', model2[0,binning_index],\
# model2[0,binning_index-1],\
# (model2[0,binning_index]-model2[0,binning_index-1]),\
# int((x[int(n/2)]-x[int(n/2)-1])),\
# binning_size)
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model2[0,:], smooth_Flux_M13_1Gyr_new, binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
else:
binning_size = int((model2[0,binning_index]-model2[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned = binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model2[0,:], smooth_Flux_M13_1Gyr_new)
x2_photo = chisquare_photo(model2[0,:], smooth_Flux_M13_1Gyr_new,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
tok = time.perf_counter()  # time.clock() was removed in Python 3.8
# print('time for lg_minimize',tok-tik)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
lnprobval = -0.5*(0.5*x2+0.5*x2_photo)#np.log(np.exp(-0.5*(0.5*weight1*x2+0.5*weight2*x2_photo)))
if np.isnan(lnprobval):
lnprobval = -np.inf
else:
lnprobval = -np.inf
except ValueError: # NaN value case
lnprobval = -np.inf
print('ValueError', lnprobval, x2, x2_photo)
# print('lnprob:',lnprobval)
return lnprobval
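Each function above also applies the same dust step: multiply the model flux by 10^(-0.4 A_lambda), with A_lambda from the Calzetti law via `calzetti00` (defined elsewhere in this script). A sketch of just the attenuation step, independent of any particular extinction curve (`apply_attenuation_sketch` is a hypothetical helper; `a_lambda_mag` is any per-wavelength extinction array in magnitudes):

```python
import numpy as np

def apply_attenuation_sketch(flux, a_lambda_mag):
    """Attenuate a flux array by per-wavelength extinction in magnitudes."""
    return flux * 10.0 ** (-0.4 * np.asarray(a_lambda_mag))
```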
def minimize_age_AV_vector_weighted_M13_return_flux(X):
"""Return (total chi^2, model wavelengths, normalized model flux) for an (age, Av) pair (M13 models)."""
galaxy_age = X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(df_M13.Age.unique(), galaxy_age)
age_prior = df_M13.Age.unique()[age_index]
age_prior = float(age_prior)
AV_string = str(intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
model2 = np.zeros((2,762))
if age_prior < 1e-5:
model2 = M13_model_list[0]
elif age_prior >= 1e-5 and age_prior < 1:
if galaxy_age < age_prior:
model2 = (M13_model_list[age_index]*(galaxy_age-df_M13.Age.unique()[age_index-1]) \
+ M13_model_list[age_index-1]*(age_prior-galaxy_age))/(df_M13.Age.unique()[age_index]-df_M13.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model2 = (M13_model_list[age_index]*(df_M13.Age.unique()[age_index+1]-galaxy_age) \
+ M13_model_list[age_index+1]*(galaxy_age-age_prior))/(df_M13.Age.unique()[age_index+1]-df_M13.Age.unique()[age_index])
elif galaxy_age == age_prior:
model2 = M13_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model2[0,:] = (3.0-galaxy_age)*M13_model_list[53][0,:] + (galaxy_age-2.0)*M13_model_list[54][0,:]
model2[1,:] = (3.0-galaxy_age)*M13_model_list[53][1,:] + (galaxy_age-2.0)*M13_model_list[54][1,:]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model2 = (4.0-galaxy_age)*M13_model_list[54] + (galaxy_age-3.0)*M13_model_list[55]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model2 = (5.0-galaxy_age)*M13_model_list[55] + (galaxy_age-4.0)*M13_model_list[56]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model2 = (6.0-galaxy_age)*M13_model_list[56] + (galaxy_age-5.0)*M13_model_list[57]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model2 = (7.0-galaxy_age)*M13_model_list[57] + (galaxy_age-6.0)*M13_model_list[58]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model2 = (8.0-galaxy_age)*M13_model_list[58] + (galaxy_age-7.0)*M13_model_list[59]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model2 = (9.0-galaxy_age)*M13_model_list[59] + (galaxy_age-8.0)*M13_model_list[60]
elif galaxy_age >= 9.0 and galaxy_age < 10.0:
model2 = (10.0-galaxy_age)*M13_model_list[60] + (galaxy_age-9.0)*M13_model_list[61]
elif galaxy_age >= 10.0 and galaxy_age < 11.0:
model2 = (11.0-galaxy_age)*M13_model_list[61] + (galaxy_age-10.0)*M13_model_list[62]
elif galaxy_age >= 11.0 and galaxy_age < 12.0:
model2 = (12.0-galaxy_age)*M13_model_list[62] + (galaxy_age-11.0)*M13_model_list[63]
elif galaxy_age >= 12.0 and galaxy_age < 13.0:
model2 = (13.0-galaxy_age)*M13_model_list[63] + (galaxy_age-12.0)*M13_model_list[64]
elif galaxy_age >= 13.0 and galaxy_age < 14.0:
model2 = (14.0-galaxy_age)*M13_model_list[64] + (galaxy_age-13.0)*M13_model_list[65]
elif galaxy_age >= 14.0 and galaxy_age < 15.0:
model2 = (15.0-galaxy_age)*M13_model_list[65] + (galaxy_age-14.0)*M13_model_list[66]
else:
model2 = M13_model_list[age_index]
spectra_extinction = calzetti00(model2[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M13_flux_center = model2[1,:]*spectra_flux_correction
    F_M13_index = 326  # was: np.where(abs(model2[0,:]-norm_wavelength)<10.5)[0][0]
Flux_M13_norm_new = M13_flux_center[F_M13_index]
smooth_Flux_M13_1Gyr_new = M13_flux_center/Flux_M13_norm_new
binning_index = find_nearest(model2[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index ==len(x):
binning_index = len(x)-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model2[0,binning_index]-model2[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model2[0,binning_index]-model2[0,binning_index-1]))
model_wave_binned,model_flux_binned = binning_spec_keep_shape(model2[0,:], smooth_Flux_M13_1Gyr_new, binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
smooth_Flux_M13_1Gyr_new = model_flux_binned
else:
binning_size = int((model2[0,binning_index]-model2[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned = binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model2[0,:], smooth_Flux_M13_1Gyr_new)
x2_photo = chisquare_photo(model2[0,:], smooth_Flux_M13_1Gyr_new, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
x2_tot = 0.5*weight1*x2+0.5*weight2*x2_photo
else:
x2_tot = np.inf
except ValueError: # NaN value case
x2_tot = np.inf
print('valueError', x2_tot)
# print('model wave range', model2[0,0], model2[0,-1], split_galaxy_age_string )
# print('model wave separately', M13_model_list[53][0,0],M13_model_list[53][0,-1],len(M13_model_list[53][0,:]),len(M13_model_list[54][0,:]),M13_model_list[54][0,0],M13_model_list[53][0,-1])
# print('model test', model_test[0,0], model_test[0,-1])
# print('age',galaxy_age,age_prior)
return x2_tot, model2[0,:], smooth_Flux_M13_1Gyr_new
def minimize_age_AV_vector_weighted_M13_return_chi2_sep(X):
galaxy_age= X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(df_M13.Age.unique(), galaxy_age)
age_prior = df_M13.Age.unique()[age_index]
age_prior = float(age_prior)
AV_string = str(intrinsic_Av)
galaxy_age_string = str(age_prior)
split_galaxy_age_string = str(galaxy_age_string).split('.')
if age_prior < 1e-5:
model2 = M13_model_list[0]
elif age_prior >= 1e-5 and age_prior < 1:
if galaxy_age < age_prior:
model2 = (M13_model_list[age_index]*(galaxy_age-df_M13.Age.unique()[age_index-1]) \
+ M13_model_list[age_index-1]*(age_prior-galaxy_age))/(df_M13.Age.unique()[age_index]-df_M13.Age.unique()[age_index-1])
elif galaxy_age > age_prior:
model2 = (M13_model_list[age_index]*(df_M13.Age.unique()[age_index+1]-galaxy_age) \
+ M13_model_list[age_index+1]*(galaxy_age-age_prior))/(df_M13.Age.unique()[age_index+1]-df_M13.Age.unique()[age_index])
elif galaxy_age == age_prior:
model2 = M13_model_list[age_index]
elif age_prior == 1.5:
if galaxy_age >=1.25 and galaxy_age <1.5:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >= 1.5 and galaxy_age <= 1.75:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif len(split_galaxy_age_string[1])==1:
if galaxy_age >= 1.0 and galaxy_age < 1.25:
model2 = 2.*(1.5-galaxy_age)*M13_model_list[51] + 2.*(galaxy_age-1.0)*M13_model_list[52]
elif galaxy_age >=1.75 and galaxy_age < 2.0:
model2 = 2.*(2.0-galaxy_age)*M13_model_list[52] + 2.*(galaxy_age-1.5)*M13_model_list[53]
elif galaxy_age >= 2.0 and galaxy_age < 3.0:
model2 = (3.0-galaxy_age)*M13_model_list[53] + (galaxy_age-2.0)*M13_model_list[54]
elif galaxy_age >= 3.0 and galaxy_age < 4.0:
model2 = (4.0-galaxy_age)*M13_model_list[54] + (galaxy_age-3.0)*M13_model_list[55]
elif galaxy_age >= 4.0 and galaxy_age < 5.0:
model2 = (5.0-galaxy_age)*M13_model_list[55] + (galaxy_age-4.0)*M13_model_list[56]
elif galaxy_age >= 5.0 and galaxy_age < 6.0:
model2 = (6.0-galaxy_age)*M13_model_list[56] + (galaxy_age-5.0)*M13_model_list[57]
elif galaxy_age >= 6.0 and galaxy_age < 7.0:
model2 = (7.0-galaxy_age)*M13_model_list[57] + (galaxy_age-6.0)*M13_model_list[58]
elif galaxy_age >= 7.0 and galaxy_age < 8.0:
model2 = (8.0-galaxy_age)*M13_model_list[58] + (galaxy_age-7.0)*M13_model_list[59]
elif galaxy_age >= 8.0 and galaxy_age < 9.0:
model2 = (9.0-galaxy_age)*M13_model_list[59] + (galaxy_age-8.0)*M13_model_list[60]
        elif galaxy_age >= 9.0 and galaxy_age < 10.0:
            model2 = (10.0-galaxy_age)*M13_model_list[60] + (galaxy_age-9.0)*M13_model_list[61]
        elif galaxy_age >= 10.0 and galaxy_age < 11.0:
            model2 = (11.0-galaxy_age)*M13_model_list[61] + (galaxy_age-10.0)*M13_model_list[62]
        elif galaxy_age >= 11.0 and galaxy_age < 12.0:
            model2 = (12.0-galaxy_age)*M13_model_list[62] + (galaxy_age-11.0)*M13_model_list[63]
        elif galaxy_age >= 12.0 and galaxy_age < 13.0:
            model2 = (13.0-galaxy_age)*M13_model_list[63] + (galaxy_age-12.0)*M13_model_list[64]
        elif galaxy_age >= 13.0 and galaxy_age < 14.0:
            model2 = (14.0-galaxy_age)*M13_model_list[64] + (galaxy_age-13.0)*M13_model_list[65]
        elif galaxy_age >= 14.0 and galaxy_age < 15.0:
            model2 = (15.0-galaxy_age)*M13_model_list[65] + (galaxy_age-14.0)*M13_model_list[66]
else:
model2 = M13_model_list[age_index]
spectra_extinction = calzetti00(model2[0,:], intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
M13_flux_center = model2[1,:]*spectra_flux_correction
    F_M13_index = 326  # was: np.where(abs(model2[0,:]-norm_wavelength)<10.5)[0][0]
Flux_M13_norm_new = M13_flux_center[F_M13_index]
smooth_Flux_M13_1Gyr_new = M13_flux_center/Flux_M13_norm_new
binning_index = find_nearest(model2[0,:],np.median(x))
if binning_index == 0:
binning_index = 1
elif binning_index ==len(x):
binning_index = len(x)-1
if (x[int(n/2)]-x[int(n/2)-1]) > (model2[0,binning_index]-model2[0,binning_index-1]):
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(model2[0,binning_index]-model2[0,binning_index-1]))
        model_wave_binned, model_flux_binned = binning_spec_keep_shape(model2[0,:], smooth_Flux_M13_1Gyr_new, binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('binning model, model 2', n, (model2[0,binning_index]-model2[0,binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]),binning_size)
else:
binning_size = int((model2[0,binning_index]-model2[0,binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned = binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, model2[0,:], smooth_Flux_M13_1Gyr_new)
x2_photo = chisquare_photo(model2[0,:], smooth_Flux_M13_1Gyr_new,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
try:
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*x2_photo):
pass
else:
x2 = np.inf
x2_photo = np.inf
except ValueError: # NaN value case
x2 = np.inf
x2_photo = np.inf
print('ValueError', x2)
return x2, x2_photo
def minimize_age_AV_vector_weighted_BC03(X):
galaxy_age= X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(BC03_age_list_num, galaxy_age)
age_prior = BC03_age_list_num[age_index]
AV_string = str(intrinsic_Av)
# print(galaxy_age,age_prior)
if galaxy_age == age_prior:
model3_flux = BC03_flux_array[age_index, :7125]
    elif galaxy_age < age_prior:
        age_interval = BC03_age_list_num[age_index] - BC03_age_list_num[age_index-1]
        model3_flux = (BC03_flux_array[age_index-1, :7125]*(BC03_age_list_num[age_index]-galaxy_age) \
                       + BC03_flux_array[age_index, :7125]*(galaxy_age-BC03_age_list_num[age_index-1]))*1./age_interval
    elif galaxy_age > age_prior:
        age_interval = BC03_age_list_num[age_index+1] - BC03_age_list_num[age_index]
        model3_flux = (BC03_flux_array[age_index, :7125]*(BC03_age_list_num[age_index+1]-galaxy_age) \
                       + BC03_flux_array[age_index+1, :7125]*(galaxy_age-BC03_age_list_num[age_index]))*1./age_interval
spectra_extinction = calzetti00(BC03_wave_list_num, intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
BC03_flux_attenuated = model3_flux*spectra_flux_correction
BC03_flux_norm = BC03_flux_attenuated[2556]
BC03_flux_attenuated = BC03_flux_attenuated/BC03_flux_norm
binning_index = find_nearest(BC03_wave_list_num, np.median(x))
if (x[int(n/2)]-x[int(n/2)-1]) < (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]):
binning_size = int((BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned = binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, BC03_wave_list_num, BC03_flux_attenuated)
x2_photo = chisquare_photo(BC03_wave_list_num, BC03_flux_attenuated, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin data', n, binning_size, x2)
else:
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]))
model_wave_binned, model_flux_binned = binning_spec_keep_shape(BC03_wave_list_num, BC03_flux_attenuated,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin model',binning_size, x2)
# print('binning size, model 3', n, (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, BC03_wave_list_num, BC03_flux_attenuated)
# print('BC x2_nu',x2,x2_photo,0.5*weight1*x2+0.5*weight2*x2_photo)
return 0.5*weight1*x2+0.5*weight2*x2_photo
def lg_minimize_age_AV_vector_weighted_BC03(X):
galaxy_age= X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(BC03_age_list_num, galaxy_age)
age_prior = BC03_age_list_num[age_index]
AV_string = str(intrinsic_Av)
if galaxy_age == age_prior:
model3_flux = BC03_flux_array[age_index, :7125]
    elif galaxy_age < age_prior and galaxy_age < 1.97500006e+01:
        age_interval = BC03_age_list_num[age_index] - BC03_age_list_num[age_index-1]
        model3_flux = (BC03_flux_array[age_index-1, :7125]*(BC03_age_list_num[age_index]-galaxy_age) \
                       + BC03_flux_array[age_index, :7125]*(galaxy_age-BC03_age_list_num[age_index-1]))*1./age_interval
    elif galaxy_age > age_prior and galaxy_age < 1.97500006e+01:
        age_interval = BC03_age_list_num[age_index+1] - BC03_age_list_num[age_index]
        model3_flux = (BC03_flux_array[age_index, :7125]*(BC03_age_list_num[age_index+1]-galaxy_age) \
                       + BC03_flux_array[age_index+1, :7125]*(galaxy_age-BC03_age_list_num[age_index]))*1./age_interval
else:
model3_flux = BC03_flux_array[-1, :7125]
spectra_extinction = calzetti00(BC03_wave_list_num, intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
BC03_flux_attenuated = model3_flux*spectra_flux_correction
BC03_flux_norm = BC03_flux_attenuated[2556]
BC03_flux_attenuated = BC03_flux_attenuated/BC03_flux_norm
binning_index = find_nearest(BC03_wave_list_num, np.median(x))
if (x[int(n/2)]-x[int(n/2)-1]) < (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]):
binning_size = int((BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, BC03_wave_list_num, BC03_flux_attenuated)
x2_photo = chisquare_photo(BC03_wave_list_num, BC03_flux_attenuated, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin data', binning_size, x2)
else:
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]))
model_wave_binned, model_flux_binned = binning_spec_keep_shape(BC03_wave_list_num, BC03_flux_attenuated,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin model',binning_size, x2)
# print('binning size, model 3', n, (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, BC03_wave_list_num, BC03_flux_attenuated)
if 0.01<galaxy_age<13 and 0.0<=intrinsic_Av<=4.0 and not np.isinf(0.5*x2+0.5*1e-3*x2_photo):
return np.log(np.exp(-0.5*(0.5*weight1*x2+0.5*weight2*x2_photo)))
else:
return -np.inf
def minimize_age_AV_vector_weighted_BC03_mod_no_weight_return_flux(X):
galaxy_age= X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(BC03_age_list_num, galaxy_age)
age_prior = BC03_age_list_num[age_index]
AV_string = str(intrinsic_Av)
if galaxy_age == age_prior:
model3_flux = BC03_flux_array[age_index, :7125]
elif galaxy_age < age_prior and galaxy_age <1.97500006e+01:
age_interval = BC03_age_list_num[age_index] - BC03_age_list_num[age_index-1]
model3_flux = (BC03_flux_array[age_index-1, :7125]*(BC03_age_list_num[age_index]-galaxy_age)\
+ BC03_flux_array[age_index, :7125]*(galaxy_age-BC03_age_list_num[age_index-1]))*1./age_interval
elif galaxy_age > age_prior and galaxy_age <1.97500006e+01:
age_interval = BC03_age_list_num[age_index+1] - BC03_age_list_num[age_index]
model3_flux = (BC03_flux_array[age_index, :7125]*(BC03_age_list_num[age_index+1]-galaxy_age)\
+ BC03_flux_array[age_index+1, :7125]*(galaxy_age-BC03_age_list_num[age_index]))*1./age_interval
spectra_extinction = calzetti00(BC03_wave_list_num, intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
BC03_flux_attenuated = model3_flux*spectra_flux_correction
BC03_flux_norm = BC03_flux_attenuated[2556]
BC03_flux_attenuated = BC03_flux_attenuated/BC03_flux_norm
binning_index = find_nearest(BC03_wave_list_num, np.median(x))
if (x[int(n/2)]-x[int(n/2)-1]) < (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]):
binning_size = int((BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, BC03_wave_list_num, BC03_flux_attenuated)
x2_photo = chisquare_photo(BC03_wave_list_num, BC03_flux_attenuated, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin data', binning_size, x2)
else:
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]))
model_wave_binned, model_flux_binned = binning_spec_keep_shape(BC03_wave_list_num, BC03_flux_attenuated,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin model',binning_size, x2)
# print('binning size, model 3', n, (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, BC03_wave_list_num, BC03_flux_attenuated)
return 0.5*weight1*x2+0.5*weight2*x2_photo,BC03_flux_attenuated
def minimize_age_AV_vector_weighted_BC03_return_chi2_sep(X):
galaxy_age= X[0]
intrinsic_Av = X[1]
n=len(x)
age_index = find_nearest(BC03_age_list_num, galaxy_age)
age_prior = BC03_age_list_num[age_index]
AV_string = str(intrinsic_Av)
if galaxy_age == age_prior:
model3_flux = BC03_flux_array[age_index, :7125]
    elif galaxy_age < age_prior:
        age_interval = BC03_age_list_num[age_index] - BC03_age_list_num[age_index-1]
        model3_flux = (BC03_flux_array[age_index-1, :7125]*(BC03_age_list_num[age_index]-galaxy_age) \
                       + BC03_flux_array[age_index, :7125]*(galaxy_age-BC03_age_list_num[age_index-1]))*1./age_interval
    elif galaxy_age > age_prior:
        age_interval = BC03_age_list_num[age_index+1] - BC03_age_list_num[age_index]
        model3_flux = (BC03_flux_array[age_index, :7125]*(BC03_age_list_num[age_index+1]-galaxy_age) \
                       + BC03_flux_array[age_index+1, :7125]*(galaxy_age-BC03_age_list_num[age_index]))*1./age_interval
spectra_extinction = calzetti00(BC03_wave_list_num, intrinsic_Av, 4.05)
spectra_flux_correction = 10**(-0.4*spectra_extinction)
BC03_flux_attenuated = model3_flux*spectra_flux_correction
BC03_flux_norm = BC03_flux_attenuated[2556]
BC03_flux_attenuated = BC03_flux_attenuated/BC03_flux_norm
binning_index = find_nearest(BC03_wave_list_num, np.median(x))
if (x[int(n/2)]-x[int(n/2)-1]) < (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]):
binning_size = int((BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1])/(x[int(n/2)]-x[int(n/2)-1]))
x_binned,y_binned,y_err_binned=binning_spec_keep_shape_x(x,y,y_err,binning_size)
x2 = reduced_chi_square(x_binned, y_binned, y_err_binned, BC03_wave_list_num, BC03_flux_attenuated)
x2_photo = chisquare_photo(BC03_wave_list_num, BC03_flux_attenuated, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin data', binning_size, x2)
else:
binning_size = int((x[int(n/2)]-x[int(n/2)-1])/(BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]))
model_wave_binned, model_flux_binned = binning_spec_keep_shape(BC03_wave_list_num, BC03_flux_attenuated,binning_size)
x2 = reduced_chi_square(x, y, y_err, model_wave_binned, model_flux_binned)
x2_photo = chisquare_photo(model_wave_binned, model_flux_binned,redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
# print('bin model',binning_size, x2)
# print('binning size, model 3', n, (BC03_wave_list_num[binning_index]-BC03_wave_list_num[binning_index-1]), (x[int(n/2)]-x[int(n/2)-1]), binning_size)
# x2_photo = reduced_chi_square(wave_list, photometric_flux, photometric_flux_err, BC03_wave_list_num, BC03_flux_attenuated)
return x2,x2_photo
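# The age handling in the model-builder functions above is plain linear
# interpolation between the two grid ages that bracket galaxy_age. A minimal,
# self-contained sketch of the same rule (toy grid and toy "models", not the
# M13/BC03 data):
_demo_age_grid = np.array([1.0, 2.0])
_demo_models = np.array([[10.0], [20.0]])            # one model per grid age
_demo_age = 1.25
_i = np.searchsorted(_demo_age_grid, _demo_age) - 1  # lower bracketing index
_w = (_demo_age - _demo_age_grid[_i]) / (_demo_age_grid[_i+1] - _demo_age_grid[_i])
_demo_interp = (1.0 - _w)*_demo_models[_i] + _w*_demo_models[_i+1]
assert np.isclose(_demo_interp[0], 12.5)             # 0.75*10 + 0.25*20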
def find_nearest(array, value):
    """Index of the element of the (sorted) array closest to value."""
    idx = np.searchsorted(array, value, side="left")
    if idx > 0 and (idx == len(array) or math.fabs(value - array[idx-1]) < math.fabs(value - array[idx])):
        return idx-1
    else:
        return idx
def all_same(items):
return all(x == items[0] for x in items)
def reduced_chi_square(data_wave, data, data_err, model_wave, model):
    """Chi-square per degree of freedom of the model against the spectrum.

    The model is interpolated onto the data wavelength grid; dof = n - 2
    accounts for the two fitted parameters (age and Av).
    """
    model_flux_interp = np.interp(data_wave, model_wave, model)
    chi_square = np.sum((data - model_flux_interp)**2 / data_err**2)
    dof = len(data_wave) - 2
    return chi_square / dof
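# Worked toy example of the reduced chi-square computed above: interpolate the
# model onto the data grid, sum ((data-model)/err)^2, divide by dof = n - 2
# (two fitted parameters, age and Av). Each toy point misses by exactly one
# sigma, so chi^2 = 3 and reduced chi^2 = 3/(3-2) = 3.
_demo_wave = np.array([1.0, 2.0, 3.0])
_demo_data = np.array([1.0, 2.0, 3.0])
_demo_err = np.array([0.1, 0.1, 0.1])
_demo_model = np.interp(_demo_wave, np.array([0.0, 4.0]), np.array([0.1, 4.1]))
_demo_x2 = np.sum((_demo_data - _demo_model)**2 / _demo_err**2) / (len(_demo_wave) - 2)
assert np.isclose(_demo_x2, 3.0)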
def chisquare_photo(model_wave, model_flux, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod):
"""
work in the observed frame
"""
    tik = time.perf_counter()  # time.clock() was removed in Python 3.8
    model_wave = model_wave*(1+redshift_1)  # shift the model to the observed frame
filter_array_index= np.arange(1,15)
# SNR Mask
mask_SNR3_photo = np.where(photometric_flux/photometric_flux_err>3.)
photometric_flux = photometric_flux[mask_SNR3_photo]
photometric_flux_err = photometric_flux_err[mask_SNR3_photo]
photometric_flux_err_mod = photometric_flux_err_mod[mask_SNR3_photo]
filter_array_index = filter_array_index[mask_SNR3_photo]
photometry_list = np.zeros(len(photometric_flux))
photometry_list_index = 0
# print('masked filter array index:',filter_array_index)
for i in filter_array_index:
sum_flambda_AB_K = 0
sum_transmission = 0
length = 0
filter_curve = filter_curve_list[i-1]
wave_inter = np.zeros(len(model_wave))
wave_inter[:-1] = np.diff(model_wave)
index = np.where(model_wave<filter_curve[-1,0])[0]#[0]
wave = model_wave[index]
flux = model_flux[index]
wave_inter = wave_inter[index]
index = np.where(wave>filter_curve[0,0])
wave = wave[index]
flux = flux[index]
wave_inter = wave_inter[index]
transmission = np.interp(wave, filter_curve[:,0], filter_curve[:,1])
n = len(flux)
if n!= 0 and n!=1:
for j in range(n):
try:
if all_same(wave_inter):
flambda_AB_K = flux[j]*transmission[j]
sum_flambda_AB_K += flambda_AB_K
sum_transmission += transmission[j]
length = length+1
else:
flambda_AB_K = flux[j]*transmission[j]*wave_inter[j]
sum_flambda_AB_K += flambda_AB_K
sum_transmission += transmission[j]*wave_inter[j]
length = length+1
                except Exception:
                    print('Error', n, i, j, wave[j], filter_curve[0,0], filter_curve[-1,0])
            elif n==1:
                flambda_AB_K = flux[0]*transmission[0]
                sum_flambda_AB_K += flambda_AB_K*wave_inter[0]
                sum_transmission += transmission[0]*wave_inter[0]
                length = length+1
if length == 0:
photometry_list[photometry_list_index]=0
else:
photometry_list[photometry_list_index] = sum_flambda_AB_K/sum_transmission
photometry_list_index += 1
chisquare_photo_list = ((photometric_flux-photometry_list)/photometric_flux_err_mod)**2
    tok = time.perf_counter()
dof = len(chisquare_photo_list)-2
reduced_chi_square_photo = np.sum(chisquare_photo_list)/dof
return reduced_chi_square_photo
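# The synthetic photometry inside chisquare_photo is a transmission-weighted
# mean of the model flux over each filter, with the local wavelength step as
# an extra weight on non-uniform grids. A compact, self-contained version of
# that kernel (np.gradient is used here for the step size, a slight
# simplification of the np.diff bookkeeping above):
def _bandpass_mean(model_wave, model_flux, filt_wave, filt_trans):
    trans = np.interp(model_wave, filt_wave, filt_trans, left=0.0, right=0.0)
    dlam = np.gradient(model_wave)
    denom = np.sum(trans * dlam)
    return np.sum(model_flux * trans * dlam) / denom if denom > 0 else 0.0
# For a flat spectrum the band-averaged flux equals the flux itself:
assert np.isclose(_bandpass_mean(np.linspace(1., 10., 50), np.full(50, 2.0),
                                 np.array([3., 5., 7.]), np.array([0., 1., 0.])), 2.0)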
columns = ['ID','region','field',
'M05_age_opt','M05_AV_opt','M13_age_opt','M13_AV_opt','BC_age_opt','BC_AV_opt',\
'x2_spectra_M05_opt','x2_photo_M05_opt','x2_spectra_M13_opt','x2_photo_M13_opt','x2_spectra_BC_opt','x2_photo_BC_opt',\
'M05_age_MCMC50','M05_age_std','M05_AV_MCMC50','M05_AV_std','M13_age_MCMC50','M13_age_std','M13_AV_MCMC50','M13_AV_std','BC_age_MCMC50','BC_age_std','BC_AV_MCMC50','BC_AV_std',\
'x2_spectra_M05_MCMC50','x2_photo_M05_MCMC50','x2_spectra_M13_MCMC50','x2_photo_M13_MCMC50','x2_spectra_BC_MCMC50','x2_photo_BC_MCMC50',\
'x2_M05_opt','x2_M13_opt','x2_BC_opt','x2_M05_MCMC50','x2_M13_MCMC50','x2_BC_MCMC50',\
'model','grism_index','grism_index_AV_corr','age_opt','age_opt_std','AV_opt','AV_opt_std']
chi_square_list = pd.DataFrame(index=df.index,columns=columns)#np.zeros([len(df), 31])
chi_square_list_final = pd.DataFrame(index=df.index,columns=columns)
weight1 = 1./2.575
weight2 = 1./1.153
nsteps=3000
current_dir = '/Volumes/My Passport/TPAGB/'
outcome_dir = 'outcome/'
date='20200328_photo'
plot_dir = 'plot/'+str(date)+'_uds/'
tik = time.time()
filter_fn_list = []
filter_curve_list=[]
filter_curve_fit_list=[]
path = "/Volumes/My Passport/TAPS/filter/uds/"
import glob, os
os.chdir(path)
for i in range(1,15):
for file in glob.glob("f"+str(i)+"_*"):
print(file)
fn = path+file
filter_fn_list.append(fn)
filter_curve = np.loadtxt(fn)
filter_curve_list.append(filter_curve)
filter_f = interpolate.interp1d(filter_curve[:,0], filter_curve[:,1])
filter_curve_fit_list.append(filter_f)
tok = time.time()
print('Time reading the filter curves and without generate filter functions:',tok-tik)
```
### 0 Initializing the parameters
```
##
row=5
[ID, OneD_1, redshift_1, mag_1] = read_spectra(row)
print(row, ID)
ID_no = ID-1
redshift = df_photometry.loc[ID_no].z_spec
region = df.region[row]
intrinsic_Av = df_fast.loc[ID-1].Av
print('intrinsic Av:'+str(intrinsic_Av))
galaxy_age = 10**(df_fast.loc[ID-1].lage)/1e9
print('Galaxy age:', galaxy_age)
A_v=0.0563
c=3e10
chi_square_list.loc[row,'ID'] = float(ID)
chi_square_list.loc[row,'region'] = region
chi_square_list.loc[row,'field'] = 'uds'
# Photometry
#U | CFHT | Almaini/Foucaud in prep.
# CFHT_megacam_u
u_wave = 3.86e3
u_band = 574.8/2.
u = df_photometry.loc[ID_no].f_u/((u_wave)**2)*c*1e8*3.63e-30
u_err = df_photometry.loc[ID_no].e_u/((u_wave)**2)*c*1e8*3.63e-30
# B,V,R,i,z | SXDS | Furusawa et al. (2008)
# B: 450, V: 548, Rc: 650, i’: 768, z’: 889
#use cosmos filter
B_wave = 4.50e3
B_band = 1030.5/2.
B = df_photometry.loc[ID_no].f_B/((B_wave)**2)*c*1e8*3.63e-30
B_err = df_photometry.loc[ID_no].e_B/((B_wave)**2)*c*1e8*3.63e-30
V_wave = 5.48e3
V_band = 1337.9/2.
V = df_photometry.loc[ID_no].f_V/((V_wave)**2)*c*1e8*3.63e-30
V_err = df_photometry.loc[ID_no].e_V/((V_wave)**2)*c*1e8*3.63e-30
R_wave = 6.5e3
R_band = 1143.2/2.
R = df_photometry.loc[ID_no].f_R/((R_wave)**2)*c*1e8*3.63e-30
R_err = df_photometry.loc[ID_no].e_R/((R_wave)**2)*c*1e8*3.63e-30
i_wave = 7.68e3
i_band = 1505.7/2.
i = df_photometry.loc[ID_no].f_i/((i_wave)**2)*c*1e8*3.63e-30
i_err = df_photometry.loc[ID_no].e_i/((i_wave)**2)*c*1e8*3.63e-30
z_wave = 8.89e3
z_band = 1403.5/2.
z = df_photometry.loc[ID_no].f_z/((z_wave)**2)*c*1e8*3.63e-30
z_err = df_photometry.loc[ID_no].e_z/((z_wave)**2)*c*1e8*3.63e-30
# CANDELS | Koekemoer et al. 2011, what wavelength this should take? : the same as above
F606W_wave = 5.98e3
F606W_band = 2324./2.
F606W = df_photometry.loc[ID_no].f_F606W/((F606W_wave)**2)*c*1e8*3.63e-30
F606W_err = df_photometry.loc[ID_no].e_F606W/((F606W_wave)**2)*c*1e8*3.63e-30
F814W_wave = 7.91e3
F814W_band = 1826./2.
F814W = df_photometry.loc[ID_no].f_F814W/((F814W_wave)**2)*c*1e8*3.63e-30
F814W_err = df_photometry.loc[ID_no].e_F814W/((F814W_wave)**2)*c*1e8*3.63e-30
# CANDELS | Grogin et al. 2011, Koekemoer et al. 2011|
F125W_wave = 1.250e4
F125W_band = 3005./2.
F125W = df_photometry.loc[ID_no].f_F125W/((F125W_wave)**2)*c*1e8*3.63e-30
F125W_err = df_photometry.loc[ID_no].e_F125W/((F125W_wave)**2)*c*1e8*3.63e-30
F160W_wave = 1.539e4
F160W_band = 2874./2.
F160W = df_photometry.loc[ID_no].f_F160W/((F160W_wave)**2)*c*1e8*3.63e-30 #http://www.stsci.edu/hst/wfc3/design/documents/handbooks/currentIHB/c07_ir06.html
F160W_err = df_photometry.loc[ID_no].e_F160W/((F160W_wave)**2)*c*1e8*3.63e-30
# 3D-HST | Brammer et al. 2012
F140W_wave = 13635
F140W_band = 3947./2.
F140W = df_photometry.loc[ID_no].f_F140W/((F140W_wave)**2)*c*1e8*3.63e-30 #http://svo2.cab.inta-csic.es/svo/theory/fps3/index.php?id=HST/WFC3_IR.F140W
F140W_err = df_photometry.loc[ID_no].e_F140W/((F140W_wave)**2)*c*1e8*3.63e-30
# J, H, Ks | UKIDSS /WFCAM? | Almaini et al .in prep.
# J: 1251, H:1636, K: 2206
J_wave = 1.251e4
J_band = 1590./2
J = df_photometry.loc[ID_no].f_J/J_wave**2*c*1e8*3.63e-30
J_err = df_photometry.loc[ID_no].e_J/J_wave**2*c*1e8*3.63e-30
H_wave = 1.636e4
H_band = 2920./2.
H = df_photometry.loc[ID_no].f_H/H_wave**2*c*1e8*3.63e-30
H_err = df_photometry.loc[ID_no].e_H/H_wave**2*c*1e8*3.63e-30
K_wave = 2.206e4
K_band = 3510./2.
K = df_photometry.loc[ID_no].f_K/K_wave**2*c*1e8*3.63e-30
K_err = df_photometry.loc[ID_no].e_K/K_wave**2*c*1e8*3.63e-30
wave_list = np.array([u_wave, B_wave, V_wave, R_wave, i_wave, z_wave, F606W_wave, F814W_wave, F125W_wave, F140W_wave, F160W_wave, J_wave, H_wave, K_wave])
band_list = np.array([u_band, B_band, V_band, R_band, i_band, z_band, F606W_band, F814W_band, F125W_band, F140W_band, F160W_band, J_band, H_band, K_band])
photometric_flux = np.array([u, B, V, R, i, z, F606W, F814W, F125W, F140W, F160W,J, H, K])
photometric_flux_err = np.array([u_err, B_err, V_err, R_err, i_err, z_err, F606W_err, F814W_err, F125W_err, F140W_err, F160W_err,J_err, H_err, K_err])
photometric_flux_err_mod = np.array([u_err+0.1*u, B_err+0.1*B, V_err+0.1*V, R_err+0.1*R, i_err+0.1*i, z_err+0.1*z,\
F606W_err+0.03*F606W, F814W_err+0.03*F814W, F125W_err+0.03*F125W, F140W_err+0.03*F140W, F160W_err+0.03*F160W,\
J_err+0.1*J, H_err+0.1*H, K_err+0.1*K])
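# The photometric conversions above all apply f_lambda = f_nu * c / lambda^2,
# with the catalog fluxes read as zero-point-25 AB units (1 catalog unit =
# 3.63e-30 erg s^-1 cm^-2 Hz^-1, i.e. m_AB = 25), lambda in Angstrom, and
# c*1e8 = 3e18 Angstrom/s, giving f_lambda in erg s^-1 cm^-2 A^-1. That
# reading of the constants is an interpretation, not stated in the catalog;
# a hypothetical helper capturing the repeated pattern:
def _catalog_fnu_to_flambda(f_catalog, wave_AA, c_cgs=3e10):
    """Convert a ZP=25 AB catalog flux to f_lambda (erg s^-1 cm^-2 A^-1)."""
    return f_catalog / wave_AA**2 * c_cgs * 1e8 * 3.63e-30
assert np.isclose(_catalog_fnu_to_flambda(1.0, 1e4), 1.089e-19)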
#-------------------------------------------------Initial Reduce the spectra ----------------------------------------------------------
print('-------------------------------------Initial fit ---------------------------------------------------------------------------------------')
[x, y, y_err, wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod ] = \
derive_1D_spectra_Av_corrected(OneD_1, redshift_1, row, wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod, A_v)
if redshift< 0.49:
try:
chi_square_list.loc[row,'grism_index'] = Lick_index_ratio(x,y)
except:
pass
# print(int(len(x)/2))
# print(x)
# print(x)
# print(wave_list)
# print(photometric_flux)
# print(x[int(len(x)/2)]-x[int(len(x)/2)-2])
# global x,y,y_err,wave_list,band_list,photometric_flux,photometric_flux_err
# Testing fitting a line
photo_list_for_scaling = []
photo_err_list_for_scaling = []
grism_flux_list_for_scaling = []
grism_flux_err_list_for_scaling = []
grism_wave_list_for_scaling =[]
for i in range(len(wave_list)):
if wave_list[i]-band_list[i] > x[0] and wave_list[i] + band_list[i] < x[-1]:
print(i)
scale_index = find_nearest(x, wave_list[i])
photo_list_for_scaling.append(photometric_flux[i])
photo_err_list_for_scaling.append(photometric_flux_err[i])
grism_flux_list_for_scaling.append(y[scale_index])
grism_flux_err_list_for_scaling.append(y_err[scale_index])
grism_wave_list_for_scaling.append(x[scale_index])
photo_array_for_scaling = np.array(photo_list_for_scaling)
photo_err_array_for_scaling = np.array(photo_err_list_for_scaling)
grism_flux_array_for_scaling = np.array(grism_flux_list_for_scaling)
grism_flux_err_array_for_scaling = np.array(grism_flux_err_list_for_scaling)
grism_wave_array_for_scaling = np.array(grism_wave_list_for_scaling)
print('Number of photometric points for rescaling:',len(photo_array_for_scaling))
print(np.mean(photo_array_for_scaling/grism_flux_array_for_scaling))
coeff = np.mean(photo_array_for_scaling/grism_flux_array_for_scaling)
y = y*coeff
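# The rescaling above multiplies the grism spectrum by the mean
# photometry-to-grism flux ratio at the overlapping wavelengths. Toy check of
# that coefficient (not the survey data):
_demo_photo = np.array([2.0, 4.0, 6.0])
_demo_grism = np.array([1.0, 2.0, 3.0])
assert np.isclose(np.mean(_demo_photo / _demo_grism), 2.0)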
# chisquare_photo(model_wave, model_flux, redshift_1, wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)  # model_wave/model_flux are only defined after the M05 fit below; calling here would raise NameError
## M05
print('____________________M05_________________________ Optimization__________________________')
X = np.array([galaxy_age, intrinsic_Av])
# X = np.array([3.43397335, 0.22173541])
bnds = ((0.01, 13.0), (0.0, 4.0))
sol = optimize.minimize(minimize_age_AV_vector_weighted, X, bounds = bnds, method='TNC')#, options = {'disp': True})
# print('Optimized weighted reduced chisqure result:', sol)
[age_prior_optimized, AV_prior_optimized] = sol.x
X = sol.x
x2_optimized = minimize_age_AV_vector_weighted(X)
x2_spec, x2_phot = minimize_age_AV_vector_weighted_return_chi2_sep(X)
chi_square_list.loc[row,'M05_age_opt'] = X[0]
chi_square_list.loc[row,'M05_AV_opt'] = X[1]
chi_square_list.loc[row,'x2_M05_opt'] = x2_optimized
chi_square_list.loc[row,'x2_spectra_M05_opt'] = x2_spec
chi_square_list.loc[row,'x2_photo_M05_opt'] = x2_phot
#--- Plot
X=sol.x
n = len(x)
print(X)
fig1 = plt.figure(figsize=(20,10))
frame1 = fig1.add_axes((.1,.35,.8,.6))
plt.step(x, y, color='r',lw=3)
plt.fill_between(x,(y+y_err),(y-y_err),alpha=0.1)
plt.errorbar(wave_list, photometric_flux, xerr=band_list, yerr=photometric_flux_err_mod, color='r', fmt='o', label='photometric data', markersize='14')
result = minimize_age_AV_vector_weighted_return_flux(X)
model_wave, model_flux = result[1], result[2]
model1_wave, model1_flux = model_wave, model_flux  # keep a copy for the later chi-square comparison
plt.plot(model_wave, model_flux, color='k',label='TP-AGB heavy',lw=0.5)
plt.xlim([2.5e3,1.9e4])
plt.ylim([0.05, 1.1])#plt.ylim([ymin,ymax])
plt.semilogx()
plt.ylabel(r'$\rm F_{\lambda}/F_{0.55\mu m}$',fontsize=24)
plt.tick_params(axis='both', which='major', labelsize=22)
plt.legend(loc='upper right',fontsize=24)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
frame2 = fig1.add_axes((.1,.2,.8,.15))
relative_spectra = np.zeros([1,n])
relative_spectra_err = np.zeros([1,n])
relative_sigma = np.zeros([1,n])
index0 = 0
for wave in x:
if y[index0]>0.25 and y[index0]<1.35:
index = find_nearest(model_wave, wave);#print index
relative_spectra[0, index0] = y[index0]/model_flux[index]
relative_spectra_err[0, index0] = y_err[index0]/model_flux[index]
relative_sigma[0, index0] = (y[index0]-model_flux[index])/y_err[index0]
index0 = index0+1
# plt.step(x[:index0], relative_spectra[0,:index0], color='r', linewidth=2)
# plt.fill_between(x[:index0],(relative_spectra[0,:index0]+relative_spectra_err[0,:index0]),\
# (relative_spectra[0,:index0]-relative_spectra_err[0,:index0]),alpha=0.1)
plt.step(x[:index0], relative_sigma[0,:index0], color='r', linewidth=2)
# print(relative_sigma[0,:index0])
index0 = 0
# relative_photo = np.zeros([1,(len(wave_list))])
for i in range(len(wave_list)):
try:
index = find_nearest(model_wave, wave_list[i])
# relative_photo[0, index0] = model_flux[index]/(photometric_flux[i])
except:
pass
plt.errorbar(wave_list[i], (photometric_flux[i]-model_flux[index])/photometric_flux_err_mod[i], xerr=band_list[i], fmt='o', color='r', markersize=12)
# plt.errorbar(wave_list[i], (photometric_flux[i])/model_flux[index], xerr=band_list[i], yerr=photometric_flux_err[i]/model_flux[index], fmt='o', color='r', markersize=16)
index0 = index0+1
plt.xlim([2.5e3,1.9e4])
plt.semilogx()
# plt.axhline(1.0, linestyle='--', linewidth=2, color='k')
# plt.ylim([0.6,1.5])
# plt.ylim([0.9,1.1])
# plt.ylim([0.7,1.45])
plt.axhline(3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(-3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(1.0, linestyle='--', linewidth=0.5, color='k')
plt.axhline(-1.0, linestyle='--', linewidth=0.5, color='k')
plt.ylim([-5,5])
plt.ylabel(r'$\rm (F_{\lambda,\rm data}-F_{\lambda,\rm model})/F_{\lambda,\rm err}$',fontsize=16)
plt.xlabel(r'Wavelength($\rm \AA$)', fontsize=20)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.tick_params(axis='both', which='minor', labelsize=20)
```
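The cells above lean on a `find_nearest` helper (defined elsewhere in the notebook) to match grism wavelengths to photometric bands and model grids. A minimal sketch of such a helper, assuming it simply returns the index of the closest grid value:

```python
import numpy as np

def find_nearest(array, value):
    """Return the index of the element of `array` closest to `value`."""
    array = np.asarray(array)
    return int(np.abs(array - value).argmin())

# Example: nearest wavelength grid point to 5450 Angstrom
grid = np.array([5000.0, 5400.0, 5600.0, 6000.0])
print(find_nearest(grid, 5450.0))  # -> 1
```

Note that for values outside the grid this silently returns the nearest edge index, which is why the rescaling loop above first checks that each band lies inside `[x[0], x[-1]]`.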
### 2 M13 model
```
##
print('____________________M13_________________________ Optimization__________________________')
bnds = ((0.0, 13.0), (0.0, 4.0))
X = np.array([galaxy_age, intrinsic_Av])
sol_M13 = optimize.minimize(minimize_age_AV_vector_weighted_M13, X, bounds = bnds, method='TNC')#, options = {'disp': True})
# print('Optimized M13 weighted reduced chisqure result:', sol_M13)
[age_prior_optimized_M13, AV_prior_optimized_M13] = sol_M13.x
X = sol_M13.x
x2_optimized = minimize_age_AV_vector_weighted_M13(X)
x2_spec, x2_phot = minimize_age_AV_vector_weighted_M13_return_chi2_sep(X)
chi_square_list.loc[row,'M13_age_opt'] = X[0]#"{0:.2f}".format(X[0])
chi_square_list.loc[row,'M13_AV_opt'] = X[1]#"{0:.2f}".format(X[1])
chi_square_list.loc[row,'x2_M13_opt'] = x2_optimized
chi_square_list.loc[row,'x2_spectra_M13_opt'] = x2_spec
chi_square_list.loc[row,'x2_photo_M13_opt'] = x2_phot
#--- Plot
X = sol_M13.x
n = len(x)
fig1 = plt.figure(figsize=(20,10))
frame1 = fig1.add_axes((.1,.35,.8,.6))
plt.step(x, y, color='r',lw=3)
plt.fill_between(x,(y+y_err),(y-y_err),alpha=0.1)
plt.errorbar(wave_list, photometric_flux, xerr=band_list, yerr=photometric_flux_err_mod, color='r', fmt='o', label='photometric data', markersize='14')
result = minimize_age_AV_vector_weighted_M13_return_flux(X)
model_wave, model_flux = result[1], result[2]
model2_wave, model2_flux = model_wave, model_flux  # keep a copy for the later chi-square comparison
plt.plot(model_wave, model_flux, color='g',label='TP-AGB mild',lw=0.5)
plt.xlim([2.5e3,1.9e4])
plt.ylim([0.05, 1.1])
plt.semilogx()
plt.ylabel(r'$\rm F_{\lambda}/F_{0.55\mu m}$',fontsize=24)
plt.tick_params(axis='both', which='major', labelsize=22)
plt.legend(loc='upper right',fontsize=24)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
frame2 = fig1.add_axes((.1,.2,.8,.15))
relative_spectra = np.zeros([1,n])
relative_spectra_err = np.zeros([1,n])
relative_sigma = np.zeros([1,n])
index0 = 0
for wave in x:
if y[index0]>0.25 and y[index0]<1.35:
index = find_nearest(model_wave, wave);#print index
relative_spectra[0, index0] = y[index0]/model_flux[index]
relative_spectra_err[0, index0] = y_err[index0]/model_flux[index]
relative_sigma[0, index0] = (y[index0]-model_flux[index])/y_err[index0]
index0 = index0+1
# plt.step(x[:index0], relative_spectra[0,:index0], color='r', linewidth=2)
# plt.fill_between(x[:index0],(relative_spectra[0,:index0]+relative_spectra_err[0,:index0]),\
# (relative_spectra[0,:index0]-relative_spectra_err[0,:index0]),alpha=0.1)
plt.step(x[:index0], relative_sigma[0,:index0], color='r', linewidth=2)
# print(relative_sigma[0,:index0])
index0 = 0
# relative_photo = np.zeros([1,(len(wave_list))])
for i in range(len(wave_list)):
try:
index = find_nearest(model_wave, wave_list[i])
# relative_photo[0, index0] = model_flux[index]/(photometric_flux[i])
except:
pass
plt.errorbar(wave_list[i], (photometric_flux[i]-model_flux[index])/photometric_flux_err_mod[i], xerr=band_list[i], fmt='o', color='r', markersize=12)
# plt.errorbar(wave_list[i], (photometric_flux[i])/model_flux[index], xerr=band_list[i], yerr=photometric_flux_err[i]/model_flux[index], fmt='o', color='r', markersize=16)
index0 = index0+1
plt.xlim([2.5e3,1.9e4])
plt.semilogx()
# plt.axhline(1.0, linestyle='--', linewidth=2, color='k')
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
# plt.ylim([0.75,1.5])
# plt.ylim([0.9,1.1])
# plt.ylim([0.7,1.45])
plt.axhline(3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(-3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(1.0, linestyle='--', linewidth=0.5, color='k')
plt.axhline(-1.0, linestyle='--', linewidth=0.5, color='k')
plt.ylim([-5,5])
plt.ylabel(r'$\rm (F_{\lambda,\rm data}-F_{\lambda,\rm model})/F_{\lambda,\rm err}$',fontsize=16)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.tick_params(axis='both', which='minor', labelsize=20)
plt.xlabel(r'Wavelength($\rm \AA$)', fontsize=20)
with Pool() as pool:
ndim, nwalkers = 2, 10
tik = time.perf_counter()  # time.clock() was removed in Python 3.8
p0 = [sol_M13.x + 4.*np.random.rand(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lg_minimize_age_AV_vector_weighted_M13, pool=pool)
sampler.run_mcmc(p0, nsteps, progress=True)
samples = sampler.chain[:, 500:, :].reshape((-1,ndim))
samples = samples[(samples[:,0] > age_prior_optimized_M13*0.1) & (samples[:,0] < age_prior_optimized_M13*2.0) & (samples[:,1] < AV_prior_optimized_M13*3.0)]
tok = time.perf_counter()
multi_time = tok-tik
print("Multiprocessing took {0:.1f} seconds".format(multi_time))
print('Time to run M13 MCMC:'+str(tok-tik))
if samples.size > 1e3:
value2=np.percentile(samples, 50, axis=0)
[std_age_prior_optimized_M13, std_AV_prior_optimized_M13] = np.std(samples, axis=0)
plt.figure(figsize=(32,32),dpi=100)
fig = corner.corner(samples,
labels=["age(Gyr)", r"$\rm A_V$"],
levels=(1-np.exp(-0.5),),
truths=[age_prior_optimized_M13, AV_prior_optimized_M13],
show_titles=True,title_kwargs={'fontsize':12},
quantiles=(0.16,0.5, 0.84))
axes = np.array(fig.axes).reshape((ndim, ndim))
# Loop over the histograms
for i in range(ndim):
ax = axes[i, i]
ax.axvline(X[i], color="g")
ax.axvline(value2[i],color='r')
# Loop over the histograms
for yi in range(ndim):
for xi in range(yi):
ax = axes[yi, xi]
ax.axvline(X[xi], color="g")
ax.axvline(value2[xi], color="r")
ax.axhline(X[yi], color="g")
ax.axhline(value2[yi], color="r")
ax.plot(X[xi], X[yi], "sg")
ax.plot(value2[xi],value2[yi],'sr')
plt.rcParams.update({'font.size': 12})
#--- Plot
X = np.percentile(samples, 50, axis=0)
x2_optimized = minimize_age_AV_vector_weighted_M13(X)
x2_spec, x2_phot = minimize_age_AV_vector_weighted_M13_return_chi2_sep(X)
chi_square_list.loc[row,'M13_age_MCMC50'] = X[0]#"{0:.2f}".format(X[0])
chi_square_list.loc[row,'M13_AV_MCMC50'] = X[1]#"{0:.2f}".format(X[1])
chi_square_list.loc[row,'x2_M13_MCMC50'] = x2_optimized
chi_square_list.loc[row,'x2_spectra_M13_MCMC50'] = x2_spec
chi_square_list.loc[row,'x2_photo_M13_MCMC50'] = x2_phot
chi_square_list.loc[row,'M13_age_std'] = np.std(samples, axis=0)[0]#"{0:.2f}".format(np.std(samples, axis=0)[0])
chi_square_list.loc[row,'M13_AV_std'] = np.std(samples, axis=0)[1]#"{0:.2f}".format(np.std(samples, axis=0)[1])
n = len(x)
fig1 = plt.figure(figsize=(20,10))
frame1 = fig1.add_axes((.1,.35,.8,.6))
plt.step(x, y, color='r',lw=3)
plt.fill_between(x,(y+y_err),(y-y_err),alpha=0.1)
plt.errorbar(wave_list, photometric_flux, xerr=band_list, yerr=photometric_flux_err_mod, color='r', fmt='o', label='photometric data', markersize='14')
result = minimize_age_AV_vector_weighted_M13_return_flux(X)
model_wave, model_flux = result[1], result[2]
plt.plot(model_wave, model_flux, color='g',label='TP-AGB mild',lw=0.5)
plt.xlim([2.5e3,1.9e4])
plt.ylim([0.05, 1.1])
plt.semilogx()
plt.ylabel(r'$\rm F_{\lambda}/F_{0.55\mu m}$',fontsize=24)
plt.tick_params(axis='both', which='major', labelsize=22)
plt.legend(loc='upper right',fontsize=24)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
frame2 = fig1.add_axes((.1,.2,.8,.15))
relative_spectra = np.zeros([1,n])
relative_spectra_err = np.zeros([1,n])
relative_sigma = np.zeros([1,n])
index0 = 0
for wave in x:
if y[index0]>0.25 and y[index0]<1.35:
index = find_nearest(model_wave, wave);#print index
relative_spectra[0, index0] = y[index0]/model_flux[index]
relative_spectra_err[0, index0] = y_err[index0]/model_flux[index]
relative_sigma[0, index0] = (y[index0]-model_flux[index])/y_err[index0]
index0 = index0+1
# plt.step(x[:index0], relative_spectra[0,:index0], color='r', linewidth=2)
# plt.fill_between(x[:index0],(relative_spectra[0,:index0]+relative_spectra_err[0,:index0]),\
# (relative_spectra[0,:index0]-relative_spectra_err[0,:index0]),alpha=0.1)
plt.step(x[:index0], relative_sigma[0,:index0], color='r', linewidth=2)
# print(relative_sigma[0,:index0])
index0 = 0
# relative_photo = np.zeros([1,(len(wave_list))])
for i in range(len(wave_list)):
try:
index = find_nearest(model_wave, wave_list[i])
# relative_photo[0, index0] = model_flux[index]/(photometric_flux[i])
except:
pass
plt.errorbar(wave_list[i], (photometric_flux[i]-model_flux[index])/photometric_flux_err_mod[i], xerr=band_list[i], fmt='o', color='r', markersize=12)
# plt.errorbar(wave_list[i], (photometric_flux[i])/model_flux[index], xerr=band_list[i], yerr=photometric_flux_err[i]/model_flux[index], fmt='o', color='r', markersize=16)
index0 = index0+1
plt.xlim([2.5e3,1.9e4])
plt.semilogx()
# plt.axhline(1.0, linestyle='--', linewidth=2, color='k')
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
# plt.ylim([0.75,1.5])
# plt.ylim([0.9,1.1])
# plt.ylim([0.7,1.45])
plt.axhline(3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(-3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(1.0, linestyle='--', linewidth=0.5, color='k')
plt.axhline(-1.0, linestyle='--', linewidth=0.5, color='k')
plt.ylim([-5,5])
plt.ylabel(r'$\rm (F_{\lambda,\rm data}-F_{\lambda,\rm model})/F_{\lambda,\rm err}$',fontsize=16)
plt.xlabel(r'Wavelength($\rm \AA$)', fontsize=20)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.tick_params(axis='both', which='minor', labelsize=20)
```
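The MCMC block above summarizes the flattened chain with `np.percentile(samples, 50, axis=0)` and `np.std(samples, axis=0)`. The same summary step can be sketched on a synthetic two-parameter chain; the Gaussian stand-in for the (age, A_V) samples is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic (age, A_V) chain, standing in for sampler.chain[...].reshape(-1, ndim)
samples = rng.normal(loc=[3.0, 0.5], scale=[0.2, 0.05], size=(5000, 2))

value2 = np.percentile(samples, 50, axis=0)        # MCMC50 point estimate per parameter
lo, hi = np.percentile(samples, [16, 84], axis=0)  # ~1-sigma credible interval
std = np.std(samples, axis=0)                      # what goes into the *_std columns
print(value2.shape, std.shape)  # -> (2,) (2,)
```

The 16th/84th percentiles are the same `quantiles=(0.16, 0.5, 0.84)` that `corner.corner` annotates on the marginal histograms.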
### 3 BC03 model
```
##
print('____________________BC03_________________________ Optimization__________________________')
X = np.array([galaxy_age, intrinsic_Av])
bnds = ((0.0, 13.0), (0.0, 4.0))
sol_BC03 = optimize.minimize(minimize_age_AV_vector_weighted_BC03, X, bounds = bnds, method='TNC', options = {'disp': True})
print('Optimized BC03 weighted reduced chi-square result:', sol_BC03)
[age_prior_optimized_BC03, AV_prior_optimized_BC03] = sol_BC03.x
X = sol_BC03.x
x2_optimized = minimize_age_AV_vector_weighted_BC03(X)
x2_spec, x2_phot = minimize_age_AV_vector_weighted_BC03_return_chi2_sep(X)
chi_square_list.loc[row,'BC_age_opt'] = X[0]#"{0:.2f}".format(X[0])
chi_square_list.loc[row,'BC_AV_opt'] = X[1]#"{0:.2f}".format(X[1])
chi_square_list.loc[row,'x2_BC_opt'] = x2_optimized
chi_square_list.loc[row,'x2_spectra_BC_opt'] = x2_spec
chi_square_list.loc[row,'x2_photo_BC_opt'] = x2_phot
#--- Plot
X = sol_BC03.x
n = len(x)
fig1 = plt.figure(figsize=(20,10))
frame1 = fig1.add_axes((.1,.35,.8,.6))
plt.step(x, y, color='r',lw=3)
plt.fill_between(x,(y+y_err),(y-y_err),alpha=0.1)
plt.errorbar(wave_list, photometric_flux, xerr=band_list, yerr=photometric_flux_err_mod, color='r', fmt='o', label='photometric data', markersize='14')
BC03_flux_attenuated = minimize_age_AV_vector_weighted_BC03_mod_no_weight_return_flux(X)[1]
plt.plot(BC03_wave_list_num, BC03_flux_attenuated, color='orange',label='TP-AGB light',lw=0.5)
model_wave = BC03_wave_list_num
model_flux = BC03_flux_attenuated
model3_wave = BC03_wave_list_num
model3_flux = BC03_flux_attenuated
plt.xlim([2.5e3,1.9e4])
plt.ylim([0.05, 1.1])
# plt.ylim([0.7,1.45])
plt.semilogx()
plt.ylabel(r'$\rm F_{\lambda}/F_{0.55\mu m}$',fontsize=24)
plt.tick_params(axis='both', which='major', labelsize=22)
plt.legend(loc='upper right',fontsize=24)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
frame2 = fig1.add_axes((.1,.2,.8,.15))
relative_spectra = np.zeros([1,n])
relative_spectra_err = np.zeros([1,n])
relative_sigma = np.zeros([1,n])
index0 = 0
for wave in x:
if y[index0]>0.25 and y[index0]<1.35:
index = find_nearest(model_wave, wave);#print index
relative_spectra[0, index0] = y[index0]/model_flux[index]
relative_spectra_err[0, index0] = y_err[index0]/model_flux[index]
relative_sigma[0, index0] = (y[index0]-model_flux[index])/y_err[index0]
index0 = index0+1
# plt.step(x[:index0], relative_spectra[0,:index0], color='r', linewidth=2)
# plt.fill_between(x[:index0],(relative_spectra[0,:index0]+relative_spectra_err[0,:index0]),\
# (relative_spectra[0,:index0]-relative_spectra_err[0,:index0]),alpha=0.1)
plt.step(x[:index0], relative_sigma[0,:index0], color='r', linewidth=2)
# print(relative_sigma[0,:index0])
index0 = 0
# relative_photo = np.zeros([1,(len(wave_list))])
for i in range(len(wave_list)):
try:
index = find_nearest(model_wave, wave_list[i])
# relative_photo[0, index0] = model_flux[index]/(photometric_flux[i])
except:
pass
plt.errorbar(wave_list[i], (photometric_flux[i]-model_flux[index])/photometric_flux_err_mod[i], xerr=band_list[i], fmt='o', color='r', markersize=12)
# plt.errorbar(wave_list[i], (photometric_flux[i])/model_flux[index], xerr=band_list[i], yerr=photometric_flux_err[i]/model_flux[index], fmt='o', color='r', markersize=16)
index0 = index0+1
plt.xlim([2.5e3,1.9e4])
plt.semilogx()
# plt.axhline(1.0, linestyle='--', linewidth=2, color='k')
# plt.ylim([0.6,1.5])
# plt.ylim([0.9,1.1])
# plt.ylim([0.7,1.45])
plt.axhline(3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(-3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(1.0, linestyle='--', linewidth=0.5, color='k')
plt.axhline(-1.0, linestyle='--', linewidth=0.5, color='k')
plt.ylim([-5,5])
plt.ylabel(r'$\rm (F_{\lambda,\rm data}-F_{\lambda,\rm model})/F_{\lambda,\rm err}$',fontsize=16)
plt.xlabel(r'Wavelength($\rm \AA$)', fontsize=20)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.tick_params(axis='both', which='minor', labelsize=20)
with Pool() as pool:
ndim, nwalkers = 2, 10
tik = time.perf_counter()  # time.clock() was removed in Python 3.8
p0 = [sol_BC03.x + 4.*np.random.rand(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lg_minimize_age_AV_vector_weighted_BC03, pool=pool)
sampler.run_mcmc(p0, nsteps, progress=True)
samples = sampler.chain[:, 500:, :].reshape((-1,ndim))
samples = samples[(samples[:,0] > age_prior_optimized_BC03*0.1) &
(samples[:,0] < age_prior_optimized_BC03*2.0) &
(samples[:,1] < AV_prior_optimized_BC03*3.0)]
tok = time.perf_counter()
multi_time = tok-tik
print("Multiprocessing took {0:.1f} seconds".format(multi_time))
print('Time to run BC03 MCMC:'+str(tok-tik))
if samples.size > 1e3:
value2=np.percentile(samples,50,axis=0)
[std_age_prior_optimized_BC03, std_AV_prior_optimized_BC03] = np.std(samples, axis=0)
plt.figure(figsize=(32,32),dpi=100)
fig = corner.corner(samples,
labels=["age(Gyr)", r"$\rm A_V$"],\
truths=[age_prior_optimized_BC03, AV_prior_optimized_BC03],\
levels = (1-np.exp(-0.5),),\
show_titles=True,title_kwargs={'fontsize':12},
quantiles=(0.16,0.5, 0.84))
axes = np.array(fig.axes).reshape((ndim, ndim))
for i in range(ndim):
ax = axes[i, i]
ax.axvline(X[i], color="g")
ax.axvline(value2[i],color='r')
# Loop over the histograms
for yi in range(ndim):
for xi in range(yi):
ax = axes[yi, xi]
ax.axvline(X[xi], color="g")
ax.axvline(value2[xi], color="r")
ax.axhline(X[yi], color="g")
ax.axhline(value2[yi], color="r")
ax.plot(X[xi], X[yi], "sg")
ax.plot(value2[xi],value2[yi],'sr')
plt.rcParams.update({'font.size': 12})
#--- Plot
X = np.percentile(samples, 50, axis=0)
x2_optimized = minimize_age_AV_vector_weighted_BC03(X)
x2_spec, x2_phot = minimize_age_AV_vector_weighted_BC03_return_chi2_sep(X)
chi_square_list.loc[row,'BC_age_MCMC50'] = X[0]#"{0:.2f}".format(X[0])
chi_square_list.loc[row,'BC_AV_MCMC50'] =X[1] #"{0:.2f}".format(X[1])
chi_square_list.loc[row,'x2_BC_MCMC50'] = x2_optimized
chi_square_list.loc[row,'x2_spectra_BC_MCMC50'] = x2_spec
chi_square_list.loc[row,'x2_photo_BC_MCMC50'] = x2_phot
chi_square_list.loc[row,'BC_age_std'] = np.std(samples, axis=0)[0] #"{0:.2f}".format(np.std(samples, axis=0)[0])
chi_square_list.loc[row,'BC_AV_std'] = np.std(samples, axis=0)[1]#"{0:.2f}".format(np.std(samples, axis=0)[1])
n = len(x)
fig1 = plt.figure(figsize=(20,10))
frame1 = fig1.add_axes((.1,.35,.8,.6))
plt.step(x, y, color='r',lw=3)
plt.fill_between(x,(y+y_err),(y-y_err),alpha=0.1)
plt.errorbar(wave_list, photometric_flux, xerr=band_list, yerr=photometric_flux_err_mod, color='r', fmt='o', label='photometric data', markersize='14')
BC03_flux_attenuated = minimize_age_AV_vector_weighted_BC03_mod_no_weight_return_flux(X)[1]
plt.plot(BC03_wave_list_num, BC03_flux_attenuated, color='orange',label='TP-AGB light',lw=0.5)
model_wave = BC03_wave_list_num
model_flux = BC03_flux_attenuated
plt.xlim([2.5e3,1.9e4])
plt.ylim([0.05, 1.1])
plt.semilogx()
plt.ylabel(r'$\rm F_{\lambda}/F_{0.55\mu m}$',fontsize=24)
plt.tick_params(axis='both', which='major', labelsize=22)
plt.legend(loc='upper right',fontsize=24)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
frame2 = fig1.add_axes((.1,.2,.8,.15))
relative_spectra = np.zeros([1,n])
relative_spectra_err = np.zeros([1,n])
relative_sigma = np.zeros([1,n])
index0 = 0
for wave in x:
if y[index0]>0.25 and y[index0]<1.35:
index = find_nearest(model_wave, wave);#print index
relative_spectra[0, index0] = y[index0]/model_flux[index]
relative_spectra_err[0, index0] = y_err[index0]/model_flux[index]
relative_sigma[0, index0] = (y[index0]-model_flux[index])/y_err[index0]
index0 = index0+1
# plt.step(x[:index0], relative_spectra[0,:index0], color='r', linewidth=2)
# plt.fill_between(x[:index0],(relative_spectra[0,:index0]+relative_spectra_err[0,:index0]),\
# (relative_spectra[0,:index0]-relative_spectra_err[0,:index0]),alpha=0.1)
plt.step(x[:index0], relative_sigma[0,:index0], color='r', linewidth=2)
# print(relative_sigma[0,:index0])
index0 = 0
# relative_photo = np.zeros([1,(len(wave_list))])
for i in range(len(wave_list)):
try:
index = find_nearest(model_wave, wave_list[i])
# relative_photo[0, index0] = model_flux[index]/(photometric_flux[i])
except:
pass
plt.errorbar(wave_list[i], (photometric_flux[i]-model_flux[index])/photometric_flux_err_mod[i], xerr=band_list[i], fmt='o', color='r', markersize=12)
# plt.errorbar(wave_list[i], (photometric_flux[i])/model_flux[index], xerr=band_list[i], yerr=photometric_flux_err[i]/model_flux[index], fmt='o', color='r', markersize=16)
index0 = index0+1
plt.xlim([2.5e3,1.9e4])
plt.semilogx()
# plt.axhline(1.0, linestyle='--', linewidth=2, color='k')
# plt.ylim([0.6,1.5])
# plt.ylim([0.9,1.1])
# plt.ylim([0.7,1.45])
plt.axhline(3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(-3.0, linestyle='--', linewidth=1, color='k')
plt.axhline(1.0, linestyle='--', linewidth=0.5, color='k')
plt.axhline(-1.0, linestyle='--', linewidth=0.5, color='k')
plt.ylim([-5,5])
plt.ylabel(r'$\rm (F_{\lambda,\rm data}-F_{\lambda,\rm model})/F_{\lambda,\rm err}$',fontsize=16)
plt.xlabel(r'Wavelength($\rm \AA$)', fontsize=20)
plt.axvspan(1.06e4,1.08e4, color='gray',alpha=0.1)
plt.axvspan(1.12e4,1.14e4, color='gray',alpha=0.1)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.tick_params(axis='both', which='minor', labelsize=20)
```
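Each lower panel above plots the residual in units of the measurement error, (F_data − F_model)/F_err, with the model sampled at the nearest grid wavelength. That computation can be sketched in vectorized form instead of the element-wise loop; the toy arrays below are assumptions for illustration:

```python
import numpy as np

def residual_sigma(data, err, model_wave, model_flux, wave):
    """(data - model)/err, sampling the model at the nearest grid wavelength."""
    idx = np.abs(model_wave[None, :] - wave[:, None]).argmin(axis=1)
    return (data - model_flux[idx]) / err

wave = np.array([1.0, 2.0, 3.0])
data = np.array([1.1, 0.9, 1.0])
err = np.array([0.1, 0.1, 0.1])
model_wave = np.linspace(0.0, 4.0, 41)
model_flux = np.ones_like(model_wave)   # flat unit model
print(residual_sigma(data, err, model_wave, model_flux, wave))
```

With a flat unit model the residuals are just (data − 1)/err, so the ±1σ and ±3σ dashed lines in the panels correspond directly to these values.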
### 4 Testing the filter sets
```
print(redshift_1)
filter_fn_list = []
path = "/Volumes/My Passport/TAPS/filter/uds/"
import glob, os
os.chdir(path)
for i in range(1,15):
for file in glob.glob("f"+str(i)+"_*"):
print(file)
fn = path+file
filter_fn_list.append(fn)
# filter_fn_list[0]
filter_curve = np.loadtxt(fn)
# print(filter_curve.size)#[:,0]
plt.plot(filter_curve[:,0],filter_curve[:,1])
model_wave = model3_wave*(1+redshift_1)
model_flux = model3_flux
def all_same(items):
    return all(x == items[0] for x in items)
plt.figure(figsize=(12,6),dpi=300)
plt.plot(model_wave, model_flux, color='orange',lw=0.5)
photometry_list = np.zeros(len(wave_list))
plt.xlim([3.e3,2.9e4])
plt.ylim([-0.05, 1.1])
plt.semilogx()
plt.step(x*(1+redshift_1), y, color='r',lw=3)
plt.fill_between(x*(1+redshift_1),(y+y_err),(y-y_err),alpha=0.1)
plt.ylabel(r'$\rm F_{\lambda}/F_{0.55\mu m}$',fontsize=24)
for i in range(1,15):
for file in glob.glob("f"+str(i)+"_*"):
print(i,file)
fn = path+file
filter_fn_list.append(fn)
filter_curve = np.loadtxt(fn)
print(filter_curve.size)#[:,0]
sum_flambda_AB_K = 0
sum_transmission = 0
length = 0
for j in range(len(filter_curve)-1):
wave_inter = np.zeros(len(model_wave))
wave_inter[:-1] = np.diff(model_wave)
index = np.where(model_wave<filter_curve[j+1,0])[0]#[0]
wave = model_wave[index]
flux = model_flux[index]
wave_inter = wave_inter[index]
index = np.where(wave>filter_curve[j,0])
wave = wave[index]
flux = flux[index]
wave_inter = wave_inter[index]
n = len(flux)
if n!= 0 and n!=1:
try:
transmission = np.interp(wave, filter_curve[j:j+2,0], filter_curve[j:j+2,1])
except:
print('Error')
# Checking if all spectral elements are the same
if all_same(wave_inter):
flambda_AB_K = np.sum(flux*transmission)
sum_flambda_AB_K += flambda_AB_K
sum_transmission += np.sum(transmission)
length = length+1
else:
flambda_AB_K = np.sum(flux*transmission*wave_inter)
sum_flambda_AB_K += flambda_AB_K
sum_transmission += np.sum(transmission*wave_inter)
length = length+1
elif n==1:
transmission = np.interp(wave, filter_curve[j:j+2,0], filter_curve[j:j+2,1])
flambda_AB_K = flux[0]*transmission[0]
sum_flambda_AB_K += flambda_AB_K*wave_inter
sum_transmission += np.sum(transmission)*wave_inter#/len(transmission)#np.trapz(transmission, x=wave)
length = length+1
if length == 0:
photometry_list[i-1]=0
else:
photometry_list[i-1] = sum_flambda_AB_K/sum_transmission
print(wave_list[i-1]*(1+redshift_1), photometry_list[i-1], sum_flambda_AB_K, sum_transmission,length)#, wave[int(n/2)])
plt.errorbar(wave_list[i-1]*(1+redshift_1),photometry_list[i-1],\
xerr=band_list[i-1], color='g', fmt='o', markersize=14)
plt.errorbar(wave_list[i-1]*(1+redshift_1), photometric_flux[i-1],\
xerr=band_list[i-1], yerr=photometric_flux_err_mod[i-1], color='r', fmt='o', label='photometric data', markersize='14')
chisquare_photo_list = ((photometric_flux-photometry_list)/photometric_flux_err_mod)**2
print(chisquare_photo_list)
print(np.sum(chisquare_photo_list))
chi2_M05 = chisquare_photo(model1_wave, model1_flux, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
chi2_M13 = chisquare_photo(model2_wave, model2_flux, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
chi2_BC = chisquare_photo(model3_wave, model3_flux, redshift_1,wave_list, band_list, photometric_flux, photometric_flux_err, photometric_flux_err_mod)
```
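The filter loop above is a discretized version of filter-weighted synthetic photometry, ∫ F_λ T(λ) dλ / ∫ T(λ) dλ. A compact sketch of the same quantity using `np.interp` and `np.trapz`; the triangular toy filter and flat spectrum are assumptions for illustration:

```python
import numpy as np

def synth_photometry(model_wave, model_flux, filt_wave, filt_trans):
    """Filter-weighted mean flux: integral(F_lambda * T) / integral(T)."""
    trans = np.interp(model_wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return np.trapz(model_flux * trans, model_wave) / np.trapz(trans, model_wave)

wave = np.linspace(4000.0, 9000.0, 501)
flux = np.ones_like(wave)                                        # flat spectrum
filt = np.array([[5000.0, 0.0], [5500.0, 1.0], [6000.0, 0.0]])   # toy triangular filter
print(synth_photometry(wave, flux, filt[:, 0], filt[:, 1]))  # -> 1.0
```

A flat spectrum must come out at exactly its own level whatever the filter shape, which is a useful sanity check on the segment-by-segment accumulation of `sum_flambda_AB_K` and `sum_transmission` in the loop above.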
The MLP and ResNet networks were trained with a batch size of 64 images and 10 training epochs on the CIFAR-10 dataset.
#CNN
```
'''For Pre-activation ResNet, see 'preact_resnet.py'.
Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(
in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
# last feature layer
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
# insert a new layer here
self.linear = nn.Linear(512*block.expansion, num_classes)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def ResNet18():
return ResNet(BasicBlock, [2, 2, 2, 2])
def test():
net = ResNet18()
y = net(torch.randn(1, 3, 32, 32))
print(y.size())
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
#from loader import MultiFolderLoader
from torch.utils.data import DataLoader
import torchvision.datasets as datasets
#from torch.utils.tensorboard import SummaryWriter
#from auto_augment import AutoAugment
#from auto_augment import Cutout
import os
import argparse
import numpy as np
from torch.autograd import Variable
#from PreResNet import *
batch_size = 64
lr = 0.02
device = 'cuda' if torch.cuda.is_available() else 'cpu'
best_acc = 0 # best test accuracy
start_epoch = 0 # start from epoch 0 or last checkpoint epoch
# Data
print('==> Preparing data..')
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
# AutoAugment(),
# Cutout(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), #transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), #
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)) #transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), #,
])
trainset = datasets.CIFAR10(root='~/data', train=True, download=True, transform=transform_train) #'./data', train=True, download=True, transform=transform_train)
testset = datasets.CIFAR10(root='~/data', train=False, download=True, transform=transform_test)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=3)
testloader = DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=1)
classes = ('plane', 'car', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck')
# Model
print('==> Building model..')
net = ResNet18()
net = net.to(device)
if device == 'cuda':
net = torch.nn.DataParallel(net)
cudnn.benchmark = True
Softmin = nn.Softmin()
Softmax = nn.Softmax()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=lr,momentum=0.9, weight_decay=5e-4)
# Training
def train(epoch):
print('\nEpoch: %d' % epoch)
net.train()
train_loss = 0
correct = 0
total = 0
listSum = 0
lr=0.02
if epoch >= 150:
lr /= 10
for param_group in optimizer.param_groups:
param_group['lr'] = lr
for batch_idx, data in enumerate(trainloader, 0):
# for batch_idx, (inputs, targets) in enumerate(trainloader):
# print(batch_idx)
inputs, targets = data
inputs, targets = inputs.to(device), targets.to(device).view(-1)
optimizer.zero_grad()
outputs = net(inputs)
#print(outputs.size)
batch_size = outputs.size()[0]
lam = 2
loss = criterion(outputs,targets)
#print(loss)
loss.backward()
optimizer.step()
#train_loss += loss.data[0]
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().float()# predicted.eq(targets).sum().item()
acc = 100.*correct/total
print('loss train')
print(train_loss/len(trainloader))  # average loss per batch, not per 64
print('')
print('---------------------------------------------------------------')
print('Acc train')
print(acc)
print('')
print('---------------------------------------------------------------')
def test(epoch):
global best_acc
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(testloader,0):
inputs, targets = inputs.to(device), targets.to(device).view(-1)
outputs = net(inputs)
batch_size = outputs.size()[0]
loss = criterion(outputs,targets)
test_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += targets.size(0)
correct += predicted.eq(targets).cpu().sum().float()
acc = 100.*correct/total
print('Loss test')
print(test_loss / len(testloader))  # average loss per batch
print('')
print('---------------------------------------------------------------')
print('Acc test')
print(acc)
print('')
print('---------------------------------------------------------------')
# Save checkpoint.
acc = 100.*correct/total
if acc > best_acc:
print('Saving..')
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
if not os.path.isdir('checkpoint'):
os.mkdir('checkpoint')
torch.save(state, './checkpoint/ckpt.pth')
best_acc = acc
print("best_acc: %f"%best_acc)
print('---------------------------------------------------------------')
for epoch in range(start_epoch, start_epoch+10):
train(epoch)
test(epoch)
```
# MLP
```
import time
import numpy as np
import torch
import torchvision
from torch.autograd import Variable
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
# Constants
IMAGE_WIDTH = 32
IMAGE_HEIGHT = 32
COLOR_CHANNELS = 3
EPOCHS = 10
LEARNING_RATES = [.00001, 0.0001, 0.001, 0.01, 0.1]
KEEP_RATES = [.5, .65, .8]
MOMENTUM_RATES = [.25, .5, .75]
WEIGHT_DECAY_RATES = [.0005, .005, .05]
BATCH_SIZE = 64
BATCH_IMAGE_COUNT = 10000
TRAIN_BATCHES = ["data_batch_1", "data_batch_2", "data_batch_3", "data_batch_4"]
TEST_BATCHES = ["data_batch_5"]
CLASSES = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
N_CLASSES = len(CLASSES)
PLOT = False
class Net(torch.nn.Module):
def __init__(self, n_hidden_nodes, n_hidden_layers, activation, keep_rate=0):
super(Net, self).__init__()
self.n_hidden_nodes = n_hidden_nodes
self.n_hidden_layers = n_hidden_layers
self.activation = activation
if not keep_rate:
keep_rate = 0.5
self.keep_rate = keep_rate
# Set up the network layers and add dropout
self.fc1 = torch.nn.Linear(IMAGE_WIDTH * IMAGE_HEIGHT * COLOR_CHANNELS,
n_hidden_nodes)
self.fc1_drop = torch.nn.Dropout(1 - keep_rate)
if n_hidden_layers == 2:
self.fc2 = torch.nn.Linear(n_hidden_nodes,
n_hidden_nodes)
self.fc2_drop = torch.nn.Dropout(1 - keep_rate)
self.out = torch.nn.Linear(n_hidden_nodes, N_CLASSES)
def forward(self, x):
x = x.view(-1, IMAGE_WIDTH * IMAGE_HEIGHT * COLOR_CHANNELS)
if self.activation == "sigmoid":
sigmoid = torch.nn.Sigmoid()
x = sigmoid(self.fc1(x))
elif self.activation == "relu":
x = torch.nn.functional.relu(self.fc1(x))
x = self.fc1_drop(x)
if self.n_hidden_layers == 2:
if self.activation == "sigmoid":
x = sigmoid(self.fc2(x))
elif self.activation == "relu":
x = torch.nn.functional.relu(self.fc2(x))
x = self.fc2_drop(x)
return torch.nn.functional.log_softmax(self.out(x), dim=1)
def train(epoch, model, train_loader, optimizer, log_interval=100, cuda=None):
model.train()
correct = 0
for batch_idx, (data, target) in enumerate(train_loader):
if cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
pred = output.data.max(1)[1] # get the index of the max log-probability
correct += pred.eq(target.data).cpu().sum()
accuracy = 100. * correct / len(train_loader.dataset)
loss = torch.nn.functional.nll_loss(output, target)
loss.backward()
optimizer.step()
print('Train Epoch: {}\tLoss: {:.6f}\tAccuracy: {:.2f}%'.format(
epoch, loss.item(), float(accuracy)))
def validate(loss_vector, accuracy_vector, model, validation_loader, cuda=None):
model.eval()
val_loss, correct = 0, 0
for data, target in validation_loader:
if cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)  # volatile=True is deprecated; prefer torch.no_grad() in modern PyTorch
output = model(data)
val_loss += torch.nn.functional.nll_loss(output, target).item()
pred = output.data.max(1)[1] # get the index of the max log-probability
correct += pred.eq(target.data).cpu().sum()
val_loss /= len(validation_loader)
loss_vector.append(val_loss)
accuracy = 100. * correct / len(validation_loader.dataset)
accuracy_vector.append(accuracy)
print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
val_loss, correct, len(validation_loader.dataset), accuracy))
def main():
cuda = torch.cuda.is_available()
print('Using PyTorch version:', torch.__version__, 'CUDA:', cuda)
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=0, pin_memory=False)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
validation_loader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=0, pin_memory=False)
hidden_nodes = 100
layers = 1
for i in range(1, len(LEARNING_RATES) + 1):
model = Net(hidden_nodes, layers, "sigmoid")
if cuda:
model.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATES[i-1])
loss_vector = []
acc_vector = []
for epoch in range(1, EPOCHS + 1):
train(epoch, model, train_loader, optimizer, cuda=cuda)
validate(loss_vector, acc_vector, model, validation_loader, cuda=cuda)
if epoch == 25:
break
# Plot training loss and validation accuracy versus epochs for each learning rate
if PLOT:
epochs = [i for i in range(1, len(acc_vector) + 1)]
plt.plot(epochs, acc_vector)
plt.xlabel("Epochs")
plt.ylabel("Accuracy with Sigmoid")
plt.show()
plt.plot(epochs, loss_vector)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
# Repeat using ReLU activation
hidden_nodes = 100
layers = 1
start_time = time.time()
for i in range(1, len(LEARNING_RATES)):
model = Net(hidden_nodes, layers, "relu")
if cuda:
model.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATES[i])
loss_vector = []
acc_vector = []
for epoch in range(1, EPOCHS + 1):
train(epoch, model, train_loader, optimizer, cuda=cuda)
validate(loss_vector, acc_vector, model, validation_loader, cuda=cuda)
if epoch == 25:
break
end_time = time.time() - start_time
print("Total time", end_time)
# Plot training loss and validation accuracy versus epochs for each learning rate
if PLOT:
epochs = [i for i in range(1, len(acc_vector) + 1)]
plt.plot(epochs, acc_vector)
plt.xlabel("Epochs")
plt.ylabel("Accuracy with RELU")
plt.show()
plt.plot(epochs, loss_vector)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
# Experimenting with different parameters:
hidden_nodes = 100
layers = 1
start_time = time.time()
for i in range(1, len(KEEP_RATES) + 1):
model = Net(hidden_nodes, layers, "relu", keep_rate=KEEP_RATES[i-1])
if cuda:
model.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATES[2], momentum=MOMENTUM_RATES[1], weight_decay=WEIGHT_DECAY_RATES[0])
loss_vector = []
acc_vector = []
for epoch in range(1, EPOCHS + 1):
train(epoch, model, train_loader, optimizer, cuda=cuda)
validate(loss_vector, acc_vector, model, validation_loader, cuda=cuda)
if epoch == 20:
break
end_time = time.time() - start_time
print("Total time", end_time)
if PLOT:
epochs = [i for i in range(1, len(acc_vector) + 1)]
plt.plot(epochs, acc_vector)
plt.xlabel("Epochs")
plt.ylabel("Accuracy with RELU")
plt.show()
plt.plot(epochs, loss_vector)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
# 2 layers, 50 hidden nodes:
hidden_nodes = 50
layers = 2
start_time = time.time()
model = Net(hidden_nodes, layers, "relu", keep_rate=.8)
if cuda:
model.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATES[2])
loss_vector = []
acc_vector = []
for epoch in range(1, EPOCHS + 1):
train(epoch, model, train_loader, optimizer, cuda=cuda)
validate(loss_vector, acc_vector, model, validation_loader, cuda=cuda)
if epoch == 30:
break
end_time = time.time() - start_time
print("Total time", end_time)
if PLOT:
epochs = [i for i in range(1, len(acc_vector) + 1)]
plt.plot(epochs, acc_vector)
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.show()
plt.plot(epochs, loss_vector)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
if __name__ == '__main__':
main()
```
# Results
Under the same conditions, our ResNet obtained better results than our MLP. The CNN does need many more parameters to tune its weights, but overall it achieves better results.
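A rough parameter count backs this up. The sketch below is plain arithmetic, not code from either notebook: it counts the weights and biases of the single-hidden-layer, 100-unit MLP used above (about 308 thousand parameters), while a CIFAR-10 ResNet-18 is commonly cited at roughly 11 million — more than 30 times as many.

```python
# Parameter count for the single-hidden-layer MLP used above:
# 3*32*32 inputs -> 100 hidden units -> 10 classes (weights + biases).
inputs, hidden, classes = 3 * 32 * 32, 100, 10
fc1 = inputs * hidden + hidden      # first fully connected layer
out = hidden * classes + classes    # output layer
print(fc1 + out)  # 308310
```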
```
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import numpy as np
import pandas as pd
import os
import _pickle as cPickle
import joblib
data = pd.read_csv('../data/usapl_data.csv')
data
data.dtypes
data.isnull().sum()
for column in data.columns:
if data[column].isnull().sum() > 30000:
print(f'{column}: {data[column].isnull().sum()} null values')
raw_lifters = data[data['Equipment'] == 'Raw'].copy()  # copy so later in-place drops don't raise SettingWithCopyWarning
raw_lifters
raw_lifters.isnull().sum()
# Can drop these columns, too many nan values
for column in raw_lifters.columns:
if raw_lifters[column].isnull().sum() > 30000:
print(column)
raw_lifters.drop(['Squat4Kg', 'Bench4Kg', 'Deadlift4Kg', 'MeetTown'], axis=1, inplace=True)
clean = raw_lifters.dropna()
# Completely clean dataset
clean.isnull().sum()
clean.sort_values(['AgeClass', 'Age'], inplace=True)
clean
# clean.to_csv('../data/no_null_vals.csv', index=False)
clean.dtypes
# Remove Mx
clean = clean[clean['Sex'] != 'Mx']
clean['AgeClass'].unique()
# 18-19 through 35-39 are the important categories
clean.groupby('AgeClass').agg('count')
ages = ['18-19', '20-23', '24-34', '35-39']
clean_ages = clean[clean.AgeClass.isin(ages)]
important_cols = ['Sex', 'Age', 'AgeClass', 'BodyweightKg', 'WeightClassKg', 'Best3SquatKg', 'Best3BenchKg',
'Best3DeadliftKg', 'TotalKg', 'Country', 'Date']
clean_ages = clean_ages[important_cols]
clean_ages['TotalKg'].hist()
sns.countplot(x='AgeClass', data=clean_ages)
# Most of the data is concentrated between the 18-19 and 35-39 age ranges
sns.countplot(x='Sex', data=clean_ages)
clean_ages['Sex'].value_counts()
clean = clean[important_cols]
clean
plt.figure(figsize=(9,6))
sns.histplot(data=clean['TotalKg'])
sns.displot(data=clean, x='Age', y='TotalKg')
sns.relplot(data=clean, x='Age', y='TotalKg', hue='Sex', col='Sex')
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(10,5))
ax1 = sns.histplot(data=clean['TotalKg'])
ax2 = sns.histplot(data=clean['BodyweightKg'])
clean_ages
clean_ages[clean_ages['Sex'] == 'M']['WeightClassKg'].unique()
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
# USAPL recently updated its weight classes; we can drop WeightClassKg
# because BodyweightKg already accounts for it.
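# Quick sanity check of the one-hot encoding that encode_and_bind relies on,
# using a tiny made-up frame (not part of the USAPL data): get_dummies
# expands the 'Sex' column into indicator columns Sex_F and Sex_M.
_demo = pd.DataFrame({'Sex': ['M', 'F'], 'Age': [20, 30]})
print(pd.get_dummies(_demo[['Sex']]).columns.tolist())  # ['Sex_F', 'Sex_M']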
# d_features = clean_ages[['Sex', 'Age', 'BodyweightKg', 'Best3SquatKg', 'Best3BenchKg']]
# d_target = clean_ages['Best3DeadliftKg']
# b_features = clean_ages[['Sex', 'Age', 'BodyweightKg', 'Best3SquatKg', 'Best3DeadliftKg']]
# b_target = clean_ages['Best3BenchKg']
# s_features = clean_ages[['Sex', 'Age', 'BodyweightKg', 'Best3DeadliftKg', 'Best3BenchKg']]
# s_target = clean_ages['Best3SquatKg']
d_features = clean[['Sex', 'Age', 'BodyweightKg', 'Best3BenchKg', 'Best3SquatKg']]
d_target = clean['Best3DeadliftKg']
b_features = clean[['Sex', 'Age', 'BodyweightKg', 'Best3SquatKg', 'Best3DeadliftKg']]
b_target = clean['Best3BenchKg']
s_features = clean[['Sex', 'Age', 'BodyweightKg', 'Best3BenchKg', 'Best3DeadliftKg']]
s_target = clean['Best3SquatKg']
# clean.to_csv('model_training_data', index=False)
b_features
def train_test_scaled_split(features, target):
features_to_encode = ['Sex']
for feature in features_to_encode:
features = encode_and_bind(features, feature)
# split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=3000)
# Create the scaler
scaler = MinMaxScaler()
# Fit the scaler to the training data(features only)
scaler.fit(X_train)
# Transform X_train and X_test based on the (same) scaler
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Replace any potential NaN with 0
X_train_scaled[np.isnan(X_train_scaled)] = 0
X_test_scaled[np.isnan(X_test_scaled)] = 0
return X_train_scaled, X_test_scaled, y_train, y_test
# Also prepare one lift (deadlift here, as an example) at top level, so that
# X_train_scaled and y_train exist for the model comparison further below
features, target = encode_and_bind(d_features, 'Sex'), d_target
# split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=3000)
# Create the scaler
scaler = MinMaxScaler()
# Fit the scaler to the training data(features only)
scaler.fit(X_train)
# Transform X_train and X_test based on the (same) scaler
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Replace any potential NaN with 0
X_train_scaled[np.isnan(X_train_scaled)] = 0
X_test_scaled[np.isnan(X_test_scaled)] = 0
pred_map = {'squat': [s_features, s_target], 'bench': [b_features, b_target], 'deadlift': [d_features, d_target]}
best_models = {}
scalers = {}
for lift, data in pred_map.items():
features, target = data
X_train_scaled, X_test_scaled, y_train, y_test = train_test_scaled_split(features, target)
model = RandomForestRegressor().fit(X=X_train_scaled, y=y_train)
# Prediction results
print(lift + ":\n")
print("\tR-squared value for training set: ", r2_score(y_train, model.predict(X_train_scaled)))
print("\tMean-squared-error value for training set: ", mean_squared_error(y_train, model.predict(X_train_scaled)))
print("\n")
print("\tR-squared value for testing set: ", r2_score(y_test, model.predict(X_test_scaled)))
print("\tMean-squared-error value for testing set: ", mean_squared_error(y_test, model.predict(X_test_scaled)))
print("\n")
best_models[lift] = model
best_models
for lift, data in pred_map.items():
features, target = data
features_to_encode = ['Sex']
for feature in features_to_encode:
features = encode_and_bind(features, feature)
# split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=3000)
# Create the scaler
scaler = MinMaxScaler()
# Fit the scaler to the training data(features only)
scaler.fit(X_train)
scalers[lift] = scaler
scalers
# Export models and scalers
for lift, model in best_models.items():
with open(f'{lift}_model.pickle', 'wb') as output_file:
cPickle.dump(model, output_file)
for lift, scaler in scalers.items():
scaler_filename = f'{lift}_scaler'
joblib.dump(scaler, scaler_filename)
best_param_models = {}
# scoring = {"Max Error": "max_error", "R-squared": "r2"}
# for lift, data in pred_map.items():
# features, target = data
# X_train_scaled, X_test_scaled, y_train, y_test = train_test_scaled_split(features, target)
# param_grid = {"max_depth":[3, 5, 7, 9, 11]}
# grid_search = GridSearchCV(RandomForestRegressor(), param_grid, scoring=scoring, refit='R-squared', return_train_score=True, cv=5)
# # Fit the grid search object on the training data (CV will be performed on this)
# grid_search.fit(X=X_train_scaled, y=y_train)
# # Grid search results
# print(lift + ":\n")
# print("\tBest estimator: ", grid_search.best_estimator_)
# print("\tBest parameters: ", grid_search.best_params_)
# print("\tBest cross-validation score: ", grid_search.best_score_)
# print("\n")
# model = grid_search.best_estimator_
# print("\tR-squared value for training set: ", r2_score(y_train, model.predict(X_train_scaled)))
# print("\tMean-squared-error value for training set: ", mean_squared_error(y_train, model.predict(X_train_scaled)))
# print("\n")
# # Add the best model to dictionary
# best_param_models[estimator_name] = grid_search.best_estimator_
# Parameter grids for Validation/Optimization
ridge_param_grid = {"alpha":[0.001, 0.01, 0.1, 1, 10, 100]}
lasso_param_grid = {"alpha":[0.001, 0.01, 0.1, 1, 10, 100]}
knn_param_grid = {"n_neighbors":[1, 5, 10], "metric": ['euclidean', 'manhattan', 'minkowski']}
tree_param_grid = {"max_depth":[3, 5, 7, 9, 11]}
forest_param_grid = {"max_depth":[3, 5, 7, 9, 11]}
# Dictionary of models with their parameter grids
estimators = {
'Ridge': [Ridge(), ridge_param_grid],
'Lasso': [Lasso(), lasso_param_grid],
'k-Nearest Neighbor': [KNeighborsRegressor(), knn_param_grid],
'Decision Tree': [DecisionTreeRegressor(), tree_param_grid],
'Random Forest': [RandomForestRegressor(), forest_param_grid]}
# Initial Model Performance Analysis
print("Initial Results for Models Trained on All Features\n")
for estimator_name, estimator_objects in estimators.items():
estimator_model = estimator_objects[0]
model = estimator_model.fit(X=X_train_scaled, y=y_train)
# Prediction results
print(estimator_name + ":\n")
print("\tR-squared value for training set: ", r2_score(y_train, model.predict(X_train_scaled)))
print("\tMean-squared-error value for training set: ", mean_squared_error(y_train, model.predict(X_train_scaled)))
print("\n")
print("\tR-squared value for testing set: ", r2_score(y_test, model.predict(X_test_scaled)))
print("\tMean-squared-error value for testing set: ", mean_squared_error(y_test, model.predict(X_test_scaled)))
print("\n")
'''
FOR DEADLIFT RANDOM FOREST IS BEST
Initial Results for Models Trained on All Features
Ridge:
R-squared value for training set: 0.900791795329309
Mean-squared-error value for training set: 310.66843606211154
R-squared value for testing set: 0.8982009899083301
Mean-squared-error value for testing set: 318.8348637482929
Lasso:
R-squared value for training set: 0.8604311108121666
Mean-squared-error value for training set: 437.057082837424
R-squared value for testing set: 0.8591595054946677
Mean-squared-error value for testing set: 441.11293258562176
k-Nearest Neighbor:
R-squared value for training set: 0.9268768144787918
Mean-squared-error value for training set: 228.9837394110679
R-squared value for testing set: 0.885514033615932
Mean-squared-error value for testing set: 358.57045623808995
Decision Tree:
R-squared value for training set: 0.9997456336932952
Mean-squared-error value for training set: 0.796542815719662
R-squared value for testing set: 0.8247594118265117
Mean-squared-error value for testing set: 548.8541490054909
Random Forest:
R-squared value for training set: 0.9867339096710506
Mean-squared-error value for training set: 41.54248682186873
R-squared value for testing set: 0.9029074378444212
Mean-squared-error value for testing set: 304.09425197720844
WITHOUT CLASSES
Ridge:
R-squared value for training set: 0.898504968343617
Mean-squared-error value for training set: 317.8295873554723
R-squared value for testing set: 0.8962298550449012
Mean-squared-error value for testing set: 325.0084651914204
Lasso:
R-squared value for training set: 0.8604159502657829
Mean-squared-error value for training set: 437.10455777410425
R-squared value for testing set: 0.8591452403723664
Mean-squared-error value for testing set: 441.1576109996967
k-Nearest Neighbor:
R-squared value for training set: 0.9281983657588784
Mean-squared-error value for training set: 224.84532897693873
R-squared value for testing set: 0.8878280908548475
Mean-squared-error value for testing set: 351.32282068828425
Decision Tree:
R-squared value for training set: 0.9997418966085204
Mean-squared-error value for training set: 0.808245419211549
R-squared value for testing set: 0.8250143648717643
Mean-squared-error value for testing set: 548.0556351557776
Random Forest:
R-squared value for training set: 0.986720766316167
Mean-squared-error value for training set: 41.58364496518781
R-squared value for testing set: 0.9025839068349805
Mean-squared-error value for testing set: 305.1075522560664
'''
'''
FOR BENCH
Ridge:
R-squared value for training set: 0.892988866535197
Mean-squared-error value for training set: 176.53306406644927
R-squared value for testing set: 0.8964825991149635
Mean-squared-error value for testing set: 170.39844784776005
Lasso:
R-squared value for training set: 0.8145834294703707
Mean-squared-error value for training set: 305.8761669415857
R-squared value for testing set: 0.8166275930373708
Mean-squared-error value for testing set: 301.8465809360993
k-Nearest Neighbor:
R-squared value for training set: 0.9226421141450587
Mean-squared-error value for training set: 127.61498899707571
R-squared value for testing set: 0.8842375523053484
Mean-squared-error value for testing set: 190.55483655480288
Decision Tree:
R-squared value for training set: 0.9997902915333114
Mean-squared-error value for training set: 0.3459497809858721
R-squared value for testing set: 0.8164845794658392
Mean-squared-error value for testing set: 302.0819934406811
Random Forest:
R-squared value for training set: 0.9858845429595768
Mean-squared-error value for training set: 23.285847008281845
R-squared value for testing set: 0.9021000538156578
Mean-squared-error value for testing set: 161.1516395462605
WITHOUT CLASSES
Ridge:
R-squared value for training set: 0.8904680622199737
Mean-squared-error value for training set: 180.69155949837258
R-squared value for testing set: 0.8941704889775871
Mean-squared-error value for testing set: 174.20437781985777
Lasso:
R-squared value for training set: 0.8139401495175196
Mean-squared-error value for training set: 306.9373666266323
R-squared value for testing set: 0.8160193189895053
Mean-squared-error value for testing set: 302.8478517633823
k-Nearest Neighbor:
R-squared value for training set: 0.9230868882485617
Mean-squared-error value for training set: 126.88125847047938
R-squared value for testing set: 0.8861075022450645
Mean-squared-error value for testing set: 187.47673988162134
Decision Tree:
R-squared value for training set: 0.9997902564728233
Mean-squared-error value for training set: 0.346007619223888
R-squared value for testing set: 0.8173917138206432
Mean-squared-error value for testing set: 300.58877312480723
Random Forest:
R-squared value for training set: 0.9858982376597523
Mean-squared-error value for training set: 23.26325527128061
R-squared value for testing set: 0.9018881160223156
Mean-squared-error value for testing set: 161.50050718317019
'''
'''
FOR SQUAT
Ridge:
R-squared value for training set: 0.9105562760189563
Mean-squared-error value for training set: 268.75260056053975
R-squared value for testing set: 0.9104550067401644
Mean-squared-error value for testing set: 267.1095685007973
Lasso:
R-squared value for training set: 0.8794849681241365
Mean-squared-error value for training set: 362.11292175333625
R-squared value for testing set: 0.8809785596583386
Mean-squared-error value for testing set: 355.036774415219
k-Nearest Neighbor:
R-squared value for training set: 0.9340890035860671
Mean-squared-error value for training set: 198.04353959518784
R-squared value for testing set: 0.8993721303492088
Mean-squared-error value for testing set: 300.1693993496946
Decision Tree:
R-squared value for training set: 0.9998465488672468
Mean-squared-error value for training set: 0.46107640816830153
R-squared value for testing set: 0.842712904857357
Mean-squared-error value for testing set: 469.1818781244988
Random Forest:
R-squared value for training set: 0.98807146372382
Mean-squared-error value for training set: 35.84181206256046
R-squared value for testing set: 0.9142998788291369
Mean-squared-error value for testing set: 255.6404501588471
WITHOUT CLASSES
Ridge:
R-squared value for training set: 0.9100313257045812
Mean-squared-error value for training set: 270.3299248922422
R-squared value for testing set: 0.9101631812638482
Mean-squared-error value for testing set: 267.980073642611
Lasso:
R-squared value for training set: 0.879482610475747
Mean-squared-error value for training set: 362.12000580694723
R-squared value for testing set: 0.8809764546205674
Mean-squared-error value for testing set: 355.04305366892487
k-Nearest Neighbor:
R-squared value for training set: 0.9353216809041093
Mean-squared-error value for training set: 194.33969968188046
R-squared value for testing set: 0.9005989534753995
Mean-squared-error value for testing set: 296.50982907184897
Decision Tree:
R-squared value for training set: 0.9998460483885462
Mean-squared-error value for training set: 0.4625802023567154
R-squared value for testing set: 0.8425767179090207
Mean-squared-error value for testing set: 469.5881190061077
Random Forest:
R-squared value for training set: 0.9879984822578319
Mean-squared-error value for training set: 36.06110032454263
R-squared value for testing set: 0.9139984869229377
Mean-squared-error value for testing set: 256.5394916248604
'''
```
# Modeling and Simulation in Python
Chapter 6
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
```
### Code from the previous chapter
```
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
p_0 = get_first_value(census)
p_end = get_last_value(census)
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
```
### System objects
We can rewrite the code from the previous chapter using system objects.
```
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
annual_growth=annual_growth)
```
And we can encapsulate the code that runs the model in a function.
```
def run_simulation1(system):
"""Runs the constant growth model.
system: System object
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = results[t] + system.annual_growth
return results
```
We can also encapsulate the code that plots the results.
```
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
```
Here's how we run it.
```
results = run_simulation1(system)
plot_results(census, un, results, 'Constant growth model')
```
## Proportional growth
Here's a more realistic model where the number of births and deaths is proportional to the current population.
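In symbols, with birth rate $b$ and death rate $d$, each step of the loop below computes

$
\begin{align}
p_{t+1} &= p_t + b\,p_t - d\,p_t = (1 + b - d)\,p_t
\end{align}
$

so the population grows geometrically at the net rate $b - d$.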
```
def run_simulation2(system):
"""Run a model with proportional birth and death.
system: System object
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
births = system.birth_rate * results[t]
deaths = system.death_rate * results[t]
results[t+1] = results[t] + births - deaths
return results
```
I picked a death rate that seemed reasonable and then adjusted the birth rate to fit the data.
```
system.death_rate = 0.01
system.birth_rate = 0.027
```
Here's what it looks like.
```
results = run_simulation2(system)
plot_results(census, un, results, 'Proportional model')
savefig('figs/chap03-fig03.pdf')
```
The model fits the data pretty well for the first 20 years, but not so well after that.
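One way to make "not so well" precise is to compute the average error between model and data. The sketch below uses made-up numbers standing in for short slices of the data and model output (the real `census` and `results` objects are pandas Series indexed by year):

```python
import numpy as np

# Made-up stand-ins for five years of data and model output, in billions
data = np.array([3.0, 3.7, 4.5, 5.3, 6.1])
model = np.array([3.0, 3.6, 4.3, 5.2, 6.3])

# Mean absolute error, in billions of people
mae = np.mean(np.abs(model - data))
print(round(float(mae), 2))  # 0.12
```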
### Factoring out the update function
`run_simulation1` and `run_simulation2` are nearly identical except the body of the loop. So we can factor that part out into a function.
```
def update_func1(pop, t, system):
"""Compute the population next year.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
births = system.birth_rate * pop
deaths = system.death_rate * pop
return pop + births - deaths
```
The name `update_func` refers to a function object.
```
update_func1
```
Which we can confirm by checking its type.
```
type(update_func1)
```
`run_simulation` takes the update function as a parameter and calls it just like any other function.
```
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
```
Here's how we use it.
```
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
birth_rate=0.027,
death_rate=0.01)
results = run_simulation(system, update_func1)
plot_results(census, un, results, 'Proportional model, factored')
```
Remember not to put parentheses after `update_func1`. What happens if you try?
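What happens is easiest to see with a tiny standalone example (hypothetical `square` and `apply`, not part of modsim): a bare name passes the function object; adding parentheses calls the function on the spot.

```python
def square(x):
    return x * x

def apply(f, value):
    # f arrives as a function object and is called here
    return f(value)

print(apply(square, 3))  # 9
# apply(square(), 3) would fail immediately with
# TypeError: square() missing 1 required positional argument: 'x'
```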
**Exercise:** When you run `run_simulation`, it runs `update_func1` once for each year between `t_0` and `t_end`. To see that for yourself, add a print statement at the beginning of `update_func1` that prints the values of `t` and `pop`, then run `run_simulation` again.
### Combining birth and death
Since births and deaths get added up, we don't have to compute them separately. We can combine the birth and death rates into a single net growth rate.
```
def update_func2(pop, t, system):
"""Compute the population next year.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop
return pop + net_growth
```
Here's how it works:
```
system.alpha = system.birth_rate - system.death_rate
results = run_simulation(system, update_func2)
plot_results(census, un, results, 'Proportional model, combined birth and death')
```
### Exercises
**Exercise:** Maybe the reason the proportional model doesn't work very well is that the growth rate, `alpha`, is changing over time. So let's try a model with different growth rates before and after 1980 (as an arbitrary choice).
Write an update function that takes `pop`, `t`, and `system` as parameters. The system object, `system`, should contain two parameters: the growth rate before 1980, `alpha1`, and the growth rate after 1980, `alpha2`. It should use `t` to determine which growth rate to use. Note: Don't forget the `return` statement.
Test your function by calling it directly, then pass it to `run_simulation`. Plot the results. Adjust the parameters `alpha1` and `alpha2` to fit the data as well as you can.
```
# Solution goes here
def update_func3(pop, t, system):
if t < 1970:  # split year used here; the exercise suggests 1980, and either arbitrary cutoff works
net_growth = system.alpha1 * pop
else:
net_growth = system.alpha2 * pop
return pop + net_growth
# Solution goes here
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha1=0.017,
alpha2=0.0175
)
results = run_simulation(system, update_func3)
plot_results(census, un, results, 'Proportional model, combined birth and death')
```
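For reference, `System` and `run_simulation` come from the *Modeling and Simulation in Python* `modsim` library, which is not shown in this excerpt. A minimal sketch of what they are assumed to do here (stand-ins for illustration, not the real library, which records results in a `TimeSeries`):

```python
# Minimal stand-ins for the modsim objects used above (assumptions, not the
# real library): System is a parameter bag; run_simulation applies the update
# function once per year and records the population after each step.

class System:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def run_simulation(system, update_func):
    """Simulate population from t_0 to t_end, returning {year: population}."""
    results = {system.t_0: system.p_0}
    pop = system.p_0
    for t in range(system.t_0, system.t_end):
        pop = update_func(pop, t, system)
        results[t + 1] = pop
    return results

def update_func2(pop, t, system):
    """Proportional model with a single net growth rate alpha."""
    return pop + system.alpha * pop

system = System(t_0=1950, t_end=1955, p_0=2.5, alpha=0.018)
results = run_simulation(system, update_func2)
print(results[1955])  # compound growth: 2.5 * 1.018**5
```

Because the update is applied once per year, the result is simple compound growth.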
| github_jupyter |
```
%matplotlib inline
# Importing standard Qiskit libraries and configuring account
from qiskit import QuantumCircuit, execute, Aer, IBMQ
from qiskit.compiler import transpile, assemble
from qiskit.tools.jupyter import *
from qiskit.visualization import *
# Loading your IBM Q account(s)
provider = IBMQ.load_account()
```
# Chapter 11 - Ignis
```
# Import plot and math libraries
import numpy as np
import matplotlib.pyplot as plt
# Import the noise models and some standard error methods
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import amplitude_damping_error, phase_damping_error
# Import all three coherence circuits generators and fitters
from qiskit.ignis.characterization.coherence import t1_circuits, t2_circuits, t2star_circuits
from qiskit.ignis.characterization.coherence import T1Fitter, T2Fitter, T2StarFitter
# Generate the T1 test circuits
# Generate a list of number of gates to add to each circuit
# using np.linspace so that the number of gates increases linearly
# and append with a large span at the end of the list (200-4000)
num_of_gates = np.append((np.linspace(1, 100, 12)).astype(int), np.array([200, 400, 800, 1000, 2000, 4000]))
#Define the gate time for each Identity gate
gate_time = 0.1
# Select the first qubit as the one we wish to measure T1
qubits = [0]
# Generate the test circuits given the above parameters
test_circuits, delay_times = t1_circuits(num_of_gates, gate_time, qubits)
# The number of I gates appended for each circuit
print('Number of gates per test circuit: \n', num_of_gates, '\n')
# The gate time of each circuit (number of I gates * gate_time)
print('Delay times for each test circuit created, respectively:\n', delay_times)
print('Total test circuits created: ', len(test_circuits))
print('Test circuit 1 with 1 Identity gate:')
test_circuits[0].draw()
print('Test circuit 2 with 10 Identity gates:')
test_circuits[1].draw()
# Set the simulator with amplitude damping noise
# Set the amplitude damping noise channel parameters T1 and Lambda
t1 = 20
lam = np.exp(-gate_time/t1)
# Generate the amplitude damping error channel
error = amplitude_damping_error(1 - lam)
noise_model = NoiseModel()
# Set the damping error to the ID gate on qubit 0.
noise_model.add_quantum_error(error, 'id', [0])
# Run the simulator with the generated noise model
backend = Aer.get_backend('qasm_simulator')
shots = 200
backend_result = execute(test_circuits, backend, shots=shots, noise_model=noise_model).result()
# Plot the noisy results of the shortest (first in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[0]))
# Plot the noisy results of the largest (last in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[len(test_circuits)-1]))
# Initialize the parameters for the T1Fitter, A, T1, and B
param_t1 = t1*1.2
param_a = 1.0
param_b = 0.0
# Generate the T1Fitter for our test circuit results
fit = T1Fitter(backend_result, delay_times, qubits,
fit_p0=[param_a, param_t1, param_b],
fit_bounds=([0, 0, -1], [2, param_t1*2, 1]))
# Plot the fitter results for T1 over each test circuit's delay time
fit.plot(0)
# Import the thermal relaxation error we will use to create our error
from qiskit.providers.aer.noise.errors.standard_errors import thermal_relaxation_error
# Import the T2Fitter Class and t2_circuits method
from qiskit.ignis.characterization.coherence import T2Fitter
from qiskit.ignis.characterization.coherence import t2_circuits
num_of_gates = (np.linspace(1, 300, 50)).astype(int)
gate_time = 0.1
# Note that it is possible to measure several qubits in parallel
qubits = [0]
t2echo_test_circuits, t2echo_delay_times = t2_circuits(num_of_gates, gate_time, qubits)
# The number of I gates appended for each circuit
print('Number of gates per test circuit: \n', num_of_gates, '\n')
# The gate time of each circuit (number of I gates * gate_time)
print('Delay times for T2 echo test circuits:\n', t2echo_delay_times)
# Draw the first T2 test circuit
t2echo_test_circuits[0].draw()
# We'll create a noise model on the backend simulator
backend = Aer.get_backend('qasm_simulator')
shots = 400
# set the t2 decay time
t2 = 25.0
# Define the T2 noise model based on the thermal relaxation error model
t2_noise_model = NoiseModel()
t2_noise_model.add_quantum_error(thermal_relaxation_error(np.inf, t2, gate_time, 0.5), 'id', [0])
# Execute the circuit on the noisy backend
t2echo_backend_result = execute(t2echo_test_circuits, backend, shots=shots,
noise_model=t2_noise_model, optimization_level=0).result()
plot_histogram(t2echo_backend_result.get_counts(t2echo_test_circuits[0]))
plot_histogram(t2echo_backend_result.get_counts(t2echo_test_circuits[len(t2echo_test_circuits)-1]))
```
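The fitters used above assume simple decay models: `T1Fitter` fits A·exp(−t/T1) + B to the excited-state population over the delay times, which is why `fit_p0` takes the three parameters A, T1, and B. A quick sanity check of that model in isolation (a sketch of the assumed functional form, not Ignis code):

```python
import numpy as np

def t1_model(t, a, t1, b):
    """Decay curve assumed by T1Fitter: A * exp(-t / T1) + B."""
    return a * np.exp(-t / t1) + b

# Delay times comparable to the ones generated by t1_circuits above
delay_times = np.linspace(0.1, 100.0, 50)
p1 = t1_model(delay_times, a=1.0, t1=20.0, b=0.0)

# At t = T1 the excited-state population has fallen to A/e of its initial value
print(t1_model(20.0, 1.0, 20.0, 0.0))
```

Fitting this curve to the measured survival probabilities is what recovers the T1 estimate.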
# T2 Decoherence Time
```
# Generate the T2Fitter class using similar parameters as the T1Fitter
t2echo_fit = T2Fitter(t2echo_backend_result, t2echo_delay_times,
qubits, fit_p0=[0.5, t2, 0.5], fit_bounds=([-0.5, 0, -0.5], [1.5, 40, 1.5]))
# Print and plot the results
print(t2echo_fit.params)
t2echo_fit.plot(0)
plt.show()
# 50 total circuit lengths, linearly spaced:
# 30 from 1->150, 20 from 160->450
num_of_gates = np.append((np.linspace(1, 150, 30)).astype(int), (np.linspace(160,450,20)).astype(int))
# Set the Identity gate delay time
gate_time = 0.1
# Select the qubit to measure T2*
qubits = [0]
# Generate the 50 test circuits with number of oscillations set to 4
test_circuits, delay_times, osc_freq = t2star_circuits(num_of_gates, gate_time, qubits, nosc=4)
print('Circuits generated: ', len(test_circuits))
print('Delay times: ', delay_times)
print('Oscillating frequency: ', osc_freq)
print(test_circuits[0].count_ops())
test_circuits[0].draw()
print(test_circuits[1].count_ops())
test_circuits[1].draw()
# Get the backend to execute the test circuits
backend = Aer.get_backend('qasm_simulator')
# Set the T2* value to 10
t2Star = 10
# Set the phase damping error and add it to the noise model to the Identity gates
error = phase_damping_error(1 - np.exp(-2*gate_time/t2Star))
noise_model = NoiseModel()
noise_model.add_quantum_error(error, 'id', [0])
# Run the simulator
shots = 1024
backend_result = execute(test_circuits, backend, shots=shots,
noise_model=noise_model).result()
# Plot the noisy results of the shortest (first in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[0]))
# Plot the noisy results of the largest (last in the list) circuit
plot_histogram(backend_result.get_counts(test_circuits[len(test_circuits)-1]))
# Set the initial values of the T2StarFitter parameters
param_T2Star = t2Star*1.1
param_A = 0.5
param_B = 0.5
# Generate the T2StarFitter with the given parameters and bounds
fit = T2StarFitter(backend_result, delay_times, qubits,
fit_p0=[0.5, t2Star, osc_freq, 0, 0.5],
fit_bounds=([-0.5, 0, 0, -np.pi, -0.5],
[1.5, 40, 2*osc_freq, np.pi, 1.5]))
# Plot the qubit characterization from the T2StarFitter
fit.plot(0)
```
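The T2* circuits add an induced oscillation (`osc_freq`), so `T2StarFitter` fits a damped cosine rather than a plain exponential — roughly A·exp(−t/T2*)·cos(2πft + φ) + B, matching the five initial parameters passed in `fit_p0` above. A sketch of that functional form (an illustration of the assumed model, not Ignis code):

```python
import numpy as np

def t2star_model(t, a, t2star, f, phi, b):
    """Damped oscillation assumed here for T2* fitting."""
    return a * np.exp(-t / t2star) * np.cos(2 * np.pi * f * t + phi) + b

t = np.linspace(0.0, 45.0, 400)
signal = t2star_model(t, a=0.5, t2star=10.0, f=0.09, phi=0.0, b=0.5)

# The oscillation's envelope decays toward the offset B as t grows
print(signal.max(), signal.min())
```

The fitter extracts T2* from how quickly the oscillation's envelope collapses onto B.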
# Mitigating Readout errors
```
# Import Qiskit classes
from qiskit.providers.aer import noise
from qiskit.tools.visualization import plot_histogram
# Import measurement calibration functions
from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter
# Generate the calibration circuits
# Set the number of qubits
num_qubits = 5
# Set the qubit list to generate the measurement calibration circuits
qubit_list = [0,1,2,3,4]
# Generate the measurement calibrations circuits and state labels
meas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=num_qubits, circlabel='mcal')
# Print the number of measurement calibration circuits generated
print(len(meas_calibs))
# Draw any of the generated calibration circuits, 0-31.
# In this example we will draw the last one.
meas_calibs[31].draw()
state_labels
# Execute the calibration circuits without noise on the qasm simulator
backend = Aer.get_backend('qasm_simulator')
job = execute(meas_calibs, backend=backend, shots=1000)
# Obtain the measurement calibration results
cal_results = job.result()
# The calibration matrix without noise is the identity matrix
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
meas_fitter.plot_calibration()
# Create a 5 qubit circuit
qc = QuantumCircuit(5,5)
# Place the first qubit in superposition
qc.h(0)
# Entangle all other qubits together
qc.cx(0, 1)
qc.cx(1, 2)
qc.cx(2, 3)
qc.cx(3, 4)
# Include a barrier just to ease visualization of the circuit
qc.barrier()
# Measure and draw the final circuit
qc.measure([0,1,2,3,4], [0,1,2,3,4])
qc.draw()
# Obtain the least busy backend device, not a simulator
from qiskit.providers.ibmq import least_busy
# Find the least busy operational quantum device with 5 or more qubits
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 5 and not x.configuration().simulator and x.status().operational==True))
# Print the least busy backend
print("least busy backend: ", backend)
# Execute the quantum circuit on the backend
job = execute(qc, backend=backend, shots=1024)
results = job.result()
# Results from backend without mitigating the noise
noisy_counts = results.get_counts()
# Obtain the measurement fitter object
measurement_filter = meas_fitter.filter
# Mitigate the results by applying the measurement fitter
filtered_results = measurement_filter.apply(results)
# Get the mitigated result counts
filtered_counts = filtered_results.get_counts(0)
plot_histogram(noisy_counts)
plot_histogram(filtered_counts)
import qiskit.tools.jupyter
%qiskit_version_table
```
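Under the hood, the measurement filter solves for the true outcome probabilities given the calibration matrix M, whose column j is the distribution of observed outcomes when basis state j was prepared. A minimal single-qubit sketch using a least-squares inverse (an illustration of the idea, not the Ignis implementation, which additionally constrains the solution to be a valid probability vector):

```python
import numpy as np

# Calibration matrix for one noisy qubit: M[i, j] = P(measure i | prepared j)
M = np.array([[0.95, 0.08],
              [0.05, 0.92]])

# An ideal 50/50 state, distorted by readout error
p_true = np.array([0.5, 0.5])
p_observed = M @ p_true

# Mitigation: solve M @ p = p_observed for p (least squares in general)
p_mitigated, *_ = np.linalg.lstsq(M, p_observed, rcond=None)
print(p_mitigated)  # recovers ~[0.5, 0.5]
```

With the identity calibration matrix (noiseless case), mitigation leaves the counts unchanged.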
| github_jupyter |
# Availability Calculator
This tool estimates the average device availability over a period of time.
Double-click into the cells below, where it says `'here'`, and adjust the values as necessary.
After setting configuration values, select `Kernel` > `Restart & Run All` from the menu.
```
from datetime import datetime, timedelta
import time
import query
from measure import DeviceCounter
import pandas
from statistics import mean
```
## `provider_name`
Valid choices are (casing matters):
* `bird`
* `JUMP`
* `Lime`
* `Lyft`
```
### Configuration ###
provider_name = 'here'
#####################
print(f"Provider: {provider_name}")
```
## `vehicle_type`
Valid choices are (casing matters):
* `bicycle` - `JUMP` only
* `scooter` - all providers
```
### Configuration ###
vehicle_type = 'here'
#####################
print(f"Vehicle Type: {vehicle_type}")
```
## `start_date`:
```
### Configuration ###
start_year = 2018
start_month = 11
start_day = 1
#####################
start_date = datetime(start_year, start_month, start_day, 0, 0, 0)
print("Starting:", start_date)
```
## `end_date`:
```
### Configuration ###
end_year = 2018
end_month = 11
end_day = 1
#####################
end_date = datetime(end_year, end_month, end_day, 23, 59, 59)
print("Ending:", end_date)
```
## Query for availability data:
```
q = query.Availability(start_date, end_date, vehicle_types=vehicle_type, table="csm_availability", local=True, debug=True)
data = q.get(provider_name=provider_name)
```
## Count availability in a partitioned time range:
```
# create a device counter for the time range, assuming local time
devices = DeviceCounter(start_date, end_date, local=True, debug=True)
# create the interval partition and aggregate counts
partition = devices.count(data).partition()
partition.describe()
```
## Average availability:
Over the computed interval partition.
```
overall_avg = devices.average()
print(f"Overall average: {overall_avg}")
```
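`DeviceCounter.average()` is not shown in this notebook, but a time-weighted average over an interval partition presumably looks something like the following sketch (the interval boundaries and counts are made-up numbers for illustration):

```python
from datetime import datetime, timedelta

def time_weighted_average(intervals):
    """Average device count over [(start, end, count), ...], weighted by duration."""
    total_seconds = 0.0
    weighted_sum = 0.0
    for start, end, count in intervals:
        seconds = (end - start).total_seconds()
        total_seconds += seconds
        weighted_sum += count * seconds
    return weighted_sum / total_seconds

day = datetime(2018, 11, 1)
intervals = [
    (day, day + timedelta(hours=8), 120),                         # overnight
    (day + timedelta(hours=8), day + timedelta(hours=20), 300),   # daytime
    (day + timedelta(hours=20), day + timedelta(hours=24), 180),  # evening
]
print(time_weighted_average(intervals))  # (120*8 + 300*12 + 180*4) / 24 = 220.0
```

Weighting by interval duration is what keeps a brief spike in availability from dominating the daily average.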
## Count availability (again), day-by-day:
Calculate average availability for each day in the range `start_date` to `end_date`.
At the end, calculate the overall average.
```
oneday = timedelta(days=1)
counts = {}
start = start_date

while start < end_date:
    end = start + oneday
    print(f"Counting {start.strftime('%Y-%m-%d')} to {end.strftime('%Y-%m-%d')}")
    q = query.Availability(start, end, vehicle_types=vehicle_type, table="csm_availability", local=True, debug=False)
    data = q.get(provider_name=provider_name)
    print(f"{len(data)} availability records in time period")
    counter = DeviceCounter(start, start + oneday, local=True, debug=False)
    counts[start] = counter.count(data)
    start = start + oneday
    print()

print("Done counting. Daily averages:")
print()

for date, count in counts.items():
    print(f"{provider_name},{vehicle_type},{date.strftime('%Y-%m-%d')},{count.average()},{overall_avg}")
```
| github_jupyter |
<a href="https://colab.research.google.com/github/TerrenceAm22/DS-Unit-2-Kaggle-Challenge/blob/master/LS_DS_223_assignment_checkpoint2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 2, Module 3*
---
# Cross-Validation
## Assignment
- [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
- [ ] Continue to participate in our Kaggle challenge.
- [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.
**You can't just copy** from the lesson notebook to this assignment.
- Because the lesson was **regression**, but the assignment is **classification.**
- Because the lesson used [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html), which doesn't work as-is for _multi-class_ classification.
So you will have to adapt the example, which is good real-world practice.
1. Use a model for classification, such as [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
2. Use hyperparameters that match the classifier, such as `randomforestclassifier__ ...`
3. Use a metric for classification, such as [`scoring='accuracy'`](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)
4. If you’re doing a multi-class classification problem — such as whether a waterpump is functional, functional needs repair, or nonfunctional — then use a categorical encoding that works for multi-class classification, such as [OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html) (not [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html))
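For point 4, ordinal encoding simply maps each category to an integer. A minimal pure-Python version of the idea behind `category_encoders.OrdinalEncoder` (a sketch only — the real encoder also handles NaNs and configurable unknown-value behavior):

```python
def fit_ordinal_mapping(values):
    """Assign an integer code to each distinct category, in first-seen order."""
    mapping = {}
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping) + 1
    return mapping

def transform(values, mapping, unknown=-1):
    """Encode values using the fitted mapping; unseen categories get a sentinel."""
    return [mapping.get(v, unknown) for v in values]

status = ['functional', 'non functional', 'functional', 'functional needs repair']
mapping = fit_ordinal_mapping(status)
print(transform(status, mapping))         # [1, 2, 1, 3]
print(transform(['brand new'], mapping))  # unseen category -> [-1]
```

Unlike target encoding, this works unchanged for multi-class targets because it never looks at the labels.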
## Stretch Goals
### Reading
- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation
- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)
- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation
- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)
- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)
### Doing
- Add your own stretch goals!
- Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/). See the previous assignment notebook for details.
- In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
### BONUS: Stacking!
Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
```python
import pandas as pd
# Filenames of your submissions you want to ensemble
files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
target = 'status_group'
submissions = (pd.read_csv(file)[[target]] for file in files)
ensemble = pd.concat(submissions, axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission[target] = majority_vote
submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
```
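The `ensemble.mode(axis='columns')` call above is just a per-row majority vote. The same idea in plain Python, with hypothetical labels:

```python
from collections import Counter

def majority_vote(*predictions):
    """Row-wise majority vote across several submissions' predicted labels."""
    return [Counter(row).most_common(1)[0][0] for row in zip(*predictions)]

sub1 = ['functional', 'non functional', 'functional']
sub2 = ['functional', 'functional', 'non functional']
sub3 = ['non functional', 'functional', 'functional']
print(majority_vote(sub1, sub2, sub3))  # ['functional', 'functional', 'functional']
```

Using an odd number of submissions avoids ties; with an even number, `Counter.most_common` breaks ties arbitrarily, just as `DataFrame.mode` picks the first mode.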
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
    !pip install category_encoders==2.*
# If you're working locally:
else:
    DATA_PATH = '../data/'
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Importing Necessary Libraries
import category_encoders as ce
import numpy as np
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from category_encoders import TargetEncoder
from sklearn.metrics import accuracy_score
from sklearn.metrics import make_scorer
#from sklearn.model_selection import cross_validate
train.head()
# Performing Train/Test Split
train, val = train_test_split(train, random_state=42)
train.shape, test.shape, val.shape
target = 'status_group'
y_train = train[target]
y_train.value_counts(normalize=True)
# Assigning a dataframe with all train columns except the target & ID
train_features = train.drop(columns=['status_group', 'id'])
# Make a list of all numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
#Making a list of all categorical features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Getting a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
features = numeric_features + categorical_features
print(features)
# Arranging data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
# Using pipeline 2 for RandomizedSearchCV
#pipeline = make_pipeline(
#ce.OneHotEncoder(use_cat_names=True),
#SimpleImputer(strategy='mean'),
#DecisionTreeClassifier(random_state=42)
#)
# Fit on train
#pipeline.fit(X_train, y_train)
# Score on train, val
#print('Train Accuracy', pipeline.score(X_train, y_train))
#print('Validation Accuracy', pipeline.score(X_val, y_val))
# Predict on test
#y_pred = pipeline.predict(X_test)
pipeline2 = make_pipeline(
ce.OrdinalEncoder(verbose=20),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=200, n_jobs=-1)
)
pipeline2.fit(X_train, y_train)
print('Train Accuracy', pipeline2.score(X_train, y_train))
print('Validation Accuracy', pipeline2.score(X_val, y_val))
y_pred = pipeline2.predict(X_test)
#cross_val_score(pipeline2, X_train, y_train, cv=5)
param_distributions = {
'ordinalencoder__verbose':[20],
'simpleimputer__strategy':['mean'],
'randomforestclassifier__n_estimators':[100, 120, 140, 160, 180, 200],
'randomforestclassifier__n_jobs':[-1],
'randomforestclassifier__max_depth':[None, 20, 30, 40, 50],
'randomforestclassifier__criterion':["entropy"],
'randomforestclassifier__min_samples_split':[20, 40]
}
search = RandomizedSearchCV(
pipeline2,
param_distributions = param_distributions,
n_iter=10,
cv=5,
verbose=10,
scoring= 'accuracy',
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train)
print('Best hyperparameters', search.best_params_)
print('Cross-validation accuracy', search.best_score_)
```
| github_jupyter |
```
import wget, json, os, math
from pathlib import Path
from string import capwords
from pybtex.database import parse_string
import pybtex.errors
from mpcontribs.client import Client
from bravado.exception import HTTPNotFound
from pymatgen.core import Structure
from pymatgen.ext.matproj import MPRester
from tqdm.notebook import tqdm
from matminer.datasets import load_dataset
from monty.json import MontyEncoder, MontyDecoder
```
### Configuration and Initialization
```
BENCHMARK_FULL_SET = [
{
"name": "log_kvrh",
"data_file": "matbench_log_kvrh.json.gz",
"target": "log10(K_VRH)",
"clf_pos_label": None,
"unit": None,
"has_structure": True,
}, {
"name": "log_gvrh",
"data_file": "matbench_log_gvrh.json.gz",
"target": "log10(G_VRH)",
"clf_pos_label": None,
"unit": None,
"has_structure": True,
}, {
"name": "dielectric",
"data_file": "matbench_dielectric.json.gz",
"target": "n",
"clf_pos_label": None,
"unit": None,
"has_structure": True,
}, {
"name": "jdft2d",
"data_file": "matbench_jdft2d.json.gz",
"target": "exfoliation_en",
"clf_pos_label": None,
"unit": "meV/atom",
"has_structure": True,
}, {
"name": "mp_gap",
"data_file": "matbench_mp_gap.json.gz",
"target": "gap pbe",
"clf_pos_label": None,
"unit": "eV",
"has_structure": True,
}, {
"name": "mp_is_metal",
"data_file": "matbench_mp_is_metal.json.gz",
"target": "is_metal",
"clf_pos_label": True,
"unit": None,
"has_structure": True,
}, {
"name": "mp_e_form",
"data_file": "matbench_mp_e_form.json.gz",
"target": "e_form",
"clf_pos_label": None,
"unit": "eV/atom",
"has_structure": True,
}, {
"name": "perovskites",
"data_file": "matbench_perovskites.json.gz",
"target": "e_form",
"clf_pos_label": None,
"unit": "eV",
"has_structure": True,
}, {
"name": "glass",
"data_file": "matbench_glass.json.gz",
"target": "gfa",
"clf_pos_label": True,
"unit": None,
"has_structure": False,
}, {
"name": "expt_is_metal",
"data_file": "matbench_expt_is_metal.json.gz",
"target": "is_metal",
"clf_pos_label": True,
"unit": None,
"has_structure": False,
}, {
"name": "expt_gap",
"data_file": "matbench_expt_gap.json.gz",
"target": "gap expt",
"clf_pos_label": None,
"unit": "eV",
"has_structure": False,
}, {
"name": "phonons",
"data_file": "matbench_phonons.json.gz",
"target": "last phdos peak",
"clf_pos_label": None,
"unit": "cm^-1",
"has_structure": True,
}, {
"name": "steels",
"data_file": "matbench_steels.json.gz",
"target": "yield strength",
"clf_pos_label": None,
"unit": "MPa",
"has_structure": False,
}
]
# Map of canonical but non-mpcontribs-compatible target names to compatible (unicode, no punctuation) target names
target_map = {
"yield strength": "σᵧ",
"log10(K_VRH)": "log₁₀Kᵛʳʰ",
"log10(G_VRH)": "log₁₀Gᵛʳʰ",
"n": "𝑛",
"exfoliation_en": "Eˣ",
"gap pbe": "Eᵍ",
"is_metal": "metallic",
"e_form": "Eᶠ",
"gfa": "glass",
"gap expt": "Eᵍ",
"last phdos peak": "ωᵐᵃˣ",
}
pybtex.errors.set_strict_mode(False)
mprester = MPRester()
client = Client(host='ml-api.materialsproject.org')
datadir = Path('/Users/patrick/gitrepos/mp/mpcontribs-data/')
fn = Path('dataset_metadata.json')
fp = datadir / fn
if not fp.exists():
    prefix = "https://raw.githubusercontent.com/hackingmaterials/matminer"
    url = f'{prefix}/master/matminer/datasets/{fn}'
    wget.download(url)
    fn.rename(fp)
metadata = json.load(open(fp, 'r'))
```
### Prepare and create/update Projects
```
for ds in BENCHMARK_FULL_SET:
    name = "matbench_" + ds["name"]
    primitive_key = "structure" if ds["has_structure"] else "composition"
    target = ds["target"]
    columns = {
        target_map[target]: metadata[name]["columns"][target],
        primitive_key: metadata[name]["columns"][primitive_key]
    }
    project = {
        'name': name,
        'is_public': True,
        'owner': 'ardunn@lbl.gov',
        'title': name,  # TODO update and set long_title
        'authors': 'A. Dunn, A. Jain',
        'description': metadata[name]['description'] +
            " If you are viewing this on MPContribs-ML interactively, please ensure the order of the "
            f"identifiers is sequential (mb-{ds['name']}-0001, mb-{ds['name']}-0002, etc.) before benchmarking.",
        'other': {
            'columns': columns,
            'entries': metadata[name]['num_entries']
        },
        'references': [
            {'label': 'RawData', 'url': metadata[name]["url"]}
        ]
    }

    for ref in metadata[name]['bibtex_refs']:
        if name == "matbench_phonons":
            ref = ref.replace(
                "petretto_dwaraknath_miranda_winston_giantomassi_rignanese_van setten_gonze_persson_hautier_2018",
                "petretto2018"
            )
        bib = parse_string(ref, 'bibtex')
        for key, entry in bib.entries.items():
            key_is_doi = key.startswith('doi:')
            url = 'https://doi.org/' + key.split(':', 1)[-1] if key_is_doi else entry.fields.get('url')
            k = 'Zhuo2018' if key_is_doi else capwords(key.replace('_', ''))
            if k.startswith('C2'):
                k = 'Castelli2012'
            elif k.startswith('Landolt'):
                k = 'LB1997'
            elif k == 'Citrine':
                url = 'https://www.citrination.com'
            if len(k) > 8:
                k = k[:4] + k[-4:]
            project['references'].append(
                {'label': k, 'url': url}
            )

    try:
        client.projects.get_entry(pk=name, _fields=["name"]).result()
    except HTTPNotFound:
        client.projects.create_entry(project=project).result()
        print(name, "created")
    else:
        project.pop("name")
        client.projects.update_entry(pk=name, project=project).result()
        print(name, "updated")
```
### Prepare Contributions
```
structure_filename = "/Users/patrick/Downloads/outfile.cif"
for ds in BENCHMARK_FULL_SET:
    name = "matbench_" + ds["name"]
    fn = datadir / f"{name}.json"
    if fn.exists():
        continue
    target = ds["target"]
    unit = f" {ds['unit']}" if ds["unit"] else ""
    df = load_dataset(name)
    contributions = []
    id_prefix = df.shape[0]
    id_n_zeros = math.floor(math.log(df.shape[0], 10)) + 1

    for i, row in tqdm(enumerate(df.iterrows()), total=df.shape[0]):
        entry = row[1]
        contrib = {'project': name, 'is_public': True}
        if "structure" in entry.index:
            s = entry.loc["structure"]
            s.to("cif", structure_filename)
            s = Structure.from_file(structure_filename)
            c = s.composition.get_integer_formula_and_factor()[0]
            contrib["structures"] = [s]
        else:
            c = entry["composition"]
        id_number = f"{i+1:0{id_n_zeros}d}"
        identifier = f"mb-{ds['name']}-{id_number}"
        contrib["identifier"] = identifier
        contrib["data"] = {target_map[target]: f"{entry.loc[target]}{unit}"}
        contrib["formula"] = c
        contributions.append(contrib)

    with open(fn, "w") as f:
        json.dump(contributions, f, cls=MontyEncoder)
    print("saved to", fn)
```
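The zero-padded identifier width above (`id_n_zeros`) is chosen so every row index in a dataset fits the same fixed width. Checking the scheme in isolation (a standalone sketch of the same formatting logic):

```python
import math

def make_identifier(prefix, i, n_rows):
    """Zero-pad index i+1 to the width needed for a dataset of n_rows entries."""
    width = math.floor(math.log(n_rows, 10)) + 1
    return f"mb-{prefix}-{i + 1:0{width}d}"

print(make_identifier("log_gvrh", 0, 10987))  # mb-log_gvrh-00001
print(make_identifier("steels", 311, 312))    # mb-steels-312
```

Fixed-width identifiers are what makes the lexicographic ordering mentioned in the project description match the numeric row order.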
### Submit Contributions
```
name = "matbench_log_gvrh"
fn = datadir / f"{name}.json"
with open(fn, "r") as f:
    contributions = json.load(f, cls=MontyDecoder)
# client.delete_contributions(name)
client.submit_contributions(contributions, ignore_dupes=True)
```
| github_jupyter |
# DeepDreaming with TensorFlow
>[Loading and displaying the model graph](#loading)
>[Naive feature visualization](#naive)
>[Multiscale image generation](#multiscale)
>[Laplacian Pyramid Gradient Normalization](#laplacian)
>[Playing with feature visualizations](#playing)
>[DeepDream](#deepdream)
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
- visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries)
- embed TensorBoard graph visualizations into Jupyter notebooks
- produce high-resolution images with tiled computation ([example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg))
- use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
- generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow us to make these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures.
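The core trick throughout is gradient *ascent* in image space: instead of adjusting network weights to reduce a loss, we adjust pixels to increase an activation. On a toy objective the loop looks like this (a schematic in plain NumPy, not the TensorFlow version used below):

```python
import numpy as np

def ascend(x, grad_fn, step=0.1, iters=200):
    """Gradient ascent: repeatedly nudge x in the direction that increases the objective."""
    for _ in range(iters):
        x = x + step * grad_fn(x)
    return x

# Toy objective f(x) = -(x - 3)^2, maximized at x = 3; its gradient is -2(x - 3)
grad_fn = lambda x: -2.0 * (x - 3.0)
x0 = np.array([0.0])
x_final = ascend(x0, grad_fn)
print(x_final)  # converges toward [3.]
```

In DeepDream, `x` is the image and `grad_fn` is the network's gradient of a chosen channel activation with respect to the input pixels.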
```
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
```
<a id='loading'></a>
## Loading and displaying the model graph
The pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path to `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network:
```
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
```
To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
```
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
    strip_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = strip_def.node.add()
        n.MergeFrom(n0)
        if n.op == 'Const':
            tensor = n.attr['value'].tensor
            size = len(tensor.tensor_content)
            if size > max_const_size:
                tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
    return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
```
<a id='naive'></a>
## Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
```
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
```
<a id="multiscale"></a>
## Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed at a smaller scale will be upscaled and augmented with additional detail at the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. In that case, storing network activations and backprop values would quickly exhaust GPU memory. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile's gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
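The tiling trick can be checked in isolation. In this NumPy sketch, `grad_fn` is a hypothetical pointwise stand-in for `sess.run(t_grad, ...)`, chosen deliberately so the tiled-and-shifted result can be compared against the untiled computation:

```python
import numpy as np

def grad_fn(sub):
    # hypothetical pointwise "gradient" so the tiled result is checkable
    return 2.0 * sub

def calc_grad_tiled_np(img, grad_fn, tile_size=4):
    h, w = img.shape[:2]
    sz = tile_size
    sx, sy = np.random.randint(sz, size=2)
    img_shift = np.roll(np.roll(img, sx, 1), sy, 0)  # random shift, as in the notebook
    grad = np.zeros_like(img)
    for y in range(0, max(h - sz // 2, sz), sz):
        for x in range(0, max(w - sz // 2, sz), sz):
            grad[y:y+sz, x:x+sz] = grad_fn(img_shift[y:y+sz, x:x+sz])
    return np.roll(np.roll(grad, -sx, 1), -sy, 0)    # undo the shift

img = np.arange(64, dtype=np.float64).reshape(8, 8)
tiled = calc_grad_tiled_np(img, grad_fn)
```

For a pointwise `grad_fn` the shift cancels exactly; with a real convolutional gradient, tile boundaries introduce small errors, which is why the shift is re-randomized every iteration.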
```
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
```
<a id="laplacian"></a>
## Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normalization_.
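The key invariant — merging the pyramid levels reconstructs the image exactly — is easiest to see in one dimension. A sketch with pair-averaging in place of the 5×5 blur kernel (same split/merge structure as `lap_split_n` and `lap_merge` below, but not the library code itself):

```python
import numpy as np

def lap_split_1d(x):
    lo = x.reshape(-1, 2).mean(axis=1)   # downsample: average adjacent pairs
    hi = x - np.repeat(lo, 2)            # residual against the crude upsampling
    return lo, hi

def lap_split_n_1d(x, n):
    levels = []
    for _ in range(n):
        x, hi = lap_split_1d(x)
        levels.append(hi)
    levels.append(x)
    return levels[::-1]                  # coarsest level first, as in lap_split_n

def lap_merge_1d(levels):
    x = levels[0]
    for hi in levels[1:]:
        x = np.repeat(x, 2) + hi         # upsample and restore the stored residual
    return x

sig = np.sin(np.linspace(0.0, 3.0, 16))
recon = lap_merge_1d(lap_split_n_1d(sig, 2))
```

Because each `hi` level stores exactly what the downsample-upsample round trip lost, normalizing the levels separately (as `lap_normalize` does) rebalances frequency bands without destroying the image structure.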
```
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
```
<a id="playing"></a>
## Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. On a GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
```
render_lapnorm(T(layer)[:,:,:,65])
```
Lower layers produce features of lower complexity.
```
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
```
There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
```
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
```
<a id="deepdream"></a>
## DeepDream
Now let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow.
```
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
```
Let's load an image and populate it with DogSlugs (in case you've missed them).
```
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
```
Note that results can differ from the [Caffe](https://github.com/BVLC/caffe) implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features, owing to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
```
render_deepdream(T(layer)[:,:,:,139], img0)
```
Don't hesitate to use higher-resolution inputs (and increase the number of octaves accordingly)! Here is an [example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over a bigger image.
We hope that the visualization tricks described here will be helpful for analyzing the representations learned by neural networks, or will find use in various artistic applications.
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=3
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### Load packages
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
import tensorflow_addons as tfa
import pickle
```
### Parameters
```
dataset = "fmnist"
labels_per_class = 256 # 'full'
n_latent_dims = 1024
confidence_threshold = 0.0 # minimum confidence to include in UMAP graph for learned metric
learned_metric = True # whether to use a learned metric, or Euclidean distance between datapoints
augmented = False
min_dist= 0.001 # min_dist parameter for UMAP
negative_sample_rate = 5 # how many negative samples per positive sample
batch_size = 128 # batch size
optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train
optimizer = tfa.optimizers.MovingAverage(optimizer)
label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy
max_umap_iterations = 500 # how many times, maximum, to recompute UMAP
max_epochs_per_graph = 10 # how many epochs maximum each graph trains for (without early stopping)
graph_patience = 10 # how many times without improvement to train a new graph
min_graph_delta = 0.0025 # minimum improvement in validation accuracy to count as an improvement
from datetime import datetime
datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
datestring = (
str(dataset)
+ "_"
+ str(confidence_threshold)
+ "_"
+ str(labels_per_class)
+ "____"
+ datestring
+ '_umap_augmented'
)
print(datestring)
```
#### Load dataset
```
from tfumap.semisupervised_keras import load_dataset
(
X_train,
X_test,
X_labeled,
Y_labeled,
Y_masked,
X_valid,
Y_train,
Y_test,
Y_valid,
Y_valid_one_hot,
Y_labeled_one_hot,
num_classes,
dims
) = load_dataset(dataset, labels_per_class)
```
### Load architecture
```
from tfumap.semisupervised_keras import load_architecture
encoder, classifier, embedder = load_architecture(dataset, n_latent_dims)
```
### Load pretrained weights
```
from tfumap.semisupervised_keras import load_pretrained_weights
encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier)
```
#### Compute pretrained accuracy
```
# test current acc
pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True)
pretrained_predictions = np.argmax(pretrained_predictions, axis=1)
pretrained_acc = np.mean(pretrained_predictions == Y_test)
print('pretrained acc: {}'.format(pretrained_acc))
```
### Get a, b parameters for embeddings
```
from tfumap.semisupervised_keras import find_a_b
a_param, b_param = find_a_b(min_dist=min_dist)
```
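`find_a_b` presumably wraps UMAP's `find_ab_params`, which fits the low-dimensional similarity curve 1/(1 + a·x^(2b)) to an exponential membership curve controlled by `min_dist`. A grid-search sketch of that fit — the curve shapes are assumptions about the library's internals, and a coarse grid replaces its non-linear least squares:

```python
import numpy as np

def find_a_b_sketch(min_dist=0.001, spread=1.0):
    # target membership curve: 1 inside min_dist, exponential decay outside
    x = np.linspace(0.0, 3.0 * spread, 300)
    target = np.where(x <= min_dist, 1.0, np.exp(-(x - min_dist) / spread))
    best_a, best_b, best_err = None, None, np.inf
    # coarse grid search in place of the library's non-linear least squares
    for a in np.linspace(0.5, 3.0, 60):
        for b in np.linspace(0.5, 2.0, 60):
            fit = 1.0 / (1.0 + a * x ** (2 * b))
            err = float(np.mean((fit - target) ** 2))
            if err < best_err:
                best_a, best_b, best_err = a, b, err
    return best_a, best_b, best_err

a_fit, b_fit, err = find_a_b_sketch(min_dist=0.001)
```

Smaller `min_dist` values push the fitted curve toward a sharper shoulder, which is how the parameter controls how tightly points pack in the embedding.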
### Build network
```
from tfumap.semisupervised_keras import build_model
model = build_model(
batch_size=batch_size,
a_param=a_param,
b_param=b_param,
dims=dims,
encoder=encoder,
classifier=classifier,
negative_sample_rate=negative_sample_rate,
optimizer=optimizer,
label_smoothing=label_smoothing,
embedder = embedder,
)
```
### Build labeled iterator
```
from tfumap.semisupervised_keras import build_labeled_iterator
labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims)
```
### Training
```
from livelossplot import PlotLossesKerasTF
from tfumap.semisupervised_keras import get_edge_dataset
from tfumap.semisupervised_keras import zip_datasets
```
#### Callbacks
```
# plot losses callback
groups = {'accuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']}
plotlosses = PlotLossesKerasTF(groups=groups)
history_list = []
current_validation_acc = 0
batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int)
epochs_since_last_improvement = 0
current_umap_iterations = 0
current_epoch = 0
from tfumap.paths import MODEL_DIR, ensure_dir
save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring
ensure_dir(save_folder / 'test_loss.npy')
for cui in tqdm(np.arange(current_epoch, max_umap_iterations)):
if len(history_list) > graph_patience+1:
previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list]
best_of_patience = np.max(previous_history[-graph_patience:])
best_of_previous = np.max(previous_history[:-graph_patience])
if (best_of_previous + min_graph_delta) > best_of_patience:
print('Early stopping')
break
# make dataset
edge_dataset = get_edge_dataset(
model,
augmented,
classifier,
encoder,
X_train,
Y_masked,
batch_size,
confidence_threshold,
labeled_dataset,
dims,
learned_metric = learned_metric
)
# zip dataset
zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size)
# train dataset
history = model.fit(
zipped_ds,
epochs= current_epoch + max_epochs_per_graph,
initial_epoch = current_epoch,
validation_data=(
(X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)),
{"classifier": Y_valid_one_hot},
),
callbacks = [plotlosses],
max_queue_size = 100,
steps_per_epoch = batches_per_epoch,
#verbose=0
)
current_epoch+=len(history.history['loss'])
history_list.append(history)
# save score
class_pred = classifier.predict(encoder.predict(X_test))
class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test)
np.save(save_folder / 'test_loss.npy', (np.nan, class_acc))
# save weights
encoder.save_weights((save_folder / "encoder").as_posix())
classifier.save_weights((save_folder / "classifier").as_posix())
# save history
with open(save_folder / 'history.pickle', 'wb') as file_pi:
pickle.dump([i.history for i in history_list], file_pi)
current_umap_iterations += 1
if len(history_list) > graph_patience+1:
previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list]
best_of_patience = np.max(previous_history[-graph_patience:])
best_of_previous = np.max(previous_history[:-graph_patience])
if (best_of_previous + min_graph_delta) > best_of_patience:
print('Early stopping')
#break
plt.plot(previous_history)
```
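The early-stopping rule used twice above — compare the best validation accuracy over the last `graph_patience` graphs with the best before them — can be isolated as a small helper (the accuracy histories here are hypothetical):

```python
def should_stop(history, patience=10, min_delta=0.0025):
    """history: one mean validation accuracy per retrained UMAP graph."""
    if len(history) <= patience + 1:
        return False
    best_recent = max(history[-patience:])
    best_before = max(history[:-patience])
    # stop when the recent best fails to beat the earlier best by min_delta
    return (best_before + min_delta) > best_recent

plateaued = [0.80] * 6 + [0.801] * 10            # hypothetical plateaued run
improving = [0.5 + 0.02 * i for i in range(16)]  # hypothetical steadily improving run
```

The `patience + 1` guard mirrors the loop's `len(history_list) > graph_patience+1` condition: a verdict is only meaningful once there is history on both sides of the patience window.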
### Save embedding
```
z = encoder.predict(X_train)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.prod(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
np.save(save_folder / 'train_embedding.npy', embedding)
```
```
import sys
sys.path.append('../scripts/')
from robot import *
from scipy.stats import multivariate_normal
import random # added
import copy
class Particle:
def __init__(self, init_pose, weight):
self.pose = init_pose
self.weight = weight
def motion_update(self, nu, omega, time, noise_rate_pdf):
ns = noise_rate_pdf.rvs()
pnu = nu + ns[0]*math.sqrt(abs(nu)/time) + ns[1]*math.sqrt(abs(omega)/time)
pomega = omega + ns[2]*math.sqrt(abs(nu)/time) + ns[3]*math.sqrt(abs(omega)/time)
self.pose = IdealRobot.state_transition(pnu, pomega, time, self.pose)
    def observation_update(self, observation, envmap, distance_dev_rate, direction_dev): # changed
for d in observation:
obs_pos = d[0]
obs_id = d[1]
            ## compute the landmark's distance and bearing from the particle pose and the map ##
pos_on_map = envmap.landmarks[obs_id].pos
particle_suggest_pos = IdealCamera.observation_function(self.pose, pos_on_map)
            ## likelihood calculation ##
distance_dev = distance_dev_rate*particle_suggest_pos[0]
cov = np.diag(np.array([distance_dev**2, direction_dev**2]))
self.weight *= multivariate_normal(mean=particle_suggest_pos, cov=cov).pdf(obs_pos)
class Mcl:
def __init__(self, envmap, init_pose, num, motion_noise_stds={"nn":0.19, "no":0.001, "on":0.13, "oo":0.2}, \
distance_dev_rate=0.14, direction_dev=0.05):
self.particles = [Particle(init_pose, 1.0/num) for i in range(num)]
self.map = envmap
self.distance_dev_rate = distance_dev_rate
self.direction_dev = direction_dev
v = motion_noise_stds
c = np.diag([v["nn"]**2, v["no"]**2, v["on"]**2, v["oo"]**2])
self.motion_noise_rate_pdf = multivariate_normal(cov=c)
def motion_update(self, nu, omega, time):
for p in self.particles: p.motion_update(nu, omega, time, self.motion_noise_rate_pdf)
def observation_update(self, observation):
for p in self.particles:
p.observation_update(observation, self.map, self.distance_dev_rate, self.direction_dev)
self.resampling()
    def resampling(self): ### systematic resampling
        ws = np.cumsum([e.weight for e in self.particles]) # accumulate the weights (the last element is the total weight)
        if ws[-1] < 1e-100: ws = [e + 1e-100 for e in ws] # guard against the total weight being zero
        step = ws[-1]/len(self.particles) # if the weights are not normalized, the step is (total weight)/N
r = np.random.uniform(0.0, step)
cur_pos = 0
        ps = [] # list of particles to draw
while(len(ps) < len(self.particles)):
if r < ws[cur_pos]:
                ps.append(self.particles[cur_pos]) # cur_pos could overflow in principle; exception handling omitted
r += step
else:
cur_pos += 1
        self.particles = [copy.deepcopy(e) for e in ps] # the rest is the same as the previous implementation
for p in self.particles: p.weight = 1.0/len(self.particles)
def draw(self, ax, elems):
xs = [p.pose[0] for p in self.particles]
ys = [p.pose[1] for p in self.particles]
        vxs = [math.cos(p.pose[2])*p.weight*len(self.particles) for p in self.particles] # reflect the weight in the arrow length
        vys = [math.sin(p.pose[2])*p.weight*len(self.particles) for p in self.particles] # reflect the weight in the arrow length
elems.append(ax.quiver(xs, ys, vxs, vys, \
            angles='xy', scale_units='xy', scale=1.5, color="blue", alpha=0.5)) # changed
class EstimationAgent(Agent):
def __init__(self, time_interval, nu, omega, estimator):
super().__init__(nu, omega)
self.estimator = estimator
self.time_interval = time_interval
self.prev_nu = 0.0
self.prev_omega = 0.0
def decision(self, observation=None):
self.estimator.motion_update(self.prev_nu, self.prev_omega, self.time_interval)
self.prev_nu, self.prev_omega = self.nu, self.omega
self.estimator.observation_update(observation)
return self.nu, self.omega
def draw(self, ax, elems):
self.estimator.draw(ax, elems)
def trial():
time_interval = 0.1
world = World(30, time_interval, debug=False)
    ### create a map and add three landmarks ###
m = Map()
for ln in [(-4,2), (2,-3), (3,3)]: m.append_landmark(Landmark(*ln))
world.append(m)
    ### create the robot ###
initial_pose = np.array([0, 0, 0]).T
    estimator = Mcl(m, initial_pose, 100) # pass the map m
a = EstimationAgent(time_interval, 0.2, 10.0/180*math.pi, estimator)
r = Robot(initial_pose, sensor=Camera(m), agent=a, color="red")
world.append(r)
world.draw()
trial()
```
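The systematic-resampling loop inside `Mcl.resampling` can be exercised standalone with plain index bookkeeping. Note the two properties that make the scheme attractive: the number of particles is preserved, and a zero-weight particle can never be drawn:

```python
import numpy as np

def systematic_resample(weights, seed=0):
    ws = np.cumsum(weights)              # last element is the total weight
    step = ws[-1] / len(weights)         # works even for unnormalized weights
    r = np.random.default_rng(seed).uniform(0.0, step)  # single random offset
    idx, cur = [], 0
    while len(idx) < len(weights):
        if r < ws[cur]:
            idx.append(cur)              # draw the current particle
            r += step                    # advance by one fixed step
        else:
            cur += 1
    return idx

idx = systematic_resample([0.1, 0.0, 0.6, 0.3])
```

Because only one random number is drawn, a particle with weight w is selected either floor(N·w) or ceil(N·w) times (for normalized weights), giving lower variance than independent multinomial resampling.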
```
# ## Import necessary dependencies
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2
from PIL import Image
from collections import Counter
import sqlite3
import re
import nltk
nltk.download('stopwords')
nltk.download('gutenberg')
nltk.download('punkt')
from keras.preprocessing import text
from keras.utils import np_utils
from keras.preprocessing import sequence
import pydot
pd.options.display.max_colwidth = 200
%matplotlib inline
```
## Load in the data from the database
```
dbconn = sqlite3.connect('./data/newsclassifier.db')
train_data_df = pd.read_sql_query('SELECT * FROM train_data_sample', dbconn)
headline_bagofwords_df = pd.read_sql_query('SELECT * FROM headline_bagofwords', dbconn)
dbconn.commit()
dbconn.close()
```
### Check if the data was loaded correctly
```
train_data_df.head()
headline_bagofwords_df.head()
train_data_df.drop('index', axis=1, inplace=True)
train_data_df.head()
headline_bagofwords_df.drop('index', axis=1, inplace=True)
headline_bagofwords_df.head()
```
### We have bag of words already, let's make a Bag of N-Grams
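A bag of n-grams just counts contiguous word windows; the `ngram_range=(2, 3)` used below collects bigrams and trigrams. A pure-Python sketch of the counting (the sample headline is made up):

```python
from collections import Counter

def word_ngrams(text, n_min=2, n_max=3):
    words = text.lower().split()
    grams = []
    for n in range(n_min, n_max + 1):
        # slide a window of width n across the token list
        grams += [' '.join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return grams

counts = Counter(word_ngrams("New tax plan hits new tax brackets"))
```

`CountVectorizer` does the same thing across the whole corpus, with the extra `min_df`, `token_pattern`, and stop-word filtering applied first.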
```
# Use countvectorizer to get a word vector
cv = CountVectorizer(min_df = 2, lowercase = True, token_pattern=r'(?u)\b[A-Za-z]{2,}\b',
strip_accents = 'ascii', ngram_range = (2, 3),
stop_words = 'english')
cv_matrix = cv.fit_transform(train_data_df.headline_cleaned).toarray()
# below: use this instead to restrict the data to a specific category.
# cv_matrix = cv.fit_transform(train_data_df[train_data_df.category == 1].headline_cleaned).toarray()
# get all unique words in the corpus
vocab = cv.get_feature_names()
# produce a dataframe including the feature names
headline_bagofngrams_df = pd.DataFrame(cv_matrix, columns=vocab)
```
### Make sure we got the dataframe output for the Bag of N-Grams
```
headline_bagofngrams_df.head()
```
### Let's explore the data we got through plots and tables
```
word_count_dict = {}
for word in vocab:
word_count_dict[word] = int(sum(headline_bagofngrams_df.loc[:, word]))
counter = Counter(word_count_dict)
freq_df = pd.DataFrame.from_records(counter.most_common(20),
columns=['Top 20 words', 'Frequency'])
freq_df.plot(kind='bar', x='Top 20 words');
```
## TF/IDF
### Unigram TF/IDF
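TF-IDF down-weights terms that occur in many documents. Using the smoothed formula sklearn applies by default, idf(t) = ln((1 + n) / (1 + df(t))) + 1, a quick hand check on a toy three-document corpus:

```python
import math

docs = [["stocks", "rise"], ["stocks", "fall"], ["rain", "falls"]]
n = len(docs)

def idf(term):
    df = sum(term in d for d in docs)            # document frequency
    return math.log((1 + n) / (1 + df)) + 1      # sklearn's smooth_idf variant

common, rare = idf("stocks"), idf("rain")        # "stocks" in 2 docs, "rain" in 1
```

The vectorizer then multiplies each term frequency by its idf and L2-normalizes each row; `sublinear_tf=True` additionally replaces tf with 1 + ln(tf).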
```
tfidf_vect = TfidfVectorizer(sublinear_tf = True, min_df = 2, lowercase = True,
strip_accents = 'ascii', ngram_range = (1, 1),
stop_words = 'english', use_idf = True, token_pattern=r'(?u)\b[A-Za-z]{2,}\b')
tfidf_unigram = tfidf_vect.fit_transform(train_data_df.headline_cleaned).toarray()
# get all unique words in the corpus
vocab = tfidf_vect.get_feature_names()
tfidf_unigram = pd.DataFrame(np.round(tfidf_unigram, 2), columns = vocab)
tfidf_unigram.head()
```
### N-Gram TF/IDF
```
tfidf_vect = TfidfVectorizer(sublinear_tf = True, min_df = 2, lowercase = True,
strip_accents = 'ascii', ngram_range = (2, 3),
stop_words = 'english', use_idf = True, token_pattern=r'(?u)\b[A-Za-z]{2,}\b')
tfidf_ngram = tfidf_vect.fit_transform(train_data_df.headline_cleaned).toarray()
# get all unique words in the corpus
vocab = tfidf_vect.get_feature_names()
tfidf_ngram = pd.DataFrame(np.round(tfidf_ngram, 2), columns = vocab)
tfidf_ngram.head()
```
### Character TF/IDF
```
tfidf_vect = TfidfVectorizer(analyzer = 'char', sublinear_tf = True, min_df = 2,
lowercase = True, strip_accents = 'ascii', ngram_range = (2, 3),
stop_words = 'english', use_idf = True, token_pattern=r'\w{1,}')
tfidf_char = tfidf_vect.fit_transform(train_data_df.headline_cleaned).toarray()
# get all unique words in the corpus
vocab = tfidf_vect.get_feature_names()
tfidf_char = pd.DataFrame(np.round(tfidf_char, 2), columns = vocab)
tfidf_char.head()
word_count_dict = {}
for word in vocab:
word_count_dict[word] = int(sum(tfidf_char.loc[:, word]))
counter = Counter(word_count_dict)
freq_df = pd.DataFrame.from_records(counter.most_common(50),
columns=['Top 50 words', 'Frequency'])
freq_df.plot(kind='bar', x='Top 50 words');
```
## Word Embedding
Build the Corpus Vocabulary
```
tokenizer = text.Tokenizer()
tokenizer.fit_on_texts(train_data_df.headline_cleaned)
word2id = tokenizer.word_index
# build vocabulary of unique words
word2id['PAD'] = 0
id2word = {v:k for k, v in word2id.items()}
wids = [[word2id[w] for w in text.text_to_word_sequence(doc)] for doc in train_data_df.headline_cleaned]
vocab_size = len(word2id)
embed_size = 100
window_size = 2 # context window size
print('Vocabulary Size:', vocab_size)
print('Vocabulary Sample:', list(word2id.items())[:100])
# Build a CBOW (context, target) generator
def generate_context_word_pairs(corpus, window_size, vocab_size):
context_length = window_size*2
for words in corpus:
sentence_length = len(words)
for index, word in enumerate(words):
context_words = []
label_word = []
start = index - window_size
end = index + window_size + 1
context_words.append([words[i]
for i in range(start, end)
if 0 <= i < sentence_length
and i != index])
label_word.append(word)
x = sequence.pad_sequences(context_words, maxlen=context_length)
y = np_utils.to_categorical(label_word, vocab_size)
yield (x, y)
# Test this out for some samples
i = 0
for x, y in generate_context_word_pairs(corpus=wids, window_size=window_size, vocab_size=vocab_size):
if 0 not in x[0]:
        print('Context (X):', [id2word[w] for w in x[0]], '-> Target (Y):', id2word[np.argwhere(y[0])[0][0]])
if i == 20:
break
i += 1
# ## Using gensim to build Word2Vec
from gensim.models import word2vec
# tokenize sentences in corpus
wpt = nltk.WordPunctTokenizer()
tokenized_corpus = [wpt.tokenize(document) for document in train_data_df.headline_cleaned]
# Set values for various parameters
feature_size = 100 # Word vector dimensionality
window_context = 30 # Context window size
min_word_count = 1 # Minimum word count
sample = 1e-3 # Downsample setting for frequent words
w2v_model = word2vec.Word2Vec(tokenized_corpus, size=feature_size,
window=window_context, min_count=min_word_count,
sample=sample, iter=50)
# view similar words based on gensim's model
#similar_words = {search_term: [item[0] for item in w2v_model.wv.most_similar([search_term], topn=5)]
# for search_term in ['god', 'jesus', 'noah', 'egypt', #'john', 'gospel', 'moses','famine']}
#similar_words
# ## Visualize word embedding
```
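The sliding-window extraction at the heart of `generate_context_word_pairs` can be sketched standalone (window of 1 on a toy sentence):

```python
def context_target_pairs(words, window_size=1):
    pairs = []
    for i, target in enumerate(words):
        context = [words[j]
                   for j in range(i - window_size, i + window_size + 1)
                   if 0 <= j < len(words) and j != i]   # clip at sentence edges
        pairs.append((context, target))
    return pairs

pairs = context_target_pairs(["the", "cat", "sat", "down"])
```

The notebook version additionally pads each context to a fixed length and one-hot encodes the target, which is what `sequence.pad_sequences` and `np_utils.to_categorical` handle.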
# Sentiment Identification
## BACKGROUND
A large multinational corporation is seeking to automatically identify the sentiment their customer base expresses on social media. They would like to expand this capability to multiple languages. Many third-party tools exist for sentiment analysis; however, they need help with under-resourced languages.
## GOAL
Train a sentiment classifier (Positive, Negative, Neutral) on a corpus of the provided documents. Your goal is to
maximize accuracy. There is special interest in being able to accurately detect negative sentiment. The training data
includes documents from a wide variety of sources, not merely social media, and some of it may be inconsistently
labeled. Please describe the business outcomes in your work sample including how data limitations impact your results
and how these limitations could be addressed in a larger project.
## DATA
Link to data: http://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set
```
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_colwidth', None)
```
## Data Exploration
```
import emoji
import functools
import operator
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
import spacy
import string
import os
import tensorflow as tf
from tensorflow import keras
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import plot_confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
DATA_DIR = os.path.abspath('../data/raw')
data_path = os.path.join(DATA_DIR, 'Roman Urdu Dataset.csv')
raw_df = pd.read_csv(data_path, skipinitialspace=True, names=['comment', 'sentiment', 'nan'], encoding='utf-8')
raw_df.tail()
#Print a concise summary of a DataFrame.
raw_df.info()
# Check missing data
raw_df.isnull().sum()
# For each column of the dataframe, we want to know numbers of unique attributes and the attributes values.
for column in raw_df.columns:
unique_attribute = (raw_df[column].unique())
print('{0:20s} {1:5d}\t'.format(column, len(unique_attribute)), unique_attribute[0:10])
```
## Initial Data Preprocessing
-- Drop the NaN column
-- Replace "Neative" -> "Negative"
```
cleaned_df = raw_df.copy()
cleaned_df.drop('nan',axis=1,inplace=True)
cleaned_df.dropna(axis=0, subset = ['comment'], inplace=True)
cleaned_df.replace(to_replace='Neative', value='Negative', inplace=True)
cleaned_df.dropna(subset=['sentiment'], inplace=True)
cleaned_df.head(5)
```
## Examine the class label imbalance
```
print(f'There are total {cleaned_df.shape[0]} comments')
print(f'There are {cleaned_df[cleaned_df["sentiment"] == "Positive"].shape[0]} Positive comments')
print(f'There are {cleaned_df[cleaned_df["sentiment"] == "Neutral"].shape[0]} Neutral comments')
print(f'There are {cleaned_df[cleaned_df["sentiment"] == "Negative"].shape[0]} Negative comments')
```
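The three filtered counts above can be computed in a single pass — `value_counts()` in pandas, or `Counter` in plain Python (toy labels below):

```python
from collections import Counter

sentiments = ["Positive", "Negative", "Neutral", "Positive", "Negative", "Positive"]
counts = Counter(sentiments)    # one pass instead of three boolean filters
```

Either way, the resulting counts are what reveal the class imbalance the SMOTE oversampler imported above is meant to address.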
# Data Preprocessing:
### 1. Encode the Labels
### 2. Tokenization:
-- Lower-case each token
-- Remove digits 0-9
-- Remove punctuation
-- (TO DO) Remove stop words
### 3. Train, Val, Test split
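`LabelEncoder` assigns integers to classes in sorted order, which is why the mapping in the next cell comes out as Negative → 0, Neutral → 1, Positive → 2. A minimal sketch of the same mapping:

```python
labels = ["Neutral", "Positive", "Negative", "Positive"]
classes = sorted(set(labels))                     # ['Negative', 'Neutral', 'Positive']
to_id = {c: i for i, c in enumerate(classes)}
encoded = [to_id[l] for l in labels]              # mimics LabelEncoder fit/transform
```

Sorting makes the mapping deterministic, so the same label always gets the same integer regardless of the order the data arrives in.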
```
# Encode the output labels:
# Negative -> 0
# Neutral -> 1
# Positive -> 2
le = LabelEncoder()
le.fit(cleaned_df['sentiment'])
cleaned_df['sentiment']= le.transform(cleaned_df['sentiment'])
# tokenize a single document
import functools
import operator
import re
import string

import emoji
import nltk  # requires the 'punkt' tokenizer data: nltk.download('punkt')

def tokenizer(doc):
    """Tokenize a single document: lowercase, strip digits and punctuation,
    and split emoji into separate tokens."""
    tokens = [word.lower() for word in nltk.word_tokenize(doc)]
    tokens = [re.sub(r'[0-9]', '', word) for word in tokens]
    tokens = [re.sub(r'[' + string.punctuation + ']', '', word) for word in tokens]
    tokens = ' '.join(tokens)
    # Note: emoji.get_emoji_regexp() was removed in emoji>=2.0; this needs emoji<2.0
    em_split_emoji = emoji.get_emoji_regexp().split(tokens)
    em_split_whitespace = [substr.split() for substr in em_split_emoji]
    em_split = functools.reduce(operator.concat, em_split_whitespace)
    tokens = ' '.join(em_split)
    return tokens
cleaned_df['comment'] = cleaned_df['comment'].apply(lambda x: tokenizer(x))
train_df, test_df = train_test_split(cleaned_df, test_size=0.2,random_state=40)
train_labels = train_df['sentiment']
test_labels = test_df['sentiment']
train_features = train_df['comment']
test_features = test_df['comment']
```
# Gridsearch Pipeline: LogisticRegression
### TF-IDF
```
from typing import Any, List, Tuple
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, f_classif

def vectorize(train_texts: List[str], train_labels, test_texts: List[str]) -> Tuple[Any, Any]:
    """Convert the documents into word n-grams and vectorize them.

    :param train_texts: List of training texts
    :param train_labels: An array of labels from the training dataset
    :param test_texts: List of test texts
    :return: A tuple of (vectorized training texts, vectorized test texts)
    """
    kwargs = {
        'ngram_range': NGRAM_RANGE,
        'analyzer': TOKEN_MODE,
        'min_df': MIN_DOCUMENT_FREQUENCY
    }
    # Use TfidfVectorizer to convert the raw documents to a matrix of TF-IDF features
    vectorizer = TfidfVectorizer(**kwargs)
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)
    # Keep the 30,000 most informative features (ANOVA F-test)
    selector = SelectKBest(f_classif, k=min(30000, X_train.shape[1]))
    selector.fit(X_train, train_labels)
    X_train = selector.transform(X_train)
    X_test = selector.transform(X_test)
    return X_train, X_test
from sklearn.linear_model import LogisticRegression

NGRAM_RANGE = (1, 2)
TOKEN_MODE = 'word'
MIN_DOCUMENT_FREQUENCY = 2
X_train, X_test = vectorize(train_features, train_labels, test_features)
# gridsearch
lr_tfidf = Pipeline([
    ('clf', LogisticRegression(random_state=40, solver='saga'))
])
C_OPTIONS = [1, 3, 5, 7, 10]
param_grid = [
{
'clf__penalty': ['l1', 'l2'],
'clf__C': C_OPTIONS
}
]
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=2,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, train_labels)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf_lr = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf_lr.score(X_test, test_labels))
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, confusion_matrix, plot_confusion_matrix
y_pred_lr_tfidf = gs_lr_tfidf.predict(X_test)
y_test_exp = test_labels.to_numpy()
print('Precision Test for model: {}' .format(precision_score(y_test_exp, y_pred_lr_tfidf, average=None)))
print('Recall Test for model: {}' .format(recall_score(test_labels, y_pred_lr_tfidf, average=None)))
print('F1 Test for model: {}' .format(f1_score(test_labels, y_pred_lr_tfidf, average=None)))
print('Confusion matrix (Test):')
print(confusion_matrix(test_labels, y_pred_lr_tfidf))
title_options = [("Confusion matrix, without normalization", None),
                 ("Normalized confusion matrix", 'true')]
classes_names = np.array(['Negative', 'Neutral', 'Positive'])
fig = plt.figure(figsize=(18,9))
nrows=1
ncols=2
for idx, value in enumerate(title_options):
    ax = fig.add_subplot(nrows, ncols, idx+1)
    disp = plot_confusion_matrix(clf_lr, X_test, test_labels,
                                 display_labels=classes_names,
                                 cmap=plt.cm.Blues,
                                 normalize=value[1],
                                 ax=ax)
    disp.ax_.set_title(value[0])
```
# Multiclass Classification -> Binary Classification
Since the model can only accurately predict negative sentiment about ~50% of the time, I want to see if I can improve upon this result. One idea is to combine the neutral and positive sentiments into one label and turn this analysis into a binary classification problem.
Since we have a class imbalance issue when we reformulate this as a binary classification problem, we use SMOTE to generate synthetic data within the minority class so that both classes have equal numbers of training samples. The test accuracy improves from ~50% to ~70% on <b>negative sentiment</b>!
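SMOTE's core idea — synthesizing a new minority-class point by interpolating between a real sample and one of its nearest neighbors — can be sketched in plain NumPy (a toy illustration of the idea only, not the imblearn implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy minority-class samples in 2-D feature space
minority = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0]])

# Pick a sample and find its nearest neighbor (excluding itself) ...
i = 0
dists = np.linalg.norm(minority - minority[i], axis=1)
j = int(np.argsort(dists)[1])

# ... then interpolate at a random fraction along the connecting segment
lam = rng.random()
synthetic = minority[i] + lam * (minority[j] - minority[i])
print(synthetic)
```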
```
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
# Combine the neutral/positive labels into one label -> 1
train_labels_binary = train_labels.map(lambda x: 1 if (x==2 or x==1) else 0)
test_labels_binary = test_labels.map(lambda x: 1 if (x==2 or x==1) else 0)
train_labels_binary.value_counts()
# Class Imbalance Issues
print(f'There are {train_labels_binary.value_counts()[0]} that can be classified as negative sentiments')
print(f'There are {train_labels_binary.value_counts()[1]} that can be classified as non-negative sentiments' )
# tfidf = TfidfVectorizer(strip_accents=None,
# lowercase=False,
# preprocessor=None)
param_grid = [{'clf__penalty': ['l1', 'l2'],
               'clf__C': [1, 3, 5, 7, 10]},  # C must be strictly positive
              ]
]
lr_tfidf = Pipeline([
('smote', SMOTE(sampling_strategy=1.0, random_state=5, k_neighbors=10)),
('clf', LogisticRegression(random_state=1, solver = 'saga'))
])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=2,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, train_labels_binary)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf_lr = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf_lr.score(X_test, test_labels_binary))
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, confusion_matrix
y_pred_lr_tfidf = gs_lr_tfidf.predict(X_test)
y_test_exp_binary = test_labels_binary.to_numpy()
print('Precision Test for model: {}' .format(precision_score(y_test_exp_binary, y_pred_lr_tfidf, average=None)))
print('Recall Test for model: {}' .format(recall_score(test_labels_binary, y_pred_lr_tfidf, average=None)))
print('F1 Test for model: {}' .format(f1_score(test_labels_binary, y_pred_lr_tfidf, average=None)))
print('ROC AUC Test: %.3f for Logistic Regression' % roc_auc_score(y_test_exp_binary, y_pred_lr_tfidf, average=None))
print('Confusion matrix (Test):')
print(confusion_matrix(test_labels_binary, y_pred_lr_tfidf))
title_options = [("Confusion matrix, without normalization", None),
                 ("Normalized confusion matrix", 'true')]
classes_names = np.array(['Negative', 'Positive/Neutral'])
fig = plt.figure(figsize=(18,9))
nrows=1
ncols=2
for idx, value in enumerate(title_options):
    ax = fig.add_subplot(nrows, ncols, idx+1)
    disp = plot_confusion_matrix(clf_lr, X_test, test_labels_binary,
                                 display_labels=classes_names,
                                 cmap=plt.cm.Blues,
                                 normalize=value[1],
                                 ax=ax)
    disp.ax_.set_title(value[0])
```
# Gridsearch Pipeline: Naive Bayes
```
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [
{
'clf__alpha': [0.25, 0.3, 0.35, 0.4, 0.45, 0.50]
},
]
nb_tfidf = Pipeline([
#('vect', tfidf),
('smote', SMOTE(sampling_strategy=1.0, random_state=5, k_neighbors=3)),
('clf', MultinomialNB())
])
gs_nb_tfidf = GridSearchCV(nb_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=2,
n_jobs=-1)
gs_nb_tfidf.fit(X_train, train_labels_binary)
print('Best parameter set: %s ' % gs_nb_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_nb_tfidf.best_score_)
clf_nb = gs_nb_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf_nb.score(X_test, test_labels_binary))
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, confusion_matrix
y_pred_nb_tfidf = gs_nb_tfidf.predict(X_test)
y_test_exp_binary = test_labels_binary.to_numpy()
print('Precision Test for model: {}' .format(precision_score(y_test_exp_binary, y_pred_nb_tfidf, average=None)))
print('Recall Test for model: {}' .format(recall_score(test_labels_binary, y_pred_nb_tfidf, average=None)))
print('F1 Test for model: {}' .format(f1_score(test_labels_binary, y_pred_nb_tfidf, average=None)))
print('ROC AUC Test: %.3f for Naive Bayes' % roc_auc_score(y_test_exp_binary, y_pred_nb_tfidf, average=None))
print('Confusion matrix (Test):')
print(confusion_matrix(test_labels_binary, y_pred_nb_tfidf))
title_options = [("Confusion matrix, without normalization", None),
                 ("Normalized confusion matrix", 'true')]
classes_names = np.array(['Negative', 'Positive/Neutral'])
fig = plt.figure(figsize=(18,9))
nrows=1
ncols=2
for idx, value in enumerate(title_options):
    ax = fig.add_subplot(nrows, ncols, idx+1)
    disp = plot_confusion_matrix(clf_nb, X_test, test_labels_binary,
                                 # display_labels=sorted(test_labels_binary.unique()),
                                 cmap=plt.cm.Blues,
                                 display_labels=classes_names,
                                 normalize=value[1],
                                 ax=ax)
    disp.ax_.set_title(value[0])
```
| github_jupyter |
# Feature Engineering

## Objective
Data preprocessing and engineering techniques generally refer to the addition, deletion, or transformation of data.
The time spent on identifying data engineering needs can be significant and requires you to spend substantial time understanding your data...
> _"Live with your data before you plunge into modeling"_ - Leo Breiman
In this module we introduce:
- an example of preprocessing numerical features,
- two common ways to preprocess categorical features,
- using a scikit-learn pipeline to chain preprocessing and model training.
## Basic prerequisites
Let's go ahead and import a couple required libraries and import our data.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p class="last">We will import additional libraries and functions as we proceed but we do so at the time of using the libraries and functions as that provides better learning context.</p>
</div>
```
import pandas as pd
# to display nice model diagram
from sklearn import set_config
set_config(display='diagram')
# import data
adult_census = pd.read_csv('../data/adult-census.csv')
# separate feature & target data
target = adult_census['class']
features = adult_census.drop(columns='class')
```
## Selection based on data types
Typically, data types fall into two categories:
* __Numeric__: a quantity represented by a real or integer number.
* __Categorical__: a discrete value, typically represented by string labels (but not only) taken from a finite list of possible choices.
```
features.dtypes
```
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;"><b>Warning</b></p>
<p class="last">Do not take dtype output at face value! It is possible to have categorical data represented by numbers (e.g. <tt class="docutils literal">education_num</tt>), and <tt class="docutils literal">object</tt> dtypes can represent data that would be better represented as continuous numbers (e.g. dates).
Bottom line: always understand how your data represents your features!
</p>
</div>
We can separate categorical and numerical variables using their data types to identify them.
There are a few ways we can do this. Here, we make use of [`make_column_selector`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_selector.html) helper to select the corresponding columns.
```
from sklearn.compose import make_column_selector as selector
# create selector object based on data type
numerical_columns_selector = selector(dtype_exclude=object)
categorical_columns_selector = selector(dtype_include=object)
# get columns of interest
numerical_columns = numerical_columns_selector(features)
categorical_columns = categorical_columns_selector(features)
# results in a list containing relevant column names
numerical_columns
```
## Preprocessing numerical data
Scikit-learn works "out of the box" with numeric features. However, some algorithms make assumptions regarding the distribution of our features.
We see that our numeric features span across different ranges:
```
numerical_features = features[numerical_columns]
numerical_features.describe()
```
Normalizing our features so that they have mean = 0 and standard deviation = 1 helps ensure our features align with algorithm assumptions.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p>Here are some reasons for scaling features:</p>
<ul class="last simple">
<li>Models that rely on the distance between a pair of samples, for instance
k-nearest neighbors, should be trained on normalized features to make each
feature contribute approximately equally to the distance computations.</li>
<li>Many models such as logistic regression use a numerical solver (based on
gradient descent) to find their optimal parameters. This solver converges
faster when the features are scaled.</li>
</ul>
</div>
Whether or not a machine learning model requires normalization of the features depends on the model family. Linear models such as logistic regression generally benefit from scaling the features while other models such as tree-based models (i.e. decision trees, random forests) do not need such preprocessing (but will not suffer from it).
We can apply such normalization using a scikit-learn transformer called [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html).
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(numerical_features)
```
The `fit` method for transformers is similar to the `fit` method for
predictors. The main difference is that the former has a single argument (the
feature matrix), whereas the latter has two arguments (the feature matrix and the
target).

In this case, the algorithm needs to compute the mean and standard deviation
for each feature and store them into some NumPy arrays. Here, these
statistics are the model states.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p class="last">The fact that the model states of this scaler are arrays of means and
standard deviations is specific to the <tt class="docutils literal">StandardScaler</tt>. Other
scikit-learn transformers will compute different statistics and store them
as model states, in the same fashion.</p>
</div>
We can inspect the computed means and standard deviations.
```
scaler.mean_
scaler.scale_
```
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">Scikit-learn convention: if an attribute is learned from the data, its name
ends with an underscore (i.e. <tt class="docutils literal">_</tt>), as in <tt class="docutils literal">mean_</tt> and <tt class="docutils literal">scale_</tt> for the
<tt class="docutils literal">StandardScaler</tt>.</p>
</div>
Once we have called the `fit` method, we can perform data transformation by
calling the method `transform`.
```
numerical_features_scaled = scaler.transform(numerical_features)
numerical_features_scaled
```
Let's illustrate the internal mechanism of the `transform` method and put it
in perspective with what we already saw with predictors.

The `transform` method for transformers is similar to the `predict` method
for predictors. It uses a predefined function, called a **transformation
function**, and uses the model states and the input data. However, instead of
outputting predictions, the job of the `transform` method is to output a
transformed version of the input data.
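This mechanism is easy to verify by hand: for `StandardScaler`, `transform` is simply `(X - mean_) / scale_` applied elementwise. A small sketch on toy data (not the census features):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = StandardScaler().fit(X)

# Apply the stored statistics manually and compare with transform()
manual = (X - scaler.mean_) / scaler.scale_
print(np.allclose(scaler.transform(X), manual))  # -> True
```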
Finally, the method `fit_transform` is a shorthand method to call
successively `fit` and then `transform`.

```
# fitting and transforming in one step
scaler.fit_transform(numerical_features)
```
Notice that the mean of all the columns is close to 0 and the standard deviation in all cases is close to 1:
```
numerical_features = pd.DataFrame(
numerical_features_scaled,
columns=numerical_columns
)
numerical_features.describe()
```
## Model pipelines
We can easily combine sequential operations with a scikit-learn
[`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), which chains together operations and is used as any other
classifier or regressor. The helper function [`make_pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html#sklearn.pipeline.make_pipeline) will create a
`Pipeline`: it takes as arguments the successive transformations to perform,
followed by the classifier or regressor model.
```
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), LogisticRegression())
model
```
Let's divide our data into train and test sets and then apply and score our logistic regression model:
```
from sklearn.model_selection import train_test_split
# split our data into train & test
X_train, X_test, y_train, y_test = train_test_split(numerical_features, target, random_state=123)
# fit our pipeline model
model.fit(X_train, y_train)
# score our model on the test data
model.score(X_test, y_test)
```
## Preprocessing categorical data
Unfortunately, Scikit-learn does not accept categorical features in their raw form. Consequently, we need to transform them into numerical representations.
The following presents typical ways of dealing with categorical variables by encoding them, namely **ordinal encoding** and **one-hot encoding**.
### Encoding ordinal categories
The most intuitive strategy is to encode each category with a different
number. The [`OrdinalEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html) will transform the data in such manner.
We will start by encoding a single column to understand how the encoding
works.
```
from sklearn.preprocessing import OrdinalEncoder
# let's illustrate with the 'education' feature
education_column = features[["education"]]
encoder = OrdinalEncoder()
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
We see that each category in `"education"` has been replaced by a numeric
value. We could check the mapping between the categories and the numerical
values by checking the fitted attribute `categories_`.
```
encoder.categories_
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p class="last"><tt class="docutils literal">OrdinalEncoder</tt> transforms each category value into the corresponding index in <tt class="docutils literal">encoder.categories_</tt>.</p>
</div>
However, be careful when applying this encoding strategy:
using this integer representation leads downstream predictive models
to assume that the values are ordered (0 < 1 < 2 < 3... for instance).
By default, `OrdinalEncoder` uses a lexicographical strategy to map string
category labels to integers. This strategy is arbitrary and often
meaningless. For instance, suppose the dataset has a categorical variable
named `"size"` with categories such as "S", "M", "L", "XL". We would like the
integer representation to respect the meaning of the sizes by mapping them to
increasing integers such as `0, 1, 2, 3`.
However, the lexicographical strategy used by default would map the labels
"S", "M", "L", "XL" to 2, 1, 0, 3, by following the alphabetical order.
The `OrdinalEncoder` class accepts a `categories` argument to
pass categories in the expected ordering explicitly (`categories[i]` holds the categories expected in the ith column).
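Before supplying an explicit ordering, it is easy to confirm the default lexicographical behavior described above (a toy sketch with a hypothetical `size` column):

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

sizes = np.array([["S"], ["M"], ["L"], ["XL"]])
encoded = OrdinalEncoder().fit_transform(sizes)

# Alphabetical order is ['L', 'M', 'S', 'XL'], so S -> 2, M -> 1, L -> 0, XL -> 3
print(encoded.ravel())  # -> [2. 1. 0. 3.]
```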
```
ed_levels = [' Preschool', ' 1st-4th', ' 5th-6th', ' 7th-8th', ' 9th', ' 10th', ' 11th',
' 12th', ' HS-grad', ' Prof-school', ' Some-college', ' Assoc-acdm',
' Assoc-voc', ' Bachelors', ' Masters', ' Doctorate']
encoder = OrdinalEncoder(categories=[ed_levels])
education_encoded = encoder.fit_transform(education_column)
education_encoded
encoder.categories_
```
If a categorical variable does not carry any meaningful order information
then this encoding might be misleading to downstream statistical models and
you might consider using one-hot encoding instead (discussed next).
### Encoding nominal categories
[`OneHotEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) is an alternative encoder that converts the categorical levels into new columns.
We will start by encoding a single feature (e.g. `"education"`) to illustrate
how the encoding works.
```
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;"><b>Note</b></p>
<p><tt class="docutils literal">sparse=False</tt> is used in the <tt class="docutils literal">OneHotEncoder</tt> for didactic purposes, namely
easier visualization of the data.</p>
<p class="last">Sparse matrices are efficient data structures when most of your matrix
elements are zero. They won't be covered in detail in this workshop. If you
want more details about them, you can look at
<a class="reference external" href="https://scipy-lectures.org/advanced/scipy_sparse/introduction.html#why-sparse-matrices">this</a>.</p>
</div>
Viewing this as a data frame provides a more intuitive illustration:
```
feature_names = encoder.get_feature_names(input_features=["education"])
pd.DataFrame(education_encoded, columns=feature_names)
```
As we can see, each category (unique value) became a column; the encoding
returned, for each sample, a 1 to specify which category it belongs to.
Let's apply this encoding to all the categorical features:
```
# get all categorical features
categorical_features = features[categorical_columns]
# one-hot encode all features
categorical_features_encoded = encoder.fit_transform(categorical_features)
# view as a data frame
columns_encoded = encoder.get_feature_names(categorical_features.columns)
pd.DataFrame(categorical_features_encoded, columns=columns_encoded).head()
```
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;"><b>Warning</b></p>
<p class="last">One-hot encoding can significantly increase the number of features in our data. In this case we went from 8 features to 102! If you have a data set with many categorical variables and those categorical variables in turn have many unique levels, the number of features can explode. In these cases you may want to explore ordinal encoding or some other alternative.</p>
</div>
### Choosing an encoding strategy
Choosing an encoding strategy will depend on the underlying models and the
type of categories (i.e. ordinal vs. nominal).
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">In general <tt class="docutils literal">OneHotEncoder</tt> is the encoding strategy used when the
downstream models are <strong>linear models</strong> while <tt class="docutils literal">OrdinalEncoder</tt> is often a
good strategy with <strong>tree-based models</strong>.</p>
</div>
Using an `OrdinalEncoder` will output ordinal categories. This means
that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The
impact of violating this ordering assumption is really dependent on the
downstream models. Linear models will be impacted by misordered categories
while tree-based models will not.
You can still use an `OrdinalEncoder` with linear models but you need to be
sure that:
- the original categories (before encoding) have an ordering;
- the encoded categories follow the same ordering as the original
categories.
One-hot encoding categorical variables with high cardinality can cause
computational inefficiency in tree-based models. Because of this, it is not recommended
to use `OneHotEncoder` in such cases even if the original categories do not
have a given order.
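The width difference is easy to see on a toy column: one-hot encoding emits one column per unique level, while ordinal encoding always emits a single column (a sketch with a hypothetical high-cardinality `city` feature):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# 100 distinct category levels, one sample each
city = np.array([[f"city_{i}"] for i in range(100)])

onehot_width = OneHotEncoder().fit_transform(city).shape[1]    # one column per level
ordinal_width = OrdinalEncoder().fit_transform(city).shape[1]  # a single integer column
print(onehot_width, ordinal_width)  # -> 100 1
```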
## Using numerical and categorical variables together
Now let's look at how to combine some of these tasks so we can preprocess both numeric and categorical data.
First, let's get our train & test data established:
```
# drop the duplicated column `"education-num"` as stated in the data exploration notebook
features = features.drop(columns='education-num')
# create selector object based on data type
numerical_columns_selector = selector(dtype_exclude=object)
categorical_columns_selector = selector(dtype_include=object)
# get columns of interest
numerical_columns = numerical_columns_selector(features)
categorical_columns = categorical_columns_selector(features)
# split into train & test sets
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=123)
```
Scikit-learn provides a [`ColumnTransformer`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) class which will send specific
columns to a specific transformer, making it easy to fit a single predictive
model on a dataset that combines both kinds of variables together.
We first define the columns depending on their data type:
* **one-hot encoding** will be applied to categorical columns.
* **numerical scaling** will be applied to numerical columns, which will be standardized.
We then create our `ColumnTransformer` by specifying three values:
1. the preprocessor name,
2. the transformer, and
3. the columns.
First, let's create the preprocessors for the numerical and categorical
parts.
```
categorical_preprocessor = OneHotEncoder(handle_unknown="ignore")
numerical_preprocessor = StandardScaler()
```
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">We can use the <tt class="docutils literal">handle_unknown</tt> parameter to ignore rare categories that may show up in test data but were not present in the training data.</p>
</div>
Now, we create the transformer and associate each of these preprocessors
with their respective columns.
```
from sklearn.compose import ColumnTransformer
preprocessor = ColumnTransformer([
('one-hot-encoder', categorical_preprocessor, categorical_columns),
('standard_scaler', numerical_preprocessor, numerical_columns)
])
```
We can take a minute to represent graphically the structure of a
`ColumnTransformer`:

A `ColumnTransformer` does the following:
* It **splits the columns** of the original dataset based on the column names
or indices provided. We will obtain as many subsets as the number of
transformers passed into the `ColumnTransformer`.
* It **transforms each subset**. A specific transformer is applied to
each subset: it will internally call `fit_transform` or `transform`. The
output of this step is a set of transformed datasets.
* It then **concatenates the transformed datasets** into a single dataset.
The important thing is that `ColumnTransformer` is like any other
scikit-learn transformer. In particular it can be combined with a classifier
in a `Pipeline`:
```
model = make_pipeline(preprocessor, LogisticRegression(max_iter=500))
model
```
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;"><b>Warning</b></p>
<p class="last">Including non-scaled data can cause some algorithms to iterate
longer in order to converge. Since our categorical features are not scaled it's often recommended to increase the number of allowed iterations for linear models.</p>
</div>
```
# fit our model
_ = model.fit(X_train, y_train)
# score on test set
model.score(X_test, y_test)
```
## Wrapping up
Unfortunately, we only have time to scratch the surface of feature engineering in this workshop. However, this module should provide you with a strong foundation of how to apply the more common feature preprocessing tasks.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;"><b>Tip</b></p>
<p class="last">Scikit-learn provides many feature engineering options. Learn more here: <a href="https://scikit-learn.org/stable/modules/preprocessing.html">https://scikit-learn.org/stable/modules/preprocessing.html</a></p>
</div>
In this module we learned how to:
- normalize numerical features with `StandardScaler`,
- ordinal and one-hot encode categorical features with `OrdinalEncoder` and `OneHotEncoder`, and
- chain feature preprocessing and model training steps together with `ColumnTransformer` and `make_pipeline`.
| github_jupyter |
```
from sklearn import metrics  # needed for adjusted_rand_score / adjusted_mutual_info_score below
from sklearn.cluster import KMeans
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
trueLables = pd.read_csv('bbcsport_classes.csv',delimiter=",", header=None).values
print(trueLables.shape)
terms = pd.read_csv('bbcsport_terms.csv',delimiter=",", header=None).values
print(terms.shape)
X = pd.read_csv('bbcsport_mtx.csv',delimiter=",", header=None).values
print(X.shape)
kmeans = KMeans(n_clusters=5)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
print(centroids.shape)
print(labels.shape)
totalrandom=0
totalmutual=0
for k in range(50):
    kmeans = KMeans(n_clusters=5, n_init=10)
    kmeans.fit(X)
    centroids = kmeans.cluster_centers_
    labels = kmeans.labels_
    trueLables = np.ravel(trueLables)
    totalrandom = totalrandom + metrics.adjusted_rand_score(labels, trueLables)
    totalmutual = totalmutual + metrics.adjusted_mutual_info_score(labels, trueLables)
print("rand index averaged over 50 iterations:", totalrandom/50)
print("mutual information averaged over 50 iterations:", totalmutual/50)
from sklearn.cluster import KMeans
from matplotlib import pyplot as plt
from wordcloud import WordCloud, STOPWORDS
import numpy as np
import pandas as pd
trueLables = pd.read_csv('bbcsport_classes.csv',delimiter=",", header=None).values
print(trueLables.shape)
terms = pd.read_csv('bbcsport_terms.csv',delimiter=",", header=None).values
print(terms.shape)
X = pd.read_csv('bbcsport_mtx.csv',delimiter=",", header=None).values
print(X.shape)
kmeans = KMeans(n_clusters=5)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
# Sum the centroid rows to get an aggregate weight per term across all clusters
centroids = centroids.sum(axis=0)
Y = pd.read_csv('bbcsport_terms.csv',delimiter=",", header=None).values
Y = Y.reshape(len(Y),)
Z = dict(zip(Y.T,centroids))
wordcloud = WordCloud(stopwords=STOPWORDS,background_color='white', width=1200, height=1000).generate_from_frequencies(Z)
fig = plt.figure()
plt.imshow(wordcloud)
plt.show()
X = pd.read_csv('bbcsport_mtx.csv',delimiter=",", header=None).values
print(X.shape)
mu = X.mean(axis=0)
sigma = X.std(axis=0)
Xnorm = (X - mu)/sigma  # assumes every term column has nonzero standard deviation
print (Xnorm[:,:])
m = len(Xnorm)
covmat = np.dot(Xnorm.T, Xnorm)/m
print(covmat)
# covmat is symmetric, so the eigenvalues are real;
# note np.linalg.eig does not return them in sorted order
S, U = np.linalg.eig(covmat)
print('Eigen values: {}'.format(S))
print('Eigen vectors:')
print(U)
Z = np.dot(Xnorm,U)
plt.plot(Z, '.', markersize=10)
plt.title('Data after PCA')
k = 1
Ured = U[:,0:k]
Zred = np.dot(Xnorm,Ured)
Xrec = np.dot(Zred, Ured.T)
plt.plot(Xrec, '.', markersize=10)
plt.title('Reconstructed Normalized Data')
```
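The choice of k = 1 above is arbitrary. A common heuristic keeps the smallest k whose leading eigenvalues explain, say, 95% of total variance — a minimal NumPy sketch on a made-up eigenvalue spectrum (in practice, sort the eigenvalues from `np.linalg.eig` in descending order first, since it does not guarantee ordering):

```python
import numpy as np

# Hypothetical eigenvalue spectrum, already sorted in descending order
S = np.array([5.0, 3.0, 1.5, 0.4, 0.1])

# Cumulative fraction of total variance explained by the first k components
explained = np.cumsum(S) / np.sum(S)

# Smallest k whose leading components reach the 95% threshold
k = int(np.searchsorted(explained, 0.95)) + 1
print(k)  # -> 3
```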
| github_jupyter |
https://keras.io/examples/structured_data/structured_data_classification_from_scratch/

Change the names of things. Edit it the way I want // stop it from serving as an example in the future.

```
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
import pydot
file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
dataframe = pd.read_csv(file_url)
dataframe.head()
val_dataframe = dataframe.sample(frac=0.2, random_state=1337)
train_dataframe = dataframe.drop(val_dataframe.index)
def dataframe_to_dataset(dataframe):
    dataframe = dataframe.copy()
    labels = dataframe.pop("target")
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    ds = ds.shuffle(buffer_size=len(dataframe))
    return ds


train_ds = dataframe_to_dataset(train_dataframe)
val_ds = dataframe_to_dataset(val_dataframe)
```

```
for x, y in train_ds.take(1):
    print("Input:", x)
    print("Target:", y)
```

Understand this better.

```
train_ds = train_ds.batch(32)
val_ds = val_ds.batch(32)
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras.layers.experimental.preprocessing import CategoryEncoding
from tensorflow.keras.layers.experimental.preprocessing import StringLookup


def encode_numerical_feature(feature, name, dataset):
    # Create a Normalization layer for our feature
    normalizer = Normalization()

    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))

    # Learn the statistics of the data
    normalizer.adapt(feature_ds)

    # Normalize the input feature
    encoded_feature = normalizer(feature)
    return encoded_feature


def encode_string_categorical_feature(feature, name, dataset):
    # Create a StringLookup layer which will turn strings into integer indices
    index = StringLookup()

    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))

    # Learn the set of possible string values and assign them a fixed integer index
    index.adapt(feature_ds)

    # Turn the string input into integer indices
    encoded_feature = index(feature)

    # Create a CategoryEncoding for our integer indices
    encoder = CategoryEncoding(output_mode="binary")

    # Prepare a dataset of indices
    feature_ds = feature_ds.map(index)

    # Learn the space of possible indices
    encoder.adapt(feature_ds)

    # Apply one-hot encoding to our indices
    encoded_feature = encoder(encoded_feature)
    return encoded_feature


def encode_integer_categorical_feature(feature, name, dataset):
    # Create a CategoryEncoding for our integer indices
    encoder = CategoryEncoding(output_mode="binary")

    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))

    # Learn the space of possible indices
    encoder.adapt(feature_ds)

    # Apply one-hot encoding to our indices
    encoded_feature = encoder(feature)
    return encoded_feature
# Categorical features encoded as integers
sex = keras.Input(shape=(1,), name="sex", dtype="int64")
cp = keras.Input(shape=(1,), name="cp", dtype="int64")
fbs = keras.Input(shape=(1,), name="fbs", dtype="int64")
restecg = keras.Input(shape=(1,), name="restecg", dtype="int64")
exang = keras.Input(shape=(1,), name="exang", dtype="int64")
ca = keras.Input(shape=(1,), name="ca", dtype="int64")

# Categorical feature encoded as string
thal = keras.Input(shape=(1,), name="thal", dtype="string")

# Numerical features
age = keras.Input(shape=(1,), name="age")
trestbps = keras.Input(shape=(1,), name="trestbps")
chol = keras.Input(shape=(1,), name="chol")
thalach = keras.Input(shape=(1,), name="thalach")
oldpeak = keras.Input(shape=(1,), name="oldpeak")
slope = keras.Input(shape=(1,), name="slope")

all_inputs = [
    sex,
    cp,
    fbs,
    restecg,
    exang,
    ca,
    thal,
    age,
    trestbps,
    chol,
    thalach,
    oldpeak,
    slope,
]

# Integer categorical features
sex_encoded = encode_integer_categorical_feature(sex, "sex", train_ds)
cp_encoded = encode_integer_categorical_feature(cp, "cp", train_ds)
fbs_encoded = encode_integer_categorical_feature(fbs, "fbs", train_ds)
restecg_encoded = encode_integer_categorical_feature(restecg, "restecg", train_ds)
exang_encoded = encode_integer_categorical_feature(exang, "exang", train_ds)
ca_encoded = encode_integer_categorical_feature(ca, "ca", train_ds)

# String categorical features
thal_encoded = encode_string_categorical_feature(thal, "thal", train_ds)

# Numerical features
age_encoded = encode_numerical_feature(age, "age", train_ds)
trestbps_encoded = encode_numerical_feature(trestbps, "trestbps", train_ds)
chol_encoded = encode_numerical_feature(chol, "chol", train_ds)
thalach_encoded = encode_numerical_feature(thalach, "thalach", train_ds)
oldpeak_encoded = encode_numerical_feature(oldpeak, "oldpeak", train_ds)
slope_encoded = encode_numerical_feature(slope, "slope", train_ds)

all_features = layers.concatenate(
    [
        sex_encoded,
        cp_encoded,
        fbs_encoded,
        restecg_encoded,
        exang_encoded,
        slope_encoded,
        ca_encoded,
        thal_encoded,
        age_encoded,
        trestbps_encoded,
        chol_encoded,
        thalach_encoded,
        oldpeak_encoded,
    ]
)
x = layers.Dense(32, activation="relu")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=50, validation_data=val_ds)
sample = {
    "age": 60,
    "sex": 1,
    "cp": 1,
    "trestbps": 145,
    "chol": 233,
    "fbs": 1,
\"restecg\": 2,\n \"thalach\": 150,\n \"exang\": 0,\n \"oldpeak\": 2.3,\n \"slope\": 3,\n \"ca\": 0,\n \"thal\": \"fixed\",\n}\n\ninput_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}\npredictions = model.predict(input_dict)\n\nprint(\n \"This particular patient had a %.1f percent probability \"\n \"of having a heart disease, as evaluated by our model.\" % (100 * predictions[0][0],)\n)\n```\n\n" | github_jupyter |
## Dependencies
```
import warnings, json, random, os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.metrics import mean_squared_error
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
SEED = 0
seed_everything(SEED)
warnings.filterwarnings('ignore')
```
# Model parameters
```
config = {
"BATCH_SIZE": 64,
"EPOCHS": 100,
"LEARNING_RATE": 1e-3,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"PB_SEQ_LEN": 107,
"PV_SEQ_LEN": 130,
}
with open('config.json', 'w') as json_file:
    json.dump(config, json_file)
config
```
# Load data
```
database_base_path = '/kaggle/input/stanford-covid-vaccine/'
train = pd.read_json(database_base_path + 'train.json', lines=True)
test = pd.read_json(database_base_path + 'test.json', lines=True)
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
```
## Auxiliary functions
```
token2int = {x:i for i, x in enumerate('().ACGUBEHIMSX')}
token2int_seq = {x:i for i, x in enumerate('ACGU')}
token2int_struct = {x:i for i, x in enumerate('().')}
token2int_loop = {x:i for i, x in enumerate('BEHIMSX')}
def plot_metrics(history):
    metric_list = [m for m in list(history.keys()) if m != 'lr']
size = len(metric_list)//2
fig, axes = plt.subplots(size, 1, sharex='col', figsize=(20, size * 5))
if size > 1:
axes = axes.flatten()
else:
axes = [axes]
for index in range(len(metric_list)//2):
metric_name = metric_list[index]
val_metric_name = metric_list[index+size]
axes[index].plot(history[metric_name], label='Train %s' % metric_name)
axes[index].plot(history[val_metric_name], label='Validation %s' % metric_name)
axes[index].legend(loc='best', fontsize=16)
axes[index].set_title(metric_name)
axes[index].axvline(np.argmin(history[metric_name]), linestyle='dashed')
axes[index].axvline(np.argmin(history[val_metric_name]), linestyle='dashed', color='orange')
plt.xlabel('Epochs', fontsize=16)
sns.despine()
plt.show()
def preprocess_inputs(df, encoder, cols=['sequence', 'structure', 'predicted_loop_type']):
input_lists = df[cols].applymap(lambda seq: [encoder[x] for x in seq]).values.tolist()
return np.transpose(np.array(input_lists), (0, 2, 1))
def evaluate_model(df, y_true, y_pred, target_cols):
    metrics = []
    metrics_clean_sn = []
    metrics_noisy_sn = []
    metrics_clean_sig = []
    metrics_noisy_sig = []
    # Complete data
    for idx, col in enumerate(target_cols):
        metrics.append(np.sqrt(np.mean((y_true[:, :, idx] - y_pred[:, :, idx])**2)))
    metrics = [np.mean(metrics)] + metrics
    # SN_filter = 1
    idxs = df[df['SN_filter'] == 1].index
    for idx, col in enumerate(target_cols):
        metrics_clean_sn.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
    metrics_clean_sn = [np.mean(metrics_clean_sn)] + metrics_clean_sn
    # SN_filter = 0
    idxs = df[df['SN_filter'] == 0].index
    for idx, col in enumerate(target_cols):
        metrics_noisy_sn.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
    metrics_noisy_sn = [np.mean(metrics_noisy_sn)] + metrics_noisy_sn
    # signal_to_noise > 1
    idxs = df[df['signal_to_noise'] > 1].index
    for idx, col in enumerate(target_cols):
        metrics_clean_sig.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
    metrics_clean_sig = [np.mean(metrics_clean_sig)] + metrics_clean_sig
    # signal_to_noise <= 1
    idxs = df[df['signal_to_noise'] <= 1].index
    for idx, col in enumerate(target_cols):
        metrics_noisy_sig.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
    metrics_noisy_sig = [np.mean(metrics_noisy_sig)] + metrics_noisy_sig
    metrics_df = pd.DataFrame({'Metric/MCRMSE': ['Overall'] + target_cols, 'Complete': metrics,
                               'Clean (SN)': metrics_clean_sn, 'Noisy (SN)': metrics_noisy_sn,
                               'Clean (signal)': metrics_clean_sig,
                               'Noisy (signal)': metrics_noisy_sig})
    return metrics_df
def get_dataset(x, y=None, labeled=True, shuffled=True, batch_size=32, buffer_size=-1, seed=0):
if labeled:
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],},
{'outputs': y}))
else:
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],}))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(buffer_size)
return dataset
def get_dataset_sampling(x, y=None, shuffled=True, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],},
{'outputs': y}))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
return dataset
```
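The per-column-RMSE-then-average pattern in `evaluate_model` is the MCRMSE (mean column-wise RMSE) named in its output DataFrame; a minimal standalone sketch, assuming targets shaped `(samples, seq_len, n_targets)`:

```python
import numpy as np

def mcrmse(y_true, y_pred):
    # RMSE for each target column, averaged over samples and positions...
    rmse_per_col = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=(0, 1)))
    # ...then the mean across columns gives the overall score.
    return rmse_per_col.mean(), rmse_per_col

y_true = np.zeros((4, 68, 3))
y_pred = np.full((4, 68, 3), 0.5)
overall, per_col = mcrmse(y_true, y_pred)
print(overall)  # 0.5 for a constant 0.5 offset
```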
# Model
```
# Original version predicted all 5 targets at once:
# def model_fn(embed_dim=75, hidden_dim=128, dropout=.5, sp_dropout=.2, pred_len=68, n_outputs=5):
def model_fn(embed_dim=75, hidden_dim=128, dropout=.5, sp_dropout=.2, pred_len=68, n_outputs=1):
inputs_seq = L.Input(shape=(None, 1), name='inputs_seq')
inputs_struct = L.Input(shape=(None, 1), name='inputs_struct')
inputs_loop = L.Input(shape=(None, 1), name='inputs_loop')
shared_embed = L.Embedding(input_dim=len(token2int), output_dim=embed_dim, name='shared_embedding')
embed_seq = shared_embed(inputs_seq)
embed_struct = shared_embed(inputs_struct)
embed_loop = shared_embed(inputs_loop)
x_concat = L.concatenate([embed_seq, embed_struct, embed_loop], axis=2, name='embedding_concatenate')
x_reshaped = L.Reshape((-1, x_concat.shape[2]*x_concat.shape[3]))(x_concat)
x = L.SpatialDropout1D(sp_dropout)(x_reshaped)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
# Since we are only making predictions on the first part of each sequence, we have to truncate it
x_truncated = x[:, :pred_len]
outputs = L.Dense(n_outputs, activation='linear', name='outputs')(x_truncated)
model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop], outputs=outputs)
opt = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
model.compile(optimizer=opt, loss=losses.MeanSquaredError())
return model
model = model_fn()
model.summary()
```
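As a quick sanity check on the tensor shapes inside `model_fn`, here is an illustrative NumPy sketch (batch size is arbitrary; the sequence length matches the public test sequences):

```python
import numpy as np

# Each input is (batch, seq_len, 1); the shared 75-dim embedding maps it to
# (batch, seq_len, 1, 75). Concatenating the three embedded inputs on axis=2
# gives (batch, seq_len, 3, 75), and the Reshape flattens the last two axes.
batch, seq_len, embed_dim = 2, 107, 75
embedded = [np.zeros((batch, seq_len, 1, embed_dim)) for _ in range(3)]
x_concat = np.concatenate(embedded, axis=2)
x_reshaped = x_concat.reshape(batch, seq_len, -1)
print(x_concat.shape, x_reshaped.shape)  # (2, 107, 3, 75) (2, 107, 225)
```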
# Pre-process
```
feature_cols = ['sequence', 'structure', 'predicted_loop_type']
target_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C']
pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C']
encoder_list = [token2int, token2int, token2int]
train_features = np.array([preprocess_inputs(train, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
train_labels = np.array(train[pred_cols].values.tolist()).transpose((0, 2, 1))
train_labels_reac = np.array(train[['reactivity']].values.tolist()).transpose((0, 2, 1))
train_labels_ph = np.array(train[['deg_Mg_pH10']].values.tolist()).transpose((0, 2, 1))
train_labels_c = np.array(train[['deg_Mg_50C']].values.tolist()).transpose((0, 2, 1))
public_test = test.query("seq_length == 107").copy()
private_test = test.query("seq_length == 130").copy()
x_test_public = np.array([preprocess_inputs(public_test, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
x_test_private = np.array([preprocess_inputs(private_test, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
```
# Training
```
AUTO = tf.data.experimental.AUTOTUNE
skf = StratifiedKFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
history_list = []
oof = train[['id']].copy()
oof_preds = np.zeros(train_labels.shape)
test_public_preds = np.zeros((x_test_public.shape[0], config['PB_SEQ_LEN'], len(pred_cols)))
test_private_preds = np.zeros((x_test_private.shape[0], config['PV_SEQ_LEN'], len(pred_cols)))
train['signal_to_noise_int'] = train['signal_to_noise'].astype(int)
tasks = ['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C']
tasks_labels = [train_labels_reac, train_labels_ph, train_labels_c]
for task_idx, task in enumerate(tasks):
print(f'\n ===== {task_idx} {task} =====')
for fold,(train_idx, valid_idx) in enumerate(skf.split(train_labels, train['signal_to_noise_int'])):
if fold >= config['N_USED_FOLDS']:
break
print(f'\nFOLD: {fold+1}')
### Create datasets
# Create clean and noisy datasets
        train_clean_idxs = np.intersect1d(train[train['signal_to_noise'] > 1].index, train_idx)
        valid_clean_idxs = np.intersect1d(train[train['signal_to_noise'] > 1].index, valid_idx)
x_train = train_features[train_clean_idxs]
y_train = tasks_labels[task_idx][train_clean_idxs]
x_valid = train_features[valid_clean_idxs]
y_valid = tasks_labels[task_idx][valid_clean_idxs]
train_ds = get_dataset(x_train, y_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
valid_ds = get_dataset(x_valid, y_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
### Model
K.clear_session()
model = model_fn()
model_path = f'model_{task}_{fold}.h5'
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1)
### Train (reactivity)
history = model.fit(train_ds,
validation_data=valid_ds,
callbacks=[es, rlrp],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
verbose=2).history
history_list.append(history)
# Save last model weights
model.save_weights(model_path)
### Inference
valid_preds = model.predict(get_dataset(train_features[valid_idx], labeled=False, shuffled=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED))
oof_preds[valid_idx, :, task_idx] = valid_preds.reshape(valid_preds.shape[:2])
# Short sequence (public test)
        model = model_fn(pred_len=config['PB_SEQ_LEN'])
model.load_weights(model_path)
test_public_preds[:, :, task_idx] += model.predict(test_public_ds).reshape(test_public_preds.shape[:2]) * (1 / config['N_USED_FOLDS'])
# Long sequence (private test)
        model = model_fn(pred_len=config['PV_SEQ_LEN'])
model.load_weights(model_path)
test_private_preds[:, :, task_idx] += model.predict(test_private_ds).reshape(test_private_preds.shape[:2]) * (1 / config['N_USED_FOLDS'])
```
## Model loss graph
```
# history_list holds one entry per (task, fold) pair, not one per fold
for i, history in enumerate(history_list):
    print(f'\nMODEL: {i+1}')
print(f"Train {np.array(history['loss']).min():.5f} Validation {np.array(history['val_loss']).min():.5f}")
plot_metrics(history)
```
# Post-processing
```
# Assign values to OOF set
# Assign labels
for idx, col in enumerate(pred_cols):
val = train_labels[:, :, idx]
oof = oof.assign(**{col: list(val)})
# Assign preds
for idx, col in enumerate(pred_cols):
val = oof_preds[:, :, idx]
oof = oof.assign(**{f'{col}_pred': list(val)})
# Assign values to test set
preds_ls = []
for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]:
for i, uid in enumerate(df.id):
single_pred = preds[i]
single_df = pd.DataFrame(single_pred, columns=pred_cols)
single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
preds_ls.append(single_df)
preds_df = pd.concat(preds_ls)
# Fill missing columns (if there are any)
missing_columns = [col for col in target_cols if col not in pred_cols]
for col in missing_columns:
preds_df[col] = 0
```
# Model evaluation
```
display(evaluate_model(train, train_labels, oof_preds, pred_cols))
```
# Visualize test predictions
```
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
```
# Test set predictions
```
display(submission.head(10))
display(submission.describe())
submission.to_csv('submission.csv', index=False)
```
| github_jupyter |
## PyCity Schools Analysis
##### Top and Bottom Performing Schools
- The most immediate observation is that Charter schools populate the top 5 performing schools and District schools populate the bottom 5 performing schools for standardized test scores.
- Overall, District schools had lower standardized test scores than Charter schools.
##### Subject Grades
- Math and Reading standardized test scores seem to stay the same across the board between grades in a school.
- If the test grades deviated wildly from each other between different grades, further analysis would be required. That doesn't appear to be the case.
- Therefore, any solutions that would improve one grade's performance would probably work for all the other grades as well, across most of these schools.
##### Budget & Population
- Large Schools (population greater than 2000 students) appear to have a significantly lower overall passing rate than schools with less than 2000 students, by about 20%.
- It appears that the more schools spent per student in dollar amounts, the worse they performed on these standardized tests.
- Charter schools have smaller budgets than District schools. They also have fewer students.
##### Conclusions
- Given the observations and conditions above, one of the conclusions we might draw from this dataset is that a higher school budget doesn't necessarily mean better performance on standardized tests.
- Perhaps there is a correlation between the *size* of a school and performance on standardized tests, however, and it would be a good set of data to delve further into.
- Specifically, this data shows that the more students a school has, the worse its performance on standardized tests.
##### Opinions
- This could be the result of numerous factors; in my opinion it would be due to the lack of attention to every individual student in the time crunch required of teachers to get their lessons across. However this data doesn't necessarily prove that theory in and of itself so we would require further data. Perhaps something like *Average Amount of Time Teacher Spends With Individual Students* or something similar.
##### Other
- Why is there someone named 'Dr. Richard Scott' in the student list under Huang High School? Are some teachers actually mixed in with these students? What if some of them don't have prefixes like 'Dr.' in their name? Does this data need further scrubbing? My numbers seem generally the same as the example results we were shown so I'm just going to trust the teachers on this one.
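As a follow-up to the 'Dr. Richard Scott' question above, a quick (hypothetical) way to count title-prefixed names before deciding whether the data needs scrubbing — the tiny Series here stands in for the real student list:

```python
import pandas as pd

# Stand-in for students_complete.csv; the real check would use students_df['name'].
names = pd.Series(['Dr. Richard Scott', 'Jane Doe',
                   'Mrs. Linda Santiago', 'Victor Smith'])
# Non-capturing group (?:...) avoids pandas' "match groups" warning in str.contains.
prefixed = names[names.str.contains(r'^(?:Dr|Mr|Mrs|Ms|Miss)\.?\s')]
print(len(prefixed))  # 2 names carry a title prefix
```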
```
import pandas as pd
#import schools and then students file
schools_file = "Resources/schools_complete.csv"
students_file = "Resources/students_complete.csv"
#read one dataframe for schools
schools_df = pd.read_csv(schools_file)
# schools_df.head()
#rename 'name' column to 'school' to prepare for merge
schools_df = schools_df.rename(columns={"name":"school"})
# schools_df.head()
#read second dataframe for students
students_df = pd.read_csv(students_file)
# students_df.head()
#outer merge on 'school' column
merged_df = pd.merge(schools_df, students_df, on="school", how="outer")
merged_df.head()
#check for missing data
#merged_df.count()
#check columns
merged_df.columns
```
## District Summary
```
#Create Dataframe with headers: 'Total Schools', 'Total Students,' 'Total Budget', 'Avg Math Score', 'Avg Reading Score',
#'% Passing Math', '% Passing Reading,' % Overall Passing Rate'
totalschools = merged_df['school'].nunique()
totalschools
totalstudents = len(merged_df['name'])
totalstudents
totalbudget = sum(schools_df['budget'])
totalbudget
#score averages
mathscore = merged_df['math_score'].mean()
readingscore = merged_df['reading_score'].mean()
#passing score formulas
#return the amount of math scores above 65
#revision: actually this assignment considers 'above 70' to be passing
#anyway this formula passes the amount of students whose scores are over 70. all little harvard prodigies.
passmath = len(merged_df.loc[merged_df['math_score'] > 70, ['math_score']])
passmath
#average is the amount of passing scores divided by all students. example file reads 72.392137
#if passing score is '65' i get 0.83 which doesn't match the example file
#if passing scores is '70' i get '0.72392137' which matches the example file. so i'll go with that then.
#would be nice if the readme file specified what counts as a passing score.
percmath = passmath / totalstudents * 100
percmath
passreading = len(merged_df.loc[merged_df['reading_score'] > 70, ['reading_score']])
percreading = passreading / totalstudents * 100
#according to the example this should read 82.971662
percreading
#get overall passing rate "(Average of the above two)" reading and writing and put it all in a new DF
#in the example the result is clearly wrong and gives 80.431606 as the average of 72.392137 and 82.971662
percpassing = (percmath + percreading) / 2
percpassing
#putting it together
dist_summary = pd.DataFrame({"Total Schools": totalschools,
"Total Students": totalstudents,
"Total Budget": totalbudget,
"Average Math Score": mathscore,
"Average Reading Score": readingscore,
"% Passing Math": percmath,
"% Passing Reading": percreading,
"% Overall Passing Rate": percpassing}, index=[0])
#formating
dist_summary["Total Budget"] = dist_summary["Total Budget"].map("${:,.2f}".format)
dist_summary = dist_summary[["Total Schools",
                             "Total Students",
                             "Total Budget",
                             "Average Math Score",
                             "Average Reading Score",
                             "% Passing Math",
                             "% Passing Reading",
                             "% Overall Passing Rate"]]
dist_summary
```
## School Summary
```
#Group by school, show 'School Type', 'Total Students', 'Total School Budget', 'Per Student Budget', 'Average Math Score',
#'% Passing Math', '% Passing Reading', '% Overall Passing Rate'
school_group = merged_df.groupby(['school'])
#basic stuff
#i borrowed this schooltype thing from someone else's solution because it took me like 6 hours to figure this out to no avail.
#takeaway: strings are annoying
schooltype = schools_df.set_index('school')["type"]
totalstudents = school_group['name'].count()
totschoolbud = school_group['budget'].mean()
perstudentbud = totschoolbud / totalstudents
#average math score
mathscore = school_group['math_score'].mean()
#average reading score
readingscore = school_group['reading_score'].mean()
#% passing math
#also took these % passing from another solution because i couldn't figure out why i can't use .loc with groupby
passmath = merged_df[merged_df['math_score'] > 70].groupby('school')['Student ID'].count() / totalstudents * 100
#% passing reading
passreading = merged_df[merged_df['reading_score'] > 70].groupby('school')['Student ID'].count() / totalstudents * 100
#% overall passing rate
percpassing = (passmath + passreading) / 2
school_summary = pd.DataFrame({"School Type": schooltype,
"Total Students": totalstudents,
"Total School Budget": totschoolbud,
"Per Student Budget": perstudentbud,
"Average Math Score": mathscore,
"Average Reading Score": readingscore,
"% Passing Math": passmath,
"% Passing Reading": passreading,
"% Overall Passing": percpassing})
#formating
school_summary["Total School Budget"] = school_summary["Total School Budget"].map("${:,.2f}".format)
school_summary = school_summary[["School Type",
"Total Students",
"Total School Budget",
"Per Student Budget",
"Average Math Score",
"Average Reading Score",
"% Passing Math",
"% Passing Reading",
"% Overall Passing"]]
school_summary
```
## Top Performing Schools (By Passing Rate)
```
#sort by overall passing and put highest on top
top_schools = school_summary.sort_values(['% Overall Passing'], ascending=False)
top_schools.head()
```
## Bottom Performing Schools (By Passing Rate)
```
#"I put my thing down, flip it and reverse it" - Missy Elliot, 2002
bottom_schools = school_summary.sort_values(['% Overall Passing'], ascending=True)
bottom_schools.head()
```
## Math Scores by Grade
```
# breaking down this formula:
# first it locates all entries matching '9th' under the 'grade' column in students_df
# then it groups them by 'school'
# then it returns the mean of the values in the math_score column
# the index is still the schools from the previous dataframes so we don't have to do anything
#yes, this would probably work a lot better in a loop of some sort but i'm against the clock here
ninth = students_df.loc[students_df['grade'] == '9th'].groupby('school')['math_score'].mean()
tenth = students_df.loc[students_df['grade'] == '10th'].groupby('school')['math_score'].mean()
eleventh = students_df.loc[students_df['grade'] == '11th'].groupby('school')['math_score'].mean()
twelfth = students_df.loc[students_df['grade'] == '12th'].groupby('school')['math_score'].mean()
math_grade = pd.DataFrame({"9th": ninth.map("{:,.2f}".format),
"10th": tenth.map("{:,.2f}".format),
"11th": eleventh.map("{:,.2f}".format),
"12th": twelfth.map("{:,.2f}".format)})
math_grade = math_grade [['9th', '10th', '11th', '12th']]
math_grade
```
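As the comment above admits, the four per-grade series could be built in a loop instead of copy-pasted; a sketch of that refactor (`scores_by_grade` is a hypothetical helper, not part of the assignment):

```python
import pandas as pd

def scores_by_grade(students, score_col):
    # Average score_col per school, one column per grade.
    grades = ['9th', '10th', '11th', '12th']
    cols = {g: students.loc[students['grade'] == g]
                       .groupby('school')[score_col].mean()
            for g in grades}
    return pd.DataFrame(cols)[grades]

# Tiny stand-in for students_complete.csv:
demo = pd.DataFrame({'school': ['A', 'A', 'B', 'B'],
                     'grade': ['9th', '10th', '9th', '10th'],
                     'math_score': [80, 90, 70, 60]})
print(scores_by_grade(demo, 'math_score'))
```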
## Reading Scores by Grade
```
ninth = students_df.loc[students_df['grade'] == '9th'].groupby('school')['reading_score'].mean()
tenth = students_df.loc[students_df['grade'] == '10th'].groupby('school')['reading_score'].mean()
eleventh = students_df.loc[students_df['grade'] == '11th'].groupby('school')['reading_score'].mean()
twelfth = students_df.loc[students_df['grade'] == '12th'].groupby('school')['reading_score'].mean()
reading_grade = pd.DataFrame({"9th": ninth.map("{:,.2f}".format),
"10th": tenth.map("{:,.2f}".format),
"11th": eleventh.map("{:,.2f}".format),
"12th": twelfth.map("{:,.2f}".format)})
reading_grade = reading_grade [['9th', '10th', '11th', '12th']]
reading_grade
```
## Scores by School Spending
```
#creating bins based on example, then making a new variable to hold them
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
merged_df["Spending Range"] = pd.cut(merged_df['budget']/merged_df['size'], spending_bins, labels=group_names)
# school_summary
spending_group = merged_df.groupby("Spending Range")
#Average Math Score, Average Reading Score, % Passing Math, % Passing Reading, % Overall Passing Rate
avgmath = spending_group['math_score'].mean()
avgreading = spending_group['reading_score'].mean()
#passing math
passmath = merged_df[merged_df['math_score'] > 70].groupby("Spending Range")['Student ID'].count() / spending_group['Student ID'].count() * 100
#% passing reading
passreading = merged_df[merged_df['reading_score'] > 70].groupby("Spending Range")['Student ID'].count() / spending_group['Student ID'].count() * 100
#% overall passing rate
percpassing = (passmath + passreading) / 2
spending_scores = pd.DataFrame({"Average Math Score": avgmath.map("{:,.2f}".format),
"Average Reading Score": avgreading.map("{:,.2f}".format),
"% Passing Math": passmath.map("{:,.2f}".format),
"% Passing Reading": passreading.map("{:,.2f}".format),
"% Overall Passing Rate": percpassing.map("{:,.2f}".format)})
spending_scores = spending_scores[['Average Math Score',
'Average Reading Score',
'% Passing Math',
'% Passing Reading',
'% Overall Passing Rate']]
spending_scores.index.name = "Spending Ranges (Per Student)"
spending_scores
```
## Scores by School Size
```
#just using the Example's bins
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
merged_df['School Sizes'] = pd.cut(merged_df['size'], size_bins, labels = group_names)
size_group = merged_df.groupby('School Sizes')
#Average Math Score, Average Reading Score, % Passing Math, % Passing Reading, % Overall Passing Rate
avgmath = size_group['math_score'].mean()
avgreading = size_group['reading_score'].mean()
#passing math
passmath = merged_df[merged_df['math_score'] > 70].groupby("School Sizes")['Student ID'].count() / size_group['Student ID'].count() * 100
#% passing reading
passreading = merged_df[merged_df['reading_score'] > 70].groupby("School Sizes")['Student ID'].count() / size_group['Student ID'].count() * 100
#% overall passing rate
percpassing = (passmath + passreading) / 2
size_scores = pd.DataFrame({"Average Math Score": avgmath.map("{:,.2f}".format),
"Average Reading Score": avgreading.map("{:,.2f}".format),
"% Passing Math": passmath.map("{:,.2f}".format),
"% Passing Reading": passreading.map("{:,.2f}".format),
"% Overall Passing Rate": percpassing.map("{:,.2f}".format)})
size_scores = size_scores[['Average Math Score',
'Average Reading Score',
'% Passing Math',
'% Passing Reading',
'% Overall Passing Rate']]
size_scores.index.name = "School Size"
size_scores
```
## Scores by School Type
```
#group by school type
type_group = merged_df.groupby('type')
#Average Math Score, Average Reading Score, % Passing Math, % Passing Reading, % Overall Passing Rate
avgmath = type_group['math_score'].mean()
avgreading = type_group['reading_score'].mean()
#passing math
passmath = merged_df[merged_df['math_score'] > 70].groupby("type")['Student ID'].count() / type_group['Student ID'].count() * 100
#% passing reading
passreading = merged_df[merged_df['reading_score'] > 70].groupby("type")['Student ID'].count() / type_group['Student ID'].count() * 100
#% overall passing rate
percpassing = (passmath + passreading) / 2
type_scores = pd.DataFrame({"Average Math Score": avgmath.map("{:,.2f}".format),
"Average Reading Score": avgreading.map("{:,.2f}".format),
"% Passing Math": passmath.map("{:,.2f}".format),
"% Passing Reading": passreading.map("{:,.2f}".format),
"% Overall Passing Rate": percpassing.map("{:,.2f}".format)})
type_scores = type_scores[['Average Math Score',
'Average Reading Score',
'% Passing Math',
'% Passing Reading',
'% Overall Passing Rate']]
type_scores.index.name = "School Type"
type_scores
```
| github_jupyter |
Greyscale ℓ1-TV Denoising
=========================
This example demonstrates the use of class [tvl1.TVL1Denoise](http://sporco.rtfd.org/en/latest/modules/sporco.admm.tvl1.html#sporco.admm.tvl1.TVL1Denoise) for removing salt & pepper noise from a greyscale image using Total Variation regularization with an ℓ1 data fidelity term (ℓ1-TV denoising).
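In the standard formulation (a sketch; SPORCO's optional weighting terms are omitted), the problem being solved is

$$
\mathrm{argmin}_{\mathbf{x}} \; \| \mathbf{x} - \mathbf{s} \|_1 + \lambda \left\| \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \right\|_1 \;,
$$

where $\mathbf{s}$ is the noisy image, $G_r$ and $G_c$ are row and column gradient operators, and $\lambda$ is the regularization parameter set below.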
```
from __future__ import print_function
from builtins import input
import numpy as np
from sporco.admm import tvl1
from sporco import util
from sporco import signal
from sporco import metric
from sporco import plot
plot.config_notebook_plotting()
```
Load reference image.
```
img = util.ExampleImages().image('monarch.png', scaled=True,
idxexp=np.s_[:,160:672], gray=True)
```
Construct test image corrupted by 20% salt & pepper noise.
```
np.random.seed(12345)
imgn = signal.spnoise(img, 0.2)
```
Set regularization parameter and options for ℓ1-TV denoising solver. The regularization parameter used here has been manually selected for good performance.
```
lmbda = 8e-1
opt = tvl1.TVL1Denoise.Options({'Verbose': True, 'MaxMainIter': 200,
'RelStopTol': 5e-3, 'gEvalY': False,
'AutoRho': {'Enabled': True}})
```
Create solver object and solve, returning the denoised image ``imgr``.
```
b = tvl1.TVL1Denoise(imgn, lmbda, opt)
imgr = b.solve()
```
Display solve time and denoising performance.
```
print("TVL1Denoise solve time: %5.2f s" % b.timer.elapsed('solve'))
print("Noisy image PSNR: %5.2f dB" % metric.psnr(img, imgn))
print("Denoised image PSNR: %5.2f dB" % metric.psnr(img, imgr))
```
Display reference, corrupted, and denoised images.
```
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.imview(img, title='Reference', fig=fig)
plot.subplot(1, 3, 2)
plot.imview(imgn, title='Corrupted', fig=fig)
plot.subplot(1, 3, 3)
plot.imview(imgr, title=r'Restored ($\ell_1$-TV)', fig=fig)
fig.show()
```
Get iterations statistics from solver object and plot functional value, ADMM primary and dual residuals, and automatically adjusted ADMM penalty parameter against the iteration number.
```
its = b.getitstat()
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(1, 3, 2)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'], fig=fig)
plot.subplot(1, 3, 3)
plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig)
fig.show()
```
```
%matplotlib inline
```
# Out-of-core classification of text documents
This is an example showing how scikit-learn can be used for classification
using an out-of-core approach: learning from data that doesn't fit into main
memory. We make use of an online classifier, i.e., one that supports the
partial_fit method, which will be fed batches of examples. To guarantee
that the feature space remains the same over time, we leverage a
HashingVectorizer that will project each example into the same feature space.
This is especially useful in the case of text classification, where new
features (words) may appear in each batch.
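The hashing trick that makes this possible can be sketched with the standard library alone (a toy illustration, not scikit-learn's actual implementation): each token is hashed into a fixed number of buckets, so the vector length never depends on the vocabulary seen so far.

```python
import zlib

def hash_vectorize(text, n_features=16):
    """Map tokens into a fixed-size count vector via hashing (toy sketch)."""
    vec = [0] * n_features
    for token in text.lower().split():
        idx = zlib.crc32(token.encode()) % n_features  # stable bucket index
        vec[idx] += 1
    return vec

v1 = hash_vectorize("the cat sat on the mat")
v2 = hash_vectorize("completely new unseen words here")
assert len(v1) == len(v2) == 16   # same feature space, any vocabulary
```

The cost is that unrelated words may collide in a bucket, which is why `n_features=2**18` is used below: with enough buckets, collisions are rare.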
```
# Authors: Eustache Diemert <eustache@diemert.fr>
# @FedericoV <https://github.com/FedericoV/>
# License: BSD 3 clause
from glob import glob
import itertools
import os.path
import re
import tarfile
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from html.parser import HTMLParser
from urllib.request import urlretrieve
from sklearn.datasets import get_data_home
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
def _not_in_sphinx():
# Hack to detect whether we are running by the sphinx builder
return '__file__' in globals()
```
Reuters Dataset related routines
--------------------------------
The dataset used in this example is Reuters-21578 as provided by the UCI ML
repository. It will be automatically downloaded and uncompressed on first
run.
```
class ReutersParser(HTMLParser):
"""Utility class to parse a SGML file and yield documents one at a time."""
def __init__(self, encoding='latin-1'):
HTMLParser.__init__(self)
self._reset()
self.encoding = encoding
def handle_starttag(self, tag, attrs):
method = 'start_' + tag
getattr(self, method, lambda x: None)(attrs)
def handle_endtag(self, tag):
method = 'end_' + tag
getattr(self, method, lambda: None)()
def _reset(self):
self.in_title = 0
self.in_body = 0
self.in_topics = 0
self.in_topic_d = 0
self.title = ""
self.body = ""
self.topics = []
self.topic_d = ""
def parse(self, fd):
self.docs = []
for chunk in fd:
self.feed(chunk.decode(self.encoding))
for doc in self.docs:
yield doc
self.docs = []
self.close()
def handle_data(self, data):
if self.in_body:
self.body += data
elif self.in_title:
self.title += data
elif self.in_topic_d:
self.topic_d += data
def start_reuters(self, attributes):
pass
def end_reuters(self):
self.body = re.sub(r'\s+', r' ', self.body)
self.docs.append({'title': self.title,
'body': self.body,
'topics': self.topics})
self._reset()
def start_title(self, attributes):
self.in_title = 1
def end_title(self):
self.in_title = 0
def start_body(self, attributes):
self.in_body = 1
def end_body(self):
self.in_body = 0
def start_topics(self, attributes):
self.in_topics = 1
def end_topics(self):
self.in_topics = 0
def start_d(self, attributes):
self.in_topic_d = 1
def end_d(self):
self.in_topic_d = 0
self.topics.append(self.topic_d)
self.topic_d = ""
def stream_reuters_documents(data_path=None):
"""Iterate over documents of the Reuters dataset.
The Reuters archive will automatically be downloaded and uncompressed if
the `data_path` directory does not exist.
Documents are represented as dictionaries with 'body' (str),
'title' (str), 'topics' (list(str)) keys.
"""
DOWNLOAD_URL = ('http://archive.ics.uci.edu/ml/machine-learning-databases/'
'reuters21578-mld/reuters21578.tar.gz')
ARCHIVE_FILENAME = 'reuters21578.tar.gz'
if data_path is None:
data_path = os.path.join(get_data_home(), "reuters")
if not os.path.exists(data_path):
"""Download the dataset."""
print("downloading dataset (once and for all) into %s" %
data_path)
os.mkdir(data_path)
def progress(blocknum, bs, size):
total_sz_mb = '%.2f MB' % (size / 1e6)
current_sz_mb = '%.2f MB' % ((blocknum * bs) / 1e6)
if _not_in_sphinx():
sys.stdout.write(
'\rdownloaded %s / %s' % (current_sz_mb, total_sz_mb))
archive_path = os.path.join(data_path, ARCHIVE_FILENAME)
urlretrieve(DOWNLOAD_URL, filename=archive_path,
reporthook=progress)
if _not_in_sphinx():
sys.stdout.write('\r')
print("untarring Reuters dataset...")
tarfile.open(archive_path, 'r:gz').extractall(data_path)
print("done.")
parser = ReutersParser()
for filename in glob(os.path.join(data_path, "*.sgm")):
for doc in parser.parse(open(filename, 'rb')):
yield doc
```
Main
----
Create the vectorizer and limit the number of features to a reasonable
maximum
```
vectorizer = HashingVectorizer(decode_error='ignore', n_features=2 ** 18,
alternate_sign=False)
# Iterator over parsed Reuters SGML files.
data_stream = stream_reuters_documents()
# We learn a binary classification between the "acq" class and all the others.
# "acq" was chosen as it is more or less evenly distributed in the Reuters
# files. For other datasets, one should take care of creating a test set with
# a realistic portion of positive instances.
all_classes = np.array([0, 1])
positive_class = 'acq'
# Here are some classifiers that support the `partial_fit` method
partial_fit_classifiers = {
'SGD': SGDClassifier(max_iter=5),
'Perceptron': Perceptron(),
'NB Multinomial': MultinomialNB(alpha=0.01),
'Passive-Aggressive': PassiveAggressiveClassifier(),
}
def get_minibatch(doc_iter, size, pos_class=positive_class):
"""Extract a minibatch of examples, return a tuple X_text, y.
Note: size is before excluding invalid docs with no topics assigned.
"""
data = [('{title}\n\n{body}'.format(**doc), pos_class in doc['topics'])
for doc in itertools.islice(doc_iter, size)
if doc['topics']]
if not len(data):
return np.asarray([], dtype=int), np.asarray([], dtype=int)
X_text, y = zip(*data)
return X_text, np.asarray(y, dtype=int)
def iter_minibatches(doc_iter, minibatch_size):
"""Generator of minibatches."""
X_text, y = get_minibatch(doc_iter, minibatch_size)
while len(X_text):
yield X_text, y
X_text, y = get_minibatch(doc_iter, minibatch_size)
# test data statistics
test_stats = {'n_test': 0, 'n_test_pos': 0}
# First we hold out a number of examples to estimate accuracy
n_test_documents = 1000
tick = time.time()
X_test_text, y_test = get_minibatch(data_stream, n_test_documents)
parsing_time = time.time() - tick
tick = time.time()
X_test = vectorizer.transform(X_test_text)
vectorizing_time = time.time() - tick
test_stats['n_test'] += len(y_test)
test_stats['n_test_pos'] += sum(y_test)
print("Test set is %d documents (%d positive)" % (len(y_test), sum(y_test)))
def progress(cls_name, stats):
"""Report progress information, return a string."""
duration = time.time() - stats['t0']
s = "%20s classifier : \t" % cls_name
s += "%(n_train)6d train docs (%(n_train_pos)6d positive) " % stats
s += "%(n_test)6d test docs (%(n_test_pos)6d positive) " % test_stats
s += "accuracy: %(accuracy).3f " % stats
s += "in %.2fs (%5d docs/s)" % (duration, stats['n_train'] / duration)
return s
cls_stats = {}
for cls_name in partial_fit_classifiers:
stats = {'n_train': 0, 'n_train_pos': 0,
'accuracy': 0.0, 'accuracy_history': [(0, 0)], 't0': time.time(),
'runtime_history': [(0, 0)], 'total_fit_time': 0.0}
cls_stats[cls_name] = stats
get_minibatch(data_stream, n_test_documents)
# Discard test set
# We will feed the classifier with mini-batches of 1000 documents; this means
# we have at most 1000 docs in memory at any time. The smaller the document
# batch, the bigger the relative overhead of the partial fit methods.
minibatch_size = 1000
# Create the data_stream that parses Reuters SGML files and iterates on
# documents as a stream.
minibatch_iterators = iter_minibatches(data_stream, minibatch_size)
total_vect_time = 0.0
# Main loop : iterate on mini-batches of examples
for i, (X_train_text, y_train) in enumerate(minibatch_iterators):
tick = time.time()
X_train = vectorizer.transform(X_train_text)
total_vect_time += time.time() - tick
for cls_name, cls in partial_fit_classifiers.items():
tick = time.time()
# update estimator with examples in the current mini-batch
cls.partial_fit(X_train, y_train, classes=all_classes)
# accumulate test accuracy stats
cls_stats[cls_name]['total_fit_time'] += time.time() - tick
cls_stats[cls_name]['n_train'] += X_train.shape[0]
cls_stats[cls_name]['n_train_pos'] += sum(y_train)
tick = time.time()
cls_stats[cls_name]['accuracy'] = cls.score(X_test, y_test)
cls_stats[cls_name]['prediction_time'] = time.time() - tick
acc_history = (cls_stats[cls_name]['accuracy'],
cls_stats[cls_name]['n_train'])
cls_stats[cls_name]['accuracy_history'].append(acc_history)
run_history = (cls_stats[cls_name]['accuracy'],
total_vect_time + cls_stats[cls_name]['total_fit_time'])
cls_stats[cls_name]['runtime_history'].append(run_history)
if i % 3 == 0:
print(progress(cls_name, cls_stats[cls_name]))
if i % 3 == 0:
print('\n')
```
Plot results
------------
The plot represents the learning curve of the classifier: the evolution
of classification accuracy over the course of the mini-batches. Accuracy is
measured on the first 1000 samples, held out as a validation set.
To limit the memory consumption, we queue examples up to a fixed amount
before feeding them to the learner.
```
def plot_accuracy(x, y, x_legend):
"""Plot accuracy as a function of x."""
x = np.array(x)
y = np.array(y)
plt.title('Classification accuracy as a function of %s' % x_legend)
plt.xlabel('%s' % x_legend)
plt.ylabel('Accuracy')
plt.grid(True)
plt.plot(x, y)
rcParams['legend.fontsize'] = 10
cls_names = list(sorted(cls_stats.keys()))
# Plot accuracy evolution
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with #examples
accuracy, n_examples = zip(*stats['accuracy_history'])
plot_accuracy(n_examples, accuracy, "training examples (#)")
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc='best')
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with runtime
accuracy, runtime = zip(*stats['runtime_history'])
plot_accuracy(runtime, accuracy, 'runtime (s)')
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc='best')
# Plot fitting times
plt.figure()
fig = plt.gcf()
cls_runtime = [stats['total_fit_time']
for cls_name, stats in sorted(cls_stats.items())]
cls_runtime.append(total_vect_time)
cls_names.append('Vectorization')
bar_colors = ['b', 'g', 'r', 'c', 'm', 'y']
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5,
color=bar_colors)
ax.set_xticks(np.linspace(0, len(cls_names) - 1, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=10)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('runtime (s)')
ax.set_title('Training Times')
def autolabel(rectangles):
"""attach some text vi autolabel on rectangles."""
for rect in rectangles:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2.,
1.05 * height, '%.4f' % height,
ha='center', va='bottom')
plt.setp(plt.xticks()[1], rotation=30)
autolabel(rectangles)
plt.tight_layout()
plt.show()
# Plot prediction times
plt.figure()
cls_runtime = []
cls_names = list(sorted(cls_stats.keys()))
for cls_name, stats in sorted(cls_stats.items()):
cls_runtime.append(stats['prediction_time'])
cls_runtime.append(parsing_time)
cls_names.append('Read/Parse\n+Feat.Extr.')
cls_runtime.append(vectorizing_time)
cls_names.append('Hashing\n+Vect.')
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5,
color=bar_colors)
ax.set_xticks(np.linspace(0, len(cls_names) - 1, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=8)
plt.setp(plt.xticks()[1], rotation=30)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('runtime (s)')
ax.set_title('Prediction Times (%d instances)' % n_test_documents)
autolabel(rectangles)
plt.tight_layout()
plt.show()
```
```
import pandas as pd
import seaborn as sns
%matplotlib inline
```
Since some contributors are well-meaning but still a bit unruly, we have to clean up the inconsistent formats of these government documents.
```
# Carta Marina 2015
data2015 = pd.read_csv('../data/raw/escuelas-elecciones-2015-cordoba.csv')
data2015.head()
```
We split `Escuela` into `Escuela`, `direccion`, and `barrio`. Which is a pity, because the 2017 sample has no `barrio`. Or else there is some confusion between what each sample calls a `circuito`. That would also be a pity.
```
data_copy = data2015.copy()
data_copy[['Escuela', 'direccion', 'barrio']] = data_copy.Escuela.str.split(' - ', expand=True)
data_copy = data_copy.rename(columns={
'Escuela': 'escuela',
'direccion': 'direccion',
'Seccion Nro': 'seccion_nro',
'Seccion Nombre': 'seccion_nombre',
'Circuito Nro': 'circuito_nro',
'Circuito Nombre': 'circuito_nombre',
'Mesas': 'mesas',
'Desde': 'desde',
'Hasta': 'hasta',
'Electores': 'electores',
'barrio': 'barrio'
})
data_copy[['escuela', 'direccion', 'seccion_nro', 'seccion_nombre', 'circuito_nro', 'circuito_nombre', 'mesas', 'desde', 'hasta', 'electores']].to_csv('../data/raw/escuelas-elecciones-2015-cordoba-CLEAN.csv', index=False)
data2015 = pd.read_csv('../data/raw/escuelas-elecciones-2015-cordoba-CLEAN.csv')
data2015.sort_values(by='desde').head()
```
There we go... Much better. Now let's look at 2017.
```
data2017 = pd.read_csv('../data/raw/escuelas-elecciones-2017-cordoba.csv')
print(len(data2017))
data2017.sort_values(by='desde').head()
```
Those column names are not normal; Spanglish is really bad.
```
data_copy = data2017.copy()
data_copy.columns = ['escuela', 'direccion', 'seccion_nro', 'seccion_nombre', 'circuito_nro', 'circuito_nombre', 'mesas', 'desde', 'hasta', 'electores',]
data_copy.to_csv('../data/raw/escuelas-elecciones-2017-cordoba-CLEAN.csv', index=False)
data2017 = pd.read_csv('../data/raw/escuelas-elecciones-2017-cordoba-CLEAN.csv')
data2017.head()
results = pd.read_csv('../data/processed/resultados-secciones-2015.csv', index_col=0)
print(len(results))
results.head()
```
Let's try to integrate the carta marina data with the election results.
We will iterate over the per-mesa results, check whether each mesa falls within the range of mesas assigned to each school, and sum up the results.
```
completo = data2015.copy()
store = {}
columns = ['votos_fpv', 'votos_cambiemos', 'votos_blancos', 'votos_nulos', 'votos_recurridos', 'total']
def merge_results(df):
    for x in df.itertuples():
        value = x.numero_mesa
        circuito = x.nombre_circuito
        # Build one boolean mask: chained indexing like
        # completo[cond1][cond2] returns a copy, so writes would be lost.
        mask = ((completo.circuito_nro == circuito)
                & (completo.desde <= value)
                & (completo.hasta >= value))
        # itertuples yields namedtuples, so fields are read with getattr
        # rather than indexed with a list of column labels.
        subset = [getattr(x, col) for col in columns]
        completo.loc[mask, columns] += subset
    return completo
mask = results.nombre_circuito.isin(data2015[data2015.seccion_nro==1].circuito_nro.unique())
results[mask].groupby('nombre_circuito')[columns].plot.bar()
```
# Basic objects
A `striplog` depends on a hierarchy of objects. This notebook shows the objects and their basic functionality.
- [Lexicon](#Lexicon): A dictionary containing the words and word categories to use for rock descriptions.
- [Component](#Component): A set of attributes.
- [Interval](#Interval): One element from a Striplog — consists of a top, base, a description, one or more Components, and a source.
Striplogs (a set of `Interval`s) are described in [a separate notebook](Striplog_object.ipynb).
Decors and Legends are also described in [another notebook](Display_objects.ipynb).
```
import striplog
striplog.__version__
# If you get a lot of warnings here, just run it again.
```
<hr />
## Lexicon
```
from striplog import Lexicon
print(Lexicon.__doc__)
lexicon = Lexicon.default()
lexicon
lexicon.synonyms
```
Most of the lexicon works 'behind the scenes' when processing descriptions into `Rock` components.
```
lexicon.find_synonym('Halite')
s = "grysh gn ss w/ sp gy sh"
lexicon.expand_abbreviations(s)
```
<hr />
## Component
A set of attributes. All are optional.
```
from striplog import Component
print(Component.__doc__)
```
We define a new rock with a Python `dict` object:
```
r = {'colour': 'grey',
'grainsize': 'vf-f',
'lithology': 'sand'}
rock = Component(r)
rock
```
The Rock has a colour:
```
rock['colour']
```
And it has a summary, which is generated from its attributes.
```
rock.summary()
```
We can format the summary if we wish:
```
rock.summary(fmt="My rock: {lithology} ({colour}, {grainsize!u})")
```
The formatting supports the usual `s`, `r`, and `a`:
* `s`: `str`
* `r`: `repr`
* `a`: `ascii`
Also some string functions:
* `u`: `str.upper`
* `l`: `str.lower`
* `c`: `str.capitalize`
* `t`: `str.title`
And some numerical ones, for arrays of numbers:
* `+` or `∑`: `np.sum`
* `m` or `µ`: `np.mean`
* `v`: `np.var`
* `d`: `np.std`
* `x`: `np.product`
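Python's built-in `str.format` only accepts the `s`, `r`, and `a` conversions, so extra ones like `!u` require a custom formatter. A minimal stdlib sketch of that pattern (a toy class, not striplog's actual implementation) overrides `string.Formatter.convert_field`:

```python
import string

class CustomFormatter(string.Formatter):
    """Support extra conversions like !u / !l / !c / !t (assumed sketch)."""
    def convert_field(self, value, conversion):
        extra = {'u': str.upper, 'l': str.lower,
                 'c': str.capitalize, 't': str.title}
        if conversion in extra:
            return extra[conversion](str(value))
        # Fall back to the standard s/r/a conversions.
        return super().convert_field(value, conversion)

f = CustomFormatter()
assert f.format('{0!u} {1!t}', 'sand', 'grey shale') == 'SAND Grey Shale'
```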
```
x = {'colour': ['Grey', 'Brown'],
'bogosity': [0.45, 0.51, 0.66],
'porosity': [0.2003, 0.1998, 0.2112, 0.2013, 0.1990],
'grainsize': 'VF-F',
'lithology': 'Sand',
}
X = Component(x)
# This is not working at the moment.
#fmt = 'The {colour[0]!u} {lithology!u} has a total of {bogosity!∑:.2f} bogons'
#fmt += 'and a mean porosity of {porosity!µ:2.0%}.'
fmt = 'The {lithology!u} is {colour[0]!u}.'
X.summary(fmt)
X.json()
```
We can compare rocks with the usual `==` operator:
```
rock2 = Component({'grainsize': 'VF-F',
'colour': 'Grey',
'lithology': 'Sand'})
rock == rock2
rock
```
In order to create a Component object from text, we need a lexicon to compare the text against. The lexicon describes the language we want to extract, and what it means.
```
rock3 = Component.from_text('Grey fine sandstone.', lexicon)
rock3
```
Components support double-star-unpacking:
```
"My rock: {lithology} ({colour}, {grainsize})".format(**rock3)
```
<hr />
## Position
Positions define points in the earth, like a top, but with uncertainty. You can define:
* `upper` — the highest possible location
* `middle` — the most likely location
* `lower` — the lowest possible location
* `units` — the units of measurement
* `x` and `y` — the _x_ and _y_ location (these don't have uncertainty, sorry)
* `meta` — a Python dictionary containing anything you want
Positions don't have a 'way up'.
```
from striplog import Position
print(Position.__doc__)
params = {'upper': 95,
'middle': 100,
'lower': 110,
'meta': {'kind': 'erosive', 'source': 'DOE'}
}
p = Position(**params)
p
```
Even if you don't give a `middle`, you can always get `z`: the central, most likely position:
```
params = {'upper': 75, 'lower': 85}
p = Position(**params)
p
p.z
```
<hr />
## Interval
Intervals are where it gets interesting. An interval can have:
* a top
* a base
* a description (in natural language)
* a list of `Component`s
Intervals don't have a 'way up', it's implied by the order of `top` and `base`.
```
from striplog import Interval
print(Interval.__doc__)
```
I might make an `Interval` explicitly from a Component...
```
Interval(10, 20, components=[rock])
```
... or I might pass a description and a `lexicon` and Striplog will parse the description and attempt to extract structured `Component` objects from it.
```
Interval(20, 40, "Grey sandstone with shale flakes.", lexicon=lexicon).__repr__()
```
Notice I only got one `Component`, even though the description contains a subordinate lithology. This is the default behaviour, we have to ask for more components:
```
interval = Interval(20, 40, "Grey sandstone with black shale flakes.", lexicon=lexicon, max_component=2)
print(interval)
```
`Interval`s have a `primary` attribute, which holds the first component, no matter how many components there are.
```
interval.primary
```
Ask for the summary to see the thickness and a `Rock` summary of the primary component. Note that the format code only applies to the `Rock` part of the summary.
```
interval.summary(fmt="{colour} {lithology}")
```
We can change an interval's properties:
```
interval.top = 18
interval
interval.top
```
<hr />
## Comparing and combining intervals
```
# Depth ordered
i1 = Interval(top=61, base=62.5, components=[Component({'lithology': 'limestone'})])
i2 = Interval(top=62, base=63, components=[Component({'lithology': 'sandstone'})])
i3 = Interval(top=62.5, base=63.5, components=[Component({'lithology': 'siltstone'})])
i4 = Interval(top=63, base=64, components=[Component({'lithology': 'shale'})])
i5 = Interval(top=63.1, base=63.4, components=[Component({'lithology': 'dolomite'})])
# Elevation ordered
i8 = Interval(top=200, base=100, components=[Component({'lithology': 'sandstone'})])
i7 = Interval(top=150, base=50, components=[Component({'lithology': 'limestone'})])
i6 = Interval(top=100, base=0, components=[Component({'lithology': 'siltstone'})])
i2.order
```
**Technical aside:** The `Interval` class is a `functools.total_ordering`, so providing `__eq__` and one other comparison (such as `__lt__`) in the class definition means that instances of the class have implicit order. So you can use `sorted` on a Striplog, for example.
It wasn't clear to me whether this should compare tops (say), so that '>' might mean 'above', or if it should be keyed on thickness. I chose the former, and implemented other comparisons instead.
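A minimal stdlib sketch of that pattern (a toy class, not striplog's actual implementation): define `__eq__` and `__lt__`, and `functools.total_ordering` derives the rest, so instances sort naturally.

```python
from functools import total_ordering

@total_ordering
class Iv:
    """Toy depth-ordered interval: a shallower top sorts as 'greater'."""
    def __init__(self, top, base):
        self.top, self.base = top, base
    def __eq__(self, other):
        return self.top == other.top
    def __lt__(self, other):
        # In depth ordering, a larger (deeper) top means 'below'.
        return self.top > other.top

assert Iv(61, 62.5) > Iv(63, 64)   # the shallower interval is 'above'
assert sorted([Iv(63, 64), Iv(61, 62.5)], reverse=True)[0].top == 61
```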
```
print(i3 == i2) # False, they don't have the same top
print(i1 > i4) # True, i1 is above i4
print(min(i1, i2, i5).summary()) # 0.3 m of dolomite
i2 > i4 > i5 # True
```
We can combine intervals with the `+` operator. (However, you cannot subtract intervals.)
```
i2 + i3
```
Adding a rock adds a (minor) component and adds to the description.
```
interval + rock3
i6.relationship(i7), i5.relationship(i4)
print(i1.partially_overlaps(i2)) # True
print(i2.partially_overlaps(i3)) # True
print(i2.partially_overlaps(i4)) # False
print()
print(i6.partially_overlaps(i7)) # True
print(i7.partially_overlaps(i6)) # True
print(i6.partially_overlaps(i8)) # False
print()
print(i5.is_contained_by(i3)) # True
print(i5.is_contained_by(i4)) # True
print(i5.is_contained_by(i2)) # False
x = i4.merge(i5)
x[-1].base = 65
x
i1.intersect(i2, blend=False)
i1.intersect(i2)
i1.union(i3)
i3.difference(i5)
```
<hr />
<p style="color:gray">©2015 Agile Geoscience. Licensed CC-BY. <a href="https://github.com/agile-geoscience/striplog">striplog.py</a></p>
# Import Modules
```
import warnings
warnings.filterwarnings('ignore')
from src import detect_faces, show_bboxes
from PIL import Image
import torch
from torchvision import transforms, datasets
import numpy as np
import os
```
# Path Definition
```
dataset_path = '../Dataset/emotiw/'
face_coordinates_directory = '../Dataset/FaceCoordinates/'
processed_dataset_path = '../Dataset/CroppedFaces/'
```
# Load Train and Val Dataset
```
image_datasets = {x : datasets.ImageFolder(os.path.join(dataset_path, x))
for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
class_names
training_dataset = image_datasets['train']
validation_dataset = image_datasets['val']
neg_train = sorted(os.listdir(dataset_path + 'train/Negative/'))
neu_train = sorted(os.listdir(dataset_path + 'train/Neutral/'))
pos_train = sorted(os.listdir(dataset_path + 'train/Positive/'))
neg_val = sorted(os.listdir(dataset_path + 'val/Negative/'))
neu_val = sorted(os.listdir(dataset_path + 'val/Neutral/'))
pos_val = sorted(os.listdir(dataset_path + 'val/Positive/'))
neg_train_filelist = [x.split('.')[0] for x in neg_train]
neu_train_filelist = [x.split('.')[0] for x in neu_train]
pos_train_filelist = [x.split('.')[0] for x in pos_train]
neg_val_filelist = [x.split('.')[0] for x in neg_val]
neu_val_filelist = [x.split('.')[0] for x in neu_val]
pos_val_filelist = [x.split('.')[0] for x in pos_val]
print(neg_train_filelist[:10])
print(neu_train_filelist[:10])
print(pos_train_filelist[:10])
print(neg_val_filelist[:10])
print(neu_val_filelist[:10])
print(pos_val_filelist[:10])
train_filelist = neg_train_filelist + neu_train_filelist + pos_train_filelist
val_filelist = neg_val_filelist + neu_val_filelist + pos_val_filelist
print(len(training_dataset))
print(len(validation_dataset))
```
# Crop Faces
```
for i in range(len(training_dataset)):
    try:
        image, label = training_dataset[i]
        face_list = []
        landmarks_new_coordinates = []
        # Map the ImageFolder label to its class directory
        # (0/1/2 -> Negative/Neutral/Positive) instead of repeating
        # the loop body once per class.
        subdir = 'train/' + class_names[label] + '/'
        out_file = processed_dataset_path + subdir + train_filelist[i]
        if os.path.isfile(out_file + '.npz'):
            print(train_filelist[i] + ' Already present')
            continue
        bbox_lm = np.load(face_coordinates_directory + subdir + train_filelist[i] + '.npz')
        bounding_boxes = bbox_lm['a']
        if bounding_boxes.size == 0 or (bounding_boxes[0] == 0).all():
            print("No bounding boxes for " + train_filelist[i] + ". Adding empty file for the same")
            np.savez(out_file, a = np.zeros(1), b = np.zeros(1))
            continue
        landmarks = bbox_lm['b']
        for j in range(len(bounding_boxes)):
            bbox_coordinates = bounding_boxes[j]
            landmark = landmarks[j]
            img_face = image.crop((bbox_coordinates[0], bbox_coordinates[1],
                                   bbox_coordinates[2], bbox_coordinates[3]))
            # Shift the landmarks into the cropped face's coordinate frame.
            x = bbox_coordinates[0]
            y = bbox_coordinates[1]
            for k in range(5):
                landmark[k] -= x
                landmark[k+5] -= y
            img_face = np.array(img_face)
            landmark = np.array(landmark)
            # If two consecutive crops have the same height, nudge this one
            # by a pixel so np.asarray keeps the faces as separate arrays.
            if len(face_list) != 0 and img_face.shape[0] == face_list[-1].shape[0]:
                img_face = np.array(image.crop((bbox_coordinates[0] - 1,
                                                bbox_coordinates[1] - 1,
                                                bbox_coordinates[2],
                                                bbox_coordinates[3])))
                landmark += 1
            face_list.append(img_face)
            landmarks_new_coordinates.append(landmark)
        face_list = np.asarray(face_list)
        landmarks_new_coordinates = np.asarray(landmarks_new_coordinates)
        np.savez(out_file, a = face_list, b = landmarks_new_coordinates)
        if i % 100 == 0:
            print(i)
    except:
        print("Error/interrupt at training dataset file " + train_filelist[i])
        print(bounding_boxes)
        print(landmarks)
        print(bounding_boxes.shape)
        print(landmarks.shape)
        break
```
```
# This cell is added by sphinx-gallery
!pip install mrsimulator --quiet
%matplotlib inline
import mrsimulator
print(f'You are using mrsimulator v{mrsimulator.__version__}')
```
# Itraconazole, ¹³C (I=1/2) PASS
¹³C (I=1/2) 2D Phase-adjusted spinning sideband (PASS) simulation.
The following is a simulation of a 2D PASS spectrum of itraconazole, a triazole
containing drug prescribed for the prevention and treatment of fungal infection.
The 2D PASS spectrum is a correlation of finite speed MAS to an infinite speed MAS
spectrum. The parameters for the simulation are obtained from Dey `et al.` [#f1]_.
```
import matplotlib.pyplot as plt
from mrsimulator import Simulator
from mrsimulator.methods import SSB2D
from mrsimulator import signal_processing as sp
```
There are 41 $^{13}\text{C}$ single-site spin systems partially describing the
NMR parameters of itraconazole. We will import the spin systems directly into
the Simulator object using the `load_spin_systems` method.
```
sim = Simulator()
filename = "https://sandbox.zenodo.org/record/687656/files/itraconazole_13C.mrsys"
sim.load_spin_systems(filename)
```
Use the ``SSB2D`` method to simulate a PASS, MAT, QPASS, QMAT, or any equivalent
sideband separation spectrum. Here, we use the method to generate a PASS spectrum.
```
PASS = SSB2D(
channels=["13C"],
magnetic_flux_density=11.74,
rotor_frequency=2000,
spectral_dimensions=[
dict(
count=20 * 4,
spectral_width=2000 * 20, # value in Hz
label="Anisotropic dimension",
),
dict(
count=1024,
spectral_width=3e4, # value in Hz
reference_offset=1.1e4, # value in Hz
label="Isotropic dimension",
),
],
)
sim.methods = [PASS] # add the method.
# A graphical representation of the method object.
plt.figure(figsize=(5, 3.5))
PASS.plot()
plt.show()
```
For 2D spinning sideband simulation, set the number of spinning sidebands in the
Simulator.config object to `spectral_width/rotor_frequency` along the sideband
dimension.
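For the spectral dimensions defined above, that works out as follows (values repeated here for clarity):

```python
# Values from the SSB2D method defined earlier.
spectral_width = 2000 * 20   # Hz, anisotropic (sideband) dimension
rotor_frequency = 2000       # Hz
n_sidebands = spectral_width // rotor_frequency
assert n_sidebands == 20     # the value assigned to sim.config below
```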
```
sim.config.number_of_sidebands = 20
# run the simulation.
sim.run()
```
Apply post-simulation processing. Here, we apply a Lorentzian line broadening to the
isotropic dimension.
```
data = sim.methods[0].simulation
processor = sp.SignalProcessor(
operations=[
sp.IFFT(dim_index=0),
sp.apodization.Exponential(FWHM="100 Hz", dim_index=0),
sp.FFT(dim_index=0),
]
)
processed_data = processor.apply_operations(data=data).real
processed_data /= processed_data.max()
```
The plot of the simulation.
```
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(processed_data, aspect="auto", cmap="gist_ncar_r", vmax=0.5)
plt.colorbar(cb)
ax.invert_xaxis()
ax.invert_yaxis()
plt.tight_layout()
plt.show()
```
.. [#f1] Dey, K. K., Gayen, S., Ghosh, M., Investigation of the Detailed Internal
Structure and Dynamics of Itraconazole by Solid-State NMR Measurements,
ACS Omega (2019) **4**, 21627.
`DOI:10.1021/acsomega.9b03558 <https://doi.org/10.1021/acsomega.9b03558>`_
| github_jupyter |
```
# !pip install plotly
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn.metrics import confusion_matrix
import sys
%matplotlib inline
import matplotlib.pyplot as plt
import plotly.express as px
!ls data
def genesis_train(file):
data = pd.read_csv(file)
del data['Unnamed: 32']
print('Number of datapoints in Training dataset: ',len(data))
X_train = data.iloc[:, 2:].values
y_train = data.iloc[:, 1].values
test = pd.read_csv('./data/test.csv')
del test['Unnamed: 32']
print('Number of datapoints in Testing dataset: ',len(test))
X_test = test.iloc[:, 2:].values
y_test = test.iloc[:, 1].values
labelencoder = LabelEncoder()
y_train = labelencoder.fit_transform(y_train)
y_test = labelencoder.transform(y_test)  # transform only: reuse the label mapping fit on y_train
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
model = Sequential()
model.add(Dense(16, activation='relu', input_dim=30))
model.add(Dropout(0.1))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=100, epochs=5)
scores = model.evaluate(X_test, y_test)
print("Loss: ", scores[0]) #Loss
print("Accuracy: ", scores[1]) #Accuracy
#Saving Model
model.save("./output.h5")
return scores[1]
def update_train(file):
data = pd.read_csv(file)
del data['Unnamed: 32']
X_train = data.iloc[:, 2:].values
y_train = data.iloc[:, 1].values
test = pd.read_csv('./data/test.csv')
del test['Unnamed: 32']
print('Number of datapoints in Testing dataset: ',len(test))
X_test = test.iloc[:, 2:].values
y_test = test.iloc[:, 1].values
labelencoder = LabelEncoder()
y_train = labelencoder.fit_transform(y_train)
y_test = labelencoder.transform(y_test)  # transform only: reuse the label mapping fit on y_train
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
model = Sequential()
model.add(Dense(16, activation='relu', input_dim=30))
model.add(Dropout(0.1))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1, activation='sigmoid'))
model.load_weights("./output.h5")
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=100, epochs=5)
scores = model.evaluate(X_test, y_test)
print("Loss: ", scores[0]) #Loss
print("Accuracy: ", scores[1]) #Accuracy
#Saving Model
model.save("./output.h5")
return scores[1]
datasetAccuracy = {}
datasetAccuracy['Complete Dataset'] = genesis_train('./data/data.csv')
datasetAccuracy['A'] = genesis_train('./data/dataA.csv')
datasetAccuracy['B'] = genesis_train('./data/dataB.csv')
datasetAccuracy['C'] = genesis_train('./data/dataC.csv')
datasetAccuracy['D'] = genesis_train('./data/dataD.csv')
datasetAccuracy['E'] = genesis_train('./data/dataE.csv')
datasetAccuracy['F'] = genesis_train('./data/dataF.csv')
datasetAccuracy['G'] = genesis_train('./data/dataG.csv')
datasetAccuracy['H'] = genesis_train('./data/dataH.csv')
datasetAccuracy['I'] = genesis_train('./data/dataI.csv')
px.bar(pd.DataFrame.from_dict(datasetAccuracy, orient='index'))
FLAccuracy = {}
FLAccuracy['A'] = update_train('./data/dataA.csv')
FLAccuracy['B'] = update_train('./data/dataB.csv')
FLAccuracy['C'] = update_train('./data/dataC.csv')
FLAccuracy['D'] = update_train('./data/dataD.csv')
FLAccuracy['E'] = update_train('./data/dataE.csv')
FLAccuracy['F'] = update_train('./data/dataF.csv')
FLAccuracy['G'] = update_train('./data/dataG.csv')
FLAccuracy['H'] = update_train('./data/dataH.csv')
FLAccuracy['I'] = update_train('./data/dataI.csv')
px.bar(pd.DataFrame.from_dict(FLAccuracy, orient='index'))
FLAccuracy
```
| github_jupyter |
# Lambda School Data Science - Logistic Regression
Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models).
## Lecture - Where Linear goes Wrong
### Return of the Titanic 🚢
You've likely already explored the rich dataset that is the Titanic - let's use regression and try to predict survival with it. The data is [available from Kaggle](https://www.kaggle.com/c/titanic/data), so we'll also play a bit with [the Kaggle API](https://github.com/Kaggle/kaggle-api).
```
!pip install kaggle
# Note - you'll also have to sign up for Kaggle and authorize the API
# https://github.com/Kaggle/kaggle-api#api-credentials
# This essentially means uploading a kaggle.json file
# For Colab we can have it in Google Drive
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
# You also have to join the Titanic competition to have access to the data
!kaggle competitions download -c titanic
# How would we try to do this with linear regression?
import pandas as pd
train_df = pd.read_csv('train.csv').dropna()
test_df = pd.read_csv('test.csv').dropna() # Unlabeled, for Kaggle submission
train_df.head()
train_df.describe()
from sklearn.linear_model import LinearRegression
X = train_df[['Pclass', 'Age', 'Fare']]
y = train_df.Survived
linear_reg = LinearRegression().fit(X, y)
linear_reg.score(X, y)
linear_reg.predict(test_df[['Pclass', 'Age', 'Fare']])
linear_reg.coef_
import numpy as np
test_case = np.array([[1, 5, 500]]) # Rich 5-year old in first class
linear_reg.predict(test_case)
np.dot(linear_reg.coef_, test_case.reshape(-1, 1)) + linear_reg.intercept_
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression().fit(X, y)
log_reg.score(X, y)
log_reg.predict(test_df[['Pclass', 'Age', 'Fare']])
log_reg.predict(test_case)[0]
help(log_reg.predict)
log_reg.predict_proba(test_case)[0]
# What's the math?
log_reg.coef_
log_reg.intercept_
# The logistic sigmoid "squishing" function, implemented to accept numpy arrays
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
```
Let's try to write the actual math expression for the above:
intercept + dot product of the coefficients and the observation.
X = [x1, x2, x3]
Y = [y1, y2, y3]
'X dot Y' = (x1 * y1) + (x2 * y2) + (x3 * y3)
This generalizes to any length, but X and Y must have the same length.
'X dot X' = squared L2 norm of X (a length/distance metric) = sum of squares
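A minimal numpy check of the dot-product identities above (the vectors are arbitrary illustrative values):

```
import numpy as np

X = np.array([1.0, 2.0, 3.0])
Y = np.array([4.0, 5.0, 6.0])

# X dot Y = (1*4) + (2*5) + (3*6) = 32
print(np.dot(X, Y))          # 32.0

# X dot X = sum of squares = squared L2 norm
print(np.dot(X, X))          # 14.0
print(np.linalg.norm(X)**2)  # 14.0, up to floating-point rounding
```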
```
is_cat = 0
is_dog = 0
is_other = 1
'''
is_cat = 0.5
is_dog = -0.2
NO is_other coefficient (was omitted)
What this means is - cats are more likely than "other"
And dogs are less likely than "other"
'''
```
So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study).
For live - let's tackle [another classification dataset on absenteeism](http://archive.ics.uci.edu/ml/datasets/Absenteeism+at+work) - it has 21 classes, but remember, scikit-learn LogisticRegression automatically handles more than two classes. How? By essentially treating each label as different (1) from some base class (0).
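A sketch of that one-vs-rest idea in plain numpy — the per-class scores here are made-up values standing in for `intercept + coef · x` from three separate binary models:

```
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical scores from three one-vs-rest binary models.
logits = np.array([-1.2, 0.4, 2.0])
probs = sigmoid(logits)       # per-class "probability vs. rest"
print(int(np.argmax(probs)))  # 2 -- the class with the highest score wins
```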
```
# Live - let's try absenteeism!
!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00445/Absenteeism_at_work_AAA.zip
!unzip Absenteeism_at_work_AAA.zip
!head Absenteeism_at_work.csv
absent_df = pd.read_table('Absenteeism_at_work.csv', sep=';')
absent_df.head()
absent_df.shape
LogisticRegression
absent_df.columns
X = absent_df.drop('Reason for absence', axis='columns')
y = absent_df['Reason for absence']
absent_log1 = LogisticRegression().fit(X, y)
absent_log1.score(X, y)
absent_log1
absent_log1.coef_
absent_log1.predict(X)
absent_log1.predict_proba(X)[0]
?LogisticRegression
```
## Assignment - real-world classification
We're going to check out a larger dataset - the [FMA Free Music Archive data](https://github.com/mdeff/fma). It has a selection of CSVs with metadata and calculated audio features that you can load and try to use to classify genre of tracks. To get you started:
```
!wget https://os.unil.cloud.switch.ch/fma/fma_metadata.zip
!unzip fma_metadata.zip
# Thanks A!
tracks = pd.read_csv('fma_metadata/tracks.csv', header=[0,1], index_col=0)
# Maybe it'd be handy to sample so we have less data so things run faster
tracks = tracks.sample(2000, random_state=42)
help(tracks.sample)
tracks.describe()
pd.set_option('display.max_columns', None) # Unlimited columns
tracks.head()
pd.options.mode.use_inf_as_na = True
tracks.isna().sum()
# While exploring, we want to be efficient and focus on what works for now
# Thanks Dan!
garbage_columns = ['track_id','id', 'information','comments.1','title','bio',
'members','website','wikipedia_page','split','subset',
'comments.2','genres','genres_all','information.1','license',
'title.1']
tracks2 = tracks.drop(columns=garbage_columns)
# Another approach, restrict columns by type
# Shreyas
numeric = ['int32', 'int64', 'float16', 'float32', 'float64']
tracks_num = tracks.select_dtypes(include=numeric).copy()
tracks.select_dtypes('number')
tracks._get_numeric_data() # "inner" method shorthand
# Another thing you can do - sample! See above where data loaded
tracks._get_numeric_data()
tracks.shape
```
This is the biggest data you've played with so far, and while it does generally fit in Colab, it can take a while to run. That's part of the challenge!
Your tasks:
- Clean up the variable names in the dataframe
- Use logistic regression to fit a model predicting (primary/top) genre
- Inspect, iterate, and improve your model
- Answer the following questions (written, ~paragraph each):
- What are the best predictors of genre?
- What information isn't very useful for predicting genre?
- What surprised you the most about your results?
*Important caveats*:
- This is going to be difficult data to work with - don't let the perfect be the enemy of the good!
- Be creative in cleaning it up - if the best way you know how to do it is download it locally and edit as a spreadsheet, that's OK!
- If the data size becomes problematic, consider sampling/subsetting
- You do not need perfect or complete results - just something plausible that runs, and that supports the reasoning in your written answers
If you find that fitting a model to classify *all* genres isn't very good, it's totally OK to limit to the most frequent genres, or perhaps trying to combine or cluster genres as a preprocessing step. Even then, there will be limits to how good a model can be with just this metadata - if you really want to train an effective genre classifier, you'll have to involve the other data (see stretch goals).
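One sketch of that "limit to the most frequent genres" idea with pandas — the genre values here are invented for illustration:

```
import pandas as pd

genres = pd.Series(['Rock', 'Rock', 'Jazz', 'Folk', 'Rock', 'Jazz', 'Noise'])

# Keep the 2 most frequent genres; lump everything else into 'Other'.
top = genres.value_counts().nlargest(2).index
collapsed = genres.where(genres.isin(top), 'Other')
print(collapsed.tolist())  # ['Rock', 'Rock', 'Jazz', 'Other', 'Rock', 'Jazz', 'Other']
```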
This is real data - there is no "one correct answer", so you can take this in a variety of directions. Just make sure to support your findings, and feel free to share them as well! This is meant to be practice for dealing with other "messy" data, a common task in data science.
## Resources and stretch goals
- Check out the other .csv files from the FMA dataset, and see if you can join them or otherwise fit interesting models with them
- [Logistic regression from scratch in numpy](https://blog.goodaudience.com/logistic-regression-from-scratch-in-numpy-5841c09e425f) - if you want to dig in a bit more to both the code and math (also takes a gradient descent approach, introducing the logistic loss function)
- Create a visualization to show predictions of your model - ideally show a confidence interval based on error!
- Check out and compare classification models from scikit-learn, such as [SVM](https://scikit-learn.org/stable/modules/svm.html#classification), [decision trees](https://scikit-learn.org/stable/modules/tree.html#classification), and [naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html). The underlying math will vary significantly, but the API (how you write the code) and interpretation will actually be fairly similar.
- Sign up for [Kaggle](https://kaggle.com), and find a competition to try logistic regression with
- (Not logistic regression related) If you enjoyed the assignment, you may want to read up on [music informatics](https://en.wikipedia.org/wiki/Music_informatics), which is how those audio features were actually calculated. The FMA includes the actual raw audio, so (while this is more of a long-term project than a stretch goal, and won't fit in Colab) if you'd like you can check those out and see what sort of deeper analysis you can do.
| github_jupyter |