Build the tree!
Now that all the tests are passing, we will train a tree model on the train_data. Limit the depth to 6 (max_depth = 6) to make sure the algorithm doesn't run for too long. Call this tree my_decision_tree.
Warning: This code block may take 1-2 minutes to run.
|
# Make sure to cap the depth at 6 by using max_depth = 6
my_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6)
print(my_decision_tree.keys())
|
notebooks/binary-decision-tree.ipynb
|
leon-adams/datascience
|
mpl-2.0
|
Making predictions with a decision tree
As discussed in the lecture, we can make predictions from the decision tree with a simple recursive function. Below, we call this function classify, which takes in a learned tree and a test point x to classify. We include an option annotate that describes the prediction path when set to True.
|
def classify(tree, x, annotate=False):
    # If the node is a leaf node, return its prediction.
    if tree['is_leaf']:
        if annotate:
            print("At leaf, predicting %s" % tree['prediction'])
        return tree['prediction']
    else:
        # Otherwise, split on the node's feature.
        split_feature_value = x[tree['splitting_feature']]
        if annotate:
            print("Split on %s = %s" % (tree['splitting_feature'], split_feature_value))
        if split_feature_value == 0:
            return classify(tree['left'], x, annotate)
        else:
            return classify(tree['right'], x, annotate)
|
Evaluating your decision tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# total examples}}
$$
Now, write a function called evaluate_classification_error that takes in as input:
1. tree (as described above)
2. data (an SFrame)
This function should classify each row in data with the decision tree and return the resulting classification error.
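Before writing the function, it may help to check the formula on a toy case. This is a hypothetical sketch (the labels and predictions below are made up, not from the loans data):

```python
# Toy example: 8 labeled examples, 2 of which are misclassified.
labels      = [1, -1, 1, 1, -1, -1, 1, -1]
predictions = [1, -1, -1, 1, 1, -1, 1, -1]

mistakes = sum(y != yhat for y, yhat in zip(labels, predictions))
error = float(mistakes) / len(labels)
print(error)  # 2 mistakes / 8 examples = 0.25
```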
|
def evaluate_classification_error(tree, data):
    # Apply classify(tree, x) to each row in the data.
    prediction = data.apply(lambda x: classify(tree, x))
    number_examples = prediction.size()
    safe_loans = data['safe_loans']
    classification_error = 1 - (1.0 * (safe_loans == prediction).sum()) / number_examples
    return classification_error
|
Printing out a decision stump
As discussed in the lecture, we can print out a single decision stump.
|
def print_stump(tree, name='root'):
    split_name = tree['splitting_feature']  # split_name is something like 'term. 36 months'
    if split_name is None:
        print("(leaf, label: %s)" % tree['prediction'])
        return None
    split_feature, split_value = split_name.split('.')
    print('                       %s' % name)
    print('         |---------------|----------------|')
    print('         |                                |')
    print('         |                                |')
    print('         |                                |')
    print('  [{0} == 0]               [{0} == 1]    '.format(split_name))
    print('         |                                |')
    print('         |                                |')
    print('         |                                |')
    print('    (%s)                         (%s)'
          % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
             ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree')))

print_stump(my_decision_tree)
|
Exploring the left subtree of the left subtree
|
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
print_stump(my_decision_tree['left']['left']['left'], my_decision_tree['left']['left']['splitting_feature'])
|
Exploring the right subtree of the left subtree
|
print_stump(my_decision_tree)
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['right']['right'], my_decision_tree['right']['splitting_feature'])
|
Pandas support
After importing vaex.graphql, vaex also installs a pandas accessor, so it is also accessible for Pandas DataFrames.
|
df_pandas = df.to_pandas_df()
df_pandas.graphql.execute("""
{
df(where: {age: {_gt: 20}}) {
row(offset: 3, limit: 2) {
name
survived
}
}
}
"""
).data
|
docs/source/example_graphql.ipynb
|
maartenbreddels/vaex
|
mit
|
Check whether a GPU is available
If it is, run on the GPU automatically
|
use_gpu = torch.cuda.is_available()
if use_gpu:
    print('cuda is available!')

# MNIST Dataset (Images and Labels)
train_dataset = dsets.MNIST(root='./data',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)
test_dataset = dsets.MNIST(root='./data',
                           train=False,
                           transform=transforms.ToTensor())

# Dataset Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.fc = nn.Linear(7 * 7 * 32, 10)

    def forward(self, x):
        print('1:', x.size())
        out = self.layer1(x)
        print('2:', out.size())
        out = self.layer2(out)
        print('3:', out.size())
        out = out.view(out.size(0), -1)
        print('4:', out.size())
        out = self.fc(out)
        print('5:', out.size())
        return out
|
pytorch/180202-convolutional-neural-network.ipynb
|
aidiary/notebooks
|
mit
|
If you don't know the number of input units for the final Linear layer, print the intermediate sizes:
7 x 7 x 32 = 1568
To run in GPU mode:
transfer the model and the tensor data to the GPU with cuda()!
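The 7 x 7 x 32 figure can also be recovered from the standard convolution output-size formula. A small sketch in pure arithmetic (no PyTorch needed):

```python
def conv_out(size, kernel, padding=0, stride=1):
    # Standard output-size formula for a convolution layer
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(28, 5, padding=2)  # Conv2d(kernel_size=5, padding=2) keeps 28x28
s //= 2                         # MaxPool2d(2) -> 14x14
s = conv_out(s, 5, padding=2)   # second conv keeps 14x14
s //= 2                         # MaxPool2d(2) -> 7x7
print(s * s * 32)               # 7 * 7 * 32 = 1568 input units for the Linear layer
```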
|
model = CNN()
if use_gpu:
    model.cuda()
print(model)

# Quick test: run one batch through the model
model = CNN()
images, labels = next(iter(train_loader))
print(images.size())
outputs = model(Variable(images))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
|
When using BatchNormalization, the model's mode matters:
model.train() switches to training mode
model.eval() switches to evaluation mode
|
def train(train_loader):
    model.train()
    running_loss = 0
    for batch_idx, (images, labels) in enumerate(train_loader):
        if use_gpu:
            images = Variable(images.cuda())
            labels = Variable(labels.cuda())
        else:
            images = Variable(images)
            labels = Variable(labels)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        running_loss += loss.data[0]
        loss.backward()
        optimizer.step()
    train_loss = running_loss / len(train_loader)
    return train_loss

def valid(test_loader):
    model.eval()
    running_loss = 0
    correct = 0
    total = 0
    for batch_idx, (images, labels) in enumerate(test_loader):
        if use_gpu:
            images = Variable(images.cuda())
            labels = Variable(labels.cuda())
        else:
            images = Variable(images)
            labels = Variable(labels)
        outputs = model(images)
        loss = criterion(outputs, labels)
        running_loss += loss.data[0]
        _, predicted = torch.max(outputs.data, 1)
        correct += (predicted == labels.data).sum()
        total += labels.size(0)
    val_loss = running_loss / len(test_loader)
    val_acc = correct / total
    return val_loss, val_acc

loss_list = []
val_loss_list = []
val_acc_list = []
for epoch in range(num_epochs):
    loss = train(train_loader)
    val_loss, val_acc = valid(test_loader)
    print('epoch %d, loss: %.4f val_loss: %.4f val_acc: %.4f'
          % (epoch, loss, val_loss, val_acc))
    # logging
    loss_list.append(loss)
    val_loss_list.append(val_loss)
    val_acc_list.append(val_acc)

# save the logs and the trained model
np.save('loss_list.npy', np.array(loss_list))
np.save('val_loss_list.npy', np.array(val_loss_list))
np.save('val_acc_list.npy', np.array(val_acc_list))
torch.save(model.state_dict(), 'cnn.pkl')
|
cuda is available!
epoch 0, loss: 0.1666 val_loss: 0.0549 val_acc: 0.9824
epoch 1, loss: 0.0487 val_loss: 0.0372 val_acc: 0.9875
epoch 2, loss: 0.0369 val_loss: 0.0283 val_acc: 0.9905
epoch 3, loss: 0.0295 val_loss: 0.0359 val_acc: 0.9884
epoch 4, loss: 0.0247 val_loss: 0.0302 val_acc: 0.9904
epoch 5, loss: 0.0194 val_loss: 0.0402 val_acc: 0.9871
epoch 6, loss: 0.0161 val_loss: 0.0298 val_acc: 0.9903
epoch 7, loss: 0.0133 val_loss: 0.0351 val_acc: 0.9883
epoch 8, loss: 0.0123 val_loss: 0.0307 val_acc: 0.9909
epoch 9, loss: 0.0104 val_loss: 0.0242 val_acc: 0.9926
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
loss_list = np.load('loss_list.npy')
val_loss_list = np.load('val_loss_list.npy')
val_acc_list = np.load('val_acc_list.npy')
# plot learning curve
plt.figure()
plt.plot(range(num_epochs), loss_list, 'r-', label='train_loss')
plt.plot(range(num_epochs), val_loss_list, 'b-', label='val_loss')
plt.legend()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid()
plt.figure()
plt.plot(range(num_epochs), val_acc_list, 'g-', label='val_acc')
plt.legend()
plt.xlabel('epoch')
plt.ylabel('acc')
plt.grid()
|
Accuracy exceeds 98% almost from the very first epoch
CIFAR-10
http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
|
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.autograd import Variable
use_gpu = torch.cuda.is_available()
if use_gpu:
    print('cuda is available!')
num_epochs = 30
batch_size = 128
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # [0, 1] => [-1, 1]
])
train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
shuffle=True, num_workers=4)
test_set = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
shuffle=False, num_workers=4)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
|
Test displaying a sample of the data
http://pytorch.org/docs/master/torchvision/utils.html#torchvision.utils.make_grid
|
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
def imshow(img):
    # unnormalize [-1, 1] => [0, 1]
    img = img / 2 + 0.5
    npimg = img.numpy()
    # [c, h, w] => [h, w, c]
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

images, labels = next(iter(train_loader))
images, labels = images[:16], labels[:16]
imshow(torchvision.utils.make_grid(images, nrow=4, padding=1))
plt.axis('off')
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, kernel_size=5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # print('1:', x.size())
        x = self.pool(F.relu(self.conv1(x)))
        # print('2:', x.size())
        x = self.pool(F.relu(self.conv2(x)))
        # print('3:', x.size())
        x = x.view(-1, 16 * 5 * 5)
        # print('4:', x.size())
        x = F.relu(self.fc1(x))
        # print('5:', x.size())
        x = F.relu(self.fc2(x))
        # print('6:', x.size())
        x = self.fc3(x)
        # print('7:', x.size())
        return x

model = CNN()
if use_gpu:
    model.cuda()
model

images, labels = next(iter(train_loader))
outputs = model(Variable(images))

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
def train(train_loader):
    model.train()
    running_loss = 0
    for i, (images, labels) in enumerate(train_loader):
        if use_gpu:
            images = Variable(images.cuda())
            labels = Variable(labels.cuda())
        else:
            images = Variable(images)
            labels = Variable(labels)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        running_loss += loss.data[0]
        loss.backward()
        optimizer.step()
    train_loss = running_loss / len(train_loader)
    return train_loss

def valid(test_loader):
    model.eval()
    running_loss = 0
    correct = 0
    total = 0
    for i, (images, labels) in enumerate(test_loader):
        if use_gpu:
            images = Variable(images.cuda())
            labels = Variable(labels.cuda())
        else:
            images = Variable(images)
            labels = Variable(labels)
        outputs = model(images)
        loss = criterion(outputs, labels)
        running_loss += loss.data[0]
        _, predicted = torch.max(outputs.data, 1)
        if use_gpu:
            correct += (predicted.cpu() == labels.cpu().data).sum()
        else:
            correct += (predicted == labels.data).sum()
        total += labels.size(0)
    val_loss = running_loss / len(test_loader)
    val_acc = correct / total
    return val_loss, val_acc

loss_list = []
val_loss_list = []
val_acc_list = []
for epoch in range(num_epochs):
    loss = train(train_loader)
    val_loss, val_acc = valid(test_loader)
    print('epoch %d, loss: %.4f val_loss: %.4f val_acc: %.4f'
          % (epoch, loss, val_loss, val_acc))
    # logging
    loss_list.append(loss)
    val_loss_list.append(val_loss)
    val_acc_list.append(val_acc)
print('Finished training')

# save the logs and the trained model
np.save('loss_list.npy', np.array(loss_list))
np.save('val_loss_list.npy', np.array(val_loss_list))
np.save('val_acc_list.npy', np.array(val_acc_list))
torch.save(model.state_dict(), 'cnn.pkl')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
loss_list = np.load('loss_list.npy')
val_loss_list = np.load('val_loss_list.npy')
val_acc_list = np.load('val_acc_list.npy')
# plot learning curve
plt.figure()
plt.plot(range(num_epochs), loss_list, 'r-', label='train_loss')
plt.plot(range(num_epochs), val_loss_list, 'b-', label='val_loss')
plt.legend()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid()
plt.figure()
plt.plot(range(num_epochs), val_acc_list, 'g-', label='val_acc')
plt.legend()
plt.xlabel('epoch')
plt.ylabel('acc')
plt.grid()
|
Import the data into H2O
Everything is scalable and distributed from now on. All processing is done on the fully multi-threaded and distributed H2O Java-based backend and can be scaled to large datasets on large compute clusters.
Here, we use a small public dataset (Titanic), but you can use datasets that are hundreds of GBs large.
|
## 'path' can point to a local file, hdfs, s3, nfs, Hive, directories, etc.
df = h2o.import_file(path = "http://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/titanic.csv")
print(df.dim)
print(df.head())
print(df.tail())
print(df.describe())
## pick a response for the supervised problem
response = "survived"
## the response variable is an integer, we will turn it into a categorical/factor for binary classification
df[response] = df[response].asfactor()
## use all other columns (except for the name & the response column ("survived")) as predictors
predictors = df.columns
del predictors[1:3]
print(predictors)
|
h2o-docs/src/product/tutorials/gbm/gbmTuning.ipynb
|
h2oai/h2o-3
|
apache-2.0
|
Establish baseline performance
As the first step, we'll build some default models to see what accuracy we can expect. Let's use the AUC metric for this demo, but you can use h2o.logloss() and stopping_metric="logloss" as well. It ranges from 0.5 for random models to 1 for perfect models.
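As a reminder of what AUC measures — the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one — here is a minimal pairwise sketch on toy scores (an illustration only, not H2O's implementation):

```python
def auc(labels, scores):
    # Pairwise definition of AUC: fraction of (positive, negative) pairs
    # where the positive example gets the higher score (ties count half).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / float(len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```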
The first model is a default GBM, trained on the 60% training split
|
#We only provide the required parameters, everything else is default
gbm = H2OGradientBoostingEstimator()
gbm.train(x=predictors, y=response, training_frame=train)
## Show a detailed model summary
print(gbm)
## Get the AUC on the validation set
perf = gbm.model_performance(valid)
print(perf.auc())
|
The AUC is 95%, so this model is highly predictive!
The second model is another default GBM, but trained on 80% of the data (here, we combine the training and validation splits to get more training data), and cross-validated using 4 folds.
Note that cross-validation takes longer and is not usually done for really large datasets.
|
## rbind() makes a copy here, so it's better to use split_frame with ratios=[0.8] upfront instead
cv_gbm = H2OGradientBoostingEstimator(nfolds = 4, seed = 0xDECAF)
cv_gbm.train(x = predictors, y = response, training_frame = train.rbind(valid))
|
We see that the cross-validated performance is similar to the validation set performance:
|
## Show a detailed summary of the cross validation metrics
## This gives you an idea of the variance between the folds
cv_summary = cv_gbm.cross_validation_metrics_summary().as_data_frame()
#print(cv_summary) ## Full summary of all metrics
#print(cv_summary.iloc[4]) ## get the row with just the AUCs
## Get the cross-validated AUC by scoring the combined holdout predictions.
## (Instead of taking the average of the metrics across the folds)
perf_cv = cv_gbm.model_performance(xval=True)
print(perf_cv.auc())
|
This model doesn't seem to be better than the previous models:
|
perf_lucky = gbm_lucky.model_performance(valid)
print(perf_lucky.auc())
|
For this small dataset, dropping 20% of observations per tree seems too aggressive a regularizer. For larger datasets, this is usually not a bad idea. But we'll let this parameter be tuned freely below, so no worries.
Hyper-Parameter Search
Next, we'll do real hyper-parameter optimization to see if we can beat the best AUC so far (around 94%).
The key here is to start tuning some key parameters first (i.e., those that we expect to have the biggest impact on the results). From experience with gradient boosted trees across many datasets, we can state the following "rules":
Build as many trees (ntrees) as it takes until the validation set error starts increasing.
A lower learning rate (learn_rate) is generally better, but will require more trees. Using learn_rate=0.02 and learn_rate_annealing=0.995 (reduction of the learning rate with each additional tree) can help speed up convergence without sacrificing accuracy too much, and is great for hyper-parameter searches. For faster scans, use values of 0.05 and 0.99 instead.
The optimal maximum tree depth (max_depth) is data dependent; deeper trees take longer to train, especially at depths greater than 10.
Row and column sampling (sample_rate and col_sample_rate) can improve generalization and lead to lower validation and test set errors. Good general values for large datasets are around 0.7 to 0.8 (sampling 70-80 percent of the data) for both parameters. Column sampling per tree (col_sample_rate_per_tree) can also be tuned. Note that it is multiplicative with col_sample_rate, so setting both parameters to 0.8 results in 64% of columns being considered at any given node to split.
For highly imbalanced classification datasets (e.g., fewer buyers than non-buyers), stratified row sampling based on response class membership can help improve predictive accuracy. It is configured with sample_rate_per_class (array of ratios, one per response class in lexicographic order).
Most other options only have a small impact on the model performance, but are worth tuning with a Random hyper-parameter search nonetheless, if highest performance is critical.
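The multiplicative interaction between the two column-sampling parameters mentioned above works out as follows:

```python
col_sample_rate = 0.8           # fraction of columns considered per split
col_sample_rate_per_tree = 0.8  # fraction of columns considered per tree
effective = col_sample_rate * col_sample_rate_per_tree
print(round(effective, 2))  # 0.64 -> about 64% of columns in play at any given split
```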
First we want to know what value of max_depth to use because it has a big impact on the model training time and optimal values depend strongly on the dataset.
We'll do a quick Cartesian grid search to get a rough idea of good candidate max_depth values. Each model in the grid search will use early stopping to tune the number of trees using the validation set AUC, as before.
We'll use learning rate annealing to speed up convergence without sacrificing too much accuracy.
|
## Depth 10 is usually plenty of depth for most datasets, but you never know
hyper_params = {'max_depth' : list(range(1,30,2))}
#hyper_params = {max_depth = [4,6,8,12,16,20]} ##faster for larger datasets
#Build initial GBM Model
gbm_grid = H2OGradientBoostingEstimator(
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees=10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a
#bigger learning rate
learn_rate=0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## sample 80% of rows per tree
sample_rate = 0.8,
## sample 80% of columns per split
col_sample_rate = 0.8,
## fix a random number generator seed for reproducibility
seed = 1234,
## score every 10 trees to make early stopping reproducible
#(it depends on the scoring interval)
score_tree_interval = 10,
## early stopping once the validation AUC doesn't improve by at least 0.01% for
#5 consecutive scoring events
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-4)
#Build grid search with previously made GBM and hyper parameters
grid = H2OGridSearch(gbm_grid,hyper_params,
grid_id = 'depth_grid',
search_criteria = {'strategy': "Cartesian"})
#Train grid search
grid.train(x=predictors,
y=response,
training_frame = train,
validation_frame = valid)
## by default, display the grid search results sorted by increasing logloss (since this is a classification task)
print(grid)
## sort the grid models by decreasing AUC
sorted_grid = grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_grid)
|
It appears that max_depth values of 5 to 13 are best suited for this dataset, which is unusually deep!
|
max_depths = sorted_grid.sorted_metric_table()['max_depth'][0:5]
new_max = int(max(max_depths, key=int))
new_min = int(min(max_depths, key=int))
print("MaxDepth", new_max)
print("MinDepth", new_min)
|
Now that we know a good range for max_depth, we can tune all other parameters in more detail. Since we don't know what combinations of hyper-parameters will result in the best model, we'll use random hyper-parameter search to "let the machine get luckier than a best guess of any human".
|
# create hyperparameter and search criteria lists (ranges are inclusive..exclusive)
hyper_params_tune = {'max_depth' : list(range(new_min,new_max+1,1)),
'sample_rate': [x/100. for x in range(20,101)],
'col_sample_rate' : [x/100. for x in range(20,101)],
'col_sample_rate_per_tree': [x/100. for x in range(20,101)],
'col_sample_rate_change_per_level': [x/100. for x in range(90,111)],
'min_rows': [2**x for x in range(0,int(math.log(train.nrow,2)-1)+1)],
'nbins': [2**x for x in range(4,11)],
'nbins_cats': [2**x for x in range(4,13)],
'min_split_improvement': [0,1e-8,1e-6,1e-4],
'histogram_type': ["UniformAdaptive","QuantilesGlobal","RoundRobin"]}
search_criteria_tune = {'strategy': "RandomDiscrete",
'max_runtime_secs': 3600, ## limit the runtime to 60 minutes
'max_models': 100, ## build no more than 100 models
'seed' : 1234,
'stopping_rounds' : 5,
'stopping_metric' : "AUC",
'stopping_tolerance': 1e-3
}
gbm_final_grid = H2OGradientBoostingEstimator(distribution='bernoulli',
## more trees is better if the learning rate is small enough
## here, use "more than enough" trees - we have early stopping
ntrees=10000,
## smaller learning rate is better
## since we have learning_rate_annealing, we can afford to start with a
#bigger learning rate
learn_rate=0.05,
## learning rate annealing: learning_rate shrinks by 1% after every tree
## (use 1.00 to disable, but then lower the learning_rate)
learn_rate_annealing = 0.99,
## score every 10 trees to make early stopping reproducible
#(it depends on the scoring interval)
score_tree_interval = 10,
## fix a random number generator seed for reproducibility
seed = 1234,
## early stopping once the validation AUC doesn't improve by at least 0.01% for
#5 consecutive scoring events
stopping_rounds = 5,
stopping_metric = "AUC",
stopping_tolerance = 1e-4)
#Build grid search with previously made GBM and hyper parameters
final_grid = H2OGridSearch(gbm_final_grid, hyper_params = hyper_params_tune,
grid_id = 'final_grid',
search_criteria = search_criteria_tune)
#Train grid search
final_grid.train(x=predictors,
y=response,
## early stopping based on timeout (no model should take more than 1 hour - modify as needed)
max_runtime_secs = 3600,
training_frame = train,
validation_frame = valid)
print(final_grid)
|
We can see that the best models have even better validation AUCs than our previous best models, so the random grid search was successful!
|
## Sort the grid models by AUC
sorted_final_grid = final_grid.get_grid(sort_by='auc',decreasing=True)
print(sorted_final_grid)
|
You can also see the results of the grid search in Flow:
<img src="files/final_grid.png">
Model Inspection and Final Test Set Scoring
Let's see how well the best model of the grid search (as judged by validation set AUC) does on the held out test set:
|
#Get the best model from the list (the model name listed at the top of the table)
best_model = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
performance_best_model = best_model.model_performance(test)
print(performance_best_model.auc())
|
Good news: it does about as well on the test set as on the validation set, so our best GBM model seems to generalize well to the unseen test set.
We can inspect the winning model's parameters:
|
params_list = []
for key, value in best_model.params.items():
    params_list.append(str(key) + " = " + str(value['actual']))
params_list
|
Now we can confirm that these parameters are generally sound, by building a GBM model on the whole dataset (instead of the 60%) and using internal 5-fold cross-validation (re-using all other parameters including the seed):
|
gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][0])
# get the parameters from the Random grid search model and modify them slightly
params = gbm.params
new_params = {"nfolds": 5, "model_id": None, "training_frame": None, "validation_frame": None,
              "response_column": None, "ignored_columns": None}
for key in new_params.keys():
    params[key]['actual'] = new_params[key]

gbm_best = H2OGradientBoostingEstimator()
for key in params.keys():
    if key in dir(gbm_best) and getattr(gbm_best, key) != params[key]['actual']:
        setattr(gbm_best, key, params[key]['actual'])
gbm_best.train(x=predictors, y=response, training_frame=df)
print(gbm_best.cross_validation_metrics_summary())
|
It looks like the winning model performs slightly better on the validation and test sets than during cross-validation on the training set as the mean AUC on the 5 folds is estimated to be only 97.4%, but with a fairly large standard deviation of 0.9%. For small datasets, such a large variance is not unusual. To get a better estimate of model performance, the Random hyper-parameter search could have used nfolds = 5 (or 10, or similar) in combination with 80% of the data for training (i.e., not holding out a validation set, but only the final test set). However, this would take more time, as nfolds+1 models will be built for every set of parameters.
Instead, to save time, let's just scan through the top 5 models and cross-validate their parameters with nfolds=5 on the entire dataset:
|
for i in range(5):
    gbm = h2o.get_model(sorted_final_grid.sorted_metric_table()['model_ids'][i])
    # get the parameters from the Random grid search model and modify them slightly
    params = gbm.params
    new_params = {"nfolds": 5, "model_id": None, "training_frame": None, "validation_frame": None,
                  "response_column": None, "ignored_columns": None}
    for key in new_params.keys():
        params[key]['actual'] = new_params[key]
    new_model = H2OGradientBoostingEstimator()
    for key in params.keys():
        if key in dir(new_model) and getattr(new_model, key) != params[key]['actual']:
            setattr(new_model, key, params[key]['actual'])
    new_model.train(x=predictors, y=response, training_frame=df)
    cv_summary = new_model.cross_validation_metrics_summary().as_data_frame()
    print(gbm.model_id)
    print(cv_summary.iloc[1])  ## AUC
|
Note that the label (survived or not) is predicted as well (in the first predict column), and it uses the threshold with the highest F1 score (here: 0.528098) to make labels from the probabilities for survival (p1). The probability for death (p0) is given for convenience, as it is just 1-p1.
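The thresholding step described above can be sketched as follows (toy probabilities; 0.528098 is the max-F1 threshold quoted in the text):

```python
p1 = [0.80, 0.60, 0.40, 0.20]  # predicted probabilities of survival
threshold = 0.528098           # threshold with the highest F1 score

# A probability at or above the threshold is labeled "survived" (1).
predict = [1 if p >= threshold else 0 for p in p1]
# The probability of death is just the complement of p1.
p0 = [1 - p for p in p1]

print(predict)  # [1, 1, 0, 0]
```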
|
best_model.model_performance(valid)
# Key of best model:
best_model.key
|
You can also see the "best" model in more detail in Flow:
<img src="files/best_gbm1.png">
<img src="files/best_gbm2.png">
The model and the predictions can be saved to file as follows:
|
# uncomment if you want to export the best model
# h2o.save_model(best_model, "/tmp/bestModel.csv", force=True)
# h2o.export_file(preds, "/tmp/bestPreds.csv", force=True)
# print pojo to screen, or provide path to download location
# h2o.download_pojo(best_model)
|
We can bring those ensemble predictions to our Python session's memory space and use other Python packages.
|
from sklearn.metrics import roc_auc_score
# convert the prob and test[response] h2o frames to pandas DataFrames, then to numpy arrays
np_array_prob = prob.as_data_frame().values
np_array_test = test[response].as_data_frame().values
# compare the true labels (test[response]) to the probability scores (prob)
roc_auc_score(np_array_test, np_array_prob)
|
Line plot
|
import pandas as pd
import random
random.seed(42)
data = []
for i in range(20):
    data.append({'x': i, 'y': random.uniform(0, 1), 'c': int(random.uniform(0, 3))})
|
multiple_simple_examples.ipynb
|
stitchfix/d3-jupyter-tutorial
|
mit
|
Based on http://bl.ocks.org/d3noob/b3ff6ae1c120eea654b5 :
|
HTML(d3_lib.draw_graph('basic_line',{'data': data}))
|
Scatterplot with same data
Based on http://bl.ocks.org/mbostock/3887118 :
|
HTML(d3_lib.draw_graph('basic_scatter',{'data': data}))
|
Graph visualization
Based on http://bl.ocks.org/mbostock/4062045
|
n_nodes = 30
p_edge = 0.05
graph = {"nodes": [], "links": []}
for i in range(n_nodes):
    graph["nodes"].append({"name": "i" + str(i), "group": int(random.uniform(1, 11))})
for i in range(n_nodes):
    for j in range(n_nodes):
        if random.uniform(0, 1) < p_edge:
            graph["links"].append({"source": i, "target": j, "value": random.uniform(0.5, 3)})
HTML(d3_lib.draw_graph('force_directed_graph',{'data': graph}))
|
Day-Hour heatmap
Based on http://bl.ocks.org/tjdecke/5558084
|
data = []
for d in range(1, 8):
    for h in range(1, 25):
        data.append({'day': d, 'hour': h, 'value': int(random.gauss(0, 100))})
HTML(d3_lib.draw_graph('day-hr-heatmap',{'data': data}))
|
Run simulation
|
%%bash
if [[ -z "${FAUNUS_EXECUTABLE}" ]]; then
    mpirun -np 4 faunus -i input.json
else
    echo "Seems we're running CTest - use Faunus target from CMake"
    "${MPIEXEC}" -np 4 "${FAUNUS_EXECUTABLE}" -i input.json --nobar
fi
|
examples/temper/temper.ipynb
|
mlund/faunus
|
mit
|
matplotlib
interactive vis notes: http://matplotlib.org/users/navigation_toolbar.html
|
# Explicit imports (the original notebook relied on %pylab's implicit namespace).
from numpy import arange, sin, pi
from matplotlib.pyplot import figure, show
t = arange(0.0, 1.0, 0.01)
y1 = sin(2*pi*t)
y2 = sin(2*2*pi*t)
import pandas as pd
df = pd.DataFrame({'t': t, 'y1': y1, 'y2': y2})
df.head(10)
fig = figure(1, figsize = (10,10))
ax1 = fig.add_subplot(211)
ax1.plot(t, y1)
ax1.grid(True)
ax1.set_ylim((-2, 2))
ax1.set_ylabel('Gentle Lull')
ax1.set_title('I can plot waves')
for label in ax1.get_xticklabels():
label.set_color('r')
ax2 = fig.add_subplot(212)
ax2.plot(t, y2,)
ax2.grid(True)
ax2.set_ylim((-2, 2))
ax2.set_ylabel('Getting choppier')
l = ax2.set_xlabel('Hi PyLadies')
l.set_color('g')
l.set_fontsize('large')
show()
|
notebooks/HELLO WORLD | matplotlib + seaborn + ipywidgets.ipynb
|
morningc/pyladies-interactive-planetary
|
mit
|
+ seaborn
|
import seaborn as sns
sns.set(color_codes=True)
sns.distplot(y1)
sns.distplot(y2)
|
notebooks/HELLO WORLD | matplotlib + seaborn + ipywidgets.ipynb
|
morningc/pyladies-interactive-planetary
|
mit
|
ipywidgets
helpful tutorial here
with matplotlib
|
from numpy import arange, sin, pi
import matplotlib.pyplot as plt
from ipywidgets import interact
t = arange(0.0, 1.0, 0.01)
def pltsin(f):
    plt.plot(t, sin(2*pi*t*f))
interact(pltsin, f=(1,10,0.1))
|
notebooks/HELLO WORLD | matplotlib + seaborn + ipywidgets.ipynb
|
morningc/pyladies-interactive-planetary
|
mit
|
with seaborn!
|
def pltsin(f):
sns.distplot(sin(2*pi*t*f))
interact(pltsin, f=(1,10,0.1))
|
notebooks/HELLO WORLD | matplotlib + seaborn + ipywidgets.ipynb
|
morningc/pyladies-interactive-planetary
|
mit
|
execfile()
|
# Python 2 only:
execfile('myfile.py')
# Python 2 and 3: alternative 1
from past.builtins import execfile
execfile('myfile.py')
# Python 2 and 3: alternative 2
exec(compile(open('myfile.py').read(), 'myfile.py', 'exec'))
# This can sometimes cause this:
# SyntaxError: function ... uses import * and bare exec ...
# See https://github.com/PythonCharmers/python-future/issues/37
|
imported/future/docs/notebooks/.ipynb_checkpoints/Writing Python 2-3 compatible code-checkpoint.ipynb
|
blockstack/packaging
|
gpl-3.0
|
Tkinter
|
# Python 2 only:
import Tkinter
import Dialog
import FileDialog
import ScrolledText
import SimpleDialog
import Tix
import Tkconstants
import Tkdnd
import tkColorChooser
import tkCommonDialog
import tkFileDialog
import tkFont
import tkMessageBox
import tkSimpleDialog
# Python 2 and 3 (after ``pip install future``):
import tkinter
import tkinter.dialog
import tkinter.filedialog
import tkinter.scrolledtext
import tkinter.simpledialog
import tkinter.tix
import tkinter.constants
import tkinter.dnd
import tkinter.colorchooser
import tkinter.commondialog
import tkinter.filedialog
import tkinter.font
import tkinter.messagebox
import tkinter.simpledialog
import tkinter.ttk
|
imported/future/docs/notebooks/.ipynb_checkpoints/Writing Python 2-3 compatible code-checkpoint.ipynb
|
blockstack/packaging
|
gpl-3.0
|
Configuring the library:
|
sn.node_size = 3
sn.node_color = (0, 0, 0)
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
This will be the professors' graph.
Randomly assigning negative and positive types to the edges:
|
from random import random
def randomize_types(g):
for n, m in g.edges():
if random() < 0.5:
g.edges[n, m]['type'] = -1
else:
g.edges[n, m]['type'] = 1
randomize_types(g)
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
This uniform distribution roughly models the professors' judgments after the "retreat". We will not use a fixed assignment of types, because your solution must not be "rigged": it must work for any similar random assignment.
Converting types to colors for visualization:
|
def convert_types_to_colors(g):
for n, m in g.edges():
if g.edges[n, m]['type'] == -1:
g.edges[n, m]['color'] = (255, 0, 0)
else:
g.edges[n, m]['color'] = (0, 0, 255)
convert_types_to_colors(g)
sn.show_graph(g)
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
Defining a type update:
|
from random import choice
# This function computes the pressure each edge feels to change its type,
# i.e., the number of influence situations it is involved in.
# If no edge is under pressure, the function returns False. Otherwise,
# an edge with maximum pressure flips its type and the function returns True.
#
# The parameters a, b, c are the weights of the three kinds of pressure.
#
# a: weight of Situation A ("common enemy")
#
# b: weight of Situation B ("peacemaker")
#
# c: weight of Situation C ("sowing discord")
#
# The default values are 1. We saw in class that they lead to polarization.
def update_type(g, a=1, b=1, c=1):
    # Initialize the pressures.
    for n, m in g.edges():
        g.edges[n, m]['pressure'] = 0
    # For each triangle in the graph.
    for n in g.nodes():
        for m in g.nodes():
            if n != m:
                for l in g.nodes():
                    if n != l and m != l:
                        # Store the triangle's three edges in a list.
                        edges = [(n, m), (n, l), (m, l)]
                        # Count how many of these edges are positive.
                        positives = 0
                        for e in edges:
                            if g.edges[e[0], e[1]]['type'] == 1:
                                positives += 1
                        # If zero or two of them are positive, increase the pressures.
                        if positives == 0:
                            for e in edges:
                                g.edges[e[0], e[1]]['pressure'] += a # Situation A
                        if positives == 2:
                            for e in edges:
                                if g.edges[e[0], e[1]]['type'] == -1:
                                    g.edges[e[0], e[1]]['pressure'] += b # Situation B
                                else:
                                    g.edges[e[0], e[1]]['pressure'] += c # Situation C
    # Get the maximum pressure.
    pressure = max([g.edges[n, m]['pressure'] for n, m in g.edges()])
    # If the maximum pressure is zero, return False.
    if pressure == 0:
        return False
    # Otherwise, randomly choose an edge with maximum pressure and flip its type.
    n, m = choice([(n, m) for n, m in g.edges() if g.edges[n, m]['pressure'] == pressure])
    g.edges[n, m]['type'] *= -1
    return True
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
You do not need to modify update_type, only decide which values of a, b and c should be passed.
Defining a component counter:
|
from queue import Queue
# It is not mandatory to understand this code, but I will be sad if
# you don't notice that it is just a sequence of breadth-first searches!
def count_components(g):
for n in g.nodes():
g.node[n]['label'] = 0
label = 0
q = Queue()
for s in g.nodes():
if g.node[s]['label'] == 0:
label += 1
g.node[s]['label'] = label
q.put(s)
while not q.empty():
n = q.get()
for m in g.neighbors(n):
if g.node[m]['label'] == 0 and g.edges[n, m]['type'] == 1:
g.node[m]['label'] = label
q.put(m)
return label
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
Simulating, several times, the process in which the graph's edges change type until no pressure remains.
At the end of the simulations, the average number of components is printed.
This is the code you must modify. In particular, you must modify the variables a, b and c.
|
TIMES = 100
LIMIT = 100
a = 1 # Situation A
b = 1 # Situation B
c = 1 # Situation C
# Initialize the average number of components.
mean_components = 0
for _ in range(TIMES):
    # Initialize the edges for a new simulation.
    randomize_types(g)
    # Initialize the iteration counter.
    iterations = 0
    # Continue while some edge is under pressure.
    while update_type(g, a=a, b=b, c=c):
        iterations += 1
        # If it exceeded LIMIT iterations, it will probably never finish.
        if iterations == LIMIT:
            break
    # If one of the simulations did not converge, give up.
    if iterations == LIMIT:
        break
    # Otherwise, update the average number of components.
    mean_components += count_components(g)
if iterations == LIMIT:
    print('one of the simulations did not seem to be converging')
else:
    # Finalize the average number of components.
    mean_components /= TIMES
    print('average number of components:', mean_components)
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
To gain insight into what is happening, don't forget to examine the animated version of the simulation!
|
def snapshot(g, frames):
convert_types_to_colors(g)
frame = sn.generate_frame(g)
frames.append(frame)
frames = []
# Initialize the edges for a new simulation.
randomize_types(g)
# Randomly initialize the node positions.
sn.randomize_positions(g)
snapshot(g, frames)
iterations = 0
# Continue while some edge is under pressure.
while update_type(g, a=a, b=b, c=c):
    # Nudge the vertex positions, using the edges' 'type' attribute as a
    # reference for how strongly two vertices attract each other.
    sn.update_positions(g, 'type')
    snapshot(g, frames)
    iterations += 1
    # If it exceeded LIMIT iterations, it will probably never finish.
    if iterations == LIMIT:
        print('the simulation did not seem to be converging')
        break
sn.show_animation(frames)
|
encontro09.ipynb
|
hashiprobr/redes-sociais
|
gpl-3.0
|
Exercise 1: The Pmf object provides __add__, so you can use the + operator to compute the Pmf of the sum of two dice.
Compute and plot the Pmf of the sum of two 6-sided dice.
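The same distribution can also be checked by brute-force enumeration of all 36 equally likely outcomes (a stand-alone sketch, independent of the Pmf class):

```python
from collections import Counter
from fractions import Fraction

# Enumerate all ordered pairs of two fair 6-sided dice and tally the sums.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
pmf = {total: Fraction(n, 36) for total, n in sorted(counts.items())}

print(pmf[7])  # 1/6, the most likely sum
```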
|
# Solution
thinkplot.Hist(d6+d6)
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 2: Suppose I roll two dice and tell you the result is greater than 3.
Plot the Pmf of the remaining possible outcomes and compute its mean.
|
# Solution
pmf = d6 + d6
pmf[2] = 0
pmf[3] = 0
pmf.Normalize()
thinkplot.Hist(pmf)
pmf.Mean()
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 3: Suppose we put the first cookie back, stir, choose again from the same bowl, and get a chocolate cookie.
Hint: The posterior (after the first cookie) becomes the prior (before the second cookie).
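The two-step update can also be sketched with plain dicts (assuming the classic mixes: Bowl 1 is 75% vanilla / 25% chocolate, Bowl 2 is 50/50); the posterior from the first update is fed in as the prior for the second:

```python
def update(prior, likelihood):
    """Multiply prior by likelihood and renormalize."""
    posterior = {h: p * likelihood[h] for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

prior = {'Bowl 1': 0.5, 'Bowl 2': 0.5}
after_vanilla = update(prior, {'Bowl 1': 0.75, 'Bowl 2': 0.5})      # 0.6 / 0.4
after_chocolate = update(after_vanilla, {'Bowl 1': 0.25, 'Bowl 2': 0.5})
print(after_chocolate)  # Bowl 1: 3/7 ~ 0.43, Bowl 2: 4/7 ~ 0.57
```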
|
# Solution
cookie['Bowl 1'] *= 0.25
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
cookie.Print()
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 4: Instead of doing two updates, what if we collapse the two pieces of data into one update?
Re-initialize Pmf with two equally likely hypotheses and perform one update based on two pieces of data, a vanilla cookie and a chocolate cookie.
The result should be the same regardless of how many updates you do (or the order of updates).
|
# Solution
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie['Bowl 1'] *= 0.75 * 0.25
cookie['Bowl 2'] *= 0.5 * 0.5
cookie.Normalize()
cookie.Print()
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 5: We'll solve this problem two ways. First we'll do it "by hand", as we did with the cookie problem; that is, we'll multiply each hypothesis by the likelihood of the data, and then renormalize.
In the space below, update suite based on the likelihood of the data (rolling a 6), then normalize and print the results.
|
# Solution
pmf[4] *= 0
pmf[6] *= 1/6
pmf[8] *= 1/8
pmf[12] *= 1/12
pmf.Normalize()
pmf.Print()
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 6: Now let's do the same calculation using Suite.Update.
Write a definition for a new class called Dice that extends Suite. Then define a method called Likelihood that takes data and hypo and returns the probability of the data (the outcome of rolling the die) for a given hypothesis (number of sides on the die).
Hint: What should you do if the outcome exceeds the hypothetical number of sides on the die?
Here's an outline to get you started:
|
class Dice(Suite):
# hypo is the number of sides on the die
# data is the outcome
def Likelihood(self, data, hypo):
return 1
# Solution
class Dice(Suite):
# hypo is the number of sides on the die
# data is the outcome
def Likelihood(self, data, hypo):
if data > hypo:
return 0
else:
return 1 / hypo
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 7: Suppose we see another tank with serial number 17. What effect does this have on the posterior probabilities?
Update the suite again with the new data and plot the results.
|
# Solution
thinkplot.Pdf(tank, color='0.7')
tank.Update(17)
thinkplot.Pdf(tank)
tank.Mean()
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
The Euro problem
Exercise 8: Write a class definition for Euro, which extends Suite and defines a likelihood function that computes the probability of the data (heads or tails) for a given value of x (the probability of heads).
Note that hypo is in the range 0 to 100. Here's an outline to get you started.
|
class Euro(Suite):
def Likelihood(self, data, hypo):
"""
hypo is the prob of heads (0-100)
data is a string, either 'H' or 'T'
"""
return 1
# Solution
class Euro(Suite):
def Likelihood(self, data, hypo):
"""
hypo is the prob of heads (0-100)
data is a string, either 'H' or 'T'
"""
x = hypo / 100
if data == 'H':
return x
else:
return 1-x
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Exercise 9: Update euro1 and euro2 with the same data we used before (140 heads and 110 tails) and plot the posteriors. How big is the difference in the means?
|
# Solution
evidence = 'H' * 140 + 'T' * 110
for outcome in evidence:
euro1.Update(outcome)
euro2.Update(outcome)
thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Posteriors')
euro1.Mean(), euro2.Mean()
|
workshop/workshop01soln.ipynb
|
AllenDowney/ThinkBayes2
|
mit
|
Traditional Value
In this notebook, we will develop a strategy based on the "traditional value" metrics described in the Lo/Patel whitepaper. The factors employed in this strategy designate stocks as either cheap or expensive using classic fundamental analysis. The factors that Lo/Patel used are:
Dividend Yield
Price to Book Value
Price to Trailing 12-Month Sales
Price to Trailing 12-Month Cash Flows
Dividend Yield
Dividend yield is calculated as:
$$Dividend\;Yield = \frac{Annual\;Dividends\;per\;share}{Price\;per\;share}$$
When a company makes profit, it faces a choice. It could either reinvest those profits in the company with an eye to increase efficiency, purchase new technology, etc. or it could pay dividends to its equity holders. While reinvestment may increase a company's future share price and thereby reward investors, the most concrete way equity holders are rewarded is through dividends. An equity with a high dividend yield is particularly attractive as the quantity of dividends paid to investors represent a larger proportion of the share price itself. Now we shall create a Dividend Yield factor using the Pipeline API framework and Pipeline's list of fundamental values.
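As a quick numeric illustration of the formula (made-up figures, not real market data):

```python
# A stock paying $2.40/year in dividends at a $60 share price yields 4%.
annual_dividends_per_share = 2.40
price_per_share = 60.00
dividend_yield = annual_dividends_per_share / price_per_share
print(f'{dividend_yield:.1%}')  # 4.0%
```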
|
# Custom Factor 1 : Dividend Yield
class Div_Yield(CustomFactor):
inputs = [Fundamentals.div_yield5_year]
window_length = 1
def compute(self, today, assets, out, d_y):
out[:] = d_y[-1]
|
notebooks/lectures/Case_Study_Traditional_Value_Factor/notebook.ipynb
|
quantopian/research_public
|
apache-2.0
|
While this factor could be calculated using other fundamental metrics, Fundamentals removes the need for any calculation. It is good practice to check the list of fundamentals before creating a custom factor from scratch.
We will initialize a temporary Pipeline to get a sense of the values.
|
# create the pipeline
temp_pipe_1 = Pipeline()
# add the factor to the pipeline
temp_pipe_1.add(Div_Yield(), 'Dividend Yield')
# run the pipeline and get data for first 5 equities
run_pipeline(temp_pipe_1, start_date = '2015-11-11', end_date = '2015-11-11').dropna().head()
|
notebooks/lectures/Case_Study_Traditional_Value_Factor/notebook.ipynb
|
quantopian/research_public
|
apache-2.0
|
Price to Book Value
Price to Book Value (a.k.a Price to Book Ratio) is calculated as:
$$P/B\;Ratio = \frac{Price\;per\;share}{Net\;Asset\;Value\;per\;share}$$
Net Asset Value per share can be thought of (very roughly) as a company's total assets less its total liabilities, all divided by the number of shares outstanding.
The P/B Ratio gives a sense of a stock being either over- or undervalued. A high P/B ratio suggests that a stock's price is overvalued, and should therefore be shorted, whereas a low P/B ratio is attractive as the stock gained by purchasing the equity is hypothetically "worth more" than the price paid for it.
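For intuition, the ratio can be computed by hand for a made-up company:

```python
# Net asset value per share ~ (total assets - total liabilities) / shares outstanding.
total_assets = 5e9
total_liabilities = 3e9
shares_outstanding = 100e6
nav_per_share = (total_assets - total_liabilities) / shares_outstanding  # 20.0
price_per_share = 50.0
pb_ratio = price_per_share / nav_per_share
print(pb_ratio)  # 2.5
```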
We will now create a P/B Ratio custom factor and look at some of the results.
|
# Custom Factor 2 : P/B Ratio
class Price_to_Book(CustomFactor):
inputs = [Fundamentals.pb_ratio]
window_length = 1
def compute(self, today, assets, out, pbr):
out[:] = pbr[-1]
# create the Pipeline
temp_pipe_2 = Pipeline()
# add the factor to the Pipeline
temp_pipe_2.add(Price_to_Book(), 'P/B Ratio')
# run the Pipeline and get data for first 5 equities
run_pipeline(temp_pipe_2, start_date='2015-11-11', end_date='2015-11-11').head()
|
notebooks/lectures/Case_Study_Traditional_Value_Factor/notebook.ipynb
|
quantopian/research_public
|
apache-2.0
|
There are two points to make about this data series.
Firstly, AA_PR's P/B Ratio is given as NaN by Pipeline. NaN stands for "not a number" and occurs when a value cannot be fetched by Pipeline. Eventually, we will remove these NaN values from the dataset, as they often lead to confusing errors when manipulating the data.
Secondly, a low P/B Ratio and a high Dividend Yield are attractive to investors, whereas a high P/B Ratio and a low Dividend Yield are unattractive. Therefore, we will "invert" the P/B ratio by making each value negative in the factor output so that, when the data is aggregated later in the algorithm, the maxima and minima have the same underlying "meaning".
Price to Trailing 12-Month Sales
This is calculated as a simple ratio between price per share and trailing 12-month (TTM) sales.
TTM is a transformation rather than a metric: it rolls a fundamental value forward so that it always covers the most recent four quarters. For example, to calculate today's TTM sales for company XYZ, one would take the revenue from the company's most recent fiscal year-end filing, add the revenue of the quarters reported since then, and subtract the revenue of the corresponding quarters from the previous year.
To calculate the exact TTM of a security is indeed possible using Pipeline; however, the code required is slow. Luckily, this value can be well approximated by the built-in Fundamental Morningstar ratios, which use annual sales to calculate the Price to Sales fundamental value. This slight change boosts the code's speed enormously yet has very little impact on the results of the strategy itself.
Price to TTM Sales is similar to the P/B Ratio in terms of function. The major difference in these two ratios is the fact that inclusion of TTM means that seasonal fluctuations are minimized, as previous data is used to smooth the value. In our case, annualized values accomplish this same smoothing.
Also, note that the values produced are negative; this factor requires the same inversion as the P/B Ratio.
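The TTM rolling sum can be sketched with made-up quarterly figures (this is the calculation that the annualized Morningstar ratio approximates):

```python
# TTM revenue = last fiscal year's revenue
#             + revenue of quarters reported since fiscal year-end
#             - revenue of the same quarters in the prior year.
fy_revenue = 1000.0                         # most recent fiscal-year filing
quarters_since_fye = [280.0, 300.0]         # Q1, Q2 this year
same_quarters_last_year = [250.0, 260.0]    # Q1, Q2 last year
ttm_revenue = fy_revenue + sum(quarters_since_fye) - sum(same_quarters_last_year)
print(ttm_revenue)  # 1070.0
```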
|
# Custom Factor 3 : Price to Trailing 12 Month Sales
class Price_to_TTM_Sales(CustomFactor):
inputs = [Fundamentals.ps_ratio]
window_length = 1
def compute(self, today, assets, out, ps):
out[:] = -ps[-1]
# create the pipeline
temp_pipe_3 = Pipeline()
# add the factor to the pipeline
temp_pipe_3.add(Price_to_TTM_Sales(), 'Price / TTM Sales')
# run the pipeline and get data for first 5 equities
run_pipeline(temp_pipe_3, start_date='2015-11-11', end_date='2015-11-11').head()
|
notebooks/lectures/Case_Study_Traditional_Value_Factor/notebook.ipynb
|
quantopian/research_public
|
apache-2.0
|
Price to Trailing 12-Month Cashflows
This is calculated as a simple ratio between price per share and TTM free cashflow (here using the built-in Fundamental Morningstar ratio as an approximaton).
This ratio serves a similar function to the previous two. A future notebook will explore the subtle differences in these metrics, but they largely serve the same purpose. Once again, low values are attractive and high values are unattractive, so the metric must be inverted.
|
# Custom Factor 4 : Price to Trailing 12 Month Cashflow
class Price_to_TTM_Cashflows(CustomFactor):
inputs = [Fundamentals.pcf_ratio]
window_length = 1
def compute(self, today, assets, out, pcf):
out[:] = -pcf[-1]
# create the pipeline
temp_pipe_4 = Pipeline()
# add the factor to the pipeline
temp_pipe_4.add(Price_to_TTM_Cashflows(), 'Price / TTM Cashflows')
# run the pipeline and get data for first 5 equities
run_pipeline(temp_pipe_4, start_date='2015-11-11', end_date='2015-11-11').head()
|
notebooks/lectures/Case_Study_Traditional_Value_Factor/notebook.ipynb
|
quantopian/research_public
|
apache-2.0
|
The Full Pipeline
Now that each individual factor has been added, it is now time to get all the necessary data at once. In the algorithm, this will take place once every day.
Later in the process, we will need a factor in order to create an approximate S&P500, so we will also include another factor called SPY_proxy (SPY is an ETF that tracks the S&P500). The S&P500 is a collection of 500 of the largest companies traded on the stock market. Our interpretation of the S&P500 is a group of 500 companies with the greatest market capitalizations; however, the actual S&P500 will be slightly different, as Standard & Poor's, who create the index, use a more nuanced calculation methodology.
We will also alter our P/B Ratio factor in order to account for the inversion.
|
# This factor creates the synthetic S&P500
class SPY_proxy(CustomFactor):
inputs = [Fundamentals.market_cap]
window_length = 1
def compute(self, today, assets, out, mc):
out[:] = mc[-1]
# Custom Factor 2 : P/B Ratio
class Price_to_Book(CustomFactor):
inputs = [Fundamentals.pb_ratio]
window_length = 1
def compute(self, today, assets, out, pbr):
out[:] = -pbr[-1]
def Data_Pull():
    # create the pipeline for the data pull
Data_Pipe = Pipeline()
# create SPY proxy
Data_Pipe.add(SPY_proxy(), 'SPY Proxy')
# Div Yield
Data_Pipe.add(Div_Yield(), 'Dividend Yield')
# Price to Book
Data_Pipe.add(Price_to_Book(), 'Price to Book')
# Price / TTM Sales
Data_Pipe.add(Price_to_TTM_Sales(), 'Price / TTM Sales')
# Price / TTM Cashflows
Data_Pipe.add(Price_to_TTM_Cashflows(), 'Price / TTM Cashflow')
return Data_Pipe
# NB: Data pull is a function that returns a Pipeline object, so need ()
results = run_pipeline(Data_Pull(), start_date='2015-11-11', end_date='2015-11-11')
results.head()
|
notebooks/lectures/Case_Study_Traditional_Value_Factor/notebook.ipynb
|
quantopian/research_public
|
apache-2.0
|
DR8b Validation
Locate the data at NERSC:
|
Truth = Path('/project/projectdirs/desi/target/analysis/truth/')
assert Truth.exists()
release = 'dr8b'
assert (Truth / release).exists()
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Define the stripe 82 footprint:
|
def in_stripe82(ra, dec):
ra_hr = ra * 24 / 360
return (dec > -1.26) & (dec < 1.26) & ((ra_hr > 20) | (ra_hr < 4))
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Load a truth-matched catalog, and flag objects in stripe 82:
|
def load_matched(name='sdss-specObj-dr14-unique-trimmed', survey='decam', release=release):
fullname = name + '-match.fits'
path = Truth / release / survey
    assert path.exists()
    # Why does prefix contain decals even for 90prime-mosaic? (reported this to Rongpu)
prefix = f'decals-{release}'
external = astropy.table.Table.read(path / 'matched' / f'{name}-match.fits')
matched = astropy.table.Table.read(path / 'matched' / f'{prefix}-{name}-match.fits')
# Look up unmatched QSOs.
allobjs = np.load(path / 'allobjects' / f'{prefix}-{name}.npy')
parent = astropy.table.Table.read(Truth / 'parent' / f'{name}.fits')
unmatched_QSO = ~allobjs & (parent['CLASS'] == 'QSO ')
unmatched = parent[unmatched_QSO]
unmatched['STRIPE82'] = 1 * in_stripe82(unmatched['PLUG_RA'], unmatched['PLUG_DEC'])
n82 = np.count_nonzero(unmatched['STRIPE82'])
print(f'Found {len(unmatched)} unmatched QSOs ({n82} in Stripe 82).')
# Merge the matched catalogs.
merged = astropy.table.hstack((external, matched))
merged['STRIPE82'] = 1 * in_stripe82(merged['RA'], merged['DEC'])
n82 = np.count_nonzero(merged['STRIPE82'])
print(f'Loaded matched catalog for "{name}" with {len(merged)} entries ({n82} in Stripe 82).')
return merged, unmatched
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Load the north and south DR8b catalogs matched to DR14 objects:
|
DR14N, _ = load_matched(survey='90prime-mosaic')
DR14S, unmatched = load_matched(survey='decam')
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Plot densities in stripe 82 for detected DR8b objects matched to DR14 quasars:
|
def plot82(ra, dec):
ra[ra > 180] -= 360
rabins = np.linspace(-60, 60, 60)
decbins = np.linspace(-1.26, +1.26, 10)
density, _, _ = np.histogram2d(dec, ra, bins=(decbins, rabins))
binarea = (decbins[1] - decbins[0]) * (rabins[1] - rabins[0])
density /= binarea
plt.figure(figsize=(12, 4))
vmin, vmax = 0, 300 #np.percentile(density[density > 0], (1, 99))
plt.imshow(density, extent=(rabins[0], rabins[-1], decbins[0], decbins[-1]),
aspect='auto', interpolation='none', origin='lower', vmin=vmin, vmax=vmax)
plt.grid(False)
plt.xlabel('RA [deg]')
plt.ylabel('DEC [deg]')
plt.colorbar().set_label('QSO density / sq.deg.')
QSOin82 = (DR14S['CLASS'] == 'QSO ') & (DR14S['STRIPE82'] == 1)
plot82(DR14S['RA'][QSOin82], DR14S['DEC'][QSOin82])
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Do the same plot for DR14 stripe 82 quasars that were not detected in DR8b:
|
QSOin82 = (unmatched['STRIPE82'] == 1)
plot82(unmatched['PLUG_RA'][QSOin82], unmatched['PLUG_DEC'][QSOin82])
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Compare the redshift distributions of DR14 stripe 82 quasars detected or undetected in DR8b:
|
def plot_undetected():
# Lookup detected objects in RA 0-45 deg of stripe 82.
area = 45 * 1.26 * 2
detected = np.where(
(DR14S['CLASS'] == 'QSO ') & (DR14S['STRIPE82'] == 1) &
(DR14S['PLUG_RA'] > 0) & (DR14S['PLUG_RA'] < 45))[0]
z_detected = DR14S[detected]['Z']
# Lookup undetected objects in the same region.
undetected = np.where(
(unmatched['STRIPE82'] == 1) &
(unmatched['PLUG_RA'] > 0) & (unmatched['PLUG_RA'] < 45))[0]
z_undetected = unmatched[undetected]['Z']
t = astropy.table.Table()
t['Name'] = [f'z={z:.3f}' for z in z_undetected]
t['RA'] = unmatched[undetected]['PLUG_RA']
t['DEC'] = unmatched[undetected]['PLUG_DEC']
t = t[np.argsort(z_undetected)[::-1]]
# Compare the redshift distributions.
zbins = np.linspace(0, 5, 50)
plt.figure(figsize=(10, 5))
plt.hist(z_detected, bins=zbins, weights=np.ones(len(detected)) / area, alpha=0.4, label='In DR8b')
plt.hist(z_undetected, bins=zbins, weights=np.ones(len(undetected)) / area, histtype='step', lw=2, label='Undetected')
plt.xlabel('DR14 redshift')
plt.ylabel('DR14 QSOs / 0.1 / sq.deg.')
plt.xlim(zbins[0], zbins[-1])
plt.legend()
return t
undetected = plot_undetected()
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Save (z,ra,dec) for undetected DR14 QSOs in stripe 82 for use with http://legacysurvey.org/viewer-dev and https://yymao.github.io/decals-image-list-tool/:
|
undetected.write('undetected.dat', format='ascii', overwrite=True)
undetected.write('undetected.fits', overwrite=True)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Apply the random forest selection to all truth-matched objects and prepare the random forest inputs for tests below:
|
def prepare(cat, r_min=17.5, r_max=22.7, south=True):
ncat = len(cat)
print(f'Input catalog has {ncat} sources.')
# Lookup TRACTOR object type and brightblob flags.
objtype = cat['TYPE']
brightstarinblob = (cat['BRIGHTBLOB'] & 2**0) != 0
# Calculate extinction-corrected fluxes and check for zero fluxes.
bands = 'G', 'R', 'Z', 'W1', 'W2'
photOK = np.ones(ncat, bool)
flux = {}
for band in bands:
flux[band] = cat[f'FLUX_{band}'] / cat[f'MW_TRANSMISSION_{band}']
sel = flux[band] > 1e-4
photOK &= sel
print(f'Dropped {np.count_nonzero(~sel)}/{ncat} zero-flux entries for {band}.')
# Run desitarget selection.
desisel = desitarget.cuts.isQSO_randomforest(
flux['G'], flux['R'], flux['Z'], flux['W1'], flux['W2'],
objtype=objtype, brightstarinblob=brightstarinblob, south=south)
print(f'desitarget selects {np.count_nonzero(desisel)} / {ncat} sources.')
if not south:
# Convert north survey (BASS/MzLS) photometry to south survey (DECaLS) system.
flux['G'], flux['R'], flux['Z'] = desitarget.cuts.shift_photo_north(flux['G'], flux['R'], flux['Z'])
# Convert fluxes to magnitudes.
mag = {}
for band in bands:
mag[band] = np.zeros(ncat, float)
sel = flux[band] > 1e-4
mag[band][sel] = 22.5 - 2.5 * np.log10(flux[band][sel])
# Apply the same preselection as desitarget.cuts.isQSO_randomforest.
keep = photOK & (objtype == 'PSF ') & ~brightstarinblob & (mag['R'] > r_min) & (mag['R'] < r_max)
nkeep = np.count_nonzero(keep)
print(f'Preselection keeps {nkeep} / {ncat} sources.')
# Check that the DESI selection is a subset of our reconstructed preselection.
##badsel = set(np.where(desisel)[0]) - set(np.where(keep)[0])
##print(f'Found {len(badsel)} entries selected by DESI but failing preselection:')
assert np.all(desisel[~keep] == False)
# Calculate the random forest features used by desitarget.
out = astropy.table.Table()
out['R'] = mag['R'][keep]
for i, band1 in enumerate(bands[:-1]):
for band2 in bands[i + 1:]:
out[f'{band1}-{band2}'] = (mag[band1] - mag[band2])[keep]
classes = np.unique(cat['CLASS'])
out['CLASS'] = np.zeros(len(out), int)
    for i, classname in enumerate(classes):
        # Use a boolean mask here: np.count_nonzero on np.where's index tuple
        # would miscount, since index 0 is itself a zero.
        sel = cat[keep]['CLASS'] == classname
        print(f'Selected {np.count_nonzero(sel)} in class [{i+1}] "{classname.strip()}".')
        out['CLASS'][sel] = i + 1
out['RA'] = cat[keep]['RA']
out['DEC'] = cat[keep]['DEC']
out['REDSHIFT'] = cat[keep]['Z']
out['STRIPE82'] = cat[keep]['STRIPE82']
out['DESI'] = 1 * desisel[keep]
return out
outN = prepare(DR14N, south=False)
outS = prepare(DR14S, south=True)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Show how the redshift distribution of DR14 quasars detected in DR8b is shaped by the preselection cuts and random forest selection:
|
def summarize(cat, out):
zbins = np.linspace(0, 5, 50)
fig = plt.figure(figsize=(10, 5))
isQSO = cat['CLASS'] == 'QSO '
plt.hist(cat[isQSO]['Z'], bins=zbins, label='DR14 QSO in DR8b')
isQSO = out['CLASS'] == 2
plt.hist(out[isQSO]['REDSHIFT'], bins=zbins, label='Preselection')
sel = isQSO & (out['DESI'] == 1)
plt.hist(out[sel]['REDSHIFT'], bins=zbins, label='Random Forest Sel')
plt.xlabel('Redshift')
plt.xlim(zbins[0], zbins[-1])
plt.legend()
plt.show()
t = astropy.table.Table()
t['CLASS'] = ('GALAXY', 'QSO', 'STAR')
t['DR14'] = [np.count_nonzero(cat['CLASS'] == cname) for cname in ('GALAXY', 'QSO ', 'STAR ')]
t['Presel'] = [np.count_nonzero(out['CLASS'] == cidx) for cidx in (1, 2, 3)]
t['DESI RF'] = [np.count_nonzero((out['CLASS'] == cidx) & (out['DESI'] == 1)) for cidx in (1, 2, 3)]
return t
summarize(DR14N, outN)
summarize(DR14S, outS)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Random Forest Performance and Interpretability
Load the DR3 training data (the more recent DR7 training data is not available at NERSC; see issue#43):
|
Training = Path('/global/project/projectdirs/desi/target/qso_training/')
assert Training.exists()
def loadDR3():
bands = 'G', 'R', 'Z', 'W1', 'W2'
qso_mags = desitarget.train.train_mva_decals.magsExtFromFlux(
fits.open(Training / 'qso_dr3_nora36-42.fits')[1].data)
star_mags = desitarget.train.train_mva_decals.magsExtFromFlux(
fits.open(Training / 'star_dr3_nora36-42_normalized.fits')[1].data)
Xs, ys = [], []
for target, mags in zip(('QSO', 'STAR'), (qso_mags, star_mags)):
ok = np.ones(len(mags[0]), bool)
for band, mag in zip(bands, mags):
zero = (mag == 0)
print(f'Dropping {np.count_nonzero(zero)} {target} sources with zero {band} flux.')
ok &= ~zero
nok = np.count_nonzero(ok)
X, y = pd.DataFrame(), pd.DataFrame()
y['TARGET'] = np.zeros(nok, int) if target == 'STAR' else np.ones(nok, int)
X['R'] = mags[1][ok]
for i, (band1, mag1) in enumerate(zip(bands[:-1], mags[:-1])):
for (band2, mag2) in zip(bands[i+1:], mags[i+1:]):
X[f'{band1}-{band2}'] = (mag1 - mag2)[ok]
Xs.append(X)
ys.append(y)
X, y = pd.concat(Xs), pd.concat(ys)
print(f'Loaded {len(X)} DR3 training objects.')
return X, y.values.reshape(-1)
X_train, y_train = loadDR3()
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Train a random forest using the same hyperparameters as desitarget.train.train_mva_decals:
|
gen = np.random.RandomState(seed=123)
%time fit = ensemble.RandomForestClassifier(n_estimators=200, random_state=gen, n_jobs=8).fit(X_train, y_train)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Test on DR8b objects in stripe 82 RA 36-42, which were excluded from the training sample:
|
DR8b_test = (outS['STRIPE82'] == 1) & (outS['RA'] > 36) & (outS['RA'] < 42)
X_test = outS[DR8b_test][outS.colnames[:11]].to_pandas()
y_test = 1 * (outS[DR8b_test]['CLASS'] == 2)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Display the estimated importance of each feature to the trained random forest (from shuffling each feature, to preserve 1-pt stats, and measuring score change):
|
importance = pd.DataFrame(
{'Feature': X_test.columns, 'Importance': fit.feature_importances_}
).sort_values(by='Importance', ascending=False)
importance.plot('Feature', 'Importance', 'barh', figsize=(10, 10), legend=False)
plt.xlabel('Feature Importance');
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
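The shuffling idea described above (permutation importance) can be sketched without scikit-learn. The toy `predict` function and data below are illustrative, not part of the notebook:

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """Accuracy drop when each feature column is shuffled in place.

    Shuffling preserves a feature's 1-point statistics while destroying
    its relationship to the target, so the score drop measures importance.
    """
    base = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # in-place shuffle of column j only
        drops.append(base - np.mean(predict(Xp) == y))
    return np.array(drops)

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)            # only feature 0 carries signal
predict = lambda X: (X[:, 0] > 0).astype(int)
drops = permutation_importance(predict, X, y, rng)
print(drops)  # large drop for feature 0, exactly zero for feature 1
```

Note that sklearn's `feature_importances_` attribute used in the notebook is impurity-based; the shuffling approach above is the model-agnostic alternative.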
Train a smaller model that only uses the 4 most important features:
|
nbest = 4
best_features = importance[:nbest]['Feature']
importance[:nbest]
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Optimize hyperparameters for training a small model, using a 20% validation sample:
|
def optimize(max_features=(2, 3, 4), n_estimators=(10, 20, 30, 40, 50, 60, 80, 100, 150, 200), seed=123):
gen = np.random.RandomState(seed)
# Hold out a 20% validation sample for hyperparam optimization.
X_t, X_v, y_t, y_v = model_selection.train_test_split(X_train[best_features], y_train, test_size=0.2, random_state=gen)
for m in max_features:
score = []
for n in n_estimators:
model = ensemble.RandomForestClassifier(
n_estimators=n, max_features=m, random_state=gen, n_jobs=8).fit(X_t, y_t)
score.append(model.score(X_v, y_v))
plt.plot(n_estimators, score, 'o-', label=f'max_features={m}')
plt.xlabel('RandomForest n_estimators')
plt.ylabel('Validation sample mean accuracy')
plt.legend()
optimize()
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Train a small model with optimized hyperparameters:
|
gen = np.random.RandomState(seed=123)
%time small_fit = ensemble.RandomForestClassifier(n_estimators=50, max_features=2, random_state=gen, n_jobs=8).fit(X_train[best_features], y_train)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Recalculate feature importance for the small model:
|
small_importance = pd.DataFrame(
{'Feature': X_test[best_features].columns, 'Importance': small_fit.feature_importances_}
).sort_values(by='Importance', ascending=False)
small_importance.plot('Feature', 'Importance', 'barh', figsize=(10, 5), legend=False)
plt.xlabel('Feature Importance');
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Calculate the QSO probability predicted by each tree, for each test object, using both models (FULL / SMALL):
|
full_prob = np.array([tree.predict_proba(X_test) for tree in fit.estimators_])[:,:,1]
small_prob = np.array([tree.predict_proba(X_test[best_features]) for tree in small_fit.estimators_])[:,:,1]
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
All leaf nodes in all trees (of both models) are unanimous on the classification:
|
np.unique(full_prob), np.unique(small_prob)
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
However, the ensemble average of trees gives a probability per source. Compare the full and small models for each DR14 class:
|
qso_prob = np.array([full_prob.mean(axis=0), small_prob.mean(axis=0)])
def plot_test():
bins = np.linspace(0, 1, 50)
fig, axes = plt.subplots(3, 1, figsize=(12, 10))
for C, clsname in enumerate(('GALAXY', 'QSO', 'STAR')):
sel = outS[DR8b_test]['CLASS'] == C+1
for i, (name, style) in enumerate(zip(('FULL', 'SMALL'), (dict(alpha=0.5), dict(histtype='step', lw=2)))):
ax = axes[C]
ax.hist(qso_prob[i][sel], bins=bins, label=name, **style)
ax.legend(loc='upper center', ncol=2, title=f'DR14 {clsname}')
ax.set_yscale('log')
ax.set_xlabel('QSO probability')
ax.set_xlim(0, 1)
plot_test()
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
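Since each tree's leaves vote 0 or 1 unanimously, a fractional probability only appears after averaging over the ensemble; a minimal illustration with made-up votes:

```python
import numpy as np

# Hard 0/1 votes from 5 hypothetical trees (rows) for 4 objects (columns).
votes = np.array([
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
])
qso_prob = votes.mean(axis=0)  # ensemble average: one probability per object
print(qso_prob)  # [0.8 0.  0.6 1. ]
```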
Plot the distribution of the 4 most important features for each DR14 source type:
|
def plot_features():
df = X_test[best_features].copy()
C = outS[DR8b_test]['CLASS']
df['CLASS'] = [('GALAXY', 'QSO', 'STAR')[c-1] for c in C]
sns.pairplot(df, hue='CLASS', markers='.', diag_kind='kde', size=3.5)
plot_features()
|
doc/nb/qso-dr8.ipynb
|
desihub/desitarget
|
bsd-3-clause
|
Using FVA
The first approach we can follow is to use FVA (Flux Variability Analysis) which among many other applications, is used to detect blocked reactions. The cobra.flux_analysis.find_blocked_reactions() function will return a list of all the blocked reactions obtained using FVA.
|
cobra.flux_analysis.find_blocked_reactions(test_model)
|
documentation_builder/consistency.ipynb
|
opencobra/cobrapy
|
gpl-2.0
|
As we see above, we are able to obtain the blocked reaction, which in this case is $v_2$.
Using FASTCC
The second approach to obtaining consistent network in cobrapy is to use FASTCC. Using this method, you can expect to efficiently obtain an accurate consistent network. For more details regarding the algorithm, please see Vlassis N, Pacheco MP, Sauter T (2014).
|
consistent_model = cobra.flux_analysis.fastcc(test_model)
consistent_model.reactions
|
documentation_builder/consistency.ipynb
|
opencobra/cobrapy
|
gpl-2.0
|
SPA output
|
tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')
print(tus)
golden = Location(39.742476, -105.1786, 'America/Denver', 1830, 'Golden')
print(golden)
golden_mst = Location(39.742476, -105.1786, 'MST', 1830, 'Golden MST')
print(golden_mst)
berlin = Location(52.5167, 13.3833, 'Europe/Berlin', 34, 'Berlin')
print(berlin)
times = pd.date_range(start=datetime.datetime(2014,6,23), end=datetime.datetime(2014,6,24), freq='1Min')
times_loc = times.tz_localize(tus.pytz)
times
pyephemout = pvlib.solarposition.pyephem(times_loc, tus.latitude, tus.longitude)
spaout = pvlib.solarposition.spa_python(times_loc, tus.latitude, tus.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
plt.figure()
pyephemout['elevation'].plot(label='pyephem')
spaout['elevation'].plot(label='spa')
(pyephemout['elevation'] - spaout['elevation']).plot(label='diff')
plt.legend(ncol=3)
plt.title('elevation')
plt.figure()
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
(pyephemout['apparent_elevation'] - spaout['elevation']).plot(label='diff')
plt.legend(ncol=3)
plt.title('elevation')
plt.figure()
pyephemout['apparent_zenith'].plot(label='pyephem apparent')
spaout['zenith'].plot(label='spa')
(pyephemout['apparent_zenith'] - spaout['zenith']).plot(label='diff')
plt.legend(ncol=3)
plt.title('zenith')
plt.figure()
pyephemout['apparent_azimuth'].plot(label='pyephem apparent')
spaout['azimuth'].plot(label='spa')
(pyephemout['apparent_azimuth'] - spaout['azimuth']).plot(label='diff')
plt.legend(ncol=3)
plt.title('azimuth');
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
spaout = pvlib.solarposition.spa_python(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
ephemout['apparent_elevation'].plot(label='ephem apparent')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.date_range(start=datetime.date(2015,3,28), end=datetime.date(2015,3,29), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.date_range(start=datetime.date(2015,3,30), end=datetime.date(2015,3,31), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.date_range(start=datetime.date(2015,6,28), end=datetime.date(2015,6,29), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
ephemout['apparent_elevation'].plot(label='ephem apparent')
plt.legend(ncol=2)
plt.title('elevation')
plt.xlim(pd.Timestamp('2015-06-28 02:00:00+02:00'), pd.Timestamp('2015-06-28 06:00:00+02:00'))
plt.ylim(-10,10);
# use calc_time to find the time at which a solar angle occurs.
pvlib.solarposition.calc_time(
datetime.datetime(2020, 9, 14, 12),
datetime.datetime(2020, 9, 14, 15),
32.2,
-110.9,
'alt',
0.05235987755982988, # 3 degrees in radians
)
pvlib.solarposition.calc_time(
datetime.datetime(2020, 9, 14, 22),
datetime.datetime(2020, 9, 15, 4),
32.2,
-110.9,
'alt',
0.05235987755982988, # 3 degrees in radians
)
|
docs/tutorials/solarposition.ipynb
|
cwhanse/pvlib-python
|
bsd-3-clause
|
Speed tests
|
times = pd.date_range(start='20180601', freq='1min', periods=14400)
times_loc = times.tz_localize(loc.tz)
%%timeit
# NBVAL_SKIP
pyephemout = pvlib.solarposition.pyephem(times_loc, loc.latitude, loc.longitude)
#ephemout = pvlib.solarposition.ephemeris(times, loc)
%%timeit
# NBVAL_SKIP
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times_loc, loc.latitude, loc.longitude)
%%timeit
# NBVAL_SKIP
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
method='nrel_numpy')
|
docs/tutorials/solarposition.ipynb
|
cwhanse/pvlib-python
|
bsd-3-clause
|
This numba test will only work properly if you have installed numba.
|
%%timeit
# NBVAL_SKIP
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
method='nrel_numba')
|
docs/tutorials/solarposition.ipynb
|
cwhanse/pvlib-python
|
bsd-3-clause
|
The numba calculation takes a long time the first time it runs because it uses LLVM to compile the Python code to machine code. After that it is about 4 to 10 times faster, depending on your machine. You can pass a numthreads argument to this function; the optimal numthreads depends on your machine and is 4 by default.
|
%%timeit
# NBVAL_SKIP
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
method='nrel_numba', numthreads=16)
%%timeit
# NBVAL_SKIP
ephemout = pvlib.solarposition.spa_python(times_loc, loc.latitude, loc.longitude,
how='numba', numthreads=16)
|
docs/tutorials/solarposition.ipynb
|
cwhanse/pvlib-python
|
bsd-3-clause
|
<span style="color:red">
The last value appears to be noise.</span>
<span style="color:red">
I am not sure what is up with the first peak, but I know it is not the peak of interest, which is around index 2000 to 2500.</span>
<span style="color:red">
Right now I am not sure how to determine that the range of the peak of interest is 2000-2500, so I will hard-code those values for now.</span>
Below are the channels with values over 7.5 in the range of 2000 to 2500.
|
#x = findPeak(summed, (2000, 2500))
[x for x in range(len(csv)) if np.max(csv[x][2000:2500]) > 7.5]
samp = summed[2000:2500]
mu = np.mean(samp)
sig = np.std(samp)
print(mu, sig)
#plt.plot(samp)
def func(x, a, m, s, c):
return a * np.exp(-(x - m)**2 / (2 * s**2)) + c
xdata = np.arange(len(samp))  # use an ndarray so func's vectorized math works
p0 = [250, 250, 50, 10]  # initial guesses: amplitude, mean, sigma, offset
popt, pcov = curve_fit(func, xdata, samp, p0)
print(popt)
plt.plot(xdata,samp)
plt.plot(xdata,func(xdata, *popt))
plt.show()
|
calibration/Untitled.ipynb
|
bearing/dosenet-analysis
|
mit
|
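Since the fitted curve is a Gaussian plus a constant, its maximum sits exactly at the mean parameter `m` (i.e. `popt[1]`); a quick check of that identity with illustrative parameter values:

```python
import numpy as np

def gauss(x, a, m, s, c):
    return a * np.exp(-(x - m)**2 / (2 * s**2)) + c

x = np.arange(500)
y = gauss(x, a=250.0, m=250.0, s=50.0, c=10.0)  # illustrative values only
print(np.argmax(y))  # 250: the peak channel equals the mean parameter
```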
Find the channel# of the peak
|
fit = func(xdata, *popt)
channel = np.argmax(fit)
print("The channel number is", channel,"and its values is", np.max(fit))
plt.plot(xdata,samp)
plt.plot(xdata,func(xdata, *popt))
plt.plot(channel, np.max(fit), 'ro')
plt.show()
print(int(popt[1] + 2000))
|
calibration/Untitled.ipynb
|
bearing/dosenet-analysis
|
mit
|
The End.
Everything below here is no longer relevant.
Okay, but all of that was cheating, and I need to use the summed plot to find the width of the peak.
The plan is then to take the highest value within that range and find the channel it corresponds to.
I think I will start by disregarding the first peak and only looking at values above index 1000, and getting rid of the final value.
|
snipped = summed.copy()
snipped[-1] = 0
snipped[:1000] = np.mean(summed)/5
plt.plot(snipped)
plt.show()
plt.plot(summed)
plt.show()
print(np.std(snipped), np.std(summed))
|
calibration/Untitled.ipynb
|
bearing/dosenet-analysis
|
mit
|
Okay, so the plan for finding the peak is to look for points above the standard deviation and to check whether 9/10 (an arbitrary value) of the values in between are also greater than the STD.
|
def peakFinder(data):
    std = np.std(data)
    # Indices where the signal rises above the standard deviation.
    points = [x for x in range(len(data)) if data[x] > std]
    peak = None
    for p in range(len(points) - 1):
        span = data[points[p]:points[p + 1] + 1]
        # Accept the span if at least 9/10 of its values exceed the STD.
        if np.mean(span > std) >= 0.9:
            peak = (points[p], points[p + 1])
    return peak
peakFinder(snipped)
|
calibration/Untitled.ipynb
|
bearing/dosenet-analysis
|
mit
|
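The threshold-and-fraction heuristic can be exercised on synthetic data with a known peak; this standalone sketch (independent of the notebook's peakFinder) uses a made-up trace:

```python
import numpy as np

# Synthetic trace: flat baseline with a bump between indices 40 and 60.
data = np.zeros(100)
data[40:60] = 10.0

std = np.std(data)
above = np.flatnonzero(data > std)
# The candidate span runs from the first to the last above-threshold index;
# accept it if at least 9/10 of the values inside also exceed the STD.
span = data[above[0]:above[-1] + 1]
assert np.mean(span > std) >= 0.9
print(above[0], above[-1])  # 40 59
```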
Below is random code I wrote that turned out to be useless
|
# This indexHelper helps me avoid array index out of bound errors
def indexHelper(i, top, up):
if i <= 0 or i >= top - 1:
return 0
elif up:
return i+1
else:
return i-1
# Returns if x-1 < x > x+1
def isLiteralPeak(array, x, top):
return array[indexHelper(x, top, False)] < array[x] and array[x] > array[indexHelper(x, top, True)]
def findPeak(array, rng):
    top = len(array)
    return [(x, array[x]) for x in range(rng[0], rng[1]) if isLiteralPeak(array, x, top)]
def rangeFinder(row):
x, y = 0, 0
for i in range(len(row)):
if row[i] != 0:
x = i
break
for j in reversed(range(len(row))):
if row[j] != 0:
y = j
break
return (x, y)
def channelRange(csv):
return [(i, rangeFinder(csv[i])) for i in range(len(csv)) if rangeFinder(csv[i]) != (0, 0)]
channelRange(csv.T)
|
calibration/Untitled.ipynb
|
bearing/dosenet-analysis
|
mit
|
Types Matter
Python's built-in functions and operators work differently depending on the type of the variable:
|
a = 4
b = 5
a + b # the plus (+) in this case means numeric addition, so 9
a = "4"
b = "5"
a + b # the plus (+) in this case means concatenation, so '45'
|
content/lessons/02-Variables/LAB-Variables.ipynb
|
IST256/learn-python
|
mit
|
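The same type-dependence applies to other operators; for example, `*` means multiplication for ints but repetition for strings:

```python
a = 4
b = "4"
print(a * 3)  # multiplication: 12
print(b * 3)  # repetition: '444'
```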
Switching Types
There are built-in Python functions for switching types. For example:
|
x = "45" # x is a str
y = int(x) # y is now an int
z = float(x) # z is a float
print(x,y,z)
|
content/lessons/02-Variables/LAB-Variables.ipynb
|
IST256/learn-python
|
mit
|
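One caveat worth knowing: int() only accepts strings that look like whole numbers, and raises a ValueError otherwise. A small sketch:

```python
x = "forty-five"
try:
    y = int(x)
except ValueError:
    y = None
print(y)            # None: "forty-five" is not a numeric string
print(int("45"))    # 45
# int("4.5") would also raise ValueError; use float("4.5") instead
```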
Inputs type str
When you use the input() function the result is of type str:
|
age = input("Enter your age: ")
type(age)
|
content/lessons/02-Variables/LAB-Variables.ipynb
|
IST256/learn-python
|
mit
|
We can use a built in Python function to convert the type from str to our desired type:
|
age = input("Enter your age: ")
age = int(age)
type(age)
|
content/lessons/02-Variables/LAB-Variables.ipynb
|
IST256/learn-python
|
mit
|