markdown | code | path | repo_name | license
|---|---|---|---|---|
SAPCAR Entry Header | f0._sapcar.files0[0].canvas_dump() | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
SAPCAR Data Block | f0._sapcar.files0[0].blocks[0].canvas_dump() | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
SAPCAR Compressed Data | f0._sapcar.files0[0].blocks[0].compressed.canvas_dump() | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
SAPCAR Archive version 2.01 | f1 = SAPCARArchive("archive_file.car", mode="wb", version=SAPCAR_VERSION_201)
f1.add_file("some_file") | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
The file comprises the following main structures:
SAPCAR Archive Header | f1._sapcar.canvas_dump() | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
SAPCAR Entry Header | f1._sapcar.files1[0].canvas_dump() | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
SAPCAR Data Block | f1._sapcar.files1[0].blocks[0].canvas_dump() | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
SAPCAR Compressed Data | f1._sapcar.files1[0].blocks[0].compressed.canvas_dump()
from os import remove
remove("some_file")
remove("archive_file.car") | docs/fileformats/SAPCAR.ipynb | CoreSecurity/pysap | gpl-2.0 |
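The cells above rely on pysap's `SAPCARArchive` to parse these structures. As a rough, library-free illustration of what sits at the very start of an archive, the sketch below parses only an initial eyecatcher and version string from a raw byte buffer. The `b"CAR "` magic and the 4-byte ASCII version field (e.g. `2.01`, matching the `SAPCAR_VERSION_201` constant above) are assumptions about the layout for illustration, not pysap's actual parser.

```python
def parse_sapcar_header(raw):
    """Parse the (assumed) SAPCAR eyecatcher and version from raw bytes.

    Layout assumed here: a 4-byte magic b"CAR " followed by a 4-byte
    ASCII version string such as b"2.01". Returns (magic, version).
    """
    if len(raw) < 8:
        raise ValueError("buffer too short for a SAPCAR header")
    magic, version = raw[:4], raw[4:8].decode("ascii")
    if magic != b"CAR ":
        raise ValueError("not a SAPCAR archive: bad eyecatcher %r" % magic)
    return magic, version

# Synthetic example bytes (not a real archive):
magic, version = parse_sapcar_header(b"CAR 2.01" + b"\x00" * 8)
```

A real reader would, of course, go through `SAPCARArchive` as in the cells above rather than hand-parsing bytes.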
Boilerplate for graph visualization | # This is for graph visualization.
import numpy as np
import tensorflow as tf
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '&quot;'))
display(HTML(iframe)) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
Load the data
Run 00_download_data.ipynb if you haven't already | import os
import numpy as np
DATA_DIR = '../data/'
data_filename = os.path.join(DATA_DIR, "zoo.npz")
data = np.load(open(data_filename))
train_data = data['arr_0']
train_labels = data['arr_1']
test_data = data['arr_2']
test_labels = data['arr_3']
del data
print("Data shapes: ", test_data.shape, test_labels.shape, train_data.shape, train_labels.shape) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
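The `arr_0` … `arr_3` keys used above are not magic: `np.savez` stores positional (non-keyword) arrays under those names. A small self-contained sketch of that round trip, using a temporary file:

```python
import os
import tempfile

import numpy as np

# Positional arguments to np.savez are stored as arr_0, arr_1, ...
a = np.arange(4)
b = np.ones((2, 2))
path = os.path.join(tempfile.mkdtemp(), "demo.npz")
np.savez(path, a, b)

with np.load(path) as data:
    keys = sorted(data.files)     # ['arr_0', 'arr_1']
    restored = data["arr_0"]
```

Passing keyword arguments instead (`np.savez(path, train=a, test=b)`) would give the more readable keys `train` and `test`.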
Create a simple classifier with low-level TF Ops | tf.reset_default_graph()
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
batch_size = 32
hidden1_units = 128
data_batch = tf.placeholder("float", shape=[None, input_dimension], name="data")
label_batch = tf.placeholder("float", shape=[None, output_dimension], name="labels")
weights_1 = tf.Variable(
tf.truncated_normal(
[input_dimension, hidden1_units],
stddev=1.0 / np.sqrt(float(input_dimension))),
name='weights_1')
# Task: Add Bias to first layer
# Task: Use Cross-Entropy instead of Squared Loss
# SOLUTION: Create biases variable.
biases_1 = tf.Variable(
tf.truncated_normal(
[hidden1_units],
stddev=1.0 / np.sqrt(float(hidden1_units))),
name='biases_1')
weights_2 = tf.Variable(
tf.truncated_normal(
[hidden1_units, output_dimension],
stddev=1.0 / np.sqrt(float(hidden1_units))),
name='weights_2')
# SOLUTION: Add the bias term to the first layer
wx_b = tf.add(tf.matmul(data_batch, weights_1), biases_1)
hidden_activations = tf.nn.relu(wx_b)
output_activations = tf.nn.tanh(tf.matmul(hidden_activations, weights_2))
# SOLUTION: Replace the l2 loss with softmax cross entropy.
with tf.name_scope("loss"):
# loss = tf.nn.l2_loss(label_batch - output_activations)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
labels=label_batch,
logits=output_activations))
show_graph(tf.get_default_graph().as_graph_def())
| archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
We can run this graph by feeding in batches of examples using a feed_dict. The keys of the feed_dict are placeholders we've defined previously.
The first argument of session.run is the tensor that we're computing. Only parts of the graph required to produce this value will be executed. | with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
random_indices = np.random.permutation(train_data.shape[0])
for i in range(1000):
batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size
batch_indices = random_indices[batch_start_idx:batch_start_idx + batch_size]
batch_loss = sess.run(
loss,
feed_dict = {
data_batch : train_data[batch_indices,:],
label_batch : train_labels[batch_indices,:]
})
if (i + 1) % 100 == 0:
print("Loss at iteration {}: {}".format(i+1, batch_loss)) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
No learning yet, but we get the losses per batch.
We need to add an optimizer to the graph. | # Task: Replace GradientDescentOptimizer with AdagradOptimizer and a 0.1 learning rate.
# learning_rate = 0.005
# updates = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# SOLUTION: Replace GradientDescentOptimizer
learning_rate = 0.1
updates = tf.train.AdagradOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
random_indices = np.random.permutation(train_data.shape[0])
n_epochs = 10 # how often to go through the training data
max_steps = train_data.shape[0]*n_epochs // batch_size
for i in range(max_steps):
batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size
batch_indices = random_indices[batch_start_idx:batch_start_idx+batch_size]
batch_loss, _ = sess.run(
[loss, updates],
feed_dict = {
data_batch : train_data[batch_indices,:],
label_batch : train_labels[batch_indices,:]
})
if i % 200 == 0 or i == max_steps - 1:
random_indices = np.random.permutation(train_data.shape[0])
print("Batch-Loss at iteration {}: {}".format(i, batch_loss))
test_predictions = sess.run(
output_activations,
feed_dict = {
data_batch : test_data,
label_batch : test_labels
})
wins = np.argmax(test_predictions, axis=1) == np.argmax(test_labels, axis=1)
print("Accuracy on test: {}%".format(100*np.mean(wins))) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
Loss going down, Accuracy going up! \o/
Notice how batch loss differs between batches.
Model wrapped in a custom estimator
In TensorFlow, we can make it easier to experiment with different models when we separately define a model_fn and an input_fn. | tf.reset_default_graph()
# Model parameters.
batch_size = 32
hidden1_units = 128
learning_rate = 0.005
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
n_epochs = 10 # how often to go through the training data
def input_fn(data, labels):
input_images = tf.constant(data, shape=data.shape, verify_shape=True, dtype=tf.float32)
input_labels = tf.constant(labels, shape=labels.shape, verify_shape=True, dtype=tf.float32)
image, label = tf.train.slice_input_producer(
[input_images, input_labels],
num_epochs=n_epochs)
dataset_dict = dict(images=image, labels=label)
batch_dict = tf.train.batch(
dataset_dict, batch_size, allow_smaller_final_batch=True)
batch_labels = batch_dict.pop('labels')
return batch_dict, batch_labels
def model_fn(features, targets, mode, params):
# 1. Configure the model via TensorFlow operations (same as above)
weights_1 = tf.Variable(
tf.truncated_normal(
[input_dimension, hidden1_units],
stddev=1.0 / np.sqrt(float(input_dimension))))
weights_2 = tf.Variable(
tf.truncated_normal(
[hidden1_units, output_dimension],
stddev=1.0 / np.sqrt(float(hidden1_units))))
hidden_activations = tf.nn.relu(tf.matmul(features['images'], weights_1))
output_activations = tf.matmul(hidden_activations, weights_2)
# 2. Define the loss function for training/evaluation
loss = tf.reduce_mean(tf.nn.l2_loss(targets - output_activations))
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer="SGD")
# 4. Generate predictions
predictions_dict = {
"classes": tf.argmax(input=output_activations, axis=1),
"probabilities": tf.nn.softmax(output_activations, name="softmax_tensor"),
"logits": output_activations,
}
# Optional: Define eval metric ops; here we add an accuracy metric.
is_correct = tf.equal(tf.argmax(input=targets, axis=1),
tf.argmax(input=output_activations, axis=1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))
eval_metric_ops = { "accuracy": accuracy}
# 5. Return predictions/loss/train_op/eval_metric_ops in ModelFnOps object
return tf.contrib.learn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
custom_model = tf.contrib.learn.Estimator(model_fn=model_fn)
# Train and evaluate the model.
def evaluate_model(model, input_fn):
for i in range(6):
max_steps = train_data.shape[0]*n_epochs // batch_size
model.fit(input_fn=lambda: input_fn(train_data, train_labels), steps=max_steps)
print(model.evaluate(input_fn=lambda: input_fn(test_data, test_labels),
steps=150))
evaluate_model(custom_model, input_fn) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
Custom model, simplified with tf.layers
Instead of doing the matrix multiplications and everything ourselves, we can use tf.layers to simplify the definition. | tf.reset_default_graph()
# Model parameters.
batch_size = 32
hidden1_units = 128
learning_rate = 0.005
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
def layers_custom_model_fn(features, targets, mode, params):
# 1. Configure the model via TensorFlow operations (using tf.layers). Note how
# much simpler this is compared to defining the weight matrices and matrix
# multiplications by hand.
hidden_layer = tf.layers.dense(inputs=features['images'], units=hidden1_units, activation=tf.nn.relu)
output_layer = tf.layers.dense(inputs=hidden_layer, units=output_dimension, activation=tf.nn.relu)
# 2. Define the loss function for training/evaluation
loss = tf.losses.mean_squared_error(labels=targets, predictions=output_layer)
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer="SGD")
# 4. Generate predictions
predictions_dict = {
"classes": tf.argmax(input=output_layer, axis=1),
"probabilities": tf.nn.softmax(output_layer, name="softmax_tensor"),
"logits": output_layer,
}
# Define eval metric ops; we can also use a pre-defined function here.
accuracy = tf.metrics.accuracy(
labels=tf.argmax(input=targets, axis=1),
predictions=tf.argmax(input=output_layer, axis=1))
eval_metric_ops = {"accuracy": accuracy}
# 5. Return predictions/loss/train_op/eval_metric_ops in ModelFnOps object
return tf.contrib.learn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
layers_custom_model = tf.contrib.learn.Estimator(
model_fn=layers_custom_model_fn)
# Train and evaluate the model.
evaluate_model(layers_custom_model, input_fn) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
Model using canned estimators
Instead of defining our own DNN classifier, TensorFlow supplies a number of canned estimators that can save a lot of work. | tf.reset_default_graph()
# Model parameters.
hidden1_units = 128
learning_rate = 0.005
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
# Our model can be defined using just three simple lines...
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
images_column = tf.contrib.layers.real_valued_column("images")
# Task: Use the DNNClassifier Estimator to create the model in 1 line.
# SOLUTION: DNNClassifier can be used to efficiently (in lines of code) create the model.
canned_model = tf.contrib.learn.DNNClassifier(
feature_columns=[images_column],
hidden_units=[hidden1_units],
n_classes=output_dimension,
activation_fn=tf.nn.relu,
optimizer=optimizer)
# Potential exercises: play with model parameters, e.g. add dropout
# We need to change the input_fn so that it returns integers representing the classes instead of one-hot vectors.
def class_input_fn(data, labels):
input_images = tf.constant(
data, shape=data.shape, verify_shape=True, dtype=tf.float32)
# The next two lines are different.
class_labels = np.argmax(labels, axis=1)
input_labels = tf.constant(
class_labels, shape=class_labels.shape, verify_shape=True, dtype=tf.int32)
image, label = tf.train.slice_input_producer(
[input_images, input_labels], num_epochs=n_epochs)
dataset_dict = dict(images=image, labels=label)
batch_dict = tf.train.batch(
dataset_dict, batch_size, allow_smaller_final_batch=True)
batch_labels = batch_dict.pop('labels')
return batch_dict, batch_labels
# Train and evaluate the model.
evaluate_model(canned_model, class_input_fn) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
Using Convolutions | import numpy as np
import tensorflow as tf
tf.reset_default_graph()
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
batch_size = 32
data_batch = tf.placeholder("float", shape=[None, input_dimension])
label_batch = tf.placeholder("float", shape=[None, output_dimension])
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# Task: convert the batch_size x num_pixels (784) input to batch_size, height (28), width(28), channels
# SOLUTION: reshape the input. We only have a single color channel.
image_batch = tf.reshape(data_batch, [-1, 28, 28, 1])
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(image_batch, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 48])
b_conv2 = bias_variable([48])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 48, 256])
b_fc1 = bias_variable([256])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*48])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Task: add dropout to fully connected layer. Add a variable to turn dropout off in eval.
# SOLUTION: add placeholder variable to deactivate dropout (keep_prob=1.0) in eval.
keep_prob = tf.placeholder(tf.float32)
# SOLUTION: add dropout to fully connected layer.
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([256, output_dimension])
b_fc2 = bias_variable([output_dimension])
output_activations = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=label_batch,
logits=output_activations))
# Solution: Switch from GradientDescentOptimizer to AdamOptimizer
# learning_rate = 0.001
# updates = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
learning_rate = 0.001
updates = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
random_indices = np.random.permutation(train_data.shape[0])
n_epochs = 5 # how often to go through the training data
max_steps = train_data.shape[0]*n_epochs // batch_size
for i in range(max_steps):
batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size
batch_indices = random_indices[batch_start_idx:batch_start_idx+batch_size]
batch_loss, _ = sess.run(
[loss, updates],
feed_dict = {
data_batch : train_data[batch_indices,:],
label_batch : train_labels[batch_indices,:],
# SOLUTION: Dropout active during training
keep_prob : 0.5})
if i % 100 == 0 or i == max_steps - 1:
random_indices = np.random.permutation(train_data.shape[0])
print("Batch-Loss at iteration {}/{}: {}".format(i, max_steps-1, batch_loss))
test_predictions = sess.run(
output_activations,
feed_dict = {
data_batch : test_data,
label_batch : test_labels,
# SOLUTION: No dropout during eval
keep_prob : 1.0
})
wins = np.argmax(test_predictions, axis=1) == np.argmax(test_labels, axis=1)
print("Accuracy on test: {}%".format(100*np.mean(wins))) | archive/zurich/solutions/02_quickdraw_solution.ipynb | random-forests/tensorflow-workshop | apache-2.0 |
Model Building Process | Image("images/model-pipeline.png") | 02-logisitc-regression-intro.ipynb | sampathweb/movie-sentiment-analysis | mit |
Dataset | centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100)
np.random.seed(42)
X = np.random.normal(0, 0.2, (200, 2)) + centers
y = np.array([0] * 100 + [1] * 100)
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu)
plt.colorbar();
X[:5]
y[:5], y[-5:] | 02-logisitc-regression-intro.ipynb | sampathweb/movie-sentiment-analysis | mit |
Logistic Regression - Model
Take a weighted sum of the features and add a bias term to get the logit.
Squash this weighted sum into a range between 0 and 1 via a sigmoid function.
Sigmoid Function
<img src="images/sigmoid.png" width=500>
$$f(x) = \frac{e^x}{1+e^x}$$ | Image("images/logistic-regression.png")
## Build the Model
from sklearn.linear_model import LogisticRegression
## Step 1 - Instantiate the Model with Hyper Parameters (We don't have any here)
model = LogisticRegression()
## Step 2 - Fit the Model
model.fit(X, y)
## Step 3 - Evaluate the Model
model.score(X, y)
def plot_decision_boundaries(model, X, y):
pred_labels = model.predict(X)
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu,
vmin=0.0, vmax=1)
xx = np.linspace(-1, 2, 100)
w0, w1 = model.coef_[0]
bias = model.intercept_
yy = -w0 / w1 * xx - bias / w1
plt.plot(xx, yy, 'k')
plt.axis((-1,2,-1,2))
plt.colorbar()
plot_decision_boundaries(model, X, y) | 02-logisitc-regression-intro.ipynb | sampathweb/movie-sentiment-analysis | mit |
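The sigmoid above, $f(x) = \frac{e^x}{1+e^x} = \frac{1}{1+e^{-x}}$, can be sketched without any library. The two algebraically equivalent branches below avoid overflow in `exp` for large $|x|$ (a standard numerical trick, not something scikit-learn requires us to do by hand):

```python
import math

def sigmoid(x):
    """Numerically stable logistic sigmoid for a single float."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))  # exp(-x) <= 1 here, safe
    z = math.exp(x)                        # x < 0, so exp(x) <= 1
    return z / (1.0 + z)

print(sigmoid(0.0))  # 0.5 by symmetry
```

Note the symmetry `sigmoid(-x) == 1 - sigmoid(x)`, which is why the decision boundary sits where the weighted sum equals zero.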
Dataset - Take 2 | centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100)
np.random.seed(42)
X = np.random.normal(0, 0.5, (200, 2)) + centers
y = np.array([0] * 100 + [1] * 100)
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu)
plt.colorbar();
# Instantiate, Fit, Evaluate
model = LogisticRegression()
model.fit(X, y)
print(model.score(X, y))
y_pred = model.predict(X)
plot_decision_boundaries(model, X, y) | 02-logisitc-regression-intro.ipynb | sampathweb/movie-sentiment-analysis | mit |
Other Evaluation Methods
Confusion Matrix | from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, y_pred)
cm
pd.crosstab(y, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True) | 02-logisitc-regression-intro.ipynb | sampathweb/movie-sentiment-analysis | mit |
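From a 2x2 confusion matrix like the one above, precision and recall follow directly. A plain-Python sketch (no sklearn), using the same convention as `confusion_matrix`: rows are actual labels, columns are predicted labels.

```python
def binary_confusion(y_true, y_pred):
    """Return [[tn, fp], [fn, tp]] for 0/1 labels (rows=actual, cols=predicted)."""
    cm = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

# Tiny illustrative labels, not the dataset above:
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
(tn, fp), (fn, tp) = binary_confusion(y_true, y_pred)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
```

`model.score` above reports plain accuracy, `(tn + tp) / N`; precision and recall become more informative than accuracy when the two classes are imbalanced.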
Let's define an ES index mapping for the data that will be uploaded to the ES server | MAPPING_GIT = {
"mappings": {
"item": {
"properties": {
"date": {
"type": "date",
"format" : "E MMM d HH:mm:ss yyyy Z",
"locale" : "US"
},
"commit": {"type": "keyword"},
"author": {"type": "keyword"},
"domain": {"type": "keyword"},
"file": {"type": "keyword"},
"added": {"type": "integer"},
"removed": {"type": "integer"},
"repository": {"type": "keyword"}
}
}
}
} | Light Git index generator.ipynb | jsmanrique/grimoirelab-personal-utils | mit |
Let's give a name to the index to be created, and create it.
Note: utils.create_ES_index() removes any existing index with the given name before creating it | index_name = 'git'
utils.create_ES_index(es, index_name, MAPPING_GIT) | Light Git index generator.ipynb | jsmanrique/grimoirelab-personal-utils | mit |
Let's import the git backend from Perceval | from perceval.backends.core.git import Git | Light Git index generator.ipynb | jsmanrique/grimoirelab-personal-utils | mit |
For each repository in the settings file, let's get its data, create a summary object with the desired information and upload data to the ES server using ES bulk API. | for repo_url in settings['git']:
repo_name = repo_url.split('/')[-1]
repo = Git(uri=repo_url, gitpath='/tmp/'+repo_name)
utils.logging.info('Go for {}'.format(repo_name))
items = []
bulk_size = 10000
for commit in repo.fetch():
author_name = commit['data']['Author'].split('<')[0][:-1]
author_domain = commit['data']['Author'].split('@')[-1][:-1]
for file in commit['data']['files']:
if 'added' not in file.keys() or file['added'] == '-':
file['added'] = 0
if 'removed' not in file.keys() or file['removed'] == '-':
file['removed'] = 0
summary = {
'date': commit['data']['AuthorDate'],
'commit': commit['data']['commit'],
'author': author_name,
'domain': author_domain,
'file': file['file'],
'added': file['added'],
'removed': file['removed'],
'repository': repo_name
}
items.append({'_index': index_name, '_type': 'item', '_source': summary})
if len(items) > bulk_size:
utils.helpers.bulk(es, items)
items = []
utils.logging.info('{} items uploaded'.format(bulk_size))
if len(items) != 0:
utils.helpers.bulk(es, items)
utils.logging.info('Remaining {} items uploaded'.format(len(items))) | Light Git index generator.ipynb | jsmanrique/grimoirelab-personal-utils | mit |
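The `split('<')[0][:-1]` and `split('@')[-1][:-1]` slicing above assumes git's usual `Name <user@domain>` author format. A slightly more defensive sketch of the same parsing, as a hypothetical helper (not part of Perceval or GrimoireLab):

```python
def parse_author(author):
    """Split a git 'Name <user@domain>' string into (name, domain)."""
    name, _, rest = author.partition("<")
    email = rest.rstrip(">").strip()
    domain = email.partition("@")[2] if "@" in email else ""
    return name.strip(), domain

name, domain = parse_author("Ada Lovelace <ada@example.org>")
```

Unlike the raw slicing, this returns an empty domain instead of garbage when a commit carries a malformed author string such as `bot` with no email.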
Load data from a CSV exported from the Ted Full Grade Center. Some sanitization is performed to remove non-ASCII characters and cruft | def load_data(filename):
d = read_csv(filename)
d.columns = [remove_non_ascii(c) for c in d.columns]
d.columns = [c.split("[")[0].strip().strip("\"") for c in d.columns]
d["Weighted Total"] = [float(i.strip("%")) for i in d["Weighted Total"]]
print(d.columns)
return d
d = load_data("gc_CENG114_WI16_Ong_fullgc_2016-03-15-19-58-36.csv")
def bar_plot(dframe, data_key, offset=0):
"""
Creates a historgram of the results.
Args:
dframe: DataFrame which is imported from CSV.
data_key: Specific column to plot
offset: Allows an offset for each grade. Defaults to 0.
Returns:
dict of cutoffs, {grade: (lower, upper)}
"""
data = dframe[data_key]
d = [x for x in data if not np.isnan(x)]
N = len(d)
print(N)
heights, bins = np.histogram(d, bins=20, range=(0, 100))
bins = list(bins)
bins.pop(-1)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1)
ppl.bar(ax, bins, heights, width=5, color=colors[0], grid='y')
plt = get_publication_quality_plot(12, 8, plt)
plt.xlabel("Score")
plt.ylabel("Number of students")
#print len([d for d in data if d > 90])
mean = data.mean(0)
sigma = data.std()
maxy = np.max(heights)
prev_cutoff = 100
cutoffs = {}
# Letter grades come from the keys of the (externally defined) grade_cutoffs
# dict: A, B+, B, B-, C+, C, C-, F.
for grade, cutoff in grade_cutoffs.items():
if cutoff == float("-inf"):
cutoff = 0
else:
cutoff = max(0, mean + cutoff * sigma) + offset
plt.plot([cutoff] * 2, [0, maxy], 'k--')
plt.annotate("%.1f" % cutoff, [cutoff, maxy - 1], fontsize=18, horizontalalignment='left', rotation=45)
n = len([d for d in data if cutoff <= d < prev_cutoff])
print("Grade %s (%.1f-%.1f): %d (%.2f%%)" % (grade, cutoff, prev_cutoff, n, n*1.0/N*100))
plt.annotate(grade, [(cutoff + prev_cutoff) / 2, maxy], fontsize=18, horizontalalignment='center')
cutoffs[grade] = (cutoff, prev_cutoff)
prev_cutoff = cutoff
plt.ylim([0, maxy * 1.1])
plt.annotate("$\mu = %.1f$\n$\sigma = %.1f$\n$max=%.1f$" % (mean, sigma, data.max()), xy=(10, 7), fontsize=30)
title = data_key.split("[")[0].strip()
plt.title(title, fontsize=30)
plt.tight_layout()
plt.savefig("%s.png" % title)
return cutoffs
for c in d.columns:
if "PS" in c or "Midterm" in c or "Final" in c:
if not all(np.isnan(d[c])):
print(c)
bar_plot(d, c) | grades/statsw2016.ipynb | materialsvirtuallab/ceng114 | bsd-2-clause |
Overall grade
Overall points and assign overall grade. | cutoffs = bar_plot(d, "Weighted Total", offset=-2)
print(cutoffs)
def assign_grade(pts):
for g, c in cutoffs.items():
if c[0] < pts <= c[1]:
return g
#d = load_data("gc_CENG114_WI16_Ong_fullgc_2016-03-21-15-47-06.csv") #use revised gc
d["Final_Assigned_Egrade"] = list(map(assign_grade, d["Weighted Total"]))
d.to_csv("Overall grades_OLD.csv")
print("Written!") | grades/statsw2016.ipynb | materialsvirtuallab/ceng114 | bsd-2-clause |
1. Data preprocessing
1.1. The dataset.
A key component of any data processing method or any machine learning algorithm is the dataset, i.e., the set of data that will be the input to the method or algorithm.
The dataset collects information extracted from a population (of objects, entities, individuals, ...). For instance, we can measure the weight and height of students from a class and collect this information in a dataset ${\cal S} = \{{\bf x}_k, k=0, \ldots, K-1\}$, where $K$ is the number of students, and each sample is a 2-dimensional vector, ${\bf x}_k = (x_{k0}, x_{k1})$, with the height and the weight in the first and the second component, respectively. These components are usually called features. In other datasets, the number of features can be arbitrarily large.
1.2. Data preprocessing
The aim of data preprocessing methods is to transform the data into a form that is ready to apply machine learning algorithms. This may include:
Data normalization: transform the individual features to ensure a proper range of variation
Data imputation: assign values to features that may be missed for some data samples
Feature extraction: transform the original data to compute new features that are more appropriate for a specific prediction task
Dimensionality reduction: remove features that are not relevant for the prediction task.
Outlier removal: remove samples that may contain errors and are not reliable for the prediction task.
Clustering: partition the data into smaller subsets, that could be easier to process.
In this notebook we will focus on data normalization.
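As a tiny illustration of the imputation step listed above (not pursued further in this notebook), missing entries of a feature can be replaced by the mean of the observed values. A plain-Python sketch, using `None` to mark missing values:

```python
def impute_mean(column):
    """Replace None entries in a list by the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

filled = impute_mean([1.0, None, 3.0, None])  # missing values become 2.0
```

Libraries such as sklearn provide the same idea featurewise over a whole data matrix (e.g. its `SimpleImputer`), along with other strategies such as the median or the most frequent value.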
2. Data normalization
All samples in the dataset can be arranged by rows in a $K \times m$ data matrix ${\bf X}$, where $m$ is the number of features (i.e. the dimension of the vector space containing the data). Each one of the $m$ data features may represent variables of very different nature (e.g. time, distance, price, volume, pixel intensity,...). Thus, the scale and the range of variation of each feature can be completely different.
As an illustration, consider the 2-dimensional dataset in the figure | from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=0.60)
X = X @ np.array([[30, 4], [-8, 1]]) + np.array([90, 10])
plt.figure(figsize=(12, 3))
plt.scatter(X[:, 0], X[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show() | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | ML4DS/ML4all | mit |
We can see that the first data feature ($x_0$) has a much larger range of variation than the second ($x_1$). In practice, this may be problematic: the convergence properties of some machine learning algorithms may depend critically on the feature distributions and, in general, feature sets ranging over similar scales tend to offer better performance.
For this reason, it is desirable to transform the data so that all features have a similar range of variation. This can be done in several ways.
2.1. Standard scaling.
A common normalization method consists on applying an affine transformation
$$
{\bf t}_k = {\bf D}({\bf x}_k - {\bf m})
$$
where ${\bf D}$ is a diagonal matrix, in such a way that the transformed dataset ${\cal S}' = \{{\bf t}_k, k=0, \ldots, K-1\}$ has zero sample mean, i.e.,
$$
\frac{1}{K} \sum_{k=0}^{K-1} {\bf t}_k = 0
$$
and unit sample variance, i.e.,
$$
\frac{1}{K} \sum_{k=0}^{K-1} t_{ki}^2 = 1
$$
It is not difficult to verify that this can be done by taking ${\bf m}$ equal to the sample mean
$$
{\bf m} = \frac{1}{K} \sum_{k=0}^{K-1} {\bf x}_k
$$
and taking the diagonal components of ${\bf D}$ equal to the inverse of the standard deviation of each feature, i.e.,
$$
d_{ii} = \frac{1}{\sqrt{\frac{1}{K} \sum_{k=0}^{K-1} (x_{ki} - m_i)^2}}
$$
Using the data matrix ${\bf X}$ and the broadcasting property of the basic mathematical operators in Python, the implementation of this normalization is straightforward.
Exercise 1: Apply a standard scaling to the data matrix. To do so:
Compute the mean, and store it in variable m (you can use method mean from numpy)
Compute the standard deviation of each feature, and store the result in variable s (you can use method std from numpy)
Take advantage of the broadcasting property to normalize the data matrix in a single line of code. Save the result in variable T. | # Compute the sample mean
# m = <FILL IN>
m = np.mean(X, axis=0) # Compute the sample mean
print(f'The sample mean is m = {m}')
# Compute the standard deviation of each feature
# s = <FILL IN>
s = np.std(X, axis=0) # Compute the standard deviation of each feature
# Normalize de data matrix
# T = <FILL IN>
T = (X-m)/s # Normalize | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | ML4DS/ML4all | mit |
We can test if the transformed features have zero-mean and unit variance: | # Testing mean
print(f"- The mean of the transformed features are: {np.mean(T, axis=0)}")
print(f"- The standard deviation of the transformed features are: {np.std(T, axis=0)}") | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | ML4DS/ML4all | mit |
(note that the results can deviate from 0 or 1 due to finite precision errors) | # Now you can verify if your solution satisfies
plt.figure(figsize=(4, 4))
plt.scatter(T[:, 0], T[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show() | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | ML4DS/ML4all | mit |
2.1.1. Implementation in sklearn
The sklearn package contains a method to perform the standard scaling over a given data matrix. | from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)
print(f'The sample mean is m = {scaler.mean_}')
T2 = scaler.transform(X)
plt.figure(figsize=(4, 4))
plt.scatter(T2[:, 0], T2[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show() | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | ML4DS/ML4all | mit |
Note that, once we have defined the scaler object in Python, we can apply the same scaling transformation to other datasets. This will be useful in later topics, when the dataset may be split into several matrices and we may want to fit the transformation on one matrix and then apply it to the others.
2.2. Other normalizations.
The are some alternatives to the standard scaling that may be interesting for some datasets. Here we show some of them, available at the preprocessing module in sklearn:
preprocessing.MaxAbsScaler: Scale each feature by its maximum absolute value. As a result, all feature values will lie in the interval [-1, 1].
preprocessing.MinMaxScaler: Transform features by scaling each feature to a given range. Also, all feature values will lie in the specified interval.
preprocessing.Normalizer: Normalize samples individually to unit norm. That is, it applies the transformation ${\bf t}_k = \frac{1}{\|{\bf x}_k\|} {\bf x}_k$
preprocessing.PowerTransformer: Apply a power transform featurewise to make data more Gaussian-like.
preprocessing.QuantileTransformer: Transform features using quantile information. The transformed features follow a specific target distribution (uniform or normal).
preprocessing.RobustScaler: Scale features using statistics that are robust to outliers. This way, anomalous values in one or very few samples cannot have a strong influence in the normalization.
You can find a more detailed explanation of these transformations in the sklearn documentation.
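As an illustration, the sample-wise scaling applied by preprocessing.Normalizer — the transformation ${\bf t}_k = \frac{1}{\|{\bf x}_k\|} {\bf x}_k$ above — can be sketched directly in NumPy on a toy matrix:

```python
import numpy as np

X = np.array([[3.0, 4.0],
              [1.0, 0.0],
              [0.0, -2.0]])

# Sample-wise normalization: t_k = x_k / ||x_k||
norms = np.linalg.norm(X, axis=1, keepdims=True)
T = X / norms

print(np.linalg.norm(T, axis=1))  # every row now has unit norm
```

Note the difference with the scalers above: this one operates row by row (per sample), not column by column (per feature).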
Exercise 2: Use sklearn to transform the data matrix X into a matrix T24 such that the minimum feature value is 2 and the maximum is 4.
(Hint: select and import the appropriate preprocessing module from sklearn and follow the same steps used in the code cell above for the standard scaler) | # Write your solution here
# <SOL>
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(2, 4))
scaler.fit(X)
T24 = scaler.transform(X)
# </SOL>
# We can visually check that the transformed data features lie in the selected range.
plt.figure(figsize=(4, 4))
plt.scatter(T24[:, 0], T24[:, 1], s=50);
plt.axis('equal')
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.show() | P5.Data preprocessing/Intro5_DataNormalization_professor.ipynb | ML4DS/ML4all | mit |
Introduction to Visualization:
Density Estimation and Data Exploration
Version 0.1
By AA Miller
16 September 2017
There are many flavors of data analysis that fall under the "visualization" umbrella in astronomy. Today, by way of example, we will focus on 2 basic problems.
Problem 1) Density Estimation
Starting with 2MASS and SDSS and extending through LSST, we are firmly in an era where data and large statistical samples are cheap. With this explosion in data volume comes a problem: we do not know the underlying probability density function (PDF) of the random variables measured via our observations. Hence - density estimation: an attempt to recover the unknown PDF from observations. In some cases theory can guide us to a parametric form for the PDF, but more often than not such guidance is not available.
There is a common, simple, and very familiar tool for density estimation: histograms.
But there is also a problem:
HISTOGRAMS LIE!
We will "prove" this to be the case in a series of examples. For this exercise, we will load the famous Linnerud data set, which tested 20 middle aged men by measuring the number of chinups, situps, and jumps they could do in order to compare these numbers to their weight, pulse, and waist size. To load the data (just chinups for now) we will run the following:
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0] | from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0] | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 1a
Plot the histogram for the number of chinups using the default settings in pyplot. | plt.hist( # complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so lets examine how the histogram changes if we change the number of bins or the bin centers.
Problem 1b
Using the same data make 2 new histograms: (i) one with 5 bins (bins = 5), and (ii) one with the bars centered on the left bin edges (align = "left").
Hint - if overplotting the results, you may find it helpful to use the histtype = "step" option | plt.hist( # complete
# complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
What if we instead allow the bin width to vary and require the same number of points in each bin? You can determine the bin edges for bins with 5 sources using the following command:
bins = np.append(np.sort(chinups)[::5], np.max(chinups))
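To see why these edges give equal-count bins, here is a small self-contained check (a stand-in array of 20 distinct values replaces the chinups data):

```python
import numpy as np

# Stand-in for the chinups data: 20 distinct values
data = np.array([17, 3, 8, 12, 0, 5, 19, 14, 2, 9,
                 6, 11, 16, 1, 7, 13, 18, 4, 10, 15], dtype=float)

# Bin edges chosen so that every bin holds the same number of points
bins = np.append(np.sort(data)[::5], np.max(data))
counts, _ = np.histogram(data, bins=bins)
print(bins)    # [ 0.  5. 10. 15. 19.]
print(counts)  # [5 5 5 5]
```

Taking every 5th element of the sorted array gives the left edge of each bin; appending the maximum closes the last bin, and np.histogram includes the right edge of the final bin.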
Problem 1c
Plot a histogram with variable width bins, each with the same number of points.
Hint - setting density = True (normed = True in older versions of matplotlib) will normalize the bin heights so that the PDF integrates to 1. | # complete
plt.hist(# complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Ending the lie
Earlier I stated that histograms lie. One simple way to combat this lie: show all the data. Displaying the original data points allows viewers to somewhat intuit the effects of the particular bin choices that have been made (though this can also be cumbersome for very large data sets, which these days is essentially all data sets). The standard for showing individual observations relative to a histogram is a "rug plot," which shows a vertical tick (or other symbol) at the location of each source used to estimate the PDF.
Problem 1d Execute the cell below to see an example of a rug plot. | plt.hist(chinups, histtype = 'step')
# this is the code for the rug plot
plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect.
To recap, histograms are not ideal for density estimation for the following reasons:
They introduce discontinuities that are not present in the data
They are strongly sensitive to user choices ($N_\mathrm{bins}$, bin centering, bin grouping), without any mathematical guidance to what these choices should be
They are difficult to visualize in higher dimensions
Histograms are useful for generating a quick representation of univariate data, but for the reasons listed above they should never be used for analysis. Most especially, functions should not be fit to histograms given how greatly the number of bins and bin centering affects the output histogram.
Okay - so if we are going to rail on histograms this much, there must be a better option. There is: Kernel Density Estimation (KDE), a nonparametric form of density estimation whereby a normalized kernel function is convolved with the discrete data to obtain a continuous estimate of the underlying PDF. As a rule, the kernel must integrate to 1 over the interval $-\infty$ to $\infty$ and be symmetric. There are many possible kernels (the Gaussian is highly popular, though the Epanechnikov kernel, an inverted parabola, produces the minimal mean square error).
KDE is not completely free of the problems we illustrated for histograms above (in particular, both a kernel and the width of the kernel need to be selected), but it does manage to correct a number of the ills. We will now demonstrate this via a few examples using the scikit-learn implementation of KDE: KernelDensity, which is part of the sklearn.neighbors module.
Note There are many implementations of KDE in Python, and Jake VanderPlas has put together an excellent description of the strengths and weaknesses of each. We will use the scitkit-learn version as it is in many cases the fastest implementation.
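Before the scikit-learn demo, the underlying idea — average one kernel centered on each observation — can be sketched by hand in NumPy (the function and the data below are illustrative, not scikit-learn's implementation):

```python
import numpy as np

def gaussian_kde_1d(data, grid, bandwidth):
    """Minimal 1D KDE: average a Gaussian kernel centered on every sample."""
    u = (grid[:, None] - data[None, :]) / bandwidth      # scaled distances
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return kernels.mean(axis=1) / bandwidth              # average and rescale

data = np.array([1.0, 2.0, 2.5, 4.0])                    # stand-in observations
grid = np.linspace(-5.0, 10.0, 2000)
pdf = gaussian_kde_1d(data, grid, bandwidth=0.8)

# A valid density estimate is non-negative and integrates to ~1
total = pdf.sum() * (grid[1] - grid[0])
print(total)
```

Because each kernel integrates to 1 and we average n of them, the resulting estimate is itself a properly normalized PDF — no bin edges involved.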
To demonstrate the basic idea behind KDE, we will begin by representing each point in the dataset as a block (i.e. we will adopt the tophat kernel). Borrowing some code from Jake, we can estimate the KDE using the following code:
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
The two main options to set are the bandwidth and the kernel. | # execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 1e
Plot the KDE of the PDF for the number of chinups middle aged men can do using a bandwidth of 0.1 and a tophat kernel.
Hint - as a general rule, the grid spacing should be smaller than the bandwidth when plotting the PDF. | grid = # complete
PDFtophat = kde_sklearn( # complete
plt.plot( # complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the width (i.e., bandwidth) of the blocks.
Problem 1f
Plot the KDE of the PDF for the number of chinups middle aged men can do using bandwidths of 1 and 5 and a tophat kernel. How do the results differ from the histogram plots above? | PDFtophat1 = # complete
# complete
# complete
# complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels.
Problem 1g Plot the KDE of the PDF for the number of chinups middle aged men can do using a Gaussian and an Epanechnikov kernel. How do the results differ from the histogram plots above?
Hint - you will need to select the bandwidth. The examples above should provide insight into the useful range for bandwidth selection. You may need to adjust the values to get an answer you "like." | PDFgaussian = # complete
PDFepanechnikov = # complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In cases where the PDF is likely to be Gaussian (or close to Gaussian), Silverman's rule of thumb can be used:
$$h = 1.059 \sigma n^{-1/5}$$
where $h$ is the bandwidth, $\sigma$ is the standard deviation of the samples, and $n$ is the total number of samples. Note - in situations with bimodal or more complicated distributions, this rule of thumb can lead to woefully inaccurate PDF estimates. The most general way to estimate the choice of bandwidth is via cross validation (we will cover cross-validation later today).
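For concreteness, Silverman's rule takes a couple of lines to code. The sample below is constructed so the answer is easy to check by hand:

```python
import numpy as np

def silverman_bandwidth(data):
    """Silverman's rule of thumb: h = 1.059 * sigma * n**(-1/5)."""
    return 1.059 * np.std(data) * len(data) ** (-1 / 5)

# 32 samples with standard deviation exactly 2 -> h = 1.059 * 2 * 32**(-1/5) = 1.059
data = np.array([-2.0, 2.0] * 16)
h = silverman_bandwidth(data)
print(h)
```

Since $32^{1/5} = 2$, the factors cancel and h comes out to 1.059 exactly — a handy sanity check before applying the rule to real data.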
What about multidimensional PDFs? It is possible using many of the Python implementations of KDE to estimate multidimensional PDFs, though it is very very important to beware the curse of dimensionality in these circumstances.
Problem 2) Data Exploration
Now a more open-ended topic: data exploration. In brief, data exploration encompasses a large suite of tools (including those discussed above) to examine data that live in large-dimensional spaces. There is no single best method or optimal direction for data exploration. Instead, today we will introduce some of the tools available via Python.
As an example we will start with a basic line plot - and examine tools beyond matplotlib. | x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
plt.plot(x,y, lw = 2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim(0, 6*np.pi) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Seaborn
Seaborn is a plotting package that enables many useful features for exploration. In fact, a lot of the functionality that we developed above can readily be handled with seaborn.
To begin, we will make the same plot that we created in matplotlib. | import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
These plots look identical, but it is possible to change the style with seaborn.
seaborn has 5 style presets: darkgrid, whitegrid, dark, white, and ticks. You can change the preset using the following:
sns.set_style("whitegrid")
which will change the output for all subsequent plots. Note - if you want to change the style for only a single plot, that can be accomplished with the following:
with sns.axes_style("dark"):
with all ploting commands inside the with statement.
Problem 3a
Re-plot the sine curve using each seaborn preset to see which you like best - then adopt this for the remainder of the notebook. | sns.set_style( # complete
# complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial.
Here we load the default: | # default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
which we will now change to colorblind, which is clearer for those who are colorblind. | # set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Note - for those familiar with pandas seaborn is designed to integrate easily and directly with pandas DataFrame objects. In the example below the Iris data are loaded into a DataFrame. iPython notebooks also display the DataFrame data in a nice readable format. | iris = sns.load_dataset("iris")
iris | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function.
Problem 3b
Plot the distribution of petal lengths for the Iris data set. | # note - hist, kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots.
At this point, we are familiar with basic scatter plots in matplotlib.
Problem 3c
Make a matplotlib scatter plot showing the Iris petal length against the Iris petal width. | plt.scatter( # complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below: | with sns.axes_style("darkgrid"):
xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)
plt.scatter(xexample, yexample) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above). | # hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
with sns.axes_style("darkgrid"):
plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
plt.colorbar() | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the binedge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function. | with sns.axes_style("darkgrid"):
sns.kdeplot(xexample, yexample,shade=False) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Suppose we want to see univariate distributions in addition to the scatter plot? This is certainly possible with matplotlib and you can find examples on the web, however, with seaborn this is really easy. | sns.jointplot(x=iris['petal_length'], y=iris['petal_width']) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
But! Histograms and scatter plots can be problematic as we have discussed many times before.
Problem 3d
Re-create the plot above but set kind='kde' to produce density estimates of the distributions. | sns.jointplot( # complete | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.) | sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]]) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_type = 'kde'. | sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde') | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously. | g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3) | Sessions/Session10/Day0/TooBriefVisualization.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps. | def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias = False, activation=None)
layer = tf.layers.batch_normalization(layer, training = is_training)
layer = tf.nn.relu(layer)
return layer | batch-norm/Batch_Normalization_Exercises.ipynb | nwhidden/ND101-Deep-Learning | mit |
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps. | def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias = False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training = is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer | batch-norm/Batch_Normalization_Exercises.ipynb | nwhidden/ND101-Deep-Learning | mit |
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly. | def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# training boolean
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Tell TensorFlow to update the population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate) | batch-norm/Batch_Normalization_Exercises.ipynb | nwhidden/ND101-Deep-Learning | mit |
With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
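If that last number puzzles you: during training, batch normalization uses the statistics of the current mini-batch, but a single test sample has no meaningful batch statistics, so inference must rely on population statistics accumulated during training (TensorFlow maintains these for you via the update ops). A NumPy sketch of that bookkeeping — the decay value and data here are illustrative:

```python
import numpy as np

decay = 0.99
pop_mean, pop_var = 0.0, 1.0  # running estimates, updated during training

rng = np.random.default_rng(0)
for _ in range(500):
    batch = rng.normal(loc=3.0, scale=2.0, size=64)
    # Exponential moving averages of the batch statistics
    pop_mean = decay * pop_mean + (1 - decay) * batch.mean()
    pop_var = decay * pop_var + (1 - decay) * batch.var()

# At inference, normalize a single sample with the *population* statistics --
# a "batch" of one sample has no usable mean or variance of its own.
x = 5.0
x_hat = (x - pop_mean) / np.sqrt(pop_var + 1e-5)
print(pop_mean, pop_var)  # close to the true mean 3 and variance 4
```

If you skip the moving-average updates (or ignore them at inference), batched evaluation still looks fine while per-sample evaluation breaks — exactly the failure mode the "Accuracy on 100 samples" check is designed to catch.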
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
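As a reference point, the arithmetic that a batch normalization layer performs on one mini-batch can be written out in NumPy (gamma and beta stand in for the learned scale and offset; the input matrix is illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta             # learned rescaling

x = np.array([[1.0, 50.0],
              [2.0, 60.0],
              [3.0, 70.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))

print(y.mean(axis=0))  # ~[0, 0]
print(y.std(axis=0))   # ~[1, 1] (very slightly less, because of eps)
```

tf.nn.batch_normalization computes the same expression, but leaves it to you to supply the mean, variance, scale, and offset tensors — which is why the low-level version requires more plumbing than tf.layers.batch_normalization.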
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables. | def fully_connected(prev_layer, num_units):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer | batch-norm/Batch_Normalization_Exercises.ipynb | nwhidden/ND101-Deep-Learning | mit |
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected. | def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer | batch-norm/Batch_Normalization_Exercises.ipynb | nwhidden/ND101-Deep-Learning | mit |
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training. | def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate) | batch-norm/Batch_Normalization_Exercises.ipynb | nwhidden/ND101-Deep-Learning | mit |
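The TODO above asks you to add batch normalization to `train`. As a reminder of the math batch norm performs at each layer — independent of the TensorFlow API, with the helper name `batch_norm` and the scalar-batch shape being assumptions for illustration — here is a dependency-free sketch:

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of scalars to zero mean / unit variance,
    then scale by gamma and shift by beta (the learned BN parameters)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n  # population variance, as in BN
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
# the normalized batch has mean ~0 and variance ~1
```

In the actual exercise you would instead call `tf.layers.batch_normalization` with a boolean `training` placeholder and wrap the training op in a `tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS))` block so the running statistics get updated.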
Classifiers and Regressors
Pulsar stars | col_names = ["Mean of the integrated profile","Standard deviation of the integrated profile",
"Excess kurtosis of the integrated profile",
"Skewness of the integrated profile",
"Mean of the DM-SNR curve",
"Standard deviation of the DM-SNR curve",
"Excess kurtosis of the DM-SNR curve",
"Skewness of the DM-SNR curve",
"Class"]
col_names = [cleanup_names(col) for col in col_names]
print(col_names)
df = pd.read_csv("HTRU_2.csv", sep=",", names=col_names)
df.info() | pulsar_stars.ipynb | searchs/bigdatabox | mit |
Mean of the integrated profile.
Standard deviation of the integrated profile.
Excess kurtosis of the integrated profile.
Skewness of the integrated profile.
Mean of the DM-SNR curve.
Standard deviation of the DM-SNR curve.
Excess kurtosis of the DM-SNR curve.
Skewness of the DM-SNR curve.
Class | df.head(3)
# df.info()
len(df)
from sklearn.linear_model import LogisticRegression
X = df.iloc[:, 0:8]
y = df.iloc[:,8]
def clf_model(model):
clf = model
scores = cross_val_score(clf, X, y)
print(f"Scores: {scores}")
print(f"Mean Score: {scores.mean()}")
clf_model(LogisticRegression())
from sklearn.naive_bayes import GaussianNB
clf_model(GaussianNB())
from sklearn.neighbors import KNeighborsClassifier
clf_model(KNeighborsClassifier())
from sklearn.tree import DecisionTreeClassifier
clf_model(DecisionTreeClassifier())
from sklearn.ensemble import RandomForestClassifier
clf_model(RandomForestClassifier())
df.Mean_of_the_integrated_profile.count()
df.Class.count()
df[df.Class == 1].Class.count()
df[df.Class == 1].Class.count()/df.Class.count()
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.25)
def confusion(model):
clf = model
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
    print(f'Confusion Matrix: {confusion_matrix(y_test, y_pred)}')
print(f'Classification Report: {classification_report(y_test,y_pred)}')
return clf
confusion(LogisticRegression())
confusion(RandomForestClassifier())
from sklearn.ensemble import AdaBoostClassifier
clf_model(AdaBoostClassifier())
confusion(AdaBoostClassifier()) | pulsar_stars.ipynb | searchs/bigdatabox | mit |
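The `confusion` helper above leans on sklearn's `confusion_matrix` and `classification_report`, but the quantities reported in this notebook can also be computed by hand. A minimal dependency-free sketch (`binary_confusion` is a hypothetical helper name, not part of the notebook):

```python
def binary_confusion(y_true, y_pred):
    """Confusion-matrix counts and sensitivity (TPR) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    return tn, fp, fn, tp, tpr

tn, fp, fn, tp, tpr = binary_confusion([1, 1, 1, 0, 0], [1, 0, 1, 1, 0])
assert (tn, fp, fn, tp) == (1, 1, 1, 2)
```

For an imbalanced dataset like this one (only ~9% pulsars), the TPR is far more informative than overall accuracy.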
Customer Churn | churnDF = pd.read_csv("CHURN.csv")
churnDF.Churn.head(3)
churnDF['Churn'] = churnDF['Churn']. \
replace(to_replace=['No', 'Yes'], value=[0,1])
churnDF.Churn.head()
len(churnDF.columns)
X = churnDF.iloc[:, 0:20]
y = churnDF.iloc[:,20]
X = pd.get_dummies(X)
def clf_models(model, cv=3):
clf = model
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Scores: {scores}")
print(f"Mean Score: {scores.mean()}")
clf_models(RandomForestClassifier())
clf_models(KNeighborsClassifier())
from sklearn.linear_model import LinearRegression
clf_models(LinearRegression())
clf_models(AdaBoostClassifier())
# GaussianNB
# DecisionTree
# X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.25)
clf_models(GaussianNB())
clf_models(DecisionTreeClassifier())
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.25)
# def confusion_churn(model):
# clf = model
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
# print(f'Confusion Matrix: {y_test, y_pred}')
# print(f'Classification Report: {classification_report(y_test,y_pred)}')
# return clf
confusion(AdaBoostClassifier(n_estimators=250))
confusion(RandomForestClassifier())
clf_models(LogisticRegression())
confusion(LogisticRegression()) | pulsar_stars.ipynb | searchs/bigdatabox | mit |
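The `pd.get_dummies(X)` call above expands each categorical column into 0/1 indicator columns. A rough dependency-free sketch of the idea for a single column (`one_hot_columns` is a hypothetical name; pandas additionally handles column naming, multiple columns, and dtypes):

```python
def one_hot_columns(values):
    """Expand one categorical column into indicator columns,
    one per distinct value, keyed by category."""
    categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

dummies = one_hot_columns(["dsl", "fiber", "no", "fiber"])
assert dummies["fiber"] == [0, 1, 0, 1]
```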
Conv2DTranspose
[convolutional.Conv2DTranspose.0] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=False | data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(1,1),
padding='valid', data_format='channels_last',
activation='linear', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(150)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
# print('b shape:', weights[1].shape)
# print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
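The output spatial size of `Conv2DTranspose` in these test cases follows the usual transposed-convolution formulas (assuming no `output_padding` and dilation 1). A small sketch that reproduces the output shapes printed in the cells of this notebook:

```python
def conv_transpose_output_size(in_size, kernel, stride, padding):
    """Output spatial size of a transposed convolution along one dimension.
    'valid': (in - 1) * stride + kernel ; 'same': in * stride."""
    if padding == "valid":
        return (in_size - 1) * stride + kernel
    if padding == "same":
        return in_size * stride
    raise ValueError(padding)

# 4x4 input, 3x3 kernel — the configurations used below:
assert conv_transpose_output_size(4, 3, 1, "valid") == 6  # cases .0 / .1
assert conv_transpose_output_size(4, 3, 2, "valid") == 9  # case .2
assert conv_transpose_output_size(4, 3, 1, "same") == 4   # case .3
assert conv_transpose_output_size(4, 3, 2, "same") == 8   # case .4
```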
[convolutional.Conv2DTranspose.1] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='valid', data_format='channels_last', activation='linear', use_bias=True | data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(1,1),
padding='valid', data_format='channels_last',
activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(151)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
[convolutional.Conv2DTranspose.2] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='valid', data_format='channels_last', activation='relu', use_bias=True | data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(2,2),
padding='valid', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(152)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
[convolutional.Conv2DTranspose.3] 4 3x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True | data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(4, (3,3), strides=(1,1),
padding='same', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(153)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
[convolutional.Conv2DTranspose.4] 5 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', activation='relu', use_bias=True | data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(5, (3,3), strides=(2,2),
padding='same', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(154)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
[convolutional.Conv2DTranspose.5] 3 2x3 filters on 4x4x2 input, strides=(1,1), padding='same', data_format='channels_last', activation='relu', use_bias=True | data_in_shape = (4, 4, 2)
conv = Conv2DTranspose(3, (2,3), strides=(1,1),
padding='same', data_format='channels_last',
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(155)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2DTranspose.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
export for Keras.js tests | import os
filename = '../../../test/data/layers/convolutional/Conv2DTranspose.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA)) | notebooks/layers/convolutional/Conv2DTranspose.ipynb | transcranial/keras-js | mit |
The dataset contains, for each applicant:
- income (in the Income column),
- the number of children (in the Num_Children column),
- whether the applicant owns a car (in the Own_Car column, the value is 1 if the applicant owns a car, and 0 otherwise), and
- whether the applicant owns a home (in the Own_Housing column, the value is 1 if the applicant owns a home, and 0 otherwise)
When evaluating fairness, we'll check how the model performs for users in different groups, as identified by the Group column:
- The Group column breaks the users into two groups (where each group corresponds to either 0 or 1).
- For instance, you can think of the column as breaking the users into two different races, ethnicities, or gender groupings. If the column breaks users into different ethnicities, 0 could correspond to a non-Hispanic user, while 1 corresponds to a Hispanic user.
Run the next code cell without changes to train a simple model to approve or deny individuals for a credit card. The output shows the performance of the model. | from sklearn import tree
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Train a model and make predictions
model_baseline = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_baseline.fit(X_train, y_train)
preds_baseline = model_baseline.predict(X_test)
# Function to plot confusion matrix
def plot_confusion_matrix(estimator, X, y_true, y_pred, display_labels=["Deny", "Approve"],
include_values=True, xticks_rotation='horizontal', values_format='',
normalize=None, cmap=plt.cm.Blues):
cm = confusion_matrix(y_true, y_pred, normalize=normalize)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels)
return cm, disp.plot(include_values=include_values, cmap=cmap, xticks_rotation=xticks_rotation,
values_format=values_format)
# Function to evaluate the fairness of the model
def get_stats(X, y, model, group_one, preds):
y_zero, preds_zero, X_zero = y[group_one==False], preds[group_one==False], X[group_one==False]
y_one, preds_one, X_one = y[group_one], preds[group_one], X[group_one]
print("Total approvals:", preds.sum())
print("Group A:", preds_zero.sum(), "({}% of approvals)".format(round(preds_zero.sum()/sum(preds)*100, 2)))
print("Group B:", preds_one.sum(), "({}% of approvals)".format(round(preds_one.sum()/sum(preds)*100, 2)))
print("\nOverall accuracy: {}%".format(round((preds==y).sum()/len(y)*100, 2)))
print("Group A: {}%".format(round((preds_zero==y_zero).sum()/len(y_zero)*100, 2)))
print("Group B: {}%".format(round((preds_one==y_one).sum()/len(y_one)*100, 2)))
cm_zero, disp_zero = plot_confusion_matrix(model, X_zero, y_zero, preds_zero)
disp_zero.ax_.set_title("Group A")
cm_one, disp_one = plot_confusion_matrix(model, X_one, y_one, preds_one)
disp_one.ax_.set_title("Group B")
print("\nSensitivity / True positive rate:")
print("Group A: {}%".format(round(cm_zero[1,1] / cm_zero[1].sum()*100, 2)))
print("Group B: {}%".format(round(cm_one[1,1] / cm_one[1].sum()*100, 2)))
# Evaluate the model
get_stats(X_test, y_test, model_baseline, X_test["Group"]==1, preds_baseline) | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,
- The model approved 38246 people for a credit card. Of these individuals, 8028 belonged to Group A, and 30218 belonged to Group B.
- The model is 94.56% accurate for Group A, and 95.02% accurate for Group B. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the accuracy is (39723+7528)/(39723+500+2219+7528).
- The true positive rate (TPR) for Group A is 77.23%, and the TPR for Group B is 98.03%. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the TPR is 7528/(7528+2219).
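The percentages quoted above can be checked by plugging the Group A confusion-matrix entries into the accuracy and TPR formulas:

```python
# Group A confusion-matrix entries quoted in the text,
# laid out as [[tn, fp], [fn, tp]] with "Approve" as the positive class.
cm_a = [[39723, 500], [2219, 7528]]

tn, fp = cm_a[0]
fn, tp = cm_a[1]

accuracy = (tn + tp) / (tn + fp + fn + tp)
tpr = tp / (tp + fn)

print(round(accuracy * 100, 2))  # 94.56
print(round(tpr * 100, 2))       # 77.23
```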
1) Varieties of fairness
Consider three different types of fairness covered in the tutorial:
- Demographic parity: Which group has an unfair advantage, with more representation in the group of approved applicants? (Roughly 50% of applicants are from Group A, and 50% of applicants are from Group B.)
- Equal accuracy: Which group has an unfair advantage, where applicants are more likely to be correctly classified?
- Equal opportunity: Which group has an unfair advantage, with a higher true positive rate? | # Check your answer (Run this code cell to get credit!)
q_1.check() | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
Run the next code cell without changes to visualize the model. | def visualize_model(model, feature_names, class_names=["Deny", "Approve"], impurity=False):
plot_list = tree.plot_tree(model, feature_names=feature_names, class_names=class_names, impurity=impurity)
[process_plot_item(item) for item in plot_list]
def process_plot_item(item):
split_string = item.get_text().split("\n")
if split_string[0].startswith("samples"):
item.set_text(split_string[-1])
else:
item.set_text(split_string[0])
plt.figure(figsize=(20, 6))
plot_list = visualize_model(model_baseline, feature_names=X_train.columns) | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
The flowchart shows how the model makes decisions:
- Group <= 0.5 checks what group the applicant belongs to: if the applicant belongs to Group A, then Group <= 0.5 is true.
- Entries like Income <= 80210.5 check the applicant's income.
To follow the flow chart, we start at the top and trace a path depending on the details of the applicant. If the condition is true at a split, then we move down and to the left branch. If it is false, then we move to the right branch.
For instance, consider an applicant in Group B, who has an income of 75k. Then,
- We start at the top of the flow chart. The applicant has an income of 75k, so Income <= 80210.5 is true, and we move to the left.
- Next, we check the income again. Since Income <= 71909.5 is false, we move to the right.
- The last thing to check is what group the applicant belongs to. The applicant belongs to Group B, so Group <= 0.5 is false, and we move to the right, where the model has decided to approve the applicant.
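The walkthrough above can be expressed as a tiny rule. Note that the thresholds are read off the plotted tree, and only the branch discussed in the text ends in a decision we actually know, so this sketch reports every other leaf as "unknown":

```python
def trace_applicant(income, group):
    """Trace the single path described in the walkthrough.
    Any leaf not shown in the text is reported as 'unknown'."""
    if income <= 80210.5 and income > 71909.5:
        # Group <= 0.5 is true for Group A (left), false for Group B (right)
        return "Approve" if group > 0.5 else "unknown"
    return "unknown"

# The Group B applicant with a 75k income from the walkthrough:
assert trace_applicant(75_000, group=1) == "Approve"
# Same income, but Group A follows a different branch (leaf not shown in the text):
assert trace_applicant(75_000, group=0) == "unknown"
```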
2) Understand the baseline model
Based on the visualization, how can you explain one source of unfairness in the model?
Hint: Consider the example applicant, but change the group membership from Group B to Group A (leaving all other characteristics the same). Is this slightly different applicant approved or denied by the model? | # Check your answer (Run this code cell to get credit!)
q_2.check() | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?
Run the next code cell to see how this new group unaware model performs. | # Create new dataset with gender removed
X_train_unaware = X_train.drop(["Group"],axis=1)
X_test_unaware = X_test.drop(["Group"],axis=1)
# Train new model on new dataset
model_unaware = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_unaware.fit(X_train_unaware, y_train)
# Evaluate the model
preds_unaware = model_unaware.predict(X_test_unaware)
get_stats(X_test_unaware, y_test, model_unaware, X_test["Group"]==1, preds_unaware) | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
3) Varieties of fairness, part 2
How does this model compare to the first model you trained, when you consider demographic parity, equal accuracy, and equal opportunity? Once you have an answer, run the next code cell. | # Check your answer (Run this code cell to get credit!)
q_3.check() | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about here.)
Run the next code cell without changes to evaluate this new model. | # Change the value of zero_threshold to hit the objective
zero_threshold = 0.11
one_threshold = 0.99
# Evaluate the model
test_probs = model_unaware.predict_proba(X_test_unaware)[:,1]
preds_approval = (((test_probs>zero_threshold)*1)*[X_test["Group"]==0] + ((test_probs>one_threshold)*1)*[X_test["Group"]==1])[0]
get_stats(X_test, y_test, model_unaware, X_test["Group"]==1, preds_approval) | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
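The `preds_approval` expression applies a different probability threshold to each group. The same logic in a plain-Python sketch (`group_threshold_approvals` is a hypothetical name; the thresholds mirror the cell above):

```python
def group_threshold_approvals(probs, groups, thresh_a=0.11, thresh_b=0.99):
    """Approve applicant i iff their model score clears the threshold
    assigned to their group (0 -> Group A, 1 -> Group B)."""
    return [1 if p > (thresh_a if g == 0 else thresh_b) else 0
            for p, g in zip(probs, groups)]

approvals = group_threshold_approvals([0.2, 0.2, 0.995, 0.5], [0, 1, 1, 0])
assert approvals == [1, 0, 1, 1]
```

Setting the Group B threshold much higher than Group A's is how this model equalizes the number of approvals across groups.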
4) Varieties of fairness, part 3
How does this final model compare to the previous models, when you consider demographic parity, equal accuracy, and equal opportunity? | # Check your answer (Run this code cell to get credit!)
q_4.check() | notebooks/ethics/raw/ex4.ipynb | Kaggle/learntools | apache-2.0 |
<H1>Polynomial fit</H1> | # generate some data
np.random.seed(2)
xdata = np.random.normal(3.0, 1.0, 100)
ydata = np.random.normal(50.0, 30.0, 100) / xdata
plt.plot(xdata, ydata, 'ko',ms=2);
# lets fit to a polynomial function of degree 8
# and plot all together
f = np.poly1d( np.polyfit(xdata, ydata, 8) )
x = np.linspace(np.min(xdata), np.max(xdata), 100)
plt.plot(xdata, ydata, 'ko', ms=2)
plt.plot(x,f(x), 'red');
# compute r**2
from sklearn.metrics import r2_score
r2_score(ydata, f(xdata)) | Optimization/Polynomial regression.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
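`r2_score` implements the coefficient of determination, 1 − SS_res/SS_tot. A hand-rolled version, useful for sanity-checking the scores that follow (`r2_score_manual` is a name chosen here, not a sklearn function):

```python
def r2_score_manual(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# A perfect fit gives 1.0; always predicting the mean gives 0.0
assert r2_score_manual([1, 2, 3], [1, 2, 3]) == 1.0
assert r2_score_manual([1, 2, 3], [2, 2, 2]) == 0.0
```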
The r2_score is not very good, and the high degree of the polynomial suggests overfitting. The r2_score alone
cannot tell us which fit is best. | # find the best polynomial
mypoly = dict()
for n in range(1, 10):
f = np.poly1d( np.polyfit(xdata, ydata, n) )
mypoly[n] = r2_score(ydata, f(xdata))
    print('Pol. deg %d -> %f' % (n, mypoly[n])) | Optimization/Polynomial regression.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
<H2>Train and test method</H2>
To avoid overfitting, we'll split the data in two - 80% of it will be used for "training" our model, and the other 20% for testing it. | #we'll select 80% of the data to train
xtrain = xdata[:80]
ytrain = ydata[:80]
xtest = xdata[80:]
ytest = ydata[80:]
print(len(xtrain), len(xtest))
plt.plot(xtrain, ytrain, 'ro', ms=2)
plt.xlim(0,7), plt.ylim(0,200)
plt.title('Train');
plt.plot(xtest, ytest, 'bo', ms=2)
plt.xlim(0,7), plt.ylim(0,200)
plt.title('Test');
f = np.poly1d(np.polyfit(xtrain, ytrain, 8))
plt.plot(xtrain, ytrain, 'ko', ms=2)
plt.plot(x, f(x),'r')
plt.title('Train');
plt.plot(xtest, ytest, 'ko', ms=2)
plt.plot(x, f(x),'r')
plt.title('Test');
r2_score(ytrain, f(xtrain)), r2_score(ytest, f(xtest)) | Optimization/Polynomial regression.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
The r2_score on the test set tells us that this fit is not very good. | # let's compute train and test scores for all polynomials
# find the best polynomial
r2_test, r2_train = list(), list()
polydeg = range(1,15)
for n in polydeg:
f = np.poly1d( np.polyfit(xtrain, ytrain, n) )
r2train = r2_score(ytrain, f(xtrain))
r2_train.append(r2train)
r2test = r2_score(ytest, f(xtest))
r2_test.append(r2test)
    print('Pol. deg %2d -> r2(train) = %2.4f, r2(test) = %2.4f' % (n, r2train, r2test))
| Optimization/Polynomial regression.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
Looking at the r2_scores on the test set, we conclude that a polynomial of degree six gives the best fit. | plt.plot(polydeg, r2_train, color='gray')
plt.bar(polydeg, r2_test, color='red', alpha=.4)
plt.xlim(1, 15);
# the best is to fit with a polynomial of degree 6
f = np.poly1d(np.polyfit(xdata,ydata,6))
plt.plot(xdata,ydata, 'ko', ms=2)
plt.plot(x,f(x),'red'); | Optimization/Polynomial regression.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
Prepare Data | def generate_sequence(seq_len):
xs = np.random.random(seq_len)
ys = np.array([0 if x < 2.5 else 1 for x in np.cumsum(xs).tolist()])
return xs, ys
X, Y = generate_sequence(SEQ_LENGTH)
print(X)
print(Y)
def generate_data(seq_len, num_seqs):
xseq, yseq = [], []
for i in range(num_seqs):
X, Y = generate_sequence(seq_len)
xseq.append(X)
yseq.append(Y)
return np.expand_dims(np.array(xseq), axis=2), np.array(yseq)
Xtrain, Ytrain = generate_data(SEQ_LENGTH, TRAIN_SIZE)
Xval, Yval = generate_data(SEQ_LENGTH, VAL_SIZE)
Xtest, Ytest = generate_data(SEQ_LENGTH, TEST_SIZE)
print(Xtrain.shape, Ytrain.shape, Xval.shape, Yval.shape, Xtest.shape, Ytest.shape) | src/pytorch/10-cumsum-prediction.ipynb | sujitpal/polydlot | apache-2.0 |
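`generate_sequence` labels each timestep 1 once the running sum of the inputs reaches 2.5. Because the inputs are non-negative, the labels are monotone: once a 1 appears, the rest of the sequence stays 1. A plain-Python restatement of the labeling rule:

```python
def cumsum_labels(xs, threshold=2.5):
    """Label each position 1 once the running sum reaches the threshold."""
    ys, total = [], 0.0
    for x in xs:
        total += x
        ys.append(0 if total < threshold else 1)
    return ys

assert cumsum_labels([1.0, 1.0, 1.0, 1.0]) == [0, 0, 1, 1]
```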
Define Network
The input and output sequences have the same length. Our network follows the model built (using Keras) in the book. Unlike the typical encoder-decoder LSTM architecture used for most seq2seq problems, here we have a single LSTM followed by an FCN layer at each timestep of its output. Each FCN returns a binary 0/1 output; these outputs are concatenated to produce the predicted result. | class CumSumPredictor(nn.Module):
def __init__(self, seq_len, input_dim, hidden_dim, output_dim):
super(CumSumPredictor, self).__init__()
self.seq_len = seq_len
self.hidden_dim = hidden_dim
self.output_dim = output_dim
# network layers
self.enc_lstm = nn.LSTM(input_dim, hidden_dim, 1, batch_first=True,
bidirectional=True)
self.fcn = nn.Linear(hidden_dim * 2, output_dim) # bidirectional input
self.fcn_relu = nn.ReLU()
self.fcn_softmax = nn.Softmax()
def forward(self, x):
if torch.cuda.is_available():
h = (Variable(torch.randn(2, x.size(0), self.hidden_dim).cuda()),
Variable(torch.randn(2, x.size(0), self.hidden_dim).cuda()))
else:
h = (Variable(torch.randn(2, x.size(0), self.hidden_dim)),
Variable(torch.randn(2, x.size(0), self.hidden_dim)))
x, h = self.enc_lstm(x, h) # encoder LSTM
x_fcn = Variable(torch.zeros(x.size(0), self.seq_len, self.output_dim))
        for i in range(self.seq_len): # apply the FCN head at each timestep
x_fcn[:, i, :] = self.fcn_softmax(self.fcn_relu(self.fcn(x[:, i, :])))
x = x_fcn
return x
model = CumSumPredictor(SEQ_LENGTH, EMBED_SIZE, 50, 2)
if torch.cuda.is_available():
model.cuda()
print(model)
# size debugging
print("--- size debugging ---")
inp = Variable(torch.randn(BATCH_SIZE, SEQ_LENGTH, EMBED_SIZE))
outp = model(inp)
print(outp.size())
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE) | src/pytorch/10-cumsum-prediction.ipynb | sujitpal/polydlot | apache-2.0 |
Train Network | def compute_accuracy(pred_var, true_var):
if torch.cuda.is_available():
ypred = pred_var.cpu().data.numpy()
ytrue = true_var.cpu().data.numpy()
else:
ypred = pred_var.data.numpy()
ytrue = true_var.data.numpy()
pred_nums, true_nums = [], []
for i in range(pred_var.size(0)): # for each row of output
pred_nums.append(int("".join([str(x) for x in ypred[i].tolist()]), 2))
true_nums.append(int("".join([str(x) for x in ytrue[i].tolist()]), 2))
return pred_nums, true_nums, accuracy_score(pred_nums, true_nums)
history = []
for epoch in range(NUM_EPOCHS):
num_batches = Xtrain.shape[0] // BATCH_SIZE
shuffled_indices = np.random.permutation(np.arange(Xtrain.shape[0]))
train_loss, train_acc = 0., 0.
for bid in range(num_batches):
# extract one batch of data
Xbatch_data = Xtrain[shuffled_indices[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]]
Ybatch_data = Ytrain[shuffled_indices[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
# initialize gradients
optimizer.zero_grad()
# forward
loss = 0.
Ybatch_ = model(Xbatch)
for i in range(Ybatch.size(1)):
loss += loss_fn(Ybatch_[:, i, :], Ybatch[:, i])
# backward
loss.backward()
train_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(2)
_, _, acc = compute_accuracy(ybatch_, Ybatch)
train_acc += acc
optimizer.step()
# compute training loss and accuracy
train_loss /= num_batches
train_acc /= num_batches
# compute validation loss and accuracy
val_loss, val_acc = 0., 0.
num_val_batches = Xval.shape[0] // BATCH_SIZE
for bid in range(num_val_batches):
# data
Xbatch_data = Xval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Ybatch_data = Yval[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
loss = 0.
Ybatch_ = model(Xbatch)
for i in range(Ybatch.size(1)):
loss += loss_fn(Ybatch_[:, i, :], Ybatch[:, i])
val_loss += loss.data[0]
_, ybatch_ = Ybatch_.max(2)
_, _, acc = compute_accuracy(ybatch_, Ybatch)
val_acc += acc
val_loss /= num_val_batches
val_acc /= num_val_batches
torch.save(model.state_dict(), MODEL_FILE.format(epoch+1))
print("Epoch {:2d}/{:d}: loss={:.3f}, acc={:.3f}, val_loss={:.3f}, val_acc={:.3f}"
.format((epoch+1), NUM_EPOCHS, train_loss, train_acc, val_loss, val_acc))
history.append((train_loss, val_loss, train_acc, val_acc))
losses = [x[0] for x in history]
val_losses = [x[1] for x in history]
accs = [x[2] for x in history]
val_accs = [x[3] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs, color="r", label="train")
plt.plot(val_accs, color="b", label="valid")
plt.legend(loc="best")
plt.subplot(212)
plt.title("Loss")
plt.plot(losses, color="r", label="train")
plt.plot(val_losses, color="b", label="valid")
plt.legend(loc="best")
plt.tight_layout()
plt.show() | src/pytorch/10-cumsum-prediction.ipynb | sujitpal/polydlot | apache-2.0 |
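`compute_accuracy` compares predicted and true sequences by decoding each 0/1 vector into an integer with `int("".join(...), 2)`, so two sequences count as equal only if every timestep matches. The decoding step in isolation (`bits_to_int` is a name chosen here for illustration):

```python
def bits_to_int(bits):
    """Interpret a list of 0/1 values as a big-endian binary number,
    mirroring the int("".join(...), 2) trick in compute_accuracy."""
    value = 0
    for b in bits:
        value = value * 2 + b
    return value

assert bits_to_int([1, 0, 1]) == 5
assert bits_to_int([0, 0, 1, 1]) == 3
```

A single wrong bit anywhere in the sequence changes the decoded integer, which is why this accuracy metric is a strict, all-timesteps-correct measure.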
Evaluate Network | saved_model = CumSumPredictor(SEQ_LENGTH, EMBED_SIZE, 50, 2)
saved_model.load_state_dict(torch.load(MODEL_FILE.format(NUM_EPOCHS)))
if torch.cuda.is_available():
saved_model.cuda()
ylabels, ypreds = [], []
num_test_batches = Xtest.shape[0] // BATCH_SIZE
for bid in range(num_test_batches):
Xbatch_data = Xtest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Ybatch_data = Ytest[bid * BATCH_SIZE : (bid + 1) * BATCH_SIZE]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
Ybatch_ = saved_model(Xbatch)
_, ybatch_ = Ybatch_.max(2)
pred_nums, true_nums, _ = compute_accuracy(ybatch_, Ybatch)
ylabels.extend(true_nums)
ypreds.extend(pred_nums)
print("Test accuracy: {:.3f}".format(accuracy_score(ylabels, ypreds)))
Xbatch_data = Xtest[0:10]
Ybatch_data = Ytest[0:10]
Xbatch = Variable(torch.from_numpy(Xbatch_data).float())
Ybatch = Variable(torch.from_numpy(Ybatch_data).long())
if torch.cuda.is_available():
Xbatch = Xbatch.cuda()
Ybatch = Ybatch.cuda()
Ybatch_ = saved_model(Xbatch)
_, ybatch_ = Ybatch_.max(2)
if torch.cuda.is_available():
ybatch__data = ybatch_.cpu().data.numpy()
else:
ybatch__data = ybatch_.data.numpy()
for i in range(Ybatch_data.shape[0]):
label = Ybatch_data[i]
pred = ybatch__data[i]
correct = "True" if np.array_equal(label, pred) else "False"
print("y={:s}, yhat={:s}, correct={:s}".format(str(label), str(pred), correct))
for i in range(NUM_EPOCHS):
os.remove(MODEL_FILE.format(i + 1)) | src/pytorch/10-cumsum-prediction.ipynb | sujitpal/polydlot | apache-2.0 |
Effect of the Bottom-SCC optimisation on semi-deterministic automata
The orange states below form deterministic bottom SCCs. After processing by Seminator, they appear in both the 1st (violet) and the 2nd (green) component. Simplifications cannot merge these duplicates because one copy is accepting and the other is not. In fact, we do not need the copy in the first component: there is no non-determinism in such an SCC, so there is nothing to wait for. We only have to turn every edge entering such an SCC into a cut-edge. | def example(**opts):
in_a = spot.translate("(FGp2 R !p2) | GFp1")
in_a.highlight_states([3,4], 2).set_name("input")
# Note: the pure=True option disables all optimizations that are usually on by default.
out_a = seminator(in_a, pure=True, postprocess=False, highlight=True, **opts)
out_a.set_name("output")
simp_a = seminator(in_a, pure=True, postprocess=True, highlight=True, **opts)
simp_a.set_name("simplified output")
display_inline(in_a, out_a, simp_a, per_row=3, show=".vn")
example() | notebooks/bSCC.ipynb | mklokocka/seminator | gpl-3.0 |
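The optimisation hinges on recognising SCCs that are both *bottom* (no edge leaves the SCC) and *deterministic* (no state offers two outgoing edges on overlapping letters). On a toy automaton represented as a dict mapping states to `(letter, destination)` edges, the check can be sketched as follows — the real tool works on Spot automata with BDD edge labels; all names here are illustrative:

```python
def is_deterministic_bottom_scc(edges, scc):
    """True iff `scc` (a set of states) is a bottom SCC and is
    deterministic over single-letter edge labels."""
    scc = set(scc)
    for s in scc:
        seen = set()
        for letter, dst in edges.get(s, []):
            if dst not in scc:
                return False   # not bottom: an edge escapes the SCC
            if letter in seen:
                return False   # nondeterministic choice on `letter`
            seen.add(letter)
    return True

# A two-state cycle on letters a/b: bottom and deterministic.
edges = {0: [("a", 1)], 1: [("b", 0)]}
print(is_deterministic_bottom_scc(edges, {0, 1}))  # True
```

Edges entering a state of such an SCC are exactly the ones the construction turns into cut-edges.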
Enabling the bottom-SCC optimization simplifies the output automata as follows: | example(bscc_avoid=True) | notebooks/bSCC.ipynb | mklokocka/seminator | gpl-3.0 |
Cut-deterministic automata
The same idea can be applied to cut-deterministic automata. Removing states 3 and 4 from the first part of the cut-deterministic automaton would remove state ${3}$ and would merge the states ${1,3,4}$ and ${1,3}$. | example(cut_det=True)
example(cut_det=True, bscc_avoid=True) | notebooks/bSCC.ipynb | mklokocka/seminator | gpl-3.0 |
Extension to semi-deterministic SCCs
We can avoid more than bottom SCCs. In fact, we can avoid all SCCs that are already good for semi-deterministic automata (semi-deterministic SCCs). An SCC $C$ is semi-deterministic if $C$ and all SCCs reachable from $C$ are deterministic. This is illustrated by states 1 and 5 in the following example. | def example2(**opts):
in_a = spot.translate('G((((a & b) | (!a & !b)) & (GF!b U !c)) | (((!a & b) | (a & !b)) & (FGb R c)))')
spot.highlight_nondet_states(in_a, 1)
in_a.set_name("input")
options = { "cut_det": True, "highlight": True, "jobs": ViaTGBA, "skip_levels": True, "pure": True, **opts}
out_a = seminator(in_a, **options, postprocess=False)
out_a.set_name("output")
simp_a = seminator(in_a, **options, postprocess=True)
simp_a.set_name("simplified output")
display_inline(in_a, out_a, simp_a, show=".nhs")
example2()
example2(bscc_avoid=True) | notebooks/bSCC.ipynb | mklokocka/seminator | gpl-3.0 |
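The definition above is recursive over the SCC DAG: an SCC is semi-deterministic iff it is itself deterministic and every successor SCC is semi-deterministic. Over a condensed SCC graph — here an illustrative dict of SCC ids to successor ids plus a per-SCC determinism flag, not Spot's `scc_info` — the set of semi-deterministic SCCs can be computed by memoised recursion (equivalently, in reverse topological order):

```python
def semideterministic_sccs(succ, deterministic):
    """succ: dict scc_id -> list of successor scc_ids (a DAG);
    deterministic: dict scc_id -> bool.
    Returns the set of semi-deterministic SCC ids."""
    memo = {}
    def semi(c):
        if c not in memo:
            memo[c] = deterministic[c] and all(semi(d) for d in succ[c])
        return memo[c]
    return {c for c in succ if semi(c)}

# SCC 2 is a deterministic bottom SCC; SCC 1 is deterministic and
# only reaches 2; SCC 0 is nondeterministic.
succ = {0: [1], 1: [2], 2: []}
det = {0: False, 1: True, 2: True}
print(semideterministic_sccs(succ, det))  # {1, 2}
```

States of the SCCs in the returned set need no copy in the first component of the output.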
Reusing the semi-deterministic components with TGBA acceptance
In the previous example we saved several states by not including the semi-deterministic components in the 1st part of the result. However, we still got 6 states (5 after postprocessing) out of the 3 deterministic states $1, 5$, and $6$. This can be tackled by reusing the semi-deterministic components as they are. This immediately leads to a TGBA on the output, and we have to address this in the parts that still rely on the breakpoint construction. The edges that are accepting will now carry all the marks that are needed (as they do in the original automaton anyway). | example2(powerset_on_cut=True, reuse_deterministic=True) | notebooks/bSCC.ipynb | mklokocka/seminator | gpl-3.0
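Under TGBA (transition-based generalized Büchi) acceptance, a lasso-shaped run is accepting iff its cycle visits every acceptance mark infinitely often, i.e. the marks on the cycle's edges jointly cover all marks. On a toy representation of the cycle part as a list of per-edge mark sets — an illustrative sketch, not Seminator's internal data structures — the condition reduces to a union check:

```python
def cycle_accepting(cycle_marks, all_marks):
    """cycle_marks: list of sets of acceptance marks on the cycle's
    edges; accepting iff together they cover all required marks."""
    seen = set()
    for marks in cycle_marks:
        seen |= marks
    return seen >= set(all_marks)

print(cycle_accepting([{0}, set(), {1}], {0, 1}))  # True
print(cycle_accepting([{0}, {0}], {0, 1}))         # False
```

This is why reused deterministic components can keep their original marks: any cycle inside them already carries everything the generalized condition needs.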
1. Connect girder client and set parameters | APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SOURCE_SLIDE_ID = '5d5d6910bd4404c6b1f3d893'
POST_SLIDE_ID = '5d586d76bd4404c6b1f286ae'
gc = girder_client.GirderClient(apiUrl=APIURL)
# gc.authenticate(interactive=True)
gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
# get and parse slide annotations into dataframe
slide_annotations = gc.get('/annotation/item/' + SOURCE_SLIDE_ID)
_, contours_df = parse_slide_annotations_into_tables(slide_annotations) | docs/examples/polygon_merger_using_rtree.ipynb | DigitalSlideArchive/HistomicsTK | apache-2.0 |
2. Polygon merger
Polygon_merger_v2() is the top-level function for performing the merging. | print(Polygon_merger_v2.__doc__)
print(Polygon_merger_v2.__init__.__doc__) | docs/examples/polygon_merger_using_rtree.ipynb | DigitalSlideArchive/HistomicsTK | apache-2.0 |
Required arguments for initialization
The only required argument is a dataframe of the contours to merge. | contours_df.head() | docs/examples/polygon_merger_using_rtree.ipynb | DigitalSlideArchive/HistomicsTK | apache-2.0
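Conceptually, the R-tree-based merger first finds contour pairs whose bounding boxes intersect, and only those candidates are tested and merged at the polygon level. The bounding-box pre-filter — the part an R-tree index accelerates — can be sketched in pure Python on `(xmin, ymin, xmax, ymax)` tuples; the real implementation uses an actual spatial index and true polygon geometry, so this is only the idea:

```python
def bbox_intersects(a, b):
    """Axis-aligned bounding boxes as (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def candidate_merge_pairs(bboxes):
    """Index pairs whose boxes overlap -- the cheap pre-filter that a
    spatial index turns from O(n^2) scanning into fast range queries."""
    pairs = []
    for i in range(len(bboxes)):
        for j in range(i + 1, len(bboxes)):
            if bbox_intersects(bboxes[i], bboxes[j]):
                pairs.append((i, j))
    return pairs

boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 11, 11)]
print(candidate_merge_pairs(boxes))  # [(0, 1)]
```

In the actual tool the bounding boxes come from the `xmin`/`ymin`/`xmax`/`ymax`-style columns of the contours dataframe.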