### x lines of Python
# Reading and writing LAS files
This notebook goes with [the Agile blog post](https://agilescientific.com/blog/2017/10/23/x-lines-of-python-load-curves-from-las) of 23 October.
Set up a `conda` environment with:

    conda create -n welly python=3.6 matplotlib=2.0 scipy pandas

You'll need `welly` in your environment:

    conda install tqdm  # Should happen automatically but doesn't
    pip install welly
This will also install the latest versions of `striplog` and `lasio`.
```
import welly
ls ../data/*.LAS
```
### 1. Load the LAS file with `lasio`
```
import lasio
l = lasio.read('../data/P-129.LAS') # Line 1.
```
That's it! But the object itself doesn't tell us much — it's really just a container:
```
l
```
### 2. Look at the WELL section of the header
```
l.header['Well'] # Line 2.
```
You can go in and find the KB if you know what to look for:
```
l.header['Parameter']['EKB']
```
### 3. Look at the curve data
The curves are all present in one big NumPy array:
```
l.data
```
Or we can go after a single curve object:
```
l.curves.GR # Line 3.
```
And there's a shortcut to its data:
```
l['GR'] # Line 4.
```
...so it's easy to make a plot against depth:
```
import matplotlib.pyplot as plt
plt.figure(figsize=(15,3))
plt.plot(l['DEPT'], l['GR'])
plt.show()
```
### 4. Inspect the curves as a `pandas` dataframe
```
l.df().head() # Line 5.
```
### 5. Load the LAS file with `welly`
```
from welly import Well
w = Well.from_las('../data/P-129.LAS') # Line 6.
```
`welly` Wells know how to display some basics:
```
w
```
And the `Well` object also has `lasio`'s access to a pandas DataFrame:
```
w.df().head()
```
### 6. Look at `welly`'s Curve object
Like the `Well`, a `Curve` object can report a bit about itself:
```
gr = w.data['GR'] # Line 7.
gr
```
One important thing about Curves is that each one knows its own depths — they are stored as a property called `basis`. (It's not actually stored, but computed on demand from the start depth, the sample interval (which must be constant for the whole curve) and the number of samples in the object.)
```
gr.basis
```
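The on-demand computation is easy to sketch with plain NumPy: given a start depth, a constant sample interval, and a sample count, the basis is just an evenly spaced array. The values below are hypothetical, not taken from the P-129 file.

```python
import numpy as np

def make_basis(start, step, n_samples):
    """Reconstruct a depth basis from start depth, sample interval, and length."""
    return start + step * np.arange(n_samples)

# Hypothetical curve: starts at 1.0668 m, sampled every 0.1524 m, 5 samples.
basis = make_basis(1.0668, 0.1524, 5)
print(basis)
```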
### 7. Plot part of a curve
We'll grab the interval from 300 m to 1000 m and plot it.
```
gr.to_basis(start=300, stop=1000).plot() # Line 8.
```
### 8. Smooth a curve
Curve objects are, fundamentally, NumPy arrays. But they have some extra tricks. We've already seen `Curve.plot()`.
Using the `Curve.smooth()` method, we can easily smooth a curve, e.g. over a 15 m window (passing `samples=True` would smooth over 15 samples instead):
```
sm = gr.smooth(window_length=15, samples=False) # Line 9.
sm.plot()
```
### 9. Export a set of curves as a matrix
You can get at all the data through the lasio `l.data` object:
```
print("Data shape: {}".format(w.las.data.shape))
w.las.data
```
But we might want to do some other things, such as specify which curves to include (optionally using aliases like GR1, GRC, NGC for GR), resample the data, or set a start and stop depth — `welly` can do all of this. This method is also wrapped by `Project.data_as_matrix()`, which is convenient because it ensures that all the wells in a project are exported at the same sample interval.
Here are the curves in this well:
```
w.data.keys()
keys=['CALI', 'DT', 'DTS', 'RHOB', 'SP']
w.plot(tracks=['TVD']+keys)
X, basis = w.data_as_matrix(keys=keys, start=275, stop=1850, step=0.5, return_basis=True)
w.data['CALI'].shape
```
So CALI had 12,718 points in it... since we downsampled to 0.5 m and removed the top and tail, we should have substantially fewer points:
```
X.shape
plt.figure(figsize=(15,3))
plt.plot(X.T[0])
plt.show()
```
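As a sanity check, the expected row count follows directly from the requested basis: (stop − start) / step samples, plus one if the stop depth itself is included (whether it is depends on `welly`'s implementation, so treat this as an estimate):

```python
start, stop, step = 275, 1850, 0.5
n_expected = int((stop - start) / step) + 1  # +1 if the stop depth is included
print(n_expected)  # 3151
```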
### 10+. BONUS: fix the lat, lon
OK, we're definitely going to go over our budget on this one.
Did you notice that the location of the well did not get loaded properly?
```
w.location
```
Let's look at some of the header:
    # LAS format log file from PETREL
    # Project units are specified as depth units
    #==================================================================
    ~Version information
    VERS. 2.0:
    WRAP. YES:
    #==================================================================
    ~WELL INFORMATION
    #MNEM.UNIT DATA DESCRIPTION
    #---- ------ -------------- -----------------------------
    STRT .M 1.0668 :START DEPTH
    STOP .M 1939.13760 :STOP DEPTH
    STEP .M 0.15240 :STEP
    NULL . -999.25 :NULL VALUE
    COMP . Elmworth Energy Corporation :COMPANY
    WELL . Kennetcook #2 :WELL
    FLD . Windsor Block :FIELD
    LOC . Lat = 45* 12' 34.237" N :LOCATION
    PROV . Nova Scotia :PROVINCE
    UWI. Long = 63* 45'24.460 W :UNIQUE WELL ID
    LIC . P-129 :LICENSE NUMBER
    CTRY . CA :COUNTRY (WWW code)
    DATE. 10-Oct-2007 :LOG DATE {DD-MMM-YYYY}
    SRVC . Schlumberger :SERVICE COMPANY
    LATI .DEG :LATITUDE
    LONG .DEG :LONGITUDE
    GDAT . :GeoDetic Datum
    SECT . 45.20 Deg N :Section
    RANG . PD 176 :Range
    TOWN . 63.75 Deg W :Township
Look at **LOC** and **UWI**. There are two problems:
1. These items are in the wrong place. (Notice **LATI** and **LONG** are empty.)
2. The items are malformed, with lots of extraneous characters.
We can fix this in two steps:
1. Remap the header items to fix the first problem.
2. Parse the items to fix the second one.
We'll define these in reverse because the remapping uses the transforming function.
```
import re

def transform_ll(text):
    """
    Parses malformed lat and lon so they load properly.
    """
    def callback(match):
        d = match.group(1).strip()
        m = match.group(2).strip()
        s = match.group(3).strip()
        c = match.group(4).strip()
        if c.lower() in ('w', 's') and d[0] != '-':
            d = '-' + d
        return ' '.join([d, m, s])
    pattern = re.compile(r""".+?([-0-9]+?).? ?([0-9]+?).? ?([\.0-9]+?).? +?([NESW])""", re.I)
    text = pattern.sub(callback, text)
    return welly.utils.dms2dd([float(i) for i in text.split()])
```
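If you don't have `welly` handy, the degrees–minutes–seconds conversion that `welly.utils.dms2dd` presumably performs is simple to sketch. This is an assumption about its behaviour, not its actual source:

```python
def dms2dd_sketch(dms):
    """Convert [degrees, minutes, seconds] to decimal degrees, preserving sign.

    Sketch of what a DMS-to-decimal utility does; not welly's implementation.
    """
    d, m, s = dms
    sign = -1 if d < 0 else 1
    return sign * (abs(d) + m / 60 + s / 3600)

print(dms2dd_sketch([45, 12, 34.237]))   # ~45.2095
print(dms2dd_sketch([-63, 45, 24.460]))  # ~-63.7568
```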
Make sure that works!
```
print(transform_ll("""Lat = 45* 12' 34.237" N"""))
remap = {
    'LATI': 'LOC',   # Use LOC for the parameter LATI.
    'LONG': 'UWI',   # Use UWI for the parameter LONG.
    'LOC':  None,    # Use nothing for the parameter LOC.
    'SECT': None,    # Use nothing for the parameter SECT.
    'RANG': None,    # Use nothing for the parameter RANG.
    'TOWN': None,    # Use nothing for the parameter TOWN.
}
funcs = {
    'LATI': transform_ll,  # Pass LATI through this function before loading.
    'LONG': transform_ll,  # Pass LONG through it too.
    'UWI': lambda x: "No UWI, fix this!"
}
w = Well.from_las('../data/P-129.LAS', remap=remap, funcs=funcs)
w.location.latitude, w.location.longitude
w.uwi
```
Let's just hope the mess is the same mess in every well. (LOL, no-one's that lucky.)
<hr>
**© 2017 [agilescientific.com](https://www.agilescientific.com/) and licensed [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)**
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20Deployment/Course%203%20-%20TensorFlow%20Datasets/Week%202/Examples/Week2ExerciseA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
```
# TRANSFER LEARNING
The next code block downloads the MobileNet v2 model from TensorFlow Hub, extracts its learned features as `feature_extractor`, and sets them up for fine-tuning by making the layer trainable.
```
import tensorflow_hub as hub
model_selection = ("mobilenet_v2", 224, 1280)
handle_base, pixels, FV_SIZE = model_selection
IMAGE_SIZE = (pixels, pixels)
MODULE_HANDLE = "https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
                                   input_shape=IMAGE_SIZE + (3,))
feature_extractor.trainable = True
```
## Import libraries and set up the splits
```
import tensorflow as tf
import tensorflow_datasets as tfds
# Here is where you will write your code
# You need to use subsets of the original data, which is entirely in the 'train'
# split. I.E. 'train' contains 25000 records.
# Split this up so that you get
# The first 10% is your 'new' training set
# The last 10% is your validation and test sets, split down the middle
# (i.e. the first half of the last 10% is validation, the second half is test)
# These 3 recordsets should be called
# train_examples, validation_examples and test_examples respectively
splits = ['train[:10%]', 'train[90%:95%]', 'train[95%:]']
splits, info = tfds.load('cats_vs_dogs', split = splits, with_info=True)
(train_examples, validation_examples, test_examples) = splits
num_examples = 2500
num_classes = 2
# This will turn the 3 sets into batches
# so we can train
# This code should not be changed
def format_image(features):
    image = features['image']
    image = tf.image.resize(image, IMAGE_SIZE) / 255.0
    return image, features['label']
BATCH_SIZE = 32
train_batches = train_examples.shuffle(num_examples).map(format_image).batch(BATCH_SIZE)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE)
test_batches = test_examples.map(format_image).batch(BATCH_SIZE)
# The new model will take the features from the mobilenet via transfer learning
# And add a new dense layer at the bottom, with 2 classes -- for cats and dogs
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(2, activation='softmax')
])
model.summary()
# Compile the model with adam optimizer and sparse categorical crossentropy,
# and track the accuracy metric
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
# Train it for a number of epochs. You should not need many
# Train on the train_Batches, and validation on the validation_batches
EPOCHS = 10
history = model.fit(train_batches, epochs=EPOCHS, validation_data=validation_batches)
# Evaluate the model on the test batches
eval_results = model.evaluate(test_batches)
# And print the results. You should have >93% accuracy
for metric, value in zip(model.metrics_names, eval_results):
    print(metric + ': {:.4}'.format(value))
```
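The split sizes implied by those percentage slices are easy to verify: with 25,000 records in the `'train'` split, a sketch of the arithmetic (assuming `tfds` rounds percentage boundaries to whole records) gives 2,500 training, 1,250 validation, and 1,250 test examples:

```python
total = 25000
n_train = int(total * 0.10)                       # train[:10%]
n_valid = int(total * 0.95) - int(total * 0.90)   # train[90%:95%]
n_test = total - int(total * 0.95)                # train[95%:]
print(n_train, n_valid, n_test)  # 2500 1250 1250
```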
```
%%bash
# Download model check-point and module from below repo by pudae:
# Check if tf-slim will have densenet121 at some point
wget -N https://github.com/pudae/tensorflow-densenet/raw/master/nets/densenet.py
wget -N https://ikpublictutorial.blob.core.windows.net/deeplearningframeworks/tf-densenet121.tar.gz
tar xzvf tf-densenet121.tar.gz
#######################################################################################################
# Summary
# 1. Tensorflow Multi-GPU example using Estimator & Dataset high-APIs
# 2. On-the-fly data-augmentation (random crop, random flip)
# ToDo:
# 3. Investigate tfrecord speed improvement (to match MXNet)
# References:
# https://www.tensorflow.org/performance/performance_guide
# 1. https://jhui.github.io/2017/03/07/TensorFlow-Perforamnce-and-advance-topics/
# 2. https://www.tensorflow.org/versions/master/performance/datasets_performance
# 3. https://github.com/pudae/tensorflow-densenet
# 4. https://stackoverflow.com/a/48096625/6772173
# 5. https://stackoverflow.com/questions/47867748/transfer-learning-with-tf-estimator-estimator-framework
# 6. https://github.com/BobLiu20/Classification_Nets/blob/master/tensorflow/common/average_gradients.py
# 7. https://github.com/BobLiu20/Classification_Nets/blob/master/tensorflow/training/train_estimator.py
#######################################################################################################
MULTI_GPU = True # TOGGLE THIS
import os
import sys
import time
import multiprocessing
import numpy as np
import pandas as pd
from PIL import Image
import random
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow.python.framework.ops import convert_to_tensor
from common.utils import download_data_chextxray, get_imgloc_labels, get_train_valid_test_split
from common.utils import compute_roc_auc, get_cuda_version, get_cudnn_version, get_gpu_name
from common.params_dense import *
slim = tf.contrib.slim
#https://github.com/pudae/tensorflow-densenet/raw/master/nets/densenet.py
import densenet # Download from https://github.com/pudae/tensorflow-densenet
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Numpy: ", np.__version__)
print("Tensorflow: ", tf.__version__)
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())
CPU_COUNT = multiprocessing.cpu_count()
GPU_COUNT = len(get_gpu_name())
print("CPUs: ", CPU_COUNT)
print("GPUs: ", GPU_COUNT)
# Model-params
IMAGENET_RGB_MEAN_CAFFE = np.array([123.68, 116.78, 103.94], dtype=np.float32)
IMAGENET_SCALE_FACTOR_CAFFE = 0.017
# Paths
CSV_DEST = "chestxray"
IMAGE_FOLDER = os.path.join(CSV_DEST, "images")
LABEL_FILE = os.path.join(CSV_DEST, "Data_Entry_2017.csv")
print(IMAGE_FOLDER, LABEL_FILE)
CHKPOINT = 'tf-densenet121.ckpt' # Downloaded tensorflow-checkpoint
# Manually scale to multi-gpu
if MULTI_GPU:
    LR *= GPU_COUNT
    BATCHSIZE *= GPU_COUNT
%%time
# Download data
print("Please make sure to download")
print("https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-linux#download-and-install-azcopy")
download_data_chextxray(CSV_DEST)
#####################################################################################################
## Data Loading
class XrayData():
    def __init__(self, img_dir, lbl_file, patient_ids, mode,
                 width=WIDTH, height=HEIGHT, batch_size=BATCHSIZE,
                 imagenet_mean=IMAGENET_RGB_MEAN_CAFFE, imagenet_scaling=IMAGENET_SCALE_FACTOR_CAFFE,
                 buffer=10):
        self.img_locs, self.labels = get_imgloc_labels(img_dir, lbl_file, patient_ids)
        self.data_size = len(self.labels)
        self.imagenet_mean = imagenet_mean
        self.imagenet_scaling = imagenet_scaling
        self.width = width
        self.height = height
        data = tf.data.Dataset.from_tensor_slices((self.img_locs, self.labels))
        # Processing
        # Output as channels-last and TF model will reshape in densenet.py
        # inputs = tf.transpose(inputs, [0, 3, 1, 2])
        if mode == 'training':
            # Augmentation and repeat
            data = data.shuffle(self.data_size).repeat().apply(
                tf.contrib.data.map_and_batch(self._parse_function_train, batch_size)).prefetch(buffer)
        elif mode == 'validation':
            # Repeat
            data = data.repeat().apply(
                tf.contrib.data.map_and_batch(self._parse_function_inference, batch_size)).prefetch(buffer)
        elif mode == 'testing':
            # No repeat, no augmentation
            data = data.apply(
                tf.contrib.data.map_and_batch(self._parse_function_inference, batch_size)).prefetch(buffer)
        self.data = data
        print("Loaded {} labels and {} images".format(len(self.labels), len(self.img_locs)))

    def _parse_function_train(self, filename, label):
        img_rgb, label = self._preprocess_image_labels(filename, label)
        # Random crop (from 264x264)
        img_rgb = tf.random_crop(img_rgb, [self.height, self.width, 3])
        # Random flip
        img_rgb = tf.image.random_flip_left_right(img_rgb)
        # Channels-first
        img_rgb = tf.transpose(img_rgb, [2, 0, 1])
        return img_rgb, label

    def _parse_function_inference(self, filename, label):
        img_rgb, label = self._preprocess_image_labels(filename, label)
        # Resize to final dimensions
        img_rgb = tf.image.resize_images(img_rgb, [self.height, self.width])
        # Channels-first
        img_rgb = tf.transpose(img_rgb, [2, 0, 1])
        return img_rgb, label

    def _preprocess_image_labels(self, filename, label):
        # Load and preprocess the image
        img_decoded = tf.to_float(tf.image.decode_png(tf.read_file(filename), channels=3))
        img_centered = tf.subtract(img_decoded, self.imagenet_mean)
        img_rgb = img_centered * self.imagenet_scaling
        return img_rgb, tf.cast(label, dtype=tf.float32)
train_set, valid_set, test_set = get_train_valid_test_split(TOT_PATIENT_NUMBER)
with tf.device('/cpu:0'):
    # Create dataset for iterator
    train_dataset = XrayData(img_dir=IMAGE_FOLDER, lbl_file=LABEL_FILE, patient_ids=train_set,
                             mode='training')
    valid_dataset = XrayData(img_dir=IMAGE_FOLDER, lbl_file=LABEL_FILE, patient_ids=valid_set,
                             mode='validation')
    test_dataset = XrayData(img_dir=IMAGE_FOLDER, lbl_file=LABEL_FILE, patient_ids=test_set,
                            mode='testing')
#####################################################################################################
## Helper Functions
def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        grads = []
        for g, _ in grad_and_vars:
            expanded_g = tf.expand_dims(g, 0)
            grads.append(expanded_g)
        grad = tf.concat(axis=0, values=grads)
        grad = tf.reduce_mean(grad, 0)
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
def get_symbol(in_tensor, out_features):
    # Import symbol
    # is_training=True? | https://github.com/tensorflow/models/issues/3556
    with slim.arg_scope(densenet.densenet_arg_scope(data_format="NCHW")):
        base_model, _ = densenet.densenet121(in_tensor,
                                             num_classes=out_features,
                                             is_training=True)
    # Need to reshape from (?, 1, 1, 14) to (?, 14)
    sym = tf.reshape(base_model, shape=[-1, out_features])
    return sym
def model_fn_single(features, labels, mode, params):
    sym = get_symbol(features, out_features=params["n_classes"])
    # Predictions
    predictions = tf.sigmoid(sym)
    # ModeKeys.PREDICT
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
    # Optimizer & Loss
    optimizer = tf.train.AdamOptimizer(params['lr'], beta1=0.9, beta2=0.999)
    loss_fn = tf.losses.sigmoid_cross_entropy(labels, sym)
    loss = tf.reduce_mean(loss_fn)
    train_op = optimizer.minimize(loss, tf.train.get_global_step())
    # Create eval metric ops
    eval_metric_ops = {"val_loss": slim.metrics.streaming_mean(
        tf.losses.sigmoid_cross_entropy(labels, predictions))}
    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops)
def multi_gpu_X_y_split(features, labels, batchsize, gpus):
    # Make sure splits sum to batch-size
    split_size = batchsize // len(gpus)
    splits = [split_size, ] * (len(gpus) - 1)
    splits.append(batchsize - split_size * (len(gpus) - 1))
    # Split the features and labels
    features_split = tf.split(features, splits, axis=0)
    labels_split = tf.split(labels, splits, axis=0)
    return features_split, labels_split
def model_fn_multigpu(features, labels, mode, params):
    if mode == tf.estimator.ModeKeys.PREDICT:
        # Create symbol
        sym = get_symbol(features, out_features=params["n_classes"])
        # Predictions
        predictions = tf.sigmoid(sym)
        # ModeKeys.PREDICT
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
    # For multi-gpu, split features and labels
    features_split, labels_split = multi_gpu_X_y_split(features, labels, params["batchsize"], params["gpus"])
    tower_grads = []
    eval_logits = []
    # Training operation
    global_step = tf.train.get_global_step()
    optimizer = tf.train.AdamOptimizer(LR, beta1=0.9, beta2=0.999)
    # Load model on multiple GPUs
    with tf.variable_scope(tf.get_variable_scope()):
        for i in range(len(params['gpus'])):
            with tf.device('/gpu:%d' % i), tf.name_scope('%s_%d' % ("classification", i)) as scope:
                # Symbol
                sym = get_symbol(features_split[i], out_features=params["n_classes"])
                # Loss
                tf.losses.sigmoid_cross_entropy(labels_split[i], sym)
                # Training-ops
                update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope)
                updates_op = tf.group(*update_ops)
                with tf.control_dependencies([updates_op]):
                    losses = tf.get_collection(tf.GraphKeys.LOSSES, scope)
                    total_loss = tf.add_n(losses, name='total_loss')
                # Reuse variables across towers
                tf.get_variable_scope().reuse_variables()
                # Compute gradients for this tower
                grads = optimizer.compute_gradients(total_loss)
                tower_grads.append(grads)
                eval_logits.append(sym)
    # We must calculate the mean of each gradient
    grads = average_gradients(tower_grads)
    # Apply the gradients to adjust the shared variables.
    apply_gradient_op = optimizer.apply_gradients(grads, global_step=global_step)
    # Group all updates into a single train op.
    train_op = tf.group(apply_gradient_op)
    # Create eval metric ops (predict on multi-gpu)
    predictions = tf.concat(eval_logits, 0)
    eval_metric_ops = {"val_loss": slim.metrics.streaming_mean(
        tf.losses.sigmoid_cross_entropy(labels, predictions))}
    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=total_loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops)
def train_input_fn():
    return train_dataset.data.make_one_shot_iterator().get_next()

def valid_input_fn():
    return valid_dataset.data.make_one_shot_iterator().get_next()

def test_input_fn():
    return test_dataset.data.make_one_shot_iterator().get_next()
# Warm start from saved checkpoint (not logits)
ws = tf.estimator.WarmStartSettings(ckpt_to_initialize_from=CHKPOINT, vars_to_warm_start="^(?!.*(logits))")
# Params
params={"lr":LR, "n_classes":CLASSES, "batchsize":BATCHSIZE, "gpus":list(range(GPU_COUNT))}
# Model functions
if MULTI_GPU:
    model_fn = model_fn_multigpu
else:
    model_fn = model_fn_single
%%time
# Create Estimator
nn = tf.estimator.Estimator(model_fn=model_fn, params=params, warm_start_from=ws)
%%time
# Create train & eval specs
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn,
                                    max_steps=EPOCHS * (train_dataset.data_size // BATCHSIZE))
# Hard to run validation every epoch so playing around with throttle_secs to get 5 runs
eval_spec = tf.estimator.EvalSpec(input_fn=valid_input_fn,
                                  throttle_secs=400)
%%time
# 1 GPU - Main training loop: 33min 29s
# 4 GPU - Main training loop: 22min 11s
# Run train and evaluate (on validation data)
tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)
%%time
# 1 GPU AUC: 0.8009
# 4 GPU AUC: 0.8120
predictions = list(nn.predict(test_input_fn))
y_truth = test_dataset.labels
y_guess = np.array(predictions)
print("Test AUC: {0:.4f}".format(compute_roc_auc(y_truth, y_guess, CLASSES)))
#####################################################################################################
## Synthetic Data (Pure Training)
# Test on fake-data -> no IO lag
batch_in_epoch = train_dataset.data_size//BATCHSIZE
tot_num = batch_in_epoch * BATCHSIZE
fake_X = np.random.rand(tot_num, 3, 224, 224).astype(np.float32)
fake_y = np.random.rand(tot_num, CLASSES).astype(np.float32)
%%time
# Create Estimator
nn = tf.estimator.Estimator(model_fn=model_fn, params=params)
%%time
# 1 GPU - Synthetic data: 25min 25s
# 4 GPU - Synthetic data: 13min 55s
nn.train(tf.estimator.inputs.numpy_input_fn(
    fake_X,
    fake_y,
    shuffle=False,
    num_epochs=EPOCHS,
    batch_size=BATCHSIZE))
```
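Outside the TensorFlow graph, the batch-splitting arithmetic from `multi_gpu_X_y_split` can be checked in isolation — the last GPU absorbs the remainder, so the split sizes always sum to the batch size. The helper below mirrors that logic in plain Python:

```python
def split_sizes(batchsize, n_gpus):
    """Mirror the multi-GPU split logic: equal shares, remainder on the last device."""
    split_size = batchsize // n_gpus
    splits = [split_size] * (n_gpus - 1)
    splits.append(batchsize - split_size * (n_gpus - 1))
    return splits

print(split_sizes(10, 3))  # [3, 3, 4]
print(split_sizes(64, 4))  # [16, 16, 16, 16]
```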
# Nodes
From the [Interface](basic_interfaces.ipynb) tutorial, you learned that interfaces are the core pieces of Nipype that run the code of your desire. But to streamline your analysis and to execute multiple interfaces in a sensible order, you have to put them in something that we call a ``Node``.
In Nipype, a node is an object that executes a certain function. This function can be anything from a Nipype interface to a user-specified function or an external script. Each node consists of a name, an interface, at least one input field, and at least one output field.
Following is a simple node from the `utility` interface, with the name `name_of_node`, the input field `IN` and the output field `OUT`:

Once you connect multiple nodes to each other, you create a directed graph. In Nipype we call such graphs either workflows or pipelines. Directed connections can only be established from an output field (below `node1_out`) of a node to an input field (below `node2_in`) of another node.

This is all there is to Nipype. Connecting specific nodes with certain functions to other specific nodes with other functions. So let us now take a closer look at the different kind of nodes that exist and see when they should be used.
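To make the idea concrete without running Nipype itself, here is a toy model of a node — a name, one input field, one output field, and a `run()` that applies its function. This is an illustration of the concept only, not Nipype's actual implementation:

```python
class ToyNode:
    """Minimal stand-in for the node concept: a named wrapper around a function."""
    def __init__(self, func, name):
        self.func = func
        self.name = name
        self.inputs = {}
        self.outputs = {}

    def run(self):
        # Apply the wrapped function to the input field, fill the output field.
        self.outputs['OUT'] = self.func(self.inputs['IN'])
        return self.outputs

node1 = ToyNode(lambda x: x + 2, name='node1')
node1.inputs['IN'] = 4
print(node1.run())  # {'OUT': 6}

# A "directed connection": node2's input field is fed from node1's output field.
node2 = ToyNode(lambda x: x * 10, name='node2')
node2.inputs['IN'] = node1.outputs['OUT']
print(node2.run())  # {'OUT': 60}
```

In real Nipype, the connection step is handled by a `Workflow` rather than by assigning fields manually.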
## Example of a simple node
First, let us take a look at a simple stand-alone node. In general, a node consists of the following elements:
    nodename = Nodetype(interface_function(), name='labelname')
- **nodename**: Variable name of the node in the python environment.
- **Nodetype**: Type of node to be created. This can be a `Node`, `MapNode` or `JoinNode`.
- **interface_function**: Function the node should execute. Can be user specific or coming from an `Interface`.
- **labelname**: Label name of the node in the workflow environment (defines the name of the working directory)
Let us take a look at an example: For this, we need the `Node` module from Nipype, as well as the `Function` module. The second only serves a support function for this example. It isn't a prerequisite for a `Node`.
```
# Import Node and Function module
from nipype import Node, Function

# Create a small example function
def add_two(x_input):
    return x_input + 2

# Create Node
addtwo = Node(Function(input_names=["x_input"],
                       output_names=["val_output"],
                       function=add_two),
              name='add_node')
```
As specified before, `addtwo` is the **nodename**, `Node` is the **Nodetype**, `Function(...)` is the **interface_function** and `add_node` is the **labelname** of this node. In this particular case, we created an artificial input field called `x_input`, an artificial output field called `val_output`, and specified that this node should run the function `add_two()`.
But before we can run this node, we need to declare the value of the input field `x_input`:
```
addtwo.inputs.x_input = 4
```
After all input fields are specified, we can run the node with `run()`:
```
addtwo.run()
temp_res = addtwo.run()
temp_res.outputs
```
And what is the output of this node?
```
addtwo.result.outputs
```
## Example of a neuroimaging node
Let's get back to the BET example from the [Interface](basic_interfaces.ipynb) tutorial. The only thing that differs from this example, is that we will put the ``BET()`` constructor inside a ``Node`` and give it a name.
```
# Import BET from the FSL interface
from nipype.interfaces.fsl import BET
# Import the Node module
from nipype import Node
# Create Node
bet = Node(BET(frac=0.3), name='bet_node')
```
In the [Interface](basic_interfaces.ipynb) tutorial, we were able to specify the input file with the ``in_file`` parameter. This works exactly the same way in this case, where the interface is in a node. The only thing that we have to be careful about when we use a node is to specify where this node should be executed. This is only relevant for when we execute a node by itself, but not when we use them in a [Workflow](basic_workflow.ipynb).
```
in_file = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'
# Specify node inputs
bet.inputs.in_file = in_file
bet.inputs.out_file = '/output/node_T1w_bet.nii.gz'
res = bet.run()
```
As we know from the [Interface](basic_interfaces.ipynb) tutorial, the skull stripped output is stored under ``res.outputs.out_file``. So let's take a look at the before and the after:
```
from nilearn.plotting import plot_anat
%matplotlib inline
import matplotlib.pyplot as plt
plot_anat(in_file, title='BET input', cut_coords=(10,10,10),
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
plot_anat(res.outputs.out_file, title='BET output', cut_coords=(10,10,10),
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
### Exercise 1
Define a `Node` for `IsotropicSmooth` (from `fsl`). Run the node for T1 image for one of the subjects.
```
# write your solution here
# Import the Node module
from nipype import Node
# Import IsotropicSmooth from the FSL interface
from nipype.interfaces.fsl import IsotropicSmooth
# Define a node
smooth_node = Node(IsotropicSmooth(), name="smoothing")
smooth_node.inputs.in_file = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'
smooth_node.inputs.fwhm = 4
smooth_node.inputs.out_file = '/output/node_T1w_smooth.nii.gz'
smooth_res = smooth_node.run()
```
### Exercise 2
Plot the original image and the image after smoothing.
```
# write your solution here
from nilearn.plotting import plot_anat
%pylab inline
plot_anat(smooth_node.inputs.in_file, title='smooth input', cut_coords=(10,10,10),
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
plot_anat(smooth_res.outputs.out_file, title='smooth output', cut_coords=(10,10,10),
display_mode='ortho', dim=-1, draw_cross=False, annotate=False);
```
```
interaction_dataframe = ppi_df
columns = ['xref_A', 'xref_B']
identifier_series = pd.Series(pd.unique(interaction_dataframe[columns].values.ravel('K')))
ids = identifier_series[identifier_series.str.startswith('ensembl:')]
from pathlib import Path
import pandas as pd
import numpy as np
from phppipy.dataprep import taxonid
from phppipy.ppi_tools import ppi_filter
from phppipy.ppi_tools import id_mapper
ppi_file = Path('/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/10292/ppi_data/10292-ppi-merged.tsv')
ppi_df = pd.read_csv(ppi_file, sep='\t', header=0)
interaction_dataframe = ppi_df
columns = ['xref_A', 'xref_B']
id_mapper.check_unique_identifier(interaction_dataframe)
db_tag='refseq:'
id_abbreviation='P_REFSEQ_AC'
merged_columns = pd.unique(
    interaction_dataframe[columns].values.ravel('K'))
filepath = '/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/mappings-new2'
path = Path(filepath) / (
    db_tag.replace('/', '-').strip(':') + r'2uniprot.txt')
id_mapper._create_mapping_files(merged_columns, id_abbreviation, db_tag,
                                path)
mapping_dict = id_mapper._create_mapping_dict(path, True)
# print(mapping_dict)
# create helper function to use in apply() acting on columns to be remapped
def lambda_helper(row_entry, mapping_dict):
    # retrieve id mapping if it exists, otherwise retain original id
    original_id = row_entry.split(':')[1]
    new_id = mapping_dict.get(original_id, None)
    # if no list is returned by dict.get(), original id is passed
    if new_id:
        # if a new id is found in the mapping_dict, return it
        # mapping_dict contains lists, so extract or join for multiple mappings
        if len(new_id) > 1:
            new_id = 'MULT_MAP'.join(new_id)
        else:  # retrieve id as string from list
            new_id = new_id[0]
    else:
        # return original id if no remapping is found
        new_id = row_entry
    return new_id
success_series = []
total_series = []
for col in columns:
    # note: use startswith to avoid mixing ensembl and embl ids...
    mapping_selection = interaction_dataframe[col].str.startswith(db_tag)
    print(mapping_selection.sum())
    # keep track of total (unique) identifier count
    total_series.append(interaction_dataframe.loc[mapping_selection, col])
    interaction_dataframe.loc[mapping_selection, col] = \
        interaction_dataframe.loc[mapping_selection, col].apply(func=lambda_helper, args=(mapping_dict,))
    # count how many (unique) identifiers were remapped
    success_series.append(interaction_dataframe.loc[mapping_selection & ~interaction_dataframe[col].str.startswith(db_tag), col])

success_count = pd.concat(success_series).unique().size
total_count = pd.concat(total_series).unique().size
print(
    '{} {} out of {} identifiers were successfully remapped to uniprot accession numbers.\n'.
    format(db_tag, success_count, total_count))
col='xref_A'
interaction_dataframe.loc[mapping_selection & ~interaction_dataframe[col].str.startswith(db_tag),col]
a = pd.Series()
a
if Path(path).is_file():
# create dictionaries for current identifier remapping file
mapping_dict = id_mapper._create_mapping_dict(path, True, db_tag)
# create helper function to use in apply() acting on columns to be remapped
def lambda_helper(row_entry, mapping_dict):
# retrieve id mapping if it exists, otherwise retain original id
original_id = row_entry.split(':')[1]
new_id = mapping_dict.get(original_id, None)
# if no list is returned by dict.get(), original id is passed
if new_id:
# if a new id is found in the mapping_dict, return it
# mapping_dict contains lists, so extract or join for multiple mappings
if len(new_id) > 1:
new_id = 'MULT_MAP'.join(new_id)
else: # retrieve id as string from list
new_id = new_id[0]
else:
# return original id if no remapping is found
new_id = original_id
return new_id
# go through all IDs in the supplied columns and remap them if needed
success_count = 0
total_count = 0
for col in columns:
    # note: use startswith to avoid mixing ensembl and embl ids...
    mapping_selection = interaction_dataframe[col].str.startswith(
        db_tag)
    interaction_dataframe.loc[mapping_selection, col] = \
        interaction_dataframe.loc[mapping_selection, col].apply(func=lambda_helper, args=(mapping_dict,))
    # count identifiers that no longer carry the original db tag after remapping
    success_count += mapping_selection.sum() - interaction_dataframe[col].str.startswith(db_tag).sum()
    total_count += mapping_selection.sum()
print(
    '{}: {} out of {} identifiers were successfully remapped to UniProt accession numbers.\n'.
    format(db_tag, success_count, total_count))
ppi_df.loc[ppi_df.publication == 'doi:10.1038/nature11288|pubmed:22810586|imex:IM-25307']
from pathlib import Path
import pandas as pd
import numpy as np
from phppipy.dataprep import taxonid
from phppipy.ppi_tools import ppi_filter
from phppipy.ppi_tools import id_mapper
ppi_file = Path('/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/10292/ppi_data/10292-ppi-merged.tsv')
ppi_df = pd.read_csv(ppi_file, sep='\t', header=0)
columns = ['xref_A','xref_B']
ppi_df['unique'] = ppi_df.apply(lambda x: '%'.join(sorted([x['xref_A'], x['xref_B']])), axis=1)
xref_partners_sorted_array = np.sort(np.stack((ppi_df.xref_A, ppi_df.xref_B), axis=1), axis=1)
xref_partners_sorted_array
xref_partners_df = pd.DataFrame(xref_partners_sorted_array, columns=['A', 'B'])
ppi_df['xref_partners_sorted'] = xref_partners_df['A'] + '%' + xref_partners_df['B']
all(ppi_df.xref_partners_sorted == ppi_df.unique)
print(np.sum(ppi_df.duplicated(subset=['xref_partners_sorted'])))
print('\nNumber of unique interactions per raw data set')
print(ppi_df.groupby('origin')['xref_partners_sorted'].nunique())
print(pd.unique(ppi_df.xref_partners_sorted).size)
print(ppi_df.xref_partners_sorted.unique().size)
print(ppi_df.xref_partners_sorted.size)
print(18490+11842)
ppi_df['original_unique'] = ~ppi_df.duplicated(subset=['xref_partners_sorted'])
id_mapper.check_unique_identifier(ppi_df)
xref_partners_sorted_array = np.sort(np.stack((ppi_df.xref_A, ppi_df.xref_B), axis=1), axis=1)
xref_partners_df = pd.DataFrame(xref_partners_sorted_array, columns=['A', 'B'])
ppi_df['xref_partners_sorted'] = xref_partners_df['A'] + '%' + xref_partners_df['B']
print(np.sum(ppi_df.duplicated(subset=['xref_partners_sorted'])))
print('\nNumber of unique interactions per raw data set')
print(ppi_df.groupby('origin')['xref_partners_sorted'].nunique())
print(pd.unique(ppi_df.xref_partners_sorted).size)
print(ppi_df.xref_partners_sorted.unique().size)
print(ppi_df.xref_partners_sorted.size)
print(18490+11842)
ppi_df['checker_unique'] = ~ppi_df.duplicated(subset=['xref_partners_sorted'])
id_mapper.map2uniprot(ppi_df, '/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/mappings-new')
print(ppi_df.groupby('origin')['xref_partners_sorted'].size())
xref_partners_sorted_array = np.sort(np.stack((ppi_df.xref_A, ppi_df.xref_B), axis=1), axis=1)
xref_partners_df = pd.DataFrame(xref_partners_sorted_array, columns=['A', 'B'])
ppi_df['xref_partners_sorted'] = xref_partners_df['A'] + '%' + xref_partners_df['B']
print(np.sum(ppi_df.duplicated(subset=['xref_partners_sorted'])))
print('\nNumber of unique interactions per raw data set')
print(ppi_df.groupby('origin')['xref_partners_sorted'].nunique())
print(pd.unique(ppi_df.xref_partners_sorted).size)
print(ppi_df.xref_partners_sorted.unique().size)
print(ppi_df.xref_partners_sorted.size)
print(18490+11842)
ppi_df['remap_unique'] = ~ppi_df.duplicated(subset=['xref_partners_sorted'])
ppi_df.loc[ (ppi_df.checker_unique == True) & (ppi_df.remap_unique == False), ['xref_partners_sorted', 'original_unique', 'remap_unique', 'checker_unique']].size
ppi_df.loc[ (ppi_df.checker_unique == True) & (ppi_df.remap_unique == False)].groupby('origin').size()
ppi_df.loc[ (ppi_df.original_unique == True) & (ppi_df.checker_unique == False), ['xref_partners_sorted', 'original_unique', 'remap_unique', 'checker_unique']].size
ppi_df.loc[ (ppi_df.original_unique == True) & (ppi_df.checker_unique == False)].groupby('origin').size()
ppi_df.loc[ (ppi_df.original_unique == True) & (ppi_df.remap_unique == False), ['xref_partners_sorted', 'original_unique', 'remap_unique', 'checker_unique']].size
ppi_df.loc[ (ppi_df.original_unique == True) & (ppi_df.remap_unique == False)].groupby('origin').size()
ppi_df.duplicated(subset=['xref_partners_sorted', 'publication']).size
print('Number of unique interactions per raw dataset:')
print(ppi_df.groupby('origin')['xref_partners_sorted'].nunique())
print('\nTotal dataset size:')
print(ppi_df.groupby('origin')['xref_partners_sorted'].size())
print('\nTotal number of unique interactions out of {}'.format(ppi_df.size))
print(np.sum(~ppi_df.duplicated(subset=['xref_partners_sorted'])))
print('\nTotal number of unique interactions out of {}, where publications are considered unique:'.format(ppi_df.size))
print(np.sum(~ppi_df.duplicated(subset=['xref_partners_sorted', 'publication'])))
np.unique(ppi_df['origin'].values)
ppi_df.drop_duplicates(subset=['xref_partners_sorted'], keep='first').groupby('origin').size()
ppi_df['origin'] = pd.Categorical(ppi_df['origin'], ['intact-virus-22.03.2018.mitab',
'BIOGRID-ALL-3.4.160.mitab',
'hpidb2-19.02.2018.mitab',
'virhostnet-01.2018.mitab',
'phi_data.csv'])
ppi_df.sort_values(by='origin', ascending=True, inplace=True)
ppi_df.drop_duplicates(subset=['xref_partners_sorted'], keep='first').groupby('origin').size()
ppi_df.loc[ppi_df[columns[0]].str.contains('MULT_MAP') | ppi_df[columns[1]].str.contains('MULT_MAP')].shape[0]
columns = ['xref_A', 'xref_B']
interaction_dataframe = ppi_df
selection = (~interaction_dataframe[columns[0]].str.contains('MULT_MAP') &
~interaction_dataframe[columns[1]].str.contains('MULT_MAP'))
print(
        'Omitted {} PPIs due to the existence of multiple mappings.\n'.format(
np.sum(~selection)))
interaction_dataframe.loc[selection].apply(lambda x: x).size
ppi_df['origin'] = pd.Categorical(ppi_df['origin'], ['intact-virus-22.03.2018.mitab',
'BIOGRID-ALL-3.4.160.mitab',
'hpidb2-19.02.2018.mitab',
'virhostnet-01.2018.mitab',
'phi_data.csv'])
ppi_df.sort_values(by='origin', ascending=True, inplace=True)
size = ppi_df.shape[0]
ppi_df = ppi_df.drop_duplicates(subset=['xref_partners_sorted'], keep='first')
ppi_df = ppi_df.reset_index(drop=True)
print('All duplicate interactions were removed, leaving {} out of {} PPIs.'.format(ppi_df.shape[0], size))
ppi_file = Path('/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/10292/ppi_data/10292-ppi-merged')
ppi_df = pd.read_csv(ppi_file, sep='\t', header=0)
id_mapper.map2uniprot(ppi_df, '/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/mappings-new')
interaction_dataframe = ppi_df
columns = ['xref_A', 'xref_B']
merged_columns = pd.unique(
interaction_dataframe[columns].values.ravel('K'))
present_identifiers = id_mapper._extract_identifiers(merged_columns)
identifier_dict = {
'ddbj/embl/genbank:': 'EMBL_ID',
'ensembl:': 'ENSEMBL_ID',
'ensemblgenomes:': 'ENSEMBLGENOME_ID',
'entrez gene/locuslink:': 'P_ENTREZGENEID',
'refseq:': 'P_REFSEQ_AC',
'dip:': 'DIP_ID'
}
# NOTE: UniProt REST API does not support intact EBI:identifiers.
print(present_identifiers)
for i in present_identifiers:
if i + ':' not in identifier_dict:
print(
            'WARNING: interaction dataset contains "{}" entries, which could not be remapped to UniProt AC (check all possible mappings at https://www.uniprot.org/help/api_idmapping).'.
format(i))
id_mapper.check_unique_identifier(interaction_dataframe)
filepath = '/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/mappings-new'
for db_tag, id_abbreviation in identifier_dict.items():
# Define filepath (without forward slashes)
path = Path(filepath) / (
db_tag.replace('/', '-').strip(':') + r'2uniprot.txt')
# path.parent.mkdir(parents=True, exist_ok=True) # moved to create_mapping_files function
# Skip if file already exists
# TODO: add options to force re-run
if not Path(path).is_file():
id_mapper._create_mapping_files(merged_columns, id_abbreviation, db_tag,
path)
with path.open() as mapping_file:
# TODO: possible to keep unreviewed identifiers? Will result in many more 1:n mappings...
# Store mapping into dictionary where non-UniProt keys map to lists of UniProtACs.
mapping_dict = {}
reviewed_mapping_dict = {}
unreviewed_mapping_dict = {}
for line in mapping_file:
# [0] contains the original identifier
# [1] is an empty element in most mapping files
# [2] contains the UniProt AC
split_line = line.split('\t')
from_id = split_line[0]
uniprot_ac = 'uniprotkb:' + split_line[2]
if '\treviewed' in line:
# assert from_id not in mapping_dict, 'WARNING: the remapping file {} contains duplicates!'.format(path.resolve())
if from_id not in mapping_dict:
reviewed_mapping_dict[from_id] = [uniprot_ac]
else:
# even when only selecting reviewed proteins, there are still 1:n mappings
# e.g. http://www.uniprot.org/uniprot/?query=yourlist:M2017111583C3DD8CE55183C76102DC5D3A26728BF70646W&sort=yourlist:M2017111583C3DD8CE55183C76102DC5D3A26728BF70646W&columns=yourlist%28M2017111583C3DD8CE55183C76102DC5D3A26728BF70646W%29,isomap%28M2017111583C3DD8CE55183C76102DC5D3A26728BF70646W%29,id,entry+name,reviewed,protein+names,genes,organism,length
reviewed_mapping_dict[from_id].append(uniprot_ac)
elif '\tunreviewed' in line:
# assert from_id not in unreviewed_mapping_dict, 'WARNING: the remapping file {} contains duplicates!'.format(path.resolve())
if from_id not in unreviewed_mapping_dict:
unreviewed_mapping_dict[from_id] = [uniprot_ac]
else:
unreviewed_mapping_dict[from_id].append(uniprot_ac)
interaction_dataframe.loc[1:10, 'xref_A'].apply(type)
interaction_dataframe['xref_A'].str.startswith('refseq').sum()
pd.Series(
pd.unique(ppi_df[['xref_A','xref_B']].values.ravel(
'K'))).str.contains('MULT_MAP')
ppi_df.loc[ppi_df.xref_A.str.contains('MULT_MAP') | ppi_df.xref_B.str.contains('MULT_MAP')]
from pathlib import Path
import pandas as pd
import numpy as np
from phppipy.dataprep import taxonid
from phppipy.ppi_tools import ppi_filter
from phppipy.ppi_tools import id_mapper
ppi_file = Path('/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/10292/ppi_data/10292-ppi-merged.tsv')
ppi_df = pd.read_csv(ppi_file, sep='\t', header=0)
interaction_dataframe = ppi_df
columns = ['xref_A', 'xref_B']
out_path = Path('/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/mappings-new')
input_dir = Path('/media/pieter/DATA/Wetenschap/Doctoraat/projects/host-pathogen-ppi-analysis/data/interim/10292/taxonid')
pathogen_taxonid_path = [i for i in input_dir.glob('*child*')]
associated_taxonid_path = [i for i in input_dir.glob('*interacting*')]
try:
assert len(pathogen_taxonid_path) == 1
assert len(associated_taxonid_path) == 1
except AssertionError:
print(
'WARNING: multiple possible taxonID files were found in the directory {}'.
format(input_dir))
print(
'Please run the pathogen_selection.py script again and verify that only two files are being created in a pathogen-specific directory, e.g. 10292/10292-child-taxids.txt and 10292/associated-taxids.txt'
)
pathogen_taxonids = taxonid.read_taxids(pathogen_taxonid_path[0])
host_taxonids = taxonid.read_taxids(associated_taxonid_path[0])
print('TaxonIDs were read from {} and {}\n'.format(pathogen_taxonid_path[0],
associated_taxonid_path[0]))
print(host_taxonids[0])
ppi_filter.annotate_inter_intra_species(ppi_df)
ppi_filter.annotate_inter_intra_pathogen(ppi_df, pathogen_taxonids)
# assert all(temp_intra_species == ppi_df['inter-intra-species'])
# assert all(temp_intra_pathogen == ppi_df['inter-intra-pathogen'])
ppi_df = ppi_df.loc[(ppi_df['inter-intra-species'] != 'intra')
& (ppi_df['inter-intra-pathogen'] != 'intra')]
ppi_df = ppi_df.reset_index(drop=True)
ppi_df.drop(
['inter-intra-species', 'inter-intra-pathogen'], axis=1, inplace=True)
print('All intra-species and intra-pathogen interactions were omitted.')
# check for intra-hosts cross species
assert any(
ppi_df.taxid_A.isin(host_taxonids) & ppi_df.taxid_B.isin(host_taxonids)
) is False
# create unique identifier by combining xrefs
ppi_filter.unique_identifier(ppi_df)
ppi_df['original_unique'] = ~ppi_df.duplicated(subset=['xref_partners_sorted'])
# fix bad xref identifiers (i.e. containing more than a single entry)
id_mapper.check_unique_identifier(ppi_df)
# re-create unique identifier by combining xrefs
ppi_filter.unique_identifier(ppi_df)
ppi_df['checker_unique'] = ~ppi_df.duplicated(subset=['xref_partners_sorted'])
print(
'By removing/fixing protein identifiers that consisted of multiple entries, the following duplicates were introduced in the dataset:\n'
)
print(ppi_df.loc[(ppi_df.original_unique)
& ~(ppi_df.checker_unique)].groupby('origin').size(),'\n')
# create UniProt mapping files and remap identifiers
out_mappings = out_path / 'mapping'
out_mappings.parent.mkdir(parents=True, exist_ok=True)
id_mapper.map2uniprot(ppi_df, out_mappings, reviewed_only=True)
# remove multiple mappings
ppi_df = id_mapper.remove_mult(ppi_df)
# re-create unique identifier by combining xrefs
ppi_filter.unique_identifier(ppi_df)
ppi_df['remap_unique'] = ~ppi_df.duplicated(subset=['xref_partners_sorted'])
print(
    'Remapping protein identifiers to UniProt ACs introduced the following duplicates:\n'
)
print(ppi_df.loc[(ppi_df.checker_unique)
& ~(ppi_df.remap_unique)].groupby('origin').size(),'\n')
# check if remapping creates new unique values (should never happen)
assert ppi_df.loc[
~(ppi_df.original_unique) & (ppi_df.remap_unique), [
'xref_partners_sorted', 'original_unique', 'remap_unique',
'checker_unique'
]].shape[0] == 0
# report information about duplicates
print('Number of unique interactions per raw dataset:')
print(ppi_df.groupby('origin')['xref_partners_sorted'].nunique(),'\n')
print('Total dataset size:')
print(ppi_df.groupby('origin')['xref_partners_sorted'].size(),'\n')
print('Total number of unique interactions out of {}'.format(
ppi_df.shape[0]))
print(np.sum(~ppi_df.duplicated(subset=['xref_partners_sorted'])),'\n')
print(
'Total number of unique interactions out of {}, where publications are considered unique:'.
format(ppi_df.shape[0]))
print(
np.sum(~ppi_df.duplicated(subset=['xref_partners_sorted', 'publication'])),'\n')
ppi_df.drop(
['original_unique', 'checker_unique', 'remap_unique'],
axis=1,
inplace=True)
# remove duplicates
# first sort on dataframe priority
ppi_df['origin'] = pd.Categorical(ppi_df['origin'], [
'intact-virus-22.03.2018.mitab', 'BIOGRID-ALL-3.4.160.mitab',
'hpidb2-19.02.2018.mitab', 'virhostnet-01.2018.mitab', 'phi_data.csv'
])
ppi_df.sort_values(by='origin', ascending=True, inplace=True)
size = ppi_df.shape[0]
ppi_df = ppi_df.drop_duplicates(subset=['xref_partners_sorted'], keep='first')
ppi_df = ppi_df.reset_index(drop=True)
print('All duplicate interactions were removed, leaving {} out of {} PPIs.\n'.
format(ppi_df.shape[0], size))
# remove non-UniProt identifiers
size = ppi_df.shape[0]
ppi_df = ppi_df.loc[(ppi_df.xref_A.str.contains('uniprot'))
& (ppi_df.xref_B.str.contains('uniprot'))]
ppi_df = ppi_df.reset_index(drop=True)
print('Omitted {} non-UniProt AC entries, leaving {} PPIs.'.format(
size - ppi_df.shape[0], ppi_df.shape[0]))
ppi_df
ppi_df.groupby('origin').size()
ppi_df.loc[ppi_df['taxid_A'].isin(host_taxonids)].groupby('origin').size()
ppi_df.loc[ppi_df['taxid_B'].isin(host_taxonids)].groupby('origin').size()
ppi_df.loc[ppi_df['taxid_A'].isin(pathogen_taxonids)].groupby('origin').size()
ppi_df.loc[ppi_df['taxid_B'].isin(pathogen_taxonids)].groupby('origin').size()
ppi_df.loc[ppi_df['taxid_B'].isin(pathogen_taxonids)].shape
ppi_df.loc[ppi_df['taxid_A'].isin(pathogen_taxonids)].shape
ppi_df.columns
host_list = set(host_taxonids)
host_position_mask = ppi_df['taxid_B'].isin(host_list)
column_names = ['xref', 'taxid', 'aliases', 'alt_identifiers', 'display_id', 'taxid_name']
columns_to_swap = [name + label for name in column_names for label in ['_A', '_B']]
columns_after_swap = [name + label for name in column_names for label in ['_B', '_A']]
ppi_df.loc[host_position_mask, columns_to_swap] = ppi_df.loc[host_position_mask, columns_after_swap].values
ppi_df.loc[0:3]
ppi_df.loc[0:3]
ppi_df.loc[host_position_mask, columns_after_swap].values
host_list = set(host_list)
host_position_mask = interaction_dataframe['taxid_B'].isin(host_list)
column_names = ['xref', 'taxid', 'aliases', 'alt_identifiers', 'display_id', 'taxid_name']
columns_to_swap = [name + label for name in column_names for label in ['_A', '_B']]
columns_after_swap = [name + label for name in column_names for label in ['_B', '_A']]
interaction_dataframe.loc[host_position_mask, columns_to_swap] = interaction_dataframe.loc[host_position_mask, columns_after_swap].values
all_identifiers = pd.unique(interaction_dataframe[columns].values.ravel('K'))
out_identifiers = Path('/media/pieter/Seagate Red Pieter Moris/workdir/uniprot_identifiers.txt')
with out_identifiers.open('w') as out:
for i in all_identifiers:
out.write('{}\n'.format(i))
```
# Introduction
:label:`chap_introduction`
Until recently, nearly every computer program that we interact with daily
was coded by software developers from first principles.
Say that we wanted to write an application to manage an e-commerce platform. After huddling around a whiteboard for a few hours to ponder the problem,
we would come up with the broad strokes of a working solution that would probably look something like this:
(i) users interact with the application through an interface
running in a web browser or mobile application;
(ii) our application interacts with a commercial-grade database engine
to keep track of each user's state and maintain records
of historical transactions; and (iii) at the heart of our application,
the *business logic* (you might say, the *brains*) of our application
spells out in methodical detail the appropriate action
that our program should take in every conceivable circumstance.
To build the *brains* of our application,
we'd have to step through every possible corner case
that we anticipate encountering, devising appropriate rules.
Each time a customer clicks to add an item to their shopping cart,
we add an entry to the shopping cart database table,
associating that user's ID with the requested product’s ID.
While few developers ever get it completely right the first time
(it might take some test runs to work out the kinks),
for the most part, we could write such a program from first principles
and confidently launch it *before ever seeing a real customer*.
Our ability to design automated systems from first principles
that drive functioning products and systems, often in novel situations,
is a remarkable cognitive feat.
And when you are able to devise solutions that work $100\%$ of the time,
*you should not be using machine learning*.
Fortunately for the growing community of machine learning (ML) scientists,
many tasks that we would like to automate
do not bend so easily to human ingenuity.
Imagine huddling around the whiteboard with the smartest minds you know,
but this time you are tackling one of the following problems:
* Write a program that predicts tomorrow's weather given geographic
information, satellite images, and a trailing window of past weather.
* Write a program that takes in a question, expressed in free-form text, and
answers it correctly.
* Write a program that given an image can identify all the people it contains,
drawing outlines around each.
* Write a program that presents users with products that they are likely to
enjoy but unlikely, in the natural course of browsing, to encounter.
In each of these cases, even elite programmers
are incapable of coding up solutions from scratch.
The reasons for this can vary. Sometimes the program
that we are looking for follows a pattern that changes over time,
and we need our programs to adapt.
In other cases, the relationship (say between pixels,
and abstract categories) may be too complicated,
requiring thousands or millions of computations
that are beyond our conscious understanding
(even if our eyes manage the task effortlessly).
ML is the study of powerful
techniques that can *learn* from *experience*.
As an ML algorithm accumulates more experience,
typically in the form of observational data or
interactions with an environment, its performance improves.
Contrast this with our deterministic e-commerce platform,
which performs according to the same business logic,
no matter how much experience accrues,
until the developers themselves *learn* and decide
that it is time to update the software.
In this book, we will teach you the fundamentals of machine learning,
and focus in particular on deep learning, a powerful set of techniques
driving innovations in areas as diverse as computer vision,
natural language processing, healthcare, and genomics.
## A Motivating Example
Before we could begin writing, the authors of this book,
like much of the work force, had to become caffeinated.
We hopped in the car and started driving.
Using an iPhone, Alex called out "Hey Siri",
awakening the phone's voice recognition system.
Then Mu commanded "directions to Blue Bottle coffee shop".
The phone quickly displayed the transcription of his command.
It also recognized that we were asking for directions
and launched the Maps application to fulfill our request.
Once launched, the Maps app identified a number of routes.
Next to each route, the phone displayed a predicted transit time.
While we fabricated this story for pedagogical convenience,
it demonstrates that in the span of just a few seconds,
our everyday interactions with a smart phone
can engage several machine learning models.
Imagine just writing a program to respond to a *wake word*
like "Alexa", "Okay, Google" or "Siri".
Try coding it up in a room by yourself
with nothing but a computer and a code editor,
as illustrated in :numref:`fig_wake_word`.
How would you write such a program from first principles?
Think about it... the problem is hard.
Every second, the microphone will collect roughly 44,000 samples.
Each sample is a measurement of the amplitude of the sound wave.
What rule could map reliably from a snippet of raw audio to confident predictions ``{yes, no}`` on whether the snippet contains the wake word?
If you are stuck, do not worry.
We do not know how to write such a program from scratch either.
That is why we use ML.

:label:`fig_wake_word`
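To make the scale of the problem concrete, here is a minimal sketch (using NumPy, with synthetic random amplitudes standing in for real audio) of what one second of raw input looks like, and why simple hand-written rules are hopeless:

```python
import numpy as np

rng = np.random.default_rng(0)
# One second of audio at 44.1 kHz is just a vector of 44,100 amplitude samples.
snippet = rng.uniform(-1.0, 1.0, size=44_100)
print(snippet.shape)  # (44100,)

# A hand-written rule would have to map this whole vector to {yes, no}.
# Simple summaries such as average loudness carry no information
# about *which* word was spoken:
is_loud = bool(np.abs(snippet).mean() > 0.3)
```

Any rule we could state explicitly over these 44,100 numbers (thresholds, averages, peak counts) throws away exactly the structure that distinguishes "Alexa" from every other loud sound.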
Here's the trick.
Often, even when we do not know how to tell a computer
explicitly how to map from inputs to outputs,
we are nonetheless capable of performing the cognitive feat ourselves.
In other words, even if you do not know
*how to program a computer* to recognize the word "Alexa",
you yourself *are able* to recognize the word "Alexa".
Armed with this ability, we can collect a huge *dataset*
containing examples of audio and label those that *do*
and that *do not* contain the wake word.
In the ML approach, we do not attempt to design a system
*explicitly* to recognize wake words.
Instead, we define a flexible program
whose behavior is determined by a number of *parameters*.
Then we use the dataset to determine the best possible set of parameters, those that improve the performance of our program
with respect to some measure of performance on the task of interest.
You can think of the parameters as knobs that we can turn,
manipulating the behavior of the program.
Fixing the parameters, we call the program a *model*.
The set of all distinct programs (input-output mappings)
that we can produce just by manipulating the parameters
is called a *family* of models.
And the *meta-program* that uses our dataset
to choose the parameters is called a *learning algorithm*.
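As a toy illustration (not an actual wake-word model), a "family" of programs can be as simple as a linear score whose weights are the knobs; fixing the parameters fixes one particular input-output mapping:

```python
import numpy as np

def model(x, params):
    """One member of a toy model family: a linear score thresholded to yes/no."""
    w, b = params
    score = float(np.dot(w, x) + b)
    return 'yes' if score > 0 else 'no'

x = np.ones(4)                       # a stand-in "input snippet"
print(model(x, (np.ones(4), 0.0)))   # yes
print(model(x, (-np.ones(4), 0.0)))  # no
```

Every distinct setting of `(w, b)` is a different program; the learning algorithm's job is to pick the setting that performs best on the dataset.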
Before we can go ahead and engage the learning algorithm,
we have to define the problem precisely,
pinning down the exact nature of the inputs and outputs,
and choosing an appropriate model family.
In this case, our model receives a snippet of audio as *input*,
and it generates a selection among ``{yes, no}`` as *output*.
If all goes according to plan, the model's guesses will
typically be correct as to whether (or not) the snippet contains the wake word.
If we choose the right family of models,
then there should exist one setting of the knobs
such that the model fires ``yes`` every time it hears the word "Alexa". Because the exact choice of the wake word is arbitrary,
we will probably need a model family sufficiently rich that,
via another setting of the knobs, it could fire ``yes``
only upon hearing the word "Apricot".
We expect that the same model family should be suitable
for *"Alexa" recognition* and *"Apricot" recognition*
because they seem, intuitively, to be similar tasks.
However, we might need a different family of models entirely
if we want to deal with fundamentally different inputs or outputs,
say if we wanted to map from images to captions,
or from English sentences to Chinese sentences.
As you might guess, if we just set all of the knobs randomly,
it is not likely that our model will recognize "Alexa",
"Apricot", or any other English word.
In deep learning, the *learning* is the process
by which we discover the right setting of the knobs
coercing the desired behavior from our model.
As shown in :numref:`fig_ml_loop`, the training process usually looks like this:
1. Start off with a randomly initialized model that cannot do anything useful.
1. Grab some of your labeled data (e.g., audio snippets and corresponding ``{yes, no}`` labels).
1. Tweak the knobs so the model sucks less with respect to those examples.
1. Repeat until the model is awesome.

:label:`fig_ml_loop`
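A minimal sketch of this loop, with 2-D points standing in for audio snippets and a logistic model standing in for the knobs (purely illustrative; the data, model, and hyperparameters are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy labeled data: points above the line x0 + x1 = 0 are "yes" (1), else "no" (0).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = rng.normal(size=2), 0.0             # 1. randomly initialized, useless model
lr = 0.1
for _ in range(500):                       # 4. repeat...
    idx = rng.integers(0, len(X), size=32)
    xb, yb = X[idx], y[idx]                # 2. grab some labeled data
    p = 1 / (1 + np.exp(-(xb @ w + b)))    # current predictions
    w -= lr * xb.T @ (p - yb) / len(xb)    # 3. tweak the knobs
    b -= lr * float(np.mean(p - yb))

accuracy = float(np.mean(((X @ w + b) > 0) == (y == 1)))
```

After a few hundred tweaks, the knobs settle into a configuration that classifies most of the labeled examples correctly, even though no classification rule was ever written down explicitly.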
To summarize, rather than code up a wake word recognizer,
we code up a program that can *learn* to recognize wake words,
*if we present it with a large labeled dataset*.
You can think of this act of determining a program's behavior
by presenting it with a dataset as *programming with data*.
We can "program" a cat detector by providing our machine learning system
with many examples of cats and dogs, such as the images below:
|cat|cat|dog|dog|
|:---------------:|:---------------:|:---------------:|:---------------:|
|||||
This way, the detector will eventually learn to emit a very large positive number if it is a cat, a very large negative number if it is a dog,
and something closer to zero if it is not sure.
And this barely scratches the surface of what ML can do.
Deep learning is just one among many popular methods
for solving machine learning problems.
Thus far, we have only talked about machine learning broadly
and not deep learning. To see why deep learning is important,
we should pause for a moment to highlight a couple of crucial points.
First, the problems that we have discussed thus far---learning
from the raw audio signal, the raw pixel values of images,
or mapping between sentences of arbitrary lengths and
their counterparts in foreign languages---are problems
where deep learning excels and where traditional ML methods faltered.
Deep models are *deep* in precisely the sense
that they learn many *layers* of computation.
It turns out that these many-layered (or hierarchical) models
are capable of addressing low-level perceptual data
in a way that previous tools could not.
In bygone days, the crucial part of applying ML to these problems
consisted of coming up with manually-engineered ways
of transforming the data into some form amenable to *shallow* models.
One key advantage of deep learning is that it replaces not
only the *shallow* models at the end of traditional learning pipelines,
but also the labor-intensive process of feature engineering.
Second, by replacing much of the *domain-specific preprocessing*,
deep learning has eliminated many of the boundaries
that previously separated computer vision, speech recognition,
natural language processing, medical informatics, and other application areas,
offering a unified set of tools for tackling diverse problems.
## The Key Components: Data, Models, and Algorithms
In our *wake-word* example, we described a dataset
consisting of audio snippets and binary labels, and we
gave a hand-wavy sense of how we might *train*
a model to approximate a mapping from snippets to classifications.
This sort of problem, where we try to predict a designated unknown *label*
given known *inputs*, using a dataset of examples
for which the labels are known, is called *supervised learning*,
and it is just one among many *kinds* of machine learning problems.
In the next section, we will take a deep dive into the different ML problems.
First, we'd like to shed more light on some core components
that will follow us around, no matter what kind of ML problem we take on:
1. The *data* that we can learn from.
2. A *model* of how to transform the data.
3. A *loss* function that quantifies the *badness* of our model.
4. An *algorithm* to adjust the model's parameters to minimize the loss.
### Data
It might go without saying that you cannot do data science without data.
We could lose hundreds of pages pondering what precisely constitutes data,
but for now, we will err on the practical side
and focus on the key properties to be concerned with.
Generally, we are concerned with a collection of *examples*
(also called *data points*, *samples*, or *instances*).
In order to work with data usefully, we typically
need to come up with a suitable numerical representation.
Each *example* typically consists of a collection
of numerical attributes called *features*.
In the supervised learning problems above,
a special feature is designated as the prediction *target*
(sometimes called the *label* or *dependent variable*).
The given features from which the model must make its predictions
can then simply be called the *features*
(or often, the *inputs*, *covariates*, or *independent variables*).
If we were working with image data,
each individual photograph might constitute an *example*,
each represented by an ordered list of numerical values
corresponding to the brightness of each pixel.
A $200\times 200$ color photograph would consist of $200\times200\times3=120000$
numerical values, corresponding to the brightness
of the red, green, and blue channels for each spatial location.
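The arithmetic above is easy to check. A quick sketch with NumPy, representing the hypothetical photo as a (height, width, channel) array of brightness values:

```python
import numpy as np

# A 200x200 RGB photo as a (height, width, channel) array.
photo = np.zeros((200, 200, 3))
print(photo.size)         # 120000 numerical values in total
flat = photo.reshape(-1)  # the same photo as one fixed-length vector
print(flat.shape)         # (120000,)
```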
In a more traditional task, we might try to predict
whether or not a patient will survive,
given a standard set of features such as age, vital signs, diagnoses, etc.
When every example is characterized by the same number of numerical values,
we say that the data consists of *fixed-length* vectors
and we describe the (constant) length of the vectors
as the *dimensionality* of the data.
As you might imagine, fixed-length can be a convenient property.
If we wanted to train a model to recognize cancer in microscopy images,
fixed-length inputs mean we have one less thing to worry about.
However, not all data can easily be represented as fixed-length vectors.
While we might expect microscope images to come from standard equipment,
we cannot expect images mined from the Internet
to all show up with the same resolution or shape.
For images, we might consider cropping them all to a standard size,
but that strategy only gets us so far.
We risk losing information in the cropped out portions.
Moreover, text data resists fixed-length representations even more stubbornly.
Consider the customer reviews left on e-commerce sites
like Amazon, IMDB, or TripAdvisor.
Some are short: "it stinks!". Others ramble for pages.
One major advantage of deep learning over traditional methods
is the comparative grace with which modern models
can handle *varying-length* data.
Generally, the more data we have, the easier our job becomes.
When we have more data, we can train more powerful models
and rely less heavily on pre-conceived assumptions.
The regime change from (comparatively) small to big data
is a major contributor to the success of modern deep learning.
To drive the point home, many of the most exciting models in deep learning do not work without large datasets.
Some others work in the low-data regime,
but are no better than traditional approaches.
Finally, it is not enough to have lots of data and to process it cleverly.
We need the *right* data. If the data is full of mistakes,
or if the chosen features are not predictive
of the target quantity of interest, learning is going to fail.
The situation is captured well by the cliché: *garbage in, garbage out*.
Moreover, poor predictive performance is not the only potential consequence.
In sensitive applications of machine learning,
like predictive policing, resumé screening, and risk models used for lending,
we must be especially alert to the consequences of garbage data.
One common failure mode occurs in datasets where some groups of people
are unrepresented in the training data.
Imagine applying a skin cancer recognition system in the wild
that had never seen black skin before.
Failure can also occur when the data
does not merely under-represent some groups
but reflects societal prejudices.
For example, if past hiring decisions are used to train a predictive model
that will be used to screen resumes,
then machine learning models could inadvertently
capture and automate historical injustices.
Note that this can all happen without the data scientist
actively conspiring, or even being aware.
### Models
Most machine learning involves *transforming* the data in some sense.
We might want to build a system that ingests photos and predicts *smiley-ness*.
Alternatively, we might want to ingest a set of sensor readings
and predict how *normal* vs *anomalous* the readings are.
By *model*, we denote the computational machinery for ingesting data
of one type, and spitting out predictions of a possibly different type.
In particular, we are interested in statistical models
that can be estimated from data.
While simple models are perfectly capable of addressing
appropriately simple problems, the problems
that we focus on in this book stretch the limits of classical methods.
Deep learning is differentiated from classical approaches
principally by the set of powerful models that it focuses on.
These models consist of many successive transformations of the data
that are chained together top to bottom, thus the name *deep learning*.
On our way to discussing deep neural networks,
we will discuss some more traditional methods.
### Objective functions
Earlier, we introduced machine learning as "learning from experience".
By *learning* here, we mean *improving* at some task over time.
But who is to say what constitutes an improvement?
You might imagine that we could propose to update our model,
and some people might disagree on whether the proposed update
constituted an improvement or a decline.
In order to develop a formal mathematical system of learning machines,
we need to have formal measures of how good (or bad) our models are.
In machine learning, and optimization more generally,
we call these objective functions.
By convention, we usually define objective functions
so that *lower* is *better*.
This is merely a convention. You can take any function $f$
for which higher is better, and turn it into a new function $f'$
that is qualitatively identical but for which lower is better
by setting $f' = -f$.
Because lower is better, these functions are sometimes called
*loss functions* or *cost functions*.
When trying to predict numerical values,
the most common objective function is squared error $(y-\hat{y})^2$.
For classification, the most common objective is to minimize error rate,
i.e., the fraction of instances on which
our predictions disagree with the ground truth.
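Both objectives are a few lines of NumPy apiece; the toy predictions below are made up for illustration:

```python
import numpy as np

# Squared error for numerical targets, averaged over examples.
def squared_error(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

# Error rate: the fraction of predictions that disagree with the truth.
def error_rate(y, y_hat):
    return float(np.mean(y != y_hat))

y_true = np.array([1.0, 2.0, 3.0])
print(squared_error(y_true, np.array([1.0, 2.5, 2.0])))  # mean of 0, 0.25, 1

labels = np.array([0, 1, 1, 0])
print(error_rate(labels, np.array([0, 1, 0, 0])))  # 1 of 4 wrong: 0.25
```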
Some objectives (like squared error) are easy to optimize.
Others (like error rate) are difficult to optimize directly,
owing to non-differentiability or other complications.
In these cases, it is common to optimize a *surrogate objective*.
Typically, the loss function is defined
with respect to the model's parameters
and depends upon the dataset.
The best values of our model's parameters are learned
by minimizing the loss incurred on a *training set*
consisting of some number of *examples* collected for training.
However, doing well on the training data
does not guarantee that we will do well on (unseen) test data.
So we will typically want to split the available data into two partitions:
the training data (for fitting model parameters)
and the test data (which is held out for evaluation),
reporting the following two quantities:
* **Training Error:**
The error on that data on which the model was trained.
You could think of this as being like
a student's scores on practice exams
used to prepare for some real exam.
Even if the results are encouraging,
that does not guarantee success on the final exam.
* **Test Error:** This is the error incurred on an unseen test set.
This can deviate significantly from the training error.
When a model performs well on the training data
but fails to generalize to unseen data,
we say that it is *overfitting*.
In real-life terms, this is like flunking the real exam
despite doing well on practice exams.
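The split described above can be sketched in a few lines. The data here are synthetic (a noisy linear relationship invented for the example), and the "model" is a one-parameter least-squares fit:

```python
import numpy as np

# Hold out part of the data, fit on the training portion only,
# and report both errors.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)
y = 3.0 * x + rng.normal(0, 0.1, size=100)  # noisy linear data

train_x, test_x = x[:80], x[80:]            # 80/20 split
train_y, test_y = y[:80], y[80:]

w = np.sum(train_x * train_y) / np.sum(train_x ** 2)  # least-squares slope
train_err = np.mean((w * train_x - train_y) ** 2)
test_err = np.mean((w * test_x - test_y) ** 2)
print(train_err, test_err)  # both small here; a large gap would signal overfitting
```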
### Optimization algorithms
Once we have got some data source and representation,
a model, and a well-defined objective function,
we need an algorithm capable of searching
for the best possible parameters for minimizing the loss function.
The most popular optimization algorithms for neural networks
follow an approach called gradient descent.
In short, at each step, they check to see, for each parameter,
which way the training set loss would move
if you perturbed that parameter just a small amount.
They then update the parameter in the direction that reduces the loss.
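The perturbation idea above can be sketched directly: estimate, by finite differences, which way each parameter moves the loss, then step in the direction that reduces it. The objective below is a toy function chosen so the answer is known.

```python
# Gradient descent via finite-difference perturbations.
def gradient_descent(loss, params, lr=0.1, steps=100, eps=1e-6):
    params = list(params)
    for _ in range(steps):
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps                           # perturb one parameter slightly
            slope = (loss(bumped) - loss(params)) / eps
            params[i] -= lr * slope                    # move against the slope
    return params

# Minimize (a - 3)^2 + (b + 1)^2, whose minimum is at a=3, b=-1.
a, b = gradient_descent(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0])
print(round(a, 2), round(b, 2))  # approaches 3.0 and -1.0
```

Real neural-network training computes exact gradients far more efficiently (via backpropagation), but the direction of each update is the same idea.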
## Kinds of Machine Learning
In the following sections, we discuss a few *kinds*
of machine learning problems in greater detail.
We begin with a list of *objectives*, i.e.,
a list of things that we would like machine learning to do.
Note that the objectives are complemented
with a set of techniques of *how* to accomplish them,
including types of data, models, training techniques, etc.
The list below is just a sampling of the problems ML can tackle
to motivate the reader and provide us with some common language
for when we talk about more problems throughout the book.
### Supervised learning
Supervised learning addresses the task of
predicting *targets* given *inputs*.
The targets, which we often call *labels*, are generally denoted by *y*.
The input data, also called the *features* or covariates,
are typically denoted $\mathbf{x}$.
Each (input, target) pair is called an *example* or *instance*.
Sometimes, when the context is clear, we may use the term *examples*
to refer to a collection of inputs,
even when the corresponding targets are unknown.
We denote any particular instance with a subscript, typically $i$,
for instance ($\mathbf{x}_i, y_i$).
A dataset is a collection of $n$ instances $\{\mathbf{x}_i, y_i\}_{i=1}^n$.
Our goal is to produce a model $f_\theta$ that maps any input $\mathbf{x}_i$
to a prediction $f_{\theta}(\mathbf{x}_i)$.
To ground this description in a concrete example,
if we were working in healthcare,
then we might want to predict whether or not
a patient would have a heart attack.
This observation, *heart attack* or *no heart attack*,
would be our label $y$.
The input data $\mathbf{x}$ might be vital signs
such as heart rate, diastolic and systolic blood pressure, etc.
The supervision comes into play because, to choose the parameters $\theta$,
we (the supervisors) provide the model with a dataset
consisting of *labeled examples* ($\mathbf{x}_i, y_i$),
where each example $\mathbf{x}_i$ is matched with the correct label.
In probabilistic terms, we typically are interested in estimating
the conditional probability $P(y \mid \mathbf{x})$.
While it is just one among several paradigms within machine learning,
supervised learning accounts for the majority of successful
applications of machine learning in industry.
Partly, that is because many important tasks
can be described crisply as estimating the probability
of something unknown given a particular set of available data:
* Predict cancer vs not cancer, given a CT image.
* Predict the correct translation in French, given a sentence in English.
* Predict the price of a stock next month based on this month's financial reporting data.
Even with the simple description "predict targets from inputs"
supervised learning can take a great many forms
and require a great many modeling decisions,
depending on (among other considerations) the type, size,
and the number of inputs and outputs.
For example, we use different models to process sequences
(like strings of text or time series data)
and for processing fixed-length vector representations.
We will visit many of these problems in depth
throughout the first 9 parts of this book.
Informally, the learning process looks something like this:
Grab a big collection of examples for which the covariates are known
and select from them a random subset,
acquiring the ground truth labels for each.
Sometimes these labels might be available data that has already been collected
(e.g., did a patient die within the following year?)
and other times we might need to employ human annotators to label the data,
(e.g., assigning images to categories).
Together, these inputs and corresponding labels comprise the training set.
We feed the training dataset into a supervised learning algorithm,
a function that takes as input a dataset
and outputs another function, *the learned model*.
Finally, we can feed previously unseen inputs to the learned model,
using its outputs as predictions of the corresponding label.
The full process is drawn in :numref:`fig_supervised_learning`.

:label:`fig_supervised_learning`
#### Regression
Perhaps the simplest supervised learning task
to wrap your head around is *regression*.
Consider, for example, a set of data harvested
from a database of home sales.
We might construct a table, where each row corresponds to a different house,
and each column corresponds to some relevant attribute,
such as the square footage of a house, the number of bedrooms, the number of bathrooms, and the number of minutes (walking) to the center of town.
In this dataset, each *example* would be a specific house,
and the corresponding *feature vector* would be one row in the table.
If you live in New York or San Francisco,
and you are not the CEO of Amazon, Google, Microsoft, or Facebook,
the (sq. footage, no. of bedrooms, no. of bathrooms, walking distance)
feature vector for your home might look something like: $[100, 0, .5, 60]$.
However, if you live in Pittsburgh, it might look more like $[3000, 4, 3, 10]$.
Feature vectors like this are essential
for most classic machine learning algorithms.
We will continue to denote the feature vector corresponding
to any example $i$ as $\mathbf{x}_i$ and we can compactly refer
to the full table containing all of the feature vectors as $X$.
What makes a problem a *regression* is actually the outputs.
Say that you are in the market for a new home.
You might want to estimate the fair market value of a house,
given some features like these.
The target value, the price of sale, is a *real number*.
If you remember the formal definition of the reals
you might be scratching your head now.
Homes probably never sell for fractions of a cent,
let alone prices expressed as irrational numbers.
In cases like this, when the target is actually discrete,
but where the rounding takes place on a sufficiently fine scale,
we will abuse language just a bit and continue to describe
our outputs and targets as real-valued numbers.
We denote any individual target $y_i$
(corresponding to example $\mathbf{x}_i$)
and the set of all targets $\mathbf{y}$
(corresponding to all examples $X$).
When our targets take on arbitrary values in some range,
we call this a regression problem.
Our goal is to produce a model whose predictions
closely approximate the actual target values.
We denote the predicted target for any instance $\hat{y}_i$.
Do not worry if the notation is bogging you down.
We will unpack it more thoroughly in the subsequent chapters.
Lots of practical problems are well-described regression problems.
Predicting the rating that a user will assign to a movie
can be thought of as a regression problem
and if you designed a great algorithm to accomplish this feat in 2009,
you might have won the [1-million-dollar Netflix prize](https://en.wikipedia.org/wiki/Netflix_Prize).
Predicting the length of stay for patients in the hospital
is also a regression problem.
A good rule of thumb is that any *How much?* or *How many?* problem
should suggest regression.
* "How many hours will this surgery take?": *regression*
* "How many dogs are in this photo?": *regression*.
However, if you can easily pose your problem as "Is this a _ ?",
then it is likely *classification*, a different kind
of supervised problem that we will cover next.
Even if you have never worked with machine learning before,
you have probably worked through a regression problem informally.
Imagine, for example, that you had your drains repaired
and that your contractor spent $x_1=3$ hours
removing gunk from your sewage pipes.
Then she sent you a bill of $y_1 = \$350$.
Now imagine that your friend hired the same contractor for $x_2 = 2$ hours
and that she received a bill of $y_2 = \$250$.
If someone then asked you how much to expect
on their upcoming gunk-removal invoice
you might make some reasonable assumptions,
such as more hours worked costs more dollars.
You might also assume that there is some base charge
and that the contractor then charges per hour.
If these assumptions held true, then given these two data points,
you could already identify the contractor's pricing structure:
\$100 per hour plus \$50 to show up at your house.
If you followed that much then you already understand
the high-level idea behind linear regression
(and you just implicitly designed a linear model with a bias term).
In this case, we could produce the parameters
that exactly matched the contractor's prices.
Sometimes that is not possible, e.g., if some of
the variance owes to some factors besides your two features.
In these cases, we will try to learn models
that minimize the distance between our predictions and the observed values.
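The contractor example can be checked numerically: two (hours, dollars) observations pin down a linear model $y = wx + b$ exactly, since they give two equations in two unknowns.

```python
import numpy as np

# Solve 3w + b = 350 and 2w + b = 250 for the slope and base charge.
A = np.array([[3.0, 1.0],    # 3 hours, plus the base charge
              [2.0, 1.0]])   # 2 hours, plus the base charge
y = np.array([350.0, 250.0])
w, b = np.linalg.solve(A, y)
print(w, b)  # 100.0 per hour, 50.0 to show up
```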
In most of our chapters, we will focus on one of two very common losses,
the [L1 loss](http://mxnet.incubator.apache.org/api/python/gluon/loss.html#mxnet.gluon.loss.L1Loss)
where
$$l(y, y') = \sum_i |y_i-y_i'|$$
and the least mean squares loss, or
[L2 loss](http://mxnet.incubator.apache.org/api/python/gluon/loss.html#mxnet.gluon.loss.L2Loss),
where
$$l(y, y') = \sum_i (y_i - y_i')^2.$$
As we will see later, the $L_2$ loss corresponds to the assumption
that our data was corrupted by Gaussian noise,
whereas the $L_1$ loss corresponds to an assumption
of noise from a Laplace distribution.
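Both losses are one-liners in NumPy; the toy targets and predictions below are invented for illustration:

```python
import numpy as np

# L1 loss: sum of absolute differences.
def l1_loss(y, y_hat):
    return float(np.sum(np.abs(y - y_hat)))

# L2 loss: sum of squared differences.
def l2_loss(y, y_hat):
    return float(np.sum((y - y_hat) ** 2))

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.5, 2.0, 1.0])
print(l1_loss(y, y_hat))  # 0.5 + 0 + 2 = 2.5
print(l2_loss(y, y_hat))  # 0.25 + 0 + 4 = 4.25
```

Note how the squared loss punishes the large miss on the last example far more heavily than the L1 loss does, which is why L2 is more sensitive to outliers.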
#### Classification
While regression models are great for addressing *how many?* questions,
lots of problems do not bend comfortably to this template.
For example, a bank wants to add check scanning to its mobile app.
This would involve the customer snapping a photo of a check
with their smart phone's camera
and the machine learning model would need to be able
to automatically understand text seen in the image.
It would also need to understand hand-written text to be even more robust.
This kind of system is referred to as optical character recognition (OCR),
and the kind of problem it addresses is called *classification*.
It is treated with a different set of algorithms
than those used for regression (although many techniques will carry over).
In classification, we want our model to look at a feature vector,
e.g., the pixel values in an image,
and then predict to which category (formally called a *class*),
among some discrete set of options, an example belongs.
For hand-written digits, we might have 10 classes,
corresponding to the digits 0 through 9.
The simplest form of classification is when there are only two classes,
a problem which we call binary classification.
For example, our dataset $X$ could consist of images of animals
and our *labels* $Y$ might be the classes $\mathrm{\{cat, dog\}}$.
While in regression, we sought a *regressor* to output a real value $\hat{y}$,
in classification, we seek a *classifier*, whose output $\hat{y}$ is the predicted class assignment.
For reasons that we will get into as the book gets more technical,
it can be hard to optimize a model that can only output
a hard categorical assignment, e.g., either *cat* or *dog*.
In these cases, it is usually much easier to instead express
our model in the language of probabilities.
Given an example $x$, our model assigns a probability $\hat{y}_k$
to each label $k$. Because these are probabilities,
they need to be positive numbers and add up to $1$
and thus we only need $K-1$ numbers
to assign probabilities of $K$ categories.
This is easy to see for binary classification.
If there is a $0.6$ ($60\%$) probability that an unfair coin comes up heads,
then there is a $0.4$ ($40\%$) probability that it comes up tails.
Returning to our animal classification example,
a classifier might see an image and output the probability
that the image is a cat $P(y=\text{cat} \mid x) = 0.9$.
We can interpret this number by saying that the classifier
is $90\%$ sure that the image depicts a cat.
The magnitude of the probability for the predicted class
conveys one notion of uncertainty.
It is not the only notion of uncertainty
and we will discuss others in more advanced chapters.
When we have more than two possible classes,
we call the problem *multiclass classification*.
Common examples include hand-written character recognition
`[0, 1, 2, 3 ... 9, a, b, c, ...]`.
While we attacked regression problems by trying
to minimize the L1 or L2 loss functions,
the common loss function for classification problems is called cross-entropy.
In MXNet Gluon, the corresponding loss function can be found [here](https://mxnet.incubator.apache.org/api/python/gluon/loss.html#mxnet.gluon.loss.SoftmaxCrossEntropyLoss).
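A minimal sketch of cross-entropy itself, computed directly in NumPy: it is the negative log of the probability the model assigned to the true class, averaged over examples. The predicted probabilities below are made up.

```python
import numpy as np

# Cross-entropy from predicted class probabilities and true labels.
def cross_entropy(probs, labels):
    # probs: (n, K) predicted class probabilities; labels: (n,) true classes
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

probs = np.array([[0.9, 0.1],   # confident and correct
                  [0.4, 0.6]])  # less confident, also correct
labels = np.array([0, 1])
print(cross_entropy(probs, labels))  # confident correct guesses cost little
```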
Note that the most likely class is not necessarily
the one that you are going to use for your decision.
Assume that you find this beautiful mushroom in your backyard
as shown in :numref:`fig_death_cap`.

:width:`200px`
:label:`fig_death_cap`
Now, assume that you built a classifier and trained it
to predict if a mushroom is poisonous based on a photograph.
Say our poison-detection classifier outputs
$P(y=\text{death cap} \mid \text{image}) = 0.2$.
In other words, the classifier is $80\%$ sure
that our mushroom *is not* a death cap.
Still, you'd have to be a fool to eat it.
That is because the certain benefit of a delicious dinner
is not worth a $20\%$ risk of dying from it.
In other words, the effect of the *uncertain risk*
outweighs the benefit by far. We can look at this more formally.
Basically, we need to compute the expected risk that we incur,
i.e., we need to multiply the probability of the outcome
with the benefit (or harm) associated with it:
$$L(\mathrm{action} \mid x) = E_{y \sim p(y \mid x)}[\mathrm{loss}(\mathrm{action}, y)].$$
Hence, the loss $L$ incurred by eating the mushroom
is $L(a=\mathrm{eat} \mid x) = 0.2 \times \infty + 0.8 \times 0 = \infty$,
whereas the cost of discarding it is
$L(a=\mathrm{discard} \mid x) = 0.2 \times 0 + 0.8 \times 1 = 0.8$.
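The expected-risk computation is simple enough to write out, with the "infinite" harm of dying replaced by a large finite cost so that we can compute with it:

```python
# Expected loss: probability of each outcome times its cost.
def expected_loss(p_death_cap, cost_if_death_cap, cost_otherwise):
    return p_death_cap * cost_if_death_cap + (1 - p_death_cap) * cost_otherwise

HARM = 1e9  # stand-in for the effectively infinite cost of eating a death cap
print(expected_loss(0.2, HARM, 0.0))  # eating: enormous expected loss
print(expected_loss(0.2, 0.0, 1.0))   # discarding: 0.8, a small price
```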
Our caution was justified: as any mycologist would tell us,
the above mushroom actually *is* a death cap.
Classification can get much more complicated than just
binary, multiclass, or even multi-label classification.
For instance, there are some variants of classification
for addressing hierarchies.
Hierarchies assume that there exist some relationships among the many classes.
So not all errors are equal---if we must err, we would prefer
to misclassify to a related class rather than to a distant class.
Usually, this is referred to as *hierarchical classification*.
One early example is due to [Linnaeus](https://en.wikipedia.org/wiki/Carl_Linnaeus), who organized the animals in a hierarchy.
In the case of animal classification,
it might not be so bad to mistake a poodle for a schnauzer,
but our model would pay a huge penalty
if it confused a poodle for a dinosaur.
Which hierarchy is relevant might depend
on how you plan to use the model.
For example, rattlesnakes and garter snakes
might be close on the phylogenetic tree,
but mistaking a rattler for a garter could be deadly.
#### Tagging
Some classification problems do not fit neatly
into the binary or multiclass classification setups.
For example, we could train a normal binary classifier
to distinguish cats from dogs.
Given the current state of computer vision,
we can do this easily, with off-the-shelf tools.
Nonetheless, no matter how accurate our model gets,
we might find ourselves in trouble when the classifier
encounters an image of the Town Musicians of Bremen.

:width:`300px`
As you can see, there is a cat in the picture,
and a rooster, a dog, a donkey, and a bird,
with some trees in the background.
Depending on what we want to do with our model
ultimately, treating this as a binary classification problem
might not make a lot of sense.
Instead, we might want to give the model the option of
saying the image depicts a cat *and* a dog *and* a donkey
*and* a rooster *and* a bird.
The problem of learning to predict classes that are
*not mutually exclusive* is called multi-label classification.
Auto-tagging problems are typically best described
as multi-label classification problems.
Think of the tags people might apply to posts on a tech blog,
e.g., "machine learning", "technology", "gadgets",
"programming languages", "linux", "cloud computing", "AWS".
A typical article might have 5-10 tags applied
because these concepts are correlated.
Posts about "cloud computing" are likely to mention "AWS"
and posts about "machine learning" could also deal
with "programming languages".
We also have to deal with this kind of problem when dealing
with the biomedical literature, where correctly tagging articles is important
because it allows researchers to do exhaustive reviews of the literature.
At the National Library of Medicine, a number of professional annotators
go over each article that gets indexed in PubMed
to associate it with the relevant terms from MeSH,
a collection of roughly 28k tags.
This is a time-consuming process and the
annotators typically have a one-year lag between archiving and tagging.
Machine learning can be used here to provide provisional tags
until each article can have a proper manual review.
Indeed, for several years, the BioASQ organization
has [hosted a competition](http://bioasq.org/) to do precisely this.
#### Search and ranking
Sometimes we do not just want to assign each example to a bucket
or to a real value. In the field of information retrieval,
we want to impose a ranking on a set of items.
Take web search, for example: the goal is less to determine whether
a particular page is relevant for a query, but rather
to determine which one of the plethora of search results
is *most relevant* for a particular user.
We really care about the ordering of the relevant search results
and our learning algorithm needs to produce ordered subsets
of elements from a larger set.
In other words, if we are asked to produce the first 5 letters from the alphabet, there is a difference
between returning ``A B C D E`` and ``C A B E D``.
Even if the result set is the same,
the ordering within the set matters.
One possible solution to this problem is to first assign
to every element in the set a corresponding relevance score
and then to retrieve the top-rated elements.
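The two-step recipe above, scoring every element and then retrieving the top-rated ones, fits in a few lines. The items and relevance scores here are made up:

```python
# Score every item, then return the top-k in order of relevance.
def rank(items, score, k=3):
    return sorted(items, key=score, reverse=True)[:k]

scores = {"page_a": 0.1, "page_b": 0.9, "page_c": 0.5, "page_d": 0.7}
print(rank(scores, scores.get))  # ['page_b', 'page_d', 'page_c']
```

Notice that the ordering of the returned subset matters, not just its membership, which is exactly what distinguishes ranking from plain classification.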
[PageRank](https://en.wikipedia.org/wiki/PageRank),
the original secret sauce behind the Google search engine
was an early example of such a scoring system but it was
peculiar in that it did not depend on the actual query.
Here they relied on a simple relevance filter
to identify the set of relevant items
and then on PageRank to order those results
that contained the query term.
Nowadays, search engines use machine learning and behavioral models
to obtain query-dependent relevance scores.
There are entire academic conferences devoted to this subject.
#### Recommender systems
:label:`subsec_recommender_systems`
Recommender systems are another problem setting
that is related to search and ranking.
The problems are similar insofar as the goal
is to display a set of relevant items to the user.
The main difference is the emphasis on *personalization*
to specific users in the context of recommender systems.
For instance, for movie recommendations,
the results page for a SciFi fan and the results page
for a connoisseur of Peter Sellers comedies might differ significantly.
Similar problems pop up in other recommendation settings,
e.g., for retail products, music, or news recommendation.
In some cases, customers provide explicit feedback communicating
how much they liked a particular product
(e.g., the product ratings and reviews on Amazon, IMDB, GoodReads, etc.).
In some other cases, they provide implicit feedback,
e.g., by skipping titles on a playlist,
which might indicate dissatisfaction but might just indicate
that the song was inappropriate in context.
In the simplest formulations, these systems are trained
to estimate some score $y_{ij}$, such as an estimated rating
or the probability of purchase, given a user $u_i$ and product $p_j$.
Given such a model, then for any given user,
we could retrieve the set of objects with the largest scores $y_{ij}$,
which could then be recommended to the customer.
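A sketch of that simplest formulation: a table of estimated scores $y_{ij}$ (rows for users, columns for products), from which we retrieve the highest-scoring products for a given user. The numbers are invented for illustration.

```python
import numpy as np

# Estimated scores y_ij: rows are users, columns are products.
scores = np.array([[4.5, 1.2, 3.8],   # user 0
                   [2.0, 4.9, 0.5]])  # user 1

def recommend(user, k=2):
    # Product indices with the largest scores, best first.
    return [int(i) for i in np.argsort(scores[user])[::-1][:k]]

print(recommend(0))  # [0, 2]
print(recommend(1))  # [1, 0]
```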
Production systems are considerably more advanced and take
detailed user activity and item characteristics into account
when computing such scores. :numref:`fig_deeplearning_amazon` is an example
of deep learning books recommended by Amazon based on personalization algorithms tuned to capture the author's preferences.

:label:`fig_deeplearning_amazon`
Despite their tremendous economic value, recommendation systems
naively built on top of predictive models
suffer some serious conceptual flaws.
To start, we only observe *censored feedback*.
Users preferentially rate movies that they feel strongly about:
you might notice that items receive many 5-star and 1-star ratings
but that there are conspicuously few 3-star ratings.
Moreover, current purchase habits are often a result
of the recommendation algorithm currently in place,
but learning algorithms do not always take this detail into account.
Thus it is possible for feedback loops to form
where a recommender system preferentially pushes an item
that is then taken to be better (due to greater purchases)
and in turn is recommended even more frequently.
Many of these problems about how to deal with censoring,
incentives, and feedback loops, are important open research questions.
#### Sequence Learning
So far, we have looked at problems where we have
some fixed number of inputs and produce a fixed number of outputs.
Earlier, we considered predicting home prices from a fixed set of features: square footage, number of bedrooms,
number of bathrooms, walking time to downtown.
We also discussed mapping from an image (of fixed dimension)
to the predicted probabilities that it belongs to each
of a fixed number of classes, or taking a user ID and a product ID,
and predicting a star rating. In these cases,
once we feed our fixed-length input
into the model to generate an output,
the model immediately forgets what it just saw.
This might be fine if our inputs truly all have the same dimensions
and if successive inputs truly have nothing to do with each other.
But how would we deal with video snippets?
In this case, each snippet might consist of a different number of frames.
And our guess of what is going on in each frame might be much stronger
if we take into account the previous or succeeding frames.
Same goes for language. One popular deep learning problem
is machine translation: the task of ingesting sentences
in some source language and predicting their translation in another language.
These problems also occur in medicine.
We might want a model to monitor patients in the intensive care unit
and to fire off alerts if their risk of death
in the next 24 hours exceeds some threshold.
We definitely would not want this model to throw away
everything it knows about the patient history each hour
and just make its predictions based on the most recent measurements.
These problems are among the most exciting applications of machine learning
and they are instances of *sequence learning*.
They require a model to either ingest sequences of inputs
or to emit sequences of outputs (or both!).
These latter problems are sometimes referred to as ``seq2seq`` problems. Language translation is a ``seq2seq`` problem.
Transcribing spoken speech is also a ``seq2seq`` problem.
While it is impossible to consider all types of sequence transformations,
a number of special cases are worth mentioning:
**Tagging and Parsing**. This involves annotating a text sequence with attributes.
In other words, the number of inputs and outputs is essentially the same.
For instance, we might want to know where the verbs and subjects are.
Alternatively, we might want to know which words are the named entities.
In general, the goal is to decompose and annotate text based on structural
and grammatical assumptions to get some annotation.
This sounds more complex than it actually is.
Below is a very simple example of annotating a sentence
with tags indicating which words refer to named entities.
```text
Tom has dinner in Washington with Sally.
Ent -   -      -  Ent        -    Ent
```
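The example above can be mimicked by a toy lexicon-based tagger. A real tagger would be learned from data; the point here is only the shape of the problem: one output tag per input token.

```python
# Hypothetical entity lexicon, for illustration only
ENTITIES = {"Tom", "Washington", "Sally"}

def tag(tokens):
    """Emit exactly one tag per token: 'Ent' for named entities, '-' otherwise."""
    return ["Ent" if t in ENTITIES else "-" for t in tokens]

tokens = "Tom has dinner in Washington with Sally".split()
print(tag(tokens))  # → ['Ent', '-', '-', '-', 'Ent', '-', 'Ent']
```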
**Automatic Speech Recognition**. With speech recognition, the input sequence $x$
is an audio recording of a speaker (shown in :numref:`fig_speech`), and the output $y$
is the textual transcript of what the speaker said.
The challenge is that there are many more audio frames
(sound is typically sampled at 8 kHz or 16 kHz)
than text, i.e., there is no 1:1 correspondence between audio and text,
since thousands of samples correspond to a single spoken word.
These are ``seq2seq`` problems where the output is much shorter than the input.

:width:`700px`
:label:`fig_speech`
**Text to Speech**. Text-to-Speech (TTS) is the inverse of speech recognition.
In other words, the input $x$ is text
and the output $y$ is an audio file.
In this case, the output is *much longer* than the input.
While it is easy for *humans* to recognize a bad audio file,
this is not quite so trivial for computers.
**Machine Translation**. Unlike the case of speech recognition, where corresponding
inputs and outputs occur in the same order (after alignment),
in machine translation, order inversion can be vital.
In other words, while we are still converting one sequence into another,
neither the number of inputs and outputs nor the order
of corresponding data points is assumed to be the same.
Consider the following illustrative example
of the peculiar tendency of Germans
to place the verbs at the end of sentences.
```text
German: Haben Sie sich schon dieses grossartige Lehrwerk angeschaut?
English: Did you already check out this excellent tutorial?
Wrong alignment: Did you yourself already this excellent tutorial looked-at?
```
Many related problems pop up in other learning tasks.
For instance, determining the order in which a user
reads a Webpage is a two-dimensional layout analysis problem.
Dialogue problems exhibit all kinds of additional complications,
where determining what to say next requires taking into account
real-world knowledge and the prior state of the conversation
across long temporal distances. This is an active area of research.
### Unsupervised learning
All the examples so far were related to *Supervised Learning*,
i.e., situations where we feed the model a giant dataset
containing both the features and corresponding target values.
You could think of the supervised learner as having
an extremely specialized job and an extremely anal boss.
The boss stands over your shoulder and tells you exactly what to do
in every situation until you learn to map from situations to actions.
Working for such a boss sounds pretty lame.
On the other hand, it is easy to please this boss.
You just recognize the pattern as quickly as possible
and imitate their actions.
At the opposite extreme, it could be frustrating
to work for a boss who has no idea what they want you to do.
However, if you plan to be a data scientist, you'd better get used to it.
The boss might just hand you a giant dump of data and tell you to *do some data science with it!* This sounds vague because it is.
We call this class of problems *unsupervised learning*,
and the type and number of questions we could ask
is limited only by our creativity.
We will address a number of unsupervised learning techniques
in later chapters. To whet your appetite for now,
we describe a few of the questions you might ask:
* Can we find a small number of prototypes
that accurately summarize the data?
Given a set of photos, can we group them into landscape photos,
pictures of dogs, babies, cats, mountain peaks, etc.?
Likewise, given a collection of users' browsing activity,
can we group them into users with similar behavior?
This problem is typically known as *clustering*.
* Can we find a small number of parameters
that accurately capture the relevant properties of the data?
The trajectory of a ball is quite well described
by its velocity, diameter, and mass.
Tailors have developed a small number of parameters
that describe human body shape fairly accurately
for the purpose of fitting clothes.
These problems are referred to as *subspace estimation* problems.
If the dependence is linear, it is called *principal component analysis*.
* Is there a representation of (arbitrarily structured) objects
in Euclidean space (i.e., the space of vectors in $\mathbb{R}^n$)
such that symbolic properties can be well matched?
This is called *representation learning* and it is used
to describe entities and their relations,
such as Rome $-$ Italy $+$ France $=$ Paris.
* Is there a description of the root causes
of much of the data that we observe?
For instance, if we have demographic data
about house prices, pollution, crime, location,
education, salaries, etc., can we discover
how they are related simply based on empirical data?
The fields concerned with *causality* and
*probabilistic graphical models* address this problem.
* Another important and exciting recent development in unsupervised learning
is the advent of *generative adversarial networks* (GANs).
These give us a procedural way to synthesize data,
even complicated structured data like images and audio.
The underlying statistical mechanisms are tests
to check whether real and fake data are the same.
We will devote a few notebooks to them.
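To make the clustering question concrete, here is a minimal k-means sketch in plain NumPy on toy 2-D data (the data and cluster locations are invented): alternate between assigning each point to its nearest prototype and moving each prototype to the mean of its points.

```python
import numpy as np

# Two made-up blobs of points in the plane
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)),   # blob near (0, 0)
                  rng.normal(3, 0.3, (50, 2))])  # blob near (3, 3)

k = 2
centers = data[rng.choice(len(data), k, replace=False)]  # init from data points
for _ in range(20):
    # assign each point to its nearest center
    labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
    # move each center to the mean of its assigned points (skip empty clusters)
    centers = np.array([data[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])

print(np.round(centers, 1))
```

The two recovered prototypes end up near the blob centers; with real photos or browsing logs, the same loop would run on feature vectors rather than raw coordinates.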
### Interacting with an Environment
So far, we have not discussed where data actually comes from,
or what actually *happens* when a machine learning model generates an output.
That is because supervised learning and unsupervised learning
do not address these issues in a very sophisticated way.
In either case, we grab a big pile of data upfront,
then set our pattern recognition machines in motion
without ever interacting with the environment again.
Because all of the learning takes place
after the algorithm is disconnected from the environment,
this is sometimes called *offline learning*.
For supervised learning, the process looks like :numref:`fig_data_collection`.

:label:`fig_data_collection`
This simplicity of offline learning has its charms.
The upside is we can worry about pattern recognition
in isolation, without any distraction from these other problems.
But the downside is that the problem formulation is quite limiting.
If you are more ambitious, or if you grew up reading Asimov's Robot Series,
then you might imagine artificially intelligent bots capable
not only of making predictions, but of taking actions in the world.
We want to think about intelligent *agents*, not just predictive *models*.
That means we need to think about choosing *actions*,
not just making *predictions*. Moreover, unlike predictions,
actions actually impact the environment.
If we want to train an intelligent agent,
we must account for the way its actions might
impact the future observations of the agent.
Considering the interaction with an environment
opens a whole set of new modeling questions.
Does the environment:
* Remember what we did previously?
* Want to help us, e.g., a user reading text into a speech recognizer?
* Want to beat us, i.e., an adversarial setting like spam filtering (against spammers) or playing a game (vs an opponent)?
* Not care (as in many cases)?
* Have shifting dynamics (does future data always resemble the past or do the patterns change over time, either naturally or in response to our automated tools)?
This last question raises the problem of *distribution shift*
(when training and test data are different).
It is a problem that most of us have experienced
when taking exams written by a lecturer,
while the homeworks were composed by her TAs.
We will briefly describe reinforcement learning and adversarial learning,
two settings that explicitly consider interaction with an environment.
### Reinforcement learning
If you are interested in using machine learning
to develop an agent that interacts with an environment
and takes actions, then you are probably going to wind up
focusing on *reinforcement learning* (RL).
This might include applications to robotics,
to dialogue systems, and even to developing AI for video games.
*Deep reinforcement learning* (DRL), which applies
deep neural networks to RL problems, has surged in popularity.
The breakthrough [deep Q-network that beat humans at Atari games using only the visual input](https://www.wired.com/2015/02/google-ai-plays-atari-like-pros/),
and the [AlphaGo program that dethroned the world champion at the board game Go](https://www.wired.com/2017/05/googles-alphago-trounces-humans-also-gives-boost/) are two prominent examples.
Reinforcement learning gives a very general statement of a problem,
in which an agent interacts with an environment over a series of *timesteps*.
At each timestep $t$, the agent receives some observation $o_t$
from the environment and must choose an action $a_t$
that is subsequently transmitted back to the environment
via some mechanism (sometimes called an actuator).
Finally, the agent receives a reward $r_t$ from the environment.
The agent then receives a subsequent observation,
and chooses a subsequent action, and so on.
The behavior of an RL agent is governed by a *policy*.
In short, a *policy* is just a function that maps
from observations (of the environment) to actions.
The goal of reinforcement learning is to produce a good policy.
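The observation–action–reward cycle can be written down as a skeletal loop. The environment below is entirely made up (reward 1 for matching a hidden target action, 0 otherwise), and the policy is deliberately bad, but the structure is the one just described.

```python
import random

random.seed(0)
ACTIONS = [0, 1]
TARGET = 1  # hidden from the agent

def policy(observation):
    """A (bad) policy: ignore the observation, act at random."""
    return random.choice(ACTIONS)

total_reward = 0
observation = 0  # initial observation o_0
for t in range(100):
    action = policy(observation)           # choose a_t based on o_t
    reward = 1 if action == TARGET else 0  # environment emits reward r_t
    observation = action                   # environment emits next observation
    total_reward += reward

print(total_reward)  # roughly 50 for this random policy
```

Reinforcement learning is then the problem of replacing `policy` with a function that makes `total_reward` large.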

It is hard to overstate the generality of the RL framework.
For example, we can cast any supervised learning problem as an RL problem.
Say we had a classification problem.
We could create an RL agent with one *action* corresponding to each class.
We could then create an environment which gave a reward
that was exactly equal to the loss function
from the original supervised problem.
That being said, RL can also address many problems
that supervised learning cannot.
For example, in supervised learning we always expect
that the training input comes associated with the correct label.
But in RL, we do not assume that for each observation,
the environment tells us the optimal action.
In general, we just get some reward.
Moreover, the environment may not even tell us which actions led to the reward.
Consider for example the game of chess.
The only real reward signal comes at the end of the game
when we either win, which we might assign a reward of 1,
or when we lose, which we could assign a reward of -1.
So reinforcement learners must deal with the *credit assignment problem*:
determining which actions to credit or blame for an outcome.
The same goes for an employee who gets a promotion on October 11.
That promotion likely reflects a large number
of well-chosen actions over the previous year.
Getting more promotions in the future requires figuring out
what actions along the way led to the promotion.
Reinforcement learners may also have to deal
with the problem of partial observability.
That is, the current observation might not
tell you everything about your current state.
Say a cleaning robot found itself trapped
in one of many identical closets in a house.
Inferring the precise location (and thus state) of the robot
might require considering its previous observations before entering the closet.
Finally, at any given point, reinforcement learners
might know of one good policy,
but there might be many other better policies
that the agent has never tried.
The reinforcement learner must constantly choose
whether to *exploit* the best currently-known strategy as a policy,
or to *explore* the space of strategies,
potentially giving up some short-run reward in exchange for knowledge.
#### MDPs, bandits, and friends
The general reinforcement learning problem
is a very broad setting.
Actions affect subsequent observations.
Rewards are only observed corresponding to the chosen actions.
The environment may be either fully or partially observed.
Accounting for all this complexity at once may ask too much of researchers.
Moreover, not every practical problem exhibits all this complexity.
As a result, researchers have studied a number of
*special cases* of reinforcement learning problems.
When the environment is fully observed,
we call the RL problem a *Markov Decision Process* (MDP).
When the state does not depend on the previous actions,
we call the problem a *contextual bandit problem*.
When there is no state, just a set of available actions
with initially unknown rewards, this problem
is the classic *multi-armed bandit problem*.
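The multi-armed bandit case is simple enough to sketch directly. Below, a classic epsilon-greedy agent trades off exploration and exploitation over three arms whose payout rates are invented; the agent must discover the best arm by trying things.

```python
import random

random.seed(0)
probs = [0.2, 0.5, 0.8]      # true (unknown) payout rates, made up
counts = [0] * len(probs)    # pulls per arm
values = [0.0] * len(probs)  # running mean reward per arm
eps = 0.1                    # exploration rate

for t in range(5000):
    if random.random() < eps:                        # explore a random arm
        arm = random.randrange(len(probs))
    else:                                            # exploit best estimate
        arm = max(range(len(probs)), key=lambda a: values[a])
    reward = 1 if random.random() < probs[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values.index(max(values)))  # index of the best-looking arm
```

The `eps` parameter is exactly the exploit-versus-explore dial discussed above: set it to zero and the agent may lock onto a mediocre arm forever.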
## Roots
Although many deep learning methods are recent inventions,
humans have held the desire to analyze data
and to predict future outcomes for centuries.
In fact, much of natural science has its roots in this.
For instance, the Bernoulli distribution is named after
[Jacob Bernoulli (1655-1705)](https://en.wikipedia.org/wiki/Jacob_Bernoulli), and the Gaussian distribution was discovered
by [Carl Friedrich Gauss (1777-1855)](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss).
He invented, for instance, the least mean squares algorithm,
which is still used today for countless problems
from insurance calculations to medical diagnostics.
These tools gave rise to an experimental approach
in the natural sciences---for instance, Ohm's law
relating current and voltage in a resistor
is perfectly described by a linear model.
Even in the Middle Ages, mathematicians had a keen intuition of estimates.
For instance, the geometry book of [Jacob Köbel (1460-1533)](https://www.maa.org/press/periodicals/convergence/mathematical-treasures-jacob-kobels-geometry) illustrates
averaging the length of 16 adult men's feet to obtain the average foot length.

:width:`500px`
:label:`fig_koebel`
:numref:`fig_koebel` illustrates how this estimator works.
The 16 adult men were asked to line up in a row when leaving church.
The aggregate length of their feet was then divided by 16
to obtain an estimate for what now amounts to 1 foot.
This "algorithm" was later improved to deal with misshapen feet---the
2 men with the shortest and longest feet respectively were sent away,
averaging only over the remainder.
This is one of the earliest examples of the trimmed mean estimate.
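Köbel's improved "algorithm" is a trimmed mean, which is one line of code once the data are sorted. The foot lengths below are invented, in centimetres:

```python
def trimmed_mean(xs, trim=1):
    """Mean of xs after discarding the `trim` smallest and largest values."""
    xs = sorted(xs)
    kept = xs[trim:len(xs) - trim]
    return sum(kept) / len(kept)

# 16 hypothetical foot lengths, including one misshapen outlier at 36.0
feet = [24.5, 26.0, 27.0, 27.5, 28.0, 28.0, 28.5, 29.0,
        29.0, 29.5, 30.0, 30.0, 30.5, 31.0, 32.0, 36.0]
print(trimmed_mean(feet))  # → 29.0
```

Sending away the men with the shortest and longest feet corresponds to `trim=1`; the estimate becomes robust to the outliers that would distort a plain average.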
Statistics really took off with the collection and availability of data.
One of its titans, [Ronald Fisher (1890-1962)](https://en.wikipedia.org/wiki/Ronald_Fisher), contributed significantly to its theory
and also its applications in genetics.
Many of his algorithms (such as Linear Discriminant Analysis)
and formulas (such as the Fisher Information Matrix)
are still in frequent use today (even the Iris dataset
that he released in 1936 is still used sometimes
to illustrate machine learning algorithms).
Fisher was also a proponent of eugenics,
which should remind us that the morally dubious use of data science
has as long and enduring a history as its productive use
in industry and the natural sciences.
A second influence for machine learning came from Information Theory
[(Claude Shannon, 1916-2001)](https://en.wikipedia.org/wiki/Claude_Shannon) and the Theory of computation via [Alan Turing (1912-1954)](https://en.wikipedia.org/wiki/Alan_Turing).
Turing posed the question "Can machines think?"
in his famous paper [Computing machinery and intelligence](https://en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence) (Mind, October 1950).
In what he described as the Turing test, a machine
can be considered intelligent if it is difficult
for a human evaluator to distinguish between the replies
from a machine and a human based on textual interactions.
Another influence can be found in neuroscience and psychology.
After all, humans clearly exhibit intelligent behavior.
It is thus only reasonable to ask whether one could explain
and possibly reverse engineer this capacity.
One of the oldest algorithms inspired in this fashion
was formulated by [Donald Hebb (1904-1985)](https://en.wikipedia.org/wiki/Donald_O._Hebb).
In his groundbreaking book The Organization of Behavior :cite:`Hebb.Hebb.1949`,
he posited that neurons learn by positive reinforcement.
This became known as the Hebbian learning rule.
It is the prototype of Rosenblatt's perceptron learning algorithm
and it laid the foundations of many stochastic gradient descent algorithms
that underpin deep learning today: reinforce desirable behavior
and diminish undesirable behavior to obtain good settings
of the parameters in a neural network.
Biological inspiration is what gave *neural networks* their name.
For over a century (dating back to the models of Alexander Bain, 1873
and Charles Sherrington, 1890), researchers have tried to assemble
computational circuits that resemble networks of interacting neurons.
Over time, the interpretation of biology has become less literal
but the name stuck. At its heart lie a few key principles
that can be found in most networks today:
* The alternation of linear and nonlinear processing units, often referred to as *layers*.
* The use of the chain rule (also known as *backpropagation*) for adjusting parameters in the entire network at once.
After initial rapid progress, research in neural networks
languished from around 1995 until 2005.
This was due to a number of reasons.
Training a network is computationally very expensive.
While RAM was plentiful at the end of the past century,
computational power was scarce.
Second, datasets were relatively small.
In fact, Fisher's Iris dataset from 1936
was a popular tool for testing the efficacy of algorithms.
MNIST with its 60,000 handwritten digits was considered huge.
Given the scarcity of data and computation,
strong statistical tools such as Kernel Methods,
Decision Trees and Graphical Models proved empirically superior.
Unlike neural networks, they did not require weeks to train
and provided predictable results with strong theoretical guarantees.
## The Road to Deep Learning
Much of this changed with the ready availability of large amounts of data,
due to the World Wide Web, the advent of companies serving
hundreds of millions of users online, a dissemination of cheap,
high-quality sensors, cheap data storage (Kryder's law),
and cheap computation (Moore's law), in particular in the form of GPUs, originally engineered for computer gaming.
Suddenly algorithms and models that seemed computationally infeasible
became relevant (and vice versa).
This is best illustrated in :numref:`tab_intro_decade`.
:Dataset versus computer memory and computational power
|Decade|Dataset|Memory|Floating Point Calculations per Second|
|:--|:-|:-|:-|
|1970|100 (Iris)|1 KB|100 KF (Intel 8080)|
|1980|1 K (House prices in Boston)|100 KB|1 MF (Intel 80186)|
|1990|10 K (optical character recognition)|10 MB|10 MF (Intel 80486)|
|2000|10 M (web pages)|100 MB|1 GF (Intel Core)|
|2010|10 G (advertising)|1 GB|1 TF (Nvidia C2050)|
|2020|1 T (social network)|100 GB|1 PF (Nvidia DGX-2)|
:label:`tab_intro_decade`
It is evident that RAM has not kept pace with the growth in data.
At the same time, the increase in computational power
has outpaced that of the data available.
This means that statistical models needed to become more memory efficient
(this is typically achieved by adding nonlinearities)
while simultaneously being able to spend more time
on optimizing their parameters, due to an increased compute budget.
Consequently, the sweet spot in machine learning and statistics
moved from (generalized) linear models and kernel methods to deep networks.
This is also one of the reasons why many of the mainstays
of deep learning, such as multilayer perceptrons
:cite:`McCulloch.Pitts.1943`, convolutional neural networks
:cite:`LeCun.Bottou.Bengio.ea.1998`, Long Short-Term Memory
:cite:`Hochreiter.Schmidhuber.1997`,
and Q-Learning :cite:`Watkins.Dayan.1992`,
were essentially "rediscovered" in the past decade,
after laying comparatively dormant for considerable time.
The recent progress in statistical models, applications, and algorithms,
has sometimes been likened to the Cambrian Explosion:
a moment of rapid progress in the evolution of species.
Indeed, the state of the art is not just a mere consequence
of available resources, applied to decades old algorithms.
Note that the list below barely scratches the surface
of the ideas that have helped researchers achieve tremendous progress
over the past decade.
* Novel methods for capacity control, such as Dropout
:cite:`Srivastava.Hinton.Krizhevsky.ea.2014`
have helped to mitigate the danger of overfitting.
This was achieved by applying noise injection :cite:`Bishop.1995`
throughout the network, replacing weights by random variables
for training purposes.
* Attention mechanisms solved a second problem
that had plagued statistics for over a century:
how to increase the memory and complexity of a system without
increasing the number of learnable parameters.
:cite:`Bahdanau.Cho.Bengio.2014` found an elegant solution
by using what can only be viewed as a learnable pointer structure.
Rather than having to remember an entire sentence, e.g.,
for machine translation in a fixed-dimensional representation,
all that needed to be stored was a pointer to the intermediate state
of the translation process. This allowed for significantly
increased accuracy for long sentences, since the model
no longer needed to remember the entire sentence before
commencing the generation of a new sentence.
* Multi-stage designs, e.g., via the Memory Networks (MemNets)
:cite:`Sukhbaatar.Weston.Fergus.ea.2015` and the Neural Programmer-Interpreter :cite:`Reed.De-Freitas.2015`
allowed statistical modelers to describe iterative approaches to reasoning. These tools allow for an internal state of the deep network
to be modified repeatedly, thus carrying out subsequent steps
in a chain of reasoning, similar to how a processor
can modify memory for a computation.
* Another key development was the invention of GANs
:cite:`Goodfellow.Pouget-Abadie.Mirza.ea.2014`.
Traditionally, statistical methods for density estimation
and generative models focused on finding proper probability distributions
and (often approximate) algorithms for sampling from them.
As a result, these algorithms were largely limited by the lack of
flexibility inherent in the statistical models.
The crucial innovation in GANs was to replace the sampler
by an arbitrary algorithm with differentiable parameters.
These are then adjusted in such a way that the discriminator
(effectively a two-sample test) cannot distinguish fake from real data.
Through the ability to use arbitrary algorithms to generate data,
it opened up density estimation to a wide variety of techniques.
Examples of galloping zebras :cite:`Zhu.Park.Isola.ea.2017`
and of fake celebrity faces :cite:`Karras.Aila.Laine.ea.2017`
are both testimony to this progress.
Even amateur doodlers can produce
photorealistic images based on just sketches that describe
how the layout of a scene looks :cite:`Park.Liu.Wang.ea.2019`.
* In many cases, a single GPU is insufficient to process
the large amounts of data available for training.
Over the past decade the ability to build parallel
distributed training algorithms has improved significantly.
One of the key challenges in designing scalable algorithms
is that the workhorse of deep learning optimization,
stochastic gradient descent, relies on relatively
small minibatches of data to be processed.
At the same time, small batches limit the efficiency of GPUs.
Hence, training on 1024 GPUs with a minibatch size of,
say 32 images per batch amounts to an aggregate minibatch
of 32k images. Recent work, first by Li :cite:`Li.2017`,
and subsequently by :cite:`You.Gitman.Ginsburg.2017`
and :cite:`Jia.Song.He.ea.2018` pushed the size up to 64k observations,
reducing training time for ResNet50 on ImageNet to less than 7 minutes.
For comparison---initially training times were measured in the order of days.
* The ability to parallelize computation has also contributed quite crucially
to progress in reinforcement learning, at least whenever simulation is an
option. This has led to significant progress in computers achieving
superhuman performance in Go, Atari games, Starcraft, and in physics
simulations (e.g., using MuJoCo). See e.g.,
:cite:`Silver.Huang.Maddison.ea.2016` for a description
of how to achieve this in AlphaGo. In a nutshell,
reinforcement learning works best if plenty of (state, action, reward) triples are available, i.e., whenever it is possible to try out lots of things to learn how they relate to each
other. Simulation provides such an avenue.
* Deep Learning frameworks have played a crucial role
in disseminating ideas. The first generation of frameworks
allowing for easy modeling encompassed
[Caffe](https://github.com/BVLC/caffe),
[Torch](https://github.com/torch), and
[Theano](https://github.com/Theano/Theano).
Many seminal papers were written using these tools.
By now, they have been superseded by
[TensorFlow](https://github.com/tensorflow/tensorflow),
often used via its high-level API [Keras](https://github.com/keras-team/keras), as well as [CNTK](https://github.com/Microsoft/CNTK), [Caffe 2](https://github.com/caffe2/caffe2), and [Apache MXNet](https://github.com/apache/incubator-mxnet).
The third generation of tools, namely imperative tools for deep learning,
was arguably spearheaded by [Chainer](https://github.com/chainer/chainer),
which used a syntax similar to Python NumPy to describe models.
This idea was adopted by [PyTorch](https://github.com/pytorch/pytorch)
and the [Gluon API](https://github.com/apache/incubator-mxnet) of MXNet.
It is the latter group that this course uses to teach deep learning.
The division of labor between systems researchers building better tools
and statistical modelers building better networks
has greatly simplified things. For instance,
training a linear logistic regression model
used to be a nontrivial homework problem,
worthy to give to new machine learning
PhD students at Carnegie Mellon University in 2014.
By now, this task can be accomplished with less than 10 lines of code,
putting it firmly into the grasp of programmers.
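To back up the claim, here is logistic regression trained by gradient descent in a handful of lines of plain NumPy, on toy linearly separable data invented for the purpose:

```python
import numpy as np

# Toy two-class data: two Gaussian blobs with labels 0 and 1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)   # gradient of the logistic loss
    b -= 0.1 * (p - y).mean()

p = 1 / (1 + np.exp(-(X @ w + b)))
acc = float(((p > 0.5) == y).mean())
print(acc)
```

With a modern framework the loop above shrinks further still, to a model definition and a call to a built-in optimizer.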
## Success Stories
Artificial Intelligence has a long history of delivering results
that would be difficult to accomplish otherwise.
For instance, mail is sorted using optical character recognition.
These systems have been deployed since the 1990s
(they are, after all, the source of the famous MNIST and USPS sets of handwritten digits).
The same applies to reading checks for bank deposits and scoring
creditworthiness of applicants.
Financial transactions are checked for fraud automatically.
This forms the backbone of many e-commerce payment systems,
such as PayPal, Stripe, AliPay, WeChat, Apple, Visa, MasterCard.
Computer programs for chess have been competitive for decades.
Machine learning feeds search, recommendation, personalization
and ranking on the Internet. In other words, artificial intelligence
and machine learning are pervasive, albeit often hidden from sight.
It is only recently that AI has been in the limelight, mostly due to
solutions to problems that were considered intractable previously.
* Intelligent assistants, such as Apple's Siri, Amazon's Alexa, or Google's
assistant are able to answer spoken questions with a reasonable degree of
accuracy. This includes menial tasks, such as turning on light switches (a boon to the disabled), up to making barber's appointments and offering phone support dialogue. This is likely the most noticeable sign that AI is affecting our lives.
* A key ingredient in digital assistants is the ability to recognize speech
accurately. Gradually the accuracy of such systems has increased to the point
where they reach human parity :cite:`Xiong.Wu.Alleva.ea.2018` for certain
applications.
* Object recognition likewise has come a long way. Estimating the object in a
picture was a fairly challenging task in 2010. On the ImageNet benchmark,
:cite:`Lin.Lv.Zhu.ea.2010` achieved a top-5 error rate of 28%. By 2017,
:cite:`Hu.Shen.Sun.2018` reduced this error rate to 2.25%. Similarly, stunning
results have been achieved for identifying birds, or diagnosing skin cancer.
* Games used to be a bastion of human intelligence.
Starting from TDGammon [23], a program for playing Backgammon
using temporal difference (TD) reinforcement learning,
algorithmic and computational progress has led to algorithms
for a wide range of applications. Unlike Backgammon,
chess has a much more complex state space and set of actions.
Deep Blue beat Garry Kasparov :cite:`Campbell.Hoane-Jr.Hsu.2002`
using massive parallelism, special-purpose hardware,
and efficient search through the game tree.
Go is more difficult still, due to its huge state space.
AlphaGo reached human parity in 2015 :cite:`Silver.Huang.Maddison.ea.2016`, using deep learning combined with Monte Carlo tree sampling.
The challenge in Poker was that the state space is
large and it is not fully observed (we do not know the opponents'
cards). Libratus exceeded human performance in Poker using efficiently
structured strategies :cite:`Brown.Sandholm.2017`.
This illustrates the impressive progress in games
and the fact that advanced algorithms played a crucial part in them.
* Another indication of progress in AI is the advent of self-driving cars
and trucks. While full autonomy is not quite within reach yet,
excellent progress has been made in this direction,
with companies such as Tesla, NVIDIA,
and Waymo shipping products that enable at least partial autonomy.
What makes full autonomy so challenging is that proper driving
requires the ability to perceive, to reason and to incorporate rules
into a system. At present, deep learning is used primarily
in the computer vision aspect of these problems.
The rest is heavily tuned by engineers.
Again, the above list barely scratches the surface of where machine learning has impacted practical applications. For instance, robotics, logistics, computational biology, particle physics, and astronomy owe some of their most impressive recent advances at least in part to machine learning. ML is thus becoming a ubiquitous tool for engineers and scientists.
Frequently, the question of the AI apocalypse, or the AI singularity
has been raised in non-technical articles on AI.
The fear is that somehow machine learning systems
will become sentient and decide independently from their programmers
(and masters) about things that directly affect the livelihood of humans.
To some extent, AI already affects the livelihood of humans
in an immediate way---creditworthiness is assessed automatically,
autopilots mostly navigate vehicles, decisions about
whether to grant bail use statistical data as input.
More frivolously, we can ask Alexa to switch on the coffee machine.
Fortunately, we are far from a sentient AI system
that is ready to manipulate its human creators (or burn their coffee).
First, AI systems are engineered, trained and deployed in a specific,
goal-oriented manner. While their behavior might give the illusion
of general intelligence, it is a combination of rules, heuristics
and statistical models that underlies the design.
Second, at present tools for *artificial general intelligence*
simply do not exist that are able to improve themselves,
reason about themselves, and that are able to modify,
extend and improve their own architecture
while trying to solve general tasks.
A much more pressing concern is how AI is being used in our daily lives.
It is likely that many menial tasks fulfilled by truck drivers
and shop assistants can and will be automated.
Farm robots will likely reduce the cost for organic farming
but they will also automate harvesting operations.
This phase of the industrial revolution
may have profound consequences on large swaths of society
(truck drivers and shop assistants are some
of the most common jobs in many states).
Furthermore, statistical models, when applied without care,
can lead to racial, gender, or age bias and raise
reasonable concerns about procedural fairness
when automated to drive consequential decisions.
It is important to ensure that these algorithms are used with care.
With what we know today, this strikes us as a much more pressing concern
than the potential of malevolent superintelligence to destroy humanity.
## Summary
* Machine learning studies how computer systems can leverage *experience* (often data) to improve performance at specific tasks. It combines ideas from statistics, data mining, artificial intelligence, and optimization. Often, it is used as a means of implementing artificially-intelligent solutions.
* As a class of machine learning, representational learning focuses on how to automatically find the appropriate way to represent data. This is often accomplished by a progression of learned transformations.
* Much of the recent progress in deep learning has been triggered by an abundance of data arising from cheap sensors and Internet-scale applications, and by significant progress in computation, mostly through GPUs.
* Whole system optimization is a key component in obtaining good performance. The availability of efficient deep learning frameworks has made design and implementation of this significantly easier.
## Exercises
1. Which parts of code that you are currently writing could be "learned", i.e., improved by learning and automatically determining design choices that are made in your code? Does your code include heuristic design choices?
1. Which problems that you encounter have many examples for how to solve them, yet no specific way to automate them? These may be prime candidates for using deep learning.
1. Viewing the development of artificial intelligence as a new industrial revolution, what is the relationship between algorithms and data? Is it similar to steam engines and coal (what is the fundamental difference)?
1. Where else can you apply the end-to-end training approach? Physics? Engineering? Econometrics?
## [Discussions](https://discuss.mxnet.io/t/2310)

(2.1.0)=
# 2.1.0 Download ORACC JSON Files
Each public [ORACC](http://oracc.org) project has a `zip` file that contains a collection of JSON files, which provide data on lemmatizations, transliterations, catalog data, indexes, etc. The `zip` file can be found at `http://oracc.museum.upenn.edu/[PROJECT]/json/[PROJECT].zip`, where `[PROJECT]` is to be replaced with the project abbreviation. For sub-projects the address is `http://oracc.museum.upenn.edu/[PROJECT]/[SUBPROJECT]/json/[PROJECT]-[SUBPROJECT].zip`.
:::{note}
For instance http://oracc.museum.upenn.edu/etcsri/json/etcsri.zip or, for a subproject http://oracc.museum.upenn.edu/cams/gkab/json/cams-gkab.zip.
:::
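The address pattern above can be sketched in a few lines of Python. This is only an illustration; the helper name `zip_url` is ours, not part of any ORACC tooling:

```python
def zip_url(project, server="http://oracc.museum.upenn.edu"):
    # The URL path keeps the slashes of a sub-project ("cams/gkab"),
    # while the file name replaces them with hyphens ("cams-gkab.zip").
    file_name = project.replace("/", "-")
    return f"{server}/{project}/json/{file_name}.zip"

print(zip_url("etcsri"))     # http://oracc.museum.upenn.edu/etcsri/json/etcsri.zip
print(zip_url("cams/gkab"))  # http://oracc.museum.upenn.edu/cams/gkab/json/cams-gkab.zip
```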
One may download these files by hand (simply type the address in your browser) or use the code in the current notebook. The notebook will create a directory `jsonzip` and copy the file to that directory; all further scripts will expect the `zip` files to reside in `jsonzip`.
:::{note}
One may also use the function `oracc_download()` in the `utils` module. See below ([2.1.0.5](2.1.0.5)) for instructions on how to use the `utils` module.
:::
```{figure} ../images/mocci_banner.jpg
:scale: 50%
:figclass: margin
```
Some [ORACC](http://oracc.org) projects are maintained by the Munich Open-access Cuneiform Corpus Initiative ([MOCCI](https://www.en.ag.geschichte.uni-muenchen.de/research/mocci/index.html)). This includes, for example, Official Inscriptions of the Middle East in Antiquity ([OIMEA](http://oracc.org/oimea)), Archival Texts of the Middle East in Antiquity ([ATMEA](http://oracc.org/atmea)), and various other projects and sub-projects. In theory, project data are copied from the Munich server to the Philadelphia ORACC server (and vice versa), but in order to get the most recent data set it is sometimes advisable to request the `zip` files directly from the Munich server. The address is `http://oracc.ub.uni-muenchen.de/[PROJECT]/[SUBPROJECT]/json/[PROJECT]-[SUBPROJECT].zip`.
:::{note}
The function `oracc_download()` in the `utils` module will try the various servers to find the project(s) of your choice.
:::
After downloading the JSON `zip` file you may unzip it to inspect its contents but there is no necessity to do so. For larger projects unzipping may result in hundreds or even thousands of files; the scripts will always read the data directly from the `zip` file.
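As a minimal sketch of reading directly from the archive, the standard-library `zipfile` module can load a single JSON file without extracting anything. The tiny in-memory zip and the member name `etcsri/catalogue.json` below are stand-ins of our own making for a real downloaded file, so the member layout is illustrative only:

```python
import io
import json
import zipfile

# Build a tiny stand-in archive in memory; in practice this would be a
# downloaded file such as jsonzip/etcsri.zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("etcsri/catalogue.json", json.dumps({"members": {}}))

# Read the JSON directly from the zip - no need to unzip first.
with zipfile.ZipFile(buf) as z:
    catalogue = json.loads(z.read("etcsri/catalogue.json"))
print(catalogue)  # {'members': {}}
```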
## 2.1.0.0. Import Packages
* requests: for communicating with a server over the internet
* tqdm: for creating progress bars
* os: for basic Operating System operations (such as creating a directory)
* ipywidgets: for user interface (input project names to be downloaded)
```
import requests
from tqdm.auto import tqdm
import os
import ipywidgets as widgets
```
## 2.1.0.1. Create Download Directory
Create a directory called `jsonzip`. If the directory already exists, do nothing.
```
os.makedirs("jsonzip", exist_ok = True)
```
## 2.1.0.2 Input a List of Project Abbreviations
Enter one or more project abbreviations to download their JSON zip files. The project names are separated by commas. Note that sub-projects must be given explicitly; they are not automatically included with the main project. For instance:
* saao/saa01, aemw/alalakh/idrimi, rimanum
```
projects = widgets.Textarea(
    placeholder='Type project names, separated by commas',
    description='Projects:',
)
projects
```
## 2.1.0.3 Split the List of Projects
Lower case the list of projects and split it to create a list of project names.
```
project_list = projects.value.lower().split(',') # split at each comma and make a list called `project_list`
project_list = [project.strip() for project in project_list] # strip spaces left and right of each entry
```
## 2.1.0.4 Download the ZIP files
For larger projects (such as [DCCLT](http://oracc.org/dcclt)) the `zip` file may be 25 MB or more. Downloading may take some time and it may be necessary to chunk the downloading process. The `iter_content()` function in the `requests` library takes care of that.
In order to show a progress bar (with `tqdm`) we need to know how large the file to be downloaded is (this value is then fed to the `total` parameter). The HTTP protocol provides a key `content-length` in the headers (a dictionary) that indicates the file length. Not all servers provide this field; if `content-length` is not available it is set to 0. With a `total` value of 0, `tqdm` will still show a bar and count the number of chunks received, but it will not indicate the degree of progress.
```
CHUNK = 1024
for project in project_list:
    proj = project.replace('/', '-')
    url = f"http://oracc.museum.upenn.edu/json/{proj}.zip"
    file = f'jsonzip/{proj}.zip'
    with requests.get(url, stream=True) as request:
        if request.status_code == 200:  # meaning that the file exists
            total_size = int(request.headers.get('content-length', 0))
            tqdm.write(f'Saving {url} as {file}')
            t = tqdm(total=total_size, unit='B', unit_scale=True, desc=project)
            with open(file, 'wb') as f:
                for c in request.iter_content(chunk_size=CHUNK):
                    t.update(len(c))
                    f.write(c)
            t.close()  # close the progress bar once the download is complete
        else:
            tqdm.write(f"WARNING: {url} does not exist.")
```
(2.1.0.5)=
## 2.1.0.5 Downloading with the utils Module
In Chapters 3-6, downloading of [ORACC](http://oracc.org) data will be done with the `oracc_download()` function in the module `utils` that can be found in the `utils` directory. The following code illustrates how to use that function.
The function `oracc_download()` takes a list of project names as its first argument. Replace the line
```python
projects = ["dcclt", "saao/saa01"]
```
with the list of projects (and sub-projects) of your choice.
The second (optional) argument is `server`; possible values are "penn" (default; try the Penn server first) and "lmu" (try the Munich server first). The `oracc_download()` function returns a cleaned list of projects with duplicates and non-existing projects removed.
```
import os
import sys
util_dir = os.path.abspath('../utils') # When necessary, adapt the path to the utils directory.
sys.path.append(util_dir)
import utils
directories = ["jsonzip"]
os.makedirs("jsonzip", exist_ok = True)
projects = ["dcclt", "saao/saa01"] # or any list of ORACC projects
utils.oracc_download(projects, server="penn")
```
```
import numpy as np
import matplotlib.pyplot as plt

def f(theta, t):
    c1 = -0.2
    c2 = 0.2
    c3 = 1.2
    return np.exp(t * (theta - c1)**2) + np.exp(t * ((theta - c2)**2 + 0.1)) + np.exp(t * (theta - c3)**2)

def get_optimal_and_minimizer(t, iters, lr, theta_0=0.25):
    c1 = -0.2
    c2 = 0.2
    c3 = 1.2
    theta = theta_0
    for i in range(iters):
        Z = f(theta, t)
        g = (theta - c1) * np.exp(t*(theta - c1)**2) + (theta - c2) * np.exp(t*((theta - c2)**2 + 0.1)) + (theta - c3) * np.exp(t*(theta - c3)**2)
        theta = theta - lr/Z * g
    optimal_value = np.log(f(theta, t)/3)/t
    return theta, optimal_value

optimal_thetas = []
optimal_values = []
optimal_theta = 0.25
for t in np.logspace(-2, 2, 20, endpoint=True):
    optimal_theta, optimal_value = get_optimal_and_minimizer(t, 100000, 0.02, optimal_theta)
    optimal_thetas.append(optimal_theta)
    optimal_values.append(optimal_value)
    print(t, optimal_theta)
for t in -1*np.logspace(-2, 2, 20, endpoint=True):
    optimal_theta, optimal_value = get_optimal_and_minimizer(t, 100000, 0.02, optimal_theta-0.005)
    optimal_thetas.append(optimal_theta)
    optimal_values.append(optimal_value)
    print(t, optimal_theta)
%matplotlib inline
import matplotlib.pylab as pl
from matplotlib import rc
rc('text', usetex=True)

tot = 40
colors_positive = pl.cm.Reds(np.logspace(-1, 0, int(tot/2)))
colors_negative = pl.cm.Blues(np.logspace(-1, 0, int(tot/2)))
plt.figure(figsize=(4.5, 3.5))
ax = plt.subplot(1, 1, 1)
ts = np.concatenate((np.logspace(-2, 2, int(tot/2), endpoint=True), -1*np.logspace(-2, 2, int(tot/2), endpoint=True)))
for i in range(len(optimal_thetas)):
    y = []
    for x in np.arange(-1, 1.5, 0.005):
        y.append(np.log(f(x, ts[i])/3)/ts[i])
    ord_ = 1
    wid = 0.8
    if i < tot/2:
        col = colors_positive[i]
    elif i > tot/2:
        col = colors_negative[i-int(tot/2)]
    if i == 20 or i == 1 or i == 21:
        col = '#e377c2'
        ord_ = 2
        wid = 1.2
    plt.plot(np.arange(-1, 1.5, 0.005), np.array(y), label="t="+str(ts[i]), c=col, zorder=ord_, linewidth=wid)
    if i == 19 or i == 20 or i == 39:
        if i < tot/2:
            col = colors_positive[i]
        elif i > tot/2:
            col = colors_negative[i-int(tot/2)]
        print(optimal_thetas[i])
        plt.scatter(optimal_thetas[i], optimal_values[i], c=col, zorder=3, s=100, marker=(5, 2))
plt.scatter(0.125, 0.115, c='#ff7f0e', zorder=3, s=100, marker=(5, 2))
x = np.arange(-1, 0.125, 0.005)
plt.plot(x, (x - 0.2)**2+0.1, c='#ff7f0e', linewidth=1.2)
x = np.arange(0.125, 0.5, 0.005)
plt.plot(x, (x + 0.2)**2, c='#ff7f0e', linewidth=1.2)
x = np.arange(0.5, 0.65, 0.005)
plt.plot(x, (x - 1.2)**2, c='#ff7f0e', linewidth=1.2)
x = np.arange(0.65, 1.75, 0.005)
plt.plot(x, (x - 0.2)**2+0.1, c='#ff7f0e', linewidth=1.2)
c1 = pl.cm.Reds(np.linspace(-0.4, 0.8, 12))
c2 = pl.cm.Blues(np.linspace(0, 1, 11))
for idx, x in enumerate(np.arange(0.38, 0.5, 0.01)):
    plt.scatter(x, -0.3, c=c1[idx])
for idx, x in enumerate(np.arange(-0.2, 0.35, 0.06)):
    print(10-idx-1)
    plt.scatter(x, -0.3, c=c2[11-idx-1])
plt.scatter(0.39, -0.3, c='#e377c2')
plt.scatter(0.125, -0.3, c='#ff7f0e', zorder=3)
ax.tick_params(color='#dddddd')
ax.spines['bottom'].set_color('#dddddd')
ax.spines['top'].set_color('#dddddd')
ax.spines['right'].set_color('#dddddd')
ax.spines['left'].set_color('#dddddd')
plt.xticks(np.arange(-0.75, 1.75, 0.25))
plt.yticks(np.arange(0, 2.5, 0.5))
plt.ylim(-0.35, 1.45)
plt.xlim(-0.75, 1.5)
plt.ylabel(r'$\widetilde{R}(t; \theta)$', fontsize=15)
plt.xlabel(r'$\theta$', fontsize=17)
plt.title(r'$f_1(\theta)= (\theta + 0.2)^2, f_2(\theta) = (\theta - 0.2)^2 + 0.1, f_3(\theta) = (\theta - 1.2)^2$', fontsize=10)
plt.tight_layout()
plt.savefig("function.pdf")
from scipy.stats import entropy

ITER = 10000
ITER2 = 500000
t_LR_pairs = [(1, 0.7), (5, 0.3), (20, 0.15), (100, 0.05), (500, 0.01), (0.0001, 1), (-0.000001, 1), (-1, 1.3), (-5, 1.5), (-20, 1)]
theta_sol = {}
value_sol = {}
optimal_theta_2 = {}
optimal_value_2 = {}
gradient_weights_entropy = {}
worst_loss = {}
avg_loss = {}
for t_lr_pair in t_LR_pairs:
    print(t_lr_pair)
    t = t_lr_pair[0]
    LR = t_lr_pair[1]
    #theta = 10
    theta = 0.2879294984694842
    theta_sol[t_lr_pair] = []
    for i in range(ITER2):
        loss1 = (theta + 0.2)**2
        loss2 = (theta - 0.2)**2 + 0.1
        loss3 = (theta - 1.2)**2
        min_l = min([loss1, loss2, loss3])
        Z = np.exp(t*(loss1-min_l)) + np.exp(t*(loss2-min_l)) + np.exp(t*(loss3-min_l))
        # gradient term for loss3 = (theta - 1.2)**2 uses (theta - 1.2)
        theta = theta - LR/Z * ((theta + 0.2) * np.exp(t*(loss1-min_l)) + (theta - 0.2) * np.exp(t*(loss2-min_l)) + (theta - 1.2) * np.exp(t*(loss3-min_l)))
    optimal_theta_2[t_lr_pair] = theta
    theta = theta - 0.1
    for i in range(ITER):
        theta_sol[t_lr_pair].append(theta)
        loss1 = (theta + 0.2)**2
        loss2 = (theta - 0.2)**2 + 0.1
        loss3 = (theta - 1.2)**2
        min_l = min([loss1, loss2, loss3])
        Z = np.exp(t*(loss1-min_l)) + np.exp(t*(loss2-min_l)) + np.exp(t*(loss3-min_l))
        theta = theta - LR/Z * ((theta + 0.2) * np.exp(t*(loss1-min_l)) + (theta - 0.2) * np.exp(t*(loss2-min_l)) + (theta - 1.2) * np.exp(t*(loss3-min_l)))
print(optimal_theta_2)
import matplotlib.pylab as pl
from matplotlib import rc
rc('text', usetex=True)

plt.figure(figsize=(4, 3))
ax = plt.subplot(1, 1, 1)
i = 0
colors_positive = pl.cm.Reds(np.logspace(-1, 0.1, 6))
colors_negative = pl.cm.Blues(np.logspace(-0.4, 0, 4))
plt.yscale("log")
for idx, t_lr_pair in enumerate(t_LR_pairs):
    t = t_lr_pair[0]
    LR = t_lr_pair[1]
    print(t, LR)
    s = str(t)
    ss = theta_sol[t_lr_pair]
    if t > 0:
        color_ = colors_positive[idx]
    else:
        color_ = colors_negative[idx-6]
    if t == 0.0001:
        color_ = '#e377c2'
    plt.plot(np.abs(ss - optimal_theta_2[t_lr_pair]), label="t="+s+", lr="+str(LR), c=color_)
    print(optimal_theta_2[t_lr_pair], ss[:10])
    i += 1
ax.tick_params(color='#dddddd')
ax.spines['bottom'].set_color('#dddddd')
ax.spines['top'].set_color('#dddddd')
ax.spines['right'].set_color('#dddddd')
ax.spines['left'].set_color('#dddddd')
plt.xlim(-1, 90)
plt.ylim(1e-14, 5 * 1e-1)
plt.title(r"convergence with $t$", fontsize=15)
plt.xlabel("\# iters", fontsize=13)
plt.ylabel(r"$|\widetilde{R}(t, \theta) - \widetilde{R}(t, \breve{\theta}(t))|$", fontsize=13)
plt.tight_layout()
plt.savefig("convergence.pdf")
for t, val in gradient_weights_entropy.items():
    print(t, val[-1])
from scipy.stats import entropy

theta = 0.25
LRs = np.ones(60) * 0.01
ITER = 10000
ITER2 = 200000
ts = np.concatenate((np.logspace(-1.5, 3, 30, endpoint=True), -1*np.logspace(-2, 1.5, 30, endpoint=True)))
theta_sol = {}
value_sol = {}
optimal_theta_2 = {}
optimal_value_2 = {}
gradient_weights_entropy = {}
worst_loss = {}
min_loss = {}
avg_loss = {}
for idx, t in enumerate(ts):
    print(t)
    theta = 0.25
    theta_sol[str(t)] = []
    value_sol[str(t)] = []
    avg_loss[str(t)] = []
    worst_loss[str(t)] = []
    gradient_weights_entropy[str(t)] = []
    for i in range(ITER):
        Z = np.exp(t*(theta**2 - 2*theta)) + np.exp(t*theta**2)
        gradient_weights_entropy[str(t)].append(np.var([theta**2 - 2*theta, theta**2]))
        losses = np.array([theta**2, theta**2 - 2*theta])
        avg_loss[str(t)].append(np.mean(losses))
        worst_loss[str(t)].append(max(losses))
        theta = theta - LRs[idx]/Z * ((theta - 1) * np.exp(t*(theta**2 - 2*theta)) + theta * np.exp(t*theta**2))
        theta_sol[str(t)].append(theta)
        value_sol[str(t)].append(np.log(0.5 * (np.exp(t * (theta**2 - 2*theta)) + np.exp(t * theta**2)))/t)
    for i in range(ITER2):
        Z = np.exp(t*(theta**2 - 2*theta)) + np.exp(t*theta**2)
        theta = theta - LRs[idx]/Z * ((theta - 1) * np.exp(t*(theta**2 - 2*theta)) + theta * np.exp(t*theta**2))
    optimal_theta_2[str(t)] = theta
    optimal_value_2[str(t)] = np.log(0.5 * (np.exp(t * (theta**2 - 2*theta)) + np.exp(t * theta**2)))/t
print(optimal_theta_2)
#print(optimal_value_2)
import matplotlib.pylab as pl
from matplotlib import rc
rc('text', usetex=True)

colors_positive = pl.cm.Reds(np.logspace(-1, 0, 30))
colors_negative = pl.cm.Blues(np.logspace(-1, 0.15, 30))
plt.figure(figsize=(4.5, 3.5))
ax = plt.subplot(1, 1, 1)
avg_l = []
worst_l = []
num_ts = len(avg_loss.keys())
ts = avg_loss.keys()
i = 0
for t in ts:
    avg = avg_loss[t][-1]
    print(t, avg)
    worst = worst_loss[t][-1]
    avg_l.append(avg)
    worst_l.append(worst)
    if i < 30:
        col = colors_positive[i]
    else:
        col = colors_negative[i-30]
    plt.scatter(worst, avg, c=col)
    i += 1
plt.ylabel("average loss", fontsize=15)
plt.xlabel("worst loss", fontsize=15)
plt.title("average vs. worst loss tradeoffs", fontsize=15)
ax.tick_params(color='#dddddd')
ax.spines['bottom'].set_color('#dddddd')
ax.spines['top'].set_color('#dddddd')
ax.spines['right'].set_color('#dddddd')
ax.spines['left'].set_color('#dddddd')
plt.tight_layout()
plt.savefig("loss_tradeoff.pdf")
```
```
import pandas as pd
method_raw_text = pd.read_excel('sents_df.xlsx')
```
# Knowledge Related Sentences in Reviews - Co-Word Network
```
# replace all newlines from dataframe
method_raw_text = method_raw_text.replace('\n', '', regex=True)
method_raw_text = method_raw_text.dropna()

import re
for line in method_raw_text['sents']:
    substring = re.sub(r'[^\w\s]', '', str(line))
    #substring = ''.join([i for i in str(substring) if not i.isdigit()])
    substring = str(substring).lower().replace("$", "").replace("/", "")
    method_raw_text['sents'] = method_raw_text['sents'].replace(line, substring)
def text_preprocessor(text_column, column_name):
    import re, string
    from nltk import word_tokenize, pos_tag
    from nltk.corpus import stopwords
    from stop_words import get_stop_words
    from nltk.stem import WordNetLemmatizer
    stop_words = list(get_stop_words('en'))        # About 900 stopwords
    nltk_words = list(stopwords.words('english'))  # About 150 stopwords
    stop_words.extend(nltk_words)
    remove_words = ['payment', 'price', 'debit', 'card', 'account', 'tbh', 'subscription', 'app', 'application', 'youtube', 'username', 'password', 'yr', 'ur', 'id', 'isnt', 'wouldnt', 'doesnt', 'accountyou', 'im', 'thats', 'logins', 'wont', 'didnt', 'ive', 'ill', 'youre']
    stop_words.extend(remove_words)
    lemmatizer = WordNetLemmatizer()
    globals()[column_name+'_filtered_sent'] = []
    for line in text_column:
        word_tokens = word_tokenize(re.sub('[%s]' % re.escape(string.punctuation), '', str(line)))
        nouns = [token for token, pos in pos_tag(word_tokens) if pos.startswith('NN')]
        adjectives = [token for token, pos in pos_tag(word_tokens) if pos.startswith('JJ')]
        lemmatized = []
        for noun in nouns:
            lemmatized.append(lemmatizer.lemmatize(noun))
        for adjective in adjectives:
            lemmatized.append(lemmatizer.lemmatize(adjective))
        filtered_sentence = [w for w in lemmatized if not w in stop_words]
        globals()[column_name+'_filtered_sent'].append(filtered_sentence)

text_preprocessor(method_raw_text['sents'], 'reviews')
import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer

class Cooccurrence(CountVectorizer):
    """Co-occurrence matrix

    Convert a collection of raw documents to a word-word co-occurrence matrix.

    Parameters
    ----------
    encoding : string, 'utf-8' by default.
        If bytes or files are given to analyze, this encoding is used to
        decode.
    ngram_range : tuple (min_n, max_n)
        The lower and upper boundary of the range of n-values for different
        n-grams to be extracted. All values of n such that min_n <= n <= max_n
        will be used.
    max_df : float in range [0, 1] or int, default=1.0
    min_df : float in range [0, 1] or int, default=1

    Example
    -------
    >>> import Cooccurrence
    >>> docs = ['this book is good',
    ...         'this cat is good',
    ...         'cat is good shit']
    >>> model = Cooccurrence()
    >>> Xc = model.fit_transform(docs)

    Check the vocabulary by printing ``model.vocabulary_``.
    """
    def __init__(self, encoding='utf-8', ngram_range=(1, 1),
                 max_df=1.0, min_df=1, max_features=None,
                 stop_words=None, normalize=True, vocabulary=None):
        super(Cooccurrence, self).__init__(
            ngram_range=ngram_range,
            max_df=max_df,
            min_df=min_df,
            max_features=max_features,
            stop_words=stop_words,
            vocabulary=vocabulary
        )
        self.X = None
        self.normalize = normalize

    def fit_transform(self, raw_documents, y=None):
        """Fit co-occurrence matrix

        Parameters
        ----------
        raw_documents : iterable
            an iterable which yields either str, unicode or file objects

        Returns
        -------
        Xc : co-occurrence matrix
        """
        X = super(Cooccurrence, self).fit_transform(raw_documents)
        self.X = X
        n_samples, n_features = X.shape
        Xc = (X.T * X)
        if self.normalize:
            g = sp.diags(1./Xc.diagonal())
            Xc = g * Xc
        else:
            Xc.setdiag(0)
        return Xc

    def vocab(self):
        tuples = super(Cooccurrence, self).get_feature_names()
        vocabulary = []
        for e_tuple in tuples:
            tokens = e_tuple.split()
            for t in tokens:
                if t not in vocabulary:
                    vocabulary.append(t)
        return vocabulary

    def word_histgram(self):
        word_list = super(Cooccurrence, self).get_feature_names()
        count_list = self.X.toarray().sum(axis=0)
        return dict(zip(word_list, count_list))
from collections import Counter
from nltk import bigrams
from collections import defaultdict
import operator
import numpy as np

class BaseCooccurrence:
    INPUT = [list, str]
    OUTPUT = [list, tuple]

class CooccurrenceWorker(BaseCooccurrence):
    def __init__(self):
        name = 'cooccurrence'
        self.inst = Cooccurrence(ngram_range=(2, 2), stop_words='english')

    def __call__(self, *args, **kwargs):
        # bigram_vectorizer = CountVectorizer(ngram_range=(1, 2), vocabulary={'awesome unicorns': 0, 'batman forever': 1})
        co_occurrences = self.inst.fit_transform(args[0])
        # print('Printing sparse matrix:', co_occurrences)
        # print(co_occurrences.todense())
        sum_occ = np.sum(co_occurrences.todense(), axis=0)
        # print('Sum of word-word occurrences:', sum_occ)
        # Converting iterator to set
        result = zip(self.inst.get_feature_names(), np.array(sum_occ)[0].tolist())
        result_set = list(result)
        return result_set, self.inst.vocab()

class CooccurrenceManager:
    def computeCooccurence(self, list):
        com = defaultdict(lambda: defaultdict(int))
        count_all = Counter()
        count_all1 = Counter()
        uniqueList = []
        for _array in list:
            for line in _array:
                for word in line:
                    if word not in uniqueList:
                        uniqueList.append(word)
                terms_bigram = bigrams(line)
                # Update the counter
                count_all.update(line)
                count_all1.update(terms_bigram)
                # Build co-occurrence matrix
                for i in range(len(line) - 1):
                    for j in range(i + 1, len(line)):
                        w1, w2 = sorted([line[i], line[j]])
                        if w1 != w2:
                            com[w1][w2] += 1
        com_max = []
        # For each term, look for the most common co-occurrent terms
        for t1 in com:
            t1_max_terms = sorted(com[t1].items(), key=operator.itemgetter(1), reverse=True)[:5]
            for t2, t2_count in t1_max_terms:
                com_max.append(((t1, t2), t2_count))
        # Get the most frequent co-occurrences
        terms_max = sorted(com_max, key=operator.itemgetter(1), reverse=True)
        return terms_max, uniqueList
co = CooccurrenceWorker()
documents = []
for sublist in reviews_filtered_sent:
    document = ",".join(sublist)
    documents.append(document)
#import itertools
#merged = list(itertools.chain(*mecab_nouns))
co_result, vocab = co.__call__(documents)
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.font_manager as fm
import platform
from matplotlib.ft2font import FT2Font
import matplotlib as mpl

class GraphMLCreator:
    def __init__(self):
        self.G = nx.Graph()
        # Hack: offset the most central node to avoid too much overlap
        self.rad0 = 0.3

    def createGraphML(self, co_occurrence, word_hist, vocabulary, file):
        G = nx.Graph()
        for obj in vocabulary:
            G.add_node(obj)
        # convert list to a single dictionary
        for pair in co_occurrence:
            node1 = ''
            node2 = ''
            for inner_pair in pair:
                if type(inner_pair) is tuple:
                    node1 = inner_pair[0]
                    node2 = inner_pair[1]
                elif type(inner_pair) is str:
                    inner_pair = inner_pair.split()
                    node1 = inner_pair[0]
                    node2 = inner_pair[1]
                elif type(inner_pair) is int:
                    G.add_edge(node1, node2, weight=float(inner_pair))
                elif type(inner_pair) is float:
                    G.add_edge(node1, node2, weight=float(inner_pair))
        for word in word_hist:
            G.add_node(word, count=word_hist[word])
        self.G = G
        print(self.G.number_of_nodes())
        nx.write_graphml(G, file)

    def createGraphMLWithThreshold(self, co_occurrence, word_hist, vocab, file, threshold=10.0):
        G = nx.Graph()
        filtered_word_list = []
        for pair in co_occurrence:
            node1 = ''
            node2 = ''
            for inner_pair in pair:
                if type(inner_pair) is tuple:
                    node1 = inner_pair[0]
                    node2 = inner_pair[1]
                elif type(inner_pair) is str:
                    inner_pair = inner_pair.split()
                    node1 = inner_pair[0]
                    node2 = inner_pair[1]
                elif type(inner_pair) is int:
                    if float(inner_pair) >= threshold:
                        G.add_edge(node1, node2, weight=float(inner_pair))
                        if node1 not in filtered_word_list:
                            filtered_word_list.append(node1)
                        if node2 not in filtered_word_list:
                            filtered_word_list.append(node2)
                elif type(inner_pair) is float:
                    if float(inner_pair) >= threshold:
                        G.add_edge(node1, node2, weight=float(inner_pair))
                        if node1 not in filtered_word_list:
                            filtered_word_list.append(node1)
                        if node2 not in filtered_word_list:
                            filtered_word_list.append(node2)
        for word in word_hist:
            if word in filtered_word_list:
                G.add_node(word, count=word_hist[word])
        self.G = G
        print(self.G.number_of_nodes())
        nx.write_graphml(G, file)
    def centrality_layout(self):
        """Compute a layout based on centrality."""
        centrality = nx.eigenvector_centrality_numpy(self.G)
        # Create a list of centralities, sorted by centrality value
        cent = sorted(centrality.items(), key=lambda x: float(x[1]), reverse=True)
        nodes = [c[0] for c in cent]
        cent = np.array([float(c[1]) for c in cent])
        rad = (cent - cent[0])/(cent[-1] - cent[0])
        rad = self.rescale_arr(rad, self.rad0, 1)
        angles = np.linspace(0, 2*np.pi, len(centrality))
        layout = {}
        for n, node in enumerate(nodes):
            r = rad[n]
            th = angles[n]
            layout[node] = r*np.cos(th), r*np.sin(th)
        return layout
    def plot_graph(self, title=None, file='graph.png'):
        """Conveniently summarize the graph visually."""
        # _rebuild is a private matplotlib API (removed in newer versions);
        # it forces the font cache to pick up the font below.
        from matplotlib.font_manager import _rebuild
        _rebuild()
        font_path = '/System/Library/Fonts/Supplemental/AppleGothic.ttf'
        font_name = fm.FontProperties(fname=font_path).get_name()
        plt.rc('font', family=font_name)
        plt.rc('axes', unicode_minus=False)
        # work around minus-sign glyphs breaking with this font
        mpl.rcParams['axes.unicode_minus'] = False
        #print('version: ', mpl.__version__)
        #print('install location: ', mpl.__file__)
        #print('config location: ', mpl.get_configdir())
        #print('cache location: ', mpl.get_cachedir())
        # size, family
        print('# currently configured font size')
        print(plt.rcParams['font.size'])
        print('# currently configured font family')
        print(plt.rcParams['font.family'])
        fig = plt.figure(figsize=(8, 8))
        pos = self.centrality_layout()
        # config parameters
        edge_min_width = 3
        edge_max_width = 12
        label_font = 18
        node_font = 22
        node_alpha = 0.4
        edge_alpha = 0.55
        edge_cmap = plt.cm.Spectral
        # Create figure
        if fig is None:
            fig, ax = plt.subplots()
        else:
            ax = fig.add_subplot(111)
        fig.subplots_adjust(0, 0, 1)
        font = FT2Font(font_path)
        # Plot nodes with size according to count
        sizes = []
        degrees = []
        for n, d in self.G.nodes(data=True):
            sizes.append(d['count'])
            degrees.append(self.G.degree(n))
        sizes = self.rescale_arr(np.array(sizes, dtype=float), 100, 1000)
        # Compute layout and label edges according to weight
        pos = nx.spectral_layout(self.G) if pos is None else pos
        labels = {}
        width = []
        for n1, n2, d in self.G.edges(data=True):
            w = d['weight']
            labels[n1, n2] = w
            width.append(w)
        width = self.rescale_arr(np.array(width, dtype=float), edge_min_width,
                                 edge_max_width)
        # Draw
        nx.draw_networkx_nodes(self.G, pos, node_size=sizes, node_color=degrees,
                               alpha=node_alpha)
        nx.draw_networkx_edges(self.G, pos, width=width, edge_color=width,
                               edge_cmap=edge_cmap, alpha=edge_alpha)
        #nx.draw_networkx_edge_labels(self.G, pos, edge_labels=labels,
        #                             font_size=label_font)
        nx.draw_networkx_labels(self.G, pos, font_size=node_font, font_family=font_name, font_weight='bold')
        if title is not None:
            ax.set_title(title, fontsize=label_font)
        ax.set_xticks([])
        ax.set_yticks([])
        # Mark centrality axes
        kw = dict(color='k', linestyle='-')
        cross = [ax.axhline(0, **kw), ax.axvline(self.rad0, **kw)]
        [l.set_zorder(0) for l in cross]
        plt.savefig(file)
        plt.show()
    def rescale_arr(self, arr, amin, amax):
        """Rescale an array to a new range.

        Return a new array whose range of values is (amin, amax).

        Parameters
        ----------
        arr : array-like
        amin : float
            new minimum value
        amax : float
            new maximum value

        Examples
        --------
        >>> a = np.arange(5)
        >>> rescale_arr(a, 3, 6)
        array([ 3.  ,  3.75,  4.5 ,  5.25,  6.  ])
        """
        # old bounds
        m = arr.min()
        M = arr.max()
        # scale/offset
        s = float(amax - amin) / (M - m)
        d = amin - s * m
        # Apply clip before returning to cut off possible overflows outside the
        # intended range due to roundoff error, so that we can absolutely guarantee
        # that on output, there are no values > amax or < amin.
        return np.clip(s * arr + d, amin, amax)
def summarize_centrality(self, limit=10):
centrality = nx.eigenvector_centrality_numpy(self.G)
c = centrality.items()
c = sorted(c, key=lambda x: x[1], reverse=True)
print('\nGraph centrality')
count = 0
for node, cent in c:
if count >= limit:
break
print("%15s: %.3g" % (node, float(cent)))
count += 1
def sort_freqs(self, freqs):
"""Sort a word frequency histogram represented as a dictionary.
Parameters
----------
freqs : dict
A dict with string keys and integer values.
Return
------
items : list
A list of (word, count) pairs, sorted by count in ascending order.
"""
# dict.items() returns a view in Python 3, which has no .sort(); use sorted()
items = sorted(freqs.items(), key=lambda wc: wc[1])
return items
def plot_word_histogram(self, freqs, show=10, title=None):
"""Plot a histogram of word frequencies, limited to the top `show` ones.
"""
sorted_f = self.sort_freqs(freqs) if isinstance(freqs, dict) else freqs
# Don't show the tail
if isinstance(show, int):
# interpret as number of words to show in histogram
show_f = sorted_f[-show:]
else:
# interpret as a fraction
start = -int(round(show * len(freqs)))
show_f = sorted_f[start:]
# Now, extract words and counts, plot
n_words = len(show_f)
ind = np.arange(n_words)
words = [i[0] for i in show_f]
counts = [i[1] for i in show_f]
fig = plt.figure()
ax = fig.add_subplot(111)
if n_words <= 20:
# Only show bars and x labels for small histograms, they don't make
# sense otherwise
ax.bar(ind, counts)
ax.set_xticks(ind)
ax.set_xticklabels(words, rotation=45)
fig.subplots_adjust(bottom=0.25)
else:
# For larger ones, do a step plot
ax.step(ind, counts)
# If it spans more than two decades, use a log scale
if float(max(counts)) / min(counts) > 100:
ax.set_yscale('log')
if title:
ax.set_title(title)
return ax
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv_fit = cv.fit_transform(documents)
word_list = cv.get_feature_names()
count_list = cv_fit.toarray().sum(axis=0)
word_hist = dict(zip(word_list, count_list))
graph_builder = GraphMLCreator()
graph_builder.createGraphMLWithThreshold(co_result, word_hist, vocab, file="info_sents_coword_network_18.graphml", threshold=18.0)
#graph_builder.plot_graph(file="if_app_review_coword_network.pdf")
```
| github_jupyter |
Ninth exercise: non-Cartesian MR image reconstruction
=============================================
In this tutorial we will reconstruct an MRI image from radial undersampled kspace measurements. Let us denote $\Omega$ the undersampling mask, the under-sampled Fourier transform now reads $F_{\Omega}$.
Import neuroimaging data
--------------------------------------
We use the toy datasets available in pysap, more specifically a 2D brain slice and the radial under-sampling scheme. We compare zero-order image reconstruction with compressed-sensing reconstructions (analysis vs. synthesis formulation), using the FISTA algorithm for the synthesis formulation and the Condat-Vu algorithm for the analysis formulation.
We remind that the synthesis formulation reads (minimization in the sparsifying domain):
$$
\widehat{z} = \text{arg}\,\min_{z\in C^n_\Psi} \frac{1}{2} \|y - F_\Omega \Psi^*z \|_2^2 + \lambda \|z\|_1
$$
and the image solution is given by $\widehat{x} = \Psi^*\widehat{z}$. For an orthonormal wavelet transform,
we have $n_\Psi=n$ while for a frame we may have $n_\Psi > n$.
while the analysis formulation consists in minimizing the following cost function (min. in the image domain):
$$
\widehat{x} = \text{arg}\,\min_{x\in C^n} \frac{1}{2} \|y - F_\Omega x\|_2^2 + \lambda \|\Psi x\|_1 \,.
$$
- Author: Chaithya G R & Philippe Ciuciu
- Date: 01/06/2021
- Target: ATSI MSc students, Paris-Saclay University
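The $\lambda \|\cdot\|_1$ penalty in both formulations is handled through its proximal operator, soft-thresholding. As a rough standalone sketch of that operator (plain NumPy, independent of the pysap/modopt implementations used below):

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of lam * ||z||_1:
    # shrink every coefficient toward zero by lam, zeroing the small ones.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

This is the same shrinkage that a soft-thresholding regularizer applies to the wavelet coefficients at every iteration of the algorithms below.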
```
# Package import
from mri.operators import NonCartesianFFT, WaveletN, WaveletUD2
from mri.operators.utils import convert_locations_to_mask, \
gridded_inverse_fourier_transform_nd
from mri.reconstructors import SingleChannelReconstructor
import pysap
from pysap.data import get_sample_data
# Third party import
from modopt.math.metrics import ssim
from modopt.opt.linear import Identity
from modopt.opt.proximity import SparseThreshold
import numpy as np
import matplotlib.pyplot as plt
```
Loading input data
---------------------------
```
image = get_sample_data('2d-mri')
radial_mask = get_sample_data("mri-radial-samples")
kspace_loc = radial_mask.data
mask = pysap.Image(data=convert_locations_to_mask(kspace_loc, image.shape))
plt.figure()
plt.imshow(image, cmap='gray')
plt.figure()
plt.imshow(mask, cmap='gray')
plt.show()
```
Generate the kspace
-------------------
From the 2D brain slice and the radial sample locations loaded above, we
retrospectively undersample the k-space. We then reconstruct the zero-order
solution as a baseline.
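For intuition, here is a toy sketch of retrospective undersampling in the simpler Cartesian setting (synthetic data and a random mask, not the radial pipeline used in this notebook):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))   # stand-in for an image
mask = rng.random((8, 8)) < 0.3     # keep roughly 30% of k-space
kspace = np.fft.fft2(img)           # fully sampled k-space
kspace_obs = kspace[mask]           # the retained measurements
```

The non-Cartesian case below replaces the boolean mask with explicit sample locations and the FFT with a non-uniform Fourier operator.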
```
fourier_op = NonCartesianFFT(samples=kspace_loc, shape=image.shape,
implementation='cpu')
kspace_obs = fourier_op.op(image.data)
```
Gridded solution
```
grid_space = np.linspace(-0.5, 0.5, num=image.shape[0])
grid2D = np.meshgrid(grid_space, grid_space)
grid_soln = gridded_inverse_fourier_transform_nd(kspace_loc, kspace_obs,
tuple(grid2D), 'linear')
plt.imshow(np.abs(grid_soln), cmap='gray')
# Calculate SSIM
base_ssim = ssim(grid_soln, image)
plt.title('Gridded Solution\nSSIM = ' + str(base_ssim))
plt.show()
```
Setting up the operators
------------------------
We first define the sparsifying wavelet transform and the soft-thresholding
regularizer used by the reconstructions below.
```
linear_op = WaveletN(wavelet_name="sym8", nb_scales=4)
regularizer_op = SparseThreshold(Identity(), 6 * 1e-7, thresh_type="soft")
```
We then build the reconstructor from these operators:
```
reconstructor = SingleChannelReconstructor(
fourier_op=fourier_op,
linear_op=linear_op,
regularizer_op=regularizer_op,
gradient_formulation='synthesis',
verbose=1,
)
```
Synthesis formulation: FISTA optimization
------------------------------------------------------------
We now want to refine the zero order solution using a FISTA optimization.
The cost function is set to Proximity Cost + Gradient Cost
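For reference, a FISTA iteration combines a gradient step on the data-fidelity term with the proximal (soft-thresholding) step and a momentum update. A minimal generic sketch (hypothetical helper arguments, not modopt's API):

```python
import numpy as np

def fista(grad_f, prox_g, L, z0, n_iter=100):
    # Minimize f(z) + g(z): gradient step on f, proximal step on g,
    # with Nesterov-style momentum. L is a Lipschitz constant of grad_f.
    z, x_old, t = z0.copy(), z0.copy(), 1.0
    for _ in range(n_iter):
        x = prox_g(z - grad_f(z) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        z = x + (t - 1.0) / t_new * (x - x_old)
        x_old, t = x, t_new
    return x_old
```

In the synthesis formulation, `grad_f` would be the gradient of the k-space data fidelity and `prox_g` the soft-thresholding of the wavelet coefficients.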
```
x_final, costs, metrics = reconstructor.reconstruct(
kspace_data=kspace_obs,
optimization_alg='fista',
num_iterations=200,
)
image_rec = pysap.Image(data=np.abs(x_final))
recon_ssim = ssim(image_rec, image)
plt.imshow(np.abs(image_rec), cmap='gray')
plt.title('FISTA Reconstruction\nSSIM = ' + str(recon_ssim))
plt.show()
```
## POGM reconstruction
```
x_final, costs, metrics = reconstructor.reconstruct(
kspace_data=kspace_obs,
optimization_alg='pogm',
num_iterations=200,
)
image_rec = pysap.Image(data=np.abs(x_final))
recon_ssim = ssim(image_rec, image)
plt.imshow(np.abs(image_rec), cmap='gray')
plt.title('POGM Reconstruction\nSSIM = ' + str(recon_ssim))
plt.show()
```
Analysis formulation: Condat-Vu reconstruction
---------------------------------------------------------------------
```
#linear_op = WaveletN(wavelet_name="sym8", nb_scales=4)
linear_op = WaveletUD2(
wavelet_id=24,
nb_scale=4,
)
reconstructor = SingleChannelReconstructor(
fourier_op=fourier_op,
linear_op=linear_op,
regularizer_op=regularizer_op,
gradient_formulation='analysis',
verbose=1,
)
x_final, costs, metrics = reconstructor.reconstruct(
kspace_data=kspace_obs,
optimization_alg='condatvu',
num_iterations=200,
)
image_rec = pysap.Image(data=np.abs(x_final))
plt.imshow(np.abs(image_rec), cmap='gray')
recon_ssim = ssim(image_rec, image)
plt.title('Condat-Vu Reconstruction\nSSIM = ' + str(recon_ssim))
plt.show()
```
## Job Description to Resume Comparator - FreqDist
This program compares the words found in a job description to the words in a resume. The current version compares all words and gives a naive percentage match.
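The naive percentage match is essentially a set intersection over distinct terms. A rough standalone sketch (hypothetical word lists, not the DataFrame-based version below):

```python
def naive_match(resume_words, job_words):
    # Fraction of distinct job-description words that also appear in the resume
    job, res = set(job_words), set(resume_words)
    return len(job & res) / len(job)

m = naive_match(["python", "sql", "git"], ["python", "sql", "excel"])
```

Here `m` is 2/3, since two of the three distinct job-description words occur in the resume.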
```
from nltk import sent_tokenize, word_tokenize, pos_tag
from nltk.corpus import stopwords
import pandas as pd
from nltk import FreqDist
import codecs
# NLTK's default english stopwords
default_stopwords = stopwords.words('english')
#File Locations
document_folder = '../data/'
resume_file = document_folder + 'resume.txt'
job_description_file = document_folder + 'job_description.txt'
custom_stopwords_file = document_folder + 'custom_stopwords.txt'
custom_stopwords = codecs.open(custom_stopwords_file, 'r', 'utf-8').read().splitlines()
all_stopwords = set(default_stopwords + custom_stopwords)
def process_text(text, stopwords):
tokens = word_tokenize(text)
# Lowercase first so that capitalized stopwords are also filtered out
words = [t.lower() for t in tokens if t.isalpha()]
words = [w for w in words if len(w) > 1]
words = [w for w in words if not w.isnumeric()]
words = [w for w in words if w not in stopwords]
return FreqDist(words)
f_resume = open(resume_file, 'r')
f_desc = open(job_description_file, 'r')
raw_resume =f_resume.read()
raw_desc = f_desc.read()
resume_words = process_text(raw_resume,all_stopwords)
job_words = process_text(raw_desc,all_stopwords)
df_desc = pd.DataFrame.from_dict(job_words,orient='index')
df_desc.columns = ['Frequency']
df_desc.index.name = 'Term'
df_resume = pd.DataFrame.from_dict(resume_words, orient='index')
df_resume.columns = ['Frequency']
df_resume.index.name = 'Term'
df = pd.merge(df_desc,df_resume,how='left',left_index=True,right_index=True).fillna(0)
df_matches = pd.merge(df_desc,df_resume,how='inner',left_index=True,right_index=True)
df.sort_values(by='Frequency_x',ascending=False,inplace=True)
# df.sort_values(by='Frequency_y',inplace=True,na_position='first')
# df_missing = df[df['Frequency_y']==0]
df_missing = df[df['Frequency_y']==0]
df_missing.columns = ['In Job Description','In Resume']
print('Your resume matches at ', "{0:.0%}".format(df_matches.size/df_desc.size))
import pandasql as ps
# Terms in the job description but not in the resume: for an anti-join,
# the null test belongs in the WHERE clause, not the JOIN condition
q1 = """select * from
(select df_desc.Term, df_desc.Frequency, df_resume.Frequency
from df_desc
left join df_resume on lower(df_desc.Term) = lower(df_resume.Term)
where df_resume.Term is null
order by 2 desc
)
"""
print(ps.sqldf(q1, locals()))
resume_set = set(resume_words)
```
## Next Steps: Improve Comparisons
1. Exclude low-information parts of speech, like prepositions and conjunctions.
2. Develop a list of skills.
3. Break comparisons by parts of speech. (Nouns, verbs, adjectives).
4. Look for key bigrams.
5. Enumerate and compare sentence subjects.
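For step 4, a dependency-free way to surface key bigrams might be to count adjacent word pairs (a sketch, not part of the current program):

```python
from collections import Counter

def top_bigrams(words, n=5):
    # Count adjacent (word, next_word) pairs and return the most common
    return Counter(zip(words, words[1:])).most_common(n)

pairs = top_bigrams(["machine", "learning", "machine", "learning", "models"])
```

A production version would first apply the same stopword and token filtering as `process_text`.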
## Next Steps: File Import of different formats
# Density Profile and IFT of mixture of Hexane + Ethanol and CPME
First it's needed to import the necessary modules
```
import numpy as np
import matplotlib.pyplot as plt
from sgtpy import component, mixture, saftvrmie
from sgtpy.equilibrium import bubblePy
from sgtpy.sgt import sgt_mix
```
The ternary mixture is created and then the interaction parameters are set. As CPME can associate with ethanol, these site/site interactions are set up manually by modifying the ```eos.eABij``` and ```eos.rcij``` arrays. Finally, the $\beta_{ij}$ corrections are set.
```
ethanol = component('ethanol2C', ms = 1.7728, sigma = 3.5592 , eps = 224.50,
lambda_r = 11.319, lambda_a = 6., eAB = 3018.05, rcAB = 0.3547,
rdAB = 0.4, sites = [1,0,1], cii= 5.3141080872882285e-20)
cpme = component('cpme', ms = 2.32521144, sigma = 4.13606074, eps = 343.91193798, lambda_r = 14.15484877,
lambda_a = 6.0, npol = 1.91990385,mupol = 1.27, sites =[0,0,1], cii = 3.5213681817448466e-19)
hexane = component('hexane', ms = 1.96720036, sigma = 4.54762477, eps = 377.60127994,
lambda_r = 18.41193194, npol = 0., cii = 3.581510586936205e-19 )
# creating mixture
mix = mixture(hexane, ethanol)
mix.add_component(cpme)
# setting kij corrections
k12 = 0.011818492037463553
k13 = 0.0008700151297528677
k23 = 0.01015194
Kij = np.array([[0., k12, k13], [k12, 0., k23], [k13, k23, 0.]])
mix.kij_saft(Kij)
eos = saftvrmie(mix)
# cross associationg set up
rc = 2.23153033
eos.eABij[1,2] = ethanol.eAB / 2
eos.eABij[2,1] = ethanol.eAB / 2
eos.rcij[1,2] = rc * 1e-10
eos.rcij[2,1] = rc * 1e-10
# setting beta corrections for SGT
b12 = 0.05719272059410664
b13 = 0.0
b23 = 0.0358453055603665
beta = np.array([[0., b12, b13],
[b12, 0., b23],
[b13, b23, 0.]])
eos.beta_sgt(beta)
```
Now it is necessary to compute the equilibrium pressure. This bubble point is computed with the ```bubblePy``` function. Further information about this function can be found by running ```bubblePy?```.
```
# computing bubble points
X = np.array([0.906, 0.071,0.023])
T = 298.15 # K
# initial guesses for pressure and vapor composition
P0 = 20000. # Pa
y0 = np.array([0.7, 0.2, 0.1])
sol = bubblePy(y0, P0, X, T, eos, full_output = True)
sol
```
The results are used to compute the density vectors and the SGT is applied with the ```sgt_mix``` function. Further information about this function can be found running ```sgt_mix?```.
```
# reading solution object
Y, P = sol.Y, sol.P
vl, vv = sol.v1, sol.v2
#density vector of each phase
rhox = X/vl
rhoy = Y/vv
solsgt = sgt_mix(rhoy, rhox, T, P, eos, n = 25, full_output = True)
# density paths in kmol/m3
rho = solsgt.rho / 1000
z = solsgt.z # in Angstrom
plt.plot(z, rho[0], color= 'k')
plt.plot(z, rho[1], '--',color = 'k')
plt.plot(z, rho[2], ':', color = 'k')
plt.xlabel(r'z / $\rm \AA$')
plt.ylabel(r'$\rho$ / kmol m$^{-3}$')
```
# The Unique Properties of Qubits
```
from qiskit import *
from qiskit.visualization import plot_histogram
```
You now know something about bits, and about how our familiar digital computers work. All the complex variables, objects and data structures used in modern software are basically all just big piles of bits. Those of us who work on quantum computing call these *classical variables.* The computers that use them, like the one you are using to read this article, we call *classical computers*.
In quantum computers, our basic variable is the _qubit_: a quantum variant of the bit. These are quantum objects, obeying the laws of quantum mechanics. Unlike any classical variable, these cannot be represented by some number of classical bits. They are fundamentally different.
The purpose of this section is to give you your first taste of what a qubit is, and how they are unique. We'll do this in a way that requires essentially no math. This means leaving terms like 'superposition' and 'entanglement' until future sections, since it is difficult to properly convey their meaning without pointing at an equation.
Instead, we will use another well-known feature of quantum mechanics: the uncertainty principle.
### Heisenberg's uncertainty principle
The most common formulation of the uncertainty principle refers to the position and momentum of a particle: the more precisely its position is defined, the more uncertainty there is in its momentum, and vice-versa.

This is a common feature of quantum objects, though it need not always refer to position and momentum. There are many possible sets of parameters for different quantum objects, where certain knowledge of one means that our observations of the others will be completely random.
To see how the uncertainty principle affects qubits, we need to look at measurement. As we saw in the last section, this is the method by which we extract a bit from a qubit.
```
measure_z = QuantumCircuit(1,1)
measure_z.measure(0,0)
measure_z.draw(output='mpl')
```
On the [Circuit Composer](https://quantum-computing.ibm.com/composer), the same operation looks like this.

This version has a small ‘z’ written in the box that represents the operation. This hints at the fact that this kind of measurement is not the only one. In fact, it is only one of an infinite number of possible ways to extract a bit from a qubit. Specifically, it is known as a *z measurement*.
Another commonly used measurement is the *x measurement*. It can be performed using the following sequence of gates.
```
measure_x = QuantumCircuit(1,1)
measure_x.h(0)
measure_x.measure(0,0)
measure_x.draw(output='mpl')
```
Later chapters will explain why this sequence of operations performs a new kind of measurement. For now, you'll need to trust us.
Like the position and momentum of a quantum particle, the z and x measurements of a qubit are governed by the uncertainty principle. Below we'll look at results from a few different circuits to see this effect in action.
#### Results for an empty circuit
The easiest way to see an example is to take a freshly initialized qubit.
```
qc_0 = QuantumCircuit(1)
qc_0.draw(output='mpl')
```
Qubits are always initialized such that they are certain to give the result `0` for a z measurement. The resulting histogram will therefore simply have a single column, showing the 100% probability of getting a `0`.
```
qc = qc_0 + measure_z
print('Results for z measurement:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
If we instead do an x measurement, the results will be completely random.
```
qc = qc_0 + measure_x
print('Results for x measurement:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
Note that the reason why the results are not split exactly 50/50 here is because we take samples by repeating the circuit a finite number of times, and so there will always be statistical noise. In this case, the default of `shots=1024` was used.
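You can reproduce this kind of statistical noise with plain NumPy, independent of Qiskit: sampling a perfectly fair 50/50 outcome a finite number of times rarely gives exactly 50/50.

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 1024
samples = rng.integers(0, 2, size=shots)  # 1024 fair coin flips
p0 = np.mean(samples == 0)
# p0 will be close to, but generally not exactly, 0.5
```

Increasing `shots` shrinks this deviation, but never removes it entirely.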
#### Results for a single Hadamard
Now we'll try a different circuit. This has a single gate called a Hadamard, which we will learn more about in future sections.
```
qc_plus = QuantumCircuit(1)
qc_plus.h(0)
qc_plus.draw(output='mpl')
```
To see what effect it has, let's first try the z measurement.
```
qc = qc_plus + measure_z
qc.draw()
print('Results for z measurement:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
Here we see that it is the results of the z measurement that are random for this circuit.
Now let's see what happens for an x measurement.
```
qc = qc_plus + measure_x
print('Results for x measurement:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
For the x measurement, it is certain that the output for this circuit is `0`. The results here are therefore very different to what we saw for the empty circuit. The Hadamard has led to an entirely opposite set of outcomes.
#### Results for a y rotation
Using other circuits we can manipulate the results in different ways. Here is an example with an `ry` gate.
```
qc_y = QuantumCircuit(1)
qc_y.ry( -3.14159/4,0)
qc_y.draw(output='mpl')
```
We will learn more about `ry` in future sections. For now, just notice the effect it has for the z and x measurements.
```
qc = qc_y + measure_z
print('Results for z measurement:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
Here we have a case that we have not seen before. The z measurement is most likely to output `0`, but it is not completely certain. A similar effect is seen below for the x measurement: it is most likely, but not certain, to output `1`.
```
qc = qc_y + measure_x
print('\nResults for x measurement:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
These results hint at an important principle: Qubits have a limited amount of certainty that they can hold. This ensures that, despite the different ways we can extract outputs from a qubit, it can only be used to store a single bit of information. In the case of the blank circuit, this certainty was dedicated entirely to the outcomes of z measurements. For the circuit with a single Hadamard, it was dedicated entirely to x measurements. In this case, it is shared between the two.
### Einstein vs. Bell
We have now played with some of the features of qubits, but we haven't done anything that couldn't be reproduced by a few bits and a random number generator. You can therefore be forgiven for thinking that quantum variables are just classical variables with some randomness bundled in.
This is essentially the claim made by Einstein, Podolsky and Rosen back in 1935. They objected to the uncertainty seen in quantum mechanics, and thought it meant that the theory was incomplete. They thought that a qubit should always know what output it would give for both kinds of measurement, and that it only seems random because some information is hidden from us. As Einstein said: God does not play dice with the universe.
No one spoke of qubits back then, and people hardly spoke of computers. But if we translate their arguments into modern language, they essentially claimed that qubits can indeed be described by some form of classical variable. They didn’t know how to do it, but they were sure it could be done. Then quantum mechanics could be replaced by a much nicer and more sensible theory.
It took until 1964 to show that they were wrong. J. S. Bell proved that quantum variables behaved in a way that was fundamentally unique. Since then, many new ways have been found to prove this, and extensive experiments have been done to show that this is exactly the way the universe works. We'll now consider a simple demonstration, using a variant of _Hardy’s paradox_.
For this we need two qubits, set up in such a way that their results are correlated. Specifically, we want to set them up such that we see the following properties.
1. If z measurements are made on both qubits, they never both output `0`.
2. If an x measurement of one qubit outputs `1`, a z measurement of the other will output `0`.
If we have qubits that satisfy these properties, what can we infer about the remaining case: an x measurement of both?
For example, let's think about the case where both qubits output `1` for an x measurement. By applying property 2 we can deduce what the result would have been if we had made z measurements instead: We would have gotten an output of `0` for both. However, this result is impossible according to property 1. We can therefore conclude that an output of `1` for x measurements of both qubits must also be impossible.
The paragraph you just read contains all the math in this section. Don't feel bad if you need to read it a couple more times!
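You can check this classical reasoning by brute force: enumerate every possible pre-determined assignment of z and x outputs for the two qubits, keep those satisfying properties 1 and 2, and confirm that none of them has both x measurements outputting `1`.

```python
from itertools import product

valid = [
    (z1, z2, x1, x2)
    for z1, z2, x1, x2 in product([0, 1], repeat=4)
    if not (z1 == 0 and z2 == 0)   # property 1
    and (x1 != 1 or z2 == 0)       # property 2: x of qubit 0 vs z of qubit 1
    and (x2 != 1 or z1 == 0)       # property 2: x of qubit 1 vs z of qubit 0
]
# Some assignments survive, but none with x1 == x2 == 1:
# for classical variables, that outcome really is impossible.
```

As we are about to see, qubits are not bound by this restriction.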
Now let's see what actually happens. Here is a circuit, composed of gates you will learn about in later sections. It prepares a pair of qubits that will satisfy the above properties.
```
qc_hardy = QuantumCircuit(2)
qc_hardy.ry(1.911,1)
qc_hardy.cx(1,0)
qc_hardy.ry(0.785,0)
qc_hardy.cx(1,0)
qc_hardy.ry(2.356,0)
qc_hardy.draw(output='mpl')
```
Let's see it in action. First a z measurement of both qubits.
```
measurements = QuantumCircuit(2,2)
# z measurement on both qubits
measurements.measure(0,0)
measurements.measure(1,1)
qc = qc_hardy + measurements
print('\nResults for two z measurements:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
The probability of `00` is zero, and so these qubits do indeed satisfy property 1.
Next, let's see the results of an x measurement of one and a z measurement of the other.
```
measurements = QuantumCircuit(2,2)
# x measurement on qubit 0
measurements.h(0)
measurements.measure(0,0)
# z measurement on qubit 1
measurements.measure(1,1)
qc = qc_hardy + measurements
print('\nResults for an x measurement on qubit 0 and a z measurement on qubit 1:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
The probability of `11` is zero. You'll see the same if you swap around the measurements. These qubits therefore also satisfy property 2.
Finally, let's look at an x measurement of both.
```
measurements = QuantumCircuit(2,2)
measurements.h(0)
measurements.measure(0,0)
measurements.h(1)
measurements.measure(1,1)
qc = qc_hardy + measurements
print('\nResults for x measurements on both qubits:')
counts = execute(qc,Aer.get_backend('qasm_simulator')).result().get_counts()
plot_histogram(counts)
```
We reasoned that, given properties 1 and 2, it would be impossible to get the output `11`. From the results above, we see that our reasoning was not correct: around one in every dozen shots gives this 'impossible' outcome.
So where did we go wrong? Our mistake was in the following piece of reasoning.
> By applying property 2 we can deduce what the result would have been if we had made z measurements instead
We used our knowledge of the x outputs to work out what the z outputs were. Once we’d done that, we assumed that we were certain about the value of both. More certain than the uncertainty principle allows us to be. And so we were wrong.
Our logic would be completely valid if we weren't reasoning about quantum objects. If it were some non-quantum variable that we initialized by some random process, the x and z outputs would indeed both be well defined. They would just be based on some pre-determined list of random numbers in our computer, or generated by some deterministic process. Then there would be no reason why we shouldn't use one to deduce the value of the other, and our reasoning would be perfectly valid. The restriction it predicts would apply, and it would be impossible for both x measurements to output `1`.
But our qubits behave differently. The uncertainty of quantum mechanics allows qubits to dodge restrictions placed on classical variables. It allows them to do things that would otherwise be impossible. Indeed, this is the main thing to take away from this section:
> A physical system in a definite state can still behave randomly.
This is the first of the key principles of the quantum world. It needs to become your new intuition, as it is what makes quantum systems different to classical systems. It's what makes quantum computers able to outperform classical computers. It leads to effects that allow programs made with quantum variables to solve problems in ways that those with normal variables cannot. But just because qubits don’t follow the same logic as normal computers, it doesn’t mean they defy logic entirely. They obey the definite rules laid out by quantum mechanics.
If you’d like to learn these rules, we’ll use the remainder of this chapter to guide you through them. We'll also show you how to express them using math. This will provide a foundation for later chapters, in which we'll explain various quantum algorithms and techniques.
# Cross Validation
> Holdout sets are a great start to model validation. However, using a single train and test set is often not enough. Cross-validation is considered the gold standard when it comes to validating model performance and is almost always used when tuning model hyper-parameters. This chapter focuses on performing cross-validation to validate model performance. This is the Summary of lecture "Model Validation in Python", via datacamp.
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp, Machine_Learning]
- image: images/loocv.png
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (8, 8)
```
## The problems with holdout sets
### Two samples
After building several classification models based on the `tic_tac_toe` dataset, you realize that some models do not generalize as well as others. You have created training and testing splits just as you have been taught, so you are curious why your validation process is not working.
After trying a different train/test split, you noticed differing accuracies for your machine learning model. Before getting too frustrated with the varying results, you have decided to see what else could be going on.
```
tic_tac_toe = pd.read_csv('./dataset/tic-tac-toe.csv')
tic_tac_toe.head()
# Create two different samples of 200 observations
sample1 = tic_tac_toe.sample(n=200, random_state=1111)
sample2 = tic_tac_toe.sample(n=200, random_state=1171)
# Print the number of common observations
print(len([index for index in sample1.index if index in sample2.index]))
# Print the number of observations in the Class column for both samples
print(sample1['Class'].value_counts())
print(sample2['Class'].value_counts())
```
Notice that there are a varying number of positive observations for both sample test sets. Sometimes creating a single test holdout sample is not enough to achieve the high levels of model validation you want. You need to use something more robust.
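The variability you just observed is easy to reproduce with a toy simulation (synthetic data, no scikit-learn): each random holdout sample gives a slightly different estimate of the same underlying quantity.

```python
import random

random.seed(3)
population = [1] * 60 + [0] * 40             # 60% positive class overall
estimates = []
for _ in range(5):
    holdout = random.sample(population, 20)  # a different holdout each time
    estimates.append(sum(holdout) / 20)
# The five estimates scatter around the true value of 0.6
```

Cross-validation addresses this by averaging over many such splits instead of trusting a single one.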
## Cross-validation

### scikit-learn's KFold()
You just finished running a colleague's code that creates a random forest model and calculates an out-of-sample accuracy. You noticed that your colleague's code did not have a random state, and the errors you found were completely different than the errors your colleague reported.
To get a better estimate for how accurate this random forest model will be on new data, you have decided to generate some indices to use for KFold cross-validation.
```
candy = pd.read_csv('./dataset/candy-data.csv')
candy.head()
X = candy.drop(['competitorname', 'winpercent'], axis=1).to_numpy()
y = candy['winpercent'].to_numpy()
from sklearn.model_selection import KFold
# Use KFold
kf = KFold(n_splits=5, shuffle=True, random_state=1111)
# Create splits
splits = kf.split(X)
# Print the number of indices
for train_index, val_index in splits:
print("Number of training indices: %s" % len(train_index))
print("Number of validation indices: %s" % len(val_index))
```
This dataset has 85 rows. You have created five splits - each containing 68 training and 17 validation indices. You can use these indices to complete 5-fold cross-validation.
### Using KFold indices
You have already created `splits`, which contains indices for the candy-data dataset to complete 5-fold cross-validation. To get a better estimate for how well a colleague's random forest model will perform on new data, you want to run this model on the five different training and validation indices you just created.
In this exercise, you will use these indices to check the accuracy of this model using the five different splits. A for loop has been provided to assist with this process.
```
# Create splits
splits = kf.split(X)
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
rfc = RandomForestRegressor(n_estimators=25, random_state=1111)
# Access the training and validation indices of splits
for train_index, val_index in splits:
# Setup the training and validation data
X_train, y_train = X[train_index], y[train_index]
X_val, y_val = X[val_index], y[val_index]
# Fit the random forest model
rfc.fit(X_train, y_train)
# Make predictions, and print the accuracy
predictions = rfc.predict(X_val)
print("Split accuracy: " + str(mean_squared_error(y_val, predictions)))
```
`KFold()` is a great method for accessing individual indices when completing cross-validation. One drawback is needing a for loop to work through the indices though.
## sklearn's cross_val_score()
### scikit-learn's methods
You have decided to build a regression model to predict the number of new employees your company will successfully hire next month. You open up a new Python script to get started, but you quickly realize that sklearn has a lot of different modules. Let's make sure you understand the names of the modules, the methods, and which module contains which method.
Follow the instructions below to load in all of the necessary methods for completing cross-validation using sklearn. You will use modules:
- metrics
- model_selection
- ensemble
```
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, make_scorer
```
### Implement cross_val_score()
Your company has created several new candies to sell, but they are not sure if they should release all five of them. To predict the popularity of these new candies, you have been asked to build a regression model using the candy dataset. Remember that the response value is a head-to-head win-percentage against other candies.
Before you begin trying different regression models, you have decided to run cross-validation on a simple random forest model to get a baseline error to compare with any future results.
```
rfc = RandomForestRegressor(n_estimators=25, random_state=1111)
mse = make_scorer(mean_squared_error)
# Setup cross_val_score
cv = cross_val_score(estimator=rfc, X=X_train, y=y_train, cv=10, scoring=mse)
# Print the mean error
print(cv.mean())
```
You now have a baseline score to build on. If you decide to build additional models or try new techniques, you should try to get an error lower than 155.56. Lower errors indicate that your popularity predictions are improving.
## Leave-one-out-cross-validation (LOOCV)
- LOOCV

- When to use LOOCV?
- The amount of training data is limited
- You want the absolute best error estimate for new data
- Be cautious when:
- Computation resources are limited
- You have a lot of data
- You have a lot of parameters to test
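Instead of setting `cv` to the number of rows, sklearn also provides a dedicated `LeaveOneOut` splitter that can be passed to `cross_val_score` directly (a minimal sketch on toy data; the array sizes are assumptions):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

# Toy data: 5 samples, 2 features
X = np.arange(10).reshape(5, 2)
y = np.array([0, 1, 2, 3, 4])

loo = LeaveOneOut()
# LeaveOneOut yields one split per sample: train on n-1 rows, validate on 1
n_splits = loo.get_n_splits(X)
for train_index, val_index in loo.split(X):
    assert len(val_index) == 1
print(n_splits)
```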
### Leave-one-out-cross-validation
Let's assume your favorite candy is not in the candy dataset, and that you are interested in the popularity of this candy. Using 5-fold cross-validation will train on only 80% of the data at a time. The candy dataset only has 85 rows, though, and leaving out 20% of the data could hinder the model. Using leave-one-out-cross-validation instead allows you to make the most of your limited dataset and will give the best estimate of your favorite candy's popularity!
```
from sklearn.metrics import mean_absolute_error
# Create scorer
mae_scorer = make_scorer(mean_absolute_error)
rfr = RandomForestRegressor(n_estimators=15, random_state=1111)
# Implement LOOCV
scores = cross_val_score(rfr, X, y, cv=85, scoring=mae_scorer)
# Print the mean and standard deviation
print("The mean of the errors is: %s." % np.mean(scores))
print("The standard deviation of the errors is: %s." % np.std(scores))
```
You have come a long way with model validation techniques. The final chapter will wrap up model validation by discussing how to select the best model and give an introduction to parameter tuning.
# Data Download and Pre-processing
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import yaml
import os
from usal_echo import bucket, dcm_dir, img_dir, segmentation_dir, model_dir, classification_model
from usal_echo.d00_utils.db_utils import *
from usal_echo.d01_data.ingestion_dcm import ingest_dcm
from usal_echo.d01_data.ingestion_xtdb import ingest_xtdb
from usal_echo.d02_intermediate.clean_dcm import clean_dcm_meta
from usal_echo.d02_intermediate.clean_xtdb import clean_tables
from usal_echo.d02_intermediate.filter_instances import filter_all
from usal_echo.d02_intermediate.download_dcm import _downsample_train_test, s3_download_decomp_dcm, dcmdir_to_jpgs_for_classification
```
## Ingest dicom metadata and Xcelera csv files
Retrieve data from s3 bucket. These functions write to database schema `.raw`.
```
#ingest_dcm(bucket) # This function takes ~3 days to run.
ingest_xtdb(bucket)
```
## Clean dicom metadata and Xcelera database tables
These functions write to database schema `.clean`.
```
clean_dcm_meta()
clean_tables()
```
## Filter study instances
This function writes to database schema `.views`.
The following tables are created:
* **views.machines_all_bmi**: list of all studies in db; columns: studyidk, machine type and bmi
* **views.machines_new_bmi**: same as machines_all_bmi, but only includes studies with new machines (_i.e. machine types ECOEPIQ2, EPIQ7-1, ECOIE33, AFFINITI_1, AFFINITI_2_)
* **views.instances_unique_master_list**, a list of unique instances in the database (_unique means that instances with naming conflicts (e.g. duplicate instanceidk's) have been removed_)
* **views.frames_w_labels**: all frames with labels plax, a4c, a2c
* **views.frames_sorted_by_views_temp**: intermediate table; used by other scripts
* **views.instances_w_conflicts**: instances to avoid
* **views.instances_w_labels**: all instances which are labeled plax, a4c, a2c
Assumption: if a frame has a view label, other frames within that instance correspond to the same view. This excludes instances that have more than one frame with conflicting labels.
<font color='red'>All subsequent processes use **views.instances_w_labels** which are the ground truths for classification.</font>
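The conflict filter described above can be sketched in pandas (a hypothetical illustration — the column names and data are assumptions; the real logic lives in `filter_all`):

```python
import pandas as pd

# Hypothetical frame-level labels: instance 2 carries two conflicting views
frames = pd.DataFrame({
    "instanceidk": [1, 1, 2, 2, 3],
    "view": ["a4c", "a4c", "plax", "a2c", "a2c"],
})

# Instances whose frames carry more than one distinct view label are conflicts
label_counts = frames.groupby("instanceidk")["view"].nunique()
conflicted = set(label_counts[label_counts > 1].index)

# Keep only instances with a single, consistent view label
clean = frames[~frames["instanceidk"].isin(conflicted)]
print(sorted(clean["instanceidk"].unique()))
```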
```
filter_all()
```
## Check that tables have been created
```
io_raw = dbReadWriteRaw()
io_raw.list_tables()
io_clean = dbReadWriteClean()
io_clean.list_tables()
dm_spain_view_study_summary = io_clean.get_table('dm_spain_view_study_summary')
dm_spain_view_study_summary[:2]
io_views = dbReadWriteViews()
io_views.list_tables()
groundtruth_classification = io_views.get_table('instances_w_labels')
groundtruth_classification.head()
```
## Download and decompress dicom files
```
s3_download_decomp_dcm(train_test_ratio=0.5, downsample_ratio=0.0001, dcm_dir=dcm_dir, bucket=bucket)
```
```
import numpy as np
import sys
import matplotlib.pyplot as plt
import os
import scipy.constants as ct
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
Ryd2eV=13.605692
bohr2nm=ct.physical_constants["Bohr radius"][0]*1e9
print(bohr2nm)
plt.rcParams['figure.figsize'] = (14.0, 10.0)
def readmps(filename):
atom1=4
atom2=15
trdip=0
with open(filename,"r") as f:
lines=f.readlines()
if "D[e*nm]" in lines[1]:
a=lines[1].split()
z=float(a[5])
y=float(a[4])
x=float(a[3].split("=")[1])
trdip=np.array([x,y,z])
b=lines[atom1-1].split()
x=float(b[1])
y=float(b[2])
z=float(b[3])
at1=np.array([x,y,z])
c=lines[atom2-1].split()
x=float(c[1])
y=float(c[2])
z=float(c[3])
at2=np.array([x,y,z])
orientation=at1-at2
return trdip,orientation
def readorb(filename):
trdip=0
with open(filename,"r") as f:
lines=f.readlines()
for line in lines:
if "Singlet1 x y z" in line:
#print line
a=line.split()
x=float(a[7])
y=float(a[8])
z=float(a[9])
trdip=-1*np.array([x,y,z])*bohr2nm
break
return trdip
#files1=os.listdir("logfiles2")
files1=[]
files2=[]
for i in range(1,513):
files2.append("DCV2T_n2s1_{}.mps".format(i))
files1.append("molecule_{}.orb.log".format(i))
#files2=os.listdir("espfits")
trdipqm=[]
for filename in files1:
#print filename
trp=readorb("logfiles2/"+filename)
trdipqm.append(trp)
trdipcl=[]
orient=[]
for filename in files2:
trp,orientation=readmps("espfits/"+filename)
orient.append(orientation)
#print orientation
trdipcl.append(trp)
same=0
diff=0
plus=0
minus=0
dotplus=0
dotminus=0
for i,j in zip(trdipqm,trdipcl):
#print i ,j
dot=np.dot(i,j)
if dot>0:
dotplus+=1
else:
dotminus+=1
if (i[1]>0 and j[1]<0):
plus+=1
elif (i[1]<0 and j[1]>0):
minus+=1
if (i[1]>0 and j[1]>0) or (i[1]<0 and j[1]<0):
same+=1
else:
diff+=1
print(same, diff)
print(plus, minus)
print(dotplus, dotminus)
samedir=0
oppodir=0
for i,j in zip(orient,trdipcl):
#print i,j
dot=np.dot(i,j)
if dot>0:
samedir+=1
else:
oppodir+=1
print(samedir, oppodir)
orarray=np.array(trdipcl)
prarray=np.array(trdipqm)
#print orarray.T[0]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(orarray.T[0], orarray.T[1],orarray.T[2], marker="o",c="b")
ax.scatter(prarray.T[0], prarray.T[1],prarray.T[2], marker="x",c="r")
max_range = np.array([orarray.T[0].max()-orarray.T[0].min(), orarray.T[1].max()-orarray.T[1].min(), orarray.T[2].max()-orarray.T[2].min()]).max()
Xb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][0].flatten() + 0.5*(orarray.T[0].max()+orarray.T[0].min())
Yb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][1].flatten() + 0.5*(orarray.T[1].max()+orarray.T[1].min())
Zb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][2].flatten() + 0.5*(orarray.T[2].max()+orarray.T[2].min())
# Comment or uncomment following both lines to test the fake bounding box:
for xb, yb, zb in zip(Xb, Yb, Zb):
ax.plot([xb], [yb], [zb], 'w')
plt.grid()
plt.show()
def appendSpherical_np(xyz):
ptsnew = np.hstack((xyz, np.zeros(xyz.shape)))
xy = xyz[:,0]**2 + xyz[:,1]**2
ptsnew[:,3] = np.sqrt(xy + xyz[:,2]**2)
ptsnew[:,4] = np.arctan2(np.sqrt(xy), xyz[:,2]) # for elevation angle defined from Z-axis down
#ptsnew[:,4] = np.arctan2(xyz[:,2], np.sqrt(xy)) # for elevation angle defined from XY-plane up
ptsnew[:,5] = np.arctan2(xyz[:,1], xyz[:,0])
return ptsnew
cl=appendSpherical_np(orarray)
qm=appendSpherical_np(prarray)
#plt.scatter(cl[:,5],cl[:,4],c="b",marker="o")
#plt.scatter(qm[:,5],qm[:,4],c="r",marker="x")
for c,q in zip(cl,qm):
x=[c[5],q[5]]
y=[c[4],q[4]]
#print x,y
plt.plot(x,y,lw=2)
x=np.linspace(-np.pi,np.pi,9)
labels=x/np.pi*180
plt.xticks(x, labels)
plt.yticks(x[4:], labels[4:])
plt.show()
```
# Find hospitals closest to an incident
The `network` module of the ArcGIS API for Python can be used to solve different types of network analysis operations. In this sample, we see how to find the hospital that is closest to an incident.
## Closest facility
The closest facility solver provides functionality for finding out the closest locations to a particular input point. This solver would be useful in cases when you have an incident and need to find the closest facility or need to get information on the travel time and the distance to each of the facilities from an incident point for reporting purposes.

When finding closest facilities, you can specify how many to find and whether the direction of travel is toward or away from them. The closest facility solver displays the best routes between incidents and facilities, reports their travel costs, and returns driving directions.
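The solver accounts for travel costs along the road network; as a point of intuition, with no network available the problem reduces to a nearest-neighbor search over straight-line distances (a toy sketch with made-up coordinates — not the API's method):

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance between two (lon, lat) points in kilometers
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical incident and facility locations as (lon, lat)
incident = (-43.281206, -22.865676)
facilities = {"A": (-43.28, -22.87), "B": (-43.25, -22.90)}
closest = min(facilities, key=lambda k: haversine_km(*incident, *facilities[k]))
print(closest)
```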
### Connect to your GIS
As a first step, you would need to establish a connection to your organization which could be an ArcGIS Online organization or an ArcGIS Enterprise.
```
from IPython.display import HTML
import pandas as pd
from arcgis.gis import GIS
#connect to your GIS
user_name = 'arcgis_python'
password = 'P@ssword123'
my_gis = GIS('https://www.arcgis.com', user_name, password)
```
### Create a Network Layer
To perform any network analysis (such as finding the closest facility, the best route between multiple stops, or service area around a facility), you would need to create a `NetworkLayer` object. In this sample, since we are solving for closest facilities, we need to create a `ClosestFacilityLayer` which is a type of `NetworkLayer`.
To create any `NetworkLayer` object, you would need to provide the URL to the appropriate network analysis service. Hence, in this sample, we provide a `ClosestFacility` URL to create a `ClosestFacilityLayer` object.
Since all ArcGIS Online organizations already have access to those routing services, you can access this URL through the `GIS` object's `helperServices` property. If you have your own ArcGIS Server based map service with network analysis capability enabled, you would need to provide the URL for this service.
Let us start by importing the `network` module
```
import arcgis.network as network
```
Access the analysis URL from the `GIS` object
```
analysis_url = my_gis.properties.helperServices.closestFacility.url
analysis_url
```
Create a `ClosestFacilityLayer` object using this URL
```
cf_layer = network.ClosestFacilityLayer(analysis_url, gis=my_gis)
```
### Create hospitals layer
In this sample, we will be looking for the closest hospital (facility) to an incident location. Even though we are interested in finding out the closest one, it would still be helpful to get the information on the distance and travel time to all of them for reference purposes.
In the code below, we need to geocode the hospitals' addresses as well as do the reverse geocode for the incident location which has been supplied in the latitude/longitude format.
To perform the geocode operations, we import the `geocoding` module of the ArcGIS API.
```
from arcgis import geocoding
```
In this sample, we geocode addresses of hospitals to create the facility layer. In your workflows, this could be any feature layer. Create a list of hospitals in Rio de Janeiro, Brazil.
```
hospitals_addresses = ['Estrada Adhemar Bebiano, 339 Del Castilho, Rio de Janeiro RJ, 21051-370, Brazil',
'R. José dos Reis Engenho de Dentro, Rio de Janeiro RJ, 20750-000, Brazil',
'R. Dezessete, s/n Maré, Rio de Janeiro RJ, 21042-010, Brazil',
'Rua Dr. Miguel Vieira Ferreira, 266 Ramos, Rio de Janeiro RJ, Brazil']
```
Loop through each address and geocode it. The geocode operation returns a list of matches for each address. We pick the first result and extract the coordinates from it and construct a `Feature` object out of it. Then we combine all the `Feature`s representing the hospitals into a `FeatureSet` object.
```
from arcgis.features import Feature, FeatureSet
hosp_feat_list = []
for address in hospitals_addresses:
hit = geocoding.geocode(address)[0]
hosp_feat = Feature(geometry=hit['location'], attributes=hit['attributes'])
hosp_feat_list.append(hosp_feat)
```
Construct a `FeatureSet` using each hospital `Feature`.
```
hospitals_fset = FeatureSet(features=hosp_feat_list,
geometry_type='esriGeometryPoint',
spatial_reference={'latestWkid': 4326})
```
Let's draw our hospitals on a map
```
map1 = my_gis.map('Rio de Janeiro, Brazil')
map1
map1.draw(hospitals_fset, symbol={"type": "esriSMS","style": "esriSMSSquare",
"color": [76,115,0,255],"size": 8,})
```
### Create incidents layer
Similarly, let us create the incident layer
```
incident_coords = '-43.281206,-22.865676'
reverse_geocode = geocoding.reverse_geocode({"x": incident_coords.split(',')[0],
"y": incident_coords.split(',')[1]})
incident_feature = Feature(geometry=reverse_geocode['location'],
attributes=reverse_geocode['address'])
incident_fset = FeatureSet([incident_feature], geometry_type='esriGeometryPoint',
spatial_reference={'latestWkid': 4326})
```
Let us add the incident to the map
```
map1.draw(incident_fset, symbol={"type": "esriSMS","style": "esriSMSCircle","size": 8})
```
## Solve for closest hospital
By default, the closest facility service returns only the single closest location, so we explicitly specify the `default_target_facility_count` parameter as well as `return_facilities`.
```
result = cf_layer.solve_closest_facility(incidents=incident_fset,
facilities=hospitals_fset,
default_target_facility_count=4,
return_facilities=True,
impedance_attribute_name='TravelTime',
accumulate_attribute_names=['Kilometers','TravelTime'])
```
Let us inspect the result dictionary
```
result.keys()
```
Let us use the `routes` dictionary to construct line features out of the routes to display on the map
```
result['routes'].keys()
result['routes']['features'][0].keys()
```
Construct line features out of the routes that are returned.
```
line_feat_list = []
for line_dict in result['routes']['features']:
f1 = Feature(line_dict['geometry'], line_dict['attributes'])
line_feat_list.append(f1)
routes_fset = FeatureSet(line_feat_list,
geometry_type=result['routes']['geometryType'],
spatial_reference= result['routes']['spatialReference'])
```
Add the routes back to the map. The route to the closest hospital is in red
```
map1.draw(routes_fset)
```
## Analyze the results in a table
Since we parsed the routes as a `FeatureSet`, we can display the attributes easily as a `pandas` `DataFrame`.
```
df1 = routes_fset.sdf
df1
```
Let us add the hospital addresses and incident address to this table and display only the relevant columns
```
df1['facility_address'] = hospitals_addresses
df1['incident_address'] = [incident_feature.attributes['Match_addr'] for i in range(len(hospitals_addresses))]
df1[['facility_address','incident_address','Total_Miles','Total_TravelTime']]
```
### Conclusion
Thus using the `network` module of the ArcGIS API for Python, you can solve for closest facilities from an incident location.
## maggy - MNIST Example
---
Created: 24/04/2019
This notebook illustrates the usage of the maggy framework for asynchronous hyperparameter optimization on the famous MNIST dataset.
In this specific example we are using random search over three parameters and we are deploying the median early stopping rule in order to make use of the asynchrony of the framework. The Median Stopping Rule implements the simple strategy of stopping a trial if its performance falls below the median of other trials at similar points in time.
We are using Keras for this example. This notebook works with any Spark cluster given that you are using maggy 0.1. In future versions we will add functionality that relies on Hopsworks.
This notebook has been tested with TensorFlow 1.11.0 and Spark 2.4.0.
Requires Python 3.6 or higher.
### 1. Spark Session
Make sure you have a running Spark Session/Context available. On Hopsworks just execute a simple command to start the spark application.
```
print("Hello World!")
```
### 2. Searchspace definition
We want to conduct random search for the MNIST example on three hyperparameters: kernel size, pooling size and dropout rate. Hence, we have two integer-valued parameters and one double-valued parameter.
```
from maggy import Searchspace
# The searchspace can be instantiated with parameters
sp = Searchspace(kernel=('INTEGER', [2, 8]), pool=('INTEGER', [2, 8]))
# Or additional parameters can be added one by one
sp.add('dropout', ('DOUBLE', [0.01, 0.99]))
```
### 3. Model training definition
The programming model is that you wrap the code containing the model training inside a wrapper function. Inside that wrapper function provide all imports and parts that make up your experiment.
There are several requirements for this wrapper function:
1. The function should take the hyperparameters as arguments, plus one additional parameter `reporter` which is needed for reporting the current metric to the experiment driver.
2. The function should return the metric that you want to optimize for. This should coincide with the metric being reported in the Keras callback (see next point).
3. In order to leverage on the early stopping capabilities of maggy, you need to make use of the maggy reporter API. By including the reporter in your training loop, you are telling maggy which metric to report back to the experiment driver for optimization and to check for early stopping. It is as easy as adding `reporter.broadcast(metric=YOUR_METRIC)` for example at the end of your epoch or batch training step and adding a `reporter` argument to your function signature. If you are not writing your own training loop you can use the pre-written Keras callbacks:
- KerasBatchEnd
- KerasEpochEnd
(Please see documentation for a detailed explanation.)
We are going to use the `KerasBatchEnd` callback to report back the accuracy after each batch. However, note that in the BatchEnd callback we have only access to training accuracy since validation after each batch would be too expensive.
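For completeness, if you write your own training loop instead of using the Keras callbacks, the reporter call can be sketched like this (a hypothetical illustration — the loop body is a placeholder, not real training code):

```python
# Hypothetical wrapper function with a hand-written training loop.
# `reporter` is the object maggy passes into the wrapper function.
def training_function(lr, reporter):
    metric = 0.0
    for step in range(10):
        metric = 1.0 - lr / (step + 1)  # stand-in for a real training metric
        reporter.broadcast(metric=metric)  # report back to the experiment driver
    return metric
```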
```
from maggy import experiment
from maggy.callbacks import KerasBatchEnd
```
Definition of the training wrapper function:
(maggy specific parts are highlighted with comments and correspond to the three points described above.)
```
#########
### maggy: hyperparameters as arguments and including the reporter
#########
def training_function(kernel, pool, dropout, reporter):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
from maggy import tensorboard
from hops import hdfs
log_dir = tensorboard.logdir()
batch_size = 512
num_classes = 10
epochs = 1
# Input image dimensions
img_rows, img_cols = 28, 28
train_filenames = [hdfs.project_path() + "TourData/mnist/train/train.tfrecords"]
validation_filenames = [hdfs.project_path() + "TourData/mnist/validation/validation.tfrecords"]
# Create an iterator over the dataset
def data_input(filenames, batch_size=128, shuffle=False, repeat=None):
def parser(serialized_example):
"""Parses a single tf.Example into image and label tensors."""
features = tf.io.parse_single_example(
serialized_example,
features={
'image_raw': tf.io.FixedLenFeature([], tf.string),
'label': tf.io.FixedLenFeature([], tf.int64),
})
image = tf.io.decode_raw(features['image_raw'], tf.uint8)
image.set_shape([28 * 28])
# Normalize the values of the image from the range [0, 255] to [-0.5, 0.5]
image = tf.cast(image, tf.float32) / 255 - 0.5
label = tf.cast(features['label'], tf.int32)
# Reshape the tensor
image = tf.reshape(image, [img_rows, img_cols, 1])
# Create a one hot array for your labels
label = tf.one_hot(label, num_classes)
return image, label
# Import MNIST data
dataset = tf.data.TFRecordDataset(filenames)
num_samples = sum(1 for _ in dataset)
# Map the parser over dataset, and batch results by up to batch_size
dataset = dataset.map(parser)
if shuffle:
dataset = dataset.shuffle(buffer_size=128)
dataset = dataset.batch(batch_size)
dataset = dataset.repeat(repeat)
return dataset, num_samples
input_shape = (28, 28, 1)
model = Sequential()
model.add(Conv2D(32, kernel_size=(kernel, kernel),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (kernel, kernel), activation='relu'))
model.add(MaxPooling2D(pool_size=(pool, pool)))
model.add(Dropout(dropout))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(num_classes, activation='softmax'))
opt = keras.optimizers.Adadelta(1.0)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=opt,
metrics=['accuracy'])
# Setup TensorBoard
tb_callback = TensorBoard(
log_dir,
update_freq='batch',
profile_batch=0, # workaround for issue #2084
)
#########
### maggy: REPORTER API through keras callback
#########
callbacks = [KerasBatchEnd(reporter, metric='accuracy'), tb_callback]
# Initialize the datasets
train_input, num_train = data_input(train_filenames[0], batch_size=batch_size)
eval_input, num_val = data_input(validation_filenames[0], batch_size=batch_size)
model.fit(train_input,
steps_per_epoch = num_train//batch_size,
callbacks=callbacks, # add callback
epochs=epochs,
verbose=1,
validation_data=eval_input,
validation_steps=num_val//batch_size)
score = model.evaluate(eval_input, steps=num_val//batch_size, verbose=1)
# Using print in the wrapper function will print underneath the Jupyter Cell with a
# prefix to indicate which prints come from the same executor
print('Test loss:', score[0])
print('Test accuracy:', score[1])
#########
### maggy: return the metric to be optimized, test accuracy in this case
#########
return score[1]
```
### 4. Configuring the experiment
Finally, we have to configure the maggy experiment.
There are a variety of parameters to specify, some of which are optional:
1. `num_trials`: number of different parameter combinations to be evaluated
2. `optimizer`: the optimization algorithm to be used (only 'randomsearch' available at the moment)
3. `searchspace`: the searchspace object
4. `direction`: maximize or minimize the specified metric
5. `es_interval`: Interval in seconds, specifying how often the currently running trials should be checked for early stopping. Should be bigger than the `hb_interval`.
6. `es_min`: Minimum number of trials to be finished before starting to check for early stopping. For example, the median stopping rule implements the simple strategy of stopping a trial if its performance falls below the median of finished trials at similar points in time. We only want to start comparing to the median once there are several trials finished.
7. `name`: An experiment name
8. `description`: A description of the experiments that is used in the experiment's logs.
9. `hb_interval`: Time in seconds between the heartbeat messages with the metric to the experiment driver. A sensible value is not much smaller than the frequency in which your training loop updates the metric. So using the KerasBatchEnd reporter callback, it does not make sense having a much smaller interval than the amount of time a batch takes to be processed.
```
from maggy.experiment_config import OptimizationConfig
config = OptimizationConfig(num_trials=4, optimizer="randomsearch", searchspace=sp, direction="max", es_interval=1, es_min=5, name="hp_tuning_test")
```
### 5. Running the experiment
With all the necessary configuration done, we can now run the hyperparameter tuning by calling `lagom` with our prepared training function and the previously created config object.
```
result = experiment.lagom(train_fn=training_function, config=config)
```
To observe the progress, you can check the stderr of the Spark executors. TensorBoard support is added in the coming version.
```
import sys
import os
import json
import numpy as np
import glob
import copy
%matplotlib inline
import matplotlib.pyplot as plt
import importlib
import util_human_model_comparison
import util_figures_psychophysics
sys.path.append('/packages/msutil')
import util_figures
def load_results_dict(results_dict_fn, pop_key_list=['psychometric_function']):
with open(results_dict_fn) as f: results_dict = json.load(f)
for pop_key in pop_key_list:
if pop_key in results_dict.keys():
results_dict.pop(pop_key)
return results_dict
def calc_best_metric(valid_metrics_fn, metric_key='f0_label:accuracy', maximize=True):
if not os.path.exists(valid_metrics_fn):
return None
with open(valid_metrics_fn) as f:
valid_metrics_dict = json.load(f)
if metric_key not in valid_metrics_dict.keys():
# If metric_key does not exist in validation_metrics_dict, look for a similarly named key
for available_key in valid_metrics_dict.keys():
if all([mkp in available_key for mkp in metric_key.split(':')]):
metric_key = available_key
break
metric_values = valid_metrics_dict[metric_key]
if maximize:
best_metric_value = np.max(metric_values)
else:
best_metric_value = np.min(metric_values)
return best_metric_value
# Specify list of models to load (each entry can glob multiple models to average across)
list_regex_model_dir = [
# 'human',
# '../models/default/arch_????',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0191',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0302',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0288',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0335',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0346',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0286',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0083',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0154',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0190',
# '/saved_models/arch_search_v02_topN/f0_label_192/arch_0338',
# '/saved_models/arch_search_v02_topN/REDOsr2000_cf1000_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0320Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC1000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC6000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC9000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species004_spont070_BWlinear_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW05eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW20eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont1eN1_BW10eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/',
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', 'EVAL_SOFTMAX_flat_exc_mean'),
'/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/',
# '/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_neg10pos10_filter_signalLPv01/arch_0???/',
# '/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_neg10pos10_filter_signalHPv00/arch_0???/',
# '/saved_models/arch_search_v02_topN/PND_mfcc_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/arch_0???/',
# '/saved_models/arch_search_v02_topN/PND_mfcc_PNDv08PYSnegated12_TLASmatched12_snr_neg10pos10_phase3/arch_0???/',
# '/saved_models/arch_search_v02_topN/PND_v08spch_noise_TLAS_snr_neg10pos10/arch_0???/',
# '/saved_models/arch_search_v02_topN/PND_v08inst_noise_TLAS_snr_neg10pos10/arch_0???/',
'/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_pos10pos30/arch_0???/',
'/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_posInf/arch_0???/',
]
# Specify basename for validation metrics
basename_valid_metrics = 'validation_metrics.json'
# Specify results_dict basenames for each experiment
experiment_to_basename_map = {
'bernox2005': 'EVAL_SOFTMAX_lowharm_v01_bestckpt_results_dict.json',
'transposedtones': 'EVAL_SOFTMAX_transposedtones_v01_bestckpt_results_dict.json',
'freqshiftedcomplexes': 'EVAL_SOFTMAX_freqshifted_v01_bestckpt_results_dict.json',
'mistunedharmonics': 'EVAL_SOFTMAX_mistunedharm_v01_bestckpt_results_dict.json',
'altphasecomplexes': 'EVAL_SOFTMAX_altphase_v01_bestckpt_results_dict.json',
}
# Specify human results_dict for each experiment
experiment_to_human_results_map = {
'bernox2005': util_human_model_comparison.get_human_results_dict_bernox2005(),
'transposedtones': util_human_model_comparison.get_human_results_dict_transposedtones(),
'freqshiftedcomplexes': util_human_model_comparison.get_human_results_dict_freqshiftedcomplexes(),
'mistunedharmonics': util_human_model_comparison.get_human_results_dict_mistunedharmonics(),
'altphasecomplexes': util_human_model_comparison.get_human_results_dict_altphasecomplexes(),
}
# Specify list of experiments to load
experiment_keys = [
'bernox2005',
'altphasecomplexes',
'freqshiftedcomplexes',
'mistunedharmonics',
'transposedtones',
]
# Compile list of lists of model psychophysical data to plot grid of results (models-by-experiments)
list_list_model_dir = []
list_list_valid_metric = []
list_dict_results_dicts = []
# For each entry in list_regex_model_dir, grab all of the models that are globbed by the regex
for regex_model_dir in list_regex_model_dir:
prefix = None
if isinstance(regex_model_dir, tuple):
(regex_model_dir, prefix) = regex_model_dir
list_model_dir = []
list_valid_metric = []
dict_results_dicts = {ek: [] for ek in experiment_keys}
if 'HUMAN' in regex_model_dir.upper():
list_model_dir = 'HUMAN'
list_valid_metric = []
dict_results_dicts = experiment_to_human_results_map
else:
for idx, model_dir in enumerate(sorted(glob.glob(regex_model_dir))):
fn_valid_metric = os.path.join(model_dir, basename_valid_metrics)
fn_result_dict = {
ek: os.path.join(model_dir, experiment_to_basename_map[ek]) for ek in experiment_keys
}
if 'snr_pos' in model_dir:
high_snr_basename = 'EVAL_SOFTMAX_lowharm_v04_bestckpt_results_dict.json'
fn_result_dict['bernox2005'] = os.path.join(model_dir, high_snr_basename)
high_snr_basename = 'EVAL_SOFTMAX_transposedtones_v02_bestckpt_results_dict.json'
fn_result_dict['transposedtones'] = os.path.join(model_dir, high_snr_basename)
print(model_dir)
if prefix is not None:
for k in fn_result_dict.keys():
fn_result_dict[k] = fn_result_dict[k].replace('EVAL_SOFTMAX', prefix)
print(fn_result_dict[k])
include_model_flag = True
for ek in experiment_keys:
if not os.path.exists(fn_result_dict[ek]): include_model_flag = False
if include_model_flag:
list_valid_metric.append(calc_best_metric(fn_valid_metric))
list_model_dir.append(model_dir)
# Load results_dict for each model
for ek, results_dict_fn in fn_result_dict.items():
with open(results_dict_fn) as f:
dict_results_dicts[ek].append(json.load(f))
# Add lists of model results to the master list
list_list_valid_metric.append(list_valid_metric)
list_list_model_dir.append(list_model_dir)
list_dict_results_dicts.append(dict_results_dicts)
print(regex_model_dir, len(list_model_dir), list_valid_metric)
importlib.reload(util_figures)
importlib.reload(util_figures_psychophysics)
importlib.reload(util_human_model_comparison)
experiment_to_plot_fcn_map = {
'bernox2005': util_figures_psychophysics.make_bernox_threshold_plot,
'transposedtones': util_figures_psychophysics.make_TT_threshold_plot,
'freqshiftedcomplexes': util_figures_psychophysics.make_freqshiftedcomplexes_plot,
'mistunedharmonics': util_figures_psychophysics.make_mistuned_harmonics_line_plot,
'altphasecomplexes': util_figures_psychophysics.make_altphase_histogram_plot,
}
experiment_keys = [
'bernox2005',
'altphasecomplexes',
'freqshiftedcomplexes',
'mistunedharmonics',
'transposedtones',
]
NROWS = len(experiment_keys)
NCOLS = len(list_dict_results_dicts)
figsize = (4*NCOLS*0.9, 3*NROWS*0.9)
gridspec_kw = {}
fig, ax = plt.subplots(nrows=NROWS, ncols=NCOLS, figsize=figsize, gridspec_kw=gridspec_kw)
ax = np.array(ax).reshape([NROWS, NCOLS])
for c_idx, (dict_results_dicts, list_model_dir) in enumerate(zip(list_dict_results_dicts, list_list_model_dir)):
for r_idx, key in enumerate(experiment_keys):
results_dict_input = dict_results_dicts[key]
plot_fcn = experiment_to_plot_fcn_map[key]
# Specify kwargs for all psychophysics subplots
kwargs = {
'include_yerr': True,
}
# Modify kwargs for special cases
if (isinstance(list_model_dir, str)) and (list_model_dir == 'HUMAN'):
kwargs['include_yerr'] = False
plot_fcn(ax[r_idx, c_idx], results_dict_input, **kwargs)
if c_idx > 0:
ax[r_idx, c_idx].xaxis.label.set_color('w')
ax[r_idx, c_idx].yaxis.label.set_color('w')
plt.tight_layout()
plt.show()
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_arch_search_v02_top10archs_individually.pdf'
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_manipulation_IHClowpass.pdf'
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_manipulation_cochFilterBWs.pdf'
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_manipulation_spont_rate.pdf'
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_manipulation_flat_exc_mean_test.pdf'
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_manipulation_sound_statistics_training_stimuli.pdf'
# fn_save = 'figures/archive_2021_05_07_pitchnet_paper_figures_v04/psychophysics_all_manipulation_sound_statistics_SNR.pdf'
# fig.savefig(fn_save, bbox_inches='tight', pad_inches=0)
# for r in range(ax.shape[0]):
# for c in range(ax.shape[1]):
# bbox_inches = ax[r, c].get_tightbbox(fig.canvas.get_renderer()).transformed(fig.dpi_scale_trans.inverted())
# save_fn = os.path.join(save_dir, 'panel_{}{}.pdf'.format(r, c))
# fig.savefig(save_fn, bbox_inches=bbox_inches, pad_inches=0)
### GENERIC PARAMETERS
figsize=(4,3)
poster_plot_kwargs = {
'fontsize_labels': 16,
'fontsize_legend': 14,
'fontsize_ticks': 14,
'include_yerr': True,
'kwargs_bootstrap': {
'bootstrap_repeats': 1000,
'metric_function': 'median',
},
}
### Build dictionary of human results_dict for each experiment
experiment_to_human_results_map = {
'bernox2005': util_human_model_comparison.get_human_results_dict_bernox2005(),
'transposedtones': util_human_model_comparison.get_human_results_dict_transposedtones(),
'freqshiftedcomplexes': util_human_model_comparison.get_human_results_dict_freqshiftedcomplexes(),
'mistunedharmonics': util_human_model_comparison.get_human_results_dict_mistunedharmonics(),
'altphasecomplexes': util_human_model_comparison.get_human_results_dict_altphasecomplexes(),
}
import sys
import os
import json
import numpy as np
import glob
import copy
import importlib
%matplotlib inline
import matplotlib.pyplot as plt
import util_human_model_comparison
import util_figures_psychophysics
importlib.reload(util_figures_psychophysics)
sys.path.append('/packages/msutil')
import util_figures
### SPECIFY THE OUTERMOST DIRECTORY CONTAINING ALL MODELS
model_dir = '/om2/user/msaddler/pitchnet/saved_models/'
### SPECIFY RESULTS DICT BASENAME: determines which experiment to plot
# results_dict_basename = 'EVAL_SOFTMAX_bernox2005_FixedFilter_bestckpt_results_dict.json'
results_dict_basename = 'EVAL_SOFTMAX_lowharm_v01_bestckpt_results_dict.json'
### SPECIFY REGULAR EXPRESSIONS FOR MODELS: (regex, model_name) pairs
master_list = [
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species004_spont070_BWlinear_IHC3000Hz_IHC7order/arch_0???/', 'Linearly spaced'),
# # ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW02eN1_IHC3000Hz_IHC7order/arch_0???/', '4x narrower BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW05eN1_IHC3000Hz_IHC7order/arch_0???/', '2x narrower BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', 'Human filter BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW20eN1_IHC3000Hz_IHC7order/arch_0???/', '2x broader BWs'),
# # ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW40eN1_IHC3000Hz_IHC7order/arch_0???/', '4x broader BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW05eN1_IHC3000Hz_IHC7order/arch_0191_seed*/', '2x narrower BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0191_seed*/', 'Human filter BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW20eN1_IHC3000Hz_IHC7order/arch_0191_seed*/', '2x broader BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', 'Natural'),
# ('/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_neg10pos10_filter_signalLPv01/arch_0???/', 'Lowpass'),
# ('/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_neg10pos10_filter_signalHPv00/arch_0???/', 'Highpass'),
# ('/saved_models/arch_search_v02_topN/PND_mfcc_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/arch_0???/', 'Matched'),
# ('/saved_models/arch_search_v02_topN/PND_mfcc_PNDv08PYSnegated12_TLASmatched12_snr_neg10pos10_phase3/arch_0???/', 'Anti-matched'),
# ('/saved_models/arch_search_v02_topN/PND_v08spch_noise_TLAS_snr_neg10pos10/arch_0???/', 'Speech only'),
# ('/saved_models/arch_search_v02_topN/PND_v08inst_noise_TLAS_snr_neg10pos10/arch_0???/', 'Music only'),
# ('/saved_models/arch_search_v02_topN/PND_v08_noise_TLAS_snr_posInf/arch_0???/EVAL_SOFTMAX_lowharm_v04_bestckpt_results_dict.json', 'Speech + music (natural)\nwith no background noise'),
# ('/saved_models/arch_search_v02_topN/cochlearn_PND_v08spch_noise_TLAS_snr_neg10pos10/arch_0???/', 'Speech only'),
# ('/saved_models/arch_search_v02_topN/cochlearn_PND_v08inst_noise_TLAS_snr_neg10pos10/arch_0???/', 'Music only'),
('/saved_models/arch_search_v02_topN/cochlearn_IHC4000Hz_PND_v08spch_noise_TLAS_snr_neg10pos10/arch_0???/', 'Speech only'),
('/saved_models/arch_search_v02_topN/cochlearn_IHC4000Hz_PND_v08inst_noise_TLAS_snr_neg10pos10/arch_0???/', 'Music only'),
# ('/saved_models/arch_search_v02_topN/f0_label_024/arch_0???/', '1/2 st'),
# ('/saved_models/arch_search_v02_topN/f0_label_048/arch_0???/', '1/4 st'),
# ('/saved_models/arch_search_v02_topN/f0_label_096/arch_0???/', '1/8 st'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', '1/16 st'),
# ('/saved_models/arch_search_v02_topN/f0_label_384/arch_0???/', '1/32 st'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '100 ANF'),
# ('/saved_models/arch_search_v02_topN/REDOsr2000_cfI100_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '100$^T$ ANF'),
# ('/saved_models/arch_search_v02_topN/REDOsr2000_cfI250_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '250$^T$ ANF'),
# ('/saved_models/arch_search_v02_topN/REDOsr2000_cfI500_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '500$^T$ ANF'),
# ('/saved_models/arch_search_v02_topN/REDOsr2000_cf1000_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '1000$^T$ ANF'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW05eN1_IHC3000Hz_IHC7order/arch_0???/', '2x narrower BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', 'Human filter BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW20eN1_IHC3000Hz_IHC7order/arch_0???/', '2x broader BWs'),
# ('/saved_models/arch_search_v02_topN/REDOsr2000_cf1000_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '50Hz'),
# # ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '50Hz'),
# # ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0250Hz_IHC7order/arch_0???/', '250Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0320Hz_IHC7order/arch_0???/', '320Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC1000Hz_IHC7order/arch_0???/', '1000Hz'),
# # ('HUMAN', 'Humans'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', '3000Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC6000Hz_IHC7order/arch_0???/', '6000Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC9000Hz_IHC7order/arch_0???/', '9000Hz'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_bestckpt_results_dict.json', 'train NH + test NH'),
# # ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_dbspl85_bestckpt_results_dict.json', 'train NH + test NH (85dB)'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_cohc0_lowharm_v01_bestckpt_results_dict.json', 'train NH + test HI'),
# # ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_cohc0_lowharm_v01_dbspl85_bestckpt_results_dict.json', 'train NH + test HI (85dB)'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order_cohc0_dBSPL60to90/arch_0???/EVAL_SOFTMAX_cohc0_lowharm_v01_bestckpt_results_dict.json', 'train HI + test HI'),
# # ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order_cohc0_dBSPL60to90/arch_0???/EVAL_SOFTMAX_cohc0_lowharm_v01_dbspl85_bestckpt_results_dict.json', 'train HI + test HI (85dB)'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW05eN1_IHC3000Hz_IHC7order/arch_0???/', '2x narrower BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', 'Human filter BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW20eN1_IHC3000Hz_IHC7order/arch_0???/', '2x broader BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_BW05eN1_lowharm_v01_bestckpt_results_dict.json', '2x narrower BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_bestckpt_results_dict.json', 'Human filter BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_BW20eN1_lowharm_v01_bestckpt_results_dict.json', '2x broader BWs'),
# ('HUMAN', 'Humans'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_bestckpt_results_dict.json', 'Model'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_thresh40_bestckpt_results_dict.json', 'Model (thresh40)'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_noise08_bestckpt_results_dict.json', 'Model (noise08)'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/EVAL_SOFTMAX_lowharm_v01_noise10_bestckpt_results_dict.json', 'Model (noise10)'),
]
### LOAD PSYCHOPHYSICS EXPERIMENT RESULTS
model_keys = []
results_dicts = {}
master_count = 0
for fn_regex, model_key in master_list:
results_dicts[model_key] = []
model_keys.append(model_key)
if fn_regex.upper() == 'HUMAN':
results_dicts[model_key].append(util_human_model_comparison.get_human_results_dict_bernox2005())
else:
if not fn_regex[0] == '/': fn_regex = os.path.join(model_dir, fn_regex)
if '.json' not in fn_regex: fn_regex = os.path.join(fn_regex, results_dict_basename)
for results_dict_fn in sorted(glob.glob(fn_regex)):
master_count += 1
with open(results_dict_fn) as f:
results_dicts[model_key].append(json.load(f))
print('Loaded results from {} files ({})'.format(master_count, results_dict_basename))
for key in results_dicts.keys():
print(key, len(results_dicts[key]))
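The loading pattern above (expand a filename pattern with `glob`, then read each results_dict JSON into a list) can be exercised end-to-end with temporary files; this is a self-contained sketch using hypothetical file names and only the standard library:

```python
import glob
import json
import os
import tempfile

# Create two fake results_dict JSON files matching a glob pattern
tmpdir = tempfile.mkdtemp()
for i in range(2):
    fn = os.path.join(tmpdir, 'arch_{:04d}_results_dict.json'.format(i))
    with open(fn, 'w') as f:
        json.dump({'f0dl': [0.5, 1.5]}, f)

# Expand the pattern and load each matching file (sorted for stable order)
results = []
for fn in sorted(glob.glob(os.path.join(tmpdir, 'arch_*_results_dict.json'))):
    with open(fn) as f:
        results.append(json.load(f))
print(len(results))  # 2
```

Sorting the glob output matters: `glob.glob` does not guarantee any ordering, so without `sorted` the model list could differ between runs.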
### bernox2005 discrimination thresholds
importlib.reload(util_figures_psychophysics)
importlib.reload(util_human_model_comparison)
plot_fcn = util_figures_psychophysics.make_bernox_threshold_plot
human_rd = util_human_model_comparison.get_human_results_dict_bernox2005()
legend_loc = 'lower right'
add_lines = False
save_dir = '/om2/user/msaddler/pitchnet/assets_psychophysics/figures/archive_2021_05_07_pitchnet_paper_figures_v04/'
if 'noise' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR FILTERING SOUNDS
color_list = [color_list[idx] for idx in [0, 2, 4]]
legend_loc = 'lower right'
save_fn = 'tmp.pdf'
elif 'natural' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR FILTERING SOUNDS
color_list = [color_list[idx] for idx in [0, 2, 4]]
legend_loc = 'upper right'
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_natural.pdf')
elif 'matched' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR SYNTHETIC TONES
color_list = [color_list[idx] for idx in [0, 4]]
legend_loc = 'upper right'
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_synthetic.pdf')
elif 'only' in model_keys[0].lower():
color_list = util_figures.get_color_list(8, cmap_name='Accent') # CMAP FOR SPEECH VS MUSIC
color_list = [color_list[idx] for idx in [4,5]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_speech_vs_music.pdf')
if 'cochlearn_IHC4000Hz' in master_list[0][0]:
save_fn = save_fn.replace('.pdf', '_cochlearn_IHC4000Hz.pdf')
elif 'cochlearn' in master_list[0][0]:
save_fn = save_fn.replace('.pdf', '_cochlearn.pdf')
elif 'BW' in model_keys[1].upper():
color_list = ['#5ab4ac', 'k', '#a6611a'] # CMAP FOR COCH FILTER BW
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_cochFilterBWs.pdf')
elif 'hz' in model_keys[0].lower():
color_list = ['#fdb863', '#e08214', '#b35806', 'k', '#8073ac', '#b2abd2'] # CMAP FOR IHC LOWPASS
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_IHClowpass.pdf')
elif '/' in model_keys[0]:
color_list = ['#fed976', '#feb24c', '#fd8d3c', '#f03b20', '#bd0026'] # CMAP for F0 bin width
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_f0_bin_width.pdf')
elif 'ANF' in model_keys[0]:
color_list = ['#bdd7e7', '#6baed6', '#3182bd', '#08519c'] # CMAP for number of ANFs
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_IHC0050Hz_num_ANFs.pdf')
else:
raise ValueError("Failed to automatically specify color_list and save_fn!!!")
color_list = color_list + ['b']
kwargs = {
'xlimits': [0,31],
'include_yerr': True,
'legend_on': True,
'restrict_conditions': [0],
}
kwargs['kwargs_legend'] = {
'loc': legend_loc,
'ncol': 2,
'frameon': False,
'framealpha': 1.0,
'facecolor': 'w',
'edgecolor': 'k',
'handlelength': 0.5,
'markerscale': 0.0,
'fontsize': 10.0,
'borderpad': 0.6,
'borderaxespad': 0.3,
}
if len(model_keys) < 4:
kwargs['kwargs_legend']['ncol'] = 1
kwargs['kwargs_legend']['frameon'] = True
NROWS = 1
NCOLS = 1
# figsize = (4*NCOLS*.9, 3*NROWS*.9)
figsize = (4, 3)
gridspec_kw = {}
fig, ax = plt.subplots(nrows=NROWS, ncols=NCOLS, figsize=figsize, gridspec_kw=gridspec_kw)
### PLOT MODEL ###
zorder = 0
for cidx, key in enumerate(model_keys):
kwargs['sine_plot_kwargs'] = {
'label': key,
'color': color_list[cidx],
'lw': 3,
'zorder': zorder,
}
kwargs['rand_plot_kwargs'] = {
'label': None,
'color': color_list[cidx],
'lw': 3,
'zorder': zorder,
}
# for rd in results_dicts[key]:
# plot_fcn(ax, rd, **kwargs)
rd_itr0 = plot_fcn(ax, results_dicts[key], **kwargs)
zorder -= 1
import matplotlib
leg = [c for c in ax.get_children() if isinstance(c, matplotlib.legend.Legend)]
if len(leg) == 1:
for legobj in leg[0].legendHandles:
legobj.set_linewidth(6.0)
plt.tight_layout()
plt.show()
# print(save_fn)
# fig.savefig(save_fn, bbox_inches='tight', pad_inches=0, transparent=False)
# fig.savefig('tmp.pdf', bbox_inches='tight', pad_inches=0, transparent=False)
### bernox2005 discrimination thresholds
importlib.reload(util_figures_psychophysics)
importlib.reload(util_human_model_comparison)
plot_fcn = util_figures_psychophysics.make_bernox_threshold_plot
human_rd = util_human_model_comparison.get_human_results_dict_bernox2005()
save_dir = '/om2/user/msaddler/pitchnet/assets_psychophysics/figures/archive_2020_09_26_pitchnet_paper_figures_v03/'
if 'natural' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR FILTERING SOUNDS
color_list = [color_list[idx] for idx in [0, 2, 4]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_natural.pdf')
elif 'matched' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR SYNTHETIC TONES
color_list = [color_list[idx] for idx in [0, 4]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_synthetic.pdf')
elif 'only' in model_keys[0].lower():
color_list = util_figures.get_color_list(8, cmap_name='Accent') # CMAP FOR SPEECH VS MUSIC
color_list = [color_list[idx] for idx in [4,5]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_speech_vs_music.pdf')
elif 'BW' in model_keys[1].upper():
# color_list = ['#5ab4ac', 'k', '#a6611a'] # CMAP FOR COCH FILTER BW
# save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_cochFilterBWs.pdf')
color_list = ['#f768a1', '#5ab4ac', 'k', '#a6611a'] # CMAP FOR COCH FILTER BW
if len(model_keys) < 4:
color_list = color_list[1:]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_cochFilterBWs_linear.pdf')
elif 'hz' in model_keys[0].lower():
color_list = ['#fdb863', '#e08214', '#b35806', 'k', '#8073ac', '#b2abd2'] # CMAP FOR IHC LOWPASS
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_IHClowpass.pdf')
elif '/' in model_keys[0]:
color_list = ['#fed976', '#feb24c', '#fd8d3c', '#f03b20', '#bd0026'] # CMAP for F0 bin width
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_f0_bin_width.pdf')
elif 'ANF' in model_keys[0]:
color_list = ['#bdd7e7', '#6baed6', '#3182bd', '#08519c'] # CMAP for number of ANFs
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_IHC0050Hz_num_ANFs.pdf')
elif 'train NH' in model_keys[0]:
color_list = util_figures.get_color_list(12, cmap_name='Paired')
save_fn = None
else:
color_list = util_figures.get_color_list(9, cmap_name='Set1')
save_fn = None
# raise ValueError("Failed to automatically specify color_list and save_fn!!!")
color_list = color_list + ['b']
def get_transition_point(results_dict_input, phase_mode=0, transition_f0dl=1.0,):
'''
Compute transition points (the lowest harmonic number at which sine-phase
F0 discrimination thresholds first exceed `transition_f0dl` %F0) for each
results_dict and return their bootstrapped mean, error, and per-model values.
'''
if not isinstance(results_dict_input, list):
results_dict_input = [results_dict_input]
f0dls = np.array([rd['f0dl'] for rd in results_dict_input])
list_phase_mode = np.array(results_dict_input[0]['phase_mode'])
list_low_harm = np.array(results_dict_input[0]['low_harm'])
f0dls = f0dls[:, list_phase_mode == phase_mode]
list_low_harm = list_low_harm[list_phase_mode == phase_mode]
list_phase_mode = list_phase_mode[list_phase_mode == phase_mode]
list_transition = np.zeros([f0dls.shape[0]])
for itr0 in range(f0dls.shape[0]):
list_transition[itr0] = list_low_harm[f0dls[itr0, :] > transition_f0dl][0]
transition_mean, transition_err = util_figures_psychophysics.bootstrap(list_transition)
return transition_mean, transition_err, list_transition
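The core of `get_transition_point` is a first-crossing search over the threshold curve. A minimal standalone illustration with synthetic threshold data (plain NumPy, without the bootstrap helper):

```python
import numpy as np

# Synthetic sine-phase thresholds (%F0) that rise with lowest harmonic number
low_harm = np.arange(1, 31)
f0dl = 0.2 * np.exp(0.15 * low_harm)  # monotonically increasing thresholds

# Transition point: first lowest-harmonic number where threshold exceeds 1 %F0
transition_f0dl = 1.0
transition = low_harm[f0dl > transition_f0dl][0]
print(transition)  # 11
```

Note that this first-crossing definition assumes the threshold curve crosses `transition_f0dl` at least once; otherwise the boolean index is empty and the `[0]` raises an `IndexError`.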
fontsize_labels=12
fontsize_ticks=12
xlimits=[0, 31]
ylimits=[1e-1, 1e2]
kwargs = {
'xlimits': xlimits,
'ylimits': ylimits,
'include_yerr': True,
'legend_on': True,
'restrict_conditions': [0],
'kwargs_legend': {
# 'ncol': 2,
'handlelength': 0.5,
'borderpad': 0,
'columnspacing': 1,
'loc': 'lower right',
'handletextpad': 0.5,
'ncol': 1,
# 'fontsize': 8,
},
# 'kwargs_bootstrap':{'bootstrap_repeats': 1000, 'metric_function': 'median'},
}
if len(model_keys) <= 4:
kwargs['kwargs_legend']['ncol'] = 1
NROWS = 2
NCOLS = 2
wratio = 6.5
hratio = 6.5 * 3.6/4
figsize = (4 * (wratio + 1) / wratio, 3 * (hratio + 1) / hratio)
gridspec_kw = {
'width_ratios': [1, wratio],
'height_ratios': [hratio, 1],
'wspace': 0.0,
'hspace': 0.0,
}
fig, ax_arr = plt.subplots(nrows=NROWS, ncols=NCOLS, figsize=figsize, gridspec_kw=gridspec_kw, sharex='col', sharey='row')
### PLOT MODEL ###
zorder = 0
cidx = 0
for itr0, key in enumerate(model_keys):
kwargs['sine_plot_kwargs'] = {
'label': key,
'color': color_list[cidx],
'lw': 3,
'zorder': zorder,
}
kwargs['rand_plot_kwargs'] = {
'label': None,
'color': color_list[cidx],
'lw': 3,
'zorder': zorder,
}
if False:#'HUMAN' in key.upper():
kwargs['sine_plot_kwargs'] = {
'color': [0, 0, 0],
'label': None,
'lw': 1,
'ls': '--',
'zorder': 1000,
'marker': 'o',
'ms': 6,
'markerfacecolor': [1, 1, 1],
'markeredgecolor': [0, 0, 0],
'markeredgewidth': 1,
}
rd_itr0 = plot_fcn(ax_arr[0, 1], results_dicts[key], **kwargs)
if True:#'HUMAN' not in key.upper():
transition_mean, transition_err, list_transition = get_transition_point(results_dicts[key])
threshold_mask = np.logical_and(np.array(rd_itr0['phase_mode']) == 0,
np.array(rd_itr0['low_harm']) == 1)
log10_f0dl = rd_itr0['log10_f0dl'][threshold_mask][0]
log10_f0dl_err = rd_itr0['log10_f0dl_err'][threshold_mask][0]
line_plot_kwargs = {
'lw': 0.50,
'ls': '--',
'dashes': (2, 2),
'color': color_list[cidx],
'zorder': zorder,
}
line_ymin = 1e-1
line_ymax = np.power(10.0, np.interp(transition_mean, rd_itr0['low_harm'], rd_itr0['log10_f0dl']))
ax_arr[0, 1].plot([transition_mean, transition_mean],
[line_ymin, line_ymax],
**line_plot_kwargs)
errorbar_kwargs = {
'fmt': 's',
'color': color_list[cidx],
'ms': 2,
'ecolor': color_list[cidx],
'elinewidth': 3,
'capsize': 0,
'capthick': 0,
}
yerr = np.array([
np.power(10.0, log10_f0dl) - np.power(10.0, log10_f0dl-2*log10_f0dl_err),
np.power(10.0, log10_f0dl+2*log10_f0dl_err) - np.power(10.0, log10_f0dl)
]).reshape([2, -1])
ax_arr[0, 0].errorbar(itr0, np.power(10.0, log10_f0dl), xerr=None, yerr=yerr, **errorbar_kwargs)
ax_arr[1, 1].errorbar(transition_mean, len(model_keys)-itr0-1, xerr=2*transition_err, yerr=None, **errorbar_kwargs)
ax_arr[1, 1].plot([transition_mean, transition_mean],
[len(model_keys)-itr0-1, 100],
**line_plot_kwargs)
cidx += 1
zorder -= 1
import matplotlib
leg = [c for c in ax_arr[0, 1].get_children() if isinstance(c, matplotlib.legend.Legend)]
if len(leg) == 1:
for legobj in leg[0].legendHandles:
if False:#'human' in str(legobj).lower():
legobj.set_linewidth(1.0)
legobj._legmarker.set_markersize(6)
else:
legobj.set_linewidth(6.0)
ax_arr[0, 1].set(xlabel=None, ylabel=None, xticklabels=[], yticklabels=[])
ax_arr[0, 1].tick_params(which='both', length=0)
util_figures.format_axes(ax_arr[0, 0],
str_xlabel=None,
str_ylabel='F0 discrimination\nthreshold (%F0)',
fontsize_labels=fontsize_labels,
fontsize_ticks=fontsize_ticks,
fontweight_labels=None,
xscale='linear',
yscale='log',
xlimits=[-1, itr0+1],
ylimits=ylimits,
xticks=[],
yticks=None,
xticks_minor=[],
yticks_minor=None,
xticklabels=[],
yticklabels=None,
spines_to_hide=[],
major_tick_params_kwargs_update={},
minor_tick_params_kwargs_update={})
util_figures.format_axes(ax_arr[1, 1],
str_xlabel='Lowest harmonic number',
str_ylabel=None,
fontsize_labels=fontsize_labels,
fontsize_ticks=fontsize_ticks,
fontweight_labels=None,
xscale='linear',
yscale='linear',
xlimits=xlimits,
ylimits=[-1, itr0+1],
xticks=np.arange(xlimits[0], xlimits[1], 5),
yticks=[],
xticks_minor=np.arange(xlimits[0], xlimits[1], 1),
yticks_minor=None,
xticklabels=None,
yticklabels=None,
spines_to_hide=[],
major_tick_params_kwargs_update={},
minor_tick_params_kwargs_update={})
ax_arr[0, 0].text(np.mean([-1, itr0+1]), 1.8, 'Best thresholds', {'ha': 'center', 'va': 'bottom'}, rotation=90, fontsize=11)
ax_arr[1, 1].text(9, np.mean([-1, itr0+1]), 'Transition points', {'ha': 'left', 'va': 'center'}, rotation=0, fontsize=11)
ax_arr[1, 0].axis('off')
plt.tight_layout()
plt.show()
print(save_fn)
# fig.savefig(save_fn, bbox_inches='tight', pad_inches=0, transparent=False)
fig.savefig('tmp.pdf', bbox_inches='tight', pad_inches=0, transparent=False)
### bernox2005 discrimination thresholds
importlib.reload(util_figures_psychophysics)
importlib.reload(util_human_model_comparison)
plot_fcn = util_figures_psychophysics.make_bernox_threshold_plot
human_rd = util_human_model_comparison.get_human_results_dict_bernox2005()
save_dir = '/om2/user/msaddler/pitchnet/assets_psychophysics/figures/archive_2020_09_26_pitchnet_paper_figures_v03/'
if 'natural' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR FILTERING SOUNDS
color_list = [color_list[idx] for idx in [0, 2, 4]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_natural.pdf')
elif 'matched' in model_keys[0].lower():
color_list = util_figures.get_color_list(6, cmap_name='gist_heat') # CMAP FOR SYNTHETIC TONES
color_list = [color_list[idx] for idx in [0, 4]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_synthetic.pdf')
elif 'only' in model_keys[0].lower():
color_list = util_figures.get_color_list(8, cmap_name='Accent') # CMAP FOR SPEECH VS MUSIC
color_list = [color_list[idx] for idx in [4,5]]
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_sound_statistics_speech_vs_music.pdf')
elif 'BW' in model_keys[1].upper():
# color_list = ['#5ab4ac', 'k', '#a6611a'] # CMAP FOR COCH FILTER BW
# save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_cochFilterBWs.pdf')
color_list = ['#f768a1', '#5ab4ac', 'k', '#a6611a'] # CMAP FOR COCH FILTER BW
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_cochFilterBWs_linear.pdf')
elif 'hz' in model_keys[0].lower():
color_list = ['#fdb863', '#e08214', '#b35806', 'k', '#8073ac', '#b2abd2'] # CMAP FOR IHC LOWPASS
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_IHClowpass.pdf')
elif '/' in model_keys[0]:
color_list = ['#fed976', '#feb24c', '#fd8d3c', '#f03b20', '#bd0026'] # CMAP for F0 bin width
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_f0_bin_width.pdf')
elif 'ANF' in model_keys[0]:
color_list = ['#c6dbef', '#bdd7e7', '#6baed6', '#3182bd', '#08519c'] # CMAP for number of ANFs
save_fn = os.path.join(save_dir, 'psychophysics_bernoxSinePhase_manipulation_IHC0050Hz_num_ANFs.pdf')
else:
raise ValueError("Failed to automatically specify color_list and save_fn!!!")
color_list = color_list + ['b']
def get_transition_point(results_dict_input, phase_mode=0, transition_f0dl=1.0,):
'''
Compute transition points (the lowest harmonic number at which sine-phase
F0 discrimination thresholds first exceed `transition_f0dl` %F0) for each
results_dict and return their bootstrapped mean, error, and per-model values.
'''
if not isinstance(results_dict_input, list):
results_dict_input = [results_dict_input]
f0dls = np.array([rd['f0dl'] for rd in results_dict_input])
list_phase_mode = np.array(results_dict_input[0]['phase_mode'])
list_low_harm = np.array(results_dict_input[0]['low_harm'])
f0dls = f0dls[:, list_phase_mode == phase_mode]
list_low_harm = list_low_harm[list_phase_mode == phase_mode]
list_phase_mode = list_phase_mode[list_phase_mode == phase_mode]
list_transition = np.zeros([f0dls.shape[0]])
for itr0 in range(f0dls.shape[0]):
list_transition[itr0] = list_low_harm[f0dls[itr0, :] > transition_f0dl][0]
transition_mean, transition_err = util_figures_psychophysics.bootstrap(list_transition)
return transition_mean, transition_err, list_transition
fontsize_labels=12
fontsize_ticks=12
xlimits=[0, 31]
ylimits=[1e-1, 1e2]
kwargs = {
'xlimits': xlimits,
'ylimits': ylimits,
'include_yerr': True,
'legend_on': True,
'restrict_conditions': [0],
'kwargs_legend': {
'ncol': 2,
'handlelength': 0.5,
'borderpad': 0,
'columnspacing': 1,
'loc': 'lower right',
'handletextpad': 0.5,
},
}
if len(model_keys) <= 4:
kwargs['kwargs_legend']['ncol'] = 1
NROWS = 1
NCOLS = 2
wratio = 6.5
figsize = (4 * (wratio + 1) / wratio, 3 * (wratio + 1/2) / wratio)
gridspec_kw = {
'width_ratios': [1, wratio],
'wspace': 0.0,
'hspace': 0.0,
}
fig, ax_arr = plt.subplots(nrows=NROWS, ncols=NCOLS, figsize=figsize, gridspec_kw=gridspec_kw, sharex='col', sharey='row')
ax_arr = ax_arr.reshape([1, -1])
### PLOT MODEL ###
zorder = 0
cidx = 0
for itr0, key in enumerate(model_keys):
kwargs['sine_plot_kwargs'] = {
'label': key,
'color': color_list[cidx],
'lw': 3,
'zorder': zorder,
}
kwargs['rand_plot_kwargs'] = {
'label': None,
'color': color_list[cidx],
'lw': 3,
'zorder': zorder,
}
if 'HUMAN' in key.upper():
kwargs['sine_plot_kwargs'] = {
'color': [0, 0, 0],
'label': None,
'lw': 1,
'ls': '--',
'zorder': 1000,
'marker': 'o',
'ms': 6,
'markerfacecolor': [1, 1, 1],
'markeredgecolor': [0, 0, 0],
'markeredgewidth': 1,
}
rd_itr0 = plot_fcn(ax_arr[0, 1], results_dicts[key], **kwargs)
if 'HUMAN' not in key.upper():
transition_mean, transition_err, list_transition = get_transition_point(results_dicts[key])
threshold_mask = np.logical_and(np.array(rd_itr0['phase_mode']) == 0,
np.array(rd_itr0['low_harm']) == 1)
log10_f0dl = rd_itr0['log10_f0dl'][threshold_mask][0]
log10_f0dl_err = rd_itr0['log10_f0dl_err'][threshold_mask][0]
line_plot_kwargs = {
'lw': 0.50,
'ls': '--',
'dashes': (2, 2),
'color': color_list[cidx],
'zorder': zorder,
}
line_ymin = 1e-1
line_ymax = np.power(10.0, np.interp(transition_mean, rd_itr0['low_harm'], rd_itr0['log10_f0dl']))
# ax_arr[0, 1].plot([transition_mean, transition_mean],
# [line_ymin, line_ymax],
# **line_plot_kwargs)
errorbar_kwargs = {
'fmt': 's',
'color': color_list[cidx],
'ms': 2,
'ecolor': color_list[cidx],
'elinewidth': 3,
'capsize': 0,
'capthick': 0,
}
yerr = np.array([
np.power(10.0, log10_f0dl) - np.power(10.0, log10_f0dl-2*log10_f0dl_err),
np.power(10.0, log10_f0dl+2*log10_f0dl_err) - np.power(10.0, log10_f0dl)
]).reshape([2, -1])
ax_arr[0, 0].errorbar(itr0, np.power(10.0, log10_f0dl), xerr=None, yerr=yerr, **errorbar_kwargs)
cidx += 1
zorder -= 1
import matplotlib
leg = [c for c in ax_arr[0, 1].get_children() if isinstance(c, matplotlib.legend.Legend)]
if len(leg) == 1:
for legobj in leg[0].legendHandles:
if 'human' in str(legobj).lower():
legobj.set_linewidth(1.0)
legobj._legmarker.set_markersize(6)
else:
legobj.set_linewidth(6.0)
ax_arr[0, 1].set(ylabel=None, yticklabels=[])
ax_arr[0, 1].tick_params(which='both', axis='y', length=0)
util_figures.format_axes(ax_arr[0, 0],
str_xlabel=None,
str_ylabel='F0 discrimination\nthreshold (%F0)',
fontsize_labels=fontsize_labels,
fontsize_ticks=fontsize_ticks,
fontweight_labels=None,
xscale='linear',
yscale='log',
xlimits=[-1, itr0+1],
ylimits=ylimits,
xticks=[],
yticks=None,
xticks_minor=[],
yticks_minor=None,
xticklabels=[],
yticklabels=None,
spines_to_hide=[],
major_tick_params_kwargs_update={},
minor_tick_params_kwargs_update={})
ax_arr[0, 0].text(np.mean([-1, itr0+1]), 1.8, 'Best thresholds', {'ha': 'center', 'va': 'bottom'}, rotation=90, fontsize=11)
plt.tight_layout()
plt.show()
# print(save_fn)
# fig.savefig(save_fn, bbox_inches='tight', pad_inches=0, transparent=False)
# fig.savefig('tmp.pdf', bbox_inches='tight', pad_inches=0, transparent=False)
import sys
import os
import json
import numpy as np
import scipy.stats
import glob
import copy
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker
import matplotlib.cm
import matplotlib.colors
import f0dl_bernox
import f0dl_generalized
import util_human_model_comparison
import util_figures_psychophysics
import importlib
importlib.reload(f0dl_generalized)
importlib.reload(util_human_model_comparison)
importlib.reload(util_figures_psychophysics)
sys.path.append('/packages/msutil')
import util_figures
model_dir_list = [
# ('/saved_models/arch_search_v02_topN/connear_IHC3000Hz/arch_0???/', 'CoNNear_IHC3000Hz'),
# ('HUMAN', 'Human listeners\n(Wier et al., 1977)'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW02eN1_IHC3000Hz_IHC7order/arch_0???/', '4x narrower BWs'),
('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW05eN1_IHC3000Hz_IHC7order/arch_0???/', '2x narrower BWs'),
('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', 'Human filter BWs'),
('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW20eN1_IHC3000Hz_IHC7order/arch_0???/', '2x broader BWs'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW40eN1_IHC3000Hz_IHC7order/arch_0???/', '4x broader BWs'),
# ('/saved_models/arch_search_v02_topN/REDOsr2000_cf1000_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '50Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0320Hz_IHC7order/arch_0???/', '320Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC1000Hz_IHC7order/arch_0???/', '1000Hz'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', '3000Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC6000Hz_IHC7order/arch_0???/', '6000Hz'),
# ('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC9000Hz_IHC7order/arch_0???/', '9000Hz'),
]
if 'BW0' in model_dir_list[0][0]:
basename, key_xval = ('EVAL_SOFTMAX_testsnr_v02_bestckpt_results_dict.json', 'snr_per_component')
else:
basename, key_xval = ('EVAL_SOFTMAX_testspl_v03_bestckpt_results_dict.json', 'dbspl')
list_model_name = []
list_list_results_dict = []
for (model_dir, model_name) in model_dir_list:
if model_dir.upper() == 'HUMAN':
list_results_dict = [util_human_model_comparison.get_human_results_dict_pure_tone_spl(threshold_level=0)]
print(model_dir, len(list_results_dict))
else:
regex_json_fn = model_dir
if '.json' not in regex_json_fn:
regex_json_fn = os.path.join(regex_json_fn, basename)
list_results_dict = []
for fn_results_dict in sorted(glob.glob(regex_json_fn)):
with open(fn_results_dict) as f:
results_dict = json.load(f)
list_results_dict.append(results_dict)
print(regex_json_fn, len(list_results_dict))
list_model_name.append(model_name)
list_list_results_dict.append(list_results_dict)
importlib.reload(util_figures_psychophysics)
if 'BW' in list_model_name[0]:
color_list = ['#5ab4ac', 'k', '#a6611a'] # CMAP FOR COCH FILTER BW
else:
color_list = ['#fdb863', '#e08214', '#b35806', 'k', '#8073ac', '#b2abd2'] # CMAP FOR IHC LOWPASS
# color_list = np.array([[90,180,172], [0,0,0], [166,97,26]])/256 # CMAP FOR COCH FILTER BW
fig, ax = plt.subplots(figsize=(4, 3))
for cidx, (list_results_dict, model_name) in enumerate(zip(list_list_results_dict, list_model_name)):
kwargs = {
'key_xval': key_xval,
'str_xlabel': 'Per component SNR (dB)',
# 'xlimits': [-23, -7],
'xticks': 5,
'xticks_minor': 1,
'include_yerr': True,
'plot_kwargs_update': {
'color': color_list[cidx],
'label': model_name,
'lw': 3,
'marker': '.',
'markersize': 6,
},
'kwargs_legend': {
'ncol': 1,
'handlelength': 0.5,
'borderpad': 0,
'columnspacing': 1,
'loc': 'upper right',
'handletextpad': 0.5,
},
}
util_figures_psychophysics.make_f0dl_threshold_plot(ax, list_results_dict, **kwargs)
import matplotlib
leg = [c for c in ax.get_children() if isinstance(c, matplotlib.legend.Legend)]
if len(leg) == 1:
for legobj in leg[0].legendHandles:
legobj.set_linewidth(6.0)
plt.tight_layout()
plt.show()
# fig.savefig('tmp.pdf', bbox_inches='tight', pad_inches=0, transparent=False)
importlib.reload(util_figures_psychophysics)
color_list = np.array([[253,184,99], [224,130,20], [179,88,6], [0,0,0], [128,115,172], [178,171,210]])/256  # CMAP FOR IHC LOWPASS
# fig, ax = plt.subplots(figsize=(4, 3))
# fig, ax = plt.subplots(figsize=(4.615384615384615, 3.5128205128205128))
fig, ax = plt.subplots(figsize=(3.5, 3.5128205128205128))
zorder = 0
cidx = 0
for (list_results_dict, model_name) in zip(list_list_results_dict, list_model_name):
if len(list_results_dict) == 1:
kwargs = {
'key_xval': key_xval,
'str_xlabel': 'Sensation level (dB SL)',
'str_ylabel': 'Frequency discrimination\nthreshold (%)',
'xlimits': [0, 105],
'xticks': 20,
'xticks_minor': 5,
'include_yerr': True,
'plot_kwargs_update': {
'color': [0, 0, 0],
'label': model_name,
'lw': 1,
'ls': '--',
'zorder': 1000,
'marker': 'o',
'markersize': 6,
'markerfacecolor': [1, 1, 1],
'markeredgecolor': [0, 0, 0],
'markeredgewidth': 1,
},
'kwargs_legend': {
'ncol': 2,
'handlelength': 3*0.5,
'borderpad': 0.25,
'columnspacing': 1,
'loc': 'upper center',
'handletextpad': 0.5,
},
}
else:
kwargs = {
'key_xval': key_xval,
'str_xlabel': 'Stimulus level (dB SPL)',
'str_ylabel': 'Frequency discrimination\nthreshold (%)',
'xlimits': [0, 105],
'xticks': 20,
'xticks_minor': 5,
'include_yerr': True,
'plot_kwargs_update': {
'color': color_list[cidx],
'label': model_name,
'lw': 3,
'zorder': zorder,
'marker': '.',
'markersize': 6,
},
'kwargs_legend': {
'ncol': 2,
'handlelength': 0.5,
'borderpad': 0.25,
'columnspacing': 1,
'loc': 'upper center',
'handletextpad': 0.5,
},
}
cidx += 1
zorder -= 1
util_figures_psychophysics.make_f0dl_threshold_plot(ax, list_results_dict, **kwargs)
import matplotlib
leg = [c for c in ax.get_children() if isinstance(c, matplotlib.legend.Legend)]
if len(leg) == 1:
for legobj in leg[0].legendHandles:
if 'human' in str(legobj).lower():
legobj.set_linewidth(1.0)
legobj._legmarker.set_markersize(6)
else:
legobj.set_linewidth(6.0)
plt.tight_layout()
plt.show()
# fig.savefig('tmp.pdf', bbox_inches='tight', pad_inches=0, transparent=False)
import sys
import os
import glob
import json
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import util_figures_psychophysics
import f0dl_bernox
sys.path.append('/packages/msutil')
import util_figures
import util_misc
master_list = [
# ('/saved_models/arch_search_v02_topN/f0_label_024/arch_0???/', '1/2'),
# ('/saved_models/arch_search_v02_topN/f0_label_048/arch_0???/', '1/4'),
# ('/saved_models/arch_search_v02_topN/f0_label_096/arch_0???/', '1/8'),
# ('/saved_models/arch_search_v02_topN/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/arch_0???/', '1/16'),
# ('/saved_models/arch_search_v02_topN/f0_label_384/arch_0???/', '1/32'),
('/saved_models/arch_search_v02_topN/REDOsr20000_cf100_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '100'),
('/saved_models/arch_search_v02_topN/REDOsr2000_cfI100_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '100$^T$'),
('/saved_models/arch_search_v02_topN/REDOsr2000_cfI250_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '250$^T$'),
('/saved_models/arch_search_v02_topN/REDOsr2000_cfI500_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '500$^T$'),
('/saved_models/arch_search_v02_topN/REDOsr2000_cf1000_species002_spont070_BW10eN1_IHC0050Hz_IHC7order/arch_0???/', '1000$^T$'),
]
results_dict_basename = 'EVAL_validation_bestckpt_results_dict.json'
if '/' in master_list[0][1]:
str_xlabel = 'F0 bin width (semitones)'
list_color = ['#fed976', '#feb24c', '#fd8d3c', '#f03b20', '#bd0026']
else:
str_xlabel = 'Number of simulated ANFs'
list_color = ['#c6dbef', '#bdd7e7', '#6baed6', '#3182bd', '#08519c']
model_keys = []
results_dicts = {}
master_count = 0
for fn_regex, model_key in master_list:
results_dicts[model_key] = []
model_keys.append(model_key)
if '.json' not in fn_regex:
fn_regex = os.path.join(fn_regex, results_dict_basename)
for results_dict_fn in sorted(glob.glob(fn_regex)):
master_count = master_count + 1
with open(results_dict_fn) as f:
results_dicts[model_key].append(json.load(f))
print('Loaded results from {} files ({})'.format(master_count, results_dict_basename))
for key in results_dicts.keys():
print(key, len(results_dicts[key]))
fig, ax = plt.subplots(figsize=(4, 3))
xticklabels = model_keys
for xval, key in enumerate(model_keys):
list_rd = results_dicts[key]
list_f0_pct_error_median = [rd['f0_pct_error_median'] for rd in list_rd]
print(key, len(list_f0_pct_error_median), np.mean(list_f0_pct_error_median))
yval, yerr = util_figures_psychophysics.bootstrap(np.array(list_f0_pct_error_median))
bar_kwargs = {
'yerr': 2*yerr,
'color': list_color[xval],
'capsize': 6,
'ecolor': list_color[xval],
'linewidth': 12,
'capthick': 1,
}
ax.errorbar(xval, yval, **bar_kwargs)
point_xvals = np.linspace(xval-0.12, xval+0.12, len(list_f0_pct_error_median))
ax.plot(point_xvals,
list_f0_pct_error_median,
color='k',
ls='',
marker='.',
ms=3)
ax = util_figures.format_axes(ax,
str_xlabel=str_xlabel,
str_ylabel='Median F0 error for\nnatural sounds (%F0)',
xlimits=[-0.5, len(model_keys)-0.5],
xticks=np.arange(0, len(model_keys)),
xticks_minor=[],
xticklabels=model_keys,
yscale='log',
ylimits=[0.4, 20],
spines_to_hide=[])
plt.tight_layout()
plt.show()
# fig.savefig('tmp.pdf', bbox_inches='tight', pad_inches=0, transparent=False)
```
<font size=5>Confusion Matrix</font>
<p>When we build models, it is important to assess how good or bad a model is, and in particular how well it performs on unseen data. Several metrics exist to evaluate model performance, such as accuracy and time taken. We will look at some of the most important and useful ones here.</p>
<p>Which metrics can we use to evaluate the performance of a classification model? The obvious one is accuracy over an unseen test set. Accuracy is simply the fraction of samples whose labels were predicted correctly.</p><p>
Another useful tool is the confusion matrix, a table that counts predictions broken down by predicted and actual class. The confusion matrix is useful because it shows not only how many predictions the model got right, but also exactly which kinds of mistakes it makes, which in turn suggests how the model might be improved.</p>
<p>There are some terms that one must know regarding confusion matrices.</p>
<ol>
<li>True Positives: This is the number of samples predicted positive which were actually positive.</li>
<li>True Negatives: This is the number of samples predicted negative which were actually negative.</li>
<li>False Positives: This is the number of samples predicted positive which were <b>not</b> actually positive.</li>
<li>False Negatives: This is the number of samples predicted negative which were <b>not</b> actually negative.</li>
</ol>
<p>In the case of multi-class classification, the confusion matrix generalizes: each row corresponds to an actual class and each column to a predicted class, so the matrix shows how many samples of each class were predicted correctly or confused with each other class, rather than just true positives and the like.</p>
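<p>As a quick sketch (the three-class labels below are made up for illustration):</p>

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 2, 2, 1]  # made-up three-class labels
y_pred = [0, 2, 2, 2, 1]

# Row i, column j counts samples of actual class i predicted as class j
print(confusion_matrix(y_true, y_pred))
# → [[1 0 0]
#    [0 1 1]
#    [0 0 2]]
```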
```
from sklearn.metrics import confusion_matrix
y_true = [0,0,1,0,1] # dummy label data
y_pred = [1,1,0,0,1] # dummy predicted data
print(confusion_matrix(y_true,y_pred))
```
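<p>For binary problems, the four counts defined earlier can be read directly off this matrix: ravel() flattens the 2x2 matrix in the order tn, fp, fn, tp.</p>

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 0, 1]  # same dummy label data as above
y_pred = [1, 1, 0, 0, 1]

# For binary labels, ravel() returns the counts in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # → 1 2 1 1
```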
<font size=5>Classification Measures</font>
<p>There are measures other than the confusion matrix which can help achieve better understanding and analysis of our model and its performance. We talk about two particular measures here - precision and recall.</p>
<p>Note that precision and recall will be defined per class label, not for the dataset as a whole. Precision defines the percentage of samples with a certain predicted class label actually belonging to that class label. Recall defines the percentage of samples of a certain class which were correctly predicted as belonging to that class.</p>
<p>How do we choose between precision and recall? Often we do not have to: there is a metric that combines both of them, the f1 score. The f1 score is defined as the harmonic mean of precision and recall, and is usually a better single indicator of model performance than either precision or recall alone.</p>
```
from sklearn.metrics import classification_report
print(classification_report(y_true,y_pred))
```
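<p>The precision, recall, and f1 values in the report follow directly from the confusion-matrix counts. As a hand computation for the positive class of the dummy data above (tp = 1, fp = 2, fn = 1):</p>

```python
# Counts for the positive class of the dummy data above
tp, fp, fn = 1, 2, 1

precision = tp / (tp + fp)  # fraction of predicted positives that are correct
recall = tp / (tp + fn)     # fraction of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(precision, 2), round(recall, 2), round(f1, 2))  # → 0.33 0.5 0.4
```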
<p>Tip: Accuracy is not always a good measure of model performance. It fails when the class labels are highly imbalanced, because a model can score high accuracy simply by predicting the majority class for most samples. In such cases, the f1 score is a better metric. Another common metric is ROC AUC (Receiver Operating Characteristic - Area Under Curve), which measures the area under the ROC curve.</p>
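<p>As a minimal sketch of ROC AUC (the scores below are made-up predicted probabilities, not output from a real model):</p>

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.2, 0.8]  # hypothetical predicted probabilities for class 1

# roc_auc_score takes scores/probabilities, not hard class predictions
print(roc_auc_score(y_true, y_score))  # ≈ 0.83
```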
# Cloud APIs for Computer Vision: Up and Running in 15 Minutes
This code is part of [Chapter 8- Cloud APIs for Computer Vision: Up and Running in 15 Minutes ](https://learning.oreilly.com/library/view/practical-deep-learning/9781492034858/ch08.html).
## Get MSCOCO validation image ids with legible text
We will build a dataset of images from the MSCOCO validation split that contain at least one instance of legible text.
In order to do this, we first need to download `cocotext.v2.json` from https://bgshih.github.io/cocotext/.
```
!wget -nc -q -O tmp.zip https://github.com/bgshih/cocotext/releases/download/dl/cocotext.v2.zip && unzip -n tmp.zip && rm tmp.zip
```
Let's verify that the file has been downloaded and that it exists.
```
import os
os.path.isfile('./cocotext.v2.json')
```
We also need to download the `coco_text.py` file from the COCO-Text repository at http://vision.cornell.edu/se3/coco-text/
```
!wget -nc https://raw.githubusercontent.com/bgshih/coco-text/master/coco_text.py
import coco_text
# Load the COCO text json file
ct = coco_text.COCO_Text('./cocotext.v2.json')
# Find the total number of images in validation set
print(len(ct.val))
```
Add the paths to the `train2014` directory downloaded from the [MSCOCO website](http://cocodataset.org/#download).
```
path = <PATH_TO_IMAGES> # Please update with local absolute path to train2014
os.path.exists(path)
```
Get all images containing at least one instance of legible text
```
image_ids = ct.getImgIds(imgIds=ct.val, catIds=[('legibility', 'legible')])
```
Find total number of validation images which have legible text
```
print(len(image_ids))
```
In the data we downloaded, make sure all the image IDs exist.
```
def filename_from_image_id(image_id):
    # MSCOCO filenames zero-pad the image id to 12 digits
    return "COCO_train2014_" + str(image_id).zfill(12) + ".jpg"
final_image_ids = []
for each in image_ids:
filename = filename_from_image_id(each)
if os.path.exists(path + filename):
final_image_ids.append(each)
print(len(final_image_ids))
```
Make a folder where all the temporary data files can be stored
```
!mkdir data-may-2020
!mkdir data-may-2020/legible-images
```
Save a list of the image ids of the validation images
```
with open('./data-may-2020/val-image-ids-final.csv', 'w') as f:
f.write("\n".join(str(image_id) for image_id in final_image_ids))
```
Move these images to a separate folder for future use.
```
from shutil import copy2
for each in final_image_ids:
filename = filename_from_image_id(each)
if os.path.exists(path + filename):
copy2(path + filename, './data-may-2020/legible-images/')
```
# Fleet Predictive Maintenance: Part 2. Data Preparation with Data Wrangler
1. [Architecure](0_usecase_and_architecture_predmaint.ipynb#0_Architecture)
1. [Data Prep: Processing Job from Data Wrangler Output](./1_dataprep_dw_job_predmaint.ipynb)
1. [Data Prep: Featurization](./2_dataprep_predmaint.ipynb)
1. [Train, Tune and Predict using Batch Transform](./3_train_tune_predict_predmaint.ipynb)
## SageMaker Data Wrangler Job Notebook
This notebook uses the Data Wrangler .flow file to submit a SageMaker Data Wrangler Job
with the following steps:
* Push Data Wrangler .flow file to S3
* Parse the .flow file inputs, and create the argument dictionary to submit to a boto client
* Submit the ProcessingJob arguments and wait for Job completion
Optionally, the notebook also gives an example of starting a SageMaker XGBoost TrainingJob using
the newly processed data.
```
# SageMaker Python SDK version 2.x is required
import pkg_resources
import subprocess
import sys
original_version = pkg_resources.get_distribution("sagemaker").version
_ = subprocess.check_call([sys.executable, "-m", "pip", "install", "sagemaker==2.20.0"])
import json
import os
import time
import uuid
import boto3
import sagemaker
```
## Parameters
The following lists configurable parameters that are used throughout this notebook.
```
# S3 bucket for saving processing job outputs
# Feel free to specify a different bucket here if you wish.
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = "data_wrangler_flows"
flow_id = f"{time.strftime('%d-%H-%M-%S', time.gmtime())}-{str(uuid.uuid4())[:8]}"
flow_name = f"flow-{flow_id}"
flow_uri = f"s3://{bucket}/{prefix}/{flow_name}.flow"
flow_file_name = "dw_flow/prm.flow"
iam_role = sagemaker.get_execution_role()
container_uri = (
"415577184552.dkr.ecr.us-east-2.amazonaws.com/sagemaker-data-wrangler-container:1.2.1"
)
# Processing Job Resources Configurations
# Data wrangler processing job only supports 1 instance.
instance_count = 1
instance_type = "ml.m5.4xlarge"
# Processing Job Path URI Information
output_prefix = f"export-{flow_name}/output"
output_path = f"s3://{bucket}/{output_prefix}"
output_name = "ff586e7b-a02d-472b-91d4-da3dd05d7a30.default"
processing_job_name = f"data-wrangler-flow-processing-{flow_id}"
processing_dir = "/opt/ml/processing"
# Modify the variable below to specify the content type to be used for writing each output
# Currently supported options are 'CSV' or 'PARQUET', and default to 'CSV'
output_content_type = "CSV"
# URL to use for sagemaker client.
# If this is None, boto will automatically construct the appropriate URL to use
# when communicating with sagemaker.
sagemaker_endpoint_url = None
```
__For this demo, the following cell has been added to the generated code from the Data Wrangler export. The changes are needed to update the S3 bucket in the .flow file to match your S3 location as well as make sure we have the right container URI depending on your region.__
```
from demo_helpers import update_dw_s3uri, get_dw_container_for_region
# update the flow file to change the s3 location to our bucket
update_dw_s3uri(flow_file_name)
# get the Data Wrangler container associated with our region
region = boto3.Session().region_name
container_uri = get_dw_container_for_region(region)
dw_output_path_prm = output_path
print(
f"Storing dw_output_path_prm = {dw_output_path_prm} for use in next notebook 2_fleet_predmaint.ipynb"
)
%store dw_output_path_prm
```
## Push Flow to S3
Use the following cell to upload the Data Wrangler .flow file to Amazon S3 so that
it can be used as an input to the processing job.
```
# Load .flow file
with open(flow_file_name) as f:
flow = json.load(f)
# Upload to S3
s3_client = boto3.client("s3")
s3_client.upload_file(flow_file_name, bucket, f"{prefix}/{flow_name}.flow")
print(f"Data Wrangler Flow notebook uploaded to {flow_uri}")
```
## Create Processing Job arguments
This notebook submits a Processing Job using the SageMaker Python SDK. Below, utility methods are
defined for creating Processing Job Inputs for the following sources: S3, Athena, and Redshift.
```
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.dataset_definition.inputs import (
AthenaDatasetDefinition,
DatasetDefinition,
RedshiftDatasetDefinition,
)
def create_flow_notebook_processing_input(base_dir, flow_s3_uri):
return ProcessingInput(
source=flow_s3_uri,
destination=f"{base_dir}/flow",
input_name="flow",
s3_data_type="S3Prefix",
s3_input_mode="File",
s3_data_distribution_type="FullyReplicated",
)
def create_s3_processing_input(s3_dataset_definition, name, base_dir):
return ProcessingInput(
source=s3_dataset_definition["s3ExecutionContext"]["s3Uri"],
destination=f"{base_dir}/{name}",
input_name=name,
s3_data_type="S3Prefix",
s3_input_mode="File",
s3_data_distribution_type="FullyReplicated",
)
def create_athena_processing_input(athena_dataset_defintion, name, base_dir):
return ProcessingInput(
input_name=name,
dataset_definition=DatasetDefinition(
local_path=f"{base_dir}/{name}",
athena_dataset_definition=AthenaDatasetDefinition(
catalog=athena_dataset_defintion["catalogName"],
database=athena_dataset_defintion["databaseName"],
query_string=athena_dataset_defintion["queryString"],
output_s3_uri=athena_dataset_defintion["s3OutputLocation"] + f"{name}/",
output_format=athena_dataset_defintion["outputFormat"].upper(),
),
),
)
def create_redshift_processing_input(redshift_dataset_defintion, name, base_dir):
return ProcessingInput(
input_name=name,
dataset_definition=DatasetDefinition(
local_path=f"{base_dir}/{name}",
redshift_dataset_definition=RedshiftDatasetDefinition(
cluster_id=redshift_dataset_defintion["clusterIdentifier"],
database=redshift_dataset_defintion["database"],
db_user=redshift_dataset_defintion["dbUser"],
query_string=redshift_dataset_defintion["queryString"],
cluster_role_arn=redshift_dataset_defintion["unloadIamRole"],
output_s3_uri=redshift_dataset_defintion["s3OutputLocation"] + f"{name}/",
output_format=redshift_dataset_defintion["outputFormat"].upper(),
),
),
)
def create_processing_inputs(processing_dir, flow, flow_uri):
"""Helper function for creating processing inputs
:param flow: loaded data wrangler flow notebook
:param flow_uri: S3 URI of the data wrangler flow notebook
"""
processing_inputs = []
flow_processing_input = create_flow_notebook_processing_input(processing_dir, flow_uri)
processing_inputs.append(flow_processing_input)
for node in flow["nodes"]:
if "dataset_definition" in node["parameters"]:
data_def = node["parameters"]["dataset_definition"]
name = data_def["name"]
source_type = data_def["datasetSourceType"]
if source_type == "S3":
processing_inputs.append(create_s3_processing_input(data_def, name, processing_dir))
elif source_type == "Athena":
processing_inputs.append(
create_athena_processing_input(data_def, name, processing_dir)
)
elif source_type == "Redshift":
processing_inputs.append(
create_redshift_processing_input(data_def, name, processing_dir)
)
else:
raise ValueError(f"{source_type} is not supported for Data Wrangler Processing.")
return processing_inputs
def create_processing_output(output_name, output_path, processing_dir):
return ProcessingOutput(
output_name=output_name,
source=os.path.join(processing_dir, "output"),
destination=output_path,
s3_upload_mode="EndOfJob",
)
def create_container_arguments(output_name, output_content_type):
output_config = {output_name: {"content_type": output_content_type}}
return [f"--output-config '{json.dumps(output_config)}'"]
```
## Start ProcessingJob
Now, the Processing Job is submitted using the Processor from the SageMaker SDK.
Logs are turned off, but can be turned on for debugging purposes.
```
%%time
from sagemaker.processing import Processor
processor = Processor(
role=iam_role,
image_uri=container_uri,
instance_count=instance_count,
instance_type=instance_type,
sagemaker_session=sess,
)
processor.run(
inputs=create_processing_inputs(processing_dir, flow, flow_uri),
outputs=[create_processing_output(output_name, output_path, processing_dir)],
arguments=create_container_arguments(output_name, output_content_type),
wait=True,
logs=False,
job_name=processing_job_name,
)
```
## Kick off SageMaker Training Job (Optional)
Data Wrangler is a SageMaker tool for processing data to be used for Machine Learning. Now that
the data has been processed, users will want to train a model using the data. The following shows
an example of doing so using a popular algorithm XGBoost.
It is important to note that the XGBoost objective used below (one of 'binary', 'regression', or
'multiclass'), the hyperparameters, and the content_type may not be suitable for the output data, and
may require changes to train a proper model. Furthermore, for CSV training, the algorithm assumes that
the target variable is in the first column. For more information on SageMaker XGBoost, please see
https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html.
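Because the built-in XGBoost algorithm expects the target in the first column for CSV training, here is a quick sketch of reordering a DataFrame before export ("target", "feat_a", and "feat_b" are hypothetical column names, not from this notebook):

```python
import pandas as pd

# Hypothetical processed output; "target" stands in for the label column
df = pd.DataFrame({"feat_a": [1.0, 2.0], "feat_b": [3.0, 4.0], "target": [0, 1]})

# Move the label to the front, then write CSV without header or index,
# as expected by SageMaker's built-in XGBoost for text/csv input
cols = ["target"] + [c for c in df.columns if c != "target"]
df = df[cols]
df.to_csv("train.csv", index=False, header=False)
```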
### Find Training Data path
The below demonstrates how to recursively search the output directory to find the data location.
```
s3_client = boto3.client("s3")
list_response = s3_client.list_objects_v2(Bucket=bucket, Prefix=output_prefix)
training_path = None
for content in list_response["Contents"]:
if "_SUCCESS" not in content["Key"]:
training_path = content["Key"]
print(training_path)
```
Next, the Training Job hyperparameters are set. For more information on XGBoost Hyperparameters,
see https://xgboost.readthedocs.io/en/latest/parameter.html.
```
region = boto3.Session().region_name
container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1")
hyperparameters = {
"max_depth": "5",
"objective": "reg:squarederror",
"num_round": "10",
}
train_content_type = (
"application/x-parquet" if output_content_type.upper() == "PARQUET" else "text/csv"
)
train_input = sagemaker.inputs.TrainingInput(
s3_data=f"s3://{bucket}/{training_path}",
content_type=train_content_type,
)
```
The TrainingJob configuration is set using the SageMaker Python SDK Estimator, which is then fit
using the training data from the ProcessingJob that was run earlier.
```
estimator = sagemaker.estimator.Estimator(
container,
iam_role,
hyperparameters=hyperparameters,
instance_count=1,
instance_type="ml.m5.2xlarge",
)
estimator.fit({"train": train_input})
```
### Cleanup
Uncomment the following code cell to revert the SageMaker Python SDK to the original version used
before running this notebook. This notebook upgrades the SageMaker Python SDK to 2.x, which may
cause other example notebooks to break. To learn more about the changes introduced in the
SageMaker Python SDK 2.x update, see
[Use Version 2.x of the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/v2.html).
```
# _ = subprocess.check_call(
# [sys.executable, "-m", "pip", "install", f"sagemaker=={original_version}"]
# )
```
# Purpose
This notebook's purpose is to sift through all of the hospital chargemasters and metadata generated via the work already done in [this wonderful repo](https://github.com/vsoch/hospital-chargemaster) (from which I forked my repo). This is where the data engineering for Phase II of this project occurs. For more information on what Phase II is, please see [the README](README.md) for this project. Results from the explorations done in this notebook will be incorporated into a single cleaning script within the repo.
Based upon the originating repo's own README, there's at least some data collection that still needs to be done for completeness (e.g. [data from hospitalpriceindex.com](https://search.hospitalpriceindex.com/hospital/Barnes-Jewish-Hospital/5359?page=1) has to be scraped but they're denying IP addresses that try to do so). However, that is beyond the current scope of this work.
# Background
*Assume everything in this cell is quoted directly from the originating repo README, albeit with some extra content removed for the purposes of streamlining. Anything in italics like this should be assumed to be editorial additions by me.*
**From the original README:**
## Get List of Hospital Pages
We have compiled a list of hospitals and links in the [hospitals.tsv](hospitals.tsv)
file, generated via the [0.get_hospitals.py](0.get_hospitals.py) script *which pulls these data from [a Quartz article](https://qz.com/1518545/price-lists-for-the-115-biggest-us-hospitals-new-transparency-law/) detailing ~115 hospital URLs from which the authors were able to find chargemasters in one form or another*.
The file includes the following variables, separated by tabs:
- **hospital_name** is the human friendly name
- **hospital_url** is the human friendly URL, typically the page that includes a link to the data.
- **hospital_id** is the unique identifier for the hospital, the hospital name, in lowercase, with spaces replaced with `-`
## Organize Data
Each hospital has records kept in a subfolder in the [data](data) folder. Specifically,
each subfolder is named according to the hospital name (made all lowercase, with spaces
replaced with `-`). If a subfolder begins with an underscore, it means that I wasn't
able to find the charge list on the hospital site (and maybe you can help?)
Within that folder, you will find:
- `scrape.py`: A script to scrape the data
- `browser.py`: If we need to interact with a browser, we use selenium to do this.
- `latest`: a folder with the last scraped (latest data files)
- `YYYY-MM-DD` folders, where each folder includes:
- `records.json` the complete list of records scraped for a particular data
- `*.csv` or `*.xlsx` or `*.json`: the scraped data files.
## Parsing
This is likely one of the hardest steps. I wanted to see the extent to which I could
create a simple parser that would generate a single TSV (tab separated value) file
per hospital, with minimally an identifier for a charge, and a price in dollars. If
provided, I would also include a description and code:
- **charge_code**
- **price**
- **description**
- **hospital_id**
- **filename**
Each of these parsers is also in the hospital subfolder, named "parser.py". The parser would output a data-latest.tsv file at the top level of the folder, along with a dated version (`data-<year>.tsv`). At some point
I realized that there were different kinds of charges, including inpatient, outpatient, DRG (diagnostic related group) and others called
"standard" or "average." I then went back and added an additional column
to the data:
- **charge_type** can be one of standard, average, inpatient, outpatient, drg, or (if more detail is supplied) insured, uninsured, pharmacy, or supply. This is not a gold standard labeling but a best effort. If not specified, I labeled as standard, because this would be a good assumption.
# Exploring the Chargemaster Data
OK, I think I have a handle on this, let's take a look at the chargemaster data from @vsoch's repo.
```
#Make sure any changes to custom packages can be reflected immediately
#in the notebook without kernel restart
%load_ext autoreload
%autoreload 2
```
## Reading in the Tabulated Data
OK, there are **a lot** of files to plow through here! And @vsoch was kind enough to try and compile them whenever appropriate in the various hospital/site-specific folders within `data` as `data-latest[-n].tsv` (`-n` indicates that, if the file gets above 100 MB, it's split into `data-latest-1.tsv`, `data-latest-2.tsv`, etc. to avoid going over the GitHub per-file size limit).
Let's try to parse all of these TSV files into a single coherent DataFrame for analysis purposes! The entire `data` folder set is less than 4 GB, and I'm confident that more than half of that is individual XLSX/CSV files, so I think this should be something we can hold in memory easily enough.
...still, we'll use some tricks (e.g. making the sub-dataframes as a generator instead of a list) to ensure optimal memory usage, just to be safe.
```
import pandas as pd
# Search through the data/hospital-id folders for data-latest[-n].tsv files
# so you can concatenate them into a single DataFrame
from glob import glob, iglob
def load_data(filepath='data/*/data-latest*.tsv'):
'''
Re-constitute the DataFrame after doing work outside of the DataFrame in memory,
such as correcting and re-running a parser.
Inputs
------
filepath: str. Provides an explicit or wildcard-based filepath for all
data files that should be concatenated together
Outputs
-------
Returns a single pandas DataFrame that contains all data from the files
specified by filepath
'''
# Setup the full dataframe using iterators/generators to save on memory
all_files = iglob(filepath)
individual_dfs = (pd.read_csv(f, delimiter = '\t',
low_memory = False,
thousands = ',') for f in all_files)
return pd.concat(individual_dfs, ignore_index=True)
df = load_data()
df.info(memory_usage = 'deep', verbose = True, null_counts = True)
df.head()
df.tail()
```
## Checking and Cleaning Columns
Since these datasets were all put together by individual parsing scripts that parsed through a bunch of non-standardized data files from different sources, there's almost guaranteed to be leakage of values from one column to another and so on. So we're going to check each column for anomalies and correct them as we go before proceeding any further.
```
df.columns
```
### Filename
Since the values in this column come internally from the project data collection process, I expect that this column will be clean and orderly...right?
```
print(f"There are {df['filename'].nunique()} unique values in this column\n")
df['filename'].value_counts()
```
OK, nothing stands out as problematic here. Since every filename should end with `.file_extension`, let's do a quick check that nothing violates that rule.
```
# Check how many values match the structure of ".letters" at the end
df['filename'].str.contains(r'\.[a-z]+$', case=False).sum() / len(df['filename'].dropna())
```
Interesting, so a few values don't match. Let's see what they are.
```
# Return values that don't match what we're expecting
df.loc[~df['filename'].str.contains(r'\.[a-z]+$', case=False),
'filename']
```
**Oy vay, looks like we found some anomalies!** These entries clearly have had their `hospital_id` values leak into their `filename` column. We'll need to repair the parser once we're sure we know which one(s) are the problem.
```
df.loc[~df['filename'].str.contains(r'\.[a-z]+$', case=False),
'filename'].value_counts()
df[~df['filename'].str.contains(r'\.[a-z]+$', case=False)].head()
```
**Yup, it's all of the data from the `geisinger-medical-center` data folder.** I'll take a look at the parser, correct it, then re-import the DataFrame and see if it's improved at all.
```
df = load_data()
# Is it fixed? Shouldn't return any values
df.loc[~df['filename'].str.contains(r'\.[a-z]+$', case=False),
'filename'].value_counts()
```
**That did the trick!** At this point, we can be sure that all of the entries in the `filename` column have file extensions, which seems like a reasonable data check. Onwards!
### Hospital ID
Again, since these data are essentially internal to this project, I'm thinking that this one will be good to go already or require minimal cleaning. Here's hoping...
Note that it's my intent to convert these into more human-readable names once I've cleaned them up so that they'll be able to match with the Centers for Medicare & Medicaid Services (CMS) hospital names and can then be mapped to standardized hospital identifiers.
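As a rough sketch of that conversion (assuming the cleaned IDs follow the lowercase-hyphenated convention; `id_to_name` is a hypothetical helper, and punctuation-heavy IDs will still need manual review before CMS matching):

```python
def id_to_name(hospital_id: str) -> str:
    """Convert a hyphenated hospital_id into a title-cased name.

    Best-effort only: IDs containing parentheses, periods, or
    apostrophes will need manual fixes before matching CMS names.
    """
    return hospital_id.replace("-", " ").title()

id_to_name("geisinger-medical-center")  # 'Geisinger Medical Center'
```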
```
print(f"There are {df['hospital_id'].nunique()} unique values in this column\n")
df['hospital_id'].value_counts()
```
**OK, it looks like I'll need to correct some parser issues here too.**
* At first I thought I could use the names of the folders in the `data/` folder for the project as the gold standard of `hospital_id` values, but there are only 115 of those and possibly twice as many legitimate `hospital_id` values (correcting for the erroneous ones we're seeing here). The Geisinger Medical Center records alone all live within the single `geisinger-medical-center` folder even though they span 7-8 unique `hospital_id` values.
* Let's start by looking for values that don't have at least one hyphen with letters on either side of it.
* After correcting for these, we can also look to see if any low-value-count anomalies remain and go from there.
```
# Find hospital_id values that don't have at least one hyphen with letters on either side of it
df.dropna(subset=['hospital_id']).loc[~df['hospital_id'].dropna().
str.contains(r'[a-z]+-[a-z]+',
case=False),
'hospital_id'].value_counts()
```
Interesting. A few things to note:
1. 'x0020' seems to be the space character for some text encodings
2. It looks like the vast majority of these are hospital names that never translated into `hospital_id` values for some reason, with some instances of `description` or `charge_code` values also leaking into these.
3. Quick review of many of these indicates they are hospital names (or stems of names) of hospitals from the `advent-health/` directory, which has a single parser. So if I can correct that one, I may make substantial progress in one fell swoop!
Since we know that the only (hopefully) fully cleaned column is `filename`, we'll have to use that as our guide. I'll first focus on the parsers for those hospitals I can identify and see if those are easy fixes; hopefully by clearing away that clutter we can correct the vast majority of the problem children here. I'll then tackle what remains individually. And at the end of it all, I'll also need to look at how to correct the records that have `hospital_id == np.nan`.
#### Correct Parsers Wherein `hospital_id == hospital name`
```
df.loc[df['hospital_id'] == 'Heartland']
df.dropna(subset=['hospital_id']).loc[df['hospital_id'].dropna().str.startswith('adventhealth'),
'hospital_id'].value_counts()
```
**Interesting! It looks like the majority of the data files for advent health don't have proper IDs in the database.** Likely this means there's some common thread in the parser that, when corrected, will cause a lot of my `hospital_id` problems to evaporate. Or at least I hope so!
**It looks like the parser was taking the raw hospital IDs from the data files (the capitalized and space-delimited names) and not modifying them.** So I simply modified them to be all lowercase and replace spaces with hyphens. Let's see how that goes!
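The fix amounts to something along these lines (a sketch of the transformation with a hypothetical `name_to_id` helper and a made-up example name, not the parser's exact code):

```python
def name_to_id(hospital_name: str) -> str:
    # Lowercase and hyphenate a raw 'HOSPITAL NAME' field so it matches
    # the lowercase-hyphen convention the other hospital_id values use
    return hospital_name.strip().lower().replace(" ", "-")

name_to_id("AdventHealth Daytona Beach")  # 'adventhealth-daytona-beach'
```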
...this is going to take a while, given the sheer amount of data being parsed. In the meantime, let's address the hospitals that *aren't* Advent Health.
```
df.loc[df['hospital_id'] == 'Stanislaus Surgical Hospital']
```
Based upon a quick search of the `data/` folder, it looks like the Stanislaus entries are probably coming from the `california-pacific-medical-center-r.k.-davies-medical-center/` and `st.-luke’s-hospital-(san-francisco)` folders (the `chargemaster-2019.xlsx` and `chargemaster-2019.json` files, respectively).
```
# Take a look at St. Luke's first - stealing code from parse.py to make this simple
import codecs, json
filename = "data/st.-luke’s-hospital-(san-francisco)/latest/chargemaster-2019.json"
with codecs.open(filename, "r", encoding='utf-8-sig', errors='ignore') as filey:
content = json.loads(filey.read())
names = set()
for row in content['CDM']:
if 'HOSPITAL_NAME' in row:
names.add(row['HOSPITAL_NAME'])
print(names)
```
**Alrighty, it's pretty clear that Stanislaus is in this dataset.** Let's check the other one too.
Fun fact, for some reason CPMC-RKDMC has both a JSON and an XLSX file! The parser only pays attention to the XLSX file though, so that's all we'll worry about too (likely one is just a different file format version of the other).
```
# Now for CPMC-RKDMC
filename = "data/california-pacific-medical-center-\
r.k.-davies-medical-center/latest/chargemaster-2019.xlsx"
temp = pd.read_excel(filename)
temp.dropna(subset=['HOSPITAL_NAME'])[temp['HOSPITAL_NAME'].dropna().str.contains('Stanislaus')]
for row in temp.iterrows():
print(row)
break
temp.head()
temp.loc[0:2,'HOSPITAL_NAME'].astype(str).str.lower().str.replace(" ", "-")
temp.loc[0:2]
temp[['FACILITY', 'CMS_PROV_ID']] = temp[['HOSPITAL_NAME', 'SERVICE_SETTING']]
temp.loc[0:2]
```
Yup, this is one of them too! OK, now we know what hospital data folders we're looking at (who was this Stanislaus person anyhow? Quite the surgical philanthropist...). The problem here is that each of these chargemasters covers data for multiple hospitals, making it so that the `records.json` URI isn't terribly useful.
What I'm going to do instead is modify them so that the hospital names extracted are of a similar format to all other hospital IDs in our dataset.
```
# Find hospital_id values that don't have at least one hyphen with letters on either side of it
df = load_data()
df.dropna(subset=['hospital_id']).loc[~df['hospital_id'].dropna().
str.contains(r'[a-z]+-[a-z]+',
case=False),
'hospital_id'].value_counts()
```
**Very nice! By fixing those few parsers, we reduced the number of problem IDs from more than 350 to only a little more than 100!** There's still a ways to go, but I'm hopeful. Let's take a look at that `baptist` entry...
#### Correct Parsers and Data for Remaining Problem IDs
#### Correct Records Wherein `hospital_id == np.nan`
Note that, from my experience parsing the Stanislaus Surgical Hospital data, these NaN values could be coming from the parsing process itself, even when it's done properly (e.g. the source chargemaster simply doesn't have hospital names to pull for some rows, rather than one column getting confused with another).
#### Hospital ID -> Hospital Name
Now I'll convert this cleaned up column of "IDs" that are actually just the hospital names with hyphens instead of spaces into regular hospital names so that I can match them to CMS records and combine the CMS hospital data in seamlessly.
Ultimately, what I'll do is create names based upon the mapping of URIs to names in the individual `results.json` files in `data/<hospital group>/latest/`.
### Price
As this should be a continuous variable (maybe with some commas or dollar signs to pull out, but otherwise just float values), determining what are reasonable values for this column and what are anomalous should be easy...ish.
### Charge Type
I think these are again internally-derived labels for this project (that are also totally valid categories of charges mind you) and, as such, likely to not have too many values to contend with, making cleaning them pretty seamless (I hope).
### Charge Code
This one will be tough. There's theoretically nothing limiting a hospital from making its own random charge code mappings, alphanumeric values being fair game. Likely there will be a few oddballs that stand out as being problematic, but I may not be able to catch all of the problems in this one.
That all being said, this isn't a critical field to begin with, and hopefully most corrections in the earlier columns will correct most of the problems in this one. My priority will be to find any entries in this column that are clearly meant to be in the other columns (especially `description`) so that I can rely on them at this column's expense.
### Description
This one is the trickiest and the most important. It can theoretically have any value and will be what I need to aggregate on in order to find trends in prices across hospitals. Hopefully by fixing all of the other columns prior to this one I'll have minimized the cleaning to be done here.
## Optimize the DataFrame
This dataframe is quite large (almost 4 GB in memory!) and that's likely to cause all sorts of problems when it comes time to do analysis. So we'll need to make sure we understand the nature of each column's data and then optimize for it (after potentially correcting for things like strings that are actually numbers).
To start with, let's check out the nature of the data in each column:
1. How often do charge codes exist?
2. We'll check, but likely that description exists for most
3. Other than decimal numbers, what values are in price?
4. What are all of the different charge type values, and which ones can we filter out (e.g. drg)?
* Do charge codes get embedded in descriptions a lot, like what we see in df.tail()? Or is this something that is only present for non-standard charge_type?
```
# In case you need to reinitialize df after experimenting...
#Make sure any changes to custom packages can be reflected immediately
#in the notebook without kernel restart
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
df = pd.read_csv('data/all_hospitals-latest.csv', index_col = 0, low_memory = False)
# First, let's get a better handle on unique values, so we can figure out what fields make
# sense as categoricals and what ones definitely don't
unique_vals = pd.DataFrame(df.nunique()).rename(columns = {0: 'number of unique values'})
unique_vals['fraction of all values'] = round(unique_vals['number of unique values'] / len(df), 2)
unique_vals['data type'] = df.dtypes
# Assumes strings are involved if feature should be categorical,
# although may not be true for numerically-coded categoricals
unique_vals['make categorical?'] = (unique_vals['fraction of all values'] < 0.5) \
& (unique_vals['data type'] == 'object')
unique_vals
df['description'].value_counts()
df['hospital_id'].value_counts()
```
**Interesting. There are so many records with repeat information that we can change the dtype of pretty much all of them to be categorical.** Here's what we're going to do:
1. `charge_code`, `price`, and `description`: I'm not going to convert these to categoricals
* `charge_code` and `description`: while from a memory perspective any column whose unique-value count is under 50% of the row count would be cheaper stored as a categorical, it doesn't make sense to make these fields categoricals, as that implies they draw from a common data reference. That's simply not the case.
* Given that two different hospitals can have the charge code `1001` refer to two totally different procedures/consumables, there's no reason to add confusion by treating these in the dataset like they have the same meaning.
* The same logic goes for the description field (although that one has me scratching my head a bit, as I'd expect it to be a bit more free text in nature and thus not likely to have repeated values)
* `price`: this should be a continuous variable, not a categorical one!
2. `hospital_id`, `filename`, and `charge_type`: these are classic examples of categorical variables and we should convert them.
* That being said, it's pretty clear from a very brief look at the unique values in the `hospital_id` field that something is fishy here and that likely some of the parsers have failed to work properly. So it looks like we'll need to parse each column separately and make corrections before proceeding further.
### Categorically Annoying
```
# Make categorical columns where appropriate
cat_cols = ['hospital_id', 'filename', 'charge_type']
for col in cat_cols:
df.loc[:, col] = df.loc[:, col].astype('category')
df['charge_type'].cat.categories
df.info(memory_usage = 'deep', verbose = True, null_counts = True)
```
**Nice! We cut the memory usage in half!** OK, on to less obvious optimizing!
### Description and Charge Code Fields
```
# What do our missing values look like? How sparse are these data?
missing = pd.DataFrame(df.isnull().sum()).rename(columns = {0: 'total missing'})
missing['percent missing'] = round(missing['total missing'] / len(df),2)
missing.sort_values('total missing', ascending = False)
missing
```
**Looks like we have description text for all but 12% of the data.** Not bad.
```
# How often do charge codes exist?
df['charge_code'].describe()
df['charge_code'].value_counts()
```
**Quite a few different charge codes, with lots of different possible values.** Charge codes are likely somewhat-random "unique" identifiers (air quotes because I'm suspicious of any large entity's data management practices until that suspicion is proven unwarranted), so there's nothing to see here.
*OK, let's get to the meat of it, the thing that's actually most interesting (arguably): the price!*
### The Price Column, AKA The Super Important Yet Extremely Messy Column
```
# Separate out only the price rows that are non-numeric
df['price'].dropna().apply(type).value_counts()
```
**So about half of the non-null prices are `str` type.** Now we need to figure out what those strings actually are.
```
# Look at only string prices
df.loc[df['price'].apply(type) == str,'price']
```
**Huh, how odd. These strings all look like regular old float values.** Why is pandas importing them as strings? Let's force the column to be float type and then we'll see how our missing values change (since we can force non-numeric strings to be `NaN`)
```
# Convert price to be float type, then see if missing['total_missing'] changes
missing_price_initial = missing.loc['price','total missing']
delta = pd.to_numeric(df['price'], errors = 'coerce').isnull().sum() - missing_price_initial
delta
```
**OK, so we see about 300K of the 4.67M string values become `NaN` when we do this numeric conversion. That's not ideal.** Looks like we're losing a lot of data with this conversion, is there a better way? Or should we just consider this an acceptable loss?
```
# How to filter so we can see a sample of the ~300K true non-float strings in price
# so we can get an idea as to how to deal with them?
# Show only the prices that are null after numeric conversion
df.loc[pd.to_numeric(df['price'], errors = 'coerce').isnull(), 'price']
```
**Ah, I see the issues here! We've got commas making numbers look like they're not numbers, and this weird ' - ' string that must be a certain hospital's equivalent of `null`.** Let's correct for these!
The `null` string is one space, followed by a hyphen and three spaces.
```
# Set ' - ' to proper NaN
import numpy as np
df['price'].replace(' - ', np.nan, inplace = True)
# What are the values that contain commas?
df.loc[df['price'].str.contains(r'\d,\d', na = False), 'price']
```
**Oh goodie, there are *ranges* of dollar amounts that we have to worry about!** Fantastic, just fantastic. These hospitals sure don't like making this easy, do they?
Here's the plan:
1. Strip dollar signs from all of these strings. They're superfluous and just make things complicated
2. Split strings on the string ' - '
3. Remove commas and cast results as float values
* Any rows with `NaN` for the second split column are just regular comma-in-the-thousands-place numbers. Leave them alone for the moment
* Any with non-null values in the second column: take the average of the two columns and overwrite the first column with the average. This midpoint value will be considered the useful estimate of the cost
4. Push the first column values (now overwritten in the case of ranges of values to be the midpoints) back into their places in the `price` column and continue hunting for edge cases
```
# Replace effectively null prices with np.nan
df.loc[df['price'] == '-', 'price'] = np.nan
def remove_silly_chars(row):
'''
It is assumed that this will be used as part of a Series.apply() call.
Takes in an element of a pandas Series that should be a string representation
of a price or a range of prices and spits back the string without any thousands
separators (e.g. commas) or dollar signs. If it detects that there's a range of
prices being provided (e.g. $1.00 - $2.00), it returns the midpoint of the range.
Parameters
----------
row: str. The string representation of a price or range of prices.
Returns
-------
float. The price (or midpoint price if a range was given in row)
'''
# Replace '$', ',', and '-' with empty strings, splitting on the hyphen
price_strings = row.replace('$','').replace(',','').split('-')
# Strip leading and trailing whitespace from list of split strings
# and take the average of the different prices from the original range,
# returning it as a float value
# If ValueError raised, assume there were invalid characters and
# set to np.nan
try:
return pd.Series([float(i.strip()) for i in price_strings]).mean()
except ValueError:
return np.nan
# When only digits + commas + $, convert to only digits
# Also take average if multiple value range provided in string
ix = df['price'].str.contains(r'\d,\d|\$\d', na=False, case=False)
df.loc[ix, 'price'] = df.loc[ix,'price'].apply(remove_silly_chars)
```
**OK, now that we've cleaned up a lot of those strings so that they can be proper floats, how many bad strings are we left with?** When we ran this check prior to the `$` and `,` cleaning process, we had about 300K records coming up as null after the numeric conversion that weren't null previously. What does it look like now?
```
# Convert price to be float type, then see if missing['total_missing'] changes
missing_price_initial = missing.loc['price','total missing']
delta = pd.to_numeric(df['price'], errors = 'coerce').isnull().sum() - missing_price_initial
delta
```
**SIGH. We're down to 200K now, but that's still not awesome.** At this point, I'm fine with it. We'll lose about 2.1% of records that may not appropriately be null (although most probably should be), but everything else I've tried to pare it down further actually makes things worse, so I say we're good now!
```
# Coerce any obvious float strings to floats
df.loc[:, 'price'] = pd.to_numeric(df['price'], errors = 'coerce')
df.info(memory_usage = 'deep', verbose = True, null_counts = True)
df['price'].describe()
```
**Ah nuts, looks like we have some unreasonable ranges on these data.** If I had to guess, likely these are issues with non-price floats being inserted into the `price` column incorrectly by individual hospital chargemaster parsers. Let's take a quick look at these data, identify the outliers, and see if we can fix them (and what the magnitude of the problem is).
```
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
g = sns.distplot(df['price'].dropna(), kde = False)
g.axes.set_yscale('log')
```
**WHOA. The x-axis goes up to \\$10B USD?!** I'm going to assume that this is unreasonable...
In fact, looks like we have a non-trivial amount of data above \\$1B! That's ludicrous. Let's take a closer look in the \\$1M to \\$10B scale.
```
df.loc[(df['price'] >= 1E6) & (df['price'] <= 1E10), 'price'].describe()
g = sns.distplot(df.loc[(df['price'] >= 1E6) & (df['price'] <= 1E10), 'price'].dropna(),
kde=False)
g.axes.set_yscale('log')
df.loc[(df['price'] >= 1E6) & (df['price'] <= 1E10), 'hospital_id'].value_counts()
```
**OK, so it's pretty clear that we have a problem hospital here (foothill-presbyterian-hospital).** For simplicity's sake, we may just need to drop it from the dataset, but I'll take a look at its data and see if a fix is possible (I think it's a matter of switching the `price` and `charge_code` column data).
For the other hospitals, the counts are so low that we can probably parse those out manually to make sure they aren't nonsensical.
**TO DO**
1. See if it's reasonable to switch `price` and `charge_code` data for foothill hospital. If it is, do so. If there's a problem, drop them from dataset.
2. Check out the other hospitals in this price range and see if there are persistent problems with their `price` data too that need to be corrected.
3. Look at how many records have `hospital_id` without any hyphens, like the last one in the list above. Clearly there's a problem with those...
4. Once you're satisfied that the data are clean, script up an `import_data` file and start up a new analysis-focused Jupyter notebook.
* Subset by `charge_type == standard`
* Cluster based on `description` similarity, and assume those are similar procedures/consumables
* Figure out if there's a way to validate this
* Do some analyses on price spreads and trends among the different clusters/procedures/consumables
* Anything cool that can be predicted??
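For the description-similarity clustering in step 4, a dependency-free first pass could score pairwise similarity with `difflib` from the standard library (a sketch with made-up example descriptions; a real run would more likely use TF-IDF vectors or embeddings):

```python
from difflib import SequenceMatcher

def description_similarity(a: str, b: str) -> float:
    # Crude character-level similarity in [0, 1]; good enough to group
    # near-duplicate chargemaster descriptions as a first pass
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical descriptions: the two IV solutions should score as far
# more similar to each other than either does to the x-ray
description_similarity("IV SOLUTION 1000 ML", "IV SOLUTION 500 ML")
description_similarity("IV SOLUTION 1000 ML", "CHEST X-RAY 2 VIEWS")
```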
```
import numpy as np
import random
from math import pi  # only pi is used below
import time
import copy
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
torch.set_default_tensor_type('torch.DoubleTensor')
# definition of the activation function: x * sigmoid(x), i.e. SiLU/swish
def activation(x):
return x * torch.sigmoid(x)
# build a ResNet-style network with one residual block
class Net(nn.Module):
def __init__(self,input_size,width):
super(Net,self).__init__()
self.layer_in = nn.Linear(input_size,width)
self.layer_1 = nn.Linear(width,width)
self.layer_2 = nn.Linear(width,width)
self.layer_out = nn.Linear(width,1)
def forward(self,x):
output = self.layer_in(x)
output = output + activation(self.layer_2(activation(self.layer_1(output)))) # residual block 1
output = self.layer_out(output)
return output
input_size = 1
width = 4
# exact solution
def u_ex(x):
return torch.sin(pi*x)
# f(x)
def f(x):
return pi**2 * torch.sin(pi*x)
grid_num = 200
x = torch.zeros(grid_num + 1, input_size)
for index in range(grid_num + 1):
x[index] = index * 1 / grid_num
net = Net(input_size,width)
def model(x):
    # Multiply by x * (x - 1.0) so the trial solution satisfies the
    # Dirichlet boundary conditions u(0) = u(1) = 0 by construction
    return x * (x - 1.0) * net(x)
# DGM-style loss: the squared residual u'' + f of the equation -u'' = f,
# integrated over [0, 1] by composite Simpson's rule, with derivatives from autograd
def loss_function(x):
h = 1 / grid_num
sum_0 = 0.0
sum_1 = 0.0
sum_2 = 0.0
sum_a = 0.0
sum_b = 0.0
for index in range(grid_num):
x_temp = x[index] + h / 2
x_temp.requires_grad = True
grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
sum_1 += ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2
for index in range(1, grid_num):
x_temp = x[index]
x_temp.requires_grad = True
grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
sum_2 += ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2
x_temp = x[0]
x_temp.requires_grad = True
grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
sum_a = ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2
x_temp = x[grid_num]
x_temp.requires_grad = True
grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True)
sum_b = ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2
    # composite Simpson's rule: h/6 * (endpoints + 4 * midpoint sum + 2 * interior-node sum)
    sum_0 = h / 6 * (sum_a + 4 * sum_1 + 2 * sum_2 + sum_b)
return sum_0
# load model parameters on cpu
pretrained_dict = torch.load('net_params_DRM_to_DGM.pkl', map_location = 'cpu')
# get state_dict
net_state_dict = net.state_dict()
# remove keys that do not belong to net_state_dict
pretrained_dict_1 = {k: v for k, v in pretrained_dict.items() if k in net_state_dict}
# update dict
net_state_dict.update(pretrained_dict_1)
# set new dict back to net
net.load_state_dict(net_state_dict)
def get_weights(net):
""" Extract parameters from net, and return a list of tensors"""
return [p.data for p in net.parameters()]
def set_weights(net, weights, directions=None, step=None):
"""
Overwrite the network's weights with a specified list of tensors
or change weights along directions with a step size.
"""
if directions is None:
# You cannot specify a step length without a direction.
for (p, w) in zip(net.parameters(), weights):
p.data.copy_(w.type(type(p.data)))
else:
assert step is not None, 'If a direction is specified then step must be specified as well'
if len(directions) == 2:
dx = directions[0]
dy = directions[1]
changes = [d0*step[0] + d1*step[1] for (d0, d1) in zip(dx, dy)]
else:
changes = [d*step for d in directions[0]]
for (p, w, d) in zip(net.parameters(), weights, changes):
p.data = w + torch.Tensor(d).type(type(w))
def set_states(net, states, directions=None, step=None):
"""
Overwrite the network's state_dict or change it along directions with a step size.
"""
if directions is None:
net.load_state_dict(states)
else:
assert step is not None, 'If direction is provided then the step must be specified as well'
if len(directions) == 2:
dx = directions[0]
dy = directions[1]
changes = [d0*step[0] + d1*step[1] for (d0, d1) in zip(dx, dy)]
else:
changes = [d*step for d in directions[0]]
new_states = copy.deepcopy(states)
assert (len(new_states) == len(changes))
for (k, v), d in zip(new_states.items(), changes):
d = torch.tensor(d)
v.add_(d.type(v.type()))
net.load_state_dict(new_states)
def get_random_weights(weights):
"""
Produce a random direction that is a list of random Gaussian tensors
with the same shape as the network's weights, so one direction entry per weight.
"""
return [torch.randn(w.size()) for w in weights]
def get_random_states(states):
"""
Produce a random direction that is a list of random Gaussian tensors
with the same shape as the network's state_dict(), so one direction entry
per weight, including BN's running_mean/var.
"""
return [torch.randn(w.size()) for k, w in states.items()]
def get_diff_weights(weights, weights2):
""" Produce a direction from 'weights' to 'weights2'."""
return [w2 - w for (w, w2) in zip(weights, weights2)]
def get_diff_states(states, states2):
""" Produce a direction from 'states' to 'states2'."""
return [v2 - v for (k, v), (k2, v2) in zip(states.items(), states2.items())]
def normalize_direction(direction, weights, norm='filter'):
"""
Rescale the direction so that it has similar norm as their corresponding
model in different levels.
Args:
direction: a variables of the random direction for one layer
weights: a variable of the original model for one layer
norm: normalization method, 'filter' | 'layer' | 'weight'
"""
if norm == 'filter':
# Rescale the filters (weights in group) in 'direction' so that each
# filter has the same norm as its corresponding filter in 'weights'.
for d, w in zip(direction, weights):
d.mul_(w.norm()/(d.norm() + 1e-10))
elif norm == 'layer':
# Rescale the layer variables in the direction so that each layer has
# the same norm as the layer variables in weights.
direction.mul_(weights.norm()/direction.norm())
elif norm == 'weight':
# Rescale the entries in the direction so that each entry has the same
# scale as the corresponding weight.
direction.mul_(weights)
elif norm == 'dfilter':
# Rescale the entries in the direction so that each filter direction
# has the unit norm.
for d in direction:
d.div_(d.norm() + 1e-10)
elif norm == 'dlayer':
# Rescale the entries in the direction so that each layer direction has
# the unit norm.
direction.div_(direction.norm())
def normalize_directions_for_weights(direction, weights, norm='filter', ignore='biasbn'):
"""
The normalization scales the direction entries according to the entries of weights.
"""
assert(len(direction) == len(weights))
for d, w in zip(direction, weights):
if d.dim() <= 1:
if ignore == 'biasbn':
d.fill_(0) # ignore directions for weights with 1 dimension
else:
d.copy_(w) # keep directions for weights/bias that are only 1 per node
else:
normalize_direction(d, w, norm)
def normalize_directions_for_states(direction, states, norm='filter', ignore='ignore'):
assert(len(direction) == len(states))
for d, (k, w) in zip(direction, states.items()):
if d.dim() <= 1:
if ignore == 'biasbn':
d.fill_(0) # ignore directions for weights with 1 dimension
else:
d.copy_(w) # keep directions for weights/bias that are only 1 per node
else:
normalize_direction(d, w, norm)
def ignore_biasbn(directions):
""" Set bias and bn parameters in directions to zero """
for d in directions:
if d.dim() <= 1:
d.fill_(0)
def create_random_direction(net, dir_type='weights', ignore='biasbn', norm='filter'):
"""
Setup a random (normalized) direction with the same dimension as
the weights or states.
Args:
net: the given trained model
dir_type: 'weights' or 'states', type of directions.
ignore: 'biasbn', ignore biases and BN parameters.
norm: direction normalization method, including
        'filter' | 'layer' | 'weight' | 'dlayer' | 'dfilter'
Returns:
direction: a random direction with the same dimension as weights or states.
"""
# random direction
if dir_type == 'weights':
weights = get_weights(net) # a list of parameters.
direction = get_random_weights(weights)
normalize_directions_for_weights(direction, weights, norm, ignore)
elif dir_type == 'states':
states = net.state_dict() # a dict of parameters, including BN's running mean/var.
direction = get_random_states(states)
normalize_directions_for_states(direction, states, norm, ignore)
return direction
weights_temp = get_weights(net)
states_temp = net.state_dict()
step_size = 0.0005
grid = np.arange(-0.01, 0.01 + step_size, step_size)
loss_matrix = np.zeros((len(grid), len(grid)))
time_start = time.time()
for dx in grid:
itemindex = np.argwhere(grid == dx)
print('[Finished: {:}/{:}]'.format(itemindex[0][0] + 1, len(grid)))
for dy in grid:
itemindex_1 = np.argwhere(grid == dx)
itemindex_2 = np.argwhere(grid == dy)
weights = weights_temp
states = states_temp
step = [dx, dy]
direction_1 = create_random_direction(net, dir_type='weights', ignore='biasbn', norm='filter')
normalize_directions_for_states(direction_1, states, norm='filter', ignore='ignore')
direction_2 = create_random_direction(net, dir_type='weights', ignore='biasbn', norm='filter')
normalize_directions_for_states(direction_2, states, norm='filter', ignore='ignore')
directions = [direction_1, direction_2]
set_states(net, states, directions, step)
loss_temp = loss_function(x)
loss_matrix[itemindex_1[0][0], itemindex_2[0][0]] = loss_temp
# get state_dict
net_state_dict = net.state_dict()
# remove keys that do not belong to net_state_dict
pretrained_dict_1 = {k: v for k, v in pretrained_dict.items() if k in net_state_dict}
# update dict
net_state_dict.update(pretrained_dict_1)
# set new dict back to net
net.load_state_dict(net_state_dict)
weights_temp = get_weights(net)
states_temp = net.state_dict()
time_end = time.time()
print('total time is: ', time_end-time_start, 'seconds')
np.save("loss_matrix_DRM_to_DGM_at_tilde_theta_R.npy",loss_matrix)
```
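The saved loss grid can be rendered as a contour plot. A minimal sketch, assuming the same `grid` spacing used above — the random `loss_matrix` here is just a stand-in for the values computed by the loop:

```
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless-safe backend for this sketch
import matplotlib.pyplot as plt

step_size = 0.0005
grid = np.arange(-0.01, 0.01 + step_size, step_size)
loss_matrix = np.random.rand(len(grid), len(grid))  # stand-in for the computed losses

# Rows of loss_matrix follow dx, columns follow dy, matching the loop above.
X, Y = np.meshgrid(grid, grid, indexing='ij')
fig, ax = plt.subplots()
contours = ax.contourf(X, Y, loss_matrix, levels=20)
fig.colorbar(contours, ax=ax)
ax.set_xlabel('step along direction 1')
ax.set_ylabel('step along direction 2')
ax.set_title('Loss landscape around the trained weights')
fig.savefig('loss_landscape.png')
```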
# Renaming CSV Files
This notebook renames the CSV files of the ESA project. The filenames in the SQL database are not very descriptive, so we rename them to improve the user experience.
The current filenames look something like this: 1059614_14_lattice-v_1.csv
In this notebook, we will rename them to something like: nrthmntn_table-3-summary-of-aquatics-field-work-and-aborigi_pt-1_pg-14_doc-num-A3Q6H2.csv
The new filename contains the following information:
- Short Name of Project (Ex: nrthmntn)
- Table Title of the extracted table (Ex: table-3-summary-of-aquatics-field-work-and-aborigi; length of title is shortened to keep filepath under 256 characters)
- The Part of the table (we have a counter for each additional CSV of a particular table)
- Page Number of the extracted CSV
- Document Number of the particular PDF
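A minimal sketch of how such a filename could be assembled from its components — the helper name and exact slugging rules here are illustrative, not the precise logic used in the cells below:

```
import re

def build_csv_name(short_name, title, part, page, doc_num, max_title_len=50):
    """Assemble a descriptive CSV filename from its components."""
    slug = title.lower().replace('(', '').replace(')', '')
    slug = slug.replace(' ', '-').replace('.', '-')
    slug = re.sub(r'[^\w+-]', '', slug)[:max_title_len]  # keep filepath short
    return f"{short_name}_{slug}_pt-{part}_pg-{page}_doc-num-{doc_num}.csv"

name = build_csv_name(
    'nrthmntn',
    'Table 3 Summary of Aquatics Field Work and Aboriginal studies',
    part=1, page=14, doc_num='A3Q6H2')
print(name)
```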
```
import pandas as pd
import os
import glob
import shutil
from time import localtime, strftime
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 1000)
pd.set_option('display.max_colwidth', 255)
# filepath to the English and French index files
ENG_index_filepath = 'F:/Environmental Baseline Data/Version 4 - Final/Indices/ESA_website_ENG_03032021.csv'
# FRA_index_filepath = 'F:/Environmental Baseline Data/Version 4 - Final/Indices/ESA_website_FRA_titles_sections_translated.xlsx'
FRA_index_filepath = 'F:/Environmental Baseline Data/Version 4 - Final/Indices/ESA_website_FRA_03032021.csv'
# Loading index file of all tables
df = pd.read_csv(ENG_index_filepath, encoding='ISO-8859-1')
# df_FRA = pd.read_excel(FRA_index_filepath)
df_FRA = pd.read_csv(FRA_index_filepath, encoding='ISO-8859-1')
df.head()
df_FRA.head()
# Remove all rows for figures so that we are only moving tables
df = df[df['Content Type'] == 'Table']
df_FRA = df_FRA[df_FRA['Type de contenu'] == 'Tableau']
df_bad_DataIDs = pd.read_csv("H:/esa-intermediate-code/data_id_to_be_excluded.csv")
df['bad_csv'] = df['Data ID'].isin(list(df_bad_DataIDs.values))
df_FRA['mauvais_csv'] = df_FRA['Identificateur de données'].isin(list(df_bad_DataIDs.values))
df.head()
# Dropping old index column to create new one
# df.drop(columns=['Unnamed: 0'], inplace=True)
df = df.reset_index()
df.rename(columns = {"index": "Index"}, inplace = True)
# df_FRA.drop(columns=['Unnamed: 0'], inplace=True)
df_FRA = df_FRA.reset_index()
df_FRA.rename(columns = {"index": "Indice"}, inplace = True)
df_FRA.head()
# Creating the names of each csv file
df['filename'] = df['Download folder name'] + '_' + df['Title'].str.lower().str.replace('(', '', regex=False).str.replace(')', '', regex=False).str.replace(' ', '-', regex=False).str.replace('.', '-', regex=False).str.replace(r'[^\w+-]', '', regex=True).str.slice(0, 50)
df_FRA['nom_du_fichier'] = df_FRA['Télécharger le nom du dossier'] + '_' + df_FRA['Titre'].str.lower().str.replace('(', '', regex=False).str.replace(')', '', regex=False).str.replace(' ', '-', regex=False).str.replace('.', '-', regex=False).str.replace(r'[^\w+-]', '', regex=True).str.slice(0, 50)
df.head()
# Creating a column with the old filename so that we can rename the files
old_filename_df = df['CSV Download URL'].str.split('/').str[-1].str.split('_')
df['old_filename'] = old_filename_df.str[0] + '_' + old_filename_df.str[1] + '_lattice-v_' + old_filename_df.str[2]
vieux_nom_du_fichier_df = df_FRA['URL de téléchargement CSV'].str.split('/').str[-1].str.split('_')
df_FRA['vieux_nom_de_fichier'] = vieux_nom_du_fichier_df.str[0] + '_' + vieux_nom_du_fichier_df.str[1] + '_lattice-v_' + vieux_nom_du_fichier_df.str[2]
df['old_filename']
%%time
# We add a counter for all CSVs connected to the same table
# For the English index file
prev_title = ''
for index, row in df.iterrows():
current_title = row['filename']
if current_title == prev_title:
i += 1
current_title = current_title + '_pt-' + str(i) + '_pg-' + str(row['PDF Page Number']) + '_doc-num-' + str(row['Document Number']) + '.csv'
else:
i = 1
current_title = current_title + '_pt-' + str(i) + '_pg-' + str(row['PDF Page Number']) + '_doc-num-' + str(row['Document Number']) + '.csv'
df.loc[index, 'filename'] = current_title
    df.loc[index, 'CSV Download URL'] = 'http://www.cer-rec.gc.ca/esa-ees/' + row['Download folder name'] + '/' + current_title
prev_title = row['filename']
df.head(5)
%%time
# For French index file
prev_title = ''
for index, row in df_FRA.iterrows():
current_title = row['nom_du_fichier']
if current_title == prev_title:
i += 1
current_title = current_title + '_pt-' + str(i) + '_pg-' + str(row['Numéro de page PDF']) + '_num-du-doc-' + str(row['Numéro de document']) + '.csv'
else:
i = 1
current_title = current_title + '_pt-' + str(i) + '_pg-' + str(row['Numéro de page PDF']) + '_num-du-doc-' + str(row['Numéro de document']) + '.csv'
df_FRA.loc[index, 'nom_du_fichier'] = current_title
    df_FRA.loc[index, 'URL de téléchargement CSV'] = 'http://www.cer-rec.gc.ca/esa-ees/' + row['Télécharger le nom du dossier'] + '/' + current_title
prev_title = row['nom_du_fichier']
df_FRA.head(5)
# Adding an index ID to each file to avoid duplicates
# Add only if there are duplicates
# df['filename'] = df['filename'] + '-' + 'no' + df['Index'].astype(str) + '.csv'
# df_FRA['nom_du_fichier'] = df_FRA['nom_du_fichier'] + '-' + 'no' + df_FRA['Indice'].astype(str) + '.csv'
# Making sure there are no duplicates in English filenames
assert len(df) - len(df['filename'].unique()) == 0, "Should be 0. Instead, it is " + str(len(df) - len(df['filename'].unique())) + '.'
# Making sure there are no duplicates in French filenames
assert len(df_FRA) - len(df_FRA['nom_du_fichier'].unique()) == 0, "Should be 0. Instead, it is " + str(len(df_FRA) - len(df_FRA['nom_du_fichier'].unique())) + '.'
df['bad_csv'].value_counts()
# Where the CSVs are located
csv_folder_path_ENG = 'F:/Environmental Baseline Data/Version 4 - Final/all_csvs_cleaned_latest_ENG/'
csv_folder_path_FRA = 'F:/Environmental Baseline Data/Version 4 - Final/all_csvs_cleaned_latest_FRA/'
%%time
# English CSVs
# If the final file has been renamed, it will skip the renaming loop
os.chdir(csv_folder_path_ENG)
if os.path.isfile(df['old_filename'].iloc[-1]):
#loop through the name and rename
for index, row in df.iterrows():
if os.path.isfile(row['old_filename']):
if row['bad_csv'] == False:
shutil.move(row['old_filename'], row['filename'])
else:
os.remove(row['old_filename'])
%%time
# French CSVs
# If the final file has been renamed, it will skip the renaming loop
os.chdir(csv_folder_path_FRA)
if os.path.isfile(df_FRA['vieux_nom_de_fichier'].iloc[-1]):
#loop through the name and rename
for index, row in df_FRA.iterrows():
if os.path.isfile(row['vieux_nom_de_fichier']):
if row['mauvais_csv'] == False:
shutil.move(row['vieux_nom_de_fichier'], row['nom_du_fichier'])
else:
os.remove(row['vieux_nom_de_fichier'])
# Updating base path to Indices folder to save index files
os.chdir('F:/Environmental Baseline Data/Version 4 - Final/Indices/')
# Saving index files
df.to_csv('ESA_website_ENG_' + strftime("%Y_%m_%d", localtime()) + '_final' + '.csv', encoding='ISO-8859-1', index=False)
df_FRA.to_csv('ESA_website_FRA_' + strftime("%Y_%m_%d", localtime()) + '_final' + '.csv', encoding='ISO-8859-1', index=False)
```
```
from local_vars import root_folder
data_folder = r"Circles"
image_size = 64
batch_size = 20
input_intensity_scaling = 1 / 255.0
import pandas as pd
import numpy as np
import itertools
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
data_fullpath = os.path.join(root_folder, data_folder)
train_fullpath = os.path.join(data_fullpath, "train")
valid_fullpath = os.path.join(data_fullpath, "valid")
test_fullpath = os.path.join(data_fullpath, "test")
print("Training data folder: {}".format(train_fullpath))
print("Validation data folder: {}".format(valid_fullpath))
print("Test data folder: {}".format(test_fullpath))
train_annotations_df = pd.DataFrame()
num_csv_files = 0
for file_name in os.listdir(train_fullpath):
if not file_name.endswith(".csv"):
continue
num_csv_files += 1
current_file_path = os.path.join(train_fullpath, file_name)
train_annotations_df = pd.concat([train_annotations_df, pd.read_csv(current_file_path)])
valid_annotations_df = pd.DataFrame()
for file_name in os.listdir(valid_fullpath):
if not file_name.endswith(".csv"):
continue
num_csv_files += 1
current_file_path = os.path.join(valid_fullpath, file_name)
valid_annotations_df = pd.concat([valid_annotations_df, pd.read_csv(current_file_path)])
test_annotations_df = pd.DataFrame()
for file_name in os.listdir(test_fullpath):
if not file_name.endswith(".csv"):
continue
num_csv_files += 1
current_file_path = os.path.join(test_fullpath, file_name)
test_annotations_df = pd.concat([test_annotations_df, pd.read_csv(current_file_path)])
num_test_images = test_annotations_df.shape[0]
print("Number of training images: {}".format(train_annotations_df.shape[0]))
print("Number of validation images: {}".format(valid_annotations_df.shape[0]))
print("Number of test images: {}".format(num_test_images))
annot_data = train_annotations_df[:5]
print("Original annotations")
print(annot_data)
scaled_positions = pd.concat( [annot_data['x'] / annot_data['width'], annot_data['y'] / annot_data['height']], axis=1 )
scaled_positions.columns = ['x', 'y']
# scaled_positions = np.concatenate( annot_data[:,2] / annot_data[:,0], annot_data[:,3] / annot_data[:,1], axis=1)
print("\nScaled positions")
print(scaled_positions)
class MyBatchGenerator(Sequence):
    def __init__(self, image_filenames, annotations_df, batch_size, image_size=256):
        self.image_filenames, self.annotations_df = image_filenames, annotations_df
        self.batch_size = batch_size
        self.image_size = image_size
    def __len__(self):
        num_batches = np.ceil(len(self.image_filenames) / float(self.batch_size))
        return int(num_batches)
    def __getitem__(self, idx):
        from_index = idx * self.batch_size
        to_index = (idx + 1) * self.batch_size
        batch_x = self.image_filenames[from_index:to_index]
        annot_data = self.annotations_df[['width', 'height', 'x', 'y']].iloc[from_index:to_index]
        batch_y = pd.concat([annot_data['x'] / annot_data['width'], annot_data['y'] / annot_data['height']], axis=1)
        return np.array([
            resize(
                imread(file_name) * input_intensity_scaling,
                (self.image_size, self.image_size, 1),
                anti_aliasing=False,
                preserve_range=True,
                mode='constant') for file_name in batch_x]), np.array(batch_y)
train_image_filenames = [os.path.join(train_fullpath,fn) for fn in os.listdir(train_fullpath) if fn.endswith('png')]
valid_image_filenames = [os.path.join(valid_fullpath,fn) for fn in os.listdir(valid_fullpath) if fn.endswith('png')]
test_image_filenames = [os.path.join(test_fullpath,fn) for fn in os.listdir(test_fullpath) if fn.endswith('png')]
train_generator = MyBatchGenerator(train_image_filenames, train_annotations_df, batch_size, image_size)
valid_generator = MyBatchGenerator(valid_image_filenames, valid_annotations_df, batch_size, image_size)
x, y = valid_generator.__getitem__(0)
print("Example input image with true location")
plt.title(str((y[0][0], y[0][1])))
plt.imshow(x[0][:,:,0], cmap='gray')
plt.axvline(x=y[0][0] * image_size)
plt.axhline(y=y[0][1] * image_size)
out = plt.colorbar()
print("\nValidation locations")
print(y[:5])
model = tf.keras.Sequential()
model.add(layers.Conv2D(16, (3, 3), padding='same', activation='relu', input_shape=(image_size, image_size, 1)))
model.add(layers.Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(.2))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(.2))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(.2))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(.2))
model.add(layers.Dense(2, activation='sigmoid'))
model.summary()
num_validation_steps = valid_annotations_df.shape[0] // batch_size
num_steps = train_annotations_df.shape[0] // batch_size
print("Training steps: {}".format(num_steps))
print("Validation steps: {}".format(num_validation_steps))
model.compile(tf.keras.optimizers.SGD(learning_rate=0.1), loss='mse', metrics=['accuracy'])
num_epochs = 8
history = model.fit(
generator=train_generator,
steps_per_epoch=num_steps,
epochs=num_epochs,
verbose=1,
validation_data=valid_generator,
validation_steps=num_validation_steps)
plt.figure(figsize=[8,6])
plt.plot(history.history['accuracy'],'r',linewidth=3.0)
plt.plot(history.history['val_accuracy'],'b',linewidth=3.0)
plt.legend(['Training Accuracy', 'Validation Accuracy'],fontsize=18)
plt.xlabel('Epochs ',fontsize=16)
plt.ylabel('Accuracy',fontsize=16)
plt.title('Accuracy Curves',fontsize=16)
test_generator = MyBatchGenerator(test_image_filenames, test_annotations_df, num_test_images, image_size)
def plots(ims, ys, figsize=(20,6), rows=1, titles=None):
f = plt.figure(figsize=figsize)
    cols = len(ims)//rows if len(ims) % rows == 0 else len(ims)//rows + 1
for i in range(len(ims)):
sp = f.add_subplot(rows, cols, i+1)
sp.axis("Off")
if titles is not None:
sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i][:, :, 0], cmap='gray')
plt.axvline(x=ys[i][0] * image_size)
plt.axhline(y=ys[i][1] * image_size)
x, y = test_generator.__getitem__(0)
y_pred = model.predict(x)
n = len(y)
average_error = 0.0
averaging_factor = 1.0 / n
for i in range(len(y)):
distance = np.linalg.norm(y[i] - y_pred[i])
average_error += distance * averaging_factor
print("Average error: {0:.4f}".format(average_error))
print("\nTrue locations")
print(y[:5])
print("\nPredicted locations")
print(y_pred[:5])
plots(x[:24], y_pred[:24], figsize=(20,12), rows=4)
import datetime
timestamp = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
weights_folder = os.path.join(root_folder, "weights")
if not os.path.exists(weights_folder):
os.makedirs(weights_folder)
print("Creating folder: {}".format(weights_folder))
weights_file_name = "weights_" + timestamp + ".h5"
weights_file_path = os.path.join(weights_folder, weights_file_name)
model.save_weights(weights_file_path)
```
# Calculating Thermodynamics Observables with a quantum computer
```
# imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from functools import partial
from qiskit.utils import QuantumInstance
from qiskit import Aer
from qiskit.algorithms import NumPyMinimumEigensolver, VQE
from qiskit_nature.drivers import UnitsType, Molecule
from qiskit_nature.drivers.second_quantization import PySCFDriver
from qiskit_nature.problems.second_quantization import ElectronicStructureProblem
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.mappers.second_quantization import JordanWignerMapper
from qiskit_nature.algorithms import GroundStateEigensolver
import qiskit_nature.constants as const
from qiskit_nature.algorithms.pes_samplers import BOPESSampler, EnergySurface1DSpline
from thermodynamics_utils.thermodynamics import constant_volume_heat_capacity
from thermodynamics_utils.vibrational_structure_fd import VibrationalStructure1DFD
from thermodynamics_utils.partition_function import DiatomicPartitionFunction
from thermodynamics_utils.thermodynamics import Thermodynamics
import warnings
warnings.simplefilter('ignore', np.RankWarning)
```
A preliminary draft with more information related to this tutorial can be found in preprint: Stober et al, arXiv 2003.02303 (2020)
### Calculation of the Born Oppenheimer Potential Energy Surface (BOPES)
To compute thermodynamic observables we begin with a single-point energy calculation, which calculates the wavefunction and charge density, and therefore the energy, of a particular arrangement of nuclei. Here we compute the Born-Oppenheimer potential energy surface of a hydrogen molecule as an example; this is simply the electronic energy as a function of bond length.
```
qubit_converter = QubitConverter(mapper = JordanWignerMapper())
quantum_instance = QuantumInstance(backend=Aer.get_backend('aer_simulator_statevector'))
solver = VQE(quantum_instance=quantum_instance)
me_gss = GroundStateEigensolver(qubit_converter, solver)
stretch1 = partial(Molecule.absolute_distance, atom_pair=(1, 0))
mol = Molecule(geometry=[('H', [0., 0., 0.]),
('H', [0., 0., 0.2])],
degrees_of_freedom=[stretch1],
masses=[1.6735328E-27, 1.6735328E-27]
)
# pass molecule to the PySCF driver
driver = PySCFDriver(molecule=mol)
es_problem = ElectronicStructureProblem(driver)
# BOPES sampler testing
bs = BOPESSampler(gss=me_gss,
bootstrap=True)
points = np.linspace(0.45, 5, 50)
res = bs.sample(es_problem,points)
energies = []
bs_res_full = res.raw_results
for point in points:
energy = bs_res_full[point].computed_energies + bs_res_full[point].nuclear_repulsion_energy
energies.append(energy)
fig = plt.figure()
plt.plot(points,energies)
plt.title('Dissociation profile')
plt.xlabel('Interatomic distance')
plt.ylabel('Energy')
energy_surface = EnergySurface1DSpline()
xdata = res.points
ydata = res.energies
energy_surface.fit(xdata=xdata, ydata=ydata)
plt.plot(xdata, ydata, 'kx')
x = np.arange(min(xdata) - 0.25, max(xdata) + 0.25, 0.05)
plt.plot(x, energy_surface.eval(x), 'r-')
plt.xlabel(r'distance, $\AA$')
plt.ylabel('energy, Hartree')
dist = max(ydata) - min(ydata)
plt.ylim(min(ydata) - 0.1 * dist, max(ydata) + 0.1 * dist)
```
### Calculation of the molecular Vibrational Energy levels
The Born-Oppenheimer approximation removes internuclear vibrations from the molecular Hamiltonian, so the energy computed from quantum mechanical ground-state calculations under this approximation contains only the electronic energy. Since internuclear vibrations occur even at absolute zero, a correction is required to obtain the true zero-temperature energy of a molecule. This correction is the zero-point vibrational energy (ZPE), computed by summing the contributions from the internuclear vibrational modes. The next step in computing thermodynamic observables is therefore to determine the vibrational energy levels. This can be done by constructing the Hessian matrix from single-point energies computed close to the equilibrium bond length; the eigenvalues of the Hessian matrix can then be used to determine the vibrational energy levels and the zero-point vibrational energy
\begin{equation}
{\rm ZPE} = \frac{1}{2}\, \sum_{i}^{M} \nu_i \, ,
\end{equation}
with $\nu_i$ being the vibrational frequencies, $M = 3N - 6$ or $M = 3N - 5$ for non-linear or linear molecules, respectively, and $N$ the number of atoms.
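As a tiny numeric illustration of the ZPE sum (the frequency below is a made-up value in Hartree, not one computed from the H2 surface):

```
# Hedged sketch: ZPE = (1/2) * sum of vibrational frequencies.
# A diatomic has a single mode (M = 3*2 - 5 = 1); the value is illustrative.
vibrational_frequencies = [0.0200]  # Hartree
zpe = 0.5 * sum(vibrational_frequencies)
print(zpe)
```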
Here we fit a "full" energy surface using a 1D spline potential and use it to evaluate molecular vibrational energy levels.
```
vibrational_structure = VibrationalStructure1DFD(mol, energy_surface)
plt.plot(xdata, ydata, 'kx')
x = np.arange(min(xdata) - 0.25, max(xdata) + 0.25, 0.05)
plt.plot(x, energy_surface.eval(x), 'r-')
plt.xlabel(r'distance, $\AA$')
plt.ylabel('energy, Hartree')
dist = max(ydata) - min(ydata)
plt.ylim(min(ydata) - 0.1 * dist, max(ydata) + 0.1 * dist)
for N in range(15):
on = np.ones(x.shape)
on *= (energy_surface.eval(energy_surface.get_equilibrium_geometry()) +
vibrational_structure.vibrational_energy_level(N))
plt.plot(x, on, 'g:')
on = np.ones(x.shape)
plt.show()
```
### Create a partition function for the calculation of heat capacity
The partition function for a molecule is the product of contributions from translational, rotational, vibrational, electronic, and nuclear degrees of freedom. Having the vibrational frequencies, we can now obtain the vibrational partition function $q_{\rm vibration}$ to compute the whole molecular partition function
\begin{equation}
q_{\rm vibration} = \prod_{i=1}^{M} \frac{\exp\,(-\Theta_{\nu_i}/2T)}{1-\exp\,(-\Theta_{\nu_i}/T)} \, .
\end{equation}
Here $\Theta_{\nu_i}= h\nu_i/k_B$, $T$ is the temperature and $k_B$ is the Boltzmann constant.
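A minimal numeric sketch of this product for a single mode, using the harmonic-oscillator form above (the characteristic temperature below is a made-up value, not H2's):

```
import math

def q_vibration(thetas, T):
    """Vibrational partition function: product over modes of
    exp(-theta/2T) / (1 - exp(-theta/T))."""
    q = 1.0
    for theta in thetas:
        q *= math.exp(-theta / (2 * T)) / (1 - math.exp(-theta / T))
    return q

print(q_vibration([6000.0], 300.0))  # dominated by the zero-point factor at this T
```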
The single-point energy calculations and the resulting partition function can be used to calculate the (constant volume or constant pressure) heat capacity of the molecules. The constant volume heat capacity, for example, is given by
\begin{equation}
C_v = \left.\frac{\partial U}{\partial T}\right|_{N,V}\, ,
\qquad
{\rm with} \quad
U=k_B T^2 \left.\frac{\partial \ln Q}{\partial T}\right|_{N,V} .
\end{equation}
$U$ is the internal energy, $V$ is the volume and $Q$ is the partition function.
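The two relations above can be sketched numerically with central finite differences. For a single harmonic vibrational mode $\ln Q$ is known in closed form, so the sketch below (with an illustrative characteristic temperature, and helper names that are assumptions, not from the source) recovers the expected $C_v/k_B$ between 0 and 1:

```
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def cv_from_partition(ln_q, T, dT=1e-3):
    """C_v = dU/dT with U = kB * T^2 * d(ln Q)/dT, via central differences."""
    def U(t):
        return KB * t**2 * (ln_q(t + dT) - ln_q(t - dT)) / (2 * dT)
    return (U(T + dT) - U(T - dT)) / (2 * dT)

theta = 500.0  # illustrative characteristic temperature, K
ln_q = lambda t: -theta / (2 * t) - math.log(1 - math.exp(-theta / t))
print(cv_from_partition(ln_q, 300.0) / KB)  # ~0.8 for theta/T = 5/3
```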
Here we illustrate the simplest usage of the partition function, namely creating a Thermodynamics object to compute properties like the heat capacities discussed above.
```
Q = DiatomicPartitionFunction(mol, energy_surface, vibrational_structure)
P = 101350  # Pa
mol.spins = [1/2, 1/2]
td = Thermodynamics(Q, pressure=P)
temps = np.arange(10, 1500, 5)  # K
ymin = 5
ymax = 11
plt.plot(temps,
td.constant_pressure_heat_capacity(temps) / const.CAL_TO_J)
plt.xlim(0, 1025)
plt.ylim(ymin, ymax)
plt.xlabel('Temperature, K')
plt.ylabel('Cp, cal mol$^{-1}$ K$^{-1}$')
plt.show()
```
Here we demonstrate how to access particular components (the rotational part) of the partition function, which in the H2 case we can further split into para-hydrogen and ortho-hydrogen components.
```
eq = Q.get_partition(part="rot", split="eq")
para = Q.get_partition(part="rot", split="para")
ortho = Q.get_partition(part="rot", split="ortho")
```
We will now plot the constant volume heat capacity of the rotational part, demonstrating how to call the functions in the 'thermodynamics' module directly, providing a callable object for the partition function (in this case its rotational component). Note that we normalize the plot by dividing by the universal gas constant R (Avogadro's number times Boltzmann's constant), and we use crosses to compare with experimental data found in the literature.
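For reference, the normalization constant is just the molar gas constant; a quick sanity check with CODATA values (independent of the constants module used in the cell below):

```
N_A = 6.02214076e23  # Avogadro's number, 1/mol
KB = 1.380649e-23    # Boltzmann's constant, J/K
R = N_A * KB         # universal gas constant, J mol^-1 K^-1
print(round(R, 3))   # 8.314
```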
```
# REFERENCE DATA from literature
df_brink_T = [80.913535,135.240157,176.633783,219.808499,246.226899]
df_brink_Cv = [0.118605,0.469925,0.711510,0.833597,0.895701]
df_eucken_T = [25.120525, 30.162485, 36.048121, 41.920364, 56.195875, 62.484934, 72.148692, 73.805910, 73.804236, 92.214423,180.031917,230.300866]
df_eucken_Cv = [0.012287,0.012354,0.008448,0.020478,0.032620,0.048640,0.048768,0.076678,0.078670,0.170548,0.667731,0.847681]
df_gia_T = [190.919338,195.951254,202.652107,204.292585,209.322828,225.300754,234.514217,243.747768]
df_gia_Cv = [0.711700,0.723719,0.749704,0.797535,0.811546,0.797814,0.833793,0.845868]
df_parting_T = [80.101665, 86.358919,185.914204,239.927797]
df_parting_Cv = [0.084730,0.138598,0.667809,0.891634]
df_ce_T = [80.669344,135.550569,145.464190,165.301153,182.144856,203.372528,237.993108,268.696642,294.095771,308.872014]
df_ce_Cv = [0.103048,0.467344,0.541364,0.647315,0.714078,0.798258,0.891147,0.944848,0.966618,0.985486]
HeatCapacity = constant_volume_heat_capacity
R = const.N_A * const.KB_J_PER_K
plt.plot(temps,
HeatCapacity(eq, temps) / R, '-k',
label='Cv_rot Equilibrium')
plt.plot(temps, HeatCapacity(para, temps) / R, '-b',
label='Cv_rot Para')
plt.plot(temps, HeatCapacity(ortho, temps) / R, '-r',
label='Cv_rot Ortho')
plt.plot(temps, 0.25 * HeatCapacity(para, temps) / R
+ 0.75 * HeatCapacity(ortho, temps) / R, '-g',
label='Cv_rot 1:3 para:ortho')
plt.plot(df_brink_T, df_brink_Cv, '+g')
plt.plot(df_eucken_T, df_eucken_Cv, '+g')
plt.plot(df_gia_T, df_gia_Cv, '+g')
plt.plot(df_parting_T, df_parting_Cv, '+g')
plt.plot(df_ce_T, df_ce_Cv, '+g', label = 'experimental data')
plt.legend(loc='upper right', frameon=False)
plt.xlim(10, 400)
plt.ylim(-0.1, 2.8)
plt.xlabel('Temperature, K')
plt.ylabel('Cv (rotational)/R')
plt.tight_layout()
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
```
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
```
### 1. Data science formalism
```
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
```
In supervised machine learning, you have some data together with the corresponding labels. Let's check what those data look like.
```
X_df = pd.DataFrame(iris.data, columns=iris.feature_names)
X_df.tail()
y = pd.Series(iris.target, name='target')
y = y.apply(lambda x: iris.target_names[x])
y.tail()
sns.pairplot(data=pd.concat([X_df, y], axis=1), hue='target')
```
I created a pandas dataframe and a pandas series from the original data. We should check the type of the original data.
```
type(iris.data)
type(iris.target)
```
So these variables are NumPy arrays. NumPy is a package for working with numeric data efficiently in Python; it is the main package scikit-learn uses to handle data. Let's give an example. We will train a classifier which predicts the class by majority vote among the nearest neighbors.
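The idea behind this classifier can be sketched in a few lines of NumPy (a toy nearest-neighbors vote on made-up 1D data, not scikit-learn's actual implementation):

```
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Predict the majority label among the k nearest training points."""
    distances = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_toy = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
y_toy = np.array([0, 0, 0, 1, 1])
print(knn_predict(X_toy, y_toy, np.array([0.15])))  # → 0
```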
```
from sklearn.neighbors import KNeighborsClassifier
```
Create the classifier.
```
classifier = KNeighborsClassifier()
```
Train the classifier.
```
classifier.fit(iris.data, iris.target)
```
Let's say that we got an iris flower, took the measurements of the petals and sepals, and organised them the same way as before.
```
new_flower = np.array([[5.1, 3.5, 1.4, 0.2]])
```
Our classifier will be able to tell us which class this flower should belong to.
```
classifier.predict(new_flower)
```
Be aware that this classifier could have been trained directly on the dataframe or series, because all the data are already numeric and scikit-learn manages to convert them into NumPy arrays internally.
```
classifier.fit(X_df, y)
classifier.predict(new_flower)
```
## 2. Difference between NumPy array and a pandas dataframe
We can quickly check what is the difference between a NumPy array and a Pandas dataframe by printing them.
```
iris.data
X_df.head()
```
So the dataframe has an index and some column names. However, they represent the same data.
```
X_df.shape
iris.data.shape
```
Another major difference is the data types:
```
X_df.dtypes
iris.data.dtype
```
A dataframe has a data type for each column, while a NumPy array has a single one. Note that we can always get a NumPy array from a dataframe.
```
X_df.values
```
We can select some values from an array similarly to the selection by position of Pandas.
```
iris.data[0, :]
iris.data[:, 0]
iris.target[0]
```
## 2. Quick numerical analysis
We already saw that we could use Pandas to make quick numerical analyses.
```
X_df.mean()
```
However, you don't always want to convert your data to a dataframe just to compute simple stats. Let's compute the `mean` as we would in Pandas. What does this mean value represent?
```
iris.data.mean()
X_df.mean().mean()
```
So we can use the `axis` keyword to specify along which axis to compute a statistic.
```
iris.data.mean(axis=0)
iris.data.mean(axis=1)
```
## 3. Particular example of image classification
A very common use case in classification is to classify images. I want to present this case since it is not straightforward to know how to represent the data.
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
```
So we have some image data. We can first check the shape of this numpy array.
```
digits.images.shape
```
And we can plot the first sample of this array which is an image.
```
plt.imshow(digits.images[0])
```
However, we saw that scikit-learn requires a 2D data array. Let's check how the data are stored.
```
digits.data.shape
digits.data[0]
digits.target.shape
n_images, height, width = digits.images.shape
print(n_images, height, width)
```
So we can easily go from a 3D array to a 2D array and vice-versa.
```
plt.imshow(digits.data[0].reshape((height, width)))
```
Now, we can train a classifier and see how it behaves.
```
classifier.fit(digits.data, digits.target)
new_example = digits.data[0]
classifier.predict(new_example)
```
We get a nasty error because we provided a 1D vector to the classifier. Scikit-learn requires a 2D array so that it can tell the difference between a sample and a feature.
```
new_example.shape
new_example.reshape(-1, 1).shape
new_example[:, np.newaxis].shape
new_example.reshape(1, -1).shape
new_example[np.newaxis, :].shape
```
Now that we know how to specify that our vector was a single sample, we can try to predict again.
```
classifier.predict(new_example[np.newaxis, :])
```
What if I have 2 single samples to classify and want to join them to make the predictions?
```
another_example = digits.data[1]
datasets = np.vstack([new_example, another_example])
datasets.shape
datasets
classifier.predict(datasets)
```
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
import os
from pdf2image import convert_from_path
import tempfile
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn import svm
import segment_boards
%matplotlib inline
def sbw(im):
plt.imshow(im, cmap='gray', vmin=0, vmax=255)
def sw(im):
plt.imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
plt.show()
%%time
image_path = '..\\data\\out\\a22.png'
board_im = cv2.imread(image_path, 0)
sbw(board_im)  # image was read as grayscale (imread flag 0), so use the gray helper
image_path = '..\\data\\dev\\1n.png'
im = cv2.imread(image_path, 0)
X_train.max()
board = boards[1]
ymin, ymax, xmin, xmax, w, h = board
board_im = im[ymin:ymax, xmin:xmax]
board_im = cv2.resize(board_im, (128*8, 128*8))
dim = 128
# dim = min(h, w) // 8
# ic_low = dim // 4
# ic_high = dim // 4 * 3
winSize = (128, 128) #
# winSize = (96, 96) #
# blockSize = (64, 64) #
# blockStride = (32, 32) #
# cellSize = (16, 16) #
blockSize = (16, 16) #
blockStride = (8, 8) #
cellSize = (8, 8) #
nbins = 9 #
derivAperture = 1
winSigma = -1.
histogramNormType = 0
L2HysThreshold = 0.2
gammaCorrection = 1
nlevels = 64
signedGradients = True #
hog = cv2.HOGDescriptor(
winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType,
L2HysThreshold, gammaCorrection, nlevels, signedGradients
)
for i in range(8):
for j in range(8):
cell = board_im[dim*i: dim*(i+1), dim*j: dim*(j+1)]
# sw(cell)
x = hog.compute(cell)
name = piecenames[int(linear.predict(x.T))]
cv2.imwrite('..//data//dev//pieces//des32//' + name + '_' + str(i) + '_' + str(j) + ' 2.png', cell)
# cell = im[dim*i + ymin: dim*(i+1) + ymin, dim*j + xmin: dim*(j+1) + xmin]
# color = 'white_' if (i + j) % 2 == 0 else 'black_'
# ret,th = cv2.threshold(cell, 225, 255, cv2.THRESH_BINARY)
# inner square majority to determine piece color
# inner_cell = th[ic_low:ic_high, ic_low:ic_high]
# cv2.putText(board_im, color + str(inner_cell.sum()), (dim*i,dim*j), cv2.FONT_HERSHEY_SIMPLEX, 1, 255)
# if color == 'white_':
# continue
# cv2.imwrite('..//data//dev//pieces//n_' +color + str(i) + '_' + str(j) + '.png', th)
# cv2.imwrite('..//data//dev//pieces//n_' +color + str(inner_cell.sum() / inner_cell.size / 255.0) + 'inner.png', inner_cell)
# sw(board_im)
im = cv2.imread(image_path, 0)
path = '..\\data\\dev\\work\\*.jpg'
dpath = '..\\data\\dev\\work\\'
import glob, os
for i, f in enumerate(glob.glob(path)):
n = '%05d' % (i,)
os.rename(f, os.path.join(dpath, 'WhiteSpace-' + n + '.jpg'))
folder = '..\\data\\dev\\set'
num_images = 8168
des = np.empty((num_images, 5184))
label = np.empty((num_images, 1))
x = 0
for i, piece in enumerate(sorted(os.listdir(folder))):
folder2 = os.path.join(folder, piece)
if os.path.isfile(folder2):
continue
for j, filename in enumerate(sorted(os.listdir(folder2))):
fullname = os.path.join(folder2, filename)
# im = cv2.imread(fullname, 0)
# winSize = (128, 128) #
# # winSize = (96, 96) #
# blockSize = (64, 64) #
# blockStride = (32, 32) #
# cellSize = (16, 16) #
# nbins = 9 #
# derivAperture = 1
# winSigma = -1.
# histogramNormType = 0
# L2HysThreshold = 0.2
# gammaCorrection = 1
# nlevels = 64
# signedGradients = False #
# hog = cv2.HOGDescriptor(
# winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType,
# L2HysThreshold, gammaCorrection, nlevels, signedGradients
# )
# des[x, :] = hog.compute(im, padding=(3,3)).flatten()
label[x] = i
x += 1
print(x)
# folder = '..\\data\\dev\\set'
# piecenames = []
# for i, piece in enumerate(sorted(os.listdir(folder))):
# folder2 = os.path.join(folder, piece)
# if os.path.isfile(folder2):
# continue
# piecenames.append(piece)
# print(piecenames)
piecenames = ['BlackBishop', 'BlackKing', 'BlackKnight', 'BlackPawn', 'BlackQueen', 'BlackRook', 'BlackSpace', 'WhiteBishop', 'WhiteKing', 'WhiteKnight', 'WhitePawn', 'WhiteQueen', 'WhiteRook', 'WhiteSpace']
# fullname = '..\\data\\dev\\set\\BlackKing\\Black-002-36-Average-3.jpg'
# fullname = '..\\data\\dev\\set\\WhiteBishop\\Black-002-9-Average-3.jpg'
fullname = '..\\data\\dev\\pieces\\des32\\BlackPawn_4_2 2.png'
fullname = '..\\data\\out\\23_final\\WhitePawn\\21_WhitePawn_00030_00003_00004_00002.png'
im = cv2.imread(fullname, 0)
sz = 128
im = cv2.resize(im, (sz, sz))
winSize = (128, 128) #
# winSize = (96, 96) #
blockSize = (64, 64) #
blockStride = (32, 32) #
cellSize = (16, 16) #
nbins = 9 #
derivAperture = 1
winSigma = -1.
histogramNormType = 0
L2HysThreshold = 0.2
gammaCorrection = 1
nlevels = 64
signedGradients = False #
hog = cv2.HOGDescriptor(
winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType,
L2HysThreshold, gammaCorrection, nlevels, signedGradients
)
hog.compute(im, padding=(3,3)).shape
# hog.compute(im[0:96,0:96])
x = hog.compute(im)
# x = hog.compute(im, padding=(3,3))
print(x.shape, im.shape)
piecenames[int(linear.predict(x.T))]
folder = '..\\data\\dev\\pieces\\black'
for i, filename in enumerate(sorted(os.listdir(folder))):
fullname = os.path.join(folder, filename)
print(fullname)
im = cv2.imread(fullname, 0)
sz = 125
im = cv2.resize(im, (sz, sz))
winSize = (128, 128) #
# winSize = (96, 96) #
blockSize = (64, 64) #
blockStride = (32, 32) #
cellSize = (16, 16) #
nbins = 9 #
derivAperture = 1
winSigma = -1.
histogramNormType = 0
L2HysThreshold = 0.2
gammaCorrection = 1
nlevels = 64
signedGradients = False #
hog = cv2.HOGDescriptor(
winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType,
L2HysThreshold, gammaCorrection, nlevels, signedGradients
)
x = hog.compute(im, padding=(3,3))
y = int(poly.predict(x.T))
print(y)
piecename = piecenames[y]
os.rename(fullname, os.path.join(folder + '3', piecename + str(i) + '.png'))
piecenames = ['BlackBishop', 'BlackKing', 'BlackKnight', 'BlackPawn', 'BlackQueen', 'BlackRook', 'BlackSpace', 'WhiteBishop', 'WhiteKing', 'WhiteKnight', 'WhitePawn', 'WhiteQueen', 'WhiteRook', 'WhiteSpace']
import pickle
linear = pickle.load(open('..\\data\\dev\\set\\linear_des6.pkl', 'rb'))
folder = '..\\data\\out\\yasser'
outfolder = '..\\data\\out\\yasser'
winSize = (128, 128) #
# winSize = (96, 96) #
blockSize = (64, 64) #
blockStride = (32, 32) #
cellSize = (16, 16) #
# blockSize = (16, 16) #
# blockStride = (8, 8) #
# cellSize = (8, 8) #
nbins = 9 #
derivAperture = 1
winSigma = -1.
histogramNormType = 0
L2HysThreshold = 0.2
gammaCorrection = 1
nlevels = 64
signedGradients = True #
hog = cv2.HOGDescriptor(
winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType,
L2HysThreshold, gammaCorrection, nlevels, signedGradients
)
for l, filename in enumerate(sorted(os.listdir(folder))):
if l % 10 == 0:
print(l)
image_path = os.path.join(folder, filename)
if not os.path.isfile(image_path):
continue
im = cv2.imread(image_path, 0)
boards = segment_boards.segment_boards(im)
for k, board in enumerate(boards):
ymin, ymax, xmin, xmax, _, _ = board
board_im = im[ymin:ymax, xmin:xmax]
dim = 128
board_im = cv2.resize(board_im, (dim * 8, dim * 8))
for i in range(8):
for j in range(8):
cell = board_im[dim*i: dim*(i+1), dim*j: dim*(j+1)]
x = hog.compute(cell)
y = int(linear.predict(x.T))
piecename = piecenames[y]
cv2.imwrite(os.path.join(outfolder, piecename, piecename + ('_%05d_%05d_%05d_%05d' % (l, k, i, j)) + '.png'), cell)
len(piecenames)
np.save('..\\data\\dev\\set\\X_test.npy', X_test)
np.save('..\\data\\dev\\set\\y_test.npy', y_test)
X_test = np.load('..\\data\\dev\\set\\X_test.npy')
y_test = np.load('..\\data\\dev\\set\\y_test.npy')
X = des
y = label.flatten()
print(X.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state = 0)
X_train = des
y_train = label.flatten()
print(X_train.shape)
X_test = des
y_test = label.flatten()
print(X_test.shape)
%%time
linear = svm.SVC(kernel='linear', C=1, decision_function_shape='ovo').fit(X_train, y_train)
print('linear')
%%time
rbf = svm.SVC(kernel='rbf', gamma=1, C=1, decision_function_shape='ovo').fit(X_train, y_train)
print('rbf')
%%time
poly = svm.SVC(kernel='poly', degree=3, C=1, decision_function_shape='ovo').fit(X_train, y_train)
print('poly')
%%time
sig = svm.SVC(kernel='sigmoid', C=1, decision_function_shape='ovo').fit(X_train, y_train)
print('sig')
%%time
linear_pred = linear.predict(X_test)
%%time
poly_pred = poly.predict(X_test)
%%time
rbf_pred = rbf.predict(X_test)
%%time
sig_pred = sig.predict(X_test)
accuracy_lin = linear.score(X_test, y_test)
print("Accuracy Linear Kernel:", accuracy_lin)
cm_lin = confusion_matrix(y_test, linear_pred)
total = np.sum(cm_lin)
print(' 0 1 2 3 4 5 6 7 8 9 10 11 12 13')
print(cm_lin)
print('accuracy', np.trace(cm_lin) / total)
print('total', np.trace(cm_lin), total)
print(cm_lin.diagonal() / cm_lin.sum(1))
acc = poly.score(X_test, y_test)
print(acc)
cm_lin = confusion_matrix(y_test, poly_pred)
print(cm_lin)
acc = rbf.score(X_test, y_test)
print(acc)
cm_lin = confusion_matrix(y_test, rbf_pred)
print(cm_lin)
%%time
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=1, algorithm='ball_tree').fit(X_train)
distances, indices = nbrs.kneighbors(X_test)
import pickle
pickle.dump(linear, open('..\\data\\dev\\set\\linear_des8.pkl', 'wb'))
import pickle
pickle.dump(linear, open('..\\data\\dev\\set\\linear.pkl', 'wb'))
pickle.dump(poly, open('..\\data\\dev\\set\\poly.pkl', 'wb'))
pickle.dump(rbf, open('..\\data\\dev\\set\\rbf.pkl', 'wb'))
pickle.dump(sig, open('..\\data\\dev\\set\\sig.pkl', 'wb'))
import pickle
linear = pickle.load(open('..\\data\\dev\\set\\linear_des6.pkl', 'rb'))
folder = '..\\data\\out\\23_templates'
# num_images = 28032 * 1
num_images = 14 * 2
# des = np.empty((num_images, 8100))
# des = np.empty((num_images, 1296))
des = np.empty((num_images, 5184))
des = np.empty((num_images, 11664))
label = np.empty((num_images, 1))
x = 0
for i, piece in enumerate(sorted(os.listdir(folder))):
folder2 = os.path.join(folder, piece)
if os.path.isfile(folder2):
continue
for j, filename in enumerate(sorted(os.listdir(folder2))):
fullname = os.path.join(folder2, filename)
im = cv2.imread(fullname, 0)
im = cv2.resize(im, (128, 128))
winSize = (128, 128) #
# winSize = (96, 96) #
blockSize = (64, 64) #
blockStride = (32, 32) #
cellSize = (16, 16) #
# blockSize = (16, 16) #
# blockStride = (8, 8) #
# cellSize = (8, 8) #
nbins = 9 #
derivAperture = 1
winSigma = -1.
histogramNormType = 0
L2HysThreshold = 0.2
gammaCorrection = 1
nlevels = 64
signedGradients = True #
hog = cv2.HOGDescriptor(
winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma, histogramNormType,
L2HysThreshold, gammaCorrection, nlevels, signedGradients
)
des[x, :] = hog.compute(im, padding=(3,3)).flatten()
# des[x, :] = hog.compute(im).flatten()
label[x] = i
x += 1
# blur = cv2.GaussianBlur(im,(5,5),0)
# des[x, :] = hog.compute(blur, padding=(3,3)).flatten()
# # des[x, :] = hog.compute(blur).flatten()
# label[x] = i
# x += 1
# median = cv2.medianBlur(im,5)
# des[x, :] = hog.compute(median, padding=(3,3)).flatten()
# # des[x, :] = hog.compute(median).flatten()
# label[x] = i
# x += 1
if x % 1000 == 0:
print(i, j)
sift = cv2.xfeatures2d.SIFT_create()
kp, ds = sift.detectAndCompute(im,None)
# print(len(kp))
# print(kp[0].pt)
print(ds.shape)
%%time
# criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, int(1e6), 1.0)
criteria = (cv2.TERM_CRITERIA_MAX_ITER, int(1e6), 0)
ret, bestLabels, centers = cv2.kmeans(np.float32(X_test), 28, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=14, random_state=0, max_iter=int(1e6)).fit(X_test)
# cm_lin = confusion_matrix(y_test, bestLabels, labels=np.arange(28))
cm_lin = confusion_matrix(y_test, kmeans.labels_, labels=np.arange(14))
# cm_lin = confusion_matrix(y_test, np.floor(indices / 2), labels=np.arange(14))
# bestLabels.min()
```
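The various descriptor sizes above (1296, 5184, 8100, 11664) follow from the HOG geometry. A small sketch of the arithmetic, assuming OpenCV's dense layout of blocks × cells-per-block × orientation bins per detection window (the ×4 and ×9 multiples, 5184 and 11664, appear when `compute` is called with padding, which produces several overlapping windows):

```
def hog_descriptor_length(win, block, stride, cell, nbins=9):
    """Feature length of a dense HOG over one square detection window."""
    blocks_per_axis = (win - block) // stride + 1
    cells_per_block = (block // cell) ** 2
    return blocks_per_axis ** 2 * cells_per_block * nbins

# winSize=128, blockSize=64, blockStride=32, cellSize=16
print(hog_descriptor_length(128, 64, 32, 16))  # 1296

# winSize=128, blockSize=16, blockStride=8, cellSize=8
print(hog_descriptor_length(128, 16, 8, 8))  # 8100
```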
```
import numpy as np
from scipy.stats import gennorm, norm
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from scipy.stats import binned_statistic
def bin_data(data,
min_ct=10,
num_bins=10,
ascending=False,
noise=1E-6,
max_iterations=100,
verbose=False):
"""
Bin data for use in NA regression
parameters
----------
data: (array-like)
Data to bin. Must be castable as a one-dimensional numpy array or floats.
min_ct: (int > 0 and <= len(data)/2)
Minimum number of data points to include in each bin.
num_bins: (int >= 2)
Number of bins in which to bin data.
ascending: (bool)
Specifies direction in which bins are computed.
noise: (float >= 0 or None)
Noise to add to data to avoid ties, in units of the data's standard deviation.
verbose: (bool)
Whether to provide printed feedback. Only needed for debugging purposes.
returns
-------
data_bins: (np.ndarray)
Array listing the bins that each data point is assigned to.
bin_df: (pd.DataFrame)
Dataframe listing bin boundaries, counts, and lengths.
"""
# Cast data as np.array
data = np.array(data)
# Get number of data points
N = len(data)
    # Add per-point perturbation to data to avoid ties
    noise *= np.std(data)
    data = np.array(data) + noise*np.random.randn(N)
# Cast min_ct as int
min_ct = int(min_ct)
# If min_ct is <= 0, raise error
assert min_ct > 0, f'Invalid value min_ct={min_ct}.'
# Make sure min_ct isn't too big
assert min_ct <= N/2, f'min_ct={min_ct} is too large; must be <= N/2={N/2}'
# Make sure num_bins is valid
assert num_bins >= 2, f'Invalid value for num_bins={num_bins}'
# Simplified function to compute binning using upper bound length
def f(l, return_results=False):
data_bins, bin_df = _bin_data_usin_min_length(data,
min_ct=min_ct,
min_length=l,
ascending=ascending)
n = len(bin_df)
if verbose:
print(f'n:{n}, l:{l:.4f}')
return n, data_bins, bin_df
# Start with what should be an upper bound on min_length l
span_ub = max(data)-min(data)
l = span_ub/num_bins
# Compute n given l
n, data_bins, bin_df = f(l)
# If n is correct, return results
if n == num_bins:
return data_bins, bin_df
# Otherwise, make sure n < num_bins and upper bound on l
assert n < num_bins, f'Error: n={n} > num_bins={num_bins}'
l_ub = l
if verbose:
print(f'n:{n} -> l_ub:{l_ub}')
# Reduce l until a lower bound is found, or max iterations are exceeded
for k in range(1,max_iterations+1):
# Reduce the value of l
l = l_ub/(2**k)
# Compute n given l
n, data_bins, bin_df = f(l)
# If n is too big, set l as lower bound and break out of for loop
if n > num_bins:
l_lb = l
if verbose:
print(f'n:{n} -> l_lb:{l_lb}')
break;
# If n is too small, check to make sure continuing with the calculation is worth it
elif n < num_bins:
# If shrinking bins any more won't help, return results
if sum(bin_df['count'] > min_ct) <= 1:
                print(f"Can't get more than n={n} bins with min_ct={min_ct}. Using {n} bins.")
return data_bins, bin_df
# Otherwise, n is correct. Return results.
else:
assert n == num_bins
return data_bins, bin_df
# Perform binary search until correct n is found, or max iterations are exceeded
if verbose:
print('(stage 3)')
for k in range(1,max_iterations+1):
        # Set new min_length
l = (l_lb + l_ub)/2
# Compute n given l
n, data_bins, bin_df = f(l)
if verbose:
print(f'l:{l}, l_ub:{l_ub}, l_lb:{l_lb}')
# If n is correct, return results
if n == num_bins:
return data_bins, bin_df
# If n is too large, adjust lower bound upward
if n > num_bins:
l_lb = l
# If n is too small, adjust upper bound downward
elif n < num_bins:
l_ub = l
# Otherwise, n is correct. Return results.
else:
assert n == num_bins
return data_bins, bin_df
print('Failed to converge. Returning last results. ')
return data_bins, bin_df
def _bin_data_usin_min_length(data, min_ct, min_length, ascending):
"""
Utility function used in bin_data().
parameters
----------
data: (array-like)
Data to bin. Must be castable as a one-dimensional numpy array or floats.
min_ct: (int > 0 and <= len(data)/2)
Minimum number of data points to include in each bin.
min_length: (float > 0)
Minimum length of a bin. If set, overrides max_bins.
ascending: (bool)
Specifies direction in which bins are computed.
returns
-------
data_bins: (np.ndarray)
Array listing the bins that each data point is assigned to.
bin_df: (pd.DataFrame)
Dataframe listing bin boundaries, counts, and lengths.
"""
# Cast data as np.array
data = np.array(data)
# Make sure min_ct is an int
min_ct = int(min_ct)
# Get number of data points
N = len(data)
# Sort data
sorted_data = np.sort(data)
if not ascending:
sorted_data = -sorted_data[::-1]
# Compute edges
edge_min = sorted_data[0]
edge_max = sorted_data[-1]
# Initialize n, edges, and cts
n = -1
edge = edge_min
edges = np.array([edge])
ns = np.array([n])
while (n < N-min_ct-1) and (edge < edge_max-min_length):
n_new_a = n + min_ct
edge_new_a = .5*(sorted_data[n_new_a]+sorted_data[n_new_a+1])
edge_new_b = edge + min_length
n_new_b = np.searchsorted(sorted_data, edge_new_b)
n = max(n_new_a, n_new_b)
edge = max(edge_new_a, edge_new_b)
# Record edge and n
edges = np.append(edges,edge)
ns = np.append(ns,n)
# Add upper bound to edges if needed
if (n < N) and (edge < edge_max):
edges = np.append(edges, edge_max)
ns = np.append(ns, N)
# Remove second-to-last edge if cts in last bin is too small
cts = np.diff(ns)
if cts[-1] < min_ct:
edges = np.delete(edges,-2)
# If descending, reverse edges. Note that this might mess up cts a bit.
if not ascending:
edges = -edges[::-1]
# Make sure that cts agrees with histogram
cts, _ = np.histogram(data, edges)
# Create a dataframe to hold results
bin_df = pd.DataFrame()
bin_df['count'] = cts
bin_df['left_in'] = edges[:-1]
bin_df['right_ex'] = edges[1:]
bin_df['length'] = bin_df['right_ex'] - bin_df['left_in']
# Compute bin index for each data point
_, _, data_bins = binned_statistic(data, data, statistic='mean', bins=edges, range=None)
# Return dataframe
return data_bins-1, bin_df
### Simulate data
# Generate signal data
N_signal = int(1E4)
signal_data = 10*(1+gennorm(beta=8).rvs(N_signal))
# Generate background data
N_background = int(1E5)
background_data = 2*norm().rvs(N_background)
# Gather into all N and shuffle
data = np.concatenate([signal_data, background_data])
np.random.shuffle(data)
### Bin data
min_ct = 2000
num_bins = 20
data_bins, bin_df = bin_data(data,
min_ct = min_ct,
num_bins = num_bins,
ascending=False,
verbose=False)
bin_df
### Plot results
# Create figure and axes
fig, axs = plt.subplots(1,3, figsize=[15,5])
axs = axs.ravel()
# Plot data histogram and edge locations
ax = axs[0]
sns.distplot(data, bins=100, ax=ax, kde=False)
edges = bin_df['left_in'].values
edges = np.append(edges,bin_df['right_ex'].values[-1])
for edge in edges:
ax.axvline(edge, linestyle=':', color='k', zorder=-100)
ax.set_yticks([])
ax.set_xlabel('data values')
ax.set_ylabel('density')
# Plot bin numbers against data values
ax = axs[1]
ix = np.argsort(data)
ax.step(data[ix],data_bins[ix])
ax.set_ylabel('bin number')
ax.set_xlabel('data values')
# Plot counts per bin for each bin
ax1 = axs[2]
color = 'C0'
x = bin_df.index
y1 = bin_df['count']
y2 = bin_df['length']
ax1.plot(x,y1,'-o', color=color)
ax1.axhline(min_ct, color=color, linestyle=':')
ax1.set_ylabel('count in bin', color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax1.set_xlabel('bin number')
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'C1'
ax2.set_ylabel('bin length', color=color) # we already handled the x-label with ax1
ax2.plot(x, y2, '-o', color=color)
ax2.tick_params(axis='y', labelcolor=color)
# Save figure
fig.tight_layout()
fig.savefig('adaptive_binning.pdf')
```
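The greedy edge placement in the helper combines two constraints — at least `min_ct` points per bin and at least `min_length` of span per bin — by taking whichever candidate pushes the next edge further. A toy sketch of that single update step, with made-up data:

```
import numpy as np

sorted_data = np.sort(np.array([0.0, 0.1, 0.2, 0.9, 1.0, 2.5, 3.0, 3.1]))
min_ct, min_length = 2, 1.0
n, edge = -1, sorted_data[0]

# Candidate A: advance by count (midpoint after min_ct more points)
n_a = n + min_ct
edge_a = 0.5 * (sorted_data[n_a] + sorted_data[n_a + 1])
# Candidate B: advance by length (first index at or past edge + min_length)
edge_b = edge + min_length
n_b = np.searchsorted(sorted_data, edge_b)
# Take whichever constraint is more demanding
n, edge = max(n_a, n_b), max(edge_a, edge_b)
print(n, edge)  # 4 1.0 — here the length constraint wins
```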
# Import
```
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('runs/lenet')
```
# Load data
```
# Prepare data transformator
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]
)
# Load train set
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)
# Load test set
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)
# Prepare class labels
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
# Show sample
```
# function to show an image
def imshow(img, convert_gpu=False):
img = img / 2 + 0.5 # unnormalize
if convert_gpu:
img = img.cpu()
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# show some test images
dataiter = iter(testloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join(f'{classes[labels[j]]}' for j in range(4)))
```
# Build architecture
```
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        # Prepare layers: classic LeNet-5, adapted to 3-channel 32x32 CIFAR images
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    def forward(self, x):
        # Connect layers: conv -> pool -> conv -> pool -> three fully-connected
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
net = LeNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
# GPU
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if device.type == 'cuda':
net.to(device)
print('GPU Activated')
```
# Train
```
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader):
# get the inputs; data is a list of [inputs, labels]
if device.type == 'cuda':
inputs, labels = data[0].to(device), data[1].to(device)
else:
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = net(inputs)
loss = criterion(outputs, labels)
# backward
loss.backward()
# optimize
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'epoch: {epoch + 1} batches: {i + 1} loss: {running_loss / 2000:.3f}')
writer.add_scalar('train loss', running_loss / 2000, epoch * len(trainloader) + i)
running_loss = 0.0
print('Finished Training')
```
# Testing
```
correct = 0
total = 0
with torch.no_grad():
for i, data in enumerate(testloader):
# get the inputs; data is a list of [inputs, labels]
if device.type == 'cuda':
images, labels = data[0].to(device), data[1].to(device)
else:
images, labels = data
# get outputs
outputs = net(images)
# gather which class had highest prediction score
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
# Show prediction for 1st batch
if i == 0:
imshow(torchvision.utils.make_grid(images), device.type == 'cuda')
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]}' for j in range(4)))
print('Predicted: ', ' '.join(f'{classes[predicted[j]]}' for j in range(4)))
print(f'Accuracy of the network on the 10000 test images: {100 * correct / total} %')
```
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=2
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### Load packages
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
import tensorflow_addons as tfa
```
### parameters
```
dataset = "cifar10"
labels_per_class = 64 # 'full'
n_latent_dims = 1024
confidence_threshold = 0.8 # minimum confidence to include in UMAP graph for learned metric
learned_metric = True # whether to use a learned metric, or Euclidean distance between datapoints
augmented = True #
min_dist= 0.001 # min_dist parameter for UMAP
negative_sample_rate = 5 # how many negative samples per positive sample
batch_size = 128 # batch size
optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train
optimizer = tfa.optimizers.MovingAverage(optimizer)
label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy
max_umap_iterations = 50 # how many times, maximum, to recompute UMAP
max_epochs_per_graph = 500 # how many epochs maximum each graph trains for (without early stopping)
umap_patience = 5 # how long before recomputing UMAP graph
```
#### Load dataset
```
from tfumap.semisupervised_keras import load_dataset
(
X_train,
X_test,
X_labeled,
Y_labeled,
Y_masked,
X_valid,
Y_train,
Y_test,
Y_valid,
Y_valid_one_hot,
Y_labeled_one_hot,
num_classes,
dims
) = load_dataset(dataset, labels_per_class)
```
### load architecture
```
from tfumap.semisupervised_keras import load_architecture
encoder, classifier, embedder = load_architecture(dataset, n_latent_dims)
```
### load pretrained weights
```
from tfumap.semisupervised_keras import load_pretrained_weights
encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier)
```
#### compute pretrained accuracy
```
# test current acc
pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True)
pretrained_predictions = np.argmax(pretrained_predictions, axis=1)
pretrained_acc = np.mean(pretrained_predictions == Y_test)
print('pretrained acc: {}'.format(pretrained_acc))
```
### get a, b parameters for embeddings
```
from tfumap.semisupervised_keras import find_a_b
a_param, b_param = find_a_b(min_dist=min_dist)
```
### build network
```
from tfumap.semisupervised_keras import build_model
model = build_model(
batch_size,
a_param,
b_param,
dims,
embedder,
encoder,
classifier,
negative_sample_rate=negative_sample_rate,
optimizer=optimizer,
label_smoothing=label_smoothing,
)
```
### build labeled iterator
```
from tfumap.semisupervised_keras import build_labeled_iterator
labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims)
```
### training
```
from livelossplot import PlotLossesKerasTF
from tfumap.semisupervised_keras import get_edge_dataset
from tfumap.semisupervised_keras import zip_datasets
```
#### callbacks
```
# early stopping callback
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_classifier_acc', min_delta=0, patience=15, verbose=0, mode='auto',
baseline=None, restore_best_weights=False
)
# plot losses callback
groups = {'accuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']}
plotlosses = PlotLossesKerasTF(groups=groups)
current_validation_acc = 0
batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int)
epochs_since_last_improvement = 0
for current_umap_iterations in tqdm(np.arange(max_umap_iterations)):
# make dataset
edge_dataset = get_edge_dataset(
model,
classifier,
encoder,
X_train,
Y_masked,
batch_size,
confidence_threshold,
labeled_dataset,
dims,
learned_metric = learned_metric
)
# zip dataset
zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size)
# train dataset
history = model.fit(
zipped_ds,
epochs=max_epochs_per_graph,
validation_data=(
(X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)),
{"classifier": Y_valid_one_hot},
),
callbacks = [early_stopping, plotlosses],
max_queue_size = 100,
steps_per_epoch = batches_per_epoch,
#verbose=0
)
# get validation acc
pred_valid = classifier.predict(encoder.predict(X_valid))
new_validation_acc = np.mean(np.argmax(pred_valid, axis = 1) == Y_valid)
# if validation accuracy has gone up, mark the improvement
if new_validation_acc > current_validation_acc:
epochs_since_last_improvement = 0
current_validation_acc = copy.deepcopy(new_validation_acc)
else:
epochs_since_last_improvement += 1
if epochs_since_last_improvement > umap_patience:
            print('No improvement in {} UMAP iterations'.format(umap_patience))
break
class_pred = classifier.predict(encoder.predict(X_test))
class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test)
print(class_acc)
```
| github_jupyter |
```
import os
def get_recursive_filenames(directory,upc_to_filenames):
for name in os.listdir(directory):
path = os.path.join(directory, name)
if os.path.isdir(path):
get_recursive_filenames(path,upc_to_filenames)
else:
upc = os.path.basename(os.path.dirname(path))
if upc in upc_to_filenames:
upc_to_filenames[upc].append(path)
else:
upc_to_filenames[upc] = [path]
upc_to_filenames={}
get_recursive_filenames('/home/src/goodsdl/media/dataset/step20',upc_to_filenames)
len(upc_to_filenames)
# source_cluster = [['A','B'],['A','D'],['B','E'],['E','F'],['C','D'],['G','A'],['N','M']]
source_cluster = [['6951003200048', '6923976111171'], ['6923976111195', '6923976113137'], ['6924743920361', '6921290410444'], ['6924743919228', '6924743920996'], ['6924743919228', '6924743919242'], ['6924743920996', '6924743919242'], ['6909409040775', '6909409012321'], ['6909409040898', '6923976111171'], ['6920912342019', '6920912342002'], ['6932005203077', '6943290500666'], ['6924743919235', '6923976111171'], ['6924743919235', '6924743919242'], ['6924743918610', '6923976111171']]
upc_to_lines = {} # {A:[0,1,5],B:[0,2],C:[3],D:[1,3],E:[2,4],F:[4],G:[5],N:[6],M:[6]}
for i in range(len(source_cluster)):
for j in range(2):
if source_cluster[i][j] in upc_to_lines:
upc_to_lines[source_cluster[i][j]].append(i)
else:
upc_to_lines[source_cluster[i][j]] = [i]
print('upc_to_lines:')
print(upc_to_lines)
solved_line_to_upc = {}
solved_upc_to_lines = {} # {A:[0,1,2,3,4,5],N:[6]}
for upc in upc_to_lines:
unsolved_lines = []
cluster_upcs = []
for line in upc_to_lines[upc]:
if line in solved_line_to_upc:
cluster_upc = solved_line_to_upc[line]
if cluster_upc not in cluster_upcs:
cluster_upcs.append(cluster_upc)
if line not in solved_upc_to_lines[cluster_upc]:
solved_upc_to_lines[cluster_upc].append(line)
else:
unsolved_lines.append(line)
    if len(cluster_upcs) > 1:  # need to merge multiple existing clusters
        cluster_upcs = sorted(cluster_upcs)
        main_upc = cluster_upcs[0]
if main_upc not in solved_upc_to_lines:
solved_upc_to_lines[main_upc] = []
for cluster_upc in cluster_upcs:
if cluster_upc != main_upc:
for line in solved_upc_to_lines[cluster_upc]:
if line not in solved_upc_to_lines[main_upc]:
solved_upc_to_lines[main_upc].append(line)
solved_line_to_upc[line] = main_upc
solved_upc_to_lines.pop(cluster_upc)
cluster_upcs = [main_upc]
for line in unsolved_lines:
        if len(cluster_upcs) == 0:  # no existing cluster: start a new one
solved_line_to_upc[line] = upc
if upc not in solved_upc_to_lines:
solved_upc_to_lines[upc] = [line]
else:
if line not in solved_upc_to_lines[upc]:
solved_upc_to_lines[upc].append(line)
        elif len(cluster_upcs) == 1:  # join the identified cluster
if line not in solved_upc_to_lines[cluster_upcs[0]]:
solved_upc_to_lines[cluster_upcs[0]].append(line)
solved_line_to_upc[line] = cluster_upcs[0]
print('solved_upc_to_lines:')
print(solved_upc_to_lines)
print('solved_line_to_upc:')
print(solved_line_to_upc)
solved_cluster = {} # {'A': ['B', 'C', 'D', 'G', 'H', 'E', 'F'], 'M': ['N']}
for key in solved_upc_to_lines:
one_cluster = []
for line in solved_upc_to_lines[key]:
for upc in source_cluster[line]:
if upc not in one_cluster:
one_cluster.append(upc)
one_cluster = sorted(one_cluster)
main_upc = one_cluster[0]
one_cluster.remove(main_upc)
solved_cluster[main_upc] = one_cluster
print('solved_cluster:')
print(solved_cluster)
```
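The merge logic above is, in effect, computing connected components over the pair list — a union-find problem. A compact pure-Python sketch that yields the same main-UPC grouping on the toy letter data from the comments:

```
def cluster_pairs(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two components

    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    # smallest member of each component becomes the "main" key
    return {min(g): sorted(g - {min(g)}) for g in groups.values()}

pairs = [['A','B'],['A','D'],['B','E'],['E','F'],['C','D'],['G','A'],['N','M']]
print(cluster_pairs(pairs))  # {'A': ['B', 'C', 'D', 'E', 'F', 'G'], 'M': ['N']}
```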
INTRODUCTION
This tutorial gives a broad overview of the Plotly visualization tool for Python. The advantages of the plotly library are a faster learning curve and lower complexity compared to the matplotlib library. It is suitable for data science newcomers and non-specialists who want to accelerate the process of making sense of their data through visualization, as it produces professional-looking charts in a relatively short timespan. Plotly sits alongside other Python visualization libraries such as Seaborn, ggplot, Bokeh, geoplotlib and gleam.
Plotly offers a number of interactive graphing/mapping options:
- Basic charts: line charts, bubble charts, scatter plots, bar charts, Gantt charts, pie charts, filled area plots, time series etc.
- Statistical charts: error bars, histograms, 2D histograms, density plots, tree plots, tree maps etc.
- Scientific charts: log plots, contour plots, heatmaps, polar charts, ternary plots, streamline plots, chord diagrams etc.
- Financial charts
- Maps: scatter plots on maps, bubble maps, lines on maps, choropleth maps etc.
- 3D charts: scatter plots, bubble plots, line plots, ribbon plots, surface plots, mesh plots, parametric plots, 3D network graphs, 3D clustering, surface triangulation etc.
NOTE: Plotly is an online web-hosting service for graphs: graphs are stored online under a private account. However, Plotly can also be used offline in IPython notebooks. The Plotly API is also available for other programming environments such as R, MATLAB, and JavaScript.
This tutorial demonstrates the use of Plotly by constructing three different types of plots with real-world datasets.
INSTALLATION
Use the package manager pip inside the terminal.
$pip install plotly
A Plotly username and API key can be generated using the following link:
https://plot.ly/api
The generated credentials need to be assigned to the username and api_key arguments as shown below.
```
import plotly
plotly.tools.set_credentials_file(username='rohit269mail', api_key='ooqdid05dr')
```
GETTING STARTED
Every Plotly graph is composed of JSON objects, which are dictionary-like data structures. Simply by changing the values of the objects' keywords, we can generate different kinds of graphs.
The different objects that define a graph in plotly are:
-Data
-Layout
-Figure
Data is a list of traces, each containing the values to be plotted together with their plotting specifications.
Layout defines the look of the graph and controls features unrelated to the Data.
Figure creates the final object, which contains both the Data and Layout objects.
Two modules must be imported to generate Plotly graphs:
1. plotly.plotly: contains functions that will help communicate with the Plotly server
2. plotly.graph_objs: contains functions that will generate the graph object
```
import plotly.plotly as py
from plotly.graph_objs import *
#import plotly.graph_objs as go
import pandas as pd
import numpy as np
import math
```
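As a concrete sketch of the Data/Layout/Figure pattern described above, a figure can be assembled from plain dictionaries before handing it to `py.iplot` (the trace values here are invented for illustration):

```python
# Data: a list of traces; each trace is a dict of values plus plotting specifications.
trace = dict(type='scatter', x=[1, 2, 3], y=[4, 1, 9], mode='markers')
data = [trace]

# Layout: controls the look of the graph, independent of the data.
layout = dict(title='A minimal figure', xaxis=dict(title='x'), yaxis=dict(title='y'))

# Figure: the final object combining Data and Layout.
figure = dict(data=data, layout=layout)
# py.iplot(figure, filename='minimal-figure')  # would render it on the Plotly server
```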
The help function can be used to see the description of any plotly object
```
help(py.iplot)
```
EXAMPLE 1:
The following code is an example of constructing a 'Lines on Maps' plot.
In this example, we take data from http://openflights.org/data.html to construct a map to visualize the commercial air connectivity of the city of Pittsburgh.
The data was curated from its original form for the purpose of this application. The structure of the two data files is shown below.
```
df_airports = pd.read_csv('airports.csv')  # read USA airport data file
print(df_airports.head())
df_flight_paths = pd.read_csv('PITroute.csv')  # read Pittsburgh connections data file
print(df_flight_paths.head())
# code to generate a 'Lines on Maps' plot visualizing the air connectivity of Pittsburgh
airports = [ dict( # data object containing airport information
type = 'scattergeo', #type of graph object
locationmode = 'USA-states',
lon = df_airports['long'],
lat = df_airports['lat'],
hoverinfo = 'text',
text = df_airports['airport'],
mode = 'markers',
marker = dict(
size=2,
color='rgb(255, 0, 0)',
line = dict(
width=3,
color='rgba(68, 68, 68, 0)'
)
))]
flight_paths = [] #data object containing flight path information
for i in range( len( df_flight_paths ) ):
flight_paths.append(
dict(
type = 'scattergeo',
locationmode = 'USA-states',
lon = [ df_flight_paths['start_lon'][i], df_flight_paths['end_lon'][i] ],
lat = [ df_flight_paths['start_lat'][i], df_flight_paths['end_lat'][i] ],
mode = 'lines',
line = dict(
width = 1,
color = 'red',
)
)
)
layout = dict( #layout object
title = 'Pittsburgh flight connections, Jan 2012<br>(Hover for airport names)',
showlegend = False,
geo = dict(
scope='north america',
projection=dict( type='azimuthal equal area' ),
showland = True,
landcolor = 'rgb(243, 243, 243)',
countrycolor = 'rgb(204, 204, 204)',
),
)
fig = dict( data=flight_paths + airports, layout=layout )
py.iplot( fig, filename='Pitt-flight-paths' )
```
EXAMPLE 2:
A resourceful reservoir of real-world datasets is the World Bank website; a rich repository of world development indicators can be found at http://wdi.worldbank.org/tables. For the next example, we will plot a simple horizontal bar graph. The data chosen is the number of scientific and technical journal articles published per country for the year 2013. Journals include articles published in the following fields: physics, biology, chemistry, mathematics, clinical medicine, biomedical research, engineering and technology, and earth and space sciences.
The raw data set extracted from the website is shown below.
```
df=pd.read_csv('rnd.csv')
print(df.head(10))
```
First, we will remove countries with missing values. Then we will sort the series in descending order and plot a horizontal bar graph of the 20 countries with the largest number of scientific publications.
```
df=pd.read_csv('rnd.csv')
df = df.applymap(lambda x: np.nan if x == '..' else x)
df = df.dropna().reset_index(drop=True) #remove rows with missing data
df['papers'] = df['papers'].str.replace(',','').astype(int) #convert attribute type string to integer
df = df.sort_values('papers',ascending=False)
df=df.head(20)
data = [Bar(
x=df['papers'],
y=df['country'],
orientation = 'h'
)]
layout = Layout(
title='top 20 countries with scientific publications',
xaxis=dict(title='number of scientific publications'),
yaxis=dict(title='country'))
fig=dict( data=data, layout=layout )
py.iplot(fig, filename='horizontal-bar')
```
EXAMPLE 3:
For the next example, we will plot a scatter bubble chart. Bubble charts are extremely useful for visualizing and comparing three variables: they are defined in terms of three distinct numeric parameters and allow comparison of entities in terms of their relative x-y positions and the areas of the bubbles. The scatter-chart data points are replaced by bubbles, and an additional dimension of the data is represented by the size of each bubble.
Sizing the bubbles correctly is critical for this visualization. According to Wikipedia, "The human visual system naturally experiences a disk's size in terms of its area. And the area of a disk—unlike its diameter or circumference—is not proportional to its radius, but to the square of the radius." So if one scales the disks' radii to the third data values directly, the apparent size differences among the disks will be non-linear and misleading. To get a properly weighted scale, one must scale each disk's radius to the square root of the corresponding data value. This scaling issue can lead to extreme misinterpretations, especially where the data has a large spread.
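The square-root scaling can be sketched in a few lines (the values below are invented for illustration):

```python
import math

values = [1_000_000, 4_000_000, 16_000_000]  # e.g. populations (invented)

# Radius proportional to sqrt(value) -> bubble AREA proportional to the value.
radii = [math.sqrt(v) for v in values]

# A value 16x larger yields a radius only 4x larger (i.e. 16x the area),
# whereas scaling the radius directly would make it look 256x bigger by area.
ratio = radii[2] / radii[0]
```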
For the bubble chart, we have taken the example from the Plotly website and will explain how it is constructed. The data consists of demographic details such as population, life expectancy, and GDP per capita for countries, collected over 55 years at five-year intervals.
```
data = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv")
print(data.head(13))
```
We will be plotting GDP per capita vs. life expectancy for the year 2007, with bubble size denoting the population. The graph also visualizes a fourth, categorical dimension by colouring the bubbles according to each country's continent.
The data for 2007 is first extracted and sorted alphabetically by continent and country. A scaling factor is chosen to scale down the large population values so that the bubble areas can be represented conveniently.
Since the data is sorted by continent, one trace is created for each of the five continents, containing the data specific to that continent. The five traces are combined into a list, which is the final data to be plotted.
The layout needs to specify the correct ranges for the x and y axis so that the data is scaled properly for visualization. Since GDP per capita varies by a large value for different countries, a log scale is used.
```
data = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv") #import file from online link
df_2007 = data[data['year']==2007] #extract data for year 2007
df_2007 = df_2007.sort_values(['continent', 'country'])
slope = 2.666051223553066e-05 #scaling factor for bubble size
hover_text = []
bubble_size = []
for index, row in df_2007.iterrows():
hover_text.append(('Country: {country}<br>'+
'Life Expectancy: {lifeExp}<br>'+
'GDP per capita: {gdp}<br>'+
'Population: {pop}<br>'+
'Year: {year}').format(country=row['country'],
lifeExp=row['lifeExp'],
gdp=row['gdpPercap'],
pop=row['pop'],
year=row['year']))
bubble_size.append(math.sqrt(row['pop']*slope)) #bubble size list
df_2007['text'] = hover_text
df_2007['size'] = bubble_size
trace0 = Scatter(
x=df_2007['gdpPercap'][df_2007['continent'] == 'Africa'],
y=df_2007['lifeExp'][df_2007['continent'] == 'Africa'],
mode='markers',
name='Africa',
text=df_2007['text'][df_2007['continent'] == 'Africa'],
marker=dict(
symbol='circle',
sizemode='diameter',
sizeref=0.85,
size=df_2007['size'][df_2007['continent'] == 'Africa'],
line=dict(
width=2
),
)
)
trace1 = Scatter(
x=df_2007['gdpPercap'][df_2007['continent'] == 'Americas'],
y=df_2007['lifeExp'][df_2007['continent'] == 'Americas'],
mode='markers',
name='Americas',
text=df_2007['text'][df_2007['continent'] == 'Americas'],
marker=dict(
sizemode='diameter',
sizeref=0.85,
size=df_2007['size'][df_2007['continent'] == 'Americas'],
line=dict(
width=2
),
)
)
trace2 = Scatter(
x=df_2007['gdpPercap'][df_2007['continent'] == 'Asia'],
y=df_2007['lifeExp'][df_2007['continent'] == 'Asia'],
mode='markers',
name='Asia',
text=df_2007['text'][df_2007['continent'] == 'Asia'],
marker=dict(
sizemode='diameter',
sizeref=0.85,
size=df_2007['size'][df_2007['continent'] == 'Asia'],
line=dict(
width=2
),
)
)
trace3 = Scatter(
x=df_2007['gdpPercap'][df_2007['continent'] == 'Europe'],
y=df_2007['lifeExp'][df_2007['continent'] == 'Europe'],
mode='markers',
name='Europe',
text=df_2007['text'][df_2007['continent'] == 'Europe'],
marker=dict(
sizemode='diameter',
sizeref=0.85,
size=df_2007['size'][df_2007['continent'] == 'Europe'],
line=dict(
width=2
),
)
)
trace4 = Scatter(
x=df_2007['gdpPercap'][df_2007['continent'] == 'Oceania'],
y=df_2007['lifeExp'][df_2007['continent'] == 'Oceania'],
mode='markers',
name='Oceania',
text=df_2007['text'][df_2007['continent'] == 'Oceania'],
marker=dict(
sizemode='diameter',
sizeref=0.85,
size=df_2007['size'][df_2007['continent'] == 'Oceania'],
line=dict(
width=2
),
)
)
data = [trace0, trace1, trace2, trace3, trace4]
layout = Layout(
title='Life Expectancy v. Per Capita GDP, 2007',
xaxis=dict(
title='GDP per capita (2000 dollars)',
gridcolor='rgb(255, 255, 255)',
range=[2.003297660701705, 5.191505530708712],
type='log', # log scaling GDP per capita due to large range
zerolinewidth=1,
ticklen=5,
gridwidth=2,
),
yaxis=dict(
title='Life Expectancy (years)',
gridcolor='rgb(255, 255, 255)',
range=[36.12621671352166, 91.72921793264332],
zerolinewidth=1,
ticklen=5,
gridwidth=2,
),
paper_bgcolor='rgb(243, 243, 243)',
plot_bgcolor='rgb(243, 243, 243)',
)
fig = Figure(data=data, layout=layout)
py.iplot(fig, filename='life-expectancy-per-GDP-2007')
```
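One subtlety in the layout above: when an axis has `type='log'`, Plotly interprets its `range` in log10 units, which is why the x-axis range is roughly [2.0, 5.2] rather than the raw GDP values. A small sketch of computing such a range (the GDP bounds are invented):

```python
import math

# Hypothetical GDP-per-capita bounds for the data being plotted
gdp_min, gdp_max = 100.76, 155_000.0

# With xaxis type='log', this is the range to pass in the layout
log_range = [math.log10(gdp_min), math.log10(gdp_max)]
```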
| github_jupyter |
# <font color='firebrick'><center>Idx Stats Report</center></font>
### This report provides information from the output of the samtools idxstats tool, which reports the number of mapped reads per chromosome/contig.
<br>
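Each line of `samtools idxstats` output is a TAB-separated record of reference name, sequence length, mapped read-segments, and unmapped read-segments. A minimal sketch of parsing that format (the sample output below is made up):

```python
import csv
import io

# Hypothetical idxstats output: ref_name <TAB> length <TAB> mapped <TAB> unmapped
sample_output = (
    "chr1\t248956422\t1200\t30\n"
    "chr2\t242193529\t900\t12\n"
    "*\t0\t0\t55\n"  # '*' collects reads with no placement
)

mapped_per_contig = {}
for name, length, n_mapped, n_unmapped in csv.reader(
        io.StringIO(sample_output), delimiter="\t"):
    if name != "*":
        mapped_per_contig[name] = int(n_mapped)
```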
```
from IPython.display import display, Markdown
from IPython.display import HTML
import IPython.core.display as di
import csv
import numpy as np
import zlib
import cgatcore.iotools as iotools
import itertools as ITL
import os
import string
import pandas as pd
import sqlite3
import matplotlib as mpl
from matplotlib.backends.backend_pdf import PdfPages # noqa: E402
#mpl.use('Agg') # noqa: E402
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import matplotlib.font_manager as font_manager
import matplotlib.lines as mlines
from matplotlib.colors import ListedColormap
from matplotlib import cm
from matplotlib import rc, font_manager
import cgatcore.experiment as E
import math
from random import shuffle
import matplotlib as mpl
import datetime
import seaborn as sns
import nbformat
##################################################
#Plot customization
#plt.ioff()
plt.style.use('seaborn-white')
#plt.style.use('ggplot')
title_font = {'size':'20','color':'darkblue', 'weight':'bold', 'verticalalignment':'bottom'} # Bottom vertical alignment for more space
axis_font = {'size':'18', 'weight':'bold'}
#For summary page pdf
'''To add description page
plt.figure()
plt.axis('off')
plt.text(0.5,0.5,"my title",ha='center',va='center')
pdf.savefig()
'''
# pandas DataFrame customization
pd.options.display.width = 80
pd.set_option('display.max_colwidth', -1)
chr_feature=['total_reads','total_mapped_reads',
'chr1','chr2','chr3','chr4',
'chr5','chr6','chr7','chr8',
'chr9','chr10','chr11','chr12',
'chr13','chr14','chr15','chr16',
'chr17','chr18','chr19','chrX',
'chrY','chrM']
chr_index=['Total reads','Total mapped reads',
'chr1','chr2','chr3','chr4',
'chr5','chr6','chr7','chr8',
'chr9','chr10','chr11','chr12',
'chr13','chr14','chr15','chr16',
'chr17','chr18','chr19','chrX',
'chrY','chrM']
colors_category = ['red','green','darkorange','yellowgreen', 'pink', 'gold', 'lightskyblue',
'orchid','darkgoldenrod','skyblue','b', 'red',
'darkorange','grey','violet','magenta','cyan',
'hotpink','mediumslateblue']
threshold = 5
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
def y_fmt(y, pos):
decades = [1e9, 1e6, 1e3, 1e0, 1e-3, 1e-6, 1e-9 ]
suffix = ["G", "M", "k", "" , "m" , "u", "n" ]
if y == 0:
return str(0)
for i, d in enumerate(decades):
if np.abs(y) >=d:
val = y/float(d)
signf = len(str(val).split(".")[1])
if signf == 0:
return '{val:d} {suffix}'.format(val=int(val), suffix=suffix[i])
else:
if signf == 1:
#print(val, signf)
if str(val).split(".")[1] == "0":
return '{val:d} {suffix}'.format(val=int(round(val)), suffix=suffix[i])
tx = "{"+"val:.{signf}f".format(signf = signf) +"} {suffix}"
return tx.format(val=val, suffix=suffix[i])
#return y
return y
def getTables(dbname):
'''
Retrieves the names of all tables in the database.
Groups tables into dictionaries by annotation
'''
dbh = sqlite3.connect(dbname)
c = dbh.cursor()
statement = "SELECT name FROM sqlite_master WHERE type='table'"
c.execute(statement)
tables = c.fetchall()
print(tables)
c.close()
dbh.close()
return
def readDBTable(dbname, tablename):
'''
Reads the specified table from the specified database.
Returns a list of tuples representing each row
'''
dbh = sqlite3.connect(dbname)
c = dbh.cursor()
statement = "SELECT * FROM %s" % tablename
c.execute(statement)
allresults = c.fetchall()
c.close()
dbh.close()
return allresults
def getDBColumnNames(dbname, tablename):
dbh = sqlite3.connect(dbname)
res = pd.read_sql('SELECT * FROM %s' % tablename, dbh)
dbh.close()
return res.columns
def plotBar(df,samplename):
fig, ax = plt.subplots()
ax.set_frame_on(True)
ax.xaxis.set_major_formatter(FuncFormatter(y_fmt))
colors=['yellowgreen','darkorange']
for ii in range(0,df.shape[0]):
plt.barh(ii,df['chrX'][ii],color=colors[0], align="center",height=0.6,edgecolor=colors[0])
plt.barh(ii,df['chrY'][ii],color=colors[1], align="center",height=0.6,edgecolor=colors[0])
fig = plt.gcf()
fig.set_size_inches(20,14)
plt.yticks(fontsize =20,weight='bold')
plt.yticks(range(df.shape[0]),df['track'])
plt.xticks(fontsize =20,weight='bold')
ax.grid(which='major', linestyle='-', linewidth='0.3')
plt.ylabel("Sample",labelpad=65,fontsize =25,weight='bold')
plt.xlabel("\nMapped reads",fontsize =25,weight='bold')
plt.title("Reads mapped to X and Y chromosome\n",fontsize =30,weight='bold',color='darkblue')
plt.gca().invert_yaxis()
legend_properties = {'weight':'bold','size':'20'}
leg = plt.legend(chr_feature[21:23],title="Contigs",prop=legend_properties,bbox_to_anchor=(1.14,0.65),frameon=True)
leg.get_frame().set_edgecolor('k')
leg.get_frame().set_linewidth(2)
leg.get_title().set_fontsize(25)
leg.get_title().set_fontweight('bold')
plt.tight_layout()
plt.savefig(''.join([samplename,'.png']),bbox_inches='tight',pad_inches=0.6)
plt.show()
return fig
def displayTable(plotdf,name):
# Display table
styles = [
hover(),
dict(selector="th", props=[("font-size", "130%"),
("text-align", "center"),
]),
dict(selector="td", props=[("font-size", "120%"),
("text-align", "center"),
]),
dict(selector="caption", props=[("caption-side", "top"),
("text-align", "center"),
("font-size", "100%")])
]
df1 = (plotdf.style.set_table_styles(styles).set_caption(name))
display(df1)
print("\n\n")
def plot_idxstats(newdf,df,samplename):
fig,ax = plt.subplots()
ax.grid(which='major', linestyle='-', linewidth='0.25')
ax.yaxis.set_major_formatter(FuncFormatter(y_fmt))
index=list(range(newdf.shape[1]))
colors = plt.cm.plasma(np.linspace(0,1,newdf.shape[0]))
for ii in range(0,newdf.shape[0]):
plt.plot(index,newdf.iloc[ii],linewidth=2,color=colors[ii],linestyle="-",marker='o',fillstyle='full',markersize=8)
fig = plt.gcf()
fig.set_size_inches(11,8)
plt.xticks(index,chr_feature[2:24],fontsize = 14,weight='bold')
plt.yticks(fontsize = 14,weight='bold')
labels = ax.get_xticklabels()
plt.setp(labels, rotation=40)
legend_properties = {'weight':'bold','size':'14'}
leg = plt.legend(df['track'],title="Sample",prop=legend_properties,bbox_to_anchor=(1.42,1.01),frameon=True)
leg.get_frame().set_edgecolor('k')
leg.get_frame().set_linewidth(2)
leg.get_title().set_fontsize(16)
leg.get_title().set_fontweight('bold')
plt.xlabel('\nContigs',**axis_font)
plt.ylabel('Mapped Reads',**axis_font,labelpad=40)
plt.title("Mapped reads per contig", **title_font)
plt.tight_layout()
plt.savefig(''.join([samplename,'.png']),bbox_inches='tight',pad_inches=0.6)
print("\n\n")
plt.show()
return fig
def idxStatsReport(dbname, tablename):
trans = pd.DataFrame(readDBTable(dbname,tablename))
trans.columns = getDBColumnNames(dbname,tablename)
df=trans
#print(df)
#newdf = df[df.columns[0:25]]
newdf = df[chr_feature[2:24]]
#print(newdf)
plotdf = df[chr_feature]
plotdf.columns = chr_index
plotdf.index = [df['track']]
#del plotdf.index.name
pdf=PdfPages("idx_stats_summary.pdf")
displayTable(plotdf,"Idx Full Stats")
fig = plot_idxstats(newdf,df,"idx_full_stats")
pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6)
print("\n\n\n")
fig = plotBar(df,"idxStats_X_Y_mapped_reads")
pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6)
pdf.close()
#getTables("csvdb")
idxStatsReport("../csvdb","idxstats_reads_per_chromosome")
```
| github_jupyter |
```
# DATA SETUP
''' Note: for more information about our data pre-processing
and categorizing into states, see the README in the data folder.
We have cited sources for all data used and included a brief
description of the states we decided on there.'''
import numpy as np
import pandas as pd
from qubayes_tools import *
from probabilities_temp import *
#################################################
# COVID-19 EXAMPLES:
def build_graph(ntwk_func, filename = None, **kwargs):
if filename is None:
nodes = ntwk_func(**kwargs)
else:
nodes = ntwk_func(filename = filename, **kwargs)
graph = {}
for node in nodes:
if node.parents == []:
#this is a root node, we just need to calculate probabilities
ct = 1
probs = []
got_probs = get_probabilities(node)
newkey = ""
for state_i in node.states.keys():
state_key = node.name + "_" + state_i
if ct == len(node.states.keys()):
newkey += state_key
else:
newkey += state_key + ","
probs.append(got_probs[state_key])
ct += 1
graph.update({newkey : ([], probs)})
else:
#this is a child node, we need conditional probabilities!
cond_probs = []
p_nodes = [] #initialize a list in which to place parent nodes
p_names = [] #get names in same order!
for anode in nodes: #loop thru all nodes in network
if anode.name in node.parents:
p_nodes.append(anode)
p_names.append(str.join(",",[(anode.name + '_' + s) for s in anode.states.keys()]))
cond_prob_dict = get_conditional_probability(node, p_nodes)
# i need to write func(node, p_node[0], p_node[1], ... p_node[n])
p_ct = 0
newkey = ""
for p_str in generate_parent_str(p_nodes[:]):
s_ct = 0
for state_i in node.states.keys():
cond_str = node.name + "_" + state_i + "|" + p_str
cond_probs.append(cond_prob_dict[cond_str])
if p_ct == 0:
if s_ct == 0:
newkey += node.name + "_" + state_i
else:
newkey += "," + node.name + "_" + state_i
s_ct += 1
p_ct += 1
graph.update({newkey : (p_names, cond_probs)})
return graph
def get_lesser_model_nodes(filename='data/lesser_model_data.csv'):
lesserdata = pd.read_csv(filename)
statedataStayHome = {'MarHome' : lesserdata['MarHome'], 'AprHome' : lesserdata['AprHome'], 'MayHome' : lesserdata['MayHome'], 'JunHome' : lesserdata['JunHome']}
StayHome = Node("StayHome", np.ndarray.flatten(np.array(pd.DataFrame(data=statedataStayHome))), states = {"No" : 0, "Yes" : 1})
statedataTests = {'MarTest' : lesserdata['MarTest'], 'AprTest' : lesserdata['AprTest'], 'MayTest' : lesserdata['MayTest'], 'JunTest' : lesserdata['JunTest']}
Tests = Node("Tests", np.ndarray.flatten(np.array(pd.DataFrame(data=statedataTests))), states = {"GT5" : 0, "LE5" : 1})
statedataCases = {'MarCases' : lesserdata['MarCases'], 'AprCases' : lesserdata['AprCases'], 'MayCases' : lesserdata['MayCases'], 'JunCases' : lesserdata['JunCases']}
Cases = Node("Cases", np.ndarray.flatten(np.array(pd.DataFrame(data=statedataCases))), states = {"Inc" : 0, "noInc" : 1}, parents = ["Tests", "StayHome"])
return Cases, Tests, StayHome
def get_mallard_model_nodes(filename='data/mallardmodeldata.csv'):
mallarddata = pd.read_csv(filename)
Cases = Node("Cases", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarCases':mallarddata['MarCases'], 'AprCases':mallarddata['AprCases'], 'MayCases':mallarddata['MayCases'], 'JunCases':mallarddata['JunCases']}))),
states = {"Inc" : 0, "Min" : 1, "Mod" : 2, "Maj" : 3}, parents = ["Test", "Mask", "Work", "Rec","Race","Poverty"])
Test = Node("Test", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarTest':mallarddata['MarTest'],'AprTest':mallarddata['AprTest'],'MayTest':mallarddata['MayTest'], 'JuneTest':mallarddata['JunTest']}))),
states = {"GT5" : 0, "LE5" : 1})
Mask = Node("Mask", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarMask':mallarddata['MarMask'],'AprMask':mallarddata['AprMask'],'MayMask':mallarddata['MayMask'],'JunMask':mallarddata['JunMask']}))),
states = {"No" : 0, "Yes" : 1})
Work = Node("Work", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarWork':mallarddata['MarWork'],'AprWork':mallarddata['AprWork'],'MayWork':mallarddata['MayWork'],'JunWork':mallarddata['JunWork']}))),
states = {"Inc" : 0, "Min" : 1, "Mod" : 2, "Maj" : 3})
Rec = Node("Rec", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarRec':mallarddata['MarRec'],'AprRec':mallarddata['AprRec'],'MayRec':mallarddata['MayRec'],'JunRec':mallarddata['JunRec']}))),
states = {"Inc" : 0, "Min" : 1, "Mod" : 2, "Maj" : 3})
Death = Node("Death", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarDeath':mallarddata['MarDeath'],'AprDeath':mallarddata['AprDeath'],'MayDeath':mallarddata['MayDeath'],'JunDeath':mallarddata['JunDeath']}))),
states = {"Inc" : 0, "notInc" : 1}, parents = ["Cases", "Age"])
Age = Node("Age", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarAge':mallarddata['MarAge'],'AprAge':mallarddata['AprAge'],'MayAge':mallarddata['MayAge'],'JunAge':mallarddata['JunAge']}))),
states = {"Old" : 0, "Young" : 1})
return Cases, Test, Mask, Work, Rec, Death, Age
def get_alabio_model_nodes(filename='data/alabiomodeldata.csv'):
alabiodata = pd.read_csv(filename)
Cases = Node("Cases", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarCases':alabiodata['MarCases'], 'AprCases':alabiodata['AprCases'], 'MayCases':alabiodata['MayCases'], 'JunCases':alabiodata['JunCases']}))),
states={"Inc":0,"Min":1,"Mod":2,"Maj":3}, parents=["Test", "Mask", "Work", "Rec", "Race", "Poverty"])
Test = Node("Test", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarTest':alabiodata['MarTest'],'AprTest':alabiodata['AprTest'],'MayTest':alabiodata['MayTest'], 'JuneTest':alabiodata['JunTest']}))),
states={"GT5" : 0, "LE5" : 1})
Mask = Node("Mask", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarMask':alabiodata['MarMask'],'AprMask':alabiodata['AprMask'],'MayMask':alabiodata['MayMask'],'JunMask':alabiodata['JunMask']}))),
states = {"No" : 0, "Yes" : 1})
Work = Node("Work", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarWork':alabiodata['MarWork'],'AprWork':alabiodata['AprWork'],'MayWork':alabiodata['MayWork'],'JunWork':alabiodata['JunWork']}))),
states = {"Inc" : 0, "Min" : 1, "Mod" : 2, "Maj" : 3})
Rec = Node("Rec", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarRec':alabiodata['MarRec'],'AprRec':alabiodata['AprRec'],'MayRec':alabiodata['MayRec'],'JunRec':alabiodata['JunRec']}))),
states = {"Inc" : 0, "Min" : 1, "Mod" : 2, "Maj" : 3})
Death = Node("Death", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarDeath':alabiodata['MarDeath'],'AprDeath':alabiodata['AprDeath'],'MayDeath':alabiodata['MayDeath'],'JunDeath':alabiodata['JunDeath']}))),
states = {"Inc" : 0, "notInc" : 1}, parents = ["Cases", "Age", "Race", "Poverty", "Health"])
Age = Node("Age", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarAge':alabiodata['MarAge'],'AprAge':alabiodata['AprAge'],'MayAge':alabiodata['MayAge'],'JunAge':alabiodata['JunAge']}))),
states = {"Old" : 0, "Young" : 1})
Race = Node("Race", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarRace':alabiodata['MarRace'],'AprRace':alabiodata['AprRace'],'MayRace':alabiodata['MayRace'],'JunRace':alabiodata['JunRace']}))),
states = {"LE15":0, "15to30":1, "30to45":2, "GT45":3})
Poverty = Node("Poverty", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarPoverty':alabiodata['MarPoverty'],'AprPoverty':alabiodata['AprPoverty'],'MayPoverty':alabiodata['MayPoverty'],'JunPoverty':alabiodata['JunPoverty']}))),
states={"LE11":0, "11to13":1, "13to15":2, "GT15":3})
Health = Node("Health", np.ndarray.flatten(np.array(pd.DataFrame(data={'MarHealth':alabiodata['MarHealth'],'AprHealth':alabiodata['AprHealth'],'MayHealth':alabiodata['MayHealth'],'JunHealth':alabiodata['JunHealth']}))),
states={"Rare":0, "SomewhatCom":1, "Common":2, "VeryCom":3})
return Cases, Test, Mask, Work, Rec, Death, Age, Race, Poverty, Health
#################################################
"""
# MAKE YOUR OWN HERE!
def MyDataSetup(*input, **kwargs):
##########################
# INPUT #
# input str filename with data
# **kwargs whatever else you need
# OUTPUT #
# *data tuple of Nodes
return data
"""
print(build_graph(get_mallard_model_nodes))
print(build_graph(get_lesser_model_nodes))
```
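From the code above, `build_graph` returns a dictionary keyed by the comma-joined states of each node, mapping to a tuple of parent state-strings and a flat probability list. A hand-built sketch of that shape (all probabilities here are invented, not real model output):

```python
# Shape of the dictionary build_graph assembles (values invented for illustration):
graph = {
    # Root node: no parents, marginal probabilities per state.
    "StayHome_No,StayHome_Yes": ([], [0.4, 0.6]),
    # Child node: list of parent state-strings, then conditional probabilities
    # ordered parent-combination-major, child-state-minor.
    "Cases_Inc,Cases_noInc": (
        ["Tests_GT5,Tests_LE5", "StayHome_No,StayHome_Yes"],
        [0.7, 0.3, 0.5, 0.5, 0.6, 0.4, 0.2, 0.8],
    ),
}

parents, probs = graph["Cases_Inc,Cases_noInc"]
```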
| github_jupyter |
# Vietnam
## Table of contents
1. [General Geography](#1)<br>
1.1 [Soil Resources](#11)<br>
1.2 [Road and Railway Network](#12)<br>
2. [Poverty in Vietnam](#2)<br>
2.1 [The percentage of malnourished children under 5 in 2018 by locality](#21)<br>
2.2 [Proportion of poor households by region from 1998 to 2016 ](#22)<br>
3. [Vietnam Economy](#3)<br>
3.1 [Employment](#31)<br>
3.2 [The Aquaculture Production from 2013 to 2018 by Provinces](#32)<br>
3.3 [Various sources of income by provinces in 2018](#33)<br>
I. [Important Notes](#333)<br>
II. [References](#666)<br>
```
import json
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import geopandas as gpd
import shapely.geometry
from ipywidgets import widgets
# Plot in browser
import plotly.io as pio
pio.renderers.default = 'browser'
from codes.auxiliary import convert_id_map
from codes.plot import *
```
## 1. General Geography <a id="1"></a>
### 1.1 Soil Resources <a id="11"></a>
The dataset of soil types of Vietnam is geospatial polygon data based on the FAO classification.
[Source](https://data.opendevelopmentmekong.net/dataset/soil-types-in-vietnam)
```
soil_geo = json.load(open('geodata/soilmap_vietnam.geojson',"r"))
# split unique soil type
imap = convert_id_map(soil_geo, 'Type', 'faosoil')
map_keys = imap.keys()
soil_list = []
soil_dict = {}
for key in map_keys:
soil_type = key.split("sols")[0]
soil_type +="sols"
if key not in soil_dict.keys():
soil_dict[key] = soil_type
if soil_type not in soil_list:
soil_list.append(soil_type)
# Soilmap Dataframe
soil_pd = gpd.read_file('geodata/soilmap_vietnam.geojson')
# soil_pd = soil_pd.iloc[:,0:4]
soil_pd["Soil_type"] = soil_pd['Type'].map(soil_dict)
# Plotting soil map
fig = px.choropleth_mapbox(
soil_pd,
geojson=soil_geo,
color = "Soil_type",
color_discrete_sequence=px.colors.qualitative.Light24,
locations = "gid",
featureidkey="properties.gid",
mapbox_style = "carto-positron",
center = {"lat": 16, "lon": 106},
zoom = 5,
title = "Soil Map"
)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
```
<img src="figures/soilmap.png" alt="drawing" style="width:950px;"/>
### 1.2 Road and Railway network <a id="12"></a>
A geospatial dataset containing polylines of the transportation network in Vietnam: the railways, the principal roads, and the secondary roads.
[Source](https://data.opendevelopmentmekong.net/dataset/giao-thng-vit-nam)
```
# open a zipped shapefile with the zip:// pseudo-protocol
transport_df = gpd.read_file("geodata/transport.zip")
lats = []
lons = []
names = []
for feature, name in zip(transport_df.geometry, transport_df.Name):
if isinstance(feature, shapely.geometry.linestring.LineString):
linestrings = [feature]
elif isinstance(feature, shapely.geometry.multilinestring.MultiLineString):
linestrings = feature.geoms
else:
continue
for linestring in linestrings:
x, y = linestring.xy
lats = np.append(lats, y)
lons = np.append(lons, x)
names = np.append(names, [name]*len(y))
lats = np.append(lats, None)
lons = np.append(lons, None)
names = np.append(names, None)
fig = px.line_mapbox(lat=lats, lon=lons, hover_name=names,
mapbox_style="stamen-terrain", zoom=4.5, center={"lat": 16, "lon": 106})
fig.show()
```
<img src="figures/South_railway_road.png" alt="drawing" style="width:950px;"/>
## 2. Poverty in Vietnam <a id="2"></a>
### 2.1. The percentage of malnourished children under 5 in 2018 by locality <a id="21"></a>
The attributes include total weight, height, and weight-for-height.
```
# Malnutrition data
malnutrition_children_vn_2018 = pd.read_csv("geodata/malnutrition_children_vn_2018.csv")
#Vietnam map
vietnam_geo = json.load(open("geodata/vietnam_state.geojson","r"))
# Plotting
fig = px.choropleth_mapbox(
malnutrition_children_vn_2018,
locations = 'Code',
featureidkey="properties.Code",
geojson = vietnam_geo,
color = 'Wei_Hei',
hover_name = "Name",
mapbox_style = "carto-positron",
center = {"lat": 16,"lon": 106},
zoom = 4.5,
title = "malnourished children under 5 in 2018 by locality in Vietnam ",
)
fig.update_geos(fitbounds = "locations", visible=False)
fig.show()
```
<img src="figures/Malnutrion_children_2018.png" alt="drawing" style="width:950px;"/>
### 2.2 Proportion of poor households by region from 1998 to 2016 <a id="22"></a>
The dataset includes the percentage of poor households by region in Vietnam from 1998 to 2016. The poor-household standard for this period is based on the average monthly income per person, updated according to the consumer price index: in 2010, VND 400,000 for rural areas and VND 500,000 for urban areas; in 2013, VND 570,000 and VND 710,000; in 2014, VND 605,000 and VND 750,000; in 2015, VND 615,000 and VND 760,000; and in 2016, VND 630,000 and VND 780,000 respectively.
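The year-by-year poverty lines above can be collected into a small lookup table; the `is_poor` helper below is hypothetical and not part of the dataset or its code:

```python
# Monthly income poverty lines in VND, per the text above: {year: (rural, urban)}
POVERTY_LINE_VND = {
    2010: (400_000, 500_000),
    2013: (570_000, 710_000),
    2014: (605_000, 750_000),
    2015: (615_000, 760_000),
    2016: (630_000, 780_000),
}

def is_poor(year, monthly_income_vnd, urban=False):
    """Classify a household against the poverty line for its year and area."""
    rural_line, urban_line = POVERTY_LINE_VND[year]
    return monthly_income_vnd < (urban_line if urban else rural_line)
```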
```
# Import the Vietnam map by region data (error geojson file)
vnregion_geo = json.load(open("geodata/poverty_rate_1998_2016.geojson", "r",encoding='utf-8'))
# Import aquaculture_production csv
poverty_rate_1998_2016 = pd.read_csv("geodata/poverty_rate_1998_2016.csv")
cols = sorted(poverty_rate_1998_2016.columns[3:], reverse=False)
for i, y in enumerate(cols):
poverty = "Poverty" + y
poverty_rate_1998_2016[poverty] = poverty_rate_1998_2016[y]
# Convert wide to long format
poverty = poverty_rate_1998_2016.drop(cols, axis=1)
final_poverty = pd.wide_to_long(poverty,"Poverty", i=['Name_EN','Name_VI','id'], j= "year")
final_poverty.reset_index(inplace=True)
```
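Since `pd.wide_to_long` does the heavy lifting here, a minimal self-contained illustration of the reshape (with made-up values) may help: one `Poverty<year>` column per year becomes a single `Poverty` column keyed by a new `year` index.

```
import pandas as pd

# Synthetic wide-format table: one poverty column per year.
wide = pd.DataFrame({
    "Name_EN": ["Red River Delta", "Mekong River Delta"],
    "id": [1, 2],
    "Poverty1998": [30.7, 36.9],
    "Poverty2016": [2.4, 5.2],
})
# Melt the year suffixes into a single "year" index level.
long = pd.wide_to_long(wide, stubnames="Poverty", i=["Name_EN", "id"], j="year")
long = long.reset_index()
print(long)
```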
### Choropleth map using GeoJSON
```
input_year ='1998' #1998 2002 2004 2006 2008 2010 2011 2012 2013 2014 2015 2016
fig = px.choropleth_mapbox(
poverty_rate_1998_2016,
locations = 'id',
geojson = vnregion_geo,
featureidkey="properties.id",
color = "Poverty" + input_year ,
color_continuous_scale="Viridis",
range_color=(0, 65),
hover_name = "Name_EN",
# hover_data = ["Poverty_percentage" + input_year],
mapbox_style = "carto-positron",
center = {"lat": 17,"lon": 106},
zoom = 4.5,
title = "Proportion of poor households by region in Vietnam "+ input_year,
)
fig.update_geos(fitbounds = "locations", visible=False)
fig.show()
```
<img src="figures/poverty_rate_1998_2016.png" alt="drawing" style="width:950px;"/>
### Animated figures with GeoJSON, Plotly Express
```
fig = px.choropleth(
final_poverty,
locations = 'id',
featureidkey="properties.id",
animation_frame = "year",
geojson = vnregion_geo,
color = "Poverty",
color_continuous_scale="Viridis",
range_color=(0, 65),
hover_name = "Name_EN",
# hover_data = ['Poverty_percentage'],
title = "Proportion of poor households by region in Vietnam from 1998 to 2016",
)
fig.update_geos(fitbounds = "locations", visible=False)
fig.show()
```
## 3. Vietnam Economy <a id="3"></a>
### 3.1 Employment <a id="31"></a>
[Source](https://www.gso.gov.vn/en/employment/)
```
# Import csv
trained_employee = pd.read_csv("geodata/trained_employee15_vn.csv")
labor_force = pd.read_csv("geodata/labor_force_vn.csv")
```
<img src="figures/Percent_employ15.png" alt="drawing" style="width:950px;"/>
#### Labour force at 15 years of age and above by province
```
title31 = "Labour force at 15 years of age and above by province from 2010 to 2018"
plot_animation_frame_vietnamstate(labor_force,vietnam_geo,"labor_force", title31)
```
#### Percentage of employed workers at 15 years of age and above among population by province
```
title32 = "Percentage of employed workers at 15 years of age and above among population by province from 2010 to 2018"
plot_animation_frame_vietnamstate(trained_employee,vietnam_geo,"percentage", title32)
```
### 3.2 The Aquaculture Production from 2013 to 2018 by Provinces <a id="32"></a>
Published by Open Development Vietnam. The data provides information on Vietnam's aquaculture production from 2013 to 2018. Aquaculture in Vietnam includes farmed fish production, farmed shrimp production, and other aquatic products. Aquaculture production is broken down by province and city.
```
# import the Vietnam map by provinces data
vietnam_geo = json.load(open("geodata/vietnam_state.geojson", "r"))
# Convert map properties
state_id_map = convert_id_map(vietnam_geo, "Name", "Code")
# Import aquaculture_production csv
df = pd.read_csv("geodata/aquaculture_production_2013__2018.csv")
years = ['2013','2014','2015','2016','2017','2018']
for i, y in enumerate(years):
scale = 'Production_Scale'+ y
prod = 'Production' + y
df[scale] = np.log10(df[y])
df[prod] = df[y]
# Convert wide to long format
prod = df.drop(years, axis=1)
final_prod = pd.wide_to_long(prod, stubnames=['Production_Scale','Production'], i=['Name','Code'], j="year")
final_prod.reset_index(inplace=True)
```
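A note on the `np.log10` step above: aquaculture production spans several orders of magnitude across provinces, so coloring by the raw values would wash out everything but the largest producers. A small sketch with synthetic numbers shows how the log transform evens out the color scale:

```
import numpy as np

# Synthetic production figures spanning several orders of magnitude: mapping
# the raw numbers onto a color scale would leave most provinces at one end,
# while log10 (the Production_Scale columns above) spreads them out evenly.
production = np.array([150.0, 2300.0, 48000.0, 610000.0])
scale = np.log10(production)
print(scale.round(2))  # roughly [2.18, 3.36, 4.68, 5.79]
```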
### Choropleth map using GeoJSON
```
input_year = '2018'
fig = px.choropleth_mapbox(
df,
geojson = vietnam_geo,
locations ="Code",
color = "Production_Scale" + input_year,
range_color=(2, 6),
hover_name = "Name",
featureidkey = "properties.Code",
hover_data = ['Production'+ input_year],
mapbox_style="carto-positron",
center={"lat": 16, "lon": 106},
zoom=4.5,
title ="The Aquaculture Production of Vietnam by Province in " + input_year
)
fig.update_geos(fitbounds ="locations", visible=False)
fig.show()
```
<img src="figures/Aqua_prod_2013.png" alt="drawing" style="width:950px;"/>
### Animated figures with GeoJSON, Plotly Express
```
title33 = "The Aquaculture Production of Vietnam from 2013 to 2018 by Province"
plot_animation_frame_vietnamstate(final_prod, vietnam_geo, "Production_Scale", title33)
```
## 3.3 Various sources of income by provinces in 2018 <a id="33"></a>
The data provides information on per capita income by province in Vietnam in 2018. Total monthly income includes salary, income from agriculture, forestry, and aquaculture, income from non-agricultural activities, and other income. The income unit is thousands of VND.
[Source](https://data.opendevelopmentmekong.net/dataset/per-capita-income-by-province-in-2018-in-vietnam)
```
# Import csv and geojson
income_df = pd.read_csv("geodata/thunhapbinhquan.csv")
categories = sorted(income_df.columns[3:])
#Vietnam map
vietnam_geo = json.load(open("geodata/vietnam_state.geojson","r"))
```
<img src="figures/Wage_agri_by_province.png" alt="drawing" style="width:950px;"/>
```
trace = go.Choroplethmapbox(
geojson = vietnam_geo,
featureidkey='properties.Code',
locations = income_df["Code"],
z=income_df.loc[0:, 'income_total_average'],
hovertext = 'Province: ' + income_df.Name_EN,
colorscale ='viridis',
marker_opacity=0.9,
marker_line_width=0.9,
showscale=True
)
lyt = dict(title='Income by provinces',
height = 700,
mapbox_style = "white-bg",
mapbox_zoom = 4,
mapbox_center = {"lat": 17,"lon": 106})
fig = go.FigureWidget(data=[trace], layout=lyt)
# Add dropdowns
## 'Income' dropdown
cat_options = ['total_average', 'salary', 'agri', 'non_agri', 'others']
category = widgets.Dropdown(options=cat_options,
value='total_average',
description='Category')
# Add Submit button
submit = widgets.Button(description='Submit',
disabled=False,
button_style='info',
icon='check')
def submit_event_handler(args):
if category.value in cat_options:
new_data = income_df.loc[0:, 'income_' + str(category.value)]
with fig.batch_update():
fig.data[0].z = new_data
fig.layout.title = ' '.join(['Income ',str(category.value), ' in 2018'])
submit.on_click(submit_event_handler)
container = widgets.HBox([category, submit])
widgets.VBox([container, fig])
```
## Important Notes <a id="333"></a>
## Reference <a id="666"></a>
For additional information and attributes for creating bubble charts in Plotly see: https://plotly.com/python/bubble-charts/.
For more documentation on creating animations with Plotly, see https://plotly.com/python/animations.
```
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import sonnet as snt
from graph_nets import utils_tf
from graph_nets import utils_np
from graph_nets import graphs
from root_gnn.src.generative import mlp_gan as toGan
from root_gnn.utils_plot import add_mean_std
batch_size = 1024
n_batches = 4 # 14
noise_dim = 16
disc_num_iters = 4
output_dir = '/global/homes/x/xju/work/Herwig/PrimaryOnly/mlpGAN/v1'
# output_dir = '/global/homes/x/xju/work/Herwig/PrimaryOnly/rnnGAN/v0'
test_data_name = '/global/homes/x/xju/work/Herwig/PrimaryOnly_1M/inputs/test/primary_11.tfrec'
gan = toGan.GAN()
optimizer = toGan.GANOptimizer(
gan,
batch_size=batch_size,
noise_dim=noise_dim,
num_epcohs=1,
with_disc_reg=True
)
ckpt_dir = os.path.join(output_dir, "checkpoints")
checkpoint = tf.train.Checkpoint(
optimizer=optimizer,
gan=gan)
ckpt_manager = tf.train.CheckpointManager(checkpoint, directory=ckpt_dir,
max_to_keep=5, keep_checkpoint_every_n_hours=8)
_ = checkpoint.restore(ckpt_manager.latest_checkpoint)
from root_gnn.trainer import read_dataset
from root_gnn.trainer import loop_dataset
def visiual(predicts, truths):
# predicts = predicts * node_abs_max[1:]
# truths = truths * node_abs_max[1:]
hist_config = {
"alpha": 0.5,
"lw": 2,
'histtype': 'step',
}
nbins = 40
max_x = 1.2
config_4vector = [
dict([("bins",nbins), ("range",(-max_x, max_x))]),
dict([("bins",nbins), ("range",(-max_x, max_x))]),
dict([("bins",nbins), ("range",(-max_x, max_x))]),
dict([("bins",nbins), ("range",(-max_x, max_x))])
]
xr = 10
nbinsd = 40
config_4vectord = [
dict([("bins",nbinsd), ("range",(-xr, xr))]),
dict([("bins",nbinsd), ("range",(-xr, xr))]),
dict([("bins",nbinsd), ("range",(-xr, xr))]),
dict([("bins",nbinsd), ("range",(-2, 2))]),
]
xlabels_diff = ['($p_\mathrm{predict}^{x}-p_\mathrm{true}^{x}$)/$p_\mathrm{true}^{x}$ [GeV]',
'($p_\mathrm{predict}^{y}-p_\mathrm{true}^{y}$)/$p_\mathrm{true}^{y}$ [GeV]',
'($p_\mathrm{predict}^{z}-p_\mathrm{true}^{z}$)/$p_\mathrm{true}^{z}$ [GeV]',
'($E_\mathrm{predict}-E_\mathrm{true}$)/$E_\mathrm{true}$ [GeV]']
xp = [1, 1, 1, 0.5]
yp = np.array([60]*3 + [30])
dy = np.array([10]*4)
def plot_4vector(offset):
_, axs = plt.subplots(2,2, figsize=(10,10), constrained_layout=True)
axs = axs.flatten()
xlabels = ['E [GeV]', 'px [GeV]', 'py [GeV]', 'pz [GeV]']
for ix in range(4):
idx = ix
axs[ix].hist(predicts[:, offset, idx], **hist_config, **config_4vector[ix], label="prediction")
axs[ix].hist(truths[:, offset, idx], **hist_config, **config_4vector[ix], label="truth")
axs[ix].set_xlabel(xlabels[ix])
axs[ix].legend(loc='upper right')
# _, axs = plt.subplots(2,2, figsize=(10,10), constrained_layout=True)
# axs = axs.flatten()
# offset = 0
# for ix in range(4):
# idx = ix
# diff = (predicts[:, offset, idx]-truths[:, offset, idx])/truths[:, offset, idx]
# max_diff = np.max(diff)
# diff = diff[np.abs(diff) < xr]
# # print(idx, np.std(diff), max_diff, diff.shape[0])
# axs[ix].hist(diff, **hist_config, **config_4vectord[ix], label="prediction")
# axs[ix].set_xlabel(xlabels_diff[ix])
# add_mean_std(diff, xp[ix], yp[ix], axs[ix], dy=dy[ix])
for idx in range(2):
plot_4vector(idx)
node_abs_max = np.array([
[49.1, 47.7, 46.0, 47.0],
[46.2, 40.5, 41.0, 39.5],
[42.8, 36.4, 37.0, 35.5]
], dtype=np.float32)
node_mean = np.array([
[14.13, 0.05, -0.10, -0.04],
[7.73, 0.02, -0.04, -0.08],
[6.41, 0.04, -0.06, 0.04]
], dtype=np.float32)
node_scales = np.array([
[13.29, 10.54, 10.57, 12.20],
[8.62, 6.29, 6.35, 7.29],
[6.87, 5.12, 5.13, 5.90]
], dtype=np.float32)
def normalize(inputs, targets):
input_nodes = (inputs.nodes - node_mean[0])/node_scales[0]
target_nodes = np.reshape(targets.nodes, [batch_size, -1, 4])
target_nodes = np.reshape(target_nodes/node_abs_max, [batch_size, -1])
return input_nodes, target_nodes
def check_file(filename, ngen=1000):
dataset, n_graphs = read_dataset(test_data_name)
print("total {} graphs iterated with batch size of {} and {} batches".format(n_graphs, batch_size, n_batches))
print('averaging {} generated events for each input'.format(ngen))
test_data = loop_dataset(dataset, batch_size)
predict_4vec = []
truth_4vec = []
for ib in range(n_batches):
inputs, targets = next(test_data)
input_nodes, target_nodes = normalize(inputs, targets)
gen_evts = []
for igen in range(ngen):
noises = tf.random.normal([batch_size, noise_dim], dtype=tf.float32)
inputs = tf.concat([input_nodes, noises], axis=-1)
gen_graph = gan.generate(inputs)
gen_evts.append(gen_graph)
gen_evts = tf.reduce_mean(tf.stack(gen_evts), axis=0)
predict_4vec.append(tf.reshape(gen_evts, [batch_size, -1, 4]))
truth_4vec.append(tf.reshape(target_nodes, [batch_size, -1, 4])[:, 1:, :])
predict_4vec = tf.concat(predict_4vec, axis=0)
truth_4vec = tf.concat(truth_4vec, axis=0)
visiual(predict_4vec.numpy(), truth_4vec.numpy())
check_file(test_data_name, ngen=1000)
```
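The `check_file` routine above averages `ngen` generated events per input to suppress generator noise before comparing with truth. A tiny NumPy sketch of that averaging step (synthetic shapes, not the actual graph tensors):

```
import numpy as np

# Shape-level sketch of the averaging in check_file: stack ngen generated
# samples per input and take the mean to suppress generator noise.
ngen, batch_size, n_feat = 4, 2, 8
gen_evts = [np.random.rand(batch_size, n_feat) for _ in range(ngen)]
avg = np.stack(gen_evts).mean(axis=0)
print(avg.shape)
```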
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import numpy as np
import tensorflow as tf
import json
with open('dataset-bpe.json') as fopen:
data = json.load(fopen)
train_X = data['train_X']
train_Y = data['train_Y']
test_X = data['test_X']
test_Y = data['test_Y']
EOS = 2
GO = 1
vocab_size = 32000
train_Y = [i + [2] for i in train_Y]
test_Y = [i + [2] for i in test_Y]
from tensor2tensor.utils import beam_search
def pad_second_dim(x, desired_size):
padding = tf.tile([[[0.0]]], tf.stack([tf.shape(x)[0], desired_size - tf.shape(x)[1], tf.shape(x)[2]], 0))
return tf.concat([x, padding], 1)
class Translator:
def __init__(self, size_layer, num_layers, embedded_size, learning_rate):
def cells(reuse=False):
return tf.nn.rnn_cell.BasicRNNCell(size_layer,reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype = tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype = tf.int32)
batch_size = tf.shape(self.X)[0]
embeddings = tf.Variable(tf.random_uniform([vocab_size, embedded_size], -1, 1))
_, encoder_state = tf.nn.dynamic_rnn(
cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
inputs = tf.nn.embedding_lookup(embeddings, self.X),
sequence_length = self.X_seq_len,
dtype = tf.float32)
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
dense = tf.layers.Dense(vocab_size)
decoder_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = training_helper,
initial_state = encoder_state,
output_layer = dense)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
self.training_logits = training_decoder_output.rnn_output
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = predicting_helper,
initial_state = encoder_state,
output_layer = dense)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.fast_result = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 512
num_layers = 2
embedded_size = 256
learning_rate = 1e-3
batch_size = 128
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Translator(size_layer, num_layers, embedded_size, learning_rate)
sess.run(tf.global_variables_initializer())
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences
batch_x = pad_sequences(train_X[:10], padding='post')
batch_y = pad_sequences(train_Y[:10], padding='post')
sess.run([model.fast_result, model.cost, model.accuracy],
feed_dict = {model.X: batch_x, model.Y: batch_y})
import tqdm
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x = pad_sequences(train_X[i : index], padding='post')
batch_y = pad_sequences(train_Y[i : index], padding='post')
feed = {model.X: batch_x,
model.Y: batch_y}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = pad_sequences(test_X[i : index], padding='post')
batch_y = pad_sequences(test_Y[i : index], padding='post')
feed = {model.X: batch_x,
model.Y: batch_y,}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
from tensor2tensor.utils import bleu_hook
results = []
for i in tqdm.tqdm(range(0, len(test_X), batch_size)):
index = min(i + batch_size, len(test_X))
batch_x = pad_sequences(test_X[i : index], padding='post')
feed = {model.X: batch_x}
p = sess.run(model.fast_result,feed_dict = feed)
result = []
for row in p:
result.append([i for i in row if i > 3])
results.extend(result)
rights = []
for r in test_Y:
rights.append([i for i in r if i > 3])
bleu_hook.compute_bleu(reference_corpus = rights,
translation_corpus = results)
```
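The list comprehensions above keep only token ids greater than 3 before computing BLEU. Assuming ids 0–3 are reserved for special tokens (consistent with `GO = 1` and `EOS = 2` above, though the exact reserved set is an assumption), the filter behaves like:

```
# Strip reserved ids before scoring; the reserved set {0, 1, 2, 3} is an
# assumption consistent with GO = 1 and EOS = 2 above.
def strip_special(ids, n_reserved=4):
    return [i for i in ids if i >= n_reserved]

print(strip_special([1, 245, 17, 4, 2, 0, 0]))  # -> [245, 17, 4]
```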
```
from flask import Flask
import matplotlib.pyplot as plt
from flask import Flask, request, render_template
import pandas
import os
import sys
from flask import Flask, request, session, g, redirect, url_for, abort, render_template
from flaskext.mysql import MySQL
from flask_wtf import FlaskForm
from wtforms.fields.html5 import DateField
from wtforms import SelectField,StringField, BooleanField, SubmitField
from wtforms.validators import DataRequired
from datetime import date
import time
import gmplot
from wtforms import Form
from flask_wtf import Form
from wtforms import TextField, IntegerField, TextAreaField, SubmitField, RadioField,SelectField
import seaborn as sns
from wtforms import validators, ValidationError
app = Flask(__name__)
app.secret_key = 'A0Zr98slkjdf984jnflskj_sdkfjhT'
mysql = MySQL()
app.config['MYSQL_DATABASE_USER'] = 'root'
app.config['MYSQL_DATABASE_PASSWORD'] = '1234'
app.config['MYSQL_DATABASE_DB'] = 'CREDIT'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)
conn = mysql.connect()
cursor = conn.cursor()
class AnalyticsForm(FlaskForm):
attributes = SelectField('Data Attributes', choices=[('SEX', 'SEX'), ('MARRIAGE', 'MARRIAGE'), ('EDUCATION', 'EDUCATION')])
class InfoForm(FlaskForm):
sex = StringField('Gender') #, validators=[DataRequired()])
#submit = SubmitField('Sign In')
class PaymentForm(FlaskForm):
attributes = SelectField('Data Attributes', choices=[('BILL_AMT1', 'BILL_AMT1'),
('BILL_AMT2', 'BILL_AMT2'), ('BILL_AMT3', 'BILL_AMT3'),
('BILL_AMT4', 'BILL_AMT4'), ('BILL_AMT5', 'BILL_AMT5'),
('BILL_AMT6', 'BILL_AMT6')])
class ContactForm(Form):
Age = TextField("Age",[validators.Required("Please enter the age.")])
Gender = RadioField('Gender', choices = [('M','Male'),('F','Female')])
Education = SelectField('Education', choices = [('1','Graduate'),('2','university'),('3','high school'), ('4','others'), ('5','unknown')])
#Address = TextAreaField("Education")
    PAY_CHOICES = [('-1','pay duly'), ('1','payment delay for one month'), ('2','payment delay for two months'),
                   ('3','payment delay for three months'), ('4','payment delay for four months'),
                   ('5','payment delay for five months'), ('6','payment delay for six months'),
                   ('7','payment delay for seven months'), ('8','payment delay for eight months'),
                   ('9','payment delay for nine months and above')]
    PAY_0 = SelectField("PAY_0", choices=PAY_CHOICES)
    PAY_2 = SelectField("PAY_2", choices=PAY_CHOICES)
    PAY_3 = SelectField("PAY_3", choices=PAY_CHOICES)
    PAY_4 = SelectField("PAY_4", choices=PAY_CHOICES)
    PAY_5 = SelectField("PAY_5", choices=PAY_CHOICES)
    PAY_6 = SelectField("PAY_6", choices=PAY_CHOICES)
BILL_AMT1 = TextField("Amount of bill statement in September")
BILL_AMT2 = TextField("Amount of bill statement in August")
BILL_AMT3 = TextField("Amount of bill statement in July")
BILL_AMT4 = TextField("Amount of bill statement in June")
BILL_AMT5 = TextField("Amount of bill statement in May")
BILL_AMT6 = TextField("Amount of bill statement in April")
PAY_AMT1 = TextField("Amount of previous payment in September")
PAY_AMT2 = TextField("Amount of previous payment in August")
PAY_AMT3 = TextField("Amount of previous payment in July")
PAY_AMT4 = TextField("Amount of previous payment in June")
PAY_AMT5 = TextField("Amount of previous payment in May")
PAY_AMT6 = TextField("Amount of previous payment in April")
MARRIAGE = RadioField("MARRIAGE", choices = [('1','married'), ('2','single'), ('3','others')])
language = SelectField('Languages', choices = [('cpp', 'C++'), ('py', 'Python')])
submit = SubmitField("Send")
class MarriageForm(FlaskForm):
attributes = SelectField('Data Attributes', choices=[('BILL_AMT1', 'BILL_AMT1'),
('BILL_AMT2', 'BILL_AMT2'), ('BILL_AMT3', 'BILL_AMT3'),
('BILL_AMT4', 'BILL_AMT4'), ('BILL_AMT5', 'BILL_AMT5'),
('BILL_AMT6', 'BILL_AMT6')])
def get_homepage_links1():
return [{"href": url_for('analytics'), "label":"plot"},{"href": url_for('data'), "label":"data"}]
def get_homepage_links2():
return [{"href": url_for('predict'), "label":"predict"}]
def get_df_data():
query = "select ID, MARRIAGE, SEX, EDUCATION,default_payment from credit_card;"
cursor.execute(query)
data = cursor.fetchall()
df = pandas.DataFrame(data=list(data),columns=['ID', 'MARRIAGE', 'SEX', 'EDUCATION','default_payment'])
return df
@app.route("/")
def home():
#session["data_loaded"] = True
return render_template('new_layout.html', link1=get_homepage_links1(), link2=get_homepage_links2())
@app.route("/data")
def data():
return render_template('data1.html')
@app.route("/data2",methods = ['POST', 'GET'])
def data2():
if request.method == 'POST':
result = request.form
AGE = result['AGE']
SEX = result['SEX']
EDUCATION = result['EDUCATION']
MARRIAGE = result['MARRIAGE']
query = f"select ID, AGE, SEX, EDUCATION,MARRIAGE,default_payment from credit_card where AGE ={AGE} "
if SEX:
query += f"and SEX = {SEX} "
if EDUCATION:
query += f"and EDUCATION = {EDUCATION} "
if MARRIAGE:
query += f"and MARRIAGE = {MARRIAGE} "
cursor.execute(query)
data = cursor.fetchall()
data=list(data)
return render_template("data.html",df = data)
@app.route("/predict")
def predict():
return render_template('predict1.html')
@app.route("/predict2",methods = ['POST', 'GET'])
def predict2():
if request.method == 'POST':
result = request.form
LIMIT_BAL = result['LIMIT_BAL']
AGE = result['AGE']
SEX = result['SEX']
EDUCATION = result['EDUCATION']
MARRIAGE = result['MARRIAGE']
PAY_0 = result['PAY_0']
PAY_2 = result['PAY_2']
PAY_3 = result['PAY_3']
PAY_4 = result['PAY_4']
PAY_5 = result['PAY_5']
PAY_6 = result['PAY_6']
CREDIT_AMT1=result['CREDIT_AMT1']
CREDIT_AMT2=result['CREDIT_AMT2']
CREDIT_AMT3=result['CREDIT_AMT3']
CREDIT_AMT4=result['CREDIT_AMT4']
CREDIT_AMT5=result['CREDIT_AMT5']
CREDIT_AMT6=result['CREDIT_AMT6']
info = [LIMIT_BAL,AGE,SEX,EDUCATION,MARRIAGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,CREDIT_AMT1,CREDIT_AMT2,CREDIT_AMT3,CREDIT_AMT4,CREDIT_AMT5,CREDIT_AMT6]
a = list(visulization(info).values())
df = [['Method1',a[0]],['Method2',a[1]],['Method3',a[2]],['Method4',a[3]],['Method5',a[4]],['Method6',a[5]],['Method6',a[6]]]
return render_template("predict.html",df = df)
def get_age_range(age):
j = 20
for i in range(6):
if int(age) < j+10:
break
j = j + 10
return j
def get_payment_data():
query = "select AGE, MARRIAGE, BILL_AMT1, BILL_AMT2, BILL_AMT3, BILL_AMT4, BILL_AMT5, BILL_AMT6 from credit_card;"
cursor.execute(query)
data = cursor.fetchall()
payment_data = pandas.DataFrame(data=list(data),columns=['AGE', 'MARRIAGE', 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6'])
payment_data['BILL_AMT1'] = payment_data['BILL_AMT1'].apply(lambda x: float(x))
payment_data['BILL_AMT2'] = payment_data['BILL_AMT2'].apply(lambda x: float(x))
payment_data['BILL_AMT3'] = payment_data['BILL_AMT3'].apply(lambda x: float(x))
payment_data['BILL_AMT4'] = payment_data['BILL_AMT4'].apply(lambda x: float(x))
payment_data['BILL_AMT5'] = payment_data['BILL_AMT5'].apply(lambda x: float(x))
payment_data['BILL_AMT6'] = payment_data['BILL_AMT6'].apply(lambda x: float(x))
payment_data['age_range'] = payment_data['AGE'].apply(lambda x:get_age_range(x))
return payment_data
@app.route('/analytics',methods=['GET','POST'])
def analytics():
form1= AnalyticsForm()
if form1.validate_on_submit():
df = get_df_data()
column = request.form.get('attributes')
group = df.groupby([column,'default_payment'])
ax = group.size().unstack().plot(kind='bar')
fig = ax.get_figure()
filename = f'static/charts/{column}.png'
fig.savefig(filename)
return render_template('analytics.html', chart_src="/"+filename)
form2 = PaymentForm()
if form2.validate_on_submit():
payment_df = get_payment_data()
column = request.form.get('attributes')
df1 = payment_df[['age_range', column]]
sns.set_style('ticks')
sns_plot = sns.violinplot(df1['age_range'], df1[column])
sns_plot.set_title('Bill Amount for Different Age Groups on Month '+ column[-1])
sns_plot.set_ylabel('Bill Amount')
sns_plot.set_xlabel('Age Range')
fig = sns_plot.get_figure()
filename = f'static/charts/{column + "AGE"}.png'
fig.savefig(filename)
return render_template('payment_age.html', chart_src="/"+filename)
form3 = MarriageForm()
if form3.validate_on_submit():
payment_df = get_payment_data()
column = request.form.get('attributes')
df1 = payment_df[['MARRIAGE', column]]
sns.set_style('ticks')
sns_plot = sns.violinplot(df1['MARRIAGE'], df1[column])
sns_plot.set_title('Bill Amount for Different MARRIAGE Groups on Month '+ column[-1])
sns_plot.set_ylabel('Bill Amount')
sns_plot.set_xlabel('MARRIAGE')
fig = sns_plot.get_figure()
filename = f'static/charts/{column + "MARRIAGE"}.png'
fig.savefig(filename)
return render_template('payment_marriage.html', chart_src="/"+filename)
return render_template('analyticsparams.html', form1 =form1, form2 = form2, form3 = form3)
if __name__ == "__main__":
app.run()
```
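One caveat about the routes above: the SQL strings are assembled with f-string interpolation, which leaves the app open to SQL injection. A minimal sketch of the parameterized alternative, shown here with the stdlib `sqlite3` for portability (the `flaskext.mysql` cursor takes `%s` placeholders instead of `?`):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table credit_card (ID int, AGE int, SEX int)")
conn.execute("insert into credit_card values (1, 35, 1), (2, 35, 2)")

def query_by_age_sex(age, sex=None):
    # Placeholders let the driver escape values instead of string formatting.
    query = "select ID from credit_card where AGE = ?"
    params = [age]
    if sex is not None:
        query += " and SEX = ?"
        params.append(sex)
    query += " order by ID"
    return [row[0] for row in conn.execute(query, params)]

print(query_by_age_sex(35))      # -> [1, 2]
print(query_by_age_sex(35, 2))   # -> [2]
```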
# 02 - Ensembling: Bagging, Boosting and Ensemble
<div class="alert alert-block alert-success">
<b>Version:</b> v0.1 <b>Date:</b> 2020-06-09
This notebook records the implementation strategies for `RandomForest`, `XGBoost`, and model ensembles.
</div>
<div class="alert alert-block alert-info">
<b>💡 Tips:</b>
- **Environment dependencies**: Fastai v2 (0.0.18), XGBoost (1.1.1), sklearn
- **Dataset**: [ADULT_SAMPLE](http://files.fast.ai/data/examples/adult_sample.tgz)
</div>
<div class="alert alert-block alert-danger">
<b>Note 📌:</b>
This notebook contains only preliminary tests of each algorithm, not best practices.
</div>
## Data Preparation
```
from fastai2.tabular.all import *
```
Let's first build our `TabularPandas` object:
```
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df.head()
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
y_names = 'salary'
y_block = CategoryBlock()
splits = RandomSplitter()(range_of(df))
to = TabularPandas(df, procs=procs, cat_names=cat_names, cont_names=cont_names,
y_names=y_names, y_block=y_block, splits=splits)
X_train,y_train = to.train.xs.values,to.train.ys.values.squeeze()
X_valid,y_valid = to.valid.xs.values,to.valid.ys.values.squeeze()
features = to.x_names
```
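Conceptually, the `procs` passed to `TabularPandas` correspond to familiar pandas operations. The following is only a rough sketch on toy data, not fastai's actual implementation (fastai also records the category maps and normalization statistics for reuse at inference time):

```
import pandas as pd

# Toy frame with one categorical and one continuous column.
df = pd.DataFrame({"workclass": ["Private", "State-gov", "Private"],
                   "age": [39.0, None, 50.0]})
# Categorify: map category labels to integer codes.
df["workclass"] = df["workclass"].astype("category").cat.codes
# FillMissing: fill continuous NAs (fastai uses the median by default).
df["age"] = df["age"].fillna(df["age"].median())
# Normalize: zero mean, unit variance.
df["age"] = (df["age"] - df["age"].mean()) / df["age"].std()
print(df)
```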
# XGBoost
* Gradient Boosting
* [Documentation](https://xgboost.readthedocs.io/en/latest/)
```
import xgboost as xgb
xgb.__version__
model = xgb.XGBClassifier(n_estimators = 100, max_depth=8, learning_rate=0.1, subsample=0.5)
```
And now we can fit our classifier:
```
xgb_model = model.fit(X_train, y_train)
```
And we'll grab the raw probabilities from our test data:
```
xgb_preds = xgb_model.predict_proba(X_valid)
xgb_preds
```
And check it's accuracy
```
accuracy(tensor(xgb_preds), tensor(y_valid))
```
We can even plot the importance
```
from xgboost import plot_importance
plot_importance(xgb_model).set_yticklabels(list(features))
```
## The `Tabular Learner` in `fastai2`
```
dls = to.dataloaders()
learn = tabular_learner(dls, layers=[200,100], metrics=accuracy)
learn.fit(5, 1e-2)
```
As we can see, our neural network reaches 83.84% accuracy, slightly higher than the GBT
Now we'll grab predictions
```
nn_preds = learn.get_preds()[0]
nn_preds
```
Let's check to see if our feature importance changed at all
```
class PermutationImportance():
"Calculate and plot the permutation importance"
def __init__(self, learn:Learner, df=None, bs=None):
"Initialize with a test dataframe, a learner, and a metric"
self.learn = learn
self.df = df if df is not None else None
bs = bs if bs is not None else learn.dls.bs
self.dl = learn.dls.test_dl(self.df, bs=bs) if self.df is not None else learn.dls[1]
self.x_names = learn.dls.x_names.filter(lambda x: '_na' not in x)
self.na = learn.dls.x_names.filter(lambda x: '_na' in x)
self.y = dls.y_names
self.results = self.calc_feat_importance()
self.plot_importance(self.ord_dic_to_df(self.results))
def measure_col(self, name:str):
"Measures change after column shuffle"
col = [name]
if f'{name}_na' in self.na: col.append(name)
orig = self.dl.items[col].values
perm = np.random.permutation(len(orig))
self.dl.items[col] = self.dl.items[col].values[perm]
metric = learn.validate(dl=self.dl)[1]
self.dl.items[col] = orig
return metric
def calc_feat_importance(self):
"Calculates permutation importance by shuffling a column on a percentage scale"
print('Getting base error')
base_error = self.learn.validate(dl=self.dl)[1]
self.importance = {}
pbar = progress_bar(self.x_names)
print('Calculating Permutation Importance')
for col in pbar:
self.importance[col] = self.measure_col(col)
for key, value in self.importance.items():
self.importance[key] = (base_error-value)/base_error #this can be adjusted
return OrderedDict(sorted(self.importance.items(), key=lambda kv: kv[1], reverse=True))
def ord_dic_to_df(self, dict:OrderedDict):
return pd.DataFrame([[k, v] for k, v in dict.items()], columns=['feature', 'importance'])
def plot_importance(self, df:pd.DataFrame, limit=20, asc=False, **kwargs):
"Plot importance with an optional limit to how many variables shown"
df_copy = df.copy()
df_copy['feature'] = df_copy['feature'].str.slice(0,25)
df_copy = df_copy.sort_values(by='importance', ascending=asc)[:limit].sort_values(by='importance', ascending=not(asc))
ax = df_copy.plot.barh(x='feature', y='importance', sort_columns=True, **kwargs)
for p in ax.patches:
ax.annotate(f'{p.get_width():.4f}', ((p.get_width() * 1.005), p.get_y() * 1.005))
imp = PermutationImportance(learn)
```
And it did! Is that bad? No, it's actually what we want: if the models used the features in the same way, we'd expect very similar results. We bring in other models in the hope that they offer a different view of how the features are used.
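For comparison, scikit-learn ships the same idea as `sklearn.inspection.permutation_importance`. A minimal sketch on synthetic data, where only the first feature carries signal:

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = (X[:, 0] > 0.5).astype(int)  # only feature 0 carries signal
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)  # feature 0 should dominate
```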
## Ensembling the Two Models
And now we can perform our ensembling! To do so we'll average our predictions together (take the sum and divide by 2):
```
avgs = (nn_preds + xgb_preds) / 2
avgs
```
And now we'll take the argmax to get our predictions:
```
argmax = avgs.argmax(dim=1)
argmax
```
How do we know if it worked? Let's grade our predictions:
```
y_valid
accuracy(tensor(nn_preds), tensor(y_valid))
accuracy(tensor(xgb_preds), tensor(y_valid))
accuracy(tensor(avgs), tensor(y_valid))
```
As you can see we scored a bit higher!
## Random Forests
Let's also try with Random Forests
```
from sklearn.ensemble import RandomForestClassifier
tree = RandomForestClassifier(n_estimators=100,max_features=0.5,min_samples_leaf=5)
```
Now let's fit
```
tree.fit(X_train, y_train);
```
Now, we are not going to use the default importances. Why? Read up here:
[Beware Default Random Forest Importances](https://explained.ai/rf-importance/) by Terence Parr, Kerem Turgutlu, Christopher Csiszar, and Jeremy Howard
Instead, based on their recommendations we'll be utilizing their `rfpimp` package
```
#!pip install rfpimp
from rfpimp import *
imp = importances(tree, to.valid.xs, to.valid.ys)
plot_importances(imp)
```
Which as we can see, was also very different.
Now we can get our raw probabilities:
```
forest_preds = tree.predict_proba(X_valid)
forest_preds
```
And now we can add it to our ensemble:
```
accuracy(tensor(forest_preds),tensor(y_valid))
avgs = (nn_preds + xgb_preds + forest_preds) / 3
accuracy(tensor(avgs), tensor(y_valid))
```
As we can see, it didn't quite work out the way we wanted. But that is okay; the goal was to experiment!
# Chapter 2: Working With Lists
Much of the remainder of this book is dedicated to using data structures to produce analysis that is elegant and efficient. To use the words of economics, you are making a long-term investment in your human capital by working through these exercises. Once you have invested in these fixed-costs, you can work with data at low marginal cost.
If you are familiar with other programming languages, you may be accustomed to working with arrays. An array must be declared to hold a particular data type (_float_, _int_, _string_, etc.). By default, Python works with dynamic lists instead of arrays. Dynamic lists are not cast as a particular type.
## Working with Lists
|New Concepts | Description|
| --- | --- |
| Dynamic List | A dynamic list is encapsulated by brackets _([])_. A list is mutable. Elements can be added to or deleted from a list on the fly.|
| List Concatenation | Two lists can be joined together in the same manner that strings are concatenated. |
| List Indexing | Lists are indexed with the first element being indexed as zero and the last element as the length of (number of elements in) the list less one. Indexes are called using brackets – i.e., _lst[0]_ calls the 0th element in the list. |
In later chapters, we will combine lists with dictionaries to build essential data structures. We will also work with more efficient and convenient data structures using the numpy and pandas libraries.
Below we make our first lists. One will be empty. Another will contain integers. Another will have floats. Another strings. Another will mix these:
```
#lists.py
empty_list = []
int_list = [1,2,3,4,5]
float_list = [1.0,2.0,3.0,4.0,5.0]
string_list = ["Many words", "impoverished meaning"]
mixed_list = [1,2.0, "Mix it up"]
print(empty_list)
print(int_list)
print(float_list)
print(string_list)
print(mixed_list)
```
Often we will want to transform lists. In the following example, we will concatenate two lists, which means we will join the lists together:
```
#concatenateLists
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
join_lists = list1 + list2
print("list1:", list1)
print("list2:", list2)
print(join_lists)
```
We have joined the lists together to make one long list. We can already observe one way in which Python will be useful for helping us to organize data. If we were doing this in a spreadsheet, we would have to identify the row and column values of the elements, copy and paste the desired values into new rows, or enter formulas into cells. Python accomplishes this for us with much less work.
For a list of numbers, we will usually perform some arithmetic operation or categorize the values in order to identify meaningful subsets within the data. This requires accessing the elements, which Python allows us to do efficiently.
# For Loops and _range()_
| New Concepts | Description |
| --- | --- |
| _list(obj)_ | List transforms an iterable object, such as a tuple or set, into a dynamic list. |
| _range(j, k, l)_ | Identifies a range of integers from _j_ to _k – 1_ separated by some interval _l_. |
|_len(obj)_ | Measures the length of an iterable object. |
We can use a for loop to execute this task more efficiently. As we saw in the last chapter, the for loop iterates over a series of elements: _for element in list_. Often, this list is a range of numbers representing the indices of a dynamic list. For this purpose we call:
```
for i in range(j, k, l):
    <execute script>
```
The for loop cycles through all integers of interval _l_ between _j_ and _k – 1_, executing a script for each value. This script may explicitly integrate the value _i_.
If you do not specify a starting value, _j_, the range function assumes that it is _0_. Likewise, if you do not specify an interval, _l_, range assumes that the interval is _1_. Thus, _for i in range(k)_ is interpreted as _for i in range(0, k, 1)_. We will again use the loop in its simplest form, cycling through numbers from _0_ to _k – 1_, where the length of the list is the value _k_. These cases are illustrated below in _range.py_.
```
#range.py
list1 = list(range(9))
list2 = list(range(-9,9))
list3 = list(range(-9,9,3))
print(list1)
print(list2)
print(list3)
```
The for loop will automatically identify the elements contained in _range()_ without requiring you to call _list()_. This is illustrated below in _forLoopAndRange.py_.
```
#forLoopAndRange.py
for i in range(10):
    print(i)
```
Having printed *i* for all *i in range(0, 10, 1)*, we produce a set of integers from 0 to 9.
If we were only printing index numbers from a range, for loops would not be very useful. For loops can be used to produce a wide variety of outputs. Often, you will call a for loop to cycle through the index of a particular array. Since arrays are indexed starting with 0 and for loops also assume 0 as an initial value, cycling through a list with a for loop is straight-forward. For a list named *A*, just use the command:
```
for i in range(len(A)):
    <execute script>
```
This command will call all integers between 0 and one less than the length of _A_. In other words, it will call all indices associated with _A_.
```
#copyListElementsForLoop.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
for i in range(j):
    list3.append(list1[i])

k = len(list2)
for i in range(k):
    list3.append(list2[i])
print("list3 elements:", list3)
```
## Creating a New List with Values from Other Lists
| New Concepts | Description |
| --- | --- |
| List Methods i.e., _.append()_, _.insert()_ | The list methods append and insert increase the length of a list by adding an element to the list.|
| If Statements | An if statement executes the block of code contained in it if conditions stipulated by the if statement are met (they return True). |
| Else Statement | In the case that the conditions stipulated by an if statement are not met, an else statement executes an alternate block of code. |
| Operator i.e., ==, !=, <, >, <=, >= | The operator indicates the condition relating two variables that is to be tested. |
We can extend the exercise by summing the ith elements in each list. In the exercise below, _list3_ is the sum of the ith elements from _list1_ and _list2_.
```
#addListElements.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
for i in range(j):
    list3.append(list1[i] + list2[i])
print("list3:", list3)
```
In the last exercise, we created an empty list, _list3_. We could not fill the list by assigning to its elements directly, as no elements yet exist in the list. Instead, we used the append method that is owned by the list object. Alternately, we can use the insert method, which takes the form _list.insert(index, object)_; this is shown in a later example. We appended the summed values of the first two lists in the order that the elements are indexed. We could have summed them in the opposite order by summing element 5, then 4, ..., then 0.
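As a quick illustration of the difference between the two methods:

```python
colors = ["red", "blue"]
colors.append("green")       # .append adds an element to the end
colors.insert(0, "yellow")   # .insert places an element at the given index
print(colors)                # ['yellow', 'red', 'blue', 'green']
```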
```
#addListElements.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
for i in range(j):
    list3.insert(0, list1[i] + list2[i])
print("list3:", list3)
```
In the next exercise we will use a construct that we have not used before. We will check the length of each list whose elements are summed. We want to make sure that if we call an index from one list, it exists in the other. We do not want to call a list index if it does not exist, as that would produce an error.
We can check if a statement is true using an if statement. As with the for loop, the if statement is followed by a colon. This tells the program that the execution below or in front of the if statement depends upon the truth of the condition specified. The code that follows below an if statement must be indented, as this identifies what block of code is subject to the statement.
```
if True:
    print("execute script")
```
If the statement returns _True_, then the commands that follow the if-statement will be executed. Though not stated explicitly, we can think of the program as passing over the if statement to the remainder of the script:
```
if True:
    print("execute script")
else:
    pass
```
If the statement returns _False_, then the program will continue reading the script.
```
if False:
    print("execute script")
else:
    print("statement isn't True")
```
Since the condition is _False_, the if block is skipped and the else block prints its message instead.
We will want to check if the lengths of two different lists are the same. To check that a variable has a stipulated value, we use two equals signs. Using _==_ allows the program to compare two values rather than setting the value of the variable on the left, as would occur with only one equals sign.
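A minimal illustration of the difference between assignment and comparison:

```python
x = 5           # assignment: binds the name x to the value 5
print(x == 5)   # comparison: prints True
x = 7           # assignment again: x is now 7
print(x == 5)   # prints False
```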
Following the if statement is a for loop. If the length of _list1_ and _list2_ are equal, the program will set the ith element of _list3_ equal to the sum of the ith elements from _list1_ and _list2_. In this example, the for loop will cycle through index values 0, 1, 2, 3, 4, and 5.
We can take advantage of the for loop to use _.insert()_ in a manner that replicates the effect of our use of _append()_. We will insert the sum of the ith elements of _list1_ and _list2_ at the ith element of _list3_.
```
#addListElements3.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
if j == len(list2):
    for i in range(0, len(list2)):
        list3.insert(i, list1[i] + list2[i])
print("list3:", list3)
```
The if condition may be followed by an else statement. This tells the program to run a different command if the condition of the if statement is not met. In this case, we want the program to tell us why the condition was not met. In other cases, you may want to create other if statements to create a tree of possible outcomes. Below we use an if-else statement to identify when lists are not the same length. We remove the last element from _list2_ to create lists of different lengths:
```
#addListElements4.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
if j == len(list2):
    for i in range(0, len(list2)):
        list3.insert(i, list1[i] + list2[i])
else:
    print("Lists are not the same length, cannot perform element-wise operations.")
print("list3:", list3)
```
Since the condition passed to the if statement was false, no values were appended to *list3*.
## Removing List Elements
| New Concepts | Description |
| --- | --- |
| _del_ | The command del is used to delete an element from a list |
|List Methods i.e., _.pop()_, _.remove()_, _.append()_ | Lists contains methods that can be used to modify the list. These include _.pop()_ which removes the last element of a list, allowing it to be saved as a separate object. Another method, _.remove()_ deletes an explicitly identified element. _.append(x)_ adds an additional element at the end of the list. |
Perhaps you want to remove an element from a list. There are a few means of accomplishing this. Which one you choose depends on the ends desired.
```
#deleteListElements.py
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
print("lists before deletion: ")
for i in range(len(list1)):
    print(list1[i], "\t", list2[i])
del list1[0]
del list2[5]
print()
print("lists after deletion: ")
for i in range(len(list1)):
    print(list1[i], "\t", list2[i])
```
We have deleted _"red"_ from _list1_ and _"dog"_ from _list2_. By printing the elements of each list once before and once after one element is deleted from each, we can note the difference in the lists over time.
What if we knew that we wanted to remove the elements but did not want to check what index each element is associated with? We can use the remove function owned by each list. We will tell _list1_ to remove _"red"_ and _list2_ to remove _"dog"_.
```
#removeListElements.py
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
print("lists before deletion: ")
for i in range(len(list1)):
    print(list1[i], "\t", list2[i])
list1.remove("red")
list2.remove("dog")
print()
print("lists after deletion: ")
for i in range(len(list1)):
    print(list1[i], "\t", list2[i])
```
We have achieved the same result using a different means. What if we wanted to keep track of the element that we removed? Before deleting or removing the element, we could assign the value to a different object. Let's do this before using the remove function:
```
#removeAndSaveListElementsPop.py
#define list1 and list2
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
#identify what is printed in for loop
print("lists before deletion: ")
if len(list1) == len(list2):
    # use for loop to print lists in parallel
    for i in range(len(list1)):
        print(list1[i], "\t", list2[i])

# remove list elements and save them as variables "_res"
list1_res = "red"
list2_res = "dog"
list1.remove(list1_res)
list2.remove(list2_res)
print()
# print lists again as in lines 8-11
print("lists after deletion: ")
if len(list1) == len(list2):
    for i in range(len(list1)):
        print(list1[i], "\t", list2[i])
print()
print("Res1", "\tRes2")
print(list1_res, "\t" + (list2_res))
```
An easier way to accomplish this is to use *.pop()*, another method owned by each list.
```
#removeListElementsPop.py
#define list1 and list2
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = ["nose", "ice", "fire", "cat", "mouse", "dog"]
#identify what is printed in for loop
print("lists before deletion: ")
# use for loop to print lists in parallel
for i in range(len(list1)):
    print(list1[i], "\t", list2[i])

# remove list elements and save them as variables "_res"
list1_res = list1.pop(0)
list2_res = list2.pop(5)
print()
# print lists again as in lines 8-11
print("lists after deletion: ")
for i in range(len(list1)):
    print(list1[i], "\t", list2[i])
print()
print("Res1", "\tRes2")
print(list1_res, "\t" + (list2_res))
```
## More with For Loops
When you loop through element values, it is not necessary that these are consecutive. You may skip values at some interval. The next example returns to the earlier _addListElements#.py_ examples. This time, we pass the number 2 as the third argument to _range()_. Now range will count by twos from _0_ to _j – 1_. This will make _list3_ shorter than before.
```
#addListElements5.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = [6, 3, 2, 1, 5, 3]
print("list1 elements:", list1[0], list1[1], list1[2], list1[3], list1[4])
print("list2 elements:", list2[0], list2[1], list2[2], list2[3], list2[4])
list3 = []
j = len(list1)
if j == len(list2):
    for i in range(0, j, 2):
        list3.append(list1[i] + list2[i])
else:
    print("Lists are not the same length, cannot perform element-wise operations.")
print("list3:", list3)
```
We entered the sum of elements 0, 2, and 4 from _list1_ and _list2_ into *list3*. Since these were appended to *list3*, they are indexed in *list3[0]*, *list3[1]*, and *list3[2]*.
For loops in Python can iterate over the elements of any iterable object in sequence. These include lists, strings, keys and values from dictionaries, as well as the range function we have already used. You may use a for loop that calls each element in the list without identifying the index of each element.
```
obj = ["A", "few", "words", "to", "print"]
for x in obj:
    print(x)
```
Each $x$ called is an element from _obj_. Where before we passed _range(len(list1))_ to the for loop, we now pass _list1_ itself to the for loop and append each element $x$ to _list2_.
```
#forLoopWithoutIndexer.py
list1 = ["red", "blue", "orange", "black", "white", "golden"]
list2 = []
for x in list1:
    list2.append(x)
print("list1\t", "list2")
k = len(list1)
j = len(list2)
if len(list1) == len(list2):
    for i in range(0, len(list1)):
        print(list1[i], "\t", list2[i])
```
## Sorting Lists, Errors, and Exceptions
| New Concepts | Description |
| --- | --- |
| _sorted()_ | The function sorted() sorts a list in order of numerical or alphabetical value. |
| passing errors i.e., _try_ and _except_ | A try statement will pass over an error if one is generated by the code in the try block. In the case that an error is raised, code from the except block will be called. This should typically identify the type of error that was passed. |
We can sort lists using the _sorted()_ function, which orders the list either numerically or alphabetically. We reuse lists from the last examples to show this.
```
#sorting.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = ["red", "blue", "orange", "black", "white", "golden"]
print("list1:", list1)
print("list2:", list2)
sorted_list1 = sorted(list1)
sorted_list2 = sorted(list2)
print("sortedList1:", sorted_list1)
print("sortedList2:", sorted_list2)
```
What happens if we try to sort a list that has both strings and integers? You might expect that Python would sort integers and then strings or vice versa. If you try this, you will raise an error:
```
#sortingError.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = ["red", "blue", "orange", "black", "white", "golden"]
list3 = list1 + list2
print("list1:", list1)
print("list2:", list2)
print("list3:", list3)
sorted_list1 = sorted(list1)
sorted_list2 = sorted(list2)
print("sortedList1:", sorted_list1)
print("sortedList2:", sorted_list2)
sorted_list3 = sorted(list3)
print("sortedList3:", sorted_list3)
print("Execution complete!")
```
The script returns an error. If this error is raised during execution, it will interrupt the program. One way to deal with this is to ask Python to try to execute some script and to execute some other command if an error would normally be raised:
```
#sortingError.py
list1 = [5, 4, 9, 10, 3, 5]
list2 = ["red", "blue", "orange", "black", "white", "golden"]
list3 = list1 + list2
print("list1:", list1)
print("list2:", list2)
print("list3:", list3)
sorted_list1 = sorted(list1)
sorted_list2 = sorted(list2)
print("sortedList1:", sorted_list1)
print("sortedList2:", sorted_list2)
try:
    sorted_list3 = sorted(list3)
    print("sortedList3:", sorted_list3)
except TypeError:
    print("TypeError: unorderable types: str() < int(); "
          "ignoring error")
print("Execution complete!")
```
We successfully avoided the error and instead called an alternate operation defined under except. The use for this will become more obvious as we move along. We will use try and except statements from time to time and note the reason when we do.
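It is generally safer to name the specific exception you expect rather than use a bare _except_, which would silently swallow unrelated errors. A minimal sketch:

```python
mixed = [5, "red", 4, "blue"]
try:
    print(sorted(mixed))
except TypeError as err:
    # only TypeError is caught; any other error would still surface
    print("Could not sort mixed list:", err)
```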
## Slicing a List
| New Concepts | Description |
| --- | --- |
| slice i.e., _list\[a:b\]_|A slice of a list is a copy of a portion (or all) of a list from index a to b – 1.|
Sometimes, we may want to access several elements at once. Python allows us to do this with a slice. Technically, when you call a list in its entirety, you take a slice that includes the entire list. We can do this explicitly like this:
```
#fullSlice.py
some_list = [3, 1, 5, 6, 1]
print(some_list[:])
```
Using *some_list\[:\]* is equivalent to creating a slice using *some_list\[min_index:max_index\]* where *min_index = 0* and *max_index = len(some_list)*:
```
#fullSlice2.py
some_list = [3, 1, 5, 6, 1]
min_index = 0
max_index = len(some_list)
print("minimum:", min_index)
print("maximum:", max_index)
print("Full list using slice", some_list[min_index:max_index])
print("Full list without slice", some_list)
```
This is not very useful unless we take a smaller subsection of a list. Below, we create a new list that is a subset of the original list. As you might expect by now, *full_list\[7\]* calls the element at index 7, which is the 8th element since indexing begins with 0. Also, similar to the command *for i in range(3, 7)*, the slice calls elements 3, 4, 5, and 6:
```
#partialSlice.py
min_index = 3
max_index = 7
full_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
partial_list = full_list[min_index:max_index]
print("Full List:", full_list)
print("Partial List:", partial_list)
print("full_list[7]:", full_list[7])
```
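Slices also accept a third value, a step, which works just like the third argument of _range()_:

```python
full_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(full_list[0:9:2])   # every other element: [1, 3, 5, 7, 9]
print(full_list[::2])     # same result using the defaults
print(full_list[::-1])    # a negative step reverses the list
```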
## Nested For Loops
| New Concepts | Description |
| --- | --- |
| Nested For Loops | A for loop may contain other for loops. They are useful for multidimensional data structures. |
Creative use of for loops can save the programmer a lot of work. While you should be careful not to create so many layers of for loops and if statements that code is difficult to interpret (“Flat is better than nested”), you should be comfortable with the structure of nested for loops and, eventually, their use in structures like dictionaries and generators.
A useful way to become acquainted with the power of multiple for loops is to identify the result of each iteration of the nested for loops. In the code below, the first for loop will count from 0 to 4. For each value of *i*, the second for loop will cycle through values 0 to 4 for _j_.
```
#nestedForLoop.py
print("i", "j")
for i in range(5):
    for j in range(5):
        print(i, j)
```
Often, we will want to employ values generated by for loops in a manner other than printing them directly. We may, for example, want to create a new value constructed from _i_ and _j_. Below, this value is constructed as the sum of _i_ and _j_.
```
#nestedForLoop.py
print("i", "j", "i+j")
for i in range(5):
    for j in range(5):
        val = i + j
        print(i, j, val)
```
If we interpret the results as a table, we can better understand the intuition of for loops. Each row corresponds to a value of _i_, each column to a value of _j_, and each cell holds the sum _i + j_.
| | | | |__j__| | |
| --- | --- | --- | --- | --- | --- | --- |
| | | __0__ | __1__ |__2__|__3__ | __4__ |
| | __0__ | 0 | 1 | 2 | 3 | 4 |
| | __1__ | 1 | 2 | 3 | 4 | 5 |
| __i__ | __2__ | 2 | 3 | 4 | 5 | 6 |
| | __3__ | 3 | 4 | 5 | 6 | 7 |
| | __4__ | 4 | 5 | 6 | 7 | 8 |
## Lists, Lists, and More Lists
| New Concepts | Description |
| --- | --- |
| _min(lst)_ | The function _min()_ returns the lowest value from a list of values passed to it. |
| _max(lst)_ | The function _max()_ returns the highest value from a list of values passed to it. |
| generator expressions i.e., _(val for val in lst)_ |A generator expression uses an inline for loop to produce values one at a time; passing it to _list()_ builds a list. |
Lists have some convenient features. You can find the maximum and minimum values in a list with the _min()_ and _max()_ functions:
```
# minMaxFunctions.py
list1 = [20, 30, 40, 50]
max_list_value = max(list1)
min_list_value = min(list1)
print("maximum:", max_list_value, "minimum:", min_list_value)
```
We could have used a for loop to find these values. The program below performs the same task:
```
#minMaxFuntionsByHand.py
list1 = [20, 30, 40, 50]
# initial smallest value is infinite
# will be replaced if a value from the list is lower
min_list_val = float("inf")
# initial largest values is negative infinite
# will be replaced if a value from the list is higher
max_list_val = float("-inf")
for x in list1:
    if x < min_list_val:
        min_list_val = x
    if x > max_list_val:
        max_list_val = x
print("maximum:", max_list_val, "minimum:", min_list_val)
```
We chose to make the starting value of *min_list_val* infinite and positive and the starting value of *max_list_val* infinite and negative. The for loop cycles through the values in the list and assigns each value, _x_, to *min_list_val* if it is less than the current value of *min_list_val* and to *max_list_val* if it is greater than the current value of *max_list_val*.
Earlier in the chapter, we constructed lists using the _list()_ function and by building lists up with _.append()_ and _.insert()_. We may also use a generator expression or a list comprehension to create a list. These are convenient as they provide a compact means of creating a list that is easy to interpret. A list comprehension follows the same format as a generator expression, but with brackets.
```
#listFromGenerator.py
generator = (i for i in range(20))
print(generator)
list1 = list(generator)
print(list1)
list2 = [2 * i for i in range(20)]
print(list2)
```
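One subtlety worth noting: a generator expression can only be iterated once. After it has been consumed, for example by passing it to _list()_, it is exhausted:

```python
generator = (i for i in range(5))
print(list(generator))  # [0, 1, 2, 3, 4]
print(list(generator))  # [] -- the generator has already been exhausted
```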
### Exercises
1. Create a list of numbers 100 elements in length that counts by 3s - i.e., [3,6,9,12,...]
2. Using the list from question 1, create a second list whose elements are the same values converted to strings. hint: use a for loop and the function str().
3. Using the list from question 2, create a variable that concatenates each of the elements in order of index (Hint: result should be like "36912...").
4. Using .pop() and .append(), create a list whose values are the same as the list from question 1 but in reverse order. (Hint: .pop() removes the last element from a list. The value can be saved, i.e., x = lst.pop().)
5. Using len(), calculate the midpoint of the list from question 1. Pass this midpoint to slice the list so that the resultant copy includes only the second half of the list from question 1.
6. Create a string that includes only every other element, starting from the 0th element, from the string in question 3 while maintaining the order of these elements (Hint: this can be done by using a for loop that counts by twos).
7. Explain the difference between a dynamic list in Python (usually referred to as a list) and a tuple.
### Exploration
1. Use a generator to create a list of the first 100 prime numbers. Include a paragraph explaining how a generator works.
2. Using a for loop and the pop function, create a list of the values from the list of prime numbers whose values are descending from highest to lowest.
3. Using either list of prime numbers, create another list that includes the same numbers but is randomly ordered. Do this without shuffling the initial list (Hint: you will need to import random and import copy). Explain in a paragraph the process that you followed to accomplish the task.
# Pretrained Transformers as Universal Computation Engines Demo
This is a demo notebook illustrating creating a Frozen Pretrained Transformer (FPT) and training on the Bit XOR task, which converges within a couple minutes.
arXiv: https://arxiv.org/pdf/2103.05247.pdf
Github: https://github.com/kzl/universal-computation
```
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from transformers.models.gpt2.modeling_gpt2 import GPT2Model
```
## Creating the dataset
For this demo, we'll look at calculating the elementwise XOR between two randomly generated bitstrings.
If you want to play more with the model, feel free to try larger $n$, although it will take longer to train.
```
def generate_example(n):
    bits = np.random.randint(low=0, high=2, size=(2, n))
    xor = np.logical_xor(bits[0], bits[1]).astype(np.int64)  # np.long was removed from recent NumPy releases
    return bits.reshape((2*n)), xor
n = 5
bits, xor = generate_example(n)
print(' String 1:', bits[:n])
print(' String 2:', bits[n:])
print('Output XOR:', xor)
```
## Creating the frozen pretrained transformer
We simply wrap a pretrained GPT-2 model with linear input and output layers, then freeze the weights of the self-attention and feedforward layers.
You can also see what happens using a randomly initialized model instead.
```
if torch.cuda.is_available():
    device = 'cuda'
else:
    device = 'cpu'
gpt2 = GPT2Model.from_pretrained('gpt2') # loads a pretrained GPT-2 base model
in_layer = nn.Embedding(2, 768) # map bit to GPT-2 embedding dim of 768
out_layer = nn.Linear(768, 2) # predict logits
for name, param in gpt2.named_parameters():
    # freeze all parameters except the layernorm and positional embeddings
    if 'ln' in name or 'wpe' in name:
        param.requires_grad = True
    else:
        param.requires_grad = False
```
## Training loop
We train the model with stochastic gradient descent on the Bit XOR task.
The model should converge within 5000 samples.
```
params = list(gpt2.parameters()) + list(in_layer.parameters()) + list(out_layer.parameters())
optimizer = torch.optim.Adam(params)
loss_fn = nn.CrossEntropyLoss()
for layer in (gpt2, in_layer, out_layer):
    layer.to(device=device)
    layer.train()

accuracies = [0]
while sum(accuracies[-50:]) / len(accuracies[-50:]) < .99:
    x, y = generate_example(n)
    x = torch.from_numpy(x).to(device=device, dtype=torch.long)
    y = torch.from_numpy(y).to(device=device, dtype=torch.long)

    embeddings = in_layer(x.reshape(1, -1))
    hidden_state = gpt2(inputs_embeds=embeddings).last_hidden_state[:, n:]
    logits = out_layer(hidden_state)[0]

    loss = loss_fn(logits, y)
    accuracies.append((logits.argmax(dim=-1) == y).float().mean().item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if len(accuracies) % 500 == 0:
        accuracy = sum(accuracies[-50:]) / len(accuracies[-50:])
        print(f'Samples: {len(accuracies)}, Accuracy: {accuracy}')

print(f'Final accuracy: {sum(accuracies[-50:]) / len(accuracies[-50:])}')
```
## Visualizing attention map
We can visualize the attention map of the first layer: the model learns to attend to the relevant bits for each element in the XOR operation.
Note the two consistent diagonal lines for output tokens 5-9 across samples, denoting each position of either string (the pattern is stronger if the model is allowed to train longer or evaluated on more samples).
```
for layer in (gpt2, in_layer, out_layer):
    layer.eval()

bits, xor = generate_example(n)

with torch.no_grad():
    x = torch.from_numpy(bits).to(device=device, dtype=torch.long)
    embeddings = in_layer(x)
    transformer_outputs = gpt2(
        inputs_embeds=embeddings,
        return_dict=True,
        output_attentions=True,
    )
    logits = out_layer(transformer_outputs.last_hidden_state[n:])
    predictions = logits.argmax(dim=-1).cpu().numpy()

print(' String 1:', bits[:n])
print(' String 2:', bits[n:])
print('Prediction:', predictions)
print('Output XOR:', xor)

attentions = transformer_outputs.attentions[0][0]  # first layer, first in batch
mean_attentions = attentions.mean(dim=0)           # take the mean over heads
mean_attentions = mean_attentions.cpu().numpy()

plt.xlabel('Input Tokens', size=16)
plt.xticks(range(10), bits)
plt.ylabel('Output Tokens', size=16)
plt.yticks(range(10), ['*'] * 5 + list(predictions))
plt.imshow(mean_attentions)
```
## Sanity check
As a sanity check, we can verify that the model solved this task without finetuning the self-attention layers: the XOR was computed using only the connections already present in GPT-2.
```
fresh_gpt2 = GPT2Model.from_pretrained('gpt2')
gpt2.to(device='cpu')
gpt2_state_dict = gpt2.state_dict()
for name, param in fresh_gpt2.named_parameters():
if 'attn' in name or 'mlp' in name:
new_param = gpt2_state_dict[name]
if torch.abs(param.data - new_param.data).sum() > 1e-8:
print(f'{name} was modified')
else:
print(f'{name} is unchanged')
```
<a href="https://colab.research.google.com/github/Scott-Huston/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/LS_DS_121_Join_and_Reshape_Data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Join and Reshape datasets
Objectives
- concatenate data with pandas
- merge data with pandas
- understand tidy data formatting
- melt and pivot data with pandas
Links
- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)
- Combine Data Sets: Standard Joins
- Tidy Data
- Reshaping Data
- Python Data Science Handbook
- [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append
- [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join
- [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping
- [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables
Reference
- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)
- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
## Download data
We’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
```
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd instacart_2017_05_01
# magic command
# stands for change directory
# cd .. moves up a directory
# ls to see what is in a directory
!ls -lh *.csv
# *.csv gets all csv files in directory
# -l gets more info about the file
# h puts it in human readable format
```
# Join Datasets
## Goal: Reproduce this example
The first two orders for user id 1:
```
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'
example = Image(url=url, width=600)
display(example)
```
## Load data
Here's a list of all six CSV filenames:
```
!ls -lh *.csv
```
For each CSV:
- Load it with pandas
- Look at the dataframe's shape
- Look at its head (first rows)
- `display(example)`
- Which columns does it have in common with the example we want to reproduce?
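The checklist above can be wrapped in a small helper (a sketch using a made-up in-memory frame; in the notebook you would call it on each of the six CSVs loaded below):

```python
import pandas as pd

def inspect(df, name):
    # Print the shape and first rows, as the checklist asks
    print(name, df.shape)
    print(df.head())
    return df.shape

# A tiny in-memory frame standing in for one of the CSVs
aisles_toy = pd.DataFrame({'aisle_id': [1, 2],
                           'aisle': ['prepared soups salads', 'specialty cheeses']})
shape = inspect(aisles_toy, 'aisles')
```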
### aisles
```
import pandas as pd
aisles = pd.read_csv('aisles.csv')
print(aisles.shape)
aisles.head()
```
### departments
```
departments = pd.read_csv('departments.csv')
print(departments.shape)
departments.head()
```
### order_products__prior
```
order_products__prior = pd.read_csv('order_products__prior.csv')
print(order_products__prior.shape)
order_products__prior.head(20)
```
Need:
order_id, product_id, and add_to_cart_order
### order_products__train
```
order_products__train = pd.read_csv('order_products__train.csv')
print(order_products__train.shape)
order_products__train.head()
```
### orders
```
orders = pd.read_csv('orders.csv')
print(orders.shape)
orders.head()
```
### products
```
products = pd.read_csv('products.csv')
print(products.shape)
products.head()
# need product_id, product_name
```
## Concatenate order_products__prior and order_products__train
```
order_products = pd.concat([order_products__prior, order_products__train])
print(order_products.shape)
order_products.head()
len(order_products__prior)
```
## Get a subset of orders — the first two orders for user id 1
From `orders` dataframe:
- user_id
- order_id
- order_number
- order_dow
- order_hour_of_day
```
condition = order_products['order_id'] == 2539329
order_products[condition]
condition = (orders['user_id'] == 1) & (orders['order_number'] <=2)
columns = ['order_id', 'user_id', 'order_number', 'order_dow',
'order_hour_of_day']
subset = orders.loc[condition, columns]
subset
```
## Merge dataframes
Merge the subset from `orders` with columns from `order_products`
```
# order_id
# product_id
# add_to_cart_order
columns = ['order_id', 'product_id', 'add_to_cart_order']
merged = pd.merge(subset, order_products[columns], how='inner', on='order_id')
merged
```
Merge with columns from `products`
```
final = pd.merge(merged, products[['product_id', 'product_name']], how='inner', on='product_id')
final
final = final.sort_values(by=['order_number', 'add_to_cart_order'])
final.columns = [column.replace('_', ' ') for column in final]
final
```
# Reshape Datasets
## Why reshape data?
#### Some libraries prefer data in different formats
For example, the Seaborn data visualization library often (but not always) prefers data in "tidy" format.
> "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:
> - Each variable is a column
> - Each observation is a row
>
> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
#### Data science is often about putting square pegs in round holes
Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling!
## Hadley Wickham's Examples
From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
```
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
```
"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild.
The table has two columns and three rows, and both rows and columns are labelled."
```
table1
```
"There are many ways to structure the same underlying data.
Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
```
table2
```
"Table 3 reorganises Table 1 to make the values, variables and observations more clear.
Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."
| name | trt | result |
|--------------|-----|--------|
| John Smith | a | - |
| Jane Doe | a | 16 |
| Mary Johnson | a | 3 |
| John Smith | b | 2 |
| Jane Doe | b | 11 |
| Mary Johnson | b | 1 |
## Table 1 --> Tidy
We can use the pandas `melt` function to reshape Table 1 into Tidy format.
```
table1 = table1.reset_index()
table1
tidy = table1.melt(id_vars='index')
tidy.columns = ['name', 'trt', 'result']
tidy
```
## Table 2 --> Tidy
```
table2
table2 = table2.reset_index()
table2
tidy2 = table2.melt(id_vars='index')
tidy2
tidy2.columns = ['trt', 'name', 'result']
tidy2
```
## Tidy --> Table 1
The `pivot_table` function is the inverse of `melt`.
```
tidy.pivot_table(index='name', columns='trt', values='result')
```
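On a toy frame (made-up data), the melt --> pivot_table round trip recovers the original values:

```python
import pandas as pd

table = pd.DataFrame({'name': ['a', 'b'], 'x': [1, 2], 'y': [3, 4]})
tidy_toy = table.melt(id_vars='name', var_name='var', value_name='val')
restored = (tidy_toy.pivot_table(index='name', columns='var', values='val')
                    .reset_index())
restored.columns.name = None  # drop the leftover axis name from the pivot
```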
## Tidy --> Table 2
```
##### LEAVE BLANK --an assignment exercise #####
tidy2
tidy2.pivot_table(index = 'name', columns = 'trt', values = 'result')
```
# Seaborn example
The rules can be simply stated:
- Each variable is a column
- Each observation is a row

A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot.
```
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=2);
```
## Now with Instacart data
```
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
```
## Goal: Reproduce part of this example
Instead of a plot with 50 products, we'll just do two — the first products from each list
- Half And Half Ultra Pasteurized
- Half Baked Frozen Yogurt
```
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
```
So, given a `product_name` we need to calculate its `order_hour_of_day` pattern.
## Subset and Merge
One challenge of merging this data is that the `products` and `orders` datasets have no common columns to merge on. Because of this, we will use the `order_products` dataset as the bridge: it shares `product_id` with `products` and `order_id` with `orders`.
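A minimal sketch of this two-step merge, with tiny made-up frames standing in for the real tables (`order_products` shares `product_id` with `products` and `order_id` with `orders`):

```python
import pandas as pd

products_toy = pd.DataFrame({'product_id': [1, 2],
                             'product_name': ['Milk', 'Eggs']})
order_products_toy = pd.DataFrame({'order_id': [10, 11],
                                   'product_id': [1, 2]})
orders_toy = pd.DataFrame({'order_id': [10, 11],
                           'order_hour_of_day': [9, 20]})

# order_products acts as the bridge between the other two tables
bridged = (products_toy
           .merge(order_products_toy, on='product_id')
           .merge(orders_toy, on='order_id'))
```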
```
products.columns.tolist()
orders.columns.tolist()
order_products.columns.tolist()
merged = (products[['product_id', 'product_name']]
.merge(order_products[['order_id', 'product_id']])
.merge(orders[['order_id', 'order_hour_of_day']]))
products.shape, order_products.shape, orders.shape, merged.shape
# What condition will filter `merged` to just the 2 products
# that we care about?
# This is equivalent ...
condition = ((merged['product_name']=='Half Baked Frozen Yogurt') |
(merged['product_name']=='Half And Half Ultra Pasteurized'))
merged = merged[condition]
# ... to this:
product_names = ['Half Baked Frozen Yogurt', 'Half And Half Ultra Pasteurized']
condition = merged['product_name'].isin(product_names)
subset = merged[condition]
subset
```
## 4 ways to reshape and plot
### 1. value_counts
```
froyo = subset[subset['product_name']=='Half Baked Frozen Yogurt']
cream = subset[subset['product_name']=='Half And Half Ultra Pasteurized']
(cream['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot())
(froyo['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot());
```
### 2. crosstab
```
(pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize='columns') * 100).plot();
```
### 3. Pivot Table
```
subset.pivot_table(index='order_hour_of_day',
columns='product_name',
values='order_id',
aggfunc=len).plot();
```
### 4. melt
```
table = pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize=True)
melted = (table
.reset_index()
.melt(id_vars='order_hour_of_day')
.rename(columns={
'order_hour_of_day': 'Hour of Day Ordered',
'product_name': 'Product',
'value': 'Percent of Orders by Product'
}))
sns.relplot(x='Hour of Day Ordered',
y='Percent of Orders by Product',
hue='Product',
data=melted,
kind='line');
```
# Assignment
## Join Data Section
These are the top 10 most frequently ordered products. How many times was each ordered?
1. Banana
2. Bag of Organic Bananas
3. Organic Strawberries
4. Organic Baby Spinach
5. Organic Hass Avocado
6. Organic Avocado
7. Large Lemon
8. Strawberries
9. Limes
10. Organic Whole Milk
First, write down which columns you need and which dataframes have them.
Next, merge these into a single dataframe.
Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
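The pattern those steps describe, merging product names onto order rows and then counting, looks like this on synthetic data (a sketch, not the full solution):

```python
import pandas as pd

products_toy = pd.DataFrame({'product_id': [1, 2],
                             'product_name': ['Banana', 'Limes']})
order_products_toy = pd.DataFrame({'order_id': [10, 11, 12],
                                   'product_id': [1, 1, 2]})

# One row per ordered product, then count occurrences of each name
counts = (products_toy
          .merge(order_products_toy, on='product_id')
          ['product_name']
          .value_counts())
```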
## Reshape Data Section
- Replicate the lesson code
- Complete the code cells we skipped near the beginning of the notebook
- Table 2 --> Tidy
- Tidy --> Table 2
- Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
```
# Join Data Section
products = pd.read_csv('products.csv')
products.head()
order_products.head()
# Filtering the products data before merge
condition = ((products['product_name']=='Banana') |
(products['product_name']=='Bag of Organic Bananas') |
(products['product_name']=='Organic Strawberries') |
(products['product_name']=='Organic Baby Spinach') |
(products['product_name']=='Organic Hass Avocado') |
(products['product_name']=='Organic Avocado') |
(products['product_name']=='Large Lemon') |
(products['product_name']=='Strawberries') |
(products['product_name']=='Limes') |
(products['product_name']=='Organic Whole Milk'))
products = products[condition]
products
# Filtering the order_products data before merge
condition = order_products['product_id'].isin(products['product_id'])
order_products = order_products[condition]
# Checking to verify it worked
order_products.head(20)
order_products['product_id'].value_counts()
# Merging only a few columns
product_columns = ['product_name', 'product_id']
order_products_columns = ['product_id', 'order_id']
merged = pd.merge(products[product_columns], order_products[order_products_columns], how='inner', on='product_id')
merged.head()
# Here are the final counts for each product name
merged.product_name.value_counts()
import seaborn as sns
flights = sns.load_dataset('flights')
flights.head()
flights.pivot_table(index = 'year',
columns = 'month',
values = 'passengers')
```
## Join Data Stretch Challenge
The [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)."
The post says,
> "We can also see the time of day that users purchase specific products.
> Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.
> **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"
Your challenge is to reproduce the list of the top 25 latest ordered popular products.
We'll define "popular products" as products with more than 2,900 orders.
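One way to rank products by how late in the day they tend to be ordered, sketched on made-up data (the real solution would first filter to products with more than 2,900 orders):

```python
import pandas as pd

toy = pd.DataFrame({'product_name': ['Ice Cream', 'Ice Cream', 'Granola', 'Granola'],
                    'order_hour_of_day': [21, 23, 7, 9]})

# Mean ordering hour per product, latest first
mean_hour = (toy.groupby('product_name')['order_hour_of_day']
                .mean()
                .sort_values(ascending=False))
latest = mean_hour.index[0]
```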
## Reshape Data Stretch Challenge
_Try whatever sounds most interesting to you!_
- Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"
- Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"
- Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)
- Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
```
# Join Data Stretch Challenge
import pandas as pd
aisles = pd.read_csv('aisles.csv')
departments = pd.read_csv('departments.csv')
order_products__prior = pd.read_csv('order_products__prior.csv')
order_products__train = pd.read_csv('order_products__train.csv')
orders = pd.read_csv('orders.csv')
products = pd.read_csv('products.csv')
order_products = pd.concat([order_products__prior, order_products__train])
order_products.head()
order_products.product_id.value_counts()
condition = order_products.product_id.value_counts()
products.head()
orders.head()
```
# An agent-based model of social support
*Joël Foramitti, 10.02.2022*
This notebook introduces a simple agent-based model to explore the propagation of social support through a population.
```
import agentpy as ap
import networkx as nx
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme()
```
The agents of the model have one variable `support` which indicates their support for a particular cause.
At every time-step, an agent interacts with their friends as well as some random encounters.
The higher the perceived support amongst their encounters, the higher the likelihood that the agent will also support the cause.
```
class Individual(ap.Agent):
def setup(self):
# Initiate a variable support
# 0 indicates no support, 1 indicates support
self.support = 0
def adapt_support(self):
# Perceive average support amongst friends and random encounters
random_encounters = self.model.agents.random(self.p.random_encounters)
all_encounters = self.friends + random_encounters
perceived_support = sum(all_encounters.support) / len(all_encounters)
# Adapt own support based on random chance and perceived support
random_draw = self.model.random.random() # Draw between 0 and 1
self.support = 1 if random_draw < perceived_support else 0
```
At the start of the simulation, the model initiates a population of agents, defines a random network of friendships between these agents, and chooses a random share of agents to be the initial supporters of the cause.
At every simulation step, agents change their support and the share of supporters is recorded.
At the end of the model, the cause is designated a success if all agents support it.
```
class SupportModel(ap.Model):
def setup(self):
# Initiating agents
self.agents = ap.AgentList(self, self.p.n_agents, Individual)
# Setting up friendships
graph = nx.watts_strogatz_graph(
self.p.n_agents,
self.p.n_friends,
self.p.network_randomness)
self.network = self.agents.network = ap.Network(self, graph=graph)
self.network.add_agents(self.agents, self.network.nodes)
for a in self.agents:
a.friends = self.network.neighbors(a).to_list()
# Setting up initial supporters
initial_supporters = int(self.p.initial_support * self.p.n_agents)
for a in self.agents.random(initial_supporters):
a.support = 1
def step(self):
# Let every agent adapt their support
self.agents.adapt_support()
def update(self):
# Record the share of supporters at each time-step
self.supporter_share = sum(self.agents.support) / self.p.n_agents
self.record('supporter_share')
def end(self):
# Report the success of the social movement
# at the end of the simulation
self.success = 1 if self.supporter_share == 1 else 0
self.model.report('success')
```
For the generation of the network graph, we will use the [Watts-Strogatz model](https://en.wikipedia.org/wiki/Watts%E2%80%93Strogatz_model). This algorithm generates a regular network where every agent has the same number of connections, and then introduces a certain amount of randomness by rewiring some of these connections. A network where most agents are not neighbors, but where every other agent can be reached in a small number of steps, is called a [small-world network](https://en.wikipedia.org/wiki/Small-world_network).
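The rewiring behaviour is easy to check with `networkx` directly (a sketch; with rewiring probability 0 the result is a pure ring lattice where every node has exactly `k` neighbors):

```python
import networkx as nx

# n=20 agents, each wired to its k=4 nearest neighbours, no rewiring
ring = nx.watts_strogatz_graph(n=20, k=4, p=0)
degrees = [d for _, d in ring.degree()]

# With p > 0, some lattice edges are replaced by random shortcuts,
# which shortens paths and produces the small-world effect
small_world = nx.watts_strogatz_graph(n=20, k=4, p=0.5, seed=42)
```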
<img src="networks.png" alt="drawing" width="600"/>
## A single-run simulation
```
parameters = {
'steps': 100,
'n_agents': 100,
'n_friends': 2,
'network_randomness': 0.5,
'initial_support': 0.5,
'random_encounters': 1
}
model = SupportModel(parameters)
results = model.run()
success = 'Yes' if model.success else 'No'
print(f'Success: {success}')
ax = results.variables.SupportModel.plot()
```
## A multi-run experiment
```
sample_parameters = {
'steps': 100,
'n_agents': 100,
'n_friends': 2,
'network_randomness': 0.5,
'initial_support': ap.Range(0, 1),
'random_encounters': 1
}
sample = ap.Sample(sample_parameters, n=50)
exp = ap.Experiment(SupportModel, sample, iterations=50)
results = exp.run()
ax = sns.lineplot(
data=results.arrange_reporters(),
x='initial_support',
y='success'
)
ax.set_xlabel('Initial share of supporters')
ax.set_ylabel('Chances of success');
```
## Questions for discussion
- What happens under different parameter values?
- How does this model compare to real-world dynamics?
- What false conclusions could be made from this model?
- How could the model be improved or extended?
```
!wget https://datahack-prod.s3.amazonaws.com/train_file/train_LZdllcl.csv -O train.csv
!wget https://datahack-prod.s3.amazonaws.com/test_file/test_2umaH9m.csv -O test.csv
!wget https://datahack-prod.s3.amazonaws.com/sample_submission/sample_submission_M0L0uXE.csv -O sample_submission.csv
import numpy as np
import pandas as pd
#keras
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.optimizers import RMSprop
from keras.optimizers import Adam
from keras.layers import BatchNormalization
from keras.callbacks import ModelCheckpoint
from keras.callbacks import EarlyStopping
from keras.callbacks import ReduceLROnPlateau
#sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import f1_score
#visualisation
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
import warnings
warnings.filterwarnings('ignore')
from keras import backend as K
def f1(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
train = pd.read_csv('train.csv')
train.drop('employee_id',inplace=True,axis = 1)
train.head()
train.dtypes
train.isnull().sum()
train.nunique()
train['education'].fillna('other',inplace=True)
train['previous_year_rating'].fillna(99,inplace=True)
train.isnull().sum()
train['is_promoted'].value_counts(normalize = True)
train.shape
y = train['is_promoted']
train = train.drop(['is_promoted'],axis = 1)
train.head()
train = pd.get_dummies(train)
train.head()
X_train, X_valid, y_train, y_valid = train_test_split(train,y,test_size=0.15)
print('Xtrain shape',X_train.shape)
print('Xvalid shape',X_valid.shape)
print('ytrain shape',y_train.shape)
print('yvalid shape',y_valid.shape)
model = Sequential()
model.add(Dense(64, input_dim=59, kernel_initializer='normal', activation='relu'))  # input_dim must match X_train.shape[1] after get_dummies
model.add(Dense(128,kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Dense(256,kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Dense(128,kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Dense(64,kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
rmsprop = RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.0)
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
checkpointer = ModelCheckpoint(filepath='best_weights.hdf5', verbose=1, save_best_only=True)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,patience=3, min_lr=0.00001,verbose = 1)
early_stopping = EarlyStopping(monitor='val_loss',min_delta=0.0001, patience=5,verbose=1)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[f1])
model.summary()
model_info = model.fit(X_train,y_train,epochs=50,batch_size=32,validation_data=(X_valid,y_valid),verbose = 1,callbacks=[checkpointer,reduce_lr,early_stopping])
```
# Distributed data-parallel BERT model training with TensorFlow 2 and SMDataParallel
SMDataParallel is a new capability in Amazon SageMaker for training deep learning models faster and more cheaply. SMDataParallel is a distributed data-parallel training framework for PyTorch, TensorFlow, and MXNet.
This notebook example shows how to use SMDataParallel with TensorFlow (version 2.3.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model, using an [Amazon FSx for Lustre](https://aws.amazon.com/fsx/lustre/) file system as the data source.
The outline of this example is as follows:
1. Prepare the dataset in [Amazon S3](https://aws.amazon.com/s3/). The original dataset for BERT pretraining consists of text passages from BooksCorpus (800M words) (Zhu et al. 2015) and English Wikipedia (2,500M words). Prepare the training data in hdf5 format following NVIDIA's original instructions:
https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data
2. Create an Amazon FSx for Lustre file system and import the data from S3 into the file system.
3. Build a Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/).
4. Configure the data input channels for SageMaker.
5. Set the hyperparameters.
6. Define the training metrics.
7. Define the training job, set the distribution strategy to SMDataParallel, and start training.
**Note:** For large training datasets, we recommend using [Amazon FSx](https://aws.amazon.com/fsx/) as the input file system for SageMaker training jobs. FSx file input to SageMaker avoids re-downloading the training data every time a SageMaker training job starts (as happens with S3 input) and provides good data-read throughput, significantly reducing training start times on SageMaker.
**Note:** This example requires SageMaker Python SDK v2.X.
## Initialize Amazon SageMaker
Initialize the notebook instance and get the AWS region and SageMaker execution role.
The IAM role ARN is used to grant training and hosting access to your data. See [Amazon SageMaker roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create one. If you need more than one role for notebook instances, training, and hosting, replace `sagemaker.get_execution_role()` with the appropriate full IAM role ARN string. Since we will use FSx as described above, make sure this IAM role has `FSx Access` permission attached.
```
%%time
! python3 -m pip install --upgrade sagemaker
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
import boto3
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
```
## Prepare the SageMaker training image
1. By default, SageMaker uses the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) TensorFlow training image. In this step, we use it as the base image and install the additional dependency packages required for BERT model training.
2. We have made the TensorFlow2-SMDataParallel BERT training script available in the GitHub repository https://github.com/HerringForks/DeepLearningExamples.git. The repository is cloned inside the training image to run model training.
### Build the Docker image and push it to ECR
Run the commands below to build the Docker image and push it to ECR.
```
image = "<IMAGE_NAME>" # Example: tf2-smdataparallel-bert-sagemaker
tag = "<IMAGE_TAG>" # Example: latest
!pygmentize ./Dockerfile
!pygmentize ./build_and_push.sh
%%time
! chmod +x build_and_push.sh; bash build_and_push.sh {region} {image} {tag}
```
## Prepare FSx input data for SageMaker
1. Download and prepare the training dataset in S3.
2. Create an FSx file system linked to the S3 bucket containing the training data, following the steps listed here: https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html. You must add an endpoint to your VPC that allows S3 access.
3. Configure your SageMaker training job to use FSx, following the steps listed here: https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/
### Important notes
1. When launching the SageMaker notebook instance, use the same `subnet`, `vpc`, and `security group` as your FSx file system. The same configuration will be used for the SageMaker training job.
2. Make sure you set up the appropriate inbound/outbound rules in the security group. In particular, these ports must be open for SageMaker to access the FSx file system from the training job: https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html
3. Make sure the `SageMaker IAM role` used to launch this training job has access to `AmazonFSx`.
## SageMaker TensorFlow Estimator function options
In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You also pass the training script reviewed in the previous code cell to the estimator function.
**Instance types**
SMDataParallel supports model training on SageMaker with the following instance types only:
1. ml.p3.16xlarge
1. ml.p3dn.24xlarge [recommended]
1. ml.p4d.24xlarge [recommended]
**Instance count**
To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.
**Distribution strategy**
To use DDP mode, update the `distribution` strategy to enable `smdistributed dataparallel`.
### Training script
We have made the reference TensorFlow-SMDataParallel BERT training script available in the GitHub repository https://github.com/HerringForks/deep-learning-models.git. Clone the repository.
```
# Clone herring forks repository for reference implementation BERT with TensorFlow2-SMDataParallel
!rm -rf deep-learning-models
!git clone --recursive https://github.com/HerringForks/deep-learning-models.git
from sagemaker.tensorflow import TensorFlow
instance_type = "ml.p3dn.24xlarge" # Other supported instance type: ml.p3.16xlarge, ml.p4d.24xlarge
instance_count = 2 # You can use 2, 4, 8 etc.
docker_image = f"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}" # YOUR_ECR_IMAGE_BUILT_WITH_ABOVE_DOCKER_FILE
username = 'AWS'
subnets = ['<SUBNET_ID>'] # Should be same as Subnet used for FSx. Example: subnet-0f9XXXX
security_group_ids = ['<SECURITY_GROUP_ID>'] # Should be same as Security group used for FSx. sg-03ZZZZZZ
job_name = 'smdataparallel-bert-tf2-fsx-2p3dn' # Used as a prefix for the SageMaker training job; makes it easy to find your job in the SageMaker Training console.
file_system_id = '<FSX_ID>' # FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY'
SM_DATA_ROOT = '/opt/ml/input/data/train'
hyperparameters={
"train_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/train/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"val_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/validation/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"log_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert/logs']),
"checkpoint_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert']),
"load_from": "scratch",
"model_type": "bert",
"model_size": "large",
"per_gpu_batch_size": 64,
"max_seq_length": 128,
"max_predictions_per_seq": 20,
"optimizer": "lamb",
"learning_rate": 0.005,
"end_learning_rate": 0.0003,
"hidden_dropout_prob": 0.1,
"attention_probs_dropout_prob": 0.1,
"gradient_accumulation_steps": 1,
"learning_rate_decay_power": 0.5,
"warmup_steps": 2812,
"total_steps": 2000,
"log_frequency": 10,
"run_name" : job_name,
"squad_frequency": 0
}
estimator = TensorFlow(entry_point='albert/run_pretraining.py',
role=role,
image_uri=docker_image,
source_dir='deep-learning-models/models/nlp',
framework_version='2.3.1',
py_version='py3',
instance_count=instance_count,
instance_type=instance_type,
sagemaker_session=sagemaker_session,
subnets=subnets,
hyperparameters=hyperparameters,
security_group_ids=security_group_ids,
debugger_hook_config=False,
# Training using SMDataParallel Distributed Training Framework
distribution={'smdistributed':{
'dataparallel':{
'enabled': True
}
}
}
)
# Configure FSx Input for your SageMaker Training job
from sagemaker.inputs import FileSystemInput
# YOUR_MOUNT_PATH_FOR_TRAINING_DATA. NOTE: '/fsx/' will be the root mount path. Example: '/fsx/albert'
file_system_directory_path='<FSX_DIRECTORY_PATH>'
file_system_access_mode='rw'
file_system_type='FSxLustre'
train_fs = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=file_system_directory_path,
file_system_access_mode=file_system_access_mode)
data_channels = {'train': train_fs}
# Submit SageMaker training job
estimator.fit(inputs=data_channels, job_name=job_name)
```
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
```
### Read in json line file
```
board_data = []
with open('company-officers-v2.json') as f:
    for line in f:
        board_data.append(json.loads(line))
board_data[1]
```
### Read out board member information
and write into json file
```
simple_fields = ['name', 'appointed_on', 'nationality', 'occupation',
                 'officer_role', 'resigned_on']

# Open the output file once instead of re-opening it for every person
with open('board_members.json', 'a') as f:
    for record in board_data:
        company = record['company_number']
        for person in record['data']['items']:
            new_person = {'company_number': company}
            for field in simple_fields:
                new_person[field] = person.get(field)
            # date_of_birth is nested one level deeper than the other fields
            new_person['year_of_birth'] = person.get('date_of_birth', {}).get('year')
            f.write(json.dumps(new_person))
            f.write('\n')
# Read the json file back in
board_members = pd.read_json('board_members.json', lines=True)
board_members.head()
board_members.year_of_birth.unique()
# Some name processing
names = board_members['name'].str.split(',', expand=True)
names.head()
surname = names.iloc[:,0]
title = names.iloc[:,2]
second_title = names.iloc[:,3]
first_names = names.iloc[:,1].str.split(' ', expand=True)
first_names.head()
first_name = first_names.iloc[:,1]
middle_name_1 = first_names.iloc[:,2]
middle_name_2 = first_names.iloc[:,3]
middle_name_3 = first_names.iloc[:,4]
board_members['surname'] = surname
board_members['first_name'] = first_name
board_members['middle_name_1'] = middle_name_1
board_members['middle_name_2'] = middle_name_2
board_members['middle_name_3'] = middle_name_3
board_members['title'] = title
board_members['second_title'] = second_title
board_members.head()
board_members.to_csv('Board_members.csv', sep=',')
```
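As an aside, this kind of nested extraction can also be sketched with `pandas.json_normalize`, which flattens the nested `items` lists in one call. This is an illustrative alternative using hypothetical records shaped like the input file, not the notebook's actual data:

```python
import pandas as pd

# Hypothetical records with the same shape as the company-officers input
records = [
    {'company_number': '001',
     'data': {'items': [
         {'name': 'DOE, Jane', 'nationality': 'British',
          'date_of_birth': {'year': 1970}},
         {'name': 'SMITH, John'},   # some fields may be missing
     ]}},
]

frames = []
for record in records:
    # record_path walks into the nested list of officers;
    # nested dicts such as date_of_birth are flattened to dotted columns
    df = pd.json_normalize(record['data'], record_path='items')
    df['company_number'] = record['company_number']
    frames.append(df)

board = pd.concat(frames, ignore_index=True)
print(board.columns.tolist())
```

Missing fields simply come out as `NaN`, so no explicit `None` handling is needed.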
---
# Density Tree for N-dimensional data and labels
The code below implements a **density tree** for unlabelled data.
## Libraries
First, some libraries are loaded and global figure settings are made for exporting.
```
import numpy as np
import matplotlib.pyplot as plt
import os
from IPython.core.display import Image, display
# Custom Libraries
from density_tree.density_forest import *
from density_tree.density_tree_create import *
from density_tree.density_tree_traverse import *
from density_tree.create_data import *
from density_tree.helpers import *
from density_tree.plots import *
```
# Generate Data
First, let's generate some unlabelled data:
```
dimensions = 2
nclusters = 5
covariance = 10
npoints = 100
minRange = 10
maxRange = 100
dataset = create_data(nclusters, dimensions, covariance, npoints, minrange=minRange, maxrange=maxRange,
                      labelled=False, random_flip=True, nonlinearities=True)

if dimensions == 2:
    fig, ax = plt.subplots(1, 1)
    fig.set_size_inches(8, 6)
    plot_data(dataset, "Unlabelled data", ax, labels=False)
    plt.savefig("../Figures/unlabelled-data.pdf", bbox_inches='tight', pad_inches=0)
    plt.show()
```
#### Create single Density Tree
```
import warnings
warnings.filterwarnings("ignore")
root = create_density_tree(dataset, dimensions=dimensions, clusters=nclusters)
def get_values_preorder(node, cut_dims, cut_vals):
    cut_dims.append(node.split_dimension)
    cut_vals.append(node.split_value)
    if node.left is not None:
        get_values_preorder(node.left, cut_dims, cut_vals)
    if node.right is not None:
        get_values_preorder(node.right, cut_dims, cut_vals)
    return cut_vals, cut_dims

cut_vals, cut_dims = get_values_preorder(root, [], [])
cut_vals = np.asarray(cut_vals).astype(float)
cut_dims = np.asarray(cut_dims).astype(int)
x_split = cut_vals[cut_dims == 0]
y_split = cut_vals[cut_dims == 1]
if dimensions == 2:
    fig, ax = plt.subplots(1, 1)
    plot_data(dataset, "Training data after splitting", ax, labels=False, lines_x=x_split, lines_y=y_split,
              minrange=minRange, maxrange=maxRange, covariance=covariance)
%clear
plt.show()
print(cut_dims, cut_vals)
```
# Printing the Tree
```
def tree_visualize(root):
    tree_string = ""
    tree_string = print_density_tree_latex(root, tree_string)
    os.system("cd ../Figures; rm main.tex; more main_pt1.tex >> density-tree.tex; echo '' >> density-tree.tex;")
    os.system("cd ../Figures; echo '" + tree_string + "' >> density-tree.tex; more main_pt2.tex >> density-tree.tex;")
    os.system("cd ../Figures; /Library/TeX/texbin/pdflatex density-tree.tex; convert -density 300 -trim density-tree.pdf -quality 100 density-tree.png")
    os.system("cd ../Figures; rm *.aux *.log")
    display(Image('../Figures/density-tree.png', retina=True))

tree_visualize(root)
```
#### Showing all Clusters Covariances
```
covs, means = get_clusters(root, [], [])

if dimensions == 2:
    fig, ax = plt.subplots(1, 1)
    fig.set_size_inches(8, 6)
    plot_data(dataset, "Unlabelled data", ax, labels=False, covs=covs, means=means,
              minrange=minRange, maxrange=maxRange, covariance=covariance)
    plt.savefig("../Figures/unlabelled-data-cov.pdf", bbox_inches='tight', pad_inches=0)
    plt.show()

print(cut_dims, cut_vals)
```
#### Descend tree (predict "label")
```
# for all points
probas = []
probas_other = []
for d in dataset:
    # descend the tree to the leaf containing this point
    d_mean, d_cov, d_pct = descend_density_tree(d, root)
    # density of this point under the leaf's Gaussian, weighted by cluster size
    probas.append(multivariate_normal.pdf(d, d_mean, d_cov) * d_pct)
    for i in range(nclusters):
        probas_other.append(multivariate_normal.pdf(d, means[i], covs[i]) * d_pct)

print("Mean density under the leaf-node cluster: %.5f" % np.mean(probas))
print("Mean density under an arbitrary cluster: %.5f" % np.mean(probas_other))
```
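The library's `descend_density_tree` is not shown here, but the descent it performs can be sketched generically: walk down from the root, comparing the point's coordinate in the node's split dimension against the split value, until a leaf Gaussian (mean, covariance, point fraction) is reached. Below is a self-contained toy stand-in; the node attribute names are assumptions modelled on those used earlier in this notebook:

```python
class ToyNode:
    """Minimal density-tree node: internal nodes split, leaves hold a Gaussian."""
    def __init__(self, split_dimension=None, split_value=None,
                 left=None, right=None, mean=None, cov=None, pct=None):
        self.split_dimension = split_dimension
        self.split_value = split_value
        self.left = left
        self.right = right
        self.mean = mean   # leaf only: cluster mean
        self.cov = cov     # leaf only: cluster covariance
        self.pct = pct     # leaf only: fraction of training points in the leaf

def descend(point, node):
    """Follow the splits down to the leaf whose region contains `point`."""
    while node.left is not None:   # internal nodes have two children here
        if point[node.split_dimension] < node.split_value:
            node = node.left
        else:
            node = node.right
    return node.mean, node.cov, node.pct

# Tiny hand-built tree: one split on dimension 0 at x = 50
leaf_lo = ToyNode(mean=[20.0, 20.0], cov=[[10.0, 0.0], [0.0, 10.0]], pct=0.4)
leaf_hi = ToyNode(mean=[80.0, 80.0], cov=[[10.0, 0.0], [0.0, 10.0]], pct=0.6)
toy_root = ToyNode(split_dimension=0, split_value=50.0, left=leaf_lo, right=leaf_hi)

mean, cov, pct = descend([25.0, 30.0], toy_root)
print(mean, pct)
```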
#### Density Forest
```
root_nodes = density_forest_create(dataset, dimensions, nclusters, 100, .3, -1)
probas = density_forest_traverse(dataset, root_nodes)
# mean probability of all points to belong to the cluster in the root node
print(np.mean(probas))
```
---
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.insert(0, '../')
sys.path.append('/home/arya_03/.envs/objdet/lib/python2.7/site-packages/')
import matplotlib
matplotlib.use('Agg')
from __future__ import division
import os
import numpy as np
import pandas as pd
from skimage.io import imread
import cv2
from pandas import read_csv
import matplotlib.pyplot as plt
import glob
%matplotlib inline
```
## Visualization
```
from evaluate import load_network
import cPickle as pickle
from models import *
from utils import get_file_list, create_fixed_image_shape
from nolearn.lasagne import BatchIterator
from plotting import plot_face_bb
# from lazy_batch_iterator import LazyBatchIterator
import skimage
from sklearn.metrics import confusion_matrix, accuracy_score
from mpl_toolkits.axes_grid1 import AxesGrid
train_folder = '../train/'
fnames, bboxes = get_file_list(train_folder)
def prepare_test_img(shape, img=None, name=None):
    from skimage.io import imread
    im = None
    if img is not None:
        im = img
    elif name is not None:
        im = imread(name)
    inp = create_fixed_image_shape(im, shape, mode='fit')
    return inp
```
### Option 1 - Load weights
```
print NET_CONFIGS
nnet = load_network("/home/arya_03/work/face_detection_gtx/model_run3_smoothl1_model16_scale_trans_3/model_28.pkl",
"config_4c_1234_3d_smoothl1_lr_step",
"BatchIterator")
```
### Option 2 - Full network
### Evaluate result on the test images
```
def plot_imgs_grid(Z, ncols=10):
    num = len(Z)
    fig = plt.figure(1, (80, 80))
    fig.subplots_adjust(left=0.05, right=0.95)
    grid = AxesGrid(fig, (1, 4, 2),  # similar to subplot(142)
                    nrows_ncols=(int(np.ceil(float(num) / ncols)), ncols),
                    axes_pad=0.04,
                    share_all=True,
                    label_mode="L",
                    )
    for i in range(num):
        im = grid[i].imshow(Z[i], interpolation="nearest")
    for i in range(grid.ngrids):
        grid[i].axis('off')
    for cax in grid.cbar_axes:
        cax.toggle_label(False)
# Predict and plot the bounding boxes for images in test_imgs/ folder
outs = []
for fname in glob.glob(os.path.join('../test_imgs', '*.jpg')):
    proc_img = prepare_test_img((256, 256, 3), name=fname)
    img = np.transpose(proc_img, [2, 0, 1])
    preds = nnet.predict([img])
    outs.append(plot_face_bb(proc_img, preds[0], path=False, plot=False))
```
```
# run3_3 model_28
plot_imgs_grid(outs, 5)
# run3_3 model_1
plot_imgs_grid(outs, 5)
# run3_2 model_2
plot_imgs_grid(outs, 5)
# run3 model_25
plot_imgs_grid(outs, 5)
# run2 model_25
plot_imgs_grid(outs, 5)
# run3 model_20
plot_imgs_grid(outs, 5)
# run3 model_19
plot_imgs_grid(outs, 5)
# run2 model_19
plot_imgs_grid(outs, 5)
# model_15
plot_imgs_grid(outs, 5)
```
### Visualize weights
```
pp = nnet.get_all_params_values()
for k in pp.keys():
    if 'conv' in k:
        for w in pp[k]:
            if w.ndim <= 2:
                continue
            for i in range(w.shape[0]):
                plt.figure(figsize=(1, 1))
                plt.axis('off')
                plt.imshow(w[i, :, :, :].mean(axis=0), interpolation='nearest')
        break
def plot_weight_matrix(Z, k):
    num = Z.shape[0]
    fig = plt.figure(1, (30, 30))
    fig.subplots_adjust(left=0.05, right=0.95)
    grid = AxesGrid(fig, (1, 4, 2),  # similar to subplot(142)
                    nrows_ncols=(int(np.ceil(num / 10.)), 10),
                    axes_pad=0.04,
                    share_all=True,
                    label_mode="L",
                    )
    for i in range(num):
        im = grid[i].imshow(Z[i, :, :, :].mean(axis=0), cmap='coolwarm')
    for i in range(grid.ngrids):
        grid[i].axis('off')
    for cax in grid.cbar_axes:
        cax.toggle_label(False)

plot_weight_matrix(pp['conv1_1'][0], 'test')
```
## Plot x-y-scale for the faces used for training
```
from mpl_toolkits.mplot3d import Axes3D
df = read_csv('../aug.csv')
df.head()
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df['r'][:4000], df['c'][:4000], df['width'][:4000], c='none', facecolors='none', edgecolors='r')
df.describe()
%matplotlib inline
plt.hist2d(df['r'], df['c'], bins=30);
```
---
# Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) of hand-written digits.
The process will be broken down into the following steps:
>1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset!
Before we begin, we have to import the necessary libraries for working with data and PyTorch.
```
# import libraries
import torch
import numpy as np
```
---
## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.
This cell will create DataLoaders for each of our datasets.
```
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
```
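With `valid_size = 0.2` and MNIST's 60,000 training images, the shuffled-index split above reserves 12,000 samples for validation and 48,000 for training; a quick standalone check of the arithmetic:

```python
import numpy as np

num_train = 60000            # size of the MNIST training set
valid_size = 0.2             # fraction held out for validation

indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]

print(len(train_idx), len(valid_idx))  # 48000 12000
```

The two samplers then draw disjoint index sets from the same underlying dataset.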
### Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
```
### View an Image in More Detail
```
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
    for y in range(height):
        val = round(img[x][y], 2) if img[x][y] != 0 else 0
        ax.annotate(str(val), xy=(y, x),
                    horizontalalignment='center',
                    verticalalignment='center',
                    color='white' if img[x][y] < thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # number of hidden nodes in each layer (512)
        hidden_1 = 512
        hidden_2 = 512
        # linear layer (784 -> hidden_1)
        self.fc1 = nn.Linear(28 * 28, hidden_1)
        # linear layer (hidden_1 -> hidden_2)
        self.fc2 = nn.Linear(hidden_1, hidden_2)
        # linear layer (hidden_2 -> 10)
        self.fc3 = nn.Linear(hidden_2, 10)
        # dropout layer (p=0.2)
        # dropout prevents overfitting of data
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 28 * 28)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        # add dropout layer
        x = self.dropout(x)
        # add hidden layer, with relu activation function
        x = F.relu(self.fc2(x))
        # add dropout layer
        x = self.dropout(x)
        # add output layer (raw class scores)
        x = self.fc3(x)
        return x
# initialize the NN
model = Net()
print(model)
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer *and* then calculates the log loss.
```
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer (stochastic gradient descent) and learning rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
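To make the softmax-plus-log-loss combination concrete, here is a plain-NumPy sketch of the quantity `nn.CrossEntropyLoss` computes for a single sample — an illustrative re-implementation, not the library code:

```python
import numpy as np

def cross_entropy(logits, target):
    """Negative log-probability of `target` under softmax(logits)."""
    shifted = logits - logits.max()              # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[target]

logits = np.array([2.0, 1.0, 0.1])        # raw, unnormalized class scores
print(cross_entropy(logits, target=0))    # small loss: class 0 has the largest score
print(cross_entropy(logits, target=2))    # larger loss: class 2 has the smallest score
```

This is why the network's `forward` ends with raw scores from `fc3` rather than a softmax: the loss function handles the normalization internally.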
---
## Train the Network
The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss
The following loop trains for 50 epochs; take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
```
# number of epochs to train the model
n_epochs = 50
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf # set initial "min" to infinity
for epoch in range(n_epochs):
    # monitor training and validation loss
    train_loss = 0.0
    valid_loss = 0.0

    ###################
    # train the model #
    ###################
    model.train()  # prep model for training
    for data, target in train_loader:
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*data.size(0)

    ######################
    # validate the model #
    ######################
    model.eval()  # prep model for evaluation
    for data, target in valid_loader:
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update running validation loss
        valid_loss += loss.item()*data.size(0)

    # print training/validation statistics
    # calculate average loss over an epoch
    train_loss = train_loss/len(train_loader.sampler)
    valid_loss = valid_loss/len(valid_loader.sampler)
    print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
        epoch+1,
        train_loss,
        valid_loss
    ))

    # save model if validation loss has decreased
    if valid_loss <= valid_loss_min:
        print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
            valid_loss_min,
            valid_loss))
        torch.save(model.state_dict(), 'model.pt')
        valid_loss_min = valid_loss
```
### Load the Model with the Lowest Validation Loss
```
model.load_state_dict(torch.load('model.pt'))
```
---
## Test the Trained Network
Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class, as well as at its overall loss and accuracy.
```
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()  # prep model for evaluation
for data, target in test_loader:
    # forward pass: compute predicted outputs by passing inputs to the model
    output = model(data)
    # calculate the loss
    loss = criterion(output, target)
    # update test loss
    test_loss += loss.item()*data.size(0)
    # convert output probabilities to predicted class
    _, pred = torch.max(output, 1)
    # compare predictions to true label
    correct = np.squeeze(pred.eq(target.data.view_as(pred)))
    # calculate test accuracy for each object class
    for i in range(len(target)):
        label = target.data[i]
        class_correct[label] += correct[i].item()
        class_total[label] += 1

# calculate and print avg test loss
test_loss = test_loss/len(test_loader.sampler)
print('Test Loss: {:.6f}\n'.format(test_loss))

for i in range(10):
    if class_total[i] > 0:
        print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
            str(i), 100 * class_correct[i] / class_total[i],
            np.sum(class_correct[i]), np.sum(class_total[i])))
    else:
        print('Test Accuracy of %5s: N/A (no test examples)' % str(i))

print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
    100. * np.sum(class_correct) / np.sum(class_total),
    np.sum(class_correct), np.sum(class_total)))
```
### Visualize Sample Test Results
This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
                 color=("green" if preds[idx]==labels[idx] else "red"))
```
---
# Agilent 34411A versus Keysight 34465A
The following notebook performs a benchmarking of the two DMMs. In part one, raw readings of immediate voltages are timed
and compared. In part two, actual sweeps are performed with a QDac.
```
%matplotlib notebook
import time
import matplotlib.pyplot as plt
import numpy as np
import qcodes as qc
from qcodes.instrument_drivers.Keysight.Keysight_34465A import Keysight_34465A
from qcodes.instrument_drivers.agilent.Agilent_34400A import Agilent_34400A
from qcodes.instrument_drivers.QDev.QDac import QDac
```
## Import and setup
```
ks = Keysight_34465A('Keysight', 'TCPIP0::K-000000-00000::inst0::INSTR')
agi = Agilent_34400A('Agilent', 'TCPIP0::192.168.15.105::inst0::INSTR')
qdac = QDac('qdac', 'ASRL4::INSTR')
station = qc.Station(ks, agi, qdac)
# set an NPLC of 0.06, corresponding to an aperture time of 1 ms (at 60 Hz line frequency)
ks.NPLC(0.06)
agi.NPLC(0.06)
# set the same range on both DMMs
ks.range(1)
agi.range(1)
```
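The NPLC-to-aperture relationship used in the comment above is simply the number of power-line cycles divided by the line frequency: 0.06 PLC is 1 ms on a 60 Hz line and 1.2 ms on a 50 Hz line, so the stated 1 ms assumes 60 Hz mains. A quick sketch of the arithmetic:

```python
def aperture_time(nplc, line_freq_hz):
    """Integration time in seconds for a given number of power-line cycles."""
    return nplc / line_freq_hz

print(aperture_time(0.06, 60) * 1e3, 'ms')   # 60 Hz mains
print(aperture_time(0.06, 50) * 1e3, 'ms')   # 50 Hz mains
```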
## Part one - raw readings
```
# Preliminary testing, just get N readings from each instrument
# with the displays on
ks.display_clear() # make sure that the display is on
agi.display_clear()
print('Reading with displays ON')
print('-'*10)
for N in [512, 1024, 2048]:
    t_start = time.time()
    for ii in range(N):
        ks.volt()
    t_stop = time.time()
    print('Read {} values one at a time from Keysight in: {} s'.format(N, t_stop-t_start))

    t_start = time.time()
    for ii in range(N):
        agi.volt()
    t_stop = time.time()
    print('Read {} values one at a time from Agilent in: {} s'.format(N, t_stop-t_start))
    print('-'*10)
# The same test, but with the displays off to decrease latency
print('Reading with displays OFF')
print('-'*10)
for N in [512, 1024, 2048]:
    ks.display_text('QCoDeS')
    t_start = time.time()
    for ii in range(N):
        ks.volt()
    t_stop = time.time()
    print('Read {} values one at a time from Keysight in: {} s'.format(N, t_stop-t_start))
    ks.display_clear()

    agi.display_text('QCoDeS')
    t_start = time.time()
    for ii in range(N):
        agi.volt()
    t_stop = time.time()
    print('Read {} values one at a time from Agilent in: {} s'.format(N, t_stop-t_start))
    agi.display_clear()
    print('-'*10)
```
## Part two - QCoDeS looping
### 1D Sweep
```
# Sweep a voltage from 0.2 V to 0.5 V in 512 steps
# Switch off displays for speed
ks.display_text('QCoDeS')
agi.display_text('QCoDeS')
N = 512
V1 = 0.2
V2 = 0.5
dV = (V2-V1)/(N-1) # endpoint included in sweep
loop = qc.Loop(qdac.ch42_v.sweep(V1, V2, dV)).each(ks.volt)
data = loop.get_data_set(name='testsweep')
t_start = time.time()
_ = loop.run()
t_stop = time.time()
print('\n\nDid a {}-point QCoDeS sweep of the QDac/Keysight pair in {:.1f} s\n\n'.format(N, t_stop-t_start))
loop = qc.Loop(qdac.ch41_v.sweep(V1, V2, dV)).each(agi.volt)
data = loop.get_data_set(name='testsweep')
t_start = time.time()
_ = loop.run()
t_stop = time.time()
print('\n\nDid a {}-point QCoDeS sweep of the QDac/Agilent pair in {:.1f} s\n\n'.format(N, t_stop-t_start))
agi.display_clear()
ks.display_clear()
```
### 2D Sweep
```
# Perform the same sweep as before, but this time nested inside another sweep of the same length,
# i.e. at each point of the original sweep, another QDac channel sweeps its voltage in the same number of steps
ks.display_text('QCoDeS')
agi.display_text('QCoDeS')
N = 100
V1 = 0.2
V2 = 0.5
dV = (V2-V1)/(N-1)
loop = qc.Loop(qdac.ch42_v.sweep(V1, V2, dV)).loop(qdac.ch41_v.sweep(V1, V2, dV)).each(ks.volt)
data = loop.get_data_set(name='testsweep')
t_start = time.time()
_ = loop.run()
t_stop = time.time()
print('\n\nDid a {}x{}-point QCoDeS sweep of the QDac/Keysight pair in {:.1f} s\n\n'.format(N, N, t_stop-t_start))
loop = qc.Loop(qdac.ch41_v.sweep(V1, V2, dV)).loop(qdac.ch42_v.sweep(V1, V2, dV)).each(agi.volt)
data = loop.get_data_set(name='testsweep')
t_start = time.time()
_ = loop.run()
t_stop = time.time()
print('\n\nDid a {}x{}-point QCoDeS sweep of the QDac/Agilent pair in {:.1f} s\n\n'.format(N, N, t_stop-t_start))
ks.display_clear()
agi.display_clear()
ks.close()
agi.close()
qdac.close()
```
---
<small><i>This notebook was originally put together by [Jake Vanderplas](http://www.vanderplas.com) for PyCon 2014. [Peter Prettenhofer](https://github.com/pprett) adapted it for PyCon Ukraine 2014. Source and license info is on [GitHub](https://github.com/pprett/sklearn_pycon2014/).</i></small>
# Part 2: Representation of Data for Machine Learning
When using [scikit-learn](http://scikit-learn.org), it is important to have a handle on how data are represented.
By the end of this section you should:
- Know the internal data representation of scikit-learn.
- Know how to use scikit-learn's dataset loaders to load example data.
- Know how to turn image & text data into data matrices for learning.
## Representation of Data in Scikit-learn
Machine learning is about creating models from data: for that reason, we'll start by discussing how data can be represented in order to be understood by the computer. Along with this, we'll build on our matplotlib examples from the previous section and show some examples of how to visualize data.
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
**two-dimensional array or matrix**. The arrays can be
either ``numpy`` arrays, or in some cases ``scipy.sparse`` matrices.
The size of the array is expected to be `[n_samples, n_features]`
- **n_samples:** The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a database record, a row in a CSV file,
or whatever you can describe with a fixed set of (quantitative) traits.
- **n_features:** The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where `scipy.sparse` matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
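The memory argument can be made concrete with a quick sketch: a mostly-zero dense array stores every entry, while a CSR matrix stores only the non-zero values plus their index arrays:

```python
import numpy as np
from scipy import sparse

rng = np.random.RandomState(0)
X = rng.random_sample((1000, 1000))
X[X < 0.99] = 0                      # keep roughly 1% of the entries non-zero

X_csr = sparse.csr_matrix(X)

dense_bytes = X.nbytes
sparse_bytes = X_csr.data.nbytes + X_csr.indices.nbytes + X_csr.indptr.nbytes
print(dense_bytes, sparse_bytes)     # the CSR form is dramatically smaller
```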
Data in scikit-learn is represented as a **feature matrix** and a **label vector**
$$
{\rm feature~matrix:~~~} {\bf X}~=~\left[
\begin{matrix}
x_{11} & x_{12} & \cdots & x_{1D}\\
x_{21} & x_{22} & \cdots & x_{2D}\\
x_{31} & x_{32} & \cdots & x_{3D}\\
\vdots & \vdots & \ddots & \vdots\\
\vdots & \vdots & \ddots & \vdots\\
x_{N1} & x_{N2} & \cdots & x_{ND}\\
\end{matrix}
\right]
$$
$$
{\rm label~vector:~~~} {\bf y}~=~ [y_1, y_2, y_3, \cdots y_N]
$$
Here there are $N$ samples and $D$ features.
## A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the
iris data stored by scikit-learn.
The data consists of measurements of three different species of irises.
There are three species of iris in the dataset, which we can picture here:
```
from IPython.core.display import Image, display
display(Image(filename='images/iris_setosa.jpg'))
print "Iris Setosa\n"
display(Image(filename='images/iris_versicolor.jpg'))
print "Iris Versicolor\n"
display(Image(filename='images/iris_virginica.jpg'))
print "Iris Virginica"
```
### Quick Question:
**If we want to design an algorithm to recognize iris species, what might the features be? What might the labels be?**
Remember: we need a 2D data array of size `[n_samples x n_features]`, and a 1D label array of size `n_samples`.
- What would the `n_samples` refer to?
- What might the `n_features` refer to?
Remember that there must be a **fixed** number of features for each sample, and feature
number ``i`` must be a similar kind of quantity for each sample.
### Loading the Iris Data with Scikit-Learn
Scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
- Features in the Iris dataset:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
- Target classes to predict:
1. Iris Setosa
2. Iris Versicolour
3. Iris Virginica
``scikit-learn`` embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
```
from sklearn.datasets import load_iris
iris = load_iris()
```
The result is a ``Bunch()`` object, which is basically an enhanced dictionary that contains the data.
**Note that bunch objects are not required for performing learning in scikit-learn, they are simply a convenient container for the numpy arrays which *are* required**
```
iris.keys()
n_samples, n_features = iris.data.shape
print (n_samples, n_features)
print iris.data[0]
print iris.data.shape
print iris.target.shape
print iris.target
print iris.target_names
```
This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def plot_iris_projection(x_index, y_index):
    # this formatter will label the colorbar with the correct target names
    formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
    plt.scatter(iris.data[:, x_index], iris.data[:, y_index],
                c=iris.target)
    plt.colorbar(ticks=[0, 1, 2], format=formatter)
    plt.xlabel(iris.feature_names[x_index])
    plt.ylabel(iris.feature_names[y_index])

plot_iris_projection(2, 3)
```
### Quick Exercise:
**Change** `x_index` **and** `y_index` **in the above script
and find a combination of two parameters
which maximally separate the three classes.**
This exercise is a preview of **dimensionality reduction**, which we'll see later.
## Other Available Data
Scikit-learn's datasets come in three flavors:
- **Packaged Data:** these small datasets are packaged with the scikit-learn installation,
and can be downloaded using the tools in ``sklearn.datasets.load_*``
- **Downloadable Data:** these larger datasets are available for download, and scikit-learn
includes tools which streamline this process. These tools can be found in
``sklearn.datasets.fetch_*``
- **Generated Data:** there are several datasets which are generated from models based on a
random seed. These are available in the ``sklearn.datasets.make_*``
You can explore the available dataset loaders, fetchers, and generators using IPython's
tab-completion functionality. After importing the ``datasets`` submodule from ``sklearn``,
type
datasets.load_ + TAB
or
datasets.fetch_ + TAB
or
datasets.make_ + TAB
to see a list of available functions.
```
from sklearn import datasets
```
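The same discovery can be done programmatically rather than with tab-completion — a small sketch listing the names the submodule exposes:

```python
from sklearn import datasets

loaders = [name for name in dir(datasets) if name.startswith('load_')]
fetchers = [name for name in dir(datasets) if name.startswith('fetch_')]
generators = [name for name in dir(datasets) if name.startswith('make_')]

print(len(loaders), len(fetchers), len(generators))
print(loaders[:5])
```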
The data downloaded using the fetch_ scripts are stored locally, within a subdirectory of your home directory. You can use the following to determine where it is:
```
from sklearn.datasets import get_data_home
get_data_home()
!ls $HOME/scikit_learn_data/
```
Be warned: many of these datasets are quite large and can take a long time to download
(especially on conference wifi).
If you start a download within the IPython notebook
and you want to kill it, you can use IPython's "kernel interrupt" feature, available in the menu or using
the shortcut ``Ctrl-m i``.
You can press ``Ctrl-m h`` for a list of all IPython keyboard shortcuts.
## Loading Digits Data
Now we'll take a look at another dataset, one where we have to put a bit
more thought into how to represent the data. We can explore the data in
a similar manner as above:
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
n_samples, n_features = digits.data.shape
print(n_samples, n_features)
print(digits.data[0])
print(digits.target)
```
The target here is just the digit represented by the data. The data is an array of
length 64... but what does this data mean?
There's a clue in the fact that we have two versions of the data array:
``data`` and ``images``. Let's take a look at them:
```
print(digits.data.shape)
print(digits.images.shape)
```
We can see that they're related by a simple reshaping:
```
import numpy as np

print(np.all(digits.images.reshape((1797, 64)) == digits.data))
```
*Aside... numpy and memory efficiency:*
*You might wonder whether duplicating the data is a problem. In this case, the memory
overhead is very small. Even though the arrays are different shapes, they point to the
same memory block, which we can see by doing a bit of digging into the guts of numpy:*
```
print(digits.data.__array_interface__['data'])
print(digits.images.__array_interface__['data'])
```
*The long integer here is a memory address: the fact that the two are the same tells
us that the two arrays are simply views of the same underlying data.*
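The same kind of view relationship is easy to demonstrate on a small array (a standalone sketch, not the digits data itself):

```python
import numpy as np

a = np.arange(6)
b = a.reshape(2, 3)            # a view, not a copy
b[0, 0] = 99                   # writing through the view...
print(a[0])                    # ...changes the original: 99
print(np.shares_memory(a, b))  # True
```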
Let's visualize the data. It's a little more involved than the simple scatter plot
we used above, but we can do it rather tersely.
```
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
```
We see now what the features mean. Each feature is a real-valued quantity representing the
darkness of a pixel in an 8x8 image of a hand-written digit.
Even though each sample has data that is inherently two-dimensional, the data matrix flattens
this 2D data into a **single vector**, which can be contained in one **row** of the data matrix.
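Flattening an image into a feature row (and recovering it) is a plain `reshape`/`ravel` round trip, sketched here on a dummy 8x8 array:

```python
import numpy as np

img = np.arange(64).reshape(8, 8)  # a dummy 8x8 "image"
row = img.ravel()                  # flattened into a single 64-feature row
print(img.shape, row.shape)        # (8, 8) (64,)
print((row.reshape(8, 8) == img).all())  # True: no information is lost
```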
This example is typically used with a class of unsupervised learning methods known as *manifold learning*. We'll explore unsupervised learning in detail later in the tutorial.
## Exercise: working with the faces dataset
Here we'll take a moment for you to explore the datasets yourself.
Later on we'll be using the Olivetti faces dataset.
Take a moment to fetch the data (about 1.4MB), and visualize the faces.
You can copy the code used to visualize the digits above, and modify it for this data.
```
from sklearn.datasets import fetch_olivetti_faces
# fetch the faces data
# Use a script like above to plot the faces image data.
# hint: plt.cm.bone is a good colormap for this data
```
### Solution:
```
# Uncomment the following to load the solution to this exercise
# %load solutions/02_faces.py
```
---
Drive the car autonomously using the trained model converted to TensorRT.
```
import torch
import torchvision
CATEGORIES = ['apex']
device = torch.device('cuda')
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2 * len(CATEGORIES))
model = model.cuda().eval().half()
```
Load the model in TensorRT format.
```
import torch
from torch2trt import TRTModule
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('road_following_model_trt.pth'))
```
Instantiate the racecar class.
```
from jetracer.nvidia_racecar import NvidiaRacecar
type = "TT02"
car = NvidiaRacecar(type)
```
Restart nvargus-daemon, the service that controls the camera, so the camera can start.
```
!echo jetson | sudo -S systemctl restart nvargus-daemon
```
Instantiate the camera class.
```
from jetcam.csi_camera import CSICamera
camera = CSICamera(width=224, height=224, capture_fps=40)
```
Finally, place the JetRacer on the floor and run the cell below.
* If the car does not steer smoothly left and right, decrease `STEERING_GAIN`
* If turns are not sharp enough, increase `STEERING_GAIN`
* If the car drifts to the left, decrease `STEERING_BIAS` in steps of about 0.05
* If the car drifts to the right, increase `STEERING_BIAS` in steps of about 0.05

|Value|Meaning|
|:--|:--|
|st_gain|Adjusts the steering ratio (applied when the start-inference button is pressed)|
|st_offset|Adjusts the neutral steering position (applied when the start-inference button is pressed)|
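As a rough illustration (a hypothetical helper, not part of the notebook's code), the steering command computed in the `live()` loop is just `x * gain + offset`; clamping it to the servo range [-1, 1] is an extra assumption here:

```python
def steering_command(x, gain=-0.65, offset=0.0):
    """Map a model output x to a steering value, clamped to [-1, 1]."""
    return max(-1.0, min(1.0, x * gain + offset))

print(steering_command(0.5))               # -0.325
print(steering_command(0.5, offset=0.05))  # -0.275
print(steering_command(10.0))              # -1.0 (clamped)
```

This makes the tuning rules above concrete: `gain` scales how hard the car turns, while `offset` shifts the neutral position.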
```
import ipywidgets.widgets as widgets
from IPython.display import display
from utils import preprocess
import numpy as np
import threading
import traitlets
import time

throttle_slider = widgets.FloatSlider(description='throttle', min=-1.0, max=1.0, step=0.01, value=0.0, orientation='vertical')
steering_gain = widgets.BoundedFloatText(description='st_gain', min=-1.0, max=1.0, step=0.01, value=-0.65)
steering_offset = widgets.BoundedFloatText(description='st_offset', min=-1.0, max=1.0, step=0.01, value=0)
check_button = widgets.Button(description='Check steering')
run_button = widgets.Button(description='Start inference')
stop_button = widgets.Button(description='Stop inference')
log_widget = widgets.Textarea(description='Log')
result_widget = widgets.FloatText(description='x from inference')

def live():
    global running, count
    log_widget.value = "live"
    count = 0
    while running:
        count = count + 1
        log_widget.value = "Inference #" + str(count)
        image = camera.read()
        image = preprocess(image).half()
        output = model_trt(image).detach().cpu().numpy().flatten()
        x = float(output[0])
        steering_value = x * car.steering_gain + car.steering_offset
        result_widget.value = steering_value
        car.steering = steering_value

def run(c):
    global running, execute_thread, start_time
    log_widget.value = "run"
    running = True
    execute_thread = threading.Thread(target=live)
    execute_thread.start()
    start_time = time.time()

def stop(c):
    global running, execute_thread, start_time, count
    end_time = time.time() - start_time
    fps = count / int(end_time)
    log_widget.value = "FPS: " + str(fps) + " (inferences per second)"
    running = False
    execute_thread.join()  # threads have no stop(); clear the flag and wait for the loop to exit

def check(c):
    global running, execute_thread, start_time, count
    end_time = time.time() - start_time
    fps = count / int(end_time)
    log_widget.value = "Pausing inference for the check. FPS: " + str(fps) + " (inferences per second)"
    running = False
    count = 0
    log_widget.value = "car.steering:1"
    car.steering = 1
    time.sleep(1)
    car.steering = -1
    time.sleep(1)
    car.steering = 0

run_button.on_click(run)
stop_button.on_click(stop)
check_button.on_click(check)

# create a horizontal box container to place the sliders next to each other
run_widget = widgets.VBox([
    widgets.HBox([throttle_slider, steering_gain, steering_offset, check_button]),
    widgets.HBox([run_button, stop_button]),
    result_widget,
    log_widget
])

throttle_link = traitlets.link((throttle_slider, 'value'), (car, 'throttle'))
steering_gain_link = traitlets.link((steering_gain, 'value'), (car, 'steering_gain'))
steering_offset_link = traitlets.link((steering_offset, 'value'), (car, 'steering_offset'))

# display the container in this cell's output
display(run_widget)
```
---
```
#Summary of built-in functions
#abs : returns the absolute value
num = abs(-5)
print(num)
#all, any : return True or False
"""
+-----------------------------------------+---------+---------+
| | any | all |
+-----------------------------------------+---------+---------+
| All Truthy values | True | True |
+-----------------------------------------+---------+---------+
| All Falsy values | False | False |
+-----------------------------------------+---------+---------+
| One Truthy value (all others are Falsy) | True | False |
+-----------------------------------------+---------+---------+
| One Falsy value (all others are Truthy) | True | False |
+-----------------------------------------+---------+---------+
| Empty Iterable | False | True |
+-----------------------------------------+---------+---------+
"""
l = [1, 3, 4, 5]
print(all(l))
l = [0, False]
print(all(l))
l = [1, 0, False]
print(all(l))
l = [False, 1, 2, 3]
print(all(l))
l = []
print(all(l))
l = [1, 3, 4, 5]
print(any(l))
l = [0, False]
print(any(l))
l = [1, 0, False]
print(any(l))
l = [False, 1, 2, 3]
print(any(l))
l = []
print(any(l))
#ascii : replaces non-ASCII characters with escape sequences
text = "\tf(x) = √x\n"
print(ascii(text))

#bin : returns the binary string of an integer
b_num = bin(6)
print(b_num)
#see also: oct, hex

#bool : returns True or False
num = 10
if bool(num%2 == 0) :
    print("even")
else :
    print("odd")

#breakpoint : enters the debugger (pdb) when called
t1 = "HI"
t2 = "HELLO"
t3 = "BYE"
print(t1)
#breakpoint()
print(t2)
#breakpoint()
print(t3)

#chr : takes an integer and returns the corresponding Unicode character
#Unicode : a standard for representing all the world's characters consistently on computers
char = chr(65)
print(char)

#complex : returns a complex number
n1 = complex(1, 2)
n2 = complex('3+4j')
print(n1, n2)

#dir : returns the attributes (functions, variables) of the object passed in;
#with no argument, returns the names defined in the current scope
print(dir("test"))

#divmod : divides two numbers and returns a (quotient, remainder) pair
num = divmod(10, 3)
print(num)

#enumerate : takes an iterable and returns an enumerate object
print(list(enumerate(["Apple", "Banana", "Cherry"])))
l = ["Apple", "Banana", "Cherry"]
for i, fruit in enumerate(l, start = 1) :
    print("Fruit", i, ":", fruit)

#filter : takes a function and returns an iterable of only the items for which it returns True
temp = list(filter((lambda x : x > 0), [-1, 1, -6, 3, 5]))
print(temp)

#hex : returns the hexadecimal string of an integer
h_num = hex(123)
print(h_num)

#float : creates and returns a float
print(float('1.23'))
print(float('-45.6'))
print(float('1e+3')) #scientific notation
print(float('1e-3'))

#frozenset : a set-like object whose items cannot be changed after creation

#hasattr : takes an object and a string; returns True if the string names one of the object's attributes
print(hasattr(str(), "split"))

#id : returns the object's id (identity, unique address)
a = 1
b = a
print(id(1), id(a), id(b))

#int : returns an int object; passing 2, 8, 16, etc. as the base argument converts from that base to decimal
print(int('0x7b', base=16))

#isinstance : returns whether the first argument is an instance of the given class
print(isinstance(1, int), isinstance("1", int))

#max : find the largest argument value
l = [1, 2, 5, -1, -6, 10]
print(max(l))
print(max(1, 2, 5, 9, -5))

#min : find the smallest argument value
print(min(l))
print(min(1, 2, 5, 9, -5))

#oct : returns the octal string of an integer
o_num = oct(17)
print(o_num)

#ord : takes a Unicode character and returns its integer code point
print(ord('A'))

#pow(base, exp) : returns base raised to the power exp
print(pow(4, 2), pow(4, -1))

#reversed : returns the given iterable, reversed
l = [1, 2, 3]
rl = list(reversed(l))
print(rl)

#round : rounds a float to a given number of decimal places
a = 3.141592
print(round(a), round(a, 2))

#sum : sums the items of an iterable from left to right and returns the total
print(sum(range(1, 11)))

#zip : returns an iterator that aggregates elements from each iterable
x = "ABC"
y = "가나다"
zipped = zip(x, y)
for i in zipped :
    print(i)
fruits = ['Apple', 'Banana', 'Kiwi']
color = ['Red', 'Yellow', 'Green']
info = dict(zip(fruits, color))
print(info)
#Math-related functions
#abs : returns the absolute value
num = abs(-5)
print(num) #output: 5
#complex : returns a complex number
num1 = complex(1, 2) #method 1
num2 = complex('3+4j') #method 2
print(num1, num2) #output: (1+2j) (3+4j)
#divmod : divides two numbers and returns a (quotient, remainder) pair
num = divmod(10, 3)
print(num) #output: (3, 1)
#round : rounds a float to a given number of decimal places
a = 3.141592
print(round(a), round(a, 2)) #output: 3 3.14
#float : creates and returns a float
print(float('1.23')) #output: 1.23
print(float('-45.6')) #output: -45.6
print(float('1e+3')) #scientific notation, output: 1000.0
print(float('1e-3')) #output: 0.001
#sum : sums the items of an iterable from left to right and returns the total
print(sum(range(1, 11))) #output: 55
#max : find the largest argument value
l = [1, 2, 5, -1, -6, 10]
print(max(l)) #output: 10
print(max(1, 2, 5, 9, -5)) #output: 9
#min : find the smallest argument value
print(min(l)) #output: -6
print(min(1, 2, 5, 9, -5)) #output: -5
#pow(base, exp) : returns base raised to the power exp
print(pow(4, 2), pow(4, -1)) #output: 16 0.25
#bin : returns the binary string of an integer
b_num = bin(6)
print(b_num) #output: 0b110
#oct : returns the octal string of an integer
o_num = oct(17)
print(o_num) #output: 0o21
#hex : returns the hexadecimal string of an integer
h_num = hex(123)
print(h_num) #output: 0x7b
#int : returns an int object; passing 2, 8, 16, etc. as the base argument
#converts from that base to decimal
print(int('0x7b', base=16)) #output: 123
#Truth-value functions
#all, any : return True or False
"""
------------------------------------------------
| any   | all   | condition                    |
|-------+-------+------------------------------|
| True  | True  | all values truthy            |
|-------+-------+------------------------------|
| False | False | all values falsy             |
|-------+-------+------------------------------|
| True  | False | only one value truthy        |
|-------+-------+------------------------------|
| True  | False | only one value falsy         |
|-------+-------+------------------------------|
| False | True  | empty iterable               |
------------------------------------------------
"""
l = [1, 3, 4, 5]
print(any(l)) #output: True
l = [0, False]
print(any(l)) #output: False
l = [1, 0, False]
print(any(l)) #output: True
l = [False, 1, 2, 3]
print(any(l)) #output: True
l = []
print(any(l)) #output: False
l = [1, 3, 4, 5]
print(all(l)) #output: True
l = [0, False]
print(all(l)) #output: False
l = [1, 0, False]
print(all(l)) #output: False
l = [False, 1, 2, 3]
print(all(l)) #output: False
l = []
print(all(l)) #output: True
#bool : returns True or False
num = 10
if bool(num%2 == 0) :
    print("even")
else :
    print("odd")
#output: even
#Iterator functions
#enumerate : takes an iterable and returns (index, value) tuples
temp = list(enumerate(['가', '나', '다']))
print(temp)
#filter : takes a function and returns an iterable of only the items for which it returns True
temp = list(filter((lambda x : x > 0), [-1, 1, -6, 3, 5]))
print(temp) #output: [1, 3, 5]
#reversed : returns the given iterable, reversed
l = [1, 2, 3]
rl = list(reversed(l))
print(rl) #output: [3, 2, 1]
#zip : returns an iterator that aggregates elements from each iterable
x = "ABC"
y = "가나다"
zipped = zip(x, y)
for i in zipped :
    print(i) #output: ('A', '가')
             #        ('B', '나')
             #        ('C', '다')
fruits = ['Apple', 'Banana', 'Kiwi']
color = ['Red', 'Yellow', 'Green']
info = dict(zip(fruits, color))
print(info) #output: {'Apple': 'Red', 'Banana': 'Yellow', 'Kiwi': 'Green'}
#Other functions
#ascii : replaces non-ASCII characters with escape sequences
text = "\tf(x) = √x\n"
print(ascii(text)) #output: '\tf(x) = \u221ax\n'
#chr : takes an integer and returns the corresponding Unicode character
#Unicode : a standard for representing all the world's characters consistently on computers
char = chr(65)
print(char) #output: A
#ord : takes a Unicode character and returns its integer code point
print(ord('A')) #output: 65
#id : returns the object's id (identity, unique address)
a = 1
b = a
print(id(1), id(a), id(b))
#dir : returns the attributes (functions, variables) of the object passed in;
#with no argument, returns the names defined in the current scope
print(dir("test"))
#hasattr : takes an object and a string; returns True if the string names one of the object's attributes
print(hasattr(str(), "split")) #output: True
#isinstance : returns whether the first argument is an instance of the given class
print(isinstance(1, int), isinstance("1", int)) #output: True False
#breakpoint : enters the debugger (pdb) when called
t1 = "HI"
t2 = "HELLO"
t3 = "BYE"
print(t1)
breakpoint() #enter c to continue to the next breakpoint
print(t2)
breakpoint()
print(t3)
```
---
# Build Experiment from tf.layers model
Embeds a 3 layer FCN model to predict MNIST handwritten digits in a Tensorflow Experiment. The model is built using the __tf.layers__ API, and wrapped in a custom Estimator, which is then wrapped inside an Experiment.
```
from __future__ import division, print_function
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import tensorflow as tf
DATA_DIR = "../../data"
TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv")
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
MODEL_DIR = os.path.join(DATA_DIR, "expt-learn-model")
NUM_FEATURES = 784
NUM_CLASSES = 10
NUM_STEPS = 100
LEARNING_RATE = 1e-3
BATCH_SIZE = 128
tf.logging.set_verbosity(tf.logging.INFO)
```
## Prepare Data
```
def parse_file(filename):
    xdata, ydata = [], []
    fin = open(filename, "r")  # text mode so split(",") works on str lines
    i = 0
    for line in fin:
        if i % 10000 == 0:
            print("{:s}: {:d} lines read".format(
                os.path.basename(filename), i))
        cols = line.strip().split(",")
        ydata.append(int(cols[0]))
        xdata.append([float(x) / 255. for x in cols[1:]])
        i += 1
    fin.close()
    print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
    y = np.array(ydata, dtype=np.float32)
    X = np.array(xdata, dtype=np.float32)
    return X, y
Xtrain, ytrain = parse_file(TRAIN_FILE)
Xtest, ytest = parse_file(TEST_FILE)
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
```
The train_input_fn and test_input_fn below are equivalent to using full batches. There is some information on [building batch-oriented input functions](http://blog.mdda.net/ai/2017/02/25/estimator-input-fn), but I was unable to make it work. The commented-out block is adapted from a Keras data generator, but that does not work either.
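For reference, the shuffled mini-batch logic that the commented-out block attempts can be sketched in plain NumPy (a sketch only; wrapping each batch in `tf.constant` inside an input_fn is left to the caller):

```python
import numpy as np

def batch_iter(X, y, batch_size=128, seed=42):
    """Yield shuffled (X, y) mini-batches, dropping the last partial batch."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X) - batch_size + 1, batch_size):
        b = idx[start:start + batch_size]
        yield X[b], y[b]

# tiny demo: 10 samples of 2 features, batches of 4 -> 2 full batches
X = np.arange(20, dtype=np.float32).reshape(10, 2)
y = np.arange(10, dtype=np.float32)
batches = list(batch_iter(X, y, batch_size=4))
print(len(batches), batches[0][0].shape)  # 2 (4, 2)
```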
```
def train_input_fn():
    return tf.constant(Xtrain), tf.constant(ytrain)

def test_input_fn():
    return tf.constant(Xtest), tf.constant(ytest)
# def batch_input_fn(X, y, batch_size=BATCH_SIZE,
# num_epochs=NUM_STEPS):
# for e in range(num_epochs):
# num_recs = X.shape[0]
# sids = np.random.permutation(np.arange(num_recs))
# num_batches = num_recs // batch_size
# for bid in range(num_batches):
# sids_b = sids[bid * batch_size : (bid + 1) * batch_size]
# X_b = np.zeros((batch_size, NUM_FEATURES))
# y_b = np.zeros((batch_size,))
# for i in range(batch_size):
# X_b[i] = X[sids_b[i]]
# y_b[i] = y[sids_b[i]]
# yield tf.constant(X_b, dtype=tf.float32), \
# tf.constant(y_b, dtype=tf.float32)
# def train_input_fn():
# return batch_input_fn(Xtrain, ytrain, BATCH_SIZE).next()
# def test_input_fn():
# return batch_input_fn(Xtest, ytest, BATCH_SIZE).next()
```
## Define Model Function
Estimator expects a model_fn function that has all the information about the model, the loss function, etc.
```
def model_fn(features, labels, mode):
    # define model
    in_training = (mode == tf.contrib.learn.ModeKeys.TRAIN)
    fc1 = tf.layers.dense(inputs=features, units=512,
                          activation=tf.nn.relu, name="fc1")
    fc1_dropout = tf.layers.dropout(inputs=fc1, rate=0.2,
                                    training=in_training,
                                    name="fc1_dropout")
    fc2 = tf.layers.dense(inputs=fc1_dropout, units=256,
                          activation=tf.nn.relu, name="fc2")
    fc2_dropout = tf.layers.dropout(inputs=fc2, rate=0.2,
                                    training=in_training,
                                    name="fc2_dropout")
    logits = tf.layers.dense(inputs=fc2_dropout, units=NUM_CLASSES,
                             name="logits")
    # loss (for TRAIN and EVAL)
    loss = None
    if mode != tf.contrib.learn.ModeKeys.INFER:
        onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32),
                                   depth=NUM_CLASSES)
        loss = tf.losses.softmax_cross_entropy(
            onehot_labels=onehot_labels, logits=logits)
    # optimizer (TRAIN only)
    train_op = None
    if mode == tf.contrib.learn.ModeKeys.TRAIN:
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss,
            global_step=tf.contrib.framework.get_global_step(),
            learning_rate=LEARNING_RATE,
            optimizer="Adam")
    # predictions
    predictions = {
        "classes": tf.argmax(input=logits, axis=1),
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")}
    # additional metrics
    accuracy = tf.metrics.accuracy(labels, predictions["classes"])
    eval_metric_ops = {"accuracy": accuracy}
    # logging variables to tensorboard
    tf.summary.scalar("loss", loss)
    # tf.metrics.accuracy returns (value, update_op); log the update_op value
    tf.summary.scalar("accuracy", accuracy[1])
    summary_op = tf.summary.merge_all()
    tb_logger = tf.contrib.learn.monitors.SummarySaver(summary_op,
                                                       save_steps=10)
    return model_fn_lib.ModelFnOps(mode=mode,
                                   predictions=predictions,
                                   loss=loss,
                                   train_op=train_op,
                                   eval_metric_ops=eval_metric_ops)
```
## Define Estimator
```
shutil.rmtree(MODEL_DIR, ignore_errors=True)
estimator = tf.contrib.learn.Estimator(model_fn=model_fn,
                                       model_dir=MODEL_DIR,
                                       config=tf.contrib.learn.RunConfig(save_checkpoints_secs=30000))
```
## Train Estimator
Using the parameters x, y and batch_size is deprecated, and the warnings say to use input_fn instead. However, using that results in very slow fit and evaluate calls. The solution is to use batch-oriented input_fns. The commented portions will be opened up once I figure out how to make the batch-oriented input_fns work.
```
estimator.fit(x=Xtrain, y=ytrain,
              batch_size=BATCH_SIZE,
              steps=NUM_STEPS)
# estimator.fit(input_fn=train_input_fn, steps=NUM_STEPS)
```
## Evaluate Estimator
```
results = estimator.evaluate(x=Xtest, y=ytest)
# results = estimator.evaluate(input_fn=test_input_fn)
print(results)
```
## alternatively...
## Define Experiment
A model is wrapped in an Estimator, which is then wrapped in an Experiment. Once you have an Experiment, you can run this in a distributed manner on CPU or GPU.
```
NUM_STEPS = 20
def experiment_fn(run_config, params):
    feature_cols = [tf.contrib.layers.real_valued_column("",
                                                         dimension=NUM_FEATURES)]
    estimator = tf.contrib.learn.Estimator(model_fn=model_fn,
                                           model_dir=MODEL_DIR)
    return tf.contrib.learn.Experiment(
        estimator=estimator,
        train_input_fn=train_input_fn,
        train_steps=NUM_STEPS,
        eval_input_fn=test_input_fn)
```
## Run Experiment
```
shutil.rmtree(MODEL_DIR, ignore_errors=True)
tf.contrib.learn.learn_runner.run(experiment_fn,
                                  run_config=tf.contrib.learn.RunConfig(
                                      model_dir=MODEL_DIR))
```
---
# HOW TO ADD A NEW CLASS TO AN OBJECT DETECTION PIPELINE
```
## Uncomment command below to kill current job:
#!neuro kill $(hostname)
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../')
from detection.model import get_model
from detection.coco_subset import CLS_SELECT, COLORS, N_COCO_CLASSES
from detection.dataset import get_transform
from detection.visualisation import show_legend, predict_and_show
from detection.train import train
import torch
from pathlib import Path
from random import choice
from PIL import Image
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
```
## Downloading the dataset
```
DATA_PATH = "../data"
! mkdir -p {DATA_PATH}
! [ -f {DATA_PATH}/coco-retail.zip ] || wget http://data.neu.ro/coco-retail.zip -O {DATA_PATH}/coco-retail.zip
! [ -d {DATA_PATH}/coco ] || unzip -q {DATA_PATH}/coco-retail.zip -d {DATA_PATH}
```
## Dataset overview
We took **25 classes** from the COCO dataset which can be seen on shelves as retail products.
Since our goal is to show how a new category can be added to a detection pipeline
(without a long training process),
we work with only 100 photos and don't make any train/val split.
Of course, with these settings the model will be prone to over-fitting,
but training on the whole dataset would take too much time.
```
data_dir = Path(f'{DATA_PATH}/coco/mini_coco/')
show_legend(list(CLS_SELECT.keys()), COLORS)
```
## Evaluation of model, trained on 24 classes
Let's load our model, trained on 24 classes (without class #25 - **sports ball**).
```
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
print(device)
path_to_ckpt = f'{DATA_PATH}/coco/weights/24_classes.ckpt'
model = get_model(n_classes=N_COCO_CLASSES - 1)
model.load_state_dict(torch.load(path_to_ckpt, map_location=device))
model.eval();
```
Now let's make sure that the model can't recognize the 25'th class.
```
def get_im_with_extra_class():
    images_with_extra_class = [
        '000000091595.jpg', '000000331474.jpg', '000000371042.jpg',
        '000000209027.jpg', '000000032611.jpg', '000000002139.jpg',
        '000000050407.jpg', '000000345466.jpg', '000000465530.jpg'
    ]
    im_path = data_dir / 'train' / 'images' / choice(images_with_extra_class)
    im_pil = Image.open(im_path)
    im_tensor, _ = get_transform(False)(im_pil, None)
    print(im_path.name)
    return im_pil, im_tensor
im_pil, im_tensor = get_im_with_extra_class()
# Note: on CPU, inference of an image takes around 1 minute
predict_and_show(model.to(device), im_pil, im_tensor.to(device))
```
## Add 25'th class
The simplest way to add new classes includes 2 steps:
* Increasing the number of output logits;
* Training / fine-tuning process.
```
# Now we can train our model or load it from prepared checkpoint
want_finetune = False

if want_finetune:
    # fine-tune previous model
    model_ext = get_model(n_classes=N_COCO_CLASSES - 1).to(device)
    model_ext.load_state_dict(torch.load(path_to_ckpt, map_location=device))
    n_features = model_ext.roi_heads.box_predictor.cls_score.in_features
    model_ext.roi_heads.box_predictor = FastRCNNPredictor(n_features, N_COCO_CLASSES)
    train(model=model_ext, data_dir=data_dir, prev_ckpt=None, n_epoch=50,
          batch_size=2, n_workers=4, ignore_labels=(), need_save=False)
else:
    # load from ckpt
    path_to_ckpt_ext = f'{DATA_PATH}/coco/weights/25_classes.ckpt'
    model_ext = get_model(n_classes=N_COCO_CLASSES)
    model_ext.load_state_dict(torch.load(path_to_ckpt_ext, map_location=device))
```
Let's check that our model can now recognize the added 25'th class.
```
im_pil, im_tensor = get_im_with_extra_class()
predict_and_show(model_ext.to(device), im_pil, im_tensor.to(device))
```
---
```
if 0 :
%matplotlib inline
else :
%matplotlib notebook
```
# Import libraries
```
import sys
import os
module_path = os.path.abspath('.') +"\\_scripts"
print(module_path)
if module_path not in sys.path:
sys.path.append(module_path)
from _00_Import_packages_git3 import *
from time import sleep
from shutil import rmtree
from pathlib import Path
from os.path import join
import pandas as pd
import numpy as np
import os
from sos_trades_core.execution_engine.execution_engine import ExecutionEngine
from sos_trades_core.execution_engine.sos_simple_multi_scenario import SoSSimpleMultiScenario
from sos_trades_core.execution_engine.sos_very_simple_multi_scenario import SoSVerySimpleMultiScenario
from sos_trades_core.execution_engine.scatter_data import SoSScatterData
from sos_trades_core.execution_engine.sos_discipline_scatter import SoSDisciplineScatter
from tempfile import gettempdir
from sos_trades_core.tools.rw.load_dump_dm_data import DirectLoadDump
from sos_trades_core.study_manager.base_study_manager import BaseStudyManager
from sos_trades_core.execution_engine.sos_discipline import SoSDiscipline
from sos_trades_core.execution_engine.sos_coupling import SoSCoupling
```
# TestScatter
SoSDiscipline test class
# setUp
```
'''
Initialize the data needed for testing
'''
dirs_to_del = []
namespace = 'MyCase'
study_name = f'{namespace}'
repo = 'sos_trades_core.sos_processes.test'
base_path = 'sos_trades_core.sos_wrapping.test_discs'
root_dir = gettempdir()
exec_eng = ExecutionEngine(namespace)
factory = exec_eng.factory
```
# tearDown
```
for dir_to_del in dirs_to_del:
    sleep(0.5)
    if Path(dir_to_del).is_dir():
        rmtree(dir_to_del)
sleep(0.5)
```
# 01_multi_scenario_of_scatter
```
exec_eng = ExecutionEngine(namespace)
factory = exec_eng.factory
# scatter build map
ac_map = {'input_name': 'name_list',
'input_type': 'string_list',
'input_ns': 'ns_scatter_scenario',
'output_name': 'ac_name',
'scatter_ns': 'ns_ac',
'gather_ns': 'ns_scenario',
'ns_to_update': ['ns_data_ac']}
exec_eng.smaps_manager.add_build_map('name_list', ac_map)
import pandas as pd
pd.DataFrame.from_dict(ac_map ,orient='index')
# scenario build map
scenario_map = {'input_name': 'scenario_list',
'input_type': 'string_list',
'input_ns': 'ns_scatter_scenario',
'output_name': 'scenario_name',
'scatter_ns': 'ns_scenario',
'gather_ns': 'ns_scatter_scenario',
'ns_to_update': ['ns_disc3', 'ns_barrierr', 'ns_out_disc3']}
exec_eng.smaps_manager.add_build_map(
'scenario_list', scenario_map)
import pandas as pd
pd.DataFrame.from_dict(scenario_map ,orient='index')
# shared namespace
exec_eng.ns_manager.add_ns('ns_barrierr', 'MyCase')
exec_eng.ns_manager.add_ns(
'ns_scatter_scenario', 'MyCase.multi_scenarios')
exec_eng.ns_manager.add_ns(
'ns_disc3', 'MyCase.multi_scenarios.Disc3')
exec_eng.ns_manager.add_ns(
'ns_out_disc3', 'MyCase.multi_scenarios')
exec_eng.ns_manager.add_ns(
'ns_data_ac', 'MyCase')
# instantiate factory # get instantiator from Discipline class
builder_list = factory.get_builder_from_process(repo=repo,
mod_id='test_disc1_scenario')
scatter_list = exec_eng.factory.create_multi_scatter_builder_from_list(
'name_list', builder_list=builder_list, autogather=True)
mod_list = f'{base_path}.disc3_scenario.Disc3'
disc3_builder = exec_eng.factory.get_builder_from_module(
'Disc3', mod_list)
scatter_list.append(disc3_builder)
multi_scenarios = exec_eng.factory.create_very_simple_multi_scenario_builder(
'multi_scenarios', 'scenario_list', scatter_list, autogather=True, gather_node='Post-processing')
exec_eng.factory.set_builders_to_coupling_builder(
multi_scenarios)
exec_eng.configure()
exec_eng.display_treeview_nodes()
scatter_list[1].__dict__
builder_list[0]
multi_scenarios[1].__dict__
from copy import deepcopy
DESC_IN = deepcopy(builder_list[0].cls.DESC_IN)
DESC_OUT = deepcopy(builder_list[0].cls.DESC_OUT)
import pandas as pd
DESC_IN_df = pd.DataFrame.from_dict(DESC_IN,orient='index')
DESC_OUT_df = pd.DataFrame.from_dict(DESC_OUT,orient='index')
DESC_IN_df
DESC_OUT_df
from copy import deepcopy
DESC_IN = deepcopy(disc3_builder.cls.DESC_IN)
DESC_OUT = deepcopy(disc3_builder.cls.DESC_OUT)
import pandas as pd
DESC_IN_df = pd.DataFrame.from_dict(DESC_IN,orient='index')
DESC_OUT_df = pd.DataFrame.from_dict(DESC_OUT,orient='index')
DESC_IN_df
DESC_OUT_df
dict_values = {}
dict_values[f'{study_name}.multi_scenarios.scenario_list'] = [
'scenario_1', 'scenario_2']
dict_values
exec_eng.load_study_from_input_dict(dict_values)
exec_eng.display_treeview_nodes()
scenario_list = ['scenario_1', 'scenario_2']
for scenario in scenario_list:
    x1 = 2.
    x2 = 4.
    a1 = 3
    b1 = 4
    a2 = 6
    b2 = 2
    dict_values[study_name + '.name_1.a'] = a1
    dict_values[study_name + '.name_2.a'] = a2
    dict_values[study_name + '.multi_scenarios.' +
                scenario + '.Disc1.name_1.b'] = b1
    dict_values[study_name + '.multi_scenarios.' +
                scenario + '.Disc1.name_2.b'] = b2
    dict_values[study_name + '.multi_scenarios.' +
                scenario + '.Disc3.constant'] = 3
    dict_values[study_name + '.multi_scenarios.' +
                scenario + '.Disc3.power'] = 2
dict_values[study_name +
'.multi_scenarios.name_list'] = ['name_1', 'name_2']
dict_values[study_name +
'.multi_scenarios.scenario_1.Disc3.z'] = 1.2
dict_values[study_name +
'.multi_scenarios.scenario_2.Disc3.z'] = 1.5
dict_values[study_name + '.name_1.x'] = x1
dict_values[study_name + '.name_2.x'] = x2
dict_values
exec_eng.load_study_from_input_dict(dict_values)
exec_eng.display_treeview_nodes()
exec_eng.execute()
print(exec_eng.dm.get_value(
'MyCase.multi_scenarios.scenario_list'), ['scenario_1', 'scenario_2'])
y1 = a1 * x1 + b1
y2 = a2 * x2 + b2
print(exec_eng.dm.get_value(
'MyCase.multi_scenarios.scenario_1.name_1.y'), y1)
print(exec_eng.dm.get_value(
'MyCase.multi_scenarios.scenario_1.name_2.y'), y2)
print(exec_eng.dm.get_value(
'MyCase.multi_scenarios.scenario_2.name_1.y'), y1)
print(exec_eng.dm.get_value(
'MyCase.multi_scenarios.scenario_2.name_2.y'), y2)
gather_disc1 = exec_eng.dm.get_disciplines_with_name(
'MyCase.Post-processing.Disc1')[0]
exec_eng.display_treeview_nodes()
print([key for key in list(gather_disc1._data_in.keys()) if key not in gather_disc1.NUM_DESC_IN], [
'scenario_list', 'scenario_2.y_dict', 'scenario_1.y_dict'])
print(list(gather_disc1._data_out.keys()), ['y_dict'])
print(gather_disc1._data_out['y_dict']['value'], {
'scenario_1.name_1': y1, 'scenario_1.name_2': y2, 'scenario_2.name_1': y1, 'scenario_2.name_2': y2})
gather_disc3 = exec_eng.dm.get_disciplines_with_name(
'MyCase.Post-processing.Disc3')[0]
exec_eng.display_treeview_nodes()
print([key for key in list(gather_disc3._data_in.keys())if key not in SoSDiscipline.NUM_DESC_IN], [
'scenario_list', 'scenario_1.o', 'scenario_2.o'])
print(list(gather_disc3._data_out.keys()), ['o_dict'])
```
---

created by Fernando Perez ( https://www.youtube.com/watch?v=g8xQRI3E8r8 )

# Prerequisites2 : Python Data Science Environment
# Learning Plan
### Lesson 2-1: IPython
In this lesson, you learn how to use IPython.
### Lesson 2-2: Jupyter
In this lesson, you learn how to use Jupyter.
### Lesson 2-3: Conda
In this lesson, you learn how to use Conda.
# Lesson 2-1: IPython
Interactive Python
- A powerful interactive shell.
- A kernel for [Jupyter](https://jupyter.org).
- Support for interactive data visualization and use of [GUI toolkits](http://ipython.org/ipython-doc/stable/interactive/reference.html#gui-event-loop-support).
- Flexible, [embeddable interpreters](http://ipython.org/ipython-doc/stable/interactive/reference.html#embedding-ipython)
- Easy to use, high performance tools for [parallel computing](https://ipyparallel.readthedocs.io/en/latest/).
## 2-1-1 : Magic commands
Magic commands, prefixed with `%`, give you access to many useful features.
### Timing execution
```
%timeit [i**2 for i in range(1000)]
```
timeit runs the given line many times and measures the execution time. It does not print the result of the code.
`%%` is called a cell magic and applies to the whole cell. It must be written on the first line of the cell.
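Outside IPython, the same measurement can be done with the standard-library `timeit` module (an equivalent sketch):

```python
import timeit

# equivalent of: %timeit [i**2 for i in range(1000)]
t = timeit.timeit("[i**2 for i in range(1000)]", number=1000)
print(f"{t / 1000 * 1e6:.1f} µs per loop")
```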
```
%%timeit
[i**2 for i in range(1000)]
```
- % : line magic
- %% : cell magic
### History
```
%history
```
### Writing a cell's contents to a file
%%writefile [filename]
```
%%writefile hello.py
def hello():
    print('Hello world')
hello()
```
### Running a file
```
%run hello.py
```
### Reloading modules
Once a module has been loaded, Python does not reload it even if you run the import statement again.
This can be very inconvenient when you keep editing your own module and need to import it again.
A magic command changes this behavior; it is not built in, but provided as an extension.
```
from hello import hello
hello()
%%writefile hello.py
def hello():
    print('Bye world')
hello()
from hello import hello
hello()
%load_ext autoreload
%autoreload 2
from hello import hello
hello()
```
%autoreload : Reload all modules (except those excluded by %aimport) automatically now.
%autoreload 0 : Disable automatic reloading.
%autoreload 1 : Reload all modules imported with %aimport every time before executing the Python code typed.
%autoreload 2 : Reload all modules (except those excluded by %aimport) every time before executing the Python code typed.
ref : https://ipython.org/ipython-doc/3/config/extensions/autoreload.html
In my experience, %autoreload 2 is the most convenient.
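Outside IPython, the stdlib's `importlib` lets you do the same reload by hand; a plain-Python sketch of what %autoreload automates (the module name `hello_mod` is made up for this example):

```python
# Sketch: write a module, import it, edit it on disk, then reload it.
import importlib
import os
import pathlib
import sys
import tempfile
import time

tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)
mod_path = pathlib.Path(tmp) / 'hello_mod.py'

mod_path.write_text("def hello():\n    return 'Hello world'\n")
import hello_mod
first = hello_mod.hello()

# Edit the module on disk, then force a reload by hand
mod_path.write_text("def hello():\n    return 'Bye world'\n")
os.utime(mod_path, (time.time() + 5, time.time() + 5))  # guarantee a newer mtime
importlib.invalidate_caches()
hello_mod = importlib.reload(hello_mod)
second = hello_mod.hello()
print(first, second)
```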
### Displaying plots in the notebook
The most traditional plotting library in Python is matplotlib.
matplotlib does not itself provide a way to display plots inline in the notebook,
so by default nothing appears in the notebook.
A magic command makes this possible.
```
%matplotlib inline
from matplotlib import pyplot as plt
plt.plot([1,2,3,4,5,6,7,8,9,10], [4,6,3,6,12,3,8,4,2,9])
```
### Listing all magic commands
```
%magic
```
## 2-1-2 : Shell Commands
You can run Command Line Interface (CLI) commands from inside Python.
```
!pip --version
!date
```
Frequently used commands also work without the exclamation mark (!).
(They are actually magic commands.)
```
cd ..
pwd
ls
```
## 2-1-3 : Help & Tab completion
```
ran
range()
open('02_')
from random import
from random import randint
range?
```
### Inspecting the implementation directly
```
randint??
```
## 2-1-4 : Previous commands & results
### Last result
```
79 * 94
_
```
### Second-to-last result
```
92 * 21
13 * 93
__
```
### Previous commands & results
```
In[30]
Out[30]
```
## 2-1-5 : pdb
Python Debugger
```
def buggy_function(numbers):
    length = len(numbers)
    for k, i in enumerate(range(length)):
        print(k, numbers[i+1] - numbers[i])
buggy_function([1,4,9,20,30])
%debug
```
**PDB commands**
- help : show help
- next : go to the next statement
- print : display a variable's value
- list : print the source listing; the current position is marked with an arrow
- where : print the call stack
- continue : continue execution; stops at the next breakpoint, or runs to the end if there is none
- step : step into the function call
- return : run until just before the current function returns
- !variable = value : reassign a value to a variable
- up : move one frame up the stack
- down : move one frame down the stack
- quit : exit pdb
### Example
```
buggy_function([1,3,4,5])
def exception_function(numbers):
    length = len(numbers)
    assert False
    for i in range(length):
        print(numbers[i+1] - numbers[i])
```
### Running pdb automatically
```
%pdb on
%pdb off
%pdb
```
# Lesson 2-2: Jupyter Notebook
## Why
A single chart or a single number is rarely enough to persuade people.

https://www.buzzfeed.com/jsvine/the-ferguson-area-is-even-more-segregated-than-you-thought?utm_term=.la9LbenExx#.yh7QWLg2rr
### Literate computing
computational reproducibility
http://blog.fperez.org/2013/04/literate-computing-and-computational.html
### Interesting Jupyter Notebooks
https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks
## Terminology
### Notebook Document or "notebook"
Notebook documents (or “notebooks”, all lower case) are documents produced by the Jupyter Notebook App, which contain both computer code (e.g. python) and rich text elements (paragraph, equations, figures, links, etc...). Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc..) as well as executable documents which can be run to perform data analysis.
References: Notebook documents [in the project homepage](http://ipython.org/notebook.html#notebook-documents) and [in the official docs](http://jupyter-notebook.readthedocs.io/en/latest/notebook.html#notebook-documents).
### Jupyter Notebook App
#### Server-client application for notebooks
The Jupyter Notebook App is a server-client application that allows editing and running notebook documents via a web browser. The Jupyter Notebook App can be executed on a local desktop requiring no internet access (as described in this document) or can be installed on a remote server and accessed through the internet.
In addition to displaying/editing/running notebook documents, the Jupyter Notebook App has a “Dashboard” (Notebook Dashboard), a “control panel” showing local files and allowing to open notebook documents or shutting down their kernels.
### Kernel
**Computational engine for notebooks**
A notebook kernel is a "computational engine" that executes the code contained in a notebook document. When you open a notebook document, the associated kernel is launched automatically; when the notebook is executed (cell-by-cell or all at once), the kernel performs the computation and produces the results.
### Notebook Dashboard
**Manager of notebooks**
The Notebook Dashboard is the component which is shown first when you launch Jupyter Notebook App. The Notebook Dashboard is mainly used to open notebook documents, and to manage the running kernels (visualize and shutdown).
The Notebook Dashboard has other features similar to a file manager, namely navigating folders and renaming/deleting files.
## Why the name
#### IPython Notebook -> Jupyter Notebook
- 2001 : IPython
- NumPy, SciPy, Matplotlib, pandas, etc.
- Around 2010 : IPython Notebook
- 2014 : Jupyter
### Language agnostic
IPython is just one of Jupyter's many kernels.

https://www.oreilly.com/ideas/the-state-of-jupyter
## Display
```
from IPython.display import YouTubeVideo
YouTubeVideo('xuNj5paMuow')
from IPython.display import Image
Image(url='https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/uploads/2017/05/Deep-Neural-Network-What-is-Deep-Learning-Edureka.png')
from IPython.display import Audio, IFrame, HTML
```
## Markdown

https://gist.github.com/ihoneymon/652be052a0727ad59601
## HTML
<table>
<tr>
<th>Month</th>
<th>Savings</th>
</tr>
<tr>
<td>January</td>
<td>100</td>
</tr>
<tr>
<td>February</td>
<td>80</td>
</tr>
<tr>
<td colspan="2">Sum: 180</td>
</tr>
</table>
## Latex
Inline
sigmoid : $ f(t) = \frac{1}{1+e^{-t}} $
Block
$$ f(t) = \frac{1}{1+e^{-t}} $$
## kernel control
- kernel interrupt : i i
- kernel restart : 0 0
## Widget
`conda install -y -c conda-forge ipywidgets`
```
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
```
### interact
Interactive!
```
def f(x):
    return x
interact(f, x=10);
interact(f, x=True);
interact(f, x='Hi there!');
```
### interact as a decorator
```
@interact(x=True, y=1.0)
def g(x, y):
    return (x, y)
```
### fixed
Fixing an argument
```
def h(p, q):
    return (p, q)
interact(h, p=5, q=fixed(20));
```
### Finer-grained control
```
interact(f, x=widgets.IntSlider(min=-10,max=30,step=1,value=10));
```
### interactive
Return the widget as an object instead of displaying it immediately
```
from IPython.display import display
def f(a, b):
    display(a + b)
    return a+b
w = interactive(f, a=10, b=20)
type(w)
w.children
display(w);
w.kwargs
w.result
```
### Linking two widgets
```
x_widget = widgets.FloatSlider(min=0.0, max=10.0, step=0.05)
y_widget = widgets.FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)
def update_x_range(*args):
    x_widget.max = 2.0 * y_widget.value
y_widget.observe(update_x_range, 'value')
def printer(x, y):
    print(x, y)
interact(printer,x=x_widget, y=y_widget);
```
### How it works

### Multiple widgets
```
from IPython.display import display
w = widgets.IntSlider()
display(w)
display(w)
w.value
w.value = 100
```
# Lesson 2-3: Conda
The ultimate tool for managing packages, dependencies and virtual environments: it covers Python as well as R, Ruby, Lua, Scala, Java, JavaScript, C/C++, FORTRAN and more.
## pip vs. conda
- pip : the Python package manager. It manages Python packages only.
  - It cannot handle dependencies that extend beyond Python.
- conda : both a package manager and a virtual-environment manager.
  - It manages packages in any language; Python is just one package among many.
  - Beyond packages, it can also create and manage virtual environments.
  - Written in Python.
## Managing virtual environments
### Creating a new virtual environment
`conda create --name tensorflow`
When you want a clean environment containing only Python:
`conda create --name tensorflow python`
When you want to pin the Python version:
`conda create --name tensorflow python=2.7`
### Activating a virtual environment
`source activate tensorflow`
### Registering a new kernel with Jupyter
`pip install ipykernel`
`python -m ipykernel install --user --name tensorflow --display-name "Python (TensorFlow)"`
### Leaving a virtual environment
`source deactivate`
### Listing virtual environments
`conda env list`
### Listing installed packages
`conda list`
## Miniconda
- Anaconda : ships with all the math and science packages included.
- Miniconda : contains only Python and a minimal set of packages; install just the packages you need with conda.
https://conda.io/miniconda.html
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).
# Data Structure Utilities
* slice
* range and xrange
* bisect
* sort
* sorted
* reversed
* enumerate
* zip
* list comprehensions
## slice
Slice selects a section of sequence types (lists, tuples, strings, NumPy arrays) with its arguments [start:end]: start is included, end is not. The number of elements in the result is end - start.

Image source: http://www.nltk.org/images/string-slicing.png
Slice 4 elements starting at index 6 and ending at index 9:
```
seq = 'Monty Python'
seq[6:10]
```
Omit start to default to start of the sequence:
```
seq[:5]
```
Omit end to default to end of the sequence:
```
seq[6:]
```
Negative indices slice relative to the end:
```
seq[-12:-7]
```
Slice can also take a step [start:end:step].
Get every other element:
```
seq[::2]
```
Passing -1 for the step reverses the sequence:
```
seq[::-1]
```
You can assign elements to a slice (note that the number of slots in the slice does not have to equal the number of elements assigned):
```
seq = [1, 1, 2, 3, 5, 8, 13]
seq[5:] = ['H', 'a', 'l', 'l']
seq
```
Compare the output of assigning into a slice (above) versus the output of assigning into an index (below):
```
seq = [1, 1, 2, 3, 5, 8, 13]
seq[5] = ['H', 'a', 'l', 'l']
seq
```
## range and xrange
Generate evenly spaced integers with range (or xrange in Python 2). Note: in Python 3, range returns a lazy range object rather than a list, and xrange is not available.
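A quick sketch of that laziness in Python 3: the object stores only start, stop and step, so its memory footprint is independent of its length.

```python
import sys

r = range(10)
print(list(r))          # materialize the values

# Size is constant no matter how long the range is:
big = range(10**9)
print(sys.getsizeof(big))
```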
Generate 10 integers:
```
range(10)
```
Range can take start, stop, and step arguments:
```
range(0, 20, 3)
```
It is very common to iterate through sequences by index with range:
```
seq = [1, 2, 3]
for i in range(len(seq)):
    val = seq[i]
    print(val)
```
For longer ranges in Python 2, xrange is recommended (its behavior is the default for range in Python 3). It returns an iterator that generates integers one by one rather than building them all at once in a large list.
```
total = 0
for i in range(100000):  # use xrange in Python 2
    if i % 2 == 0:
        total += 1
print(total)
```
## bisect
The bisect module does not check whether the list is sorted, as this check would be expensive O(n). Using bisect on an unsorted list will not result in an error but could lead to incorrect results.
```
import bisect
```
Find the location where an element should be inserted to keep the list sorted:
```
seq = [1, 2, 2, 3, 5, 13]
bisect.bisect(seq, 8)
```
Insert an element into a location to keep the list sorted:
```
bisect.insort(seq, 8)
seq
```
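To illustrate the warning above, a short sketch of bisect silently misbehaving on unsorted input:

```python
import bisect

# bisect assumes sorted input; on an unsorted list it raises no error but
# silently returns a meaningless insertion point.
unsorted_seq = [3, 1, 2]
wrong = bisect.bisect(unsorted_seq, 2)
right = bisect.bisect(sorted(unsorted_seq), 2)
print(wrong, right)
```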
## sort
Sort in-place O(n log n)
```
seq = [1, 5, 3, 9, 7, 6]
seq.sort()
seq
```
Sort by the secondary key of str length:
```
seq = ['the', 'quick', 'brown', 'fox', 'jumps', 'over']
seq.sort(key=len)
seq
```
## sorted
Return a new sorted list from the elements of a sequence O(n log n):
```
sorted([2, 5, 1, 8, 7, 9])
sorted('foo bar baz')
```
It's common to get a sorted list of unique elements by combining sorted and set:
```
seq = [2, 5, 1, 8, 7, 9, 9, 2, 5, 1]  # keep one type: Python 3 can't compare ints with tuples
sorted(set(seq))
```
## reversed
Iterate over the sequence elements in reverse order:
```
list(reversed(seq))
```
## enumerate
Get the index of a collection and the value:
```
strings = ['foo', 'bar', 'baz']
for i, string in enumerate(strings):
    print(i, string)
```
## zip
Pair up the elements of sequences to create a list of tuples:
```
seq_1 = [1, 2, 3]
seq_2 = ['foo', 'bar', 'baz']
list(zip(seq_1, seq_2))  # wrap in list() to materialize Python 3's lazy zip
```
Zip takes an arbitrary number of sequences. The number of elements it produces is determined by the shortest sequence:
```
seq_3 = [True, False]
list(zip(seq_1, seq_2, seq_3))
```
It is common to use zip for simultaneously iterating over multiple sequences combined with enumerate:
```
for i, (a, b) in enumerate(zip(seq_1, seq_2)):
    print('%d: %s, %s' % (i, a, b))
```
Zip can unzip a zipped sequence, which you can think of as converting a list of rows into a list of columns:
```
numbers = [(1, 'one'), (2, 'two'), (3, 'three')]
a, b = zip(*numbers)
a
b
```
## List Comprehensions
List comprehensions concisely form a new list by filtering the elements of a sequence and transforming the elements passing the filter. List comprehensions take the form:
```python
[expr for val in collection if condition]
```
Which is equivalent to the following for loop:
```python
result = []
for val in collection:
    if condition:
        result.append(expr)
```
Convert to upper case all strings that start with a 'b':
```
strings = ['foo', 'bar', 'baz', 'f', 'fo', 'b', 'ba']
[x.upper() for x in strings if x[0] == 'b']
```
List comprehensions can be nested:
```
list_of_tuples = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
[x for tup in list_of_tuples for x in tup]
```
## Dict Comprehension
A dict comprehension is similar to a list comprehension but returns a dict.
Create a mapping of strings and their locations in the list for strings that start with a 'b':
```
{index : val for index, val in enumerate(strings) if val[0] == 'b'}
```
## Set Comprehension
A set comprehension is similar to a list comprehension but returns a set.
Get the unique lengths of strings that start with a 'b':
```
{len(x) for x in strings if x[0] == 'b'}
```
# Alternative Models
In order to ensure the model used to make predictions for the analysis was the strongest candidate, I also trained & tested various other models that were good fits for the characteristics of our data.
Specifically, we also tested the following regression models:
1. Linear (Lasso Regularization)
2. Linear (Ridge Regularization)
3. SGD
4. Decision Tree
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
from sklearn.linear_model import SGDRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
%matplotlib inline
# Import & preview input variables
X = pd.read_csv('./output/model_X.csv')
X.head()
# Input & preview output variables
y = pd.read_csv('./output/model_y.csv', header=None, squeeze=True)
y.head()
# Split data into training & testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
```
## Linear (Lasso Regularization)
We're going to first try the simplest of the list by adding _Lasso regularization_ to the Linear Regression. The hope is that with the corrective properties (by penalizing complexity), we will be able to get substantially higher training & testing scores.
We're going to test the Lasso model with various alpha values to spot the config with optimal scores.
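Before running it, a quick aside on why Lasso can drive coefficients to exactly zero (which is what the `coeff_used` count below tracks): the L1 penalty acts through a soft-thresholding operation. A numpy-only sketch (the function name is ours, not sklearn's):

```python
import numpy as np

# Soft-thresholding: values within [-alpha, alpha] are set exactly to zero,
# which is why Lasso produces sparse coefficient vectors.
def soft_threshold(z, alpha):
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

shrunk = soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0)
print(shrunk)  # -3 and 2 shrink toward zero; -0.5 and 0.2 are zeroed out
```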
```
# Function to run Lasso model
def runLasso(alpha=1.0):
    """
    Compute the training & testing scores of the Linear Regression (with Lasso regularization)
    along with the count of non-zero coefficients used.
    Input:
        alpha: the degree of penalization for model complexity
    Output:
        alpha: the degree of penalization for model complexity
        train_scoreL: Training score
        test_scoreL: Testing score
        coeff_used: count of non-zero coefficients used in the model
    """
    # Instantiate & train
    lasso_reg = Lasso(alpha=alpha)
    lasso_reg.fit(X_train, y_train)
    # Predict testing data
    pred_train = lasso_reg.predict(X_train)
    pred_test = lasso_reg.predict(X_test)
    # Score
    train_scoreL = lasso_reg.score(X_train, y_train)
    test_scoreL = lasso_reg.score(X_test, y_test)
    coeff_used = np.sum(lasso_reg.coef_ != 0)
    print("Lasso Score (" + str(alpha) + "):")
    print(train_scoreL)
    print(test_scoreL)
    print(' ')
    print("Coefficients Used:")
    print(coeff_used)
    print('-------')
    return (alpha, train_scoreL, test_scoreL, coeff_used)

runLasso()
# Test the Lasso regularization for a range of alpha values
alpha_lasso = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]
for i in range(10):
    runLasso(alpha_lasso[i])
```
### Linear Regression (Lasso) Conclusion
The Lasso linear model does not seem to surpass the simple Linear Regression model trained (see "Airbnb NYC Data Exploration" notebook for details), which had scored 25.6% (training) and 20.2% (testing).
**Therefore, we will discount this as a superior modelling assumption**
## Linear (Ridge Regularization)
Similarly, we're going to test a Linear Regression with _Ridge regularization_. Since the dataset is non-sparse, the hypothesis is that we should get more from the L2 regularization's corrective properties for complexity (more than Lasso's L1 reg.)
We're going to test the Ridge model with various alpha values to spot the config with optimal scores.
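For intuition, Ridge has a closed-form solution: the penalty simply adds alpha to the diagonal of the normal equations. A small numpy sketch on synthetic data (the names and data here are illustrative, not part of the analysis, and sklearn's `normalize=True` adds an extra scaling step this sketch omits):

```python
import numpy as np

# Closed form: w = (X^T X + alpha * I)^(-1) X^T y
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

def ridge_closed_form(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w_ols = ridge_closed_form(X, y, 0.0)    # alpha=0 reduces to ordinary least squares
w_reg = ridge_closed_form(X, y, 100.0)  # a large alpha shrinks the coefficients
print(w_ols, w_reg)
```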
```
# Function to run Ridge model
def runRidge(alpha=1.0):
    """
    Compute the training & testing scores of the Linear Regression (with Ridge regularization)
    along with the count of non-zero coefficients used.
    Input:
        alpha: the degree of penalization for model complexity
    Output:
        alpha: the degree of penalization for model complexity
        train_score: Training score
        test_score: Testing score
        coeff_used: count of non-zero coefficients used in the model
    """
    # Instantiate & train
    rid_reg = Ridge(alpha=alpha, normalize=True)
    rid_reg.fit(X_train, y_train)
    # Predict testing data
    pred_train = rid_reg.predict(X_train)
    pred_test = rid_reg.predict(X_test)
    # Score
    train_score = rid_reg.score(X_train, y_train)
    test_score = rid_reg.score(X_test, y_test)
    coeff_used = np.sum(rid_reg.coef_ != 0)
    print("Ridge Score (" + str(alpha) + "):")
    print(train_score)
    print(test_score)
    print('-------')
    print("Coefficients Used:")
    print(coeff_used)
    return (alpha, train_score, test_score, coeff_used)

runRidge()
alpha_ridge = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3,1e-2, 1, 5, 10, 20]
for i in range(10):
    runRidge(alpha_ridge[i])
```
### Linear Regression (Ridge) Conclusion
The Ridge linear model also does not seem to surpass the simple Linear Regression model (see "Airbnb NYC Data Exploration" notebook for details), which had scored 25.6% (training) and 20.2% (testing).
**Therefore, we will discount Ridge regularization as a superior modelling assumption**
## Stochastic Gradient Descent (SGD) Regression
The SGD regression is different from the former two (Lasso, Ridge), which were based on a Linear Regression model. Since SGD applies the squared-error update one data point at a time (versus batch gradient descent, which looks at all points at once), I don't expect scores to differ too much from the previous two.
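A toy numpy illustration of that one-point-at-a-time update (not the sklearn implementation; data and learning rate here are made up):

```python
import numpy as np

# Stochastic gradient descent on least squares: update after each sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + 1.0          # noiseless line: slope 3, intercept 1

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):
        err = (w * X[i, 0] + b) - y[i]
        w -= lr * err * X[i, 0]  # gradient of the squared error w.r.t. w
        b -= lr * err            # ... and w.r.t. b
print(w, b)
```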
```
# Function to run SGD model
def runSGD():
    """
    Compute the training & testing scores of the SGD regressor
    along with the count of non-zero coefficients used.
    Output:
        train_score: Training score
        test_score: Testing score
        coeff_used: count of non-zero coefficients used in the model
    """
    # Instantiate & train
    sgd_reg = SGDRegressor(loss="squared_loss", penalty=None)
    sgd_reg.fit(X_train, y_train)
    # Predict testing data
    pred_train = sgd_reg.predict(X_train)
    pred_test = sgd_reg.predict(X_test)
    # Score
    train_score = sgd_reg.score(X_train, y_train)
    test_score = sgd_reg.score(X_test, y_test)
    coeff_used = np.sum(sgd_reg.coef_ != 0)
    print("SGD Score:")
    print(train_score)
    print(test_score)
    print('-------')
    print("Coefficients Used:")
    print(coeff_used)
    return (train_score, test_score, coeff_used)

runSGD()
```
### Stochastic Gradient Descent (SGD) Conclusion
The SGD model also does not seem to surpass the simple Linear Regression model (see "Airbnb NYC Data Exploration" notebook for details), which had scored 25.6% (training) and 20.2% (testing). In fact, the output training & testing scores are negative, indicative of terrible fit to the data.
**Therefore, we will discount SGD as a superior modelling assumption**
## Decision Trees
Unlike the former models, Decision Trees have a very different structure: the model builds a series of nodes & branches that maximize information gain. As a result, it is also the model most prone to overfitting.
To remedy the overfitting challenge, we'll run the Decision Tree model with the parameters below:
- max_depth
- min_samples_leaf
- min_samples_split
To isolate the effect of these parameters on scores, we'll change one at a time (i.e. keeping other parameters constant)
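For intuition about what the `'mse'` criterion optimizes at each node, here is a toy numpy sketch of the variance reduction a candidate split achieves (the function name and data are ours, for illustration only):

```python
import numpy as np

# A regression tree with the 'mse' criterion picks the split that most
# reduces the variance of the target within the resulting child nodes.
def mse_split_gain(y, left_mask):
    left, right = y[left_mask], y[~left_mask]
    weighted = (len(left) * left.var() + len(right) * right.var()) / len(y)
    return y.var() - weighted

y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])
good = mse_split_gain(y, np.array([True, True, True, False, False, False]))
bad = mse_split_gain(y, np.array([True, False, True, False, True, False]))
print(good, bad)  # separating the two clusters gives a much larger gain
```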
```
# Function to run Decision Trees
def runTree(max_depth=None, min_samples_leaf=1, min_samples_split=2):
    """
    Compute the training & testing scores of the Decision Tree regressor.
    Input:
        max_depth: maximum allowed depth of trees ("distance" between root & leaf)
        min_samples_leaf: minimum samples to contain per leaf
        min_samples_split: minimum samples to split a node
    Output:
        train_score: Training score
        test_score: Testing score
    """
    # Instantiate & train
    tree_reg = DecisionTreeRegressor(criterion='mse', max_depth=max_depth, min_samples_leaf=min_samples_leaf, min_samples_split=min_samples_split)
    tree_reg.fit(X_train, y_train)
    # Predict testing data
    pred_train = tree_reg.predict(X_train)
    pred_test = tree_reg.predict(X_test)
    # Score
    train_score = tree_reg.score(X_train, y_train)
    test_score = tree_reg.score(X_test, y_test)
    print("Tree Score (" + str(max_depth) + ', ' + str(min_samples_leaf) + ', ' + str(min_samples_split) + "):")
    print(train_score)
    print(test_score)
    print('-------')

runTree()
depths = [2, 5, 6, 7, 8]
for dep in depths:
    runTree(dep)
min_leafs = [2, 4, 6, 8, 10, 12, 14, 16]
for lfs in min_leafs:
    runTree(7, lfs)
min_splits = [2, 4, 6, 8, 10]
for splt in min_splits:
    runTree(7, 14, splt)
for dep in depths:
    for lfs in min_leafs:
        runTree(dep, lfs)
```
### Decision Tree Conclusion
Unlike the rest, the Decision Tree model does seem to surpass the simple Linear Regression model (see "Airbnb NYC Data Exploration" notebook for details), which had scored 25.6% (training) and 20.2% (testing).
Based on the tests, seems there is a maximum point where testing error is minimized. Also notable is the fact that the training & testing score seem to be inversely correlated.
```
tree_reg = DecisionTreeRegressor(criterion='mse', max_depth=8, min_samples_leaf=16, min_samples_split=2)
tree_reg.fit(X_train, y_train)
# Predict testing data
pred_train = tree_reg.predict(X_train)
pred_test = tree_reg.predict(X_test)
# Import prediction input
df_nei_Manhattan_EV = pd.read_csv('./data/input/pred_input_Manhattan_EV.csv')
df_nei_Manhattan_HA = pd.read_csv('./data/input/pred_input_Manhattan_HA.csv')
df_nei_Manhattan_HK = pd.read_csv('./data/input/pred_input_Manhattan_HK.csv')
df_nei_Manhattan_UWS = pd.read_csv('./data/input/pred_input_Manhattan_UWS.csv')
df_nei_Brooklyn_BS = pd.read_csv('./data/input/pred_input_Brooklyn_BS.csv')
df_nei_Brooklyn_BU = pd.read_csv('./data/input/pred_input_Brooklyn_BU.csv')
df_nei_Brooklyn_WI = pd.read_csv('./data/input/pred_input_Brooklyn_WI.csv')
df_nei_Queens_AS = pd.read_csv('./data/input/pred_input_Queens_AS.csv')
df_nei_Queens_LI = pd.read_csv('./data/input/pred_input_Queens_LI.csv')
avgRev_Manhattan_EV = round(tree_reg.predict(df_nei_Manhattan_EV)[0],2)
avgRev_Manhattan_HA = round(tree_reg.predict(df_nei_Manhattan_HA)[0],2)
avgRev_Manhattan_HK = round(tree_reg.predict(df_nei_Manhattan_HK)[0],2)
avgRev_Manhattan_UWS = round(tree_reg.predict(df_nei_Manhattan_UWS)[0],2)
avgRev_Brooklyn_BS = round(tree_reg.predict(df_nei_Brooklyn_BS)[0],2)
avgRev_Brooklyn_BU = round(tree_reg.predict(df_nei_Brooklyn_BU)[0],2)
avgRev_Brooklyn_WI = round(tree_reg.predict(df_nei_Brooklyn_WI)[0],2)
avgRev_Queens_AS = round(tree_reg.predict(df_nei_Queens_AS)[0],2)
avgRev_Queens_LI = round(tree_reg.predict(df_nei_Queens_LI)[0],2)
print("--------Manhattan---------")
print(avgRev_Manhattan_EV)
print(avgRev_Manhattan_HA)
print(avgRev_Manhattan_HK)
print(avgRev_Manhattan_UWS)
print("")
print("--------Brooklyn---------")
print(avgRev_Brooklyn_BS)
print(avgRev_Brooklyn_BU)
print(avgRev_Brooklyn_WI)
print("")
print("--------Queens---------")
print(avgRev_Queens_AS)
print(avgRev_Queens_LI)
# Import prediction input
df_nei_1_1 = pd.read_csv('./data/input/pred_input_Manhattan_EV.csv')
df_nei_2_2 = pd.read_csv('./data/input/pred_input_Manhattan_EV_2bed_2_bath.csv')
df_nei_2_1 = pd.read_csv('./data/input/pred_input_Manhattan_EV_2bed_1_bath.csv')
avgRev_1_1 = tree_reg.predict(df_nei_1_1)[0]
avgRev_2_2 = tree_reg.predict(df_nei_2_2)[0]
avgRev_2_1 = tree_reg.predict(df_nei_2_1)[0]
print(round(avgRev_1_1,2))
print(round(avgRev_2_2,2))
print(round(avgRev_2_1,2))
```
# Conclusion
Based on these parameters, it seems the best-scoring model (the Decision Tree) was a bit too generalized, making the same prediction for variations (e.g. 1 bedroom vs 2 bedrooms).
## Setup
```
%matplotlib qt
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path
import os
Path('mnist_distribution').mkdir(exist_ok=True)
os.chdir('mnist_distribution')
# Load MNIST and concatenate the train and test data
(x_train, _), (x_test, _) = mnist.load_data()
data = np.concatenate((x_train, x_test))
```
## 1 Mean pixel value
```
mean = np.mean(data, axis=0)
var = np.sqrt(np.var(data, axis=0))  # square root of the variance, i.e. the standard deviation
fig, axs = plt.subplots(1, 2)
ax = axs[0]
ax.imshow(mean, cmap='gray', vmin=0, vmax=255, interpolation='nearest')
ax.axis(False)
ax.set_title('Mean')
ax = axs[1]
pcm = ax.imshow(var, cmap='gray', vmin=0, vmax=255, interpolation='nearest')
ax.axis(False)
ax.set_title('Variance')
plt.colorbar(pcm, ax=axs, shrink=0.5)
fig.savefig('mnist_mean_var.pdf', bbox_inches='tight', pad_inches=0)
```
## 2 Pixel value probability distribution
### 2.1 Plot single pixel distribution
```
px = 14
py = 14
pixels = data[:, px, py]
values = np.arange(256)
probs = np.zeros(256)
unique, count = np.unique(pixels, return_counts=True)
for px_value, n_ocurrences in zip(unique, count):
    probs[px_value] = 100 * n_ocurrences / data.shape[0]
fig = plt.figure()
plt.plot(values, probs, linewidth=1)
plt.xlabel('Pixel Value')
plt.ylabel('Probability (%)')
plt.grid()
fig.savefig('mnist_dist_pixel_%dx%d.pdf' % (px, py), bbox_inches='tight')
```
### 2.1 Plotting only column distribution
```
def get_column_distribution(data, column_index):
    columns = data[:, :, column_index]
    total = columns.shape[0]
    n_lines = columns.shape[1]
    x = np.arange(n_lines)
    y = np.arange(256)
    z = np.zeros((256, n_lines))
    # Iterate through each pixel, calculating its probability distribution
    for i in range(n_lines):
        unique, count = np.unique(columns[:, i], return_counts=True)
        for px_value, n_ocurrences in zip(unique, count):
            z[px_value][i] = n_ocurrences / total
    return x, y, z
def plot_column_distribution(x, y, z):
    n_lines = x.shape[0]
    X, Y = np.meshgrid(x, y)
    Z = 100 * z
    fig = plt.figure()
    ax = plt.axes(projection='3d')
    ax.view_init(10, 35)
    ax.contour3D(X, Y, Z, n_lines, cmap='viridis', zdir='x')
    ax.set_xlabel('Line')
    ax.set_ylabel('Pixel Value')
    ax.set_zlabel('Probability (%)')
    ax.set_zlim((0, 100))
    return fig
for column_index in [0, 12, 15]:
    x, y, z = get_column_distribution(data, column_index)
    fig = plot_column_distribution(x, y, z)
    fig.savefig('mnist_dist_column_%d.pdf' % column_index, bbox_inches='tight', pad_inches=0)
```
### 2.2 Plotting distribution with image reference
```
def high_light_mnist_column(image, column_index):
    alpha = np.full_like(image, 50)[..., np.newaxis]
    alpha[:, column_index, :] = 255
    image = np.repeat(image[:, :, np.newaxis], 3, axis=2)
    return np.append(image, alpha, axis=2)
def plot_column_distribution_and_highlight(x, y, z, highlight):
    n_lines = x.shape[0]
    X, Y = np.meshgrid(x, y)
    Z = 100 * z
    fig = plt.figure(figsize=(10, 10))
    fig.tight_layout()
    plt.subplot(323)
    plt.imshow(highlight, cmap='gray', vmin=0, vmax=255, interpolation='nearest')
    plt.axis('off')
    ax = plt.subplot(122, projection='3d')
    ax.view_init(10, 35)
    ax.contour3D(X, Y, Z, n_lines, cmap='viridis', zdir='x')
    ax.set_xlabel('Line')
    ax.set_ylabel('Pixel Value')
    ax.set_zlabel('Probability (%)')
    ax.set_zlim((0, 100))
    return fig
plt.ioff()
image = data[0]
for column_index in range(28):
    x, y, z = get_column_distribution(data, column_index)
    highlight = high_light_mnist_column(image, column_index)
    fig = plot_column_distribution_and_highlight(x, y, z, highlight)
    # Save as pdf to get the nicest quality
    fig.savefig('mnist_highlight_dist_column_%d.pdf' % column_index, bbox_inches='tight', pad_inches=0)
    # Save as png to convert images to video or gif
    fig.savefig('mnist_highlight_dist_column_%d.png' % column_index, bbox_inches='tight', pad_inches=0, dpi=196)
    plt.close(fig)
```
## 3 Sampling from pixel distributions
```
def get_cumulative_distribution(data):
    total, n_lines, n_columns = data.shape
    dist = np.zeros((n_lines, n_columns, 256))
    # Iterate through each pixel, calculating its cumulative probability distribution
    for i in range(n_lines):
        for j in range(n_columns):
            values = dist[i, j, :]
            unique, count = np.unique(data[:, i, j], return_counts=True)
            for px_value, n_ocurrences in zip(unique, count):
                values[px_value] = n_ocurrences
            for px_value in range(1, 256):
                values[px_value] += values[px_value - 1]
            values /= total
    return dist

def sample_dist(dist):
    p = np.random.uniform()
    return np.searchsorted(dist, p)
dist = get_cumulative_distribution(data)
SEED = 279923 # https://youtu.be/nWSFlqBOgl8?t=86 - I love this song
np.random.seed(SEED)
images = np.zeros((3, 28, 28))
for img in images:
    for i in range(28):
        for j in range(28):
            img[i, j] = sample_dist(dist[i, j])
fig = plt.figure()
for i, img in enumerate(images):
    plt.subplot(1, 3, i + 1)
    plt.imshow(img, cmap='gray', vmin=0, vmax=255, interpolation='nearest')
    plt.axis(False)
fig.savefig('mnist_simple_samples.pdf', bbox_inches='tight', pad_inches=0)
```
# Plots for litholog paper
Using the demo data provided in the `litholog` release, this notebook demonstrates data import, plotting and simple statistics of bed thickness and grain size.
## Import packages
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import litholog
from litholog import utils, Bed, wentworth
from litholog.sequence import io, BedSequence
from striplog import Component
```
## Import data
```
# Converts 'string' arrays to numpy
transforms = {c : utils.string2array_matlab for c in ['depth_m',
'grain_size_mm']}
# Read the demo data
df = pd.read_csv('../data/demo_data.csv', converters=transforms)
# make a grain size column
df['grain_size_psi'] = df.grain_size_mm.apply(lambda x: np.round(litholog.wentworth.gs2psi(x),4))
df['mean_gs_psi'] = df.mean_gs_mm.apply(lambda x: np.round(litholog.wentworth.gs2psi(x),4))
# Columns shared by whole sequences (i.e., shared by an entire graphic log)
METACOLS = ['name', 'collection', 'ng', 'ar']
# Columns of bed-level data, including the psi column we just made
DATACOLS = ['th', 'gs_tops_mm', 'snd_shl', 'depth_m',
'gs_tops_mm', 'mean_gs_mm', 'max_gs_mm', 'grain_size_mm','grain_size_psi','mean_gs_psi']
# Convert dataframe to a list of `BedSequence`s
seqs = []
for group, values in df.groupby('name'):
    seqs.append(
        BedSequence.from_dataframe(
            values,
            thickcol='th',
            component_map=litholog.defaults.DEFAULT_COMPONENT_MAP,
            metacols=METACOLS,
            datacols=DATACOLS,
        )
    )
# Show name + eod + number of beds of each
print(len(seqs),'logs imported as BedSequences')
[(s.metadata['name'], len(s),'Beds') for s in seqs]
```
## Extract data and make plots
### Figure 4
This figure makes a KDE plot of all the sand and mud thickness data
```
# get some data out of BedSequence using list comprehension
all_gravel_th = [s.get_field('th', lithology='gravel') for s in seqs]
all_sand_th = [s.get_field('th', lithology='sand') for s in seqs]
all_mud_th = [s.get_field('th', lithology='mud') for s in seqs]
all_mud_th[-3]=[] # one log doesn't have any muds, super amalgamated
all_gravel_th[0:7]=[] # lots of logs dont have gravel
# make a quick plot of log10 bed thickness
fig, ax = plt.subplots(figsize=[6,4])
ax.set(facecolor = '#F0F0F0')
sns.kdeplot(np.log10(np.concatenate(all_mud_th, axis=0)), ax=ax,
cumulative=False, shade=True, alpha=0.2, color="xkcd:light brown", linewidth=1)
sns.kdeplot(np.log10(np.concatenate(all_sand_th, axis=0)), ax=ax,
cumulative=False, shade=True, alpha=0.6, color="xkcd:light yellow", linewidth=1)
sns.kdeplot(np.log10(np.concatenate(all_gravel_th, axis=0)), ax=ax,
cumulative=False, shade=True, alpha=0.2, color="xkcd:tangerine", linewidth=1)
sns.kdeplot(np.log10(np.concatenate(all_mud_th, axis=0)), ax=ax,
cumulative=True, shade=False, color="xkcd:light brown", linewidth=5, label='mud (n=360)')
sns.kdeplot(np.log10(np.concatenate(all_sand_th, axis=0)), ax=ax,
cumulative=True, shade=False, color="xkcd:light yellow", linewidth=5, label='sand (n=585)')
sns.kdeplot(np.log10(np.concatenate(all_gravel_th, axis=0)), ax=ax,
cumulative=True, shade=False, color="xkcd:tangerine", linewidth=5, label='gravel (n=98)')
ax.set_xlabel('Bed Thickness (m)',fontsize=12)
ax.set_xlim(-3, 2)
ax.set_xticks(range(-3, 3))
ax.set_xticklabels([10**p for p in range(-3, 3)], fontsize=12, rotation=45)
ax.set_ylabel('Frequency',fontsize=12)
ax.set_title('Thickness Distribution',fontsize=16)
ax.legend(loc='upper left')
fig.savefig('fig4.svg',format='svg')
```
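The `sns.kdeplot` calls above fit Gaussian kernel density estimates to the log-thickness values. The estimator itself is simple enough to sketch in pure Python; the fixed bandwidth `h` here stands in for seaborn's rule-of-thumb choice:

```python
import math

def gaussian_kde(data, h):
    """Return a density function: the mean of Gaussian bumps of width h centred on each datum."""
    n = len(data)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)
    return density
```

Evaluating `density` on a grid of x values gives the curve that seaborn shades.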
## Comparison of Cerro Toro and Skoorsteenberg formations
```
# subset the BedSequences
krf = seqs[0:4] # krf is for Kleine Reit Fontein, Skoorsteenberg Fm
sdt = seqs[7:12] # sdt is for Sierra del Toro, Cerro Toro Fm
# Get some data out of the BedSequences
sdt_sand_th = [s.get_field('th', lithology='sand') for s in sdt] # list comprehension is the best
krf_sand_th = [s.get_field('th', lithology='sand') for s in krf]
sdt_mud_th = [s.get_field('th', lithology='mud') for s in sdt]
krf_mud_th = [s.get_field('th', lithology='mud') for s in krf]
sdt_gravel_th = [s.get_field('th', lithology='gravel') for s in sdt]
```
### Figure 5 - box plot comparing SDT and KRF
```
fig, ax = plt.subplots(1,5, figsize=[8,3.5], sharey=True)
for axnum in ax:
    axnum.set(facecolor='#F0F0F0')
sns.boxplot(data=sdt_gravel_th, ax=ax[0],color='xkcd:tangerine').set_title('SDT gravel')
sns.boxplot(data=sdt_sand_th, ax=ax[1],color='xkcd:light yellow').set_title('SDT sand')
sns.boxplot(data=krf_sand_th, ax=ax[2],color='xkcd:light yellow').set_title('KRF sand')
sns.boxplot(data=sdt_mud_th, ax=ax[3],color='xkcd:light brown').set_title('SDT mud')
sns.boxplot(data=krf_mud_th, ax=ax[4],color='xkcd:light brown').set_title('KRF mud')
for a in ax: a.set_yscale('log')
# make name labels
sdt_names = [s.metadata['name'] for s in sdt]
krf_names = [s.metadata['name'] for s in krf]
ax[0].set_xticklabels(sdt_names,fontsize=8,rotation=90)
ax[1].set_xticklabels(sdt_names,fontsize=8,rotation=90)
ax[2].set_xticklabels(krf_names,fontsize=8,rotation=90)
ax[3].set_xticklabels(sdt_names,fontsize=8,rotation=90)
ax[4].set_xticklabels(krf_names,fontsize=8,rotation=90)
ax[0].set_ylabel('Bed or interval thickness (m)',fontsize=12)
fig.suptitle('Comparing Cerro Toro (SDT) and Skoorsteenberg (KRF) thicknesses',fontsize=12, y=1.02)
fig.savefig('fig5.svg',format='svg')
```
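Each box in Figure 5 summarises one thickness distribution by its quartiles. The numbers seaborn draws can be reproduced with the standard library (whisker fences at the conventional 1.5 IQR):

```python
import statistics

def boxplot_stats(values):
    """Median, quartiles, and 1.5*IQR whisker fences, as drawn by a standard box plot."""
    q1, q2, q3 = statistics.quantiles(values, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    return {
        "median": q2,
        "q1": q1,
        "q3": q3,
        "lo_fence": q1 - 1.5 * iqr,
        "hi_fence": q3 + 1.5 * iqr,
    }
```

Observations beyond the fences are the points a box plot flags as outliers.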
### Figure 6 KDEs for Cerro Toro
```
# compare grain size values
fig, ax = plt.subplots(figsize=[6,4])
ax.set(facecolor = '#F0F0F0')
for s in sdt[:-1]:
    sns.kdeplot(s.get_field('mean_gs_psi', lithology='sand'), ax=ax,
                cumulative=False, color="xkcd:yellow", label='sand')
for s in sdt[:-1]:
    sns.kdeplot(s.get_field('mean_gs_psi', lithology='gravel'), ax=ax,
                cumulative=False, color="xkcd:tangerine", label='gravel')
# then plot the SSM log with dashed line
sns.kdeplot(sdt[-1].get_field('mean_gs_psi', lithology='sand'), ax=ax,
cumulative=False, color="xkcd:yellow", linestyle = '--', linewidth=2, label='sand')
sns.kdeplot(sdt[-1].get_field('mean_gs_psi', lithology='gravel'), ax=ax,
cumulative=False, color="xkcd:tangerine", linestyle = '--', linewidth=2, label='gravel')
#hand, labl = ax.get_legend_handles_labels()
#plt.legend(np.unique(labl))
hand, labl = ax.get_legend_handles_labels()
handout=[]
lablout=[]
for h, l in zip(hand, labl):
    if l not in lablout:
        lablout.append(l)
        handout.append(h)
#ax.get_legend().remove() # get rid of it first
ax.legend(handout, lablout,loc=[0.8,0.75],fontsize=9) # then add it back
# labels
ax.set_xlabel('Mean grain size of bed ('+r'$\psi$'+')',fontsize=14)
ax.set_xlim(-7, 13)
ax.set_xticks(np.arange(-7,14,2)); # could also use "wentworth.fine_scale" to label these with text labels
ax.axvline(-4,color='xkcd:grey',linewidth=0.5)
ax.axvline(1,color='xkcd:grey',linewidth=0.5)
ax.set_ylim(0,0.6)
ax.text(-6,0.57,'silt')
ax.text(-2,0.57,'sand')
ax.text(6,0.57,'gravel')
ax.annotate("sand and gravel \nfrom log in Fig. 3", xy=(-1, 0.35), xytext=(5, 0.3),arrowprops=dict(arrowstyle="->"))
ax.annotate("", xy=(7, 0.15), xytext=(5, 0.3),arrowprops=dict(arrowstyle="->"))
ax.set_title('Cerro Toro sand & conglomerate grain size distribution',fontsize=12)
fig.savefig('fig6.svg',format='svg')
```
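The handle/label de-duplication loop used for the Figure 6 legend is a recurring matplotlib chore; the same logic fits in a reusable helper (dicts preserve insertion order in Python 3.7+):

```python
def dedupe_legend(handles, labels):
    """Keep the first handle for each distinct label, preserving first-seen order."""
    seen = {}
    for handle, label in zip(handles, labels):
        seen.setdefault(label, handle)
    return list(seen.values()), list(seen.keys())
```

Then `ax.legend(*dedupe_legend(*ax.get_legend_handles_labels()))` replaces the whole loop.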
### Another plot and KS test for the paper (not used as a fig though)
```
# Now let's concatenate each list into one big array, take the log10 of it, and plot it
fig, ax = plt.subplots(2, 1, figsize=[5,7], sharex=True)
sns.kdeplot(np.log10(np.concatenate(krf_sand_th, axis=0)),
ax=ax[0], color="xkcd:golden yellow", cumulative=True, linewidth=3,
label='KRF sand')
sns.kdeplot(np.log10(np.concatenate(krf_mud_th, axis=0)),
ax=ax[0], color="xkcd:grey", cumulative=True, linewidth=3,
label='KRF mud')
sns.kdeplot(np.log10(np.concatenate(sdt_sand_th, axis=0)),
ax=ax[1], color="xkcd:golden yellow", cumulative=True, linewidth=3,
label='SDT sand')
sdt_mud_th[3] = []  # this log's mud thicknesses include zero values, which break the log10 below
sns.kdeplot(np.log10(np.concatenate(sdt_mud_th, axis=0)),
ax=ax[1], color="xkcd:grey", cumulative=True, linewidth=3,
label='SDT mud')
sns.kdeplot(np.log10(np.concatenate(sdt_gravel_th, axis=0)),
ax=ax[1], color="xkcd:brick", cumulative=True, linewidth=3,
label='SDT conglomerate');
ax[1].set_xlabel('Bed Thickness (m)',fontsize=14)
ax[1].set_xlim(-3, 2)
ax[1].set_xticks(np.arange(-3,3,1))
ax[1].set_xticklabels([10**p for p in range(-3, 3)],fontsize=12,rotation=45)
ax[1].legend(loc='upper left',fontsize=9)
# KS test
sand_ks_result = stats.ks_2samp(np.log10(np.concatenate(sdt_sand_th, axis=0)),
np.log10(np.concatenate(krf_sand_th, axis=0)))
sand_ks_str = 'Comparing SDT to KRF\nsand thickness:\n' + ' KS-test p value = ' + str(sand_ks_result[1].round(5))
ax[0].text(0.5,0.2, sand_ks_str, ha='center')
mud_ks_result = stats.ks_2samp(np.log10(np.concatenate(sdt_mud_th, axis=0)),
np.log10(np.concatenate(krf_mud_th, axis=0)))
mud_ks_str = 'Comparing SDT to KRF\nmud thickness:\n' + ' KS-test p value = ' + str(mud_ks_result[1].round(13))
ax[0].text(0.5,0.5, mud_ks_str, ha='center')
ax[0].set_title('Comparing Cerro Toro (SDT) and\nSkoorsteenberg (KRF) bed thicknesses',fontsize=11)
plt.tight_layout()
plt.show()
mud_ks_result
```
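`stats.ks_2samp` compares two empirical CDFs. Leaving the p-value aside, the KS statistic D is just the largest vertical gap between those two step functions, which can be computed without scipy:

```python
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov D: max |ECDF_a(x) - ECDF_b(x)| over pooled values."""
    a, b = sorted(a), sorted(b)
    return max(
        abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
        for x in a + b
    )
```

D ranges from 0 (identical samples) to 1 (completely non-overlapping samples).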
This approach checks whether some data preprocessing improves the results.
```
import pandas as pd
import re
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report, f1_score
import preprocessing as pp
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
new_data = pd.read_csv('data/new_dataset.csv')
msr_data = pd.read_csv('data/msr_dataset.csv', encoding='cp1252')  # 'ANSI' is not a Python codec name; cp1252 is the usual Windows equivalent
new_data.head()
msr_data.head()
def preprocess(token):
    token = str(token)
    token_lowered = token.lower()
    token_lowered = re.sub(r'(\brow\b)|(\btable\b)|(\binsert\b)|(\bid\b)', 'dbms', token_lowered)
    url_pattern = r'(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]+\.[^\s]{2,}|www\.[a-zA-Z0-9]+\.[^\s]{2,})'
    token_url = re.sub(url_pattern, 'urllink', token_lowered)
    date_pattern = r'([12]\d{3}/(0[1-9]|1[0-2])/(0[1-9]|[12]\d|3[01]))|([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))'
    token_dates = re.sub(date_pattern, 'datetime', token_url)
    ip_pattern = r'(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])'
    token_ip = re.sub(ip_pattern, ' ipaddrezz ', token_dates)
    token_num = re.sub(r'\b[0-9]+\b', 'numeric', token_ip)
    special_character_pattern = r'[^A-Za-z0-9]+'
    token_char = re.sub(special_character_pattern, ' ', token_num)
    return token_char
msr_data['clean_token'] = msr_data['token'].map(preprocess)
new_data['clean_token'] = new_data['token'].map(preprocess)
msr_data[['token', 'clean_token']][1500:1700]
y_msr = msr_data['class']
y_new = new_data['class']
X_train, X_test, y_train, y_test = train_test_split(msr_data['clean_token'],
y_msr, train_size=0.8,
random_state=33, shuffle=True)
text_clf2 = Pipeline([
('vectorizer', CountVectorizer(ngram_range=(1,10))),
('model', MultinomialNB())])
text_clf2.fit(X_train, y_train)
preds = text_clf2.predict(X_test)
print(classification_report(y_test, preds))
new_preds = text_clf2.predict(new_data['clean_token'])
print(classification_report(y_new, new_preds))
print('f1', f1_score(y_new, new_preds))
```
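The `preprocess` function above chains several regex substitutions. The core idea, replacing whole token classes with placeholder words before vectorising, can be isolated in a small sketch (deliberately simplified patterns, not the exact ones used above):

```python
import re

def normalize(text):
    """Replace URLs, IPv4 addresses, and bare numbers with placeholder tokens."""
    text = text.lower()
    text = re.sub(r'https?://\S+|www\.\S+', ' urllink ', text)
    text = re.sub(r'\b(?:\d{1,3}\.){3}\d{1,3}\b', ' ipaddrezz ', text)
    text = re.sub(r'\b\d+\b', ' numeric ', text)
    return re.sub(r'\s+', ' ', text).strip()
```

The order matters: IP addresses are collapsed before the bare-number rule can split their octets.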
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Gena/map_get_center.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_get_center.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_get_center.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception:
    ee.Authenticate()
    ee.Initialize()
```
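The try/except install pattern in the cell above can be generalised into a small helper (a sketch only; in notebooks, `%pip install` is the other common route):

```python
import importlib.util
import subprocess
import sys

def ensure_package(module_name, pip_name=None):
    """Import-or-install: pip-install the package only if the module is missing."""
    if importlib.util.find_spec(module_name) is None:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module_name]
        )
    return importlib.util.find_spec(module_name) is not None
```

Using `sys.executable -m pip` guarantees the package lands in the same environment the notebook kernel runs in.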
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# add some data to the Map
dem = ee.Image("AHN/AHN2_05M_RUW")
Map.addLayer(dem, {'min': -5, 'max': 50, 'palette': ['000000', 'ffffff'] }, 'DEM', True)
# zoom in somewhere
Map.setCenter(4.4585, 52.0774, 14)
# TEST
center = Map.getCenter()
# add the center point to the map
Map.addLayer(center, {'color': 'red'}, 'center')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Advanced Matplotlib Concepts Lecture
In this lecture we cover some more advanced topics that you won't use as often. You can always reference the documentation for more resources!
#### Logarithmic scale
It is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales is set separately using the `set_xscale` and `set_yscale` methods, which accept one parameter (with the value "log" in this case):
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

# Sample data used throughout this lecture
x = np.linspace(0, 5, 100)

fig, axes = plt.subplots(1, 2, figsize=(10,4))
axes[0].plot(x, x**2, x, np.exp(x))
axes[0].set_title("Normal scale")
axes[1].plot(x, x**2, x, np.exp(x))
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)");
```
### Placement of ticks and custom tick labels
We can explicitly determine where we want the axis ticks with `set_xticks` and `set_yticks`, which both take a list of values for where on the axis the ticks are to be placed. We can also use the `set_xticklabels` and `set_yticklabels` methods to provide a list of custom text labels for each tick location:
```
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)
ax.set_xticks([1, 2, 3, 4, 5])
ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=18)
yticks = [0, 50, 100, 150]
ax.set_yticks(yticks)
ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
```
There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.
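For example, locators from `matplotlib.ticker` compute tick positions without any plotting, so a placement policy such as `MultipleLocator` can be inspected directly:

```python
from matplotlib.ticker import MultipleLocator

# Ask the locator where it would place ticks on a given view interval.
locator = MultipleLocator(base=0.5)
ticks = list(locator.tick_values(0, 2))  # multiples of 0.5 covering [0, 2]
```

Attach a locator to an axis with `ax.xaxis.set_major_locator(locator)`.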
#### Scientific notation
With large numbers on axes, it is often better to use scientific notation:
```
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_title("scientific notation")
ax.set_yticks([0, 50, 100, 150])
from matplotlib import ticker
formatter = ticker.ScalarFormatter(useMathText=True)
formatter.set_scientific(True)
formatter.set_powerlimits((-1,1))
ax.yaxis.set_major_formatter(formatter)
```
### Axis number and axis label spacing
```
# distance between x and y axis and the numbers on the axes
matplotlib.rcParams['xtick.major.pad'] = 5
matplotlib.rcParams['ytick.major.pad'] = 5
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("label and axis spacing")
# padding between axis label and axis numbers
ax.xaxis.labelpad = 5
ax.yaxis.labelpad = 5
ax.set_xlabel("x")
ax.set_ylabel("y");
# restore defaults
matplotlib.rcParams['xtick.major.pad'] = 3
matplotlib.rcParams['ytick.major.pad'] = 3
```
#### Axis position adjustments
Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using `subplots_adjust`:
```
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("title")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);
```
### Axis grid
With the `grid` method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the `plot` function:
```
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
```
### Axis spines
We can also change the properties of axis spines:
```
fig, ax = plt.subplots(figsize=(6,2))
ax.spines['bottom'].set_color('blue')
ax.spines['top'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(2)
# turn off axis spine to the right
ax.spines['right'].set_color("none")
ax.yaxis.tick_left() # only ticks on the left side
```
### Twin axes
Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the `twinx` and `twiny` functions:
```
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
    label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
    label.set_color("red")
```
### Axes where x and y is zero
```
fig, ax = plt.subplots()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0
xx = np.linspace(-0.75, 1., 100)
ax.plot(xx, xx**3);
```
### Other 2D plot styles
In addition to the regular `plot` method, there are a number of other functions for generating different kinds of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are shown below:
```
n = np.array([0,1,2,3,4,5])
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
```
### Text annotation
Annotating text in matplotlib figures can be done using the `text` function. It supports LaTeX formatting just like axis label texts and titles:
```
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue")
ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
```
### Figures with multiple subplots and insets
Axes can be added to a matplotlib Figure canvas manually using `fig.add_axes` or using a sub-figure layout manager such as `subplots`, `subplot2grid`, or `gridspec`:
#### subplots
```
fig, ax = plt.subplots(2, 3)
fig.tight_layout()
```
#### subplot2grid
```
fig = plt.figure()
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
fig.tight_layout()
```
#### gridspec
```
import matplotlib.gridspec as gridspec
fig = plt.figure()
gs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])
for g in gs:
    ax = fig.add_subplot(g)

fig.tight_layout()
```
#### add_axes
Manually adding axes with `add_axes` is useful for adding insets to figures:
```
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
fig.tight_layout()
# inset
inset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height
inset_ax.plot(xx, xx**2, xx, xx**3)
inset_ax.set_title('zoom near origin')
# set axis range
inset_ax.set_xlim(-.2, .2)
inset_ax.set_ylim(-.005, .01)
# set axis tick locations
inset_ax.set_yticks([0, 0.005, 0.01])
inset_ax.set_xticks([-0.1,0,.1]);
```
### Colormap and contour figures
Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
```
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
    return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)
phi_m = np.linspace(0, 2*np.pi, 100)
phi_p = np.linspace(0, 2*np.pi, 100)
X,Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
```
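A quick numerical sanity check of the surface defined above (constants and function reproduced here so this cell stands alone): the potential is 2*pi-periodic in both angles, so opposite edges of `Z` must agree.

```python
import numpy as np

alpha = 0.7
phi_ext = 2 * np.pi * 0.5

def flux_qubit_potential(phi_m, phi_p):
    return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2 * phi_p)

phi = np.linspace(0, 2 * np.pi, 100)
X, Y = np.meshgrid(phi, phi)
Z = flux_qubit_potential(X, Y).T

# Periodicity: the first and last grid lines sample phi = 0 and phi = 2*pi,
# so the first and last rows (and columns) of Z are identical.
edges_match = np.allclose(Z[0, :], Z[-1, :]) and np.allclose(Z[:, 0], Z[:, -1])
```

Checks like this catch meshgrid orientation mistakes before anything is plotted.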
#### pcolor
```
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
```
#### imshow
```
fig, ax = plt.subplots()
im = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
im.set_interpolation('bilinear')
cb = fig.colorbar(im, ax=ax)
```
#### contour
```
fig, ax = plt.subplots()
cnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
```
## 3D figures
To use 3D graphics in matplotlib, we first need to create an instance of the `Axes3D` class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a `projection='3d'` keyword argument to the `add_axes` or `add_subplot` methods.
```
from mpl_toolkits.mplot3d.axes3d import Axes3D
```
#### Surface plots
```
fig = plt.figure(figsize=(14,6))
# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)
# surface_plot with color grading and color bar
ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)
cb = fig.colorbar(p, shrink=0.5)
```
#### Wire-frame plot
```
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
```
#### Contour plots with projections
```
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)
cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm)
ax.set_xlim3d(-np.pi, 2*np.pi);
ax.set_ylim3d(0, 3*np.pi);
ax.set_zlim3d(-np.pi, 2*np.pi);
```
## Further reading
* http://www.matplotlib.org - The project web page for matplotlib.
* https://github.com/matplotlib/matplotlib - The source code for matplotlib.
* http://matplotlib.org/gallery.html - A large gallery showcasing various types of plots matplotlib can create. Highly recommended!
* http://www.loria.fr/~rougier/teaching/matplotlib - A good matplotlib tutorial.
* http://scipy-lectures.github.io/matplotlib/matplotlib.html - Another good matplotlib reference.
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model
<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
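Since the log file records both loss and perplexity, it helps to remember the relationship between them: for a per-token cross-entropy loss (in nats), perplexity is just its exponential.

```python
import math

def perplexity(avg_cross_entropy):
    """Perplexity implied by an average per-token cross-entropy loss in nats."""
    return math.exp(avg_cross_entropy)
```

A loss of about 2.0 therefore corresponds to a perplexity near 7.4, i.e. the model is roughly as uncertain as a uniform choice over 7 or so words.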
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:**
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:**
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:**
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:**
```
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 64 # batch size
vocab_threshold = ... # minimum word count threshold
vocab_from_file = ... # if True, load existing vocab file
embed_size = ... # dimensionality of image and word embeddings
hidden_size = ... # number of features in hidden state of the RNN decoder
num_epochs = 3 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = ...
# TODO #4: Define the optimizer.
optimizer = ...
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
```
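The `total_step` line above is ordinary ceiling division: the last batch of an epoch may be partial, but it still counts as a step.

```python
import math

def steps_per_epoch(num_examples, batch_size):
    """Number of batches needed to cover the dataset once (last batch may be partial)."""
    return math.ceil(num_examples / batch_size)
```

For example, 10 examples at batch size 4 take 3 steps, the last one holding only 2 examples.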
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
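For interpreting the two numbers in the training log: perplexity is simply the exponential of the per-word cross-entropy loss, so a perplexity near the vocabulary size means the model is doing no better than uniform guessing. A quick standalone sketch:

```python
import math

# Perplexity, as printed in the training log, is the exponential of
# the per-word cross-entropy loss.
def perplexity(loss):
    return math.exp(loss)

# A model guessing uniformly over a 1000-word vocabulary has
# loss = ln(1000) ≈ 6.91, i.e. perplexity equal to the vocabulary size.
print(round(perplexity(math.log(1000))))  # → 1000
```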
```
import torch.utils.data as data
import numpy as np
import os
import sys
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
```
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to measure performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset.
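As a rough intuition for what BLEU measures, here is a toy sketch of modified unigram precision, its basic building block. This is for illustration only; for real scoring, use the coco-caption tools linked above.

```python
from collections import Counter

# Toy sketch of modified unigram precision, the building block of BLEU:
# clip each candidate word's count by its maximum count in any reference.
def unigram_precision(candidate, references):
    cand_counts = Counter(candidate.split())
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand_counts.items())
    return clipped / sum(cand_counts.values())

refs = ["a dog runs on the beach"]
print(unigram_precision("a dog on the beach", refs))  # → 1.0
```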
```
# (Optional) TODO: Validate your model.
```
# EOS 1 image analysis Python code walk-through
- This is to explain how the image analysis works for the EOS 1. Python version 2.7.15 (Anaconda 64-bit)
- If you are using EOS 1, you can use this code for image analysis after reading through this notebook and understanding how it works.
- Alternatively, you can also use the ImgAna_minimum.py script.
- Needless to say, this Python code is not optimized for speed.
- Feel free to share and modify.
## - 00 - import libraries: matplotlib for handling images; numpy for matrix manipulation
```
import matplotlib.pyplot as pp
import numpy as np
import warnings
```
## - 01 - function for displaying images and making figures
```
# Input: x_img=numpy_array_of_image, marker=marker_of_1D_plot, x_max=max_value
def fig_out( x_img, fig_dpi=120, marker="k.-", x_max=510 ):
pp.figure( dpi=fig_dpi )
pp.style.use( "seaborn-dark" )
if x_img.ndim == 1:
pp.style.use( "seaborn-darkgrid" )
pp.plot( x_img, marker )
elif x_img.ndim == 2:
if len( x_img[0] ) == 3:
pp.style.use( "seaborn-darkgrid" )
pp.plot( x_img[:,0], 'r-' )
pp.plot( x_img[:,1], 'g-' )
pp.plot( x_img[:,2], 'b-' )
else:
pp.imshow( x_img, cmap="gray", vmin=0, vmax=x_max )
pp.colorbar()
elif x_img.ndim == 3:
x_img = x_img.astype( int )
pp.imshow( x_img )
else:
print "Input not recognized."
## Don't raise an error here, because no other function depends on the output of this one.
```
In Python, an image is represented by a 3-D NumPy array.
For example, a simple image of:
red, green, blue, black
cyan, purple, yellow, white
can be written as the following:
```
x = np.array([[[255,0,0], [0,255,0], [0,0,255], [0,0,0]],
[[0,255,255], [255,0,255], [255,255,0], [255,255,255]]])
fig_out( x, fig_dpi=100 )
# example of an image from EOS 1
img_file = "EOS_imgs/example_spec.jpg"
xi = pp.imread( img_file )
fig_out( xi )
```
## - 02 - function for read in image and then calculating the color_diff_sum heat map
```
# Input: x_img=input_image_as_numpy_array, fo=full_output
def cal_heatmap( x_img, fo=False ):
xf = x_img.astype( float )
if xf.ndim == 2:
cds = abs(xf[:,0]-xf[:,1])
cds += abs(xf[:,0]-xf[:,2])
cds += abs(xf[:,1]-xf[:,2])
elif xf.ndim == 3:
cds = abs(xf[:,:,0]-xf[:,:,1])
cds += abs(xf[:,:,0]-xf[:,:,2])
cds += abs(xf[:,:,1]-xf[:,:,2])
else:
raise ValueError( "Image array not recognized." )
if fo == True:
fig_out( cds )
else:
pass
return cds
```
This color_diff_sum metric is used to rank the colorfulness of the pixels.
It highlights bright colors while suppressing white and black, as demonstrated below:
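To see the arithmetic in isolation: for a single RGB pixel, the metric is the sum of the three pairwise channel differences, so a pure primary color scores the maximum of 510 (which is also the default `x_max` in `fig_out`), while any gray level scores 0. A standalone sketch, separate from the `cal_heatmap` pipeline:

```python
# Per-pixel color_diff_sum: the sum of pairwise channel differences.
# Pure colors score high; black, white, and gray all score zero.
def color_diff_sum(rgb):
    r, g, b = [float(v) for v in rgb]
    return abs(r - g) + abs(r - b) + abs(g - b)

print(color_diff_sum([255, 0, 0]))      # pure red -> 510.0
print(color_diff_sum([255, 255, 255]))  # white    -> 0.0
print(color_diff_sum([128, 128, 128]))  # mid gray -> 0.0
```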
```
cal_heatmap( x )
# try out the heat map function on the example image
hm = cal_heatmap( xi, True )
```
## - 03 - function for finding the reference spectrum
```
# Input: x_hm=heat_map_as_numpy_array, fo=full_output, n=threshold_as_ratio_of_peak
# Input: pf_check=profile_check, rt_check=rotation_check
def find_ref( x_hm, fo=False, n=0.25, pf_check=True, rt_check=False ):
n = float( n )
if n<0.1 or n>0.9:
n = 0.25 # n should be between 0.1 and 0.9, otherwise set to 0.25
else:
pass
h, w = x_hm.shape
if h<w and pf_check==True:
warnings.warn( "Input spectra image appears to be landscape." )
proceed = raw_input( "Continue? (y/N): " )
if proceed=='y' or proceed=='Y':
pass
else:
raise RuntimeError( "Program terminated by user." )
else:
pass
x0 = x_hm.mean( axis=0 )
x0thres = np.argwhere( x0 > x0.max()*n ).flatten()
x0diff = x0thres[1:] - x0thres[:-1]
x0gap = np.where( x0diff > 2. )[0].flatten()
if len( x0gap )==0:
if rt_check==True:
fig_out( x_hm )
rotate = raw_input( "Rotate image? (y/N): " )
if rotate=='y' or rotate=='Y':
raise RuntimeError( "Rotate image then restart program." )
else:
pass
else:
pass
l_edge, r_edge = x0thres[0], x0thres[-1]
else:
d_to_center = []
for i in x0gap:
d_to_center.append( abs( w/2. - x0thres[i:i+2].mean() ) )
d_min = np.argmin( d_to_center )
if d_min==0:
l_edge, r_edge = x0thres[0], x0thres[ x0gap[0] ]
else:
l_edge, r_edge = x0thres[ x0gap[d_min-1]+1 ], x0thres[ x0gap[d_min] ]
x_hm_ref = x_hm[ :, l_edge:r_edge+1 ]
x1 = x_hm_ref.mean( axis=1 )
x1thres = np.argwhere( x1 > x1.max()*n ).flatten()
t_edge, b_edge = x1thres[0], x1thres[-1]
tblr_edge = ( t_edge, b_edge, l_edge, r_edge )
if fo==True:
fig_out( x0, fig_dpi=120 )
fig_out( x1, fig_dpi=120 )
else:
pass
return tblr_edge
# try out the reference spectrum function
top, btm, lft, rgt = find_ref( hm, True )
# check the reference spectrum found
fig_out( xi[top:btm+1, lft:rgt+1, :] )
```
## - 04 - function for checking the alignment (omitted)
```
def align_check():
return 0
```
## - 05 - function for normalizing the sample spectrum
```
# Input: x_img=input_image_as_numpy_array, fo=full_output
# Input: bpeak_chl=channel_used_to_find_blue_peak
# Input: trim_edge=trim_edge_of_image, trim_margin=trim_margin_of_spectra
# Input: gapcal=method_for_calculating_gap_between_reference_and_sample
def norm_sam( x_img, fo=False, bpeak_chl='r', trim_edge=False, trim_margin=True, gapcal='p' ):
h, w, d = x_img.shape
if trim_edge == True:
x_img = x_img[h/4:h*3/4, w/4:w*3/4, :]
else:
pass
x_img = x_img.astype( float )
hm = cal_heatmap( x_img )
t_edge, b_edge, l_edge, r_edge = find_ref( hm )
ref_wid = r_edge - l_edge
if trim_margin == True:
mrg = int( ref_wid/10. )
else:
mrg = 0
half_hgt = int( (b_edge - t_edge)/2. )
x_ref = x_img[ t_edge:b_edge, l_edge+mrg:r_edge-mrg, : ]
y_ref = x_ref.mean( axis=1 )
if bpeak_chl == 'r':
peak_r = y_ref[:half_hgt,0].argmax()
peak_b = y_ref[half_hgt:,0].argmax()+half_hgt
else:
peak_rgb = y_ref.argmax( axis=0 )
peak_r, peak_b = peak_rgb[[0,2]]
if gapcal == 'w':
gap = int( ref_wid*0.901 )
else:
gap = int( ( peak_b-peak_r )*0.368 )
x_sam = x_img[ t_edge:b_edge, r_edge+gap+mrg:r_edge+gap+ref_wid-mrg, : ]
y_sam = x_sam.mean( axis=1 )
max_rgb = y_ref.max( axis=0 )
peak_px = np.array([peak_r, peak_b]).flatten()
peak_nm = np.array([610.65, 449.1])
f = np.polyfit( peak_px, peak_nm, 1 )
wavelength = np.arange(b_edge-t_edge)*f[0]+f[1]
if trim_edge == True:
t_edge, b_edge = t_edge+h/4, b_edge+h/4
l_edge, r_edge = l_edge+w/4, r_edge+w/4
peak_r, peak_b = peak_r+t_edge, peak_b+t_edge
else:
pass
y_sam_norm_r = y_sam[:, 0]/max_rgb[0]
y_sam_norm_g = y_sam[:, 1]/max_rgb[1]
y_sam_norm_b = y_sam[:, 2]/max_rgb[2]
y_sam_norm = np.dstack((y_sam_norm_r, y_sam_norm_g, y_sam_norm_b))[0]
if fo == True:
return ((wavelength, y_sam_norm), (y_ref, y_sam),
(t_edge, b_edge, l_edge, r_edge, peak_r, peak_b, gapcal))
else:
return (wavelength, y_sam_norm)
# try out the sample spectrum function
full_result = norm_sam( xi, True )
wv, sam_norm = full_result[0]
ref_raw, sam_raw = full_result[1]
other_result = full_result[2]
# check the reference spectrum (averaged)
fig_out( ref_raw, fig_dpi=120 )
# check the sample spectrum (averaged) before normalization
fig_out( sam_raw, fig_dpi=120 )
# check the normalized sample spectrum (averaged)
pp.figure( dpi=120 )
pp.style.use( "seaborn-darkgrid" )
pp.plot( wv, sam_norm[:,0], 'r-' )
pp.plot( wv, sam_norm[:,1], 'g-' )
pp.plot( wv, sam_norm[:,2], 'b-' )
pp.xlabel( "wavelength (nm)", size=12 )
pp.ylabel( "normalized intensity", size=12 )
```
## - 06 - function for calculating average intensity over a narrow band
```
# Input: ifn=image_file_name, ch=color_channel
# Input: wlc=wavelength_range_center, wlhs=wavelength_range_half_span
# Input: te=trim_edge, gp=method_for_gap_calculation, fo=full_output
def cal_I( ifn, ch='g', wlc=535., wlhs=5., te=False, gp='p', fo=False ):
wl_low, wl_high = wlc-wlhs, wlc+wlhs
xi = pp.imread( ifn )
wl_arr, sam_norm = norm_sam( xi, trim_edge=te, gapcal=gp )
if ch=='r' or ch=='R':
y_arr = sam_norm[:,0]
elif ch=='g' or ch=='G':
y_arr = sam_norm[:,1]
elif ch=='b' or ch=='B':
y_arr = sam_norm[:,2]
else:
raise ValueError( "Color channel should be 'r', 'g', or 'b'." )
arg_low = np.where( wl_arr < wl_high )[0][0]
arg_high = np.where( wl_arr > wl_low )[0][-1]
I_sum = y_arr[arg_low:arg_high+1].sum()
I_ave = I_sum/(arg_high-arg_low+1)
if fo == True:
print y_arr[arg_low:arg_high+1]
pp.figure( dpi=120 )
pp.style.use( "seaborn-darkgrid" )
pp.plot( wl_arr, y_arr, 'k.-' )
pp.xlabel( "wavelength (nm)", size=12 )
pp.ylabel( "normalized intensity", size=12 )
else:
pass
return I_ave
# try out the average intensity function
cal_I( img_file, fo=True )
```
## - 07 - function for calculating nitrate concentration
```
# Input: image_file=path_and_name_of_image_file, wl=center_wavelength
def test_N( image_file, wl=530., k=-7.8279, b=-0.14917 ):
I = cal_I( image_file, wlc=wl )
lgI = np.log10(I)
nc = lgI*k + b
print "Nitrate Concentration: "+str(round(nc, 2))+" mg/L"
return nc
# try out the nitrate concentration function
test_N( img_file )
```
The k and b values vary a little with each individual EOS 1 device, so to ensure accuracy, a three-point calibration is highly recommended.
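Concretely, the calibration fit is a one-line np.polyfit, as cali_N does below. Here is the arithmetic on made-up numbers (the intensity values are invented for illustration; real ones come from cal_I on your calibration images):

```python
import numpy as np

# Hypothetical three-point calibration: known nitrate concentrations
# (mg/L) and the normalized intensities measured for them.
nc = np.array([0.0, 5.0, 10.0])
I = np.array([0.95, 0.21, 0.05])

# test_N uses nc = k*log10(I) + b, so fit concentration against log10(I).
# k should be negative: intensity drops as nitrate concentration rises.
k, b = np.polyfit(np.log10(I), nc, 1)
print(k)
print(b)
```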
## - 08 - function for calibrating nitrate tests
```
def cali_N( img_arr, nc_arr, wl, fo=True ):
if len(img_arr) != len(nc_arr):
raise ValueError( "img_arr and nc_arr should have the same length." )
else:
pass
nc = np.array(nc_arr)
I_arr = []
for img in img_arr:
I_arr.append( cal_I( img, wlc=wl ) )
I_arr = np.array( I_arr )
lgI = np.log10( I_arr )
if fo == True:
Ab = (-1.)*lgI
kf, bf = np.polyfit( nc, Ab, 1 )
print kf, bf
pp.style.use( "seaborn-darkgrid" )
pp.figure( dpi=120 )
pp.plot( nc, Ab, 'k.', label="Calibration Data" )
pp.plot( nc, nc*kf+bf, 'k-', label="Linear Fit" )
pp.xlabel( "Nitrate Concentration (mg/L)", size=12)
pp.ylabel( "Absorbance ("+str(wl-5)+"nm $-$ "+str(wl+5)+"nm)", size=12 )
pp.legend( loc="upper left" )
else:
pass
k, b = np.polyfit( lgI, nc_arr, 1 )
return ((k,b), nc, lgI)
imgs = ["EOS_imgs//0mg.jpg", "EOS_imgs//5mg.jpg", "EOS_imgs//10mg.jpg"]
ncs = [0.0, 5.0, 10.0]
cali_N( imgs, ncs, 530. )
k, b = cali_N( imgs, ncs, 530., fo=False )[0]
```
After you run cali_N, feed the returned k and b back into test_N as inputs.
Now you understand how the image analysis code works.
You can keep using this Jupyter Notebook, or go back to:
https://github.com/jianshengfeng/EOS1
and find the Python code ImgAna_minimum.py
ImgAna_minimum.py can either be run as a Python script (i.e., python ImgAna_minimum.py) or used as a Python module (i.e., import ImgAna_minimum).
# Exploration of the modulators and downstream effectors of HDAC6
```
import time
import sys
import getpass
from collections import defaultdict
import bel_repository
import bio2bel_hgnc
import bio2bel_famplex
import pandas as pd
import pybel
import pybel_jupyter
import pybel_tools
import hbp_knowledge
from pybel.dsl import Protein
from pybel.struct import get_subgraph_by_neighborhood, get_subgraph_by_second_neighbors
from pybel.struct.mutation import (
remove_biological_processes, remove_pathologies, remove_isolated_nodes, remove_associations,
)
from pybel.manager.citation_utils import get_pubmed_citation_response
from pybel_tools.mutation import enrich_complexes, enrich_variants, expand_internal
print(f'''BEL Repository: v{bel_repository.get_version()}
PyBEL: v{pybel.get_version()}
PyBEL-Jupyter: v{pybel_jupyter.get_version()}
PyBEL-Tools: v{pybel_tools.get_version()}
pharmacome/knowledge: v{hbp_knowledge.VERSION}
''')
print(sys.version)
print(time.asctime())
print(getpass.getuser())
graph = hbp_knowledge.repository.get_graph()
remove_associations(graph)
remove_pathologies(graph)
remove_isolated_nodes(graph)
graph.summarize()
# TODO add expansion
famplex_manager = bio2bel_famplex.Manager()
famplex_manager
hgnc_manager = bio2bel_hgnc.Manager()
hgnc_manager.normalize_genes(graph)
graph.summarize()
# Find all HDAC* in the BEL graph
hdacs = []
for node in graph:
name = node.get('name')
if name is not None and name.startswith('HDAC'):
hdacs.append(node)
hdacs
#subgraph = get_subgraph_by_second_neighbors(graph, [hdac6])
subgraph = get_subgraph_by_neighborhood(graph, hdacs)
enrich_complexes(subgraph)
enrich_variants(subgraph)
#expand_internal(graph, subgraph)
subgraph.summarize()
pybel_jupyter.to_jupyter(subgraph)
pmid_to_targets = defaultdict(set)
for u, v, data in subgraph.edges(data=True):
if 'citation' in data:
for x in u, v:
if 'name' in x and x['namespace'] == 'hgnc':
pmid_to_targets[data['citation']['reference']].add(x)
pmid_to_targets = dict(pmid_to_targets)
with open('hdac6-pmids.txt', 'w') as file:
for pmid in sorted(pmid_to_targets):
print(f'pmid:{pmid}', file=file)
print(f'there are {len(pmid_to_targets)} PubMed references')
pmid_to_targets
pubmed_response = get_pubmed_citation_response(pmid_to_targets)
pmid_pmc_map = {}
for pmid in pubmed_response['result']['uids']:
for article_id in pubmed_response['result'][pmid]['articleids']:
if 'pmc' == article_id['idtype']:
pmid_pmc_map[pmid] = article_id['value']
df = pd.DataFrame(
data=list({
(
target.name,
target.identifier,
pmid,
pmid_pmc_map.get(pmid),
)
for pmid, targets in pmid_to_targets.items()
for target in targets
}),
columns=['hgnc.symbol', 'hgnc', 'pmid', 'pmc']
).sort_values(['hgnc.symbol', 'pmid'])
# Output
df.to_csv('hdac6-article-targets.tsv', sep='\t', index=False)
# Show in notebook
df
```
# Development Notebook for extracting icebergs from DEMs
by Jessica Scheick
Workflow based on previous methods and code developed by JScheick for Scheick et al 2019 *Remote Sensing*.
***Important note about CRS handling*** This code was developed while also learning about Xarray, rioxarray, rasterio, and other Python geospatial libraries. Since projections are not yet fully handled [smoothly] in any of those resources, and especially not integrated, there's little to no built-in checking or handling of CRS. Instead, handling is done manually throughout the code and external to this notebook. This is critical to know because the CRS displayed by a rioxarray dataset may be from one variable added to the dataset, but is not necessarily the original (or read-in) CRS for each variable in the dataset (hence the manual, external handling). The `get_mask` and `get_new_var_from_file` methods should reproject new data sources before adding them to the dataset.
```
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.gridspec as gridspec
%matplotlib inline
import hvplot.xarray
# import hvplot.pandas
import holoviews as hv
hv.extension('bokeh','matplotlib')
from holoviews import dim, opts
import datetime as dt
import os
import panel as pn
pn.extension()
import pyproj
import rioxarray
%load_ext autoreload
import icebath as icebath
from icebath.core import build_xrds
from icebath.utils import raster_ops as raster_ops
from icebath.utils import vector_ops as vector_ops
from icebath.core import fl_ice_calcs as icalcs
from icebath.core import build_gdf
%autoreload 2
# laptop dask setup
import dask
from dask.distributed import Client, LocalCluster, performance_report
# cluster=LocalCluster()
# client = Client(cluster) #, processes=False) this flag only works if you're not using a LocalCluster, in which case don't use `cluster` either
client = Client(processes=True, n_workers=2, threads_per_worker=2, memory_limit='7GB', dashboard_address=':8787')
client
# Dask docs of interest
# includes notes and tips on threads vs processes: https://docs.dask.org/en/latest/best-practices.html#best-practices
# Pangeo dask setup
from dask_gateway import GatewayCluster
cluster = GatewayCluster()
# options = cluster.gateway.cluster_options()
# options
# cluster.adapt(minimum=2, maximum=10) # or cluster.scale(n) to a fixed size.
client = cluster.get_client()
client
# reconnect to existing cluster
from dask_gateway import Gateway
g = Gateway()
g.list_clusters()
cluster = g.connect(g.list_clusters()[0].name)
cluster
cluster.scale(0)
client = cluster.get_client()
client
cluster.scale(5)
client.get_versions(check=True)
cluster.close()
def debug_mem():
from pympler import summary, muppy
all_objects = muppy.get_objects()
s = summary.summarize(all_objects)
return s
s = client.run(debug_mem)
from pympler import summary, muppy
summary.print_(list(s.values())[0])
```
## Read in DEMs and apply corrections (tidal, geoid)
```
#Ilulissat Isfjord Mouth, resampled to 50m using CHANGES
# ds = build_xrds.xrds_from_dir('/home/jovyan/icebath/notebooks/supporting_docs/Elevation/ArcticDEM/Regridded_50m_tiles/n69w052/', fjord="JI")
# Ilulissat Isfjord Mouth, original 2m (the files from CHANGES seem much smaller than those from Kane/Pennell).
# data = xr.open_rasterio('/home/jovyan/icebath/notebooks/supporting_docs/Elevation/ArcticDEM/2m_tiles/n69w052/SETSM_W1W1_20100813_102001000E959700_102001000ECB6B00_seg1_2m_v3.0_dem.tif')
ds = build_xrds.xrds_from_dir('/Users/jessica/projects/bathymetry_from_bergs/DEMs/2m/', fjord="JI")
# ds = build_xrds.xrds_from_dir('/Users/jessica/projects/bathymetry_from_bergs/DEMs/KaneW2W2/', fjord="KB", metastr="_meta", bitmask=True)
# ds = build_xrds.xrds_from_dir('/home/jovyan/icebath/notebooks/supporting_docs/Elevation/ArcticDEM/2m_tiles/', fjord="JI")
scrolldem = ds['elevation'].hvplot.image(x='x', y='y',datashade=False, rasterize=True, aspect='equal', cmap='magma', dynamic=True,
xlabel="x (km)", ylabel="y (km)", colorbar=True) #turn off datashade to see hover values + colorbar
scrolldem
```
### Get and Apply Land Mask
**Note: requires a shapefile of the land areas in the ROI**
The default is to use a shapefile of Greenland: `shpfile='/home/jovyan/icebath/notebooks/supporting_docs/Land_region.shp'`, but an alternative file can be specified.
Underlying code is based on: https://gis.stackexchange.com/questions/357490/mask-xarray-dataset-using-a-shapefile
Other results used rioxarray (which isn't in my current working environment), and my previous work did it all manually with gdal.
```
ds.bergxr.get_mask(req_dim=['x','y'], req_vars=None, name='land_mask',
# shpfile='/home/jovyan/icebath/notebooks/supporting_docs/Land_region.shp')
shpfile='/Users/jessica/mapping/shpfiles/Greenland/Land_region/Land_region.shp')
# ds.land_mask.plot()
ds['elevation'] = ds['elevation'].where(ds.land_mask == True)
```
### Apply Geoid Correction
ArcticDEMs come as ellipsoidal height. They are corrected to geoidal height according to geoid_ht = ellipsoid - geoid_offset where geoid_offset is taken from BedMachine v3 and resampled in Xarray (using default "linear" interpolation for multidimensional arrays) to the resolution and extent of the region's dataset.
BedMachine is now available on Pangeo via intake thanks to the Lamont-Doherty Glaciology group.
- Basic info: https://github.com/ldeo-glaciology/pangeo-bedmachine
- Pangeo gallery glaciology examples: http://gallery.pangeo.io/repos/ldeo-glaciology/pangeo-glaciology-examples/index.html
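The correction itself is simple elementwise arithmetic once the geoid offset grid has been interpolated onto the DEM grid. A minimal sketch with made-up numbers (the heights and offsets below are illustrative, not real BedMachine values):

```python
import numpy as np

# Hypothetical 1-D slice: DEM heights above the WGS84 ellipsoid and the
# geoid-ellipsoid separation interpolated onto the same pixels.
ellipsoid_ht = np.array([62.4, 63.1, 61.8])  # m above ellipsoid
geoid_offset = np.array([28.0, 28.1, 28.0])  # m, geoid-ellipsoid separation

# geoid_ht = ellipsoid - geoid_offset
geoid_ht = ellipsoid_ht - geoid_offset
print(geoid_ht)  # heights above the geoid (≈ above mean sea level)
```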
```
ds = ds.bergxr.to_geoid(source='/Users/jessica/mapping/datasets/160281892/BedMachineGreenland-2017-09-20_3413_'+ds.attrs['fjord']+'.nc')
# ds = ds.bergxr.to_geoid(source='/home/jovyan/icebath/notebooks/supporting_docs/160281892/BedMachineGreenland-2017-09-20_'+ds.attrs['fjord']+'.nc')
ds
```
### Apply Tidal Correction
Uses Tyler Sutterly's pyTMD library
```
# model_path='/home/jovyan/pyTMD/models'
model_path='/Users/jessica/computing/tidal_model_files'
ds=ds.bergxr.tidal_corr(loc=[ds.attrs["fjord"]], model_path=model_path)
# # test to make sure that if you already have a tidal correction it won't reapply it, and test that it will return the tides if you don't have an elevation entered
# ds=ds.bergxr.tidal_corr(loc=["JI"])
# ds=ds.bergxr.tidal_corr(loc=["JI"]) # results in assertion error
# ds.attrs['offset_names'] = ('random')
# ds=ds.bergxr.tidal_corr(loc=["JI"]) # results in longer attribute list
# # go directly to icalcs function, called under the hood above, if you want to see plots
# tides = icalcs.predict_tides(loc='JI',img_time=ds.dtime.values[0], model_path='/home/jovyan/pyTMD/models',
# model='AOTIM-5-2018', epsg=3413, plot=True)
# tides[2]
```
## Extract Icebergs from DEM and put into Geodataframe
Completely automated iceberg delineation in the presence of clouds and/or data gaps (as is common in a DEM) is not yet easily implemented with existing methods. Many techniques have been refined for specific fjords or types of situations. Here, we tailor our iceberg detection towards icebergs that will provide reliable water depth estimates. The following filters are applied during the iceberg extraction process:
- a minimum iceberg horizontal area is specified on a per-fjord basis. These minima are based on icebergs used to infer bathymetry in previous work (Scheick et al 2019).
- a maximum allowed height for the median freeboard is specified on a per-fjord basis. These maxima are determined as 10% of the [largest] grounded ice thickness for the source glaciers. While the freeboard values from the DEM are later filtered to remove outliers in determining water depth, this filtering step during the delineation process removes "icebergs" where low clouds, rather than icebergs, are the surface represented in the DEM.
- a maximum iceberg horizontal area of 1,000,000 m² (1 km²) is assumed to eliminate large clusters of icebergs, melange, and/or cloud picked up by the delineation algorithm.
- the median freeboard must be greater than 15 m relative to [adjusted] sea level. If not, we can assume the iceberg is either a false positive (e.g. cloud or sea ice) or too small to provide a meaningful water depth estimate.
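The filters above amount to a simple per-candidate predicate. A sketch with illustrative thresholds (the real minimum area and maximum median freeboard are set per fjord, so the default numbers here are assumptions, not the project's settings):

```python
def keep_iceberg(area_m2, median_freeboard_m,
                 min_area_m2=10000.0, max_freeboard_m=100.0):
    """Return True if a candidate passes the delineation filters above."""
    if area_m2 < min_area_m2:                # below the per-fjord minimum area
        return False
    if area_m2 > 1_000_000:                  # melange/cloud clusters over 1 km2
        return False
    if median_freeboard_m > max_freeboard_m:  # likely low cloud, not an iceberg
        return False
    if median_freeboard_m <= 15.0:           # too small / likely false positive
        return False
    return True

print(keep_iceberg(50000.0, 40.0))  # → True
print(keep_iceberg(50000.0, 10.0))  # → False (freeboard too low)
```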
```
import geopandas as gpd
gdf = gpd.read_file('/Users/jessica/projects/bathymetry_from_bergs/prelim_results/JIicebergs.gpkg', ignore_index=True)
%%prun
# %%timeit -n 1 -r 1
# 3min 17s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
# gdf=None
gdf = build_gdf.xarray_to_gdf(ds)
# gdf.loc[((gdf['sl_adjust']>4.27) & (gdf['sl_adjust']<4.36))].groupby('date').berg_poly.plot()
gdf.groupby('date').berg_poly.plot()
# This requires geoviews[http://geoviews.org/] be installed, and their install pages have warning if your environment uses [non] conda-forge
# libraries and it won't resolve the environment with a conda install, so I'll need to create a new test env to try this
# bergs = gdf.hvplot()
# bergs
# xarray-leaflet may be another good option to try: https://github.com/davidbrochart/xarray_leaflet
# scrolldems*bergs
gdf
```
## Compute Water Depths on Icebergs
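The physics behind this step is hydrostatic equilibrium: a free-floating iceberg's draft scales its freeboard by a density ratio, and that draft is a lower bound on the local water depth. A standalone sketch (the density constants, and whether `calc_filt_draft` uses exactly this form, are assumptions here):

```python
def draft_from_freeboard(freeboard_m, rho_ice=900.0, rho_seawater=1025.0):
    """Hydrostatic draft of a free-floating iceberg (illustrative densities)."""
    return freeboard_m * rho_ice / (rho_seawater - rho_ice)

# A 40 m median freeboard implies roughly 288 m of draft, a lower
# bound on water depth wherever the berg floats freely.
print(round(draft_from_freeboard(40.0)))  # → 288
```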
```
gdf.berggdf.calc_filt_draft()
gdf.berggdf.calc_rowwise_medmaxmad('filtered_draft')
gdf.berggdf.wat_depth_uncert('filtered_draft')
# def mmm(vals): # mmm = min, med, max
# print(np.nanmin(vals))
# print(np.nanmedian(vals))
# print(np.nanmax(vals))
```
## Extract measured values from BedMachine v3 and IBCAOv4 (where available)
All bathymetry values from these gridded products are included, then later parsed into bathymetric observations and inferred (from e.g. gravimetry, modeling) for comparing with iceberg-inferred water depths.
Note that the datasets are subset to the region of the fjord outside this script to reduce memory requirements during processing.
***Improvement: add CRS handling/checks to catch when a measurement dataset is incompatible and needs to be reprojected***
#### BedMachine Greenland
```
fjord = "JI"
# measfile='/Users/jessica/mapping/datasets/160281892/BedMachineGreenland-2017-09-20.nc'
measfile='/Users/jessica/mapping/datasets/160281892/BedMachineGreenland-2017-09-20_3413_'+fjord+'.nc'
# measfile='/home/jovyan/icebath/notebooks/supporting_docs/160281892/BedMachineGreenland-2017-09-20.nc'
# measfile='/home/jovyan/icebath/notebooks/supporting_docs/160281892/BedMachineGreenland-2017-09-20_'+ds.attrs['fjord']+'.nc'
```
#### IBCAOv4
https://www.gebco.net/data_and_products/gridded_bathymetry_data/arctic_ocean/
Source keys: https://www.gebco.net/data_and_products/gridded_bathymetry_data/gebco_2020/
Downloaded Feb 2021
**NOTE** IBCAO has its own Polar Stereo projection (EPSG:3996: WGS 84/IBCAO Polar Stereographic), so it needs to be reprojected before being applied to these datasets.
See: https://spatialreference.org/ref/?search=Polar+Stereographic
```
measfile2a='/Users/jessica/mapping/datasets/IBCAO_v4_200m_ice_3413.nc'
# measfile2a='/Users/jessica/mapping/datasets/IBCAO_v4_200m_ice_3413_'+fjord+'.nc'
# measfile2a='/home/jovyan/icebath/notebooks/supporting_docs/IBCAO_v4_200m_ice_3413.nc'
# measfile2a='/home/jovyan/icebath/notebooks/supporting_docs/IBCAO_v4_200m_ice_3413_'+ds.attrs['fjord']+'.nc'
measfile2b='/Users/jessica/mapping/datasets/IBCAO_v4_200m_TID_3413.nc'
# measfile2b='/home/jovyan/icebath/notebooks/supporting_docs/IBCAO_v4_200m_TID_3413.nc'
gdf.berggdf.get_meas_wat_depth([measfile, measfile2a, measfile2b],
vardict={"bed":"bmach_bed", "errbed":"bmach_errbed", "source":"bmach_source",
"ibcao_bathy":"ibcao_bed", "z":"ibcao_source"},
nanval=-9999)
gdf #[gdf['date'].dt.year.astype(int)==2016]
```
### Plot the measured and inferred values
Plots the gridded versus iceberg-freeboard-inferred values for all icebergs relative to the values in BedMachine and IBCAO.
Left plot shows measured values within the gridded datasets; right plot shows the modeled/inferred values within the gridded data products (hence the larger error bars).
```
from icebath.utils import plot as ibplot
ibplot.meas_vs_infer_fig(gdf, save=False)
```
## Export the iceberg outlines and data to a geopackage
```
shpgdf = gdf.copy(deep=True)
del shpgdf['DEMarray']
del shpgdf['filtered_draft']
shpgdf.to_file("/Users/jessica/projects/bathymetry_from_bergs/prelim_results/JIbergs_faster.gpkg", driver="GPKG")
```
## Export the iceberg outlines and data to a shapefile
```
shpgdf = gdf.copy(deep=True)
shpgdf['year'] = shpgdf['date'].dt.year.astype(int)
del shpgdf['date']
del shpgdf['DEMarray']
del shpgdf['filtered_draft']
# NOTE: need to rename columns due to name length limits for shapefile; otherwise,
# all ended up as "filtered_#"
shpgdf.to_file("/Users/jessica/projects/bathymetry_from_bergs/prelim_results/icebergs_JI.shp")
```
## Visualizing Iceberg Outlines for a Single DEM
Some attempts at doing this with Holoviews, including to try and have it with a slider bar, are in the misc_dev_notes_notebook. As it stands currently, this implementation should work but is quite slow.
```
timei=1
print(ds['dtime'].isel({'dtime':timei}))
dem = ds.isel({'dtime':timei})
im = dem.elevation.values
# Plot objectives: show DEM, land mask, iceberg outlines. 2nd plot with just orig DEM?
fig = plt.figure(figsize=(12,12)) # width, height in inches
# gs = gridspec.GridSpec(ncols=1, nrows=2, figure=fig)
gs=fig.add_gridspec(3,1, hspace=0.3) # nrows, ncols
# DEM plot
axDEM = plt.subplot(gs[0:2,0])
dem.elevation.plot.pcolormesh(ax=axDEM,
vmin=-10, vmax=75, cmap='magma', # vmin and vmax set the colorbar limits here
xscale='linear', yscale='linear',
cbar_kwargs={'label':"Elevation (m amsl)"})
# land mask
landcm = mpl.colors.ListedColormap([(0.5, 0.35, 0.35, 1.), (0.5, 0., 0.6, 0)])
dem.land_mask.plot(ax=axDEM, cmap=landcm, add_colorbar=False)
# iceberg contours - ultimately add this from geodataframe
# dem.elevation.plot.contour(ax=axDEM, levels=[threshold], colors=['gray'])
# Note: dem.elevation.plot.contour(levels=[threshold], colors=['gray']) will show the plot, but you can't
# add it to these axes and then show it inline from a second cell
# I'm not entirely sure this is plotting what I think; it's also not actually plotting the contoured data
gdf.loc[gdf['date']==ds.dtime.isel({'dtime':timei}).values].berg_poly.plot(ax=axDEM,
linestyle='-',
linewidth=2,
edgecolor='gray',
facecolor=(0,0,0,0))
xmin = -250000
xmax = -232750
ymin = -2268250
ymax = -2251000
# xmin = -235000 #zoom in to figure out empty iceberg DEM during gdf generation
# xmax = -233000
# ymin = -2257500
# ymax = -2255000
if (xmin - xmax) != (ymin - ymax):
    print("modify your x and y min/max to make the areas equal")
axDEM.set_aspect('equal')
axDEM.set_xlim(xmin, xmax)
axDEM.set_ylim(ymin, ymax)
axDEM.set_xlabel("x (km)")
axDEM.set_ylabel("y (km)")
plt.show()
# Note: gdf['date']==timei is returning all false, so the datetimes will need to be dealt with to get the areas from the geometry column
# areas = gdf.loc[:, gdf['date']==timei].geometry.area()
```
| github_jupyter |
```
import csv
from bs4 import BeautifulSoup
from selenium import webdriver
from datetime import datetime
import requests
driver = webdriver.Chrome(executable_path=r"F:\Web_Scraping\chromedriver.exe")
url = "https://www.naukri.com/"
def get_url(post,location):
template="https://in.indeed.com/jobs?q={}&l={}"
url=template.format(post,location)
return url
url=get_url('hadoop','Gurgaon')
url
# Extract the raw HTML
response=requests.get(url)
response
response.reason
soup=BeautifulSoup(response.text,'html.parser')
cards =soup.find_all('div','jobsearch-SerpJobCard')
print(len(cards))
```
## Prototype Cards of job search
##### First indexing of a job card
```
card=cards[0]
atag=card.h2.a
job_title=atag.get('title')
job_url="https://in.indeed.com/" + atag.get('href')
company = card.find('span', 'company').text.strip()
company
job_location=card.find('div','recJobLoc').get('data-rc-loc')
job_location
job_summary = card.find('div', 'summary').text.strip()
job_summary
job_post=card.find('span','date').text
job_post
f_date=datetime.today().strftime('%d-%m-%y')
f_date
# use try/except: the salary field is sometimes present and sometimes not
try:
job_salary=card.find('span','salaryText').text.strip()
except AttributeError:
job_salary=''
job_salary
```
# Generalize the extraction with a function
```
def get_record(card):
    atag = card.h2.a
    job_title = atag.get('title')
    job_url = "https://in.indeed.com/" + atag.get('href')
    company = card.find('span', 'company').text.strip()
    job_location = card.find('div', 'recJobLoc').get('data-rc-loc')
    job_summary = card.find('div', 'summary').text.strip()
    job_post = card.find('span', 'date').text
    f_date = datetime.today().strftime('%d-%m-%y')
    try:
        job_salary = card.find('span', 'salaryText').text.strip()
    except AttributeError:
        job_salary = ''
    record = (job_title, company, job_location, job_summary, job_post, f_date, job_salary, job_url)
    return record
records=list()
for card in cards:
record=get_record(card)
if record:
records.append(record)
records[0]
while True:
    try:
        url = "https://in.indeed.com/" + soup.find('a', {'aria-label': 'Next'}).get('href')
    except AttributeError:
        break
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    cards = soup.find_all('div', 'jobsearch-SerpJobCard')
    for card in cards:
        record = get_record(card)
        records.append(record)
#all together
import csv
from bs4 import BeautifulSoup
from selenium import webdriver
from datetime import datetime
import requests
def get_url(post,location):
template="https://in.indeed.com/jobs?q={}&l={}"
url=template.format(post,location)
return url
def get_record(card):
    atag = card.h2.a
    job_title = atag.get('title')
    job_url = "https://in.indeed.com/" + atag.get('href')
    company = card.find('span', 'company').text.strip()
    job_location = card.find('div', 'recJobLoc').get('data-rc-loc')
    job_summary = card.find('div', 'summary').text.strip()
    job_post = card.find('span', 'date').text
    f_date = datetime.today().strftime('%d-%m-%y')
    try:
        job_salary = card.find('span', 'salaryText').text.strip()
    except AttributeError:
        job_salary = ''
    record = (job_title, company, job_location, job_summary, job_post, f_date, job_salary, job_url)
    return record
def main(post,location):
records=list()
url=get_url(post,location)
#save data as csv file
while True:
response=requests.get(url)
soup=BeautifulSoup(response.text,'html.parser')
cards =soup.find_all('div','jobsearch-SerpJobCard')
for card in cards:
record=get_record(card)
records.append(record)
        try:
            url = "https://in.indeed.com/" + soup.find('a', {'aria-label': 'Next'}).get('href')
        except AttributeError:
            break
with open('Indeed_jobs.csv','w',newline='',encoding='utf-8')as f:
writer=csv.writer(f)
        writer.writerow(['job_title', 'company', 'job_location', 'job_summary', 'job_post', 'f_date', 'job_salary', 'job_url'])
writer.writerows(records)
main('hadoop','Gurgaon')
```
| github_jupyter |
```
import numpy as np
import tensorflow as tf
experiment = '_16snap'
_04847_img = np.load('4847' + experiment + '-image.npy')
_04799_img = np.load('4799' + experiment + '-image.npy')
_04820_img = np.load('4820' + experiment + '-image.npy')
_05675_img = np.load('5675' + experiment + '-image.npy')
_05680_img = np.load('5680' + experiment + '-image.npy')
_05710_img = np.load('5710' + experiment + '-image.npy')
_04847_lbl = np.load('4847' + experiment + '-label-onehot.npy')
_04799_lbl = np.load('4799' + experiment + '-label-onehot.npy')
_04820_lbl = np.load('4820' + experiment + '-label-onehot.npy')
_05675_lbl = np.load('5675' + experiment + '-label-onehot.npy')
_05680_lbl = np.load('5680' + experiment + '-label-onehot.npy')
_05710_lbl = np.load('5710' + experiment + '-label-onehot.npy')
train_img = np.vstack((_04847_img[5:,], _04799_img[5:,], _04820_img[5:,], _05675_img[5:,], _05680_img[5:,]))
train_lbl = np.vstack((_04847_lbl[5:,], _04799_lbl[5:,], _04820_lbl[5:,], _05675_lbl[5:,], _05680_lbl[5:,]))
#val_img = _05710_img
#val_lbl = _05710_lbl
val_img = np.vstack((_04847_img[:5,], _04799_img[:5,], _04820_img[:5,], _05675_img[:5,], _05680_img[:5,]))
val_lbl = np.vstack((_04847_lbl[:5,], _04799_lbl[:5,], _04820_lbl[:5,], _05675_lbl[:5,], _05680_lbl[:5,]))
print('train_img', train_img.shape)
print('train_lbl', train_lbl.shape)
print('val_img', val_img.shape)
print('val_lbl', val_lbl.shape)
def next_batch(X, y, size):
perm = np.random.permutation(X.shape[0])
for i in np.arange(0, X.shape[0], size):
yield (X[perm[i:i + size]], y[perm[i:i + size]])
def weight_variable(shape, name='W'):
return tf.Variable(tf.truncated_normal(shape, stddev=0.1), name=name)
def bias_variable(shape, name='b'):
return tf.Variable(tf.constant(0.1, shape=shape), name=name)
def loss_accuracy(cross_entropy_count, accuracy_count, x, y_, keep_prob, phase_train, X, Y, batch_size):
c, l = 0, 0
for batch_xs, batch_ys in next_batch(X, Y, batch_size):
feed_dict = {x: batch_xs, y_: batch_ys, keep_prob: 1.0, phase_train: False}
l += cross_entropy_count.eval(feed_dict=feed_dict)
c += accuracy_count.eval(feed_dict=feed_dict)
return float(l) / float(Y.shape[0]), float(c) / float(Y.shape[0])
def conv_layer(input, channels_in, channels_out, name='conv', patch_x=32, patch_y=32):
with tf.name_scope(name):
w = weight_variable([patch_x, patch_y, channels_in, channels_out], name='W')
b = bias_variable([channels_out], name='B')
conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding='SAME') + b
bn = batch_norm(conv, channels_out, phase_train, name + '_bn')
act = tf.nn.relu(bn)
#act = tf.nn.relu(conv)
# tf.summary.histogram('weights', w)
# tf.summary.histogram('biases', b)
# tf.summary.histogram('activations', act)
return tf.nn.max_pool(act, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
def fc_layer(input, channels_in, channels_out, name='fc'):
with tf.name_scope(name):
w = weight_variable([channels_in, channels_out], name='W')
b = bias_variable([channels_out], name='b')
bn = tf.matmul(input, w) + b
# tf.summary.histogram('weights', w)
# tf.summary.histogram('biases', b)
return tf.nn.relu(bn)
def batch_norm(x, n_out, phase_train, name='bn'):
"""
Batch normalization on convolutional maps.
Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
Args:
x: Tensor, 4D BHWD input maps
n_out: integer, depth of input maps
        phase_train: boolean tf.Variable, true indicates training phase
scope: string, variable scope
Return:
normed: batch-normalized maps
"""
with tf.variable_scope(name):
beta = tf.Variable(tf.constant(0.0, shape=[n_out]), name='beta', trainable=True)
gamma = tf.Variable(tf.constant(1.0, shape=[n_out]), name='gamma', trainable=True)
batch_mean, batch_var = tf.nn.moments(x, [0,1,2], name='moments')
ema = tf.train.ExponentialMovingAverage(decay=0.6)
def mean_var_with_update():
ema_apply_op = ema.apply([batch_mean, batch_var])
with tf.control_dependencies([ema_apply_op]):
return tf.identity(batch_mean), tf.identity(batch_var)
mean, var = tf.cond(phase_train,
mean_var_with_update,
lambda: (ema.average(batch_mean), ema.average(batch_var)))
normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, 1e-2)
return normed
x = tf.placeholder(tf.float32, shape=[None, 512, 1024], name='x')
x_image = tf.reshape(x, [-1, 512, 1024, 1], name='x_reshaped')
y_ = tf.placeholder(tf.float32, shape=[None, 2], name='labels')
phase_train = tf.placeholder(tf.bool, name='phase_train')
keep_prob = tf.placeholder(tf.float32, name='keep_prob_fc1')
conv1 = conv_layer(x_image, 1, 32, 'conv1', 16, 16) # 16x16 patch, 1 channel, 32 output channels
conv2 = conv_layer(conv1, 32, 64, 'conv2', 16, 16) # 16x16 patch, 32 channels, 64 output channels
conv2_flat = tf.reshape(conv2, [-1, 16 * 16 * 64]) # flatten conv2
fc1 = fc_layer(conv2_flat, 16 * 16 * 64, 1024, 'fc1') # fc_layer fc1
fc1_drop = tf.nn.dropout(fc1, keep_prob) # dropout on fc1
fc2 = fc_layer(fc1_drop, 1024, 512, name='fc2') # fc_layer fc2
with tf.name_scope('fc3'):
W_fc3 = weight_variable([512, 2])
b_fc3 = bias_variable([2])
# tf.summary.histogram('weights', W_fc3)
# tf.summary.histogram('biases', b_fc3)
y_conv = tf.matmul(fc2, W_fc3) + b_fc3
max_epochs = 200
max_steps = 1000
learning_rate = 1e-5
step, train_acc_arr, train_loss_arr, val_acc_arr, val_loss_arr = 0, [], [], [], []
with tf.name_scope('xent'):
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
cross_entropy_count = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
with tf.name_scope('train'):
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
with tf.name_scope('accuracy'):
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
accuracy_count = tf.reduce_sum(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('cross_entropy', cross_entropy)
tf.summary.scalar('accuracy', accuracy)
# tf.summary.image('input', x_image, 3)
# config = tf.ConfigProto(device_count = {'GPU': 0})
sess = tf.InteractiveSession()
merged_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/brain/5')
writer.add_graph(sess.graph)
sess.run(tf.global_variables_initializer())
for i in range(max_epochs):
for batch_xs, batch_ys in next_batch(train_img, train_lbl, 2):
        print(batch_ys.shape)
feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.5, phase_train: True}
if step % 5 == 0:
s = sess.run(merged_summary, feed_dict=feed_dict)
writer.add_summary(s, step)
train_step.run(feed_dict=feed_dict)
step += 1
if step % 100 == 0:
train_loss, train_accuracy = loss_accuracy(cross_entropy_count, accuracy_count, x, y_, keep_prob, phase_train, train_img, train_lbl, 100)
val_loss, val_accuracy = loss_accuracy(cross_entropy_count, accuracy_count, x, y_, keep_prob, phase_train, val_img, val_lbl, 100)
train_acc_arr.append(train_accuracy), train_loss_arr.append(train_loss)
val_acc_arr.append(val_accuracy), val_loss_arr.append(val_loss)
            print("step %d, train accuracy %.4f, train loss %.5f, val accuracy %g, val loss %g" % (
                step, train_accuracy, train_loss, val_accuracy, val_loss))
test_loss, test_accuracy = loss_accuracy(cross_entropy_count, accuracy_count, x, y_, keep_prob, phase_train,
val_img, val_lbl, 100)
print('Val Accuracy: %g, Val Loss: %g' % (test_accuracy * 100, test_loss))
from tensorflow.python.client import device_lib
def get_available_devices():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos]
get_available_devices()
_04847_img.shape
```
| github_jupyter |
# Homework 1
[](https://colab.research.google.com/github/rhennig/EMA6938/blob/main/Notebooks/Homework1.ipynb)
## Problem 1 (100 points using the rubric)
In this problem, we will investigate a polynomial regression model on a 2-dimensional dataset of $(x, y)$ points.
- Split the dataset into a training and testing set, including 80% and 20% of the data, respectively.
- Using only the numpy and matplotlib libraries, perform a polynomial regression up to 5th order on the provided dataset.
- Calculate the $R^2$ coefficient of determination and the RMSE for the training and the testing set.
- Plot how the $R^2$ value and the RMSE on the training and testing set change with the polynomial order of the regression.
- Based on the plot, what is the optimal polynomial order?
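A minimal sketch of the workflow, assuming hypothetical synthetic data in place of `data.npy` (the actual dataset will differ), using `np.polyfit`/`np.polyval` for the fits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data standing in for data.npy: a cubic trend plus noise
x = np.linspace(-3, 3, 100)
y = 0.5 * x**3 - x + rng.normal(0, 1, x.size)

# 80/20 train/test split via a random permutation of the indices
idx = rng.permutation(x.size)
tr, te = idx[:80], idx[80:]

def fit_eval(order):
    """Fit a polynomial of the given order; return (R^2, RMSE) on train and test."""
    coeffs = np.polyfit(x[tr], y[tr], order)        # least-squares fit on train
    def metrics(xs, ys):
        pred = np.polyval(coeffs, xs)
        rmse = np.sqrt(np.mean((ys - pred) ** 2))
        r2 = 1 - np.sum((ys - pred) ** 2) / np.sum((ys - ys.mean()) ** 2)
        return r2, rmse
    return metrics(x[tr], y[tr]), metrics(x[te], y[te])

for order in range(1, 6):
    (r2_tr, rmse_tr), (r2_te, rmse_te) = fit_eval(order)
    print('order %d: train R^2=%.3f, test R^2=%.3f' % (order, r2_tr, r2_te))
```

Plotting the train/test $R^2$ and RMSE against `order` then reveals where the test error stops improving, which is the optimal order.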
```
# import required modules
import numpy as np
import matplotlib.pyplot as plt
# Load the numpy array from data.npy file
x, y = np.load('data.npy')
# Plot the loaded data set
plt.figure(figsize=(8, 6))
plt.rcParams['font.size'] = '16'
plt.scatter(x, y, s = 30, marker = 'o')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Scatter Data', fontsize=20)
plt.show()
```
## Problem 2 (100 points using the rubric)
In this problem, we will investigate an alternative method for determining the optimal regression parameters. We will implement an iterative solver that minimizes the least-squares error of the model.
Gradient-based iterative solvers are important algorithms for optimizing the parameters of many machine learning methods where analytic solutions do not exist due to the non-linearity and complexity of the models.
### Loss Function
A suitable loss function for many machine learning methods, such as a regression model, is the mean squared error (MSE):
$$
\text{L} = \frac{1}{n} \sum_{i=1}^n (y_i - \hat y_i )^2.
$$
We simply calculate the square of the error and take the mean, hence the name mean squared error.
Substituting the regression equation for the $\hat y_i$, yields:
$$
\text{L} = \frac{1}{n} \sum_{i=1}^n \left (y_i - (\beta_0 + \beta_1 x_i) \right )^2
$$
Now that we have defined the loss function, we need to minimize it to find the regression parameters, β.
### Gradient descent methods
The idea behind gradient descent methods is to only use local information about the loss function, its value and gradient, to minimize the loss function.
This is analogous to being in the mountains in heavy fog without a map and trying to find the bottom of the nearest valley. We simply keep walking downhill until we reach the point where every direction is uphill.
For our algorithm, we will need to calculate the derivative of the loss function with respect to the model parameters, β.
1. Calculate the partial derivative of the loss function, $L$, with respect to the intercept $β_0$.
2. Calculate the partial derivative of the loss function, $L$, with respect to the slope $β_1$.
3. Implement an algorithm using only the numpy and matplotlib libraries, where you change the value of the entries of β following the negative of the gradient.
Update the current value of β using an equation of the form:
$$
\beta_i \rightarrow \beta_i - s \frac{\partial L}{\partial \beta_i}.
$$
Here, $s$ is a parameter that scales the stepsize for the change in β. You have to be careful selecting the stepsize to ensure your algorithm is stable and converges sufficiently fast to the minimum.
4. Test the gradient descent algorithm on the above dataset for linear regression and compare the results with the ones obtained in Problem 1. What do you observe for the convergence?
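A minimal sketch of steps 1–3 for linear regression, on hypothetical synthetic data (the provided dataset will differ); the two gradient expressions in the code are the partial derivatives of the MSE loss above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear data (true intercept 2, slope 3) standing in for data.npy
x = rng.uniform(-1, 1, 200)
y = 2 + 3 * x + rng.normal(0, 0.1, x.size)

beta = np.zeros(2)   # [beta_0, beta_1]
s = 0.1              # step size: too large diverges, too small converges slowly

for _ in range(2000):
    resid = y - (beta[0] + beta[1] * x)
    # Partial derivatives of L = mean((y - (beta_0 + beta_1 x))^2):
    #   dL/dbeta_0 = -2 * mean(resid),  dL/dbeta_1 = -2 * mean(resid * x)
    grad = np.array([-2 * resid.mean(), -2 * (resid * x).mean()])
    beta = beta - s * grad   # step against the gradient

print(beta)  # should approach the true parameters [2, 3]
```

Comparing `beta` with the closed-form least-squares solution from Problem 1 shows how quickly (or slowly) the iteration converges for a given step size.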
## Extra Credit Problems
Extend the regression algorithm of Problem 2:
- Implement a conjugate gradient algorithm for the regression of problem 2 (20 points).
- Determine the coefficients for polynomial regression (20 points).
| github_jupyter |
# CORONA VIRUS PANDEMIC!🦠

# **What is Coronavirus?**
Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus.
Most people infected with the COVID-19 virus will experience mild to moderate respiratory illness and recover without requiring special treatment. Older people, and those with underlying medical problems like cardiovascular disease, diabetes, chronic respiratory disease, and cancer are more likely to develop serious illness.
The best way to prevent and slow down transmission is to be well informed about the COVID-19 virus, the disease it causes and how it spreads. Protect yourself and others from infection by washing your hands or using an alcohol-based rub frequently and not touching your face.
The COVID-19 virus spreads primarily through droplets of saliva or discharge from the nose when an infected person coughs or sneezes, so it’s important that you also practice respiratory etiquette (for example, by coughing into a flexed elbow).[source](https://www.who.int/health-topics/coronavirus#tab=tab_1)
# How would you know that you have contracted coronavirus?
It is extremely important that everyone learns to identify the symptoms of coronavirus. And remembering them isn’t difficult because coronavirus symptoms are the same as those of the flu.
You may have developed a coronavirus infection if you notice these symptoms:
1. Sore, itchy or swollen throat
2. Coughing and sneezing
3. Fever
4. Headache and muscle pain that accompany fever
5. Breathing difficulty
6. Runny nose
There are a few risk factors for coronavirus:
- Having travelled to or lived in countries that have reported coronavirus outbreaks.
- Being elderly.
- Having a history of heart problems and diabetes.
# Coronavirus Dos and Don’ts
To help curb the coronavirus pandemic, here is a list of do’s and don’ts that we must all observe –
**Do's**
- Do wash your hands with soap and water or alcohol-based hand sanitizer frequently, especially if you are using public facilities like transportation and washrooms.
- Do avoid contact with anyone who looks sick.
- Do use a disposable napkin or a handkerchief to cover your face when you cough or sneeze.
**Don'ts**
- Don't be part of a large gathering. If you are within just a few feet of an infected person, you could contract the virus.
- Don't touch your face, nose, mouth or eyes with unwashed hands.
- Don't hide your symptoms for fear of being quarantined.
- Don't travel abroad, especially to countries that are reeling under the coronavirus outbreak.
Coronavirus is wreaking havoc globally. But you can stop it in its tracks if you follow the precautions strictly. So observe personal hygiene and request your loved ones to do the same to stay safe and protected.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
```
**Importing important libraries!!**
```
import seaborn as sns
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import plotly.express as px
from datetime import datetime
%matplotlib inline
```
**Reading our test and train datasets**
```
train=pd.read_csv("/kaggle/input/covid19-global-forecasting-week-5/train.csv")
test=pd.read_csv("/kaggle/input/covid19-global-forecasting-week-5/test.csv")
train.head()
test.head()
```
**Checking the shapes of the train and test datasets, respectively**
```
train.shape
test.shape
```
**Checking for missing values!! County and Province_State have many missing values, which will be hard to recover**
```
train.isnull().sum()
test.isnull().sum()
```
**Saving the train and test IDs in separate variables for later use**
```
ID=train['Id']
FID=test['ForecastId']
```
**Dropping insignificant attributes which have a lot of missing values**
```
train=train.drop(columns=['County','Province_State','Id'])
test=test.drop(columns=['County','Province_State','ForecastId'])
```
**Visualizations to understand how the attributes correlate with each other and how they vary across different distributions**
```
sns.pairplot(train)
```
**Target Value vs Target**
```
sns.barplot(y='TargetValue',x='Target',data=train)
```
**Population vs target Value**
```
sns.barplot(x='Target',y='Population',data=train)
```
**Depiction of target values, country-wise**
```
fig=plt.figure(figsize=(45,30))
fig=px.pie(train, values='TargetValue', names='Country_Region',color_discrete_sequence=px.colors.sequential.RdBu,hole=.4)
fig.update_traces(textposition='inside')
fig.update_layout(uniformtext_minsize=12, uniformtext_mode='hide')
fig.show()
```
**Let's check out what we can learn from our dataset and extract some important information!!**
```
df_grouped=train.groupby(['Country_Region']).sum()
df_grouped.TargetValue
```
**Top 5 countries with the highest target value!**
```
plot=df_grouped.nlargest(5,'TargetValue')
plot
sns.catplot(y="Population", x="TargetValue",height=5,aspect=1,kind="bar", data=plot)
plt.title('Top 5 Target Values',size=20)
plt.show()
```
**Top 5 countries with the highest population**
```
plot=df_grouped.nlargest(5,'Population')
plot
```
**Visualizing treemaps (nested rectangles) in terms of the population and target value of every country. Each group is represented by a rectangle whose area is proportional to its value. Using color schemes, it is possible to represent several dimensions: groups and subgroups**
```
fig = px.treemap(train, path=['Country_Region'], values='TargetValue',
color='Population', hover_data=['Country_Region'],
color_continuous_scale='matter')
fig.show()
```
**Converting Date to int format in both the test and train datasets!**
```
da= pd.to_datetime(train['Date'], errors='coerce')
train['Date']= da.dt.strftime("%Y%m%d").astype(int)
da= pd.to_datetime(test['Date'], errors='coerce')
test['Date']= da.dt.strftime("%Y%m%d").astype(int)
```
**Creating heatmaps for the top 2000 entries in terms of Target Value**
```
plot=train.nlargest(2000,'TargetValue')
plot
fig, ax = plt.subplots(figsize=(10,10))
h=pd.pivot_table(plot,values='TargetValue',
index=['Country_Region'],
columns='Date')
sns.heatmap(h,cmap="RdYlGn",linewidths=0.05)
```
**A lot can be read from this heatmap: we see how it all started in China and slowly spread to Iran, Italy, Spain and, finally, in the worst manner possible, the US**
**Now let's check how things are in the countries with the largest populations globally, like India, the USA, China, Bangladesh and Brazil**
```
plot=train.nlargest(2000,'Population')
plot
fig, ax = plt.subplots(figsize=(20,10))
h=pd.pivot_table(plot,values='TargetValue',
index=['Country_Region'],
columns='Date')
sns.heatmap(h,cmap="RdYlGn",linewidths=0.005)
```
**The US is in a pretty bad, almost alarming, state. Russia and Brazil are slowly moving toward the higher stages. Japan shows a large decline, which is a good sign!! India's intensity is growing as well, which we can infer from the color change toward recent times**
**Displaying the object-dtype columns present in our dataset**
```
train.select_dtypes(include=['object']).columns
test.select_dtypes(include=['object']).columns
```
**Preprocessing data to transform object data types**
```
from sklearn.preprocessing import LabelEncoder
l = LabelEncoder()
X = train.iloc[:,0].values
train.iloc[:,0] = l.fit_transform(X.astype(str))
X = train.iloc[:,4].values
train.iloc[:,4] = l.fit_transform(X)
from sklearn.preprocessing import LabelEncoder
l = LabelEncoder()
X = test.iloc[:,0].values
test.iloc[:,0] = l.fit_transform(X.astype(str))
X = test.iloc[:,4].values
test.iloc[:,4] = l.fit_transform(X)
```
**Displaying the table head to see the final form of the data**
```
train.head()
```
**Setting up our model for prediction!! (fingers crossed)**
```
y_train=train['TargetValue']
x_train=train.drop(['TargetValue'],axis=1)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_train, y_train, test_size=0.2, random_state=0)
```
**We will be using a Random Forest regressor for this!!
A Random Forest is an ensemble technique capable of performing both regression and classification tasks, using multiple decision trees and a technique called Bootstrap Aggregation, commonly known as bagging** [source](https://www.google.com/search?rlz=1C1CHWL_enIN772IN772&ei=HPa4XubeNc6b9QPl7ZTQCQ&q=random+forest+regression&oq=random+forest+regression&gs_lcp=CgZwc3ktYWIQAzICCAAyAggAMgIIADICCAAyAggAMgIIADICCAAyAggAMgIIADICCAA6BAgAEEc6BAgAEENQpQ9YkSlg_itoAXABeAGAAbgEiAG-GZIBCzAuMS44LjEuMS4xmAEAoAEBqgEHZ3dzLXdpeg&sclient=psy-ab&ved=0ahUKEwjm3IjLnKvpAhXOTX0KHeU2BZoQ4dUDCAw&uact=5)
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
pip = Pipeline([('scaler2' , StandardScaler()),
('RandomForestRegressor: ', RandomForestRegressor())])
pip.fit(x_train , y_train)
prediction = pip.predict(x_test)
```
**Prediction score!!! Seems pretty good**
```
acc=pip.score(x_test,y_test)
acc
predict=pip.predict(test)
output=pd.DataFrame({'id':FID,'TargetValue':predict})
output
```
**Let's calculate the quantile values for the target values. In case you are wondering what the quantile function is, here you go: the pandas `DataFrame.quantile()` function returns values at the given quantile over the requested axis, like `numpy.percentile`. Quantiles are any set of values of a variate which divide a frequency distribution into equal groups, each containing the same fraction of the total population.** [source](https://www.geeksforgeeks.org/python-pandas-dataframe-quantile/)
Also, I would like to mention that since I was struggling to get the final format as specified in the submission file, I referred to [this kernel](https://www.kaggle.com/nischaydnk/covid19-week5-visuals-randomforestregressor)!!
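As a quick illustration of how `quantile` behaves after a `groupby`, here is a tiny sketch with hypothetical ids and values (the real `output` frame has one prediction per row):

```python
import pandas as pd

# Hypothetical predictions: two ids with several candidate target values each
out = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2],
                    'TargetValue': [10, 20, 30, 5, 15, 25]})

# Median (q=0.5) per id, the same pattern used below for q=0.05/0.5/0.95
med = out.groupby('id')['TargetValue'].quantile(q=0.5).reset_index()
print(med)
```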
```
a=output.groupby(['id'])['TargetValue'].quantile(q=0.05).reset_index()
b=output.groupby(['id'])['TargetValue'].quantile(q=0.5).reset_index()
c=output.groupby(['id'])['TargetValue'].quantile(q=0.95).reset_index()
```
**Bringing all the calculated values into the same dataframe, i.e. the q0.05, q0.5 and q0.95 values!!**
```
a.columns=['Id','q0.05']
b.columns=['Id','q0.5']
c.columns=['Id','q0.95']
a = pd.concat([a, b['q0.5'], c['q0.95']], axis=1)
a
```
**Time for making the submission!!**
```
sub=pd.melt(a, id_vars=['Id'], value_vars=['q0.05','q0.5','q0.95'])
sub['variable']=sub['variable'].str.replace("q","", regex=False)
sub['ForecastId_Quantile']=sub['Id'].astype(str)+'_'+sub['variable']
sub['TargetValue']=sub['value']
sub=sub[['ForecastId_Quantile','TargetValue']]
sub.reset_index(drop=True,inplace=True)
sub.to_csv("submission.csv",index=False)
sub.head()
```
| github_jupyter |
## Data Visualization in Python
**Material prepared by Arina Sitnikova**
Now that we have covered the basics of data preprocessing in Python using the NumPy and Pandas libraries, we can move on to a very interesting and very important block that makes up a sizeable part of a data scientist's work.
So, in this lesson we will learn to visualize data. Data visualization helps present complex datasets in a simple and clear form, so that even non-specialists without a technical background can understand them.
When a dataset is low-dimensional (or when our goal is to compare a particular *pair* of features), visualization is a great way to get a general feel for the data, spot likely trends, or even, with some luck, draw certain conclusions about how the data behave.
Visualization can clearly show how well a trained model performs and whether underfitting or overfitting is present. Plots are also a good tool for visually spotting potential outliers.
We will look at the two most popular libraries, matplotlib and seaborn:
- **Matplotlib**: one of the most frequently used plotting tools; this library is great for creating 2D and 3D visualizations.
- **Seaborn**: built on top of matplotlib and more oriented toward statistical plotting.
Both libraries let you embed plots directly into Jupyter notebooks, and they also allow plots to be exported/saved as separate image files (with the .png, .jpg, etc. extensions).
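For example, saving a figure to an image file can be done with `plt.savefig` (a minimal sketch; the Agg backend line is only needed when running outside a notebook):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this also runs outside a notebook
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [2, 4, 1])
plt.title('Demo')
plt.savefig('demo_plot.png', dpi=150)  # the extension picks the format; .jpg, .pdf, .svg also work
plt.close()
```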
### Matplotlib
Matplotlib was created by John Hunter, a neuroscientist who wanted to bring MATLAB's visualization capabilities to Python (Hunter himself admitted that matplotlib began as an imitation of MATLAB's plotting commands). Indeed, although matplotlib is built on OOP principles, it has a procedural interface, pylab, which provides analogues of MATLAB commands.
As with any other library in Python, the first step in using matplotlib is to import it. If we want plots to be displayed directly in the Jupyter notebook rather than in a separate window/tab, we need an extra command:
```
import matplotlib.pyplot as plt
%matplotlib inline
```
Let's also not forget to import the already familiar numpy and pandas libraries:
```
import pandas as pd
import numpy as np
```
#### Introduction. Basic plots.
Let's start by building the simplest possible plot. Although it takes just one line of code, we get almost no control over the plot's appearance in return. But we have to start somewhere!
First, let's create a small pseudo-dataset:
```
df1 = pd.DataFrame({'A':np.random.randint(75,200,50),'B':np.random.randint(0,25,50),'C':np.random.randint(-50,15,50)})
df1.head(10)
```
We can plot a single column with no effort at all; we only need to call the `.plot()` method:
```
df1['A'].plot();
```
When `.plot()` is used without additional arguments, the points are by default connected with a continuous line. The plot type can be changed by passing the *kind* argument to the function (for example, *kind = 'bar'* displays a bar chart). Note that if we do not explicitly specify the desired line color, matplotlib chooses it for us.
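As a quick illustration of the *kind* argument, the sketch below (with hypothetical random data) draws a column as a bar chart and sets the color explicitly:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so the sketch also runs outside a notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': np.random.randint(75, 200, 10)})  # hypothetical data
ax = df['A'].plot(kind='bar', color='steelblue', title='Column A as bars')
plt.close(ax.figure)
```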
Of course, matplotlib is not limited to plotting a single variable. We can visualize all the columns of a dataframe at once:
```
df1.plot();
```
Default plots look quite plain and not terribly informative, don't they? Fortunately, we can change a plot's appearance considerably by passing various arguments to the function: changing line colors, adding a title, a legend (labeling the lines), axis labels, and so on:
```
df1['A'].plot(title = 'Column A', legend = True, color = 'black');
```
If instead we want to look at each observation individually (for example, to show the presence or absence of correlation between two variables), we can use `plt.scatter()`, which draws a scatter plot.
Unlike the line plots above, where a single sequence of values is enough (the index is used for x), `plt.scatter()` requires at least two arguments: an independent variable (for example, the index or one of the columns) and a dependent variable:
```
plt.scatter(df1.index, df1['A']);
```
#### Plot types and when to use them
So far we have seen two plot types: scatter plots and line plots. How do we decide which type suits a given situation? The general guidelines are:
1. A **line plot** works well for data whose observations are related to one another in some way; for example, temperature changing over a period of time.
2. A **scatter plot** is useful when the observations are unrelated and can be viewed independently.
Let's again create an artificial dataset, with house square footage as the index and price as the feature, and draw a scatter plot:
```
dfhp = pd.DataFrame({'House Price':[284, 302, 376, 372, 341, 385, 361, 345, 371, 317, 337, 404, 408, 367, 486, 402,
477, 475, 455, 456, 492, 515, 535, 567, 519, 580, 534, 539, 550, 618, 589, 624,
566, 630, 624, 596, 634, 639, 721, 690, 688, 751, 777, 798, 821, 781, 800, 797, 803,
898]}, index = [1710, 1737, 1767, 1779, 1791, 1810, 1819, 1825, 1869, 1896, 1963, 2047,
2060, 2201, 2328, 2337, 2465, 2498, 2538, 2577, 2579, 2671, 2680, 2758,
2831, 2866, 3018, 3053, 3084, 3106, 3133, 3143, 3156, 3159, 3169, 3247,
3384, 3391, 3528, 3537, 3596, 3790, 3811, 3970, 4025, 4046, 4204, 4211,
4274, 4410])
dfhp.index.name = 'Square Footage'
dfhp.head()
plt.scatter(dfhp.index, dfhp['House Price']);
```
Clearly the data is easy to read: each point shows the value for a particular house, and the houses are not connected to one another, so every observation stands on its own.
It would be much harder to read the same data in the following form:
```
dfhp.plot();
```
Not exactly obvious what the data is about, is it?
But if we return to the example from the line-plot definition (air temperature changing over time), the same kind of plot reads completely differently: here we can see a clear relationship between consecutive observations:
```
dftp = pd.DataFrame({'Temp':[10.1, 9.7, 10.4, 10.4, 12.2, 12.8, 13.3, 13.1, 13.8, 13.3, 14.6, 14.7, 14.1, 15.8, 15.8,
16.2, 16.4, 16.0, 15.9, 17.9, 16.6, 17.5, 17.4, 18.2, 18.1, 17.3, 18.6, 18.7, 17.5, 17.2,
17.2, 17.3, 18.2, 17.9, 18.1, 17.7, 16.9, 18.3, 17.0, 16.3, 17.4, 15.7, 15.4, 15.2, 15.7,
15.7, 15.0, 14.0, 14.8]},
index = pd.date_range('2017-05-01 08:00:00',periods = 49,freq = '0.25H'))
dftp.head()
dftp.plot();
```
The matplotlib library can also draw bar charts.
Note that if we want to visualize a quantitative variable, a traditional bar chart is not the right tool. Bar charts are great for categorical variables (variables that take a limited set of *non-numeric* values), since each category gets its own bar on the x axis and the differences between categories are easy to see from the bar heights.
If we try to build a bar chart for quantitative data, however, the x axis ends up with so many values that its labels turn into an unreadable mess. A histogram, by contrast, is excellent at showing how a quantitative variable is distributed and is very useful for presenting the distribution of the data. In other words, a histogram shows how often each value of a variable appears in the dataset. In our last example, that variable is air temperature.
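As a sketch of the categorical case (with a hypothetical non-numeric column), we can count category frequencies and draw one bar per category:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt

# Hypothetical categorical variable: a limited set of non-numeric values
fruit = pd.Series(['apple', 'pear', 'apple', 'plum', 'pear', 'apple'])
counts = fruit.value_counts()  # frequency of each category
ax = counts.plot(kind='bar')   # one bar per category
print(dict(counts))
```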
A histogram can be built either through the generic `.plot(kind = 'hist')` method or directly via `plt.hist()`. Both ways are shown below:
```
dftp['Temp'].plot(kind = 'hist', bins = 20);
plt.hist(dftp['Temp'], bins = 20);
```
A histogram groups numeric values into ranges, i.e. it shows how frequently values fall into each interval. The number of ranges is controlled by the *bins* argument.
Beyond the plot types covered above, matplotlib offers a huge number of others. More examples can be found here: https://matplotlib.org/gallery.html
#### Several plots on one figure
Sometimes we need to compare a whole set of variables. We certainly don't want to flip through several plots, probably drawn at different scales; it is far more convenient to place several series on a single figure. Luckily, this is easy to do.
Let's return to the very first dataset we created at the start. Suppose we want a scatter plot for each of the variables A, B, and C. Perhaps surprisingly, if we run three *separate* scatter commands, Python places them on the same figure, since the independent variable (in our case, the index) is identical for all three columns:
```
plt.scatter(df1.index,df1['A']);
plt.scatter(df1.index,df1['B']);
plt.scatter(df1.index,df1['C']);
```
We can build a line plot the same way:
```
df1['A'].plot();
df1['C'].plot();
```
Of course, this code isn't very efficient: each variable needs its own line of code. With many variables, the code becomes bulky (if repetitive) and hard to read.
For line plots, however, there is a more elegant solution: simply pass the list of variables/columns we want to plot:
```
df1[['A','C']].plot();
```
Note that if we tried to plot from two *different* dataframes, Python would not combine them for us and would typically produce two separate figures:
```
df = pd.DataFrame({'X':np.random.randint(0,100,30)},index = pd.date_range('2016-01-01',periods = 30,freq = 'h'))
df1['A'].plot(color = 'blue');
df.plot(color = 'green');
```
#### Building more advanced plots
Although matplotlib can produce basic plots with literally one line of code, we very often want more control over the appearance and format of the figure. That takes a bit more work, but we can handle it :) The code does become noticeably less trivial and more verbose:
```
fig = plt.figure() # create the figure as an object
plt.plot(x, y1, label = 'name1', color = 'colorname or hex code')
plt.plot(x, y2, label = 'name2', color = 'colorname or hex code') # add as many series as needed,
                                 # passing the desired arguments (optional)
plt.xlabel('label for x', size = number) # x-axis label
plt.ylabel('label for y', size = number) # y-axis label
plt.legend() # add a legend
plt.grid() # draw a grid on the plot
plt.title('Title') # figure title
plt.show() # render the figure
```
Let's use the very first pseudo-dataframe again to build a figure. We'll label the x axis "Days since first trade", the y axis "Price", and give the figure the title "Stock prices". For easier reading we'll also turn the grid on with `plt.grid(True)`:
```
fig = plt.figure()
plt.plot(df1.index,df1['A'],label='Stock A', color='red')
plt.plot(df1.index,df1['B'],label='Stock B', color='green')
plt.plot(df1.index,df1['C'],label='Stock C', color='blue')
plt.xlabel('Days since first trade', size=12)
plt.ylabel('Price',size=12)
plt.legend(loc = 5)
plt.title('Stock prices',size=18)
plt.grid(True)
plt.show()
```
This looks much more interesting, doesn't it? We can still see empty areas at both ends of the x axis, though, which spoil the impression slightly. We can remove them by restricting the variable range with `plt.xlim()`. This limits the x axis to exactly the values present in the data; in our case, 0 to 49:
```
fig = plt.figure()
plt.plot(df1.index,df1['A'],label='Stock A', color='red')
plt.plot(df1.index,df1['B'],label='Stock B', color='green')
plt.plot(df1.index,df1['C'],label='Stock C', color='blue')
plt.xlabel('Days since first trade', size=10)
plt.ylabel('Price',size=12)
plt.legend(loc = 5)
plt.title('Stock prices',size=18)
plt.grid(True)
plt.xlim(0,49)
plt.show()
```
We can also control the appearance of the lines. Matplotlib accepts a `linestyle` argument with four options: `'-'`, `'--'`, `'-.'` and `':'`, for solid, dashed, dash-dot, and dotted lines respectively.
Let's play with our data a little:
```
fig = plt.figure()
plt.plot(df1.index,df1['A'],':',label='Stock A', color='red',linewidth=1.4)
plt.plot(df1.index,df1['B'],'--',label='Stock B', color='green',linewidth=0.5)
plt.plot(df1.index,df1['C'],'-.',label='Stock C', color='blue',linewidth=2)
plt.xlabel('Days since first trade', size=10)
plt.ylabel('Price',size=12)
plt.legend(loc = 5)
plt.title('Stock prices',size=18)
plt.grid(True)
plt.xlim(0,49)
plt.show()
```
With broken lines like the ones above, it can help to mark each observation with a marker. The most commonly used markers are `"."` (point), `","` (pixel), `"o"` (circle), and `"D"` (diamond). Alternatively, we can pass the format string `.-`, which draws a solid line marked with points. Let's revisit the previous example:
```
fig = plt.figure()
plt.plot(df1.index,df1['A'],'.-',label='Stock A', color='red',linewidth=1.4)
plt.plot(df1.index,df1['B'],label='Stock B',marker='o' ,color='green',linewidth=0.5)
plt.plot(df1.index,df1['C'],label='Stock C',marker='D', color='blue',linewidth=2)
plt.xlabel('Days since first trade', size=10)
plt.ylabel('Price',size=12)
plt.legend(loc = 5)
plt.title('Stock prices',size=18)
plt.grid(True)
plt.xlim(0,49)
plt.show()
```
A full list of markers is available here: https://matplotlib.org/api/markers_api.html
#### Saving plots
If we need a particular plot as a file, matplotlib makes saving any figure very easy: just replace `plt.show()` with `plt.savefig('filepath')`. Note that the saved image is fairly small by default, but we can control its resolution by passing the *dpi* argument to `plt.savefig()`.
Let's adjust the familiar example and take a look:
```
fig = plt.figure()
plt.plot(df1.index,df1['A'],'.-',label='Stock A', color='red',linewidth=1.4)
plt.plot(df1.index,df1['B'],label='Stock B',marker='o' ,color='green',linewidth=0.5)
plt.plot(df1.index,df1['C'],label='Stock C',marker='D', color='blue',linewidth=2)
plt.xlabel('Days since first trade', size=10)
plt.ylabel('Price',size=12)
plt.legend(loc = 5)
plt.title('Stock prices',size=18)
plt.grid(True)
plt.xlim(0,49)
plt.savefig('stock_prices.png') # saves the figure to the same directory as this jupyter notebook,
# though any other path can be given
```
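The *dpi* argument mentioned above can be sketched like this (the file name is hypothetical, and a temporary directory is used so the snippet runs anywhere):

```python
import os
import tempfile
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt

fig = plt.figure()
plt.plot(np.arange(50))
path = os.path.join(tempfile.gettempdir(), 'stock_prices_hires.png')
plt.savefig(path, dpi=300)  # higher dpi -> larger, sharper image file
print(os.path.getsize(path) > 0)
```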
#### Subplots
One useful feature of matplotlib is the ability to place several plots on a single figure. One way to build such a figure is the `gridspec` module, which lets us combine several charts (called subplots). As a first step we define how the subplots will be laid out, using `GridSpec(rows, columns)`, where the first argument is the number of rows of the grid and the second is the number of columns. Each subplot is then built separately with `plt.subplot2grid()`, specifying its position within the grid each time. In general the syntax looks like this:
`GridSpec(number of rows, number of columns)`<br>
`plt.subplot2grid((grid rows, grid columns), (plot row position, plot column position))`<br>
`plt.plot(x,y)`
Let's return to the previous example, but this time build two separate plots for Stock A and Stock B, placed one above the other on the same figure; that means two rows and a single column. The positions of the plots will be `(0,0)` and `(1,0)` for Stock A and Stock B respectively (as always, Python counts from zero, not one):
```
import matplotlib.gridspec as gridspec
fig = plt.figure()
# define the grid by specifying the number of rows and columns
gridspec.GridSpec(2,1)
plt.subplot2grid((2,1), (0,0)) # (2,1) = overall grid shape, (0,0) = coordinates of *this* subplot
plt.plot(df1.index,df1['A'],label='Stock A', color='red')
plt.xlabel('Days since first trade', size=12)
plt.ylabel('Price',size=12)
plt.legend()
plt.title('Stock prices',size=18)
plt.grid()
plt.xlim(0,49)
plt.subplot2grid((2,1), (1,0))
plt.plot(df1.index,df1['B'],label='Stock B', color='blue')
plt.xlabel('Days since first trade', size=12)
plt.ylabel('Price',size=12)
plt.legend()
plt.grid()
plt.xlim(0,49)
plt.show()
```
Of course, we can create a much more complex layout by stretching plots across several rows or columns.
Let's now plot all three stocks, with the Stock A plot sized 2x2 and the Stock B and Stock C plots, each 1x1, placed below it. To "stretch" the first subplot we specify that when building it; in general the call looks like this:
`plt.subplot2grid((r1,c1),(px,py),colspan=n,rowspan=m)`
Back to our example:
```
gridspec.GridSpec(3,2)
# Three rows, two columns. The first plot - Stock A - sits at position (0, 0) and is stretched
# across 2 rows and 2 columns
# The Stock B and Stock C plots sit at (2, 0) and (2, 1) respectively;
# both are regular size (1x1), each half the width of the top plot
plt.subplot2grid((3,2), (0,0), colspan = 2, rowspan = 2)
plt.plot(df1.index,df1['A'],label='Stock A', color='red')
plt.ylabel('Price',size=12)
plt.legend()
plt.title('Stock prices',size=18)
plt.grid()
plt.xlim(0,49)
plt.subplot2grid((3,2), (2,0))
plt.plot(df1.index,df1['B'],label='Stock B', color='blue')
plt.xlabel('Days since first trade', size=8)
plt.ylabel('Price',size=12)
plt.legend()
plt.grid()
plt.xlim(0,49)
plt.subplot2grid((3,2), (2,1))
plt.plot(df1.index,df1['C'],label='Stock C', color='green')
plt.xlabel('Days since first trade', size=8)
plt.ylabel('',size=12)
plt.legend()
plt.grid()
plt.xlim(0,49)
plt.show()
```
### Seaborn
Seaborn is an excellent library for statistical plots. Its syntax is easy to use, and its plots look much better than matplotlib's defaults. Its main drawback is that seaborn's functionality is somewhat limited, so we don't get full control over a plot's appearance.
It is best to treat seaborn and matplotlib as libraries that complement rather than replace each other, since both have strengths and weaknesses.
To start working with seaborn we, of course, need to import it:
```
import seaborn as sns
```
To get a feel for seaborn, let's start by visualizing simple data. First we generate three samples from a normal distribution, each containing 10,000 observations.
```
normal_data = np.random.normal(size=(10000, 3)) + np.arange(3) / 2
```
Since we have already spent plenty of time on traditional plots (scatter plots, histograms, line plots), let's practice on statistical ones.
We use descriptive statistics constantly. The mean, median, minimum, maximum and so on say a lot about the data we work with, but it is hard to judge a dataset's distribution from that handful of numbers alone. Fortunately, seaborn offers a very useful plot: the box plot (or "box-and-whisker" plot). It conveniently shows the median, the lower and upper quartiles, the minimum and maximum of the sample, and the outliers. Several such boxes can be drawn side by side to compare one distribution against another visually.
Let's first build a box plot from the generated data, and then work out how it works and what it consists of:
```
sns.boxplot(data = normal_data);
```
So what is a box plot? It consists of a box (as the name suggests), whiskers, and points. The box spans the interquartile range of the distribution, from the first to the third quartile. The horizontal line inside the box marks the median.
**FYI:** Quartiles split an ordered dataset into four equal parts. The first quartile, Q1, is the point below which 25% of the dataset's values lie. The median is Q2, the value that splits the dataset in half. The third quartile, Q3, is the point above which 25% of the dataset lies (it exceeds 75% of the values). Accordingly, the interquartile range is everything between Q1 and Q3.
The whiskers extend to the most distant points within 1.5 interquartile ranges (IQR) on either side of the box. Mathematically, the whiskers cover the values that fall in the interval $(Q1 - 1.5*IQR, Q3 + 1.5*IQR)$, where $IQR = Q3 - Q1$.
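The quartile and whisker arithmetic above can be checked numerically. A minimal sketch on a small hypothetical sample (using NumPy's default linear interpolation for percentiles):

```python
import numpy as np

data = np.array([1, 2, 4, 4, 5, 7, 8, 9, 10, 30])
q1, q2, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
# Whisker bounds as defined in the text
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(q1, q2, q3, iqr, outliers.tolist())
```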
For clarity, here is a simple picture:
<img src = "box_plot.jpg">
Box plots are far from the only way to visualize a distribution. For example, there are rather pretty plots called violin plots. They resemble box plots (in essence each contains a miniature box plot), with a density estimate of the distribution drawn vertically on either side.
Let's build violin plots for our data:
```
sns.violinplot(data = normal_data);
```
A natural question arises: why do we need these plots if we already have box plots?
As we noted earlier, box plots show descriptive statistics and spread well, but they are not very good at revealing the shape of a distribution. When a distribution departs strongly from normal, a violin plot can be more useful than a box plot, since the "strangeness" of the distribution becomes easier to see.
**Bonus.** If we want prettier plots, seaborn lets us change the plot "theme" at any point. Right now the figures sit on a white background, but we can change the style with `sns.set_style()`. A couple of examples:
```
sns.set_style("whitegrid")
sns.violinplot(data = normal_data);
```
Fans of darker colors may prefer backgrounds like this:
```
sns.set_style("dark")
sns.violinplot(data = normal_data);
```
The values darkgrid and ticks are also available.
This is just one of several ways to control a plot's appearance. You can read more here:
https://seaborn.pydata.org/tutorial/aesthetics.html
Finally, let's look at one last, slightly more involved plot: the pair plot. In supervised learning we often have to check certain assumptions before choosing a model. For example, before fitting a linear regression we should make sure there is no multicollinearity, i.e. high correlation between features.
A pair plot lets us see on one figure how the various features relate to one another. The diagonal of the plot matrix holds histograms of each feature's distribution; the remaining cells are simple scatter plots for the corresponding pairs of features. With those scatter plots we can confirm or reject a suspicion that a particular pair of features is correlated.
Remember the very first dataset (the artificially generated stock prices)? Let's return to that data and build a pair plot:
```
sns.pairplot(df1);
```
As mentioned, the diagonal holds distribution plots, and the scatter plots show how particular pairs of features relate to each other.
### Summary
We have covered the most popular kinds of traditional and statistical plots and practiced building them with matplotlib and seaborn. Since it is impossible to cover every aspect of data visualization (there are dozens, if not hundreds, of appearance parameters alone), the best way to sharpen your skills is to dig into the documentation, study example visualizations, and practice on different datasets; Kaggle has plenty of good ones.
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# DeepDream
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/generative/deepdream"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/deepdream.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial contains a minimal implementation of DeepDream, as described in this [blog post](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) by Alexander Mordvintsev.
DeepDream is an experiment that visualizes the patterns learned by a neural network. Similar to when a child watches clouds and tries to interpret random shapes, DeepDream over-interprets and enhances the patterns it sees in an image.
It does so by forwarding an image through the network, then calculating the gradient of the image with respect to the activations of a particular layer. The image is then modified to increase these activations, enhancing the patterns seen by the network, and resulting in a dream-like image. This process was dubbed "Inceptionism" (a reference to [InceptionNet](https://arxiv.org/pdf/1409.4842.pdf), and the [movie](https://en.wikipedia.org/wiki/Inception) Inception).
Let's demonstrate how you can make a neural network "dream" and enhance the surreal patterns it sees in an image.

```
import tensorflow as tf
import numpy as np
import matplotlib as mpl
import IPython.display as display
import PIL.Image
```
## Choose an image to dream-ify
For this tutorial, let's use an image of a [labrador](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg).
```
url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'
# Download an image and read it into a NumPy array.
def download(url, max_dim=None):
name = url.split('/')[-1]
image_path = tf.keras.utils.get_file(name, origin=url)
img = PIL.Image.open(image_path)
if max_dim:
img.thumbnail((max_dim, max_dim))
return np.array(img)
# Normalize an image
def deprocess(img):
img = 255*(img + 1.0)/2.0
return tf.cast(img, tf.uint8)
# Display an image
def show(img):
display.display(PIL.Image.fromarray(np.array(img)))
# Downsizing the image makes it easier to work with.
original_img = download(url, max_dim=500)
show(original_img)
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
```
## Prepare the feature extraction model
Download and prepare a pre-trained image classification model. You will use [InceptionV3](https://keras.io/api/applications/inceptionv3/) which is similar to the model originally used in DeepDream. Note that any [pre-trained model](https://keras.io/api/applications/#available-models) will work, although you will have to adjust the layer names below if you change this.
```
base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
```
The idea in DeepDream is to choose a layer (or layers) and maximize the "loss" in a way that the image increasingly "excites" the layers. The complexity of the features incorporated depends on layers chosen by you, i.e, lower layers produce strokes or simple patterns, while deeper layers give sophisticated features in images, or even whole objects.
The InceptionV3 architecture is quite large (for a graph of the model architecture see TensorFlow's [research repo](https://github.com/tensorflow/models/tree/master/research/slim)). For DeepDream, the layers of interest are those where the convolutions are concatenated. There are 11 of these layers in InceptionV3, named 'mixed0' through 'mixed10'. Using different layers will result in different dream-like images. Deeper layers respond to higher-level features (such as eyes and faces), while earlier layers respond to simpler features (such as edges, shapes, and textures). Feel free to experiment with the layers selected below, but keep in mind that deeper layers (those with a higher index) will take longer to compute, since the gradient computation runs through more of the network.
```
# Maximize the activations of these layers
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]
# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
```
## Calculate loss
The loss is the sum of the activations in the chosen layers. The loss is normalized at each layer so the contribution from larger layers does not outweigh smaller layers. Normally, loss is a quantity you wish to minimize via gradient descent. In DeepDream, you will maximize this loss via gradient ascent.
```
def calc_loss(img, model):
# Pass forward the image through the model to retrieve the activations.
# Converts the image into a batch of size 1.
img_batch = tf.expand_dims(img, axis=0)
layer_activations = model(img_batch)
if len(layer_activations) == 1:
layer_activations = [layer_activations]
losses = []
for act in layer_activations:
loss = tf.math.reduce_mean(act)
losses.append(loss)
return tf.reduce_sum(losses)
```
## Gradient ascent
Once you have calculated the loss for the chosen layers, all that is left is to calculate the gradients with respect to the image, and add them to the original image.
Adding the gradients to the image enhances the patterns seen by the network. At each step, you will have created an image that increasingly excites the activations of certain layers in the network.
The method that does this, below, is wrapped in a `tf.function` for performance. It uses an `input_signature` to ensure that the function is not retraced for different image sizes or `steps`/`step_size` values. See the [Concrete functions guide](../../guide/function.ipynb) for details.
```
class DeepDream(tf.Module):
def __init__(self, model):
self.model = model
@tf.function(
input_signature=(
tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),
tf.TensorSpec(shape=[], dtype=tf.int32),
tf.TensorSpec(shape=[], dtype=tf.float32),)
)
def __call__(self, img, steps, step_size):
print("Tracing")
loss = tf.constant(0.0)
for n in tf.range(steps):
with tf.GradientTape() as tape:
# This needs gradients relative to `img`
# `GradientTape` only watches `tf.Variable`s by default
tape.watch(img)
loss = calc_loss(img, self.model)
# Calculate the gradient of the loss with respect to the pixels of the input image.
gradients = tape.gradient(loss, img)
# Normalize the gradients.
gradients /= tf.math.reduce_std(gradients) + 1e-8
# In gradient ascent, the "loss" is maximized so that the input image increasingly "excites" the layers.
# You can update the image by directly adding the gradients (because they're the same shape!)
img = img + gradients*step_size
img = tf.clip_by_value(img, -1, 1)
return loss, img
deepdream = DeepDream(dream_model)
```
## Main Loop
```
def run_deep_dream_simple(img, steps=100, step_size=0.01):
# Convert from uint8 to the range expected by the model.
img = tf.keras.applications.inception_v3.preprocess_input(img)
img = tf.convert_to_tensor(img)
step_size = tf.convert_to_tensor(step_size)
steps_remaining = steps
step = 0
while steps_remaining:
if steps_remaining>100:
run_steps = tf.constant(100)
else:
run_steps = tf.constant(steps_remaining)
steps_remaining -= run_steps
step += run_steps
loss, img = deepdream(img, run_steps, tf.constant(step_size))
display.clear_output(wait=True)
show(deprocess(img))
print ("Step {}, loss {}".format(step, loss))
result = deprocess(img)
display.clear_output(wait=True)
show(result)
return result
dream_img = run_deep_dream_simple(img=original_img,
steps=100, step_size=0.01)
```
## Taking it up an octave
Pretty good, but there are a few issues with this first attempt:
1. The output is noisy (this could be addressed with a `tf.image.total_variation` loss).
1. The image is low resolution.
1. The patterns appear like they're all happening at the same granularity.
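The `tf.image.total_variation` penalty mentioned in point 1 can be sketched independently of TensorFlow. This NumPy version computes the same basic quantity (the sum of absolute differences between neighboring pixels) for a single-channel image; a noisy image scores much higher than a smooth one, which is why subtracting it from the loss discourages noise:

```python
import numpy as np

def total_variation(img):
    # Sum of absolute differences between vertically and horizontally
    # adjacent pixels: the quantity a total-variation penalty drives down.
    dh = np.abs(img[1:, :] - img[:-1, :]).sum()
    dw = np.abs(img[:, 1:] - img[:, :-1]).sum()
    return dh + dw

smooth = np.zeros((4, 4))
noisy = np.zeros((4, 4))
noisy[::2, ::2] = 1.0  # speckle pattern: many neighbor differences
print(total_variation(smooth), total_variation(noisy))
```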
One approach that addresses all these problems is applying gradient ascent at different scales. This will allow patterns generated at smaller scales to be incorporated into patterns at higher scales and filled in with additional detail.
To do this you can perform the previous gradient ascent approach, then increase the size of the image (which is referred to as an octave), and repeat this process for multiple octaves.
```
import time
start = time.time()
OCTAVE_SCALE = 1.30
img = tf.constant(np.array(original_img))
base_shape = tf.shape(img)[:-1]
float_base_shape = tf.cast(base_shape, tf.float32)
for n in range(-2, 3):
new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32)
img = tf.image.resize(img, new_shape).numpy()
img = run_deep_dream_simple(img=img, steps=50, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)
end = time.time()
end-start
```
## Optional: Scaling up with tiles
One thing to consider is that as the image increases in size, so will the time and memory necessary to perform the gradient calculation. The above octave implementation will not work on very large images, or many octaves.
To avoid this issue you can split the image into tiles and compute the gradient for each tile.
Applying random shifts to the image before each tiled computation prevents tile seams from appearing.
Start by implementing the random shift:
```
def random_roll(img, maxroll):
  # Randomly shift the image to avoid tiled boundaries.
  shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32)
  img_rolled = tf.roll(img, shift=shift, axis=[0,1])
  return shift, img_rolled

shift, img_rolled = random_roll(np.array(original_img), 512)
show(img_rolled)
```
Here is a tiled equivalent of the `deepdream` function defined earlier:
```
class TiledGradients(tf.Module):
  def __init__(self, model):
    self.model = model

  @tf.function(
      input_signature=(
        tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.int32),)
  )
  def __call__(self, img, tile_size=512):
    shift, img_rolled = random_roll(img, tile_size)

    # Initialize the image gradients to zero.
    gradients = tf.zeros_like(img_rolled)

    # Skip the last tile, unless there's only one tile.
    xs = tf.range(0, img_rolled.shape[0], tile_size)[:-1]
    if not tf.cast(len(xs), bool):
      xs = tf.constant([0])
    ys = tf.range(0, img_rolled.shape[1], tile_size)[:-1]
    if not tf.cast(len(ys), bool):
      ys = tf.constant([0])

    for x in xs:
      for y in ys:
        # Calculate the gradients for this tile.
        with tf.GradientTape() as tape:
          # This needs gradients relative to `img_rolled`.
          # `GradientTape` only watches `tf.Variable`s by default.
          tape.watch(img_rolled)

          # Extract a tile out of the image.
          img_tile = img_rolled[x:x+tile_size, y:y+tile_size]
          loss = calc_loss(img_tile, self.model)

        # Update the image gradients for this tile.
        gradients = gradients + tape.gradient(loss, img_rolled)

    # Undo the random shift applied to the image and its gradients.
    gradients = tf.roll(gradients, shift=-shift, axis=[0,1])

    # Normalize the gradients.
    gradients /= tf.math.reduce_std(gradients) + 1e-8

    return gradients

get_tiled_gradients = TiledGradients(dream_model)
```
Putting this together gives a scalable, octave-aware deepdream implementation:
```
def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01,
                                octaves=range(-2,3), octave_scale=1.3):
  base_shape = tf.shape(img)
  img = tf.keras.utils.img_to_array(img)
  img = tf.keras.applications.inception_v3.preprocess_input(img)

  initial_shape = img.shape[:-1]
  img = tf.image.resize(img, initial_shape)
  for octave in octaves:
    # Scale the image based on the octave
    new_size = tf.cast(tf.convert_to_tensor(base_shape[:-1]), tf.float32)*(octave_scale**octave)
    img = tf.image.resize(img, tf.cast(new_size, tf.int32))

    for step in range(steps_per_octave):
      gradients = get_tiled_gradients(img)
      img = img + gradients*step_size
      img = tf.clip_by_value(img, -1, 1)

      if step % 10 == 0:
        display.clear_output(wait=True)
        show(deprocess(img))
        print("Octave {}, Step {}".format(octave, step))

  result = deprocess(img)
  return result

img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)

display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)
```
Much better! Play around with the number of octaves, octave scale, and activated layers to change how your DeepDream-ed image looks.
Readers might also be interested in [TensorFlow Lucid](https://github.com/tensorflow/lucid) which expands on ideas introduced in this tutorial to visualize and interpret neural networks.
# IWI131 Programming
## Dictionaries
A dictionary is an **unordered collection** that maps keys to values. Given a key, the associated value can always be retrieved efficiently.
- Dictionaries work much like retrieving an element from a list by its index: `L[i]`
- Here the list index plays the role of the "key" and the retrieved element is the "value"
```
L = [1, 3, 4, 2, 4]
#printing the element at index 3 of list L
print(L[3])
```
- Dictionary keys can be: numbers (int or float), strings, or tuples.
- Keys in a dictionary must be **unique**, that is, a key cannot appear twice.
```
#phone-book dictionary
#keys: people's names (string)
#values: phone number associated with each name (int)
telefonos = {'Jaimito':5551428, 'Yayita': 5550012, 'Pepito':5552437}
#print Pepito's phone number
print(telefonos['Pepito'])
```
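Two quick consequences of these rules, shown in a small sketch (the distance figures below are made-up illustration values): a repeated key in a literal keeps only the last value, and tuples are valid keys.

```python
# A key may appear only once: with a repeated key, the later value silently wins.
d = {'a': 1, 'b': 2, 'a': 3}
print(d)  # {'a': 3, 'b': 2}

# Tuples are valid keys (lists are not, because keys must be immutable).
distancias = {('Santiago', 'Lima'): 2465, ('Santiago', 'Quito'): 3769}
print(distancias[('Santiago', 'Lima')])  # 2465
```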
## Creating Dictionaries
An **empty dictionary** can be created with the function `dict()` or with curly braces `{}`.
```
dicc1 = {}
dicc2 = dict()
print(dicc1)
print(dicc2)
```
A **dictionary with elements** consists of key-value pairs.
- The dictionary is delimited by curly braces `{ }`.
- Key-value pairs are separated from each other by commas `,`.
- Each key is separated from its value by a colon `:`.
Consider the following dictionary that maps animals (keys) to their number of legs (values).
```
patas = {'humano': 2, 'pulpo': 8, 'perro': 4, 'gato': 4}
```
## Manipulating Dictionaries
### Adding/Modifying an element
Assign a **value** to a **key** of the dictionary.
- If the **key** does **not exist** in the dictionary, a new element is **added**.
- If the **key** already **exists**, its associated value is **modified**.
```
patas = {'humano': 2, 'pulpo': 8, 'perro': 5, 'gato': 4}
print("Dictionary before adding a value")
print(patas)
patas['cienpies'] = 100
print("Dictionary after adding a value")
print(patas)
#dogs actually have four legs; fix the value in the dictionary
patas["perro"] = 4
print("Dictionary after changing a value")
print(patas)
```
### Deleting an element
The `del` statement removes an element from the dictionary. You must supply the key of the element to be deleted.
```
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
print("Dictionary before deleting an element")
print(patas)
#deleting the element from the dictionary
del patas["pulpo"]
print("Dictionary after deleting an element")
print(patas)
```
As with lists, trying to delete an element that does not exist (because the key is not in the dictionary) raises an error.
```
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
#trying to delete a missing element from the dictionary
del patas["oso"]
```
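To delete without risking a `KeyError`, a small sketch using the standard `in` check and `dict.pop` with a default:

```python
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}

# Option 1: check before deleting.
if 'oso' in patas:
    del patas['oso']

# Option 2: pop() with a default never raises; it returns the removed
# value, or the default if the key was absent.
valor = patas.pop('oso', None)
print(valor)  # None, because 'oso' was not in the dictionary
```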
## Accessing dictionary elements
The value associated with key `k` of dictionary `d` is obtained as `d[k]`:
```
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
#look up and print how many legs a cat has
print("The cat has", patas['gato'], "legs")
#trying to print how many legs a bear has
print("The bear has", patas['oso'], "legs")
```
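When a key might be missing, `dict.get` returns a default instead of raising `KeyError` — a small sketch:

```python
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
print(patas.get('gato', 0))  # 4: the key exists
print(patas.get('oso', 0))   # 0: the key is missing, so the default is returned
print(patas.get('oso'))      # None: the default default
```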
## Functions on Dictionaries
### Number of elements
The `len` function returns the number of elements in the dictionary, that is, the number of key-value pairs.
```
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
print(len(patas))
```
### Checking whether a key is in the dictionary
The `in` operator checks whether a key is present in the dictionary.
```
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
print('pulpo' in patas)
print(8 in patas)
```
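Note that `in` tests keys only, which is why `8 in patas` above prints `False` even though 8 appears as a value. To search the values instead, use the `values()` view:

```python
patas = {'cienpies': 100, 'humano': 2, 'gato': 4, 'pulpo': 8, 'perro': 4}
print(8 in patas)           # False: 8 is not a key
print(8 in patas.values())  # True: 8 is the value for 'pulpo'
print('pulpo' in patas)     # True: 'pulpo' is a key
```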
## Iterating over dictionaries
`for` loops can be used to iterate over dictionaries. Doing so iterates over the dictionary's keys. Since the dictionary has **no particular order**, all we know is that every key will be visited.
Consider the dictionary `capitales`, whose keys are countries and whose values are each country's capital.
```
capitales = {'Chile': 'Santiago', 'Peru': 'Lima', 'Ecuador': 'Quito'}
for pa in capitales:
    #what is being printed on each iteration of the loop?
    print("pa =", pa)

capitales = {'Chile': 'Santiago', 'Peru': 'Lima', 'Ecuador': 'Quito'}
for pais in capitales:
    #what is being printed on each iteration of the loop?
    print("The capital of", pais, "is", capitales[pais])
```
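Besides iterating over keys, the `items()` method yields key-value pairs directly, avoiding the extra lookup `capitales[pais]` — a quick sketch:

```python
capitales = {'Chile': 'Santiago', 'Peru': 'Lima', 'Ecuador': 'Quito'}
# items() produces (key, value) tuples, unpacked here into two loop variables.
for pais, capital in capitales.items():
    print("The capital of", pais, "is", capital)
```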
Generic syntax:
```python
for var in diccio:
    a = funcion(var)
```
- The variable `var` **implicitly** takes on each key of the dictionary `diccio`.
- Remember that, given the key, the associated value can be accessed with `diccio[var]`.
## Exercises
**1.** Write a function `contar_letras(palabra)` that takes a string and returns a dictionary indicating how many times each letter appears in that string.
```
>>> contar_letras('entretener')
{'e': 4, 'n': 2, 'r': 2, 't': 2}
>>> contar_letras('lapiz')
{'a': 1, 'i': 1, 'l': 1, 'p': 1, 'z': 1}
```
```
def contar_letras(palabra):
    d = dict()
    for letra in palabra:
        if letra not in d:
            d[letra] = 0
        d[letra] += 1
    return d

diccio = contar_letras("entretener")
print(diccio)
```
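The counting pattern in this solution is common enough that the standard library provides `collections.Counter`. A sketch of an alternative solution using it (the helper name `contar_letras_counter` is ours, not part of the exercise):

```python
from collections import Counter

def contar_letras_counter(palabra):
    # Counter builds the letter -> count mapping in one step.
    return dict(Counter(palabra))

print(contar_letras_counter("entretener"))  # {'e': 4, 'n': 2, 't': 2, 'r': 2}
```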
**2.** Consider the following list of strings, `lista_palabras`.
```
lista_palabras = ["el", "jardin", "la", "casa", "mi", "el", "la", ...]
```
To build a word cloud, we need a function `contar_palabras(lista)` that, given a list of words `lista`, returns a list of pairs in which the first component is a word and the second is the number of times that word appears in the list.
```
>>> contar_palabras(lista)
[('mi', 1), ('casa', 1), ('jardin', 1), ('el', 2), ('la', 2), ...]
```
```
def contar_palabras(lista):
    d = {}
    for p in lista:
        if p not in d:
            d[p] = 0
        d[p] += 1
    lista = list()
    for pa in d:
        lista.append((pa, d[pa]))
    return lista

lista_palabras = ["el", "jardin", "la", "casa", "mi", "el", "la"]
print(contar_palabras(lista_palabras))
```
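The expected output lists the pairs ordered by ascending count, which the solution above does not guarantee. A sketch using `sorted` with a key function (the helper name `contar_palabras_ordenado` is ours):

```python
def contar_palabras_ordenado(lista):
    # Count occurrences, then sort the (word, count) pairs by ascending count.
    d = {}
    for p in lista:
        d[p] = d.get(p, 0) + 1
    return sorted(d.items(), key=lambda par: par[1])

lista_palabras = ["el", "jardin", "la", "casa", "mi", "el", "la"]
print(contar_palabras_ordenado(lista_palabras))
```

Since `sorted` is stable and dictionaries preserve insertion order, words with equal counts keep their first-seen order.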
**3.** Consider the list of tuples `viajes`, where each tuple pairs a person's name with a city that person has traveled to:
```
viajes = [("Juan", "Santiago"), ("Pedro", "Coquimbo"), ("Juan", "Valparaiso"), ("Diego", "Talcahuano"), ...]
```
Write a function `ciudades_visitadas(viajes)` that receives a list like `viajes`. The function must return a dictionary whose keys are people's names and whose values are lists of all the cities each person has visited.
```
>>> ciudades_visitadas(viajes)
{'Diego': ['Talcahuano'], 'Pedro': ['Coquimbo'], 'Juan': ['Santiago', 'Valparaiso'], ...}
```
```
def ciudades_visitadas(viajes):
    d = dict()
    for tupla in viajes:
        persona, ciudad = tupla
        if persona not in d:
            d[persona] = list()
        d[persona].append(ciudad)
    return d

v = [("Juan", "Santiago"), ("Pedro", "Coquimbo"), ("Juan", "Valparaiso"), ("Diego", "Talcahuano")]
print(ciudades_visitadas(v))
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn import metrics
from mlxtend.plotting import plot_decision_regions
from sklearn import preprocessing, metrics
from sklearn.linear_model import LogisticRegression
import warnings
import numpy as np
from collections import OrderedDict
from lob_data_utils import lob, db_result, model
from lob_data_utils import roc_results, gdf_pca, stocks_numbers
from lob_data_utils.svm_calculation import lob_svm
sns.set_style('whitegrid')
warnings.filterwarnings('ignore')
data_length = 24000
stocks = stocks_numbers.chosen_stocks
should_save_fig = False
df_scores = pd.read_csv('../gdf_pca/res_log_que.csv')
df_scores = df_scores[df_scores['stock'].isin(stocks)]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
sns.distplot(df_scores['test_matthews'], kde=False, label='Test data set', ax=ax1)
sns.distplot(df_scores['matthews'], kde=False, label='Train data set', ax=ax1)
ax1.legend()
ax1.set_xlabel('MCC Score')
ax1.set_title('MCC scores distribution')
sns.distplot(df_scores['test_roc_auc'], kde=False, label='Test data set', ax=ax2)
sns.distplot(df_scores['roc_auc'], kde=False, label='Train data set', ax=ax2)
ax2.legend()
ax2.set_xlabel('ROC area Score')
ax2.set_title('ROC area scores distribution')
plt.tight_layout()
#plt.savefig('results_log_que_score_dist.png')
df_scores[['test_matthews', 'matthews', 'test_roc_auc', 'roc_auc']].describe()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
columns = ['stock', 'matthews', 'roc_auc',
'test_matthews', 'test_roc_auc', 'train_matthews', 'train_roc_auc']
df = df_scores[columns].copy()
df.rename(columns={'matthews': 'Validation', 'test_matthews': 'Testing', 'train_matthews': 'Train'}, inplace=True)
df = df.melt(['stock', 'roc_auc', 'test_roc_auc', 'train_roc_auc'])
sns.violinplot(x="variable", y="value", data=df, ax=ax1)
ax1.set_title('Distribution of MCC scores')
ax1.set_xlabel('Data Set')
ax1.set_ylabel('Score')
df = df_scores[columns].copy()
df.rename(columns={'roc_auc': 'Validation', 'test_roc_auc': 'Testing', 'train_roc_auc': 'Train'}, inplace=True)
df = df.melt(['stock', 'matthews', 'test_matthews', 'train_matthews'])
ax2.set_title('Distribution of ROC Area scores')
sns.violinplot(x="variable", y="value", data=df, ax=ax2)
ax2.set_xlabel('Data Set')
ax2.set_ylabel('Score')
plt.tight_layout()
plt.savefig('violin_distribution_scores_log_que.png')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
sns.distplot(df_scores[['train_matthews']], label='Train', ax=ax1)
sns.distplot(df_scores[['matthews']], label='Validation', ax=ax1)
sns.distplot(df_scores[['test_matthews']], label='Test', ax=ax1)
ax1.legend()
ax1.set_title('MCC scores for QUE+LOG')
sns.distplot(df_scores[['train_roc_auc']], label='Train', ax=ax2)
sns.distplot(df_scores[['roc_auc']], label='Validation', ax=ax2)
sns.distplot(df_scores[['test_roc_auc']], label='Test', ax=ax2)
ax2.legend()
ax2.set_title('ROC area scores for QUE+LOG')
plt.tight_layout()
if should_save_fig:
    plt.savefig('results_que_log.png')
print(df_scores[['train_matthews', 'matthews', 'test_matthews',
'train_roc_auc', 'roc_auc', 'test_roc_auc']].describe().to_latex())
df_scores.index = df_scores['stock']
df_scores[['train_matthews', 'matthews', 'test_matthews']].plot(kind='bar', figsize=(16, 4))
plt.legend(['Train', 'Validation', 'Test'])
```
<a href="https://colab.research.google.com/github/rudyhendrawn/traditional-dance-video-classification/blob/main/tari_vgg16_lstm_224.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import os
import glob
from keras_video import VideoFrameGenerator
import numpy as np
import pandas as pd
```
## Data Loading and Preprocessing
```
# Use sub directories names as classes
classes = [i.split(os.path.sep)[1] for i in glob.glob('Dataset/*')]
classes.sort()
# Some global params
SIZE = (224, 224) # Image size
CHANNELS = 3 # Color channel
NBFRAME = 30 # Frames per video
BS = 2 # Batch size
# Pattern to get videos and classes
glob_pattern = 'Dataset/{classname}/*.mp4'
# Create video frame generator
train = VideoFrameGenerator(
classes=classes,
glob_pattern=glob_pattern,
nb_frames=NBFRAME,
split_val=.20,
split_test=.20,
shuffle=True,
batch_size=BS,
target_shape=SIZE,
nb_channel=CHANNELS,
transformation=None, # Data Augmentation
use_frame_cache=False,
seed=42)
valid = train.get_validation_generator()
test = train.get_test_generator()
from tensorflow.keras.layers import GlobalAveragePooling2D, LSTM, Dense, Dropout, TimeDistributed
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.applications.vgg16 import VGG16
input_shape = (NBFRAME,) + SIZE + (CHANNELS,)
# Define VGG16 model
model_vgg16 = VGG16(weights='imagenet', include_top=False, input_shape=input_shape[1:])
model_vgg16.trainable = False
model = Sequential()
model.add(TimeDistributed(model_vgg16, input_shape=input_shape))
model.add(TimeDistributed(GlobalAveragePooling2D()))
# Define LSTM model
model.add(LSTM(256))
# Dense layer
model.add(Dense(1024, activation='relu'))
model.add(Dropout(.2))
model.add(Dense(int(len(classes)), activation='softmax'))
model.summary()
epochs = 100
earlystop = EarlyStopping(monitor='loss', patience=10)
checkpoint = ModelCheckpoint('Checkpoint/vgg16-lstm-224.h5', monitor='val_acc', save_best_only=True, mode='max', verbose=1)
callbacks = [earlystop, checkpoint]
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['acc'])
history = model.fit(train,
validation_data=valid,
epochs=epochs,
callbacks=callbacks)
model.save('Model/tari/vgg16-lstm-224-100e-0.86.h5')
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Save history to csv
hist_df = pd.DataFrame(history.history)
hist_csv_file = 'history_vgg16_lstm_224.csv'
with open(hist_csv_file, mode='w') as f:
    hist_df.to_csv(f)
```
## Testing
```
model.evaluate(test)
y_test = []
y_predict = []
for step in range(test.files_count//BS):
    X, y = test.next()
    prediction = model.predict(X)
    y_test.extend(y)
    y_predict.extend(prediction)
y_true = np.argmax(y_test, axis=1)
prediction = np.argmax(y_predict, axis=1)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix, classification_report, \
roc_curve, auc
# accuracy: (tp + tn) / (p + n)
accuracy = accuracy_score(y_true, prediction)
print(f'Accuracy: {np.round(accuracy, 3)}')
# precision tp / (tp + fp)
precision = precision_score(y_true, prediction, average='macro')
print(f'Precision: {np.round(precision, 3)}')
# recall: tp / (tp + fn)
recall = recall_score(y_true, prediction, average='macro')
print(f'Recall: {np.round(recall, 3)}')
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(y_true, prediction, average='macro')
print(f'F1 score: {np.round(f1, 3)}')
```
## Discussion
```
target_names = test.classes
print(classification_report(y_true, prediction, target_names=target_names))
matrix = confusion_matrix(y_true, prediction)
sns.heatmap(matrix, annot=True, cmap='Blues')
fpr, tpr, _ = roc_curve(y_true, prediction, pos_label=6)
auc_score = auc(fpr, tpr)
print(f'AUC Score : {np.round(auc_score, 3)}')
plt.plot(fpr, tpr, marker='.')
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
```
## Model from checkpoint
```
from tensorflow.keras.models import load_model
ckp_model = load_model('Checkpoint/vgg16-lstm-224.h5')
ckp_model.evaluate(test)
y_test = []
y_predict = []
for step in range(test.files_count//BS):
    X, y = test.next()
    prediction = ckp_model.predict(X)
    y_test.extend(y)
    y_predict.extend(prediction)
y_true = np.argmax(y_test, axis=1)
prediction = np.argmax(y_predict, axis=1)
target_names = test.classes
print(classification_report(y_true, prediction, target_names=target_names))
matrix = confusion_matrix(y_true, prediction)
sns.heatmap(matrix, annot=True, cmap='Blues')
# accuracy: (tp + tn) / (p + n)
accuracy = accuracy_score(y_true, prediction)
print(f'Accuracy: {np.round(accuracy, 3)}')
# precision tp / (tp + fp)
precision = precision_score(y_true, prediction, average='macro')
print(f'Precision: {np.round(precision, 3)}')
# recall: tp / (tp + fn)
recall = recall_score(y_true, prediction, average='macro')
print(f'Recall: {np.round(recall, 3)}')
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(y_true, prediction, average='macro')
print(f'F1 score: {np.round(f1, 3)}')
fpr, tpr, _ = roc_curve(y_true, prediction, pos_label=6)
auc_score = auc(fpr, tpr)
print(f'AUC Score : {np.round(auc_score, 3)}')
plt.plot(fpr, tpr, marker='.')
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
```
# 1-Getting Started
Always run this statement first, when working with this book:
```
from scipy import *
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
## Numbers
```
2 ** (2 + 2)
1j ** 2 # A complex number
1. + 3.0j # Another complex number
```
## Strings
```
'valid string'
"string with double quotes"
"you shouldn't forget comments"
'these are double quotes: ".." '
"""This is
a long,
long string"""
```
## Variables
```
x = [3, 4] # a list object is created
y = x # this object now has two labels: x and y
del x # we delete one of the labels
del y # both labels are removed: the object is deleted
x = [3, 4] # a list object is created
print(x)
```
## Lists
```
L1 = [5, 6]
L1[0] # 5
L1[1] # 6
L1[2] # raises IndexError
L2 = ['a', 1, [3, 4]]
L2[0] # 'a'
L2[2][0] # 3
L2[-1] # last element: [3,4]
L2[-2] # second to last: 1
print(list(range(5)))
len(['a', 1, 2, 34])
L = ['a', 'b', 'c']
L[-1] # 'c'
L.append('d')
L # L is now ['a', 'b', 'c', 'd']
L[-1] # 'd'
```
### Operations on Lists
```
L1 = [1, 2]
L2 = [3, 4]
L = L1 + L2 # [1, 2, 3, 4]
L
L = [1, 2]
3 * L # [1, 2, 1, 2, 1, 2]
```
## Boolean Expressions
```
2 >= 4 # False
2 < 3 < 4 # True
2 < 3 and 3 < 2 # False
2 != 3 < 4 or False # True
2 <= 2 and 2 >= 2 # True
not 2 == 3 # True
not False or True and False # True!
```
## Repeating statements by loops
```
L = [1, 2, 10]
for s in L:
    print(s * 2) # output: 2 4 20
```
### Repeating a task
```
n = 30
k = 0
for iteration in range(n):
    k += iteration  # do_something (this gets executed n times)
k
```
### Break and else
```
threshold = 30
x_values = range(20)

for x in x_values:
    if x > threshold:
        break
    print(x)

for x in x_values:
    if x > threshold:
        break
else:
    print("all the x are below the threshold")
```
## Conditional Statements
```
# The absolute value
x = -25
if x >= 0:
    print(x)
else:
    print(-x)
```
## Encapsulating code by functions
Example:
$$x \mapsto f(x) := 2x + 1$$
```
def f(x):
    return 2*x + 1
```
Calling this function:
```
f(2) # 5
f(1) # 3
```
## Scripts and modules
```
def f(x):
    return 2*x + 1

z = []
for x in range(10):
    if f(x) > pi:
        z.append(x)
    else:
        z.append(-1)
print(z)
exec(open('smartscript.py').read())
%run smartscript
```
## Simple modules - collecting Functions
For the next example to work, you need a file `smartfunctions.py` in the same folder as this notebook:
```
def f(x):
    return 2*x + 1

def g(x):
    return x**2 + 4*x - 5

def h(x):
    return 1/f(x)
```
### Using modules and namespaces
```
import smartfunctions
print(smartfunctions.f(2))
from smartfunctions import g #import just this one function
print(g(1))
from smartfunctions import * #import all
print(h(2)*f(2))
```
## Interpreter
```
def f(x):
    return y**2

a = 3 # here both a and f are defined
f(2) # error, y is not defined
```
# Automatic Suggestion of Constraints
In our experience, a major hurdle in data validation is that someone needs to come up with the actual constraints to apply to the data. This can be very difficult for large, real-world datasets, especially if they are very complex and contain information from many different sources. We built so-called constraint suggestion functionality into deequ to assist users in finding reasonable constraints for their data.
Our constraint suggestion first [profiles the data](./data_profiling_example.ipynb) and then applies a set of heuristic rules to suggest constraints. In the following, we give a concrete example on how to have constraints suggested for your data.
```
from pyspark.sql import SparkSession, Row, DataFrame
import json
import pandas as pd
import sagemaker_pyspark
import pydeequ
classpath = ":".join(sagemaker_pyspark.classpath_jars())
spark = (SparkSession
.builder
.config("spark.driver.extraClassPath", classpath)
.config("spark.jars.packages", pydeequ.deequ_maven_coord)
.config("spark.jars.excludes", pydeequ.f2j_maven_coord)
.getOrCreate())
```
### Let's first generate some example data:
```
df = spark.sparkContext.parallelize([
Row(productName="thingA", totalNumber="13.0", status="IN_TRANSIT", valuable="true"),
Row(productName="thingA", totalNumber="5", status="DELAYED", valuable="false"),
Row(productName="thingB", totalNumber=None, status="DELAYED", valuable=None),
Row(productName="thingC", totalNumber=None, status="IN_TRANSIT", valuable="false"),
Row(productName="thingD", totalNumber="1.0", status="DELAYED", valuable="true"),
Row(productName="thingC", totalNumber="7.0", status="UNKNOWN", valuable=None),
Row(productName="thingC", totalNumber="20", status="UNKNOWN", valuable=None),
Row(productName="thingE", totalNumber="20", status="DELAYED", valuable="false"),
Row(productName="thingA", totalNumber="13.0", status="IN_TRANSIT", valuable="true"),
Row(productName="thingA", totalNumber="5", status="DELAYED", valuable="false"),
Row(productName="thingB", totalNumber=None, status="DELAYED", valuable=None),
Row(productName="thingC", totalNumber=None, status="IN_TRANSIT", valuable="false"),
Row(productName="thingD", totalNumber="1.0", status="DELAYED", valuable="true"),
Row(productName="thingC", totalNumber="17.0", status="UNKNOWN", valuable=None),
Row(productName="thingC", totalNumber="22", status="UNKNOWN", valuable=None),
Row(productName="thingE", totalNumber="23", status="DELAYED", valuable="false")]).toDF()
```
Now, we ask PyDeequ to compute constraint suggestions for us on the data. It will profile the data and then apply the set of rules specified via `addConstraintRule()` to suggest constraints.
```
from pydeequ.suggestions import *
suggestionResult = ConstraintSuggestionRunner(spark) \
.onData(df) \
.addConstraintRule(DEFAULT()) \
.run()
```
We can now investigate the constraints that deequ suggested. We get a textual description and the corresponding Python code for each suggested constraint. Note that the constraint suggestion is based on heuristic rules and assumes that the data it is shown is 'static' and correct, which might often not be the case in the real world. Therefore the suggestions should always be manually reviewed before being applied in real deployments.
```
for sugg in suggestionResult['constraint_suggestions']:
    print(f"Constraint suggestion for \'{sugg['column_name']}\': {sugg['description']}")
    print(f"The corresponding Python code is: {sugg['code_for_constraint']}\n")
```
The first suggestions we get are for the `valuable` column. **PyDeequ** correctly identified that this column is actually a `boolean` column 'disguised' as string column and therefore suggests a constraint on the `boolean` datatype. Furthermore, it saw that this column contains some missing values and suggests a constraint that checks that the ratio of missing values should not increase in the future.
```
Constraint suggestion for 'valuable': 'valuable' has less than 62% missing values
The corresponding Python code is: .hasCompleteness("valuable", lambda x: x >= 0.38, "It should be above 0.38!")
Constraint suggestion for 'valuable': 'valuable' has type Boolean
The corresponding Python code is: .hasDataType("valuable", ConstrainableDataTypes.Boolean)
```
Next we look at the `totalNumber` column. PyDeequ identified that this column is actually a numeric column 'disguised' as string column and therefore suggests a constraint on a fractional datatype (such as `float` or `double`). Furthermore, it saw that this column contains some missing values and suggests a constraint that checks that the ratio of missing values should not increase in the future. Additionally, it suggests that values in this column should always be positive (as it did not see any negative values in the example data), which probably makes a lot of sense for this count-like data.
```
Constraint suggestion for 'totalNumber': 'totalNumber' has no negative values
The corresponding Python code is: .isNonNegative("totalNumber")
Constraint suggestion for 'totalNumber': 'totalNumber' has less than 47% missing values
The corresponding Python code is: .hasCompleteness("totalNumber", lambda x: x >= 0.53, "It should be above 0.53!")
Constraint suggestion for 'totalNumber': 'totalNumber' has type Fractional
The corresponding Python code is: .hasDataType("totalNumber", ConstrainableDataTypes.Fractional)
```
Finally, we look at the suggestions for the `productName` and `status` columns. Both of them did not have a single missing value in the example data, so an `isComplete` constraint is suggested for them. Furthermore, both of them only have a small set of possible values, therefore an `isContainedIn` constraint is suggested, which would check that future values are also contained in the range of observed values.
```
Constraint suggestion for 'productName': 'productName' has value range 'thingC', 'thingA', 'thingB', 'thingE', 'thingD'
The corresponding Python code is: .isContainedIn("productName", ["thingC", "thingA", "thingB", "thingE", "thingD"])
Constraint suggestion for 'productName': 'productName' is not null
The corresponding Python code is: .isComplete("productName")
Constraint suggestion for 'status': 'status' has value range 'DELAYED', 'UNKNOWN', 'IN_TRANSIT'
The corresponding Python code is: .isContainedIn("status", ["DELAYED", "UNKNOWN", "IN_TRANSIT"])
Constraint suggestion for 'status': 'status' is not null
The corresponding Python code is: .isComplete("status")
```
Currently, we leave it up to the user to decide whether they want to apply the suggested constraints or not, and provide the corresponding Python code for convenience. For larger datasets, it makes sense to evaluate the suggested constraints on a held-out portion of the data to see whether they hold or not. You can test this by adding an invocation of `.useTrainTestSplitWithTestsetRatio(0.1)` to the `ConstraintSuggestionRunner`. With this configuration, it would compute constraint suggestions on 90% of the data and evaluate the suggested constraints on the remaining 10%.
Finally, we would also like to note that the constraint suggestion code provides access to the underlying [column profiles](./data_profiling_example.ipynb) that it computed via `suggestionResult.columnProfiles`.
# Shingling with Jaccard
Documents are compared via shingles: word or character n-grams taken over a sliding window across the document. The Jaccard similarity between two documents' shingle sets then measures how similar the pair of documents is.
```
from tabulate import tabulate
shingle_size = 5
def shingler(doc, size):
    # Slide a window of length `size` over the document; stop early enough
    # that every shingle is full length.
    return [doc[i:i+size] for i in range(len(doc) - size + 1)]

def jaccard_dist(shingle1, shingle2):
    # Jaccard similarity: size of the set intersection over the size of the
    # set union. (The Jaccard *distance* would be 1 minus this value.)
    return len(set(shingle1) & set(shingle2)) / len(set(shingle1) | set(shingle2))
document1 = """An elephant slept in his bunk
And in slumber his chest rose and sunk
But he snored how he snored
All the other beasts roared
So his wife tied a knot in his trunk"""
document2 = """A large red cow
Tried to make a bow
But did not know how
They say
For her legs got mixed
And her horns got fixed
And her tail would get
In her way"""
document3 = """An walrus slept in his bunk
And in slumber his chest rose and sunk
But he snored how he snored
All the other beasts roared
So his wife tied a knot in his whiskers"""
# shingle document 1 into overlapping character 5-grams; look at the first ten
shingle1 = shingler(document1, shingle_size)
shingle1[0:10]
# shingle document 2 the same way
shingle2 = shingler(document2, shingle_size)
shingle2[0:10]
# shingle document 3 the same way
shingle3 = shingler(document3, shingle_size)
shingle3[0:10]
# Jaccard distance is the size of set intersection divided by the size of set union
print(f"Document 1 and Document 2 Jaccard Distance: {jaccard_dist(shingle1, shingle2)}")
# Jaccard distance is the size of set intersection divided by the size of set union
print(f"Document 1 and Document 3 Jaccard Distance: {jaccard_dist(shingle1, shingle3)}")
# Jaccard similarity is the size of the set intersection divided by the size of the set union
print(f"Document 2 and Document 3 Jaccard similarity: {jaccard_dist(shingle2, shingle3)}")
shingle_sizes = [1,2,3,4,5,6,7,8,9,10,11,12,13,15]
jaccard_list = []
for s in shingle_sizes:
temp_shingle_1 = shingler(document1, s)
temp_shingle_2 = shingler(document2, s)
temp_shingle_3 = shingler(document3, s)
j1 = jaccard_dist(temp_shingle_1, temp_shingle_2)
j2 = jaccard_dist(temp_shingle_2, temp_shingle_3)
j3 = jaccard_dist(temp_shingle_1, temp_shingle_3)
temp_list = []
temp_list.append(j1)
temp_list.append(j2)
temp_list.append(j3)
temp_list.append(s)
jaccard_list.append(temp_list)
print(tabulate(jaccard_list, headers=["1:2", "2:3", "1:3", "Shingle Size"]))
```
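To make the arithmetic concrete, here is a hand-checkable sketch (pure Python, independent of the notebook's variables) of Jaccard similarity on two tiny shingle sets:

```python
def char_shingles(doc, size=3):
    # Overlapping character n-grams, as a set.
    return {doc[i:i + size] for i in range(len(doc) - size + 1)}

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|: 1.0 for identical sets, 0.0 for disjoint ones.
    return len(a & b) / len(a | b)

s1 = char_shingles("abcde")   # {'abc', 'bcd', 'cde'}
s2 = char_shingles("abcdx")   # {'abc', 'bcd', 'cdx'}

# Intersection {'abc', 'bcd'} has 2 elements, the union has 4, so 2/4 = 0.5.
print(jaccard(s1, s2))  # 0.5
```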
| github_jupyter |
# Character Issues
```
s = 'café'
len(s)
b = s.encode('utf8')
b
len(b)
b.decode('utf8')
```
# Byte Essentials
```
cafe = bytes('café', encoding='utf_8')
cafe
cafe[0]
cafe[:1]
cafe_arr = bytearray(cafe)
cafe_arr
cafe_arr[-1:]
bytes.fromhex('31 4B CE A9')
import array
numbers = array.array('h', [-2, -1, 0, 1, 2])
octets = bytes(numbers)
octets
```
## Structs and Memory Views
```
import struct
fmt = '<3s3sHH'
with open('b_globe.gif', 'rb') as fp:
img = memoryview(fp.read())
header = img[:10]
header
bytes(header)
struct.unpack(fmt, header)
del header
del img
```
# Basic Encoders/Decoders
```
for codec in ['latin_1', 'utf_8', 'utf_16']:
print(codec, 'El Niño'.encode(codec), sep='\t')
```
# Understanding Encode/Decode Problems
## Coping with UnicodeEncodeError
```
city = 'São Paulo'
city.encode('utf_8')
city.encode('utf_16')
city.encode('iso8859_1')
city.encode('cp437')  # raises UnicodeEncodeError: 'ã' cannot be encoded in cp437
city.encode('cp437', errors='ignore')
city.encode('cp437', errors='replace')
city.encode('cp437', errors='xmlcharrefreplace')
```
## Coping with UnicodeDecodeError
```
octets = b'Montr\xe9al'
octets.decode('cp1252')
octets.decode('iso8859_7')
octets.decode('koi8_r')
octets.decode('utf_8')  # raises UnicodeDecodeError: 0xe9 is not valid UTF-8 here
octets.decode('utf_8', errors='replace')
```
## BOM: A Useful Gremlin
```
u16 = 'El Niño'.encode('utf_16')
u16
list(u16)
u16le = 'El Niño'.encode('utf_16le')
list(u16le)
u16be = 'El Niño'.encode('utf_16be')
list(u16be)
```
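As a quick standard-library check, the `codecs` module exposes the BOM constants, so you can verify that the `utf_16` codec prepends the platform's native-order BOM while the explicit `utf_16le` codec emits none:

```python
import codecs

encoded = 'El Niño'.encode('utf_16')
# codecs.BOM is the byte-order mark for this platform's native endianness.
print(encoded.startswith(codecs.BOM))  # True

# The endian-explicit codecs leave the BOM out entirely.
print('El Niño'.encode('utf_16le').startswith(codecs.BOM_UTF16_LE))  # False
```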
# Handling Text Files
```
open('cafe.txt', 'w', encoding='utf_8').write('café')
open('cafe.txt').read()
fp = open('cafe.txt', 'w', encoding='utf_8')
fp
fp.write('café')
fp.close()
import os
os.stat('cafe.txt').st_size
fp2 = open('cafe.txt')
fp2
fp2.read()
fp3 = open('cafe.txt', encoding='utf_8')
fp3
fp3.read()
fp4 = open('cafe.txt', 'rb')
fp4
fp4.read()
```
## Encoding Defaults: A Madhouse
```
import sys, locale
expressions = """
locale.getpreferredencoding()
type(my_file)
my_file.encoding
sys.stdout.isatty()
sys.stdout.encoding
sys.stdin.isatty()
sys.stdin.encoding
sys.stderr.isatty()
sys.stderr.encoding
sys.getdefaultencoding()
sys.getfilesystemencoding()
"""
my_file = open('dummy', 'w')
for expression in expressions.split():
value = eval(expression)
print(expression.rjust(30), '->', repr(value))
value = eval('locale.getpreferredencoding()')
repr(value)
```
# Normalizing Unicode for Saner Comparisons
```
s1 = 'café'
s2 = 'cafe\u0301'
s1, s2
len(s1), len(s2)
s1 == s2
from unicodedata import normalize
len(normalize('NFC', s1)), len(normalize('NFC', s2))
len(normalize('NFD', s1)), len(normalize('NFD', s2))
normalize('NFC', s1) == normalize('NFC', s2)
normalize('NFD', s1) == normalize('NFD', s2)
from unicodedata import normalize, name
ohm = '\u2126'
name(ohm)
ohm_c = normalize('NFC', ohm)
name(ohm_c)
ohm == ohm_c
normalize('NFC', ohm) == normalize('NFC', ohm_c)
from unicodedata import normalize, name
half = '½'
normalize('NFKC', half)
four_squared = '4²'
normalize('NFKC', four_squared)
micro = 'µ'
micro_kc = normalize('NFKC', micro)
micro, micro_kc
ord(micro), ord(micro_kc)
name(micro), name(micro_kc)
```
## Case Folding
```
from unicodedata import name
micro = 'µ'
name(micro)
micro_cf = micro.casefold()
name(micro_cf)
micro, micro_cf
eszett = 'ß'
name(eszett)
eszett_cf = eszett.casefold()
eszett, eszett_cf
name(eszett_cf[1])
```
## Utility Functions for Normalized Text Matching
```
s1 = 'café'
s2 = 'cafe\u0301'
s1 == s2
from unicodedata import normalize
def nfc_equal(str1, str2):
return normalize('NFC', str1) == normalize('NFC', str2)
def fold_equal(str1, str2):
return (normalize('NFC', str1).casefold() ==
normalize('NFC', str2).casefold())
nfc_equal(s1, s2)
nfc_equal('A','a')
s3 = 'Straße'
s4 = 'strasse'
s3 == s4
nfc_equal(s3, s4)
fold_equal(s3, s4)
fold_equal(s1, s2)
fold_equal('A','a')
```
## Extreme "normalization": Taking Out Diacritics
```
import unicodedata
import string
def shave_marks(txt):
norm_txt = unicodedata.normalize('NFD', txt)
shaved = ''.join(c for c in norm_txt if not unicodedata.combining(c))
return unicodedata.normalize('NFC', shaved)
order = '“ Herr Voß: • ½ cup of Œtker ™ caffè latte • bowl of açaí.”'
shave_marks(order)
Greek = 'Ζέφυρος, Zéfiro'
shave_marks(Greek)
def shave_marks_latin(txt):
norm_txt = unicodedata.normalize('NFD', txt)
latin_base = False
keepers = []
for c in norm_txt:
if unicodedata.combining(c) and latin_base:
continue
keepers.append(c)
if not unicodedata.combining(c):
latin_base = c in string.ascii_letters
shaved = ''.join(keepers)
return unicodedata.normalize('NFC', shaved)
```
| github_jupyter |
```
# Reload when code changed:
%load_ext autoreload
%autoreload 2
%pwd
import os
import sys
path = "../"
sys.path.append(path)
#os.path.abspath("../")
print(os.path.abspath(path))
import core
import importlib
importlib.reload(core)
import logging
importlib.reload(core)
try:
logging.shutdown()
importlib.reload(logging)
except:
pass
import pandas as pd
import numpy as np
import json
from event_handler import EventHandler
print(core.__file__)
pd.__version__
def print_workspaces():
request = {'user_id': user_id}
respons = ekos.request_workspace_list(request)
print('')
print('='*100)
print('Workspaces for user: {}'.format(user_id))
print('')
for item in respons['workspaces']:
print('-'*100)
for key in sorted(item.keys()):
print('{}:\t{}'.format(key, item[key]))
print('')
print('='*100)
def print_json(data):
json_string = json.dumps(data, indent=2, sort_keys=True)
print(json_string)
```
### Load directories
```
root_directory = "../" #os.getcwd()
workspace_directory = root_directory + '/workspaces'
resource_directory = root_directory + '/resources'
alias = 'lena'
user_id = 'test_user'  # maybe this should be an offline user?
```
## Initiate EventHandler
```
ekos = EventHandler(root_directory)
# Remove all workspaces belonging to test user
# ekos.remove_test_user_workspaces()
# remove selected workspace
workspace_uuid = ekos.get_unique_id_for_alias(user_id, 'lena_indicator')
#ekos.delete_workspace(user_id = user_id, unique_id = workspace_uuid, permanently=True)
```
# LOAD WORKSPACE
### Load default workspace
```
#default_workspace = core.WorkSpace(alias = 'default_workspace',
# unique_id = 'default_workspace',
# parent_directory=workspace_directory,
# resource_directory=resource_directory,
# user_id = 'default')
#default_workspace.step_0.print_all_paths()
#default_workspace.import_default_data()
```
### Add new workspace
#### Only use this if you are not working with an already created workspace
```
#ekos.copy_workspace(user_id = user_id, source_alias = 'default_workspace', target_alias = 'lena_indicator')
workspace_alias = 'lena_indicator'
```
### Load existing workspace
```
ekos.load_workspace(user_id, alias = workspace_alias)
# Here I sometimes get the error "core has no attribute ParameterMapping"
workspace_uuid = ekos.get_unique_id_for_alias(user_id, workspace_alias)
print(workspace_uuid)
```
#### Copy files from default workspace to make a clone
```
#ekos.import_default_data(user_id, workspace_alias = workspace_alias)
```
### Load all data in workspace
```
ekos.load_data(user_id = user_id, unique_id = workspace_uuid)
w = ekos.get_workspace(user_id, unique_id = workspace_uuid, alias = workspace_alias)
len(w.data_handler.get_all_column_data_df())
```
# Step 0
### Set first data filter
```
f0 = w.get_data_filter_object(step=0)
f0.include_list_filter
#include_WB = ['Norrbottens skärgårds kustvatten']#,
#'N S M Bottenhavets kustvatten']
include_stations = []
#exclude_WB = ['Norrbottens skärgårds kustvatten']
include_years = []
#w.set_data_filter(step=0, filter_type='include_list', filter_name='WATERBODY_NAME', data=include_WB)
w.set_data_filter(step=0, filter_type='include_list', filter_name='STATN', data=include_stations)
#w.set_data_filter(step=0, filter_type='exclude_list', filter_name='WATERBODY_NAME', data=exclude_WB)
w.set_data_filter(step=0, filter_type='include_list', filter_name='MYEAR', data=include_years)
f0.include_list_filter
```
### Apply first data filter
```
w.apply_data_filter(step = 0) # This sets the first level of data filter in the IndexHandler
```
### Extract filtered data
```
data_after_first_filter = w.get_filtered_data(step=0) # level=0 means first filter
print('{} rows matching the filter criteria'.format(len(data_after_first_filter)))
```
# Step 1 load subset data
```
#ekos.copy_subset(user_id,
# workspace_alias=workspace_alias,
# workspace_uuid=None,
# subset_source_alias='default_subset',
# subset_source_uuid='default_subset',
# subset_target_alias='A')
```
# Step 1 Set subset filter
```
subset_uuid = ekos.get_unique_id_for_alias(user_id, workspace_alias = workspace_alias, subset_alias = 'A')
print(w.get_subset_list())
f1 = w.get_data_filter_object(subset = subset_uuid, step=1)
print(f1.include_list_filter)
w.apply_data_filter(subset = subset_uuid, step = 1)
df_step1 = w.get_filtered_data(step = 1, subset = subset_uuid)
print(df_step1.columns)
```
# Step 2
```
w.get_step_object(step = 2, subset = subset_uuid).load_indicator_settings_filters()
w.get_step_object(step = 2, subset = subset_uuid).indicator_data_filter_settings
#ref_set = w.get_step_object(step = 2, subset = subset_uuid).indicator_ref_settings['din_winter']
#ref_set.settings.ref_columns
#ref_set.settings.df[ref_set.settings.ref_columns]
dinw_filter_set = w.get_step_object(step = 2, subset = subset_uuid).get_indicator_data_filter_settings('din_winter')
dinw_filter_set.settings.df
dinw_filter_set = w.get_step_object(step = 2, subset = subset_uuid).get_indicator_ref_settings('din_winter')
dinw_filter_set.settings.refvalue_column[0]
r = dinw_filter_set.get_value(variable = dinw_filter_set.settings.refvalue_column[0], type_area = '1s')[0]
if type(r) is str:
    print('yes, it is a string')
type(r)
wb_list = df_step1.VISS_EU_CD.unique()
print('number of waterbodies in step 1: {}'.format(len(wb_list)))
typeA_list = [row.split('-')[0].strip().lstrip('0') for row in df_step1.WATER_TYPE_AREA.unique()]
print('number of type areas in step 1: {}'.format(len(typeA_list)))
#list(zip(typeA_list, df_step1.WATER_TYPE_AREA.unique()))
```
#### Apply indicator filter
```
for type_area in typeA_list:
w.apply_indicator_data_filter(step = 2,
subset = subset_uuid,
indicator = 'din_winter',
type_area = type_area)
#print(len(w.index_handler.booleans['step_0'][subset_uuid]['step_1']['step_2'].keys()))
#w.index_handler.booleans['step_0'][subset_uuid]['step_1']['step_2'].keys()
wb = 'SE654470-222700'
type_area = '2'#'01s - Västkustens inre kustvatten'
#w.index_handler.booleans['step_0'][subset_uuid]['step_1']['step_2'][type_area]['din_winter']['boolean']
#temp_df = w.get_filtered_data(step = 2, subset = subset_uuid, indicator = 'din_winter', water_body = 'SE654470-222700')
#temp_df.loc[(temp_df['MONTH'].isin([11, 12, 1, 2])) & (temp_df['VISS_EU_CD'].isin(['SE654470-222700']))][['MONTH', 'WATER_BODY_NAME', 'VISS_EU_CD', 'WATER_TYPE_AREA']]
print(w.get_filtered_data(step = 2, subset = subset_uuid, type_area = type_area, indicator = 'din_winter').MONTH.unique())
#[['MONTH', 'WATER_BODY_NAME', 'VISS_EU_CD']]
print(w.get_filtered_data(step = 2, subset = subset_uuid, type_area = type_area, indicator = 'din_winter').DEPH.min(),
w.get_filtered_data(step = 2, subset = subset_uuid, type_area = type_area, indicator = 'din_winter').DEPH.max())
w.get_filtered_data(step = 2, subset = subset_uuid, type_area = type_area).WATER_TYPE_AREA.unique()
water_body = wb
temp_df = w.get_filtered_data(step = 2, subset = subset_uuid, indicator = 'din_winter', type_area = type_area)[['SDATE','MONTH', 'WATER_BODY_NAME', 'VISS_EU_CD', 'WATER_TYPE_AREA', 'DIN','SALT_CTD', 'SALT_BTL']].dropna(thresh=7)
print('Waterbodys left: {}'.format(temp_df['WATER_BODY_NAME'].unique()))
temp_df.loc[temp_df['WATER_BODY_NAME'].isin(['Gullmarn centralbassäng'])]
w.get_available_indicators(subset= 'A', step=2)
w.cfg['indicators']
[item.strip() for item in w.cfg['indicators'].loc['din_winter'][0].split(', ')]
```
# Step 3 Load Indicator objects step 3....
```
w.get_step_object(step = 3, subset = subset_uuid).calculate_indicator_status(subset_unique_id = subset_uuid, indicator_list = ['din_winter'])
w.get_step_object(step = 3, subset = subset_uuid).indicator_objects['din_winter'].get_water_body_indicator_df(water_body = wb)
w.get_step_object(step = 3, subset = subset_uuid).indicator_objects['din_winter'].get_ref_value(type_area = '1s', salinity = 25)
indicator= 'din_winter'
w.get_step_object(step = 3, subset = subset_uuid).get_indicator_data_filter_settings(indicator)
s = w.get_subset_object(subset_uuid).indicator_objects['din_winter']
s.get_filtered_data(subset = subset_uuid, step = 'step_2')
B2_NTOT_WINTER_SETTINGS = lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_ref_settings['ntot_winter']
lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_ref_settings['ntot_winter'].allowed_variables
# refactor into:
# lv_workspace.get_indicator_ref_settings(step = , subset = , indicator = , waterbody/type)
# which gives the same result as:
#lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_ref_settings['ntot_winter'].settings.ref_columns
lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_ref_settings['ntot_winter'].settings.get_value('EK G/M', 22)
#print(B2_NTOT_WINTER_SETTINGS)
#B2_NTOT_WINTER_SETTINGS.get_value('2', 'DEPTH_INTERVAL')
av = lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_data_filter_settings['ntot_winter'].allowed_variables
lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_data_filter_settings['ntot_winter'].settings.df[av]
lv_workspace.get_subset_object('B').get_step_object('step_2').indicator_data_filter_settings['ntot_winter'].settings.df
B2_NTOT_WINTER_SETTINGS.settings.mapping_water_body['N m Bottenvikens kustvatten']
```
### Set subset time and area filter
```
f1_A = lv_workspace.get_data_filter_object(step=1, subset='A')
f1_A.include_list_filter
lv_workspace.get_data_filter_info(step=1, subset='A')
f1_A.exclude_list_filter
f0.include_list_filter
```
### Apply subset filter
```
lv_workspace.apply_subset_filter(subset='A') # Not handled properly by the IndexHandler
```
### Extract filtered data
```
data_after_subset_filter = lv_workspace.get_filtered_data(level=1, subset='A') # level=0 means first filter
print('{} rows matching the filter criteria'.format(len(data_after_subset_filter)))
data_after_subset_filter.head()
# show available waterbodies
lst = data_after_subset_filter.SEA_AREA_NAME.unique()
print('Waterbodies in subset:\n{}'.format('\n'.join(lst)))
import numpy as np
np.where(lv_workspace.index_handler.subset_filter)
f = lv_workspace.get_data_filter_object(step=1, subset='A')
f.all_filters
f.exclude_list_filter
f.include_list_filter
s = lv_workspace.get_step_1_object('A')
s.data_filter.all_filters
f0 = lv_workspace.get_data_filter_object(step=0)
f0.exclude_list_filter
f0.include_list_filter
```
# Quality factor Nutrients
```
lv_workspace.initiate_quality_factors()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/morcellinus/Python_ML-DL/blob/main/3.Iris_data_classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Iris classification
```
import pandas as pd
import numpy as np
from sklearn import datasets
iris = datasets.load_iris()
iris.keys()
print(iris['DESCR'])
print('Dataset shape:', iris['target'].shape)
print('Dataset contents:\n', iris['target'])
print('Dataset shape:', iris['data'].shape)
print('Dataset contents:\n', iris['data'][:7, :])
df = pd.DataFrame(iris['data'], columns = iris['feature_names'])
print('DataFrame shape:', df.shape)
df.head()
df.columns = ['sepal_length', 'sepal_width', 'patal_length', 'patal_width']
df.head(7)
df['target'] = iris['target']
df.head(7)
df.info()  # basic information about the DataFrame
df.describe()  # summary statistics of the DataFrame
df.isnull().sum()
df.duplicated().sum()
df.loc[df.duplicated(), :]
df = df.drop_duplicates()
df.corr()
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale = 1.2)
sns.heatmap(data = df.corr(), square = True, annot = True, cbar = True)  # the annot option toggles display of the correlation coefficients
plt.show()
df.target.value_counts()  # number of samples per class
plt.hist(x = 'sepal_length', data = df)
plt.show()
sns.displot(x = 'sepal_width', kind = 'hist', data = df)
plt.show()
sns.displot(x = 'patal_width', kind = 'kde', data = df)
plt.show()
sns.displot(x = 'sepal_length', hue = 'target', kind = 'kde', data = df)  # use the hue option to separate the plot by class
plt.show()
for col in ['sepal_width', 'patal_length', 'patal_width']:
sns.displot(x = col, hue = 'target', kind = 'kde', data = df)
plt.show()
sns.pairplot(df, hue = 'target', size = 2.5, diag_kind = 'kde')
plt.show()
# train-test split
from sklearn.model_selection import train_test_split
x_data = df.loc[:, 'sepal_length':'patal_width']
y_data = df.loc[:, 'target']
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data,
test_size = 0.2,
shuffle = True,
random_state = 20)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
# KNN classification algorithm
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 7)
knn.fit(x_train, y_train)
y_pred_knn = knn.predict(x_test)
print('Predictions:', y_pred_knn)
from sklearn.metrics import accuracy_score
knn_acc = accuracy_score(y_test, y_pred_knn)
print('Accuracy:%.4f'%knn_acc)
# SVM
from sklearn.svm import SVC
svc = SVC(kernel = 'rbf')  # use the radial basis function kernel
svc.fit(x_train, y_train)
y_pred_svc = svc.predict(x_test)
print('Predictions:', y_pred_svc)
svc_acc = accuracy_score(y_test, y_pred_svc)
print('Accuracy:%.4f'%svc_acc)
# Logistic regression
from sklearn.linear_model import LogisticRegression
lrc = LogisticRegression()
lrc.fit(x_train, y_train)
y_pred_lrc = lrc.predict(x_test)
print('Predictions:', y_pred_lrc)
lrc_acc = accuracy_score(y_test, y_pred_lrc)
print('Accuracy:%.4f'%lrc_acc)
y_lrc_prob = lrc.predict_proba(x_test)  # predicts the probability of each class
y_lrc_prob
# Decision Tree
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(max_depth = 3, random_state = 2021)
dtc.fit(x_train, y_train)
y_pred_dtc = dtc.predict(x_test)
print('Predictions:', y_pred_dtc)
dtc_acc = accuracy_score(y_test, y_pred_dtc)
print('Accuracy:%.4f'%dtc_acc)
# Ensemble model - voting
'''
Fit models that use different algorithms to the data and decide the class by the voting option:
Hard voting: the class is decided by a majority vote over the models' predictions.
Soft voting: the class is decided by averaging the models' predicted probabilities.
'''
from sklearn.ensemble import VotingClassifier
hvc = VotingClassifier(estimators = [('KNN',knn), ('SVC',svc), ('DT',dtc)],
voting = 'hard')
hvc.fit(x_train, y_train)
y_pred_hvc = hvc.predict(x_test)
print('Predictions:', y_pred_hvc)
hvc_acc = accuracy_score(y_test, y_pred_hvc)
print('Accuracy:%.4f'%hvc_acc)
# Ensemble model - bagging
'''
Combine several models of the same algorithm type to make a prediction.
Each tree is trained on a different bootstrap sample drawn from the full training data.
'''
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators = 50, max_depth = 3, random_state = 2021)
rfc.fit(x_train, y_train)
y_pred_rfc = rfc.predict(x_test)
print('Predictions:', y_pred_rfc)
rfc_acc = accuracy_score(y_test, y_pred_rfc)
print('Accuracy:%.4f'%rfc_acc)
# Ensemble model - boosting
from xgboost import XGBClassifier
xgbc = XGBClassifier(n_estimators = 30, max_depth = 7, random_state = 2021)
xgbc.fit(x_train, y_train)
y_pred_xgbc = xgbc.predict(x_test)
print('Predictions:', y_pred_xgbc)
xgbc_acc = accuracy_score(y_test, y_pred_xgbc)
print('Accuracy:%.4f'%xgbc_acc)
```
| github_jupyter |
# 4 - Convolutional Sentiment Analysis
In the previous notebooks, we managed to achieve a test accuracy of ~85% using RNNs and an implementation of the [Bag of Tricks for Efficient Text Classification](https://arxiv.org/abs/1607.01759) model. In this notebook, we will be using a *convolutional neural network* (CNN) to conduct sentiment analysis, implementing the model from [Convolutional Neural Networks for Sentence Classification](https://arxiv.org/abs/1408.5882).
**Note**: This tutorial is not aiming to give a comprehensive introduction and explanation of CNNs. For a better and more in-depth explanation check out [here](https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/) and [here](https://cs231n.github.io/convolutional-networks/).
Traditionally, CNNs are used to analyse images and are made up of one or more *convolutional* layers, followed by one or more linear layers. The convolutional layers use filters (also called *kernels* or *receptive fields*) which scan across an image and produce a processed version of the image. This processed version of the image can be fed into another convolutional layer or a linear layer. Each filter has a shape, e.g. a 3x3 filter covers a 3 pixel wide and 3 pixel high area of the image, and each element of the filter has a weight associated with it, the 3x3 filter would have 9 weights. In traditional image processing these weights were specified by hand by engineers, however the main advantage of the convolutional layers in neural networks is that these weights are learned via backpropagation.
The intuitive idea behind learning the weights is that your convolutional layers act like *feature extractors*, extracting parts of the image that are most important for your CNN's goal, e.g. if using a CNN to detect faces in an image, the CNN may be looking for features such as the existence of a nose, mouth or a pair of eyes in the image.
So why use CNNs on text? In the same way that a 3x3 filter can look over a patch of an image, a 1x2 filter can look over 2 sequential words in a piece of text, i.e. a bi-gram. In the previous tutorial we looked at the FastText model which used bi-grams by explicitly adding them to the end of a text, in this CNN model we will instead use multiple filters of different sizes which will look at the bi-grams (a 1x2 filter), tri-grams (a 1x3 filter) and/or n-grams (a 1x$n$ filter) within the text.
The intuition here is that the appearance of certain bi-grams, tri-grams and n-grams within the review will be a good indication of the final sentiment.
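The sliding-window view of text that these filters exploit can be sketched in plain Python (a throwaway helper, not part of the model built below):

```python
def ngrams(tokens, n):
    # Every run of n consecutive tokens -- what an [n x emb_dim] filter "sees".
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "i hate this film".split()
print(ngrams(tokens, 2))  # [('i', 'hate'), ('hate', 'this'), ('this', 'film')]
print(ngrams(tokens, 3))  # [('i', 'hate', 'this'), ('hate', 'this', 'film')]
```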
## Preparing Data
As in the previous notebooks, we'll prepare the data.
Unlike the previous notebook with the FastText model, we no longer explicitly need to create the bi-grams and append them to the end of the sentence.
As convolutional layers expect the batch dimension to be first we can tell TorchText to return the data already permuted using the `batch_first = True` argument on the field.
```
import torch
from torchtext import data
from torchtext import datasets
import random
import numpy as np
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(tokenize = 'spacy', batch_first = True)
LABEL = data.LabelField(dtype = torch.float)
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
```
Build the vocab and load the pre-trained word embeddings.
```
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
```
As before, we create the iterators.
```
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
## Build the Model
Now to build our model.
The first major hurdle is visualizing how CNNs are used for text. Images are typically 2 dimensional (we'll ignore the fact that there is a third "colour" dimension for now) whereas text is 1 dimensional. However, we know that the first step in almost all of our previous tutorials (and pretty much all NLP pipelines) is converting the words into word embeddings. This is how we can visualize our words in 2 dimensions, each word along one axis and the elements of vectors across the other dimension. Consider the 2 dimensional representation of the embedded sentence below:

We can then use a filter that is **[n x emb_dim]**. This will cover $n$ sequential words entirely, as their width will be `emb_dim` dimensions. Consider the image below, with our word vectors represented in green. Here we have 4 words with 5 dimensional embeddings, creating a [4x5] "image" tensor. A filter that covers two words at a time (i.e. bi-grams) will be a **[2x5]** filter, shown in yellow, and each element of the filter will have a _weight_ associated with it. The output of this filter (shown in red) will be a single real number that is the weighted sum of all elements covered by the filter.

The filter then moves "down" the image (or across the sentence) to cover the next bi-gram and another output (weighted sum) is calculated.

Finally, the filter moves down again and the final output for this filter is calculated.

In our case (and in the general case where the width of the filter equals the width of the "image"), our output will be a vector with the number of elements equal to the height of the image (or the number of words in the sentence) minus the height of the filter plus one, $4-2+1=3$ in this case.
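This output-length arithmetic generalises: a filter of height `n` sliding down a sentence of `sent_len` words produces `sent_len - n + 1` outputs. A quick sanity check in plain Python:

```python
def conv_output_len(sent_len, filter_height):
    # Number of positions an [filter_height x emb_dim] filter can occupy.
    return sent_len - filter_height + 1

print(conv_output_len(4, 2))  # 3, matching the worked example above

for n in (3, 4, 5):  # the filter heights used later in this notebook
    print(n, conv_output_len(10, n))
```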
This example showed how to calculate the output of one filter. Our model (and pretty much all CNNs) will have lots of these filters. The idea is that each filter will learn a different feature to extract. In the above example, we are hoping each of the **[2 x emb_dim]** filters will be looking for the occurrence of different bi-grams.
In our model, we will also have different sizes of filters, heights of 3, 4 and 5, with 100 of each of them. The intuition is that we will be looking for the occurrence of different tri-grams, 4-grams and 5-grams that are relevant for analysing sentiment of movie reviews.
The next step in our model is to use *pooling* (specifically *max pooling*) on the output of the convolutional layers. This is similar to the FastText model where we performed the average over each of the word vectors, implemented by the `F.avg_pool2d` function, however instead of taking the average over a dimension, we are taking the maximum value over a dimension. Below is an example of taking the maximum value (0.9) from the output of the convolutional layer on the example sentence (not shown is the activation function applied to the output of the convolutions).

The idea here is that the maximum value is the "most important" feature for determining the sentiment of the review, which corresponds to the "most important" n-gram within the review. How do we know what the "most important" n-gram is? Luckily, we don't have to! Through backpropagation, the weights of the filters are changed so that whenever certain n-grams that are highly indicative of the sentiment are seen, the output of the filter is a "high" value. This "high" value then passes through the max pooling layer if it is the maximum value in the output.
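Stripped of the tensor machinery, max-over-time pooling on one filter's outputs is just `max` (a toy sketch with made-up activation values):

```python
# Hypothetical post-ReLU outputs of one bi-gram filter over a 4-word sentence.
filter_outputs = [0.1, 0.9, 0.4]

# Max pooling keeps only the strongest match found by this filter.
pooled = max(filter_outputs)
print(pooled)  # 0.9 -- the "most important" bi-gram position wins
```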
As our model has 100 filters of 3 different sizes, that means we have 300 different n-grams the model thinks are important. We concatenate these together into a single vector and pass them through a linear layer to predict the sentiment. We can think of the weights of this linear layer as "weighting up the evidence" from each of the 300 n-grams and making a final decision.
### Implementation Details
We implement the convolutional layers with `nn.Conv2d`. The `in_channels` argument is the number of "channels" in your image going into the convolutional layer. In actual images this is usually 3 (one channel for each of the red, blue and green channels), however when using text we only have a single channel, the text itself. The `out_channels` is the number of filters and the `kernel_size` is the size of the filters. Each of our `kernel_size`s is going to be **[n x emb_dim]** where $n$ is the size of the n-grams.
In PyTorch, RNNs want the input with the batch dimension second, whereas CNNs want the batch dimension first - we do not have to permute the data here as we have already set `batch_first = True` in our `TEXT` field. We then pass the sentence through an embedding layer to get our embeddings. The second dimension of the input into a `nn.Conv2d` layer must be the channel dimension. As text technically does not have a channel dimension, we `unsqueeze` our tensor to create one. This matches with our `in_channels=1` in the initialization of our convolutional layers.
We then pass the tensors through the convolutional and pooling layers, using the `ReLU` activation function after the convolutional layers. Another nice feature of the pooling layers is that they handle sentences of different lengths. The size of the output of the convolutional layer is dependent on the size of the input to it, and different batches contain sentences of different lengths. Without the max pooling layer the input to our linear layer would depend on the size of the input sentence (not what we want). One option to rectify this would be to trim/pad all sentences to the same length, however with the max pooling layer we always know the input to the linear layer will be the total number of filters. **Note**: there is an exception to this if your sentence(s) are shorter than the largest filter used. You will then have to pad your sentences to the length of the largest filter. In the IMDb data there are no reviews only 5 words long so we don't have to worry about that, but you will if you are using your own data.
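This fixed-size property can be checked without PyTorch: whatever the sentence length, pooling collapses each filter's variable-length output to one number, so the linear layer always sees `n_filters * len(filter_sizes)` inputs (a sketch using this notebook's hyperparameters):

```python
n_filters = 100
filter_sizes = [3, 4, 5]

for sent_len in (7, 25, 400):
    # The per-filter-size conv output length varies with the sentence...
    conv_lens = [sent_len - fs + 1 for fs in filter_sizes]
    # ...but max pooling reduces each of the n_filters channels to a scalar.
    linear_input = n_filters * len(filter_sizes)
    print(sent_len, conv_lens, linear_input)  # linear_input is always 300
```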
Finally, we perform dropout on the concatenated filter outputs and then pass them through a linear layer to make our predictions.
```
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.conv_0 = nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (filter_sizes[0], embedding_dim))
self.conv_1 = nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (filter_sizes[1], embedding_dim))
self.conv_2 = nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (filter_sizes[2], embedding_dim))
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved_0 = F.relu(self.conv_0(embedded).squeeze(3))
conved_1 = F.relu(self.conv_1(embedded).squeeze(3))
conved_2 = F.relu(self.conv_2(embedded).squeeze(3))
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled_0 = F.max_pool1d(conved_0, conved_0.shape[2]).squeeze(2)
pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2)
pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2)
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat((pooled_0, pooled_1, pooled_2), dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
```
Currently the `CNN` model can only use 3 different sized filters, but we can actually improve the code of our model to make it more generic and take any number of filters.
We do this by placing all of our convolutional layers in a `nn.ModuleList`, a function used to hold a list of PyTorch `nn.Module`s. If we simply used a standard Python list, the modules within the list cannot be "seen" by any modules outside the list which will cause us some errors.
We can now pass an arbitrary sized list of filter sizes and the list comprehension will create a convolutional layer for each of them. Then, in the `forward` method we iterate through the list applying each convolutional layer to get a list of convolutional outputs, which we also feed through the max pooling in a list comprehension before concatenating together and passing through the dropout and linear layers.
```
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
```
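The registration point above is easy to verify directly: layers kept in a plain Python list contribute no parameters to the parent module, while the same layers inside an `nn.ModuleList` are fully visible. A minimal sketch with toy layer sizes (not the model above):

```python
import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        # NOT registered: a plain Python list hides these layers
        self.convs = [nn.Conv2d(1, 4, kernel_size=(fs, 8)) for fs in (3, 4, 5)]

class Registered(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered: nn.ModuleList exposes the layers' parameters
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, 4, kernel_size=(fs, 8)) for fs in (3, 4, 5)]
        )

n_plain = sum(p.numel() for p in PlainList().parameters())
n_reg = sum(p.numel() for p in Registered().parameters())
print(n_plain, n_reg)  # 0 vs. the full count (396 here)
```

The plain-list version would train without ever updating its convolutional weights, a bug that is silent until you notice the parameter count.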
We can also implement the above model using 1-dimensional convolutional layers, where the embedding dimension is the "depth" of the filter and the number of tokens in the sentence is the width.
We'll run our tests in this notebook using the 2-dimensional convolutional model, but leave the implementation for the 1-dimensional model below for anyone interested.
```
class CNN1d(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv1d(in_channels = embedding_dim,
out_channels = n_filters,
kernel_size = fs)
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.permute(0, 2, 1)
#embedded = [batch size, emb dim, sent len]
conved = [F.relu(conv(embedded)) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
```
We create an instance of our `CNN` class.
We can change `CNN` to `CNN1d` if we want to run the 1-dimensional convolutional model, noting that both models give almost identical results.
```
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [3,4,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
```
Checking the number of parameters in our model, we can see it has about the same number as the FastText model.
Both the `CNN` and the `CNN1d` models have the exact same number of parameters.
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
Next, we'll load the pre-trained embeddings...
```
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
```
Then zero the initial weights of the unknown and padding tokens.
```
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
```
## Train the Model
Training is the same as before. We initialize the optimizer and the loss function (criterion), and place the model and criterion on the GPU (if available).
```
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
```
We implement the function to calculate accuracy...
```
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
```
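As a quick illustration of the docstring above, here is the same function applied to ten hand-picked logits (values chosen purely for this example), eight of which round to the correct label:

```python
import torch

def binary_accuracy(preds, y):
    """Per-batch accuracy: getting 8/10 right returns 0.8, not 8."""
    rounded_preds = torch.round(torch.sigmoid(preds))
    correct = (rounded_preds == y).float()
    return correct.sum() / len(correct)

# Ten raw logits: positive -> sigmoid > 0.5 -> rounds to 1; negative -> 0
logits = torch.tensor([2.0, -1.5, 3.0, 0.7, -0.2, 1.1, -2.0, 0.4, 1.8, -0.9])
labels = torch.tensor([1.0,  0.0, 1.0, 1.0,  1.0, 1.0,  0.0, 0.0, 1.0,  0.0])

acc = binary_accuracy(logits, labels)
print(acc.item())  # 0.8 (within float32 precision)
```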
We define a function for training our model...
**Note**: as we are using dropout again, we must remember to use `model.train()` to ensure the dropout is "turned on" while training.
```
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
We define a function for testing our model...
**Note**: again, as we are now using dropout, we must remember to use `model.eval()` to ensure the dropout is "turned off" while evaluating.
```
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
Let's define our function to tell us how long epochs take.
```
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
Finally, we train our model...
```
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut4-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
```
We get test results comparable to the previous 2 models!
```
model.load_state_dict(torch.load('tut4-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
```
## User Input
And again, as a sanity check, we can try a few input sentences.
**Note**: As mentioned in the implementation details, the input sentence has to be at least as long as the largest filter height used. We modify our `predict_sentiment` function to also accept a minimum length argument. If the tokenized input sentence is less than `min_len` tokens, we append padding tokens (`<pad>`) to make it `min_len` tokens.
```
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence, min_len = 5):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
if len(tokenized) < min_len:
tokenized += ['<pad>'] * (min_len - len(tokenized))
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(0)
prediction = torch.sigmoid(model(tensor))
return prediction.item()
```
An example negative review...
```
predict_sentiment(model, "This film is terrible")
```
An example positive review...
```
predict_sentiment(model, "This film is great")
```
# Neural networks with PyTorch
Next I'll show you how to build a neural network with PyTorch.
```
# Import things like usual
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
from torchvision import datasets, transforms
```
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
# Download and load the training data
trainset = datasets.MNIST('MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.MNIST('MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
dataiter = iter(trainloader)
images, labels = next(dataiter)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. We'd use this to loop through the dataset for training, but here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size (64, 1, 28, 28). So, 64 images per batch, 1 color channel, and 28x28 images.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
## Building networks with PyTorch
Here I'll use PyTorch to build a simple feedforward network to classify the MNIST images. That is, the network will receive a digit image as input and predict the digit in the image.
<img src="assets/mlp_mnist.png" width=600px>
To build a neural network with PyTorch, you use the `torch.nn` module. The network itself is a class inheriting from `torch.nn.Module`. You define each of the operations separately, like `nn.Linear(784, 128)` for a fully connected linear layer with 784 inputs and 128 units.
The class needs to include a `forward` method that implements the forward pass through the network. In this method, you pass some input tensor `x` through each of the operations you defined earlier. The `torch.nn` module also has functional equivalents for things like ReLUs in `torch.nn.functional`. This module is usually imported as `F`. Then to use a ReLU activation on some layer (which is just a tensor), you'd do `F.relu(x)`. Below are a few different commonly used activation functions.
<img src="assets/activation.png" width=700px>
So, for this network, I'll build it with three fully connected layers, then a softmax output for predicting classes. The softmax function is similar to the sigmoid in that it squashes inputs between 0 and 1, but it's also normalized so that all the values sum to one like a proper probability distribution.
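The two properties mentioned above (each output squashed into (0, 1), and the whole row summing to one) can be checked on a toy logit tensor:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 1.0, 0.1]])
probs = F.softmax(logits, dim=1)

print(probs)               # approximately tensor([[0.6590, 0.2424, 0.0986]])
print(probs.sum().item())  # 1.0: a proper probability distribution
```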
```
from torch import nn
from torch import optim
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Defining the layers, 128, 64, 10 units each
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
''' Forward pass through the network, returns the output logits '''
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.softmax(x, dim=1)
return x
model = Network()
model
```
### Initializing weights and biases
The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
```
print(model.fc1.weight)
print(model.fc1.bias)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image. This is called the forward pass. We're going to convert the image data into a tensor, then pass it through the operations defined by the network architecture.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to pick up the batch size automatically
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. That's because we haven't trained it yet: all the weights are random!
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
You can also pass in an `OrderedDict` to name the individual layers and operations. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
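A side benefit of naming the layers is that `nn.Sequential` exposes each named entry as an attribute, so individual layers can be inspected by name. A small sketch using the same sizes as above:

```python
from collections import OrderedDict
from torch import nn

named_model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(784, 128)),
    ('relu1', nn.ReLU()),
    ('output', nn.Linear(128, 10))]))

# Each named entry becomes an attribute of the Sequential container
print(named_model.fc1)                  # Linear(in_features=784, out_features=128, bias=True)
print(named_model.output.weight.shape)  # torch.Size([10, 128])
```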
Now it's your turn to build a simple network, use any method I've covered so far. In the next notebook, you'll learn how to train a network so it can make good predictions.
>**Exercise:** Build a network to classify the MNIST images with _three_ hidden layers. Use 400 units in the first hidden layer, 200 units in the second layer, and 100 units in the third layer. Each hidden layer should have a ReLU activation function, and use softmax on the output layer.
```
## TODO: Your network here
# Hyperparameters for our network
input_size = 784
hidden_sizes = [400, 200, 100]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], hidden_sizes[2]),
nn.ReLU(),
nn.Linear(hidden_sizes[2], output_size),
nn.Softmax(dim=1))
print(model)
## Run this cell with your model to make sure it works ##
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
# Simulators
## Introduction
This notebook shows how to import *Qiskit Aer* simulator backends and use them to execute ideal (noise free) Qiskit Terra circuits.
```
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, execute
from qiskit.tools.visualization import plot_histogram, plot_state_city
```
## Qiskit Aer simulator backends
Qiskit Aer currently includes three high performance simulator backends:
* `QasmSimulator`: Allows ideal and noisy multi-shot execution of qiskit circuits and returns counts or memory
* `StatevectorSimulator`: Allows ideal single-shot execution of qiskit circuits and returns the final statevector of the simulator after applying the circuit
* `UnitarySimulator`: Allows ideal single-shot execution of qiskit circuits and returns the final unitary matrix of the circuit itself. Note that the circuit cannot contain measure or reset operations for this backend
These backends are found in the `Aer` provider with the names `qasm_simulator`, `statevector_simulator` and `unitary_simulator`, respectively.
```
# List Aer backends
Aer.backends()
```
The simulator backends can also be directly imported from `qiskit.providers.aer`
```
from qiskit.providers.aer import QasmSimulator, StatevectorSimulator, UnitarySimulator
```
## QasmSimulator
The `QasmSimulator` backend is designed to mimic an actual device. It executes a Qiskit `QuantumCircuit` and returns a count dictionary containing the final values of any classical registers in the circuit. The circuit may contain *gates*,
*measurements*, *resets*, *conditionals*, and other advanced simulator options that will be discussed in another notebook.
### Simulating a quantum circuit
The basic operation executes a quantum circuit and returns a counts dictionary of measurement outcomes. Here we execute a simple circuit that prepares a 2-qubit Bell-state $|\psi\rangle = \frac{1}{\sqrt{2}}(|0,0\rangle + |1,1 \rangle)$ and measures both qubits.
```
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Execute and get counts
result = execute(circ, simulator).result()
counts = result.get_counts(circ)
plot_histogram(counts, title='Bell-State counts')
```
### Returning measurement outcomes for each shot
The `QasmSimulator` also supports returning a list of measurement outcomes for each individual shot. This is enabled by setting the keyword argument `memory=True` in the `assemble` or `execute` function.
```
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Execute and get memory
result = execute(circ, simulator, shots=10, memory=True).result()
memory = result.get_memory(circ)
print(memory)
```
### Starting simulation with a custom initial state
The `QasmSimulator` allows setting a custom initial statevector for the simulation. This means that all experiments in a Qobj will be executed starting in a state $|\psi\rangle$ rather than the all zero state $|0,0,..0\rangle$. The custom state may be set in the circuit using the `initialize` method.
**Note:**
* The initial statevector must be a valid quantum state $|\langle\psi|\psi\rangle|=1$. If not, an exception will be raised.
* The simulator supports this option directly for efficiency, but it can also be unrolled to standard gates for execution on actual devices.
We now demonstrate this functionality by setting the simulator to be initialized in the final Bell-state of the previous example:
```
# Construct a quantum circuit that initialises qubits to a custom state
circ = QuantumCircuit(2, 2)
circ.initialize([1, 0, 0, 1] / np.sqrt(2), [0, 1])
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Execute and get counts
result = execute(circ, simulator).result()
counts = result.get_counts(circ)
plot_histogram(counts, title="Bell initial statevector")
```
## StatevectorSimulator
The `StatevectorSimulator` executes a single shot of a Qiskit `QuantumCircuit` and returns the final quantum statevector of the simulation. The circuit may contain *gates*, and also *measurements*, *resets*, and *conditional* operations.
### Simulating a quantum circuit
The basic operation executes a quantum circuit and returns the final statevector. Here we execute a simple circuit (without measurements) that prepares the 2-qubit Bell-state $|\psi\rangle = \frac{1}{\sqrt{2}}(|0,0\rangle + |1,1 \rangle)$.
```
# Construct quantum circuit without measure
circ = QuantumCircuit(2)
circ.h(0)
circ.cx(0, 1)
# Select the StatevectorSimulator from the Aer provider
simulator = Aer.get_backend('statevector_simulator')
# Execute and get counts
result = execute(circ, simulator).result()
statevector = result.get_statevector(circ)
plot_state_city(statevector, title='Bell state')
```
### Simulating a quantum circuit with measurement
Note that if a circuit contains *measure* or *reset* the final statevector will be a conditional statevector *after* simulating wave-function collapse to the outcome of a measure or reset. For the Bell-state circuit this means the final statevector will be *either* $|0,0\rangle$ *or* $|1, 1\rangle$.
```
# Construct quantum circuit with measure
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the StatevectorSimulator from the Aer provider
simulator = Aer.get_backend('statevector_simulator')
# Execute and get counts
result = execute(circ, simulator).result()
statevector = result.get_statevector(circ)
plot_state_city(statevector, title='Bell state post-measurement')
```
### Starting simulation with a custom initial state
Like the `QasmSimulator`, the `StatevectorSimulator` also allows setting a custom initial statevector for the simulation. Here we run the previous initial statevector example on the `StatevectorSimulator` and initialize it to the Bell state.
```
# Construct a quantum circuit that initialises qubits to a custom state
circ = QuantumCircuit(2)
circ.initialize([1, 0, 0, 1] / np.sqrt(2), [0, 1])
# Select the StatevectorSimulator from the Aer provider
simulator = Aer.get_backend('statevector_simulator')
# Execute and get counts
result = execute(circ, simulator).result()
statevector = result.get_statevector(circ)
plot_state_city(statevector, title="Bell initial statevector")
```
## Unitary Simulator
The `UnitarySimulator` constructs the unitary matrix for a Qiskit `QuantumCircuit` by applying each gate matrix to an identity matrix. The circuit may only contain *gates*; if it contains *reset* or *measure* operations, an exception will be raised.
### Simulating a quantum circuit unitary
For this example we will return the unitary matrix corresponding to the previous example's circuit, which prepares a Bell state.
```
# Construct an empty quantum circuit
circ = QuantumCircuit(2)
circ.h(0)
circ.cx(0, 1)
# Select the UnitarySimulator from the Aer provider
simulator = Aer.get_backend('unitary_simulator')
# Execute and get counts
result = execute(circ, simulator).result()
unitary = result.get_unitary(circ)
print("Circuit unitary:\n", unitary)
```
### Setting a custom initial unitary
We may also set an initial state for the `UnitarySimulator`, however this state is an initial *unitary matrix* $U_i$, not a statevector. In this case the returned unitary will be $U U_i$, obtained by applying the circuit unitary $U$ to the initial unitary matrix.
**Note:**
* The initial unitary must be a valid unitary matrix $U^\dagger U = \mathbb{1}$. If not, an exception will be raised.
* If a `Qobj` contains multiple experiments, the initial unitary must be the correct size for *all* experiments in the `Qobj`, otherwise an exception will be raised.
Let us consider preparing the output unitary of the previous circuit as the initial state for the simulator:
```
# Construct an empty quantum circuit
circ = QuantumCircuit(2)
circ.id([0,1])
# Set the initial unitary
opts = {"initial_unitary": np.array([[ 1, 1, 0, 0],
[ 0, 0, 1, -1],
[ 0, 0, 1, 1],
[ 1, -1, 0, 0]] / np.sqrt(2))}
# Select the UnitarySimulator from the Aer provider
simulator = Aer.get_backend('unitary_simulator')
# Execute and get counts
result = execute(circ, simulator, backend_options=opts).result()
unitary = result.get_unitary(circ)
print("Initial Unitary:\n", unitary)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Data-Load" data-toc-modified-id="Data-Load-1"><span class="toc-item-num">1 </span>Data Load</a></span></li><li><span><a href="#Integrated-Gradients" data-toc-modified-id="Integrated-Gradients-2"><span class="toc-item-num">2 </span>Integrated Gradients</a></span></li><li><span><a href="#MNIST" data-toc-modified-id="MNIST-3"><span class="toc-item-num">3 </span>MNIST</a></span></li><li><span><a href="#CIFAR10" data-toc-modified-id="CIFAR10-4"><span class="toc-item-num">4 </span>CIFAR10</a></span></li><li><span><a href="#Save-saliency-maps" data-toc-modified-id="Save-saliency-maps-5"><span class="toc-item-num">5 </span>Save saliency maps</a></span><ul class="toc-item"><li><span><a href="#MNIST" data-toc-modified-id="MNIST-5.1"><span class="toc-item-num">5.1 </span>MNIST</a></span></li><li><span><a href="#CIFAR10" data-toc-modified-id="CIFAR10-5.2"><span class="toc-item-num">5.2 </span>CIFAR10</a></span></li></ul></li></ul></div>
```
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../code')
from dataload import mnist_load, cifar10_load
from saliency.attribution_methods import IntegratedGradients
from saliency.ensembles import *
from utils import get_samples
from visualization import visualize_saliencys
```
# Data Load
```
original_images_mnist, original_targets_mnist, pre_images_mnist, mnist_classes, mnist_model = get_samples('mnist')
original_images_cifar10, original_targets_cifar10, pre_images_cifar10, cifar10_classes, cifar10_model = get_samples('cifar10')
```
# Integrated Gradients

```
IG_mnist = IntegratedGradients(mnist_model)
IG_cifar10 = IntegratedGradients(cifar10_model)
```
# MNIST
```
# Integrated Gradients
outputs, probs, preds = IG_mnist.generate_image(pre_images_mnist, original_targets_mnist)
# ensemble
n = 50
sigma = 2
# Integrated Gradients + smooth grad
outputs_SG, _, _ = generate_smooth_grad(pre_images_mnist, original_targets_mnist, n, sigma, IG_mnist)
# Integrated Gradients + smooth square grad
outputs_SG_SQ, _, _ = generate_smooth_square_grad(pre_images_mnist, original_targets_mnist, n, sigma, IG_mnist)
# Integrated Gradients + smooth var grad
outputs_SG_VAR, _, _ = generate_smooth_var_grad(pre_images_mnist, original_targets_mnist, n, sigma, IG_mnist)
names = ['Integrated Gradients',
'Integrated Gradients\nSmoothGrad','Integrated Gradients\nSmoothGrad-Square','Integrated Gradients\nSmoothGrad-VAR'] # names
results = [outputs, outputs_SG, outputs_SG_SQ, outputs_SG_VAR]
target = 'mnist'
visualize_saliencys(original_images_mnist,
results,
probs,
preds,
mnist_classes,
names,
target,
col=5, row=10, size=(20,35), labelsize=20, fontsize=25)
```
# CIFAR10
```
# Integrated Gradients
outputs, probs, preds = IG_cifar10.generate_image(pre_images_cifar10, original_targets_cifar10)
# ensemble
n = 50
sigma = 2
# Integrated Gradients + smooth grad
outputs_SG, _, _ = generate_smooth_grad(pre_images_cifar10, original_targets_cifar10, n, sigma, IG_cifar10)
# Integrated Gradients + smooth square grad
outputs_SG_SQ, _, _ = generate_smooth_square_grad(pre_images_cifar10, original_targets_cifar10, n, sigma, IG_cifar10)
# Integrated Gradients + smooth var grad
outputs_SG_VAR, _, _ = generate_smooth_var_grad(pre_images_cifar10, original_targets_cifar10, n, sigma, IG_cifar10)
names = ['Integrated Gradients',
'Integrated Gradients\nSmoothGrad','Integrated Gradients\nSmoothGrad-Square','Integrated Gradients\nSmoothGrad-VAR'] # names
results = [outputs, outputs_SG, outputs_SG_SQ, outputs_SG_VAR]
target = 'cifar10'
visualize_saliencys(original_images_cifar10,
results,
probs,
preds,
cifar10_classes,
names,
target,
col=5, row=10, size=(20,35), labelsize=20, fontsize=25)
```
# Save saliency maps
## MNIST
```
trainloader, validloader, testloader = mnist_load(shuffle=False)
IG_mnist.save(trainloader, '../saliency_maps/[mnist]IG_train.hdf5', layer=0)
IG_mnist.save(validloader, '../saliency_maps/[mnist]IG_valid.hdf5', layer=0)
IG_mnist.save(testloader, '../saliency_maps/[mnist]IG_test.hdf5', layer=0)
```
## CIFAR10
```
trainloader, validloader, testloader = cifar10_load(shuffle=False, augmentation=False)
IG_cifar10.save(trainloader, '../saliency_maps/[cifar10]IG_train.hdf5', layer=0)
IG_cifar10.save(validloader, '../saliency_maps/[cifar10]IG_valid.hdf5', layer=0)
IG_cifar10.save(testloader, '../saliency_maps/[cifar10]IG_test.hdf5', layer=0)
```
# A trip on a lift
### Experiment about the motion in 1D and Data Analysis
The motion of a lift can be considered as an example of a motion along a straight line, which we can shortly refer to as **motion in 1D**.
The aim of this worked example is that of studying the position, the velocity and the acceleration as a function of time.

#### Description
The position of the lift along the vertical direction can be recorded using the smartphone with the App [**phyphox**](https://phyphox.org/).
This App can measure the atmospheric pressure and can calculate the change of the height with respect to the initial position, exploiting a suitable algorithm.
The activity will be developed according to the following steps.
1. We **run an experiment** recording the values of the height as a function of time.
2. The experimental data are stored in a text file (*Tab separated values*) having the structure shown below
Time (s) | Pressure (hPa) | Height (m) | Xxxxxx | Xxxxxxxx
---------|----------------|------------|--------|----------
1.002612265E0 | 1.013205700E3 | 0.000000000E0| XXXXXX | XXXXXX
2.003274159E0 | 1.013197046E3 | 7.205260740E-2| XXXXXX | XXXXXX
3. For the purpose of the present study we want to read data from first and third column, so to get a **timetable** of the motion, i.e. a set of data organized as follows
Time (s) | Height (m)
---------|----------------
1.002612265E0 | 0.000000000E0
2.003274159E0 | 7.205260740E-2
4. Once we have got our experimental data, we want to obtain a graphical representation. We prepare a plot using Python and the [MatPlotLib](https://matplotlib.org/) module.
5. We want to **analyze the data** to obtain the values of the velocity as a function of time.
6. We prepare a graph to visualize the behaviour of the **velocity** as a function of time.
7. We save the plots and the results of the analysis in a file.
8. We discuss the results
9. We calculate the values of the **acceleration** as a function of time and we compare them with the values measured using another sensor of the smartphone (the accelerometer) with the same App, Phyphox.
10. We gather all the results in a single figure containing the plots of the height, the velocity, and the acceleration.
##### 1. Experiment
Install on the smartphone the App [phyphox](https://phyphox.org). Among the experiments available in the group named **Everyday life**, choose the experiment **Elevator**. Press **play** to start the measurement.
[phyphox](https://phyphox.org)
* You can see what happens if you move the smartphone from the ground up to your head.
* You can obtain interesting data if you can get into **a lift** and get it moving.
##### 2. Save the data
Tapping the three-vertical-dots symbol in the top-right corner of the screen, you can choose **Export data**. A preferred format can be selected; in this example we choose **CSV (Tabulator, decimal point)**.
Once the data have been saved in a file, it can be transferred from the smartphone to the computer for the subsequent steps of the experiment.
> We ran the experiment several times and selected a few data sets for subsequent analysis.
>
> The filenames are listed in the following table.
Data available |
-----------------|
Trip01-Height.csv
Trip02-Height.csv
Trip03-Height.csv
##### 3.-4. Read the experimental data and prepare a plot
To read the data and plot them in a figure we use the Python modules [numpy](https://numpy.org/) and [matplotlib](https://matplotlib.org/).
We write Python code based on the examples proposed in Refs. [1](https://physics.nyu.edu/pine/pymanual/html/pymanMaster.html), [2](http://stacks.iop.org/PhysED/55/033006/mmedia) and [3](https://github.com/POSS-UniMe/RCwithRPi).
To read the data from a text file we use the **loadtxt** function of the **numpy** module. The option **skiprows** allows us to skip the first $n$ lines of the text, and the option **usecols** allows us to select the desired columns (Ref. [4](https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html)).
```
import numpy as np
import matplotlib.pyplot as plt
inputDataFile='data/Trip03-Height.csv'
# read data from file
t, x = np.loadtxt(inputDataFile, skiprows=1, usecols=(0,2), unpack = True)
# plot of the experimental data
plt.plot(t, x, 'o', color='blue', markersize = 3)
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('height $h$ (m)')
plt.text(27, 1.8, inputDataFile)
#plt.savefig('height_vs_time.pdf')
print()
plt.show()
print()
```
##### 5. Calculating the velocity
We calculate the average velocity for each time interval $[t_1, t_2]$ using the simple relation
$$ v = \dfrac{\Delta x}{\Delta t}$$
We associate this value $v_1$ with the time $t'_1$, obtained as the average of $t_1$ and $t_2$.
To calculate the displacement $\Delta x$ and the duration of each time lapse $\Delta t$ between two subsequent measurements we use the [**diff()**](https://numpy.org/doc/stable/reference/generated/numpy.diff.html) function of the **numpy** module.
The calculated values will be written on a text file named **speedDataFile**.
> The data will be written in two columns: the first contains the values of time,
> the second the values of the velocity.
The lines to be added to the previous script in order to make the calculations and save the data on a file are shown below.
```
speedDataFile='data/Trip03-Speed.csv'
np.set_printoptions(precision=20)
delta_t = np.diff(t)
delta_x = np.diff(x)
v = delta_x / delta_t
t_prime = t[:-1] + (delta_t/2)
np.savetxt(speedDataFile, np.column_stack((t_prime, v)))
```
##### 6. Plot of the velocity as a function of time
We use the **Matplotlib** module to prepare a plot of the velocity as a function of time.
We also insert in the plot a label containing the name of the file from which the values of time and velocity were read.
```
plt.plot(t_prime, v, 'o', color='red', markersize = 5)
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('velocity $v$ (m s$^{-1}$)')
plt.text(20, 0.2, speedDataFile)
plt.draw()
plt.show()
```
##### 7. Plotting together altitude and velocity
We want to show the two previous plots together in the same figure, so that we can compare them.
We prepare a figure in which the two plots are organized as two rows and one column. This is achieved using the **subplot** function of the **matplotlib** module (see Refs. [5](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplot.html), [6](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplot.html)).
```
fig = plt.figure(figsize=(7, 9))
plt.subplot(2, 1, 1)
plt.plot(t, x, 'o', color='blue', markersize = 3)
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('height $h$ (m)')
plt.text(27, 1.8, inputDataFile)
plt.subplot(2, 1, 2)
plt.plot(t_prime, v, 'o', color='red', markersize = 5)
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('velocity $v$ (m s$^{-1}$)')
plt.text(20, 0.2, speedDataFile)
```
##### 8. Data analysis
* ***8.1 Insert a marker in the plot***
We want to insert in the plot some **markers** that allow us to select the data corresponding to a desired value of time.
We assume that the data are already ordered by increasing time.
The number of (time $t$, height $h$) pairs is given by the size of the corresponding numpy array. For instance:
```
np.size(t)
```
We want to select a subset of the data, with time values between those corresponding to **MarkerOne** and **MarkerTwo**.
* The values of the two markers are chosen using some **sliders** and
* they are highlighted in the plots as colored vertical lines.
<hr>
> To run the Python code contained in this notebook you can use the following link to the online **binder** environment
[](https://mybinder.org/v2/gh/POSS-UniMe/simple-physics-with-Python/HEAD)
```
import ipywidgets as widgets
sliderMarkerOne = widgets.IntSlider(min = 0, max = (np.size(t)-1), step = 1, value = 10, continuous_update = False)
sliderMarkerTwo = widgets.IntSlider(min = 0, max = (np.size(t)-1), step = 1, value = 35, continuous_update = False)
def calculate(MarkerOne, MarkerTwo):
    print('Marker One = ', MarkerOne, '  Marker Two = ', MarkerTwo, '\n')
    MarkerOneTime = t[MarkerOne]
    MarkerTwoTime = t[MarkerTwo]
    print('Time One = ', MarkerOneTime, 's', '  Time Two = ', MarkerTwoTime, 's', '\n')
    makeplots(MarkerOne, MarkerTwo)
    t_subset = t[MarkerOne:MarkerTwo]
    x_subset = x[MarkerOne:MarkerTwo]

def makeplots(MarkerOne, MarkerTwo):
    fig = plt.figure(figsize=(7, 9))
    # plt.ion()
    plt.subplot(2, 1, 1)
    plt.plot(t, x, 'o', color='blue', markersize = 3)
    plt.plot(t[MarkerOne], x[MarkerOne], 'o', color='red', markersize = 6)
    plt.plot(t[MarkerTwo], x[MarkerTwo], 'o', color='cyan', markersize = 7)
    plt.axhline(color = 'gray', zorder = -1)
    plt.axvline(color = 'gray', zorder = -1)
    plt.axvline(color = 'magenta', x = t[MarkerOne], zorder = -1)
    plt.axvline(color = 'cyan', x = t[MarkerTwo], zorder = -1)
    plt.xlabel('time $t$ (s)')
    plt.ylabel('height $h$ (m)')
    plt.text(27, 1.8, inputDataFile)
    plt.subplot(2, 1, 2)
    plt.plot(t_prime, v, 'o', color='red', markersize = 5)
    plt.axhline(color = 'gray', zorder = -1)
    plt.axvline(color = 'gray', zorder = -1)
    plt.axvline(color = 'magenta', x = t[MarkerOne], zorder = -1)
    plt.axvline(color = 'cyan', x = t[MarkerTwo], zorder = -1)
    plt.xlabel('time $t$ (s)')
    plt.ylabel('velocity $v$ (m s$^{-1}$)')
    plt.text(20, 0.2, speedDataFile)

widgets.interact(calculate, MarkerOne = sliderMarkerOne, MarkerTwo = sliderMarkerTwo)
```
* ***8.2 Linear fit of the $x(t)$ data in a selected time range***
Hypothesis: motion with constant velocity.
If we assume that in a certain time interval the velocity is constant, then, for any $t_1$ and $t_2$ chosen in that range, the ratio $(x_2-x_1)/(t_2-t_1)$ is constant and equal to $v$.
For a generic $t$ and a fixed $t_1$, the following relation has to hold
$$ \dfrac{x-x_1}{t-t_1} = v $$
From this it follows
$$ x - x_1 = v (t-t_1)$$
so that
$$x = v t + (x_1 - v t_1) $$
As a consequence, in a time range in which the motion occurs with constant velocity, the graph of $x$ as a function of $t$ appears as a straight line, corresponding to an equation of this kind:
$$ x = A t + B $$
We now want to find the values of the parameters $A$ and $B$ corresponding to the straight line that best matches the experimental data. These values are found using a method that minimizes the sum of the squared deviations between the measured data and the model \[[7](https://nbviewer.jupyter.org/github/engineersCode/EngComp1_offtheground/blob/master/notebooks_en/5_Linear_Regression_with_Real_Data.ipynb), [8](https://physics.nyu.edu/pine/pymanual/html/chap7/chap7_funcs.html#example-linear-least-squares-fitting)\].
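Minimizing the sum of squared deviations $\sum_i \left(x_i - A\,t_i - B\right)^2$ with respect to $A$ and $B$ gives the estimates computed by the `LineFit` function in the next cell (here $\bar{t}$ and $\bar{x}$ denote mean values):

$$ A = \dfrac{\sum_i x_i\,(t_i - \bar{t})}{\sum_i t_i\,(t_i - \bar{t})}\,, \qquad B = \bar{x} - A\,\bar{t} $$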
```
t_subset = t[sliderMarkerOne.value:sliderMarkerTwo.value]
x_subset = x[sliderMarkerOne.value:sliderMarkerTwo.value]
def LineFit(x, y):
    # least-squares slope and intercept for y = slope * x + y_intercept
    x_avg = x.mean()
    slope = (y*(x - x_avg)).sum()/(x*(x - x_avg)).sum()
    y_intercept = y.mean() - slope * x_avg
    return slope, y_intercept
v_est, x_intercept = LineFit(t_subset, x_subset)
print('\n Estimated value of the velocity (from the linear fit)')
print('\n v = {0:0.3} m/s \n'.format(v_est))
#print('$x_intercept$', x_intercept, 'm', '\n')
```
* ***8.3 Plot of the comparison between the experimental data and the linear fit***
We want to draw a graph showing the experimental data as well as the behaviour expected for motion with constant velocity over the selected time interval. We use the values of the parameters obtained from the **best-fit** procedure described in the previous section.
We can add more details to the graph using the following functions: [plot](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html),
[linestyle](https://matplotlib.org/stable/api/_as_gen/matplotlib.lines.Line2D.html),
[axvline](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.axvline.html),
[legend](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.legend.html).
Finally, we save the figure in a file, using the [savefig](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html)
function of the **matplotlib** module.
```
x_fit = v_est * t_subset + x_intercept
#
v_fit = np.full_like(t_subset, v_est)   # constant fitted velocity, same length as t_subset
#
print()
fig = plt.figure(figsize=(7, 9))
# plt.ion()
plt.subplot(2, 1, 1)
plt.plot(t, x, 'o', color='blue', markersize = 3, label = 'data')
plt.plot(t_subset,x_fit, '-', color='yellow', linewidth = 3, zorder = -1, label = 'fit')
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.axvline(color = 'magenta', x = t_subset[0], linewidth = 1, linestyle = '--', zorder = -1)
plt.axvline(color = 'cyan', x = t_subset[-1],linewidth = 1, linestyle = '--', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('height $h$ (m)')
plt.text(27, 1.8, inputDataFile)
plt.legend(loc='upper center', shadow=True, fontsize='large')
plt.subplot(2, 1, 2)
plt.plot(t_prime, v, 'o', color='red', markersize = 5)
plt.plot(t_subset,v_fit,'-', color = 'orange', linewidth = 3)
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.axvline(color = 'magenta', x = t_subset[0], linewidth = 1, linestyle = '--', zorder = -1)
plt.axvline(color = 'cyan', x = t_subset[-1], linewidth = 1, linestyle = '--', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('velocity $v$ (m s$^{-1}$)')
plt.text(20, 0.2, speedDataFile)
plt.savefig('data/height+speed.pdf')
```
##### 9. Acceleration
In this section we will calculate the acceleration from the velocity data but we will also compare the calculated data with the experimental values of the acceleration directly measured with another sensor of the smartphone.
* ***9.1 Calculation***
The task is similar to that of calculating the average velocity referred to different time intervals.
The average acceleration in each time interval of the kind $[t'_1, t'_2]$ is calculated on the basis of the simple relation
$$ a = \dfrac{\Delta v}{\Delta t}$$
We associate this value $a_1$ with the time $t''_1$, obtained as the average of $t'_1$ and $t'_2$.
To calculate the changes in velocity $\Delta v$, as well as the durations $\Delta t'$ of the intervals between two subsequent data points, we again use the **numpy.diff()** function.
The results of the calculation will be written to a text file whose name is associated with the variable **accelerationDataFile**. The text is organized in two columns: the first contains the values of time, the second the values of the acceleration. It is advisable to choose a filename that makes clear the file contains **calculated values** of the acceleration.
```
accelerationDataFile='data/Trip03-CalcAcceleration.csv'
np.set_printoptions(precision=20)
delta_t_prime = np.diff(t_prime)
delta_v = np.diff(v)
a = delta_v / delta_t_prime
t_double_prime = t_prime[:-1] + (delta_t_prime/2)
np.savetxt(accelerationDataFile, np.column_stack((t_double_prime, a)))
```
* ***9.2 Graph***
We want to show in a graph the behaviour of the acceleration $a$ as a function of time. Moreover, we want to compare it with the **values** of acceleration **measured** by an accelerometer sensor. These are the values measured with the same **phyphox** App on the smartphone and saved in a text file (*Tab separated values*) organized as follows
Time (s) | Acceleration (m/s$^2$)
---------|----------------
2.003207003250000007e+00 | -9.390230845239912194e-02
The name of the text file containing the experimental data is associated with the variable **accelerometerDataFile**
```
print()
#
#
accelerometerDataFile='data/Trip03-Acceleration.csv'
#
# read data from file
t_exp, accel = np.loadtxt(accelerometerDataFile, skiprows=1, unpack = True)
plt.plot(t_double_prime, a, 'o', color='green', markersize = 5, label = 'calculated')
plt.plot(t_exp,accel, 'd', color = 'black', markersize = 2, label = 'accelerometer')
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('acceleration $a$ (m/s$^{2}$)')
plt.ylim(-1.5,1.5)
plt.legend(loc='upper center', shadow=True, fontsize='large')
plt.draw()
plt.show()
```
#### A hint
> How can we interpret such apparently messy data about the acceleration in this experiment?
The results can be efficiently discussed if we compare this plot with the graph showing the behaviour of the velocity as a function of time.
##### 10. Comparing the plots of altitude, velocity and acceleration
We will prepare a figure in which we have the three plots organized as a matrix of three rows and one column. To this aim we will use the **subplot** function. The figure will be saved in a file, using the **savefig** function of the Matplotlib module.
```
print()
fig = plt.figure(figsize=(7, 13))
# plt.ion()
plt.subplot(3, 1, 1)
plt.plot(t, x, 'o', color='blue', markersize = 3, label = 'data')
plt.plot(t_subset,x_fit, '-', color='yellow', linewidth = 3, zorder = -1, label = 'fit')
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.axvline(color = 'magenta', x = t_subset[0], linewidth = 1, linestyle = '--', zorder = -1)
plt.axvline(color = 'cyan', x = t_subset[-1],linewidth = 1, linestyle = '--', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('height $h$ (m)')
plt.text(27, 1.8, inputDataFile)
plt.legend(loc='upper right', shadow=True, fontsize='large')
plt.subplot(3, 1, 2)
plt.plot(t_prime, v, 'o', color='red', markersize = 5)
plt.plot(t_subset,v_fit,'-', color = 'orange', linewidth = 3)
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.axvline(color = 'magenta', x = t_subset[0], linewidth = 1, linestyle = '--', zorder = -1)
plt.axvline(color = 'cyan', x = t_subset[-1], linewidth = 1, linestyle = '--', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('velocity $v$ (m/s)')
plt.text(20, 0.2, speedDataFile)
plt.subplot(3, 1, 3)
plt.plot(t_double_prime, a, 'o', color='green', markersize = 8, label = 'calculated')
plt.plot(t_exp,accel, 'd', color = 'black',zorder=-1, markersize = 3, label = 'accelerometer')
plt.axhline(color = 'gray', zorder = -1)
plt.axvline(color = 'gray', zorder = -1)
plt.xlabel('time $t$ (s)')
plt.ylabel('acceleration $a$ (m/s$^{2}$)')
plt.ylim(-1.5,1.5)
plt.legend(loc='upper center', shadow=True, fontsize='large')
plt.text(10, -1, accelerometerDataFile)
plt.savefig('data/Trip03-Results.pdf')
plt.draw()
plt.show()
print()
```
This figure can be compared directly with the figure produced by the **phyphox** App. In this way we can verify whether our evaluation of the data leads to the same results as the **phyphox** software.
## What we have learned
*Physics*
* How to acquire the data of an experiment carried out with portable equipment (a smartphone with the **phyphox** app)
* How to evaluate the data about position and time in order to calculate the velocity and the acceleration.
* How to study the motion considering the graphical behaviour of the variables that describe the position, the velocity, and the acceleration.
*Python*
* Read data from a text file
* Plot the data in a graph
* Save the calculated data in a text file
* Prepare a figure including several plots
* Select a subset of the data
* Obtain a linear fit of a data set
* Compare several datasets in the same plot
* Save a figure to a file.
## References and notes
#### Images
Figure 1. *Augmented reality*. The *Andrea Donato* building at the University of Messina offers a nice view of the Strait of Messina. This will be the headquarters of the Mathematics, Data Science and Computer Science Center of the University. In this figure we imagine the building equipped with a panoramic lift.
#### Graphical representation of the experimental data
1. [Introduction to Python for Science and Engineering](https://physics.nyu.edu/pine/pymanual/html/pymanMaster.html)
2. [Experiments and data analysis on one-dimensional motion with Raspberry Pi and Python](http://stacks.iop.org/PhysED/55/033006/mmedia)
3. [Circuits with Raspberry Pi and Python](https://github.com/POSS-UniMe/RCwithRPi)
4. [Reading data from a text file with **loadtxt**](https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html)
* "This function aims to be a fast reader for simply formatted files. The **genfromtxt** function provides more sophisticated handling of, e.g., lines with missing values."
5. [Including more plots in a figure](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subplot.html)
6. [Multiple plots](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplot.html) with Matplotlib
#### Linear fit of the data
7. Lorena A. Barba, Natalia C. Clementi, *Linear regression with real data*, [Engineering Computation](https://nbviewer.jupyter.org/github/engineersCode/EngComp1_offtheground/blob/master/notebooks_en/5_Linear_Regression_with_Real_Data.ipynb), GitHub
8. David J. Pine, *Linear least squares fitting*,
[Introduction to Python for Science](https://physics.nyu.edu/pine/pymanual/html/chap7/chap7_funcs.html#example-linear-least-squares-fitting)
#### Functions of the *matplotlib* module
9. [plot](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html)
10. [linestyle](https://matplotlib.org/stable/api/_as_gen/matplotlib.lines.Line2D.html)
11. [axvline](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.axvline.html)
12. [legend](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.legend.html)
13. [savefig](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html)
#### Experiment with **phyphox**
14. [Elevator](https://phyphox.org/wiki/index.php/Experiment:_Elevator)
### Copyright and License
--------------------------
(c) 2021 Andrea Mandanici, Giuseppe Mandaglio, Giovanni Pirrotta, Valeria Conti Nibali, Giacomo Fiumara. Notebook based on the Italian version (c) 2020 Andrea Mandanici, Marco Guarnera, Giuseppe Mandaglio, Giovanni Pirrotta. All content is under Creative Commons Attribution <a rel="license" href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> and all code is under the [BSD 3-Clause License](https://opensource.org/licenses/BSD-3-Clause).
```
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
import statsmodels.formula.api as smf
import sklearn
from sklearn.linear_model import Lasso
import matplotlib.pyplot as plt
import sys
alpha = 0.7
```
### PROJECT INTRODUCTION
The World Health Organization (WHO) estimates that each year approximately one million people die from suicide, which represents a global mortality rate of 16 people per 100,000, or one death every 40 seconds. It is predicted that by 2020 the rate will increase to one death every 20 seconds. Suicide is among the top 10 causes of death worldwide, and even among the top 5 in some countries. Many studies have examined the trends and demographic factors of suicide in order to devise suitable interventions and direct the most resources to where they are most needed.
The dataset I will be using comes from Kaggle. Its information was pulled from four other relevant datasets: United Nations Development Program (2018), Human development index (HDI); World Bank (2018); Suicide in the Twenty-First Century [dataset]; and World Health Organization (2018), Suicide prevention. The factors in the dataset include country, year of incident, sex, age, number of suicides, national population, suicides per 100k population, HDI for the year, GDP for the year, GDP per capita, and generation. These can be grouped into two major categories — demographic information about the person and the economic environment of the country — which are what most studies of suicide focus on. We would like to see which characteristics are associated with higher suicide rates, such as gender and age, and whether a country's level of development, measured by HDI and GDP, matters. I also bring the time factor into the analysis to understand whether different periods show different patterns, so that we might be able to predict the trend. All of the above contribute to the larger mission of mitigating suicide.
The dataset and similar analysis tasks have been tackled by others. In this project, besides replicating similar work, I try to bring two new factors into the analysis: regime and human rights protection level. The regime and human rights protection data were pulled from the Center for Systemic Peace, a political database housed at the Center for International Development and Conflict Management (CIDCM) at the University of Maryland, College Park. Regime is a score ranging from -10 to 10, where -10 represents complete autocracy and 10 represents complete democracy. The human rights protection score follows a similar convention, with higher values representing stronger protection of human rights.
```
df0 = pd.read_csv("suicide_data.csv")
df0.head()
```
### DATA CLEANING
```
df0.info()
df0['country'].nunique()
```
* Make the column names cleaner and more succinct
```
df0.rename(columns={'suicides_no':'suicides', 'suicides/100k pop':'suicides/100k',\
' gdp_for_year ($) ':'gdp/year ', 'gdp_per_capita ($)':'gdp/capita'}, inplace=True)
df0.head()
```
* Delete countries with fewer than 50 records
```
plt.figure(figsize=(10,25))
sns.countplot(y='country', data=df0, alpha=alpha)
plt.title('Data by country')
plt.axvline(x=50, color='k')
plt.show()
country_amountData = df0.groupby('country').count()['year'].reset_index()
country_amountData.sort_values(by='year', ascending=True).head(10)
# drop the 8 countries above with fewer than 50 records
country_selectList = country_amountData[country_amountData['year'] > 50]['country'].reset_index()
df1 = pd.merge(df0, country_selectList, how='outer', indicator=True)
df1 = df1[df1['_merge']=='both']
df1.nunique()
# we have 93 countries now from the original 101 to be considered in our analysis
```
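The merge-and-filter step above can also be written with `isin`, which avoids the `_merge` indicator column. A minimal sketch on a tiny stand-in DataFrame (the threshold on the real dataset is 50):

```python
import pandas as pd

# Tiny stand-in for df0: country 'A' has 3 records, 'B' only 1
df0 = pd.DataFrame({'country': ['A', 'A', 'A', 'B'],
                    'year': [2000, 2001, 2002, 2000]})

counts = df0.groupby('country')['year'].count()
keep = counts[counts > 2].index        # use > 50 on the real dataset
df1 = df0[df0['country'].isin(keep)]

print(df1['country'].unique())         # → ['A']
```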
* Delete the years with a significant amount of missing data, e.g. fewer than 200 records
```
plt.figure(figsize=(25,8))
sns.countplot(x='year', data=df1, alpha=alpha)
plt.title('Data by year')
plt.axhline(y=200, color='k')
plt.show()
df2 = df1[df1['year'] != 2016]
# Delete 2016
```
* Delete the columns that have a significant amount of missing data
```
number_NAN = len(df2) - df2['HDI for year'].count()
number_noNAN = len(df2)
number_NAN * 100 / number_noNAN
# Around 70% of the values in that column are NaNs, so I drop the column
df3 = df2.drop('HDI for year', axis=1)
df3.head()
```
### DATA MERGING - regime (level of political freedom) of the country
```
regime = pd.read_csv("regime.csv")
regime = regime.rename(columns={'Entity': 'country', 'Year': 'year'})
regime.head()
df = pd.merge(df3, regime, how = 'inner', on = ['year', 'country'])
df.head()
```
### DATA ANALYSIS
* Suicide by country
```
byCountry = df.groupby('country').mean().sort_values('suicides/100k', ascending=False).reset_index()
plt.figure(figsize=(10,25))
sns.barplot(x='suicides/100k', y ='country', data=byCountry)
plt.axvline(x = byCountry['suicides/100k'].mean(),color = 'red', ls='--', linewidth=2)
plt.title('Suicides per 100k (by country)')
plt.show()
```
Key points:
1. We can see which countries are above the world average and which are below.
2. Lithuania, Sri Lanka, and Russia are the top 3 countries with the most suicides per 100k.
* Suicide by year
```
byYear = df.groupby('year').mean().reset_index()
plt.figure(figsize=(15,5))
sns.lineplot(x='year', y='suicides/100k', data=byYear, color='navy')
plt.axhline(byYear['suicides/100k'].mean(), ls='--', color='red')
plt.title('Suicides per 100k (by year)')
plt.xlim(1985,2015)
plt.show()
```
Key points:
1. Suicides per 100k rose from 1986 to 1995, when they reached a peak, and then decreased until 2010.
2. There are signs that suicides per 100k are rising in recent years.
3. It would be interesting to look into what happened in the period 1990 to 1995 that caused the surge in suicide incidents.
* Suicide by sex
```
bySex = df.groupby('sex').mean().reset_index()
bySexYear = df.groupby(['sex','year']).mean().reset_index()
bySexAge = df.groupby(['sex','age']).mean().sort_values('suicides/100k', ascending=True).reset_index()
bySexGeneration = df.groupby(['sex','generation']).mean().sort_values('suicides/100k', ascending=True).reset_index()
plt.figure(figsize=(20,15))
# By sex
plt.subplot(221)
sns.barplot(x='sex', y='suicides/100k', data=bySex, alpha=alpha)
plt.title('Suicides per 100k, by sex')
# Time evolution by sex
plt.subplot(222)
sns.lineplot(x='year', y='suicides/100k', data=bySexYear, hue='sex')
plt.xlim(1985,2015)
plt.title('Evolution suicides per 100k, by sex')
# By sex and age
plt.subplot(223)
sns.barplot(x='sex', y='suicides/100k', data=bySexAge, hue='age', alpha=alpha)
plt.title('Suicides per 100k, by sex and age')
# By sex and generation
plt.subplot(224)
sns.barplot(x='sex', y='suicides/100k', data=bySexGeneration, hue='generation', alpha=alpha)
plt.title('Suicides per 100k, by sex and generation')
plt.tight_layout()
plt.show()
# By sex and country
byCountrySex = df.groupby(['country','sex']).mean().reset_index()
byCountrySex.head()
plt.figure(figsize=(10,30))
sns.barplot(y='country', x='suicides/100k', data=byCountrySex, hue='sex')
plt.title('Suicides per 100k (by country and by sex)')
plt.show()
```
Key points:
1. Men commit suicide significantly more often than women.
2. The historical trend for men closely follows the overall world trend, while the rate for women remained relatively constant. This suggests that when looking into the causes, especially for 1990 to 1995, we should focus on men's circumstances.
3. For both sexes, suicide incidence increases with age.
4. In nearly all countries, men had more suicides than women.
* Suicide by age
```
byAge = df.groupby('age').mean().sort_values('suicides/100k', ascending=True).reset_index()
byAgeYear = df.groupby(['age','year']).mean().sort_values('suicides/100k', ascending=True).reset_index()
byAgeSex = df.groupby(['age','sex']).mean().sort_values('suicides/100k', ascending=True).reset_index()
byAgeGen = df.groupby(['age','generation']).mean().sort_values('suicides/100k', ascending=True).reset_index()
plt.figure(figsize=(20,15))
# By age
plt.subplot(221)
sns.barplot(x='age', y='suicides/100k', data=byAge, alpha=alpha)
plt.title('Suicides per 100k, by age')
# Time evolution by age
plt.subplot(222)
sns.lineplot(x='year', y='suicides/100k', data=byAgeYear, hue='age')
plt.xlim(1985,2015)
plt.title('Evolution suicides per 100k, by age')
#
plt.subplot(223)
sns.barplot(x='age', y='suicides/100k', data=byAgeSex, hue='sex', alpha=alpha)
plt.subplot(224)
sns.barplot(x='age', y='suicides/100k', data=byAgeGen, hue='generation', alpha=alpha)
plt.tight_layout()
plt.show()
```
Key points:
1. The number of suicides per 100k increases with the age.
2. The number of suicides per 100k decreases from 1995 to 2015 in all age groups except 5–14 years, which slightly increases.
3. The 1995 peak in suicides is most striking for people aged 75+.
4. In all age groups there are more male suicides per 100k than female ones.
* Suicide and GDP/capita
```
g = sns.jointplot(x="gdp/capita", y="suicides/100k", data=byCountry, kind='reg', \
xlim=(-100,80000), ylim=(0,45), color='blue')
```
Key point:
There is no significant correlation between GDP/capita and suicide.
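To put a number on "no significant correlation", one could compute Pearson's $r$ and its p-value with `scipy.stats.pearsonr`. A minimal sketch on synthetic stand-ins for the two columns (generated independently by construction, so a small $|r|$ and a large p-value are expected):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-ins for byCountry['gdp/capita'] and byCountry['suicides/100k']
gdp = rng.uniform(1_000, 80_000, size=90)
suicides = rng.uniform(0, 45, size=90)   # generated independently of gdp

r, p = stats.pearsonr(gdp, suicides)
print(f"Pearson r = {r:.3f}, p-value = {p:.3f}")
```

A small $|r|$ together with $p > 0.05$ would support the visual impression of no significant linear correlation.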
* Suicide and regime (political freedom)
```
g = sns.jointplot(x="Political regime type (Score)", y="suicides/100k", data=byCountry, kind='reg', \
xlim=(-10,10), ylim=(0,45), color='blue')
```
Key point:
There is no significant correlation between regime and suicide.
* Suicide and human right protection
```
g = sns.jointplot(x="Human rights protection score", y="suicides/100k", data=byCountry, kind='reg', \
xlim=(-5,5), ylim=(0,45), color='blue')
```
Key point:
There is no significant correlation between human right protection and suicide.
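All three candidate predictors can be checked against the suicide rate at once with a correlation matrix via `DataFrame.corr()`. A sketch on a synthetic stand-in for `byCountry` (the column names mirror the real ones, the values are random):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic stand-in for the per-country means in byCountry
demo = pd.DataFrame({
    'suicides/100k': rng.uniform(0, 45, 90),
    'gdp/capita': rng.uniform(1_000, 80_000, 90),
    'Political regime type (Score)': rng.integers(-10, 11, 90),
    'Human rights protection score': rng.uniform(-5, 5, 90),
})

# One row of the correlation matrix: each predictor vs the suicide rate
print(demo.corr()['suicides/100k'].round(3))
```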
# widgets.image_cleaner
fastai offers several widgets to support the workflow of a deep learning practitioner. The purpose of the widgets is to help you organize, clean, and prepare your data for your model. Widgets are separated by data type.
```
from fastai.vision import *
from fastai.widgets import DatasetFormatter, ImageCleaner
from fastai.gen_doc.nbdoc import show_doc
%reload_ext autoreload
%autoreload 2
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = create_cnn(data, models.resnet18, metrics=error_rate)
learn.fit_one_cycle(2)
learn.save('stage-1')
```
We create a databunch with all the data in the training set and no validation set (DatasetFormatter uses only the training set)
```
db = (ImageItemList.from_folder(path)
.no_split()
.label_from_folder()
.databunch())
learn = create_cnn(db, models.resnet18, metrics=[accuracy])
learn.load('stage-1');
show_doc(DatasetFormatter)
```
The [`DatasetFormatter`](/widgets.image_cleaner.html#DatasetFormatter) class prepares your image dataset for widgets by returning a formatted [`DatasetTfm`](/vision.data.html#DatasetTfm) based on the [`DatasetType`](/basic_data.html#DatasetType) specified. Use `from_toplosses` to grab the most problematic images directly from your learner. Optionally, you can restrict the formatted dataset returned to `n_imgs`.
```
show_doc(DatasetFormatter.from_similars)
from fastai.gen_doc.nbdoc import *
from fastai.widgets.image_cleaner import *
show_doc(DatasetFormatter.from_toplosses)
show_doc(ImageCleaner)
```
[`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) is for cleaning up images that don't belong in your dataset. It renders images in a row and gives you the opportunity to delete the file from your file system. To use [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) we must first use `DatasetFormatter().from_toplosses` to get the suggested indices for misclassified images.
```
ds, idxs = DatasetFormatter().from_toplosses(learn)
ImageCleaner(ds, idxs, path)
```
[`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) does not change anything on disk (neither labels nor the existence of images). Instead, it creates a 'cleaned.csv' file in your data path, from which you need to load your new databunch for the changes to be applied.
```
df = pd.read_csv(path/'cleaned.csv', header='infer')
# We create a databunch from our csv. We include the data in the training set and we don't use a validation set (DatasetFormatter uses only the training set)
np.random.seed(42)
db = (ImageItemList.from_df(df, path)
.no_split()
.label_from_df()
.databunch(bs=64))
learn = create_cnn(db, models.resnet18, metrics=error_rate)
learn = learn.load('stage-1')
```
You can then use [`ImageCleaner`](/widgets.image_cleaner.html#ImageCleaner) again to find duplicates in the dataset. To do this, you can specify `duplicates=True` while calling ImageCleaner after getting the indices and dataset from `.from_similars`. Note that if you are using a layer's output which has dimensions [n_batches, n_features, 1, 1] then you don't need any pooling (this is the case with the last layer). The suggested use of `.from_similars()` with resnets is using the last layer and no pooling, like in the following cell.
```
ds, idxs = DatasetFormatter().from_similars(learn, layer_ls=[0,7,1], pool=None)
ImageCleaner(ds, idxs, path, duplicates=True)
```
## Methods
## Undocumented Methods - Methods moved below this line will intentionally be hidden
## New Methods - Please document or move to the undocumented section
```
show_doc(ImageCleaner.make_dropdown_widget)
```
# Harvesting Commonwealth Hansard
The proceedings of Australia's Commonwealth Parliament are recorded in Hansard, which is available online through the Parliamentary Library's ParlInfo database. [Results in ParlInfo](https://parlinfo.aph.gov.au/parlInfo/search/summary/summary.w3p;adv=yes;orderBy=_fragment_number,doc_date-rev;query=Dataset:hansardr,hansardr80;resCount=Default) are generated from well-structured XML files which can be downloaded individually from the web interface – one XML file for each sitting day. This notebook shows you how to download the XML files for large scale analysis. It's an updated version of the code I used to harvest Hansard in 2016.
**If you just want the data, a full harvest of the XML files for both houses covering 1901–1980 and 1998–2005 [is available in this repository](https://github.com/wragge/hansard-xml). XML files are not currently available for 1981 to 1997. Open Australia provides access to [Hansard XML files from 2006 onwards](http://data.openaustralia.org.au/).**
The XML files are published on the Australian Parliament website [under a CC-BY-NC-ND licence](https://www.aph.gov.au/Help/Disclaimer_Privacy_Copyright#c).
## Method
When you search in ParlInfo, your results point to fragments within a day's proceedings. Multiple fragments will be drawn from a single XML file, so there are many more results than there are files. The first step in harvesting the XML files is to work through the results for each year, scraping links to the XML files from the HTML pages and discarding any duplicates. The `harvest_year()` function below does this. These lists of links are saved as CSV files – one for each house and year. You can view the CSV files in the `data` directory.
Once you have a list of XML urls for both houses across all years, you can simply use the urls to download the XML files.
## Import what we need
```
import re
import os
import time
import math
import requests
import requests_cache
import arrow
import pandas as pd
from tqdm.auto import tqdm
from bs4 import BeautifulSoup
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
s = requests_cache.CachedSession()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[ 502, 503, 504 ])
s.mount('https://', HTTPAdapter(max_retries=retries))
s.mount('http://', HTTPAdapter(max_retries=retries))
```
## Set your output directory
This is where all the harvested data will go.
```
output_dir = 'data'
os.makedirs(output_dir, exist_ok=True)
```
## Define the base ParlInfo urls
These are the basic templates for searches in ParlInfo. Later on we'll insert a date range in the `query` slot to filter by year, and increment the `page` value to work through the complete set of results.
```
# Years you want to harvest
# Note that no XML files are available for the years 1981 to 1997, so harvests of this period will fail
START_YEAR = 1901
END_YEAR = 2005
URLS = {
'hofreps': (
'http://parlinfo.aph.gov.au/parlInfo/search/summary/summary.w3p;'
'adv=yes;orderBy=date-eLast;page={page};'
'query={query}%20Dataset%3Ahansardr,hansardr80;resCount=100'),
'senate': (
'http://parlinfo.aph.gov.au/parlInfo/search/summary/summary.w3p;'
'adv=yes;orderBy=date-eLast;page={page};'
'query={query}%20Dataset%3Ahansards,hansards80;resCount=100')
}
```
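As a quick sanity check of how the template gets filled, here is the House of Representatives template repeated inline (so the snippet stands alone) with a 1901 query and page 0 substituted:

```python
# The hofreps template from the URLS dict above, inlined
template = (
    'http://parlinfo.aph.gov.au/parlInfo/search/summary/summary.w3p;'
    'adv=yes;orderBy=date-eLast;page={page};'
    'query={query}%20Dataset%3Ahansardr,hansardr80;resCount=100')

# A query restricting results to 1901 (dates are URL-encoded DD/MM/YYYY)
query = 'Date%3A01%2F01%2F1901%20>>%2031%2F12%2F1901'

url = template.format(page=0, query=query)
print(url)
```

Incrementing the `page` value in the same way walks through the full result set, 100 results at a time.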
## Define some functions to do the work
```
def get_total_results(house, query):
    '''
    Get the total number of results in the search.
    '''
    # Insert query and page values into the ParlInfo url
    url = URLS[house].format(query=query, page=0)
    # Get the results page
    response = s.get(url)
    # Parse the HTML
    soup = BeautifulSoup(response.text, 'html.parser')
    try:
        # Find where the total results are given in the HTML
        summary = soup.find('div', 'resultsSummary').contents[1].string
        # Extract the number of results from the string
        total = re.search(r'of (\d+)', summary).group(1)
    except AttributeError:
        total = 0
    return int(total)
def get_xml_url(url):
    '''
    Extract the XML file url from an individual result.
    '''
    # Load the page for an individual result
    response = s.get(url)
    # Parse the HTML
    soup = BeautifulSoup(response.text, 'html.parser')
    # Find the XML url by looking for a pattern in the href
    try:
        xml_url = soup.find('a', href=re.compile('toc_unixml'))['href']
    except TypeError:
        xml_url = None
    if not response.from_cache:
        time.sleep(1)
    return xml_url
def harvest_year(house, year):
    '''
    Loop through a search by house and year, finding all the urls for XML files.
    '''
    # Format the start and end dates
    start_date = '01%2F01%2F{}'.format(year)
    end_date = '31%2F12%2F{}'.format(year)
    # Prepare the query value using the start and end dates
    query = 'Date%3A{}%20>>%20{}'.format(start_date, end_date)
    # Get the total results
    total_results = get_total_results(house, query)
    xml_urls = []
    dates = []
    found_dates = []
    if total_results > 0:
        # Calculate the number of pages in the results set
        num_pages = int(math.ceil(total_results / 100))
        # Loop through the page range
        for page in tqdm(range(0, num_pages + 1), desc=str(year), leave=False):
            # Get the next page of results
            url = URLS[house].format(query=query, page=page)
            response = s.get(url)
            # Parse the HTML
            soup = BeautifulSoup(response.text, 'html.parser')
            # Find the list of results and loop through them
            for result in tqdm(soup.find_all('div', 'resultContent'), leave=False):
                # Try to identify the date
                try:
                    date = re.search(r'Date: (\d{2}\/\d{2}\/\d{4})', result.find('div', 'sumMeta').get_text()).group(1)
                    date = arrow.get(date, 'DD/MM/YYYY').format('YYYY-MM-DD')
                except AttributeError:
                    # There are some dodgy dates -- we'll just ignore them
                    date = None
                # If there's a date, and we haven't seen it already, we'll grab the details
                if date and date not in dates:
                    found_dates.append(date)
                    # Get the link to the individual result page
                    # This is where the XML file links live
                    result_link = result.find('div', 'sumLink').a['href']
                    # Get the XML file link from the individual record page
                    xml_url = get_xml_url(result_link)
                    if xml_url:
                        dates.append(date)
                        # Save dates and links
                        xml_urls.append({'date': date, 'url': 'https://parlinfo.aph.gov.au{}'.format(xml_url)})
            if not response.from_cache:
                time.sleep(1)
    for f_date in list(set(found_dates)):
        if f_date not in dates:
            xml_urls.append({'date': f_date, 'url': ''})
    return xml_urls
```
## Harvest all the XML file links
```
for house in ['hofreps', 'senate']:
    for year in range(START_YEAR, END_YEAR + 1):
        xml_urls = harvest_year(house, year)
        df = pd.DataFrame(xml_urls)
        df.to_csv(os.path.join(output_dir, '{}-{}-files.csv'.format(house, year)), index=False)
```
## Download all the XML files
This opens up each house/year list of file links and downloads the XML files. The directory structure is simple:
```
-- output directory ('data' by default)
-- hofreps
-- 1901
-- XML files...
```
```
for house in ['hofreps', 'senate']:
    for year in range(START_YEAR, END_YEAR + 1):
        output_path = os.path.join(output_dir, house, str(year))
        os.makedirs(output_path, exist_ok=True)
        df = pd.read_csv(os.path.join(output_dir, '{}-{}-files.csv'.format(house, year)))
        for row in tqdm(df.itertuples(), desc=str(year), leave=False):
            if pd.notnull(row.url):
                filename = re.search(r'(?:%20)*([\w\(\)-]+?\.xml)', row.url).group(1)
                # Some of the later files don't include the date in the filename so we'll add it.
                if filename[:4] != str(year):
                    filename = f'{row.date}_{filename}'
                filepath = os.path.join(output_path, filename)
                if not os.path.exists(filepath):
                    response = s.get(row.url)
                    with open(filepath, 'w') as xml_file:
                        xml_file.write(response.text)
                    if not response.from_cache:
                        time.sleep(1)
```
## Summarise the data
This just merges all the house/year lists into one big list, adding columns for house and year. It saves the results as a CSV file. This will be useful to analyse things like the number of sitting days per year.
The fields in the CSV file are:
* `date` – date of sitting day in YYYY-MM-DD format
* `url` – url for XML file (where available)
* `year`
* `house` – 'hofreps' or 'senate'
Here's the results of my harvest from 1901 to 2005: [all-sitting-days.csv](data/all-sitting-days.csv)
```
df = pd.DataFrame()
for house in ['hofreps', 'senate']:
    for year in range(START_YEAR, END_YEAR + 1):
        year_df = pd.read_csv(os.path.join(output_dir, '{}-{}-files.csv'.format(house, year)))
        year_df['year'] = year
        year_df['house'] = house
        df = df.append(year_df)
df.sort_values(by=['house', 'date'], inplace=True)
df.to_csv(os.path.join(output_dir, 'all-sitting-days.csv'), index=False)
```
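With `all-sitting-days.csv` in hand, counting sitting days per house per year is a short `groupby`. Here's the idea on a few made-up rows so the snippet stands alone (the real analysis would read the CSV instead):

```python
import pandas as pd

# A few sample rows in the same shape as all-sitting-days.csv
df = pd.DataFrame({
    'date': ['1901-05-09', '1901-05-10', '1902-01-14', '1901-05-09'],
    'url': ['u1', 'u2', 'u3', 'u4'],
    'year': [1901, 1901, 1902, 1901],
    'house': ['hofreps', 'hofreps', 'hofreps', 'senate'],
})

# Count distinct sitting days per house per year
sitting_days = (
    df.groupby(['house', 'year'])['date']
      .nunique()
      .rename('sitting_days')
      .reset_index()
)
print(sitting_days)
```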
## Zip up each year individually
For convenience you can zip up each year individually.
```
from shutil import make_archive
for house in ['hofreps', 'senate']:
    xml_path = os.path.join(output_dir, house)
    for year in [d for d in os.listdir(xml_path) if d.isnumeric()]:
        year_path = os.path.join(xml_path, year)
        make_archive(year_path, 'zip', year_path)
```
----
Created by [Tim Sherratt](https://timsherratt.org) for the [GLAM Workbench](https://glam-workbench.github.io/).
# Matplotlib
```
import matplotlib.pyplot as plt
import numpy as np
x = [1,2,3]
y = [2,4,6]
plt.scatter(x,y) # scatter a point at each (x, y) coordinate
plt.show()
x = [1,2,3]
y = [2,4,6]
plt.plot(x,y) # connect the points with a line
plt.show()
x = [1,2,3]
y = [2,4,6]
plt.plot(x,y) # connect the points with a line
plt.scatter(x,y)
plt.show()
x1 = [1,2,3]
y1 = [2,4,6]
x2 = [4,5,6]
y2 = [2,4,6]
plt.plot(x1,y1,"g*") # the format string sets the colour + marker shape
plt.plot(x2,y2)
plt.show()
# plot function x^3
x = np.array([1,2,3,4])
y = x**3
plt.plot(x,y)
plt.show()
# plot function x^3
#x = np.array([1,2,3,4])
x = np.arange(0,5,0.1) # plot for every 0.1+ value from 0 to 4.9
y = x**3
plt.plot(x,y)
plt.show()
# plot function x^3
#x = np.array([1,2,3,4])
x = np.arange(0,5,0.1) # plot for every 0.1+ value from 0 to 4.9
y = x**3
plt.plot(x,y,"ro")
plt.show()
a = [3,4,5] # x defaults to range(0, n)
plt.plot(a)
plt.show()
# plot function x^3
# color,marker,linewidth,labels for axis , dataset and title, legends,axis,grid
#x = np.array([1,2,3,4])
x = np.arange(0,5,0.1) # plot for every 0.1+ value from 0 to 4.9
y1 = x**3
y2 = x**2
#plt.plot(x,y,"red",marker='o')
plt.plot(x,y1,"red",linewidth = 5,label = "x**3") # linewidth sets the line thickness
plt.plot(x,y2,"green",linewidth = 5,label = "x**2")
# user define axis unit
#plt.axis([0,10,-5,120])
# add something near lines
plt.text(3.5,80,"Text",fontsize = 14)
plt.grid()
plt.ylabel("x**3") # y-label
plt.xlabel("value") # x-label
plt.title("Matplotlib Demo") # title
plt.legend() # draw the legend box
plt.show()
```
### Bubble Chart
```
year = [2012,2013,2014,2015,2016,2017]
sal = [12,13,14,15,16,17]
popu = [100,120,180,250,300,370]
plt.scatter(year,sal, s = 100) #s => size
plt.show()
plt.scatter(year,sal, s = popu,c='black') #s => size of each population
plt.title("Bubble chart")
plt.show()
# alpha => opacity of the bubbles
plt.scatter(year,sal, s = popu,c='black',alpha = 0.3) #s => size of each population
plt.title("Bubble chart")
plt.show()
# changing each bubble color
c = np.arange(len(year))
# alpha => opacity of the bubbles
plt.scatter(year,sal, s = popu,c=c,alpha = 0.5) #s => size of each population
plt.title("Bubble chart")
plt.show()
# edgecolor => bubble outline
plt.scatter(year,sal, s = popu,c=c,alpha = 0.5,edgecolor= 'black') #s => size of each population
plt.title("Bubble chart")
plt.show()
# plt.text => annotate a bubble with its value
plt.scatter(year,sal, s = popu,c=c,alpha = 0.5,edgecolor= 'black',marker = '^') #s => size of each population
plt.text(year[0]+0.1,sal[0]+0.1,popu[0])
plt.text(year[3]+0.1,sal[3]+0.1,popu[3])
plt.title("Bubble chart")
plt.show()
```
### pie-Graph
```
label = ['A','B','C','D']
size = [3,4,6,2]
color = ['blue','black','green','red']
explod = [0.1,0,0,0]
plt.pie(size,colors = color,explode = explod ,labels = label,autopct='%.1f%%')
plt.axis("equal") # for a symmetrical circle
plt.title("pie Graph")
plt.show()
```
### Histogram Chart
```
a = [1,2,3,4,1,3,4,2,1,1,2]
plt.hist(a)
plt.show()
b = [1,2,3,7,7,3,4,4,6,6,8,2,6,2,1,5,5,9,9]
xt = np.arange(23)
xt
plt.hist(b,bins = 25,edgecolor = "black")
plt.xticks(xt)
plt.show()
plt.hist(b,bins = [1,3,4,6,8,10],edgecolor = "black")
plt.axis([0,12,0,10])
plt.show()
```
### Bar Graph
```
year = [2012,2013,2014,2015,2016,2017]
sal = [12,13,14,15,16,17]
popu = [100,120,180,250,300,370]
plt.bar(year,popu)
plt.show()
plt.bar(year,popu,width=0.5) # changing width
plt.show()
```
### Assignment Problem's
```
x = [0,1,2,3,4,5,6,7,9,10]
y = [1,2,5,10,17,26,37,50,82,101]
plt.plot(x,y)
plt.grid()
plt.show()
print(60,"to",70)
x = list(range(0,40,2)) # the first 20 even numbers
y = [2**i for i in x]
plt.plot(x,y,'b--')
plt.grid()
plt.show()
x = [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40]
x = np.array(x)
y = x**2
plt.plot(x,y,"ro",linewidth= 8,label= "Correspondence")
plt.ylabel("Y-axis")
plt.xlabel("X-axis")
plt.legend()
plt.title("Solution 3")
print(x)
print(y)
```
## Dependencies
```
import json, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/254-tweet-train-5fold-roberta-96-onecycle-lr-pt/'
with open(input_base_path + 'config.json') as json_file:
    config = json.load(json_file)
config
vocab_path = input_base_path + 'vocab.json'
merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
# vocab_path = base_path + 'roberta-base-vocab.json'
# merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
                                  lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
    input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
    last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
    logits = layers.Dense(2, use_bias=False, name='qa_outputs')(last_hidden_state)
    start_logits, end_logits = tf.split(logits, 2, axis=-1)
    start_logits = tf.squeeze(start_logits, axis=-1)
    end_logits = tf.squeeze(end_logits, axis=-1)
    model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
    return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))

for model_path in model_path_list:
    print(model_path)
    model = model_fn(config['MAX_LEN'])
    model.load_weights(model_path)
    test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
    test_start_preds += test_preds[0]
    test_end_preds += test_preds[1]
```
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
# Post-process
test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
# Visualize predictions
```
test['text_len'] = test['text'].apply(lambda x : len(x))
test['label_len'] = test['selected_text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))
test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)
display(test.head(10))
display(test.describe())
```
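The `jaccard` helper used above comes in via the utility-script star imports; it isn't defined in this notebook. For reference, a word-level Jaccard similarity of the kind used as this competition's metric can be sketched like this (an assumed stand-in, not the helper's verbatim source):

```python
def jaccard(str1, str2):
    """Word-level Jaccard similarity between two strings (assumed definition)."""
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    if not a and not b:
        return 0.5  # convention used by the competition metric for two empty strings
    return float(len(a & b)) / len(a | b)

print(jaccard('i am happy today', 'happy today'))
```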
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
# Python Module in Indonesian
## Ninth Installment
___
Coded by psychohaxer | Version 1.1 (2020.12.24)
___
This notebook contains example Python code together with its output, to serve as a coding reference. It may be shared and edited as long as the author's name is neither changed nor removed. Happy learning, and enjoy your time.
Note: this module uses Python 3
This notebook is licensed under the [MIT License](https://opensource.org/licenses/MIT).
___
## Chapter 9 Functional Programming
Functional programming is a programming paradigm in which particular blocks of code only need to be written once and are simply called whenever they should run. A `function` itself is a block of program code that can be executed by calling its name. By applying functional programming our code becomes better structured and we avoid wasteful typing. The concept applied is _DRY (Don't Repeat Yourself)_, which at the same time avoids _WET (Write Everything Twice)_. In short, we stop writing the same code over and over.
Characteristics of a function:
* A block of code that performs a specific task
* Reusable
* Executed by calling its name
* Can have parameters and be given arguments
* A parameter is a variable defined inside the parentheses `()`
* Example of a function with a parameter: `hitungLuas(sisi)`
* An argument is the actual value given to the function when it is called
* Example of a function with an argument: `hitungLuas(4)`
* Can produce a value as a result (`return`)
* There are built-in functions (for example `print()`)
* We can define our own functions
### Defining a Function
When defining a function, use a clear name that reflects what it does. A function definition starts with the `def` keyword followed by the function's name.
```
def nama_fungsi():
    print("Hai aku adalah fungsi!")
```
The above is a simple example of defining a function.
### Calling a Function
A function does not run unless it is called. Calling a function is also known as _call_, _invoke_, _run_, _execute_, etc.; they all mean the same thing. A function is called by writing its name followed by parentheses, passing arguments if needed.
```
nama_fungsi()
```
### Functions with Parameters
As an example, we will write a function that compares two numbers. The numbers to be processed are passed via the parameters, which in the function below are the variables a and b.
```
def cek_bil(a,b):
    if (a == b):
        print("A == B")
    elif (a < b):
        print("A < B")
    elif (a > b):
        print("A > B")
```
We will now call this function and pass it arguments.
```
cek_bil(13,13)
cek_bil(12,14)
cek_bil(10,7)
```
### The `return` Keyword
`return` is used when we want our function to give back a value.
```
def jumlah(a,b):
    return a+b

jumlah(3,4)
```
### Default Parameters
A default parameter value is the value used when the function is called without an argument for that parameter. Parameters with default values must come **after** all parameters without defaults, because arguments are matched by position. So if we have several parameters, only the trailing ones can have default values.
```
def sapa(nama, sapaan="Halo"):
    print(sapaan, nama)
```
If we call the function above passing an argument only for the nama parameter, the sapaan parameter will be filled with `"Halo"`.
```
sapa("Adi")
```
It behaves differently when we supply a value ourselves.
```
sapa("Fajar","Selamat pagi")
```
What happens if we put a default parameter value in the wrong position?
```
def fungsi(a=10,b):
    return a+b
```
### Functions with a Variable Number of Arguments
If we do not know in advance exactly how many arguments will be passed, the function is written as below.
```
def simpan(variabel, *tup):
    print(variabel)
    for variabel in tup:
        print(variabel)

simpan(2)
simpan(2,3,4,5,6)
```
### Assigning a Function to a Variable
We can use a variable to call a function. Remember the `cek_bil` function above? We will assign it to the variable `cek`.
```
cek = cek_bil
cek(2,3)
```
### Functions Returning Other Functions
A function can call and return the value of a function defined inside it. This is done by paying attention to the inner function's indentation and assigning the function to a variable.
```
def salam():
    def ucapkan():
        return "Assalamualaikum"
    return ucapkan

ucapan_salam = salam()
print(ucapan_salam())
```
### Local and Global Variables
Not every variable can be accessed from everywhere; it depends on where the variable is defined. A local variable is defined inside a function, whereas a global variable is defined outside any function.
```
variabel_global = "Aku variabel global! Aku bisa diakses dimana saja."

def fungsi1():
    variabel_lokal = "Aku variabel lokal! Aku hanya bisa diakses didalam fungsiku."
    print(variabel_global)
    print(variabel_lokal)

fungsi1()
```
Calling a local variable outside its function raises a `NameError` because the variable is not found.
```
print(variabel_lokal)

def fungsi2():
    variabel_lokal_juga = "Aku variabel lokal juga lho!"
    print(variabel_global)
    print(variabel_lokal_juga)
    print(variabel_lokal)

fungsi2()
```
### Nested Functions
A function can contain other functions, in which case it is called a nested function. As an example, we will write a function that computes the area of a square or a rectangle: if we pass one argument it is treated as a square, and with two arguments as a rectangle.
```
def hitungLuasSegiEmpat(a, b=0):
    def persegi(s):
        luas = s*s
        return luas
    def persegiPanjang(p,l):
        luas = p*l
        return luas
    if (b == 0):
        return persegi(a)
    else:
        return persegiPanjang(a,b)

hitungLuasSegiEmpat(4)
hitungLuasSegiEmpat(3,5)
```
___
## Exercise
1. Write a function that determines whether its argument is a prime number or not.
Challenge: the program should also report which numbers the argument is a multiple of.
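One possible solution sketch for the exercise (the function name `cek_prima` and its messages are our own choices, not part of the module):

```python
def cek_prima(n):
    """Return True if n is prime; report the factors found when it is not."""
    if n < 2:
        print(n, "is not a prime number")
        return False
    # Checking divisors up to sqrt(n) is enough
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            print(n, "is not prime: it is a multiple of", i, "and", n // i)
            return False
    print(n, "is a prime number")
    return True

cek_prima(13)
cek_prima(15)
```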
___
coded with ❤ by [psychohaxer](http://github.com/psychohaxer)
___
# Enumerating BiCliques to Find Frequent Patterns
#### KDD 2019 Workshop
#### Authors
- Tom Drabas (Microsoft)
- Brad Rees (NVIDIA)
- Juan-Arturo Herrera-Ortiz (Microsoft)
#### Problem overview
From time to time PCs running Microsoft Windows fail: a program might crash or hang, or you experience a kernel crash leading to the famous blue screen (we do love those 'Something went wrong' messages as well...;)).
<img src="images/Windows_SomethingWentWrong.png" alt="Windows problems" width=380px class="center"/>
When this happens it's not a good experience, and we are keen to quickly find out what might have gone wrong, or at least what is common among the PCs that have failed.
## Import necessary modules
```
import cudf
import numpy
import azureml.core as aml
import time
```
## Load the data
The data prepared for this workshop will be available to download after the conference. We will share the link in the final notebook, which will be available on the RAPIDS GitHub account.
### Data
The data we will be using in this workshop has been synthetically generated to showcase the types of scenarios we encounter in our work.
While running certain workloads, PCs might fail for one reason or another. We collect the information from both types of scenarios and enrich the observations with the metadata about each PC (hardware, software, failure logs etc.). This forms a dataset where each row represents a PC and the features column contains a list of all the metadata we want to mine to find frequent patterns about the population that has failed.
In this tutorial we will represent this data as a bi-partite graph. A bi-partite graph is one whose vertices can be divided into two disjoint sets such that every edge connects a vertex in one set to a vertex in the other (no vertices within the same set are connected). See the example below.
<img src="images/BiPartiteGraph_Example.png" alt="Bi-Partite graph example" width=200px class="center"/>
In order to operate on this type of data we convert the list-of-features per row to a COO (Coordinate list) format: each row represents an edge connection, the first column contains the source vertex, the second one contains the destination vertex, and the third column contains the failure flag (0 = success, 1 = failure).
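As a small illustration of that conversion, here is the list-of-features to COO step in plain pandas, on a made-up three-PC dataset (not the actual workshop data):

```python
import pandas as pd

# Each row: a PC, its list of metadata features, and a failure flag
pcs = pd.DataFrame({
    'pc_id': [0, 1, 2],
    'features': [[100, 101], [100, 102, 103], [101]],
    'flag': [0, 1, 0],
})

# Explode the feature lists into one edge per row: (src, dst, flag)
coo = (
    pcs.explode('features')
       .rename(columns={'pc_id': 'src', 'features': 'dst'})
       [['src', 'dst', 'flag']]
       .reset_index(drop=True)
)
print(coo)
```

Each PC contributes one edge per feature, so a PC with three features becomes three rows in the COO edge list.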
```
!head -n 3 ../../../../data/fpm_graph/coo_fpm.csv
```
Now we can load the data into a RAPIDS DataFrame `cudf`. ***NOTE: This will take longer than if you were running this on your local machine since the data-store is separate from this running VM. Normally it would be almost instant.***
```
%%time
fpm_df = cudf.read_csv('../../../../data/fpm_graph/coo_fpm.csv', names=['src', 'dst', 'flag'])
import pandas as pd
%%time
fpm_pdf = pd.read_csv('../../../../data/fpm_graph/coo_fpm.csv', names=['src', 'dst', 'flag'])
```
Now that we have the data loaded, let's check how big our data is.
```
%%time
shp = fpm_df.shape
print('Row cnt: {0}, col cnt: {1}'.format(*shp))
```
So, we have >41M records in our DataFrame. Let's see what it looks like:
```
print(fpm_df.head(10))
```
## Understand the data
Now that we have the data, let's explore it a bit.
### Overall failure rate
First, let's find out the overall failure rate. In general, we do not want to extract any patterns whose failure rate is below the overall one, since they would not help us understand the phenomenon we're dealing with or pinpoint the actual problems.
```
%%time
print(fpm_df['flag'].sum() / float(fpm_df['flag'].count()))
```
So, the overall failure rate is 16.7%. Note that running the `sum` and `count` reducers on 41M records took only ~5-10 ms.
### Device count
I didn't tell you how many devices we included in the dataset. Let's figure it out. Since the `src` column contains multiple edges per PC, we need to count only the unique ids in this column.
```
%%time
print(fpm_df['src'].unique().count())
```
So, we have 755k devices in the dataset and it took only 1s to find this out!!!
### Distinct features count
Let's now check how many distinct metadata features we included in the dataset.
```
%%time
print(fpm_df['dst'].unique().count())
```
Now you can see it is a synthetic dataset ;) We have a universe of 15k distinct metadata features from which each PC's feature list is drawn.
### Degree distribution
Different PCs have different numbers of features: some have two CPUs or 4 GPUs (lucky...). Below we can quickly find how many features each PC has.
```
%%time
degrees = fpm_df.groupby('src').agg({'dst': 'count'})
print(degrees)
print(
    'On average PCs have {0:.2f} components. The one with the max number has {1}.'
    .format(
        degrees['dst'].mean()
        , degrees['dst'].max()
    )
)
```
### Inspecting the distribution of degrees
We can very quickly calculate the deciles of degrees.
```
%%time
quantiles = degrees.quantile(q=[float(e) / 100 for e in range(0, 100, 10)])
print(quantiles.to_pandas())
```
Let's see what the distribution looks like.
```
%%time
buckets = degrees['dst'].value_counts().reset_index().to_pandas()
buckets.columns = ['Bucket', 'Count']
import matplotlib.pyplot as plt
%matplotlib inline
plt.bar(buckets['Bucket'], buckets['Count'])
```
## Mine the data and find the bi-cliques
In this part of the tutorial we will show you the prototype implementation of the iMBEA algorithm proposed in Zhang, Y et al. paper from 2014 titled _On finding bicliques in bipartite graphs: A novel algorithm and its application to the integration of diverse biological data types_ published in BMC bioinformatics 15 (110) URL: https://www.researchgate.net/profile/Michael_Langston/publication/261732723_On_finding_bicliques_in_bipartite_graphs_A_novel_algorithm_and_its_application_to_the_integration_of_diverse_biological_data_types/links/00b7d53a300726c5b3000000/On-finding-bicliques-in-bipartite-graphs-A-novel-algorithm-and-its-application-to-the-integration-of-diverse-biological-data-types.pdf
### Setup
First, we do some setting up.
```
from collections import OrderedDict
import numpy as np
# must be factor of 10
PART_SIZE = int(1000)
```
### Data partitioning
We partition the DataFrame into multiple parts to aid computations.
```
def _partition_data_by_feature(_df):
    # compute the number of sets
    m = int((_df['dst'].max() / PART_SIZE) + 1)
    _ui = [None] * (m + 1)
    # Partition the data into a number of smaller DataFrames
    s = 0
    e = s + PART_SIZE
    for i in range(m):
        _ui[i] = _df.query('dst >= @s and dst < @e')
        s = e
        e = e + PART_SIZE
    return _ui, m
```
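The same bucketing logic can be sketched in plain pandas on a handful of made-up edges (the real function above operates on cudf DataFrames):

```python
import pandas as pd

PART_SIZE = 1000  # same constant as above

# Four edges with destination ids spread across three buckets
df = pd.DataFrame({'src': [0, 1, 2, 3], 'dst': [5, 999, 1000, 2500]})

# Split the edge list into buckets of PART_SIZE destination ids
m = int(df['dst'].max() / PART_SIZE) + 1
parts = []
s = 0
for i in range(m):
    e = s + PART_SIZE
    parts.append(df.query('dst >= @s and dst < @e'))
    s = e

print([len(p) for p in parts])
```

Partitioning by `dst` range keeps each per-feature lookup confined to one small DataFrame instead of scanning the full edge list.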
### Enumerating features
One of the key components of the iMBEA algorithm is how it scans the graph starting, in our case, from the most popular feature and moving to the least popular. The `_count_features(...)` method below achieves exactly that, producing a list of features ranked by their popularity.
```
def _count_features(_gdf, sort=True):
    aggs = OrderedDict()
    aggs['dst'] = 'count'
    c = _gdf.groupby(['dst'], as_index=False).agg(aggs)
    c = c.rename(columns={'dst': 'count'})
    c = c.reset_index()
    if (sort):
        c = c.sort_values(by='count', ascending=False)
    return c

print(_count_features(fpm_df))
```
### Fundamental methods
Below are some fundamental methods used iteratively by the final algorithm.
#### `get_src_from_dst`
This method returns a DataFrame of all the source vertices that have the destination vertex `id` in their list of features.
```
# get all src vertices for a given dst
def get_src_from_dst(_gdf, id):
    _src_list = (_gdf.query('dst == @id'))
    _src_list.drop_column('dst')
    return _src_list
```
#### `get_all_feature`
This method returns all the features that are connected to the vertices found in the `src_list_df`.
```
# get all the features used by the specified machines
def get_all_feature(_gdf, src_list_df, N):
    c = [None] * N
    for i in range(N):
        c[i] = src_list_df.merge(_gdf[i], on='src', how="inner")
    return cudf.concat(c)
```
#### `is_same_as_last`
This method checks whether the bi-clique has already been enumerated.
```
def is_same_as_last(_old, _new):
    status = False
    if (len(_old) == len(_new)):
        m = _old.merge(_new, on='src', how="left")
        if m['src'].null_count == 0:
            status = True
    return status
```
#### `update_results`
This is a utility method that maintains (1) a DataFrame with the `src` and `dst` vertices of the enumerated bi-cliques, and (2) a DataFrame with some basic statistics about them.
```
def update_results(m, f, key, b, s):
    """
    Input
    -----
    * m = machines
    * f = features
    * key = cluster ID
    * b = biclique answer
    * s = stats answer
    Returns
    -------
    B : cudf.DataFrame
        A dataframe containing the list of machines and features. This is not the full
        edge list, to save space. Since it is a biclique, it is easy to recreate the edges.
        B['id'] - a cluster ID (this is a one up number - up to k)
        B['vert'] - the vertex ID
        B['type'] - 0 == machine, 1 == feature
    S : cudf.DataFrame
        A dataframe of statistics on the returned info.
        This dataframe is (relatively) small, of size k.
        S['id'] - the cluster ID
        S['total'] - total machine count
        S['machines'] - number of machine vertices
        S['features'] - number of feature vertices
        S['bad_ratio'] - the ratio of bad machines / total machines
    """
    B = cudf.DataFrame()
    S = cudf.DataFrame()
    m_df = cudf.DataFrame()
    m_df['vert'] = m['src'].astype(np.int32)
    m_df['id'] = int(key)
    m_df['type'] = int(0)
    f_df = cudf.DataFrame()
    f_df['vert'] = f['dst'].astype(np.int32)
    f_df['id'] = int(key)
    f_df['type'] = int(1)
    if len(b) == 0:
        B = cudf.concat([m_df, f_df])
    else:
        B = cudf.concat([b, m_df, f_df])
    # now update the stats
    num_m = len(m_df)
    num_f = len(f_df)
    total = num_m  # + num_f
    num_bad = len(m.query('flag == 1'))
    ratio = num_bad / total
    # now stats
    s_tmp = cudf.DataFrame()
    s_tmp['id'] = key
    s_tmp['total'] = total
    s_tmp['machines'] = num_m
    s_tmp['bad_machines'] = num_bad
    s_tmp['features'] = num_f
    s_tmp['bad_ratio'] = ratio
    if len(s) == 0:
        S = s_tmp
    else:
        S = cudf.concat([s, s_tmp])
    del m_df
    del f_df
    return B, S
```
#### `ms_find_maximal_bicliques`
This is the main loop for the algorithm. It iteratively scans the list of features and enumerates the bi-cliques.
```
def ms_find_maximal_bicliques(df, k,
                              offset=0,
                              max_iter=-1,
                              support=1.0,
                              min_features=1,
                              min_machines=10):
    """
    Find the top k maximal bicliques
    Parameters
    ----------
    df : cudf.DataFrame
        A dataframe containing the bipartite graph edge list
        Columns must be called 'src', 'dst', and 'flag'
    k : int
        The max number of bicliques to return
        -1 means all
    offset : int
        Amount subtracted from the 'dst' vertex IDs before processing
        (restored on exit); 0 means no adjustment
    Returns
    -------
    B : cudf.DataFrame
        A dataframe containing the list of machines and features. This is not the full
        edge list, to save space. Since it is a biclique, it is easy to recreate the edges
        B['id'] - a cluster ID (this is a one-up number - up to k)
        B['vert'] - the vertex ID
        B['type'] - 0 == machine, 1 == feature
    S : cudf.DataFrame
        A dataframe of statistics on the returned info.
        This dataframe is (relatively small) of size k.
        S['id'] - the cluster ID
        S['total'] - total vertex count
        S['machines'] - number of machine vertices
        S['features'] - number of feature vertices
        S['bad_ratio'] - the ratio of bad machines / total machines
    """
    x = [col for col in df.columns]
    if 'src' not in x:
        raise NameError('src column not found')
    if 'dst' not in x:
        raise NameError('dst column not found')
    if 'flag' not in x:
        raise NameError('flag column not found')
    if support > 1.0 or support < 0.1:
        raise ValueError('support must be between 0.1 and 1.0')
    # this removes a prep step that offsets the values for CUDA processing
    if offset > 0:
        df['dst'] = df['dst'] - offset
    # break the data into chunks to improve join/search performance
    src_by_dst, num_parts = _partition_data_by_feature(df)
    # get a list of all the dst (features) sorted by degree
    f_list = _count_features(df, True)
    # create dataframes for the answers
    bicliques = cudf.DataFrame()
    stats = cudf.DataFrame()
    # create a dataframe to help prevent duplication of work
    machine_old = cudf.DataFrame()
    answer_id = 0
    iter_max = len(f_list)
    if max_iter != -1:
        iter_max = max_iter
    # loop over all the features (dst) or until k is reached
    for i in range(iter_max):
        # pop the next feature to process
        feature = f_list['dst'][i]
        degree = f_list['count'][i]
        # compute the index to this item (which dataframe chunk it is in)
        idx = int(feature / PART_SIZE)
        # get all machines that have this feature
        machines = get_src_from_dst(src_by_dst[idx], feature)
        # if this set of machines is the same as the last, skip this feature
        if not is_same_as_last(machine_old, machines):
            # now from those machines, hop out to the list of all their features
            feature_list = get_all_feature(src_by_dst, machines, num_parts)
            # summarize occurrences
            ic = _count_features(feature_list, True)
            goal = int(degree * support)
            # only keep dst nodes with at least the goal degree
            c = ic.query('count >= @goal')
            # need more than min_features features to make a biclique
            if len(c) > min_features:
                if len(machines) >= min_machines:
                    bicliques, stats = update_results(machines, c, answer_id, bicliques, stats)
                    answer_id = answer_id + 1
        # end - if same
        machine_old = machines
        if k > -1:
            if answer_id == k:
                break
    # end for loop
    # all done, reset data
    if offset > 0:
        df['dst'] = df['dst'] + offset
    return bicliques, stats
```
### Finding bi-cliques
Now that we have a fundamental understanding of how this works, let's put it into action.
```
%%time
bicliques, stats = ms_find_maximal_bicliques(
df=fpm_df,
k=10,
offset=1000000,
max_iter=100,
support = 1.0,
min_features=3,
min_machines=100
)
```
It takes roughly <font size="10">10 to 15 seconds</font> to analyze <font size="10">>42M</font> edges and output the top 10 most important bicliques.
Let's see what we got. We enumerated 10 bicliques. The worst of them had a failure rate of over 97%.
```
print(stats)
```
Let's look at one of the worst ones, which affected the most machines: over 57k.
```
bicliques.query('id == 1 and type == 1')['vert'].sort_values().to_pandas()
```
If you change `type` to `0`, you can retrieve a sample list of PCs that fit this particular pattern/bi-clique. This is useful, and sometimes helps narrow down a problem further by scanning the logs from those PCs.
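That lookup can be sketched with pandas on a toy `bicliques` frame with made-up vertex IDs (cuDF mirrors the pandas API here, so the calls are identical on the real results):

```python
import pandas as pd

# Toy results frame in the same layout as `bicliques`:
# id = cluster, vert = vertex ID, type = 0 for machine, 1 for feature
bicliques = pd.DataFrame({
    'id':   [1, 1, 1, 1],
    'vert': [42, 7, 1000003, 1000017],
    'type': [0, 0, 1, 1],
})

# type == 1 gives the features of biclique 1 ...
features = bicliques.query('id == 1 and type == 1')['vert'].sort_values()

# ... and type == 0 gives a sample of the affected machines
machines = bicliques.query('id == 1 and type == 0')['vert'].sort_values()

print(list(features))
print(list(machines))
```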
## Multinomial Naive Bayes
Multinomial Naive Bayes assumes that the features are generated from a simple multinomial distribution. The multinomial distribution describes the probability of observing counts across several categories, so Multinomial Naive Bayes is most appropriate for features that represent counts or count rates (discrete variables).
The multinomial distribution normally requires integer feature counts. In practice, however, fractional counts such as ``tf-idf`` can also work.
It works much like the Gaussian method, except that instead of modeling the data distribution with a normal distribution, we model it with a multinomial distribution.
http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html
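To make the decision rule concrete, here is a tiny from-scratch sketch: pick the class that maximizes log P(c) + Σᵢ xᵢ log θ(c, i), with θ estimated from word counts using Laplace smoothing (α = 1). All counts below are made up for illustration:

```python
import numpy as np

# Two classes, vocabulary of 3 words; rows are training documents (word counts)
X0 = np.array([[3, 0, 1], [2, 1, 0]])   # class 0 documents
X1 = np.array([[0, 2, 2], [1, 3, 1]])   # class 1 documents

def fit_class(Xc, alpha=1.0):
    # Laplace-smoothed log word probabilities log theta_{c,i}
    counts = Xc.sum(axis=0) + alpha
    return np.log(counts / counts.sum())

log_theta = np.vstack([fit_class(X0), fit_class(X1)])
log_prior = np.log(np.array([0.5, 0.5]))   # equal class priors

x_new = np.array([2, 0, 1])                # a new document's word counts
scores = log_prior + log_theta @ x_new     # log P(c) + sum_i x_i log theta_{c,i}
pred = int(np.argmax(scores))
print(pred)  # class 0: the new document reuses class 0's dominant words
```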
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn import metrics
import numpy as np
import pandas as pd
```
### Example 1: The 20 newsgroups
This dataset comprises around 18,000 newsgroup posts on 20 topics, split into two subsets: one for training (or development) and one for testing (or performance evaluation). The train/test split is based on messages posted before and after a specific date.
```
# Importing the data
from sklearn.datasets import fetch_20newsgroups
# Defining the categories
# Instead of using all 20 categories, we pick just 4 so it runs faster
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
# Building the training dataset with the categories selected above
train = fetch_20newsgroups(subset = 'train', categories = categories, shuffle=True)
len(train.data) # Checking the number of rows in the dataset
```
**Exploring the dataset**
```
# Viewing the classes of the first 10 target records
for t in train.target[:10]:
    print(train.target_names[t])
# Scikit-Learn stores the labels as an array of numbers, for speed
train.target[:10]
```
**Preprocessing the data**
```
from sklearn.feature_extraction.text import TfidfVectorizer
TfidfVect = TfidfVectorizer(stop_words='english')
# Turning the text collection (corpus) into a sparse matrix
X_train = TfidfVect.fit_transform(train.data)
y_train = train.target
# Checking the dimensions
print(X_train.shape, y_train.shape)
```
**Training the model**
```
MultiNB = MultinomialNB().fit(X_train, y_train)
```
**Testing the model**
```
# Building the test data
test = fetch_20newsgroups(subset = 'test', categories = categories, shuffle = True)
# Converting the test dataset into a TF-IDF sparse matrix
X_test = TfidfVect.transform(test.data)
# Running the prediction
predicted = MultiNB.predict(X_test)
```
**Evaluating the model**
```
# Model accuracy
np.mean(predicted == test.target)
# Metrics
print(metrics.classification_report(test.target, predicted, target_names = test.target_names))
# Confusion Matrix
pd.DataFrame(metrics.confusion_matrix(test.target, predicted),
             index=test.target_names,
             columns=test.target_names)
```
**Making predictions**
```
# Creating new data
docs_new = ['God is love', 'OpenGL on the GPU is fast', 'Computer brain']
# Turning it into a sparse matrix
x_new = TfidfVect.transform(docs_new)
# Running the prediction
predicted = MultiNB.predict(x_new)
# Showing the results
for doc, category in zip(docs_new, predicted):
    print('%r => %s' % (doc, train.target_names[category]))
```
### Example 2: SMS Spam Collection Dataset
A dataset containing 5,572 SMS messages labeled as ham (4,825) or spam (747).
https://www.kaggle.com/uciml/sms-spam-collection-dataset
```
spamData = pd.read_csv('datasets/spam.csv', encoding = "ISO-8859-1")
```
**Exploring the data**
```
# Checking the first rows
spamData.head()
# Checking for any null values in the dataset
spamData.isna().sum()
# Checking the ham and spam counts
spamData['v1'].value_counts()
```
**Preparing the data**
```
# Dropping the unnecessary columns
spamData.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], axis='columns', inplace=True)
spamData.shape
# Converting the target column from string to binary
spamData.replace(['ham', 'spam'], [0, 1], inplace=True)
# Checking the final dataset
spamData.head()
# Splitting the dataset into data and target
X_spam = spamData['v2']
y_spam = spamData['v1']
TfidfVect = TfidfVectorizer(stop_words='english')
# Turning the text collection (corpus) into a sparse matrix
X_spamTf = TfidfVect.fit_transform(X_spam)
# Splitting into train and test
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(X_spamTf, y_spam)
```
**Training the model**
```
MultiNB = MultinomialNB().fit(X_train_s, y_train_s)
```
**Testing the model**
```
pred = MultiNB.predict(X_test_s)
```
**Evaluating the model**
```
# Accuracy (rounded to 2 decimal places)
np.round(np.mean(pred == y_test_s), 2)
# Metrics
print(metrics.classification_report(y_test_s, pred))
```
Our model did very well: it achieved an overall accuracy of 97%, correctly classifying 100% of the SPAM cases and 96% of the HAM cases (hence 4% false positives).
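Those per-class figures come straight from the confusion matrix. A self-contained sketch with hypothetical labels (counts chosen to mirror the reported rates, not the notebook's actual split) shows the arithmetic:

```python
import numpy as np

# Hypothetical test labels: 0 = ham, 1 = spam
y_true = np.array([0] * 100 + [1] * 25)
y_pred = np.array([0] * 96 + [1] * 4 + [1] * 25)

# 2x2 confusion matrix: rows = true class, columns = predicted class
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

accuracy = np.trace(cm) / cm.sum()            # correct / all
ham_recall = cm[0, 0] / cm[0].sum()           # fraction of ham kept as ham
spam_recall = cm[1, 1] / cm[1].sum()          # fraction of spam caught
false_positives = cm[0, 1] / cm[0].sum()      # ham wrongly flagged as spam
print(accuracy, ham_recall, spam_recall, false_positives)
```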
# LCLS Archiver restore
These examples show how single snapshots and time series can be retrieved from the archiver appliance.
Note that the times must always be in ISO 8601 format, UTC time (not local time).
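For reference, one way to build a compliant timestamp with the standard library (format only; the PV names and times used below are unchanged):

```python
from datetime import datetime, timezone, timedelta

# A UTC instant in ISO 8601
t_utc = datetime(2020, 7, 9, 12, 1, 21, tzinfo=timezone.utc)
print(t_utc.isoformat())  # 2020-07-09T12:01:21+00:00

# The same instant expressed with an explicit -07:00 offset,
# matching the style of the timestamps in this notebook
pacific = timezone(timedelta(hours=-7))
print(t_utc.astimezone(pacific).isoformat())  # 2020-07-09T05:01:21-07:00
```

An explicit UTC offset makes the string unambiguous regardless of the machine's local time zone.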
```
%pylab --no-import-all inline
%config InlineBackend.figure_format = 'retina'
from lcls_live.archiver import lcls_archiver_restore
from lcls_live import data_dir
import json
import os
# This is the main function
?lcls_archiver_restore
```
## Optional: off-site setup
```
# Optional:
# Open an SSH tunnel in a terminal like:
# ssh -D 8080 <some user>@<some SLAC machine>
# And then set:
os.environ['http_proxy']='socks5h://localhost:8080'
os.environ['HTTPS_PROXY']='socks5h://localhost:8080'
os.environ['ALL_PROXY']='socks5h://localhost:8080'
```
# Restore known PVs
```
pvlist = [
    'IRIS:LR20:130:MOTR_ANGLE',
    'SOLN:IN20:121:BDES',
    'QUAD:IN20:121:BDES',
    'QUAD:IN20:122:BDES',
    'ACCL:IN20:300:L0A_ADES',
    'ACCL:IN20:300:L0A_PDES'
]
lcls_archiver_restore(pvlist, '2020-07-09T05:01:21.000000-07:00')
```
## Get snapshot from a large list
Same as above, but for processing large amounts of data.
```
# Get list of PVs
fname = os.path.join(data_dir, 'classic/full_pvlist.json')
pvlist = json.load(open(fname))
pvlist[0:3]
# Simple filename naming
def snapshot_filename(isotime):
    return 'epics_snapshot_' + isotime + '.json'
times = ['2018-03-06T15:21:15.000000-08:00']
lcls_archiver_restore(pvlist[0:10], times[0])
%%time
# Make multiple files
root = './data/'
for t in times:
    newdata = lcls_archiver_restore(pvlist, t)
    fname = os.path.join(root, snapshot_filename(t))
    with open(fname, 'w') as f:
        f.write(json.dumps(newdata))
    print('Written:', fname)
```
# Get history of a single PV
This package also has a couple functions for getting the time history data of a pv.
The first, `lcls_archiver_history`, returns the raw data.
```
from lcls_live.archiver import lcls_archiver_history, lcls_archiver_history_dataframe
t_start = '2020-07-09T05:01:15.000000-07:00'
t_end = '2020-07-09T05:03:00.000000-07:00'
secs, vals = lcls_archiver_history('SOLN:IN20:121:BDES', start=t_start, end=t_end)
secs[0:5], vals[0:5]
```
More convenient is to format this as a pandas DataFrame.
```
?lcls_archiver_history_dataframe
df1 = lcls_archiver_history_dataframe('YAGS:IN20:241:YRMS', start=t_start, end=t_end)
df1[0:5]
# Pandas has convenient plotting
df1.plot()
```
# Aligning the history of two PVs
The returned data will not necessarily be time-aligned. Here we will use Pandas' interpolate capabilities to fill in missing data.
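The same concat → interpolate → dropna pattern, as a self-contained sketch on two synthetic series with non-matching timestamps (no archiver access needed):

```python
import pandas as pd

# Two "PV histories" sampled at different times
a = pd.Series([1.0, 2.0, 3.0], name='A', index=pd.to_datetime(
    ['2020-01-01 00:00', '2020-01-01 00:02', '2020-01-01 00:04']))
b = pd.Series([10.0, 30.0], name='B', index=pd.to_datetime(
    ['2020-01-01 00:01', '2020-01-01 00:03']))

# Union of both time indexes, with NaN where a series has no sample
df = pd.concat([a, b], axis=1)

# Linearly fill the interior gaps, then drop rows still holding NaN
aligned = df.interpolate().dropna()
print(aligned)
```

The leading NaN in `B` (before its first sample) cannot be interpolated, so `dropna` removes that row; the remaining rows carry a value for both series.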
```
import pandas as pd
# Try another PV. This one was smoothly scanned
df2 = lcls_archiver_history_dataframe('SOLN:IN20:121:BDES', start=t_start, end=t_end)
df2
df2.plot()
# Notice that some data are taken at the same time, others are not
df4 = pd.concat([df1, df2], axis=1)
df4
# This will fill in the missing values, and drop trailing NaNs
df5 = df4.interpolate().dropna()
df5
# make a plot
DF = df5[:-2] # The last two points are outside the main scan.
k1 = 'SOLN:IN20:121:BDES'
k2 = 'YAGS:IN20:241:YRMS'
plt.xlabel(k1)
plt.ylabel(k2)
plt.title(t_start+'\n to '+t_end)
plt.scatter(DF[k1], DF[k2], marker='.', color='black')
```
# This easily extends to a list
```
pvlist = ['SOLN:IN20:121:BDES', 'YAGS:IN20:241:XRMS', 'YAGS:IN20:241:YRMS']
dflist = []
for pvname in pvlist:
    dflist.append(lcls_archiver_history_dataframe(pvname, start=t_start, end=t_end))
df6 = pd.concat(dflist, axis=1).interpolate().dropna()
df6
DF = df6[:-2] # Drop the last two that are unrelated to the scan
k1 = 'SOLN:IN20:121:BDES'
k2 = 'YAGS:IN20:241:XRMS'
k3 = 'YAGS:IN20:241:YRMS'
plt.xlabel(k1+' (kG-m)')
plt.ylabel('Measurement (um)')
plt.title(t_start+'\n to '+t_end)
X1 = DF[k1]
X2 = DF[k2]
X3 = DF[k3]
plt.scatter(X1, X2, marker='x', color='blue', label=k2)
plt.scatter(X1, X3, marker='x', color='green', label=k3)
plt.legend()
```
<a href="https://colab.research.google.com/github/jsedoc/ConceptorDebias/blob/master/Experiments/Conceptors/Gradient_Based_Conceptors.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import torch
import torch.nn.functional as F
from torch import nn, optim
import numpy as np
from numpy.linalg import inv
import matplotlib.pyplot as plt
%matplotlib inline
dtype = torch.float
device = torch.device('cuda:0')
torch.cuda.get_device_name(0)
```
# Gradient Approach To Conceptors
### Initializing Old Conceptors and Data
```
## Old conceptor implementations
def get_conceptor(x, alpha):
    N = x.shape[1] - 1
    cov = (x @ x.T) * (1/N)
    return cov @ inv(cov + (1/alpha**2) * np.eye(x.shape[0]))

def improved_conceptor(X, alpha=1):
    N = X.shape[1]
    means = np.mean(X, axis=1)
    stds = np.std(X, axis=1)
    X = X - means[:, None]
    X = X / stds[:, None]
    cov = (X @ X.T) * (1/N)
    return cov @ inv(cov + (alpha**(-2)) * np.eye(X.shape[0]))
## Generating Dataset
N = 100000
covariance = [[1, 0.7],
              [0.7, 1]]
X = np.random.multivariate_normal([0,0],covariance,N).T
plt.title('Data Cloud')
plt.scatter(X[0],X[1], c = 'm')
plt.show()
print("\n")
## Checking Old Conceptors
alpha = 1
C = improved_conceptor(X, alpha)
print(C)
print(np.linalg.norm(X - C @ X)**2/N + np.linalg.norm(C)**2)
```
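As a quick sanity check on the closed form above: since C = R(R + α⁻²I)⁻¹, C shares eigenvectors with the (normalized) covariance R, and each eigenvalue s of R maps to s/(s + α⁻²) — high-variance directions pass through (near 1) while low-variance ones are damped (near 0). A small NumPy verification on the same kind of correlated data cloud:

```python
import numpy as np
from numpy.linalg import inv, eigvalsh

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 100000).T

alpha = 1.0
R = (X @ X.T) / X.shape[1]
C = R @ inv(R + (alpha**-2) * np.eye(2))

s = eigvalsh(R)                  # eigenvalues of the covariance (ascending)
expected = s / (s + alpha**-2)   # predicted conceptor eigenvalues
assert np.allclose(np.sort(eigvalsh(C)), np.sort(expected))
print(expected)  # both values lie strictly between 0 and 1
```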
### Setup Gradient Based Conceptor
```
learning_rate = 0.1
x = torch.tensor(X.T, device = device, dtype = dtype)
y = x.clone().detach()
W = torch.rand(2,2, device = device, dtype = dtype, requires_grad = True)
for t in range(10001):
    y_pred = x.mm(W)
    l2_reg = W.norm(2).pow(2)
    loss = (1/N) * (y_pred - y).pow(2).sum() + l2_reg
    if t % 5000 == 0: print(t, ': ', loss.item())
    loss.backward()
    with torch.no_grad():
        W -= learning_rate * (1/(0.001*t + 1)) * W.grad
        W.grad.zero_()
print(W.data)
model = torch.nn.Sequential(torch.nn.Linear(2,2, bias = False))
model.cuda()
loss_fn = torch.nn.MSELoss(reduction = 'mean')
for t in range(15001):
    y_pred = model(x)
    # loss = loss_fn(y_pred, y)
    l2_reg = None
    for param in model.parameters():
        if l2_reg is None:
            l2_reg = param.norm(2).pow(2)
        else:
            l2_reg = l2_reg + param.norm(2).pow(2)
    loss = (1/N) * (y_pred - y).pow(2).sum() + l2_reg
    if t % 5000 == 0: print(t, ': ', loss.item())
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * (1/(0.001*t + 1)) * param.grad
for param in model.parameters():
    print(param.data)
# optimizer = optim.SGD(net.parameters(),lr = 0.01, weight_decay = 1)
## NOT conceptor
notx = model(x).detach_()
noty = y.clone().detach()
notmodel = torch.nn.Sequential(torch.nn.Linear(2,2, bias = False))
notmodel.cuda()
learning_rate = 0.1
for t in range(15001):
    y_pred = notmodel(notx)
    loss = (1/N) * (y_pred - noty).pow(2).sum()
    if t % 5000 == 0: print(t, ': ', loss.item())
    loss.backward()
    with torch.no_grad():
        for param in notmodel.parameters():
            param -= learning_rate * (1/(0.001*t + 1)) * param.grad
    notmodel.zero_grad()
for param in notmodel.parameters():
    print(param.data)
for param in notmodel.parameters():
    for param2 in model.parameters():
        print(param.mm(param2))
result = notmodel(x).detach().cpu()
plt.scatter(result[:,0],result[:,1])
plt.show()
```
### Nonlinear Conceptors Using Autoencoders (INCOMPLETE)
```
class Conceptor(nn.Module):
    def __init__(self, size):
        super(Conceptor, self).__init__()
        self.fc1 = nn.Linear(size, 2)
        self.fc2 = nn.Linear(2, 2)
        self.fc3 = nn.Linear(2, size)
    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
conceptor = Conceptor(2)
conceptor.cuda()
learning_rate = 0.1
for t in range(3001):
    pred_y = conceptor(x)
    l2 = None
    for param in conceptor.parameters():
        if l2 is None:
            l2 = param.norm(2).pow(2)
        else:
            l2 = l2 + param.norm(2).pow(2)
    loss = (1/N) * (pred_y - y).pow(2).sum() + l2/100
    if t % 300 == 0: print(t, ': ', loss.item())
    conceptor.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in conceptor.parameters():
            param -= learning_rate * param.grad
notC = Conceptor(2)
notC.cuda()
cX = conceptor(x).clone().detach()
for t in range(3001):
    pred_y = notC(cX)
    loss = (1/N) * (pred_y - y).pow(2).sum()
    if t % 300 == 0: print(t, ': ', loss.item())
    notC.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in notC.parameters():
            param -= learning_rate * param.grad
test = np.random.multivariate_normal([0, 0], [[1, 0], [0, 1]], N)
test = torch.tensor(test, device = device, dtype = dtype)
testC = conceptor(test).detach().cpu()
plt.scatter(testC[:,0],testC[:,1], c = 'm')
plt.show()
original = x.detach().cpu()
plt.scatter(original[:,0],original[:,1])
plt.show()
result = notC(x).detach().cpu()
plt.scatter(result[:,0],result[:,1])
plt.show()
what = original - result
plt.scatter(what[:,0],what[:,1])
plt.show()
```